
robert fuschetto


Question Please

We have a process that runs in this manner: an HTML page with 2 buttons.

Step 1 - Get institutional data for a desired period of time. The user is prompted, and a HOLD file is created for them in foccache covering the entire business for that specific period. This takes up to 2 minutes.

Step 2 - The user is then prompted for more specific criteria, with the ability to run charts, reports, etc. Each report/chart pops up instantaneously.

Why 2 steps?

Many times a user will want to run multiple reports for their Division / Dept / specific personnel, etc. We have found that grabbing info for the entire business for their chosen period, then letting them apply more specific criteria to a subset of that data, is more efficient than having them run the whole thing over and over. Sounds crazy, but that is what we found. It's been this way for years.


Who wants to wait 2 minutes for the first step when users look at the month just closed 80% of the time?


The idea: overnight, create a hold file for the prior month (the month that just closed). Replumb the dashboard to reference this HOLD file when they want Prior Month. The data is already there, so there is no more waiting for Step 1 to run. If they want some other period, then they must wait while the data is gathered at that time, as the process works today.


I would think we could set up a fex to create the hold file for the prior period. I assume ReportCaster could fire it off each morning. But where do we write the HOLD to? Is there a special place to park the hold file so it is available to all users? If so, it feels like we need to add a PRIOR PERIOD button to the HTML page that runs Step 2 right off the overnight file, with no processing wait! If they want any other period, the process runs as it does today.


You can HOLD the file wherever you want to. Ideally, use a folder which is available to everyone (or at least to the group who will access the data).

That said, I would even suggest/recommend, if possible, creating a DBMS table instead of a HOLD file.

As an example for an MS-SQL DBMS (source and table names here are placeholders):

TABLE FILE MYSOURCE
SUM ...
BY ...
ON TABLE HOLD AS MYTABLE FORMAT SQLMSS DROP
END

(The DROP option above is used when you want a fresh/cleaned/truncated table each time, which is great if you rebuild the data each day.)

That way you store it in the same structure/system as your other possible source data.

It will also be managed by your backup system, and it becomes available for much more than just WF.

Also, you can create the data for much more than just the last closed month. You can have the whole year or the last three.

With proper indexes, selection/data retrieval will be fast enough that you end up with step 2 only, no matter the selected period.

Then you create a master file that points to that new DB table and adjust your reports accordingly.
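A minimal sketch of such a master file, assuming a hypothetical table MYTABLE with period, division and amount columns (in practice you would generate this with Create Synonym rather than write it by hand):

```
FILENAME=MYTABLE, SUFFIX=SQLMSS, $
  SEGMENT=MYTABLE, SEGTYPE=S0, $
    FIELDNAME=SALE_YM,  ALIAS=SALE_YM,  USAGE=YYM,   ACTUAL=DATE, $
    FIELDNAME=DIVISION, ALIAS=DIVISION, USAGE=A20,   ACTUAL=A20,  $
    FIELDNAME=AMOUNT,   ALIAS=AMOUNT,   USAGE=D12.2, ACTUAL=D8,   $
```

A paired access file (.acx) points the synonym at the physical table and connection; Create Synonym generates both files together.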



We do have many staged files using SQL. Still, millions of rows slows things down. There are likely issues with the network setup as well. Example: our DEV environment takes 60s for the same thing that runs in TEST/PROD in 2m, and it points to the exact same data, not a whittled-down repository. Further, as crazy as it sounds, WF has access to some of the required data that, alas, we cannot get at from SQL Server. So many battles to fight with IS, so little time to fight them.


We are on 8207v27. What is format XFOCUS?


WebFOCUS has an old internal data format, FOCUS. It was updated to XFOCUS with more capabilities. It is convenient because you don't need a FILEDEF for the hold file's data file when you reference it in a report. If you run the code I provided, you will see permanent files:




which would be available to everyone.
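As an illustration, the nightly job Robert describes could be a fex along these lines, assuming a shared app folder named baseapp and hypothetical source and field names:

```
-* &PRIORYM is assumed to be set earlier in the fex to the prior year-month
TABLE FILE INSTDATA
SUM AMOUNT
BY DIVISION
BY DEPT
WHERE PERIOD EQ '&PRIORYM'
ON TABLE HOLD AS baseapp/PRIORMONTH FORMAT XFOCUS
END
```

ReportCaster could then run this fex on a daily schedule, and a PRIOR PERIOD button would simply run Step 2 against baseapp/PRIORMONTH.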

Documentation: wf8207crlang.pdf (informationbuilders.com)

If it is workable for you, the solution provided by MartinY could be even better.


Robert, whether you go with something like my option or David's, I think you should also limit the amount of data available at the first level.

Working with millions of rows is not a problem. The problem is making them accessible to users.

So you would do better to build a DataMart (summary tables) where only data aggregated at the year, month or week level is available, and swap from one table to the other depending on the user's request.

You can have one year table, one year-month, one store-year, one store-year-month, where each will only contain a few hundred rows at worst, for example, and have all those tables processed overnight once a day.

That will improve your user experience, and when they ask for detail (e.g. by invoice), they should be aware that they will have to wait longer for the result.
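The overnight build of those summary tables could be sketched like this, assuming a hypothetical source INSTDATA and target table names, with one request per summary level and each table rebuilt via DROP:

```
TABLE FILE INSTDATA
SUM AMOUNT
BY YEAR
ON TABLE HOLD AS YR_SUMMARY FORMAT SQLMSS DROP
END
TABLE FILE INSTDATA
SUM AMOUNT
BY YEAR
BY MONTH
ON TABLE HOLD AS YRMTH_SUMMARY FORMAT SQLMSS DROP
END
```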


Hello Robert,

To me, it sounds like you need to work the question backwards. If you have a time requirement, then ask: how many records can you read in 2 minutes, or whatever your time requirement is?

I would design the sample output file. If you can make it hierarchical (instead of a single segment), then you can save some significant space. Design a master file of what you want to hold and where you want to hold it. Then you can use the server feature to load sample data: see how fast 500K rows are, how fast 1M rows, 2M rows. With a good database design, you can get good performance out of a large file. A FOCUS/XFOCUS file has the advantage of reading blocks of data. I would never make 2M rows a flat hold file.

I think what you want to do is common. Make a hold file in an app folder, simply put it in the app folder for the domain whose users you want to read it: ON TABLE HOLD AS /myfolder/myfilename FORMAT XFOCUS.

You can use a multi-verb request to make a multi-segment FOCUS hold file. Or, you can design a master and use MODIFY to load the multi-segment FOCUS hold file. But your real question, to me, is performance, and that will be determined by the design of the master file.

Design a good master file, and you'll get good performance. Based on your questions, I would make a multi-segment XFOCUS file. Design your master file first and use sample data to see if you get the performance you need. If you don't, then re-design the master file. Where to put the data is the easy part.
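For illustration, a multi-verb request along these lines (source, folder and field names hypothetical) builds a two-segment XFOCUS file, with the first verb's BY fields forming the parent segment:

```
TABLE FILE INSTDATA
SUM AMOUNT
BY DIVISION
SUM AMOUNT
BY DIVISION
BY DEPT
ON TABLE HOLD AS baseapp/SUMMARY FORMAT XFOCUS
END
```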


Yes, a better datamart. The issue is I do not have SQL access to the data; I only have WF to work with. Are you suggesting that I use WF to create a FORMAT=FOCUS file? That is what we do now. It takes 2 minutes, then things fly off this file. So I want to schedule this 2-minute job to run each night. Then the data is staged for 80% of what the users need.

JNC is talking a little over my head. He/she may be right, but we cannot reinvent the wheel at this point.

So the FOCUS file can be built anywhere? I thought it had to be in a particular folder.

