

Kristi Carter


I have been troubleshooting for hours on this issue. I am trying to do the following and something so simple is taking around 10 minutes to run. It is not erroring, just taking an extreme amount of time.

I have an Excel file with a data set that I uploaded and turned into two Master Files. I also have a SQL query that I uploaded and turned into a Master File. I am now trying to join the Master Files together, but nothing I have tried improves the performance.

I am getting the errors of:

FOC2590 - AGGREGATION NOT DONE FOR THE FOLLOWING REASON:

FOC2599 - NON-SQL SEGMENT IN HIERARCHY (OTHER INTERFACE PRESENT)

Everything I have found while searching on these errors, I have tried, but I am still unable to make it run quicker. What do I need to do to make this work faster?

Is this even feasible without huge performance issues when joining a regular Master File up to a SQL-created Master File?

I can provide some sample code if needed. Below is the cluster Master File that joins all three Master Files together.

FILENAME=CLU_RMA_USER_FEES, $
SEGMENT=RMA_USER_FEES, CRFILE=INDUSTRY/RMA/RMA_USER_FEES, CRINCLUDE=ALL, $
SEGMENT=RMA_USER_FEES_REFERENCE, SEGTYPE=KU, PARENT=RMA_USER_FEES,
  CRFILE=INDUSTRY/RMA/RMA_USER_FEES_REFERENCE, CRINCLUDE=ALL, CRJOINTYPE=INNER,
  JOIN_WHERE=RMA_USER_FEES.PRODUCT EQ RMA_USER_FEES_REFERENCE.PRODUCT;, $
SEGMENT=508H_INVOICE_VERIFICATION_SQL, SEGTYPE=KU, PARENT=RMA_USER_FEES,
  CRFILE=INDUSTRY/HOLD_FILES/508H_INVOICE_VERIFICATION_SQL, CRINCLUDE=ALL, CRJOINTYPE=INNER,
  JOIN_WHERE=RMA_USER_FEES.REINS_YR EQ 508H_INVOICE_VERIFICATION_SQL.REINS_YR AND RMA_USER_FEES.PRODUCT EQ 508H_INVOICE_VERIFICATION_SQL.PRODUCT;, $


If you search your results.txt for SELECT, you'll see that you have multiple SELECTs and UNION ALLs. This indicates that you are scanning the data many times and pushing the results to the Reporting Server. The output is also in a Document (AHTML). I don't see a JOIN, and I'm not sure how the Master File was created. Is it a cluster master that has all the joins in it?

The SQL query is attached. It uses a CTE at the beginning to temporarily store a data set that the UNION ALLs then use for everything.

505h_InvoiceVerification_SQL.txt (5.0 KB)

The report itself - yes, it is running as an AHTML file.

Yes, this is a cluster Master File that joins the two Master Files created from an Excel document; the third join is to the Master File created from the SQL query.

Attached is the text on the Cluster showing the join. (File: clu_rma_user_fees.txt)

Thanks,

Kristi

clu_rma_user_fees.txt (653 Bytes)


A couple of things to consider:

1. Take the SQL, paste it directly into your database, and run it there. Then check the timing.

2. Investigate the aggregation errors:

FOC2590 - AGGREGATION NOT DONE FOR THE FOLLOWING REASON:
FOC2599 - NON-SQL SEGMENT IN HIERARCHY (OTHER INTERFACE PRESENT)

These errors say aggregation is not being passed to SQL because there is a non-SQL segment in the hierarchy. You need to find which part of your code is causing that. Sometimes, when the request is converted to SQL, a DEFINE field cannot be translated and the aggregation is not pushed down; you need to find which part of the code is doing that.

3. Try outputting the report as Excel instead of AHTML or HTML for testing.


Are all three tables in SQL Server, or are two of them in Excel and one in SQL Server? The best way to optimize is to put everything in SQL Server, since Excel isn't a real database and doesn't have indexes or primary keys. Joining tables from different data sources causes optimization to be turned off. If you're not allowed to load the data into SQL Server, then consider creating FOCUS files. They aren't ideal, but they are better than reading and joining Excel files.

I have actually already taken the SQL portion, created a HOLD file from it, and joined that to the two other Master Files, and it works fine. The reason I don't want to go that route is that it adds a step: the HOLD file always has to be scheduled to run before the report can be used. I was trying to eliminate that step.
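For reference, the workaround described here — materializing the SQL side into a HOLD file first, then joining to it — might be sketched like the following. This is only an illustration, not the poster's actual code: the measure field INVOICE_AMT and the hold name INVHOLD are hypothetical; REINS_YR and PRODUCT come from the cluster Master File earlier in the thread.

-* Step 1 (scheduled job): materialize the SQL-based master into an indexed HOLD file
-* (INVOICE_AMT is an assumed field name)
TABLE FILE 508H_INVOICE_VERIFICATION_SQL
PRINT REINS_YR PRODUCT INVOICE_AMT
ON TABLE HOLD AS INVHOLD FORMAT FOCUS INDEX PRODUCT
END

-* Step 2 (report): join the Excel-based master to the HOLD file
JOIN PRODUCT IN RMA_USER_FEES TO PRODUCT IN INVHOLD AS J1
TABLE FILE RMA_USER_FEES
SUM INVOICE_AMT
BY PRODUCT
END

The drawback, as noted, is that Step 1 has to be scheduled and finish before the report runs.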

The timing of the SQL is up to 30 seconds. Longer than I would like, but still acceptable for the purpose of this report.

I am starting fresh. I have joined all three into one Master File. I have not changed any aliases or data types, added any DEFINEs, etc. I then created a new HTML report and immediately hit the issue while developing it: as soon as I add a field that comes from the SQL portion of the Master File, it takes minutes for the field to be added to the report.


Kristi

I did test this. In the request that joins your Excel tables together, use ON TABLE HOLD AS whateveryouwant FORMAT SQLMSS, then join that table to your real SQL table to get your report.

The last step is to clean up the temporary table with this code:

ENGINE SQLMSS SET DEFAULT_CONNECTION webfocus
SQL SQLMSS PREPARE SQLOUT FOR
DROP TABLE whateveryouwant
END

There are several caveats to making this work. You need to look at the Access File of the SQL table and use that same connection; in my case, the connection was named webfocus. The SQL Server user ID associated with that connection needs the ability to create and drop tables in the database. The default database for that SQL Server user ID needs to be the database where your SQL table resides, or you could end up with another performance issue. The WebFOCUS adapter connection can specify a default database as well.
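Putting those steps together, the end-to-end flow might look something like the sketch below. The connection name webfocus, the hold name whateveryouwant, and the join fields are taken from this thread; the surrounding requests are assumptions, not tested code.

-* Step 1: in the request that joins the two Excel-based masters,
-* hold the result as a real SQL Server table
JOIN PRODUCT IN RMA_USER_FEES TO PRODUCT IN RMA_USER_FEES_REFERENCE AS J0
TABLE FILE RMA_USER_FEES
PRINT REINS_YR PRODUCT
ON TABLE HOLD AS whateveryouwant FORMAT SQLMSS
END

-* Step 2: join the new SQL Server table to the real SQL table and report
JOIN REINS_YR AND PRODUCT IN whateveryouwant
  TO REINS_YR AND PRODUCT IN 508H_INVOICE_VERIFICATION_SQL AS J1
TABLE FILE whateveryouwant
PRINT *
END

-* Step 3: drop the temporary table on the same adapter connection
ENGINE SQLMSS SET DEFAULT_CONNECTION webfocus
SQL SQLMSS PREPARE SQLOUT FOR
DROP TABLE whateveryouwant
END

Because both sides of the Step 2 join now live in SQL Server, the adapter can push the join and aggregation down to the database instead of doing them on the Reporting Server.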

Let me know if you need any additional help.


  • 2 weeks later...

I know this may seem a little counter-intuitive, but I have had great success performance-wise by extracting the files, HOLDing them in FOCUS file format (I usually specify the HOLD format as XFOCUS, which uses larger file blocks), and specifying an index on the HOLD file column. I suspect that you might be doing extensive sequential reads between the JOINed files because you don't have an index.

Like I said, this approach is somewhat counter-intuitive because you have the added step of building three HOLD files in XFOCUS format, each with an included index. But if you're getting horrible response time and waiting 10x as long for the final answer set, it can make sense to spend 1x up front extracting the data to indexed HOLD files and then wait only 1x for the final answer set.

The syntax would read something like:

...
ON TABLE HOLD AS FILE01 FORMAT [FOCUS|XFOCUS] INDEX COL_01 [COL_02] [COL_03]
END

As I remember, the column names have to be at most either 12 or 16 characters; I forget which. Also, if you need an index on a composite (two columns, for example), you'll need to build the composite in a DEFINE or COMPUTE field. If you want an index on a Date/Part No combination, for example, you might need to convert the date to alpha format and concatenate it with the alpha part number. You'll need to do something similar for extract file #2 and/or extract file #3.
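The composite-key idea might be sketched like this. All file names, field names, and formats here are hypothetical, and EDIT is used in its no-mask form to convert a numeric date to alpha:

-* Build a composite alpha key from a date and a part number, then index it
DEFINE FILE SOURCE1
COMPKEY/A20 = EDIT(ORDER_DATE) | PART_NO;
END
TABLE FILE SOURCE1
PRINT ORDER_DATE PART_NO QUANTITY COMPKEY
ON TABLE HOLD AS FILE01 FORMAT XFOCUS INDEX COMPKEY
END

The same COMPKEY DEFINE would then be repeated against the other extract files so the JOIN can use the indexed key on both sides.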

