Slow Performance when adding items on Sales Orders - related to really poor coding of spimIMSQtyAvailable

SOLVED

Hi,

We have been having issues in SAGE2016 (we recently installed the May updates to see if they would help) when entering items on Sales Orders.

The last suggestion we got from support seems to help alleviate the problem. 

The suggestion was to run 'recompile spimIMSQtyAvailable' on the SQL Server when we have the issue.

This gives us temporary relief until the plan in the procedure cache is bad again. 
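For reference, the workaround support gave us is a one-liner run in the application database (schema assumed here to be dbo):

```sql
-- Flag the proc so SQL Server throws away the cached plan and
-- compiles a fresh one on the next call.
EXEC sp_recompile N'dbo.spimIMSQtyAvailable';
```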

In looking at the stored procedure, it does a tremendous amount of looping with cursors that seems redundant. The stored procedure was written in 1998 and has had no updates for 20 years. 

What is the right way to get this fixed so that it isn't a performance bottleneck?

This site (https://www.rklesolutions.com/blog/sage-500-process-replenishment-orders) has a great suggestion to change the overflow checks, but we need this as base code from SAGE.

This SP needs to be rewritten with modern, set-based methods that are far more efficient.

  • +1 verified answer

    Hi Marty, 

    We run into this challenge with some stored procedures in 500 that are native code. If you have already traced the problem to spimIMSQtyAvailable, you may know some of this, but here is the process I use when approaching a performance problem like the one you are describing. 

    1. I review the execution plans in the plan cache for the stored proc in question. I like the free Plan Explorer tool from SQL Sentry, as it shows the relative cost of each section of a proc as a percentage, which helps me decipher where to focus. 
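    As a starting point before opening Plan Explorer, something like this sketch pulls the cached plan and per-call averages for the proc straight from the DMVs:

    ```sql
    -- Rough sketch: cached-plan XML plus aggregate stats for one proc,
    -- so you can open the plan in Plan Explorer or SSMS.
    SELECT
        OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
        ps.execution_count,
        ps.total_worker_time  / ps.execution_count AS avg_cpu_us,
        ps.total_elapsed_time / ps.execution_count AS avg_elapsed_us,
        qp.query_plan
    FROM sys.dm_exec_procedure_stats AS ps
    CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
    WHERE OBJECT_NAME(ps.object_id, ps.database_id) = N'spimIMSQtyAvailable';
    ```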

    2. If I can add indexes to help the offending areas of code, I do that first, as it doesn't change the code base. However, upgrades can drop those indexes, so I document every index I add for repeatability. 
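    The documented scripts I keep follow this general shape; the index name, table, and columns below are invented placeholders, not a recommendation:

    ```sql
    -- Illustrative only: table and column names are made up.
    -- Guarding with an existence check makes the script safe to
    -- re-run after an upgrade drops custom indexes.
    IF NOT EXISTS (SELECT 1 FROM sys.indexes
                   WHERE name = N'IX_timInvtTran_custom_example'
                     AND object_id = OBJECT_ID(N'dbo.timInvtTran'))
    BEGIN
        CREATE NONCLUSTERED INDEX IX_timInvtTran_custom_example
            ON dbo.timInvtTran (WhseKey, ItemKey)
            INCLUDE (TranQty);
    END;
    ```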

    3. If the code needs to be changed and it is native, I again document well what I am doing and test in a test environment. This is pretty safe, as the code won't get touched until the next upgrade. Again, document. 

    4. I provide the code to Sage with a recommendation to review and see if they will implement it. Their challenge is the regression testing they need to do to ensure the change is safe in all circumstances. 

    Hope that helps, 

    John 

  • 0 in reply to jlputtman

    Hi John,

    Thanks for the response and information.

    I read your post on this and would like to add the try/catch block where spimIMSQtyAvailable checks for an overflow, starting about line 855. We use lots, and that is the block it spends most of its time in: those three overflow-check statements are 78% of the cost on the plan.

    Right now we are running sp_recompile against the proc every minute, which clears the cached plan and seems to band-aid this. The execution plan appears to go stale very fast, likely due to the number of parameters the sp takes. I have a ticket open with SAGE on this too.

    We added a bunch of suggested indexes to try to speed things up, and those helped a little. If we do nothing, the system runs fine for a few hours and then gets really slow when entering items on sales order lines. If we set spimIMSQtyAvailable to recompile, it speeds up again.
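    For anyone following along, the RKL suggestion is roughly this shape. The variable names here are invented for illustration, not the proc's actual ones:

    ```sql
    -- Hypothetical sketch of the TRY/CATCH approach from the RKL article:
    -- attempt the arithmetic and trap overflow (error 8115) instead of
    -- running expensive pre-check queries first.
    DECLARE @QtyOnHand    decimal(16, 8) = 0,
            @QtyCommitted decimal(16, 8) = 0,
            @QtyAvail     decimal(16, 8);

    BEGIN TRY
        SET @QtyAvail = @QtyOnHand - @QtyCommitted;
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 8115   -- arithmetic overflow
            SET @QtyAvail = 0;     -- fallback; the base code may expect something else
        ELSE
            THROW;
    END CATCH;
    ```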

    Thanks,

    Marty

  • 0 in reply to Marty D

    Hi Marty, 

    You will notice that the code has a repeating pattern: three overflow checks, then the select into the local variables. I have not traced how the Sales Order screen passes data into the proc, but I would expect it doesn't care about lot quantity available, just quantity available in the warehouse. If you want to make sure you are changing the code in the right place, you can run a Profiler trace or create an Extended Events session scoped to a specific user running Sales Order. That will walk you through to the code that is actually being hit; most likely it is this exact place.  
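    An Extended Events session scoped to one user might look like this sketch; the session name, login, and file name are placeholders:

    ```sql
    -- Capture every statement completed inside procs for a single test
    -- user, so you can see exactly which code path order entry hits.
    CREATE EVENT SESSION TraceSOEntry ON SERVER
    ADD EVENT sqlserver.sp_statement_completed (
        ACTION (sqlserver.sql_text, sqlserver.username)
        WHERE (sqlserver.username = N'DOMAIN\TestUser')
    )
    ADD TARGET package0.event_file (SET filename = N'TraceSOEntry.xel');

    ALTER EVENT SESSION TraceSOEntry ON SERVER STATE = START;
    ```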

    If the code change is a bit out of your comfort zone, I can help further through RKL eSolutions. If you have a reseller, they may have a performance tuner on staff, or they can contact us and we can assist. 

    Kind Regards, 

    John

  • 0 in reply to Marty D

    Hi John,

    We're having a similar issue with our system. We also lot track and have noticed a similar pattern in that SP. What indexes did you add? We are trying the recompile band-aid also.

  • 0 in reply to cyoungren

    Hi,

    The biggest impact came from recompiling the stored procedure: we run sp_recompile against it every minute via a SQL Agent job. We tested adding WITH RECOMPILE directly to the stored procedure, and that was not the best option.
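    The job setup is roughly this sketch; the job and schedule names are placeholders, and the database name assumes the default Sage 500 application database:

    ```sql
    -- Sketch of an every-minute recompile job in SQL Agent.
    USE msdb;
    EXEC dbo.sp_add_job @job_name = N'Recompile spimIMSQtyAvailable';
    EXEC dbo.sp_add_jobstep
        @job_name      = N'Recompile spimIMSQtyAvailable',
        @step_name     = N'Recompile',
        @subsystem     = N'TSQL',
        @database_name = N'mas500_app',  -- your Sage 500 application DB
        @command       = N'EXEC sp_recompile N''dbo.spimIMSQtyAvailable'';';
    EXEC dbo.sp_add_schedule
        @schedule_name        = N'Every minute',
        @freq_type            = 4,  -- daily
        @freq_interval        = 1,
        @freq_subday_type     = 4,  -- units of minutes
        @freq_subday_interval = 1;
    EXEC dbo.sp_attach_schedule
        @job_name      = N'Recompile spimIMSQtyAvailable',
        @schedule_name = N'Every minute';
    EXEC dbo.sp_add_jobserver @job_name = N'Recompile spimIMSQtyAvailable';
    ```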

    We had SAGE change the SP for us and we are running with the overflow check commented out.  

    As for indexes, we have added 9 indexes to the DB. We used a script to determine which indexes to add based on cost, tested them, and added them once validated. There are a couple of blogs I have seen on how to identify potential missing indexes; these two might help you find what is missing.

    https://blog.sqlauthority.com/2011/01/03/sql-server-2008-missing-index-script-download/

    https://basitaalishan.com/2013/03/13/find-missing-indexes-using-sql-servers-index-related-dmvs/
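    A minimal version of the missing-index query from those posts looks like this; treat the output as candidates to test, not indexes to create blindly:

    ```sql
    -- Rank missing-index suggestions from SQL Server's DMVs by
    -- estimated impact (seeks * cost * impact%).
    SELECT TOP (25)
        mid.statement AS table_name,
        mid.equality_columns,
        mid.inequality_columns,
        mid.included_columns,
        migs.user_seeks,
        migs.avg_user_impact,
        migs.user_seeks * migs.avg_total_user_cost
            * (migs.avg_user_impact / 100.0) AS estimated_improvement
    FROM sys.dm_db_missing_index_group_stats AS migs
    JOIN sys.dm_db_missing_index_groups AS mig
        ON migs.group_handle = mig.index_group_handle
    JOIN sys.dm_db_missing_index_details AS mid
        ON mig.index_handle = mid.index_handle
    ORDER BY estimated_improvement DESC;
    ```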

     

  • 0 in reply to Marty D

    Marty,

    Thanks for the info - much appreciated

    Chris

  • 0 in reply to cyoungren

    Hi Chris, 

    Not surprised that you are needing to do tuning in this area. The main challenge is that this feature is loop based, and none of the query plans are available in the cache after you run replenishment. 

    What I did to tune a specific system was to trace a complete run, including the SP:StmtCompleted and SQL:BatchCompleted events. I then saved that to a trace table and ran a select statement that aggregated the duration, grouping by the first 50 characters or so of the TextData. That helped me determine the exact statements being called on each iteration of the loop and their impact on the overall run time. 
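    The aggregation is a simple group-by; this sketch assumes the trace was saved to a table named dbo.ReplenTrace with the standard TextData/Duration columns:

    ```sql
    -- Total and average duration per distinct statement prefix.
    -- TextData is ntext in trace tables, so convert before LEFT().
    SELECT
        LEFT(CONVERT(nvarchar(4000), TextData), 50) AS stmt_prefix,
        COUNT(*)      AS executions,
        SUM(Duration) AS total_duration_us,  -- microseconds in trace tables
        AVG(Duration) AS avg_duration_us
    FROM dbo.ReplenTrace
    GROUP BY LEFT(CONVERT(nvarchar(4000), TextData), 50)
    ORDER BY total_duration_us DESC;
    ```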

    The queries within the proc(s) are pretty straightforward. Adding indexes to help those will give you a little performance boost. The biggest boost I have seen so far has been getting rid of the overflow checks. 

    I've got a couple of customers who rely on replenishment. I've been itching to rewrite this set-based using windowing functions. Anybody else interested?
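    Not the actual proc, just an illustration of the set-based idea: a running quantity per item/warehouse computed with a window function replaces a row-by-row cursor loop. The table and column names here are invented:

    ```sql
    -- One pass over the transaction table instead of a cursor iteration
    -- per row; the window frame gives the running available quantity.
    SELECT
        ItemKey,
        WhseKey,
        TranDate,
        TranQty,
        SUM(TranQty) OVER (
            PARTITION BY ItemKey, WhseKey
            ORDER BY TranDate
            ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
        ) AS running_qty_avail
    FROM dbo.InvtTranExample;
    ```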

    Cheers, 

    John

