
Common Performance Issues In X++

Written By: Feba Thomas


September 26, 2019 7-Minute read

Introduction:
With every passing day there is growth in data volume, user load, distribution, and more. Multiple issues spring up as a result of this growth, along with many other factors that are difficult to gauge at an early stage. Ensuring that the key scenarios in your application undergo performance testing helps isolate and fix some of these issues in advance. The remaining issues are mostly unknown and tend to fall under the categories below.

This article lists the most common performance issues observed in Microsoft Dynamics AX and possible steps to avoid them.

1. Table group and Cache lookup settings on tables

Common Issue:
Incorrect caching leads to unnecessary calls to the database, resulting in too many round trips to the server.

Description of Suggestion/Fix:
The three-tier architecture of Dynamics AX allows you to define caching on the AOS and the client. Improper caching is one of the most common root causes of poor performance. Make sure the following rules are followed:

All your customized tables have the relevant table group defined in the AOT. For example, master data should use the “Main” group and transactional data should be set to “Transaction” (header or line).

Don’t leave Cache Lookup at the default setting “None”; set the correct value for your table, such as “EntireTable” for parameter tables and “Found” for the Main group.

Review the cache limit for every table group in the server performance settings. For best practices on setting up Cache Lookup on tables, refer to Cache Lookup Best Practice.
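As a minimal sketch (assuming a hypothetical MyParameters table with Cache Lookup set to “EntireTable”), the classic find pattern benefits directly from this setting:

// Hypothetical MyParameters table with CacheLookup = EntireTable.
// After the first read, repeated calls are served from the AOS table cache
// instead of generating another round trip to the database.
static MyParameters find()
{
    MyParameters parameters;

    select firstonly parameters;
    return parameters;
}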

2. Use of display methods and display method caching

Common Issue:
Slow opening of forms with large data volumes

Description of Suggestion/Fix:
This is a common root cause of slow opening or refreshing of form objects. It is quite easy to confirm this root cause by removing the display method and running the Trace Parser tool to compare the two traces. In a scenario where a display method is used to display a column on a grid, the display method is executed for each row that is populated in the grid, which makes the overhead of the method quite expensive. The performance of display methods can be improved by caching them if they are calculated on the AOS. Caching display methods can also improve performance when records are transferred from the server to the client. The value of all cached methods is set when data is fetched from the back-end database. In addition, the value is refreshed when the reread method is called on the form data source.
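For illustration, here is a sketch of the two usual ways to cache a display method in AX 2012. MyTable, its Qty and Price fields, and the totalAmount method are hypothetical; SysClientCacheDataMethodAttribute and FormDataSource.cacheAddMethod are the standard mechanisms referred to above.

// Option 1: decorate the display method on the table so the kernel caches it.
[SysClientCacheDataMethodAttribute(true)]
public display AmountCur totalAmount()
{
    return this.Qty * this.Price;   // hypothetical Qty and Price fields
}

// Option 2: register the method for caching in the form data source's init method.
public void init()
{
    super();
    this.cacheAddMethod(tableMethodStr(MyTable, totalAmount));
}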

3. Caution on using Nested while loops for highly expanding tables

Common Issue:
Scenario Slowness

Description of Suggestion/Fix:

Review the need for nested loops. In particular, getting a count or any other aggregate can be done with a single SQL query instead of running through the table row by row. For example, replace the following code:

Table1 table1;
int incCount;

while select table1
    where table1.Field1 == "1001"
{
    incCount++;
}

WITH

select count(RecId) from table1
    where table1.Field1 == "1001";
incCount = any2int(table1.RecId);
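In the same spirit, a genuinely nested while select over two related tables (Table1 and Table2 here are only illustrative) can usually be rewritten as a single joined select, as sketched below:

Table1 table1;
Table2 table2;

// Nested loops: the inner query is executed once per Table1 row.
while select table1
{
    while select table2
        where table2.Field1 == table1.Field1
    {
        // process table2
    }
}

// Joined equivalent: one statement, evaluated server-side by SQL Server.
while select table1
    join table2
        where table2.Field1 == table1.Field1
{
    // process table1 and table2 together
}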

4. SQL Queries on large tables/highly expanding tables without index optimization

Common Issue:
Scenario Slowness

Description of Suggestion/Fix:
We often write complex queries/stored procedures to achieve complex functionality in reports or other business processes. The impact of an index mostly goes unnoticed at the development stage because there is not much data in the table. However, especially when transactional tables are involved (which grow rapidly in production), it will have a heavy impact on performance at real data volumes. Such queries should be index optimized beforehand.

When designing and implementing indexes, the following guidelines are helpful:

  • Use indexes on tables with numerous rows, on columns that are used in the WHERE clause of queries or in table joins, and on columns used in ORDER BY and GROUP BY queries.
  • Avoid infrequently used indexes on frequently updated columns. In addition, avoid having many indexes on a table that is frequently updated. Otherwise, you unnecessarily increase the insert and update times of your queries. To improve performance, minimize the total width of the indexed columns.
  • Use clustered and nonclustered indexes appropriately. Understand the purpose of each and choose the correct type for your scenario.
  • Use a covering index to reduce the query execution time of frequently used statements. A covering index is a nonclustered index that includes all the columns referenced in the WHERE clause and in the query's field list.
  • Maintain the correct order of columns in the index, matching the order in which they are used in the query, so that the index can be used optimally (a small X++ sketch follows this list).
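As a small sketch of the covering-index and column-order guidelines (Table1, its fields, and the index are hypothetical), assume a nonclustered index on Table1 covering (Field1, Field2):

Table1 table1;

// The where clause filters on the leading index column and the field list only
// selects indexed columns, so SQL Server can answer the query from the index
// alone, without a key lookup into the clustered index.
while select Field2 from table1
    where table1.Field1 == "1001"
{
    info(table1.Field2);   // assumes Field2 is a string field
}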

5. Row by Row operation vs Bulk operation

Common Issue:
Query performance and high round trips to the server

Description of Suggestion/Fix:
The three set-based operations update_recordset, insert_recordset, and delete_from are a great way to improve the performance of X++ queries, since multiple rows are affected with only one RPC call to the database. The problem is that overriding the system methods update(), insert(), and delete() on a table may break this behavior: the wrong customization can turn a set-based operation into a row-by-row operation. It is therefore recommended to check the RPC calls with Trace Parser to verify the performance of the intended X++ code.
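As a sketch (Table1 and its string fields are purely illustrative), compare a row-by-row update with its set-based equivalent:

Table1 table1;

ttsbegin;

// Row-by-row: one update() call, and one round trip, per matching record.
while select forupdate table1
    where table1.Field1 == "1001"
{
    table1.Field2 = "Processed";
    table1.update();
}

// Set-based alternative: a single update_recordset statement sent to the database.
update_recordset table1
    setting Field2 = "Processed"
    where table1.Field1 == "1001";

ttscommit;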

6. Pre-Allocation of non-continuous number sequences

Common Issue:
Intermittent Scenario Slowness

Description of Suggestion/Fix:
This is a crucial setting in the Dynamics AX application that needs to be reviewed every few months in production to match the actual usage of the number sequences. Number sequences can be continuous or non-continuous. When they are non-continuous, you can enable preallocation per ID and thereby reduce database calls and improve performance. When consumption is high, such as several IDs per second, we have noticed lock escalation events on the table NumberSequenceTable. This is especially true when Dynamics AX batches are running and generating thousands of records for journal lines creation or sales order invoicing. Please read this blog post to better estimate this consumption.
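For reference, this is roughly how X++ code consumes a number sequence (the sales ID reference is used only as an illustration). With a non-continuous sequence and preallocation enabled, most of these calls are served from the preallocated block in memory rather than from NumberSequenceTable:

NumberSeq numberSeq;
SalesId   newSalesId;

// Fetch the next number from the sequence; with preallocation enabled,
// this usually does not touch NumberSequenceTable at all.
numberSeq  = NumberSeq::newGetNum(SalesParameters::numRefSalesId());
newSalesId = numberSeq.num();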

7. Replace select * with field list and use Exists Join

Common Issue:
Scenario Slowness

Description of Suggestion/Fix:
This is one of the most common issues, with a huge performance impact. Use a field list and an exists join when possible. For every X++ select statement, make sure you only retrieve the necessary fields. The same idea applies to the exists join statement, which reduces the amount of communication between the AOS and the database. This is especially true when customization adds a lot of new fields to existing tables.

For example, replace the following code:

while select table1
{
    select table2
        where table2.Field1 == table1.Field1;
    print table2.Field2;
}

WITH

while select Field1 from table1
    join Field1, Field2 from table2
        where table2.Field1 == table1.Field1
{
    print table2.Field2;
}
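And when no columns from Table2 are actually needed in the result, an exists join (sketched below with the same illustrative tables) avoids returning Table2 data altogether:

Table1 table1;
Table2 table2;

// Exists join: SQL Server only checks that a matching Table2 row exists,
// and no Table2 columns are returned to the AOS.
while select Field1 from table1
    exists join table2
        where table2.Field1 == table1.Field1
{
    info(strFmt("%1", table1.Field1));
}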

8. Unnecessary Joins across multiple tables with huge data

Common Issue:
Query slowness

Description of Suggestion/Fix:
It is a general observation that the most time-consuming, complex queries written to achieve complex functionality contain multiple joins. A common mistake is having unnecessary joins among related tables, regardless of whether data from the joined tables is actually used.

For Example:

You can replace the following query:

while select Field1 from table1
    join Field1, Field2 from table2
    join Field2, Field3 from table3
        where table2.Field1 == table1.Field1
           && table3.Field2 == table2.Field2
{
    print table2.Field2;
}

WITH

while select Field1 from table1
    join Field1, Field2 from table2
        where table2.Field1 == table1.Field1
{
    print table2.Field2;
}

9. Outdated statistics on highly growing tables leading to poor execution plans

Common Issue:
Long running queries

Description of Suggestion/Fix:
It is a common observation that despite setting AUTO UPDATE STATISTICS to “true”, we sometimes encounter outdated statistics, leading to poor execution plans and hence long-running SQL statements.

To diagnose this issue, after the expensive query has been identified, you can run “SET STATISTICS PROFILE ON” before executing the query for analysis. Observe the output and compare the Actual Rows and Estimated Rows. If the difference is too high, either or both of the following suggestions can be considered:

a. Check whether there is room for query optimization (for example, by adding a new filter/range to span a smaller set of records).

b. Update statistics with a full scan on the table.

10. AX/SQL Configuration affecting performance

Common Issue:
Overall System/Process slowness

Description of Suggestion/Fix:
Sometimes there is no clue in the code, and yet the overall system is slow or a specific scenario inconsistently shows poor performance. Some of this is attributable to inappropriate settings for configurable AX/SQL Server parameters. Please refer to the Top 10 issues from AX Health Check blog post to know more about these settings.