
Protection From Slow Queries

Introducing DB2 for i Adaptive Query Processing



In V7R1M0, Adaptive Query Processing (AQP) functionality was added to the DB2 for i SQL Query Engine (SQE). Adaptive refers to query plans changing either in real time (on a current run of the query) or over time (on a future run of the query). AQP is a framework that lets SQE monitor, learn from and react to the runtime characteristics of individual queries on IBM i. Thus, IBM i users have an additional layer of performance protection from slow queries.

Query Optimization Review

Figure 1 shows the architecture of SQE. A query is passed from the user into SQE, where the SQL parser processes it and hands the parsed query to the query optimizer. After considering many permutations of query access plans, the optimizer selects the plan with the lowest estimated cost in terms of time and/or system resources (CPU, RAM, I/O, etc.).

Each access plan considered combines a join order for the tables in the query with an access method (index, table scan, hash table, sort, etc.) for each table. Each access method has different performance characteristics depending on how many rows must be processed through it. Thus, the query optimizer consults the statistics manager many times to understand how many rows various portions of the query will select, as depicted in Figure 1.

Once a final query plan is chosen, it’s passed to the database engine, which executes it and returns the appropriate result set to the user as shown. Lastly, the access plan is saved into a plan cache so that if the same query is run again, the plan can be reused from the cache, which saves the optimizer from having to reoptimize the query and results in faster query performance.

For more details on query optimization with IBM i, see “Query Optimization in DB2 for i5/OS.”

Join Order Example

The most important criterion for optimal query performance is getting the join order correct. Ultimately, the goal of optimization is to rewrite the query to discard rows that aren’t part of the result set at the earliest possible point so time and resources aren’t spent working on a row that’s later rejected. The following example will make this apparent. For simplicity, I’ll ignore indexing, parallel processing and other DB2 optimization strategies.

Consider the SQL in the example in Figure 2. If we join Orders to Customers, we must scan the rows in Orders from 1 to 100,000, and for each row that matches the criterion STATUS = 'Back Ordered' we then probe Customers on custID looking for the NAME that matches our criteria. If we suppose that 25 percent of the rows in Orders are selected (have STATUS = 'Back Ordered'), then we’ll probe Customers 25,000 (25 percent of 100,000) times. Thus, conceptually, we’ll do 125,000 (100,000 + 25,000) row lookups.
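Since the figures aren’t reproduced here, the following is a minimal sketch of what the Figure 2 tables and query might look like. The table names (Orders, Customers) and the columns custID, STATUS and NAME come from the text; the remaining columns and the literal customer name are assumptions for illustration.

   -- Assumed schema: Customers holds 1,000 rows, Orders holds 100,000
   CREATE TABLE Customers (
     custID INTEGER NOT NULL PRIMARY KEY,
     name   VARCHAR(50)
   );
   CREATE TABLE Orders (
     orderID INTEGER NOT NULL PRIMARY KEY,
     custID  INTEGER NOT NULL,
     status  VARCHAR(20)
   );

   -- Figure 2 (sketch): joining from Orders to Customers
   SELECT o.orderID, o.status, c.name
     FROM Orders o
     JOIN Customers c ON o.custID = c.custID
    WHERE o.status = 'Back Ordered'
      AND c.name = 'Acme Inc';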

Next, a DB2 for i technique called Look-ahead Predicate Generation (LPG) can mitigate some of the effect of join order. For example, consider Figure 3. Here we’re still joining from Orders to Customers, but SQE internally modifies the query to add the customer IDs we’re interested in to the selection criteria. SQE would first scan Customers (1,000 rows) to find the custID for the NAME of interest (o.custid=1). It would then modify the query with this criterion (as shown in Figure 3) and scan Orders (100,000 rows), and for the one row that matched the entire WHERE clause, it’d join back to Customers. In this LPG scenario, 101,001 (1,000 + 100,000 + 1) rows would be examined, so LPG improved the performance of the Orders to Customers join by roughly 20 percent in our trivial example. Note that much larger performance improvements are frequently seen in more complex queries with LPG. For more details on LPG, see “The Power and Magic of LPG.”
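Conceptually, the internally rewritten query of Figure 3 might look like the sketch below, reusing the assumed names from the previous sketch. The user never writes the extra predicate; SQE generates it after scanning Customers.

   -- Figure 3 (sketch): same query after LPG adds a generated predicate
   SELECT o.orderID, o.status, c.name
     FROM Orders o
     JOIN Customers c ON o.custID = c.custID
    WHERE o.status = 'Back Ordered'
      AND c.name = 'Acme Inc'
      AND o.custID = 1;  -- look-ahead predicate generated by SQE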

Conversely, if we change the join order and join from Customers to Orders as in Figure 4, we’ll scan the rows in Customers from 1 to 1,000. Only for the one row where the name matches the selection criteria do we continue with the probe into Orders. In this case, we probe the Orders table twice, and therefore we perform 1,002 (1,000 + 2) row lookups. Thus, in this simple case, the Customers to Orders join is almost 125 times faster than the original Orders to Customers join.

The Visual Explain diagrams for the three example queries are shown in Figure 5.

Even these simple examples demonstrate that join order has a drastic impact on the amount of work SQE must do to retrieve the user’s result set. Of course, with a few indexes we could get the number of row lookups down to two or three with either join order, but we won’t go into indexing details here.
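As a rough illustration of those “few indexes,” the sketch below (using the assumed schema from earlier) would let SQE probe Customers by name and then probe Orders by customer rather than scanning either table; the exact column choices are an assumption, not a prescription.

   -- Index sketch: enables index probes for both join orders
   CREATE INDEX Customers_name_ix ON Customers (name);
   CREATE INDEX Orders_cust_ix ON Orders (custID, status);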

The complexity of join optimization and the consequences of an incorrect join order become much more severe as we join more tables and larger tables (more rows per table). Therefore, it’s imperative for the optimizer to get the join order correct, or to have a means to adaptively change join orders when a suboptimal join order is detected.


Brian Muras, a software engineer in Rochester, Minn., has worked on DB2 for i in various components including the Query Optimizer, Statistics Engine and DB2 Web Query.

