Section 1: Finding Execution Plans
If a database has been upgraded from 9i, the plan table may be of an old version. Additionally, it will not be a global temporary table. You can run @?/rdbms/admin/catplan.sql to set it up properly. This also creates a public synonym and grants access to public.
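A minimal sketch (run the script as a suitably privileged user such as SYS):

@?/rdbms/admin/catplan.sql

-- after which any user can simply:
explain plan for select * from dual;
select * from table(dbms_xplan.display);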
When tracing, it is helpful to use query block names (qb_name) in the SQL so you can identify the blocks later in the trace file.
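For example (table names assumed):

select /*+ qb_name(main) */
       t1.*
from   t1
where  t1.id in (
         select /*+ qb_name(subq) */ id from t2
       );

The blocks then appear in the trace file as MAIN and SUBQ instead of generated names like SEL$1.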
The 10132 trace is new, and provides a great alternative to running the 10053 trace (much more condensed and practical). Great for baselining all query plans when set at a system level.
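A sketch of turning it on for a session (it can also be set instance-wide as an event):

alter session set events '10132 trace name context forever';
-- ... run the workload ...
alter session set events '10132 trace name context off';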
autotrace is a little tricky - traceonly does NOT mean that the query is not run; it only suppresses the resulting output of the query. Hence, inserts, updates and deletes still execute, but a straight select will not do much (no visible output).
Can format the autotrace output with the *_plus_exp columns (plan_plus_exp, parent_id_plus_exp, etc.).
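A quick SQL*Plus sketch of both points (table and column names assumed):

set autotrace traceonly
update t1 set padding = padding;   -- still executes!
rollback;

column plan_plus_exp format a100   -- widen the plan line
column parent_id_plus_exp format 990 heading 'PID'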
The pipelined dbms_xplan.display function is very nice. There are several other functions in that package I was not aware of, like display_cursor, display_awr and display_sqlset. display_cursor is great for getting the plan of a cursor currently in memory.
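For example, the plan of the last statement executed in the session:

select * from table(dbms_xplan.display_cursor);

or a specific cursor (sql_id and child number taken from v$sql):

select * from table(dbms_xplan.display_cursor('&sql_id', &child_no));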
Since the function is pipelined, it can be used in a table join operation. Jonathan showed a great example finding the query plan for all cursors that contained a FULL TABLE SCAN.
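A sketch of that style of query (the predicates are assumptions on my part):

select plan_output.*
from   v$sql sq,
       table(dbms_xplan.display_cursor(sq.sql_id, sq.child_number)) plan_output
where  (sq.sql_id, sq.child_number) in (
         select sql_id, child_number
         from   v$sql_plan
         where  operation = 'TABLE ACCESS'
         and    options   = 'FULL'
       );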
Jonathan's tests with reporting SQL also showed easy ways to implement query-level optimizer parameters via hints (e.g., /*+ opt_param('parallel_execution_enabled','false') */).
Rowsource execution statistics can be generated by setting statistics_level = all, or more surgically with _rowsource_execution_statistics = true. Can also be set with a hint (/*+ gather_plan_statistics */).
What I really like about rowsource execution stats is that you can easily generate Estimated and Actual stats, and can use something like Wolfgang's Cardinality Feedback to get a fairly good idea where performance problems may be popping up. The "ALLSTATS LAST" parameter of dbms_xplan.display_cursor displays this quite nicely. I need to do some more research on this function so we can use it more readily. Great tool for developers and DBAs alike.
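A minimal sketch of the workflow (table and predicate assumed):

select /*+ gather_plan_statistics */ count(*)
from   t1
where  owner = 'SYS';

select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));

-- compare the E-Rows (estimated) and A-Rows (actual) columns;
-- large discrepancies point at the cardinality problems Wolfgang describes.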
Finding cursors in v$sql is made efficient with hash value and sql address. Unfortunately, 10g statspack has an "old" hash value that is inefficient. Do not use it.
When looking at parsed cursors, especially via sprepsql, the version count can be quite useful. Since the stats are aggregated, you have to be careful about summarizing and translating what the stats actually are trying to say about the query.
Any "explain" may be untrue.
Like House says, "Everyone lies."
Section 2: Interpreting Execution Plans
The good news about reading explain plans is that it is (should be and can be) much easier than it might first look. For the most part, it comes down to the good old standby, "Divide and Conquer". To that extent, Jonathan has two basic rules when breaking down plans, which essentially follow the concept of finding all the child operations until you run out of descendants (think left tree-walk). The bad news is that there are, as always, exceptions. But the rules cover a vast majority of plans. And they are simple.
There are no complicated execution plans, only long execution plans.
First off, it really helps to look at the query plan with some aids. The PID (Parent ID) column is essential. Having the children indented incrementally also helps.
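If all you have is a bare plan_table, a classic tree-walk query (a sketch) produces both:

select id, parent_id,
       lpad(' ', 2 * (level - 1)) || operation ||
       decode(options, null, null, ' ' || options) as plan_step
from   plan_table
start  with id = 0
connect by prior id = parent_id
order  siblings by id;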
RULE 1 - Single-Child Parent
The child follows the parent and feeds the parent.
RULE 2 - Multi-Child Parent
The parent is an operation that determines how the children's results are combined.
All explain plans are a combination of Rule 1 and Rule 2. So what you do is break it down using those rules. That is what this section does; I'll leave it to you to go through Jonathan's slides yourself. =)
Rule 1 is relatively free of tricks. Rule 2, however, is a lot more convoluted when you start looking at the different operations, and the various ways those operations are implemented. So while it may be "easy" to group them together, interpreting them takes a little time to understand the myriad subrules. However, the context usually helps a bit. For instance, if you see a BITMAP AND operation, you are likely to see BITMAP CONVERSION operations; knowing what an operation is expecting (i.e., ROWIDs, bitmap numbers, etc.) helps you figure out what the child operation is passing up. I did not take many notes on this section because the slides are quite informative and easy to grasp once you see them.
When talking about filter predicates, I learned that access predicates are much preferred if they can replace filter predicates. The reason is that filter predicates usually happen after you get the data, while access predicates determine how you get the data. This can be showcased by an inappropriate range scan: if you have an index with 3 columns and the query only specifies column 1 and column 3, the optimizer has to do a range scan across all values of column 2. If the query can be adjusted to include column 2, the optimizer no longer has to do that range scan, and can move the filter predicate to an access predicate. Basically, without column 2, the range of "interested rows" in the index is bounded by the smallest value of (column 1, column 3) and the largest value; a range which may (quite possibly) include values that you are not really interested in (and have to be FILTERED out). If you specify all the columns, you only grab the rows you want in the first place.
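A sketch of the idea (table, index and values are all assumed):

create index t1_i1 on t1(col1, col2, col3);

-- col3 can only be a FILTER predicate on the index range scan:
select * from t1 where col1 = 10 and col3 = 99;

-- supplying col2 turns it into a pure ACCESS predicate,
-- so only the wanted index entries are visited:
select * from t1 where col1 = 10 and col2 = 5 and col3 = 99;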
select * from t1 where id = nvl(:b1, id)
"Horrendously bad practice."
"The ideal is to go back and smack someone."
The execution plan has to include a path for both :b1 is null and :b1 is not null. Instead, the application should have two different queries for these conditions. Additionally, the above code does not catch rows when id is null.
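A minimal sketch of the two-statement approach (the application decides which one to run):

-- when :b1 is supplied
select * from t1 where id = :b1;

-- when :b1 is null (whatever "no filter" is really supposed to mean)
select * from t1;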
Jonathan stressed the idea of watching your query plans to see if the optimizer uses any unusual enhancements, so that one may learn from them. For instance, Oracle has a MIN/MAX optimization for indexes. This is especially useful for correlated subqueries that can be optimized by (redundantly) copying a predicate from the parent query that just happens to contain extra columns of an existing index. Sometimes work will be completely removed, like in the case with ORDER BY elimination.
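For example (table and index names assumed), Oracle can answer this with an INDEX FULL SCAN (MIN/MAX) that visits just the one index entry it needs:

create index t1_id_idx on t1(id);

select max(id) from t1;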
Section 3: Problems with Plans
So now that Jonathan has razzle-dazzled us with all this awesome stuff, here comes some bad news.
- Explain plan may be telling lies
- Trace files don't always have stat lines
- tkprof might not dump the stat lines even if they are in the trace file
- tkprof may dump the "wrong" stat lines
- v$sql_plan might be out of date
- v$sql_plan can have holes
Explain plans are not aware of the datatypes of bind variables. Hence, if you are trying to get an accurate explain plan, it is a good idea to explicitly declare the datatypes of bind variables with a cast or a to_* function (to_date, to_number, etc.).
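A sketch (column names assumed):

explain plan for
select *
from   t1
where  date_col = to_date(:b1)       -- or cast(:b1 as date)
and    num_col  = to_number(:b2);

select * from table(dbms_xplan.display);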
Jonathan made some interesting comments about "good" applications. In particular, he stated that they do not close their cursors. We have been telling our developers to close cursors. Why is that? I think the statement needs to be taken in context, or at least with the assumption, that the cursors will be held open with the intent of reusing them, as opposed to forgetting to close them when no longer needed. When cursors are not closed, their stat lines are not dumped to the trace file. Hence, if you want plans and statistics for an unclosed cursor, you have to get them out of the v$ views.
In cases where a cursor has multiple versions and/or executions, tkprof will only report on the first one in the trace file, even though it summarizes the stats for all of them. It is entirely possible that one bad execution caused a majority of the "bad" stats, and that the explain plan in tkprof is for a completely different execution.
While showing us some SQL to demonstrate subquery factoring, he showed us the materialize hint (/*+ materialize */). There is also an inline hint that does the opposite.
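A sketch of both hints:

with totals as (
  select /*+ materialize */          -- force the temp-table transformation
         owner, count(*) cnt
  from   all_objects
  group  by owner
)
select * from totals where cnt > 100;

-- replacing materialize with /*+ inline */ forces the factored
-- subquery to be merged back into the main query instead.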
Scalar subqueries are slightly problematic in that they are not costed properly, because Oracle does not know how many times the subquery will run.
Jonathan talked about "hash table cache" and "scalar subquery cache", both of which I am quite unfamiliar with and want to research a little more. Oracle will automatically cache hashed results of a scalar subquery so that it does not have to execute the subquery over and over. However, if two values of the subquery happen to hash to the same address, the first one in wins, and the second value has to be calculated again and again and again...
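A common way to watch the cache working (a sketch, not Jonathan's exact demo; names are mine) is to count executions with a packaged function:

create or replace package demo_pkg as
  g_calls number := 0;
  function f(p_in number) return number;
end demo_pkg;
/
create or replace package body demo_pkg as
  function f(p_in number) return number is
  begin
    g_calls := g_calls + 1;          -- count real executions
    return p_in;
  end f;
end demo_pkg;
/

select id, (select demo_pkg.f(t1.grp) from dual) grp_val
from   t1;

-- demo_pkg.g_calls usually ends up far smaller than the row count;
-- it creeps up when distinct grp values collide in the small hash table.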
Additionally, Oracle introduced "order by" elimination in 10g. Unfortunately, sometimes "order by" is quite useful when dealing with scalar subqueries, and thus one may need to use the no_eliminate_oby hint.
There is another hint that can be used to swap the join inputs of a hash join (/*+ use_hash(t3) swap_join_inputs(t3) */). 10g also allows use of the no_swap_join_inputs hint to further fine-tune exactly which table you want to be thrown into memory as the "first" table.
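A sketch (tables assumed) forcing t3 to be the build (in-memory) table of the hash join:

select /*+ leading(t1) use_hash(t3) swap_join_inputs(t3) */
       t1.v1, t3.v1
from   t1, t3
where  t3.id = t1.id;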
Section 4: Advanced Features
The MIN/MAX optimization that we talked about earlier has another nice little feature. It is not a filter predicate, but rather an access predicate, meaning it gets exactly the one little piece of data that it wants and nothing else.
You have to watch your query plans. If you expect an optimal subquery but see a FILTER operation for it, you may want to re-evaluate it.
Remote operations are quite an interesting phenomenon when it comes to query plans. Although remote statistics are fetched, they are not fully used when generating a query plan. For instance, a 3-table join may be ordered such that a remote table is joined to a local table, which is then joined to a remote table. How much sense does that make? In most cases, the query will perform better if remote operations are done in as large a chunk as possible (and reasonable), instead of making many small trips back and forth.
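One way to do that (my addition, not something from the notes) is the documented driving_site hint, which asks Oracle to execute the query at the remote site; a sketch with assumed names:

select /*+ driving_site(r1) */
       l.col1, r1.col2, r2.col3
from   local_tab         l,
       remote_tab@dblnk  r1,
       remote_tab2@dblnk r2
where  r1.id = l.id
and    r2.id = r1.id;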
Parallel operations are quite unique. Contrary to popular belief, they are not always a good thing.
Parallel query is designed to be brutal and inefficient.
I believe Jonathan said this because the idea of parallel query is to use up more resources (i.e., more CPU) than a serial query would. In fact, it attempts to max out the CPU to get the query done as fast as possible. I think he even said that parallel query should never be used in an OLTP database, but I did not write that down, so I could be wrong about that.
Anyway, parallel query has to be carefully tuned, for it is far too easy to thrash your system. Looking at an explain plan and trace files, we have to be careful about what we find. The infamous "idle" waits associated with parallel operations may indicate problems, or they may not; it depends on where they are happening. If an explain plan shows many P->P (parallel to parallel) operations, that could easily be a source of contention. Much better to have PCWP (parallel combined with parent) operations. In the case of a HASH join, you do not want to broadcast a large table to all the parallel query slaves; instead, broadcast the small one (for memory) and read the larger table directly from disk (joining locally). For nested loops, it is better to join and aggregate smaller chunks at a time, thus removing "an entire layer of slaves".
Near the end, I had trouble keeping up with my notes (my butt was sore from the hard chair, I was getting tired, and I was overwhelmed with so much information). We moved on to partitioned queries, and talked about the benefits of being careful when designing partitions and how to take advantage of parallel query. To fully realize the potential of partition-wise joins, it is best to partition early in the plan so that correlated, similar work can be parallelized.
On the slightly off-topic subject of partition statistics, Jonathan mentioned:
You ought to be faking your statistics. Sometimes.
Especially true since subpartition statistics are not dealt with properly at all.
And finally, if you see a "PX SEND BROADCAST" in your query plan, that automatically means you are not doing any partition-wise joins. They are mutually exclusive.
One last quote from Jonathan Lewis:
You stop getting lucky.
(In reference to performance problems that happen all of a sudden even though "nothing changes".)
6 comments:
Great summary, much better than mine here - http://tonguc.wordpress.com/2007/03/01/another-great-undocumented-oracle-hint/
Thank you for sharing :)
Thanks for stopping by, Tonguc. Yes, Jonathan Lewis is a most excellent teacher, and I am inspired by his willingness to share from the depths of his learning. We need more folks like him.
Thought this was a great summary, with lots of important points nested in there.
Liked the JPL quotes.
Nice article, thanks for sharing with us.
This line gave me idea for solution to problem I had spent hours on at work.
"... it is a good idea to explicitly declare the datatypes of bind variables with a cast or to_ function."
What a relief - thank you for posting this.
Can you help with the statement below, for better understanding?
Oracle has a MIN/MAX optimization for indexes. This is especially useful for correlated subqueries that can be optimized by (redundantly) copying a predicate from the parent query that just happens to contain extra columns of an existing index. Sometimes work will be completely removed, like in the case with ORDER BY elimination.
Thank you