Thursday, September 21, 2006

A Study in Irony

Metalink note #388422.1

Oracle says one of the best things you can do for early detection of corrupted blocks is to turn on db_block_checking. But wait, what is this I see?
Bug 4684074 OERI:510 / block corruption (ORA-1578) with DB_BLOCK_CHECKING

Just to make sure db_block_checking actually has some corrupted blocks to find, it will do the dirty work for you.
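For what it is worth, turning the feature on is just an init parameter change. A minimal sketch, assuming the 10.2 syntax where the parameter takes OFF/LOW/MEDIUM/FULL (older releases only knew TRUE/FALSE) and assuming you run with an spfile:

-- Enable full logical block checking instance-wide. FULL adds CPU
-- overhead on block changes, so test it before pushing it to production.
ALTER SYSTEM SET db_block_checking = FULL SCOPE = BOTH;

-- Verify the current setting.
SHOW PARAMETER db_block_checking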

Monday, September 11, 2006

Oracle bug status codes

11: under development review / research
16: under BDE / platform review & research
10: need more information from customer (development)
30: need more information (BDE)
35: engineer needs to review
36: duplicate issue
52: pending approval of platform line (PL)

The lure of histograms

I have been doing a lot of research into histograms. I scooped up a lot of references (which I will list later) and filled my head with no shortage of verbiage. Our current dilemma is, "Do you introduce more problems by proactively creating targeted histograms (frequency histograms on non-unique columns) than you solve?" I am of the opinion that the benefits outweigh the risks. But I am being cautious, because people smarter than I am (e.g., Jonathan Lewis, Wolfgang Breitling, Dave Ensor) all exercise extreme prejudice and use histograms only when absolutely necessary. Or at least, that is my understanding. I firmly believe that if you tell the CBO absolutely all the facts about your data and resources, you will get the optimal plan 100% of the time. Which implies that if you fail to tell the truth 100% of the time, you run the chance of getting a "bad" plan, or even the "worst" plan.
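Just so it is clear what I mean by "targeted": a frequency histogram on one known-skewed, non-unique column, and nothing else. The schema, table and column names below are made up, so treat this as a sketch rather than gospel:

BEGIN
  -- Gather a 254-bucket histogram on a single skewed column.
  -- If the column has fewer than 254 distinct values, Oracle stores
  -- a frequency histogram; otherwise it falls back to height balanced.
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'APP_OWNER',    -- hypothetical schema
    tabname    => 'ORDERS',       -- hypothetical table
    method_opt => 'FOR COLUMNS STATUS_CODE SIZE 254',
    cascade    => TRUE);
END;
/

In 10g you can then look at the HISTOGRAM column of USER_TAB_COL_STATISTICS to see whether you actually got FREQUENCY or HEIGHT BALANCED.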

And then there is the whole topic of equi-depth, or "height-balanced," histograms. Granted, there will always be a limit to the number of buckets you can use, currently 254 in Oracle 10.2.0.2. If you have more distinct values than the maximum number of buckets, you have to resort to summarizing your data as best you can. This is a vast, grey area. You can shoot in the dark, sample your data, and hope for the best (which works most of the time). A slightly better method is to describe your most popular values. However, the whole exercise is moot if your business needs require specific information about the least-popular values. This is the part that makes equi-depth histograms so messy. There is no silver bullet.
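A quick sanity check along these lines (again, the table name is made up) tells you which columns can even be described exactly, i.e., which ones fit inside the 254-bucket limit:

-- Columns with NUM_DISTINCT <= 254 can get a true frequency histogram;
-- anything above that is stuck with a height-balanced summary.
SELECT column_name,
       num_distinct,
       num_buckets,
       histogram
  FROM user_tab_col_statistics
 WHERE table_name = 'ORDERS'
 ORDER BY num_distinct DESC;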

For this reason alone, I wish Oracle would vastly increase the number of histogram buckets one could collect, so as to avoid the whole guessing game. You have to admit, it is much easier to calculate frequency histograms than to dive into your data and business logic to determine which data most needs to be described. Yes, it would also be more resource intensive, but my point is that it would be easier on the DBA.