Wednesday, September 23, 2009

Concepts Guide: 7/27 - Data Dictionary

I still cannot believe "someone" had the gall to reduce the data dictionary to 5 pages. =) And one blank page to rub salt into the wound.

I had an interesting challenge when trying to explain the dictionary to a colleague. We all have a general idea of what the data dictionary is and what it does (just read the first page of this chapter in the docs). But when does it get built? Interesting question. Usually we say that catalog.sql builds the dictionary, or even bsq.sql. However, did you realize that 601 fixed tables (in 10.2.0.4) exist in memory just from starting the instance (NOTE: no database!)? Try this one on for size:
/u01/app/oracle/product/10.2.0.4/dbs: echo "db_name = SAC" > initSAC.ora
/u01/app/oracle/product/10.2.0.4/dbs: . oraenv
ORACLE_SID = [DUMMY] ? SAC
ORACLE_HOME = [/u01/app/oracle] ? /u01/app/oracle/product/10.2.0.4
/u01/app/oracle/product/10.2.0.4/dbs: sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Wed Sep 23 09:32:35 2009

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to an idle instance.

SAC_SQL > startup nomount
ORACLE instance started.

Total System Global Area 419430400 bytes
Fixed Size 2286584 bytes
Variable Size 114105352 bytes
Database Buffers 268435456 bytes
Redo Buffers 34603008 bytes


SAC_SQL > select count(*) from X$KQFTA;

COUNT(*)
----------
601

If you want all the names of those fixed tables, select the KQFTANAM column.
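For example, still connected to the same nomount instance:

SAC_SQL > select kqftanam from x$kqfta order by kqftanam;

(601 rows come back, one name per fixed table.)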

So, even without running any scripts, we have "data dictionary" objects residing in memory. This helps me understand what the dictionary is and what it does. As an analogy, it is like the overhead address space a program uses to store all its global variables. In this case, it just happens to be organized into relational tables with gobbledygook table/column names, normalized up the wazoo! =)

I have to confess, I was quite disappointed with this chapter. Of the 5 pages, the first 4 tell you not to change anything and give a brief overview of the differences between the USER, ALL and DBA views. The last page starts to get just a tiny bit juicy and at least mentions the dynamic performance views, but then, like a cheap battery, it dies out too soon.

There is some really cool stuff locked away in the x$ tables, let alone the rest of the db metadata. For a quick glimpse under the covers, I found eygle's list quite informative and helpful; it is allegedly a copy of Metalink note 22241.1, which is supposed to be internal only (I cannot see it at this point in time). It merely lists the fixed tables to demonstrate the naming convention, but it also gives us an idea of how things are organized and what types of things the kernel tracks. I hope one day Oracle wakes up and realizes the advantage of sharing more information like this.

Yong Huang also has a listing.

Monday, September 21, 2009

Learning about parameters

We are kicking off an effort to analyze and hopefully standardize parameters across some 200 databases. Fortunately, we have a history of parameters which is made possible by an in-house job that scrapes them from each database every night.

So this post is more about my own little path of discovery; I am sure most of you know this already.

Lesson 1: Pay attention to what the documentation says
For instance, db_cache_size has this little blurb in the 10gR2 docs:
The value must be at least 4M * number of cpus * granule size (smaller values are automatically rounded up to this value).
Let us assume the documentation means 4*1024*1024 as a unitless count, as opposed to 4 megabytes; 4 megabytes * granule size (which is also in bytes) would give you bytes squared, which makes no sense. Likewise, while the granule size is 8KB, the numbers only come out realistic if we plug in a plain 8. That first 4M is really throwing things off. 4M * 8 = 33,554,432 bytes, or 32 megabytes per CPU.

So let's say we have a generic setting of 200M (bear with me). If you upgrade your host from 5 CPUs (4M * 5 CPUs * 8 = 32MB * 5 = 160MB) to 10 CPUs (160MB * 2 = 320MB), db_cache_size is automatically rounded up to 320MB, despite your explicit setting of 200M. If you had budgeted your databases to max out memory, you might want to be a tad careful. =)
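Rather than assuming the granule size, you can check it on your own instance; it is exposed in the SGA views (a quick sketch, assuming 10g or later where v$sgainfo exists):

select bytes from v$sgainfo where name = 'Granule Size';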

Lesson 2: Comments in the init file are stored in x$ksppcv.ksppstcmnt (and thus v$parameter.UPDATE_COMMENT)
Of course, one has to put the comments in the right place. We pretty much discovered this by accident: if you put a # followed by text after the parameter=value, you are all set. If you ever look at spfiles, they have the same thing. Just something we had missed for a very long time. =) We have found this very handy for identifying parameters that deviate from a given "standard" or are set as a result of a bug. Or an upgrade (which is really just a massive bug fix, plus some new bug introductions, right? *grin*).
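To make that concrete, a minimal sketch; the parameter value and comment text here are made up for illustration:

# in the init.ora:
db_cache_size=200M  # deviates from standard; workaround for a bug

-- and then, to report the comments back from any session:
select name, value, update_comment
from v$parameter
where update_comment is not null;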

Lesson 3: Undocumented/unpublished x$ tables really suck
I really wish Oracle would document the x$ tables for us. So I am looking at X$KSPPI.ksppity; all the online blogs/white papers/articles that I can find decode this value into 4 groups (boolean, string, number, file). But wait, in 10gR2 I have 5 distinct types (1-4, 6). 6 seems to correspond to "Big Integer", that little convention of using [K|M|G] after a number as shorthand for its size. Not sure why 5 was skipped - we do not have any values for 5.
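Here is the query I use to eyeball the types, with the decode the blogs commonly cite bolted on (the labels are their guesses, not Oracle's; run as SYSDBA since it is an x$ table):

select ksppity,
       decode(ksppity, 1, 'boolean',
                       2, 'string',
                       3, 'number',
                       4, 'file',
                       6, 'big integer',
                          'unknown') type_guess,
       count(*)
  from x$ksppi
 group by ksppity
 order by ksppity;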

Overdramatic? Saying that such a small thing sucks? This is merely the tip of the iceberg - there is so much out there that is not available to the general public, and the experts end up making guesses (really good and quite often very accurate guesses, but guesses nonetheless).


Well, that is it for now. Still exploring, learning... seeing what's out there.

Friday, September 11, 2009

Concepts Guide: 6/27 - Dependencies Among Schema Objects

Almost nice to have a short chapter. I cheated and peeked ahead, and saw that Chapter 7 is only 6 pages; how the heck did they squeeze the Data Dictionary into 6 pages? Guess we will have to wait and see.

I like how Oracle automatically attempts to recompile dependent objects if the referenced object is changed - the discussion in this chapter highlights Oracle's infrastructure for this (although it does not detail it) and how it all happens like black magic. =) Here is a quick example:

SQL > create table t1 (a char(1));

Table created.

SQL > create view v1 as select * from t1;

View created.

SQL > create view v2 as select * from v1;

View created.

SQL > select object_name, status from dba_objects where object_name like '__';

OBJECT_NAME STATUS
------------------------------ -------
V1 VALID
V2 VALID
T1 VALID

SQL > alter table t1 add (b char(1));

Table altered.

SQL > select object_name, status from dba_objects where object_name like '__';

OBJECT_NAME STATUS
------------------------------ -------
V1 INVALID
V2 INVALID
T1 VALID

SQL > select * from v2;

no rows selected

SQL > select object_name, status from dba_objects where object_name like '__';

OBJECT_NAME STATUS
------------------------------ -------
V1 VALID
V2 VALID
T1 VALID


Note how V2 depends on V1, which depends on T1. Even though the text is not clear on whether or not a recursive recompilation will occur, we can see that in practice it does; merely selecting from V2 revalidated both views. Cool stuff.

The notes on page 6 (Object Name resolution) are key to remember. It seems easy sometimes to forget the order of precedence:
  1. own schema
  2. public synonym
  3. schema matches fully-qualified object owner

There is an important ramification to this:
Because of how Oracle resolves references, it is possible for an object to depend on the nonexistence of other objects. This situation occurs when the dependent object uses a reference that would be interpreted differently were another object present.

For instance, it is possible that V1 needs T1 to not exist in the current schema, so that the name resolves to a synonym instead.
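A contrived sketch of that situation (all schema and object names here are hypothetical):

-- SCOTT has no table named EMP_DATA; his view resolves the name
-- via the public synonym to HR's table
create public synonym emp_data for hr.emp_data;
create view scott.v1 as select * from emp_data;

-- creating a local EMP_DATA changes how the reference would resolve,
-- so Oracle invalidates the dependent view
create table scott.emp_data (a char(1));
select status from dba_objects where object_name = 'V1' and owner = 'SCOTT';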

Right below that is a section on how SQL statements also maintain a dependency on the referenced object. Because Oracle (as of 11gR1) does not fully support surgically removing individual SQL cursors, one workaround is to change the dependent object, which immediately invalidates any associated cursors. Great if you want a new query plan because something changed and you do not want to redo the stats. Well... kinda great; not too many shops allow the DBA to make changes to the table on the fly in Production. =)
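As a sketch of that workaround (using T1 from the example above; my understanding is that any DDL against the object will do, so a redundant grant is a fairly low-impact choice):

-- cursors referencing T1 are invalidated and will be re-parsed
-- (and thus re-optimized) on their next execution
grant select on t1 to system;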

The section on remote dependencies was slightly interesting - I did not realize that Oracle checked not only timestamps, but also signatures. When you think about it, it is amazing how much Oracle does behind the scenes.
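For reference, the timestamp-vs-signature choice is controlled by the remote_dependencies_mode parameter, settable per session:

alter session set remote_dependencies_mode = signature;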

Wednesday, September 09, 2009

RAC Attack!

Jeremy Schneider graced us with RAC Attack last week - it was quite awesome! Jeremy brings such a wealth of knowledge and passion for the technology that oftentimes I found myself hard pressed to keep the workshop going. As the "organizer" person, I felt some responsibility in that direction.

It also opened my eyes on several fronts. This is the first time I have helped to facilitate such a workshop, and there were a number of interesting obstacles, logistical and technological. Jeremy handled it all with his usual easy manner and we got it all worked out quite well. For instance, the hard drives of the individual computers were just a tad too small to accommodate all the jumpstart VM images that Jeremy likes to deploy; as a result, we ended up hosting files on various computers and mapping network drives. Not the quickest thing in the world, but hey, it worked. Also, again from the perspective of a facilitator, I found it challenging to address the numerous questions that folks had from time to time, which gave me a greater respect for those who do this kind of thing on a regular basis. Not only did Jeremy answer questions, but he also took advantage of several opportunities to delve into the deeper details of "how things work".

In retrospect, we are faced with the ubiquitous puzzle of how to address different styles of learning. For those, like me, who crave the hands-on aspect, this workshop is excellent! For those who need more lecture, this lab was a bit of a wake-up call. *grin* Actually, if only we had more time, we could certainly have entertained more dialogue; RAC is rich with controversy. =)

Jeremy was also able to spill the beans a little on Oracle 11gR2, since someone decided to release the Linux version the Tuesday before the workshop began. So we were treated to a few sneak peeks and tidbits. Good stuff.

Personally, I was challenged to discover new ways to do these kinds of labs/workshops. I heard a lot of positive feedback about the wide variety of skill sets and job roles in the class, but as a result of that, the various backgrounds required different levels of "background information". Going forward, I would try to break the labs into more modular components (as opposed to a totally open lab time) and precede each lab with some solid instruction. What Jeremy did was great for DBAs, but we had some folks who needed a bit more hand-holding. That is just the nature of the beast. The good news is that Jeremy equipped us to do exactly that - we can now hold our own lab and choose any pace we want. I am hoping to pursue this a little and get others involved, especially in terms of discussing how we as an organization want to tackle overlapping job roles in regards to cluster and storage management.

The virtualization aspect was also very nice. I think it gave us a glimpse into what we can do with virtualized resources, something we can definitely utilize more fully for future labs and group sessions.

Thanks, Jeremy!