Thursday, October 08, 2009

Concepts Guide: 8/27 - Memory Architecture

Like the picture on the 2nd page. Yes, I am visually oriented.

On page 4:
When automatic SGA memory management is enabled, the sizes of the different SGA
components are flexible and can adapt to the needs of a workload without requiring
any additional configuration.

However, the SGA components are flexible regardless of the setting for automatic SGA memory management (aka automatic shared memory management, ASMM). While I agree that ASMM as a conceptual feature is rather cool, there are known issues when allocations and deallocations oscillate rapidly, causing locks on the underlying pools and decreasing performance overall.

To be truly effective, dynamic memory management would allow the kernel to dynamically allocate memory segments from the OS. As it stands, 100% of greater(SGA_TARGET,SGA_MAX_SIZE) is allocated when the instance starts; if SGA_MAX_SIZE is greater, the memory is allocated but not used, saved away for a potential future use. How is that dynamic?

From page 5:
"As the workload changes, memory is redistributed to ensure optimal performance."

*cough cough* Unless of course you hit a bug....

Good overview and discussion of the SGA and buffer cache in general.

Page 6:
"If the user process finds a dirty buffer as it searches the LRU list, it moves that buffer
to the write list and continues to search."

I did not realize that the user process could move dirty buffers to the write list; I thought only the DBWriter processes could do that. Hmm...

Slightly disappointed that there is no talk about the negative ramifications of setting a large buffer cache, specifically how it adversely affects the time to scan the LRU. The positive effects are fairly obvious. =) I was also hoping for more coverage of the Redo Log buffer. But, the couple of paragraphs in the document (and more later in Chapter 9?) at least introduce the concept. I still find it strange that the defaults are so small.

By the time I get to page 10, I am ready for another picture; it would be nice to start with the overall picture, and then "zoom in" on each section to go along with the text a little bit.

Data Dictionary Cache, aka Row Cache. Is this the same cache that 11g is using to store sql results?

Like the detailed steps on page 12; this is the kind of details I have been waiting for. And now I can rip them up. =)
The first bullet of #2 states that the ANALYZE command (when used to update/modify stats) flushes the cursor from the shared pool. People still use the ANALYZE command in 10g? I thought the dbms_stats procedures were preferred and more popular. In any event, the underlying mechanics are still the same; if you alter the stats, the cursor is no longer valid and dropped.
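
For the record, the dbms_stats route has the same side effect. A minimal sketch of what I mean (MY_TABLE is a made-up name; in 10g the default is rolling invalidation, so no_invalidate => false is what forces the immediate flush described above):

begin
  -- MY_TABLE is a placeholder; cascade covers the indexes too
  dbms_stats.gather_table_stats(
    ownname       => user,
    tabname       => 'MY_TABLE',
    cascade       => true,
    no_invalidate => false);
end;
/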

Now if only we had a way to surgically splice out individual cursors without having to make relatively "impactful" changes...

Bullet 3 is interesting; I had no idea that changing the global name flushed the shared pool. Wow.

The couple of paragraphs about the Large Pool are succinct, a bit too much so. How does Oracle decide when to use Large Pool memory, as opposed to the Shared Pool? What are some good rules of thumb in terms of how much to allocate (probably covered in the Performance Tuning Guide, but the Concepts Guide should say _something_)? No Large Pool Advisor? =)

The section on the Streams Pool is even worse. It does not tell me what exactly the memory is used for, how to tune it, what to do with it, nothing. And the 2nd paragraph is quite confusing; the Streams Pool may come from different parent pools depending on how SGA_TARGET is set? Who signed off on that little bit of tomfoolery?

The section on Dynamic SGA is almost misleading. The shared memory allocated to the server instance does not actually change at all; 100% of the requested memory (even if it is not used) is given during instance startup. This concept of a "dynamic memory usage" feature totally ignores that fact. So if the instance gets 50GB of memory but only uses 500MB, you are wasting 49.5GB for no good reason. That is not dynamic memory.

"Both the buffer cache and the SGA pools can grow and shrink at runtime according to some internal, Oracle-managed policy."

This is the Concepts Guide, for crying out loud! Some internal, Oracle-managed policy?!? Who wrote that?!? Worse, this particular policy has some serious drawbacks, especially earlier in 10g. We observed that Oracle would thrash itself trying to resize the "dynamic" pools many times a second, apparently oscillating between two different "optimal" thresholds. Some policy.

LOCK_SGA: I have never heard of this parameter. And, oh, here is the reason why: we run Solaris. Good thing the Concepts Guide cleared that up. So, this is the Concepts Guide, right? I am not reading some PR BS, am I? Why would LOCK_SGA be a bad thing? If I am running a Production database, especially on a host that is dedicated to Oracle, why would I ever want memory pages to swap out?

SHARED_MEMORY_ADDRESS: Again, never heard of this one. In fact, Google is suspiciously quiet on this one as well. Why would I want to set this parameter? Don't get me wrong, I am glad the Concepts Guide at least says *something*, but some explanation would be nice. I assume this parameter is an offset; if you set the same thing in multiple databases, I would have a hard time believing that this parameter is actually meant to grab an absolute OS memory address. That would be just silly. Actually, I am surprised this is not an underscore parameter. Which makes me wonder, is this short little blurb the only thing that keeps this parameter from being "internally supported"?


Didn't realize the PGA had two independently-maintained subsections (persistent area and run-time area). Good to know.

It is puzzling that the author would choose to include v$sesstat and v$sysstat as means of collecting performance statistics for the PGA without giving any insight whatsoever as to what to look for; in fact, just browsing v$statname (the lookup table for the statistic#) does not make this any clearer either. I find it quite unhelpful when a document that purports to "demystify" something flippantly references a source that is not at all obvious.
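
For what it is worth, here is roughly what I ended up poking at (a sketch; I am assuming the 10g statistic names 'session pga memory' and 'session pga memory max' are what the author had in mind):

select sn.name, st.value
  from v$sesstat st, v$statname sn
 where st.statistic# = sn.statistic#
   and st.sid = (select sid from v$mystat where rownum = 1)
   and sn.name in ('session pga memory', 'session pga memory max');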

I'll wrap up with that. I still find it hard to wade through these chapters; drains my brain. *grin* Overall I felt that I learned a few small things about the Memory Architecture, but I still feel like I have a ways to go before I fully grasp the skeleton that holds everything together. I hope this becomes more lucid as the chapters progress.

Wednesday, September 23, 2009

Concepts Guide: 7/27 - Data Dictionary

I still cannot believe "someone" had the gall to reduce the data dictionary to 5 pages. =) And one blank page to rub salt into the wound.

I had an interesting challenge when trying to explain the dictionary to a colleague. So we all have a general idea of what the data dictionary is and what it does (just read the first page of this chapter in the docs). When does it get built? Interesting question. Usually we say that catalog.sql builds the dictionary, or even bsq.sql. However, did you realize that 601 fixed tables (10.2.0.4) exist in memory just for starting the instance (NOTE: no database!)? Try this one on for size:
/u01/app/oracle/product/10.2.0.4/dbs: echo "db_name = SAC" > initSAC.ora
/u01/app/oracle/product/10.2.0.4/dbs: . oraenv
ORACLE_SID = [DUMMY] ? SAC
ORACLE_HOME = [/u01/app/oracle] ? /u01/app/oracle/product/10.2.0.4
/u01/app/oracle/product/10.2.0.4/dbs: sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Wed Sep 23 09:32:35 2009

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to an idle instance.

SAC_SQL > startup nomount
ORACLE instance started.

Total System Global Area 419430400 bytes
Fixed Size 2286584 bytes
Variable Size 114105352 bytes
Database Buffers 268435456 bytes
Redo Buffers 34603008 bytes


SAC_SQL > select count(*) from X$KQFTA;

COUNT(*)
----------
601

If you want all the names of those fixed tables, select KQFTANAM.
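
In other words, something like:

select kqftanam from x$kqfta order by 1;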

So, even without running any scripts, we have "data dictionary" objects residing in memory. This helps me to understand what the dictionary is and what it does. For an analogy, it is like the overhead address space a program uses to store all its global variables. In this case, it just happens to be organized into relational tables with gobbledygook table/column names and normalized up the wazoo! =)

I have to confess, I was quite disappointed with this chapter. Of the 5 pages, the first 4 tell you not to change anything and give a brief overview of the differences between USER, ALL and DBA views. The last page starts to get just a tiny bit juicy and at least mentions dynamic performance views, but then, like a cheap battery, dies out too soon.

There is some really cool stuff locked away in the x$ tables, let alone the rest of the db metadata. For a quick glimpse under the covers, I found eygle's list quite informative and helpful; it is allegedly a copy of Metalink note 22241.1, which is supposed to be internal only (I cannot see it at this point in time); it merely lists the fixed tables to demonstrate the naming convention, but it also gives us an idea how things are organized and what types of things the kernel tracks. I hope one day Oracle wakes up and realizes the advantage of sharing more information like this.

Yong Huang also has a listing.

Monday, September 21, 2009

Learning about parameters

We are kicking off an effort to analyze and hopefully standardize parameters across some 200 databases. Fortunately, we have a history of parameters which is made possible by an in-house job that scrapes them from each database every night.

So this post is more about my own little path of discovery; I am sure most of you know this already.

Lesson 1: Pay attention to what the documentation says
For instance, db_cache_size has this little blurb in the 10gR2 docs:
The value must be at least 4M * number of cpus * granule size (smaller values are automatically rounded up to this value).
Let us assume the documentation means to say 4*1024*1024, as opposed to 4 megabytes; 4 megabytes * granule size (which is also in bytes) would give you bytes squared, which makes no sense. We will also assume the granule size = 8KB, but in order for the numbers to be realistic, we will say 8 bytes. That first 4M is really throwing things off. 4M * 8 bytes = 33,554,432 bytes, or 32 megabytes.

So let's say we have a generic setting of, say 200M (bear with me). If you upgrade your host from using 5 CPUs (4M * 5 cpus * 8K = 32MB * 5 = 160MB) to 10 CPUs (160MB * 2 = 320MB), the setting for db_cache_size will automatically jump up to 320MB, despite your explicit setting of 200M. If you had budgeted your databases to max out memory, you might want to be a tad careful. =)
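
If you want to check the inputs on your own system, something like this should do it (hedged; I believe v$sgainfo carries a 'Granule Size' row in 10g):

select value from v$parameter where name = 'cpu_count';
select bytes from v$sgainfo  where name = 'Granule Size';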

Lesson 2: Comments in the init file are stored in x$ksppcv.ksppstcmnt (and thus v$parameter.UPDATE_COMMENT)
Of course, one has to put the comments in the right place. We pretty much discovered this by accident; if you put a # followed by text after the parameter=value, you are all set. Of course, if you ever look at spfiles, they have the same thing. Just something we have missed for a very long time. =) We have found this to be very handy in identifying parameters that deviate from a given "standard" or are set as a result of a bug. Or an upgrade (which are just really massive bug fixes, as well as new bug introductions, right? *grin*).
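
A quick sketch of what I mean (the parameter line and the comment are made up; UPDATE_COMMENT is the v$parameter column mentioned above):

# in the init.ora - everything after the # is kept as the comment
db_cache_size = 200M   # raised for month-end batch, see ticket 1234

-- then, from SQL*Plus:
select name, value, update_comment
  from v$parameter
 where update_comment is not null;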

Lesson 3: Undocumented/unpublished x$ tables really sucks
I really wish Oracle would document the x$ tables for us. So I am looking at X$KSPPI.ksppity; all the online blogs/white papers/articles that I can find decode this value into 4 groups (boolean, string, number, file). But wait, in 10gR2 I have 5 distinct types (1-4, 6). 6 seems to correspond to "Big Integer", that little convention of using [K|M|G] after a number to distinguish a shorthand for its size. Not sure why 5 was skipped - we do not have any values for 5.
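
The quick-and-dirty query behind that observation (run as SYS, since it is an x$ table):

select ksppity, count(*)
  from x$ksppi
 group by ksppity
 order by ksppity;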

Overdramatic? Saying that such a small thing sucks? This is merely the tip of the iceberg - there is so much out there that is not available to the general public, and the experts end up making guesses (really good and quite often very accurate guesses, but guesses nonetheless).


Well that is it for now. Still exploring, learning... seeing what's out there.

Friday, September 11, 2009

Concepts Guide: 6/27 - Dependencies Among Schema Objects

Almost nice to have a short chapter. I cheated and peeked ahead and saw that Chapter 7 is only 6 pages; how the heck did they squeeze the Data Dictionary into 6 pages? Guess we will have to wait and see.

I like how Oracle automatically attempts to recompile dependent objects if the referenced object is changed - the discussion in this chapter highlights Oracle's infrastructure (although it does not detail it) and how it all happens like black magic. =) Here is a quick example:

SQL > create table t1 (a char(1));

Table created.

SQL > create view v1 as select * from t1;

View created.

SQL > create view v2 as select * from v1;

View created.

SQL > select object_name, status from dba_objects where object_name like '__';

OBJECT_NAME                    STATUS
------------------------------ -------
V1                             VALID
V2                             VALID
T1                             VALID

SQL > alter table t1 add (b char(1));

Table altered.

SQL > select object_name, status from dba_objects where object_name like '__';

OBJECT_NAME                    STATUS
------------------------------ -------
V1                             INVALID
V2                             INVALID
T1                             VALID

SQL > select * from v2;

no rows selected

SQL > select object_name, status from dba_objects where object_name like '__';

OBJECT_NAME                    STATUS
------------------------------ -------
V1                             VALID
V2                             VALID
T1                             VALID


Note how V2 depends on V1, which depends on T1. Even though the text is not clear on whether or not a recursive recompilation will occur, we can see that in practice, it does. Cool stuff.

The notes on page 6 (Object Name resolution) are key to remember. It seems easy sometimes to forget the order of precedence:
  1. own schema
  2. public synonym
  3. schema matches fully-qualified object owner

There is an important ramification to this:
Because of how Oracle resolves references, it is possible for an object to depend on the nonexistence of other objects. This situation occurs when the dependent object uses a reference that would be interpreted differently were another object present.

For instance, it is possible that V1 needs to have T1 not be in the current schema, but rather a synonym.

Right below that is a section on how SQL also maintains a dependency on the referenced object. Because Oracle (as of Oracle 11gR1) does not fully support surgically removing individual sql cursors, one workaround is to change the referenced object, which immediately invalidates any associated cursors. Great if you want a new query plan because something changed and you do not want to redo the stats. Well... kinda great; not too many shops allow the DBA to make changes to the table on the fly in Production. =)
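
A hedged sketch of that workaround (T1 stands in for the real table): any harmless DDL against the referenced table, such as a comment, invalidates the cursors that depend on it and forces a fresh hard parse.

comment on table t1 is 'touched to force dependent cursors to reparse';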

The section on remote dependencies was slightly interesting - I did not realize that Oracle checked not only on timestamps, but also on signatures. When you think about it, it is amazing how much Oracle does behind the scenes.

Wednesday, September 09, 2009

RAC Attack!

Jeremy Schneider graced us with RAC Attack last week - it was quite awesome! Jeremy brings such a wealth of knowledge and passion for the technology that often times I found myself hard pressed to keep the workshop going. As I was the "organizer" person, I felt some responsibilities in those directions.

It also opened my eyes on several fronts. This is the first time I have helped to facilitate such a workshop, and there were a number of interesting obstacles, logistical and technological. Jeremy handled it all with his usual easy manner and we got it all worked out quite well. For instance, the hard drives of the individual computers were just a tad too small to accommodate all the jumpstart VM images that Jeremy likes to deploy; as a result, we ended up hosting files on various computers and mapping network drives. Not the quickest thing in the world, but hey, it worked. Also, again from the perspective of a facilitator, I found it challenging to address the numerous questions that folks had from time to time, which gave me a greater respect for those who do this kind of thing on a regular basis. Not only did Jeremy answer questions, but he took advantage of several opportunities to delve into the deeper details of "how things work".

In retrospect, we are faced with the ubiquitous puzzle of how to address different styles of learning. For those, like me, who crave the hands-on aspect, this workshop is excellent! For those who need more lecture, this lab was a bit of a wake-up call. *grin* Actually, if only we had more time, we could certainly have entertained more dialogue; RAC is rich with controversy. =)

Jeremy was also able to spill the beans a little on Oracle 11gR2, since someone decided to release the Linux version the Tuesday before the workshop began. So we were treated to a few sneak peeks and tidbits. Good stuff.

Personally, I was challenged to discover new ways to do these kinds of labs/workshops. I heard a lot of positive feedback about the wide variety of skill sets and job roles in the class, but as a result of that, the various backgrounds required different levels of "background information". Going forward, I would try to break the labs into more modular components (as opposed to a totally open lab time) and precede each lab with some solid instruction. What Jeremy did was great for DBAs, but we had some folks who needed a bit more hand-holding. That is just the nature of the beast. The good news is that Jeremy equipped us to do exactly that - we can now hold our own lab and choose any pace we want. I am hoping to pursue this a little and get others involved, especially in terms of discussing how we as an organization want to tackle overlapping job roles in regards to cluster and storage management.

The virtualization aspect was also very nice. I think it gave us a glimpse into what we can do with virtualized resources, something we can definitely utilize more fully for future labs and group sessions.

Thanks, Jeremy!

Wednesday, June 24, 2009

Concepts Guide: 5/27 - Schema Objects

I found that reading the Guide is quite hard if you are already tired. ;-)

As always, I like pictures. Figure 5-1 on page 5-3 does the topic justice. Although they would make their point more clearly by labelling the schemas.

Was not aware of intra-block chaining (pp 5-5) - interesting concept. Especially since it does not affect performance (does not increase the number of IO calls).

Figure 5-3 is good in that it really helps to see the various pieces of a block and the row, including the headers.

As much as I hate how nulls are handled in Oracle, the one good thing is that I like how Oracle just does not even record information for trailing null columns (ie, the column length is not stored in the block). Except, of course, if you have LONG data - DO NOT USE LONG! =)

I was not aware how table compression actually worked. Now that I know a little more about it, I am surprised the guide did not mention any limitations. For example, if a block has 100% unique data (uncompressible), would the symbol table still be built and populated? If not, what is the cut-off? At what point does Oracle deem compression worthwhile, pragmatically?

I have never seen a practical use for nested tables, but I'll keep my eyes open. I jumped to 27-7 as referenced just to see what it said. I still have never seen a practical use for nested tables.

The fact that sessions are "bound" to temp tables was new to me; I did not realize you could not do DDL on a temp table if it is already bound to a session. Kinda makes sense, though. I wonder why they do not simply call it "locking"? =) After all, that is how normal tables are handled.
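
A minimal sketch of the "binding" behavior (names are made up; the exact ORA error escapes me, but it is one of the "already in use" ones):

create global temporary table gtt_demo (x number) on commit preserve rows;
insert into gtt_demo values (1);      -- this session is now bound to gtt_demo
alter table gtt_demo add (y number);  -- fails with an "already in use" error
truncate table gtt_demo;              -- releases the binding; the DDL now works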

Object Views really confuse me, not being familiar with the concept. And this being the Concepts Guide, I found that the short little blurb did not really help me much at all.

I also did not realize that one could put declarative constraints on views; interesting way to propagate metadata information for restricted environments.

The short paragraph on Materialized View Logs did not do the concept any justice. I get the impression that either space and/or time was restrained when this section was done. =)

The intro section to Dimensions left my head whirling. I am not a Warehousing guy by any means; while I appreciate the extra background and the quasi-example, I find that it deep-dives too quickly for me. And using an example of a year being tied to a year-row is just the most absurd thing I have ever heard. Why not a practical, real-life example that "anyone" can grasp?

Good discussion for sequences; I like how the good is balanced with the bad - the "Caution" is stronger than I expected, but I think very important and am glad to see that the authors made it stand out.

Nice long section on indexes. *grin* I always find it strange to see references in 10g documentation to "analyze table", when most of the time I believe they really mean collect stats, for which we are told to use dbms_stats instead. The intro to deterministic (user-defined) indexes was quite interesting. I would personally consider this an example of "black magic" in Oracle. Another one of those cases where there is a strong lack of practical examples.

Figure 5-7 starts out looking like a great pictorial example. However, I found it to be quite confusing. Actually, what I really want to see is how an index is built, starting with one row. At the very least, it would be helpful to augment the figure with text explaining the function of the values for the branch blocks. However, excellent information on how searches are mathematically bounded (big-O notation).

Excellent piece on bitmap indexes; great examples, wonderful discourse. I appreciate the balanced approach to addressing the pros and cons of bitmap indexes, which may at the outset seem to be the panacea for query performance issues. The sidebar on cardinality was very well done as well.

The section on Index-organized tables was also quite interesting; however, I wonder: if they are so highly recommended for OLTP applications, why are they not more popular?

Application Domain indexes, and the Oracle Data Cartridge in general, are another area of black magic that I fear ever going back to. I dabbled in it once when attempting to define custom statistics for a function and never looked back. =) I am sure they have their place on some "True Expert"'s shelf, but not here....

Like IOTs, the Concepts Guide does a good job selling Clusters and Hash Clusters as beneficial methods, but I do not see many folks using them in Real Life. Is it merely the learning curve that keeps the standard DBA away from these features? We have a lot of third-party apps; shall we assume that the vendors simply do not have the expertise to utilize these fun but advanced toys?

Interesting stuff nonetheless.

Friday, June 12, 2009

Fusion Tables

So I admit it, I read slashdot (who doesn't?? *grin*). While some topics I really do not care about, for some reason "Oracle" in the headline does. =) And I am not opposed to Oracle-bashing, because I do a fair share myself.


I love how folks at Google Labs come up with all this crazy stuff. And not just GL, but Apple and lots of other places as well. The way technology moves is absolutely spellbinding, and I mean that in the most literal sense possible. *grin*

What I hate is techno-marketing gibberish:
"So now we have an n-cube, a four-dimensional space, and in that space we can now do new kinds of queries which create new kinds of products and new market opportunities"
Ok so I can grapple with n-cube or 4-space. Show me a query that can create a new kind of product. Heck, show me a query that can make an old product! Create new market opportunities?!? Come on, everything in the galaxy is a market opportunity. You couldn't hit a house fly with a query. And I mean that in the most literal sense. *wink*

Thursday, May 21, 2009

Concepts Guide: 4/27 - Transactions

Good info on transactions. Lots of detail, great overview. The example of a bank and financial transactions is a great analogy and fits well with the discussion. The statements about asynchronous commits were rather interesting. I had my doubts, so I looked up the referenced Oracle Database Application Developer's Guide - Fundamentals. Chapter 2 of that book (http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14251/adfns_sqlproc.htm#sthref182) informed me about the commit_write parameter. I found out that one could specify NOWAIT - as the caveats mention, one has to be aware that when using this parameter data could potentially be lost. I wonder how many folks have "accidentally" set this parameter thinking they were getting better performance for free.
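
For the curious, the knob looks something like this (a sketch of the 10gR2 syntax; use at your own risk, since the redo may not be on disk when the commit returns):

-- session-wide: batch the redo and do not wait for LGWR
alter session set commit_write = 'BATCH,NOWAIT';

-- or per statement:
commit write batch nowait;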

Naming transactions - I had no idea LogMiner could search for named transactions. That is really cool, and I'll have to keep that in mind.

I have to reread the part on two-phase commit and distributed transactions.

The coverage of autonomous transactions and pl/sql is really helpful as well. This is the kind of stuff that crosses the lines between DBA and Developers - clearly, the text states that developers are responsible for avoiding deadlock conditions. A "pure" DBA is an oxymoron in the sense that any DBA worth his/her salt is going to have to know a little about sql and pl/sql. And the more one knows, the more effective a DBA one can be. Which cements my belief - and this is one of the few areas in which I agree with the recent changes in Oracle Certification - that SQL and PL/SQL need more limelight as a fundamental DBA skill.

Short chapter. But a great chapter. Whitespace on 3-9 and 3-10??

Friday, May 15, 2009

Concepts Guide: 3/27 - Tablespaces, Datafiles, and Control Files

Again, I like the figures on 3-1 and 3-3,3-4; good to have something other than just plain words. 

Interesting how the discussion starts with Bigfile tablespaces, especially since these are not default and are a new feature. I strongly disagree that bigfile tablespaces + OMF "simplify datafile management" (3-5). While they do make many things transparent to the user (which is mostly good), and yes, even make some things easier for the DBA, it by no means makes things simpler. This is especially true for OMF. The caveats listed on 3-6 are noteworthy, especially in regards to recovery. We do a lot of clones where we copy databases across the wire, and we find that these operations work better on multiple small files in parallel, instead of fewer, gigantic files. The addition of OMF potentially just makes things that much more confusing when trying to diagnose issues, not to mention trying to teach concepts and architecture to Jr. DBAs. "Don't worry about where the files are or what they are called" is not exactly a "teachable" moment.

Who uses exabyte storage centers anyway? *grin* The few, the proud?

The short blurb on 3-7 for the SYSAUX tablespace had a very intriguing note. "It reduces the number of tablespaces created by default, both in the seed database and in user-defined databases." Really? So in 9i we had fewer tablespaces, by default, and yet SYSAUX somehow reduces tablespaces in 10g? What did I read wrong?

Based on these short accounts, it is not overly clear what the SYSTEM or SYSAUX tablespaces actually do for a living. Ok, so they store metadata that makes the database run. What does that mean? It will be interesting to flesh out the details a bit.

On the other hand, the descriptions of Undo and Temporary tablespaces are quite helpful, and I thought the author did a great job at introducing those concepts. Not so sure about the "use" statements on 3-9. Use a tablespace to create tablespaces?

Following that, I appreciated and value the explanations of Extent and Segment Management. I still do not quite understand the justification for extents, but I do see the importance of Locally Managed Tablespaces and ASSM. Kinda makes me wonder when (not if) the older, manual methods will be deprecated.

I like the detail the author goes into for the Temporary Tablespaces. However, the blurb about sort segments has me wondering. Does Oracle really only use one shared sort segment? The wording is a little confusing; at one point it sounds like there is one sort segment (and only one?) per TEMP tablespace, yet the last sentence on 3-13 states that the sort segment will grow to be as large as all the storage requirements for all sorts in the instance. Perhaps the implication is that the sum of all sort segments will grow to accommodate such space usage?
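
v$sort_segment seems like the place to check; a sketch (I believe there is one row per temporary tablespace):

select tablespace_name, current_users, total_blocks, used_blocks, free_blocks
  from v$sort_segment;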

The Controlfile overview is good - I was hoping for another picture. =) I can't wait for a deep-dive on this critical file.

Overall, a great chapter. I thought the organization was a bit backwards, starting with BigFile tablespaces and ending with controlfiles, but maybe that's just me. A great introduction into the core pieces of the lower-level storage mechanisms.

Thursday, May 07, 2009

Concepts Guide: 2 of 27

The figures (diagrams) in this chapter really help explain the basic structures of blocks, which are in themselves basic building blocks for understanding how Oracle works. After reading this chapter, I am quite unclear what the purpose of the extent is. But I'll get to that shortly.

The discussion about high water marks is good, but slightly misleading due to the fact that Oracle does not have one water mark, but rather two: a low high water mark (LHWM) and a high high water mark (HHWM). *grin* And the space above the LHWM is not guaranteed to be free (only above the HHWM).

I rather enjoyed the section on data blocks (starting 2-3). While the bit about data being stored in a byte on the OS is a tad confusing, the pictures and discussion really help one to gain a toehold as one starts the journey up this monstrous mountain. I did not realize the "overhead" section was actually so small, considering all the information stored in the header. This section left me wanting more details, which I hope are forthcoming in later chapters. *grin*

With the many advantages of ASSM (automatic segment space management), I am surprised that Manual mode is even available. Perhaps there is a use case for particular warehousing applications that need to manually control the freespace? Might as well throw LMT (Locally managed Tablespaces) in as well - if Oracle has such a strong recommendation against the older, more manual methods. =)

On 2-6, I found myself appreciating the information about free space and the various storage parameters and row chaining/migration. Again, the figures really help here, and I can only hope that they continue throughout the rest of this Guide. I did learn that Oracle only coalesces free space at the time of DML when needed - Oracle does not even attempt to coalesce during "idle" times. Which is interesting - why not?

So on 2-9 we finally get to extents. They seem like middlemen in this game, acting as a second layer of abstraction between "blocks" and "segments". Why? What role do extents play? While I did not find a clear answer by reading this chapter, I would have to assume that the most important role they play is to allocate a group of blocks together for a particular segment (ergo, segment type), kinda making them sticky. Could not segment headers be made a little larger and subsume all the responsibilities of extents? Just curious. Perhaps I am missing something fundamental here. Maybe the whole point is to keep the respective object headers small? I did find one page that does not answer the question whatsoever. *grin* I also see a short blurb about the Segment Space advisor (in the context of Extents, keep in mind) - call me naive, but I would expect Oracle to resolve fragmentation quietly, internally, automatically.

The subsequent explanations of the various segment types were good, and I liked the way the Undo concept was spelled out. The way Temporary extents/segments and Undo retention is detailed is excellent! Again, a great overview of some very basic things (and not so basic, appropriately).


Wednesday, May 06, 2009

Sojourn through the Concepts Guide: 1 of 27

I have been challenged on at least three occasions to read the Concepts guide (thanks to Tom Kyte, Rich Niemiec and Job Miller).

The Concepts Guide (Book? Epic Saga?) is quite large. 542 pages at 8.07mb (PDF version), compacting the knowledge about the Oracle RDBMS into a mere 27 chapters. Since I am most familiar with 10g, I decided to start there. I noticed that the Primary Author (Michele Cyran) is also noted as Primary for the 9i version, and I imagine that she is listed as a Contributing Author in the 11g version merely because so much of the content is reused (which makes sense - hard to improve upon such a good thing). After reading the first few pages of each, I confirmed that much of the original content is unchanged. Again, this makes sense - we are talking about concepts, and the basic, fundamental concepts of the RDBMS have not changed that much, if at all (aside from new features).

The first chapter, as an overview, is excellent! I found myself being surprised that it was possible to sum up the entire thing in just 38 pages without obviously cutting out too much. Given that, I was equally surprised that there is so much attention given to the Grid Architecture (one blatant, rather large addition since the 9i version).

So, going through the first chapter chronologically.
I found the opening statements quite appropriate - again, an excellent summary. How often have DBAs been asked by lay folks and family members "What is a database?". I like how the author starts at ground zero and builds upwards.

The next 7 pages spent a bit of time on the Grid Architecture. Surprising, as it spans not only the roadmap of the database but extends to Applications and even to the enterprise as well (from Storage to how Information is handled as a commodity). While I agree with the theory of these ideas, and I see how it would be easy to get caught up in the novelty, from practical experience I know these features are whitewashed pretty well for general consumption as the PR folks try to sell it. But I digress. The bottomline is that I value the insight we are provided by these pages, as it gives those of us outside the Oracle Corp a general idea where Larry is going.

The "How Oracle Works" section on 1-17 is excellent as well. This is the kind of stuff that makes me want to go and read about all the nitty-gritty details. The flow and overly-simplisitic nature of this section is perfect in this part of the Guide.

Another interesting point was the short blurb on Quiesce (1-20). I have never thought to use Quiesce to put the database into a sort of "restricted mode". In fact, the few occasions in the past that I have found my database in Quiesced mode, SYS could not do anything (other than shutdown/startup and change the mode back). Shows how little I know about this feature.

The rest of the chapter was appropriately basic, in my opinion. Lots of good bits of information, and I think they set the stage well for the rest of the Guide.

Tuesday, May 05, 2009

runInstaller problems: a tale of no support

Granted, I am in a rather cynical mood; after painfully working my way through several SRs this past week (about 6 on Enterprise Manager and/or agents alone), I was not in the mood for any more games. So I gritted my teeth rather loudly when I had to open another SR about the Oracle Universal Installer. Or maybe we should call it the "Oracle Undetermined Installer"? Oracle Unbeknownst Installer?

We were doing a lot of Oracle Home installs. Yes, I could simply have cloned a Home (choose your poison), but this was for a class and I wanted students to go through the experience. And we were using the silent installer, because that is just so much cooler. And much faster to boot. So half-way through, we start seeing something really strange - we would get the normal banner, the "Please Wait...." message and then... nothing. No matter how long you waited, nothing. The annoying part is that runInstaller likes to kick things off in the background, so it is not unusual to see some messages flash by, but one can still navigate. So if you see "Please Wait..." and yet can still type, one thinks the installer is still running. One would be wrong.

The Installer was simply aborting. The logs had absolutely no meaningful help in them whatsoever. So I pleaded my case with the smart folks on oracle-l and filed a case with Oracle. As always, I received some interesting information from oracle-l quickly, and Oracle Support just sorta... slowly... asked for an RDA (why?!? For all that is sacred and worthy, why on earth do you need an RDA for this?!?), asked for versions, and without blinking an eye told me my OS was unsupported.

Flustered, I fired up the Installer in graphical mode. Lo and behold, I get an error message saying /tmp was full (or rather, only 50mb was free, which is apparently not enough room for the temp JRE it wants to kick off). So after deleting some crap that the installer left behind from previous runs, I can now run the silent installer.

Thanks, Oracle Support, for being so helpful.

The OUI seems really antiquated. No, I take that back. It does some really awesome stuff when dealing with clusters! Amazing! But simple things like having a fully documented silent installer with cookbook response files are quite lacking. And the total absence of helpful error messages? But like many things with Oracle, they concentrate on the "bright shiny new things" and often do not turn back to polish the older things. No, that is not fair either - that is quite wrong. Oracle does go back and fix some things rather well. But the things that get left behind or fall through the cracks, they cause one to stumble, and are thus more noticeable. =) Unfortunately.

Thursday, April 02, 2009

If Oracle were an MMO, what skills would you be trained in?

Been thinking a lot about certification and skill assessment lately. Had some excellent correspondence with Dan Norris, commented on Paul Sorensen's article, and talked with a new acquaintance who is pursuing research on a Skills Management white paper. Dan Norris also hooked me up with Dan Morgan, who has been chewing on a new way to do certification that closely follows what the Medical Board does. All the while I am teaching a class that is supposed to prep the students for Oracle Certification (OCA, possibly OCP for those that can hack it), and really disliking the book.

Then in my free time, Stefan Knecht has re-introduced me to EVE Online (ummm... thanks? *grin*). EVE has a very cool, but very slow, skill training system. I am sure most folks would probably be familiar, in concept at least, with games like Blizzard's Warcraft that give "experience points" to characters for doing certain things. In EVE, you basically just pick a book and x hours later, you know that skill. A little strange, perhaps. But the cool thing (IMO) is the tech tree is based on the skill set, with a very thoughtful dependency relationship and considerable time investment for more advanced skills.

So... what happens when you smash these things together? What if you had some kind of classification like "Standard DBA Level 3", and, by definition, there were a set of prerequisite skills that the owner of said classification would have demonstrated proficiency in? Perhaps such granular classification does not make a perfect analogy in the game world, but still my mind is chewing on it. Perhaps being "certified" in every single skill a "standard" DBA does is not only impractical but unwieldy. But surely there would be a way to have some sort of dependency tree. Almost anything is better than Oracle University's current system. =)

As I dwell on this topic, I have been thinking about what those skills might be. For instance, there are obviously broad categories like "Backup & Recovery", "Performance Tuning", "High Availability". Each of these could be further divided into component pieces; at some point, you get down to atomic skills. What are those atomic skills? There is obviously a similarity to the Mercer Salary Survey as well. Perhaps that is the closest thing I can find in the IT world that attempts to do something like this. I am sure there are other efforts.

Another way this does fit with the game analogy is that it would not be feasible for a person to attempt to specialize in everything (kinda counter-intuitive, anyway). As with EVE, you choose a career path and pursue it (invest time). DBAs, or any professional career for that matter, are like this - start with something fundamental and then build on it in one direction or another.

Wednesday, April 01, 2009

Default number of rows displayed for "SET FEEDBACK"

A colleague walked into my office and asked me what the default threshold was for the number of rows in SET FEEDBACK. I said "I am pretty sure it is just on", but wanting to be absolutely positive, I looked it up.

SET FEED[BACK] {6 | n | ON | OFF}

How odd! Why would the default be set to 6?? What kind of number is that? Why would you have a default number at all?
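
If you want to see it in action, a quick sketch in SQL*Plus (the row counts are the only thing that matters here):

-- with the default (FEEDBACK 6), a 5-row result prints no row count...
select level from dual connect by level <= 5;
-- ...but a 6-row result prints "6 rows selected."
select level from dual connect by level <= 6;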

The colleague walked out with a smile.

Wednesday, March 04, 2009

Resolving file header contention on tempfiles

We are finding many bottlenecks as we attempt to optimize a character-set conversion for our 450gb database. One of the tricks is using a custom-made script (paresh) to build indexes in parallel (ie, many indexes at once), since datapump likes to do them one at a time, albeit using a degree of parallelism.

So, the most recent bottleneck was buffer busy waits. So after a little digging around, we finally came up with this query:
select p1 "File #", p2 "Block #", p3 "Reason Code" from v$session_wait where event = 'buffer busy waits';


The tricky part was that we were getting back p1 values that exceeded the highest file_id found in both dba_data_files and dba_temp_files. Tanel Poder and K Gopalakrishnan helped us out by stating that these extraordinary numbers were actually the "real" file_id for the tempfiles, which are numbered file_id+db_files, and you get db_files from v$parameter.
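
So translating those p1 values back into tempfile numbers looks roughly like this (a sketch based on the file_id + db_files rule above):

select sw.sid, sw.p1, sw.p1 - p.value as temp_file_id, sw.p2 as block#, sw.p3 as reason_code
  from v$session_wait sw,
       (select to_number(value) as value from v$parameter where name = 'db_files') p
 where sw.event = 'buffer busy waits'
   and sw.p1 > p.value;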

With that out of the way, we realized that we were hitting file header block contention (Reason Code 13) on the temp files. As Tanel succinctly put it, "it's the block 2 in the file, this is where the LMT extent management bitmap header block lives." So our index builds were hammering the few lucky header blocks in our tempfiles that contained the extent bitmap.

My first thought was to increase the size of the tablespace extents, thinking that perhaps if there were fewer extents to worry about, the extent bitmap would not be in such high demand. Turns out the effectiveness of that idea was rather poor - or at least unnoticeable. Then I decided to increase the number of header blocks. One simple way to do that is to add more tempfiles. And voila! Now I create x number of tempfiles for x number of slave processes, and I have yet to see any more file header block contention on my tempfiles. Coolness.
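
The fix itself is nothing fancy; a hedged sketch (paths and sizes are made up), one tempfile per slave process:

alter tablespace temp add tempfile '/u02/oradata/SAC/temp02.dbf' size 4g;
alter tablespace temp add tempfile '/u03/oradata/SAC/temp03.dbf' size 4g;
-- ...and so on, one per slave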

On to the next bottleneck...

Thursday, February 19, 2009

Complaint about Enterprise Manager (version 10g)

First off, let me say that EM has some really cool stuff going on! This post will be largely negative and critical, but I do not want to discount the fact that the developers of this tool have done a lot of hard work to make some things very neat and slick. Real-time data for performance monitoring at various levels is quite nice. While the interface is kooky (very!), I have to give credit for the massive conglomeration of features that are packed into this "free" tool. And most of them work to boot! *grin*

Ok, so my first gripe is just administrating the EM itself. The logfiles are a complete mess. There is no one log file that summarizes all the activity. If you get an error in the GUI, it will most likely say "see log file for further details". Which log file? Granted, most of them are in one place (depending on whether you are using DBControl or GC), but not all of them. Worse, it is very common to find a logfile that repeats what you saw on the GUI and it refers to itself for further details! How is that helpful? I want one file that can either tell me where exactly the problem is, or at least point to the file where the real further details are. Same for the config files. There are xml files sprinkled throughout, and .properties files. Since EM is comprised of so many different components, there are many different log files and many different configuration files. But nothing ties them all together. Say I want to alter the ports of the components (I have tried this, and it is not pretty). I had to touch about 5 or 6 different files, each with multiple entries for various port numbers. And then there are those ports that are defined by global variables, with no indication where those are set. If one wanted a headache, a lemon-encased gold brick to the skull would be more appealing.

What next.... The GUI itself is extremely cluttered and the opposite of intuitive. I understand that some tools take a while to learn how to navigate (ie, TOAD - still catching on), but there are some functions I use quite often and I always find myself in the wrong place. Here is a tricky example: how about "Historical SQL"? Do you mean the sql that EM has generated on your behalf, or AWR SQL? What about "features" that do not work quite well? We have tried Data Masking and Change Capture, both of which hung and finally died without any visible productive work. And trying to debug that?? See above paragraph. =) Or how about that infamous "remove datafile" option in the Tablespace screen? Cool, I did not know you could remove a datafile from a tablespace, this must be new. Yeah, try it and get back to me on that. I have had the opportunity to Quiesce my database by accident; if you flip through the Maintenance Window management screens (just click ok or next), you can freeze your database because one of the options to specify the type of the current maintenance window defaults to Quiesce, even if that is not the current type. Handy, eh?

I am curious what the 11g EM will be like. I briefly looked at the 11g Database Control, but not enough to make any kind of judgement. Would it be foolish to even hope that the administration side of things has been made easier?

'nuff for now.

Monday, February 16, 2009

A tuning performance example using 10046, SQLTXPLAIN and 10053

In my tribute to Carlos Sierra, I implied (well, rather, I just said) that I had more to write. Well, here it is.

I was alerted to a "slow performing" job in our ERP (the one out in Massachusetts). My first response is to turn on a 10046 trace; 99% of the time, this is extremely effective. The other 1% of the time I either hit that strange issue where the event itself will change the query plan (I hate that!) or the job does too much stuff client-side for the trace to be of much use ("Yeah, umm... you only used .01 seconds of CPU time for your 5-hour job").

I have a script I use to create a trigger that turns on event 10046 (and others). If blogger had a better way to post links, I would put it up. But the guts are pretty easy to figure out - I accept parameters, then basically build a trigger that does the following:
CREATE OR REPLACE TRIGGER SYS.$trig_name
AFTER LOGON
ON DATABASE
begin
  if (USER = '$user') then
    execute immediate 'alter session set max_dump_file_size = UNLIMITED';
    execute immediate 'alter session set tracefile_identifier = ''$user''';
    -- execute immediate 'alter session set statistics_level = all';
    execute immediate 'alter session set events = ''$trace trace name context forever, level $level''';
  end if;
end;
/



If the session is already connected (hmm... that was redundant), I like to do the following:
set feed off verify off
set lines 112
select 'EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => '||sid||', serial_num => '||serial#||', waits => TRUE, binds => TRUE);' "Turn Trace on for user" from v$session where username = '&&username';
select 'EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => '||sid||', serial_num => '||serial#||');' "Turn trace off for user" from v$session where username = '&&username';
set feed on verify on


I have never seen a really good reason to use only level 4 or only level 8; I would use only level 1 if all I want is the SQL, but for the most part, level 12 all the way, baby!

Just to see what I can see, I run the resulting trace file through tkprof with sort options (sort=prsela,exeela,fchela), which puts the queries with the highest run times at the top of the tkprof output file. Lo and behold, I get a few solid candidates with this particular example.
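
For reference, the tkprof invocation looks something like this (the trace and output file names are made up); the top candidate from the sorted output follows.

tkprof SAC_ora_12345_SCOTT.trc SAC_ora_12345_SCOTT.prf sort=prsela,exeela,fchela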

SELECT SSBSECT_TERM_CODE,SSBSECT_PTRM_CODE,SSBSECT_REG_FROM_DATE,
SSBSECT_REG_TO_DATE,SSBSECT_CRN,SSBSECT_SUBJ_CODE,SSBSECT_CRSE_NUMB,
SSBSECT_SEQ_NUMB,SSBSECT_SSTS_CODE,SSBSECT_CAMP_CODE,SSBSECT_CRSE_TITLE,
SCBCRSE_TITLE,SSBSECT_SCHD_CODE,SSBSECT_INSM_CODE,SSBSECT_NUMBER_OF_UNITS,
SSBSECT_DUNT_CODE,SSBSECT_LINK_IDENT,SSRXLST_XLST_GROUP,
SSBSECT_RESERVED_IND,SSBSECT_MAX_ENRL,SSBSECT_ENRL,SSBSECT_SEATS_AVAIL,
SSBSECT_WAIT_CAPACITY,SSBSECT_WAIT_COUNT,SSBSECT_WAIT_AVAIL
FROM
SSVSECT order by ssbsect_term_code, ssbsect_subj_code, ssbsect_crse_numb,
ssbsect_seq_numb


call     count       cpu    elapsed       disk      query    current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse        1     0.27       0.27          0          0          0          0
Execute      1     0.00       0.00          0          0          0          0
Fetch        1   996.36    1970.42         88     956167     176632          0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total        3   996.63    1970.70         88     956167     176632          0


I was told that this usually takes closer to 5 minutes (not 33 minutes). Stranger yet, all examples of this query with valid predicates trimmed the run-time down significantly (sub-second), which tells me that predicates are being pushed into the view (a good thing).

Just as a litmus test, I ran this query with a privileged account (SYS), and noticed that the query plan was indeed much better - so at least I knew there was a better plan. And then I realized that we use FGAC policies against the underlying tables to enforce VPD. Great. Tuning FGAC is no fun at all. Because I like to have the complete query in front of me, I got the view source (formatted in TOAD), then ran the query with the original user id and event 10730 turned on (captures VPD predicates). I was then able to reconstruct the query that finally got passed into the parser.

Trying to get the original query to use a better plan, I started playing around with hints and statistics (in independent tests). The hints are a pain because, as Jonathan Lewis will often point out, you want to use a ton of 'em (not just one or two). Which got me to thinking about the 10053 trace and how that gives you the hints to reconstruct a query plan in a nice format (easier than reading other_xml out of v$sql_plan). I also stumbled upon some new index statistics that gave me the "better" query plan. As I attempted to explain which statistics did the trick, I turned to SQLTXPLAIN to compare the queries for me, and it did a wonderful job (as stated in a previous post)! It helped me to narrow down the problem to the fact that the combination of more rows and fewer leaf blocks had altered the clustering factor enough to be noticeable.

"Now wait", you say, "how do you get more rows and fewer leaf blocks?!?" Easy. If you rebuild your index and do not collect stats for a few months on a busy table... viola! *evil grin* Ok, I need to include the fact that we recently went through a character set conversion which required us to completely rebuild the database; at the time, we also decided to import all the old stats, which is much faster than generating new ones. Unfortunately, we overlooked the fact that indexes will generate new stats when you build them, so importing old stats is a bad idea (in general) for indexes.

So, we have an altered clustering factor. How did that affect the query plan to go from 5 minutes to 33 minutes? To be honest, I do not have all the answers yet. I am still researching the "why" behind this. What I have gathered so far is that the index selectivities have changed dramatically (these came out of the 10053 trace):

OLD/SLOW - uses PK index
Index: PK_SSBSECT Col#: 53 1 2
LVLS: 2 #LB: 4104 #DK: 649804 LB/K: 1.00 DB/K: 1.00 CLUF: 30917.00

Access Path: index (IndexOnly)
Index: PK_SSBSECT
resc_io: 3.00 resc_cpu: 21564
ix_sel: 7.6946e-07 ix_sel_with_filters: 7.6946e-07
Cost: 3.01 Resp: 3.01 Degree: 1

NEW/FAST - use table scan instead
Index: PK_SSBSECT Col#: 53 1 2
LVLS: 2 #LB: 3072 #DK: 689157 LB/K: 1.00 DB/K: 1.00 CLUF: 34646.00

Access Path: index (IndexOnly)
Index: PK_SSBSECT
resc_io: 2294.00 resc_cpu: 119163783
ix_sel: 0.74604 ix_sel_with_filters: 0.74604
Cost: 2326.68 Resp: 2326.68 Degree: 1


So, CLUF increased by 12% and LB decreased by a whopping 25% and DK up by 5.7%. So how did I end up with a ix_sel that was different by 10^6? I mean, I see why a tablescan was preferred! =) But what the heck?

Did I mention I am eagerly awaiting for that 10053 parser? =)

Friday, February 13, 2009

Kudos to Carlos Sierra

I have been playing more and more with Oracle's SQLTXPLAIN; or rather, let me give the full credit to Carlos Sierra, who has written and fully supports this nifty little gem.

Jonathan Lewis got me hooked on dbms_xplan, and I like it a lot! Just the other day I picked up +PEEKED_BINDS as an option from various posters (Tanel Poder, Greg Rahn, Jonathan Lewis) and started using it right away. And I never really tackled SQLTXPLAIN just because.... I had used the earlier version (TRCANLYZR... or something). Until recently.
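
The dbms_xplan bit, for anyone who wants it, goes something like this (the sql_id is a placeholder):

select *
  from table(dbms_xplan.display_cursor('&sql_id', null, 'TYPICAL +PEEKED_BINDS'));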

Carlos has really done a bit of quality work on this piece; he has provided excellent documentation and some good examples. He has shown a wonderful amount of foresight in generating various traces (10053, 10046) and packages functionality like generating testcases, comprehensive web reports and sql plan comparisons. The last of those I had an opportunity to explore today, and it helped me find a problem with some indexes (that will be a separate post after the dust settles). I just really like the presentation and functionality of this tool. The "main" html report is very tasteful with expand/collapse java snippets, good flow, and tons of useful information. The comparison did a great job at highlighting areas of concern, and his "Observations" often make me think about something, even if only to affirm. I like the flexibility of grabbing a sql_id, sql hash or plan_hash.

True, it is not perfect yet. Very close, though. I do not like how the install scripts ask for a "schema user" or "application user". To work around that, I create a new role (SQLTXPLAIN_ROLE) and provide that as my "application user". Then, whenever I want to run SQLTXPLAIN, I just grant/revoke that role to the real application userid. Also, altering the parameters is a little kludgy. I love that Mr. Sierra has made the parameters available and has provided documentation for them (a pleasant surprise), but it would be nice to specify them at run time in a more "normal" fashion. Like having a flag to turn off the table exports. And what about using datapump instead? =)
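
The role workaround is literally just this (APP_USER is a placeholder for the real application userid):

create role sqltxplain_role;
-- hand SQLTXPLAIN_ROLE to the install script as the "application user", then:
grant sqltxplain_role to app_user;
-- ...and revoke it again when done:
revoke sqltxplain_role from app_user;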

One other thing that I am still working on is to get it to work as a client-side piece. After working through a few kinks (ie, my bads) with Sierra, the only portion that bothers me a little is that it keeps asking me for the password for exporting tables. For the most part, I want to skip that.

Anyway, this tool is AWESOME! I love it! I will still use dbms_xplan a bit (after all, this tool does as well), since it is much lighter weight. But I am definitely adding SQLTXPLAIN to my toolbox and telling others about it. I have also had a few exchanges with Carlos, and he is definitely on the ball.

I am extremely curious about one thing yet. There are a number of interesting tables in the SQLTXPLAIN schema. What is SQLT$_10053_PARSE for? I have never seen it populated with any information. Is there a tool that parses the 10053? Can I have it?!? =)

Now I wonder what other tools BDE has hidden away....

Monday, January 26, 2009

Teaching and Certification

I have been presented with the opportunity to teach at a nearby community college, and I was instantly interested. Little did I know what would be under this newly turned stone.

The class is the 4th part of a 4-part series based (loosely) on Watson & Bersinic's "Oracle 10g All-in-one Certification Exam Prep". After going over the book and finding it dishearteningly weak on practicality, I opted to rewrite the curriculum a little. Or rather, a bit.

The book is interesting; it is a comprehensive, rehashed summarization of the online documentation that aims to get you ready to take the OCA and OCP tests. Perhaps it does that. Ever since Bush's "No Child Left Behind" poorly-implemented federal program cursed our educational systems, I have been convinced that "teaching to the test" is horrible in the long-run and only encourages bad habits of short-term consumption and regurgitation. However, the book is also rife with errors: typos, bad index pages, technical errors and self-contradictory statements. To top it off, the list price is a freakin' $105!!

Having had dreams of utilizing the tenets of the OCM practicum in a classroom setting, I took this bull by the horns and decided to make all the online homework based on the published OCM outline. Some of the outline topics are rather easy to work up a lab for (ie, creating a database, using RMAN, etc). Some of the topics are a bit more difficult to recreate; for example, what exactly does the 10g OCM practicum test in terms of RAC setup and configuration? But for my purposes, I do not really care. Just as a disclaimer, I have never taken the 10g OCM.

This is a relatively entry-level class, so I am concentrating on the "basics"; I am going to make sure the students know how to create, configure, connect to, manage, backup, recover, tune and optimize the database. The first homework assignment was awesome (IMO). Awesome because these students, after attending the prerequisite classes, had never created a database from the command-line before; they even struggled with some of the things like where to put datafiles, how to size the memory parameters and running the catalog/catproc scripts. And I think that is a good kind of struggle. They asked questions, they explored, they made mistakes. In general, they got their hands dirty doing what we in the real world do. Opposed to answering some silly multi-choice question about which line of sql has an error in it.

And what does this have to do with Certification? Would you hire someone who held an OCP card yet had never created a database? Never had the frustration of an ora-4031 (or any number of ORA errors)? What the heck is Oracle actually certifying as a "Professional" anyway? Oracle ostensibly says that they require the instructor-led training classes specifically for that "hands-on" aspect of the OCP. Have you ever taken those courses? The labs are a joke. The classes are mostly boring lectures. Unless you end up with someone passionate like Andy Fortunak or Sue Jang. Still, I find the whole scheme rather offensive.

I spotted a blog entry by Dan Norris, who in turn had stumbled upon a thought-provoking blog entry by Bex Huff. Dan also mentioned that Oracle has a Certification blog, so I quickly made my way over there and spouted a little. I really mean what I said there; rewrite the certification paradigm and actually make it worth something, instead of a cash cow for Oracle U; isn't Larry already rolling in enough dough?

To top it all off, my students will not be eligible for certification even if they do pass the OCP. Why? The community college where I teach is not an "Oracle Certified Teaching Partner". So, even though the students will most likely get a much better education at the community college than some class they have to travel to and pay $3500 for, they will not be able to be certified, unlike those who are able to expend the time and money to do so.

I dearly hope my students appreciate the hell I am putting them through. =) Fortunately, I have received a lot of feedback that they thoroughly enjoy it. It is going to be a good semester.