DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE runs and evolves successfully, but errors when returning output (ORA-06502)

We recently upgraded many of our databases to 11g.  One of our tools to make sure performance did not degrade was to implement SQL Plan Baselines for our top queries, as identified a few different ways via AWR.  That's a long story, but one interesting short note is that once you have baselines, you will want to evaluate them periodically to see whether a better plan has become available.  Oracle provides DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE so you can do this.
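
As a quick aside (and not part of the evolve job itself), a query along these lines against DBA_SQL_PLAN_BASELINES will show whether any unaccepted plans are sitting there waiting to be verified – adjust the predicates to taste:

-- Baselines with plans that have not yet been accepted, i.e. the candidates
-- an evolve run will verify.
SELECT sql_handle, plan_name, origin, enabled, accepted, fixed, created
FROM   dba_sql_plan_baselines
WHERE  accepted = 'NO'
ORDER  BY created;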

I chose to wrap my evolve in a shell script and call it from cron.  The script worked perfectly for just over a month, then one day all my log had in it was an error:

DECLARE
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at line 13

I tried increasing my SET LONG statement – no dice.  It was also clear from the baselines themselves, and from the total run time before the error, that the evolve was actually working; it was just the log/text output at the end that was failing.  In fact, DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE wasn't failing at all – the failing call was DBMS_OUTPUT.PUT_LINE.
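
For what it's worth, the failure is easy to reproduce in isolation – a purely illustrative block like this, passing an oversized CLOB straight to DBMS_OUTPUT.PUT_LINE, should die with the same ORA-06502:

DECLARE
  c CLOB;
BEGIN
  DBMS_LOB.CREATETEMPORARY(c, TRUE);
  FOR i IN 1 .. 40 LOOP
    DBMS_LOB.WRITEAPPEND(c, 1000, RPAD('x', 1000, 'x'));  -- grow c to 40,000 characters
  END LOOP;
  DBMS_OUTPUT.PUT_LINE(c);  -- fails: the CLOB cannot fit in an implicit VARCHAR2
END;
/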

Finally I read that DBMS_OUTPUT.PUT_LINE has a 32767-character limit for a single line.  My output had grown slowly but steadily each day, and on the day the script started failing it had grown beyond that limit, so this was the answer.  It is not exactly a bug in the DBMS_SPM package, but since this output should be expected to be long, one wonders why a better output method wasn't provided.  Oh well – on to the solution.

Many smart people have posted about how to chunk up a CLOB to get around this limit and print it to the screen in SQL*Plus.  I happened to like and use this one (modified to use PUT_LINE instead of writing to a file).

My working evolve script — leaving out the surrounding shell stuff which you’d want to customize for your environment anyway — looks like this:

set serveroutput on size unlimited
SET LONG 1000000

DECLARE
  results  CLOB;
  r_len    NUMBER;
  putlimit constant binary_integer := 32767;  -- DBMS_OUTPUT.PUT_LINE single-line limit
  r_buffer VARCHAR2(32767);
  r_pos    PLS_INTEGER := 1;
BEGIN
  -- Evolve all baselines: verify the unaccepted plans and accept (commit) any winners.
  results := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE (
    time_limit => 180,
    verify     => 'YES',
    commit     => 'YES'
  );
  dbms_output.put_line('finished with the evolve here, no errors so far.');

  r_len := dbms_lob.getlength(results);
  dbms_output.put_line('length of results var: '||r_len);

  -- Print the report CLOB in chunks no bigger than the PUT_LINE limit,
  -- advancing by the number of characters actually read each time.
  while r_pos <= r_len loop
    r_buffer := dbms_lob.substr(results, putlimit, r_pos);
    exit when r_buffer is null;
    DBMS_OUTPUT.PUT_LINE(r_buffer);
    r_pos := r_pos + LENGTH(r_buffer);
  end loop;
END;
/

IOUG Collaborate – Tuesday morning

After yesterday's maelstrom of learning, I never thought I'd say this, but it felt like there were too many breaks today.  I get it – exhibitors need their time to talk to us – I'm just champing at the bit for the next session.  🙂

Started the day with Arup Nanda and Performance Tuning in RAC.  Since we are moving away from RAC and back to bigger servers, this was mostly for my own edification, and he didn't disappoint.  I picked up a better understanding of the underpinnings of why certain RAC waits happen.  I had seen all of them before, had some idea of why they happened, had researched them, etc., but he was great at making the process very tangible.  Interesting tidbit: the log file sync happens to store the requested dirty block to disk on the node that had the requested block … RAC calls for the sync.  That makes I/O important for these syncs (you still need fast disk, at least for redo).  Log flush waits can in turn lead to buffer busy waits (the flush happens before the block is loaded from disk into the buffer).
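
To see how much of this is actually biting a given system, the relevant waits are all visible in GV$SYSTEM_EVENT; something like the following is a starting point (event names shift a bit between 10g and 11g, so the IN-list is illustrative):

-- Cumulative wait totals per instance since startup; illustrative event list.
SELECT inst_id, event, total_waits,
       ROUND(time_waited_micro / 1000000, 1) AS seconds_waited
FROM   gv$system_event
WHERE  event IN ('log file sync', 'log file parallel write',
                 'gc buffer busy', 'gc buffer busy acquire',
                 'gc current block busy')
ORDER  BY inst_id, seconds_waited DESC;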

Went to Tim Quinlan's AWR workshop, where he gave an intro to AWR, predominantly through SQL*Plus rather than EM, and then shared all of the scripts he has customized to create an ADDM/AWR-based trending system.  Thanks Tim – now I know what I'm doing for my 'special project' / objective for the rest of the year.  Last year I built a bunch of SAN growth trending graphs in Java for our Oracle systems, as well as an ETL script in Perl to get the data from NMS and bring it into a central database so it could be graphed.  We already have CPU/memory/I/O trending via NMS and EM, and I have been wanting to add capacity reporting about more than just disk to my system, with more 'meaning' to it.  His AWR trending scripts seem like a great opportunity to expand what my system can do.
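
Those are Tim's scripts, not mine, so I won't reproduce them here, but as a flavor of the raw material: AWR keeps its history in the DBA_HIST_* views, so a basic trending query mostly comes down to joining DBA_HIST_SNAPSHOT to DBA_HIST_SYSSTAT and differencing the cumulative counters between snapshots.  A rough sketch (the statistic and the daily rollup are arbitrary, and a real script would also handle counter resets at instance restart):

-- Rough daily trend of physical reads from AWR history (sketch only).
SELECT TRUNC(sample_time) AS sample_day,
       SUM(delta)         AS physical_reads
FROM  (SELECT sn.end_interval_time AS sample_time,
              st.value - LAG(st.value)
                OVER (PARTITION BY st.dbid, st.instance_number
                      ORDER BY st.snap_id) AS delta
       FROM   dba_hist_sysstat st
       JOIN   dba_hist_snapshot sn
              ON  sn.snap_id         = st.snap_id
              AND sn.dbid            = st.dbid
              AND sn.instance_number = st.instance_number
       WHERE  st.stat_name = 'physical reads')
GROUP  BY TRUNC(sample_time)
ORDER  BY sample_day;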

Shoutout to the Collaborate 10 planning committee for great vegetarian options.  I’m not a veggie but I prefer to eat veg if they’re around, and they have been tasty and have gone beyond the usual “here’s some pasta salad” fare.

More tonight after afternoon sessions.

Schema refresh script

We run our smaller Java apps against a single database with multiple schemas.  It's not best practice for tuning, but it is a good use of resources for us, and if any schema ever starts to see significant traffic/load, we break it off into its own database.  Most of these apps run at less than 1% CPU most of the time, and their disk and memory use is similarly low.

For testing purposes, we receive a lot of schema refresh requests for our non-production databases.  It is relatively easy to do with Data Pump, but there are a number of steps in the process and it can take 15-30 minutes to do them all, depending on the time it takes to export/import the schema with Data Pump.  After a while I had enough of the little fiddly steps and wrote a script to handle the drop/recreate/import part of the process (I already have a schema export script, but it is trivial, and most of its elements, such as setting up the standard Oracle directory for export if it doesn't yet exist and file naming by date, can be seen in this script as well).  Now the only manual steps are to export (using a script), copy the file to the non-production server, and run the schema refresh script.  It takes < 5 minutes, you can run more than one at once, and you can schedule it in cron if you would like to restore a “gold” copy daily.
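
One of those fiddly pieces, for reference, is just the Data Pump directory object and its grants – the name, path, and grantee below are placeholders for whatever your standard is:

-- Placeholder name, path, and grantee: substitute your own standards.
CREATE OR REPLACE DIRECTORY dp_refresh_dir AS '/u01/app/oracle/dpdump';
GRANT READ, WRITE ON DIRECTORY dp_refresh_dir TO system;
-- Export dump files are then named by date, e.g. MYUSER_YYYYMMDD_%U.dmp (illustrative).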

Some modifications would be required for anyone else to run this – from standard directory naming to the naming convention of your production servers.  Please do not try to use this “as is” but use it as a reference.

Specific functions:

  1. Checks whether you are on a production server.  If so, it quits – you don't want someone to run this in the wrong place and refresh a production schema.
  2. Creates your standard Oracle directory for Data Pump imports.
  3. Backs up the schema you are refreshing, for safety, unless you tell it not to (a developer told you to refresh the wrong schema, for instance).
  4. Looks up the default tablespace and the encrypted password for the schema you are refreshing (the “destination” schema) – see the SQL sketch just after this list.
  5. Exports permissions for the destination schema, if requested.
  6. Clears sessions using the schema.
  7. Drops it.
  8. Recreates it with the same password and tablespace (the profile is hard coded here for our standards; change it if needed).
  9. Imports the data using Data Pump.
  10. Reapplies permissions, if requested.
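
To give an idea of what the middle of the script actually does, here is a rough SQL sketch of steps 4 and 6 through 8.  The schema name, tablespace, and hash are placeholders rather than anything lifted from the script, and note that on 11g DBA_USERS no longer exposes the password hash, so it has to come from SYS.USER$ (or DBMS_METADATA.GET_DDL) instead:

-- Step 4: default tablespace and password hash for the destination schema
-- ('MYUSER' is a placeholder; on 11g the hash is in SYS.USER$, not DBA_USERS).
SELECT default_tablespace FROM dba_users WHERE username = 'MYUSER';
SELECT password FROM sys.user$ WHERE name = 'MYUSER';

-- Step 6: generate kill statements for any sessions still connected as that schema.
SELECT 'ALTER SYSTEM KILL SESSION ''' || sid || ',' || serial# || ''' IMMEDIATE;'
FROM   v$session
WHERE  username = 'MYUSER';

-- Steps 7 and 8: drop and recreate with the same (hashed) password and tablespace;
-- the real script also sets our standard profile here.
DROP USER myuser CASCADE;
CREATE USER myuser IDENTIFIED BY VALUES '<hash from step 4>'
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp;

-- Step 9 is then an impdp call against the directory object, adding
-- REMAP_SCHEMA=source:destination when -s and -d differ.
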
> ./schemarefresh.ksh -h

schemarefresh.ksh [[-hisdpflbgpomx] [value]]

Examples:
schemarefresh.ksh -i MYDB -s MYUSER -d MYUSER -f myDPexportfile%U.dmp -l mylogfile.log
schemarefresh.ksh -i MYDB -s MYUSER -d MYNEWUSER -f myDPexportfile%U.dmp -l mylogfile.log -b -p
schemarefresh.ksh -i MYDB -s MYUSER -d MYNEWUSER -f myDPexportfile%U.dmp -l mylogfile.log -b -g -m /mydir/mydir -x 4 

 h = This help info.
 i = The database you are importing into.
 s = Source - the schema you exported from.
 d = Destination - the schema you are importing into - this may be the same as -s.
 If -s and -d are not the same, will add a remap_schema to the impdp command.
 f = Filename of the export file.  Include %U if you exported in parallel.
 l = Log filename for your import. Will be placed in the export directory.
 b = [optional] Include this if you do NOT want to back up the destination schema first.
 g = [optional] Include this to export grants and roles for the schema user beforehand,
 and reapply them afterward.
 p = [optional] Do not print the encrypted password to the screen.
 o = [optional] Include the full syntax for any other data pump options here (such as remap_tablespace).
 Exactly what you type will be added to the impdp command line.
 m = [optional] Specify a directory to import from that is different from the destination
 database's export directory. You must still specify filename (using -f).  
 x = [optional] Specify parallel degree.

Here's the script at pastebin.com, which allows for more friendly code formatting.