Schema refresh script

We run our smaller Java apps against a single database with multiple schemas.  It's not best practice for tuning, but it is a good use of resources for us, and if any schema ever starts to carry significant traffic or load, we break it off into its own database.  Most of these apps run at less than 1% CPU most of the time, with similarly low disk and memory usage.

For testing purposes, we receive a lot of schema refresh requests for our non-production databases.  It is relatively easy to do with Data Pump, but there are a number of steps in the process and it can take 15-30 minutes to do them all, depending on how long the Data Pump export/import of the schema takes.  After a while I had enough of the little fiddly steps and wrote a script to handle the drop/recreate/import part of the process.  (I already have a schema export script, but it is trivial, and most of its elements, such as setting up the standard Oracle directory for export if it doesn't yet exist and naming files by date, can be seen in this script as well.)  Now the only manual steps are to export (using a script), copy the file to the non-production server, and run the schema refresh script.  It takes less than 5 minutes, you can run more than one at once, and you can schedule it in cron if you would like to restore a "gold" copy daily.

Some modifications would be required for anyone else to run this – from standard directory naming to the naming convention of your production servers.  Please do not try to use this “as is” but use it as a reference.

Specific functions:

  1. Checks whether you're on a production server.  If so, it quits – you don't want someone running this in the wrong place and refreshing a production schema.
  2. Creates your standard Oracle directory for datapump imports.
  3. Backs up the schema you are refreshing for safety unless you tell it not to (developer told you to refresh the wrong schema, for instance).
  4. Looks up the default tablespace and the encrypted password for the schema you are refreshing (the "destination" schema); see the SQL sketch after this list.
  5. Exports permissions for the destination schema, if requested.
  6. Clears sessions using the schema.
  7. Drops it.
  8. Recreates it with the same password and tablespace (the profile is hard coded to our standard; change it if needed).
  9. Imports the data using data pump.
  10. Reapplies permissions, if requested.
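
Here is a rough SQL sketch of what steps 4 and 6 through 8 boil down to.  MYUSER, USERS, TEMP and APP_PROFILE are placeholders, and the password-hash lookup varies by version (dba_users.password on 10g, sys.user$ on 11g and later):

-- step 4 (sketch): capture the destination schema's defaults and password hash
select default_tablespace, profile from dba_users where username = 'MYUSER';
select password from sys.user$ where name = 'MYUSER';

-- step 6 (sketch): generate kill statements for any sessions using the schema
select 'alter system kill session '''||sid||','||serial#||''' immediate;'
from v$session where username = 'MYUSER';

-- steps 7-8 (sketch): drop and recreate with the same password and tablespace
drop user MYUSER cascade;
create user MYUSER identified by values '<hash captured above>'
default tablespace USERS
temporary tablespace TEMP
profile APP_PROFILE;

The script wraps all of this, plus the Data Pump import itself and the optional grant export/reapply, behind the options shown in its usage output:
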
> ./schemarefresh.ksh -h

schemarefresh.ksh [[-hisdpflbgomx] [value]]

Examples:
schemarefresh.ksh -i MYDB -s MYUSER -d MYUSER -f myDPexportfile%U.dmp -l mylogfile.log
schemarefresh.ksh -i MYDB -s MYUSER -d MYNEWUSER -f myDPexportfile%U.dmp -l mylogfile.log -b -p
schemarefresh.ksh -i MYDB -s MYUSER -d MYNEWUSER -f myDPexportfile%U.dmp -l mylogfile.log -b -g -m /mydir/mydir -x 4 

 h = This help info.
 i = The database you are importing into.
 s = Source - the schema you exported from.
 d = Destination - the schema you are importing into - this may be the same as -s.
 If -s and -d are not the same, will add a remap_schema to the impdp command.
 f = Filename of the export file.  Include %U if you exported in parallel.
 l = Log filename for your import. Will be placed in the export directory.
 b = [optional] Include this if you do NOT want to back up the destination schema first.
 g = [optional] Include this to export grants and roles for the schema user beforehand,
 and reapply them afterward.
 p = [optional] Do not print the encrypted password to the screen.
 o = [optional] Include the full syntax for any other data pump options here (such as remap_tablespace).
 Exactly what you type will be added to the impdp command line.
 m = [optional] Specify a directory to import from that is different from the destination
 database's export directory. You must still specify filename (using -f).  
 x = [optional] Specify parallel degree.

Here's the script at pastebin.com, which allows for friendlier code formatting.


Index rebuilds in parallel change the parallel DEGREE

As many of the comments to this handy article echo, we got stung by this one today.  Definitely check out the article.  After a large data shuffle involving some new partitioning and index rebuilds, we saw a huge load on the server.  It didn't take long to see that we had a query using over 80 parallel threads.  Digging down further, it turned out the tables are set to a DEGREE of 1 but the indexes are set to DEGREE 8.  A quick alter index <name> noparallel; and the explain plan went back to normal, with no parallelism.  You might have to gather stats afterward to get the plan to change, but I didn't have to.

Note that if you have an IOT (index organized table) then you will need to alter the table to noparallel, not the index, even though the index will show up in dba_indexes.
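
For example, here is a quick way to check whether an index belongs to an IOT before deciding what to alter (MYSCHEMA, PK_MYTABLE and MYTABLE are placeholders):

-- does this index belong to an index-organized table?
select i.owner, i.index_name, t.table_name, t.iot_type
from dba_indexes i, dba_tables t
where t.owner = i.table_owner
and t.table_name = i.table_name
and i.owner = 'MYSCHEMA'
and i.index_name = 'PK_MYTABLE';

-- if IOT_TYPE comes back as 'IOT', alter the table rather than the index
alter table MYSCHEMA.MYTABLE noparallel;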

No parallel DML is enabled on any session (relevant since it was an UPDATE statement):

SQL> select distinct PDML_STATUS from v$session;

PDML_STA
--------
DISABLED

And the tables are not set to parallel degree > 1: 

1* select degree from dba_tables where 
lower(table_name)='mytable' and owner='MYSCHEMA'

SQL> / 

DEGREE
------------------------------
1

So no problems there ... BUT the indexes are set to parallel 8 ... 

1* select degree from dba_indexes where 
index_name='PK_MYTABLE' and OWNER='MYSCHEMA'

SQL> /

DEGREE
----------------------------------------
8

1* select count(*) from dba_indexes where degree <> 'DEFAULT' 
and to_number(degree) > 1 and owner='MYSCHEMA'

SQL> /

COUNT(*)
----------
703

Explain plan (abbreviated):

Operation                        Object
-------------------------------- -------------------
UPDATE STATEMENT
 UPDATE                          AGN_TOOL_VIEW_STATE
  PX COORDINATOR
   PX SEND (QC (RANDOM))         :TQ10004
    NESTED LOOPS
... etc.

Here is a query to generate noparallel statements for all the indexes that 
need fixing (with thanks to the article mentioned above for the original 
query showing which indexes had a degree > 1):

SELECT
'alter index '||OWNER||'.'||INDEX_NAME||' noparallel;'
FROM
DBA_INDEXES
WHERE
degree <> 'DEFAULT'
AND to_number(DEGREE) > 1
-- put any exclusions here
--AND INDEX_NAME NOT IN ('BINARY_SEARCH_INDEX','TEXT_SEARCH_INDEX','SIMPLE_SEARCH_INDEX') 
ORDER BY
INDEX_NAME
/
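
One way to apply the output (a SQL*Plus sketch; the spool filename is arbitrary) is to spool the generated statements to a file and then run it:

set pagesize 0
set linesize 200
set heading off
set feedback off
set trimspool on
spool fix_index_parallel.sql
-- run the query above here
spool off
set feedback on
set heading on
@fix_index_parallel.sql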

Data Guard standby redo logs / resizing the primary’s redo logs

Data Guard standby redo logs are worth a few comments.  The misconception that there must be exactly the same number of standby redo logs as primary redo logs comes up often.  In fact, it is the size that has to be the same; the standby should always have at least one more standby redo log group than the primary does.  Likewise, if you are using threads/RAC on the primary, you need one extra standby redo log group for each primary thread.

For example, a RAC database with three threads and 3 log groups in each thread would have 9 redo logs.  A physical standby hanging off this primary database would then need a minimum of 12 standby redo logs.  A logical standby would as well, but could be given even more standby redo logs to help with real-time log apply.
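
To sanity check the counts and sizes, something like this works (a sketch; run the first query on the primary and the second on the standby):

-- primary: online redo log groups and size per thread
select thread#, count(*) group_count, max(bytes)/1024/1024 mb
from v$log
group by thread#;

-- standby: standby redo log groups per thread (want one more per thread than the primary has)
select thread#, count(*) group_count
from v$standby_log
group by thread#;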

If you experience a large load/throughput on your primary database, and you do not have enough standby redo logs, you may run into this error:

RFS[89]: No standby redo logfiles of size 203806 blocks available

This is not to be confused with the error received when there is no standby redo log that matches the primary redo log’s size.

This section of the data guard documentation explains how to configure standby redo logs.

To see your standby redo logs, you use a special v$ view, v$standby_log.  The only thing about this view to note is that “UNASSIGNED” means the same thing as “INACTIVE” does in the v$log view of normal redo logs.
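
For example (a sketch):

select group#, thread#, sequence#, bytes/1024/1024 mb, status
from v$standby_log
order by thread#, group#;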

Recently due to log file switch contention, I had to increase the size and number of the primary’s redo logs, which then meant doing the same to the standby logs.  Since they are 10g logical standbys, they are open and also have their own redo logs.  Those logs did not have to be modified.

To make primary redo log changes:
-- prod and standbys: set STANDBY_FILE_MANAGEMENT to MANUAL so file changes
-- are not transferred to standbys
alter system set STANDBY_FILE_MANAGEMENT=MANUAL scope=both;
-- standbys: stop log apply
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
-- add new redo groups/logs - first a set of new ones, then log switch until
-- the database is using the new ones, then drop the old groups and replace
-- them with the new size/naming and the old group number
alter database add logfile group 4 ('/u03/oradata/MYDB/redo04.log') size 512M;
-- repeat until the new logs are active/current and the old ones are not
alter system switch logfile;
-- then:
alter system checkpoint;
select * from v$log;
-- can't drop a logfile group until it is archived
alter system archive log group X;
-- drop the old groups
alter database drop logfile group 1;
-- on the file system:
## clean up the old files
mv /u03/oradata/MYDB/redo101.log /u03/oradata/MYDB/old
-- back in sqlplus: re-add the dropped group with the new name and size
alter database add logfile group 1 ('/u03/oradata/MYDB/redo01.log') size 512M;

---------------------------------------------------------------------
-- logical standbys - run on each - run on standby db unless noted
---------------------------------------------------------------------

-- stop log apply again to be sure
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 101 
('/u03/oradata/MYDB/stbyredo101.log') SIZE 512M;
-- repeat log switches on PRODUCTION as necessary for standby redo
-- logs to become "unassigned"
alter system switch logfile;
-- remove one standby redo log at a time, then replace it with the
-- renamed log
ALTER DATABASE DROP STANDBY LOGFILE GROUP 10;

-- on the file system:
## clean up the old files
mv /u03/oradata/MYDB/redo10.log /u03/oradata/MYDB/old
-- rename the online redo files on the standby and add more logs to match prod
-- online redo in standbys is also overtaxed by reporting activity and log apply
alter database add logfile group 4 ('/u03/oradata/MYDB/redo04.log') size 512M;
-- issue this in the STANDBY db as often as needed to free up the online redo files
alter system switch logfile;
alter database drop logfile group 1;

alter database add logfile group 1
('/u03/oradata/MYDB/redo01.log') size 512M;
-- on the file system:
## clean up the old files
mv /u03/oradata/MYDB/redo101.log /u03/oradata/MYDB/old
-- in Production: set STANDBY_FILE_MANAGEMENT back to AUTO
alter system set STANDBY_FILE_MANAGEMENT=AUTO scope=both;

-- may want to restart the standbys here due to possible bug.
-- start log apply in standbys
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
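
Once apply is restarted, a quick sanity check on each logical standby (a sketch):

-- confirm SQL apply is running again
select realtime_apply, state from v$logstdby_state;
-- confirm the standby redo logs are cycling
select group#, thread#, status from v$standby_log order by group#;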

One minor tricky thing — while the add group syntax uses parentheses, the drop logfile member syntax does not:

ALTER DATABASE DROP LOGFILE MEMBER '/u03/oradata/MYDB/redo10.log';  -- does not work with ()