This is worth a post, despite my dearth of time to post lately. I just had an interesting issue and found the perfect blog article to resolve it, so I definitely need to send a shout-out to the author, Andy Black.
In a nutshell, GoldenGate v11 now uses the same sort of "delete only when applied" protection for archive logs that logical standbys do; v10 does not. If you create an extract and never delete it, GoldenGate tells the database those archive logs are still needed, and RMAN won't delete them during a backup. I did have the extract enabled for a while, and I think that's also part of creating this situation, not just having an unused extract sitting there.
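If you suspect this is what's happening, here's a quick sanity check (a sketch; it assumes the extract was registered as a capture process, which is what makes the database hold onto the logs):

```sql
-- The capture process's required checkpoint SCN is what tells the
-- database which archive logs are still "needed".
SELECT capture_name, status, required_checkpoint_scn
  FROM dba_capture;

-- Any archived log at or past that SCN is considered required,
-- and RMAN will decline to delete it.
SELECT sequence#, first_change#, next_change#
  FROM v$archived_log
 WHERE next_change# > (SELECT MIN(required_checkpoint_scn) FROM dba_capture);
```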
This can of course cause no end of confusion if you have a primary database with both Data Guard and GoldenGate running and you don't know that GG v11 made this change. I was baffled as to why my logical standbys weren't telling the primary that they were long done with these logs. Turns out GG was the holdup.
Andy provides more than enough info to fix the problem, and I also left a comment there about a specific situation and how to fix it (quick ref: make sure to DBLOGIN before you delete an extract, or it will leave a record in dba_capture and you'll end up with RMAN-08137 errors).
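For reference, the ggsci sequence looks roughly like this (the user and extract names here are placeholders, not from my actual setup):

```
GGSCI> DBLOGIN USERID ggadmin, PASSWORD ********
GGSCI> DELETE EXTRACT myext
```

The DBLOGIN is the key part: it lets ggsci unregister the capture process in the database, not just remove the extract from GoldenGate's own config.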
Gotta love Oracle / DBA work, learn something new every day.
UPDATE 12/14: we hit the same problem, but in a restored non-prod copy of the database. Even after installing GoldenGate and trying to use ggsci to remove the extract, it wouldn't budge from dba_capture, so I opened an Oracle SR. They came back with support note 1351352.1 and a procedure, DBMS_CAPTURE_ADM.DROP_CAPTURE, to remove the entries from dba_capture manually. I used the following to generate the commands:
select 'exec DBMS_CAPTURE_ADM.DROP_CAPTURE ('''||capture_name||''');' from dba_capture;
then ran whichever lines of the output corresponded to the capture processes that needed to be removed.
First day of IOUG was great! Lots of good sessions, saw some interesting products in the Hall, learned a lot.
In particular Michael Messina from TUSC led a great session on 11g RMAN features; he is a very dynamic presenter. The features that will be the most useful for our shop are:
- defining sections for large datafiles: these act like "sub-channels." Through 10g you can define multiple channels, but each individual datafile is still backed up by a single thread; in 11g you can back up one large datafile in parallel by splitting it into sections
- fast incremental backup of physical standbys (block change tracking now works on the standby) … this is above and beyond the loveliness that is a readable physical standby (says the guy who works with logical standbys a fair amount)
- block repair of production from physical standby
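For flavor, the 11g syntax for these looks roughly like the following (the datafile, block, and file names are made up for illustration, not from the session):

```
RMAN> # back up one big datafile in parallel chunks via sections
RMAN> BACKUP SECTION SIZE 2G DATAFILE 7;

RMAN> # block media recovery; 11g can pull good blocks from a physical standby
RMAN> RECOVER DATAFILE 7 BLOCK 42;

SQL> -- enable block change tracking (now usable on a physical standby)
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/app/oracle/bct.f';
```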
If you have the disk space, a physical standby of every database is starting to sound like a Good Thing (tm). There were also cool improvements to duplicating databases and other things, but we don't use those as much. (I think many of these new features are discussed here.)

Side note: he ran an informal poll in a packed room of DBAs on who was using flashback. Looked like about 1/3 to me. The #1 gripe was the disk space it takes to run. Given how many people said they had turned it on and then back off again, I think people need both a decent formula to estimate FRA size before enabling it and a better ROI story for why it's needed before they'll feel confident moving forward with flashback.
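On the sizing gripe: Oracle does publish its own estimate, but only after flashback has been running for a while, which is a bit chicken-and-egg. A sketch of the query:

```sql
-- ESTIMATED_FLASHBACK_SIZE is Oracle's guess at the space needed to
-- meet the current retention target, based on the workload seen so far.
SELECT retention_target,
       flashback_size / 1024 / 1024           AS current_mb,
       estimated_flashback_size / 1024 / 1024 AS estimated_mb
  FROM v$flashback_database_log;
```

One approach is to enable it in a load-test or non-prod copy first and let this view size the FRA before committing in production.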
Daniel Stober did a great job with his SQL Brainteaser session; it isn’t easy to put your code out there as a good way of doing something and he did it with openness to new options and some interesting questions. His original idea to release a SQL puzzle each month to help foster better SQL writing among developers is one that I may take home and see if we can get started.
Gary Gordhamer explained NLS very well; I understand it much better than I did before. I will probably suggest that we change our standard for how we create new databases based on what I learned. One tidbit: WAR files built on developers' Windows desktops can carry the NLS_LANG settings from the Windows machine and hose things up royally, as in data-gone-for-good royally.
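One quick check worth doing (a sketch; the server can't see a client's NLS_LANG directly, but you can at least confirm the database character set that client data gets converted into):

```sql
SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
```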