EM / phone home

I can’t get this out of my head, so hopefully posting it here will free my brain to enjoy the rest of my post-conf vacation in Vegas. Most databases are not in the DMZ. They can perhaps call out locally, but not to the Internet. This is not a “crazy” or rare setup, especially for companies with stringent security and auditing requirements. So why didn’t Oracle anticipate this? Why not create an application-level proxy server (free, hopefully) that could sit in a less-secure zone on a company’s network and ferry phone-home traffic from EM to Oracle’s site? Or at least provide enough openness for companies to do this themselves: don’t require online phone-home (it isn’t real-time anyway), allow flat-file dumps that a company can move around as they wish before transmitting them, and allow those dumps to be analyzed and customized to exclude inappropriate data.
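To make the idea concrete, here is a toy sketch of the flat-file flow I have in mind. Everything in it (field names, function names) is invented for illustration; this is not anything Oracle ships:

```python
import json

# Hypothetical sketch of the proposed flat-file flow: instead of EM posting
# directly to Oracle, dump the payload locally, scrub it, and let the
# security team move the file out through approved channels.
# All field names here are made up for illustration.

SENSITIVE_KEYS = {"hostname", "ip_address", "db_unique_name"}

def scrub(payload: dict) -> dict:
    """Drop keys a site might consider inappropriate to transmit."""
    return {k: v for k, v in payload.items() if k not in SENSITIVE_KEYS}

def dump_for_review(payload: dict, path: str) -> None:
    """Write the scrubbed payload to a flat file for offline review/transfer."""
    with open(path, "w") as f:
        json.dump(scrub(payload), f, indent=2, sort_keys=True)
```

The point is just the shape of the thing: a scrub step the customer controls, and a file on disk they can inspect and move at their own pace.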

Could be that some of this is available, and I just don’t know about it.

IOUG Collaborate – Thursday

Highlight of the day was the global launch of Enterprise Manager 11.  Well, the actual highlight was the elite DBA debate over whether to upgrade to 11g or not.  Nice job folks!  Anyway, back to EM – cool stuff, and going in an even cooler direction.  See lots more info at Oracle’s site.

The new EM promises to be extremely cool and useful.  Two tidbits that drew a lot of complaints from attendees:

1) Licenses that you already own may incur a new cost when upgrading, even if you have premium support.  The reason given for this was that a lot of work went into the products.

2) Phone-home capabilities are getting more and more crucial to using EM effectively. They can still be turned off, as in previous versions, but many people put their hands up and said, “There’s no way to implement this; my servers are not allowed to talk to the Internet, period.”

Oracle is trying to provide a complete stack management tool with business logic, phone-home proactive monitoring, etc.  This is not the EM DBAs are used to, although the DBA bits remain largely the same, or the same but enhanced.  This is a top-to-bottom tool to manage the entire stack, right down to OS, SAN, and server buildout.  We already knew that EM could monitor PeopleSoft, but this is clearly an attempt to make it a tool that is perfectly tailored to managing your application, database, and OS/disk tiers.

A few notes:

– there are no new packs

– phone-home data will generate configuration issue analysis based on specific config, delivered to your EM interface

– “Ops Center” – for the infrastructure stack (built by Sun – it was an existing Sun product and has been around for quite a while, so this is more of an integration with EM than a new product)

– can deploy Sun servers

New:

– real user experience monitoring

– sounded like Coradiant or other packet-sniffing, real-user-experience platforms … the difference is that this integrates into EM, so you can tie system events to user events.  It is smart about products like PeopleSoft, so you don’t just see random long URLs; you see PeopleSoft-aware module/page info and tuning advice

– has web-to-db tracing of all sorts of tech (J2EE, PeopleSoft); it is tier-aware and has automatic SLAs

– seems to go as far, if you want, as “see problem – alert – diagnose – deploy new virtual servers to mitigate” … Or rollback java code … Etc

Basically, if you buy all the way in, it looks amazing. Not a decision we are likely to make now, since it probably means invalidating how we use a few big current non-Oracle tools. But something to watch.

Shoutout to Oracle for the Ed Muntz videos.  Not all the Oracle videos have hit the funny bone, but these are just enough cornball without being silly.

– add-on: Application Testing Suite – automated script generation – doesn’t look like too many PeopleSoft-specific goodies yet – similar to Mercury LoadRunner

– standby db tuning added

– RAT (Real Application Testing) improvement – capture a large load and replay only part of it; previously you had to replay the whole capture

– patch recommendations WITH integrated community download stats, reviews, discussion of issues experienced, etc. – cool!!! Basically, this integrates the community into patch and fix work, all in EM. If a patch isn’t available you can request it via EM without a call to support(!), and you can also roll your own patches (you choose) with conflict detection

IOUG Collaborate – Tues aft and Wed

Somehow my Tuesday afternoon notes disappeared.  Crappy.  I attended a session on tuning RAC performance, and one on using a free built-in Oracle tool to create a dblink from Oracle to SQL Server (suitable for smaller SQL Server datasets only, but a good tool in the toolbox nonetheless).  There was an hour off for the exhibit hall.  I just typed a lengthy paragraph about my chat with NetApp about SSD, but the stupid Luxor ate it when I hit save, since I guess my daily Internet lease expired.  The ONE time I don’t select-all/copy before hitting submit … argh.

Wednesday was another packed day.  It was mostly full of exposure sessions for me, i.e. “I should know something about this aspect,” rather than learning more deeply about things we could use or do use right now, so I took fewer notes.

Attended a session on the TimesTen in-memory database by Susan Cheung from Oracle.  Interesting product; not something we need, but good to understand how it works.  Then, Oracle Advanced Compression and Hybrid Columnar Compression by Bill Hodak from Oracle.  Again, good presentation, interesting info.  He mentioned that he had done research and found that most companies have non-prod data that is 5-10x the size of their production data.  We see at least 5x, so it is good to know we are not alone.  (This also says something subtle about limited QA data sets, etc. — to get that much duplication, the vast majority of companies must still be just making copies of Production to test against.  Not surprising.)  He also said the average data growth per year is 20-40%.
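Those two numbers compound in an ugly way when you do the capacity math. A quick back-of-envelope (my own arithmetic, not from the presentation; the inputs are made-up examples):

```python
# If non-prod copies run at some multiple of production size and data grows
# at a compound annual rate, the total footprint after n years is:
#   prod_tb * (1 + nonprod_multiple) * (1 + annual_growth) ** years

def projected_footprint_tb(prod_tb: float, nonprod_multiple: float,
                           annual_growth: float, years: int) -> float:
    """Total prod + non-prod storage after `years` of compound growth."""
    return prod_tb * (1 + nonprod_multiple) * (1 + annual_growth) ** years

# e.g. 2 TB of production, 5x non-prod, 30%/yr growth, 3 years out:
# 2 * 6 * 1.3**3 = 26.364 TB
```

Which is exactly why compression (and thinner non-prod copies) keeps coming up.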

Then hit another Mike Messina/TUSC presentation, Centralize Your AWR Data for Better Analysis.  Excellent again.  Between Tim Quinlan’s AWR trending stuff and Mike’s AWR centralization stuff, I know what I’m coding in my ‘spare’ time for the foreseeable future.  Very jazzed to get home and start.  (Can’t hit it now … 4 days of vaca coming, spouse is joining me, and I will get smacked if I try to code while on vaca.  😉   My only question is – how soon is Oracle going to notice everyone centralizing their AWR data, and build it as an EM pack?

Met up with a guy from my city whom I met on Tuesday; we had lunch and ended up talking database right through the next session.  Smart guy; hopefully we’ll be in touch again.  Next, hit Virtual Private Database and application contexts with Robert Corfman from Boeing.  Very detailed.  I like presentations like this that show actual commands run in sqlplus; it puts my brain in the right mode to remember commands and understand how they work and what their output means.  I had not dug into this before, and the presenter had clearly tested out a lot of poorly or unclearly documented things about how it works.  This may have some applications for us in non-production environments; not sure.  Good to know about and be able to throw out as an option to projects.

Finally, hit DBA 2.0: The Future of Database Management with Mughees Minhas from Oracle.  I was sorry to miss the Amazon Data Guard presentation, but this one was pretty darn funny.  I laughed a lot.  He had prepared an entire dramatization showing DBA 1.0 (scripts) versus DBA 2.0 (EM) and had them duke it out onstage to solve the same problem with their favorite tools — with a six-minute time limit.  Of course it was staged, but it was pretty believable and exactly what my coworkers and I think about EM vs. scripts — use the best tool for the job.  The fact is, graphs aren’t just pretty or easy.  (Well, they are easy, but that’s not a bad thing.)  Graphs take related information, sometimes on many vectors, and put it in one place where the human brain can easily spot sudden changes, spikes, and other trends.  When we have a performance problem it is the FIRST thing we look at, including our senior DBA who has 10 years of Oracle experience.  He told all of us: use EM but don’t be totally reliant on it.  Know the scripts, have your own set of scripts for when you don’t have EM.  Hit the “show sql” button in EM whenever you can and just read it.

Falling asleep, so done for tonight.

IOUG Collaborate – Tuesday morning

After yesterday’s maelstrom of learning, I never thought I’d say this, but it felt like there were too many breaks today.  I get it – exhibitors need their time to talk to us – but I was champing at the bit for the next session.  🙂

Started the day with Arup Nanda and Performance Tuning in RAC.  Since we are moving away from RAC back to bigger servers, this was mostly for my own edification, and he didn’t disappoint.  Picked up a better understanding of the underpinnings of why certain RAC waits happen.  I had seen all of them before, had some idea of why they happened, had researched them before, etc., but he was great at making the process very tangible.  Interesting tidbit: when one instance requests a dirty block held by another node, RAC forces a log file sync on the holding node first, so the redo protecting that block is on disk before the block is shipped.  That makes I/O important for these syncs (you still need fast disk, at least for redo).  Log flush waits can in turn lead to buffer busy waits (the flush has to finish before the block can be loaded from disk into the buffer).

Went to Tim Quinlan’s AWR workshop, where he gave an intro to AWR, predominantly through sqlplus rather than EM, and then shared all of the scripts he has customized to create an ADDM/AWR-based trending system.  Thanks Tim – now I know what I’m doing for my ‘special project’ / objective for the rest of the year.  Last year I built a bunch of SAN growth trending graphs in Java for our Oracle systems, as well as an ETL script in Perl to get the data from NMS and bring it into a central database so it could be graphed.  We already have CPU/memory/I/O trending via NMS and EM, and I have been wanting to add capacity reporting about more than just disk to my system, with more ‘meaning’ to it.  His AWR trending scripts seem like a great opportunity to expand what my system can do.
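One core transform any AWR trending system needs, whatever language you build it in: most of the DBA_HIST_* counters are cumulative since instance startup, so trending means differencing consecutive snapshots and discarding the interval after an instance restart (where the counter resets). A minimal sketch of that logic, with an invented row format for illustration:

```python
# rows: list of (snap_id, cumulative_value) ordered by snap_id.
# Returns (snap_id, delta) pairs for each interval, skipping intervals
# where the counter went backwards (instance restart between snapshots).

def snapshot_deltas(rows):
    """Turn cumulative AWR-style counters into per-interval deltas."""
    deltas = []
    for (_, prev), (snap_id, cur) in zip(rows, rows[1:]):
        if cur >= prev:                      # normal case: counter grew
            deltas.append((snap_id, cur - prev))
        # else: counter reset (restart); this interval is unusable
    return deltas
```

Get that part wrong and every graph shows a huge negative spike at each bounce, which I suspect is one of the “unclearly documented things” everyone rediscovers the hard way.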

Shoutout to the Collaborate 10 planning committee for great vegetarian options.  I’m not a veggie but I prefer to eat veg if they’re around, and they have been tasty and have gone beyond the usual “here’s some pasta salad” fare.

More tonight after afternoon sessions.

IOUG Collaborate – Monday

First day of IOUG was great!  Lots of good sessions, saw some interesting products in the Hall, learned a lot.

In particular Michael Messina from TUSC led a great session on 11g RMAN features; he is a very dynamic presenter. The features that will be the most useful for our shop are:

  • defining sections for large datafiles – these are like ‘sub-channels’: through 10g you can define multiple channels, but each datafile is still backed up by a single thread; now you can back up large datafiles in parallel as well, using sections
  • fast incremental backup of physical standbys (they can now use block change tracking) … this is above and beyond the loveliness that is a readable physical standby (says the guy who works with logical standbys a fair amount)
  • block repair of production from physical standby

If you have the disk space, a physical standby of every database is starting to sound like a Good Thing™.  It sounds like there are cool improvements to duplicating databases and other things, but we don’t use those as much. (I think many of these new features are discussed here.)  Side note: he took an informal poll of a packed room of DBAs on who was using flashback.  Looked like about 1/3 to me.  The #1 gripe was the disk space to run it.  From the number of people who said they had turned it on and then back off again, I think a decent formula to estimate FRA size before turning it on, as well as a better ROI case for why it is needed, is crucial to people feeling confident about moving forward with flashback.
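On the FRA-sizing gripe: flashback log volume roughly tracks how many blocks you change over the retention window, so even a crude estimator beats guessing. A back-of-envelope sketch (my own rough model, not an Oracle formula; once the feature is on, V$FLASHBACK_DATABASE_LOG gives a real estimate, and the overhead factor below is a guess for illustration):

```python
# Crude pre-enablement estimate of flashback log space:
#   changed GB/hour * retention hours * fudge factor for overhead.
# Archived redo also lives in the FRA, so budget beyond this number.

def flashback_space_gb(changed_gb_per_hour: float,
                       retention_hours: float,
                       overhead: float = 1.3) -> float:
    """Rough flashback log space needed for the retention target."""
    return changed_gb_per_hour * retention_hours * overhead

# e.g. 0.5 GB/hr of changed blocks with 24-hour retention:
# 0.5 * 24 * 1.3 = 15.6 GB of flashback logs
```

Even a number this rough would let people size the FRA before flipping the switch, instead of turning flashback on, running out of space, and turning it back off.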

Daniel Stober did a great job with his SQL Brainteaser session; it isn’t easy to put your code out there as a good way of doing something and he did it with openness to new options and some interesting questions.  His original idea to release a SQL puzzle each month to help foster better SQL writing among developers is one that I may take home and see if we can get started.

Gary Gordhamer explained NLS very well; I understand it much better than I did before.  I will probably suggest that we change our standard for how we create new databases based on what I learned.  One tidbit: WAR files created on developers’ Windows desktops can carry the NLS_LANG settings from the Windows machine and hose things up royally, as in data-gone-for-good royally.