How to generate subset out of Real Application Testing captures

Originally posted on “Databases at CERN” blog

I’ve already mentioned the very useful Consolidated Database Replay feature on this blog, for example while testing the performance impact of unified auditing (Unified auditing performance) or while investigating problems with a hanging workload capture (Starting workload capture hangs – is it really problem with RAT?). But only recently I found that, along with this functionality, an additional and again very handy capability was introduced, allowing you to create a subset from an already captured workload.

Continue reading “How to generate subset out of Real Application Testing captures”


XFS on RHEL6 for Oracle – solving issue with direct I/O

Originally posted on “Databases at CERN” blog

Recently we were refreshing our recovery system infrastructure, moving automatic recoveries to new servers with a big bunch of disks directly attached to each of them. Everything went fine until we started to run recoveries – they were much slower than before, even though they were running on more powerful hardware. We started an investigation and found some misconfigurations, but even after correcting them, the performance gain was still too small.

Continue reading “XFS on RHEL6 for Oracle – solving issue with direct I/O”

How to create your own Oracle database merge patch

Originally posted on “Databases at CERN” blog

A little bit scary title, isn’t it? Please keep in mind that this is definitely neither a supported nor an advised method of solving your problems, and you should be really careful while doing it – hopefully not on a production environment. But it may sometimes happen that you end up in a situation where creating your own merge patch for an Oracle database is not as crazy an idea as it sounds :).

Continue reading “How to create your own Oracle database merge patch”

Oracle 12c – causing problem by solving it!?!

Originally posted on “Databases at CERN” blog

Regular readers of our blog probably already know that for most of our databases we’re using two storage layers to keep our backups – NAS volumes as the primary layer and tapes as the secondary one – please check Datafile without backups – how to restore? for more details.

Continue reading “Oracle 12c – causing problem by solving it!?!”

Nuances of Oracle Managed Files (OMF) and RMAN

Originally posted on “Databases at CERN” blog

Oracle Managed Files (OMF) have many advantages, but the fact that such files can coexist in the same database with manually added (and named) ones can sometimes lead to confusion. The situation is made worse by the fact that there is no straightforward way (at least none that I’m aware of… or rather was – please check the comment by Mikhail Velikikh visible on CERN’s blog) to tell whether a file is Oracle managed or not. The Oracle documentation seems to confirm this:

The database identifies an Oracle managed file based on its name.
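To illustrate that naming convention, here is a minimal Python sketch of a heuristic check. The pattern is an assumption based on the commonly seen OMF datafile form (`o1_mf_<tablespace>_<unique>_.dbf`), and the file names below are made-up examples, not taken from the post:

```python
import re

# Heuristic only: OMF datafiles typically look like o1_mf_<tsname>_<unique>_.dbf.
# The pattern is an assumption based on the common OMF naming convention; it is
# not an authoritative test (as the docs say, the name is the only hint).
OMF_DATAFILE = re.compile(r"o1_mf_\w+_[a-z0-9]+_\.dbf$")

def looks_like_omf(path: str) -> bool:
    """Return True if the file name matches the typical OMF datafile pattern."""
    return bool(OMF_DATAFILE.search(path))

# Hypothetical file names:
print(looks_like_omf("/u01/oradata/ORCL/datafile/o1_mf_users_abc123x_.dbf"))  # True
print(looks_like_omf("/u01/oradata/ORCL/users01.dbf"))                        # False
```

Since the name is the only signal, a check like this can misfire on a manually created file that happens to follow the same convention – which is exactly the confusion the post is about.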

Continue reading “Nuances of Oracle Managed Files (OMF) and RMAN”

Which shared memory segments belong to my database instance?

Originally posted on “Databases at CERN” blog

I’ve already described how important it is to test your backup strategy and restore/recovery procedures, but while doing so you may of course encounter problems not really related to recoverability as such. Recently, we hit such a problem on our recovery server, at the very beginning of an automatic restore (database name masked):
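On Linux, one starting point for answering the question in the title is `ipcs -m`. The sketch below parses its typical output to list segments owned by a given user – the sample output layout and the `oracle` owner are assumptions for illustration, and mapping a segment to a specific instance needs more than this (e.g. Oracle’s sysresv utility):

```python
# Minimal sketch: filter shared memory segments by owner from `ipcs -m` output.
# The sample below assumes the typical util-linux column layout:
#   key shmid owner perms bytes nattch [status]
def segments_for_owner(ipcs_output: str, owner: str):
    """Return (shmid, bytes) tuples for segments owned by `owner`."""
    result = []
    for line in ipcs_output.splitlines():
        parts = line.split()
        # Data lines start with a hex key; headers and separators do not.
        if len(parts) >= 6 and parts[0].startswith("0x"):
            key, shmid, seg_owner, perms, nbytes = parts[:5]
            if seg_owner == owner:
                result.append((int(shmid), int(nbytes)))
    return result

# Hypothetical `ipcs -m` output:
sample = """\
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00000000 98304      oracle     640        4096       0
0x77e13f44 131073     oracle     660        2147483648 40
0x0052e2c1 163842     postgres   600        56         6
"""
print(segments_for_owner(sample, "oracle"))  # [(98304, 4096), (131073, 2147483648)]
```

With several instances running under the same OS user, owner alone is not enough – hence the question this post digs into.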

Continue reading “Which shared memory segments belong to my database instance?”

Starting workload capture hangs – is it really problem with RAT?

Originally posted on “Databases at CERN” blog

If you plan to introduce changes in your environment and want to estimate their impact, the Real Application Testing feature seems to be one of the best options. As we needed to check the influence of changes planned for our databases, I started looking for good candidates for capturing workloads. I wanted to capture only workloads associated with a small number of schemas, but from several databases, to be able to properly simulate as many types of the production workloads existing in our databases as possible. This strategy would also allow us to use/test the new Consolidated Replay feature of RAT. An ideal set of schemas should be responsible for quite a big amount of workload in the database, but should not be too big in terms of space – so that it is easily exportable using EXPDP with the FLASHBACK_SCN option.
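For context, a schema-level consistent export of the kind described above could look roughly like this. SCHEMAS, DIRECTORY, DUMPFILE and FLASHBACK_SCN are standard Data Pump parameters, but the concrete schema name, directory, dump file and SCN below are made-up placeholders; the sketch just assembles the expdp arguments:

```python
# Sketch: assemble an expdp command line for an export consistent as of an SCN.
# All concrete values are hypothetical placeholders for illustration.
def expdp_args(schemas, directory, dumpfile, flashback_scn):
    return [
        "expdp",
        f"SCHEMAS={','.join(schemas)}",
        f"DIRECTORY={directory}",
        f"DUMPFILE={dumpfile}",
        f"FLASHBACK_SCN={flashback_scn}",  # export is consistent as of this SCN
    ]

print(" ".join(expdp_args(["APP_OWNER"], "DATA_PUMP_DIR", "app_owner.dmp", 123456789)))
```

Keeping the chosen schemas small in terms of space is what makes such a consistent export practical alongside the workload capture.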

Continue reading “Starting workload capture hangs – is it really problem with RAT?”