Potential of “import catalog” command

Originally posted on “Databases at CERN” blog

Since version 11.1 of Oracle Database, there is a very useful command available that allows DBAs to easily move RMAN recovery catalog schemas between databases. Its functionality is even broader, as it also allows one catalog schema to be merged into another – either the whole schema or just the metadata of chosen databases. The command I’m writing about is, of course, IMPORT CATALOG, which I recently had a chance to use to move our recovery catalog to a new database. It was moved from an Oracle 11.2.0.3 database to 11.2.0.4, with the recovery catalog schema version shown in the full post.
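For illustration, here is a minimal sketch of how the command is used (the connect strings and schema names are made up, and the DB_NAME values are placeholders for your own databases):

    RMAN> CONNECT CATALOG rco_new/password@newcatdb
    RMAN> IMPORT CATALOG rco_old/password@oldcatdb;
    RMAN> IMPORT CATALOG rco_old/password@oldcatdb DB_NAME = 'DB1', 'DB2';

The first IMPORT CATALOG merges the whole source schema into the catalog you are connected to; the second imports only the metadata of the listed databases. By default the imported databases are unregistered from the source catalog – add NO UNREGISTER if you want to keep them registered there as well.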

Continue reading “Potential of “import catalog” command”

Datafile without backups – how to restore?

Originally posted on “Databases at CERN” blog

Have you ever had a problem with restoring datafiles without any backups available? It’s easy, of course, provided you have all the archived logs generated since the datafile was created. Please check the details here: Re-Creating Data Files When Backups Are Unavailable. Moreover, RMAN is clever enough to create an empty datafile automatically during the restore phase and then recover it using archived logs. So far, so good, but…
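As a minimal sketch of that scenario, assuming the lost datafile is number 7 (the file number is purely illustrative) and the database stays open:

    RMAN> SQL 'ALTER DATABASE DATAFILE 7 OFFLINE';
    RMAN> RESTORE DATAFILE 7;
    RMAN> RECOVER DATAFILE 7;
    RMAN> SQL 'ALTER DATABASE DATAFILE 7 ONLINE';

With no backup available, the RESTORE step re-creates an empty datafile, and RECOVER then applies all archived logs generated since the datafile was created.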

Continue reading “Datafile without backups – how to restore?”

Importance of testing your backup strategy

Originally posted on “Databases at CERN” blog

Most of you surely know that the ability to restore data in case of failure is an essential skill for every DBA. You should always be able to restore and recover the data you’re responsible for. This is an axiom. To be sure that you’re able to do it, you should test it on a regular basis. There is, of course, the possibility to use some Oracle features, like the BACKUP ... VALIDATE or RESTORE ... VALIDATE commands, but if you want to be as certain as possible, the only way is to run a real restore and recovery. Doing it periodically for a big number of databases is extremely tough, both because of the resources needed and the DBA time involved. That’s why one of our DBAs, Ruben Gaspar Aparicio, has created a recovery system which is heavily used at CERN. Good news – it is available as open source on SourceForge (Recovery Platform). Since its release we’ve introduced many modifications to it, but it could still be a good starting point to check the source code in order to start developing your own solution. We’re using it to validate our backup strategy, running a real restore and recovery every week or two for most of our databases.
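For reference, the validation-only checks mentioned above look roughly like this – they read datafiles and backups to detect corruption or missing pieces, but they do not prove that a full restore would actually succeed:

    RMAN> BACKUP VALIDATE DATABASE ARCHIVELOG ALL;
    RMAN> RESTORE DATABASE VALIDATE;

The first command scans the live datafiles and archived logs without producing a backup; the second checks that the backups needed for a database restore are present and usable.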

Continue reading “Importance of testing your backup strategy”

Backups in Data Guard environment

Originally posted on “Databases at CERN” blog

Physical standby databases seem to be ideal candidates for offloading backups from primary ones. Instead of “wasting” standby resources (unless you’re already using Active Data Guard, for example), you could avoid affecting primary performance while backing up your database, especially if your storage is under heavy load even during normal (user- or application-generated) workload. So, if you’re looking for good reasons to convince your boss/finance department/etc. that having standby database(s) is a must in your environment, the ability to offload backups from primary databases would surely be an important one (apart from the usual ones related to disaster recovery, etc.).
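As a rough sketch of such a setup (the connect strings are illustrative), backups are simply taken while connected to the standby as the target database, with a recovery catalog so that the backups are also visible to the primary:

    RMAN> CONNECT TARGET sys/password@standby_db
    RMAN> CONNECT CATALOG rco/password@catalog_db
    RMAN> BACKUP DATABASE;
    RMAN> BACKUP ARCHIVELOG ALL;

The datafile reads then happen on the standby storage, leaving the primary I/O untouched.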

Continue reading “Backups in Data Guard environment”

How to verify if archived log deletion policy is correctly applied?

Originally posted on “Databases at CERN” blog

What is the best way to handle archived log deletion in environments with standby and downstream capture databases? One could use one’s own scripts to delete, for example, all backed-up archived logs older than n days. But a better way is to set an RMAN archived log deletion policy, because then additional options can be specified to delete archived logs which are not only BACKED UP N TIMES, but also APPLIED or SHIPPED to other databases in the environment. Then, with proper settings, we should not end up with a standby database that needs an already deleted archived log… unless, of course, there are some bugs causing problems with the correct handling of archived log deletion – so it’s a good idea to double-check your configuration before real deletion occurs, which usually happens when there is space pressure in the FRA.
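A hedged example of such a configuration, together with one way to see what RMAN would currently consider deletable (the copy count and device type are just placeholders for your own backup strategy):

    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DEVICE TYPE DISK;
    RMAN> SHOW ARCHIVELOG DELETION POLICY;
    RMAN> DELETE ARCHIVELOG ALL;

Run without NOPROMPT, the DELETE command first lists the archived logs it considers eligible under the policy, so you can review the list and answer NO before anything is actually removed.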

Continue reading “How to verify if archived log deletion policy is correctly applied?”

Unified Auditing performance

Originally posted on “Databases at CERN” blog

In my previous blog post (Migrating to Oracle Database 12c – what to do with auditing?) I provided a number of reasons why unified auditing looks very promising and should be seriously considered when migrating to 12c. Nonetheless, I did not talk at all about performance – which also seems to be greatly improved.
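As a quick aside, one standard way to check whether pure unified auditing is enabled in a 12c database is:

    SQL> SELECT value FROM v$option WHERE parameter = 'Unified Auditing';

A value of TRUE means the binary was relinked with pure unified auditing; FALSE means the database still runs in mixed mode.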

Continue reading “Unified Auditing performance”

Starting workload capture hangs – is it really a problem with RAT?

Originally posted on “Databases at CERN” blog

If you plan to introduce changes in your environment and want to estimate their impact, the Real Application Testing feature seems to be one of the best options. As we needed to check the influence of changes planned for our databases, I started to look for good candidates for workload capture. I wanted to capture only workloads associated with a small number of schemas, but from several databases, to be able to simulate as many types of the production workloads existing in our databases as possible. This strategy would also allow us to use/test the new Consolidated Replay feature of RAT. The ideal set of schemas should be responsible for quite a big share of the workload in the database, but should not be too big in terms of space – so that it can easily be exported using EXPDP with the FLASHBACK_SCN option.
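As a rough sketch of capturing only selected schemas (the schema, capture, and directory names below are purely illustrative), filters are defined before the capture is started and the default action is set to exclude everything else:

    SQL> exec DBMS_WORKLOAD_CAPTURE.ADD_FILTER(fname => 'only_app1', fattribute => 'USER', fvalue => 'APP1');
    SQL> exec DBMS_WORKLOAD_CAPTURE.START_CAPTURE(name => 'app1_capture', dir => 'CAPTURE_DIR', default_action => 'EXCLUDE');

CAPTURE_DIR is assumed to be an existing directory object; with default_action => 'EXCLUDE', only the calls matching the USER filter end up in the capture.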

Continue reading “Starting workload capture hangs – is it really a problem with RAT?”