Monitor backup jobs using RMAN views

Proper monitoring of backup jobs is one of the crucial elements in ensuring that your databases are well protected against data loss. You could, for example, grep your backup logs or build some kind of alerting into your backup scripts, but the most obvious method is simply to use the Oracle dictionary views.
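
As a minimal illustration of that approach, a query along the lines of the one below lists recent backup jobs together with their status, duration and volume, using V$RMAN_BACKUP_JOB_DETAILS; the seven-day window is just an arbitrary choice for this sketch:

    -- recent RMAN backup jobs: status, duration and volume
    -- (the 7-day window is an arbitrary example)
    SELECT session_key,
           input_type,
           status,
           TO_CHAR(start_time, 'YYYY-MM-DD HH24:MI') AS started,
           TO_CHAR(end_time,   'YYYY-MM-DD HH24:MI') AS finished,
           elapsed_seconds,
           input_bytes_display,
           output_bytes_display
      FROM v$rman_backup_job_details
     WHERE start_time > SYSDATE - 7
     ORDER BY start_time;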

Continue reading “Monitor backup jobs using RMAN views”

Oracle 12c – causing a problem by solving it!?!

Originally posted on “Databases at CERN” blog

Regular readers of our blog probably already know that for most of our databases we’re using two storage layers to keep our backups – NAS volumes as the primary layer and tapes as the secondary one – please check Datafile without backups – how to restore? for more details.
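
A very rough sketch of that kind of two-layer setup could look like the RMAN snippet below; the disk path and the tape (SBT) channel are purely hypothetical, since the media manager configuration is site specific:

    # first layer: back up to a disk location (a NAS-mounted path in our case;
    # the path below is just a placeholder)
    RUN {
      ALLOCATE CHANNEL d1 DEVICE TYPE DISK FORMAT '/backup/nas/%U';
      BACKUP DATABASE PLUS ARCHIVELOG;
    }

    # second layer: copy the existing backup sets from disk to tape via SBT
    RUN {
      ALLOCATE CHANNEL t1 DEVICE TYPE SBT;
      BACKUP BACKUPSET ALL;
    }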

Continue reading “Oracle 12c – causing a problem by solving it!?!”

Potential of “import catalog” command

Originally posted on “Databases at CERN” blog

Since version 11.1 of the Oracle database there has been a very useful command available, allowing DBAs to easily move RMAN recovery catalog schemas between databases. Its functionality is even broader, as it also allows one catalog schema to be merged into another – either the whole schema or just the metadata of chosen databases. The command I’m writing about is, of course, IMPORT CATALOG, which I recently had a chance to use to move our recovery catalog to a new database. It was moved from an Oracle 11.2.0.3 database to 11.2.0.4, with the recovery catalog schema using the version below:

Continue reading “Potential of “import catalog” command”
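
In broad strokes, the move described in that post boils down to something like the sketch below; all the schema names and connection strings are placeholders, not our real ones:

    # connect RMAN to the destination recovery catalog (placeholder credentials)
    CONNECT CATALOG rco_new/***@catdb_new

    # merge the whole source catalog schema into the current one
    IMPORT CATALOG rco_old/***@catdb_old;

    # or import only the metadata of selected databases, leaving them
    # registered in the source catalog as well
    IMPORT CATALOG rco_old/***@catdb_old DB_NAME = testdb1, testdb2 NO UNREGISTER;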

Datafile without backups – how to restore?

Originally posted on “Databases at CERN” blog

Have you ever had a problem with restoring datafiles without any backups available? It’s easy, of course, provided you have all the archived logs from the time the datafile was created. Please check it here: Re-Creating Data Files When Backups Are Unavailable. Moreover, RMAN is clever enough to create an empty datafile automatically during the restore phase and then recover it using archived logs. So far, so good, but…
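
For the happy path, a minimal sketch could look like this (datafile number 7 is just an example); with no backup of the file available, RMAN re-creates it empty during the restore and then rolls it forward with the archived logs:

    SQL 'ALTER DATABASE DATAFILE 7 OFFLINE';
    RESTORE DATAFILE 7;     # no backup exists, so RMAN creates an empty datafile
    RECOVER DATAFILE 7;     # applies all archived logs since the file was created
    SQL 'ALTER DATABASE DATAFILE 7 ONLINE';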

Continue reading “Datafile without backups – how to restore?”

Importance of testing your backup strategy

Originally posted on “Databases at CERN” blog

Most of you surely know that the ability to restore data in case of failure is a primary skill for every DBA. You should always be able to restore and recover the data you’re responsible for. This is an axiom. To be sure that you’re able to do it, you should test it on a regular basis. There is of course the option of using some Oracle features, like the BACKUP ... VALIDATE or RESTORE ... VALIDATE commands, but if you want to be as certain as possible, the only way is to run a real restore and recovery. Doing this periodically for a large number of databases is extremely tough, both because of the resources needed and the DBA time. That’s why one of our DBAs, Ruben Gaspar Aparicio, has created a recovery system which is heavily used at CERN. Good news – it is available as open source on SourceForge (Recovery Platform). Since its release we’ve introduced many modifications to it, but it could still be a good starting point to check the source code in order to start developing your own solution. We’re using it to validate our backup strategy, running a real restore and recovery every week or two for most of our databases.
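
For reference, the validation-only commands mentioned above look roughly like this; they read the backups or the datafiles without writing anything back, which is exactly why they are a weaker guarantee than a real restore and recovery:

    # check that the existing backups are readable and restorable,
    # without actually restoring anything
    RESTORE DATABASE VALIDATE;
    RESTORE ARCHIVELOG ALL VALIDATE;

    # read the datafiles as a backup would and check them for corruption
    BACKUP VALIDATE DATABASE ARCHIVELOG ALL;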

Continue reading “Importance of testing your backup strategy”

Backups in Data Guard environment

Originally posted on “Databases at CERN” blog

Physical standby databases seem to be ideal candidates for offloading backups from primary ones. Instead of letting those resources sit “wasted” (unless you’re already using Active Data Guard, for example), you could use them for backups and avoid affecting primary performance, especially if your storage is under heavy load even during the normal (user- or application-generated) workload. So, if you’re looking for good reasons to convince your boss/finance department/etc. that having standby database(s) is a must in your environment, the ability to offload backups from primary databases would surely be an important one (apart from the usual ones related to disaster recovery, etc.).
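
In its simplest form the offloading looks like the sketch below: RMAN runs against the standby instance but records everything in a recovery catalog shared with the primary, so the resulting backups (same DBID) can later be used to restore the primary as well; the catalog connection details are placeholders:

    # run on the physical standby host (placeholder catalog credentials)
    CONNECT TARGET /
    CONNECT CATALOG rco/***@catdb

    BACKUP DATABASE PLUS ARCHIVELOG;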

Continue reading “Backups in Data Guard environment”

How to verify if archived log deletion policy is correctly applied?

Originally posted on “Databases at CERN” blog

What is the best way to handle archived log deletion in environments with standby and downstream capture databases? One could use one’s own scripts to delete, for example, all backed-up archived logs older than n days. But a better way is to set an RMAN archived log deletion policy, because then additional options can be specified to delete archived logs which are not only BACKED UP N TIMES, but also APPLIED or SHIPPED to other databases in the environment. Then, with proper settings, we should not end up with a standby database which needs an already deleted archived log… Of course, unless there are some bugs causing problems with the correct handling of archived log deletion, so it’s a good idea to double-check your configuration before the real deletion occurs, which usually happens when there is space pressure in the FRA.
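
A sketch of such a configuration, together with a cautious way of checking what RMAN would actually consider deletable (answer NO at the confirmation prompt), might look like this:

    # keep archived logs until they are applied on all standby databases
    # AND backed up at least once to tape
    CONFIGURE ARCHIVELOG DELETION POLICY
      TO APPLIED ON ALL STANDBY
      BACKED UP 1 TIMES TO DEVICE TYPE SBT;

    # without FORCE, DELETE honours the configured policy; it lists the
    # candidate logs and asks for confirmation before removing anything
    DELETE ARCHIVELOG ALL;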

Continue reading “How to verify if archived log deletion policy is correctly applied?”