Sunday, September 11, 2011

Duplicate Oracle Database with RMAN

A powerful feature of RMAN is the ability to duplicate (clone) a database from a backup. It is possible to create a duplicate database on:
  • A remote server with the same file structure
  • A remote server with a different file structure
  • The local server with a different file structure

A duplicate database is distinct from a standby database, although both are created with the DUPLICATE command. A standby database is a copy of the primary database that you update continually or periodically by applying archived logs from the primary database. If the primary database is damaged or destroyed, you can fail over to the standby database and effectively transform it into the new primary database. A duplicate database, on the other hand, cannot be used in this way: it is not intended for failover scenarios and does not support the standby recovery and failover options.
    To prepare for database duplication, you must first create an auxiliary instance. For the duplication to work, you must connect RMAN to both the target (primary) database and an auxiliary instance started in NOMOUNT mode.
    So long as RMAN is able to connect to the primary and duplicate instances, the RMAN client can run on any machine. However, all backups, copies of datafiles, and archived logs used for creating and recovering the duplicate database must be accessible by the server session on the duplicate host.
    As part of the duplication operation, RMAN performs the following:
  • Restores the target datafiles to the duplicate database and performs incomplete recovery by using all available backups and archived logs.
     
  • Shuts down and starts the auxiliary database.
     
  • Opens the duplicate database with the RESETLOGS option after incomplete recovery to create the online redo logs.
     
  • Generates a new, unique DBID for the duplicate database.

Preparing the Duplicate (Auxiliary) Instance for Duplication

Create an Oracle Password File
First we must create a password file for the duplicate instance.
export ORACLE_SID=APP2
orapwd file=orapwAPP2 password=manager entries=5 force=y
Ensure Oracle Net Connectivity to both Instances
Next add the appropriate entries into the TNSNAMES.ORA and LISTENER.ORA files in the $TNS_ADMIN directory.
LISTENER.ORA
APP1 = Target Database, APP2 = Auxiliary Database
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = gentic)(PORT = 1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = APP1.WORLD)
      (ORACLE_HOME = /opt/oracle/product/10.2.0)
      (SID_NAME = APP1)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = APP2.WORLD)
      (ORACLE_HOME = /opt/oracle/product/10.2.0)
      (SID_NAME = APP2)
    )
  )
TNSNAMES.ORA
APP1 = Target Database, APP2 = Auxiliary Database
APP1.WORLD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = gentic)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = APP1.WORLD)
    )
  )

APP2.WORLD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = gentic)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = APP2.WORLD)
    )
  )
SQLNET.ORA
NAMES.DIRECTORY_PATH= (TNSNAMES)
NAMES.DEFAULT_DOMAIN = WORLD
NAME.DEFAULT_ZONE = WORLD
USE_DEDICATED_SERVER = ON
Now restart the Listener
lsnrctl stop
lsnrctl start
Create an Initialization Parameter File for the Auxiliary Instance
Create an INIT.ORA parameter file for the auxiliary instance. You can copy it from the target instance and then modify the parameters.
### Duplicate Database
### -----------------------------------------------
# This is only used when you duplicate the database
# on the same host to avoid name conflicts

DB_FILE_NAME_CONVERT              = (/u01/oracle/db/APP1/,/u01/oracle/db/APP2/)
LOG_FILE_NAME_CONVERT             = (/u01/oracle/db/APP1/,/u01/oracle/db/APP2/,
                                     /opt/oracle/db/APP1/,/opt/oracle/db/APP2/)

### Global database name is db_name.db_domain
### -----------------------------------------

db_name                           = APP2
db_unique_name                    = APP2_GENTIC
db_domain                         = WORLD
service_names                     = APP2
instance_name                     = APP2

### Basic Configuration Parameters
### ------------------------------

compatible                        = 10.2.0.4
db_block_size                     = 8192
db_file_multiblock_read_count     = 32
db_files                          = 512
control_files                     = /u01/oracle/db/APP2/con/APP2_con01.con,
                                    /opt/oracle/db/APP2/con/APP2_con02.con

### Database Buffer Cache, I/O
### --------------------------
# The Parameter SGA_TARGET enables Automatic Shared Memory Management

sga_target                        = 500M
sga_max_size                      = 600M

### REDO Logging without Data Guard
### -------------------------------

log_archive_format                = APP2_%s_%t_%r.arc
log_archive_max_processes         = 2
log_archive_dest                  = /u01/oracle/db/APP2/arc

### System Managed Undo
### -------------------

undo_management                   = auto
undo_retention                    = 10800
undo_tablespace                   = undo

### Traces, Dumps and Passwordfile
### ------------------------------

audit_file_dest                   = /u01/oracle/db/APP2/adm/admp
user_dump_dest                    = /u01/oracle/db/APP2/adm/udmp
background_dump_dest              = /u01/oracle/db/APP2/adm/bdmp
core_dump_dest                    = /u01/oracle/db/APP2/adm/cdmp
utl_file_dir                      = /u01/oracle/db/APP2/adm/utld
remote_login_passwordfile         = exclusive
Create a full Database Backup
Make sure that a full backup of the target is accessible on the duplicate host. You can use the following Bash script to back up the target database.
rman nocatalog target / <<-EOF
   configure retention policy to recovery window of 3 days;
   configure backup optimization on;
   configure controlfile autobackup on;
   configure default device type to disk;
   configure device type disk parallelism 1 backup type to compressed backupset;
   configure datafile backup copies for device type disk to 1;
   configure maxsetsize to unlimited;
   configure snapshot controlfile name to '/u01/backup/snapshot_controlfile';
   show all;

   run {
     allocate channel ch1 type Disk maxpiecesize = 1900M;
     backup full database noexclude
     include current controlfile
     format '/u01/backup/datafile_%s_%p.bak'
     tag 'datafile_daily';
   }

   run {
     allocate channel ch1 type Disk maxpiecesize = 1900M;
     backup archivelog all
     delete all input
     format '/u01/backup/archivelog_%s_%p.bak'
     tag 'archivelog_daily';
   }

   run {
      allocate channel ch1 type Disk maxpiecesize = 1900M;
      backup format '/u01/backup/controlfile_%s.bak' current controlfile;
   }

   crosscheck backup;
   list backup of database;
   report unrecoverable;
   report schema;
   report need backup;
   report obsolete;
   delete noprompt expired backup of database;
   delete noprompt expired backup of controlfile;
   delete noprompt expired backup of archivelog all;
   delete noprompt obsolete recovery window of 3 days;
   quit
EOF

Creating a Duplicate Database on the Local Host

Before beginning RMAN duplication, use SQL*Plus to connect to the auxiliary instance and start it in NOMOUNT mode. If you do not have a server-side initialization parameter file for the auxiliary instance in the default location, then you must specify the client-side initialization parameter file with the PFILE parameter on the DUPLICATE command.

Get original Filenames from TARGET
To rename the database files you can use the SET NEWNAME command. First get the original filenames from the target, then substitute the new names in the DUPLICATE command.
ORACLE_SID=APP1
export ORACLE_SID
set feed off
set pagesize 10000
column name format a40 heading "Datafile"
column file# format 99 heading "File-ID"

select  name, file# from v$dbfile;

column member format a40 heading "Logfile"
column group# format 99 heading "Group-Nr"

select  member, group# from v$logfile;

Datafile                                  File-ID
----------------------------------------  -------
/u01/oracle/db/APP1/sys/APP1_sys1.dbf           1
/u01/oracle/db/APP1/sys/APP1_undo1.dbf          2
/u01/oracle/db/APP1/sys/APP1_sysaux1.dbf        3
/u01/oracle/db/APP1/usr/APP1_users1.dbf         4

Logfile                                  Group-Nr
---------------------------------------- --------
/u01/oracle/db/APP1/rdo/APP1_log1A.rdo          1
/opt/oracle/db/APP1/rdo/APP1_log1B.rdo          1
/u01/oracle/db/APP1/rdo/APP1_log2A.rdo          2
/opt/oracle/db/APP1/rdo/APP1_log2B.rdo          2
/u01/oracle/db/APP1/rdo/APP1_log3A.rdo          3
/opt/oracle/db/APP1/rdo/APP1_log3B.rdo          3
/u01/oracle/db/APP1/rdo/APP1_log4A.rdo          4
/opt/oracle/db/APP1/rdo/APP1_log4B.rdo          4
/u01/oracle/db/APP1/rdo/APP1_log5A.rdo          5
/opt/oracle/db/APP1/rdo/APP1_log5B.rdo          5
/u01/oracle/db/APP1/rdo/APP1_log6A.rdo          6
/opt/oracle/db/APP1/rdo/APP1_log6B.rdo          6
/u01/oracle/db/APP1/rdo/APP1_log7A.rdo          7
/opt/oracle/db/APP1/rdo/APP1_log7B.rdo          7
/u01/oracle/db/APP1/rdo/APP1_log8A.rdo          8
/opt/oracle/db/APP1/rdo/APP1_log8B.rdo          8
/u01/oracle/db/APP1/rdo/APP1_log9A.rdo          9
/opt/oracle/db/APP1/rdo/APP1_log9B.rdo          9
/u01/oracle/db/APP1/rdo/APP1_log10A.rdo        10
/opt/oracle/db/APP1/rdo/APP1_log10B.rdo        10
Create Directories for the duplicate Database
mkdir -p /u01/oracle/db/APP2
mkdir -p /opt/oracle/db/APP2
cd /opt/oracle/db/APP2
mkdir con rdo
cd /u01/oracle/db/APP2
mkdir adm arc con rdo sys tmp usr bck
cd adm
mkdir admp bdmp cdmp udmp utld
Create Symbolic Links to Password and INIT.ORA File
Oracle must be able to locate the password file and the INIT.ORA file.
cd $ORACLE_HOME/dbs
ln -s /home/oracle/config/10.2.0/orapwAPP2 orapwAPP2
ln -s /home/oracle/config/10.2.0/initAPP2.ora initAPP2.ora
Duplicate the Database
Now you are ready to duplicate the database APP1 to APP2.
ORACLE_SID=APP2
export ORACLE_SID

sqlplus sys/manager as sysdba
startup force nomount pfile='/home/oracle/config/10.2.0/initAPP2.ora';
exit;
rman TARGET sys/manager@APP1 AUXILIARY sys/manager@APP2
Recovery Manager: Release 10.2.0.4.0 - Production on Tue Oct 28 12:00:13 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: APP1 (DBID=3191823649)
connected to auxiliary database: APP2 (not mounted)
RUN
{
  SET NEWNAME FOR DATAFILE 1 TO '/u01/oracle/db/APP2/sys/APP2_sys1.dbf';
  SET NEWNAME FOR DATAFILE 2 TO '/u01/oracle/db/APP2/sys/APP2_undo1.dbf';
  SET NEWNAME FOR DATAFILE 3 TO '/u01/oracle/db/APP2/sys/APP2_sysaux1.dbf';
  SET NEWNAME FOR DATAFILE 4 TO '/u01/oracle/db/APP2/usr/APP2_users1.dbf';
  DUPLICATE TARGET DATABASE TO APP2
  PFILE = /home/oracle/config/10.2.0/initAPP2.ora
  NOFILENAMECHECK
  LOGFILE GROUP 1 ('/u01/oracle/db/APP2/rdo/APP2_log1A.rdo',
                   '/opt/oracle/db/APP2/rdo/APP2_log1B.rdo') SIZE 10M REUSE,
          GROUP 2 ('/u01/oracle/db/APP2/rdo/APP2_log2A.rdo',
                   '/opt/oracle/db/APP2/rdo/APP2_log2B.rdo') SIZE 10M REUSE,
          GROUP 3 ('/u01/oracle/db/APP2/rdo/APP2_log3A.rdo',
                   '/opt/oracle/db/APP2/rdo/APP2_log3B.rdo') SIZE 10M REUSE,
          GROUP 4 ('/u01/oracle/db/APP2/rdo/APP2_log4A.rdo',
                   '/opt/oracle/db/APP2/rdo/APP2_log4B.rdo') SIZE 10M REUSE,
          GROUP 5 ('/u01/oracle/db/APP2/rdo/APP2_log5A.rdo',
                   '/opt/oracle/db/APP2/rdo/APP2_log5B.rdo') SIZE 10M REUSE,
          GROUP 6 ('/u01/oracle/db/APP2/rdo/APP2_log6A.rdo',
                   '/opt/oracle/db/APP2/rdo/APP2_log6B.rdo') SIZE 10M REUSE,
          GROUP 7 ('/u01/oracle/db/APP2/rdo/APP2_log7A.rdo',
                   '/opt/oracle/db/APP2/rdo/APP2_log7B.rdo') SIZE 10M REUSE,
          GROUP 8 ('/u01/oracle/db/APP2/rdo/APP2_log8A.rdo',
                   '/opt/oracle/db/APP2/rdo/APP2_log8B.rdo') SIZE 10M REUSE,
          GROUP 9 ('/u01/oracle/db/APP2/rdo/APP2_log9A.rdo',
                   '/opt/oracle/db/APP2/rdo/APP2_log9B.rdo') SIZE 10M REUSE,
          GROUP 10 ('/u01/oracle/db/APP2/rdo/APP2_log10A.rdo',
                    '/opt/oracle/db/APP2/rdo/APP2_log10B.rdo') SIZE 10M REUSE;
}
The full output is not shown here, but verify that RMAN was able to open the duplicate database with the RESETLOGS option.
.....
.....
contents of Memory Script:
{
Alter clone database open resetlogs;
}
executing Memory Script

database opened
Finished Duplicate Db at 28-OCT-08
As the final step, remove or comment out the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters in the INIT.ORA file and restart the database.
initAPP2.ora
### Duplicate Database
### -----------------------------------------------
# This is only used when you duplicate the database
# on the same host to avoid name conflicts
# DB_FILE_NAME_CONVERT =  (/u01/oracle/db/APP1/,/u01/oracle/db/APP2/)
# LOG_FILE_NAME_CONVERT = (/u01/oracle/db/APP1/,/u01/oracle/db/APP2/,
#                          /opt/oracle/db/APP1/,/opt/oracle/db/APP2/)
sqlplus / as sysdba
shutdown immediate;
startup;
Total System Global Area 629145600 bytes
Fixed Size 1269064 bytes
Variable Size 251658936 bytes
Database Buffers 373293056 bytes
Redo Buffers 2924544 bytes
Database mounted.
Database opened.

Creating a Duplicate Database on a Remote Host

This scenario is essentially the same as for the local host. Copy the RMAN backup files to the remote host, into the same directory as on the local host.
cd /u01/backup
scp gentic:/u01/backup/* .
The other steps are the same as described under «Creating a Duplicate Database on the Local Host».

Oracle Data Guard

Oracle Data Guard is one of the most effective and comprehensive data protection and disaster recovery solutions available today for enterprise data

Oracle Data Guard is the management, monitoring, and automation software infrastructure that creates, maintains, and monitors one or more standby databases to protect enterprise data from failures, disasters, errors, and corruptions.

Data Guard maintains these standby databases as transactionally consistent copies of the production database. These standby databases can be located at remote disaster recovery sites thousands of miles away from the production data center, or they may be located in the same city, same campus, or even in the same building. If the production database becomes unavailable because of a planned or an unplanned outage, Data Guard can switch any standby database to the production role, thus minimizing the downtime associated with the outage, and preventing any data loss.

Available as a feature of the Enterprise Edition of the Oracle Database, Data Guard can be used in combination with other Oracle High Availability (HA) solutions such as Real Application Clusters (RAC) and Recovery Manager (RMAN), to provide a high level of data protection and data availability that is unprecedented in the industry.
The following diagram presents a high-level overview of Oracle Data Guard.
 
Overview of Oracle Data Guard Functional Components
Data Guard Configuration
A Data Guard configuration consists of one production (or primary) database and up to nine standby databases. The databases in a Data Guard configuration are connected by Oracle Net and may be dispersed geographically. There are no restrictions on where the databases are located, provided that they can communicate with each other. However, for disaster recovery, it is recommended that the standby databases are hosted at sites that are geographically separated from the primary site.
Redo Apply and SQL Apply
A standby database is initially created from a backup copy of the primary database. Once created, Data Guard automatically maintains the standby database as a transactionally consistent copy of the primary database by transmitting primary database redo data to the standby system and then applying the redo logs to the standby database.
Data Guard provides two methods to apply this redo data to the standby database and keep it transactionally consistent with the primary, and these methods correspond to the two types of standby databases supported by Data Guard.
  • Redo Apply, used for physical standby databases
  • SQL Apply, used for logical standby databases
A physical standby database provides a physically identical copy of the primary database, with on-disk database structures that are identical to the primary database on a block-for-block basis. The database schema, including indexes, is the same. The Redo Apply technology applies redo data on the physical standby database using standard Oracle media recovery techniques.

A logical standby database contains the same logical information as the production database, although the physical organization and structure of the data can be different. The SQL apply technology keeps the logical standby database synchronized with the primary database by transforming the data in the redo logs received from the primary database into SQL statements and then executing the SQL statements on the standby database. This makes it possible for the logical standby database to be accessed for queries and reporting purposes at the same time the SQL is being applied to it. Thus, a logical standby database can be used concurrently for data protection and reporting.
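As a sketch, the two apply methods are started on the standby database with the following SQL (assuming a standby that has already been created and mounted or opened as appropriate):

```sql
-- Physical standby: start Redo Apply (managed recovery) in the background
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- Logical standby: start SQL Apply
ALTER DATABASE START LOGICAL STANDBY APPLY;
```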
Role Management
Using Data Guard, the role of a database can be switched from a primary role to a standby role and vice versa, ensuring no data loss in the process and minimizing downtime. There are two kinds of role transitions: a switchover and a failover. A switchover is a role reversal between the primary database and one of its standby databases. This is typically done for planned maintenance of the primary system. During a switchover, the primary database transitions to a standby role and the standby database transitions to the primary role. The transition occurs without having to re-create either database. A failover is an irreversible transition of a standby database to the primary role. This is only done in the event of a catastrophic failure of the primary database, which is assumed to be lost. To be used again in the Data Guard configuration, the old primary must be re-instantiated as a standby from the new primary.
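In outline, a switchover in a physical standby configuration is a pair of SQL commands (a sketch only; in practice you would first verify the SWITCHOVER_STATUS column of V$DATABASE on both sides):

```sql
-- On the primary: convert it to a standby
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

-- On the (old) standby: convert it to the new primary and open it
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
ALTER DATABASE OPEN;
```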
Data Guard Protection Modes
In some situations, a business cannot afford to lose data at any cost. In other situations, some applications require maximum database performance and can tolerate a potential loss of data. Data Guard provides three distinct modes of data protection to satisfy these varied requirements:
  • Maximum Protection—This mode offers the highest level of data protection. Data is synchronously transmitted to the standby database from the primary database, and transactions are not committed on the primary database unless the redo data is available on at least one standby database configured in this mode. If the last standby database configured in this mode becomes unavailable, processing stops on the primary database. This mode ensures no data loss.

  • Maximum Availability—This mode is similar to the maximum protection mode, including zero data loss. However, if a standby database becomes unavailable (for example, because of network connectivity problems), processing continues on the primary database. When the fault is corrected, the standby database is automatically resynchronized with the primary database.

  • Maximum Performance—This mode offers slightly less data protection on the primary database, but higher performance than maximum availability mode. In this mode, as the primary database processes transactions, redo data is asynchronously shipped to the standby database. The commit operation of the primary database does not wait for the standby database to acknowledge receipt of redo data before completing write operations on the primary database. If any standby destination becomes unavailable, processing continues on the primary database and there is little effect on primary database performance.
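The protection mode is set on the primary database with a single statement; for example (a sketch; Maximum Protection and Maximum Availability require a standby destination configured for synchronous redo transport):

```sql
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
-- or: TO MAXIMIZE PROTECTION / TO MAXIMIZE PERFORMANCE
```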
Data Guard Broker
The Oracle Data Guard Broker is a distributed management framework that automates and centralizes the creation, maintenance, and monitoring of Data Guard configurations. All management operations can be performed either through Oracle Enterprise Manager, which uses the Broker, or through the Broker's specialized command-line interface (DGMGRL).
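A typical DGMGRL session might look like the following (the connect string and database names are placeholders, not from the configuration above):

```
dgmgrl sys/manager@APP1
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW DATABASE 'APP2';
DGMGRL> SWITCHOVER TO 'APP2';
```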
Data Guard Architecture Diagram
The following diagram shows an overview of the Oracle Data Guard architecture.
What's New in Oracle Data Guard 10g?
Data Guard in Oracle Database 10g has been developed with the following goals in mind.
  • Ease of use
  • Tight integration with new 10g HA features such as flashback database
  • Comprehensiveness in terms of features & functionality
This section will highlight some of the key new features of Oracle Data Guard 10g. For details into these features, please refer to the Oracle Data Guard 10g documentation.
General New Features
Real Time Apply
With this feature, redo data can be applied on the standby database (whether via Redo Apply or SQL Apply) as soon as it has been written to a standby redo log (SRL). Prior releases of Data Guard required this redo data to be archived at the standby database in the form of archivelogs before it could be applied.

The Real Time Apply feature allows standby databases to be closely synchronized with the primary database, enabling up-to-date and real-time reporting (especially for Data Guard SQL Apply). This also enables faster switchover and failover times, which in turn reduces planned and unplanned downtime for the business.

The impact of a disaster is often measured in terms of Recovery Point Objective (RPO: how much data a business can afford to lose in the event of a disaster) and Recovery Time Objective (RTO: how much time a business can afford to be down). With Oracle Data Guard, when Maximum Protection is used in combination with Real Time Apply, businesses get the benefits of both zero data loss and minimal downtime in the event of a disaster, giving Data Guard excellent RPO and RTO characteristics.
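On a physical standby, Real Time Apply is enabled by recovering directly from the current standby redo log (a sketch; standby redo logs must have been configured beforehand):

```sql
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```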
Integration with Flashback Database
Data Guard in 10g has been integrated with the Flashback family of features to bring the Flashback feature benefits to a Data Guard configuration.
One such benefit is protection from human error. In Oracle9i, administrators could configure Data Guard with an apply delay to protect standby databases from logical data corruptions that occurred on the primary database. The side effects of such delays are that any reporting done on the standby database is done on old data, and switchover/failover is delayed because the accumulated logs must be applied first. In Data Guard 10g, with the Real Time Apply feature, these delayed-reporting and delayed-switchover/failover issues do not exist; if logical corruptions do end up affecting both the primary and standby databases, the administrator can use Flashback Database on both databases to quickly revert them to an earlier point in time and back out the user error.
Another benefit of this integration appears during failovers. In releases prior to 10g, following any failover operation the old primary database had to be re-created (as a new standby database) from a backup of the new primary database if the administrator intended to bring it back into the Data Guard configuration. This can be an issue when the databases are fairly large and the primary and standby sites are hundreds or thousands of miles apart. In Data Guard 10g, after the primary server fault is repaired, the old primary database may simply be brought up in mounted mode, "flashed back" (using Flashback Database) to the SCN at which the failover occurred, and then brought back as a standby database in the Data Guard configuration. No re-instantiation is required.
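Reinstating the old primary after a failover might look like this in outline (a sketch of the 10g feature described above; the SCN value is a placeholder):

```sql
-- On the new primary: find the SCN at which it became primary
SELECT standby_became_primary_scn FROM v$database;

-- On the old primary (started in MOUNT mode): flash back and convert
FLASHBACK DATABASE TO SCN 1234567;           -- use the SCN from the query above
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```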
SQL Apply New Features
Zero Downtime Instantiation
A logical standby database can now be created from an online backup of the primary database, without shutting down or quiescing the primary database as was required in prior releases. No shutdown of the primary means production downtime is eliminated; no quiesce means there is no waiting for quiescing to take effect and no dependence on Resource Manager.
Rolling Upgrades
Oracle Database 10g supports database software upgrades (from Oracle Database 10g Patchset 1 onwards) in a rolling fashion, with near zero database downtime, by using Data Guard SQL Apply. The steps involve upgrading the logical standby database to the next release, running in a mixed mode to test and validate the upgrade, doing a role reversal by switching over to the upgraded database, and then finally upgrading the old primary database. While running in a mixed mode for testing purpose, the upgrade can be aborted and the software downgraded, without data loss. For additional data protection during these steps, a second standby database may be used.

By supporting rolling upgrades with minimal downtimes, Data Guard reduces the large maintenance windows typical of many administrative tasks, and enables the 24x7 operation of the business.
Additional Datatypes
SQL Apply now supports the following additional data types.
  • NCLOB
  • LONG
  • LONG RAW
  • BINARY_FLOAT
  • BINARY_DOUBLE
  • Index-organized tables (IOTs), without overflow segments and without LOB columns
This support for additional datatypes allows logical standby databases to recover and protect a wider variety of data, thus increasing the overall database protection and recovery options for Data Guard.
Enterprise Manager / Data Guard Broker New Features
RAC Support
It is now possible to use the Data Guard Broker, and the Broker's Command Line Interface (DGMGRL), as well as Enterprise Manager, to create and manage Data Guard configurations that contain RAC primary and RAC standby databases. In Oracle9i, such administration is possible only through SQL*Plus. In Data Guard 10g, Data Guard Broker interfaces with Oracle Clusterware such that it has control over critical operations during specific Data Guard state transitions, such as switchovers, failovers, protection mode changes and state changes.
Simplified Browser-based Interface
Administration of a Data Guard configuration can be done through the new streamlined browser-based HTML interface of Enterprise Manager, which enables complete standby database lifecycle management. The focus of this streamlined administration is on:
  • Ease of use.
  • Management based on best practices.
  • Pre-built integration with other HA features.
Data Guard Benefits
  1. Disaster recovery and high availability

    Data Guard provides an efficient and comprehensive disaster recovery and high availability solution. Easy-to-manage switchover and failover capabilities allow role reversals between primary and standby databases, minimizing the downtime of the primary database for planned and unplanned outages.

  2. Complete data protection

    With its standby databases, Data Guard ensures no data loss, even in the face of unforeseen disasters. A standby database provides a safeguard against data corruption and user errors. Storage level physical corruptions on the primary database do not propagate to the standby database. Similarly, logical corruptions or user errors that cause the primary database to be permanently damaged can be resolved. Finally, the redo data is validated at the time it is applied to the standby database.

  3. Efficient utilization of system resources

    The standby database tables that are updated with redo logs received from the primary database can be used for other tasks such as backup operations, reporting, summations, and queries, thereby reducing the primary database workload necessary to perform these tasks, saving valuable CPU and I/O cycles. With a logical standby database, users can perform normal data manipulation operations on tables in schemas that are not updated from the primary database. A logical standby database can remain open while the tables are updated from the primary database and the tables are simultaneously available for read-only access. Finally, additional indexes and materialized views can be created on the maintained tables for better query performance and to suit specific business requirements.

  4. Flexibility in data protection to balance availability against performance requirements

    Oracle Data Guard offers the maximum protection, maximum availability, and maximum performance modes to help enterprises balance data availability against system performance requirements.

  5. Automatic gap detection and resolution

    If connectivity is lost between the primary and one or more standby databases (e.g. because of network problems), redo data being generated on the primary database cannot be sent to those standby databases. Once connectivity is re-established, the missing archivelog sequence (or the gap) is automatically detected by Data Guard and the necessary archivelogs are automatically transmitted to the standby databases. The standby databases are resynchronized with the primary database, with no manual intervention by the administrator. 

  6. Centralized and simple management

    The Data Guard Broker automates the management and operational tasks across the multiple databases in a Data Guard configuration. The Broker also monitors all of the systems within a single Data Guard configuration. Administrators may use either Oracle Enterprise Manager or the Broker's own specialized command-line interface (DGMGRL) to take advantage of this integrated management framework.

  7. Integrated with Oracle database

    Oracle Data Guard is available as a fully integrated feature of the Oracle Database (Enterprise Edition) at no extra cost.
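The automatic gap detection and resolution described in benefit 5 relies, for a physical standby, on the FAL (Fetch Archive Log) initialization parameters on the standby side; for example (the Oracle Net service names here are placeholders):

```
fal_server = APP1_PRIM    # service pointing to the primary (gap source)
fal_client = APP1_STBY    # service pointing back to this standby
```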

Tuesday, August 2, 2011

Oracle DBA Interview Questions Answers Explanation


These Oracle interview questions and answers will help you prepare for an interview.
1. Explain the difference between a hot backup and a cold backup and the benefits associated with each. - A hot backup is taken while the database is still up and running; the database must be in archivelog mode. A cold backup is taken while the database is shut down and does not require archivelog mode. The benefit of a hot backup is that the database remains available for use while the backup runs, and you can recover the database to any point in time. The benefit of a cold backup is that the backup and recovery process is typically easier to administer; in addition, since the database does not need to be in archivelog mode, there is a slight performance gain because it is not writing archive logs to disk.
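For a user-managed (non-RMAN) hot backup, each tablespace is placed in backup mode while its datafiles are copied at the OS level; a minimal sketch:

```sql
ALTER TABLESPACE users BEGIN BACKUP;
-- copy the tablespace's datafiles at the OS level, e.g. with cp
ALTER TABLESPACE users END BACKUP;

-- archive the current redo log so the backup is recoverable
ALTER SYSTEM ARCHIVE LOG CURRENT;
```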
2. You have just had to restore from backup and do not have any control files. How would you go about bringing up this database? - I would create a text-based backup control file, stipulating where on disk all the data files were, and then issue the RECOVER command with the USING BACKUP CONTROLFILE clause.

3. What is the difference between TRUNCATE and DELETE?

The DELETE command logs the data changes (undo is generated for each row), whereas TRUNCATE simply removes the data without doing so. Hence data removed by DELETE can be rolled back, but data removed by TRUNCATE cannot. TRUNCATE is a DDL statement, whereas DELETE is a DML statement.
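The difference is easy to demonstrate against a scratch table (a sketch):

```sql
CREATE TABLE t (x NUMBER);
INSERT INTO t VALUES (1);
COMMIT;

DELETE FROM t;     -- DML: generates undo, can be rolled back
ROLLBACK;          -- the row is back

TRUNCATE TABLE t;  -- DDL: implicit commit, cannot be rolled back
```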

4. What is the maximum buffer size that can be specified using the DBMS_OUTPUT.ENABLE function?

The documented maximum is 1,000,000 bytes. (In Oracle 10g Release 2 and later, passing NULL to DBMS_OUTPUT.ENABLE makes the buffer size unlimited.)

5.What is an UTL_FILE? What are different procedures and functions associated with it?

The UTL_FILE package lets your PL/SQL programs read and write operating system (OS) text files. It provides a restricted version of standard OS stream file input/output (I/O).
Subprogram - Description
FOPEN function - Opens a file for input or output with the default line size.
IS_OPEN function - Determines if a file handle refers to an open file.
FCLOSE procedure - Closes a file.
FCLOSE_ALL procedure - Closes all open file handles.
GET_LINE procedure - Reads a line of text from an open file.
PUT procedure - Writes text to a file without appending a line terminator.
NEW_LINE procedure - Writes one or more OS-specific line terminators to a file.
PUT_LINE procedure - Writes a line to a file, appending an OS-specific line terminator.
PUTF procedure - A PUT procedure with formatting.
FFLUSH procedure - Physically writes all pending output to a file.
FOPEN function - Opens a file with a specified maximum line size.
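A minimal sketch of these subprograms in use; the directory object name and file name are hypothetical and must already exist with the proper grants:

```sql
DECLARE
    fh UTL_FILE.FILE_TYPE;
BEGIN
    -- Open a file for writing in a directory object named DATA_DIR (assumed to exist)
    fh := UTL_FILE.FOPEN('DATA_DIR', 'demo.txt', 'w');
    UTL_FILE.PUT_LINE(fh, 'Hello from UTL_FILE');
    UTL_FILE.FFLUSH(fh);
    UTL_FILE.FCLOSE(fh);
END;
/
```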

6.What is the difference between database triggers and form triggers?

Database triggers are fired whenever a database event such as INSERT, UPDATE, DELETE, LOGON, or LOGOFF occurs. Form triggers, on the other hand, are fired in response to events that take place while working with forms, such as navigating from one field to another or from one block to another.

7.What is OCI. What are its uses?

OCI is Oracle Call Interface. When applications developers demand the most powerful interface to the Oracle Database Server, they call upon the Oracle Call Interface (OCI). OCI provides the most comprehensive access to all of the Oracle Database functionality. The newest performance, scalability, and security features appear first in the OCI API. If you write applications for the Oracle Database, you likely already depend on OCI. Some types of applications that depend upon OCI are: 
· PL/SQL applications executing SQL
· C++ applications using OCCI
· Java applications using the OCI-based JDBC driver
· C applications using the ODBC driver
· VB applications using the OLEDB driver
· Pro*C applications
· Distributed SQL

8.What are ORACLE PRECOMPILERS?

A precompiler is a tool that allows programmers to embed SQL statements in high-level source programs like C, C++, COBOL, etc. The precompiler accepts the source program as input, translates the embedded SQL statements into standard Oracle runtime library calls, and generates a modified source program that one can compile, link, and execute in the usual way. Examples are the Pro*C Precompiler for C, Pro*Cobol for Cobol, SQLJ for Java etc.

9.What is syntax for dropping a procedure and a function? Are these operations possible?

DROP PROCEDURE <name>; or DROP FUNCTION <name>; — yes, if they are standalone procedures or functions. If they are part of a package, then one has to remove them from the package specification and body and recompile the package.
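For example (the object names are hypothetical):

```sql
DROP PROCEDURE update_salary;
DROP FUNCTION get_bonus;

-- For a packaged subprogram, edit the package instead:
-- remove the subprogram from the spec and body, then recompile
ALTER PACKAGE payroll_pkg COMPILE;
```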

10.How to check if application 11i System is Autoconfig enabled ?

Under $AD_TOP/bin, check for the file adcfginfo.sh; if it exists, use adcfginfo.sh contextfile=<CONTEXT> show=enabled
If this file is not there, look for any configuration file under APPL_TOP; if the system is Autoconfig enabled, you will see an entry indicating the file was generated by Autoconfig.

11.How to check if Oracle application 11i System is Rapid Clone enabled ?

For a system to be Rapid Clone enabled, it should be Autoconfig enabled (see above for how to confirm that an application 11i system is Autoconfig enabled). You should have the Rapid Clone patches applied; Rapid Clone is part of the Rapid Install product, whose family pack name is ADX. By default, all application 11i instances 11.5.9 and above are Autoconfig and Rapid Clone enabled.

12.What are the main Concurrent Manager types?

# ICM - Internal Concurrent Manager, which manages the other concurrent managers.
# Standard Managers - Which manage the processing of requests.
# CRM - Conflict Resolution Managers, which resolve conflicts in case of incompatibility.

13.What is the US directory in $AD_TOP or under the various product TOPs?

US is the default language directory in Oracle Applications. If you have multiple languages installed in your Applications, you will see other language directories besides US; each contains reports, fmx files, and other code for that language, such as FR for French or AR for Arabic.

14.Where is Concurrent Manager log file location.

By default the standard location is $APPLCSF/$APPLLOG; in some cases it can go to $FND_TOP/log as well.

15.Where would I find a .rf9 file, and what does it do?

These files are used to restart a patch that failed partway through for some reason.

16.Where is appsweb.cfg or appsweb_$CONTEXT.cfg stored and why its used ?

This file is defined by the environment variable FORMS60_WEB_CONFIG_FILE and is usually in the directory $OA_HTML/bin on the forms tier.
It is used by any forms client session. When a user tries to access forms, f60webmx picks up this file and, based on this configuration, creates a forms session for the user/client.

17.What is Multi Node System ?

A multi node system in Oracle Applications 11i means you have Applications 11i components on more than one system. A typical two node system has the database and Concurrent Manager on one machine and the forms and web server on a second machine.

18.Can a function take OUT parameters? If not, why?

Yes. A function can take IN, OUT, or IN OUT parameters, although using OUT parameters in a function is generally discouraged because it prevents the function from being called from SQL.
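A sketch of a function with an OUT parameter (the names are hypothetical); such a function is legal in PL/SQL but cannot be called from a SQL statement:

```sql
CREATE OR REPLACE FUNCTION split_name(
    full_name IN  VARCHAR2,
    last_name OUT VARCHAR2
) RETURN VARCHAR2 IS
BEGIN
    -- everything after the first space goes into the OUT parameter
    last_name := SUBSTR(full_name, INSTR(full_name, ' ') + 1);
    -- the first word is the return value
    RETURN SUBSTR(full_name, 1, INSTR(full_name, ' ') - 1);
END;
/
```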

Oracle DBA Interview Questions & Answers

Oracle DBA Interview Questions and Answers:

1. How does a deadlock arise? How can it be avoided or overcome?
A deadlock is a race situation in which two or more sessions block each other, each waiting for a lock the other holds. It usually arises from poor design of database access in the front-end application. It is avoided by making the lock acquisitions atomic or by always acquiring locks in the same order everywhere. A deadlock hurts server performance; Oracle detects it and rolls back one of the waiting statements, after which the transactions involved must commit or roll back, or the deadlocked processes must be terminated manually.

2. What is a look-up table?
A look-up table is a table whose data is updated very rarely. It is more of a static valued entity that is cross referenced against some indexed values (much like a hash). Other processes refer to this table to get some information now and then.
3. When are hashed clusters suitable to be created?
The hash clusters are usually created for a table with less frequently added/modified data. Usually the number of key values and total number of input that we are dealing with is predictable and does not exceed a predefined high value. This can come in handy when most of the queries accessing the hashed values are using equality comparisons.

4. What is an index?
An index is a structure that Oracle uses to access specific rows in a table directly, without scanning the whole table. Indexes can be created explicitly, and Oracle creates one automatically whenever a PRIMARY KEY or UNIQUE constraint is defined on a table.

5. What is row chaining?
Row chaining occurs when a single row is too large to fit into one data block (for example because of a small block size), so Oracle stores the row in a chain of two or more blocks.

6. What is data segment?
A data segment is the set of extents that Oracle allocates to a specific logical storage structure. For example, a table is a logical storage structure, and Oracle allocates a data segment to hold that table's data as a single unit; similarly, when a cluster is created, Oracle allocates a data segment for it.

7. What is a network database link?
You can connect different systems, each with its own database, into a network. A database link is a pointer that defines a one-way communication path from one database to another; when the link name matches the global database name of the remote database, it is referred to as a global (network) database link.

8. What are table clusters?
Tables that share a specific column can be grouped together, with that shared column as the cluster key. This is referred to as a table cluster. Clusters can be used for efficient retrieval of joined data from the database.

9. How are synonyms helpful?
Synonyms offer abstraction to database users. The user need not worry about the real source of the data or the network aliases that are used at different times; queries can thus remain constant over time and be more meaningful. The security of the accessed objects is also enhanced when synonyms are used for access, since no one needs to know the real owner or real name of a specific data entity.

10. What is a database instance?
A database instance is the set of memory structures (the SGA) and background processes that are running at a specific time. An instance must be started to access a specific database, and its processes are initially started in the NOMOUNT phase.

11. What are temporary segments?
Temporary segments are work areas that Oracle creates, in the temporary tablespace, when a SQL operation such as a large sort requires more space than is available in memory. The temporary segment is released after the operation completes.

12. What is the difference between DDL and DML?
DDL refers to the commands that define the database, including CREATE, DROP, TRUNCATE, etc. The DML commands include INSERT, DELETE, UPDATE, etc., which modify just the values in the database. DDL defines the schema of the database; DML statements modify the values held in the database fields.

13. What is the difference between hot backup and cold backup?
A hot backup is the process of backing up the database while the system is still actively used. This requires the database to run in ARCHIVELOG mode, and it can constrain performance when there are many users and a huge database. A cold backup is one taken when the system is shut down; it is an easier and simpler way to back up and restore the database, and the system need not be in ARCHIVELOG mode. However, system availability is compromised when aiming for cold backups.

14. How are hints passed to the SQL engine?
Hints are passed to the SQL engine inside comments that begin with a + sign, placed immediately after the SELECT, INSERT, UPDATE, or DELETE keyword.
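For example (the table and index names are hypothetical):

```sql
-- Hint in block-comment form: request a specific index
SELECT /*+ INDEX(e emp_dept_ix) */ ename
FROM   emp e
WHERE  deptno = 10;

-- The same mechanism in end-of-line comment form: request a full scan
SELECT --+ FULL(e)
       ename
FROM   emp e;
```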

15. What is buffer busy wait? How can it be avoided?
It is the condition in which a server process must wait for a buffer, typically the segment header block of a table on which concurrent inserts and deletes are being performed. The segment header holds the list of free blocks for that table. Increasing the FREELISTS and FREELIST GROUPS storage parameters appropriately (or using Automatic Segment Space Management) spreads this contention and helps avoid the buffer busy wait situation.

16. How can you try increasing the buffer cache hit ratio?
Query V$DB_CACHE_ADVICE after enabling the buffer cache advisory; it indicates whether a change in the buffer cache size would improve the hit ratio.

17. What does the error ORA-01555 signify?
ORA-01555 ("snapshot too old") is raised when a query needs a read-consistent image of data whose undo has already been overwritten. Setting the undo retention parameters appropriately can overcome this error.

18. What is the difference between fact tables and dimension tables?
Fact tables contain the real values (measures) that are used as metrics. Dimension tables classify and categorize the facts and help us infer more information from the overall schema.

19. What are parallel servers?
Multiple instances of the Oracle server accessing the same database from different machines are referred to as parallel servers (Oracle Parallel Server, the predecessor of RAC). This configuration is possible on multi-CPU systems and can greatly improve overall performance for complex queries that access multiple tables.

20. Can views be based on other views?
Yes the views can be created from other views by directing a SELECT query to use the other view’s data.
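For example (the view and table names are hypothetical):

```sql
CREATE VIEW dept10_emp AS
    SELECT empno, ename, sal
    FROM   emp
    WHERE  deptno = 10;

-- A view defined on top of another view
CREATE VIEW dept10_highpaid AS
    SELECT empno, ename
    FROM   dept10_emp
    WHERE  sal > 3000;
```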

21. What is a schema?
A schema refers to the overall organization of data into multiple tables. The relationships among these tables, and the interactions between them that produce the final output, are expressed in the schema through the relationship mapping across the various tables and fields.

22. What are the different locks that you can have over a table?
Oracle offers several table lock modes: ROW SHARE, ROW EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, and EXCLUSIVE. Which modes can be held concurrently depends on the mode any other session already holds on the table.

Describe the following:

a) Oracle Archive log
b) Oracle catalog

a) Oracle archive logs are copies of filled online redo log files. They make it possible to recover the database after an unforeseen failure, and they can also be used for log mining. Archiving improves system availability and thus dependability, one of the features necessary to win user confidence; without it, users will often reject the system in question.
Archiving also protects the organization, since data is often more expensive than the computer system on which it is handled; an abrupt loss can mean losing the money and time invested in producing it, so having a fallback log is very handy.
In summary, archive logs are captured and stored records of database changes.
b) Oracle catalog: this is the data dictionary, comprising tables and views that let users obtain information about the entire database, such as metadata about the tables and views available in the database, their attributes, and their constraints. In Oracle systems the catalog runs to several hundred views, and an average user has access to only a few of them.

Describe PCT Free.
PCTFREE specifies the percentage of space in each data block that is reserved for future updates of rows already in the block. This matters when a record, say "Isaac", may in future be updated to a longer value such as "Isaac Kumar".

If, for instance, PCTFREE is set to 10, Oracle stops adding new rows to a block once only 10 percent of it remains free, keeping that 10 percent for future updates.
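For example (the table name is hypothetical):

```sql
-- Reserve 10% of each block for growth of existing rows
CREATE TABLE customers (
    id   NUMBER,
    name VARCHAR2(100)
) PCTFREE 10;
```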


Describe the difference between hot back up and cold backup as well as the benefits associated with each of them.
a) Hot backup: this is the type of backup which takes place while the system is up and running, with the database in ARCHIVELOG mode.
Benefits of the hot backup: the database can be recovered to any instant, since this kind of backup is taken while the database is up and running and so captures everything that occurs in the database at any point in time.
It is also convenient in that the database remains available to users even while the backup is running.
b) Cold backup: this is the type of backup which occurs while the database is shut down, and hence it does not require ARCHIVELOG mode. Its advantage is that both the backup and the recovery processes are easy to administer. The fact that the database need not be in ARCHIVELOG mode during the backup is an advantage in its own right, as there is usually a performance gain since the database writes no archive logs to disk.
Explain how you would bring up a database in a situation whereby you are restoring from a backup and have no control files at all.

The following would be the recommended procedure:
  • Create a text based backup control file whereby you stipulate the location of all data files in the disk.
  • Issue the RECOVER command with the USING BACKUP CONTROLFILE clause.

Describe the term, ‘Dump Destination’; also explain in details the meaning of bdump, cdump and udump.
The term dump destination refers to the locations harboring the trace files for the various Oracle processes.

adump--> for ASM processes
udump--> User Processes dump
bdump-->Background processes + alert_SID.log file location
cdump--> Core Processes dump
Useful information pertinent to system failures is contained in these destinations.

Name the four performance bottlenecks that are likely to be found in a database server.
  • Central processing unit (CPU) bottlenecks
  • Incorrect memory structures, also known as the undersized memory structures
  • Poor SQL statements,
  • Poor database settings

Name the four errors that one can find in an alert (error) log which are capable of ruining the appetite of the DBA and also state how one can avoid them.
  • Oracle internal errors
  • Backup and recovery errors
  • "Snapshot too old" errors
  • Deadlock Errors

Is it possible to perform DML operations when the standby database is open read-only?

There are two kinds of standby: the logical standby and the physical standby.
A physical standby resembles the primary database in structure.
In read-only mode, it is possible to run SQL queries on it; in the other mode, redo-apply mode, the database is not accessible to queries.

A logical standby may have a structure different from that of the primary database. While it is open, the SQL statements generated from the redo are applied, and queries can run concurrently.

In read-write mode, one can modify data in tables created in addition to the primary schema. It is worth noting that such settings can be overridden by specifying further security options.

Explain what you will do in an event that a situation arises whereby an error occurs which you are not able to troubleshoot yet there are no seniors or colleagues around.
First, find the particular point in compilation where the error occurs. Then divide the code into segments and check the correctness of the code line by line in the segment containing the error; in other words, debug the code until the source of the problem is found, and then take appropriate action based on the nature of the problem.

Assuming you are an Oracle Database Administrator (DBA), name the UNIX commands and files you need to understand.
To check the process I would use
ps -ef |grep pmon
ps -ef

To check the alert log file I would use
tail -f alert.log 

To check the cpu usage, I would use
top
vmstat 25
How do you convert a locally managed tablespace into a dictionary managed tablespace and vice versa?
To convert a dictionary managed tablespace into a locally managed tablespace, do the following:

Enter the command:
exec dbms_space_admin.Tablespace_Migrate_to_Local('tablespace');

But to convert the locally managed tablespace back to dictionary managed, the procedure is:

Enter the command:
exec dbms_space_admin.Tablespace_Migrate_FROM_Local('tablespace');

State the different scenarios that lead to rebuilding of indexes.
Essentially, there are two scenarios to consider:
  • Deleted entries represent more than twenty percent of the current entries.
  • The depth of the index has grown to more than four levels.

It is worth noting that Oracle index nodes are not physically deleted when table rows are deleted, nor are the entries removed from the index; Oracle deletes them logically, leaving behind so-called "dead nodes" in the index tree. A deleted node may be reused when another adjacent entry is needed, but when a large number of rows has been deleted, the dead leaf nodes make index full scans take longer, which is a disadvantage.

What should be checked in an event that a database hangs when the developer is attempting to compile the package?
Here, one checks the alert log and trace files in the bdump destination.

What to do when rectifying a situation a certain rollback segment gets corrupted?
In such a situation, one restores and then recovers the corrupted rollback segment.

State one of the ways of transferring a table between different schemas.
Out of the many available methods, one may use the following:
1) Export-Import
2) Create table…as SELECT or COPY
State the importance of the IMPORT IGNORE Option, also state its default setting.
The IGNORE=Y option tells import to ignore object-creation errors for objects that already exist: the creation error is ignored and the table's data is inserted, rather than the table being skipped. Its default value is N, so existing tables are skipped by default.
State what happens in event that the DEFAULT and TEMPORARY tablespace clauses aren’t included in the CREATE USER command.
In such a situation, the user is assigned the SYSTEM tablespace as both the default and the temporary tablespace. This isn't advisable, since it sends the user's objects and temporary segments to the SYSTEM tablespace, leading to fragmentation and poor table placement; only the data dictionary objects and the system rollback segment should be in SYSTEM.

State what transpires when the constraint name is not included in the constraint clause.
Oracle then generates a default name of the form SYS_Cxxxx, where xxxx is a system-generated number. This habit is not advisable, as it obscures which table a particular constraint belongs to and what the constraint does.

Describe clearly what would transpire in an event that a tablespace clause is omitted from the primary key constraint clause.
The result is that the automatically generated index is placed in the user's default tablespace, which will usually be the same tablespace as the table being created, so the two collide. This has a high probability of causing performance problems, and since performance is one of the factors users consider when procuring computer systems, one is highly encouraged to avoid this habit.

State what happens in event that one disables the primary key constraints and later on enables it without specifying the primary index fully.
In such a situation, the following becomes the outcome:
  • All sizing information is lost.
  • The index is recreated in the user's default tablespace.

It is worth noting that this sizing information is stored by Oracle as part of the index definition, not as part of the constraint definition; when the constraint is disabled, the index is dropped and the information is lost.
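The safer pattern, then, is to re-specify the index properties when re-enabling the constraint (the table, constraint, and tablespace names are hypothetical):

```sql
ALTER TABLE orders DISABLE CONSTRAINT orders_pk;

-- ... bulk load or maintenance work here ...

ALTER TABLE orders ENABLE CONSTRAINT orders_pk
    USING INDEX TABLESPACE indx_ts
    STORAGE (INITIAL 10M NEXT 1M);
```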

Explain the cause of the error called ‘snapshot too old’ stating its prevention or even ways with which it can be mitigated.
This error arises when a large or long-running transaction needs read-consistent data whose undo has either wrapped around within its rollback segment or been overwritten by other transactions. The easiest way to mitigate or prevent it is to employ the divide and rule method: break the transaction into smaller units, or increase the sizes of the rollback segments and their extents.
How to identify invalid database object?
Check the STATUS column of DBA_OBJECTS, ALL_OBJECTS, or USER_OBJECTS, as determined by the kind of permission at your disposal, i.e. whether you have privileges on the DBA views or are using an ordinary account.
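For example:

```sql
-- List all invalid objects in the database (requires DBA privileges)
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID';
```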
State the best way of obtaining sizing data to be used in production table implementation assuming one has an example table.
Analyze the example table, then use the data recorded in the DBA_TABLES view, including the average row length, for the sizing calculation. Another way is to count the actual number of blocks the table uses and scale that block count by the ratio of expected rows to current rows, though this procedure is considered quick but dirty.

How do you find out the number of users logged into a database at any particular time, along with their operating system IDs?

There are many ways to solve this, e.g. by looking at V$SESSION or V$PROCESS.
One can also check the current logons statistic in the V$SYSSTAT view.
For UNIX users, the command "ps -ef | grep oracle | wc -l" can be tried, though this works only against a single-instance installation.
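For example, to list the logged-in users along with their operating system IDs:

```sql
SELECT username, osuser, machine
FROM   v$session
WHERE  username IS NOT NULL;
```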
Describe the recommended way to disable and re-enable a primary key constraint.
Use the ALTER TABLE command in both cases; when it comes to the ENABLE clause, specify the USING INDEX and TABLESPACE clauses that pertain to the primary key.

Identify when it is necessary to use more than one database writer process, stating the number that should be used at such times.