Oracle Database - Enterprise Edition - Version 12.1.0.2 and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Cloud at Customer - Version N/A and later
Information in this document applies to any platform.
NOTE: In the images and/or the document content below, the user information and data used represents fictitious data from the Oracle sample schema(s) or Public Documentation delivered with an Oracle database product. Any similarity to actual persons, living or dead, is purely coincidental and not intended in any manner.
Maximum Availability Architecture
The Maximum Availability Architecture (MAA) defines Oracle’s most comprehensive architecture for reducing downtime for scheduled outages as well as preventing, detecting and recovering from unscheduled outages. Real Application Clusters (RAC) and Oracle Data Guard are integral components of the Database MAA reference architectures and solutions.
More detailed information, such as a discussion of the purpose of MAA and the benefits it provides, can be found on the Oracle Technology Network (OTN) at Maximum Availability Architecture (MAA) | Oracle
Purpose of this Document
This document provides a step-by-step guide for instantiating a standby database using the RMAN 'from service' clause to copy files directly from the primary database over an Oracle Net connection.
About RMAN 'FROM SERVICE' clause
The RMAN 'from service' clause enables the restore and recovery of primary database files to a standby database across the network. This functionality can be used to instantiate a standby database in lieu of the RMAN DUPLICATE DATABASE command; it is more intuitive and less error prone, saving time. Additionally, using the SECTION SIZE clause with multiple RMAN channels improves the parallelization of the restore, further reducing instantiation times.
NOTE: This 'FROM SERVICE' method can be used to restore or recover an entire database, individual data files, control files, server parameter file, or tablespaces. This method is also useful in synchronizing the primary and standby databases.
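For illustration, the same clause can target individual files as well as the whole database. A hedged sketch (the datafile number and service name are placeholders; the target must be in the appropriate mount/nomount state):

```text
RMAN> restore datafile 4 from service '<primary database TNS alias>';
RMAN> restore spfile from service '<primary database TNS alias>';
RMAN> restore standby controlfile from service '<primary database TNS alias>';
```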
This paper assumes that the following conditions exist:
All of the example names illustrated in this document use the following naming:
Hosts and Databases Used in this Example

| | Primary | Standby |
| --- | --- | --- |
| Hosts | <primaryhost1>, <primaryhost2> | <standbyhost1>, <standbyhost2> |
| Database unique name | <primary unique name> | <standby unique name> |
| Instance names | <primary unique name>1, <primary unique name>2 | <standby unique name>1, <standby unique name>2 |
The following are the steps used to create the Data Guard standby database:
Enable force logging on the primary database:
[oracle@<primaryhost1>]$ sqlplus / as sysdba
SQL> alter database force logging;
SQL> exit
Check existing redo log members and their sizes.
[oracle@<primaryhost1>]$ sqlplus / as sysdba
SQL> select thread#,group#,bytes,status from v$log;
THREAD#    GROUP#     BYTES          STATUS
---------- ---------- -------------- ----------------
1          1          4294967296     CURRENT
1          2          4294967296     UNUSED
1          3          4294967296     UNUSED
2          4          4294967296     CURRENT
2          5          4294967296     UNUSED
2          6          4294967296     UNUSED
Create standby redo logs on the primary, sized the same as the online redo logs:
[oracle@<primaryhost1>]$ sqlplus / as sysdba
SQL> alter database add standby logfile thread 1
group 7 ('+DATAC1') size 4294967296,
group 8 ('+DATAC1') size 4294967296,
group 9 ('+DATAC1') size 4294967296;
SQL> alter database add standby logfile thread 2
group 11 ('+DATAC1') size 4294967296,
group 12 ('+DATAC1') size 4294967296,
group 13 ('+DATAC1') size 4294967296;
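The new standby redo logs can be verified with a quick query against v$standby_log (shown here as a sanity check on the primary where they were just created):

```sql
-- Confirm standby redo log groups and sizes match the online redo logs
select thread#, group#, bytes, status from v$standby_log order by thread#, group#;
```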
Set standby file management to AUTO on the primary:
[oracle@<primaryhost1>]$ sqlplus / as sysdba
SQL> alter system set STANDBY_FILE_MANAGEMENT=AUTO scope=both sid='*';
SQL> exit
Copy the password file from the primary to the standby host:
[oracle@<primaryhost1>]$ asmcmd cp +DATAC1/<primary unique name>/PASSWORD/passwd /tmp/passwd.<primary unique name>
copying +DATAC1/<primary unique name>/PASSWORD/passwd -> /tmp/passwd.<primary unique name>
[oracle@<primaryhost1>]$ scp /tmp/passwd.<primary unique name> oracle@<standbyhost1>:/tmp/orapw<standby unique name>
NOTE: If Transparent Data Encryption (TDE) is enabled on the primary, the TDE wallet must be copied to the standby also.
[oracle@<primaryhost1>]$ export ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/dbhome_1
[oracle@<primaryhost1>]$ export ORACLE_SID=<primary unique name>1
[oracle@<primaryhost1>]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@<primaryhost1>]$ sqlplus / as sysdba
SQL> create pfile='/tmp/primary.pfile' from spfile;
[oracle@<primaryhost1>]$ scp /tmp/primary.pfile oracle@<standbyhost1>:/tmp/standby.pfile
TNS aliases used in the Oracle Data Guard configuration (add to tnsnames.ora on the primary and standby hosts):

<primary unique name> = <Oracle Net descriptor for the primary database>
<standby unique name> = <Oracle Net descriptor for the standby database>

##### RAC ONLY #####
<primary unique name>2 = <descriptor for primary instance 2>
<standby unique name>1 = <descriptor for standby instance 1>
<standby unique name>2 = <descriptor for standby instance 2>
# < create an entry for each instance of primary and standby if more than two >
##### END RAC ONLY #####
On all standby hosts create the audit directory for the standby database.
[oracle@<standbyhost1>]$ mkdir -p /u01/app/oracle/admin/<standby unique name>/adump
[oracle@<standbyhost2>]$ mkdir -p /u01/app/oracle/admin/<standby unique name>/adump
Register the standby database with clusterware and start it in nomount mode.
Single instance example:
[oracle@<standbyhost1>]$ export ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/dbhome_1
[oracle@<standbyhost1>]$ export ORACLE_SID=<standby unique name>1
[oracle@<standbyhost1>]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@<standbyhost1>]$ srvctl add database -db <standby unique name> -oraclehome /u01/app/oracle/product/12.2.0.1/dbhome_1 -dbtype SINGLE -instance <standby unique name>1 -node <standbyhost1> -dbname <primary unique name> -diskgroup RECOC1,DATAC1 -role physical_standby
RAC example:
[oracle@<standbyhost1>]$ export ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/dbhome_1
[oracle@<standbyhost1>]$ export ORACLE_SID=<standby unique name>1
[oracle@<standbyhost1>]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@<standbyhost1>]$ srvctl add database -db <standby unique name> -oraclehome /u01/app/oracle/product/12.2.0.1/dbhome_1 -dbtype RAC -dbname <db_name parameter> -diskgroup RECOC1,DATAC1 -role physical_standby
[oracle@<standbyhost1>]$ srvctl add instance -database <standby unique name> -instance <standby unique name>1 -node <standbyhost1>
[oracle@<standbyhost1>]$ srvctl add instance -database <standby unique name> -instance <standby unique name>2 -node <standbyhost2>
***Repeat 'add instance' as needed for each instance of the standby database
First create the database directory in the DATA disk group, then place the password file (copied earlier from the primary to /tmp) into ASM.
[oracle@<standbyhost1>]$ export ORACLE_HOME=/u01/app/12.2.0.1/grid   <- Grid Home
[oracle@<standbyhost1>]$ export ORACLE_SID=+ASM1
[oracle@<standbyhost1>]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@<standbyhost1>]$ asmcmd mkdir +DATAC1/<standby unique name>
[oracle@<standbyhost1>]$ asmcmd mkdir +DATAC1/<standby unique name>/PASSWORD
[oracle@<standbyhost1>]$ asmcmd --privilege sysdba pwcopy -f --dbuniquename <standby db_unique_name> /tmp/orapw<standby unique name> +DATAC1/<standby unique name>/PASSWORD/orapw<standby unique name>
*** This command will update clusterware for the database’s pwfile location
Edit the pfile copied to the standby in step 5 (/tmp/standby.pfile), updating db_unique_name and the instance-specific RAC parameters. For example:
NOTE: This list is not exhaustive. There are many parameters that may need to be changed due to a change in db_unique_name, disk group names, or db_domain. Review each parameter in the pfile and change as appropriate.
Parameters to be modified on the standby as compared to the primary

Primary (for reference; only the CONVERT parameters may change):

*.cluster_database=TRUE
<primary unique name>2.instance_number=2
<primary unique name>1.instance_number=1
<primary unique name>2.thread=2
<primary unique name>1.thread=1
<primary unique name>2.undo_tablespace='UNDOTBS2'
<primary unique name>1.undo_tablespace='UNDOTBS1'
……….
*.db_unique_name=<primary unique name>
*.audit_file_dest=/u01/app/oracle/admin/<primary unique name>/adump
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) MAX_FAILURE=1 REOPEN=5 DB_UNIQUE_NAME=<primary unique name> ALTERNATE=LOG_ARCHIVE_DEST_10'
*.log_archive_dest_10='LOCATION=+DATAC1 VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=<primary unique name> ALTERNATE=LOG_ARCHIVE_DEST_1'
*.log_archive_dest_state_10='ALTERNATE'
*.control_files='+DATAC1/<primary unique name>/CONTROLFILE/control.ctl'
# The CONVERT parameters are not dynamic and require a restart of the database.
*.LOG_FILE_NAME_CONVERT='+DATAC1/<standby unique name>','+DATAC1/<primary unique name>'
*.DB_FILE_NAME_CONVERT='+DATAC1/<standby unique name>','+DATAC1/<primary unique name>'

Standby (changes to /tmp/standby.pfile):

*.cluster_database=TRUE
<standby unique name>2.instance_number=2
<standby unique name>1.instance_number=1
<standby unique name>2.thread=2
<standby unique name>1.thread=1
<standby unique name>2.undo_tablespace='UNDOTBS2'
<standby unique name>1.undo_tablespace='UNDOTBS1'
……….
*.db_unique_name=<standby unique name>
*.audit_file_dest=/u01/app/oracle/admin/<standby unique name>/adump
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) MAX_FAILURE=1 REOPEN=5 DB_UNIQUE_NAME=<standby unique name> ALTERNATE=LOG_ARCHIVE_DEST_10'
*.log_archive_dest_10='LOCATION=+DATAC1 VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=<standby unique name> ALTERNATE=LOG_ARCHIVE_DEST_1'
*.log_archive_dest_state_10='ALTERNATE'
*.control_files='+DATAC1'
*.LOG_FILE_NAME_CONVERT='+DATAC1/<primary unique name>','+DATAC1/<standby unique name>'
*.DB_FILE_NAME_CONVERT='+DATAC1/<primary unique name>','+DATAC1/<standby unique name>'
# For 12.2 and higher, remove remote_listener and local_listener; they will be picked up by clusterware. For 12.1, edit accordingly.
NOTE: The database parameter db_name must be the same between the primary and all standby databases.
NOTE: The CONVERT parameters, log_file_name_convert and db_file_name_convert, are not required for file name translation when Oracle Managed Files is used and the standby is on a different cluster than the primary. Setting LOG_FILE_NAME_CONVERT to some value enables online redo log pre-clearing, which improves role transition performance.
NOTE: If disk group names are different between the primary and standby, change all disk group names accordingly.
[oracle@<standbyhost1>]$ rman target /
RMAN> startup nomount pfile='/tmp/<pfile name>'
RMAN> restore standby controlfile from service '<primary unique name>';   <- the service name is the TNS alias for the primary database
The 'output file name' from the restore will show the OMF file name for the controlfile of the standby. This will be added to the pfile before creating an spfile.
For example:
Starting restore at <DATE>
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=<sid> instance=<instance> device type=DISK
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service <service>
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:10
output file name=<OMF controlfile name>   <-- THIS FILE NAME WILL BE ADDED TO THE PFILE
Finished restore at <DATE>
RMAN> alter database mount;
Edit /tmp/standby.pfile and set the control_files parameter to the 'output file name' returned by the restore:
control_files='<OMF controlfile name>'
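As an illustrative sketch (the file path and OMF name below are made-up placeholders), the pfile edit can be scripted with sed:

```shell
# Hypothetical example: replace the control_files entry in the standby pfile
# with the OMF name reported by the controlfile restore ("output file name=").
pfile=/tmp/standby.pfile.example
printf "*.db_unique_name=stby\n*.control_files='+DATAC1'\n" > "$pfile"
omf_name="+DATAC1/STBY/CONTROLFILE/current.256.1125436879"   # placeholder OMF name
sed -i "s|^\*\.control_files=.*|*.control_files='${omf_name}'|" "$pfile"
grep '^\*\.control_files' "$pfile"
```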
[oracle@<standbyhost1>]$ sqlplus "/ as sysdba"
SQL> create spfile='+DATAC1/<standby unique name>/spfile<standby unique name>.ora' from pfile='/tmp/standby.pfile';
File created.
*** This will update clusterware with the spfile location
SQL> shutdown immediate
Then mount the database (using the spfile and new standby controlfile):
[oracle@<standbyhost1>]$ srvctl start database -d <standby db_unique_name> -o mount
The standby database can be encrypted during instantiation when the primary is not encrypted. In this scenario, the TDE wallet is created on the primary database and copied to the standby, and the restore command with the AS ENCRYPTED clause will encrypt datafiles as they are restored.
This is useful for Hybrid Data Guard configurations where the primary is on premises and the standby is in the cloud. Another use case is a fully encrypted configuration: encrypt the standby with RESTORE AS ENCRYPTED, execute a switchover to make the encrypted database the primary database, then use offline encryption to encrypt the original primary as a standby.
OFFLINE encryption uses the database default encryption algorithm; an algorithm cannot be passed with the encrypt command. By default, for versions through 23c, the default encryption algorithm is AES128. If a different algorithm is desired, set "_tablespace_encryption_default_algorithm" accordingly on both the primary and standby databases. This must be done prior to the first SET KEY command that sets the master encryption key.
ALTER SYSTEM SET "_TABLESPACE_ENCRYPTION_DEFAULT_ALGORITHM"=AES256 SCOPE = BOTH SID = '*';
For 21c and later, the parameter name changes to TABLESPACE_ENCRYPTION_DEFAULT_ALGORITHM:
ALTER SYSTEM SET "TABLESPACE_ENCRYPTION_DEFAULT_ALGORITHM"=AES256 SCOPE = BOTH SID = '*'; -- for 21c and later
For a full list of supported algorithms, refer to Supported Encryption and Integrity Algorithms for the appropriate database version.
WARNING: For reasons of compatibility between versions, if the master key was created/set in database version 11.1, after upgrading to 11.2 or higher the master key should be re-keyed.
The keystore must be created in a read-write database and is therefore configured on the primary database. The standby database then must have access to that keystore, either through OKV or, in the case of file-based keystores, by copying the files to the standby keystore location.
Follow the steps in Configuring Transparent Data Encryption for the desired type of keystore.
Return to this document when you have reached the 'Encrypt Your Data' step.
The standby database needs access to the same keystore as was created on the primary. In the case of file-based keystores, the files must be copied to the standby's <WALLET_ROOT>/tde location. Refer to the documentation for configuring standby access for other methods.
On the standby, check the number of CPUs to help choose the RMAN parallelism degree:
[oracle@<standbyhost1>]$ grep -c ^processor /proc/cpuinfo
[oracle@<standbyhost1>]$ rman target /
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK;
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 8;
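The CPU count can feed a starting value for PARALLELISM. The min(cpus/2, 8) heuristic below is an assumption for illustration only, not an Oracle-documented formula; validate the channel count against your own network and I/O evaluation:

```shell
# Hypothetical helper: derive a starting RMAN parallelism from the CPU count.
pick_channels() {
  local c=$(( $1 / 2 ))
  if [ "$c" -gt 8 ]; then c=8; fi
  if [ "$c" -lt 1 ]; then c=1; fi
  echo "$c"
}
cpus=$(grep -c ^processor /proc/cpuinfo 2>/dev/null || echo 2)
echo "CONFIGURE DEVICE TYPE DISK PARALLELISM $(pick_channels "$cpus");"
```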
Large Database Optimizations
The two topics below, instance parallelism and gap resolution with recover from service, are optimizations for large databases. While this note is relevant for most cases without these optimizations, they can significantly shorten the time required to instantiate very large databases.
Instance Parallelism - Using multiple channels with the PARALLELISM setting is helpful in utilizing the available resources of a single node. This is generally enough for small to medium sized databases measured in the hundreds of gigabytes or less. For larger databases, spreading the RMAN work across multiple nodes/instances of a RAC cluster can utilize additional resources and bandwidth across the cluster. This process is described further in the next step.
Gap Resolution with RECOVER DATABASE FROM SERVICE - While restore database from service is running, the gap in redo needed to make the database consistent grows: the first data file is copied as of one point in time, while the last data file is copied as of a later time. Therefore, when the restore is complete, the database is not consistent and requires redo from the primary database to catch up. In most cases, simply enabling redo transport will resolve the gap in a reasonable amount of time. However, there are cases for which transport/apply would be either complicated or lengthy. For example:
For these situations, it is often more efficient to use RECOVER DATABASE FROM SERVICE to 'roll forward' the standby database. The recovery process is optimized to copy only the blocks that have changed.
The steps to execute recover database from service are described at the appropriate point in the process below.
For more details, see the RMAN documentation.
Restoring the datafiles from the primary to the standby for instantiation is initiated on the standby cluster. Utilization of the channels configured for parallelization in the previous step can be improved in some cases by using the RMAN SECTION SIZE clause.
Parallelization across channels is most efficient when the datafiles are all equally sized. Since each file by default is copied by only one channel, if one or two files are significantly larger than the rest, those files keep a channel busy long after the channels handling the other files have finished and sit idle. When a small subset of files is larger than the rest, allowing RMAN to copy those large files in sections can utilize all channels to maximum efficiency. The section size is set with the RMAN SECTION SIZE clause. RMAN testing has shown SECTION SIZE=64G to provide the best efficiency for files less than 16TB. If a data file is smaller than the chosen section size, it is not broken into sections during the restore.
On the primary, query the largest datafile size to determine the section size to be used for the recover command.
SQL> select max(bytes)/1073741824 GB from v$datafile;
If the largest file is:
<15TB, use a section size of 64G
>15TB and <30TB, use a section size of 128G
>30TB and <60TB, use a section size of 256G
>60TB, use a section size of 512G
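The thresholds above can be expressed as a small helper. This is an illustrative sketch (the function name is invented); input is in GB to match the query above, so 15TB = 15360 GB:

```shell
# Hypothetical helper mapping the largest datafile size (in GB) to the
# recommended RMAN SECTION SIZE from the table above.
pick_section_size() {
  local largest_gb=$1
  if   [ "$largest_gb" -lt 15360 ]; then echo "64G"
  elif [ "$largest_gb" -lt 30720 ]; then echo "128G"
  elif [ "$largest_gb" -lt 61440 ]; then echo "256G"
  else echo "512G"
  fi
}
pick_section_size 8000   # largest file ~7.8 TB
```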
For more information, refer to the RMAN 'restore from service' documentation. Also refer to the best practices for Cloud.
To initiate the copy of files, connect to the standby database and issue the restore command below using the descriptor in the tnsnames.ora for the primary database. In this example, that is the primary db_unique_name.
[oracle@<standbyhost1>]$ rman target /
RMAN> backup spfile;
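The restore itself might look like the following sketch (adjust the service name to your TNS alias for the primary and the section size per the guidance above):

```text
RMAN> restore database from service '<primary unique name>' section size 64G;
```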
RMAN> switch database to copy;   <- This may result in a no-op (nothing executes)
NOTE: If the restore database from service fails, the missing files can be identified (select HXFNM from x$kcvfh where FHTYP=11;) and a restore command generated for them:
select 'restore datafile '||listagg(FILE#,', ') within group (order by file#)||' from service <primary unique name> section size 64G;' from v$datafile_header where ERROR='FILE NOT FOUND';
For example: RESTORE TABLESPACE <tablespace name> FROM SERVICE <tns alias for primary database> SECTION SIZE <section size> AS ENCRYPTED;
For larger databases, the following can be used instead of the previous code block.
To parallelize the restore across all instances and utilize the bandwidth of all nodes of a cluster, use the connect clause in RMAN. In this method, the parallelization is created by allocating channels in the run block rather than by the setting defined in the previous step.
If you have followed this note, at this point not all instances are mounted. Stop the database and start all instances in mount mode, then execute the run block with the proper substitutions.
Allocate the number of channels to match results of the network evaluation to achieve the best performance.
For 4-node and 8-node clusters allocate additional channels connecting to those instances.
[oracle@<standbyhost1>]$ srvctl stop database -db <standby unique name>
RMAN> switch database to copy;   <- This is not always necessary so may result in a no-op (nothing executes)
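The run block referenced above is not shown in full; as a sketch modeled on the recover example later in this note (with restore substituted for recover, and channels added per instance as needed):

```text
[oracle@<standbyhost1>]$ srvctl start database -db <standby unique name> -o mount
[oracle@<standbyhost1>]$ rman target sys/<password>
RMAN> run {
allocate channel c1 type disk connect '/@<standby instance 1 SID_NAME>';
allocate channel c2 type disk connect '/@<standby instance 2 SID_NAME>';
restore database from service '<primary unique name>' section size 64G;
}
```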
NOTE:?For larger clusters or a higher degree of parallelism, allocate additional channels accordingly.
For large databases which took a long time to copy, generated a lot of redo during the copy, or do not have the archived logs available at the primary since the copy started, use this RECOVER DATABASE FROM SERVICE option before enabling Data Guard Broker in the next step. For databases which do not meet that description, skip to the next step.
[oracle@<standbyhost1>]$ srvctl stop database -d <standby unique name>
[oracle@<standbyhost1>]$ rman target /
RMAN > startup nomount;
RMAN > restore standby controlfile from service <primary unique name>;
RMAN > alter database mount;
RMAN > catalog start with '<DATA DISK GROUP>' noprompt; <-- This step catalogs all files into the copied controlfile.
RMAN > catalog start with '<RECO DISK GROUP>' noprompt; <-- Some files cannot be cataloged and will generate an error.
RMAN > switch database to copy;
RMAN > shutdown immediate;
RMAN > exit;
[oracle@<standbyhost1>]$ srvctl start database -d <standby unique name> -o mount
This process uses the instance parallelization method; normal parallelization, where all channels run on one instance, can also be used.
Allocate the number of channels to match results of the network evaluation to achieve the best performance.
For 4-node and 8-node clusters allocate additional channels connecting to those instances.
-- All database instances should be mounted
[oracle@<standbyhost1>]$ rman target sys/<password>   <- It is necessary to connect with the password
RMAN > run {
allocate channel c1 type disk connect '/@<standby instance 1 SID_NAME>';
allocate channel c2 type disk connect '/@<standby instance 1 SID_NAME>';
allocate channel c3 type disk connect '/@<standby instance 1 SID_NAME>';
allocate channel c4 type disk connect '/@<standby instance 1 SID_NAME>';
allocate channel c5 type disk connect '/@<standby instance 2 SID_NAME>';
allocate channel c6 type disk connect '/@<standby instance 2 SID_NAME>';
allocate channel c7 type disk connect '/@<standby instance 2 SID_NAME>';
allocate channel c8 type disk connect '/@<standby instance 2 SID_NAME>';
recover database from service '<primary unique name>' section size 64G;
}
RMAN> switch database to copy;   <- This is not always necessary so may result in a no-op (nothing executes)
RMAN > exit;
NOTE: This process can be used any time a standby has an unresolvable or large gap as a means of catching up to the primary. See MOS Note 2850185.1 for a full description of the process on an established standby database.
Clear all Online Redo Logs and Standby Redo Logs
[oracle@<standbyhost1>]$ sqlplus / as sysdba
begin
for log_cur in ( select group# group_no from v$log )
loop
execute immediate 'alter database clear logfile group '||log_cur.group_no;
end loop;
end;
/
begin
for log_cur in ( select group# group_no from v$standby_log )
loop
execute immediate 'alter database clear logfile group '||log_cur.group_no;
end loop;
end;
/
Modify the script below to your environment and save as PostCR.sql
NOTE: These commands can also be executed individually from sqlplus as sys
alter system set dg_broker_config_file1='+DATAC1/<standby unique name>/dr1.dat' scope=spfile;
alter system set dg_broker_config_file2='+RECOC1/<standby unique name>/dr2.dat' scope=spfile;
alter system set dg_broker_start=true scope=spfile;
shutdown immediate
startup mount
alter system register;
connect sys/<password>@<primary unique name> as sysdba
alter system set dg_broker_config_file1='+DATAC1/<primary unique name>/dr1.dat' scope=both sid='*';
alter system set dg_broker_config_file2='+RECOC1/<primary unique name>/dr2.dat' scope=both sid='*';
alter system set dg_broker_start=TRUE;
host sleep 30
host dgmgrl sys/<password>@<primary unique name> "CREATE CONFIGURATION dgconfig AS PRIMARY DATABASE IS <primary unique name> CONNECT IDENTIFIER IS <primary unique name>";
host sleep 30
host dgmgrl sys/<password>@<primary unique name> "ADD DATABASE <standby unique name> AS CONNECT IDENTIFIER IS <standby unique name>" ;
host dgmgrl sys/<password>@<primary unique name> "ENABLE CONFIGURATION"
exit
On the standby, run the script:
SQL> @PostCR.sql
[oracle@<standbyhost1>]$ srvctl stop database -db <standby unique name> -o immediate
[oracle@<standbyhost1>]$ srvctl start database -db <standby unique name>
DGMGRL> show configuration
Configuration - dgconfig
Protection Mode: MaxPerformance
Members:
<primary unique name> - Primary database
<standby unique name> - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 58 seconds ago)
The following settings are recommended per MAA best practices and should be set on the primary and standby databases:
The parameters below help actively protect against data corruption.
DB_BLOCK_CHECKING=MEDIUM or higher
NOTE: DB_BLOCK_CHECKING can have performance implications on a primary database. Any changes to this setting should be thoroughly tested before implementing.
DB_BLOCK_CHECKSUM=TYPICAL or higher
DB_LOST_WRITE_PROTECT=TYPICAL
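A sketch of setting these with ALTER SYSTEM on both primary and standby (values per the recommendations above; test the DB_BLOCK_CHECKING impact first):

```sql
alter system set db_block_checking=MEDIUM scope=both sid='*';
alter system set db_block_checksum=TYPICAL scope=both sid='*';
alter system set db_lost_write_protect=TYPICAL scope=both sid='*';
```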
Flashback database is required to reinstate a failed primary after a failover.
It is an MAA recommendation, but it has some performance implications; test to determine whether the impact to application performance is acceptable.
NOTE: Without flashback enabled, the primary database must be fully re-instantiated after a failover using another restore from service. Switchover does not require flashback database.
Primary:
sqlplus / as sysdba
SQL> alter database flashback on;
To enable flashback database on the standby, the redo apply process must first be stopped. Once flashback has been enabled, redo apply can be restarted:
DGMGRL> CONNECT sys/<password>@<standby unique name>
DGMGRL> EDIT DATABASE <standby unique name> SET STATE=APPLY-OFF;
DGMGRL> SQL "ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=1440 SCOPE=BOTH SID='*'";
DGMGRL> SQL "ALTER DATABASE FLASHBACK ON";
DGMGRL> EDIT DATABASE <standby unique name> SET STATE=APPLY-ON;