
Data Guard Switchover Steps in Case of Disaster

Note: because these steps activate the standby when the primary is lost (RECOVER ... FINISH FORCE), the procedure is technically a failover rather than a clean switchover.




1. Cancel managed recovery (archivelog apply) on the standby:

alter database recover managed standby database cancel;

2. Check whether an archivelog gap exists on the standby:

SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;
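V$ARCHIVE_GAP can return no rows even when the standby is behind, so as an extra sanity check (my addition, not part of the original steps) you can compare the highest sequence received with the highest sequence applied on the standby:

```sql
-- Sketch: run on the standby. A gap between these two numbers means
-- redo has arrived but has not yet been applied.
SELECT MAX(sequence#) AS last_received
FROM   v$archived_log;

SELECT MAX(sequence#) AS last_applied
FROM   v$archived_log
WHERE  applied = 'YES';
```

If last_received on the standby is lower than the current sequence on the primary, the missing files are the ones to copy in step 3.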

3. If a gap exists, copy the missing archive files to the standby at the OS level, then register each one with the commands below. In my case, four archivelogs had not reached the standby.

ALTER DATABASE REGISTER PHYSICAL LOGFILE '/RECOVERY_DESTINATION/STAN/archivelog/2012_02_07/o1_mf_1_15_7m1w964b_.arc';

ALTER DATABASE REGISTER PHYSICAL LOGFILE '/RECOVERY_DESTINATION/STAN/archivelog/2012_02_07/o1_mf_1_16_7m1wdr56_.arc';

ALTER DATABASE REGISTER PHYSICAL LOGFILE '/RECOVERY_DESTINATION/STAN/archivelog/2012_02_07/o1_mf_1_17_7m1wg7wp_.arc';

ALTER DATABASE REGISTER PHYSICAL LOGFILE '/RECOVERY_DESTINATION/STAN/archivelog/2012_02_07/o1_mf_1_18_7m1wrrmz_.arc';
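After registering the copied files, it can help to confirm they are now known to the standby's control file. A quick check, assuming the missing sequences were 15 through 18 as in the example filenames above:

```sql
-- Verify the manually registered archivelogs are visible on the standby
-- and note whether they have been applied yet (APPLIED = YES/NO).
SELECT sequence#, name, applied
FROM   v$archived_log
WHERE  sequence# BETWEEN 15 AND 18
ORDER  BY sequence#;
```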


4. Finish recovery, applying all remaining available redo:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH FORCE;
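Before issuing the role change in the next step, you can confirm the standby is ready to become the primary. This check is my addition, not part of the original steps:

```sql
-- On the standby: DATABASE_ROLE should still read PHYSICAL STANDBY, and
-- SWITCHOVER_STATUS should read TO PRIMARY once FINISH FORCE completes.
SELECT database_role, switchover_status
FROM   v$database;
```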


5. Switch the standby to the primary role:

ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

6. Open the new primary database:

ALTER DATABASE OPEN;
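Once open, a final check (a sketch I would add here) confirms the former standby is now running as the primary:

```sql
-- DATABASE_ROLE should now be PRIMARY and OPEN_MODE should be READ WRITE.
SELECT name, database_role, open_mode
FROM   v$database;
```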
 _____________________________________________________________________________
Happy to Help !!!
