
Step By Step Dataguard Implementation (oracle 10g)




Your pfile on the primary should contain entries like the following:


** init file for primary (PRIM)

PRIM.__db_cache_size=4194304000
PRIM.__java_pool_size=16777216
PRIM.__large_pool_size=16777216
PRIM.__shared_pool_size=872415232
PRIM.__streams_pool_size=0
*.audit_file_dest='AUDIT_FILE_LOCATION/PRIM/adump'
*.background_dump_dest='AUDIT_FILE_LOCATION/PRIM/bdump'
*.compatible='10.2.0.1.0'
*.control_files='DATAFILE_LOCATION/PRIM/control01.ctl','DATAFILE_LOCATION/PRIM/control02.ctl','DATAFILE_LOCATION/PRIM/control03.ctl'
*.core_dump_dest='AUDIT_FILE_LOCATION/PRIM/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='PRIM'
db_unique_name='PRIM'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIM,STAN)'
*.db_recovery_file_dest_size=536870912000
*.db_recovery_file_dest='RECOVERY_DESTINATION'

log_archive_dest_10='LOCATION=USE_DB_RECOVERY_FILE_DEST'
LOG_ARCHIVE_DEST_2=
'SERVICE=STAN LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=STAN'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
*.dispatchers='(PROTOCOL=TCP) (SERVICE=PRIMXDB)'
*.job_queue_processes=10
*.open_cursors=300
*.pga_aggregate_target=1704984576
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STAN
FAL_CLIENT=PRIM
STANDBY_FILE_MANAGEMENT=AUTO
DB_FILE_NAME_CONVERT='DATAFILE_LOCATION/STAN/','DATAFILE_LOCATION/PRIM/'
LOG_FILE_NAME_CONVERT='DATAFILE_LOCATION/STAN/','DATAFILE_LOCATION/PRIM/'
*.sga_target=5117050880
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='AUDIT_FILE_LOCATION/PRIM/udump'
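Once the instance has been started with this pfile, a quick verification sketch from SQL*Plus (assuming you are connected as SYSDBA) is to confirm the Data Guard parameters took effect:

show parameter log_archive_dest_2
show parameter db_unique_name
show parameter fal_server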


Your tnsnames.ora on the primary should contain entries like the following:

tnsnames.ora

STAN =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 172.17.1.202)(PORT = 10521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = STAN)
    )
  )
PRIM =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 172.17.1.201)(PORT = 10521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = PRIM)
    )
  )
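With these entries in place, you can verify name resolution from either host (note that tnsping only confirms the listener address resolves and responds; it does not prove the database is open):

tnsping STAN
tnsping PRIM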


Your listener.ora on the primary should contain entries like the following:

listener.ora
# listener.ora Network Configuration File:
# Generated by Oracle configuration tools.

SID_LIST_LISTENER_STAN =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = $ORACLE_HOME)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = STAN)
      (GLOBAL_DBNAME = STAN)
      (ORACLE_HOME = $ORACLE_HOME)
    )
  )
LISTENER_STAN =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 172.17.1.202)(PORT = 10521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
    )
  )
SID_LIST_LISTENER_PRIM =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = $ORACLE_HOME)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = PRIM)
      (GLOBAL_DBNAME = PRIM)
      (ORACLE_HOME = $ORACLE_HOME)
    )
  )

LISTENER_PRIM =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 172.17.1.201)(PORT = 10521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
    )
  )
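You can start and check each listener with lsnrctl, using the listener names defined above:

lsnrctl start LISTENER_PRIM
lsnrctl status LISTENER_PRIM

On the standby server, the same commands apply with LISTENER_STAN.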


Tasks to perform on the PRIMARY site:

1. Install Oracle.
2. Create the database with DBCA.
3. Create the archivelog location.
4. Turn on force logging on the primary database:

ALTER DATABASE FORCE LOGGING;

-- verify the password file users, redo log sizes, and log members
select * from v$pwfile_users;

select bytes from v$log;

select group#, member from v$logfile;

5. Add standby redo log groups (size them the same as the online redo logs):

ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('DATAFILE_LOCATION/ULTIMUS/redo07.log') SIZE 100M;

ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('DATAFILE_LOCATION/ULTIMUS/redo08.log') SIZE 100M;

ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 ('DATAFILE_LOCATION/ULTIMUS/redo09.log') SIZE 100M;

select * from v$standby_log;


6. shutdown immediate;
7. startup mount;
8. alter database archivelog;
9. alter database open;
    archive log list;
10. shutdown immediate; then startup mount;

11. Create standby control file and transfer it to Standby.
alter database create standby controlfile as 'DATAFILE_LOCATION/ULTIMUS/STAN.ctl';
12. ALTER DATABASE OPEN;
13. Recheck the init, tnsnames.ora, and listener.ora files.
14. Create an spfile.
15. Start the listener.
16. tnsping the standby service.
17. shutdown immediate;
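For steps 11 and 14 above, a minimal sketch (the pfile path and the scp target path are assumptions; adjust them for your environment):

-- step 11: copy the standby control file to the standby server, e.g. with scp
-- scp DATAFILE_LOCATION/ULTIMUS/STAN.ctl oracle@172.17.1.202:DATAFILE_LOCATION/STAN/STAN.ctl

-- step 14: create the spfile from the pfile
CREATE SPFILE FROM PFILE='$ORACLE_HOME/dbs/initPRIM.ora';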




Init file on the standby server

STAN.__db_cache_size=3087007744
STAN.__java_pool_size=16777216
STAN.__large_pool_size=16777216
STAN.__shared_pool_size=704643072
STAN.__streams_pool_size=0
*.audit_file_dest='AUDIT_FILE_LOCATION/STAN/adump'
*.background_dump_dest='AUDIT_FILE_LOCATION/STAN/bdump'
*.compatible='10.2.0.1.0'
*.control_files='DATAFILE_LOCATION/STAN/STAN.ctl'
*.core_dump_dest='AUDIT_FILE_LOCATION/STAN/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='PRIM'
DB_UNIQUE_NAME=STAN
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIM,STAN)'
log_archive_dest_1='LOCATION=RECOVERY_DESTINATION/STAN/archivelog
valid_for=(ALL_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=STAN'
LOG_ARCHIVE_DEST_2=
'SERVICE=PRIM LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=PRIM'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
DB_FILE_NAME_CONVERT='DATAFILE_LOCATION/PRIM/','DATAFILE_LOCATION/STAN/'
LOG_FILE_NAME_CONVERT='DATAFILE_LOCATION/PRIM/','DATAFILE_LOCATION/STAN/'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=STANXDB)'
*.job_queue_processes=10
*.open_cursors=300
*.pga_aggregate_target=1276116992
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=3829399552
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='AUDIT_FILE_LOCATION/STAN/udump'
STANDBY_FILE_MANAGEMENT=AUTO


On the standby site

1. Install Oracle.
2. Create a database with DBCA, then shut it down and delete its datafiles.
3. Create the archivelog location.
4. Copy the datafiles, tnsnames.ora, and listener.ora from the primary.
5. Delete all control files except the standby control file, delete the standby redo files, and recheck the pfile.
6. Start up in mount mode with the pfile:
   startup mount pfile='$ORACLE_HOME/dbs/initSTAN.ora'

7. if necessary:

ALTER DATABASE DROP LOGFILE GROUP 5;
ALTER DATABASE DROP LOGFILE GROUP 6;
ALTER DATABASE DROP LOGFILE GROUP 7;

8. Add standby redo log groups:

ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('DATAFILE_LOCATION/STAN/redo07.log') SIZE 100M;

ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('DATAFILE_LOCATION/STAN/redo08.log') SIZE 100M;

ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 ('DATAFILE_LOCATION/STAN/redo09.log') SIZE 100M;

select * from v$standby_log;

9. Start the listener.
10. tnsping the primary service.
11. shutdown immediate;
12. startup mount pfile='$ORACLE_HOME/dbs/initSTAN.ora'

13. Start managed recovery (redo apply) on the standby:

alter database recover managed standby database disconnect; -- starts apply in the background

For real-time apply from the standby redo logs (10g), you can instead use:

alter database recover managed standby database using current logfile disconnect;

* To check that archives are being applied on the standby, issue the following command:

select sequence#,first_time,applied from v$archived_log order by first_time;

* To check that the MRP0 process is running on the standby, issue:
SELECT PROCESS FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';
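Another useful check on the standby is V$ARCHIVE_GAP; no rows returned means there is no archive gap:

SELECT * FROM V$ARCHIVE_GAP;

To compare apply progress against the primary, check the highest applied sequence:

SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE APPLIED='YES';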





-----------------------------------------------------------------------
Happy to Help !!!




