Oracle GoldenGate Interview Questions and Answers


Last updated on 05th Oct 2020, Blog, Interview Question

About author

Madhan (Sr Cloud Engineer)

He has over 6 years of experience in his technical domain, has been a technical trainer for the past 5 years, and shares important articles like this one.


If you’re looking for Oracle GoldenGate interview questions and answers, whether you are experienced or a fresher, you are in the right place. There are lots of opportunities at many reputed companies around the world. According to research, Oracle GoldenGate has a market share of about 9.3%, so you still have the opportunity to move ahead in your career in Oracle GoldenGate development. ACTE offers these advanced Oracle GoldenGate interview questions to help you crack your interview and acquire your dream career as an Oracle GoldenGate developer.

1.What type of Topology does Goldengate support?


GoldenGate supports the following topologies.

    1. Unidirectional
    2. Bidirectional
    3. Peer-to-peer
    4. Broadcast
    5. Consolidation
    6. Cascading

2.What are the main components of the Goldengate replication?


The replication configuration consists of the following processes.

    1. Manager
    2. Extract
    3. Pump
    4. Replicat
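The Manager process that coordinates the others is configured through a small parameter file; as an illustrative sketch (the port numbers and paths below are assumptions, not values from this article):

```
-- dirprm/mgr.prm : Manager process parameters (values are illustrative)
PORT 7809                       -- TCP port the Manager listens on
DYNAMICPORTLIST 7810-7820       -- ports handed out to server/collector processes
AUTOSTART EXTRACT *             -- start all Extract groups when Manager starts
```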

3.What transaction types does Goldengate support for Replication?


Goldengate supports both DML and DDL Replication from the source to target.

4.What are the supplemental logging prerequisites?


The following supplemental logging is required.

    1. Database supplemental logging
    2. Object level logging

5.Why is Supplemental logging required for Replication?


By default, the source database logs only the changed columns to the redo log. GoldenGate needs the key columns (primary key or unique index) of a changed row to be present in the redo so that Replicat can uniquely identify and apply the change to the correct row on the target. Supplemental logging forces these additional columns into the redo log.

Integrated Capture (IC):

    1. In the Integrated Capture mode, GoldenGate works directly with the database log mining server to receive the data changes in the form of logical change records (LCRs).
    2. IC mode does not require any special setup for the databases using ASM, transparent data encryption, or Oracle RAC.
    3. This feature is only available for Oracle databases version 11.2.0.3 or higher.
    4. It also supports various object types which were previously not supported by Classic Capture.
    5. This capture mode supports extracting data from source databases using compression.

Integrated Capture can be configured in an online or downstream mode.

6.List the minimum parameters that can be used to create the extract process?


The following are the minimum required parameters which must be defined in the extract parameter file.

    1. EXTRACT NAME
    2. USERID
    3. EXTTRAIL
    4. TABLE
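Putting those four parameters together, a minimal Extract parameter file might look like the following sketch (the group, schema, and trail names are hypothetical):

```
-- dirprm/ext1.prm : minimal primary Extract parameter file
EXTRACT ext1                         -- EXTRACT NAME
USERID ggadmin, PASSWORD ggadmin     -- USERID (a credential-store alias also works)
EXTTRAIL ./dirdat/et                 -- EXTTRAIL: local trail this Extract writes to
TABLE scott.emp;                     -- TABLE(s) to capture
```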

I want to configure multiple Extracts to write to the same exttrail file. Is this possible?

Only one Extract process can write to one exttrail at a time. So you can’t configure multiple extracts to write to the same exttrail.

7.What type of Encryption is supported in Goldengate?


Oracle Goldengate provides 3 types of Encryption.

  • Data Encryption using Blowfish.
  • Password Encryption.
  • Network Encryption.

8.What are the different password encryption options available with OGG?


You can encrypt a password in OGG using

  • Blowfish algorithm and
  • Advanced Encryption Standard (AES) algorithm

9.What are the different encryption levels in AES?


You can encrypt the password/data using AES with three different key sizes:

a) 128 bit

b) 192 bit and

c) 256 bit
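For example, a password can be encrypted with a 256-bit AES key from GGSCI and the resulting string used in a parameter file (a sketch; the user name and placeholder output are illustrative):

```
GGSCI> ENCRYPT PASSWORD ggadmin AES256 ENCRYPTKEY DEFAULT

-- then, in the parameter file, use the encrypted string the command prints:
USERID ggadmin, PASSWORD <encrypted string>, AES256, ENCRYPTKEY DEFAULT
```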

10.What are some of the key features of GoldenGate 12c?


The following are some of the more interesting features of Oracle GoldenGate 12c:

  • Support for Multitenant Database
  • Coordinated Replicat
  • Integrated Replicat Mode
  • Use of Credential store
  • Use of Wallet and master key
  • Trigger-less DDL replication
  • Automatically adjusts threads when RAC node failure/start
  • Supports RAC PDML Distributed transaction
  • RMAN Support for mined archive logs

11.What are the installation options available in OGG 12c?


You can install Oracle GoldenGate 12c in 2 ways:

1) Interactive installation with OUI – graphical interface

2) Silent installation with OUI – command-line interface (using a response file)

12.What is a Credential Store in OGG 12c?


The OGG Credential Store manages encrypted passwords and USERIDs that are used to interact with the local database, and associates them with an alias. Instead of specifying the actual USERID and password in a command or a parameter file, you can use the alias. The credential store is implemented as an auto-login wallet within the Oracle Credential Store Framework (CSF).

13.How to configure a Credential Store in OGG 12c?


Steps to configure Oracle Credential Store are as follows:

1) The default credential store is located under the “dircrd” directory.

If you want to specify a different location, use the CREDENTIALSTORELOCATION parameter in the GLOBALS file.



2) Goto OGG home and connect to GGSCI.

  • cd $OGG_HOME
  • ./ggsci
  • GGSCI>
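A typical session then looks like the following sketch (the user, database, and alias names are hypothetical):

```
GGSCI> ADD CREDENTIALSTORE
GGSCI> ALTER CREDENTIALSTORE ADD USER ggadmin@orcl PASSWORD oracle ALIAS gg_alias
GGSCI> DBLOGIN USERIDALIAS gg_alias

-- in Extract/Replicat parameter files the alias replaces USERID/PASSWORD:
USERIDALIAS gg_alias
```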

14.What command is used to create the credential store?


The credential store is created with the ADD CREDENTIALSTORE command in GGSCI:

  • GGSCI> ADD CREDENTIALSTORE

How do you add credentials to the credential store?

Use the ALTER CREDENTIALSTORE command:

  • GGSCI> ALTER CREDENTIALSTORE ADD USER userid [PASSWORD password] [ALIAS alias] [DOMAIN domain]



15.How do you retrieve information from the Oracle Credential Store?


Use the INFO CREDENTIALSTORE command in GGSCI:

  • GGSCI> INFO CREDENTIALSTORE
  • GGSCI> INFO CREDENTIALSTORE DOMAIN domain
16.What are the different data encryption methods available in OGG 12c?


In OGG 12c you can encrypt data with the following 2 methods:

1) Encrypt Data with Master Key and Wallet

2) Encrypt Data with ENCKEYS
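As a hedged sketch of the two methods (these are the usual GGSCI/OS-level steps; the key size is illustrative):

```
-- Method 1: master key and wallet
GGSCI> CREATE WALLET
GGSCI> ADD MASTERKEY

-- Method 2: generate keys for the ENCKEYS file with the keygen utility
$ cd $OGG_HOME
$ ./keygen 128 1        -- one 128-bit key, to be pasted into the ENCKEYS file
```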

17.How do you enable Oracle GoldenGate for an Oracle database?


The database services required to support Oracle GoldenGate capture and apply must be enabled explicitly for an Oracle database. This is required for all modes of Extract and Replicat.

To enable Oracle GoldenGate, set the following database initialization parameter. All instances in Oracle RAC must have the same setting.

  • SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE;

18.How does the Replicat work in a Coordinated Mode?


In a Coordinated Mode Replicat operates as follows:

  • Reads the Oracle GoldenGate trail.
  • Performs data filtering, mapping, and conversion.
  • Constructs SQL statements that represent source database DML or DDL transactions (in committed order).
  • Applies the SQL to the target through the SQL interface that is supported for the given target database, such as ODBC or the native database interface.

19.What is the difference between Classic and Coordinated Replicat?


The difference between classic mode and coordinated mode is that Replicat is multi-threaded in coordinated mode. Within a single Replicat instance, multiple threads read the trail independently and apply transactions in parallel. Each thread handles all of the filtering, mapping, conversion, SQL construction, and error handling for its assigned workload. A coordinator thread coordinates the transactions across threads to account for dependencies among the threads.

20.How do you create a COORDINATED REPLICATE in OGG 12c?


You can create the COORDINATED REPLICAT with the following OGG command (group and trail names are illustrative):

  • GGSCI> ADD REPLICAT rep1, COORDINATED, EXTTRAIL ./dirdat/rt, MAXTHREADS 5

I have created a Replicat process in OGG 12c and forgot to specify the DISCARDFILE parameter. What will happen?

Starting with OGG 12c, if you don’t specify a DISCARDFILE, the OGG process generates a discard file with default values whenever the process is started with the START command through GGSCI.

21.Is it possible to start OGG EXTRACT at a specific CSN?


Yes, Starting with OGG 12c you can now start Extract at a specific CSN in the transaction log or trail.


  • START EXTRACT finance AFTERCSN 67890

22.List a few parameters which may help improve the replicat performance?


The parameters below can be used to improve the replicat performance:

  1. BATCHSQL
  2. GROUPTRANSOPS
  3. INSERTAPPEND
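As an illustrative sketch, BATCHSQL is simply enabled in the Replicat parameter file (the group, alias, and threshold values below are assumptions):

```
-- dirprm/rep1.prm excerpt
REPLICAT rep1
USERIDALIAS gg_alias
BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 2000   -- batch similar statements into arrays
MAP scott.emp, TARGET scott.emp;
```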

23.What are the areas to monitor in Goldengate Replication?


The lag and checkpoint latency of the Extract, pump and Replicat processes are normally monitored.
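These are typically checked with GGSCI commands such as the following (the group names are hypothetical):

```
GGSCI> INFO ALL                     -- status and lag of every process
GGSCI> LAG EXTRACT ext1             -- lag of a specific Extract
GGSCI> SEND REPLICAT rep1, GETLAG   -- lag reported by the running Replicat
```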

24.What is the PASSTHRU mode used for?


In pass-through mode, the Extract process does not look up the table definitions, either from the database or from a data definitions file. This increases the throughput of the data pump, as the object definition look-up is bypassed.
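A data pump in pass-through mode might be parameterized like this sketch (host, port, and names are assumptions):

```
-- dirprm/pmp1.prm : data pump in pass-through mode
EXTRACT pmp1
PASSTHRU                        -- skip table-definition look-ups
RMTHOST tgthost, MGRPORT 7809   -- target server and its Manager port
RMTTRAIL ./dirdat/rt            -- remote trail on the target
TABLE scott.*;
```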

25.What are the most common reasons for an Extract process slowing down?


Some of the possible reasons are:

  1. Long running batch transactions on a table.
  2. Insufficient memory on the Extract side. Uncommitted, long running transactions can cause writing of a transaction to a temporary area (dirtmp) on disk. Once the transaction is committed it is read from the temporary location on the file system and converted to trail files.
  3. Slow or overburdened network.

26.What are the most common reasons of the Replicat process slowing down?


Some of the possible reasons are:

  1. A large amount of transactions on a particular table.
  2. Blocking sessions on the destination database where non-GoldenGate transactions are also taking place on the same table as the Replicat processing.
  3. If using DBFS, writing & reading of trail files may be slow if SGA parameters are not tuned.
  4. For slow Replicats, latency may be due to missing indexes on the target.
  5. Replicat having to process updates and deletes of rows in very large tables.

27.My extract was running fine for a long time. All of a sudden it went down. I started the extract processes after 1 hour. What will happen to my committed transactions that occurred in the database during the last 1 hour?


OGG checkpointing provides fault tolerance and makes sure that a transaction marked as committed is captured, and captured only once. Even if the Extract went down abnormally, when you start the process again it reads the checkpoint file to provide read consistency and transaction recovery.

28.I have configured Oracle GoldenGate integrated capture process using the default values. As the data load increases I see that extract starts lagging behind by an hour (or more) and database performance degrades. How will you resolve this performance issue?


When operating in integrated capture mode, you must make sure that you have assigned sufficient memory to STREAMS_POOL_SIZE. An undersized STREAMS_POOL_SIZE or limiting the streams pool to use a specific amount of memory can cause troubles.

The best practice is to allocate STREAMS_POOL_SIZE at the instance level and set the maximum SGA at the GG process level as below:

  • SQL> ALTER SYSTEM SET STREAMS_POOL_SIZE=3G SCOPE=BOTH;
  • TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 2048) (in the Extract parameter file)

29.Why would you segregate the tables in a replication configuration? How would you do it?


In OGG you can configure replication of the data at the schema level or at the table level using the TABLE parameter of Extract and the MAP parameter of Replicat.

For replicating the entire database you can list all the schemas in the database in the extract/replicat parameter file.

Depending on the amount of redo generation, you can split the tables in a schema across multiple Extracts and Replicats to improve the performance of data replication. Alternatively, you can group a set of tables in the configuration by application functionality.

You may also need to move tables which have long running transactions into a separate Extract process to eliminate lag on the other tables.

Let’s say that you have a schema named SCOTT and it has 100 tables.

Out of these 100 tables, 50 tables are heavily utilized by the application.

To improve the overall replication performance you create 3 extract and 3 replicats as follows:

  • Ext_1/Rep_1 –> 25 tables
  • Ext_2/Rep_2 –> 25 tables
  • Ext_3/Rep_3 –> 50 tables
  • Ext_1/Rep_1 and Ext_2/Rep_2 contain 25 tables each which are heavily utilized or generate more redo.
  • Ext_3/Rep_3 contains all the other 50 tables which are least used.

30.How can we report on long running transactions?


The WARNLONGTRANS parameter can be specified with a threshold time that a transaction can be open before Extract writes a warning message to the GoldenGate error log (ggserr.log).
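For example, the Extract parameter file could warn about transactions open for more than an hour, re-checking every five minutes (the thresholds are illustrative):

```
-- Extract parameter file excerpt
WARNLONGTRANS 1h, CHECKINTERVAL 5m
```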


31.What command can be used to view the checkpoint information for the extract process?


Use the following command to view the Extract checkpoint information.

  • GGSCI> INFO EXTRACT <group name>, SHOWCH
  • GGSCI> INFO EXTRACT ext_fin, SHOWCH

32.How is the RESTARTCOLLISIONS parameter different from HANDLECOLLISIONS?


The RESTARTCOLLISIONS parameter is used to skip ONE transaction only, in a situation where the GoldenGate process crashed after performing an operation (INSERT, UPDATE or DELETE) in the database but could not checkpoint the process information to the checkpoint file/table. On recovery it will skip that transaction and AUTOMATICALLY continue to the next operation in the trail file.

When using HANDLECOLLISIONS GoldenGate will continue to overwrite and process transactions until the parameter is removed from the parameter files and the processes restarted.

33.How do you view the data which has been extracted from the redo logs?


The logdump utility is used to open the trail files and look at the actual records that have been extracted from the redo or the archive log files.

34.What does the RMAN-08147 warning signify when your environment has a GoldenGate Capture Processes configured?


This occurs when the V$ARCHIVED_LOG.NEXT_CHANGE# is greater than the SCN required by the GoldenGate Capture process and RMAN is trying to delete the archived logs. The RMAN-08147 error is raised when RMAN tries to delete these files.

When the database is open it uses the DBA_CAPTURE values to determine the log files required for mining. However, if the database is in the mount state, the V$ARCHIVED_LOG.NEXT_CHANGE# value is used.

35.How would you look at a trail file using logdump, if the trail file is Encrypted?


You must use the DECRYPT option before viewing the data in the trail file.

List a few useful Logdump commands to view and search data stored in OGG trail files.

Below are few logdump commands used on a daily basis for displaying or analyzing data stored in a trail file.

  • $ ./logdump – to connect to the logdump prompt
  • logdump> open /u01/app/oracle/dirdat/et000001 – to open a trail file in logdump
  • logdump> fileheader on – to view the trail file header
  • logdump> ghdr on – to view the record header with data
  • logdump> detail on – to view column information
  • logdump> detail data – to display HEX and ASCII data values to the column list
  • logdump> reclen 200 – to control how much record data is displayed
  • logdump> pos 0 – To go to the first record
  • logdump> next (or simply n) – to move from one record to another in sequence
  • logdump> count – counting records in a trail

36.Why should I upgrade my GoldenGate Extract processes to Integrated Extract?


Oracle is able to provide faster integration of new database features by moving the GoldenGate extraction processing into the database. Because of this, GoldenGate Integrated Extract supports a number of features, such as capture from compressed tables, which are not supported by the traditional (classic) Extract. Going forward, preference should be given to creating new Extracts as Integrated Extracts and to upgrading existing classic Extracts.

37.What is the minimum Database version which supports Integrated Delivery?


Oracle Database 11.2.0.4 is the minimum required database version which supports both Integrated Extract and Integrated Replicat.

38.What databases support GoldenGate Integrated Delivery?


Oracle Integrated Delivery is only available for Oracle Databases.

 With Integrated Delivery, where can we look for the performance stats?

With 12c, performance statistics are collected in the AWR repository and the data is available via the normal AWR reports.

39.What are the steps required to add a new table to an existing replication setup?


The steps to be executed would be the following:

  • Include the new table in the Extract & pump process parameter files.
  • Obtain the starting database SCN and copy the source table data to the target database.
  • Start the Replicat on the target from that source SCN point.
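Sketched as commands (the SCN value, group, and table names are hypothetical):

```
-- 1. Add the table to the Extract/pump parameter files:
--      TABLE scott.newtab;
-- 2. Capture the current SCN on the source:
SQL> SELECT CURRENT_SCN FROM V$DATABASE;
-- 3. Copy the table to the target consistently as of that SCN
--    (e.g. Data Pump export with FLASHBACK_SCN=<scn>).
-- 4. Start the Replicat from that point:
GGSCI> START REPLICAT rep1, ATCSN <scn>
```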

40.What does the GoldenGate CSN equate to, in the Oracle Database?


It is equivalent to the Oracle database SCN transaction number.

41.How do you create a CSV file from the Extracted data?


You will have to use the CSV Flat File Adapter to create CSV files. The source would be the Extract trail files, which are processed using the adapter settings to generate the CSV files.

42.What is the purpose of the DEFGEN utility?


When the source and the target schema objects are not the same (different DDL) the Replicat process needs to know the source definition of the objects. The output from the DEFGEN utility is used in conjunction with the trail data to determine which column value in the trail belongs to which column.
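A sketch of how DEFGEN is typically run (the file and object names are assumptions):

```
-- dirprm/defgen.prm
DEFSFILE ./dirdef/scott.def
USERID ggadmin, PASSWORD ggadmin
TABLE scott.emp;

-- run the utility from the OGG home, then copy the .def file to the target
$ ./defgen paramfile dirprm/defgen.prm
```

On the target, the Replicat references the generated definitions file with the SOURCEDEFS parameter.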

43.We want to set up one-way data replication for my online transaction processing application. However there are compressed tables in the environment. Please suggest how I can achieve it.


You must use OGG 11.2 (or higher) and configure the GoldenGate Integrated Capture process to extract data from compressed tables.

Note: OGG releases prior to 11.2 don’t support extracting data from compressed tables.

44.We want to set up one-way data replication using Integrated Extract for my Oracle database. Please suggest the best architecture to achieve it.


The Oracle GoldenGate Integrated Capture process supports Oracle databases 10.2 and higher. But if you are running Oracle database 10.2 and want to use the Integrated Capture process, you must configure a downstream topology.

45.I am migrating my Oracle database from non-exadata to exadata but my source OGG software and target OGG software versions are different (Say Source is running OGG 11.1 and target is running OGG 11.2). How can I configure my OGG process to work in such configuration?


It is recommended that all instances of Oracle GoldenGate be the same version to take advantage of the new functionality, but this is not possible all the time and is not required. In this scenario, OGG provides a parameter called ‘FORMAT RELEASE’ which allows customers to use different versions of Oracle GoldenGate Extract, trail files, and Replicat together.

Example: RmtTrail /u01/app/oracle/dirdat, Format Release 11.1

Note: The input and output trails of a data pump must have the same trail file version.

46.What are the different OGG Initial load methods available?


OGG has 2 functionalities: one is online data replication and the second is initial loading.

If you are replicating data between 2 homogeneous databases then the best method is to use database specific methods (Exp/Imp, RMAN, Transportable tablespaces, Physical Standby and so on). Database specific methods are usually faster than the other methods.

If you are replicating data between 2 heterogeneous databases, or your replication involves complex transformations, then the database specific methods can’t be used. In those cases you can always use Oracle GoldenGate to perform the initial load.

Within Oracle GoldenGate you have 4 different ways to perform initial load.

  • Direct Load – Faster but doesn’t support LOB data types (12c includes support for LOBs)
  • Direct Bulk Load – Uses the SQL*Loader API for Oracle and SSIS for MS SQL Server
  • File to Replicat – Fast but the rmtfile limit is 2GB. If the table can’t fit in 1 rmtfile you can use MAXFILES, but the Replicat needs to be registered on the target OGG home to read the rmt files from the source.
  • File to database utility – Depending on the target database, uses SQL*Loader for Oracle, SSIS for MS SQL Server, and so on.

Oracle GoldenGate initial loading reads data directly from the source database tables without locking them. So you don’t need downtime but it will use database resources and can cause performance issues. Take extra precaution to perform the initial load during the non-peak time so that you don’t run into resource contention.

47.I have a table called ‘TEST’ on source and target with the same name, structure and data type but in a different column order. How can you set up replication for this table?

OGG by default assumes that the source and target tables are identical. A table is said to be identical if and only if the table structure, data types and column order are the same on both the source and the target.


If the tables are not identical you must use the parameter ‘SOURCEDEFS’ pointing to the source table definition and ‘COLMAP’ parameter to map the columns from source to target.
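For the ‘TEST’ table above, a Replicat excerpt might look like this sketch (the definitions file and column names are hypothetical):

```
-- Replicat parameter file excerpt
SOURCEDEFS ./dirdef/test.def           -- definitions generated by DEFGEN on the source
MAP scott.test, TARGET scott.test,
  COLMAP (USEDEFAULTS,                 -- map identically named columns automatically
          create_date = created_at);   -- explicit target = source mapping
```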

48.What is the best practice to delete the extract files in OGG?


Use the Manager process PURGEOLDEXTRACTS parameter to delete the trail files after they are consumed by the Extract/Replicat processes.
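This is typically configured in the Manager parameter file; a sketch (the retention values are illustrative):

```
-- dirprm/mgr.prm excerpt: purge trails only after every reader has checkpointed
-- past them, and keep them for at least 3 days regardless
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3
```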


49.What parameters can be used to configure Oracle GoldenGate to extract data from Archived Redo log files only?


Use the TRANLOGOPTIONS ARCHIVEDLOGONLY option in the parameter file.

50.Can you see the progress of a FLASHBACK DATABASE operation?


Yes, you can. During a FLASHBACK DATABASE operation, you can query V$SESSION_LONGOPS from another session to see the progress of the flashback.

  • The FLASHBACK DATABASE operation has two distinct phases: the actual flashback and the media recovery that happens afterwards to bring the database to a consistent state.
  • While the actual flashback is running, you’ll see the following message in V$SESSION_LONGOPS, on Oracle 11gR2:
  • Flashback Database: Flashback Data Applied : 238 out of 282 Megabytes done
  • During the media recovery, the following messages will be seen:
  • Media Recovery: Redo Applied: 263 out of 0 Megabytes done
  • Media Recovery: Average Apply Rate: 1164 out of 0 KB/sec done
  • Media Recovery: Last Applied Redo: 626540 out of 0 SCN+Time done
  • Media Recovery: Elapsed Time: 232 out of 0 Seconds done
  • Media Recovery: Active Time: 116 out of 0 Seconds done
  • Media Recovery: Active Apply Rate: 1859 out of 0 KB/sec done
  • Media Recovery: Maximum Apply Rate: 1859 out of 0 KB/sec done
  • Media Recovery: Log Files: 15 out of 0 Files done
  • Media Recovery: Apply Time per Log: 7 out of 0 Seconds done
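A query along these lines can be used to watch the progress from a second session (the OPNAME filter is an assumption based on the messages above):

```sql
-- run from another session during FLASHBACK DATABASE
SELECT opname, sofar, totalwork, units
FROM   v$session_longops
WHERE  opname LIKE 'Flashback%'
   OR  opname LIKE 'Media Recovery%';
```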

51.How should I set the database to improve Flashback performance?


Oracle’s recommendations are:

  • Use a fast file system for your flash recovery area, preferably without operating system file caching. It is recommended to use a file system that avoids operating system file caching, such as ASM.
  • Configure enough disk spindles for the file system that will hold the flash recovery area. For large production databases, multiple disk spindles may be needed to support the required disk throughput for the database to write the flashback logs effectively.
  • If the storage system used to hold the flash recovery area does not have non-volatile RAM, try to configure the file system on top of striped storage volumes, with a relatively small stripe size such as 128K. This will allow each write to the flashback logs to be spread across multiple spindles, improving performance.
  • For large production databases, set the init.ora parameter LOG_BUFFER to be at least 8MB. This makes sure the database allocates maximum memory (typically 16MB) for writing flashback database logs.

52.What OGG s/w version should be used if the source DB is Oracle 12c R2 on Linux and the destination is DB2 10.1 (32-bit) on a 32-bit OS? Does it mean we have to download and install separate OGG software versions on the source and target servers? Will they be able to communicate with each other?


In any case, you will have to install and configure GoldenGate on both the target and the source even if the versions are the same. However, they will be two separate sets of binaries, one built for each platform/database combination. They can still communicate with each other.

53.Can the source platform be 32-bit and the target platform on a 64-bit OS for an OGG implementation?


Yes. Oracle GoldenGate supports heterogeneous configurations; the source and target can run on different operating systems and bit-nesses, because the trail file format is platform independent.
54.We ran an update statement in the source database which updates ten million records, committed it and immediately ran “shut abort” in the source database. Will data get replicated correctly to the target database by OGG?


If the DB is down, GoldenGate will abend. As long as the committed data was written to the logs, when you restart the processes GoldenGate will pick up from the point where it stopped. So everything depends on what got written to the logs, and GoldenGate replicates only committed transactions.

55.Instead of running ‘add trandata’, if I directly run ‘alter table add supplemental logging’ at the SQL prompt, will OGG still work?


Yes, OGG will still work as long as the required supplemental log data has been enabled for the table. ADD TRANDATA is simply the recommended way of doing it, because GGSCI determines the appropriate key columns and enables logging for them on your behalf.
56.What happens if we add trandata for a table which does not have a primary key or unique key, but has invisible columns:

a) Will the invisible column be considered for uniqueness while enabling supplemental logging?

b) What happens when we make the invisible column of the table visible?


a) No, invisible columns will not be considered for uniqueness while enabling supplemental logging. b) If the invisible columns are made visible, they will be treated as normal columns.

57.When we talk about OGG initial load, the target tables should be empty, but the metadata should be present. What about the indexes corresponding to the tables at the Target database? Should they be defined at the target database before starting OGG initial load?


GoldenGate initial load takes care of only the data, so yes, the metadata should be present. For a faster load it is advisable to disable the indexes, but it is not mandatory.

58.The OGG “Skip Transaction” option can be provided only for the Replicat process? Can it be used for the Extract and data pump processes also?


Replicat only (for example, GGSCI> START REPLICAT rep1, SKIPTRANSACTION).

59.If both the source database and target database are RAC databases, will the OGG instances also be RAC?


There is no such thing as an OGG RAC. Only the DB is RAC.

60.Under which circumstances do we need to run dblogin in ggsci?


Whenever any change to the database is required from GGSCI, for example ADD TRANDATA or ADD CHECKPOINTTABLE, you must first run DBLOGIN to connect to the database.


61.How will the OGG processes behave? Will they get ABENDED?


Yes, the OGG processes get ABENDED.

62.How will the underlying source and target databases behave?


The source and target DBs are independent of OGG, hence there is no effect on the DBs.

63.How will the applied gap be detected and restored?


As soon as you restart the process, the gap will be taken care of. This is done using the GoldenGate checkpoint mechanism.


68.Is there a separate background process for writing flashback logs?


Yes. RVWR (Recovery Writer, a.k.a Flashback Writer) was introduced in Oracle 10g to write flashback data from the Flashback Buffer in the SGA to the flashback database logs on disk.

69.Do I need to shutdown and mount the database to turn flashback on?


ALTER DATABASE FLASHBACK ON is an online operation in 11g Release 2. In 10g Release 2 the database must be mounted, but not open.

Once flashback is turned on, Oracle starts to save the before-images of the database blocks before they are changed. This guarantees that the database can be flashed back to the exact point in time when flashback mode was turned on.
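Sketched for both releases (assuming the flash recovery area and DB_FLASHBACK_RETENTION_TARGET are already configured):

```sql
-- 11g Release 2 and later: can be done online
ALTER DATABASE FLASHBACK ON;

-- 10g Release 2: the database must be mounted, not open
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE FLASHBACK ON;
ALTER DATABASE OPEN;
```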

70.What happens if RVWR cannot write to disk?


It depends on the context where the write error occurs:

  • If there’s a Guaranteed Restore Point, the database crashes to ensure the restore point guarantee is not voided.
  • If there isn’t a Guaranteed Restore Point and it’s a primary database, Flashback Mode will be automatically turned off for the database, which will continue to operate normally.
  • If there isn’t a Guaranteed Restore Point and it’s a standby database, the database will hang until the cause of the write failure is fixed.

71.Is it possible to specify the size of the Flashback Buffer in the SGA?


Yes, but indirectly. The size of the Flashback Buffer is set to 2 * LOG_BUFFER.

For performance reasons, it’s recommended to set LOG_BUFFER to at least 8MB for databases running in Flashback Mode.

UPDATE: For large databases with more than a 4GB SGA, you may consider setting LOG_BUFFER to values in the range of 32-64 MB.

72.Can RMAN be used to backup flashback logs?


No. Flashback logs are not backed up. Even if the command BACKUP RECOVERY AREA is used to back up the contents of the FRA to tape, only the following file types are backed up: full and incremental backup sets, control file autobackups, datafile copies, and archived redo logs.

  • Flashback Logs are considered to be transient files and cannot be backed up by RMAN. They are not needed for media recovery.

73.When are the flashback logs deleted?


Flashback logs are managed by Oracle only. Oracle will try to keep as many Flashback logs as needed to satisfy the DB_FLASHBACK_RETENTION_TARGET parameter. However, if there’s space pressure in the Flash Recovery Area (FRA), flashback logs may be deleted to make room for other things, like backups and archived logs.

74.How can you determine if the parameters for a process have recently changed?


Whenever a process is started, the parameters in the .prm file for the process are written to the process REPORT. You can look at the older process reports to view the parameters which were used to start up the process. By comparing the older and the current reports you can identify the changes in the parameters.

75.How would you comment out a line in the parameter file?


You can use the “--” (double hyphen) character or the COMMENT keyword to comment out a line.

76.What are the basic resources required to configure Oracle GoldenGate high availability solution with Oracle Clusterware?


There are 3 basic resources required:

    1. Virtual IP
    2. Shared storage
    3. Action script

77.I have a one-way replication setup. The system administration team wants to apply an OS patch to both the OGG source host and the target servers. Provide the sequence of steps that you will carry out before and after applying this patch.


1. Check to make sure that the Extract has processed all the records in the data source (online redo/archive logs):

  • GGSCI> SEND EXTRACT <extract name>, LOGEND

(The above command should print YES.)

2. Verify that the Extract, pump and Replicat have zero lag:

  • GGSCI> SEND EXTRACT <extract name>, GETLAG
  • GGSCI> SEND EXTRACT <pump name>, GETLAG
  • GGSCI> SEND REPLICAT <replicat name>, GETLAG

(The above commands should print “At EOF, no more records to process.”)

3. Stop all application and database activity.

4. Make sure that the primary Extract is reading the end of the redo log and that there is no lag at all for the processes.

5. Now proceed with stopping the processes.

On the source:

1. Stop the primary Extract.

2. Stop the pump Extract.

3. Stop the Manager process.

4. Make sure all the processes are down.

On the target:

1. Stop the Replicat process.

2. Stop the Manager process.

3. Make sure that all the processes are down.

4. Proceed with the maintenance.

5. After the maintenance, proceed with starting up the processes.

On the source:

1. Start the Manager process.

2. Start the primary Extract.

3. Start the pump Extract.

(Or simply start all the Extract processes with GGSCI> START EXTRACT *)

4. Make sure that all the processes are up.

On the target:

1. Start the Manager process.

2. Start the Replicat process.

3. Make sure that all the processes are up.

78.What databases does GoldenGate support for replication?


  • Oracle Database
  • TimesTen
  • MySQL
  • IBM DB2
  • Microsoft SQL Server
  • Informix
  • Teradata
  • Sybase
  • Enscribe
  • SQL/MX
