Oracle Golden Gate (GG)

Golden Gate is a replication software product that Oracle acquired in 2009. It is a comprehensive software package that enables cost-effective, low-impact real-time data integration and continuous-availability solutions. Golden Gate is straightforward to configure and deploy in large-scale environments.

Thanks to its flexible architecture, Golden Gate supports the following business requirements:

  • High Availability
  • Data Integration
  • Data upgrade and migration
  • Data Warehousing
  • Live reporting database

New features of Golden Gate (GG) included in the 12.1.2 release:

  • Installing Oracle Golden Gate from Oracle Universal Installer
  • Improved performance of integrated capture
  • Integrated Replicate
  • Capture and apply to multi-tenant container databases
  • Three-part object names
  • Native DDL capture
  • Enhanced character set conversion
  • Remote task datatype support

These new features make Oracle Golden Gate even more capable and make the jump to multi-tenant databases much easier for organizations.

For more detail on these features, please refer to the Oracle Golden Gate 12.1.2 documentation.

Architecture of GG (Golden Gate)


Now let us explain the architecture of Golden Gate in detail:

  1. MANAGER

    The Manager process runs on both databases, in other words on the source as well as the destination. It is the control process of Oracle Golden Gate: it starts, monitors, and restarts the other processes, allocates data storage, maintains trail files, and logs errors. It must be up and running before an EXTRACT or REPLICAT process can be created.
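As a sketch, a minimal Manager parameter file might look like the following (the port number, retry settings, and trail path are illustrative, not prescribed):

```
-- mgr.prm : Manager parameter file (example values)
PORT 7809
-- Restart Extract processes that abend, up to 3 times
AUTORESTART EXTRACT *, RETRIES 3, WAITMINUTES 5
-- Purge trail files after they have been fully processed
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS
```

The Manager is then started from GGSCI with START MANAGER and checked with INFO MGR.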

  2. EXTRACT

    Extract is the capture mechanism of Golden Gate, in other words the capture process that reads data changes from the transaction logs. It runs on the source server and checkpoints its read/write position to a local file so that it can resume correctly after a crash or restart.
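A minimal primary Extract configuration could look like this sketch; the process name ext1, the credentials, the trail prefix, and the table name are all hypothetical examples:

```
-- ext1.prm : primary Extract parameter file (names are illustrative)
EXTRACT ext1
USERID ggadmin, PASSWORD ggadmin
-- Write captured changes to a local trail
EXTTRAIL ./dirdat/lt
-- Capture changes for this table
TABLE hr.employees;
```

The process is registered in GGSCI with, for example, ADD EXTRACT ext1, TRANLOG, BEGIN NOW.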
  3. TRAIL

    A trail contains the data changes written in the Golden Gate common format. Trails exist on both the source side and the target side, and a trail is referred to in two ways:

    a. It is known as an EXTRACT TRAIL if it exists on the local/source system.
    b. It is known as a REMOTE TRAIL if it exists on the target/destination system.
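Both kinds of trail are registered in GGSCI before use. In this sketch, the two-character trail prefixes lt/rt and the process names ext1/pmp1 are illustrative:

```
-- Local (extract) trail on the source, written by the primary Extract
ADD EXTTRAIL ./dirdat/lt, EXTRACT ext1

-- Remote trail on the target, written via the data-pump Extract
ADD RMTTRAIL ./dirdat/rt, EXTRACT pmp1
```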
  4. DATA PUMP

    The data pump is a secondary Extract process that sends data in large blocks from the local extract trail across the TCP/IP network to the remote trail on the destination/target server. Data pumps add storage flexibility and isolate the primary Extract process from TCP/IP activity.
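A data-pump parameter file is typically short, since the pump only forwards trail data. In this sketch the host name, port, and object names are assumptions:

```
-- pmp1.prm : data-pump parameter file (host and names are illustrative)
EXTRACT pmp1
-- Pass trail records through without data manipulation
PASSTHRU
-- Target server and its Manager port
RMTHOST targethost, MGRPORT 7809
-- Remote trail on the target server
RMTTRAIL ./dirdat/rt
TABLE hr.employees;
```

PASSTHRU is a common choice here because it avoids the need for data definitions when the pump performs no transformation.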
  5. COLLECTOR

    The Collector runs in the background on the target server and processes information arriving from the source server. It receives the extracted data over the TCP/IP network, reassembles it, and writes it to a remote Golden Gate trail file.

    Replication runs on the destination server. It is just like the extract process since it is also configured for an initial load as well as change synchronization. Working of the replication is to read the transactional data changes and also DDD changes and replicates them to the target database.
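To round out the picture, a minimal Replicat configuration might look like this sketch; the process name rep1, the credentials, and the schema/table names are hypothetical:

```
-- rep1.prm : Replicat parameter file (names are illustrative)
REPLICAT rep1
USERID ggadmin, PASSWORD ggadmin
-- Assume source and target table structures match,
-- so no source-definitions file is needed
ASSUMETARGETDEFS
-- Map source table changes to the target table
MAP hr.employees, TARGET hr.employees;
```

The process is registered in GGSCI with, for example, ADD REPLICAT rep1, EXTTRAIL ./dirdat/rt, pointing it at the remote trail written on the target.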