A single standby database is a reasonable starting point for disaster recovery, but it forces a binary choice at failover time. You either cut over to your one available standby or you do not. Real production environments have multiple failure domains, different latency tolerances, and workloads that should not be competing for the same primary. A single standby does not handle any of that cleanly.
Oracle Data Guard on OCI now supports up to six standby databases per primary, organized under a construct called a Data Guard Group. This article covers how the multi-standby architecture is structured, the redo transport mechanics that underpin synchronization, how to configure read offload and snapshot testing on standbys, and how OCI's managed automation maps to the underlying Data Guard configuration primitives.

The Data Guard Group Model
The Data Guard Group is the OCI-level abstraction that represents your entire Data Guard configuration: one primary and up to six standbys. Each member of the group is a database resource in OCI, and the group model lets you manage lifecycle operations on the whole configuration from a single control point rather than orchestrating changes on each database node individually.
The group currently supports a maximum of six standby databases per primary across both local and remote configurations. You configure and manage the group through the OCI Console, the OCI CLI, REST APIs, SDKs, or Terraform. All of these surfaces call the same control plane under the hood, so your choice of interface does not constrain what you can do.
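If you want to see the same membership from inside the database, the standard Data Guard views still apply. The sketch below is a read-only check against V$DATAGUARD_CONFIG, which lists the DB_UNIQUE_NAME of every member the configuration knows about; the OCI control plane remains the owner of the configuration itself.

```sql
-- Run on any member (primary or standby). Each row is one database in the
-- configuration; the DB_UNIQUE_NAME values correspond to the members of the
-- OCI Data Guard Group. DEST_ROLE (available in recent releases) shows which
-- member currently holds the primary role.
SELECT db_unique_name, dest_role
FROM   v$dataguard_config;
```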
Redo Transport Modes and RPO Implications
The most important architectural decision for each standby is the redo transport mode. This directly determines your RPO in a failure scenario. Data Guard supports three transport modes: SYNC, ASYNC, and FASTSYNC.
SYNC mode means the primary does not acknowledge a transaction commit to the application until the redo has been received and written to the standby redo log. Your RPO is effectively zero because no acknowledged transaction can be lost, but you pay the round-trip network latency on every commit. This mode is only viable for a local standby in the same availability domain or a standby in a nearby region with low single-digit-millisecond network latency. Putting a SYNC standby in a geographically distant region will serialize your application's commit throughput against intercontinental round trips.
ASYNC mode ships redo to the standby as network capacity allows, with no commit latency penalty on the primary. Your RPO is bounded by the redo shipping lag, which depends on your write volume and network throughput. For remote standbys across regions, ASYNC is the correct default. FASTSYNC is a hybrid: the commit is acknowledged once the standby has received the redo in memory, without waiting for it to be written to the standby redo log, which trims SYNC's commit latency while keeping RPO near zero on low-latency network paths.
| Transport Mode | Commit Waits For | RPO | Recommended Placement |
|---|---|---|---|
| SYNC | Redo received and written at standby | Zero data loss | Local standby, same region |
| FASTSYNC | Redo received in standby memory (not yet written) | Near-zero | Low-latency remote standby |
| ASYNC | Nothing on the standby side | Bounded by lag | Remote standby, cross-region |
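For reference, the table above maps onto the redo transport attributes that Data Guard itself uses. On OCI the broker sets these for you when you choose a protection mode and transport type, so the statements below are illustrative only; the destination numbers and DB_UNIQUE_NAME values (iad_stby, phx_stby) are made up for the example.

```sql
-- SYNC: commit waits until the standby has received and written the redo (AFFIRM).
ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=iad_stby SYNC AFFIRM NET_TIMEOUT=30 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=iad_stby';

-- FASTSYNC: commit waits for receipt in standby memory only, not the standby write.
ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=iad_stby SYNC NOAFFIRM NET_TIMEOUT=30 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=iad_stby';

-- ASYNC: commit does not wait on the standby; redo ships as bandwidth allows.
ALTER SYSTEM SET log_archive_dest_3 = 'SERVICE=phx_stby ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=phx_stby';
```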
Designing a Two-Standby Reference Configuration
The most common production pattern uses two standbys: one local and one remote. The local standby runs in the same OCI region as the primary, in a different availability domain, with SYNC transport. It is your fast-failover target for infrastructure-level failures that do not affect the whole region; because this standby is already fully synchronized, the RTO for a role transition to it can be measured in seconds.
The remote standby runs in a second OCI region with ASYNC transport. It absorbs redo from the primary continuously and serves as your recovery target if the entire primary region becomes unavailable. Your RTO here is longer because it involves DNS re-pointing, application connection pool reconfiguration, and potential redo application catch-up at the remote site. You should measure and document this RTO regularly, not just assume it based on theory.
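A quick way to confirm that both standbys are healthy from the primary's point of view is the V$ARCHIVE_DEST_STATUS view, sketched below; in a two-standby group you should see one row per standby destination, both valid.

```sql
-- Run on the primary. One row per redo destination: the local SYNC standby
-- and the remote ASYNC standby should both show STATUS = 'VALID' and no gap.
SELECT dest_id,
       db_unique_name,
       status,
       synchronization_status,
       gap_status
FROM   v$archive_dest_status
WHERE  type = 'PHYSICAL'
ORDER  BY dest_id;
```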
A third standby becomes worth considering when you have compliance requirements that mandate a separate backup copy, or when you want a dedicated reporting standby that is isolated from your HA standby and your DR standby. Adding a third member to the Data Guard Group is an additive operation through the OCI control plane; it does not require reconfiguring the existing two members.
Active Data Guard and Read Workload Offload
Standard Data Guard standbys are mounted but not open. Active Data Guard opens the standby in read-only mode while redo application continues in the background. The standby is simultaneously applying incoming redo from the primary and serving read queries from connected sessions.
This makes the standby a viable target for reporting queries, analytics workloads, and database backup jobs that would otherwise consume I/O and CPU on the primary. Backup jobs running on the standby still protect your data because the standby is synchronized with the primary. You do not need a separate RMAN target on the primary for routine backup scheduling.
The connection routing to an Active Data Guard standby uses a service name configured on the standby database. Your application connection pool or read-routing layer connects to this service. The standby does not accept DML, so any writes will fail with ORA-16000. Your read workloads must be explicitly separated from write workloads at the application or connection pool level before you route them to the standby. This is not automatic.
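Before routing read traffic, it is worth verifying that the standby really is open read-only with redo apply running. A minimal check, assuming Active Data Guard is enabled on the standby:

```sql
-- Run on the standby. With Active Data Guard you want
-- OPEN_MODE = 'READ ONLY WITH APPLY': queries are being served while
-- redo apply continues in the background.
SELECT database_role, open_mode FROM v$database;

-- Any DML attempted through a session on this standby fails with
-- ORA-16000 (database open for read-only access).
```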
Snapshot Standby for Pre-Production Testing
A snapshot standby is a temporary conversion of a physical standby to a read-write database. When you convert a physical standby to snapshot mode, Data Guard saves the current standby state, stops redo apply, and opens the database in read-write mode. The database is now isolated from the primary and accepts full DML. Incoming redo from the primary is received and buffered but not applied.
This gives you a writable copy of production data for integration testing, schema migration validation, or upgrade pre-checks against real data volumes. When you are done, you convert the database back to physical standby mode. Data Guard discards all changes made during the snapshot window and resumes applying the buffered redo that accumulated while the standby was in snapshot mode. The standby catches up automatically.
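Under the covers, the conversion is a pair of role changes on the standby. The sketch below shows the SQL-level steps for a standby managed directly; on OCI, the Data Guard automation (or DGMGRL, if you drive the broker manually) performs the equivalent for you, so treat this as a description of the mechanics rather than a runbook.

```sql
-- 1. Convert to snapshot standby. Data Guard creates a guaranteed restore
--    point first, so every change made in snapshot mode can be discarded later.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;  -- stop redo apply
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
ALTER DATABASE OPEN;  -- now read-write and fully isolated from the primary

-- 2. When testing is finished, convert back. The database is flashed back to
--    the restore point, and the buffered redo is applied once apply resumes.
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
-- Restart the instance in mount mode and resume redo apply afterwards.
```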
The operational implication is that the standby must have enough archive storage to hold the redo that accumulates during the snapshot window. For databases with high write rates, a long snapshot session creates a deep backlog. Monitor the transport lag during snapshot mode, and after converting back, make sure the apply lag has returned to near zero before relying on the standby for HA again. The OCI Data Guard Group view shows the transport lag for each standby member, which you can also query via the V$DATAGUARD_STATS view on the standby itself.
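The same numbers the console shows are available directly on the standby:

```sql
-- Run on the standby: current transport and apply lag as day-to-second
-- intervals, plus the time at which each value was computed.
SELECT name, value, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');
```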
Switchover vs. Failover: Choosing the Right Role Transition
A switchover is a planned, graceful role transition. The primary flushes all redo to the target standby, the standby catches up fully, and both databases swap roles cleanly. No data is lost. You use this for planned maintenance, region migrations, and Exadata hardware generation upgrades where you want a zero-RPO cutover.
A failover is an unplanned transition initiated when the primary is unavailable. If you are using ASYNC transport to your remote standby, the standby may be behind by some amount of redo at the time of failover. Data Guard can apply any redo that was received but not yet applied, but redo that never arrived at the standby is gone. This is why transport mode selection matters before an incident, not after.
With multiple standbys in a Data Guard Group, you choose which standby to fail over to based on the situation. You do not have a single hardcoded target. OCI surfaces the current transport lag and apply lag for each standby in the group, so you can select the standby with the smallest lag at the moment of failover to minimize data exposure.
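If you want to cross-check the console's lag figures while the primary is down, a coarse comparison is to ask each surviving standby how far its received redo extends; the standby holding the highest sequence numbers loses the least data. This is a simplification (it ignores redo sitting in a partially filled standby redo log), but it is a useful sanity check:

```sql
-- Run on each candidate standby and compare the results. Received-but-not-yet-
-- applied redo is recovered automatically during the failover, so what matters
-- here is which standby received the most redo, not which has applied the most.
SELECT thread#, MAX(sequence#) AS last_received_sequence
FROM   v$archived_log
WHERE  resetlogs_change# = (SELECT resetlogs_change# FROM v$database)
GROUP  BY thread#;
```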
Zero-Downtime Infrastructure Upgrades Using Standby Migration
One of the most operationally useful patterns enabled by multiple standbys is a migration between Exadata hardware generations without taking a maintenance window on the primary. You add a new standby database provisioned on the target hardware, for example on Exadata X11M while your primary runs on X9M. You configure this new standby as a member of your existing Data Guard Group and let it synchronize.
Once the new standby is current, you perform a switchover. The new hardware standby becomes the primary, and your old primary becomes a standby. Your application now runs on the new hardware. The old standby continues to protect the new primary until you are confident in the new configuration and decommission the old hardware. The risk window is the switchover operation itself, which is a clean transition with zero data loss.
OCI Automation and Terraform Configuration
The OCI Terraform provider exposes the Data Guard Group through the oci_database_data_guard_association resource. When adding a standby, you specify the peer database home, the peer region for cross-region standbys, the protection mode, and the transport type. A cross-region standby resource requires the peer region identifier in the peer_region argument and a database home OCID in that region as the placement target.
For the protection mode attribute, the valid values are MAXIMUM_AVAILABILITY, MAXIMUM_PERFORMANCE, and MAXIMUM_PROTECTION. Maximum Availability uses SYNC transport but lets the primary keep processing transactions if the standby becomes unreachable, temporarily lowering the effective protection level. Maximum Performance corresponds to ASYNC transport. Maximum Protection is SYNC with no such allowance: rather than accept an unprotected commit, it will stall or shut down the primary if no synchronized standby is reachable, which is rarely appropriate for cloud deployments.
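After the association is created, you can confirm how the Terraform-level protection mode surfaced inside the database itself:

```sql
-- Run on the primary. PROTECTION_MODE reflects the configured mode;
-- PROTECTION_LEVEL is the mode currently in effect and can temporarily drop
-- (for example to RESYNCHRONIZATION) while a standby is unreachable or
-- catching up.
SELECT protection_mode, protection_level FROM v$database;
```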
The OCI CLI equivalent for adding a standby to an existing group is oci db data-guard-association create with-existing-db-system for same-region standbys, or oci db data-guard-association create from-existing-db-system with the appropriate cross-region parameters for remote standbys. Verify the exact subcommand structure against the installed CLI version, as the subcommand paths have shifted across CLI releases.
Supported Infrastructure Targets
Multi-standby Data Guard Group automation is available across the full range of Oracle cloud database infrastructure options. This includes Exadata Database Service on Dedicated Infrastructure, Exadata Database Service on Cloud@Customer, Exadata Database Service on Exascale Infrastructure, and Base Database Service. The automation model is consistent across all four targets, meaning the same Terraform configurations and API calls work regardless of which underlying infrastructure your database runs on.
Conclusion
A multi-standby Data Guard architecture on OCI solves problems that a single-standby configuration cannot. You get independent failure domain coverage at the local and regional level, the ability to offload read and backup workloads without impacting primary throughput, a safe environment for testing against real production data volumes, and a clean mechanism for zero-downtime infrastructure migrations.
The configuration decisions that matter most are transport mode per standby, protection mode for the group, and the archive space that must absorb redo during snapshot standby windows. Getting those right up front avoids operational surprises at the worst possible moment: when you actually need the standby to work.



