oracdc is a set of software solutions for the Apache Kafka Connect ecosystem designed to transfer information about changes in the source Oracle database (versions 9i, 10g, 11g, 12c, 18c, 19c, 21c, 23ai, and 26ai) for further processing. This processing usually involves reacting to changes in the source system using event-driven architecture principles, or performing data integration and replication to heterogeneous systems.
oracdc supports the following Oracle Database editions:
- Enterprise
- Standard
- Express
- Free
- Personal
oracdc includes:
- Three source connectors for the Apache Kafka Connect ecosystem that implement CDC:
  - solutions.a2.cdc.oracle.OraCdcRedoMinerConnector, which reads changes directly from Oracle Database redo log files
  - solutions.a2.cdc.oracle.OraCdcLogMinerConnector, which reads changes from Oracle Database using Oracle LogMiner
  - solutions.a2.cdc.oracle.runtime.thread.KafkaSourceSnapshotLogConnector, which reads changes from Oracle Database materialized view logs

  Important: Although the source connectors listed above were designed for Apache Kafka Connect, they can easily be adapted to work in other ecosystems or standalone. For more information, please email us at oracle@a2.solutions
- Two Apache Kafka sink connectors, solutions.a2.kafka.sink.JdbcSinkConnector and solutions.a2.kafka.sink.WrappedDataJdbcSinkConnector, optimized for delivering data from a Kafka topic to PostgreSQL or Oracle Database
- Transformations between native Oracle data types like NUMBER and INTERVAL% and Kafka types:
  - solutions.a2.kafka.transforms.OraNumberConverter, which converts native Oracle NUMBER bytes to Kafka types
  - solutions.a2.kafka.transforms.OraIntervalConverter, which converts native Oracle INTERVAL bytes to Kafka types
- Additional converters:
  - solutions.a2.kafka.transforms.HeaderToFieldConverter
  - solutions.a2.kafka.transforms.KeyToValueConverter
  - solutions.a2.kafka.transforms.ToLowerCaseNameConverter
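The converters above are standard Kafka Connect single message transforms (SMTs), so they are wired into a connector configuration via the usual `transforms` chaining syntax. A minimal sketch (the aliases are illustrative; per-SMT options, if any, are described in the connector documentation):

```properties
# Chain two of the oracdc SMTs onto a connector (aliases are arbitrary)
transforms=oraNumber,lowerCase
transforms.oraNumber.type=solutions.a2.kafka.transforms.OraNumberConverter
transforms.lowerCase.type=solutions.a2.kafka.transforms.ToLowerCaseNameConverter
```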
oracdc is open-source software developed by a commercial organization. While we strive to support the open-source edition whenever we can, for immediate help please consider purchasing a support subscription from A2 Rešitve d.o.o./A2 Solutions LLC. Subscribers receive priority support, security hotfixes, and new features ahead of the open-source release. Community developers should raise issues or pull requests directly on GitHub. Please make sure you have read our Contribution Agreement (CONTRIBUTING.md) before submitting any pull requests.
We recommend using solutions.a2.cdc.oracle.OraCdcRedoMinerConnector because:
- It uses the database only to read table column information (one table = one read from the data dictionary) and to determine the current database SCN, which eliminates additional CPU and I/O load on the database instance.
- It does not create any objects in the database.
- It does not require any additional software to be installed or running on the database server.
- It works with various database configurations, including Oracle RAC.
- It streams redo data using the SSH or SMB protocols (for a database server on a Windows platform), or using the BFILENAME Oracle database function.
- It streams redo files directly from Oracle ASM (requires SYSASM/SYSDBA permissions).
- It processes and transforms data without involving the database server (including XMLTYPE, JSON, and VECTOR).
- It works with minimal SUPPLEMENTAL LOGGING at the database level and without any SUPPLEMENTAL LOGGING settings at the schema and table levels.
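A minimal configuration sketch for this connector. Only `connector.class` and the standard Kafka Connect keys are authoritative here; `a2.jdbc.url` is named elsewhere in this README, its value below is illustrative, and the remaining required settings (credentials, redo access method, table filters) are described in RedoMinerConnector.adoc:

```properties
name=oracdc-redo-miner
connector.class=solutions.a2.cdc.oracle.OraCdcRedoMinerConnector
tasks.max=1
# Illustrative JDBC URL; adjust host, port, and service name for your database
a2.jdbc.url=jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1
```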
We recommend using solutions.a2.cdc.oracle.OraCdcLogMinerConnector only if you cannot use solutions.a2.cdc.oracle.OraCdcRedoMinerConnector, for example, if your organization's security policy prohibits remote connections to Oracle ASM with SYSASM/SYSDBA rights.
To get started, we recommend:
- For solutions.a2.cdc.oracle.OraCdcRedoMinerConnector, please refer to the article Bye LogMiner, welcome ssh!. Full connector documentation is available at RedoMinerConnector.adoc.
- For solutions.a2.cdc.oracle.OraCdcLogMinerConnector, please refer to the article How to set up Oracle Database tables replication in minutes. Full connector documentation is available at LogMinerConnector.adoc.
solutions.a2.cdc.oracle.runtime.thread.KafkaSourceSnapshotLogConnector reads changes from Oracle Database materialized view logs. While it demonstrates Oracle capabilities, it is not recommended for production use due to the very large load it creates on the database. Its documentation is available at SnapshotLogConnector.adoc.
All connectors use Oracle JDBC driver version 26ai to connect to the database and Oracle ASM. To configure the JDBC connection string using TLS, please refer to Connect using the Oracle JDBC driver.
- AWS Marketplace - optimized for Amazon MSK and AWS Glue Schema Registry
  - x86_64 CloudFormation Stack
  - AWS Graviton CloudFormation Stack
  - amd64 Container
The minimum supported Java version is 21. To run on this version, you need to pass the following arguments to the JVM:

```
--enable-preview --enable-native-access=ALL-UNNAMED
```

Java 25 is recommended. To run on this version, you need to pass the following argument to the JVM:

```
--enable-native-access=ALL-UNNAMED
```
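With the standard Kafka launch scripts, these flags can be supplied through the `KAFKA_OPTS` environment variable; a launch configuration sketch (paths and the exact mechanism may differ in your distribution):

```shell
# Java 25: only native access needs to be enabled
export KAFKA_OPTS="--enable-native-access=ALL-UNNAMED"
# On Java 21, additionally prepend: --enable-preview
bin/connect-distributed.sh config/connect-distributed.properties
```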
All connectors publish a number of metrics about their activity that can be monitored through JMX. For a complete list of metrics, please refer to JMX-METRICS.adoc
Aleksei Veremeev - Initial work - A2 Rešitve d.o.o.
- Better schema management, including ideas from timestamp of creation of a schema version
- Transition to Java 25
- Replacement of Chronicle Queue off-heap memory support with Java's native foreign memory API
- In addition to the already supported SSH/SFTP connection methods using SSHJ and Maverick Synergy, support for Apache MINA SSHD has been added
- Bug fixes
- Dockerfile: fix for GHSA-72hv-8253-57qq and CVE-2026-1605
- New strategy for specifying a Kafka topic by table name: solutions.a2.cdc.oracle.runtime.config.KafkaFlexibleTopicNameMapper. For details, see RedoMinerConnector.adoc
- Bug fixes
- Code cleanup: split the code into core CDC code and code specific to the Kafka Connect ecosystem
- Bug fixes
- Important changes to ensure proper handling of supplemental log data in partial rollback operations
- Transition to JDK 21
- Bug fixes
- Using a physical standby database as a source of change information. For more information, see the section "Using a physical standby database as a source of change information" in RedoMinerConnector.adoc. For a sample configuration and architecture diagram, see Make physical standby Active (Great?!) Again
- Bug fixes
- Reading changes from redo log files located in Oracle ASM using the DBMS_FILE_TRANSFER package. For more information, see the section "Reading using DBMS_FILE_TRANSFER and BFILENAME" in RedoMinerConnector.adoc
- OP:11.2 (IRP) and OP:11.6 (ORP) from tables with advanced compression enabled are now processed
- Bug fixes
- Big-endian platform fixes for processing OP:19.1 and OP:11.17
- solutions.a2.cdc.oracle.OraCdcRedoMinerConnector connector performance improvements
- Java 25 LTS for the container image
- New parameter a2.supplemental.logging: the connector currently supports different levels of supplemental logging. If you need more information, please contact us at oracle@a2.solutions
- New parameter a2.stop.on.missed.log.file: for more information please read parameters.adoc
- New parameters a2.tables.in.process.size, a2.tables.out.of.scope.size, and a2.transactions.in.process.size to manage the initial size of internal memory structures. For more information please read parameters.adoc
- solutions.a2.cdc.oracle.utils.file.OraRedoLogFile utility enhancements and the new solutions.a2.cdc.oracle.utils.file.OraCdcIncidentReader utility
- Default SSH provider changed from maverick to sshj
- Tech stack upgrade (new versions of BouncyCastle JCE, the Maverick Synergy Java SSH Library, and others)
- Bug fixes and enhancements
- Optimized code for converting raw bytes to Java/Kafka data types
- Bug fixes and enhancements
- New parameter a2.ignore.stored.offset
- Tech stack upgrade (new version of BouncyCastle JCE and others)
- Bug fixes and enhancements
- New parameter a2.process.all.update.statements for the solutions.a2.cdc.oracle.OraCdcRedoMinerConnector connector
- New parameter a2.unable.to.map.col.id.warning for the solutions.a2.cdc.oracle.OraCdcRedoMinerConnector connector
- OP:10.30 & OP:10.35 parsing for the solutions.a2.cdc.oracle.OraCdcRedoMinerConnector connector
- Initial load backported to the solutions.a2.cdc.oracle.OraCdcRedoMinerConnector connector
- Schema evolution support for the Sink connector
- This version supports Oracle TDE column encryption for the solutions.a2.cdc.oracle.OraCdcRedoMinerConnector connector, which reads redo files directly and does not use LogMiner. To decrypt encrypted columns, you need to set the a2.tde.wallet.path and a2.tde.wallet.password parameters
- Java 17 is used for compilation
- Bug fixes and enhancements
- Fix for CVE-2020-36843 from dependencies
- This version supports index-organized tables, including overflow processing, for the solutions.a2.cdc.oracle.OraCdcRedoMinerConnector connector, which reads redo files directly and does not use LogMiner
- XMLTYPE enhancements for the solutions.a2.cdc.oracle.OraCdcLogMinerConnector connector
- Bug fixes and enhancements
- This version supports large object processing for the solutions.a2.cdc.oracle.OraCdcRedoMinerConnector connector, which reads redo files directly and does not use LogMiner. Supported data types:
  - BLOB
  - CLOB
  - NCLOB
  - XMLTYPE
  - JSON (RDBMS 21c+)
  - VECTOR (RDBMS 23ai+)
- This version supports the BOOLEAN data type for the solutions.a2.cdc.oracle.OraCdcRedoMinerConnector connector, which reads redo files directly and does not use LogMiner
- Bug fixes and enhancements
- This version supports redo files located in any remote filesystem via BFILE for the solutions.a2.cdc.oracle.OraCdcRedoMinerConnector connector, which reads redo files directly and does not use LogMiner. For more information, please send us an email at oracle@a2.solutions or request a meeting at https://a2.solutions/
- Oracle JDBC v23
- Bug fixes and enhancements
- Additional SSH provider for the solutions.a2.cdc.oracle.OraCdcRedoMinerConnector connector, which reads redo files directly and does not use LogMiner. For more information, please send us an email at oracle@a2.solutions or request a meeting at https://a2.solutions/
- This version supports redo files located on SMB file shares for the solutions.a2.cdc.oracle.OraCdcRedoMinerConnector connector, which reads redo files directly and does not use LogMiner. For more information, please send us an email at oracle@a2.solutions or request a meeting at https://a2.solutions/
- Bug fixes and enhancements
- This version supports redo files located on remote database servers via SSH for the solutions.a2.cdc.oracle.OraCdcRedoMinerConnector connector, which reads redo files directly and does not use LogMiner. For more information, please send us an email at oracle@a2.solutions or request a meeting at https://a2.solutions/
- Oracle NUMBER datatype mapping enhancements
- This version supports redo files located in Oracle ASM for the solutions.a2.cdc.oracle.OraCdcRedoMinerConnector connector, which reads redo files directly and does not use LogMiner. For more information, please send us an email at oracle@a2.solutions or request a meeting at https://a2.solutions/
- Throttling control, especially for the solutions.a2.cdc.oracle.OraCdcRedoMinerConnector connector, which uses heap memory checks and the Linux parameter vm.max_map_count (/proc/sys/vm/max_map_count)
- Fully compatible with JDK 21 LTS
- This version includes the solutions.a2.cdc.oracle.OraCdcRedoMinerConnector connector (in addition to the LogMiner-based connector solutions.a2.cdc.oracle.OraCdcLogMinerConnector and the materialized-view-based solutions.a2.cdc.oracle.OraCdcSourceConnector), which reads redo files directly and does not use LogMiner. For more information, please send us an email at oracle@a2.solutions or request a meeting at https://a2.solutions/
- Improved solutions.a2.cdc.oracle.utils.file.OraRedoLogFile CLI utility, which produces output similar to the ALTER SYSTEM DUMP LOGFILE command (currently only for Layer 5 and Layer 11 changes) and includes additional supplemental logging information
- Checking for the presence of columns in the table that have not been dropped completely, and printing recommendations to the log
- SMT converters for solutions.a2.cdc.oracle.data.OraNumber/solutions.a2.cdc.oracle.data.OraIntervalYM/solutions.a2.cdc.oracle.data.OraIntervalDS (oracle.sql.NUMBER/oracle.sql.INTERVALYM/oracle.sql.INTERVALDS)
- Dockerfile enhancements (Schema Registry client updated to Confluent 7.7.1), and Dockerfile.snowflake to quickly create a data delivery pipeline between transactional Oracle and analytical Snowflake
1. Improved processing of transactions containing partial rollback (ROLLBACK=1) statements
2. JMX: LastProcessedSequence metric. For more information please refer to JMX-METRICS.adoc
3. Obsoleted and removed parameters: a2.resiliency.type, a2.persistent.state.file, a2.redo.count, a2.redo.size
4. New parameter a2.key.override to control the selection of database table columns used to create the key fields of a Kafka Connect record. For more information please refer to parameters.adoc
5. New parameter a2.last.sequence.notifier to add notifications about the last processed redo sequence. For more information please refer to parameters.adoc
- Oracle Active DataGuard support for the Oracle Database settings check utility
- Fix for Oracle DataGuard when V$STANDBY_LOG does not contain rows
- Fix for ORA-310/ORA-334 under heavy RDBMS load
- New parameters to support pseudo columns: a2.pseudocolumn.ora_rowscn, a2.pseudocolumn.ora_commitscn, a2.pseudocolumn.ora_rowts, and a2.pseudocolumn.ora_operation. For more information please refer to parameters.adoc
- New parameters to support audit pseudo columns: a2.pseudocolumn.ora_username, a2.pseudocolumn.ora_osusername, a2.pseudocolumn.ora_hostname, a2.pseudocolumn.ora_audit_session_id, a2.pseudocolumn.ora_session_info, and a2.pseudocolumn.ora_client_id. For more information please refer to parameters.adoc
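A hypothetical fragment enabling some of these pseudo columns. The parameter names come from this changelog; the assumption that each parameter takes the name of the field to add to the record is ours, so check parameters.adoc for the actual value format:

```properties
# Illustrative: emit SCN, operation, and database user as extra record fields
a2.pseudocolumn.ora_rowscn=ORA_ROW_SCN
a2.pseudocolumn.ora_operation=ORA_OPERATION
a2.pseudocolumn.ora_username=ORA_USERNAME
```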
Simplification of configuration for Oracle Active DataGuard: the same configuration is now used for Oracle Active DataGuard as for a primary database
- New parameter a2.stop.on.ora.1284 to manage the connector behavior on ORA-1284. For more information please refer to parameters.adoc
- Checking the number of non-zero columns returned from a redo record, for greater reliability
- Handling of partial rollback records in RDBMS 19.13, i.e. when the redo record with ROLLBACK=1 comes before the redo record with ROLLBACK=0
- Processing of DELETE operations for tables with a ROWID pseudo key
- New parameter a2.print.unable.to.delete.warning to manage the connector's log output for DELETE operations over tables without a PK. For more information please refer to parameters.adoc
- New parameter a2.schema.name.mapper to manage schema name generation. For more information please refer to parameters.adoc
- Enhanced handling of partial rollback redo records (ROLLBACK=1). For additional information about these redo records please read ROLLBACK INTERNALS, starting with the sentence "The interesting thing is with partial rollback."
- New parameter a2.topic.mapper to manage the name of the Kafka topic to which data will be sent. For more information please refer to parameters.adoc
- Oracle Database settings check utility
- ServiceLoader manifest files; for more information please read KIP-898: Modernize Connect plugin discovery
- oracdc now also checks for the first available SCN in V$LOG
- Reduced output about scale differences between redo and dictionary
- Separate first available SCN detection for primary and standby
- a2.incomplete.redo.tolerance - to manage connector behavior when processing an incomplete redo record. For more information please refer to parameters.adoc
- a2.print.all.online.scn.ranges - to control output when processing online redo logs. For more information please refer to parameters.adoc
- a2.log.miner.reconnect.ms - to manage the reconnect interval for LogMiner on Unix/Linux. For more information please refer to parameters.adoc
- a2.pk.type - to manage behavior when choosing key fields in the schema for a table. For more information please refer to parameters.adoc
- a2.use.rowid.as.key - to manage behavior when the table does not have appropriate PK/unique columns for key fields. For more information please refer to parameters.adoc
- a2.use.all.columns.on.delete - to manage behavior when reading and processing a redo record for DELETE. For more information please refer to parameters.adoc
Online redo logs are processed when the parameter a2.process.online.redo.logs is set to true (default: false). To control the lag behind data processing in Oracle, use the parameter a2.scn.query.interval.ms, which sets the lag in milliseconds for processing data in online logs.
This expands the range of connector tasks and makes oracdc usable where minimal and managed latency is required.
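A configuration sketch using the two parameters just described; both names are from this README, and the 500 ms interval is purely illustrative:

```properties
# Process online redo logs (off by default)
a2.process.online.redo.logs=true
# Allow roughly half a second of lag when querying online logs
a2.scn.query.interval.ms=500
```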
- HEX('59') (and some other single-byte values) for DATE/TIMESTAMP/TIMESTAMPTZ are treated as NULL
- HEX('787b0b06113b0d')/HEX('787b0b0612013a')/HEX('787b0b0612090c')/etc. (2.109041558E-115, 2.1090416E-115, 2.109041608E-115) for NUMBER(N)/NUMBER(P,S) are treated as NULL. Information about such values is not printed in the log by default; to print messages you need to set the parameter a2.print.invalid.hex.value.warning to true
- Solution for the problem described in LogMiner REDO_SQL missing WHERE clause and LogMiner Redo SQL w/o WHERE-clause
- New a2.pk.string.length parameter for the Sink Connector, and other Sink Connector enhancements
- New a2.transaction.implementation parameter for the LogMiner Source Connector: when set to ChronicleQueue (default), oracdc uses Chronicle Queue to store information about the SQL statements in an Oracle transaction; this uses off-heap memory and needs disk space for memory-mapped files. When set to ArrayList, oracdc uses an ArrayList to store information about the SQL statements in an Oracle transaction and uses the JVM heap (no disk space needed)
- Fix for ORA-17002 while querying the data dictionary
- Better handling of SQLRecoverableException while querying the data dictionary
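The transaction-buffering trade-off above can be expressed as a one-line config choice; a sketch (parameter name and both values are from this changelog):

```properties
# Heap-only transaction buffering: no disk needed, but transactions must
# fit in the JVM heap. The default, ChronicleQueue, uses off-heap memory
# plus memory-mapped files on disk instead.
a2.transaction.implementation=ArrayList
```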
Fix for unhandled ORA-17410 when running 12c on Windows, and stricter checks for supplemental logging settings
Oracle 23c readiness, supplemental logging checks, fixes for Oracle RDBMS on Microsoft Windows
fix for #40 & jackson library update
Single instance physical standby for Oracle RAC support
LOB_TRIM/LOB_ERASE output to log & Jackson Databind version change (fix for CVE-2022-42004)
Oracle RAC support, for more information please see What about Oracle RAC?
Deprecation of parameters a2.tns.admin, a2.tns.alias, a2.standby.tns.admin, a2.standby.tns.alias, a2.distributed.tns.admin, and a2.distributed.tns.alias. Please use a2.jdbc.url, a2.standby.jdbc.url, and a2.distributed.jdbc.url respectively. Please refer to parameters.adoc for parameter description and Oracle® Database JDBC Java API Reference, Release 23c for more information about JDBC URL format.
Minimum Java version → Java 11; Java 17 LTS recommended
Package name change: eu.solutions.a2 → solutions.a2
a2.resiliency.type = fault-tolerant to ensure 100% compatibility with Kafka Connect distributed mode
DDL operations support for LogMiner source
SYS.XMLTYPE support and fixes for partitioned tables with BLOB/CLOB/NCLOB columns
Distributed database configuration
MAY-21 features/fixes (fix ORA-2396, add lag to JMX metrics, add fetch size parameter)
LOB support. See also a2.process.lobs parameter
Kafka topic name configuration using a2.topic.name.style & a2.topic.name.delimiter parameters
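The topic-naming parameters named above can be sketched as follows; both names are from this changelog, but the values are purely illustrative, so see parameters.adoc for the accepted ones:

```properties
# Illustrative only: derive topic names from schema and table,
# joined with an underscore
a2.topic.name.style=SCHEMA_TABLE
a2.topic.name.delimiter=_
```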
Schema Editor GUI preview (java -cp <> solutions.a2.cdc.oracle.schema.TableSchemaEditor). This GUI is required for more precise mapping between Oracle and Kafka Connect datatypes. See also the a2.dictionary.file parameter
Ability to run Oracle Log Miner on the physical database when V$DATABASE.OPEN_MODE = MOUNTED to reduce TCO
Removing dynamic invocation of Oracle JDBC. Ref.: Oracle Database client libraries for Java now on Maven Central
- Oracle Log Miner as CDC source
- Removed AWS Kinesis support
- New class hierarchy