Oracle Streams Replication Example

I promised to show off our Streams implementation, because I found so few examples out there. Keep in mind that your Streams implementation will probably look nothing like mine, and even if it does, you can get the same results in very different ways. This example is not an excuse to avoid reading the docs.

Our goal: Set up a reporting server that replicates schemas from many source DBs. We also want to filter out (i.e. not replicate) some of the less critical tables.

Decisions, decisions: When implementing Streams there are several things you want to ask yourself: Do you want bi-directional replication? Do you want to capture the changes on the source DB, the destination DB, or a third server? Do you do the instantiation (the first copy of the data) with Data Pump or RMAN?

We went with one-directional replication; capture will be done on the destination server, and we'll use Data Pump to initially move the schemas to the destination.

Create users: You want to create a user on both the source and the destination server that will own all the Streams-related objects. Also, give it its own tablespace (or at least keep its objects away from the SYSTEM tablespace), because in case of errors lots of data can accumulate in the Streams queues. The docs recommend making the tablespace autoextend, but I'm very much against it. After creating the strmadmin user, run the rest of my instructions as this user.

CREATE TABLESPACE streams_tbs DATAFILE '/data/oradata/DB05/streams_tbs.dbf' SIZE 25M;

CREATE USER strmadmin
IDENTIFIED BY strmadminpw
DEFAULT TABLESPACE streams_tbs
QUOTA UNLIMITED ON streams_tbs;

GRANT DBA TO strmadmin;

BEGIN
  DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee          => 'strmadmin',
    grant_privileges => TRUE);
END;
/
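To confirm the grant took, you can query DBA_STREAMS_ADMINISTRATOR. A quick sanity check; I believe these are the column names, but verify against your version's docs:

SELECT USERNAME, LOCAL_PRIVILEGES, ACCESS_FROM_REMOTE
FROM DBA_STREAMS_ADMINISTRATOR;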

DB Links: You will need a DB link from the source to the destination DB. For convenience, I also created a DB link from the destination to the source, which I'll use during the configuration of Streams and drop afterwards. Of course, the servers should be able to tnsping each other before any of this will work.


-- on source
CONNECT strmadmin/strmadminpw
CREATE DATABASE LINK REP01 CONNECT TO strmadmin IDENTIFIED BY strmadminpw
  USING 'REP01';
SELECT COUNT(*) FROM dba_tables@REP01;

-- on destination
CONNECT strmadmin/strmadminpw
CREATE DATABASE LINK DB05 CONNECT TO strmadmin IDENTIFIED BY strmadminpw
  USING 'DB05';
SELECT COUNT(*) FROM dba_tables@DB05;

Directory Objects: I decided to use Data Pump for the instantiation and DBMS_STREAMS_ADM to create a configuration script. This means I needed two directory objects on the destination (for the script and for Data Pump) and one on the source (just Data Pump). I used existing directories, so I had no need to create them, but in general you create a directory like so:

CREATE DIRECTORY EXP_DP_VOL AS '/exp_dp';
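If the directory is created by a DBA user rather than the Streams administrator, remember to let strmadmin read and write there as well. A minimal sketch, using the example directory above:

GRANT READ, WRITE ON DIRECTORY EXP_DP_VOL TO strmadmin;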

Copying archived redo logs: You need to get the archived redo logs from the source server to the server doing the capturing (the destination server in my case, but it can be a 3rd server). There are two decisions to make here: do you use ARCH or LGWR to write the log files to the remote location? And will the new archiving destination be MANDATORY or OPTIONAL (i.e. does the write have to succeed before the corresponding online redo log at the source can be overwritten)?
Also, when specifying the destination directory, you want it to be separate from the directory where REP01's own redo logs are archived, and make extra sure that the directory really exists exactly as you wrote it.
I specified a non-existent directory; this resulted in errors in the alert log about failure to archive (scary!), and after a few retries the archiver stopped attempting to write to the new destination. So, after creating the directory, I had to reset the log_archive_dest_2 parameter (i.e. set it to '' and then change it back again), as shown below.

On the source server, we run:

ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=REP01 ARCH OPTIONAL NOREGISTER TEMPLATE=/u10/oradata/DB05_arch/DB05_arch_%t_%s_%r.redo';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
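For reference, the reset I mentioned above looks roughly like this once the directory actually exists: clear the destination, set it again, then re-enable it (same values as before; adjust to your own setup):

ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=REP01 ARCH OPTIONAL NOREGISTER TEMPLATE=/u10/oradata/DB05_arch/DB05_arch_%t_%s_%r.redo';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;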

Create the instantiation and configuration script: Since we replicate (almost) entire schemas, we could use DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS to do both the instantiation and the configuration of Streams.
I used the option that creates a script with the instructions (instead of running everything immediately). I had many problems with the script and ended up using only parts of it, doing the other parts in different ways. I'm not sure whether that means using a script is a bad idea (since I had so many problems) or a good idea (since the same problems would have been more annoying without the script). To create the script, I ran the following command on the destination database. If you are using a 3rd server for capture, run it there:

BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
    schema_names                 => 'DEV110',
    source_directory_object      => 'EXPORTS_DIR',
    destination_directory_object => 'EXP_DP_VOL',
    source_database              => 'DB05',
    destination_database         => 'REP01',
    perform_actions              => FALSE,
    script_name                  => 'src_pre_inst_streams.sql',
    script_directory_object      => 'EXP_DP_VOL',
    dump_file_name               => 'dev110.dmp',
    capture_queue_table          => 'rep_capture_queue_table',
    capture_queue_name           => 'rep_capture_queue',
    capture_queue_user           => NULL,
    apply_queue_table            => 'rep_dest_queue_table',
    apply_queue_name             => 'rep_dest_queue',
    apply_queue_user             => NULL,
    capture_name                 => 'capture_110dev',
    propagation_name             => 'prop_110dev',
    apply_name                   => 'apply_110dev',
    log_file                     => 'export_110dev.clg',
    include_ddl                  => TRUE,
    instantiation                => DBMS_STREAMS_ADM.INSTANTIATION_SCHEMA);
END;
/

Running the resulting script: Run the script you just created on the same DB where you ran the procedure that created it. Note that it asks for connection details for three sites. The first should be the source, the second the destination, and the third can be garbage, as it is not used in the script at all.
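The generated file is a plain SQL*Plus script, so running it is just something like this (the path is an example; use wherever your script directory object points):

sqlplus strmadmin/strmadminpw @/exp_dp/src_pre_inst_streams.sql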

If the stars are aligned right, maybe the script runs to the finish and you now have working schema replication. In my case, it ran fine until the Data Pump part, and created all the capture processes and queues. However, the Data Pump export failed with very unusual errors. I spent two days trying to get it to work, then gave up and simply used Data Pump manually to export the schema from the source and import it into the destination. After that I continued running the rest of the script, and things were fine.
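In case it helps, the manual workaround was plain schema-mode Data Pump; a minimal sketch, using the directory objects and dump file name from the MAINTAIN_SCHEMAS call above. On the source server (OS prompt):

expdp strmadmin/strmadminpw SCHEMAS=DEV110 DIRECTORY=EXPORTS_DIR DUMPFILE=dev110.dmp LOGFILE=dev110_exp.log

Copy dev110.dmp to the directory behind EXP_DP_VOL on the destination server, then:

impdp strmadmin/strmadminpw SCHEMAS=DEV110 DIRECTORY=EXP_DP_VOL DUMPFILE=dev110.dmp LOGFILE=dev110_imp.log

Depending on where your script failed, you may also need to set the instantiation SCN yourself (DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN) before the apply process will pick up changes.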

Fun and games: Now that you have your own schema replication, it is time to enjoy it. Create new tables, create table as select, update data, insert new data, delete data, drop tables, create procedures and types. Everything replicates!

Just remember that if you don't commit, it won't replicate. The reasons should be obvious. Also, if you are testing on a low-volume system, you probably want to run ALTER SYSTEM SWITCH LOGFILE every few minutes so you can see your changes faster.
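A minimal sanity check on the source looks something like this (the table name is made up; the log switch needs a DBA):

CREATE TABLE dev110.rep_test (id NUMBER PRIMARY KEY, note VARCHAR2(100));
INSERT INTO dev110.rep_test VALUES (1, 'hello from DB05');
COMMIT;
ALTER SYSTEM SWITCH LOGFILE;

Then check dev110.rep_test on REP01; since DDL is included, the table itself should show up there too.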

Watch the replication process: While you are replicating changes, you probably want to watch how they move from source to destination:

You can see when the last message was read from redo archive:
SELECT CAPTURE_NAME,
       LOGMINER_ID,
       AVAILABLE_MESSAGE_NUMBER,
       TO_CHAR(AVAILABLE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY')
         AVAILABLE_MESSAGE_CREATE_TIME
FROM V$STREAMS_CAPTURE;

You can see when the changes were entered into the capture queue:
SELECT CAPTURE_NAME,
       (ENQUEUE_TIME - ENQUEUE_MESSAGE_CREATE_TIME) * 86400 LATENCY_SECONDS,
       TO_CHAR(ENQUEUE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') CREATE_TIME,
       TO_CHAR(ENQUEUE_TIME, 'HH24:MI:SS MM/DD/YY') ENQUEUE_TIME,
       ENQUEUE_MESSAGE_NUMBER
FROM V$STREAMS_CAPTURE;

And you can see when the apply process started working on them:
SELECT APPLY_NAME,
       (DEQUEUE_TIME - DEQUEUED_MESSAGE_CREATE_TIME) * 86400 LATENCY,
       TO_CHAR(DEQUEUED_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') CREATION,
       TO_CHAR(DEQUEUE_TIME, 'HH24:MI:SS MM/DD/YY') LAST_DEQUEUE,
       DEQUEUED_MESSAGE_NUMBER
FROM V$STREAMS_APPLY_READER;

Another nice thing to try is to generate errors. For example, you can drop a table on the destination, make a change to it on the source, and see how it doesn't replicate. The error will show up here:
SELECT APPLY_NAME,
       SOURCE_DATABASE,
       LOCAL_TRANSACTION_ID,
       ERROR_NUMBER,
       ERROR_MESSAGE,
       MESSAGE_COUNT
FROM DBA_APPLY_ERROR;
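Once you fix whatever caused the error (recreate the table, say), the stuck transaction can be retried or discarded with DBMS_APPLY_ADM. A sketch with a made-up transaction id; use the LOCAL_TRANSACTION_ID returned by the query above:

BEGIN
  DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '2.15.8905');
  -- or, to discard the failed transaction instead:
  -- DBMS_APPLY_ADM.DELETE_ERROR(local_transaction_id => '2.15.8905');
END;
/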

And you should also check which archived logs can be deleted:
SELECT r.CONSUMER_NAME,
       r.NAME,
       r.FIRST_SCN,
       r.NEXT_SCN,
       r.PURGEABLE
FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
WHERE r.CONSUMER_NAME = c.CAPTURE_NAME;

Excluding Tables:
The last thing I did was filter out the tables I did not need to replicate. Those are tables the application uses as queues or temporary staging: tons of inserts and deletes, but the data is useless for reports.
I used the DBMS_STREAMS_ADM.ADD_TABLE_RULES procedure. I chose to do the filtering during capture (you can also do it during apply).
The important bits: table_name MUST include the schema name. Otherwise the procedure will assume the table belongs to strmadmin, and nothing will get filtered.
inclusion_rule=>FALSE is the part that indicates that we don't want this table.

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => 'DEV110.APP_QUEUE',
    streams_type       => 'capture',
    streams_name       => 'CAPTURE_110DEV',
    queue_name         => 'REP_CAPTURE_QUEUE',
    include_dml        => TRUE,
    include_ddl        => TRUE,
    include_tagged_lcr => FALSE,
    source_database    => 'DB05',
    inclusion_rule     => FALSE,
    and_condition      => NULL);
END;
/

You can then see the rule you created by running SELECT * FROM DBA_RULES.
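If you want the Streams context attached, DBA_STREAMS_RULES is usually easier to read; something like this (column names from memory, so check your docs):

SELECT RULE_NAME, RULE_SET_TYPE, STREAMS_TYPE, STREAMS_NAME, SCHEMA_NAME, OBJECT_NAME
FROM DBA_STREAMS_RULES
WHERE STREAMS_NAME = 'CAPTURE_110DEV';

The rule created with inclusion_rule=>FALSE should show up with RULE_SET_TYPE = 'NEGATIVE'.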

What now: The process I just described got me safely past a POC. There is obviously a lot more to be done before this solution goes to production. The performance of the capture, and the impact of replication lag, are a concern. Also, the entire setup needs to be recreated from scratch whenever we do ALTER DATABASE OPEN RESETLOGS on the source DB (hopefully not often). But the immediate next step for me is to prepare a nice presentation for management, showing what a great solution we prepared, how useful it will be for the business, and how much it will be worth the investment.


31 Comments on “Oracle Streams Replication Example”

  1. […] 6, 2008 Two weeks after the last streams post, and I need to give some updates and […]

  2. Naren says:

    How can we know how many more redo logs the CAPTURE process has to mine.

    Our Capture was down for a week and we brought it up now and want to monitor how many more it has to mine.

    Is there any simple query that can give us this.

    Thx

  3. prodlife says:

    Naren,

    You can get the SCN currently being processed from v$streams_capture.
    Then you can check DBA_REGISTERED_ARCHIVED_LOG to see which logfile contains this SCN, and how many arrived after that point.

  4. delia says:

    Thank You very much, this article is very detailed and very helpful. I am also trying to setup STREAMS for reporting database. I was planning on using schema level capture, schema level apply, though a little confused on how to filter out the tables not needed for replication. I am not sure which would be the best approach, we have 300 tables in source and only need to replicate (DML) 170 tables on destination. Question-should I use schema level capture and use the table rule procedure with inclusion=false to filter out tables not needed (130)? or should I use table rules for only the tables needed (170)? For the apply process can I use schema level apply procedure? do I need to include inclusion=false to filter out tables not needed? Oh yea we are using schema transformations too.

    Just a little confused, would appreciate any help,
    Delia

  5. prodlife says:

    Hi Delia,
    To choose between table and schema replication, ask yourself:
    1) What should happen if a new table is added. Do you want it replicated or not by default?
    2) What about other objects in schema such as stored procedures? should they be replicated?

    I encourage you to post your streams questions in the OTN streams forum: http://forums.oracle.com/forums/forum.jspa?forumID=70
    So you can get advice and hear opinions of different experts.

  6. Basi says:

    Thanks for the time you took to write this blog. Its was great help.

    First time when i setup streams it work fine without any errors.

    I remove stream using remove_streams_configuration and trying to setup again for different scheme it give below error

    ERROR at line 1:
    ORA-23616: Failure in executing block 6 for script
    06BA169BD1572587E040B98B70234302
    ORA-06512: at "SYS.DBMS_RECOVERABLE_SCRIPT", line 457
    ORA-06512: at "SYS.DBMS_STREAMS_MT", line 7631
    ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 2376
    ORA-06512: at line 1

    Thanks

  7. prodlife says:

    Basi,

    Ask at OTN Streams forum or at MetaLink. If I knew how to solve all streams issues my life would be so much better 🙂

  8. william says:

    Hi Chen,

    Just surfed to here as I am finding some web resource on Oracle streams.

    I have very new to Oracle streams and so I want to know more about it.

    Can Oracle streams support the features multimaster replication provided? Many people say Oracle streams is easier to managed than replication, isn’t the truth?

    Thanks.

    William

  9. Lene says:

    Thanks for taking to time to do this, it’s hard to find good information about the streams process.

    Just a quick comment on the errors encountered during the datapump part – if you grant your streams admin user the mgmt_user role, you can get around it. Apparently, dba is not enough.

    Also, generating the script you can run into errors if the individual tables don’t all have the right constraints – if a primary key is not defined, the supplemental logging at the table level won’t work and the script doesn’t generate. One way around that is to generate the script in an empty database or one with just a single test table, and adding the supplemental logging at the database level manually in the script instead.

  10. prodlife says:

    Thanks for your insightful comment, Lene.

    Keep in mind that if your table has no primary key, you are very likely to run into more annoying problems with streams.

    For example, if your source table has a duplicate field (very likely, with no PK), and you try to delete it – it will delete fine from the source, but will cause an error for the replication.

  11. Ravi Prakash says:

    Good One, We have few issues with Stream Setup, any insight on this…
    ORA-12012: error on auto execute of job 28339
    ORA-03135: connection lost contact
    ORA-02068: following severe error from QR69
    ORA-03135: connection lost contact
    ORA-25307: Enqueue rate too high, flow control enabled
    ORA-06512: at “SYS.DBMS_AQADM_SYS”, line 7224
    ORA-06512: at “SYS.DBMS_AQADM_SYS”, line 7502
    ORA-06512: at “SYS.DBMS_AQADM”, line 978
    ORA-06512: at line 1
    Wed Jan 14 22:10:54 2009
    Propagation Schedule for (STRMADMIN.STREAMS_QUEUE_xxx_QR69, ORACLEDB.xxxx.COM) encountered following error:
    ORA-03135: connection lost contact
    ORA-02068: following severe error from QR69
    ORA-03135: connection lost contact
    Wed Jan 14 22:16:28 2009
    Propagation Schedule for (STRMADMIN.STREAMS_QUEUE_xxxx_QR69, ORACLEDB.xxxx.COM) encountered following error:
    ORA-25307: Enqueue rate too high, flow control enabled

    Thanks
    Ravi Prakash

  12. Hafedh KADDACHI says:

    Thanks,
    i’am using Streams in production site since 2005, and i strongly advice everybody to use at lest the 10g version and
    not the 9i (many bugs !!!)

  13. Ron Lehman says:

    Agree, streams in 10g are better than in 9i. There are still many issues with streams, even in 10g.
    As I am aware there couple of good alternatives to streams. For instance database sync from wisdomforce is a good one: http://www.wisdomforce.com/products-DatabaseSync.html
    It is easier to use, more reliable and allows to replicate at same time different platforms .

  14. Hafedh KADDACHI says:

    Hi,
    now that my bidirectional streams are running in production, i would like to configure Dataguarad on the DB where streams are running.
    Could anyone recommend me a note (except Data Guard Role Transit and streams Feb 2005) or give me an example of code.
    Thanks

  15. DBA says:

    Hi Chen,
    I have a specific requirement on transforming an UPDATE on a source database into an INSERT on target database using Streams (to keep track of history). This is supposed to work like an AFTER EACH ROW trigger. The caveat is that I want to capture the equivalent of :NEW (trigger) into my stored procedure (a DML handler) to apply to the target. I want old column values of the row that were not updated and the new column values that were updated of that row. Here is a sample code for the UPDATE:

    IF cmd_type='UPDATE'
    THEN
    l_row_record.set_command_type ('INSERT');
    l_row_list := l_row_record.get_values ('new');
    l_row_record.set_values ('new', l_row_list);
    l_row_record.set_values ('old', NULL);
    l_row_record.execute (TRUE);
    END IF;

    The problem is I am getting only new values (columns that were updated) in the target. All the other (non-updated) column values are NULL. I want the non-updated column values that are old values. Does that make sense?

    How can I solve this?

    Thanks!

    • prodlife says:

      @Prashant

      If you want to get ALL columns, not just the ones you modified, you’ll want to change your supplemental log settings to log all columns.

  16. DBA says:

    I forgot to mention the fact above that it is an 11g environment.

  17. DBA says:

    Ok. That’s a big deal! 🙂 How about if I want to just log the primary key only? In other words, log primary key and all the updated columns and don’t worry about all the other non-updated columns.
    How can I get the value of the primary key using get_value function of the lcr$_row_record type?

  18. DBA says:

    Thank you Chen. I was looking for exact same behavior that the OP in the forum wanted. BUT… I am not seeing that behavior.

    All I get is only the new column values. I don’t get the old column values. Now Of course, I don’t expect all the old column values because I am not doing any supplemental logging. BUT… I at least expect the old primary key value. And I am not getting it.

    So is this a bug? I am on 11g.

  19. prodlife says:

    @DBA,

    I’m on 11.1.0.7, running on Linux, and I see the old values as I should. I’d check with Oracle why you don’t.

  20. DBA says:

    Can you believe it?

    "l_row_record.set_command_type ('INSERT');" line needs to be after "l_row_record.set_values ('new', l_row_list);" line.

    So following works:

    IF cmd_type='UPDATE'
    THEN
    l_row_list := l_row_record.get_values ('new');
    l_row_record.set_values ('new', l_row_list);
    l_row_record.set_values ('old', NULL);
    l_row_record.set_command_type ('INSERT');
    l_row_record.execute (TRUE);
    END IF;

    I guess as soon as you set the new command_type, the definition of ‘old’ value and ‘new’ value changes.

    I just figured this out by trail and error. Oracle support had no idea.

    Anyway, keep up the good work!

  21. Bobby says:

    I am looking for an example of setting up bi-direction replication at database level (global), using RMAN duplicate for instantiation. Anybody has any link to it? Thanks!

  22. Aseem says:

    Thanks a lot ..!
    everything goes well when we run script, but when i make any dml/ddl change in source it doesnt replicate to dest?
    do we need to run this procedure again and again which i dont think so to be solution?
    i ‘ve switch logfile as well but of no use
    i’m currently using 10g .

    if any one could help me on this?

    Regrds
    Aseem

  23. Hashmi says:

    Hi,
    Can you pease guide ?
    I have a database A (10.2.0.1) and various other databases(9.0.0.1)
    I need to merge data from various other database to this database A on daily basis.

    Can i use oracle streams in this scenario?

    Thanks

    • prodlife says:

      @Hashmi:
      No problem using streams between different versions.
      Streams normally provide real time replication so replicating just once a day is possible but not recommended (or simple).

  24. Hashmi says:

    Hi,
    Thankyou so much for your reply.
    Actually I need to update database A on hourly basis. The problem is when i try to run below code as sysdba on 9.0.0.1, i get no rows selected.

    SQL> select owner,object_name,object_type from dba_objects where object_name like '%STREAMS%';

    If I do
    1 BEGIN
    2 DBMS_STREAMS_ADM.MAINTAIN_SCHE();
    3* end;
    SQL> /
    It throws the below error:
    DBMS_STREAMS_ADM.MAINTAIN_SCHE();
    *
    ERROR at line 2:
    ORA-06550: line 2, column 1:
    PLS-00201: identifier ‘DBMS_STREAMS_ADM.MAINTAIN_SCHE’ must be declared
    ORA-06550: line 2, column 1:
    PL/SQL: Statement ignored
    I know its not the correct way to call the package but just to check whether oracle recognizes DBMS_STREAMS_ADM package or not.

    Thats why I asked is it possible to apply oracle streams between 10.2.0.1 and 9.0.0.1 databases? There are 29 different 9.0.0.1 databases that i need fetch data and insert them on 10.2.0.1 database on hourly basis.

    Many thanks once again

  25. Hashmi says:

    Hi, Just a Typo.
    Its SCHEMAS and not SCH i mentioned.

  26. joseph says:

    It was really helpfull….

