A client has an ETL process that checks for success by ensuring –
Source Rows = Rows inserted + Rows in DML error logging table.
This is normally fine; however, one needs to be aware that rows are inserted into the DML error logging table autonomously, outside the parent transaction. Thus you need to be careful when combining [Rows Inserted] (which is atomic) with [Rows in DML error logging table] (which is not).
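For illustration, here is a minimal sketch of the kind of reconciliation check in question. The table names, error tag, and error handling are hypothetical, not the client's actual code:

DECLARE
  l_source_rows   PLS_INTEGER;
  l_inserted_rows PLS_INTEGER;
  l_error_rows    PLS_INTEGER;
BEGIN
  SELECT count(*) INTO l_source_rows FROM stg_source;   -- hypothetical staging table

  INSERT INTO tgt_table
  SELECT * FROM stg_source
  LOG ERRORS INTO err$_tgt_table ('run1') REJECT LIMIT UNLIMITED;

  l_inserted_rows := SQL%ROWCOUNT;                      -- transactional

  -- Error-log rows are written autonomously, so they survive any
  -- implicit rollback/restart of the parent statement.
  SELECT count(*) INTO l_error_rows
  FROM   err$_tgt_table
  WHERE  ora_err_tag$ = 'run1';

  IF l_source_rows != l_inserted_rows + l_error_rows THEN
    raise_application_error(-20001, 'Source rows <> inserted + rejected');
  END IF;
END;
/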
As an example, if you combine all four of the conditions below you'll get a scenario that causes the above check to fail unnecessarily.
Create a demonstration table with indexes:
SQL> ALTER SESSION SET deferred_segment_creation = true

Session altered.

SQL> CREATE TABLE DML_DFC
       (ID     VARCHAR2(30)
       ,NAME   VARCHAR2(20)
       ,ACTIVE VARCHAR2(1 BYTE))
     PARTITION BY LIST (ACTIVE)
       ( PARTITION P_ACTIVE   VALUES ('Y'),
         PARTITION P_INACTIVE VALUES (NULL) )

Table created.

SQL> BEGIN
       dbms_errlog.create_error_log(dml_table_name     => 'DML_DFC',
                                    err_log_table_name => 'DML_DFC_ERR');
     END;

PL/SQL procedure successfully completed.

SQL> CREATE UNIQUE INDEX DFX_PK ON DML_DFC(id)

Index created.

SQL> CREATE UNIQUE INDEX DFC_IX01 ON DML_DFC(NAME)

Index created.
Confirm Deferred Segment Creation:
SQL> SELECT count(*)
     FROM   user_extents
     WHERE  segment_name = 'DML_DFC'

  COUNT(*)
----------
         0

1 row selected.
Capture a few useful statistics before running the test case:
SQL> select name, value
     FROM   v$sesstat a
            INNER JOIN v$statname b
            ON  a.statistic# = b.statistic#
            AND b.name like '%rollbacks%'
            AND a.sid = (select sid from v$mystat where rownum = 1)
     order by 1

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
IMU CR rollbacks                                                          0
cleanouts and rollbacks - consistent read gets                            0
rollbacks only - consistent read gets                                     0
transaction rollbacks                                                     0
transaction tables consistent read rollbacks                              0
user rollbacks                                                            0
Simulate an ETL load with four source rows, two of which should be rejected by the unique indexes (one duplicates ID '1', the other duplicates the NAME 'hello'):
SQL> MERGE INTO DML_DFC dst
     USING ( SELECT *
             FROM   ( select '1' AS id ,null    AS name ,null as ACTIVE FROM DUAL UNION ALL
                      select '1' AS id ,null    AS name ,null as ACTIVE FROM DUAL UNION ALL
                      select '2' AS id ,'hello' AS name ,null as ACTIVE FROM DUAL UNION ALL
                      select '3' AS id ,'hello' AS name ,null as ACTIVE FROM DUAL )
             ORDER BY 1,2 ) src
     ON (dst.id = src.id)
     WHEN NOT MATCHED THEN
       INSERT VALUES (src.id ,src.name ,src.active)
     LOG ERRORS INTO DML_DFC_err('tdh') REJECT LIMIT UNLIMITED
Monitor the changes to statistics captured earlier:
SQL> select name, value
     FROM   v$sesstat a
            INNER JOIN v$statname b
            ON  a.statistic# = b.statistic#
            AND b.name like '%rollbacks%'
            AND a.sid = (select sid from v$mystat where rownum = 1)
     order by 1

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
IMU CR rollbacks                                                          0
cleanouts and rollbacks - consistent read gets                            0
rollbacks only - consistent read gets                                     0
transaction rollbacks                                                     1
transaction tables consistent read rollbacks                              0
user rollbacks                                                            0
Interesting: at some point during statement execution, Oracle has performed an implicit rollback and restart. If you weren't aware that Oracle could do this, see Tom Kyte's series on Transaction Restarts (Part I is here).
That series, from 2005, predates Deferred Segment Creation, so this particular scenario won't be mentioned there.
Check the inserted and rejected row counts:
SQL> SELECT count(*) FROM DML_DFC

  COUNT(*)
----------
         2

1 row selected.

SQL> SELECT count(*) FROM DML_DFC_err

  COUNT(*)
----------
         3

1 row selected.
Oh dear: we had 4 source rows, we've inserted 2, but we've managed to reject 3 rows! The rejection logged before the implicit restart was written autonomously, so it survived the rollback, and the restarted statement then logged it again.
For the moment, the application in question has disabled deferred segment creation, but the strategic fix is to find an implementation that doesn't implicitly expect DML error logging to be transactional.
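One possible mitigation, sketched here on the assumption that rejected rows can be identified by their source key and that each run uses a distinct error tag: count distinct rejected keys rather than raw error-log rows, so a rejection logged twice across a statement restart is only counted once.

-- Count distinct rejected keys for this run's tag ('tdh' in the
-- MERGE above); ID is the source key column copied into the error
-- log, ORA_ERR_TAG$ is a standard error-log column.
SELECT count(DISTINCT id) AS rejected_rows
FROM   dml_dfc_err
WHERE  ora_err_tag$ = 'tdh';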
End-to-end instrumentation is an extremely powerful feature for investigating or monitoring performance across all layers of a modern multi-tiered application.
To facilitate this, the application is expected to tag each database operation with a MODULE and an ACTION string. These are free-form and can be populated with whatever is most appropriate for the application; however, releases up to and including 11.2.0.1 limit their lengths to a restrictive 48 and 32 bytes respectively.
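For context, a typical tagging pattern looks something like this; the module and action values are just placeholders:

BEGIN
  dbms_application_info.set_module(module_name => 'ORDER_LOAD'   -- <= 48 bytes on older releases
                                  ,action_name => 'VALIDATE');   -- <= 32 bytes on older releases
  -- ... do the work for this step ...
  dbms_application_info.set_action('MERGE');    -- update the action as the step changes
  -- ... more work ...
  dbms_application_info.set_module(null, null); -- clear when finished
END;
/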
When I was reviewing our recently upgraded 11.2.0.2 environment, I was excited to discover:
SQL> SELECT
  2     column_name
  3    ,data_length
  4  FROM  dba_tab_columns
  5  WHERE table_name = 'V_$SESSION'
  6  AND   column_name IN ('MODULE','ACTION');

COLUMN_NAME                    DATA_LENGTH
------------------------------ -----------
MODULE                                  64
ACTION                                  64
Confirming this wasn’t simply the view definition, I checked the package specification for DBMS_APPLICATION_INFO:
procedure set_module(module_name varchar2, action_name varchar2);
--  Input arguments:
--    module_name
--      The name of the module that will now be running. The maximum
--      length of the module name is 64 bytes. Longer names will be
--      truncated.
--    action_name
--      The name of the action that will now be running. The maximum
--      length of the action_name is 64 bytes. Longer names will be
--      truncated. If the action name is not being specified, then null
--      should be passed for this value.
Excellent! That gives us a bit more to work with! Let’s try it…
SQL> select lengthb('Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do') AS module_length_bytes
  2        ,lengthb('Ut enim ad minim veniam, quis nostrud exercitation ullamco labor') AS action_length_bytes
  3  from dual;

MODULE_LENGTH_BYTES ACTION_LENGTH_BYTES
------------------- -------------------
                 64                  64

SQL> begin
  2    dbms_application_info.set_module('Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do'
  3                                    ,'Ut enim ad minim veniam, quis nostrud exercitation ullamco labor');
  4  end;
  5  /

PL/SQL procedure successfully completed.

SQL> SELECT module
  2        ,length(module)
  3        ,action
  4        ,length(action)
  5  FROM   v$session
  6  WHERE  sid = (SELECT sid
  7                FROM   v$mystat
  8                WHERE  rownum = 1)
  9  /

MODULE                                           LENGTH(MODULE) ACTION                           LENGTH(ACTION)
------------------------------------------------ -------------- -------------------------------- --------------
Lorem ipsum dolor sit amet, consectetur adipisic             48 Ut enim ad minim veniam, quis no             32

SQL> BEGIN
  2    dbms_monitor.serv_mod_act_trace_enable('RAC1'
  3      ,'Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do'
  4      ,'Ut enim ad minim veniam, quis nostrud exercitation ullamco labor');
  5  END;
  6  /
BEGIN
*
ERROR at line 1:
ORA-12899: value too large for column
"SYS"."WRI$_TRACING_ENABLED"."QUALIFIER_ID1" (actual: 64, maximum: 48)
ORA-06512: at "SYS.DBMS_MONITOR", line 78
ORA-06512: at "ORACLE.PK_DBMS_MONITOR", line 13
ORA-06512: at line 2
Oh well… it looks like we're sticking to 48 and 32 bytes for now then.
UPDATE: Oracle Support have pointed out the rather oddly named "_module_action_old_length" parameter:
SQL> alter session set "_module_action_old_length" = false;

Session altered.

SQL> begin
       dbms_application_info.set_module('Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do'
                                       ,'Ut enim ad minim veniam, quis nostrud exercitation ullamco labor');
     end;
     /

SQL> SELECT module
           ,length(module)
           ,action
           ,length(action)
     FROM   v$session
     WHERE  sid = (SELECT sid
                   FROM   v$mystat
                   WHERE  rownum = 1)
     /

MODULE                                                           LENGTH(MODULE) ACTION                                                           LENGTH(ACTION)
---------------------------------------------------------------- -------------- ---------------------------------------------------------------- --------------
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do             64 Ut enim ad minim veniam, quis nostrud exercitation ullamco labor             64
So you can take advantage of the larger fields on 11.2.0.3. However, the usefulness of doing so is questionable, insofar as you still can't enable tracing for values longer than the "old lengths":
SQL> BEGIN
  2    dbms_monitor.serv_mod_act_trace_enable('RAC1'
  3      ,'Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do'
  4      ,'Ut enim ad minim veniam, quis nostrud exercitation ullamco labor');
  5  END;
  6  /
BEGIN
*
ERROR at line 1:
ORA-12899: value too large for column
"SYS"."WRI$_TRACING_ENABLED"."QUALIFIER_ID1" (actual: 64, maximum: 48)
ORA-06512: at "SYS.DBMS_MONITOR", line 78
ORA-06512: at line 2

SQL> set lin 100
SQL> desc "SYS"."WRI$_TRACING_ENABLED"
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------
 TRACE_TYPE                                NOT NULL NUMBER
 PRIMARY_ID                                         VARCHAR2(64)
 QUALIFIER_ID1                                      VARCHAR2(48)
 QUALIFIER_ID2                                      VARCHAR2(32)
 INSTANCE_NAME                                      VARCHAR2(16)
 FLAGS                                              NUMBER
In 11.1.0.6, there appears to be a bug regarding the interaction of Virtual Columns with the ‘row shipping’ enhancement.
Thanks to Miladin Modrakovic for the heads-up on row shipping.
Wide Table Select Row Shipping
SQL*Plus: Release 11.1.0.6.0 - Production on Mon Oct 5 09:26:33 2009

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning option

SQL> create table t2_vc
  2  ( col0 varchar2(30) generated always as ('SAMPLE'),
  3    col1 number,
  4    col2 number,
  5    col3 number,
  6    col4 number
  7  );

Table created.

SQL> insert into t2_vc (col1) values (1);

1 row created.

SQL> select * from t2_vc;

COL0         COL1       COL2       COL3       COL4
------ ---------- ---------- ---------- ----------
                1
Note COL0 is incorrectly returned as NULL.
Turn on row shipping notes in the explain plan
SQL> alter session set "_row_shipping_explain" = true;

Session altered.

SQL> explain plan for select * from t2_vc;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 3599290104

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |     1 |    69 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| T2_VC |     1 |    69 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement
   - row shipping is enabled for this statement

13 rows selected.
Virtual columns fail with row shipping enabled. Try again without row shipping:
SQL> alter session set "_enable_row_shipping" = false;

Session altered.

SQL> select * from t2_vc;

COL0         COL1       COL2       COL3       COL4
------ ---------- ---------- ---------- ----------
SAMPLE          1
COL0 is correctly returned.
SQL> explain plan for select * from t2_vc;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 3599290104

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |     1 |    69 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| T2_VC |     1 |    69 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement

12 rows selected.
Virtual columns now work correctly.
Test the threshold (the default is 80%). If we select more than 4 columns from our 5-column table, the threshold is exceeded and row shipping is enabled; select only 4 columns (exactly 80%) and row shipping stays disabled, and everything is fine.
SQL> alter session set "_enable_row_shipping" = true;

Session altered.

SQL> select col0,col1,col2,col3,col4 from t2_vc;

COL0         COL1       COL2       COL3       COL4
------ ---------- ---------- ---------- ----------
                1

Execution Plan
----------------------------------------------------------
Plan hash value: 3599290104

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |     1 |    69 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| T2_VC |     1 |    69 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement
   - row shipping is enabled for this statement

SQL> select col0,col1,col2,col3 from t2_vc;

COL0         COL1       COL2       COL3
------ ---------- ---------- ----------
SAMPLE          1

Execution Plan
----------------------------------------------------------
Plan hash value: 3599290104

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |     1 |    56 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| T2_VC |     1 |    56 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement
Interestingly, the threshold calculation also seems a bit broken, at least the way I interpret it. Even though I'm selecting all the columns, if I set the threshold to 81% row shipping is disabled for the virtual-column table.
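To probe this yourself, assuming the threshold is exposed via the hidden parameter "_row_shipping_threshold" (an assumption on my part; verify the name on your own system before relying on it):

-- Assumed parameter name; default is believed to be 80.
alter session set "_row_shipping_threshold" = 81;

-- Selecting all 5 of 5 columns is 100% of the table, which should
-- exceed any threshold below 100, yet with the setting above row
-- shipping was disabled for the virtual-column table.
select * from t2_vc;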
See the following attachment for an example of the threshold behaving unexpectedly when virtual columns are involved.
Here is a Perl script, with instructions, for summarising a redo log by object. It operates on the formatted dump of the redo, NOT the binary redo log itself. To avoid needing a lot of disk space for the formatted dump, you can use a named pipe to feed the text from Oracle directly into the Perl script.
In the instructions below, the SQL*Plus session and the Unix session must be two separate sessions, as the Perl script will block until you press Ctrl-C.
This takes a long time, so it's not actually all that practical; please don't do it anywhere important. Save it for test databases. I'm also not a Perl expert by any means, so no guarantees.
SQL> select vp.spid
     from   v$session vs, v$process vp
     where  vs.sid  = (select sid from v$mystat where rownum = 1)
     and    vp.addr = vs.paddr;

SPID
------------
13229
$ cd /oracle/admin/CNR/udump/
$ mkfifo ora_13229.trc
$ perl redo_summary.pl ora_13229.trc
This session will then wait for data on the pipe.
SQL> ALTER SYSTEM DUMP LOGFILE 'filename';
Here, filename should be the name of an archived redo log file.
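If you're not sure which file to use, a query like this (using standard columns of v$archived_log) lists recent candidates:

-- Most recently completed archived logs first; pick one and pass
-- its NAME to the DUMP LOGFILE command above.
SELECT name, sequence#, completion_time
FROM   v$archived_log
ORDER  BY completion_time DESC;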
Wait until…
Statement processed.
Then press Ctrl-C in the Perl session.
And you’ll get something like this:
OBJECT_ID        REDO_SIZE
3081559          131457104
63261            127620100
3845955          100570888
3845956          100565432
63268             70334108
NON-OBJECT REDO   44421440
36990             40602644
Cheers,
Tim