Friday, April 5, 2013

Dynamic Sampling - What's That?

Many of our DBA friends will have used dbms_xplan.display_cursor to check the execution path of a troublesome SQL cursor, and in some cases the troublesome SQL takes a different path than the expected one.

Something like this:


    CPU      Elapsed                  CPU per          % Total             
  Time (s)   Time (s)  Executions     Exec (s) % Total DB Time    SQL Id   
---------- ---------- ------------ ----------- ------- ------- -------------
    39,621     37,137    1,542,605        0.03    10.0     2.1 260cnxfjvb6am

Module: jdbc.EFS.DataSource
select /*+ FIRST_ROWS(1) */ tab.rowid, tab.msgid, tab.corrid, tab.priority, t
ab.delay, tab.expiration, tab.retry_count, tab.exception_qschema, tab.except
ion_queue, tab.chain_no, tab.local_order_no, tab.enq_time, tab.time_manager_in
fo, tab.state, tab.enq_tid, tab.step_no, tab.sender_name, tab.sender_address,


In such cases, the most common misconception is that dynamic sampling is a replacement for optimizer statistics. In reality it supplements optimizer statistics: it steps in when the regular statistics are not sufficient to produce good-quality cardinality estimates.

Secondly, dynamic sampling kicks in when the SQL has fairly complex predicates (this is also reported in the Note section when dbms_xplan.display_cursor is used). A typical complex predicate is a WHERE clause that filters on two correlated columns; per-column standard statistics are not sufficient there and would otherwise require extended statistics, as in the sketch below.
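A minimal sketch of such a predicate on a hypothetical CUSTOMERS table (table, columns, and values are illustrative; the point is that the two columns are strongly correlated, which individual column statistics cannot capture):

select count(*)
  from customers
 where cust_state_province = 'CA'  -- strongly correlated with...
   and country_id = 'US';          -- ...this column; the optimizer multiplies the two selectivities

-- Without dynamic sampling, the usual fix is a column-group extended statistic
-- on the same hypothetical columns:
select dbms_stats.create_extended_stats(user, 'CUSTOMERS',
         '(CUST_STATE_PROVINCE, COUNTRY_ID)') from dual;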

With standard statistics the optimizer estimates the cardinality as 1 row, but the actual number of rows returned by this query is 2084. By setting OPTIMIZER_DYNAMIC_SAMPLING to level 4, the optimizer is forced to use dynamic sampling to gather additional information about the complex predicate expression. That additional information lets the optimizer generate a more accurate cardinality estimate and choose a better execution plan.
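The plans below were displayed with dbms_xplan.display_cursor; a typical call for the cursor reported above looks like this (the second argument is the child cursor number, NULL meaning all children):

select * from table(dbms_xplan.display_cursor('260cnxfjvb6am', null, 'TYPICAL'));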

Plan hash value: 2338978088
-----------------------------------------------------------------------------------------------
| Id  | Operation                     | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |               |       |       |     1 (100)|          |
|   1 |  FOR UPDATE                   |               |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID | XX_FLOW_AQ    |     1 |  9929 |     1   (0)| 00:00:01 |
|   3 |    INDEX RANGE SCAN           | AQ$_FLOW_AQ_I |     1 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------


ALTER SESSION SET optimizer_dynamic_sampling = 4;


Plan hash value: 4208087888
----------------------------------------------------------------------------------------
| Id  | Operation           | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                  |       |       |     3 (100)|          |
|   1 |  FOR UPDATE         |                  |       |       |            |          |
|   2 |   SORT ORDER BY     |                  |     1 |  29K  |     3  (34)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| XX_FLOW_AQ       |  1518 |  29K  |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Note
----
 - dynamic sampling used for this statement
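The same effect can be scoped to a single statement with the DYNAMIC_SAMPLING hint instead of changing the session setting; a sketch against the queue table above (the predicate columns are taken from the select list in the AWR excerpt, the values are illustrative):

select /*+ dynamic_sampling(t 4) */ count(*)
  from xx_flow_aq t
 where state = 0
   and priority = 1;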


The table below summarizes the various levels of dynamic sampling and the scenarios in which each one kicks in (condensed from the Oracle documentation):

Level | When dynamic sampling is used
------+----------------------------------------------------------------------
  0   | Never (dynamic sampling disabled).
  1   | Unanalyzed tables that are joined, have no indexes, and have more
      | blocks than the sample size.
  2   | All unanalyzed tables (the default from 10g onwards).
  3   | Level 2, plus tables where selectivity was guessed (for example, an
      | expression applied to a column).
  4   | Level 3, plus tables with single-table predicates referencing two or
      | more columns -- the case shown above.
 5-9  | Same as level 4, with progressively larger sample sizes.
 10   | Same as level 4, but sampling all blocks in the table.
HAPPY LEARNING!


Wednesday, April 3, 2013

DBMS_REDEFINITION Is a Good Option for Zero-Downtime Environments


I tried DBMS_REDEFINITION for a table structure change. It is an almost zero-downtime operation, and it completed successfully.
Below are the steps performed.

Except for step 5, all steps can be performed fully online (the finish step takes only a brief exclusive lock).


1) Check that the table can be redefined

EXEC Dbms_Redefinition.Can_Redef_Table('XX', 'XX_ITEM_COST_INT');

PL/SQL procedure successfully completed.
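Can_Redef_Table defaults to primary-key based redefinition. If the table had no primary key, the rowid-based method could be checked instead; a sketch on the same table, with the third parameter added:

EXEC Dbms_Redefinition.Can_Redef_Table('XX', 'XX_ITEM_COST_INT', Dbms_Redefinition.Cons_Use_Rowid);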

2) Create the interim table without any constraints or indexes (adjust the INITRANS and PCTFREE values as required)

CREATE TABLE "XX"."XX_ITEM_COST_INT_2"
   (    "ODI_RUN_ID" NUMBER,
        "ORG_TYPE" VARCHAR2(10) NOT NULL ENABLE,
        "ORG" VARCHAR2(40) NOT NULL ENABLE,
.
.
   ) PCTFREE 25 PCTUSED 40 INITRANS 15 MAXTRANS 255
  STORAGE(
.
.
COMPRESS FOR OLTP LOGGING
  STORAGE(
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "XX_TS_TX_DATA" )  
  Table created.

3) Start the redefinition:

EXEC Dbms_Redefinition.Start_Redef_Table('XX','XX_ITEM_COST_INT','XX_ITEM_COST_INT_2');

PL/SQL procedure successfully completed.

Elapsed: 00:08:30.85
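If DML keeps running against the original table during this window, the interim table can be periodically resynchronized before the finish step; this keeps the changes still to be applied at finish time (and hence the final lock duration) small:

EXEC Dbms_Redefinition.Sync_Interim_Table('XX', 'XX_ITEM_COST_INT', 'XX_ITEM_COST_INT_2');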

4) Add constraints and indexes (adjust the INITRANS and PCTFREE values as required):

alter table "XX"."XX_ITEM_COST_INT_2" add
(CONSTRAINT "XX_CST_SCDH_ITEM_COST_PK_2" PRIMARY KEY ("ODI_RUN_ID", "ITEM_NUM","COST_TYPE", "ORG")
USING INDEX PCTFREE 25 INITRANS 15 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "XX_TS_TX_DATA");
  
  Table altered.

Elapsed: 00:10:13.45

CREATE INDEX "XX"."XX_CST_SCDH_ITEM_PROCESS_2" ON "XX"."XX_ITEM_COST_INT_2" ("PROCESS_FLAG")
  PCTFREE 25 INITRANS 15 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "XX_TS_TX_IDX";
  
Elapsed: 00:03:02.62

  CREATE INDEX "XX"."XX_CST_SCDH_ITEM_PROCESS_2_2" ON "XX"."XX_ITEM_COST_INT_2" ("ITEM_NUM", "ORG")
  PCTFREE 25 INITRANS 15 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "XX_TS_TX_IDX"

Elapsed: 00:04:09.39
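As an alternative to hand-creating each constraint and index, DBMS_REDEFINITION can clone all dependents of the original table onto the interim table. A sketch of that variant follows; note that the cloned objects inherit the original storage parameters, which is exactly what this exercise wanted to change, hence the manual approach above:

DECLARE
  l_errors PLS_INTEGER;
BEGIN
  Dbms_Redefinition.Copy_Table_Dependents(
    uname            => 'XX',
    orig_table       => 'XX_ITEM_COST_INT',
    int_table        => 'XX_ITEM_COST_INT_2',
    copy_indexes     => Dbms_Redefinition.Cons_Orig_Params,
    copy_triggers    => TRUE,
    copy_constraints => TRUE,
    copy_privileges  => TRUE,
    ignore_errors    => FALSE,
    num_errors       => l_errors);  -- OUT: count of objects that failed to copy
  DBMS_OUTPUT.PUT_LINE('errors: ' || l_errors);
END;
/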
  

5) Finish the redefinition

new   1: select OWNER,SEGMENT_NAME,sum(BYTES)/1024/1024/1024 from dba_segments where SEGMENT_NAME='XX_ITEM_COST_INT_2' group by OWNER,SEGMENT_NAME
XX
XX_ITEM_COST_INT_2
               1.94921875 -- (with the new storage clause the size increased)

new   1: select OWNER,SEGMENT_NAME,sum(BYTES)/1024/1024/1024 from dba_segments where SEGMENT_NAME='XX_ITEM_COST_INT' group by OWNER,SEGMENT_NAME
XX
XX_ITEM_COST_INT
               1.79882813

EXEC Dbms_Redefinition.Finish_Redef_Table( 'XX', 'XX_ITEM_COST_INT', 'XX_ITEM_COST_INT_2');

PL/SQL procedure successfully completed.

new   1: select OWNER,SEGMENT_NAME,sum(BYTES)/1024/1024/1024 from dba_segments where SEGMENT_NAME='XX_ITEM_COST_INT_2' group by OWNER,SEGMENT_NAME
XX
XX_ITEM_COST_INT_2
               1.79882813
new   1: select OWNER,SEGMENT_NAME,sum(BYTES)/1024/1024/1024 from dba_segments where SEGMENT_NAME='XX_ITEM_COST_INT' group by OWNER,SEGMENT_NAME
XX
XX_ITEM_COST_INT -- (the table got redefined)
               1.94921875
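If anything fails between Start_Redef_Table and Finish_Redef_Table, the attempt can be cleanly aborted; this drops the materialized view log machinery created by the start call so the process can be retried:

EXEC Dbms_Redefinition.Abort_Redef_Table('XX', 'XX_ITEM_COST_INT', 'XX_ITEM_COST_INT_2');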

6) Tidy up the work

DROP TABLE XX.XX_ITEM_COST_INT_2;
ALTER TABLE XX.XX_ITEM_COST_INT RENAME CONSTRAINT XX_CST_SCDH_ITEM_COST_PK_2 TO XX_CST_SCDH_ITEM_COST_PK;
ALTER INDEX XX.XX_CST_SCDH_ITEM_PROCESS_2 RENAME TO XX_CST_SCDH_ITEM_PROCESS_IDX1;
ALTER INDEX XX.XX_CST_SCDH_ITEM_PROCESS_2_2 RENAME TO XX_CST_SCDH_ITEM_PROCESS_IDX2;
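A quick sanity check that the renamed constraint and indexes ended up on the redefined table (plain dictionary queries):

SELECT index_name, status
  FROM dba_indexes
 WHERE owner = 'XX' AND table_name = 'XX_ITEM_COST_INT';

SELECT constraint_name, constraint_type, status
  FROM dba_constraints
 WHERE owner = 'XX' AND table_name = 'XX_ITEM_COST_INT';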


HAPPY LEARNING!