Friday, May 29, 2015

Temp Table Transformation Cardinality Estimates - 2

Continuing from the previous part - which was about the Temp Table Transformation and join cardinality estimates - and using the same simple table setup, here is a slight variation of the previously used query that demonstrates the potential impact on single table cardinality estimates:
explain plan for
with
cte as (
select /* inline */ id from t1 t
where 1 = 1
)
select /*+
           no_merge(a) no_merge(b)
       */ * from cte a, cte b
where a.id = b.id
and a.id > 990 and b.id > 990
;

-- 11.2.0.x Plan with TEMP transformation
--------------------------------------------------------------------------------
| Id  | Operation                  | Name                      | Rows  | Bytes |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                           |  1000 | 26000 |
|   1 |  TEMP TABLE TRANSFORMATION |                           |       |       |
|   2 |   LOAD AS SELECT           | SYS_TEMP_0FD9D661C_275FD9 |       |       |
|   3 |    TABLE ACCESS FULL       | T1                        |  1000 |  4000 |
|*  4 |   HASH JOIN                |                           |  1000 | 26000 |
|*  5 |    VIEW                    |                           |  1000 | 13000 |
|   6 |     TABLE ACCESS FULL      | SYS_TEMP_0FD9D661C_275FD9 |  1000 |  4000 |
|*  7 |    VIEW                    |                           |  1000 | 13000 |
|   8 |     TABLE ACCESS FULL      | SYS_TEMP_0FD9D661C_275FD9 |  1000 |  4000 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
 
   4 - access("A"."ID"="B"."ID")
   5 - filter("A"."ID">990)
   7 - filter("B"."ID">990)

-- 11.2.0.x Plan with INLINE hint
----------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |
----------------------------------------------------
|   0 | SELECT STATEMENT    |      |    10 |   260 |
|*  1 |  HASH JOIN          |      |    10 |   260 |
|   2 |   VIEW              |      |    10 |   130 |
|*  3 |    TABLE ACCESS FULL| T1   |    10 |    40 |
|   4 |   VIEW              |      |    10 |   130 |
|*  5 |    TABLE ACCESS FULL| T1   |    10 |    40 |
----------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - access("A"."ID"="B"."ID")
   3 - filter("ID">990)
   5 - filter("ID">990)
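For reference, the INLINE variant above was obtained by simply turning the comment in the CTE into the corresponding (undocumented) INLINE hint - a minimal sketch of the hinted query:
with
cte as (
select /*+ inline */ id from t1 t
where 1 = 1
)
select /*+
           no_merge(a) no_merge(b)
       */ * from cte a, cte b
where a.id = b.id
and a.id > 990 and b.id > 990
;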
Again it's obvious that the Temp Table Transformation can have a significant impact on the single table cardinality estimates.

In particular:

- Although the same filter is applied in both cases to the row sources A and B, in the case of the Temp Table Transformation it doesn't reduce the cardinality estimate. So it's not uncommon to end up with significant cardinality overestimates when the transformation gets used

- Particularly bad for Exadata environments is that the filter isn't pushed into the TABLE ACCESS FULL operation, but only applied in the VIEW operation above, which means it can't be offloaded - all the data needs to be sent from the Storage Cells to the Compute Nodes and filtered there. Not a very efficient way to operate on Exadata

The behaviour is still the same in 12c.

Monday, May 25, 2015

Temp Table Transformation Cardinality Estimates - 1

Having recently published two notes about the Temp Table Transformation - highlighting the heuristics-based decision and other weaknesses, for example regarding the projection of columns - it's time to publish some more notes about it.

The transformation can also have a significant impact on cardinality estimates, both join and single table cardinality.

Looking at the difference in the join cardinality estimates for the following simple example:
create table t1
as
select
        rownum as id
      , mod(rownum, 10) + 1 as id2
      , rpad('x', 100) as filler
from
        dual
connect by
        level <= 1000
;

exec dbms_stats.gather_table_stats(null, 't1')

alter session set tracefile_identifier = 'temp_trans_join_card';

alter session set events '10053 trace name context forever, level 1';

explain plan for
with
cte as (
select /* inline */ id + 1 as id from t1 t
where 1 = 1
)
select /*+
           --opt_estimate(@"SEL$2" join("A"@"SEL$2" "B"@"SEL$2") rows=1000)
           no_merge(a) no_merge(b)
       */ * from cte a, cte b
where a.id = b.id
;

alter session set events '10053 trace name context off';
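As a side note, in case you want to locate the resulting 10053 trace file (which will include the tracefile_identifier set above), something like this should work from 11g onwards:
-- Show the full path of the current session's trace file
select value from v$diag_info where name = 'Default Trace File';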

-- 11.2.0.x Plan with TEMP transformation
--------------------------------------------------------------------------------
| Id  | Operation                  | Name                      | Rows  | Bytes |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                           |     1 |    26 |
|   1 |  TEMP TABLE TRANSFORMATION |                           |       |       |
|   2 |   LOAD AS SELECT           | SYS_TEMP_0FD9D660C_27269C |       |       |
|   3 |    TABLE ACCESS FULL       | T1                        |  1000 |  4000 |
|*  4 |   HASH JOIN                |                           |     1 |    26 |
|   5 |    VIEW                    |                           |  1000 | 13000 |
|   6 |     TABLE ACCESS FULL      | SYS_TEMP_0FD9D660C_27269C |  1000 |  4000 |
|   7 |    VIEW                    |                           |  1000 | 13000 |
|   8 |     TABLE ACCESS FULL      | SYS_TEMP_0FD9D660C_27269C |  1000 |  4000 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
 
   4 - access("A"."ID"="B"."ID")

-- 11.2.0.x Plan with INLINE hint
----------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |
----------------------------------------------------
|   0 | SELECT STATEMENT    |      | 10000 |   253K|
|*  1 |  HASH JOIN          |      | 10000 |   253K|
|   2 |   VIEW              |      |  1000 | 13000 |
|   3 |    TABLE ACCESS FULL| T1   |  1000 |  4000 |
|   4 |   VIEW              |      |  1000 | 13000 |
|   5 |    TABLE ACCESS FULL| T1   |  1000 |  4000 |
----------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - access("A"."ID"="B"."ID")
the following becomes obvious:

- There are vastly different cardinality estimates possible depending on whether the transformation gets used or not. In principle, due to the NO_MERGE hints used, the transformation shouldn't have any impact on the estimates, but it does

- Looking at the optimizer trace file, some information seems to get lost when the transformation gets used, in particular column-related statistics

- This information loss, as in the example here, can lead to join cardinality estimates of 0 (rounded to 1 in the plan output)

- And even worse, at least in pre-12c versions, such a 0 cardinality estimate can't be corrected via OPT_ESTIMATE hints, since the correction somehow gets ignored/lost, although it is mentioned in the optimizer trace:
11.2.0.1:
  Column (#1): ID(
    AvgLen: 22 NDV: 1 Nulls: 0 Density: 0.000000
  Column (#1): ID(
    AvgLen: 22 NDV: 1 Nulls: 0 Density: 0.000000
Join Card:  0.000000 = = outer (1000.000000) * inner (1000.000000) * sel (0.000000)
>> Join Card adjusted from 0.000000  to: 1000.000000, prelen=2
Adjusted Join Cards: adjRatio=1.00 cardHjSmj=0.000000 cardHjSmjNPF=0.000000 cardNlj=0.000000 cardNSQ=1000.000000 cardNSQ_na=0.000000
Join Card - Rounded: 1 Computed: 0.00
The behaviour regarding the OPT_ESTIMATE hint changes in 12c, but 12c introduces other oddities that are not there in pre-12c versions: have a look at the "Query Block" section of the trace when using the INLINE variant of the query - it shows two identical fully qualified object names, clearly a bug, which makes hinting via the global hint syntax impossible for that query block.

Although my simple example can be corrected via extended statistics on the join column expression used in the CTE query, the point is that, depending on whether the transformation gets used or not, vastly different and extreme cardinality estimates are possible - and those extreme cases can't even be corrected in pre-12c versions.
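For reference, a sketch of the extended statistics workaround just mentioned - gathering expression statistics on the join expression used in the CTE (using METHOD_OPT is just one way of creating the extension):
-- Creates and gathers extended (expression) statistics on "id + 1",
-- the expression the CTE joins on
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns (id + 1) size 1')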

For example, I recently had a real-life case where two columns with a significant number of NULL values were joined, one of them coming from a temp table transformation row source. Without the transformation the join cardinality estimates were reasonable, but the transformation again led to such a 0 cardinality estimate (that couldn't be corrected via a correctly specified OPT_ESTIMATE hint), ruining the whole plan.

Thursday, May 7, 2015

Heuristic Temp Table Transformation - 2

Some time ago I demonstrated the non-cost-based decision for applying the temp table transformation when using CTEs (Common Table / Subquery Expressions). In this note I want to highlight another aspect of this behaviour.

Consider the following setup, creating a table with deliberately wide columns:
create table a
as
  select
          rownum as id
        , rownum as id2
        , rpad('x', 4000) as large_vc1
        , rpad('x', 4000) as large_vc2
        , rpad('x', 4000) as large_vc3
from
          dual
connect by
          level <= 1000
;

exec dbms_stats.gather_table_stats(null, 'a')
and this query and plans with and without the temp table transformation:
with cte
as
(
  select  /* inline */
          id
        , id2
        , large_vc1
        , large_vc2
        , large_vc3
from
          a
where
          1 = 1
)
select
        *
from
        (
          select id, count(*) from cte group by id
        ) a,
        (
          select id2, count(*) from cte group by id2
        ) b
where
        a.id = b.id2
;

-- Plan with TEMP TABLE transformation
--------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                           |  1000 | 52000 |  1341   (1)| 00:00:01 |
|   1 |  TEMP TABLE TRANSFORMATION |                           |       |       |            |          |
|   2 |   LOAD AS SELECT           | SYS_TEMP_0FD9D6609_26FA32 |       |       |            |          |
|   3 |    TABLE ACCESS FULL       | A                         |  1000 |    11M|   452   (0)| 00:00:01 |
|*  4 |   HASH JOIN                |                           |  1000 | 52000 |   889   (1)| 00:00:01 |
|   5 |    VIEW                    |                           |  1000 | 26000 |   444   (1)| 00:00:01 |
|   6 |     HASH GROUP BY          |                           |  1000 |  4000 |   444   (1)| 00:00:01 |
|   7 |      VIEW                  |                           |  1000 |  4000 |   443   (0)| 00:00:01 |
|   8 |       TABLE ACCESS FULL    | SYS_TEMP_0FD9D6609_26FA32 |  1000 |    11M|   443   (0)| 00:00:01 |
|   9 |    VIEW                    |                           |  1000 | 26000 |   444   (1)| 00:00:01 |
|  10 |     HASH GROUP BY          |                           |  1000 |  4000 |   444   (1)| 00:00:01 |
|  11 |      VIEW                  |                           |  1000 |  4000 |   443   (0)| 00:00:01 |
|  12 |       TABLE ACCESS FULL    | SYS_TEMP_0FD9D6609_26FA32 |  1000 |    11M|   443   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------

-- Plan with CTE inlined (turn INLINE into hint)
-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |  1000 | 52000 |   907   (1)| 00:00:01 |
|*  1 |  HASH JOIN           |      |  1000 | 52000 |   907   (1)| 00:00:01 |
|   2 |   VIEW               |      |  1000 | 26000 |   453   (1)| 00:00:01 |
|   3 |    HASH GROUP BY     |      |  1000 |  4000 |   453   (1)| 00:00:01 |
|   4 |     TABLE ACCESS FULL| A    |  1000 |  4000 |   452   (0)| 00:00:01 |
|   5 |   VIEW               |      |  1000 | 26000 |   453   (1)| 00:00:01 |
|   6 |    HASH GROUP BY     |      |  1000 |  4000 |   453   (1)| 00:00:01 |
|   7 |     TABLE ACCESS FULL| A    |  1000 |  4000 |   452   (0)| 00:00:01 |
-----------------------------------------------------------------------------
Looking at the query and plan output, the following becomes obvious:

- The mere existence of a WHERE clause, even if it is just "WHERE 1 = 1", combined with referencing the CTE more than once, triggers the transformation (nothing new, already demonstrated in the previous note mentioned above, as is the fact that the inlined CTE variant is cheaper in cost) - see the sketch after this list

- There is a huge difference between the estimated size of the TEMP TABLE and the size of the row sources when using the CTE inline
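As a sketch of the first point: per the heuristic described in the previous note, simply dropping the dummy WHERE clause from the CTE should be enough to have it inlined again, even without the INLINE hint:
with cte
as
(
  select
          id
        , id2
        , large_vc1
        , large_vc2
        , large_vc3
from
          a
)
select
        *
from
        (
          select id, count(*) from cte group by id
        ) a,
        (
          select id2, count(*) from cte group by id2
        ) b
where
        a.id = b.id2
;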

The second point is particularly noteworthy: usually Oracle is pretty clever at optimizing the projection and uses only the columns actually required (this doesn't apply to the target expression of MERGE statements, by the way), which is reflected in the plan output for the inline CTEs - the wide columns don't matter here because they aren't referenced, although they are mentioned in the CTE. But in the case of the temp table transformation, obviously all columns / expressions mentioned in the CTE get materialized, whether or not they are referenced when the CTE gets used.
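In case you want to verify what each operation actually projects, the column projection information of DBMS_XPLAN can help - a minimal sketch (the PROJECTION format option is also included in the ALL and ADVANCED format levels):
-- Display the last explained plan including the column projection
-- information per operation
select * from table(dbms_xplan.display(format => 'TYPICAL +PROJECTION'));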

So it would be nice if Oracle only materialized those columns / expressions actually used.

Now you might ask why one would mention columns and expressions in the CTE that don't get used afterwards: well, generic approaches sometimes lead to such constructs - imagine the CTE part is static, including all possible attributes, while the actual usage of the CTE can be customized by a client. In such cases, where only a small part of the available attributes actually gets used, a temp table transformation can lead to a huge overhead in the size of the generated temp table. Preventing the transformation addresses this issue, but then the inlined CTE will have to be evaluated as many times as it is referenced - which might not be desirable either.