Thursday 29 November 2007

Oracle BI and Oracle ODBC driver

Oracle BI - using Oracle ODBC driver

It seems that the name of the connection pool has to be the same as the tnsnames entry if you are using the Oracle ODBC driver with Oracle BI. If it is different then everything appears to be OK in the Administration Tool, but it all falls apart when you try to run a report, which fails with a 'cannot find entry in tnsnames' error. Put an entry in the relevant tnsnames.ora file with the same name as the connection pool and it all magically starts working. Good eh?
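For example, if the connection pool were called MY_CONN_POOL (a made-up name), the tnsnames.ora entry might look something like this (host, port and service name are placeholders):

MY_CONN_POOL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )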

Thursday 1 November 2007

Oracle Data Miner

I've been trying to get Oracle Data Miner and its demo database installed.
It would help if I read things before diving in with both feet. To get it going I first installed XE on my PC, only to discover that it only works with the Enterprise Edition of Oracle, so I installed 11g, only to discover that version 10.2 of Data Miner only works with version 10.2 of the database.

Friday 19 October 2007

TimesTen

Following a visit from an Oracle consultant yesterday, I have successfully installed and got working the Windows version of TimesTen. I did some speed tests which, given the hardware I was running on, were a bit inconclusive. I also got a couple of TimesTen 'Oracle database cache groups' working: one in 'readonly autorefresh' mode and the other in 'asynchronous writethrough' mode.

My objectives in installing were to :-
1) Familiarize myself with the product.
2) Learn about the set up and install of the product.
3) Run some tests to demonstrate how much quicker queries, updates, inserts and deletes are in TimesTen when compared to a standard Oracle database.
4) Investigate the processes involved in setting up and running TimesTen as an Oracle database cache.

I chose the Windows version of TimesTen to save the hassle of finding a suitable machine to run it on. My PC has plenty of memory (2GB) and is reasonably quick.

I downloaded and installed the Windows version of TimesTen from the Oracle website
http://www.oracle.com/technology/software/products/timesten/index.html


After a straightforward installation I was left with a TimesTen menu group and a number of new ODBC entries.

Of the newly created menu entries, ttIsql proved to be the most useful. That said, it is easier to run it from the DOS command prompt, since you can specify which ODBC source you want to use.
e.g. ttisql wiscdata_tt70_32

This will log you straight in using your Windows username as the TimesTen user.
Typing 'help' gets you a long list of commands, including 'tables' which shows which tables are available to you. The interface is reminiscent of MySQL, so up arrow gets you the previous line of the previous command etc. Pretty much all the SQL syntax is Oracle SQL (DECODE is even there).

I created a large 1 million row table in TimesTen and a similar table in Oracle 10g. It was at this point I hit a problem which meant I had to increase the value of the PERMSIZE parameter. It was easy enough: I just went to the ODBC source, hit the configure button and upped the value of PERMSIZE from 16 to 100.

I ran various queries against both databases, using ttIsql to access TimesTen and SQL*Plus to get at 10g... the result timings were... well... similar. Not a very scientific test, I'll admit, especially since one was running on a local PC and the other on a remote server. Where I did notice a difference was when I used Excel to pull data out of both databases using ODBC. Again, not very scientific, remote server vs local PC etc., but there was a significant difference between the two: TimesTen was much faster.


I then turned my attention to setting up TimesTen as an Oracle database cache.

Useful documents are
http://download-uk.oracle.com/otn_hosted_doc/timesten/603/TimesTen-Documentation/operations.pdf

and
http://download.oracle.com/otn_hosted_doc/timesten/703/TimesTen-Documentation/cacheconnect.pdf

Once you get it all working you can configure most things from
http://localhost:17004/cache/
but before you get there, there are a number of other steps.

The first thing you need to do is create a new ODBC source. The TimesTen database HAS to have the same character set as the Oracle database.

Set it up as per the cacheconnect.pdf document. I had some trouble getting logged in via
http://localhost:17004/cache but once I had created the TimesTen user, granted it admin and ddl privileges (which I had to do using ttIsql while logged in as my Windows user) and got the character sets right, I got logged in.

I then created two cache groups, one readonly and the other asynchronous. The readonly autorefresh group only started working after I did the extra step in ttIsql of issuing the command
LOAD CACHE GROUP bm.testgroup COMMIT EVERY 256 ROWS;

The asynchronous group started working once I'd started the 'RepAgent' using
ttadmin -repstart cgdsn

P.S. You have to stop both the repagent and the cache in order to create a new cache group
ttadmin -repstop cgdsn
ttadmin -cachestop cgdsn

create the new group

ttadmin -cachestart cgdsn
ttadmin -repstart cgdsn


Conclusion

Install and set up all looks pretty straightforward, on Windows at least. The relevant documentation ain't bad either, once you have found it. To progress further, we really need to get PHP talking to TimesTen either via ODBC or directly via the C++ libraries.

Wednesday 17 October 2007

Database Server info

SELECT
user
,sys_context('USERENV', 'DB_UNIQUE_NAME') DB_UNIQUE_NAME
,sys_context('USERENV', 'HOST') HOST
,sys_context('USERENV', 'INSTANCE') INSTANCE
,sys_context('USERENV', 'SERVER_HOST') SERVER_HOST
,sys_context('USERENV', 'SID') SID
FROM dual;

or
select name from v$database;

Tuesday 2 October 2007

Date Format

For years in SQLPlus I've TO_CHAR'ed dates to get them in the format I wanted.
This is quicker in most situations :-
ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';

Monday 1 October 2007

Database Links

There has been a long-running debate at work about how to 'join databases together'. We have a number of different databases, each for its own application. However, they share certain information, e.g. user account data. Currently the databases are 'joined' by the PHP application software. I have long argued for using db links as a way to 'join' the data together, arguing that the CBO will do it in a far more efficient manner than any application ever could. A quick search on the asktom website came up with the answer. The best link turned out to be
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:14688146259705

My summary of its conclusions goes as follows:-
1) Should I use the application to 'join' different databases together?
Answer:
definitely not , no way, nicht, never.

2) Should I use db links as part of my real time application code?
Answer: not unless you absolutely cannot avoid it. By all means use DB links to copy data via materialized views (see the sketch after this list), or even in PL/SQL that copies data from one db to another, but anything that relies on both databases being up at the same time as part of a real time application should be avoided.

3) Should I use one of Oracle's replication technologies to copy and sync data between different databases, so that real time applications depend on one and only one database?
Answer: yes, where possible.

4) Should I strive to keep everything on one database and use technologies like RAC to provide both processing power and resilience?
Answer: yes, yes, yes.
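As a footnote to answer 2, a minimal sketch of the 'copy data via materialized views' approach (all names here are made up):

CREATE DATABASE LINK accounts_db
CONNECT TO app_user IDENTIFIED BY app_password
USING 'ACCOUNTS';

CREATE MATERIALIZED VIEW user_accounts_mv
REFRESH COMPLETE
START WITH SYSDATE NEXT SYSDATE + 1/24  -- re-copy the data every hour
AS
SELECT user_id, username, status
FROM user_accounts@accounts_db;

The real time application then only ever queries user_accounts_mv locally; the remote database being down simply means the copy is a little stale.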

Monday 17 September 2007

sqlplus html output

To produce html output from sqlplus

set markup HTML ON entmap off

see
http://download.oracle.com/docs/cd/B12037_01/server.101/b12170/ch8.htm
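A minimal sketch (the spool file name and query are just examples):

set markup HTML on entmap off
spool users.html
select username, created from all_users;
spool off
set markup HTML off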

Thursday 13 September 2007

When is SQLNET.ORA read?

During the investigation into SQLNET problems (see previous blog entry) we had cause to change the EXPIRE_TIME in a SQLNET.ORA file shared by more than one TNS listener. I could find no documentation that explicitly said when the SQLNET.ORA file was read, but all interweb documents seemed to say 'change the file and then restart the listener'. From this we inferred that the listener only reads SQLNET.ORA at start up. THIS IS NOT TRUE.
SQLNET.ORA is read on a regular basis by the listener.

see
http://www.dbasupport.com/forums/archive/index.php/t-28267.html

SQLNET expire_time

Had a lot of problems with SQLNET over our VPN recently. The problem was that any query that did not return a result for a long period of time would just hang. This typically occurred when doing counts from large tables. If the query took longer than about 15 mins then, intermittently, it would hang. Sometimes it would work fine and the next time hang, no error message... nothing. This made it virtually impossible to handle or code around. Oddly, if you just logged into SQLPLUS and did nothing for 24 hours there was never a problem.

I discovered that the hanging was nothing to do with the database but had to be something in the network. I concluded this after I found I could reproduce the problem simply by doing a dbms_lock.sleep(2100) (35 mins). This proved the database was not hanging and that the problem had something to do with the comms between the SQLNET client and the database server. Furthermore, when the problem occurred the server showed that it was 'waiting for client' even after the allotted sleep time had finished. I tried many different combinations of Oracle client and server versions running both Windows and Unix; all produced the same behaviour... so not a version bug.

During a long and difficult investigation we discovered that no packets are sent between client and server while a long running SQL executes. When the server eventually responds, having finished the query execution, the firewall fails to send the packet on. The details of why this should happen we have never worked out.

Someone came up with the bright idea of turning on EXPIRE_TIME (EXPIRE_TIME=5) in the sqlnet.ora on the server. This causes the server to check that the client is still alive every 5 minutes. If the client has died then SQLNET tidies up its processes on the server. A side effect of this checking is that a conversation occurs between server and client every 5 minutes, enough to convince the firewall that the session is still valid and not chop the comms between the two.
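For reference, the server-side sqlnet.ora setting amounts to a single line (the full parameter name is SQLNET.EXPIRE_TIME; the value is in minutes):

SQLNET.EXPIRE_TIME = 5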

After a brief hiccup (discussed in next blog entry) it fixed the problem and SQLNET is now 100% reliable.

The whole investigation took approximately one month and many, many man hours and the solution was depressingly simple!

Monday 3 September 2007

DBMS_UTILITY.FORMAT_ERROR_BACKTRACE

What a find! The problem with multiple statements in a PL/SQL block is that when an error is caught in an exception handler you lose the exact line that caused the original error. In 10g you can now find out which line caused the original error by using

DBMS_UTILITY.FORMAT_ERROR_BACKTRACE

see http://www.quest-pipelines.com/newsletter-v5/0904_A.htm

for more detail
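A minimal sketch of how it is used (the divide by zero is just there to force an error; set serveroutput on to see the output):

DECLARE
  v_dummy NUMBER;
BEGIN
  SELECT 1/0 INTO v_dummy FROM dual;  -- raises ORA-01476
EXCEPTION
  WHEN OTHERS THEN
    -- the backtrace gives the line that raised the original error,
    -- the error stack gives the error message itself
    dbms_output.put_line(DBMS_UTILITY.FORMAT_ERROR_BACKTRACE);
    dbms_output.put_line(DBMS_UTILITY.FORMAT_ERROR_STACK);
END;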

More streams problems - slow apply process

Managed to hit a bug in 10.2.0.2 Streams a few days ago. Bug number 6163622: two source users streamed into one destination user with a large number of transactions causes the apply process to go V.E..R..Y V..E..R..Y......S..L..O..W.

It's a spill issue: if the number of rows in one transaction is too high (number of rows committed > the TXN_LCR_SPILL_THRESHOLD value in the view dba_apply_parameters) then spilling occurs, and the apply process then takes forever to read back from the spill table and apply the changes to the destination database. Look in v_$streams_apply_coordinator and you will see the queue with the problem stuck in APPLYING mode.
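A quick way of seeing this (a sketch; only a few of the view's columns are shown):

SELECT apply_name, state, total_applied, total_received
FROM v$streams_apply_coordinator;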

TXN_LCR_SPILL_THRESHOLD is set via :-

BEGIN
dbms_apply_adm.set_parameter(
apply_name => 'APPLY_E1STRM',
parameter => 'TXN_LCR_SPILL_THRESHOLD',
value => 20000 );
END;

Friday 24 August 2007

Oracle streams - errors and fixes

Hit a problem with the Streams capture process aborting with a duplicate value in the index I_STREAMS_APPLY_SPILL_TXN on the table sys.STREAMS$_APPLY_SPILL_TXN. A quick look on the interweb revealed the answer... which was to delete old rows from the table.
The view dba_capture tells you the process has aborted.
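e.g. (a sketch):

SELECT capture_name, status, error_number, error_message
FROM dba_capture;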

To start and stop the process use:-
BEGIN
dbms_capture_adm.start_capture(capture_name => 'capture_e1strm');
END;

BEGIN
dbms_capture_adm.stop_capture(capture_name => 'capture_e1strm');
END;



A second error involved the propagation process. The error in the logs was
ORA-25307: Enqueue rate too high, flow control

The view dba_propagation showed the same error but the process was still enabled. The answer turned out to be to restart the propagation process, i.e.

BEGIN
dbms_propagation_adm.stop_propagation(propagation_name => 'PROP_E1DB_TO_E2DB');
END;

BEGIN
dbms_propagation_adm.start_propagation(propagation_name => 'PROP_E1DB_TO_E2DB');
END;

Useful site is :


http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_trouble.htm




Oracle Streams - streaming two users into one

Hit and (hopefully) resolved a number of issues with Oracle Streams recently.

1) To stream two tables with the same structure, owned by two different users, into one destination user, i.e. user1.tab and user2.tab on the source instance into user1.tab on the destination instance.

The answer seems to be to set up a normal schema stream for user1 and user2 on the destination, then modify the apply process at the destination for user2 as follows:-

CREATE OR REPLACE PROCEDURE user1.write_reh4_lcrs
(v_any IN sys.AnyData)
IS
v_lcr SYS.LCR$_ROW_RECORD;
v_rc pls_integer;
BEGIN
-- extract the row LCR from the AnyData payload
v_rc := v_any.GETOBJECT(v_lcr);
-- re-point the change at user1 instead of user2
v_lcr.SET_OBJECT_OWNER('user1');
-- apply the modified change
v_lcr.EXECUTE(TRUE);
END;



and then modify the apply process to use the procedure as the DML handler for inserts, updates and deletes

BEGIN
dbms_apply_adm.set_dml_handler(
object_name => 'user2.reh4'
,object_type => 'TABLE'
,operation_name => 'INSERT'
,error_handler => FALSE
,user_procedure => 'user1.write_reh4_lcrs'
,apply_database_link => NULL
,apply_name => NULL
);
END;

BEGIN
dbms_apply_adm.set_dml_handler(
object_name => 'user2.reh4'
,object_type => 'TABLE'
,operation_name => 'UPDATE'
,error_handler => FALSE
,user_procedure => 'user1.write_reh4_lcrs'
,apply_database_link => NULL
,apply_name => NULL
);
END;

BEGIN
dbms_apply_adm.set_dml_handler(
object_name => 'user2.reh4'
,object_type => 'TABLE'
,operation_name => 'DELETE'
,error_handler => FALSE
,user_procedure => 'user1.write_reh4_lcrs'
,apply_database_link => NULL
,apply_name => NULL
);
END;

Tuesday 14 August 2007

Oracle streams - lost archive logs

We recently had a problem when we lost the archive logs from the source database before they could be applied to the destination database... don't ask how or why, OK... enough already!

I spent ages trying to figure out how to reset the SCN on the capture process to get the whole thing up and running again (having manually resynced the tables). Turns out the way to do it was to drop and recreate the capture. I used drop_unused_rule_sets => true, which meant I had to recreate all the associated rules, but you may be able to get away with just dropping and recreating the capture and keeping the rule sets (as in the example below).
i.e.

BEGIN
dbms_capture_adm.drop_capture(capture_name => 'capture_e1strm'
, drop_unused_rule_sets => FALSE
);
END;

followed by

BEGIN
dbms_capture_adm.create_capture(queue_name => 'capture_e1q'
,capture_name => 'capture_e1strm'
,source_database => 'ecit1'
,use_database_link => TRUE
);
END;

Tuesday 7 August 2007

Gather Stats – the importance of granularity

I got caught out today when using dbms_stats.gather_table_stats. I specified a partition name in the call because I just wanted to gather stats for a particular partition. I did not, however, specify the granularity. The default value for that parameter also causes dbms_stats.gather_table_stats to generate the global table stats. The partition I was trying to get stats for was only 60 million rows, so it should only have taken a few hours; the full table however contained 700 million rows, so gathering the stats for the partition and the global table stats took 14 hours!
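What I should have done was something like this (owner, table and partition names are made up), which restricts the work to the named partition:

BEGIN
  dbms_stats.gather_table_stats(
    ownname     => 'DWFACTS'
   ,tabname     => 'BIG_FACT_TABLE'
   ,partname    => 'P200707'
   ,granularity => 'PARTITION');
END;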

For reference, the table below shows the options for granularity.

GRANULARITY    Table Global   Partition Global   Partition Statistics   Subpartition Statistics
GLOBAL         YES            NO                 NO                     NO
PARTITION      NO             YES                YES                    NO
DEFAULT        YES            YES                YES                    NO
SUBPARTITION   NO             NO                 YES                    YES
ALL            YES            YES                YES                    YES

This was obtained from the following webpage

http://www.dbazine.com/oracle/or-articles/nanda5

Moving a partitioned table’s tablespace

Moving the partitions of a table is straightforward, e.g.

alter table reh_temp move partition p200705
tablespace TSD_Q_BLB_P200705;

This does not, however, move the table 'header'; for that you need to do

alter table reh_temp modify default attributes tablespace tsd_q_blb;

Reference: see

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:416718680907

about half way down the page; search for 'alter table invoices modify default attributes tablespace manual;'

The same is true of indexes, but I do not know what the syntax is for moving index 'headers'.

Splitting a table partition

The speed at which this operation occurs is dependent on the number of rows in the partition; empty partitions split immediately. An example of the command is :-

ALTER TABLE reh_temp
SPLIT PARTITION pmax AT (TO_DATE('01-AUG-2007','DD-MON-YYYY'))
INTO (PARTITION p200707
,PARTITION pmax)
UPDATE GLOBAL INDEXES;

Exchanging a table for a table partition

An example of this command is

ALTER TABLE reh_temp
exchange partition p200609 with TABLE blb_200610
including indexes;

If the indexes on the 'donor' table are all valid etc. and the partitioned table has the same indexes AND THEY ARE LOCAL INDEXES then the indexes should be carried across into the new partition and remain valid. Any global indexes will go invalid.

Partitioning a non-partitioned table using dbms_redefinition

DBMS_REDEFINITION is a really useful package for partitioning a table that is currently not partitioned. It does all the things you would want to do if you partitioned the table manually, but does it in a more reliable and bug-free way. The only restriction is that the table to be partitioned must have a primary key constraint. The basic steps involved in partitioning are :-

  • Create an empty copy of the table, partitioned up as you need it. This table will eventually become your partitioned table, so it is important that the table and indexes etc. are all in the correct tablespaces.
  • Check the table can be redefined using dbms_redefinition, e.g.

BEGIN
sys.dbms_redefinition.can_redef_table('rehfacts'
,'reh_orig');
END;

  • Start the redefinition procedure

BEGIN
dbms_redefinition.start_redef_table(uname => 'rehfacts'
,orig_table => 'reh_orig'
,int_table => 'REH_temp'
);
END;

Among other things this turns the new partitioned copy of the table into a materialized view.

  • Given that the data population of the new partitioned table by the step above may take some time, the next step is to sync any data that may have been changed or added during this process.

BEGIN
dbms_redefinition.sync_interim_table(uname => 'dwfacts'
,orig_table => 'reh_orig'
,int_table => 'REH_temp'
);
END;

  • Swap the tables and finish off

BEGIN
dbms_redefinition.finish_redef_table(uname => 'rehfacts'
,orig_table => 'reh_orig'
,int_table => 'REH_temp'
);
END;

  • Rename all the indexes and move any foreign key dependencies.

References

http://www.oracle-base.com/articles/misc/PartitioningAnExistingTable.php

http://www.dbasupport.com/forums/archive/index.php/t-27379.html

Updating columns that are used in table partitioning

If you create a table

Create table reh_temp
(col1 NUMBER(8,0))
PARTITION BY RANGE (col1)
(
partition p1 values less than (100)
,partition p2 values less than (200)
,partition PMAX values less than (MAXVALUE) );

Insert into reh_temp values(1);

Select * from reh_temp partition (p1);

This brings back one row, as you would expect.

Update reh_temp set col1 = 150;

ERROR at line 1:
ORA-14402: updating partition key column would cause a partition change

i.e. the row would have to move partitions, so it errors. However, after 'alter table reh_temp enable row movement;' updates are OK and the row moves from partition to partition as you update.
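Continuing the example above, once row movement is enabled the update succeeds and the row ends up in partition p2:

Alter table reh_temp enable row movement;
Update reh_temp set col1 = 150;
Select * from reh_temp partition (p2);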

USER_TAB_MODIFICATIONS

This view records how many inserts, updates and deletes have occurred against each table since the table was last analyzed.
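e.g. a quick look at what has changed (note the monitoring info is only flushed to the view periodically, not instantly):

SELECT table_name, partition_name, inserts, updates, deletes, timestamp
FROM user_tab_modifications
ORDER BY timestamp DESC;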

Monitor Indexes

Which indexes get used in the database and which do not ?

Alter index index_name monitoring usage;

I’ve written a package that turns monitoring on for each index in turn and then turns it off again once it has been used. By this means you learn which indexes are used and which are not.
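The basic cycle for a single index looks like this (the index name is made up):

Alter index reh_temp_idx1 monitoring usage;
-- ... leave it running for a while ...
Alter index reh_temp_idx1 nomonitoring usage;

Select index_name, monitoring, used, start_monitoring, end_monitoring
from v$object_usage;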



Latest on this is
http://www.oracloid.com/2006/05/vobject_usage-empty/

v$object_usage only contains rows for the current user
so.........


Use this query

SELECT
aoj.*
from
(
select u.name owner
, io.name index_name
, t.name table_name
, decode(bitand(i.flags, 65536), 0, 'NO', 'YES') monitoring
, decode(bitand(ou.flags, 1), 0, 'NO', 'YES') used
, ou.start_monitoring
, ou.end_monitoring
from
sys.user$ u
, sys.obj$ io
, sys.obj$ t
, sys.ind$ i
, sys.object_usage ou
where
i.obj# = ou.obj#
and io.obj# = ou.obj#
and t.obj# = i.bo#
and u.user# = io.owner#
) aoj
,all_indexes u
WHERE aoj.index_name = u.index_name
and aoj.owner = u.owner


Quotation marks – how many are enough?

I’ve always struggled with how many single quotes to put in some SQL statements.

e.g. If you want to select from dual the string

“I’ve got a luverly bunch o’ coconuts”

The SQL is something like

Select 'I''ve got a luverly bunch o'' coconuts' from dual;

An alternative, and a far easier method to remember I think (only works with 10g), is

Select q'(I've got a luverly bunch o' coconuts)' from dual;

i.e. surround the string with q'( string )'

N.B. The bracket can be any bracket (I believe) i.e. q'[string]' or q'{string}'


Large number of Extents in a Table – The myth and the consequences

Does a table having a large number of extents in 10g degrade performance? The answer seems to be an emphatic NO.

I’ve referenced below two different explanations found on the interweb as to why this is so.

The first reference can be summarized as follows:-

The operating system I/O buffer is of a limited size (128KB, say). So a full table scan of a table with ten 128KB extents will require ten reads. Even if the ten extents were compressed into one 1280KB extent, it would still take ten reads to scan the whole table (10 reads, each filling the 128KB I/O buffer).

By looking at the module2 system and working backwards from the value of DB_FILE_MULTIBLOCK_READ_COUNT (which is 16) and the block size of the majority of the tablespaces (8KB), I assume the operating system I/O buffer size is 128KB (16 * 8KB). Most of our tablespaces have extent sizes of 64KB. Assuming I've understood correctly, this means that for any table which has multiple extents we do have a problem, because the I/O could read 128KB but can only read 64KB at a time, so a full table scan of a table spread over two extents takes two reads and not one. Is this correct, or is the OS/NAS clever enough to read 2 x 64KB extents at a time?
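For reference, the two values I worked backwards from can be checked with something like this (a sketch):

SELECT name, value
FROM v$parameter
WHERE name IN ('db_file_multiblock_read_count', 'db_block_size');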

Reference:

http://www.devshed.com/c/a/Oracle/Developing-and-Implementing-Applications-continued/3/

http://www.dizwell.com/prod/node/66

Local Indexes - Stats on the index 'header'

From investigation of a problem I have come to the following conclusion: the standard automatic stats gathering job that Oracle 10g sets up out of the box (based on dbms_stats) was NOT regenerating the stats for the 'header', or global, part of local indexes on partitioned tables.

i.e. for local indexes you can generate stats for each partition of the index AND also for the index itself (the header).

The effect of these out of date stats was that an update statement that had been working well for some time suddenly ran so slowly it effectively killed the live system. It turned out the Cost Based Optimizer had decided to use an incorrect index on a table with 130 million rows in it. I regenerated the index header stats for the incorrect index and for the index it should have been using. The optimizer switched to using a (better, but still incorrect) third index. Once I had regenerated the stats for this third index, the optimizer started using the correct index.

Conclusion: we cannot rely on the automatic stats gathering alone to keep the stats up to date. I have written a packaged procedure to gather the stats for N indexes a day, starting with the most out of date. In the absence of anything better, I would suggest we run this procedure on a daily basis on each database.
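For reference, the manual regeneration of an index's 'header' (global) stats boils down to something like this (owner and index names are made up):

BEGIN
  dbms_stats.gather_index_stats(
    ownname     => 'DWFACTS'
   ,indname     => 'BIG_FACT_IDX1'
   ,granularity => 'GLOBAL');
END;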

Reverse Indexes

All the examples on the ‘interweb’ that talk about situations where reverse key indexes are useful centre around inserting a large number of rows, in a short period of time, into a table where the primary key is a sequence number. This results in ‘hot spots’ where all the inserts need the same part of the index. Using a reverse key index spreads the inserts out over the entire index structure more evenly.

The downside is that a reverse key index cannot be used for an index range scan. It does, however, work well with exact matches, e.g. b_id = 1234567, and the CBO will still full scan the index if it thinks it is appropriate.
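Creating one is just a matter of adding the REVERSE keyword (table and column names are made up):

CREATE INDEX bets_bid_rev_idx ON bets (b_id) REVERSE;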

Global Vs Local Indexes

In this paragraph I discuss the various merits of global and local indexes. This is done partly for my own benefit, as it always helps to write things down, but I would also welcome your comments if you have an opinion on the subject or you think I’ve just got it plain wrong.

On a partitioned table, of whatever flavor (range, hash or list), you have a choice of local or global indexes.

Points to note: -

i) You can have a mix of both local and global indexes on the same partitioned table.

ii) Local indexes are best suited to data warehouse applications, while global indexes work better with OLTP systems.

iii) Local indexes are ideal for any index that is prefixed with the same column(s) used to partition the table.

iv) Global indexes can be range partitioned using a different ‘scheme’ to the underlying table.

Local Indexes
Pros

i) The one major advantage of local indexes is that if a table partition is exchanged (using ‘alter table ... exchange/drop/add partition’) a local index will remain valid.

Cons

i) The disadvantage of a local index that is not prefixed with the column(s) used to partition the table is that, in order to find a value in the index, each separate partition has to be searched for the index entry/entries. When a relatively small number of rows are required from the database (typical of an OLTP system) this additional overhead can be a significant part of the query execution time. In a data warehouse system, where typically a large number of rows are returned from a query, this overhead is a smaller proportion of the overall time taken by the query.

Global Indexes
Pros

i) Global indexes can be range partitioned using a different partition scheme from the underlying table, i.e. on a table (containing two columns, id and start_date) that is hash partitioned by id, the global index on start_date can be range partitioned. This allows for partition elimination of the index entries when performing queries with a predicate including the start_date column (see the sketch at the end of this section).

ii) They can be used to enforce uniqueness on columns not used to partition the table.

iii) All the index entries are held in one place (i.e. not spread out over ‘n’ partitions)

Cons

i) They go invalid when partitions are exchanged.
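By way of illustration, a sketch of both kinds of index on a made-up table that is hash partitioned by id but often queried by start_date:

-- local index, prefixed with the table's hash partitioning column
CREATE INDEX bets_id_local_idx ON bets (id) LOCAL;

-- global index on start_date, range partitioned on its own scheme, independent of the table
CREATE INDEX bets_date_global_idx ON bets (start_date)
GLOBAL PARTITION BY RANGE (start_date)
( PARTITION g2007 VALUES LESS THAN (TO_DATE('01-JAN-2008','DD-MON-YYYY'))
, PARTITION gmax  VALUES LESS THAN (MAXVALUE)
);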

V$SESSION_LONGOPS

Have a look at the V$SESSION_LONGOPS view when running long running SQL. It shows (among other things) the work the SQL has done and the amount of work it still needs to do to complete the job.

e.g.

SELECT ROUND(((totalwork-sofar)/totalwork)*100) perc, vlo.*
FROM v$session_longops vlo
WHERE totalwork-sofar > 0
AND start_time > TRUNC(SYSDATE)
ORDER BY sid, start_time ASC;

Autotrace

I have been using Autotrace under the qgames user on feltemp1. In order to get it working I had to get the DBAs to run ~oracle/sqlplus/admin/plustrace.sql as SYS and then

grant plustrace to qgames;

Things to look at when investigating poorly performing SQL

i) The stats for the table and index may be out of date.

ii) Set the session variables as above.

iii) Don’t get caught out by explain plan (I use Toad most of the time because it’s the easiest to use, but use ‘explain plan for…’ to confirm conclusions).

iv) Don’t get caught out by the database caching results in the SGA. Use different data values for queries to hit bits of the table that are not cached.

v) Don’t assume the query path is the same for all parameters in your query. The CBO may change its query path depending on the number of rows it expects to return.

vi) Toad only returns the first 500 rows by default; do a count(*) to confirm your findings.

Trust the CBO and do not use hints

I firmly believe this to be true. I have read it over and over again (but now cannot find a decent reference; the best I can do is http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:15799499535198).

If you need to hint your code, in most cases this is merely a sticking plaster over an underlying problem. You should resolve the underlying problem and allow the CBO to do its job. This will result in more robust code since the CBO will compensate for changes in the underlying data patterns.

We use hints everywhere in our code, it seems to me; this is a direct result of ‘stale stats’ and/or inappropriate init.paras.

Explain plan

‘All explain planners occasionally lie, some lie more often than others’ – Jonathan Lewis (allegedly).

Toad’s explain plan (the ambulance icon) lies more than ‘explain plan for …’ / ‘Select * from table(dbms_xplan.display)’.

Toad’s explain plan has caught me out more than once.

Init paras

One of the biggest challenges I encountered during this research was generating conditions under which the Cost Based Optimizer (CBO) would behave in a consistent, predictable fashion. Stats played a major role here, but once I could eliminate them from the equation, I had particular problems persuading the CBO to use an index to perform a query (without the use of hints). The CBO would full-table-scan (FTS) at every opportunity, even though this was demonstrably a slower route. The answer lay in the init.paras which cause the CBO to FTS if the stats show the number of rows returned would be 5% (yes, only 5%) or more of the table.

The default init.paras for

optimizer_index_caching = 0

optimizer_index_cost_adj = 100

optimizer_mode = all_rows

are a set of ‘safe’ values and are adequate for most small/medium sized databases; they are designed for data warehouse applications but are ‘unsuitable for most OLTP applications’.

By changing these values via

alter session set optimizer_index_caching = 90;

alter session set optimizer_index_cost_adj = 25;

alter session set optimizer_mode = first_rows_100;

The propensity for the CBO to FTS so readily is lessened and the use of indexes increases without the need to resort to hints.

This proved to be one of the single most important factors in making queries against the view v_bet_union_vb perform in an acceptable manner.

ref : http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:173561100346113797

http://www.dba-oracle.com/oracle_tips_cost_adj.htm

http://www.dba-oracle.com/art_builder_sql_parm.htm

I have considered proposing the option of changing the init.paras of all of the OLTP databases to these values but have rejected the idea (for the moment) on the basis that this will affect ALL database operations in an unpredictable way.

Ref http://www.praetoriate.com/t_optimizer_index_caching.htm

Table Stats

Table stats are generated by the default automatic process that is created by Oracle when the database is installed, i.e. dbms_stats.gather_database_stats (I believe).

This process uses the view dba_tab_modifications to establish which tables have changed by more than 10% (not configurable) since the last stats were gathered. For the vast majority of tables this works very well, but for certain tables it is not adequate. I have carried out repeatable experiments that show (I believe) that certain queries perform extremely well (2 sec response time) on those partitions that have very recent stats, and perform extremely badly (2 minute response times) against partitions that have out of date stats, even though the data in the partition has changed by less than 10%.

Reference : http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:3058895998401