Monday, 15 December 2008
Bash date arithmetic
Not really Oracle but useful nevertheless:
date -d "yesterday" +"%d %B %y"
http://www.walkernews.net/2007/06/03/date-arithmetic-in-linux-shell-scripts/
Tuesday, 9 December 2008
Convert comma separated list into an array - dbms_utility.comma_to_table
A simple way to convert a comma-separated list held in a single variable into its component parts - the PL/SQL equivalent of PHP's explode().
e.g.
(Example from http://www.oracle-base.com/articles/9i/UsefulProceduresAndFunctions9i.php)
SET SERVEROUTPUT ON
DECLARE
  l_list1  VARCHAR2(50) := 'A,B,C,D,E,F,G,H,I,J';
  l_list2  VARCHAR2(50);
  l_tablen BINARY_INTEGER;
  l_tab    DBMS_UTILITY.uncl_array;
BEGIN
  DBMS_OUTPUT.put_line('l_list1 : ' || l_list1);
  -- split the list into an array
  DBMS_UTILITY.comma_to_table(list   => l_list1,
                              tablen => l_tablen,
                              tab    => l_tab);
  FOR i IN 1 .. l_tablen LOOP
    DBMS_OUTPUT.put_line(i || ' : ' || l_tab(i));
  END LOOP;
  -- and join it back up again
  DBMS_UTILITY.table_to_comma(tab    => l_tab,
                              tablen => l_tablen,
                              list   => l_list2);
  DBMS_OUTPUT.put_line('l_list2 : ' || l_list2);
END;
/
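One caveat: comma_to_table is documented for comma-separated lists of names, so elements that are not valid Oracle identifiers (e.g. ones starting with a digit) may raise an error - worth testing against your own data.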
Thursday, 7 August 2008
Oracle Recycle bin and Flashback
To remove your own dropped objects from the recycle bin do:-
PURGE RECYCLEBIN;
Alternatively issue
alter session set recyclebin=off;
before dropping the object.
If you are important enough (i.e. have the privs)
PURGE DBA_RECYCLEBIN;
will drop all recyclebin objects.
http://www.oracle.com/technology/pub/articles/10gdba/week5_10gdba.html
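As for the Flashback half of the title: a dropped table can be brought back from the recycle bin. A quick sketch (table name made up):
SELECT object_name, original_name, type FROM recyclebin;
FLASHBACK TABLE my_table TO BEFORE DROP;
-- or restore it under a new name
FLASHBACK TABLE my_table TO BEFORE DROP RENAME TO my_table_restored;
-- a single object can also be purged rather than the whole bin
PURGE TABLE my_table;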
Friday, 18 July 2008
Partitions, global indexes - The magic "update indexes" clause
Global indexes need not go invalid when you change a table's partitioning if you use the magic 'update indexes' clause. Downside: the operation takes longer. Upside: users can continue to use the table without getting the error 'index is unusable'. A sketch of the clause is below.
All explained here :-
http://youngcow.net/doc/oracle10g/server.102/b14231/partiti.htm#i1006455
in the paragraph "Updating Indexes Automatically"
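For example, something along these lines (table and partition names made up):
ALTER TABLE sales DROP PARTITION p_2008_q1 UPDATE INDEXES;
-- the clause also applies to MOVE, SPLIT, MERGE etc., e.g.
ALTER TABLE sales MOVE PARTITION p_2008_q2 COMPRESS UPDATE INDEXES;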
Wednesday, 25 June 2008
Compression and bitmap indexes
When I first tried to compress an existing partitioned table with local bitmap indexes I got an error saying I 'could not compress the partition with a valid bitmap index', or words to that effect. I mistakenly took this to mean that was always the case. It appears that once the compress operation has taken place and the bitmap indexes have been rebuilt, all is OK from then on and the error does not reappear. The reason why is explained here:- http://stanford.edu/dept/itss/docs/oracle/10g/server.101/b10736/parpart.htm
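The workaround for that first compression is to mark the bitmap index unusable, compress, then rebuild - something along these lines (names made up):
ALTER INDEX my_fact_bix MODIFY PARTITION p_2008_q1 UNUSABLE;
ALTER TABLE my_fact MOVE PARTITION p_2008_q1 COMPRESS;
ALTER INDEX my_fact_bix REBUILD PARTITION p_2008_q1;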
DBMS_UTILITY.FORMAT_ERROR_STACK
Don't forget this
exception when others then
  dbms_output.put_line(dbms_utility.format_error_stack);
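A minimal block showing it in context (the divide by zero is just there to force an error):
SET SERVEROUTPUT ON
DECLARE
  l_n NUMBER;
BEGIN
  l_n := 1 / 0;  -- forces ORA-01476
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line(dbms_utility.format_error_stack);
    -- in 10g, format_error_backtrace gives the offending line number too
    dbms_output.put_line(dbms_utility.format_error_backtrace);
END;
/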
Tuesday, 24 June 2008
Confession about Compression
I'd not looked at compression before. What a mistakea to makea!
For warehouse and archive apps the improvements and savings appear to be amazing.
Test1
======
I did a small experiment with weekly_aggregates_fact and monthly_aggregates_fact, both MVs built on pre-existing tables.
I compressed the partitions of weekly_aggregates_fact and performed a count(*) on one partition at a time.
Partitions that are not compressed returned results in about 2 secs (first time, that is - faster subsequently because of caching). Partitions that have been compressed return results in 150 msec. Not an exhaustive test, and there may be some caching going on, but promising I think.
Weekly_aggregates_fact was reduced from 9 Gb to 2.3 Gb; monthly_aggregates_fact was reduced from 8.6 Gb to 2.1 Gb.
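For reference, the compress and size-check statements were along these lines (partition name made up):
ALTER TABLE dwfacts.weekly_aggregates_fact MOVE PARTITION p_2008_w01 COMPRESS;
SELECT segment_name, ROUND(SUM(bytes)/1024/1024/1024, 1) AS gb
FROM   dba_segments
WHERE  segment_name IN ('WEEKLY_AGGREGATES_FACT', 'MONTHLY_AGGREGATES_FACT')
GROUP  BY segment_name;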
Test2
======
Weekly_aggregates_fact and monthly_aggregates_fact have a similar number of rows, 29,874,318 and 29,095,901 respectively. I had already compressed weekly_aggregates_fact so I only have 'compressed' figures for that MV. For monthly_aggregates_fact I have before and after compression figures.
With compression -
select count(*) from weekly_aggregates_fact
took 38 secs initially, 2 secs subsequently. Cost 284,000 (Toad)
With compression -
select count(*), sum(gbp_stake) from dwfacts.weekly_aggregates_fact
took 9 secs initially and 9 secs subsequently. Cost 287,000 (Toad)
Without compression -
select count(*) from monthly_aggregates_fact
took 3 mins 19 secs initially and 2 mins 40 secs subsequently. Cost 240,000 (Toad)
Without compression -
select count(*), sum(gbp_stake) from dwfacts.monthly_aggregates_fact
took 2 mins 47 secs initially and 2 mins 47 secs subsequently (yes, the same). Cost 242,000 (Toad)
With compression -
select count(*) from monthly_aggregates_fact
took 29 secs initially and 1 sec subsequently. Cost 240,000 (Toad), i.e. unchanged.... hmmm, oh dear
With compression -
select count(*), sum(gbp_stake) from dwfacts.monthly_aggregates_fact
took 10 secs initially and 8 secs subsequently. Cost 242,000 (Toad)
These results are truly remarkable, so why aren't we using it?
To be fair, the book says MVs benefit particularly well from compression because of the amount of repeated data in the aggregates (the group by columns), but even so..
References
http://boomslaang.wordpress.com/2008/06/05/table-compression-in-oracle/
http://www.oracle.com/technology/products/bi/db/10g/pdf/twp_data_compression_10gr2_0505.pdf
http://husnusensoy.wordpress.com/2008/01/23/compressing-subpartition-segments/
Wednesday, 12 March 2008
Materialized View - exchange partition - primary key
When exchanging partitions I have never included the global primary key index on the table to be moved in, because the exchange partition fails if it is present. That makes sense: a primary key index has to be global, so how could the same index exist on a standalone table? The database has to check for uniqueness globally, not locally.... Anyway.... It turns out the primary key CAN be present on the table to be passed in IF you specify DISABLE VALIDATE;
e.g.
ALTER TABLE sales_01_2001 ADD CONSTRAINT sales_pk_jan01
PRIMARY KEY (sales_transaction_id) DISABLE VALIDATE;
This link explains it all http://stanford.edu/dept/itss/docs/oracle/10g/server.101/b10736/refresh.htm
The key question is: once the table is exchanged in, does the global index go invalid and need rebuilding? I know it does go invalid when the index is NOT present on the table to be exchanged in. We'll see..... I guess. A sketch of the exchange itself is below.
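Something along these lines, reusing the doc's sales example (the partition name is made up) - combining it with the 'update global indexes' clause from the July post should, in theory, keep the global index usable at the cost of a slower exchange:
ALTER TABLE sales
  EXCHANGE PARTITION sales_jan01 WITH TABLE sales_01_2001
  INCLUDING INDEXES WITHOUT VALIDATION
  UPDATE GLOBAL INDEXES;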
Tuesday, 19 February 2008
Materialized view - permissions and ORA-00600 error
Tried to create a materialized view on a schema I did not own, which produced permission errors. The solution turned out to be to grant the owner GLOBAL QUERY REWRITE, found here http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_6002.htm#i2162987
The other problem was that even though a fairly complicated materialized view (it consisted of two inline views) created OK, when the optimizer tried to use it via 'query rewrite' the query bombed out with ORA-00600 [] [] [] etc.
Solution turned out to be to create two separate materialized views.
The optimizer then used the two materialized views with no problem and the query run time went from 45-60 seconds to instant! No rewriting of the query. Result.
Finally, if the mat view is created on a prebuilt table, the user creating the view must have 'select .... with grant option' on the 'prebuilt' table. The grants involved are sketched below.
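For the record, something along these lines (schema and table names made up):
GRANT GLOBAL QUERY REWRITE TO mv_owner;
GRANT SELECT ON table_owner.prebuilt_table TO mv_owner WITH GRANT OPTION;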
Thursday, 14 February 2008
Materialized Views
I'm a bit like a born-again Christian when it comes to materialized views. They've been around for ages and I've never used them... until now. Now I can't get enough of them and feel the need to convert everybody and everything to use them.
Advantages are:-
With query rewrite enabled the optimizer will use the materialized view if it is quicker and the materialized view is up to date.
Materialized views, used in conjunction with the correct table partitioning and the dbms_mview.pmarker function in the view, will ensure that even a 'FULL' refresh only does the necessary partitions (see the sketch after this list).
There's more, much much more, which I will write up when I get time.
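A sketch of the pmarker idea (table and column names made up) - grouping by DBMS_MVIEW.PMARKER(rowid) lets the refresh use Partition Change Tracking and touch only the changed partitions:
CREATE MATERIALIZED VIEW sales_mv
BUILD IMMEDIATE
REFRESH FORCE ON DEMAND
ENABLE QUERY REWRITE
AS
SELECT DBMS_MVIEW.PMARKER(s.rowid) AS pmarker
      ,s.product_id
      ,SUM(s.amount) AS total_amount
      ,COUNT(*)      AS cnt
FROM   sales s
GROUP  BY DBMS_MVIEW.PMARKER(s.rowid), s.product_id;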
dbms_crypto
Great little db package for encrypting data.
Below is an example which wraps the dbms_crypto package so it can be used more readily.
CREATE OR REPLACE PACKAGE BODY d_security
AS
  -- AES with a 128-bit key, CBC chaining, PKCS5 padding
  l_mod NUMBER := dbms_crypto.ENCRYPT_AES128
                + dbms_crypto.CHAIN_CBC
                + dbms_crypto.PAD_PKCS5;

  -- NB: AES128 needs a 16-byte key, hence the 16-character default
  FUNCTION encrypt(p_plainText IN VARCHAR2
                  ,p_key       IN VARCHAR2 DEFAULT '1234567890ABCDEF'
                  )
    RETURN VARCHAR2
  AS
  BEGIN
    IF p_plainText IS NOT NULL THEN
      -- the encrypted RAW is implicitly converted to a hex string
      RETURN(dbms_crypto.encrypt(UTL_I18N.STRING_TO_RAW(p_plainText, 'AL32UTF8')
                                ,l_mod
                                ,UTL_I18N.STRING_TO_RAW(p_key, 'AL32UTF8')
                                ));
    ELSE
      RETURN(NULL);
    END IF;
  END encrypt;

  FUNCTION decrypt(p_encryptedText IN VARCHAR2
                  ,p_key           IN VARCHAR2 DEFAULT '1234567890ABCDEF'
                  )
    RETURN VARCHAR2
  AS
  BEGIN
    IF p_encryptedText IS NOT NULL THEN
      -- the hex string is implicitly converted back to RAW for decryption
      RETURN(UTL_I18N.RAW_TO_CHAR(dbms_crypto.decrypt(p_encryptedText
                                                     ,l_mod
                                                     ,UTL_I18N.STRING_TO_RAW(p_key, 'AL32UTF8')
                                                     )));
    ELSE
      RETURN(NULL);
    END IF;
  END decrypt;
END d_security;
/
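Only the body is shown above; a matching package specification is needed for it to compile - a minimal sketch, assuming the signatures above (the grantee name in the comment is made up):
CREATE OR REPLACE PACKAGE d_security
AS
  FUNCTION encrypt(p_plainText IN VARCHAR2
                  ,p_key       IN VARCHAR2 DEFAULT '1234567890ABCDEF'
                  ) RETURN VARCHAR2;
  FUNCTION decrypt(p_encryptedText IN VARCHAR2
                  ,p_key           IN VARCHAR2 DEFAULT '1234567890ABCDEF'
                  ) RETURN VARCHAR2;
END d_security;
/
-- the schema owning d_security needs execute on the underlying package:
-- GRANT EXECUTE ON dbms_crypto TO d_security_owner;
Usage, e.g.
SELECT d_security.decrypt(d_security.encrypt('my secret')) FROM dual;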
Heterogeneous services
Great for getting Oracle to talk to other databases (via ODBC) as if the other database was just another Oracle remote database, i.e. through a db link.
see
http://www.acs.ilstu.edu/docs/Oracle/server.101/b10764/tgvsgc.htm
and
http://www.dbasupport.com/oracle/ora10g/connection01.shtml
Straightforward to set up, at least for SQL Server. I had no luck, however, getting it to talk to TimesTen (through the ODBC), which was a shame. It did prove to be a bit flaky under Windows, as the hsodbc.exe would hang around long after the query to the remote database had been chopped, and the tnslistener process would not restart until this process had been killed, so I needed to do regular restarts of the tnslistener. Not something I'd want to use in a production environment, but OK for one-off data extractions.
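Once the listener and HS init file are in place, the Oracle side is plain SQL - a sketch, with the link name, credentials and alias all made up (the alias needs an (HS=OK) entry in tnsnames.ora):
CREATE DATABASE LINK mssql
  CONNECT TO "sa" IDENTIFIED BY "password"
  USING 'MSSQL_HS';
SELECT COUNT(*) FROM some_table@mssql;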