Does a table having a large number of extents in 10g degrade performance? The answer seems to be an emphatic NO.
I’ve referenced below two different explanations found on the interweb as to why this is so.
The first reference can be summarized as follows:
The operating system I/O buffer is of a limited size (say 128KB). So a full table scan of a table with ten 128KB extents requires ten reads. Even if the ten extents were coalesced into a single 1280KB extent, it would still take ten reads to scan the whole table (ten reads, each filling the 128KB I/O buffer).
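That argument is just ceiling-division arithmetic, and can be sketched as follows (the function name and the read model, in which a single read never crosses an extent boundary, are my own assumptions for illustration, not documented Oracle behaviour):

```python
def reads_for_scan(extent_sizes_kb, io_buffer_kb):
    """Reads needed to full-scan a table, assuming each read fills at
    most the OS I/O buffer and cannot span two extents."""
    # -(-a // b) is ceiling division: each extent costs ceil(size/buffer) reads
    return sum(-(-size // io_buffer_kb) for size in extent_sizes_kb)

# Ten 128KB extents vs. one coalesced 1280KB extent, 128KB I/O buffer:
print(reads_for_scan([128] * 10, 128))  # 10 reads
print(reads_for_scan([1280], 128))      # 10 reads either way
```

Under this model the extent count is irrelevant as long as each extent is at least as large as the I/O buffer.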
Looking at the module2 system and working backwards from the value of DB_FILE_MULTIBLOCK_READ_COUNT (which is 16) and the block size of the majority of the tablespaces (8KB), I assume the operating system I/O buffer size is 128KB (16 * 8KB). Most of our tablespaces have extent sizes of 64KB. Assuming I've understood correctly, this means that any table with multiple extents does have a problem: the I/O could read 128KB but can only read 64KB at a time, so a full table scan of a table spread over two extents takes two reads, not one. Is this correct, or is the OS/NAS clever enough to read 2 * 64KB extents in a single read?
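To make the concern concrete, here is the same back-of-envelope model applied to the module2 figures quoted above (the parameter values come from the paragraph; the assumption that a read stops at an extent boundary is mine, and is exactly the question being asked):

```python
MULTIBLOCK_READ_COUNT = 16  # DB_FILE_MULTIBLOCK_READ_COUNT on module2
BLOCK_SIZE_KB = 8           # block size of most of our tablespaces

# Maximum size of a single multiblock read
max_read_kb = MULTIBLOCK_READ_COUNT * BLOCK_SIZE_KB
print(max_read_kb)  # 128 (KB)

def reads_needed(extent_sizes_kb, max_read_kb):
    # Ceiling division per extent, assuming a read never crosses extents
    return sum(-(-e // max_read_kb) for e in extent_sizes_kb)

# A 128KB table stored as two 64KB extents vs. one 128KB extent:
print(reads_needed([64, 64], max_read_kb))  # 2 reads
print(reads_needed([128], max_read_kb))     # 1 read
```

If the OS/NAS can merge adjacent extents into one transfer, the first case also collapses to a single read and the worry disappears.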
Reference:
http://www.devshed.com/c/a/Oracle/Developing-and-Implementing-Applications-continued/3/
http://www.dizwell.com/prod/node/66