There is an overhead associated with each context switch between the SQL and PL/SQL engines. If PL/SQL code loops through a collection performing the same DML operation for each item in the collection, it is possible to reduce context switches by bulk binding the whole collection to the DML statement in one operation.
In Oracle8i a collection must be defined for every column bound to the DML, which can make the code rather long-winded. Oracle9i allows us to use record structures during bulk operations, so long as we don't reference individual columns of the collection. This restriction means that updates and deletes which have to reference individual columns of the collection in the WHERE clause are still restricted to the collection-per-column approach used in Oracle8i.
BULK COLLECT
Bulk binds can improve performance when loading collections from queries. The BULK COLLECT INTO construct binds the output of the query to the collection. To test this, create the following table.
CREATE TABLE bulk_collect_test AS
SELECT owner,
       object_name,
       object_id
FROM   all_objects;
The following code compares the time taken to populate a collection manually and
using a bulk bind.
SET SERVEROUTPUT ON
DECLARE
  TYPE t_bulk_collect_test_tab IS TABLE OF bulk_collect_test%ROWTYPE;

  l_tab    t_bulk_collect_test_tab := t_bulk_collect_test_tab();
  l_start  NUMBER;
BEGIN
  -- Time a regular population.
  l_start := DBMS_UTILITY.get_time;

  FOR cur_rec IN (SELECT *
                  FROM   bulk_collect_test)
  LOOP
    l_tab.extend;
    l_tab(l_tab.last) := cur_rec;
  END LOOP;

  DBMS_OUTPUT.put_line('Regular (' || l_tab.count || ' rows): ' ||
                       (DBMS_UTILITY.get_time - l_start));

  -- Time a bulk population.
  l_start := DBMS_UTILITY.get_time;

  SELECT *
  BULK COLLECT INTO l_tab
  FROM   bulk_collect_test;

  DBMS_OUTPUT.put_line('Bulk    (' || l_tab.count || ' rows): ' ||
                       (DBMS_UTILITY.get_time - l_start));
END;
/
Regular (42578 rows): 66
Bulk    (42578 rows): 4

SQL>
We can see the improvement associated with using bulk operations to reduce context switches.

Note. The select list must match the collection's record definition exactly for this to be successful.
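For example, a query whose select list omits a column can no longer be bound to the collection of records, and the statement fails. The following sketch (the column choice is just for illustration) shows the kind of mismatch to avoid.

SELECT owner,
       object_name  -- object_id is missing, so the rows no longer match
BULK COLLECT INTO l_tab
FROM   bulk_collect_test;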
Remember that collections are held in memory, so doing a bulk collect from a large query could cause a considerable performance problem. In actual fact you would rarely do a straight bulk collect in this manner. Instead you would limit the rows returned using the LIMIT clause and move through the data processing smaller chunks. This gives you the benefits of bulk binds, without hogging all the server memory. The following code shows how to chunk through the data in a large table.
SET SERVEROUTPUT ON
DECLARE
  TYPE t_bulk_collect_test_tab IS TABLE OF bulk_collect_test%ROWTYPE;

  l_tab t_bulk_collect_test_tab;

  CURSOR c_data IS
    SELECT *
    FROM   bulk_collect_test;
BEGIN
  OPEN c_data;
  LOOP
    FETCH c_data
    BULK COLLECT INTO l_tab LIMIT 10000;
    EXIT WHEN l_tab.count = 0;

    -- Process contents of collection here.
    DBMS_OUTPUT.put_line(l_tab.count || ' rows');
  END LOOP;
  CLOSE c_data;
END;
/
SQL>
So we can see that with a LIMIT 10000 we were able to break the data into chunks of 10,000 rows, reducing the memory footprint of our application, while still taking advantage of bulk binds. The array size you pick will depend on the width of the rows you are returning and the amount of memory you are happy to use.
From Oracle 10g onward, the optimizing PL/SQL compiler converts cursor FOR LOOPs into BULK COLLECTs with an array size of 100. The following example compares the speed of a regular cursor FOR LOOP with BULK COLLECTs using varying array sizes.
SET SERVEROUTPUT ON
DECLARE
  TYPE t_bulk_collect_test_tab IS TABLE OF bulk_collect_test%ROWTYPE;

  l_tab t_bulk_collect_test_tab;

  CURSOR c_data IS
    SELECT *
    FROM   bulk_collect_test;

  l_start NUMBER;
BEGIN
  -- Time a regular cursor for loop.
  l_start := DBMS_UTILITY.get_time;

  FOR cur_rec IN (SELECT *
                  FROM   bulk_collect_test)
  LOOP
    NULL;
  END LOOP;

  DBMS_OUTPUT.put_line('Regular   : ' ||
                       (DBMS_UTILITY.get_time - l_start));

  -- Time bulk with LIMIT 10.
  l_start := DBMS_UTILITY.get_time;

  OPEN c_data;
  LOOP
    FETCH c_data
    BULK COLLECT INTO l_tab LIMIT 10;
    EXIT WHEN l_tab.count = 0;
  END LOOP;
  CLOSE c_data;

  DBMS_OUTPUT.put_line('LIMIT 10  : ' ||
                       (DBMS_UTILITY.get_time - l_start));

  -- Time bulk with LIMIT 100.
  l_start := DBMS_UTILITY.get_time;

  OPEN c_data;
  LOOP
    FETCH c_data
    BULK COLLECT INTO l_tab LIMIT 100;
    EXIT WHEN l_tab.count = 0;
  END LOOP;
  CLOSE c_data;

  DBMS_OUTPUT.put_line('LIMIT 100 : ' ||
                       (DBMS_UTILITY.get_time - l_start));

  -- Time bulk with LIMIT 1000.
  l_start := DBMS_UTILITY.get_time;

  OPEN c_data;
  LOOP
    FETCH c_data
    BULK COLLECT INTO l_tab LIMIT 1000;
    EXIT WHEN l_tab.count = 0;
  END LOOP;
  CLOSE c_data;

  DBMS_OUTPUT.put_line('LIMIT 1000: ' ||
                       (DBMS_UTILITY.get_time - l_start));
END;
/
LIMIT 1000: 10
SQL>
You can see from this example that the performance of a regular FOR LOOP is comparable to a BULK COLLECT using an array size of 100. Does this mean you can forget about BULK COLLECT in 10g onward? In my opinion no. I think it makes sense to have control of the array size. If you have very small rows, you might want to increase the array size substantially. If you have very wide rows, 100 may be too large an array size.
FORALL
The FORALL syntax allows us to bind the contents of a collection to a single DML statement, allowing the DML to be run for each row in the collection without requiring a context switch each time. To test bulk binds using records we first create a test table.
CREATE TABLE forall_test (
  id          NUMBER(10),
  code        VARCHAR2(10),
  description VARCHAR2(50)
);
SET SERVEROUTPUT ON
DECLARE
  TYPE t_forall_test_tab IS TABLE OF forall_test%ROWTYPE;

  l_tab    t_forall_test_tab := t_forall_test_tab();
  l_start  NUMBER;
  l_size   NUMBER := 10000;
BEGIN
  -- Populate collection.
  FOR i IN 1 .. l_size LOOP
    l_tab.extend;

    l_tab(l_tab.last).id          := i;
    l_tab(l_tab.last).code        := TO_CHAR(i);
    l_tab(l_tab.last).description := 'Description: ' || TO_CHAR(i);
  END LOOP;

  -- Time normal inserts.
  l_start := DBMS_UTILITY.get_time;

  FOR i IN l_tab.first .. l_tab.last LOOP
    INSERT INTO forall_test (id, code, description)
    VALUES (l_tab(i).id, l_tab(i).code, l_tab(i).description);
  END LOOP;

  DBMS_OUTPUT.put_line('Normal Inserts: ' ||
                       (DBMS_UTILITY.get_time - l_start));

  -- Time bulk inserts.
  l_start := DBMS_UTILITY.get_time;

  FORALL i IN l_tab.first .. l_tab.last
    INSERT INTO forall_test VALUES l_tab(i);

  DBMS_OUTPUT.put_line('Bulk Inserts  : ' ||
                       (DBMS_UTILITY.get_time - l_start));

  COMMIT;
END;
/
Normal Inserts: 305
Bulk Inserts : 14
SQL>
The output clearly demonstrates the performance improvements you can expect to see when using bulk binds to remove the context switches between the SQL and PL/SQL engines.

Note. Since no columns are specified in the insert statement, the record structure of the collection must match the table exactly.
Oracle9i Release 2 also allows updates using record definitions by using the ROW keyword. The following example uses the ROW keyword when doing a comparison of normal and bulk updates.
SET SERVEROUTPUT ON
DECLARE
  TYPE t_id_tab IS TABLE OF forall_test.id%TYPE;
  TYPE t_forall_test_tab IS TABLE OF forall_test%ROWTYPE;

  l_id_tab  t_id_tab := t_id_tab();
  l_tab     t_forall_test_tab := t_forall_test_tab();
  l_start   NUMBER;
  l_size    NUMBER := 10000;
BEGIN
  -- Populate collections.
  FOR i IN 1 .. l_size LOOP
    l_id_tab.extend;
    l_tab.extend;

    l_id_tab(l_id_tab.last)       := i;
    l_tab(l_tab.last).id          := i;
    l_tab(l_tab.last).code        := TO_CHAR(i);
    l_tab(l_tab.last).description := 'Description: ' || TO_CHAR(i);
  END LOOP;

  -- Time normal updates.
  l_start := DBMS_UTILITY.get_time;

  FOR i IN l_tab.first .. l_tab.last LOOP
    UPDATE forall_test
    SET    ROW = l_tab(i)
    WHERE  id = l_tab(i).id;
  END LOOP;

  DBMS_OUTPUT.put_line('Normal Updates : ' ||
                       (DBMS_UTILITY.get_time - l_start));

  -- Time bulk updates.
  l_start := DBMS_UTILITY.get_time;

  FORALL i IN l_tab.first .. l_tab.last
    UPDATE forall_test
    SET    ROW = l_tab(i)
    WHERE  id = l_id_tab(i);

  DBMS_OUTPUT.put_line('Bulk Updates   : ' ||
                       (DBMS_UTILITY.get_time - l_start));

  COMMIT;
END;
/
Normal Updates : 235
Bulk Updates   : 20

SQL>
The reference to the ID column within the WHERE clause of the first update would cause the bulk operation to fail, so the second update uses a separate collection for the ID column. This restriction has been lifted in Oracle 11g, as documented here.

Once again, the output shows the performance improvements you can expect to see when using bulk binds.
SQL%BULK_ROWCOUNT
The SQL%BULK_ROWCOUNT cursor attribute gives granular information about the rows affected by each iteration of the FORALL statement. Every row in the driving collection has a corresponding row in the SQL%BULK_ROWCOUNT cursor attribute.

The following code creates a test table as a copy of the ALL_USERS view. It then attempts to delete 5 rows from the table based on the contents of a collection. It then loops through the SQL%BULK_ROWCOUNT cursor attribute looking at the number of rows affected by each delete.
CREATE TABLE bulk_rowcount_test AS
SELECT *
FROM   all_users;
SET SERVEROUTPUT ON
DECLARE
  TYPE t_array_tab IS TABLE OF VARCHAR2(30);

  l_array t_array_tab := t_array_tab('SCOTT', 'SYS', 'SYSTEM', 'DBSNMP', 'BANANA');
BEGIN
  -- Perform bulk delete operation.
  FORALL i IN l_array.first .. l_array.last
    DELETE FROM bulk_rowcount_test
    WHERE  username = l_array(i);

  -- Report affected rows for each element of the driving collection.
  FOR i IN l_array.first .. l_array.last LOOP
    DBMS_OUTPUT.put_line('Element: ' || l_array(i));
    DBMS_OUTPUT.put_line('Rows affected: ' || SQL%BULK_ROWCOUNT(i));
  END LOOP;
END;
/
Element: SCOTT
Rows affected: 1
Element: SYS
Rows affected: 1
Element: SYSTEM
Rows affected: 1
Element: DBSNMP
Rows affected: 1
Element: BANANA
Rows affected: 0
SQL>
So we can see that no rows were deleted when we performed a delete for the username "BANANA".
SAVE EXCEPTIONS and SQL%BULK_EXCEPTION
We saw how the FORALL syntax allows us to perform bulk DML operations, but what happens if one of those individual operations results in an exception? If there is no exception handler, all the work done by the current bulk operation is rolled back. If there is an exception handler, the work done prior to the exception is kept, but no more processing is done. Neither of these situations is very satisfactory, so instead we should use the SAVE EXCEPTIONS clause to capture the exceptions and allow us to continue past them. We can subsequently look at the exceptions by referencing the SQL%BULK_EXCEPTION cursor attribute. To see this in action, create the following table.
CREATE TABLE exception_test (
  id  NUMBER(10) NOT NULL
);

SET SERVEROUTPUT ON
DECLARE
  TYPE t_tab IS TABLE OF exception_test%ROWTYPE;

  l_tab          t_tab := t_tab();
  l_error_count  NUMBER;

  ex_dml_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(ex_dml_errors, -24381);
BEGIN
-- Fill the collection.
FOR i IN 1 .. 100 LOOP
l_tab.extend;
l_tab(l_tab.last).id := i;
END LOOP;
-- Cause a failure.
l_tab(50).id := NULL;
l_tab(51).id := NULL;
BEGIN
FORALL i IN l_tab.first .. l_tab.last SAVE EXCEPTIONS
INSERT INTO exception_test
VALUES l_tab(i);
EXCEPTION
WHEN ex_dml_errors THEN
l_error_count := SQL%BULK_EXCEPTIONS.count;
DBMS_OUTPUT.put_line('Number of failures: ' || l_error_count);
FOR i IN 1 .. l_error_count LOOP
DBMS_OUTPUT.put_line('Error: ' || i ||
' Array Index: ' || SQL%BULK_EXCEPTIONS(i).error_index ||
' Message: ' || SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
END LOOP;
END;
END;
/
Number of failures: 2
Error: 1 Array Index: 50 Message: ORA-01400: cannot insert NULL into ()
Error: 2 Array Index: 51 Message: ORA-01400: cannot insert NULL into ()
SQL>
As expected, the errors were trapped. If we query the table we can see that 98 rows were inserted correctly.
SELECT COUNT(*)
FROM   exception_test;
COUNT(*)
----------
98
1 row selected.
SQL>
Bulk Binds and Triggers
For bulk updates and deletes the timing points remain unchanged. Each row in the
collection triggers a before statement, before row, after row and after stateme
nt timing point. For bulk inserts, the statement level triggers only fire at the
start and the end of the the whole bulk operation, rather than for each row of
the collection. This can cause some confusion if you are relying on the timing p
oints from row-by-row processing.
You can see an example of this here.
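As a minimal sketch of the insert behaviour (the trigger name and message below are illustrative, and assume the forall_test table created earlier), a statement-level trigger like this fires only once for a whole FORALL ... INSERT, but once per insert when the same rows are loaded in a conventional loop.

CREATE OR REPLACE TRIGGER forall_test_bs_trg
BEFORE INSERT ON forall_test
BEGIN
  -- Statement-level timing point.
  DBMS_OUTPUT.put_line('BEFORE STATEMENT - INSERT');
END;
/

With this trigger in place, a FORALL insert of the whole collection prints the message a single time, while a FOR loop issuing one insert per row prints it once per row.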
Updates
In Oracle 10g and above, the optimizing PL/SQL compiler rewrites conventional cursor for loops to use a BULK COLLECT with a LIMIT 100, so code that previously didn't take advantage of bulk binds may now run faster.

Oracle 10g introduced support for handling sparse collections in FORALL statements (here).

The restriction on accessing individual columns of the collection with a FORALL has been removed in Oracle 11g (here).
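A minimal sketch of the sparse collection support (reusing the forall_test table from above) uses the INDICES OF clause, which iterates only over the subscripts that actually exist in the collection.

DECLARE
  TYPE t_forall_test_tab IS TABLE OF forall_test%ROWTYPE;

  l_tab t_forall_test_tab := t_forall_test_tab();
BEGIN
  FOR i IN 1 .. 5 LOOP
    l_tab.extend;
    l_tab(l_tab.last).id   := i;
    l_tab(l_tab.last).code := TO_CHAR(i);
  END LOOP;

  -- Delete an element, making the collection sparse.
  l_tab.delete(3);

  -- "FORALL i IN l_tab.first .. l_tab.last" would now fail,
  -- but INDICES OF simply skips the deleted subscript.
  FORALL i IN INDICES OF l_tab
    INSERT INTO forall_test
    VALUES l_tab(i);

  COMMIT;
END;
/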