After completing this module, you will be able to: Describe various solutions to common application problems. Compare how different utilities handle the same application.
Each SQL statement is treated as a single transaction. This requires transient journal space for every updated or deleted row in the target table until the transaction finishes. A failure causes the system to back out all changes that have already been made (which could take hours if the table is large and many rows have been changed).
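As an illustration (the table and column names are assumed for the example), a single wide UPDATE shows why this matters: every changed row is journaled until the statement commits, and a late failure rolls back all of the completed work:

```sql
-- Hypothetical example: raise every customer's credit limit by 20%
-- in one statement. Each updated row consumes transient journal
-- space until the statement commits; a failure near the end still
-- backs out every row already changed.
UPDATE Customer
SET    Credit_Limit = Credit_Limit * 1.20;
```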
CREATE TABLE Customer_N AS Customer WITH NO DATA;

INSERT INTO Customer_N
SELECT ..., Credit_Limit * 1.20, ...
FROM Customer;

DROP TABLE Customer;
RENAME TABLE Customer_N TO Customer;

CREATE TABLE Trans_N AS Trans WITH NO DATA;

INSERT INTO Trans_N
SELECT *
FROM Trans
WHERE Trans_Date > 981231;

DROP TABLE Trans;
RENAME TABLE Trans_N TO Trans;
This approach reduces the number of changes the system would have to back out in case of a failure. The fast-path INSERT/SELECT offers the fastest transfer of data achievable in a single SQL statement. The same approach can also be used to delete rows from a table, simply by not selecting them for the insert.
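A minimal sketch of the fast-path form (table names are illustrative). The fast path applies when the target table is empty, which allows the data to be moved a block at a time rather than journaled row by row:

```sql
-- The target must be empty for the fast path to apply.
CREATE TABLE Customer_N AS Customer WITH NO DATA;

-- Block-at-a-time copy: a rollback simply re-empties the
-- target table, so per-row transient journaling is avoided.
INSERT INTO Customer_N
SELECT * FROM Customer;
```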
- Full-table read lock
- Full-table scan
- No transient journal

If the percentage of updated rows relative to the number of table rows is large, use MultiLoad. If the percentage is small, use TPump.
MultiLoad advantages:
- Sorts updates by primary index.
- Each data block is accessed only once.
- Fully automatic restart under all conditions.
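A skeletal MultiLoad job for the credit-limit update might look like the following (the logon string, input file, layout, and field names are placeholders, not from the original):

```sql
.LOGTABLE Restart_Log_mld;        /* restart log enables automatic restart */
.LOGON tdpid/username,password;
.BEGIN MLOAD TABLES Customer;
.LAYOUT Cust_Layout;
  .FIELD Cust_Id   * INTEGER;
  .FIELD New_Limit * DECIMAL(10,2);
.DML LABEL Upd_Limit;
  UPDATE Customer
  SET    Credit_Limit = :New_Limit
  WHERE  Customer_Number = :Cust_Id;
.IMPORT INFILE new_limits.dat
  LAYOUT Cust_Layout
  APPLY Upd_Limit;
.END MLOAD;
.LOGOFF;
```

Because updates are applied in primary-index order, each data block is touched once regardless of how many rows in it change.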
FastExport:

SELECT ..., credit_limit * 1.20, ...
FROM Customer
WHERE over_limit_count > 0
AND   arrears_count = 0;

SELECT *
FROM Customer
WHERE NOT (over_limit_count > 0 AND arrears_count = 0);
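A FastExport job wrapping such a SELECT might look roughly like this (the logon string, session count, and output file name are placeholders):

```sql
.LOGTABLE Export_Log;
.LOGON tdpid/username,password;
.BEGIN EXPORT SESSIONS 8;
.EXPORT OUTFILE customer_new.dat;
SELECT Customer_Number, Credit_Limit * 1.20
FROM   Customer
WHERE  over_limit_count > 0
AND    arrears_count = 0;
.END EXPORT;
.LOGOFF;
```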
FastLoad: an alternative solution when the external disk space is capable of housing the entire table.
Utility Considerations

Utility support
  Has the customer purchased the utility, and does it run on your host?

Restart capability
  Is there a restart log? What happens with a Teradata restart? What happens if the host fails?

Multiple sessions
  Does the utility support multiple sessions? How do you choose the optimum number?

Error handling
  Are errors captured in an error file? Do you have control over error handling?

Other considerations
  Does the utility support INMODs, OUTMODs, or AXSMODs? Does the job fit your batch window? Do the tables require continuous (7 x 24) access by the user groups?
Which is the fastest method?
1. Submit the statement above.
2. INSERT/SELECT the revised values into a new table, then drop the original table and rename the new one.
3. FastExport the new data values and the primary-index columns to the host, and use MultiLoad UPDATE.
4. FastExport whole rows to the host (the updated values along with the rest of the record), and FastLoad the table.
INSERT/SELECT the revised values into a new table; drop the old table and rename the new table.

Timings:
  Create new table:            3 seconds
  INSERT/SELECT 900,000 rows:  1 minute 26 seconds
  Drop old table:              11 seconds
  Rename new table:            1 second
  Total time:                  1 minute 41 seconds
FastExport the new data values and the primary-index columns to the host, and use MultiLoad UPDATE.

Timings:
  Export 900,000 rows:  14 seconds
  MultiLoad UPDATE:     3 minutes 27 seconds
  Total time:           3 minutes 41 seconds
FastExport whole rows to the host, selecting the updated values along with the rest of the record, and FastLoad the table.

Timings:
  Export 900,000 rows:  16 seconds
  Delete old rows:      6 seconds
  FastLoad the data:    1 minute 59 seconds
  Total time:           2 minutes 21 seconds
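A skeletal FastLoad script for reloading the exported rows (file name, error tables, and field definitions are placeholders; FastLoad requires the target table to be empty):

```sql
SESSIONS 8;
LOGON tdpid/username,password;
DEFINE Cust_Id      (INTEGER),
       Credit_Limit (DECIMAL(10,2))
FILE = customer_new.dat;
BEGIN LOADING Customer
  ERRORFILES Cust_Err1, Cust_Err2;  /* capture constraint and duplicate errors */
INSERT INTO Customer VALUES (:Cust_Id, :Credit_Limit);
END LOADING;
LOGOFF;
```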
Utility Choices:
A. Use BTEQ to add new data, and BTEQ to remove old data.
B. Use FastLoad to add new data, and BTEQ to remove old data.
C. Use FastLoad to add new data, and MultiLoad to remove old data.
D. Use MultiLoad to add new data, and MultiLoad to remove old data.
E. Use TPump to add new data, and TPump to remove old data.
F. Use TPump to add new data, and BTEQ to remove old data.