
* Teradata decides by itself whether to use an index or not. If you are not careful, you
spend time in table updates maintaining an index that is not used at all (you cannot give
the query optimizer hints to use a particular index, though collecting statistics may affect
the optimizer's strategy).
* In the MP-RAS environment, look at the script "/etc/gsc/bin/perflook.sh". This will
provide a system-wide snapshot in a series of files. The GSC uses this data for incident
analysis.
* When using an index, one must make sure that the index condition is met in the
subqueries (those using IN, nested queries, or derived tables).
* An indication of proper index use is the EXPLAIN output entry "a ROW HASH
MATCH SCAN across ALL-AMPS" (see the EXPLAIN sketch after this list).
* If the index is not used, the result is a full table scan, whose execution time grows as
the size of the history table grows.
* Maintaining index information is a time- and space-consuming issue. Sometimes Teradata
performs much better when you "manually" imitate the index by building it from scratch.
* Maintaining a join index might help, but you cannot MultiLoad into a table that is part of
a join index - loading with TPump or pure SQL is OK but does not perform as well.
Dropping and re-creating a join index on a big table takes time and space.
* Another warning sign is when the Teradata EXPLAIN produces some 25 steps for your
query (even without updating the results) and the actual query is a join of six or more tables.
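
To check which access path the optimizer chose, prefix the query with EXPLAIN and read
the plan text. A minimal sketch, with made-up table and column names:

    -- Hypothetical tables/columns; shows how to inspect the chosen access path.
    EXPLAIN
    SELECT h.customer_id, COUNT(*)
    FROM   purchase_history h
    WHERE  h.customer_id IN (SELECT customer_id FROM latest_purchase)
    GROUP  BY h.customer_id;

    -- In the plan output, look for "a ROW HASH MATCH SCAN across ALL-AMPS"
    -- (hash/index access) rather than an all-rows (full table) scan.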
Case example:
We had already given up updating the secondary indexes, because we had not had
much use for them.
After some trial and error we ended up with a strategy where the actual "purchase
frequency analysis" is never run directly against the history table.
Instead:
1) There is a "one-shot" run to build the initial "customer's previous purchase" table from the
"purchase history" - it takes time, but that time is saved later.
2) The purchase frequency is calculated by joining the "latest purchase" with the
"customer's previous purchase".
3) When the "latest purchase" rows are inserted into the "purchase history", the "customer's
previous purchase" table is dropped and recreated by merging the "customer's previous
purchase" with the "latest purchase".
4) Following these steps, the performance is still not very fast (about 25 minutes in our
two-node system for a batch of almost 1,000,000 latest receipts), but it is tolerable now.
(We also tested adding both the previous and the latest purchases to the same table, but
because its size was on average much bigger than the pure "latest purchase", the self-join
was slower in that case.) A rough SQL sketch of this strategy follows below.
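
A minimal SQL sketch of the approach, assuming hypothetical table and column names
(purchase_history, latest_purchase, prev_purchase, customer_id, purchase_date):

    -- 1) One-shot build of the "customer's previous purchase" summary table.
    CREATE TABLE prev_purchase
    ( customer_id        INTEGER NOT NULL,
      last_purchase_date DATE    NOT NULL
    )
    PRIMARY INDEX (customer_id);

    INSERT INTO prev_purchase
    SELECT customer_id, MAX(purchase_date)
    FROM   purchase_history
    GROUP  BY customer_id;

    -- 2) Purchase frequency: join the latest batch against the small summary table.
    SELECT l.customer_id,
           l.purchase_date - p.last_purchase_date AS days_since_previous
    FROM   latest_purchase l,
           prev_purchase p
    WHERE  l.customer_id = p.customer_id;

    -- 3) Append the batch to the history, then rebuild the summary by merging the
    --    old summary with the latest batch (drop and recreate instead of updating).
    INSERT INTO purchase_history SELECT * FROM latest_purchase;

    CREATE TABLE prev_purchase_new
    ( customer_id        INTEGER NOT NULL,
      last_purchase_date DATE    NOT NULL
    )
    PRIMARY INDEX (customer_id);

    INSERT INTO prev_purchase_new
    SELECT customer_id, MAX(purchase_date)
    FROM ( SELECT customer_id, last_purchase_date AS purchase_date
           FROM   prev_purchase
           UNION ALL
           SELECT customer_id, purchase_date
           FROM   latest_purchase ) AS m
    GROUP  BY customer_id;

    DROP TABLE prev_purchase;
    RENAME TABLE prev_purchase_new TO prev_purchase;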
*********

MANAGING CONCURRENT WORKLOADS


Integrated e-commerce efforts present many warehouse challenges. Here's
how Teradata can help.
The word e-commerce means many things to many people. Although for some it
connotes only the Web, the real value of e-commerce can only be realized when all
channels of a business are integrated and have full access to all customer information and
transactions. In fact, to me, e-commerce means using the rich technology available today
to bring added value to the customer and additional value to the business through all
customer interaction channels.
Under this definition of e-commerce, an active warehouse is at the epicenter, providing
the storage and access for decision making in the e-commerce world. As more and more
companies adopt active warehousing for this purpose, data warehouse workloads are
expanding and changing.
If your warehouse relies on a Teradata DBMS, you'll find that handling the challenge of
high-volume, widely varying, disparate service-level workloads is one of its core
competencies. One of the biggest concerns I hear from customers is how to deal with the
quickly rising number of concurrent queries and concurrent users that can result from
active warehousing and e-commerce initiatives. Expected service levels vary widely
among different groups of users, as do query types. And, of course, the entire workload
must scale upward linearly as the demand increases, ideally with a minimum of effort
required from users and systems staff. Here's a look at some of the most frequent
questions I receive on the subject of mixed workloads and concurrency requirements.

How do I balance the work coming in across all nodes of my Teradata configuration?
You don't. Teradata automatically balances sessions across all nodes to evenly distribute
work across the entire parallel configuration. Users connect to the system as a whole
rather than a specific node, and the system uses a balancing algorithm to assign their
sessions to a node. Balancing requires no effort from users or system administrators.

Does Teradata balance the work queries cause?


The even distribution of data is the key to parallelism and scalability in Teradata. Each
query request is sent to all units of parallelism, each of which has an even portion of the
data to process, resulting in even work distribution across the entire system.
For short queries and update flow typical of Web interactions, the optimizer recognizes
that only a single unit of parallelism is needed. A query coordinator routes the work to
the unit of parallelism needed to process the request. The hashing algorithm does not
cluster related data, but spreads it out across the entire system. For example, this month's
data and even today's data is evenly distributed across all units of parallelism, which
means the work to update or look at that data is evenly distributed.
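
As a rough illustration (table and column names are hypothetical), Teradata's hash
functions can be used to see how primary index values map to AMPs and to confirm that
rows are spread evenly:

    -- Counts rows per AMP for a hypothetical table; roughly equal counts mean
    -- the hashing algorithm is distributing the data evenly.
    SELECT HASHAMP(HASHBUCKET(HASHROW(customer_id))) AS amp_no,
           COUNT(*) AS row_count
    FROM   purchase_history
    GROUP  BY 1
    ORDER  BY 1;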

Will many concurrent requests cause bottlenecks in query coordination?


Query coordination is carried out by a fully parallel parsing engine (PE) component.
Usually, one or more PEs are present on each node. Each PE handles the requests for a
set of sessions, and sessions are spread evenly across all configured PEs. Each PE is
multithreaded, so it can handle many requests concurrently. And each PE is independent
of the others with no required cross-coordination. The number of users logged on and
requests in flight are limited only by the number of PEs in the configuration.

How do you avoid bottlenecks when the query coordinator must retrieve
information from the data dictionary?
In Teradata, the DBMS itself manages the data dictionary. Each dictionary table is simply
a relational table, parallelized across all nodes. The same query engine that manages user
workloads also manages the dictionary access, using all nodes for processing dictionary
information to spread the load and avoid bottlenecks. The PE even caches recently used
dictionary information in memory. Because each PE has its own cache, there is no
coordination overhead. The cache for each PE learns the dictionary information most
likely to be needed by the sessions assigned to it.

With a large volume of work, how can all requests execute at once?
As in any computer system, the total number of items that can execute at the same time is
always limited to the number of CPUs available. Teradata uses the scheduling services
Unix and NT provide to handle all the threads of execution running concurrently. Some
requests might also exist on other queues inside the system, waiting for I/O from the disk
or a message from the BYNET, for example. Each work item runs in a thread; each
thread gets a turn at the CPU until it needs to wait for some external event or until it
completes the current work. Teradata configures several units of parallelism in each SMP
node. Each unit of parallelism contains many threads of execution that aren't restricted to
a particular CPU; therefore, every thread gets to compete equally for the CPUs in the
SMP node.
There is a limit, of course, to the number of pieces of work that can actually have a thread
allocated in a unit of parallelism. Once that limit is reached, Teradata queues work for the
threads. Each thread is context free, which means that it is not assigned to any session,
transaction, or request. Therefore, each thread is free to work on whatever is next on the
queue. The unit of work on the queue is a processing step for a request. Combining the
queuing of steps with context-free threads allows Teradata to share the processing service
equally across all the concurrent requests in the system. From the users' point of view, all
the requests in the system are running, receiving service, and sharing system resources.

How does Teradata avoid resource contention and the resulting performance and
management problems?
Teradata algorithms are very resource efficient. Other DBMSs optimize for single-query
performance by giving all resources to the single query. But Teradata optimizes for
throughput of many concurrent queries by allocating resources sparingly and using them
efficiently. This kind of optimization helps avoid wide performance variations that can
occur depending on the number of concurrent queries.
When faced with a workload that requires more system resources than are available,
Teradata tunes itself to that workload. Thrashing, a common performance failure mode in
computer systems, occurs when the system has fewer resources than the current workload
requires and begins using more processing time to manage resources than to do the work.
With most databases, a DBA would tune the system to avoid thrashing. However,
Teradata adjusts automatically to workload changes by adjusting the amount of running
work and internally pushing back incoming work. Each unit of parallelism manages this
flow control mechanism independently.

If all concurrent work shares resources evenly, how are different service
levels provided to different users?
The Priority Scheduler Facility (PSF) in Teradata manages service levels among different
parts of the workload. PSF allows granular control of system resources. The system
administrator can define up to five resource partitions; each partition contains four
available priorities. Together, they provide 20 allocation groups (AGs) to which portions
of the workload are assigned by an attribute of the logon ID for the user or application.
The administrator assigns each AG a portion of the total system resources and a
scheduling policy.
For example, the administrator can assign short queries from the Web site a guaranteed
20 percent of system resources and a high priority. In contrast, the administrator might
assign medium priority and 10 percent of system resources to more complex queries with
lower response-time requirements. Similarly, the administrator might assign data mining
queries a low priority and five percent of the total resources, effectively running them in
the background. You can define policies so that the resources adjust to the work in the
system. For example, you could allow data mining queries to take up all the resources in
the system if nothing else is running.
Unlike other scheduling utilities, PSF is fully integrated into the DBMS, not managed at
the task or thread level, which makes it easier to use for parallel database workloads.
Because PSF is an attribute of the session, it follows the work wherever it goes in the
system. Whether that piece of work is executed by a single thread in a single unit of
parallelism or in 2,000 threads in 500 units of parallelism, PSF manages it without system
administrator involvement.
CPU scheduling is a primary component of PSF, using all the normal techniques (such as
quantum size, CPU queues by priority, and so on). However, PSF is pervasive throughout
the Teradata DBMS. There are many queues inside a DBMS that handles a large-volume
mixed workload. All of those queues are prioritized based on the priority of the work.
Thus, a high-priority query entered after several lower-priority requests that are awaiting
their turn to run will go to the head of the queue and be executed first. I/O is also
managed by priority. Data warehouse workloads are heavy I/O users, so a large query
performing a lot of I/O could hold up a short, high-priority request. PSF puts the high-
priority request's I/Os at the head of the queue, helping to deliver response-time goals.

Data warehouse databases often set the system environment to allow for
fast scans. Does Teradata performance suffer when the short work is
mixed in?
Because Teradata was designed to handle a high volume of concurrent queries, it doesn't
count on sequential scans to produce high performance for queries. Although other
DBMS products see a large fall in request performance when they go from a single large
query to multiple queries or when a mixed workload is applied, Teradata sees no such
performance change. Teradata never plans on sequential access in the first place. In fact,
Teradata doesn't even store the data for sequential accesses. Therefore, random accesses
from many concurrent requests are just business as usual.
Sync scan algorithms provide additional optimization. When multiple concurrent requests
are scanning or joining the same table, their I/O is piggybacked so that only a single I/O
is performed to the disk. Multiple concurrent queries can run without increasing the
physical I/O load, leaving the I/O bandwidth available for other parts of the workload.

What if work demand exceeds Teradata's capabilities?


There are limits to how much work the engine can handle. A successful data warehouse
will almost certainly create a demand for service that is greater than the total processing
power available on the system. Teradata always puts into execution any work presented
to the DBMS.
If the total demand is greater than the total resources, then controls must be in place
before the work enters the DBMS. When your warehouse reaches this stage, you can use
Database Query Manager (DBQM) to manage the flow of user requests into the
warehouse. DBQM, inserted between the users' ODBC applications and the DBMS,
evaluates each request and then applies a set of rules created by the system administrator.
If the request violates any of the rules, DBQM notifies the user that the request is denied
or deferred to a later time for execution.
Rules can include, for example, system use levels, query cost parameters, time of day,
objects accessed, and authorized users. You can read more about DBQM in a recent
Teradata Review article ("Field Report: DBQM," Summer 1999, available online at
www.teradatareview.com/summer99/truet.html).
How do administrators and DBAs stay on top of complex mixed
workloads?
The Teradata Manager utility provides a single operational view of the system for
administrators and DBAs. The tool provides real-time performance data, a log of past
performance, a view of the users and queries currently executing, schema management,
and more.

STAYING ACTIVE
The active warehouse is a busy place. It must handle all decision making for the
organization, including strategic, long-range data mining queries, tactical decisions for
daily operations, and event-based decisions necessary for effective Web sites.
Nevertheless, managing this diversity of work does not require a staff of hundreds
running a complex architecture with multiple data marts, operational data stores, and a
multitude of feeds. It simply requires a database management system that can manage
multiple workloads at varying service levels, scale with the business, and provide 24x7
availability year round with a minimum of operational staff.

* Use COMPRESS on whichever columns possible. This helps reduce I/O and hence
improves performance, especially for columns with lots of NULL values or a small set of
known values.
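
A minimal sketch, with a hypothetical table, of how COMPRESS is declared on columns
(exact capabilities such as multi-value lists depend on the Teradata release):

    -- COMPRESS stores the named value (and NULLs) in the table header instead of
    -- in every row, shrinking rows and therefore the I/O needed to scan them.
    CREATE TABLE customer_contact
    ( customer_id  INTEGER NOT NULL,
      middle_init  CHAR(1) COMPRESS,              -- compresses NULLs
      status_code  CHAR(1) COMPRESS ('A'),        -- compresses a frequent known value
      country_code CHAR(2) COMPRESS ('US', 'GB')  -- multi-value list, release permitting
    )
    PRIMARY INDEX (customer_id);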

* COLLECT STATISTICS on a daily basis (after every load) in order to improve
performance.
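
For example (hypothetical table and columns), a post-load statistics refresh might look like:

    -- Define statistics once on the join/selection columns and the index ...
    COLLECT STATISTICS ON purchase_history COLUMN (customer_id);
    COLLECT STATISTICS ON purchase_history COLUMN (purchase_date);
    COLLECT STATISTICS ON purchase_history INDEX (customer_id);

    -- ... then after every load, re-collect all previously defined statistics:
    COLLECT STATISTICS ON purchase_history;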

* Drop secondary indexes before every load and recreate them afterwards. This helps
improve load performance (if load time is critical).
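
For instance (hypothetical secondary index on a hypothetical table):

    -- Drop the secondary index before the load ...
    DROP INDEX (purchase_date) ON purchase_history;

    -- ... run the load (MultiLoad, FastLoad, etc.) here ...

    -- ... and recreate it afterwards.
    CREATE INDEX (purchase_date) ON purchase_history;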

* Regularly check for even data distribution across all AMPs using Teradata Manager or
through Queryman.
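
The same check can be done directly with SQL against the data dictionary; a sketch with
a hypothetical database and table name:

    -- Per-AMP (Vproc) space used by one table; roughly equal CurrentPerm values
    -- across AMPs indicate even data distribution.
    SELECT Vproc,
           SUM(CurrentPerm) AS current_perm
    FROM   DBC.TableSize
    WHERE  DatabaseName = 'sales_db'
    AND    TableName = 'purchase_history'
    GROUP  BY Vproc
    ORDER  BY Vproc;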

* Check the combination of CPUs, AMPs, PEs, and nodes for performance optimization.
Each AMP can handle 80 tasks and each PE can handle 120 sessions.

* MLOAD – Customize the number of sessions for each MLOAD job depending on:
1. the number of concurrent MLOAD jobs, and
2. the number of PEs in the system.

e.g.
SCENARIO 1
# of AMPs = 10
# of max load jobs handled by Teradata = 5 (a parameter that can be set to values from 5 to 15)
# of sessions per load job = 1 (a parameter that can be set globally or in each MLOAD script)
# of PEs = 1

So 10*5*1 = 50, plus 10 (2 per job overhead) = 60, which is the maximum number of
sessions on the Teradata box.
This is LESS than 120, the maximum number of sessions a PE can handle.

SCENARIO 2
# of AMPs = 16
# of max load jobs handled by Teradata = 15
# of sessions per load job = 1
# of PEs = 1

So 16*15*1 = 240, plus 30 (2 per job overhead) = 270, which is the maximum number of
sessions on the Teradata box.
This is MORE than 120, the maximum number of sessions a PE can handle.

Hence the MLOAD jobs fail, in spite of the use of the SLEEP and TENACITY features.

* Use the SLEEP and TENACITY features of MLOAD for scheduling MLOAD jobs.
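
A sketch of a MultiLoad script header showing where these options go (log table, logon,
target table, and values are all made up):

    .LOGTABLE sales_db.ml_purchase_log;
    .LOGON tdpid/loaduser,password;

    /* SESSIONS = sessions this job may use; TENACITY = hours to keep retrying
       the logon when no load slot is free; SLEEP = minutes between retries.   */
    .BEGIN IMPORT MLOAD
        TABLES   sales_db.purchase_history
        SESSIONS 4
        TENACITY 4
        SLEEP    10;

    /* ... LAYOUT, DML LABEL, and IMPORT commands would go here ... */

    .END MLOAD;
    .LOGOFF;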

* Check the TABLEWAIT parameter. If it is omitted, you can get an immediate load job
failure when you submit two MLOAD jobs that try to update the same table.

* JOIN INDEX - Check the limit on the number of fields in a join index (maximum 16
fields; it may vary by version).

A join index is like building the joined table physically. Its advantage is better
performance, since the data is physically stored and not calculated on the fly. The cons
are loading time (MLOAD requires join indexes to be dropped before loading) and
additional space, since it is a physical table.
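
A minimal sketch (hypothetical tables and columns) of defining and dropping a join index:

    -- Pre-joins customer and purchase data and stores the result physically.
    CREATE JOIN INDEX sales_db.cust_purch_ji AS
    SELECT c.customer_id,
           c.region_code,
           p.purchase_date,
           p.amount
    FROM   sales_db.customer c,
           sales_db.purchase_history p
    WHERE  c.customer_id = p.customer_id
    PRIMARY INDEX (customer_id);

    -- Must be dropped before MultiLoading either base table, then recreated:
    DROP JOIN INDEX sales_db.cust_purch_ji;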
