
HYPERTHREADING

General Introduction

Today, when we look at an advertising leaflet for computers, it is very common to see the more expensive, faster machines offered with the feature of Hyperthreading. It is a new concept in computer architecture, and it has been in the industry for only about three years. Hyperthreading is the latest technology from Intel, one of the leading manufacturers in the computer market. It improves computer performance by making more efficient use of the hardware resources.

Basically, Hyperthreading is an architectural enhancement that can fool applications and operating systems into treating one CPU as if it were two. In the same manner, if the system were a multiprocessor system with Hyperthreading enabled on each processor, the processors would appear as four to the operating system and to software that supports the technology. The technology was first applied to the Xeon server processor architecture, allowing a single processor to handle two separate code streams, or threads, concurrently. In effect, it creates two logical processors inside a single physical processor. The logical processors share the core physical resources of the chip, such as the execution engine, caches, control registers, and the system bus interface, but each logical processor can be directed to execute a specific thread independently. This way of sharing the processor's resources lets the chip manage incoming data from different applications more effectively, continuously switching from one set of instructions to the other. Using this technology significantly speeds up the performance of applications designed for Hyperthreading.
Intel Corp. first announced the Hyperthreading technology in late 2001. The first hyperthreaded chip it released was the Xeon MP, based on Intel's Pentium 4 design and built for use in servers. At the time, a spokesman for AMD, another competitor in the market, argued that Hyperthreading was Intel's attempt to market a hardware feature known as simultaneous multithreading, which has limited use on the desktop. AMD's designers even suggested that some applications might suffer under Hyperthreading technology because of incompatibility.

Intel released the 3.06 GHz version of its Pentium 4 processor in November 2002, introducing Hyperthreading technology to desktop computers. This time, Hyperthreading provided about a 25 percent increase in performance at only a 5 percent extra cost in hardware design, compared to the previous version of the Pentium 4. With this new advancement, Intel made sure its latest performance-leading chip was available in desktop systems as well. Later, Hyperthreading was adopted in workstations, too.

Hyperthreading-enabled computer systems are most appropriate for web servers, database applications/servers, online transaction systems, intensive gaming, video creation, and multimedia applications, because of their ability to respond faster to the user on the client side. A web server, for example, needs to handle multiple requests simultaneously, and the system can cope with this load by making use of the Hyperthreading technology.
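As a rough illustration of that kind of workload (a sketch, not code from any real server product), the Python fragment below hands several hypothetical requests to a pool of worker threads; on a Hyperthreading-enabled processor, the operating system can schedule these threads onto separate logical processors. The request format and handler here are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical request handler: a real web server would parse the
# request and build a response; here it just echoes a label.
def handle_request(request_id):
    return f"response-{request_id}"

# Serve several requests concurrently on worker threads, which the
# OS can spread across the available logical processors.
def serve(requests):
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(handle_request, requests))

print(serve([1, 2, 3]))
```

Note that in CPython the global interpreter lock limits the gains for purely CPU-bound threads; overlapping I/O-bound requests, as a web server does, is where this threading style pays off most.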

How does Hyperthreading work?

Computer architects have always been trying to speed up the performance of processors.
They have been using techniques including superscalar execution, pipelining/super-
pipelining, out-of-order execution, and branch prediction, all of which cost a great deal in
terms of transistors and power consumption. In fact, chip size and power are increasing at
rates greater than processor performance. So, processor architects are looking for ways to
reverse this trend, that is, to improve performance at a greater rate than chip size and
power.

A look at software trends reveals that server applications consist of multiple threads or
processes that can be executed in parallel. Online transaction processing and web services
have an abundance of software threads that can be scheduled and executed
simultaneously for faster performance. The architects at Intel have been looking to leverage this so-called “thread-level parallelism” to gain a better performance vs. chip size and power ratio.

One technique is chip multiprocessing, where two processors are put on a single chip. This can provide a significant performance boost for multithreaded applications; however, at double the size of a single-core chip, it is expensive to manufacture. Another approach is to allow a single processor to execute multiple threads by switching between them. In time-slice multithreading, the processor switches between software threads after a fixed time period. Switch-on-event multithreading switches threads on long-latency events such as cache misses. These techniques, however, provide less than optimal performance, because switching only under those conditions does nothing about other significant sources of inefficient resource usage, such as branch mispredictions and instruction dependencies.
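The time-slice idea can be sketched in software (a scheduling analogy only, with invented instruction labels, not a model of what the hardware actually does): each instruction stream gets the processor for a fixed quantum, then the next stream takes over.

```python
def time_slice(streams, quantum=2):
    """Round-robin over instruction streams, taking `quantum`
    instructions from each stream per turn, until all are done."""
    streams = [list(s) for s in streams]
    order = []
    while any(streams):
        for s in streams:
            # Take up to `quantum` instructions, then move on.
            taken, s[:] = s[:quantum], s[quantum:]
            order.extend(taken)
    return order

# Two hypothetical instruction streams interleaved with quantum 2.
print(time_slice([["a1", "a2", "a3"], ["b1", "b2", "b3"]]))
# → ['a1', 'a2', 'b1', 'b2', 'a3', 'b3']
```

Shrinking the quantum makes the interleaving finer-grained, which is the direction Hyperthreading pushes all the way to individual execution cycles.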
The aim of Intel’s new Hyperthreading technology is to use a CPU’s resources more
efficiently to get the jobs done. As a spokesman from Intel Corp. puts it, “Today’s
processors are one cook with one pan; the new hyperthreaded chip is still one cook, but
with two pans.” A cook who works on two dishes at the same time should be able to
complete work on both in less time overall. Hyperthreading technology allows a single
processor to dynamically execute multiple threads or processes at the same time. It makes
the most effective use of processor resources to maximize the performance vs. chip size
and power.

Hyperthreading technology makes a single physical processor appear as multiple logical processors; simply put, the physical execution resources are shared and the architecture state is duplicated for each logical processor. From a software or architecture perspective, this means operating systems and user programs can schedule processes or threads to logical processors as they would on conventional physical processors. From a microarchitecture perspective, it means that instructions from both logical processors will persist and execute simultaneously on shared execution resources. With Hyperthreading, each logical processor maintains a complete set of architecture state, consisting of the general-purpose registers, the control registers, and some machine state registers. From a software perspective, once the architecture state is duplicated, the processor appears to be two processors.
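Because the duplicated architecture state is what software actually sees, a program can observe the logical processors directly. A minimal sketch using only the Python standard library (note that `os.cpu_count()` reports logical processors; counting physical cores generally needs a platform-specific query):

```python
import os

# os.cpu_count() reports the number of logical processors the OS
# sees, so a single Hyperthreading-enabled chip shows up as 2.
logical = os.cpu_count()
print(f"logical processors visible to software: {logical}")
```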

To understand how instructions from two logical processors can execute simultaneously and better utilize the available hardware execution resources, we can view resource utilization in terms of individual execution units. While a thread is executing on the processor, it makes use of only those resources it needs for that clock cycle. On a processor without Hyperthreading, the remaining resources sit idle while the others are being utilized by the thread. For instructions that depend on the outcome of instructions executed earlier, there will be stalls, which are a pure waste of CPU time and resources. With the addition of the Hyperthreading capability, we instead make better use of this unused CPU time and these resources. While one or more of the resources are not in use, another thread or process can make use of those resources and that CPU time. In that case, the scheduler knows in advance what resources are requested by each process in the ready queue, and hence gives the CPU to the appropriate threads. If there are multiple threads from the same application, the performance is even better, since context switching and communication between threads are faster than between individual processes. To accomplish this multithreaded execution, the scheduler keeps two threads/processes inside the processor at a time (for a system with a single physical processor), and the state of each thread is preserved in its own register set. While a thread already executing in the CPU is utilizing, say, two of the execution units, the scheduler picks from the other thread the next instruction that will make use of a third execution unit, and so on. However, the operating system has to make sure that it keeps the scheduler's queue full of instructions from both logical processors. In the case of a stall, one of the other processes can make use of the whole set of processor execution units. Still, there is no guarantee that all resources will be utilized all the time: since the instruction execution rates of applications vary over time, there may be a fair number of cycles with idle execution units.
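The stall-hiding idea can be sketched with two operating-system threads: while one is blocked waiting (the analogue of a stalled instruction stream), the other keeps going, so the total wall-clock time is close to the longest task rather than the sum. This is only an OS-level analogy of what Hyperthreading does at the execution-unit level, and the timings are illustrative.

```python
import threading
import time

def stalled_task():
    # Stand-in for a thread stalled on a long-latency event.
    time.sleep(0.2)

def other_task():
    # Stand-in for the second thread that runs during the stall.
    time.sleep(0.2)

start = time.perf_counter()
threads = [threading.Thread(target=stalled_task),
           threading.Thread(target=other_task)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# The two 0.2 s waits overlap, so elapsed is ~0.2 s, not 0.4 s.
print(f"elapsed: {elapsed:.2f} s")
```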

Hyperthreading technology can also be used in multiprocessor systems, giving almost the same performance gain. However, as with conventional multiprocessing, there will be some additional overhead, such as distributing processes to processors and balancing the load among them.

The additional hardware needed for Hyperthreading

As mentioned earlier, making a processor Hyperthreading-capable takes an additional 5-6 percent of hardware, which adds almost the same proportion to the cost. The additional hardware needed comprises an instruction streaming buffer, a next instruction pointer (PC), a return stack predictor, a trace cache next IP, a trace cache fill buffer, an instruction TLB (translation look-aside buffer), and two register alias tables.

How to migrate to Hyperthreading

Besides modifying the internal structure of the processor, the operating system and the application programs also have to be made compatible with Hyperthreading. The software has to be rewritten, with its instructions rearranged so that they can be executed as parallel tasks, and the application programs have to be fully optimized. In the market today, the Microsoft Windows XP and Linux operating systems can make use of a Hyperthreading-based processor; more generally, any operating system that is aware of SMP (symmetric multiprocessing) can run on a Hyperthreading-enabled processor. As for how operating systems recognize the processors: when a Hyperthreading-enabled computer is powered on, the operating system will see and report two processors. That is what we want, because the operating system has to schedule tasks in a way that exploits Hyperthreading's parallelism.
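On Linux, one way to see whether the processors the OS reports are Hyperthreading siblings is to compare the "siblings" and "cpu cores" fields of /proc/cpuinfo. The sketch below assumes a Linux system and simply returns an empty result elsewhere.

```python
def cpuinfo_fields(path="/proc/cpuinfo"):
    """Collect the first 'siblings' and 'cpu cores' values, if any."""
    fields = {}
    try:
        with open(path) as f:
            for line in f:
                if ":" not in line:
                    continue
                key, _, value = line.partition(":")
                key = key.strip()
                if key in ("siblings", "cpu cores") and key not in fields:
                    fields[key] = int(value.strip())
    except OSError:
        pass  # not Linux, or /proc unavailable
    return fields

info = cpuinfo_fields()
# If 'siblings' exceeds 'cpu cores', the OS is seeing Hyperthreading
# logical processors rather than extra physical cores.
print(info)
```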

Disadvantages:

Like every new innovation in the computer industry, this new technology, too, comes with drawbacks. First of all, since we are adding new hardware inside the processor, the architectural design is more complex. Application and operating system programmers also have to write their code for Intel's implementation, since this is a product of the Intel Corp. There is also a licensing issue: software vendors do not know how to price their products, whether as a one-processor product or a two-processor product, since there are two logical processors in a single Hyperthreading-enabled chip.
