
Technical Seminar Report On

High Performance Computing

Submitted by: MIT THAKKAR

High Performance Computing (HPC) refers to a computing environment or computer hardware system built for massive and fast computation, sometimes referred to as supercomputing or a supercomputer. The goal is to perform the maximum amount of computation in the minimum amount of time. HPC is used in scientific and engineering fields; performance is typically more than 1 gigaflop, and the cost is correspondingly high.

PARALLEL COMPUTING
A collection of processing elements that communicate and cooperate to solve large problems more quickly than a single processing element could.

TYPES OF PARALLELISM
Overt parallelism is visible to the programmer. It may be difficult to program correctly, but it can deliver large improvements in performance.
Covert parallelism is not visible to the programmer. The compiler is responsible for extracting the parallelism, which makes it easy to use.
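As an illustrative sketch only (not part of the original report), the contrast can be shown in C with OpenMP: the pragma makes the parallelism overt, while compiling the same sequential loop with an optimizing compiler may exploit covert parallelism such as auto-vectorization. The array size N, file name, and compile commands are assumptions made for the example.

/* Overt parallelism: the programmer explicitly requests threads.
 * Compile with:  gcc -fopenmp overt.c -o overt   (assumed toolchain) */
#include <stdio.h>

#define N 1000000

static double a[N], b[N], c[N];

int main(void) {
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

    /* The pragma is visible in the source: the programmer is responsible
     * for ensuring the loop iterations are independent. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[N-1] = %f\n", c[N - 1]);
    return 0;
}

/* Covert parallelism: the same loop written sequentially, with no pragma,
 * may be auto-vectorized or auto-parallelized by the compiler (e.g. gcc -O3)
 * without the programmer seeing any of it. */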

PROCESSORS
GENERAL CLASSIFICATION
Vector: large arrays (vectors) of data are operated on simultaneously.
Scalar: data is operated on in a sequential, element-by-element fashion.

ARCHITECTURAL CLASSIFICATIONS
SISD machines: conventional single-processor computers. The various circuits of the CPU are split into functional units arranged in a pipeline; each functional unit operates on the result of the previous functional unit during a clock cycle.

PROCESSORS(cont..)
SIMD machines: a single CPU is devoted to controlling a large collection of subordinate processors. Each subordinate processor has its own local memory, runs the same program, and processes a different data stream. On each instruction, a subordinate processor either executes it or sits idle.
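A minimal sketch (an illustration, not from the report): with OpenMP's simd pragma, each vector lane applies the same instruction to a different array element, and lanes whose condition is false are masked off, mirroring the "execute or sit idle" behaviour. The array size, values, and compile flags are assumptions.

/* SIMD sketch: one instruction stream applied to many data elements.
 * Compile with:  gcc -fopenmp-simd -O2 simd.c -o simd   (assumed flags) */
#include <stdio.h>

#define N 1024

int main(void) {
    float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = (float)(i - N / 2); y[i] = 1.0f; }

    /* Every lane performs the same multiply-add on its own element;
     * lanes where the test fails are masked off ("sit idle"). */
    #pragma omp simd
    for (int i = 0; i < N; i++)
        if (x[i] > 0.0f)
            y[i] = 2.0f * x[i] + y[i];

    printf("y[0]=%f  y[N-1]=%f\n", y[0], y[N - 1]);
    return 0;
}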

PROCESSORS(cont..)
MISD machines: multiple instruction streams act on a single stream of data. No practical machine in this class has been constructed.
MIMD machines: multiple processors, each with its own or a shared memory. Each processor can run the same or a different program and processes its own data stream. The processors can work synchronously or asynchronously and can be either tightly or loosely coupled. Both shared-memory and distributed-memory systems fall into this class.
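As a small illustration (assumed, not taken from the report), POSIX threads on a shared-memory machine give MIMD behaviour: each thread executes a different instruction stream on different data, asynchronously, until the streams are joined. The routines and values below are invented for the example.

/* MIMD sketch with POSIX threads: each thread runs a *different* routine
 * on its own data, and the threads proceed asynchronously until joined.
 * Compile with:  gcc mimd.c -o mimd -lpthread   (assumed toolchain) */
#include <stdio.h>
#include <pthread.h>

static double sum_result, prod_result;

static void *compute_sum(void *arg) {        /* instruction stream 1 */
    (void)arg;
    sum_result = 0.0;
    for (int i = 1; i <= 1000; i++) sum_result += i;
    return NULL;
}

static void *compute_product(void *arg) {    /* instruction stream 2 */
    (void)arg;
    prod_result = 1.0;
    for (int i = 1; i <= 20; i++) prod_result *= i;
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, compute_sum, NULL);
    pthread_create(&t2, NULL, compute_product, NULL);
    pthread_join(t1, NULL);   /* synchronize: wait for both streams */
    pthread_join(t2, NULL);
    printf("sum = %f, 20! = %e\n", sum_result, prod_result);
    return 0;
}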

MEMORY ORGANIZATION
SHARED MEMORY
One common memory block is shared by all processors; a single common bus connects every processor to it.

Fig-1

Fig-2

An alternative organization utilizes a (complex) interconnection network to connect the processors to the shared memory modules.

MEMORY ORGANIZATION(Cont..)
DISTRIBUTED MEMORY
Each processor has a private memory whose contents can only be accessed by that processor. In general, such machines can be scaled to thousands of processors, but they require special programming techniques such as message passing (see the sketch after Fig-3).

Fig-3
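A minimal sketch of distributed-memory programming (assuming an MPI installation; not from the original report): each rank's memory is private, so a value computed on one rank must be sent explicitly for another rank to see it. The file name and launch commands are assumptions.

/* Distributed-memory sketch with MPI: rank 0 must explicitly send a value
 * before rank 1 can see it, because each rank's memory is private.
 * Build and run (assumed):  mpicc distmem.c -o distmem && mpirun -np 2 ./distmem */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                       /* lives only in rank 0's memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}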

MEMORY ORGANIZATION(Cont..)
VIRTUAL SHARED MEMORY
The objective is to combine the scalability of distributed memory with the convenience of shared memory. A global address space is mapped onto physically distributed memory, and data moves between processors on demand.

Fig-4

PARALLEL PROGRAMMING CONCEPTS


A process is an instance of a program or subprogram executing autonomously on a processor; at any moment a process can be considered either running or blocked. A common model is single-program, multiple-data (SPMD), in which every process runs the same program. In shared-memory programming, processes are created by a parent process; in distributed-memory programming, message passing is used to exchange data. Data parallelism means the data structure is distributed among the processes, and the individual processes execute the same instructions on their own parts of the data structure, as sketched below.
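The following data-parallel sketch (illustrative only; it assumes MPI and an array length that divides evenly among the processes) distributes an array among the processes, has every process run the same summation loop on its own slice, and combines the partial results at the end.

/* Data-parallel / SPMD sketch: same program on every rank, different data.
 * Build and run (assumed):  mpicc spmd.c -o spmd && mpirun -np 4 ./spmd */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define N 1024   /* total problem size; assumed divisible by the process count */

int main(int argc, char **argv) {
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int chunk = N / nprocs;
    double *full = NULL, *part = malloc(chunk * sizeof(double));

    if (rank == 0) {                       /* the parent-like rank owns the full data */
        full = malloc(N * sizeof(double));
        for (int i = 0; i < N; i++) full[i] = 1.0;
    }

    /* Distribute the data structure among the processes. */
    MPI_Scatter(full, chunk, MPI_DOUBLE, part, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    double local = 0.0, total = 0.0;
    for (int i = 0; i < chunk; i++)        /* same instructions, different data */
        local += part[i];

    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum = %f (expected %d)\n", total, N);

    MPI_Finalize();
    free(part);
    free(full);
    return 0;
}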

AREAS OF HPC USE


Weather forecasting
Earthquake prediction
River solution simulation
Air pollution
Aircraft simulations
Gene sequencing and mapping
Artificial intelligence
Computer graphics
Geophysical/petroleum modeling
Database applications

DISADVANTAGES

More costly.
More complex.
The speed of an individual processor is no longer the main concern; overall performance depends on how well the problem parallelizes.
High overhead and the difficulty of writing correct parallel programs.
The number of bugs tends to increase.

CONCLUSION
Present supercomputing trends indicate that the supercomputer of tomorrow will work faster, solve even more complex problems, and provide wider access to supercomputing power. HPC has played a crucial role in the advancement of knowledge and the quality of life.

Thank you!!!
