
DS87C520/DS83C520

EPROM/ROM High-Speed Microcontrollers

FEATURES
• 80C52 Compatible
8051 Pin- and Instruction-Set Compatible
Four 8-Bit I/O Ports
Three 16-Bit Timer/Counters
256 Bytes Scratchpad RAM
• Large On-Chip Memory
16kB Program Memory
1kB Extra On-Chip SRAM for MOVX
• ROMSIZE Feature
Selects Internal ROM Size from 0 to 16kB
Allows Access to Entire External Memory Map
Dynamically Adjustable by Software
Useful as Boot Block for External Flash
• High-Speed Architecture
4 Clocks/Machine Cycle (8051 = 12)
Runs DC to 33MHz Clock Rates
Single-Cycle Instruction in 121ns
Dual Data Pointer
Optional Variable-Length MOVX to Access Fast/Slow RAM/Peripherals
• Power Management Mode
Programmable Clock Source to Save Power
CPU Runs from (crystal/64) or (crystal/1024)
Provides Automatic Hardware and Software Exit
• EMI Reduction Mode Disables ALE
• Two Full-Duplex Hardware Serial Ports
• High Integration Controller Includes:
Power-Fail Reset
Early-Warning Power-Fail Interrupt
Programmable Watchdog Timer
• 13 Interrupt Sources with Six External
8255 Programmable Peripheral Interface
The 8255 PPI provides three 8-bit I/O ports (A, B, and C), configured through a control register.
BSR (Bit Set/Reset) Mode
Writing a control word with D7 = 0 sets or resets an individual Port C bit without disturbing the others, as sketched below.
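
The slides name BSR mode without further detail, so here is a minimal C sketch of building and writing a BSR control word. The memory-mapped address of the 8255 control register is an assumption for illustration only, not taken from the source.

    /* Hypothetical sketch of 8255 BSR (Bit Set/Reset) mode in C.
       CTRL_8255 is an assumed memory-mapped address for the control
       register; substitute whatever mapping your system provides. */
    #include <stdint.h>

    #define CTRL_8255 ((volatile uint8_t *)0x8003)   /* assumed address */

    /* BSR control word: D7 = 0 selects BSR mode, D3-D1 select the
       Port C bit (0-7), and D0 = 1 sets / D0 = 0 resets that bit. */
    static uint8_t bsr_word(uint8_t pc_bit, uint8_t set)
    {
        return (uint8_t)(((pc_bit & 0x07u) << 1) | (set & 0x01u));
    }

    void pc_set(uint8_t bit)   { *CTRL_8255 = bsr_word(bit, 1); }
    void pc_reset(uint8_t bit) { *CTRL_8255 = bsr_word(bit, 0); }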
Goals of Parallel Computing
• Serial computing: one processor executes a series of instructions to produce a result.
• Parallel computing: produce the same result using multiple processors.
– Ideally, a program running on P processors should execute P times faster.
– In practice, performance depends on how the problem is divided between the processors.
– Each processor should perform a similar amount of work, i.e., the load should be balanced (see the speedup definitions after this list).
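
The "P times faster" ideal above is usually quantified as speedup and parallel efficiency; these standard definitions are added for reference (they are not in the original slides):

    S(P) = T_1 / T_P        (speedup; ideally S(P) = P)
    E(P) = S(P) / P         (efficiency; ideally E(P) = 1)

where T_1 is the run time on one processor and T_P the run time on P processors. Load imbalance reduces both, because T_P is determined by the slowest processor.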
Introduction to Parallel Architectures
SIMD Architecture
• Single Instruction Multiple Data.
• Each processor has its own memory where it keeps its data.
• Every processor synchronously executes the same instruction on its local data (a conceptual sketch follows the figure below).
• Instructions are issued by a controller processor.
• Processors can communicate with each other.
• Examples: DAP, CM200.
[Figure: SIMD architecture]
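
Since lockstep execution is easiest to see in code, here is a conceptual C sketch (no particular machine implied): the loop stands in for an array of SIMD processors, each applying the same operation to the data element held in its own memory.

    /* Conceptual SIMD sketch: one instruction stream, many data
       elements. A vectorisable loop stands in for lockstep
       processors (illustration only). */
    #include <stdio.h>

    void saxpy(int n, float a, const float *x, float *y)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];   /* same operation, different data */
    }

    int main(void)
    {
        float x[4] = {1, 2, 3, 4}, y[4] = {0, 0, 0, 0};
        saxpy(4, 2.0f, x, y);
        for (int i = 0; i < 4; i++) printf("%g ", y[i]);  /* 2 4 6 8 */
        printf("\n");
        return 0;
    }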
MIMD Architecture

• Multiple Instruction Multiple Data.
• Several independent processors, each capable of executing a separate program.
• Further subdivided by the relationship between processors and memory:
– Shared memory.
– Distributed memory.
– Virtual shared memory.
Shared Memory

• A small number of processors, each with access to a global memory store.
• Communication via writes to and reads from memory (a threaded sketch follows the figure below).
• Simple to program (no explicit communications).
• Poor scaling due to the memory-access bottleneck.
[Figure: shared-memory architecture]
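
As a concrete illustration of communicating through a global store, here is a minimal shared-memory sketch using POSIX threads (the library choice is mine; the slides name no API). The workers communicate only by reading and writing the shared counter under a lock; that lock is also the memory-access bottleneck in miniature.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                     /* global memory store */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);           /* serialise access */
            counter++;                           /* communicate via memory */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[2];
        for (int i = 0; i < 2; i++) pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
        printf("counter = %ld\n", counter);      /* expect 200000 */
        return 0;
    }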
Distributed Memory

• Each processor has its own local memory.
• Processors are connected via some interconnect mechanism.
• Processors communicate via explicit message passing (a minimal MPI sketch follows the figure below).
• Local memory access is quicker than remote memory access.
• More scalable than the shared-memory architecture.
• Examples: Meiko Computing Surface, IBM SP-1 and SP-2, Intel Paragon.
[Figure: distributed-memory architecture]
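
The explicit message passing above can be made concrete with a two-process MPI exchange (MPI is my choice of interface; the slides do not name one). Each process holds its variable privately; the value crosses the interconnect only because it is explicitly sent and received.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value = 0;                 /* private to each process */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            /* explicit send over the interconnect to process 1 */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* explicit receive; without it, rank 1 never sees the data */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }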
Virtual Shared Memory

• Each processor has some local memory.
• Direct access to remote memory via a global address space.
• Hardware includes support circuitry to deal with remote accesses, allowing very fast communications.
• Example: Cray T3D.
Parallel Programming Concepts

• Two main programming paradigms:
– Data parallel.
– Message passing.
• It is common for a parallel computer to support several programming models, e.g., the Cray T3D.
• It is the programmer's responsibility to choose.
Message Passing

• Typically standard C or F77 code.
• Variables are private to a process; communication occurs via calls to a message-passing interface.
• It is common for each processor to run the same executable: Single Program Multiple Data (SPMD).
• A process identifier is used to make different processors execute different parts of the code, as in the sketch below.

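A minimal SPMD sketch (again assuming MPI as the interface): every processor runs this same executable, and the rank returned by MPI_Comm_rank steers each one to its own share of the work.

    #include <mpi.h>
    #include <stdio.h>

    #define N 1000

    int main(int argc, char **argv)
    {
        int rank, size;
        double local = 0.0, total = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* process identifier */
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank sums a different slice of 1..N, chosen by its id. */
        for (int i = rank + 1; i <= N; i += size)
            local += (double)i;

        /* Combine the partial results on rank 0. */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)                          /* only one rank prints */
            printf("sum 1..%d = %.0f\n", N, total);

        MPI_Finalize();
        return 0;
    }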
Data Parallel

• Uses its own programming language, normally an extension of Fortran or C.
• Parallel intrinsics and built-in procedures operate on data that is known to all processes.
• Exchange of data and coordination of processes are hidden from the programmer, as in the sketch below.
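
The slides are probably thinking of languages such as HPF or C*; as an analogous illustration in C, an OpenMP directive gives the same flavour: the elementwise operation is stated once, and the distribution of work and any coordination stay hidden from the programmer. (OpenMP is an assumption here, chosen only because it extends plain C in a similar spirit.)

    #include <stdio.h>

    #define N 8

    int main(void)
    {
        double a[N], b[N], c[N];

        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

        /* One data-parallel statement: every element is updated by the
           same operation; the runtime decides who computes what. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        for (int i = 0; i < N; i++) printf("%g ", c[i]);
        printf("\n");
        return 0;
    }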
