
How Computer Memory Works


When you think about it, it's amazing how many different types of electronic memory you encounter in daily life. Many of them have become an integral part of our vocabulary:

Photo courtesy Sony: A Sony Flash Memory Stick


- RAM
- ROM
- Cache
- Dynamic RAM
- Static RAM
- Flash memory
- Memory Sticks
- Virtual memory
- Video memory
- BIOS

You already know that the computer in front of you has memory. What you may not know is that most of the electronic items you use every day have some form of memory also. Here are just a few examples of the many items that use memory:

- Cell phones
- PDAs
- Game consoles
- Car radios
- VCRs
- TVs

Each of these devices uses different types of memory in different ways. In this article, you'll learn why there are so many different types of memory and what all of the terms mean.

Memory Basics
Although memory is technically any form of electronic storage, it is used most often to identify fast, temporary forms of storage. If your computer's CPU had to constantly access the hard drive to retrieve every piece of data it needs, it would operate very slowly. When the information is kept in memory, the CPU can access it much more quickly. Most forms of memory are intended to store data temporarily.

As you can see in the diagram above, the CPU accesses memory according to a distinct hierarchy. Whether it comes from permanent storage (the hard drive) or input (the keyboard), most data goes in random access memory (RAM) first. The CPU then stores pieces of data it will need to access, often in a cache, and maintains certain special instructions in the register. We'll talk about cache and registers later.

The PC Process


All of the components in your computer, such as the CPU, the hard drive and the operating system, work together as a team, and memory is one of the most essential parts of this team. From the moment you turn your computer on until the time you shut it down, your CPU is constantly using memory. Let's take a look at a typical scenario:

- You turn the computer on.
- The computer loads data from read-only memory (ROM) and performs a power-on self-test (POST) to make sure all the major components are functioning properly. As part of this test, the memory controller checks all of the memory addresses with a quick read/write operation to ensure that there are no errors in the memory chips. Read/write means that data is written to a bit and then read from that bit.
- The computer loads the basic input/output system (BIOS) from ROM. The BIOS provides the most basic information about storage devices, boot sequence, security, Plug and Play (auto device recognition) capability and a few other items.
- The computer loads the operating system (OS) from the hard drive into the system's RAM. Generally, the critical parts of the operating system are maintained in RAM as long as the computer is on. This allows the CPU to have immediate access to the operating system, which enhances the performance and functionality of the overall system.
- When you open an application, it is loaded into RAM. To conserve RAM usage, many applications load only the essential parts of the program initially and then load other pieces as needed.
- After an application is loaded, any files that are opened for use in that application are loaded into RAM.
- When you save a file and close the application, the file is written to the specified storage device, and then it and the application are purged from RAM.
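The POST's read/write check can be sketched in miniature. This is a toy model, not real firmware: the function name and the list-standing-in-for-memory are assumptions for illustration only.

```python
def memory_test(memory):
    """Write alternating-bit patterns to every address and read each one
    back; return the addresses whose cells did not hold the value."""
    bad = []
    for pattern in (0x55, 0xAA):            # 01010101 and 10101010
        for addr in range(len(memory)):
            memory[addr] = pattern          # write to the "bit"
            if memory[addr] != pattern:     # then read it back
                bad.append(addr)
    return sorted(set(bad))

ram = [0] * 1024            # a healthy, simulated 1 KB memory
print(memory_test(ram))     # -> [] (no failing addresses)
```

Real memory controllers do this in hardware across the full address space, but the write-then-verify idea is the same.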

In the list above, every time something is loaded or opened, it is placed into RAM. This simply means that it has been put in the computer's temporary storage area so that the CPU can access that information more easily. The CPU requests the data it needs from RAM, processes it and writes new data back to RAM in a continuous cycle. In most computers, this shuffling of data between the CPU and RAM happens millions of times every second. When an application is closed, it and any accompanying files are usually purged (deleted) from RAM to make room for new data. If the changed files are not saved to a permanent storage device before being purged, they are lost.
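The load/edit/save/purge cycle can be modeled in a few lines. The dict-based "RAM" and "disk" and the function names here are illustrative assumptions, not how an operating system is implemented:

```python
ram, disk = {}, {"letter.txt": "Dear..."}   # two tiers: volatile and permanent

def open_file(name):
    ram[name] = disk[name]        # opening loads a copy into RAM

def save_and_close(name):
    disk[name] = ram[name]        # saving writes back to permanent storage
    del ram[name]                 # closing purges the copy from RAM

open_file("letter.txt")
ram["letter.txt"] += " Sincerely, me"   # the CPU edits the in-RAM copy
save_and_close("letter.txt")
print(disk["letter.txt"])               # -> Dear... Sincerely, me
```

Note that if the program had skipped `save_and_close`, the edit would exist only in `ram` and would be lost at power-off, which is exactly the failure mode described above.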

The Need for Speed


One common question about desktop computers that comes up all the time is, "Why does a computer need so many memory systems?" A typical computer has:

- Level 1 and level 2 caches
- Normal system RAM
- Virtual memory
- A hard disk

Why so many? The answer to this question can teach you a lot about memory!

Fast, powerful CPUs need quick and easy access to large amounts of data in order to maximize their performance. If the CPU cannot get to the data it needs, it literally stops and waits for it. Modern CPUs running at speeds of about 1 gigahertz can consume massive amounts of data -- potentially billions of bytes per second. The problem that computer designers face is that memory that can keep up with a 1-gigahertz CPU is extremely expensive -- much more expensive than anyone can afford in large quantities. In the next section, you'll find out how designers addressed this cost problem.
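The "billions of bytes per second" figure follows directly from clock speed and access width. A quick back-of-the-envelope calculation, assuming a 32-bit CPU for concreteness:

```python
clock_hz = 1_000_000_000    # a 1-gigahertz CPU: one billion cycles per second
bytes_per_access = 32 // 8  # a 32-bit CPU can move 4 bytes per access

# Peak demand if the CPU consumed data on every single cycle
peak_demand = clock_hz * bytes_per_access
print(f"{peak_demand:,} bytes per second")   # -> 4,000,000,000 bytes per second
```

No real workload sustains this peak, but it shows why memory that keeps pace with the CPU is needed only in small, affordable quantities.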

Memory Tiers
Computer designers have solved the cost problem by "tiering" memory -- using expensive memory in small quantities and then backing it up with larger quantities of less expensive memory. The cheapest form of read/write memory in wide use today is the hard disk. Hard disks provide large quantities of inexpensive, permanent storage. You can buy hard disk space for pennies per megabyte, but it can take a good bit of time (approaching a second) to read a megabyte off a hard disk. Because storage space on a hard disk is so cheap and plentiful, it forms the final stage of a CPU's memory hierarchy, called virtual memory.

The next level of the hierarchy is RAM. We discuss RAM in detail in How RAM Works, but several points about RAM are important here. The bit size of a CPU tells you how many bytes of information it can access from RAM at the same time. For example, a 16-bit CPU can process 2 bytes at a time (1 byte = 8 bits, so 16 bits = 2 bytes), and a 64-bit CPU can process 8 bytes at a time. Megahertz (MHz) is a measure of a CPU's processing speed, or clock cycle, in millions per second. So, a 32-bit 800-MHz Pentium III can potentially process 4 bytes simultaneously, 800 million times per second (possibly more based on pipelining)! The goal of the memory system is to meet those requirements.

A computer's system RAM alone is not fast enough to match the speed of the CPU. That is why you need a cache (discussed later). However, the faster RAM is, the better. Most chips today operate with a cycle rate of 50 to 70 nanoseconds. The read/write speed is typically a function of the type of RAM used, such as DRAM, SDRAM or RAMBUS. We will talk about these various types of memory later. First, let's talk about system RAM.
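The tiering idea -- check the small, fast store first and fall back to the larger, slower one -- can be sketched as a lookup chain. This is a hypothetical model; the names and nanosecond costs are illustrative assumptions:

```python
# Each tier: (name, contents, access_time_ns). Fastest tiers come first,
# mirroring the cache -> RAM -> hard disk hierarchy described above.
tiers = [
    ("cache", {},                          10),
    ("RAM",   {},                          60),
    ("disk",  {"report.doc": "..."}, 10_000_000),
]

def read(key):
    """Search the tiers fastest-first; report where the data was found
    and what that access cost."""
    for name, store, ns in tiers:
        if key in store:
            return name, ns
    raise KeyError(key)

where, cost = read("report.doc")
print(where, cost)   # -> disk 10000000
```

A real system would also copy the found data into the faster tiers so the next access is cheap; that promotion step is what the caching sections below describe.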

System RAM
System RAM speed is controlled by bus width and bus speed. Bus width refers to the number of bits that can be sent to the CPU simultaneously, and bus speed refers to the number of times a group of bits can be sent each second. A bus cycle occurs every time data travels from memory to the CPU. For example, a 100-MHz 32-bit bus is theoretically capable of sending 4 bytes (32 bits divided by 8 = 4 bytes) of data to the CPU 100 million times per second, while a 66-MHz 16-bit bus can send 2 bytes of data 66 million times per second. If you do the math, you'll find that simply changing the bus width from 16 bits to 32 bits and the speed from 66 MHz to 100 MHz in our example allows for three times as much data (400 million bytes versus 132 million bytes) to pass through to the CPU every second.

In reality, RAM doesn't usually operate at optimum speed. Latency changes the equation radically. Latency refers to the number of clock cycles needed to read a bit of information. For example, RAM rated at 100 MHz is capable of sending a bit in 0.00000001 seconds, but may take 0.00000005 seconds to start the read process for the first bit. To compensate for latency, CPUs use a special technique called burst mode.
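Those two figures are simply one clock period and five clock periods (a 5-cycle first-bit latency at 100 MHz):

```python
clock_hz = 100_000_000               # 100-MHz RAM
cycle_time = 1 / clock_hz            # one clock cycle = 0.00000001 s (10 ns)
first_bit_latency = 5 * cycle_time   # 5 cycles to begin the read = 0.00000005 s

print(cycle_time, first_bit_latency)
```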

Burst Mode and Pipelining


Burst mode depends on the expectation that data requested by the CPU will be stored in sequential memory cells. The memory controller anticipates that whatever the CPU is working on will continue to come from this same series of memory addresses, so it reads several consecutive bits of data together. This means that only the first bit is subject to the full effect of latency; reading successive bits takes significantly less time.

The rated burst mode of memory is normally expressed as four numbers separated by dashes. The first number tells you the number of clock cycles needed to begin a read operation; the second, third and fourth numbers tell you how many cycles are needed to read each consecutive bit in the row, also known as the wordline. For example: 5-1-1-1 tells you that it takes five cycles to read the first bit and one cycle for each bit after that. Obviously, the lower these numbers are, the better the performance of the memory.

Burst mode is often used in conjunction with pipelining, another means of minimizing the effects of latency. Pipelining organizes data retrieval into a sort of assembly-line process. The memory controller simultaneously reads one or more words from memory, sends the current word or words to the CPU and writes one or more words to memory cells. Used together, burst mode and pipelining can dramatically reduce the lag caused by latency.

So why wouldn't you buy the fastest, widest memory you can get? The speed and width of the memory's bus should match the system's bus. You can use memory designed to work at 100 MHz in a 66-MHz system, but it will run at the 66-MHz speed of the bus so there is no advantage, and 32-bit memory won't fit on a 16-bit bus.
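The payoff of a 5-1-1-1 rating is easy to quantify. A small illustrative calculation (not a real timing model):

```python
def burst_cycles(rating, n_reads):
    """Total clock cycles for n_reads consecutive reads under a burst
    rating like (5, 1, 1, 1): the first access pays the full latency,
    each following access pays only the cheap follow-on cost."""
    first, follow = rating[0], rating[1]
    return first + follow * (n_reads - 1)

with_burst = burst_cycles((5, 1, 1, 1), 4)   # 5 + 1 + 1 + 1 = 8 cycles
without    = 5 * 4                           # every read pays full latency
print(with_burst, without)                   # -> 8 20
```

Reading four sequential bits takes 8 cycles instead of 20, and the advantage grows with longer runs of sequential data.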

Cache and Registers


Even with a wide and fast bus, it still takes longer for data to get from the memory card to the CPU than it takes for the CPU to actually process the data. Caches are designed to alleviate this bottleneck by making the data used most often by the CPU instantly available. This is accomplished by building a small amount of memory, known as primary or level 1 cache, right into the CPU. Level 1 cache is very small, normally ranging between 2 kilobytes (KB) and 64 KB.

The secondary or level 2 cache typically resides on a memory card located near the CPU. The level 2 cache has a direct connection to the CPU. A dedicated integrated circuit on the motherboard, the L2 controller, regulates the use of the level 2 cache by the CPU. Depending on the CPU, the size of the level 2 cache ranges from 256 KB to 2 megabytes (MB). In most systems, data needed by the CPU is accessed from the cache approximately 95 percent of the time, greatly reducing the overhead needed when the CPU has to wait for data from the main memory. Some inexpensive systems dispense with the level 2 cache altogether. Many high-performance CPUs now have the level 2 cache built into the CPU chip itself. Therefore, the size of the level 2 cache and whether it is on board (on the CPU) is a major determining factor in the performance of a CPU. For more details on caching, see How Caching Works.

A particular type of RAM, static random access memory (SRAM), is used primarily for cache. SRAM uses multiple transistors, typically four to six, for each memory cell. It has an external gate array known as a bistable multivibrator that switches, or flip-flops, between two states. This means that it does not have to be continually refreshed like DRAM. Each cell will maintain its data as long as it has power. Without the need for constant refreshing, SRAM can operate extremely quickly. But the complexity of each cell makes it prohibitively expensive for use as standard RAM.

The SRAM in the cache can be asynchronous or synchronous. Synchronous SRAM is designed to exactly match the speed of the CPU, while asynchronous is not. That little bit of timing makes a difference in performance. Matching the CPU's clock speed is a good thing, so always look for synchronized SRAM. (For more information on the various types of RAM, see How RAM Works.)

The final step in memory is the registers. These are memory cells built right into the CPU that contain specific data needed by the CPU, particularly the arithmetic and logic unit (ALU). An integral part of the CPU itself, they are controlled directly by the compiler that sends information for the CPU to process. See How Microprocessors Work for details on registers.
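That 95 percent cache hit rate is what makes the whole scheme pay off: the average access time ends up close to the cache's speed, not the main memory's. A quick average-access-time calculation (the nanosecond figures here are invented for illustration; only the 95 percent comes from the text):

```python
hit_rate = 0.95      # cache satisfies ~95% of requests
cache_ns = 10        # assumed cache access time
ram_ns   = 60        # assumed main-memory access time

average_ns = hit_rate * cache_ns + (1 - hit_rate) * ram_ns
print(average_ns)    # -> 12.5, far closer to cache speed than to RAM speed
```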

Types of Memory
Memory can be split into two main categories: volatile and nonvolatile. Volatile memory loses any data as soon as the system is turned off; it requires constant power to remain viable. Most types of RAM fall into this category.

RAM memory modules. From the top: SIMM, DIMM and SODIMM

Nonvolatile memory does not lose its data when the system or device is turned off. A number of types of memory fall into this category. The most familiar is ROM, but Flash memory storage devices such as CompactFlash or SmartMedia cards are also forms of nonvolatile memory. See the links below for information on these types of memory.

