How to get the most from your PC memory

Introduction

Here are a few tips on getting the most out of your memory. Always read your manual; it will help you determine what your system is capable of doing and put you on the path to a better understanding of your system. I hope this clears up a couple of issues for you and helps increase your system's overall performance.


SPD And CAS Latency Values

Every memory module has a speed limit that is predetermined at the factory, and this rating is usually marked on the module's label. The same information is also stored in an SPD (Serial Presence Detect) chip, a small chip (about 6 mm) mounted on one corner of the memory module. You will see a setting option in the BIOS that reads this chip to determine your memory's speed automatically. CAS latency is measured in memory clock cycles: a lower number is faster, a higher number slower.

The CAS latency value can usually be changed to "1", "2", "2.5", or "3", depending on the options your BIOS offers.
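
To put those numbers in real-world terms, here is a quick back-of-the-envelope calculation (a simple Python sketch; the 133 MHz clock is just an illustrative figure) showing how the same CAS latency translates into actual time at a given memory clock:

```python
# Rough illustration: the real-time cost of a given CAS latency depends on the
# memory clock. Latency in nanoseconds = CAS cycles / clock frequency.
def cas_latency_ns(cas_cycles, memory_clock_mhz):
    """Convert a CAS latency in clock cycles to nanoseconds."""
    return cas_cycles / memory_clock_mhz * 1000  # 1 / MHz = 1000 ns

for cas in (2, 2.5, 3):
    print(f"CL{cas} at 133 MHz = {cas_latency_ns(cas, 133):.1f} ns")
# CL2   at 133 MHz ~ 15.0 ns
# CL2.5 at 133 MHz ~ 18.8 ns
# CL3   at 133 MHz ~ 22.6 ns
```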

If you are mixing memory with different CAS latencies, such as one module rated CAS 2 and another CAS 3, do not set the memory to the faster CAS 2 setting, as this can cause stability problems.

The preferred choice is to have the BIOS use the SPD chip on the RAM module to determine the CAS value. If this option exists in your BIOS, use it for the sake of stability. Overclockers have pushed memory far beyond its rated spec to achieve higher frequencies by altering these settings.
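
For the curious, the SPD data itself is just a table of bytes that the BIOS reads. The following Python sketch decodes the supported CAS latencies from one of those bytes, assuming the JEDEC SPD layout for DDR SDRAM in which byte 18 is a bitmask of supported latencies; the example byte value is made up for illustration:

```python
# Minimal sketch of decoding supported CAS latencies from an SPD dump,
# assuming the JEDEC SPD layout for DDR SDRAM where byte 18 is a bitmask
# (bit 0 = CL1.0, bit 1 = CL1.5, ... bit 6 = CL4.0). On Linux a real dump
# can usually be obtained with the decode-dimms tool.
SPD_CAS_BITS = {0: 1.0, 1: 1.5, 2: 2.0, 3: 2.5, 4: 3.0, 5: 3.5, 6: 4.0}

def supported_cas_latencies(spd_byte_18):
    """Return the CAS latencies advertised by the module, lowest first."""
    return sorted(cl for bit, cl in SPD_CAS_BITS.items()
                  if spd_byte_18 & (1 << bit))

# Hypothetical byte value for a module that supports CL2.5 and CL3:
print(supported_cas_latencies(0b00011000))  # -> [2.5, 3.0]
```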


The Memory Bus

The memory bus is actually a major limiting factor in overall system performance. Older motherboards ran the processor at the same speed as the memory bus, but more recent boards run the processor at two, three or even more times the speed of the memory.
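
A quick bit of arithmetic (illustrative figures only, not taken from any particular board) shows how lopsided this ratio can be:

```python
# Back-of-the-envelope example: how many CPU clock cycles elapse during a
# single memory-bus clock cycle. Both figures are assumed for illustration.
cpu_mhz = 1000        # a 1 GHz processor
memory_bus_mhz = 133  # a 133 MHz memory bus

cycles_per_bus_tick = cpu_mhz / memory_bus_mhz
print(f"The CPU ticks ~{cycles_per_bus_tick:.1f} times per memory-bus tick")
# -> roughly 7.5 CPU cycles for every memory-bus cycle, so an uncached
#    memory access leaves the processor idle for several of its own cycles.
```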

The greater the gap between the processor's speed and the memory's speed, the more often the processor has to stand by waiting for information from the memory. This is why the system cache is so important: because it is much faster than main memory, the processor can do more useful work and spend less time just waiting.
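
The effect of the cache can be sketched with the standard average-access-time formula; the access times and hit rate below are assumed, plausible values rather than measurements:

```python
# Why cache matters: the standard average-access-time calculation,
# using made-up but plausible numbers.
cache_time_ns = 2      # assumed cache access time
memory_time_ns = 60    # assumed main-memory access time
hit_rate = 0.95        # assumed fraction of accesses served by the cache

average_ns = hit_rate * cache_time_ns + (1 - hit_rate) * memory_time_ns
print(f"Average access time: {average_ns:.1f} ns")   # 4.9 ns
# Without any cache every access would cost the full 60 ns, so even a modest
# hit rate hides most of the processor's waiting time.
```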


Bank Interleaving

Bank interleaving is an advanced technique used by high-end motherboard chipsets to improve memory performance. Interleaving increases bandwidth by allowing simultaneous access to more than one bank of memory. This improves performance because the processor can transfer more information to or from memory in the same amount of time, helping to alleviate the processor-memory bottleneck that has been a major limiting factor in overall system performance.

The process of interleaving works as a result of dividing the system memory into multiple blocks. The most common numbers are two or four, called two-way or four-way interleaving. Each block of memory is accessed using different sets of control lines, which are merged together on the memory bus. When a read or write cycle has begun to one block, a read or write to other blocks can be overlapped with the first one.

The more blocks there are, the more overlapping can be done. To get the best performance from this type of memory system, consecutive memory addresses are spread across the different blocks. In other words, if you have four blocks of interleaved memory, the system doesn't fill the first block, then the second, and so on; it uses all four blocks, spreading the data around so that the interleaving can be taken full advantage of.
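
A small sketch of this address spreading, using an assumed four-way layout and an assumed 8-byte access size, shows how consecutive accesses cycle through the banks:

```python
# Sketch of low-order (consecutive-address) interleaving: addresses are spread
# across the banks so that back-to-back accesses land in different banks and
# can overlap. Bank count and access size are illustrative assumptions.
NUM_BANKS = 4        # four-way interleaving
ACCESS_SIZE = 8      # bytes transferred per access (assumed)

def bank_for_address(address):
    """Which bank a given byte address falls in under simple interleaving."""
    return (address // ACCESS_SIZE) % NUM_BANKS

for addr in range(0, 64, ACCESS_SIZE):
    print(f"address {addr:3d} -> bank {bank_for_address(addr)}")
# Consecutive 8-byte accesses cycle through banks 0, 1, 2, 3, 0, 1, ...
# so a read to bank 1 can start while bank 0 is still finishing its cycle.
```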

Can you use Interleaving with only one stick of memory?

In most cases the answer is yes. With this kind of interleaving, the chipset can remember the location of up to four recently used "pages" of memory on the module and return to them instantly. Depending on the chipset, it may also remember the last four pages per module, for a total of sixteen pages across four modules.

The amount of interleaving depends on the size and type of the memory chips on the computer's RAM modules. SDRAM is required for this technique. If the chips on the modules store 16 megabits each, the chipset can achieve two-way interleaving; if the chips are 64 megabits, four-way interleaving is possible.
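
The "remembered pages" behaviour can be modelled as a tiny least-recently-used list; the page size and the four-page capacity below are assumptions for illustration, not a description of any specific chipset:

```python
# Toy model of the page-remembering behaviour described above: the chipset
# keeps the last few open pages and can answer a hit without reopening the
# page. Page size and the number of tracked pages are assumptions.
from collections import OrderedDict

PAGE_SIZE = 4096        # assumed DRAM page (row) size in bytes
PAGES_REMEMBERED = 4    # pages tracked per module, per the text above

open_pages = OrderedDict()   # page number -> True, oldest first

def access(address):
    """Return 'page hit' or 'page miss' and update the open-page list."""
    page = address // PAGE_SIZE
    if page in open_pages:
        open_pages.move_to_end(page)
        return "page hit"
    open_pages[page] = True
    if len(open_pages) > PAGES_REMEMBERED:
        open_pages.popitem(last=False)   # forget the least recently used page
    return "page miss"

for addr in (0, 100, 5000, 9000, 200, 20000, 40000, 60000, 300):
    print(f"access {addr:5d}: {access(addr)}")
```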


System Timing and Memory Speed

It is important to understand the relationship between the two main factors that control the actual speed your system memory runs at.

Memory Timing Settings:
These are the timings the system is told to use, usually via settings in the BIOS setup program; they determine the memory's actual speed. These settings control how quickly the system will try to read from or write to the memory.
DRAM Speed:
This is the minimum access time the DRAM can physically run at, rated in nanoseconds (ns). The speed of the DRAM sets the limit on how fast your memory timings can be set. The latest SDRAM modules are sometimes rated in MHz (frequency) instead of access time.
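
The two ratings are related by a simple conversion (a sketch; the listed ratings are common examples): the maximum clock in MHz is roughly 1000 divided by the access time in nanoseconds.

```python
# Sketch of the relationship between a DRAM access-time rating and the highest
# clock it can keep up with: maximum frequency (MHz) ~= 1000 / access time (ns).
def max_mhz(access_time_ns):
    """Highest memory clock the chips are physically rated for."""
    return 1000 / access_time_ns

for rating in (10, 8, 7.5):
    print(f"{rating} ns DRAM -> about {max_mhz(rating):.0f} MHz maximum")
# 10 ns -> ~100 MHz,  8 ns -> ~125 MHz,  7.5 ns -> ~133 MHz
```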

The connection between these two factors is as follows: the faster the physical DRAM is, the faster the system's memory timing can be set. When you speed up the system timing (by reducing the number of clock cycles required to access the memory, using the appropriate BIOS settings), the system runs faster.

The catch is that if you set the timings too fast for the DRAM you are using, system errors will result. The speed of the DRAM does not directly control the speed of the memory system; it only sets the maximum limit.

What this means to the user is that if you replace your system's 10 ns SDRAM with 8 ns SDRAM, the system will not run any faster unless you also increase the system timing so that it accesses the faster memory more quickly. Conversely, replacing faster memory with slower memory won't make the system run any slower unless the system timing is decreased; however, if the new, slower memory is too slow for the timing settings, memory errors (crashes, lockups, blue screens) will result.
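
That 10 ns versus 8 ns scenario can be written as a one-line check, using the same rough conversion as above (a sketch, not a BIOS rule):

```python
# The 10 ns / 8 ns scenario above, as a check: a timing (bus speed) setting is
# only safe if it does not exceed what the installed DRAM is rated for.
def timing_is_safe(bus_mhz, dram_access_ns):
    """True if the chosen memory clock stays within the DRAM's rated limit."""
    return bus_mhz <= 1000 / dram_access_ns

print(timing_is_safe(100, 10))   # True  - 10 ns DRAM at 100 MHz: fine
print(timing_is_safe(100, 8))    # True  - swapping in 8 ns DRAM alone changes nothing
print(timing_is_safe(125, 8))    # True  - raising the timing uses the 8 ns headroom
print(timing_is_safe(125, 10))   # False - 10 ns DRAM at 125 MHz risks errors
```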

Some systems automatically set the memory timing based on the speed of the memory reported by its SPD chip, which the motherboard detects. This is the source of some confusion: on this type of system, installing faster DRAM really does make the system run faster automatically. One principle still holds true, though: it is the system timing that makes the memory run faster. The timing is simply being increased automatically, so the change is transparent to the user.


Source: http://www.motherboards.org/articles/guides/1179_1.html
