'''Non-Uniform Memory Access''' or '''Non-Uniform Memory Architecture''' ('''NUMA''') is a computer memory design used in multiprocessors, where the memory access time depends on the memory location relative to a processor. Under NUMA, a processor can access its own local memory faster than non-local memory, that is, memory local to another processor or memory shared between processors.
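
This distinction is visible to software. The sketch below (assuming the Linux ''libnuma'' interface, linked with <code>-lnuma</code>, on a machine with at least two NUMA nodes; node numbers and sizes are illustrative only) requests one buffer on the calling thread's own node and one on another node. Accesses to the first buffer are the fast local case; accesses to the second are the slower remote case.

<syntaxhighlight lang="c">
/* Sketch: local vs. remote allocation with the Linux libnuma API.
 * Assumes at least two NUMA nodes; node 1 is chosen purely for
 * illustration -- a real program would query the topology first. */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    if (numa_available() == -1) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return EXIT_FAILURE;
    }

    size_t size = 64UL * 1024 * 1024;          /* 64 MiB, illustrative */

    /* Memory on the node the calling thread is running on (local case). */
    void *local = numa_alloc_local(size);

    /* Memory deliberately placed on another node (remote case). */
    void *remote = numa_alloc_onnode(size, 1);

    if (local == NULL || remote == NULL) {
        fprintf(stderr, "NUMA allocation failed\n");
        return EXIT_FAILURE;
    }

    /* The same access pattern against `local` will typically show
     * lower latency than against `remote` on a NUMA machine. */

    numa_free(local, size);
    numa_free(remote, size);
    return EXIT_SUCCESS;
}
</syntaxhighlight>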
 
NUMA architectures logically follow in scaling from symmetric multiprocessing (SMP) architectures. Their commercial development came in work by Convex Computer (later HP), [[Silicon Graphics|SGI]], Sequent Computer Systems and Data General during the 1990s. Techniques developed by these companies later featured in a variety of Unix-like operating systems, as well as to some degree in Windows NT and in later versions of Microsoft Windows.
    
==Basic concept==
 
Typically, maintaining cache coherence across shared memory takes place by using inter-processor communication between cache controllers to keep a consistent memory image when more than one cache stores the same memory location. For this reason, ccNUMA performs poorly when multiple processors attempt to access the same memory area in rapid succession. Operating-system support for NUMA attempts to reduce the frequency of this kind of access by allocating processors and memory in NUMA-friendly ways and by avoiding scheduling and locking algorithms that make NUMA-unfriendly accesses necessary.
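
As a rough illustration of such NUMA-friendly placement (again assuming the Linux ''libnuma'' interface; the node number and buffer size are illustrative only), the sketch below restricts a thread to the CPUs of one node and allocates its working set on that same node, so that nearly all of its memory accesses stay local.

<syntaxhighlight lang="c">
/* Sketch: NUMA-friendly placement with the Linux libnuma API
 * (link with -lnuma). The node number and working-set size below
 * are hypothetical values for illustration. */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    if (numa_available() == -1) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return EXIT_FAILURE;
    }

    int node = 0;                        /* hypothetical target node */
    size_t size = 16UL * 1024 * 1024;    /* 16 MiB working set */

    /* Restrict the calling thread to the CPUs of the chosen node ... */
    if (numa_run_on_node(node) != 0) {
        perror("numa_run_on_node");
        return EXIT_FAILURE;
    }

    /* ... and place its working set on that same node, so almost all
     * of its accesses are local rather than remote. */
    unsigned char *buf = numa_alloc_onnode(size, node);
    if (buf == NULL) {
        fprintf(stderr, "allocation on node %d failed\n", node);
        return EXIT_FAILURE;
    }

    memset(buf, 0, size);                /* touch the pages locally */

    numa_free(buf, size);
    return EXIT_SUCCESS;
}
</syntaxhighlight>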
 
Current implementations of ccNUMA systems are multiprocessor systems based on the AMD Opteron processor. Earlier ccNUMA implementations included systems based on the Alpha EV7 processor from [[Digital Equipment Corporation]] (DEC).
    
==NUMA vs. cluster computing==
 