NUMA (Non-Uniform Memory Access): An Overview:
NUMA is becoming more common because memory controllers are moving closer to execution units on microprocessors.
NUMA (non-uniform memory access) is the phenomenon that memory at various points in the address space of a processor has different performance characteristics. At current processor speeds, the signal path length from the processor to memory plays a significant role. Increased signal path length not only increases latency to memory but also quickly becomes a throughput bottleneck if the signal path is shared by multiple processors. These performance differences were first noticeable on large-scale systems where data paths spanned motherboards or chassis. Such systems required modified operating-system kernels with NUMA support that explicitly understood the topological properties of the system's memory (such as the chassis in which a region of memory was located) in order to avoid excessively long signal paths. (Altix and UV, SGI's large-address-space systems, are examples. The designers of these products had to modify the Linux kernel to support NUMA; in these machines, processors in multiple chassis are linked via a proprietary interconnect called NUMALINK.)
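On such systems, keeping data close to the processors that use it is what keeps signal paths short. As a minimal, illustrative sketch (not code from the article), the following C program uses the Linux libnuma library to check for NUMA support and bind an allocation to a particular node; the choice of node 0 and the 1 MB size are arbitrary assumptions.

#include <stdio.h>
#include <stdlib.h>
#include <numa.h>   /* Linux libnuma; link with -lnuma */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not supported on this system\n");
        return EXIT_FAILURE;
    }

    printf("Highest NUMA node: %d\n", numa_max_node());

    /* Allocate 1 MB bound to node 0, so accesses from CPUs on node 0
       stay local instead of crossing the interconnect. */
    size_t size = 1 << 20;
    void *buf = numa_alloc_onnode(size, 0);
    if (buf == NULL) {
        fprintf(stderr, "numa_alloc_onnode failed\n");
        return EXIT_FAILURE;
    }

    /* ... use buf from threads running on node 0 ... */

    numa_free(buf, size);
    return EXIT_SUCCESS;
}

Threads that touch the buffer would typically also be restricted to CPUs on the same node (for example, with numa_run_on_node) so that their accesses stay local.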
More Encryption Is Not the Solution:
Cryptography as privacy works only if both ends work at it in good faith.
The recent exposure of the dragnet-style surveillance of Internet traffic has provoked a number of responses that are variations of the general formula, "More encryption is the solution." This is not the case. In fact, more encryption will probably only make the privacy crisis worse than it already is.
20 Obstacles to Scalability:
Watch out for these pitfalls that can prevent Web application scaling.
Web applications can grow in fits and starts. Customer numbers can increase rapidly, and application usage patterns can vary seasonally. This unpredictability necessitates an application that is scalable. What is the best way of achieving scalability?
The Balancing Act of Choosing Nonblocking Features:
Design requirements of nonblocking systems.
What is nonblocking progress? Consider the simple example of incrementing a counter C shared among multiple threads. One way to do so is to protect the increment of C with a mutual exclusion lock L (i.e., acquire(L); old := C; C := old+1; release(L);). If a thread P is holding L, then a different thread Q must wait for P to release L before Q can proceed to operate on C. That is, Q is blocked by P.
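To make the contrast concrete, here is a small C sketch (illustrative, not taken from the article) of the lock-based increment above using a pthread mutex, alongside a nonblocking increment built from a C11 compare-and-swap loop; the function names are hypothetical.

#include <pthread.h>
#include <stdatomic.h>

/* Blocking increment: a thread holding the lock can delay all others. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void increment_blocking(void)
{
    pthread_mutex_lock(&lock);   /* acquire(L)           */
    counter = counter + 1;       /* old := C; C := old+1 */
    pthread_mutex_unlock(&lock); /* release(L)           */
}

/* Nonblocking increment: a compare-and-swap loop. If another thread
   changes the counter between the load and the CAS, the CAS fails,
   'old' is reloaded with the current value, and the loop retries;
   no thread ever waits for another to release a lock. */
static atomic_long counter_nb = 0;

void increment_nonblocking(void)
{
    long old = atomic_load(&counter_nb);
    while (!atomic_compare_exchange_weak(&counter_nb, &old, old + 1)) {
        /* retry with the freshly observed value in 'old' */
    }
}

In the nonblocking version a thread may have to retry if another thread updates the counter concurrently, but a stalled or preempted thread can never prevent the others from making progress.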