<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>ACM Queue - Concurrency</title>
    <link>http://queue.acm.org/listing.cfm?item_topic=Concurrency&amp;qc_type=topics_list&amp;filter=Concurrency&amp;page_title=Concurrency&amp;order=desc</link>
    <description>Articles on concurrency from ACM Queue</description>
    <item>
      <title>Scaling Synchronization in Multicore Programs</title>
      <link>http://queue.acm.org/detail.cfm?id=2991130</link>
      <description>Designing software for modern multicore processors poses a dilemma. Traditional software designs, in which threads manipulate shared data, have limited scalability because synchronization of updates to shared data serializes threads and limits parallelism. Alternative distributed software designs, in which threads do not share mutable data, eliminate synchronization and offer better scalability. But distributed designs make it challenging to implement features that shared data structures naturally provide, such as dynamic load balancing and strong consistency guarantees, and are simply not a good fit for every program. Often, however, the performance of shared mutable data structures is limited by the synchronization methods in use today, whether lock-based or lock-free. To help readers make informed design decisions, this article describes advanced (and practical) synchronization methods that can push the performance of designs using shared mutable data to levels that are acceptable to many applications.</description>
      <category>Concurrency</category>
      <pubDate>Tue, 23 Aug 2016 14:15:04 GMT</pubDate>
      <author>Adam Morrison</author>
      <guid isPermaLink="false">2991130</guid>
    </item>
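    <!-- A minimal illustrative sketch (not from the article) of the two
         update paths the abstract above contrasts: a lock-based shared
         counter, which serializes threads, and a lock-free one using an
         atomic fetch-and-add. Names such as increment_locked are assumed
         for illustration. C++:

           #include <atomic>
           #include <mutex>

           std::mutex m;
           long counter_locked = 0;

           // Lock-based update: correct, but every increment serializes
           // on the mutex, limiting scalability as threads are added.
           void increment_locked() {
               std::lock_guard<std::mutex> guard(m);
               counter_locked += 1;
           }

           // Lock-free update: a single atomic read-modify-write, so
           // threads never block one another on this counter.
           std::atomic<long> counter_atomic{0};
           void increment_atomic() {
               counter_atomic.fetch_add(1, std::memory_order_relaxed);
           }
    -->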
    <item>
      <title>Challenges of Memory Management on Modern NUMA Systems</title>
      <link>http://queue.acm.org/detail.cfm?id=2852078</link>
      <description>Modern server-class systems are typically built as several multicore chips put together in a single system. Each chip has a local DRAM (dynamic random-access memory) module; together they are referred to as a node. Nodes are connected via a high-speed interconnect, and the system is fully coherent. This means that, transparently to the programmer, a core can issue requests to its node's local memory as well as to the memories of other nodes. The key distinction is that remote requests will take longer, because they are subject to longer wire delays and may have to jump several hops as they traverse the interconnect. The latency of memory-access times is hence non-uniform, because it depends on where the request originates and where it is destined to go. Such systems are referred to as NUMA (non-uniform memory access).</description>
      <category>Concurrency</category>
      <pubDate>Tue, 01 Dec 2015 13:05:48 GMT</pubDate>
      <author>Fabien Gaud, Baptiste Lepers, Justin Funston, Mohammad Dashti, Alexandra Fedorova, Vivien Quéma, Renaud Lachaize, Mark Roth</author>
      <guid isPermaLink="false">2852078</guid>
    </item>
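    <!-- A small sketch, under the assumption of a Linux machine with
         libnuma installed (link with -lnuma), showing the placement the
         abstract above describes: memory allocated on node 0 is reached
         at local-DRAM latency from node 0's cores, while cores on other
         nodes pay extra interconnect hops. C++:

           #include <numa.h>
           #include <cstddef>
           #include <cstdio>

           int main() {
               if (numa_available() < 0) {
                   std::fprintf(stderr, "no NUMA policy support\n");
                   return 1;
               }
               const std::size_t size = 1 << 20;   // 1 MiB
               // Place the buffer on node 0 explicitly.
               void* buf = numa_alloc_onnode(size, 0);
               if (buf == nullptr) return 1;
               /* ... touch buf from threads pinned to different nodes
                  to observe the non-uniform access latency ... */
               numa_free(buf, size);
               return 0;
           }
    -->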
    <item>
      <title>Parallel Processing with Promises</title>
      <link>http://queue.acm.org/detail.cfm?id=2742696</link>
      <description>In today's world, there are many reasons to write concurrent software. The desire to improve performance and increase throughput has led to many different asynchronous techniques. The techniques involved, however, are generally complex and the source of many subtle bugs, especially if they require shared mutable state. If shared state is not required, then these problems can be solved with a better abstraction called promises. These allow programmers to hook asynchronous function calls together, waiting for each to return success or failure before running the next appropriate function in the chain.</description>
      <category>Concurrency</category>
      <pubDate>Tue, 03 Mar 2015 16:17:56 GMT</pubDate>
      <author>Spencer Rathbun</author>
      <guid isPermaLink="false">2742696</guid>
    </item>
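    <!-- A minimal sketch of the chaining idea in the abstract above. The
         C++ standard library has no built-in continuation on std::future,
         so the "then" helper below is an assumed stand-in (it burns a
         thread blocking on get(); production promise libraries do not):

           #include <future>
           #include <iostream>
           #include <utility>

           // Run func on fut's result once it is ready; failures
           // propagate, since an exception stored in fut rethrows
           // from fut.get().
           template <typename T, typename F>
           auto then(std::future<T> fut, F func) {
               return std::async(std::launch::async,
                   [fut = std::move(fut), func]() mutable {
                       return func(fut.get());
                   });
           }

           int main() {
               auto start   = std::async([] { return 21; });
               auto doubled = then(std::move(start),
                                   [](int n) { return n * 2; });
               auto shown   = then(std::move(doubled), [](int n) {
                   std::cout << n << "\n";  // runs only after success
                   return n;
               });
               shown.get();
               return 0;
           }
    -->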
    <item>
      <title>Scalability Techniques for Practical Synchronization Primitives</title>
      <link>http://queue.acm.org/detail.cfm?id=2698990</link>
      <description>In an ideal world, applications would scale automatically when executed on increasingly larger systems. In practice, however, not only does this scaling fail to occur, but it is common to see performance actually worsen on those larger systems.</description>
      <category>Concurrency</category>
      <pubDate>Sun, 14 Dec 2014 22:54:57 GMT</pubDate>
      <author>Davidlohr Bueso</author>
      <guid isPermaLink="false">2698990</guid>
    </item>
    <item>
      <title>Productivity in Parallel Programming: A Decade of Progress</title>
      <link>http://queue.acm.org/detail.cfm?id=2682913</link>
      <description>In 2002 DARPA (Defense Advanced Research Projects Agency) launched a major initiative in HPCS (high-productivity computing systems). The program was motivated by the belief that the utilization of the coming generation of parallel machines was gated by the difficulty of writing, debugging, tuning, and maintaining software at petascale.</description>
      <category>Concurrency</category>
      <pubDate>Mon, 20 Oct 2014 16:34:27 GMT</pubDate>
      <author>John T. Richards, Jonathan Brezin, Calvin B. Swart, Christine A. Halverson</author>
      <guid isPermaLink="false">2682913</guid>
    </item>
    <item>
      <title>Scaling Existing Lock-based Applications with Lock Elision</title>
      <link>http://queue.acm.org/detail.cfm?id=2579227</link>
      <description>Multithreaded applications take advantage of increasing core counts to achieve high performance. Such programs, however, typically require programmers to reason about data shared among multiple threads. Programmers use synchronization mechanisms such as mutual-exclusion locks to ensure correct updates to shared data in the presence of accesses from multiple threads. Unfortunately, these mechanisms serialize thread accesses to the data and limit scalability.</description>
      <category>Concurrency</category>
      <pubDate>Sat, 08 Feb 2014 10:57:30 GMT</pubDate>
      <author>Andi Kleen</author>
      <guid isPermaLink="false">2579227</guid>
    </item>
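    <!-- A sketch of the elision pattern, assuming Intel RTM intrinsics
         (compile with -mrtm on TSX-capable hardware); the lock word and
         function names are illustrative. The lock is only read inside
         the transaction, so non-conflicting critical sections run in
         parallel, and a real lock holder aborts the transaction:

           #include <immintrin.h>
           #include <atomic>

           std::atomic<int> lock_word{0};  // toy spinlock: 0 free, 1 held

           void elided_lock() {
               if (_xbegin() == _XBEGIN_STARTED) {
                   if (lock_word.load(std::memory_order_relaxed) == 0)
                       return;            // elided: inside a transaction
                   _xabort(0xff);         // lock actually held, give up
               }
               // Fallback: the transaction aborted, take the lock for real.
               int expected = 0;
               while (!lock_word.compare_exchange_weak(expected, 1))
                   expected = 0;
           }

           void elided_unlock() {
               if (lock_word.load(std::memory_order_relaxed) == 0)
                   _xend();               // commit the elided section
               else
                   lock_word.store(0, std::memory_order_release);
           }
    -->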
    <item>
      <title>The Balancing Act of Choosing Nonblocking Features</title>
      <link>http://queue.acm.org/detail.cfm?id=2513575</link>
      <description>What is nonblocking progress? Consider the simple example of incrementing a counter C shared among multiple threads. One way to do so is to protect the steps of incrementing C with a mutual-exclusion lock L (i.e., acquire(L); old := C; C := old+1; release(L);). If a thread P is holding L, then a different thread Q must wait for P to release L before Q can proceed to operate on C. That is, Q is blocked by P.</description>
      <category>Concurrency</category>
      <pubDate>Mon, 12 Aug 2013 18:06:14 GMT</pubDate>
      <author>Maged M. Michael</author>
      <guid isPermaLink="false">2513575</guid>
    </item>
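    <!-- The nonblocking counterpart of the lock-based increment in the
         abstract above, as a brief C++ sketch: read C, try to install
         old+1 with compare-and-swap, and retry on interference. No thread
         ever holds a lock, so a stalled thread cannot block the others:

           #include <atomic>

           std::atomic<long> C{0};

           void increment_nonblocking() {
               long old = C.load();
               // On failure, compare_exchange_weak reloads old with the
               // current value of C, so the loop simply retries.
               while (!C.compare_exchange_weak(old, old + 1)) {
               }
           }
    -->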
    <item>
      <title>Nonblocking Algorithms and Scalable Multicore Programming</title>
      <link>http://queue.acm.org/detail.cfm?id=2492433</link>
      <description>Real-world systems with complicated quality-of-service guarantees may require a delicate balance between throughput and latency to meet operating requirements in a cost-efficient manner. The increasing availability and decreasing cost of commodity multicore and many-core systems make concurrency and parallelism increasingly necessary for meeting demanding performance requirements. Unfortunately, the design and implementation of correct, efficient, and scalable concurrent software is often a daunting task.</description>
      <category>Concurrency</category>
      <pubDate>Tue, 11 Jun 2013 23:53:23 GMT</pubDate>
      <author>Samy Al Bahra</author>
      <guid isPermaLink="false">2492433</guid>
    </item>
    <item>
      <title>Proving the Correctness of Nonblocking Data Structures</title>
      <link>http://queue.acm.org/detail.cfm?id=2490873</link>
      <description>Nonblocking synchronization can yield astonishing results in terms of scalability and realtime response, but at the expense of a much larger verification state space.</description>
      <category>Concurrency</category>
      <pubDate>Sun, 02 Jun 2013 09:33:34 GMT</pubDate>
      <author>Mathieu Desnoyers</author>
      <guid isPermaLink="false">2490873</guid>
    </item>
    <item>
      <title>Structured Deferral: Synchronization via Procrastination</title>
      <link>http://queue.acm.org/detail.cfm?id=2488549</link>
      <description>Developers often take a proactive approach to software design, especially those from cultures valuing industriousness over procrastination. Lazy approaches, however, have proven their value, with examples including reference counting, garbage collection, and lazy evaluation. This structured deferral takes the form of synchronization via procrastination, specifically reference counting, hazard pointers, and RCU (read-copy-update).</description>
      <category>Concurrency</category>
      <pubDate>Thu, 23 May 2013 13:27:44 GMT</pubDate>
      <author>Paul E. McKenney</author>
      <guid isPermaLink="false">2488549</guid>
    </item>
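    <!-- A minimal sketch of the simplest deferral scheme the abstract
         above names, reference counting: the structure is freed only when
         the last reference is dropped, so reclamation is procrastinated
         until no thread can still hold the pointer. (Hazard pointers and
         RCU achieve the same deferral with much cheaper read paths.)
         The Node type is assumed for illustration:

           #include <atomic>

           struct Node {
               std::atomic<int> refs{1};
               int payload = 0;
           };

           Node* acquire(Node* n) {
               n->refs.fetch_add(1, std::memory_order_relaxed);
               return n;
           }

           void release(Node* n) {
               // fetch_sub returns the prior count; 1 means this caller
               // held the last reference and must do the deferred free.
               if (n->refs.fetch_sub(1, std::memory_order_acq_rel) == 1)
                   delete n;          // reclamation deferred until now
           }
    -->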
  </channel>
</rss>