Concurrency

Scaling Synchronization in Multicore Programs

Advanced synchronization methods can boost the performance of multicore software.

by Adam Morrison | August 23, 2016

This article appears in print in Communications of the ACM, Volume 59 Issue 11.

Challenges of Memory Management on Modern NUMA Systems

Optimizing applications on NUMA systems with Carrefour

by Fabien Gaud, Baptiste Lepers, Justin Funston, Mohammad Dashti, Alexandra Fedorova, Vivien Quéma, Renaud Lachaize, Mark Roth | December 1, 2015

Parallel Processing with Promises

A simple method of writing a collaborative system

by Spencer Rathbun | March 3, 2015

Scalability Techniques for Practical Synchronization Primitives

Designing locking primitives with performance in mind

by Davidlohr Bueso | December 14, 2014

This article appears in print in Communications of the ACM, Volume 58 Issue 1.

Productivity in Parallel Programming: A Decade of Progress

Looking at the design and benefits of X10

by John T. Richards, Jonathan Brezin, Calvin B. Swart, Christine A. Halverson | October 20, 2014

Scaling Existing Lock-based Applications with Lock Elision

Lock elision enables existing lock-based programs to achieve the performance benefits of nonblocking synchronization and fine-grained locking with minor software engineering effort; a brief sketch of the elision pattern appears below.

by Andi Kleen | February 8, 2014

This article appears in print in Communications of the ACM, Volume 57 Issue 3.
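
The general shape of lock elision on hardware with Intel TSX is to attempt the critical section inside a hardware transaction and fall back to the real lock only if the transaction aborts. The sketch below is a minimal illustration of that pattern using the RTM intrinsics from immintrin.h; it is not necessarily the exact mechanism the article describes, and the names (fallback_lock, lock_taken, counter) are placeholders.

    /* Minimal sketch of RTM-based lock elision; compile with -mrtm -pthread
     * and run only on TSX-capable hardware. Names are illustrative. */
    #include <immintrin.h>
    #include <pthread.h>

    static pthread_mutex_t fallback_lock = PTHREAD_MUTEX_INITIALIZER;
    static volatile int lock_taken;   /* mirrors the fallback lock's state */
    static long counter;              /* shared data the lock protects */

    static void increment_elided(void)
    {
        if (_xbegin() == _XBEGIN_STARTED) {
            /* Reading the lock word puts it in the transaction's read set,
             * so a thread that takes the real lock will abort us. */
            if (lock_taken)
                _xabort(0xff);
            counter++;                /* critical section runs transactionally */
            _xend();
            return;
        }
        /* Transaction aborted (or RTM declined to start): take the lock. */
        pthread_mutex_lock(&fallback_lock);
        lock_taken = 1;
        counter++;
        lock_taken = 0;
        pthread_mutex_unlock(&fallback_lock);
    }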

The Balancing Act of Choosing Nonblocking Features

Design requirements of nonblocking systems

by Maged M. Michael | August 12, 2013

This article appears in print in Communications of the ACM, Volume 56 Issue 9.

Nonblocking Algorithms and Scalable Multicore Programming

Exploring some alternatives to lock-based synchronization

by Samy Al Bahra | June 11, 2013

This article appears in print in Communications of the ACM, Volume 56 Issue 7.

Proving the Correctness of Nonblocking Data Structures

So you've decided to use a nonblocking data structure, and now you need to be certain of its correctness. How can this be achieved? When a multithreaded program is too slow because of a frequently acquired mutex, the programmer's typical reaction is to question whether this mutual exclusion is indeed required. This doubt becomes even more pronounced if the mutex protects accesses to only a single variable performed using a single instruction at every site. Removing synchronization improves performance, but can it be done without impairing program correctness? A brief sketch of this single-variable case appears below.

by Mathieu Desnoyers | June 2, 2013

This article appears in print in Communications of the ACM, Volume 56 Issue 7.
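
The single-variable scenario described above can be made concrete with C11 atomics. The sketch below shows a mutex-protected counter and its lock-free replacement; the names (hit_count, count_lock) are illustrative, not from the article. The correctness question is then exactly the one the article addresses: whether the weaker ordering of the atomic version still gives other threads the guarantees they rely on.

    /* Minimal sketch: a mutex guarding one word, and the lock-free
     * replacement with C11 atomics. Names are illustrative. */
    #include <pthread.h>
    #include <stdatomic.h>

    /* Lock-based version: every access site takes the mutex. */
    static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;
    static long hit_count;

    static void record_hit_locked(void)
    {
        pthread_mutex_lock(&count_lock);
        hit_count++;
        pthread_mutex_unlock(&count_lock);
    }

    /* Nonblocking version: the variable itself becomes atomic. Whether
     * relaxed ordering suffices depends on what other data, if any,
     * readers expect to be published before the count changes. */
    static atomic_long hit_count_nb;

    static void record_hit_nonblocking(void)
    {
        atomic_fetch_add_explicit(&hit_count_nb, 1, memory_order_relaxed);
    }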

Structured Deferral: Synchronization via Procrastination

We simply do not have a synchronization mechanism that can enforce mutual exclusion.

by Paul E. McKenney | May 23, 2013

Software Transactional Memory: Why Is It Only a Research Toy?

The promise of STM may well be undermined by its overheads and limited workload applicability.

by Calin Cascaval, Colin Blundell, Maged Michael, Harold W. Cain, Peng Wu, Stefanie Chiras, Siddhartha Chatterjee | October 24, 2008

Parallel Programming with Transactional Memory

While even writing regular, single-threaded programs can sometimes be quite challenging, trying to split a program into multiple pieces that can be executed in parallel adds a whole dimension of additional problems. Drawing upon the transaction concept familiar to most programmers, transactional memory was designed to solve some of these problems and make parallel programming easier. Ulrich Drepper from Red Hat shows us how it's done; a brief sketch of a transactional block appears below.

by Ulrich Drepper | October 24, 2008

This article appears in print in Communications of the ACM, Volume 52 Issue 2.
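
As a minimal illustration of the idea, the sketch below uses GCC's later -fgnu-tm extension, in which a __transaction_atomic block either commits all of its memory updates or none of them. This is one concrete realization of transactional memory, not necessarily the interface the article presents, and the accounts/transfer names are made up for the example.

    /* Minimal sketch of a transactional critical section using GCC's
     * transactional-memory extension; compile with -fgnu-tm.
     * The accounts/transfer names are invented for the example. */
    static long accounts[2];

    static void transfer(int from, int to, long amount)
    {
        /* Either both updates commit atomically, or the runtime rolls
         * them back and re-executes the block after a conflict. */
        __transaction_atomic {
            accounts[from] -= amount;
            accounts[to]   += amount;
        }
    }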

Erlang for Concurrent Programming

What role can programming languages play in dealing with concurrency? One answer can be found in Erlang, a language designed for concurrency from the ground up.

by Jim Larson | October 24, 2008

This article appears in print in Communications of the ACM, Volume 52 Issue 3.

Real-World Concurrency

In this look at how concurrency affects practitioners in the real world, Cantrill and Bonwick argue that much of the anxiety over concurrency is unwarranted.

by Bryan Cantrill, Jeff Bonwick | October 24, 2008

This article appears in print in Communications of the ACM, Volume 51 Issue 11.

Unlocking Concurrency

Multicore architectures are an inflection point in mainstream software development because they force developers to write parallel programs. In a previous article in Queue, Herb Sutter and James Larus pointed out, "The concurrency revolution is primarily a software revolution."

by Ali-Reza Adl-Tabatabai, Christos Kozyrakis, Bratin Saha | December 28, 2006

Threads without the Pain

Multithreaded programming need not be so angst-ridden.

by Andreas Gustafsson | December 16, 2005

Software and the Concurrency Revolution

Leveraging the full power of multicore processors demands new tools and new thinking from the software industry. Concurrency has long been touted as the "next big thing" and "the way of the future," but for the past 30 years, mainstream software development has been able to ignore it. Our parallel future has finally arrived: new machines will be parallel machines, and this will require major changes in the way we develop software. The introductory article in this issue ("The Future of Microprocessors" by Kunle Olukotun and Lance Hammond) describes the hardware imperatives behind this shift in computer architecture from uniprocessors to multicore processors, also known as CMPs (chip multiprocessors).

by Herb Sutter, James Larus | October 18, 2005

Trials and Tribulations of Debugging Concurrency

We now sit firmly in the 21st century, where the grand challenge to the modern-day programmer is neither memory leaks nor type issues (both of those problems are now effectively solved), but rather issues of concurrency. How does one write increasingly complex programs where concurrency is a first-class concern? Or, even more treacherous, how does one debug such a beast? These questions bring fear into the hearts of even the best programmers.

by Kang Su Gatlin | November 30, 2004
