Vol. 21 No. 6 – November/December 2023

Multiparty Computation: To Secure Privacy, Do the Math:
A discussion with Nigel Smart, Joshua W. Baron, Sanjay Saravanan, Jordan Brandt, and Atefeh Mashatan

Multiparty computation (MPC) is based on complex math, and over the past decade it has been harnessed as one of the most powerful tools available for protecting sensitive data. MPC now serves as the basis for protocols that let a set of parties interact and compute on a pool of private inputs without revealing any of the data contained in those inputs; in the end, only the results are revealed. The implications can prove profound.
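To make the concept concrete, here is a minimal sketch (not drawn from the discussion itself) of one classic MPC building block, additive secret sharing: each party splits its private input into random shares, the parties locally add the shares they hold, and only the final total is ever reconstructed. The modulus and example values are illustrative assumptions, and a real protocol would also need secure channels and defenses against misbehaving parties.

    # Minimal sketch of additive secret sharing for a secure sum (Python).
    # Illustrative only: real MPC also needs secure channels, malicious-
    # security checks, and techniques such as Beaver triples for products.
    import secrets

    P = 2**61 - 1  # illustrative prime modulus

    def share(x, n):
        """Split secret x into n additive shares that sum to x mod P."""
        shares = [secrets.randbelow(P) for _ in range(n - 1)]
        shares.append((x - sum(shares)) % P)
        return shares

    def secure_sum(inputs):
        """Each party shares its input; party i locally adds the i-th
        share of every input; only the reconstructed total is revealed."""
        n = len(inputs)
        all_shares = [share(x, n) for x in inputs]
        held = [sum(col) % P for col in zip(*all_shares)]  # per-party partial sums
        return sum(held) % P  # reveal only the result

    print(secure_sum([12, 30, 7]))  # -> 49, with no input ever exposed

No single share reveals anything about an input; only the sum of all shares does, and that sum is exactly the output the parties agreed to reveal.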

by Nigel Smart, Joshua W. Baron, Sanjay Saravanan, Jordan Brandt, Atefeh Mashatan

What do Trains, Horses, and Home Internet Installation have in Common?:
Avoid changes mid-process

At first, I thought he was just trying to shirk his responsibilities and pass the buck to someone else. His advice, however, made a lot of sense. The installation team had probably generated configurations ahead of time, planned out how and when those changes needed to be activated, and so on. The entire day was planned in advance. Bureaucracies usually have a happy path that works well, and any deviation requires who knows what? Managers getting involved? Error-prone manual steps? Ad hoc database queries? There's no way I could know. The point was clear, however: Don't change horses midstream, or the color of the train.

by Thomas A. Limoncelli

How to Design an ISA:
The popularity of RISC-V has led many to try designing instruction sets.

Over the past decade I've been involved in several projects that have designed either ISA (instruction set architecture) extensions or clean-slate ISAs for various kinds of processors (you'll even find my name in the acknowledgments for the RISC-V spec, right back to the first public version). When I started, I had very little idea about what makes a good ISA, and, as far as I can tell, this isn't formally taught anywhere. With the rise of RISC-V as an open base for custom instruction sets, however, the barrier to entry has become much lower and the number of people trying to design some or all of an instruction set has grown immeasurably.

by David Chisnall

Automatically Testing Database Systems:
DBMS testing with test oracles, transaction history, and fuzzing

The automated testing of DBMSs (database management systems) is an exciting, interdisciplinary effort that has seen many innovations in recent years. The examples addressed here represent different perspectives on the topic, reflecting research strands from software engineering, (database) systems, and security. They offer only a glimpse into these strands, as many additional interesting and effective works have been proposed. Various approaches generate pairs of related tests to find both logic bugs and performance issues in a DBMS; others test the isolation levels a DBMS claims to provide. Finally, various fuzzing approaches use different strategies to generate mostly valid and interesting test inputs that extract various kinds of feedback from the DBMS.
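One way to picture the test-oracle idea is query partitioning (in the spirit of TLP, ternary logic partitioning): a query's result must equal the union of the same query restricted to a predicate p, to NOT p, and to p IS NULL, so any mismatch exposes a logic bug. The sketch below, with an illustrative table and predicate, exercises SQLite through Python's sqlite3 module; a real tool such as SQLancer generates such predicates randomly and at scale.

    # Sketch of a query-partitioning test oracle against SQLite.
    # Table, rows, and predicate are illustrative assumptions.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (c INTEGER)")
    conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (None,)])

    def rows(sql):
        # Sort by repr so NULL (None) and integers compare cleanly.
        return sorted(conn.execute(sql).fetchall(), key=repr)

    pred = "c > 1"  # a fuzzer would generate this randomly
    whole = rows("SELECT c FROM t")
    parts = rows(f"SELECT c FROM t WHERE {pred} "
                 f"UNION ALL SELECT c FROM t WHERE NOT ({pred}) "
                 f"UNION ALL SELECT c FROM t WHERE ({pred}) IS NULL")
    assert whole == parts, "logic bug: partitioned result differs"

Because the three WHERE clauses cover the ternary outcomes true, false, and NULL exactly once per row, the two multisets must match on a correct DBMS regardless of the data or the predicate.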

by Peter Alvaro, Manuel Rigger

Is There Another System?:
Computer science is the study of what can be automated.

One of the easiest tests to determine whether you are at risk is to look hard at what you do every day and see if you could code yourself out of a job. Programming involves a lot of rote work: templating, boilerplate, and the like. If you can see a way to write a system to replace yourself, either do it, don't tell your bosses, and collect your salary while reading novels in your cubicle, or look for something more challenging to work on.

by George V. Neville-Neil

Resolving the Human-subjects Status of Machine Learning's Crowdworkers:
What ethical framework should govern the interaction of ML researchers and crowdworkers?

In recent years, machine learning (ML) has relied heavily on crowdworkers both for building datasets and for addressing research questions requiring human interaction or judgment. The diversity of both the tasks performed and the uses of the resulting data render it difficult to determine when crowdworkers are best thought of as workers versus human subjects. These difficulties are compounded by conflicting policies, with some institutions and researchers regarding all ML crowdworkers as human subjects and others holding that they rarely constitute human subjects. Notably, few ML papers involving crowdwork mention IRB (institutional review board) oversight, raising the prospect of non-compliance with ethical and regulatory requirements. We investigate the appropriate designation of ML crowdsourcing studies, focusing our inquiry on natural language processing to expose unique challenges for research oversight. Crucially, under the U.S. Common Rule, these judgments hinge on determinations of aboutness, concerning both whom (or what) the collected data is about and whom (or what) the analysis is about. We highlight two challenges posed by ML: the same set of workers can serve multiple roles and provide many sorts of information; and ML research tends to embrace a dynamic workflow, where research questions are seldom stated ex ante and data sharing opens the door for future studies to aim questions at different targets. Our analysis exposes a potential loophole in the Common Rule, where researchers can elude research ethics oversight by splitting data collection and analysis into distinct studies. Finally, we offer several policy recommendations to address these concerns.

by Divyansh Kaushik, Zachary C. Lipton, Alex John London

DevEx in Action:
A study of its tangible impacts

DevEx (developer experience) is garnering increased attention at many software organizations as leaders seek to optimize software delivery against the backdrop of fiscal tightening and transformational technologies such as AI. Intuitively, there is acceptance among technical leaders that good developer experience enables more effective software delivery and developer happiness. Yet, at many organizations, proposed initiatives and investments to improve DevEx struggle to get buy-in, as business stakeholders question the value proposition of improvements.

by Nicole Forsgren, Eirini Kalliamvakou, Abi Noda, Michaela Greiler, Brian Houck, Margaret-Anne Storey

Programmer Job Interviews: The Hidden Agenda

Top tech interviews test coding and CS knowledge overtly, but they also evaluate a deeper technical instinct so subtly that candidates seldom notice the appraisal. We'll learn how interviewers create questions to covertly measure a skill that sets the best programmers above the rest. Equipped with empathy for the interviewer, you can prepare to shine on the job market by seizing camouflaged opportunities.

by Terence Kelly