Volume 21, Issue 6

Drill Bits
Programmer Job Interviews: The Hidden Agenda

  Terence Kelly

Top tech interviews test coding and CS knowledge overtly, but they also evaluate a deeper technical instinct so subtly that candidates seldom notice the appraisal. We'll learn how interviewers create questions to covertly measure a skill that sets the best programmers above the rest. Equipped with empathy for the interviewer, you can prepare to shine on the job market by seizing camouflaged opportunities.

Code, Development, Drill Bits, Business/Management

DevEx in Action

  Nicole Forsgren, Eirini Kalliamvakou, Abi Noda,
  Michaela Greiler, Brian Houck, Margaret-Anne Storey

A study of its tangible impacts

DevEx (developer experience) is garnering increased attention at many software organizations as leaders seek to optimize software delivery against a backdrop of fiscal tightening and transformational technologies such as AI. Technical leaders intuitively accept that a good developer experience enables more effective software delivery and developer happiness. Yet, at many organizations, proposed initiatives and investments to improve DevEx struggle to get buy-in as business stakeholders question the value proposition of improvements.

Business and Management, Development

Resolving the Human-subjects Status of Machine Learning's Crowdworkers

  Divyansh Kaushik, Zachary C. Lipton, Alex John London

What ethical framework should govern the interaction of ML researchers and crowdworkers?

In recent years, machine learning (ML) has relied heavily on crowdworkers both for building datasets and for addressing research questions requiring human interaction or judgment. The diversity of both the tasks performed and the uses of the resulting data render it difficult to determine when crowdworkers are best thought of as workers versus human subjects. These difficulties are compounded by conflicting policies, with some institutions and researchers regarding all ML crowdworkers as human subjects and others holding that they rarely constitute human subjects. Notably, few ML papers involving crowdwork mention IRB oversight, raising the prospect of non-compliance with ethical and regulatory requirements. We investigate the appropriate designation of ML crowdsourcing studies, focusing our inquiry on natural language processing to expose unique challenges for research oversight.

AI, Privacy and Rights

Kode Vicious:
Is There Another System?

Computer science is the study of what can be automated.

One of the easiest tests to determine if you are at risk is to look hard at what you do every day and see if you, yourself, could code yourself out of a job. Programming involves a lot of rote work: templating, boilerplate, and the like. If you can see a way to write a system to replace yourself, either do it, don't tell your bosses, and collect your salary while reading novels in your cubicle, or look for something more challenging to work on.

AI, Kode Vicious

Research for Practice:
Automatically Testing Database Systems

  Manuel Rigger with introduction by Peter Alvaro

DBMS testing with test oracles, transaction history, and fuzzing

The automated testing of DBMSs is an exciting, interdisciplinary effort that has seen many innovations in recent years. The examples addressed here represent different perspectives on this topic, reflecting strands of research from software-engineering, (database) systems, and security angles. They give only a glimpse into these research strands, as many additional interesting and effective works have been proposed. Various approaches generate pairs of related tests to find both logic bugs and performance issues in a DBMS. Similarly, other isolation-level testing approaches have been proposed. Finally, various fuzzing approaches use different strategies to generate mostly valid and interesting test inputs that extract various kinds of feedback from the DBMS.
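
The idea of generating pairs of related tests as a logic-bug oracle can be sketched in a few lines. This is a hedged illustration in the style of ternary logic partitioning (one published oracle of this kind), not any specific tool from the article; the `tlp_check` helper and the sample table are ours:

```python
# A predicate p partitions a table's rows into three sets: p, NOT p, and
# p IS NULL. Together they must cover every row; a mismatch in the counts
# signals a logic bug in the DBMS's query evaluation.
import sqlite3

def tlp_check(conn, table, predicate):
    cur = conn.cursor()
    total = cur.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    parts = sum(
        cur.execute(f"SELECT COUNT(*) FROM {table} WHERE {w}").fetchone()[0]
        for w in (predicate, f"NOT ({predicate})", f"({predicate}) IS NULL")
    )
    return total == parts

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (None,)])
assert tlp_check(conn, "t", "x > 1")  # partitions sum to the full table
```

Because both queries in each pair are derived from the same predicate, any disagreement is a bug report that needs no manually written expected result.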

Databases, Research for Practice, Testing

How to Design an ISA

  David Chisnall

The popularity of RISC-V has led many to try designing instruction sets.

Over the past decade I've been involved in several projects that have designed either ISA (instruction set architecture) extensions or clean-slate ISAs for various kinds of processors (you'll even find my name in the acknowledgments for the RISC-V spec, right back to the first public version). When I started, I had very little idea about what makes a good ISA, and, as far as I can tell, this isn't formally taught anywhere. With the rise of RISC-V as an open base for custom instruction sets, however, the barrier to entry has become much lower and the number of people trying to design some or all of an instruction set has grown immeasurably.

Computer Architecture, Hardware

Operations and Life:
What do Trains, Horses, and Home Internet Installation have in Common?

  Thomas A. Limoncelli

Avoid changes mid-process.

At first, I thought he was just trying to shirk his responsibilities and pass the buck to someone else. His advice, however, made a lot of sense. The installation team probably generated configurations ahead of time, planned out how and when those changes needed to be activated, and so on. The entire day is planned ahead. Bureaucracies usually have a happy path that works well, and any deviation requires who knows what? Managers getting involved? Error-prone manual steps? Ad hoc database queries? There's no way I could know. The point was clear, however: Don't change horses midstream, or the color of the train.

Business and Management, Operations and Life, Systems Administration

Case Study:
Multiparty Computation:
To Secure Privacy, Do the Math

A discussion with Nigel Smart, Joshua W. Baron, Sanjay Saravanan, Jordan Brandt, and Atefeh Mashatan

Multiparty computation (MPC) is based on complex math, and over the past decade it has been harnessed as one of the most powerful tools available for the protection of sensitive data. MPC now serves as the basis for protocols that let a set of parties interact and compute on a pool of private inputs without revealing any of the data contained within those inputs. In the end, only the results are revealed. The implications of this can often prove profound.
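
One classic MPC building block, additive secret sharing, shows the "compute without revealing" idea in miniature. This is an illustration under our own naming, not the protocol discussed in the case study:

```python
# Each party holds one random-looking share of a secret; any single share
# (indeed, any strict subset) reveals nothing, but all shares sum to the
# secret modulo a prime.
import secrets

P = 2**61 - 1  # prime modulus

def share(secret, n_parties):
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)  # last share fixes the sum
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Parties add their shares locally to compute the sum of private inputs;
# no party ever sees another's input, only the final result is revealed.
a, b = share(1000, 3), share(234, 3)
sum_shares = [(x + y) % P for x, y in zip(a, b)]
assert reconstruct(sum_shares) == 1234
```

Real MPC protocols extend this with multiplication, malicious-security checks, and communication rounds, but the privacy principle is the same: intermediate values are always shared, never in the clear.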

Case Studies, Privacy and Rights, Security


Volume 21, Issue 5

Bridging the Moat:
The Security Jawbreaker

  Phil Vachon

Access to a system should not imply authority to use it. Enter the principle of complete mediation.

When someone stands at the front door of your home, what are the steps to let them in? If it is a member of the family, they use their house key, unlocking the door using the authority the key confers. For others, a knock at the door or doorbell ring prompts you to make a decision. Once in your home, different individuals have differing authority based on who they are. Family members have access to your whole home. A close friend can roam around unsupervised, with a high level of trust. An appliance repair person is someone you might supervise for the duration of the job to be done. For more sensitive locations in your home, you can lock a few doors, giving you further assurance. Making these decisions is an implicit form of evaluating risk tolerance, or your willingness to accept the chance that something might go against your best interests.
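
The principle of complete mediation behind this analogy can be sketched directly (the `Mediator` class and names are illustrative, not from the article):

```python
# Complete mediation: check authority at every use, rather than trusting a
# decision cached when someone first came through the door. Revocation then
# takes effect immediately.
class Mediator:
    def __init__(self):
        self.grants = {("alice", "thermostat"): True}  # current policy

    def authorize(self, who, resource):
        # Consulted on EVERY request; nothing is cached.
        return self.grants.get((who, resource), False)

mediator = Mediator()

def access(who, resource):
    if not mediator.authorize(who, resource):
        raise PermissionError(f"{who} may not use {resource}")
    return f"{who} used {resource}"

print(access("alice", "thermostat"))                   # allowed today
mediator.grants[("alice", "thermostat")] = False       # revoke mid-session
# A later access now fails, even though alice is already "inside."
```

The contrast with the house-key model is the point: a key confers standing authority, while a mediated system re-evaluates risk on every request.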

Bridging the Moat, Security

Improving Testing of Deep-learning Systems

  Harsh Deokuliar, Raghvinder S. Sangwan, Youakim Badr, Satish M. Srinivasan

A combination of differential and mutation testing results in better test data.

We used differential testing to generate test data to improve the diversity of data points in the test dataset, and then used mutation testing to check the quality of the test data in terms of diversity. Combining differential and mutation testing in this fashion improves the mutation score, a test-data quality metric, indicating an overall improvement in testing effectiveness and in the quality of the test data when testing deep-learning systems.
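
The mutation-score metric itself is easy to demonstrate. A minimal sketch, not the authors' tooling (the stand-in `model` and its mutants are ours):

```python
# Mutation testing: perturb the system under test, then measure what
# fraction of mutants the test data distinguishes ("kills") from the
# original. More diverse test data kills more mutants.
def model(x):                         # stand-in for a trained predictor
    return 1 if x >= 0.5 else 0

mutants = [
    lambda x: 1 if x > 0.5 else 0,    # boundary mutant
    lambda x: 0 if x >= 0.5 else 1,   # label-flip mutant
    lambda x: 1,                      # constant-output mutant
]

def mutation_score(test_data, mutants):
    killed = sum(
        any(model(x) != m(x) for x in test_data)   # some input exposes m
        for m in mutants
    )
    return killed / len(mutants)

print(mutation_score([0.9], mutants))        # narrow data kills 1 of 3
print(mutation_score([0.5, 0.2], mutants))   # diverse data kills all 3
```

The boundary input 0.5 is exactly the kind of point differential testing surfaces; adding it raises the score, which is the effect the study measures.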


Kode Vicious:
Dear Diary

On keeping a laboratory notebook

While a debug log is helpful, it's not the same thing as a laboratory notebook. If more computer scientists acted like scientists, we wouldn't have to fight over whether computing is an art or a science.

Development, Kode Vicious

Low-code Development Productivity

  João Varajão, António Trigo, Miguel Almeida

"Is winter coming" for code-based technologies?

This article aims to provide new insights on the subject by presenting the results of laboratory experiments carried out with code-based, low-code, and extreme low-code technologies to study differences in productivity. Low-code technologies have clearly shown higher levels of productivity, providing strong arguments for low-code to dominate the software development mainstream in the short/medium term. The article reports the procedure and protocols, results, limitations, and opportunities for future research.


The Soft Side of Software:
Software Managers' Guide to Operational Excellence

  Kate Matsudaira

The secret to being a great engineering leader? Setting up the right checks and balances.

Software engineering managers (or any senior technical leaders) have many responsibilities: the care and feeding of the team, delivering on business outcomes, and keeping the product/system/application up and running and in good order. Each of these areas can benefit from a systematic approach. The one I present here is setting up checks and balances for the team's operational excellence.

Business and Management, The Soft Side of Software

Use Cases are Essential

  Ivar Jacobson, Alistair Cockburn

Use cases provide a proven method to capture and explain the requirements of a system in a concise and easily understood format.

While the software industry is a fast-paced and exciting world in which new tools, technologies, and techniques are constantly being developed to serve business and society, it is also forgetful. In its haste for fast-forward motion, it is subject to the whims of fashion and can forget or ignore proven solutions to some of the eternal problems that it faces. Use cases, first introduced in 1986 and popularized later, are one of those proven solutions. Ivar Jacobson and Alistair Cockburn, the two primary actors in this domain, are writing this article to describe to a new generation what use cases are and how they serve.


Device Onboarding using FDO and the Untrusted Installer Model

  Geoffrey H. Cooper

FDO's untrusted model is contrasted with Wi-Fi Easy Connect to illustrate the advantages of each mechanism.

Automatic onboarding of devices is an important technique to handle the increasing number of "edge" and IoT devices being installed. Onboarding of devices is different from most device-management functions because the device's trust transitions from the factory and supply chain to the target application. To speed the process with automatic onboarding, the trust relationship in the supply chain must be formalized in the device to allow the transition to be automated.

Hardware, Networks, Security


Volume 21, Issue 4 - Confidential Computing

Operations and Life:
Knowing What You Need to Know

  Thomas A. Limoncelli

Personal, team, and organizational effectiveness can be improved with a little preparation

Blockers can take a tiny task and stretch it over days or weeks. Taking a moment at the beginning of a project to look for and prevent possible blockers can improve productivity. These examples at the personal, team, and organizational levels show how gathering the right information and performing preflight checks can save hours of wasted time later.

Business and Management, Operations and Life

Kode Vicious:
Halfway Around the World

Learn the language, meet the people, eat the food

Not only do different cultures treat different features differently, but they also treat each other differently. How people act with respect to each other is a topic that can, and does, fill volumes of books that, as nerds, we probably have never read, but finding out a bit about where you're heading is a good idea. You can try to ask the locals, although people generally are so enmeshed in their own cultures that they have a hard time explaining them to others. It's best to observe with an open mind, watch how your new team reacts to each other and to you, and then ask simple questions when you see something you don't understand.

Business and Management, Kode Vicious

Drill Bits
Protecting Secrets from Computers

  Terence Kelly

Bob is in prison and Alice is dead; they trusted computers with secrets. Review time-tested tricks that can help you avoid the grim fate of the old crypto couple.

Code, Development, Drill Bits, Privacy and Rights, Security, Web Security

Confidential Computing: Elevating Cloud Security and Privacy

  Mark Russinovich

Working toward a more secure and innovative future

Confidential Computing (CC) fundamentally improves our security posture by drastically reducing the attack surface of systems. While traditional systems encrypt data at rest and in transit, CC extends this protection to data in use. It provides a novel, clearly defined security boundary, isolating sensitive data within trusted execution environments during computation. This means services can be designed that segment data based on least-privilege access principles, while all other code in the system sees only encrypted data. Crucially, the isolation is rooted in novel hardware primitives, effectively rendering even the cloud-hosting infrastructure and its administrators incapable of accessing the data. This approach creates more resilient systems capable of withstanding increasingly sophisticated cyber threats, thereby reinforcing data protection and sovereignty in an unprecedented manner.

Data, Hardware, Security

Hardware VM Isolation in the Cloud

  David Kaplan

Enabling confidential computing with AMD SEV-SNP technology

Confidential computing is a security model that fits well with the public cloud. It enables customers to rent VMs while enjoying hardware-based isolation that ensures that a cloud provider cannot purposefully or accidentally see or corrupt their data. SEV-SNP was the first commercially available x86 technology to offer VM isolation for the cloud and is deployed in Microsoft Azure, AWS, and Google Cloud. As confidential computing technologies such as SEV-SNP develop, confidential computing is likely to simply become the default trust model for the cloud.

Data, Hardware, Security

Creating the First Confidential GPUs

  Gobikrishna Dhanuskodi, Sudeshna Guha, Vidhya Krishnan, Aruna Manjunatha, Michael O'Connor, Rob Nertney, Phil Rogers

The team at NVIDIA brings confidentiality and integrity to user code and data for accelerated computing.

Today's datacenter GPU has a long and storied 3D graphics heritage. In the 1990s, graphics chips for PCs and consoles had fixed pipelines for geometry, rasterization, and pixels using integer and fixed-point arithmetic. In 1999, NVIDIA invented the modern GPU, which put a set of programmable cores at the heart of the chip, enabling rich 3D scene generation with great efficiency. It did not take long for developers and researchers to realize they could run compute on those parallel cores, and it would be blazing fast. In 2004, Ian Buck created Brook at Stanford, the first compute library for GPUs, and in 2006, NVIDIA created CUDA, which is the gold standard for accelerated computing on GPUs today.

Data, Hardware, Security

Why Should I Trust Your Code?

  Antoine Delignat-Lavaud, Cédric Fournet, Kapil Vaswani, Sylvan Clebsch, Maik Riechert, Manuel Costa, Mark Russinovich

Confidential computing enables users to authenticate code running in TEEs, but users also need evidence this code is trustworthy.

For Confidential Computing to become ubiquitous in the cloud, in the same way that HTTPS became the default for networking, a different, more flexible approach is needed. Although there is no guarantee that every malicious code behavior will be caught upfront, precise auditability can be guaranteed: Anyone who suspects that trust has been broken by a confidential service should be able to audit any part of its attested code base, including all updates, dependencies, policies, and tools. To achieve this, we propose an architecture to track code provenance and to hold code providers accountable. At its core, a new Code Transparency Service (CTS) maintains a public, append-only ledger that records all code deployed for confidential services. Before registering new code, CTS automatically applies policies to enforce code-integrity properties. For example, it can enforce the use of authorized releases of library dependencies and verify that code has been compiled with specific runtime checks and analyzed by specific tools. These upfront checks prevent common supply-chain attacks.
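
The append-only ledger at the core of this design can be sketched as a hash chain. This toy omits the signatures, Merkle proofs, and policy checks a real CTS would need, and all names are ours:

```python
# Append-only ledger: each entry binds its record to the hash of the
# previous entry, so any retroactive edit breaks every later hash and is
# detectable by anyone auditing the chain.
import hashlib, json

class Ledger:
    def __init__(self):
        self.entries = []

    def append(self, record):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = json.dumps(record, sort_keys=True)
        h = hashlib.sha256((prev + body).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev, "hash": h})
        return h

    def verify(self):
        prev = "0" * 64
        for e in self.entries:
            body = json.dumps(e["record"], sort_keys=True)
            if e["prev"] != prev:
                return False
            if hashlib.sha256((prev + body).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

ledger = Ledger()
ledger.append({"artifact": "svc-v1.2", "compiler": "clang-16"})
ledger.append({"artifact": "svc-v1.3", "compiler": "clang-16"})
assert ledger.verify()
ledger.entries[0]["record"]["artifact"] = "tampered"  # any edit is caught
assert not ledger.verify()
```

Registering every deployed code version in such a structure is what makes the "audit any update, dependency, or tool" guarantee possible: the provider cannot quietly rewrite history.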

Data, Hardware, Security



Older Issues