Volume 21, Issue 5
Improving Testing of Deep-learning Systems
Harsh Deokuliar, Raghvinder S. Sangwan, Youakim Badr, Satish M. Srinivasan
A combination of differential and mutation testing results in better test data.
We used differential testing to generate test data that improves the diversity of data points in the test dataset, and then used mutation testing to check the quality of the test data in terms of diversity. Combining differential and mutation testing in this fashion improves the mutation score, a test-data quality metric, indicating an overall improvement in testing effectiveness and in the quality of the test data when testing deep-learning systems.
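The metric at the center of this workflow can be illustrated with a minimal sketch (all names here are hypothetical; in the article's setting, mutants are mutated deep-learning models, and a mutant is "killed" when some test input makes its output diverge from the original model's):

```python
# Hypothetical sketch of computing a mutation score for a test dataset.
# `mutants` are model variants produced by mutation operators; a mutant is
# "killed" if any test input makes its prediction diverge from the original.

def mutation_score(original_predict, mutants, test_inputs):
    """Fraction of mutants killed by the test dataset."""
    killed = 0
    for mutant_predict in mutants:
        if any(mutant_predict(x) != original_predict(x) for x in test_inputs):
            killed += 1
    return killed / len(mutants) if mutants else 0.0

# Toy usage: a stand-in "model" and two mutants over a tiny test set.
original = lambda x: x > 0            # stands in for a trained model
mutants = [lambda x: x >= 0,          # mutant changed a comparison operator
           lambda x: x > 0]           # equivalent mutant, never killed
print(mutation_score(original, mutants, [-1, 0, 1]))  # prints 0.5
```

A higher score means the test data exposes more behavioral differences between mutants and the original, which is the sense in which it serves as a test-data quality metric.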
AI
Kode Vicious:
Dear Diary
On keeping a laboratory notebook
While a debug log is helpful, it's not the same thing as a laboratory notebook. If more computer scientists acted like scientists, we wouldn't have to fight over whether computing is an art or a science.
Development,
Kode Vicious
Low-code Development Productivity
João Varajão, António Trigo, Miguel Almeida
"Is winter coming" for code-based technologies?
This article aims to provide new insights on the subject by presenting the results of laboratory experiments carried out with code-based, low-code, and extreme low-code technologies to study differences in productivity. Low-code technologies have clearly shown higher levels of productivity, providing strong arguments for low-code to dominate the software development mainstream in the short/medium term. The article reports the procedure and protocols, results, limitations, and opportunities for future research.
Development
The Soft Side of Software:
Software Managers' Guide to Operational Excellence
Kate Matsudaira
The secret to being a great engineering leader? Setting up the right checks and balances.
Software engineering managers (or any senior technical leaders) have many responsibilities: the care and feeding of the team, delivering on business outcomes, and keeping the product/system/application up and running and in good order. Each of these areas can benefit from a systematic approach. The one I present here is setting up checks and balances for the team's operational excellence.
Business and Management,
The Soft Side of Software
Use Cases are Essential
Ivar Jacobson, Alistair Cockburn
Use cases provide a proven method to capture and explain the requirements of a system in a concise and easily understood format.
While the software industry is a fast-paced and exciting world in which new tools, technologies, and techniques are constantly being developed to serve business and society, it is also forgetful. In its haste for fast-forward motion, it is subject to the whims of fashion and can forget or ignore proven solutions to some of the eternal problems that it faces. Use cases, first introduced in 1986 and popularized later, are one of those proven solutions. Ivar Jacobson and Alistair Cockburn, the two primary actors in this domain, are writing this article to describe to a new generation what use cases are and how they serve.
Development
Device Onboarding using FDO and the Untrusted Installer Model
Geoffrey H. Cooper
FDO's untrusted model is contrasted with Wi-Fi Easy Connect to illustrate the advantages of each mechanism.
Automatic onboarding of devices is an important technique to handle the increasing number of "edge" and IoT devices being installed. Onboarding of devices is different from most device-management functions because the device's trust transitions from the factory and supply chain to the target application. To speed the process with automatic onboarding, the trust relationship in the supply chain must be formalized in the device to allow the transition to be automated.
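The idea of formalizing supply-chain trust can be sketched as a chain of endorsements that the device checks against its factory-installed trust anchor. This is only an illustrative toy, not FDO itself: FDO's ownership vouchers use signed entries, and the hash-based `endorse` function here is a stand-in for a signature.

```python
import hashlib

# Toy model of a supply-chain trust chain: each hop endorses the key of
# the next owner, so the device can verify an unbroken chain from its
# factory anchor to the final owner before onboarding proceeds.

def endorse(prev_entry, next_owner_key):
    # Stand-in for a cryptographic signature over the next owner's key.
    return hashlib.sha256((prev_entry + next_owner_key).encode()).hexdigest()

def verify_chain(factory_anchor, hops):
    """hops: list of (next_owner_key, entry) from factory to final owner."""
    expected = factory_anchor
    for next_key, entry in hops:
        if entry != endorse(expected, next_key):
            return False        # chain broken: onboarding must not proceed
        expected = entry
    return True

# Usage: factory -> distributor -> owner.
anchor = "factory-key"
h1 = endorse(anchor, "distributor-key")
h2 = endorse(h1, "owner-key")
print(verify_chain(anchor, [("distributor-key", h1), ("owner-key", h2)]))
```

The point of the exercise is that the device needs no prior knowledge of the final owner; it only needs its anchor and the recorded chain of endorsements.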
Hardware,
Networks,
Security
Volume 21, Issue 4 - Confidential Computing
Operations and Life:
Knowing What You Need to Know
Thomas A. Limoncelli
Personal, team, and organizational effectiveness can be improved with a little preparation
Blockers can take a tiny task and stretch it over days or weeks. Taking a moment at the beginning of a project to look for and prevent possible blockers can improve productivity. These examples of personal, team, and organizational levels show how gathering the right information and performing preflight checks can save hours of wasted time later.
Business and Management,
Operations and Life
Kode Vicious:
Halfway Around the World
Learn the language, meet the people, eat the food
Not only do different cultures treat different features differently, but they also treat each other differently. How people act with respect to each other is a topic that can, and does, fill volumes of books that, as nerds, we probably have never read, but finding out a bit about where you're heading is a good idea. You can try to ask the locals, although people generally are so enmeshed in their own cultures that they have a hard time explaining them to others. It's best to observe with an open mind, watch how your new team reacts to each other and to you, and then ask simple questions when you see something you don't understand.
Business and Management,
Kode Vicious
Drill Bits
Protecting Secrets from Computers
Terence Kelly
Bob is in prison and Alice is dead; they trusted computers with secrets. Review time-tested tricks that can help you avoid the grim fate of the old crypto couple.
Code,
Development,
Drill Bits,
Privacy and Rights,
Security,
Web Security
Confidential Computing: Elevating Cloud Security and Privacy
Mark Russinovich
Working toward a more secure and innovative future
Confidential Computing (CC) fundamentally improves our security posture by drastically reducing the attack surface of systems. While traditional systems encrypt data at rest and in transit, CC extends this protection to data in use. It provides a novel, clearly defined security boundary, isolating sensitive data within trusted execution environments during computation. This means services can be designed that segment data based on least-privilege access principles, while all other code in the system sees only encrypted data. Crucially, the isolation is rooted in novel hardware primitives, effectively rendering even the cloud-hosting infrastructure and its administrators incapable of accessing the data. This approach creates more resilient systems capable of withstanding increasingly sophisticated cyber threats, thereby reinforcing data protection and sovereignty in an unprecedented manner.
Data,
Hardware,
Security
Hardware VM Isolation in the Cloud
David Kaplan
Enabling confidential computing with AMD SEV-SNP technology
Confidential computing is a security model that fits well with the public cloud. It enables customers to rent VMs while enjoying hardware-based isolation that ensures that a cloud provider cannot purposefully or accidentally see or corrupt their data. SEV-SNP was the first commercially available x86 technology to offer VM isolation for the cloud and is deployed in Microsoft Azure, AWS, and Google Cloud. As confidential computing technologies such as SEV-SNP develop, confidential computing is likely to simply become the default trust model for the cloud.
Data,
Hardware,
Security
Creating the First Confidential GPUs
Gobikrishna Dhanuskodi, Sudeshna Guha, Vidhya Krishnan, Aruna Manjunatha, Michael O'Connor, Rob Nertney, Phil Rogers
The team at NVIDIA brings confidentiality and integrity to user code and data for accelerated computing.
Today's datacenter GPU has a long and storied 3D graphics heritage. In the 1990s, graphics chips for PCs and consoles had fixed pipelines for geometry, rasterization, and pixels using integer and fixed-point arithmetic. In 1999, NVIDIA invented the modern GPU, which put a set of programmable cores at the heart of the chip, enabling rich 3D scene generation with great efficiency. It did not take long for developers and researchers to realize they could run compute on those parallel cores, and it would be blazing fast. In 2004, Ian Buck created Brook at Stanford, the first compute library for GPUs, and in 2006, NVIDIA created CUDA, which is the gold standard for accelerated computing on GPUs today.
Data,
Hardware,
Security
Why Should I Trust Your Code?
Antoine Delignat-Lavaud, Cédric Fournet, Kapil Vaswani, Sylvan Clebsch, Maik Riechert, Manuel Costa, Mark Russinovich
Confidential computing enables users to authenticate code running in TEEs, but users also need evidence this code is trustworthy.
For Confidential Computing to become ubiquitous in the cloud, in the same way that HTTPS became the default for networking, a different, more flexible approach is needed. Although there is no guarantee that every malicious code behavior will be caught upfront, precise auditability can be guaranteed: Anyone who suspects that trust has been broken by a confidential service should be able to audit any part of its attested code base, including all updates, dependencies, policies, and tools. To achieve this, we propose an architecture to track code provenance and to hold code providers accountable. At its core, a new Code Transparency Service (CTS) maintains a public, append-only ledger that records all code deployed for confidential services. Before registering new code, CTS automatically applies policies to enforce code-integrity properties. For example, it can enforce the use of authorized releases of library dependencies and verify that code has been compiled with specific runtime checks and analyzed by specific tools. These upfront checks prevent common supply-chain attacks.
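A minimal sketch of the ledger-plus-policy idea, assuming a hypothetical `CodeTransparencyLedger` class (the real CTS design involves attestation, receipts, and signed claims, none of which are modeled here):

```python
import hashlib

# Minimal sketch (names hypothetical) of a Code Transparency Service:
# a public, append-only ledger of code claims, with policies applied
# before a claim may be registered.

class CodeTransparencyLedger:
    def __init__(self, policies):
        self.policies = policies      # callables: claim -> bool
        self.entries = []             # append-only list of (digest, claim)

    def register(self, claim):
        """Apply policies, then append the claim; returns its receipt digest."""
        if not all(policy(claim) for policy in self.policies):
            raise ValueError("claim rejected by policy")
        prev = self.entries[-1][0] if self.entries else ""
        digest = hashlib.sha256((prev + repr(claim)).encode()).hexdigest()
        self.entries.append((digest, claim))   # entries are never mutated
        return digest

# Example policy: only authorized releases of a dependency may be used.
authorized = {"libfoo": {"1.2.3", "1.2.4"}}
dep_policy = lambda c: all(v in authorized.get(d, set())
                           for d, v in c["deps"].items())

cts = CodeTransparencyLedger([dep_policy])
receipt = cts.register({"binary": "svc-v1", "deps": {"libfoo": "1.2.3"}})
```

Because each entry's digest covers the previous entry, the ledger is tamper-evident; anyone auditing a confidential service can replay the chain and re-check the policies that were enforced upfront.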
Data,
Hardware,
Security
Volume 21, Issue 3
Kode Vicious:
Stone Knives and Bear Skins
There is no money in tools.
If you look at the software tooling landscape, you see that the majority of developers work with either open-source tools (LLVM and gcc for compilers, gdb for debugger, vi/vim or Emacs for editor); or tools from the recently reformed home of proprietary software, Microsoft, which has figured out that its Visual Studio Code system is a good way to sucker people into working with its platforms; or finally Apple, whose tools are meant only for its platform. In specialized markets, such as deeply embedded, military, and aerospace, there are proprietary tools that are often far worse than their open-source cousins, because the market for such tools is small but lucrative.
If systems were designed with these questions in mind (How do I extend this? How do I measure this? How do I debug this?), it would also be easier to build better tools. The tooling would have something to hang its hat on, rather than guessing what might be the meaning of some random bytes in memory. Is that a buffer? Is it an important buffer? Who knows, it's all memory!
Kode Vicious,
Tools
Pointers in Far Memory
Ethan Miller, George Neville-Neil, Achilles Benetopoulos, Pankaj Mehra, and Daniel Bittman
A rethink of how data and computations should be organized
CXL (Compute Express Link), a new technology emerging from the hardware side, promises to provide far memory. Thus, there will be more memory capacity and perhaps even more bandwidth, but at the expense of greater latency. Optimization will, first, seek to keep memory in far tiers cold and, second, minimize the rates of both access into and promotion out of those tiers. Third, proactive promotion and demotion techniques being developed for far memory promote or demote whole objects instead of one cache line at a time, taking advantage of bulk caching and eviction to avoid repeatedly incurring far memory's long latency. Finally, offloading computations with many dependent accesses to a near-memory processor is already being seen as a way to keep memory latency out of the denominator of application throughput. With far memory, this will be a required optimization.
Effectively exploiting emerging far-memory technology requires consideration of operating on richly connected data outside the context of the parent process. Operating-system technology in development offers help by exposing abstractions such as memory objects and globally invariant pointers that can be traversed by devices and newly instantiated compute. Such ideas will allow applications running on future heterogeneous distributed systems with disaggregated memory nodes to exploit near-memory processing for higher performance and to independently scale their memory and compute resources for lower cost.
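Whole-object promotion between tiers can be sketched as follows (a toy model with hypothetical names; a real system would pick demotion victims using access-frequency information rather than this simple last-in choice, and the transfers would be bulk DMA rather than dictionary moves):

```python
# Toy sketch of whole-object promotion and demotion between a near and a
# far memory tier, as opposed to moving one cache line at a time.

class TieredHeap:
    def __init__(self, near_capacity):
        self.near, self.far = {}, {}
        self.near_capacity = near_capacity

    def access(self, obj_id):
        if obj_id in self.near:
            return self.near[obj_id]          # near-tier hit, no transfer
        # Promote the whole object in one bulk transfer, paying the
        # far-memory latency once instead of once per cache line.
        obj = self.far.pop(obj_id)
        if len(self.near) >= self.near_capacity:
            victim, data = self.near.popitem()  # demote a whole object
            self.far[victim] = data             # (last-in victim, for brevity)
        self.near[obj_id] = obj
        return obj

# Usage: two objects in far memory, room for one near.
heap = TieredHeap(near_capacity=1)
heap.far = {"tree": {"root": 1}, "blob": [0] * 4}
node = heap.access("tree")   # "tree" promoted whole into the near tier
```

Promoting at object rather than cache-line granularity is what makes the long far-memory latency amortizable, which is why the abstractions above (memory objects, invariant pointers) matter for far-memory systems.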
Data,
Hardware,
Memory
The Bikeshed:
Don't "Think of the Internet!"
Poul-Henning Kamp
No human right is absolute.
The "Think of the children!" trope is often rolled out because the lobbyists cannot be honest and say that their real agenda is "No universal human rights for those people." So we should examine whether "Think of the Internet as we know it!" is also political codespeak.
The Bikeshed,
Privacy and Rights
How Flexible is CXL's Memory Protection?
Samuel W. Stark, A. Theodore Markettos, Simon W. Moore
Replacing a sledgehammer with a scalpel
CXL, a new interconnect standard for cache-coherent memory sharing, is becoming a reality, but its security leaves something to be desired. Decentralized capabilities are flexible and resilient against malicious actors, and should be considered while CXL is under active development.
Memory,
Security
Bridging the Moat:
Security Mismatch
Phil Vachon
Security must be a business enabler, not a hinderer.
Information security teams that say "no" need to change. Hiding behind a moat makes repelling attacks easy, but bridges allow you to replenish supplies and foster relationships with customers' castles. Remember, a security team's role is to empower their business to move forward with confidence, not to hinder progress.
Business and Management,
Bridging the Moat
The Soft Side of Software:
Managing Hybrid Teams
Kate Matsudaira
The combination of on-site and remote workers takes extra effort from team leaders.
After three years of working remotely, many companies are asking their people to return to the office. Not everyone is coming back, however. With some people in the office and some still working from home, leaders must get the transition to hybrid work right. Hybrid is in some ways the worst of both worlds: You can easily end up creating two experiences, which can lead to problems that compound over time and have long-term damaging effects on your team. For leaders navigating a newly hybridized work environment, this column presents recommendations to help make sure your team is as functional as possible.
Business and Management,
The Soft Side of Software
Echoes of Intelligence
Alvaro Videla
Textual interpretation and large language models
We are now in the presence of a new medium disguised as good old text, but that text has been generated by an LLM, without authorial intention—an aspect that, if known beforehand, completely changes the expectations and response a human should have from a piece of text.
Should our interpretation capabilities be engaged? If yes, under what conditions? The rules of the language game should be spelled out; they should not be passed over in silence.
AI