Dear KV,
There has been considerable discussion about the use of AI for automating a lot of work—including software development. I've just finished a Ph.D., I'm out of school, and I have been working for a few years. But now I wonder if software research and development will have a future that includes people like me, or if I'll simply be automated out of a career.
Colossally Concerned
Dear CC,
It might seem odd that someone with an advanced degree could be automated out of a job, but I live in New York, where there are plenty of minimum-wage workers with advanced degrees. I am afraid we are caught in yet another tech hype cycle around an advance in software, one that is causing a lot of confusion both inside and outside our industry.
Do large language models pose a threat to software developers? That depends on what kind of software they develop and how easy it might be to automate their work, but this has always been true. Computer science is the study of what can be automated, and your corporate masters see your salary as impinging on their bonuses, so they're always happy to reduce the number of human resources.
Computer science and software development change more quickly than many other fields because what we do requires little in the physical realm and because, for the moment, the fallout from our mistakes goes mostly unregulated and unpunished. Consider what it takes to innovate in the construction of buildings or other physical infrastructure and you'll see what I mean. New bridges are built with newer materials, but such changes take years or even decades, while a new fad in computing can sweep at least part of the field in a few months. The "AI" bubble is only the latest of these fads, following the crypto crap and the Internet bubble before it. I put "AI" in quotes because, as a good friend said recently over food and a few beers, "AI is what people say when a computer does something they thought only a human could do."
Can large language models replace some parts of software development? Perhaps. I've seen evidence both for and against, and the Internet is littered with arguments on both sides.
It occurs to me that KV answered this question in another form many years ago [Avoiding Obsolescence, April 2010; and Bummed (the second letter on the page), December 2004] when I discussed how to stay up to date and fresh on the latest changes in computing. The key to a long career—as is obvious to those who have watched KV babble on lo these many years—is to continue to survey the field, see what's new, try new things, and see what works well for you.
KV has yet to see evidence of general AI appearing and replacing people, although the hype machine keeps telling us it's just around the corner. For now, think of these new systems as aids to the programmer, much as early compilers were in the 1960s and 1970s.
Back when the first compilers appeared, they produced machine language that was not nearly as efficient as what working programmers wrote by hand. Today, only a few of us understand or work in assembly or machine code, and this is both good and bad. It's good because most programmers can now express concepts in code that would have been tortuous to produce on earlier systems. It's bad because machines still run machine code, and if you can't debug at that level, you often cannot find the true source of a performance problem or other issue. Compilers are tools, debuggers are tools, large language models are tools, and humans are, for the most part, tool makers and tool users.
One of the easiest tests to determine whether you are at risk is to look hard at what you do every day and ask whether you could code yourself out of a job. Programming involves a lot of rote work: templating, boilerplate, and the like. If you can see a way to write a system to replace yourself, either do it, don't tell your bosses, and collect your salary while reading novels in your cubicle, or go look for something more challenging to work on.
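To make that concrete, here is a purely hypothetical sketch; the class name, fields, and output format are invented for illustration, not taken from any real code base. A few lines of Python templating can stand in for the kind of rote accessor boilerplate that eats an afternoon:

    # A minimal sketch of "coding yourself out of the rote parts of the job":
    # generating repetitive getter/setter boilerplate from a short field spec.
    # The class name and fields below are hypothetical placeholders.
    from string import Template

    ACCESSOR = Template("""    def get_$name(self):
            return self._$name

        def set_$name(self, value):
            self._$name = value
    """)

    def generate_class(class_name, fields):
        """Emit a class with one private attribute and a getter/setter per field."""
        init_body = "\n".join(f"        self._{f} = None" for f in fields)
        accessors = "\n".join(ACCESSOR.substitute(name=f) for f in fields)
        return f"class {class_name}:\n    def __init__(self):\n{init_body}\n\n{accessors}"

    if __name__ == "__main__":
        print(generate_class("Invoice", ["customer", "amount", "due_date"]))

If a short script like this covers a meaningful slice of your day, that slice was the part of the job most at risk of automation all along, whether by you or by someone else's tool.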
There are days (I should say, mostly late, late nights) when KV wishes the machines would take over. I'd gladly be a battery if I could just have some peace and quiet to think about the higher-order things in computer science: algorithms, operating systems, and efficiency. These creative endeavors are still beyond the reach of whatever it is we call "AI," and KV is willing to bet a lot of drinks that they will remain so for the duration of his, and your, career.
KV
George V. Neville-Neil works on networking and operating-system code for fun and profit. He also teaches courses on various subjects related to programming. His areas of interest are computer security, operating systems, networking, time protocols, and the care and feeding of large code bases. He is the author of The Kollected Kode Vicious and coauthor with Marshall Kirk McKusick and Robert N. M. Watson of The Design and Implementation of the FreeBSD Operating System. For nearly 20 years, he has been the columnist better known as Kode Vicious. Since 2014, he has been an industrial visitor at the University of Cambridge, where he is involved in several projects relating to computer security. He earned his bachelor's degree in computer science at Northeastern University in Boston, Massachusetts, and is a member of ACM, the Usenix Association, and IEEE. His software not only runs on Earth, but also has been deployed as part of VxWorks in NASA's missions to Mars. He is an avid bicyclist and traveler who currently lives in New York City.
Copyright © 2023 held by owner/author. Publication rights licensed to ACM.
Originally published in Queue vol. 21, no. 6—
Evaluating AI models that surpass human expertise in the task at hand presents unique challenges. These challenges only grow as AI becomes more intelligent. However, the three effective strategies presented in this article exist to address these hurdles. The strategies are: Functional correctness: evaluating AI by how well it accomplishes its intended tasks; AI-as-a-judge: using AI instead of human experts to evaluate AI outputs; and Comparative evaluation: evaluating AI systems in relationship with each other instead of independently.