The Kollected Kode Vicious

Kode Vicious - @kode_vicious


Is There Another System?

Computer science is the study of what can be automated.

Dear KV,

There has been considerable discussion about the use of AI for automating a lot of work, including software development. I've finished a Ph.D., I'm out of school, and I've been working for a few years. But now I wonder if software research and development will have a future that includes people like me, or if I'll simply be automated out of a career.

Colossally Concerned

 

Colossus: The Forbin Project

Dear CC,

It might be odd to think that someone with an advanced degree could be automated out of a job, but I live in New York, where there are plenty of minimum-wage workers with advanced degrees. I am afraid we are caught in yet another tech hype cycle around an advance in software, and that is causing a lot of confusion both inside and outside our industry.

Do large language models pose a threat to software developers? That depends on what kind of software they develop and how easy it might be to automate their work, but this has always been true. Computer science is the study of what can be automated, and your corporate masters see your salary as impinging on their bonuses, so they're always happy to reduce the number of human resources.

Computer science and software development change more quickly than many other fields because what we do doesn't require much in the physical realm and because, for the moment, the fallout from our mistakes goes mostly unregulated and unpunished. Consider what it takes to innovate in the construction of buildings or other physical infrastructure and you'll get what I mean. New bridges are built with newer materials, but such changes take years or even decades, while a new fad in computing can sweep at least part of the field in a few months. Like the crypto crap and the Internet bubble before it, the "AI" bubble is just such a fad. I put "AI" in quotes because, as a good friend said recently over food and a few beers, "AI is what people say when a computer does something they thought only a human could do."

Can large language models replace some parts of software development? Perhaps. I've seen evidence both for and against, and the Internet is littered with arguments on both sides.

It occurs to me that KV answered this question in another form many years ago [Avoiding Obsolescence, April 2010; and Bummed (the second letter on the page), December 2004] when I discussed how to stay up to date and fresh on the latest changes in computing. The key to a long career—as is obvious to those who have watched KV babble on lo these many years—is to continue to survey the field, see what's new, try new things, and see what works well for you.

KV has yet to see evidence of general AI appearing and replacing people, although the hype machine keeps telling us it's just around the corner. For now, think of these new systems as aids to the programmer, much as early compilers were in the 1960s and 1970s.

Back when the first compilers appeared, they produced machine language that was not nearly as efficient as what was created by working programmers. Today, there are only a few of us who understand or work in assembly or machine code, and this is both good and bad. It's good because it means that most programmers can express concepts in code that would have been tortuous to produce on earlier systems. It's bad because machines still run machine code, and if you can't debug it, often you cannot find the true source of a performance or other issue. Compilers are tools, debuggers are tools, large language models are tools, and humans are—for the most part—tool makers and users.
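
To make this concrete, here is a minimal sketch rather than anything definitive; the function, file name, and compiler flags are invented for illustration. It shows the kind of routine where the C source alone won't tell you what the machine actually executes, and the comment notes how one might ask the compiler for its assembly listing.

    /*
     * sum.c (hypothetical): a trivial routine used only to illustrate
     * that the compiler's output, not the C source, is what the machine
     * runs. Something like "cc -S -O2 sum.c" emits an assembly listing
     * (sum.s), and reading that listing, or stepping through the code
     * in a debugger, is how you learn whether the loop was unrolled,
     * vectorized, or left alone.
     */
    #include <stddef.h>

    long
    sum(const long *v, size_t n)
    {
            long total = 0;
            size_t i;

            for (i = 0; i < n; i++)
                    total += v[i];
            return total;
    }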

One of the easiest tests to determine whether you are at risk is to look hard at what you do every day and see if you could code yourself out of a job. Programming involves a lot of rote work: templating, boilerplate, and the like. If you can see a way to write a system to replace yourself, either do it (without telling your bosses) and collect your salary while reading novels in your cubicle, or look for something more challenging to work on.
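
As a toy illustration of coding yourself out of the rote work, here is a sketch of a boilerplate generator. The field list and the struct config type are hypothetical, and a real project would drive something like this from a schema or interface definition rather than a hard-coded table.

    /*
     * gen.c (hypothetical): emit getter/setter declarations for a list
     * of fields. The fields and "struct config" are made up; the point
     * is only that patterned, rote code is exactly the kind of thing a
     * small program can crank out for you.
     */
    #include <stdio.h>

    static const char *fields[] = { "name", "address", "port" };

    int
    main(void)
    {
            size_t i;

            for (i = 0; i < sizeof(fields) / sizeof(fields[0]); i++) {
                    printf("const char *get_%s(const struct config *c);\n",
                        fields[i]);
                    printf("void set_%s(struct config *c, const char *v);\n\n",
                        fields[i]);
            }
            return 0;
    }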

There are days—I should say mostly late, late nights—when KV wishes the machines would take over. I'd gladly be a battery if I could just have some peace and quiet to think about the higher-order things in computer science: algorithms, operating systems, and efficiency. These creative endeavors are still beyond the reach of whatever it is we call "AI," and KV is willing to bet a lot of drinks that they will remain so for the duration of his—and your—career.

KV

 

George V. Neville-Neil works on networking and operating-system code for fun and profit. He also teaches courses on various subjects related to programming. His areas of interest are computer security, operating systems, networking, time protocols, and the care and feeding of large code bases. He is the author of The Kollected Kode Vicious and coauthor with Marshall Kirk McKusick and Robert N. M. Watson of The Design and Implementation of the FreeBSD Operating System. For nearly 20 years, he has been the columnist better known as Kode Vicious. Since 2014, he has been an industrial visitor at the University of Cambridge, where he is involved in several projects relating to computer security. He earned his bachelor's degree in computer science at Northeastern University in Boston, Massachusetts, and is a member of ACM, the Usenix Association, and IEEE. His software not only runs on Earth, but also has been deployed as part of VxWorks in NASA's missions to Mars. He is an avid bicyclist and traveler who currently lives in New York City.

Copyright © 2023 held by owner/author. Publication rights licensed to ACM.


Originally published in Queue vol. 21, no. 6




