The Kollected Kode Vicious

Kode Vicious - @kode_vicious


Is There Another System?

Computer science is the study of what can be automated.

Dear KV,

There has been considerable discussion about the use of AI for automating a lot of work—including software development. I've just finished a Ph.D., I'm out of school, and I have been working for a few years. But now I wonder if software research and development will have a future that includes people like me, or if I'll simply be automated out of a career.

Colossally Concerned

 

Colossus: The Forbin Project

Dear CC,

It might be odd to think that someone with an advanced degree could be automated out of a job, but I live in New York, where there are plenty of minimum-wage workers with advanced degrees. I am afraid we are caught in yet another tech hype cycle around an advance in software, and that is causing a lot of confusion both inside and outside our industry.

Do large language models pose a threat to software developers? That depends on what kind of software they develop and how easy it might be to automate their work, but this has always been true. Computer science is the study of what can be automated, and your corporate masters see your salary as impinging on their bonuses, so they're always happy to reduce the number of human resources.

Computer science and software development change more quickly than many other fields because what we do doesn't require much in the physical realm and because, for the moment, the fallout from our mistakes goes mostly unregulated and unpunished. Consider what it takes to innovate in the construction of buildings or other physical infrastructure and you'll see what I mean. New bridges are built with newer materials, but such changes take years or even decades, while a new fad in computing can sweep at least part of the field in a few months. Like the crypto crap and the Internet bubble before it, the "AI" bubble is just such a fad. I put "AI" in quotes because, as a good friend said recently over food and a few beers, "AI is what people say when a computer does something they thought only a human could do."

Can large language models replace some parts of software development? Perhaps. I've seen evidence both for and against, and the Internet is littered with arguments on both sides.

It occurs to me that KV answered this question in another form many years ago [Avoiding Obsolescence, April 2010; and Bummed (the second letter on the page), December 2004] when I discussed how to stay up to date and fresh on the latest changes in computing. The key to a long career—as is obvious to those who have watched KV babble on lo these many years—is to continue to survey the field, see what's new, try new things, and see what works well for you.

KV has yet to see evidence of general AI appearing and replacing people, although the hype machine keeps telling us it's just around the corner, so you need to think of these new systems as aids to the programmer, much as early compilers were in the 1960s and 1970s.

Back when the first compilers appeared, they produced machine language that was not nearly as efficient as what working programmers wrote by hand. Today, only a few of us understand or work in assembly or machine code, and this is both good and bad. It's good because most programmers can now express concepts in code that would have been tortuous to produce on earlier systems. It's bad because machines still run machine code, and if you can't debug at that level, you often cannot find the true source of a performance problem or other issue. Compilers are tools, debuggers are tools, large language models are tools, and humans are, for the most part, tool makers and users.
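
To make that concrete without hauling out a C compiler, here is a minimal sketch in Python (chosen only for brevity; the function is invented for the example) that uses the standard dis module to print the bytecode the interpreter actually runs. Reading it is the bytecode cousin of reading the assembly a C compiler emits with cc -S, and it is the level at which some performance mysteries live.

import dis

def average(values):
    # Plain, readable Python; the interpreter executes the
    # bytecode below, not this text.
    total = 0
    for v in values:
        total += v
    return total / len(values)

# Disassemble to see what the CPython compiler actually produced,
# much as you would read a C compiler's output from cc -S file.c.
dis.dis(average)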

One of the easiest tests to determine if you are at risk is to look hard at what you do every day and see if you, yourself, could code yourself out of a job. Programming involves a lot of rote work—templating, boilerplate, and the like. If you can see a way to write a system to replace yourself, either do it, don't tell your bosses, and collect your salary while reading novels in your cubicle, or look for something more challenging to work on.
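
As a toy illustration of what coding yourself out of a job can look like, here is a small Python sketch that stamps out the sort of accessor boilerplate many programmers still type by hand. The field spec and class name here are invented for the example; the point is that rote output is, almost by definition, automatable.

# A toy boilerplate generator. The spec below is invented for
# illustration; a real generator would read it from a file or schema.
FIELDS = [("host", "str"), ("port", "int"), ("retries", "int")]

def generate_config_class(name, fields):
    # Emit a class with a constructor and a read-only property
    # for each (field, type) pair in the spec.
    lines = [f"class {name}:"]
    params = ", ".join(f"{f}: {t}" for f, t in fields)
    lines.append(f"    def __init__(self, {params}):")
    for f, _ in fields:
        lines.append(f"        self._{f} = {f}")
    for f, t in fields:
        lines.append("")
        lines.append("    @property")
        lines.append(f"    def {f}(self) -> {t}:")
        lines.append(f"        return self._{f}")
    return "\n".join(lines)

print(generate_config_class("ServerConfig", FIELDS))

If a morning's work produces a script like this, the afternoon's question is whether to keep quiet and read novels, or to go find harder problems.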

There are days—I should say mostly late, late nights—when KV wishes the machines would take over. I'd gladly be a battery if I could just have some peace and quiet to think about the higher-order things in computer science, algorithms, operating systems, and efficiency. These creative endeavors are still beyond the reach of whatever it is we call "AI," and KV is willing to bet a lot of drinks that they will remain so for the duration of his—and your—career.

KV

 

George V. Neville-Neil works on networking and operating-system code for fun and profit. He also teaches courses on various subjects related to programming. His areas of interest are computer security, operating systems, networking, time protocols, and the care and feeding of large code bases. He is the author of The Kollected Kode Vicious and coauthor with Marshall Kirk McKusick and Robert N. M. Watson of The Design and Implementation of the FreeBSD Operating System. For nearly 20 years, he has been the columnist better known as Kode Vicious. Since 2014, he has been an industrial visitor at the University of Cambridge, where he is involved in several projects relating to computer security. He earned his bachelor's degree in computer science at Northeastern University in Boston, Massachusetts, and is a member of ACM, the USENIX Association, and IEEE. His software not only runs on Earth, but also has been deployed as part of VxWorks in NASA's missions to Mars. He is an avid bicyclist and traveler who currently lives in New York City.

Copyright © 2023 held by owner/author. Publication rights licensed to ACM.


Originally published in Queue vol. 21, no. 6




