I was lucky. I learned IT in an incredibly immersive way. My first two jobs were in organizations that followed the very best practices for their day. Because it was all I knew, I considered that to be normal. I had no idea how unique those organizations were. I didn't know at the time that these techniques would not be adopted by the rest of the industry for a decade or more.
My next career moves brought me in contact with organizations that did not adhere to the same best practices, nor any others. In fact, they were unaware that such best practices existed at all. I considered this to be a bug and went about fixing it, dismayed that anyone would settle for anything else. I was re-creating what I considered "normal."
The environment I was trying to reproduce, however, was not normal, or more accurately, it was not typical. A typical IT organization would be in relative disarray by comparison. I contend that the quality of IT organizations follows a bell curve: a few percent run like fine-tuned machines, a few percent look like toxic waste dumps on fire, and the vast majority are somewhere in the middle.
Fortunately for me, I won the IT career lottery. Early in my career I saw what the best in class looked like and considered it normal. Later this high standard made me look like a visionary. The truth is I just didn't know any other way.
Most IT practitioners are not so fortunate. They are not blessed with the same experience I was afforded, and they literally don't know any better.
This, I believe, is why the bell curve hasn't transformed into a hockey stick, or even a lopsided blob. This is why we can't have nice things.
Students certainly aren't learning best practices in the classroom. In fact, students are more likely to learn the best-of-breed DevOps practices through extracurricular involvement in open-source projects than from their university professors.
Most large open-source projects use Git for source-code control, use Jenkins for CI/CD (continuous integration/continuous deployment), and have a fully automated testing procedure because it enables them to scale to large numbers of participants with minimal overhead. Smaller open-source projects tend to adopt the same tools because they lack resources, and the tooling makes managing the project significantly easier.
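To make that concrete, here is a sketch of the kind of automated test a CI server such as Jenkins might run on every commit. The word_count function is a made-up, homework-sized example, not taken from any particular project:

```python
# A toy example of the automated testing that CI enforces: the CI server
# runs this suite on every commit and marks the build green only if all
# tests pass.
import unittest

def word_count(text: str) -> int:
    """Toy function under test."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_counts_words(self):
        self.assertEqual(word_count("to be or not to be"), 6)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main()
```

A commit that breaks any test fails the build automatically, which is what lets a project accept contributions from strangers at scale without a human gatekeeper reviewing every change by hand.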
Yet, how many universities require CS homework to be turned in via Git commit? How many universities have an IT department that is a showcase for best DevOps practices? How many universities have CS departments and IT departments that collaborate to push the boundaries of best practices? I assure you the number is very low. It is no surprise that the innovations that led to the DevOps transformation did not come from academia.
How can we make sure students are exposed to the best of the best practices from the start so that they consider anything else a bug?
How can we make curricula more immersive?
Here are a few small and big things that universities could do.
Students should use version-control systems such as Git, and CI/CD tools such as Jenkins, as they do their CS homework. These processes should be established as the normal way to work. Professors should expect homework assignments to be turned in by linking to a Git commit and a Jenkins output log.
Some instructors will undoubtedly feel that it is difficult enough to teach first-year computer science without adding the complexities of Git. Most IDEs, however, make simple check-in/check-out operations a breeze, especially for single-person projects with no branches. By the time projects get more collaborative, the students will be ready for the more advanced Git features.
Last year I spoke to a roomful of third-year computer science majors and was shocked to learn that most didn't know HTML. The curriculum was fairly standard—undergraduate algorithms and such. HTML was something you learned in the art department; the computer science department was for serious students.
I think there's a middle ground between serious computer science theory and accidentally turning into a web application boot camp.
It isn't a radical statement to say that most software engineers write code that is somehow part of a web-based application. At companies like Squarespace and Google, software engineers' IDEs supply default templates for new programs. Such a template is for a self-contained web server that directs output to a web page. Even a simple "Hello world!" program is a web server that outputs the greeting as a result of an HTTP request and, by default, generates logging information, monitoring metrics, and so on.
Yes, that's a bit much for an introductory student's "Print your name 10 times" program. But after that, generate a web page!
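Here is a sketch of what such a program might look like, using only Python's standard library. The actual Google and Squarespace templates are internal, so this is a guess at their general shape rather than a copy, minus the metrics plumbing:

```python
# A "Hello world!" program written as a self-contained web server,
# using only the Python standard library.
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello world!"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        # Route the built-in request log through the logging module,
        # standing in for the logging a production template adds.
        logging.info("%s %s", self.address_string(), fmt % args)

if __name__ == "__main__":
    HTTPServer(("", 8080), HelloHandler).serve_forever()
```

Run it and every request to http://localhost:8080/ returns the greeting and writes a log line, the seed of the logging and monitoring a production template wires in by default.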
How could formal education better emulate the immersive experience that I was lucky enough to benefit from?
Most IT curricula are bottom up. Students are taught individual subsystems, followed by higher levels of abstraction. At the end, they learn how it all fits together. Toward the end of their college careers, they learn the best practices that make all of it sustainable. Or, more typically, those sustainability practices are not learned until later, when the new graduate has a job and is assigned to a coworker who explains "how things work in the real world."
Instead, an IT curriculum should start with a working system that follows all the best practices. Students should see this as the norm. They can dissect the individual subsystems and put them back together, rather than building them from scratch.
The master's curriculum in system administration at the University of Oslo includes a multiweek immersive experience called the Uptime Challenge.1 Students are divided into two teams, and each team is given a web-based application, including multiple web servers, a load balancer, a database, and so on. The application is a simple social network called BookFace.
Once the system is running, the instructor enables a system that sends an ever-increasing amount of simulated traffic to the application. Each team's system is checked for uptime every five minutes. The team receives a certain amount of money (points) if the site is up, and a small bonus if the page loads within 0.5 seconds. If the site is down, money is deducted from the team. This simulates a typical website business model: you make money only if the site is up. Faster sites are more appealing and profitable. Customers react to down or slow sites by switching to competitors; thus, those lower-performing sites lose money.
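Based on that description, the grader's scoring loop might look something like this sketch; the URL, dollar amounts, and intervals here are illustrative placeholders, not the course's actual values:

```python
# Sketch of an Uptime Challenge-style scorer: poll the site every five
# minutes, pay for uptime, add a bonus for fast pages, charge for downtime.
# SITE and the point values are hypothetical.
import time
import urllib.request

SITE = "http://bookface.example.edu/"
UP_REWARD, FAST_BONUS, DOWN_PENALTY = 10, 2, 20
balance = 0

while True:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(SITE, timeout=10) as resp:
            up = resp.status == 200
    except OSError:          # connection refused, timeout, HTTP error
        up = False
    elapsed = time.monotonic() - start

    if up:
        balance += UP_REWARD
        if elapsed < 0.5:    # fast pages earn a small bonus
            balance += FAST_BONUS
    else:
        balance -= DOWN_PENALTY

    print(f"up={up} latency={elapsed:.2f}s balance={balance}")
    time.sleep(300)          # check every five minutes
```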
The challenge lasts multiple weeks, during which the students learn to perform common web-operation tasks such as software upgrades, bug fixing, task automation, performance tuning, and so on. Inspired by Netflix's Chaos Monkey,2 individual hosts are randomly rebooted to test the resiliency of the overall system.
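The chaos-style reboots need not be sophisticated; a sketch like the following would do, assuming a hypothetical host inventory and passwordless ssh with sudo rights on the lab machines:

```python
# Sketch of chaos-style host reboots: at random intervals, pick one host
# and reboot it to test the resiliency of the overall system.
# The inventory and the ssh/sudo setup are hypothetical lab assumptions.
import random
import subprocess
import time

HOSTS = ["web1", "web2", "db1", "lb1"]

while True:
    time.sleep(random.randint(1800, 7200))   # wait 30 to 120 minutes
    victim = random.choice(HOSTS)
    print(f"chaos: rebooting {victim}")
    subprocess.run(["ssh", victim, "sudo", "reboot"], check=False)
```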
The Uptime Challenge enables students to understand IT's value to the organization and to identify the IT processes that impact this value and permit continuous improvement. As a result, students are more motivated and better able to assess their own work. This leads to improved engagement and fosters more practical class discussions. It creates a direct feedback loop between a student's actions and the value they create. Most importantly, it better prepares students for the real world.
IT projects usually involve some kind of legacy system. The most apt analogy is being asked to change the tires on a truck while it is being driven down the highway.
Software engineers spend more time reading other people's code than writing their own. We evolve existing systems. Green-field or "fresh start" opportunities are rare. Many people I've met have never been in a situation where they designed a new network, application, or infrastructure from scratch. Why can't education better prepare students for this?
Could something like the Uptime Challenge be introduced even earlier in the educational process?
Perhaps on the first day of class students should be handed not only copies of the syllabus, but also the username and password to the administrative control panel of a working system. Instruction and labs could be oriented around maintaining this system. Students would have their own wikis to maintain documentation and operational runbooks.
Each student would have their own working system, but I suggest that every few weeks students be randomly reassigned to administer a different system. Seeing how their fellow students had done things differently would be educational. Also, the best way to learn the value of a well-written runbook is to inherit someone else's badly maintained runbook.
Institutions are developing more immersive educational strategies. In cooperation with industry, one Louisiana community college, for example, is developing a DevOps degree program that will be highly immersive. (It hadn't been announced at the time of this publication.)
Education should seek to normalize best practices from the start. Working outside these best practices should be considered a bug. Students should not struggle to learn best practices after graduation, and they should be shocked if potential new employers do not already have these practices in place.
Both IT and CS curricula could be structured to be more immersive, as immersive education more reliably reflects the real world. It prepares students for industry and better informs the research of those who choose that path. Seeing the forest, and then understanding the trees, helps students understand why they are learning something before they learn it. It is more hands-on and therefore more engaging, and lends itself to gamification.
Our first experiences cement what becomes normal for us. Students should start off seeing a well-run system, then dissect it, learn its parts, and progressively dig down into the details. Don't let them see what a badly run system looks like until they have experienced one that is well run. A badly run system should then disgust them.
1. Begnum, K., Anderssen, S. S. 2016. The Uptime Challenge: a learning environment for value-driven operations through gamification. USENIX Journal of Education in System Administration 2(1); https://www.usenix.org/jesa/0201/begnum.
2. Tseitlin, A. 2013. The antifragile organization. Communications of the ACM 56(8): 40-44.
Related articles on queue.acm.org

Undergraduate Software Engineering: Addressing the Needs of Professional Software Development
Michael J. Lutz, et al.
http://queue.acm.org/detail.cfm?id=2653382
A Conversation with Alan Kay
Big talk with the creator of Smalltalk—and much more.
http://queue.acm.org/detail.cfm?id=1039523
Evolution of the Product Manager
Ellen Chisa
Better education needed to develop the discipline
http://queue.acm.org/detail.cfm?id=2683579
Thomas A. Limoncelli is the site reliability engineering manager at Stack Overflow Inc. in New York City. His books include The Practice of Cloud Administration (http://the-cloud-book.com), The Practice of System and Network Administration (http://the-sysadmin-book.com), and Time Management for System Administrators. He blogs at EverythingSysadmin.com and tweets at @YesThatTom. He holds a B.A. in computer science from Drew University.
Copyright © 2017 held by owner/author. Publication rights licensed to ACM.
Originally published in Queue vol. 15, no. 3.