Curmudgeon

Evolution or Revolution?

Mache Creeger, Emergent Technology Associates

Where is the High in High Tech?

We work in an industry that prides itself on “changing the world,” one that chants a constant mantra of innovation and where new products could aptly be described as “this year’s breakthrough of the century.” While there are some genuine revolutions in the technology industry, including cellphones, GPS (global positioning system), quantum computing, encryption, and global access to content, the vast majority of new product introductions are evolutionary, not revolutionary. Real technical breakthroughs are few and far between. Most new products are just a recycling of an earlier idea.

The effort seems to go into refinement, not into changing the core paradigm. When I was in graduate school some three decades ago, I believed the world would have changed dramatically by now; pure research in computer science was introducing new concepts at breathtaking speed. Looking back, however, I find that things have not changed all that much. Most of the technologies I studied in graduate school are still widely used, and the way work gets accomplished is relatively unchanged. For the most part, we see the same recycled concepts presented over and over with shiny new paint jobs and touted as groundbreaking new solutions. It seems to me that the pace of innovation has actually slowed. As private investment poured into R&D (research and development), dwarfing government funding, the vast majority of intellectual property became privately held, and we have seen less, not more, genuine innovation. Let me give you some examples.

Application Retreads

A while back I met a guy who was launching a company to do corporate accounting as a service over the Web. He had made a lot of money during the course of his career and was confident that he could repeat his success in this new venture.

Why was he so confident? He had implemented the same basic business over and over again, simply changing the technology platform to match whatever was in vogue. Using this approach, he had built successful accounting businesses on minicomputers, workstations, PCs, and client/server systems, and at that point he was working to become an ASP (application service provider). He was confident he would succeed because the benefits to the customer remained essentially the same; only the technology platform and the price point differed.

The Internet

For those of us old enough to remember, the Internet is more than 30 years old. Yes, it has changed and evolved, but the basic idea of internetworking individual computers into a large WAN (wide area network) is old stuff. Many of the networking constructs in use today are 20 to 30 years old. I expected that by now we would be using a much richer set of control structures than what is currently termed “state-of-the-art.”

TCP/IP was standardized in the early 1980s and is relatively unchanged. FTP, which has been around since the mid-1970s, hasn't changed much at all. Telnet has been mostly superseded by its encryption-enabled twin, SSH (Secure Shell), but the character-based terminal emulation both provide is still in heavy use and is still modeled on an obsolete product from a company that no longer exists (Digital Equipment Corporation's VT100). Moreover, if you look at the basic architecture of a Web server, you will see a great deal in common with the FTP and Telnet protocols, plus graphics rendering and some added automation. Similar observations can be made about e-mail built on the POP and SMTP protocols, and about RPC (Remote Procedure Call) as it evolved through NFS (Network File System), CORBA, and SOAs (service-oriented architectures). With a small set of exceptions, the basic control structures available for computer networking have changed little in the past 20 years.
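
To see how little the underlying model has changed, note that a Web page can still be fetched much the way one drove FTP or Telnet decades ago: by sending line-oriented text commands down a TCP socket. The following minimal sketch, in Python and using example.com purely as a stand-in host, is illustrative rather than production code:

    # Speaking HTTP "by hand" over a raw TCP socket, much as one would
    # type commands at an FTP or Telnet prompt. example.com is only a
    # placeholder host for illustration.
    import socket

    HOST, PORT = "example.com", 80

    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        # HTTP/1.1, like FTP's control channel, is a line-oriented text
        # protocol: CRLF-terminated commands followed by a readable reply.
        request = (
            f"GET / HTTP/1.1\r\n"
            f"Host: {HOST}\r\n"
            "Connection: close\r\n"
            "\r\n"
        )
        sock.sendall(request.encode("ascii"))

        response = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            response += chunk

    # Print just the status line and headers.
    print(response.split(b"\r\n\r\n", 1)[0].decode("ascii", errors="replace"))

The CRLF-terminated commands and the human-readable status line are exactly the kind of 1970s-era text-protocol conventions described above; the Web server merely layers content handling and rendering on top.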

IT Infrastructure Efficiency

In graduate school I studied the IBM 360/67 and CP-67, late-1960s technology that showed how a single physical machine could support multiple virtual machines. Now this technology has become red-hot as a way to compartmentalize a large number of application services on a single large server. On the other end of the spectrum, companies are borrowing the HPC (high-performance computing) architectures of supercomputers to link multiple generic servers via Gigabit Ethernet, dynamically creating clusters that support application services. These mature paradigms have been rediscovered as a result of the continual drive toward increased IT efficiency.
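
To make the consolidation argument concrete, here is a minimal sketch of why folding many lightly loaded services onto shared hosts is so attractive. The service loads, host capacity, and the simple first-fit-decreasing packing are all invented for illustration:

    # A rough sketch of server consolidation, the economics behind the
    # virtualization revival. All numbers are invented for illustration;
    # real capacity planning is far more involved.
    def consolidate(loads, host_capacity):
        """Pack per-service CPU demands onto hosts using first-fit decreasing."""
        hosts = []  # each entry is the remaining capacity of one host
        for load in sorted(loads, reverse=True):
            for i, free in enumerate(hosts):
                if load <= free:
                    hosts[i] -= load
                    break
            else:
                hosts.append(host_capacity - load)  # bring up a new host
        return len(hosts)

    # Twenty services, each using 5 to 20 percent of a CPU: one service
    # per box means 20 dedicated servers...
    service_loads = [0.05, 0.10, 0.15, 0.20] * 5
    print("dedicated servers: ", len(service_loads))
    # ...but run as virtual machines on shared hosts, a handful suffice.
    print("consolidated hosts:", consolidate(service_loads, host_capacity=0.8))

With these made-up numbers the 20 services pack onto four hosts, which is the kind of arithmetic that makes a late-1960s idea look red-hot again.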

Revitalizing the Innovation Engine

The decline of pure research in computer technology is largely to blame. With the majority of R&D undertaken for commercial purposes, overall risk is minimized while ROI (return on investment) is maximized. The commercial aim is to produce a solution that the customer can buy and integrate right now, not to rethink the entire approach. As industry continues to pick off the low-hanging fruit, true innovation, which by definition is high risk, becomes increasingly infrequent. As a result, things start to stagnate and people like me bemoan the lack of true innovation.

One example is the industry's failure to leverage the increase in desktop computational resources over the past 25 years. We have gone from a 4-MHz CPU with a 360-KB floppy disk to a 4-GHz CPU with a 500-GB disk (a thousandfold increase in CPU speed and a millionfold increase in storage), yet the benefits to the end user remain relatively unchanged. People still struggle to debug spreadsheet formulas.
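
A quick back-of-the-envelope check, assuming decimal units and ignoring clock-for-clock architectural improvements, bears out those ratios:

    # Rough ratios for the figures cited above (decimal units assumed).
    cpu_ratio = 4e9 / 4e6        # 4 GHz vs. 4 MHz
    disk_ratio = 500e9 / 360e3   # 500-GB disk vs. 360-KB floppy
    print(f"CPU: {cpu_ratio:,.0f}x  storage: {disk_ratio:,.0f}x")
    # Prints roughly: CPU: 1,000x  storage: 1,388,889x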

The massive accrual of business-method and algorithm patents by companies has further exacerbated the problem. Pioneered by IBM, large patent portfolios have become competitive weapons and profit centers, fencing off vast tracts of intellectual property from innovation by other parties. Large companies can afford to enter into cross-license agreements with each other, sharing patent portfolios. These tactics, however, lock out innovation by smaller companies and research institutions that have no portfolios to trade or cannot afford the license fees.

What to do? To grow the innovation pie, industry participants have to do two things. First, they must manage R&D budgets as investment portfolios, with the majority of projects in a low-risk, moderate-return category and a select set of projects in a higher-risk, higher-return category. If companies accept that it is OK to fail sometimes, they can reap huge rewards when a high-risk venture succeeds. Second, something has to break the stifling effect of large intellectual-property portfolios: either patent law must be reformed, or companies should consider donating patents to the public domain to spur innovation across the board. Both ideas are being explored.

I hope something happens soon. I hate discussing start-ups and business initiatives with my peers as if we were pitching a new movie in LA: a small twist on yet another sequel.

MACHE CREEGER ([email protected]) is a 30-year technology industry veteran based in Silicon Valley. He is the principal of Emergent Technology Associates, business development consultants to technology companies worldwide.

Originally published in Queue vol. 4, no. 3

© ACM, Inc. All Rights Reserved.