Departments

A "Perspectival" Mirror of the Elephant:
Investigating language bias on Google, ChatGPT, YouTube, and Wikipedia

Many people turn to Internet-based software platforms such as Google, YouTube, Wikipedia, and more recently ChatGPT to find the answers to their questions. Most people tend to trust Google Search when it states that its mission is to deliver information from "many angles so you can form your own understanding of the world." Yet our work finds that queries involving complex topics yield results focused on a narrow set of culturally dominant views, and that these views are correlated with the language used in the search phrase. We call this phenomenon language bias, and this article shows how it occurs using the example of two complex topics: Buddhism and liberalism.

March 12, 2024

Topic: Privacy and Rights

Software Drift:
Open source forking

Since the systems have a common parent, they probably work in the same technical domain, and therefore the features and fixes that are going to be added are probably similar. KV happens to have an example case at hand: two operating systems that diverged before they added SMP (symmetric multiprocessing) support. When an operating system adds SMP to an existing kernel, the first thing we think of is locks, those handy-dandy little performance killers that we've all been sprinkling around our code since the end of Dennard scaling.

March 12, 2024

Topic: Open Source

Challenges in Adopting and Sustaining Microservice-based Software Development:
Organizational challenges can be more difficult than technical ones.

MS (microservice) has become the latest buzzword in software development. The MS approach to software development offers an alternative to the conventional monolith style. While the benefits of MS-based development over the monolith style are clear, industry experts agree that neither style provides an absolute advantage in all situations. Proponents contend that an MS approach more readily facilitates mapping organizational changes arising from a more dynamic business environment to corresponding IT/IS (information technology/information systems) changes. This article identifies key challenges, from the initial decision to adopt MSs to the ongoing task of sustaining the new paradigm over the long haul.

March 11, 2024

Topic: Development

Free and Open Source Software - and Other Market Failures:
Open source is not a goal as much as a means to an end.

Open source was not so much the goal itself as a means to an end, which is freedom: freedom to fix broken things, freedom from people who thought they could clutch the source code tightly and wield our ignorance of it as a weapon to force us all to pay for and run Windows Vista. But the FOSS movement has won what it wanted, and no matter how much oldsters dream about their glorious days as young revolutionaries, it is not coming back, because the frustrations and anger of IT in 2024 are entirely different from those of 1991.

March 11, 2024

Topic: Open Source

Give Your Project a Name:
It goes a long way toward creating a cohesive team with strong morale.

While some people are driven by infinite backlogs and iteration, others prefer launches and deadlines. Over the years, I have found certain milestones to be instrumental in creating a cohesive team with strong morale. When people have to work together to get through a challenging task, reaching those milestones brings them together.

March 7, 2024

Topic: Business/Management

From Open Access to Guarded Trust:
Experimenting responsibly in the age of data privacy

The last decade witnessed the emergence and strengthening of data protection regulations. For software engineers, this new era poses a unique challenge: How do you maintain the precision and efficacy of your platforms when complete data access, one of your most potent tools, is gradually being taken off the table? The mission is clear: Reinvent the toolkit. The way we perceive, handle, and experiment with data needs a drastic overhaul to navigate this brave new world.

March 6, 2024

Topic: Privacy and Rights

Developer Ecosystems for Software Safety:
Continuous assurance at scale

How to design and implement information systems so that they are safe and secure is a complex topic. Both high-level design principles and implementation guidance for software safety and security are well established and broadly accepted. For example, Jerome Saltzer and Michael Schroeder's seminal overview of principles of secure design was published almost 50 years ago, and various community and governmental bodies have published comprehensive best practices about how to avoid common software weaknesses.

February 29, 2024

Topic: Security

Programmer Job Interviews: The Hidden Agenda

Top tech interviews test coding and CS knowledge overtly, but they also evaluate a deeper technical instinct so subtly that candidates seldom notice the appraisal. We'll learn how interviewers create questions to covertly measure a skill that sets the best programmers above the rest. Equipped with empathy for the interviewer, you can prepare to shine on the job market by seizing camouflaged opportunities.

January 15, 2024

Topic: Business/Management

DevEx in Action:
A study of its tangible impacts

DevEx (developer experience) is garnering increased attention at many software organizations as leaders seek to optimize software delivery amid the backdrop of fiscal tightening and transformational technologies such as AI. Intuitively, there is acceptance among technical leaders that good developer experience enables more effective software delivery and developer happiness. Yet, at many organizations, proposed initiatives and investments to improve DevEx struggle to get buy-in as business stakeholders question the value proposition of improvements.

January 14, 2024

Topic: Development

Resolving the Human-subjects Status of Machine Learning's Crowdworkers:
What ethical framework should govern the interaction of ML researchers and crowdworkers?

In recent years, machine learning (ML) has relied heavily on crowdworkers both for building datasets and for addressing research questions requiring human interaction or judgment. The diversity of both the tasks performed and the uses of the resulting data renders it difficult to determine when crowdworkers are best thought of as workers versus human subjects. These difficulties are compounded by conflicting policies, with some institutions and researchers regarding all ML crowdworkers as human subjects and others holding that they rarely constitute human subjects. Notably, few ML papers involving crowdwork mention IRB oversight, raising the prospect of non-compliance with ethical and regulatory requirements.

January 14, 2024

Topic: AI

Is There Another System?:
Computer science is the study of what can be automated.

One of the easiest tests to determine if you are at risk is to look hard at what you do every day and see if you, yourself, could code yourself out of a job. Programming involves a lot of rote work: templating, boilerplate, and the like. If you can see a way to write a system to replace yourself, either do it, don't tell your bosses, and collect your salary while reading novels in your cubicle, or look for something more challenging to work on.

January 12, 2024

Topic: AI

Automatically Testing Database Systems:
DBMS testing with test oracles, transaction history, and fuzzing

The automated testing of DBMSs (database management systems) is an exciting, interdisciplinary effort that has seen many innovations in recent years. The examples addressed here represent different perspectives on this topic, reflecting strands of research from the software engineering, (database) systems, and security communities. They give only a glimpse into these research strands, as many additional interesting and effective works have been proposed. Various approaches generate pairs of related tests to find both logic bugs and performance issues in a DBMS. Similarly, other isolation-level testing approaches have been proposed.
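
The flavor of such differential approaches is easy to demonstrate. Below is a minimal sketch of one differential test oracle, using SQLite purely as a stand-in DBMS and an invented schema: run the same query with and without an index, and flag any divergence in the order-insensitive result sets as a candidate logic bug.

```python
# A minimal sketch of a differential test oracle, assuming "same query,
# with and without an index" as the two systems under test. Any mismatch
# in the order-insensitive result sets flags a candidate logic bug.
import random
import sqlite3

def setup(with_index: bool) -> sqlite3.Connection:
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
    rng = random.Random(42)  # identical data in both databases
    rows = [(rng.randint(-5, 5), rng.randint(-5, 5)) for _ in range(1000)]
    db.executemany("INSERT INTO t VALUES (?, ?)", rows)
    if with_index:
        db.execute("CREATE INDEX idx_a ON t(a)")
    return db

def check(query: str) -> None:
    plain, indexed = setup(False), setup(True)
    r1 = sorted(plain.execute(query).fetchall())
    r2 = sorted(indexed.execute(query).fetchall())
    assert r1 == r2, f"logic-bug candidate: {query!r}"

for lo in range(-5, 6):
    check(f"SELECT a, b FROM t WHERE a >= {lo} AND b < a")
print("no discrepancies found")
```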

January 12, 2024

Topic: Databases

How to Design an ISA:
The popularity of RISC-V has led many to try designing instruction sets.

Over the past decade I've been involved in several projects that have designed either ISA (instruction set architecture) extensions or clean-slate ISAs for various kinds of processors (you'll even find my name in the acknowledgments for the RISC-V spec, right back to the first public version). When I started, I had very little idea about what makes a good ISA, and, as far as I can tell, this isn't formally taught anywhere.

January 11, 2024

Topic: Computer Architecture

What do Trains, Horses, and Home Internet Installation have in Common?:
Avoid changes mid-process

At first, I thought he was just trying to shirk his responsibilities and pass the buck on to someone else. His advice, however, made a lot of sense. The installation team probably generated configurations ahead of time, planned out how and when those changes need to be activated, and so on. The entire day is planned ahead. Bureaucracies usually have a happy path that works well, and any deviation requires who knows what? Managers getting involved? Error-prone manual steps? Ad hoc database queries? There's no way I could know.

January 10, 2024

Topic: System Administration

Multiparty Computation: To Secure Privacy, Do the Math:
A discussion with Nigel Smart, Joshua W. Baron, Sanjay Saravanan, Jordan Brandt, and Atefeh Mashatan

Multiparty Computation is based on complex math, and over the past decade, MPC has been harnessed as one of the most powerful tools available for the protection of sensitive data. MPC now serves as the basis for protocols that let a set of parties interact and compute on a pool of private inputs without revealing any of the data contained within those inputs. In the end, only the results are revealed. The implications of this can often prove profound.
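
To make this concrete, here is a minimal sketch of one classic MPC building block, additive secret sharing, with invented inputs. Production protocols such as those the panel discusses are far more elaborate, but the property is the same: only the result is revealed.

```python
# A minimal sketch of additive secret sharing over a prime field: each party
# splits its private input into random shares that sum to the input mod P.
# Summing everyone's shares reveals the aggregate and nothing else.
import random

P = 2**61 - 1  # a prime modulus; all arithmetic is mod P

def share(secret: int, n_parties: int) -> list[int]:
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)  # forces the sum to the secret
    return shares

def secure_sum(private_inputs: list[int]) -> int:
    n = len(private_inputs)
    # party i sends its j-th share to party j; no party sees another's input
    received = [[] for _ in range(n)]
    for x in private_inputs:
        for j, s in enumerate(share(x, n)):
            received[j].append(s)
    # each party publishes only the sum of the shares it holds
    partials = [sum(r) % P for r in received]
    return sum(partials) % P  # the result, and only the result, is revealed

salaries = [83_000, 91_000, 76_000]
assert secure_sum(salaries) == sum(salaries)
print("aggregate:", sum(salaries))
```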

January 9, 2024

Topic: Privacy and Rights

The Security Jawbreaker:
Access to a system should not imply authority to use it. Enter the principle of complete mediation.

When someone stands at the front door of your home, what are the steps to let them in? If it is a member of the family, they use their house key, unlocking the door using the authority the key confers. For others, a knock at the door or doorbell ring prompts you to make a decision. Once in your home, different individuals have differing authority based on who they are. Family members have access to your whole home. A close friend can roam around unsupervised, with a high level of trust. An appliance repair person is someone you might supervise for the duration of the job to be done.

December 3, 2023

Topic: Security

Improving Testing of Deep-learning Systems:
A combination of differential and mutation testing results in better test data.

We used differential testing to generate test data to improve diversity of data points in the test dataset and then used mutation testing to check the quality of the test data in terms of diversity. Combining differential and mutation testing in this fashion improves mutation score, a test data quality metric, indicating overall improvement in testing effectiveness and quality of the test data when testing deep learning systems.
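
As a rough illustration of the metric, here is a minimal sketch of a mutation score on a toy model, assuming random weight perturbations as stand-ins for the paper's deep-learning mutation operators: a more diverse test set kills more mutants and therefore scores higher.

```python
# A minimal sketch of the mutation-score metric on a toy linear classifier.
# A mutant is "killed" when some test input makes its prediction diverge
# from the original model's; mutation score = killed / total mutants.
import random

def model(x: float, w: list[float]) -> int:
    return 1 if w[0] * x + w[1] > 0 else 0

def mutation_score(w: list[float], test_xs: list[float],
                   n_mutants: int = 200) -> float:
    rng = random.Random(0)
    killed = 0
    for _ in range(n_mutants):
        m = list(w)
        i = rng.randrange(len(m))
        m[i] += rng.choice([-1.0, 1.0]) * rng.uniform(0.5, 2.0)  # mutate
        if any(model(x, m) != model(x, w) for x in test_xs):
            killed += 1
    return killed / n_mutants

w = [1.0, -0.5]  # decision boundary at x = 0.5
sparse = [2.0, 3.0]                          # points far from the boundary
diverse = [x / 10 for x in range(-20, 21)]   # points straddling the boundary
print("sparse test set: ", mutation_score(w, sparse))
print("diverse test set:", mutation_score(w, diverse))  # higher score
```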

November 30, 2023

Topic: AI

Dear Diary:
On keeping a laboratory notebook

While a debug log is helpful, it's not the same thing as a laboratory notebook. If more computer scientists acted like scientists, we wouldn't have to fight over whether computing is an art or a science.

November 29, 2023

Topic: Development

Low-code Development Productivity:
"Is winter coming" for code-based technologies?

This article aims to provide new insights on the subject by presenting the results of laboratory experiments carried out with code-based, low-code, and extreme low-code technologies to study differences in productivity. Low-code technologies have clearly shown higher levels of productivity, providing strong arguments for low-code to dominate the software development mainstream in the short/medium term. The article reports the procedure and protocols, results, limitations, and opportunities for future research.

November 27, 2023

Topic: Development

Software Managers' Guide to Operational Excellence:
The secret to being a great engineering leader? Setting up the right checks and balances.

Software engineering managers (or any senior technical leaders) have many responsibilities: the care and feeding of the team, delivering on business outcomes, and keeping the product/system/application up and running and in good order. Each of these areas can benefit from a systematic approach. The one I present here is setting up checks and balances for the team's operational excellence.

November 14, 2023

Topic: Business/Management

Use Cases are Essential:
Use cases provide a proven method to capture and explain the requirements of a system in a concise and easily understood format.

While the software industry is a fast-paced and exciting world in which new tools, technologies, and techniques are constantly being developed to serve business and society, it is also forgetful. In its haste for fast-forward motion, it is subject to the whims of fashion and can forget or ignore proven solutions to some of the eternal problems that it faces. Use cases, first introduced in 1986 and popularized later, are one of those proven solutions.

November 11, 2023

Topic: Development

Device Onboarding using FDO and the Untrusted Installer Model:
FDO's untrusted model is contrasted with Wi-Fi Easy Connect to illustrate the advantages of each mechanism.

Automatic onboarding of devices is an important technique to handle the increasing number of "edge" and IoT devices being installed. Onboarding of devices is different from most device-management functions because the device's trust transitions from the factory and supply chain to the target application. To speed the process with automatic onboarding, the trust relationship in the supply chain must be formalized in the device to allow the transition to be automated.

November 9, 2023

Topic: Networks

Knowing What You Need to Know:
Personal, team, and organizational effectiveness can be improved with a little preparation

Blockers can take a tiny task and stretch it over days or weeks. Taking a moment at the beginning of a project to look for and prevent possible blockers can improve productivity. These examples at the personal, team, and organizational levels show how gathering the right information and performing preflight checks can save hours of wasted time later.

September 21, 2023

Topic: Business/Management

Halfway Around the World:
Learn the language, meet the people, eat the food

Not only do different cultures treat different features differently, but they also treat each other differently. How people act with respect to each other is a topic that can, and does, fill volumes of books that, as nerds, we probably have never read, but finding out a bit about where you're heading is a good idea. You can try to ask the locals, although people generally are so enmeshed in their own cultures that they have a hard time explaining them to others.

September 20, 2023

Topic: Business/Management

Protecting Secrets from Computers

Bob is in prison and Alice is dead; they trusted computers with secrets. Review time-tested tricks that can help you avoid the grim fate of the old crypto couple.

September 20, 2023

Topic: Security

Creating the First Confidential GPUs:
The team at NVIDIA brings confidentiality and integrity to user code and data for accelerated computing.

Today's datacenter GPU has a long and storied 3D graphics heritage. In the 1990s, graphics chips for PCs and consoles had fixed pipelines for geometry, rasterization, and pixels using integer and fixed-point arithmetic. In 1999, NVIDIA invented the modern GPU, which put a set of programmable cores at the heart of the chip, enabling rich 3D scene generation with great efficiency.

September 7, 2023

Topic: Security

Why Should I Trust Your Code?:
Confidential computing enables users to authenticate code running in TEEs, but users also need evidence this code is trustworthy.

For Confidential Computing to become ubiquitous in the cloud, in the same way that HTTPS became the default for networking, a different, more flexible approach is needed. Although there is no guarantee that every malicious code behavior will be caught upfront, precise auditability can be guaranteed: Anyone who suspects that trust has been broken by a confidential service should be able to audit any part of its attested code base, including all updates, dependencies, policies, and tools. To achieve this, we propose an architecture to track code provenance and to hold code providers accountable. At its core, a new Code Transparency Service (CTS) maintains a public, append-only ledger that records all code deployed for confidential services.
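
The append-only property at the heart of such a ledger can be sketched with a simple hash chain. This illustrates the data structure only, not the proposed CTS design, and the record fields are hypothetical.

```python
# A minimal sketch of an append-only ledger via hash chaining: each record
# commits to the entire history before it, so rewriting any past entry is
# detectable. Field names are invented; this is not the CTS protocol itself.
import hashlib
import json

GENESIS = hashlib.sha256(b"genesis").hexdigest()

def digest(record: dict) -> str:
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()

class AppendOnlyLedger:
    def __init__(self):
        self.entries: list[dict] = []
        self.head = GENESIS

    def append(self, claim: dict) -> str:
        record = {"prev": self.head, "claim": claim}
        self.entries.append(record)
        self.head = digest(record)
        return self.head  # a receipt an auditor can check later

    def verify(self) -> bool:
        head = GENESIS
        for record in self.entries:
            if record["prev"] != head:
                return False  # the chain was rewritten
            head = digest(record)
        return head == self.head

ledger = AppendOnlyLedger()
ledger.append({"artifact": "service-v1.2.0", "policy": "reviewed"})
ledger.append({"artifact": "service-v1.2.1", "policy": "reviewed"})
assert ledger.verify()
ledger.entries[0]["claim"]["artifact"] = "evil"  # tamper with history
assert not ledger.verify()                       # tampering is detected
```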

September 7, 2023

Topic: Security

Hardware VM Isolation in the Cloud:
Enabling confidential computing with AMD SEV-SNP technology

Confidential computing is a security model that fits well with the public cloud. It enables customers to rent VMs while enjoying hardware-based isolation that ensures that a cloud provider cannot purposefully or accidentally see or corrupt their data. SEV-SNP was the first commercially available x86 technology to offer VM isolation for the cloud and is deployed in Microsoft Azure, AWS, and Google Cloud. As confidential computing technologies such as SEV-SNP develop, confidential computing is likely to simply become the default trust model for the cloud.

September 7, 2023

Topic: Security

Confidential Computing: Elevating Cloud Security and Privacy:
Working toward a more secure and innovative future

Confidential Computing (CC) fundamentally improves our security posture by drastically reducing the attack surface of systems. While traditional systems encrypt data at rest and in transit, CC extends this protection to data in use. It provides a novel, clearly defined security boundary, isolating sensitive data within trusted execution environments during computation. This means services can be designed that segment data based on least-privilege access principles, while all other code in the system sees only encrypted data. Crucially, the isolation is rooted in novel hardware primitives, effectively rendering even the cloud-hosting infrastructure and its administrators incapable of accessing the data.

September 7, 2023

Topic: Security

Stone Knives and Bear Skins

If you look at the software tooling landscape, you see that the majority of developers work with either open-source tools; or tools from the recently reformed home of proprietary software, Microsoft, which has figured out that its Visual Studio Code system is a good way to sucker people into working with its platforms; or finally Apple, whose tools are meant only for its platform. In specialized markets, such as deeply embedded, military, and aerospace, there are proprietary tools that are often far worse than their open-source cousins, because the market for such tools is small but lucrative.

July 18, 2023

Topic: Debugging

Pointers in Far Memory:
A rethink of how data and computations should be organized

Effectively exploiting emerging far-memory technology requires consideration of operating on richly connected data outside the context of the parent process. Operating-system technology in development offers help by exposing abstractions such as memory objects and globally invariant pointers that can be traversed by devices and newly instantiated compute. Such ideas will allow applications running on future heterogeneous distributed systems with disaggregated memory nodes to exploit near-memory processing for higher performance and to independently scale their memory and compute resources for lower cost.

July 17, 2023

Topic: Data

Don't "Think of the Internet!":
No human right is absolute.

I cannot help but notice few women subscribe to absolutist views of electronic privacy and anonymity. Can it be that only people who play life on the easiest setting find unrestricted privacy and anonymity a great idea?

July 10, 2023

Topic: Privacy and Rights

How Flexible is CXL's Memory Protection?:
Replacing a sledgehammer with a scalpel

CXL, a new interconnect standard for cache-coherent memory sharing, is becoming a reality - but its security leaves something to be desired. Decentralized capabilities are flexible and resilient against malicious actors, and should be considered while CXL is under active development.

July 5, 2023

Topic: Security

Security Mismatch:
Security must be a business enabler, not a hinderer.

Information security teams that say 'no' need to change. Hiding behind a moat makes repelling attacks easy, but bridges allow you to replenish supplies and foster relationships with customers' castles. Remember, a security team's role is to empower their business to move forward with confidence, not to hinder progress.

July 3, 2023

Topic: Security

Managing Hybrid Teams:
The combination of on-site and remote workers takes extra effort from team leaders.

After three years of working remotely, many companies are asking their people to return to the office. Not everyone is coming back, however. With some people in the office and some still working from home, leaders must get this transition to hybrid work right. Hybrid is the worst of both worlds in some ways. You can easily end up creating two experiences, one for the people in the office and one for the remote workers, which can lead to problems that will compound over time and have long-term damaging effects on your team.

June 29, 2023

Topic: Business/Management

Echoes of Intelligence:
Textual interpretation and large language models

We are now in the presence of a new medium disguised as good old text, but that text has been generated by an LLM, without authorial intention—an aspect that, if known beforehand, completely changes the expectations and response a human should have from a piece of text. Should our interpretation capabilities be engaged? If yes, under what conditions? The rules of the language game should be spelled out; they should not be passed over in silence.

June 27, 2023

Topic: AI

You Don't Know Jack about Application Performance:
Knowing whether you're doomed to fail is important when starting a project.

You don't need to do a full-scale benchmark any time you have a performance or capacity planning problem. A simple measurement will provide the bottleneck point of your system: This example program will get significantly slower after eight requests per second per CPU. That's often enough to tell you the most important thing: if you're going to fail.
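
Here is a minimal sketch of that kind of simple measurement, with an invented CPU-bound handler standing in for a real request path: drive the system at increasing offered rates and watch where latency turns the corner.

```python
# A minimal sketch of finding the bottleneck point by measurement. The
# handler below burns ~10 ms of CPU per request, so the knee should appear
# near 100 req/s per CPU; latency balloons once the offered rate exceeds it.
import time

def handle_request() -> None:
    t0 = time.perf_counter()
    while time.perf_counter() - t0 < 0.010:  # ~10 ms of CPU work
        pass

def mean_latency(rate_per_s: float, duration_s: float = 2.0) -> float:
    interval = 1.0 / rate_per_s
    due = time.perf_counter()
    end = due + duration_s
    latencies = []
    while due < end:
        now = time.perf_counter()
        if now < due:
            time.sleep(due - now)  # wait for the next scheduled arrival
        handle_request()
        latencies.append(time.perf_counter() - due)  # includes queueing delay
        due += interval
    return sum(latencies) / len(latencies)

for rate in (20, 50, 80, 95, 110):
    print(f"{rate:4d} req/s -> mean latency {mean_latency(rate) * 1000:7.1f} ms")
```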

May 24, 2023

Topic: Performance

The Human Touch:
There is no substitute for good, direct, honest training.

The challenge of providing a safe communications environment in the face of such social engineering attacks isn't just the technology; it's also people. As anyone who has done serious work in computer security knows, the biggest problems are between the keyboard and the chair. Most people by default trust other people and are willing to give them the benefit of the doubt.

May 18, 2023

Topic: Business/Management

OS Scheduling:
Better scheduling policies for modern computing systems

In any system that multiplexes resources, the problem of scheduling what computations run where and when is perhaps the most fundamental. Yet, like many other essential problems in computing (e.g., query optimization in databases), academic research in scheduling moves like a pendulum, with periods of intense activity followed by periods of dormancy when it is considered a "solved" problem. These three papers make significant contributions to an ongoing effort to develop better scheduling policies for modern computing systems.

May 16, 2023

Topic: Computer Architecture

Cargo Cult AI:
Is the ability to think scientifically the defining essence of intelligence?

Evidence abounds that the human brain does not innately think scientifically; however, it can be taught to do so. The same species that forms cargo cults around widespread and unfounded beliefs in UFOs, ESP, and anything read on social media also produces scientific luminaries such as Sagan and Feynman. Today's cutting-edge LLMs are also not innately scientific. But unlike the human brain, there is good reason to believe they never will be unless new algorithmic paradigms are developed.

May 11, 2023

Topic: AI

Beyond the Repository:
Best practices for open source ecosystems researchers

Much of the existing research about open source elects to study software repositories instead of ecosystems. An open source repository most often refers to the artifacts recorded in a version control system and occasionally includes interactions around the repository itself. An open source ecosystem refers to a collection of repositories, the community, their interactions, incentives, behavioral norms, and culture. The decentralized nature of open source makes holistic analysis of the ecosystem an arduous task, with communities and identities intersecting in organic and evolving ways. Despite these complexities, the increased scrutiny on software security and supply chains makes it of the utmost importance to take an ecosystem-based approach when performing research about open source.

May 4, 2023

Topic: Open Source

DevEx: What Actually Drives Productivity:
The developer-centric approach to measuring and improving productivity

Developer experience focuses on the lived experience of developers and the points of friction they encounter in their everyday work. In addition to improving productivity, DevEx drives business performance through increased efficiency, product quality, and employee retention. This paper provides a practical framework for understanding DevEx, and presents a measurement framework that combines feedback from developers with data about the engineering systems they interact with. These two frameworks provide leaders with clear, actionable insights into what to measure and where to focus in order to improve developer productivity.

May 3, 2023

Topic: Business/Management

Designing a Framework for Conversational Interfaces:
Combining the latest advances in machine learning with earlier approaches

Wherever possible, business logic should be described by code rather than training data. This keeps our system's behavior principled, predictable, and easy to change. Our approach to conversational interfaces allows them to be built much like any other application, using familiar tools, conventions, and processes, while still taking advantage of cutting-edge machine-learning techniques.

April 7, 2023

Topic: AI

The Parchment Path?:
Is there ever a time when learning is not of value - for its own sake?

The greater the risk, the greater the reward, and if you do succeed, it will be an achievement that you can look back on and smile wryly about. Postdocs never laugh because postdocs are post-laughter. However, there are some things to consider before plunking down your application fee and writing all those essays.

April 5, 2023

Topic: Education

Opportunity Cost and Missed Chances in Optimizing Cybersecurity:
The loss of potential gain from other alternatives when one alternative is chosen

Opportunity cost should not be an afterthought when making security decisions. One way to ease into considering complex alternatives is to consider the null baseline of doing nothing instead of the choice at hand. Opportunity cost can feel abstract, elusive, and imprecise, but it can be understood by everyone, given the right introduction and framing. Using the approach presented here will make it natural and accessible.
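
A worked example of the null-baseline framing, with entirely hypothetical numbers:

```python
# Hypothetical numbers, invented for illustration: compare a proposed
# control against the null baseline of doing nothing, counting both the
# risk reduced and what the same budget could earn elsewhere.
annual_loss_if_idle   = 400_000  # expected breach loss with no action
annual_loss_with_ctrl = 150_000  # expected loss after the control
control_cost          = 120_000  # licenses, rollout, upkeep
alternative_return    = 90_000   # the best forgone alternative, i.e.,
                                 # the opportunity cost of the control

benefit_vs_nothing = annual_loss_if_idle - annual_loss_with_ctrl  # 250,000
net_vs_nothing     = benefit_vs_nothing - control_cost            # 130,000
net_vs_best_alt    = net_vs_nothing - alternative_return          #  40,000

print(f"net vs. doing nothing:        ${net_vs_nothing:,}")
print(f"net vs. the best alternative: ${net_vs_best_alt:,}")
# Positive either way here, but a smaller risk reduction or a stronger
# alternative flips the sign, which is exactly the decision at stake.
```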

April 4, 2023

Topic: Security

Improvement on End-to-End Encryption May Lead to Silent Revolution:
Researchers are on the brink of what could be the next big improvement in communication privacy.

Privacy is an increasing concern, whether you are texting with a business associate or transmitting volumes of data over the Internet. Over the past few decades, cryptographic techniques have enabled privacy improvements in chat apps and other electronic forms of communication. Now researchers are on the brink of what could be the next big improvement in communication privacy: E2EEEE (End-to-End Encryption with Endpoint Elimination). This article is based on interviews with researchers who plan on presenting at a symposium on the topic scheduled for April 1, 2023.

March 30, 2023

Topic: Privacy and Rights

Catch-23: The New C Standard Sets the World on Fire

A new major revision of the C programming language standard is nearly upon us. C23 introduces pleasant conveniences, retains venerable traps for the unwary, and innovates a gratuitous catastrophe. A few steps forward, much sideways shuffling, and a drunken backward stumble into the fireplace come together in the official dance of C standardization, the Whiskey Tango Foxtrot.

March 29, 2023

Topic: Programming Languages

Sharpening Your Tools:
Updating bulk_extractor for the 2020s

This article presents our experience updating the high-performance digital forensics tool BE (bulk_extractor) a decade after its initial release. Between 2018 and 2022, we updated the program from C++98 to C++17. We also performed a complete code refactoring and adopted a unit test framework. DF (digital forensics) tools must be frequently updated to keep up with changes in the ways they are used. A description of updates to the bulk_extractor tool serves as an example of what can and should be done.

March 28, 2023

Topic: Data

More Than Just Algorithms:
A discussion with Alfred Spector, Peter Norvig, Chris Wiggins, Jeannette Wing, Ben Fried, and Michael Tingley

Dramatic advances in the ability to gather, store, and process data have led to the rapid growth of data science and its mushrooming impact on nearly all aspects of the economy and society. Data science has also had a huge effect on academic disciplines with new research agendas, new degrees, and organizational entities. The authors of a new textbook, Data Science in Context: Foundations, Challenges, Opportunities, share their ideas about the field and its impact.

March 27, 2023

Topic: Data

All Sliders to the Right:
Hardware Overkill

There are many reasons why this year's model isn't any better than last year's, and many reasons why performance fails to scale, some of which KV has covered in these pages. It is true that the days of upgrading every year and getting a free performance boost are long gone, as we're not really getting single cores that are faster than about 4GHz. One thing that many software developers fail to understand at a sufficiently deep level is the hardware on which their software runs.

February 13, 2023

Topic: Performance

Three-part Harmony for Program Managers Who Just Don't Get It, Yet:
Open-source software, open standards, and agile software development

This article examines three tools in the system acquisitions toolbox that can work to expedite development and procurement while mitigating programmatic risk: OSS, open standards, and the Agile/Scrum software development processes. All three are powerful additions to the DoD acquisition program management toolbox.

February 9, 2023

Topic: Open Source

The Fun in Fuzzing:
The debugging technique comes into its own.

Stefan Nagy, an assistant professor in the Kahlert School of Computing at the University of Utah, takes us on a tour of recent research in software fuzzing, or the systematic testing of programs via the generation of novel or unexpected inputs. The first paper he discusses extends the state of the art in coverage-guided fuzzing with the semantic notion of "likely invariants," inferred via techniques from property-based testing. The second explores encoding domain-specific knowledge about certain bug classes into test-case generation.

February 1, 2023

Topic: Testing

To PiM or Not to PiM:
The case for in-memory inferencing of quantized CNNs at the edge

As artificial intelligence becomes a pervasive tool for the billions of IoT (Internet of things) devices at the edge, the data movement bottleneck imposes severe limitations on the performance and autonomy of these systems. PiM (processing-in-memory) is emerging as a way of mitigating the data movement bottleneck while satisfying the stringent performance, energy efficiency, and accuracy requirements of edge imaging applications that rely on CNNs (convolutional neural networks).

January 30, 2023

Topic: Computer Architecture

Taking Flight with Copilot:
Early insights and opportunities of AI-powered pair-programming tools

Over the next five years, AI-powered tools likely will be helping developers in many diverse tasks. For example, such models may be used to improve code review, directing reviewers to parts of a change where review is most needed or even directly providing feedback on changes. Models such as Codex may suggest fixes for defects in code, build failures, or failing tests. These models are able to write tests automatically, helping to improve code quality and downstream reliability of distributed systems. This study of Copilot shows that developers spend more time reviewing code than actually writing code.

January 26, 2023

Topic: AI

Reinventing Backend Subsetting at Google:
Designing an algorithm with reduced connection churn that could replace deterministic subsetting

Backend subsetting is useful for reducing costs and may even be necessary for operating within the system limits. For more than a decade, Google used deterministic subsetting as its default backend subsetting algorithm, but although this algorithm balances the number of connections per backend task, deterministic subsetting has a high level of connection churn. Our goal at Google was to design an algorithm with reduced connection churn that could replace deterministic subsetting as the default backend subsetting algorithm.
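
For reference, here is a sketch of deterministic subsetting along the lines described in the Google SRE book: clients in the same "round" shuffle the backend list with a shared seed and take disjoint slices. Connections stay balanced, but changing the backend list reshuffles nearly everything, which is the churn at issue.

```python
# A sketch of deterministic subsetting, after the variant described in the
# Google SRE book. Clients in one round share a seeded shuffle and take
# disjoint slices, balancing connections per backend; growing or shrinking
# the backend list changes the shuffles and forces most clients to
# reconnect, the churn this article's replacement algorithm aims to reduce.
import random

def deterministic_subset(backends: list[str], client_id: int,
                         subset_size: int) -> list[str]:
    subset_count = len(backends) // subset_size
    round_id = client_id // subset_count       # subset_count clients per round
    shuffled = list(backends)
    random.Random(round_id).shuffle(shuffled)  # same permutation per round
    subset_id = client_id % subset_count
    start = subset_id * subset_size
    return shuffled[start:start + subset_size]

backends = [f"task-{i}" for i in range(20)]
for client in range(4):  # the four clients of round 0 cover all 20 tasks
    print(client, deterministic_subset(backends, client, subset_size=5))
```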

December 14, 2022

Topic: Performance

The Elephant in the Room:
It's time to get the POSIX elephant off our necks.

By writing code for the elephant that is POSIX, we lose the chance to take advantage of modern hardware.

December 7, 2022

Topic: Programming Languages

OCCAM-v2: Combining Static and Dynamic Analysis for Effective and Efficient Whole-program Specialization:
Leveraging scalable pointer analysis, value analysis, and dynamic analysis

OCCAM-v2 leverages scalable pointer analysis, value analysis, and dynamic analysis to create an effective and efficient tool for specializing LLVM bitcode. The extent of the code-size reduction achieved depends on the specific deployment configuration. Each application that is to be specialized is accompanied by a manifest that specifies concrete arguments that are known a priori, as well as a count of residual arguments that will be provided at runtime. The best case for partial evaluation occurs when the arguments are completely concretely specified. OCCAM-v2 uses a pointer analysis to devirtualize calls, allowing it to eliminate the entire body of functions that are not reachable by any direct calls.

November 30, 2022

Topic: Development

OSS Supply-chain Security: What Will It Take?

While enterprise security teams naturally tend to turn their focus primarily to direct attacks on their own infrastructure, cybercrime exploits now are increasingly aimed at easier targets upstream. This has led to a perfect storm, since virtually all significant codebase repositories at this point include at least some amount of open-source software. But opportunities also abound there for the authors of malware. The broader cybercrime world, meanwhile, has noted that open-source supply chains are generally easy to penetrate. What's being done at this point to address the apparent risks?

November 16, 2022

Topic: Open Source

Literate Executables

Literate executables redefine the relationship between compiled binaries and source code to be that of chicken and egg, so it's easy to derive either from the other. This episode of Drill Bits provides a general-purpose literacy tool and showcases the advantages of literacy by retrofitting it onto everyone's favorite command-line utility.

November 15, 2022

Topic: Data

Split Your Overwhelmed Teams:
Two Teams of Five is Not the Same as One Team of Ten

This team's low morale and high stress were a result of the members feeling overwhelmed by too many responsibilities. The 10-by-10 communication structure made it difficult to achieve consensus, there were too many meetings, and everyone was suffering from the high cognitive load. By splitting into two teams, each can be more nimble, which the manager likes, and have a lower cognitive load, which the team likes. There is more opportunity for repetition, which lets people develop skills and demonstrate them. Altogether, this helps reduce stress and improve morale.

November 10, 2022

Topic: Business/Management

The Rise of Fully Homomorphic Encryption:
Often called the Holy Grail of cryptography, commercial FHE is near.

Once commercial FHE is achieved, data access will become completely separated from unrestricted data processing, and provably secure storage and computation on untrusted platforms will become both relatively inexpensive and widely accessible. In ways similar to the impact of the database, cloud computing, PKE, and AI, FHE will invoke a sea change in how confidential information is protected, processed, and shared, and will fundamentally change the course of computing at a foundational level.

September 26, 2022

Topic: Security

Mapping the Privacy Landscape for Central Bank Digital Currencies:
Now is the time to shape what future payment flows will reveal about you.

As central banks all over the world move to digitize cash, the issue of privacy needs to move to the forefront. The path taken may depend on the needs of each stakeholder group: privacy-conscious users, data holders, and law enforcement.

September 20, 2022

Topic: Privacy and Rights

From Zero to One Hundred:
Demystifying zero trust and its implications on enterprise people, process, and technology

Changing network landscapes and rising security threats have imparted a sense of urgency for new approaches to security. Zero trust has been proposed as a solution to these problems, but some regard it as a marketing tool to sell existing best practice while others praise it as a new cybersecurity standard. This article discusses the history and development of zero trust and why the changing threat landscape has led to a new discourse in cybersecurity.

September 19, 2022

Topic: Security

The Arrival of Zero Trust: What Does it Mean?

It used to be that enterprise cybersecurity was all castle and moat. First, secure the perimeter; then, for whatever went on inside it, trust, but verify. The perimeter, of course, was the corporate network. But what does that even mean at this point? With most employees now working from home at least some of the time and organizations relying increasingly on cloud computing, there is no such thing as a single, enterprise-wide perimeter anymore. And, with corporate security breaches having become a regular news item over the past two decades, trust has essentially evaporated as well.

September 16, 2022

Topic: Security

Crash Consistency:
Keeping data safe in the presence of crashes is a fundamental problem.

Keeping data safe in the presence of crashes is a fundamental problem in storage systems. Although the high-level ideas for crash consistency are relatively well understood, realizing them in practice is surprisingly complex and full of challenges. The systems research community is actively working on solving this challenge, and the papers examined here offer three solutions.

September 15, 2022

Topic: Data

The Four Horsemen of an Ailing Software Project:
Don't let the pale rider catch you with an exception.

KV has talked about various measures of software quality in past columns, but perhaps falling software quality is one of the most objective measures that a team is failing. This Pestilence, brought about by the low morale engendered in the team by War and Famine, is a clear sign that something is wrong. In the real world, a diseased animal can be culled so that disease does not spread and become a pestilence over the land. Increasing bug counts, especially in the absence of increased functionality, is a sure sign of a coming project apocalypse.

September 14, 2022

Topic: Development

CSRB's Opus One:
Comments on the Cyber Safety Review Board Log4j Event Report

We in FOSS need to become much better at documenting design decisions in a way, and in a place, where the right people will find them, read them, and understand them, before they do something ill-advised or downright stupid with our code.

September 13, 2022

Topic: Privacy and Rights

Privacy of Personal Information:
Going incog in a goldfish bowl

Each online interaction with an external service creates data about the user that is digitally recorded and stored. These external services may be credit card transactions, medical consultations, census data collection, voter registration, etc. Although the data is ostensibly collected to provide citizens with better services, the privacy of the individual is inevitably put at risk. With the growing reach of the Internet and the volume of data being generated, data protection and, specifically, preserving the privacy of individuals, have become particularly important.

July 26, 2022

Topic: Privacy and Rights

Securing the Company Jewels:
GitHub and runbook security

Often the problem with a runbook isn't the runbook itself; it's the runner of the runbook. A runbook, or a checklist, is supposed to be an aid to memory, not a replacement for careful and independent thought. But our industry being what it is, we now see people take these things to their illogical extremes, and I think this is the problem you are running into with your local runbook runner.

July 25, 2022

Topic: Security

I'm Probably Less Deterministic Than I Used to Be:
Embracing randomness is necessary in cloud environments.

In my youth, I thought the universe was ruled by cause and effect like a big clock. In this light, computing made sense. Now I see that both life and computing can be a crapshoot, and that has given me a new peace.

July 24, 2022

Topic: Distributed Development

The Challenges of IoT, TLS, and Random Number Generators in the Real World:
Bad random numbers are still with us and are proliferating in modern systems.

Many in the cryptographic community scoff at the mistakes made in implementing RNGs. Many cryptographers and members of the IETF resist the call to make TLS more resilient to this class of failures. This article discusses the history, current state, and fragility of the TLS protocol, and it closes with an example of how to improve the protocol. The goal is not to suggest a solution but to start a dialog about making TLS more resilient, by showing that TLS security is achievable without the assumption of perfect random numbers.

July 18, 2022

Topic: Development

Convergence:
Research for Practice reboot

It is with great pride and no small amount of excitement that I announce the reboot of acmqueue's Research for Practice column. For three years, beginning at its inception in 2016, Research for Practice brought both seminal and cutting-edge research - via careful curation by experts in academia - within easy reach for practitioners who are too busy building things to manage the deluge of scholarly publications. We believe the series succeeded in its stated goal of sharing "the joy and utility of reading computer science research" between academics and their counterparts in industry. We know our readers have missed it, and we are delighted to rekindle the flame after a three-year hiatus.

July 15, 2022

Topic: Distributed Computing

Linear Address Spaces:
Unsafe at any speed

The linear address space as a concept is unsafe at any speed, and it badly needs mandatory CHERI seat belts. But even better would be to get rid of linear address spaces entirely and go back to the future, as successfully implemented in the Rational R1000 computer 30-plus years ago.

June 8, 2022

Topic: Development

When Should a Black Box Be Transparent?:
When is a replacement not a replacement?

The right answer in these cases is to ask the vendor for as much information as possible to reduce the risk in accepting this so-called replacement. First, ask for the test plans and test output so you can understand whether they tested the component in a way that relates to your use case. Just because they tested the thing doesn't mean they tested all the parts your product cares about. In fact, it's unlikely they did.

June 1, 2022

Topic: Security

Walk a Mile in Their Shoes:
The Covid pandemic through the lens of four tech workers

Covid has changed how people work in many ways, but many of the outcomes have been paradoxical in nature. What works for one person may not work for the next (or even the same person the next day), and we have yet to figure out how to predict exactly what will work for everyone. As you saw in the composite personas described here, some people struggle with isolation and loneliness, have a hard time connecting socially with their teams, or find the time pressures of hybrid work with remote teams to be overwhelming. Others relish this newfound way of working, enjoying more time with family, greater flexibility to exercise during the day, a better work/life balance, and a stronger desire to contribute to the world.

May 25, 2022

Topic: Business/Management

FHIR: Reducing Friction in the Exchange of Healthcare Data:
A discussion with James Agnew, Pat Helland, and Adam Cole

With the full clout of the Centers for Medicare and Medicaid Services currently being brought to bear on healthcare providers to meet high standards for patient data interoperability and accessibility, it would be easy to assume the only reason this goal wasn't accomplished long ago is simply a lack of will. Interoperable data? How hard can that be? Much harder than you think, it turns out. To dig into why this is the case, we asked Pat Helland, a principal architect at Salesforce, to speak with James Agnew (CTO) and Adam Cole (senior solutions architect) of Smile CDR, a Toronto, Ontario-based provider of a leading platform used by healthcare organizations to achieve FHIR (Fast Healthcare Interoperability Resources) compliance.

May 17, 2022

Topic: Data

Long Live Software Easter Eggs!:
They are as old as software.

It's a period of unrest. Rebel developers, striking from continuous deployment servers, have won their first victory. During the battle, rebel spies managed to push an epic commit in the HTML code of https://pro.sony. Pursued by sinister agents, the rebels are hiding in commits, buttons, tooltips, API, HTTP headers, and configuration screens.

May 13, 2022

Topic: Development

Persistent Memory Allocation:
Leverage to move a world of software

A lever multiplies the force of a light touch, and the right software interfaces provide formidable leverage in multiple layers of code: A familiar interface enables a new persistent memory allocator to breathe new life into an enormous installed base of software and hardware. Compatibility allows a persistent heap to slide easily beneath a widely used scripting-language interpreter, thereby endowing all scripts with effortless on-demand persistence.

May 11, 2022

Topic: Data

Autonomous Computing:
We frequently compute across autonomous boundaries but the implications of the patterns to ensure independence are rarely discussed.

Autonomous computing is a pattern for business work using collaborations to connect fiefdoms and their emissaries. This pattern, based on paper forms, has been used for centuries. Here, we explain fiefdoms, collaborations, and emissaries. We examine how emissaries work outside the autonomous boundary and are convenient while remaining outsiders. And we examine how work across different fiefdoms can be initiated, run for long periods of time, and eventually be completed.

April 4, 2022

Topic: Data

Distributed Latency Profiling through Critical Path Tracing:
CPT can provide actionable and precise latency analysis.

Low latency is an important feature for many Google applications such as Search, and latency-analysis tools play a critical role in sustaining low latency at scale. For complex distributed systems that include services that constantly evolve in functionality and data, keeping overall latency to a minimum is a challenging task. In large, real-world distributed systems, existing tools such as RPC telemetry, CPU profiling, and distributed tracing are valuable to understand the subcomponents of the overall system, but are insufficient to perform end-to-end latency analyses in practice.

March 29, 2022

Topic: Networks

The Planning and Care of Data:
Rearranging buckets for no good reason

Questions such as "How do we secure this data?" work only if you ask them at the start, and not when some lawyers or government officials are sitting in a conference room, rooting through your data and logs, and making threatening noises under their breath. All the things we care about with our data require forethought, but it seems in our rush to create "stakeholder value" we are willing to sacrifice these important attributes and just act like data gourmands until, like Mr. Creosote, we explode.

March 23, 2022

Topic: Data

Middleware 101:
What to know now and for the future

Whether segregating a sophisticated software component into smaller services, transferring data between computers, or creating a general gateway for seamless communication, you can rely on middleware to achieve communication between different devices, applications, and software layers. Following the increasing agile movement, the tech industry has adopted the use of fast waterfall models to create stacks of layers for each structural need, including integration, communication, data, and security. Given this scope, emphasis must now be on endpoint connection and agile development. This means that middleware should not serve solely as an object-oriented solution to execute simple request-response commands.

March 15, 2022

Topic: Development

Persistence Programming:
Are we doing this right?

A few years ago, my team was working on a commercial Java development project for Enhanced 911 (E911) emergency call centers. We were frustrated by trying to meet the data-storage requirements of this project using the traditional model of Java over an SQL database. After some reflection about the particular requirements (and nonrequirements) of the project, we took a deep breath and decided to create our own custom persistence layer from scratch.

March 14, 2022

Topic: Data

Surveillance Too Cheap to Meter:
Stopping Big Brother would require an expensive overhaul of the entire system.

IT nerds tend to find technological solutions for all sorts of problems: economic, political, sociological, and so on. Most of the time, these solutions don't make the problems that much worse, but when a problem is of a purely economic nature, only solutions that affect the economics of the situation can possibly work. Neither cryptography nor smart programming will be able to move the needle even a little bit when the fundamental problem is that surveillance is too cheap to meter.

February 9, 2022

Topic: Privacy and Rights

FPGAs in Client Compute Hardware:
Despite certain challenges, FPGAs provide security and performance benefits over ASICs.

FPGAs (field-programmable gate arrays) are remarkably versatile. They are used in a wide variety of applications and industries where use of ASICs (application-specific integrated circuits) is less economically feasible. Despite the area, cost, and power challenges designers face when integrating FPGAs into devices, they provide significant security and performance benefits. Many of these benefits can be realized in client compute hardware such as laptops, tablets, and smartphones.

February 7, 2022

Topic: Processors

Getting Off the Mad Path:
Debuggers and assertions

KV continues to grind his teeth as he sees code loaded with debugging statements that would be totally unnecessary if the programmers who wrote the code could be both confident in and proficient with their debuggers. If one is lucky enough to have access to a good debugger, one should give extreme thanks to whatever they normally give thanks to and use the damn thing!

January 31, 2022

Topic: Debugging

The Keys to the Kingdom:
A deleted private key, a looming deadline, and a last chance to patch a new static root of trust into the bootloader

An unlucky fat-fingering precipitated the current crisis: The client had accidentally deleted the private key needed to sign new firmware updates. They had some exciting new features to ship, along with the usual host of reliability improvements. Their customers were growing impatient, but my client had to stall when asked for a release date. How could they come up with a meaningful date? They had lost the ability to sign a new firmware release.

January 25, 2022

Topic: Failure and Recovery

Steampunk Machine Learning:
Victorian contrivances for modern data science

Fitting models to data is all the rage nowadays but has long been an essential skill of engineers. Veterans know that real-world systems foil textbook techniques by interleaving routine operating conditions with bouts of overload and failure; to be practical, a method must model the former without distortion by the latter. Surprisingly effective aid comes from an unlikely quarter: a simple and intuitive model-fitting approach that predates the Babbage Engine. The foundation of industrial-strength decision support and anomaly detection for production datacenters, this approach yields accurate yet intelligible models without hand-holding or fuss.
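
The summary doesn't name the method, but ordinary least squares, published by Legendre in 1805 and thus predating Babbage's engines, fits the description. A minimal sketch under that assumption, with invented data and a crude operating window to keep overload from distorting the fit:

```python
# A minimal sketch, assuming the pre-Babbage method is ordinary least
# squares (Legendre, 1805): fit y = a*x + b in closed form, and fit only
# on routine operating points so overload bouts don't distort the model.
def least_squares(points: list[tuple[float, float]]) -> tuple[float, float]:
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - a * sx) / n                          # intercept
    return a, b

# invented data: CPU% (y) vs. offered load (x); the last two points are an
# overload bout where the CPU pegs near 100% and the linear relation breaks
data = [(10, 11), (20, 21), (30, 31), (40, 41), (90, 99), (95, 99)]
routine = [(x, y) for x, y in data if x <= 50]  # crude operating window
print("fit on all points:  ", least_squares(data))
print("fit on routine only:", least_squares(routine))
```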

January 18, 2022

Topic: AI

Interpretable Machine Learning:
Moving from mythos to diagnostics

The emergence of machine learning as a society-changing technology in the past decade has triggered concerns about people's inability to understand the reasoning of increasingly complex models. The field of IML (interpretable machine learning) grew out of these concerns, with the goal of empowering various stakeholders to tackle use cases, such as building trust in models, performing model debugging, and generally informing real human decision-making.

January 12, 2022

Topic: AI

A Conversation with Margo Seltzer and Mike Olson:
The history of Berkeley DB

Kirk McKusick sat down with Margo Seltzer and Mike Olson to discuss the history of Berkeley DB, for which they won the ACM Software System Award in 2021. Kirk McKusick has spent his career as a BSD and FreeBSD developer. Margo Seltzer has spent her career as a professor of computer science and as an entrepreneur of database software companies. Mike Olson started his career as a software developer and later started and managed several open-source software companies. Berkeley DB is a production-quality, scalable, NoSQL, Open Source platform for embedded transactional data management.

November 18, 2021

Topic: Databases

It Takes a Community:
The Open-source Challenge

Of the many challenges faced by open-source developers, among the most daunting are some that other programmers scarcely ever think about. Building a successful open-source community depends on many different elements, some of which are familiar to any developer. Just as important are the skills to recruit, to inspire, to mentor, to manage, and to mediate disputes. But what exactly does it take to pull all that off?

November 17, 2021

Topic: Open Source

0 comments

Federated Learning and Privacy:
Building privacy-preserving systems for machine learning and data science on decentralized data

Centralized data collection can expose individuals to privacy risks and organizations to legal risks if data is not properly managed. Federated learning is a machine learning setting where multiple entities collaborate in solving a machine learning problem, under the coordination of a central server or service provider. Each client's raw data is stored locally and not exchanged or transferred; instead, focused updates intended for immediate aggregation are used to achieve the learning objective.
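
The "focused updates intended for immediate aggregation" described here have the shape of federated averaging. Below is a toy single-parameter sketch of that scheme; all names, data, and the one-dimensional linear model are invented for illustration:

```python
def local_update(w, data, lr=0.1):
    """One client: a single gradient step on a 1-D linear model y = w*x.
    Only this focused update leaves the device; the raw data never does."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, client_datasets):
    """Server: aggregate client updates, weighted by local dataset size."""
    updates = [(local_update(global_w, d), len(d)) for d in client_datasets]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

clients = [[(1.0, 2.1), (2.0, 3.9)],        # three clients, private data
           [(3.0, 6.2)],
           [(0.5, 0.9), (1.5, 3.1)]]
w = 0.0
for _ in range(20):
    w = federated_round(w, clients)
print(f"learned slope ~ {w:.2f}")           # close to the true slope of 2
```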

November 16, 2021

Topic: Privacy and Rights

0 comments

Chip Measuring Contest:
The benefits of purpose-built chips

Alan Kay once said, "People who are really serious about software should make their own hardware." We are now seeing product companies genuinely live up to this value. It is exciting to see the incumbent chip vendors being outdone, in the very technology that is their bread and butter, by their former customers. Let's dive into some of the interesting bits of these purpose-built chips: the benefits in economics, user experience, and performance for the companies building them.

November 15, 2021

Topic: Computer Architecture

0 comments

Meaning and Context in Computer Programs:
Sharing domain knowledge among programmers using the source code as the medium

When you look at a function's source code, how do you know what it means? Is the meaning found in the return values of the function, or is it located inside the function body? What about the function name? Answering these questions is important to understanding how to share domain knowledge among programmers using the source code as the medium. The program is the medium of communication among programmers to share their solutions.

November 10, 2021

Topic: Development

0 comments

Lamboozling Attackers: A New Generation of Deception:
Software engineering teams can exploit attackers' human nature by building deception environments.

The goal of this article is to educate software leaders, engineers, and architects on the potential of deception for systems resilience and the practical considerations for building deception environments. By examining the inadequacy and stagnancy of historical deception efforts by the information security community, the article also demonstrates why engineering teams are now poised to become significantly more successful owners of deception systems.

October 28, 2021

Topic: Security

0 comments

Patent Absurdity:
A case when ignorance is the best policy

The main reason a lawyer will give for not reading a software patent is that, if you run afoul of the patent and it can be shown that you had knowledge of it, your company will incur triple the damages it would have incurred had you not known of the patent. That seems like reason enough to avoid reading them, but there is an even better reason, and that is, as design or technical documents, software patents suck.

September 29, 2021

Topic: Business/Management

0 comments

The Software Industry IS STILL the Problem:
The time is (also) way overdue for IT professional liability

The time is way overdue for IT engineers to be subject to professional liability, like almost every other engineering profession. Before you tell me that is impossible, please study how the very same thing happened with electricity, planes, cranes, trains, ships, automobiles, lifts, food processing, buildings, and, for that matter, driving a car.

September 29, 2021

Topic: Business/Management

0 comments

Crashproofing the Original NoSQL Key-Value Store

Fortifying software to protect persistent data from crashes can be remarkably easy if a modern file system handles the heavy lifting. This episode of Drill Bits unveils a new crash-tolerance mechanism that vaults the venerable gdbm database into the league of transactional NoSQL data stores. We'll motivate this upgrade by tracing gdbm's history. We'll survey the subtle science of crashproofing, navigating a minefield of traps for the unwary. We'll arrive at a compact and rugged design that leverages modern file-system features, and we'll tour the production-ready implementation of this design and its ergonomic interface.

September 19, 2021

Topic: Databases

0 comments

Designing UIs for Static Analysis Tools:
Evaluating tool design guidelines with SWAN

Static-analysis tools suffer from usability issues such as a high rate of false positives, lack of responsiveness, and unclear warning descriptions and classifications. Here, we explore the effect of applying a user-centered approach and design guidelines to SWAN, a security-focused static-analysis tool for the Swift programming language. SWAN is an interesting case study for exploring static-analysis tool usability because of its large target audience, its potential to integrate easily into developers' workflows, and its independence from existing analysis platforms.

September 16, 2021

Topic: Development

0 comments

Human-Centered Approach to Static-Analysis-Driven Developer Tools:
The future depends on good HCI

Complex and opaque systems do not scale easily. A human-centered approach for evolving tools and practices is essential to ensuring that software is scaled safely and securely. Static analysis can unveil information about program behavior, but the goal of deriving this information should not be to accumulate hairsplitting detail. HCI can help direct static-analysis techniques into developer-facing systems that structure information and embody relationships in representations that closely mirror a programmer's thought. The survival of great software depends on programming languages that support, rather than inhibit, communicating, reasoning, and abstract thinking.

September 16, 2021

Topic: Development

0 comments

Static Analysis at GitHub:
An experience report

The Semantic Code team at GitHub builds and operates a suite of technologies that power symbolic code navigation on github.com. We learned that scale is about adoption, user behavior, incremental improvement, and utility. Static analysis in particular is difficult to scale with respect to human behavior; we often think of complex analysis tools working to find potentially problematic patterns in code and then trying to convince the humans to fix them.

September 16, 2021

Topic: Development

0 comments

Static Analysis: An Introduction:
The fundamental challenge of software engineering is one of complexity.

Modern static-analysis tools provide powerful and specific insights into codebases. The Linux kernel team, for example, developed Coccinelle, a powerful tool for searching, analyzing, and rewriting C source code; because the Linux kernel contains more than 27 million lines of code, a static-analysis tool is essential both for finding bugs and for making automated changes across its many libraries and modules. Another tool targeted at the C family of languages is Clang scan-build, which comes with many useful analyses and provides an API for programmers to write their own analyses. Like so many things in computer science, the utility of static analysis is self-referential: To write reliable programs, we must also write programs for our programs.
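
Coccinelle and scan-build target C; for a language-neutral taste of writing programs for our programs, here is a toy analysis built on Python's standard ast module that flags bare except clauses (the rule and source snippet are invented for illustration):

```python
import ast

SOURCE = """
try:
    risky()
except:
    pass
"""

class BareExceptFinder(ast.NodeVisitor):
    """Flag `except:` clauses that silently swallow every exception."""
    def visit_ExceptHandler(self, node):
        if node.type is None:            # no exception type was given
            print(f"line {node.lineno}: bare except swallows all errors")
        self.generic_visit(node)

BareExceptFinder().visit(ast.parse(SOURCE))   # -> line 4: bare except ...
```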

September 16, 2021

Topic: Development

0 comments

Don't Get Stuck in the "Con" Game:
Consistency, convergence, and confluence are not the same! Eventual consistency and eventual convergence aren't the same as confluence, either.

"Eventual consistency" is a popular phrase with a fuzzy definition. People are even inconsistent in their use of consistency. But two other terms, "convergence" and "confluence", that have crisper definitions and are more easily understood.

August 5, 2021

Topic: Data

0 comments

Declarative Machine Learning Systems:
The future of machine learning will depend on it being in the hands of the rest of us.

The people training and using ML models now are typically experienced developers with years of study working within large organizations, but the next wave of ML systems should allow a substantially larger number of people, potentially without any coding skills, to perform the same tasks. These new ML systems will not require users to fully understand all the details of how models are trained and used for obtaining predictions, but will provide them a more abstract interface that is less demanding and more familiar.

August 2, 2021

Topic: AI

0 comments

Real-world String Comparison:
How to handle Unicode sequences correctly

In many languages, string comparison is a pitfall for beginners. With any Unicode string as input, a comparison often causes problems even for advanced users. The semantic equivalence of different characters in Unicode requires normalization of the strings before comparing them. This article shows how to handle Unicode sequences correctly. The comparison of two strings for equality often raises questions concerning the difference between comparison by value, comparison of object references, strict equality, and loose equality. The most important aspect is semantic equivalence.
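
A minimal illustration of the normalization step, using Python's standard unicodedata module:

```python
import unicodedata

a = "caf\u00e9"        # precomposed: LATIN SMALL LETTER E WITH ACUTE
b = "cafe\u0301"       # decomposed: 'e' plus COMBINING ACUTE ACCENT

print(a == b)          # False: same text to a human, different code points

nfc = lambda s: unicodedata.normalize("NFC", s)
print(nfc(a) == nfc(b))  # True: equal after normalizing both sides
```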

July 29, 2021

Topic: Data

0 comments

Divide and Conquer:
The use and limits of bisection

Bisection is of no use if you have a heisenbug that fails only from time to time. These subtle bugs are the hardest to fix and the ones that cause us to think critically about what we are doing. Timing bugs, bugs in distributed systems, and all the difficult problems we face in building increasingly complex software systems can't yet be addressed by simple bisection. It's often the case that it would take longer to write a usable bisection test for a complex problem than it would to analyze the problem whilst at the tip of the tree.
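
For contrast, here is what bisection buys you when the failure is deterministic: a binary search over history for the first bad revision (the test callback here is hypothetical). A flaky, heisenbug-style test breaks the good/bad oracle this search depends on:

```python
def first_bad(revisions, is_bad):
    """Binary search for the first failing revision.
    Assumes a deterministic oracle: good, good, ..., bad, bad."""
    lo, hi = 0, len(revisions) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(revisions[mid]):
            hi = mid                # first bad is at mid or earlier
        else:
            lo = mid + 1            # first bad is strictly after mid
    return revisions[lo]

revs = list(range(100))             # pretend revision 73 introduced the bug
print(first_bad(revs, lambda r: r >= 73))   # -> 73, in about 7 test runs
```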

July 26, 2021

Topic: Debugging

0 comments

When Curation Becomes Creation:
Algorithms, microcontent, and the vanishing distinction between platforms and creators

Media platforms today benefit from: (1) discretion to organize content, (2) algorithms for curating user-posted content, and (3) absolution from liability. This favorable regulatory environment results from the current legal framework, which distinguishes between intermediaries and content providers. This distinction is ill-adapted to the modern social media landscape, where platforms deploy powerful data-driven algorithms to play an increasingly active role in shaping what people see, and where users supply disconnected bits of raw content as fodder.

July 21, 2021

Topic: HCI

0 comments

Digging into Big Provenance (with SPADE):
A user interface for querying provenance

Several interfaces exist for querying provenance. Many are not flexible in allowing users to select a database type of their choice. Some provide query functionality in a data model that is different from the graph-oriented one that is natural for provenance. Others have intuitive constructs for finding results but have limited support for efficiently chaining responses, as needed for faceted search. This article presents a user interface for querying provenance that addresses these concerns and is agnostic to the underlying database being used.

July 19, 2021

Topic: Data

0 comments

What Went Wrong?:
Why we need an IT accident investigation board

Governments should create IT accident investigation boards for the exact same reasons they have done so for ships, railroads, planes, and in many cases, automobiles. Denmark got its Railroad Accident Investigation Board because too many people were maimed and killed by steam trains. The UK's Air Accidents Investigation Branch was created for pretty much the same reasons, but, specifically, because when the airlines investigated themselves, nobody was any the wiser. Does that sound slightly familiar in any way?

July 13, 2021

Topic: Compliance

0 comments

A New Era for Mechanical CAD:
Time to move forward from decades-old design

The hardware industry is desperate for a modern way to do mechanical design. A new CAD program created for the modern world would lower the barrier to building hardware, decrease development time, and usher in a new era of building. The tools we build with today stand on the shoulders of giants, but a lot could be done to make them even better. At some point, mechanical CAD lost some of its roots of innovation.

June 6, 2021

Topic: HCI

0 comments

ACID: My Personal "C" Change:
How could I miss such a simple thing?

I had a chance recently to chat with my old friend, Andreas Reuter, the inventor of ACID. He and his Ph.D. advisor, Theo Härder, coined the term in their famous 1983 paper, Principles of Transaction-Oriented Database Recovery. I had blinders on after almost four decades of seeing C based on my assumptions. One big lesson for me is to work hard to ALWAYS question your assumptions. Try hard to surround yourself with curious and passionate people, both young and old, who will challenge you and try to dislodge your blinders.

June 3, 2021

Topic: Programming Languages

0 comments

In Praise of the Disassembler:
There's much to be learned from the lower-level details of hardware.

When you're starting out, you want to be able to hold the entire program in your head if at all possible. Once you're conversant with your first, simple assembly language and the machine architecture you're working with, it will be completely possible to look at a page or two of your assembly and know not only what it is supposed to do but also what the machine will do for you step by step. When you look at a high-level language, you should be able to understand what you mean it to do, but often you have no idea just how your intent will be translated into action.

June 1, 2021

Topic: Development

0 comments

Schrödinger's Code:
Undefined behavior in theory and practice

Undefined behavior ranks among the most baffling and perilous aspects of popular programming languages. This installment of Drill Bits clears up widespread misconceptions and presents practical techniques to banish undefined behavior from your own code and pinpoint meaningless operations in any software—techniques that reveal alarming faults in software supporting business-critical applications at Fortune 500 companies.

May 26, 2021

Topic: Development

0 comments

Quantum-safe Trust for Vehicles:
The race is already on

In the automotive industry, cars now coming off assembly lines are sometimes referred to as "rolling data centers" in acknowledgment of all the entertainment and communications capabilities they contain. The fact that autonomous driving systems are also well along in development does nothing to allay concerns about security. Indeed, it would seem the stakes of automobile cybersecurity are about to become immeasurably higher just as some of the underpinnings of contemporary cybersecurity are rendered moot.

May 24, 2021

Topic: Security

0 comments

The Complex Path to Quantum Resistance:
Is your organization prepared?

There is a new technology on the horizon that will forever change the information security and privacy industry landscape. Quantum computing, together with quantum communication, will have many beneficial applications but will also be capable of breaking many of today's most popular cryptographic techniques that help ensure data protection, in particular, the confidentiality and integrity of sensitive information. These techniques are ubiquitously embedded in today's digital fabric and implemented by many industries such as finance, health care, utilities, and the broader information communication technology (ICT) community.

May 17, 2021

Topic: Security

0 comments

Biases in AI Systems:
A survey for practitioners

This article provides an organization of various kinds of biases that can occur in the AI pipeline starting from dataset creation and problem formulation to data analysis and evaluation. It highlights the challenges associated with the design of bias-mitigation strategies, and it outlines some best practices suggested by researchers. Finally, a set of guidelines is presented that could aid ML developers in identifying potential sources of bias, as well as avoiding the introduction of unwanted biases. The work is meant to serve as an educational resource for ML developers in handling and addressing issues related to bias in AI systems.

May 12, 2021

Topic: AI

0 comments

Fail-fast Is Failing... Fast!:
Changes in compute environments are placing pressure on tried-and-true distributed-systems solutions.

For more than 40 years, fail-fast has been the dominant way of achieving fault tolerance. In this approach, some mechanism is responsible for ensuring that each component is up, functioning, and responding to work. As the industry moves to leverage cloud computing, this is getting more challenging. The way we create robust solutions is under pressure as the individual components don't fail fast but instead start running slow, which is far worse. The slow component may be healthy enough to say, "I'm still here!" but slow enough to clog up all the work. This makes fail-fast schemes vulnerable.
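
A sketch of the distinction (all names illustrative): a liveness probe that also checks response time classifies a slow-but-alive component as unhealthy, which a pure up/down check would miss:

```python
import time

def check(ping, deadline_s=0.5):
    """'healthy' only if ping() both succeeds and beats the deadline."""
    start = time.monotonic()
    try:
        ping()
    except Exception:
        return "down"               # the case classic fail-fast catches
    elapsed = time.monotonic() - start
    return "healthy" if elapsed <= deadline_s else "slow"

def slow_but_alive():
    time.sleep(2)                   # answers "I'm still here!"... eventually

print(check(slow_but_alive))        # -> 'slow': alive is not the same as well
```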

March 25, 2021

Topic: Distributed Development

0 comments

Software Development in Disruptive Times:
Creating a software solution with fast decision capability, agile project management, and extreme low-code technology

In this project, the challenge was to "deploy software faster than the coronavirus spread." In a project with such peculiar characteristics, several factors can influence success, but some clearly stand out: top management support, agility, understanding and commitment of the project team, and the technology used. Conventional development approaches and technologies would simply not be able to meet the requirements promptly.

March 24, 2021

Topic: Development

0 comments

Aversion to Versions:
Resolving code-dependency issues

One should never hardcode a version or a path inside the code itself. Code needs to be flexible so that it can be installed anywhere and run anywhere so long as the necessary dependencies can be resolved, either at build time for statically compiled code or at runtime for interpreted code or code with dynamically linked libraries. There are current, good ways to get this right, so it's a shame that so many people continue to get it wrong.
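
As one hedged illustration in Python, versions and paths can be resolved at runtime from the environment instead of being hardcoded (the package floor shown is arbitrary, purely for illustration):

```python
import sys
from importlib.metadata import PackageNotFoundError, version

def require(package, minimum):
    """Resolve a dependency's version at runtime; fail with a clear message."""
    try:
        found = version(package)
    except PackageNotFoundError:
        sys.exit(f"missing dependency: {package}")
    if tuple(int(p) for p in found.split(".")[:2]) < minimum:
        sys.exit(f"{package} {found} is older than required {minimum}")
    return found

print(sys.prefix)                # the interpreter knows where it lives
print(require("pip", (20, 0)))   # arbitrary version floor, for illustration
```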

March 23, 2021

Topic: Development

0 comments

WebRTC - Realtime Communication for the Open Web Platform:
What was once a way to bring audio and video to the web has expanded into more use cases than we could ever imagine.

In this time of pandemic, the world has turned to Internet-based RTC (realtime communication) as never before. The number of RTC products has, over the past decade, exploded in large part because of cheaper high-speed network access and more powerful devices, but also because of an open, royalty-free platform called WebRTC. WebRTC is growing from enabling useful experiences to being essential in allowing billions to continue their work and education, and to keep vital human contact during a pandemic. The opportunities and impact that lie ahead for WebRTC are intriguing indeed.

March 16, 2021

Topic: Web Services

0 comments

Toward Confidential Cloud Computing:
Extending hardware-enforced cryptographic protection to data while in use

Although largely driven by economies of scale, the development of the modern cloud also enables increased security. Large data centers provide aggregate availability, reliability, and security assurances. The operational cost of ensuring that operating systems, databases, and other services have secure configurations can be amortized among all tenants, allowing the cloud provider to employ experts who are responsible for security; this is often unfeasible for smaller businesses, where the role of systems administrator is often conflated with many others.

March 7, 2021

Topic: Privacy and Rights

0 comments

The SPACE of Developer Productivity:
There's more to it than you think.

Developer productivity is about more than an individual's activity levels or the efficiency of the engineering systems relied on to ship software, and it cannot be measured by a single metric or dimension. The SPACE framework captures different dimensions of productivity, and here we demonstrate how this framework can be used to understand productivity in practice and why using it will help teams better understand developer productivity and create better measures to inform their work and teams.

March 6, 2021

Topic: Development

0 comments

Offline Algorithms in Low-Frequency Trading:
Clearing Combinatorial Auctions

Expectations run high for software that makes real-world decisions, particularly when money hangs in the balance. This third episode of the Drill Bits column shows how well-designed software can effectively create wealth by optimizing gains from trade in combinatorial auctions. We'll unveil a deep connection between auctions and a classic textbook problem, we'll see that clearing an auction resembles a high-stakes mutant Tetris, we'll learn to stop worrying and love an NP-hard problem that's far from intractable in practice, and we'll contrast the deliberative business of combinatorial auctions with the near-real-time hustle of high-frequency trading.
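
Clearing such an auction means choosing a revenue-maximizing set of disjoint package bids, which is NP-hard in general; a brute-force toy (not the column's code, with invented bids) makes the problem statement concrete:

```python
from itertools import combinations

bids = [({"A"}, 6), ({"B"}, 5), ({"A", "B"}, 12), ({"B", "C"}, 8)]

def clear(bids):
    """Exhaustively pick the revenue-maximizing set of disjoint bids."""
    best, best_revenue = (), 0
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            bundles = [items for items, _ in combo]
            disjoint = (sum(len(s) for s in bundles)
                        == len(set().union(*bundles)))
            revenue = sum(price for _, price in combo)
            if disjoint and revenue > best_revenue:
                best, best_revenue = combo, revenue
    return best, best_revenue

print(clear(bids))   # {'A'} for 6 plus {'B','C'} for 8 beats {'A','B'} for 12
```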

January 27, 2021

Topic: Development

0 comments

Enclaves in the Clouds:
Legal considerations and broader implications

With organizational data practices coming under increasing scrutiny, demand is growing for mechanisms that can assist organizations in meeting their data-management obligations. TEEs (trusted execution environments) provide hardware-based mechanisms with various security properties for assisting computation and data management. TEEs are concerned with the confidentiality and integrity of data, code, and the corresponding computation. Because the main security properties come from hardware, certain protections and guarantees can be offered even if the host privileged software stack is vulnerable.

January 26, 2021

Topic: Compliance

0 comments

Let's Play Global Thermonuclear Energy:
It's important to know where your power comes from.

For us to grow and progress as a civilization, we need more investment in providing electricity to the world through clean, safe, and efficient processes. Thermonuclear energy is a huge step forward. This article is mostly focused on the use cases around grid-scale reactors. It's hard to see a future without some sort of thermonuclear energy powering all sorts of things around us.

January 24, 2021

Topic: Power Management

0 comments

Best Practice: Application Frameworks:
While powerful, frameworks are not for everyone.

While frameworks can be a powerful tool, they have some disadvantages and may not make sense for all organizations. Framework maintainers need to provide standardization and well-defined behavior while not being overly prescriptive. When frameworks strike the right balance, however, they can offer large developer productivity gains. The consistency provided by widespread use of frameworks is a boon for other teams such as SRE and security that have a vested interest in the quality of applications. Additionally, the structure of frameworks provides a foundation for building higher-level abstractions such as microservices platforms, which unlock new opportunities for system architecture and automation.

January 20, 2021

Topic: Development

0 comments

The Non-psychopath's Guide to Managing an Open-source Project:
Respect your staff, learn from others, and know when to let go.

Transitioning from one of the technical faithful to one of the hated PHBs (pointy-haired bosses), whether in the corporate or the open-source world, is truly a difficult transition. Unless you are the type who has always been meant for the C-suite, it's going to take a lot of work and a lot of patience, mostly with yourself, to make this transition.

January 18, 2021

Topic: Open Source

0 comments

Baleen Analytics:
Large-scale filtering of data provides serendipitous surprises.

Data analytics hoovers up anything it can find, and we are finding patterns and insights that weren't available before, with implications both for data analytics and for messaging between services and microservices. It seems that a pretty-good shared understanding among many different sources allows more flexibility and interconnectivity. Increasingly, flexibility dominates perfection.

January 7, 2021

Topic: Data

0 comments

Always-on Time-series Database: Keeping Up Where There's No Way to Catch Up:
A discussion with Theo Schlossnagle, Justin Sheehy, and Chris McCubbin

What if you found you needed to provide for the capture of data from disconnected operations, such that updates might be made by different parties at the same time without conflicts? And what if your service called for you to receive massive volumes of data almost continuously throughout the day, such that you couldn't really afford to interrupt data ingest at any point for fear of finding yourself so far behind present state that there would be almost no way to catch up?

December 14, 2020

Topic: Databases

0 comments

Everything VPN is New Again:
The 24-year-old security model has found a second wind.

The VPN (virtual private network) is 24 years old. The concept was created for a radically different Internet from the one we know today. As the Internet grew and changed, so did VPN users and applications. The VPN had an awkward adolescence in the Internet of the 2000s, interacting poorly with other widely popular abstractions. In the past decade the Internet has changed again, and this new Internet offers new uses for VPNs. The development of a radically new protocol, WireGuard, provides a technology on which to build these new VPNs.

November 23, 2020

Topic: Networks

0 comments

Battery Day:
A closer look at the technology that makes portable electronics possible

Tesla held its first Battery Day on September 22, 2020. The Tesla team didn't just look at one angle but all the angles: cell design, manufacturing, vehicle integration, and materials. If Tesla were to achieve 400 watt-hours per kilogram, a zero-emissions jet just might be on the horizon.

November 22, 2020

Topic: Power Management

0 comments

Differential Privacy: The Pursuit of Protections by Default:
A discussion with Miguel Guevara, Damien Desfontaines, Jim Waldo, and Terry Coatta

First formalized in 2006, differential privacy is an approach based on a mathematically rigorous definition of privacy that allows formalization and proof of the guarantees against re-identification offered by a system. While differential privacy has been accepted by theorists for some time, its implementation has turned out to be subtle and tricky, with practical applications only now starting to become available. To date, differential privacy has been adopted by the U.S. Census Bureau, along with a number of technology companies, but what this means and how these organizations have implemented their systems remains a mystery to many.
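
As a flavor of the underlying math, here is the textbook Laplace mechanism (a standard construction, not any particular organization's implementation): adding noise scaled to sensitivity/epsilon to a counting query yields epsilon-differential privacy. The dataset is invented:

```python
import random

def laplace(scale):
    """Difference of two exponentials is Laplace-distributed."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_count(records, predicate, epsilon):
    sensitivity = 1.0     # one person joining/leaving shifts the count by 1
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace(sensitivity / epsilon)

ages = [34, 29, 41, 58, 23, 45]
print(dp_count(ages, lambda a: a > 40, epsilon=0.5))  # noisy answer near 2
```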

November 20, 2020

Topic: Privacy and Rights

0 comments

Kabin Fever:
KV's guidelines for KFH (koding from home)

Let me invite you to my next Zoom meeting on how to host Zoom meetings! As a devotee of mobile computing and remote work from my earliest days at university, I have, over time, developed a number of useful habits for maintaining a good and productive working rhythm, and I've found that many of these apply well to those of you who are newly working from home.

November 18, 2020

Topic: Business/Management

0 comments

Decentralized Computing

Feeding all relevant inputs to a central solver is the obvious way to tackle a problem, but it's not always the only way. Decentralized methods that make do with only local communication and local computation are sometimes the best way. This episode of Drill Bits reviews an elegant protocol for self-organizing wireless networks that can also solve a seemingly impossible social networking problem. The protocol preserves privacy among participants and is so simple that it can be implemented with pencil, paper, and postcards. Example software implements both the decentralized protocol and a centralized solver.

November 16, 2020

Topic: Distributed Computing

0 comments

The Time I Stole $10,000 from Bell Labs:
Or why DevOps encourages us to celebrate outages

If IT workers fear they will be punished for outages, they will adopt behavior that leads to even larger outages. Instead, we should celebrate our outages: Document them blamelessly, discuss what we've learned from them openly, and spread that knowledge generously. An outage is not an expense. It is an investment in the people who have learned from it. We can maximize that investment through management practices that maximize learning for those involved and by spreading that knowledge across the organization. Managed correctly, every outage makes the organization smarter.

November 11, 2020

Topic: Performance

0 comments

A Second Conversation with Werner Vogels:
The Amazon CTO sits with Tom Killalea to discuss designing for evolution at scale.

When I joined Amazon in 1998, the company had a single US-based website selling only books and running a monolithic C application on five servers, a handful of Berkeley DBs for key/value data, and a relational database. That database was called "ACB," which stood for "Amazon.Com Books," a name that failed to reflect the range of our ambition. In 2006 acmqueue published a conversation between Jim Gray and Werner Vogels, Amazon's CTO, in which Werner explained that Amazon should be viewed not just as an online bookstore but as a technology company. In the intervening 14 years, Amazon's distributed systems, and the patterns used to build and operate them, have grown in influence.

November 10, 2020

Topic: Web Services

0 comments

The Die is Cast:
Hardware Security is Not Assured

The future of hardware security will evolve with hardware. As packaging advances and focus moves to beyond Moore's law technologies, hardware security experts will need to keep ahead of changing security paradigms, including system and process vulnerabilities. Research focused on quantum hacking is emblematic of the translation of principles of security on the physical attack plane for emerging communications and computing technologies. Perhaps the commercial market will evolve such that the GAO will run a study on compromised quantum technologies in the not-too-distant future.

October 20, 2020

Topic: Security

0 comments

Out-of-this-World Additive Manufacturing:
From thingamabobs to rockets, 3D printing takes many forms.

Popular culture uses the term 3D printing as a synonym for additive manufacturing processes. In 2010, the American Society for Testing and Materials group came up with a set of standards to classify additive manufacturing processes into seven categories. Each process uses different materials and machine technology, which affects the use cases and applications, as well as the economics. I went down a rabbit hole researching the various processes in my hunt to buy the best 3D printer.

October 14, 2020

Topic: Development

0 comments

The Identity in Everyone's Pocket:
Keeping users secure through their smartphones

Newer phones use security features in many different ways and combinations. As with any security technology, however, using a feature incorrectly can create a false sense of security. As such, many app developers and service providers today do not use any of the secure identity-management facilities that modern phones offer. For those of you who fall into this camp, this article is meant to leave you with ideas about how to bring a hardware-backed and biometrics-based concept of user identity into your ecosystem.

October 7, 2020

Topic: Privacy and Rights

0 comments

Removing Kode:
Dead functions and dead features

Removing dead code from systems is one of KV's favorite koding pastimes because there is nothing quite like that feeling you get when you get rid of something you know wasn't being used. Code removal is like cleaning house, only sometimes you clean house with a flame thrower, which, honestly, is very satisfying. Since you're using a version-control system (you had better be using a VCS!), it's very easy to remove code without worry. If you ever need the code you removed, you can retrieve it from the VCS at will.

September 27, 2020

Topic: Development

0 comments

Security Analysis of SMS as a Second Factor of Authentication:
The challenges of multifactor authentication based on SMS, including cellular security deficiencies, SS7 exploits, and SIM swapping

Despite their popularity and ease of use, SMS-based authentication tokens are arguably one of the least secure forms of two-factor authentication. This does not imply, however, that it is an invalid method for securing an online account. The current security landscape is very different from that of two decades ago. Regardless of the critical nature of an online account or the individual who owns it, using a second form of authentication should always be the default option, regardless of the method chosen.

September 22, 2020

Topic: Security

0 comments

Efficient Graph Search

Welcome to Drill Bits, a new column about programming. This inaugural episode shows how graph search algorithms can avoid unnecessary work. A simple modification to classic breadth-first search improves the lower bound on its running time: Whereas classic BFS always requires time proportional to the number of vertices plus the number of edges, the improved "Efficient BFS" sometimes runs in time proportional to the number of vertices alone. Both asymptotic analysis and experiments show that Efficient BFS can be much faster than classic BFS.
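
The column's exact modification isn't reproduced here, but one reading of it is this: once every vertex has been discovered, further edge scans cannot discover anything, so the search may stop early. A sketch under that assumption:

```python
from collections import deque

def bfs_distances(graph, start):
    """graph: dict of vertex -> neighbor list. Stops scanning edges early
    once every vertex has been discovered (distances are already final)."""
    dist = {start: 0}
    queue = deque([start])
    while queue and len(dist) < len(graph):
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# On a complete graph, one adjacency scan discovers everything, so the
# remaining ~V scans of ~V edges each are skipped entirely: O(V), not O(E).
complete = {u: [v for v in range(50) if v != u] for u in range(50)}
print(max(bfs_distances(complete, 0).values()))   # -> 1
```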

September 13, 2020

Topic: Visualization

0 comments

The Life of a Data Byte:
Be kind and rewind.

One thing that remains true is the storing of 0s and 1s. The means by which that is done vary greatly. I hope the next time you burn a CD-RW with a mix of songs for a friend, or store home videos in an optical disc archive, you think about how the nonreflective bumps translate to a 0 and the reflective lands of the disk translate to a 1. If you are creating a mixtape on a cassette, remember that those are closely related to the Datasette used in the Commodore PET. Lastly, remember to be kind and rewind.

August 25, 2020

0 comments

Scrum Essentials Cards:
Experiences of Scrum Teams Improving with Essence

This article presents a series of examples and case studies on how people have used the Scrum Essentials cards to benefit their teams and improve how they work. Scrum is one of the most popular agile frameworks used successfully all over the world. It has been taught and used for 15-plus years. It is by far the most-used practice when developing software, and it has been generalized to be applicable for not just software but all kinds of products. It has been taught to millions of developers, all based on the Scrum Guide.

August 18, 2020

0 comments

Five Nonobvious Remote Work Techniques:
Emulating the efficiency of in-person conversations

The physical world has social conventions around conversations and communication that we use without even thinking. As we move to a remote-work world, we have to be more intentional to create such conventions. Developing these social norms is an ongoing commitment that outlasts initial technical details of VPN and desktop videoconference software configuration. Companies that previously forbade remote work can no longer deny its benefits. Once the pandemic-related lockdowns are over, many people will continue working remotely. Those who return to the office will need to work in ways that are compatible with their remotely working associates.

August 12, 2020

0 comments

Data on the Outside vs. Data on the Inside:
Data kept outside SQL has different characteristics from data kept inside.

This article describes the impact of services and trust on the treatment of data. It introduces the notion of inside data as distinct from outside data. After discussing the temporal implications of not sharing transactions across the boundaries of services, the article considers the need for immutability and stability in outside data. This leads to a depiction of outside data as a DAG of data items being independently generated by disparate services.
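
A small sketch of those properties (my construction, not the article's code): outside items are immutable, identified stably by content hash, and reference their predecessors, forming a DAG:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class OutsideItem:
    producer: str
    payload: str
    parents: tuple = ()              # ids of source items: the DAG's edges

    @property
    def ident(self):
        blob = f"{self.producer}|{self.payload}|{self.parents}".encode()
        return hashlib.sha256(blob).hexdigest()[:12]

order = OutsideItem("order-service", "order #42: 3 widgets")
invoice = OutsideItem("billing-service", "invoice for order #42",
                      parents=(order.ident,))
try:
    order.payload = "tampered"       # immutability is enforced
except AttributeError as e:
    print(type(e).__name__)          # FrozenInstanceError
print(invoice.parents)               # stable reference to the order item
```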

August 2, 2020

Topic: Data

0 comments

Sanity vs. Invisible Markings:
Tabs vs. spaces

Invisible and near-invisible markings bring us to the human part of the problem: not that code-editor authors aren't human, but most of us will not write new editors, though all of us will use editors. As we all know, once upon a time computers had small memories, and the difference between a tab, which is a single byte, and a corresponding number of spaces (8) could make a significant difference in the size of source code stored on a precious disk, and also transferred, over whatever primitive and slow bus, from storage into memory.

July 26, 2020

Topic: Code

0 comments

The History, Status, and Future of FPGAs:
Hitting a nerve with field-programmable gate arrays

This article is a summary of a three-hour discussion at Stanford University in September 2019 among the authors. It has been written with combined experiences at and with organizations such as Zilog, Altera, Xilinx, Achronix, Intel, IBM, Stanford, MIT, Berkeley, University of Wisconsin, the Technion, Fairchild, Bell Labs, Bigstream, Google, DIGITAL (DEC), SUN, Nokia, SRI, Hitachi, Silicom, Maxeler Technologies, VMware, Xerox PARC, Cisco, and many others. These organizations are not responsible for the content, but may have inspired the authors in some ways to arrive at the colorful ride through FPGA space described in this article.

July 22, 2020

Topic: Hardware

0 comments

Broken Hearts and Coffee Mugs:
The ordeal of security reviews

Overall, there are two broad types of security review: white box and black box. A white-box review is one in which the attackers have nearly full access to information such as code, design documents, and other information that will make it easier for them to design and carry out a successful attack. A black-box review, or test, is one in which the attackers can see the system only in the same way that a normal user or consumer would.

June 17, 2020

Topic: Security

0 comments

Debugging Incidents in Google’s Distributed Systems:
How experts debug production issues in complex distributed systems

This article covers the outcomes of research performed in 2019 on how engineers at Google debug production issues, including the types of tools, high-level strategies, and low-level tasks that engineers use in varying combinations to debug effectively. It examines the research approach used to capture data, summarizing the common engineering journeys for production investigations and sharing examples of how experts debug complex distributed systems. Finally, the article extends the Google specifics of this research to provide some practical strategies that you can apply in your organization.

June 6, 2020

Topic: Debugging

0 comments

Power to the People:
Reducing datacenter carbon footprints

By designing rack-level architectures, huge improvements can be made for power efficiency over conventional servers, since PSUs will be less oversized, more consolidated, and redundant for the rack versus per server. While the hyperscalers have benefited from these gains in power efficiency, most of the industry is still waiting. The Open Compute Project was started as an effort to allow other companies running datacenters to benefit from the power efficiencies as well. If more organizations run rack-scale architectures in their datacenters, the wasted carbon emissions caused by conventional servers can be lessened.

May 23, 2020

Topic: Power Management

0 comments

Is Persistent Memory Persistent?:
A simple and inexpensive test of failure-atomic update mechanisms

Power failures pose the most severe threat to application data integrity, and painful experience teaches that the integrity promises of failure-atomic update mechanisms can’t be taken at face value. Diligent developers and operators insist on confirming integrity claims by extensive firsthand tests. This article presents a simple and inexpensive testbed capable of subjecting storage devices, system software, and application software to ten thousand sudden whole-system power-interruption tests per week.

May 17, 2020

Topic: Memory

0 comments

Dark Patterns: Past, Present, and Future:
The evolution of tricky user interfaces

Dark patterns are an abuse of the tremendous power that designers hold in their hands. As public awareness of dark patterns grows, so does the potential fallout. Journalists and academics have been scrutinizing dark patterns, and the backlash from these exposures can destroy brand reputations and bring companies under the lenses of regulators. Design is power. In the past decade, software engineers have had to confront the fact that the power they hold comes with responsibilities to users and to society. In this decade, it is time for designers to learn this lesson as well.

May 17, 2020

Topic: HCI

0 comments

How Do Committees Invent? and Ironies of Automation:
The formulation of Conway’s law and the counterintuitive consequences of increasing levels of automation

The Lindy effect tells us that if a paper has been highly relevant for a long time, it’s likely to continue being so for a long time to come as well. My first choice is "How Do Committees Invent?" Author Melvin E. Conway provides a lot of great material that led up to the formulation of the law that bears his name. My second choice is Lisanne Bainbridge’s "Ironies of Automation." It’s a classic treatise on the counterintuitive consequences of increasing levels of automation.

April 15, 2020

Topic: Development

0 comments

Kode Vicious Plays in Traffic:
With increasing complexity comes increasing risk.

There is no single answer to the question of how to apply software to systems that can, literally, kill us, but there are models to follow that may help ameliorate the risk. The risks involved in these systems come from three major areas: marketing, accounting, and management. It is not that it is impossible to engineer such systems safely, but the history of automated systems shows us that it is difficult to do so cheaply and quickly.

April 8, 2020

Topic: System Evolution

0 comments

To Catch a Failure: The Record-and-Replay Approach to Debugging:
A discussion with Robert O’Callahan, Kyle Huey, Devon O’Dell, and Terry Coatta

When work began at Mozilla on the record-and-replay debugging tool called rr, the goal was to produce a practical, cost-effective, resource-efficient means for capturing low-frequency nondeterministic test failures in the Firefox browser. Much of the engineering effort that followed was invested in making sure the tool could actually deliver on this promise with a minimum of overhead. What was not anticipated, though, was that rr would come to be widely used outside of Mozilla, and not just for sleuthing out elusive failures but also for regular debugging.

March 28, 2020

Topic: Debugging

0 comments

The Best Place to Build a Subway:
Building projects despite (and because of) existing complex systems

Many engineering projects are big and complex. They require integrating into the existing environment to tie into stuff that precedes the new, big, complex thing. It is common to bemoan the challenges of dealing with the preexisting stuff. Many times, engineers don’t realize that their projects (and their paychecks) exist only because of the preexisting and complex systems that impose constraints on the new work. This column looks at some sophisticated urban redevelopment projects that are very much part of daily life in San Francisco and compares them with the challenges inherent in building software.

March 24, 2020

Topic: System Evolution

0 comments

Demystifying Stablecoins:
Cryptography meets monetary policy

Self-sovereign stablecoins are interesting and probably here to stay; however, they face numerous regulatory hurdles from banking, financial tracking, and securities laws. For stablecoins backed by a governmental currency, the ultimate expression would be a CBDC. Since paper currency has been in steady decline (and disproportionately for legitimate transactions), a CBDC could reintroduce cash with technological advantages and efficient settlement while minimizing user fees.

March 16, 2020

Topic: Cryptocurrency

0 comments

Chipping Away at Moore’s Law:
Modern CPUs are just chiplets connected together.

Smaller transistors can do more calculations without overheating, which makes them more power efficient. It also allows for smaller die sizes, which reduce costs and can increase density, allowing more cores per chip. The silicon wafers that chips are made of vary in purity, and none are perfect, which means every chip has a chance of having imperfections that differ in effect. Manufacturers can limit the effect of imperfections by using chiplets.
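
Back-of-envelope arithmetic shows why (the defect density and the standard Poisson yield model here are illustrative assumptions, not figures from the article):

```python
from math import exp

D = 0.5                                # assumed defect density per cm^2

def poisson_yield(area_cm2):
    """Probability that a die of the given area has zero defects."""
    return exp(-D * area_cm2)

monolith = poisson_yield(4.0)          # one 4 cm^2 die
chiplet = poisson_yield(1.0)           # one of four 1 cm^2 chiplets
print(f"good monolithic dies: {monolith:.1%}")   # ~13.5%
print(f"good chiplets:        {chiplet:.1%}")    # ~60.7%, binned before assembly
```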

March 13, 2020

Topic: Processors

0 comments

Communicate Using the Numbers 1, 2, 3, and More:
Leveraging expectations for better communication

People often use lists of various sizes when communicating. I might have 2 reasons for supporting the new company strategy. I might tell you my 3 favorite programming languages. I might make a presentation that describes 4 new features. There is 1 vegetable that I like more than any other. The length of the list affects how the audience interprets what is being said. Not aligning with what the human brain expects is like swimming upstream. Given the choice, why would anyone do that?

March 11, 2020

Topic: Business/Management

0 comments

The Way We Think About Data:
Human inspection of black-box ML models; reclaiming ownership of data

The two papers I’ve chosen for this issue of acmqueue both challenge the way we think about and use data, though in very different ways. In "Stop Explaining Black-box Machine-learning Models for High-stakes Decisions and Use Interpretable Models Instead," Cynthia Rudin makes the case for models that can be inspected and interpreted by human experts. The second paper, "Local-first Software: You Own Your Data, in Spite of the Cloud," describes how to retain sovereignty over your data.

February 18, 2020

Topic: Data

0 comments

Master of Tickets:
Valuing the quality, not the quantity, of work

Many silly metrics have been created to measure work, including the rate at which tickets are closed, the number of lines of code a programmer writes in a day, and the number of words an author can compose in an hour. All of these measures have one thing in common: They fail to take into account the quality of the output. If Alice writes 1,000 lines of impossible-to-read, buggy code in a day and Carol writes 100 lines of well-crafted, easy-to-use code in the same time, then who should be rewarded?

February 12, 2020

Topic: System Administration

0 comments

Securing the Boot Process:
The hardware root of trust

The goal of a hardware root of trust is to verify that the software installed in every component of the hardware is the software that was intended. This way you can verify and know without a doubt whether a machine’s hardware or software has been hacked or overwritten by an adversary. In a world of modchips, supply chain attacks, evil maid attacks, cloud provider vulnerabilities in hardware components, and other attack vectors, it has become more and more necessary to ensure hardware and software integrity.

February 4, 2020

Topic: Hardware

0 comments

Beyond the Fix-it Treadmill:
The Use of Post-Incident Artifacts in High-Performing Organizations

Given that humanity’s study of the sociological factors in safety is almost a century old, the technology industry’s post-incident analysis practices and how we create and use the artifacts those practices produce are all still in their infancy. So don’t be surprised that many of these practices are so similar, that the cognitive and social models used to parse apart and understand incidents and outages are few and cemented in the operational ethos, and that the byproducts sought from post-incident analyses are far-and-away focused on remediation items and prevention.

January 21, 2020

Topic: Development

0 comments

Managing the Hidden Costs of Coordination:
Controlling coordination costs when multiple, distributed perspectives are essential

Some initial considerations to control cognitive costs for incident responders include: (1) assessing coordination strategies relative to the cognitive demands of the incident; (2) recognizing when adaptations represent a tension between multiple competing demands (coordination and cognitive work) and seeking to understand them better rather than unilaterally eliminating them; (3) widening the lens to study the joint cognition system (integration of human-machine capabilities) as the unit of analysis; and (4) viewing joint activity as an opportunity for enabling reciprocity across inter- and intra-organizational boundaries.

January 21, 2020

Topic: Development

0 comments

Cognitive Work of Hypothesis Exploration During Anomaly Response:
A look at how we respond to the unexpected

Four incidents from web-based software companies reveal important aspects of anomaly response processes when incidents arise in web operations, two of which are discussed in this article. One particular cognitive function examined in detail is hypothesis generation and exploration, given the impact of obscure automation on engineers’ development of coherent models of the systems they manage. Each case was analyzed using the techniques and concepts of cognitive systems engineering. The set of cases provides a window into the cognitive work "above the line" in incident management of complex web-operation systems.

January 21, 2020

Topic: Development

0 comments

Above the Line, Below the Line:
The resilience of Internet-facing systems relies on what is below the line of representation.

Knowledge and understanding of below-the-line structure and function are continuously in flux. Near-constant effort is required to calibrate and refresh the understanding of the workings, dependencies, limitations, and capabilities of what is present there. In this dynamic situation no individual or group can ever know the system state. Instead, individuals and groups must be content with partial, fragmented mental models that require more or less constant updating and adjustment if they are to be useful.

January 21, 2020

Topic: Development

0 comments

Revealing the Critical Role of Human Performance in Software:
It’s time to revise our appreciation of the human side of Internet-facing software systems.

Understanding, supporting, and sustaining the capabilities above the line of representation require all stakeholders to be able to continuously update and revise their models of how the system is messy and yet usually manages to work. This kind of openness to continually reexamine how the system really works requires expanding the efforts to learn from incidents.

January 21, 2020

Topic: Development

0 comments

Numbers Are for Computers, Strings Are for Humans:
How and where software should translate data into a human-readable form

Unless what you are processing, storing, or transmitting are, quite literally, strings that come from and are meant to be shown to humans, you should avoid processing, storing, or transmitting that data as strings. Remember, numbers are for computers, strings are for humans. Let the computer do the work of presenting your data to the humans in a form they might find palatable. That’s where those extra bytes and instructions should be spent, not doing the inverse.
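
A small sketch of the advice: keep the datum numeric internally, and translate to a string only at the edge where a human will read it (the timestamp is an arbitrary example value):

```python
from datetime import datetime, timezone

event_ts = 1578916800                  # stored and transmitted as a number

def present(ts, tz=timezone.utc):
    """Translate to human-readable form only at the presentation edge."""
    return datetime.fromtimestamp(ts, tz).strftime("%Y-%m-%d %H:%M %Z")

print(present(event_ts))               # humans see '2020-01-13 12:00 UTC'
print(event_ts + 3600)                 # computers keep doing cheap arithmetic
```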

January 13, 2020

Topic: Databases

0 comments

Opening up the Baseboard Management Controller:
If the CPU is the brain of the board, the BMC is the brain stem.

In 2011 Facebook announced the Open Compute Project to form a community around open-source designs and specifications for data center hardware. Since then, the project has expanded to all aspects of the open data center. This column focuses on the BMC and is an introduction to a complicated topic. The intention is to provide a full picture of the world of the open-source BMC ecosystem, starting with a brief overview of the BMC’s role in a system, touching on security concerns around the BMC, and then diving into some of the projects that have developed in the open-source ecosystem.

January 6, 2020

Topic: Open Source

0 comments

Blockchain Technology: What Is It Good for?:
Industry’s dreams and fears for this new technology

Business executives, government leaders, investors, and researchers frequently ask the following three questions: (1) What exactly is blockchain technology? (2) What capabilities does it provide? (3) What are good applications? Here we answer these questions thoroughly, provide a holistic overview of blockchain technology that separates hype from reality, and propose a useful lexicon for discussing the specifics of blockchain technology in the future.

December 16, 2019

Topic: Blockchain

0 comments

API Practices If You Hate Your Customers:
APIs speak louder than words.

Do you have disdain for your customers? Do you wish they would go away? When you interact with customers are you silently fantasizing about them switching to your competitor’s product? In short, do you hate your customers? In this article, I document a number of industry best practices designed to show customers how much you hate them. All of them are easy to implement. Heck, your company may be doing many of these already.

December 10, 2019

Topic: API Design

6 comments

The Reliability of Enterprise Applications:
Understanding enterprise reliability

Enterprise reliability is a discipline that ensures applications will deliver the required business functionality in a consistent, predictable, and cost-effective manner without compromising core aspects such as availability, performance, and maintainability. This article describes a core set of principles and engineering methodologies that enterprises can apply to help them navigate the complex environment of enterprise reliability and deliver highly reliable and cost-efficient applications.

December 3, 2019

Topic: Quality Assurance

0 comments

Space Time Discontinuum:
Combining data from many sources may cause painful delays.

Back when you had only one database for an application to worry about, you didn’t have to think about partial results. You also didn’t have to think about data arriving after some other data. It was all simply there. Now, you can do so much more with big distributed systems, but you have to be more sophisticated in the tradeoff between timely answers and complete answers.

November 18, 2019

Topic: Data

0 comments

Optimizations in C++ Compilers:
A practical journey

There’s a tradeoff to be made in giving the compiler more information: it can make compilation slower. Technologies such as link time optimization can give you the best of both worlds. Optimizations in compilers continue to improve, and upcoming improvements in indirect calls and virtual function dispatch might soon lead to even faster polymorphism.

November 12, 2019

Topic: Programming Languages

0 comments

Back under a SQL Umbrella:
Unifying serving and analytical data; using a database for distributed machine learning

Procella is the latest in a long line of data processing systems at Google. What’s unique about it is that it’s a single store handling reporting, embedded statistics, time series, and ad-hoc analysis workloads under one roof. It’s SQL on top, cloud-native underneath, and it’s serving billions of queries per day over tens of petabytes of data. There’s one big data use case that Procella isn’t handling today, though, and that’s machine learning. But in "Declarative recursive computation on an RDBMS... or, why you should use a database for distributed machine learning," Jankov et al. make the case for closing exactly that gap.

November 6, 2019

Topic: Databases

0 comments

Putting Machine Learning into Production Systems:
Data validation and software engineering for machine learning

Breck et al. share details of the pipelines used at Google to validate petabytes of production data every day. With so many moving parts it’s important to be able to detect and investigate changes in data distributions before they can impact model performance. "Software Engineering for Machine Learning: A Case Study" shares lessons learned at Microsoft as machine learning started to pervade more and more of the company’s systems, moving from specialized machine-learning products to simply being an integral part of many products and services.

October 7, 2019

Topic: AI

0 comments

Hack for Hire:
Investigating the emerging black market of retail email account hacking services

Hack-for-hire services charging $100-$400 per contract were found to produce sophisticated, persistent, and personalized attacks that were able to bypass 2FA via phishing. The demand for these services, however, appears to be limited to a niche market, as evidenced by the small number of discoverable services, an even smaller number of successful services, and the fact that these attackers target only about one in a million Google users.

October 1, 2019

Topic: Privacy and Rights

0 comments

Write Amplification Versus Read Perspiration:
The tradeoffs between write and read

In computing, there’s an interesting trend where writing creates a need to do more work. You need to reorganize, merge, reindex, and more to make the stuff you wrote more useful. If you don’t, you must search or do other work to support future reads.
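
To make the tradeoff concrete, here is a toy Python sketch (entirely illustrative; nothing here is from the article): an append-only log does almost no work on write but must scan on read, while a sorted store pays reorganization cost on every write so that reads stay cheap.

    import bisect

    class AppendLog:
        """Write-optimized: O(1) puts, O(n) gets ("read perspiration")."""
        def __init__(self):
            self.entries = []                      # unordered (key, value) pairs
        def put(self, key, value):
            self.entries.append((key, value))      # no reorganizing on write
        def get(self, key):
            for k, v in reversed(self.entries):    # newest entry wins
                if k == key:
                    return v
            return None

    class SortedStore:
        """Read-optimized: O(n) puts ("write amplification"), O(log n) gets."""
        def __init__(self):
            self.keys, self.values = [], []
        def put(self, key, value):
            i = bisect.bisect_left(self.keys, key)
            if i < len(self.keys) and self.keys[i] == key:
                self.values[i] = value             # overwrite in place
            else:
                self.keys.insert(i, key)           # shifting: the write-time work
                self.values.insert(i, value)
        def get(self, key):
            i = bisect.bisect_left(self.keys, key)
            if i < len(self.keys) and self.keys[i] == key:
                return self.values[i]
            return None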

September 23, 2019

Topic: Databases

0 comments

The Effects of Mixing Machine Learning and Human Judgment:
Collaboration between humans and machines does not necessarily lead to better outcomes.

Based on the theoretical findings from the existing literature, some policymakers and software engineers contend that algorithmic risk assessments such as the COMPAS software can alleviate the incarceration epidemic and the occurrence of violent crimes by informing and improving decisions about policing, treatment, and sentencing. Considered in tandem, these findings indicate that collaboration between humans and machines does not necessarily lead to better outcomes, and human supervision does not sufficiently address problems when algorithms err or demonstrate concerning biases.

September 16, 2019

Topic: AI

1 comment

Koding Academies:
A low-risk path to becoming a front-end plumber

Encourage your friend to pick a course that will introduce concepts that can be used in the future, rather than just a specific set of buzzword technologies that are hot this year. Most courses are based around Python. Encourage your friend to study that as a first computer language, as the concepts learned in Python can be applied in other languages and other fields.

September 11, 2019

Topic: Education

2 comments

Persistent Memory Programming on Conventional Hardware:
The persistent memory style of programming can dramatically simplify application software.

Driven by the advent of byte-addressable non-volatile memory, the persistent memory style of programming will gain traction among developers, taking its rightful place alongside existing paradigms for managing persistent application state. Until NVM becomes available on all computers, developers can use the techniques presented in this article to enjoy the benefits of persistent memory programming on conventional hardware.
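
As a rough illustration of the style on conventional hardware, here is a minimal Python sketch (the file name and layout are invented, and the article’s techniques go well beyond this): application state lives in a memory-mapped file, so ordinary in-place updates survive restarts once flushed.

    import mmap, os, struct

    PATH, SIZE = "counter.dat", 8                    # one 64-bit persistent counter

    def open_state():
        fd = os.open(PATH, os.O_RDWR | os.O_CREAT)   # backing file stands in for NVM
        os.ftruncate(fd, SIZE)                       # a new file starts zero-filled
        return mmap.mmap(fd, SIZE)

    mem = open_state()
    runs = struct.unpack_from("<Q", mem, 0)[0]       # read state left by the last run
    struct.pack_into("<Q", mem, 0, runs + 1)         # update it in place, in "memory"
    mem.flush()                                      # push it back to durable media
    print("this is run number", runs + 1)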

August 26, 2019

Topic: Memory

2 comments

DAML: The Contract Language of Distributed Ledgers:
A discussion between Shaul Kfir and Camille Fournier

We’ll see the same kind of Cambrian explosion we witnessed in the web world once we started using mutualized infrastructure in public clouds and frameworks. It took only three weeks to learn enough Ruby on Rails and Heroku to push out the first version of a management system for that brokerage. And that’s because I had to think only about the models, the views, and the controllers. The hardest part, of course, had to do with building a secure wallet.

August 19, 2019

Topic: Databases

0 comments

What is a CSO Good For?:
Security requires more than an off-the-shelf solution.

The CSO is not a security engineer, so let’s contrast the two jobs to create a picture of what we should and should not see.

August 13, 2019

Topic: Business/Management

0 comments

Demo Data as Code:
Automation helps collaboration.

A casual request for a demo dataset may seem like a one-time thing that doesn’t need to be automated, but the reality is that this is a collaborative process requiring multiple iterations and experimentation. There will undoubtedly be requests for revisions big and small, the need to match changing software, and the need to support new and revised demo stories. All of this makes automating the process worthwhile. Modern scripting languages make it easy to create ad hoc functions that act like a little language. A repeatable process helps collaboration, enables delegation, and saves time now and in the future.
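
A minimal sketch of such a little language in Python (all names and fields are invented):

    import json, random

    random.seed(42)                          # repeatability: identical data every run

    def customer(name, tier="basic"):
        return {"name": name, "tier": tier, "orders": []}

    def order(cust, item, qty=1):
        cust["orders"].append({"item": item, "qty": qty})
        return cust

    def demo_dataset():
        acme = customer("Acme Corp", tier="enterprise")
        order(acme, "widgets", qty=500)
        order(acme, "gadgets")
        zed = order(customer("Zed LLC"), "widgets")
        return [acme, zed]

    if __name__ == "__main__":
        print(json.dumps(demo_dataset(), indent=2))   # regenerate on demand

Because the dataset is code, a revision request becomes a small, reviewable diff rather than another round of hand editing.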

August 5, 2019

Topic: Business/Management

0 comments

Velocity in Software Engineering:
From tectonic plate to F-16

Software engineering occupies an increasingly critical role in companies across all sectors, but too many software initiatives end up both off target and over budget. A surer path is optimized for speed, open to experimentation and learning, agile, and subject to regular course correcting. Good ideas tend to be abundant, though execution at high velocity is elusive. The good news is that velocity is controllable; companies can invest systematically to increase it.

July 29, 2019

Topic: Development

0 comments

The Evolution of Management:
Transitioning up the ladder

With each step up, the job changes - but not all of the changes are obvious. You have to shift your mindset, and focus on building new skills that are often very different from the skills that made you successful in your previous role.

July 22, 2019

Topic: Business/Management

0 comments

Open-source Firmware:
Step into the world behind the kernel.

Open-source firmware can help bring computing to a more secure place by making the actions of firmware more visible and less likely to do harm. This article’s goal is to make readers feel empowered to demand more from vendors who can help drive this change.

July 17, 2019

Topic: Open Source

1 comment

Time Protection in Operating Systems and Speaker Legitimacy Detection:
Operating system-based protection from timing-based side-channel attacks; implications of voice-imitation software

Timing-based side-channel attacks are a particularly tricky class of attacks to deal with because the very thing you’re often striving for can give you away. There are always more creative new instances of attacks to be found, so you need a principled way of thinking about defenses that address the class, not just a particular instantiation. That’s what Ge et al. give us in "Time Protection, the Missing OS Abstraction." Just as operating systems prevent spatial inference through memory protection, so future operating systems will need to prevent temporal inference through time protection. It’s going to be a long road to get there.

July 9, 2019

Topic: Security

0 comments

Surviving Software Dependencies:
Software reuse is finally here but comes with risks.

Software reuse is finally here, and its benefits should not be understated, but we’ve accepted this transformation without completely thinking through the potential consequences. The Copay and Equifax attacks are clear warnings of real problems in the way software dependencies are consumed today. There’s a lot of good software out there. Let’s work together to find out how to reuse it safely.

July 8, 2019

Topic: Development

0 comments

MUST and MUST NOT:
On writing documentation

Pronouncements without background or explanatory material are useless to those who are not also deeply steeped in the art and science of computer security or security in general. It takes a particular bent of mind to think like an attacker and a defender all at once, and most people are incapable of doing this; so, if you want the people reading the document to follow your guidance, then you must take them on a journey from ignorance to knowledge.

June 17, 2019

Topic: Development

0 comments

Extract, Shoehorn, and Load:
Data doesn’t always fit nicely into a new home.

It turns out that the business value of ill-fitting data is extremely high. The process of taking the input data, discarding what doesn’t fit, adding default or null values for missing stuff, and generally shoehorning it to the prescribed shape is important. The prescribed shape is usually one that is amenable to analysis for deeper meaning.
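
A tiny sketch of the shoehorning step in Python (the field names and defaults are invented):

    SCHEMA = {"name": "unknown", "age": None, "country": "n/a"}   # prescribed shape

    def shoehorn(record, schema=SCHEMA):
        shaped = dict(schema)                                     # defaults and nulls
        shaped.update({k: v for k, v in record.items() if k in schema})
        return shaped                                             # extras are discarded

    print(shoehorn({"name": "Ada", "age": 36, "shoe_size": 7}))
    # -> {'name': 'Ada', 'age': 36, 'country': 'n/a'}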

June 5, 2019

Topic: Databases

0 comments

Access Controls and Health Care Records: Who Owns the Data?:
A discussion with David Evans, Richard McDonald, and Terry Coatta

What if health care records were handled in more of a patient-centric manner, using systems and networks that allow data to be readily shared by all the physicians, clinics, hospitals, and pharmacies a person might choose to share them with or have occasion to visit? And, more radically, what if it was the patients who owned the data?

June 3, 2019

Topic: Privacy and Rights

0 comments

The DevOps Phenomenon:
An executive crash course

Stressful emergency releases are a thing of the past for companies that subscribe to the DevOps method of software development and delivery. New releases are frequent. Bugs are fixed rapidly. New business opportunities are sought with gusto and confidence. New features are released, revised, and improved with rapid iterations. DevOps presents a strategic advantage for organizations when compared with traditional software-development methods. Leadership plays an important role during that transformation. DevOps is about providing guidelines for faster time to market of new software features and achieving a higher level of stability. Implementing cross-functional, product-oriented teams helps bridge the gaps between software development and operations.

May 29, 2019

Topic: Development

1 comment

Overly Attached:
Know when to let go of emotional attachment to your work.

A smart, senior engineer couldn’t make logical decisions if it meant deprecating the system he and his team had worked on for a number of years. Even though the best thing would have been to help another team create the replacement system, he and his team didn’t want to entertain the idea because it would mean putting an end to something they had invested so much in. It is good to have strong ownership, but what happens when you get too attached?

May 19, 2019

Topic: Business/Management

0 comments

Industry-scale Knowledge Graphs: Lessons and Challenges:
Five diverse technology companies show how it’s done

This article looks at the knowledge graphs of five diverse tech companies, comparing the similarities and differences in their respective experiences of building and using the graphs, and discussing the challenges that all knowledge-driven enterprises face today. The collection of knowledge graphs discussed here covers the breadth of applications, from search, to product descriptions, to social networks.

May 13, 2019

Topic: Development

1 comment

GAN Dissection and Datacenter RPCs:
Visualizing and understanding generative adversarial networks; datacenter RPCs can be general and fast.

Image generation using GANs (generative adversarial networks) has made astonishing progress over the past few years. While staring in wonder at some of the incredible images, it’s natural to ask how such feats are possible. "GAN Dissection: Visualizing and Understanding Generative Adversarial Networks" gives us a look under the hood to see what kinds of things are being learned by GAN units, and how manipulating those units can affect the generated images. February saw the 16th edition of the Usenix Symposium on Networked Systems Design and Implementation. Kalia et al. blew me away with their work on fast RPCs (remote procedure calls) in the datacenter.

May 2, 2019

Topic: Networks

0 comments

Troubling Trends in Machine Learning Scholarship:
Some ML papers suffer from flaws that could mislead the public and stymie future research.

Flawed scholarship threatens to mislead the public and stymie future research by compromising ML’s intellectual foundations. Indeed, many of these problems have recurred cyclically throughout the history of AI and, more broadly, in scientific research. In 1976, Drew McDermott chastised the AI community for abandoning self-discipline, warning prophetically that "if we can’t criticize ourselves, someone else will save us the trouble." The current strength of machine learning owes much to a large body of rigorous research to date, both theoretical and empirical. By promoting clear scientific thinking and communication, our community can sustain the trust and investment it currently enjoys.

April 24, 2019

Topic: AI

0 comments

Tom’s Top Ten Things Executives Should Know About Software:
Software acumen is the new norm.

Software is eating the world. To do their jobs well, executives and managers outside of technology will benefit from understanding some fundamentals of software and the software-delivery process.

April 14, 2019

Topic: Business/Management

4 comments

Garbage Collection as a Joint Venture:
A collaborative approach to reclaiming memory in heterogeneous software systems

Cross-component tracing is a way to solve the problem of reference cycles across component boundaries. This problem appears as soon as components can form arbitrary object graphs with nontrivial ownership across API boundaries. An incremental version of CCT is implemented in V8 and Blink, enabling effective and efficient reclamation of memory in a safe manner.

April 9, 2019

Topic: Programming Languages

0 comments

How to Create a Great Team Culture (and Why It Matters):
Build safety, share vulnerability, establish purpose.

As leader of the team, you have significant influence over your team’s culture. You can institute policies and procedures that help make your team happy and productive, monitor team successes, and continually improve the team. Another important part of team culture, however, is helping people feel they are a part of creating it. How can you expand the job of creating a culture to other team members?

April 3, 2019

Topic: Business/Management

0 comments

Online Event Processing:
Achieving consistency where distributed transactions have failed

Support for distributed transactions across heterogeneous storage technologies is either nonexistent or suffers from poor operational and performance characteristics. In contrast, OLEP is increasingly used to provide good performance and strong consistency guarantees in such settings. In data systems it is very common for logs to be used as internal implementation details. The OLEP approach is different: it uses event logs, rather than transactions, as the primary application programming model for data management. Traditional databases are still used, but their writes come from a log rather than directly from the application. The use of OLEP is not simply pragmatism on the part of developers, but rather it offers a number of advantages.
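
A toy sketch of the pattern (invented names; a real deployment would use a durable log such as Kafka): the application only appends events, and each store derives its state by consuming the log in order.

    from collections import defaultdict

    log = []                                    # stand-in for a durable, ordered log

    def emit(event):
        log.append(event)                       # the application's only write path

    def apply_to_balances(balances, event):     # one downstream store
        balances[event["src"]] -= event["amount"]
        balances[event["dst"]] += event["amount"]

    def apply_to_audit(audit, event):           # another store, same log
        audit.append((event["src"], event["dst"], event["amount"]))

    emit({"src": "alice", "dst": "bob", "amount": 10})
    emit({"src": "bob", "dst": "carol", "amount": 4})

    balances, audit = defaultdict(int), []
    for e in log:                               # each store replays independently,
        apply_to_balances(balances, e)          # so they converge without a
        apply_to_audit(audit, e)                # distributed transaction
    print(dict(balances), audit)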

March 24, 2019

Topic: Distributed Development

4 comments

The Worst Idea of All Time:
Revelations at 100!

In February 2004, with the other members of the Queue editorial board, I was at our monthly in-person dinner meeting, where we gather to come up with interesting discussion topics that will result in practitioner-oriented articles (and the best authors to write them) for publication in Queue. It was only our second year in business, and although we had published some successful and widely read articles, Queue still had no regular columnists. I was initially invited to board meetings by another editorial board member, Eric Allman, and had written a couple of articles for the publication. I was also co-authoring my first book but had never been a columnist.

March 18, 2019

Topic: Development

0 comments

Net Neutrality: Unexpected Solution to Blockchain Scaling:
Cloud-delivery networks could dramatically improve blockchains’ scalability, but clouds must be provably neutral first.

Provably neutral clouds are undoubtedly a viable solution to blockchain scaling. By optimizing the transport layer, not only can the throughput be fundamentally scaled up, but the latency can be dramatically reduced. Indeed, the latency distribution in today’s data centers is already biased toward microsecond timescales for most of the flows, with millisecond timescales residing only at the tail of the distribution. There is no reason why a BDN point of presence would not be able to achieve similar performance. Adding dedicated optical infrastructure among such BDN points of presence would further increase throughput and reduce latency, creating the backbone of an advanced BDN.

March 12, 2019

Topic: Blockchain

2 comments

SageDB and NetAccel:
Learned models within the database system; network-accelerated query processing

The CIDR (Conference on Innovative Data Systems Research) runs once every two years, and luckily for us 2019 is one of those years. I’ve selected two papers from this year’s conference that highlight bold and exciting directions for data systems.

February 28, 2019

Topic: Development

0 comments

Identity by Any Other Name:
The complex cacophony of intertwined systems

New emerging systems and protocols both tighten and loosen our notions of identity, and that’s good! They make it easier to get stuff done. REST, IoT, big data, and machine learning all revolve around notions of identity that are deliberately kept flexible and sometimes ambiguous. Notions of identity underlie our basic mechanisms of distributed systems, including interchangeability, idempotence, and immutability.

February 19, 2019

Topic: Databases

0 comments

Edge Computing:
Scaling resources within multiple administrative domains

Creating edge computing infrastructures and applications encompasses quite a breadth of systems research. Let’s take a look at the academic view of edge computing and a sample of existing research that will be relevant in the coming years.

February 12, 2019

Topic: Databases

1 comment

Achieving Digital Permanence:
The many challenges to maintaining stored information and ways to overcome them

Today’s Information Age is creating new uses for and new ways to steward the data that the world depends on. The world is moving away from familiar, physical artifacts to new means of representation that are closer to information in its essence. We need processes to ensure both the integrity and accessibility of knowledge in order to guarantee that history will be known and true.

February 6, 2019

Topic: Databases

0 comments

Know Your Algorithms:
Stop using hardware to solve software problems.

Knowing that your CPU is in use 100 percent of the time doesn’t tell you much about the overall system other than it’s busy, but busy with what? Maybe it’s sitting in a tight loop, or some clown added a bunch of delay loops during testing that are no longer necessary. Until you profile your system, you have no idea why the CPU is busy. All systems provide some form of profiling so that you can track down where the bottlenecks are, and it’s your responsibility to apply these tools before you spend money on brand new hardware.
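
As a quick illustration with Python’s standard-library profiler (the workload is invented), a profile immediately fingers a quadratic loop that a faster CPU would only hide, not fix:

    import cProfile, random

    data = [random.randrange(10000) for _ in range(5000)]

    def has_duplicates_slow(xs):                # O(n^2): the "busy with what?"
        return any(xs[i] == xs[j]
                   for i in range(len(xs))
                   for j in range(i + 1, len(xs)))

    def has_duplicates_fast(xs):                # O(n): the actual fix
        return len(set(xs)) != len(xs)

    cProfile.run("has_duplicates_slow(data)")   # the profile names the hot spot
    cProfile.run("has_duplicates_fast(data)")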

January 28, 2019

Topic: Development

2 comments

Metrics That Matter:
Critical but oft-neglected service metrics that every SRE and product owner should care about

Measure your site reliability metrics, set the right targets, and go through the work to measure the metrics accurately. Then, you’ll find that your service runs better, with fewer outages, and much more user adoption.

January 21, 2019

Topic: Web Services

1 comment

Design Patterns for Managing Up:
Four challenging work situations and how to handle them

Challenges come up all the time at work. Spend time now thinking about how you want to be seen at work, and then think about how that version of you would respond to the challenges that you could encounter. When you have a plan in place, you are much more likely to succeed.

January 16, 2019

Topic: Business/Management

3 comments

A Hitchhiker’s Guide to the Blockchain Universe:
Blockchain remains a mystery, despite its growing acceptance.

It is difficult these days to avoid hearing about blockchain. Despite the significant potential of blockchain, it is also difficult to find a consistent description of what it really is. This article looks at the basics of blockchain: the individual components, how those components fit together, and what changes might be made to solve some of the problems with blockchain technology.

January 8, 2019

Topic: Blockchain

1 comment

Tear Down the Method Prisons! Set Free the Practices!:
Essence: a new way of thinking that promises to liberate the practices and enable true learning organizations

This article explains why we need to break out of this repetitive dysfunctional behavior, and it introduces Essence, a new way of thinking that promises to free the practices from their method prisons and thus enable true learning organizations.

December 26, 2018

Topic: Development

0 comments

Security for the Modern Age:
Securely running processes that require the entire syscall interface

Giving operators a usable means of securing the methods they use to deploy and run applications is a win for everyone. Keeping the usability-focused abstractions provided by containers, while finding new ways to automate security and defend against attacks, is a great path forward.

December 19, 2018

Topic: Security

0 comments

SQL is No Excuse to Avoid DevOps:
Automation and a little discipline allow better testing, shorter release cycles, and reduced business risk.

Using SQL databases is not an impediment to doing DevOps. Automating schema management and a little developer discipline enables more vigorous and repeatable testing, shorter release cycles, and reduced business risk. When you can confidently deploy new releases, you do it more frequently. New features that previously sat unreleased for weeks or months now reach users sooner. Bugs are fixed faster. Security holes are closed sooner. It enables the company to provide better value to customers.
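
In that spirit, here is a minimal sketch of automated schema management (real teams would reach for a dedicated tool such as Flyway or Liquibase; the file-naming convention here is invented): numbered .sql files are applied exactly once, in order, with the applied version recorded in the database itself.

    import glob, os, sqlite3

    def migrate(db_path, migrations_dir):
        db = sqlite3.connect(db_path)
        db.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
        current = db.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
        for path in sorted(glob.glob(os.path.join(migrations_dir, "*.sql"))):
            version = int(os.path.basename(path).split("_")[0])  # e.g. 002_add_index.sql
            if version > current:                                # apply each file once
                db.executescript(open(path).read())
                db.execute("INSERT INTO schema_version VALUES (?)", (version,))
                db.commit()
        db.close()

    # migrate("app.db", "migrations/")  # idempotent: safe to run on every deploy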

December 12, 2018

Topic: Testing

8 comments

Understanding Database Reconstruction Attacks on Public Data:
These attacks on statistical databases are no longer a theoretical danger.

With the dramatic improvement in both computer speeds and the efficiency of solvers for SAT and other NP-hard problems in the last decade, DRAs (database reconstruction attacks) on statistical databases are no longer just a theoretical danger. The vast quantity of data products published by statistical agencies each year may give a determined attacker more than enough information to reconstruct some or all of a target database and breach the privacy of millions of people. Traditional disclosure-avoidance techniques are not designed to protect against this kind of attack.
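
A toy version of such an attack (invented numbers, with brute force standing in for a SAT solver) shows how little published data it takes to over-determine the underlying records:

    from itertools import combinations_with_replacement

    # Published statistics for a hypothetical 3-person census block.
    PUBLISHED = {"count": 3, "mean": 30, "median": 30, "min": 18}

    solutions = [ages for ages in
                 combinations_with_replacement(range(0, 116), 3)  # nondecreasing triples
                 if ages[0] == PUBLISHED["min"]
                 and ages[1] == PUBLISHED["median"]
                 and sum(ages) == PUBLISHED["mean"] * PUBLISHED["count"]]

    print(solutions)   # -> [(18, 30, 42)]: the "anonymous" ages, fully recovered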

November 28, 2018

Topic: Security

0 comments

Writing a Test Plan:
Establish your hypotheses, methodologies, and expected results.

If you can think of each of your tests as an experiment with a hypothesis, a test methodology, and a test result, it should all fall into place rather than falling through the cracks.

November 27, 2018

Topic: Testing

0 comments

The Importance of a Great Finish:
You have to finish strong, every time.

How can you make sure that you are recognized as a valuable member of your team, whose work is seen as critical to the team’s success? You have to finish strong, every time. Here is how to keep your momentum up and make the right moves to be a visible contributor to the final success of every project.

November 19, 2018

Topic: Business/Management

0 comments

CodeFlow: Improving the Code Review Process at Microsoft:
A discussion with Jacek Czerwonka, Michaela Greiler, Christian Bird, Lucas Panjer, and Terry Coatta

Delivering a new set of capabilities for managing and improving Microsoft’s code-review process was the primary goal right from the start. In the course of accomplishing that, much was also learned about certain general code-review principles. In fact, subsequent research has offered surprising evidence of just how similar the impact can be when many of these principles are followed at companies other than Microsoft.

November 13, 2018

Topic: Workflow Systems

0 comments

Benchmarking "Hello, World!":
Six different views of the execution of "Hello, World!" show what is often missing in today’s tools

As more and more software moves off the desktop and into data centers, and more and more cell phones use server requests as the other half of apps, observation tools for large-scale distributed transaction systems are not keeping up. This makes it tempting to look under the lamppost using simpler tools. You will waste a lot of high-pressure time following that path when you have a sudden complex performance crisis.

November 6, 2018

Topic: Tools

0 comments

Using Remote Cache Service for Bazel:
Save time by sharing and reusing build and test output

Remote cache service is a new development that can save significant time in running builds and tests. It is particularly useful for a large code base and a development team of any size. Bazel is an actively developed open-source build and test system that aims to increase productivity in software development. It has a growing number of optimizations to improve the performance of daily development tasks.

October 21, 2018

Topic: Testing

0 comments

A Chance Gardener:
Harvesting open-source products and planting the next crop

It is a very natural progression for a company to go from being a pure consumer of open source, to interacting with the project via patch submission, and then becoming a direct contributor. No one would expect a company to be a direct contributor to all the open-source projects it consumes, as most companies consume far more software than they would ever produce, which is the bounty of the open-source garden. It ought to be the goal of every company consuming open source to contribute something back, however, so that its garden continues to bear fruit, instead of rotting vegetables.

October 17, 2018

Topic: Open Source

0 comments

Why SRE Documents Matter:
How documentation enables SRE teams to manage new and existing services

SRE (site reliability engineering) is a job function, a mindset, and a set of engineering approaches for making web products and services run reliably. SREs operate at the intersection of software development and systems engineering to solve operational problems and engineer solutions to design, build, and run large-scale distributed systems scalably, reliably, and efficiently. A mature SRE team likely has well-defined bodies of documentation associated with many SRE functions.

October 4, 2018

Topic: Web Development

0 comments

How to Live in a Post-Meltdown and -Spectre World:
Learn from the past to prepare for the next battle.

Spectre and Meltdown create a risk landscape that has more questions than answers. This article addresses how these vulnerabilities were triaged when they were announced and the practical defenses that are available. Ultimately, these vulnerabilities present a unique set of circumstances, but for the vulnerability management program at Goldman Sachs, the response was just another day at the office.

September 25, 2018

Topic: Security

0 comments

How to Get Things Done When You Don’t Feel Like It:
Five strategies for pushing through

If you want to be successful, then it serves you better to rise to the occasion no matter what. That means learning how to push through challenges and deliver valuable results.

September 18, 2018

Topic: Business/Management

4 comments

Tracking and Controlling Microservice Dependencies:
Dependency management is a crucial part of system and software design.

Dependency cycles will be familiar to you if you have ever locked your keys inside your house or car. You can’t open the lock without the key, but you can’t get the key without opening the lock. Some cycles are obvious, but more complex dependency cycles can be challenging to find before they lead to outages. Strategies for tracking and controlling dependencies are necessary for maintaining reliable systems.
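
As a small illustration, a depth-first search over a service dependency graph can flag such cycles before they cause an outage (the graph below is invented):

    def find_cycle(deps):
        visiting, done = set(), set()
        def dfs(node, path):
            visiting.add(node)
            for dep in deps.get(node, []):
                if dep in visiting:                        # back edge: a cycle
                    return path[path.index(dep):] + [dep]
                if dep not in done:
                    cycle = dfs(dep, path + [dep])
                    if cycle:
                        return cycle
            visiting.discard(node)
            done.add(node)
            return None
        for node in deps:
            if node not in done:
                cycle = dfs(node, [node])
                if cycle:
                    return cycle
        return None

    deps = {"auth": ["storage"], "storage": ["locks"], "locks": ["auth"]}
    print(find_cycle(deps))   # -> ['auth', 'storage', 'locks', 'auth']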

September 11, 2018

Topic: Web Services

0 comments

The Obscene Coupling Known as Spaghetti Code:
Teach your junior programmers how to read code

Since you both are working on the same code base, you also have ample opportunity for leadership by showing this person how you code. You must do this carefully or the junior programmer will think you’re pulling rank, but, with a bit of gentle show and tell, you can get your Padawan to see what you’re driving at. This human interaction is often difficult for those of us who prefer to spend our days with seemingly logical machines.

August 7, 2018

Topic: Code

0 comments

Corp to Cloud: Google’s Virtual Desktops:
How Google moved its virtual desktops to the cloud

Over one-fourth of Googlers use internal, data-center-hosted virtual desktops. This on-premises offering sits in the corporate network and allows users to develop code, access internal resources, and use GUI tools remotely from anywhere in the world. Among its most notable features, a virtual desktop instance can be sized according to the task at hand, has persistent user storage, and can be moved between corporate data centers to follow traveling Googlers. Until recently, our virtual desktops were hosted on commercially available hardware on Google’s corporate network using a homegrown open-source virtual cluster-management system called Ganeti. Today, this substantial and Google-critical workload runs on GCP (Google Cloud Platform).

August 1, 2018

Topic: Distributed Computing

0 comments

Knowledge Base Construction in the Machine-learning Era:
Three critical design points: Joint-learning, weak supervision, and new representations

More information is accessible today than at any other time in human history. From a software perspective, however, the vast majority of this data is unusable, as it is locked away in unstructured formats such as text, PDFs, web pages, images, and other hard-to-parse formats. The goal of knowledge base construction is to extract structured information automatically from this "dark data," so that it can be used in downstream applications for search, question-answering, link prediction, visualization, modeling and much more.

July 26, 2018

Topic: AI

0 comments

The Secret Formula for Choosing the Right Next Role:
The best careers are not defined by titles or resume bullet points.

Focus on factors that will increase your career capital and make you a more valuable hire in your next role, and the one after that, and the one after that. When you are looking at the options for your next role, there are smarter choices that you can make.

July 23, 2018

Topic: Business/Management

0 comments

The Mythos of Model Interpretability:
In machine learning, the concept of interpretability is both important and slippery.

Supervised machine-learning models boast remarkable predictive capabilities. But can you trust your model? Will it work in deployment? What else can it tell you about the world?

July 17, 2018

Topic: AI

1 comment

GitOps: A Path to More Self-service IT:
IaC + PR = GitOps

GitOps lowers the bar for creating self-service versions of common IT processes, making it easier to realize the return in the ROI calculation. GitOps not only achieves this, but also encourages desired behaviors in IT systems: better testing, reduction of bus factor, reduced wait time, more infrastructure logic being handled programmatically with IaC, and directing time away from manual toil toward creating and maintaining automation.

July 9, 2018

Topic: Development

1 comment

Mind Your State for Your State of Mind:
The interactions between storage and applications can be complex and subtle.

Applications have had an interesting evolution as they have moved into the distributed and scalable world. Similarly, storage and its cousin databases have changed side by side with applications. Many times, the semantics, performance, and failure models of storage and applications do a subtle dance as they change in support of changing business requirements and environmental challenges. Adding scale to the mix has really stirred things up. This article looks at some of these issues and their impact on systems.

July 3, 2018

Topic: File Systems and Storage

0 comments

FPGAs in Data Centers:
FPGAs are slowly leaving the niche space they have occupied for decades.

This installment of Research for Practice features a curated selection from Gustavo Alonso, who provides an overview of recent developments utilizing FPGAs (field-programmable gate arrays) in datacenters. As Moore’s Law has slowed and the computational overheads of datacenter workloads such as model serving and data processing have continued to rise, FPGAs offer an increasingly attractive point in the trade-off between power and performance. Gustavo’s selections highlight early successes and practical deployment considerations that inform the ongoing, high-stakes debate about the future of datacenter- and cloud-based computation substrates.

June 5, 2018

Topic: Performance

0 comments

Workload Frequency Scaling Law - Derivation and Verification:
Workload scalability has a cascade relation via the scale factor.

This article presents equations that relate to workload utilization scaling at a per-DVFS subsystem level. A relation between frequency, utilization, and scale factor (which itself varies with frequency) is established. The verification of these equations turns out to be tricky, since utilization, inherent to the workload, also varies in a seemingly unspecified manner at the granularity of governance samples. Thus, a novel approach called histogram ridge trace is applied. Quantifying the scaling impact is critical when treating DVFS as a building block. Typical applications include DVFS governors and/or other layers that influence the utilization, power, and performance of the system.

May 24, 2018

Topic: Performance

0 comments

Consistently Eventual:
For many data items, the work never settles on a value.

Applications are no longer islands. Not only do they frequently run distributed and replicated over many cloud-based computers, but they also run over many hand-held computers. This makes it challenging to talk about a single truth at a single place or time. In addition, most modern applications interact with other applications. These interactions settle out to impact understanding. Over time, a shared opinion emerges just as new interactions add increasing uncertainty. Many business, personal, and computational "facts" are, in fact, uncertain. As some changes settle, others meander from place to place. With all the regular, irregular, and uncleared checks, my understanding of our personal joint checking account is a bit hazy.

May 21, 2018

Topic: Databases

0 comments

Algorithms Behind Modern Storage Systems:
Different uses for read-optimized B-trees and write-optimized LSM-trees

This article takes a closer look at two storage system design approaches used in a majority of modern databases (read-optimized B-trees and write-optimized LSM (log-structured merge)-trees) and describes their use cases and tradeoffs.
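
A toy LSM-tree in Python (illustration only) shows the shape of the write-optimized side: writes land in a memtable and are flushed as immutable sorted runs, while reads consult the memtable and then the runs from newest to oldest.

    import bisect

    class ToyLSM:
        def __init__(self, memtable_limit=4):
            self.memtable, self.runs, self.limit = {}, [], memtable_limit

        def put(self, key, value):
            self.memtable[key] = value                          # cheap buffered write
            if len(self.memtable) >= self.limit:
                self.runs.append(sorted(self.memtable.items())) # flush a sorted run
                self.memtable = {}                              # compaction would merge runs

        def get(self, key):
            if key in self.memtable:
                return self.memtable[key]
            for run in reversed(self.runs):                     # newest run wins
                i = bisect.bisect_left(run, (key,))             # binary search in the run
                if i < len(run) and run[i][0] == key:
                    return run[i][1]
            return None

    db = ToyLSM()
    for i in range(10):
        db.put(f"k{i}", i)
    print(db.get("k3"), db.get("k9"))   # -> 3 9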

May 14, 2018

Topic: File Systems and Storage

3 comments

Every Silver Lining Has a Cloud:
Cache is king. And if your cache is cut, you’re going to feel it.

Clearly, your management has never heard the phrase, "You get what you pay for." Or perhaps they heard it and didn’t realize it applied to them. The savings in cloud computing comes at the expense of a loss of control over your systems, which is summed up best in the popular nerd sticker that says, "The Cloud is Just Other People’s Computers." Some providers now have something called Metal-as-a-Service, which I really think ought to mean that an ’80s metal band shows up at your office, plays a gig, smashes the furniture, and urinates on the carpet, but alas, it’s just the cloud providers’ way of finally admitting that cloud computing isn’t really the right answer for all applications.

May 7, 2018

Topic: Distributed Computing

0 comments

C Is Not a Low-level Language:
Your computer is not a fast PDP-11.

In the wake of the recent Meltdown and Spectre vulnerabilities, it’s worth spending some time looking at root causes. Both of these vulnerabilities involved processors speculatively executing instructions past some kind of access check and allowing the attacker to observe the results via a side channel. The features that led to these vulnerabilities, along with several others, were added to let C programmers continue to believe they were programming in a low-level language, when this hasn’t been the case for decades.

April 30, 2018

Topic: Programming Languages

24 comments

Prediction-Serving Systems:
What happens when we wish to actually deploy a machine learning model to production?

This installment of Research for Practice features a curated selection from Dan Crankshaw and Joey Gonzalez, who provide an overview of machine learning serving systems. What happens when we wish to actually deploy a machine learning model to production, and how do we serve predictions with high accuracy and high computational efficiency? Dan and Joey provide a thoughtful selection of cutting-edge techniques spanning database-level integration, video processing, and prediction middleware.

April 25, 2018

Topic: AI

1 comment

Watchdogs vs. Snowflakes:
Taking wild-ass guesses

That a system can randomly jam doesn’t just indicate a serious bug in the system; it is also a major source of risk. You don’t say what your distributed job-control system controls, but let’s just say I hope it’s not something with significant, real-world side effects, like a power station, jet aircraft, or financial trading system. The risk, of course, is that the system will jam, not when it’s convenient for someone to add a dummy job to clear the jam, but during some operation that could cause data loss or return incorrect results.

April 10, 2018

Topic: Distributed Computing

0 comments

Thou Shalt Not Depend on Me:
A look at JavaScript libraries in the wild

Most websites use JavaScript libraries, and many of them are known to be vulnerable. Understanding the scope of the problem, and the many unexpected ways that libraries are included, is only the first step toward improving the situation. The hope is that the information included in this article will help inform better tooling, development practices, and educational efforts for the community.

April 4, 2018

Topic: Programming Languages

0 comments

How to Come up with Great Ideas:
Think like an entrepreneur.

No matter what your profession, learning to think more innovatively and spark new ideas can help you. I have included some points and inspiration that have helped me, but the real key is changing your behavior and taking action.

March 29, 2018

Topic: Business/Management

0 comments

Designing Cluster Schedulers for Internet-Scale Services:
Embracing failures for improving availability

Engineers looking to build scheduling systems should consider all failure modes of the underlying infrastructure they use and how operators can configure remediation strategies, while helping keep tenant systems as stable as possible while their owners troubleshoot.

March 20, 2018

Topic: Web Services

0 comments

Manual Work is a Bug:
A.B.A: always be automating

Every IT team should have a culture of constant improvement - or movement along the path toward the goal of automating whatever the team feels confident in automating, in ways that are easy to change as conditions change. As the needle moves to the right, the team learns from each other’s experiences, and the system becomes easier to create and safer to operate. A good team has a structure in place that makes the process frictionless and collaborative.

March 14, 2018

Topic: Development

1 comment

Canary Analysis Service:
Automated canarying quickens development, improves production safety, and helps prevent outages.

It is unreasonable to expect engineers working on product development or reliability to have statistical knowledge; removing this hurdle led to widespread CAS adoption. CAS has proven useful even for basic cases that don’t need configuration, and has significantly improved Google’s rollout reliability. Impact analysis shows that CAS has likely prevented hundreds of postmortem-worthy outages, and the rate of postmortems among groups that do not use CAS is noticeably higher.

March 6, 2018

Topic: Web Services

0 comments

Continuous Delivery Sounds Great, but Will It Work Here?:
It’s not magic, it just requires continuous, daily improvement at all levels.

Continuous delivery is a set of principles, patterns, and practices designed to make deployments predictable, routine affairs that can be performed on demand at any time. This article introduces continuous delivery, presents both common objections and actual obstacles to implementing it, and describes how to overcome them using real-life examples. Continuous delivery is not magic. It’s about continuous, daily improvement at all levels of the organization.

February 22, 2018

Topic: Development

0 comments

Toward a Network of Connected Things:
A look into the future of IoT deployments and their usability

While the scale of data presents new avenues for improvement, the key challenges for the everyday adoption of IoT systems revolve around managing this data. First, we need to consider where the data is being processed and stored and what the privacy and systems implications of these policies are. Second, we need to develop systems that generate actionable insights from this diverse, hard-to-interpret data for non-tech users. Solving these challenges will allow IoT systems to deliver maximum value to end users.

February 13, 2018

Topic: Networks

0 comments

Containers Will Not Fix Your Broken Culture (and Other Hard Truths):
Complex socio-technical systems are hard; film at 11.

We focus so often on technical anti-patterns, neglecting similar problems inside our social structures. Spoiler alert: the solutions to many difficulties that seem technical can be found by examining our interactions with others. Let’s talk about five things you’ll want to know when working with those pesky creatures known as humans.

February 5, 2018

Topic: Business/Management

3 comments

How Is Your Week Going So Far?:
Praise matters just as much as money.

None of us hears "thank you" or "awesome job" enough at work. Being the person who praises other people is an amazing role to take on, especially when you follow this formula for making your praise ridiculously effective.

January 30, 2018

Topic: Business/Management

0 comments

DevOps Metrics:
Your biggest mistake might be collecting the wrong data.

Delivering value to the business through software requires processes and coordination that often span multiple teams across complex systems, and involves developing and delivering software with both quality and resiliency. As practitioners and professionals, we know that software development and delivery is an increasingly difficult art and practice, and that managing and improving any process or system requires insights into that system. Therefore, measurement is paramount to creating an effective software value stream. Yet accurate measurement is no easy feat.

January 22, 2018

Topic: Development

0 comments

Popping Kernels:
Choosing between programming in the kernel or in user space

In a world in which high-performance code continues to be written in a fancy assembler, a.k.a. C, with no memory safety and plenty of other risks, the only recourse is to stick to software engineering basics. Reduce the amount of code in harm’s way, keep coupling between subsystems efficient and explicit, and work to provide better tools for the job, such as static code checkers and large suites of runtime tests.

January 16, 2018

Topic: Development

0 comments

Monitoring in a DevOps World:
Perfect should never be the enemy of better.

Monitoring can seem quite overwhelming. The most important thing to remember is that perfect should never be the enemy of better. DevOps enables highly iterative improvement within organizations. If you have no monitoring, get something; get anything. Something is better than nothing, and if you’ve embraced DevOps, you’ve already signed up for making it better over time.

January 8, 2018

Topic: Performance

0 comments

Cluster Scheduling for Data Centers:
Expert-curated Guides to the Best of CS Research: Distributed Cluster Scheduling

This installment of Research for Practice features a curated selection from Malte Schwarzkopf, who takes us on a tour of distributed cluster scheduling, from research to practice, and back again. With the rise of elastic compute resources, cluster management has become an increasingly hot topic in systems R&D, and a number of competing cluster managers including Kubernetes, Mesos, and Docker are currently jockeying for the crown in this space.

December 13, 2017

Topic: Databases

0 comments

Operational Excellence in April Fools’ Pranks:
Being funny is serious work.

Successful pranks require care and planning. Write a design proposal and a project plan. Involve operations early. If this is a technical change to your website, perform load testing, preferably including a "dark launch" or hidden launch test. Hide the prank behind a feature flag rather than requiring a new software release. Perform a retrospective and publish the results widely. Remember that some of the best pranks require little or no technical changes at all. For example, one could simply summarize the best practices for launching any new feature but write it under the guise of how to launch an April Fools’ prank.

December 5, 2017

Topic: Development

0 comments

Bitcoin’s Underlying Incentives:
The unseen economic forces that govern the Bitcoin protocol

Incentives are crucial for the Bitcoin protocol’s security and effectively drive its daily operation. Miners go to extreme lengths to maximize their revenue and often find creative ways to do so that are sometimes at odds with the protocol. Cryptocurrency protocols should be placed on stronger foundations of incentives. There are many areas left to improve, ranging from the very basics of mining rewards and how they interact with the consensus mechanism, through the rewards in mining pools, and all the way to the transaction fee market itself.

November 28, 2017

Topic: Networks

0 comments

Reducing the Attack Surface:
Sometimes you can give the monkey a less dangerous club.

The best way to reduce the attack surface of a piece of software is to remove any unnecessary code. Since you now have two teams demanding that you leave in the code, it’s probably time to think about making two different versions of your binary. The application sounds like it’s an embedded system, so I’ll guess that it’s written in C and take it from there.

November 14, 2017

Topic: Security

0 comments

Titus: Introducing Containers to the Netflix Cloud:
Approaching container adoption in an already cloud-native infrastructure

We believe our approach has enabled Netflix to quickly adopt and benefit from containers. Though the details may be Netflix-specific, the approach of providing low-friction container adoption by integrating with existing infrastructure and working with the right early adopters can be a successful strategy for any organization looking to adopt containers.

November 7, 2017

Topic: Distributed Development

0 comments

Views from the Top:
Try to see things from a manager’s perspective.

Leadership is hard. None of us comes to work to do a bad job, and there are always ways we can be better. So, when you have a leader who isn’t meeting your expectations, maybe try reframing the situation and looking at things a little differently from the top down.

October 31, 2017

Topic: Business/Management

3 comments

Abstracting the Geniuses Away from Failure Testing:
Ordinary users need tools that automate the selection of custom-tailored faults to inject.

This article presents a call to arms for the distributed systems research community to improve the state of the art in fault tolerance testing. Ordinary users need tools that automate the selection of custom-tailored faults to inject. We conjecture that the process by which superusers select experiments can be effectively modeled in software. The article describes a prototype validating this conjecture, presents early results from the lab and the field, and identifies new research directions that can make this vision a reality.
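
A minimal sketch of the idea (everything here is invented): instead of relying on a genius to hand-pick failures, a harness systematically injects each candidate fault combination and checks an invariant.

    import itertools

    def call_service(replicas, dead):            # stand-in for the system under test
        alive = [r for r in replicas if r not in dead]
        if not alive:
            raise RuntimeError("no replica reachable")
        return f"served by {alive[0]}"

    def explore(replicas, max_faults=2):         # enumerate fault injections
        for n in range(1, max_faults + 1):
            for dead in itertools.combinations(replicas, n):
                try:
                    call_service(replicas, set(dead))
                except RuntimeError:
                    print("invariant violated when", dead, "fail")

    explore(["r1", "r2"])   # -> violated only when both replicas fail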

October 26, 2017

Topic: Failure and Recovery

1 comment

Private Online Communication; Highlights in Systems Verification:
The importance of private communication will continue to grow. We need techniques to build larger verified systems from verified components.

First, Albert Kwon provides an overview of recent systems for secure and private communication. Second, James Wilcox takes us on a tour of recent advances in verified systems design.

October 4, 2017

Topic: Networks

0 comments

Network Applications Are Interactive:
The network era requires new models, with interactions instead of algorithms.

The miniaturization of devices and the prolific interconnectedness of these devices over high-speed wireless networks are completely changing how commerce is conducted. These changes (a.k.a. digital) will profoundly change how enterprises operate. Software is at the heart of this digital world, but the software toolsets and languages were conceived for the host-based era. The issues that already plague software practice (such as high defects, poor software productivity, information vulnerability, poor software project success rates, etc.) will be more profound with such an approach. It is time for software to be made simpler, secure, and reliable.

September 27, 2017

Topic: Networks

2 comments

XML and JSON Are Like Cardboard:
Cardboard surrounds and protects stuff as it crosses boundaries.

In cardboard, the safety and care for stuff is the important reason for its existence. Similarly, in XML and JSON the safety and care of the data, both in transit and in storage, are why we bother.

September 18, 2017

Topic: Databases

1 comment

Breadth and Depth:
We all wear many hats, but make sure you have one that fits well.

When people ask me where they should focus their time, I ask them: what is the one thing you could be the best in the world at? The answer might be going deep or going wide. The important thing is to spend your time building the skills that will move you to where you want to go.

September 6, 2017

Topic: Business/Management

0 comments

Cache Me If You Can:
Building a decentralized web-delivery model

The world is more connected than it ever has been before, and with our pocket supercomputers and IoT (Internet of Things) future, the next generation of the web might just be delivered in a peer-to-peer model. It’s a giant problem space, but the necessary tools and technology are here today. We just need to define the problem a little better.

August 30, 2017

Topic: Networks

0 comments

Bitcoin’s Academic Pedigree:
The concept of cryptocurrencies is built from forgotten ideas in research literature.

We’ve seen repeatedly that ideas in the research literature can be gradually forgotten or lie unappreciated, especially if they are ahead of their time, even in popular areas of research. Both practitioners and academics would do well to revisit old ideas to glean insights for present systems. Bitcoin was unusual and successful not because it was on the cutting edge of research on any of its components, but because it combined old ideas from many previously unrelated fields. This is not easy to do, as it requires bridging disparate terminology, assumptions, etc., but it is a valuable blueprint for innovation.

August 29, 2017

Topic: Security

7 comments

Cold, Hard Cache:
On the implementation and maintenance of caches

Dear KV, Our latest project at work requires a large number of slightly different software stacks to deploy within our cloud infrastructure. With modern hardware, I can test this deployment on a laptop. The problem I keep running up against is that our deployment system seems to secretly cache some of my files and settings and not clear them, even when I repeatedly issue the command to do so. I’ve resorted to repeatedly using the find command so that I can blow away the offending files. What I’ve found is that the system caches data in many places, so I’ve started a list.

August 22, 2017

Topic: Networks

1 comment

Vigorous Public Debates in Academic Computer Science:
Expert-curated Guides to the Best of CS Research

This installment of Research for Practice features a special curated selection from John Regehr, who takes us on a tour of great debates in academic computer science research. In case you thought flame wars were reserved for Usenet mailing lists and Twitter, think again: the academic literature is full of dramatic, spectacular, and vigorous debates spanning file systems, operating system kernel design, and formal verification.

August 14, 2017

Topic: Education

1 comment

Hootsuite: In Pursuit of Reactive Systems:
A discussion with Edward Steel, Yanik Berube, Jonas Bonér, Ken Britton, and Terry Coatta

It has become apparent how critical frameworks and standards are for development teams when using microservices. People often mistake the flexibility microservices provide for a requirement to use different technologies for each service. Like all development teams, we still need to keep the number of technologies we use to a minimum so we can easily train new people, maintain our code, support moves between teams, and the like.

August 5, 2017

Topic: Web Services

0 comments

Four Ways to Make CS & IT Curricula More Immersive:
Why the Bell Curve Hasn’t Transformed into a Hockey Stick

Our first experiences cement what becomes normal for us. Students should start off seeing a well-run system: dissect it, learn its parts, and progressively dig down into the details. Don’t let them see what a badly run system looks like until they have experienced one that is well run. A badly run system should then disgust them.

August 1, 2017

Topic: Education

1 comment

Metaphors We Compute By:
Code is a story that explains how to solve a particular problem.

Programmers must be able to tell a story with their code, explaining how they solved a particular problem. Like writers, programmers must know their metaphors. Many metaphors will be able to explain a concept, but you must have enough skill to choose the right one that’s able to convey your ideas to future programmers who will read the code. Thus, you cannot use every metaphor you know. You must master the art of metaphor selection, of meaning amplification. You must know when to add and when to subtract. You will learn to revise and rewrite code as a writer does. Once there’s nothing else to add or remove, you have finished your work.

July 24, 2017

Topic: Code

0 comments

10 Ways to Be a Better Interviewer:
Plan ahead to make the interview a successful one.

Of course, there is no right way to do an interview, but you can always be better. Make an effort to make your candidates as comfortable as possible so they have the greatest chance for success.

July 18, 2017

Topic: Business/Management

0 comments

Is There a Single Method for the Internet of Things?:
Essence can keep software development for the IoT from becoming unwieldy.

The Industrial Internet Consortium predicts the IoT (Internet of Things) will become the third technological revolution after the Industrial Revolution and the Internet Revolution. Its impact across all industries and businesses can hardly be imagined. Existing software (business, telecom, aerospace, defense, etc.) is expected to be modified or redesigned, and a huge amount of new software, solving new problems, will have to be developed. As a consequence, the software industry should welcome new and better methods.

July 11, 2017

Topic: Development

0 comments

IoT: The Internet of Terror:
If it seems like the sky is falling, that’s because it is.

It is true that many security-focused engineers can sound like Chicken Little, running around announcing that the sky is falling, but, unless you’ve been living under a rock, you will notice that, indeed, the sky IS falling. Not a day goes by without a significant attack against networked systems making the news, and the Internet of Terror is leading the charge in taking distributed systems down the road to hell - a road that you wish to pave with your good intentions.

July 6, 2017

Topic: Security

2 comments

Research for Practice: Technology for Underserved Communities; Personal Fabrication:
Expert-curated Guides to the Best of CS Research

This installment of Research for Practice provides curated reading guides to technology for underserved communities and to new developments in personal fabrication. First, Tawanna Dillahunt describes design considerations and technology for underserved and impoverished communities. Designing for the more than 1.6 billion impoverished individuals worldwide requires special consideration of community needs, constraints, and context. Tawanna’s selections span protocols for poor-quality communication networks, community-driven content generation, and resource and public service discovery. Second, Stefanie Mueller and Patrick Baudisch provide an overview of recent advances in personal fabrication (e.g., 3D printers). Their selection covers new techniques for fabricating (and emulating) complex materials (e.g., by manipulating the internal structure of an object), for more easily specifying object shape and behavior, and for human-in-the-loop rapid prototyping.

June 6, 2017

Topic: Development

0 comments

Data Sketching:
The approximate approach is often faster and more efficient.

Do you ever feel overwhelmed by an unending stream of information? It can seem like a barrage of new email and text messages demands constant attention, and there are also phone calls to pick up, articles to read, and knocks on the door to answer. Putting these pieces together to keep track of what’s important can be a real challenge. In response to this challenge, the model of streaming data processing has grown in popularity. The aim is no longer to capture, store, and index every minute event, but rather to process each observation quickly in order to create a summary of the current state.
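
One classic instance of this summarize-rather-than-store approach is the count-min sketch. The minimal Python version below is illustrative only (the structure is standard, but the width, depth, and hashing choices are assumptions, not drawn from the article): each observation updates a few counters, and approximate frequency queries are answered from fixed memory.

    import hashlib

    class CountMinSketch:
        """Approximate frequency counts in fixed memory: each update
        touches one counter per row; estimates can only overcount."""
        def __init__(self, width=1024, depth=4):
            self.width, self.depth = width, depth
            self.table = [[0] * width for _ in range(depth)]

        def _cells(self, item):
            for row in range(self.depth):
                h = hashlib.blake2b(item.encode(), salt=bytes([row] * 8))
                yield row, int.from_bytes(h.digest()[:8], "big") % self.width

        def add(self, item):
            for row, col in self._cells(item):
                self.table[row][col] += 1

        def estimate(self, item):
            # Taking the minimum over rows limits collision damage.
            return min(self.table[row][col] for row, col in self._cells(item))

    cms = CountMinSketch()
    for word in "to be or not to be".split():
        cms.add(word)
    print(cms.estimate("to"))   # 2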

May 31, 2017

Topic: Databases

0 comments

Side Effects, Front and Center!:
One System’s Side Effect is Another’s Meat and Potatoes.

We think of computation in terms of its consequences. The big MapReduce job returns a large result. Web interactions display information. Enterprise applications update the database and return an answer. These are the reasons we do our work. What we rarely discuss are the side effects of doing the work we intend. Side effects may be unwanted, or they may actually cause desired behavior at different layers of the system. This column points out some fun patterns to keep in mind as we build and use our systems.

May 24, 2017

Topic: Development

0 comments

The Calculus of Service Availability:
You’re only as available as the sum of your dependencies.

Most services offered by Google aim to offer 99.99 percent (sometimes referred to as the "four 9s") availability to users. Some services contractually commit to a lower figure externally but set a 99.99 percent target internally. This more stringent target accounts for situations in which users become unhappy with service performance well before a contract violation occurs, as the number one aim of an SRE team is to keep users happy. For many services, a 99.99 percent internal target represents the sweet spot that balances cost, complexity, and availability.
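
The arithmetic behind "the sum of your dependencies" is easy to sketch. The numbers below are hypothetical, not Google's: a service that requires all of its dependencies is up only when every one of them is, so individual availabilities multiply.

    # Hypothetical targets: the service itself plus three hard dependencies.
    service = 0.9999
    dependencies = [0.9999, 0.9999, 0.999]

    total = service
    for a in dependencies:
        total *= a          # all must be up for the service to be up

    print(f"aggregate availability: {total:.4%}")          # ~99.87%
    print(f"expected downtime: {(1 - total) * 525_600:.0f} min/year")

Note how the single 99.9 percent dependency dominates the result, dragging a four-9s service well below its own target.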

May 17, 2017

Topic: Web Services

1 comment

Conversations with Technology Leaders: Erik Meijer:
Great engineers are able to maximize their mental power.

Whether you are a leader, a programmer, or just someone aspiring to be better, I am sure there are some smart takeaways from our conversation that will help you grow in your role. Oh, and if you read to the end, you can find out what his favorite job interview question is - and see if you would be able to pass his test.

May 10, 2017

Topic: Business/Management

1 comment

The IDAR Graph:
An improvement over UML

UML is the de facto standard for representing object-oriented designs. It does a fine job of recording designs, but it has a severe problem: its diagrams don’t convey what humans need to know, making them hard to understand. This is why most software developers use UML only when forced to. People understand an organization, such as a corporation, in terms of a control hierarchy. When faced with an organization of people or objects, the first question usually is, "What’s controlling all this?" Surprisingly, UML has no concept of one object controlling another. Consequently, in every type of UML diagram, no object appears to have greater or lesser control than its neighbors.

May 3, 2017

Topic: Workflow Systems

0 comments

The Observer Effect:
Finding the balance between zero and maximum

The problem is a failure to appreciate just what you are asking a system to do when polling it for information. Modern systems contain thousands of values that can be measured and recorded. Blindly retrieving whatever it is that might be exposed by the system is bad enough, but asking for it with a high-frequency poll is much worse.

April 25, 2017

Topic: System Administration

1 comment

Too Big NOT to Fail:
Embrace failure so it doesn’t embrace you.

Web-scale infrastructure implies LOTS of servers working together, often tens or hundreds of thousands of servers all working toward the same goal. How can the complexity of these environments be managed? How can commonality and simplicity be introduced?

April 5, 2017

Topic: Failure and Recovery

0 comments

Research for Practice: Tracing and Debugging Distributed Systems; Programming by Examples:
Expert-curated Guides to the Best of CS Research

This installment of Research for Practice covers two exciting topics in distributed systems and programming methodology. First, Peter Alvaro takes us on a tour of recent techniques for debugging some of the largest and most complex systems in the world: modern distributed systems and service-oriented architectures. The techniques Peter surveys can shed light on order amid the chaos of distributed call graphs. Second, Sumit Gulwani illustrates how to program without explicitly writing programs, instead synthesizing programs from examples! The techniques Sumit presents allow systems to "learn" a program representation from illustrative examples, allowing nonprogrammer users to create increasingly nontrivial functions such as spreadsheet macros.

March 29, 2017

Topic: Debugging

0 comments

The Debugging Mindset:
Understanding the psychology of learning strategies leads to effective problem-solving skills.

Software developers spend 35-50 percent of their time validating and debugging software. The cost of debugging, testing, and verification is estimated to account for 50-75 percent of the total budget of software development projects, amounting to more than $100 billion annually. While tools, languages, and environments have reduced the time spent on individual debugging tasks, they have not significantly reduced the total time spent debugging, nor the cost of doing so. Therefore, a hyperfocus on elimination of bugs during development is counterproductive; programmers should instead embrace debugging as an exercise in problem solving.

March 22, 2017

Topic: Debugging

1 comment

Forced Exception-Handling:
You can never discount the human element in programming.

Yes, KV also reads "The Morning Paper," although he has to admit that he does not read everything that arrives in his inbox from that list. Of course, the paper you mention piqued my interest, and one of the things you don’t point out is that it’s actually a study of distributed systems failures. Now, how can we make programming harder? I know! Let’s take a problem on a single system and distribute it. Someday I would like to see a paper that tells us if problems in distributed systems increase along with the number of nodes, or the number of interconnections.

March 14, 2017

Topic: Failure and Recovery

1 comment

MongoDB’s JavaScript Fuzzer:
The fuzzer is for those edge cases that your testing didn’t catch.

As MongoDB becomes more feature-rich and complex with time, the need to develop more sophisticated methods for finding bugs grows as well. Three years ago, MongoDB added a home-grown JavaScript fuzzer to its toolkit, and it is now our most prolific bug-finding tool, responsible for detecting almost 200 bugs over the course of two release cycles. These bugs span a range of MongoDB components from sharding to the storage engine, with symptoms ranging from deadlocks to data inconsistency. The fuzzer runs as part of the CI (continuous integration) system, where it frequently catches bugs in newly committed code.

March 6, 2017

Topic: Quality Assurance

0 comments

Does Anybody Listen to You?:
How do you step up from mere contributor to real change-maker?

An idea on its own is not worth much. Just because you think you know a better way to do something, even if you’re right, no one is required to care. Making great things happen at work is about more than just being smart. Good ideas succeed or fail depending on your ability to communicate them correctly to the people who have the power to make them happen. When you are navigating an organization, it pays to know whom to talk to and how to reach them. Here is a simple guide to sending your ideas up the chain and actually making them stick.

March 1, 2017

Topic: Development

0 comments

Making Money Using Math:
Modern applications are increasingly using probabilistic machine-learned models.

A big difference between human-written code and learned models is that the latter are usually not represented by text and hence are not understandable by human developers or manipulable by existing tools. The consequence is that none of the traditional software engineering techniques for conventional programs (such as code reviews, source control, and debugging) are applicable anymore. Since incomprehensibility is not unique to learned code, these aspects are not of concern here.

February 22, 2017

Topic: AI

2 comments

Pervasive, Dynamic Authentication of Physical Items:
The use of silicon PUF circuits

Authentication of physical items is an age-old problem. Common approaches include the use of bar codes, QR codes, holograms, and RFID (radio-frequency identification) tags. Traditional RFID tags and bar codes use a public identifier as a means of authenticating. A public identifier, however, is static: it is the same each time when queried and can be easily copied by an adversary. Holograms can also be viewed as public identifiers: a knowledgeable verifier knows all the attributes to inspect visually. It is difficult to make hologram-based authentication pervasive; a casual verifier does not know all the attributes to look for.
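
The operational difference between a static identifier and a PUF can be sketched as a challenge-response exchange. In the toy Python below, an HMAC over a random per-device secret merely stands in for the silicon PUF; that is a loud simplification, since a real PUF derives its responses from manufacturing variation rather than a stored key.

    import os, hmac, hashlib

    def puf_response(device: bytes, challenge: bytes) -> bytes:
        # Stand-in for the physical function: unique per device,
        # unpredictable without the device in hand.
        return hmac.new(device, challenge, hashlib.sha256).digest()

    device = os.urandom(32)            # models the manufacturing variation

    # Enrollment: the verifier records a stock of challenge/response pairs.
    crps = {}
    for _ in range(4):
        c = os.urandom(16)
        crps[c] = puf_response(device, c)

    # Authentication: spend a fresh pair each time, so a recorded reply
    # cannot be replayed the way a static bar code or RFID ID can be copied.
    challenge, expected = crps.popitem()
    assert hmac.compare_digest(puf_response(device, challenge), expected)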

January 31, 2017

Topic: Privacy and Rights

0 comments

Research for Practice: Cryptocurrencies, Blockchains, and Smart Contracts; Hardware for Deep Learning:
Expert-curated Guides to the Best of CS Research

First, Arvind Narayanan and Andrew Miller, co-authors of the increasingly popular open-access Princeton Bitcoin textbook, provide an overview of ongoing research in cryptocurrencies. Second, Song Han provides an overview of hardware trends related to another long-studied academic problem that has recently seen an explosion in popularity: deep learning.

January 24, 2017

Topic: Blockchain

0 comments

Uninitialized Reads:
Understanding the proposed revisions to the C language

Most developers understand that reading uninitialized variables in C is a defect, but some do it anyway. What happens when you read uninitialized objects is unsettled in the current version of the C standard (C11). Various proposals have been made to resolve these issues in the planned C2X revision of the standard. Consequently, this is a good time to understand existing behaviors as well as proposed revisions to the standard to influence the evolution of the C language. Given that the behavior of uninitialized reads is unsettled in C11, prudence dictates eliminating uninitialized reads from your code.

January 16, 2017

Topic: Programming Languages

1 comment

Heterogeneous Computing: Here to Stay:
Hardware and Software Perspectives

Mentions of the buzzword heterogeneous computing have been on the rise in the past few years and will continue to be heard for years to come, because heterogeneous computing is here to stay. What is heterogeneous computing, and why is it becoming the norm? How do we deal with it, from both the software side and the hardware side? This article provides answers to some of these questions and presents different points of view on others.

January 10, 2017

Topic: Computer Architecture

0 comments

Time, but Faster:
A computing adventure about time through the looking glass

The first premise was summed up perfectly by the late Douglas Adams in The Hitchhiker’s Guide to the Galaxy: "Time is an illusion. Lunchtime doubly so." The concept of time, when colliding with decoupled networks of computers that run at billions of operations per second, is... well, the truth of the matter is that you simply never really know what time it is. That is why Leslie Lamport’s seminal paper on Lamport timestamps was so important to the industry, but this article is actually about wall-clock time, or a reasonably useful estimation of it.

January 4, 2017

Topic: Networks

0 comments

The Chess Player who Couldn’t Pass the Salt:
AI: Soft and hard, weak and strong, narrow and general

The problem inherent in almost all nonspecialist work in AI is that humans actually don’t understand intelligence very well in the first place. Now, computer scientists often think they understand intelligence because they have so often been the "smart" kid, but that’s got very little to do with understanding what intelligence actually is. In the absence of a clear understanding of how the human brain generates and evaluates ideas, which may or may not be a good basis for the concept of intelligence, we have introduced numerous proxies for intelligence, the first of which is game-playing behavior.

December 26, 2016

Topic: AI

0 comments

Are You Load Balancing Wrong?:
Anyone can use a load balancer. Using them properly is much more difficult.

A reader contacted me recently to ask if it is better to use a load balancer to add capacity or to make a service more resilient to failure. The answer is: both are appropriate uses of a load balancer. The problem, however, is that most people who use load balancers are doing it wrong.

December 20, 2016

Topic: Networks

0 comments

Life Beyond Distributed Transactions:
An apostate’s opinion

This article explores and names some of the practical approaches used in the implementation of large-scale mission-critical applications in a world that rejects distributed transactions. Topics include the management of fine-grained pieces of application data that may be repartitioned over time as the application grows. Design patterns support sending messages between these repartitionable pieces of data.

December 12, 2016

Topic: Distributed Computing

2 comments

Research for Practice: Distributed Transactions and Networks as Physical Sensors:
Expert-curated Guides to the Best of CS Research

First, Irene Zhang delivers a whirlwind tour of recent developments in distributed concurrency control. If you thought distributed transactions were prohibitively expensive, Irene’s selections may prompt you to reconsider: the use of atomic clocks, clever replication protocols, and new means of commit ordering all improve performance at scale. Second, Fadel Adib provides a fascinating look at using computer networks as physical sensors. It turns out that the radio waves passing through our environment and bodies are subtly modulated as they do so.

December 7, 2016

Topic: Networks

0 comments

BBR: Congestion-Based Congestion Control:
Measuring bottleneck bandwidth and round-trip propagation time

When bottleneck buffers are large, loss-based congestion control keeps them full, causing bufferbloat. When bottleneck buffers are small, loss-based congestion control misinterprets loss as a signal of congestion, leading to low throughput. Fixing these problems requires an alternative to loss-based congestion control. Finding this alternative requires an understanding of where and how network congestion originates.
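
BBR's answer, per the subtitle, is to estimate two quantities, bottleneck bandwidth (BtlBw) and round-trip propagation time (RTprop), and operate near their product, the BDP. A toy calculation with assumed numbers, not measurements:

    btlbw_bps = 100e6      # estimated bottleneck bandwidth (max delivery rate)
    rtprop_s = 0.040       # estimated round-trip propagation time (min RTT)

    bdp_bytes = btlbw_bps / 8 * rtprop_s   # bandwidth-delay product
    print(f"BDP: {bdp_bytes / 1024:.0f} KiB in flight fills the pipe")  # ~488

    # Loss-based control keeps growing the window until the bottleneck
    # buffer overflows; pacing at roughly BtlBw with about one BDP in
    # flight achieves full throughput without the standing queue.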

December 1, 2016

Topic: Networks

0 comments

Resolving Conflict:
Don’t "win." Resolve.

I am conflicted about conflict. On one hand, I hate it. Hearing people disagree, even about minor things, makes me want to run through the nearest wall and curl up under my bed until it’s over. On the other hand, when it happens, I always want to get into it.

November 15, 2016

Topic: Development

0 comments

Faucet: Deploying SDN in the Enterprise:
Using OpenFlow and DevOps for rapid development

While SDN as a technology continues to evolve and become even more programmable, Faucet and OpenFlow 1.3 hardware together are sufficient to realize benefits today. This article describes specifically how to take advantage of DevOps practices to develop and deploy features rapidly. It also describes several practical deployment scenarios, including firewalling and network function virtualization.

November 7, 2016

Topic: Networks

0 comments

The Unholy Trinity of Software Development:
Tests, documentation, and code

Questions like this bring to mind the fact that source code, documentation, and testing are the unholy trinity of software development, although many organizations like to see them as separate entities. It is interesting that while many groups pay lip service to "test-driven development," they do not include documentation in TDD.

October 31, 2016

Topic: Code

1 comment

Industrial Scale Agile - from Craft to Engineering:
Essence is instrumental in moving software development toward a true engineering discipline.

There are many, many ways to illustrate how fragile IT investments can be. You just have to look at the way that, even after huge investments in education and coaching, many organizations are struggling to broaden their agile adoption to the whole of their organization - or at the way other organizations are struggling to maintain the momentum of their agile adoptions as their teams change and their systems mature.

October 25, 2016

Topic: Development

0 comments

Research for Practice: Web Security and Mobile Web Computing:
Expert-curated Guides to the Best of CS Research

Our third installment of Research for Practice brings readings spanning programming languages, compilers, privacy, and the mobile web.

October 4, 2016

Topic: Web Development

0 comments

The Power of Babble:
Expect to be constantly and pleasantly befuddled

Metadata defines the shape, the form, and how to understand our data. It is following the trend taken by natural languages in our increasingly interconnected world. While many concepts can be communicated using shared metadata, no one can keep up with the number of disparate new concepts needed to have a common understanding.

September 27, 2016

Topic: Databases

0 comments

Functional at Scale:
Applying functional programming principles to distributed computing projects

Modern server software is demanding to develop and operate: it must be available at all times and in all locations; it must reply within milliseconds to user requests; it must respond quickly to capacity demands; it must process a lot of data and even more traffic; it must adapt quickly to changing product needs; and in many cases it must accommodate a large engineering organization, its many engineers the proverbial cooks in a big, messy kitchen.

September 20, 2016

Topic: Distributed Development

0 comments

Fresh Starts:
Just because you have been doing it the same way doesn’t mean you are doing it the right way.

I love fresh starts. Growing up, one of my favorite things was starting a new school year. From the fresh school supplies to the promise of a new class of students, teachers, and lessons, I couldn’t wait for summer to be over and to go back to school. The same thing happens with new jobs. They reinvigorate you, excite you, and get you going.

September 12, 2016

Topic: Development

0 comments

React: Facebook’s Functional Turn on Writing JavaScript:
A discussion with Pete Hunt, Paul O’Shannessy, Dave Smith, and Terry Coatta

One of the long-standing ironies of user-friendly JavaScript front ends is that building them typically involved trudging through the DOM (Document Object Model), hardly known for its friendliness to developers. But now developers have a way to avoid directly interacting with the DOM, thanks to Facebook’s decision to open-source its React library for the construction of user interface components.

September 5, 2016

Topic: Web Development

2 comments

Cloud Calipers:
Naming the next generation and remembering that the cloud is just other people’s computers

For the time being, we are likely to continue to have programmers who version their functions as a result of the limitations of their languages, but let’s hope we can stop them naming their next generations after the next generation.

August 30, 2016

Topic: Development

0 comments

Scaling Synchronization in Multicore Programs:
Advanced synchronization methods can boost the performance of multicore software.

Designing software for modern multicore processors poses a dilemma. Traditional software designs, in which threads manipulate shared data, have limited scalability because synchronization of updates to shared data serializes threads and limits parallelism. Alternative distributed software designs, in which threads do not share mutable data, eliminate synchronization and offer better scalability. But distributed designs make it challenging to implement features that shared data structures naturally provide, such as dynamic load balancing and strong consistency guarantees, and are simply not a good fit for every program. Often, however, the performance of shared mutable data structures is limited by the synchronization methods in use today, whether lock-based or lock-free.
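
The trade-off can be caricatured in a few lines of Python (illustrative only: CPython's GIL hides the real multicore contention). Sharding the counter removes the single hot lock; the price is that reading the total is no longer one atomic operation.

    import threading

    class ShardedCounter:
        """Per-shard locks instead of one global lock: updates spread
        across shards, but value() is only a moment-in-time sum."""
        def __init__(self, nshards=8):
            self.shards = [0] * nshards
            self.locks = [threading.Lock() for _ in range(nshards)]

        def increment(self):
            i = threading.get_ident() % len(self.shards)
            with self.locks[i]:
                self.shards[i] += 1

        def value(self):
            return sum(self.shards)    # weaker consistency than one lock

    c = ShardedCounter()
    threads = [threading.Thread(target=c.increment) for _ in range(100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(c.value())   # 100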

August 23, 2016

Topic: Concurrency

0 comments

10 Optimizations on Linear Search:
The operations side of the story

System administrators (DevOps engineers or SREs or whatever your title) must deal with the operational aspects of computation, not just the theoretical aspects. Operations is where the rubber hits the road. As a result, operations people see things from a different perspective and can realize opportunities outside of the basic O() analysis. Let’s look at the operational aspects of the problem of trying to improve something that is theoretically optimal already.
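
As a flavor of the kind of optimization meant here (this particular one is a classic, and not necessarily among the article's ten): a sentinel cuts the work per step from two comparisons to one, a constant factor invisible to O() analysis but visible in operations.

    def linear_search_sentinel(items, target):
        """One comparison per step: the sentinel guarantees the loop
        terminates, so no separate end-of-list check is needed."""
        items.append(target)              # sentinel guarantees a hit
        i = 0
        while items[i] != target:
            i += 1
        items.pop()                       # restore the input
        return i if i < len(items) else -1

    print(linear_search_sentinel([3, 1, 4, 1, 5], 4))   # 2
    print(linear_search_sentinel([3, 1, 4, 1, 5], 9))   # -1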

August 8, 2016

Topic: Search Engines

0 comments

The Singular Success of SQL:
SQL has a brilliant future as a major figure in the pantheon of data representations.

SQL has a brilliant past and a brilliant future. That future is not as the singular and ubiquitous holder of data but rather as a major figure in the pantheon of data representations. What the heck happens when data is not kept in SQL?

August 2, 2016

Topic: Data

0 comments

Idle-Time Garbage-Collection Scheduling:
Taking advantage of idleness to reduce dropped frames and memory consumption

Google’s Chrome web browser strives to deliver a smooth user experience. An animation will update the screen at 60 FPS (frames per second), giving Chrome around 16.6 milliseconds to perform the update. Within these 16.6 ms, all input events have to be processed, all animations have to be performed, and finally the frame has to be rendered. A missed deadline will result in dropped frames. These are visible to the user and degrade the user experience. Such sporadic animation artifacts are referred to here as jank. This article describes an approach implemented in the JavaScript engine V8, used by Chrome, to schedule garbage-collection pauses during times when Chrome is idle.
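
The scheduling idea reduces to a small heuristic: estimate how long a garbage-collection step will take, and run it only when that estimate fits inside the idle time left in the current frame. A simplified Python sketch of that decision (the real V8 logic is considerably more involved):

    FRAME_BUDGET_MS = 16.6

    def maybe_collect(elapsed_ms, estimated_pause_ms, gc_step):
        """Run an incremental GC step only if it fits in the idle time
        left before the next frame deadline (a simplified heuristic)."""
        idle_ms = FRAME_BUDGET_MS - elapsed_ms
        if estimated_pause_ms <= idle_ms:
            gc_step()        # the pause hides inside the idle period
            return True
        return False         # defer: running now would drop a frame

    # e.g. 6 ms spent on input+animation+render leaves ~10.6 ms of idle
    print(maybe_collect(6.0, 4.0, lambda: None))    # True, the GC step fits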

July 26, 2016

Topic: Performance

0 comments

Bad Software Architecture is a People Problem:
When people don’t work well together they make bad decisions.

It all started with a bug. Customers were complaining that their information was out of date on the website. They would make an update and for some reason their changes weren’t being reflected. Caching seemed like the obvious problem, but once we started diving into the details, we realized it was a much bigger issue.

July 18, 2016

Topic: Business/Management

1 comment

Dynamics of Change: Why Reactivity Matters:
Tame the dynamics of change by centralizing each concern in its own module.

Professional programming is about dealing with software at scale. Everything is trivial when the problem is small and contained: it can be elegantly solved with imperative programming or functional programming or any other paradigm. Real-world challenges arise when programmers have to deal with large amounts of data, network requests, or intertwined entities, as in UI (user interface) programming.

July 12, 2016

Topic: Development

0 comments

Research for Practice: Distributed Consensus and Implications of NVM on Database Management Systems:
Expert-curated Guides to the Best of CS Research

First, how do large-scale distributed systems mediate access to shared resources, coordinate updates to mutable state, and reliably make decisions in the presence of failures? Second, while consensus concerns distributed shared state, our second selection concerns the impact of hardware trends on single-node shared state.

July 5, 2016

Topic: Databases

0 comments

Cluster-level Logging of Containers with Containers:
Logging Challenges of Container-Based Cloud Deployments

This article shows how cluster-level logging infrastructure can be implemented using open source tools and deployed using the very same abstractions that are used to compose and manage the software systems being logged. Collecting and analyzing log information is an essential aspect of running production systems to ensure their reliability and to provide important auditing information. Many tools have been developed to help with the aggregation and collection of logs for specific software components (e.g., an Apache web server) running on specific servers (e.g., Fluentd and Logstash).

June 28, 2016

Topic: Component Technologies

0 comments

Chilling the Messenger:
Keeping ego out of software-design review

Trying to correct someone who has just done a lot of work, even if, ultimately, that work is not the right work, is a daunting task. The person in question no doubt believes that he has worked very hard to produce something of value to the rest of the team, and walking in and spitting on it, literally or metaphorically, probably crosses your "offense" line--at least I think it does. I’m also a bit surprised that so much code has been written in the first sprint: shouldn’t the software have shown up only after earlier sprints established what was needed, who the stakeholders were, and so on?

June 22, 2016

Topic: Development

0 comments

The Hidden Dividends of Microservices:
Microservices aren’t for every company, and the journey isn’t easy.

Microservices are an approach to building distributed systems in which services are exposed only through hardened APIs; the services themselves have a high degree of internal cohesion around a specific and well-bounded context or area of responsibility, and the coupling between them is loose. Such services are typically simple, yet they can be composed into very rich and elaborate applications. The effort required to adopt a microservices-based approach is considerable, particularly in cases that involve migration from more monolithic architectures. The explicit benefits of microservices are well known and numerous, however, and can include increased agility, resilience, scalability, and developer productivity.

June 14, 2016

Topic: Web Services

0 comments

Standing on Distributed Shoulders of Giants:
Farsighted Physicists of Yore Were Danged Smart!

If you squint hard enough, many of the challenges of distributed computing appear similar to the work done by the great physicists. Dang, those fellows were smart! Here, we examine some of the most important physics breakthroughs and draw some whimsical parallels to phenomena in the world of computing... just for fun.

June 7, 2016

Topic: Distributed Computing

2 comments

Introducing Research for Practice:
Expert-curated guides to the best of CS research

Reading a great research paper is a joy. A team of experts deftly guides you, the reader, through the often complicated research landscape, noting the prior art, the current trends, the pressing issues at hand--and then, sometimes artfully, sometimes through seeming sheer force of will, expands the body of knowledge in a fell swoop of 12 or so pages of prose. A great paper contains a puzzle and a solution; these can be useful, enlightening, or both.

June 2, 2016

Topic: Development

0 comments

The Small Batches Principle:
Reducing waste, encouraging experimentation, and making everyone happy

The small batches principle is part of the DevOps methodology. It comes from the lean manufacturing movement, which is often called just-in-time manufacturing. It can be applied to just about any kind of process. It also enables the MVP (minimum viable product) methodology, which involves launching a small version of a service to get early feedback that informs the decisions made later in the project.

May 24, 2016

Topic: System Administration

2 comments

Debugging Distributed Systems:
Challenges and options for validation and debugging

Distributed systems pose unique challenges for software developers. Reasoning about concurrent activities of system nodes and even understanding the system’s communication topology can be difficult. A standard approach to gaining insight into system activity is to analyze system logs. Unfortunately, this can be a tedious and complex process. This article looks at several key features and debugging challenges that differentiate distributed systems from other kinds of software. The article presents several promising tools and ongoing research to help resolve these challenges.

May 18, 2016

Topic: Distributed Computing

0 comments

Nine Things I Didn’t Know I Would Learn Being an Engineer Manager:
Many of the skills aren’t technical at all.

When I moved from being an engineer to being a dev lead, I knew I had a lot to learn. My initial thinking was that I had to be able to do thorough code reviews, design and architect websites, see problems before they happened, and ask insightful technical questions. To me that meant learning the technology and becoming a better engineer. When I actually got into the role (and after doing it almost 15 years), the things I have learned--and that have mattered the most--weren’t those technical details.

May 9, 2016

Topic: Development

0 comments

Should You Upload or Ship Big Data to the Cloud?:
The accepted wisdom does not always hold true.

It is accepted wisdom that when the data you wish to move into the cloud is at terabyte scale and beyond, you are better off shipping it to the cloud provider, rather than uploading it. This article takes an analytical look at how shipping and uploading strategies compare, the various factors on which they depend, and under what circumstances you are better off shipping rather than uploading data, and vice versa. Such an analytical determination is important to make, given the increasing availability of gigabit-speed Internet connections, along with the explosive growth in data-transfer speeds supported by newer editions of drive interfaces such as SAS and PCI Express.
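
The core of the comparison is back-of-the-envelope arithmetic; the numbers below are assumptions for illustration, not the article's figures.

    def upload_hours(data_tb, uplink_gbps):
        # data_tb terabytes = data_tb * 8000 gigabits
        return data_tb * 8000 / uplink_gbps / 3600

    data_tb, shipping_days = 10, 2
    for gbps in (1.0, 0.1):
        up = upload_hours(data_tb, gbps)
        better = "upload" if up < shipping_days * 24 else "ship"
        print(f"{gbps} Gbps: {up:.1f} h to upload -> {better}")
    # At 1 Gbps, ~22 h: uploading wins. At 100 Mbps, ~222 h: shipping wins.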

May 3, 2016

Topic: Distributed Computing

2 comments

What Are You Trying to Pull?:
A single cache miss is more expensive than many instructions.

Saving instructions - how very 1990s of him. It’s always nice when people pay attention to details, but sometimes they simply don’t pay attention to the right ones. While KV would never encourage developers to waste instructions, given the state of modern software, it does seem like someone already has. KV would, as you did, come out on the side of legibility over the saving of a few instructions.

April 27, 2016

Topic: Development

0 comments

The Flame Graph:
This visualization of software execution is a new necessity for performance profiling and debugging.

An everyday problem in our industry is understanding how software is consuming resources, particularly CPUs. What exactly is consuming how much, and how did this change since the last software version? These questions can be answered using software profilers, tools that help direct developers to optimize their code and operators to tune their environment. The output of profilers can be verbose, however, making it laborious to study and comprehend. The flame graph provides a new visualization for profiler output and can make for much faster comprehension, reducing the time for root cause analysis.
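
The input to a flame graph is just collapsed stacks: one line per unique call stack with a sample count, which the visualization turns into width. A toy folding step in Python (the sample data is hypothetical; real stacks come from a profiler):

    from collections import Counter

    # Profiler samples: one call stack per sample (made-up data).
    samples = [
        "main;parse;read_file",
        "main;parse;read_file",
        "main;parse;tokenize",
        "main;render",
    ]

    folded = Counter(samples)
    for stack, count in folded.most_common():
        print(stack, count)     # the folded-stacks input flame-graph tools consume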

April 20, 2016

Topic: Visualization

2 comments

Delegation as Art:
Be someone who makes everyone else better.

When I started my career as a junior engineer, I couldn’t wait to be senior. I would regularly review our promotion guidelines and assess my progress and contributions against them. Of course, at the time I didn’t really understand what being senior meant. Being a senior engineer means having strong technical skills, the ability to communicate well and navigate ambiguous situations, and most important of all, the ability to grow and lead other people. Leadership isn’t just for managers anymore.

April 18, 2016

Topic: Development

1 comment

Why Logical Clocks are Easy:
Sometimes all you need is the right language.

Any computing system can be described as executing sequences of actions, with an action being any relevant change in the state of the system. For example, reading a file to memory, modifying the contents of the file in memory, or writing the new contents to the file are relevant actions for a text editor. In a distributed system, actions execute in multiple locations; in this context, actions are often called events. Examples of events in distributed systems include sending or receiving messages, or changing some state in a node. Not all events are related, but some events can cause and influence how other, later events occur.
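
A Lamport clock captures exactly this "can cause or influence" relation with one counter per node. A minimal sketch (the article builds toward richer clocks, so treat this as the baseline):

    class LamportClock:
        """Minimal Lamport clock: a counter that orders causally
        related events across nodes."""
        def __init__(self):
            self.time = 0

        def local_event(self):
            self.time += 1
            return self.time

        def send(self):
            self.time += 1
            return self.time          # timestamp carried by the message

        def receive(self, msg_time):
            self.time = max(self.time, msg_time) + 1
            return self.time

    a, b = LamportClock(), LamportClock()
    t = a.send()                      # a's clock: 1
    print(b.receive(t))               # b's clock: 2, ordered after the send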

April 12, 2016

Topic: Programming Languages

0 comments

Use-Case 2.0:
The Hub of Software Development

Use cases have been around for almost 30 years as a requirements approach and have been part of the inspiration for more-recent techniques such as user stories. Now the inspiration has flown in the other direction. Use-Case 2.0 is the new generation of use-case-driven development - light, agile, and lean - inspired by user stories and the agile methodologies Scrum and Kanban.

April 5, 2016

Topic: Development

0 comments

GNL is Not Linux:
What’s in a Name?

What, indeed, is in a name? As you’ve already seen, this quasi-technical topic continues to cause a bit of heat in the software community, particularly in the open-source world. You can find the narrative from the GNU side by clicking on the link provided in the postscript to this article, but KV finds that narrative lacking, and so, against my better judgment about pigs and dancing, I will weigh in with a few comments.

March 30, 2016

Topic: Development

2 comments

More Encryption Means Less Privacy:
Retaining electronic privacy requires more political engagement.

When Edward Snowden made it known to the world that pretty much all traffic on the Internet was collected and searched by the NSA, GCHQ (the UK Government Communications Headquarters) and various other countries’ secret services as well, the IT and networking communities were furious and felt betrayed.

March 17, 2016

Topic: Privacy and Rights

2 comments

Statistics for Engineers:
Applying statistical techniques to operations data

Modern IT systems collect an increasing wealth of data from network gear, operating systems, applications, and other components. This data needs to be analyzed to derive vital information about the user experience and business performance. For instance, faults need to be detected, service quality needs to be measured, and resource usage for the coming days and months needs to be forecast.
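
For example, service quality is usually summarized with percentiles rather than averages, since one outlier can hide in a mean. A small illustration with made-up latencies, using the nearest-rank percentile (one of several common definitions):

    def percentile(values, p):
        """Nearest-rank percentile of a list of values."""
        s = sorted(values)
        k = round(p / 100 * (len(s) - 1))
        return s[k]

    latencies_ms = [12, 15, 11, 240, 13, 14, 16, 12, 13, 15]
    print("mean:", sum(latencies_ms) / len(latencies_ms))   # 36.1, misleading
    print("p50:", percentile(latencies_ms, 50))             # 13
    print("p99:", percentile(latencies_ms, 99))             # 240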

March 11, 2016

Topic: Databases

1 comment

Borg, Omega, and Kubernetes:
Lessons learned from three container-management systems over a decade

Though widespread interest in software containers is a relatively recent phenomenon, at Google we have been managing Linux containers at scale for more than ten years and built three different container-management systems in that time. Each system was heavily influenced by its predecessors, even though they were developed for different reasons. This article describes the lessons we’ve learned from developing and operating them.

March 2, 2016

Topic: System Evolution

8 comments

Code Hoarding:
Committing to commits, and the beauty of summarizing graphs

Dear KV, Why are so many useful features of open-source projects hidden under obscure configuration options that mean they’ll get little or no use? Is this just typically poor documentation and promotion, or is there something that makes these developers hide their code? It’s not as if the code seems broken. When I turned these features on in some recent code I came across, the system remained stable under test and in production. I feel that code should either be used or removed from the system. If the code is in a source-code repository, then it’s not really lost, but it’s also not cluttering the rest of the system.

February 23, 2016

Topic: Code

0 comments

The Paradox of Autonomy and Recognition:
Thoughts on trust and merit in software team culture

Who doesn’t want recognition for their hard work and contributions? Early in my career I wanted to believe that if you worked hard, and added value, you would be rewarded. I wanted to believe in the utopian ideal that hard work, discipline, and contributions were the fuel that propelled you up the corporate ladder. Boy, was I wrong.

February 16, 2016

Topic: Development

2 comments

How Sysadmins Devalue Themselves:
And how to track on-call coverage

Q: Dear Tom, How can I devalue my work? Lately I’ve felt like everyone appreciates me, and, in fact, I’m overpaid and underutilized. Could you help me devalue myself at work? A: Dear Reader, Absolutely! I know what a pain it is to lug home those big paychecks. It’s so distracting to have people constantly patting you on the back. Ouch! Plus, popularity leads to dates with famous musicians and movie stars. (Just ask someone like Taylor Swift or Leonardo DiCaprio.) Who wants that kind of distraction when there’s a perfectly good video game to be played?

February 8, 2016

Topic: System Administration

2 comments

The Verification of a Distributed System:
A practitioner’s guide to increasing confidence in system correctness

Leslie Lamport, known for his seminal work in distributed systems, famously said, "A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable." Given this bleak outlook and the large set of possible failures, how do you even begin to verify and validate that the distributed systems you build are doing the right thing?

February 1, 2016

Topic: Distributed Development

0 comments

Accountability in Algorithmic Decision-making:
A view from computational journalism

Every fiscal quarter automated writing algorithms churn out thousands of corporate earnings articles for the AP (Associated Press) based on little more than structured data. Companies such as Automated Insights, which produces the articles for AP, and Narrative Science can now write straight news articles in almost any domain that has clean and well-structured data: finance, sure, but also sports, weather, and education, among others. The articles aren’t cardboard either; they have variability, tone, and style, and in some cases readers even have difficulty distinguishing the machine-produced articles from human-written ones.

January 25, 2016

Topic: Privacy and Rights

0 comments

Immutability Changes Everything:
We need it, we can afford it, and the time is now.

There is an inexorable trend toward storing and sending immutable data. We need immutability to coordinate at a distance, and we can afford immutability as storage gets cheaper. This article is an amuse-bouche sampling the repeated patterns of computing that leverage immutability. Climbing up and down the compute stack really does yield a sense of déjà vu all over again.

January 20, 2016

Topic: Databases

2 comments

Time is an Illusion.:
Lunchtime doubly so. - Ford Prefect to Arthur Dent in "The Hitchhiker’s Guide to the Galaxy", by Douglas Adams

One of the more surprising things about digital systems - and, in particular, modern computers - is how poorly they keep time. When most programs ran on a single system this was not a significant issue for the majority of software developers, but once software moved into the distributed-systems realm this inaccuracy became a significant challenge.

January 12, 2016

Topic: Distributed Computing

4 comments

Non-volatile Storage:
Implications of the Datacenter’s Shifting Center

For the entire careers of most practicing computer scientists, a fundamental observation has consistently held true: CPUs are significantly more performant and more expensive than I/O devices. The fact that CPUs can process data at extremely high rates, while simultaneously servicing multiple I/O devices, has had a sweeping impact on the design of both hardware and software for systems of all sizes, for pretty much as long as we’ve been building them.

January 5, 2016

Topic: File Systems and Storage

12 comments

Schema.org: Evolution of Structured Data on the Web:
Big data makes common schemas even more necessary.

Separation between content and presentation has always been one of the important design aspects of the Web. Historically, however, even though most Web sites were driven off structured databases, they published their content purely in HTML. Services such as Web search, price comparison, reservation engines, etc. that operated on this content had access only to HTML. Applications requiring access to the structured data underlying these Web pages had to build custom extractors to convert plain HTML into structured data. These efforts were often laborious and the scrapers were fragile and error-prone, breaking every time a site changed its layout.

December 15, 2015

Topic: Databases

0 comments

A Purpose-built Global Network: Google’s Move to SDN:
A discussion with Amin Vahdat, David Clark, and Jennifer Rexford

Everything about Google is at scale, of course -- a market cap of legendary proportions, an unrivaled talent pool, enough intellectual property to keep armies of attorneys in Guccis for life, and, oh yeah, a private WAN (wide area network) bigger than you can possibly imagine that also happens to be growing substantially faster than the Internet as a whole.

December 11, 2015

Topic: Networks

0 comments

Pickled Patches:
On repositories of patches and tension between security professionals and in-house developers

I recently came upon a software repository that was not a repo of code, but a repo of patches. The project seemed to build itself out of several other components and then had complicated scripts that applied the patches in a particular order. I had to look at this repo because I wanted to fix a bug in the system, but trying to figure out what the code actually looked like at any particular point in time was baffling. Are there tools that would help in working like this?

December 9, 2015

Topic: Development

0 comments

It Probably Works:
Probabilistic algorithms are all around us--not only are they acceptable, but some programmers actually seek out chances to use them.

Probabilistic algorithms exist to solve problems that are either impossible or unrealistic (too expensive, too time-consuming, etc.) to solve precisely. In an ideal world, you would never actually need to use probabilistic algorithms. To programmers who are not familiar with them, the idea can be positively nervewracking: "How do I know that it will actually work? What if it’s inexplicably wrong? How can I debug it? Maybe we should just punt on this problem, or buy a whole lot more servers..."
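
A Bloom filter is a friendly first example: it answers set membership with "definitely not" or "probably yes" in a fraction of the memory an exact set needs. A minimal Python version (the sizes are illustrative; real deployments pick bits and hash counts for a target error rate):

    import hashlib

    class BloomFilter:
        """Probabilistic set: 'no' is always right, 'maybe' is
        occasionally wrong, and memory use is small and fixed."""
        def __init__(self, bits=8192, hashes=4):
            self.bits, self.hashes = bits, hashes
            self.bitmap = bytearray(bits // 8)

        def _positions(self, item):
            for i in range(self.hashes):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.bits

        def add(self, item):
            for p in self._positions(item):
                self.bitmap[p // 8] |= 1 << (p % 8)

        def __contains__(self, item):
            return all(self.bitmap[p // 8] & (1 << (p % 8))
                       for p in self._positions(item))

    bf = BloomFilter()
    bf.add("alice")
    print("alice" in bf, "bob" in bf)   # True False (almost certainly)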

December 7, 2015

Topic: Development

1 comment

Challenges of Memory Management on Modern NUMA Systems:
Optimizing NUMA systems applications with Carrefour

Modern server-class systems are typically built as several multicore chips put together in a single system. Each chip has a local DRAM (dynamic random-access memory) module; together they are referred to as a node. Nodes are connected via a high-speed interconnect, and the system is fully coherent. This means that, transparently to the programmer, a core can issue requests to its node’s local memory as well as to the memories of other nodes. The key distinction is that remote requests will take longer, because they are subject to longer wire delays and may have to jump several hops as they traverse the interconnect.

December 1, 2015

Topic: Concurrency

1 comment

Componentizing the Web:
We may be on the cusp of a new revolution in web development.

There is no task in software engineering today quite as herculean as web development. A typical specification for a web application might read: The app must work across a wide variety of browsers. It must run animations at 60 fps. It must be immediately responsive to touch. It must conform to a specific set of design principles and specs. It must work on just about every screen size imaginable, from TVs and 30-inch monitors to mobile phones and watch faces. It must be well-engineered and maintainable in the long term.

November 9, 2015

Topic: Web Development

0 comments

Automation Should Be Like Iron Man, Not Ultron:
The "Leftover Principle" Requires Increasingly More Highly-skilled Humans.

A few years ago we automated a major process in our system administration team. Now the system is impossible to debug. Nobody remembers the old manual process and the automation is beyond what any of us can understand. We feel like we’ve painted ourselves into a corner. Is all operations automation doomed to be this way?

October 31, 2015

Topic: Development

4 comments

Lean Software Development - Building and Shipping Two Versions:
Catering to developers’ strengths while still meeting team objectives

Once I was managing a software team and we were working on several initiatives. Projects were assigned based on who was available, their skillsets, and their development goals. This resulted in two developers, Mary and Melissa, being assigned to the same project.

October 31, 2015

Topic: Development

2 comments

Fail at Scale:
Reliability in the face of rapid change

Failure is part of engineering any large-scale system. One of Facebook’s cultural values is embracing failure. This can be seen in the posters hung around the walls of our Menlo Park headquarters: "What Would You Do If You Weren’t Afraid?" and "Fortune Favors the Bold."

October 27, 2015

Topic: Web Services

0 comments

How to De-identify Your Data:
Balancing statistical accuracy and subject privacy in large social-science data sets

Big data is all the rage; using large data sets promises to give us new insights into questions that have been difficult or impossible to answer in the past. This is especially true in fields such as medicine and the social sciences, where large amounts of data can be gathered and mined to find insightful relationships among variables. Data in such fields involves humans, however, and thus raises issues of privacy that are not faced by fields such as physics or astronomy.

October 25, 2015

Topic: Privacy and Rights

4 comments

Still Finding the Right Questions:
Branching out and changing with the times at acmqueue

Welcome to the newest incarnation of acmqueue. When we started putting together the first edition of ACM Queue in early 2003, it was a completely new experiment in publishing for ACM. Targeting a practitioner audience meant that much of what we did would differ from academic publishing. We created a new editorial board whose role was not only to vet articles, but also to identify topics and authors that would be of interest to both practitioners and academics. The board created the concept of guest experts who would take on an issue of the magazine and help the board acquire content and sometimes write the overarching piece that tied it all together.

October 22, 2015

Topic: Development

0 comments

Crash Consistency:
Rethinking the Fundamental Abstractions of the File System

The reading and writing of data, one of the most fundamental aspects of any von Neumann computer, is surprisingly subtle and full of nuance. For example, consider access to a shared memory in a system with multiple processors. While a simple and intuitive approach known as strong consistency is easiest for programmers to understand, many weaker models are in widespread use (e.g., x86 total store ordering); such approaches improve system performance, but at the cost of making reasoning about system behavior more complex and error-prone.

July 7, 2015

Topic: File Systems and Storage

0 comments

Testing a Distributed System:
Testing a distributed system can be trying even under the best of circumstances.

Distributed systems can be especially difficult to program, for a variety of reasons. They can be difficult to design, difficult to manage, and, above all, difficult to test. Testing a normal system can be trying even under the best of circumstances, and no matter how diligent the tester is, bugs can still get through. Now take all of the standard issues and multiply them by multiple processes written in multiple languages running on multiple boxes that could potentially all be on different operating systems, and there is potential for a real disaster.

July 1, 2015

Topic: Distributed Development

5 comments

Natural Language Translation at the Intersection of AI and HCI:
Old questions being answered with both AI and HCI

The fields of artificial intelligence (AI) and human-computer interaction (HCI) are influencing each other like never before. Widely used systems such as Google Translate, Facebook Graph Search, and RelateIQ hide the complexity of large-scale AI systems behind intuitive interfaces. But relations were not always so auspicious. The two fields emerged at different points in the history of computer science, with different influences, ambitions, and attendant biases. AI aimed to construct a rival, and perhaps a successor, to the human intellect. Early AI researchers such as McCarthy, Minsky, and Shannon were mathematicians by training, so theorem-proving and formal models were attractive research directions.

June 28, 2015

Topic: AI

1 comment

Beyond Page Objects: Testing Web Applications with State Objects:
Use states to drive your tests

End-to-end testing of Web applications typically involves tricky interactions with Web pages by means of a framework such as Selenium WebDriver. The recommended method for hiding such Web-page intricacies is to use page objects, but there are questions to answer first: Which page objects should you create when testing Web applications? What actions should you include in a page object? Which test scenarios should you specify, given your page objects?
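
The idea in miniature: each state of the application gets an object exposing only the actions legal in that state, and each action returns the state object it leads to, so a test becomes a walk through states. A hedged Python sketch with a fake driver standing in for Selenium WebDriver (the selectors and helper methods are hypothetical):

    class FakeDriver:
        # Stand-in for Selenium WebDriver in this sketch.
        def type(self, selector, text):
            print("type", selector, text)
        def click(self, selector):
            print("click", selector)

    class LoginState:
        def __init__(self, driver):
            self.driver = driver
        def login(self, user, password):
            self.driver.type("#user", user)
            self.driver.type("#password", password)
            self.driver.click("#submit")
            return HomeState(self.driver)   # the action leads to a new state

    class HomeState:
        def __init__(self, driver):
            self.driver = driver
        def logout(self):
            self.driver.click("#logout")
            return LoginState(self.driver)

    # A scenario is a walk through states:
    home = LoginState(FakeDriver()).login("alice", "secret")
    home.logout()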

June 16, 2015

Topic: Web Development

1 comment

Hickory Dickory Doc:
On null encryption and automated documentation

Dear KV, While reviewing some encryption code in our product, I came across an option that allowed for null encryption. This means the encryption could be turned on, but the data would never be encrypted or decrypted. It would always be stored "in the clear." I removed the option from our latest source tree because I figured we didn’t want an unsuspecting user to turn on encryption but still have data stored in the clear. One of the other programmers on my team reviewed the potential change and blocked me from committing it, saying that the null code could be used for testing.

June 11, 2015

Topic: Development

1 comment

Dismantling the Barriers to Entry:
We have to choose to build a web that is accessible to everyone.

A war is being waged in the world of web development. On one side is a vanguard of toolmakers and tool users, who thrive on the destruction of bad old ideas ("old," in this milieu, meaning anything that debuted on Hacker News more than a month ago) and raucous debates about transpilers and suchlike.

June 8, 2015

Topic: Web Development

1 comment

Hadoop Superlinear Scalability:
The perpetual motion of parallel performance

"We often see more than 100 percent speedup efficiency!" came the rejoinder to the innocent reminder that you can’t have more than 100 percent of anything. But this was just the first volley from software engineers during a presentation on how to quantify computer system scalability in terms of the speedup metric. In different venues, on subsequent occasions, that retort seemed to grow into a veritable chorus claiming not only that superlinear speedup was commonly observed, but also that the model used to quantify scalability for the past 20 years failed when applied to superlinear speedup data.
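
The metric under dispute is easy to state: speedup is S(N) = T(1)/T(N), efficiency is S(N)/N, and "more than 100 percent" means efficiency above 1.0. Illustrative numbers, not the presenters' data:

    t1 = 1000.0                 # runtime on 1 node (seconds)
    runtimes = {2: 480.0, 4: 230.0, 8: 115.0}

    for n, tn in runtimes.items():
        speedup = t1 / tn
        efficiency = speedup / n
        print(f"N={n}: speedup={speedup:.2f}, efficiency={efficiency:.0%}")
    # N=2 gives speedup 2.08, i.e. 104% efficiency: superlinear.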

June 4, 2015

Topic: Performance

2 comments

Lazarus Code:
No one expects the Spanish Acquisition.

I’ve been asked to look into the possibility of taking a 15-year-old piece of open-source software and updating it to work on a current system used by my company. The code itself doesn’t seem to be too bad, at least no worse than the code I’m used to reading, but I suspect it might be easier to write a new version from scratch than to try to understand code that I didn’t write and which no one has actively maintained for several years.

May 6, 2015

Topic: Code

0 comments

Evolution and Practice: Low-latency Distributed Applications in Finance:
The finance industry has unique demands for low-latency distributed systems.

Virtually all systems have some requirements for latency, defined here as the time required for a system to respond to input. Latency requirements appear in problem domains as diverse as aircraft flight controls, voice communications, multiplayer gaming, online advertising, and scientific experiments. Distributed systems present special latency considerations. In recent years the automation of financial trading has driven requirements for distributed systems with challenging latency requirements and global geographic distribution. Automated trading provides a window into the engineering challenges of ever-shrinking latency requirements, which may be useful to software engineers in other fields.

May 4, 2015

Topic: Distributed Computing

1 comment

The Science of Managing Data Science:
Lessons learned managing a data science research team

What are they doing all day? When I first took over as VP of Engineering at a startup doing data mining and machine learning research, this was what the other executives wanted to know. They knew the team was super smart, and they seemed like they were working really hard, but the executives had lots of questions about the work itself. How did they know that the work they were doing was the "right" work? Were there other projects they could be doing instead? And how could we get this research into the hands of our customers faster?

April 29, 2015

Topic: Data

0 comments

Using Free and Open Source Tools to Manage Software Quality:
An agile process implementation

The principles of agile software development place more emphasis on individuals and interactions than on processes and tools. They steer us away from heavy documentation requirements and guide us along a path of reacting efficiently to change rather than sticking rigidly to a pre-defined plan. To support this flexible method of operation, it is important to have suitable applications to manage the team’s activities. It is also essential to implement effective frameworks to ensure quality is being built into the product early and at all levels.

April 27, 2015

Topic: Tools

1 comment

From the EDVAC to WEBVACs:
Cloud computing for computer scientists

By now everyone has heard of cloud computing and realized that it is changing how both traditional enterprise IT and emerging startups are building solutions for the future. Is this trend toward the cloud just a shift in the complicated economics of the hardware and software industry, or is it a fundamentally different way of thinking about computing? Having worked in the industry, I can confidently say it is both.

April 9, 2015

Topic: Distributed Computing

0 comments

Spicing Up Dart with Side Effects:
A set of extensions to the Dart programming language, designed to support asynchrony and generator functions

The Dart programming language has recently incorporated a set of extensions designed to support asynchrony and generator functions. Because Dart is a language for Web programming, latency is an important concern. To avoid blocking, developers must make methods asynchronous when computing their results requires nontrivial time. Generator functions ease the task of computing iterable sequences.

March 19, 2015

Topic: Programming Languages

0 comments

Reliable Cron across the Planet:
...or How I stopped worrying and learned to love time

This article describes Google’s implementation of a distributed Cron service, serving the vast majority of internal teams that need periodic scheduling of compute jobs. During its existence, we have learned many lessons on how to design and implement what might seem like a basic service. Here, we discuss the problems that distributed Crons face and outline some potential solutions.

March 12, 2015

Topic: Distributed Computing

2 comments

There is No Now:
Problems with simultaneity in distributed systems

Now. The time elapsed between when I wrote that word and when you read it was at least a couple of weeks. That kind of delay is one that we take for granted and don’t even think about in written media. "Now." If we were in the same room and instead I spoke aloud, you might have a greater sense of immediacy. You might intuitively feel as if you were hearing the word at exactly the same time that I spoke it. That intuition would be wrong. If, instead of trusting your intuition, you thought about the physics of sound, you would know that time must have elapsed between my speaking and your hearing.

March 10, 2015

Topic: Distributed Computing

3 comments

Parallel Processing with Promises:
A simple method of writing a collaborative system

In today’s world, there are many reasons to write concurrent software. The desire to improve performance and increase throughput has led to many different asynchronous techniques. The techniques involved, however, are generally complex and the source of many subtle bugs, especially if they require shared mutable state. If shared state is not required, then these problems can be solved with a better abstraction called promises. These allow programmers to hook asynchronous function calls together, waiting for each to return success or failure before running the next appropriate function in the chain.
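
As a rough sketch of that chaining idea - each step runs only after the previous one settles, with failures propagated rather than lost - here is one way it can look with Python futures (the article is not Python-specific; the download and parse steps are invented for illustration):

    from concurrent.futures import Future, ThreadPoolExecutor

    def download(url):
        return "contents of " + url      # stand-in for a slow network call

    def parse(text):
        return text.upper()

    pool = ThreadPoolExecutor()
    parsed = Future()                    # the next promise in the chain

    def then_parse(done):
        # Runs once the download promise settles: propagate failure,
        # or run the next step in the chain on success.
        err = done.exception()
        if err is not None:
            parsed.set_exception(err)
        else:
            parsed.set_result(parse(done.result()))

    pool.submit(download, "http://example.com").add_done_callback(then_parse)
    print(parsed.result())               # CONTENTS OF HTTP://EXAMPLE.COM
    pool.shutdown()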

March 3, 2015

Topic: Concurrency

0 comments

Raw Networking:
Relevance and repeatability

Dear KV, The company I work for has decided to use a wireless network link to reduce latency, at least when the weather between the stations is good. It seems to me that for transmission over lossy wireless links we’ll want our own transport protocol that sits directly on top of whatever the radio provides, instead of wasting bits on IP and TCP or UDP headers, which, for a point-to-point network, aren’t really useful.

February 2, 2015

Topic: Development

0 comments

META II: Digital Vellum in the Digital Scriptorium:
Revisiting Schorre’s 1962 compiler-compiler

Some people do living history -- reviving older skills and material culture by reenacting Waterloo or knapping flint knives. One pleasant rainy weekend in 2012, I set my sights a little more recently and settled in for a little meditative retro-computing, ca. 1962, following the ancient mode of transmission of knowledge: lecture and recitation -- or rather, grace of living in historical times, lecture (here, in the French sense, reading) and transcription (or even more specifically, grace of living post-Post, lecture and reimplementation).

January 21, 2015

Topic: Programming Languages

4 comments

Model-based Testing: Where Does It Stand?:
MBT has positive effects on efficiency and effectiveness, even if it only partially fulfills high expectations.

You have probably heard about MBT (model-based testing), but like many software-engineering professionals who have not used MBT, you might be curious about others’ experience with this test-design method. From mid-June 2014 to early August 2014, we conducted a survey to learn how MBT users view its efficiency and effectiveness. The 2014 MBT User Survey, a follow-up to a similar 2012 survey, was open to all those who have evaluated or used any MBT approach. Its 32 questions included some from a survey distributed at the 2013 User Conference on Advanced Automated Testing. Some questions focused on the efficiency and effectiveness of MBT, providing the figures that managers are most interested in.

January 19, 2015

Topic: Quality Assurance

4 comments

Go Static or Go Home:
In the end, dynamic systems are simply less secure.

Most current and historic problems in computer and network security boil down to a single observation: letting other people control our devices is bad for us. At another time, I’ll explain what I mean by "other people" and "bad." For the purpose of this article, I’ll focus entirely on what I mean by control. One way we lose control of our devices is to external distributed denial of service (DDoS) attacks, which fill a network with unwanted traffic, leaving no room for real ("wanted") traffic. Other forms of DDoS are similar: an attack by the Low Orbit Ion Cannon (LOIC), for example, might not totally fill up a network, but it can keep a web server so busy answering useless attack requests that the server can’t answer any useful customer requests.

January 14, 2015

Topic: Web Security

7 comments

Securing the Network Time Protocol:
Crackers discover how to use NTP as a weapon for abuse.

In the late 1970s David L. Mills began working on the problem of synchronizing time on networked computers, and NTP (Network Time Protocol) version 1 made its debut in 1980. This was at a time when the net was a much friendlier place - the ARPANET days. NTP version 2 appeared approximately a year later, about the same time as CSNET (Computer Science Network). NSFNET (National Science Foundation Network) launched in 1986. NTP version 3 showed up in 1993.

January 8, 2015

Topic: Networks

0 comments

HTTP/2.0 - The IETF is Phoning It In:
Bad protocol, bad politics

In the long run, the most memorable event of 1989 will probably be that Tim Berners-Lee hacked up the HTTP protocol and named the result the "World Wide Web." Tim’s HTTP protocol ran over 10-Mbit/s Ethernet and coax cables, and his computer was a NeXT Cube with a 25-MHz clock frequency. Twenty-six years later, my laptop CPU is a hundred times faster and has a thousand times as much RAM as Tim’s machine had, but the HTTP protocol is still the same.

January 6, 2015

Topic: Web Services

15 comments

Scalability Techniques for Practical Synchronization Primitives:
Designing locking primitives with performance in mind

In an ideal world, applications are expected to scale automatically when executed on increasingly larger systems. In practice, however, not only does this scaling not occur, but it is common to see performance actually worsen on those larger systems.

December 14, 2014

Topic: Concurrency

0 comments

Internal Access Controls:
Trust, but Verify

Every day seems to bring news of another dramatic and high-profile security incident, whether it is the discovery of longstanding vulnerabilities in widely used software such as OpenSSL or Bash, or celebrity photographs stolen and publicized. There seems to be an infinite supply of zero-day vulnerabilities and powerful state-sponsored attackers. In the face of such threats, is it even worth trying to protect your systems and data? What can systems security designers and administrators do?

December 10, 2014

Topic: Security

0 comments

Disambiguating Databases:
Use the database built for your access model.

The topic of data storage is one that doesn’t need to be well understood until something goes wrong (data disappears) or something goes really right (too many customers). Because databases can be treated as black boxes with an API, their inner workings are often overlooked. They’re often treated as magic things that just take data when offered and supply it when asked. Since these two operations are the only understood activities of the technology, they are often the only features presented when comparing different technologies.

December 8, 2014

Topic: Databases

3 comments

Too Big to Fail:
Visibility leads to debuggability.

Our project has been rolling out a well-known, distributed key/value store onto our infrastructure, and we’ve been surprised - more than once - when a simple increase in the number of clients has not only slowed things, but brought them to a complete halt. This then results in rollback while several of us scour the online forums to figure out if anyone else has seen the same problem. The entire reason for using this project’s software is to increase the scale of a large system, so I have been surprised at how many times a small increase in load has led to a complete failure.

December 1, 2014

Topic: Databases

1 comment

A New Software Engineering:
What happened to the promise of rigorous, disciplined, professional practices for software development?

What happened to software engineering? What happened to the promise of rigorous, disciplined, professional practices for software development, like those observed in other engineering disciplines? What has been adopted under the rubric of "software engineering" is a set of practices largely adapted from other engineering disciplines: project management, design and blueprinting, process control, and so forth. The basic analogy was to treat software as a manufactured product, with all the real "engineering" going on upstream of that - in requirements analysis, design, modeling, etc.

November 29, 2014

Topic: Development

12 comments

There’s No Such Thing as a General-purpose Processor:
And the belief in such a device is harmful

There is an increasing trend in computer architecture to categorize processors and accelerators as "general purpose." Of the papers published at this year’s International Symposium on Computer Architecture (ISCA 2014), nine out of 45 explicitly referred to general-purpose processors; one additionally referred to general-purpose FPGAs (field-programmable gate arrays), and another referred to general-purpose MIMD (multiple instruction, multiple data) supercomputers, stretching the definition to the breaking point. This article presents the argument that there is no such thing as a truly general-purpose processor and that the belief in such a device is harmful.

November 6, 2014

Topic: Computer Architecture

6 comments

The Responsive Enterprise: Embracing the Hacker Way:
Soon every company will be a software company.

As of July 2014, Facebook, founded in 2004, is among the 20 most valuable companies in the S&P 500, putting the 10-year-old software company in the same league as IBM, Oracle, and Coca-Cola. Of the top five fastest-growing companies with regard to market capitalization in 2014 (table 1), three are software companies: Apple, Google, and Microsoft (in fact, one could argue that Intel is also driven by software, making it four out of five).

November 3, 2014

Topic: Development

14 comments

Evolution of the Product Manager:
Better education needed to develop the discipline

Software practitioners know that product management is a key piece of software development. Product managers talk to users to help figure out what to build, define requirements, and write functional specifications. They work closely with engineers throughout the process of building software. They serve as a sounding board for ideas, help balance the schedule when technical challenges occur - and push back to executive teams when technical revisions are needed. Product managers are involved from before the first code is written, until after it goes out the door.

October 22, 2014

Topic: Education

4 comments

Productivity in Parallel Programming: A Decade of Progress:
Looking at the design and benefits of X10

In 2002 DARPA (Defense Advanced Research Projects Agency) launched a major initiative in HPCS (high-productivity computing systems). The program was motivated by the belief that the utilization of the coming generation of parallel machines was gated by the difficulty of writing, debugging, tuning, and maintaining software at petascale.

October 20, 2014

Topic: Concurrency

0 comments

JavaScript and the Netflix User Interface:
Conditional dependency resolution

In the two decades since its introduction, JavaScript has become the de facto official language of the Web. JavaScript trumps every other language when it comes to the number of runtime environments in the wild. Nearly every consumer hardware device on the market today supports the language in some way. While this is done most commonly through the integration of a Web browser application, many devices now also support Web views natively as part of the operating system UI (user interface).

October 14, 2014

Topic: Web Development

6 comments

Port Squatting:
Don’t irk your local sysadmin.

Dear KV, A few years ago you upbraided some developers for not following the correct process when requesting a reserved network port from IETF (Internet Engineering Task Force). While I get that squatting a used port is poor practice, I wonder if you, yourself, have ever tried to get IETF to allocate a port. We recently went through this with a new protocol on an open-source project, and it was a nontrivial and frustrating exercise.

September 28, 2014

Topic: Networks

0 comments

Security Collapse in the HTTPS Market:
Assessing legal and technical solutions to secure HTTPS

HTTPS (Hypertext Transfer Protocol Secure) has evolved into the de facto standard for secure Web browsing. Through the certificate-based authentication protocol, Web services and Internet users first authenticate one another ("shake hands") using a TLS/SSL certificate, encrypt Web communications end-to-end, and show a padlock in the browser to signal that a communication is secure. In recent years, HTTPS has become an essential technology to protect social, political, and economic activities online.
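
The handshake-then-encrypt sequence is visible from the client side with nothing more than a standard TLS library; a minimal sketch in Python (the host name is just an example):

    import socket
    import ssl

    ctx = ssl.create_default_context()   # loads the trusted CA roots

    # The handshake happens inside wrap_socket(): the server presents its
    # certificate, the client checks it against the CA roots and the host
    # name, and only then is the encrypted channel established.
    with socket.create_connection(("www.example.org", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="www.example.org") as tls:
            print(tls.version())                  # e.g., TLSv1.3
            print(tls.getpeercert()["subject"])   # who the CA vouched for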

September 23, 2014

Topic: Web Security

3 comments

Why Is It Taking So Long to Secure Internet Routing?:
Routing security incidents can still slip past deployed security defenses.

BGP (Border Gateway Protocol) is the glue that sticks the Internet together, enabling data communications between large networks operated by different organizations. BGP makes Internet communications global by setting up routes for traffic between organizations - for example, from Boston University’s network, through larger ISPs (Internet service providers) such as Level3, Pakistan Telecom, and China Telecom, then on to residential networks such as Comcast or enterprise networks such as Bank of America.

September 11, 2014

Topic: Web Security

2 comments

Certificate Transparency:
Public, verifiable, append-only logs

On August 28, 2011, a mis-issued wildcard HTTPS certificate for google.com was used to conduct a man-in-the-middle attack against multiple users in Iran. The certificate had been issued by a Dutch CA (certificate authority) known as DigiNotar, a subsidiary of VASCO Data Security International. Later analysis showed that DigiNotar had been aware of the breach in its systems for more than a month - since at least July 19. It also showed that at least 531 fraudulent certificates had been issued. The final count may never be known, since DigiNotar did not have records of all the mis-issued certificates.

September 8, 2014

Topic: Web Security

1 comment

Securing the Tangled Web:
Preventing script injection vulnerabilities through software design

Script injection vulnerabilities are a bane of Web application development: deceptively simple in cause and remedy, they are nevertheless surprisingly difficult to prevent in large-scale Web development.
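
The deceptively simple cause - untrusted data interpolated into markup without encoding - fits in a few lines; a sketch in Python (real defenses, as the article discusses, have to be structural rather than ad hoc):

    import html

    user_input = '<script>alert("pwned")</script>'

    # Vulnerable: the browser would execute the injected script.
    unsafe = "<p>Hello, " + user_input + "</p>"

    # Encoded for an HTML text context: the payload renders as inert text.
    safe = "<p>Hello, " + html.escape(user_input) + "</p>"

    print(safe)
    # <p>Hello, &lt;script&gt;alert(&quot;pwned&quot;)&lt;/script&gt;</p>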

August 25, 2014

Topic: Web Security

0 comments

Privacy, Anonymity, and Big Data in the Social Sciences:
Quality social science research and the privacy of human subjects requires trust.

Open data has tremendous potential for science, but, in human subjects research, there is a tension between privacy and releasing high-quality open data. Federal law governing student privacy and the release of student records suggests that anonymizing student data protects student privacy. Guided by this standard, we de-identified and released a data set from 16 MOOCs (massive open online courses) from MITx and HarvardX on the edX platform. In this article, we show that these and other de-identification procedures necessitate changes to data sets that threaten replication and extension of baseline analyses. To balance student privacy and the benefits of open data, we suggest focusing on protecting privacy without anonymizing data by instead expanding policies that compel researchers to uphold the privacy of the subjects in open data sets.

August 14, 2014

Topic: Education

1 comment

The Network is Reliable:
An informal survey of real-world communications failures

"The network is reliable" tops Peter Deutsch’s classic list, "Eight fallacies of distributed computing", "all [of which] prove to be false in the long run and all [of which] cause big trouble and painful learning experiences." Accounting for and understanding the implications of network behavior is key to designing robust distributed programs; in fact, six of Deutsch’s "fallacies" directly pertain to limitations on networked communications.

July 23, 2014

Topic: Networks

1 comment

Undergraduate Software Engineering: Addressing the Needs of Professional Software Development:
Addressing the Needs of Professional Software Development

In the fall semester of 1996 RIT (Rochester Institute of Technology) launched the first undergraduate software engineering program in the United States. The culmination of five years of planning, development, and review, the program was designed from the outset to prepare graduates for professional positions in commercial and industrial software development.

July 21, 2014

Topic: Education

2 comments

Bringing Arbitrary Compute to Authoritative Data:
Many disparate use cases can be satisfied with a single storage system.

While the term "big data" is vague enough to have lost much of its meaning, today’s storage systems are growing more quickly and managing more data than ever before. Consumer devices generate large numbers of photos, videos, and other large digital assets. Machines are rapidly catching up to humans in data generation through extensive recording of system logs and metrics, as well as applications such as video capture and genome sequencing. Large data sets are now commonplace, and people increasingly want to run sophisticated analyses on the data.

July 13, 2014

Topic: Databases

0 comments

ACM and the Professional Programmer:
How do you, the reader, stay informed about research that influences your work?

In the very early days of computing, professional programming was nearly synonymous with academic research because computers tended to be devices that existed only or largely in academic settings. As computers became commercially available, they began to be found in private-sector, business environments. The 1950s and 1960s brought computing in the form of automation and data processing to the private sector, and along with it came a growing community of professionals whose focus on computing was pragmatic and production oriented. Computing was (and still is) evolving, and the academic community continued to explore new software and hardware concepts and constructs. New languages were invented (and are still being invented) to try new ideas in the formulation of programs.

July 2, 2014

Topic: Development

28 comments

Outsourcing Responsibility:
What do you do when your debugger fails you?

Dear KV, I’ve been assigned to help with a new project and have been looking over the admittedly skimpy documentation the team has placed on the internal wiki. I spent a day or so staring at what seemed to be a long list of open-source projects that they intend to integrate into the system they have been building, but I couldn’t find where their original work was described.

July 1, 2014

Topic: Debugging

2 comments

Quality Software Costs Money - Heartbleed Was Free:
How to generate funding for FOSS

The world runs on free and open-source software, FOSS for short, and it has predictably infiltrated, to some degree, just about every software-based product anywhere in the world.

June 19, 2014

Topic: Security

10 comments

Who Must You Trust?:
You must have some trust if you want to get anything done.

In his novel The Diamond Age, author Neal Stephenson describes a constructed society (called a phyle) based on extreme trust in one’s fellow members. Part of the membership requirements is that, from time to time, each member is called upon to undertake certain tasks to reinforce that trust. For example, a phyle member might be told to go to a particular location at the top of a cliff at a specific time, where he will find bungee cords with ankle harnesses attached. The other ends of the cords trail off into the bushes. At the appointed time he is to fasten the harnesses to his ankles and jump off the cliff.

May 30, 2014

Topic: Security

5 comments

Automated QA Testing at EA: Driven by Events:
A discussion with Michael Donat, Jafar Husain, and Terry Coatta

To millions of game geeks, the position of QA (quality assurance) tester at Electronic Arts must seem like a dream job. But from the company’s perspective, the overhead associated with QA can look downright frightening, particularly in an era of massively multiplayer games.

May 19, 2014

Topic: Quality Assurance

0 comments

Design Exploration through Code-generating DSLs:
High-level DSLs for low-level programming

DSLs (domain-specific languages) make programs shorter and easier to write. They can be stand-alone - for example, LaTeX, Makefiles, and SQL - or they can be embedded in a host language. You might think that DSLs embedded in high-level languages would be abstract or mathematically oriented, far from the nitty-gritty of low-level programming. This is not the case. This article demonstrates how high-level EDSLs (embedded DSLs) really can ease low-level programming. There is no contradiction.
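
To make the embedding idea concrete, here is a toy EDSL sketched in Python (the article’s hosts are Haskell-style; everything here is invented for illustration): overloaded operators build an expression tree in the host language, and a separate step generates low-level C from it.

    # Expression objects overload operators, so ordinary host-language
    # syntax builds a description of the computation (a "deep embedding").
    class Expr:
        def __add__(self, other): return Op("+", self, other)
        def __mul__(self, other): return Op("*", self, other)

    class Var(Expr):
        def __init__(self, name): self.name = name
        def gen(self): return self.name

    class Const(Expr):
        def __init__(self, value): self.value = value
        def gen(self): return str(self.value)

    class Op(Expr):
        def __init__(self, op, lhs, rhs): self.op, self.lhs, self.rhs = op, lhs, rhs
        def gen(self): return f"({self.lhs.gen()} {self.op} {self.rhs.gen()})"

    x, y = Var("x"), Var("y")
    body = x * x + y * Const(2)          # high-level description

    print(f"int f(int x, int y) {{ return {body.gen()}; }}")
    # int f(int x, int y) { return ((x * x) + (y * 2)); }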

May 15, 2014

Topic: Programming Languages

0 comments

Finding More Than One Worm in the Apple:
If you see something, say something.

In February Apple revealed and fixed an SSL (Secure Sockets Layer) vulnerability that had gone undiscovered since the release of iOS 6.0 in September 2012. It left users vulnerable to man-in-the-middle attacks thanks to a short circuit in the SSL/TLS (Transport Layer Security) handshake algorithm introduced by the duplication of a goto statement. Since the discovery of this very serious bug, many people have written about potential causes.

May 12, 2014

Topic: Security

13 comments

Domain-specific Languages and Code Synthesis Using Haskell:
Looking at embedded DSLs

There are many ways to give instructions to a computer: an electrical engineer might write a MATLAB program; a database administrator might write an SQL script; a hardware engineer might write in Verilog; and an accountant might write a spreadsheet with embedded formulas. Aside from the difference in language used in each of these examples, there is an important difference in form and idiom. Each uses a language customized to the job at hand, and each builds computational requests in a form both familiar and productive for programmers (although accountants may not think of themselves as programmers).

May 6, 2014

Topic: Programming Languages

2 comments

The NSA and Snowden: Securing the All-Seeing Eye:
How good security at the NSA could have stopped him

Edward Snowden, while an NSA (National Security Agency) contractor at Booz Allen Hamilton in Hawaii, copied up to 1.7 million top-secret and above documents, smuggling copies on a thumb drive out of the secure facility in which he worked, and later released many to the press. This has altered the relationship of the U.S. government with the American people, as well as with other countries. This article examines the computer security aspects of how the NSA could have prevented this, perhaps the most damaging breach of secrets in U.S. history.

April 28, 2014

Topic: Security

4 comments

The Curse of the Excluded Middle:
Mostly functional programming does not work.

There is a trend in the software industry to sell "mostly functional" programming as the silver bullet for solving problems developers face with concurrency, parallelism (manycore), and, of course, Big Data. Contemporary imperative languages could continue the ongoing trend, embrace closures, and try to limit mutation and other side effects. Unfortunately, just as "mostly secure" does not work, "mostly functional" does not work either. Instead, developers should seriously consider a completely fundamentalist option as well: embrace pure lazy functional programming with all effects explicitly surfaced in the type system using monads.

April 26, 2014

Topic: Programming Languages

35 comments

Forked Over:
Shortchanged by open source

How can one make reasonable packages based on open-source software when most open-source projects simply advise you to take the latest bits on GitHub or SourceForge? We could fork the code, as GitHub encourages us to do, and then make our own releases, but that puts the release-engineering work that we would expect from the project onto us.

April 23, 2014

Topic: Development

0 comments

Don’t Settle for Eventual Consistency:
Stronger properties for low-latency geo-replicated storage

Geo-replicated storage provides copies of the same data at multiple, geographically distinct locations. Facebook, for example, geo-replicates its data (profiles, friends lists, likes, etc.) to data centers on the east and west coasts of the United States, and in Europe. In each data center, a tier of separate Web servers accepts browser requests and then handles those requests by reading and writing data from the storage system.

April 21, 2014

Topic: Databases

3 comments

Please Put OpenSSL Out of Its Misery:
OpenSSL must die, for it will never get any better.

The OpenSSL software package is around 300,000 lines of code. At the old rule of thumb of roughly one bug per thousand lines, that means there are probably around 299 bugs still there, now that the Heartbleed bug, which allowed pretty much anybody to retrieve internal state to which they should normally not have access, has been fixed.

April 12, 2014

Topic: Security

47 comments

A Primer on Provenance:
Better understanding of data requires tracking its history and context.

Assessing the quality or validity of a piece of data is not usually done in isolation. You typically examine the context in which the data appears and try to determine its original sources or review the process through which it was created. This is not so straightforward when dealing with digital data, however: the result of a computation might have been derived from numerous sources and by applying complex successive transformations, possibly over long periods of time.

April 10, 2014

Topic: Data

1 comment

Multipath TCP:
Decoupled from IP, TCP is at last able to support multihomed hosts.

The Internet relies heavily on two protocols. In the network layer, IP (Internet Protocol) provides an unreliable datagram service and ensures that any host can exchange packets with any other host. Since its creation in the 1970s, IP has seen the addition of several features, including multicast, IPsec (IP security), and QoS (quality of service). The latest revision, IPv6 (IP version 6), supports 16-byte addresses.

March 4, 2014

Topic: Networks

0 comments

Major-league SEMAT: Why Should an Executive Care?:
Becoming better, faster, cheaper, and happier

In today’s ever more competitive world, boards of directors and executives demand that CIOs and their teams deliver "more with less." Studies show, without any real surprise, that there is no one-size-fits-all method to suit all software initiatives, and that a practice-based approach with some light but effective degree of order and governance is the goal of most software-development departments.

February 27, 2014

Topic: Development

0 comments

The Logic of Logging:
And the illogic of PDF

I work in a pretty open environment, and by open I mean that many people have the ability to become the root user on our servers so that they can fix things as they break.

February 24, 2014

Topic: Development

0 comments

Eventually Consistent: Not What You Were Expecting?:
Methods of quantifying consistency (or lack thereof) in eventually consistent storage systems

Storage systems continue to lay the foundation for modern Internet services such as Web search, e-commerce, and social networking. Pressures caused by rapidly growing user bases and data sets have driven system designs away from conventional centralized databases and toward more scalable distributed solutions, including simple NoSQL key-value storage systems, as well as more elaborate NewSQL databases that support transactions at scale.

February 18, 2014

Topic: Databases

0 comments

Scaling Existing Lock-based Applications with Lock Elision:
Lock elision enables existing lock-based programs to achieve the performance benefits of nonblocking synchronization and fine-grain locking with minor software engineering effort.

Multithreaded applications take advantage of increasing core counts to achieve high performance. Such programs, however, typically require programmers to reason about data shared among multiple threads. Programmers use synchronization mechanisms such as mutual-exclusion locks to ensure correct updates to shared data in the presence of accesses from multiple threads. Unfortunately, these mechanisms serialize thread accesses to the data and limit scalability.

February 8, 2014

Topic: Concurrency

1 comment

Rate-limiting State:
The edge of the Internet is an unruly place

By design, the Internet core is dumb, and the edge is smart. This design decision has enabled the Internet’s wildcat growth, since without complexity the core can grow at the speed of demand. On the downside, the decision to put all smartness at the edge means we’re at the mercy of scale when it comes to the quality of the Internet’s aggregate traffic load. Not all device and software builders have the skills and the quality assurance budgets that something the size of the Internet deserves.

February 4, 2014

Topic: Security

7 comments

The API Performance Contract:
How can the expected interactions between caller and implementation be guaranteed?

When you call functions in an API, you expect them to work correctly; sometimes this expectation is called a contract between the caller and the implementation. Callers also have performance expectations about these functions, and often the success of a software system depends on the API meeting these expectations. So there’s a performance contract as well as a correctness contract. The performance contract is usually implicit, often vague, and sometimes breached (by caller or implementation). How can this aspect of API design and documentation be improved?
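
One lightweight way to make the implicit contract explicit is to check it at the call site; a sketch in Python, where the budget and the measured call are purely illustrative:

    import time
    from contextlib import contextmanager

    @contextmanager
    def perf_contract(name, budget_seconds):
        # Warn when a call breaches its (normally unstated) latency budget.
        start = time.perf_counter()
        try:
            yield
        finally:
            elapsed = time.perf_counter() - start
            if elapsed > budget_seconds:
                print(f"{name}: {elapsed:.4f}s exceeds budget "
                      f"of {budget_seconds:.4f}s")

    with perf_contract("lookup", budget_seconds=0.001):
        sorted(range(100_000))   # stand-in for the API call under test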

January 30, 2014

Topic: Performance

3 comments

Provenance in Sensor Data Management:
A cohesive, independent solution for bringing provenance to scientific research

In today’s information-driven workplaces, data is constantly being moved around and undergoing transformation. The typical business-as-usual approach is to use e-mail attachments, shared network locations, databases, and, more recently, the cloud. More often than not, there are multiple versions of the data sitting in different locations, and users of this data are confounded by the lack of metadata describing its provenance - in other words, its lineage. The ProvDMS project at Oak Ridge National Laboratory (ORNL), described in this article, aims to solve this issue in the context of sensor data.

January 23, 2014

Topic: Data

0 comments

Node at LinkedIn: The Pursuit of Thinner, Lighter, Faster:
A discussion with Kiran Prasad, Kelly Norton, and Terry Coatta

Node.js, the server-side JavaScript-based software platform used to build scalable network applications, has been all the rage among many developers for the past couple of years, although its popularity has also managed to enrage some others, who have unleashed a barrage of negative blog posts to point out its perceived shortcomings. Still, while new and untested, Node continues to win more converts.

January 15, 2014

Topic: Web Services

1 comment

This is the Foo Field:
The meaning of bits and avoiding upgrade bog downs

When will someone write documentation that tells you what the bits mean rather than what they set? I’ve been working to integrate a library into our system, and every time I try to figure out what it wants from my code, all it tells me is what a part of it is: "This is the foo field." The problem is that it doesn’t tell me what happens when I set foo. It’s as if I’m supposed to know that already.

January 14, 2014

Topic: Development

2 comments

Unikernels: Rise of the Virtual Library Operating System:
What if all the software layers in a virtual appliance were compiled within the same safe, high-level language framework?

Cloud computing has been pioneering the business of renting computing resources in large data centers to multiple (and possibly competing) tenants. The basic enabling technology for the cloud is operating-system virtualization such as Xen or VMware, which allows customers to multiplex VMs (virtual machines) on a shared cluster of physical machines. Each VM presents as a self-contained computer, booting a standard operating-system kernel and running unmodified applications just as if it were executing on a physical machine.

January 12, 2014

Topic: Distributed Computing

1 comment

Toward Software-defined SLAs:
Enterprise computing in the public cloud

The public cloud has introduced new technology and architectures that could reshape enterprise computing. In particular, the public cloud is a new design center for enterprise applications, platform software, and services. API-driven orchestration of large-scale, on-demand resources is an important new design attribute, which differentiates public-cloud from conventional enterprise data-center infrastructure. Enterprise applications must adapt to the new public-cloud design center, but at the same time new software and system design patterns can add enterprise attributes and service levels to public-cloud services.

January 6, 2014

Topic: Distributed Computing

0 comments

The Road to SDN:
An intellectual history of programmable networks

Designing and managing networks has become more innovative over the past few years with the aid of SDN (software-defined networking). This technology seems to have appeared suddenly, but it is actually part of a long history of trying to make computer networks more programmable.

December 30, 2013

Topic: Networks

5 comments

Center Wheel for Success:
Not invented here syndrome is not unique to the IT world.

When I first read the claim that HealthCare.gov, the Web site initiated by the Affordable Care Act, had cost $500 million to create, I didn’t believe the number. There is no way to make a Web site cost that much. But the actual number seems to be not even an order of magnitude lower, and as I understand the reports, the Web site doesn’t have much to show for the high cost in terms of performance, features, or quality in general.

December 20, 2013

Topic: Web Services

16 comments

The Software Inferno:
Dante’s tale, as experienced by a software architect

The Software Inferno is a tale that parallels The Inferno, Part One of The Divine Comedy written by Dante Alighieri in the early 1300s. That literary masterpiece describes the condemnation and punishment faced by a variety of sinners in their hell-spent afterlives as recompense for atrocities committed during their earthly existences. The Software Inferno is a similar account, describing a journey where "sinners against software" are encountered amidst their torment, within their assigned areas of eternal condemnation, and paying their penance.

December 16, 2013

Topic: Development

7 comments

Making the Web Faster with HTTP 2.0:
HTTP continues to evolve

HTTP (Hypertext Transfer Protocol) is one of the most widely used application protocols on the Internet. Since its publication, RFC 2616 (HTTP 1.1) has served as a foundation for the unprecedented growth of the Internet: billions of devices of all shapes and sizes, from desktop computers to the tiny Web devices in our pockets, speak HTTP every day to deliver news, video, and millions of other Web applications we have all come to depend on in our everyday lives.

December 3, 2013

Topic: Web Development

3 comments

Intermediate Representation:
The increasing significance of intermediate representations in compilers

Program compilation is a complicated process. A compiler is a software program that translates a high-level source language program into a form ready to execute on a computer. Early in the evolution of compilers, designers introduced IRs (intermediate representations, also commonly called intermediate languages) to manage the complexity of the compilation process. The use of an IR as the compiler’s internal representation of the program enables the compiler to be broken up into multiple phases and components, thus benefiting from modularity.
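
A tiny sketch of the idea in Python (the source expression, IR shape, and helper names are invented for illustration): the front end parses source into a tree, and a middle phase lowers the tree to a simple three-address-code IR that a back end could consume without ever seeing the source language.

    import ast

    def lower(node, code):
        # Lower an arithmetic expression tree to three-address code.
        if isinstance(node, ast.Constant):
            return str(node.value)
        if isinstance(node, ast.Name):
            return node.id
        lhs = lower(node.left, code)
        rhs = lower(node.right, code)
        op = {ast.Add: "+", ast.Mult: "*"}[type(node.op)]
        tmp = f"t{len(code)}"
        code.append(f"{tmp} = {lhs} {op} {rhs}")
        return tmp

    code = []
    lower(ast.parse("a * b + c", mode="eval").body, code)
    print("\n".join(code))
    # t0 = a * b
    # t1 = t0 + c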

November 22, 2013

Topic: Programming Languages

2 comments

The Challenge of Cross-language Interoperability:
Interfacing between languages is increasingly important.

Interoperability between languages has been a problem since the second programming language was invented. Solutions have ranged from language-independent object models such as COM (Component Object Model) and CORBA (Common Object Request Broker Architecture) to VMs (virtual machines) designed to integrate languages, such as JVM (Java Virtual Machine) and CLR (Common Language Runtime). With software becoming ever more complex and hardware less homogeneous, the likelihood of a single language being the correct tool for an entire program is lower than ever. As modern compilers become more modular, there is potential for a new generation of interesting solutions.

November 19, 2013

Topic: Programming Languages

8 comments

Bugs and Bragging Rights:
It’s not always size that matters.

Dear KV, I’ve been dealing with a large program written in Java that seems to spend most of its time asking me to restart it because it has run out of memory.

November 11, 2013

Topic: Development

0 comments

Agile and SEMAT - Perfect Partners:
Combining agile and SEMAT yields more advantages than either one alone

Today, as always, many different initiatives are under way to improve the ways in which software is developed. The most popular and prevalent of these is the agile movement. One of the newer kids on the block is the SEMAT (Software Engineering Method and Theory) initiative. As with any new initiative, people are struggling to see how it fits into the world and relates to all the other things going on. For example, does it improve or replace their current ways of working?

November 5, 2013

Topic: Development

0 comments

Adopting DevOps Practices in Quality Assurance:
Merging the art and science of software development

Software life-cycle management was, for a very long time, a controlled exercise. The duration of product design, development, and support was predictable enough that companies and their employees scheduled their finances, vacations, surgeries, and mergers around product releases. When developers were busy, QA (quality assurance) had it easy. As the coding portion of a release cycle came to a close, QA took over while support ramped up. Then when the product released, the development staff exhaled, rested, and started the loop again while the support staff transitioned to busily supporting the new product.

October 30, 2013

Topic: Quality Assurance

1 comment

Passively Measuring TCP Round-trip Times:
A close look at RTT measurements with TCP

Measuring and monitoring network RTT (round-trip time) is important for multiple reasons: it allows network operators and end users to understand their network performance and help optimize their environment, and it helps businesses understand the responsiveness of their services to sections of their user base.
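
The article’s technique is passive - matching SEQ and ACK numbers in traffic the application already generates. For contrast, the crudest active approximation simply times the TCP three-way handshake from the client; a Python sketch (host and port are examples):

    import socket
    import time

    def handshake_rtt(host, port=443):
        # connect() returns once the SYN and SYN-ACK have crossed the
        # wire, so the elapsed time approximates one round trip. Unlike
        # the passive approach, this injects probe traffic of its own.
        start = time.perf_counter()
        sock = socket.create_connection((host, port), timeout=5)
        rtt = time.perf_counter() - start
        sock.close()
        return rtt

    print(f"{handshake_rtt('www.example.org') * 1000:.1f} ms")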

October 28, 2013

Topic: Networks

2 comments

Leaking Space:
Eliminating memory hogs

A space leak occurs when a computer program uses more memory than necessary. In contrast to memory leaks, where the leaked memory is never released, the memory consumed by a space leak is released, but later than expected. This article presents example space leaks and how to spot and eliminate them.
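
The article’s examples are in Haskell, where lazy evaluation makes such leaks especially easy to create, but the general shape shows up anywhere; a contrived Python sketch of memory held longer than necessary, then released:

    def total_leaky(n):
        # Space leak: the entire list stays alive until the final sum,
        # even though each element is needed only once. The memory is
        # released eventually - just much later than necessary.
        readings = [i * i for i in range(n)]
        return sum(readings)

    def total_tight(n):
        # Same result in constant space: the generator yields one
        # element at a time, so nothing accumulates.
        return sum(i * i for i in range(n))

    assert total_leaky(10**6) == total_tight(10**6)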

October 23, 2013

Topic: Memory

1 comment

Barbarians at the Gateways:
High-frequency Trading and Exchange Technology

I am a former high-frequency trader. For a few wonderful years I led a group of brilliant engineers and mathematicians, and together we traded in the electronic marketplaces and pushed systems to the edge of their capability.

October 16, 2013

Topic: Development

27 comments

Online Algorithms in High-frequency Trading:
The challenges faced by competing HFT algorithms

HFT (high-frequency trading) has emerged as a powerful force in modern financial markets. Only 20 years ago, most of the trading volume occurred in exchanges such as the New York Stock Exchange, where humans dressed in brightly colored outfits would gesticulate and scream their trading intentions. Nowadays, trading occurs mostly in electronic servers in data centers, where computers communicate their trading intentions through network messages. This transition from physical exchanges to electronic platforms has been particularly profitable for HFT firms, which invested heavily in the infrastructure of this new environment.

October 7, 2013

Topic: Development

4 comments

A Lesson in Resource Management:
Waste not memory, want not memory—unless it doesn’t matter

Dear KV, I’ve been reworking a device driver for a high-end, high-performance networking card and I have a resource allocation problem. The devices I’m working with have several network ports, but these are not always in use; in fact, many of our customers use only one of the four available ports. It would greatly simplify the logic in my driver if I could allocate the resources for all the ports -- no matter how many there are -- when the device driver is first loaded into the system, instead of dealing with allocation whenever an administrator brings up an interface.

September 3, 2013

Topic: Memory

4 comments

The Balancing Act of Choosing Nonblocking Features:
Design requirements of nonblocking systems

What is nonblocking progress? Consider the simple example of incrementing a counter C shared among multiple threads. One way to do so is by protecting the steps of incrementing C by a mutual exclusion lock L (i.e., acquire(L); old := C ; C := old+1; release(L);). If a thread P is holding L, then a different thread Q must wait for P to release L before Q can proceed to operate on C. That is, Q is blocked by P.
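
In runnable form (Python here, purely for illustration), the blocking version and the retry-loop shape of a nonblocking one look like this; note that Python exposes no hardware compare-and-swap, so the helper below merely emulates the primitive a real nonblocking counter would rely on:

    import threading

    C = 0
    L = threading.Lock()

    def locked_increment():
        # The blocking version from the text: while one thread holds L,
        # every other incrementing thread must wait.
        global C
        with L:
            old = C
            C = old + 1

    _guard = threading.Lock()
    def compare_and_swap(expected, new):
        # Emulation only: stands in for an atomic CAS instruction.
        global C
        with _guard:
            if C == expected:
                C = new
                return True
            return False

    def nonblocking_increment():
        # Read, compute, attempt one atomic update; retry on interference.
        # No thread ever waits on another thread's progress.
        while True:
            old = C
            if compare_and_swap(old, old + 1):
                return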

August 12, 2013

Topic: Concurrency

0 comments

NUMA (Non-Uniform Memory Access): An Overview:
NUMA is becoming more common as memory controllers move closer to execution units on microprocessors.

NUMA (non-uniform memory access) is the phenomenon that memory at various points in the address space of a processor has different performance characteristics. At current processor speeds, the signal path length from the processor to memory plays a significant role. Increased signal path length not only increases latency to memory but also quickly becomes a throughput bottleneck if the signal path is shared by multiple processors. Differences in memory performance were first noticeable on large-scale systems where data paths spanned motherboards or chassis. These systems required modified operating-system kernels with NUMA support that explicitly understood the topological properties of the system’s memory (such as the chassis in which a region of memory was located) in order to avoid excessively long signal path lengths.

August 9, 2013

Topic: Processors

4 comments

20 Obstacles to Scalability:
Watch out for these pitfalls that can prevent Web application scaling.

Web applications can grow in fits and starts. Customer numbers can increase rapidly, and application usage patterns can vary seasonally. This unpredictability necessitates an application that is scalable. What is the best way of achieving scalability?

August 5, 2013

Topic: Web Development

0 comments

Rules for Mobile Performance Optimization:
An overview of techniques to speed page loading

Performance has always been crucial to the success of Web sites. A growing body of research has proven that even small improvements in page-load times lead to more sales, more ad revenue, more stickiness, and more customer satisfaction for enterprises ranging from small e-commerce shops to megachains such as Walmart.

August 1, 2013

Topic: Web Development

0 comments

More Encryption Is Not the Solution:
Cryptography as privacy works only if both ends work at it in good faith.

The recent exposure of the dragnet-style surveillance of Internet traffic has provoked a number of responses that are variations of the general formula, "More encryption is the solution." This is not the case. In fact, more encryption will probably only make the privacy crisis worse than it already is.

July 30, 2013

Topic: Privacy and Rights

16 comments

Best Practices on the Move: Building Web Apps for Mobile Devices:
Which practices should be modified or avoided altogether by developers for the mobile Web?

If it wasn’t your priority last year or the year before, it’s sure to be your priority now: bring your Web site or service to mobile devices in 2013 or suffer the consequences. Early adopters have been talking about mobile taking over since 1999 - anticipating the trend by only a decade or so. Today, mobile Web traffic is dramatically on the rise, and creating a slick mobile experience is at the top of everyone’s mind. Total mobile data traffic is expected to exceed 10 exabytes per month by 2017.

July 25, 2013

Topic: Web Development

0 comments

The Antifragile Organization:
Embracing Failure to Improve Resilience and Maximize Availability

Failure is inevitable. Disks fail. Software bugs lie dormant waiting for just the right conditions to bite. People make mistakes. Data centers are built on farms of unreliable commodity hardware. If you’re running in a cloud environment, then many of these factors are outside of your control. To compound the problem, failure is not predictable and doesn’t occur with uniform probability and frequency. The lack of a uniform frequency increases uncertainty and risk in the system.

June 27, 2013

Topic: Quality Assurance

3 comments

The Naming of Hosts is a Difficult Matter:
Also, the perils of premature rebooting

The naming of hosts is a difficult matter that ranks with coding style, editor choice, and language preference in the pantheon of things computer people fight about that don’t matter to anyone else in the whole world.

June 18, 2013

Topic: Development

1 comment

Nonblocking Algorithms and Scalable Multicore Programming:
Exploring some alternatives to lock-based synchronization

Real-world systems with complicated quality-of-service guarantees may require a delicate balance between throughput and latency to meet operating requirements in a cost-efficient manner. The increasing availability and decreasing cost of commodity multicore and many-core systems make concurrency and parallelism increasingly necessary for meeting demanding performance requirements. Unfortunately, the design and implementation of correct, efficient, and scalable concurrent software is often a daunting task.

June 11, 2013

Topic: Concurrency

3 comments

Proving the Correctness of Nonblocking Data Structures:
So you’ve decided to use a nonblocking data structure, and now you need to be certain of its correctness. How can this be achieved?

Nonblocking synchronization can yield astonishing results in terms of scalability and realtime response, but at the expense of verification state space.

June 2, 2013

Topic: Concurrency

0 comments

Structured Deferral: Synchronization via Procrastination:
We simply do not have a synchronization mechanism that can enforce mutual exclusion.

Developers often take a proactive approach to software design, especially those from cultures valuing industriousness over procrastination. Lazy approaches, however, have proven their value, with examples including reference counting, garbage collection, and lazy evaluation. This structured deferral takes the form of synchronization via procrastination, specifically reference counting, hazard pointers, and RCU (read-copy-update).
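
Reference counting is the most familiar of the three; a minimal sketch in Python of how it defers reclamation until the last user is done (the class and resource here are invented for illustration):

    import threading

    class Counted:
        # Reclamation is deferred until the last reference is dropped,
        # rather than forced at any fixed point in the program.
        def __init__(self, resource, destroy):
            self._resource, self._destroy = resource, destroy
            self._refs = 1
            self._lock = threading.Lock()

        def acquire(self):
            with self._lock:
                self._refs += 1
            return self._resource

        def release(self):
            with self._lock:
                self._refs -= 1
                last = self._refs == 0
            if last:
                self._destroy(self._resource)   # safe: no users remain

    handle = Counted({"config": 1}, destroy=lambda r: print("freed", r))
    handle.acquire()   # a second user appears
    handle.release()   # first user done; the object survives
    handle.release()   # last user done; only now is it reclaimed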

May 23, 2013

Topic: Concurrency

1 comment

Realtime GPU Audio:
Finite difference-based sound synthesis using graphics processors

Today’s CPUs are capable of supporting realtime audio for many popular applications, but some compute-intensive audio applications require hardware acceleration. This article looks at some realtime sound-synthesis applications and shares the authors’ experiences implementing them on GPUs (graphics processing units).

May 8, 2013

Topic: Processors

4 comments

There’s Just No Getting around It: You’re Building a Distributed System:
Building a distributed system requires a methodical approach to requirements.

Distributed systems are difficult to understand, design, build, and operate. They introduce exponentially more variables into a design than a single machine does, making the root cause of an application problem much harder to discover. It should be said that if an application does not have meaningful SLAs (service-level agreements) and can tolerate extended downtime and/or performance degradation, then the barrier to entry is greatly reduced. Most modern applications, however, have an expectation of resiliency from their users, and SLAs are typically measured by "the number of nines" (e.g., 99.9 or 99.99 percent availability per month).
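
The "number of nines" translates directly into a downtime budget, which is worth computing before committing to an SLA; for example:

    # Downtime budget implied by an availability SLA, per 30-day month.
    MONTH_SECONDS = 30 * 24 * 60 * 60

    for availability in (0.999, 0.9999):
        downtime_min = MONTH_SECONDS * (1 - availability) / 60
        print(f"{availability:.2%} -> {downtime_min:.1f} minutes/month")

    # 99.90% -> 43.2 minutes/month
    # 99.99% -> 4.3 minutes/month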

May 3, 2013

Topic: Distributed Computing

4 comments

Resolved: the Internet Is No Place for Critical Infrastructure:
Risk is a necessary consequence of dependence

What is critical? To what degree is critical defined as a matter of principle, and to what degree is it defined operationally? I am distinguishing what we say from what we do.

April 26, 2013

Topic: Security

0 comments

Cherry-picking and the Scientific Method:
Software is supposed to be a part of computer science, and science demands proof.

So while haggling with the cherry seller, it became obvious that buying a whole flat of cherries would be a better deal than buying a single basket, even though that was all we really wanted. Not wanting to pass up a deal, however, my friend bought the entire flat and off we went, eating and talking. It took another 45 minutes to get home, and during that time we had eaten more than half the flat of cherries.

April 22, 2013

Topic: Development

1 comment

A File System All Its Own:
Flash memory has come a long way. Now it’s time for software to catch up.

In the past five years, flash memory has progressed from a promising accelerator, whose place in the data center was still uncertain, to an established enterprise component for storing performance-critical data. Its rise to prominence followed its proliferation in the consumer world and the volume economics that followed (see figure 1). With SSDs (solid-state drives), flash arrived in a form optimized for compatibility - just replace a hard drive with an SSD for radically better performance. But the properties of the NAND flash memory used by SSDs differ significantly from those of the magnetic media in the hard drives they often displace.

April 13, 2013

Topic: Memory

2 comments

Eventual Consistency Today: Limitations, Extensions, and Beyond:
How can applications be built on eventually consistent infrastructure given no guarantee of safety?

In a July 2000 conference keynote, Eric Brewer, now VP of engineering at Google and a professor at the University of California, Berkeley, publicly postulated the CAP (consistency, availability, and partition tolerance) theorem, which would change the landscape of how distributed storage systems were architected. Brewer’s conjecture - based on his experiences building infrastructure for some of the first Internet search engines at Inktomi - states that distributed systems requiring always-on, highly available operation cannot guarantee the illusion of coherent, consistent single-system operation in the presence of network partitions, which cut communication between active servers.

April 9, 2013

Topic: Databases

1 comment

Discrimination in Online Ad Delivery:
Google ads, black names and white names, racial discrimination, and click advertising

Do online ads suggestive of arrest records appear more often with searches of black-sounding names than white-sounding names? What is a black-sounding name or white-sounding name, anyway? How many more times would an ad have to appear adversely affecting one racial group for it to be considered discrimination? Is online activity so ubiquitous that computer scientists have to think about societal consequences such as structural racism in technology design? If so, how is this technology to be built? Let’s take a scientific dive into online ad delivery to find answers.

April 2, 2013

Topic: Search Engines

1 comment

How Fast is Your Web Site?:
Web site performance data has never been more readily available.

The overwhelming evidence indicates that a Web site’s performance (speed) correlates directly to its success, across industries and business metrics. With such a clear correlation (and even proven causation), it is important to monitor how your Web site performs. So, how fast is your Web site?

March 4, 2013

Topic: Performance

2 comments

FPGA Programming for the Masses:
The programmability of FPGAs must improve if they are to be part of mainstream computing.

When looking at how hardware influences computing performance, we have GPPs (general-purpose processors) on one end of the spectrum and ASICs (application-specific integrated circuits) on the other. Processors are highly programmable but often inefficient in terms of power and performance. ASICs implement a dedicated and fixed function and provide the best power and performance characteristics, but any functional change requires a complete (and extremely expensive) re-spinning of the circuits.

February 23, 2013

Topic: Processors

8 comments

The Evolution of Web Development for Mobile Devices:
Building Web sites that perform well on mobile devices remains a challenge.

The biggest change in Web development over the past few years has been the remarkable rise of mobile computing. Mobile phones used to be extremely limited devices that were best used for making phone calls and sending short text messages. Today’s mobile phones are more powerful than the computers that took Apollo 11 to the moon, and they can send data to and from nearly anywhere.

February 17, 2013

Topic: Web Development

1 comment

Swamped by Automation:
Whenever someone asks you to trust them, don’t.

So your group fell for the "just install this software and things will be great" ploy. It’s an old trick that continues to snag sysadmins and others who have supporting roles around developers. Whenever someone asks you to trust them, don’t. Cynical as that might be, it’s better than being suckered.

February 12, 2013

Topic: Development

1 comment

The Story of the Teapot in DHTML:
It’s easy to do amazing things, such as rendering the classic teapot in HTML and CSS.

Before there was SVG (Scalable Vector Graphics), WebGL (Web Graphics Library), Canvas, or much of anything for graphics in the browser, it was possible to do quite a lot more than was initially obvious. To demonstrate, we created a JavaScript program that renders polygonal 3D graphics using nothing more than HTML and CSS. Our proof-of-concept is fast enough to support physics-based small-game content, but we started with the iconic 3D "Utah teapot" because it tells the whole story in one picture. It’s feasible to render this classic object using just regular DIV elements, CSS styles, and a little bit of JavaScript code.

February 11, 2013

Topic: Web Development

2 comments

Making the Mobile Web Faster:
Mobile performance issues? Fix the back end, not just the client.

Mobile clients have been on the rise and will only continue to grow. This means that if you are serving clients over the Internet, you cannot ignore the customer experience on a mobile device. There are many informative articles on mobile performance, and just as many on general API design, but you’ll find few discussing the design considerations needed to optimize the back-end systems for mobile clients. Whether you have an app, mobile Web site, or both, it is likely that these clients are consuming APIs from your back-end systems.

January 31, 2013

Topic: Web Development

3 comments

Hazy: Making it Easier to Build and Maintain Big-data Analytics:
Racing to unleash the full potential of big data with the latest statistical and machine-learning techniques.

The rise of big data presents both big opportunities and big challenges in domains ranging from enterprises to sciences. The opportunities include better-informed business decisions, more efficient supply-chain management and resource allocation, more effective targeting of products and advertisements, better ways to "organize the world’s information," faster turnaround of scientific discoveries, etc.

January 23, 2013

Topic: Data

0 comments

A Decade of OS Access-control Extensibility:
Open source security foundations for mobile and embedded devices

To discuss operating system security is to marvel at the diversity of deployed access-control models: Unix and Windows NT multiuser security; Type Enforcement in SELinux; anti-malware products; app sandboxing in Apple OS X, Apple iOS, and Google Android; and application-facing systems such as Capsicum in FreeBSD. This diversity is the result of a stunning transition from the narrow 1990s Unix and NT status quo to "security localization" - the adaptation of operating-system security models to site-local or product-specific requirements.

January 18, 2013

Topic: Security

2 comments

Divided by Division:
Is there a best used-by date for software?

Do you know of any rule of thumb for how often a piece of software should need maintenance? I’m not thinking about bug fixes, since bugs are there from the moment the code is written, but about the constant refactoring that seems to go on in code. Sometimes I feel as if programmers use refactoring as a way of keeping their jobs, rather than offering any real improvement.

January 10, 2013

Topic: Development

3 comments

Rethinking Passwords:
Our authentication system is lacking. Is improvement possible?

There is an authentication plague upon the land. We have to claim and assert our identity repeatedly to a host of authentication trolls, each jealously guarding an Internet service of some sort. Each troll has specific rules for passwords, and the rules vary widely and incomprehensibly.

December 31, 2012

Topic: Security

6 comments

Thinking Methodically about Performance:
The USE method addresses shortcomings in other commonly used methodologies.

Performance issues can be complex and mysterious, providing little or no clue to their origin. In the absence of a starting point, performance issues are often analyzed randomly: guessing where the problem may be and then changing things until it goes away. While this can deliver results, it can also be time-consuming and disruptive, and it may ultimately overlook certain issues. This article describes system-performance issues and the methodologies in use today for analyzing them, and it proposes a new methodology for approaching and solving a class of issues.
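
The method's name is its checklist: for every resource, examine Utilization, Saturation, and Errors. A minimal sketch in Python; the metric names and the 90-percent threshold below are illustrative, not from the article.

    # For each resource, check Utilization, Saturation, and Errors.
    RESOURCES = ("cpu", "memory", "disk", "network")

    def use_checklist(metrics):
        findings = []
        for res in RESOURCES:
            m = metrics.get(res, {})
            if m.get("utilization", 0.0) > 0.9:
                findings.append(f"{res}: high utilization")
            if m.get("saturation", 0.0) > 0.0:
                findings.append(f"{res}: saturated (work is queueing)")
            if m.get("errors", 0) > 0:
                findings.append(f"{res}: errors reported")
        return findings

    print(use_checklist({"cpu": {"utilization": 0.97, "saturation": 3.0}}))
    # ['cpu: high utilization', 'cpu: saturated (work is queueing)']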

December 11, 2012

Topic: Performance

0 comments

Code Abuse:
One programmer’s extension is another programmer’s abuse.

During some recent downtime at work, I’ve been cleaning up a set of libraries, removing dead code, updating documentation blocks, and fixing minor bugs that have been annoying but not critical. This bit of code spelunking has revealed how some of the libraries have been not only used, but also abused. The fact that everyone and their sister use the timing library for just about any event they can think of isn’t so bad, as it is a library that’s meant to call out to code periodically (although some of the events seem as if they don’t need to be events at all).

December 5, 2012

Topic: Code

0 comments

Splinternet Behind the Great Firewall of China:
Once China opened its door to the world, it could not close it again.

What if you could not access YouTube, Facebook, Twitter, and Wikipedia? How would you feel if Google informed you that your connection had been reset during a search? What if Gmail was only periodically available, and Google Docs, which was used to compose this article, was completely unreachable? What a mess!

November 30, 2012

Topic: Web Security

4 comments

Browser Security Case Study: Appearances Can Be Deceiving:
A discussion with Jeremiah Grossman, Ben Livshits, Rebecca Bace, and George Neville-Neil

It seems every day we learn of some new security breach. It’s all there for the taking on the Internet: more and more sensitive data every second. As for privacy, we Facebook, we Google, we bank online, we shop online, we invest online... we put it all out there. And just how well protected is all that personally identifiable information? Not very.

November 20, 2012

Topic: Web Security

1 comment

Condos and Clouds:
Constraints in an environment empower the services.

Living in a condominium has its constraints and its services. By defining the lifestyle and limits on usage patterns, it is possible to pack many homes close together and to provide the residents with many conveniences. Condo living can offer a great value to those interested and willing to live within its constraints and enjoy the sharing of common services.

November 14, 2012

Topic: Distributed Computing

0 comments

The Web Won’t Be Safe or Secure until We Break It:
Unless you’ve taken very particular precautions, assume every Web site you visit knows exactly who you are.

The Internet was designed to deliver information, but few people envisioned the vast amounts of information that would be involved or the personal nature of that information. Similarly, few could have foreseen the potential flaws in the design of the Internet that would expose this personal information, compromising the data of individuals and companies.

November 6, 2012

Topic: Web Security

14 comments

The Essence of Software Engineering: The SEMAT Kernel:
A thinking framework in the form of an actionable kernel

Everyone who develops software knows that it is a complex and risky business, and its participants are always on the lookout for new ideas that will lead to better software. Fortunately, software engineering is still a young and growing profession that sees innovations and improvements in best practices every year. Just look, for example, at the improvements and benefits that lean and agile thinking have brought to software-development teams.

October 24, 2012

Topic: Development

8 comments

Anatomy of a Solid-state Drive:
While the ubiquitous SSD shares many features with the hard-disk drive, under the surface they are completely different.

Over the past several years, a new type of storage device has entered laptops and data centers, fundamentally changing expectations regarding the power, size, and performance dynamics of storage. The SSD (solid-state drive) is a technology that has been around for more than 30 years but remained too expensive for broad adoption.

October 17, 2012

Topic: File Systems and Storage

8 comments

Queue Portrait: Robert Watson:
Queue’s Kode Vicious interviews Robert Watson, a security researcher and open source developer at the University of Cambridge, about a project studying the boundaries of the hardware/software interface.

Robert Watson is a security researcher and open source developer at the University of Cambridge looking at the hardware-software interface. He talks to us about spanning industry and academia, the importance of open source in software research, and challenges facing research that spans traditional boundaries in computer science. We also learn a bit about CPU security, and why applications, rather than operating systems, are increasingly the focus of security research. What are the challenges in the evolving hardware-software interface? Could open source hardware provide a platform for hardware-software research? And why is current hardware part of the problem?

October 12, 2012

Topic: Open Source

3 comments

Sender-side Buffers and the Case for Multimedia Adaptation:
A proposal to improve the performance and availability of streaming video and other time-sensitive media

The Internet/Web architecture has developed to the point where it is common for the most popular sites to operate at a virtually unlimited scale, and many sites now cater to hundreds of millions of unique users. Performance and availability are generally essential to attract and sustain such user bases. As such, the network and server infrastructure plays a critical role in the fierce competition for users. Web pages should load in tens to a few hundred milliseconds at most.

October 11, 2012

Topic: Web Services

0 comments

Weathering the Unexpected:
Failures happen, and resilience drills help organizations prepare for them.

Whether it is a hurricane blowing down power lines, a volcanic-ash cloud grounding all flights for a continent, or a humble rodent gnawing through underground fibers -- the unexpected happens. We cannot do much to prevent it, but there is a lot we can do to be prepared for it. To this end, Google runs an annual, company-wide, multi-day Disaster Recovery Testing event -- DiRT -- the objective of which is to ensure that Google’s services and internal business operations continue to run following a disaster.

September 16, 2012

Topic: Quality Assurance

0 comments

Resilience Engineering: Learning to Embrace Failure:
A discussion with Jesse Robbins, Kripa Krishnan, John Allspaw, and Tom Limoncelli

In the early 2000s, Amazon created GameDay, a program designed to increase resilience by purposely injecting major failures into critical systems semi-regularly to discover flaws and subtle dependencies. Basically, a GameDay exercise tests a company’s systems, software, and people in the course of preparing for a response to a disastrous event. Widespread acceptance of the GameDay concept has taken a few years, but many companies now see its value and have started to adopt their own versions. This discussion considers some of those experiences.

September 13, 2012

Topic: Quality Assurance

6 comments

Disks from the Perspective of a File System:
Disks lie. And the controllers that run them are partners in crime.

Most applications do not deal with disks directly, instead storing their data in files in a file system, which protects us from those scoundrel disks. After all, a key task of the file system is to ensure that the file system can always be recovered to a consistent state after an unplanned system crash (for example, a power failure). While a good file system will be able to beat the disks into submission, the required effort can be great and the reduced performance annoying.

September 6, 2012

Topic: File Systems and Storage

14 comments

Toward Higher Precision:
An introduction to PTP and its significance to NTP practitioners

It is difficult to overstate the importance of synchronized time to modern computer systems. Our lives today depend on the financial transactions, telecommunications, power generation and delivery, high-speed manufacturing, and discoveries in "big physics," among many other things, that are driven by fast, powerful computing devices coordinated in time with each other.

August 27, 2012

Topic: Networks

1 comment

Fault Injection in Production:
Making the case for resilience testing

When we build Web infrastructures at Etsy, we aim to make them resilient. This means designing them carefully so that they can sustain their (increasingly critical) operations in the face of failure. Thankfully, there have been a couple of decades and reams of paper spent on researching how fault tolerance and graceful degradation can be brought to computer systems. That helps the cause.

August 24, 2012

Topic: Quality Assurance

1 comment

A Generation Lost in the Bazaar:
Quality happens only when someone is responsible for it.

Thirteen years ago, Eric Raymond’s book "The Cathedral and the Bazaar" (O’Reilly Media, 2001) redefined our vocabulary and all but promised an end to the waterfall model and big software companies, thanks to the new grass-roots open source software development movement. I found the book thought provoking, but it did not convince me. On the other hand, being deeply involved in open source, I couldn’t help but think that it would be nice if he was right.

August 15, 2012

Topic: Development

152 comments

Can More Code Mean Fewer Bugs?:
The bytes you save today may bite you tomorrow

Dear One, You almost had me with your appeal to simplicity, that having a single line with system() on it reduces the potential for bugs. Almost, but not quite.

August 8, 2012

Topic: Code

1 comment

All Your Database Are Belong to Us:
In the big open world of the cloud, highly available distributed objects will rule.

In the database world, the raw physical data model is at the center of the universe, and queries freely assume intimate details of the data representation (indexes, statistics, metadata). This closed-world assumption and the resulting lack of abstraction have the pleasant effect of allowing the data to outlive the application. On the other hand, this makes it hard to evolve the underlying model independently from the queries over the model.

July 23, 2012

Topic: Databases

5 comments

Software Needs Seatbelts and Airbags:
Finding and fixing bugs in deployed software is difficult and time-consuming. Here are some alternatives.

Like death and taxes, buggy code is an unfortunate fact of life. Nearly every program ships with known bugs, and probably all of them end up with bugs that are discovered only post-deployment. There are many reasons for this sad state of affairs.

July 16, 2012

Topic: Patching and Deployment

1 comment

A New Objective-C Runtime: from Research to Production:
Backward compatibility always trumps new features.

The path from the research prototype (Étoilé runtime) to the shipping version (GNUstep runtime) involved a complete rewrite and redesign. This isn’t necessarily a bad thing: part of the point of building a prototype is to learn what makes sense and what doesn’t, and to investigate what is feasible in a world where you control the entire system, but not necessarily in production.

July 11, 2012

Topic: Programming Languages

0 comments

Multitier Programming in Hop:
A first step toward programming 21st-century applications

The Web is becoming the richest platform on which to create computer applications. Its power comes from three elements: (1) modern Web browsers enable highly sophisticated GUIs with 3D, multimedia, fancy typesetting, etc.; (2) calling existing services through Web APIs makes it possible to develop sophisticated applications from independently available components; and (3) open data availability allows applications to access a wide set of information that was unreachable or that simply did not exist before. The combination of these three elements has already given birth to revolutionary applications such as Google Maps, radio podcasts, and social networks.

July 9, 2012

Topic: Web Development

0 comments

OpenFlow: A Radical New Idea in Networking:
An open standard that enables software-defined networking

Computer networks have historically evolved box by box, with individual network elements occupying specific ecological niches as routers, switches, load balancers, NATs (network address translators), or firewalls. Software-defined networking proposes to overturn that ecology, turning the network as a whole into a platform and the individual network elements into programmable entities. The apps running on the network platform can optimize traffic flows to take the shortest path, just as the current distributed protocols do, but they can also optimize the network to maximize link utilization, create different reachability domains for different users, or make device mobility seamless.

June 20, 2012

Topic: Networks

5 comments

Extending the Semantics of Scheduling Priorities:
Increasing parallelism demands new paradigms.

Application performance is directly affected by the hardware resources that the application requires, the degree to which such resources are available, and how the operating system addresses its requirements with regard to the other processes in the system. Ideally, an application would have access to all the resources it could use and be allowed to complete its work without competing with any other activity in the system. In a world of highly shared hardware resources and general-purpose, time-share-based operating systems, however, no guarantees can be made as to how well resourced an application will be.

June 14, 2012

Topic: Performance

0 comments

LinkedIn Password Leak: Salt Their Hide:
If it does not take a full second to calculate the password hash, it is too weak.

6.5 million unsalted, SHA1-hashed LinkedIn passwords have appeared in the criminal underground. There are two words in that sentence that should cause LinkedIn no end of concern: "unsalted" and "SHA1."
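
The remedy the title and subtitle point to is well established: a per-user random salt plus a deliberately slow key-derivation function. A minimal sketch in Python, assuming PBKDF2 and an iteration count you would calibrate until one hash takes on the order of a second on your own hardware:

    import hashlib, hmac, os

    ITERATIONS = 600_000  # calibrate upward until one hash takes about a second

    def hash_password(password):
        salt = os.urandom(16)  # a unique random salt per user defeats precomputed tables
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, digest)  # constant-time comparison

    salt, digest = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, digest)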

June 7, 2012

Topic: Security

36 comments

A Nice Piece of Code:
Colorful metaphors and properly reusing functions

In the last installment of Kode Vicious (A System is not a Product, ACM Queue 10 (4), April 2012), I mentioned that I had recently read two pieces of code that had actually lowered, rather than raised, my blood pressure. As promised, this edition’s KV covers that second piece of code.

June 5, 2012

Topic: Code

2 comments

Getting What You Measure:
Four common pitfalls in using software metrics for project management

Software metrics - helpful tools or a waste of time? For every developer who treasures these mathematical abstractions of software systems there is a developer who thinks software metrics are invented just to keep project managers busy. Software metrics can be very powerful tools that help achieve your goals, but it is important to use them correctly, as they also have the power to demotivate project teams and steer development in the wrong direction.

May 29, 2012

Topic: Workflow Systems

1 comment

My Compiler Does Not Understand Me:
Until our programming languages catch up, code will be full of horrors.

Only lately have a lot of smart people found audiences for making sound points about what and how we code. Various colleagues have been beating drums and heads together for ages trying to make certain that wise insights about programming stick to neurons. Articles on coding style in this and other publications have provided further examples of such advocacy.

May 21, 2012

Topic: Code

6 comments

Modeling People and Places with Internet Photo Collections:
Understanding the world from the sea of online photos

This article describes our work in using online photo collections to reconstruct information about the world and its inhabitants at both global and local scales. This work has been driven by the dramatic growth of social content-sharing Web sites, which have created immense online collections of user-generated visual data. Flickr.com alone currently hosts more than 6 billion images taken by more than 40 million unique users, while Facebook.com has said it grows by nearly 250 million photos every day.

May 11, 2012

Topic: Graphics

6 comments

Controlling Queue Delay:
A modern AQM is just one piece of the solution to bufferbloat.

Nearly three decades after it was first diagnosed, the "persistently full buffer problem," recently exposed as part of "bufferbloat," is still with us and is made increasingly critical by two trends. First, cheap memory and a "more is better" mentality have led to the inflation and proliferation of buffers. Second, dynamically varying path characteristics are much more common today and are the norm at the consumer Internet edge. Reasonably sized buffers become extremely oversized when link rates and path delays fall below nominal values.

May 6, 2012

Topic: Networks

16 comments

A Guided Tour through Data-center Networking:
A good user experience depends on predictable performance within the data-center network.

The magic of the cloud is that it is always on and always available from anywhere. Users have come to expect that services are there when they need them. A data center (or warehouse-scale computer) is the nexus from which all the services flow. It is often housed in a nondescript warehouse-sized building bearing no indication of what lies inside. Amidst the whirring fans and refrigerator-sized computer racks is a tapestry of electrical cables and fiber optics weaving everything together -- the data-center network.

May 3, 2012

Topic: Networks

0 comments

Realtime Computer Vision with OpenCV:
Mobile computer-vision technology will soon become as ubiquitous as touch interfaces.

Computer vision is a rapidly growing field devoted to analyzing, modifying, and high-level understanding of images. Its objective is to determine what is happening in front of a camera and use that understanding to control a computer or robotic system, or to provide people with new images that are more informative or aesthetically pleasing than the original camera images. Application areas for computer-vision technology include video surveillance, biometrics, automotive, photography, movie production, Web search, medicine, augmented reality gaming, new user interfaces, and many more.

April 22, 2012

Topic: HCI

9 comments

Idempotence Is Not a Medical Condition:
An essential property for reliable systems

The definition of distributed computing can be confusing. Sometimes, it refers to a tightly coupled cluster of computers working together to look like one larger computer. More often, however, it refers to a bunch of loosely related applications chattering together without a lot of system-level support. This lack of support in distributed computing environments makes it difficult to write applications that work together. Messages sent between systems do not have crisp guarantees for delivery. They can get lost, and so, after a timeout, they are retried. The application on the other side of the communication may see multiple messages arrive where one was intended.
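
A standard defense, and one way to read the article's title: make the receiver idempotent by remembering which message IDs it has already applied, so a retried delivery becomes a no-op. A minimal sketch in Python; the ledger and message names are illustrative, not from the article:

    processed = set()
    ledger = {"alice": 100}

    def handle(message_id, account, amount):
        """Apply a credit exactly once per message ID, even across retries."""
        if message_id in processed:
            return  # duplicate delivery after a timeout and retry: a no-op
        ledger[account] = ledger.get(account, 0) + amount
        processed.add(message_id)

    handle("msg-42", "alice", 50)
    handle("msg-42", "alice", 50)  # the retried duplicate changes nothing
    assert ledger["alice"] == 150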

April 14, 2012

Topic: Web Development

0 comments

A System is not a Product:
Stopping to smell the code before wasting time reentering configuration data

Every once in a while, I come across a piece of good code and like to take a moment to recognize this fact, if only to keep my blood pressure low before my yearly medical checkup. The first such piece of code to catch my eye was clocksource.h in Linux. Linux interfaces with hardware clocks, such as the crystal on a motherboard, through a set of structures that are put together like a set of Russian dolls.

April 12, 2012

Topic: Code

2 comments

CPU DB: Recording Microprocessor History:
With this open database, you can mine microprocessor trends over the past 40 years.

In November 1971, Intel introduced the world’s first single-chip microprocessor, the Intel 4004. It had 2,300 transistors, ran at a clock speed of up to 740 KHz, and delivered 60,000 instructions per second while dissipating 0.5 watts. The following four decades witnessed exponential growth in compute power, a trend that has enabled applications as diverse as climate modeling, protein folding, and computing real-time ballistic trajectories of angry birds.

April 6, 2012

Topic: Processors

13 comments

Your Mouse is a Database:
Web and mobile applications are increasingly composed of asynchronous and realtime streaming services and push notifications.

Among the hottest buzzwords in the IT industry these days is "big data," but the "big" is something of a misnomer: big data is not just about volume, but also about velocity and variety. The volume of data ranges from a small number of items stored in the closed world of a conventional RDBMS (relational database management system) to a large number of items spread out over a large cluster of machines or across the entire World Wide Web.

March 27, 2012

Topic: Web Development

1 comment

Managing Technical Debt:
Shortcuts that save money and time today can cost you down the road.

In 1992, Ward Cunningham published a report at OOPSLA (Object-oriented Programming, Systems, Languages, and Applications) in which he proposed the concept of technical debt. He defines it in terms of immature code: "Shipping first-time code is like going into debt." Technical debt isn’t limited to first-time code, however. There are many ways and reasons (not all bad) to take on technical debt.

March 23, 2012

Topic: Development

2 comments

Scale Failure:
Using a tool for the wrong job is OK until the day when it isn’t.

Dear KV, I have been digging into a network-based logging system at work because, from time to time, the system jams up, even when there seems to be no good reason for it to do so. What I found would be funny, if only it weren’t my job to fix it: the central dispatcher for the entire logging system is a simple for loop around a pair of read and write calls; the for loop takes input from one of a set of file descriptors and sends output to one of another set of file descriptors. The system works fine so long as none of the remote readers or writers ever blocks, and normally that’s not a problem.
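
The usual cure for this failure mode is to stop looping over blocking reads and writes and instead service only descriptors that are actually ready. A minimal sketch using Python's selectors module - not the correspondent's actual dispatcher, just the shape of the fix:

    import selectors
    import socket

    sel = selectors.DefaultSelector()

    def run_dispatcher(listener):
        listener.setblocking(False)
        sel.register(listener, selectors.EVENT_READ)
        while True:
            # Service only descriptors that are ready, so one stalled
            # reader or writer cannot jam the entire loop.
            for key, _events in sel.select(timeout=1.0):
                if key.fileobj is listener:
                    conn, _addr = listener.accept()
                    conn.setblocking(False)
                    sel.register(conn, selectors.EVENT_READ)
                else:
                    data = key.fileobj.recv(4096)
                    if data:
                        print("log record:", data)  # stand-in for real dispatch
                    else:
                        sel.unregister(key.fileobj)
                        key.fileobj.close()

    # run_dispatcher(socket.create_server(("127.0.0.1", 9514)))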

February 21, 2012

Topic: Tools

1 comment

Interactive Dynamics for Visual Analysis:
A taxonomy of tools that support the fluent and flexible use of visualizations

The increasing scale and availability of digital data provides an extraordinary resource for informing public policy, scientific discovery, business strategy, and even our personal lives. To get the most out of such data, however, users must be able to make sense of it: to pursue questions, uncover patterns of interest, and identify (and potentially correct) errors. In concert with data-management systems and statistical algorithms, analysis requires contextualized human judgments regarding the domain-specific significance of the clusters, trends, and outliers discovered in data.

February 20, 2012

Topic: Graphics

3 comments

Why LINQ Matters: Cloud Composability Guaranteed:
The benefits of composability are becoming clear in software engineering.

In this article we use LINQ (Language-integrated Query) as the guiding example of composability. LINQ is a specification of higher-order operators designed specifically to be composable. This specification is broadly applicable over anything that fits a loose definition of "collection," from objects in memory to asynchronous data streams to resources distributed in the cloud. With such a design, developers build up complexity by chaining together transforms and filters in various orders and by nesting the chains--that is, by building expression trees of operators.
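
LINQ itself is a .NET feature, but the composability being described shows up in any language with higher-order operators. A rough analogy in Python, chaining a filter into a transform over a collection:

    # Each stage is itself an operator over any iterable; stages compose
    # and nest freely, which is the property LINQ formalizes.
    evens = filter(lambda n: n % 2 == 0, range(10))
    squares = map(lambda n: n * n, evens)
    print(list(squares))  # [0, 4, 16, 36, 64]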

February 14, 2012

Topic: Programming Languages

1 comment

Home Bufferbloat Demonstration Videos:
Under common loads, your real Internet "speed" can easily drop by a factor of ten due to bufferbloat.

While bufferbloat is regularly present in computers and routers throughout the Internet, we frequently suffer its effects most directly at home--and it is at home where it can easily be investigated. The videos presented here demonstrate two instances of "typical" bufferbloat found in ordinary, modern broadband equipment and home routers.

February 5, 2012

Topic: Networks

0 comments

The Hyperdimensional Tar Pit:
Make a guess, double the number, and then move to the next larger unit of time.

When I started in computing more than a quarter of a century ago, a kind elder colleague gave me a rule of thumb for estimating when I would have finished a task properly: make a guess, double the number, and then move to the next larger unit of time. This rule scales tasks in a very interesting way: a one-minute task explodes by a factor of 120 to take two hours. A one-hour job explodes by "only" a factor of 48 to take two days, while a one-day job grows by a factor of 14 to take two weeks.
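
The rule is mechanical enough to state in code. A toy rendering in Python, with the unit ladder spelled out:

    NEXT_UNIT = {"minutes": "hours", "hours": "days",
                 "days": "weeks", "weeks": "months"}

    def realistic_estimate(amount, unit):
        """Double the guess, then move to the next larger unit of time."""
        return f"{amount * 2} {NEXT_UNIT[unit]}"

    print(realistic_estimate(1, "minutes"))  # 2 hours -- a factor of 120
    print(realistic_estimate(1, "hours"))    # 2 days  -- a factor of 48
    print(realistic_estimate(1, "days"))     # 2 weeks -- a factor of 14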

January 23, 2012

Topic: Code

2 comments

Revisiting Network I/O APIs: The netmap Framework:
It is possible to achieve huge performance improvements in the way packet processing is done on modern operating systems.

Today 10-gigabit interfaces are used more and more in datacenters and servers. On these links, packets flow as fast as one every 67.2 nanoseconds, yet modern operating systems can take 10-20 times longer just to move one packet between the wire and the application. We can do much better, not with more powerful hardware but by revising architectural decisions made long ago regarding the design of device drivers and network stacks.
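
The 67.2-nanosecond figure is straightforward framing arithmetic: a minimum-size Ethernet frame occupies 64 bytes, plus an 8-byte preamble and a 12-byte inter-frame gap, for 84 bytes (672 bits) on the wire:

    bits_per_min_frame = (64 + 8 + 12) * 8     # 672 bits on the wire
    link_rate_bps = 10e9                       # 10 Gbit/s
    print(bits_per_min_frame / link_rate_bps)  # 6.72e-08 s, i.e., 67.2 ns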

January 17, 2012

Topic: API Design

17 comments

SAGE: Whitebox Fuzzing for Security Testing:
SAGE has had a remarkable impact at Microsoft.

Most ACM Queue readers might think of "program verification research" as mostly theoretical with little impact on the world at large. Think again. If you are reading these lines on a PC running some form of Windows (like 93-plus percent of PC users--that is, more than a billion people), then you have been affected by this line of work--without knowing it, which is precisely the way we want it to be.

January 11, 2012

Topic: Security

0 comments

The Network Protocol Battle:
A tale of hubris and zealotry

Dear KV, I’ve been working on a personal project that involves creating a new network protocol. Out of curiosity, I tried to find out what would be involved in getting an official protocol number assigned for my project and discovered that it could take a year and could mean a lot of back and forth with the powers that be at the IETF. I knew this wouldn’t be as simple as clicking something on a Web page, but a year seems excessive, and really it’s not a major part of the work, so it seems like this would mainly be a distraction.

January 5, 2012

Topic: Networks

24 comments

You Don’t Know Jack about Shared Variables or Memory Models:
Data races are evil.

A Google search for "Threads are evil" generates 18,000 hits, but threads are ubiquitous. Almost all of the processes running on a modern Windows PC use them. Software threads are typically how programmers get machines with multiple cores to work together to solve problems faster. And often they are what allow user interfaces to remain responsive while the application performs a background calculation.
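
A minimal sketch of the hazard in Python, where (in CPython) even the global interpreter lock does not make a read-modify-write sequence atomic; delete the lock and the final count can silently come up short:

    import threading

    counter = 0
    lock = threading.Lock()

    def increment(n):
        global counter
        for _ in range(n):
            with lock:        # without this, the load/add/store interleaves
                counter += 1  # across threads and updates are silently lost

    threads = [threading.Thread(target=increment, args=(100_000,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    assert counter == 400_000  # guaranteed only because of the lock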

December 28, 2011

Topic: Memory

1 comment

Advances and Challenges in Log Analysis:
Logs contain a wealth of information for help in managing systems.

Computer-system logs provide a glimpse into the states of a running system. Instrumentation occasionally generates short messages that are collected in a system-specific log. The content and format of logs can vary widely from one system to another and even among components within a system. A printer driver might generate messages indicating that it had trouble communicating with the printer, while a Web server might record which pages were requested and when.

December 20, 2011

Topic: System Administration

0 comments

Code Rototilling:
KV hates unnecessary work.

Dear KV, Whenever a certain programmer I work with needs to add a variable to a function and the name collides with a previously used name, he changes all of the previous instances to a new different name so that he can reuse the name himself. This causes his diffs to be far larger than they need to be and annoys the hell out of me. Whenever I challenge him on this, he says that the old usage was wrong, anyway, but I think that’s just him making an excuse.

December 14, 2011

Topic: Code

1 comment

BufferBloat: What’s Wrong with the Internet?:
A discussion with Vint Cerf, Van Jacobson, Nick Weaver, and Jim Gettys

Internet delays are now as common as they are maddening. That means they end up affecting system engineers just like all the rest of us. And when system engineers get irritated, they often go looking for what’s at the root of the problem. Take Jim Gettys, for example. His slow home network had repeatedly proved to be the source of considerable frustration, so he set out to determine what was wrong, and he even coined a term for what he found: bufferbloat.

December 7, 2011

Topic: Networks

16 comments

Bufferbloat: Dark Buffers in the Internet:
Networks without effective AQM may again be vulnerable to congestion collapse.

Today’s networks are suffering from unnecessary latency and poor system performance. The culprit is bufferbloat, the existence of excessively large and frequently full buffers inside the network. Large buffers have been inserted all over the Internet without sufficient thought or testing. They damage or defeat the fundamental congestion-avoidance algorithms of the Internet’s most common transport protocol. Long delays from bufferbloat are frequently attributed incorrectly to network congestion, and this misinterpretation of the problem leads to the wrong solutions being proposed.

November 29, 2011

Topic: Networks

17 comments

I/O Virtualization:
Decoupling a logical device from its physical implementation offers many compelling advantages.

The term virtual is heavily overloaded, evoking everything from virtual machines running in the cloud to avatars running across virtual worlds. Even within the narrower context of computer I/O, virtualization has a long, diverse history, exemplified by logical devices that are deliberately separate from their physical instantiations.

November 22, 2011

Topic: Virtualization

0 comments

Creating Languages in Racket:
Sometimes you just have to make a better mousetrap.

Choosing the right tool for a simple job is easy: a screwdriver is usually the best option when you need to change the battery in a toy, and grep is the obvious choice to check for a word in a text document. For more complex tasks, the choice of tool is rarely so straightforward--all the more so for a programming task, where programmers have an unparalleled ability to construct their own tools. Programmers frequently solve programming problems by creating new tool programs, such as scripts that generate source code from tables of data.

November 9, 2011

Topic: Programming Languages

0 comments

Coding Guidelines: Finding the Art in the Science:
What separates good code from great code?

Computer science is both a science and an art. Its scientific aspects range from the theory of computation and algorithmic studies to code design and program architecture. Yet, when it comes time for implementation, there is a combination of artistic flair, nuanced style, and technical prowess that separates good code from great code.

November 2, 2011

Topic: Code

27 comments

Wanton Acts of Debuggery:
Keep your debug messages clear, useful, and not annoying.

Dear KV, Why is it that people who add logging to their programs lack the creativity to differentiate their log messages? If they all say the same thing—for example, DEBUG—it’s hard to tell what is going on, or even why the previous programmer added these statements in the first place.

October 24, 2011

Topic: Debugging

1 comment

How Will Astronomy Archives Survive the Data Tsunami?:
Astronomers are collecting more data than ever. What practices can keep them ahead of the flood?

Astronomy is already awash with data: currently 1 PB of public data is electronically accessible, and this volume is growing at 0.5 PB per year. The availability of this data has already transformed research in astronomy, and the STScI now reports that more papers are published with archived data sets than with newly acquired data. This growth in data size and anticipated usage will accelerate in the coming few years as new projects such as the LSST, ALMA, and SKA move into operation. These new projects will use much larger arrays of telescopes and detectors or much higher data acquisition rates than are now used.

October 18, 2011

Topic: Databases

1 comment

Postmortem Debugging in Dynamic Environments:
Modern dynamic languages lack tools for understanding software failures.

Despite the best efforts of software engineers to produce high-quality software, inevitably some bugs escape even the most rigorous testing process and are first encountered by end users. When this happens, such failures must be understood quickly, the underlying bugs fixed, and deployments patched to avoid another user (or the same one) running into the same problem again.

October 3, 2011

Topic: Programming Languages

0 comments

OCaml for the Masses:
Why the next language you learn should be functional

Functional programming is an old idea with a distinguished history. Lisp, a functional language inspired by Alonzo Church’s lambda calculus, was one of the first programming languages developed at the dawn of the computing age. Statically typed functional languages such as OCaml and Haskell are newer, but their roots go deep.

September 27, 2011

Topic: Programming Languages

38 comments

Java Security Architecture Revisited:
Hard technical problems and tough business challenges

This article looks back at a few of the hardest technical problems from a design and engineering perspective, as well as some tough business challenges for which research scientists are rarely trained. Li Gong offers a retrospective here culled from four previous occasions when he had the opportunity to dig into old notes and refresh his memory.

September 15, 2011

Topic: Programming Languages

0 comments

Debugging on Live Systems:
It’s more of a social than a technical problem.

I’ve been trying to debug a problem on a system at work, but the control freaks who run our production systems don’t want to give me access to the systems on which the bug always occurs. I haven’t been able to reproduce the problem in the test environment on my desktop, but every day the bug happens on several production systems.

September 13, 2011

Topic: Debugging

1 comment

The Software Industry IS the Problem:
The time has come for software liability laws.

One score and seven years ago, Ken Thompson brought forth a new problem, conceived by thinking, and dedicated to the proposition that those who trusted computers were in deep trouble. I am, of course, talking about Thompson’s Turing Award lecture, "Reflections on Trusting Trust." Unless you remember this piece by heart, you might want to take a moment to read it if at all possible.

September 8, 2011

Topic: Privacy and Rights

48 comments

The World According to LINQ:
Big data is about more than size, and LINQ is more than up to the task.

Programmers building Web- and cloud-based applications wire together data from many different sources such as sensors, social networks, user interfaces, spreadsheets, and stock tickers. Most of this data does not fit in the closed and clean world of traditional relational databases. It is too big, unstructured, denormalized, and streaming in realtime. Presenting a unified programming model across all these disparate data models and query languages seems impossible at first. By focusing on the commonalities instead of the differences, however, most data sources will accept some form of computation to filter and transform collections of data.

August 30, 2011

Topic: Data

5 comments

Verification of Safety-critical Software:
Avionics software safety certification is achieved through objective-based standards.

Avionics software has become a keystone in today’s aircraft design. Advances in avionics systems have reduced aircraft weight thereby reducing fuel consumption, enabled precision navigation, improved engine performance, and provided a host of other benefits. These advances have turned modern aircraft into flying data centers with computers controlling or monitoring many of the critical systems onboard. The software that runs these aircraft systems must be as safe as we can make it.

August 29, 2011

Topic: Quality Assurance

3 comments

Abstraction in Hardware System Design:
Applying lessons from software languages to hardware languages using Bluespec SystemVerilog

The history of software engineering is one of continuing development of abstraction mechanisms designed to tackle ever-increasing complexity. Hardware design, however, has not kept pace. For example, the two most commonly used HDLs date back to the 1980s. Updates to the standards lag behind modern programming languages in structural abstractions such as types, encapsulation, and parameterization. Their behavioral semantics lag even further. They are specified in terms of event-driven simulators running on uniprocessor von Neumann machines.

August 18, 2011

Topic: System Evolution

1 comment

How to Improve Security?:
It takes more than flossing once a year.

We recently had a security compromise at work, and now the whole IT department is scrambling to improve security. One problem this whole episode has brought to light is that so much security advice is generic. It’s like being told to lock your door when you go out at night, without saying what kind of lock you ought to own or how many are enough to protect your house. I think by now most people know they need to lock their doors, so why aren’t there more specific guidelines for securing systems?

August 12, 2011

Topic: Security

1 comment

Mobile Devices in the Enterprise: CTO Roundtable Overview:
An overview of the key points discussed in the ACM Roundtable on Mobile Devices in the Enterprise

The CTO Roundtable on Mobile Devices in the Enterprise focuses on the implications of the widespread use of mobile devices, such as smartphones, in the enterprise computing environment. These new personal devices have presented great challenges and opportunities for the protection of valuable information assets and creation of business value. What follows are the key points from that broader conversation.

August 12, 2011

Topic: Mobile Computing

0 comments

ACM CTO Roundtable on Mobile Devices in the Enterprise:
Finding solutions as growth and fragmentation complicate mobile device support

BlackBerry? iPhone? Android? Other? Thin client or fat client? Browser or Wi-Fi? Developers of mobile applications have many variables to consider in a rapidly changing environment. The mobile device market is growing quickly and fragmenting as it does so. Supporting mobile devices in the enterprise is getting much more complicated because of both this rapid growth worldwide and the diverse set of devices and networks.

August 3, 2011

Topic: Mobile Computing

0 comments

The Most Expensive One-byte Mistake:
Did Ken, Dennis, and Brian choose wrong with NUL-terminated text strings?

IT both drives and implements the modern Western-style economy. Thus, we regularly see headlines about staggeringly large amounts of money connected with IT mistakes. Which IT or CS decision has resulted in the most expensive mistake?

July 25, 2011

Topic: Development

114 comments

Arrogance in Business Planning:
Technology business plans that assume no competition (ever)

In the Internet addressing and naming market there’s a lot of competition, margins are thin, and the premiums on good planning and good execution are nowhere higher. To survive, investors and entrepreneurs have to be bold. Some entrepreneurs, however, go beyond "bold" and enter the territory of "arrogant" by making the wild assumption that they will have no competitors if they create a new and profitable niche. So it is with those who would unilaterally supplant or redraw the existing Internet resource governance or allocation systems.

July 20, 2011

Topic: Networks

7 comments

File-system Litter:
Cleaning up your storage space quickly and efficiently

Dear KV, We recently ran out of storage space on a very large file server and upon closer inspection we found that it was just one employee who had used it all up. The space was taken up almost exclusively by small files that were the result of running some data-analysis scripts. These files were completely unnecessary after they had been read once. The code that generated the files had no good way of cleaning them up once they had been created; it just went on believing that storage was infinite. Now we’ve had to put quotas on our file servers and, of course, deal with weekly cries for more disk space.
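
One way to avoid manufacturing this kind of litter in the first place is to write intermediates into a scratch directory that is removed automatically. A small sketch in Python; the file name is hypothetical:

    import pathlib
    import tempfile

    with tempfile.TemporaryDirectory(prefix="analysis-") as scratch:
        out = pathlib.Path(scratch) / "intermediate.dat"
        out.write_text("read once, then discarded\n")
        print(out.read_text())
    # The directory and every file in it are removed here, read or not.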

July 12, 2011

Topic: File Systems and Storage

1 comment

The Pain of Implementing LINQ Providers:
It’s no easy task for NoSQL

I remember sitting on the edge of my seat watching the 2005 PDC (Professional Developers Conference) videos that first showed LINQ (Language Integrated Query). I wanted LINQ: it offered just about everything that I could hope for to make working with data easy. The impetus for building queries into the language is quite simple; it is something that is used all the time; and the promise of a unified querying model is good enough, even before you add all the language goodies that were dropped on us. Being able to write in C# and have the database magically understand what I am doing?

July 6, 2011

Topic: Object-Relational Mapping

3 comments

Computing without Processors:
Heterogeneous systems allow us to target our programming to the appropriate environment.

From the programmer’s perspective the distinction between hardware and software is being blurred. As programmers struggle to meet the performance requirements of today’s systems, they will face an ever increasing need to exploit alternative computing elements such as GPUs (graphics processing units), which are graphics cards subverted for data-parallel computing, and FPGAs (field-programmable gate arrays), or soft hardware.

June 27, 2011

Topic: Computer Architecture

5 comments

The Robustness Principle Reconsidered:
Seeking a middle ground

In 1981, Jon Postel formulated the Robustness Principle, also known as Postel’s Law, as a fundamental implementation guideline for the then-new TCP. The intent of the Robustness Principle was to maximize interoperability between network service implementations, particularly in the face of ambiguous or incomplete specifications. If every implementation of some service that generates some piece of protocol did so using the most conservative interpretation of the specification and every implementation that accepted that piece of protocol interpreted it using the most generous interpretation, then the chance that the two services would be able to talk with each other would be maximized.
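
In miniature, the principle pairs a conservative sender with a liberal receiver. A toy illustration in Python for a "key: value" header line - purely a sketch, not any particular protocol:

    def emit_header(key, value):
        """Conservative in what you send: one canonical form."""
        return f"{key.lower()}: {value}"

    def parse_header(line):
        """Liberal in what you accept: tolerate case and stray spacing."""
        key, _, value = line.partition(":")
        return key.strip().lower(), value.strip()

    assert parse_header("  Content-Length :  42 ") == ("content-length", "42")
    assert parse_header(emit_header("Content-Length", "42")) == ("content-length", "42")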

June 22, 2011

Topic: Networks

0 comments

Interviewing Techniques:
Separating the good programmers from the bad

My work group has just been given approval to hire four new programmers, and now all of us have to interview people, both on the phone and in person. I hate interviewing people. I never know what to ask. I’ve also noticed that people tend to be careless with the truth when writing their resumes. We’re considering a programming test for our next round of interviewees, because we realized that some previous candidates clearly couldn’t program their way out of a paper bag. There have to be tricks to speeding up hiring without compromising whom we hire.

June 14, 2011

Topic: Business/Management

5 comments

Microsoft’s Protocol Documentation Program:
A Discussion with Nico Kicillof, Wolfgang Grieskamp and Bob Binder

In 2002, Microsoft began the difficult process of verifying much of the technical documentation for its Windows communication protocols.

June 8, 2011

Topic: Quality Assurance

5 comments

DSL for the Uninitiated:
Domain-specific languages bridge the semantic gap in programming.

One of the main reasons why software projects fail is the lack of communication between the business users, who actually know the problem domain, and the developers who design and implement the software model. Business users understand the domain terminology, and they speak a vocabulary that may be quite alien to the software people; it’s no wonder that the communication model can break down right at the beginning of the project life cycle.

June 1, 2011

Topic: Programming Languages

2 comments

If You Have Too Much Data, then "Good Enough" Is Good Enough:
In today’s humongous database systems, clarity may be relaxed, but business needs can still be met.

Classic database systems offer crisp answers for a relatively small amount of data. These systems hold their data in one or a relatively small number of computers. With a tightly defined schema and transactional consistency, the results returned from queries are crisp and accurate. New systems have humongous amounts of data content, change rates, and querying rates and take lots of computers to hold and process. The data quality and meaning are fuzzy. The schema, if present, is likely to vary across the data. The origin of the data may be suspect, and its staleness may vary.

May 23, 2011

Topic: Databases

5 comments

Deduplicating Devices Considered Harmful:
A good idea, but it can be taken too far

During the research for their interesting paper, "Reliably Erasing Data From Flash-based Solid State Drives," delivered at the FAST (File and Storage Technology) workshop in San Jose in February, Michael Wei and his co-authors from the University of California, San Diego, discovered that at least one flash controller, the SandForce SF-1200, was by default doing block-level deduplication of data written to it. The SF-1200 is used in SSDs (solid-state disks) from, among others, Corsair, ADATA, and Mushkin.

May 17, 2011

Topic: Databases

2 comments

Passing a Language through the Eye of a Needle:
How the embeddability of Lua impacted its design

Scripting languages are an important element in the current landscape of programming languages. A key feature of a scripting language is its ability to integrate with a system language. This integration takes two main forms: extending and embedding. In the first form, you extend the scripting language with libraries and functions written in the system language and write your main program in the scripting language. In the second form, you embed the scripting language in a host program (written in the system language) so that the host can run scripts and call functions defined in the scripts; the main program is the host program.
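
Lua's embedding works through its C API, but the shape of the interaction can be shown in any language. A Python analogy of the second form, in which the host runs a script and then calls a function the script defined:

    # Host program (the "system language" side of the analogy).
    script = "def damage(attack, armor):\n    return max(0, attack - armor)\n"

    env = {}
    exec(script, env)            # host runs the embedded script
    print(env["damage"](12, 5))  # host calls back into it: prints 7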

May 12, 2011

Topic: Programming Languages

4 comments

Storage Strife:
Beware keeping data in binary format

Where I work we are very serious about storing all of our data, not just our source code, in our source-code control system. When we started the company we made the decision to store as much as possible in one place. The problem is that over time we have moved from a pure programming environment to one where there are other people - the kind of people who send e-mails using Outlook and who keep their data in binary and proprietary formats.

May 5, 2011

Topic: Data

0 comments

Scalable SQL:
How do large-scale sites and applications remain SQL-based?

One of the leading motivators for NoSQL innovation is the desire to achieve very high scalability to handle the vagaries of Internet-size workloads. Yet many big social Web sites and many other Web sites and distributed tier 1 applications that require high scalability reportedly remain SQL-based for their core data stores and services. The question is, how do they do it?

April 19, 2011

Topic: Databases

3 comments

Mobile Application Development: Web vs. Native:
Web apps are cheaper to develop and deploy than native apps, but can they match the native user experience?

A few short years ago, most mobile devices were, for want of a better word, "dumb." Sure, there were some early smartphones, but they were either entirely e-mail focused or lacked sophisticated touch screens that could be used without a stylus. Even fewer shipped with a decent mobile browser capable of displaying anything more than simple text, links, and maybe an image. This meant if you had one of these devices, you were either a businessperson addicted to e-mail or an alpha geek hoping that this would be the year of the smartphone.

April 12, 2011

Topic: Mobile Computing

6 comments

The One-second War (What Time Will You Die?):
As more and more systems care about time at the second and sub-second level, finding a lasting solution to the leap seconds problem is becoming increasingly urgent.

Thanks to a secretive conspiracy working mostly below the public radar, your time of death may be a minute later than presently expected. But don’t expect to live any longer, unless you happen to be responsible for time synchronization in a large network of computers, in which case this coup will lower your stress level a bit every other year or so. We’re talking about the abolishment of leap seconds, a crude hack added 40 years ago, to paper over the fact that planets make lousy clocks compared with quantum mechanical phenomena.

April 6, 2011

Topic: Development

34 comments

Weapons of Mass Assignment:
A Ruby on Rails app highlights some serious, yet easily avoided, security vulnerabilities.

In May 2010, during a news cycle dominated by users’ widespread disgust with Facebook privacy policies, a team of four students from New York University published a request for $10,000 in donations to build a privacy-aware Facebook alternative. The software, Diaspora, would allow users to host their own social networks and own their own data. The team promised to open-source all the code they wrote, guaranteeing the privacy and security of users’ data by exposing the code to public scrutiny. With the help of front-page coverage from the New York Times, the team ended up raising more than $200,000.

March 30, 2011

Topic: Security

3 comments

A co-Relational Model of Data for Large Shared Data Banks:
Contrary to popular belief, SQL and noSQL are really just two sides of the same coin.

Fueled by their promise to solve the problem of distilling valuable information and business insight from big data in a scalable and programmer-friendly way, noSQL databases have been one of the hottest topics in our field recently. With a plethora of open source and commercial offerings and a surrounding cacophony of technical terms, however, it is hard for businesses and practitioners to see the forest for the trees.

March 18, 2011

Topic: Databases

23 comments

Successful Strategies for IPv6 Rollouts. Really.:
Knowing where to begin is half the battle.

The design of TCP/IP began in 1973 when Robert Kahn and I started to explore the ramifications of interconnecting different kinds of packet-switched networks. We published a concept paper in May 1974, and a fairly complete specification for TCP was published in December 1974. By the end of 1975, several implementations had been completed and many problems were identified. Iteration began, and by 1977 it was concluded that TCP (by now called Transmission Control Protocol) should be split into two protocols: a simple Internet Protocol that carried datagrams end to end through packet networks interconnected through gateways; and a TCP that managed the flow and sequencing of packets exchanged between hosts on the contemplated Internet.

March 10, 2011

Topic: Networks

5 comments

Porting with Autotools:
Using tools such as Automake and Autoconf with preexisting code bases can be a major hassle.

A piece of C code I’ve been working on recently needs to be ported to another platform, and at work we’re looking at Autotools, including Automake and Autoconf, to achieve this. The problem is that every time I attempt to get the code building with these tools I feel like a rat in a maze. I can almost get things to build but not quite.

March 3, 2011

Topic: Tools

0 comments

Returning Control to the Programmer:
Exposing SIMD units within interpreted languages could simplify programs and unleash floods of untapped processor power.

Server and workstation hardware architecture is continually improving, yet interpreted languages have failed to keep pace with the proper utilization of modern processors. SIMD (single instruction, multiple data) units are available in nearly every current desktop and server processor and are greatly underutilized, especially with interpreted languages. If multicore processors continue their current growth pattern, interpreted-language performance will begin to fall behind, since current native compilers and languages offer better automated SIMD optimization and direct SIMD mapping support.
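
The gap is easy to observe. A rough illustration in Python, assuming NumPy is installed: the interpreted loop pays a bytecode dispatch per element, while the vectorized call drops into native code that the compiler can map onto SIMD units:

    import numpy as np

    a = np.arange(1_000_000, dtype=np.float32)
    b = np.arange(1_000_000, dtype=np.float32)

    # Interpreted: one bytecode dispatch (and boxing) per element.
    total = 0.0
    for x, y in zip(a, b):
        total += float(x) * float(y)

    # Vectorized: a single native call, eligible for SIMD under the hood.
    total_vec = float(np.dot(a, b))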

February 24, 2011

Topic: Virtual Machines

3 comments

B.Y.O.C. (1,342 Times and Counting):
Why can’t we all use standard libraries for commonly needed algorithms?

Although seldom articulated clearly, or even at all, one of the bedrock ideas of good software engineering is reuse of code libraries holding easily accessible implementations of common algorithms and facilities. The reason for this reticence is probably because there is no way to state it succinctly, without sounding like a cheap parody of Occam’s razor: It is pointless to do with several where few will suffice.

February 17, 2011

Topic: Development

12 comments

Two Books Alike in Dignity:
Formal and informal approaches to C++ mastery

Woke up this morning ... surprised to find my Sennheisers still connecting ears to my new MacBook Pro, with iTunes set to blues genre in shuffle mode. Lest you think I’ve succumbed to the despicable placement temptation that seduces so many columnists and filmmakers in these pursy times, I’m reluctant to price the named products, plug their sources, or elaborate on the immense pleasure I derive from their splendid cost-effective performances. Suffice it to mention that the subliminal album playing all night was Broke, Black and Blue, Volume 1, available for 7.95 US dollars or 7.95 Apple pounds sterling.

February 10, 2011

Topic: Programming Languages

0 comments

Testable System Administration:
Models of indeterminism are changing IT management.

The methods of system administration have changed little in the past 20 years. While core IT technologies have improved in a multitude of ways, for many if not most organizations system administration is still based on production-line build logistics (aka provisioning) and reactive incident handling. As we progress into an information age, humans will need to work less like the machines they use and embrace knowledge-based approaches. That means exploiting simple (hands-free) automation that leaves us unencumbered to discover patterns and make decisions.

January 31, 2011

Topic: System Administration

1 comment

National Internet Defense - Small States on the Skirmish Line:
Attacks in Estonia and Georgia highlight key vulnerabilities in national Internet infrastructure.

Despite the global and borderless nature of the Internet’s underlying protocols and driving philosophy, there are significant ways in which it remains substantively territorial. Nations have policies and laws that govern and attempt to defend "their Internet". This is far less palpable than a nation’s physical territory or even than "its air" or "its water". Cyberspace is still a much wilder frontier, hard to define and measure. Where its effects are noted and measurable, all too often they are hard to attribute to responsible parties.

January 19, 2011

Topic: Security

0 comments

Finding Usability Bugs with Automated Tests:
Automated usability tests can be valuable companions to in-person tests.

Ideally, all software should be easy to use and accessible for a wide range of people; however, even software that appears to be modern and intuitive often falls short of the most basic usability and accessibility goals. Why does this happen? One reason is that sometimes our designs look appealing, so we skip the step of testing their usability and accessibility, all in the interest of speed, cost savings, and competitive advantage.

January 12, 2011

Topic: HCI

3 comments

System Administration Soft Skills:
How can system administrators reduce stress and conflict in the workplace?

System administration can be both stressful and rewarding. Stress generally comes from outside factors such as conflict between SAs (system administrators) and their colleagues, a lack of resources, a high-interrupt environment, conflicting priorities, and SAs being held responsible for failures outside their control. What can SAs and their managers do to alleviate the stress? There are some well-known interpersonal and time-management techniques that can help, but these can be forgotten in times of crisis or just through force of habit.

January 4, 2011

Topic: System Administration

3 comments

A Plea to Software Vendors from Sysadmins - 10 Do’s and Don’ts:
What can software vendors do to make the lives of sysadmins a little easier?

A friend of mine is a grease monkey: the kind of auto enthusiast who rebuilds engines for fun on a Saturday night. He explained to me that certain brands of automobiles were designed in ways to make the mechanic’s job easier. Others, however, were designed as if the company had a pact with the aspirin industry to make sure there are plenty of mechanics with headaches. He said those car companies hate mechanics. I understood completely because, as a system administrator, I can tell when software vendors hate me. It shows in their products.

December 22, 2010

Topic: System Administration

48 comments

Bound by the Speed of Light:
There’s only so much you can do to optimize NFS over a WAN.

I’ve been asked to optimize our NFS (network file system) setup for a global network, but NFS doesn’t work the same over a long link as it does over a LAN. Management keeps yelling that we have a multigigabit link between our remote sites, but what our users experience when they try to access their files over the WAN link is truly frustrating. Is this just an impossible task?
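
The physics behind the complaint is easy to check. A back-of-the-envelope sketch in C, assuming light in fiber travels at roughly two-thirds of its vacuum speed over a 10,000-km path (both figures are illustrative assumptions):

    /* Minimum round-trip time over a long fiber link - a bound that no
       amount of bandwidth can remove. Distance and the 2/3 c propagation
       speed are illustrative assumptions. */
    #include <stdio.h>

    int main(void) {
        double c_km_s = 299792.458;          /* speed of light in vacuum, km/s */
        double v = c_km_s * 2.0 / 3.0;       /* rough speed of light in fiber */
        double distance_km = 10000.0;        /* e.g., an intercontinental link */
        double rtt_ms = 2.0 * distance_km / v * 1000.0;
        printf("minimum RTT: %.1f ms\n", rtt_ms);   /* roughly 100 ms */
        return 0;
    }

Every synchronous NFS operation pays at least one such round trip, no matter how many gigabits the link carries.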

December 14, 2010

Topic: Networks

3 comments

Collaboration in System Administration:
For sysadmins, solving problems usually involves collaborating with others. How can we make it more effective?

George was in trouble. A seemingly simple deployment was taking all morning, and there seemed no end in sight. His manager kept coming in to check on his progress, as the customer was anxious to have the deployment done. He was supposed to be leaving for a goodbye lunch for a departing co-worker, adding to the stress. He had called in all kinds of help, including colleagues, an application architect, technical support, and even one of the system developers. He used e-mail, instant messaging, face-to-face contacts, his phone, and even his office mate’s phone to communicate with everyone. And George was no novice.

December 6, 2010

Topic: System Administration

1 comment

UX Design and Agile: A Natural Fit?:
A user experience designer and a software engineer from SAP discuss the challenges of collaborating on a business-intelligence query tool.

Found at the intersection of many fields, UX design addresses a software user’s entire experience: from logging on to navigating, accessing, modifying, and saving data. Unfortunately, UX design is often overlooked or treated as a "bolt-on," available only to those projects blessed with the extra time and budget to accommodate it. Careful design of the user experience, however, can be crucial to the success of a product. And it’s not just window dressing: choices made about the user experience can have a significant impact on a software product’s underlying architecture, data structures, and processing algorithms.

November 29, 2010

Topic: HCI

0 comments

Virtualization: Blessing or Curse?:
Managing virtualization at a large scale is fraught with hidden challenges.

Virtualization is often touted as the solution to many challenging problems, from resource underutilization to data-center optimization and carbon emission reduction. The hidden costs of virtualization, largely stemming from the complex and difficult system administration challenges it poses, are often overlooked, however. Reaping the fruits of virtualization requires the enterprise to navigate scalability limitations, revamp traditional operational practices, manage performance, and achieve unprecedented cross-silo collaboration. Virtualization is not a curse: it can bring material benefits, but only to the prepared.

November 22, 2010

Topic: System Administration

0 comments

A Conversation with Ed Catmull:
The head of Pixar Animation Studios talks tech with Stanford professor Pat Hanrahan.

With the release of Toy Story in 1995, Pixar Animation Studios President Ed Catmull achieved a lifelong goal: to make the world’s first feature-length, fully computer-generated movie. It was the culmination of two decades of work, beginning at the legendary University of Utah computer graphics program in the early 1970s, with important stops along the way at the New York Institute of Technology, Lucasfilm, and finally Pixar, which he cofounded with Steve Jobs and John Lasseter in 1986. Since then, Pixar has become a household name, and Catmull’s original dream has extended into a string of successful computer-animated movies. Each stage in his storied career presented new challenges, and on the other side of them, new lessons.

November 13, 2010

Topic: Graphics

5 comments

The Theft of Business Innovation: Overview:
An overview of key points discussed in the joint ACM-BCS Roundtable on Threats to Global Competitiveness.

The joint ACM-BCS Roundtable on Threats to Global Competitiveness focuses on the new business security realities resulting from having practically all business information directly or indirectly connected to the Internet and the increased speed and volume of information movement. This new environment has enabled an entirely new dimension in what has been considered important business value-creation assets and in the criminal ways that information can be stolen or used to harm its owner. What follows are the key points from that broader conversation.

November 5, 2010

Topic: Security

0 comments

The Theft of Business Innovation: An ACM-BCS Roundtable on Threats to Global Competitiveness:
These days, cybercriminals are looking to steal more than just banking information.

Valuable information assets stretch more broadly than just bank accounts, financial-services transactions, or secret, patentable inventions. In many cases, everything that defines a successful business model resides on one or more directly or indirectly Internet-connected personal computers (e-mail, spreadsheets, word-processing documents, etc.), in corporate databases, in software that implements business practices, or collectively on thousands of TCP/IP-enabled realtime plant controllers. While not the traditional high-powered information repositories one normally thinks of as attractive intellectual property targets, these systems do represent a complete knowledge set of a business’ operations.

November 1, 2010

Topic: Security

1 comment

Sir, Please Step Away from the ASR-33!:
To move forward with programming languages we need to break free from the tyranny of ASCII.

One of the naughty details of my Varnish software is that the configuration is written in a domain-specific language that is converted into C source code, compiled into a shared library, and executed at hardware speed. That obviously makes me a programming language syntax designer, and just as obviously I have started to think more about how we express ourselves in these syntaxes.

October 25, 2010

Topic: Programming Languages

86 comments

Gardening Tips:
A good library is like a garden.

I’ve been maintaining a set of libraries for my company for the past year. The libraries are used to interface to some special hardware that we sell, and all of the code we sell to our end users runs on top of the libraries, which talk, pretty much directly, to our hardware. The one problem I keep having is that the application programmers continually reach around the library to talk directly to the hardware, and this causes bugs in our systems because the library code maintains state about the hardware. If I make the library stateless, then every library call will have to talk to the hardware, which will slow down the library and all of the code that uses it.

October 18, 2010

Topic: Development

0 comments

The Case Against Data Lock-in:
Want to keep your users? Just make it easy for them to leave.

Engineers employ many different tactics to focus on the user when writing software: for example, listening to user feedback, fixing bugs, and adding features that their users are clamoring for. Since Web-based services have made it easier for users to move to new applications, it’s becoming even more important to focus on building and retaining user trust. We’ve found that an incredibly effective way to earn and maintain user trust is to make it easy for users to leave your product with their data in tow. This not only prevents lock-in and engenders trust, but also forces your team to innovate and compete on technical merit.

October 8, 2010

Topic: Data

4 comments

Keeping Bits Safe: How Hard Can It Be?:
As storage systems grow larger and larger, protecting their data for long-term storage is becoming more and more challenging.

These days, we are all data pack rats. Storage is cheap, so if there’s a chance the data could possibly be useful, we keep it. We know that storage isn’t completely reliable, so we keep backup copies as well. But the more data we keep, and the longer we keep it, the greater the chance that some of it will be unrecoverable when we need it.
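
A toy model shows why extra copies help but never make the problem go away. Assuming each replica is lost independently with probability p over some period (an optimistic assumption, since real storage failures are correlated):

    /* Probability of losing all n independent replicas is p^n.
       The 1 percent per-replica figure is an illustrative assumption. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double p = 0.01;                     /* per-replica loss probability */
        for (int n = 1; n <= 4; n++)
            printf("%d replica(s): loss probability %g\n", n, pow(p, n));
        return 0;
    }

The more bits you keep, and the longer you keep them, the more those small residual probabilities add up.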

October 1, 2010

Topic: File Systems and Storage

4 comments

Facing an Uncertain Past:
Excuses, excuses, excuses!

Using my favorite Greek passive-present-neuter participle (I trust you have one, too), I offer a dramatic prolegomenon to this overdue column. To wit, an apology and an explanation for its tardiness. (You’ll notice the potentially unending recursion, since both column and excuses are late.) The apology is easy: we Brits just say "Jolly sorry, actually," and project a pained sincerity, whether we mean it or not.

September 24, 2010

Topic: Code

3 comments

Tackling Architectural Complexity with Modeling:
Component models can help diagnose architectural problems in both new and existing systems.

The ever-increasing might of modern computers has made it possible to solve problems once thought too difficult to tackle. Far too often, however, the systems for these functionally complex problem spaces have overly complicated architectures. In this article I use the term architecture to refer to the overall macro design of a system rather than the details of how the individual parts are implemented. The system architecture is what is behind the scenes of usable functionality, including internal and external communication mechanisms, component boundaries and coupling, and how the system will make use of any underlying infrastructure (databases, networks, etc.).

September 17, 2010

Topic: Development

0 comments

Photoshop Scalability: Keeping It Simple:
Clem Cole and Russell Williams discuss Photoshop’s long history with parallelism, and what they now see as the main challenge.

Over the past two decades, Adobe Photoshop has become the de facto image-editing software for digital photography enthusiasts, artists, and graphic designers worldwide. Part of its widespread appeal has to do with a user interface that makes it fairly straightforward to apply some extremely sophisticated image editing and filtering techniques. Behind that façade, however, stands a lot of complex, computationally demanding code. To improve the performance of these computations, Photoshop’s designers became early adopters of parallelism through efforts to access the extra power offered by the cutting-edge desktop systems of the day that were powered by either two or four processors.

September 9, 2010

Topic: Graphics

3 comments

Thinking Clearly about Performance:
Improving the performance of complex software is difficult, but understanding some fundamental principles can make it easier.

When I joined Oracle Corporation in 1989, performance was difficult. Only a few people claimed they could do it very well, and those people commanded high consulting rates. When circumstances thrust me into the "Oracle tuning" arena, I was quite unprepared. Recently, I’ve been introduced to the world of "MySQL tuning," and the situation seems very similar to what I saw in Oracle more than 20 years ago.

September 1, 2010

Topic: Performance

2 comments

A Paucity of Ports:
Debugging an ephemeral problem

I’ve been debugging a network problem in what should be a simple piece of network code. We have a small server process that listens for commands from all the other systems in our data center and then farms the commands out to other servers to be run. For each command issued, the client sets up a new TCP connection, sends the command, and then closes the connection after our server acknowledges the command.

August 24, 2010

Topic: Debugging

2 comments

Computers in Patient Care: The Promise and the Challenge:
Information technology has the potential to radically transform health care. Why has progress been so slow?

A 29-year-old female from New York City comes in at 3 a.m. to an ED (emergency department) in California, complaining of severe acute abdominal pain that woke her up. She reports that she is in California attending a wedding and that she has suffered from similar abdominal pain in the recent past, most recently resulting in an appendectomy. The emergency physician performs an abdominal CAT scan and sees what he believes to be an artifact from the appendectomy in her abdominal cavity. He has no information about the patient’s past history other than what she is able to tell him; he has no access to any images taken before or after the appendectomy, nor does he have any other vital information about the surgical operative note or follow-up.

August 12, 2010

Topic: Bioscience

2 comments

Injecting Errors for Fun and Profit:
Error-detection and correction features are only as good as our ability to test them.

It is an unfortunate fact of life that anything with moving parts eventually wears out and malfunctions, and electronic circuitry is no exception. In this case, of course, the moving parts are electrons. In addition to the wear-out mechanisms of electromigration (the moving electrons gradually push the metal atoms out of position, causing wires to thin, thus increasing their resistance and eventually producing open circuits) and dendritic growth (the voltage difference between adjacent wires causes the displaced metal atoms to migrate toward each other, just as magnets will attract each other, eventually causing shorts), electronic circuits are also vulnerable to background radiation.
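
The article’s subject is testing those detection features, and the generic shape of the technique is to inject a fault deliberately and confirm that the detection path fires. A minimal sketch, using a trivial XOR checksum as a stand-in for real ECC hardware (buffer contents, bit position, and checksum are all illustrative assumptions):

    /* Flip one bit in a buffer and verify that the detection code
       notices. Real error injection targets memory, buses, or registers;
       the XOR checksum here is only a stand-in. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    static uint8_t checksum(const uint8_t *buf, size_t n) {
        uint8_t c = 0;
        while (n--)
            c ^= *buf++;
        return c;
    }

    int main(void) {
        uint8_t data[64];
        memset(data, 0xA5, sizeof data);
        uint8_t good = checksum(data, sizeof data);

        data[17] ^= 1 << 3;                  /* inject: flip bit 3 of byte 17 */

        if (checksum(data, sizeof data) != good)
            printf("error detected\n");      /* the path we wanted to test */
        else
            printf("error missed!\n");
        return 0;
    }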

August 6, 2010

Topic: Failure and Recovery

0 comments

CTO Roundtable: Virtualization Part II:
When it comes to virtualization platforms, experts say focus first on the services to be delivered.

Last month we published Part I of a CTO Roundtable forum on virtualization. Sponsored by the ACM Professions Board, the roundtable features five experts on virtualization discussing the current state of the technology and how companies can use it most effectively. In this second and final installment, the participants address key issues such as choosing the most appropriate virtual machine platform, using virtualization to streamline desktop delivery, and using virtualization as an effective disaster-recovery mechanism.

July 30, 2010

Topic: Virtualization

0 comments

Moving to the Edge: CTO Roundtable Overview:
An overview of the key issues addressed in ACM’s CTO Roundtable on network virtualization

The general IT community is just starting to digest how their world is changing with the advent of virtual machines and cloud computing. These new technologies promise to make applications more portable and raise the opportunity for more flexibility and efficiency in either on-premises or outsourced supporting infrastructure. Before taking advantage of these opportunities, however, data-center managers must have a better understanding of service infrastructure requirements than ever before. The CTO Roundtable on Network Virtualization focuses on how virtualization and clouds impact network service architectures, both in the ability to move legacy applications to more flexible and efficient virtualized environments and in what new functionality may become available.

July 28, 2010

Topic: Virtualization

0 comments

Lessons from the Letter:
Security flaws in a large organization

I recently received a letter in which a company notified me that they had exposed some of my personal information. While it is now quite common for personal data to be stolen, this letter amazed me because of how well it pointed out two major flaws in the systems of the company that lost the data. I am going to insert three illuminating paragraphs here and then discuss what they actually can teach us.

July 22, 2010

Topic: Security

1 comment

Moving to the Edge: An ACM CTO Roundtable on Network Virtualization:
How will virtualization technologies affect network service architectures?

The general IT community is just beginning to digest how the advent of virtual machines and cloud computing is changing their world. These new technologies promise to make applications more portable and increase the opportunity for more flexibility and efficiency in both on-premises and outsourced support infrastructures. However, virtualization can break long-standing linkages between applications and their supporting physical devices. Before data-center managers can take advantage of these new opportunities, they must have a better understanding of service infrastructure requirements and their linkages to applications.

July 15, 2010

Topic: Virtualization

0 comments

Software Development with Code Maps:
Could those ubiquitous hand-drawn code diagrams become a thing of the past?

To better understand how professional software developers use visual representations of their code, we interviewed nine developers at Microsoft to identify common scenarios, and then surveyed more than 400 developers to understand the scenarios more deeply.

July 4, 2010

Topic: Graphics

1 comment

The Ideal HPC Programming Language:
Maybe it’s Fortran. Or maybe it just doesn’t matter.

The DARPA HPCS program sought a tenfold productivity improvement in trans-petaflop systems for HPC. This article describes programmability studies undertaken by Sun Microsystems in its HPCS participation. These studies were distinct from Sun’s ongoing development of a new HPC programming language (Fortress) and the company’s broader HPCS productivity studies, though there was certainly overlap with both activities.

June 18, 2010

Topic: Programming Languages

3 comments

You’re Doing It Wrong:
Think you’ve mastered the art of server performance? Think again.

Would you believe me if I claimed that an algorithm that has been on the books as "optimal" for 46 years, which has been analyzed in excruciating detail by geniuses like Knuth and taught in all computer science courses in the world, can be optimized to run 10 times faster? A couple of years ago, I fell into some interesting company and became the author of an open source HTTP accelerator called Varnish, basically an HTTP cache to put in front of slow Web servers.
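
The algorithm in question is the binary heap, and its textbook array layout is the part the author’s page-aware variant (the B-heap) reworks. This sketch shows only the classic indexing, not the Varnish code:

    /* Classic textbook binary-heap indexing. Parent and child indices
       diverge quickly, so on a heap much larger than RAM nearly every
       level crossing can touch a different virtual-memory page - the
       cost a page-aware layout avoids by packing subtrees into pages. */
    static inline int heap_parent(int i) { return (i - 1) / 2; }
    static inline int heap_left(int i)   { return 2 * i + 1; }
    static inline int heap_right(int i)  { return 2 * i + 2; }

That page-crossing behavior, invisible in the classic analysis, is where the factor of 10 comes from.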

June 11, 2010

Topic: Performance

85 comments

Collecting Counters:
Gathering statistics is important, but so is making them available to others.

Over the past month I’ve been trying to figure out a problem that occurs on our systems when the network is under heavy load. After about two weeks I was able to narrow down the problem from "the network is broken" (a phrase that my coworkers use mostly to annoy me) to something going wrong on the network interfaces in our systems.

June 4, 2010

Topic: Development

1 comment

Visualizing System Latency:
Heat maps are a unique and powerful way to visualize latency data. Explaining the results, however, is an ongoing challenge.

When I/O latency is presented as a visual heat map, some intriguing and beautiful patterns can emerge. These patterns provide insight into how a system is actually performing and what kinds of latency end-user applications experience. Many characteristics seen in these patterns are still not understood, but so far their analysis is revealing systemic behaviors that were previously unknown.

May 28, 2010

Topic: Graphics

16 comments

A Tour through the Visualization Zoo:
A survey of powerful visualization techniques, from the obvious to the obscure

Thanks to advances in sensing, networking, and data management, our society is producing digital information at an astonishing rate. According to one estimate, in 2010 alone we will generate 1,200 exabytes - 60 million times the content of the Library of Congress. Within this deluge of data lies a wealth of valuable information on how we conduct our businesses, governments, and personal lives. To put the information to good use, we must find ways to explore, relate, and communicate the data meaningfully.

May 13, 2010

Topic: Graphics

25 comments

Securing Elasticity in the Cloud:
Elastic computing has great potential, but many security challenges remain.

As somewhat of a technology-hype curmudgeon, I was until very recently in the camp that believed cloud computing was not much more than the latest marketing-driven hysteria for an idea that has been around for years. Outsourced IT infrastructure services, aka IaaS (Infrastructure as a Service), has been around since at least the 1980s, delivered by the telecommunication companies and major IT outsourcers. Hosted applications, aka PaaS (Platform as a Service) and SaaS (Software as a Service), were in vogue in the 1990s in the form of ASPs (application service providers).

May 6, 2010

Topic: Distributed Computing

0 comments

Avoiding Obsolescence:
Overspecialization can be the kiss of death for sysadmins.

Dear KV, What is the biggest threat to systems administrators? Not the technical threat (security, outages, etc.), but the biggest threat to systems administrators as a profession?

April 29, 2010

Topic: Development

0 comments

Principles of Robust Timing over the Internet:
The key to synchronizing clocks over networks is taming delay variability.

Everyone, and most everything, needs a clock, and computers are no exception. Clocks tend to drift off if left to themselves, however, so it is necessary to bring them to heel periodically through synchronizing to some other reference clock of higher accuracy. An inexpensive and convenient way to do this is over a computer network.
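
The core of that synchronization is the standard four-timestamp estimate used by NTP-style protocols. A minimal sketch in C; the variable names are assumptions, with t1 and t4 read from the client clock and t2 and t3 from the server clock:

    /* Four-timestamp clock-offset and path-delay estimate:
       t1 = client send, t2 = server receive, t3 = server send,
       t4 = client receive. The offset is exact only if the path delay
       is symmetric - which is why delay variability must be tamed. */
    typedef struct { double offset, delay; } clock_est;

    static clock_est estimate(double t1, double t2, double t3, double t4) {
        clock_est e;
        e.offset = ((t2 - t1) + (t3 - t4)) / 2.0;  /* server clock minus client clock */
        e.delay  = (t4 - t1) - (t3 - t2);          /* round trip minus server hold time */
        return e;
    }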

April 21, 2010

Topic: Networks

4 comments

Why Cloud Computing Will Never Be Free:
The competition among cloud providers may drive prices downward, but at what cost?

The last time the IT industry delivered outsourced shared-resource computing to the enterprise was with timesharing in the 1980s, when it evolved to a high art, delivering the reliability, performance, and service the enterprise demanded. Today, cloud computing is poised to address the needs of the same market, based on a revolution of new technologies, significant unused computing capacity in corporate data centers, and the development of a highly capable Internet data communications infrastructure. The economies of scale of delivering computing from a centralized, shared infrastructure have set the expectation among customers that cloud-computing costs will be significantly lower than those incurred from providing their own computing.

April 16, 2010

Topic: Distributed Computing

1 comment

Simplicity Betrayed:
Emulating a video system shows how even a simple interface can be more complex—and capable—than it appears.

An emulator is a program that runs programs built for different computer architectures from the host platform that supports the emulator. Approaches differ, but most emulators simulate the original hardware in some way. At a minimum the emulator interprets the original CPU instructions and provides simulated hardware-level devices for input and output. For example, keyboard input is taken from the host platform and translated into the original hardware format, resulting in the emulated program "seeing" the same sequence of keystrokes. Conversely, the emulator will translate the original hardware screen format into an equivalent form on the host machine.
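
The interpretive core described here fits in a few lines. A toy fetch-decode-execute loop for an invented two-instruction machine (opcodes, registers, and memory layout are all made up for illustration):

    /* Toy fetch-decode-execute loop. Everything about the "machine"
       is invented; the point is the shape of an interpreting emulator. */
    #include <stdint.h>

    enum { OP_HALT = 0, OP_ADD = 1 };            /* invented opcodes */

    void run(uint8_t *mem) {
        uint16_t pc = 0;
        uint8_t reg[4] = {0};
        for (;;) {
            uint8_t op = mem[pc++];              /* fetch */
            switch (op) {                        /* decode */
            case OP_ADD: {                       /* execute: reg[a] += reg[b] */
                uint8_t a = mem[pc++], b = mem[pc++];
                reg[a & 3] += reg[b & 3];
                break;
            }
            case OP_HALT:
            default:
                return;
            }
        }
    }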

April 8, 2010

Topic: Development

2 comments

Enhanced Debugging with Traces:
An essential technique used in emulator development is a useful addition to any programmer’s toolbox.

Creating an emulator to run old programs is a difficult task. You need a thorough understanding of the target hardware and the correct functioning of the original programs that the emulator is to execute. In addition to being functionally correct, the emulator must hit a performance target of running the programs at their original realtime speed. Reaching these goals inevitably requires a considerable amount of debugging. The bugs are often subtle errors in the emulator itself but could also be a misunderstanding of the target hardware or an actual known bug in the original program. (It is also possible the binary data for the original program has become subtly corrupted or is not the version expected.)
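
One common form of the technique, sketched here as an assumption rather than taken from the article, is a fixed-size in-memory ring of trace records cheap enough to leave compiled in:

    /* In-memory trace ring: record program counter, opcode, and cycle
       count at each step; dump the ring when something goes wrong.
       Record layout and ring size are illustrative. */
    #include <stdint.h>

    #define TRACE_SLOTS 1024

    struct trace_rec { uint32_t pc; uint32_t opcode; uint64_t cycle; };

    static struct trace_rec trace_ring[TRACE_SLOTS];
    static unsigned trace_head;

    static inline void trace(uint32_t pc, uint32_t opcode, uint64_t cycle) {
        struct trace_rec *r = &trace_ring[trace_head++ % TRACE_SLOTS];
        r->pc = pc;
        r->opcode = opcode;
        r->cycle = cycle;
    }

Running the emulator and a reference implementation with the same trace format, then diffing the two logs, pins down the first point of divergence.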

March 31, 2010

Topic: Debugging

0 comments

A Conversation with Jeff Heer, Martin Wattenberg, and Fernanda Viégas:
Sharing visualization with the world

Visualization can be a pretty mundane activity: collect some data, fire up a tool, and then present it in a graph, ideally with some pretty colors. But all that is changing. The explosion of publicly available data sets on the Web, coupled with a new generation of collaborative visualization tools, is making it easier than ever to create compelling visualizations and share them with the world.

March 23, 2010

Topic: Graphics

0 comments

Broken Builds:
Frequent broken builds could be symptomatic of deeper problems within a development project.

Is there anything more aggravating to programmers than fellow team members checking in code that breaks a build? I find myself constantly tracking down minor mistakes in other people’s code simply because they didn’t check that their changes didn’t break the build. The worst part is when someone has broken the build and they get indignant about my pointing it out. Are there any better ways to protect against these types of problems?

March 17, 2010

Topic: Development

5 comments

Cooling the Data Center:
What can be done to make cooling systems in data centers more energy efficient?

Power generation accounts for about 40 to 45 percent of the primary energy supply in the US and the UK, and a good fraction is used to heat, cool, and ventilate buildings. A new and growing challenge in this sector concerns computer data centers and other equipment used to cool computer data systems. On the order of 61 billion kilowatt-hours of power was used in data centers in 2006 in the US, representing about 1.5 percent of the country’s electricity consumption.

March 10, 2010

Topic: Power Management

3 comments

CTO Roundtable: Malware Defense Overview:
Key points from ACM’s CTO Roundtable on malware defense

The Internet has enabled malware to progress to a much broader distribution model, and the result has been a huge explosion of individual threats. There are automated tools that find vulnerable sites, attack them, and turn them into distribution sites. As commerce and the business of daily living migrate online, attacks that leverage information assets for ill-gotten benefit have increased dramatically. Security professionals are seeing more sophisticated and innovative profit models on par with business models seen in the legitimate world.

February 25, 2010

Topic: Web Security

1 comment

CTO Roundtable: Malware Defense:
The battle is bigger than most of us realize.

As all manner of information assets migrate online, malware has kept pace, growing into a huge source of individual threats. In a continuously evolving game of cat and mouse, as security professionals close off points of access, attackers develop more sophisticated attacks. Today profit models from malware are comparable to any seen in the legitimate world.

February 24, 2010

Topic: Web Security

0 comments

Toward Energy-Efficient Computing:
What will it take to make server-side computing more energy efficient?

By now, most everyone is aware of the energy problem at its highest level: our primary sources of energy are running out, while the demand for energy in both commercial and domestic environments is increasing, and the side effects of energy use have important global environmental considerations. The emission of greenhouse gases such as CO2, now seen by most climatologists to be linked to global warming, is only one issue.

February 17, 2010

Topic: Power Management

1 comment

Commitment Issues:
When is the right time to commit changes?

One of the other people on my project insists on checking in unrelated changes in large batches. When I say unrelated, what I mean is he will fix several unrelated bugs and then make a few minor changes to spacing and indentation across the entire source tree. He will then commit all of these changes at once, usually with a short commit message that lists only the bugs he claims to have fixed. Do you think I’m being too picky in wanting each checkin to address only one issue or problem?

February 10, 2010

Topic: Development

0 comments

A Conversation with Steve Furber:
The designer of the ARM chip shares lessons on energy-efficient computing.

If you were looking for lessons on energy-efficient computing, one person you would want to speak with would be Steve Furber, principal designer of the highly successful ARM (Acorn RISC Machine) processor. Currently running in billions of cellphones around the world, the ARM is a prime example of a chip that is simple, low power, and low cost. Furber led development of the ARM in the 1980s while at Acorn, the British PC company also known for the BBC Microcomputer, which Furber played a major role in developing.

February 1, 2010

Topic: Power Management

5 comments

Managing Contention for Shared Resources on Multicore Processors:
Contention for caches, memory controllers, and interconnects can be alleviated by contention-aware scheduling algorithms.

Modern multicore systems are designed to allow clusters of cores to share various hardware structures, such as LLCs (last-level caches; for example, L2 or L3), memory controllers, and interconnects, as well as prefetching hardware. We refer to these resource-sharing clusters as memory domains, because the shared resources mostly have to do with the memory hierarchy.

January 20, 2010

Topic: Processors

1 comment

Power-Efficient Software:
Power-manageable hardware can help save energy, but what can software developers do to address the problem?

The rate at which power-management features have evolved is nothing short of amazing. Today almost every size and class of computer system, from the smallest sensors and handheld devices to the "big iron" servers in data centers, offers a myriad of features for reducing, metering, and capping power consumption. Without these features, fan noise would dominate the office ambience, and untethered laptops would remain usable for only a few short hours (and then only if one could handle the heat), while data-center power and cooling costs and capacity would become unmanageable.

January 8, 2010

Topic: Power Management

2 comments

Standards Advice:
Easing the pain of implementing standards

My mother took language, both written and spoken, very seriously. The last thing I wanted to hear upon showing her an essay I was writing for school was, "Bring me the red pen." In those days I did not have a computer; all my assignments were written longhand or on a typewriter, so the red pen meant a total rewrite. She was a tough editor, but it was impossible to question the quality of her work or the passion that she brought to the writing process.

December 30, 2009

Topic: Compliance

4 comments

Triple-Parity RAID and Beyond:
As hard-drive capacities continue to outpace their throughput, the time has come for a new level of RAID.

How much longer will current RAID techniques persevere? The RAID levels were codified in the late 1980s; double-parity RAID, known as RAID-6, is the current standard for high-availability, space-efficient storage. The incredible growth of hard-drive capacities, however, could impose serious limitations on the reliability even of RAID-6 systems. Recent trends in hard drives show that triple-parity RAID must soon become pervasive. In 2005, Scientific American reported on Kryder’s law, which predicts that hard-drive density will double annually. While the rate of doubling has not quite maintained that pace, it has been close.
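
The building block of every parity-RAID level is plain XOR: single parity is the XOR of the data blocks, and any one missing block is the XOR of the survivors. A minimal sketch (the block size is an illustrative assumption; double- and triple-parity schemes layer Reed-Solomon-style codes on the same idea):

    /* RAID-5-style single parity: out = d[0] XOR d[1] XOR ... XOR d[n-1].
       Reconstructing one lost block uses the identical loop over the
       surviving blocks plus the parity block. */
    #include <stddef.h>
    #include <stdint.h>

    #define BLOCK 4096                       /* illustrative block size */

    void parity(const uint8_t d[][BLOCK], int ndisks, uint8_t out[BLOCK]) {
        for (size_t i = 0; i < BLOCK; i++) {
            uint8_t p = 0;
            for (int k = 0; k < ndisks; k++)
                p ^= d[k][i];
            out[i] = p;
        }
    }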

December 17, 2009

Topic: File Systems and Storage

6 comments

Data in Flight:
How streaming SQL technology can help solve the Web 2.0 data crunch.

Web applications produce data at colossal rates, and those rates compound every year as the Web becomes more central to our lives. Other data sources such as environmental monitoring and location-based services are a rapidly expanding part of our day-to-day experience. Even as throughput is increasing, users and business owners expect to see their data with ever-decreasing latency. Advances in computer hardware (cheaper memory, cheaper disks, and more processing cores) are helping somewhat, but not enough to keep pace with the twin demands of rising throughput and decreasing latency.

December 10, 2009

Topic: Databases

1 comment

Some Rules and Restrictions May Apply:
An inquiry into contracts and the Next Big Thing

In many of our interactions with the outside world (solipsists can stop reading now, if indeed they ever started) we enter into contracts with diverse entities, some up front, some lurking below the surface. The commonly construed contractual theme is a mutual agreement where each party accepts certain costs and responsibilities, and in return can rely on certain benefits and rewards.

December 2, 2009

Topic: Development

2 comments

Maximizing Power Efficiency with Asymmetric Multicore Systems:
Asymmetric multicore systems promise to use a lot less energy than conventional symmetric processors. How can we develop software that makes the most out of this potential?

In computing systems, a CPU is usually one of the largest consumers of energy. For this reason, reducing CPU power consumption has been a hot topic in the past few years in both the academic community and the industry. In the quest to create more power-efficient CPUs, several researchers have proposed an asymmetric multicore architecture that promises to save a significant amount of power while delivering similar performance to conventional symmetric multicore processors.

November 20, 2009

Topic: Power Management

1 comment

Other People’s Data:
Companies have access to more types of external data than ever before. How can they integrate it most effectively?

Every organization bases some of its critical decisions on external data sources. In addition to traditional flat file data feeds, Web services and Web pages are playing an increasingly important role in data warehousing. The growth of Web services has made data feeds easily consumable at the departmental and even end-user levels. There are now more than 1,500 publicly available Web services and thousands of data mashups ranging from retail sales data to weather information to United States census data. These mashups are evidence that when users need information, they will find a way to get it.

November 13, 2009

Topic: Data

0 comments

What DNS Is Not:
DNS is many things to many people - perhaps too many things to too many people.

DNS (Domain Name System) is a hierarchical, distributed, autonomous, reliable database. The first and only of its kind, it offers realtime performance levels to a global audience with global contributors. Every TCP/IP traffic flow, including every World Wide Web page view, begins with at least one DNS transaction. DNS is, in a word, glorious.
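
For most programs, that first DNS transaction is triggered through the stub resolver, typically via getaddrinfo(3). A minimal sketch; the host name and port are examples:

    /* Resolve a name to one or more socket addresses via the system's
       stub resolver - the usual trigger for a DNS transaction. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>

    int main(void) {
        struct addrinfo hints = {0}, *res;
        hints.ai_socktype = SOCK_STREAM;
        int rc = getaddrinfo("www.example.com", "80", &hints, &res);
        if (rc != 0) {
            fprintf(stderr, "lookup failed: %s\n", gai_strerror(rc));
            return 1;
        }
        /* res now holds a list of resolved addresses, ready for connect() */
        freeaddrinfo(res);
        return 0;
    }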

November 5, 2009

Topic: Networks

42 comments

Merge Early, Merge Often:
Integrating changes in branched development

When doing merged development, how often should you merge? It’s obvious that if I wait too long, then I spend days in merge hell, where nothing seems to work and where I wind up using the revert command more often than commit; but the whole point of branched development is to be able to protect the main branch of development from unstable changes. Is there a happy middle ground?

October 29, 2009

Topic: Development

4 comments

You Don’t Know Jack About Software Maintenance:
Long considered an afterthought, software maintenance is easiest and most effective when built into a system from the ground up.

Everyone knows maintenance is hard and boring, and avoids doing it. Besides, their pointy-haired bosses say things like: "No one needs to do maintenance - that’s a waste of time."

October 23, 2009

Topic: Development

0 comments

Metamorphosis: the Coming Transformation of Translational Systems Biology:
In the future, computers will mine patient data to deliver faster, cheaper healthcare, but how will we design them to give informative causal explanations? Ideas from philosophy, model checking, and statistical testing can pave the way for the needed translational systems biology.

One morning, as Gregorina Samsa was waking up from anxious dreams, she discovered that she had become afflicted with certain mysterious flu-like symptoms that appeared without any warning. Equally irritating, this capricious metamorphosis seemed impervious to a rational explanation in terms of causes and effects. "What’s happened to me?" she thought. Before seeing a doctor, she decided to find out more about what might ail her. She logged on to a Web site where she annotated a timeline with what she could remember. Since March, she’d had more headaches than usual, and then in April she had begun to experience more fatigue after exercise, and as of July she had also experienced occasional lapses in memory.

October 12, 2009

Topic: Bioscience

0 comments

Probing Biomolecular Machines with Graphics Processors:
The evolution of GPU processors and programming tools is making advanced simulation and analysis techniques accessible to a growing community of biomedical scientists.

Computer simulation has become an integral part of the study of the structure and function of biological molecules. For years, parallel computers have been used to conduct these computationally demanding simulations and to analyze their results. These simulations function as a "computational microscope," allowing the scientist to observe details of molecular processes too small, fast, or delicate to capture with traditional instruments. Over time, commodity GPUs (graphics processing units) have evolved into massively parallel computing devices, and more recently it has become possible to program them in dialects of the popular C/C++ programming languages.

October 6, 2009

Topic: Bioscience

0 comments

Unifying Biological Image Formats with HDF5:
The biosciences need an image format capable of high performance and long-term maintenance. Is HDF5 the answer?

The biological sciences need a generic image format suitable for long-term storage and capable of handling very large images. Images convey profound ideas in biology, bridging across disciplines. Digital imagery began 50 years ago as an obscure technical phenomenon. Now it is an indispensable computational tool. It has produced a variety of incompatible image file formats, most of which are already obsolete.

October 4, 2009

Topic: Bioscience

2 comments

A Threat Analysis of RFID Passports:
Do RFID passports make us vulnerable to identity theft?

It’s a beautiful day when your plane touches down at the airport. After a long vacation, you feel rejuvenated, refreshed, and relaxed. When you get home, everything is how you left it. Everything, that is, but a pile of envelopes on the floor that jammed the door as you tried to swing it open. You notice a blinking light on your answering machine and realize you’ve missed dozens of messages. As you click on the machine and pick up the envelopes, you find that most of the messages and letters are from debt collectors. Most of the envelopes are stamped "urgent," and as you sift through the pile you can hear the messages from angry creditors demanding that you call them immediately.

October 1, 2009

Topic: Privacy and Rights

2 comments

A Conversation with David Shaw:
In a rare interview, David Shaw discusses how he’s using computer science to unravel the mysteries of biochemistry.

In this interview, Hanrahan and Shaw discuss Shaw’s latest project at D. E. Shaw Research: Anton, a special-purpose supercomputer designed to speed up molecular dynamics simulations by several orders of magnitude. Four 512-processor machines are now active and already helping scientists to understand how proteins interact with each other and with other molecules at an atomic level of detail. Shaw’s hope is that these "molecular microscopes" will help unravel some biochemical mysteries that could lead to the development of more effective drugs for cancer and other diseases.

September 16, 2009

Topic: Bioscience

2 comments

Communications Surveillance: Privacy and Security at Risk:
As the sophistication of wiretapping technology grows, so too do the risks it poses to our privacy and security.

We all know the scene: It is the basement of an apartment building and the lights are dim. The man is wearing a trench coat and a fedora pulled down low to hide his face. Between the hat and the coat we see headphones, and he appears to be listening intently to the output of a set of alligator clips attached to a phone line. He is a detective eavesdropping on a suspect’s phone calls. This is wiretapping. It doesn’t have much to do with modern electronic eavesdropping, which is about bits, packets, switches, and routers.

September 11, 2009

Topic: Privacy and Rights

6 comments

Four Billion Little Brothers? Privacy, mobile phones, and ubiquitous data collection:
Participatory sensing technologies could improve our lives and our communities, but at what cost to our privacy?

They place calls, surf the Internet, and there are close to 4 billion of them in the world. Their built-in microphones, cameras, and location awareness can collect images, sound, and GPS data. Beyond chatting and texting, these features could make phones ubiquitous, familiar tools for quantifying personal patterns and habits. They could also be platforms for thousands to document a neighborhood, gather evidence to make a case, or study mobility and health. This data could help you understand your daily carbon footprint, exposure to air pollution, exercise habits, and frequency of interactions with family and friends.

August 27, 2009

Topic: Privacy and Rights

9 comments

Making Sense of Revision-control Systems:
Whether distributed or centralized, all revision-control systems come with complicated sets of tradeoffs. How do you find the best match between tool and team?

Modern software is tremendously complicated, and the methods that teams use to manage its development reflect this complexity. Though many organizations use revision-control software to track and manage the complexity of a project as it evolves, the topic of how to make an informed choice of revision-control tools has received scant attention. Until fairly recently, the world of revision control was moribund, so there was simply not much to say on this subject.

August 21, 2009

Topic: Tools

7 comments

The Meaning of Maintenance:
Software maintenance is more than just bug fixes.

Isn’t software maintenance a misnomer? I’ve never heard of anyone reviewing a piece of code every year, just to make sure it was still in good shape. It seems like software maintenance is really just a cover for bug fixing. When I think of maintenance I think of taking my car in for an oil change, not fixing a piece of code. Are there any people who actually review code after it has been running in a production environment?

August 14, 2009

Topic: Quality Assurance

6 comments

GFS: Evolution on Fast-forward:
A discussion between Kirk McKusick and Sean Quinlan about the origin and evolution of the Google File System

During the early stages of development at Google, the initial thinking did not include plans for building a new file system. While work was still being done on one of the earliest versions of the company’s crawl and indexing system, however, it became quite clear to the core engineers that they really had no other choice, and GFS (Google File System) was born.

August 7, 2009

Topic: File Systems and Storage

9 comments

Monitoring and Control of Large Systems with MonALISA:
MonALISA developers describe how it works, the key design principles behind it, and the biggest technical challenges in building it.

The HEP (high energy physics) group at the California Institute of Technology started developing the MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework in 2002, aiming to provide a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. Its initial target applications are the grid systems and networks supporting data processing and analysis for HEP collaborations. Our strategy in trying to satisfy the demands of data-intensive applications was to move to more synergetic relationships between the applications, computing, and storage facilities and the network infrastructure.

July 30, 2009

Topic: Distributed Computing

0 comments

Reveling in Constraints:
The Google Web Toolkit is an end-run around Web development obstacles.

The Web’s trajectory toward interactivity, which began with humble snippets of JavaScript used to validate HTML forms, has really started to accelerate of late. A new breed of Web applications is starting to emerge that sports increasingly interactive user interfaces based on direct manipulations of the browser DOM (document object model) via ever-increasing amounts of JavaScript. Google Wave, publicly demonstrated for the first time in May 2009 at the Google I/O Developer Conference in San Francisco, exemplifies this new style of Web application.

July 21, 2009

Topic: Web Development

6 comments

Words Fail Them:
Dedesignating and other linguistic hazards

A recent announcement on the closing of an English nudist beach (have I captured your attention so early?) concluded with an apology to "all the naturalists" affected. This upset the "bird watchers," both naturalists and naturists (nudge, nudge), as well as those "word watchers" devoted to gooder English. Miffed and bemused letters appeared in Sally Baker’s London Times Feedback column, the traditional sounding board for disgruntled pop grammarians.

July 15, 2009

Topic: Development

3 comments

The Pathologies of Big Data:
Scale up your datasets enough and all your apps will come undone. What are the typical problems and where do the bottlenecks generally surface?

What is "big data" anyway? Gigabytes? Terabytes? Petabytes? A brief personal memory may provide some perspective. In the late 1980s at Columbia University I had the chance to play around with what at the time was a truly enormous "disk": the IBM 3850 MSS (Mass Storage System). The MSS was actually a fully automatic robotic tape library and associated staging disks to make random access, if not exactly instantaneous, at least fully transparent. In Columbia’s configuration, it stored a total of around 100 GB. It was already on its way out by the time I got my hands on it, but in its heyday, the early to mid-1980s, it had been used to support access by social scientists to what was unquestionably "big data" at the time: the entire 1980 U.S.

July 6, 2009

Topic: Databases

3 comments

Painting the Bike Shed:
A sure-fire technique for ending pointless coding debates

Last week one of our newer engineers checked in a short program to help in debugging problems in the code that we’re developing. Even though this was a test program, several people read the code and then commented on the changes they wanted to see. The code didn’t have any major problems, but it seemed to generate a lot of e-mail for what was being checked in. Eventually the comments in the thread were longer than the program itself. At some point in the thread the programmer who submitted the code said, "Look, I’ve checked in the code; you can paint the bike shed any color you want now," and then refused to make any more changes to the code.

June 25, 2009

Topic: Code

1 comment

Browser Security: Lessons from Google Chrome:
Google Chrome developers focused on three key problems to shield the browser from attacks.

The Web has become one of the primary ways people interact with their computers, connecting people with a diverse landscape of content, services, and applications. Users can find new and interesting content on the Web easily, but this presents a security challenge: malicious Web-site operators can attack users through their Web browsers. Browsers face the challenge of keeping their users safe while providing a rich platform for Web applications.

June 18, 2009

Topic: Web Security

6 comments

Calendar:
9-Apr

IPSN (ACM Conference on Information Processing in Sensor Networks)

June 17, 2009

0 comments

Book Reviews: PHP Objects, Patterns, and Practice:
PHP Objects, Patterns, and Practice. Matt Zandstra, Apress, 2007, $44.99. ISBN: 1590599098. This is yet another book on PHP, but reading it will bring a number of pleasant surprises for many readers.

June 17, 2009

0 comments

Cloud Computing: An Overview:
A summary of important cloud-computing issues distilled from ACM CTO Roundtables

Probably more than anything we’ve seen in IT since the invention of timesharing or the introduction of the PC, cloud computing represents a paradigm shift in the delivery architecture of information services. This overview presents some of the key topics discussed during the ACM Cloud Computing and Virtualization CTO Roundtables of 2008. While not intended to replace the in-depth roundtable discussions, the overview summarizes the fundamental issues generally agreed upon by the panels and should help readers to assess the applicability of cloud computing to their application areas.

June 12, 2009

Topic: Distributed Computing

8 comments

CTO Roundtable: Cloud Computing:
Our panel of experts discuss cloud computing and how companies can make the best use of it.

Many people reading about cloud computing in the trade journals will think it’s a panacea for all their IT problems. It is not. In this CTO Roundtable discussion we hope to give practitioners useful advice on how to evaluate cloud computing for their organizations. Our focus will be on the SMB (small- to medium-size business) IT managers who are underfunded, overworked, and have lots of assets tied up in out-of-date hardware and software. To what extent can cloud computing solve their problems? With the help of five current thought leaders in this quickly evolving field, we offer some answers to that question.

June 2, 2009

Topic: Distributed Computing

0 comments

One Peut-Être, Two Peut-Être, Three Peut-Être, More:
Puns and allusions

One is always loath to explain a joke. In face-to-face badinage, the joker can judge the comprehension of the jokes from their immediate reactions. Failing to win the approving smiles, chuckles, or belly laughs, the raconteur has a choice of remedies including the Quick Exit Strategy ("What a dumb crowd. I’m out of here!"). The modest teller will accept the blame ("Oh, I forgot to mention that the mother-in-law was a blond Republican Fortran programmer!") and order drinks all around.

May 18, 2009

Topic: Development

2 comments

Whither Sockets?:
High bandwidth, low latency, and multihoming challenge the sockets API.

One of the most pervasive and longest-lasting interfaces in software is the sockets API. Developed by the Computer Systems Research Group at the University of California at Berkeley, the sockets API was first released as part of the 4.1c BSD operating system in 1982. While there are longer-lived APIs, it is quite impressive for an API to have remained in use and largely unchanged for 27 years. The only major update to the sockets API has been the extension of ancillary routines to accommodate the larger addresses used by IPv6.
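
For readers who have never seen it spelled out, the client side of the API is only a handful of calls, essentially unchanged since its 4.1c BSD debut. A minimal sketch; the IPv4 address, port, and request string are examples:

    /* The classic sockets sequence: socket(), connect(), write(), close(). */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(80);                           /* example port */
        inet_pton(AF_INET, "93.184.216.34", &addr.sin_addr); /* example address */

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");
            return 1;
        }
        const char *req = "HEAD / HTTP/1.0\r\n\r\n";
        write(fd, req, strlen(req));
        close(fd);
        return 0;
    }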

May 11, 2009

Topic: Networks

20 comments

A Conversation with Arthur Whitney:
Can code ever be too terse? The designer of the K and Q languages discusses this question and many more with Queue editorial board member Bryan Cantrill.

When it comes to programming languages, Arthur Whitney is a man of few words. The languages he has designed, such as A, K, and Q, are known for their terse, often cryptic syntax and tendency to use single ASCII characters instead of reserved words. While these languages may mystify those used to wordier languages such as Java, their speed and efficiency have made them popular with engineers on Wall Street.

April 20, 2009

Topic: Programming Languages

4 comments

Introducing...acmqueue:
A new Web - and print! - presence for Queue

You may already know that Queue’s new Web site, acmqueue, was launched at the beginning of March. If not, you owe it to yourself to get acquainted with it, as the new site offers readers a wider choice of delivery options and an expanded range of fresh content. Among the new offerings is Planet Queue, where Queue authors blog about the contemporary relevance of classic engineering work detailed in important articles contained in the ACM Digital Library.

April 17, 2009

0 comments

All-Optical Computing and All-Optical Networks are Dead:
Anxiously awaiting the arrival of all-optical computing? Don’t hold your breath.

We’re a venture capitalist and a communications researcher, and we come bearing bad news: optical computers and all-optical networks aren’t going to happen anytime soon. All those well-intentioned stories about computers operating at the speed of light, computers that would free us from Internet delays and relieve us from the tyranny of slow and hot electronic devices were, alas, overoptimistic. We won’t be computing or routing at the speed of light anytime soon. (In truth, we probably should have told you this about two years ago, but we only recently met, compared notes, and realized our experiences were consistent.)

April 17, 2009

Topic: Networks

3 comments

Network Front-end Processors, Yet Again:
The history of NFE processors sheds light on the tradeoffs involved in designing network stack software.

The history of the NFE (network front-end) processor, currently best known as a TOE (TCP offload engine), extends all the way back to the Arpanet IMP (interface message processor) and possibly before. The notion is beguilingly simple: partition the work of executing communications protocols from the work of executing the "applications" that require the services of those protocols. That way, the applications and the network machinery can achieve maximum performance and efficiency, possibly taking advantage of special hardware performance assistance. While this looks utterly compelling on the whiteboard, architectural and implementation realities intrude, often with considerable force.

April 17, 2009

Topic: Networks

4 comments

Fighting Physics: A Tough Battle:
Thinking of doing IPC over the long haul? Think again. The laws of physics say you’re hosed.

Over the past several years, SaaS (software as a service) has become an attractive option for companies looking to save money and simplify their computing infrastructures. SaaS is an interesting group of techniques for moving computing from the desktop to the cloud; however, as it grows in popularity, engineers should be aware of some of the fundamental limitations they face when developing these kinds of distributed applications - in particular, the finite speed of light.
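
The floor that physics sets is easy to compute. Light in fiber travels at roughly two-thirds of c, about 2 x 10^8 m/s, so for a one-way path of distance d the round-trip time is bounded by

\[
t_{\mathrm{RTT}} \;\ge\; \frac{2d}{2\times10^{8}\ \mathrm{m/s}}.
\]

For New York to London (d is roughly 5,600 km, an approximation), that gives t_RTT >= 2 x 5.6x10^6 / 2x10^8, or about 56 ms, before a single byte of queuing, routing, or processing delay is added.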

April 15, 2009

Topic: Networks

1 comment

Cybercrime 2.0: When the Cloud Turns Dark:
Web-based malware attacks are more insidious than ever. What can be done to stem the tide?

As the Web has become vital for day-to-day transactions, it has also become an attractive avenue for cybercrime. Financially motivated, the crime we see on the Web today is quite different from the more traditional network attacks. A few years ago Internet attackers relied heavily on remotely exploiting servers identified by scanning the Internet for vulnerable network services. Autonomously spreading computer worms such as Code Red and SQLSlammer were examples of such scanning attacks. Their huge scale put even the Internet at large at risk; for example, SQLSlammer generated traffic sufficient to melt down backbones.

March 20, 2009

Topic: Web Security

0 comments

How Do I Model State? Let Me Count the Ways:
A study of the technology and sociology of Web services specifications

There is nothing like a disagreement concerning an arcane technical matter to bring out the best (and worst) in software architects and developers. As every reader knows from experience, it can be hard to get to the bottom of what exactly is being debated. One reason for this lack of clarity is often that different people care about different aspects of the problem. In the absence of agreement concerning the problem, it can be difficult to reach an agreement about the solutions.

March 17, 2009

Topic: Web Services

0 comments

Security in the Browser:
Web browsers leave users vulnerable to an ever-growing number of attacks. Can we make them secure while preserving their usability?

Sealed in a depleted uranium sphere at the bottom of the ocean. That’s the often-mentioned description of what it takes to make a computer reasonably secure. Obviously, in the Internet age or any other, such a machine would be fairly useless.

March 16, 2009

Topic: Web Security

0 comments

Don’t be Typecast as a Software Developer:
Kode Vicious’s temper obviously suffers from having to clean up after the mistakes of his peers. What would he have them learn now so that he can look forward to a graceful and mellow old age?

I would like to think that learning more will help me in my everyday job of writing glue and customization code at a systems integrator. But the obvious applicable knowledge is specific to tools and packages that may become obsolete or discontinued even within the lifetime of the project, and in some cases have already reached this destination.

March 13, 2009

Topic: Development

1 comment

Commentary: A Trip Without a Roadmap:
Instead of simply imagining what your users want or need, it’s always a good idea to first get their input.

Viewed broadly, programming projects fail far more often than they succeed. In some cases, failure is a useful step toward success, but all too often it is simply failure.

March 11, 2009

Topic: Web Services

0 comments

Debugging AJAX in Production:
Lacking proper browser support, what steps can we take to debug production AJAX code?

The JavaScript language has a curious history. What began as a simple tool to let Web developers add dynamic elements to otherwise static Web pages has since evolved into the core of a complex platform for delivering Web-based applications. In the early days, the language’s ability to handle failure silently was seen as a benefit. If an image rollover failed, it was better to preserve a seamless Web experience than to present the user with unsightly error dialogs.

March 11, 2009

Topic: Web Development

2 comments

Case Study: Making the Move to AJAX:
What a software-as-a-service provider learned from using an AJAX framework for RIA development

Small start-up companies often face a bewildering array of technical choices: how to deliver their application, what language to use, whether to employ existing components (either commercial or open source) or roll their own... and the list goes on. What’s more, the decisions surrounding these choices typically need to be made quickly. This case study offers a realistic representation of the sorts of challenges a young start-up company faces when working to deploy a product on the Web. As with many startups, this is also a story that does not have a happy ending.

March 11, 2009

Topic: Web Development

0 comments

The Flaws of Nature:
And the perils of indecision. The latest musings of Stan Kelly-Bootle.

Multicolumnar ideas had been lurking like Greek temples in my so-called mind as 2008 came to an end. There are so many annual journalistic clichés available, looking back at all our past years’ mistakes and resolving never to repeat them in 2009. One annoyance that I must get off my chest here and now: Can we ban such empty constructions as "X is much worse than you may think"? Or "Y is much simpler than many suppose"? We may never have thought of X one way or the other, or made suppositions about the ease of Y, yet we tend to nod and move on as though some meaningful proposition has been asserted; and, worse, that some judgment has been validated beyond dispute.

February 23, 2009

Topic: Code

0 comments

Calendar:
February 2009

SCALE (Southern California Linux Expo)

February 23, 2009

0 comments

The Obama Campaign:
The Obama campaign has been praised for its innovative use of technology. What was the key to its success?

On January 3, 2008, I sat in the boiler room waiting for the caucus to commence. At 7 p.m. the doors had been open for about an hour: months of preparation were coming to fruition. The phone calls had been made, volunteers had been canvassing, and now the moment had come. Could Barack Obama win the Iowa caucus? Doors closed and the first text message came from a precinct: it looked like a large attendance. Then came the second, the third, the fourth. Each was typed into our model, and a projection was starting to form. The fifth, the sixth, and now the seventh.

February 23, 2009

Topic: Web Development

1 comment

CTO Roundtable: Virtualization Part I:
CTOs from key players in the virtualization market examine current trends in virtualization and how IT managers can make the most effective use of it.

The topic of this forum is virtualization. When investing in virtualization technologies, IT managers need to know what is considered standard practice and what is considered too leading edge and risky for near-term deployment. For this forum we’ve assembled several leading experts on virtualization to discuss what those best practices should be. While the participants might not always agree with each other, we hope their insights will help IT managers navigate the virtualization landscape and make informed decisions on how best to use the technology.

February 23, 2009

Topic: Virtualization

0 comments

Purpose-Built Languages:
While often breaking the rules of traditional language design, the growing ecosystem of purpose-built "little" languages is an essential part of systems development.

In my college computer science lab, two eternal debates flourished during breaks from long nights of coding and debugging: "emacs versus vi?"; and "what is the best programming language?" Later, as I began my career in industry, I noticed that the debate over programming languages was also going on in the hallways of Silicon Valley campuses. It was the ’90s, and at Sun many of us were watching Java claim significant mindshare among developers, particularly those previously developing in C or C++.

February 23, 2009

Topic: Programming Languages

3 comments

A Conversation with Van Jacobson:
The TCP/IP pioneer discusses the promise of content-centric networking with BBN chief scientist Craig Partridge.

To those with even a passing interest in the history of the Internet and TCP/IP networking, Van Jacobson will be a familiar name. During his 25 years at Lawrence Berkeley National Laboratory and subsequent leadership positions at Cisco Systems and Packet Design, Jacobson has helped invent and develop some of the key technologies on which the Internet is based.

February 23, 2009

Topic: Networks

1 comment

Pride and Prejudice (The Vasa):
What can software engineers learn from shipbuilders?

I teach computer science to undergraduate students at a school in California, and one of my friends in the English department, of all places, made an interesting comment to me the other day. He wanted to know if my students had ever read Frankenstein and if I felt it would make them better engineers. I asked him why he thought I should assign this book, and he said he felt that a book could change the way in which people think about their relationship to the world, and in particular to technology. He wasn’t being condescending; he was dead serious. Given the number of Frankenstein-like projects that seem to get built with information technology, perhaps it’s not a bad idea to teach these lessons to computer science undergraduates, to give them some notion that they have a social responsibility.

February 23, 2009

Topic: Education

0 comments

Calendar:
January 2009

Macworld Conference and Expo

January 8, 2009

0 comments

CTO Roundtable: Storage Part II:
Leaders in the storage industry ponder upcoming technologies and trends.

The following conversation is the second installment of a CTO roundtable featuring seven world-class experts on storage technologies. This series of CTO forums focuses on the near-term challenges and opportunities facing the commercial computing community. Overseen by the ACM Professions Board, the goal of the series is to provide IT managers with access to expert advice to help inform their decisions when investing in new architectures and technologies.

January 8, 2009

Topic: File Systems and Storage

0 comments

Code Spelunking Redux:
Is this subject important enough to warrant two articles in five years? I believe it is.

It has been five years since I first wrote about code spelunking, and though systems continue to grow in size and scope, the tools we use to understand those systems are not growing at the same rate. In fact, I believe we are steadily losing ground. So why should we go over the same ground again? Is this subject important enough to warrant two articles in five years? I believe it is.

January 8, 2009

Topic: Code

0 comments

Better Scripts, Better Games:
Smarter, more powerful scripting languages will improve game performance while making gameplay development more efficient.

The video game industry earned $8.85 billion in revenue in 2007, almost as much as movies made at the box office. Much of this revenue was generated by blockbuster titles created by large groups of people. Though large development teams are not unheard of in the software industry, game studios tend to have unique collections of developers. Software engineers make up a relatively small portion of the game development team, while the majority of the team consists of content creators such as artists, musicians, and designers.

January 8, 2009

Topic: Game Development

2 comments

Scaling in Games & Virtual Worlds:
Online games and virtual worlds have familiar scaling requirements, but don’t be fooled: everything you know is wrong.

I used to be a systems programmer, working on infrastructure used by banks, telecom companies, and other engineers. I worked on operating systems. I worked on distributed middleware. I worked on programming languages. I wrote tools. I did all of the things that hard-core systems programmers do.

January 8, 2009

Topic: Game Development

3 comments

Debugging Devices:
What is the proper way to debug malfunctioning hardware?

I suggest taking a very sharp knife and cutting the board traces at random until the thing either works, or smells funny! I gather you’re not asking the same question that led me to use the word changeineer in another column. I figure you have an actually malfunctioning piece of hardware and that you’ve already sent three previous versions back to the manufacturer, complete with nasty letters containing veiled references to legal action should they continue to send you broken products.

January 8, 2009

Topic: Debugging

0 comments

Calendar:
November 2008

ApacheCon

December 4, 2008

0 comments

XML Fever:
Don’t let delusions about XML develop into a virulent strain of XML fever.

XML (Extensible Markup Language), which just celebrated its 10th birthday, is one of the big success stories of the Web. Apart from basic Web technologies (URIs, HTTP, and HTML) and the advanced scripting driving the Web 2.0 wave, XML is by far the most successful and ubiquitous Web technology. With great power, however, comes great responsibility, so while XML’s success is well earned as the first truly universal standard for structured data, it must now deal with numerous problems that have grown up around it.

December 4, 2008

Topic: Web Development

0 comments

CTO Roundtable: Storage Part I:
Leaders in the storage world offer valuable advice for making more effective architecture and technology decisions.

Featuring seven world-class storage experts, this discussion is the first in a new series of CTO Roundtable forums focusing on the near-term challenges and opportunities facing the commercial computing community. Overseen by the ACM Professions Board, this series has as its goal to provide working IT managers with expert advice so they can make better decisions when investing in new architectures and technologies. This is the first installment of the discussion, with a second installment slated for publication in a later issue.

December 4, 2008

Topic: File Systems and Storage

0 comments

High Performance Web Sites:
Want to make your Web site fly? Focus on front-end performance.

Google Maps, Yahoo! Mail, Facebook, MySpace, YouTube, and Amazon are examples of Web sites built to scale. They access petabytes of data sending terabits per second to millions of users worldwide. The magnitude is awe-inspiring. Users view these large-scale Web sites from a narrower perspective. The typical user has megabytes of data that are downloaded at a few hundred kilobits per second. Users are not so interested in the massive number of requests per second being served; they care more about their individual requests.

December 4, 2008

Topic: Web Services

2 comments

Improving Performance on the Internet:
Given the Internet’s bottlenecks, how can we build fast, scalable content-delivery systems?

When it comes to achieving performance, reliability, and scalability for commercial-grade Web applications, where is the biggest bottleneck? In many cases today, we see that the limiting bottleneck is the middle mile, or the time data spends traveling back and forth across the Internet, between origin server and end user.

December 4, 2008

Topic: Web Services

0 comments

Eventually Consistent:
Building reliable distributed systems at a worldwide scale demands trade-offs between consistency and availability.

At the foundation of Amazon’s cloud computing are infrastructure services such as Amazon’s S3 (Simple Storage Service), SimpleDB, and EC2 (Elastic Compute Cloud) that provide the resources for constructing Internet-scale computing platforms and a great variety of applications. The requirements placed on these infrastructure services are very strict; they need to score high marks in the areas of security, scalability, availability, performance, and cost effectiveness, and they need to meet these requirements while serving millions of customers around the globe, continuously.
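
The consistency-availability trade-off in the subtitle is commonly made concrete with quorum arithmetic. With N replicas of a data item, W replicas that must acknowledge a write, and R replicas consulted on a read (our notation, though standard in this literature):

\[
W + R > N \;\Rightarrow\; \text{every read quorum overlaps every write quorum, so reads see the latest write;}
\]
\[
W + R \le N \;\Rightarrow\; \text{reads may miss the latest write, and consistency is only eventual.}
\]

Lowering W and R improves availability and latency at the cost of that overlap guarantee.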

December 4, 2008

Topic: Web Services

4 comments

Building Scalable Web Services:
Build only what you really need.

In the early days of the Web we severely lacked tools and frameworks, and in retrospect it seems noteworthy that those early Web services scaled at all. Nowadays, while the tools have progressed, so too have expectations with respect to richness of interaction, performance, and scalability. In view of these raised expectations it is advisable to build only what you really need, relying on other people’s work where possible. Above all, be cautious in choosing when, what, and how to optimize.

December 4, 2008

Topic: Web Services

2 comments

Get Real about Realtime:
Dear KV, I’m working on a networked system that has become very sensitive to timing issues.

I’m working on a networked system that has become very sensitive to timing issues. When the system was first developed the bandwidth requirements were well within the tolerance of off-the-shelf hardware and software, but in the past three years things have changed. The data stream has remained the same but now the system is being called on to react more quickly to events as they arrive. The system is written in C++ and runs on top of Linux. In a recent project meeting I suggested that the quickest route to decreasing latency was to move to a realtime version of Linux, since realtime operating systems are designed to provide the lowest-latency services to applications.

December 4, 2008

Topic: Code

0 comments

Affine Romance:
Buyer (and seller) beware

There’s a British idiom, “Suck it and see,” the epitome of skepticism, which despite its coarse brevity could well replace whole libraries of posh philosophic bigtalk about the fabric of reality. A less aggressive version is “Show me, I’m from Missouri,” which requires the proper Southern Mizoorah drawl for maximum impact. Wherever you’re from, the message is one of eternal vigilance in the face of fancy claims.

October 24, 2008

Topic: Code

0 comments

Calendar:
September 2008

Web 2.0 Expo

October 24, 2008

0 comments

Software Transactional Memory: Why Is It Only a Research Toy?:
The promise of STM may well be undermined by its overheads and limited workload applicability.

TM (transactional memory) is a concurrency control paradigm that provides atomic and isolated execution for regions of code. TM is considered by many researchers to be one of the most promising solutions to address the problem of programming multicore processors. Its most appealing feature is that most programmers only need to reason locally about shared data accesses, mark the code region to be executed transactionally, and let the underlying system ensure the correct concurrent execution. This model promises to provide the scalability of fine-grain locking, while avoiding common pitfalls of lock composition such as deadlock.
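
As a concrete, if toy, illustration of that model, here is a minimal optimistic STM in Python: reads are recorded, writes are buffered, and commit validates and retries on conflict. All names here are ours, and real STM systems add far more (consistent read snapshots, contention management, fine-grained commit locking):

    import threading

    _commit_lock = threading.Lock()      # commits are serialized in this toy

    class TVar:
        """A transactional variable: a value plus a version counter."""
        def __init__(self, value):
            self.value = value
            self.version = 0

    class Conflict(Exception):
        """Raised when validation fails; the transaction is re-executed."""

    class Transaction:
        def __init__(self):
            self.reads = {}    # TVar -> version observed at first read
            self.writes = {}   # TVar -> buffered new value

        def read(self, tvar):
            if tvar in self.writes:      # read-your-own-writes
                return self.writes[tvar]
            self.reads.setdefault(tvar, tvar.version)
            return tvar.value

        def write(self, tvar, value):
            self.writes[tvar] = value    # buffered until commit

        def commit(self):
            with _commit_lock:
                # Validate: everything we read must be unchanged.
                for tvar, seen in self.reads.items():
                    if tvar.version != seen:
                        raise Conflict()
                # Publish buffered writes atomically.
                for tvar, value in self.writes.items():
                    tvar.value = value
                    tvar.version += 1

    def atomic(fn):
        """Run fn(tx, ...) as a transaction, retrying until it commits."""
        def run(*args):
            while True:
                tx = Transaction()
                try:
                    result = fn(tx, *args)
                    tx.commit()
                    return result
                except Conflict:
                    continue             # lost a race; re-execute
        return run

    # Usage: transfer between two accounts without explicit locking.
    a, b = TVar(100), TVar(0)

    @atomic
    def transfer(tx, amount):
        tx.write(a, tx.read(a) - amount)
        tx.write(b, tx.read(b) + amount)

    transfer(10)
    print(a.value, b.value)              # prints: 90 10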

October 24, 2008

Topic: Concurrency

1 comment

Parallel Programming with Transactional Memory:
While sometimes even writing regular, single-threaded programs can be quite challenging, trying to split a program into multiple pieces that can be executed in parallel adds a whole dimension of additional problems. Drawing upon the transaction concept familiar to most programmers, transactional memory was designed to solve some of these problems and make parallel programming easier. Ulrich Drepper from Red Hat shows us how it’s done.

With the speed of individual cores no longer increasing at the rate we came to love over the past decades, programmers have to look for other ways to increase the speed of our ever-more-complicated applications. The functionality provided by the CPU manufacturers is an increased number of execution units, or CPU cores.

October 24, 2008

Topic: Concurrency

1 comment

Erlang for Concurrent Programming:
What role can programming languages play in dealing with concurrency? One answer can be found in Erlang, a language designed for concurrency from the ground up.

Erlang is a language developed to let mere mortals write, test, deploy, and debug fault-tolerant concurrent software. Developed at the Swedish telecom company Ericsson in the late 1980s, it started as a platform for developing soft realtime software for managing phone switches. It has since been open-sourced and ported to several common platforms, finding a natural fit not only in distributed Internet server applications, but also in graphical user interfaces and ordinary batch applications.

October 24, 2008

Topic: Concurrency

0 comments

Real-World Concurrency:
In this look at how concurrency affects practitioners in the real world, Cantrill and Bonwick argue that much of the anxiety over concurrency is unwarranted.

Software practitioners today could be forgiven if recent microprocessor developments have given them some trepidation about the future of software. While Moore’s law continues to hold (that is, transistor density continues to double roughly every 18 months), as a result of both intractable physical limitations and practical engineering considerations, that increasing density is no longer being spent on boosting clock rate. Instead, it is being used to put multiple CPU cores on a single CPU die.

October 24, 2008

Topic: Concurrency

0 comments

A Conversation with Steve Bourne, Eric Allman, and Bryan Cantrill:
In part two of their discussion, our editorial board members consider XP and Agile.

In the July/August 2008 issue of ACM Queue we published part one of a two-part discussion about the practice of software engineering. The goal was to gain some perspective on the tools, techniques, and methodologies that software engineers use in their daily lives. Three members of Queue’s editorial advisory board participated: Steve Bourne, Eric Allman, and Bryan Cantrill, each of whom has made significant and lasting real-world contributions to the field (for more information on each of the participants, see part one). In part two we rejoin their conversation as they discuss XP (Extreme Programming) and Agile.

October 24, 2008

Topic: Development

1 comment

Beautiful Code Exists, if You Know Where to Look:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

I’ve been reading your rants for a while now and I can’t help asking, is there any code you do like? You always seem so negative; I really wonder if you actually believe the world of programming is such an ugly place or if there is, somewhere, some happy place that you go to but never tell your readers about.

October 24, 2008

Topic: Code

3 comments

The Fabrication of Reality:
Is there an "out there" out there?

There are always anniversaries, real or concocted, to loosen the columnist’s writer’s block and/or justify the intake of alcohol. I’ll drink to that - to the fact that we are blessed with a reasonably regular solar system providing a timeline of annual increments against which we can enumerate and toast past events. Hic semper hic. When the drinking occurs in sporadic and excessive bursts, it becomes known, disapprovingly, as "bingeing." I’m tempted to claim that this colorful Lincolnshire dialect word binge, meaning soak, was first used in the boozing-bout sense exactly 200 years ago. And that, shurely, calls for a schelebration.

September 24, 2008

Topic: Development

0 comments

Calendar:
August 2008

IT Roadmap Conference and Expo

September 24, 2008

0 comments

The Five-Minute Rule 20 Years Later (and How Flash Memory Changes the Rules):
The old rule continues to evolve, while flash memory adds two new rules.

In 1987, Jim Gray and Gianfranco Putzolu published their now-famous five-minute rule for trading off memory and I/O capacity. Their calculation compares the cost of holding a record (or page) permanently in memory with the cost of performing disk I/O each time the record (or page) is accessed, using appropriate fractions of prices for RAM chips and disk drives. The name of their rule refers to the break-even interval between accesses. If a record (or page) is accessed more often, it should be kept in memory; otherwise, it should remain on disk and read when needed.
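
In symbols, the break-even interval they derived is

\[
\text{BreakEvenInterval} \;=\;
\frac{\text{PagesPerMBofRAM}}{\text{AccessesPerSecondPerDisk}}
\times
\frac{\text{PricePerDiskDrive}}{\text{PricePerMBofRAM}}.
\]

With purely illustrative numbers - 4 KB pages (256 pages per MB), a disk sustaining 100 accesses per second and costing $80, and RAM at $0.05 per MB - the interval is (256/100) x (80/0.05) = 4,096 seconds, roughly an hour; plugging in 1987 prices is what yielded the eponymous five minutes.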

September 24, 2008

Topic: Memory

0 comments

Enterprise SSDs:
Solid-state drives are finally ready for the enterprise. But beware, not all SSDs are created alike.

For designers of enterprise systems, ensuring that hardware performance keeps pace with application demands is a mind-boggling exercise. The most troubling performance challenge is storage I/O. Spinning media, while exceptional in scaling areal density, will unfortunately never keep pace with I/O requirements. The most cost-effective way to break through these storage I/O limitations is by incorporating high-performance SSDs (solid-state drives) into the systems.

September 24, 2008

Topic: File Systems and Storage

0 comments

Flash Storage Today:
Can flash memory become the foundation for a new tier in the storage hierarchy?

The past few years have been an exciting time for flash memory. The cost has fallen dramatically as fabrication has become more efficient and the market has grown; the density has improved with the advent of better processes and additional bits per cell; and flash has been adopted in a wide array of applications. The flash ecosystem has expanded and continues to expand, especially for thumb drives, cameras, ruggedized laptops, and phones in the consumer space.

September 24, 2008

Topic: Memory

0 comments

Flash Disk Opportunity for Server Applications:
Future flash-based disks could provide breakthroughs in IOPS, power, reliability, and volumetric capacity when compared with conventional disks.

NAND flash densities have been doubling each year since 1996. Samsung announced that its 32-gigabit NAND flash chips would be available in 2007. This is consistent with Chang-gyu Hwang’s flash memory growth model, which holds that NAND flash densities will double each year until 2010. Hwang recently extended that 2003 prediction to 2012, suggesting 64 times the current density - 250 GB per chip. This is hard to credit, but Hwang and Samsung have delivered 16 times since his 2003 article, when 2-Gb chips were just emerging. So, we should be prepared for the day when a flash drive is a terabyte(!).
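
The arithmetic behind those claims (our reconstruction): four annual doublings from the 2-Gb parts of 2003 give the 16 times already delivered, and six more doublings from 32-Gb parts land on Hwang’s roughly 250-GB figure:

\[
2\ \text{Gbit} \times 2^{4} = 32\ \text{Gbit} \quad (16\times),
\qquad
32\ \text{Gbit} \times 2^{6} = 2048\ \text{Gbit} = 256\ \text{GB} \approx 250\ \text{GB per chip}.
\]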

September 24, 2008

Topic: File Systems and Storage

0 comments

A Pioneer’s Flash of Insight:
Jim Gray’s vision of flash-based storage anchors this issue’s theme.

In the May/June issue of Queue, Eric Allman wrote a tribute to Jim Gray, mentioning that Queue would be running some of Jim’s best works in the months to come. I’m embarrassed to confess that when this idea was first discussed, I assumed these papers would consist largely of Jim’s seminal work on databases - showing only that I (unlike everyone else on the Queue editorial board) never knew Jim. In an attempt to learn more about both his work and Jim himself, I attended the tribute held for him at UC Berkeley in May.

September 24, 2008

Topic: File Systems and Storage

0 comments

A Conversation with Steve Bourne, Eric Allman, and Bryan Cantrill:
In part one of a two-part series, three Queue editorial board members discuss the practice of software engineering.

In their quest to solve the next big computing problem or develop the next disruptive technology, software engineers rarely take the time to look back at the history of their profession. What’s changed? What hasn’t changed? In an effort to shed light on these questions, we invited three members of ACM Queue’s editorial advisory board to sit down and offer their perspectives on the continuously evolving practice of software engineering.

September 24, 2008

Topic: Development

1 comment

Sizing Your System:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Dear KV, I’m working on a network server that gets into the situation you called livelock in a previous response to a letter (Queue May/June 2008). Our problem is that our system has only a fixed amount of memory to receive network data, but the system is frequently overwhelmed and can’t make progress. When I ask our application engineers about how much data they expect, the only answer I get is "a lot," which isn’t much help. How can I figure out how to size our systems appropriately?

September 24, 2008

Topic: Networks

0 comments

Distributed Computing Economics:
Computing economics are changing. Today there is rough price parity between: (1) one database access; (2) 10 bytes of network traffic; (3) 100,000 instructions; (4) 10 bytes of disk storage; and (5) a megabyte of disk bandwidth. This has implications for how one structures Internet-scale distributed computing: one puts computing as close to the data as possible in order to avoid expensive network traffic.

Computing is free. The world’s most powerful computer is free (SETI@Home is a 54-teraflop machine). Google freely provides a trillion searches per year to the world’s largest online database (two petabytes). Hotmail freely carries a trillion e-mail messages per year. Amazon.com offers a free book-search tool. Many sites offer free news and other free content. Movies, sports events, concerts, and entertainment are freely available via television.
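
One immediate consequence of the stated parity (our arithmetic, but the paper’s conclusion): since 100,000 instructions cost about as much as 10 bytes of network traffic, the break-even compute intensity is

\[
\frac{100{,}000\ \text{instructions}}{10\ \text{bytes}} \;=\; 10{,}000\ \text{instructions per byte,}
\]

so a task is worth shipping across the network only if it performs on the order of 10,000 instructions for every byte it moves; anything less, and the computation should move to the data instead.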

July 28, 2008

Topic: Distributed Computing

0 comments

A Tribute to Jim Gray

Computer science attracts many very smart people, but a few stand out above the others, somehow blessed with a kind of creativity that most of us are denied. Names such as Alan Turing, Edsger Dijkstra, and John Backus come to mind. Jim Gray is another.

July 28, 2008

Topic: Databases

0 comments

BASE: An Acid Alternative:
In partitioned databases, trading some consistency for availability can lead to dramatic improvements in scalability.

Web applications have grown in popularity over the past decade. Whether you are building an application for end users or application developers (i.e., services), your hope is most likely that your application will find broad adoption and with broad adoption will come transactional growth. If your application relies upon persistence, then data storage will probably become your bottleneck.

July 28, 2008

Topic: File Systems and Storage

12 comments

There’s a Lot of It About:
And everybody’s doing it.

A lot of what, and about where? I hear you cry. One question at a time, I reply. First, there’s too much of everything these days, and, second, it’s happening all over. Furthermore, everybody’s doing it. As a contemporary Wordsworth might say: "The Web is too much with us, late and soon, getting and browsing we lay waste our powers." There is a glut of unfiltered information proving more dangerous than Alexander Pope’s "A Little Learning" where "shallow draughts intoxicate the brain."

July 28, 2008

Topic: Development

0 comments

Exposing the ORM Cache:
Familiarity with ORM caching issues can help prevent performance problems and bugs.

In the early 1990s, when object-oriented languages emerged into the mainstream of software development, a noticeable surge in productivity occurred as developers saw new and better ways to create software programs. Although the new and efficient object programming paradigm was hailed and accepted by a growing number of organizations, relational database management systems remained the preferred technology for managing enterprise data. Thus was born ORM (object-relational mapping), out of necessity, and the complex challenge of saving the persistent state of an object environment in a relational database subsequently became known as the object-relational impedance mismatch.

July 28, 2008

Topic: Databases

0 comments

ORM in Dynamic Languages:
O/R mapping frameworks for dynamic languages such as Groovy provide a different flavor of ORM that can greatly simplify application code.

A major component of most enterprise applications is the code that transfers objects in and out of a relational database. The easiest solution is often to use an ORM (object-relational mapping) framework, which allows the developer to declaratively define the mapping between the object model and database schema and express database-access operations in terms of objects. This high-level approach significantly reduces the amount of database-access code that needs to be written and boosts developer productivity.
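
The article’s examples use Groovy, but the declarative flavor translates directly to other dynamic languages. A minimal sketch in Python using SQLAlchemy, a widely used Python ORM (the model and field names are illustrative):

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.orm import declarative_base, Session

    Base = declarative_base()

    class User(Base):
        """The mapping is declared once; no hand-written SQL appears below."""
        __tablename__ = "users"
        id = Column(Integer, primary_key=True)
        name = Column(String(50))

    engine = create_engine("sqlite:///:memory:")
    Base.metadata.create_all(engine)        # schema follows the declaration

    with Session(engine) as session:
        session.add(User(name="Ada"))       # persistence in terms of objects
        session.commit()
        ada = session.query(User).filter_by(name="Ada").one()
        print(ada.id, ada.name)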

July 28, 2008

Topic: Databases

0 comments

Bridging the Object-Relational Divide:
ORM technologies can simplify data access, but be aware of the challenges that come with introducing this new layer of abstraction.

Modern applications are built using two very different technologies: object-oriented programming for business logic; and relational databases for data storage. Object-oriented programming is a key technology for implementing complex systems, providing benefits of reusability, robustness, and maintainability. Relational databases are repositories for persistent data. ORM (object-relational mapping) is a bridge between the two that allows applications to access relational data in an object-oriented way.

July 28, 2008

Topic: Object-Relational Mapping

0 comments

A Conversation with Erik Meijer and Jose Blakeley:
The Microsoft perspective on ORM

To understand more about LINQ and ORM and why Microsoft took this approach, we invited two Microsoft engineers closely involved with their development, Erik Meijer and Jose Blakeley, to speak with Queue editorial board member Terry Coatta.

July 28, 2008

Topic: Object-Relational Mapping

0 comments

The Virtue of Paranoia:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Dear KV, I just joined a company that massages large amounts of data into an internal format for its own applications to work on. Although the data is backed up regularly, I have noticed that access to this data, which has accumulated to be several petabytes in size, is not particularly well secured. There is no encryption, and although the data is not easily reachable from the Internet, everyone at the company has direct access to the volumes, both physically and electronically, all the time.

July 28, 2008

Topic: Security

0 comments

A New Era:
A bigger, better online presence for Queue

I want to remind you that this will be the last printed issue of Queue but also to reassure you that Queue is not going away. As I mentioned in my letter in the last issue, ACM has decided to migrate Queue to the Web. As of July 2008, Queue will expand its publication frequency to 10 issues per year and publish those issues online using the most cutting-edge digital-editions technology available, as well as revamp the existing Queue Web site to provide an overall improved user experience.

July 28, 2008

0 comments

A Behavioral Approach to Security:
Analyzing the behavior of suspicious code

The CTO of Finjan, Yuval Ben-Itzhak, makes a strong case for a new approach to security that relies more on analyzing the behavior of suspicious code than on signatures that have to be developed after the attacks have already started.

July 21, 2008

Topic: Security

0 comments

The Silent Security Epidemic:
Developers are challenged by attacks that target certain types of applications.

Although the industry is generally getting better at dealing with routine types of security attacks, developers are today being challenged by more complex attacks that not only fly below the radar, but also specifically target certain types of applications. In this Queuecast edition, Ryan Sherstobitoff, CTO of Panda Software, describes what new types of sophisticated attacks are being created and what proactive steps developers need to take to protect their applications.

July 21, 2008

Topic: Security

0 comments

The Power of IP Protection and Software Licensing:
Software Digital Rights Management solutions are the de facto standard today for protecting IP.

Intellectual Property (IP) - which encompasses ideas, inventions, technologies, and patented, trademarked, or copyrighted works and products - can account for as much as 80% of a software company’s total market value. Since IP is considered a financial asset in today’s business climate, the threats to IP create real concern. In an interview with ACM Queuecast host Michael Vizard, Aladdin vice president Gregg Gronowski explains how Software Digital Rights Management solutions are the de facto standard today for protecting software IP, preventing software piracy, and enabling software licensing and compliance.

July 21, 2008

Topic: Business/Management

0 comments

Things I Learned in School:
As we continue to develop the new UI for our product, we’ll definitely be using FSMs wherever possible.

How many of us have not had the experience of sitting in a classroom wondering idly: "Is this really going to matter out in the real world?" It’s curious, and in no small amount humbling, to realize how many of those nuggets of knowledge really do matter. One cropped up recently for me: the Finite State Machine.
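
For readers whose own FSM memories are rusty, here is a minimal sketch in Python of the pattern being praised; the states and events are invented for illustration:

    # Transition table: (current state, event) -> next state.
    TRANSITIONS = {
        ("idle", "click"): "active",
        ("active", "click"): "idle",
        ("active", "timeout"): "idle",
    }

    def step(state, event):
        """Return the next state; unknown events leave the state unchanged."""
        return TRANSITIONS.get((state, event), state)

    state = "idle"
    for event in ("click", "timeout", "click"):
        state = step(state, event)
        print(f"{event} -> {state}")

The appeal for UI work is that every legal state change is enumerated in one table, so an unexpected event cannot drive the interface into an undefined state.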

July 14, 2008

Topic: Web Development

0 comments

From Here to There, the SOA Way:
SOA is no more a silver bullet than the approaches that preceded it.

Back in ancient times, say, around the mid ’80s when I was a grad student, distributed systems research was in its heyday. Systems like Trellis/Owl and Eden/Emerald were exploring issues in object-oriented language design, persistence, and distributed computing. One of the big themes to come out of that time period was location transparency—the idea that the way that you access an object should be independent of where it is located. That is, it shouldn’t matter whether an object is in the same process, on the same machine in a different process, or on another machine altogether.

July 14, 2008

Topic: Web Development

0 comments

Only Code Has Value?:
Even the best-written code can’t reveal why it’s doing what it’s doing.

A recent conversation about development methodologies turned to the relative value of various artifacts produced during the development process, and the person I was talking with said: the code has "always been the only artifact that matters. It’s just that we’re only now coming to recognize that." My reaction to this, not expressed at that time, was twofold. First, I got quite a sense of déjà-vu since it hearkened back to my time as an undergraduate and memories of many heated discussions about whether code was self-documenting.

July 14, 2008

Topic: Code

1 comment

The Yin and Yang of Software Development:
How infrastructure elements allow development teams to increase productivity without restricting creativity

The C/C++ Solution Manager at Parasoft explains how infrastructure elements allow development teams to increase productivity without restricting creativity.

July 14, 2008

Topic: Development

1 comment

Managing Collaboration:
Jeff Johnstone of TechExcel explains why there is a need for a new approach to application lifecycle management that better reflects the business requirements and challenges facing development teams.

I think that, fundamentally, development has become more of a business process than simply a set of tools. In the past, like you said, developers and development organizations were kind of on their own. They were fairly autonomous and they would do things that were appropriate for each piece of the process, and they would adopt technologies that were appropriate at a technology and tool level, but they didn’t really think of themselves as an integral part of any higher business process.

July 14, 2008

Topic: Development

0 comments

Getting Bigger Reach Through Speech:
Developers have a chance to significantly expand the appeal and reach of their applications by voice-enabling their applications, but is that going to be enough?

Mark Ericson, vice president of product strategy for BlueNote Networks argues that in order to take advantage of new voice technologies you have to have a plan for integrating that capability directly into the applications that drive your existing business processes.

July 14, 2008

Topic: VoIP

0 comments

Corba: Gone but (Hopefully) Not Forgotten:
There is no magic and the lessons of the past apply just as well today.

Back in the June 2006 issue of Queue, Michi Henning wrote a very good condensed history of CORBA and discussed how some of its technical limitations contributed to its downfall. While those limitations certainly aided CORBA’s demise, there is a very widespread notion that the ultimate cause was the ascendance of Web Services, a notion that is compounded with the further belief that Web Services’ dominance of the distributed computing landscape is indicative of its technical superiority to the systems that preceded it, such as CORBA and DCOM.

July 14, 2008

Topic: Distributed Development

0 comments

From Liability to Advantage: A Conversation with John Graham-Cumming and John Ousterhout:
Software production has become a bottleneck in many development organizations.

Software production (the back-end of software development, including tasks such as build, test, package and deploy) has become a bottleneck in many development organizations. In this interview Electric Cloud founder John Ousterhout explains how you can turn software production from a liability to a competitive advantage.

July 14, 2008

Topic: SIP

0 comments

Arm Your Applications for Bulletproof Deployment: A Conversation with Tom Spalthoff:
Companies can achieve a reliable desktop environment while reducing the time and cost spent preparing high-quality application packages.

The deployment of applications, updates, and patches is one of the most common - and risky - functions of any IT department. Deploying any application that isn’t properly configured for distribution can disrupt or crash critical applications and cost companies dearly in lost productivity and help-desk expenses - and companies do it every day. In fact, Gartner reports that even after 10 years of experience, most companies cannot automatically deploy software with a success rate of 90 percent or better.

July 14, 2008

Topic: SIP

0 comments

Intellectual Property and Software Piracy:
The Power of IP Protection and Software Licensing, an interview with Aladdin vice president Gregg Gronowski

We’re here today to talk about intellectual property and the whole issue of software piracy, and our friends at Aladdin, whose solutions are considered a de facto standard today for protecting software IP, preventing software piracy, and enabling software licensing and compliance. Joining us today to discuss that topic is Aladdin vice president Gregg Gronowski.

July 14, 2008

Topic: Security

1 comment

Custom Processing:
Today general-purpose processors from Intel and AMD dominate the landscape, but advances in processor designs - such as the cell processor architecture overseen by IBM chief scientist Peter Hofstee - promise to bring the costs of specialized system-on-a-chip platforms in line with those of general-purpose computing platforms. That just may change the art of system design forever.

Today we’re going to talk about system on a chip and some of the design issues that go with it, and more importantly, some of the newer trends, such as the work that IBM is doing around the cell processor to advance the whole system-on-a-chip approach.

July 14, 2008

Topic: System Evolution

0 comments

Large Scale Systems: Best Practices:
Transcript of interview with Jarod Jenson, Enron Online

Time and again, companies moving to build large-scale systems and networks stumble over the same problems. In an interview with ACM Queuecast host Michael Vizard, Jarod Jenson, the brains behind the Enron Online trading site, talks about the best practices he emphasizes now that he is the chief architect for Aeysis, a consulting firm that specializes in advising clients on how to build manageable, high-performance systems.

July 14, 2008

0 comments

Business Process Minded:
Transcript of interview with Edwin Khodabakchian, vice president of product development at Oracle

A new paradigm created to empower business system analysts by giving them access to meta-data that they can directly control to drive business process management is about to sweep the enterprise application arena. In an interview with ACM Queuecast host Michael Vizard, Oracle vice president of product development Edwin Khodabakchian explains how the standardization of service-oriented architectures and the evolution of the business process execution language are coming together to finally create flexible software architectures that can adapt to the business rather than making the business adapt to the software.

July 14, 2008

Topic: Databases

0 comments

Discipline and Focus:
Transcript of interview with Werner Vogels, CTO of Amazon

When it comes to managing and deploying large scale systems and networks, discipline and focus matter more than specific technologies. In a conversation with ACM Queuecast host Mike Vizard, Amazon CTO Werner Vogels says the key to success is to have a relentless commitment to a modular computer architecture that makes it possible for the people who build the applications to also be responsible for running and deploying those systems within a common IT framework.

July 14, 2008

Topic: Computer Architecture

0 comments

Automatic for the People:
Transcript of interview with Rob Gingell, CTO of Cassatt

Probably the single biggest challenge with large scale systems and networks is not building them but rather managing them on an ongoing basis. Fortunately, new classes of systems and network management tools that have the potential to save on labor costs because they automate much of the management process are starting to appear.

July 14, 2008

Topic: Networks

0 comments

Reconfigurable Future:
The ability to produce cheaper, more compact chips is a double-edged sword.

Predicting the future is notoriously hard. Sometimes I feel that the only real guarantee is that the future will happen, and that someone will point out how it’s not like what was predicted. Nevertheless, we seem intent on trying to figure out what will happen, and worse yet, recording these views so they can be later used against us. So here I go... Scaling has been driving the whole electronics industry, allowing it to produce chips with more transistors at a lower cost.

July 14, 2008

Topic: Processors

0 comments

The Emergence of iSCSI:
Modern SCSI, as defined by the SCSI-3 Architecture Model, or SAM, really considers the cable and physical interconnections to storage as only one level in a larger hierarchy.

When most IT pros think of SCSI, images of fat cables with many fragile pins come to mind. Certainly, that’s one manifestation. But modern SCSI, as defined by the SCSI-3 Architecture Model, or SAM, really considers the cable and physical interconnections to storage as only one level in a larger hierarchy. By separating the instructions or commands sent to and from devices from the physical layers and their protocols, you arrive at a more generic approach to storage communication.

July 14, 2008

Topic: File Systems and Storage

0 comments

DAFS: A New High-Performance Networked File System:
This emerging file-access protocol dramatically enhances the flow of data over a network, making life easier in the data center.

The Direct Access File System (DAFS) is a remote file-access protocol designed to take advantage of new high-throughput, low-latency network technology.

July 14, 2008

Topic: File Systems and Storage

0 comments

Solomon’s Sword Beats Occam’s Razor:
Choosing your best hypothesis

I’ve told you a googol times or more: Don’t exaggerate! And, less often, I’ve ever-so-gently urged you not to understate. Why is my advice ignored? Why can’t you get IT... just right, balanced beyond dispute? Lez Joosts Mildews, as my mam was fond of sayin, boxing both my ears with equal devotion. Follow the Middle Way as Tao did in his Middle Kingdom. Or "straight down the middle," as golfer Bing Crosby used to croon. His other golf song was "The Wearing of the Green," but such digressions run counter to my straight, plow-on-ahead advice.

April 28, 2008

Topic: Code

0 comments

Calendar:
April 2008

MySQL Conference

April 28, 2008

0 comments

Future Graphics Architectures:
GPUs continue to evolve rapidly, but toward what?

Graphics architectures are in the midst of a major transition. In the past, these were specialized architectures designed to support a single rendering algorithm: the standard Z buffer. Realtime 3D graphics has now advanced to the point where the Z-buffer algorithm has serious shortcomings for generating the next generation of higher-quality visual effects demanded by games and other interactive 3D applications. There is also a desire to use the high computational capability of graphics architectures to support collision detection, approximate physics simulations, scene management, and simple artificial intelligence.

April 28, 2008

Topic: Graphics

0 comments

Scalable Parallel Programming with CUDA:
Is CUDA the parallel programming model that application developers have been waiting for?

The advent of multicore CPUs and manycore GPUs means that mainstream processor chips are now parallel systems. Furthermore, their parallelism continues to scale with Moore’s law. The challenge is to develop mainstream application software that transparently scales its parallelism to leverage the increasing number of processor cores, much as 3D graphics applications transparently scale their parallelism to manycore GPUs with widely varying numbers of cores.

April 28, 2008

Topic: Graphics

1 comment

Data-Parallel Computing:
Data parallelism is a key concept in leveraging the power of today’s manycore GPUs.

Users always care about performance. Although often it’s just a matter of making sure the software is doing only what it should, there are many cases where it is vital to get down to the metal and leverage the fundamental characteristics of the processor.
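
The article’s setting is GPUs, but the core idea - one operation applied independently to many elements - is easy to show in miniature with Python and NumPy:

    import numpy as np

    a = np.arange(1_000_000, dtype=np.float32)

    # Data-parallel: one logical operation over a million elements,
    # free to be spread across vector lanes or cores.
    b = np.sqrt(a) * 2.0

    # The sequential loop it replaces, one element at a time:
    # b = [math.sqrt(x) * 2.0 for x in a]
    print(b[:3])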

April 28, 2008

Topic: Graphics

0 comments

GPUs: A Closer Look:
As the line between GPUs and CPUs begins to blur, it’s important to understand what makes GPUs tick.

A gamer wanders through a virtual world rendered in near-cinematic detail. Seconds later, the screen fills with a 3D explosion, the result of unseen enemies hiding in physically accurate shadows. Disappointed, the user exits the game and returns to a computer desktop that exhibits the stylish 3D look-and-feel of a modern window manager. Both of these visual experiences require hundreds of gigaflops of computing performance, a demand met by the GPU (graphics processing unit) present in every consumer PC.

April 28, 2008

Topic: Graphics

0 comments

A Conversation with Kurt Akeley and Pat Hanrahan:
Graphics veterans debate the evolution of the GPU.

Interviewing either Kurt Akeley or Pat Hanrahan for this month’s special report on GPUs would have been a great opportunity, so needless to say we were delighted when both of these graphics-programming veterans agreed to participate. Akeley was part of the founding Silicon Graphics team in 1982 and worked there for almost 20 years, during which he led the development of several high-end graphics systems, including GTX, VGX, and RealityEngine. He’s also known for his pioneering work on OpenGL, the industry-standard programming interface for high-performance graphics hardware.

April 28, 2008

Topic: Graphics

0 comments

Latency and Livelocks:
Sometimes data just doesn’t travel as fast as it should.

Dear KV: My company has a very large database with all of our customer information. The database is replicated to several locations around the world to improve performance locally, so that when customers in Asia want to look at their data, they don’t have to wait for it to come from the United States, where my company is based...

April 28, 2008

Topic: Data

0 comments

Going Digital:
ACM Queue enters a new era

Since its founding in March 2003, Queue has addressed the informational needs of the software development community through its printed version and Web site. Each issue has been carefully planned by a working board of prominent computing professionals and guest experts who meet monthly to set the magazine’s editorial agenda and suggest and enlist the most qualified and authoritative authors. Our Queue team has greatly benefited from your feedback over the past five years. You’ve helped us shape a unique magazine, and we thank you for your input and your loyalty.

April 28, 2008

0 comments

All Things Being Equal?:
New year, another perspective

By the time these belles-lettres reach you, a brand new year will be upon us. Another Year! Another Mighty Blow! as Tennyson thundered. Or as Humphrey Lyttelton (q.g.) might say, "The odious odometer of Time has clicked up another ratchette of entropic torture." Less fancifully, as well as trying hard not to write 2007 on our checks, many of us will take the opportunity to reflect on all the daft things we did last year and resolve not to do them no more. Not to mention all the nice things we failed to do. I have in mind the times when I missed an essential semicolon, balanced by the occasions when inserting a spurious one was equally calamitous.

March 4, 2008

Topic: Code

0 comments

How OSGi Changed My Life:
The promises of the Lego hypothesis have yet to materialize fully, but they remain a goal worth pursuing.

In the early 1980s I discovered OOP (object-oriented programming) and fell in love with it, head over heels. As usual, this kind of love meant convincing management to invest in this new technology, and most important of all, send me to cool conferences. So I pitched the technology to my manager. I sketched him the rosy future, how one day we would create applications from ready-made classes. We would get those classes from a repository, put them together, and voila, a new application would be born. Today we take objects more or less for granted, but if I am honest, the pitch I gave to my manager in 1985 never really materialized.

March 4, 2008

Topic: Component Technologies

2 comments

Network Virtualization: Breaking the Performance Barrier:
Shared I/O in virtualization platforms has come a long way, but performance concerns remain.

The recent resurgence in popularity of virtualization has led to its use in a growing number of contexts, many of which require high-performance networking. Consider server consolidation, for example. The efficiency of network virtualization directly impacts the number of network servers that can effectively be consolidated onto a single physical machine. Unfortunately, modern network virtualization techniques incur significant overhead, which limits the achievable network performance. We need new network virtualization techniques to realize the full benefits of virtualization in network-intensive domains.

March 4, 2008

Topic: Virtualization

0 comments

The Cost of Virtualization:
Software developers need to be aware of the compromises they face when using virtualization technology.

Virtualization can be implemented in many different ways, with or without hardware support. The guest operating system may be modified in preparation for virtualization, or it may be expected to run unchanged. Regardless, software developers must strive to meet the three goals of virtualization spelled out by Gerald Popek and Robert Goldberg: fidelity, performance, and safety.

March 4, 2008

Topic: Virtualization

0 comments

Beyond Server Consolidation:
Server consolidation helps companies improve resource utilization, but virtualization can help in other ways, too.

Virtualization technology was developed in the late 1960s to make more efficient use of hardware. Hardware was expensive, and there was not that much available. Processing was largely outsourced to the few places that did have computers. On a single IBM System/360, one could run in parallel several environments that maintained full isolation and gave each of its customers the illusion of owning the hardware. Virtualization was time sharing implemented at a coarse-grained level, and isolation was the key achievement of the technology.

March 4, 2008

Topic: Virtualization

0 comments

Meet the Virts:
Virtualization technology isn’t new, but it has matured a lot over the past 30 years.

When you dig into the details of supposedly overnight success stories, you frequently discover that they’ve actually been years in the making. Virtualization has been around for more than 30 years, since the days when some of you were feeding stacks of punch cards into very physical machines, yet in 2007 it tipped. VMware was the IPO sensation of the year; in November 2007 no fewer than four major operating system vendors (Microsoft, Oracle, Red Hat, and Sun) announced significant new virtualization capabilities; and among fashionable technologists it seems virtual has become the new black.

March 4, 2008

Topic: Virtualization

0 comments

A Conversation with Jason Hoffman:
A systems scientist looks at virtualization, scalability, and Ruby on Rails.

Jason Hoffman has a Ph.D. in molecular pathology, but to him the transition between the biological sciences and his current role as CTO of Joyent was completely natural: "Fundamentally, what I’ve always been is a systems scientist, meaning that whether I was studying metabolism or diseases of metabolism or cancer or computer systems or anything else, a system is a system," says Hoffman. He draws on this broad systems background in the work he does at Joyent providing scalable infrastructure for Web applications.

March 4, 2008

Topic: Virtualization

0 comments

Poisonous Programmers:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Dear KV, I hope you don’t mind if I ask you about a non-work-related problem, though I guess if you do mind you just won’t answer. I work on an open source project when I have the time, and we have some annoying nontechnical problems. The problems are really people, and I think you know the ones I mean: people who constantly fight with other members of the project over what seem to be the most trivial points, or who contribute very little to the project but seem to require a huge amount of help for their particular needs. I find myself thinking it would be nice if such people just went away, but I don’t think starting a flame war on our mailing lists over these things would really help.

March 4, 2008

Topic: Code

0 comments

Use It or Lose It:
Aphorisms in the abstract

My aphorisme du jour allows me to roam widely in many directions, some of which, I hope, will be timely and instructive for Queue readers. My choice of the French aphorisme is a justifiably elitist affectation, paying homage to Montaigne, Voltaire, Bertrand Meyer, and that cohue d’elegance. The Gallic gargled r (as in Brassens) and the sublime long final syllable, if you get them right, simply drip with class compared with the slovenly sequence of English diphthongs: a-for-iz-um. We tend to treat the terms aphorism and epigram as posh synonyms for maxim, motto, or even saying.

January 17, 2008

Topic: Code

0 comments

Calendar:
December 2007

XML 2007 Conference

January 17, 2008

0 comments

Big Games, Small Screens:
Developing 3D games for mobile devices is full of challenges, but the rich, evolving toolset enables some stunning results.

One thing that becomes immediately apparent when creating and distributing mobile 3D games is that there are fundamental differences between the cellphone market and the more traditional games markets, such as consoles and handheld gaming devices. The most striking of these are the number of delivery platforms; the severe constraints of the devices, including small screens whose orientation can be changed; limited input controls; the need to deal with other tasks; the nonphysical delivery mechanism; and the variations in handset performance and input capability.

January 17, 2008

Topic: Game Development

1 comment

Understanding DRM:
Recognizing the tradeoffs associated with different DRM systems can pave the way for a more flexible and capable DRM.

The explosive growth of the Internet and digital media has created both tremendous opportunities and new threats for content creators. Advances in digital technology offer new ways of marketing, disseminating, interacting with, and monetizing creative works, giving rise to expanding markets that did not exist just a few years ago. At the same time, however, the technologies have created major challenges for copyright holders seeking to control the distribution of their works and protect against piracy.

January 17, 2008

Topic: Privacy and Rights

1 comment

Document & Media Exploitation:
The DOMEX challenge is to turn digital bits into actionable intelligence.

A computer used by Al Qaeda ends up in the hands of a Wall Street Journal reporter. A laptop from Iran is discovered that contains details of that country’s nuclear weapons program. Photographs and videos are downloaded from terrorist Web sites. As evidenced by these and countless other cases, digital documents and storage devices hold the key to many ongoing military and criminal investigations. The most straightforward approach to using these media and documents is to explore them with ordinary tools—open the word files with Microsoft Word, view the Web pages with Internet Explorer, and so on.

January 17, 2008

Topic: Security

0 comments

Powering Down:
Smart power management is all about doing more with the resources we have.

Power management is a topic of interest to everyone. In the beginning there was the desktop computer. It ran at a fixed speed and consumed less power than the monitor it was plugged into. Where computers were portable, their sheer size and weight meant that you were more likely to be limited by physical strength than battery life. It was not a great time for power management. Now consider the present. Laptops have increased in speed by more than 5,000 times. Battery capacity, sadly, has not. With hardware becoming increasingly mobile, however, users are demanding that battery life start matching the way they work.

January 17, 2008

Topic: Power Management

0 comments

A Conversation with Mary Lou Jepsen:
What’s behind that funky green machine?

From Tunisia to Taiwan, Mary Lou Jepsen has circled the globe in her role as CTO of the OLPC (One Laptop Per Child) project. Founded by MIT Media Lab co-founder Nicholas Negroponte in 2005, OLPC builds inexpensive laptops designed for educating children in developing nations. Marvels of engineering, the machines have been designed to withstand some of the harshest climates and most power-starved regions on the planet. To accomplish this, Jepsen and her team had to reinvent what a laptop could be. As Jepsen says, “You ask different questions and you get different answers.” The resulting machine, named the XO, is uniquely adapted to its purpose, combining super-low-power electronics, mesh networking, and a sunlight-readable screen, which Jepsen designed herself.

January 17, 2008

Topic: Hardware

0 comments

Take a Freaking Measurement!:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Have you ever worked with someone who is a complete jerk about measuring everything?

January 17, 2008

Topic: Quality Assurance

0 comments

The Code Delusion:
The real, the abstract, and the perceived

No, I’m not cashing in on that titular domino effect that exploits best sellers. The temptations are great, given the rich rewards from a gullible readership, but offset, in the minds of decent writers, by the shame of literary hitchhiking. Thus, guides to the Louvre become The Da Vinci Code Walkthrough for Dummies, milching, as it were, several hot cows on one cover. Similarly, conventional books of recipes are boosted with titles such as The Da Vinci Cookbook—Opus Dei Eating for the Faithful. Dan Brown’s pseudofiction sales stats continue to amaze, cleverly stimulated by accusations of plagiarism and subsequent litigation.

November 15, 2007

Topic: Code

0 comments

Calendar:
October 2007

Software Business

November 15, 2007

0 comments

Storage Virtualization Gets Smart:
The days of overprovisioned, underutilized storage resources might soon become a thing of the past.

Over the past 20 years we have seen the transformation of storage from a dumb resource with fixed reliability, performance, and capacity to a much smarter resource that can actually play a role in how data is managed. In spite of the increasing capabilities of storage systems, however, traditional storage management models have made it hard to leverage these data management capabilities effectively. The net result has been overprovisioning and underutilization. In short, although the promise was that smart shared storage would simplify data management, the reality has been different.

November 15, 2007

Topic: File Systems and Storage

0 comments

Hard Disk Drives: The Good, the Bad and the Ugly!:
HDDs are like the bread in a peanut butter and jelly sandwich.

HDDs are like the bread in a peanut butter and jelly sandwich—sort of an unexciting piece of hardware necessary to hold the “software.” They are simply a means to an end. HDD reliability, however, has always been a significant weak link, perhaps the weak link, in data storage. In the late 1980s people recognized that HDD reliability was inadequate for large data storage systems so redundancy was added at the system level with some brilliant software algorithms, and RAID (redundant array of inexpensive disks) became a reality. RAID moved the reliability requirements from the HDD itself to the system of data disks.
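
The system-level redundancy described here rests on a simple identity: in RAID 5, the parity block is the XOR of the data blocks, so any single lost block can be rebuilt by XORing everything that survives. A minimal sketch in Python, purely illustrative (real arrays stripe data, rotate parity, and operate at the block-device layer):

    # Parity = XOR of the data blocks; any one missing block is the XOR of the rest.
    def xor_blocks(blocks):
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    data = [b"disk0...", b"disk1...", b"disk2..."]  # equal-size data blocks
    parity = xor_blocks(data)                       # stored on a fourth disk

    # Disk 1 fails: rebuild its block from the survivors plus parity.
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]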

November 15, 2007

Topic: File Systems and Storage

4 comments

Standardizing Storage Clusters:
Will pNFS become the new standard for parallel data access?

Data-intensive applications such as data mining, movie animation, oil and gas exploration, and weather modeling generate and process huge amounts of data. File-data access throughput is critical for good performance. To scale well, these HPC (high-performance computing) applications distribute their computation among numerous client machines. HPC clusters can range from hundreds to thousands of clients with aggregate I/O demands ranging into the tens of gigabytes per second.

November 15, 2007

Topic: File Systems and Storage

0 comments

A Conversation with Jeff Bonwick and Bill Moore:
The future of file systems

This month ACM Queue speaks with two Sun engineers who are bringing file systems into the 21st century. Jeff Bonwick, CTO for storage at Sun, led development of the ZFS file system, which is now part of Solaris. Bonwick and his co-lead, Sun Distinguished Engineer Bill Moore, developed ZFS to address many of the problems they saw with current file systems, such as data integrity, scalability, and administration. In our discussion this month, Bonwick and Moore elaborate on these points and what makes ZFS such a big leap forward.

November 15, 2007

Topic: File Systems and Storage

1 comment

The Next Big Thing:
The future of functional programming and KV’s top five protocol-design tips

Dear KV, I know you did a previous article where you listed some books to read. I would also consider adding How to Design Programs, available free on the Web. This book is great for explaining the process of writing a program. It uses the Scheme language and introduces FP. I think FP could be the future of programming. John Backus of the IBM Research Laboratory suggested this in 1977. Even Microsoft has yielded to FP by introducing FP concepts in C# with LINQ.
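
For readers who haven’t seen the style, the core FP idea the letter gestures at, and the one LINQ borrowed, is composing queries from functions such as map, filter, and reduce instead of writing mutating loops. A rough Python stand-in for a small LINQ-style query (illustrative only; the letter’s examples are Scheme and C#):

    # WHERE qty > 0, SELECT upper(name), COUNT(*) -- as a functional pipeline.
    from functools import reduce

    orders = [("widget", 3), ("gadget", 0), ("sprocket", 7)]

    in_stock = filter(lambda o: o[1] > 0, orders)
    names = map(lambda o: o[0].upper(), in_stock)
    count = reduce(lambda n, _: n + 1, names, 0)

    print(count)  # 2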

November 15, 2007

Topic: Code

0 comments

Ground Control to Architect Tom...:
Can you hear me?

Project managers love him, recent software engineering graduates bow to him, and he inspires code warriors deep in the development trenches to wonder if a technology time warp may have passed them by. How can it be that no one else has ever proposed software development with the simplicity, innovation, and automation being trumpeted by Architect Tom? His ideas sound so space-age, so futuristic, but why should that be so surprising? After all, Tom is an architecture astronaut!

November 15, 2007

Topic: Code

0 comments

Some Swans are Black:
…and other catastrophes

You may well expect from my title that I’m about to plumb the depths of Nassim Nicholas Taleb’s theories on catastrophe and quasi-empirical randomness. I, in turn, expect that you’ve already read Taleb’s best-selling The Black Swan—The Impact of the Highly Improbable dealing with life’s innate uncertainties and how to expect or even cope with the unexpected.

August 16, 2007

Topic: Code

0 comments

Calendar:
August 2007

SIGGRAPH

August 16, 2007

0 comments

Voyage in the Agile Memeplex:
In the world of agile development, context is key.

Agile processes are not a technology, not a science, not a product. They constitute a space somewhat hard to define. Agile methods, or more precisely agile software development methods or processes, are a family of approaches and practices for developing software systems. Any attempt to define them runs into egos and marketing posturing. For our purposes here, we can define this space in two ways: By enumeration. Pointing to recognizable members of the set: XP, scrum, lean development, DSDM, Crystal, FDD, Agile RUP or OpenUP, etc.

August 16, 2007

Topic: Web Development

1 comment

Usability Testing for the Web:
Today’s sophisticated Web applications make tracking and listening to users more important than ever.

Today’s Internet user has more choices than ever before, with many competing sites offering similar services. This proliferation of options provides ample opportunity for users to explore different sites and find out which one best suits their needs for any particular service. Users are further served by the latest generation of Web technologies and services, commonly dubbed Web 2.0, which enables a better, more personalized user experience and encourages user-generated content.

August 16, 2007

Topic: Web Development

1 comment

Phishing Forbidden:
Current anti-phishing technologies prevent users from taking the bait.

Phishing is a significant risk facing Internet users today. Through e-mails or instant messages, users are led to counterfeit Web sites designed to trick them into divulging usernames, passwords, account numbers, and personal information. It is up to the user to ensure the authenticity of the Web site. Browsers provide some tools, but these are limited by at least three issues.

August 16, 2007

Topic: Web Development

1 comment

Building Secure Web Applications:
Believe it or not, it’s not a lost cause.

In these days of phishing and near-daily announcements of identity theft via large-scale data losses, it seems almost ridiculous to talk about securing the Web. At this point most people seem ready to throw up their hands at the idea or to lock down one small component that they can control in order to keep the perceived chaos at bay.

August 16, 2007

Topic: Web Development

0 comments

A Conversation with Joel Spolsky:
What it takes to build a good software company

Joel Spolsky has never been one to hide his opinions. Since 2000, he has developed a loyal following for his insightful, tell-it-like-it-is essays on software development and management on his popular Weblog “Joel on Software”. The prolific essayist has also published four books and started a successful software company, Fog Creek, in New York City, a place he feels is sorely lacking in product-oriented software development houses. Spolsky started Fog Creek not with a specific product in mind, but rather to create a kind of software developers’ utopia, where “programmers and software developers are the stars and everything else serves only to make them productive and happy.” So far, he has succeeded.

August 16, 2007

Topic: Business/Management

0 comments

Gettin’ Your Head Straight:
Kode Vicious is hungry. He sustains himself on your questions from the software development trenches (and lots of beer). Without your monthly missives, KV is like a fish out of water, or a scientist without a problem to solve. So please, do your part to keep him sane (or at least free from psychotic episodes), occupied, and useful.

Dear KV, One of the biggest problems I have is memory. Not the RAM in my computer, but the wet squishy stuff in my head. It seems that no matter how many signs I put up around my cube, nor how often I turn off all the annoying instant messaging clients I need to use for work, I can’t get through more than 15 minutes of work without someone interrupting me, and then I lose my train of thought. If this happens when I’m reading e-mail, that’s not a problem, but when working on code, in particular when debugging a difficult problem in code, this makes my life very difficult.

August 16, 2007

Topic: Development

0 comments

Letters:
Great Curmudgeon column

Great Curmudgeon column (“Alloneword”) by Stan Kelly-Bootle in the May/June 2007 issue of Queue. I relish satire. I distrust all announced principles of anything.

August 16, 2007

0 comments

Toward a Commodity Enterprise Middleware:
Can AMQP enable a new era in messaging middleware? A look inside standards-based messaging with AMQP

AMQP was born out of my own experience and frustrations in developing front- and back-office processing systems at investment banks. It seemed to me that we were living in an integration Groundhog Day: the same problems of connecting systems together would crop up with depressing regularity. Each time the same discussions about which products to use would happen, and each time the architecture of some system would be curtailed to allow for the fact that the chosen middleware was reassuringly expensive. From 1996 through to 2003 I was waiting for the solution to this obvious requirement to materialize as a standard, and thereby become a commodity.
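
To make the commodity point concrete: once the wire protocol is a standard, any conforming client can talk to any conforming broker. A minimal sketch using the third-party pika client against a broker on localhost (both the client and the broker setup are assumptions for illustration, not anything prescribed by the article):

    # Publish one message over AMQP; any conforming broker should accept it.
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="trades")            # idempotent declare
    channel.basic_publish(exchange="",               # default exchange
                          routing_key="trades",
                          body=b"BUY 100 ACME @ 12.34")
    conn.close()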

June 7, 2007

Topic: Web Services

0 comments

The Seven Deadly Sins of Linux Security:
Avoid these common security risks like the devil.

The problem with security advice is that there is too much of it and that those responsible for security certainly have too little time to implement all of it. The challenge is to determine what the biggest risks are and to worry about those first and about others as time permits. Presented here are the seven common problems, the seven deadly sins of security, most likely to allow major damage to occur to your system or bank account.

June 7, 2007

Topic: Security

1 comment

API: Design Matters:
Why changing APIs might become a criminal offense.

After more than 25 years as a software engineer, I still find myself underestimating the time it will take to complete a particular programming task. Sometimes, the resulting schedule slip is caused by my own shortcomings: as I dig into a problem, I simply discover that it is a lot harder than I initially thought, so the problem takes longer to solve—such is life as a programmer. Just as often I know exactly what I want to achieve and how to achieve it, but it still takes far longer than anticipated. When that happens, it is usually because I am struggling with an API that seems to do its level best to throw rocks in my path and make my life difficult.

June 7, 2007

Topic: API Design

3 comments

Alloneword:
Errors, deceptions, and ambiguity

Three years ago, to the very tick, my first Curmudgeon column appeared in ACM Queue to the rapturous, one-handed claps of the silent majority. Since then my essays have alternated intermittently with those of other grumpy contributors. With this issue (muffled drumroll), I’m proud to announce a Gore-like climate change in the regime that will redefine the shallow roots of ACJ (agile computer journalism, of which more anon). The astute ACM Queue Management (yes, there is such; you really must read the opening pages of this magazine!) has offered me the chance to go solo. For the next few Queues, at least, I am crowned King Curmudgeon, the Idi Amin of Moaners, nay, Supreme General Secretary of the Complaining Party!

June 7, 2007

Topic: Development

0 comments

Calendar:
May 2007

OSBC (Open Source Business Conference)

June 7, 2007

0 comments

A Conversation with Michael Stonebraker and Margo Seltzer:
Relating to databases

Over the past 30 years Michael Stonebraker has left an indelible mark on the database technology world. Stonebraker’s legacy began with Ingres, an early relational database initially developed in the 1970s at UC Berkeley, where he taught for 25 years. The Ingres technology lives on today in both the Ingres Corporation’s commercial products and the open source PostgreSQL software. A prolific entrepreneur, Stonebraker also started successful companies focused on the federated database and stream-processing markets. He was elected to the National Academy of Engineering in 1998 and currently is adjunct professor of computer science at MIT. Interviewing Stonebraker is Margo Seltzer, one of the founders of Sleepycat Software, makers of Berkeley DB, a popular embedded database engine now owned by Oracle.

June 7, 2007

Topic: Databases

0 comments

Embracing Wired Networks:
Even at home, hardwiring is the way to go.

Most people I know run wireless networks in their homes. Not me. I hardwired my home and leave the Wi-Fi turned off. My feeling is to do it once, do it right, and then forget about it. I want a low-cost network infrastructure with guaranteed availability, bandwidth, and security. If these attributes are important to you, Wi-Fi alone is probably not going to cut it. People see hardwiring as part of a home remodeling project and, consequently, a big headache. They want convenience. They purchase a wireless router, usually leave all the default settings in place, hook it up next to the DSL or cable modem, and off they go.

June 7, 2007

Topic: Networks

0 comments

KV the Loudmouth:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

What requirement is being satisfied by having Unclear build a P2P file-sharing system? Based upon the answer, it may be more effective, and perhaps even more secure, to use an existing open source project or purchase commercial software to address the business need.

June 7, 2007

Topic: Open Source

0 comments

Ode or Code? Programmers be Mused!:
Is your code literate or literary?

My sermon-text this grumpy month is Matt Barton’s article “The Fine Art of Computer Programming”, in which he extols the virtues of what is widely called literate programming. As with the related terms literary and literature, we have ample room for wranglings of a theological intensity, further amplified by disputes inherent in the questions: “Is programming science or art?” and “What do programmers need to know?” Just as we must prefer agile to clumsy programming, it’s hard to knock anything literate.

May 4, 2007

Topic: Code

0 comments

Calendar:
April 2007

NSDI (Usenix Symposium on Networked Systems Design and Implementation)

Gelato ICE (Itanium Conference and Expo), April 15-18, 2007, San Jose, California

May 4, 2007

0 comments

Beyond Beowulf Clusters:
As clusters grow in size and complexity, it becomes harder and harder to manage their configurations.

In the early ’90s, the Berkeley NOW Project under David Culler posited that groups of less capable machines could be used to solve scientific and other computing problems at a fraction of the cost of larger computers. In 1994, Donald Becker and Thomas Sterling worked to drive the costs even lower by adopting the then-fledgling Linux operating system to build Beowulf clusters at NASA’s Goddard Space Flight Center. By tying desktop machines together with open source tools such as PVM, MPI, and PBS, early clusters—which were often PC towers stacked on metal shelves with a nest of wires interconnecting them—fundamentally altered the balance of scientific computing.

May 4, 2007

Topic: Distributed Computing

0 comments

The Evolution of Security:
What can nature tell us about how best to manage our risks?

Security people are never in charge unless an acute embarrassment has occurred. Otherwise, their advice is tempered by “economic reality,” which is to say that security is a means, not an end. This is as it should be. Since means are about trade-offs, security is about trade-offs, but you knew all that. Our trade-off decisions can be hard to make, and these hard-to-make decisions come in two varieties. One type occurs when the uncertainty of the alternatives is so great that they can’t be sorted in terms of probable effect. In that case, other factors such as familiarity or convenience will drive the decision.

May 4, 2007

Topic: Security

0 comments

DNS Complexity:
Although it contains just a few simple rules, DNS has grown into an enormously complex system.

DNS is a distributed, coherent, reliable, autonomous, hierarchical database, the first and only one of its kind. Created in the 1980s when the Internet was still young but overrunning its original system for translating host names into IP addresses, DNS is one of the foundation technologies that made the worldwide Internet possible. Yet this did not all happen smoothly, and DNS technology has been periodically refreshed and refined. Though it’s still possible to describe DNS in simple terms, the underlying details are by now quite sublime.

May 4, 2007

Topic: Networks

2 comments

A Conversation with Cory Doctorow and Hal Stern:
Considering the open source approach

For years, the software industry has used open source, community-based methods of developing and improving software—in many cases offering products for free. Other industries, such as publishing and music, are just beginning to embrace more liberal approaches to copyright and intellectual property. This month Queue is delighted to have a representative from each of these camps join us for a discussion of what’s behind some of these trends, as well as hot-topic issues such as identity management, privacy, and trust.

May 4, 2007

Topic: Open Source

0 comments

Advice to a Newbie:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Dear KV, I am new to programming and just started reading some books about programming, particularly C++ and Visual Basic. I truly enjoy programming a lot, to the extent that for the past couple of months I have never missed a day without writing some code. My main concern now is what the world holds for programmers. If someone is called a programmer (i.e., professionally), what will he or she really be programming? As in, will you always be inventing new software or what, really? This is mainly in the case of someone who will not be working for someone else.

May 4, 2007

Topic: Code

0 comments

Going Forward:
Web and digital enhancement for Queue

What an amazing group of readers you are. You now number well over 30,000, and we have been diligently observing your writing on the demographic wall. It’s clear that you care about ACM Queue and the role it plays in your work and professional lives. We appreciate your feedback and have taken steps to address some of your many fine suggestions.

May 4, 2007

0 comments

As Big as a Barn?:
Taking measure of measurement

The Texas rancher is trying to impress the English farmer with the size of his property. "I can drive out at dawn across my land, and by sundown I still haven’t reached my ranch’s borders." The Englishman nods sympathetically and says, "Yes, yes, I know what you mean. I have a car like that, too."

March 9, 2007

Topic: Development

0 comments

Calendar:
March 2007

DAMA International Symposium and Wilshire Meta-Data Conference

March 9, 2007

0 comments

Unified Communications with SIP:
SIP can provide realtime communications as a network service.

Communications systems based on the SIP (Session Initiation Protocol) standard have come a long way over the past several years. SIP is now largely complete and covers even advanced telephony and multimedia features and feature interactions. Interoperability between solutions from different vendors is repeatedly demonstrated at events such as the SIPit (interoperability test) meetings organized by the SIP Forum, and several manufacturers have proven that proprietary extensions to the standard are no longer driven by technical needs but rather by commercial considerations.

March 9, 2007

Topic: SIP

0 comments

Making SIP Make Cents:
P2P payments using SIP could enable new classes of micropayment applications and business models.

The Session Initiation Protocol (SIP) is used to set up realtime sessions in IP-based networks. These sessions might be for audio, video, or IM communications, or they might be used to relay presence information. SIP service providers are mainly focused on providing a service that copies that provided by the PSTN (public switched telephone network) or the PLMN (public land mobile network) to the Internet-based environment.

March 9, 2007

Topic: SIP

0 comments

Decentralizing SIP:
If you’re looking for a low-maintenance IP communications network, peer-to-peer SIP might be just the thing.

SIP (Session Initiation Protocol) is the most popular protocol for VoIP in use today. It is widely used by enterprises, consumers, and even carriers in the core of their networks. Since SIP is designed for establishing media sessions of any kind, it is also used for a variety of multimedia applications beyond VoIP, including IPTV, videoconferencing, and even collaborative video gaming.

March 9, 2007

Topic: SIP

0 comments

SIP: Basics and Beyond:
More than just a simple telephony application protocol, SIP is a framework for developing communications systems.

Chances are you’re already using SIP (Session Initiation Protocol). It is one of the key innovations driving the current evolution of communications systems. Its first major use has been signaling in Internet telephony. Large carriers have been using SIP inside their networks for interconnect and trunking across long distances for several years. If you’ve made a long-distance call, part of that call probably used SIP.
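
Under the hood, SIP is a text protocol with much the same shape as HTTP: a request line, headers, and an optional body describing the session. A schematic INVITE assembled in Python purely for illustration (the addresses, branch, and tag values are invented; a real user agent also handles transactions, authentication, and SDP negotiation):

    # Schematic SIP INVITE in the RFC 3261 shape; all values below are made up.
    invite = "\r\n".join([
        "INVITE sip:bob@example.com SIP/2.0",
        "Via: SIP/2.0/UDP client.example.org;branch=z9hG4bKdemo",
        "From: Alice <sip:alice@example.org>;tag=1234",
        "To: Bob <sip:bob@example.com>",
        "Call-ID: demo-call-id@client.example.org",
        "CSeq: 1 INVITE",
        "Contact: <sip:alice@client.example.org>",
        "Content-Length: 0",
        "", "",
    ])
    print(invite)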

March 9, 2007

Topic: SIP

0 comments

A Conversation with Cullen Jennings and Doug Wadkins:
Getting the lowdown on SIP

In our interview this month, Cisco Systems’ Cullen Jennings offers this call to arms for SIP (Session Initiation Protocol): "The vendors need to get on with implementing the standards that are made, and the standards guys need to hurry up and finish their standards." And he would know. Jennings has spent his career both helping define IP telephony standards and developing products based on them. As a Distinguished Engineer in Cisco’s Voice Technology Group, Jennings’s current work focuses on VoIP, conferencing, security, and firewall and NAT traversal.

March 9, 2007

Topic: SIP

0 comments

Repurposing Consumer Hardware:
New uses for small form-factor, low-power machines

These days you have to be more and more creative when tackling home technology projects because the inventory of raw material is changing so rapidly. Market and product cycles continue to shrink, standard form factors are being discarded to drive down costs, and pricing is becoming more dependent on market value and less on direct manufacturing cost. As a result, standard modular building blocks are disappearing. New alternative uses for obsolete or low-price products are emerging, however.

March 9, 2007

Topic: Processors

0 comments

APIs with an Appetite:
Time for everyone’s favorite subject again: API design. This one just doesn’t get old, does it? Well, OK, maybe it does, but leave it to Kode Vicious to inject some fresh insight into this age-old programming challenge. This month KV turns the spotlight on the delicate art of API sizing.

Dear KV, This may sound funny to you, but one of my co-workers recently called one of my designs fat. My project is to define a set of database APIs that will be used by all kinds of different front-end Web services to store and retrieve data. The problem is that a one-size-fits-all approach can’t work because each customer of the system has different needs. Some are storing images, some are storing text, sound, video, and just about anything else you can imagine. In the current design each type of data has its own specific set of APIs to store, search, retrieve, and manipulate its own type of data.

March 9, 2007

Topic: API Design

0 comments

DOA with SOA:
Adopting this architectural style is no cure-all.

It looks like today is finally the day that we all knew was coming. It was only a matter of time. An ambulance has just pulled up to haul away Marty the Software Manager after his boss pummeled him for failing to deliver on promises of money savings, improved software reuse, and reduced time to market that had been virtually guaranteed merely by adopting SOA (service-oriented architecture). Everything could have been so different for Marty. If only there had been a red-hot market for a software application that fetched the price of London gold, converted the price from pounds to dollars, calculated the shipping costs for the desired quantity, and then returned a random verse from the King James Bible.

February 2, 2007

Topic: Computer Architecture

1 comment

Calendar:
February 2007

SCALE 5x

February 2, 2007

0 comments

Realtime Garbage Collection:
It’s now possible to develop realtime systems using Java.

Traditional computer science deals with the computation of correct results. Realtime systems interact with the physical world, so they have a second correctness criterion: they have to compute the correct result within a bounded amount of time. Simply building functionally correct software is hard enough. When timing is added to the requirements, the cost and complexity of building the software increase enormously.
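
The second correctness criterion is easy to state in code: a result that arrives after its deadline is wrong, no matter how correct its value. A toy check in Python, where an ill-timed stop-the-world collection can blow the budget (illustrative only; the article’s context is Java, and realtime collectors bound such pauses by design):

    # Toy deadline check: in a realtime system, lateness is incorrectness.
    import gc, time

    DEADLINE = 0.005  # hypothetical 5 ms budget for one control-loop step

    start = time.perf_counter()
    result = sum(range(10_000))  # stand-in for the real computation
    gc.collect()                 # simulate a collector pause inside the window
    elapsed = time.perf_counter() - start

    status = "met" if elapsed <= DEADLINE else "MISSED"
    print(f"result={result} elapsed={elapsed * 1e3:.2f} ms, {status} deadline")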

February 2, 2007

Topic: Programming Languages

1 comment

Open vs. Closed:
Which source is more secure?

There is no better way to start an argument among a group of developers than proclaiming Operating System A to be "more secure" than Operating System B. I know this from first-hand experience, as previous papers I have published on this topic have led to reams of heated e-mails directed at me, including some that were, quite literally, physically threatening. Despite the heat (not light!) generated from attempting to investigate the relative security of different software projects, investigate we must.

February 2, 2007

Topic: Security

0 comments

One Step Ahead:
Security vulnerabilities abound, but a few simple steps can minimize your risk.

Every day IT departments are involved in an ongoing struggle against hackers trying to break into corporate networks. A break-in can carry a hefty price: loss of valuable information, tarnishing of the corporate image and brand, service interruption, and hundreds of resource hours of recovery time. Unlike other aspects of information technology, security is adversarial. It pits IT departments against hackers.

February 2, 2007

Topic: Security

0 comments

A Conversation with Jamie Butler:
Rootkitting out all evil

Rootkit technology hit center stage in 2005 when analysts discovered that Sony BMG surreptitiously installed a rootkit as part of its DRM (digital rights management) solution. Although that debacle increased general awareness of rootkits, the technology remains the scourge of the software industry through its ability to hide processes and files from detection by system analysis and anti-malware tools.

February 2, 2007

Topic: Security

0 comments

A License to Kode:
Code-scanning software is expensive and I’m not sure it’s worth it. What do you think?

While it’s sometimes tempting to blame the coders, the seeds of many problems are sown well before any lines of code (dodgy as they may be) have been written. Everything from the choice of tools to the choice of a software license can affect the quality, usability, and commercial potential of a product. This month Kode Vicious takes a step away from coding technique and addresses some of these tough decisions with which developers must grapple.

February 2, 2007

Topic: Tools

0 comments

What’s on Your Hard Drive?:
Over the past couple of years we’ve sorted through thousands of your submissions and developed a keen sense of which tools our readers love and loathe most. What’s interesting is that our data says nothing about the overall popularity of any one product: some of the most-lauded products are used by very few developers, while some of the most-loathed tools are ubiquitous.

February 2, 2007

0 comments

News 2.0:
Taking a second look at the news so you don’t have to.

Malware—Quantity over Quality

February 2, 2007

0 comments

Will the Real Bots Stand Up?:
From EDSAC to iPod—predictions elude us

When asked which advances in computing technology have most dazzled me since I first coaxed the Cambridge EDSAC into fitful leaps of calculation in the 1950s, I must admit that Apple’s iPod sums up the many unforeseen miracles in one amazing, iconic gadget. Unlike those electrical nose-hair clippers and salt ’n’ pepper mills that gather dust after a few shakes, my iPod lives literally near my heart, on and off the road, in and out of bed like a versatile lover—except when it’s recharging and downloading in the piracy of my own home.

December 28, 2006

Topic: Code

0 comments

Calendar:
December 2006

LISA (Large Installation System Administration) Conference

December 28, 2006

0 comments

Better, Faster, More Secure:
Who’s in charge of the Internet’s future?

Since I started a stint as chair of the IETF in March 2005, I have frequently been asked, “What’s coming next?” but I have usually declined to answer. Nobody is in charge of the Internet, which is a good thing, but it makes predictions difficult. The reason the lack of central control is a good thing is that it has allowed the Internet to be a laboratory for innovation throughout its life—and it’s a rare thing for a major operational system to serve as its own development lab.

December 28, 2006

Topic: Networks

0 comments

The Virtualization Reality:
Are hypervisors the new foundation for system software?

A number of important challenges are associated with the deployment and configuration of contemporary computing infrastructure. Given the variety of operating systems and their many versions—including the often-specific configurations required to accommodate the wide range of popular applications—it has become quite a conundrum to establish and manage such systems.

December 28, 2006

Topic: Virtualization

3 comments

Unlocking Concurrency:
Multicore programming with transactional memory

Multicore architectures are an inflection point in mainstream software development because they force developers to write parallel programs. In a previous article in Queue, Herb Sutter and James Larus pointed out, “The concurrency revolution is primarily a software revolution. The difficult problem is not building multicore hardware, but programming it in a way that lets mainstream applications benefit from the continued exponential growth in CPU performance.” In this new multicore world, developers must write explicitly parallel applications that can take advantage of the increasing number of cores that each successive multicore generation will provide.
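
The programming model at issue lets a thread group several reads and writes into one step that either commits entirely or retries, with the runtime detecting conflicts. A deliberately simplified optimistic sketch in Python, not the authors’ system: it validates read versions only at commit time, whereas a production STM also keeps a running transaction from ever observing an inconsistent snapshot:

    import threading

    class TVar:
        """A transactional variable: a value plus a version number."""
        def __init__(self, value):
            self.value = value
            self.version = 0

    _commit_lock = threading.Lock()

    def atomically(transaction):
        """Run transaction(read, write) until it commits without conflict."""
        while True:
            reads = {}   # TVar -> version observed at first read
            writes = {}  # TVar -> tentative new value

            def read(tv):
                if tv in writes:
                    return writes[tv]
                reads.setdefault(tv, tv.version)
                return tv.value

            def write(tv, value):
                writes[tv] = value

            result = transaction(read, write)
            with _commit_lock:
                if all(tv.version == v for tv, v in reads.items()):
                    for tv, value in writes.items():
                        tv.value = value
                        tv.version += 1
                    return result
            # Conflict: another thread committed under our feet; retry.

    a, b = TVar(10), TVar(0)

    def transfer(read, write):  # move one unit from a to b, atomically
        write(a, read(a) - 1)
        write(b, read(b) + 1)

    atomically(transfer)
    print(a.value, b.value)  # 9 1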

December 28, 2006

Topic: Concurrency

0 comments

A Conversation with John Hennessy and David Patterson:
They wrote the book on computing.

As authors of the seminal textbook “Computer Architecture: A Quantitative Approach”, John Hennessy and David Patterson probably don’t need an introduction. You’ve probably read their book in college or, if you were lucky enough, even attended one of their classes. Since rethinking, and then rewriting, the way computer architecture is taught, both have remained committed to educating a new generation of engineers with the skills to tackle today’s tough problems in computer architecture, Patterson as a professor at Berkeley and Hennessy as a professor, dean, and now president of Stanford University.

December 28, 2006

Topic: Computer Architecture

1 comment

Peerless P2P:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Dear KV, I’ve just started on a project working with P2P software, and I have a few questions. Now, I know what you’re thinking, and no, this isn’t some copyright-violating piece of kowboy kode. It’s a respectable corporate application for people to use to exchange data such as documents, presentations, and work-related information. My biggest issue with this project is security, for example, accidentally exposing our users’ data or leaving them open to viruses. There must be more things to worry about, but those are the top two. So, I want to ask "What would KV do?"

December 28, 2006

Topic: Networks

0 comments

What’s on Your Hard Drive?:
As the year draws to an end, we would like to thank all of our readers who have submitted to WOYHD.

Over the past 12 months we’ve seen a wide variety of tools mentioned, and, come 2007, we would like to see a lot more of the same.

December 28, 2006

0 comments

Forward Thinking:
Technology knows no fear.

I am of the opinion that humans are not flexible creatures. We resist change like oil resists water. Even if a change is made for the good of humankind, if it messes around with our daily routine, then our natural instinct is to fight the change like a virus.

December 28, 2006

0 comments

The Joy of Spam:
Embracing e-mail’s dark side

Not a day goes by that a large amount of spam doesn’t get past the two filters that I have in place. Most of this e-mail is annoying and some of it dangerous. But I have finally come to peace with spam and it no longer bothers me. How did I do that, you ask? I have learned to respect, even love, spam’s malicious beauty. I want to share my journey to inner peace, hopeful that you will find happiness, too.

November 10, 2006

Topic: Email and IM

0 comments

Calendar:
November 2006

CIKM (Conference on Information and Knowledge Management)

November 10, 2006

0 comments

Playing for Keeps:
Will security threats bring an end to general-purpose computing?

Inflection points come at you without warning and quickly recede out of reach. We may be nearing one now. If so, we are now about to play for keeps, and “we” doesn’t mean just us security geeks. If anything, it’s because we security geeks have not worked the necessary miracles already that an inflection point seems to be approaching at high velocity.

November 10, 2006

Topic: Web Security

0 comments

Criminal Code: The Making of a Cybercriminal:
A fictional account of malware creators and their experiences

This is a fictional account of malware creators and their experiences. Although the characters are made up, the techniques and events are patterned on real activities of many different groups developing malicious software. “Make some money!” Misha’s father shouted. “You spent all that time for a stupid contest and where did it get you? Nowhere! You have no job and you didn’t even win! You need to stop playing silly computer games and earn some money!”

November 10, 2006

Topic: Web Security

0 comments

E-mail Authentication: What, Why, How?:
Perhaps we should have figured out what was going to happen when Usenet started to go bad.

Internet e-mail was conceived in a different world than we live in today. It was a small, tightly knit community, and we didn’t really have to worry too much about miscreants. Generally, if someone did something wrong, the problem could be dealt with through social means; “shunning” is very effective in small communities. Perhaps we should have figured out what was going to happen when Usenet started to go bad. Usenet was based on an inexpensive network called UUCP, which was fairly easy to join, so it gave us a taste of what happens when the community becomes larger and more distributed—and harder to manage.

November 10, 2006

Topic: Email and IM

0 comments

Cybercrime: An Epidemic:
Who commits these crimes, and what are their motivations?

Painted in the broadest of strokes, cybercrime essentially is the leveraging of information systems and technology to commit larceny, extortion, identity theft, fraud, and, in some cases, corporate espionage. Who are the miscreants who commit these crimes, and what are their motivations? One might imagine they are not the same individuals committing crimes in the physical world. Bank robbers and scam artists garner a certain public notoriety after only a few occurrences of their crimes, yet cybercriminals largely remain invisible and unheralded. Based on sketchy news accounts and a few public arrests, such as Mafiaboy, accused of paralyzing Amazon, CNN, and other Web sites, the public may infer these miscreants are merely a subculture of teenagers.

November 10, 2006

Topic: Web Security

1 comment

A Conversation with Douglas W. Jones and Peter G. Neumann:
Does technology help or hinder election integrity?

Elections form the fundamental basis of all democracies. In light of many past problems with the integrity of election processes around the world, ongoing efforts have sought to increase the use of computers and communications in elections to help automate the process. Unfortunately, many existing computer-related processes are poorly conceived and implemented, introducing new problems related to such issues as voter confidentiality and privacy, computer system integrity, accountability and resolution of irregularities, ease of administration by election officials, and ease of use by voters—with many special problems for those with various handicaps. Overall, the issues relating to computer security provide a representative cross-section of the difficulties inherent in attempting to develop and operate trustworthy systems for other applications.

November 10, 2006

Topic: Privacy and Rights

0 comments

Better Health Care Through Technology:
What can you do for aging loved ones?

Leveraging technology to support aging relatives in their homes is a cost-efficient way to maintain health and happiness and extend life. As the technology expert for my extended family, it has fallen to me to architect the infrastructure that will support my family’s aging loved ones in their homes as long as possible. Over the years, I have assisted four different senior households in achieving this goal, and although things have been bumpy at times, I have refined technical solutions and methodologies that seem to work well.

November 10, 2006

Topic: Bioscience

2 comments

Understanding the Problem:
Is there any data showing that Java projects are any more or less successful than those using older languages?

I’ve done a one-day intro class and read a book on Java but never had to write any serious code in it. As an admin, however, I’ve been up close and personal with a number of Java server projects, which seem to share a number of problems.

November 10, 2006

Topic: Programming Languages

0 comments

What’s on Your Hard Drive?:
Working with the same specific programs on a daily basis, developers often form intimate relationships with them.

November 10, 2006

0 comments

News 2.0:
Taking a second look at the news so you don’t have to.

Medical Profession Slow to Embrace ’Net… or Is It?

November 10, 2006

0 comments

The Criminal Mind:
We’re all vulnerable to cybercrime.

Technology is a catch-22. It makes our lives easier and more productive, but in doing so it also makes us more vulnerable to the elements that can make our lives very difficult.

November 10, 2006

0 comments

You Can Look It Up: or Maybe Not:
Chasing citations through endless, mislabeled nodes

Many are said to have said, "If I can’t take it with me, I’m not going!" I’ve just said it, but that hardly counts. Who, we demand, said or wrote it first? It’s what I call (and claim first rights on) a FUQ (frequently unanswerable question, pronounced fook to avoid ambiguity and altercation). Yogi Berra’s famous advice was "You can look it up," meaning, in fact, "Take my word on this." He knew quite well that few had the means or patience to wade through the records. Nowadays, of course, as we quip in Unix, it’s easier done than sed.

October 10, 2006

Topic: Development

1 comment

Calendar:
October 2006

Grace Hopper Celebration of Women in Computing Conference

October 10, 2006

0 comments

Breaking the Major Release Habit:
Can agile development make your team more productive?

Keeping up with the rapid pace of change can be a daunting task. Just as you finally get your software working with a new technology to meet yesterday’s requirements, a newer technology is introduced or a new business trend comes along to upset the apple cart. Whether your new challenge is Web services, SOA (service-oriented architecture), ESB (enterprise service bus), AJAX, Linux, the Sarbanes-Oxley Act, distributed development, outsourcing, or competitive pressure, there is an increasing need for development methodologies that help to shorten the development cycle time, respond to user needs faster, and increase quality all at the same time.

October 10, 2006

Topic: Development

1 comment

The Heart of Eclipse:
A look inside an extensible plug-in architecture

Eclipse is both an open, extensible development environment for building software and an open, extensible application framework upon which software can be built. Considered the most popular Java IDE, it provides a common UI model for working with tools and promotes rapid development of modular features based on a plug-in component model. The Eclipse Foundation designed the platform to run natively on multiple operating systems, including Macintosh, Windows, and Linux, providing robust integration with each and rich clients that support the GUI interactions everyone is familiar with: drag and drop, cut and paste (clipboard), navigation, and customization.

October 10, 2006

Topic: Development

0 comments

The Long Road to 64 Bits:
Double, double, toil and trouble

Shakespeare’s words often cover circumstances beyond his wildest dreams. Toil and trouble accompany major computing transitions, even when people plan ahead. To calibrate “tomorrow’s legacy today,” we should study “tomorrow’s legacy yesterday.” Much of tomorrow’s software will still be driven by decades-old decisions. Past decisions have unanticipated side effects that last decades and can be difficult to undo.
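
A concrete instance of a decades-old decision biting later: any format or interface that once reserved 32 bits for a size or offset caps out at 4 GB, long after hardware and file sizes have moved on. A tiny Python illustration (the record layout is hypothetical, not taken from the article):

    # A record format that, years ago, reserved 32 bits for a file offset.
    import struct

    offset = 5 * 2**30  # a 5 GB offset, unremarkable today

    try:
        struct.pack("<I", offset)       # the legacy 32-bit field overflows
    except struct.error as e:
        print("legacy field:", e)

    print(len(struct.pack("<Q", offset)), "bytes")  # the 64-bit field we wish we had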

October 10, 2006

Topic: System Evolution

0 comments

A Conversation with David Brown:
The nondisruptive theory of system evolution

This month Queue tackles the problem of system evolution. One key question is: What do developers need to keep in mind while evolving a system, to ensure that the existing software that depends on it doesn’t break? It’s a tough problem, but there are few more qualified to discuss this subject than two industry veterans now at Sun Microsystems, David Brown and Bob Sproull. Both have witnessed what happens to systems over time and have thought a lot about the introduction of successive technological innovations to a software product without undermining its stability or the software that depends on it.

October 10, 2006

Topic: System Evolution

0 comments

Saddle Up, Aspiring Code Jockeys:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Dear KV, I am an IT consultant/contractor. I work mainly on networks and Microsoft operating systems. I have been doing this work for more than eight years. Unfortunately, it is starting to bore me. My question is: How would I go about getting back into programming? I say getting back into because I have some experience. In high school I took two classes of programming in Applesoft BASIC. I loved it, aced everything, and was the best programming student the teacher ever saw. This boosted my interest in computer science, which I pursued in college. In college, I took classes in C++, Java, and Web development.

October 10, 2006

Topic: Code

0 comments

What’s on Your Hard Drive?:
The name of this department, “What’s on Your Hard Drive?”, suggests software that’s installed locally on your desktop or laptop.

With the amount of software that now runs on remote servers accessed through a Web browser, however, we’re expanding our scope to include any of these thin client tools or services that you might find useful.

October 10, 2006

0 comments

Reality vs. Perception:
Cleaning up misperception (or bad code) is a messy task.

October is upon us, the month that celebrates the struggle between perception and reality. Every October 31, children of all ages don costumes and masks in pursuit of receiving treats or playing tricks, trying desperately to fool everyone into thinking they’re a witch or a goblin or a clown instead of a 10-year-old girl or a teenage boy or a middle-aged business executive.

October 10, 2006

0 comments

Seeking Compliance Nirvana:
Don’t let SOX and PCI get the better of you

Compliance. The mere mention of it brings to mind a harrowing list of questions and concerns. For example, who is complying and with what? With so many standards, laws, angles, intersections, overlaps, and consequences, who ultimately gets to determine if you are compliant or not? How do you determine what is in scope and what is not? And why do you instantly think of an audit when you hear the word compliance? To see the tangled hairball that is compliance, just take a look at my company.

September 15, 2006

Topic: Compliance

0 comments

Keeping Score in the IT Compliance Game:
ALM can help organizations meet tough IT compliance requirements.

Achieving developer acceptance of standardized procedures for managing applications from development to release is one of the largest hurdles facing organizations today. Establishing a standardized development-to-release workflow, often referred to as the ALM (application lifecycle management) process, is particularly critical for organizations in their efforts to meet tough IT compliance mandates. This is much easier said than done, as different development teams have created their own unique procedures that are undocumented, unclear, and nontraceable.

September 15, 2006

Topic: Compliance

0 comments

Compliance Deconstructed:
When you break it down, compliance is largely about ensuring that business processes are executed as expected.

The topic of compliance becomes increasingly complex each year. Dozens of regulatory requirements can affect a company’s business processes. Moreover, these requirements are often vague and confusing. When those in charge of compliance are asked if their business processes are in compliance, it is understandably difficult for them to respond succinctly and with confidence. This article looks at how companies can deconstruct compliance, dealing with it in a systematic fashion and applying technology to automate compliance-related business processes. It also looks specifically at how Microsoft approaches compliance with SOX.

September 15, 2006

Topic: Compliance

1 comment

Box Their SOXes Off:
Being proactive with SAS 70 Type II audits helps both parties in a vendor relationship.

Data is a precious resource for any large organization. The larger the organization, the more likely it will rely to some degree on third-party vendors and partners to help it manage and monitor its mission-critical data. In the wake of new regulations for public companies, such as Section 404 of SOX, the folks who run IT departments for Fortune 1000 companies have an ever-increasing need to know that when it comes to the 24/7/365 monitoring of their critical data transactions, they have business partners with well-planned and well-documented procedures. In response to a growing need to validate third-party controls and procedures, some companies are insisting that certain vendors undergo SAS 70 Type II audits.

September 15, 2006

Topic: Compliance

0 comments

Complying with Compliance:
Blowing it off is not an option.

“Hey, compliance is boring. Really, really boring. And besides, I work neither in the financial industry nor in health care. Why should I care about SOX and HIPAA?” Yep, you’re absolutely right. You write payroll applications, or operating systems, or user interfaces, or (heaven forbid) e-mail servers. Why should you worry about compliance issues?

September 15, 2006

Topic: Compliance

0 comments

A Requirements Primer:
A short primer that provides background on four of the most important compliance challenges that organizations face today.

Many software engineers and architects are exposed to compliance through the growing number of rules, regulations, and standards with which their employers must comply. Some of these requirements, such as HIPAA, focus primarily on one industry, whereas others, such as SOX, span many industries. Some apply to only one country, while others cross national boundaries.

September 15, 2006

Topic: Compliance

1 comment

Rationalizing a Home Terabyte Server:
Self-indulgent, or a view of the future?

With 1 TB of RAID 5 storage, most of my friends believe I have really gone off the deep end with my home server. They may be right, but as in most things in life, I have gotten to this point through a rational set of individual upgrades all perfectly reasonable at the time. Rather than being overly indulgent to my inner geek, am I an early adopter of what will be the inevitable standard for home IT infrastructure? Here is my story; you be the judge.

September 15, 2006

Topic: Development

0 comments

Facing the Strain:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Dear KV, I’ve been working on a software team that produces an end-user application on several different operating system platforms. I started out as the build engineer, setting up the build system, then the nightly test scripts, and now I work on several of the components themselves, as well as maintaining the build system. The biggest problem I’ve seen in building software is the lack of API stability. It’s OK when new APIs are added--you can ignore those if you like--and when APIs are removed I know, because the build breaks. The worst problem is when someone changes an API, as this isn’t discovered until some test script--or worse, a user--executes the code and it blows up.

September 15, 2006

Topic: Code

0 comments

What’s on Your Hard Drive?:
One of the interesting things about WOYHD is seeing which tools developers seem to rely on most heavily.

Despite the large amount of software available, a few core programs appear again and again.

September 15, 2006

0 comments

News 2.0:
Taking a second look at the news so you don’t have to.

Open Source Gets Mac Attack

September 15, 2006

0 comments

Playing by the Rules:
The complex world of compliance

Some of my favorite childhood memories are of playing games with my sister—both structured games such as Monopoly or hopscotch and imagination-fueled games such as cops and robbers or roller derby girls. Regardless of whether the game had established regulations, often our play would devolve into what I call Calvinball, a term coined in the comic strip Calvin and Hobbes referring to the act of making up the rules as you go along.

September 15, 2006

Topic: Compliance

0 comments

Like a Podcast in the Sea: Mean ol’ LoTech Blues:
Is it just a matter of semantics?

Mache Creeger’s general pessimism about IT’s status quo rests on his perception that HiTech (the character- and tree-saving token for High Technology, somewhat, if not totally, vitiated by this long-winded, unnecessary explanation) is not quite Hi enough. IT relies too much on dreary, evolutionary gradualism rather than on the exciting Kuhnian discontinuities that spell revolution and paradigm shifts. I have no qualms about Creeger’s observation that the marketeers, both commercial and academic (if such categories can be distinguished in these pursy PC times), are fond of paint jobs - coloring the most modest upgrades with claims of major, must-have breakthroughs. This is an ancient and, alas, effective promotional ploy in other trades.

July 27, 2006

Topic: Mobile Computing

0 comments

Calendar:
6-Jul

Linux Kernel Development Summit

July 27, 2006

0 comments

Too Much Information:
Two applications reveal the key challenges in making context-aware computing a reality.

As mobile computing devices and a variety of sensors become ubiquitous, new resources for applications and services - often collectively referred to under the rubric of context-aware computing - are becoming available to designers and developers. In this article, we consider the potential benefits and issues that arise from leveraging context awareness in new communication services that include the convergence of VoIP (voice over IP) and traditional information technology.

July 27, 2006

Topic: HCI

0 comments

The Invisible Assistant:
One lab’s experiment with ubiquitous computing

Ubiquitous computing seeks to place computers everywhere around us—into the very fabric of everyday life1—so that our lives are made better. Whether it is improving our job productivity, our ability to stay connected with family and friends, or our entertainment, the goal is to find ways to put technology to work for us by getting all those computers—large and small, visible and invisible—to work together.

July 27, 2006

Topic: HCI

0 comments

Social Perception:
Modeling human interaction for the next generation of communication services

Bob manages a team that designs and builds widgets. Life would be sweet, except that Bob’s team is distributed over three sites, located in three different time zones. Bob used to collect lots of frequent flyer miles traveling to attend meetings. Lately, however, business travel has evolved into a humanly degrading, wasteful ordeal. So Bob has invested in a high-bandwidth video communications system to cut down on business travel. Counting direct costs, the system was supposed to pay for itself within three months. There is a problem, however.

July 27, 2006

Topic: HCI

0 comments

The Future of Human-Computer Interaction:
Is an HCI revolution just around the corner?

Personal computing launched with the IBM PC. But popular computing—computing for the masses—launched with the modern WIMP (windows, icons, mouse, pointer) interface, which made computers usable by ordinary people. As popular computing has grown, the role of HCI (human-computer interaction) has increased. Most software today is interactive, and code related to the interface is more than half of all code. HCI also has a key role in application design. In a consumer market, a product’s success depends on each user’s experience with it.

July 27, 2006

Topic: HCI

3 comments

A Conversation with Jordan Cohen:
Speaking out about speech technology

Jordan Cohen calls himself ’sort of an engineer and sort of a linguist.’ This diverse background has been the foundation for his long history working with speech technology, including almost 30 years with government agencies, with a little time out in the middle to work in IBM’s speech recognition group. Until recently he was the chief technology officer of VoiceSignal, a company that does voice-based user interfaces for mobile devices. VoiceSignal has a significant presence in the cellphone industry, with its software running on between 60 and 100 million cellphones. Cohen has just joined SRI International as a senior scientist.

July 27, 2006

Topic: HCI

0 comments

Pointless PKI:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

We’ve had problems in the past with internal compromises, and management has decided that the only way to protect the information is to encrypt it during transmission.

July 27, 2006

Topic: Security

0 comments

News 2.0:
Taking a second look at the news so you don’t have to.

In a past column (September 2005) we reported on the “vigilante” anti-spam tactics adopted by Blue Security. The company’s software, named Blue Frog, fought spammers directly through tracking, warning, and ultimately flooding them with complaint e-mails if they continued spamming e-mail addresses included on the “do not intrude” list. Some saw this as essentially a DoS (denial-of-service) attack and criticized the company for its tactics. But users—more than 500,000 of them—were undeterred: Why not fight fire with fire?

July 27, 2006

0 comments

Able bodies:
Alternative HCI allows us humans to live our lives better.

When I was growing up I used to have these frightening dreams in which I had no use of my arms or legs. I was helpless to do anything for myself and could only watch the world walk by.

July 27, 2006

0 comments

Software Development Amidst the Whiz of Silver Bullets...:
A call to Transylvania may be needed.

Veterans of the software industry will attest to having seen a number of silver bullets come and go in their time. The argentum projectiles of yesteryear, such as OO, high-level software languages, and IDEs, now appear to have been only low-grade alloys compared with the fine silver being discharged today. For example, today’s silver bullets have demonstrated an unparalleled ability to provide implicit value to both text and diagrams, the power to shift the economics of software development, and a capacity to change the focus of long-established engineering disciplines.

June 30, 2006

Topic: Code

1 comment

Calendar:
6-Jun

O’Reilly Where 2.0

June 30, 2006

0 comments

ASPs: The Integration Challenge:
The promise of software as a service is becoming a reality with many ASPs.

Organizations using ASPs and third-party vendors that provide value-added products to ASPs need to integrate with them. ASPs enable this integration by providing Web service-based APIs. There are significant differences between integrating with ASPs over the Internet and integrating with a local application. When integrating with ASPs, users have to consider a number of issues, including latency, unavailability, upgrades, performance, load limiting, and lack of transaction support.
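
As a rough illustration of the defensive posture such integration requires, here is a minimal Python sketch that bounds latency with a timeout and rides out transient unavailability with retries; the endpoint is hypothetical.

    import time
    import urllib.request

    def call_asp_api(url, attempts=3, timeout=5.0):
        """Fetch from a remote Web-service endpoint, retrying transient failures."""
        delay = 1.0
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.read()
            except OSError:  # URLError and socket timeouts both derive from OSError
                if attempt == attempts:
                    raise  # out of retries; let the caller decide what to do
                time.sleep(delay)
                delay *= 2  # back off so a struggling service isn't hammered

    # Hypothetical endpoint; a real ASP publishes its own Web-service API.
    # data = call_asp_api("https://asp.example.com/api/v1/orders")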

June 30, 2006

Topic: Component Technologies

0 comments

Untangling Enterprise Java:
A new breed of framework helps eliminate crosscutting concerns.

Separation of concerns is one of the oldest concepts in computer science. The term was coined by Dijkstra in 1974.1 It is important because it simplifies software, making it easier to develop and maintain. Separation of concerns is commonly achieved by decomposing an application into components. There are, however, crosscutting concerns, which span (or cut across) multiple components. These kinds of concerns cannot be handled by traditional forms of modularization and can make the application more complex and difficult to maintain.
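
Logging is the textbook crosscutting concern. As a minimal sketch (in standard-library Python rather than the Java frameworks the article surveys), a decorator factors the concern out of the components it would otherwise cut across:

    import functools
    import logging

    logging.basicConfig(level=logging.INFO)

    def logged(func):
        """Apply the logging concern without scattering it through components."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            logging.info("entering %s", func.__name__)
            try:
                return func(*args, **kwargs)
            finally:
                logging.info("leaving %s", func.__name__)
        return wrapper

    @logged
    def place_order(item, quantity):  # business logic stays free of logging calls
        return {"item": item, "quantity": quantity}

    place_order("widget", 3)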

June 30, 2006

Topic: Component Technologies

0 comments

The Rise and Fall of CORBA:
There’s a lot we can learn from CORBA’s mistakes.

Depending on exactly when one starts counting, CORBA is about 10-15 years old. During its lifetime, CORBA has moved from being a bleeding-edge technology for early adopters, to being a popular middleware, to being a niche technology that exists in relative obscurity. It is instructive to examine why CORBA—despite once being heralded as the “next-generation technology for e-commerce”—suffered this fate. CORBA’s history is one that the computing industry has seen many times, and it seems likely that current middleware efforts, specifically Web services, will reenact a similar history.

June 30, 2006

Topic: Component Technologies

19 comments

From COM to Common:
Component software’s 10-year journey toward ubiquity

Ten years ago, the term component software meant something relatively specific and concrete. A small number of software component frameworks more or less defined the concept for most people. Today, few terms in the software industry are less precise than component software. There are now many different forms of software componentry for many different purposes. The technologies and methodologies of 10 years ago have evolved in fundamental ways and have been joined by an explosion of new technologies and approaches that have redefined our previously held notions of component software.

June 30, 2006

Topic: Component Technologies

0 comments

A Conversation with Leo Chang of ClickShift:
In the world of component software, simplicity, not standards, might be the key.

To explore this month’s theme of component technologies, we brought together two engineers with lots of experience in the field to discuss some of the current trends and future direction in the world of software components. Queue Editorial board member Terry Coatta is the director of software development at GPS Industries. His expertise is in distributed component systems such as CORBA, EJB, and COM. He joins in the discussion with Leo Chang, the cofounder and CTO of ClickShift, an online campaign optimization and management company.

June 30, 2006

Topic: Component Technologies

0 comments

Logging on with KV:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Dear KV, I’ve been stuck with writing the logging system for a new payment processing system at work. As you might imagine, this requires logging a lot of data because we have to be able to reconcile the data in our logs with our customers and other users, such as credit card companies, at the end of each billing cycle, and we have to be prepared if there is any argument over the bill itself. I’ve been given the job for two reasons: because I’m the newest person in the group and because no one thinks writing yet another logging system is very interesting.
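
Not KV’s answer, but a sketch of one property such a log needs: records a reconciliation job can verify long after they were written. The record layout below is invented for illustration.

    import hashlib
    import json
    import time

    def append_record(log_path, record):
        """Append one payment event as a self-checksummed JSON line."""
        body = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256(body.encode()).hexdigest()
        with open(log_path, "a") as log:
            # The checksum lets reconciliation detect truncated or altered lines.
            log.write(json.dumps({"body": record, "sha256": digest}) + "\n")

    append_record("payments.log",
                  {"ts": time.time(), "account": "12345", "amount_cents": 4200})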

June 30, 2006

Topic: Development

0 comments

What’s on Your Hard Drive?:
Whether you love ’em or hate ’em, dev tools play an integral role in a programmer’s existence.

June 30, 2006

0 comments

Accessorizing:
About five years ago I received a large, heavy-duty stand mixer as a birthday gift.

The stand came with a batter paddle and a dough hook, both of which have been put to good use.

June 30, 2006

0 comments

The Calculus Formally Known as Pi:
The hype over the pi-calculus

Dominic Behan once asked me in a rare sober moment: “What’s the point of knowing something if others don’t know that you know it?” To which I replied with the familiar, “It’s not what you don’t know that matters, it’s what you know that ain’t so.” I was reminded of these dubious epistemological observations while reading Stephen Sparkes’ interview with Steve Ross-Talbot in the March 2006 issue of ACM Queue.

June 30, 2006

Topic: Code

0 comments

Calendar:
6-May

Business Process Management Conference

June 30, 2006

0 comments

The Network’s New Role:
Application-oriented networks can help bridge the gap between enterprises.

Companies have always been challenged with integrating systems across organizational boundaries. With the advent of Internet-native systems, this integration has become essential for modern organizations, but it has also become more and more complex, especially as next-generation business systems depend on agile, flexible, interoperable, reliable, and secure cross-enterprise systems.

June 30, 2006

Topic: Networks

0 comments

Search Considered Integral:
A combination of tagging, categorization, and navigation can help end-users leverage the power of enterprise search.

Most corporations must leverage their data for competitive advantage. The volume of data available to a knowledge worker has grown dramatically over the past few years, and, while a good amount lives in large databases, an important subset exists only as unstructured or semi-structured data. Without the right systems, this leads to a continuously deteriorating signal-to-noise ratio, creating an obstacle for busy users trying to locate information quickly. Three flavors of enterprise search solutions help improve knowledge discovery.
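
As a toy illustration of how tagging can narrow full-text results, here is a Python sketch of an inverted index combined with tag sets; the documents are invented.

    from collections import defaultdict

    index = defaultdict(set)  # term -> document ids
    tags = defaultdict(set)   # tag  -> document ids

    def add_document(doc_id, text, doc_tags=()):
        for term in text.lower().split():
            index[term].add(doc_id)
        for tag in doc_tags:
            tags[tag].add(doc_id)

    def search(term, tag=None):
        """Full-text lookup, optionally narrowed by a tag (category)."""
        hits = index.get(term.lower(), set())
        return hits & tags.get(tag, set()) if tag else hits

    add_document(1, "Quarterly revenue forecast", doc_tags={"finance"})
    add_document(2, "Forecast of hiring needs", doc_tags={"hr"})
    print(search("forecast", tag="finance"))  # {1}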

June 30, 2006

Topic: Search Engines

0 comments

AI Gets a Brain:
New technology allows software to tap real human intelligence.

In the 50 years since John McCarthy coined the term artificial intelligence, much progress has been made toward identifying, understanding, and automating many classes of symbolic and computational problems that were once the exclusive domain of human intelligence. Much work remains in the field because humans still significantly outperform the most powerful computers at completing such simple tasks as identifying objects in photographs - something children can do even before they learn to speak.

June 30, 2006

Topic: AI

0 comments

A Conversation with Werner Vogels:
Learning from the Amazon technology platform

Many think of Amazon as "that hugely successful online bookstore." You would expect Amazon CTO Werner Vogels to embrace this distinction, but in fact it causes him some concern. "I think it’s important to realize that first and foremost Amazon is a technology company," says Vogels. And he’s right. Over the past years, Vogels has helped Amazon grow from an online retailer (albeit one of the largest, with more than 55 million active customer accounts) into a platform on which more than 1 million active retail partners worldwide do business.

June 30, 2006

Topic: Web Services

5 comments

Phishing for Solutions:
Re: phishing, doesn’t the URL already give away enough information?

Dear KV, I noticed you covered cross-site scripting a few issues back, and I’m wondering if you have any advice on another Web problem, phishing. I work at a large financial institution and every time we roll out a new service, the security team comes down on us because either the login page looks different or they claim that it’s easy to phish information from our users using one of our forms. It’s not like we want our users to be phished, but I don’t think it’s a technical problem. Our users are just stupid and give away their information to anyone who seems willing to put up a reasonable fake of one of our pages.

June 30, 2006

Topic: Web Security

0 comments

What’s on Your Hard Drive?:
Do you ever feel as though the software you are using is glaringly inadequate?

Do you sometimes find yourself screaming incoherently at your monitor?

June 30, 2006

0 comments

The Domino Effect:
Sometimes, adding one component can lead to a total network redesign.

Sometimes, the undertaking of a task leads to a result far different from the expected. Any homeowner can attest to that. Case in point: About four years ago, after living in my cozy little cape for about two years, I decided to replace the toilet in the upstairs bathroom because it had not been installed properly, which made me nervous each time it was used. How it survived 20 years I will never know, but I was adamant that it needed to be replaced.

June 30, 2006

0 comments

Evolution or Revolution?:
Where is the High in High Tech?

We work in an industry that prides itself on "changing the world," one that chants a constant mantra of innovation and where new products could aptly be described as "this year’s breakthrough of the century." While there are some genuine revolutions in the technology industry, including cellphones, GPS (global positioning system), quantum computing, encryption, and global access to content, the vast majority of new product introductions are evolutionary, not revolutionary. Real technical breakthroughs are few and far between. Most new products are just a recycling of an earlier idea.

May 2, 2006

Topic: Development

0 comments

Java in a Teacup:
Programming Bluetooth-enabled devices using J2ME

Few technology sectors evolve as fast as the wireless industry. As the market and devices mature, the need (and potential) for mobile applications grows. More and more mobile devices are delivered with the Java platform installed, enabling a large base of Java programmers to try their hand at embedded programming. Unfortunately, not all Java mobile devices are created equal, presenting many challenges to the new J2ME (Java 2 Platform, Micro Edition) programmer. Using a sample game application, this article illustrates some of the challenges associated with J2ME and Bluetooth programming.

May 2, 2006

Topic: Mobile Computing

0 comments

TiVo-lution:
The challenges of delivering a reliable, easy-to-use DVR service to the masses

One of the greatest challenges of designing a computer system is in making sure the system itself is "invisible" to the user. The system should simply be a conduit to the desired result. There are many examples of such purpose-built systems, ranging from modern automobiles to mobile phones.

May 2, 2006

Topic: Purpose-built Systems

0 comments

The (not so) Hidden Computer:
The growing complexity of purpose-built systems is making it difficult to conceal the computers within.

Ubiquitous computing may not have arrived yet, but ubiquitous computers certainly have. The sustained improvements wrought by the fulfillment of Moore’s law have led to the use of microprocessors in a vast array of consumer products. A typical car contains 50 to 100 processors. Your microwave has one or maybe more. They’re in your TV, your phone, your refrigerator, your kids’ toys, and in some cases, your toothbrush.

May 2, 2006

Topic: Purpose-built Systems

0 comments

A Conversation with Chuck McManis:
Developing systems with a purpose: do one thing, and do it well.

When thinking about purpose-built systems, it’s easy to focus on the high-visibility consumer products: the iPods, the TiVos. Lying in the shadows of the corporate data center, however, are a number of less-glamorous devices built primarily to do one specific thing, and to do it well and reliably.

May 2, 2006

Topic: Purpose-built Systems

0 comments

Kode Vicious Bugs Out:
What do you do when tools fail?

This month Kode Vicious serves up a mixed bag, including tackling the uncertainties of heisenbugs -- a nasty type of bug that’s been known to drive coders certifiably insane. He also gives us his list of must-reads. Are any of your favorites on the list? Read on to find out!

May 2, 2006

Topic: Tools

1 comment

What’s on Your Hard Drive?:
Ahhhh, April. A time to break free from winter’s grasp and begin anew.

So along with those parkas and snow blowers, why not pack away some of your less-than-satisfactory dev tools and take a look at this month’s WOYHD.

May 2, 2006

0 comments

The Private Universe:
Reliability is a big virtue in the brave new world of small devices.

As I type this, the debate rages on whether the BlackBerry service will cease to exist as we know it. As of press time, a judge had refused to issue an order of injunction against BlackBerry maker RIM in its ongoing patent infringement battle with NTP Inc., giving users at least another 30 days to feed their addiction.

May 2, 2006

0 comments

But, Having Said That, ...:
80 percent of the useful work is performed by 20 percent of the code.

A persistent rule of thumb in the programming trade is the 80/20 rule: "80 percent of the useful work is performed by 20 percent of the code." As with gas mileage, your performance statistics may vary, and given the mensurational vagaries of body parts such as thumbs, you may prefer a 90/10 partition of labor. With some of the bloated code-generating meta-frameworks floating around, cynics have suggested a 99/1 rule - if you can locate that frantic 1 percent. Whatever the ratio, the concept has proved useful in performance tuning.
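
The practical corollary is to measure before tuning. A minimal Python sketch with the standard profiler, run on a deliberately lopsided workload, shows the hot fraction floating to the top:

    import cProfile
    import pstats

    def hot():  # the "20 percent" that does most of the work
        return sum(i * i for i in range(200_000))

    def workload():
        return [hot() for _ in range(10)]

    profiler = cProfile.Profile()
    profiler.runcall(workload)
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)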

March 29, 2006

Topic: Code

0 comments

Calendar:
6-Mar

O’Reilly Emerging Technology

March 29, 2006

0 comments

Under New Management:
Autonomic computing is revolutionizing the way we manage complex systems.

In an increasingly competitive global environment, enterprises are under extreme pressure to reduce operating costs. At the same time they must have the agility to respond to business opportunities offered by volatile markets.

March 29, 2006

Topic: Workflow Systems

0 comments

Best Practice (BPM):
In business process management, finding the right tool suite is just the beginning.

Just as BPM (business process management) technology is markedly different from conventional approaches to application support, the methodology of BPM development is markedly different from traditional software implementation techniques. With CPI (continuous process improvement) as the core discipline of BPM, the models that drive work through the company evolve constantly. Indeed, recent studies suggest that companies fine-tune their BPM-based applications at least once a quarter (and sometimes as often as eight times per year). The point is that there is no such thing as a "finished" process; it takes multiple iterations to produce highly effective solutions. Every working BPM-based process is just a starting point for the future.

March 29, 2006

Topic: Workflow Systems

1 comment

People and Process:
Minimizing the pain of business process change

When Mike Hammer and I published Reengineering the Corporation in 1992, we understood the impact that real business process change would have on people. I say "real" process change, because managers have used the term reengineering to describe any and all corporate change programs. One misguided executive told me that his company did not know how to do real reengineering; so it just downsized large departments and business units, and expected that the people who were left would figure out how to get their work done. Sadly, this is how some companies still practice process redesign: leaving people overworked and demoralized, while customers experience bad service and poor quality.

March 29, 2006

Topic: Workflow Systems

0 comments

Going with the Flow:
Workflow systems can provide value beyond automating business processes.

An organization consists of two worlds. The real world contains the organization’s structure, physical goods, employees, and other organizations. The virtual world contains the organization’s computerized infrastructure, including its applications and databases. Workflow systems bridge the gap between these two worlds. They provide both a model of the organization’s design and a runtime to execute the model.
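
In miniature, that bridge is a model plus a runtime that executes it. A Python sketch, with an invented order process:

    # Workflow model: each state maps allowed actions to the next state.
    MODEL = {
        "received": {"approve": "approved", "reject": "rejected"},
        "approved": {"ship": "shipped"},
    }

    class WorkItem:
        def __init__(self):
            self.state = "received"

        def fire(self, action):
            """The runtime: advance the work item along the model's transitions."""
            try:
                self.state = MODEL[self.state][action]
            except KeyError:
                raise ValueError(f"{action!r} not allowed in state {self.state!r}")

    order = WorkItem()
    order.fire("approve")
    order.fire("ship")
    print(order.state)  # shipped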

March 29, 2006

Topic: Workflow Systems

0 comments

A Conversation with Steve Ross-Talbot:
The IT world has long been plagued by a disconnect between theory and practice.

Academics theorizing in their ivory towers; programmers at "Initech" toiling away in their corporate cubicles. While this might be a somewhat naïve characterization, the fact remains that both academics and practitioners could do a better job of sharing their ideas and innovations with each other. As a result, cutting-edge research often fails to find practical application in the marketplace.

March 29, 2006

Topic: Business/Management

0 comments

Human-KV Interaction:
We can’t guarantee you’ll agree with his advice, but it’ll probably be more effective than anything you’ve tried thus far.

Welcome to another installment of Kode Vicious, the monthly forum for Queue’s resident kode maven and occasional rabble-rouser. KV likes to hear your tales from the coding trenches, preferably those ending with a focused question about programming. KV also has a tried-and-true arsenal of techniques for dealing with interpersonal relationships so feel free to enlist his help with those characters as well.

March 29, 2006

Topic: Software Design

0 comments

What’s on Your Hard Drive?:
Welcome once again to WOYHD, Queue’s monthly forum dedicated to developer tools.

Pay attention as well to the tools people love. There you’ll often find people raving about some of the smaller or lesser-known developer tools—tools that could be just the thing to make your latest project easier.

March 29, 2006

0 comments

Letters:
Questioning MDD

March 29, 2006

0 comments

It Isn’t Your Father’s Realtime Anymore:
The misuse and abuse of a noble term

Isn’t it a shame the way the term realtime has become so misused? I’ve noticed a slow devolution since 1982, when realtime systems became the main focus of my research, teaching, and consulting. Over these past 20-plus years, I have watched my beloved realtime become one of the most overloaded, overused, and overrated terms in the lexicon of computing. Worse, it has been purloined by users outside of the computing community and has been shamelessly exploited by marketing opportunists.

February 23, 2006

Topic: Data

3 comments

Modern Performance Monitoring:
Today’s diverse and decentralized computer world demands new thinking about performance monitoring and analysis.

The modern Unix server floor can be a diverse universe of hardware from several vendors and software from several sources. Often, the personnel needed to resolve server floor performance issues are not available or, for security reasons, not allowed to be present at the very moment of occurrence. Even when, as luck might have it, the right personnel are actually present to witness a performance "event," the tools to measure and analyze the performance of the hardware and software have traditionally been sparse and vendor-specific.

February 23, 2006

Topic: Performance

0 comments

Performance Anti-Patterns:
Want your apps to run faster? Here’s what not to do.

Performance pathologies can be found in almost any software, from user applications to system libraries, drivers, and the kernel. At Sun we’ve spent the last several years applying state-of-the-art tools to a Unix kernel, system libraries, and user applications, and have found that many apparently disparate performance problems in fact have the same underlying causes. Since software patterns are considered abstractions of positive experience, we can talk about the various approaches that led to these performance problems as anti-patterns: something to be avoided rather than emulated.
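
One of the simplest members of the family, sketched in Python rather than the Sun systems the authors studied, is quadratic string building in a loop; the fix is a single linear join:

    import timeit

    def build_slow(parts):
        out = ""
        for p in parts:    # anti-pattern: each += may copy everything built so far
            out += p
        return out

    def build_fast(parts):
        return "".join(parts)  # one pass, linear in the total length

    parts = ["x"] * 100_000
    print(timeit.timeit(lambda: build_slow(parts), number=10))
    print(timeit.timeit(lambda: build_fast(parts), number=10))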

February 23, 2006

Topic: Performance

0 comments

A High-Performance Team:
From design to production, performance should be part of the process.

You work in the product development group of a software company, where the product is often compared with the competition on performance grounds. Performance is an important part of your business; but so is adding new functionality, fixing bugs, and working on new projects. So how do you lead your team to develop high-performance software, as well as doing everything else? And how do you keep that performance high throughout cycles of maintenance and enhancement?

February 23, 2006

Topic: Performance

0 comments

Hidden in Plain Sight:
Improvements in the observability of software can help you diagnose your most crippling performance problems.

In December 1997, Sun Microsystems had just announced its new flagship machine: a 64-processor symmetric multiprocessor supporting up to 64 gigabytes of memory and thousands of I/O devices. As with any new machine launch, Sun was working feverishly on benchmarks to prove the machine’s performance. While the benchmarks were generally impressive, there was one in particular that was exhibiting unexpectedly low performance. The benchmark machine would occasionally become mysteriously distracted: Benchmark activity would practically cease, but the operating system kernel remained furiously busy. After some number of minutes spent on unknown work, the operating system would suddenly right itself: Benchmark activity would resume at full throttle and run to completion.

February 23, 2006

Topic: Performance

7 comments

A Conversation with Jarod Jenson:
Pinpointing performance problems

One of the industry’s go-to guys in performance improvement for business systems is Jarod Jenson, the chief systems architect for a consulting company he founded called Aeysis. He received a B.S. degree in computer science from Texas A&M University in 1995, then went to work for Baylor College of Medicine as a system administrator. From there he moved to Enron, where he played a major role in developing EnronOnline. After the collapse of Enron, Jenson worked briefly for UBS Warburg Energy before setting up his own consulting company. His focus since then has been on performance and scalability with applications at numerous companies where he has earned a reputation for quickly delivering substantial performance gains.

February 23, 2006

Topic: Performance

0 comments

Gettin’ Your Kode On:
Dear KV, Simple question: When is the right time to call the c_str() method on a string to get the actual pointer?

Another year is upon us and we are happy to have Kode Vicious still ranting against the ills of insecure programming, insufficient commenting, and numerous other forms of koding malpractice. Yet despite his best efforts, the bittersweet truth is that these problems are not going away anytime soon, and therefore should continue to provide ample fodder for future KV columns. Oh, to live in a world that doesn’t need KV’s advice or doctors, for that matter.

February 23, 2006

Topic: Development

0 comments

What’s on Your Hard Drive?:
Sometimes having the right tools can make the difference between a project’s success and failure.

Therefore, it’s no surprise that many of you “roll your own” when what’s available just isn’t cutting it.

February 23, 2006

0 comments

News 2.0:
Taking a second look at the news so you don’t have to.

IPv6 Gains Government Traction

February 23, 2006

0 comments

Quality Really is Job #1:
I hate my car. I really do.

I hate my car. I really do. Not because it’s a minivan—that was a lapse in judgment on my part that I can (and do) smack myself for, but that’s not the car’s fault. I hate my minivan because it is poorly made.

February 23, 2006

0 comments

Anything Su Doku, I Can Do Better:
The new puzzle craze from Japan is sweeping the world and testing our Boolean logic.

I dedicate this essay in memoriam to Jef Raskin. Many more authoritative tributes than I can muster continue to pour in, and no doubt a glorious Festschrift will be forthcoming from those who admired this remarkable polymath. "Le don de vivre a passé dans les fleurs" (the gift of living has passed into the flowers).

January 31, 2006

Topic: Code

0 comments

Coding for the Code:
Can models provide the DNA for software development?

Despite the considerable effort invested by industry and academia in modeling standards such as UML (Unified Modeling Language), software modeling has long played a subordinate role in commercial software development. Although modeling is generally perceived as state of the art and thus as something that ought to be done, its appreciation seems to pale along with the progression from the early, more conceptual phases of a software project to those where the actual handcrafting is done.

January 31, 2006

Topic: Code

2 comments

Monitoring, at Your Service:
Automated monitoring can increase the reliability and scalability of today’s online software services.

Internet services are becoming more and more a part of our daily lives. We derive value from them, depend on them, and are now beginning to assume their ubiquity as we do the phone system and electricity grid. The implementation of Internet services, though, is an unsolved problem, and Internet services remain far from fulfilling their potential in our world.

January 31, 2006

Topic: Web Services

0 comments

Lessons from the Floor:
The manufacturing industry can teach us a lot about measuring performance in large-scale Internet services.

The January monthly service quality meeting started normally. Around the table were representatives from development, operations, marketing, and product management, and the agenda focused on the prior month’s performance. As usual, customer-impacting incidents and quality of service were key topics, and I was armed with the numbers showing the average uptime for the part of the service that I represent: MSN, the Microsoft family of services that includes e-mail, Instant Messenger, news, weather and sports, etc.

January 31, 2006

Topic: Performance

0 comments

A Conversation with Phil Smoot:
The challenges of managing a megaservice

In the landscape of today’s megaservices, Hotmail just might be Mount Everest. One of the oldest free Web e-mail services, Hotmail relies on more than 10,000 servers spread around the globe to process billions of e-mail transactions per day. What’s interesting is that despite this enormous amount of traffic, Hotmail relies on less than 100 system administrators to manage it all.

January 31, 2006

Topic: Web Services

1 comment

Vicious XSS:
For readers who doubt the relevance of KV’s advice, witness the XSS attack that befell MySpace in October.

This month Kode Vicious addresses just this sort of XSS attack. It’s a good thing cross-site scripting is not abbreviated CSS, as the MySpace hacker used CSS to perpetrate his XSS attack. That would have made for one confusing story, eh?

January 31, 2006

Topic: Web Security

0 comments

What’s on Your Hard Drive?:
As 2005 comes to a close, we’d like to thank all the readers who’ve submitted to WOYHD.

We’ve seen a lot of tools mentioned over the past year—some popular, and some we’d never heard of. With 2006 on the horizon, we’d like a lot more of the same.

January 31, 2006

0 comments

News 2.0:
Taking a second look at the news so you don’t have to.

Web 2.0—Looking toward the Future or Reviving the Past?

January 31, 2006

0 comments

In with the New:
What’s ahead for Queue in 2006?

Ah, December. That time of year when endings and beginnings are as prevalent as Salvation Army Santas. Amid the holiday parties, shopping for presents, and drinking and eating too much of the stuff that’s really bad for us, most of us take some time to reflect on the year that is now behind us and contemplate ways to make the new year better than the last.

January 31, 2006

0 comments

Stop Whining about Outsourcing!:
I’m sick of hearing all the whining about how outsourcing is going to migrate all IT jobs to the country with the lowest wages.

The paranoia inspired by this domino theory of job migration causes American and West European programmers to worry about India, Indian programmers to worry about China, Chinese programmers to worry about the Czech Republic, and so on. Domino theorists must think all IT jobs will go to the Republic of Elbonia, the extremely poor, fourth-world, Eastern European country featured in the Dilbert comic strip.

December 16, 2005

Topic: Development

0 comments

Calendar:
5-Nov

Software Test and Performance Conference

December 16, 2005

0 comments

Information Extraction:
Distilling structured data from unstructured text

In 2001 the U.S. Department of Labor was tasked with building a Web site that would help people find continuing education opportunities at community colleges, universities, and organizations across the country. The department wanted its Web site to support fielded Boolean searches over locations, dates, times, prerequisites, instructors, topic areas, and course descriptions. Ultimately it was also interested in mining its new database for patterns and educational trends. This was a major data-integration project, aiming to automatically gather detailed, structured information from tens of thousands of individual institutions every three months.
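
A minimal sketch of the pattern-based field extraction such a project depends on, with an invented course listing and hand-written patterns (production systems typically learn these):

    import re

    listing = "Intro to Databases. Instructor: Anna Codd. Tuesdays 6:00pm-9:00pm."

    FIELDS = {
        "instructor": re.compile(r"Instructor:\s*([^.]+)\."),
        "day": re.compile(r"\b(Mondays|Tuesdays|Wednesdays|Thursdays|Fridays)\b"),
        "time": re.compile(r"(\d{1,2}:\d{2}[ap]m)-\d{1,2}:\d{2}[ap]m"),
    }

    record = {}  # distill the unstructured text into fielded data
    for field, pattern in FIELDS.items():
        match = pattern.search(listing)
        if match:
            record[field] = match.group(1)

    print(record)  # {'instructor': 'Anna Codd', 'day': 'Tuesdays', 'time': '6:00pm'}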

December 16, 2005

Topic: Semi-structured Data

0 comments

Threads without the Pain:
Multithreaded programming need not be so angst-ridden.

Much of today’s software deals with multiple concurrent tasks. Web browsers support multiple concurrent HTTP connections, graphical user interfaces deal with multiple windows and input devices, and Web and DNS servers handle concurrent connections or transactions from large numbers of clients. The number of concurrent tasks that need to be handled increases as software grows more complex. Structuring concurrent software in a way that meets the increasing scalability requirements while remaining simple, structured, and safe enough to allow mortal programmers to construct ever-more complex systems is a major engineering challenge.
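
One structured compromise, sketched minimally in Python: a bounded thread pool keeps the control flow readable while still overlapping I/O. The URLs are placeholders.

    from concurrent.futures import ThreadPoolExecutor, as_completed
    import urllib.request

    URLS = ["https://example.com/a", "https://example.com/b"]  # placeholders

    def fetch(url):
        with urllib.request.urlopen(url, timeout=5) as resp:
            return url, len(resp.read())

    # The pool bounds concurrency; tasks are plain functions, not hand-rolled threads.
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(fetch, url) for url in URLS]
        for future in as_completed(futures):
            try:
                url, size = future.result()
                print(url, size)
            except OSError as exc:  # each task's network errors surface here
                print("fetch failed:", exc)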

December 16, 2005

Topic: Concurrency

0 comments

Fighting Spam with Reputation Systems:
User-submitted spam fingerprints

Spam is everywhere, clogging the inboxes of e-mail users worldwide. Not only is it an annoyance, it erodes the productivity gains afforded by the advent of information technology. Workers plowing through hours of legitimate e-mail every day also must contend with removing a significant amount of illegitimate e-mail. Automated spam filters have dramatically reduced the amount of spam seen by the end users who employ them, but the amount of training required rivals the amount of time needed simply to delete the spam without the assistance of a filter.
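
The core mechanism is compact enough to sketch: hash each (lightly normalized) message body to a fingerprint and count user reports against it. The threshold below is invented.

    import hashlib
    from collections import Counter

    reports = Counter()   # fingerprint -> number of user reports
    SPAM_THRESHOLD = 10   # invented cutoff for illustration

    def fingerprint(body):
        # Collapse whitespace so trivial reformatting doesn't evade the hash.
        return hashlib.sha256(" ".join(body.split()).encode()).hexdigest()

    def report_spam(body):
        reports[fingerprint(body)] += 1

    def is_spam(body):
        return reports[fingerprint(body)] >= SPAM_THRESHOLD

    for _ in range(12):
        report_spam("BUY   cheap widgets NOW")
    print(is_spam("BUY cheap widgets NOW"))  # True: normalized bodies match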

December 16, 2005

Topic: Email and IM

0 comments

Social Bookmarking in the Enterprise:
Can your organization benefit from social bookmarking tools?

One of the greatest challenges facing people who use large information spaces is to remember and retrieve items that they have previously found and thought to be interesting. One approach to this problem is to allow individuals to save particular search strings to re-create the search in the future. Another approach has been to allow people to create personal collections of material. Collections of citations can be created manually by readers or through execution of (and alerting to) a saved search.

December 16, 2005

Topic: Social Computing

3 comments

A Conversation with Ray Ozzie:
Cooperate, Communicate, Collaborate

There are not many names bigger than Ray Ozzie’s in computer programming. An industry visionary and pioneer in computer-supported cooperative work, he began his career as an electrical engineer but fairly quickly got into computer science and programming. He is the creator of IBM’s Lotus Notes and is now chief technical officer of Microsoft, reporting to chief software architect Bill Gates. Recently, Ozzie’s role as chief technical officer expanded as he assumed responsibility for the company’s software-based services strategy across its three major divisions.

December 16, 2005

Topic: Development

0 comments

Kode Vicious:
The Doctor is In

KV is back on duty and ready to treat another koding illness: bad APIs. This is one of the most widespread pathologies affecting, and sometimes infecting, us all. But whether we write APIs or simply use APIs (or both), we would all do well to read on and heed the vicious one’s advice.

December 16, 2005

Topic: Development

0 comments

What’s on Your Hard Drive?:
The tools on our hard drives vary widely in how directly they allow us to access the underlying hardware.

Accordingly, many of our readers are most comfortable working as close to the machine as possible. Sometimes called “bare metal” programmers, they prefer the fewest possible layers of abstraction separating their code from the hardware.

December 16, 2005

0 comments

News 2.0:
Taking a second look at the news so you don’t have to.

Acoustical Spying Techniques Cooked up in Lab

December 16, 2005

0 comments

Socially Acceptable Behavior:
Social bookmarking is a technology whose time has come.

While perusing the “popular” section of the social bookmarking site del.icio.us recently, I was struck by the myriad off-the-wall sites that users are bookmarking. Indeed, one of the most popular sites (311 bookmarks!) was titled, “Vanishing Point: How to disappear in America without a trace.”

December 16, 2005

0 comments

The Cost of Data:
Semi-structured data is the result of economics.

In the past few years people have convinced themselves that they have discovered an overlooked form of data. This new form of data is semi-structured. Bosh! There is no new form of data. What folks have discovered is really the effect of economics on data typing—but if you characterize the problem as one of economics, it isn’t nearly as exciting. It is, however, much more accurate and valuable. Seeing the reality of semi-structured data clearly can actually lead to improving data processing. As long as we look at this through the fogged vision of a “new type of data,” however, we will continue to misunderstand the problem and develop misguided solutions to address it.

December 8, 2005

Topic: Data

0 comments

Calendar:
5-Oct

Web 2.0

December 8, 2005

0 comments

Why Your Data Won’t Mix:
New tools and techniques can help ease the pain of reconciling schemas.

When independent parties develop database schemas for the same domain, they will almost always be quite different from each other. These differences are referred to as semantic heterogeneity, which also appears in the presence of multiple XML documents, Web services, and ontologies—or more broadly, whenever there is more than one way to structure a body of data. The presence of semi-structured data exacerbates semantic heterogeneity, because semi-structured schemas are much more flexible to start with. For multiple data systems to cooperate with each other, they must understand each other’s schemas.
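
A toy Python illustration of the problem and the shape of a fix: two invented schemas for the same domain, reconciled through a hand-built synonym table of the kind real matching tools try to infer.

    # A hand-built field mapping from schema A's names to schema B's;
    # schema-matching tools attempt to learn tables like this.
    SYNONYMS = {
        "cust_name": "customer",
        "zip": "postal_code",
    }

    def translate(record_a):
        """Rewrite a record from schema A into schema B's field names."""
        return {SYNONYMS.get(field, field): value
                for field, value in record_a.items()}

    print(translate({"cust_name": "Ada", "zip": "02139"}))
    # {'customer': 'Ada', 'postal_code': '02139'}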

December 8, 2005

Topic: Semi-structured Data

0 comments

Order from Chaos:
Will ontologies help you structure your semi-structured data?

There is probably little argument that the past decade has brought the “big bang” in the amount of online information available for processing by humans and machines. Two of the trends that it spurred (among many others) are: first, there has been a move to more flexible and fluid (semi-structured) models than the traditional centralized relational databases that stored most of the electronic data before; second, today there is simply too much information available to be processed by humans, and we really need help from machines.

December 8, 2005

Topic: Semi-structured Data

0 comments

XML <and Semi-Structured Data>:
XML provides a natural representation for hierarchical structures and repeating fields or structures.

Vocabulary designers can require XML data to be perfectly regular, or they can allow a little variation, or a lot. In the extreme case, an XML vocabulary can effectively say that there are no rules at all beyond those required of all well-formed XML. Because XML syntax records only what is present, not everything that might be present, sparse data does not make the XML representation awkward; XML storage systems are typically built to handle sparse data gracefully.
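
A small example of that flexibility, parsed with Python’s standard library: two records in one vocabulary, with a repeating field in the first and the second left sparse at no cost.

    import xml.etree.ElementTree as ET

    doc = ET.fromstring("""
    <contacts>
      <contact><name>Ada</name><phone>555-0100</phone><phone>555-0101</phone></contact>
      <contact><name>Grace</name></contact>
    </contacts>
    """)

    for contact in doc.findall("contact"):
        name = contact.findtext("name")
        phones = [p.text for p in contact.findall("phone")]  # zero or more
        print(name, phones or "no phone recorded")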

December 8, 2005

Topic: Semi-structured Data

3 comments

Learning from the Web:
The Web has taught us many lessons about distributed computing, but some of the most important ones have yet to fully take hold.

In the past decade we have seen a revolution in computing that transcends anything seen to date in terms of scope and reach, but also in terms of how we think about what makes up “good” and “bad” computing.

December 8, 2005

Topic: Semi-structured Data

0 comments

Managing Semi-Structured Data:
I vividly remember during my first college class my fascination with the relational database.

In that class I learned how to build a schema for my information, and I learned that to obtain an accurate schema there must be a priori knowledge of the structure and properties of the information to be modeled. I also learned the ER (entity-relationship) model as a basic tool for all further data modeling, as well as the need for an a priori agreement on both the general structure of the information and the vocabularies used by all communities producing, processing, or consuming this information.

December 8, 2005

Topic: Semi-structured Data

1 comment

Kode Vicious Unscripted:
The problem? Computers make it too easy to copy data.

Some months, when he’s feeling ambitious, Kode Vicious reads through all of your letters carefully, agonizing for days over which to respond to. Most of the time, though, he takes a less measured approach. This usually involves printing the letters out, throwing them up in the air, and seeing which land face up, repeating the process until only two remain. And occasionally, KV dispenses with reader feedback altogether, as is the case this month.

December 8, 2005

Topic: Development

0 comments

What’s on Your Hard Drive?:
Which tools do readers love most, and which do they loathe?

One of the most interesting things about WOYHD is seeing how often certain tools appear under either the “love” or “hate” categories. In fact, over the past year we’ve developed a keen sense of which tools our readers love most and which they loathe most. It’s not that our data says anything about the overall popularity of any one product. A product that receives a large number of “hates” actually could be the most popular tool on the market.

December 8, 2005

0 comments

News 2.0:
Taking a second look at the news so you don’t have to.

Apple’s announcement in June that it would begin using Intel chips took the computing world by surprise. The design-savvy company opted to begin using Intel chips in 2006, mostly because of Intel’s expertise in manufacturing low-power chips. Participants in the Apple Developer Connection recently received a developer’s kit that allows them to test their OS X code on Macs fitted with Intel chips, now increasingly known as “MacIntels.” The kit was enabled with security features to prevent others from installing OS X on non-Macintosh x86 boxes.

December 8, 2005

0 comments

Unstructured, but Not Really:
Data that doesn’t fit the mold

Mention the term semi-structured data and chances are you’ll be met with strong opinions from one of two camps: those who believe semi-structured data is nothing more than a fancy term for a data structure left unfinished, and others who firmly believe semi-structured data is the best way to describe data that doesn’t easily fit into the traditional database structure.

December 8, 2005

0 comments

Multicore CPUs for the Masses:
Will increased CPU bandwidth translate into usable desktop performance?

Multicore is the new hot topic in the latest round of CPUs from Intel, AMD, Sun, etc. With clock speed increases becoming more and more difficult to achieve, vendors have turned to multicore CPUs as the best way to gain additional performance. Customers are excited about the promise of more performance through parallel processors for the same real estate investment.

October 18, 2005

Topic: Performance

0 comments

Software and the Concurrency Revolution:
Leveraging the full power of multicore processors demands new tools and new thinking from the software industry.

Concurrency has long been touted as the "next big thing" and "the way of the future," but for the past 30 years, mainstream software development has been able to ignore it. Our parallel future has finally arrived: new machines will be parallel machines, and this will require major changes in the way we develop software. The introductory article in this issue describes the hardware imperatives behind this shift in computer architecture from uniprocessors to multicore processors, also known as CMPs.
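
In the smallest possible terms, the shift looks like this in Python: a CPU-bound loop handed to a process pool, one worker per core (processes rather than threads, to sidestep CPython’s global interpreter lock).

    from multiprocessing import Pool

    def count_primes(bound):  # a deliberately CPU-bound task
        def is_prime(n):
            return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
        return sum(1 for n in range(bound) if is_prime(n))

    if __name__ == "__main__":
        chunks = [50_000] * 8
        with Pool() as pool:  # defaults to one worker per core
            results = pool.map(count_primes, chunks)  # chunks run in parallel
        print(sum(results))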

October 18, 2005

Topic: Concurrency

0 comments

The Price of Performance:
An Economic Case for Chip Multiprocessing

In the late 1990s, our research group at DEC was one of a growing number of teams advocating the CMP (chip multiprocessor) as an alternative to highly complex single-threaded CPUs. We were designing the Piranha system,1 which was a radical point in the CMP design space in that we used very simple cores (similar to the early RISC designs of the late ’80s) to provide a higher level of thread-level parallelism. Our main goal was to achieve the best commercial workload performance for a given silicon budget. Today, in developing Google’s computing infrastructure, our focus is broader than performance alone. The merits of a particular architecture are measured by answering the following question: Are you able to afford the computational capacity you need?

October 18, 2005

Topic: Processors

0 comments

Extreme Software Scaling:
Chip multiprocessors have introduced a new dimension in scaling for application developers, operating system designers, and deployment specialists.

The advent of SMP (symmetric multiprocessing) added a new degree of scalability to computer systems. Rather than deriving additional performance from an incrementally faster microprocessor, an SMP system leverages multiple processors to obtain large gains in total system performance. Parallelism in software allows multiple jobs to execute concurrently on the system, increasing system throughput accordingly. Given sufficient software parallelism, these systems have proved to scale to several hundred processors.

October 18, 2005

Topic: Processors

0 comments

The Future of Microprocessors:
Chip multiprocessors’ promise of huge performance gains is now a reality.

The performance of microprocessors that power modern computers has continued to increase exponentially over the years for two main reasons. First, the transistors that are the heart of the circuits in all processors and memory chips have simply become faster over time on a course described by Moore’s law, and this directly affects the performance of processors built with those transistors. Moreover, actual processor performance has increased faster than Moore’s law would predict, because processor designers have been able to harness the increasing numbers of transistors available on modern chips to extract more parallelism from software.

October 18, 2005

Topic: Processors

0 comments

A Conversation with Roger Sessions and Terry Coatta:
The difference between objects and components? That’s debatable.

In the December/January 2004-2005 issue of Queue, Roger Sessions set off some fireworks with his article about objects, components, and Web services and which should be used when (“Fuzzy Boundaries,” 40-47). Sessions is on the board of directors of the International Association of Software Architects, the author of six books, writes the Architect Technology Advisory, and is CEO of ObjectWatch. He has a very object-oriented viewpoint, not necessarily shared by Queue editorial board member Terry Coatta, who disagreed with much of what Sessions had to say in his article. Coatta is an active developer who has worked extensively with component frameworks.

October 18, 2005

Topic: Development

0 comments

KV the Konqueror:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Suppose I’m a customer of Sincere-and-Authentic’s (“Kode Vicious Battles On,” April 2005:15-17), and suppose the sysadmin at my ISP is an unscrupulous, albeit music-loving, geek. He figured out that I have an account with Sincere-and-Authentic. He put a filter in the access router to log all packets belonging to a session between me and S&A. He would later mine the logs and retrieve the music—without paying for it.

October 18, 2005

Topic: Programming Languages

0 comments

What’s on Your Hard Drive?:
Tools we love: one reader makes the case for Python.

Tool I love! Python. I’m still a newbie to Python but I’m quite impressed with it thus far. As a scripting language, it can quickly test an idea or an algorithm, even if the project I’m working on doesn’t use Python. Also, with free tools such as wxPython and py2exe, a Python script can easily become a full-blown distributable application with a robust UI.

October 18, 2005

Topic: Development

0 comments

News 2.0:
Taking a second look at the news so you don’t have to.

IBM recently announced that it would discontinue support for its once-flagship operating system, OS/2, beginning in late 2006. Developed in the 1980s during an early alliance with Microsoft, OS/2 eventually became OS/2 Warp and had some success during the ’90s, particularly in the server market. But its desktop counterpart failed to take off, and IBM eventually ceded victory to Microsoft. IBM is now urging OS/2 users to switch to Linux, which it supports. Switch to Linux? If only it were that easy. Though gone from the spotlight, OS/2 continues to run on servers around the globe, especially on those linked to ATMs.

October 18, 2005

0 comments

Call That Gibberish?:
Detecting the real from the fake is getting harder.

The Ninth World Multiconference SCI (Systematics, Cybernetics, and Informatics) 2005 has attracted more attention than its vaporific title usually merits by accepting a spoof paper from three MIT graduate students. The Times (of London, by default, of course) ran the eye-catching headline, “How gibberish put scientists to shame” (April 6, 2005). One of the students, Jeremy Stribling, explains how they had developed a computer program to generate random sequences of technobabble in order to confirm their suspicions that papers of dubious academicity were bypassing serious, or indeed, any scrutiny. In fact, the students claim ulterior, financial motives behind this lack of proper peer review.

August 18, 2005

Topic: Development

0 comments

Calendar:
5-Sep

Macworld

August 18, 2005

0 comments

Enterprise Grid Computing:
Grid computing holds great promise for the enterprise data center, but many technical and operational hurdles remain.

I have to admit a great measure of sympathy for the IT populace at large, when it is confronted by the barrage of hype around grid technology, particularly within the enterprise. Individual vendors have attempted to plant their flags in the notionally virgin technological territory and proclaim it as their own, using terms such as grid, autonomic, self-healing, self-managing, adaptive, utility, and so forth. Analysts, well, analyze and try to make sense of it all, and in the process each independently creates his or her own map of this terra incognita, naming it policy-based computing, organic computing, and so on. Unfortunately, this serves only to further muddy the waters for most people.

August 18, 2005

Topic: Distributed Computing

2 comments

Web Services and IT Management:
Web services aren’t just for application integration anymore.

Platform and programming language independence, coupled with industry momentum, has made Web services the technology of choice for most enterprise integration projects. Their close relationship with SOA (service-oriented architecture) has also helped them gain mindshare. Consider this definition of SOA: "An architectural style whose goal is to achieve loose coupling among interacting software agents. A service is a unit of work done by a service provider to achieve desired end results for a service consumer."

August 18, 2005

Topic: Distributed Computing

0 comments

Enterprise Software as Service:
Online services are changing the nature of software.

While the practice of outsourcing business functions such as payroll has been around for decades, its realization as online software services has only recently become popular. In the online service model, a provider develops an application and operates the servers that host it. Customers access the application over the Internet using industry-standard browsers or Web services clients. A wide range of online applications, including e-mail, human resources, business analytics, CRM (customer relationship management), and ERP (enterprise resource planning), are available.

August 18, 2005

Topic: Distributed Computing

0 comments

Describing the Elephant: The Different Faces of IT as Service:
Terms such as grid, on-demand, and service-oriented architecture are mired in confusion, but there is an overarching trend behind them all.

In a well-known fable, a group of blind men are asked to describe an elephant. Each encounters a different part of the animal and, not surprisingly, provides a different description. We see a similar degree of confusion in the IT industry today, as terms such as service-oriented architecture, grid, utility computing, on-demand, adaptive enterprise, data center automation, and virtualization are bandied about. As when listening to the blind men, it can be difficult to know what reality lies behind the words, whether and how the different pieces fit together, and what we should be doing about the animal(s) that are being described.

August 18, 2005

Topic: Distributed Computing

0 comments

A Conversation with David Anderson:
Supercomputing on the grassroots level

Millions of PCs on desktops at home are helping to solve some of the world’s most compute-intensive scientific problems. And it’s an all-volunteer force of PC users, who, with very little effort, can contribute much-needed PC muscle to the scientific and academic communities.

August 18, 2005

Topic: Open Source

0 comments

Kode Vicious Cycles On:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Not only does California give you plenty of sun, it also apparently has employers that give you plenty of time to play around with the smaller problems that you like, in a programming language that’s irrelevant to the later implementation.

August 18, 2005

Topic: Development

0 comments

What’s on Your Hard Drive?:
What’s on Your Hard Drive?

Tool I love! DbVisualizer. This tool flatters my intuition. Whenever I’m working with a database and think, "There really ought to be a tool to ___________," DbVisualizer usually has what I need.

August 18, 2005

Topic: Development

0 comments

News 2.0:
Taking a second look at the news so you don’t have to.

You’ve watched in awe as spyware programs cannibalized each other, shook your head in disgust when that vigilante, anti-piracy P2P virus deleted all of your MP3s, and felt instant paranoia after learning about the astonishing number of zombie PCs out there (“could I be a zombie?”). Well, it just keeps getting weirder... and scarier.

August 18, 2005

0 comments

Distributed Computing in the Modern Enterprise:
Grids, SOAs, web services, and beyond

Welcome to the Alphabet Soup edition of Queue. You are about to wade into a sea of buzzwords and acronyms, but the future of enterprise computing is on the other shore.

August 18, 2005

0 comments

Syntactic Heroin:
A dangerous coding addiction that leads to a readability disaster.

User-defined overloading is a drug. At first, it gives you a quick, feel-good fix. No sense in cluttering up code with verbose and ugly function names such as IntAbs, FloatAbs, DoubleAbs, or ComplexAbs; just name them all Abs. Even better, use algebraic notation such as A+B, instead of ComplexSum(A,B). It certainly makes coding more compact. But a dangerous addiction soon sets in. Languages and programs that were already complex enough to stretch everyone’s ability suddenly get much more complicated.
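
To make the complaint concrete, here is a minimal C++ sketch (ours, not the author’s) showing both the quick fix and where the trouble starts:

    #include <iostream>

    // A toy complex type; production code would use std::complex.
    struct Complex {
        double re, im;
    };

    // The appealing fix: algebraic notation instead of ComplexSum(a, b).
    Complex operator+(const Complex& a, const Complex& b) {
        return {a.re + b.re, a.im + b.im};
    }

    // The addiction: the same innocent-looking operator can hide
    // arbitrary work. Nothing at the call site warns the reader if an
    // overload allocates, logs, or takes a lock.
    std::ostream& operator<<(std::ostream& os, const Complex& c) {
        return os << "(" << c.re << ", " << c.im << ")";
    }

    int main() {
        Complex a{1, 2}, b{3, 4};
        std::cout << a + b << "\n";  // compact, but what does '+' really cost?
    }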

July 6, 2005

Topic: Code

2 comments

Calendar:
5-Jun

Red Hat Summit. June 1-3, 2005. New Orleans, Louisiana

July 6, 2005

0 comments

Programmers Are People, too:
Programming language and API designers can learn a lot from the field of human-factors design.

I would like to start out this article with an odd, yet surprisingly uncontroversial assertion, which is this: programmers are human. I wish to use this as a premise to explore how to improve the programmer’s lot. So, please, no matter your opinion on the subject, grant me this assumption for the sake of argument.

July 6, 2005

Topic: HCI

4 comments

Attack Trends: 2004 and 2005:
Hacking has moved from a hobbyist pursuit with a goal of notoriety to a criminal pursuit with a goal of money.

Counterpane Internet Security Inc. monitors more than 450 networks in 35 countries, in every time zone. In 2004 we saw 523 billion network events, and our analysts investigated 648,000 security “tickets.” What follows is an overview of what’s happening on the Internet right now, and what we expect to happen in the coming months.

July 6, 2005

Topic: Security

0 comments

Security - Problem Solved?:
Solutions to many of our security problems already exist, so why are we still so vulnerable?

There are plenty of security problems that have solutions. Yet, our security problems don’t seem to be going away. What’s wrong here? Are consumers being offered snake oil and rejecting it? Are they not adopting solutions they should be adopting? Or is there something else at work entirely? We’ll look at a few places where the world could easily be a better place, but isn’t, and build some insight as to why.

July 6, 2005

Topic: Security

1 comment

The Answer is 42 of Course:
If we want our networks to be sufficiently difficult to penetrate, we’ve got to ask the right questions.

Why is security so hard? As a security consultant, I’m glad that people feel that way, because that perception pays my mortgage. But is it really so difficult to build systems that are impenetrable to the bad guys?

July 6, 2005

Topic: Security

0 comments

A Conversation with Peter Tippett and Steven Hofmeyr:
Two leaders in the field of computer security discuss the influence of biomedicine on their work, and more.

There have always been similarities and overlap between the worlds of biology and computer science. Nowhere is this more evident than in computer security, where the basic terminology of viruses and infection is borrowed from biomedicine.

July 6, 2005

Topic: Security

0 comments

Kode Vicious Gets Dirty:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Dear Kode Vicious, I am a new Webmaster of a (rather new) Web site in my company’s intranet. Recently I noticed that although I have implemented some user authentication (a start *.asp page linked to an SQL server, having usernames and passwords), some of the users found out that it is also possible to enter a rather longer URL to a specific page within that Web site (instead of entering the homepage), and they go directly to that page without being authenticated (and without their login being recorded in the SQL database).
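
To be clear about the flaw: authentication is enforced only at the front door, so a deep link walks right past it. Conceptually, the fix is a check on every request; the site in question is ASP, so this C++ sketch of ours is purely illustrative:

    #include <iostream>
    #include <set>
    #include <string>

    // Toy model of the fix: every page handler asks "is this session
    // logged in?" before serving anything, so deep links cannot bypass
    // the start page.
    std::set<std::string> logged_in_sessions;

    bool require_auth(const std::string& session_id) {
        return logged_in_sessions.count(session_id) > 0;
    }

    std::string serve_page(const std::string& url, const std::string& session_id) {
        if (!require_auth(session_id))
            return "302 redirect to /login";   // and record the attempt
        return "200 contents of " + url;
    }

    int main() {
        logged_in_sessions.insert("alice-session");
        std::cout << serve_page("/reports/q3.asp", "alice-session") << "\n";
        std::cout << serve_page("/reports/q3.asp", "mallory-session") << "\n";
    }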

July 6, 2005

Topic: Web Development

0 comments

What’s on Your Hard Drive?:
We’re stuck using these darn development tools.

Echoing a recent editor’s note, it’s fun to imagine a future where software development is abstracted to the point where we just say, “Hey, Deep Thought, build me this call-center application. Marvin, here, will get you the specs,” and voila, the software is designed, developed, tested, and deployed. Until then, however, we’re stuck using these darn development tools. Being of human derivation (and therefore imperfect), they are perfect targets for our frustration—and praise.

July 6, 2005

0 comments

News 2.0:
Taking a second look at the news so you don’t have to.

It’s well known that computer science education in the U.S. is in decline. From secondary schools up to graduate degree programs, the U.S. has witnessed a steady drop in enrollment in computer science courses. Many factors are responsible—in particular, the dot-com bust, outsourcing fears, and declines in federal spending on fundamental CS research. A vicious cycle is in the works, whereby fears about job prospects will drive possible CS candidates into other fields, only to provide fuel for the outsourcing fire as the pool of native IT workers shrinks.

July 6, 2005

0 comments

Letters:
Selinger’s Inspiring Words

The Queue interview with Pat Selinger (April 2005) was very interesting and a great review of major developments in database technology. I recognize how much computer science has advanced in the past 30 years, but when I read her answer about what areas she thinks need more research, I could not take from my mind that image of Captain Picard saying, “Computer, investigate all cross-references of this incident in the last 200 years and narrow the search to relevant information.” I really hope to ask the same kinds of questions to my computer in 2035.

July 6, 2005

0 comments

On Feeling Secure in an Unsafe World:
Doing all we can to meet our security needs

Security has always been a loaded word—all the more so since 9/11. Webster’s defines it as “freedom from fear, anxiety, danger, and doubt.” Within Maslow’s famous hierarchy of needs, meanwhile, we can find it just above our most basic physiological needs, such as food, water, and shelter. So it seems, no matter how you slice it, security comes down to some fundamental sense of well-being or safety. As we all know, that’s no small feat in a world that is not particularly safe.

July 6, 2005

0 comments

Mal Managerium: A Field Guide:
I have seen the enemy, and he is me.

Please allow me the pleasure of leading you on an “office safari,” so to speak. On today’s journey we’ll travel the corridors of computerdom in search of the widespread but elusive mal managerium, or bad manager, in common parlance. They will be difficult to spot because we will be in a sense looking for that most elusive creature of all: ourselves. That is to say, it’s quite possible that many of us will share some of the qualities with the various types of bad managers we shall encounter. Qualities that we are loath to admit we possess, I might add.

June 7, 2005

Topic: Development

0 comments

Calendar:
5-May

NSDI (Networked Systems Design and Implementation), May 2-4, 2005, Boston, Massachusetts

June 7, 2005

0 comments

You Don’t Know Jack about Network Performance:
Bandwidth is only part of the problem.

Why does an application that works just fine over a LAN come to a grinding halt across the wide-area network? You may have experienced this firsthand when trying to open a document from a remote file share or remotely logging in over a VPN to an application running in headquarters. Why is it that an application that works fine in your office can become virtually useless over the WAN? If you think it’s simply because there’s not enough bandwidth in the WAN, then you don’t know jack about network performance.

June 7, 2005

Topic: Networks

0 comments

Streams and Standards: Delivering Mobile Video:
The era of video served up to mobile phones has arrived and threatens to be the next “killer app” after wireless calling itself.

Don’t believe me? Follow along… Mobile phones are everywhere. Everybody has one. Think about the last time you were on an airplane and the flight was delayed on the ground. Immediately after the dreaded announcement, you heard everyone reach for their phones and start dialing.

June 7, 2005

Topic: Mobile Computing

0 comments

Mobile Media: Making It a Reality:
Two prototype apps reveal the challenges in delivering mobile media services.

Many future mobile applications are predicated on the existence of rich, interactive media services. The promise and challenge of such services is to provide applications under the most hostile conditions - and at low cost to a user community that has high expectations. Context-aware services require information about who, where, when, and what a user is doing and must be delivered in a timely manner with minimum latency. This article reveals some of the current state-of-the-art "magic" and the research challenges.

June 7, 2005

Topic: Mobile Computing

0 comments

Enterprise-Grade Wireless:
Wireless technology has come a long way, but is it robust enough for today’s enterprise?

We have been working in the wireless space in one form or another for more than 10 years and have participated in every phase of its maturation process. We saw wireless progress from a toy technology before the dot-com boom, to something truly promising during the boom, only to be left wanting after the bubble when the technology was found to be not ready for prime time. Fortunately, it appears we have reached the point where the technology and the enterprise’s expectations have finally converged.

June 7, 2005

Topic: Mobile Computing

0 comments

A Conversation with Tim Marsland:
Taking software delivery to a new level

Delivering software to customers, especially in increments to existing systems, has been a difficult challenge since the days of floppies and shrink-wrap. But with guys like Tim Marsland working on the problem, the process could be improving.

June 7, 2005

Topic: Patching and Deployment

0 comments

Kode Vicious vs. Mothra:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Dear KV, My co-workers keep doing really bad things in the code, such as writing C++ code with macros that have gotos that jump out of them, and using assert in lower-level functions as an error-handling facility. I keep trying to get them to stop doing these things, but the standard response I get is, “Yeah, it’s not pretty, but it works.” How can I get them to start asking, “Is there a better way to do this?” They listen to my arguments but don’t seem convinced. In some cases they even insist they are following good practices.
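
A hypothetical C++ sketch (ours, not KV’s) of the assert habit in question, next to the conventional alternative of reporting the error to the caller:

    #include <cassert>
    #include <cstdio>

    // The complained-about style: a low-level routine that treats bad
    // input as instantly fatal. Worse, in release builds (NDEBUG) the
    // assert vanishes and the bad value sails through unchecked.
    int parse_port_asserting(int value) {
        assert(value > 0 && value < 65536);
        return value;
    }

    // A conventional alternative: report failure to the caller and let
    // higher-level code decide whether it is fatal.
    bool parse_port(int value, int* out) {
        if (value <= 0 || value >= 65536) return false;
        *out = value;
        return true;
    }

    int main() {
        int port;
        if (!parse_port(70000, &port))
            std::fprintf(stderr, "invalid port: 70000\n");
    }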

June 7, 2005

Topic: Development

0 comments

What’s on Your Hard Drive?:
Combine this storage surplus with gigabytes of RAM, and we have a breeding ground for that most conspicuous of “wares”: bloatware.

Combine this storage surplus with gigabytes of RAM, and we have a breeding ground for that most conspicuous of “wares”: bloatware. We’ve all seen it, and many of us use it every day. We may have even become comfortable with these overweight tools and now barely recognize them as such. Indeed, they can simultaneously be our greatest enemies and our most familiar friends.

June 7, 2005

0 comments

News 2.0:
Taking a second look at the news so you don’t have to.

Debates about the relative robustness and security of various operating systems have raged for years. A few months ago the Linux community got some concrete numbers to back up its claims of superiority. A study conducted by Stanford researchers revealed that the Linux 2.6 kernel, which has 5.7 million lines of code, contains “only” 985 bugs. This number pales in comparison to the average number for commercial software, which a Carnegie Mellon University team determined to be 20 to 30 bugs per 1,000 lines of code. Based on this ratio, one would expect 114,000 to 171,000 bugs in the Linux kernel.

June 7, 2005

0 comments

Letters:
Readers’ Comments Are Important, Too

Jef Raskin’s article, “Comments Are More Important than Code” (March 2005), made my day! Ideally the comments should outline (in a form clarified by hindsight) the intellectual process that led to the code.

June 7, 2005

0 comments

Mobile Applications Get Real:
Yes, you can do actual work on a PDA.

First off, full disclosure: I don’t own a PDA. I’ve long dismissed them as expensive gadgets whose main function is to distract people from the boredom of commuting. But one need not look far to see the error in my observation. How shocked I was to discover that some of my fellow commuters are doing actual work on their PDAs! And I don’t mean just e-mail. A growing number of workers employ these devices to access company data using custom applications developed in-house and then deployed to the field.

June 7, 2005

0 comments

File under "Unknowable!":
It’s been a hard day’s night—proving nonexistence!

The Yellow Pages used to advertise along the following lines: “If you can’t find it here, it does not exist.” Shannan Hobbes, my favorite epistemologist, and I would ponder this claim well into the wee hours, testing its validity by searching for vendors of “Square Circles,” “Pet Unicorns,” “Cold Fusion,” “The Largest Prime Number,” “Reigning Bald Kings of France,” and similar quiddities oft debated in the PhilTrans (Philosophical Transactions). Our mounting failures—or, to quickly remove the scandalous ambiguity—our growing number of “not founds” amounted to some sort of inductive verification (of which, more anon). The Yellow Pages, considered as a merely finite hierarchy of marketable strings, has nothing much to tell us of contingent, objective existence.

April 21, 2005

Topic: Code

0 comments

Calendar:
5-Apr

CHI (Conference on Human-Computer Interaction)

April 21, 2005

0 comments

Beyond Relational Databases:
There is more to data access than SQL.

The number and variety of computing devices in the environment are increasing rapidly. Real computers are no longer tethered to desktops or locked in server rooms. PDAs, highly mobile tablet and laptop devices, palmtop computers, and mobile telephony handsets now offer powerful platforms for the delivery of new applications and services. These devices are, however, only the tip of the iceberg. Hidden from sight are the many computing and network elements required to support the infrastructure that makes ubiquitous computing possible.

April 21, 2005

Topic: Data

1 comment

Databases of Discovery:
Open-ended database ecosystems promote new discoveries in biotech. Can they help your organization, too?

The National Center for Biotechnology Information is responsible for massive amounts of data. A partial list includes the largest public bibliographic database in biomedicine; the U.S. national DNA sequence database; a free online full-text research article database; assembly, annotation, and distribution of a reference set of genes, genomes, and chromosomes; online text search and retrieval systems; and specialized molecular biology data search engines. At this writing, NCBI receives about 50 million Web hits per day, at peak rates of about 1,900 hits per second, and about 400,000 BLAST searches per day from about 2.5 million users.

April 21, 2005

Topic: Databases

0 comments

A Call to Arms:
Long anticipated, the arrival of radically restructured database architectures is now finally at hand.

We live in a time of extreme change, much of it precipitated by an avalanche of information that otherwise threatens to swallow us whole. Under the mounting onslaught, our traditional relational database constructs—always cumbersome at best—are now clearly at risk of collapsing altogether. In fact, rarely do you find a DBMS anymore that doesn’t make provisions for online analytic processing. Decision trees, Bayes nets, clustering, and time-series analysis have also become part of the standard package, with allowances for additional algorithms yet to come. Also, text, temporal, and spatial data access methods have been added—along with associated probabilistic logic, since a growing number of applications call for approximated results.

April 21, 2005

Topic: Databases

1 comment

A Conversation with Pat Selinger:
Leading the way to manage the world’s information

Take Pat Selinger of IBM and James Hamilton of Microsoft and put them in a conversation together, and you may hear everything you wanted to know about database technology and weren’t afraid to ask. Selinger, IBM Fellow and vice president of area strategy, information, and interaction for IBM Research, drives the strategy for IBM’s research work spanning the range from classic database systems through text, speech, and multimodal interactions. Since graduating from Harvard with a Ph.D. in applied mathematics, she has spent almost 30 years at IBM, hopscotching between research and development of IBM’s database products.

April 21, 2005

Topic: Databases

0 comments

Kode Vicious Battles On:
Kode Vicious is at it again, dragging you out of your koding quagmires and kombating the enemies of kommon sense.

Dear KV, I’m maintaining some C code at work that is driving me right out of my mind. It seems I cannot go more than three lines in any file without coming across a chunk of code that is conditionally compiled.
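
For readers spared this particular quagmire, a hypothetical sketch of the usual cure: confine each #ifdef to one small shim so the conditional logic appears once, not every three lines.

    // One portability shim per platform difference; callers contain no
    // conditional compilation at all.
    #ifdef _WIN32
      #include <windows.h>
    #else
      #include <unistd.h>
    #endif

    void sleep_ms(unsigned ms) {
    #ifdef _WIN32
        Sleep(ms);            // Win32 Sleep takes milliseconds
    #else
        usleep(ms * 1000);    // POSIX usleep takes microseconds
    #endif
    }

    int main() {
        sleep_ms(100);        // no #ifdef in sight
    }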

April 21, 2005

Topic: Development

0 comments

What’s on Your Hard Drive?:
What’s on Your Hard Drive?

Tool I love! GCC. I enjoy GCC because each of its compilers does exactly what it is told: nothing more, nothing less. If you can’t write a program in your favorite language without the use of an IDE, you are not a programmer.

April 21, 2005

Topic: Development

0 comments

News 2.0:
Taking a second look at the news so you don’t have to.

For years, many of us have donated our spare CPU cycles to the romantic, and thus far fruitless, search for extraterrestrial life in the heavens above. Even without empirical proof, the SETI@home project gives us some sense of contributing to something larger than ourselves—and hope that the cool, 3D graphs being drawn and redrawn across our screens could be just moments away from plotting a new course for intelligent life in the universe.

April 21, 2005

0 comments

Letters:
Quality Assurance: Much More than Testing

Reading Stuart Feldman’s article, “Quality Assurance: Much More than Testing” (February 2005), I thought, “Oh good, someone who shares my views is writing about this.” Later, however, I found that of the four articles in the QA special report, only Feldman’s was actually about QA. Only he seems to know that QA does not equal testing. You should study his ISO 12207 definition. It says QA is about process. It is about providing confidence. The other three articles were mainly about testing techniques and only superficially about QA. Testing is only one of the possible QA activities.

April 21, 2005

0 comments

I Am an Abstraction Layer:
Queue’s databases special report sparks a thought.

I’m not going to “sully” Queue’s special report on databases by attempting to talk about them myself. Doing so would be like a sportscaster giving color-commentary to a Dalai Lama sermon: embarrassing. And while I won’t go so far as to call this issue of Queue spiritual, read Jim Gray’s “A Call to Arms” and you’ll see what I mean.

April 21, 2005

0 comments

Comments are More Important than Code:
The thorough use of internal documentation is one of the most-overlooked ways of improving software quality and speeding implementation.

In this essay I take what might seem a paradoxical position. I endorse the techniques that some programmers claim make code self-documenting and encourage the development of programs that do “automatic documentation.” Yet I also contend that these methods cannot provide the documentation necessary for reliable and maintainable code. They are only a rough aid, and even then help with only one or two aspects of documentation—not including the most important ones.

March 18, 2005

Topic: Code

3 comments

Calendar:
5-Mar

Embedded Systems Conference

March 18, 2005

0 comments

UML Fever: Diagnosis and Recovery:
Acknowledgment is only the first step toward recovery from this potentially devastating affliction.

The Institute of Infectious Diseases has recently published research confirming that the many and varied strains of UML Fever continue to spread worldwide, indiscriminately infecting software analysts, engineers, and managers alike. One of the fever’s most serious side effects has been observed to be a significant increase in both the cost and duration of developing software products. This increase is largely attributable to a decrease in productivity resulting from fever-stricken individuals investing time and effort in activities that are of little or no value to producing deliverable products. For example, afflictees of Open Loop Fever continue to create UML (Unified Modeling Language) diagrams for unknown stakeholders.

March 18, 2005

Topic: Patching and Deployment

1 comment

On Plug-ins and Extensible Architectures:
Extensible application architectures such as Eclipse offer many advantages, but one must be careful to avoid “plug-in hell.”

In a world of increasingly complex computing requirements, we as software developers are continually searching for that ultimate, universal architecture that allows us to productively develop high-quality applications. This quest has led to the adoption of many new abstractions and tools. Some of the most promising recent developments are the new pure plug-in architectures. What began as a callback mechanism to extend an application has become the very foundation of applications themselves. Plug-ins are no longer just add-ons to applications; today’s applications are made entirely of plug-ins.
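
The core mechanism is small enough to sketch. A minimal, hypothetical C++ rendition (ours, not Eclipse’s): the host defines only an interface and a registry, and every feature arrives as a plug-in.

    #include <functional>
    #include <iostream>
    #include <map>
    #include <memory>
    #include <string>

    // The host application defines only this contract...
    struct Plugin {
        virtual ~Plugin() = default;
        virtual void run() = 0;
    };

    // ...and a registry mapping plug-in names to factories.
    using Factory = std::function<std::unique_ptr<Plugin>()>;
    std::map<std::string, Factory>& registry() {
        static std::map<std::string, Factory> r;
        return r;
    }

    // A plug-in registers itself; in a real system this would live in a
    // separately compiled, dynamically loaded module.
    struct HelloPlugin : Plugin {
        void run() override { std::cout << "hello from a plug-in\n"; }
    };
    bool registered = (registry()["hello"] = [] {
        return std::unique_ptr<Plugin>(new HelloPlugin);
    }, true);

    int main() {
        // The host has no feature code of its own; it just runs plug-ins.
        for (auto& entry : registry())
            entry.second()->run();
    }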

March 18, 2005

Topic: Computer Architecture

0 comments

Patching the Enterprise:
Organizations of all sizes are spending considerable efforts on getting patch management right - their businesses depend on it.

Software patch management has grown to be a business-critical issue—from both a risk and a financial management perspective. According to a recent Aberdeen Group study, corporations spent more than $2 billion in 2002 on patch management for operating systems. Gartner research further notes that the cost of operating a well-managed PC was approximately $2,000 less annually than that of an unmanaged PC. You might think that with critical mass and more sophisticated tools, the management cost per endpoint in large organizations would be lower, though in reality this may not be the case.

March 18, 2005

Topic: Patching and Deployment

0 comments

Understanding Software Patching:
Developing and deploying patches is an increasingly important part of the software development process.

Software patching is an increasingly important aspect of today’s computing environment as the volume, complexity, and number of configurations under which a piece of software runs have grown considerably. Software architects and developers do everything they can to build secure, bug-free software products. To ensure quality, development teams leverage all the tools and techniques at their disposal. For example, software architects incorporate security threat models into their designs, and QA engineers develop automated test suites that include sophisticated code-defect analysis tools.

March 18, 2005

Topic: Patching and Deployment

0 comments

Kode Vicious Reloaded:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

The program should be a small project, but every time I start specifying the objects and methods it seems to grow to a huge size, both in the number of lines and the size of the final program.

March 18, 2005

Topic: Development

0 comments

What’s on Your Hard Drive?:
What’s on Your Hard Drive?

Submissions pour in daily, creating piles of late-night prescreening work for the Queue oompa loompas. We’re also receiving e-mail feedback from irate readers, questioning why we published Joe Blow’s emphatic endorsement of such and such IDE when “frankly, it totally sucks.” To accommodate these impulses we’re taking WOYHD to the Web, where we’ll post each month’s results, complete with a comments feature so you can argue about which tools are great and which tools you hate.

March 18, 2005

0 comments

Letters:
In “Extensible Programming for the 21st Century”, Gregory V. Wilson considers user-extensible syntax and semantics to be the “next big thing” that is missing from existing programming systems.

Done imperfectly, however, unfamiliar extended syntax can interfere with readability, creating a Tower of Babel that makes adding resources to a project expensive.

March 18, 2005

0 comments

An Update on Software Updates:
The way software is delivered has changed.

When I raised the idea at the Queue editorial advisory board meeting several months ago, it was because I think the way that software is now being delivered to us is quite interesting. Things have changed. Nowadays you much less often expect to “install” whole systems or indeed even individual applications.

March 18, 2005

0 comments

Traipsing Through the QA Tools Desert:
Who’s really to blame for buggy code?

The Jeremiahs of the software world are out there lamenting, “Software is buggy and insecure!” Like the biblical prophet who bemoaned the wickedness of his people, these malcontents tell us we must repent and change our ways. But as someone involved in building commercial software, I’m thinking to myself, “I don’t need to repent. I do care about software quality.” Even so, I know that I have transgressed. I have shipped software that has bugs in it. Why did I do it? Why can’t I ship perfect software all the time?

February 16, 2005

Topic: Quality Assurance

0 comments

Calendar:
5-Feb

LinuxWorld Conference and Expo. February 14-17, 2005. Boston, Massachusetts

February 16, 2005

0 comments

Review: Network Security Architectures:
Review: Network Security Architectures

Review: Network Security Architectures

February 16, 2005

0 comments

A Passage to India:
Pitfalls that the outsourcing vendor forgot to mention

Most American IT employees take a dim view of offshore outsourcing. It’s considered unpatriotic and it drains valuable intellectual capital and jobs from the United States to destinations such as India or China. Online discussion forums on sites such as isyourjobgoingoffshore.com are headlined with titles such as “How will you cope?” and “Is your career in danger?” A cover story in BusinessWeek magazine a couple of years ago summed up the angst most people suffer when faced with offshoring: “Is your job next?”

February 16, 2005

Topic: Distributed Development

0 comments

Orchestrating an Automated Test Lab:
Composing a score can help us manage the complexity of testing distributed apps.

Networking and the Internet are encouraging increasing levels of interaction and collaboration between people and their software. Whether users are playing games or composing legal documents, their applications need to manage the complex interleaving of actions from multiple machines over potentially unreliable connections. As an example, Silicon Chalk is a distributed application designed to enhance the in-class experience of instructors and students. Its distributed nature requires that we test with multiple machines. Manual testing is too tedious, expensive, and inconsistent to be effective. While automating our testing, however, we have found it very labor intensive to maintain a set of scripts describing each machine’s portion of a given test.

February 16, 2005

Topic: Quality Assurance

0 comments

Sifting Through the Software Sandbox: SCM Meets QA:
Source control—it’s not just for tracking changes anymore.

Thanks to modern SCM (software configuration management) systems, when developers work on a codeline they leave behind a trail of clues that can reveal what parts of the code have been modified, when, how, and by whom. From the perspective of QA (quality assurance) and test engineers, is this all just “data,” or is there useful information that can improve the test coverage and overall quality of a product?

February 16, 2005

Topic: Quality Assurance

0 comments

Too Darned Big to Test:
Testing large systems is a daunting task, but there are steps we can take to ease the pain.

The increasing size and complexity of software, coupled with concurrency and distributed systems, has made apparent the ineffectiveness of using only handcrafted tests. The misuse of code coverage and avoidance of random testing has exacerbated the problem. We must start again, beginning with good design (including dependency analysis), good static checking (including model property checking), and good unit testing (including good input selection). Code coverage can help select and prioritize tests to make you more efficient, as can the all-pairs technique for controlling the number of configurations.
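
As a toy illustration of the all-pairs idea (ours, not from the article): three boolean options have eight exhaustive combinations, yet four well-chosen tests exercise every pair of options in every combination of values, as this sketch verifies.

    #include <cstdio>

    int main() {
        // Four test configurations for options (A, B, C); exhaustive
        // testing would need eight.
        int tests[4][3] = { {0,0,0}, {0,1,1}, {1,0,1}, {1,1,0} };

        // For every pair of options and every pair of values, confirm
        // that some test covers the combination.
        bool all_covered = true;
        for (int i = 0; i < 3; ++i)
            for (int j = i + 1; j < 3; ++j)
                for (int vi = 0; vi < 2; ++vi)
                    for (int vj = 0; vj < 2; ++vj) {
                        bool covered = false;
                        for (auto& t : tests)
                            if (t[i] == vi && t[j] == vj) covered = true;
                        if (!covered) all_covered = false;
                    }
        std::printf("all pairs covered: %s\n", all_covered ? "yes" : "no");
    }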

February 16, 2005

Topic: Quality Assurance

1 comment

Quality Assurance: Much More than Testing:
Good QA is not only about technology, but also methods and approaches.

Quality assurance isn’t just testing, or analysis, or wishful thinking. Although it can be boring, difficult, and tedious, QA is nonetheless essential. Ensuring that a system will work when delivered requires much planning and discipline. Convincing others that the system will function properly requires even more careful and thoughtful effort. QA is performed through all stages of the project, not just slapped on at the end. It is a way of life.

February 16, 2005

Topic: Quality Assurance

0 comments

A Conversation with Tim Bray:
Searching for ways to tame the world’s vast stores of information.

Tim Bray’s Waterloo was no crushing defeat, but rather the beginning of his success as one of the conquerors of search engine technology and XML. In 1986, after working in software at DEC and GTE, he took a job at the University of Waterloo in Ontario, Canada, where he managed the New Oxford English Dictionary Project, an ambitious research endeavor to bring the venerable Oxford English Dictionary into the computer age.

February 16, 2005

Topic: Web Services

0 comments

Kode Vicious Unleashed:
Koding konundrums driving you nuts? Ko-workers making you krazy? Not to worry, Kode Vicious has you covered.

Dear KV, My officemate writes methods that are 1,000 lines long and claims they are easier to understand than if they were broken down into a smaller set of methods. How can we convince him his code is a maintenance nightmare?

February 16, 2005

Topic: Development

0 comments

What’s on Your Hard Drive?:
Every month we invite you to visit the Queue Web site and tell us: What’s on your hard drive?

Every month we invite you to visit the Queue Web site and tell us: What’s on your hard drive?

February 16, 2005

0 comments

Letters:
I enjoyed Roy Want’s article, “The Magic of RFID” (October 2004).

It clearly outlined the technical and social problems associated with RFID systems. I definitely do not want my personal info stored in someone’s database with my purchase data from RFID tags, so the kill switch is a good idea. Unfortunately, it does not solve the problem of retailers collecting detailed information on the buying habits of their customers and then selling or losing this data. It was a really great article. Please have more of the same.

February 16, 2005

0 comments

Puttin’ the Queue in QA:
QA is so important that the ACM Queue Editorial Advisory Board felt it merited its own dedicated special report.

Unlike many of the topics Queue tackles, QA isn’t sexy or exciting. It isn’t new, hip, or happenin’. You’ll not see many magazine covers proclaiming QA to be the next big thing, in the vein of past cover crazes: Push Technology Will Change the World!; or XML, XML, XML!; or Java, This Changes Everything!

February 16, 2005

0 comments

Self-Healing in Modern Operating Systems:
A few early steps show there’s a long (and bumpy) road ahead.

Driving the stretch of Route 101 that connects San Francisco to Menlo Park each day, billboard faces smilingly reassure me that all is well in computerdom in 2004. Networks and servers, they tell me, can self-defend, self-diagnose, self-heal, and even have enough computing power left over from all this introspection to perform their owner-assigned tasks.

December 27, 2004

Topic: Failure and Recovery

0 comments

How Not to Write Fortran in Any Language:
There are characteristics of good coding that transcend all programming languages.

There’s no obfuscated Perl contest because it’s pointless.

December 27, 2004

Topic: Programming Languages

9 comments

Extensible Programming for the 21st Century:
Is an open, more flexible programming environment just around the corner?

In his keynote address at OOPSLA ’98, Sun Microsystems Fellow Guy L. Steele Jr. said, “From now on, a main goal in designing a language should be to plan for growth.” Functions, user-defined types, operator overloading, and generics (such as C++ templates) are no longer enough: tomorrow’s languages must allow programmers to add entirely new kinds of information to programs, and control how it is processed. This article argues that next-generation programming systems can accomplish this by combining three specific technologies.

December 27, 2004

Topic: Programming Languages

3 comments

Fuzzy Boundaries: Objects, Components, and Web Services:
It’s easy to transform objects into components and Web services, but how do we know which is right for the job?

If you are an object-oriented programmer, you will understand the code snippet, even if you are not familiar with the language (C#, not that it matters). You will not be surprised to learn that this program will print out the following line to the console: woof.
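
The article’s snippet is C# and not reproduced here; as a stand-in, our own C++ rendition of the classic dispatch example the abstract alludes to:

    #include <iostream>

    struct Animal {
        virtual ~Animal() = default;
        virtual void speak() const { std::cout << "...\n"; }
    };

    struct Dog : Animal {
        void speak() const override { std::cout << "woof\n"; }
    };

    int main() {
        Animal* pet = new Dog;
        pet->speak();   // dynamic dispatch prints: woof
        delete pet;
    }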

December 27, 2004

Topic: Programming Languages

1 comment

Languages, Levels, Libraries, and Longevity:
New programming languages are born every day. Why do some succeed and some fail?

In 50 years, we’ve already seen numerous programming systems come and (mostly) go, although some have remained a long time and will probably do so for: decades? centuries? millennia? The questions about language designs, levels of abstraction, libraries, and resulting longevity are numerous. Why do new languages arise? Why is it sometimes easier to write new software than to adapt old software that works? How many different levels of languages make sense? Why do some languages last in the face of “better” ones?

December 27, 2004

Topic: Programming Languages

0 comments

Linguae Francae:
Is programming language a misnomer?

Many linguists are still busy trying to reconstruct the single ur-language presumed to have evolved over untold millennia into the thousands of human tongues - alive and dead, spoken and written - that have since been catalogued and analyzed. The amazing variety and complexity of known languages and dialects seems, at first parse, to gainsay such a singular seed.

December 27, 2004

Topic: Programming Languages

0 comments

Calendar:
4-Dec

WORLDS (Workshop on Real, Large Distributed Systems)

December 27, 2004

0 comments

Review: Spoken Dialogue Technology:
Review: Spoken Dialogue Technology

Review: Spoken Dialogue Technology

December 27, 2004

0 comments

Review: Eclipse by Steve Holzner:
Review: Eclipse by Steve Holzner

Review: Eclipse by Steve Holzner

December 27, 2004

0 comments

A Conversation with Alan Kay:
Big talk with the creator of Smalltalk - and much more

When you want to gain a historical perspective on personal computing and programming languages, why not turn to one of the industry’s preeminent pioneers? That would be Alan Kay, winner of last year’s Turing Award for leading the team that invented Smalltalk, as well as for his fundamental contributions to personal computing. Kay was one of the founders of the Xerox Palo Alto Research Center (PARC), where he led one of several groups that together developed modern workstations (and the forerunners of the Macintosh), Smalltalk, the overlapping window interface, desktop publishing, the Ethernet, laser printing, and network client-servers.

December 27, 2004

Topic: Programming Languages

4 comments

Kode Vicious: The Return:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Dear KV, Whenever my team reviews my code, they always complain that I don’t check for return values from system calls. I can see having to check a regular function call, because I don’t trust my co-workers, but system calls are written by people who know what they’re doing--and, besides, if a system call fails, there isn’t much I can do to recover. Why bother?
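
Before KV replies, a reminder of what the reviewers are asking for: a minimal sketch (ours) of checked system calls on a POSIX system.

    #include <cerrno>
    #include <cstdio>
    #include <cstring>
    #include <fcntl.h>
    #include <unistd.h>

    int main() {
        // System calls fail for reasons no skill prevents: missing
        // files, full disks, exhausted descriptors, signals.
        int fd = open("config.txt", O_RDONLY);
        if (fd == -1) {
            std::fprintf(stderr, "open failed: %s\n", std::strerror(errno));
            return 1;
        }

        char buf[256];
        ssize_t n = read(fd, buf, sizeof buf);
        if (n == -1)
            std::fprintf(stderr, "read failed: %s\n", std::strerror(errno));

        if (close(fd) == -1)   // yes, even close can fail
            std::fprintf(stderr, "close failed: %s\n", std::strerror(errno));
        return 0;
    }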

December 27, 2004

Topic: Development

0 comments

What’s on Your Hard Drive?:
What’s on Your Hard Drive?

If you have visited the Queue Web site recently, you will have noticed an invitation to tell us about tools that you use—how they make your life wonderful or how they make your life a living hell. Every month the editors will carefully select four of these submissions from the millions received. If you’re one of the chosen, you’ll receive a complimentary (and oh so very flattering) Queue t-shirt, the Holy Grail of the software development industry.

December 27, 2004

0 comments

News 2.0:
Taking a second look at the news so you don’t have to.

These days it’s difficult to pick up a newspaper, turn on the TV, or even, say, open a copy of Queue, without encountering the “O” word: outsourcing. Gartner recently predicted nontrivial cutbacks in the domestic IT labor force over the next five years, mostly as a result of outsourcing IT services, claiming that 60 percent of IT departments will cut their workforces in half by 2008.

December 27, 2004

0 comments

Letters:
I enjoyed the first column by Kode Vicious (October 2004).

I enjoyed the first column by Kode Vicious (October 2004), and especially liked the irreverent remarks to questioners prior to his substantive reply. Good column—keep up the good work!

December 27, 2004

0 comments

The Big Programming Languages Issue:
Please tell me, Who won the presidency?

As I write this month’s Editor’s Note, I sit upon the eve of the 2004 Presidential election, and by all counts it’s anybody’s game. Of course, by the time this reaches you the outcome (hopefully) will be clear. Now I’m not going to sound off about my political leanings here, on the pages of ACM Queue, a magazine dedicated to practical technology issues for technologists.

December 27, 2004

0 comments

Lack of Priority Queuing Considered Harmful:
We’re in sore need of critical Internet infrastructure protection.

Most modern routers consist of several line cards that perform packet lookup and forwarding, all controlled by a control plane that acts as the brain of the router, performing essential tasks such as management functions, error reporting, control functions including route calculations, and adjacency maintenance. This control plane has many names; in this article it is the route processor, or RP. The route processor calculates the forwarding table and downloads it to the line cards using a control-plane bus. The line cards perform the actual packet lookup and forwarding.
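
The prescription is in the title, and the mechanism fits in a few lines. A toy C++ model (ours, not router code) in which control-plane packets are always served before data packets, so routing updates survive a flood:

    #include <iostream>
    #include <queue>
    #include <string>

    // Strict priority: the control queue is always drained first, so a
    // flood of ordinary traffic cannot starve the route processor.
    int main() {
        std::queue<std::string> control, data;
        control.push("BGP keepalive");
        for (int i = 0; i < 3; ++i) data.push("bulk packet");
        control.push("OSPF hello");

        while (!control.empty() || !data.empty()) {
            if (!control.empty()) {
                std::cout << "serve control: " << control.front() << "\n";
                control.pop();
            } else {
                std::cout << "serve data:    " << data.front() << "\n";
                data.pop();
            }
        }
    }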

December 6, 2004

Topic: Web Security

0 comments

Outsourcing: Devising a Game Plan:
What types of projects make good candidates for outsourcing?

Your CIO just summoned you to duty by handing off the decision-making power about whether to outsource next year’s big development project to rewrite the internal billing system. That’s quite a daunting task! How can you possibly begin to decide if outsourcing is the right option for your company? There are a few strategies that you can follow to help you avoid the pitfalls of outsourcing and make informed decisions. Outsourcing is not exclusively a technical issue, but it is a decision that architects or development managers are often best qualified to make because they are in the best position to know what technologies make sense to keep in-house.

December 6, 2004

Topic: Distributed Development

1 comment

Error Messages:
What’s the Problem?

Computer users spend a lot of time chasing down errors - following the trail of clues that starts with an error message and that sometimes leads to a solution and sometimes to frustration. Problems with error messages are particularly acute for system administrators (sysadmins) - those who configure, install, manage, and maintain the computational infrastructure of the modern world - as they spend a lot of effort to keep computers running amid errors and failures.

December 6, 2004

Topic: Failure and Recovery

0 comments

Automating Software Failure Reporting:
We can only fix those bugs we know about.

There are many ways to measure quality before and after software is released. For commercial and internal-use-only products, the most important measurement is the user’s perception of product quality. Unfortunately, perception is difficult to measure, so companies attempt to quantify it through customer satisfaction surveys and failure/behavioral data collected from their customer bases. This article focuses on the problems of capturing failure data from customer sites.
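
One common building block (our sketch, not the article’s system) is a handler that turns a crash into a minimal report before the process dies; a real reporter would then queue the file for upload with the user’s consent.

    #include <csignal>
    #include <fcntl.h>
    #include <unistd.h>

    // On a crash, append a breadcrumb to a report file. Only
    // async-signal-safe calls (open/write/_exit) are legal here.
    void on_crash(int sig) {
        int fd = open("crash.report", O_WRONLY | O_CREAT | O_APPEND, 0600);
        if (fd != -1) {
            const char msg[] = "fatal signal caught; see build id and core\n";
            write(fd, msg, sizeof msg - 1);
            close(fd);
        }
        _exit(128 + sig);
    }

    int main() {
        std::signal(SIGSEGV, on_crash);
        volatile int* p = nullptr;
        *p = 42;   // deliberately crash to exercise the handler
    }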

December 6, 2004

Topic: Failure and Recovery

0 comments

Oops! Coping with Human Error in IT Systems:
Errors Happen. How to Deal.

Human operator error is one of the most insidious sources of failure and data loss in today’s IT environments. In early 2001, Microsoft suffered a nearly 24-hour outage in its Web properties as a result of a human error made while configuring a name resolution system. Later that year, an hour of trading on the Nasdaq stock exchange was disrupted because of a technician’s mistake while testing a development system. More recently, human error has been blamed for outages in instant messaging networks, for security and privacy breaches, and for banking system failures.

December 6, 2004

Topic: Failure and Recovery

0 comments

Programming in Franglais:
Six of one, half a dozen of d’autre

When I was studying French in high school, we students often spoke “Franglais”: French grammar and words where we knew them, English inserted where our command of French failed us. It was pretty awful, and the teacher did not think highly of it. But we could communicate haltingly because we all had about the same levels of knowledge of the respective languages. Today, there is a kind of programmer’s Franglais that is all too pervasive. Those who are old enough will remember the pitched controversy in the late 1960s and early 1970s over whether compilers, operating systems, and other systems programs should be written in assembly code or a high-level language.

December 6, 2004

Topic: Development

0 comments

Careers:
NextNet Wireless

NextNet, the industry’s most widely deployed provider of non-line-of-sight (NLOS) plug-and-play broadband wireless access systems, has an immediate opening for a Software Engineer. This position will be responsible for the design, development, and support of RISC-based embedded software for interfacing to proprietary signal processing devices and microprocessor support chips.

December 6, 2004

0 comments

Calendar:
4-Nov

Supercomputing. November 6-12, 2004. Pittsburgh, Pennsylvania. RFID Developer Conference. November 9-10, 2004. San Francisco, California

December 6, 2004

0 comments

A Conversation with Bruce Lindsay:
Designing for failure may be the key to success.

Designing for failure may be the key to success.

December 6, 2004

Topic: Databases

2 comments

Kode Vicious Strikes Again:
Kall us krazy, but we’re making Kode Vicious a regular.

Dear Kode Vicious, I have this problem. I can never seem to find bits of code I know I wrote. This isn’t so much work code--that’s on our source server--but you know, those bits of test code I wrote last month, I can never find them. How do you deal with this?

December 6, 2004

Topic: Development

0 comments

What’s on Your Hard Drive?:
Clean, fast, multi-platform, and somewhat terse, it writes client and server code anywhere.

Clean, fast, multi-platform, and somewhat terse, it writes client and server code anywhere. It’s powerful enough to not get in the way of expressing complex designs... and rapidly scrapping them.

December 6, 2004

0 comments

News 2.0:
Taking a second look at the news so you don’t have to.

You’ve probably been reading a lot about the PlanetLab Consortium (150-universities-and-research-labs-large, and growing) and all the bells and whistles that will be included in the new, planetary-scale Internet, such as a new overlay network with intelligent routers and servers, decentralized and self-organizing applications, higher-level functionalities, and more. Sounds like something that most of earth’s 6 billion mortals could get used to.

December 6, 2004

0 comments

Letters:
I liked the article but disagree with the statement, “In C++ and Java, the construct called a class is neither a module nor a type…”

Here’s my favorite definition of class: “A class is a module AND a blueprint for modules. The modules constructed according to such a blueprint are usually called ‘objects of the class’.”

December 6, 2004

0 comments

The Guru Code:
Does anyone actually know what these codes mean?

No, this is not my attempt at a Da Vinci Code parody. The guru code isn’t fiction, it’s real. The guru code permeates everything—OK, maybe not everything, but at least the cable TV network in Connecticut.

December 6, 2004

0 comments

Trials and Tribulations of Debugging Concurrency:
You can run, but you can’t hide.

We now sit firmly in the 21st century, where the grand challenge to the modern-day programmer is neither memory leaks nor type issues (both of those problems are now effectively solved), but rather issues of concurrency. How does one write increasingly complex programs where concurrency is a first-class concern? Or, even more treacherous, how does one debug such a beast? These questions bring fear into the hearts of even the best programmers.
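
The canonical specimen, in a short C++ sketch of ours (using threading facilities rather more modern than the article): a data race that loses increments, and the lock that removes it. The racy version is, strictly speaking, undefined behavior; that is the point.

    #include <iostream>
    #include <mutex>
    #include <thread>

    long counter = 0;
    std::mutex m;

    void add_unsafe() {
        for (int i = 0; i < 100000; ++i)
            ++counter;             // racy: load, add, store interleave
    }

    void add_safe() {
        for (int i = 0; i < 100000; ++i) {
            std::lock_guard<std::mutex> lock(m);
            ++counter;             // serialized: no lost updates
        }
    }

    int main() {
        std::thread t1(add_unsafe), t2(add_unsafe);
        t1.join(); t2.join();
        std::cout << "racy total (often < 200000):  " << counter << "\n";

        counter = 0;
        std::thread t3(add_safe), t4(add_safe);
        t3.join(); t4.join();
        std::cout << "locked total (always 200000): " << counter << "\n";
    }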

November 30, 2004

Topic: Concurrency

1 comment

Thread Scheduling in FreeBSD 5.2:
To help get a better handle on thread scheduling, we take a look at how FreeBSD 5.2 handles it.

A busy system makes thousands of scheduling decisions per second, so the speed with which scheduling decisions are made is critical to the performance of the system as a whole. This article - excerpted from the forthcoming book, “The Design and Implementation of the FreeBSD Operating System” - uses the example of the open source FreeBSD system to help us understand thread scheduling. The original FreeBSD scheduler was designed in the 1980s for large uniprocessor systems. Although it continues to work well in that environment today, the new ULE scheduler was designed specifically to optimize multiprocessor and multithread environments. This article first studies the original FreeBSD scheduler, then describes the new ULE scheduler.
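
The excerpt covers the real schedulers; as a bare-bones illustration (ours, greatly simplified), a priority scheduler scans run queues from the highest priority down and round-robins within a level:

    #include <deque>
    #include <iostream>
    #include <string>

    // Grossly simplified: run queues indexed by priority (0 = highest).
    const int NPRI = 4;
    std::deque<std::string> runq[NPRI];

    std::string pick_next(int& pri_out) {
        for (int pri = 0; pri < NPRI; ++pri)
            if (!runq[pri].empty()) {
                std::string t = runq[pri].front();
                runq[pri].pop_front();
                pri_out = pri;
                return t;
            }
        pri_out = -1;
        return "idle";
    }

    int main() {
        runq[1].push_back("interrupt thread");
        runq[3].push_back("batch job A");
        runq[3].push_back("batch job B");

        for (int slice = 0; slice < 5; ++slice) {
            int pri;
            std::string t = pick_next(pri);
            std::cout << "slice " << slice << ": " << t << "\n";
            // Timeshare threads are requeued; in this toy scenario the
            // interrupt thread blocks after one run.
            if (pri == 3) runq[pri].push_back(t);
        }
    }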

November 30, 2004

Topic: Open Source

0 comments

Integrating RFID:
Data management and inventory control are about to get a whole lot more interesting.

RFID (radio frequency identification) has received a great deal of attention in the commercial world over the past couple of years. The excitement stems from a confluence of events. First, through the efforts of the former Auto-ID Center and its sponsor companies, the prospects of low-cost RFID tags and a networked supply chain have come within reach of a number of companies. Second, several commercial companies and government bodies, such as Wal-Mart and Target in the United States, Tesco in Europe, and the U.S. Department of Defense, have announced RFID initiatives in response to technology improvements.

November 30, 2004

Topic: Hardware

0 comments

The Magic of RFID:
Just how do those little things work anyway?

Many modern technologies give the impression they work by magic, particularly when they operate automatically and their mechanisms are invisible. A technology called RFID (radio frequency identification), which is relatively new to the mass market, has exactly this characteristic and for many people seems a lot like magic. RFID is an electronic tagging technology that allows an object, place, or person to be automatically identified at a distance without a direct line-of-sight, using an electromagnetic challenge/response exchange.
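
That closing exchange can be sketched in a few lines. A toy model of ours: real tags use proper ciphers, and many cheap tags skip authentication entirely.

    #include <cstdint>
    #include <cstdio>

    // Toy keyed response function standing in for a real cipher or MAC.
    uint32_t respond(uint32_t challenge, uint32_t secret_key) {
        uint32_t x = challenge ^ secret_key;
        x ^= x >> 16;
        x *= 0x45d9f3b;   // mixing constant; illustrative only
        x ^= x >> 16;
        return x;
    }

    int main() {
        const uint32_t key = 0xCAFEBABE;   // shared by reader and tag

        // Reader: send a fresh random challenge (fixed here for brevity).
        uint32_t challenge = 0x12345678;

        // Tag: prove knowledge of the key without ever transmitting it.
        uint32_t tag_answer = respond(challenge, key);

        // Reader: recompute and compare.
        bool genuine = (tag_answer == respond(challenge, key));
        std::printf("tag accepted: %s\n", genuine ? "yes" : "no");
    }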

November 30, 2004

Topic: Hardware

1 comment

The Burning Bag of Dung and Other Environmental Antipatterns:
And you think you have problems?

In my youth a favorite prank of the local delinquents was to place a paper bag full of doggy doo on a neighbor’s porch, light it on fire, ring the doorbell, and then flee. The home-owner, upon answering the door, had no choice but to stomp out the incendiary feces, getting their shoes dirty in the process. Why this scatological anecdote? Because it is a metaphor for work situations in which things have gotten so bad that the only way to “put the fire out” is to step into it. I call this the “burning bag of dung” antipattern.

November 30, 2004

Topic: Development

0 comments

Calendar:
4-Oct

WebSphere Technical Exchange

November 30, 2004

0 comments

A Conversation with Mike Deliman:
And you think your operating system needs to be reliable.

Mike Deliman was pretty busy last January when the Mars rover Spirit developed memory and communications problems shortly after landing on the Red Planet. He is a member of the team at Wind River Systems who created the operating system at the heart of the Mars rovers, and he was among those working nearly around the clock to discover and solve the problem that had mysteriously halted the mission on Mars.

November 30, 2004

Topic: Purpose-built Systems

0 comments

There’s Still Some Life Left in Ada:
When it comes to survival of the fittest, Ada ain’t no dinosaur.

Ada remains the Rodney Dangerfield of computer programming languages, getting little respect despite a solid technical rationale for its existence. Originally pressed into service by the U.S. Department of Defense in the late 1970s, these days Ada is just considered a remnant of bloated military engineering practices.

November 30, 2004

Topic: Programming Languages

0 comments

Electronic Voting Systems: the Good, the Bad, and the Stupid:
Is it true that politics and technology don’t mix?

As a result of the Florida 2000 election fiasco, some people concluded that paper ballots simply couldn’t be counted. Instead, paperless computerized voting systems were touted as the solution to “the Florida problem.” Replacing hanging chads with 21st century technology, proponents claimed, would result in accurate election counts and machines that were virtually impossible to rig. Furthermore, with nothing to hand-count and no drawn-out recounts to worry about, computerized voting systems were expected to enable the reporting of results shortly after the polls had closed.

November 30, 2004

Topic: HCI

2 comments

Kode Vicious to the Rescue:
A koder with attitude, KV answers your questions. Miss Manners he ain’t.

Dear Kode Vicious, Where I work we use a mixture of C++ code, Python, and shell scripts in our product. I always have a hard time trying to figure out when it’s appropriate to use which for a certain job. Do you code in only assembler and C, or is this a problem for you as well?

November 30, 2004

Topic: Programming Languages

2 comments

What’s on Your Hard Drive?:
What’s on Your Hard Drive?

Most of my work is prototyping. Perl allows me to quickly put together a prototype of a sophisticated system and pass the design on to someone else to refine. I can quickly accomplish a lot in a few lines of code.

November 30, 2004

0 comments

News 2.0:
Taking a second look at the news so you don’t have to.

Ever notice how many people walk around carrying something on their backs? From bows and arrows and bales of hay to tripods and water bags, it’s the best way to go about your business while making a quiet statement about who you are. As it’s the computing age, why not sling a roll-up monitor or TV over your shoulder when you leave home for the day? After all, it makes quite a different statement than lugging a no-skid yoga mat everywhere, and it’s a lot more useful to the technorati.

November 30, 2004

0 comments

Letters:
Being German, I love the unsurpassed combinatorial possibilities of our language.

Being German, I love the unsurpassed combinatorial possibilities of our language. Stan Kelly-Bootle’s “From this Moment On” (June 2004), however, contains a construction, “Entunternehhmangenausstreckenisierung,” that defies my linguistic capabilities. Ent... what?

November 30, 2004

0 comments

RFID Isn’t Science Fiction:
Is RFID going to wreak its havoc on your systems?

For those of you who read my editor’s note last month, you’ll recall I’m throwing this space open for comment: What should the editor’s note page be used for? Orienting you to the issue you’re about to read, highlighting some of the sights to see? Should I give some context as to how our Editorial Advisory Board grappled with the topic of the month’s special report? Or should we do away with the editor’s note altogether?

November 30, 2004

0 comments

Vote Early, Vote Often:
An e-vote by any other name?

I usually shun clichés like the plague, but could not resist this oft-quoted slogan that sums up what I like to call Psephological Cynicism. Psephology is the huge and growing branch of mathematics (with frequent distractions from sociologists, psychologists, political scientists, and allied layabouts) that studies the structure and effectiveness of polling and electoral strategies. Related domains include probability and game theory, although, as we’ll see, the subject has many far-from-playful implications.

October 25, 2004

Topic: Security

2 comments

Calendar:
4-Sep

Intel Developer Forum. September 7-9, 2004

October 25, 2004

0 comments

Book Reviews: Quantum Computing (Natural Computing Series), 2nd ed.:
Quantum Computing (Natural Computing Series), 2nd ed. Mika Hirvensalo. Springer Verlag, 2004, $54.95, ISBN: 3-540-40704-9

A handful of good introductions to ideas in quantum computing—a new, multidisciplinary research area crossing quantum mechanics, theoretical computer science, and mathematics—have appeared in the past few years. This introduction stands out, in being friendly and brief. It provides one of the first overviews of, and introductions to, this nonstandard form of computation from the mathematical and computer science viewpoint. The field of quantum computing promises to solve some complex problems through a massive use of parallelism’s power, achieved via the properties of quantum physics—in particular, the superposition of bits, known as quantum bits, or qubits.
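
For readers new to the notation, the superposition the review alludes to is conventionally written as follows; this is standard textbook material, not a formula taken from the book itself:

\[ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1 \]

A register of n qubits carries amplitudes across all 2^n basis states at once, which is where the field’s promise of massive parallelism originates.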

October 25, 2004

0 comments

What’s on Your Hard Drive?:
What’s on Your Hard Drive?

If you have visited the Queue Web site recently, you will have noticed an invitation to tell us about tools that you use—how they make your life wonderful or how they make your life a living hell. Every month the editors will carefully select four of these submissions from the millions received. If you’re one of the chosen, you’ll receive a complimentary (and oh so very flattering) Queue t-shirt, the Holy Grail of the software development industry.

October 25, 2004

0 comments

Longhorn Ties Platform Apps to Core Operating System:
Will Microsoft’s New OS Be a Developer’s Dream-Come-True?

Call it all-tools-for-all-things-Microsoft. That’s the way I’m beginning to think about Longhorn, Microsoft’s next-generation operating system, expected in 2006. The operating system will herald more than just the transition of mainstream computing to 64 bits. It will mark the emergence of a new, multifaceted programming model. Breaking away from traditional computer science taxonomy, where operating systems stood barely removed from the hardware abstraction layer, Longhorn will be more like a multi-tentacled container for a wide range of administration and application tools.

October 25, 2004

Topic: System Evolution

0 comments

Schizoid Classes:
Of class, type, and method

Smalltalk pays a high price elsewhere for taking object orientation to the extreme, notably in complete loss of static typing and serious runtime efficiency penalties. Special, one-instance forms of classes are, for many programming problems, not as good a conceptual match as modules. But at least it provides a single, consistent, and syntactically explicit call mechanism.

October 25, 2004

Topic: Programming Languages

1 comments

News 2.0:
Taking a second look at the news so you don’t have to.

This is rather like sending a smoke signal, but without all the fuss. By fall, for example, you’ll be able to upgrade your Nokia 3220 with the Xpress-on Fun Shell. To airtext, you simply type out a brief message (limited to 15 characters) and wave your cellphone in the air. A motion sensor activates a row of LEDs on the back of the phone, flashing your message into interpersonal space.

October 25, 2004

0 comments

Letters:
I was pleased to read Rodney Bates’ “Buffer Overrun Madness”

The choice of programming language has a significant effect on the vulnerability of networking software. In addition to buffer overruns, the single largest category of vulnerabilities, there are also large categories of vulnerabilities related to signed integers that silently overflow and to spawning new processes.
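
The signed-integer category is easy to demonstrate. Below is a minimal C sketch (the values are arbitrary, and signed overflow is formally undefined behavior in C):

#include <limits.h>
#include <stdio.h>

int main(void) {
    int len = INT_MAX;      /* e.g., an attacker-supplied length */
    int padded = len + 1;   /* silently overflows: undefined behavior */
    if (padded > 0)
        printf("sanity check passed\n");
    else
        printf("len + 1 = %d (wrapped negative)\n", padded);
    return 0;
}

On common implementations the sum wraps negative, so a bounds check such as if (len + 1 > limit) never fires, and the code proceeds with a length it believes is small.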

October 25, 2004

0 comments

From the Editors: Calling All Readers:
Queue reader, I need your help. Rise ye up, take hold of thy mighty pen (or keyboard), and assist me.

OK, kidding aside, we’ve got a great VoIP (voice over IP) special report this month, and typically I’d use this space to tell you a little bit about how we came to the topic, how Lucy Sanders (a former R&D VP at VoIP powerhouse Avaya) helped us identify the key areas - and authors - in this industry. Instead, however, I’ve decided to ask myself, and you (after all, there are more than 30,000 of you), if anyone really cares about all that.

October 25, 2004

0 comments

A Conversation with Donald Peterson:
What will the coming revolution merging voice and data communications with business applications bring?

That light we see at the end of the tunnel is the convergence of voice and data communications with business applications. As chairman and chief executive officer of Avaya, Donald Peterson is in a position to help make that convergence happen sooner rather than later. Peterson has been with Avaya since it was spun off from Lucent in 2000. Prior to that he was chief financial officer of AT&T’s Communication Services Group and Lucent.

October 25, 2004

Topic: VoIP

0 comments

A Time and a Place for Standards:
History shows how abuses of the standards process have impeded progress.

Over the next decade, we will encounter at least three major opportunities where success will hinge largely on our ability to define appropriate standards. That’s because intelligently crafted standards that surface at just the right time can do much to nurture nascent industries and encourage product development simply by creating a trusted and reliable basis for interoperability. From where I stand, the three specific areas I see as particularly promising are: (1) all telecommunications and computing capabilities that work together to facilitate collaborative work; (2) hybrid computing/home entertainment products providing for the online distribution of audio and/or video content; and (3) wireless sensor and network platforms (the sort that some hope the 802.15.4 and ZigBee Alliance standards will ultimately enable).

October 25, 2004

Topic: VoIP

0 comments

VoIP Security: Not an Afterthought:
DDOS takes on a whole new meaning.

Voice over IP (VoIP) promises to up-end a century-old model of voice telephony by breaking the traditional monolithic service model of the public switched telephone network (PSTN) and changing the point of control and provision from the central office switch to the end user’s device.

October 25, 2004

Topic: VoIP

0 comments

VoIP: What is it Good for?:
If you think VoIP is just an IP version of telecom-as-usual, think again. A host of applications are changing the phone call as we know it.

VoIP (voice over IP) technology is a rapidly expanding field. More and more VoIP components are being developed, while existing VoIP technology is being deployed at a rapid and still increasing pace. This growth is fueled by two goals: decreasing costs and increasing revenues.

October 25, 2004

Topic: VoIP

0 comments

Not Your Father’s PBX?:
Integrating VoIP into the enterprise could mean the end of telecom business-as-usual.

Perhaps no piece of office equipment is more taken for granted than the common business telephone. The technology behind this basic communication device, however, is in the midst of a major transformation. Businesses are now converging their voice and data networks in order to simplify their network operations and take advantage of the new functional benefits and capabilities that a converged network delivers, from greater productivity and cost savings to enhanced mobility.

October 25, 2004

Topic: VoIP

0 comments

You Don’t Know Jack About VoIP:
The Communications they are a-changin’.

Telecommunications worldwide has experienced a significant revolution over recent years. The long-held promise of network convergence is occurring at an increasing pace. This convergence of data, voice, and video using IP-based networks is delivering advanced services at lower cost across the spectrum, including residential users, business customers of varying sizes, and service providers.

October 25, 2004

Topic: VoIP

0 comments

Without a NULL That String Would Never End:
N-streak, 1-streak, worra streak

It’s an undiluted pleasure to be invited to contribute a third column for ’ACM Queue’ under the surly rubric “Curmudgeon.” Curmudgeons are not usually associated with pleasures, diluted or full strength, but at my age the cheap thrill of thrusting a poisoned pen is especially welcome since the targets for satire bob daily as upstart sitting ducks for the roasting: mere “Juvenal delinquents,” as master curmudgeon George Crabbe [sic] called them.

August 31, 2004

Topic: Code

0 comments

Calendar:
4-Jul

Linux Kernel Developers Summit. July 19-20, 2004. Ottawa, Ontario, Canada

August 31, 2004

0 comments

Book Reviews: Critical Testing Process: Plan, Prepare, Perform, Perfect:
Rex Black Addison-Wesley Professional, 2003, $49.99, ISBN: 0-201-74868-1

In this book, Black succeeds in transforming his experience with testing realities—and different testing approaches—into a true reference for anybody who has ever had to plan testing activities in a software organization. The author provides an excellent guideline for determining what test processes might fit best into an existing organization. To his credit, Black does not propose the optimal testing approach, but rather acknowledges many different approaches and provides guidance on how to find the process that will best suit an individual product.

August 31, 2004

0 comments

A Conversation with James Gosling:
James Gosling talks about virtual machines, security, and of course, Java.

As a teenager, James Gosling came up with an idea for a little interpreter to solve a problem in a data analysis project he was working on at the time. Through the years, as a grad student and at Sun as creator of Java and the Java Virtual Machine, he has used several variations on that solution. “I came up with one answer once, and I have just been repeating it over and over again for a frightening number of years,” he says.

August 31, 2004

Topic: Virtual Machines

0 comments

What’s on Your Hard Drive?:
Eclipse

Eclipse. Being open source is one thing, and being of high quality is another. Eclipse meets both criteria and provides an excellent platform for developing applications in any language of your choosing. There are hundreds of high-quality plug-ins available for Eclipse, which makes the deal even sweeter. And you know what is best about it: it’s completely free.

August 31, 2004

0 comments

Toolkit: Samba Does Windows-to-Linux Dance:
Mounting remote Linux drives under Windows is easier than you think.

With heterogeneous networked environments becoming the rule rather than the exception, there’s more need than ever for Windows and Linux to work and play well together. Enter Samba, the print- and file-sharing tool that enables files residing on Linux hosts to interact with Windows-based desktops.

August 31, 2004

Topic: Virtual Machines

0 comments

Opinion: For Want of a Comma, the Meaning Was Lost:
What does punctuation have to do with software development?

Odder things have happened in publishing, but not by much. The bestseller list on Amazon (as I write this) is topped by Lynne Truss’s book on... punctuation. It’s outselling Harry Potter. Starting from the title, ’Eats, Shoots and Leaves’ (Gotham Books, 2004), which changes meaning drastically when the comma is omitted, the book is an entertaining romp through the advantages of writing correctly and clearly with the proper use of punctuation.

August 31, 2004

Topic: Development

0 comments

News 2.0:
Taking a second look at the news so you don’t have to.

For many European governments, paying for licenses and ongoing upgrades to proprietary software seems to be annoying and prohibitive. The city of Munich, the French Ministry of the Interior, and Hungary’s Ministry of Education, for example, all have begun using products that support Linux.

August 31, 2004

0 comments

Letters:
Unruly languages such as C and its descendants require increasingly disciplined programmers.

Rodney Bates’ excellent article, “Buffer Overrun Madness” (May 2004), grows ever more timely. Unruly languages such as C and its descendants require increasingly disciplined programmers.

August 31, 2004

0 comments

From the Editors: Virtually Yours:
Titling this month’s editor’s note “Virtually Yours” was irresistible

It’s one of those names that’s so bad you just have to use it. It reminds me of the hair salons you find in small towns. And believe it or not, hairstyling is not unrelated to the topic of this month’s special report, virtual machines: they’re both subject to the whims of fashion.

August 31, 2004

0 comments

Leveraging Application Frameworks:
Why frameworks are important and how to apply them effectively

In today’s competitive, fast-paced computing industry, successful software must increasingly be: (1) extensible to support successions of quick updates and additions to address new requirements and take advantage of emerging markets; (2) flexible to support a growing range of multimedia data types, traffic flows, and end-to-end QoS (quality of service) requirements; (3) portable to reduce the effort required to support applications on heterogeneous operating-system platforms and compilers; (4) reliable to ensure that applications are robust and tolerant to faults; (5) scalable to enable applications to handle larger numbers of clients simultaneously; and (6) affordable to ensure that the total ownership costs of software acquisition and evolution are not prohibitively high.

August 31, 2004

Topic: Component Technologies

0 comments

Security is Harder than You Think:
It’s not just about the buffer overflow.

Many developers see buffer overflows as the biggest security threat to software and believe that there is a simple two-step process to secure software: switch from C or C++ to Java, then start using SSL (Secure Sockets Layer) to protect data communications. It turns out that this naïve tactic isn’t sufficient. In this article, we explore why software security is harder than people expect, focusing on the example of SSL.

August 31, 2004

Topic: Security

0 comments

Simulators: Virtual Machines of the Past (and Future):
Has the time come to kiss that old iron goodbye?

Simulators are a form of “virtual machine” intended to address a simple problem: the absence of real hardware. Simulators for past systems address the loss of real hardware and preserve the usability of software after real hardware has vanished. Simulators for future systems address the variability of future hardware designs and facilitate the development of software before real hardware exists.
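
At its core, any such simulator is a fetch-decode-execute loop over an instruction set that no longer, or does not yet, exist in silicon. The toy four-instruction machine below is invented purely to illustrate the shape of that loop:

#include <stdio.h>

enum { HALT, LOADI, ADD, PRINT };   /* a hypothetical four-op ISA */

int main(void) {
    /* program: acc = 2; acc += 40; print acc; halt */
    int mem[] = { LOADI, 2, ADD, 40, PRINT, HALT };
    int pc = 0, acc = 0, running = 1;

    while (running) {
        int op = mem[pc++];         /* fetch */
        switch (op) {               /* decode and execute */
        case LOADI: acc  = mem[pc++]; break;
        case ADD:   acc += mem[pc++]; break;
        case PRINT: printf("%d\n", acc); break;
        case HALT:  running = 0; break;
        }
    }
    return 0;
}

Real simulators add device models, timing, and I/O, but the skeleton is the same whether the hardware being simulated is long gone or not yet built.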

August 31, 2004

Topic: Virtual Machines

1 comments

Building Systems to Be Shared, Securely:
Want to securely partition VMs? One option is to put ’em in Jail.

The history of computing has been characterized by continuous transformation resulting from the dramatic increases in performance and drops in price described by Moore’s law. Computing power has migrated from centralized mainframes/servers to distributed systems and the commodity desktop. Despite these changes, system sharing remains an important tool for computing. From the multitasking, file-sharing, and virtual machines of the desktop environment to the large-scale sharing of server-class ISP hardware in collocation centers, safely sharing hardware between mutually untrusting parties requires addressing critical concerns of accidental and malicious damage.
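
The Jail of the subtitle is FreeBSD’s partitioning mechanism, which extends the venerable chroot(2) filesystem confinement with process, network, and privilege restrictions. As a rough sketch of the primitive Jail builds on (not of Jail itself; the path is hypothetical and the program must run as root):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    /* Confine this process's view of the filesystem to a subtree. */
    if (chroot("/var/jail/demo") != 0) {   /* hypothetical jail root */
        perror("chroot");
        return EXIT_FAILURE;
    }
    if (chdir("/") != 0) {   /* move inside so no open path escapes */
        perror("chdir");
        return EXIT_FAILURE;
    }
    /* From here on, "/" means /var/jail/demo for this process. */
    execl("/bin/sh", "sh", (char *)NULL);  /* run a confined shell */
    perror("execl");
    return EXIT_FAILURE;
}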

August 31, 2004

Topic: Virtual Machines

2 comments

The Reincarnation of Virtual Machines:
Virtualization makes a comeback.

The term virtual machine initially described a 1960s operating system concept: a software abstraction with the looks of a computer system’s hardware (real machine). Forty years later, the term encompasses a large range of abstractions—for example, Java virtual machines that don’t match an existing real machine. Despite the variations, in all definitions the virtual machine is a target for a programmer or compilation system. In other words, software is written to run on the virtual machine.

August 31, 2004

Topic: Virtual Machines

1 comments

From This Moment On:
Divining the future of computers with computers

Science fiction seems to have spawned two divergent subgenres. One, which is out of favor, paints a bright future for us, assuming an optimistic, Darwinian "perfectability." These scenarios project an ever-expanding (or rather, a never-imploding) cosmos with ample time for utopian evolutions.

August 31, 2004

Topic: Development

0 comments

Calendar:
4-Jun

ECOOP (European Conference on Object-Oriented Programming). June 14-18, 2004

August 31, 2004

0 comments

Book Reviews: Molecular Computing:
Edited by Tanya Sienko, Andrew Adamatzky, Nicholas G. Rambidi, and Michael Conrad. MIT Press, 2003, $45.00, ISBN: 0-262-19487-2

Edited by Tanya Sienko, Andrew Adamatzky, Nicholas G. Rambidi, and Michael Conrad. MIT Press, 2003, $45.00, ISBN: 0-262-19487-2

August 31, 2004

0 comments

A Conversation with Brewster Kahle:
Creating a library of Alexandria for the digital age

Stu Feldman, Queue board member and vice president of Internet technology for IBM, interviews the chief executive officer of the nonprofit Internet Archive.

August 31, 2004

Topic: Web Services

0 comments

Grid Tools: Coming to a Cluster Near You:
Hot scientific tools trickle down to support mainstream IT tasks.

A set of surprisingly mainstream software tools has come out of an unlikely source—a scientifically focused collective called the Gelato Federation. Formally launched in March 2002, the group seeks to apply open source Linux software running on Intel’s advanced Itanium processor as an enabling technology toward the goal of putting together large, highly scalable clusters of 64-bit systems. The Gelato group believes such scalability is the most significant trend in high-performance computing in the last 10 to 15 years. It marks a potent—and much cheaper—alternative to the Cray supercomputers that populated university labs in the 1980s and early 1990s.

August 31, 2004

Topic: Tools

0 comments

First, Do No Harm: A Hippocratic Oath for Software Developers?:
What’s wrong with taking our profession a little more seriously?

When asked about the Hippocratic Oath, most people are likely to recall the phrase, “First, do no harm.” It’s a logical response, as even those unfamiliar with the oath could figure out that avoiding additional injury in the course of treatment is critical. In fact, it’s natural to strive in any endeavor not to break something further in the course of repair. In software engineering, as in medicine, doing no harm starts with a deep understanding of the tools and techniques available. Using this theme and some medical metaphors, I offer some observations on the practice of software engineering. Whatever we do, first, do no harm.

August 31, 2004

Topic: Development

2 comments

News 2.0:
Taking a second look at the news so you don’t have to.

The forum members are hopeful that NFC, a wireless data transfer technology based on RFID, will one day enable common devices such as cellphones, cameras, PDAs, and even charge cards to conveniently “communicate” with each other by touch.

August 31, 2004

0 comments

Letters:
I’ve read three issues of ACM Queue so far and I must say I’ve really enjoyed them.

The distributed development issue (ACM Queue 1(9), December/January 2003-2004) covered topics that will become increasingly important to developers—for instance, to those of us at Wal-Mart offices worldwide—as we deal with different time zones.

August 31, 2004

0 comments

From the Editors: The New Screen of Death:
Is security a problem that just can’t be solved?

In the olden days (say, all the way back in 1995), the popular complaint about computers was that they crashed too often. And while stability remains a problem in which perhaps there’s still progress to be made, the blue screen of death has been eclipsed by the new screen of death: Security.

August 31, 2004

0 comments

The Hitchhiker’s Guide to Biomorphic Software:
The natural world may be the inspiration we need for solving our computer problems.

The natural world may be the inspiration we need for solving our computer problems. While it is certainly true that "the map is not the territory," most visitors to a foreign country do prefer to take with them at least a guidebook to help locate themselves as they begin their explorations. That is the intent of this article. Although there will not be enough time to visit all the major tourist sites, with a little effort and using the information in the article as signposts, the intrepid explorer can easily find numerous other, interesting paths to explore.

August 31, 2004

Topic: Bioscience

0 comments

The Insider, Naivety, and Hostility: Security Perfect Storm?:
Keeping nasties out is only half the battle.

Every year corporations and government installations spend millions of dollars fortifying their network infrastructures. Firewalls, intrusion detection systems, and antivirus products stand guard at network boundaries, and individuals monitor countless logs and sensors for even the subtlest hints of network penetration. Vendors and IT managers have focused on keeping the wily hacker outside the network perimeter, but very few technological measures exist to guard against insiders - those entities that operate inside the fortified network boundary. The 2002 CSI/FBI survey estimates that 70 percent of successful attacks come from the inside. Several other estimates place those numbers even higher.

August 31, 2004

Topic: Security

0 comments

Network Forensics:
Good detective work means paying attention before, during, and after the attack.

The dictionary defines forensics as “the use of science and technology to investigate and establish facts in criminal or civil courts of law.” I am more interested, however, in the usage common in the computer world: using evidence remaining after an attack on a computer to determine how the attack was carried out and what the attacker did. The standard approach to forensics is to see what can be retrieved after an attack has been made, but this leaves a lot to be desired. The first and most obvious problem is that successful attackers often go to great lengths to ensure that they cover their trails.

August 31, 2004

Topic: Web Security

0 comments

Security: The Root of the Problem:
Why is it we can’t seem to produce secure, high-quality code?

Security bug? My programming language made me do it! It doesn’t seem that a day goes by without someone announcing a critical flaw in some crucial piece of software or other. Is software that bad? Are programmers so inept? What the heck is going on, and why is the problem getting worse instead of better?

August 31, 2004

Topic: Security

0 comments

Blaster Revisited:
A second look at the cost of Blaster sheds new light on today’s blended threats.

What lessons can we learn from the carnage the Blaster worm created? The following tale is based upon actual circumstances from corporate enterprises that were faced with confronting and eradicating the Blaster worm, which hit in August 2003. The story provides views from many perspectives, illustrating the complexity and sophistication needed to combat new blended threats.

August 31, 2004

Topic: Web Security

0 comments

A Bigot by Any Other Name...:
Are you an Open Source Bigot?

I’ve been responsible for hiring many software engineers. I tend to ask lots of elaborate technical questions so I can really get to know how the candidate thinks and works with me while solving hard problems. QA engineers will appreciate this one (it’s a “negative test” for intellectual honesty): “Explain the relative strengths and weaknesses of FreeBSD, Windows NT, Solaris, and Linux.”

June 14, 2004

Topic: Open Source

1 comments

Calendar:
4-May

WWW (The 2004 International World Wide Web Conference). May 17-22, 2004

June 14, 2004

0 comments

Book Reviews: RFID Handbook: Fundamentals and Applications in Contactless Smart Cards and Identification, 2nd ed.:
Klaus Finkenzeller, John Wiley & Sons, 2003, $125, ISBN: 0-470-84402-7

This handbook offers reasonably complete coverage of the fundamentals and applications in contactless smart cards and identification.

June 14, 2004

0 comments

A Conversation with Sam Leffler:
A Unix and BSD pioneer discusses the open source movement.

The seeds of Unix and open source were sown in the 1970s, and Sam Leffler was right in there doing some of the heaviest cultivating. He has been actively working with Unix since 1976 when he first encountered it at Case Western Reserve University, and he has been involved with what people now think of as open source, as he says, “long before it was even termed open source.” While working for the Computer Systems Research Group (CSRG) at the University of California at Berkeley, he helped with the 4.1BSD release and was responsible for the release of 4.2BSD.

June 14, 2004

Topic: Open Source

0 comments

Opinion: Buffer Overrun Madness:
Why do good programmers follow bad practices?

In January 2003, the Slammer worm was reported to be the fastest spreading ever. Slammer gets access by exploiting a buffer overrun. If you peruse CERT advisories or security upgrade releases, you will see that the majority of computer security holes are buffer overruns. These would be minor irritations but for the world’s addiction to the weakly typed programming languages C and its derivative C++. Buffer overruns are a kind of array bounds error. There are many variations on how one might actually happen, but here is a typical scenario. Function F calls function G, and G returns a string. F allocates a buffer to hold the result and passes a pointer to its zeroth character.
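
The scenario reads more concretely in code. Here is a minimal C sketch of the overrun just described; the names F and G follow the article’s text, while the buffer size and the string are invented for illustration:

#include <stdio.h>
#include <string.h>

/* G "returns" a string by writing through a caller-supplied pointer;
   it has no idea how large the caller's buffer actually is. */
static void G(char *out) {
    strcpy(out, "a result far longer than the caller expected");
}

/* F allocates a buffer and passes a pointer to its zeroth character. */
static void F(void) {
    char buf[16];
    G(buf);              /* writes past buf[15]: the buffer overrun */
    printf("%s\n", buf);
}

int main(void) { F(); return 0; }

Because strcpy copies until it finds the terminating NUL, nothing stops G at the sixteenth byte; the excess lands in whatever memory follows buf, which is the class of hole Slammer exploited.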

June 14, 2004

Topic: Development

0 comments

News 2.0:
Taking a second look at the news so you don’t have to.

MIT Media Lab and nonprofit organization Friendly Planet recently produced what is officially the world’s largest book, Bhutan: A Visual Odyssey Across the Kingdom, as part of an effort to raise education funds for that Himalayan kingdom.

June 14, 2004

0 comments

Letters:
Eric Allman, the Curmudgeon author of The Economics of Spam has it just about right regarding why we are deluged in spam.

Eric Allman, the Curmudgeon author of “The Economics of Spam” (ACM Queue 1(9), December/January 2003-2004), has it just about right regarding why we are deluged in spam: it costs no more for a sender to put out a million messages than one. Given recent statistics (spam allegedly passed 50 percent of all e-mail traffic some time ago), I suspect that the time isn’t too far off before some of the “sender pays” approaches start to happen.

June 14, 2004

0 comments

From the Editors: Open Source Revisited:
Its influence just keeps on growing.

In our first open source theme issue last year (ACM Queue 1(5), July-August 2003), we focused on business issues such as using open source software as a basis for a commercial product. We knew that this was an important topic, but predicted that many of our readers might find it boring. We were wrong. That issue remains among the most responded-to issues of Queue to date. So with that response, we are revisiting the open source theme.

June 14, 2004

0 comments

From IR to Search, and Beyond:
Searching has come a long way since the 60s, but have we only just begun?

It’s been nearly 60 years since Vannevar Bush’s seminal article, ’As We May Think,’ portrayed the image of a scholar aided by a machine, “a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility.”

June 14, 2004

Topic: Search Engines

0 comments

TCP Offload to the Rescue:
Getting a toehold on TCP offload engines—and why we need them

In recent years, TCP/IP offload engines, known as TOEs, have attracted a good deal of industry attention and a sizable share of venture capital dollars. A TOE is a specialized network device that implements a significant portion of the TCP/IP protocol in hardware, thereby offloading TCP/IP processing from software running on a general-purpose CPU. This article examines the reasons behind the interest in TOEs and looks at challenges involved in their implementation and deployment.

June 14, 2004

Topic: Networks

1 comments

Desktop Linux: Where Art Thou?:
Catching up, meeting new challenges, moving ahead

Linux on the desktop has come a long way - and it’s been a roller-coaster ride. At the height of the dot-com boom, around the time of Red Hat’s initial public offering, people expected Linux to take off on the desktop in short order. A few years later, after the stock market crash and the failure of a couple of high-profile Linux companies, pundits were quick to proclaim the stillborn death of Linux on the desktop.

June 14, 2004

Topic: Open Source

0 comments

There’s No Such Thing as a Free (Software) Lunch:
What every developer should know about open source licensing

The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software to make sure the software is free for all its users. So begins the GNU General Public License, or GPL, which has become the most widely used of open source software licenses. Freedom is the watchword; it’s no coincidence that the organization that wrote the GPL is called the Free Software Foundation and that open source developers everywhere proclaim, “Information wants to be free.”

June 14, 2004

Topic: Open Source

1 comments

Is Open Source Right for You?:
A fictional case study of open source in a commercial software shop

The media often present open source software as a direct competitor to commercial software. This depiction, usually pitting David (Linux) against Goliath (Microsoft), makes for fun reading in the weekend paper. However, it mostly misses the point of what open source means to a development organization. In this article, I use the experiences of GizmoSoft (a fictitious software company) to present some perspectives on the impact of open source software usage in a software development shop.

June 14, 2004

Topic: Open Source

0 comments

Open Source to the Core:
Using open source in real-world software products: The good, the bad and the ugly

The open source development model is not exactly new. Individual engineers have been using open source as a collaborative development methodology for decades. Now that it has come to the attention of upper and middle management, however, it’s finally being openly acknowledged as a commercial engineering force-multiplier and important option for avoiding significant software development costs.

June 14, 2004

Topic: Open Source

0 comments

Instant Messaging or Instant Headache?:
IM has found a home within the enterprise, but it’s far from secure.

It’s a reality. You have IM (instant messaging) clients in your environment. You have already recognized that it is eating up more and more of your network bandwidth and with Microsoft building IM capability into its XP operating system and applications, you know this will only get worse. Management is also voicing concerns over the lost user productivity caused by personal conversations over this medium. You have tried blocking these conduits for conversation, but it is a constant battle.

May 5, 2004

Topic: Email and IM

0 comments

Gaming Graphics: The Road to Revolution:
From laggard to leader, game graphics are taking us in new directions.

It has been a long journey from the days of multicolored sprites on tiled block backgrounds to the immersive 3D environments of modern games. What used to be a job for a single game creator is now a multifaceted production involving staff from every creative discipline. The next generation of console and home computer hardware is going to bring a revolutionary leap in available computing power; a teraflop (trillion floating-point operations per second) or more will be on tap from commodity hardware.

May 5, 2004

Topic: Game Development

0 comments

Building Nutch: Open Source Search:
A case study in writing an open source search engine

Search engines are as critical to Internet use as any other part of the network infrastructure, but they differ from other components in two important ways. First, their internal workings are secret, unlike, say, the workings of the DNS (domain name system). Second, they hold political and cultural power, as users increasingly rely on them to navigate online content.

May 5, 2004

Topic: Search Engines

1 comments

Why Writing Your Own Search Engine Is Hard:
Big or small, proprietary or open source, Web or intranet, it’s a tough job.

There must be 4,000 programmers typing away in their basements trying to build the next “world’s most scalable” search engine. It has been done only a few times. It has never been done by a big group; always one to four people did the core work, and the big team came on to build the elaborations and the production infrastructure. Why is it so hard? We are going to delve a bit into the various issues to consider when writing a search engine. This article is aimed at those individuals or small groups that are considering this endeavor for their Web site or intranet.

May 5, 2004

Topic: Search Engines

22 comments

Enterprise Search: Tough Stuff:
Why is it that searching an intranet is so much harder than searching the Web?

The last decade has witnessed the growth of information retrieval from a boutique discipline in information and library science to an everyday experience for billions of people around the world. This revolution has been driven in large measure by the Internet, with vendors focused on search and navigation of Web resources and Web content management. Simultaneously, enterprises have invested in networking all of their information together to the point where it is increasingly possible for employees to have a single window into the enterprise.

May 5, 2004

Topic: Search Engines

0 comments

Searching vs. Finding:
Why systems need knowledge to find what you really want

Finding information and organizing it so that it can be found are two key aspects of any company’s knowledge management strategy. Nearly everyone is familiar with the experience of searching with a Web search engine and using a search interface to search a particular Web site once you get there. (You may have even noticed that the latter often doesn’t work as well as the former.) After you have a list of hits, you typically spend a significant amount of time following links, waiting for pages to download, reading through a page to see if it has what you want, deciding that it doesn’t, backing up to try another link, deciding to try another way to phrase your request, et cetera.

May 5, 2004

Topic: Search Engines

0 comments

Web Search Considered Harmful:
The top five reasons why search is still way too hard

Nowadays, when you find yourself utterly disgusted by “American Idol,” or any other of the latest “reality” shows on TV, you may decide, “What the heck, time to seek a slightly less horrible form of punishment: let’s get on the Web.”

May 5, 2004

Topic: Search Engines

0 comments

Calendar:
4-Apr

Real World Linux

May 5, 2004

0 comments

Book Reviews: Extreme Programming Refactored: The Case Against XP:
Matt Stephens and Doug Rosenberg. Apress, 2003, $39.99, ISBN: 1-590-59096-1

Extreme Programming Refactored: The Case Against XP. Matt Stephens and Doug Rosenberg. Apress, 2003, $39.99, ISBN: 1-590-59096-1

May 5, 2004

0 comments

A Conversation with Matt Wells:
When it comes to competing in the search engine arena, is bigger always better?

Search is a small but intensely competitive segment of the industry, dominated for the past few years by Google. But Google’s position as king of the hill is not insurmountable, says Gigablast’s Matt Wells, and he intends to take his product to the top.

May 5, 2004

Topic: Search Engines

1 comments

Intel Is Stealth Source of Heavy-Duty Software Tools:
As we shift from a 32-bit world to a 64-bit paradigm, the right development tools matter—big time.

In the PC and server worlds, the engineering battle for computer performance has often focused on the hardware advances Intel brings to its microprocessors.

May 5, 2004

Topic: Tools

0 comments

Opinion: Voting Machine Hell:
Garbage in, garbage out—it’s that simple.

There has been much commentary from computer scientists on the advantages and disadvantages of various electronic voting schemes. More holes have been poked in the current electronic designs than were punched in the cards used in Florida.

May 5, 2004

Topic: Purpose-built Systems

1 comments

News 2.0:
Taking a second look at the news so you don’t have to.

Microsoft Office, StarSuite, and OpenOffice now have a little “right side of the brain” competition. Software engineer Denny Jaeger tapped into his alter ego’s creative pool (he’s also a composer and musician) to create No Boundaries or Rules (NBOR), software based on a new paradigm.

May 5, 2004

0 comments

Letters:
The long and short of the Waterfall model, addressed by Phillip A. Laplante and Colin J. Neill

The long and short of the Waterfall model, addressed by Phillip A. Laplante and Colin J. Neill in “‘The Demise of the Waterfall Model Is Imminent’ and Other Urban Myths” (ACM Queue 1(10), February 2004), is that it is based on a need within current implementations of imperative programming languages. This need, like all the deficiencies pointed out, disappears with the introduction of fourth-generation, declarative languages—for example, SQL and AI.

May 5, 2004

0 comments

From the Editors: Search - An Enterprising Affair:
The searching-to-finding ratio is in need of improvement.

Arguably, search is the killer app on the Internet. If there was anyone left who’d argue it wasn’t at least in the top five, they’ve probably now recanted, as the verb to Google has entered the common vernacular (I Google, you Google, he Googles).

May 5, 2004

0 comments

BPM: The Promise and the Challenge:
It’s all about closing the loop from conception to execution and back.

Over the last decade, businesses and governments have been giving increasing attention to business processes - to their description, automation, and management. This interest grows out of the need to streamline business operations, consolidate organizations, and save costs, reflecting the fact that the process is the basic unit of business value within an organization.

April 16, 2004

Topic: Workflow Systems

0 comments

Death by UML Fever:
Self-diagnosis and early treatment are crucial in the fight against UML Fever.

A potentially deadly illness, clinically referred to as UML (Unified Modeling Language) fever, is plaguing many software-engineering efforts today. This fever has many different strains that vary in levels of lethality and contagion. A number of these strains are symptomatically related, however. Rigorous laboratory analysis has revealed that each is unique in origin and makeup. A particularly insidious characteristic of UML fever, common to most of its assorted strains, is the difficulty individuals and organizations have in self-diagnosing the affliction. A consequence is that many cases of the fever go untreated and often evolve into more complex and lethal strains.

April 16, 2004

Topic: Development

5 comments

Digitally Assisted Analog Integrated Circuits:
Closing the gap between analog and digital

In past decades, “Moore’s law” has governed the revolution in microelectronics. Through continuous advancements in device and fabrication technology, the industry has maintained exponential progress rates in transistor miniaturization and integration density. As a result, microchips have become cheaper, faster, more complex, and more power efficient.

April 16, 2004

Topic: Processors

0 comments

Stream Processors: Programmability and Efficiency:
Will this new kid on the block muscle out ASIC and DSP?

Many signal processing applications require both efficiency and programmability. Baseband signal processing in 3G cellular base stations, for example, requires hundreds of GOPS (giga, or billions, of operations per second) with a power budget of a few watts, an efficiency of about 100 GOPS/W (GOPS per watt), or 10 pJ/op (picoJoules per operation). At the same time programmability is needed to follow evolving standards, to support multiple air interfaces, and to dynamically provision processing resources over different air interfaces. Digital television, surveillance video processing, automated optical inspection, and mobile cameras, camcorders, and 3G cellular handsets have similar needs.
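
The two efficiency figures quoted above are the same number in different units; making the unit conversion explicit:

\[ \frac{1\,\mathrm{W}}{100\,\mathrm{GOPS}} = \frac{1\,\mathrm{J/s}}{10^{11}\,\mathrm{ops/s}} = 10^{-11}\,\mathrm{J/op} = 10\,\mathrm{pJ/op} \]

In other words, a few watts buys a few hundred GOPS only if every operation stays within a 10-picojoule energy budget.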

April 16, 2004

Topic: DSPs

0 comments

DSPs: Back to the Future:
To understand where DSPs are headed, we must look at where they’ve come from.

From the dawn of the DSP (digital signal processor), an old quote still echoes: "Oh, no! We’ll have to use state-of-the-art 5µm NMOS!" The speaker’s name is lost in the fog of history, as are many things from the ancient days of 5µm chip design. This quote refers to the first Bell Labs DSP whose mask set in fact underwent a 10 percent linear lithographic shrink to 4.5µm NMOS (N-channel metal oxide semiconductor) channel length and taped out in late 1979 with an aggressive full-custom circuit design.

April 16, 2004

Topic: DSPs

0 comments

On Mapping Algorithms to DSP Architectures:
Knowledge of both the algorithm and target architecture is crucial.

Our complex world is characterized by representation, transmission, and storage of information - and information is mostly processed in digital form. With the advent of DSPs (digital signal processors), engineers are able to implement complex algorithms with relative ease. Today we find DSPs all around us - in cars, digital cameras, MP3 and DVD players, modems, and so forth. Their widespread use and deployment in complex systems has triggered a revolution in DSP architectures, which in turn has enabled engineers to implement algorithms of ever-increasing complexity.

April 16, 2004

Topic: DSPs

0 comments

Of Processors and Processing:
There’s more than one way to DSP

Digital signal processing is a stealth technology. It is the core enabling technology in everything from your cellphone to the Mars Rover. It goes much further than just enabling a one-time breakthrough product. It provides ever-increasing capability; compare the performance gains made by dial-up modems with the recent performance gains of DSL and cable modems. Remarkably, digital signal processing has become ubiquitous with little fanfare, and most of its users are not even aware of what it is.

April 16, 2004

Topic: DSPs

0 comments

Damnéd Digits:
Floating in the real world of real numbers

I remind you, first, that "damnéd" has two syllables, calling for a Shakespearean sneer as sneered by Olivier strutting his King Richard III stuff.

April 16, 2004

Topic: Code

0 comments

Calendar:
4-Mar

Emerging Robotics Technologies and Applications Conference. March 9-10, 2004. Cambridge, Massachusetts - PerCom (IEEE International Conference on Pervasive Computing and Communications). March 14-17, 2004. Orlando, Florida

April 16, 2004

0 comments

Book Reviews: Linux on the Mainframe:
John Eilert, Maria Eisenhaendler, Dorothea Matthaeus, and Ingolf Salm
Prentice Hall Professional Technical Reference, 2003, $49.99, ISBN: 0-13-101415-3

It has been the conventional wisdom for some time that it is more cost effective to use many smaller servers rather than one large centralized mainframe. The arguments in favor of this view are, essentially, that expansion is cheaper and that one is not locked into a single vendor for hardware and software. It is the thesis of this book that, with the advent of implementations of Linux for the mainframe, this conventional wisdom is no longer correct. Indeed, using a mainframe may make greater economic sense.

April 16, 2004

0 comments

A Conversation with Teresa Meng:
The founder of Atheros analyzes the role of signal processing in the evolving world of wireless communications.

In 1999, Teresa Meng took a leave of absence from Stanford University and with colleagues from Stanford and the University of California, Berkeley, founded Atheros Communications to develop and deliver the core technology for wireless communication systems. Using a combination of signal processing and CMOS RF technology, Atheros came up with a pioneering 5 GHz wireless LAN chipset found in most 802.11a/b/g products, and continues to extend its market as wireless communications evolve.

April 16, 2004

Topic: Mobile Computing

0 comments

Get Your Graphics On:
OpenGL Advances with the Times

OpenGL, the decade-old mother of all graphics application programming interfaces (APIs), is getting two significant updates to bring it into the 21st century.

April 16, 2004

Topic: Graphics

0 comments

News 2.0:
Taking a second look at the news so you don’t have to.

People have warned against “monoculturalism” in the software community for years. Parallels have been drawn between Internet viruses and worms such as ILOVEYOU and Microsoft SQL Slammer and, for example, the Mexican boll weevil that devastated cotton plantations in the American South.

April 16, 2004

0 comments

Letters:
Ken Coar’s “The Sun Never Sets on Distributed Development”

Ken Coar’s “The Sun Never Sets on Distributed Development” (ACM Queue 1(9), December/January 2003-2004) is a fine, succinct, and to-the-point(s) article. Absolutely required reading for every telemanager. I’m sure I’ll include some of Ken Coar’s insights in our training.

April 16, 2004

0 comments

DSP 4 You:
Whether you think it means digital signal processing, or digital signal processor, DSP is a topic that affects your life, if not your work as a software engineer.

The design of the first wave of user-programmable DSP chips—the Intel 2920, the NEC µPD7720, and the Bell Labs (AT&T) DSP-1—dates to 1979, which makes 2004 a 25th anniversary appropriately celebrated with the collection of articles presented in this month’s ACM Queue.

April 16, 2004

Topic: DSPs

0 comments

People in Our Software:
A person-centric approach could make software come alive, but at what cost?

People are not well represented in today’s software. With the exception of IM (instant messaging) clients, today’s applications offer few clues that people are actually living beings. Static strings depict things associated with people like e-mail addresses, phone numbers, and home-page URLs. Applications also tend to show the same information about a person, no matter who is viewing it.

February 24, 2004

Topic: Social Computing

0 comments

Sensible Authentication:
According to the author of Beyond Fear, it’s not enough to know who you are; you’ve got to prove it.

The problem with securing assets and their functionality is that, by definition, you don’t want to protect them from everybody. It makes no sense to protect assets from their owners, or from other authorized individuals (including the trusted personnel who maintain the security system). In effect, then, all security systems need to allow people in, even as they keep people out. Designing a security system that accurately identifies, authenticates, and authorizes trusted individuals is highly complex and filled with nuance, but critical to security.

February 24, 2004

Topic: Security

0 comments

The Scalability Problem:
The coexistence of high-end systems and value PCs can make life hell for game developers.

Back in the mid-1990s, I worked for a company that developed multimedia kiosk demos. Our biggest client was Intel, and we often created demos that appeared in new PCs on the end-caps of major computer retailers such as CompUSA. At that time, performance was in demand for all application classes from business to consumer. We created demos that showed, for example, how much faster a spreadsheet would recalculate (you had to do that manually back then) on a new processor as compared with the previous year’s processor. The differences were immediately noticeable to even a casual observer - and it mattered.

February 24, 2004

Topic: Game Development

0 comments

AI in Computer Games:
Smarter games are making for a better user experience. What does the future hold?

If you’ve been following the game development scene, you’ve probably heard many remarks such as: "The main role of graphics in computer games will soon be over; artificial intelligence is the next big thing!" Although you should hardly buy into such statements, there is some truth in them. The quality of AI (artificial intelligence) is a high-ranking feature for game fans in making their purchase decisions and an area with incredible potential to increase players’ immersion and fun.

February 24, 2004

Topic: AI

0 comments

Fun and Games: Multi-Language Development:
Game development can teach us much about the common practice of combining multiple languages in a single project.

Computer games (or "electronic games" if you encompass those games played on console-class hardware) comprise one of the fastest-growing application markets in the world. Within the development community that creates these entertaining marvels, multi-language development is becoming more commonplace as games become more and more complex. Today, asking a development team to construct a database-enabled Web site with the requirement that it be written entirely in C++ would earn scornful looks and rolled eyes, but not long ago the idea that multiple languages were needed to accomplish a given task was scoffed at.

February 24, 2004

Topic: Game Development

0 comments

Massively Multiplayer Middleware:
Building scalable middleware for ultra-massive online games teaches a lesson we all can use: Big project, simple design.

Wish is a multiplayer, online, fantasy role-playing game being developed by Mutable Realms. It differs from similar online games in that it allows tens of thousands of players to participate in a single game world. Allowing such a large number of players requires distributing the processing load over a number of machines and raises the problem of choosing an appropriate distribution technology.

February 24, 2004

Topic: Game Development

0 comments

Game Development: Harder Than You Think:
Ten or twenty years ago it was all fun and games. Now it’s blood, sweat, and code.

The hardest part of making a game has always been the engineering. In times past, game engineering was mainly about low-level optimization—writing code that would run quickly on the target computer, leveraging clever little tricks whenever possible. But in the past ten years, games have ballooned in complexity. Now the primary technical challenge is simply getting the code to work to produce an end result that bears some semblance to the desired functionality. To the extent that we optimize, we are usually concerned with high-level algorithmic choices.

February 24, 2004

Topic: Game Development

6 comments

When Bad People Happen to Good Games:
OK, so I admit it - not only am I a total closet gamer geek, I admit that I actually care enough to be bitter about it. Yep, that’s right - this puts me in the “big-time nerd” category.

But I think I have a lot of company, which sort of makes me feel better. In fact, at any given moment there are hundreds of thousands of people online playing games. Sure, some of them are playing very simple games like Yahoo! Checkers, and others are playing complicated realtime strategies like Blizzard’s Starcraft—but no matter what game they are playing, they are playing with other people. This is the real attraction of online games. No matter how good games get at so-called artificial intelligence, humans will always make more interesting teammates or opponents. That’s a good thing, but it’s also a bad thing.

February 24, 2004

Topic: Game Development

1 comments

Calendar:
4-Feb

ETech (O’Reilly Emerging Technology Conference) - February 9-12, 2004 - San Diego, California. Intel Developer Forum - February 17-19, 2004 - San Francisco, California

February 24, 2004

0 comments

Book Reviews: Hacking Exposed: Network Security Secrets and Solutions, 4th ed.:
Knowledge is power, and you’ll feel both knowledgeable and powerful after reading this book.

Hacking Exposed: Network Security Secrets and Solutions, 4th ed. Stuart McClure, Joel Scambray, and George Kurtz. McGraw-Hill, 2003, $49.99, ISBN: 0-072-22742-7. The first chapter of this latest edition of Hacking Exposed discusses footprinting, the methodical process of network reconnaissance. The goal is to gather data about an organization in a controlled fashion and compile a complete security profile, including domain names, individual IP (Internet protocol) addresses, and network blocks.

February 24, 2004

0 comments

A Conversation with Will Harvey:
In many ways online games are on the bleeding edge of software development.

That puts Will Harvey, founder and executive vice president of Menlo Park-based There, right at the front of the pack. There, which just launched its product in October, is a virtual 3D world designed for online socializing.

February 24, 2004

Topic: Game Development

2 comments

Toolkit: Java is Jumpin’:
There’s perception, and then there’s reality.

Even though the frenzied hype over Java has died down since the Internet bubble burst, Java is becoming hugely popular in the wireless space. Several events highlight its emergence. Most recently, in December, Texas Instruments opened a research operation in France to focus on the integration of Java apps into the next generation of wireless devices.

February 24, 2004

Topic: Programming Languages

0 comments

The Demise of the Waterfall Model Is Imminent, and Other Urban Myths:
Rumors of the demise of the Waterfall Life-cycle Model are greatly exaggerated.

We discovered this and other disappointing indicators about current software engineering practices in a recent survey of almost 200 software professionals. These discoveries raise questions about perception versus reality with respect to the nature of software engineers, software engineering practice, and the industry.

February 24, 2004

Topic: Development

2 comments

News 2.0:
Taking a second look at the news so you don’t have to.

Gates and crew have stepped forward and announced the Anti-Virus Reward Program, offering a quarter-million dollars for information leading to the arrest and conviction of the “saboteurs of cyberspace” behind the Blaster worm; a quarter-million dollars for helping capture and convict those behind the Sobig virus attack; and four-and-a-half million dollars has been set aside for future rewards.

February 24, 2004

0 comments

Letters:
I was drawn to the article on sentient data.

I was drawn to the article “Sentient Data” (George W. Fitzmaurice, Azam Khan, William Buxton, Gordon Kurtenbach, and Ravin Balakrishnan, ACM Queue 1(8), November 2003), which I found very much in tune with the approach I took as the technical architect for a new “multifaceted data” infrastructure product developed by Digital Equipment Corporation (DEC).

February 24, 2004

0 comments

From the Editors: Fun and Games and Software Development:
What you may not remember is that one of the key groups AMD was going after with its promotional blitz was gamers.

You may recall some of the hype last year as AMD announced and then released its 64-bit processor, the AMD Opteron. What you may not remember is that one of the key groups AMD was going after with its promotional blitz was gamers. You see, at the high-end (read: high-margin) side of the PC business, power-hungry users drive the business, and more and more often those power users are gamers looking to get that millisecond advantage needed to claim bragging rights for the week.

February 24, 2004

0 comments

Black Box Debugging:
It’s all about what takes place at the boundary of an application.

Modern software development practices build applications as a collection of collaborating components. Unlike older practices that linked compiled components into a single monolithic application, modern executables are made up of any number of executable components that exist as separate binary files. This design means that as an application component needs resources from another component, calls are made to transfer control or data from one component to another. Thus, we can observe externally visible application behaviors by watching the activity that occurs across the boundaries of the application’s constituent components.
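
The idea can be sketched in a few lines. Below is a minimal, hypothetical illustration in Python (not the article’s tooling): a decorator that logs every call crossing a component boundary, which is exactly the kind of externally visible activity black box debugging watches. The parse_config function is a made-up stand-in for a call into a separately shipped component.

    import functools
    import logging

    logging.basicConfig(level=logging.INFO, format="%(message)s")

    def trace_boundary(func):
        """Log every call that crosses this component boundary."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            logging.info("-> %s args=%r kwargs=%r", func.__name__, args, kwargs)
            try:
                result = func(*args, **kwargs)
            except Exception as exc:
                logging.info("<- %s raised %r", func.__name__, exc)
                raise
            logging.info("<- %s returned %r", func.__name__, result)
            return result
        return wrapper

    # Hypothetical stand-in for a call into a separately shipped component.
    @trace_boundary
    def parse_config(text):
        key, _, value = text.partition("=")
        return {key.strip(): value.strip()}

    parse_config("retries = 3")

Watching such traces, rather than stepping through source, is the “black box” in black box debugging.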

January 29, 2004

Topic: Quality Assurance

0 comments

Sink or Swim: Know When It’s Time to Bail:
A diagnostic to help you measure organizational dysfunction and take action

There are endless survival challenges for newly created businesses. The degree to which a business successfully meets these challenges depends largely on the nature of the organization and the culture that evolves within it. That is to say, while market size, technical quality, and product design are obviously crucial factors, company failures are typically rooted in some form of organizational dysfunction.

January 29, 2004

Topic: Distributed Development

0 comments

Culture Surprises in Remote Software Development Teams:
“When in Rome” doesn’t help when your team crosses time zones, and your deadline doesn’t.

Technology has made it possible for organizations to construct teams of people who are not in the same location, adopting what one company calls "virtual collocation." Worldwide groups of software developers, financial analysts, automobile designers, consultants, pricing analysts, and researchers are examples of teams that work together from disparate locations, using a variety of collaboration technologies that allow communication across space and time.

January 29, 2004

Topic: Distributed Development

0 comments

Building Collaboration into IDEs:
Edit>Compile>Run>Debug>Collaborate?

Software development is rarely a solo coding effort. More often, it is a collaborative process, with teams of developers working together to design solutions and produce quality code. The members of these close-knit teams often look at one another’s code, collectively make plans about how to proceed, and even fix each other’s bugs when necessary. Teamwork does not stop there, however. An extended team may include project managers, testers, architects, designers, writers, and other specialists, as well as other programming teams.

January 29, 2004

Topic: Distributed Development

0 comments

The Sun Never Sits on Distributed Development:
People around the world can work around the clock on a distributed project, but the real challenge lies in taming the social dynamics.

More and more software development is being distributed across greater and greater distances. The motives are varied, but one of the most predominant is the effort to keep costs down. As talent is where you find it, why not use it where you find it, rather than spending the money to relocate it to some ostensibly more "central" location? The increasing ubiquity of the Internet is making far-flung talent ever-more accessible.

January 29, 2004

Topic: Distributed Development

0 comments

Distributed Development: Lessons Learned:
Why repeat the mistakes of the past if you don’t have to?

Delivery of a technology-based project is challenging, even under well-contained, familiar circumstances. And a tight-knit team can be a major factor in success. It is no mystery, therefore, why most small, new technology teams opt to work in a garage (at times literally). Keeping the focus of everyone’s energy on the development task at hand means a minimum of non-engineering overhead.

January 29, 2004

Topic: Distributed Development

0 comments

The Economics of Spam:
Who pays in the spam game?

You know what I hate about spam filtering? Most of what we do today hurts the people who are already being hurt the most. Think about it: Who pays in the spam game? The recipients. That’s what’s wrong in the first place - the wrong folks pay for this scourge.

January 29, 2004

Topic: Email and IM

0 comments

Calendar:
January 2004

WOSP (International Workshop on Software and Performance) - January 14-16, 2004 - Redwood City, California

January 29, 2004

0 comments

Book Review: The Code Book: The Science of Secrecy from Ancient Egypt to Quantum Cryptography:
Simon Singh. Delacorte Press, 2000, $15.00, ISBN: 0-385-49532-3

This is a superb text. It is both comprehensive and accessible, which is an unusual combination. It introduces the reader to cryptography and cryptanalysis—subsets of the broader field of cryptology—which might be loosely referred to as the study of codes.

January 29, 2004

0 comments

A Conversation with Steve Hagan:
At Oracle, distributed development is a way of life.

Oracle Corporation, which bills itself as the world’s largest enterprise software company, with $10 billion in revenues, some 40,000 employees, and operations in 60 countries, has ample opportunity to put distributed development to the test. Among those on the front lines of Oracle’s distributed effort is Steve Hagan, the engineering vice president of the Server Technologies division, based at Oracle’s New England Development Center in Nashua, New Hampshire, located clear across the country from Oracle’s Redwood Shores, California, headquarters.

January 29, 2004

Topic: Databases

0 comments

Toolkit: GNU Tools: Still Relevant?:
Often lost amid the focus on software you don’t have to pay for - such as Linux and Eclipse - is any mention of the organization that started it all: the Free Software Foundation (FSF).

FSF was founded in 1984 by Richard Stallman. Then a programmer at MIT’s Artificial Intelligence Laboratory, Stallman resigned to protest its restrictive copyright policy. He started the GNU (GNU’s Not Unix) Project, an effort to build a free Unix clone, and wrote the GPL (General Public License), which essentially said you could redistribute GNU code for free as long as you also gave away any modifications you added—and the free software movement was on its way.

January 29, 2004

Topic: Tools

0 comments

Silicon Superstitions:
Ask yourself if what you’re doing is based on fact, on observation, on a sound footing, or if there is something dodgy about it.

We live in a technological age. Even most individuals on this planet who do not have TV or cellular telephones know about such gadgets of technology. They are artifacts made by us and for us. You’d think, therefore, that it would be part of our common heritage to understand them. Their insides are open to inspection, their designers generally understand the principles behind them, and it is possible to communicate this knowledge - even though the "theory of operation" sections of manuals, once prevalent, seem no longer to be included.

January 29, 2004

Topic: System Evolution

2 comments

Letters:
I am writing to express very positive feedback on the software development tools issue.

I am writing to express very positive feedback on the software development tools issue (Queue 1(6), September 2003). In particular, David J. Brown’s interview with Wayne Rosing, Google’s vice president of engineering, and Michael Donat’s “Debugging in an Asynchronous World” were incredibly well written and insightful.

January 29, 2004

0 comments

From the Editors: New World Order:
It seems that wherever you turn these days you read another headline about the increasing popularity of outsourcing software development.

On a whim, I stopped by Google’s news page where a search of “outsourcing software development” produced eight stories on the first page written in the last 24 hours (with the remaining two not significantly older than that). No surprise, the headlines include such frenzied folly as “outsourcing craze,” “job fears,” and the latest entrant—“backlash.” That last one is enough to trip my sensors that global software development outsourcing is a cemented new feature of the landscape. Once there’s a backlash to something, you know it has fully arrived.

January 29, 2004

0 comments

IM, Not IP (Information Pollution):
A steady dose of realtime interruptions is toxic to anyone’s health.

Respected technology commentators say that they now prefer instant messaging (IM) over e-mail as their medium of choice for computer-mediated communication. The main reasons are that e-mail has become an overloaded channel for readers and that you can’t be sure to get a timely response from the recipients of your e-mail.

January 28, 2004

Topic: Email and IM

1 comment

Calendar:
November 2003

Conference on Universal Usability (CUU) - November 10-11, 2003 - Vancouver, Canada
SuperComputing International Conference for High Performance Computing and Communications - November 15-21, 2003 - Phoenix, Arizona

January 28, 2004

0 comments

Book Reviews: Cocoa in a Nutshell - Michael Beam and James Duncan Davidson:
Like other O’Reilly Nutshell books, this is not the first stop on the journey of learning the topic in question.

Also like the other Nutshell books, it is the definitive reference for the topic that it covers. With over 240 classes, Cocoa is a complete class library, framework, and development environment for Apple’s revolutionary Mac OS X. Rather than stopping coding, going into the documentation, and digging around to find the class documentation I need, there is just something magical about being able to flip through a book to find solutions.

January 28, 2004

0 comments

A Conversation with Peter Ford:
The IM world according to a messenger architect

Instant messaging (IM) may represent our brave new world of communications, just as e-mail did a few short years ago. Many IM players are vying to establish the dominant standard in this new world, as well as introducing new applications to take advantage of all IM has to offer. Among them, hardly surprising, is Microsoft, which is moving toward the Session Initiation Protocol (SIP) as its protocol choice for IM.

January 28, 2004

Topic: Email and IM

1 comment

Eclipse: A Platform Becomes an Open-Source Woodstock:
Call it a platform. Call it a tool. Call it the hottest open-source software movement since Linux.

All those descriptions currently surround Eclipse, the language-agnostic code base that’s billed as a “universal platform.” According to its creators it’s more precisely a metaplatform. That is, it’s a building tool out of which developers can create IDEs, or indeed anything else they might desire.

January 28, 2004

Topic: Open Source

0 comments

On Helicopters and Submarines:
SIP does a great job as a helicopter, but when you try to make it function as an IM submarine as well, disaster may follow.

Bernoulli vs. Archimedes - Whenever you see a movie that’s got a vehicle that’s part helicopter and part submarine, you know you’re in for a real treat. What could be cooler? One second, the hero’s being pursued by some fighter jets piloted by some nasty dudes with bad haircuts, dodging air-to-air missiles and exchanging witty repartee over the radio with a megalomaniac bent on world domination; and then, just as the hero is unable to evade the very last missile, he pushes a button, the craft dives into the ocean, and is surrounded by an oasis of peaceful blue.

January 28, 2004

Topic: Email and IM

0 comments

News 2.0:
Taking a second look at the news so you don’t have to.

The obvious reason: IBM is ceasing support for OS/2 after 2004. But under the surface there might be something more interesting at play.

January 28, 2004

0 comments

Letters:
I really have to thank Jef Raskin for his article on user interface designers.

For some reason, this topic seems to be completely ignored within almost all modern computing environments. Therefore, trying to understand the current slew of computer interfaces can be painful to a veteran in the field like me.

January 28, 2004

0 comments

From the Editors: Reach Out and Touch Someone:
The two killer apps that have best defined the Internet Age have been e-mail and the Web. Now we have a third: IM.

Note that two of the three are entirely focused on person-to-person communication: no great surprise since work demands that we find increasingly efficient and effective means to communicate with colleagues, customers, and suppliers. And then out in that big world beyond day-to-day commerce are all those other people we really want to be able to connect directly with in some convenient yet meaningful way. For millions of computer and wireless phone users, IM has become a preferred means for doing just that.

January 28, 2004

0 comments

Uprooting Software Defects at the Source:
Source code analysis is an emerging technology in the software industry that allows critical source code defects to be detected before a program runs.

Although the concept of detecting programming errors at compile time is not new, the technology to build effective tools that can process millions of lines of code and report substantive defects with only a small amount of noise has long eluded the market. At the same time, a different type of solution is needed to combat current trends in the software industry that are steadily diminishing the effectiveness of conventional software testing and quality assurance.
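
To make the idea concrete, here is the kind of defect a path-sensitive source analyzer reports before the program ever runs. The example is a hypothetical Python function written for this listing, not taken from the article:

    def lookup_quota(user, quotas):
        """Return the stored quota for a user."""
        if user in quotas:
            record = quotas[user]
        # Defect: on the path where `user` is missing, `record` is never
        # assigned, and the return below raises UnboundLocalError at runtime.
        # A static analyzer flags this path without executing the code.
        return record

Conventional testing only finds this if some test happens to exercise the missing-user path; analysis of the source considers every path.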

January 28, 2004

Topic: Quality Assurance

0 comments

Sentient Data Access via a Diverse Society of Devices:
Today’s ubiquitous computing environment cannot benefit from the traditional understanding of a hierarchical file system.

It has been more than ten years since such "information appliances" as ATMs and grocery store UPC checkout counters were introduced. For the office environment, Mark Weiser began to articulate the notion of UbiComp and identified some of the salient features of the trends in 1991. Embedded computation is also becoming widespread. Microprocessors, for example, are finding themselves embedded into seemingly conventional pens that remember what they have written. Anti-lock brake systems in cars are controlled by fuzzy logic.

January 28, 2004

Topic: Embedded Systems

0 comments

Nine IM Accounts and Counting:
The key word with instant messaging today is interoperability.

Instant messaging has become nearly as ubiquitous as e-mail, in some cases far surpassing e-mail in popularity. But it has gone far beyond teenagers’ insular world to business, where it is becoming a useful communication tool. The problem, unlike e-mail, is that no common standard exists for IM, so users feel compelled to maintain multiple accounts, for example, AOL, Jabber, Yahoo, and MSN.

January 28, 2004

Topic: Email and IM

0 comments

Broadcast Messaging: Messaging to the Masses:
This powerful form of communication has social implications as well as technical challenges.

We have instantaneous access to petabytes of stored data through Web searches. With respect to messaging, we have an unprecedented number of communication tools that provide both synchronous and asynchronous access to people. E-mail, message boards, newsgroups, IRC (Internet relay chat), and IM (instant messaging) are just a few examples. These tools are all particularly significant because they have become essential productivity entitlements. They have caused a fundamental shift in the way we communicate. Many readers can attest to feeling disconnected when a mail server goes down or when access to IM is unavailable.

January 28, 2004

Topic: Email and IM

0 comments

Beyond Instant Messaging:
Platforms and standards for these services must anticipate and accommodate future developments.

The recent rise in popularity of IM (instant messaging) has driven the development of platforms and the emergence of standards to support IM. Especially as the use of IM has migrated from online socializing at home to business settings, there is a need to provide robust platforms with the interfaces that business customers use to integrate with other work applications. Yet, in the rush to develop a mature IM infrastructure, it is also important to recognize that IM features and uses are still evolving. For example, popular press stories have raised the concern that IM interactions may be too distracting in the workplace.

January 28, 2004

Topic: Email and IM

0 comments

Reading, Writing, and Code:
The key to writing readable code is developing good coding style.

Forty years ago, when computer programming was an individual experience, the need for easily readable code wasn’t on any priority list. Today, however, programming usually is a team-based activity, and writing code that others can easily decipher has become a necessity. Creating and developing readable code is not as easy as it sounds.

December 5, 2003

Topic: Code

1 comment

The Big Bang Theory of IDEs:
Pondering the vastness of the ever-expanding universe of IDEs, you might wonder whether a usable IDE is too much to ask for.

Remember the halcyon days when development required only a text editor, a compiler, and some sort of debugger (in cases where the odd printf() or two alone didn’t serve)? During the early days of computing, these were independent tools used iteratively in development’s golden circle. Somewhere along the way we realized that a closer integration of these tools could expedite the development process. Thus was born the integrated development environment (IDE), a framework and user environment for software development that’s actually a toolkit of instruments essential to software creation. At first, IDEs simply connected the big three (editor, compiler, and debugger), but nowadays most go well beyond those minimum requirements.

December 5, 2003

Topic: Development

0 comments

Modern System Power Management:
Increasing demands for more power and increased efficiency are pressuring software and hardware developers to ask questions and look for answers.

The Advanced Configuration and Power Interface (ACPI) is the most widely used power and configuration interface for laptops, desktops, and server systems. It is also very complex, and its current specification weighs in at more than 500 pages. Needless to say, operating systems that choose to support ACPI require significant additional software support, up to and including fundamental OS architecture changes. The effort that ACPI’s definition and implementation has entailed is worth the trouble because of how much flexibility it gives to the OS (and ultimately the user) to control power management policy and implementation.
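
As a small taste of what that OS-side support buys the user: on a Linux machine whose kernel ACPI driver exposes battery data through sysfs (an assumption about the platform; the node name varies by machine), the 500-page interface boils down to readable text files. A minimal sketch:

    from pathlib import Path

    BATTERY = Path("/sys/class/power_supply/BAT0")  # node name varies by machine

    def read(attr):
        try:
            return (BATTERY / attr).read_text().strip()
        except OSError:
            return None  # no battery here, or a different sysfs layout

    # The kernel's ACPI support distills the spec into flat attribute files.
    print("status  :", read("status"))
    print("capacity:", read("capacity"), "%")

This sketches the end result of all that flexibility, not ACPI itself, which lives in the firmware tables and the OS interpreter beneath these files.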

December 5, 2003

Topic: Power Management

0 comments

Making a Case for Efficient Supercomputing:
It is time for the computing community to use alternative metrics for evaluating performance.

A supercomputer evokes images of “big iron” and speed; it is the Formula 1 racecar of computing. As we venture forth into the new millennium, however, I argue that efficiency, reliability, and availability will become the dominant issues by the end of this decade, not only for supercomputing, but also for computing in general.

December 5, 2003

Topic: Power Management

0 comments

Energy Management on Handheld Devices:
Whatever their origin, all handheld devices share the same Achilles heel: the battery.

Handheld devices are becoming ubiquitous, and as their capabilities increase, they are starting to displace laptop computers - much as laptop computers have displaced desktop computers in many roles. Handheld devices are evolving from today’s PDAs, organizers, cellular phones, and game machines into a variety of new forms. First, although partially offset by improvements in low-power electronics, this increased functionality carries a corresponding increase in energy consumption. Second, as a consequence of displacing other pieces of equipment, handheld devices are seeing more use between battery charges. Finally, battery technology is not improving at the same pace as the energy requirements of handheld electronics.

December 5, 2003

Topic: Power Management

0 comments

The Inevitability of Reconfigurable Systems:
The transition from instruction-based to reconfigurable circuits will not be easy, but has its time come?

The introduction of the microprocessor in 1971 marked the beginning of a 30-year stall in design methods for electronic systems. The industry is coming out of the stall by shifting from programmed to reconfigurable systems. In programmed systems, a linear sequence of configuration bits, organized into blocks called instructions, configures fixed hardware to mimic custom hardware. In reconfigurable systems, the physical connections among logic elements change with time to mimic custom hardware. The transition to reconfigurable systems will be wrenching, but this is inevitable as the design emphasis shifts from cost performance to cost performance per watt. Here’s the story.

December 5, 2003

Topic: Power Management

0 comments

Getting Gigascale Chips:
Challenges and Opportunities in Continuing Moore’s Law

Processor performance has increased by five orders of magnitude in the last three decades, made possible by following Moore’s law - that is, continued technology scaling, improved transistor performance to increase frequency, additional integration capacity to realize complex architectures, and reduced energy consumed per logic operation to keep power dissipation within limits. Advances in software technology, such as rich multimedia applications and runtime systems, exploited this performance explosion, delivering to end users higher productivity, seamless Internet connectivity, and even multimedia and entertainment.

December 5, 2003

Topic: Processors

0 comments

Wireless Networking Considered Flaky:
You know what bugs me about wireless networking? Everyone thinks it’s so cool and never talks about the bad side of things.

Oh sure, I can get on the ’net from anywhere at Usenix or the IETF, but those are _hostile_ _nets_. Hell, all wireless nets are hostile. By their very nature, you don’t know who’s sharing the ether with you. But people go on doing their stuff, confident that they are OK because they’re behind the firewall.

December 5, 2003

Topic: Mobile Computing

0 comments

Calendar:
October 2003

Instant Messaging Planet - October 15-16, 2003 - San Jose, California

December 5, 2003

0 comments

Book Reviews: Lean Software Development: An Agile Toolkit:
This wonderful, short book is a pragmatic guide to the realities of managing software projects, and to the pitfalls of guiding research and development projects in general.

With brief examples based on the authors’ real-life experiences, the reader sees how “good software practice” in its traditional and literal sense can often lead to something between disaster and frustration. In particular, the authors argue that specifying a software system in its entirety before developing any code ensures there is no flexibility for changes down the road when more is known about the problem.

December 5, 2003

0 comments

A Conversation with Dan Dobberpuhl:
The computer industry has always been about power.

The development of the microprocessors that power computers has been a relentless search for more power, higher speed, and better performance, usually in smaller and smaller packages. But when is enough enough?

December 5, 2003

Topic: Power Management

2 comments

Microsoft’s Compact Framework Targets Smart Devices:
Welcome to my first installment of ACM Queue’s ToolKit column. Each issue I’ll dig beneath the market-friendly, feature-rich exterior of some of the best-known (and some of the least-known) development tools in an attempt to separate the core app from the product spec sheet.

This month we’re embedding ourselves inside Microsoft’s .NET Compact Framework (CF), billed by Microsoft as the perfect platform to create software applications that target mobile devices. That’s accurate—and it’s not.

December 5, 2003

Topic: Tools

0 comments

Stand and Deliver: Why I Hate Stand-Up Meetings:
Stand-up meetings are an important component of the ‘whole team’, which is one of the fundamental practices of extreme programming (XP).

According to the Extreme Programming Web site, the stand-up meeting is one part of the rules and practices of extreme programming: “Communication among the entire team is the purpose of the stand-up meeting. They should take place every morning in order to communicate problems, solutions, and promote team focus. The idea is that everyone stands up in a circle in order to avoid long discussions. It is more efficient to have one short meeting that everyone is required to attend than many meetings with a few developers each.”

December 5, 2003

Topic: Development

3 comments

News 2.0:
Taking a second look at the news so you don’t have to.

Many of us secretly (or flamboyantly) partake of unhealthy snacks, relying on them to make it through a busy day. But in the not-too-distant future, our afternoon treats will be as precious as oil fields and as essential as an upstate water plant. Sugar is going to rock our world. Recent studies conducted by Derek Lovley and Swades Chaudhuri at the University of Massachusetts at Amherst indicate that Rhodoferax ferrireducens is capable of converting sugars of all kinds into stable, long-term electricity at an impressive rate of 80% efficiency.

December 5, 2003

0 comments

Letters:
With the exception of my wife, my family has run OpenOffice on Linux for some time.

I just read “MOXIE: Microsoft Office-Linux Interoperability Experiment” (Hal Varian and Chris Varian, ACM Queue 1(5), July/August 2003). Thanks for running the study. My wife’s work forces her to use the Word doc format far more than the rest of us. Because of chronic Windows instability issues, she recently became the last of the tribe to make the move to the OpenOffice.org suite (again, on Linux).

December 5, 2003

0 comments

CPUs with 2,000 MIPS per Watt, Anyone?:
The recent events with the Eastern power grid provide an ominous reminder of our huge dependence on electrical power.

Making a living as an IT professional, I always get a terrible sinking feeling right in the midsection when that background hum of automation suddenly becomes quiet. Electrical power breathes life into every display pixel, CPU, and disk drive—and soon will do the same to Ethernet ports delivering power to devices along with data packets. Power demands our consideration in just about every implicit or explicit decision we make; and when we get it wrong, computer systems overheat and fail.

December 5, 2003

Topic: Power Management

0 comments

Book Reviews: Java Precisely:
Peter Sestoft. MIT Press, 2002, $14.95, ISBN: 0-262-69276-7

Sestoft provides a concise reference to the Java 2 programming language (versions 1.3 and 1.4). The stated audience for the book is people learning or using Java who need more details about the language than are usually provided in a textbook.

October 2, 2003

0 comments

A Conversation with Wayne Rosing:
How the Web changes the way developers build and release software

Google is one of the biggest success stories of the recent Internet age, evolving in five years from just another search engine with a funny name into a household name that is synonymous with searching the Internet. It processes about 200 million search requests daily, serving as both a resource and a challenge to developers today.

October 2, 2003

Topic: Web Services

0 comments

User Interface Designers, Slaves of Fashion:
The status quo prevails in interface design, and the flawed concept of cut-and-paste is a perfect example.

The discipline, science, and art of interface design has gone stagnant. The most widely read books on the subject are primarily compendia of how to make the best of received widgets. The status quo is mistaken for necessity. Constrained in this chamber pot, designers wander around giving the users of their products little comfort or fresh air.

October 2, 2003

Topic: Development

0 comments

Outfoxing Outsourcing:
There is more to watch for on the offshore horizon as outsourcing traffic increases and involves multiple countries.

Companies intent on saving bucks need no longer fix their gaze solely on India in search of information technology professionals. Russia, Vietnam, and China, for example, have equally impressive pools of talented and motivated techies, and they’re extremely eager to negotiate affordable contracts with offshore companies.

October 2, 2003

Topic: Business/Management

0 comments

The Developer’s Art Today: Aikido or Sumo?:
Software development, tools, and whether or not they make us more productive

About once a month the Queue Advisory Board gets together for dinner to hammer out ideas for upcoming issues. Well, a few months back we fell into discussion about the problems surrounding software development these days. A few of us piped up straight away that tools are very important. Others countered, “Oh, sure, but do they help or do they hurt?” And so this issue was born.

October 2, 2003

Topic: Tools

0 comments

Spam, Spam, Spam, Spam, Spam, the FTC, and Spam:
A forum sponsored by the FTC highlights just how bad spam is, and how it’s only going to get worse without some intervention.

The Federal Trade Commission held a forum on spam in Washington, D.C., April 30 to May 2. Rather to my surprise, it was a really good, content-full event. The FTC folks had done their homework and had assembled panelists that ran the gamut from ardent anti-spammers all the way to hard-core spammers and everyone in between: lawyers, legitimate marketers, and representatives from vendor groups.

October 2, 2003

Topic: Email and IM

0 comments

Another Day, Another Bug:
We asked our readers which tools they use to squash bugs. Here’s what they said.

As part of this issue on programmer tools, we at Queue decided to conduct an informal Web poll on the topic of debugging. We asked you to tell us about the tools that you use and how you use them. We also collected stories about those hard-to-track-down bugs that sometimes make us think of taking up another profession.

October 2, 2003

Topic: Debugging

0 comments

No Source Code? No Problem!:
What if you have to port a program, but all you have is a binary?

Typical software development involves one of two processes: the creation of new software to fit particular requirements or the modification (maintenance) of old software to fix problems or fit new requirements. These transformations happen at the source-code level. But what if the problem is not the maintenance of old software but the need to create a functional duplicate of the original? And what if the source code is no longer available?

October 2, 2003

Topic: Code

0 comments

Code Spelunking: Exploring Cavernous Code Bases:
Code diving through unfamiliar source bases is something we do far more often than write new code from scratch--make sure you have the right gear for the job.

Try to remember your first day at your first software job. Do you recall what you were asked to do, after the human resources people were done with you? Were you asked to write a piece of fresh code? Probably not. It is far more likely that you were asked to fix a bug, or several, and to try to understand a large, poorly documented collection of source code. Of course, this doesn’t just happen to new graduates; it happens to all of us whenever we start a new job or look at a new piece of code. With experience we all develop a set of techniques for working with large, unfamiliar source bases.
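
One of the simplest pieces of gear is a crude cross-referencer. The sketch below is a stand-in written for this listing (not a replacement for real spelunking tools such as ctags or cscope): it ranks the files in a source tree by how often a whole-word identifier appears, which is often enough to find where an unfamiliar symbol lives.

    import re
    import sys
    from pathlib import Path

    def spelunk(root, identifier, exts=(".c", ".h", ".py")):
        """Rank source files by occurrences of a whole-word identifier."""
        pattern = re.compile(r"\b%s\b" % re.escape(identifier))
        hits = {}
        for path in Path(root).rglob("*"):
            if path.suffix in exts and path.is_file():
                try:
                    text = path.read_text(errors="ignore")
                except OSError:
                    continue  # unreadable file; skip it
                count = len(pattern.findall(text))
                if count:
                    hits[path] = count
        # Busiest files first: a rough map of where the identifier lives.
        for path, count in sorted(hits.items(), key=lambda kv: -kv[1]):
            print(f"{count:5d}  {path}")

    if __name__ == "__main__":
        spelunk(sys.argv[1], sys.argv[2])

Run it as, say, python spelunk.py ~/src/sometree some_symbol; the busiest files are usually the place to start reading.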

October 1, 2003

Topic: Quality Assurance

1 comment

Coding Smart: People vs. Tools:
Tools can help developers be more productive, but they’re no replacement for thinking.

Cool tools are seductive. When we think about software productivity, tools naturally come to mind. When we see pretty new tools, we tend to believe that their amazing features will help us get our work done much faster. Because every software engineer uses software productivity tools daily, and all team managers have to decide which tools their members will use, the latest and greatest look appealing.

October 1, 2003

Topic: Tools

1 comment

Debugging in an Asynchronous World:
Hard-to-track bugs can emerge when you can’t guarantee sequential execution. The right tools and the right techniques can help.

Pagers, cellular phones, smart appliances, and Web services - these products and services are almost omnipresent in our world, and are stimulating the creation of a new breed of software: applications that must deal with inputs from a variety of sources, provide real-time responses, deliver strong security - and do all this while providing a positive user experience. In response, a new style of application programming is taking hold, one that is based on multiple threads of control and the asynchronous exchange of data, and results in fundamentally more complex applications.

October 1, 2003

Topic: Quality Assurance

0 comments

A Conversation with Chris DiBona:
Chris DiBona has been out front and outspoken about the open source movement.

He was hooked from the moment he installed Linux on an old PC when he was a teenager.

October 1, 2003

Topic: Open Source

0 comments

Uncrackable Passwords:
Companies such as Apple, Dell, Gateway, and MicronPC are marketing fingerprint readers or developing add-ons.

As a result of “heightened demands” for secure computing, PC makers are taking a serious look at biometrics. MPC’s TransPort laptops, for example, use heat-sensitive scans integrated into the system’s BIOS. MPC’s TouchChip captures fingerprint scans from the laptop’s palmrest. A laptop can be registered to multiple users, who can each designate which files, folders, or directories will be shared. Very James Bond.

October 1, 2003

Topic: Security

0 comments

Viewing Open Source with an Open Mind:
Most writing about open source stresses the goodness (or lack of goodness) of open source as a software development model.

Some of the more interesting articles discussing open source have been from the economic point of view. Since the dismal science of economics sees nothing as purely good or bad, these types of pieces generally make for interesting reading, in turn leading you to think a bit more carefully about the possibility that open source might be good for some things but less good for others.

October 1, 2003

Topic: Open Source

0 comments

Closed Source Fights Back:
SCO vs. The World - What Were They Thinking?

In May 2003, the SCO Group, a vendor of the Linux operating system, sent a letter to its customers. Among other things, it stated, "We believe that Linux is, in material part, an unauthorized derivative of Unix." What would make SCO do that?

October 1, 2003

Topic: Open Source

0 comments

Commercializing Open Source Software:
Many have tried, a few are succeeding, but challenges abound.

The use of open source software has become increasingly popular in production environments, as well as in research and software development. One obvious attraction is the low cost of acquisition. Commercial software has a higher initial cost, though it usually has advantages such as support and training. A number of business models designed by users and vendors combine open source and commercial software; they use open source as much as possible, adding commercial software as needed.

October 1, 2003

Topic: Open Source

1 comment

The Age of Corporate Open Source Enlightenment:
Like it or not, zealots and heretics are finding common ground in the open source holy war.

It’s a bad idea, mixing politics and religion. Conventional wisdom tells us to keep them separate - and to discuss neither at a dinner party. The same has been said about the world of software. When it comes to mixing the open source church with the proprietary state (or is it the other way around?), only one rule applies: Don’t do it.

October 1, 2003

Topic: Open Source

1 comment

From Server Room to Living Room:
How open source and TiVo became a perfect match

The open source movement, exemplified by the growing acceptance of Linux, is finding its way not only into corporate environments but also into a home near you. For some time now, high-end applications such as software development, computer-aided design and manufacturing, and heavy computational applications have been implemented using Linux and generic PC hardware.

October 1, 2003

Topic: Open Source

0 comments

A Conversation with Jim Gray:
Sit down, turn off your cellphone, and prepare to be fascinated.

Clear your schedule, because once you’ve started reading this interview, you won’t be able to put it down until you’ve finished it.

July 31, 2003

Topic: File Systems and Storage

1 comment

Big Storage: Make or Buy?:
We hear it all the time. The cost of disk space is plummeting.

Your local CompUSA is happy to sell you a 200-gigabyte ATA drive for $300, which comes to about $1,500 per terabyte. Go online and save even more - $1,281 for 1 terabyte of drive space (using, say, 7X Maxtor EIDE 153-GB ATA/133 5400-RPM drives). So why would anyone pay $360,000 to XYZ Storage System Corp. for a 16-terabyte system? I mean, what’s so hard about storage? Good question.
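
The arithmetic behind those numbers, worked out in a few lines (prices as quoted in 2003; the roll-your-own figure treats seven 153-GB drives as roughly a terabyte):

    # Cost per terabyte, three ways, at the blurb's 2003 prices.
    retail = 300 / 0.200           # one 200-GB drive for $300
    online = 1281 / (7 * 0.153)    # seven 153-GB drives for $1,281 (~1.07 TB)
    turnkey = 360000 / 16          # a 16-TB storage system for $360,000

    print("retail drive : $%8.0f per TB" % retail)   # $1,500
    print("online drives: $%8.0f per TB" % online)   # about $1,196
    print("turnkey array: $%8.0f per TB" % turnkey)  # $22,500

That roughly fifteenfold gap between bare drives and a finished system is precisely the question the article takes up.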

July 31, 2003

Topic: File Systems and Storage

0 comments

Storage - n Sides to Every Story:
If you ask five different technologists about storage, you better expect five different answers.

The term storage sparks a number of ideas in the minds of storage experts—more so than most other topics in the computing field. Any discussion about storage among people in the industry brings to mind the East Indian legend made famous by English poet John Godfrey Saxe about the blind men observing an elephant. As they all reach out and feel that part of the elephant closest to themselves, they all have a different “view.” Each is correct, and yet none fully so, because no one view can take in the whole picture.

July 31, 2003

Topic: File Systems and Storage

0 comments

Storage Systems: Not Just a Bunch of Disks Anymore:
The sheer size and scope of data available today puts tremendous pressure on storage systems to perform in ways never imagined.

The concept of a storage device has changed dramatically from the first magnetic disk drive, introduced by the IBM RAMAC in 1956, to today’s server rooms with detached and fully networked storage servers. Storage has expanded in both large and small directions; all variants use the same underlying technology but quickly diverge from there. Here we will focus on the larger storage systems that are typically detached from the server hosts. We will introduce the layers of protocols and translations that occur as bits make their way from the magnetic domains on the disk drives, across the interfaces, to your desktop.

July 31, 2003

Topic: File Systems and Storage

0 comments

You Don’t Know Jack about Disks:
Whatever happened to cylinders and tracks?

Traditionally, the programmer’s working model of disk storage has consisted of a set of uniform cylinders, each with a set of uniform tracks, which in turn hold a fixed number of 512-byte sectors, each with a unique address. A cylinder is made up of the corresponding track on each platter surface (all at the same radius) in a multiplatter drive. Each track is divided up like pie slices into sectors.
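
That working model gives every sector a linear address via the classic CHS-to-LBA formula, which is easy to state in code. The geometry figures below are made up for illustration - and modern drives generally only pretend to have such a geometry, which is the question the title raises:

    def chs_to_lba(c, h, s, heads_per_cyl, sectors_per_track):
        """Classic CHS-to-LBA mapping; sector numbers start at 1, not 0."""
        return (c * heads_per_cyl + h) * sectors_per_track + (s - 1)

    # A fictitious drive reporting 16 heads and 63 sectors per track:
    lba = chs_to_lba(c=2, h=4, s=7, heads_per_cyl=16, sectors_per_track=63)
    print(lba)        # 2274
    print(lba * 512)  # byte offset on disk, at 512 bytes per sector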

July 31, 2003

Topic: File Systems and Storage

3 comments

A Conversation with Mario Mazzola:
To peek into the future of networking, you don’t need a crystal ball. You just need a bit of time with Mario Mazzola, chief development officer at Cisco.

Mazzola lives on the bleeding edge of networking technology, so his present is very likely to be our future. He agreed to sit down with Queue to share some of his visions of the future and the implications he anticipates for software developers working with such rapidly evolving technologies as wireless networking, network security, and network scalability.

July 30, 2003

Topic: Networks

0 comments

The Woes of IDEs:
Programming speed and quality are hindered by poor integrated development environment interfaces.

Preaching emanating from the ranks and gurus of the human interface world is slowly convincing management, software designers--and even programmers--that better human-machine interfaces can increase productivity by speeding the work, decreasing learning time, lowering the burden on human memory, and easing users’ physical and mental stress.

July 30, 2003

Topic: Development

0 comments

Would You Like Some Data with That?:
You know wireless technology has arrived when the Golden Arches announce they’ll be equipping franchises with wireless hotspots.

Just a few months ago, McDonald’s Corporation unveiled its plan for a pilot wireless access program at 10 restaurants in Manhattan. Several hundred restaurants at various metropolitan centers are to follow later in the year. Combine this with Intel’s recent announcement of built-in wireless (802.11) support as part of its new Centrino chipset, and you can reasonably conclude that ubiquitous wireless access may soon be upon us.

July 30, 2003

Topic: Data

0 comments

Open Spectrum:
A Path to Ubiquitous Connectivity

Just as open standards and open software rocked the networking and computing industry, open spectrum is poised to be a disruptive force in the use of radio spectrum for communications. At the same time, open spectrum will be a major element that helps continue the Internet’s march to integrate and facilitate all electronic communications with open standards and commodity hardware.

July 30, 2003

Topic: Mobile Computing

0 comments

Self-Healing Networks:
Wireless networks that fix their own broken communication links may speed up their widespread acceptance.

The obvious advantage to wireless communication over wired is, as they say in the real estate business, location, location, location. Individuals and industries choose wireless because it allows flexibility of location--whether that means mobility, portability, or just ease of installation at a fixed point. The challenge of wireless communication is that, unlike the mostly error-free transmission environments provided by cables, the environment that wireless communications travel through is unpredictable. Environmental radio-frequency (RF) "noise" produced by powerful motors, other wireless devices, microwaves--and even the moisture content in the air--can make wireless communication unreliable.

July 30, 2003

Topic: Networks

1 comment

Designing Portable Collaborative Networks:
A middleware solution to keep pace with the ever-changing ways in which mobile workers collaborate.

Peer-to-peer technology and wireless networking offer great potential for working together away from the desk - but they also introduce unique software and infrastructure challenges. The traditional idea of the work environment is anchored to a central location - the desk and office - where the resources needed for the job are located.

July 30, 2003

Topic: Mobile Computing

0 comments

The Family Dynamics of 802.11:
The 802.11 family of standards is helping to move wireless LANs into promising new territory.

Three trends are driving the rapid growth of wireless LAN (WLAN): The increased use of laptops and personal digital assistants (PDAs); rapid advances in WLAN data rates (from 2 megabits per second to 108 Mbps in the past four years); and precipitous drops in WLAN prices (currently under $50 for a client and under $100 for an access point).

July 30, 2003

Topic: Mobile Computing

1 comment

Caching XML Web Services for Mobility:
In the face of unreliable connections and low bandwidth, caching may offer reliable wireless access to Web services.

Web services are emerging as the dominant application on the Internet. The Web is no longer just a repository of information but has evolved into an active medium for providers and consumers of services: Individuals provide peer-to-peer services to access personal contact information or photo albums for other individuals; individuals provide services to businesses for accessing personal preferences or tax information; Web-based businesses provide consumer services such as travel arrangement (Orbitz), shopping (eBay), and e-mail (Hotmail); and several business-to-business (B2B) services such as supply chain management form important applications of the Internet.
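
A minimal sketch of the caching idea, in Python (an illustration written for this listing, not the authors’ design; it treats responses as opaque bytes rather than parsing SOAP): serve fresh responses from a local cache, and when the wireless link fails, fall back to a stale copy rather than failing outright.

    import time
    import urllib.request

    _cache = {}  # url -> (fetched_at, body)

    def fetch(url, max_age=300):
        """Cache-aside fetch with stale fallback for flaky links."""
        now = time.time()
        if url in _cache and now - _cache[url][0] < max_age:
            return _cache[url][1]      # fresh enough: no network needed
        try:
            body = urllib.request.urlopen(url, timeout=5).read()
            _cache[url] = (now, body)
            return body
        except OSError:                # covers URLError and timeouts
            if url in _cache:
                return _cache[url][1]  # stale beats unavailable
            raise

The harder questions - consistency, invalidation, and write operations while disconnected - are exactly what this toy omits.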

July 30, 2003

Topic: Web Services

1 comment

The Future of WLAN:
Overcoming the Top Ten Challenges in wireless networking--will it allow wide-area mesh networks to become ubiquitous?

Since James Clerk Maxwell first mathematically described electromagnetic waves almost a century and a half ago, the world has seen steady progress toward using them in better and more varied ways. Voice has been the killer application for wireless for the past century. As performance in all areas of engineering has improved, wireless voice has migrated from a mass broadcast medium to a peer-to-peer medium. The ability to talk to anyone on the planet from anywhere on the planet has fundamentally altered the way society works and the speed with which it changes.

July 9, 2003

Topic: Networks

1 comment

Putting It All Together:
Component integration is one of the tough challenges in embedded system design. Designers search for conservative design styles and reliable techniques for interfacing and verification.

With the growing complexity of embedded systems, more and more parts of a system are reused or supplied, often from external sources. These parts range from single hardware components or software processes to hardware-software (HW-SW) subsystems. They must cooperate and share resources with newly developed parts such that all of the design constraints are met. This, simply speaking, is the integration task, which ideally should be a plug-and-play procedure. This does not happen in practice, however, not only because of incompatible interfaces and communication standards but also because of specialization.

April 1, 2003

Topic: Embedded Systems

0 comments

Blurring Lines Between Hardware and Software:
Software development for embedded systems clearly transcends traditional "programming" and requires intimate knowledge of hardware, as well as deep understanding of the underlying application that is to be implemented.

Motivated by technology leading to the availability of many millions of gates on a chip, a new design paradigm is emerging. This new paradigm allows the integration and implementation of entire systems on one chip.

April 1, 2003

Topic: Embedded Systems

0 comments

Division of Labor in Embedded Systems:
You can choose among several strategies for partitioning an embedded application over incoherent processor cores. Here’s practical advice on the advantages and pitfalls of each.

Increasingly, embedded applications require more processing power than can be supplied by a single processor, even a heavily pipelined one that uses a high-performance architecture such as very long instruction word (VLIW) or superscalar. Simply driving up the clock is often prohibitive in the embedded world because higher clocks require proportionally more power, a commodity often scarce in embedded systems. Multiprocessing, where the application is run on two or more processors concurrently, is the natural route to ever more processor cycles within a fixed power budget.
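
One classic partitioning strategy is a pipeline of stages connected by message queues. As a rough desktop analogy (processes that share no memory and communicate only by explicit messages stand in for incoherent cores), here is a hypothetical two-stage split; the stage names and the toy “filter” are invented for illustration:

    from multiprocessing import Process, Queue

    def stage_filter(inbox, outbox):
        """Stage 1: preprocess samples, then pass them along."""
        while (sample := inbox.get()) is not None:
            outbox.put(sample * 0.5)   # stand-in for a DSP filter step
        outbox.put(None)               # propagate the end-of-stream marker

    def stage_encode(inbox):
        """Stage 2: consume filtered samples."""
        while (sample := inbox.get()) is not None:
            print("encoded", sample)

    if __name__ == "__main__":
        q1, q2 = Queue(), Queue()
        p1 = Process(target=stage_filter, args=(q1, q2))
        p2 = Process(target=stage_encode, args=(q2,))
        p1.start(); p2.start()
        for s in (1.0, 2.0, 3.0):
            q1.put(s)
        q1.put(None)                   # signal end of stream
        p1.join(); p2.join()

The design point the analogy preserves: with no coherent shared memory, all coordination must flow through the queues, so the cut points between stages determine the communication cost.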

April 1, 2003

Topic: Embedded Systems

0 comments

SoC: Software, Hardware, Nightmare, Bliss:
System-on-a-chip design offers great promise by shrinking an entire computer to a single chip. But with the promise come challenges that need to be overcome before SoC reaches its full potential.

System-on-a-chip (SoC) design methodology allows a designer to create complex silicon systems from smaller working blocks, or systems. By providing a method for easily supporting proprietary functionality in a larger context that includes many existing design pieces, SoC design opens the craft of silicon design to a much broader audience.

April 1, 2003

Topic: Embedded Systems

0 comments

Programming Without a Net:
Embedded systems programming presents special challenges to engineers unfamiliar with that environment.

Embedded systems programming presents special challenges to engineers unfamiliar with that environment. In some ways it is closer to working inside an operating system kernel than writing an application for use on the desktop. Here’s what to look out for.

April 1, 2003

Topic: Embedded Systems

0 comments

A Conversation with Jim Ready:
Linux may well play a significant role in the future of the embedded systems market, where the majority of software is still custom built in-house and no large player has preeminence.

Linux may well play a significant role in the future of the embedded systems market, where the majority of software is still custom built in-house and no large player has preeminence. The constraints placed on embedded systems are very different from those on the desktop. We caught up with Jim Ready of MontaVista Software to talk about what he sees in the future of Linux as the next embedded operating system (OS).

April 1, 2003

Topic: Embedded Systems

0 comments

The Truth About Embedded Systems:
Embedded systems are different in several ways from other software environments.

Embedded systems are different in several ways from other software environments. The hardware they run on is often resource-constrained in terms of both memory and processor cycles, but still these systems must respond in realtime. They control the brakes of cars, the flaps of airplanes, traffic signaling systems, medical equipment, and other life-critical devices. Programming as if someone’s life depended on it is a new concept to many systems engineers.

April 1, 2003

Topic: System Evolution

0 comments

Scripting Web Services Prototypes:
As web services become increasingly sophisticated, their practitioners will require skills spanning transaction processing, database management, middleware integration, and asynchronous messaging.

IBM Lightweight Services (LWS), an experimental hosting environment, aims to support rapid prototyping of complex services while insulating developers from advanced issues in multi-threading, transactions, and resource locking. To achieve this we adapt a high-level, event-driven, single-threaded scripting environment to server-side application hosting. Developers may use this freely available environment to create robust web services that store persistent data, consume other services, and integrate with existing middleware. Lightweight services are invoked by standard HTTP SOAP clients, and may in turn invoke other web services using WSDL.

March 18, 2003

Topic: Web Services

0 comments

Interview with Adam Bosworth:
The changes that are going to be driven by web services will result in a major language extension.

Adam Bosworth’s contributions to the development and evolution of Web Services began before the phrase "Web Services" had even been coined. That’s because while working as a senior manager at Microsoft in the late ’90s, he became one of the people most central to the effort to define an industry XML specification. While at Microsoft, he also served as General Manager of the company’s WebData organization (with responsibility for defining Microsoft’s long-term XML strategy) in addition to heading up the effort to develop the HTML engine used in Internet Explorer 4 & 5.

March 18, 2003

Topic: Web Services

0 comments

Securing the Edge:
Common wisdom has it that enterprises need firewalls to secure their networks.

Common wisdom has it that enterprises need firewalls to secure their networks. In fact, as enterprise network practitioners can attest, the "must-buy-firewall" mentality has pervaded the field.

March 18, 2003

Topic: Web Services

0 comments

Finding the Right Questions:
Does the world really need another computing magazine?

Does the world really need another computing magazine? Surely, that’s a legitimate question. By any measure, we already have an overwhelming number of publications to choose from. But how many do you actually read? And of those, how many do you feel really contribute to your knowledge and understanding of emerging software technologies and capabilities?

March 18, 2003

Topic: Education

26 comments

Web Services: Promises and Compromises:
Much of web services’ initial promise will be realized via integration within the enterprise.

Much of web services’ initial promise will be realized via integration within the enterprise, either with legacy applications or new business processes that span organizational silos. Enterprises need organizational structures that support this new paradigm.

March 12, 2003

Topic: Web Services

0 comments

An Open Web Services Architecture:
The name of the game is web services.

The name of the game is web services - sophisticated network software designed to bring us what we need, when we need it, through any device we choose. We are getting closer to this ideal, as in recent years the client/server model has evolved into web-based computing, which is now evolving into the web services model. In this article, I will discuss Sun Microsystems’ take on web services, specifically Sun ONE: an open, standards-based web services framework. I’ll share with you Sun’s decision-making rationales regarding web services, and discuss directions we are moving in.

March 4, 2003

Topic: Web Services

0 comments

The Deliberate Revolution:
Transforming Integration With XML Web Services

While detractors snub XML web services as CORBA with a weight problem, industry cheerleaders say these services are ushering in a new age of seamless integrated computing. But for those of us whose jobs don’t involve building industry excitement, what do web services offer?

March 4, 2003

Topic: Web Services

0 comments