Interoperability

Vol. 9 No. 6 – June 2011

Articles

Computing without Processors

Heterogeneous systems allow us to target our programming to the appropriate environment.

Satnam Singh, Microsoft Research Cambridge, UK


From the programmer's perspective, the distinction between hardware and software is being blurred. As programmers struggle to meet the performance requirements of today's systems, they will face an ever-increasing need to exploit alternative computing elements such as GPUs (graphics processing units), which are graphics cards subverted for data-parallel computing,11 and FPGAs (field-programmable gate arrays), also known as soft hardware.

The current iteration of mainstream computing architectures is based on cache-coherent multicore processors. Variations on this theme include Intel's experimental Single-Chip Cloud Computer, which contains 48 cores that are not cache coherent. This path, however, is dictated by the end of frequency scaling rather than by how programmers wish to write software.4 The conventional weapons available for writing concurrent and parallel software for such multicore systems are largely based on abstractions developed for writing operating systems (e.g., locks and monitors), yet these are not the right abstractions for writing parallel applications.
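
As a rough illustration of that last point (this sketch is not from the article; the example and names are invented here), the fragment below contrasts a lock-and-shared-state computation with a data-parallel formulation of the same sum. The locked version forces the programmer to manage shared mutable state; the data-parallel version expresses only the per-element work and leaves scheduling to the runtime.

```python
# Minimal sketch: summing squares with an explicit lock vs. a data-parallel map.
from concurrent.futures import ThreadPoolExecutor
from threading import Lock

data = list(range(1_000))

# Lock-and-shared-state style: correct, but the synchronization is manual
# and every worker serializes on the same lock.
total = 0
lock = Lock()

def add_square(x):
    global total
    with lock:
        total += x * x

with ThreadPoolExecutor() as pool:
    list(pool.map(add_square, data))   # force the map to run

# Data-parallel style: no shared mutable state, no locks to get wrong.
with ThreadPoolExecutor() as pool:
    total_parallel = sum(pool.map(lambda x: x * x, data))

assert total == total_parallel
```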

by Satnam Singh

DSL for the Uninitiated

Domain-specific languages bridge the semantic gap in programming

Debasish Ghosh, Anshinsoft


One of the main reasons why software projects fail is the lack of communication between the business users, who actually know the problem domain, and the developers who design and implement the software model. Business users understand the domain terminology, and they speak a vocabulary that may be quite alien to the software people; it's no wonder that the communication model can break down right at the beginning of the project life cycle.

A DSL (domain-specific language)1,3 bridges the semantic gap between business users and developers by encouraging better collaboration through shared vocabulary. The domain model that the developers build uses the same terminologies as the business. The abstractions that the DSL offers match the syntax and semantics of the problem domain. As a result, users can get involved in verifying business rules throughout the life cycle of the project.
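
To give a flavor of the idea (the domain, class names, and API below are invented for this summary and are not drawn from Ghosh's article), a tiny internal DSL for a hypothetical order-entry domain might let code read close to the business user's own vocabulary:

```python
# Illustrative sketch of an internal DSL embedded in Python.
class Order:
    def __init__(self, side, quantity, symbol):
        self.side, self.quantity, self.symbol = side, quantity, symbol
        self.limit_price = None

    def at_limit(self, price):
        self.limit_price = price
        return self                      # fluent style: keep the sentence going

    def __repr__(self):
        limit = f" at limit {self.limit_price}" if self.limit_price is not None else ""
        return f"{self.side} {self.quantity} shares of {self.symbol}{limit}"

def buy(quantity, symbol):
    return Order("buy", quantity, symbol)

def sell(quantity, symbol):
    return Order("sell", quantity, symbol)

# Reads close to how a business user would phrase the rule:
order = buy(100, "IBM").at_limit(120)
print(order)    # buy 100 shares of IBM at limit 120
```

Because the surface syntax mirrors the domain vocabulary, a business user can review such rules without having to understand the host language's machinery underneath.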

by Debasish Ghosh

Interviewing Techniques

Separating the good programmers from the bad


Dear KV,

My work group has just been given approval to hire four new programmers, and now all of us have to interview people, both on the phone and in person. I hate interviewing people. I never know what to ask. I've also noticed that people tend to be careless with the truth when writing their resumes. We're considering a programming test for our next round of interviewees, because we realized that some previous candidates clearly couldn't program their way out of a paper bag. There have to be tricks to speeding up hiring without compromising whom we hire.

by George Neville-Neil

Case Study: Interoperability Testing

Microsoft's Protocol Documentation Program: Interoperability Testing at Scale

A Discussion with Nico Kicillof, Wolfgang Grieskamp and Bob Binder


In 2002, Microsoft began the difficult process of verifying much of the technical documentation for its Windows communication protocols. The undertaking came about as a consequence of a consent decree Microsoft entered into with the U.S. Department of Justice and several state attorneys general that called for the company to make available certain client-server communication protocols for third-party licensees. A series of RFC-like technical documents were then written for the relevant Windows client-server and server-server communication protocols, but to ensure interoperability Microsoft needed to verify the accuracy and completeness of those documents.

From the start, it was clear this wouldn't be a typical QA (quality assurance) project. First and foremost, a team would be required to test documentation, not software, which is an inversion of the normal QA process; and the documentation in question was extensive, consisting of more than 250 documents—30,000 pages in all. In addition, the compliance deadlines were tight. To succeed, the Microsoft team would have to find an efficient testing methodology, identify the appropriate technology, and train an army of testers—all within a very short period of time.

This case study considers how the team arrived at an approach to that enormous testing challenge. More specifically, it focuses on one of the testing methodologies used—model-based testing—and the primary challenges that have emerged in adopting that approach for a very large-scale project. Two lead engineers from the Microsoft team and an engineer who played a role in reviewing the Microsoft effort tell the story.
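
To give a rough sense of what model-based testing means in practice (this toy sketch is illustrative only; it does not reflect Microsoft's actual tools, protocols, or documentation), a behavioral model can be written as a state machine, explored to generate test sequences, and then replayed against an implementation whose observed behavior is checked step by step against the model's prediction:

```python
# Illustrative model-based testing sketch for a toy connection protocol.
from itertools import product

# The model: allowed transitions, written as a (state, action) -> next_state table.
MODEL = {
    ("closed", "connect"): "open",
    ("open", "send"): "open",
    ("open", "close"): "closed",
}

def model_run(actions):
    """Return the state trace the model predicts, or None if the sequence is illegal."""
    state, trace = "closed", ["closed"]
    for action in actions:
        state = MODEL.get((state, action))
        if state is None:
            return None
        trace.append(state)
    return trace

class Connection:
    """Hand-written stand-in for the system under test."""
    def __init__(self):
        self.is_open = False
    def step(self, action):
        if action == "connect":
            self.is_open = True
        elif action == "close":
            self.is_open = False
        elif action == "send" and not self.is_open:
            raise RuntimeError("send on closed connection")
        return "open" if self.is_open else "closed"

# Explore every action sequence up to length 3, keep the ones the model allows,
# replay each against the implementation, and compare the traces.
for n in (1, 2, 3):
    for actions in product(["connect", "send", "close"], repeat=n):
        expected = model_run(actions)
        if expected is None:
            continue                      # the model forbids this sequence
        sut = Connection()
        observed = ["closed"] + [sut.step(a) for a in actions]
        assert observed == expected, (actions, observed, expected)

print("all model-derived test sequences passed")
```

The same idea scales up by enriching the model (parameters, message contents, timing) and letting a tool, rather than a human tester, enumerate the behaviors worth checking.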

Articles

The Robustness Principle Reconsidered

Seeking a middle ground

Eric Allman, Sendmail


"Be conservative in what you do, be liberal in what you accept from others." (RFC 793)


In 1981, Jon Postel formulated the Robustness Principle, also known as Postel's Law, as a fundamental implementation guideline for the then-new TCP. The intent of the Robustness Principle was to maximize interoperability between network service implementations, particularly in the face of ambiguous or incomplete specifications. If every implementation of some service that generates some piece of protocol did so using the most conservative interpretation of the specification and every implementation that accepted that piece of protocol interpreted it using the most generous interpretation, then the chance that the two services would be able to talk with each other would be maximized. Experience with the Arpanet had shown that getting independently developed implementations to interoperate was difficult, and since the Internet was expected to be much larger than the Arpanet, the old ad-hoc methods needed to be enhanced.
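
As a minimal illustration of the principle (not taken from Allman's article; the header format and function names here are invented), a sender can emit only the canonical form of a protocol line while the receiver tolerates reasonable variations in case, whitespace, and line endings:

```python
# Sketch: conservative sender, liberal receiver for a header-style protocol line.
def emit_header(name: str, value: str) -> str:
    """Conservative sender: canonical capitalization, single space, CRLF ending."""
    return f"{name.title()}: {value}\r\n"

def parse_header(line: str) -> tuple[str, str]:
    """Liberal receiver: accept odd case, stray spaces, and bare LF endings."""
    name, _, value = line.partition(":")
    return name.strip().lower(), value.strip()

assert emit_header("content-type", "text/plain") == "Content-Type: text/plain\r\n"
assert parse_header("CONTENT-TYPE :   text/plain\n") == ("content-type", "text/plain")
```

Allman's article goes on to ask when such generosity in the receiver helps interoperability and when it merely lets sloppy or malicious senders persist, and to seek a middle ground.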

by Eric Allman