


Research for Practice

Education




Originally published in Queue vol. 15, no. 3





Related:

Ellen Chisa - Evolution of the Product Manager
Better education needed to develop the discipline


Jon P. Daries, Justin Reich, Jim Waldo, Elise M. Young, Jonathan Whittinghill, Daniel Thomas Seaton, Andrew Dean Ho, Isaac Chuang - Privacy, Anonymity, and Big Data in the Social Sciences
Quality social science research and the privacy of human subjects requires trust.


Michael J. Lutz, J. Fernando Naveda, James R. Vallino - Undergraduate Software Engineering: Addressing the Needs of Professional Software Development



Comments

(newest first)

Nick P | Tue, 22 Aug 2017 16:14:47 UTC

I'd love to see regular debates like these in the various subfields, as each has stuff worth debating.

I was doing knock-offs of the Tanenbaum vs. Torvalds debate, pushing microkernel-based systems (especially separation kernels). I get Linus's points, since there's definitely sense in them. I advocated the Nizza Architecture specifically to restrict how much of the system is done like a distributed system. Yet timing-related errors kept cropping up in the Linux kernel (e.g., between syscalls), showing even the monolith had similar distributed-system problems. On the other side, I found Tanenbaum et al. were correct that most opponents didn't know anything about microkernels past Mach. The field-proven benefits of QNX, BeOS, KeyKOS, and later MINIX 3 were unknown to them, which shows their opposition to microkernels owed less to technical merit than to following the crowd.

Another was N-version programming. I pushed this as a counter to both failure and subversion. The first problem I noticed was that different solutions to the same simple protocol often looked pretty similar. I hypothesized that we're taught to use arrays, control flow, stacks, GCs, and so on in similar ways more often than not. My enhancement was to require the tools (especially languages or VMs) to have internal structures as different as possible, plus different coding styles. Only the input-to-output mapping should be the same. On the hardware side, use different suppliers with different sub-components and process nodes if possible; same for power supplies and maybe cabling. As for subversion, I proposed having ideologically opposed people work together, with, say, Israelis coding and Palestinians reviewing, trading places occasionally. I still think N-version can help so long as each submission is high quality but internally different, as I describe.
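
To make that concrete, here is a minimal sketch in Python (the three version functions and the summing task are made up for illustration, not from the debate): each version computes the same input-to-output mapping with a deliberately different internal structure, and a 2-of-3 majority vote decides the result, failing safe if there is no majority.

    # Three deliberately different implementations of the same mapping
    # (sum of 1..n): iterative, closed-form, and built-in/functional style.
    def sum_loop(n):
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_formula(n):
        return n * (n + 1) // 2

    def sum_builtin(n):
        return sum(range(1, n + 1))

    VERSIONS = [sum_loop, sum_formula, sum_builtin]

    def n_version(n):
        # Run every version and take a majority vote; refuse to answer
        # (fail safe) rather than guess if no majority exists.
        results = [version(n) for version in VERSIONS]
        for candidate in set(results):
            if results.count(candidate) * 2 > len(results):
                return candidate
        raise RuntimeError("versions disagree: no majority result")

    print(n_version(1000))  # all three agree -> 500500

The same skeleton works when the versions come from different teams, languages, or hardware; the voter only ever sees the input-to-output mapping.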

Also, a shout-out to @vyodaiken for making the list. Countering the formal methodist with NonStop was the perfect reply! :) Those systems just run and run and run despite failures at every level. Through formal verification? No, just systematic use of mitigations in every layer, developed from at least a thousand instances of failure that customers reported in their systems. It's an empirical method of adapting a solution with a combination of informal reasoning and field data, and it shows one doesn't need formal methods to make something work nearly perfectly in practice.

That said, formal verification of protocols for things similar to NonStop has always found errors, especially corner cases that humans and tests missed. It's more accurate to say that combining the empirical and formal methods can increase correctness over using just one of them. Due to cost, apply it only to the most important parts, like replication or failover in NonStop. Four teams applied it to CPUs of varying sizes, with no errors found during FPGA or silicon testing. That didn't happen for non-formally-verified chips, which also were rarely first-pass silicon. So that's a further argument that formal verification can improve something even as good as NonStop.
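
To show what such a corner case looks like in practice, here is a toy sketch in Python (a hypothetical primary/backup failover rule I made up; it is not the NonStop protocol or any of those CPU projects): exhaustively exploring every reachable state of a naive "promote on any missed heartbeat" rule immediately turns up the split-brain trace that informal reasoning and ordinary testing tend to miss.

    from collections import deque

    # State: (role of node A, role of node B, is A alive?)
    INIT = ('primary', 'backup', True)

    def successors(state):
        a_role, b_role, a_alive = state
        if a_alive:
            # The primary node A can crash at any point.
            yield 'A crashes', (a_role, b_role, False)
        if b_role == 'backup':
            # Naive failover rule: B promotes itself on any missed heartbeat,
            # which can also happen from network delay while A is still up.
            yield 'B misses a heartbeat and promotes itself', (a_role, 'primary', a_alive)
        if not a_alive:
            # A crashed node restarts and rejoins as a backup.
            yield 'A restarts as a backup', ('backup', b_role, True)

    def invariant(state):
        a_role, b_role, a_alive = state
        # Safety property: never two live primaries ("split brain").
        return not (a_alive and a_role == 'primary' and b_role == 'primary')

    def check():
        # Breadth-first exploration of every reachable state.
        frontier = deque([(INIT, [])])
        seen = {INIT}
        while frontier:
            state, trace = frontier.popleft()
            if not invariant(state):
                return state, trace
            for label, nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, trace + [label]))
        return None, None

    bad, trace = check()
    if bad is None:
        print('invariant holds in every reachable state')
    else:
        print('split brain reached:', bad)
        for step in trace:
            print('  ', step)

Real model checkers and proof assistants scale this idea up, but the principle is the same: enumerate or reason about all behaviors instead of only the ones a test author happens to imagine.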

As for Code-Pointer Integrity, that was a great piece of research I pushed as an interim solution in a few places. They wisely used segments, as NaCl and security kernels did before them. Lacking a security-focused CPU, one should use whatever hardware-accelerated forms of isolation are available to the best of one's ability. The CPI technique used them much more efficiently than prior work. The attack paper focused on the watered-down version instead of the strong version, so of course it found faults. As usual, we'd have been much better off if the attackers had instead thrown their brainpower at the strongest stuff. So the CPI people rightly countered by saying so.









