
Kode Vicious

Broken Hearts and Coffee Mugs

The ordeal of security reviews

Dear KV,

I'm working on a project that has been selected for an external security review by a consulting company. They are asking for a lot of information but not really explaining the process to me. I can't tell what kind of review this is—pen (penetration) test or some other thing. I don't want to second-guess their work, but it seems to me they're asking for all the wrong things. Should I point them in the right direction or just keep my head down, grin, and bear it?

Reviewed

 

Dear Reviewed,

I have to say that I'm not a fan of keeping one's head down, or grinning, or bearing much of anything on someone else's behalf, but you probably knew that before you sent this note. Many practitioners in the security space are neither as organized nor as original in their thinking as KV would like. In fact, this isn't just in the security space, but let me limit my comments, for once, to a single topic.

Overall, there are two broad types of security review: white box and black box. A white-box review is one in which the attackers have nearly full access to information such as code, design documents, and other information that will make it easier for them to design and carry out a successful attack. A black-box review, or test, is one in which the attackers can see the system only in the same way that a normal user or consumer would.

Imagine you are attacking a consumer device such as a phone. In a white-box situation, you have the device, the code, the design docs, and everything else the development team came up with while building the phone; in a black-box case, you have only the phone itself. The pen-test idea currently has credence in security circles (which I ascribe to the tittering 12-year-olds who get off on saying they're responsible for "penetration testing"), but, candidly, that is just a black-box test of a system. In point of fact, the goal of any security test or review is to figure out if an attacker can carry out a successful attack against the system.

Determining what is or is not a successful attack requires the security tester to think like the attacker, a trick that KV finds easy, because at heart (what heart?), I am a terrible person whose first thought is, "How can I break this shit?" Security testing is often quite easy because of the incredibly low overall quality of software and the increasingly large number of software modules used in any product. To paraphrase Weinberg's Second Law, "If architects designed buildings the way programmers built programs, the first woodpecker that came along would destroy all of society." The difficult parts of security work are constraining the attacks to those that matter and getting past those koders with a modicum of clue who are able to build systems that at least resist the most common script kiddie attacks.

Your letter seems to imply that your external reviewers are interested in a white-box review since they are asking for a great deal of information, rather than just taking your system at face value and trying to violate it. What to expect from a white-box security review, at least at a high level, should not be a surprise to anyone who has ever participated in a design review, as the two processes should be reasonably similar. The review would work in a top-down fashion, where the reviewer asks for an overall description of the system, hopefully enshrined in a design document (please for the love of God have a design document); or the same information can be extracted, painfully, through a series of meetings.

Extracting a design in a review meeting takes a great deal longer in the absence of a design document but, again, looks similar to a design review. First, there must be a lot of coffee in the room. How much coffee? At least one pot per person, or two if you have KV in the room. With the coffee in place, you need a large white board, at least two meters (six feet) long. I also suggest implements of torture, or at least a riding crop, to keep people in line.

Then we have the typical line of interrogation: "What are the high-level features?"; "How many distinct programs make up the system?"; "What are they called?"; "How do they communicate?"; and for each program, "What are the major modules of each program?" KV once asked a software designer after he had filled a four-meter white board with named boxes, "What's the architecture that holds all this together?" to which the answer was, "This system is too complex to have an architecture." The next sound was KV's glasses clattering on the table and a very heavy sigh. Needless to say, that piece of software was riddled with bugs, and many were security related. It is not every day that KV wants to switch from coffee to gin and tonic at noon, but then there are those days.

A good reviewer will have a minimal checklist of questions to ask about each program or subsystem, but nothing too prescriptive. A security review is an exploration, a form of spelunking, in which you dig into the dirty, unloved corners of a piece of software and push on the soft parts to see if they scream, or spit green ichor, which burns—it burns and you can't wash the damned stuff off! Overly prescriptive checklists always miss the important questions. Instead, the questions should start broad and then get more focused as issues of interest appear—and trust me, they always will.

When issues are found, they should be recorded, though perhaps not in an easily portable form, since you never know who else is reading your ticketing system. Want to get inside a system? Go read the bugs. If you have a bad apple or two inside the company (and what company is free of rotten apples?) and they search for "Security P1," they're going to walk away with a lot of fodder for zero-day attacks against your system.

Once the system and its modules have been described, the next step is to look at the module APIs (application programming interfaces). You can learn a lot about a system and its security from looking at its APIs, though some of what you will learn can never be unseen. It can be pretty scarring, but it has to be done. I feel most of these steps ought to have wine (or something stronger) pairings. For readers in California, I recommend a nice indica for this kind of work.

The APIs have to be looked at, of course, because they show what data is being passed around and how that data is being handled. There are security scanning tools for this type of work, which can direct you toward where to perform code reviews, but it's often best to spot-check the APIs yourself if you have any ability or intuition around security.
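To make the point concrete, here is a minimal, hypothetical sketch of the kind of data-handling flaw that jumps out of an API during a spot check (the function names and schema are invented for illustration, not taken from any real review): one variant splices untrusted input directly into a SQL statement, the other binds it as a parameter.

```python
import sqlite3

# Hypothetical pair of APIs a reviewer might compare. The first
# interpolates caller-supplied data straight into the query text; the
# second keeps data out of the SQL grammar via a bound parameter.

def find_user_unsafe(db: sqlite3.Connection, name: str):
    # Red flag: untrusted input lands inside the SQL statement itself.
    return db.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(db: sqlite3.Connection, name: str):
    # The '?' placeholder lets the driver treat the input purely as data.
    return db.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
db.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "' OR '1'='1"
print(len(find_user_unsafe(db, payload)))  # every row comes back: injection
print(len(find_user_safe(db, payload)))    # no rows: payload treated as data
```

Nothing in the two signatures differs, which is exactly why the reviewer has to read how the data is handled, not just what is passed.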

Lastly, we come to the code reviews. Any reviewer who wants to start here should be fired out of a cannon immediately. The code is actually the last thing to be reviewed—for many reasons, not the least of which is that unless the security-review team is even larger than the development team, they will never have the time to finish reviewing the code to sufficient depth.

Code reviews must be targeted and must look deeply at the things that really matter. It is all of the previous steps that have told the reviewers what really matters, and, therefore, they should be asking to look at maybe 10 percent (and hopefully less) of the code in the system. The only broad view of the code should be carried out, automatically, by the code-scanning tools previously mentioned, which include static analysis. The static analysis tools should be able to identify hot spots that the other, human reviews have missed, and then the humans have to go back into the dark corners of the code and again try to avoid being sprayed with green ichor.
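As a toy illustration of what such a scanning pass does (this stands in for no particular commercial tool), the sketch below walks a Python syntax tree and flags calls that deserve a targeted human look, which is the essence of how automated scanning points reviewers at the 10 percent worth reading:

```python
import ast

# Calls that commonly hand attacker-controlled data to an interpreter
# or a shell; a real tool's list is far longer and context-sensitive.
SUSPECT_CALLS = {"eval", "exec", "system", "popen"}

def hot_spots(source: str):
    """Return sorted (line, call-name) pairs worth a targeted code review."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            fn = node.func
            # Handle both bare calls (eval) and attribute calls (os.system).
            name = fn.id if isinstance(fn, ast.Name) else getattr(fn, "attr", None)
            if name in SUSPECT_CALLS:
                findings.append((node.lineno, name))
    return sorted(findings)

sample = """import os
cmd = input()
os.system(cmd)       # attacker-controlled shell command
result = eval(cmd)   # attacker-controlled code
"""
print(hot_spots(sample))  # → [(3, 'system'), (4, 'eval')]
```

The output is not a verdict; it is a map telling the human reviewers which dark corners to crawl back into, ichor and all.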

With the review complete, you should expect a few outputs, including summary and detailed reports, bug-tracking tickets that describe issues and mitigations (all while being secured from prying eyes), and hopefully a set of tests the QA team can use to verify that the identified security issues are fixed and do not recur in later versions of the code.

It's a long process littered with broken hearts and coffee mugs, but it can be done if the reviewers are organized and original in their thinking.

KV

 

Related articles

How to Improve Security?
It takes more than flossing once a year.
Kode Vicious
https://queue.acm.org/detail.cfm?id=2019582

Security Problem Solved?
Solutions to many of our security problems already exist, so why are we still so vulnerable?
John Viega
https://queue.acm.org/detail.cfm?id=1071728

Pickled Patches
On repositories of patches and tension between security professionals and in-house developers
Kode Vicious
https://queue.acm.org/detail.cfm?id=2856150

 

Kode Vicious, known to mere mortals as George V. Neville-Neil, works on networking and operating system code for fun and profit. He also teaches courses on various subjects related to programming. His areas of interest are code spelunking, operating systems, and rewriting your bad code (OK, maybe not that last one). He earned his bachelor's degree in computer science at Northeastern University in Boston, Massachusetts, and is a member of ACM, the Usenix Association, and IEEE. George is the coauthor with Marshall Kirk McKusick and Robert N. M. Watson of The Design and Implementation of the FreeBSD Operating System. He is an avid bicyclist and traveler who currently lives in New York City.

Copyright © 2020 held by owner/author. Publication rights licensed to ACM.


Originally published in Queue vol. 18, no. 2




Have a question for Kode Vicious? E-mail him at [email protected]. If your question appears in his column, we'll send you a rare piece of authentic Queue memorabilia. We edit e-mails for style, length, and clarity.


Related:

Simson Garfinkel, John M. Abowd, Christian Martindale - Understanding Database Reconstruction Attacks on Public Data
These attacks on statistical databases are no longer a theoretical danger.


Rich Bennett, Craig Callahan, Stacy Jones, Matt Levine, Merrill Miller, Andy Ozment - How to Live in a Post-Meltdown and -Spectre World
Learn from the past to prepare for the next battle.


Arvind Narayanan, Jeremy Clark - Bitcoin's Academic Pedigree
The concept of cryptocurrencies is built from forgotten ideas in research literature.


Geetanjali Sampemane - Internal Access Controls
Trust, but Verify




