Abstracts

Concurrency Checking with CHESS
Tom Ball (Microsoft Research)

The CHESS project at MSR championed the concept of concurrency unit testing and validated the small scope hypothesis for concurrency bugs: many, if not most, concurrency bugs can be found using a few threads and simple scenarios. However, getting users of the CHESS concurrency testing tool to think about simple concurrency scenarios and to write concurrency unit tests has not been easy. Furthermore, real systems can exhibit non-deterministic behavior that a tool like CHESS must deal with in order to make a best effort at reproducing errors while maximizing coverage. Finally, CHESS was designed to provide deterministic scheduling outside the underlying OS scheduler via API wrappers, while guaranteeing that every schedule possible on the OS (and no more) could be explored in the limit; the lack of knowledge about the blocking and ordering properties of APIs greatly complicated the creation of these wrappers. To better share the lessons learned from years of experience developing CHESS, the CHESS team (Sebastian Burckhardt, Madan Musuvathi, Shaz Qadeer, and myself) is happy to announce that the source code of CHESS is now available at http://chesstool.codeplex.com
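
To make the small scope hypothesis concrete, here is a minimal concurrency unit test of the kind CHESS is built to explore: two threads, one shared variable, one assertion. CHESS itself targets .NET and Win32 programs, so this Java sketch is purely illustrative.

    // A minimal concurrency unit test in the spirit of CHESS (illustrative
    // Java; CHESS itself targets .NET and Win32 programs). Two threads and
    // one shared variable suffice to expose a lost update: a systematic
    // scheduler can drive the program into the interleaving that breaks
    // the final assertion.
    public class CounterTest {
        static int counter = 0;

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(() -> { int tmp = counter; counter = tmp + 1; });
            Thread t2 = new Thread(() -> { int tmp = counter; counter = tmp + 1; });
            t1.start(); t2.start();
            t1.join(); t2.join();
            // Under the interleaving read1, read2, write1, write2 both
            // threads read 0 and the final value is 1, not 2.
            assert counter == 2 : "lost update: counter = " + counter;
        }
    }

Run with assertions enabled (java -ea CounterTest): naive stress testing may never hit the failing interleaving, whereas a systematic scheduler enumerates all of them.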

A few billion lines of code later: experiences in commercializing a static checking tool
Dawson Engler (Stanford, Coverity)

This talk will discuss some of the surprises (amusing, depressing, concerning) we encountered when taking a research bug-finding tool and peddling it to hundreds of companies.

Astrée: Building a Static Analyzer for Embedded C Programs
Antoine Miné (ENS)

Astrée is a sound static analyzer, based on the theory of abstract interpretation, that aims to prove the absence of run-time errors in industrial safety-critical embedded C software. While developing Astrée, we learned about the gap between academic static analysis methods, which are often evaluated only on simplified languages, and the requirements of an industrial static analyzer for large-scale software written in a real-life language. In this presentation, we discuss the design and implementation of Astrée, focusing on some of the choices and innovations we made in terms of semantic definitions, abstractions, and algorithms in order to bridge this gap.
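
As a taste of the machinery involved, the sketch below implements a toy interval domain, one of the simplest abstractions in the abstract-interpretation toolbox. Astrée analyzes C and its real domains are far richer; the Java code and all names in it are invented for illustration.

    // Toy interval abstract domain (illustrative Java; Astrée's real
    // domains are far more sophisticated and model machine arithmetic).
    // An interval [lo, hi] over-approximates every value an expression
    // can take, so "0 is outside the divisor's interval" soundly proves
    // a division free of run-time errors.
    public class Interval {
        final long lo, hi;                 // invariant: lo <= hi

        Interval(long lo, long hi) { this.lo = lo; this.hi = hi; }

        // Least upper bound: the smallest interval covering both inputs.
        Interval join(Interval o) {
            return new Interval(Math.min(lo, o.lo), Math.max(hi, o.hi));
        }

        // Abstract addition (ignoring overflow, which a real domain must
        // model): the extreme sums lie at the endpoints.
        Interval add(Interval o) { return new Interval(lo + o.lo, hi + o.hi); }

        // Could dividing by a value in this interval fault at run time?
        boolean mayBeZero() { return lo <= 0 && 0 <= hi; }

        public static void main(String[] args) {
            Interval x = new Interval(1, 10);        // e.g. a sensor input
            Interval y = x.add(new Interval(5, 5));  // y = x + 5 is [6, 15]
            System.out.println("x/y may fault: " + y.mayBeZero()); // false
        }
    }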

Mechanizing Program Analysis With Chord
Mayur Naik (Intel)

Chord originated as a static "Checker of Races and Deadlocks" for Java but has since evolved into an extensible, general-purpose static/dynamic program analysis framework for Java. In this talk, I will describe the goals, architecture, and design decisions underlying Chord. I will illustrate how Chord enables program analysis specialists as well as outsiders to productively craft practical program analyses by (1) providing powerful tools such as dynamic analysis and declarative Datalog/BDD-based analysis, integrated seamlessly with imperatively expressed static analysis, and (2) allowing program analyses to be inter-connected in complex ways while exposing parallelism and ensuring determinism. I will also show examples from parallel computing and cloud computing where Chord has been applied, by me and others, within Intel Labs and in the academic community.
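
To illustrate the declarative style, consider the single Datalog rule reach(n) :- reach(m), calls(m, n). The Java sketch below hand-computes the least fixpoint that such a rule denotes over a toy call graph; in Chord the rule would instead be handed to the BDD-backed Datalog solver. The code and relation names here are illustrative only.

    // What a Datalog-based analysis computes (illustrative Java; Chord
    // hands such rules to a BDD-backed Datalog solver rather than running
    // hand-written loops). The rule  reach(n) :- reach(m), calls(m, n).
    // denotes the least fixpoint of the call-graph edge relation, seeded
    // with the base fact reach(main).
    import java.util.*;

    public class Reachability {
        public static void main(String[] args) {
            // calls(m, n): a toy call-graph edge relation.
            Map<String, List<String>> calls = Map.of(
                "main", List.of("parse", "run"),
                "run",  List.of("step"),
                "dead", List.of("step"));

            Set<String> reach = new HashSet<>(List.of("main")); // base fact
            boolean changed = true;
            while (changed) {                // apply the rule to a fixpoint
                changed = false;
                for (String m : new ArrayList<>(reach))
                    for (String n : calls.getOrDefault(m, List.of()))
                        changed |= reach.add(n);
            }
            // Prints main, parse, run, step (in some order); "dead" is
            // never reached, so it is absent from the result.
            System.out.println(reach);
        }
    }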

A Browser Developer's Wish List
Robert O'Callahan (Mozilla)

Web browser development is extremely challenging: compatibility with billions of legacy Web pages, specification and implementation of new Web-standard platform features, a wide range of devices and platforms to target, hundreds of millions of users to support, and severe security and performance challenges, all in an environment of white-hot competition. At Mozilla we use a variety of tools and processes to try to cope. This talk will analyze where they have been successful and where they fall down, with particular focus on major pain points such as nondeterministic test failures, mismatch between conditions "in the lab" and "in the field", and grappling with legacy code. We have bright hopes for the future, such as static array bounds checking and record-and-replay debugging, but other areas look dire, so I'll present my research wish-list --- including better performance analysis tools, verified refactorings, and static analysis packaged as assistance for code reviewers.

Academic researchers need to listen to their customers
William Pugh (Univ. of Maryland)

The FindBugs project has been fairly successful, but it hasn't had the level of impact or adoption I might have expected from our experimental results. To have practical impact, you need to figure out who your potential customers are and what their real problems are, and you need to be able to justify not only that you can solve some of their problems, but that your tools can do so more effectively than the other means they already have. I'll talk in detail about my experiences applying FindBugs at Google. FindBugs was very accurate at finding coding mistakes (75+% of the issues it identified were marked as "Should Fix" by Google engineers). However, FindBugs wasn't solving important problems for Google, and we are now working on the third reinvention of FindBugs there (after it was killed/canceled twice), hoping to find a way to use it to solve important problems effectively.
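
For context, the sketch below shows one classic instance of the kind of coding mistake FindBugs reports; the example is invented, not drawn from the Google deployment.

    // A representative FindBugs-style coding mistake (invented example,
    // not one from the Google deployment). Reference comparison of boxed
    // integers works just often enough in small tests to survive review.
    public class BugPattern {
        static boolean sameId(Integer a, Integer b) {
            // FindBugs reports this as a suspicious reference comparison
            // (RC_REF_COMPARISON): Integers from -128 to 127 are cached,
            // so == "works" for small ids and silently fails for large ones.
            return a == b;                   // should be a.equals(b)
        }

        public static void main(String[] args) {
            System.out.println(sameId(100, 100));   // true  (cached boxes)
            System.out.println(sameId(1000, 1000)); // false (distinct boxes)
        }
    }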

Mooly Sagiv (Tel Aviv University)

I will describe our experience with TVLA, a generic shape analyzer. Our experience indicates that the system is able to infer interesting quantified invariants for programs that manipulate arrays and dynamically allocated data structures, and that it is generic enough to allow rapid development of new shape analysis algorithms. Deskcheck, a new system created by Bill McCloskey and inspired by TVLA, has been applied to small but realistic systems code.
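
To give a flavor of the programs involved, the sketch below shows the kind of heap manipulation for which a shape analyzer infers quantified invariants, for example that list reversal preserves acyclicity and never dereferences null. TVLA's inputs are written in its own TVP notation; this Java rendering is illustrative only.

    // The kind of heap-manipulating code shape analyzers target
    // (illustrative Java; TVLA's inputs are written in its TVP notation).
    // A shape analysis can infer that reverse() returns an acyclic list
    // and performs no null dereference: a quantified heap invariant.
    public class ListReverse {
        static class Node { int val; Node next; Node(int v) { val = v; } }

        // In-place reversal. At every loop head, done and rest point to
        // two disjoint acyclic lists that together hold all the nodes.
        static Node reverse(Node rest) {
            Node done = null;
            while (rest != null) {
                Node tmp = rest.next;   // detach the head of rest
                rest.next = done;       // push it onto done
                done = rest;
                rest = tmp;
            }
            return done;
        }

        public static void main(String[] args) {
            Node l = new Node(1); l.next = new Node(2); l.next.next = new Node(3);
            for (Node n = reverse(l); n != null; n = n.next)
                System.out.print(n.val + " ");   // prints: 3 2 1
        }
    }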