Aim and Scope

Implementation details are critical for making results reproducible, a staple of the scientific process. Various tricks and hacks are often used in practice in order to tune our tools and analyses, but for the most part, these are not shared with the community, making it hard for researchers to understand how tools really work.

The goal of this workshop is to bring together researchers with practical experience to share their lessons, tips, and tricks, on how stuff really works. The workshop encourages presentation of negative results, things that did not work, things that will never work, and things that were made to work by various workarounds and principled hacking.


9:10-10:00 Tom Ball (Microsoft Research) Concurrency Checking with CHESS [abstract] [slides] [video]

The CHESS project at MSR championed the concept of concurrency unit testing and validated the small scope hypothesis for concurrency bugs: many, if not most, concurrency bugs can be found using a few threads and simple scenarios. However, getting users of the CHESS concurrency testing tool to think about simple concurrency scenarios and write concurrency unit tests has not been easy. Furthermore, real systems can exhibit non-deterministic behavior that a tool like CHESS must deal with in order to make a best effort at reproducing errors while maximizing coverage. Finally, CHESS was designed to provide deterministic scheduling outside the underlying OS scheduler via API wrappers, while guaranteeing that every schedule possible on the OS (and no more) could be explored in the limit. The lack of knowledge about the blocking and ordering properties of APIs greatly complicated the creation of these API wrappers. To better share the lessons learned from years of experience developing CHESS, the CHESS team (Sebastian Burckhardt, Madan Musuvathi, Shaz Qadeer, and myself) is happy to announce that the source code of CHESS is now available at
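The small scope hypothesis can be made concrete with a toy sketch (in Python, not CHESS's actual machinery): systematically enumerate every interleaving of two tiny "threads" of atomic steps and check the outcome. Even with just two threads of two steps each, exhaustive schedule exploration exposes a classic lost-update race on a shared counter.

```python
# A toy illustration of CHESS-style systematic interleaving exploration.
# (Hypothetical sketch: CHESS itself wraps real OS scheduling APIs.)
from itertools import combinations

def interleavings(t1, t2):
    """Yield every interleaving of two sequences of atomic steps."""
    n, m = len(t1), len(t2)
    for pos in combinations(range(n + m), n):
        pos, sched, i, j = set(pos), [], 0, 0
        for k in range(n + m):
            if k in pos:
                sched.append(t1[i]); i += 1
            else:
                sched.append(t2[j]); j += 1
        yield sched

def run(schedule):
    """Interpret a schedule: 'counter += 1' is really a read then a write."""
    counter, local = 0, {}
    for name, action in schedule:
        if action == "read":
            local[name] = counter          # read shared counter into a local
        else:
            counter = local[name] + 1      # write back the stale local + 1
    return counter

def explore():
    """Run every schedule of two increment threads; collect final counters."""
    t_a = [("a", "read"), ("a", "write")]
    t_b = [("b", "read"), ("b", "write")]
    return {run(s) for s in interleavings(t_a, t_b)}

print(explore())  # {1, 2}: some schedules lose an update
```

Two threads and four steps already suffice to hit the bug, which is exactly the point of the small scope hypothesis.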

10:30-11:15 Dawson Engler (Stanford, Coverity) A few billion lines of code later: experiences in commercializing a static checking tool [abstract] [slides] [video]

This talk will discuss some of the surprises (amusing, depressing, concerning) we encountered when taking a research bug-finding tool and peddling it to hundreds of companies.

11:15-12:00 Antoine Mine (ENS) Astrée: Building a Static Analyzer for Embedded C Programs [abstract] [slides] [video]

Astrée is a sound static analyzer, based on abstract interpretation theory, that aims to prove the absence of run-time errors in critical industrial embedded C software. While developing Astrée, we learned about the gap between academic static analysis methods, typically evaluated only on simplified languages, and the requirements of an industrial static analyzer for large-scale software written in a real-life language. In this presentation, we discuss the design and implementation of Astrée, focusing on some of the choices and innovations we made in semantic definitions, abstractions, and algorithms in order to bridge this gap.
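To illustrate the style of reasoning behind such an analyzer, here is a minimal interval abstract domain with widening and narrowing (a Python sketch of the general technique; Astrée's actual domains are far richer). It abstractly executes `i = 0; while i < 100: i += 1` and proves that `i` stays within [0, 100]:

```python
# Interval abstract interpretation with widening -- an illustrative sketch,
# not Astrée's implementation. Intervals are (lo, hi) pairs.
NEG_INF, POS_INF = float("-inf"), float("inf")

def join(a, b):
    """Least upper bound of two intervals."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    """Widening: give up on bounds that keep growing, forcing termination."""
    return (a[0] if a[0] <= b[0] else NEG_INF,
            a[1] if a[1] >= b[1] else POS_INF)

def step(head):
    """One abstract pass through the body of:  while i < 100: i += 1"""
    guarded = (head[0], min(head[1], 99))    # meet with the guard i < 100
    return (guarded[0] + 1, guarded[1] + 1)  # transfer function of i += 1

def analyze(entry=(0, 0)):
    head = entry
    while True:                              # ascending phase with widening
        new = widen(head, join(entry, step(head)))
        if new == head:
            break
        head = new
    for _ in range(5):                       # descending (narrowing) phase
        head = join(entry, step(head))
    return head

print(analyze())   # (0, 100): i provably stays within [0, 100]
```

Widening jumps the upper bound to infinity so the fixpoint iteration terminates; the descending phase then recovers the precise bound 100, which is enough to discharge, say, an access into an array of length 101.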

1:30-2:15 Mayur Naik (Intel) Mechanizing Program Analysis With Chord [abstract] [slides] [video]

Chord originated as a static "Checker of Races and Deadlocks" for Java but has since evolved into an extensible, general-purpose, static/dynamic program analysis framework for Java. In this talk, I will describe the goals, architecture, and design decisions underlying Chord. I will illustrate how Chord enables program analysis specialists as well as outsiders to productively craft practical program analyses by (1) providing powerful tools, such as dynamic analysis and declarative Datalog/BDD-based analysis, integrated seamlessly with imperatively expressed static analyses, and (2) enabling program analyses to be inter-connected in complex ways while exposing parallelism and ensuring determinism. I will also show examples from parallel computing and cloud computing where Chord has been applied, by myself and others, within Intel Labs and in the academic community.
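The declarative style Chord builds on can be sketched in a few lines. Below is a toy Datalog-style fixpoint in Python (Chord solves such rules over BDDs; this is only an illustration with made-up relation names): the rule `reach(a, c) :- call(a, b), reach(b, c)` over a small call graph, with `reach(a, b) :- call(a, b)` as the base case.

```python
# Toy bottom-up Datalog evaluation, in the spirit of Chord's declarative
# analyses (illustrative only; the relation names are hypothetical).
def reachable(call_edges):
    """Least fixpoint of call-graph reachability.

    reach(a, b) :- call(a, b).
    reach(a, c) :- call(a, b), reach(b, c).
    """
    reach = set(call_edges)
    while True:
        new = {(a, c) for (a, b) in call_edges
                      for (b2, c) in reach if b == b2}
        if new <= reach:                  # fixpoint: no new facts derivable
            return reach
        reach |= new

edges = {("main", "parse"), ("parse", "lex"), ("main", "emit")}
print(sorted(reachable(edges)))
# the recursive rule derives ("main", "lex") from the two base facts
```

The analysis writer states the rules; the engine handles iteration to a fixpoint, which is what makes it practical to compose many such analyses.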

2:15-3:00 Robert O'Callahan (Mozilla) A Browser Developer's Wish List [abstract] [slides] [video]

Web browser development is extremely challenging: compatibility with billions of legacy Web pages, specification and implementation of new Web-standard platform features, a wide range of devices and platforms to target, hundreds of millions of users to support, and severe security and performance challenges, all in an environment of white-hot competition. At Mozilla we use a variety of tools and processes to try to cope. This talk will analyze where they have been successful and where they fall down, with particular focus on major pain points such as nondeterministic test failures, mismatch between conditions "in the lab" and "in the field", and grappling with legacy code. We have bright hopes for the future, such as static array bounds checking and record-and-replay debugging, but other areas look dire, so I'll present my research wish-list --- including better performance analysis tools, verified refactorings, and static analysis packaged as assistance for code reviewers.
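One item on that list, record-and-replay debugging, can be sketched in miniature (a hypothetical Python illustration, not any browser's actual mechanism): record every nondeterministic value on a live run, then feed the log back so a failing run can be reproduced deterministically.

```python
# A minimal sketch of record-and-replay: capture nondeterministic inputs
# once, then replay them verbatim. (Hypothetical names; real systems must
# also capture scheduling, I/O, timers, signals, ...)
import random

class Recorder:
    def __init__(self, log=None):
        self.log = list(log) if log is not None else []
        self.replaying = log is not None
        self.pos = 0

    def nondet(self, produce):
        if self.replaying:                 # replay: reuse the recorded value
            v = self.log[self.pos]
            self.pos += 1
            return v
        v = produce()                      # record: capture the live value
        self.log.append(v)
        return v

def flaky_computation(rec):
    # stands in for timer reads, network timing, scheduling decisions, ...
    return sum(rec.nondet(lambda: random.randint(0, 9)) for _ in range(3))

live = Recorder()
first = flaky_computation(live)
replay = flaky_computation(Recorder(log=live.log))
print(first == replay)  # True: the replayed run reproduces the live run
```

The appeal for nondeterministic test failures is clear: once the log of a failing run exists, the failure stops being "flaky" and can be debugged like any deterministic bug.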

3:30-4:15 William Pugh (Univ. of Maryland) Academic researchers need to listen to their customers [abstract] [slides] [video]

The FindBugs project has been fairly successful, but it hasn't had the level of impact or adoption I might have expected from our experimental results. To have practical impact, you need to figure out who your potential customers are and what their real problems are, and you must be able to show not only that you can solve some of their problems, but that your tool can do so more effectively than the other means they have of solving those problems. I'll talk in detail about my experiences applying FindBugs at Google. FindBugs was very accurate at finding coding mistakes (over 75% of the issues it identified were marked "Should Fix" by Google engineers). However, FindBugs wasn't solving important problems for Google, and we are now working on the third reinvention of FindBugs there (after it was killed/canceled twice), hoping to find a way to use it to solve important problems effectively.
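The bug-pattern style of checking that FindBugs pioneered can be illustrated with a tiny analogue (FindBugs itself matches patterns over Java bytecode; this hypothetical sketch matches over Python syntax trees instead): flag `x is <literal>` comparisons, an identity test where equality was almost certainly meant.

```python
# A tiny analogue of a FindBugs bug pattern, as a Python AST checker.
# (Illustrative only; FindBugs operates on Java bytecode, not Python.)
import ast

class IsLiteralChecker(ast.NodeVisitor):
    """Flag `x is <literal>`: identity test where `==` was probably meant."""
    def __init__(self):
        self.warnings = []

    def visit_Compare(self, node):
        for op, rhs in zip(node.ops, node.comparators):
            if (isinstance(op, (ast.Is, ast.IsNot))
                    and isinstance(rhs, ast.Constant)
                    and not isinstance(rhs.value, (bool, type(None)))):
                self.warnings.append(
                    f"line {node.lineno}: 'is' comparison with literal "
                    f"{rhs.value!r}; did you mean '=='?")
        self.generic_visit(node)

source = """
def check(name):
    if name is "admin":      # bug: identity test on a string literal
        return True
    return name is None      # fine: 'is None' is idiomatic
"""
checker = IsLiteralChecker()
checker.visit(ast.parse(source))
print(checker.warnings)
```

The checker is precise on this pattern, which mirrors the talk's point: precision on coding mistakes is necessary but not sufficient; the pattern must also matter to the customer.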

4:15-5:00 Mooly Sagiv (Tel Aviv University) Experience with Generic Shape Analysis [abstract] [slides] [video]

I will describe our experience with TVLA, a generic shape analyzer. Our experience indicates that the system can infer interesting quantified invariants for programs that manipulate arrays and dynamically allocated data structures, and that it is generic enough to allow rapid development of new shape analysis algorithms. Deskcheck, a new system created by Bill McCloskey and inspired by TVLA, has been applied to small but realistic systems code. The original TVLA system is described here. The theory behind TVLA is described here. Handling of concurrency is described here. Techniques for reducing the state space are described here, here, and here.
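The three-valued (Kleene) logic at the heart of TVLA can be sketched in a few lines (a notation sketch only; TVLA itself is far richer). Predicates over abstract heaps take values in {0, 1/2, 1}, and merging concrete nodes into a summary node blurs predicates on which they disagree to the indefinite value 1/2:

```python
# Kleene three-valued logic, the foundation TVLA builds on (sketch only).
from fractions import Fraction

HALF = Fraction(1, 2)   # "maybe": the third truth value

def k_and(a, b):
    return min(a, b)

def k_or(a, b):
    return max(a, b)

def k_not(a):
    return 1 - a

def info_join(a, b):
    """Information-order join: definite values that disagree become 1/2."""
    return a if a == b else HALF

# merging two concrete nodes into a summary node blurs disagreeing predicates
print(info_join(1, 0))   # 1/2: the predicate holds for one node, not the other
print(k_and(1, HALF))    # 1/2: conjunction with an unknown stays unknown
print(k_or(1, HALF))     # 1: true absorbs unknown, soundly
```

Soundness comes from always erring toward 1/2: the abstraction may lose precision on a summary node, but it never claims a definite fact that some represented concrete heap violates.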



LFX 2010 is supported by: