The seventh annual Programming Languages Day will be held at the
IBM Thomas J. Watson Research Center on Monday, April 17, 2006. The day will
be held in cooperation with the
New Jersey and
New England Programming Languages and
Systems Seminars. The main goal of the event is to increase awareness of each
other's work, and to encourage interaction and collaboration.
If you plan to attend the Programming Languages Day, please register by
sending an e-mail with your name, affiliation, and contact information
to email@example.com so that we can plan for lunch and refreshments.
Location and driving directions
The Programming Languages Day will be held in room GN-F15 of the
Hawthorne 1 building at 19 Skyline Drive, Hawthorne, New York 10532.
Directions to the building can be found at http://www.watson.ibm.com/general_info_haw.html
Information about local hotels can be found at: http://www.watson.ibm.com/lodging.html
Please follow the signs at the site to the visitors' entrance and
reception, which is on the BACK SIDE of the building. From the
reception area, follow directions to room GN-F15. You will reach
Skyline Drive via Rt. 9A; both ends of Skyline Drive meet Rt. 9A. If
you come in via the south end of Skyline Drive, IBM Research is the
first building on the right. Move to the left lane, as the right
lane leads to the employees' entrance (the first entrance). Take
the next entrance (the second entrance) to reception. After you
go through the gate, park in the lot on the right side.
You are welcome from 9AM onwards, and the keynote presentation will
start at 10AM sharp.
Synchronous programming techniques for embedded systems
Synchronous programming techniques were created in the 1980s to deal with embedded systems programming. The first idea is that of cycle-based computation: behavior is a repetition of input-computation-output cycles. In practice, cycles can be sampling cycles in signal processing or continuous control, event cycles in finite-state-machine-based protocols and user interaction systems, or clock cycles in digital hardware circuits. The second idea is to use a zero-delay computation model for each reaction. The zero-delay or synchrony hypothesis fully separates concerns and removes interference between I/O and computation. It simplifies language semantics and makes application design and verification much simpler. In particular, it makes it possible to deal with large-scale concurrency in a fully deterministic and easy-to-control way, unlike classical threading and signalling systems, where non-determinism makes applications much harder to understand and verify.
Originally, synchronous languages were divided into two groups. Data-flow languages such as Lustre or Signal were tailored to continuous control and signal processing, while event-based languages such as Esterel and SyncCharts were dedicated to systems with complex control structure. A unification of both approaches is now available in two forms, which we present in the talk. The SCADE language and tool is devoted to safety-critical software systems, in particular certified avionics software. It is intensively used in major avionics programs at Airbus (e.g., the A380) and at many other companies. At the user level, programming is done using graphical data-flow block diagrams with Lustre semantics, coupled with hierarchical state machines to deal with complex control. The semantics is simple and directly based on the mathematics of synchrony. The SCADE compiler is itself certifiable at DO-178B Level A, the highest certification level for avionics embedded software, and a formal verifier is available. The Esterel Studio tool is dedicated to hardware circuit design. It is based on the Esterel v7 language, which adds powerful Lustre-based data-path primitives to the original Esterel language. Programs can be textual or graphical, with a free mix of both styles. Esterel Studio embeds a simulator, hardware and software code generators, and a formal property verification system. It is used by several major semiconductor companies to design, verify, and synthesize complex hardware circuits.
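The cycle-based model can be sketched in a few lines of Python (an illustration of the idea only, not SCADE or Esterel syntax): a node maps current inputs and state to outputs and next state, and the run loop repeats deterministic input-computation-output cycles, with each reaction conceptually instantaneous under the zero-delay hypothesis.

```python
# Sketch of cycle-based synchronous computation (illustrative only).
# A node maps (input, state) -> (output, next state); one loop
# iteration is one synchronous reaction.

def integrator(x, state):
    """Running sum: output at cycle n is x(0) + ... + x(n)."""
    total = state + x
    return total, total            # (output, next state)

def run(node, inputs, init_state):
    """Repeat input -> computation -> output cycles deterministically."""
    state, outputs = init_state, []
    for x in inputs:               # one iteration = one cycle
        y, state = node(x, state)
        outputs.append(y)
    return outputs

print(run(integrator, [1, 2, 3, 4], 0))  # -> [1, 3, 6, 10]
```

Because each cycle is a pure function of the input stream and the state, the behavior is fully deterministic, which is exactly what makes synchronous programs easy to test and verify.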
Flux: A Language for Programming High-Performance Servers
University of Massachusetts, Amherst
Programming high-performance server applications is challenging. It is both complicated and error-prone to write the concurrent code required to deliver high performance and scalability. Server performance bottlenecks are difficult to identify and correct. Finally, it is difficult to predict server performance prior to deployment.
This talk presents Flux, a language that dramatically simplifies the construction of scalable high-performance server applications. Flux lets programmers compose off-the-shelf, sequential C or C++ functions into concurrent servers. Flux programs are type-checked and guaranteed to be deadlock-free. We have built a number of servers in Flux, including a web server with PHP support, an image-rendering server, a BitTorrent peer, and a game server. These Flux servers match or exceed the performance of their counterparts written entirely in C. By tracking hot paths through a running server, Flux simplifies the identification of performance bottlenecks. The Flux compiler also automatically generates discrete event simulators that accurately predict actual server performance under load and with different hardware resources. Flux is both easy to use and unusually compact, allowing entire servers to be specified in tens of lines of code.
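The core idea, composing ordinary sequential functions into a concurrent server with the concurrency supplied by the runtime, can be caricatured in Python (this is not Flux syntax; the stage names and content table below are invented for illustration):

```python
# Illustrative sketch of the Flux idea (not Flux syntax): the
# per-request logic is a straight-line composition of ordinary
# sequential functions, and the runtime supplies the concurrency.
from concurrent.futures import ThreadPoolExecutor

def parse(request):
    """Sequential stage: normalize the request path."""
    return request.strip().lower()

def lookup(path):
    """Sequential stage: fetch content (hypothetical page table)."""
    pages = {"/": "home", "/about": "about us"}
    return pages.get(path, "404")

def handle(request):
    """The 'flow': a composition of the sequential stages."""
    return lookup(parse(request))

# Many flows run concurrently; each stage stays single-threaded code.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle, ["/", "/ABOUT ", "/missing"]))
print(results)  # -> ['home', 'about us', '404']
```

In Flux itself the flow is declared in the language and checked (e.g., for deadlock freedom) rather than hand-wired as above; the sketch only conveys the separation between sequential stage code and the concurrent runtime.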
This is joint work with Kevin Grimaldi, Alex Kostadinov, Emery Berger,
and Mark Corner, and will appear at the USENIX Annual Technical
Conference.
On-line Visualization and Analysis of Real-time Systems with TuningFork
IBM T.J. Watson Research Center
Debugging tools typically operate by instrumenting the program with breakpoints and allowing the programmer to pause execution and examine the state of the system. However, this approach is not appropriate for real-time systems. TuningFork is an online, scriptable data visualization and analysis tool that supports the development and continuous monitoring of real-time systems. While TuningFork was originally designed and tested for use with a particular real-time Java Virtual Machine, its architecture has been designed from the ground up for extensibility by leveraging the Eclipse plug-in architecture. This allows client programs to define custom data formats, new visualization and analysis components, and new export formats. The TuningFork views allow the visualization of data on time scales from microseconds to minutes, enabling rapid understanding and analysis of system behavior.
Joint work with Perry Cheng, Daniel Frampton, David Grove, Matthias Hauswirth, V.T. Rajan
Simple Techniques for Surviving Errors in Software Systems
I will present a set of simple techniques that enable software systems
to survive otherwise fatal errors. The goal is to enable systems to
execute through such errors, potentially with degraded functionality,
and continue to serve the needs of their users.
I will first address resource consumption errors. Examples of such
errors include memory leaks, file handle leaks, infinite loops, and
infinite recursions. Cyclic memory allocation eliminates memory leaks
by preallocating a fixed-size buffer for all objects allocated at a
given allocation site, then cyclically allocating objects out of that
buffer. Least recently used file handle allocation gives each program
a fixed number of file handles. If a program attempts to exceed its
file handle limit, the file handle allocator first closes the least
recently used file handle, then allocates the new file handle. The
result is that the program stays within its file handle
limit. Infinite loops and infinite recursions are handled by first
learning reasonable loop iteration and stack depth bounds, then
terminating loops or recursions whenever they exceed the bounds. Our
experimental results show that our techniques eliminate resource
consumption errors in widely used programs such as Squid and Pine.
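The two allocation techniques can be sketched as follows (a Python illustration under assumed names like CyclicAllocator and LRUHandles, not the authors' implementation, which operates on compiled programs):

```python
from collections import OrderedDict

class CyclicAllocator:
    """Cyclic memory allocation: one fixed-size buffer per allocation
    site, with slots reused round-robin, so the site cannot leak."""
    def __init__(self, slots):
        self.slots = [None] * slots
        self.next = 0

    def alloc(self, obj):
        i = self.next
        self.slots[i] = obj                    # may overwrite a live object:
        self.next = (i + 1) % len(self.slots)  # the price of bounded memory
        return i

class LRUHandles:
    """LRU file-handle allocation: opening past the limit first closes
    the least recently used handle, keeping the program within bounds."""
    def __init__(self, limit, opener=open):
        self.limit = limit
        self.opener = opener               # injectable for testing
        self.handles = OrderedDict()       # path -> handle, oldest first

    def open(self, path):
        if path in self.handles:
            self.handles.move_to_end(path)     # mark as recently used
            return self.handles[path]
        if len(self.handles) >= self.limit:
            _, oldest = self.handles.popitem(last=False)
            oldest.close()                     # evict least recently used
        handle = self.opener(path)
        self.handles[path] = handle
        return handle

site = CyclicAllocator(2)
print([site.alloc(o) for o in ("x", "y", "z")])  # -> [0, 1, 0]
```

Note the trade-off the cyclic allocator makes explicit: a slot may be reused while its old object is still live, which is precisely the semantic perturbation the talk discusses.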
I will also discuss failure-oblivious computing, a technique for
ignoring memory access errors. A system that uses
failure-oblivious computing checks each memory access, discarding
out-of-bounds writes and manufacturing values for out-of-bounds
reads. The end result is that out-of-bounds writes do not corrupt
key data structures, and the program continues along its normal
execution path in spite of any out-of-bounds accesses. Our
experimental results show
that this technique eliminates buffer-overflow security
vulnerabilities and enables widely used servers such as Apache, Pine,
and Sendmail to continue to execute successfully through otherwise
fatal memory errors.
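A minimal sketch of the idea, assuming a simple bounds-checked buffer abstraction rather than the authors' compiler instrumentation:

```python
class ObliviousBuffer:
    """Failure-oblivious accesses: discard out-of-bounds writes,
    manufacture values for out-of-bounds reads, and keep going."""
    def __init__(self, size):
        self.data = [0] * size

    def write(self, i, value):
        if 0 <= i < len(self.data):
            self.data[i] = value
        # else: silently discard the out-of-bounds write

    def read(self, i):
        if 0 <= i < len(self.data):
            return self.data[i]
        return 0                   # manufactured value for an OOB read

buf = ObliviousBuffer(4)
buf.write(2, 7)
buf.write(100, 9)                  # a would-be buffer overflow: discarded
print(buf.read(2), buf.read(100))  # -> 7 0
```

Discarding the overflow is what neutralizes the buffer-overflow attack: the attacker's write never reaches adjacent data, and the program's own state stays intact.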
All of these techniques are simple to implement and deploy. They do,
however, perturb the standard programming language semantics and
introduce the possibility of taking the software down unanticipated
execution paths. As such, they represent a significant departure from
standard approaches. I will briefly discuss the benefits and risks of
adopting such techniques.
PADS/ML: A Functional Data Description Language
Massive amounts of useful data are stored and processed in non-standard
or ad hoc formats, for which critical tools like parsers and formatters
do not exist. Traditional databases and XML systems provide rich
infrastructure for processing well-behaved data, but are of little help
when dealing with data in ad hoc formats.
To address the challenges of ad hoc data, we have designed PADS/ML - a
declarative data description language for the ML family of languages.
PADS/ML is based on the ML type structure and features polymorphic,
recursive datatypes as central elements of data descriptions. In
addition, PADS/ML leverages the ML module system to support an entirely
new tool generation architecture designed to simplify the task of
developing tool generators that work for any PADS/ML description.
We have formalized the semantics of PADS/ML's recursive, polymorphic
datatypes by extending the Data Description Calculus of Fisher et al.
with type-parameterized types, and we have proven the resulting system
"type-correct": generated parsers return data of the expected type.
We have implemented a version of PADS/ML for the O'Caml language. From
a description, the O'Caml PADS/ML compiler generates an O'Caml module
containing the relevant type declarations, a parser for the
description, and other tools.
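As a very loose analogy in Python (PADS/ML itself is an ML-family language; the record combinator and field names here are invented), the key move is that the programmer writes a declarative description of the data's shape and the parser is derived from it mechanically:

```python
# Loose analogy for the PADS/ML idea: a description of the data's
# shape, from which a parser is generated, rather than hand-written
# parsing code.

def record(*fields):
    """Description combinator: a record is a sequence of named,
    typed fields; the returned function is the derived parser."""
    def parse(tokens):
        return {name: conv(tok)
                for (name, conv), tok in zip(fields, tokens)}
    return parse

# The "description": just field names and base types.
entry = record(("host", str), ("port", int), ("latency_ms", float))

print(entry("web01 8080 3.5".split()))
# -> {'host': 'web01', 'port': 8080, 'latency_ms': 3.5}
```

In PADS/ML the descriptions are richer (recursive and polymorphic datatypes), and the compiler generates not just a parser but a module of types and tools; the sketch only shows the description-to-parser direction.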
The Next 700 Data Description Languages. K. Fisher, Y. Mandelbaum.
In Proceedings of POPL 2006.
Improved Algorithm Complexities for Linear Temporal Logic Model Checking
of Pushdown Systems
State University of New York at Stony Brook
This paper presents a novel implementation strategy for linear temporal
logic (LTL) model checking of pushdown systems (PDS). The model checking
problem is formulated intuitively in terms of evaluation of Datalog rules.
We use a systematic and fully automated method to generate a specialized
algorithm and data structures directly from the rules. The generated
implementation employs an incremental approach that considers one fact at
a time and uses a combination of linked and indexed data structures
for facts. We provide precise time complexity for the model checking
problem; it is computed automatically and directly from the rules. We
obtain a more precise and simplified complexity analysis, as well as
improved algorithm understanding.
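The fact-at-a-time strategy with indexed data structures can be illustrated on a generic reachability rule (a simplification; the paper's rules encode LTL model checking of pushdown systems, not plain graph reachability):

```python
# Sketch of fact-at-a-time rule evaluation: each derived fact is
# considered exactly once, and an index on the rule's join column
# makes each firing proportional to the matching facts only.

from collections import defaultdict

def reach(edges, source):
    # Rules:  reach(source).   reach(Y) :- reach(X), edge(X, Y).
    succ = defaultdict(list)           # indexed edge facts: X -> [Y, ...]
    for x, y in edges:
        succ[x].append(y)
    known, worklist = {source}, [source]
    while worklist:                    # one fact at a time
        x = worklist.pop()
        for y in succ[x]:              # indexed lookup, no rescanning
            if y not in known:
                known.add(y)           # derive a new fact, queue it once
                worklist.append(y)
    return known

print(sorted(reach([("a", "b"), ("b", "c"), ("d", "e")], "a")))
# -> ['a', 'b', 'c']
```

Because each fact enters the worklist at most once and each rule firing is an indexed lookup, the total work is bounded by the number of facts plus rule instances, which is the kind of precise complexity the generated implementations make explicit.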
Joint work with Annie Liu
(The paper for this talk appeared at VMCAI 2006.)
First-Class Hidden Types
Parametric types provide more precise type checking and reduce code
duplication. But many formulations require significant verbosity and
place severe limits on expressible type relationships. Existing
programming languages with nominal subtyping, including Eiffel, C#,
and the Java(TM) Programming Language, sometimes require more type
parameters to be explicitly instantiated than is necessary to infer a
full type instantiation. Furthermore, they cannot easily express
covariance, contravariance, and other useful type relationships among
instantiations without additional language primitives. Type inference
provides only a partial solution to the former problem, and offers no
help with the latter.
In this paper, we present a core part of the Fortress Programming
Language that supports /hidden type variables/, which can be used
to quantify types beyond their ordinary type parameters. With hidden
type variables, a single type can extend, and inherit methods from,
infinitely many instantiations of another type. Moreover, with hidden
type variables, we can express type relationships like covariance and
contravariance directly, without ad-hoc language features, and we can
elide redundant type parameters in type instantiations. Unlike previous
languages with similar type systems, our formulation does /not/ have a
type-erasure semantics: the types that instantiate the ordinary type
parameters of a parametric type are retained at run time, and can be
used by type-dependent operations. Because these parameters may be
instantiated by hidden type variables of the parametric type's subtypes,
we must be able to reconstruct hidden type variables at run time. This
requirement significantly complicates the semantics, in return for type
safety and for added expressive power.
We also formalize and prove soundness of our type system. Although our
formal analysis is directly applicable to Fortress, it is also relevant
to the inclusion of hidden type variables in any type system with
nominal subtyping and support for type-dependent operations.
Joint work with Eric Allen, Sukyoung Ryu, Victor Luchangco, and