VSSE 2018


ETAPS 2018


Room: AMFITRION II

9.10 - 9.15 Welcome to VSSE
Grigory Fedyukovich (Princeton University)
9.15 - 10.00 Coupling Predicates as a Versatile Tool for Relational Verification
Mattias Ulbrich (Karlsruhe Institute of Technology)
10.00 - 10.45 Coffee break
10.45 - 11.30 Agile Development of Business-Critical Systems? Formal Methods!
Michael Tautschnig (Amazon Web Services)
11.30 - 12.15 How Automated Tools Change the Way We Write Code
Peter Schrammel (Diffblue)
12.15 - 13.45 Lunch
13.45 - 14.30 Slicing Version Histories to Support Feature Reuse
Julia Rubin (University of British Columbia)
14.30 - 15.15 Structuring Simulink Models for Verification and Reuse
Michael W. Whalen (University of Minnesota)
15.15 - 16.00 Learning Models of Configuration Correctness
Mark Santolucito (Yale University)
16.00 - 16.30 Coffee break
16.30 - 17.15 Verifying Constant-Time Implementations
Michael Emmi (SRI International)
17.15 - 18.00 Proof-Guided Debugging
Subhajit Roy (Indian Institute of Technology Kanpur)



Abstracts

Michael Tautschnig (Amazon Web Services)

Agile Development of Business-Critical Systems? Formal Methods!

Virtual machines are an integral part of public-cloud offerings. In this setting, customers rely on the provider to ensure a secure and reliable operating platform. Isolation from other -- possibly malicious -- guests is of utmost importance. The core components are hypervisors and device emulation on top of a Linux kernel. Systems of such complexity are business critical, yet often developed in an agile and necessarily fast-paced setting. In this talk I will discuss the challenges and some novel ideas for performing software analysis for such a stack, and how automated static analysis ranging from data-flow analysis to model checking helps to identify and resolve issues.

Peter Schrammel (Diffblue)

How Automated Tools Change the Way We Write Code

Agile development methodologies have been widely adopted in software development. Well-scoped tasks, peer code reviews and early testing are core elements of these processes. Developers frequently need to get acquainted with new code bases: they have to quickly understand the behaviour of a piece of code, reason about corner cases and evaluate the impact of code changes on dependent components. Automated analysis tools promise vital support to help developers cope with these demands and increase their productivity. In this talk I will discuss the opportunities and challenges involved in integrating automated code analysis tools into the development workflow to provide software developers with the essential information they need.

Julia Rubin (University of British Columbia)

Slicing Version Histories to Support Feature Reuse

Software developers often need to transfer functionality, such as a set of commits implementing a new feature or a bug fix, from one branch of a configuration management system to another. This can be a challenging task because existing configuration management tools lack support for matching high-level semantic functionality with low-level commit histories. Developers thus have to either manually identify the exact set of semantically related commits implementing the functionality of interest or sequentially port a segment of the change history, "inheriting" additional, unwanted functionality. In this talk, we will discuss techniques for assisting developers with such porting tasks. We will formally define a functionality-preserving semantic history slice, which is a (proper) subset of the original history that maintains the functionality of interest. We will outline properties of history slices, such as minimality, and present a set of techniques for identifying functionality-preserving slices and their relationships.
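
To make the notion concrete, here is a toy Python sketch of the underlying idea (my own illustration, not the techniques presented in the talk): a history is modeled as a list of commits over a simple key-value "program", and a greedy pass drops commits whose removal keeps the test for the functionality of interest passing. The commit model, the minimize_slice helper and the greedy strategy are illustrative assumptions; real history slicing must also respect dependencies between commits.

```python
# Toy sketch of semantic history slicing (hypothetical model, not the
# speaker's tool or algorithm): a "history" is a list of commits, each
# commit a function that edits a simple key-value "program". A slice is
# functionality-preserving if replaying only its commits still makes the
# test of interest pass.

def replay(history, base=None):
    """Apply a sequence of commits to a copy of the base program state."""
    state = dict(base or {})
    for commit in history:
        commit(state)
    return state

def minimize_slice(history, test):
    """Greedily drop commits whose removal keeps the test passing.

    A naive 1-minimal search; real history slicing must also respect
    dependencies between commits (e.g. textual conflicts).
    """
    keep = list(history)
    changed = True
    while changed:
        changed = False
        for c in list(keep):
            candidate = [k for k in keep if k is not c]
            if test(replay(candidate)):
                keep = candidate
                changed = True
    return keep

if __name__ == "__main__":
    # Three commits: a refactoring, the feature of interest, an unrelated fix.
    commits = [
        lambda s: s.update(refactored=True),
        lambda s: s.update(feature_x=42),
        lambda s: s.update(unrelated_fix=True),
    ]
    # The functionality of interest: feature_x behaves as expected.
    test = lambda s: s.get("feature_x") == 42
    assert test(replay(commits))
    sliced = minimize_slice(commits, test)
    print(f"kept {len(sliced)} of {len(commits)} commits")  # kept 1 of 3 commits
```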

Mark Santolucito (Yale University)

Learning Models of Configuration Correctness

Configuration settings are one of the most critical components in defining the behavior of software, yet they are notoriously difficult and error-prone to create and edit. Without the rich semantic structure of program code, verifying the correctness of configuration settings is far outside the scope of existing analysis techniques. In this work, we specifically address the problem of configuration verification and repair within the domain of continuous integration testing. Continuous integration (CI) allows users to automatically build and run their code on a suite of virtual environments as they develop a code base, to test compatibility with different operating systems, hardware configurations, library versions, etc. While this technique helps to ensure cross-platform functionality, correctly configuring the many permutations of test environments is a complicated process and can only be debugged by rerunning many slow test cases. In some cases, CI builds can take upwards of 24 hours to complete, and if there is a misconfiguration, the tests themselves cannot even run. In order to provide users with static, compile-time feedback, we analyze large databases of code to learn models of correct CI configurations. We leverage the fact that it is often the small, incremental changes users make that break or fix builds. By combining association rule learning with an SMT solver to resolve conflicting rules, we are able to efficiently learn models of what causes CI builds to fail. These models can be used to preemptively warn users of misconfigured test environments and suggest repairs to their configuration settings.
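
To convey the flavor of combining rule mining with an SMT solver (an illustrative sketch only, not the speaker's system), the following toy mines naive "option implies failure" rules from a small table of past builds and then uses z3 (the z3-solver package, assumed to be installed) to check whether a candidate configuration is consistent with the learned rules. The build data and the rule format are made up for the example.

```python
# Illustrative toy, not the actual learning pipeline from the talk.
# Requires the z3-solver package (pip install z3-solver).
from z3 import Bool, Implies, Not, Solver, sat

# Past builds: which options were enabled, and whether the build failed.
builds = [
    ({"new_compiler", "cache"}, True),
    ({"new_compiler"}, True),
    ({"cache"}, False),
    (set(), False),
]

# Naive association-rule mining: "option => failure" rules whose
# confidence over the observed builds is 1.0.
options = {o for opts, _ in builds for o in opts}
rules = []
for o in options:
    with_o = [failed for opts, failed in builds if o in opts]
    if with_o and all(with_o):
        rules.append(o)  # enabling o has always coincided with failure

# Encode the learned rules and check a candidate configuration.
failure = Bool("failure")
opt_vars = {o: Bool(o) for o in options}
solver = Solver()
for o in rules:
    solver.add(Implies(opt_vars[o], failure))
solver.add(Not(failure))  # we want a configuration predicted not to fail

candidate = {"new_compiler": True, "cache": True}
for o, v in candidate.items():
    solver.add(opt_vars[o] == v)

verdict = "looks OK" if solver.check() == sat else "predicted to fail"
print(verdict)  # "predicted to fail", because of the new_compiler rule
```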

Michael W. Whalen (University of Minnesota)

Structuring Simulink Models for Verification and Reuse

Model-based development (MBD) tool suites such as Simulink and Stateflow offer powerful tools for design, development, and analysis of models. These models can be used for several purposes: for code generation, for prototyping, as descriptions of an environment (plant) that will be controlled by software, as oracles for a testing process, and for many other aspects of software development. In addition, a goal of model-based development is to produce reusable models that can be easily managed in a version-controlled continuous integration process. Although significant guidance exists for properly structuring source code for these purposes, considerably less guidance exists for MBD approaches. In this talk, I discuss structuring issues that arise when constructing models to support their use (and reuse) for design and verification in critical software development projects. I illustrate our approach using a generic patient-controlled analgesia (GPCA) infusion pump, a medical cyber-physical system.

Mattias Ulbrich (Karlsruhe Institute of Technology)

Coupling Predicates as a Versatile Tool for Relational Verification

In program verification, there are many interesting program properties that cannot be stated in terms of a single program invocation, but instead relate the effects of different invocations. Such relational properties can often be successfully and efficiently verified using product programs and coupling predicates. We present a number of successful applications of this verification technique to different relational questions in different domains.
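
As a small worked example of the general technique (a sketch under simplified assumptions of my own, not taken from the talk): consider two loops that read the same input and maintain the coupling predicate s1 = 2*s2. A single SMT query, here with z3, discharges the inductive step that one synchronized iteration of the product program preserves the predicate.

```python
# Toy coupling-predicate check with z3 (z3-solver package), illustrating
# the inductive step for a synchronized product of two loops:
#   loop 1: s1 := s1 + 2*x      loop 2: s2 := s2 + x
# Coupling predicate: s1 == 2*s2.
from z3 import Ints, Implies, Not, Solver, unsat

s1, s2, x = Ints("s1 s2 x")

coupling_before = s1 == 2 * s2
coupling_after = (s1 + 2 * x) == 2 * (s2 + x)  # predicate on the post-states

# Validity check: coupling_before ==> coupling_after for all s1, s2, x.
solver = Solver()
solver.add(Not(Implies(coupling_before, coupling_after)))
assert solver.check() == unsat  # no counterexample: the step preserves it
print("coupling predicate s1 == 2*s2 is preserved by one joint iteration")
```

The base case (both accumulators starting at zero) is checked the same way; together the two queries establish the relational property that the first loop always computes twice the result of the second.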

Subhajit Roy (Indian Institute of Technology Kanpur)

Proof-Guided Debugging

Symbolic techniques for fault localization and repair operate on a symbolic encoding of the buggy program, searching for mutations that produce the expected output for a currently failing test. This search for the right mutant is generally posed as a search for a satisfying assignment, thereby allowing the use of efficient SAT/SMT solvers. In this talk, we will look at how we can direct debugging when the solvers fail to find a feasible assignment: interesting artifacts from proofs of infeasibility, like unsatisfiability cores and interpolants, can then be leveraged for effective program debugging. In particular, we present our recent work on the use of interpolants for regression-aware fault localization and the use of unsat cores for directing program repair of heap-manipulating programs.
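
To illustrate how a proof of infeasibility can localize a fault (a generic z3 sketch of my own; the program, the test and the encoding are not from the speaker's work): each statement of a tiny straight-line program is asserted as a tracked constraint, the failing test is asserted as a hard constraint, and the unsatisfiable core names the statements implicated in the mismatch.

```python
# Each statement of a tiny straight-line program is encoded as a tracked
# constraint; the failing test's input and expected output are hard
# constraints. The combination is unsatisfiable, and the unsat core names
# statements that could be responsible for the failure.
from z3 import Ints, Solver, Bool, unsat

x, a, b, y = Ints("x a b y")
solver = Solver()

# Buggy program:  a = x + 1;  b = a * 2;  y = b - 1   (intended: y = b + 1)
statements = {
    "stmt1": a == x + 1,
    "stmt2": b == a * 2,
    "stmt3": y == b - 1,
}
for name, constraint in statements.items():
    solver.assert_and_track(constraint, Bool(name))

# Failing test: for input x = 2 the expected output is y = 7,
# but the buggy program computes y = 5.
solver.add(x == 2)
solver.add(y == 7)

assert solver.check() == unsat
print("candidate fault locations:", solver.unsat_core())
```

In practice the encoding is richer and the core is minimized or ranked; the point is only that the artifact being mined is the proof of unsatisfiability rather than a satisfying model.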

Michael Emmi (SRI International)

Verifying Constant-Time Implementations

The constant-time programming discipline is an effective countermeasure against timing attacks, which can lead to complete breaches of otherwise-secure systems. However, adhering to constant-time programming is hard on its own, and extremely hard under additional efficiency and legacy constraints. This makes automated verification of constant-time code an essential component for building secure software.

We have developed an approach for verifying constant-time implementations. Our approach is able to validate implementations which intentionally violate the constant-time policy locally, so long as such violations are benign and leak no more information than the public outputs of a given computation. This capability is crucial, since such policy breaches are important for performance and for compliance with legacy APIs. Our approach is based on a simple reduction from 2-safety verification, which captures constant-time security, to the verification of standard safety properties on product programs which simulate two simultaneous executions of the given program. While completeness of the reduction generally demands a self-composition-based product construction, automated verification is better enabled by an alternative construction, akin to a synchronous cross-product of transition systems. By applying the synchronous product where possible, and self-composition locally where necessary, our approach achieves both completeness and efficiency. Our publicly available prototype tool, ct-verif, can verify a wide range of constant-time components from cryptographic libraries.
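
As a rough dynamic illustration of the 2-safety view (a Python sketch of my own, not the ct-verif tool, which analyzes code statically): run an implementation twice on inputs that agree on the public data, record a simple leakage trace of branch outcomes, and require that the two traces coincide. The leakage model and the two comparison routines are assumptions made for the example.

```python
# Two-execution ("product") harness for the constant-time property:
# executions that agree on public inputs must produce identical leakage
# traces (here, just the outcomes of branches that the code records).
# Purely an illustration of the 2-safety reduction.

def leaky_compare(secret: bytes, guess: bytes, trace: list) -> bool:
    """Early-exit comparison: the loop's exit point depends on the secret."""
    for s, g in zip(secret, guess):
        trace.append(("branch", s == g))
        if s != g:
            return False
    return True

def ct_compare(secret: bytes, guess: bytes, trace: list) -> bool:
    """Constant-time comparison: control flow is independent of the secret."""
    diff = 0
    for s, g in zip(secret, guess):
        diff |= s ^ g          # data flow only, no secret-dependent branch
    trace.append(("branch", "loop-done"))  # same trace for every secret
    return diff == 0

def two_safety_check(impl, public, secret1, secret2) -> bool:
    """Simulate two executions differing only in the secret; compare leakage."""
    t1, t2 = [], []
    impl(secret1, public, t1)
    impl(secret2, public, t2)
    return t1 == t2

guess = b"password"
print(two_safety_check(leaky_compare, guess, b"passw0rd", b"hunter12"))  # False
print(two_safety_check(ct_compare, guess, b"passw0rd", b"hunter12"))     # True
```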