SPIN 2017 will feature the following invited speakers.
Domagoj Babic
Google Inc., Mountain View, California, USA
http://www.domagoj-babic.com
Title: SunDew - Systematic Automated Security Testing
SunDew is a new automated test generation framework developed at Google, focused on finding security bugs in C/C++ code. It combines the strengths of multiple test generation techniques under a single cohesive platform, and it leverages the vast computational resources available at Google to massively parallelize automated test generation and triage.
By using a portfolio of test generation techniques, SunDew aims to overcome the coverage saturation (or plateau) that occurs with any individual technique. This saturation manifests as the inability of the technique to discover unexplored parts of a program after a certain number of generated tests. A portfolio of techniques, on the other hand, provides a diversity of test generation strategies that complement each other.
SunDew embeds the most recent advances in automated test case generation, which provide precision and thoroughness. For example, symbolic execution uses powerful constraint solvers to generate tests that precisely follow desired program branches. This approach allows symbolic execution to reach code executed under very specific input preconditions that would be difficult to discover randomly. At the same time, recent improvements to coverage-guided fuzzing, such as AFL or LibFuzzer, generate tests faster than symbolic execution. Thus, SunDew alternates between these approaches, using coverage-guided fuzzing to quickly bring coverage to a first saturation level, then using symbolic execution to refine the search for harder-to-reach code. This, in turn, may provide additional inputs for the coverage-guided fuzzers, and so on.
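The alternation described above can be sketched in a few lines. This is a hypothetical illustration, not SunDew's actual scheduler (which is not public): each technique runs until its coverage growth plateaus, then the accumulated corpus is handed to the next technique.

```python
# Illustrative sketch of technique alternation on coverage plateaus.
# `techniques` are callables (corpus, covered) -> (new_inputs, new_edges);
# the names and interface are assumptions for this example.

def alternate(techniques, corpus, rounds=10, plateau_window=3):
    """Rotate between test generation techniques whenever coverage
    stops growing, seeding each with the corpus found so far."""
    coverage_history = []
    covered = set()
    idx = 0
    for _ in range(rounds):
        tech = techniques[idx % len(techniques)]
        new_inputs, new_edges = tech(corpus, covered)
        corpus.extend(new_inputs)
        covered |= new_edges
        coverage_history.append(len(covered))
        # Plateau: no coverage growth over the last few iterations.
        if (len(coverage_history) >= plateau_window
                and coverage_history[-1] == coverage_history[-plateau_window]):
            idx += 1  # switch to the complementary technique
    return corpus, covered
```

With a fast fuzzer and a slower symbolic executor plugged in as `techniques`, the fuzzer drives coverage to its first saturation level and the symbolic executor then picks up the harder-to-reach branches.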
As part of SunDew, we also developed a number of format-aware fuzzers that rely on, among other things, machine learning to generate language-aware fuzzers. The SunDew architecture follows a distributed continuous pipeline pattern. It allows a performance-based dynamic resource allocation for the various test generation techniques. This lets us maximize the combined output of the test suite generation and avoid long plateaus in the coverage growth of the test suite. We discuss the application of SunDew to a variety of fuzzing targets of interest.
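One plausible reading of "performance-based dynamic resource allocation" is a scheduler that gives each technique a share of workers proportional to its recent coverage gain. The sketch below is an assumption for illustration, not Google's implementation; the `floor` parameter keeps every technique alive so it can recover later.

```python
# Illustrative sketch: split workers across techniques by recent
# coverage gain, with a per-technique floor so none starves.

def allocate(workers, recent_gains, floor=1):
    """Return a {technique: worker_count} allocation proportional to
    each technique's recent coverage gain."""
    names = list(recent_gains)
    alloc = {n: floor for n in names}          # guaranteed minimum
    remaining = workers - floor * len(names)
    total = sum(recent_gains.values())
    if total > 0:
        for n in names:
            alloc[n] += int(remaining * recent_gains[n] / total)
    else:
        # No signal yet: split the remainder evenly.
        for i, n in enumerate(names):
            alloc[n] += remaining // len(names) + (1 if i < remaining % len(names) else 0)
    return alloc
```

Re-running this allocation periodically, as gains shift from fuzzing to symbolic execution and back, is one way to avoid the long coverage plateaus the abstract mentions.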
Domagoj Babic is a computer scientist at Google, Inc. His work focuses on research and development of automated software analysis systems for various security-related applications. Primarily, he wants his work to have a positive impact on people’s lives. He enjoys building strong teams and working with them on solving large-scale, real-world, important problems, while learning and having fun on the way. He’s particularly excited about big technical challenges and enjoys creating the vision, strategy, and technology for taming those challenges. Over his career, Domagoj has published in the areas of verification, testing, security of complex software systems, automated reasoning, grammar inference, and applied formal methods. Before joining Google, Domagoj was a research scientist at UC Berkeley and elsewhere in industry. He received his Dipl.Ing. in Electrical Engineering and M.Sc. in Computer Science from the University of Zagreb (Faculty of Electrical Engineering and Computing) in 2001 and 2003. He received his Ph.D. in Computer Science in 2008 from the University of British Columbia. He was a recipient of the NSERC PDF Research Fellowship (2010-2012), the Microsoft Graduate Research Fellowship (2005-2007), and several awards at international competitions (1st place at the 2007 Satisfiability Modulo Theories competition in the bit-vector arithmetic category and 3rd place at the 2005 Satisfiability Testing competition in the satisfiable-crafted instances category).
Byron Cook
Amazon Web Services, New York, USA and University College London, UK
http://www0.cs.ucl.ac.uk/staff/b.cook
Title: Automated Formal Reasoning About Amazon Web Services
Automatic and semiautomatic formal verification and model checking tools are now being used within AWS to prove or disprove desired properties of key AWS components. In this session, we outline these efforts and discuss how the tools find proofs of desired properties and then replay them when software artifacts or networks are modified, thus helping provide security throughout the lifetime of the AWS system.
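The "find and then replay" pattern can be illustrated with a small sketch. This is a hypothetical caching scheme, not AWS's internal tooling: a verification verdict is cached against a hash of the artifact it was proved about, so the expensive prover only re-runs when the artifact actually changes.

```python
import hashlib

# Illustrative sketch: re-verify a property only when the artifact
# (e.g. a policy or network configuration) changes. `prover` stands in
# for an expensive verification tool and is an assumption here.

class ProofCache:
    def __init__(self, prover):
        self.prover = prover   # callable: artifact bytes -> bool verdict
        self.cache = {}        # artifact digest -> cached verdict

    def check(self, artifact: bytes) -> bool:
        digest = hashlib.sha256(artifact).hexdigest()
        if digest not in self.cache:
            # Artifact changed (or is new): run the prover once.
            self.cache[digest] = self.prover(artifact)
        return self.cache[digest]
```

Running such a check on every modification is one way to keep a desired property established throughout a system's lifetime without paying the full proof cost each time.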
Byron’s interests include program analysis/verification, security, programming languages, theorem proving, logic, hardware design, operating systems, and biological systems. At Amazon, Byron leads a research group focused on formal/constraint-based tools for reasoning about cloud-computing security. Byron is perhaps best known for his earlier work on the termination prover Terminator/T2 and the SLAM model checker.
Gerard Holzmann
Nimble Research, Arcadia, California, USA
http://spinroot.com/gerard
Title: Cobra - Fast Structural Code Checking
In software analysis, most research has traditionally focused on the development of tools and techniques that can be used to formally prove key correctness properties of a software design. Design errors can be hard to catch without the right tools, and even after decades of development, the right tools can still be frustratingly hard to use.
Most software developers know, though, that the number of coding errors is usually much larger than the number of design errors. If something is wrong, the problem can usually be traced back to a coding mistake, and less frequently to an algorithmic error. Tools for chasing down those types of errors are indeed widely used. They are simple to use and considered effective, although they do not offer much more than sophisticated forms of trial and error.
The first commercial static source code analyzers hit the market a little over a decade ago, aiming to fill the gap between ease of use and more meaningful types of code analysis. Today these tools are mature, and their use is widespread in commercial software development, but they tend to be big and slow. There does not seem to be any tool of this type yet that can be used for interactive code analysis.
We’ll describe the design and use of a different type of static analysis tool, one designed to be simple, small, and fast enough that it can effectively be used for interactive code analysis, even on large code bases.
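To give a flavor of what fast, structural (token-level) code checking can do, here is a toy sketch; the real Cobra tool has its own query language and is far more capable than this. The example flags a classic C coding mistake that a structural check catches cheaply: an assignment used where a comparison was likely intended in an `if` condition.

```python
import re

# Toy structural check, illustrative only: report `if (...)` conditions
# that contain a bare `=` (i.e. not part of ==, !=, <=, >=).
ASSIGN_IN_IF = re.compile(r'\bif\s*\(([^()]*)\)')

def check_assignment_in_condition(source):
    """Return (line_number, condition) pairs that look suspicious."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for m in ASSIGN_IN_IF.finditer(line):
            cond = m.group(1)
            # Remove two-character comparison operators first, then
            # any remaining '=' is a bare assignment.
            stripped = re.sub(r'[=!<>]=', '', cond)
            if '=' in stripped:
                findings.append((lineno, cond.strip()))
    return findings
```

Because such a check only scans tokens line by line, it runs in time linear in the size of the source, which is what makes interactive use on large code bases plausible.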
Gerard Holzmann got his PhD from Delft University of Technology in The Netherlands in the deep dark days before there were PCs, iPhones, or even an internet. He joined Bell Labs in Murray Hill, New Jersey, to help fix some of these things, but others beat him to it. At Bell Labs he did develop one of the first digital darkroom programs, and early versions of software analysis tools like Spin. After twenty-some years at Bell Labs he moved to the sunnier side of the US and joined NASA/JPL to start a lab for reliable software (LaRS). For unclear reasons he was later made a JPL fellow, an ACM fellow, and a member of the auspicious National Academy of Engineering. He retired from JPL in January 2017 to start a new company called Nimble Research, where he is currently employed as prime nimbler.