At Amazon Web Services (AWS), our customers manage a diverse array of resources (databases, compute, storage, and analytics), all configured to meet their specific business, security, and governance needs. They often ask, “Have I configured these resources correctly?” This seemingly simple question led us on a fascinating journey of developing a domain-specific language (DSL) for defining configuration-checking rules and proving that those rules are correct. Our goal was ambitious: create a language that is easy to author, capable of supporting proofs of rule correctness, and adaptable to AWS’s continuous evolution. The more we restricted our language, the more powerful it became. This realization brought new meaning to the phrase “less is more.”
Traditionally, proving correctness in software development is a burdensome task. Developers must define specifications without much guidance on their accuracy. Our domain-specific approach, however, turned this concept on its head. Instead of humans creating specifications for computers to check, we designed our system to generate potential specifications for humans to verify. Here’s how it works: our system automatically generates scenarios demonstrating when rules evaluate to true or false. These scenarios effectively create specifications. A security engineer can then review these cases, deciding whether the specification is valid or if the rule needs adjustment. This process of “automatically generating specifications” has revolutionized how we approach configuration rule checking at AWS.
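The abstract does not show AWS’s actual tooling, but the idea of generating scenarios for a rule can be sketched in miniature. In the hypothetical example below, the rule syntax, field names, and brute-force enumeration strategy are all assumptions for illustration: given a rule over a small configuration space, we enumerate candidate configurations and split them into cases where the rule holds and cases where it fails, producing concrete scenarios a reviewer could inspect.

```python
from itertools import product

# Hypothetical configuration space: each field and its possible values.
# Field names and values are illustrative, not AWS's actual schema.
FIELDS = {
    "encryption": ["enabled", "disabled"],
    "public_access": [True, False],
    "versioning": ["enabled", "disabled"],
}

def rule(cfg):
    """Example rule: a bucket-like resource must be encrypted and private."""
    return cfg["encryption"] == "enabled" and cfg["public_access"] is False

def generate_scenarios(rule, fields, limit=3):
    """Enumerate all configurations, splitting them into witnesses where
    the rule evaluates to true and counterexamples where it is false."""
    names = list(fields)
    true_cases, false_cases = [], []
    for values in product(*(fields[n] for n in names)):
        cfg = dict(zip(names, values))
        (true_cases if rule(cfg) else false_cases).append(cfg)
    # A security engineer reviews a few representatives of each outcome
    # and decides whether the rule matches the intended specification.
    return true_cases[:limit], false_cases[:limit]

passing, failing = generate_scenarios(rule, FIELDS)
print("rule holds for:", passing)
print("rule fails for:", failing)
```

A real system would use symbolic reasoning rather than exhaustive enumeration, but the reviewer-facing output is the same kind of artifact: concrete true/false scenarios that act as a checkable specification.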
The restrictiveness of our domain-specific language, far from being a limitation, became its strength. By narrowing our focus, we gained the ability to prove correctness more efficiently and effectively than ever before. This approach eased the burden on our developers and led to provably correct configuration checks. Our experience with this DSL at AWS has reinforced a valuable lesson: in the right context, specialization can outperform generalization, especially when it comes to proving correctness. By focusing on the specific needs of AWS configuration checking, we’ve created a tool that’s not just more efficient, but fundamentally more capable of ensuring our customers’ resources are configured correctly.
Bio: Neha Rungta is a Director of Science at AWS. Neha is a world-renowned expert in developing and applying automated reasoning techniques to industrial systems. Her work at AWS has broken new ground in the scale of automated reasoning applications. She has launched security services such as Amazon S3 Block Public Access, IAM Access Analyzer, and now the Cedar language and Amazon Verified Permissions. Before AWS, Neha was a Research Scientist at NASA Ames Research Center, where she verified the correctness of unmanned space systems and of conflict detection algorithms used in commercial aircraft.
Thu 1 May (displayed time zone: Eastern Time, US & Canada)
09:00 - 10:30 | Thursday Morning Plenary (Main Plenaries, Canada Hall 1 and 2). Chair(s): Timothy Lethbridge, University of Ottawa
09:00 (30m) | Day opening: Thursday Early Morning Awards Session (Main Plenaries)
09:30 (60m) | Keynote: Neha Rungta, “Engineering correctness for a domain” (Main Plenaries)