Existing template-based and learning-based Automated Program Repair (APR) tools have successfully found patches for many benchmark faults. However, our analysis of existing results shows that omission faults pose a significant challenge. For template-based approaches, omission faults provide no location at which to apply templates; for learning-based approaches that formulate repair as Neural Machine Translation (NMT), omission faults similarly provide no faulty code to translate. To address these issues, we propose GLAD, a novel learning-based repair technique that targets if-clause synthesis. Because GLAD is based on generative Language Models (LMs) rather than machine translation, it does not require a concrete faulty line and can consequently repair omission faults. To supply the LM with project-specific information critical to synthesis, we incorporate two components: a type-based grammar that constrains the model, and a dynamic ranking system that evaluates candidate patches using a debugger. Our evaluation shows that GLAD is highly orthogonal to existing techniques, correctly fixing 26 Defects4J v1.2 faults that previous NMT-based techniques could not, while maintaining a small runtime cost; this underscores its potential as a lightweight complement to existing tools in practice. An inspection of the bugs that GLAD fixes reveals that it can quickly generate expressions that would be challenging for other techniques.