Manual security policy validation of Infrastructure-as-Code (IaC) creates significant bottlenecks in enterprise CI/CD pipelines, with industry surveys reporting that 90% of cloud breaches involve misconfigured IaC. While traditional static analyzers help automate security checks, they struggle to keep pace with rapidly evolving cloud services and custom organizational policies. We propose and evaluate a production-ready framework that augments conventional static scans with Large Language Models (LLMs) to validate security policies in Kubernetes manifests, IAM policies, and Terraform templates. Our evaluation on a 500-case synthetic IaC corpus shows that ensemble methods achieve F1 = 0.95 at 3.1 s latency. The results indicate that while LLMs can detect complex violations missed by rule-based tools, specific safeguards are essential for enterprise deployment. We provide actionable guidance, including: (i) a privacy-preserving CI/CD architecture, (ii) risk mitigation strategies, and (iii) concrete recommendations for balancing security and efficiency.