Large Language Models (LLMs) are increasingly applied to data-centric tasks in software maintenance and evolution, such as quality assurance and migration. While recent methods constrain LLM outputs using grammars or regular expressions, these syntactic techniques cannot enforce deeper semantic constraints involving numeric dependencies, conditional logic, and checksums. We present ClauseBandit, a framework that combines LLMs with Satisfiability Modulo Theories (SMT) solvers to generate structured data satisfying such constraints from natural language descriptions. ClauseBandit introduces a Bayesian inference approach that selects the most plausible SMT formula using posterior probabilities derived from formula self-consistency and data likelihoods. Evaluated on 27 structured generation tasks inspired by industrial use cases, ClauseBandit selected valid SMT formulas for 74.1% of tasks. Our approach enables LLM-based structured generation that goes beyond syntax, producing semantically valid, reusable constraint specifications from natural language.
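To make the selection idea concrete, the following is a minimal, hypothetical sketch of posterior-based formula selection: candidate SMT formulas sampled from an LLM are scored by combining their self-consistency (how often the same formula is re-sampled, used here as a prior) with an assumed likelihood of observed example data under each formula. The function name, scoring rule, and formula strings are illustrative assumptions, not the paper's exact method.

```python
# Hypothetical sketch of posterior-based SMT formula selection.
# The scoring rule and all names are illustrative assumptions.
from collections import Counter


def select_formula(candidates, likelihoods):
    """Pick the formula with the highest unnormalized posterior.

    candidates: list of SMT formula strings sampled from an LLM;
        repeats of the same string indicate self-consistency.
    likelihoods: dict mapping formula -> likelihood of the observed
        example data under that formula (assumed given).
    """
    counts = Counter(candidates)
    n = len(candidates)

    def posterior(f):
        prior = counts[f] / n  # self-consistency acts as a prior
        return prior * likelihoods.get(f, 0.0)

    return max(counts, key=posterior)


# Two samples agree on a strict inequality, one proposes a weak one.
candidates = ["(assert (> x 0))", "(assert (> x 0))", "(assert (>= x 0))"]
likelihoods = {"(assert (> x 0))": 0.9, "(assert (>= x 0))": 0.6}
print(select_formula(candidates, likelihoods))  # → (assert (> x 0))
```

Here the more frequently sampled formula also fits the data better, so both factors of the posterior point to the same choice; in general the data likelihood can override a popular but data-inconsistent candidate.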