Large language models (LLMs) are becoming increasingly integrated into software development, with a majority of developers now adopting AI tools for code generation. Although current models can produce syntactically and functionally correct code, they often generate unnecessarily complex solutions and struggle with large, evolving codebases that have rich internal structure. Most evaluations of LLM-generated code to date have focused primarily on test-based accuracy, overlooking other essential aspects of software quality. In this paper, we emphasize the importance of modularity, the practice of structuring code into well-defined, reusable components, as a critical lens for improving the maintainability of AI-generated code. We argue that modularity should be a foundational principle in LLM-assisted code generation, enabling models to produce more maintainable, production-ready software.