Large language models (LLMs) are transforming software engineering, but their adoption brings critical challenges. In this keynote, I will explore three key thrusts: the limits of LLMs, including their struggles with long-tailed data distributions and concerns about the quality of generated outputs; the threats they pose, such as weaknesses in robustness, vulnerabilities to backdoor attacks, and the memorization of sensitive information; and the emerging ecosystems surrounding their reuse, licensing, and documentation practices. Empirical research plays a pivotal role in uncovering these challenges and guiding the responsible development of LLMs in software engineering. By addressing these issues, we can chart a path forward for future research and innovation in this rapidly evolving field.