As Large Language Models (LLMs) reshape software development across industries, they also transform the associated threat landscape. Traditional threat modeling methods, which assume predictable system behavior, struggle to accommodate the inherent nondeterminism of LLMs. Paradoxically, LLMs themselves offer capabilities, including pattern recognition, natural language understanding, and semi-structured reasoning, that may support automating threat elicitation and mitigation. This research project, ThreMoLIA, aims to design, develop, and empirically evaluate a threat modeling tool that leverages LLMs to assist practitioners in identifying and analyzing security threats in LLM-integrated applications (LIAs). To this end, we apply a mixed-methods exploratory case study to define and validate threat modeling metrics, and a comparative case study to evaluate the ThreMoLIA tool against existing threat modeling practices. The project is conducted in close collaboration with industry and contributes to the ESEM community by advancing Security-by-Design practices and by sharing reproducible artifacts such as metrics, benchmarks, and threat models.