In today's society, we are becoming increasingly dependent on software systems, yet we also constantly witness the negative impacts of buggy software. To produce more reliable code, recent work introduced the LLM4TDD process, which guides Large Language Models to generate code iteratively using a test-driven development methodology. That work highlights the promise of the LLM4TDD process but reveals that the quality of the generated code depends on the test cases used as prompts. This paper therefore conducts an empirical study to investigate how different test generation strategies affect both the quality of the code produced by the LLM4TDD workflow and the overall effort required to carry out the process.