Unit testing, a critical means of ensuring software quality, is often constrained in practice by the high cost and low efficiency of manual test case construction, resulting in limited test coverage and a scarcity of unit test cases in real-world projects. Traditional test generation tools can improve coverage but suffer from poor readability and limited generalization. In recent years, large language models (LLMs) have shown strong potential for test generation, owing to their powerful generalization and reasoning capabilities. However, the static nature of their training data often leads to hallucinations, undermining the reliability of generated tests. To address this, we propose MUATC, a multi-agent unit test generation framework based on LLMs. This work introduces, for the first time in coverage-driven LLM-based test generation, a multi-agent collaborative mechanism that integrates Chain-of-Thought reasoning and Retrieval-Augmented Generation to improve both the quality and coverage of generated test cases. We further propose a unit test repair algorithm, MTCRA, aimed at raising test coverage. Experimental results show that MUATC achieves 4.8%–5.5% higher coverage than CoverUp, with gains that hold regardless of the underlying model architecture or programming language. Compared with state-of-the-art LLM-based coverage enhancement tools such as ChatUniTest, TestPilot, and CoverUp, MUATC improves test coverage by 12.7% on the benchmark dataset provided by ChatUniTest. To evaluate readability, we conducted a human study of test cases generated for the HumanEval benchmark; the results indicate that MUATC-generated test cases are significantly more readable than those produced by Pynguin. Finally, building on this readability, we develop UnitTestPlat, a user-oriented platform for visualized unit test generation.