ASE 2025
Sun 16 - Thu 20 November 2025 Seoul, South Korea

Modern computing architectures (e.g., multi-core CPUs, GPUs, distributed systems) rely on parallel code implemented via frameworks such as OpenMP, MPI, and CUDA. While large language models (LLMs) have shown strong performance in general code generation, they struggle with the structured reasoning required for parallel programming, such as handling concurrency, synchronization, and framework-specific semantics. In practical parallel code development, a common workflow begins with sequential code and incrementally introduces parallel directives. We formalize this process as the task of framework-based parallel code completion (FPCC), which involves three subtasks: identifying insertion points, selecting parallel frameworks, and completing parallel directive code.
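
To make the three subtasks concrete, the sketch below shows a minimal C example, assuming OpenMP is the selected framework; the function name `sum_array` and the reduction clause are illustrative choices, not taken from the paper or its dataset.

```c
#include <stdio.h>
#include <stddef.h>

/* Sequential starting point: sum an array of doubles.
 * Build with: cc -fopenmp example.c                          */
double sum_array(const double *a, size_t n) {
    double sum = 0.0;
    /* Subtask 1 (insertion point): the line above the loop.  */
    /* Subtask 2 (framework selection): OpenMP.               */
    /* Subtask 3 (directive completion): the pragma below.    */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < (long)n; i++) {
        sum += a[i];
    }
    return sum;
}

int main(void) {
    double a[] = {1.0, 2.0, 3.0, 4.0};
    printf("%f\n", sum_array(a, 4));
    return 0;
}
```

Without the `-fopenmp` flag the pragma is ignored and the code runs sequentially, which is exactly the incremental workflow FPCC formalizes: the sequential program stays valid while directives are added.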

To support this task, we construct a high-quality dataset of 16,615 framework-based parallel code pairs across six widely used frameworks, labeled with insertion points, parallel frameworks, and the corresponding directive code. Empirical results show that six popular LLMs perform poorly on FPCC, particularly struggling with identifying insertion points and completing correct directive code.
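
Continuing the earlier illustration (hypothetical values, not an actual record from the released dataset), one labeled pair could associate the sequential `sum_array` loop with the three annotations:

```c
/* Hypothetical FPCC label for the sum_array example above:
 *   insertion point : the line immediately preceding the for loop
 *   framework       : OpenMP
 *   directive code  : #pragma omp parallel for reduction(+:sum)
 */
```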

To address these limitations, we propose HPCL, a curriculum-based fine-tuning framework that progressively builds model capabilities in insertion point identification, parallel framework selection, and parallel directive code completion. Our approach achieves substantial improvements, yielding a 17.82% increase in EM and a 5.43% improvement in DIR scores over LLM-based baselines. Finally, expert-guided error analysis reveals common failure patterns and suggests future directions in retrieval-augmented completion and consistency-aware training.