
Permission explanations, the explanatory text that accompanies mobile app permission requests, are crucial for user privacy transparency and informed consent. Despite their importance, current practice often falls short of regulatory expectations because systematic evaluation mechanisms are lacking. Through an empirical study of 600 mainstream mobile apps, we reveal widespread deficiencies: 15% of permission requests provide no explanation, many others use vague language or technical jargon, and, critically, many fail to disclose third-party SDK data access even though these components actively use the granted permissions. To address these transparency gaps, we present SCOPE, an automated multi-agent framework that systematically evaluates permission explanation compliance and generates targeted optimization recommendations. SCOPE employs four specialized agents working collaboratively: multimodal LLM-based explanation extraction, few-shot learning-based linguistic analysis, dynamic API-based purpose inference, and adaptive report generation. A comprehensive evaluation demonstrates SCOPE’s effectiveness, achieving 98% accuracy in explanation extraction, 93.5% consistency in compliance analysis, and 92% accuracy in purpose inference. A user study with 30 participants shows an 84.6% preference for SCOPE-optimized explanations, confirming its practical utility. Our work provides the first systematic analysis of permission explanation practices and establishes a scalable solution for enhancing mobile app privacy transparency.
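To make the four-agent collaboration concrete, the sketch below outlines how such a pipeline could be wired together. It is a minimal illustration only: every class name, field, and the toy API-to-purpose mapping is a hypothetical stand-in, not SCOPE's actual implementation, and the agent bodies are placeholders where a real system would invoke a multimodal LLM, few-shot classifiers, and dynamic analysis traces.

```python
from dataclasses import dataclass, field

# Hypothetical data holders; field names are illustrative, not SCOPE's schema.
@dataclass
class PermissionRequest:
    permission: str                # e.g. "android.permission.ACCESS_FINE_LOCATION"
    ui_screenshot: bytes = b""     # request-dialog screenshot fed to the multimodal agent
    explanation: str = ""          # rationale text filled in by the extraction agent

@dataclass
class ComplianceReport:
    permission: str
    issues: list[str] = field(default_factory=list)
    inferred_purposes: list[str] = field(default_factory=list)
    suggested_explanation: str = ""

class ExplanationExtractionAgent:
    """Agent 1: would query a multimodal LLM to pull rationale text from app UI screenshots."""
    def run(self, req: PermissionRequest) -> PermissionRequest:
        # Placeholder: a real implementation sends the screenshot to a vision-language model.
        req.explanation = req.explanation or "(no explanation shown to the user)"
        return req

class ComplianceAnalysisAgent:
    """Agent 2: linguistic analysis of the explanation against compliance criteria (few-shot in SCOPE)."""
    def run(self, req: PermissionRequest) -> list[str]:
        issues = []
        if "(no explanation" in req.explanation:
            issues.append("missing explanation")
        elif len(req.explanation.split()) < 5:
            issues.append("explanation too vague to convey a concrete purpose")
        return issues

class PurposeInferenceAgent:
    """Agent 3: infers actual usage purposes from dynamically observed API calls."""
    def run(self, observed_apis: list[str]) -> list[str]:
        # Toy mapping for illustration; a real system derives purposes from runtime API traces.
        purpose_map = {"getLastKnownLocation": "location-based features",
                       "requestLocationUpdates": "continuous location tracking"}
        return sorted({purpose_map[a] for a in observed_apis if a in purpose_map})

class ReportGenerationAgent:
    """Agent 4: turns the other agents' findings into an optimization recommendation."""
    def run(self, req: PermissionRequest, issues: list[str], purposes: list[str]) -> ComplianceReport:
        suggestion = (f"We use the {req.permission.split('.')[-1].lower()} permission for "
                      f"{', '.join(purposes) if purposes else 'the stated feature'}.")
        return ComplianceReport(req.permission, issues, purposes, suggestion)

def evaluate(req: PermissionRequest, observed_apis: list[str]) -> ComplianceReport:
    """Pipeline mirroring the agent collaboration described in the abstract."""
    req = ExplanationExtractionAgent().run(req)
    issues = ComplianceAnalysisAgent().run(req)
    purposes = PurposeInferenceAgent().run(observed_apis)
    return ReportGenerationAgent().run(req, issues, purposes)

if __name__ == "__main__":
    report = evaluate(
        PermissionRequest("android.permission.ACCESS_FINE_LOCATION",
                          explanation="Allow location?"),
        ["getLastKnownLocation", "connect"],
    )
    print(report)
```

Running the example flags the two-word explanation as too vague and suggests a purpose-specific rewrite grounded in the observed location APIs, which is the kind of targeted recommendation the report-generation step is meant to produce.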