Separation of Concerns for Privacy-Preserving LLM Adoption: A Banking Architecture Framework
Banking employees who use third-party Large Language Models (LLMs) risk inadvertently exposing sensitive customer data through their prompts. Existing cryptographic solutions impose prohibitive computational costs that make them unsuitable for real-time applications. We present the Banking Prompt Privacy Detector (BPPD), a modular framework that implements Separation of Concerns through three components: a locally hosted DetectorLLM for privacy risk assessment, Format-Preserving Encryption that maintains semantic integrity, and a proxy that mediates external LLM interactions.
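The abstract does not specify BPPD's encryption construction; as a minimal illustrative sketch of the format-preserving idea, the toy cipher below shifts each digit of an account number by a keyed offset, so the token keeps the same length and digit format as the original. This is an assumption-laden teaching example (a keyed per-digit shift), not the NIST FF1/FF3 constructions a production system would use.

```python
import hmac, hashlib

def _keystream_digit(key: bytes, tweak: bytes, index: int) -> int:
    # Derive one pseudo-random digit per position from HMAC-SHA256.
    # (digest[0] % 10 has a slight modulo bias; acceptable for a sketch.)
    msg = tweak + index.to_bytes(4, "big")
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return digest[0] % 10

def fpe_encrypt_digits(key: bytes, plaintext: str, tweak: bytes = b"") -> str:
    # Shift each digit by a keyed offset; output is still all digits, same length.
    return "".join(
        str((int(ch) + _keystream_digit(key, tweak, i)) % 10)
        for i, ch in enumerate(plaintext)
    )

def fpe_decrypt_digits(key: bytes, ciphertext: str, tweak: bytes = b"") -> str:
    # Invert the shift to recover the original digits.
    return "".join(
        str((int(ch) - _keystream_digit(key, tweak, i)) % 10)
        for i, ch in enumerate(ciphertext)
    )

key = b"demo-key-not-for-production"
account = "4111111111111111"
token = fpe_encrypt_digits(key, account)
assert len(token) == len(account) and token.isdigit()
assert fpe_decrypt_digits(key, token) == account
```

Because the token still looks like a valid account number, the external LLM can reason about the prompt normally while the proxy detokenizes the response locally.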
The DetectorLLM adapts Llama-3.2-3B-Instruct through parameter-efficient fine-tuning (LoRA and selective freezing) on 4,170 synthetically generated banking scenarios, avoiding exposure of real sensitive data. Evaluation on 1,668 test samples demonstrates substantial improvements over baseline: safety classification accuracy increases from 73.6% to 99.9%, multi-label Subset Accuracy improves from 30.5% to 98.5%, and Privacy Hiding Rate rises from 14.4% to 87.3%.
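The LoRA recipe above can be illustrated in a few lines of NumPy: the base weight matrix stays frozen while a low-rank product, scaled by alpha/r, is added to it. The dimensions, rank, and scaling factor below are illustrative assumptions, not the paper's actual hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight of one linear layer (out_features x in_features).
d_out, d_in, r, alpha = 8, 16, 4, 8
W = rng.standard_normal((d_out, d_in))

# LoRA adds a trainable low-rank update: W_eff = W + (alpha / r) * B @ A.
# Standard initialization: A small random, B zero, so training starts
# exactly at the base model.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def forward(x: np.ndarray, B: np.ndarray, A: np.ndarray) -> np.ndarray:
    W_eff = W + (alpha / r) * B @ A
    return x @ W_eff.T

x = rng.standard_normal((2, d_in))
# With B = 0 the adapted layer reproduces the frozen base layer exactly.
assert np.allclose(forward(x, B, A), x @ W.T)

# After (hypothetical) training only A and B have changed; W stays frozen,
# so only r * (d_in + d_out) extra parameters are stored per layer.
B_trained = rng.standard_normal((d_out, r)) * 0.01
y = forward(x, B_trained, A)
assert y.shape == (2, d_out)
```

Freezing W and training only A and B is what keeps fine-tuning feasible on a single commodity GPU, consistent with the deployment claim below.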
The framework enables incremental deployment on commodity hardware (a single GPU), giving financial institutions a practical path to LLM adoption while maintaining compliance with GDPR, PCI DSS, and Basel III.