PROFES 2024
Mon 2 - Wed 4 December 2024 Tartu, Estonia

Large Language Models (LLMs) offer promising capabilities for information retrieval and processing. However, deploying LLMs to query proprietary enterprise data poses unique challenges, particularly for companies with strict data security policies.

This study shares our experience in setting up a secure LLM environment within a FinTech company and using it for enterprise information retrieval while adhering to data privacy protocols. To gather data and requirements, we conducted three workshops and 30 interviews with industrial engineers; the insights collected in the workshops were further enriched by the interviews.

We report the steps taken to deploy an LLM solution in a private, sandboxed environment, along with lessons learned from the experience. These lessons span LLM configuration (e.g., chunk_size and top_k settings), local document ingestion, and evaluation of LLM outputs.
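To illustrate the two configuration knobs named above, the following is a minimal, self-contained sketch (not the authors' implementation): chunk_size controls how ingested documents are split into retrievable pieces, and top_k controls how many of the best-matching chunks are passed to the LLM per query. The term-overlap scoring here is a deliberately simple stand-in for the embedding-based similarity a real retrieval pipeline would use.

```python
def chunk_document(text: str, chunk_size: int = 512) -> list[str]:
    """Split a document into fixed-size character chunks for ingestion."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]


def retrieve_top_k(query: str, chunks: list[str], top_k: int = 3) -> list[str]:
    """Rank chunks by naive word overlap with the query and keep the top_k best.

    A production system would typically score chunks with vector embeddings
    instead of raw term overlap; the top_k cutoff works the same way either way.
    """
    query_terms = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(query_terms & set(c.lower().split())),
        reverse=True,
    )
    return scored[:top_k]
```

Tuning these values trades off context granularity (smaller chunks are more precise but lose surrounding context) against the amount of retrieved text the LLM must process per query.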

These lessons serve as a practical guide for practitioners seeking to use private data with LLMs to achieve better usability, improve user experiences, or explore new business opportunities.