Thinktank: Leveraging LLM Reasoning for Advanced Task Execution in CI/CD
Thinktank is a task execution framework that harnesses the reasoning capabilities of Large Language Models (LLMs). These models can produce the intermediate reasoning steps needed to work toward a solution. Thinktank capitalizes on this by employing an LLM, such as GPT-4, to iteratively solve given objectives.
Thinktank first breaks each objective down into actionable tasks that can be executed immediately with the available information. We employ ReAct prompting (https://arxiv.org/abs/2210.03629) to leverage the LLM's reasoning abilities and guide it in selecting appropriate Agents. Our method deviates from the original paper's proposal: because a large number of available functions can overwhelm a statistical language model's decision-making, we restrict the model's choices by automatically clustering Agents into no more than ten capabilities. These capabilities are recalculated dynamically whenever new Agents are added to the system.
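To make the capability-restricted selection step concrete, the following is a minimal sketch of how such a loop could look. All names (AgentSpec, cluster_capabilities, react_step, the llm callable) and the keyword-based grouping heuristic are assumptions made for illustration; they are not Thinktank's actual implementation or clustering method.

```python
# Illustrative sketch only; names and the clustering heuristic are assumptions,
# not Thinktank's actual implementation.
from dataclasses import dataclass
from typing import Callable

MAX_CAPABILITIES = 10  # the model is offered at most ten capabilities


@dataclass
class AgentSpec:
    name: str
    description: str


def cluster_capabilities(agents: list[AgentSpec],
                         limit: int = MAX_CAPABILITIES) -> dict[str, list[AgentSpec]]:
    """Group agents into at most `limit` capabilities.

    Toy heuristic: group by the first word of each description and merge the
    smallest groups until the limit holds. The real system recomputes its
    clusters whenever new Agents are registered.
    """
    clusters: dict[str, list[AgentSpec]] = {}
    for agent in agents:
        key = agent.description.split()[0].lower()
        clusters.setdefault(key, []).append(agent)
    while len(clusters) > limit:
        smallest = min(clusters, key=lambda k: len(clusters[k]))
        moved = clusters.pop(smallest)
        target = min(clusters, key=lambda k: len(clusters[k]))
        clusters[target].extend(moved)
    return clusters


def react_step(llm: Callable[[str], str], objective: str,
               facts: list[str], clusters: dict[str, list[AgentSpec]]) -> str:
    """One ReAct-style iteration: reason about the objective, then pick one capability."""
    prompt = (
        f"Objective: {objective}\n"
        f"Facts so far: {facts}\n"
        f"Available capabilities: {sorted(clusters)}\n"
        "Thought: reason step by step about what to do next.\n"
        "Action: answer with exactly one capability name."
    )
    return llm(prompt).strip()
```

In this sketch the prompt offers the model only the cluster names rather than every registered function, which is the essence of the restriction described above.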
Agents in Thinktank are autonomous entities, developed by users as plugins, that carry out task execution. They operate independently and extend the system's functionality in varied ways. Each Agent produces 'Facts': verified pieces of information that move a task forward. These Facts are the cornerstone of Thinktank's collaborative, continuous problem-solving approach. The design allows Agents to be integrated seamlessly and to interact with one another, enabling efficient, cooperative task execution. Agents use whatever tools and methods suit their specific tasks, illustrating Thinktank's versatility in handling diverse objectives.
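As an illustration of the Agent/Fact contract described above, here is a hedged sketch of what a user-provided plugin class might look like. The Fact dataclass, the run signature, and the BuildLogAgent example are assumptions for this sketch, not the framework's real interface.

```python
# Hypothetical Agent plugin contract; the real Thinktank interface may differ.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass(frozen=True)
class Fact:
    """A verified piece of information that an Agent contributes to the task."""
    source: str      # name of the Agent that produced the fact
    statement: str   # the verified information itself


class Agent(ABC):
    """Base class a plugin would implement to add a new Agent to the system."""
    name: str = "base"
    description: str = ""

    @abstractmethod
    def run(self, task: str, facts: list[Fact]) -> list[Fact]:
        """Execute the task using the facts gathered so far; return new facts."""


class BuildLogAgent(Agent):
    """Toy example: an Agent that extracts failure messages from CI build logs."""
    name = "build_log_reader"
    description = "reads CI build logs and extracts failure messages"

    def run(self, task: str, facts: list[Fact]) -> list[Fact]:
        # A real Agent would fetch and parse the pipeline's logs; this canned
        # result only illustrates the shape of the returned Facts.
        return [Fact(source=self.name,
                     statement="job 'unit-tests' failed with an assertion error")]
```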
Thinktank's extensibility unlocks AI-driven task execution for CI/CD processes. Teams can tailor Thinktank to their specific needs by developing and deploying specialized plugins that provide a focused subset of capabilities. These plugins can contain a variety of Agents that identify and resolve issues encountered in CI/CD workflows, from automatically analyzing and rectifying problems within the pipelines to fixing test case errors in the code slated for delivery. This not only streamlines the CI/CD process but also improves the reliability and efficiency of code deployment.
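To show how a team might assemble such a CI/CD-focused deployment, the following sketch wires a handful of hypothetical Agents from a plugin into the capability clustering from the earlier sketch. The plugin function, the Agent names, and their descriptions are assumptions made purely for illustration, and the sketch assumes AgentSpec and cluster_capabilities from the earlier example are in scope.

```python
# Hypothetical plugin wiring; module layout and Agent names are assumptions.
# Assumes AgentSpec and cluster_capabilities from the earlier sketch are in scope.

def load_cicd_plugin() -> list[AgentSpec]:
    """A CI/CD plugin would expose a focused set of Agents like these."""
    return [
        AgentSpec("pipeline_log_analyzer", "analyzes failed pipeline runs for root causes"),
        AgentSpec("flaky_test_detector", "detects and reruns flaky test cases"),
        AgentSpec("test_fixer", "proposes patches for failing test cases"),
        AgentSpec("deployment_checker", "verifies deployment health after rollout"),
    ]


if __name__ == "__main__":
    agents = load_cicd_plugin()
    capabilities = cluster_capabilities(agents)
    print(f"{len(capabilities)} capabilities offered to the model:")
    for name, members in capabilities.items():
        print(f"  {name}: {[a.name for a in members]}")
```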
Thinktank aims to replace the rigid structures found in traditional task execution frameworks with the adaptive reasoning capabilities of LLMs. This flexibility allows the system to address a wide range of challenges on demand. Our ultimate goal is to employ Thinktank as a universal tool for automatically analyzing and resolving a broad spectrum of issues. Among our initial use cases is the analysis and resolution of bugs and various issues in our CI/CD pipelines.
The project is still in an early development stage. In this talk we will present the concepts behind Thinktank and explain why we believe this technology has the potential to automate many different use cases.
Tue 28 May (times shown in Eastern Time, US & Canada)

08:30 - 10:30
08:30 (20m) Day opening: Welcome to CCIW. Tim A. D. Henderson (Google)
08:50 (25m) Talk: Thinktank: Leveraging LLM Reasoning for Advanced Task Execution in CI/CD. Tim Keller (SAP SE)
09:15 (25m) Talk: Widespread Error Detection in Large Scale Continuous Integration Systems. Stanislaw Swierc, James Lu, Thomas Yi (Meta Platforms, Inc.)
09:40 (25m) Talk: Scalable Continuous Integration using Remote Execution
10:05 (25m) Talk: Replay-Based Continual Learning for Test Case Prioritization. Asma Fariha, Akramul Azim, Ramiro Liscano (Ontario Tech University)