```mermaid
graph LR
    Agent_Implementations["Agent Implementations"]
    Task_Orchestrator["Task Orchestrator"]
    Bayesian_Learning_Engine["Bayesian Learning Engine"]
    Strategy_Repository["Strategy Repository"]
    LLM_Adapters["LLM Adapters"]
    Performance_Evaluator["Performance Evaluator"]
    Task_Definition_Module["Task Definition Module"]
    Agent_Implementations -- "initiates task and passes to" --> Task_Orchestrator
    Task_Orchestrator -- "requests strategy recommendation from" --> Bayesian_Learning_Engine
    Task_Orchestrator -- "retrieves strategy from" --> Strategy_Repository
    Task_Orchestrator -- "utilizes" --> Task_Definition_Module
    Task_Orchestrator -- "invokes" --> LLM_Adapters
    Task_Orchestrator -- "submits output to" --> Performance_Evaluator
    Bayesian_Learning_Engine -- "provides strategy insights to" --> Task_Orchestrator
    Bayesian_Learning_Engine -- "receives evaluation results from" --> Performance_Evaluator
    Strategy_Repository -- "provides strategy to" --> Task_Orchestrator
    LLM_Adapters -- "returns response to" --> Task_Orchestrator
    Performance_Evaluator -- "provides evaluation results to" --> Bayesian_Learning_Engine
    Task_Definition_Module -- "provides task structure to" --> Task_Orchestrator
    click Task_Orchestrator href "https://github.com/CodeBoarding/GeneratedOnBoardings/blob/main/bayesian_meta_learning/Task_Orchestrator.md" "Details"
    click Bayesian_Learning_Engine href "https://github.com/CodeBoarding/GeneratedOnBoardings/blob/main/bayesian_meta_learning/Bayesian_Learning_Engine.md" "Details"
    click LLM_Adapters href "https://github.com/CodeBoarding/GeneratedOnBoardings/blob/main/bayesian_meta_learning/LLM_Adapters.md" "Details"
    click Performance_Evaluator href "https://github.com/CodeBoarding/GeneratedOnBoardings/blob/main/bayesian_meta_learning/Performance_Evaluator.md" "Details"
```

## Details

The bayesian_meta_learning project is architected as a self-improving AI system built around a feedback loop for optimizing strategy selection in LLM-driven tasks. At its core, the Task Orchestrator acts as the central coordinator, initiating task execution based on inputs from Agent Implementations. It selects strategies dynamically by consulting the Bayesian Learning Engine, which maintains and updates probabilistic priors based on past performance, and retrieves the chosen strategies from the Strategy Repository. Interactions with external Large Language Models are abstracted through LLM Adapters. Outputs are then assessed by the Performance Evaluator, whose results feed back into the Bayesian Learning Engine to refine future strategy choices. This continuous cycle, supported by the structured task definitions from the Task Definition Module, lets the system adapt and improve its performance over time, making it well suited to AI/ML applications that require adaptive strategy management.

### Agent Implementations

Entry points for specific applications (e.g., LLM judging, code generation) that initiate the meta-learning process.

Related Classes/Methods:

### Task Orchestrator

The central control unit managing the entire meta-learning feedback loop, coordinating strategy selection, LLM interaction, performance evaluation, and Bayesian updates.

Related Classes/Methods:
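The feedback loop described above can be sketched as a small coordinator class. This is an illustrative Python sketch, not the project's actual implementation: the component interfaces (`select`/`update`, a callable LLM, a callable evaluator) and the `GreedyLearner` stand-in are assumptions made for the example.

```python
import random


class GreedyLearner:
    """Stand-in learner: tracks the mean score per strategy and picks the best.

    (The real system uses the Bayesian Learning Engine here.)
    """

    def __init__(self):
        self.stats = {}  # strategy name -> [total_score, count]

    def select(self, names):
        # Try unseen strategies first, then exploit the best observed mean.
        unseen = [n for n in names if n not in self.stats]
        if unseen:
            return random.choice(unseen)
        return max(names, key=lambda n: self.stats[n][0] / self.stats[n][1])

    def update(self, name, score):
        total, count = self.stats.get(name, [0.0, 0])
        self.stats[name] = [total + score, count + 1]


class TaskOrchestrator:
    """Coordinates strategy selection, LLM invocation, evaluation, and feedback."""

    def __init__(self, learner, strategies, llm, evaluator):
        self.learner = learner        # suggests a strategy name, receives updates
        self.strategies = strategies  # name -> prompt template
        self.llm = llm                # callable: prompt -> response text
        self.evaluator = evaluator    # callable: response -> score in [0, 1]

    def run(self, task_input):
        name = self.learner.select(list(self.strategies))
        prompt = self.strategies[name].format(input=task_input)
        response = self.llm(prompt)
        score = self.evaluator(response)
        self.learner.update(name, score)  # close the feedback loop
        return name, response, score
```

In use, the orchestrator is handed one concrete implementation of each collaborator, which keeps the loop itself agnostic to the LLM provider and the evaluation method.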

### Bayesian Learning Engine

Implements the core meta-learning algorithm, maintaining and updating probabilistic priors for strategies based on their observed performance, guiding strategy selection.

Related Classes/Methods:
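One standard way to realize "probabilistic priors updated from observed performance" is Beta-Bernoulli Thompson sampling; the sketch below illustrates that idea and is an assumption about the approach, not the project's actual algorithm.

```python
import random


class BayesianStrategyLearner:
    """Thompson sampling over strategies with Beta-Bernoulli posteriors."""

    def __init__(self, strategy_names):
        # Beta(1, 1) is a uniform prior over each strategy's success rate.
        self.alpha = {s: 1.0 for s in strategy_names}
        self.beta = {s: 1.0 for s in strategy_names}

    def select(self):
        # Draw a plausible success rate from each posterior; pick the highest
        # draw. This balances exploration and exploitation automatically.
        draws = {s: random.betavariate(self.alpha[s], self.beta[s])
                 for s in self.alpha}
        return max(draws, key=draws.get)

    def update(self, strategy, success):
        # Conjugate update: a success increments alpha, a failure beta.
        if success:
            self.alpha[strategy] += 1.0
        else:
            self.beta[strategy] += 1.0

    def posterior_mean(self, strategy):
        a, b = self.alpha[strategy], self.beta[strategy]
        return a / (a + b)
```

As evaluations accumulate, the posterior for a consistently successful strategy concentrates near its true success rate, so `select` proposes it more and more often.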

### Strategy Repository

A module responsible for storing, managing, and providing access to a collection of predefined strategies.

Related Classes/Methods:
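A repository of predefined strategies can be as simple as a guarded registry. The class and method names below are hypothetical, chosen only to illustrate the store/manage/retrieve responsibilities this component has.

```python
class StrategyRepository:
    """Registry mapping strategy names to prompt templates (illustrative API)."""

    def __init__(self):
        self._strategies = {}

    def register(self, name, template):
        # Refuse silent overwrites so strategies stay uniquely named.
        if name in self._strategies:
            raise ValueError(f"strategy {name!r} already registered")
        self._strategies[name] = template

    def get(self, name):
        return self._strategies[name]

    def names(self):
        return sorted(self._strategies)
```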

### LLM Adapters

A set of interfaces that abstract interactions with various external Large Language Models (e.g., GPT-4o), ensuring modularity and extensibility.

Related Classes/Methods:
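An adapter layer like this is typically a thin abstract interface that concrete provider wrappers implement. The sketch below assumes a single `complete` method; the real adapter interface may differ, and `EchoAdapter` is a test double, not a real provider client.

```python
from abc import ABC, abstractmethod


class LLMAdapter(ABC):
    """Common interface; concrete adapters wrap specific provider SDKs."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Send a prompt to the underlying model and return its text output."""


class EchoAdapter(LLMAdapter):
    """Test double standing in for a real provider (e.g. a GPT-4o client)."""

    def complete(self, prompt: str) -> str:
        return f"[echo] {prompt}"
```

Because the orchestrator only depends on `LLMAdapter`, swapping providers (or injecting a fake for tests) requires no changes to the rest of the loop.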

### Performance Evaluator

Assesses the quality and effectiveness of the outputs generated by the LLM based on the chosen strategy, providing crucial feedback for the Bayesian update process.

Related Classes/Methods:
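The evaluator's contract is to turn an LLM output into a score the Bayesian update can consume. As a toy illustration (the real scoring logic is not shown in this document), here is a keyword-coverage scorer that returns a value in [0, 1]:

```python
class KeywordEvaluator:
    """Toy evaluator: scores an output by required-keyword coverage."""

    def __init__(self, required):
        self.required = [k.lower() for k in required]

    def score(self, output: str) -> float:
        # Fraction of required keywords present, case-insensitively.
        text = output.lower()
        hits = sum(1 for k in self.required if k in text)
        return hits / len(self.required)
```

A binary signal for the Beta-Bernoulli update can then be derived by thresholding, e.g. `success = evaluator.score(output) >= 0.5`.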

### Task Definition Module

Defines the data structures, schemas, and types for various tasks, ensuring consistent input and output formats across the system.

Related Classes/Methods:
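Shared task schemas are commonly expressed as typed, immutable records. The field names below are assumptions for illustration only; the actual schemas live in the project's task definition module.

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class TaskDefinition:
    """Illustrative task schema; field names are hypothetical."""

    task_id: str
    description: str
    input_schema: dict = field(default_factory=dict)  # e.g. JSON-schema-like
    expected_output_type: str = "text"
```

Freezing the dataclass keeps task definitions immutable as they pass between the orchestrator, adapters, and evaluator, which rules out one component mutating the format another relies on.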