```mermaid
graph LR
    Memory_Management_StoragePolicy_["Memory Management (StoragePolicy)"]
    Regularization_Techniques["Regularization Techniques"]
    Replay_Buffer_Update_Mechanism["Replay Buffer Update Mechanism"]
    Replay_Buffer_Low_Level_Operations["Replay Buffer Low-Level Operations"]
    Regularization_Application_Dispatcher["Regularization Application Dispatcher"]
    Regularization_Loss_Computations["Regularization Loss Computations"]
    Continual_Learning_Strategies_Plugins["Continual Learning Strategies/Plugins"]
    Continual_Learning_Strategies_Plugins -- "utilizes" --> Memory_Management_StoragePolicy_
    Continual_Learning_Strategies_Plugins -- "relies on" --> Regularization_Techniques
    Memory_Management_StoragePolicy_ -- "orchestrates" --> Replay_Buffer_Update_Mechanism
    Replay_Buffer_Update_Mechanism -- "depends on" --> Replay_Buffer_Low_Level_Operations
    Regularization_Techniques -- "dispatches to" --> Regularization_Application_Dispatcher
    Regularization_Application_Dispatcher -- "invokes" --> Regularization_Loss_Computations
```

The "Memory & Regularization" subsystem in Avalanche is designed to combat catastrophic forgetting in continual learning. It primarily consists of two intertwined functional areas: Memory Management and Regularization Techniques. Memory Management, centered around the StoragePolicy, provides the foundational mechanisms for storing and retrieving past data samples, crucial for replay-based strategies. The Replay Buffer Update Mechanism and Replay Buffer Low-Level Operations handle the dynamic aspects of this memory. Complementing this, Regularization Techniques, spearheaded by the Regularization component, apply various penalties to the loss function to preserve learned knowledge. The Regularization Application Dispatcher and Regularization Loss Computations manage the specific algorithms for these penalties. These core components are then leveraged by higher-level Continual Learning Strategies/Plugins like ReplayPlugin, which orchestrate the overall continual learning process by utilizing the memory and regularization capabilities.

### Memory Management (StoragePolicy)

This is the foundational component for memory management, specifically for implementing replay buffers. It provides mechanisms for storing, organizing, and retrieving past data samples, which are crucial for replay-based continual learning strategies. It acts as the primary interface for memory operations.

Related Classes/Methods:
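A common storage-policy variant keeps the buffer balanced across the classes seen so far. The sketch below illustrates that idea; `ClassBalancedBuffer` and its method names are hypothetical and do not reflect Avalanche's actual `StoragePolicy` API.

```python
from collections import defaultdict


class ClassBalancedBuffer:
    """Illustrative class-balanced storage policy (hypothetical API):
    total capacity is split evenly across all classes seen so far."""

    def __init__(self, max_size):
        self.max_size = max_size
        self.groups = defaultdict(list)  # class label -> stored samples

    def update(self, labeled_samples):
        # labeled_samples: iterable of (sample, class_label) pairs.
        for sample, label in labeled_samples:
            self.groups[label].append(sample)
        self._rebalance()

    def _rebalance(self):
        # Shrink each per-class group to its fair share of capacity,
        # keeping the most recently added exemplars.
        per_class = self.max_size // max(1, len(self.groups))
        for label in self.groups:
            self.groups[label] = self.groups[label][-per_class:]

    def __len__(self):
        return sum(len(g) for g in self.groups.values())
```

When a new class appears, every existing group shrinks so the total never exceeds `max_size`.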

### Regularization Techniques

This component is responsible for applying various regularization techniques during the training process. Its primary goal is to mitigate catastrophic forgetting by introducing penalties to the model's loss function, thereby preserving knowledge acquired from previous tasks. It serves as the main entry point for regularization.

Related Classes/Methods:
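At its core, this entry point amounts to adding weighted penalty terms to the task loss. A minimal sketch of that combination, with an illustrative function name and signature (not Avalanche's API):

```python
def regularized_loss(task_loss, penalties, coefficients):
    """Hypothetical helper: total loss = task loss plus the weighted
    sum of regularization penalties (names are illustrative)."""
    return task_loss + sum(c * p for c, p in zip(coefficients, penalties))
```

Each strategy chooses which penalties to include and how strongly to weight them.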

### Replay Buffer Update Mechanism

Handles the high-level logic for updating the replay buffer within the Memory Management component. This includes deciding when and how to add new samples, and potentially adapting the buffer size or content based on training progress.

Related Classes/Methods:
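One standard high-level update rule is reservoir sampling, which keeps a uniform random sample of the stream in a fixed-size buffer. The following is a self-contained sketch under that assumption; it is not Avalanche's actual implementation.

```python
import random


class ReservoirBuffer:
    """Hypothetical reservoir-sampling update rule: every stream
    element ends up in the fixed-size buffer with equal probability."""

    def __init__(self, max_size, rng=None):
        self.max_size = max_size
        self.buffer = []
        self.seen = 0
        self.rng = rng or random.Random()

    def update(self, new_samples):
        for sample in new_samples:
            self.seen += 1
            if len(self.buffer) < self.max_size:
                self.buffer.append(sample)
            else:
                # Replace a stored sample with probability max_size/seen.
                j = self.rng.randrange(self.seen)
                if j < self.max_size:
                    self.buffer[j] = sample
```

The "when and how" decision reduces here to a single probabilistic replacement test per incoming sample.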

### Replay Buffer Low-Level Operations

Manages the actual data storage, resizing, and internal organization of samples within the replay buffer. These are the granular operations that support the Replay Buffer Update Mechanism.

Related Classes/Methods:
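A representative low-level operation is resizing the stored sample set. The sketch below shows one plausible behavior (random subsampling on shrink, no-op on grow); the function name and semantics are assumptions, not Avalanche's code.

```python
import random


def resize_buffer(buffer, new_size, rng=random):
    """Illustrative low-level resize: when shrinking, keep a random
    subset of the stored samples; when growing, content is unchanged
    (only the capacity increases)."""
    if len(buffer) <= new_size:
        return list(buffer)
    return rng.sample(list(buffer), new_size)
```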

### Regularization Application Dispatcher

Acts as a central dispatcher within the Regularization Techniques component, orchestrating the application of different regularization terms. It determines which specific regularization loss functions to invoke based on the current training context.

Related Classes/Methods:
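Such a dispatcher can be pictured as a registry that maps a regularizer name to its loss function. The registry, names, and the example `l2_drift_penalty` below are all hypothetical, shown only to illustrate the dispatch pattern.

```python
def l2_drift_penalty(old_params, new_params):
    # Quadratic drift from the old parameters (EWC-style, uniform weights).
    return sum((o - n) ** 2 for o, n in zip(old_params, new_params))


# Hypothetical registry of available regularization losses.
REGULARIZERS = {"l2_drift": l2_drift_penalty}


def apply_regularization(name, *args, **kwargs):
    """Illustrative dispatcher: resolve a regularization loss by name
    and invoke it with the caller's arguments."""
    try:
        fn = REGULARIZERS[name]
    except KeyError:
        raise ValueError(f"unknown regularization {name!r}") from None
    return fn(*args, **kwargs)
```

In a real system the "training context" (current task, available old-model outputs) would decide which registry entries are invoked.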

### Regularization Loss Computations

Implements the specific mathematical functions for computing various regularization losses (e.g., LwF penalty, distillation loss). These are the core algorithms that calculate the regularization terms added to the model's loss function.

Related Classes/Methods:
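As a concrete instance, the LwF-style distillation penalty is the cross-entropy between the old model's temperature-softened outputs (targets) and the new model's softened outputs. A pure-Python sketch of that computation (not Avalanche's implementation):

```python
import math


def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over a list of logits.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]


def distillation_loss(old_logits, new_logits, temperature=2.0):
    """LwF-style distillation penalty: cross-entropy between the old
    model's softened outputs and the new model's softened outputs."""
    targets = softmax(old_logits, temperature)
    preds = softmax(new_logits, temperature)
    return -sum(t * math.log(p) for t, p in zip(targets, preds))
```

By Gibbs' inequality the penalty is minimized when the new outputs match the old ones, which is exactly the knowledge-preservation pressure described above.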

### Continual Learning Strategies/Plugins

These components represent the primary consumers and orchestrators of the "Memory & Regularization" subsystem. They utilize the Memory Management for replay and rely on Regularization Techniques to mitigate forgetting during their training processes.

Related Classes/Methods: