graph LR
Memory_Management_StoragePolicy_["Memory Management (StoragePolicy)"]
Regularization_Techniques["Regularization Techniques"]
Replay_Buffer_Update_Mechanism["Replay Buffer Update Mechanism"]
Replay_Buffer_Low_Level_Operations["Replay Buffer Low-Level Operations"]
Regularization_Application_Dispatcher["Regularization Application Dispatcher"]
Regularization_Loss_Computations["Regularization Loss Computations"]
Continual_Learning_Strategies_Plugins["Continual Learning Strategies/Plugins"]
Continual_Learning_Strategies_Plugins -- "utilizes" --> Memory_Management_StoragePolicy_
Continual_Learning_Strategies_Plugins -- "relies on" --> Regularization_Techniques
Memory_Management_StoragePolicy_ -- "orchestrates" --> Replay_Buffer_Update_Mechanism
Replay_Buffer_Update_Mechanism -- "depends on" --> Replay_Buffer_Low_Level_Operations
Regularization_Techniques -- "dispatches to" --> Regularization_Application_Dispatcher
Regularization_Application_Dispatcher -- "invokes" --> Regularization_Loss_Computations
The "Memory & Regularization" subsystem in Avalanche is designed to combat catastrophic forgetting in continual learning. It consists of two intertwined functional areas: Memory Management and Regularization Techniques. Memory Management, centered on the StoragePolicy, provides the foundational mechanisms for storing and retrieving past data samples, which replay-based strategies depend on. The Replay Buffer Update Mechanism and Replay Buffer Low-Level Operations handle the dynamic aspects of this memory. Complementing this, the Regularization Techniques component applies penalties to the loss function to preserve previously learned knowledge; the Regularization Application Dispatcher and Regularization Loss Computations manage the specific algorithms for these penalties. Higher-level Continual Learning Strategies/Plugins, such as ReplayPlugin, then leverage these core components, orchestrating the overall continual learning process through the memory and regularization capabilities.
This is the foundational component for memory management, specifically for implementing replay buffers. It provides mechanisms for storing, organizing, and retrieving past data samples, which are crucial for replay-based continual learning strategies. It acts as the primary interface for memory operations.
Related Classes/Methods:
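The storage-policy idea can be sketched in a few lines. Everything below is an illustrative assumption: `SimpleStoragePolicy`, its reservoir-sampling update rule, and the method bodies only mirror the interface described above, not Avalanche's actual `StoragePolicy` implementation.

```python
import random

class SimpleStoragePolicy:
    """Minimal fixed-capacity replay buffer (illustrative sketch only,
    not Avalanche's real StoragePolicy)."""

    def __init__(self, max_size):
        self.max_size = max_size
        self.buffer = []   # stored samples
        self.seen = 0      # total samples observed so far

    def update_from_dataset(self, dataset):
        # Reservoir sampling: every sample seen so far ends up in the
        # buffer with equal probability max_size / seen.
        for sample in dataset:
            self.seen += 1
            if len(self.buffer) < self.max_size:
                self.buffer.append(sample)
            else:
                j = random.randrange(self.seen)
                if j < self.max_size:
                    self.buffer[j] = sample

    def resize(self, new_size):
        # Shrinking subsamples at random; growing just raises the cap.
        self.max_size = new_size
        if len(self.buffer) > new_size:
            self.buffer = random.sample(self.buffer, new_size)

policy = SimpleStoragePolicy(max_size=100)
policy.update_from_dataset([(i, i % 2) for i in range(1000)])
print(len(policy.buffer))  # 100
```

The key property is that the buffer stays bounded no matter how many samples stream through, which is what lets replay strategies train on arbitrarily long task sequences.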
This component is responsible for applying various regularization techniques during the training process. Its primary goal is to mitigate catastrophic forgetting by introducing penalties to the model's loss function, thereby preserving knowledge acquired from previous tasks. It serves as the main entry point for regularization.
Related Classes/Methods:
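The core pattern is simply adding weighted penalty terms to the task loss before backpropagation. This is a minimal sketch of that pattern; the function name, the dictionary-based API, and the scalar values are assumptions for illustration, not Avalanche's interface.

```python
def regularized_loss(task_loss, penalties, coefficients):
    """Combine the base task loss with weighted regularization terms.

    Illustrative sketch: each penalty (e.g. an LwF or EWC term) is
    scaled by its coefficient and added to the task loss. Penalties
    with no coefficient are ignored (coefficient 0.0).
    """
    total = task_loss
    for name, value in penalties.items():
        total += coefficients.get(name, 0.0) * value
    return total

loss = regularized_loss(
    task_loss=0.8,
    penalties={"lwf": 0.5, "ewc": 0.2},
    coefficients={"lwf": 1.0, "ewc": 0.4},
)
```

In a real training loop these values would be tensors and the summed loss would be the one passed to `backward()`, so the penalty gradients flow into the same optimizer step as the task gradients.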
Handles the high-level logic for updating the replay buffer within the Memory Management component. This includes deciding when and how to add new samples, and potentially adapting the buffer size or content based on training progress.
Related Classes/Methods:
avalanche.training.storage_policy.StoragePolicy.update
avalanche.training.storage_policy.StoragePolicy.post_adapt
avalanche.training.storage_policy.StoragePolicy.update_from_dataset
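The update flow described above can be illustrated with the simplest possible rule: after each experience, append the new samples and trim to capacity. `GreedyBuffer` and its keep-the-most-recent eviction policy are assumptions for illustration; real storage policies use smarter retention rules such as class balancing or reservoir sampling.

```python
class GreedyBuffer:
    """Toy buffer that keeps only the most recent max_size samples
    (illustrative sketch of the high-level update flow)."""

    def __init__(self, max_size):
        self.max_size = max_size
        self.buffer = []

    def update(self, experience_data):
        # High-level update, called once per experience: absorb the
        # new samples, then trim back down to capacity.
        self.buffer.extend(experience_data)
        if len(self.buffer) > self.max_size:
            self.buffer = self.buffer[-self.max_size:]

buf = GreedyBuffer(max_size=4)
for exp in ([1, 2, 3], [4, 5], [6, 7, 8]):
    buf.update(exp)
print(buf.buffer)  # [5, 6, 7, 8]
```

The point of keeping this logic in the policy rather than the strategy is that the same training loop can swap retention rules without changing anything else.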
Manages the actual data storage, resizing, and internal organization of samples within the replay buffer. These are the granular operations that support the Replay Buffer Update Mechanism.
Related Classes/Methods:
avalanche.training.storage_policy.StoragePolicy.resize
avalanche.training.storage_policy.StoragePolicy.get_group_lengths
avalanche.training.storage_policy.StoragePolicy._make_groups
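One representative low-level operation is dividing the total buffer capacity across groups (for example, one group per class or per experience) so that group sizes differ by at most one. The function below is an illustrative reimplementation of that balancing rule, not the library's exact code.

```python
def get_group_lengths(total_size, num_groups):
    """Split buffer capacity as evenly as possible across groups.

    The remainder is spread over the first groups, so any two group
    lengths differ by at most one and the lengths always sum to
    total_size. Illustrative sketch of the balancing rule.
    """
    base, rem = divmod(total_size, num_groups)
    return [base + 1 if i < rem else base for i in range(num_groups)]

print(get_group_lengths(200, 3))  # [67, 67, 66]
```

When a new class or experience arrives, recomputing these lengths and resizing each group accordingly is what keeps the buffer balanced as its contents shift over time.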
Acts as a central dispatcher within the Regularization Techniques component, orchestrating the application of different regularization terms. It determines which specific regularization loss functions to invoke based on the current training context.
Related Classes/Methods:
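A dispatcher of this kind can be sketched as a registry mapping technique names to loss functions, with the active set chosen by the training context. The two penalty functions, the registry, and the `dispatch` signature below are all illustrative assumptions, not Avalanche's API.

```python
def l2_drift(ctx):
    # Penalize parameter drift away from the previous model's weights.
    return sum((n - o) ** 2
               for o, n in zip(ctx["old_params"], ctx["new_params"]))

def output_distance(ctx):
    # Penalize changes in the model's outputs on replayed inputs.
    return sum(abs(n - o)
               for o, n in zip(ctx["old_out"], ctx["new_out"]))

# Registry of available regularization terms (illustrative).
REGULARIZERS = {"l2_drift": l2_drift, "output_distance": output_distance}

def dispatch(active, ctx):
    # Look up and sum every requested penalty; an unknown name
    # raises KeyError immediately rather than failing silently.
    return sum(REGULARIZERS[name](ctx) for name in active)

ctx = {"old_params": [1.0, 2.0], "new_params": [1.5, 2.0],
       "old_out": [0.2, 0.8], "new_out": [0.3, 0.7]}
total_penalty = dispatch(["l2_drift", "output_distance"], ctx)
```

Keeping the dispatch table separate from the loss implementations is what lets a strategy enable or disable individual penalties per task without touching the math.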
Implements the specific mathematical functions for computing various regularization losses (e.g., LwF penalty, distillation loss). These are the core algorithms that calculate the regularization terms added to the model's loss function.
Related Classes/Methods:
avalanche.training.regularization._lwf_penalty:101-131
avalanche.training.regularization.cross_entropy_with_oh_targets:22-33
avalanche.training.regularization._distillation_loss:83-99
avalanche.training.regularization.stable_softmax:14-19
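The two building blocks, a numerically stable softmax and a distillation-style cross-entropy between teacher and student outputs, can be sketched on plain Python lists. This is a scalar, single-sample illustration of the idea; the referenced library functions operate on batched tensors and may differ in details such as temperature scaling factors.

```python
import math

def stable_softmax(logits):
    """Softmax with the max subtracted first, so exp() cannot
    overflow even for large logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between temperature-softened teacher targets and
    student predictions (illustrative sketch of an LwF-style penalty).

    A higher temperature flattens both distributions, emphasizing the
    teacher's relative preferences over its top-1 prediction.
    """
    p = stable_softmax([x / temperature for x in teacher_logits])  # soft targets
    q = stable_softmax([x / temperature for x in student_logits])  # predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

The loss is minimized when the student's softened distribution matches the teacher's, which is exactly the "preserve old outputs" pressure that LwF-style regularization applies.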
These components represent the primary consumers and orchestrators of the "Memory & Regularization" subsystem. They utilize the Memory Management for replay and rely on Regularization Techniques to mitigate forgetting during their training processes.
Related Classes/Methods:
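How a plugin ties the memory pieces together can be sketched with two hooks: mix stored samples into the incoming batch before training, and let the storage policy absorb the experience afterwards. The hook names below echo Avalanche's callback style, but the plugin body, `LastKBuffer`, and the return-value conventions are illustrative assumptions, not the real ReplayPlugin.

```python
class LastKBuffer:
    """Toy storage policy: keeps the last max_size samples."""
    def __init__(self, max_size):
        self.max_size, self.buffer = max_size, []
    def update(self, data):
        self.buffer = (self.buffer + list(data))[-self.max_size:]

class TinyReplayPlugin:
    """Illustrative replay-style plugin built on a storage policy."""
    def __init__(self, policy):
        self.policy = policy

    def before_training_exp(self, current_data):
        # Mix stored memory samples into the current experience's data.
        return list(current_data) + list(self.policy.buffer)

    def after_training_exp(self, current_data):
        # Let the storage policy absorb the experience just trained on.
        self.policy.update(current_data)

plugin = TinyReplayPlugin(LastKBuffer(max_size=3))
batch1 = plugin.before_training_exp([1, 2])  # buffer still empty
plugin.after_training_exp([1, 2])
batch2 = plugin.before_training_exp([3])     # replays [1, 2]
print(batch2)  # [3, 1, 2]
```

The plugin itself contains no retention or penalty logic; it only decides when to consult the storage policy, which is why strategies can mix and match memory management and regularization freely.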