Online Continual Learning
on Hierarchical Label Expansion

ICCV 2023

1Dept. of Electrical and Computer Engineering, 2INMC & IPAI, Seoul National University, Republic of Korea
3Yonsei University, Republic of Korea
* Equal contribution, † Corresponding authors

Abstract

Continual learning (CL) enables models to adapt to new tasks and environments without forgetting previously learned knowledge. While existing CL setups ignore the relationship between the labels of past and new tasks, with or without small task overlaps, real-world scenarios often involve hierarchical relationships between old and new tasks, posing a further challenge for traditional CL approaches. To address this challenge, we propose a novel multi-level hierarchical class-incremental task configuration with an online learning constraint, called hierarchical label expansion (HLE).

Our configuration allows a network to first learn coarse-grained classes, with data labels continually expanding to more fine-grained classes at various hierarchy depths. To tackle this new setup, we propose a rehearsal-based method that utilizes hierarchy-aware pseudo-labeling to incorporate hierarchical class information. We also propose a simple yet effective memory management and sampling strategy that selectively adopts samples of newly encountered classes. Our experiments demonstrate that the proposed method effectively exploits hierarchy in our HLE setup to improve classification accuracy across all hierarchy levels, regardless of depth and class imbalance ratio, outperforming prior state-of-the-art methods by significant margins while also surpassing them on the conventional disjoint, blurry, and i-Blurry CL setups.
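For concreteness, here is a minimal Python sketch of the pseudo-labeling idea: a sample that only carries a coarse (parent) label can be assigned a fine-grained pseudo-label by restricting the model's prediction to the children of that parent. This is our own toy illustration under assumed names (HIERARCHY, hierarchy_pseudo_label), not the authors' released code; the paper defines the exact rule.

import torch
import torch.nn.functional as F

# Toy hierarchy (an assumption for this sketch): parent class -> child classes.
HIERARCHY = {"tree": ["maple", "oak"], "flower": ["rose", "tulip"]}
FINE_CLASSES = [c for children in HIERARCHY.values() for c in children]
CHILD_IDX = {p: [FINE_CLASSES.index(c) for c in children]
             for p, children in HIERARCHY.items()}

def hierarchy_pseudo_label(logits: torch.Tensor, parent: str) -> str:
    """Pick the most confident fine-grained class among the children
    of the sample's known parent class."""
    probs = F.softmax(logits, dim=-1)
    best = max(CHILD_IDX[parent], key=lambda i: probs[i].item())
    return FINE_CLASSES[best]

logits = torch.tensor([2.0, 0.5, 1.0, 3.0])    # model outputs over FINE_CLASSES
print(hierarchy_pseudo_label(logits, "tree"))  # -> 'maple'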

Overview of Hierarchical Label Expansion (HLE)

{"overview"}

Comparison sketch of the conventional, blurry, and our HLE setups. (a) The conventional task-free online CL setup gradually introduces new classes and classifies data without task identification. (b) The blurry task-free online CL setup divides classes into major and minor categories at each task, with varying proportions, leading to unclear task boundaries. (c) Our proposed HLE setup features class label expansion, where child class labels are added to parent class labels throughout the learning process.

HLE Scenarios

{"overview"}

An illustration of the two HLE scenarios. (a) In the single-depth scenario, fine-grained classes grow horizontally from coarse-grained ones within the same hierarchy level. This scenario is further explored through dual-label (overlapping data) and single-label (disjoint data) setups, as simulated in the sketch below. (b) In the multiple-depth scenario, classes grow vertically from coarse to fine across different hierarchy levels.
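To make the single-depth setups concrete, the following toy simulation (our own illustration; single_depth_stream and the toy hierarchy are hypothetical names, not part of the benchmark release) yields a parent-label task followed by a child-label task, either replaying the same images with finer labels (dual-label) or splitting them disjointly (single-label).

# Toy simulation of a single-depth HLE stream (assumption-laden sketch).
HIERARCHY = {"tree": ["maple", "oak"], "flower": ["rose", "tulip"]}
CHILD_TO_PARENT = {c: p for p, cs in HIERARCHY.items() for c in cs}

def single_depth_stream(images_by_child, dual_label=True):
    """Yield two tasks: task 1 with coarse (parent) labels, task 2 with
    fine (child) labels. dual_label=True reuses the same images with
    finer labels (overlapping data); dual_label=False splits the images
    of each class disjointly between the two tasks."""
    task1, task2 = [], []
    for child, images in images_by_child.items():
        split = len(images) if dual_label else len(images) // 2
        task1 += [(img, CHILD_TO_PARENT[child]) for img in images[:split]]
        task2 += [(img, child) for img in (images if dual_label else images[split:])]
    yield task1
    yield task2

images = {"maple": ["m1", "m2"], "rose": ["r1", "r2"]}
for t, task in enumerate(single_depth_stream(images, dual_label=False), 1):
    print(f"task {t}: {task}")
# task 1: [('m1', 'tree'), ('r1', 'flower')]
# task 2: [('m2', 'maple'), ('r2', 'rose')]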

Proposed Method (PL-FMS)

{"overview"}

Sketch of the two components of our proposed method, PL-FMS: PL and FMS. (a) Pseudo-Labeling based memory management (PL) discards the stored data sample to be replaced by incoming data based on its contribution to reducing the loss, irrespective of whether its label is true or pseudo. (b) Flexible Memory Sampling (FMS) forms the training batch by filtering and compensating data samples.
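Read together, the two components suggest the following shape in code. This is a loose sketch under our own assumptions (the RehearsalMemory class and the scoring and filtering rules are simplified stand-ins; the paper defines the actual criteria), showing PL-style eviction of the least loss-reducing sample and FMS-style batch formation.

import random

class RehearsalMemory:
    """Toy rehearsal buffer sketching the PL and FMS ideas (assumed API)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.samples = []  # each: {"data": ..., "label": ..., "score": ...}

    def insert(self, sample, loss_reduction_score):
        """PL-style management: when full, evict the stored sample that
        contributes least to reducing the loss, whether its label is a
        true label or a hierarchy-derived pseudo-label."""
        sample["score"] = loss_reduction_score
        if len(self.samples) < self.capacity:
            self.samples.append(sample)
            return
        weakest = min(range(len(self.samples)),
                      key=lambda i: self.samples[i]["score"])
        if loss_reduction_score > self.samples[weakest]["score"]:
            self.samples[weakest] = sample

    def flexible_batch(self, incoming, batch_size):
        """FMS-style sampling: keep incoming samples of newly encountered
        classes (filtering), then top the batch up with memory samples
        (compensation)."""
        seen = {s["label"] for s in self.samples}
        fresh = [x for x in incoming if x["label"] not in seen][:batch_size]
        need = max(0, batch_size - len(fresh))
        comp = random.sample(self.samples, min(need, len(self.samples)))
        return fresh + comp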

Results: Single-depth Scenario

{"overview"} {"overview"}

Table and figures for the experimental results of baseline methods and our proposed method (PL-FMS) evaluated on the HLE setup for the single-depth hierarchy scenario on CIFAR100, ImageNet-Hier100, and Stanford Cars. In the table, dual-label denotes overlapping data between tasks and single-label denotes disjoint data between tasks. Classification accuracy (%) at hierarchy levels 1 and 2 at the final task was measured for all datasets. In the figures, H=1 denotes parent classes and H=2 denotes child classes. Task index 1 receives parent-class-labeled data and subsequent indexes receive child-class-labeled data. Each data point shows the average accuracy over three runs (± std. deviation).

Results: Multi-depth Scenario

{"overview"} {"overview"}

Table and figures for the experimental results of baseline methods and our proposed method (PL-FMS) evaluated on the HLE setup for the multiple-depth hierarchy scenario on CIFAR100 and iNaturalist-19. Classification accuracy (%) at all hierarchy levels at the final task was measured for all datasets.

BibTeX


      @InProceedings{lee2023hle,
          author    = {Lee, Byung Hyun and Jung, Okchul and Choi, Jonghyun and Chun, Se Young},
          title     = {Online Continual Learning on Hierarchical Label Expansion},
          booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
          month     = {October},
          year      = {2023},
          pages     = {11761-11770}
      }