TY - GEN
T1 - Asynchronous Multi-Task Learning Based on One Stage YOLOR Algorithm
AU - Liou, Cheng Fu
AU - Lee, Tsung Han
AU - Guo, Jiun In
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - The You Only Learn One Representation (YOLOR) approach is an object detector that can encode implicit and explicit knowledge of multiple tasks simultaneously. However, the requirement of jointly feeding data from all tasks is not a friendly setting for an edge device due to the high computational cost. A better strategy is to learn the concept of each new task on the device individually, one by one, without access to the old data. In other words, the model has to deal with multiple tasks asynchronously. In this work, we extend the multi-purpose network YOLOR to asynchronous multi-task learning to learn domain-invariant features, focusing on capturing the relatedness between the weights of the previous task and the data of the subsequent task. Further, as the number of tasks gradually increases, we accumulate significant weights by introducing task-specific masks and expert modules; the former can automatically identify important filters to prevent modification caused by new tasks, and the latter address the kernel-space misalignment problem to perform multi-task feature selection. We experimentally demonstrate that the proposed training strategy significantly outperforms the traditional solution in learning multiple tasks at different times on a public dataset, which supports that the proposed approach is more competitive for resource-limited edge devices.
AB - The You Only Learn One Representation (YOLOR) approach is an object detector that can encode implicit and explicit knowledge of multiple tasks simultaneously. However, the requirement of jointly feeding data from all tasks is not a friendly setting for an edge device due to the high computational cost. A better strategy is to learn the concept of each new task on the device individually, one by one, without access to the old data. In other words, the model has to deal with multiple tasks asynchronously. In this work, we extend the multi-purpose network YOLOR to asynchronous multi-task learning to learn domain-invariant features, focusing on capturing the relatedness between the weights of the previous task and the data of the subsequent task. Further, as the number of tasks gradually increases, we accumulate significant weights by introducing task-specific masks and expert modules; the former can automatically identify important filters to prevent modification caused by new tasks, and the latter address the kernel-space misalignment problem to perform multi-task feature selection. We experimentally demonstrate that the proposed training strategy significantly outperforms the traditional solution in learning multiple tasks at different times on a public dataset, which supports that the proposed approach is more competitive for resource-limited edge devices.
KW - catastrophic forgetting
KW - channel-level sparsity
KW - continual learning
KW - edge device
KW - Multi-task learning
KW - one-stage object detection
UR - http://www.scopus.com/inward/record.url?scp=85172143172&partnerID=8YFLogxK
U2 - 10.1109/ISIE51358.2023.10228120
DO - 10.1109/ISIE51358.2023.10228120
M3 - Conference contribution
AN - SCOPUS:85172143172
T3 - IEEE International Symposium on Industrial Electronics
BT - 2023 IEEE 32nd International Symposium on Industrial Electronics, ISIE 2023 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 32nd IEEE International Symposium on Industrial Electronics, ISIE 2023
Y2 - 19 June 2023 through 21 June 2023
ER -