Hong Kong University of Science and Technology, Hong Kong
Biography: James Kwok is a Professor in the Department of Computer Science and Engineering, Hong Kong University of Science and Technology. He is an IEEE Fellow. Prof Kwok received his B.Sc. degree in Electrical and Electronic Engineering from the University of Hong Kong and his Ph.D. degree in Computer Science from the Hong Kong University of Science and Technology. He then joined the Department of Computer Science, Hong Kong Baptist University as an Assistant Professor. He later returned to the Hong Kong University of Science and Technology, where he is now a Professor in the Department of Computer Science and Engineering. He serves as an Associate Editor for the IEEE Transactions on Neural Networks and Learning Systems, Neural Networks, Neurocomputing, Artificial Intelligence Journal, and the International Journal of Data Science and Analytics, and is on the Editorial Board of Machine Learning. He is also serving or has served as a Senior Area Chair of major machine learning / AI conferences including NeurIPS, ICML, ICLR, and IJCAI, and as an Area Chair of conferences including AAAI and ECML. He is on the IJCAI Board of Trustees. He received the Most Influential Scholar Award Honorable Mention for "outstanding and vibrant contributions to the field of AAAI/IJCAI between 2009 and 2019". Prof Kwok will be the IJCAI-2025 Program Chair.
Speech Title: Multi-Objective Deep Learning
Multi-objective optimization (MOO) aims to optimize multiple conflicting objectives simultaneously and is becoming increasingly important in deep learning. However, traditional MOO methods face significant challenges due to the non-convexity and high dimensionality of modern deep neural networks, making effective MOO in deep learning a complex endeavor.
In this talk, we address these challenges in MOO for several deep learning applications. First, in multi-task learning, we propose an efficient approach that learns the Pareto manifold by integrating a main network with several low-rank matrices. This method significantly reduces the number of parameters and helps extract shared features. We also introduce preference-aware model merging, which uses MOO to combine multiple models into a single one, treating the performance of the merged model on each base model's task as an objective. During the merging process, our parameter-efficient structure generates a Pareto set of merged models, each representing a Pareto-optimal solution tailored to specific preferences. Finally, we demonstrate that pruning large language models (LLMs) can be framed as a MOO problem, allowing for the efficient generation of a Pareto set of pruned models that illustrate various capability trade-offs.
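To make the Pareto-set idea concrete, here is a minimal, hypothetical sketch (not Prof. Kwok's actual method): a shared backbone layer plus per-objective low-rank matrices, trained with randomly sampled preference vectors so that a single parameter-efficient model can be steered to different points on the Pareto front at inference time. All names and shapes below are illustrative assumptions.

```python
# Hypothetical sketch of preference-conditioned, low-rank Pareto-set learning.
import torch
import torch.nn as nn

class ParetoLowRankLayer(nn.Module):
    def __init__(self, d_in, d_out, n_tasks, rank=4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)               # shared main network
        # one low-rank matrix pair (A @ B) per objective
        self.A = nn.Parameter(torch.randn(n_tasks, d_out, rank) * 0.01)
        self.B = nn.Parameter(torch.randn(n_tasks, rank, d_in) * 0.01)

    def forward(self, x, pref):
        # pref: (n_tasks,) simplex weights selecting a point on the Pareto front
        delta = torch.einsum("t,tor,tri->oi", pref, self.A, self.B)
        return self.base(x) + x @ delta.T

# toy training loop with two deliberately conflicting regression objectives
torch.manual_seed(0)
model = ParetoLowRankLayer(d_in=8, d_out=1, n_tasks=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.randn(256, 8)
y1, y2 = x[:, :1], -x[:, :1]

for step in range(200):
    pref = torch.distributions.Dirichlet(torch.ones(2)).sample()
    out = model(x, pref)
    loss = pref[0] * ((out - y1) ** 2).mean() + pref[1] * ((out - y2) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# at inference, sweeping pref traces out a set of preference-specific trade-offs
```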
Prof. Yin-Tien Wang
Tamkang University, Taiwan
Biography: Professor Yin-Tien Wang received the B.S. degree from Tamkang University (TKU), New Taipei City, Taiwan, in 1983, the M.S. degree from the Stevens Institute of Technology, Hoboken, NJ, USA, in 1988, and the Ph.D. degree from the University of Pennsylvania, Philadelphia, PA, USA, in 1992, all in mechanical engineering. He joined the Department of Mechanical and Electro-Mechanical Engineering, TKU, as an Associate Professor in 1992 and was appointed Full Professor in 2013. He served as Chairperson of the Department of Mechanical and Electro-Mechanical Engineering, TKU, from 2016 to 2020. He is currently a Professor and the Chairperson of the Department of Artificial Intelligence at TKU, where he is also in charge of robotics and machine vision courses. His current interests include computer vision research and the transfer of this technology to robotic and non-robotic application domains.
Speech Title: Mitigating Catastrophic Forgetting via Brain-Inspired Gating and Consolidation
Deep neural networks suffer from catastrophic forgetting in continual learning, as training on new tasks often overwrites previously learned knowledge. Retraining from scratch with all past data is a common solution, but it is costly and often infeasible due to storage, privacy, and operational constraints. Continual learning therefore seeks incremental model updates that preserve prior knowledge under limited resources. In this talk, we present a brain-inspired continual learning framework that mitigates catastrophic forgetting. Inspired by complementary learning systems (CLS), our method enables lifelong learning without replaying past data. The framework uses hippocampal-style, input-driven gating for dynamic routing. We introduce geometric resonance for vision and semantic interactive resonance for sequence modeling to generate spatially selective gating signals that regulate information flow and activate task-relevant neurons. We further translate synaptic tagging and capture (STC) into a novelty-driven, prediction-error-based consolidation process that protects critical memory traces while maintaining synaptic plasticity. Experiments show reduced forgetting without replay and a favorable stability-plasticity trade-off. Using sparse computation in spiking neural networks (SNNs), the framework also achieves significant inference energy savings with competitive performance.
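For readers unfamiliar with the two ingredients named above, here is a minimal, generic sketch of input-driven gating and importance-weighted consolidation. It is not the speakers' actual geometric-resonance or STC mechanism: the gating here is a simple top-k mask from a learned gate, and the consolidation term is an EWC-style stand-in; all names are illustrative.

```python
# Generic sketch: input-driven gating + importance-weighted consolidation.
import torch
import torch.nn as nn

class GatedMLP(nn.Module):
    def __init__(self, d_in=32, d_hidden=128, d_out=10, k=16):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_out)
        self.gate = nn.Linear(d_in, d_hidden)   # input-driven gating signal
        self.k = k                               # number of hidden units kept active

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        # gating: keep only the top-k hidden units most relevant to this input,
        # so different tasks tend to activate different sparse subsets of neurons
        idx = self.gate(x).topk(self.k, dim=-1).indices
        mask = torch.zeros_like(h).scatter_(-1, idx, 1.0)
        return self.fc2(h * mask)

def consolidation_penalty(model, anchor_params, importance, lam=10.0):
    # EWC-style stand-in for STC-inspired consolidation: parameters tagged as
    # important for old tasks are pulled back toward their consolidated values.
    loss = 0.0
    for name, p in model.named_parameters():
        loss = loss + (importance[name] * (p - anchor_params[name]) ** 2).sum()
    return lam * loss

# usage: after finishing task t, snapshot parameters and their importance,
# then add consolidation_penalty(...) to the loss while training task t+1.
```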