eScholarship
Open Access Publications from the University of California

UC Berkeley

UC Berkeley Electronic Theses and Dissertations

The Design of Dynamic Neural Networks for Efficient Learning and Inference

Abstract

Intelligent systems with computer vision capabilities are poised to revolutionize social interaction, entertainment, healthcare, and work automation. Recent progress in computer vision research, especially with deep learning methods, is largely driven by immense datasets, complex models, and billions of parameters. However, real-world applications often lack access to large-scale datasets and cannot afford expensive data labeling and computational overhead. Furthermore, we live in a dynamic world, which requires machine learning models to handle changing data distributions and novel prediction tasks. When deploying learning techniques in real systems, we still face significant hurdles: scarce data and labels, limited computation, and ever-changing prediction scenarios.

The focus of this thesis is the design of dynamic neural networks and their application to efficient learning (e.g., few-shot learning, semi-supervised learning) and inference. Dynamic networks adapt their parameters and structures as a function of the input to leverage test-time information. For example, we introduce dynamic neural architectures that scale their computational complexity with the input. To achieve small improvements in prediction accuracy, deep neural networks have grown rapidly in depth and computational cost, and the same amount of computation is applied regardless of the input; many inputs, however, require far less computation to render an accurate prediction. In Part I, we describe a range of dynamic models, such as SkipNet and DeepMoE, that adjust network depth on a per-input basis without reducing prediction accuracy. In Part II, we describe the use of dynamic neural networks for sample-efficient learning. We propose dynamic weight generation using a task-aware meta-learner and apply it to the few-shot learning setting. We find that adapting network parameters at prediction time improves model generalization.
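The per-input depth adjustment described above can be illustrated with a minimal sketch: each residual block is guarded by a gate that decides, from the current activations, whether to execute the block or take the identity skip. This is a toy illustration of the general idea, not the thesis's actual SkipNet implementation; the gate rule, block transform, and all names here are illustrative assumptions.

```python
# Toy sketch of per-input dynamic depth (SkipNet-style gating).
# Gate, block, and threshold are illustrative stand-ins, not the real model.

def gate(x, threshold=0.5):
    """Toy gate: execute the block only when mean activation is high enough."""
    return sum(x) / len(x) >= threshold

def block(x):
    """Toy residual block: x + F(x), with F(x) = 0.1 * x elementwise."""
    return [xi + 0.1 * xi for xi in x]

def dynamic_forward(x, num_blocks=4):
    """Run the network, spending computation on a block only if its gate fires.

    Returns the output activations and how many blocks were executed,
    so the effective depth varies per input.
    """
    executed = 0
    for _ in range(num_blocks):
        if gate(x):
            x = block(x)
            executed += 1
        # else: identity skip -- no computation spent on this block
    return x, executed
```

Under this toy gate, a low-activation input passes through all four blocks via identity skips (effective depth 0), while a high-activation input executes every block; a learned gate would instead be trained, e.g. with reinforcement learning or a soft relaxation, so that easy inputs use shallower paths without hurting accuracy.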
