Online Learning for Orchestrating Deep Learning Inference at Edge
- Shahhosseini, Sina
- Advisor(s): Dutt, Nikil
Abstract
Deep-learning-based intelligent services have become prevalent in cyber-physical applications, including smart cities and healthcare. Resource-constrained end-devices must be carefully managed to meet the latency and energy requirements of computationally intensive deep learning services. Collaborative end-edge-cloud computing for deep learning provides a range of performance and efficiency options that can address application requirements through computation offloading. The decision to offload computation is a communication-computation co-optimization problem that varies with both system parameters (e.g., network condition) and workload characteristics. In addition, deep learning model optimization offers another tradeoff, between latency and model accuracy. An end-to-end decision-making solution that accounts for this communication-computation co-optimization is required to synergistically find the optimal offloading policy and model for deep learning services. To this end, we propose a reinforcement-learning-based computation offloading solution that learns the optimal offloading policy, incorporating deep learning model selection techniques, to minimize response time while providing sufficient accuracy. We demonstrate the efficacy of our strategies through experimental comparison with state-of-the-art RL-based inference orchestration. In addition, we investigate applying our intelligent orchestration strategy to eHealth monitoring systems as a case study.
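To make the offloading idea concrete, the following is a minimal sketch, not the dissertation's actual algorithm, of how a tabular Q-learning agent could jointly choose an offload target (local, edge, or cloud) and a model variant under varying network conditions. The discretized states, the action set, the toy latency model, and all hyperparameters here are illustrative assumptions.

```python
import random

random.seed(0)  # reproducible toy run

# Hypothetical discretized system states: (network condition, device load)
STATES = [(net, load) for net in ("good", "poor") for load in ("low", "high")]
# Illustrative actions: where to run inference and which model variant
ACTIONS = [("local", "small"), ("edge", "large"), ("cloud", "large")]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

# Q-table mapping (state, action) -> estimated return (negative latency)
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose_action(state):
    """Epsilon-greedy selection over offloading choices."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def simulated_latency(state, action):
    """Toy latency model (assumption, for illustration only): offloading is
    cheap when the network is good; local execution is cheap when the
    device load is low; the small model trades accuracy for speed."""
    net, load = state
    target, model = action
    base = {"local": 80.0, "edge": 40.0, "cloud": 60.0}[target]
    if target != "local" and net == "poor":
        base += 100.0  # slow link penalizes offloading
    if target == "local" and load == "high":
        base += 60.0   # busy device penalizes local execution
    if model == "small":
        base *= 0.7    # smaller model runs faster
    return base

def step(state):
    """One Q-learning update: act, observe reward, update the Q-table."""
    action = choose_action(state)
    reward = -simulated_latency(state, action)  # minimize response time
    next_state = random.choice(STATES)          # toy environment transition
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    return next_state

state = random.choice(STATES)
for _ in range(5000):
    state = step(state)

# Greedy policy learned per state: offload over a good network,
# fall back to the small local model when the network is poor
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

Under this toy model the learned policy offloads to the edge when the network is good and the device is busy, and prefers the small local model when the network is poor, which mirrors the communication-computation tradeoff the abstract describes.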