Design and Optimization of Hardware Accelerator Design
- Alla, Navateja
- Advisor(s): Esmaeilzadeh, Hadi
Abstract
Deep neural networks have become prominent in solving many real-life problems, but they rely on learning patterns from data, and serving them at scale is costly. As the demand for such services grows, merely scaling out the number of accelerators is not economically viable. Although multi-tenancy has propelled data center scalability, it has not been a primary consideration in designing DNN accelerators, owing to the arms race for higher speed and efficiency. This work proposes a new architecture that spatially co-locates multiple DNN inference services on the same hardware, offering simultaneous multi-tenant DNN acceleration.
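To make the idea of spatial co-location concrete, the toy sketch below partitions a fixed pool of processing elements (PEs) among tenants in proportion to their demand. All names and the proportional policy are illustrative assumptions, not the mechanism described in the thesis:

```python
def partition_pes(total_pes, demands):
    """Illustrative proportional split of `total_pes` processing elements
    among tenants, guaranteeing each tenant at least one PE.

    demands: dict mapping tenant name -> relative demand (positive number).
    Returns: dict mapping tenant name -> number of PEs assigned.
    """
    total_demand = sum(demands.values())
    items = sorted(demands.items())  # deterministic order
    alloc = {}
    remaining = total_pes
    for i, (tenant, d) in enumerate(items):
        if i == len(items) - 1:
            # Last tenant absorbs any rounding slack.
            alloc[tenant] = remaining
        else:
            share = round(total_pes * d / total_demand)
            # Leave at least one PE for every tenant still to be served.
            share = min(share, remaining - (len(items) - 1 - i))
            share = max(share, 1)
            alloc[tenant] = share
            remaining -= share
    return alloc

print(partition_pes(64, {"resnet": 3, "bert": 1}))
```

Two equally demanding tenants on a 64-PE array each receive 32 PEs; skewed demands skew the split accordingly. A real multi-tenant accelerator would also have to schedule tiles over time and arbitrate shared memory bandwidth, which this sketch ignores.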