<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0); background-color: rgb(255, 255, 255);" class="elementToProof">
<br>
</div>
<div>
<div dir="ltr">
<div class="x_gmail_quote">
<div dir="ltr">
<table align="center" bgcolor="#fff" border="0" cellpadding="0" cellspacing="0" width="600" style="font-family:Arial,sans-serif">
<tbody>
<tr>
<td><img alt="Dissertation Defense Announcement at the Cullen College of Engineering" width="600" height="171" src="https://www.egr.uh.edu/sites/www.egr.uh.edu/files/enews/2022/images/dissertation1.png">
<table align="center" bgcolor="#ffffff" border="0" cellpadding="10" cellspacing="0">
<tbody>
<tr>
<td align="center" style="padding:40px 20px 10px">
<div style="font-size:24px; color:rgb(200,16,46); line-height:28px"><strong>Designing Highly-Efficient Hardware Accelerators for Robust and Automatic Deep Learning Technologies</strong></div>
<div style="margin:30px 0px; line-height:20px">
<div style="font-size:18px; margin-bottom:5px"><strong>Qiyu Wan</strong></div>
<div style="font-size:14px; line-height:20px">
<p style="font-family:Arial,Helvetica,sans-serif; line-height:22px; margin:0px 0px 5px">
November 22, 2022; 2:00 PM - 4:00 PM (CST)<br>
Zoom: <a href="https://uh-edu-cougarnet.zoom.us/j/3959794030">https://uh-edu-cougarnet.zoom.us/j/3959794030</a></p>
</div>
</div>
<div style="font-size:14px; line-height:20px">
<p style="font-family:Arial,Helvetica,sans-serif; line-height:22px; margin:0px 0px 5px">
<strong>Committee Chair:</strong><br>
Xin Fu, Ph.D.</p>
</div>
<div style="font-size:14px; line-height:20px">
<p style="font-family:Arial,Helvetica,sans-serif; line-height:22px; margin:0px 0px 20px">
<strong>Committee Members:</strong><br>
Kaushik Rajashekara, Ph.D. | Miao Pan, Ph.D. | Jinghong Chen, Ph.D. | Shuaiwen Leon Song, Ph.D.</p>
</div>
</td>
</tr>
<tr>
<td style="padding:0px 20px 20px">
<p style="font-family:Arial,Helvetica,sans-serif; font-size:16px; line-height:22px; margin:15px 0px; color:rgb(200,16,46)">
<strong>Abstract</strong></p>
<p style="font-family:Arial,Helvetica,sans-serif; font-size:14px; line-height:22px; margin:15px 0px">
Deep learning based AI technologies, such as deep convolutional neural networks (DNNs), have recently achieved amazing success in numerous applications, such as image recognition, autonomous driving, and so on. However, there are two critical issues in the
conventional DNN applications. The first problem is safety. DNN models can become unreliable due to the uncertainty in data, e.g., insufficient labeled training data, measurement errors and noise in the label. To address this issue, Bayesian deep learning
has become an appealing solution since it provides a mathematically grounded framework to quantify uncertainties for a model's final prediction. As a key example, Bayesian Neural Networks (BNNs) are one of the most successful Bayesian models being increasingly
employed in a wide range of real-world AI applications which demand reliable and robust decisions. However, the nature of BNN stochastic inference and training procedures incurs orders of magnitude higher computational costs than conventional DNN models, which
poses a daunting challenge to traditional hardware platforms, such as CPUs/GPUs. The second issue lying in the conventional DNN applications is the laboring-intensive design period. The actual architecture design of a DNN model demands a significant amount
of effort and cycles from machine learning experts. Fortunately, the recent emergence of Neural Architecture Search (NAS) has brought the neural architecture design into an era of automation. Nevertheless, the search cost is still prohibitively expensive for
practical large-scale deployment in real-world applications.<br>
<p style="font-family:Arial,Helvetica,sans-serif; font-size:14px; line-height:22px; margin:15px 0px">
This dissertation focuses on designing high-speed, energy-efficient hardware accelerators for robust and automatic deep learning technologies, i.e., BNNs and NAS. Two BNN accelerators, Fast-BCNN and Shift-BNN, are proposed to accelerate BNN inference and training, respectively. Furthermore, an efficient in-situ NAS search engine is introduced for large-scale deployment in real-world applications. The proposed accelerators show promise in efficiently addressing the challenges of executing BNN and NAS workloads.</p>
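<p style="font-family:Arial,Helvetica,sans-serif; font-size:14px; line-height:22px; margin:15px 0px">
To give a rough sense of why BNN inference costs orders of magnitude more than conventional DNN inference, the minimal Python sketch below illustrates Monte Carlo BNN inference with a Gaussian weight posterior. The layer sizes, posterior parameters, and sample count here are illustrative assumptions, not details of the Fast-BCNN or Shift-BNN designs; the point is simply that each prediction requires many independent forward passes.</p>
<pre style="font-family:Consolas,monospace; font-size:12px; line-height:18px; margin:15px 0px; background-color:#f5f5f5; padding:10px; overflow:auto">
import numpy as np

# Minimal sketch: a one-layer Bayesian "network" whose weights follow a
# learned Gaussian posterior N(mu, sigma^2). All sizes are hypothetical.
rng = np.random.default_rng(0)
in_dim, out_dim, num_samples = 64, 10, 100

mu = rng.standard_normal((in_dim, out_dim)) * 0.1  # posterior means
sigma = np.full((in_dim, out_dim), 0.05)           # posterior std devs

def forward(x, w):
    # One deterministic forward pass, as in a conventional DNN.
    return np.tanh(x @ w)

x = rng.standard_normal(in_dim)

# BNN inference: every prediction runs num_samples forward passes, each
# with freshly sampled weights -- roughly num_samples times the compute
# of a single conventional DNN inference.
preds = np.stack([
    forward(x, mu + sigma * rng.standard_normal(mu.shape))
    for _ in range(num_samples)
])

mean_pred = preds.mean(axis=0)   # final prediction
uncertainty = preds.std(axis=0)  # per-output predictive uncertainty
</pre>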
</td>
</tr>
</tbody>
</table>
</td>
</tr>
<tr>
<td><img alt="Engineered For What's Next" width="600" height="82" src="https://www.egr.uh.edu/sites/www.egr.uh.edu/files/enews/2022/images/dissertation2.png"></td>
</tr>
</tbody>
</table>
<div>
<div dir="ltr" data-smartmail="gmail_signature">
<div dir="ltr">Sincerely,
<div>Qiyu</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</body>
</html>