[CCoE Notice] Cullen College Dissertation Defense Announcement - Lening Wang

Hutchinson, Inez A iajackso at Central.UH.EDU
Fri Jul 21 11:46:02 CDT 2023


[Dissertation Defense Announcement at the Cullen College of Engineering]
Achieving Low-Cost and High-Efficient Robust Inference and Training for DNNs
Lening Wang

July 25, 2023; 2:00 PM - 3:30 PM (CDT)
Location:
Zoom: https://uh-edu-cougarnet.zoom.us/j/3959794030


Committee Chair:
Xin Fu, Ph.D.

Committee Members:
Jinghong Chen, Ph.D. | Hien Van Nguyen, Ph.D. | Biresh Kumar Joardar, Ph.D. | Xuqing Wu, Ph.D.

Abstract

Convolutional Neural Networks (CNNs) are widely used in computer vision and image recognition. However, deploying CNNs in practice poses several notable challenges. The first is training efficiency, as the computational demands of training CNNs can be substantial. The second is security, as CNNs are susceptible to a variety of adversarial attacks, including backdoor attacks on the software side and bit-flip attacks on the hardware side.

This dissertation addresses these challenges. We first present LP-RFL (Label-Guided Pruning for Robust Federated Learning), a low-cost framework for removing backdoor attacks in Federated Learning (FL). In LP-RFL, we observe that the weight gradients computed during training are highly sparse and that each class exhibits a unique sparse pattern. We further discover that for malicious clients with inserted backdoors, the sparse pattern computed during training is inconsistent with the sparse pattern of the targeted (incorrect) label. Based on these observations, we propose a label-guided gradient-pruning approach for FL that mitigates the impact of malicious clients. Moreover, since the training labels are known in advance, we can easily prune the weight gradients that do not belong to the critical path, yielding significant energy savings in both computation and communication.

We then present NNHammerGuard, a low-cost solution for protecting neural networks (NNs) against Row Hammer (RH) attacks. RH induces bit flips in adjacent memory rows by repeatedly accessing (reading or writing) a specific row of memory cells in rapid succession; flipping bits in NN parameters (i.e., weights) can cause the network to fail. In NNHammerGuard, we find that only a few bits in the weights are vulnerable to bit flips, so we focus on protecting those bits alone. We further find that only a few memory cells can actually be flipped by RH. Therefore, we simply mismatch the vulnerable weight bits with the vulnerable memory cells to enhance the NN's robustness against RH attacks.

In summary, this dissertation proposes low-cost solutions that improve NNs' energy and computation efficiency while enhancing their security.
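As a rough illustration of the LP-RFL idea sketched in the abstract: a per-label sparse gradient pattern can be compared against the pattern an incoming client update actually produces, and off-pattern gradients pruned. The abstract does not specify the consistency metric or thresholds, so the Jaccard overlap, keep_ratio, and threshold used below are assumptions, and all function names are hypothetical.

import numpy as np

def sparse_pattern(grad, keep_ratio=0.01):
    # Indices of the largest-magnitude gradient entries; the abstract
    # observes gradients are highly sparse and class-distinctive.
    flat = np.abs(grad).ravel()
    k = max(1, int(keep_ratio * flat.size))
    return set(np.argpartition(flat, -k)[-k:].tolist())

def pattern_consistency(client_grad, label_pattern, keep_ratio=0.01):
    # Jaccard overlap (an assumed metric) between the client's pattern
    # and the reference pattern of the label it claims to train on.
    client_pattern = sparse_pattern(client_grad, keep_ratio)
    union = client_pattern | label_pattern
    return len(client_pattern & label_pattern) / len(union) if union else 0.0

def filter_and_prune(client_grad, label_pattern, threshold=0.3):
    # Reject an update whose pattern is inconsistent with its claimed
    # label (a suspected backdoored client); otherwise zero out the
    # off-pattern entries, saving computation and communication.
    if pattern_consistency(client_grad, label_pattern) < threshold:
        return None
    mask = np.zeros(client_grad.size, dtype=bool)
    mask[list(label_pattern)] = True
    return (client_grad.ravel() * mask).reshape(client_grad.shape)

# Example: build a reference pattern from a trusted gradient for one
# label, then screen an incoming client update claiming that label.
reference = sparse_pattern(np.random.randn(64, 64))
screened = filter_and_prune(np.random.randn(64, 64), reference)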
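The NNHammerGuard idea of "mismatching" vulnerable weight bits with RH-flippable cells can be pictured as a placement problem: store each weight word in a memory slot where none of its critical bit positions coincides with a cell an RH profile found flippable. The abstract does not describe the actual placement algorithm, so this greedy first-fit sketch, along with the names weight_ids, vulnerable_bits, and flippable_cells, is only an assumed illustration.

def mismatch_layout(weight_ids, vulnerable_bits, flippable_cells):
    # weight_ids: weight words to place
    # vulnerable_bits: word id -> bit positions whose flip would break
    #   the model (e.g., sign/exponent bits)
    # flippable_cells: set of (slot, bit) pairs an RH profile can flip
    placement = {}
    free_slots = list(range(len(weight_ids)))
    for w in weight_ids:
        for s in free_slots:
            # Accept the first slot whose flippable cells miss every
            # vulnerable bit position of this word.
            if all((s, b) not in flippable_cells for b in vulnerable_bits[w]):
                placement[w] = s
                free_slots.remove(s)
                break
        else:
            raise RuntimeError(f"no RH-safe slot for weight word {w}")
    return placement

# Toy example: word 0's exponent bits (30, 31) must avoid the profiled
# flippable cell (slot 0, bit 31), so it lands in slot 1 instead.
layout = mismatch_layout(
    weight_ids=[0, 1],
    vulnerable_bits={0: {30, 31}, 1: {15}},
    flippable_cells={(0, 31)},
)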
[Engineered For What's Next]
