<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<!--[if !mso]><style>v\:* {behavior:url(#default#VML);}
o\:* {behavior:url(#default#VML);}
w\:* {behavior:url(#default#VML);}
.shape {behavior:url(#default#VML);}
</style><![endif]--><style><!--
/* Font Definitions */
@font-face
        {font-family:"Cambria Math";
        panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
        {font-family:Calibri;
        panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
        {font-family:Aptos;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
        {margin:0in;
        font-size:12.0pt;
        font-family:"Calibri",sans-serif;
        mso-ligatures:standardcontextual;}
a:link, span.MsoHyperlink
        {mso-style-priority:99;
        color:#0563C1;
        text-decoration:underline;}
span.EmailStyle17
        {mso-style-type:personal-compose;
        font-family:"Calibri",sans-serif;
        color:windowtext;}
.MsoChpDefault
        {mso-style-type:export-only;}
@page WordSection1
        {size:8.5in 11.0in;
        margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
        {page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="EN-US" link="#0563C1" vlink="#954F72" style="word-wrap:break-word">
<div class="WordSection1">
<div align="center">
<table class="MsoNormalTable" border="0" cellspacing="0" cellpadding="0" width="600" style="width:6.25in">
<tbody>
<tr>
<td style="padding:0in 0in 0in 0in">
<p class="MsoNormal"><span style="font-family:"Aptos",sans-serif;mso-ligatures:none"><img width="599" height="171" style="width:6.2395in;height:1.7812in" id="_x0000_i1037" src="cid:image001.png@01DBAEE8.8DC400C0" alt="Thesis Defense Announcement at the Cullen College of Engineering"><o:p></o:p></span></p>
<div align="center">
<table class="MsoNormalTable" border="0" cellspacing="0" cellpadding="0" style="background:white">
<tbody>
<tr>
<td style="padding:30.0pt 15.0pt 7.5pt 15.0pt">
<p class="MsoNormal" align="center" style="text-align:center;mso-line-height-alt:15.0pt">
<b><span style="font-size:18.0pt;font-family:"Times New Roman",serif;color:#C00000">A Unified Diffusion-Based Representation Learning Framework for Hyperspectral Image Analysis<o:p></o:p></span></b></p>
<p class="MsoNormal" align="center" style="text-align:center;mso-line-height-alt:15.0pt">
<b><span style="font-size:18.0pt;font-family:"Times New Roman",serif;color:#C00000"> </span></b><b><span style="font-size:18.0pt;font-family:"Times New Roman",serif;color:#C8102E"><br>
</span></b><b><span style="font-size:13.5pt;font-family:"Times New Roman",serif;color:black;mso-ligatures:none">Yuzhen Hu</span></b><span style="font-size:11.0pt;font-family:"Times New Roman",serif;mso-ligatures:none"><o:p></o:p></span></p>
<p class="MsoNormal" align="center" style="text-align:center;line-height:16.5pt">
<span style="font-size:10.5pt;font-family:"Arial",sans-serif;color:black;mso-ligatures:none">April 29, 2025, 1:00 p.m. to 3:00 p.m. (CST)<o:p></o:p></span></p>
<p class="MsoNormal" align="center" style="text-align:center;line-height:16.5pt">
<span style="font-size:10.5pt;font-family:"Arial",sans-serif;color:black;mso-ligatures:none"><a href="https://urldefense.com/v3/__https://teams.microsoft.com/l/meetup-join/19*3ameeting_MGVhZjFmODMtNTY0ZC00OGQ1LWE4MTYtNDYxZGM0NDFkYzdk*40thread.v2/0?context=*7b*22Tid*22*3a*22170bbabd-a2f0-4c90-ad4b-0e8f0f0c4259*22*2c*22Oid*22*3a*22c0a4c8cf-9aed-4850-a3b2-880cc2ea1c47*22*7d__;JSUlJSUlJSUlJSUlJSUl!!LkSTlj0I!Cv6A-l8L4h-IT3Kc2XwSwVwrszNXbEZsxz13on1EX3VLIWtXnCtZjo2Dd45V-5sYnnxWDRHmECwvpsu94UM7lFPWrD4$">Teams
link</a> <o:p></o:p></span></p>
<p class="MsoNormal" align="center" style="margin-bottom:12.0pt;text-align:center;line-height:16.5pt">
<span style="font-size:10.5pt;font-family:"Arial",sans-serif;color:black;mso-ligatures:none">Meeting ID: 287 765 081 841 0<o:p></o:p></span></p>
<p class="MsoNormal" align="center" style="margin-bottom:12.0pt;text-align:center;line-height:16.5pt">
<span style="font-size:10.5pt;font-family:"Arial",sans-serif;color:black;mso-ligatures:none">Passcode: Vw9YG7LH<o:p></o:p></span></p>
<p class="MsoNormal" align="center" style="margin-bottom:3.75pt;text-align:center;line-height:16.5pt">
<b><span style="font-size:10.5pt;font-family:"Arial",sans-serif;color:black;mso-ligatures:none"><o:p> </o:p></span></b></p>
<p class="MsoNormal" align="center" style="margin-bottom:3.75pt;text-align:center;line-height:16.5pt">
<b><span style="font-size:10.5pt;font-family:"Arial",sans-serif;color:black;mso-ligatures:none">Committee Chair:</span></b><span style="font-size:10.5pt;font-family:"Arial",sans-serif;color:black;mso-ligatures:none"><br>
Saurabh Prasad, Ph.D. </span><span style="font-size:11.0pt;font-family:"Aptos",sans-serif;mso-ligatures:none"><o:p></o:p></span></p>
<p class="MsoNormal" align="center" style="margin-bottom:15.0pt;text-align:center;line-height:15.0pt">
<b><span style="font-size:10.5pt;font-family:"Arial",sans-serif;color:black;mso-ligatures:none">Committee Members:</span></b><span style="font-size:10.5pt;font-family:"Arial",sans-serif;color:black;mso-ligatures:none"><br>
Yashashree Kulkarni, Ph.D. | David Mayerich, Ph.D. </span><span style="font-size:10.5pt;font-family:"Aptos",sans-serif;mso-ligatures:none"><o:p></o:p></span></p>
</td>
</tr>
<tr>
<td style="padding:0in 15.0pt 15.0pt 15.0pt">
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><b><span style="font-family:"Arial",sans-serif;color:#C8102E;mso-ligatures:none">Abstract</span></b><span style="font-family:"Arial",sans-serif;color:#C8102E;mso-ligatures:none"><o:p></o:p></span></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span style="font-size:10.5pt;font-family:"Arial",sans-serif;color:black;mso-ligatures:none">Hyperspectral imaging is a promising remote sensing modality for robust land-cover mapping; however, analysis of such imagery is often challenging owing to its high spectral dimensionality and the limited pixel-level annotations available for training. Additionally, hyperspectral imagery acquired from satellites is often of lower spatial resolution than its multi-spectral or color-image counterparts.</span><span style="font-size:10.5pt;font-family:"Arial",sans-serif;mso-ligatures:none"><o:p></o:p></span></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span style="font-size:10.5pt;font-family:"Arial",sans-serif;color:black;mso-ligatures:none">Diffusion models exhibit strong generative capabilities and effectively preserve spatial structure, making them well suited for feature extraction from low-spatial-resolution hyperspectral imagery with degraded textures. We validate their efficacy by proposing GeoDiffNet-F, an approach that leverages pseudo-RGB representations and a diffusion model pre-trained on natural images (ImageNet), without domain adaptation, to extract transferable low-level spatial features. Combined with per-pixel spectral reflectance, these features significantly improve classification performance and outperform existing baselines, highlighting the strength of diffusion-based spatial feature extraction for hyperspectral land-cover mapping. While GeoDiffNet-F demonstrates the utility of low-level features, the full potential of diffusion models lies in their ability to generate hierarchical, tree-like representations through multi-step denoising, with progressions ranging from global structures to fine details. Fully leveraging this capacity requires adaptation to the target domain. A central challenge is catastrophic forgetting, which can degrade the generalization gained from large-scale pretraining, especially under limited data. To address this, we propose a parameter-efficient domain-adaptive pre-training strategy for unsupervised representation learning, which updates only adaptive normalization layers (e.g., FiLM-like affine modulation). This enables the model to extract modality-aware features and adapt rapidly while preserving general spatial priors.</span><span style="font-size:10.5pt;font-family:"Arial",sans-serif;mso-ligatures:none"><o:p></o:p></span></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span style="font-size:10.5pt;font-family:"Arial",sans-serif;color:black;mso-ligatures:none">Building on these insights, we introduce UniDiff-MM, a unified diffusion-based framework that addresses the dual challenges of domain adaptation and multimodal fusion in hyperspectral imagery. UniDiff-MM combines two key innovations: (1) the parameter-efficient adaptation strategy described above, and (2) modality-aware conditioning, where the diffusion process is conditioned on distinct spectral views: pseudo-RGB for spatial content and PCA-reduced bands for spectral content. This enables a single shared diffusion model to adapt to multiple domain-specific representations. It preserves modality-specific features while aligning them through shared weights, projecting each view into a shared representation space. In practice, UniDiff-MM can generate modality-specific outputs when conditioned accordingly, demonstrating effective domain adaptation and consistent structure across modalities.</span><span style="font-size:10.5pt;font-family:"Arial",sans-serif;mso-ligatures:none"><o:p></o:p></span></p>
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span style="font-size:10.5pt;font-family:"Arial",sans-serif;color:black;mso-ligatures:none">We validate UniDiff-MM on hyperspectral pixel-wise classification tasks, where it achieves
state-of-the-art performance and demonstrates its effectiveness for robust, multi-modal domain-adaptive representation learning.</span><span style="font-size:10.5pt;font-family:"Arial",sans-serif;mso-ligatures:none"><o:p></o:p></span></p>
</td>
</tr>
<tr>
<td style="padding:0in 15.0pt 15.0pt 15.0pt">
<p class="MsoNormal" style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto"><span style="font-family:"Aptos",sans-serif;color:black;mso-ligatures:none"><img border="0" width="599" height="82" style="width:6.2395in;height:.8541in" id="_x0000_i1038" src="cid:image002.png@01DBAEE8.8DC400C0" alt="Engineered For What's Next"></span><b><span style="font-family:"Arial",sans-serif;color:#C8102E;mso-ligatures:none"><o:p></o:p></span></b></p>
</td>
</tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table>
</div>
<p class="MsoNormal"><span style="font-size:11.0pt"><o:p> </o:p></span></p>
</div>
</body>
</html>