Hongbao Zhang | 张洪宝
Currently, I am a Predoctoral Research Fellow in the Department of Finance at HKUST, working in financial econometrics under the supervision of
Prof. Yingying Li.
I obtained my MSc degree in Data Science from
The Chinese University of Hong Kong (Shenzhen),
supervised by
Prof. Baoyuan Wu.
In addition, during my first year at CUHKSZ, I worked with
Prof. Rui Shen on LLM-powered accounting research
and, simultaneously, with Prof. Ka Wai Tsang on statistics.
Prior to that, I obtained my B.A. in Economics from
Xiamen University.
I completed my undergraduate thesis in quantitative finance under the guidance of
Prof. Haiqiang Chen.
Through these diverse research experiences, I discovered my passion for research and committed to making it my lifelong pursuit.
Research Interests: Financial Econometrics, Generative AI Applications in Finance.
I hope to make Science a good companion to everyone.
Email (CUHKSZ) /
CV /
Google Scholar /
Github
Research
To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models
Zihao Zhu*, Hongbao Zhang, Mingda Zhang, Ruotong Wang, Guanzong Wu, Ke Xu, Baoyuan Wu
arXiv Paper
Large Reasoning Models (LRMs) are shown to suffer from an "Unthinking Vulnerability," whereby carefully crafted delimiter tokens can bypass their explicit reasoning steps.
We demonstrate this weakness through a Breaking of Thought (BoT) attack, which comes in a backdoored fine-tuning variant and a training-free adversarial variant, and show how both undermine an LRM's reliability.
To defend against BoT, we propose Thinking Recovery Alignment, which partially restores proper reasoning behavior.
Finally, we turn this vulnerability into a feature with Monitoring of Thought (MoT), a lightweight, plug-and-play framework that safely and efficiently halts unnecessary or dangerous reasoning.
Extensive experiments confirm that BoT seriously compromises reasoning, while MoT effectively prevents overthinking and jailbreak attempts.
HMGIE: Hierarchical and Multi-Grained Inconsistency Evaluation for Vision-Language Data Cleansing
Zihao Zhu*, Hongbao Zhang, Guanzong Wu, Siwei Lyu, Baoyuan Wu
arXiv Paper
Visual-textual inconsistency (VTI) evaluation is critical for cleansing vision-language data.
This paper introduces HMGIE, a hierarchical framework to evaluate and address inconsistencies
in image-caption pairs across accuracy and completeness dimensions.
Extensive experiments validate its effectiveness on multiple datasets, including the newly constructed MVTID dataset.
Misc
Football: I am a football fan and have played on school teams for 13 years.
I have won at least 9 championships and various awards, and I always enjoy playing with my teammates!
This website template was borrowed from Jon Barron.
Last updated in September 2025.