Published in Proceedings of the 34th International Joint Conference on Artificial Intelligence (IJCAI 2025), 2025
We propose a dynamic and adaptive feature generation method built on Large Language Models (LLMs) that improves the interpretability, applicability, and strategic flexibility of automated feature engineering.
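To give a flavor of the approach, here is a minimal sketch (not the paper's pipeline; `llm_complete` and the prompt are illustrative placeholders): an LLM is asked to propose candidate feature expressions, which are then applied and validated on a tabular dataset.

```python
# Illustrative sketch of LLM-driven feature generation (not the paper's code).
# `llm_complete(prompt) -> str` is a hypothetical helper; any chat API would do.
import pandas as pd

def propose_features(df: pd.DataFrame, llm_complete) -> list[str]:
    """Ask the LLM for new feature expressions over the existing columns."""
    prompt = (
        "Columns: " + ", ".join(df.columns) + ". "
        "Suggest three new features as pandas expressions, one per line, "
        "e.g. df['a'] / (df['b'] + 1)."
    )
    return [ln.strip() for ln in llm_complete(prompt).splitlines() if ln.strip()]

def apply_features(df: pd.DataFrame, exprs: list[str]) -> pd.DataFrame:
    """Evaluate each proposed expression, keeping only the ones that run."""
    out = df.copy()
    for i, expr in enumerate(exprs):
        try:
            out[f"llm_feat_{i}"] = eval(expr, {"df": out})  # demo only: trusted input
        except Exception:
            continue  # malformed or invalid suggestion; drop it
    return out
```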
Published in Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), 2024
Proto-RM introduces prototypical networks into reward modeling to enhance data efficiency in reinforcement learning from human feedback (RLHF), achieving robust preference learning with limited human feedback.
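A minimal sketch of the prototypical-network idea, under my own simplifications (Proto-RM's actual prototype construction and reward head are in the paper): prototypes are mean embeddings of preference groups, and new responses are scored by distance to them.

```python
# Sketch only: prototype construction and scoring in the spirit of prototypical
# networks; Proto-RM's architecture and update rules differ in detail.
import torch

def update_prototypes(embs: torch.Tensor, labels: torch.Tensor, k: int) -> torch.Tensor:
    """One prototype per group: the mean embedding of its members
    (assumes each of the k groups is non-empty)."""
    return torch.stack([embs[labels == i].mean(dim=0) for i in range(k)])

def proto_scores(query: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    """Score a query embedding by negative squared distance to each prototype."""
    return -((query.unsqueeze(0) - prototypes) ** 2).sum(dim=-1)
```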
Published in Proceedings of the 2024 IEEE International Conference on Knowledge Graph (ICKG 2024), 2024
This paper introduces Loss-at-Risk (LaR), a novel loss function integrating Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) into Transformer-based models, enhancing sensitivity to extreme financial risks.
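Since VaR and CVaR are standard risk measures, a minimal sketch is possible (the weighting and tail estimator here are illustrative, not the paper's exact formulation): the batch loss is blended with the mean of its worst tail.

```python
# Minimal sketch of a VaR/CVaR-augmented loss; weighting and tail estimation
# are illustrative, not the paper's exact Loss-at-Risk definition.
import torch

def loss_at_risk(per_sample_loss: torch.Tensor, alpha: float = 0.95,
                 lam: float = 0.5) -> torch.Tensor:
    """Blend the mean loss with the tail mean (CVaR) above the alpha-quantile (VaR)."""
    var = torch.quantile(per_sample_loss, alpha)           # Value-at-Risk: alpha-quantile
    cvar = per_sample_loss[per_sample_loss >= var].mean()  # mean of the worst tail
    return per_sample_loss.mean() + lam * cvar
```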
Published in Proceedings of the 2024 IEEE International Conference on Knowledge Graph (ICKG 2024), 2024
This paper introduces IDEM-DQN, a Deep Q-Network framework with dynamic weight adjustment for real-time adaptation in changing environments.
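One plausible reading of "dynamic weight adjustment", sketched below purely as an assumption on my part (the paper defines its own mechanism): scale the target network's update rate by recent TD-error magnitude, so the agent regains plasticity when the environment shifts.

```python
# Speculative sketch: TD-error-driven update-rate adaptation in a DQN.
# One possible reading of "dynamic weight adjustment", not IDEM-DQN itself.
import torch

def adaptive_tau(td_errors: torch.Tensor, tau_min: float = 0.005,
                 tau_max: float = 0.05) -> float:
    """Larger recent TD errors -> faster target updates (more plasticity)."""
    err = td_errors.abs().mean().clamp(max=1.0).item()
    return tau_min + err * (tau_max - tau_min)

def soft_update(target: torch.nn.Module, online: torch.nn.Module, tau: float) -> None:
    """Polyak-average the online network's weights into the target network."""
    for tp, op in zip(target.parameters(), online.parameters()):
        tp.data.mul_(1.0 - tau).add_(tau * op.data)
```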
Published in Proceedings of the 2024 IEEE International Conference on Big Data (BigData 2024), 2024
This paper investigates how LLMs score empathy in dialogues, introducing a comprehensive framework that combines explicit features, embeddings, and the Motivational Interviewing Treatment Integrity (MITI) code to approximate fine-tuned LLM performance.
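The approximation step lends itself to a small sketch (illustrative; the paper's feature set and model are richer): concatenate explicit features with embeddings and regress toward the fine-tuned LLM's empathy scores, so a light surrogate can stand in for the heavy model.

```python
# Sketch: a light surrogate regressor for LLM empathy scores (illustrative).
import numpy as np
from sklearn.linear_model import Ridge

def fit_surrogate(explicit: np.ndarray, embeddings: np.ndarray,
                  llm_scores: np.ndarray) -> Ridge:
    """Combine explicit features and embeddings, then fit toward the LLM's scores."""
    X = np.hstack([explicit, embeddings])
    return Ridge(alpha=1.0).fit(X, llm_scores)
```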
Published in Proceedings of the 34th International Joint Conference on Artificial Intelligence (IJCAI 2025), 2025
This paper proposes LEKA, a Large Language Model–Enhanced Knowledge Augmentation framework that actively retrieves and aligns transferable knowledge across domains for improved data efficiency and transfer learning performance.
Published in Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025), 2025
This paper proposes DoAug, a Diversity-Oriented Data Augmentation framework that leverages LLMs as diverse paraphrasers to enhance dataset diversity and robustness in NLP tasks.
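A minimal sketch of the diversity-oriented selection idea (my own simplification; DoAug's paraphraser training and selection objective are in the paper): after an LLM generates paraphrase candidates, greedily keep the ones farthest from the already-selected set in embedding space.

```python
# Sketch: greedy max-min diversity selection over paraphrase candidates.
# `embed(text) -> np.ndarray` is a hypothetical sentence-embedding helper.
import numpy as np

def select_diverse(cands: list[str], embed, k: int) -> list[str]:
    embs = np.stack([embed(c) for c in cands])
    embs /= np.linalg.norm(embs, axis=1, keepdims=True)  # unit-normalize
    chosen = [0]                                         # seed with first candidate
    while len(chosen) < min(k, len(cands)):
        sims = embs @ embs[chosen].T                     # cosine sim to chosen set
        nxt = int(np.argmin(sims.max(axis=1)))           # farthest from nearest pick
        chosen.append(nxt)
    return [cands[i] for i in chosen]
```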
Published in Proceedings of the 39th AAAI Conference on Artificial Intelligence (AAAI-25), 2025
We introduce the Retrieval Augmented Thought Tree (RATT), which integrates fact retrieval and strategic planning for more coherent reasoning.
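Sketched below is my reading of one expansion step in such a tree (the names, prompts, and `retrieve` helper are illustrative assumptions, not RATT's implementation): each candidate thought is revised against retrieved facts before it becomes a child node.

```python
# Speculative sketch of one retrieval-grounded tree-expansion step.
from dataclasses import dataclass, field

@dataclass
class Node:
    thought: str
    children: list["Node"] = field(default_factory=list)

def expand(node: Node, llm, retrieve, k: int = 3) -> Node:
    """Propose k continuations, ground each in retrieved facts, attach as children.
    `llm(prompt, n)` -> list of n completions; `retrieve(text)` -> fact snippets."""
    for draft in llm(f"Continue the reasoning: {node.thought}", n=k):
        facts = retrieve(draft)
        revised = llm(f"Revise this step using the facts {facts}: {draft}", n=1)[0]
        node.children.append(Node(revised))
    return node
```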
Published in Findings of the Association for Computational Linguistics: ACL 2025, 2025
Entro-duction dynamically adjusts reasoning depth in LLMs by monitoring the entropy and variance entropy (varentropy) of the model's output distribution, improving both reasoning accuracy and efficiency.
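Both monitored quantities have standard definitions, so here is a minimal sketch of computing them from next-token logits (the thresholds and the depth-control rule are illustrative, not the paper's policy):

```python
# Sketch: entropy and varentropy of a next-token distribution, plus an
# illustrative threshold rule; Entro-duction's actual control policy differs.
import torch

def entropy_signals(logits: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Entropy H = -sum p log p; varentropy = Var[-log p] under p."""
    logp = torch.log_softmax(logits, dim=-1)
    p = logp.exp()
    ent = -(p * logp).sum(dim=-1)
    varent = (p * (logp + ent.unsqueeze(-1)) ** 2).sum(dim=-1)
    return ent, varent

def explore_deeper(ent: torch.Tensor, varent: torch.Tensor,
                   ent_thr: float = 2.0, var_thr: float = 1.0) -> bool:
    """High uncertainty -> expand another reasoning step (thresholds illustrative)."""
    return bool(ent.item() > ent_thr or varent.item() > var_thr)
```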
Published in Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL 2025), 2025
This paper presents a two-step fine-tuning framework for distilling empathy from Large Language Models (LLMs) into Smaller Language Models (SLMs), achieving a 90% win rate in empathetic response generation.