Sitemap
A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.
Pages
Posts
Future Blog Post
This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
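As a minimal sketch of that setting, assuming a standard Jekyll site whose configuration lives in _config.yml at the repository root:

```yaml
# _config.yml (Jekyll site configuration)
# With future set to false, Jekyll skips posts whose date is in the future
# at build time; set it back to true to publish scheduled posts immediately.
future: false
```

After changing the flag, rebuild the site (for example with jekyll build, or by triggering a fresh GitHub Pages deploy) for the change to take effect.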
Blog Post number 4
This is a sample blog post. Lorem ipsum; I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.
Blog Post number 3
This is a sample blog post. Lorem ipsum; I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.
Blog Post number 2
This is a sample blog post. Lorem ipsum; I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.
Blog Post number 1
This is a sample blog post. Lorem ipsum; I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.
Portfolio
Portfolio item number 1
Short description of portfolio item number 1
Portfolio item number 2
Short description of portfolio item number 2
Publications
Dynamic and Adaptive Feature Generation with LLM
Published in IJCAI 2025 (accepted), 2024
We propose a dynamic and adaptive feature generation method utilizing Large Language Models (LLMs), improving interpretability, applicability, and strategic flexibility in automated feature engineering.
Prototypical Reward Network for Data-Efficient RLHF
Published in Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), 2024
Proto-RM introduces prototypical networks into reward modeling to enhance data efficiency in RLHF, achieving robust preference learning with limited human feedback.
Enhancing Risk Assessment in Transformers with Loss-at-Risk Functions
Published in Proceedings of the 2024 IEEE International Conference on Knowledge Graph (ICKG 2024), 2024
This paper introduces Loss-at-Risk (LaR), a novel loss function integrating Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) into Transformer-based models, enhancing sensitivity to extreme financial risks.
Dynamic Weight Adjusting Deep Q-Networks for Real-Time Environmental Adaptation
Published in Proceedings of the 2024 IEEE International Conference on Knowledge Graph (ICKG 2024), 2024
This paper introduces IDEM-DQN, a Deep Q-Network framework with dynamic weight adjustment for real-time adaptation in changing environments.
Scoring with Large Language Models: A Study on Measuring Empathy of Responses in Dialogues
Published in Proceedings of the 2024 IEEE International Conference on Big Data (BigData 2024), 2024
This paper investigates how LLMs score empathy in dialogues, introducing a comprehensive framework that combines explicit features, embeddings, and the MITI code to approximate fine-tuned LLM performance.
LEKA: LLM-Enhanced Knowledge Augmentation
Published in Proceedings of the 34th International Joint Conference on Artificial Intelligence (IJCAI 2025), 2025
This paper proposes LEKA, a Large Language Model–Enhanced Knowledge Augmentation framework that actively retrieves and aligns transferable knowledge across domains for improved data efficiency and transfer learning performance.
Diversity-Oriented Data Augmentation with Large Language Models
Published in Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025), 2025
This paper proposes DoAug, a Diversity-Oriented Data Augmentation framework that leverages LLMs as diverse paraphrasers to enhance dataset diversity and robustness in NLP tasks.
RATT: A Thought Structure for Coherent and Correct LLM Reasoning
Published in AAAI Conference on Artificial Intelligence (AAAI-25), 2025
We introduce the Retrieval Augmented Thought Tree (RATT), which integrates fact retrieval and strategic planning for more coherent reasoning.
Entropy-based Exploration Conduction for Multi-step Reasoning
Published in Findings of the Association for Computational Linguistics: ACL 2025, 2025
Entro-duction dynamically adjusts reasoning depth in LLMs by monitoring entropy and variance entropy, improving reasoning accuracy and efficiency.
Distilling Empathy from Large Language Models
Published in Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL 2025), 2025
This paper presents a two-step fine-tuning framework for distilling empathy from Large Language Models (LLMs) into Smaller Language Models (SLMs), achieving a 90% win rate in empathetic response generation.
Talks
Talk 1 on Relevant Topic in Your Field
This is a description of your talk, which is a markdown file that can be all markdown-ified like any other post. Yay markdown!
Conference Proceeding talk 3 on Relevant Topic in Your Field
This is a description of your conference proceedings talk; note the different field in type. You can put anything in this field.
Teaching
Teaching experience 1
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Teaching experience 2
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.