Yoo Yeon Sung (성유연)

Ph.D. Candidate, College of Information

University of Maryland

Hi, I’m Yoo Yeon (“You-yawn”)!

I am a fifth-year Ph.D. candidate at the College of Information at the University of Maryland, College Park. I am fortunate to be advised by Jordan Boyd-Graber and Naeemul Hassan. I received an M.S. in Industrial Management Engineering from Korea University and a B.S. in English Literature and Language from Chung-Ang University. I was also a visiting researcher at the KAIST AI Graduate School, advised by Jaegul Choo. My current research centers on Human-Centered NLP and Responsible AI.


Areas of Interest

  1. Human-AI Alignment: Developing and testing robust human-AI interactive systems.

  2. Human-Centered LLM Evaluation: Creating evaluation frameworks to assess AI reliability.

  3. Robust Benchmark Datasets: Building adversarial datasets with human incentives and validation.

  4. Combating Misinformation: Developing interactive systems to detect and mitigate misinformation.


Current Research Focus

I strive to develop AI systems that align closely with human needs. Ultimately, I aim to create models that not only achieve high accuracy but also generate positive social impact by improving reliability and fostering supportive, human-complementary interactions. My specific interests are:

  1. Benchmark Dataset Creation: Develop datasets that evaluate language models based on human-centered standards.
  2. Robustness Evaluation Metrics: Design metrics that assess LLM robustness through the lens of human capabilities.
  3. Human-AI Interaction Testing: Evaluate human-AI interactive systems that support and complement human users effectively.

Research Vision

I enjoy exploring how advancements in LLMs influence the evolving relationship between humans and AI, particularly in the context of today’s information crisis. I value the human-centered perspective, ensuring that AI development genuinely benefits people by addressing these challenges responsibly.

Publications

(2024). You Make me Feel like a Natural Question: Training QA Systems on Transformed Trivia Questions. Empirical Methods in Natural Language Processing (Main).


(2024). Is Your Benchmark Truly Adversarial? AdvScore: Evaluating Human-Grounded Adversarialness. arXiv preprint.


(2023). Not all Fake News is Written: A Dataset and Analysis of Misleading Video Headlines. Empirical Methods in Natural Language Processing (Main).


(2020). Topical Keyphrase Extraction with Hierarchical Semantic Networks. Decision Support Systems.
