Seungju Han



I am a predoctoral researcher working with Yejin Choi and Youngjae Yu. Previously, I was a visiting student researcher at the Allen Institute for AI (Ai2), also advised by Yejin Choi. At Ai2, I collaborated with Nouha Dziri, Youngjae Yu, and Jack Hessel. I received my B.S. in Electrical and Computer Engineering from Seoul National University and worked on research and engineering at Hyperconnect (a startup acquired by Match Group for $1.7B).

My research focuses on natural language processing and machine learning; I am particularly interested in scalable and practical ways to improve large language models and large multimodal models. This has led me to work on:

  • training models with scale: I built diverse, high-quality, large-scale datasets. For example, I used internet-scale data (20M videos) to teach models vision-grounded dialogue (CHAMPAGNE), and leveraged in-the-wild user-LLM interactions to build 262K prompt-response pairs for red teaming (WildTeaming).
  • evaluating models with models: I developed model-based evaluations, such as detecting harms and refusals in user-LLM interactions (WildGuard) and measuring the diversity of LM responses. I also created model-written benchmarks, such as a personality test to assess LLM behavior and challenging vision-language benchmarks (understanding social norms, humor, and visual arguments).
  • building small deployable models: I worked on an algorithm for distilling knowledge from generative LMs into retrieval-based LMs, and on retrieval-based LMs for in-context learning and response generation in practical chatbot systems.

You can reach me at wade3han at snu.ac.kr.


Research

Preprints

AI as Humanity’s Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text
Ximing Lu, Melanie Sclar, Skyler Hallinan, Niloofar Mireshghallah, Jiacheng Liu, Seungju Han, Allyson Ettinger, Liwei Jiang, Khyathi Chandu, Nouha Dziri, Yejin Choi
We introduce the Creativity Index to quantify how linguistically creative LM outputs are compared to text on the web.

Do LLMs Have Distinct and Consistent Personality? TRAIT: Personality Testset designed for LLMs with Psychometrics
Seungbeen Lee*, Seungwon Lim*, Seungju Han, Giyoung Oh, Minju Kim, Beongwoo Kwak, Jiwan Chung, Hyungjoo Chae, Dongha Lee, Jinyoung Yeo, Youngjae Yu
We developed a testbed to assess the personality of LLMs with psychometrics.

Publications

WildGuard: Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs
Seungju Han*, Kavel Rao*, Allyson Ettinger, Liwei Jiang, Bill Yuchen Lin, Nathan Lambert, Yejin Choi, Nouha Dziri
NeurIPS 2024
To train a multi-task model that detects harms and refusals in user-LLM interactions, we constructed the largest multi-task safety dataset to date, with 92K examples across 13 risk categories.

WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models
Liwei Jiang, Kavel Rao*, Seungju Han*, Allyson Ettinger, Faeze Brahman, Sachin Kumar, Niloofar Mireshghallah, Ximing Lu, Maarten Sap, Yejin Choi, Nouha Dziri
NeurIPS 2024
We developed a framework that mines real-world user-LLM interactions to discover 5.7K unique jailbreak tactics (10x more than prior resources), revealing vulnerabilities of LLMs.

Selective Vision is the Challenge for Visual Reasoning: A Benchmark for Visual Argument Understanding
Jiwan Chung, Sungje Lee, Minseo Kim, Seungju Han, Ashkan Yousefpour, Jack Hessel, Youngjae Yu
EMNLP 2024

Multimodal Laughter Reasoning with Textual Audio-Visual Representation
Hyun Lee, Sung Bin Kim, Seungju Han, Youngjae Yu, Tae Hyun Oh
NAACL 2024

Reading Books is Great, But Not if You Are Driving! Visually Grounded Reasoning about Defeasible Commonsense Norms
Seungju Han, Junhyeok Kim, Jack Hessel, Liwei Jiang, Jiwan Chung, Yejin Son, Yejin Choi, Youngjae Yu
EMNLP 2023 Oral
We introduced commonsense reasoning tasks that require visual grounding and released high-quality evaluation datasets for these tasks, laying a foundation for narrowing the gap between humans and models.

CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos
Seungju Han, Jack Hessel, Nouha Dziri, Yejin Choi, Youngjae Yu
ICCV 2023
We created a framework that transforms web videos into visually grounded dialogues, yielding 18M examples (20x larger than previous resources) for training multimodal dialogue models.

Measuring and Improving Semantic Diversity of Dialogue Generation
Seungju Han, Beomsu Kim, Buru Chang
EMNLP 2022
We measured the diversity of LM-generated responses in a semantic embedding space.

Meet Your Favorite Character: Open-domain Chatbot Mimicking Fictional Characters with only a Few Utterances
Seungju Han*, Beomsu Kim*, Jin Yong Yoo*, Seokjun Seo, Sangbum Kim, Enkhbayar Erdenee, Buru Chang
NAACL 2022
We introduced an in-context learning algorithm to generate responses in the style of fictional characters with only a few utterances.

Understanding and Improving the Exemplar-based Generation for Open-domain Conversation
Seungju Han*, Beomsu Kim*, Seokjun Seo*, Enkhbayar Erdenee*, Buru Chang
4th Workshop on NLP4ConvAI at ACL 2022, Oral Presentation, Outstanding Paper
We developed an algorithm for effectively training retrieval-augmented conversational models.

Distilling the Knowledge of Large-scale Generative Models into Retrieval Models for Efficient Open-domain Conversation
Beomsu Kim*, Seokjun Seo*, Seungju Han*, Enkhbayar Erdenee*, Buru Chang
EMNLP 2021
We designed an algorithm for distilling knowledge from generative LMs into retrieval-based conversational models, achieving 20–40x speedups over the teacher model while maintaining performance.

Disentangling Label Distribution for Long-tailed Visual Recognition
Youngkyu Hong*, Seungju Han*, Kwanghee Choi*, Seokjun Seo, Beomsu Kim, Buru Chang
CVPR 2021
We designed a training objective that corrects biases in output probabilities when training on long-tailed distributions.

Attentron: Few-Shot Text-to-Speech Utilizing Attention-Based Variable-Length Embedding
Seungwoo Choi*, Seungju Han*, Dongyoung Kim*, Sungjoo Ha
Interspeech 2020