Seungju Han




I am a predoctoral research intern at NVIDIA working with Yejin Choi. Previously, I was a visiting student researcher at Allen Institute for AI (Ai2). At Ai2, I collaborated with Nouha Dziri, Youngjae Yu, and Jack Hessel. I received my B.S. in Electrical and Computer Engineering from Seoul National University and worked on research and engineering at Hyperconnect (a startup acquired by Match Group for $1.7B).

I am passionate about advancing LLMs and multimodal LLMs, especially making them robust, factually accurate, capable of deep reasoning, and truly safe. I am particularly excited about training and evaluating language models with other models, which I believe can significantly improve the efficiency and effectiveness of both. For example:

  • synthetic data, algorithmic data selection and reweighting: I am interested in large-scale synthetic data for training language models. I believe the key challenge is data diversity; to address it, I have built synthetic data by leveraging large-scale web sources such as web videos (CHAMPAGNE) and user-LLM interactions (WildTeaming, WildGuard). I am also interested in algorithmic data selection for effectively utilizing large-scale data.
  • training models with supervision from models: I am exploring training objectives beyond the standard next-token prediction loss, using supervision from other models during training. I previously worked on alternative training objectives for long-tail classification (LADE) and open-domain conversation models (G2R), obtaining positive results.
  • evaluating models with models: I am interested in evaluating models with less reliance on human experts, and I believe that using models to evaluate models is the future. I first worked on reducing reliance on crowdworkers in building evals and created model-written benchmarks, such as personality tests to assess LLM behavior and challenging vision-language benchmarks (understanding social norms, humor, and visual arguments).

Research

Preprints

AI as Humanity’s Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text
Ximing Lu, Melanie Sclar, Skyler Hallinan, Niloofar Mireshghallah, Jiacheng Liu, Seungju Han, Allyson Ettinger, Liwei Jiang, Khyathi Chandu, Nouha Dziri, Yejin Choi
We introduce the Creativity Index to quantify the linguistic creativity of LM outputs relative to web text.

Do LLMs Have Distinct and Consistent Personality? TRAIT: Personality Testset designed for LLMs with Psychometrics
Seungbeen Lee*, Seungwon Lim*, Seungju Han, Giyoung Oh, Minju Kim, Beongwoo Kwak, Jiwan Chung, Hyungjoo Chae, Dongha Lee, Jinyoung Yeo, Youngjae Yu
We developed a testbed, TRAIT, to assess the personality of LLMs with psychometrics.

Publications

WildGuard: Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs
Seungju Han*, Kavel Rao*, Allyson Ettinger, Liwei Jiang, Bill Yuchen Lin, Nathan Lambert, Yejin Choi, Nouha Dziri
NeurIPS 2024
To train a multi-task model that detects harms and model refusals in user-LLM interactions, we constructed the largest multi-task safety dataset to date, with 92K examples across 13 risk categories.

WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models
Liwei Jiang, Kavel Rao*, Seungju Han*, Allyson Ettinger, Faeze Brahman, Sachin Kumar, Niloofar Mireshghallah, Ximing Lu, Maarten Sap, Yejin Choi, Nouha Dziri
NeurIPS 2024
We developed a framework that uses real-world user-LLM interactions to discover 5.7K unique jailbreak tactics (10x larger than prior resources) that reveal vulnerabilities of LLMs.

Selective Vision is the Challenge for Visual Reasoning: A Benchmark for Visual Argument Understanding
Jiwan Chung, Sungje Lee, Minseo Kim, Seungju Han, Ashkan Yousefpour, Jack Hessel, Youngjae Yu
EMNLP 2024

Multimodal Laughter Reasoning with Textual Audio-Visual Representation
Hyun Lee, Sung Bin Kim, Seungju Han, Youngjae Yu, Tae Hyun Oh
NAACL 2024

Reading Books is Great, But Not if You Are Driving! Visually Grounded Reasoning about Defeasible Commonsense Norms
Seungju Han, Junhyeok Kim, Jack Hessel, Liwei Jiang, Jiwan Chung, Yejin Son, Yejin Choi, Youngjae Yu
EMNLP 2023 Oral
We introduced commonsense reasoning tasks that require visual grounding and released high-quality evaluation datasets for these tasks, laying a foundation for narrowing the gap between humans and models.

CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos
Seungju Han, Jack Hessel, Nouha Dziri, Yejin Choi, Youngjae Yu
ICCV 2023
We created a framework that transforms web videos into visually grounded dialogues, yielding 18M examples (20x larger than previous resources) for training multimodal dialogue models.

Measuring and Improving Semantic Diversity of Dialogue Generation
Seungju Han, Beomsu Kim, Buru Chang
EMNLP 2022
We measured the diversity of LM-generated responses in a semantic embedding space.

Meet Your Favorite Character: Open-domain Chatbot Mimicking Fictional Characters with only a Few Utterances
Seungju Han*, Beomsu Kim*, Jin Yong Yoo*, Seokjun Seo, Sangbum Kim, Enkhbayar Erdenee, Buru Chang
NAACL 2022
We introduced an in-context learning algorithm to generate responses in the style of fictional characters with only a few utterances.

Understanding and Improving the Exemplar-based Generation for Open-domain Conversation
Seungju Han*, Beomsu Kim*, Seokjun Seo*, Enkhbayar Erdenee*, Buru Chang
4th Workshop on NLP4ConvAI at ACL 2022, Oral Presentation, Outstanding Paper
We developed an algorithm to effectively train a retrieval-augmented conversational model.

Distilling the Knowledge of Large-scale Generative Models into Retrieval Models for Efficient Open-domain Conversation
Beomsu Kim*, Seokjun Seo*, Seungju Han*, Enkhbayar Erdenee*, Buru Chang
EMNLP 2021
We designed a knowledge distillation algorithm that transfers knowledge from generative LMs to retrieval-based conversational models, achieving 20–40x speedups over the teacher model while maintaining performance.

Disentangling Label Distribution for Long-tailed Visual Recognition
Youngkyu Hong*, Seungju Han*, Kwanghee Choi*, Seokjun Seo, Beomsu Kim, Buru Chang
CVPR 2021
We designed a training objective that corrects biases in output probabilities when training on long-tailed distributions.

Attentron: Few-Shot Text-to-Speech Utilizing Attention-Based Variable-Length Embedding
Seungwoo Choi*, Seungju Han*, Dongyoung Kim*, Sungjoo Ha
Interspeech 2020