Kyuyoung Kim

I am an MS student with an AI focus in the Computer Science Department at Stanford University, where I work in the Stanford Intelligent Systems Laboratory led by Prof. Mykel Kochenderfer.

Previously, I was a senior software engineer at Google, where I developed machine learning algorithms for natural language generation used to localize the Google Assistant. I was also part of the Display Ads team, developing the ads backend system and improving auction algorithms. I have a BS in computer science with a minor in applied mathematics from Cornell University.

Email  /  GitHub  /  Google Scholar  /  CV  /  LinkedIn

Research

I am interested in machine learning, decision making under uncertainty, and optimization.

Adaptive Algorithms for Efficient Risk Estimation of Black-Box Systems


Kyu-Young Kim
MS thesis, Stanford University, 2022
paper /

Developed adaptive algorithms for efficiently estimating the risk of black-box systems and applied them to evaluating autonomous vehicle policies in simulation.

A Deep Reinforcement Learning Approach to Rare Event Estimation


Anthony Corso, Kyu-Young Kim, Shubh Gupta, Grace Gao, Mykel J. Kochenderfer
arXiv, 2022
paper /

Proposed deep reinforcement learning-based adaptive importance sampling algorithms for efficiently estimating the probability of rare events in sequential decision-making systems.

BEHAVIOR-1K: A Benchmark for Embodied AI with 1,000 Everyday Activities and Realistic Simulation


Chengshu Li, Ruohan Zhang, Josiah Wong, Cem Gokmen, Sanjana Srivastava, Roberto Martín-Martín, Chen Wang, Gabrael Levine, Michael Lingelbach, Jiankai Sun, Mona Anvari, Minjune Hwang, Manasi Sharma, Arman Aydin, Dhruva Bansal, Samuel Hunter, Kyu-Young Kim, Alan Lou, Caleb R Matthews, Ivan Villa-Renteria, Jerry Huayang Tang, Claire Tang, Fei Xia, Silvio Savarese, Hyowon Gweon, Karen Liu, Jiajun Wu, Li Fei-Fei
CoRL (oral), 2022
paper /

A human-centric benchmark for Embodied AI in simulation, comprising 1,000 everyday activities, a diverse dataset of 5,000+ objects and 50 scenes, and OmniGibson, a simulation environment that achieves a high level of realism.

Using Machine Translation to Localize Task Oriented NLG Output


Scott Roy, Cliff Brunk, Kyu-Young Kim, Justin Zhao, Markus Freitag, Mihir Kale, Gagan Bansal, Sidharth Mudgal, Chris Varano
arXiv, 2021
paper /

Developed a set of methods to localize a task-oriented natural language application such as the Google Assistant at scale using neural machine translation. The proposed approach includes developing an appropriate sequence model with an encoding scheme for structured input, fine-tuning on in-domain translations, and automatic error detection.

Taskmaster-1: Toward a realistic and diverse dialog dataset


Bill Byrne, Karthik Krishnamoorthi, Chinnadhurai Sankar, Arvind Neelakantan, Daniel Duckworth, Semih Yavuz, Ben Goodrich, Amit Dubey, Andy Cedilnik, Kyu-Young Kim
EMNLP-IJCNLP, 2019
paper /

Built a Wizard-of-Oz system to collect 13,215 task-based dialogs spanning six domains and released the dataset for NLP research.

Finding Overlapping Communities From Subspaces


David Bindel, Paul Chew, John Hopcroft, Kyu-Young Kim, Colin Ponce
Technical Report, 2011
paper /

Developed spectral methods for finding overlapping community structures in networks.




Other Projects

These include coursework, side projects, and unpublished research work.

Shallow and Deep Methods for Time-Series Modeling


Kyu-Young Kim
Stanford CS229 Machine Learning, 2021
paper /

Studied statistical and deep learning approaches to time-series modeling and analyzed the methods in terms of the amount of data needed, forecasting accuracy, and ability to incorporate covariates into the models.

Effectiveness of Combining Deep Reinforcement Learning Algorithms


Kyu-Young Kim
Stanford CS234 Reinforcement Learning, 2020
paper /

Explored the effectiveness of combining multiple deep reinforcement learning algorithms in terms of training stability, convergence speed, and ability to handle partial observability.

Effectiveness of Recurrent Networks for Partially Observable MDPs


Kyu-Young Kim
Stanford CS238 Decision Making Under Uncertainty, 2019
paper /

Used recurrent neural networks in deep Q-learning to handle partial observability in MDPs.


Design and source code from Leonid Keselman's website