References
A Simple and Fast Algorithm for K-medoids Clustering. Hae-Sang Park, Chi-Hyuck Jun
Active Preference-Based Learning of Reward Functions. Dorsa Sadigh, Anca D. Dragan, Shankar Sastry, Sanjit A. Seshia
Active Preference Learning Using Maximum Regret. Nils Wilde, Dana Kulic, Stephen L. Smith
Asking Easy Questions: A User-Friendly Approach to Active Reward Learning. Erdem Bıyık, Malayandi Palan, Nicholas C. Landolfi, Dylan P. Losey, Dorsa Sadigh
Batch Active Preference-Based Learning of Reward Functions. Erdem Bıyık, Dorsa Sadigh
Batch Active Learning Using Determinantal Point Processes. Erdem Bıyık, Kenneth Wang, Nima Anari, Dorsa Sadigh
Bayesian Inverse Reinforcement Learning. Deepak Ramachandran, Eyal Amir
Determinantal point processes for machine learning. Alex Kulesza, Ben Taskar
Learning an Urban Air Mobility Encounter Model from Expert Preferences. Sydney M. Katz, Anne-Claire Le Bihan, Mykel J. Kochenderfer
Learning Reward Functions by Integrating Human Demonstrations and Preferences. Malayandi Palan, Nicholas C. Landolfi, Gleb Shevchuk, Dorsa Sadigh
Learning Reward Functions from Diverse Sources of Human Feedback: Optimally Integrating Demonstrations and Preferences. Erdem Bıyık, Dylan P. Losey, Malayandi Palan, Nicholas C. Landolfi, Gleb Shevchuk, Dorsa Sadigh
NumPy / SciPy Recipes for Data Science: k-Medoids Clustering. Christian Bauckhage
OpenAI Gym. Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba
Preference-Based Learning for Exoskeleton Gait Optimization. Maegan Tucker, Ellen Novoseller, Claudia Kann, Yanan Sui, Yisong Yue, Joel Burdick, Aaron D. Ames
Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, Sergey Levine
The Green Choice: Learning and Influencing Human Decisions on Shared Roads. Erdem Bıyık, Daniel A. Lazar, Dorsa Sadigh, Ramtin Pedarsani