I'm interested in the type of social learning where an artificial learner interacts with a human teacher and tries to figure out what that teacher would like it to do. Demonstrations, an evaluation button pushed by a teacher, facial expressions, spoken comments, and the like do not come with a specification of how they relate to what the learner should do. Any algorithm that modifies a policy based on interactions with a human is therefore necessarily built on top of interpretations, whether explicit or implicit. My main research interest is finding better such interpretations. I'm also interested in building stronger theoretical foundations for this setting, especially around the tricky problems that arise when dealing with flawed teachers.
I have recently joined the Cooperative Intelligence: Mental Models for Assistive Systems project at the Cognition and Robotics Lab at Bielefeld University, Germany, as a three-year postdoc working with Helge Ritter.
Contact:
[email protected]