The goal of my research is to enable robots to autonomously produce behavior that reasons about function _and_ interaction with and around people. I aim to develop a formal understanding of interaction that leads to algorithms which are informed by mathematical models of how people interact with robots, enabling generalization across robot morphologies and interaction modalities.
Robots are increasingly becoming a part of our daily lives, from the automated vacuum cleaners in our homes to the rovers exploring Mars. However, while recent years have seen dramatic progress in the development of affordable, general-purpose robot hardware, the capabilities of that hardware far exceed our ability to write software that adequately controls it.
Biological machines created by millions of years of evolution suggest a paradigm shift in robot design. Realizing animals' magnificent locomotive capabilities is the next big challenge in mobile robotics. The main theme of the MIT Biomimetic Robotics Laboratory is innovation through ‘principle extraction’ from biology. Embodiments of such innovations include Stickybot, which employs the world's first synthetic directional dry adhesive inspired by geckos, and the MIT Cheetah, designed after the fastest land animal.
Autonomous UAVs, or "self-flying vehicles", hold the promise of transforming a number of industries and changing how we move things around the world. Building on decades of research in autonomy and UAVs, Google launched Project Wing in 2012 and recently announced trials of a delivery service using a small fleet of autonomous UAVs in Australia. In this talk, I will provide an introduction to the work Google has been doing in developing this service, describe the capabilities (and limitations) of the vehicles, and talk briefly about the promise of UAVs in general.
Haptic rendering is the application of forces in a virtual environment to a user interface (such as a flight control stick, a haptic glove, or surgical robot hand controls). The virtual environment can represent a physical environment, it can be fully synthetic (as in a video game), or it can be an augmented-reality combination of physical and synthetic components. In this virtual environment, motions of the hand-control device interact with a virtual representation of the physical environment, and kinesthetic feedback is provided to the user.
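A concrete building block behind this description is the spring-damper "virtual wall," the canonical haptic-rendering force law. The sketch below is a minimal, illustrative version for a one-degree-of-freedom device; the gains and the servo-loop rate are typical assumptions, not details from this abstract.

```python
# Minimal sketch of a spring-damper "virtual wall" at x = 0 (illustrative
# gains; device I/O would come from a real haptics driver, omitted here).
K_WALL = 2000.0  # virtual stiffness [N/m]
B_WALL = 5.0     # virtual damping [N*s/m]

def wall_force(x: float, v: float) -> float:
    """Kinesthetic feedback for a 1-DoF device: resist penetration only."""
    if x >= 0.0:
        return 0.0                      # free space: no force
    return -K_WALL * x - B_WALL * v     # push back out, damp the motion
```

In practice a force law like this runs inside a fast (roughly 1 kHz) servo loop that reads the device position and velocity, evaluates the force, and commands it back to the user.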
This talk will survey some of the exciting work happening in Applied Algebraic Topology with a focus on applications to robotics and sensing. The talk will include a gentle introduction to the tools (homology/cohomology/persistence/sheaves), alongside relevant applications to motion planning, sensor coverage, mapping, optimization, and sensing.
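As a small taste of one of these tools, 0-dimensional persistence of a Vietoris-Rips filtration can be computed with nothing more than a union-find over edges sorted by length. The sketch below is a self-contained toy illustration, not code from the talk.

```python
import itertools
import math

def persistence_0d(points):
    """0-dim persistence bars of a Vietoris-Rips filtration on a point
    cloud: every point is born at 0; a component dies at the length of
    the edge that merges it into another component."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    # Process edges in order of increasing length (the filtration).
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in itertools.combinations(range(n), 2)
    )
    bars = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri          # merge: one component dies...
            bars.append((0.0, d))    # ...recorded as the bar (0, d)
    bars.append((0.0, math.inf))     # the surviving component never dies
    return bars

# Two well-separated pairs: one short bar per pair, one bar at the
# separation scale, and one infinite bar.
print(persistence_0d([(0, 0), (1, 0), (10, 0), (11, 0)]))
```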
For robust, life-long autonomous operation in dynamic, unstructured environments, mobile robots must contend with vast amounts of continually evolving data. The exploring robot must adapt to its environment and refine its workspace representation with new observations. The key competency we seek is “introspection”: the ability to determine what is perplexing, which can in turn drive active information acquisition or human disambiguation. The talk explores this in the context of place recognition and semantic mapping.
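One simple way to make "what is perplexing" concrete (an illustrative sketch, not the method from the talk) is to score a place-recognition query by the entropy of its match distribution: a peaked distribution means a confident match, while a flat one suggests the robot should gather more data or ask a human.

```python
import numpy as np

def is_perplexing(similarities: np.ndarray, threshold: float = 1.0) -> bool:
    """Flag a query as perplexing when its softmax match distribution
    over candidate places has high entropy (i.e., no clear winner)."""
    z = similarities - similarities.max()   # stabilize the softmax
    p = np.exp(z) / np.exp(z).sum()
    entropy = -(p * np.log(p + 1e-12)).sum()
    return entropy > threshold              # high entropy -> ambiguous
```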
Reformulating planning problems as probabilistic inference problems is interesting, but does not necessarily solve fundamental problems. In this talk I will review three variations on the theme where the reformulation has led to novel theoretical insights and efficient algorithms: in the context of stochastic optimal control and model-free Reinforcement Learning, for multi-agent POMDPs, and for relational MDPs. I will conclude with some questions and first steps on a problem we are currently working on: how to plan efficiently under uncertainty over the existence of objects.
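To make the reformulation concrete, one standard instance of control-as-inference is "soft" value iteration, where the Bellman max is replaced by a log-sum-exp (a log-partition, i.e., a backward message) and the policy becomes a posterior over actions. The toy chain MDP below is invented for illustration and is not from the talk.

```python
import numpy as np

# Toy chain MDP: 5 states, move left/right, reward for reaching the goal.
N, GOAL, GAMMA = 5, 4, 0.9
ACTIONS = (-1, +1)

def step(s, a):
    return min(max(s + a, 0), N - 1)

def reward(s, a):
    return 1.0 if step(s, a) == GOAL else 0.0

# Soft value iteration: V(s) = log sum_a exp(Q(s, a)), the inference-view
# analogue of the Bellman backup (log-sum-exp in place of max).
V = np.zeros(N)
for _ in range(100):
    Q = np.array([[reward(s, a) + GAMMA * V[step(s, a)] for a in ACTIONS]
                  for s in range(N)])
    V = np.log(np.exp(Q).sum(axis=1))

policy = np.exp(Q - V[:, None])  # posterior over actions given "optimality"
print(policy.round(3))
```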
I will talk about the research I have done with my collaborators on teaching robots to perform manipulation tasks based on human demonstrations. Of particular interest are tasks involving deformable objects, where it is hard to perceive the full state of the system and model its dynamics. I will also talk about some techniques we have developed to solve the associated perception and motion planning problems.
As dynamic robot behaviors become more capable and better understood, there arises a need for an equally capable and well-understood variety of transitions between these behaviors. For legged robots, we introduce a new formalism for understanding behavioral components as self-manipulation, and then build up a hybrid system that topologically defines the space of dynamic transitions as a cellular complex. Our primary motivation is not to facilitate numerical simulation but rather to promote design insight -- behavior design, controller design, and platform design.