Knowledge sharing is no longer a task unique to humankind. Robots, as semi-intelligent human creations, are already learning to exchange the knowledge they accumulate while performing different tasks, and even to store the collected information in shared online databases for later use by their counterparts.
The basic idea is not new: there are many successful practical implementations of machine intelligence that operate by combining information at a large scale, including the Google Knowledge Graph, IBM Watson, Wikipedia and Apple Siri. However, the knowledge supplied by these systems serves mostly humans, because the symbolic form of the stored information is of little use to robots, which are far less efficient than people at, for example, interpreting internet search results.
Robots require data presented in a different form than humans do: much finer detail is needed to automatically perform tasks such as planning, sensing, control and language processing. “Specifically, the robot would need access to knowledge for grounding the language symbols into physical entities, knowledge that sweet tea can either be on a table or in a fridge, and knowledge for inferring the appropriate plans for grasping and manipulating objects,” say the authors of a new study detailing the concept of RoboBrain, a knowledge engine that allows robots to learn and share such knowledge.
According to the authors, a robot-oriented knowledge engine must address several challenges. Robots operate on multi-modal data, so the representations stored in the engine should be able to encode information from a variety of sources, such as sensor readings, visual inputs and so on. The knowledge base should also allow new knowledge to be added in real time and provide a means for robots to judge the reliability of the concepts it stores.
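To make these requirements concrete, here is a minimal sketch of what a multi-modal knowledge store with per-concept reliability scores might look like. All names, the schema and the graph structure are illustrative assumptions for this article, not RoboBrain's actual implementation.

```python
# Illustrative sketch (NOT RoboBrain's actual schema): a tiny knowledge
# graph whose nodes carry data of different modalities plus a confidence
# score that a robot could use to judge the reliability of a concept.
from dataclasses import dataclass, field

@dataclass
class Node:
    concept: str             # e.g. "sweet_tea"
    modality: str            # "text", "image", "trajectory", ...
    data: object             # raw payload: description, file path, features
    confidence: float = 0.5  # reliability estimate, updated over time

@dataclass
class KnowledgeGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (source, relation, target)

    def add_node(self, node):
        # New knowledge can be inserted at any time ("real-time" updates).
        self.nodes[node.concept] = node

    def relate(self, src, relation, dst):
        self.edges.append((src, relation, dst))

    def neighbors(self, concept, relation=None):
        # Query: which concepts are linked to `concept` by `relation`?
        return [d for s, r, d in self.edges
                if s == concept and (relation is None or r == relation)]

kg = KnowledgeGraph()
kg.add_node(Node("sweet_tea", "text", "a cold beverage"))
kg.add_node(Node("fridge", "image", "fridge.jpg"))
kg.relate("sweet_tea", "can_be_in", "fridge")
print(kg.neighbors("sweet_tea", "can_be_in"))  # ['fridge']
```

A graph of typed nodes and relations is a natural fit here, because it lets text, images and sensor data about the same concept coexist and be queried uniformly.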
To address these challenges, the authors present an architecture that can store such large-scale knowledge and provides a framework for efficiently retrieving and updating information during machine learning. Their current development comprises three applications: grounding natural language, perception and planning.
The first converts natural-language commands into an action plan for robot controllers. The second addresses anticipating human activity. The third addresses path planning for mobile manipulators in complex environments, including assessing how different objects can be used and what the human preferences are. The current implementation of RoboBrain is released under the open Creative Commons Attribution license (CC-BY) and is available at https://robobrain.me.
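The idea of grounding a command into a plan can be illustrated with a toy example. The vocabulary, lookup tables and plan format below are invented for this sketch; RoboBrain's real grounding is learned from data, not a dictionary lookup.

```python
# Hypothetical illustration of "grounding": turn a natural-language command
# into a symbolic action sequence a controller could execute. The tables
# and plan format are invented for this sketch, not RoboBrain's pipeline.
KNOWN_OBJECTS = {"tea": "sweet_tea"}           # word -> physical entity
OBJECT_LOCATIONS = {"sweet_tea": "fridge"}     # would come from the knowledge base

def ground_command(command):
    tokens = command.lower().split()
    # Ground a language symbol into a known physical entity.
    obj = next(KNOWN_OBJECTS[t] for t in tokens if t in KNOWN_OBJECTS)
    loc = OBJECT_LOCATIONS[obj]
    # Infer a symbolic plan for fetching the object.
    return [("goto", loc), ("open", loc), ("grasp", obj), ("deliver", obj)]

plan = ground_command("Bring me some tea")
print(plan[0])  # ('goto', 'fridge')
```

The point of the sketch is the division of labor: language symbols are mapped to physical entities via stored knowledge, and the plan is then assembled from that grounded information.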
In their paper, the authors detail the main stages of RoboBrain's development, including how the robotic knowledge entities are formed and how the related control algorithms are developed. The scientists note that the robot may sometimes be uncertain about the optimal parameters for performing a particular task. For this reason they included a feedback mechanism: the robot can ask users for feedback during the learning process (for example, when planning an optimal motion trajectory). A human operator can then correct the position of an actuator, remove an interfering object, and so on. Such external feedback is registered directly in RoboBrain, and the parameters of the corresponding knowledge entity are updated for future queries.
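A feedback-driven update of this kind can be sketched in a few lines. The blending rule and the numbers below are illustrative assumptions, not the authors' actual learning algorithm.

```python
# Illustrative sketch (not the authors' actual update rule): blend a
# human-corrected value into a stored parameter so that future queries
# to the knowledge base return the improved estimate.
def update_with_feedback(stored, corrected, weight=0.3):
    """Move each stored coordinate toward the human correction by `weight`."""
    return [(1 - weight) * s + weight * c for s, c in zip(stored, corrected)]

waypoint = [0.40, 0.10, 0.25]    # stored end-effector waypoint (x, y, z in m)
human_fix = [0.40, 0.18, 0.25]   # operator nudged the y coordinate upward
waypoint = update_with_feedback(waypoint, human_fix)
```

A partial blend rather than a full overwrite is one way to keep a single noisy correction from discarding everything the robot has already learned.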
The developers of RoboBrain describe their creation as a collaborative and ongoing effort, and other scientists and engineers are welcome to join in advancing the framework. They are currently improving the system architecture and extending its functionality to scale to larger knowledge sources (e.g. databases containing millions of videos). Other aims include better disambiguation and improved continuous-learning abilities.
Written by Alius Noreika