When most people with normal vision walk down the street, they can easily differentiate the things they must avoid, like trees, curbs and glass doors, from the things they don't, such as shadows, reflections and clouds.
Chances are, most people can also anticipate what obstacles to expect next, knowing, for example, that at a street corner they should watch out for cars and prepare to step down off the curb.
The ability to differentiate and anticipate comes easily to humans, but it remains very difficult for artificial intelligence-based systems. That's one big reason why self-driving cars and autonomous delivery drones are still emerging technologies.
Microsoft researchers are aiming to change that. They are working on a new set of tools that other researchers and developers can use to train and test robots, drones and other gadgets to operate autonomously and safely in the real world. A beta version is available on GitHub via an open source license.
It's all part of a research project the team dubs the Aerial Informatics and Robotics Platform. It includes software that lets researchers quickly write code to control aerial robots and other gadgets, as well as a highly realistic simulator for collecting data to train an AI system and for testing it in the virtual world before deploying it in the real world.
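To give a sense of what that code looks like, here is a minimal sketch of commanding a simulated drone through the Python client API published in the project's GitHub repository. The class and method names below follow the AirSim documentation, though exact signatures may vary between releases, and the sketch assumes the simulator is already running locally.

```python
# Minimal sketch: fly a simulated quadrotor with AirSim's Python client
# (pip install airsim). Assumes the simulator is running on this machine
# and listening on its default RPC port.
import airsim

# Connect to the running simulator and take API control of the vehicle.
client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

# Take off, fly to a point 20 m ahead and 10 m up at 5 m/s, then land.
# AirSim uses NED coordinates, so altitude is expressed as negative z.
client.takeoffAsync().join()
client.moveToPositionAsync(20, 0, -10, 5).join()
client.landAsync().join()

# Grab one RGB frame from the front camera, the kind of data the
# simulator is designed to produce for training perception models.
png_bytes = client.simGetImage("0", airsim.ImageType.Scene)

# Release control so an autopilot or human operator could take over.
client.armDisarm(False)
client.enableApiControl(False)
```

The asynchronous calls return futures, so a script can either block on each maneuver with `.join()`, as above, or issue a command and keep collecting sensor data while the vehicle moves.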
Project website: Microsoft Aerial Informatics and Robotics Platform (AIRP), Bridging the Simulator-to-Reality Gap: https://www.microsoft.com/en-us/research/project/aerial-informatics-robotics-platform
Code, documentation, and software and hardware requirements: Microsoft AirSim on GitHub: https://github.com/Microsoft/AirSim