How ChatGPT can control robots

Microsoft researchers controlled this robotic arm using ChatGPT. | Credit: Microsoft

By now, you’ve likely heard of ChatGPT, OpenAI’s language model that can generate somewhat coherent responses to a variety of prompts and questions. It’s primarily being used to generate text, translate information, make calculations and explain topics you’re looking to learn about.

Researchers at Microsoft, which has invested billions into OpenAI and recently integrated ChatGPT into its Bing search engine, extended the capabilities of ChatGPT to control a robotic arm and aerial drone. Earlier this week, Microsoft released a technical paper that describes a series of design principles that can be used to guide language models toward solving robotics tasks.

“It turns out that ChatGPT can do a lot by itself, but it still needs some help,” Microsoft wrote of the model’s ability to program robots.

Prompting LLMs for robotics control poses several challenges, Microsoft said, such as providing a complete and accurate description of the problem, identifying the right set of allowable function calls and APIs, and biasing the answer structure with special arguments. To make effective use of ChatGPT for robotics applications, the researchers constructed a pipeline composed of the following steps:

1. First, they defined a high-level robot function library. This library can be specific to the form factor or scenario of interest and should map to actual implementations on the robot platform while being named descriptively enough for ChatGPT to follow.
2. Next, they built a prompt for ChatGPT that described the objective and identified the set of allowed high-level functions from the library. The prompt could also contain information about constraints, or about how ChatGPT should structure its responses.
3. The user stayed in the loop to evaluate the code output by ChatGPT, either through direct analysis or through simulation, and provided feedback to ChatGPT on the quality and safety of the output code.
4. After iterating on the ChatGPT-generated implementations, the final code could be deployed onto the robot.
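The first two steps of that pipeline can be sketched in a few lines of Python. The function names (`pick_up`, `place_on`) and the prompt wording below are illustrative assumptions, not Microsoft’s actual library or prompt format:

```python
# A minimal sketch of steps 1 and 2 of the pipeline above.
# The function library and prompt template are illustrative
# assumptions, not Microsoft's actual API.

def build_prompt(objective, functions, constraints):
    """Assemble a prompt from an objective, the allowed high-level
    robot functions, and any constraints on the response format."""
    lines = ["You control a robot arm. Use ONLY these functions:"]
    lines += [f"- {name}({sig}): {doc}" for name, sig, doc in functions]
    lines.append(f"Objective: {objective}")
    lines += [f"Constraint: {c}" for c in constraints]
    return "\n".join(lines)

# Step 1: a high-level function library, named descriptively enough
# for ChatGPT to follow; each entry would map to a real implementation
# on the robot platform.
FUNCTIONS = [
    ("pick_up", "object_name", "grasp the named object"),
    ("place_on", "object_name, target_name", "place a held object on a target"),
]

# Step 2: the prompt describing the objective and the allowed calls.
prompt = build_prompt(
    objective="stack the red block on the blue block",
    functions=FUNCTIONS,
    constraints=["Respond with Python code only."],
)
print(prompt)
```

Steps 3 and 4 happen outside any such script: a human inspects the generated code (directly or in simulation), feeds corrections back to ChatGPT, and only then deploys the final version to hardware.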

Examples of ChatGPT controlling robots

In one example, Microsoft researchers used ChatGPT in a manipulation scenario with a robot arm. They used conversational feedback to teach the model how to compose the originally provided APIs into more complex high-level functions that ChatGPT coded by itself. Using a curriculum-based strategy, the model was able to chain these learned skills together logically to perform operations such as stacking blocks.

The model was also able to build the Microsoft logo out of wooden blocks. It recalled the Microsoft logo from its internal knowledge base, “drew” the logo as SVG code, and then used the skills learned above to figure out which existing robot actions could compose its physical form.

Researchers also tried to control an aerial drone using ChatGPT. First, they fed ChatGPT a rather long prompt laying out the computer commands it could write to control the drone. After that, the researchers could make requests to instruct ChatGPT to control the robot in various ways. This included asking ChatGPT to use the drone’s camera to identify drinks such as coconut water and a can of Coca-Cola. It was also able to write code structures for drone navigation based solely on the prompt’s base APIs, according to the researchers.

“ChatGPT asked clarification questions when the user’s instructions were ambiguous and wrote complex code structures for the drone such as a zig-zag pattern to visually inspect shelves,” the team said.
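A zig-zag shelf inspection of this kind reduces to generating lawnmower-style waypoints for the drone to fly. The sketch below is a plausible illustration of such a pattern; the coordinate frame and the idea that waypoints are consumed by a `fly_to`-style API are assumptions, not the paper’s actual drone interface:

```python
# A hedged sketch of a zig-zag (lawnmower) inspection path of the kind
# ChatGPT was asked to generate. The coordinate convention is an
# assumption, not the paper's actual drone API.

def zigzag_waypoints(width, height, rows):
    """Return (x, y) waypoints sweeping a width x height area in
    `rows` horizontal passes, reversing direction on each row."""
    waypoints = []
    for i in range(rows):
        # Spread the rows evenly over the height of the area.
        y = height * i / (rows - 1) if rows > 1 else 0.0
        # Alternate sweep direction: left-to-right on even rows,
        # right-to-left on odd rows.
        xs = (0.0, width) if i % 2 == 0 else (width, 0.0)
        waypoints += [(xs[0], y), (xs[1], y)]
    return waypoints

# Inspect a 4 m wide, 2 m tall shelf face in 3 passes.
path = zigzag_waypoints(width=4.0, height=2.0, rows=3)
# path == [(0.0, 0.0), (4.0, 0.0), (4.0, 1.0),
#          (0.0, 1.0), (0.0, 2.0), (4.0, 2.0)]
```

Each consecutive pair of waypoints would then be handed to the drone’s motion API, with the camera sampled along the way.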

Microsoft said it also applied this approach to a simulated domain, using the Microsoft AirSim simulator. “We explored the idea of a potentially non-technical user directing the model to control a drone and execute an industrial inspection scenario. We observe from the following excerpt that ChatGPT is able to effectively parse intent and geometrical cues from user input and control the drone accurately.”

Key limitation

The researchers did admit this approach has a major limitation: ChatGPT can only write code for the robot based on the initial prompt the human gives it. A human engineer has to thoroughly explain to ChatGPT how the application programming interface for a robot works; otherwise, it will struggle to generate applicable code.

“We emphasize that these tools should not be given full control of the robotics pipeline, especially for safety-critical applications. Given the propensity of LLMs to eventually generate incorrect responses, it is fairly important to ensure solution quality and safety of the code with human supervision before executing it on the robot. We expect several research works to follow with the proper methodologies to properly design, build and create testing, validation and verification pipelines for LLM operating in the robotics space.

“Most of the examples we presented in this work demonstrated open perception-action loops where ChatGPT generated code to solve a task, with no feedback provided to the model afterwards. Given the importance of closed-loop controls in perception-action loops, we expect much of the future research in this space to explore how to properly use ChatGPT’s abilities to receive task feedback in the form of textual or special-purpose modalities.”

Microsoft said its goal with this research is to see if ChatGPT can think beyond text and reason about the physical world to help with robotics tasks.

“We want to help people interact with robots more easily, without needing to learn complex programming languages or details about robotic systems. The key challenge here is teaching ChatGPT how to solve problems considering the laws of physics, the context of the operating environment, and how the robot’s physical actions can change the state of the world.”

The post How ChatGPT can control robots appeared first on The Robot Report.