Humanoid KinetIQ AI Framework Robots
Humanoid just launched KinetIQ, and the timing seems ideal for this robotics effort. The UK-based startup has created an artificial intelligence framework that lets a single intelligent system control an entire fleet of humanoid robots. Robots of various shapes, some rolling on wheels in warehouses and others walking on two legs in homes, can now make decisions together, carry what they've learned in one situation over to the next, and adapt as a crew.

KinetIQ handles coordination across four layers, each thinking at its own speed. At the very top, a “fleet agent” acts like a manager: it assigns tasks, monitors progress, and swaps robots around to keep the show running smoothly. This layer talks directly to warehouse software and store systems, accepting new requests, tracking progress, and stepping in when problems arise. It’s high-level, big-picture thinking, with most decisions made on a timescale of minutes.
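
To make that division of labor concrete, here is a minimal Python sketch of what a minutes-scale fleet agent could look like: accept requests, hand tasks to idle robots, and requeue work when a robot reports itself stuck. Everything here, from the class names to the status strings, is a hypothetical illustration rather than KinetIQ's actual API.

```python
# Hypothetical fleet-agent loop: names and statuses are illustrative only.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: int
    name: str = field(compare=False)

class FleetAgent:
    """Minutes-scale scheduler: accepts requests, assigns robots, reassigns on failure."""

    def __init__(self, robot_ids):
        self.status = {rid: "idle" for rid in robot_ids}   # idle / busy / stuck
        self.assignments = {}                              # robot_id -> Task
        self.queue = []                                    # pending tasks (min-heap)

    def submit(self, task):
        """New request arriving from warehouse or store software."""
        heapq.heappush(self.queue, task)

    def report(self, robot_id, status):
        """Progress update coming back from a robot's own planner."""
        self.status[robot_id] = status

    def step(self):
        """One scheduling pass (runs on a slow, minutes-scale cadence)."""
        # Put work from stuck robots back on the queue so another robot can take it.
        for rid in list(self.assignments):
            if self.status[rid] == "stuck":
                heapq.heappush(self.queue, self.assignments.pop(rid))
                self.status[rid] = "idle"
        # Hand the highest-priority pending tasks to idle robots.
        for rid, st in self.status.items():
            if st == "idle" and self.queue:
                task = heapq.heappop(self.queue)
                self.assignments[rid] = task
                self.status[rid] = "busy"
                print(f"{rid} <- {task.name}")

agent = FleetAgent(["wheeled-1", "biped-1"])
agent.submit(Task(1, "restock aisle 4"))
agent.submit(Task(2, "move container to dock"))
agent.step()
```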

One level down, each robot works out how to accomplish its assigned task. A multimodal model examines what the camera sees, breaks the task into smaller chunks, and adapts the plan on the fly as it goes. There are no hard-coded scripts here, just a constant review of progress and a “help, need a hand” button if the robot gets stuck. This layer works on a timescale of seconds to minutes, turning high-level instructions into genuinely useful action sequences.
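
Here is a rough sketch of that planning loop, with the multimodal model stubbed out: decompose the task into subtasks, review progress after each step, replan when something fails, and escalate to the fleet agent after repeated failures. The function names (`decompose`, `replan`, `escalate`) are illustrative assumptions, not KinetIQ's real interface.

```python
# Hypothetical per-robot planning loop; the model calls are stubs.

MAX_RETRIES = 3

def decompose(task, camera_image):
    """Stand-in for a multimodal model call: task + image -> subtask list."""
    return ["locate shelf", "grasp item", "place item in bin"]

def replan(task, remaining, camera_image):
    """Stand-in for asking the model to revise the rest of the plan."""
    return remaining

def execute(subtask):
    """Hand one subtask to the execution layer; True means it succeeded."""
    print(f"executing: {subtask}")
    return True

def run_task(task, get_image, escalate):
    plan = decompose(task, get_image())
    while plan:
        subtask, retries = plan[0], 0
        while not execute(subtask):
            retries += 1
            if retries >= MAX_RETRIES:
                escalate(task, subtask)              # the "help, need a hand" button
                return False
            plan = replan(task, plan, get_image())   # adapt the plan on the fly
            if not plan:
                return True
            subtask = plan[0]
        plan.pop(0)                                  # subtask done, move on
    return True

run_task(
    "restock aisle 4",
    get_image=lambda: None,
    escalate=lambda task, sub: print(f"stuck on '{sub}' during '{task}'"),
)
```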

Next comes the execution layer, where a vision-language-action model generates precise movement targets a few times per second. It tells the robot where to place its hands, torso, or base to pick up objects, open containers, and traverse paths, sending down short bursts of predicted actions that are refined on the fly as needed. The lowest layer is a reinforcement-learned whole-body controller running 50 times per second; it works out how to keep the robot balanced and steady while rolling or walking.
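
The interplay between those two rates can be sketched as a pair of nested loops: a stand-in VLA model emits a short chunk of movement targets a few times per second, and a 50 Hz controller consumes them one tick at a time. Only the 50 Hz figure comes from the description above; the 3 Hz rate, the chunking scheme, and all names are assumptions for illustration.

```python
# Sketch of the two fastest layers; model and controller are stubs.
import itertools

VLA_HZ = 3                                  # assumed "a few times per second"
CONTROL_HZ = 50                             # whole-body RL controller rate
STEPS_PER_CHUNK = CONTROL_HZ // VLA_HZ      # control ticks served per chunk

def vla_model(observation):
    """Stand-in VLA model: observation -> a chunk of movement targets."""
    return [{"hand": (0.4, 0.1, 0.9)} for _ in range(STEPS_PER_CHUNK)]

def rl_controller(state, target):
    """Stand-in RL policy: tracks a target while keeping the robot balanced."""
    return {"joint_torques": []}            # placeholder whole-body command

state, observation = {}, {}
for tick in itertools.count():              # the 50 Hz control loop
    if tick % STEPS_PER_CHUNK == 0:
        chunk = vla_model(observation)      # refresh targets a few times a second
    command = rl_controller(state, chunk[tick % STEPS_PER_CHUNK])
    if tick >= CONTROL_HZ - 1:              # simulate one second, then stop
        break
```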

Where this gets really interesting is in mixed fleets: robots on wheels doing the heavy lifting, such as picking up groceries or moving containers around the back rooms of stores, warehouses, and factories, alongside bipedal models handling service and home roles, responding to voice commands, and so on. Because they all run the same KinetIQ framework, data from one type of robot helps train the others. Learn a new skill on a wheeled platform and apply it to a walking one, or vice versa, without having to start over.
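
One common way to get that kind of transfer, and a plausible reading of the claim, is a shared skill policy trained on pooled fleet data with thin per-embodiment adapters mapping its output onto each body's actuators. The sketch below shows the idea in miniature; nothing in it is confirmed as KinetIQ's actual architecture.

```python
# Hypothetical cross-embodiment transfer: one shared skill core, two adapters.

class SharedSkillPolicy:
    """Embodiment-agnostic core: trained on pooled data from the whole fleet."""
    def act(self, observation):
        return {"move_to": (1.0, 2.0), "grasp": True}   # abstract skill command

class WheeledAdapter:
    """Maps abstract skill commands onto a wheeled base."""
    def to_actuators(self, skill_cmd):
        return {"wheel_velocities": skill_cmd["move_to"], "gripper": skill_cmd["grasp"]}

class BipedAdapter:
    """Maps the same commands onto a walking body."""
    def to_actuators(self, skill_cmd):
        return {"footstep_plan": skill_cmd["move_to"], "hand_close": skill_cmd["grasp"]}

# The same skill, learned once, runs on both bodies through their adapters.
policy = SharedSkillPolicy()
for adapter in (WheeledAdapter(), BipedAdapter()):
    print(adapter.to_actuators(policy.act(observation={})))
```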

Author

A technology, gadget and video game enthusiast who loves covering the latest industry news. Favorite trade show? Mobile World Congress in Barcelona.
