Cyborgs on the way: the latest Meta* technologies let AI robots control their actions without humans

New technologies allow machines to function independently in the real world.

Meta*’s AI research team has announced a number of important advances in adaptive skill coordination and visual cortex simulation that will enable AI robots to act independently in the real world. These results represent significant progress towards a comprehensive “embodied AI” capable of interacting with the environment without human intervention.

The visual cortex is the part of the brain that enables living beings to turn what they see into action. An artificial visual cortex is therefore essential for robots that must solve problems based on what they see in front of them. The VC-1 artificial visual cortex was trained on the Ego4D dataset, which contains thousands of hours of wearable-camera footage of study participants around the world performing everyday activities such as cooking, cleaning, sports, and handicrafts.
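To make the idea concrete, here is a minimal, hypothetical sketch of how such an artificial visual cortex is typically used in a robotics stack: a frozen, pretrained vision encoder turns each camera frame into a compact feature vector that a downstream control policy consumes. The FrozenVisualCortex class and its layers are placeholders for illustration, not Meta's published VC-1 code or API.

```python
# Sketch: a frozen "artificial visual cortex" turning camera frames into features.
# All names and layer choices here are hypothetical stand-ins for a pretrained encoder.
import torch
import torch.nn as nn

class FrozenVisualCortex(nn.Module):
    """Stand-in for a pretrained visual encoder (e.g. one trained on egocentric video)."""
    def __init__(self, feature_dim: int = 768):
        super().__init__()
        # Placeholder backbone; in practice these would be pretrained weights.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feature_dim),
        )
        for p in self.backbone.parameters():
            p.requires_grad = False  # the visual cortex stays frozen during policy training

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, H, W) RGB images from the robot's camera
        return self.backbone(frames)

if __name__ == "__main__":
    cortex = FrozenVisualCortex()
    fake_frame = torch.rand(1, 3, 224, 224)   # one synthetic camera frame
    features = cortex(fake_frame)             # (1, 768) embedding fed to the policy
    print(features.shape)
```

Keeping the encoder frozen reflects the framing above: one general-purpose visual cortex that many downstream skills can reuse without retraining.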

However, the visual cortex is only one aspect of embodied AI. For a robot to operate fully autonomously in the real world, it must also be able to manipulate objects in its environment – approach an object, pick it up, carry it to another location, and set it down – and do all of this based on what it sees and hears.

To solve this problem, AI specialists from Meta*, together with researchers from the Georgia Institute of Technology, developed a new approach called Adaptive Skill Coordination (ASC), in which skills are trained in simulation and then transferred to real robots. Meta* demonstrated the effectiveness of ASC by partnering with Boston Dynamics: ASC was integrated with the Spot robot, which has robust sensing, navigation, and manipulation capabilities but normally requires significant human involvement.

The researchers’ goal was to create an AI model that could perceive the world through the robot’s onboard sensors, accessed via the Boston Dynamics API. ASC was first trained in the Habitat simulator on the HM3D and ReplicaCAD datasets, which contain 3D models of more than a thousand homes. A virtual Spot robot was then trained to move around unfamiliar houses, pick up objects, carry them, and place them in the right spot. Finally, these skills were transferred to real Spot robots, which performed the same tasks autonomously, relying on the understanding of indoor spaces gained in simulation.
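The sketch below illustrates the coordination idea in a deliberately simplified form: individually trained skills (navigate, pick, place) are sequenced by a coordinator that retries a failed step instead of aborting the whole task. All function names and the random success probabilities are hypothetical stand-ins; in the real system each skill is a neural policy trained in the Habitat simulator and executed through the Boston Dynamics API.

```python
# Simplified, hypothetical illustration of sequencing simulation-trained skills
# with retry logic; not the actual ASC implementation.
from dataclasses import dataclass
from typing import Callable
import random

@dataclass
class SkillResult:
    success: bool
    info: str = ""

def navigate_to(target: str) -> SkillResult:
    # Stand-in for a simulation-trained navigation policy.
    return SkillResult(success=random.random() > 0.1, info=f"navigated to {target}")

def pick(obj: str) -> SkillResult:
    # Stand-in for a simulation-trained grasping policy.
    return SkillResult(success=random.random() > 0.2, info=f"picked {obj}")

def place(obj: str, target: str) -> SkillResult:
    return SkillResult(success=random.random() > 0.1, info=f"placed {obj} at {target}")

def coordinate(obj: str, src: str, dst: str, max_retries: int = 3) -> bool:
    """Sequence the skills and retry a failed step instead of aborting the whole task."""
    plan: list[Callable[[], SkillResult]] = [
        lambda: navigate_to(src),
        lambda: pick(obj),
        lambda: navigate_to(dst),
        lambda: place(obj, dst),
    ]
    for step in plan:
        for attempt in range(max_retries):
            result = step()
            print(result.info, "(ok)" if result.success else f"(retry {attempt + 1})")
            if result.success:
                break
        else:
            return False  # step kept failing; report the episode as unsuccessful
    return True

if __name__ == "__main__":
    coordinate("cup", "kitchen counter", "dining table")
```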

“We used two completely different real-world environments in which Spot moved various objects: a fully furnished 185 m² apartment and a 65 m² university laboratory,” the researchers note. “ASC achieved near-perfect performance, successfully completing 59 out of 60 episodes and overcoming hardware instabilities, picking failures, and adversarial disturbances such as moving obstacles or blocked paths.”

Meta* researchers today released the source code for the VC-1 model, along with details on model scaling and dataset sizes. The team’s next goal is to integrate VC-1 with ASC to create a unified system that comes closer to true embodied AI.


*Added to the list of public and religious associations in respect of which a court decision on liquidation or prohibition of activities, based on the provisions of the Federal Law of July 25, 2002 No. 114-FZ “On Counteracting Extremist Activity”, has entered into force.


Source: www.securitylab.ru
