Disclaimer: This Article is auto-generated from the HT news service.
Robotics Transformer 2 is a ‘first-of-its-kind vision-language-action (VLA) model,’ according to the tech giant.
Google has announced Robotics Transformer 2 (RT-2), which it says is a first-of-its-kind vision-language-action (VLA) model.
In the past, robots have usually required firsthand experience in order to perform an action. But with our new vision-language-action model, RT-2, they can now learn from both text and images from the web to tackle new and complex tasks. Learn more ↓ https://t.co/4DSRwUHhwg
— Google (@Google) July 28, 2023
A Transformer-based model trained on text and images from the web, RT-2 transfers knowledge from web data to inform robot behaviour.
Until now, getting a robot to throw away a piece of trash meant explicitly training it to identify trash, pick it up and dispose of it. This is where Robotics Transformer 2 comes in. RT-2, on the other hand, already knows what trash is and can identify it without any explicit training, which in turn lets it direct the robot to perform the task.
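At a high level, a vision-language-action model takes a camera image and a natural-language instruction and produces discretised action tokens that are decoded into low-level robot commands. The sketch below is a conceptual illustration only, not Google's implementation: the class and method names (VisionLanguageActionModel, predict_action_tokens, decode) and the token layout are assumptions made for this example, and the model backbone is replaced by a placeholder so the code runs on its own.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class RobotAction:
    """Decoded low-level command for a robot arm (illustrative fields only)."""
    gripper_delta_xyz: List[float]  # end-effector translation, each in [-1, 1]
    gripper_closed: bool            # whether to close the gripper
    terminate: bool                 # whether the episode is finished


class VisionLanguageActionModel:
    """Conceptual stand-in for a VLA model such as RT-2 (hypothetical API).

    The real RT-2 fine-tunes a web-scale vision-language backbone so that
    its output vocabulary includes action tokens; here the backbone is
    replaced by a fixed placeholder to keep the sketch self-contained.
    """

    def predict_action_tokens(self, image_pixels, instruction: str) -> List[int]:
        # A real model would encode the image and instruction, then
        # autoregressively decode action tokens. Placeholder output only.
        return [128, 64, 64, 1, 0]

    def decode(self, tokens: List[int]) -> RobotAction:
        """Map discretised action tokens back to continuous command values."""
        def to_unit(token: int) -> float:
            # token bin in [0, 255] -> value in [-1, 1]
            return (token - 128) / 128.0

        dx, dy, dz, grip, done = tokens
        return RobotAction(
            gripper_delta_xyz=[to_unit(dx), to_unit(dy), to_unit(dz)],
            gripper_closed=bool(grip),
            terminate=bool(done),
        )


if __name__ == "__main__":
    model = VisionLanguageActionModel()
    tokens = model.predict_action_tokens(image_pixels=None,
                                         instruction="throw away the trash")
    print(model.decode(tokens))
```

The point of the sketch is the interface, not the internals: because the language and vision understanding comes from web-scale pretraining, the instruction "throw away the trash" can be acted on without the robot ever having been explicitly trained on what trash looks like.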