Artificial Intelligence / Cognition Archives - The Robot Report
https://www.therobotreport.com/category/design-development/ai-cognition/

How MIT taught a quadruped to play soccer
https://www.therobotreport.com/how-mit-taught-a-quadruped-to-dribble-a-soccer-ball/ | Thu, 06 Apr 2023
MIT's DribbleBot can maneuver soccer balls on landscapes like sand, gravel, mud and snow and get up and recover the ball after falling.


A research team at MIT’s Improbable Artificial Intelligence Lab, part of the Computer Science and Artificial Intelligence Laboratory (CSAIL), taught a Unitree Go1 quadruped to dribble a soccer ball on various terrains. DribbleBot can maneuver soccer balls on landscapes like sand, gravel, mud and snow, adapt to each terrain’s varied impact on the ball’s motion, and get up and recover the ball after falling. 

The team used simulation to teach the robot how to actuate its legs during dribbling. This allowed the robot to acquire hard-to-script skills for responding to diverse terrains much more quickly than training in the real world. Once the team loaded the robot and other assets into the simulation and set the physical parameters, it could simulate 4,000 versions of the quadruped in parallel in real time, collecting data 4,000 times faster than with a single robot. You can read the team’s technical paper, “DribbleBot: Dynamic Legged Manipulation in the Wild,” here (PDF).

DribbleBot started out not knowing how to dribble a ball at all. The team trained it by giving it a reward when it dribbled well and negative reinforcement when it messed up. Using this method, the robot figured out what sequence of forces it should apply with its legs. 

“One aspect of this reinforcement learning approach is that we must design a good reward to facilitate the robot learning a successful dribbling behavior,” MIT Ph.D. student Gabe Margolis, who co-led the work along with Yandong Ji, research assistant in the Improbable AI Lab, said. “Once we’ve designed that reward, then it’s practice time for the robot. In real time, it’s a couple of days, and in the simulator, hundreds of days. Over time it learns to get better and better at manipulating the soccer ball to match the desired velocity.”
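
As a rough illustration of the kind of reward design Margolis describes (a minimal sketch with made-up weights, not MIT's actual reward, which includes additional terms), a dribbling reward can score how closely the ball's velocity tracks the commanded velocity and penalize falls:

```python
import numpy as np

def dribble_reward(ball_vel, cmd_vel, fallen, fall_penalty=5.0):
    """Toy dribbling reward: track a commanded ball velocity and stay upright.

    ball_vel, cmd_vel: 2D ball velocity and commanded velocity in the ground plane (m/s).
    fallen: True if the robot's trunk touched the ground on this step.
    """
    error = np.asarray(ball_vel) - np.asarray(cmd_vel)
    tracking = np.exp(-np.dot(error, error))  # in (0, 1], highest when velocities match
    return tracking - (fall_penalty if fallen else 0.0)

# One step: ball rolling at 0.8 m/s toward +x while the command is 1.0 m/s toward +x.
print(dribble_reward(ball_vel=[0.8, 0.0], cmd_vel=[1.0, 0.0], fallen=False))
```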

The team also taught the quadruped to handle unfamiliar terrains and recover from falls using a recovery controller built into its system. However, dribbling on different terrains still presents many more complications than walking alone.

The robot has to adapt its locomotion to apply forces to the ball to dribble, and it has to adjust to the way the ball interacts with the landscape. For example, soccer balls behave differently on thick grass than on pavement or snow. To handle this, the MIT team leveraged cameras on the robot’s head and body to give it vision.

While the robot can dribble on many terrains, its controller currently isn’t trained in simulated environments that include slopes or stairs. The quadruped can’t perceive the geometry of the terrain; it only estimates material contact properties, like friction. Slopes and stairs will be the next challenge for the team to tackle. 

The MIT team is also interested in applying the lessons they learned while developing DribbleBot to other tasks that involve combined locomotion and object manipulation, like transporting objects from place to place using legs or arms. A team from Carnegie Mellon University (CMU) and UC Berkeley recently published their research about how to give quadrupeds the ability to use their legs to manipulate things, like opening doors and pressing buttons. 

The team’s research is supported by the DARPA Machine Common Sense Program, the MIT-IBM Watson AI Lab, the National Science Foundation Institute of Artificial Intelligence and Fundamental Interactions, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator.

A quadruped with a soccer ball.

Capra Robotics’ AMRs to use RGo Perception Engine
https://www.therobotreport.com/capra-robotics-amrs-to-use-rgo-perception-engine/ | Wed, 05 Apr 2023
RGo Robotics, a company developing artificial perception technology, announced leadership appointments, new customers and an upcoming product release.


RGo Robotics, a company developing artificial perception technology that enables mobile robots to understand complex surroundings and operate autonomously, announced significant strategic updates. The announcements include leadership appointments, new customers and an upcoming product release.

RGo develops AI-powered technology that gives autonomous mobile robots 3D, human-level perception. Its Perception Engine integrates with mobile robots to deliver centimeter-scale position accuracy in any environment. RGo said the next iteration of its software, due in Q2 2023, will include:

  • An indoor-outdoor mode: a breakthrough navigation capability that allows mobile robots to operate in all environments – both indoors and outdoors.
  • A high-precision mode that enables millimeter-scale precision for docking and similar use cases.
  • Control Center 2.0: a redesigned configuration and admin interface. This new version supports global map alignment, advanced exploration capabilities and new map-sharing utilities.

RGo separately announced support for NVIDIA Jetson Orin System-on-Modules that enables visual perception for a variety of mobile robot applications.

RGo will exhibit its technology at LogiMAT 2023, Europe’s biggest annual intralogistics tradeshow, from April 25-27, in Stuttgart, Germany at Booth 6F59. The company will also sponsor and host a panel session “Unlocking New Applications for Mobile Robots” at the Robotics Summit and Expo in Boston from May 10-11.

Leadership announcements

RGo also announced four leadership appointments: Yael Fainaro as chief business officer and president; Mathieu Goy as head of European sales; Yasuaki Mori as executive consultant for APAC market development; and Amy Villeneuve as a member of the board of directors.

“It is exciting to have reached this important milestone. The new additions to our leadership team underpin our evolution from a technology innovator to a scaling commercial business model including new geographies,” said Amir Bousani, CEO and co-founder, RGo Robotics.

Goy, based in Paris, and Mori, based in Tokyo, join with extensive sales experience in the European and APAC markets. RGo is establishing an initial presence in Japan this year with growth in South Korea planned for late 2023.


“RGo has achieved impressive product maturity and growth since exiting stealth mode last year,” said Fainaro. “The company’s vision-based localization capabilities are industrial-grade, extremely precise and ready today for even the most challenging environments. This, together with higher levels of 3D perception, brings tremendous value to the rapidly growing mobile robotics market. I’m looking forward to working with Amir and the team to continue growing RGo in the year ahead.”

Villeneuve joins RGo’s board of directors with leadership experience in the robotics industry, including her time as the former COO and president of Amazon Robotics. “I am very excited to join the team,” said Villeneuve. “RGo’s technology creates disruptive change in the industry. It reduces cost and adds capabilities to mobile robots in logistics, and enables completely new applications in emerging markets including last-mile delivery and service robotics.”

Customer traction

After comprehensive field trials in challenging indoor and outdoor environments, RGo continued its commercial momentum with new customers. The design wins are with market-leading robot OEMs across multiple vertical markets, including logistics and industrial autonomous mobile robots, forklifts, outdoor machinery and service robots.

Capra Robotics, an award-winning mobile robot manufacturer based in Denmark, selected RGo’s Perception Engine for its new Hircus mobile robot platform.

“RGo continues to develop game-changing navigation technology,” said Niels Jul Jacobsen, CEO of Capra and founder of Mobile Industrial Robots. “Traditional localization sensors either work indoors or outdoors – but not both. Combining both capabilities into a low-cost, compact and robust system is a key aspect of our strategy to deliver mobile robotics solutions to the untapped ‘interlogistics’ market.”

NVIDIA is making AI easier to use
https://www.therobotreport.com/nvidia-is-making-ai-easier-to-use/ | Tue, 21 Mar 2023
NVIDIA announces new features and capabilities that support the acceleration and growth of AI-based solutions for robotics and simulation.


NVIDIA Omniverse enables Amazon Robotics engineers to quickly simulate warehouse environments and train sensor data. | Credit: NVIDIA

NVIDIA’s CEO Jensen Huang presented the latest product announcements during his keynote at the 2023 GTC event this morning.

One of the first significant announcements was that NVIDIA accelerated computing, together with cuOpt, has solved Li and Lim’s pickup-and-delivery benchmark problems, a notoriously hard class of routing problems, faster than any other solution to date. This milestone opens up a new world of capability for roboticists to create real-time solutions to AMR path planning problems.

Isaac Sim on Omniverse

Omniverse Cloud for enterprises is now available as a platform as a service (PaaS) for compute-intensive workloads like synthetic data generation and CI/CD. This PaaS provides on-demand access to top-of-the-line hardware for processing-intensive workloads. The service is rolling out first on Microsoft Azure.

Amazon Robotics is an early customer of Isaac Sim on Omniverse. The Proteus warehouse robot development team created a complete digital twin of the Proteus AMR and deployed it into Isaac Sim to help with the development and programming of the AMR. 

The team generated hundreds of photo-realistic environments to train and test sensor processing algorithms, and AMR behavior. This enabled the development team to accelerate the project without the need to build and test expensive prototypes in the real world.

Isaac Sim enables sensor simulation, interactive design and world generation. It can run on AWS RoboMaker and is deployable on the cloud service provider of your choice.

BMW is also using NVIDIA Omniverse to accelerate the planning and design of new automobile assembly factories. The company moved its virtual factory development to Omniverse, where manufacturing engineers can lay out robotic assembly workcells and virtually modify the robots, tools and programming to optimize the workflow. BMW claims that this digital twin development process is shaving two years off the planning cycle for a new factory.


BMW is an early user of NVIDIA Omniverse for the development and programming of future automotive assembly lines and factories. | Credit: NVIDIA

Isaac ROS DP3 release adds new perception capabilities and open-source modules

There is a new Isaac ROS DP3 release that includes a number of new features:

  • New LiDAR-based grid localizer package
  • New people detection support in the NVBLOX package
    • GPU-accelerated 3D reconstruction for collision avoidance
  • Updated VSLAM and depth perception GEM
  • Source release of NITROS, NVIDIA’s ROS 2 hardware acceleration implementation
  • New Isaac ROS benchmark suite

NVIDIA Omniverse and Isaac Sim enable roboticists to view the world as the sensors see it. | Credit: NVIDIA

Updates to NVIDIA Jetson Orin Family

The Jetson Orin product line gets an update, including a new Orin Nano unit, making it available in a complete range of system-on-modules from hobbyist to commercial platforms:

  • Jetson Orin Nano 8GB/4GB
  • Orin NX 16GB/8GB
  • AGX Orin 64GB/32GB

New entry-level Jetson developer kit for Robotics / Edge AI

NVIDIA is introducing the NVIDIA Jetson Orin Nano developer kit, which delivers 80x the performance of the previous-generation Jetson Nano, enabling developers to run advanced transformer and robotics models. It also delivers 50x the performance per watt, so developers getting started with Jetson Orin Nano modules can build and deploy power-efficient, entry-level AI-powered robots, smart drones and intelligent vision systems.

  • Available now for preorder for $499

NVIDIA Metropolis

In a forward-looking statement, NVIDIA said it believes building infrastructure will evolve such that every building can be considered a “robot.” Practically, this implies that buildings and other infrastructure elements will be imbued with the ability to sense, think and act.

It starts with the idea of automating infrastructure with vision-based AI: a platform for things that watch other things that move. This is the vision the company calls “NVIDIA Metropolis.”

The company announced the latest generation of TAO, TAO 5.0, and the next version of DeepStream, which put more sensors to work to help automate machinery and solve computer vision grand challenges with APIs.

Additional features of TAO 5.0 include:

  • New transformer-based pre-trained models
  • Deploy on any device – GPUs, CPUs, MCUs
  • TAO is now open source
  • AI-assisted annotation
  • REST APIs
  • Integration with any cloud — Google Vertex AI, AzureAI, AWS, etc.

NVIDIA is also announcing Metropolis Microservices to solve hard problems like multi-camera tracking and human and machine interactions.

DeepStream is putting AI to work through low-code interfaces for graph composition as an expansion of existing AI services. This includes multi-sensor fusion and deterministic scheduling for things like PLC controllers.

New features of DeepStream SDK include a new graph execution runtime (GXF) that allows developers to expand beyond the open-source GStreamer multimedia framework. This update unlocks a host of new applications, including those in industrial quality control, robotics and autonomous machines.

Editors note: An earlier version of this article mistakenly referenced Mercedes instead of BMW. It has been updated to accurately document Jensen’s keynote references.

How ChatGPT can control robots
https://www.therobotreport.com/microsoft-demos-how-chatgpt-can-control-robots/ | Wed, 22 Feb 2023
Microsoft researchers use ChatGPT to write computer code that can control a robot arm and an aerial drone.


Microsoft researchers controlled this robotic arm using ChatGPT. | Credit: Microsoft

By now, you’ve likely heard of ChatGPT, OpenAI’s language model that can generate somewhat coherent responses to a variety of prompts and questions. It’s primarily being used to generate text, translate information, make calculations and explain topics you’re looking to learn about.

Researchers at Microsoft, which has invested billions into OpenAI and recently integrated ChatGPT into its Bing search engine, extended the capabilities of ChatGPT to control a robotic arm and aerial drone. Earlier this week, Microsoft released a technical paper that describes a series of design principles that can be used to guide language models toward solving robotics tasks.

“It turns out that ChatGPT can do a lot by itself, but it still needs some help,” Microsoft wrote about its ability to program robots.

Prompting LLMs for robotics control poses several challenges, Microsoft said, such as providing a complete and accurate description of the problem, identifying the right set of allowable function calls and APIs, and biasing the answer structure with special arguments. To make effective use of ChatGPT for robotics applications, the researchers constructed a pipeline composed of the following steps (a minimal sketch of the first three steps follows the list):

  • 1. First, they defined a high-level robot function library. This library can be specific to the form factor or scenario of interest and should map to actual implementations on the robot platform while being named descriptively enough for ChatGPT to follow.
  • 2. Next, they built a prompt for ChatGPT that described the objective while also identifying the set of allowed high-level functions from the library. The prompt could also contain information about constraints, or about how ChatGPT should structure its responses.
  • 3. The user stayed in the loop to evaluate the code output by ChatGPT, either through direct analysis or through simulation, and provided feedback to ChatGPT on the quality and safety of the output code.
  • 4. After iterating on the ChatGPT-generated implementations, the final code can be deployed onto the robot.
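
A minimal sketch of steps 1 through 3 might look like the following. The function library, prompt wording and use of the openai Python package's chat endpoint are illustrative assumptions rather than Microsoft's actual code, and the generated text is printed for human review instead of being executed.

```python
import openai  # pre-1.0 openai package; expects an OPENAI_API_KEY environment variable

# Step 1: a descriptive, high-level robot function library (names are hypothetical).
FUNCTION_LIBRARY = """Allowed functions:
  get_position(object_name) -> (x, y, z)  # locate an object with the camera
  move_to(x, y, z)                        # move the gripper to a point
  grasp()                                 # close the gripper
  release()                               # open the gripper
"""

# Step 2: a prompt that states the objective, the allowed functions and any constraints.
def build_prompt(task: str) -> str:
    return (
        "You control a robot arm. Write Python code that uses ONLY these functions.\n"
        f"{FUNCTION_LIBRARY}\n"
        f"Task: {task}\n"
        "Respond with a single Python code block and nothing else."
    )

def ask_chatgpt(task: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model name
        messages=[{"role": "user", "content": build_prompt(task)}],
    )
    return response["choices"][0]["message"]["content"]

# Step 3: the user inspects the generated code (ideally in simulation) before any of it
# is deployed to hardware in step 4.
print(ask_chatgpt("Stack the red block on top of the blue block."))
```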

Examples of ChatGPT controlling robots

In one example, Microsoft researchers used ChatGPT in a manipulation scenario with a robot arm. It used conversational feedback to teach the model how to compose the originally provided APIs into more complex high-level functions that ChatGPT coded by itself. Using a curriculum-based strategy, the model was able to chain these learned skills together logically to perform operations such as stacking blocks.

The model was also able to build the Microsoft logo out of wooden blocks. It was able to recall the Microsoft logo from its internal knowledge base, “draw” the logo as SVG code, and then use the skills learned above to figure out which existing robot actions can compose its physical form.

Researchers also tried to control an aerial drone using ChatGPT. First, they fed ChatGPT a rather long prompt laying out the computer commands it could write to control the drone. After that, the researchers could make requests to instruct ChatGPT to control the robot in various ways. This included asking ChatGPT to use the drone’s camera to identify drinks such as coconut water and a can of Coca-Cola. It was also able to write code structures for drone navigation based solely on the prompt’s base APIs, according to the researchers.

“ChatGPT asked clarification questions when the user’s instructions were ambiguous and wrote complex code structures for the drone such as a zig-zag pattern to visually inspect shelves,” the team said.

Microsoft said it also applied this approach to a simulated domain, using the Microsoft AirSim simulator. “We explored the idea of a potentially non-technical user directing the model to control a drone and execute an industrial inspection scenario. We observe from the following excerpt that ChatGPT is able to effectively parse intent and geometrical cues from user input and control the drone accurately.”

Key limitation

The researchers did admit this approach has a major limitation: ChatGPT can only write the code for the robot based on the initial prompt the human gives it. A human engineer has to thoroughly explain to ChatGPT how the application programming interface for a robot works; otherwise, it will struggle to generate applicable code.

“We emphasize that these tools should not be given full control of the robotics pipeline, especially for safety-critical applications. Given the propensity of LLMs to eventually generate incorrect responses, it is fairly important to ensure solution quality and safety of the code with human supervision before executing it on the robot. We expect several research works to follow with the proper methodologies to properly design, build and create testing, validation and verification pipelines for LLM operating in the robotics space.

“Most of the examples we presented in this work demonstrated open perception-action loops where ChatGPT generated code to solve a task, with no feedback provided to the model afterwards. Given the importance of closed-loop controls in perception-action loops, we expect much of the future research in this space to explore how to properly use ChatGPT’s abilities to receive task feedback in the form of textual or special-purpose modalities.”

Microsoft said its goal with this research is to see if ChatGPT can think beyond text and reason about the physical world to help with robotics tasks.

“We want to help people interact with robots more easily, without needing to learn complex programming languages or details about robotic systems. The key challenge here is teaching ChatGPT how to solve problems considering the laws of physics, the context of the operating environment, and how the robot’s physical actions can change the state of the world.”

Soft Robotics mGripAI uses simulation to train in NVIDIA Isaac Sim
https://www.therobotreport.com/soft-robotics-mgripai-uses-simulation-to-train-in-nvidia-isaac-sim/ | Thu, 19 Jan 2023
Soft Robotics applies NVIDIA Isaac Sim’s synthetic data to food processing automation in efforts to improve safety and increase production.


Soft Robotics grippers can move items that might be damaged by classic mechanical grippers. | Credit: Soft Robotics

Robots are finally getting a grip. 

Developers have been striving to close the gap on robotic gripping for the past several years, pursuing applications for multibillion-dollar industries. Securely gripping and transferring fast-moving items on conveyor belts holds vast promise for businesses. 

Soft Robotics, a Bedford, Mass. startup, is harnessing NVIDIA Isaac Sim to help close the sim-to-real gap for a handful of robotic gripping applications. One area is perfecting gripping for the pick and place of foods for packaging. 

Food packaging and processing companies are using the startup’s mGripAI system, which combines soft grasping with 3D vision and AI to grasp delicate foods such as proteins, produce and bakery items without damage.

“We’re selling the hands, the eyes and the brains of the picking solution,” said David Weatherwax, senior director of software engineering at Soft Robotics. 

Unlike other industries that have adopted robotics, the $8 trillion food market has been slow to develop robots to handle variable items in unstructured environments, says Soft Robotics. 

The company, founded in 2013, recently landed $26 million in Series C funding from Tyson Ventures, Marel and Johnsonville Ventures.

Companies such as Tyson Foods and Johnsonville are betting on the adoption of robotic automation to help improve safety and increase production in their facilities. Both companies rely on Soft Robotics technologies. 

Soft Robotics is a member of the NVIDIA Inception program, which provides companies with GPU support and AI platform guidance. 

Getting a grip with synthetic data

Soft Robotics develops unique models for every one of its gripping applications, each requiring specific data sets. And picking from piles of wet, slippery chicken and other foods can be a tricky challenge. 

Utilizing Omniverse and Isaac Sim, the company can create 3D renderings of chicken parts with different backgrounds, such as conveyor belts or bins, and with different lighting scenarios. 

The company taps into Isaac Replicator to develop synthetic data, generating hundreds of thousands of images per model and distributing that workload across an array of instances in the cloud. Isaac Replicator is a set of tools, APIs and workflows for generating synthetic data using Isaac Sim.
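
Isaac Replicator exposes its own APIs inside Isaac Sim; as a tool-agnostic sketch of the same idea, a synthetic-data loop randomizes scene parameters and saves paired images, masks and metadata. The render_chicken_scene callable and the parameter ranges here are hypothetical stand-ins for the simulator.

```python
import json
import random

def generate_dataset(num_images, out_dir, render_chicken_scene):
    """Domain-randomized synthetic data loop (conceptual sketch, not the Replicator API).

    render_chicken_scene is a hypothetical callable standing in for the simulator: given
    scene parameters it returns (rgb_image, segmentation_mask, object_poses), where the
    images offer a PIL-style .save() method.
    """
    backgrounds = ["conveyor_belt", "bin", "tray"]
    for i in range(num_images):
        params = {
            "background": random.choice(backgrounds),
            "num_pieces": random.randint(1, 12),           # pile size
            "light_intensity": random.uniform(300, 3000),  # sweep lighting to cover glare
            "camera_height_m": random.uniform(0.6, 1.2),
        }
        rgb, seg_mask, poses = render_chicken_scene(**params)
        rgb.save(f"{out_dir}/img_{i:06d}.png")
        seg_mask.save(f"{out_dir}/seg_{i:06d}.png")
        with open(f"{out_dir}/meta_{i:06d}.json", "w") as f:
            json.dump({"params": params, "poses": poses}, f)
```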

It also runs pose estimation models to help its gripping system see the angle of the item to pick. 

NVIDIA A100 GPUs on site enable Soft Robotics to run split-second inference with the unique models for each application in these food-processing facilities. Meanwhile, simulation and training in Isaac Sim offer access to NVIDIA A100s for scaling up workloads.

“Our current setup is fully synthetic, which allows us to rapidly deploy new applications. We’re all in on Omniverse and Isaac Sim, and that’s been working great for us,” said Weatherwax. 

Solving issues with occlusion, lighting 

A big challenge at Soft Robotics is solving occlusion: understanding how different pieces of chicken stack up and overlap one another when dumped into a pile. “How those form can be pretty complex,” Weatherwax said.

Glare on wet chicken can throw off detection models. “A key thing for us is the lighting, so the NVIDIA RTX-driven ray tracing is really important,” he said. 


Glare on wet chicken is a classic lighting and vision problem that requires a new approach to training machine learning vision models. | Credit: Soft Robotics

But where it really gets interesting is modeling it all in 3D and figuring out in a split second which item is the least obstructed in a pile and most accessible for a robot gripper to pick and place. 

Building synthetic data sets with physics-based accuracy, Omniverse enables Soft Robotics to create such environments. “One of the big challenges we have is how all these amorphous objects form into a pile,” Weatherwax said. 

Boosting production line pick accuracy

Production lines in food processing plants can move fast. But robots deployed with application-specific models promise to handle as many as 100 picks per minute. 

Still a work in progress, the system’s success in such tasks hinges on accurate representations of piles of items, supported by training data sets that consider every possible way items can fall into a pile. 

The objective is to provide the robot with the best available pick from a complex and dynamic environment. If food items fall off the conveyor belt or otherwise become damaged, they are considered waste, which directly impacts yield.

Driving production gains 

Meat-packing companies rely on lines of people for processing chicken, but like so many other industries they have faced employee shortages. Some that are building new plants for food processing can’t even attract enough workers at launch, said Weatherwax. 

“They are having a lot of staffing challenges, so there’s a push to automate,” he said.

The Omniverse-driven work for food processing companies has delivered a more than 10x increase in the company’s simulation capacity, accelerating deployment times for AI picking systems from months to days. 

And that’s enabling Soft Robotics customers to get a grip on more than just deploying automated chicken-picking lines — it’s ensuring that they are covered for an employment challenge that has hit many industries, especially those with increased injury and health risks. 

“Handling raw chicken is a job better suited for a robot,” he said.

How human language accelerated robotic learning
https://www.therobotreport.com/how-human-language-accelerated-robotic-learning/ | Wed, 11 Jan 2023
Researchers developed a suite of policies using machine learning training approaches with and without language information, and then compared the policies’ performance.


Researchers found human language descriptions of tools accelerated the learning of simulated robotic arms. | Credit: Princeton University

Exploring a new way to teach robots, Princeton researchers have found that human-language descriptions of tools can accelerate the learning of a simulated robotic arm lifting and using a variety of tools.

The results build on evidence that providing richer information during artificial intelligence (AI) training can make autonomous robots more adaptive to new situations, improving their safety and effectiveness.

Adding descriptions of a tool’s form and function to the training process for the robot improved the robot’s ability to manipulate newly encountered tools that were not in the original training set. A team of mechanical engineers and computer scientists presented the new method, Accelerated Learning of Tool Manipulation with LAnguage, or ATLA, at the Conference on Robot Learning.

Robotic arms have great potential to help with repetitive or challenging tasks, but training robots to manipulate tools effectively is difficult: Tools have a wide variety of shapes, and a robot’s dexterity and vision are no match for a human’s.

“Extra information in the form of language can help a robot learn to use the tools more quickly,” said study coauthor Anirudha Majumdar, an assistant professor of mechanical and aerospace engineering at Princeton who leads the Intelligent Robot Motion Lab.

The team obtained tool descriptions by querying GPT-3, a large language model released by OpenAI in 2020 that uses a form of AI called deep learning to generate text in response to a prompt. After experimenting with various prompts, they settled on using “Describe the [feature] of [tool] in a detailed and scientific response,” where the feature was the shape or purpose of the tool.
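
Reproducing that querying step takes only a few lines against the OpenAI completions endpoint. Only the prompt template comes from the study; the model name and decoding parameters below are assumptions.

```python
import openai  # pre-1.0 openai package; expects an OPENAI_API_KEY environment variable

PROMPT_TEMPLATE = "Describe the {feature} of {tool} in a detailed and scientific response."

def describe_tool(tool: str, feature: str = "shape") -> str:
    response = openai.Completion.create(
        model="text-davinci-002",  # assumed GPT-3 variant, not necessarily the paper's
        prompt=PROMPT_TEMPLATE.format(feature=feature, tool=tool),
        max_tokens=128,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()

print(describe_tool("crowbar", feature="shape"))
print(describe_tool("squeegee", feature="purpose"))
```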

“Because these language models have been trained on the internet, in some sense you can think of this as a different way of retrieving that information,” more efficiently and comprehensively than using crowdsourcing or scraping specific websites for tool descriptions, said Karthik Narasimhan, an assistant professor of computer science and coauthor of the study. Narasimhan is a lead faculty member in Princeton’s natural language processing (NLP) group, and contributed to the original GPT language model as a visiting research scientist at OpenAI.

This work is the first collaboration between Narasimhan’s and Majumdar’s research groups. Majumdar focuses on developing AI-based policies to help robots – including flying and walking robots – generalize their functions to new settings, and he was curious about the potential of recent “massive progress in natural language processing” to benefit robot learning, he said.

For their simulated robot learning experiments, the team selected a training set of 27 tools, ranging from an axe to a squeegee. They gave the robotic arm four different tasks: push the tool, lift the tool, use it to sweep a cylinder along a table, or hammer a peg into a hole. The researchers developed a suite of policies using machine learning training approaches with and without language information, and then compared the policies’ performance on a separate test set of nine tools with paired descriptions.

This approach is known as meta-learning, since the robot improves its ability to learn with each successive task. It’s not only learning to use each tool, but also “trying to learn to understand the descriptions of each of these hundred different tools, so when it sees the 101st tool it’s faster in learning to use the new tool,” said Narasimhan. “We’re doing two things: We’re teaching the robot how to use the tools, but we’re also teaching it English.”
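
One common way to feed those descriptions into a policy, shown below as a sketch rather than the paper's ATLA architecture, is to embed the text with a frozen language model and concatenate that embedding with the robot's observation; zeroing out the embedding gives a "without language" baseline for comparison.

```python
import torch
import torch.nn as nn

class LanguageConditionedPolicy(nn.Module):
    """Toy policy: concatenate a tool-description embedding with proprioceptive state."""

    def __init__(self, obs_dim=32, lang_dim=768, act_dim=7, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + lang_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, lang_embedding):
        return self.net(torch.cat([obs, lang_embedding], dim=-1))

policy = LanguageConditionedPolicy()
obs = torch.randn(1, 32)    # joint angles, gripper pose, tool pose, etc.
lang = torch.randn(1, 768)  # placeholder for a frozen text encoder's output
action = policy(obs, lang)
baseline_action = policy(obs, torch.zeros_like(lang))  # language-ablated comparison
print(action.shape, baseline_action.shape)  # torch.Size([1, 7]) twice
```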

The researchers measured the success of the robot in pushing, lifting, sweeping and hammering with the nine test tools, comparing the results achieved with the policies that used language in the machine learning process to those that did not use language information. In most cases, the language information offered significant advantages for the robot’s ability to use new tools.

One task that showed notable differences between the policies was using a crowbar to sweep a cylinder, or bottle, along a table, said Allen Z. Ren, a Ph.D. student in Majumdar’s group and lead author of the research paper.

“With the language training, it learns to grasp at the long end of the crowbar and use the curved surface to better constrain the movement of the bottle,” said Ren. “Without the language, it grasped the crowbar close to the curved surface and it was harder to control.”

The research was supported in part by the Toyota Research Institute (TRI), and is part of a larger TRI-funded project in Majumdar’s research group aimed at improving robots’ ability to function in novel situations that differ from their training environments.

“The broad goal is to get robotic systems – specifically, ones that are trained using machine learning — to generalize to new environments,” said Majumdar. Other TRI-supported work by his group has addressed failure prediction for vision-based robot control, and used an “adversarial environment generation” approach to help robot policies function better in conditions outside their initial training.

Editor’s Note: This article was republished from Princeton University.

Intel Labs introduces open-source simulator for AI
https://www.therobotreport.com/intel-labs-introduces-open-source-simulator-for-ai/ | Wed, 14 Dec 2022
Intel Labs collaborated with the Computer Vision Center in Spain, Kujiale in China, and the Technical University of Munich to develop the Simulator for Photorealistic Embodied AI Research (SPEAR).


SPEAR creates photorealistic simulation environments that provide challenging workspaces for training robot behavior. | Credit: Intel

Intel Labs collaborated with the Computer Vision Center in Spain, Kujiale in China, and the Technical University of Munich to develop the Simulator for Photorealistic Embodied AI Research (SPEAR). The result is a highly realistic, open-source simulation platform that accelerates the training and validation of embodied AI systems in indoor domains. The solution can be downloaded under an open-source MIT license.

Existing interactive simulators have limited content diversity, physical interactivity, and visual fidelity. This realistic simulation platform allows developers to train and validate embodied agents for a growing set of tasks and domains.

The goal of SPEAR is to drive research and commercialization of household robotics through the simulation of human-robot interaction scenarios.

It took a team of professional artists more than a year to construct a collection of high-quality, handcrafted, interactive environments. The SPEAR starter pack features more than 300 virtual indoor environments with more than 2,500 rooms and 17,000 objects that can be manipulated individually.

These interactive training environments use detailed geometry, photorealistic materials, realistic physics, and accurate lighting. New content packs targeting industrial and healthcare domains will be released soon.

The use of highly detailed simulation enables the development of more robust embodied AI systems. Roboticists can leverage simulated environments to train AI algorithms and optimize perception functions, manipulation, and spatial intelligence. The ultimate outcome is faster validation and a reduction in time-to-market.

In embodied AI, agents learn by interacting with physical variables in the real world. Capturing and collating these encounters can be time-consuming, labor-intensive and risky. The interactive simulations provide an environment to train and evaluate robots before deploying them in the real world.

Overview of SPEAR

SPEAR is designed based on three main requirements:

  1. Support a large, diverse, and high-quality collection of environments
  2. Provide sufficient physical realism to support realistic interactions and manipulation of a wide range of household objects
  3. Offer as much photorealism as possible, while still maintaining enough rendering speed to support training complex embodied agent behaviors

At its core, SPEAR was implemented on top of the Unreal Engine, which is an industrial-strength open-source game engine. SPEAR environments are implemented as Unreal Engine assets, and SPEAR provides an OpenAI Gym interface to interact with environments via Python.
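
Because the interface follows the OpenAI Gym convention, interacting with a SPEAR environment reduces to the familiar reset/step loop. The sketch below assumes the caller has already constructed an environment through SPEAR's own configuration; the observation contents noted in the comments reflect what this article describes, not a guaranteed schema.

```python
import gym

def run_episode(env: gym.Env, agent=None) -> float:
    """Roll out one episode against any Gym-style environment, such as one from SPEAR."""
    obs = env.reset()
    done, total_reward = False, 0.0
    while not done:
        # Use a trained agent if provided, otherwise sample random actions.
        action = agent.act(obs) if agent is not None else env.action_space.sample()
        obs, reward, done, info = env.step(action)
        # For SPEAR agents, obs would bundle camera images, wheel and joint encoder
        # states, and, for navigation, waypoints plus GPS/compass-style goal cues.
        total_reward += reward
    return total_reward
```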

SPEAR currently supports four distinct embodied agents:

  1. OpenBot Agent – well-suited for sim-to-real experiments, it provides identical image observations to a real-world OpenBot, implements an identical control interface, and has been modeled with accurate geometry and physical parameters
  2. Fetch Agent – modeled using accurate geometry and physical parameters, Fetch Agent is able to interact with the environment via a physically realistic gripper
  3. LoCoBot Agent – modeled using accurate geometry and physical parameters, LoCoBot Agent is able to interact with the environment via a physically realistic gripper
  4. Camera Agent – which can be teleported anywhere within the environment to create images of the world from any angle

The agents return photorealistic robot-centric observations from camera sensors, odometry from wheel encoder states as well as joint encoder states. This is useful for validating kinematic models and predicting the robot’s operation.

For optimizing navigational algorithms, the agents can also return a sequence of waypoints representing the shortest path to a goal location, as well as GPS and compass observations that point directly to the goal. Agents can return pixel-perfect semantic segmentation and depth images, which is useful for correcting for inaccurate perception in downstream embodied tasks and gathering static datasets.

SPEAR currently supports two distinct tasks:

  • The Point-Goal Navigation Task randomly selects a goal position in the scene’s reachable space, computes a reward based on the agent’s distance to the goal, and triggers the end of an episode when the agent hits an obstacle or the goal (a minimal sketch of this reward follows the list).
  • The Freeform Task is an empty placeholder task that is useful for collecting static datasets.
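
The point-goal reward described above amounts to a progress term plus terminal conditions, as in this minimal sketch; the success radius, collision penalty and success bonus values are illustrative, not SPEAR's.

```python
import math

def point_goal_step(agent_xy, goal_xy, prev_dist, collided,
                    success_radius=0.3, collision_penalty=1.0, success_bonus=10.0):
    """One reward/termination update for a point-goal navigation episode (illustrative)."""
    dist = math.dist(agent_xy, goal_xy)
    reward = prev_dist - dist  # positive when the agent moves closer to the goal
    done = False
    if collided:
        reward -= collision_penalty
        done = True
    elif dist < success_radius:
        reward += success_bonus
        done = True
    return reward, done, dist  # dist becomes prev_dist on the next step

reward, done, dist = point_goal_step(agent_xy=(1.0, 2.0), goal_xy=(3.0, 2.0),
                                     prev_dist=2.5, collided=False)
print(reward, done, dist)  # 0.5 False 2.0
```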

SPEAR is available under an open-source MIT license, ready for customization on any hardware. For more details, visit the SPEAR GitHub page.

Applied Intuition lands robotics development contract with Army
https://www.therobotreport.com/applied-intuition-lands-an-up-to-49m-army-contract/ | Wed, 16 Nov 2022
The Army and DIU have selected Applied Intuition to deliver an end-to-end autonomy software development and testing platform.


Applied Intuition’s Simian simulation platform can create thousands of scenarios for autonomous vehicle development. | Source: Applied Intuition

The Army and Defense Innovation Unit (DIU) have selected Applied Intuition to deliver an end-to-end autonomy software development and testing platform for the Army’s Robotic Combat Vehicle (RCV) program. The contract has a $49 million ceiling for the competitive prototyping phase, which will span two years. 

Applied Intuition will draw on its expertise in developing software products for building and testing autonomous vehicles to provide a foundational modeling and simulation platform for the RCV program. The platform aims to give the RCV program office, under the umbrella of PEO Ground Combat Systems, the ability to manage software development and testing for mission and mobility autonomy to be used in RCV variants. 

The RCV program turned to Applied Intuition and its end-to-end autonomy development solutions to improve its off-road maneuvering, obstacle avoidance and safety capabilities. Applied Intuition’s toolchain will also help the RCV program evaluate autonomy stacks created by the Army and its other commercial partners. 

“We are excited to bring our proven enterprise autonomy development toolchain to the Army’s RCV program,” Qasar Younis, co-founder and CEO of Applied Intuition, said. “Our modeling and simulation development environment will enable continuous improvement of autonomy software across the program’s lifecycle and will ultimately enhance the Army‘s broader approach to autonomy stack development.”

The contract, part of the Software Pathway under the Agile Acquisition Framework, is a result of DIU’s Commercial Solutions Opening, which involves the Army’s RCV program working in close coordination with DIU to acquire commercial software.

“The innovative use of the Department of Defense’s Software Acquisition Pathway to acquire commercial modeling and simulation software for autonomy development is a landmark achievement,” Colin Carroll, the Head of Government at Applied Intuition, said. “We look forward to helping the RCV program and the DOD quickly and safely scale production of autonomous systems.”

Applied Intuition was founded in 2017, and has since raised over $350 million. In November 2021, when it announced its last funding round, the company was valued at $3.6 billion. It offers a suite of products aimed at facilitating testing for autonomous vehicles. These include Simian, the company’s core simulator that provides comprehensive scenario coverage for AV development, and Spectral, a platform to train, test and validate perception systems. 

The company is headquartered in Silicon Valley and has offices in Los Angeles, Detroit, Washington, D.C., Munich, Stockholm, Seoul and Tokyo.

Applied Intuition recently partnered with Ouster, which has announced a merger with Velodyne, with the goal of speeding up customer deployment of LiDAR-based perception systems. The partnership will involve the companies collaborating to create, test and release synthetic models of Ouster LiDARs. 

Amazon testing robots to transport oversized items in fulfillment centers
https://www.therobotreport.com/amazon-testing-robots-to-transport-oversized-items-in-fulfillment-centers/ | Mon, 31 Oct 2022
Amazon already has more than half a million robots working in its fulfillment centers every day, and it's looking to add more.


Amazon’s autonomous robot is being tested at some of its fulfillment facilities. | Source: Amazon

Amazon already has more than half a million robots working in its fulfillment centers every day. These robots perform a variety of tasks, like stocking inventory, filling orders and sorting packages, in areas with physical and virtual barriers that prevent them from interacting with human workers in the fulfillment center. 

The robots are kept away from the busy fulfillment floors, where Amazon associates are constantly moving pallets across a crowded floor littered with pillars and other obstacles, to ensure workers are safe and to keep the robots moving quickly. 

However, there are jobs on the fulfillment center floor, like moving the 10% of items ordered from the Amazon Store that are too long, wide or unwieldy to fit in the company’s pods or on its conveyor belts. These tasks require a robot that can use artificial intelligence and computer vision to navigate the chaotic facility floor without putting any workers at risk. 

This robot would also need to be able to be integrated into Amazon’s current fulfillment centers seamlessly, without disrupting the tasks that Amazon associates perform every day.

“We don’t develop technology for technology’s sake,” Siddhartha Srinivasa, director of Amazon Robotics AI, said. “We want to develop technology with an end goal in mind of empowering our associates to perform their activities better and safer. If we don’t integrate seamlessly end-to-end, then people will not use our technology.”

Amazon is currently testing a few dozen robots that can do just that in some of its fulfillment centers. Amazon Robotics AI’s perception lead, Ben Kadlec, is leading the development of the AI for these robots, which have been deployed to preliminarily test whether the robot would be good at transporting non-conveyable items. 

These robots have the ability to understand the three-dimensional structure of the world and how that structure distinguishes each object in it. The robot can then understand how an object is going to behave based on its knowledge of that structure. This understanding, called semantic understanding or scene comprehension, along with LiDAR and camera data, allows the robot to map its environment in real time and make decisions on the fly.

“When the robot takes a picture of the world, it gets pixel values and depth measurements,” Lionel Gueguen, an Amazon Robotics AI machine learning applied scientist, said. “So, it knows at that distance, there are points in space — an obstacle of some sort. But that is the only knowledge the robot has without semantic understanding.”

Semantic understanding is all about teaching a robot to take a point in space and decide if that point is a person, a pod, a pillar, a forklift, another robot or any other object that could be in a fulfillment center. The robot then decides if that object is still or moving. It takes all of this information into account when calculating the best path to its destination. 
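
In code, that fusion step amounts to back-projecting each depth pixel into 3D and attaching the class label predicted for that pixel, then flagging classes that tend to move. This is a generic sketch with a stand-in segmentation output, not Amazon's system.

```python
import numpy as np

DYNAMIC_CLASSES = {"person", "forklift", "robot"}  # objects the planner must treat as moving

def label_point_cloud(depth, class_ids, class_names, fx, fy, cx, cy):
    """Fuse a depth image with per-pixel class ids into labeled 3D points.

    depth:       HxW array of depths in meters (0 where invalid).
    class_ids:   HxW array of integer ids from a segmentation model (stand-in here).
    class_names: list mapping ids to names, e.g. ["floor", "person", "pod", ...].
    fx, fy, cx, cy: pinhole camera intrinsics.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (us[valid] - cx) * z / fx  # back-project pixels into camera coordinates
    y = (vs[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=1)
    names = np.asarray(class_names)[class_ids[valid]]
    is_dynamic = np.isin(names, list(DYNAMIC_CLASSES))
    # Static points map the scene; dynamic ones feed path-prediction models.
    return points, names, is_dynamic
```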

Amazon’s team is currently working on predictive models that can help the robot better predict the paths of people and other moving objects that it encounters. They’re also working to teach the robots how to best interact with humans. 

“If the robot sneaks up on you really fast and hits the brake a millimeter before it touches you, that might be functionally safe, but not necessarily acceptable behavior,” Srinivasa said. “And so, there’s an interesting question around how do you generate behavior that is not only safe and fluent but also acceptable, that is also legible, which means that it’s human-understandable.”

Amazon’s roboticists hope that if they can launch a full-scale deployment of these autonomous robots, then they can apply what they’ve learned to other robots that perform different tasks. The company has already begun rolling out autonomous robots in its warehouses. Earlier this year, it unveiled its first autonomous mobile robot, Proteus.

NVIDIA releases robotics development tools at GTC
https://www.therobotreport.com/nvidia-releases-new-robotics-development-tools-gtc/ | Tue, 20 Sep 2022
NVIDIA released Jetson Orin Nano system-on-modules, updated its Nova Orin reference platform for autonomous mobile robots and announced cloud-based availability for its Isaac Sim technology.


NVIDIA today announced a number of new tools for robotics developers during its GTC event, including Jetson Orin Nano system-on-modules, updates to its Nova Orin reference platform for autonomous mobile robots (AMRs) and cloud-based availability for its Isaac Sim technology.

NVIDIA expanded its Jetson lineup with the launch of Jetson Orin Nano system-on-modules for entry-level edge AI and robotics applications. The new Orin Nano delivers up to 40 trillion operations per second (TOPS) of AI performance, which NVIDIA said is 80x the performance over the prior generation, in the smallest Jetson form factor yet.

Jetson Orin features an NVIDIA Ampere architecture GPU, Arm-based CPUs, next-generation deep learning and vision accelerators, high-speed interfaces, fast memory bandwidth and multimodal sensor support. The Orin Nano modules will be available in two versions. The Orin Nano 8GB delivers up to 40 TOPS with power configurable from 7W to 15W, while the 4GB version delivers up to 20 TOPS with power options as low as 5W to 10W.

Orin Nano is supported by the NVIDIA JetPack software development kit and is powered by the same NVIDIA CUDA-X accelerated computing stack used to create AI products in such fields as industrial IoT, manufacturing, smart cities and more.

The Jetson Orin Nano modules will be available in January 2023 starting at $199.

“Over 1,000 customers and 150 partners have embraced Jetson AGX Orin since NVIDIA announced its availability just six months ago, and Orin Nano will significantly expand this adoption,” said Deepu Talla, vice president of embedded and edge computing at NVIDIA. “With an orders-of-magnitude increase in performance for millions of edge AI and ROS developers today, Jetson Orin is the ideal platform for virtually every kind of robotics deployment imaginable.”

The Orin Nano modules are form-factor and pin-compatible with the previously announced Orin NX modules. Full emulation support allows customers to get started developing for the Orin Nano series today using the AGX Orin developer kit. This gives customers the flexibility to design one system to support multiple Jetson modules and easily scale their applications.

The Jetson Orin platform is designed to solve tough robotics challenges and brings accelerated computing to over 700,000 ROS developers. Combined with the powerful hardware capabilities of Orin Nano, enhancements in the latest NVIDIA Isaac software for ROS put increased performance and productivity in the hands of roboticists.

Jetson Orin has seen broad support across the robotics and embedded computing ecosystem, including from Canon, John Deere, Microsoft Azure, Teradyne, TK Elevator and many more.

 

Isaac Sim in the clouds
NVIDIA also announced there will be three ways to access its Isaac Sim robotics simulation platform on the cloud:

  • It will soon be available on the new NVIDIA Omniverse Cloud platform
  • It’s available now on AWS RoboMaker
  • Developers can now download it from NVIDIA NGC and deploy it to any public cloud

Roboticists will be able to generate large datasets from physically accurate sensor simulations to train the AI-based perception models on their robots. The synthetic data generated in these simulations improve the model performance and provide training data that often can’t be collected in the real world.

NVIDIA said the upcoming release of Isaac Sim will include NVIDIA cuOpt, a real-time fleet task-assignment and route-planning engine for optimizing robot path planning. Tapping into the accelerated performance of the cloud, teams can make dynamic, data-driven decisions, whether designing the ideal warehouse layout or optimizing active operations.

Nova Orin updates
NVIDIA also shared new details about three Nova Orin reference platform configurations for AMRs. Two use a single Jetson AGX Orin — which runs the NVIDIA Isaac robotics stack and the Robot Operating System (ROS) with the GPU-accelerated framework — and one relies on two Orin modules.

NVIDIA said the Nova Orin platform is designed to improve reliability and reduce development costs for building and deploying AMRs. Nova Orin provides industrial-grade configurations of sensors, software and GPU-computing capabilities.

The Nova Orin reference architecture designs are provided for specific use cases. There is one Orin-based design without safety-certified sensors, and one that includes them, along with a safety programmable logic controller. The third architecture has a dual Orin-based design that depends on vision AI for enabling functional safety.

Sensor support is included for stereo cameras, LiDAR, ultrasonic sensors and inertial measurement units. The chosen sensors have been selected to balance performance, price and reliability for industrial applications. The stereo cameras and fisheye cameras are custom designed by NVIDIA in coordination with camera partners. All sensors are calibrated and time-synchronized, and come with drivers for reliable data capture. These sensors allow AMRs to detect objects and obstacles across a wide range of situations while also enabling simultaneous localization and mapping (SLAM).

NVIDIA provides two LiDAR options, one for applications that don’t need sensors certified for functional safety, and the other for those that do. In addition to these 2D LiDARs, Nova Orin supports 3D LiDAR for mapping and ground-truth data collection.

The base OS includes drivers and firmware for all the hardware and adaptation tools, as well as design guides for integrating it with robots. Nova can be integrated with a ROS-based robot application. The sensors will have validated models in Isaac Sim for application development and testing without the need for an actual robot.

The cloud-native data acquisition tools eliminate the arduous task of setting up data pipelines for the vast amount of sensor data needed for training models, debugging and analytics. State-of-the-art GEMs developed for Nova sensors are GPU accelerated with the Jetson Orin platform, providing key building blocks such as visual SLAM, stereo depth estimation, obstacle detection, 3D reconstruction, semantic segmentation and pose estimation.

Nova Orin supports secure over-the-air updates, as well as device management and monitoring, to enable easy deployment and reduce the cost of maintenance. Its open, modular design enables developers to use some or all capabilities of the platform and extend it to quickly develop robotics applications.

NVIDIA is working closely with regulatory bodies to develop vision-enabled safety technology to further reduce cost and improve the reliability of AMRs.

RoboBusiness and Field Robotics Engineering Forum preview
https://www.therobotreport.com/robobusiness-and-field-robotics-engineering-forum-preview/ | Tue, 13 Sep 2022
Steve Crowe and Mike Oitzman are joined by Dan Kara to preview RoboBusiness 2022.

Welcome to Episode 93 of The Robot Report Podcast, which brings conversations with robotics innovators straight to you. Join us each week for discussions with leading roboticists, innovative robotics companies and other key members of the robotics community.

Co-hosts Steve Crowe and Mike Oitzman are joined by Dan Kara to preview and discuss the upcoming RoboBusiness and the Field Robotics Engineering Forum. The events take place October 19-20, 2022, at the Santa Clara Convention Center. To register, go to robobusiness.com.

We’ve got an exciting lineup of keynotes and content planned for both events. Other event highlights include:

  • Pitchfire startup competition
  • Networking events
  • Startup bootcamp
  • Career fair (Sponsored by MassRobotics)

Links from today’s show:


If you would like to be a guest on an upcoming episode of the podcast, or if you have recommendations for future guests or segment ideas, contact Steve Crowe or Mike Oitzman.

For sponsorship opportunities of The Robot Report Podcast, contact Courtney Nagle for more information.

The post RoboBusiness and Field Robotics Engineering Forum preview appeared first on The Robot Report.

Commercial UAV update; Product leadership with Kishore Boyalakuntla
https://www.therobotreport.com/commercial-uav-update-product-leadership-with-kishore-boyalakuntla/
Sat, 10 Sep 2022 00:27:05 +0000
This week's podcast reviews the latest developments in UAVs and takes a deep dive into product management with Kishore Boyalakuntla.

Welcome to Episode 92 of The Robot Report Podcast, which brings conversations with robotics innovators straight to you. Join us each week for discussions with leading roboticists, innovative robotics companies and other key members of the robotics community.

Kishore Boyalakuntla, VP of products at Berkshire Grey, joins this week’s show. Kishore talks to Mike about product management best practices and how important the voice of the customer is when making product decisions and when developing and productizing automation solutions. If you are a product leader in the robotics market, you’ll appreciate what it takes to commercialize a robotic solution and maintain a viable product roadmap.

The podcast also discusses the top stories of the week, including a pending FTC investigation into Amazon’s proposed acquisition of iRobot.

Links from today’s show:


If you would like to be a guest on an upcoming episode of the podcast, or if you have recommendations for future guests or segment ideas, contact Steve Crowe or Mike Oitzman.

For sponsorship opportunities of The Robot Report Podcast, contact Courtney Nagle for more information.

The post Commercial UAV update; Product leadership with Kishore Boyalakuntla appeared first on The Robot Report.

Robust.AI co-founder to deliver opening keynote at RoboBusiness
https://www.therobotreport.com/robust-ai-co-founder-to-deliver-opening-keynote-at-robobusiness/
Fri, 09 Sep 2022 16:08:43 +0000
Anthony Jules to deliver farsighted keynote at RoboBusiness called "Designing the Robot Future We Want to Live In."

Anthony Jules, co-founder and CEO of Robust.AI, will deliver the opening keynote at RoboBusiness, which takes place Oct. 19-20 in Santa Clara, Calif. Jules’ keynote, called “Designing the Robot Future We Want to Live In,” runs from 8:45 AM to 9:30 AM on Oct. 19.

Robotics and AI have entered a new golden age of capability and possibility. We have perception systems that were only dreamed of a decade ago, while the cost of building useful machines continues to fall. At the same time, the world’s machines are increasingly connected and able to work in concert. These changes and others form the bedrock for an evolution in robotics, where individual systems give way to fleets of machines that are incredibly capable, especially when it comes to working with people.

In this farsighted keynote presentation, Jules will explain the significant business opportunities available for those willing to take the hard road of intentionally designing and developing systems for the positive future we all want to live in – one that enables people to work in true collaboration with machines.

Today, many machines operate in locations completely devoid of people. Going forward, however, most work will be undertaken by individuals supported by automation or accomplished by increasingly intelligent systems working in true collaboration with people. These machines will operate next to us and share space with us, and by necessity, they will affect our work experiences and well-being. Therefore, our intention must be to make that collaborative work experience a positive one. If we take a purely functional approach to solving tasks using robotics and automation, the work experience will be negative rather than positive.

Jules co-founded Robust.AI in 2019 alongside Mohamed Amer, Rodney Brooks and Henrik Christensen. Robust.AI is pioneering a vision of collaborative mobility in robotics and recently unveiled its autonomous mobile robot software suite, called Grace. Jules is a 30-year technology veteran with expertise in robotics, AI, machine learning and business transformation. His exceptional track record as a technology leader is based on his belief that complex problems like robotics require multidimensional, multidisciplinary solutions that are human-centered by design.

Before co-founding Robust.AI, Jules led a number of high-performance multidisciplinary teams building robotics products at Google. As part of Google Robotics and X, he was head of product and program management for the Everyday Robot project. Jules joined Google through its 2013 acquisition of Redwood Robotics, where he was COO and VP of product. Earlier in his career, he was part of the founding team of Sapient Corporation, where he held leadership roles in software project delivery, client management and people strategy. He holds a BS and an MS in computer science from MIT.

The other RoboBusiness keynotes include:

  • Allison Thackston, senior technical lead and manager, roboticist, Waymo
  • Sally Miller, CIO, North America, DHL Supply Chain
  • Jonathan Hurst and Damion Shelton, co-founders, Agility Robotics
  • Keynote panel on investments with Karthee Madasamy, founder and managing partner of Mobile Foundation Ventures; Sherwin Prior, director of Amazon’s Industrial Innovation Fund; Fady Saad, founder and general partner of Cybernetix Ventures; and Martyna Waliszewska, investment manager, Invest in Odense

RoboBusiness will feature 100-plus exhibitors, 50-plus speakers, a MassRobotics Career Fair and Startup Workshop, the Pitchfire Startup Competition, networking receptions and much more. Full conference passes are $795, while expo-only passes are just $75. Academic discounts are available, with academic full conference passes at $295. Register today.

Co-Located Events
RoboBusiness will be co-located with the Field Robotics Engineering Forum, an international conference and exposition designed to give engineers, engineering managers, business professionals and others information about how to successfully develop and safely deploy the next generation of field robotics systems in wide-ranging, outdoor, dynamic environments. You can check out the current list of speakers, to which more will be added, here.

Also co-located with RoboBusiness is DeviceTalks West, the premier industry event for medical technology professionals, currently in its ninth year. Both events attract engineering and business professionals from a broad range of healthcare and medical technology backgrounds.

Sponsorship Opportunities
For information about sponsorship and exhibition opportunities, download the prospectus. Questions regarding sponsorship opportunities should be directed to Courtney Nagle at cnagle[AT]wtwhmedia.com.

About WTWH Media
WTWH Media is an integrated media company serving engineering, business and investment professionals through more than 50 websites and five print publications, along with many technical and business events. WTWH’s Robotics Group produces The Robot Report, Robotics Business Review, Collaborative Robotics Trends and Mobile Robot Guide, which are online technical, business and investment news and information portals focused on robotics and intelligent systems. WTWH Media also produces leading in-person robotics conferences, including the Robotics Summit & Expo, RoboBusiness and the Healthcare Robotics Engineering Forum. See www.wtwhmedia.com for more information.

The post Robust.AI co-founder to deliver opening keynote at RoboBusiness appeared first on The Robot Report.

AI: chipset bans and keeping humans in the loop
https://www.therobotreport.com/ai-chipset-bans-keeping-humans-in-the-loop/
Fri, 02 Sep 2022 19:00:37 +0000
This week's podcast discusses situations in which AI and machine learning still come up short for robotics systems and how a human-in-the-loop system can help solve edge cases in real-time.

Welcome to Episode 91 of The Robot Report Podcast, which brings conversations with robotics innovators straight to you. Join us each week for discussions with leading roboticists, innovative robotics companies and other key members of the robotics community.

Michael Kohen, founder and CEO of Spark AI, joins this week’s show. Michael discusses situations in which AI and machine learning still come up short for robotics systems and how a human-in-the-loop system can help solve edge cases in real time. He shares specific real-world examples, including details about Spark AI’s partnership with John Deere and its autonomous tractors.

The podcast also discusses the top stories of the week, including the US government imposing restrictions on AMD and NVIDIA selling certain AI chipsets to China. We break down what this could mean for robotics.

Links from today’s show:


If you would like to be a guest on an upcoming episode of the podcast, or if you have recommendations for future guests or segment ideas, contact Steve Crowe or Mike Oitzman.

For sponsorship opportunities of The Robot Report Podcast, contact Courtney Nagle for more information.

The post AI: chipset bans and keeping humans in the loop appeared first on The Robot Report.

Fraunhofer taps NVIDIA’s simulation skills to improve robot designs
https://www.therobotreport.com/fraunhofer-taps-nvidias-simulation-skills-to-improve-robot-designs/
Wed, 31 Aug 2022 20:04:47 +0000
The Germany-based research group’s O3dyn platform uses NVIDIA simulation technologies to create robots with a rich feature set for logistics, manufacturing and more.

Fraunhofer developed O3dyn to explore new concepts of AMR motion and dynamics for logistics operations. | Credit: Fraunhofer IPA

The Fraunhofer-Gesellschaft in Germany conducts practical research in a number of important fields, including AI, cybersecurity and medicine. One of its 76 research institutes, Fraunhofer IML, seeks to advance robotics and logistics. Its researchers are testing the simulation capabilities of NVIDIA Isaac Sim to potentially enhance the design of robots.

Its most recent development, O3dyn, is an indoor-outdoor autonomous mobile robot (AMR) built using NVIDIA simulation and robotics technologies.

“We’re looking at how we can go as fast and as safely as possible in logistics scenarios,” said Julian Eber, a robotics and AI researcher at Fraunhofer IML.

The latest research aims to bridge the sim-to-real gap by developing and validating these AMRs in simulation. The researchers use Isaac Sim to train the AMR virtually by placing it in photorealistic, physically correct 3D environments.

The team uses Isaac Sim to test the operation of O3dyn by importing a complete 3D CAD model of the robot into the simulated environment. The accurate specifications are combined with Omniverse PhysX to deliver a realistic response to any input, and simulated warehouse environments are used to train the robot’s AI so it can navigate and operate on its intended missions.
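
For a sense of that workflow, the sketch below loads a robot model (already converted to USD) into a headless Isaac Sim instance and steps the PhysX simulation. The file path and prim path are placeholders, and the module names follow the 2022-era Isaac Sim Python API, so they may differ in newer releases.

    # Minimal standalone Isaac Sim script: load a robot USD and step the physics simulation.
    # Paths are placeholders; module names follow the 2022-era Isaac Sim Python API.
    from omni.isaac.kit import SimulationApp

    # The SimulationApp must be created before importing other omni.isaac modules.
    simulation_app = SimulationApp({"headless": True})

    from omni.isaac.core import World
    from omni.isaac.core.utils.stage import add_reference_to_stage

    world = World(stage_units_in_meters=1.0)
    world.scene.add_default_ground_plane()

    # Reference a (hypothetical) O3dyn USD model into the stage.
    add_reference_to_stage(usd_path="/data/o3dyn.usd", prim_path="/World/O3dyn")

    world.reset()
    for _ in range(600):              # roughly 10 seconds at the default 60 Hz physics step
        world.step(render=False)      # advance PhysX without rendering

    simulation_app.close()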

In the physical world, O3dyn feeds a variety of camera and sensor inputs into the NVIDIA Jetson edge AI and robotics platform to aid navigation. Designed for speed and agility, it can move at up to 13.4 m/s (30 mph) and rides on omnidirectional wheels that let it move in any direction.
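
To illustrate how omnidirectional wheels turn a body-frame velocity command into individual wheel speeds, here is a standard inverse-kinematics sketch for a four-wheel mecanum-style base. The wheel radius and axle dimensions are made-up example values, not O3dyn's actual geometry.

    # Inverse kinematics for a four-wheel mecanum-style omnidirectional base.
    # Geometry values are illustrative defaults, not O3dyn's real dimensions.

    def mecanum_wheel_speeds(vx, vy, wz, r=0.10, lx=0.30, ly=0.25):
        """Return wheel angular velocities (rad/s) for a body-frame velocity command.

        vx: forward velocity (m/s), vy: leftward velocity (m/s), wz: yaw rate (rad/s),
        r: wheel radius (m), lx/ly: half of the wheelbase and track width (m).
        """
        k = lx + ly
        front_left  = (vx - vy - k * wz) / r
        front_right = (vx + vy + k * wz) / r
        rear_left   = (vx + vy - k * wz) / r
        rear_right  = (vx - vy + k * wz) / r
        return front_left, front_right, rear_left, rear_right

    # Pure sideways motion, which is only possible because the wheels are omnidirectional.
    print(mecanum_wheel_speeds(vx=0.0, vy=1.0, wz=0.0))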

“The omnidirectional dynamics is very unique, and there’s nothing like this that we know of in the market,” said Sören Kerner, head of AI and autonomous systems at Fraunhofer IML.

As a result, the virtual robot can move as quickly in a simulation as the actual robot can in the physical world. Harnessing the virtual environment allows Fraunhofer to accelerate development, safely increase accuracy for real-world deployment and scale up faster.

Fraunhofer refers to this approach as simulation-based AI. The institute is making the AMR simulation model open source so that developers can enhance it and achieve results more quickly.

The post Fraunhofer taps NVIDIA’s simulation skills to improve robot designs appeared first on The Robot Report.
