Academia / Research Archives - The Robot Report
Robotics news, research and analysis

MIT researchers create algorithm to stop drones from colliding midair (April 8, 2023)

The MIT team tested its collision avoidance system in a flight environment with six drones and in simulation. | Source: MIT

A research team from MIT created a trajectory-planning system called Robust MADER that allows drones working together in the same airspace to pick safe paths forward without crashing into each other. The algorithm is an updated version of MADER, a 2020 project that worked well in simulation but didn’t hold up in real-world testing.

The original MADER system had each agent broadcast its trajectory so that fellow drones knew where it planned to go. In simulation this worked without problems, with all drones considering each other’s trajectories when planning their own. But when put to the test on real hardware, the system didn’t account for delays in communication between drones, resulting in unexpected collisions.

“MADER worked great in simulations, but it hadn’t been tested in hardware. So, we built a bunch of drones and started flying them. The drones need to talk to each other to share trajectories, but once you start flying, you realize pretty quickly that there are always communication delays that introduce some failures,” Kota Kondo, an aeronautics and astronautics graduate student, said.

Robust MADER is able to generate collision-free trajectories for drones even when there is a delay in communications between agents. The system is an asynchronous, decentralized, multiagent trajectory planner, meaning each drone formulates its own trajectory and then checks with drones nearby to ensure it won’t run into any of them. 

The drones optimize their new trajectories using an algorithm that incorporates the trajectories they received from nearby drones, and agents constantly optimize and broadcast new trajectories to avoid collisions. 

To get around any delays in sharing trajectories, every drone has a delay-check period, where it spends a fixed amount of time repeatedly checking for communications from other agents to see if its new trajectory is safe. If it finds a possible collision, it abandons the new trajectory and keeps going on its current one. The length of this delay-check period depends on the distance between agents and other environmental factors that could hamper communications. 
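The delay-check logic described above maps naturally onto a short control loop. The sketch below is a minimal illustration of that idea, not the authors’ implementation; the drone object and its optimize_trajectory(), broadcast(), received_trajectories(), conflicts_with() and current_trajectory helpers are hypothetical placeholders.

```python
import time

def plan_with_delay_check(drone, delay_check_period_s=0.5):
    """Minimal sketch of a Robust MADER-style delay check (illustrative only).

    `drone` is assumed to expose optimize_trajectory(), broadcast(),
    received_trajectories() and a current_trajectory attribute; the returned
    trajectory object is assumed to expose conflicts_with(). None of these
    are a real API.
    """
    candidate = drone.optimize_trajectory()   # plan around the neighbor trajectories known so far
    drone.broadcast(candidate)                # tell nearby agents what we intend to fly

    deadline = time.monotonic() + delay_check_period_s
    while time.monotonic() < deadline:
        # Keep re-checking, because neighbor messages may arrive late.
        for other in drone.received_trajectories():
            if candidate.conflicts_with(other):
                return drone.current_trajectory   # abandon the new plan, keep flying the old one
        time.sleep(0.01)

    return candidate   # no conflict reported within the window, so commit to the new trajectory
```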

While the system does require all drones to agree on each new trajectory, they don’t all have to agree at the same time, which makes the approach scalable. It could be used in any situation where multiple drones are working together in the same airspace, such as spraying pesticides over crops.

The MIT team ran hundreds of simulations in which they artificially introduced communication delays, and found that Robust MADER was 100% successful at avoiding collisions. When tested with six drones and two aerial obstacles in a flight environment, Robust MADER avoided all collisions, while the old algorithm would have caused seven.

Moving forward, the research team hopes to put Robust MADER to the test outdoors, where obstacles can affect communications. They also hope to outfit drones with visual sensors so they can detect other agents or obstacles, predict their movements and include that information in trajectory optimizations. 

Kota Kondo wrote the paper with Jesus Tordesillas, a postdoc; Parker C. Lusk, a graduate student; Reinaldo Figueroa, Juan Rached, and Joseph Merkel, MIT undergraduates; and senior author Jonathan P. How, the Richard C. Maclaurin Professor of Aeronautics and Astronautics, a principal investigator in the Laboratory for Information and Decision Systems (LIDS), and a member of the MIT-IBM Watson AI Lab. This work was supported by Boeing Research and Technology.

How MIT taught a quadruped to play soccer (April 6, 2023)

A research team at MIT’s Improbable Artificial Intelligence Lab, part of the Computer Science and Artificial Intelligence Laboratory (CSAIL), taught a Unitree Go1 quadruped to dribble a soccer ball on various terrains. DribbleBot can maneuver soccer balls on landscapes like sand, gravel, mud and snow, adapt to the varied impact each surface has on the ball’s motion, and get up and recover the ball after falling.

The team used simulation to teach the robot how to actuate its legs during dribbling, which let it acquire hard-to-script skills for responding to diverse terrains much more quickly than training in the real world. After loading the robot and other assets into the simulator and setting the physical parameters, the team could simulate 4,000 versions of the quadruped in parallel in real time, collecting data 4,000 times faster than a single robot could. You can read the team’s technical paper, “DribbleBot: Dynamic Legged Manipulation in the Wild,” here (PDF).

DribbleBot started out not knowing how to dribble a ball at all. The team trained it by giving it a reward when it dribbled well and negative reinforcement when it messed up. Using this method, the robot figured out what sequence of forces it should apply with its legs.

“One aspect of this reinforcement learning approach is that we must design a good reward to facilitate the robot learning a successful dribbling behavior,” MIT Ph.D. student Gabe Margolis, who co-led the work along with Yandong Ji, research assistant in the Improbable AI Lab, said. “Once we’ve designed that reward, then it’s practice time for the robot. In real time, it’s a couple of days, and in the simulator, hundreds of days. Over time it learns to get better and better at manipulating the soccer ball to match the desired velocity.”
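As a rough illustration of the kind of reward term the quote describes, the sketch below scores how closely the ball tracks a commanded velocity and penalizes falls. The weights and the exponential tracking term are assumptions for illustration, not values from the paper.

```python
import numpy as np

def dribbling_reward(ball_velocity, commanded_velocity, robot_fell,
                     w_track=1.0, w_fall=5.0):
    """Toy dribbling reward: high when the ball follows the commanded velocity,
    with a penalty when the robot falls. All weights are illustrative."""
    tracking_error = np.linalg.norm(np.asarray(ball_velocity) - np.asarray(commanded_velocity))
    reward = w_track * np.exp(-tracking_error)   # close tracking -> reward near w_track
    if robot_fell:
        reward -= w_fall                         # "negative reinforcement when it messes up"
    return reward

# Example: ball moving at 0.9 m/s toward the commanded 1.0 m/s, no fall.
print(dribbling_reward([0.9, 0.0], [1.0, 0.0], robot_fell=False))
```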

The team did teach the quadruped how to handle unfamiliar terrains and recover from falls using a recovery controller built into its system. However, dribbling on different terrains still presents many more complications than walking alone.

The robot has to adapt its locomotion to apply forces to the ball while dribbling, and it has to adjust to the way the ball interacts with the landscape. For example, soccer balls behave differently on thick grass than on pavement or snow. To handle this, the MIT team leveraged cameras on the robot’s head and body to give it vision.

While the robot can dribble on many terrains, its controller currently isn’t trained in simulated environments that include slopes or stairs. The quadruped can’t perceive the geometry of the terrain; it only estimates material contact properties such as friction, so slopes and stairs will be the next challenge for the team to tackle.

The MIT team is also interested in applying the lessons they learned while developing DribbleBot to other tasks that involve combined locomotion and object manipulation, like transporting objects from place to place using legs or arms. A team from Carnegie Mellon University (CMU) and UC Berkeley recently published their research about how to give quadrupeds the ability to use their legs to manipulate things, like opening doors and pressing buttons. 

The team’s research is supported by the DARPA Machine Common Sense Program, the MIT-IBM Watson AI Lab, the National Science Foundation Institute of Artificial Intelligence and Fundamental Interactions, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator.

Researchers taught a quadruped to use its legs for manipulation (March 31, 2023)

Researchers from Carnegie Mellon University (CMU) and UC Berkeley want to give quadrupeds capabilities closer to those of their biological counterparts. Just as real dogs use their front legs for more than walking and running, such as digging and other manipulation tasks, the researchers think robotic quadrupeds could someday do the same.

Today, most quadrupeds use their legs only to navigate their surroundings. Some of them, like Boston Dynamics’ Spot, get around this limitation with a robotic arm added to the quadruped’s back. The arm allows Spot to manipulate things, like opening doors and pressing buttons, while maintaining the flexibility that four legs give locomotion.

However, the researchers at CMU and UC Berkeley taught a Unitree Go1 quadruped, equipped with an Intel RealSense camera for perception, how to use its front legs to climb walls, press buttons, kick a soccer ball and perform other object interactions in the real world, on top of teaching it how to walk.

The team started this challenging task by decoupling the skill learning into two broad categories: locomotion, which involves movements like walking or climbing a wall, and manipulation, which involves using one leg to interact with objects while balancing on three legs. Decoupling these tasks helps the quadruped simultaneously move to stay balanced and manipulate objects with one leg.

The team taught the quadruped these skills in simulation and transferred them to the real world with its proposed sim2real variant, which builds on recent successes in learned locomotion.

All of these skills are combined into a robust long-term plan by teaching the quadruped a behavior tree that encodes a high-level task hierarchy from one clean expert demonstration. This allows the quadruped to move through the behavior tree and return to its last successful step when it runs into problems on a particular branch.

For example, if a quadruped is tasked with pressing a button on a wall but fails to climb up the wall, it returns to the last task it did successfully, like approaching the wall, and starts there again.
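A minimal sketch of that fallback behavior is shown below: steps run in order, and a failure sends execution back to the previously successful step. The step names and the callable interface are hypothetical stand-ins for behavior-tree nodes, not the authors’ implementation.

```python
def run_with_fallback(steps, max_failures=5):
    """Execute ordered task steps, falling back to the previously successful
    step whenever one fails. Each step is a callable returning True on
    success -- a toy stand-in for a behavior-tree node."""
    i = 0
    failures = 0
    while i < len(steps):
        if steps[i]():                # try the current step
            i += 1                    # success: advance to the next node
        else:
            failures += 1
            if failures > max_failures:
                return False          # give up after too many failures overall
            i = max(i - 1, 0)         # failure: back up to the step that last succeeded
    return True

# Hypothetical usage: approach the wall, climb it, then press the button.
# run_with_fallback([approach_wall, climb_wall, press_button])
```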

The research team was made up of Xuxin Cheng, a master’s student in robotics at CMU; Ashish Kumar, a graduate student at UC Berkeley; and Deepak Pathak, an assistant professor of computer science at CMU. You can read their technical paper, “Legs as Manipulator: Pushing Quadrupedal Agility Beyond Locomotion” (PDF), to learn more. They said a limitation of their work is that they decoupled high-level decision making and low-level command tracking, but that a full end-to-end solution is “an exciting future direction.”

German robotics industry to grow 9% in 2023 (March 29, 2023)

German Robotics and Automation global turnover. | Source: VDMA Robotics + Automation

German robotics and automation experts expect the industry’s gross revenue to grow by 9%, reaching 15.7 billion euros (over $17 billion) by the end of 2023, according to the VDMA Robotics + Automation Association.

“Demand for robotics and automation remains high and the transformation of many customer industries requires innovative automation solutions. The easing in supply chain disruptions is now putting the industry in a position to successively work off the enormously high order backlog,” Frank Konrad, Chairman of the VDMA R+A Association and CEO of Hahn Automation GmbH in Rheinböllen, Germany, said. 

The VDMA R+A Association is a trade association that is made up of more than 370 member companies. These companies include suppliers of components and systems from the fields of robotics, integrated assembly systems and machine vision. 

Germany was the fourth most automated country worldwide in 2020 and 2021, with the country reaching 397 industrial robots for every 10,000 employees in 2021, according to the International Federation of Robotics (IFR). 

Within the automotive industry, which has the largest number of robots working in factories around the world, Germany has 1,500 robots for every 10,000 employees, according to the IFR. This makes it the country with the second-highest robot density in the automotive industry, coming just below South Korea. 

MIT ‘traffic cop’ algorithm helps drones stay on task (March 15, 2023)

MIT engineers developed a method to tailor any wireless network to handle a high load of time-sensitive data coming from multiple sources. | Credit: Christine Daniloff, MIT

How fresh are your data? For drones searching a disaster zone or robots inspecting a building, working with the freshest data is key to locating a survivor or reporting a potential hazard. But when multiple robots simultaneously relay time-sensitive information over a wireless network, a traffic jam of data can ensue. Any information that does get through may be too stale to serve as a useful, real-time report.

Now, MIT engineers may have a solution. They’ve developed a method to tailor any wireless network to handle a high load of time-sensitive data coming from multiple sources. Their new approach, called WiSwarm, configures a wireless network to control the flow of information from multiple sources while ensuring the network is relaying the freshest data.

The team used their method to tweak a conventional Wi-Fi router, and showed that the tailored network could act like an efficient traffic cop, able to prioritize and relay the freshest data to keep multiple vehicle-tracking drones on task.

The team’s method, which they will present in May at IEEE’s International Conference on Computer Communications (INFOCOM), offers a practical way for multiple robots to communicate over available Wi-Fi networks so they don’t have to carry bulky and expensive communications and processing hardware onboard.

Last in line

The team’s approach departs from the typical way in which robots are designed to communicate data.

“What happens in most standard networking protocols is an approach of first come, first served,” said MIT author Vishrant Tripathi. “A video frame comes in, you process it. Another comes in, you process it. But if your task is time-sensitive, such as trying to detect where a moving object is, then all the old video frames are useless. What you want is the newest video frame.”

In theory, an alternative approach of “last in, first out” could help keep data fresh. The concept is similar to a chef putting out entrées one by one as they come hot off the line. If you want the freshest plate, you’d take the last one that joined the queue. The same goes for data if what you care about is the “age of information,” or how up to date the data are.

“Age-of-information is a new metric for information freshness that considers latency from the perspective of the application,” said Eytan Modiano of the Laboratory for Information and Decision Systems (LIDS). “For example, the freshness of information is important for an autonomous vehicle that relies on various sensor inputs. A sensor that measures the proximity to obstacles in order to avoid collision requires fresher information than a sensor measuring fuel levels.”

The team looked to prioritize age of information by incorporating a “last in, first out” protocol for multiple robots working together on time-sensitive tasks. They aimed to do so over conventional wireless networks, as Wi-Fi is pervasive and doesn’t require bulky onboard communication hardware to access.

However, wireless networks come with a big drawback: They are distributed in nature and do not prioritize receiving data from any one source. A wireless channel can then quickly clog up when multiple sources simultaneously send data. Even with a “last in, first out” protocol, data collisions would occur. In a time-sensitive exercise, the system would break down.

Data priority

As a solution, the team developed WiSwarm — a scheduling algorithm that can be run on a centralized computer and paired with any wireless network to manage multiple data streams and prioritize the freshest data.

Rather than attempting to take in every data packet from every source at every moment in time, the algorithm determines which source in a network should send data next. That source (a drone or robot) then observes a “last in, first out” protocol to send its freshest piece of data through the wireless network to a central processor.

The algorithm determines which source should relay data next by assessing three parameters: a drone’s general weight, or priority (for instance, a drone that is tracking a fast vehicle might have to update more frequently, and therefore would have higher priority over a drone tracking a slower vehicle); a drone’s age of information, or how long it’s been since a drone has sent an update; and a drone’s channel reliability, or likelihood of successfully transmitting data.

By multiplying these three parameters for each drone at any given time, the algorithm can schedule drones to report updates through a wireless network one at a time, without clogging the system, and in a way that provides the freshest data for successfully carrying out a time-sensitive task.
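A minimal sketch of that scheduling rule appears below: each candidate drone is scored by the product of its priority weight, its age of information and its channel reliability, and the drone with the largest product transmits next, sending only its freshest sample. The field names and numbers are illustrative assumptions, not the paper’s exact formulation.

```python
def pick_next_sender(drones):
    """Pick the drone whose (weight x age-of-information x reliability) product
    is largest. Each entry is a dict with illustrative keys."""
    return max(drones, key=lambda d: d["weight"] * d["age_s"] * d["reliability"])

# Toy example with two drones; the one with the largest product gets the slot.
drones = [
    {"id": 1, "weight": 2.0, "age_s": 0.4, "reliability": 0.9},  # high priority, recently updated
    {"id": 2, "weight": 1.0, "age_s": 1.5, "reliability": 0.8},  # lower priority, but very stale
]
sender = pick_next_sender(drones)
print("drone", sender["id"], "transmits next")
# The selected drone then sends only its newest frame ("last in, first out").
```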

The team tested out their algorithm with multiple mobility-tracking drones. They outfitted flying drones with a small camera and a basic Wi-Fi-enabled computer chip, which the drones used to continuously relay images to a central computer rather than relying on a bulky onboard computing system. They programmed the drones to fly over and follow small vehicles moving randomly on the ground.

When the team paired the network with its algorithm, the computer was able to receive the freshest images from the most relevant drones, which it then used to send commands back to the drones to keep them tracking the vehicles.

When the researchers ran experiments with two drones, the method was able to relay data that was two times fresher, which resulted in six times better tracking, compared to when the two drones carried out the same experiment with Wi-Fi alone. When they expanded the system to five drones and five ground vehicles, Wi-Fi alone could not accommodate the heavier data traffic, and the drones quickly lost track of the ground vehicles. With WiSwarm, the network was better equipped and enabled all drones to keep tracking their respective vehicles.

“Ours is the first work to show that age-of-information can work for real robotics applications,” said MIT author Ezra Tal.

In the near future, Tal sees cheap and nimble drones working together and communicating over wireless networks to accomplish tasks such as inspecting buildings, agricultural fields, and wind and solar farms. Farther in the future, he sees the approach being essential for managing data streaming throughout smart cities.

“Imagine self-driving cars come to an intersection that has a sensor that sees something around the corner,” said MIT’s Sertac Karaman. “Which car should get that data first? It’s a problem where timing and freshness of data matters.”

Editor’s Note: This article was republished from MIT News.

OpenAI releases APIs for production use of ChatGPT (March 2, 2023)

OpenAI launches a new developer edition of the ChatGPT API to support high-consumption use cases. | Credit: Adobe Stock

OpenAI announced two new APIs, for its ChatGPT and Whisper models, that will enable high-volume use and improve the functionality of ChatGPT in production applications. For roboticists and robotics application developers, the Whisper API provides programmatic access to speech-to-text functionality.

Developers can now integrate ChatGPT and Whisper models into their apps and products through the OpenAI-supported API. OpenAI has made the open-source Whisper large-v2 model in the API faster and cheaper for developers.

ChatGPT API users should expect model upgrades and dedicated capacity for model control. The company has also improved the API terms of service based on developer input. All of these changes encourage the use of the API in new applications.

A new ChatGPT model called gpt-3.5-turbo was released this week; it is the same model used in the ChatGPT product. API access is priced at $0.002 per 1,000 tokens, which is 10x cheaper than the existing GPT-3.5 models. The company claims that only small changes to existing prompts are needed to use the API.


Finally, OpenAI is offering dedicated instances of ChatGPT for API users who need the performance and availability that a dedicated instance offers. Developers get full control over the instance’s load, the option to enable features such as longer context limits, and the ability to pin the model snapshot. According to the company, dedicated instances can make economic sense for developers running beyond ~450M tokens per day.

With a renewed focus on developers, the company is making several changes to its policies with this release:

  • Data submitted through the API is no longer used for service improvements
  • A default 30-day data retention policy for API users has been implemented
  • Pre-launch review has been removed
  • Developer documentation has been improved

What does it mean for robotics?

Traditionally, GPT models read unstructured text, which the model sees as a series of “tokens.” ChatGPT models, on the other hand, take in a series of messages along with metadata. These messages are rendered into tokens for the model using a new raw format called Chat Markup Language (“ChatML”).

This opens the door to creative new ways that ChatGPT can be used in robotics applications. The system could be leveraged to enhance interactivity between a robot and its end users, especially in interactions with service robots or robotic applications that interface with non-professional users (i.e. the public).

API access to the ChatGPT functionality lets developers filter both the user’s input (to better shape a prompt) and the ChatGPT response. Snapchat, Instacart and Shopify are already implementing API access to ChatGPT, which will help ensure that the API is scalable and hardened for the high volumes of usage these applications will deliver.
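To make the filtering idea concrete, here is a minimal sketch of a chat completion call as it looked in the openai Python package at the time of this release. The system prompt, the whitelist of robot commands and the validation step are hypothetical illustrations, not part of OpenAI’s API.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Hypothetical whitelist of robot commands the model is allowed to emit.
ALLOWED = ["move_forward(meters)", "turn(degrees)", "stop()"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You control a mobile robot. Reply with exactly one of: " + ", ".join(ALLOWED)},
        {"role": "user", "content": "Move ahead about two meters."},
    ],
)

command = response["choices"][0]["message"]["content"].strip()

# Filter the response before it ever reaches the robot, as the article suggests.
if any(command.startswith(fn.split("(")[0]) for fn in ALLOWED):
    print("would execute:", command)
else:
    print("rejected:", command)
```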

The system has already proven useful for generating code that can operate a robot, as Microsoft engineers recently demonstrated.

Researchers create responsive ankle exoskeleton algorithm (March 1, 2023)

Jacqueline Hannan, a PhD student in industrial and operations engineering, demonstrates walking with an ankle exoskeleton in Stirling’s lab. Photo credit: Brenda Ahearn, University of Michigan Engineering

Researchers at the University of Michigan have created a responsive ankle exoskeleton algorithm that uses direct muscle measurement to handle changes in pace and gait. The algorithm could potentially support a user who switches between walking and running with ease. 

The researchers hope that the algorithm will bring us a step closer to ankle exoskeletons that help people extend their endurance. In particular, the algorithm could help researchers develop exoskeletons that automatically adapt to individual users and tasks, eliminating or greatly reducing the need for manual recalibration in between each task. 

“This particular type of ankle exoskeleton can be used to augment people who have limited mobility,” Leia Stirling, U-M associate professor of industrial and operations engineering and robotics and senior author of the study published in the journal PLOS ONE, said.

“That could be an older adult who wouldn’t normally be able to walk to the park with their grandkids. But wearing the system, they now have extra assistance that enables them to do more than they could before.”

Current exoskeletons typically have to be tailored to a single user performing a single task, like walking in a straight line. Changing tasks or users requires a lengthy set of manual readjustments. This new algorithm has demonstrated the ability to handle different walking speeds as well as changes in gait between walking and running. 

What sets this control algorithm apart from ones typically used in exoskeletons is that it directly measures how quickly muscle fibers are expanding and contracting. It uses these measurements to determine the amount of chemical energy the muscle is using while doing work and then compares that measurement with a biological model to determine the best way to assist movement. 
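As a highly simplified sketch of the closed loop described above, the snippet below turns a measured muscle fiber velocity and force into an assistance command. The linear energy proxy and the gain are stand-ins for illustration; the study’s actual muscle-energetics model is more detailed.

```python
def assistance_torque(fiber_velocity_m_s, muscle_force_n, gain=0.5):
    """Illustrative control step: estimate the muscle's mechanical power from
    measured fiber velocity and force, then command assistance proportional to
    that estimate. The proxy model and gain are assumptions for this sketch."""
    muscle_power_w = muscle_force_n * fiber_velocity_m_s   # crude proxy for energy use
    return gain * max(muscle_power_w, 0.0)                 # assist only when the muscle is doing work
```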

Current methods use broader measures of motion to determine how to assist movement, making them less accurate than this method, which measures muscle physiology directly. 

The University of Michigan researchers chose to focus on the ankle because of the key role it plays in mobility. The team found that assisting the muscles in the ankle could have a dramatic impact on our ability to walk further and faster. 

While the team was unable to test on humans because of COVID-19 restrictions, it did use data on existing exoskeleton devices and muscle dynamics to simulate and test the algorithm. During testing, the team made adjustments to make the algorithm more responsive to changes in speed and gait.

The team’s next step will be to test the algorithm on humans, using ultrasound to measure muscle fibers in real time.

The study was funded by the Under Secretary of Defense for Research and Engineering under Air Force Contract No. FA8702-15-D-0001.

Form & Function Robotics Challenge teams announced (February 28, 2023)

MassRobotics announced the teams participating in its inaugural Form & Function Robotics Challenge. The finalists will be showcasing their robots on May 10-11 at the Robotics Summit & Expo in Boston. The winners will be announced on May 11 at 12:30 PM at the event.

The challenge calls for teams to create a robotics or automation project that delivers a compelling form factor specific to its tasks while accomplishing a useful function. The following universities will be participating:

  • Brown University
  • Indiana University Bloomington
  • Kwame Nkrumah Science and Technology
  • MIT
  • Northeastern University
  • Seoul National University
  • Stevens Institute of Technology
  • Tufts University
  • University of Bath
  • University of Calgary
  • University of Southern Denmark
  • Worcester Polytechnic Institute

“We received entries for the Form and Function Challenge from around the world,” said MassRobotics executive director Tom Ryden. “We are excited about the selected teams, their robot concepts and the ways they are planning to incorporate many of our partners’ offerings, from development kits to sensors to software. We look forward to the teams showcasing their robotic solutions at the Robotics Summit & Expo in May.”

The challenge encourages cross-collaboration between state-of-the-art software and hardware providers, including MassRobotics strategic partners Onshape, Lattice Semiconductor, Nano Dimension, Danfoss, FESTO, Novanta, and Analog Devices. The challenge requires teams to use the offerings from a minimum of two of the seven partners. Examples of the offerings available to participants include:

  • Lattice Semiconductor: FPGA technology with their solution stack for a machine vision camera
  • Onshape: cloud-native CAD platform
  • Nano Dimension: 3D printed circuit boards and design review
  • FESTO: vacuum gripper kit and electrical actuators
  • Novanta: drives, inductive position encoders, RFID
  • Danfoss: remote control and PLUS+1 controllers

Participating teams will be provided technical support in the form of software, hardware, expertise and regular check-ins with the challenge’s sponsoring partners. Teams have until May 5, 2023 to develop and complete their projects and will have the opportunity to present their work at the Robotics Summit & Expo.

The challenge offers a grand prize of $25,000, along with $5,000 prizes for second and third place and a $5,000 Audience Choice award.

The Robotics Summit & Expo is the premier event for commercial robotics developers. There will be nearly 70 industry-leading speakers sharing their development expertise on stage during the conference, with 150-plus exhibitors on the showfloor showcasing their latest enabling technologies, products and services that help develop commercial robots. There also will be a career fair, networking opportunities and more. Register for full conference passes by March 9 to save $300. Expo-only passes are just $75. Academic discounts are available and academic full conference rates are just $295.

How ChatGPT can control robots (February 22, 2023)

Microsoft researchers controlled this robotic arm using ChatGPT. | Credit: Microsoft

By now, you’ve likely heard of ChatGPT, OpenAI’s language model that can generate somewhat coherent responses to a variety of prompts and questions. It’s primarily being used to generate text, translate information, make calculations and explain topics you’re looking to learn about.

Researchers at Microsoft, which has invested billions into OpenAI and recently integrated ChatGPT into its Bing search engine, extended the capabilities of ChatGPT to control a robotic arm and aerial drone. Earlier this week, Microsoft released a technical paper that describes a series of design principles that can be used to guide language models toward solving robotics tasks.

“It turns out that ChatGPT can do a lot by itself, but it still needs some help,” Microsoft wrote about its ability to program robots.

Prompting LLMs for robotics control poses several challenges, Microsoft said, such as providing a complete and accurate description of the problem, identifying the right set of allowable function calls and APIs, and biasing the answer structure with special arguments. To make effective use of ChatGPT for robotics applications, the researchers constructed a pipeline composed of the following steps:

  1. First, they defined a high-level robot function library. This library can be specific to the form factor or scenario of interest and should map to actual implementations on the robot platform while being named descriptively enough for ChatGPT to follow.
  2. Next, they built a prompt for ChatGPT that described the objective while also identifying the set of allowed high-level functions from the library. The prompt can also contain information about constraints, or how ChatGPT should structure its responses. (A minimal sketch of these first two steps appears after this list.)
  3. The user stayed in the loop to evaluate the code output by ChatGPT, either through direct analysis or through simulation, and provided feedback to ChatGPT on the quality and safety of the output code.
  4. After iterating on the ChatGPT-generated implementations, the final code can be deployed onto the robot.
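The sketch below illustrates steps 1 and 2 under stated assumptions: the function names, their docstrings and the prompt wording are invented for illustration and are not Microsoft’s actual library or prompt.

```python
# Step 1: a hypothetical high-level function library exposed to the model.
FUNCTION_LIBRARY = {
    "fly_to(x, y, z)": "Move the drone to the given position in meters.",
    "take_image()": "Capture a photo with the onboard camera.",
    "land()": "Land the drone at its current position.",
}

def build_prompt(objective: str) -> str:
    """Step 2: describe the objective, list the allowed functions, and
    constrain how the model should format its answer."""
    function_list = "\n".join(f"- {sig}: {doc}" for sig, doc in FUNCTION_LIBRARY.items())
    return (
        "You write Python code to control a drone. Use ONLY these functions:\n"
        f"{function_list}\n\n"
        f"Objective: {objective}\n"
        "Reply with a single Python code block and nothing else."
    )

print(build_prompt("Inspect the top shelf in a zig-zag pattern."))
```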

Examples of ChatGPT controlling robots

In one example, Microsoft researchers used ChatGPT in a manipulation scenario with a robot arm. They used conversational feedback to teach the model how to compose the originally provided APIs into more complex high-level functions that ChatGPT coded by itself. Using a curriculum-based strategy, the model was able to chain these learned skills together logically to perform operations such as stacking blocks.

The model was also able to build the Microsoft logo out of wooden blocks. It was able to recall the Microsoft logo from its internal knowledge base, “draw” the logo as SVG code, and then use the skills learned above to figure out which existing robot actions can compose its physical form.

Researchers also tried to control an aerial drone using ChatGPT. First, they fed ChatGPT a rather long prompt laying out the computer commands it could write to control the drone. After that, the researchers could make requests to instruct ChatGPT to control the robot in various ways. This included asking ChatGPT to use the drone’s camera to identify a drink, such as coconut water or a can of Coca-Cola. It was also able to write code structures for drone navigation based solely on the prompt’s base APIs, according to the researchers.

“ChatGPT asked clarification questions when the user’s instructions were ambiguous and wrote complex code structures for the drone such as a zig-zag pattern to visually inspect shelves,” the team said.

Microsoft said it also applied this approach to a simulated domain, using the Microsoft AirSim simulator. “We explored the idea of a potentially non-technical user directing the model to control a drone and execute an industrial inspection scenario. We observe from the following excerpt that ChatGPT is able to effectively parse intent and geometrical cues from user input and control the drone accurately.”

Key limitation

The researchers did admit this approach has a major limitation: ChatGPT can only write code for the robot based on the initial prompt the human gives it. A human engineer has to thoroughly explain to ChatGPT how the robot’s application programming interface works; otherwise, it will struggle to generate applicable code.

“We emphasize that these tools should not be given full control of the robotics pipeline, especially for safety-critical applications. Given the propensity of LLMs to eventually generate incorrect responses, it is fairly important to ensure solution quality and safety of the code with human supervision before executing it on the robot. We expect several research works to follow with the proper methodologies to properly design, build and create testing, validation and verification pipelines for LLM operating in the robotics space.

“Most of the examples we presented in this work demonstrated open perception-action loops where ChatGPT generated code to solve a task, with no feedback provided to the model afterwards. Given the importance of closed-loop controls in perception-action loops, we expect much of the future research in this space to explore how to properly use ChatGPT’s abilities to receive task feedback in the form of textual or special-purpose modalities.”

Microsoft said its goal with this research is to see if ChatGPT can think beyond text and reason about the physical world to help with robotics tasks.

“We want to help people interact with robots more easily, without needing to learn complex programming languages or details about robotic systems. The key challenge here is teaching ChatGPT how to solve problems considering the laws of physics, the context of the operating environment, and how the robot’s physical actions can change the state of the world.”

Soft robotic wearable restores arm function for people with ALS (February 6, 2023)

This soft robotic wearable is capable of significantly assisting upper arm and shoulder movement in people with ALS. | Credit: Walsh Lab, Harvard SEAS

Some 30,000 people in the U.S. are affected by amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease, a neurodegenerative condition that damages cells in the brain and spinal cord necessary for movement.

Now, a team of researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Massachusetts General Hospital (MGH) has developed a soft robotic wearable capable of significantly assisting upper arm and shoulder movement in people with ALS.

“This study gives us hope that soft robotic wearable technology might help us develop new devices capable of restoring functional limb abilities in people with ALS and other diseases that rob patients of their mobility,” says Conor Walsh, senior author on the Science Translational Medicine paper reporting the team’s work.

Walsh is the Paul A. Maeder Professor of Engineering and Applied Sciences at SEAS where he leads the Harvard Biodesign Lab, and he has presented related topics at earlier Healthcare Robotics Engineering Forum events.

The assistive prototype is soft, fabric-based, and powered cordlessly by a battery.

“This technology is quite simple in its essence,” says Tommaso Proietti, the paper’s first author and a former postdoctoral research fellow in Walsh’s lab, where the wearable was designed and built. “It’s basically a shirt with some inflatable, balloon-like actuators under the armpit. The pressurized balloon helps the wearer combat gravity to move their upper arm and shoulder.”

To assist patients with ALS, the team developed a sensor system that detects residual movement of the arm and calibrates the appropriate pressurization of the balloon actuator to move the person’s arm smoothly and naturally. The researchers recruited ten people living with ALS to evaluate how well the device might extend or restore their movement and quality of life.
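A toy version of that sensing-to-actuation mapping is sketched below: the wearer’s residual motion, relative to a per-user baseline captured during calibration, is scaled into a balloon pressure command. All constants, units and the linear mapping are assumptions for illustration, not the device’s actual controller.

```python
def balloon_pressure_kpa(residual_motion, baseline, gain=40.0, max_pressure_kpa=15.0):
    """Toy mapping from detected residual arm motion to actuator pressure.
    The baseline comes from a short per-user calibration; constants are illustrative."""
    intent = max(residual_motion - baseline, 0.0)        # how much the wearer is trying to move
    return min(gain * intent, max_pressure_kpa)          # assist proportionally, capped for safety
```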

The team found that the soft robotic wearable – after a 30-second calibration process to detect each wearer’s unique level of mobility and strength – improved study participants’ range of motion, reduced muscle fatigue, and increased performance of tasks like holding or reaching for objects. It took participants less than 15 minutes to learn how to use the device.

“These systems are also very safe, intrinsically, because they’re made of fabric and inflatable balloons,” Proietti says. “As opposed to traditional rigid robots, when a soft robot fails it means the balloons simply don’t inflate anymore. But the wearer is at no risk of injury from the robot.”


Walsh says the soft wearable is light on the body, feeling just like clothing to the wearer. “Our vision is that these robots should function like apparel and be comfortable to wear for long periods of time,” he says.

His team is collaborating with neurologist David Lin, director of MGH’s Neurorecovery Clinic, on rehabilitative applications for patients who have suffered a stroke. The team also sees wider applications of the technology including for those with spinal cord injuries or muscular dystrophy.

“As we work to develop new disease-modifying treatments that will prolong life expectancy, it is imperative to also develop tools that can improve patients’ independence with everyday activities,” says Sabrina Paganoni, one of the paper’s co-authors, who is a physician-scientist at MGH’s Healey & AMG Center for ALS and associate professor at Spaulding Rehabilitation Hospital/Harvard Medical School.

The current prototype developed for ALS was only capable of functioning on study participants who still had some residual movements in their shoulder area. ALS, however, typically progresses rapidly within two to five years, rendering patients unable to move – and eventually unable to speak or swallow. In partnership with MGH neurologist Leigh Hochberg, principal investigator of the BrainGate Neural Interface System, the team is exploring potential versions of assistive wearables whose movements could be controlled by signals in the brain. Such a device, they hope, might someday aid movement in patients who no longer have any residual muscle activity.

Balloon actuators attached to the wearable move the person’s arm smoothly and naturally. | Credit: Walsh Lab, Harvard SEAS

Feedback from the ALS study participants was inspiring, moving, and motivating, Proietti says.

“Looking into people’s eyes as they performed tasks and experienced movement using the wearable, hearing their feedback that they were overjoyed to suddenly be moving their arm in ways they hadn’t been able to in years, it was a very bittersweet feeling.”

The team is eager for this technology to start improving people’s lives, but they caution that they are still in the research phase, several years away from introducing a commercial product.

“Soft robotic wearables are an important advancement on the path to truly restored function for people with ALS. We are grateful to all people living with ALS who participated in this study: it’s only through their generous efforts that we can make progress and develop new technologies,” Paganoni says.

Harvard’s Office of Technology Development has protected the intellectual property arising from this study and is exploring commercialization opportunities.

The work was enabled by the Cullen Education and Research Fund (CERF) Medical Engineering Prize for ALS Research, awarded to team members in 2022.

Additional authors include Ciaran O’Neill, Lucas Gerez, Tazzy Cole, Sarah Mendelowitz, Kristin Nuckols, and Cameron Hohimer.

Editor’s Note: This article was republished from Harvard University.

How fish sensory organs could improve underwater robots’ navigation skills (January 31, 2023)

Two yellow blaze African cichlid fish, the ones at the center of the University of Bristol team’s research for underwater robots. | Source: University of Bristol

A research team led by the University of Bristol is studying fish sensory organs to better understand the cues that drive collective behavior. The researchers think these same cues could be used in swarms of underwater robots.

The team’s research focuses on the lateral line sensing organ of African cichlid fish, although the organ is found in most fish species. It helps fish sense and interpret the water pressure around them, and it is sensitive enough to detect external influences such as neighboring fish, changes in water flow, nearby predators and obstacles.

The lateral line system is distributed across a fish’s head, trunk and tail. It is made up of mechanoreceptors, or lateral line sensory units called neuromasts, that sit either in channels under the skin or on the skin’s surface.

“We were attempting to find out if the different areas of the lateral line – the lateral line on the head versus the lateral line on the body, or the different types of lateral line sensory units such as those on the skin, versus those under it, play different roles in how the fish is able to sense its environment through environmental pressure readings,” Elliott Scott, lead author on the paper and a member of the University of Bristol’s Department of Engineering Mathematics, said in a release. “We did this in a novel way, by using hybrid fish, that allowed for the natural generation of variation.”


The researchers found that the lateral line system around a fish’s head has the most influence on how well fish are able to swim in a group or a shoal. Additionally, when many neuromasts are found under the skin, fish tend to swim closer together. Many neuromasts found on the skin mean the fish will likely swim further apart. 

The researchers then turned to simulation to demonstrate that the mechanisms behind the lateral line’s function apply both at small scales, such as groups of fish, and at larger scales. These mechanisms could be mimicked using a type of easily manufactured pressure sensor for underwater robots. The sensor would help these robots navigate dark or murky environments that traditional sensing systems struggle with.

“These findings provide a better understanding of how the lateral line informs shoaling behavior in fish, while also contributing a novel design of inexpensive pressure sensor that could be useful on underwater robots that have to navigate in dark or murky environments,” Elliott said.

The University of Bristol team plans to further develop this sensor and eventually integrate it into a robotic platform to demonstrate its effectiveness.

The research was funded by the Engineering and Physical Science Research Council (EPSRC), Biotechnology and Biological Sciences Research Council (BBSRC) and the Human Frontier Science Program (HFSP). 

Watch Boston Dynamics’ Atlas humanoid work at a ‘construction site’ (January 18, 2023)

Boston Dynamics never disappoints when it releases a video showing new capabilities for its robots. And it just released a video, “Atlas Gets a Grip,” in which the humanoid performs a slew of new moves at a simulated construction site.

A “construction worker” atop a scaffold has conveniently left some tools down on the ground. Instead of the worker hopping down to get the tools himself, Atlas brings them to him. And this is where the magic happens.

Atlas, using a claw gripper, picks up and manipulates a wooden plank to create a bridge for itself onto the scaffold. It then picks up a toolbag, runs onto the scaffold, spins around and throws the toolbag up to the construction worker. Atlas then pushes a wooden box off the scaffold and flips and twists its way to the ground.

You can watch the video atop this page. Boston Dynamics said the new capabilities represent a natural progression of the humanoid robot’s skillset, particularly in areas of perception, manipulation and autonomy. Atlas’ ability to pick up and move objects of different sizes, materials, and weights while staying balanced is enabled by improved locomotion and sensing capabilities.

For this video, Boston Dynamics installed utility “claw” grippers with one fixed finger and one moving finger. Boston Dynamics said this gripper debuted during its Super Bowl commercial when Atlas lifted a keg over its head. These simple grippers are designed for heavy lifting tasks.

According to Boston Dynamics, some of the other new capabilities include:

  • Improved control systems that let Atlas perform a 180-degree jump while holding the wooden plank.
  • Performing a spinning jump while throwing the tool bag. To accomplish this task, Boston Dynamics extended the model predictive controller (MPC) to consider the coupled motion of the robot and the object together.
  • Pushing the wooden box from the platform, which meant Atlas needed to generate enough force to cause the box to fall without sending itself off the platform.
  • Atlas’ concluding move, an inverted 540-degree, multi-axis flip, adds asymmetry to the robot’s movement, making it a much more difficult skill than previously performed parkour.

“We’re layering on new capabilities,” said Ben Stephens, Atlas controls lead, Boston Dynamics. “Parkour and dancing were interesting examples of pretty extreme locomotion, and now we’re trying to build upon that research to also do meaningful manipulation. It’s important to us that the robot can perform these tasks with a certain amount of human speed. People are very good at these tasks, so that has required some pretty big upgrades to the control software.”

Boston Dynamics released a must-watch video (below) that takes you behind the scenes of how this new routine was developed.

In a blog, Boston Dynamics explained some of the more complex sequences in the new routine. Stephens said Atlas manipulating the large wooden plank was especially challenging. Instead of turning around cautiously, Atlas performed a 180-degree jump while holding the plank. Stephens said this meant Atlas’ control system needed to account for the plank’s momentum to avoid toppling over.

He also said pushing the wooden box from the platform is a deceptively complex task. Atlas needed to generate enough force to cause the box to fall, leaning its weight into the shove without sending its own body off the platform.

Stephens also said the flip at the end of the routine is much more difficult than previous acrobatics. The twist adds asymmetry that doesn’t exist in a regular backflip. Not only is the math more complicated, but in trial runs, Atlas kept getting tangled in its own limbs as it tucked its arms and legs.

“We’re using all of the strength available in almost every single joint on the robot,” said Robin Deits, a software engineer at Boston Dynamics. “That trick is right at the limit of what the robot can do.”

Stephens said humanoids that can routinely tackle dirty and dangerous jobs in the real world are a “long way off.” So it appears Atlas will remain a research platform for the foreseeable future.

“Manipulation is a broad category, and we still have a lot of work to do,” he said. “But this gives a sneak peek at where the field is going. This is the future of robotics.”

How human language accelerated robotic learning (January 11, 2023)

The post How human language accelerated robotic learning appeared first on The Robot Report.

Researchers found human language descriptions of tools accelerated the learning of simulated robotic arms. | Credit: Princeton University

Exploring a new way to teach robots, Princeton researchers have found that human-language descriptions of tools can accelerate the learning of a simulated robotic arm lifting and using a variety of tools.

The results build on evidence that providing richer information during artificial intelligence (AI) training can make autonomous robots more adaptive to new situations, improving their safety and effectiveness.

Adding descriptions of a tool’s form and function to the training process for the robot improved the robot’s ability to manipulate newly encountered tools that were not in the original training set. A team of mechanical engineers and computer scientists presented the new method, Accelerated Learning of Tool Manipulation with LAnguage, or ATLA, at the Conference on Robot Learning.

Robotic arms have great potential to help with repetitive or challenging tasks, but training robots to manipulate tools effectively is difficult: Tools have a wide variety of shapes, and a robot’s dexterity and vision are no match for a human’s.

“Extra information in the form of language can help a robot learn to use the tools more quickly,” said study coauthor Anirudha Majumdar, an assistant professor of mechanical and aerospace engineering at Princeton who leads the Intelligent Robot Motion Lab.

The team obtained tool descriptions by querying GPT-3, a large language model released by OpenAI in 2020 that uses a form of AI called deep learning to generate text in response to a prompt. After experimenting with various prompts, they settled on using “Describe the [feature] of [tool] in a detailed and scientific response,” where the feature was the shape or purpose of the tool.
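
The paper’s exact pipeline is not reproduced here, but a minimal sketch of how such descriptions could be collected with that prompt template might look like the following. The model name, generation settings and placeholder API key are assumptions, and the legacy Completion endpoint is used because it matches the GPT-3-era API.

```python
# Hedged sketch: query GPT-3 for tool descriptions using the prompt
# template from the article. Model choice and settings are assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

TOOLS = ["axe", "crowbar", "squeegee"]
FEATURES = ["shape", "purpose"]

def describe(tool: str, feature: str) -> str:
    prompt = f"Describe the {feature} of {tool} in a detailed and scientific response"
    resp = openai.Completion.create(
        model="text-davinci-002",   # assumed model choice
        prompt=prompt,
        max_tokens=150,
        temperature=0.0,            # deterministic descriptions for training
    )
    return resp["choices"][0]["text"].strip()

descriptions = {(t, f): describe(t, f) for t in TOOLS for f in FEATURES}
```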

“Because these language models have been trained on the internet, in some sense you can think of this as a different way of retrieving that information,” more efficiently and comprehensively than using crowdsourcing or scraping specific websites for tool descriptions, said Karthik Narasimhan, an assistant professor of computer science and coauthor of the study. Narasimhan is a lead faculty member in Princeton’s natural language processing (NLP) group, and contributed to the original GPT language model as a visiting research scientist at OpenAI.

This work is the first collaboration between Narasimhan’s and Majumdar’s research groups. Majumdar focuses on developing AI-based policies to help robots – including flying and walking robots – generalize their functions to new settings, and he was curious about the potential of recent “massive progress in natural language processing” to benefit robot learning, he said.

For their simulated robot learning experiments, the team selected a training set of 27 tools, ranging from an axe to a squeegee. They gave the robotic arm four different tasks: push the tool, lift the tool, use it to sweep a cylinder along a table, or hammer a peg into a hole. The researchers developed a suite of policies using machine learning training approaches with and without language information, and then compared the policies’ performance on a separate test set of nine tools with paired descriptions.

This approach is known as meta-learning, since the robot improves its ability to learn with each successive task. It’s not only learning to use each tool, but also “trying to learn to understand the descriptions of each of these hundred different tools, so when it sees the 101st tool it’s faster in learning to use the new tool,” said Narasimhan. “We’re doing two things: We’re teaching the robot how to use the tools, but we’re also teaching it English.”
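
As a rough illustration of how language can enter a policy, the sketch below embeds a tool description, concatenates it with the robot’s observation and maps the result to an action; a new tool then needs only a few adaptation steps because its description already carries useful information. This is a generic language-conditioned policy, not the authors’ exact ATLA architecture, and the dimensions are placeholders.

```python
# Hedged sketch of a language-conditioned policy (illustrative only).
import torch
import torch.nn as nn

OBS_DIM, LANG_DIM, ACT_DIM = 32, 768, 7   # assumed dimensions

class LanguageConditionedPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + LANG_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, ACT_DIM),
        )

    def forward(self, obs, lang_embedding):
        # The tool description enters simply by concatenation with the state.
        return self.net(torch.cat([obs, lang_embedding], dim=-1))

# Meta-learning flavor: after training on the 27 tools, a new tool only
# needs a few gradient steps because its description is already informative.
policy = LanguageConditionedPolicy()
obs = torch.randn(1, OBS_DIM)    # placeholder observation
lang = torch.randn(1, LANG_DIM)  # placeholder text embedding of a description
action = policy(obs, lang)
```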

The researchers measured the success of the robot in pushing, lifting, sweeping and hammering with the nine test tools, comparing the results achieved with the policies that used language in the machine learning process to those that did not use language information. In most cases, the language information offered significant advantages for the robot’s ability to use new tools.

One task that showed notable differences between the policies was using a crowbar to sweep a cylinder, or bottle, along a table, said Allen Z. Ren, a Ph.D. student in Majumdar’s group and lead author of the research paper.

“With the language training, it learns to grasp at the long end of the crowbar and use the curved surface to better constrain the movement of the bottle,” said Ren. “Without the language, it grasped the crowbar close to the curved surface and it was harder to control.”

The research was supported in part by the Toyota Research Institute (TRI), and is part of a larger TRI-funded project in Majumdar’s research group aimed at improving robots’ ability to function in novel situations that differ from their training environments.

“The broad goal is to get robotic systems – specifically, ones that are trained using machine learning — to generalize to new environments,” said Majumdar. Other TRI-supported work by his group has addressed failure prediction for vision-based robot control, and used an “adversarial environment generation” approach to help robot policies function better in conditions outside their initial training.

Editor’s Note: This article was republished from Princeton University.

Teaching old robots new tricks https://www.therobotreport.com/teaching-old-robots-new-tricks/ https://www.therobotreport.com/teaching-old-robots-new-tricks/#respond Fri, 06 Jan 2023 19:45:37 +0000 https://www.therobotreport.com/?p=564709 The SouthWest Research Institute shows how older robots can deliver value when you systematically design the approach and work within the constraints of your hardware.

The post Teaching old robots new tricks appeared first on The Robot Report.

Robots, and in particular industrial robots, are programmed to perform certain functions. The Robot Operating System (ROS) is a widely used framework that facilitates asynchronous coordination between a robot and other drives and/or devices, and it has become a go-to means of developing advanced capabilities across the robotics sector.

Southwest Research Institute (SwRI) and the ROS-I community often develop applications in ROS 2, the successor to ROS 1. In many cases, particularly where legacy application code is used, bridging back to ROS 1 is still very common and remains one of the challenges in supporting the adoption of ROS for industry. This post does not aim to explain ROS, or the journey of migrating to ROS 2, in detail; as a reference, I invite you to read the blogs by my colleagues and our partners at Open Robotics/Open Source Robotics Foundation.
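
For readers unfamiliar with what a ROS 2 application looks like, a minimal rclpy node is shown below purely as a point of reference; the node and topic names are arbitrary. When part of a stack still runs ROS 1, the ros1_bridge package’s dynamic_bridge can relay topics such as this one between the two ecosystems.

```python
# Minimal ROS 2 (rclpy) publisher, for orientation only. Node and topic
# names are made up for this example.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class StatusPublisher(Node):
    def __init__(self):
        super().__init__("status_publisher")
        self.pub = self.create_publisher(String, "cell_status", 10)
        self.timer = self.create_timer(1.0, self.tick)  # publish once a second

    def tick(self):
        msg = String()
        msg.data = "scan-n-plan cell alive"
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(StatusPublisher())

if __name__ == "__main__":
    main()
```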

Giving an old robot a new purpose

Robots have been manufactured since the 1950s and, logically, newer versions arrive over time with better properties and performance than their ancestors. And this is where the question comes in: how can you give new capability to those older but still functional robots?

This question is becoming more important as the circular economy gains momentum and as it becomes better understood that the carbon footprint of manufacturing a new robot can be offset by reusing a functional one. Each robot has its own capabilities and limitations, and those must be taken into account. Still, the question of “can I bring new life to this old robot?” always comes up, and this exact use case arose recently here at SwRI.

Confirming views of the camera-to-robot calibration. | Credit: ROS Industrial

In the lab, an older Fanuc robot seemed to be a good candidate for a system that could demonstrate basic Scan-N-Plan capabilities in an easy-to-digest way and that would be constantly available for testing and demonstrations. The particular system was a demo unit from a former integration company and included an inverted Fanuc robot manufactured in 2008.

The demo envisioned for this system would be a basic Scan-N-Plan implementation that would locate and execute the cleaning of a mobile phone screen. Along the way, we encountered several obstacles that are described below.

Driver updates

Let’s talk first about drivers. A driver is a software component that lets the operating system and a device communicate with each other. Each robot has its own drivers that allow it to communicate properly with whatever instructs it on how to move. Handling drivers on a robot, however, differs from handling them on a computer, because a computer’s drivers can be updated faster and more easily than a robot’s.

When device manufacturers identify errors, they create a driver update to correct them. On a computer, you are notified when a new update is available; you accept it and the computer starts updating. But in the world of industrial robots, including the Fanuc in the lab here, you need to manually upload the driver and the supporting software options to the robot controller. Once the driver software and options are installed, a fair amount of testing is needed to understand how the changes affected the rest of the system. In certain situations, you may receive a robot with the options required for external system communication already installed; however, it is always advisable to check and confirm functionality.

With the passing of time, an older robot will not communicate as fast as newer versions of the same model, so to obtain the best results you will want to update your communication drivers, if updates are available. The Fanuc robot comes with a controller that lets you operate it manually via a teach pendant held in the user’s hand at all times. It can also be set to automatic mode, in which it executes its instructions after a simple cycle start, but all safety systems need to be functional and in the proper state for the system to operate.

Rapid reporting of the robot’s position is very important for the computer’s software (in this case our ROS application) to know where the robot is and whether it is performing its instructions correctly. This position is commonly known as the robot pose. For robotic arms, the information can be broken down into joint states, and your laptop will probably have an issue with an old robot because, while in auto mode, it reports these joint states more slowly than the ROS-based software on the computer expects. One way to solve this slow reporting is to update the drivers or to add the correct configurations for your program to the robot’s controller, but that is not always possible or feasible.
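
One practical way to see this problem is to measure how often joint states actually arrive. The small rclpy node below subscribes to the joint_states topic and prints an average rate; the topic name and window size are typical defaults and may differ for a given robot driver.

```python
# Measure the publish rate of joint states arriving from the robot driver.
import time
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState

class JointStateRateMonitor(Node):
    def __init__(self, window=50):
        super().__init__("joint_state_rate_monitor")
        self.stamps = []
        self.window = window
        self.create_subscription(JointState, "joint_states", self.callback, 10)

    def callback(self, msg):
        self.stamps.append(time.monotonic())
        if len(self.stamps) >= self.window:
            span = self.stamps[-1] - self.stamps[0]
            rate = (len(self.stamps) - 1) / span if span > 0 else float("inf")
            self.get_logger().info(f"joint_states arriving at ~{rate:.1f} Hz")
            self.stamps = self.stamps[-1:]  # start a new measurement window

def main():
    rclpy.init()
    rclpy.spin(JointStateRateMonitor())

if __name__ == "__main__":
    main()
```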

Updated location of the RGB-D camera in the Fanuc cell. | Credit: ROS-Industrial

Another way to make the robot move as expected is to calibrate the robot with an RGB-D camera. To accomplish this, you must place the robot in a strategic position so that most of the robot is visible to the camera. Then view the camera’s projection and compare it to the URDF, the file that represents the model of the robot in simulation. With both representations visible, in RViz for example, you can adjust the origin of the camera_link until the projection is aligned with the URDF.
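
Once you have found a camera_link origin that makes the projection line up, one way to lock it in is to publish it as a static transform. The sketch below uses tf2_ros in Python; the frame names and numbers are placeholders for whatever values make the alignment work in your cell.

```python
# Publish the hand-tuned camera_link origin as a static transform so the
# camera's point cloud lines up with the URDF in RViz. Frame names and
# offsets are placeholders.
import rclpy
from geometry_msgs.msg import TransformStamped
from tf2_ros import StaticTransformBroadcaster

rclpy.init()
node = rclpy.create_node("camera_extrinsics")
broadcaster = StaticTransformBroadcaster(node)

t = TransformStamped()
t.header.stamp = node.get_clock().now().to_msg()
t.header.frame_id = "base_link"       # robot/world frame from the URDF
t.child_frame_id = "camera_link"      # nudge these numbers until aligned
t.transform.translation.x = 1.8       # placeholder values [m]
t.transform.translation.z = 2.1
t.transform.rotation.w = 1.0          # identity orientation to start

broadcaster.sendTransform(t)
rclpy.spin(node)                      # keep the static transform latched
```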

For the Scan-N-Plan application, the RGB-D camera was originally mounted on the robot’s end effector. When we encountered the joint-state delay, the camera was moved to a strategic position on the roof of the robot’s enclosure, where it could view the base and the Fanuc robot for calibration to the simulation model, as can be seen in the photos. In addition, we set the robot to manual mode, in which the user held the controller and told the robot to start executing the set of instructions produced by the ROS-based Scan-N-Plan program.

Where we landed and what I learned

While not as easy as a project on “This Old House,” you can teach an old robot new tricks. It is very important to know the control platform of your robot. It may be that a problem is not with your code but with the robot itself, so it is always good to make sure the robot, associated controller and software work well and then seek alternatives to enable that new functionality within the constraints of your available hardware.

Though the path to a solution is not always efficient, older robots can deliver value when you systematically design the approach and work within the constraints of your hardware, taking advantage of the tools available, in particular those in the ROS ecosystem.

About the Author

Bryan Marquez was an engineering intern in the robotics department at the Southwest Research Institute.

WPI launches Autonomous Vehicle Mobility Institute https://www.therobotreport.com/wpi-launches-autonomous-vehicle-mobility-institute/ https://www.therobotreport.com/wpi-launches-autonomous-vehicle-mobility-institute/#respond Thu, 22 Dec 2022 21:40:51 +0000 https://www.therobotreport.com/?p=564623 Vladimir Vantsevich and Lee Moradi, two professors at WPI, have established an Autonomous Vehicle Mobility Institute (AVMI).

The post WPI launches Autonomous Vehicle Mobility Institute appeared first on The Robot Report.

From left to right, researchers Vladimir Vantsevich, Huashuai Fan and Lee Moradi. | Source: WPI

Vladimir Vantsevich and Lee Moradi, two professors at Worcester Polytechnic Institute’s (WPI’s) Department of Mechanical and Materials Engineering, have established an Autonomous Vehicle Mobility Institute (AVMI) at WPI.

The institute aims to expand the university’s interdisciplinary research into autonomous vehicle technologies as well as to boost educational opportunities for students. AVMI will focus on developing technology for off-road autonomous vehicles that travel across rough terrains. This could mean anything from farmland to battlefields to other planets. 

“Much of the current research into autonomous vehicles focuses on cars that travel on roads, but we focus on off-road vehicles, from small robotic vehicles to full-scale vehicles, both manned and unmanned, with as many as 8, 12, or 16 wheels that are driven by electric motors or mechanical drivetrain systems with controls,” Vantsevich said. “The technological challenge for these off-road vehicles is making them intelligent enough to sense and understand the terrain under the wheel to supply in real time the correct amount of power to each wheel and thus improve the vehicle’s terrain mobility, maneuverability, and energy efficiency. We believe that WPI is an excellent place to engage students, other faculty members, and industry partners in this work.”
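
To make the idea in that quote concrete, here is a toy allocation rule: split the demanded drive force across the wheels in proportion to each wheel’s estimated traction (normal load times estimated friction), so wheels on soft ground are asked for less. The numbers and the rule itself are illustrative assumptions, not AVMI’s algorithms.

```python
# Toy per-wheel torque allocation based on estimated traction. All values
# are assumptions used only to illustrate the concept.
WHEEL_RADIUS = 0.4  # m (assumed)

def allocate_torque(total_drive_force, normal_loads, friction_estimates):
    """Split a demanded drive force across wheels, in proportion to traction."""
    limits = [n * mu for n, mu in zip(normal_loads, friction_estimates)]
    total_limit = sum(limits)
    forces = [total_drive_force * lim / total_limit for lim in limits]
    # Convert each wheel's share of drive force to motor torque at the wheel.
    return [f * WHEEL_RADIUS for f in forces]

# Example: an 8-wheel vehicle whose left-side wheels sit on softer soil.
loads = [3000.0] * 8                              # N per wheel (assumed)
mu = [0.3, 0.3, 0.3, 0.3, 0.7, 0.7, 0.7, 0.7]     # estimated friction per wheel
print([round(t, 1) for t in allocate_torque(8000.0, loads, mu)])
```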

Researchers at WPI already have several ongoing projects related to autonomous vehicle technology. These include models that sift through large amounts of sensor data from autonomous systems and software that will enable groups of lunar robots to collaborate while exploring the moon. 

“A significant portion of vehicles on and off roads are expected to be autonomous in the coming decades,” Wole Soboyejo, interim president of WPI, said. “WPI researchers across departments are already doing groundbreaking work in this field, and Vladimir and Lee will allow WPI to transform the scale of our innovations with their expertise and their ability to bring together collaborators with complementary expertise. This will lead to several new opportunities for our students and prepare them for leadership positions in a field that will define the cutting edge of transportation and space exploration.”

AVMI is funded by the U.S. Army, NASA, the U.S. Department of Energy and industry partners in both the U.S. and Western Europe. 

“I’m very excited to join the faculty of WPI to continue working on autonomous off-road vehicles that could be used in agriculture, construction, the military, and especially planetary exploration,” Moradi said. “As humans continue to explore space, developing autonomous vehicles that can function on other planets under harsh conditions will be of the utmost importance.”

Vantsevich has experience in research and engineering on mechanical and intelligent mechatronic multi-physics systems, with applications to vehicle system modeling and simulation. Moradi spent over 18 years working in industry after receiving his BS in engineering and his MS and Ph.D. in civil engineering from the University of Alabama at Birmingham (UAB). Before joining WPI in early 2022, both Vantsevich and Moradi worked together as faculty members at UAB. 
