Human Robot Interaction / Haptics Archives - The Robot Report
https://www.therobotreport.com/category/design-development/haptics/

Why roboticists should prioritize human factors (Feb. 2, 2023)
https://www.therobotreport.com/why-roboticists-should-prioritize-human-factors/

Draper is a nonprofit engineering company that helps private and public entities better design robotic systems. | Source: Draper

Human systems engineering aims to combine engineering and psychology to create systems that are designed to work with humans’ capabilities and limitations. Interest in the subject has grown among government agencies, like the FDA, the FAA and NASA, as well as in private sectors like cybersecurity and defense. 

More and more, we’re seeing robots deployed in real-world situations that have to work alongside or directly with people. In manufacturing and warehouse settings, it’s common to see collaborative robots (cobots) and autonomous mobile robots (AMRs) work alongside humans with no fencing or restrictions to divide them. 

Dr. Kelly Hale, of Draper, a nonprofit engineering innovation company, has seen that too often human factors principles are an afterthought in the robotics development process. She gave some insight into things roboticists should keep in mind to make robots that can successfully work with humans. 

Specifically, Hale outlined three overarching ideas that roboticists should keep in mind: start with the end goal in mind, consider how human and robot strengths and limitations can complement each other, and keep communication as minimal and efficient as possible.


Start with an end goal in mind

It’s important that human factors are considered at every stage of the development process, not just at the end when you’re beginning to put a finished system into the world, according to Dr. Hale. 

“There’s not as many tweaks and changes that can be made [at the end of the process],” Dr. Hale said. “Whereas if we were brought in earlier, some small design changes probably would have made that interface even more useful.” 

Once the hardware capabilities of a system are set, Dr. Hale’s team has to work around those parameters. In the early design phase, researchers should consider not only how a system functions but where and how a human comes in. 

“I like to start with the end in mind,” Dr. Hale said. “And really, that’s the operational impact of whatever I’m designing, whether it’s an operational system, whether it’s a training system, whatever it is. I think that’s a key notion of the human-centered system, really saying, okay, at the end of the day, how do I want to provide value to the user through this increased capability?”

Working with human limitations and robot limitations

“From my perspective, human systems engineering is really about combining humans and technology in the best way so that the overall system can be more capable than the parts,” Dr. Hale said. “So more useful than a human by themselves or a machine or a system by themselves.”

There are many questions roboticists should ask themselves early in the process of building their systems. Roboticists should have an understanding of human capabilities and limitations and think about whether they’re being effectively considered in the system’s design, according to Dr. Hale. They should also consider human physical and cognitive capabilities, as there’s only so much data a human can handle at once. 

Knowing human limitations will help roboticists build systems that fill in those gaps; knowing human strengths lets them build systems that make the most of what humans do well.

Another hurdle when building systems that work with humans is earning the trust of the people who work alongside them. It’s important for people working alongside robots to understand what the robot can do, and to trust that it will do it consistently.

“Part of it is building that situational awareness and an understanding from the human’s perspective of the system and what its capabilities are,” Dr. Hale said. “To have trust, you want to make sure that what I believe the system is capable of matches the automation capability.” 

For Dr. Hale, it’s about pushing humans and robotic systems toward learning from each other and having the ability to grow together.

For example, while driving, there are many things humans can do better than autonomous vehicles. Humans have a better understanding of the complexity of road rules, and can better read cues from other drivers. At the same time, there are many things autonomous vehicles do better than humans. With advanced sensors and vision, they have fewer blind spots and can see things from farther away than humans can.

In this case, the autonomous system can learn from human drivers as they’re driving, taking note of how they respond to tricky situations. 

“A lot of it is having that shared experience and having the understanding of the baseline of what the system’s capable of, but then having that learning opportunity with this system over time to really kind of push the boundaries.”

Making systems that communicate effectively with humans

People can usually tell when a system is not optimized for their use. The manner and frequency with which the technology interacts with them can be a dead giveaway.

“What you’ll find with some of the systems that were less ideally designed, you start to get notified for everything,” Dr. Hale said. 

Dr. Hale compared these systems to Clippy, the animated paperclip that used to show up in Microsoft Word. Clippy was infamous for butting in too often to tell users things they already knew. A robotic system that interrupts people too often while they’re working, with information that isn’t important, results in a poor user experience.

“Even with those systems that have a lot of user experience and human factors considered, there are still those touch points and those endpoints that make it tricky. And to me, it’s a lot of those ‘false alarms’, where you’re getting notified when you don’t necessarily want to be,” Dr. Hale said. 

Dr. Hale also advises that roboticists should consider access and maintenance when designing robots to prevent downtime. 

With these things in mind, Hale said the robotic development process can be greatly shortened, resulting in a robot that not only works better for the people that need to work with it, but can also be quickly deployed in many environments. 

University of Texas to study how people interact with robots in public (Nov. 21, 2022)
https://www.therobotreport.com/university-of-texas-to-study-how-people-interact-with-robots-in-public/


While the robots roam UT’s campus, those monitoring the quadrupeds will wear brain sensors. | Source: University of Texas

Researchers at the University of Texas (UT), Austin recently received a $3.6 million grant from the National Science Foundation to do a five-year study focusing on how humans interact with robots in public. The research will involve deploying a fleet of quadrupeds on UT’s campus, with the first deployments starting in early 2023. 

The grant will help the interdisciplinary team of researchers set up a robot delivery network on campus using both Boston Dynamics‘ and Unitree Robotics‘ quadruped robots. 

Once the robots are up and running, members of the UT Austin community will be able to order free supplies, like wipes and hand sanitizer, using a smartphone app. The quadrupeds will deliver the supplies to certain pedestrian zones on campus. While making deliveries, the robots could encounter hundreds of different people, and a wide range of reactions.

At first, the robots will work individually, but after a while, the researchers plan to send them out in pairs. The robots will be monitored by chaperones and remotely by other people. Nanshu Lu, a professor in UT’s Department of Aerospace Engineering and Engineering Mechanics, will design wearable brain sensors for the people monitoring the robots to wear. This will give the researchers insight into the workload and attention that monitoring the robots requires.

The researchers will gain most of their insights, however, from observing and interviewing people who interact with the robots while they’re out in the world. The team is hoping to get a wide range of reactions so that they can better understand the full range of experiences robots can have on campus. 

The team hopes their research will help designers have a better understanding of how to create robots that can co-exist with humans in public, and how and where those robots should move when interacting with humans. 

While students might not see quadrupeds roaming around campus every day, college campuses have been popular testing grounds for mobile robots. Many sidewalk delivery robot companies, including Starship Robotics and Kiwibot, have rolled out delivery services on college campuses. 

This new research will build on Living and Working with Robots, the team’s six-year project that kicked off in September 2021. The team is made up of Elliot Hauser, an assistant professor in UT’s School of Information; Justin Hart, an assistant professor of practice with UT’s College of Natural Sciences; Keri Stephens, a professor in UT’s College of Communication; Joydeep Biswas, an assistant professor of computer science; Junfeng Jiao of UT’s School of Architecture; Maria Esteva of the Texas Advanced Computing Center; and Samantha Shorey, an assistant professor in UT’s College of Communication.

Inside Samantha Johnson’s quest to build robots for deafblind people (Nov. 14, 2022)
https://www.therobotreport.com/inside-samantha-johnsons-quest-to-build-robots-for-deafblind-people/

Welcome to Episode 98 of The Robot Report Podcast, which brings conversations with robotics innovators straight to you. Join us each week for discussions with leading roboticists, innovative robotics companies and other key members of the robotics community.

Samantha Johnson, founder and CEO of Tatum Robotics, joined the podcast this week. The Boston-based startup won the Pitchfire robotics startup competition at RoboBusiness 2022. The company is building an independent communication tool for people with deafblindness who can’t hear or see. Tatum Robotics’ anthropomorphic hand is designed to sign tactile sign languages for people with deafblindness. You can see a couple of photos and a video about Tatum Robotics at the bottom of this page.

Johnson dives into the challenging system design and the opportunities for such technology. She also discusses her passion for helping the deafblind community, educating us about some of the daily challenges they face. The interview with Johnson starts at the 21:44 mark of the episode.

The podcast also recaps some of the week’s top stories, including a new spin on shelf-inventory robots and a merger between two LiDAR companies.

If you would like to be a guest on an upcoming episode of the podcast, or if you have recommendations for future guests or segment ideas, contact Steve Crowe or Mike Oitzman.

For sponsorship opportunities of The Robot Report Podcast, contact Courtney Nagle for more information.

Elbit Systems to study human-robot interaction (Dec. 24, 2021)
https://www.therobotreport.com/elbit-systems-to-study-human-robot-interaction/


Elbit Systems develops robotics for the military and commercially. | Source: Elbit Systems

Oftentimes, autonomous robots operate in human-free environments, like logistics centers or automated production lines. It can be hard for humans to adjust to working around robots, and vice versa. Recently, the Israel Innovation Authority approved a new innovation consortium for Human-Robot Interaction (HRI).

The consortium will be led by Elbit Systems C4I and Cyber. It will bring together robotics companies and academic researchers studying AI, computer science and behavioral science. The goal is to develop an innovative HRI infrastructure.

“Our selection by the Israel Innovation Authority, to lead a Human-Robot Interaction consortium reflects our expertise in the fields of autonomous systems and manned-unmanned teaming,” Yossi Cohen, Elbit Systems C4I and Cyber chief technology officer, said. “We are looking forward to collaborating with additional industry partners that specialise in these fields, that will join us in the consortium.”

The goal for many in the industry is to see humans working alongside robots, where robots are performing dull, dirty and dangerous jobs, while humans focus on tasks that require creativity and critical thinking.

Communication between humans and robots is key for a productive working environment, but that doesn’t come naturally. Robotic systems need to be familiar with the ways that humans communicate, verbally and non-verbally. Robots also need to be aware of any relevant social codes that could change the way humans interact.

“The Israel Innovation Authority is working to close technology gaps in the field of robotics using various tools as well as by promoting knowledge transfer from the defence sector and academia to the wider industry,” Dr. Aviv Zeevi, VP of technological infrastructure at the Israel Innovation Authority, said.

Elbit Systems is an Israel-based international defense and electronics company. It has been working with automation systems and robotics for decades.

6 tips for autonomous cars to safely share roads with cyclists (Dec. 10, 2021)
https://www.therobotreport.com/6-tips-for-autonomous-cars-to-safely-share-roads-with-cyclists/


Cyclists present unique challenges for autonomous vehicles. | Photo Credit: Argo AI

Bicyclists can present unique challenges for human drivers. Their behavior can be hard to predict, and they can change lanes or swerve to avoid obstacles faster than other drivers. In 2020, fatal accidents involving bicyclists rose 5% (846) over 2019, according to NHTSA.

Autonomous vehicles face those same challenges, which is why companies making self-driving systems need to take extra care when considering bicyclists. While autonomous cars have the potential to reduce vehicle accidents and make roads safer, they first need to learn the best way to navigate the road.

Argo AI, a company working on its own self-driving system, collaborated with The League of American Bicyclists to release best practices for creating self-driving systems that operate safely around bicyclists.

“Argo AI and the League of American Bicyclists share a common goal to improve the safety of streets for all road users,” said Ken McLeod, policy director, the League of American Bicyclists. “We appreciate Argo’s proactive approach to researching, developing, and testing for the safety of people outside of vehicles. Roads have gotten significantly less safe for people outside of vehicles in the last decade, and by addressing interactions with bicyclists now, Argo is demonstrating a commitment to the role of automated technology in reversing that deadly trend.”

Here are the suggested six guidelines to follow when developing autonomous driving systems.

1. Cyclists should be a unique object class

For a self-driving car to be able to react to a cyclist on the road, it first needs to understand what cyclists are and how they move. Cyclists don’t behave like anything else on the road, so it’s important that a self-driving system designates them as a core object representation within its perception system.

By labeling a diverse set of bicycle images, a self-driving system can recognize a cyclist from any viewpoint, speed and bicycle position or orientation. Self-driving systems should also be able to recognize different shapes and sizes of bicycles, such as recumbent bikes or electric bikes.
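
To make the idea of a dedicated cyclist class concrete, here is a minimal Python sketch of what such a label taxonomy could look like, with cyclists as their own category and bicycle sub-types attached to each label. The class names and fields are illustrative assumptions, not Argo AI’s actual labeling schema.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class ObjectClass(Enum):
    # Cyclists are a first-class category, not a subtype of pedestrian or vehicle.
    VEHICLE = auto()
    PEDESTRIAN = auto()
    CYCLIST = auto()


class BicycleType(Enum):
    # Bicycle variants the perception system should still recognize as cyclists.
    UPRIGHT = auto()
    RECUMBENT = auto()
    ELECTRIC = auto()
    CARGO = auto()


@dataclass
class LabeledObject:
    object_class: ObjectClass
    heading_deg: float                            # viewpoint/orientation of the object
    speed_mps: float
    bicycle_type: Optional[BicycleType] = None    # populated only for CYCLIST labels


# Example training label: an e-bike rider seen from the side, moving at about 6.5 m/s.
example = LabeledObject(ObjectClass.CYCLIST, heading_deg=90.0, speed_mps=6.5,
                        bicycle_type=BicycleType.ELECTRIC)
print(example)
```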

2. Typical cyclist behavior should be expected

There are certain behaviors that are common for bicyclists that self-driving systems should be able to recognize and anticipate. Lane splitting, yielding at stop signs or walking a bicycle are all behaviors that self-driving systems need to be able to recognize and react appropriately to.

Self-driving systems should use specialized cyclist-specific forecasting models. With these models, when a car encounters a bicyclist, it can predict many potential paths for the cyclist, making it easier for the car to predict and respond to the cyclist’s actions.
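
As a rough illustration of how a planner might consume such multi-hypothesis forecasts, the Python sketch below keeps several weighted candidate paths for a cyclist and sets the vehicle’s speed from the most conservative plausible one. The data structures, probabilities, and thresholds are invented for illustration; this is not Argo AI’s forecasting model.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class CandidatePath:
    label: str                 # e.g. "pass stopped vehicle on the left"
    probability: float         # forecaster's confidence in this hypothesis
    min_gap_to_av_m: float     # closest approach to the AV's planned path


def plan_speed(current_speed_mps: float, hypotheses: List[CandidatePath],
               prob_threshold: float = 0.1) -> float:
    """Slow down based on the most conservative plausible cyclist hypothesis.

    Any hypothesis above `prob_threshold` is treated as something the vehicle
    must be prepared for, so the target speed is set by the smallest predicted gap.
    """
    plausible = [h for h in hypotheses if h.probability >= prob_threshold]
    if not plausible:
        return current_speed_mps
    worst_gap = min(h.min_gap_to_av_m for h in plausible)
    if worst_gap < 2.0:        # cyclist may come very close: yield
        return 0.0
    if worst_gap < 5.0:        # moderate gap: proactively slow down
        return min(current_speed_mps, 4.0)
    return current_speed_mps


hypotheses = [
    CandidatePath("stay in bike lane", 0.55, min_gap_to_av_m=6.0),
    CandidatePath("pass stopped vehicle on the left", 0.40, min_gap_to_av_m=1.5),
    CandidatePath("turn right at intersection", 0.05, min_gap_to_av_m=8.0),
]
print(plan_speed(current_speed_mps=10.0, hypotheses=hypotheses))  # -> 0.0 (yield)
```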

This 2019 video shows how a Waymo autonomous vehicle approaches two cyclists and a vehicle blocking a bike lane. The self-driving system correctly predicts the cyclists will pass the vehicle on the left and slows down to allow them to pass.

3. Cycling infrastructure and local laws should be mapped

Many cities and states have specific laws for cyclists that a self-driving car should be aware of. For example, in some states cyclists are allowed to treat red lights as stop signs. Additionally, mapping for self-driving cars should include any bike lanes. Knowing where bike lanes are will allow a self-driving car to anticipate more bikers in those areas, and keep extra watch for common cyclist behaviors.

4. Drive in a consistent and understandable way

The goal of self-driving cars is to replicate, and improve upon, human driving. This means that self-driving cars should communicate with cyclists and other drivers on the road the same way a human driver would. Self-driving cars should use turn signals wherever appropriate to help others understand their intentions.

Self-driving cars should also maintain a greater following distance with cyclists the way that a human driver would. They should also take extra care when passing cyclists.

5. Prepare for uncertain situations and proactively slow down

It’s inevitable that, sometimes, drivers and cyclists act unpredictably and self-driving cars aren’t sure how to react. When this happens, the car should slow down and create more distance between the car and cyclist, if possible. Self-driving systems should always account for uncertainty.

6. Cyclist scenarios should be tested continuously

Thorough testing is one of the most important things when developing a self-driving system. Testing in both simulation and the real world is crucial to making a system that can operate safely around bicyclists.

Simulation should be used to test real-life scenarios in a virtual world to safely test different scenarios. These scenarios should capture vehicle and cyclist behavior, as well as changes in road structures and visibility. Real-world testing should be used to validate simulations and ensure the technology behaves the same as it did in a simulation.

Movia launches TheraPal robots for cognitive, social development (Dec. 8, 2021)
https://www.therobotreport.com/movia-robotics-therpal-robots-emotional-social-development/


MOVIA Robotics founder and chief scientist Tim Gifford with some of the company’s robots. | Credit: MOVIA Robotics

Movia Robotics this week launched its TheraPal line of digital health aides for autism spectrum disorder and other intellectual or developmental disabilities.

Bristol, Connecticut-based Movia Robotics released its TheraPal Progress Tracker, TheraPal Home and TheraPal Clinical assist aides for use in homes and clinician offices. The robotic aides are designed to be used by parents, therapists and other healthcare professionals for the development and learning of individuals with neurodevelopmental or intellectual challenges, according to the company.

Movia’s robot-assisted intervention is a friendly, digital tool that uses applied behavior analysis and other evidence-based methods with gamification techniques to allow children and older individuals to practice a broad range of life skills and confidence-building activities.

The fully-configurable system has modules for cognitive training, communication training, practice and educational learning. It assists the individuals in understanding and practicing basic social skills like making eye contact, building confidence, engaging in conversation and other intellectual skills like reading comprehension, basic math and auditory processing learning. The robot-assisted intervention sends data to healthcare professionals who can adjust professional therapies as needed.

“By using robots that engage and interact with kids, we are able to get kids with autism to respond more readily,” Timothy Gifford, founder of Movia Robotics, has said about the technology. “Robots seem friendlier, less judgmental than human beings; they seem safer, so the children are able to explore more, develop their confidence and have more control all while learning skills to help them be successful in their daily lives.”

Movia Robotics said that its TheraPal product line assists neuro-diverse children with an individualized treatment plan. Its TheraPal Progress Tracker is a medical device data system that is used as an assist tool for homecare individuals and clinicians.

“We are focused on showing how robotics can improve the lives of individuals with autism and other special needs, and the launch of our TheraPal is the first step in our commitment to FDA digital health certification,” CEO Jean-Pierre Bolat said.

Editor’s Note: This article was republished from sister publication Medical Design & Outsourcing.

Motional open-sources VR environments for autonomous vehicle research (Nov. 30, 2021)
https://www.therobotreport.com/motional-open-sources-vr-scenarios-nureality-autonomous-vehicles/


A screenshot from nuReality, Motional’s VR system to study human-robot interaction. | Photo Credit: Motional

Building off of its nuScenes autonomous driving dataset, Motional today open-sourced its nuReality set of virtual reality (VR) environments. The custom environments are designed to study interactions between autonomous vehicles and pedestrians. Motional said open-sourcing the VR environments will help the research community.

Motional said a key challenge to widespread acceptance and adoption of autonomous vehicles is effective communication between AVs and other road users. For example, how will an autonomous vehicle acknowledge that it sees a cyclist or pedestrian who is trying to cross a street?

To study this scenario and others, Motional created VR environments using CHRLX animation studio. There are 10 VR scenarios modeled after an urban 4-way intersection. There are no clear pedestrian crossing zones, stop signs, stoplights, or other elements to indicate that the car lawfully would have to come to a complete stop. Those scenarios include:

  • A human driver stopping at an intersection
  • An AV stopping at an intersection
  • A human driver not stopping at an intersection
  • An autonomous vehicle not stopping at an intersection
  • An autonomous vehicle using expressive behavior, such as a light bar or sounds, to signal its intentions

Motional uses two vehicle models in the VR environments: a conventional, human-driven vehicle and an autonomous vehicle without a human operator. Both vehicles were modeled after a white 2019 Chrysler Pacifica, but the autonomous vehicle model has some notable differences. It includes both side-mirror and roof-mounted LiDAR sensors and has no visible occupants. The human-driven model includes a male driver who looks straight ahead and remains motionless during the interaction.

In the clip below, the approaching Motional robotaxi uses an LED strip in the front windshield to indicate that the vehicle is stopping.

In this next clip, Motional said the approaching robotaxi’s nose dips to signal that the vehicle is stopping.

And in this next clip, the autonomous vehicle features exaggerated brake and RPM reduction sounds to alert other road users that it’s coming to a stop.

You can view more clips from Motional here. Motional said it adopts some of the learnings from these VR tests for the development of its SAE Level 4 autonomous vehicles.

“By making nuReality open source, we hope these VR files will accelerate research into pedestrian-AV interactions and Expressive Robotics,” said Paul Schmitt, principal engineer for planning, controls and drive-by-wire architecture at Motional. “It’s like the old saying: If you want to go fast, go alone. But, if you want to go far, go together.”

Motional, the autonomous vehicle company that is a $4 billion joint venture between Hyundai and Aptiv, recently said it plans to launch a fully driverless robotaxi service in Las Vegas in 2023. It’s been testing robotaxis in Las Vegas for nearly four years, completing over 100,000 passenger trips, but with a human safety driver inside each car.

In November 2020, Motional received the go-ahead from the state of Nevada to test fully driverless vehicles on public roads. While Nevada won’t require a human safety driver behind the wheel, Motional has still had a safety driver in the passenger seat for extra precautions.


Laura Major, CTO, Motional, recently joined The Robot Report Podcast to discuss the challenges of developing and deploying Motional’s technology, including the service in Las Vegas. She also discussed when the safety drivers could potentially be removed and when Motional’s service will be commercially available to fleet operators. You can listen to the conversation with Laura below, starting at about the 48-minute mark.


Motional has also partnered with Derq to test how autonomous vehicles react when given a broader perspective than they already have. At two intersections in Las Vegas, cameras placed high above the roads are connected to Derq’s AI system. The cameras will transmit data to Motional’s vehicles, providing a different view of some of the toughest intersections they’re navigating.

Motional’s autonomous vehicles use a sensor suite of advanced LiDAR, cameras, and radar that see up to 300 meters away and 360 degrees around the vehicle. But further enhancing the technology could help the vehicles navigate these challenging environments.

Soft exosuit automatically adapts to walking needs (Nov. 29, 2021)
https://www.therobotreport.com/soft-exosuit-automatically-adapts-to-walking-needs/


Researchers used a portable ultrasound system strapped to the calves of participants to image their muscles. | Photo Credit: Harvard Biodesign Lab/Harvard SEAS

The way we walk constantly changes. We change speed and adjust to inclines without even thinking about it. This variability is what makes walking difficult to recreate with exoskeletons. But researchers at the Harvard School of Engineering and Applied Sciences (SEAS) have developed an approach that automatically adjusts a soft exoskeleton based on how the wearer is walking.

Typically, customizing exoskeletons to the way someone walks requires hours of manual or automatic tuning. This process can be challenging for a healthy person, and sometimes nearly impossible for older adults or clinical patients.

SEAS researchers are taking a unique approach to developing their exoskeleton. Instead of focusing on the dynamic movements of the limbs of the wearer, they created a muscle-based assistance strategy.

With this strategy, a portable ultrasound system takes ultrasound measurements of the calf muscles. These measurements estimate the amount of force produced by those muscles, and from them the researchers develop a personalized assistance profile. The exosuit then automatically prescribes the amount of assistance needed for different walking speeds and slopes and applies that assistance force to help the wearer walk.

“We used ultrasound to look under the skin and directly measured what the user’s muscles were doing during several walking tasks,” said Richard Nuckols, a postdoctoral research associate at SEAS and co-first author of the paper. “Our muscles and tendons have compliance which means there is not necessarily a direct mapping between the movement of the limbs and that of the underlying muscles driving their motion.”

What sets this method apart from previous ones is the exoskeleton’s ability to automatically determine the assistance a person needs for different walking speeds and slopes. The exosuit only requires a few seconds of walking to capture the muscle’s profile.
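
As a rough picture of how a muscle-based assistance profile could be computed, the Python sketch below converts a normalized muscle-strain trace (standing in for the ultrasound measurement) into per-sample assistance forces. The scaling rule and numbers are assumptions for illustration, not the SEAS controller.

```python
def estimate_muscle_force(ultrasound_strain: float, max_force_n: float = 1500.0) -> float:
    """Crude stand-in for an ultrasound-based force estimate:
    map a normalized muscle-strain measurement (0..1) to a force in newtons."""
    return max(0.0, min(1.0, ultrasound_strain)) * max_force_n


def assistance_profile(strain_samples, assist_fraction: float = 0.25):
    """Build a personalized assistance profile for one gait cycle.

    Each sample is the normalized calf-muscle strain at a point in the cycle; the
    suit is commanded to supply a fixed fraction of the estimated muscle force."""
    return [assist_fraction * estimate_muscle_force(s) for s in strain_samples]


# A few seconds of walking yields a strain trace over the gait cycle (made-up values).
gait_cycle_strain = [0.05, 0.10, 0.30, 0.60, 0.85, 0.40, 0.10]
profile_n = assistance_profile(gait_cycle_strain)
print([round(f, 1) for f in profile_n])
```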

This exosuit can adjust quickly to real-world conditions, according to the researchers. When the researchers measured metabolic energy expenditure with and without the suit, they found that the suit significantly reduced the amount of energy the wearer used.

The research is a collaboration between the Harvard Biorobotics Lab and the Harvard Biodesign Lab run by Conor J. Walsh, the Paul A. Maeder Professor of Engineering and Applied Sciences at SEAS. SEAS researchers plan to move forward by testing how the system does with constant real-time adjustments.

SEAS’ full research can be found here.

Inflatable robotic hand gives amputees real-time tactile control (Aug. 17, 2021)
https://www.therobotreport.com/inflatable-robotic-hand-amputees-real-time-tactile-control/

The smart hand is soft and elastic, weighs about half a pound, and costs a fraction of comparable prosthetics.

For the more than 5 million people in the world who have undergone an upper-limb amputation, prosthetics have come a long way. Beyond traditional mannequin-like appendages, there is a growing number of commercial neuroprosthetics — highly articulated bionic limbs, engineered to sense a user’s residual muscle signals and robotically mimic their intended motions.

But this high-tech dexterity comes at a price. Neuroprosthetics can cost tens of thousands of dollars and are built around metal skeletons, with electrical motors that can be heavy and rigid.

Now engineers at MIT and Shanghai Jiao Tong University have designed a soft, lightweight, and potentially low-cost neuroprosthetic hand. Amputees who tested the artificial limb performed daily activities, such as zipping a suitcase, pouring a carton of juice, and petting a cat, just as well as — and in some cases better than — those with more rigid neuroprosthetics.

The researchers found the prosthetic, designed with a system for tactile feedback, restored some primitive sensation in a volunteer’s residual limb. The new design is also surprisingly durable, quickly recovering after being struck with a hammer or run over with a car.

The smart hand is soft and elastic, and weighs about half a pound. Its components total around $500 — a fraction of the weight and material cost associated with more rigid smart limbs.

“This is not a product yet, but the performance is already similar or superior to existing neuroprosthetics, which we’re excited about,” said Xuanhe Zhao, professor of mechanical engineering and of civil and environmental engineering at MIT. “There’s huge potential to make this soft prosthetic very low cost, for low-income families who have suffered from amputation.”

Zhao and his colleagues have published their work today in Nature Biomedical Engineering. Co-authors include MIT postdoc Shaoting Lin, along with Guoying Gu, Xiangyang Zhu, and collaborators at Shanghai Jiao Tong University in China.

Big Hero hand

The team’s pliable new design bears an uncanny resemblance to a certain inflatable robot in the animated film “Big Hero 6.” Like the squishy android, the team’s artificial hand is made from soft, stretchy material — in this case, the commercial elastomer EcoFlex. The prosthetic comprises five balloon-like fingers, each embedded with segments of fiber, similar to articulated bones in actual fingers. The bendy digits are connected to a 3-D-printed “palm,” shaped like a human hand.


Rather than controlling each finger using mounted electrical motors, as most neuroprosthetics do, the researchers used a simple pneumatic system to precisely inflate fingers and bend them in specific positions. This system, including a small pump and valves, can be worn at the waist, significantly reducing the prosthetic’s weight.

Lin developed a computer model to relate a finger’s desired position to the corresponding pressure a pump would have to apply to achieve that position. Using this model, the team developed a controller that directs the pneumatic system to inflate the fingers, in positions that mimic five common grasps, including pinching two and three fingers together, making a balled-up fist, and cupping the palm.
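
One way to picture that position-to-pressure model and controller is a simple calibrated mapping fit from a few data points, then commanded per finger for a chosen grasp. The Python sketch below is a hypothetical linear stand-in under assumed calibration values, not the team’s actual model.

```python
import numpy as np


def fit_pressure_model(angles_deg, pressures_kpa):
    """Fit a simple linear map pressure = a*angle + b from calibration data."""
    a, b = np.polyfit(angles_deg, pressures_kpa, deg=1)
    return lambda angle: a * angle + b


# Hypothetical calibration: pressures observed to reach given bend angles.
calib_angles = np.array([0.0, 30.0, 60.0, 90.0])
calib_pressures = np.array([0.0, 18.0, 40.0, 65.0])
pressure_for_angle = fit_pressure_model(calib_angles, calib_pressures)


def command_grasp(finger_angles_deg):
    """Controller step: convert each finger's desired bend angle into a pump pressure (kPa)."""
    return {finger: round(float(pressure_for_angle(angle)), 1)
            for finger, angle in finger_angles_deg.items()}


# A balled-up fist: all fingers bent strongly (angles are illustrative).
print(command_grasp({"thumb": 70, "index": 90, "middle": 90, "ring": 90, "pinky": 85}))
```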

The pneumatic system receives signals from EMG sensors — electromyography sensors that measure electrical signals generated by motor neurons to control muscles. The sensors are fitted at the prosthetic’s opening, where it attaches to a user’s limb. In this arrangement, the sensors can pick up signals from a residual limb, such as when an amputee imagines making a fist.

The team then used an existing algorithm that “decodes” muscle signals and relates them to common grasp types. They used this algorithm to program the controller for their pneumatic system. When an amputee imagines, for instance, holding a wine glass, the sensors pick up the residual muscle signals, which the controller then translates into corresponding pressures. The pump then applies those pressures to inflate each finger and produce the amputee’s intended grasp.
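
Putting the pieces together, the decode-then-actuate loop described here can be sketched as follows: EMG features are mapped to a grasp type, and the grasp’s finger angles are translated into pump pressures. The grasp library, the toy two-channel decoder, and the pressure mapping below are placeholders for illustration, not the published algorithm.

```python
# Hypothetical library of grasp types and the per-finger bend angles they imply (degrees).
GRASP_ANGLES = {
    "rest":   {"thumb": 0,  "index": 0,  "middle": 0,  "ring": 0,  "pinky": 0},
    "pinch":  {"thumb": 55, "index": 60, "middle": 0,  "ring": 0,  "pinky": 0},
    "tripod": {"thumb": 55, "index": 60, "middle": 60, "ring": 0,  "pinky": 0},
    "fist":   {"thumb": 70, "index": 90, "middle": 90, "ring": 90, "pinky": 85},
    "cup":    {"thumb": 30, "index": 40, "middle": 40, "ring": 40, "pinky": 40},
}


def decode_grasp(emg_rms: dict) -> str:
    """Toy 'decoder': choose a grasp from the relative activation of two EMG channels.
    A real decoder would be a trained classifier over many channels and features."""
    flexor, extensor = emg_rms["flexor"], emg_rms["extensor"]
    if flexor < 0.1 and extensor < 0.1:
        return "rest"
    if flexor > 3 * extensor:
        return "fist"
    if flexor > extensor:
        return "tripod"
    return "pinch"


def grasp_to_pressures(grasp: str, pressure_for_angle) -> dict:
    """Map the chosen grasp's finger angles to pump pressures (kPa)."""
    return {f: round(pressure_for_angle(a), 1) for f, a in GRASP_ANGLES[grasp].items()}


grasp = decode_grasp({"flexor": 0.8, "extensor": 0.2})
print(grasp, grasp_to_pressures(grasp, pressure_for_angle=lambda a: 0.7 * a))
```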

Going a step further in their design, the researchers looked to enable tactile feedback — a feature that is not incorporated in most commercial neuroprosthetics. To do this, they stitched to each fingertip a pressure sensor, which when touched or squeezed produces an electrical signal proportional to the sensed pressure. Each sensor is wired to a specific location on an amputee’s residual limb, so the user can “feel” when the prosthetic’s thumb is pressed, for example, versus the forefinger.

Good grip

To test the inflatable hand, the researchers enlisted two volunteers, each with upper-limb amputations. Once outfitted with the neuroprosthetic, the volunteers learned to use it by repeatedly contracting the muscles in their arm while imagining making five common grasps.

After completing this 15-minute training, the volunteers were asked to perform a number of standardized tests to demonstrate manual strength and dexterity. These tasks included stacking checkers, turning pages, writing with a pen, lifting heavy balls, and picking up fragile objects like strawberries and bread. They repeated the same tests using a more rigid, commercially available bionic hand and found that the inflatable prosthetic was as good, or even better, at most tasks, compared to its rigid counterpart.

One volunteer was also able to intuitively use the soft prosthetic in daily activities, for instance to eat food like crackers, cake, and apples, and to handle objects and tools, such as laptops, bottles, hammers, and pliers. This volunteer could also safely manipulate the squishy prosthetic, for instance to shake someone’s hand, touch a flower, and pet a cat.

In a particularly exciting exercise, the researchers blindfolded the volunteer and found he could discern which prosthetic finger they poked and brushed. He was also able to “feel” bottles of different sizes that were placed in the prosthetic hand, and lifted them in response. The team sees these experiments as a promising sign that amputees can regain a form of sensation and real-time control with the inflatable hand.

The team has filed a patent on the design, through MIT, and is working to improve its sensing and range of motion.

“We now have four grasp types. There can be more,” Zhao said. “This design can be improved, with better decoding technology, higher-density myoelectric arrays, and a more compact pump that could be worn on the wrist. We also want to customize the design for mass production, so we can translate soft robotic technology to benefit society.”

Editor’s Note: This article was republished from MIT News.

Sarcos Robotics, T-Mobile partnership enables real-time control over 5G (Aug. 5, 2021)
https://www.therobotreport.com/sarcos-robotics-and-t-mobile-partnership-enables-real-time-control-over-5g/


The Guardian XT robot can now be controlled in real time over a 5G connection. | Image credit: Sarcos

Sarcos Robotics (Sarcos) and T-Mobile (NASDAQ: TMUS)  today announced a collaboration to integrate T-Mobile 5G into the Sarcos Guardian XT highly dexterous mobile industrial robot. The Guardian XT robot is a remote-controlled robotic system designed to help humans safely work in hazardous conditions, performing tasks such as lifting heavy materials or using power tools at significant heights. With T-Mobile 5G integration, the companies aim to improve performance and response time for remote operations, so the robots can perform tasks more quickly and more in tune with their operator’s movements.

The Guardian XT robotic system is an upper-body variant of the award-winning Sarcos Guardian XO full-body, battery-powered industrial exoskeleton. It is platform-agnostic and can be mounted to a variety of mobile bases to access hard-to-reach or elevated areas and applies to many industries, including aerospace, automotive, aviation, construction, defense, industrial manufacturing, maritime, and oil and gas. Both the Guardian XO and the Guardian XT robots are expected to be commercially available by the end of 2022.

T-Mobile 5G to Power Remote Viewing and Teleoperation

The T-Mobile and Sarcos collaboration begins with the integration of 5G to develop a remote viewing system powered by T-Mobile’s high bandwidth, low latency 5G network. This enables workers, supervisors, outside experts, and others, whether they are based locally or remote, to watch tasks being performed by the robot as it is controlled by an operator in the field. The second phase of development is expected to include full T-Mobile 5G wireless network integration, allowing teleoperation of the Guardian XT robot over 5G, giving operators greater flexibility and increasing their safety by enabling them to perform tasks from a distance.

The Guardian XT robot is a remote-controlled robotic system designed to help humans safely work in hazardous conditions. | Image credit: Sarcos

“We are proud to collaborate with T-Mobile and we’ve made great progress leveraging their 5G network to enable the remote viewing management system,” said Scott Hopper, Executive Vice President of Corporate and Business Development, Sarcos Robotics. “This is a significant first step and we’re eager to continue the development toward full 5G wireless connectivity that will unlock a variety of new capabilities, including remote teleoperation, as we prepare for commercial availability.”

“The Sarcos Guardian XT robot requires a highly reliable, low latency 5G network that its human operators can count on,” said John Saw, EVP of Advanced & Emerging Technologies at T-Mobile. “5G was designed from the ground up for industrial applications such as this and we cannot wait to further collaborate with Sarcos as they develop the next big thing in industrial robotics.”

On April 6, 2021, Sarcos announced that it will become publicly listed through a merger transaction with Rotor Acquisition Corp. (NYSE: ROT.U, ROT, and ROT WS) (“Rotor”), a publicly-traded special purpose acquisition company. The transaction is expected to close in the third quarter of 2021, at which point the combined company’s common stock is expected to trade on Nasdaq under the ticker symbol STRC.

Takeaways

This announcement is one of the first formal partnerships between a robotics company and a 5G provider that enables remote control via cellular 5G connection. The communication speeds and low latency of 5G enable a stable, high speed connection for devices like the Sarcos Guardian XT. As 5G networks continue to be built out, more applications for 5G will arise to support the connectivity of industrial and service automation applications.

Some autonomous mobile robot providers are starting to look at 5G as an alternative to WiFi connectivity for the connection between AMRs and supervisory or fleet management solutions. Likewise, several manufacturers of AMRs are also looking at 5G connectivity to support remote diagnostic and troubleshooting data exchange with the manufacturer.

Researchers develop human-aware motion planning algorithm (July 13, 2021)
https://www.therobotreport.com/researchers-develop-human-aware-motion-planning-algorithm/

Basic safety needs in the paleolithic era have largely evolved with the onset of the industrial and cognitive revolutions. We interact a little less with raw materials, and interface a little more with machines.

Robots don’t have the same hardwired behavioral awareness and control, so secure collaboration with humans requires methodical planning and coordination. You can likely assume your friend can fill up your morning coffee cup without spilling on you, but for a robot, this seemingly simple task requires careful observation and comprehension of human behavior.

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created an algorithm to help a robot find efficient motion plans to ensure physical safety of its human counterpart. To demo the algorithm, the bot helped put a jacket on a human, which could potentially prove to be a powerful tool in expanding assistance for those with disabilities or limited mobility.

“Developing algorithms to prevent physical harm without unnecessarily impacting the task efficiency is a critical challenge,” said MIT PhD student Shen Li, a lead author on a new paper about the research (PDF). “By allowing robots to make non-harmful impact with humans, our method can find efficient robot trajectories to dress the human with a safety guarantee.”

Human modeling, safety, and efficiency

Proper human modeling – how the human moves, reacts, and responds – is necessary to enable successful robot motion planning in human-robot interactive tasks. A robot can achieve fluent interaction if the human model is perfect, but in many cases, there’s no flawless blueprint.

A robot shipped to a person at home, for example, would have a very narrow, “default” model of how a human could interact with it during an assisted dressing task. It wouldn’t account for the vast variability in human reactions, dependent on a myriad of variables such as personality and habits. A screaming toddler would react differently to putting on a coat or shirt than a frail elderly person, or those with disabilities who might have rapid fatigue or decreased dexterity.

If that robot is tasked with dressing, and plans a trajectory solely based on that default model, the robot could clumsily bump into the human, resulting in an uncomfortable experience or even possible injury. However, if it’s too conservative in ensuring safety, it might pessimistically assume that all space nearby is unsafe, and then fail to move, something known as the “freezing robot” problem.

To provide a theoretical guarantee of human safety, MIT’s algorithm reasons about the uncertainty in the human model. Instead of having a single, default model where the robot only understands one potential reaction, the team gave the machine an understanding of many possible models, to more closely mimic how a human can understand other humans. As the robot gathers more data, it will reduce uncertainty and refine those models.

To resolve the freezing robot problem, MIT redefined safety for human-aware motion planners as either collision avoidance or safe impact in the event of a collision. Often, especially in robot-assisted tasks of activities of daily living, collisions cannot be fully avoided. This allowed the robot to make non-harmful contact with the human to make progress, so long as the robot’s impact on the human is low. With this two-pronged definition of safety, the robot could safely complete the dressing task in a shorter period of time.

For example, let’s say there are two possible models of how a human could react to dressing. “Model One” is that the human will move up during dressing, and “Model Two” is that the human will move down during dressing. With the team’s algorithm, when the robot is planning its motion, instead of selecting one model, it will try to ensure safety for both models. No matter if the person is moving up or down, the trajectory found by the robot will be safe.
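
A toy version of that idea, checking a candidate trajectory against every hypothesized human model and accepting it only if any contact stays below a harmless force, might look like the Python sketch below. The one-dimensional motion models, stiffness constant, and thresholds are invented for illustration and are not the MIT algorithm.

```python
from typing import Callable, List


def trajectory_is_safe(robot_traj: List[float],
                       human_models: List[Callable[[int], float]],
                       contact_dist_m: float = 0.05,
                       max_safe_force_n: float = 5.0,
                       stiffness_n_per_m: float = 400.0) -> bool:
    """Safety under model uncertainty: for EVERY hypothesized human motion, the
    trajectory must either avoid contact or keep the estimated impact force low."""
    for human in human_models:
        for t, robot_pos in enumerate(robot_traj):
            gap = abs(robot_pos - human(t))
            if gap < contact_dist_m:                        # contact under this model
                impact_force = stiffness_n_per_m * (contact_dist_m - gap)
                if impact_force > max_safe_force_n:         # harmful impact: reject plan
                    return False
    return True


# "Model One": the arm drifts up during dressing; "Model Two": it drifts down.
model_up = lambda t: 0.50 + 0.02 * t      # hypothesized arm height (m) over time steps
model_down = lambda t: 0.50 - 0.02 * t

fast_plan = [0.20 + 0.03 * t for t in range(10)]   # robot end-effector heights (m) per step
print(trajectory_is_safe(fast_plan, [model_up, model_down]))  # False: collides under Model Two
```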

To paint a more holistic picture of these interactions, future efforts from MIT will focus on investigating the subjective feelings of safety in addition to the physical during the robot-assisted dressing task.

“This multifaceted approach combines set theory, human-aware safety constraints, human motion prediction, and feedback control for safe human-robot interaction,” said Assistant Professor in The Robotics Institute at Carnegie Mellon University Zackory Erickson. “This research could potentially be applied to a wide variety of assistive robotics scenarios, towards the ultimate goal of enabling robots to provide safer physical assistance to people with disabilities.”

Embodied acquires conversational AI startup Kami Computing (May 6, 2021)
https://www.therobotreport.com/embodied-acquires-conversational-ai-kami-computing/


The Moxie companion robot from Embodied. Credit: Embodied

Paolo Pirjanian is not only a successful robotics entrepreneur, he’s also a savvy investor. That’s part of the reason why the Embodied Inc. founder and CEO was recently named a venture partner at Calibrate Ventures, a California-based early stage venture capital firm.

Today, Pirjanian is putting his investment chops to the test. Embodied acquired Kami Computing, an early-stage startup focused on conversational AI, machine learning, and natural language generation (NLG). Embodied developed the Moxie companion robot that is designed to help kids with social-emotional learning.

Kami’s machine learning scientists and engineers are joining Embodied to integrate their technology into Embodied’s proprietary SocialX platform. SocialX is the platform developed by Embodied’s in-house team to enable children to engage with Moxie through natural interactions such as body language, conversation, facial expressions and more.

Kami was founded by Tel Aviv-based Guy de Beer and London-based Dr. David Levy, who together with a team of 12 scientists, spent the last two years developing a generative voice-based conversational agent. Its product is an artificial persona capable of human-level open domain conversations, with a natural tone of voice, long-term memory, advanced cognition, and emotions. This conversational technology passed the Turing Test in October 2019, an artificial intelligence test to determine whether computers can “think” as humans would.

Pirjanian told The Robot Report that incorporating Kami’s technology will strengthen Moxie’s conversational AI capabilities to include memory of past conversations context, consistent personality, common sense reasoning and common knowledge.

“The state of the art in conversational AI is Amazon Alexa’s challenge to create 20 minutes of general conversation. It is something that the entire research and business community is working towards,” he said. “Embodied has now demonstrated with actual customers that we can maintain an average of 25 minutes of engagement with the user … and this is every day for more than three months (and still going). This is unprecedented.”

Pirjanian added that Kami’s technology and expertise will help Embodied accelerate its technology roadmap.

“We are creating an entirely new category that, for now, looks believable to a 5-year-old and in the foreseeable future will be believable for anyone and everyone,” he said. “That is the goal that we believe is more relevant than the Turing test to have social impact.”

Embodied started shipping Moxie to customers on March 19, 2021. At the time, Pirjanian said there were 10,000 families on the waiting list for Moxie. The companion robot market hasn’t been kind to companies over the years, but Pirjanian said developing Moxie “is a worthwhile mission and it will change the world for the better.”

“The next big wave in technology will be driven by human-machine interfaces,” said Pirjanian. “Kami’s technology helps us continue developing category-defining technology to create social emotional robots that have the power to fight the loneliness epidemic and change people’s lives.”

Pirjanian is well regarded in robotics circles. One of his previous companies, Evolution Robotics, was acquired by iRobot in 2012 for $74 million. He joined iRobot as CTO, but left after three years. He was the second-ever guest on The Robot Report Podcast, joining us in June 2020. He discussed the technical and business challenges involved with building Moxie, including creating seamless human-robot interaction for children, content creation for Moxie, and raising funding after several social robotics companies failed. Certainly much has changed for Embodied since Pirjanian was on the podcast, but you can listen to the episode below.

The post Embodied acquires conversational AI startup Kami Computing appeared first on The Robot Report.

SICK, Universal Robots partner on cobot safety https://www.therobotreport.com/sick-universal-robots-partner-on-cobot-safety/ https://www.therobotreport.com/sick-universal-robots-partner-on-cobot-safety/#respond Mon, 22 Mar 2021 15:08:57 +0000 https://www.therobotreport.com/?p=559171 Safety system monitors two protective fields simultaneously around a cobot, making it possible to dynamically reduce or stop the robot's movement only when a person is too close to the working area.

SICK and Universal Robots (UR) announced a new safety solution for safeguarding accessible cobot applications built on the UR e-Series cobots. The combined sBot safety system and URCap from SICK increase productivity with immediate automated restart and benefit from easy configuration of the safety system directly from the UR teach pendant.

The sBot Stop – URCap and sBot Speed – URCap are based on the combination of SICK’s nanoScan3 safety laser scanner and UR’s cobot safety features. These solutions work well in handling operations using robot applications with free access and enable manufacturers to achieve safety without sacrificing productivity.

With its small size, the nanoScan3 opens up potential applications where space is extremely critical, such as cobots. It offers a protective field range of up to three meters and features 128 freely configurable fields and monitoring cases. SICK won a 2020 RBR50 Innovation Award for the nanoScan3. The nanoScan3 is combined with the UR e-Series cobots’ safety features via a provided configuration tool, which enables configuration directly from the UR teach pendant.

The safety laser scanner constantly monitors the ground level around the cobot. It can monitor two protective fields simultaneously around the robot, making it possible to dynamically reduce or stop the robot’s movement only when a person is too close to the robot’s working area. As soon as the person leaves the area, the cobot recovers its operating speed.
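To make that two-field behavior concrete, here is a minimal sketch of the decision logic in Python. It is illustrative only: the field names, speed factors and policy below are assumptions made for this article, not SICK’s or Universal Robots’ actual interfaces, which are configured through the URCap and safety-rated I/O rather than application code.

```python
# Minimal sketch, assuming a two-zone policy; names and values are invented.
from enum import Enum

class SpeedMode(Enum):
    FULL = 1.0      # area clear: run at the programmed speed
    REDUCED = 0.25  # person in the outer (warning) field: slow down
    STOPPED = 0.0   # person in the inner (protective) field: safety stop

def select_speed_mode(outer_field_clear: bool, inner_field_clear: bool) -> SpeedMode:
    """Map the two simultaneously monitored scanner fields to a speed mode."""
    if not inner_field_clear:
        return SpeedMode.STOPPED   # person too close to the working area
    if not outer_field_clear:
        return SpeedMode.REDUCED   # person approaching: dynamically reduce speed
    return SpeedMode.FULL          # person has left: recover operating speed

# A person steps into the outer field, then leaves the cell entirely.
print(select_speed_mode(outer_field_clear=False, inner_field_clear=True))  # SpeedMode.REDUCED
print(select_speed_mode(outer_field_clear=True, inner_field_clear=True))   # SpeedMode.FULL
```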

sBot Stop – URCap and sBot Speed – URCap offer a new way of safeguarding a robot application while maintaining high productivity. They are quick and easy to set up and offer smart features such as Smart Field definition, field teach-in, simultaneous scan data and automatic field fusion, with feedback provided through the GUI.

UR, a 2020 RBR50 Innovation Award winner and the leading developer of cobot arms, recently sold its 50,000th cobot. In mid-February, it named Kim Povlsen its new president. Povlsen, a Danish native, held various executive business and technology leadership roles at Schneider Electric, a global energy management and automation company. Povlsen replaced Jürgen von Hollen, who stepped down at the end of 2020 after four-plus years as president of UR.

The post SICK, Universal Robots partner on cobot safety appeared first on The Robot Report.

‘Robomorphic computing’ aims to quicken robots’ response time https://www.therobotreport.com/robomorphic-computing-hasten-robots-response-time/ https://www.therobotreport.com/robomorphic-computing-hasten-robots-response-time/#respond Thu, 21 Jan 2021 16:31:26 +0000 https://www.therobotreport.com/?p=558724 Contemporary robots can move quickly. “The motors are fast, and they’re powerful,” says Sabrina Neuman. Yet in complex situations, like interactions with people, robots often don’t move quickly. “The hang up is what’s going on in the robot’s head,” she adds. Perceiving stimuli and calculating a response takes a “boatload of computation,” which limits reaction…


MIT developed ‘robomorphic computing,’ an automated way to design custom hardware to speed up a robot’s operation. | Credit: Jose-Luis Olivares, MIT

Contemporary robots can move quickly. “The motors are fast, and they’re powerful,” says Sabrina Neuman.

Yet in complex situations, like interactions with people, robots often don’t move quickly. “The hang up is what’s going on in the robot’s head,” she adds.

Perceiving stimuli and calculating a response takes a “boatload of computation,” which limits reaction time, says Neuman, who recently graduated with a PhD from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Neuman has found a way to fight this mismatch between a robot’s “mind” and body. The method, called “robomorphic computing,” uses a robot’s physical layout and intended applications to generate a customized computer chip that minimizes the robot’s response time.

The advance could fuel a variety of robotics applications, including, potentially, frontline medical care of contagious patients. “It would be fantastic if we could have robots that could help reduce risk for patients and hospital workers,” says Neuman.

Neuman will present the research at April’s International Conference on Architectural Support for Programming Languages and Operating Systems. MIT co-authors include graduate student Thomas Bourgeat and Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering and Neuman’s PhD advisor. Other co-authors include Brian Plancher, Thierry Tambe, and Vijay Janapa Reddi, all of Harvard University. Neuman is now a postdoctoral NSF Computing Innovation Fellow at Harvard’s School of Engineering and Applied Sciences.

Related: How Boston Dynamics’ robots learned to dance

There are three main steps in a robot’s operation, according to Neuman. The first is perception, which includes gathering data using sensors or cameras. The second is mapping and localization: “Based on what they’ve seen, they have to construct a map of the world around them and then localize themselves within that map,” says Neuman. The third step is motion planning and control — in other words, plotting a course of action.
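As a rough sketch of how those three steps fit together in software, consider the toy loop below. Every class, method and value here is a placeholder invented for illustration, not any particular robot’s software stack.

```python
# Toy illustration of the perceive -> localize -> plan/control pipeline.
# All names and data structures are placeholders invented for this sketch.
class ToyRobot:
    def perceive(self):
        # Step 1: gather data using sensors or cameras.
        return {"obstacle_ahead": False}

    def localize(self, observations):
        # Step 2: build a map of the surroundings and estimate the robot's pose in it.
        return {"x": 0.0, "y": 0.0, "heading": 0.0}

    def plan_and_control(self, pose, observations):
        # Step 3: plot a course of action and issue a motion command.
        return "stop" if observations["obstacle_ahead"] else "drive_forward"

robot = ToyRobot()
obs = robot.perceive()
pose = robot.localize(obs)
print(robot.plan_and_control(pose, obs))  # -> drive_forward
```

Each pass through those three stages has to finish before the robot can react, which is why the total computation time matters so much.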

These steps can take time and an awful lot of computing power. “For robots to be deployed into the field and safely operate in dynamic environments around humans, they need to be able to think and react very quickly,” says Plancher. “Current algorithms cannot be run on current CPU hardware fast enough.”

Neuman adds that researchers have been investigating better algorithms, but she thinks software improvements alone aren’t the answer. “What’s relatively new is the idea that you might also explore better hardware.” That means moving beyond a standard-issue CPU processing chip that comprises a robot’s brain — with the help of hardware acceleration.

Hardware acceleration refers to the use of a specialized hardware unit to perform certain computing tasks more efficiently. A commonly used hardware accelerator is the graphics processing unit (GPU), a chip specialized for parallel processing. These devices are handy for graphics because their parallel structure allows them to simultaneously process thousands of pixels. “A GPU is not the best at everything, but it’s the best at what it’s built for,” says Neuman. “You get higher performance for a particular application.”

Most robots are designed with an intended set of applications and could therefore benefit from hardware acceleration. That’s why Neuman’s team developed robomorphic computing.

The system creates a customized hardware design to best serve a particular robot’s computing needs. The user inputs the parameters of a robot, like its limb layout and how its various joints can move. Neuman’s system translates these physical properties into mathematical matrices. These matrices are “sparse,” meaning they contain many zero values that roughly correspond to movements that are impossible given a robot’s particular anatomy. (Similarly, your arm’s movements are limited because it can only bend at certain joints — it’s not an infinitely pliable spaghetti noodle.)
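A toy example shows where those structural zeros come from. The sketch below assumes a planar two-joint arm; the numbers are invented, and the matrix is far smaller than the rigid-body matrices the MIT system actually works with.

```python
# Toy example of morphology-induced sparsity; all values are invented.
import numpy as np

# Rows: (x, y, z) end-effector velocity; columns: the two joint velocities.
# A planar arm can never move out of its plane, so the z row is zero for
# every possible joint motion -- zeros dictated by the robot's anatomy.
jacobian = np.array([
    [-0.50, -0.20],   # x depends on both joints
    [ 0.80,  0.30],   # y depends on both joints
    [ 0.00,  0.00],   # z motion is impossible for this morphology
])

nonzero = np.count_nonzero(jacobian)
print(f"{nonzero} of {jacobian.size} entries are nonzero")  # 4 of 6
```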

Related: 8 degrees of difficulty for autonomous navigation

The system then designs a hardware architecture specialized to run calculations only on the non-zero values in the matrices. The resulting chip design is therefore tailored to maximize efficiency for the robot’s computing needs. And that customization paid off in testing.
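In software terms, the principle can be mimicked like this: if the zero pattern is fixed by the robot’s morphology, precompute where the nonzeros live and multiply using only those entries. The Python sketch below (again with invented numbers) loops over that pattern explicitly, whereas a robomorphic chip would bake the same skipping directly into its circuitry.

```python
# Software analogue of hardware that computes only the known nonzero entries.
import numpy as np

def structured_matvec(matrix, pattern, vector):
    """Multiply using only the (row, col) positions known to be nonzero."""
    result = np.zeros(matrix.shape[0])
    for row, col in pattern:
        result[row] += matrix[row, col] * vector[col]
    return result

M = np.array([[-0.5, -0.2],
              [ 0.8,  0.3],
              [ 0.0,  0.0]])            # planar-arm toy matrix from the sketch above
pattern = [(r, c) for r in range(M.shape[0])
                  for c in range(M.shape[1]) if M[r, c] != 0.0]

joint_speeds = np.array([0.1, -0.4])
print(structured_matvec(M, pattern, joint_speeds))  # approx. [ 0.03 -0.04  0. ]
print(M @ joint_speeds)                             # dense reference, same result
```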

Hardware architecture designed using this method for a particular application outperformed off-the-shelf CPU and GPU units. While Neuman’s team didn’t fabricate a specialized chip from scratch, they programmed a customizable field-programmable gate array (FPGA) chip according to their system’s suggestions. Despite operating at a slower clock rate, that chip performed eight times faster than the CPU and 86 times faster than the GPU.

“I was thrilled with those results,” says Neuman. “Even though we were hamstrung by the lower clock speed, we made up for it by just being more efficient.”

Plancher sees widespread potential for robomorphic computing. “Ideally we can eventually fabricate a custom motion-planning chip for every robot, allowing them to quickly compute safe and efficient motions,” he says. “I wouldn’t be surprised if 20 years from now every robot had a handful of custom computer chips powering it, and this could be one of them.” Neuman adds that robomorphic computing might allow robots to relieve humans of risk in a range of settings, such as caring for COVID-19 patients or manipulating heavy objects.

“This work is exciting because it shows how specialized circuit designs can be used to accelerate a core component of robot control,” says Robin Deits, a robotics engineer at Boston Dynamics who was not involved in the research. “Software performance is crucial for robotics because the real world never waits around for the robot to finish thinking.” He adds that Neuman’s advance could enable robots to think faster, “unlocking exciting behaviors that previously would be too computationally difficult.”

Neuman next plans to automate the entire system of robomorphic computing. Users will simply drag and drop their robot’s parameters, and “out the other end comes the hardware description. I think that’s the thing that’ll push it over the edge and make it really useful.”

Editor’s Note: This article was republished from MIT News.

The post ‘Robomorphic computing’ aims to quicken robots’ response time appeared first on The Robot Report.

Motional, MIT roboticists to examine the future of human-robot collaboration in RoboBusiness Direct https://www.therobotreport.com/motional-mit-experts-discuss-future-human-robot-collaboration/ Mon, 07 Dec 2020 14:17:26 +0000 https://www.therobotreport.com/?p=107514 In their RoboBusiness Direct session moderated by MassRobotics Executive Director Tom Ryden, Motional CTO Laura Major and MIT Professor Julie Shah will examine the future of human-robot interaction.


Over the past several decades, industrial robots were mostly immobile, single-task systems that had little interaction with humans or the world around them as they performed repetitive tasks while encaged within safety barriers. However, rapid innovation in the past 20 years has led to robots capable of operating in open, dynamic environments, often working closely or directly with humans. But what comes next? Motional CTO Laura Major and MIT Professor Julie Shah will examine what the future holds for robotics developers, investors, and end users.

Tom Ryden, executive director of MassRobotics, will be moderating Major and Shah’s RoboBusiness Direct discussion on “The Future of Human-Robot Collaboration: Developing the Next Generation of Industrial, Commercial, and Consumer Robotics Systems,” which will be on Friday, Dec. 11, 2020, at 2:00 p.m. EST.

In their session, Major and Shah will describe how the next generation of robotics systems will operate in the real world and how they will change our relationship to technology. They will also discuss how social behavior can be incorporated into robots, as well as how public spaces can be optimized for safe and beneficial human-robot interaction. Major and Shah will share insights from their new book on the topic.


About RoboBusiness Direct

RoboBusiness Direct is an ongoing series of digital events delivered by the brightest minds from leading robotics and automation organizations around the world. The series complements continuing coverage and analysis in Robotics Business Review, a sibling site to The Robot Report.

RoboBusiness Direct is designed to impart to business and engineering professionals the information they need to identify market opportunities, successfully develop and deploy the next generation of commercial robotics systems, and accelerate their businesses.

A listing of RoboBusiness Direct speakers and topics, along with the dates and times of upcoming sessions, is available online. RoboBusiness Direct presentations are available on demand after the initial broadcast.

There is no charge to register for RoboBusiness Direct programs.



About the speakers

Motional CTO Laura Major

Laura Major has had a long career developing autonomous solutions for ground, aerial and space applications. She is currently the chief technology officer of Motional, an autonomous driving joint venture between Hyundai Motor Group and Aptiv PLC, where she leads the technology roadmap and engineering development of its autonomous vehicle.

Prior to Motional, Major was the chief technology officer at CyPhy Works/Aria Insights, a pioneer in tethered drones. In addition, she spent 12 years at Draper Laboratory developing capabilities that fuse human and machine intelligence to solve technical challenges across many national security domains. Major was honored by the Society of Women Engineers (SWE) with a national award for Emerging Leaders in 2014, and she was also recognized by Mass High Tech as one of its 2014 “Women to Watch.” Major holds an MS from MIT and a BS from Georgia Tech.


MIT CSAIL Professor Julie Shah

Julie Shah is an associate professor in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology, where she leads the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL). Before joining the faculty, she worked at Boeing Research and Technology on robotics applications for aerospace manufacturing.

Shah has developed innovative methods for enabling fluid human-robot teamwork in time-critical, safety-critical domains, ranging from manufacturing to surgery to space exploration. Her group draws on expertise in artificial intelligence, human factors, and systems engineering to develop interactive robots that emulate the qualities of effective human team members to improve the efficiency of human-robot teamwork.

In 2014, Shah was recognized with an NSF CAREER award for her work on “Human-aware Autonomy for Team-oriented Environments,” and by the MIT Technology Review TR35 list as one of the world’s top innovators under the age of 35. Shah received her SB and SM from the Department of Aeronautics and Astronautics at MIT, and her Ph.D. in Autonomous Systems from MIT.


MassRobotics Executive Director Tom Ryden

Thomas Ryden is the executive director of MassRobotics, a nonprofit organization whose mission is to help grow the next generation of robotics companies. MassRobotics is a strategic partner of RoboBusiness Direct. Prior to joining MassRobotics, Ryden was the founder and CEO/COO of VGo Communications, where he oversaw the development and launch of the VGo telepresence robot.

Previously, Ryden was director of sales and marketing at iRobot Corp. In addition, he held roles in program management, overseeing the development of some of iRobot’s most successful products. Ryden serves as the co-chairman of the robotics cluster of the Massachusetts Technology Leadership Council (Mass TLC) and is on the board of directors of AUVSI New England and the Robotics Technology Advisory Panel for ASME. He holds a BS in electrical engineering from the University of Vermont and an MBA from Bentley University.


The post Motional, MIT roboticists to examine the future of human-robot collaboration in RoboBusiness Direct appeared first on The Robot Report.
