Design / Development Archives - The Robot Report

How MIT taught a quadruped to play soccer
Thu, 06 Apr 2023

A research team at MIT’s Improbable Artificial Intelligence Lab, part of the Computer Science and Artificial Intelligence Laboratory (CSAIL), taught a Unitree Go1 quadruped to dribble a soccer ball on various terrains. DribbleBot can maneuver soccer balls on landscapes like sand, gravel, mud and snow, adapt to each surface’s varied impact on the ball’s motion, and get up and recover the ball after falling.

The team used simulation to teach the robot how to actuate its legs during dribbling. Simulation let the robot acquire hard-to-script skills for responding to diverse terrains much more quickly than training in the real world. After loading the robot and other assets into the simulation and setting the physical parameters, the team could simulate 4,000 versions of the quadruped in parallel in real time, collecting data 4,000 times faster than with a single robot. You can read the team’s technical paper, “DribbleBot: Dynamic Legged Manipulation in the Wild,” here (PDF).

DribbleBot started out not knowing how to dribble a ball at all. The team trained it by giving it a reward when it dribbles well, or negative reinforcement when it messes up. Using this method, the robot was able to figure out what sequence of forces it should apply with its legs. 

“One aspect of this reinforcement learning approach is that we must design a good reward to facilitate the robot learning a successful dribbling behavior,” MIT Ph.D. student Gabe Margolis, who co-led the work along with Yandong Ji, research assistant in the Improbable AI Lab, said. “Once we’ve designed that reward, then it’s practice time for the robot. In real time, it’s a couple of days, and in the simulator, hundreds of days. Over time it learns to get better and better at manipulating the soccer ball to match the desired velocity.”
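
Reward shaping of the kind Margolis describes can be summarized in a few lines: reward the ball for tracking the commanded velocity, and penalize failures. The Python sketch below is a minimal illustration of that idea; the exponential tracking kernel, the fall penalty and the variable names are assumptions for illustration, not the actual DribbleBot reward.

```python
import numpy as np

def dribbling_reward(ball_velocity, commanded_velocity, fell_over):
    """Illustrative dense reward for dribbling, not the published DribbleBot code.

    Rewards the ball tracking the commanded velocity and penalizes falling,
    the two training signals described in the article.
    """
    # Highest reward when the ball moves at exactly the commanded velocity.
    tracking_error = np.linalg.norm(ball_velocity - commanded_velocity)
    reward = np.exp(-tracking_error**2)  # in (0, 1], peaks at perfect tracking

    # "Negative reinforcement when it messes up": a flat penalty for falling.
    if fell_over:
        reward -= 1.0
    return reward

# Example: ball drifting right while the operator commands 1 m/s straight ahead.
r = dribbling_reward(np.array([0.8, 0.3]), np.array([1.0, 0.0]), fell_over=False)
```

In simulation, a reward like this is evaluated for each of the thousands of parallel robot instances at every step, which is what makes the 4,000x data collection speedup possible.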

The team did teach the quadruped how to handle unfamiliar terrains and recover from falls using a recovery controller built into its system. However, dribbling on different terrains still presents many more complications than walking alone.

The robot has to adapt its locomotion to apply forces to the ball to dribble, and the robot has to adjust to the way the ball interacts with the landscape. For example, soccer balls act differently on thick grass as opposed to pavement or snow. To combat this, the MIT team leveraged cameras on the robot’s head and body to give it vision.

While the robot can dribble on many terrains, its controller currently isn’t trained in simulated environments that include slopes or stairs. The quadruped can’t perceive the geometry of terrain; it only estimates material contact properties, like friction. Slopes and stairs will be the next challenge for the team to tackle.

The MIT team is also interested in applying the lessons they learned while developing DribbleBot to other tasks that involve combined locomotion and object manipulation, like transporting objects from place to place using legs or arms. A team from Carnegie Mellon University (CMU) and UC Berkeley recently published their research about how to give quadrupeds the ability to use their legs to manipulate things, like opening doors and pressing buttons. 

The team’s research is supported by the DARPA Machine Common Sense Program, the MIT-IBM Watson AI Lab, the National Science Foundation Institute of Artificial Intelligence and Fundamental Interactions, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator.

Capra Robotics’ AMRs to use RGo Perception Engine
Wed, 05 Apr 2023

RGo Robotics, a company developing artificial perception technology that enables mobile robots to understand complex surroundings and operate autonomously, announced significant strategic updates. The announcements include leadership appointments, new customers and an upcoming product release.

RGo develops AI-powered perception technology for autonomous mobile robots, allowing them to achieve 3D, human-level perception. Its Perception Engine gives mobile robots the ability to understand complex surroundings and operate autonomously, integrating with them to deliver centimeter-scale position accuracy in any environment. In Q2 2023, RGo said it will release the next iteration of its software, which will include:

  • An indoor-outdoor mode: a breakthrough capability for mobile robot navigation that allows robots to operate in all environments, both indoors and outdoors.
  • A high-precision mode that enables millimeter-scale precision for docking and similar use cases.
  • Control Center 2.0: a redesigned configuration and admin interface. This new version supports global map alignment, advanced exploration capabilities and new map-sharing utilities.

RGo separately announced support for NVIDIA Jetson Orin System-on-Modules that enables visual perception for a variety of mobile robot applications.

RGo will exhibit its technology at LogiMAT 2023, Europe’s biggest annual intralogistics tradeshow, from April 25-27 in Stuttgart, Germany, at Booth 6F59. The company will also sponsor and host a panel session, “Unlocking New Applications for Mobile Robots,” at the Robotics Summit and Expo in Boston from May 10-11.

Leadership announcements

RGo also announced four leadership appointments: Yael Fainaro as chief business officer and president; Mathieu Goy as head of European sales; Yasuaki Mori as executive consultant for APAC market development; and Amy Villeneuve as a member of the board of directors.

“It is exciting to have reached this important milestone. The new additions to our leadership team underpin our evolution from a technology innovator to a scaling commercial business model including new geographies,” said Amir Bousani, CEO and co-founder, RGo Robotics.

Goy, based in Paris, and Mori, based in Tokyo, join with extensive sales experience in the European and APAC markets. RGo is establishing an initial presence in Japan this year with growth in South Korea planned for late 2023.


“RGo has achieved impressive product maturity and growth since exiting stealth mode last year,” said Fainaro. “The company’s vision-based localization capabilities are industrial-grade, extremely precise and ready today for even the most challenging environments. This, together with higher levels of 3D perception, brings tremendous value to the rapidly growing mobile robotics market. I’m looking forward to working with Amir and the team to continue growing RGo in the year ahead.”

Villeneuve joins RGo’s board of directors with leadership experience in the robotics industry, including her time as the former COO and president of Amazon Robotics. “I am very excited to join the team,” said Villeneuve. “RGo’s technology creates disruptive change in the industry. It reduces cost and adds capabilities to mobile robots in logistics, and enables completely new applications in emerging markets including last-mile delivery and service robotics.”

Customer traction

After comprehensive field trials in challenging indoor and outdoor environments, RGo continued its commercial momentum with new customers. The design wins are with market-leading robot OEMs across multiple vertical markets, ranging from logistics and industrial autonomous mobile robots to forklifts, outdoor machinery and service robots.

Capra Robotics, an award-winning mobile robot manufacturer based in Denmark, selected RGo’s Perception Engine for its new Hircus mobile robot platform.

“RGo continues to develop game-changing navigation technology,” said Niels Jul Jacobsen, CEO of Capra and founder of Mobile Industrial Robots. “Traditional localization sensors either work indoors or outdoors – but not both. Combining both capabilities into a low-cost, compact and robust system is a key aspect of our strategy to deliver mobile robotics solutions to the untapped ‘interlogistics’ market.”

Researchers taught a quadruped to use its legs for manipulation
Fri, 31 Mar 2023

Researchers from Carnegie Mellon University (CMU) and UC Berkeley want to give quadrupeds more capabilities similar to their biological counterparts. Just like real dogs can use their front legs for things other than walking and running, like digging and other manipulation tasks, the researchers think quadrupeds could someday do the same.

Currently, quadrupeds mostly use their legs only for locomotion. Some, like Boston Dynamics’ Spot, get around this limitation with a robotic arm mounted on the robot’s back. The arm allows Spot to manipulate things, like opening doors and pressing buttons, while keeping the flexibility of four-legged locomotion.

However, the researchers at CMU and UC Berkeley taught a Unitree Go1 quadruped, equipped with an Intel RealSense camera for perception, how to use its front legs to climb walls, press buttons, kick a soccer ball and perform other object interactions in the real world, on top of teaching it how to walk.

The team started this challenging task by decoupling skill learning into two broad categories: locomotion, which involves movements like walking or climbing a wall, and manipulation, which involves using one leg to interact with objects while balancing on three legs. Decoupling these tasks helps the quadruped simultaneously move to stay balanced and manipulate objects with one leg.

By training in simulation, the team taught the quadruped these skills and transferred them to the real world with a proposed sim-to-real variant that builds on recent successes in sim-to-real transfer for legged locomotion.

All of these skills are combined into a robust long-term plan by teaching the quadruped a behavior tree that encodes a high-level task hierarchy from one clean expert demonstration. This allows the quadruped to move through the behavior tree and return to its last successful movement when it runs into problems with certain branches of the behavior tree.

For example, if a quadruped is tasked with pressing a button on a wall but fails to climb up the wall, it returns to the last task it did successfully, like approaching the wall, and starts there again.
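
The fallback behavior the researchers describe lends itself to a compact sketch. Below is a toy Python version of the retry-from-last-success idea, with hypothetical task names and success conditions; the paper’s actual behavior tree is richer than this linear sequence.

```python
import random

def run_sequence(tasks):
    """Run tasks in order; on failure, back up to the most recent step and
    retry from there. A toy version of the recovery behavior described
    above, not the authors' implementation.
    """
    i = 0
    while i < len(tasks):
        name, action = tasks[i]
        if action():            # each action reports True on success
            i += 1              # advance to the next step in the sequence
        else:
            i = max(i - 1, 0)   # fall back to the last successful step

# Hypothetical task sequence for the wall-button example.
tasks = [
    ("approach_wall", lambda: True),
    ("climb_wall",    lambda: random.random() > 0.3),  # may fail, triggering fallback
    ("press_button",  lambda: True),
]
run_sequence(tasks)
```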

The research team was made up of Xuxin Cheng, a Master’s student in robotics at CMU, Ashish Kumar, a graduate student at UC Berkeley, and Deepak Pathak, an assistant professor of computer science at CMU. You can read their technical paper, “Legs as Manipulator: Pushing Quadrupedal Agility Beyond Locomotion” (PDF), to learn more. They said a limitation of their work is that it decouples high-level decision-making from low-level command tracking, but that a full end-to-end solution is “an exciting future direction.”

Keys to using ROS 2 & other frameworks for medical robots
Thu, 30 Mar 2023

What is the best architectural approach to use when developing medical robots? Is it ROS, ROS 2 or other open-source or commercial frameworks? The upcoming Robotics Summit & Expo (May 10-11 in Boston) will explore engineering questions concerning the level of concern, risk, design controls, and evidence on a couple of different applications of these frameworks.

In a session on May 10 from 2-2:45 PM, Tom Amlicke, software systems engineer at MedAcuity, will discuss the “Keys to Using ROS 2 and Other Frameworks for Medical Robots.” Amlicke will look at three hypothetical robotic systems and explore these approaches:

  1. An application based on the da Vinci Research Kit through regulatory clearance
  2. ROS as test tools to verify the software requirements for a visual guidance system
  3. Commercial off-the-shelf robot arm used for a medical application

Attendees will also learn how to weigh the trade-offs among these architectural approaches and how to validate their intended uses to ensure a successful submission package for FDA, EMA or other regulatory approval.

Amlicke has 20-plus years of embedded and application-level development experience. He designs and deploys enterprise, embedded, and mobile solutions on Windows, Mac, iOS, and Linux/UNIX platforms using a variety of languages including C++. Amlicke takes a lead role on complex robotics projects, overseeing end-to-end development of ROS-based mobile robots and surgical robots.

You can find the full agenda for the Robotics Summit here. The Robotics Summit & Expo is the premier event for commercial robotics developers. There will be nearly 70 industry-leading speakers sharing their development expertise on stage during the conference, with 150-plus exhibitors on the showfloor showcasing their latest enabling technologies, products and services that help develop commercial robots. There also will be a career fair, networking opportunities and more. 

How Amazon Astro moves through its environment
Tue, 28 Mar 2023

Amazon recently detailed how Astro, the company’s multi-purpose home robot, navigates its environment with limited onboard computational capabilities. Astro’s sensor field of view and onboard compute aren’t nearly as powerful as those of other autonomous robots. While this makes it a more affordable option for consumers, it also makes it more challenging for Amazon to deliver a high quality of motion.

Amazon counteracts Astro’s lack of computation capabilities with algorithms and software designed to allow the robot to move more gracefully. 

Predictive planning is a key aspect of Astro’s navigational abilities. Astro’s limited computational capabilities mean it struggles with a large sensing-to-actuation latency. To combat this, Astro makes predictions about the movements of the objects around it, like people. The robot predicts where those objects will be and what its surroundings will look like at the end of its current planning cycle, helping it to account for latencies in sensing and mapping while it’s moving.

All of Astro’s plans are based on its latest sensor data and what it thinks its surroundings will look like when its plan takes effect. The robot can act on these predictions because it explicitly accounts for uncertainty and the risk of collision.
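
The simplest version of this kind of latency compensation is a constant-velocity projection of each tracked object. The sketch below is a minimal stand-in, since Amazon has not published Astro’s actual predictor; the function, its arguments and the latency figure are illustrative.

```python
import numpy as np

def predict_position(position, velocity, latency):
    """Project a tracked object (e.g., a person) forward to where it should
    be when the current planning cycle's output takes effect. A constant-
    velocity stand-in for Astro's unpublished predictor.
    """
    return position + velocity * latency

# A person 2 m ahead walking 1.2 m/s to the left, with 0.3 s of total
# sensing-to-actuation latency (an assumed figure).
future = predict_position(np.array([2.0, 0.0]), np.array([0.0, -1.2]), 0.3)
```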

Astro’s motivation to move towards its goal is always weighed dynamically with its perceived level of uncertainty. This means Astro evaluates uncertainty-adjusted progress for each candidate motion, allowing it to focus on getting to its goal when it determines risk is low, and focus on evasion when risk is high. 

The robot also uses trajectory optimization software to operate in its environment. Astro considers multiple candidate trajectories and picks the best one in each planning cycle. The robot plans 10 times a second and evaluates a few hundred trajectory candidates in each instance. 

Astro considers safety, smoothness of motion and progress toward its end goal. With these three criteria, the robot picks the trajectory that will result in optimal behavior. Other approaches limit the number of choices a robot can make to a discrete set, or a state lattice, but Amazon’s formulation is continuous, helping the robot move smoothly. 
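
Conceptually, each planning cycle reduces to scoring a few hundred candidates and keeping the best one, as in the Python sketch below. The weights, field names and candidate generator are invented for illustration; Amazon has not published the actual cost function.

```python
import random

def plan_once(candidates, weights=(1.0, 0.5, 0.8)):
    """Pick the best trajectory among candidates, scoring each on the three
    criteria the article names: safety, smoothness and progress toward the
    goal. Weights are illustrative, not Amazon's.
    """
    w_safety, w_smooth, w_progress = weights
    def score(traj):
        return (w_safety * traj["clearance"]           # distance kept from obstacles
                - w_smooth * traj["jerk"]              # lower jerk means smoother motion
                + w_progress * traj["goal_progress"])  # uncertainty-adjusted progress
    return max(candidates, key=score)

# Roughly 10 planning cycles per second, a few hundred candidates per cycle.
candidates = [{"clearance": random.random(),
               "jerk": random.random(),
               "goal_progress": random.random()} for _ in range(300)]
best = plan_once(candidates)
```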

Astro doesn’t just have to plan where its two wheels and body will go; it also has to plan movements for its screen. The screen communicates motion and intent and supports active perception, so Astro plans to do things like orient its screen toward the person it’s following, or in the direction it intends to move, so the humans around it know its plans.

Amazon released Astro in September 2021. The robot can be used for a variety of tasks, including home monitoring, videoconferencing with family and friends, entertaining children and more. The voice-controllable robot can recognize faces, deliver items to specific people after a human puts the item in its storage bin, and use third-party accessories to, for example, record blood pressure. It can detect the sound of a smoke alarm, carbon monoxide detector or breaking glass. If you have a Ring account, Astro can send you notifications if it notices something unusual.

Celera Motion Summit Designer simplifies PCB design for robots
Mon, 27 Mar 2023

Celera Motion, a business unit of Novanta Inc., launched its Summit Designer tool that delivers standard market-ready printed circuit board (PCB) designs for robotic applications.

Celera Motion, headquartered in Bedford, Mass., is a provider of motion-control components and subsystems for OEMs serving a variety of medical and advanced industrial markets. Celera Motion said it offers precision encoders, motors and customized mechatronic solutions. Celera Motion will be exhibiting in booth 335 of the Robotics Summit & Expo, the world’s premier commercial robotics development event that takes place May 10-11 in Boston.

Summit Designer is an open-source PCB design library featuring a diverse and vast offering of market-ready application-specific PCBs that are designed, supported and updated by experts. It is a new way to develop compact robot joints, multi-axis mobile robotics systems, industrial end-effectors and surgical robots, among many others.

“Summit Designer allows developers to create an ideal application using tested and proven PCB Designs,” said Marc Vila, director of strategy and business development, Celera Motion. “This ingenious new platform cuts development time, decreases the chances of error and reduces prototype iterations. That’s more important than ever as markets evolve faster and grow more competitive. Unexpected delays can be catastrophic to projects.”

According to Celera Motion, every design is open-source and consists of a fully customizable and fully documented Altium project. Users only have to choose and add their desired modules to create a fully functional servo drive design for a market-ready robot. The options are designed to satisfy the most common requirements, such as type of connectors, communication protocols, safety functions and motor and encoder specifications. Users then receive a fully scalable and modular download file, ready to edit at their convenience. Experts are available to answer questions and guide users.

Celera Motion said the entire process takes five steps:

  1. Bring an idea for a new motion control application to the Summit Designer website
  2. Check the PCB designs there to find the one that fits your needs
  3. Customize it with our in-depth application guide
  4. Talk to our experts and get support whenever needed
  5. Plug into a Summit Drive and go to market

“Summit Designer was developed by top experts in motion control applications and robotics,” Vila said. “Our goal was to make the process as simple, flexible and seamless as possible. Each project allows for high customization and provides all the necessary tools in a single download.”

NVIDIA is making AI easier to use
Tue, 21 Mar 2023


NVIDIA Omniverse enables Amazon Robotics engineers to quickly simulate warehouse environments and train sensor data. | Credit: NVIDIA

NVIDIA’s CEO Jensen Huang presented the latest product announcements during his keynote at the 2023 GTC event this morning.

One of the first significant announcements is that NVIDIA accelerated computing together with cuOpt has solved Li and Lim’s pickup-and-delivery benchmark, a classic vehicle routing problem, faster than any other solution to date. This milestone opens up a new world of capability for roboticists to create real-time solutions to AMR path-planning problems.

Isaac Sim on Omniverse

Omniverse Cloud for enterprises is now available as a platform as a service (PaaS) for compute-intensive workloads like synthetic data generation and CI/CD. The PaaS provides access to top-of-the-line hardware on demand for processing-intensive workloads. The service is rolling out first on Microsoft Azure.

Amazon Robotics is an early customer of Isaac Sim on Omniverse. The Proteus warehouse robot development team created a complete digital twin of the Proteus AMR and deployed it into Isaac Sim to help with the development and programming of the AMR.

The team generated hundreds of photo-realistic environments to train and test sensor processing algorithms, and AMR behavior. This enabled the development team to accelerate the project without the need to build and test expensive prototypes in the real world.

Isaac Sim enables sensor simulation and interactive design, can run on AWS RoboMaker to help with world generation, and is deployable on your cloud service provider.

BMW is also using NVIDIA Omniverse to accelerate the planning and design of new automobile assembly factories. BMW moved the virtual factory development to Omniverse. There, manufacturing engineers are able to lay out robotic assembly workcells and virtually modify the robotics, tools and programming to optimize the workflow. BMW claims that this digital twin development process is shaving two years off the planning cycle for a new factory.

BMW is an early user of NVIDIA Omniverse for the development and programming of future automotive assembly lines and factories. | Credit: NVIDIA

Isaac ROS DP3 release adds new perception capabilities and open-source modules

There is a new DP3 release of Isaac ROS that includes a number of new features:

  • New LIDAR-based grid localizer package
  • New people detection support in the NVBLOX package
    • GPU-accelerated 3D reconstruction for collision avoidance
  • Updated VSLAM and depth perception GEM
  • Source release of NITROS, NVIDIA’s ROS 2 hardware acceleration implementation
  • New Isaac ROS benchmark suite

NVIDIA Omniverse and Isaac Sim enable roboticists to view the world as the sensors see it. | Credit: NVIDIA

Updates to NVIDIA Jetson Orin Family

The Jetson Orin product line gets an update, including a new Orin Nano unit, and is now available in a complete range of system-on-modules spanning hobbyist to commercial platforms:

  • Jetson Orin Nano 8GB/4GB
  • Orin NX 16GB/8GB
  • AGX Orin 64GB/32GB

New entry-level Jetson developer kit for Robotics / Edge AI

NVIDIA is introducing the NVIDIA Jetson Orin Nano developer kit, which delivers 80x the performance of the previous-generation Jetson Nano, enabling developers to run advanced transformer and robotics models. It also delivers 50x the performance per watt, so developers getting started with the Jetson Orin Nano modules can build and deploy power-efficient, entry-level AI-powered robots, smart drones and intelligent vision systems.

The developer kit is available now for preorder for $499.

NVIDIA Metropolis

In a future-looking statement, NVIDIA believes that building infrastructure will evolve such that every building will be considered to be a “robot.” Practically, this implies that buildings and other infrastructure elements will be imbued with the ability to sense, think and act.

It starts with the idea of automating infrastructure with vision-based AI, a platform for things that watch other things that move. The company calls this vision “NVIDIA Metropolis.”

The company announced the latest generation of TAO, TAO 5.0, and the next version of DeepStream, which puts more sensors to work to help automate machinery and solve computer vision grand challenges with APIs.

Additional features of TAO 5.0 include:

  • New transformer-based pre-trained models
  • Deploy on any device – GPUs, CPUs, MCUs
  • TAO is now open source
  • AI-assisted annotation
  • REST APIs
  • Integration with any cloud — Google Vertex AI, AzureAI, AWS, etc.

NVIDIA is also announcing Metropolis Microservices to solve hard problems like multi-camera tracking and human and machine interactions.

DeepStream is putting AI to work in low-code interfaces for graph composing, as an expansion of existing AI services. This includes multi-sensor fusion and deterministic scheduling for things like PLC controllers.

New features of DeepStream SDK include a new graph execution runtime (GXF) that allows developers to expand beyond the open-source GStreamer multimedia framework. This update unlocks a host of new applications, including those in industrial quality control, robotics and autonomous machines.

Editors note: An earlier version of this article mistakenly referenced Mercedes instead of BMW. It has been updated to accurately document Jensen’s keynote references.

MIT ‘traffic cop’ algorithm helps drones stay on task
Wed, 15 Mar 2023

MIT engineers developed a method to tailor any wireless network to handle a high load of time-sensitive data coming from multiple sources. | Credit: Christine Daniloff, MIT

How fresh are your data? For drones searching a disaster zone or robots inspecting a building, working with the freshest data is key to locating a survivor or reporting a potential hazard. But when multiple robots simultaneously relay time-sensitive information over a wireless network, a traffic jam of data can ensue. Any information that gets through is too stale to consider as a useful, real-time report.

Now, MIT engineers may have a solution. They’ve developed a method to tailor any wireless network to handle a high load of time-sensitive data coming from multiple sources. Their new approach, called WiSwarm, configures a wireless network to control the flow of information from multiple sources while ensuring the network is relaying the freshest data.

The team used their method to tweak a conventional Wi-Fi router, and showed that the tailored network could act like an efficient traffic cop, able to prioritize and relay the freshest data to keep multiple vehicle-tracking drones on task.

The team’s method, which they will present in May at IEEE’s International Conference on Computer Communications (INFOCOM), offers a practical way for multiple robots to communicate over available Wi-Fi networks so they don’t have to carry bulky and expensive communications and processing hardware onboard.

Last in line

The team’s approach departs from the typical way in which robots are designed to communicate data.

“What happens in most standard networking protocols is an approach of first come, first served,” said MIT author Vishrant Tripathi. “A video frame comes in, you process it. Another comes in, you process it. But if your task is time-sensitive, such as trying to detect where a moving object is, then all the old video frames are useless. What you want is the newest video frame.”

In theory, an alternative approach of “last in, first out” could help keep data fresh. The concept is similar to a chef putting out entrées one by one as they come hot off the line. If you want the freshest plate, you’d want the last one that joined the queue. The same goes for data if what you care about is the “age of information,” or how up-to-date the data are.

“Age-of-information is a new metric for information freshness that considers latency from the perspective of the application,” said Eytan Modiano of the Laboratory for Information and Decision Systems (LIDS). “For example, the freshness of information is important for an autonomous vehicle that relies on various sensor inputs. A sensor that measures the proximity to obstacles in order to avoid collision requires fresher information than a sensor measuring fuel levels.”

The team looked to prioritize age of information by incorporating a “last in, first out” protocol for multiple robots working together on time-sensitive tasks. They aimed to do so over conventional wireless networks, as Wi-Fi is pervasive and doesn’t require bulky onboard communication hardware to access.

However, wireless networks come with a big drawback: They are distributed in nature and do not prioritize receiving data from any one source. A wireless channel can then quickly clog up when multiple sources simultaneously send data. Even with a “last in, first out” protocol, data collisions would occur. In a time-sensitive exercise, the system would break down.

Data priority

As a solution, the team developed WiSwarm — a scheduling algorithm that can be run on a centralized computer and paired with any wireless network to manage multiple data streams and prioritize the freshest data.

Rather than attempting to take in every data packet from every source at every moment in time, the algorithm determines which source in a network should send data next. That source (a drone or robot) would then observe a “last in, first out” protocol to send their freshest piece of data through the wireless network to a central processor.

The algorithm determines which source should relay data next by assessing three parameters: a drone’s general weight, or priority (for instance, a drone that is tracking a fast vehicle might have to update more frequently, and therefore would have higher priority over a drone tracking a slower vehicle); a drone’s age of information, or how long it’s been since a drone has sent an update; and a drone’s channel reliability, or likelihood of successfully transmitting data.

By multiplying these three parameters for each drone at any given time, the algorithm can schedule drones to report updates through a wireless network one at a time, without clogging the system, and in a way that provides the freshest data for successfully carrying out a time-sensitive task.
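
As described, the scheduling rule is just the product of those three parameters, computed per drone at each decision point. A minimal sketch, with assumed field names:

```python
import time

def next_source(drones, now=None):
    """Pick which drone transmits next: the one maximizing
    priority weight x age of information x channel reliability,
    the three parameters described above.
    """
    now = time.time() if now is None else now
    def index(d):
        age = now - d["last_update"]  # age of information: time since last update
        return d["weight"] * age * d["reliability"]
    return max(drones, key=index)

# Two drones: one high-priority and recently heard from, one stale but less reliable.
drones = [
    {"id": 1, "weight": 2.0, "last_update": time.time() - 0.5, "reliability": 0.9},
    {"id": 2, "weight": 1.0, "last_update": time.time() - 2.0, "reliability": 0.6},
]
print(next_source(drones)["id"])
```

The selected drone then sends only its freshest sample, per the “last in, first out” rule, so stale frames are never queued behind new ones.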

The team tested their algorithm with multiple mobility-tracking drones. They outfitted flying drones with a small camera and a basic Wi-Fi-enabled computer chip, which each drone used to continuously relay images to a central computer rather than carrying a bulky onboard computing system. They programmed the drones to fly over and follow small vehicles moving randomly on the ground.

When the team paired the network with its algorithm, the computer was able to receive the freshest images from the most relevant drones, which it used to then send commands back to the drones to keep them on the vehicle’s track.

When the researchers ran experiments with two drones, the method was able to relay data that was two times fresher, which resulted in six times better tracking, compared to when the two drones carried out the same experiment with Wi-Fi alone. When they expanded the system to five drones and five ground vehicles, Wi-Fi alone could not accommodate the heavier data traffic, and the drones quickly lost track of the ground vehicles. With WiSwarm, the network was better equipped and enabled all drones to keep tracking their respective vehicles.

“Ours is the first work to show that age-of-information can work for real robotics applications,” said MIT author Ezra Tal.

In the near future, cheap and nimble drones could work together and communicate over wireless networks to accomplish tasks such as inspecting buildings, agricultural fields, and wind and solar farms. Farther in the future, Tal sees the approach being essential for managing data streaming throughout smart cities.

“Imagine self-driving cars come to an intersection that has a sensor that sees something around the corner,” said MIT’s Sertac Karaman. “Which car should get that data first? It’s a problem where timing and freshness of data matters.”

Editor’s Note: This article was republished from MIT News.

Full set of design files released for PR2 robot
Tue, 14 Mar 2023

The PR2 is one of the most beloved robots in the robotics industry. It was the state-of-the-art mobile manipulator when it was launched in 2009 by Willow Garage. The PR2 was sold in small numbers to research labs. Willow Garage continued to support the PR2 until the company was shut down in 2014.

If anyone’s interested in learning about the PR2, or reviving the platform, Clearpath Robotics is here to help. Clearpath took over service and support responsibilities for the PR2 after the Willow Garage shutdown, and it has now posted all of the PR2’s design files on its new documentation website. You need to give up some of your information to access the files, but there’s a lot there: wiring diagrams, schematics, cable and assembly drawings and everything else related to the most advanced research and development platform of its time.

“Clearpath Robotics has been a long-standing supporter of PR2, with the intention of providing important resources to its users,” the company said on its website. “On that account, we are thrilled to have all design files for PR2 available for download on our new documentation website. The hardware and 1000+ software libraries currently available for PR2 affirms new opportunities for robotics researchers to focus on. We are excited to see how these supplementary resources will help developers with their applications.”

The PR2 could navigate unknown, human-shared environments and could grasp and manipulate objects. The groundbreaking platform propelled research in automation, manipulation and human-robot interaction. The PR2’s software was built entirely on the Robot Operating System (ROS), making its capabilities available through ROS interfaces. It had two 7-DOF arms, a variety of sensors, two computers in its base and much more.

In the meantime, take a trip down memory lane and relive some of the milestones the PR2 team achieved.

How Boston Dynamics is developing Spot for real-world applications
Fri, 03 Mar 2023

Spot, Boston Dynamics’ quadruped robot, can reliably walk nearly anywhere a human can, but what does it do? While legged mobility is necessary for many use cases, it’s not the only requirement for value-producing applications.

At the Robotics Summit & Expo (May 10-11 in Boston), Marco da Silva, senior director of R&D at Boston Dynamics, will discuss how the company is “Developing Spot for Real-World Applications.” The session, which takes place on May 11 from 2-2:45 PM, will focus specifically on the work done to develop Spot for remote and autonomous sensing applications. These developments include:

  • Encapsulating Spot’s mobility in an extensible API
  • Building an autonomous capability
  • Making it easy to add sensing to build value-producing solutions 

Spot’s recent updates have included built-in data processing and review, an improved data collection pipeline and a better operator experience, all aimed to make Spot better at the various jobs it’s deployed to do. These updates are particularly aimed at improving Spot’s data collection capabilities, which it uses in many of its inspection jobs.

Spot has use cases across many industries, including construction, oil and gas, energy and more. Since its release, Boston Dynamics continues to enhance the capabilities of Spot. Feedback from customers, and the experience gained from operating in difficult, industrial situations, are enabling the Spot development team to improve Spot’s capabilities.

da Silva is currently focused on making Spot useful as a tool for autonomous inspection of industrial facilities, remote inspection in dangerous settings, and other applications. Prior to Spot, he was platform director for Atlas, a program to develop the world’s most advanced biped robot. da Silva joined Boston Dynamics in 2010 after graduating with a PhD from MIT. Prior to that, he worked at Pixar Animation Studios from 2001 to 2005.

You can find the full agenda for the Robotics Summit here. The event will also feature a conversation with Marc Raibert, executive director of the AI Institute and founder and chairman of the board at Boston Dynamics. Raibert will discuss opportunities for the robotics industry and the most important and difficult challenges facing the creation of advanced robots. He will also describe how the new AI Institute is pushing the limits of technological innovation to solve these challenges.

The Robotics Summit & Expo is the premier event for commercial robotics developers. There will be nearly 70 industry-leading speakers sharing their development expertise on stage during the conference, with 150-plus exhibitors on the showfloor showcasing their latest enabling technologies, products and services that help develop commercial robots. There also will be a career fair, networking opportunities and more. Register for full conference passes by March 9 to save $300. Expo-only passes are just $75. Academic discounts are available and academic full conference rates are just $295.

Form & Function Robotics Challenge teams announced
Tue, 28 Feb 2023

MassRobotics announced the teams participating in its inaugural Form & Function Robotics Challenge. The finalists will be showcasing their robots on May 10-11 at the Robotics Summit & Expo in Boston. The winners will be announced on May 11 at 12:30 PM at the event.

The challenge calls for teams to create a robotics or automation project that delivers a compelling form factor specific to its tasks while accomplishing a useful function. The following universities will be participating:

  • Brown University
  • Indiana University Bloomington
  • Kwame Nkrumah University of Science and Technology
  • MIT
  • Northeastern University
  • Seoul National University
  • Stevens Institute of Technology
  • Tufts University
  • University of Bath
  • University of Calgary
  • University of Southern Denmark
  • Worcester Polytechnic Institute

“We received entries for the Form and Function Challenge from around the world,” said MassRobotics executive director Tom Ryden. “We are excited about the selected teams, their robot concepts and the ways they are planning to incorporate many of our partners’ offerings, from development kits to sensors to software. We look forward to the teams showcasing their robotic solutions at the Robotics Summit & Expo in May.”

The challenge encourages cross-collaboration between state-of-the-art software and hardware providers, including MassRobotics strategic partners Onshape, Lattice Semiconductor, Nano Dimension, Danfoss, FESTO, Novanta, and Analog Devices. The challenge requires teams to use the offerings from a minimum of two of the seven partners. Examples of the offerings available to participants include:

  • Lattice Semiconductor: FPGA technology with their solution stack for a machine vision camera
  • Onshape: cloud-native CAD platform
  • Nano Dimension: 3D-printed circuit boards and design review
  • FESTO: vacuum gripper kit and electrical actuators
  • Novanta: drives, inductive position encoders, RFID
  • Danfoss: remote control and PLUS+1 controllers

Participating teams will be provided technical support in the form of software, hardware, expertise and regular check-ins with the challenge’s sponsoring partners. Teams have until May 5, 2023 to develop and complete their projects and will have the opportunity to present their work at the Robotics Summit & Expo.

The challenge offers a grand prize of $25,000, along with $5,000 prizes for second and third place and a $5,000 Audience Choice award.

The Robotics Summit & Expo is the premier event for commercial robotics developers. There will be nearly 70 industry-leading speakers sharing their development expertise on stage during the conference, with 150-plus exhibitors on the showfloor showcasing their latest enabling technologies, products and services that help develop commercial robots. There also will be a career fair, networking opportunities and more. Register for full conference passes by March 9 to save $300. Expo-only passes are just $75. Academic discounts are available and academic full conference rates are just $295.

Robotics Summit & Expo full conference agenda
Mon, 27 Feb 2023

The Robotics Summit & Expo, produced by The Robot Report and parent company WTWH Media, recently announced the full conference agenda for the May 10-11 event at the Boston Convention and Exhibition Center. Since its founding in 2018, the Robotics Summit & Expo has become the world’s premier commercial robotics development event.

The conference sessions at the event are designed to impart engineers with the information they need to develop and deploy the next generation of commercial robots. Beyond the keynotes and conference sessions, there will be 150-plus exhibits and demonstrations on the expo show floor, a career fair, a robotics development challenge, networking opportunities and more.

Register for full conference passes by March 9 to save $300. Expo-only passes are just $75. Academic discounts are available and academic full conference rates are just $295.

The Robotics Summit & Expo will be co-located with the Healthcare Robotics Engineering Forum (HREF), an event designed to provide engineers, engineering management, business professionals and others with information about how to successfully develop and deploy the next generation of healthcare robots. Also co-located with these events is DeviceTalks Boston, the premier industry event for medical technology professionals. HREF and DeviceTalks Boston attract engineering and business professionals from a range of medical technology backgrounds.

The complete agenda for the Robotics Summit & Expo is below. You can also view the entire agenda here and register here.

Wednesday, May 10, 2023

Opening Keynote: Idea to Reality: Commercializing Robotics Technologies
Howie Choset, Professor of Robotics, Carnegie Mellon University
8:45 AM – 9:30 AM

Turning a technology developed inside a lab into a successful robotics company is no easy task. Howie Choset has done this several times with companies such as Medrobotics (surgical robots), Hebi Robotics (modular robots) and Bito Robotics (robot software). Choset will share insights about the robotics startups he founded and best practices for taking technological innovation from an idea to reality.

Keynote: Future of Open-Source Robotics Development
Wendy Tan White, CEO, Intrinsic
9:30 AM – 10:15 AM

In this fireside chat, Intrinsic CEO Wendy Tan White will discuss the company’s ongoing efforts to make industrial robotics more accessible and usable for millions more businesses, entrepreneurs and developers. Tan White will also discuss the recent acquisition of the Open Source Robotics Corporation and what it means going forward.

Keynote: Scalable AI Solutions for Driverless Vehicles
Laura Major, Chief Technology Officer, Motional
10:45 AM – 11:30 AM

Major will discuss Motional’s approach to developing SAE Level 4 autonomous vehicles (AVs) that can safely navigate complex road scenarios. As part of the discussion, Laura will cover the core challenges of AVs, deep learning advancements, and Motional’s innovative Machine Learning-first solutions.

Breakout Session: Robotics Roadmap to Commercialization Success
Speakers: Jennifer Apicella, Vice President, Pittsburgh Robotics Network; Reese Mozer, CEO and Co-Founder, American Robotics; Andy McMillan, Advisory Board Chairman, Cirtronics
11:45 AM – 12:30 PM

Commercialization is the impact point where all expectations for your product are tested, right when it hits the market. Successful commercialization is good for the individual company and the robotics industry as a whole. Success leads to more products, whether they are next-gen, derivative or brand-new ideas. Join us in learning from three experts in the robotics industry about how developing strategic partnerships can expedite commercial expansion and the steps required to succeed in this process. While the road to commercialization may not be linear, these executives will share their firsthand experiences and what they require from precision, detail-oriented partners.

Breakout Session: Situational Awareness Using 3D LiDAR
Cedric Hutchings, Co-Founder & CEO, Outsight
11:45 AM – 12:30 PM

Many types of robots can leverage 3D LiDAR data to gain situational awareness in real time. This awareness is essential to perform required tasks but also to enable market adoption. However, effectively using LiDAR data in real time is complex and expensive. In this presentation, attendees will learn about a new category of LiDAR software – a real-time pre-processing engine – that allows application developers and integrators to use LiDAR data from any hardware supplier and for any application. Real-world use cases and LiDAR recordings will be used to illustrate the practical applications.

Breakout Session: Achieving Scalable Interoperability with Automated Negotiation
Michael Grey, Software Engineering Manager, Intrinsic
11:45 AM – 12:30 PM

With the rapid acceleration of robot deployments, the need for heterogeneous fleet management will become standard. In this session, Intrinsic will share recent developments in the Open-RMF platform and present use cases where it has been deployed. Intrinsic will also present a roadmap about the future of Open-RMF and interoperability and some of its latest tools and capabilities, including:

  • Multi-agent planning framework – Mapf is a library for cooperative path finding
  • Task Management – the latest improvements including a flexible task framework to allow custom task definitions, multi-phase tasks, prioritization and more
  • Site Editor – a desktop or web utility to visualize and edit large deployment sites
  • Crowd Simulation – a plugin to simulate human actors with multiple behaviors
  • Obstacle Detectors – packages that infer the presence of obstacles from sensor inputs including LIDAR or 2D/3D cameras

Breakout Session: Designing Surgeon-Level Haptic Sensing for Surgical Robotics
Robert Brooks, CEO, Forcen
11:45 AM – 12:30 PM

Force and torque sensing play key roles in enabling surgical robotics, including at the tip of the instrument, trocar location/tissue contact, surgeon collaboration and the surgeon interface. During this session, attendees will learn about 13 core specifications for haptic sensors and the current state-of-the-art of what’s possible. This talk will detail best practices for implementing haptic sensors into surgical robots, including:

  • Thermal compensation and considerations under surgical drapes
  • Grounding & shielding inside ultra-compact robotic joints
  • Engineered cable assemblies for high-flex, multidimensional, tight-bend applications

Breakout Session: How Customizable Cobot Design Enables Success of Your Surgical Robotics Company
Speakers: Gene Matthews, Senior Product Manager, Kollmorgen; Dr. Jindong Tan, President and Founder, Azure Medical Innovation
11:45 AM – 12:30 PM

Collaborative robots and AI are becoming increasingly important for surgeons to perform repetitive and precise control tasks. But surgical applications have unique performance and certification requirements that are not available in the current cobot market. This presentation aims to help eliminate barriers to choosing customized surgical robots, as well as help surgical robotics companies build out their specifications so that they can focus on clinical applications. This talk will also address critical engineering considerations when specifying surgical application needs on collaborative features, AI integration in the surgical flow, and certification requirements:

  • What are the unique requirements for cobots in surgical applications?
  • How do key components such as frameless motors determine performance?
  • How do cobots and AI-enabled vision impact surgical flow?
  • How customized design can impact performance, development cycle, and certification

Breakout Session: Sensor Calibration and SLAM to increase ODD & reduce BOM cost
William Sitch, Chief Business Officer, Main Street Autonomy
2:00 PM – 2:45 PM

Autonomy and perception systems are built on the core components of sensor calibration, localization and mapping. Calibration requires targets, trained personnel and ongoing maintenance. Localization and mapping only work in certain areas with expensive sensor systems. These issues drive higher robot cost, restrict robot deployments and constrain business growth. This talk will detail three innovations that solve these problems.

Breakout Session: Simplification of Advanced Motion Control Using Integrated Servo Drives
Andrew Zucker, Mechatronics Engineer, Harmonic Drive
2:00 PM – 2:45 PM

In modern robotic applications, space comes at a premium. Power density continues to be a leading factor in robotic applications, although it is often compromised by the cabling and hardware needed to implement the power and control systems. In this session, we will explore how to simplify the design of a robotic joint without compromising performance, reliability, or advanced motion control features that normally come at the cost of bulky cabling and electronics.

Breakout Session: Developing General Purpose Robots That Push the Boundaries of Technology
Jeff Cardenas, CEO, Apptronik
2:00 PM – 2:45 PM

New hardware, sensors, algorithms, and AI technologies have opened up the ability to rethink how robots are developed for use in unstructured environments. Using a first principles approach, Apptronik has used the same platform to develop a range of robots from exoskeletons to humanoids. This has resulted in reactive, compliant, lightweight, and affordable robots that can perform a variety of tasks in existing human environments. In this session, you will learn about Apptronik’s approach to developing a platform for general-purpose robots, their use cases, and the future viability of these systems.

Breakout Session: Developing a New Generation of Robots to Transform Care in the Home
Mike Dooley, CEO, Labrador Systems
2:00 PM – 2:45 PM

Across the globe, people are living longer than ever before. This is creating huge demands on caregivers, healthcare systems and societies overall, and many regions are already experiencing a labor shortage crisis. Robotics can play a significant role in helping people live more independently for longer.

To achieve this, robotics has to transform in at least two major ways. First, we need to develop robots that can scale to be affordable for personal, 1-to-1 use, which is a dramatic change from most commercial robots today. Second, making functional robots operate autonomously in homes requires solving for much greater complexity, with far more diverse and challenging settings and use case scenarios.

In this presentation, Labrador Systems will walk through the design and development of Retriever, a personal robot built from the ground up to operate in the home, lighten the load of daily activities, extend the impact of caregivers, and ultimately help us live more independently as we age.

Breakout Session: Keys to Using ROS 2 and Other Frameworks for Medical Robots
Tom Amlicke, Software Systems Engineer, MedAcuity
2:00 PM – 2:45 PM

What is the best architectural approach to use when building medical robots? Is it ROS, ROS 2 or another open-source or commercial framework? The answer is, “it depends.” In this presentation, we will explore engineering questions concerning the level of concern, risk, design controls, and evidence across different applications of these frameworks. Looking at three hypothetical robotic systems, we will explore these approaches:

1. An application based on the da Vinci Research Kit taken through regulatory clearance
2. ROS as a test tool to verify the software requirements for a visual guidance system
3. A commercial off-the-shelf robot arm used for a medical application

Attend this session to learn how to weigh the trade-offs among these architectural approaches and how to validate their intended uses to ensure a successful submission package for FDA, EMA, or other regulatory approval.

Breakout Session: How to Cut Build Cycles in Half and Supercharge Robotics Development
Dave Evans, Co-Founder & CEO, Fictiv
3:00 PM – 3:45 PM

When it comes to new product development in robotics, there is no silver bullet. But there are common, predictable bottlenecks and inefficiencies that prevent engineering teams from operating at maximum productivity, and they can be eliminated.

Attendees will learn strategies to overcome these barriers and accelerate development. Through inspiring success stories from industry-leading companies, including Honeywell and Gecko Robotics, this talk will detail how robotics teams shaved weeks and months off development cycles to drive improved quality, speed and time-to-market outcomes. Ultimately, attendees will leave with a clear plan of action on how they can transform new product development.

Breakout Session: Coordinated Motion of a Manipulator and Mobile Base
Tiffany Cappellari, Engineer, Southwest Research Institute
3:00 PM – 3:45 PM

Robotics for large-scale fabrication or processing has relied on the ability to realize what is referred to as “coordinated motion.” This enables a robotic arm to work beyond its reach envelope by precise coordination with external axes. These external axes may be linear rails or rotational axes that extend the working envelope of the combined robotic solution. Mobile robots have also gained in adoption and use, but even in fairly complex mobile manipulator solutions, coordinated motion is only realized through the connection of the manipulator and the base via an external monitoring device. These solutions also normally make use of a “stop and go” approach in which the mobile base positions itself in a static pose before the industrial manipulator begins its operation, thus not demonstrating true coordinated motion.

Southwest Research Institute is pursuing a coordinated motion solution that will enable richer continuous processing beyond the standard reach of the manipulator without the need to tie together the base and manipulator with an external tracking device, thereby opening a new frontier of industrial mobile robots not currently available in industry.

Breakout Session: Innovation in Robotic Grasping
Speakers: Roy Belak, CEO, Nexera Robotics; Nathan Brooks, CTO, PickNik Robotics; Jeff Mahler, Co-founder and CTO, Ambi Robotics; Boston Dynamics
3:00 PM – 3:45 PM

Grasping and manipulation, the ability to directly and physically interact with and modify objects in the environment, is perhaps the greatest differentiator between robotic systems and all other classes of automated systems. Many types of robots require the ability to coordinate tactile, vision, and proprioceptive sensing to pick up and operate on all manner of objects, with goals ranging from providing human-like dexterity and autonomous manipulation, to high-precision repeatability, and on to superhuman strength and endurance. During this panel session, attendees will learn about the latest grasping and manipulation technologies and techniques commercially available, as well as solutions emerging from the lab that will allow for whole new classes of robotics applications.

Breakout Session: Human Factor Design Considerations for Healthcare Robots
Speakers: Laura Birmingham, Associate Research Director, Emergo by UL; Alix Dorfman, Managing Human Factors Specialist, Emergo by UL
3:00 PM – 3:45 PM

Although human factors engineering touches many facets of overall system design, at its core, the practice facilitates the interaction between humans and technology; it aligns a system’s design with individuals’ cognitive and physical capabilities and limitations to produce a safe and satisfying user experience. Despite the level of autonomy healthcare robotics technologies might offer, there is always a human element that requires consideration.

During this talk, the presenters will discuss human factors implications and considerations related to the design of robotic healthcare technology used in clinical and non-clinical environments. The talk will describe how robotics disrupts the four key aspects of design analyzed when supporting product development: the system’s touchpoints, intended users, intended use environment(s), and its intended users’ tasks with the system. We will illuminate how such aspects can and should influence design decisions, as well as best practices when conducting research within the regulated medical device industry.

Breakout Session: Position feedback for healthcare robotics
Astrid Stock, Product Manager, SIKO
3:00 PM – 3:45 PM

Healthcare robots are quite different from their industrial counterparts. They do not work in fenced-off areas, but rather side by side with their operators. With this in mind, safety, accuracy, and size requirements have become more critical in today’s applications. Reliable control of the robot’s position, alignment and movement is essential. Rotary and linear encoders provide position feedback from the motor and send vital information to the controller.

This talk will illustrate different measurement principles (magnetic, glass, inductive) and explain the advantages of magnetic measurement. It will differentiate between absolute and incremental systems and will discuss the different interfaces from basic incremental TTL to absolute interfaces like CANopen or BiSS-C. Attendees will also learn about trends and requirements for compact designs and highly integrated solutions.

Breakout Session: Unlocking New Applications for Mobile Robots
Speakers: Niels Jul Jacobsen, CEO, Capra Robotics; Steve Boyle CEO, Essential Aero; Amir Bousani, Founder and CEO, RGo Robotics; Mike Oitzman, Editor of Robotics, WTWH Media
4:15 PM – 5:00 PM

Traditional sensors and navigation stacks have enabled AGVs and AMRs to bring tremendous value to a finite set of indoor material handling applications. But a much broader set of additional applications remains unsolved due to limitations in more challenging environments, including outdoor, indoor/outdoor, spaces with dynamic or repetitive features and settings where mobile robots must interact seamlessly with humans. The panelists will discuss how recent technology advances are setting the stage for the next wave of innovation in mobile robots and share examples of exciting new applications that will be unlocked.

Breakout Session: ASTM Standards for Robotics, Automation, and Autonomous Systems
Adam Norton, Associate Director, NERVE Center, UMass Lowell
4:15 PM – 5:00 PM

Robotics and automated systems used in many industries still lack sufficient standard specifications for interfaces, test methods for performance comparison, and practices for implementation. These gaps can stifle industry adoption and innovation.

The ASTM F45 Committee on Robotics, Automation, and Autonomous Systems is working to fill these gaps through the development of standard terminology, practices, classifications, guides, test methods, and specifications applicable to these systems.

This talk will include an overview presentation on the committee’s recent and upcoming activities, as well as an interactive discussion session to gather industry feedback on recommendations for future standards developments to ensure alignment with the needs of the community, both from a developer and user perspective.

The Era of Robotic Unicorns
Eliot Horowitz, CEO & Founder, Viam Robotics
4:15 PM – 5:00 PM

The robotics industry is at an inflection point because software advancements can offer a full paradigm shift in how to scale a successful robotics business. At MongoDB, Horowitz’s data platform was the foundation for dozens of $1B+ high-growth software companies. That same style of software infrastructure advancement is now coming to robotics to support a wave of high-growth robotics companies.

In this session, Viam and MongoDB co-founder Eliot Horowitz will detail why this is the best time to launch a robotics business and how a modern approach to software can get you from paper to prototype to production to successful, scaled business faster than ever.

Breakout Session: Motion Control Trends for Healthcare Robots
Prabhakar Gowrisankaran, VP of Engineering and Strategy, Performance Motion Devices
4:15 PM – 5:00 PM

In this presentation we will provide an update on recent developments in motion control technologies, applications, and products that are especially important for designers of medical analytical instruments and operating room equipment.

The emphasis will be on mobile & surgical robotics, patient therapy equipment, and advances in actuators and position sensors that are driving the next generation of motion control applications that deliver more accuracy, lower treatment costs, and improved medical outcomes.

Prabh Gowrisankaran, VP of Engineering and Strategy at Performance Motion Devices, Inc. (PMD), will share his extensive experience in electronic motion control and will lead this discussion designed to be interesting for engineers and medical practitioners alike.

Healthcare Robotics Startup Showcase
4:15 PM – 5:00 PM

MassRobotics, FESTO, Mitsubishi Electric Automation, MITRE, Novanta and other key players in the healthcare and robotics space recently initiated the Healthcare Robotics Startup Catalyst Program. The goal is to advance healthcare robotics companies by providing the connections, guidance and resources they need to grow and succeed.

During this session, attendees will hear pitches from the following seven healthcare robotics startups currently in the catalyst program: Able Human Motion, Acumino, Andromeda, Maestro Surgical, Robot on Rails, Unlimited Robotics and Zeta Surgical.


Thursday, May 11, 2023

Opening Keynote: The Next Decade in Robotics
Marc Raibert, Executive Director, AI Institute
9:00 AM – 9:45 AM

This fireside chat with Marc Raibert will discuss opportunities for the robotics industry and the most important and difficult challenges facing the creation of advanced robots. It will also describe how the new Boston Dynamics AI Institute is pushing the limits of technological innovation to solve these challenges.

Keynote: The Future of Surgical Robotics
Martin Buehler, Global Head of Robotics R&D, Johnson & Johnson MedTech
10:00 AM – 10:45 AM

Johnson & Johnson, one of the world’s leading healthcare companies, gives an inside look at the end-to-end development of its Monarch and Ottava robotics platforms, as well as strategy and innovation cadence across surgical robotics for MedTech.

Breakout Session: Guerilla Product Development for Robotics
Ted Larson, CEO, OLogic
11:30 AM – 12:15 PM

Based on years developing numerous robots for companies across the world, Ted Larson, CEO of OLogic, a Silicon Valley-based robotics design and development services firm, will outline the steps in his guerilla product development program, a novel engineering approach for developing robots. The talk will discuss the concepts underlying the product development technique and use specific case studies to show how the process has been applied to the development of robotic and consumer electronics products at various commercial companies.

Breakout Session: Battery Power for Mobile Robotics – Guidelines & Solutions
Dan Friel, Battery Systems Specialist, VARTA
11:30 AM – 12:15 PM

Battery decisions for mobile robots are critical for achieving power autonomy, but they can be complex:

  • How much capacity is needed?
  • Where can I put the battery?
  • What is the best charge method?

In this session, attendees will learn the answers to these questions and more, with detailed information for designers who need reliable portable power for mobile robots. Additional topics include system considerations such as non-motive loads, battery placement issues, and how to design for regenerative braking charging. Also discussed will be the pros and cons of wired and wireless charging, plus the trade-offs between short, quick charging and once-per-shift charging methodologies.
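
As a rough illustration of the capacity question, a pack can be sized from average load, runtime and allowable depth of discharge. The short Python sketch below is a back-of-the-envelope exercise; every number in it is an assumption chosen for illustration, not guidance from the session or from VARTA.

```python
# Back-of-the-envelope battery sizing for a mobile robot.
# All values are illustrative assumptions, not vendor guidance.

avg_drive_power_w = 150.0   # average motive load while driving
non_motive_power_w = 40.0   # computers, sensors, radios (always on)
runtime_h = 8.0             # one shift between charges
depth_of_discharge = 0.8    # avoid fully draining the pack
nominal_voltage_v = 25.6    # e.g., an 8s LiFePO4 pack

# Energy consumed per shift, then the pack size needed to supply it
# without exceeding the allowed depth of discharge.
energy_wh = (avg_drive_power_w + non_motive_power_w) * runtime_h
required_wh = energy_wh / depth_of_discharge
required_ah = required_wh / nominal_voltage_v

print(f"Energy per shift: {energy_wh:.0f} Wh")
print(f"Pack size: {required_wh:.0f} Wh ({required_ah:.1f} Ah at {nominal_voltage_v} V)")
```

A real design would also budget for pack aging, temperature derating and peak loads, which is where the session’s detailed guidance comes in.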

Breakout Session: The Rise of Cobots in Healthcare
Brad Porter, Founder & CEO, Collaborative Robotics
11:30 AM – 12:15 PM

Robotics is all about leading change. Realizing the true potential of robotics requires a bold, top-down vision that aligns R&D teams, operations, finance, and global IT, along with a deep focus on the details of successful individual deployments. In this talk, Brad Porter will share his perspective on how soon collaborative robots with more human-like capability and a greater ability to collaborate with humans are coming, and the new challenges and opportunities they will present in healthcare.

Breakout Session: Using Simulation to Design and Develop Autonomous Robots
Gerard Andrews, Product Marketing, Robotics, NVIDIA
11:30 AM – 12:15 PM

Warehouse logistics and advanced manufacturing are increasingly using robotics as a critical part of their automation strategies. Robots can improve operational efficiency, enhance safety, and help companies address the persistent labor shortages observed across the globe. Developing these intelligent robotic systems, however, is a complex, challenging, and costly undertaking. Thankfully, engineers now have access to advanced simulation tools that can speed the design, development, and testing processes.

This talk will describe the many ways NVIDIA Isaac Sim can be used to accelerate the development and deployment of robots, including advanced AI and computer vision. Specifically, attendees will learn how simulation can test robot applications in photo-realistic, physically accurate digital twin environments. In addition, the robots can be placed in increasingly complex simulations involving digital humans and fleets of robots to optimize operational KPIs. This session is designed as an introduction to photo-realistic 3D simulation for robots and is appropriate for all levels.

Breakout Session: Launching Mobile Manipulation Robots in Hospitals
Siddhartha Banerjee, Lead Robotics Engineer, Diligent Robotics
11:30 AM – 12:15 PM

Over the years, robots have made huge strides in the mobile transport space as well as warehouse automation. However, mobile manipulation robots operating around people in semi-structured environments are still few and far between. Diligent Robotics is pushing the boundaries of socially-aware mobile manipulation by deploying robots into hospitals. This talk will cover the challenges of a startup putting Moxi, a socially-aware mobile manipulation platform, into a semi-structured environment with people (i.e. hospitals). It will include lessons learned and key takeaways as well as insights into healthcare automation given the rise of COVID-19 and the impact of labor shortages.

Form and Function Challenge Winners Announced
12:30 PM

Breakout Session: Oxidizing Your Software Development: Rust for Robots
Zach Goins, Senior Autonomy Software Engineer, Scythe Robotics
2:00 PM – 2:45 PM

For a long time, robotics software development has been forced to make the choice between safe software and fast software. Early on, Scythe Robotics chose a third path: Rust, the new programming language that promised a break from that dichotomy. It was a lonely and bold choice at the time, but it has paid huge dividends.

Scythe Robotics, a developer of commercial autonomous lawnmowers, will discuss its decision to use Rust and share some learnings along the way. The company will describe how it drew on Rust’s strengths to build a robot software platform that is both reliable and performant, while still enabling high-velocity development. For example, leveraging the best parts of both ROS and Rust is possible with the mature rosrust crate, which allows engineers to make use of existing ROS tools as well as the wider crates.io package ecosystem. Further, directly integrating with existing C/C++ software is straightforward as well, enabling reuse of proven libraries and device-specific features like compute accelerators and other C/C++ platform SDKs.

Scythe’s rapid progress is, in no small part, due to the strengths of Rust and the confidence it has unlocked for developers to move fast and build things reliably. This talk will be a guide for others looking to oxidize and accelerate their software development process.

Breakout Session: Building Production-level Robots for Farming
Thomas Palomares, CTO & Co-Founder, FarmWise
2:00 PM – 2:45 PM

Vegetable farming still relies on labor-intensive processes. Weed control in particular, which is critical to ensure good yields, is mostly handled by hand crews who walk the field with hoes. After prototyping for 5 years in the epicenter of lettuce production in North America, FarmWise is about to release its first commercial robotic weeder. This machine can distinguish crops from weeds and mechanically uproot the undesired plants with extreme precision using deep learning and actuation control. This talk will walk you through the team’s journey and learnings from the first demo to the architectural decisions behind their new product.

Breakout Session on Legged Locomotion
Boston Dynamics
2:00 PM – 2:45 PM

Breakout Session: From Product Idea to Robotic Healthcare Solution – An Overview
Tobias Luksch, Manager, R&D, Robotics, ITK Engineering
2:00 PM – 2:45 PM

It is a long road with numerous hurdles to take a medical robot from an initial idea to a certified product. To achieve this, some important questions must be answered, such as:

  • What is the intended use?
  • What is the legal framework?
  • What are the main risks?
  • Which methods can be used to evaluate concepts, to develop prototypes and to verify the final product?

This session will give an overview of the essential steps required to turn a product idea into a market-ready healthcare robot. Attendees will be provided with practical advice on how to implement these steps, as well as a European perspective on the regulatory aspects of the product life cycle.

Breakout Session: Magnetic Robots for Diagnosis and Surgery
Giovanni Pittiglio, Research Fellow, Boston Children’s Hospital, Harvard Medical School
2:00 PM – 2:45 PM

Robotics has the potential to democratize healthcare by complementing a surgeon’s skills and guaranteeing consistent quality of care. With the aim of reducing pain and discomfort and limiting disruptive interaction with the anatomy, soft magnetic robots are a novel, emerging solution. This technology enables remote actuation, which equates to smaller and softer devices. This talk will introduce the potential for magnetic soft robots to overcome the main limitations of alternative approaches, such as minimally invasive surgery, which is difficult to scale due to the need for highly skilled personnel. The session will discuss a range of applications magnetic robots can cover in healthcare, with a focus on diagnosis and surgery. The main challenges and future research goals will be introduced and reviewed.

Breakout Session: Motion Control and Robotics Opportunities
Speakers: Dave Rollinson, Co-Founder, HEBI Robotics
3:00 PM – 3:45 PM

Breakout Session: Using Emulation to Accelerate the Development of Wearable Machines
Josh Caputo, Founder, President & CEO, Humotech
3:00 PM – 3:45 PM

Emulation is a concept that will be familiar to anyone engaged in the development of computing systems (or fans of retro gaming), but did you know research & development groups around the world are leveraging the approach to develop more personalized and advanced prosthetics, orthotics, exoskeletons, wearable robotics, and more?

Wearable systems are costly to prototype and difficult to perfect, so it is crucial for this burgeoning industry to reimagine R&D processes to unlock greater efficiency, throughput, and more wildly successful products. Learn more about the opportunities, challenges, and innovative approaches being explored towards developing technology that augments human biomechanics and could one day be accessible to anyone looking for a boost to their physical performance.

Future of Autonomy in Robotics
Ryan Gariepy, Co-Founder & CTO, Clearpath Robotics & OTTO Motors
3:00 PM – 3:45 PM

From vacuums to quadrupeds to self-driving cars, robots are becoming increasingly physically capable, intelligent and cost-effective. As with any emerging industry, the earliest innovators didn’t have the luxury of decades of fundamental knowledge and best practices available to them. They built from the ground up and learned the hard way what not to do.

Today, we’re entering a new era of robotics. The most successful robotics companies of the next decade won’t be the ones building from scratch. They’ll build on existing platforms that have been hardened to solve very specific problems, including problems in autonomy, fleet management, simulation, and more throughout the robotics stack.

In this presentation, the audience will learn how robotics development has been done recently, what is changing, and what is coming in the next decade from an expert with fifteen years of experience in robot development and deployment across a variety of industries. Market expectations surrounding robotic capabilities, security and privacy, and robustness and safety are becoming increasingly difficult for new entrants to match. Nevertheless, a variety of market forces are making building robots cheaper and easier than ever before, and demand for robotics has never been higher!

Just as a new software company today wouldn’t build its own cloud computing platform, and instead would use AWS, the next generation of robotics companies are not going to start with a hodgepodge of ROS nodes and custom circuit boards. It is highly likely that some of the world’s largest robotics companies haven’t even been founded yet!

Closing Keynote: Developing Robots for Final Frontiers
Nicolaus Radford, CEO, Nauticus Robotics
4:00 PM – 4:45 PM

Space is commonly referred to as the “final frontier.” But Nicolaus Radford and the team at Nauticus Robotics believe the world’s oceans are the utmost near-term (and largely unexplored) final frontier. Founded by former NASA engineers, Nauticus is leading the way by developing novel ocean robotic platforms for unprecedented ways of working in and exploring the aquatic domain, while challenging the less-than-desirable and archaic paradigm of the legacy industry. The company’s vision is to become the most impactful ocean robotics company and to disrupt the current ocean services paradigm through the integration of autonomous robotic technologies. The deep sea is vast, full of potential, and yet remains largely as uncharted as space itself – and Nauticus is at the forefront of unlocking its possibilities.

This keynote will chart Radford’s journey from developing humanoid robotics for space and leveraging that experience to form Nauticus and its revolutionary ocean robotics portfolio. His work at NASA heavily influenced the advancements at Nauticus as the company develops robots capable of aiding in national security, repairing oil pipelines, and inspecting windfarms — all while significantly reducing emissions and hazards to human counterparts. During his keynote, Radford will provide insights about both environments, discuss the business and technology of Nauticus’ current work and explain his vision for the future of ocean technology and robotics.

The post Robotics Summit & Expo full conference agenda appeared first on The Robot Report.

]]>
https://www.therobotreport.com/robotics-summit-expo-2023-full-conference-agenda/feed/ 0
How ChatGPT can control robots https://www.therobotreport.com/microsoft-demos-how-chatgpt-can-control-robots/ https://www.therobotreport.com/microsoft-demos-how-chatgpt-can-control-robots/#respond Wed, 22 Feb 2023 18:44:02 +0000 https://www.therobotreport.com/?p=565080 Microsoft researchers use ChatGPT to write computer code that can control a robot arm and an aerial drone.

The post How ChatGPT can control robots appeared first on The Robot Report.

]]>

Microsoft researchers controlled this robotic arm using ChatGPT. | Credit: Microsoft

By now, you’ve likely heard of ChatGPT, OpenAI’s language model that can generate somewhat coherent responses to a variety of prompts and questions. It’s primarily being used to generate text, translate information, make calculations and explain topics you’re looking to learn about.

Researchers at Microsoft, which has invested billions into OpenAI and recently integrated ChatGPT into its Bing search engine, extended the capabilities of ChatGPT to control a robotic arm and aerial drone. Earlier this week, Microsoft released a technical paper that describes a series of design principles that can be used to guide language models toward solving robotics tasks.

“It turns out that ChatGPT can do a lot by itself, but it still needs some help,” Microsoft wrote about its ability to program robots.

Prompting LLMs for robotics control poses several challenges, Microsoft said, such as providing a complete and accurate description of the problem, identifying the right set of allowable function calls and APIs, and biasing the answer structure with special arguments. To make effective use of ChatGPT for robotics applications, the researchers constructed a pipeline composed of the following steps:

1. First, they defined a high-level robot function library. This library can be specific to the form factor or scenario of interest and should map to actual implementations on the robot platform while being named descriptively enough for ChatGPT to follow.
2. Next, they built a prompt for ChatGPT that described the objective while also identifying the set of allowed high-level functions from the library. The prompt can also contain information about constraints, or about how ChatGPT should structure its responses.
3. The user stayed in the loop to evaluate code output by ChatGPT, either through direct analysis or through simulation, and provided feedback to ChatGPT on the quality and safety of the output code.
4. After iterating on the ChatGPT-generated implementations, the final code can be deployed onto the robot.
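
To make the pipeline concrete, here is a minimal Python sketch of steps 1 through 3. The robot function library, prompt wording and model name are illustrative assumptions, not Microsoft’s actual code; the client call follows the standard openai Python package.

```python
# A minimal sketch of the prompting pipeline described above.
# The robot functions and prompt text are illustrative, not Microsoft's API.
from openai import OpenAI

# Step 1: a high-level robot function library with descriptive names.
ROBOT_API = """Available functions (the only ones you may call):
- move_to(x, y, z): move the end effector to a position in meters.
- grasp(): close the gripper.
- release(): open the gripper.
- get_block_positions(): return a list of (x, y, z) block positions."""

# Step 2: a prompt stating the objective, the allowed functions and
# constraints on how the answer should be structured.
prompt = f"""You control a robot arm. {ROBOT_API}
Task: stack all blocks into a single tower at (0.4, 0.0, 0.0).
Respond with Python code only, using only the functions above."""

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
generated_code = response.choices[0].message.content

# Step 3: keep a human in the loop -- review or simulate the generated
# code, iterate on the prompt, and only then deploy it (step 4).
print(generated_code)
```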

Examples of ChatGPT controlling robots

In one example, Microsoft researchers used ChatGPT in a manipulation scenario with a robot arm. The team used conversational feedback to teach the model how to compose the originally provided APIs into more complex high-level functions that ChatGPT coded by itself. Using a curriculum-based strategy, the model was able to chain these learned skills together logically to perform operations such as stacking blocks.

The model was also able to build the Microsoft logo out of wooden blocks. It was able to recall the Microsoft logo from its internal knowledge base, “draw” the logo as SVG code, and then use the skills learned above to figure out which existing robot actions can compose its physical form.

Researchers also tried to control an aerial drone using ChatGPT. First, they fed ChatGPT a rather long prompt laying out the computer commands it could write to control the drone. After that, the researchers could make requests to instruct ChatGPT to control the robot in various ways. This included asking ChatGPT to use the drone’s camera to identify a drink, such as coconut water and a can of Coca-Cola. It was also able to write code structures for drone navigation based solely on the prompt’s base APIs, according to the researchers.

“ChatGPT asked clarification questions when the user’s instructions were ambiguous and wrote complex code structures for the drone such as a zig-zag pattern to visually inspect shelves,” the team said.

Microsoft said it also applied this approach to a simulated domain, using the Microsoft AirSim simulator. “We explored the idea of a potentially non-technical user directing the model to control a drone and execute an industrial inspection scenario. We observe from the following excerpt that ChatGPT is able to effectively parse intent and geometrical cues from user input and control the drone accurately.”
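
For readers who want to experiment with something similar in simulation, the sketch below shows the kind of small, reviewable wrapper functions a prompt might expose over AirSim’s standard Python client. The wrapper names are our own illustration, not Microsoft’s; only the airsim calls are the library’s real API.

```python
# Illustrative high-level wrappers over the AirSim Python API
# (pip install airsim; requires a running AirSim simulator).
import airsim

client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

def takeoff():
    """Take off and hover (blocking)."""
    client.takeoffAsync().join()

def fly_to(x, y, z, speed=3.0):
    """Fly to (x, y, z) in NED coordinates (z is negative upward)."""
    client.moveToPositionAsync(x, y, z, speed).join()

def land():
    """Land at the current position (blocking)."""
    client.landAsync().join()

# A prompt would list only these wrappers, so model-generated code stays
# within a small surface a human can review before running, e.g.:
# takeoff(); fly_to(10, 0, -5); land()
```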

Key limitation

The researchers did admit this approach has a major limitation: ChatGPT can only write the code for the robot based on the initial prompt the human gives it. A human engineer has to thoroughly explain to ChatGPT how the application programming interface for a robot works, otherwise, it will struggle to generate applicable code.

“We emphasize that these tools should not be given full control of the robotics pipeline, especially for safety-critical applications. Given the propensity of LLMs to eventually generate incorrect responses, it is fairly important to ensure solution quality and safety of the code with human supervision before executing it on the robot. We expect several research works to follow with the proper methodologies to properly design, build and create testing, validation and verification pipelines for LLM operating in the robotics space.

“Most of the examples we presented in this work demonstrated open perception-action loops where ChatGPT generated code to solve a task, with no feedback provided to the model afterwards. Given the importance of closed-loop controls in perception-action loops, we expect much of the future research in this space to explore how to properly use ChatGPT’s abilities to receive task feedback in the form of textual or special-purpose modalities.”

Microsoft said its goal with this research is to see if ChatGPT can think beyond text and reason about the physical world to help with robotics tasks.

“We want to help people interact with robots more easily, without needing to learn complex programming languages or details about robotic systems. The key challenge here is teaching ChatGPT how to solve problems considering the laws of physics, the context of the operating environment, and how the robot’s physical actions can change the state of the world.”

The post How ChatGPT can control robots appeared first on The Robot Report.

]]>
https://www.therobotreport.com/microsoft-demos-how-chatgpt-can-control-robots/feed/ 0
Meet the Robotics Summit & Expo keynote speakers https://www.therobotreport.com/meet-the-robotics-summit-expo-keynote-speakers/ https://www.therobotreport.com/meet-the-robotics-summit-expo-keynote-speakers/#respond Mon, 06 Feb 2023 17:02:44 +0000 https://www.therobotreport.com/?p=564951 Martin Buehler, Howie Choset, Marc Raibert and more to keynote the 2023 Robotics Summit & Expo, the world's leading development event for commercial robots.

The post Meet the Robotics Summit & Expo keynote speakers appeared first on The Robot Report.

]]>

The Robotics Summit & Expo, produced by The Robot Report, has announced the keynote lineup for the May 10-11 event at the Boston Convention and Exhibition Center. The Robotics Summit is the world’s leading event focused on the design and development of commercial robots.

Click here to see the Robotics Summit speaker lineup. Registration for the Robotics Summit is also open. Register by March 9 to take advantage of the early bird price of $395 for full-conference passes. Academic registration is $295 and expo-only passes are just $75. Discounted group rates are available. Email events@wtwhmedia.com for more information.

You can also check out the agenda here and the expo floor here.

Robotics Summit & Expo Keynotes

Idea to Reality: Commercializing Robotics Technologies

Howie Choset, Professor of Robotics, Carnegie Mellon University
May 10: 8:45 AM – 9:30 AM

Turning a technology developed inside a lab into a successful robotics company is no easy task. Howie Choset has done this several times with companies such as Medrobotics (surgical robots), Hebi Robotics (modular robots) and Bito Robotics (robot software). Choset will share insights about the robotics startups he founded and best practices for taking technological innovation from an idea to reality.


Future of Open-Source Robotics Development

Wendy Tan White, CEO, Intrinsic
May 10: 9:30 AM – 10:15 AM

This conversation with Intrinsic CEO Wendy Tan White will discuss the company’s ongoing efforts to make industrial robotics more accessible and usable for millions more businesses, entrepreneurs and developers. Tan White will also discuss the recent acquisition of the Open Source Robotics Corporation and what it means going forward.


Scalable AI Solutions for Driverless Vehicles

Laura Major, CTO, Motional
May 10: 10:45 AM – 11:30 AM

Laura Major will discuss Motional’s approach to developing SAE Level 4 autonomous vehicles (AVs) that can safely navigate complex road scenarios. As part of the discussion, Laura will cover the core challenges of AVs, deep learning advancements, and Motional’s innovative Machine Learning-first solutions.


The Next Decade in Robotics

Marc Raibert, Executive Director, The AI Institute
May 11: 9 AM – 9:45 AM

This fireside chat with Marc Raibert will discuss opportunities for the robotics industry and the most important and difficult challenges facing the creation of advanced robots. It will also describe how the new Boston Dynamics AI Institute is pushing the limits of technological innovation to solve these challenges.


The Future of Surgical Robotics

Martin Buehler, Global Head of Robotics R&D, Johnson & Johnson MedTech
May 11: 10 AM – 10:45 AM

Johnson & Johnson, one of the world’s leading healthcare companies, gives an inside look at the end-to-end development of its Monarch and Ottava robotics platforms, as well as strategy and innovation cadence across surgical robotics for MedTech.


Developing Robots for Final Frontiers

Nicolaus Radford, Founder & CEO, Nauticus Robotics
May 11: 4 PM – 4:45 PM

Space is commonly referred to as the “final frontier.” But Nicolaus Radford and the team at Nauticus Robotics believe the world’s oceans are the utmost near-term (and largely unexplored) final frontier. Founded by former NASA engineers, Nauticus is leading the way by developing novel ocean robotic platforms for unprecedented ways of working in and exploring the aquatic domain, while challenging the less-than-desirable and archaic paradigm of the legacy industry. The company’s vision is to become the most impactful ocean robotics company and to disrupt the current ocean services paradigm through the integration of autonomous robotic technologies. The deep sea is vast, full of potential, and yet remains largely as uncharted as space itself – and Nauticus is at the forefront of unlocking its possibilities.

This keynote will chart Radford’s journey from developing humanoid robotics for space and leveraging that experience to form Nauticus and its revolutionary ocean robotics portfolio. His work at NASA heavily influenced the advancements at Nauticus as the company develops robots capable of aiding in national security, repairing oil pipelines, and inspecting windfarms — all while significantly reducing emissions and hazards to human counterparts. During his keynote, Radford will provide insights about both environments, discuss the business and technology of Nauticus’ current work and explain his vision for the future of ocean technology and robotics.

Sponsorship Opportunities

For information about Robotics Summit & Expo sponsorship and exhibition opportunities, please download the prospectus and/or contact Colleen Sepich at csepich@wtwhmedia.com.

The post Meet the Robotics Summit & Expo keynote speakers appeared first on The Robot Report.

]]>
https://www.therobotreport.com/meet-the-robotics-summit-expo-keynote-speakers/feed/ 0
Why roboticists should prioritize human factors https://www.therobotreport.com/why-roboticists-should-prioritize-human-factors/ https://www.therobotreport.com/why-roboticists-should-prioritize-human-factors/#respond Thu, 02 Feb 2023 15:00:48 +0000 https://www.therobotreport.com/?p=564721 Human systems engineering aims to combine engineering and psychology to create systems that are designed to work with humans' capabilities and limitations.

The post Why roboticists should prioritize human factors appeared first on The Robot Report.

]]>

Draper is a nonprofit engineering company that helps private and public entities better design robotic systems. | Source: Draper

Human systems engineering aims to combine engineering and psychology to create systems that are designed to work with humans’ capabilities and limitations. Interest in the subject has grown among government agencies, like the FDA, the FAA and NASA, as well as in private sectors like cybersecurity and defense. 

More and more, we’re seeing robots deployed in real-world situations that have to work alongside or directly with people. In manufacturing and warehouse settings, it’s common to see collaborative robots (cobots) and autonomous mobile robots (AMRs) work alongside humans with no fencing or restrictions to divide them. 

Dr. Kelly Hale, of Draper, a nonprofit engineering innovation company, has seen that too often human factors principles are an afterthought in the robotics development process. She gave some insight into things roboticists should keep in mind to make robots that can successfully work with humans. 

Specifically, Hale outlined three overarching ideas that roboticists should keep in mind: start with your end goal in mind, consider how human and robot limitations and strengths can work together and minimize communication to make it as efficient as possible. 




Start with an end goal in mind

It’s important that human factors are considered at every stage of the development process, not just at the end when you’re beginning to put a finished system into the world, according to Dr. Hale. 

“There’s not as many tweaks and changes that can be made [at the end of the process],” Dr. Hale said. “Whereas if we were brought in earlier, some small design changes probably would have made that interface even more useful.” 

Once the hardware capabilities of a system are set, Dr. Hale’s team has to work around those parameters. In the early design phase, researchers should consider not only how a system functions but where and how a human comes in. 

“I like to start with the end in mind,” Dr. Hale said. “And really, that’s the operational impact of whatever I’m designing, whether it’s an operational system, whether it’s a training system, whatever it is. I think that’s a key notion of the human-centered system, really saying, okay, at the end of the day, how do I want to provide value to the user through this increased capability?”

Working with human limitations and robot limitations

“From my perspective, human systems engineering is really about combining humans and technology in the best way so that the overall system can be more capable than the parts,” Dr. Hale said. “So more useful than a human by themselves or a machine or a system by themselves.”

There are many questions roboticists should ask themselves early in the process of building their systems. Roboticists should have an understanding of human capabilities and limitations and think about whether they’re being effectively considered in the system’s design, according to Dr. Hale. They should also consider human physical and cognitive capabilities, as there’s only so much data a human can handle at once. 

Knowing human limitations will help roboticists build systems that fill in those gaps and, alternatively, they can build systems that maximize the things that humans are good at. 

Another hurdle to consider when building systems to work with humans is building trust with the people working with them. It’s important for people working alongside robots to understand what the robot can do, and trust that it will do it consistently. 

“Part of it is building that situational awareness and an understanding from the human’s perspective of the system and what its capabilities are,” Dr. Hale said. “To have trust, you want to make sure that what I believe the system is capable of matches the automation capability.” 

For Dr. Hale, it’s about pushing humans and robotic systems toward learning from each other and having the ability to grow together.

For example, while driving, there are many things humans can do better than autonomous vehicles. Humans have a better understanding of the complexity of road rules, and can better read cues from other drivers. At the same time, there are many things autonomous vehicles do better than humans. With advanced sensors and vision, they have fewer blind spots and can see things from farther away than humans can.

In this case, the autonomous system can learn from human drivers as they’re driving, taking note of how they respond to tricky situations. 

“A lot of it is having that shared experience and having the understanding of the baseline of what the system’s capable of, but then having that learning opportunity with this system over time to really kind of push the boundaries.”

Making systems that communicate effectively with humans

People are able to discern when a system is not optimized for their use. The manner and frequency with which the technology interacts with humans may be a dead giveaway.

“What you’ll find with some of the systems that were less ideally designed, you start to get notified for everything,” Dr. Hale said. 

Dr. Hale compared these systems to Clippy, the animated paperclip that used to show up in Microsoft Word. Clippy was infamous for butting in too often to tell users things they already knew. A robotic system that interrupts people too often while they’re working, with information that isn’t important, results in a poor user experience.

“Even with those systems that have a lot of user experience and human factors considered, there are still those touch points and those endpoints that make it tricky. And to me, it’s a lot of those ‘false alarms’, where you’re getting notified when you don’t necessarily want to be,” Dr. Hale said. 

Dr. Hale also advises that roboticists should consider access and maintenance when designing robots to prevent downtime. 

With these things in mind, Hale said the robotic development process can be greatly shortened, resulting in a robot that not only works better for the people who need to work with it, but can also be quickly deployed in many environments.

The post Why roboticists should prioritize human factors appeared first on The Robot Report.

]]>
https://www.therobotreport.com/why-roboticists-should-prioritize-human-factors/feed/ 0