Mobility / Navigation Archives - The Robot Report
https://www.therobotreport.com/category/design-development/mobility-navigation/
Robotics news, research and analysis

Capra Robotics' AMRs to use RGo Perception Engine
https://www.therobotreport.com/capra-robotics-amrs-to-use-rgo-perception-engine/
Wed, 05 Apr 2023 21:19:21 +0000

RGo Robotics, a company developing artificial perception technology that enables mobile robots to understand complex surroundings and operate autonomously, announced significant strategic updates. The announcements include leadership appointments, new customers and an upcoming product release.

RGo develops AI-powered perception technology that gives autonomous mobile robots 3D, human-level perception: the ability to understand complex surroundings, operate autonomously and localize with centimeter-scale accuracy in any environment. In Q2 2023, RGo said it will release the next iteration of its Perception Engine software, which will include:

  • An indoor-outdoor mode: a breakthrough capability that allows mobile robots to operate in all environments, both indoors and outdoors.
  • A high-precision mode that enables millimeter-scale precision for docking and similar use cases.
  • Control Center 2.0: a redesigned configuration and admin interface. This new version supports global map alignment, advanced exploration capabilities and new map-sharing utilities.

RGo separately announced support for NVIDIA Jetson Orin System-on-Modules that enables visual perception for a variety of mobile robot applications.

RGo will exhibit its technology at LogiMAT 2023, Europe’s biggest annual intralogistics tradeshow, from April 25-27, in Stuttgart, Germany at Booth 6F59. The company will also sponsor and host a panel session “Unlocking New Applications for Mobile Robots” at the Robotics Summit and Expo in Boston from May 10-11.

Leadership announcements

RGo also announced four leadership appointments: Yael Fainaro as chief business officer and president; Mathieu Goy as head of European sales; Yasuaki Mori as executive consultant for APAC market development; and Amy Villeneuve as a member of the board of directors.

“It is exciting to have reached this important milestone. The new additions to our leadership team underpin our evolution from a technology innovator to a scaling commercial business model including new geographies,” said Amir Bousani, CEO and co-founder, RGo Robotics.

Goy, based in Paris, and Mori, based in Tokyo, join with extensive sales experience in the European and APAC markets. RGo is establishing an initial presence in Japan this year with growth in South Korea planned for late 2023.


“RGo has achieved impressive product maturity and growth since exiting stealth mode last year,” said Fainaro. “The company’s vision-based localization capabilities are industrial-grade, extremely precise and ready today for even the most challenging environments. This, together with higher levels of 3D perception, brings tremendous value to the rapidly growing mobile robotics market. I’m looking forward to working with Amir and the team to continue growing RGo in the year ahead.”

Villeneuve joins RGo’s board of directors with leadership experience in the robotics industry, including her time as the former COO and president of Amazon Robotics. “I am very excited to join the team,” said Villeneuve. “RGo’s technology creates disruptive change in the industry. It reduces cost and adds capabilities to mobile robots in logistics, and enables completely new applications in emerging markets including last-mile delivery and service robotics.”

Customer traction

After comprehensive field trials in challenging indoor and outdoor environments, RGo continued its commercial momentum with new customers. The design wins are with market-leading robot OEMs across multiple vertical markets, including logistics and industrial autonomous mobile robots, forklifts, outdoor machinery and service robots.

Capra Robotics, an award-winning mobile robot manufacturer based in Denmark, selected RGo’s Perception Engine for its new Hircus mobile robot platform.

“RGo continues to develop game-changing navigation technology,” said Niels Jul Jacobsen, CEO of Capra and founder of Mobile Industrial Robots. “Traditional localization sensors either work indoors or outdoors – but not both. Combining both capabilities into a low-cost, compact and robust system is a key aspect of our strategy to deliver mobile robotics solutions to the untapped ‘interlogistics’ market.”

How Amazon Astro moves through its environment
https://www.therobotreport.com/how-amazon-astro-moves-smoothly-through-its-environment/
Tue, 28 Mar 2023 22:35:17 +0000

Amazon recently detailed how Astro, the company’s multi-purpose home robot, navigates through its environment with limited onboard computational capabilities. Astro’s sensor field of view and onboard compute aren’t nearly as powerful as those of other autonomous robots. While this makes it a more affordable option for consumers, it also makes it more challenging for Amazon to deliver a high quality of motion.

Amazon counteracts Astro’s limited computational capabilities with algorithms and software designed to allow the robot to move more gracefully.

Predictive planning is a key aspect of Astro’s navigational abilities. Astro’s limited computational capabilities mean it struggles with a large sensing-to-actuation latency. To combat this, Astro makes predictions about the movements of the objects around it, like people. The robot predicts where those objects will be and what its surroundings will look like at the end of its current planning cycle, helping it to account for latencies in sensing and mapping while it’s moving.
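
As a rough illustration of the idea, the sketch below projects nearby obstacles forward by the sensing-to-actuation latency using a constant-velocity assumption. The model, function names and numbers are illustrative only, not Amazon's implementation.

```python
import numpy as np

def predict_obstacles(obstacles, latency_s):
    """Project each obstacle forward by the sensing-to-actuation latency.

    obstacles: list of dicts with 'position' (x, y) and 'velocity' (vx, vy),
    in meters and meters per second. Returns predicted positions at the
    moment the new plan will actually take effect.
    """
    predicted = []
    for obs in obstacles:
        pos = np.asarray(obs["position"], dtype=float)
        vel = np.asarray(obs["velocity"], dtype=float)
        predicted.append({"position": pos + vel * latency_s, "velocity": vel})
    return predicted

# Example: a person 2 m ahead walking 1.2 m/s across the robot's path,
# with 0.3 s of combined sensing, mapping and planning latency (made-up numbers).
people = [{"position": (2.0, 0.0), "velocity": (0.0, 1.2)}]
print(predict_obstacles(people, latency_s=0.3))
```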

All of Astro’s plans are based on its latest sensor data and on what it expects its surroundings to look like when the plan takes effect. The robot can make these predictions thanks to its ability to estimate and handle uncertainties and the risk of collisions.

Astro’s motivation to move towards its goal is always weighed dynamically with its perceived level of uncertainty. This means Astro evaluates uncertainty-adjusted progress for each candidate motion, allowing it to focus on getting to its goal when it determines risk is low, and focus on evasion when risk is high. 

The robot also uses trajectory optimization software to operate in its environment. Astro considers multiple candidate trajectories and picks the best one in each planning cycle. The robot plans 10 times a second and evaluates a few hundred trajectory candidates in each instance. 

Astro considers safety, smoothness of motion and progress toward its end goal. With these three criteria, the robot picks the trajectory that will result in optimal behavior. Other approaches limit the number of choices a robot can make to a discrete set, or a state lattice, but Amazon’s formulation is continuous, helping the robot move smoothly. 
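
The selection step can be pictured as scoring every candidate against weighted cost terms and keeping the cheapest one. The sketch below is a generic illustration of that pattern; the cost terms, weights and data layout are assumptions, not Amazon's actual formulation.

```python
import numpy as np

def score_trajectory(traj, obstacles, goal, w_safety=1.0, w_smooth=0.3, w_progress=0.5):
    """Return a cost for one candidate trajectory (lower is better).

    traj: (N, 2) array of planned positions over the planning horizon.
    obstacles: (M, 2) array of predicted obstacle positions.
    goal: (2,) array with the goal position.
    """
    # Safety: penalize getting close to any predicted obstacle.
    dists = np.linalg.norm(traj[:, None, :] - obstacles[None, :, :], axis=-1)
    safety_cost = np.sum(1.0 / (dists.min(axis=1) + 1e-3))

    # Smoothness: penalize large changes between consecutive steps.
    smooth_cost = np.sum(np.linalg.norm(np.diff(traj, n=2, axis=0), axis=1))

    # Progress: reward ending closer to the goal.
    progress_cost = np.linalg.norm(traj[-1] - goal)

    return w_safety * safety_cost + w_smooth * smooth_cost + w_progress * progress_cost

def pick_best(candidates, obstacles, goal):
    """Evaluate every candidate (a few hundred per planning cycle) and keep the cheapest."""
    costs = [score_trajectory(t, obstacles, goal) for t in candidates]
    return candidates[int(np.argmin(costs))]

# Toy demo: three straight-line candidates heading at different angles toward a goal.
goal = np.array([3.0, 0.0])
obstacles = np.array([[1.5, 0.2]])
candidates = [np.linspace([0, 0], [3 * np.cos(a), 3 * np.sin(a)], 20) for a in (-0.4, 0.0, 0.4)]
print(pick_best(candidates, obstacles, goal)[-1])  # endpoint of the chosen trajectory
```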

Astro doesn’t just have to plan where its two wheels and body will go; it also has to plan movements for its screen. The screen is used to communicate motion and intent and for active perception, so Astro plans to orient it toward the person it’s following, or in the direction it intends to travel, so the humans around it know what it plans to do.

Amazon released Astro in September 2021. The robot can be used for a variety of things, including home monitoring, videoconferencing with family and friends, entertaining children, and more. The voice-controllable robot can recognize faces, deliver items to specific people after a human places them in its storage bin, and use third-party accessories to, for example, record blood pressure. It can detect the sound of a smoke alarm, carbon monoxide detector or breaking glass. If you have a Ring account, Astro can send you notifications if it notices something unusual.

Linux Foundation launches Overture Maps Foundation
https://www.therobotreport.com/linux-foundation-launches-overture-maps-foundation/
Sun, 01 Jan 2023 14:00:34 +0000

The Overture Maps Foundation, created by the Linux Foundation, aims to help developers who build map services or use geospatial data. | Source: Overture Maps Foundation

The Linux Foundation announced it formed the Overture Maps Foundation, a collaborative effort to create interoperable open map data as a shared asset. The Overture Maps Foundation aims to strengthen mapping services worldwide and enable current and next-generation mapping products. These mapping services could be crucial to robotic applications like autonomous driving. 

Currently, companies developing and rolling out autonomous vehicles have to spend massive amounts of time and money meticulously mapping the cities they’re deploying in. Additionally, those companies have to continuously remap those cities to account for any changes in road work or traffic laws. 

The foundation’s founding members are Amazon Web Services (AWS), Meta, Microsoft and TomTom. Overture hopes to add more members in the future to incorporate a wider range of signals and data inputs. Members of the foundation will combine their resources to create map data that is complete, accurate and refreshed as the physical world changes. The resulting data will be open and extensible under an open data license.

“Mapping the physical environment and every community in the world, even as they grow and change, is a massively complex challenge that no one organization can manage. Industry needs to come together to do this for the benefit of all,” Jim Zemlin, executive director for the Linux Foundation, said. “We are excited to facilitate this open collaboration among leading technology companies to develop high quality, open map data that will enable untold innovations for the benefit of people, companies, and communities.”

The Overture Maps Foundation aims to build maps using data from multiple sources, including Overture members, civic organizations and open data sources, and to simplify interoperability by creating a system that links entities from different data sets to the same real-world entities. All data used by Overture will undergo validation to ensure the mapping data is free of errors, breakage and vandalism.

Overture also aims to help drive the adoption of a common, structured and documented data schema to create an easy-to-use ecosystem of map data. Currently, developers looking to create detailed maps have to source and curate their data from disparate sources, which can be difficult and expensive. Not to mention, many datasets use different conventions and vocabulary to reference the same real-world entities. 

“Microsoft is committed to closing the data divide and helping organizations of all sizes to realize the benefits of data as well as the new technologies it powers, including geospatial data,” Russell Dicker, Corporate Vice President, Product, Maps and Local at Microsoft, said. “Current and next-generation map products require open map data built using AI that’s reliable, easy-to-use and interoperable. We’re proud to contribute to this important work to help empower the global developer community as they build the next generation of location-based applications.” 

Overture hopes to release its first datasets in the first half of 2023. The initial release will include basic layers such as buildings, roads and administrative information, but Overture plans to steadily add more layers, such as places, routing and 3D building data.

Inuitive sensor modules bring VSLAM to AMRs
https://www.therobotreport.com/inuitive-sensor-modules-vslam-amrs/
Tue, 13 Dec 2022 19:31:53 +0000

Inuitive introduces the M4.5S (center) and M4.3WN (right) sensor modules that add VSLAM for AMR and AGVs.

Inuitive, an Israel-based developer of vision-on-chip processors, launched its M4.5S and M4.3WN sensor modules. Designed to integrate into robots and drones, both sensor modules are built around the NU4000 vision-on-chip (VoC) processor, which adds depth sensing and image processing with AI and Visual Simultaneous Localization and Mapping (VSLAM) capabilities.

The M4.5S provides robots with enhanced depth from stereo sensing along with obstacle detection and object recognition. It features a field of view of 88×58 degrees, a minimum sensing range of 9 cm (3.54 in) and a wide operating temperature range of up to 50 degrees Celsius (122 degrees Fahrenheit). The M4.5S supports the Robot Operating System (ROS) and has an SDK that is compatible with Windows, Linux and Android.
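
Because the modules support ROS, integration on the robot side typically comes down to subscribing to the topics the driver publishes. Below is a generic rospy sketch; the topic name and message encoding are placeholders rather than Inuitive's documented interface.

```python
# Generic ROS 1 subscriber for a depth image topic; the topic name below is
# a placeholder, not necessarily what the Inuitive driver publishes.
import rospy
from sensor_msgs.msg import Image

def on_depth(msg: Image):
    # Depth images are commonly 16-bit millimeters or 32-bit float meters.
    rospy.loginfo("depth frame %dx%d, encoding=%s", msg.width, msg.height, msg.encoding)

rospy.init_node("depth_listener")
rospy.Subscriber("/camera/depth/image_raw", Image, on_depth)
rospy.spin()
```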

The M4.3WN features tracking and VSLAM navigation based on fisheye cameras and an IMU together with depth sensing and on-chip processing. This enables free navigation, localization, path planning, and static and dynamic obstacle avoidance for AMRs and AGVs. The M4.3WN is designed in a metal case to serve in industrial environments.

“Our new all-in-one sensor modules expand our portfolio targeting the growing market of autonomous mobile robots. Together with our category-leading vision-on-chip processor, we now enable robotic devices to look at the world with human-like visual understanding,” said Shlomo Gadot, CEO and co-founder of Inuitive. “Inuitive is fully committed to continuously developing the best performing products for our customers and becoming their supplier of choice.”

The M4.5S and M4.3WN sensor modules’ primary processing unit is Inuitive’s all-in-one NU4000 processor. Both modules are equipped with depth and RGB sensors that are controlled and timed by the NU4000. Data generated by the sensors and processed in real time at a high frame rate by the NU4000 is then used to generate depth information for the host device.

ISEE’s yard automation solution brings in $40M
https://www.therobotreport.com/isees-yard-automation-solution-brings-in-40m/
Thu, 17 Nov 2022 21:52:47 +0000

ISEE AI is an autonomous driver that can automate yard truck movements. | Source: ISEE

ISEE, the Cambridge, Mass.-based developer of self-driving yard truck solutions, announced that it raised $40 million in Series B funding. This brings the company’s total funding since it spun out from MIT in 2017 to $70 million. 

Founders Fund, a venture capital fund founded by Peter Thiel, co-founder of PayPal, led the Series B round, which also includes participation from Maersk Growth, Eniac Ventures, New Legacy and other new and existing investors. 

“ISEE is focused on logistics yards because of their economic importance and because autonomous driving is ready to perform in logistics yards today. That’s why ISEE is already working with some of the world’s largest companies and why we backed them again,” Scott Nolan, Partner at Founders Fund, said.

ISEE’s technology can turn the most common yard trucks into autonomous vehicles. With advanced sensor coverage and ISEE AI, the company’s autonomous driver, ISEE ensures freight keeps moving in shipping yards. The system can adapt to a variety of different yard sizes and locations. 

While shipping yards are limited spaces, they present unique challenges for autonomous vehicles. Autonomous vehicles that operate on roads, for example, have clearly marked lanes that they must stay within and well-defined traffic laws. In a shipping yard, operations can be less predictable. 

ISEE AI has orchestrated more than 10,000 fully-autonomous moves. The autonomous driving platform is designed to navigate quickly changing environments. According to the company, ISEE AI is able to understand the inherent uncertainty that comes with dynamic environments, predict human behavior, calculate risk and efficiency and continue to learn from every experience it has. 

“By leveraging advanced cognitive modeling, game theory, and deep learning, we’ve developed proprietary technology that’s a perfect match for the challenges of a logistics yard,” Yibiao Zhao, the CEO and co-founder of ISEE, said. “Our self-driving technology is the most advanced autonomous yard tractor product on the market.”

In the last 12 months, ISEE’s revenue has grown more than 20 times, and its customers include big names like BMW. 

ISEE faces competition in the yard automation space from other companies, like Outrider, which creates an autonomous retrofit kit for yard trucks. The technology enables autonomous yard trucks to back trailers into tight spaces without modifications to trailers. 

Clearpath Robotics launches outdoor autonomy software
https://www.therobotreport.com/clearpath-robotics-outdoor-autonomy-software/
Tue, 18 Oct 2022 23:32:02 +0000

Clearpath Robotics launched an autonomous navigation software platform called OutdoorNav. The technology provides GPS-based navigation for Clearpath’s outdoor mobile platforms and third-party vehicles.

OutdoorNav software provides point-to-point, GPS-based autonomous navigation through proprietary fusion of vehicle sensor data. When paired with compatible hardware, the software also provides built-in obstacle detection and avoidance, as well as continuous path planning, allowing off-road vehicles to navigate autonomously between waypoints.

Vehicle developers can interact with the software using a web-based user interface or through a documented API. Through the web-based interface, users can create, edit, monitor and manage autonomous missions, visualize data from onboard sensors, and view the vehicle’s live position on an interactive map. Customized tasks, such as capturing sensor data or imagery at specific locations, can be assigned at waypoints.

“Robotics product development can often be a difficult, sometimes harrowing experience,” said Bryan Webb, president of Clearpath Robotics. “Building a robust navigation system is expensive and risky, and it may prevent you from bringing your product to market in a timely fashion. We built and designed OutdoorNav to streamline your development of autonomous vehicles. You no longer need a full team of robotics navigation experts and months of prototyping to get your autonomous system into the field.”

The web-based interface also supports a teleoperation mode, which allows users to command and control the vehicles remotely via a cellular connection using a virtual joystick. Onboard sensor data from network cameras, LiDARs and other integrated components can be easily visualized in real time.

The software comes with a well-documented Application Programming Interface (API) that allows developers to expand the capabilities of the software by integrating their custom ROS-based applications and graphical user interfaces, or by connecting third-party fleet management tools.
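
As a rough sketch of how a documented mission API of this kind might be consumed, the snippet below builds a small waypoint mission and posts it to a hypothetical REST endpoint. The endpoint path, payload fields and task names are invented for illustration and are not Clearpath's published interface.

```python
import requests  # pip install requests

# Hypothetical waypoint mission: field names and endpoint are illustrative only.
mission = {
    "name": "perimeter_survey",
    "waypoints": [
        {"latitude": 43.4643, "longitude": -80.5204, "tasks": ["capture_image"]},
        {"latitude": 43.4650, "longitude": -80.5190, "tasks": []},
    ],
}

# The real OutdoorNav API (ROS interfaces or REST) is defined in Clearpath's documentation.
resp = requests.post("http://robot.local:5000/api/missions", json=mission, timeout=5)
resp.raise_for_status()
print("Mission accepted:", resp.json())
```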

OutdoorNav software is compatible with Clearpath mobile platforms, including the Jackal UGV, Husky UGV and Warthog UGV, as well as third-party vehicles with drive-by-wire control. Clearpath also announced a new partner program to provide qualifying robotics developers access to discounted OutdoorNav software licenses, robot hardware, sensor kits and support. Applications will be accepted until November 15, 2022, with applicants selected on December 15, 2022.

Sensor breakdown: how robot vacuums navigate
https://www.therobotreport.com/sensor-breakdown-how-robot-vacuums-navigate-and-clean/
Wed, 14 Sep 2022 16:13:53 +0000

An example block diagram for a robot vacuum. | Credit: Invensense, a TDK company

Over the past few years, robot vacuums have advanced immensely. Initial models tended to randomly bump their way around the room, often missing key areas of the floor during their runtime. They also became trapped on thick rugs and, if vacuuming upstairs, could come tumbling down with a heavy thud. Their runtime was also relatively short, and you’d often come home hoping for a nice, clean room only to discover that the robot had run out of juice halfway through.

Since those early days, these cons have turned into pros with the innovative use of sensors and motor controllers in combination with dedicated open-source software and drivers. Here is a look at some of the different sensors used in today’s robot vacuums for improved navigation and cleaning.

Ultrasonic time-of-flight sensors
Ultrasonic time-of-flight (ToF) sensors work in any lighting conditions and can provide millimeter-accurate range measurements independent of the target’s color and optical transparency. The sensor’s wide field-of-view (FoV) enables simultaneous range measurements of multiple objects. In a robot vacuum, they are used to detect whether an object, such as a dog or a child’s toy, is in its way and whether it needs to deviate from its route to avoid a collision.

Short-range ultrasonic ToF sensors
Short-range ultrasonic ToF sensors can be used to determine different floor types. The application uses the average amplitude of a reflected ultrasonic signal to determine if the target surface is hard or soft. If the robot vacuum detects that it has moved from a carpet onto a hardwood floor, it can slow the motors down because they do not need to work as hard compared to carpet use.

The cliff detection feature can enable the robot vacuum to determine when it’s at the top of a set of stairs to prevent a fall.
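
A toy sketch of both ideas, classifying the floor from the average echo amplitude and flagging a cliff when the echo disappears, using made-up threshold values:

```python
def classify_floor(echo_amplitudes, hard_floor_threshold=0.6, cliff_threshold=0.05):
    """Classify the surface under a downward-facing ultrasonic ToF sensor.

    echo_amplitudes: recent reflected-signal amplitudes, normalized 0..1.
    Thresholds are illustrative; real values depend on the sensor and mounting height.
    """
    avg = sum(echo_amplitudes) / len(echo_amplitudes)
    if avg < cliff_threshold:
        return "cliff"            # almost no echo: nothing under the sensor, stop!
    if avg >= hard_floor_threshold:
        return "hard_floor"       # strong reflection: hardwood/tile, reduce motor power
    return "carpet"               # weak, diffuse reflection: soft surface

print(classify_floor([0.72, 0.68, 0.71]))  # -> hard_floor
print(classify_floor([0.28, 0.33, 0.30]))  # -> carpet
print(classify_floor([0.01, 0.02, 0.01]))  # -> cliff
```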

VSLAM and LiDAR
Most companies developing high-end robot vacuums use visual simultaneous localization and mapping (VSLAM) or LiDAR technology to build a virtual map of the room. These technologies enable the robot vacuum to move around more efficiently, covering an entire level of a home with multiple rooms. However, if you lift the robot and put it down elsewhere, it will not know its new location. To find out where it is, the robot must set off in a random direction and, once it detects an object and starts tracing the walls, it can work out where it is relative to the map.

VSLAM or LiDAR technologies may not be applicable for low-light areas, for example, if the robot vacuum goes under a table or couch, where it is unable to read the map.

An example of the mapping capabilities of iRobot’s j7 robot vacuum. | Credit: iRobot

Inertial Measurement Units (IMU)
IMUs measure the robot vacuum’s motion in the real world from both a linear and a rotational perspective, capturing the roll, pitch and yaw of its movements. Whether the robot vacuum is driving in circles or moving in a straight line, it knows where it is supposed to go and how it is moving. There may be a slight error between where it should be and where it actually is, and the IMU helps track and correct that position very accurately.

Based on rotational and linear movement, plus the mapping of the room, the robot vacuum can determine that it is not going over the same areas twice and can pick up where it left off if the battery dies. And, if someone picks up the robot vacuum and places it somewhere else or turns it around, it can detect what is happening and know where it is in real space. The IMU is essential to making robot vacuums efficient.

For robot vacuums that do not use VSLAM or LiDAR mapping technology, their position and navigation can be determined using dead reckoning by combining measurements from the wheel’s rotations with the inertial measurements from the IMU and object detection from the ToF sensors.
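
A simplified planar dead-reckoning update along those lines, fusing an encoder-derived distance with an IMU heading, might look like the following (a noise-free sketch, not a production estimator):

```python
import math

def dead_reckon(pose, wheel_distance_m, imu_yaw_rad):
    """Update a planar (x, y, heading) pose estimate.

    wheel_distance_m: distance traveled since the last update, from wheel encoders.
    imu_yaw_rad: absolute heading from the IMU (integrated gyroscope).
    A real system would also fuse ToF obstacle detections and handle sensor noise.
    """
    x, y, _ = pose
    x += wheel_distance_m * math.cos(imu_yaw_rad)
    y += wheel_distance_m * math.sin(imu_yaw_rad)
    return (x, y, imu_yaw_rad)

pose = (0.0, 0.0, 0.0)
for dist, yaw in [(0.05, 0.0), (0.05, 0.1), (0.05, 0.2)]:  # 5 cm steps while turning slowly
    pose = dead_reckon(pose, dist, yaw)
print(pose)
```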

Smart speaker microphones
As developers of robot vacuums continue to implement artificial intelligence (AI) with the ability to use voice assistants, microphones become an essential sensor technology. Take beamforming, for example. Beamforming is a signal-processing technique, originally developed for radio frequency (RF) systems, that combines the signals from multiple microphones to focus on sound coming from a particular direction, in combination with AI for fine-tuning. At the moment, the noise of the motors and the turning brushes on the robot vacuum is a bit loud. However, as microphone technology progresses and motors and brushes become quieter, coupled with beamforming, microphones will be able to pick out the user’s voice in the not-too-distant future.

Algorithms can also be trained to disregard certain noises and listen specifically for the voice of the user. Ideally, the user wants to call the vacuum cleaner over to clean something up, or tell it to go home, without going through an app or a voice assistant product. You want that to happen in real time inside the host processor of the robot vacuum. Alternatively, if the microphone notices that something is being spoken, the robot vacuum may be able to stop all of its motors to listen for the command.

Embedded motor controllers
The embedded motor controllers turn the gears that drive the wheels, keeping the robot vacuum moving in the correct direction with enough accuracy to tell when a wheel has actually turned 90 degrees as opposed to 88 degrees. Without this high level of accuracy, the robot vacuum will be way off track after a certain amount of time. The embedded motor controller can work with or without additional sensors, making the robot vacuum design scalable.

Pressure sensors
The level of dust inside the dust box is estimated by monitoring the flow of air through the dustbin with a pressure sensor. Relative to when the dustbin is empty, the air pressure inside the dustbin drops as the airflow begins to stagnate due to accumulating dust or a clogging filter. For more accurate detection, it is recommended to measure a differential pressure, using a second, similar pressure sensor to measure the outside air pressure.
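
A minimal illustration of that differential-pressure check, with placeholder threshold values:

```python
def dustbin_status(p_inside_pa, p_ambient_pa, full_drop_pa=150.0):
    """Estimate dust-box state from two pressure readings (values in pascals).

    p_inside_pa: pressure measured in the airflow path behind the dustbin/filter.
    p_ambient_pa: outside air pressure from a second, similar sensor.
    full_drop_pa: illustrative differential threshold; tuned per vacuum design.
    """
    differential = p_ambient_pa - p_inside_pa
    if differential > full_drop_pa:
        return "full_or_clogged"   # stagnating airflow: empty the bin or check the filter
    return "ok"

print(dustbin_status(p_inside_pa=101_180.0, p_ambient_pa=101_325.0))  # -> ok
print(dustbin_status(p_inside_pa=101_100.0, p_ambient_pa=101_325.0))  # -> full_or_clogged
```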

A lot of the high-end bases have the capability to suck out the contents of the dust box automatically. The robot vacuum can then return to base, empty its contents, return to its last known position and continue cleaning.

Auto-recharging
To determine the battery’s state of charge (SoC), you need accurate current and voltage measurements. The coulomb counters and NTC thermistors in the battery pack provide this information.

When the battery reaches a predefined SoC level, the battery pack signals the robot vacuum to stop cleaning and return to the base for a recharge. When fully charged, the robot vacuum goes back to its last known position and continues cleaning. Regardless of the size of the room, in theory, with multiple chargers and the ability to empty the dustbin repeatedly, the robot vacuum can cover the entire floor space.

Thermistors
Thermistors, which are a type of temperature sensor, can be used to monitor the running temperature of the MCU or MPU. They can also be used to monitor the temperatures of the motors and brush gears. If they are running way too hot, the robot vacuum is instructed to take a break and perhaps run a few system diagnostics to find out what is causing the problem. Also, items caught in the brushes, like an elastic band or excess hair, can make the motors overcompensate and overheat.

Robot vacuum developers should understand what the motors are supposed to sound like at a certain frequency threshold. It is possible to use a microphone to detect whether the motors are running abnormally, thereby detecting the early stages of motor degradation. Again, by using diagnostics, abnormal noise from the brushes could indicate that they have picked something up.

Conclusion
The retail price of a robot vacuum goes hand in hand with functionality and accuracy; some of the high-end models can be as much as $1,100. You can get a robot vacuum for closer to $200, but you will be sacrificing some of the bells and whistles. It all depends on the value the robot vacuum developer wants to create and the cost structure that works best for the user.

As component costs come down, it seems likely that more mid-tier robot vacuums will enter the market. Technologies like ToF sensors, pressure sensors, IMUs and motor controllers, along with improvements in battery efficiency, will drive this growth.

About the Author
For seven years, Peter Hartwell has been the chief technology officer at Invensense, a TDK company. He holds more than 40 patents and his operation oversees 600 engineers who have developed a broad range of technologies and sensors for drones, automotive, industrial and, more broadly, IoT. Hartwell has 25-plus years of experience commercializing silicon MEMS products, working on advanced sensors and actuators, and specializes in MEMS testing techniques.

Prior to joining InvenSense, he spent four years as an architect of sensing hardware at Apple where he built and led a team responsible for the integration of accelerometer, gyroscope, magnetometer, pressure, proximity, and ambient light sensors across the entire product line. Hartwell holds a B.S. in Materials Science from the University of Michigan and a Ph.D. in Electrical Engineering from Cornell University.

How Amazon developed precision autonomy for Proteus
https://www.therobotreport.com/how-amazon-developed-precision-autonomy-for-proteus/
Wed, 31 Aug 2022 16:39:54 +0000

A couple of months ago, Amazon unveiled its first-ever autonomous mobile robot (AMR) Proteus. The company first entered the mobile robot space in 2012, when it acquired Kiva Systems for $775 million. Kiva Systems offered automated guided vehicles (AGVs) that have been at work in Amazon’s warehouses since. 

Proteus has a similar design to the Kiva robots. It slides under Amazon’s GoCarts, lifts them up and moves them across warehouses to employees or other robotic cells. Unlike the Kiva robots, which currently operate in caged-off spaces away from Amazon employees, Proteus is able to work freely among them. 

This change means that Proteus needs to be prepared to adapt quickly to unexpected changes in its environment. John Enright, principal engineer at Amazon Robotics, recently gave some insight into how the company developed the technology behind Proteus, explaining the approach to the AMR’s navigation in a video.

“Our design focuses on safety, efficiency and cost-effectiveness,” Enright said. “We employ a wide range of diverse and redundant sensing modalities that allow us to provide certain guarantees on vehicle behavior.” 

Proteus’ job is to store, move and sort Amazon’s blue GoCarts, a central part of the company’s logistics operations. The AMR travels to where the carts are and slides underneath them to move them. It uses its general navigation capabilities to reach the vicinity of the GoCarts and then uses its high-precision LiDAR to find the carts.

To slide under the cart, Proteus uses a two-step detection and motion process. First, the robot will perform a small “S” curve to remove any lateral error in its positioning under the GoCart. Next, it performs a straight motion to tunnel under the cart and lift it. 
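
A highly simplified sketch of that two-phase approach, first nulling the lateral error with a smooth S-shaped correction and then driving straight to tunnel under the cart. The geometry, distances and curve shape are invented for illustration and are not Amazon's parameters.

```python
import numpy as np

def docking_plan(lateral_error_m, tunnel_distance_m=1.2, step_m=0.05):
    """Generate a toy two-phase docking path (x = forward, y = lateral), in meters.

    Phase 1: a smooth S-curve that removes the lateral offset to the cart.
    Phase 2: a straight segment to tunnel under the cart before lifting.
    Distances and the curve shape are illustrative, not Amazon's values.
    """
    # Phase 1: half-cosine blend from the current offset down to zero over 0.6 m.
    s_length = 0.6
    xs = np.arange(0.0, s_length, step_m)
    ys = lateral_error_m * 0.5 * (1.0 + np.cos(np.pi * xs / s_length))
    # Phase 2: straight tunnel-under motion with zero lateral error.
    xt = np.arange(s_length, s_length + tunnel_distance_m, step_m)
    yt = np.zeros_like(xt)
    return np.column_stack([np.concatenate([xs, xt]), np.concatenate([ys, yt])])

path = docking_plan(lateral_error_m=0.08)
print(path[0], path[-1])  # starts 8 cm off-center, ends centered under the cart
```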

Proteus carries the cart to its desired storage location, which it identifies with Amazon’s fiducial plus. Fiducial plus is a custom-made ground target that aids Proteus in its alignment capabilities and in finding storage cells. These fiducials help the robot to perform millimeter-level corrections on its positioning. 

The AMR has been deployed in Amazon’s outbound GoCart handling areas in its fulfillment and sorting centers. A source told The Robot Report Amazon will use both the Proteus AMRs and the Kiva-like AGVs moving forward.

Hyundai launches Boston Dynamics AI Institute
https://www.therobotreport.com/hyundai-launches-boston-dynamics-ai-institute/
Fri, 12 Aug 2022 13:46:35 +0000

Boston Dynamics founder Marc Raibert standing with the Atlas humanoid. | Credit: Associated Press

Hyundai Motor Group (Hyundai) today announced the launch of the Boston Dynamics AI Institute. Hyundai and Boston Dynamics are making an initial investment of more than $400 million to make fundamental advances in artificial intelligence (AI), robotics and intelligent machines.

The institute will be led by Marc Raibert, founder of Boston Dynamics. Hyundai said the name of the institute could change after its corporate registration is complete. The institute will work on four technical areas:

  • Cognitive AI
  • Athletic AI
  • Organic hardware design
  • Ethics and policy

“Our mission is to create future generations of advanced robots and intelligent machines that are smarter, more agile, perceptive and safer than anything that exists today,” said Raibert, executive director of the Boston Dynamics AI Institute. “The unique structure of the Institute — top talent focused on fundamental solutions with sustained funding and excellent technical support — will help us create robots that are easier to use, more productive, able to perform a wider variety of tasks, and that are safer working with people.”

In addition to developing technology with its own staff, the institute plans to partner with universities and corporate research labs. The institute will be headquartered in the heart of the Kendall Square research community in Cambridge, Massachusetts. The institute plans to hire AI and robotics researchers, software and hardware engineers, and technicians at all levels.

Al Rizzi will serve as the institute’s chief technology officer. Rizzi has 25-plus years of experience building dynamic robots, including nearly 17 years as the chief scientist at Boston Dynamics. At Boston Dynamics, Rizzi directed research targeted at novel locomotion and mobile manipulation systems, including LittleDog, BigDog, WildCat, SandFlea, and Spot. As CTO of the Boston Dynamics AI Institute, Rizzi will guide the technical and fundamental thinking that underlies the organization’s mission.

Raibert will continue to serve as the chairman of the board at Boston Dynamics, which he founded in 1992. Robert Playter was named CEO of Boston Dynamics in late 2019 in conjunction with the company’s increased focus on building commercially viable robots.

Boston Dynamics, an MIT spin-off, is best known for its innovative robots, including BigDog, Atlas, Stretch, and Spot. Spot and Stretch are available for commercial purchase, while BigDog has been retired and Atlas continues to be used internally at Boston Dynamics for R&D purposes.

Hyundai acquired Boston Dynamics in June 2021, purchasing an 80% stake in the company from Softbank for about $880 million. Hyundai became the third owner of Boston Dynamics in seven years. It was acquired by Google in 2013 and sold to Softbank Group in 2017. It has mainly operated as an R&D organization since it was founded, but a new emphasis on commercialization was evident after it was acquired by Softbank.

How Labrador Systems developed its assistive robots
https://www.therobotreport.com/inside-the-development-of-labrador-systems-assistive-robots/
Thu, 21 Jul 2022 17:31:18 +0000

Welcome to Episode 85 of The Robot Report Podcast, which brings conversations with robotics innovators straight to you. Join us each week for discussions with leading roboticists, innovative robotics companies and other key members of the robotics community.

Our guest this week is Mike Dooley, co-founder and CEO of robotics startup Labrador Systems. Founded in 2017, the Calabasas, Calif.-based company is developing autonomous mobile robots (AMRs) for home environments to help the elderly and those with mobility issues live independently longer. The AMRs can transport things around the home and even retrieve items off countertops and tables.

We discuss the technical challenges and decisions involved with building the robots, as well as the business challenges such as funding and working with insurance companies. Mike also details how his days at Evolution Robotics and iRobot are helping direct the Labrador Systems journey. Labrador Systems was named a 2022 RBR50 Robotics Innovation Award winner by our sister publication Robotics Business Review. The interview with Mike starts at 36:36.

Links from today’s show:

Now it’s time to prepare for RoboBusiness and the Field Robotics Engineering Forum, which run October 19-20, 2022 in Santa Clara, Calif.


If you would like to be a guest on an upcoming episode of the podcast, or if you have recommendations for future guests or segment ideas, contact Steve Crowe or Mike Oitzman.

For sponsorship opportunities of The Robot Report Podcast, contact Courtney Nagle for more information.

Autonomics RADAR for autonomous mobile robots
https://www.therobotreport.com/autonomics-radar-for-autonomous-mobile-robots/
Tue, 05 Jul 2022 18:55:40 +0000

The RADAR unit is a software-defined sensor. The operating parameters of the device can be set and modified by software. | Credit: Autonomics

Cyprus-based Autonomics released the first version of its high-accuracy RADAR sensor. The new Autonomics MM 4D RADAR offers a new sensing modality for service robots that need to navigate safely in a collaborative environment around humans. The company is launching the first generation of the RADAR unit in Europe.

The RADAR sensor has a maximum sensing envelope of 150 m (492 ft) with 0.3 m (11.8 in) resolution. In its short-range configuration, the resolution improves to 0.1 m (3.9 in) at ranges under 50 m (164 ft).

The sensor is also described by Autonomics business development leader Rostislav Lopatkin as a “4D” sensor. The fourth dimension is the velocity of the items seen by the sensor: as part of its data stream, the sensor can return a vector indicating the speed and direction of motion of each detected item. This is useful for understanding which things are moving and which are stationary in the scene surrounding the vehicle.

One of the big advantages of RADAR over LiDAR is that RADAR can operate in inclement weather, including environmental conditions such as rain, snow or foggy or smoky air. As a result, it can effectively perform obstacle detection in situations where LiDAR or even vision guidance might fail to accurately image the area around a moving autonomous mobile robot (AMR) or autonomous vehicle.

The company is marketing the new sensor to both AMR developers and autonomous vehicle developers. It has built a reference-design sweeping AMR to test and validate the RADAR sensor.

Device Features

  • Unprecedented high resolution. Autonomics RADAR can determine the distance, velocity and also the boundaries of surrounding objects. This is a vital feature for solving obstacle avoidance tasks during autonomous driving.
  • Wide angle of view. The antenna system, based on a MIMO structure coupled with tailored embedded digital-signal-processing algorithms, provides a horizontal field of view of up to 150 degrees. Safety is a crucial issue for autonomous service robots operating in unstructured environments shared with people. Autonomics RADAR’s wide field of view, along with its high resolution and its ability to detect obstacles in any weather and lighting conditions, ensures the robot operates safely.
  • The software-defined characteristics make Autonomics RADAR flexible and adaptable to any required applications and conditions. Detection range, azimuth field of view, range and velocity resolution, and accuracy can be customized over a wide range.
  • Easy-to-integrate into existing systems with full ROS compatibility.

The unit is a software-defined sensor. This means that the operating parameters of the device can be set and modified by software, without having to modify the sensor physically. The sensor has two primary operating modes: short-range mode and long-range mode. Changing the mode of operation modifies the accuracy and resolution of data gathered by the device.

The radar module is based on a Texas Instruments AWR2243 system-on-chip cascade. The computing module, based on an automotive-grade SoC, performs radar signal processing and interacts with external systems via Ethernet and CAN interfaces. The unit is also fully ROS compatible.
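
One way to picture what the velocity dimension enables on the consuming side: because each return carries a radial velocity, a simple filter can separate moving obstacles from static structure in a single frame. The point layout, threshold and sample values below are illustrative, assume ego-motion has already been compensated, and do not reflect Autonomics' actual message format.

```python
# Each radar return: (x, y, z, radial_velocity) in meters / meters-per-second.
# The field layout is illustrative; the real format comes from the vendor's ROS driver.
def split_moving_static(points, speed_threshold=0.2):
    """Partition 4D radar returns into moving and static points.

    speed_threshold: radial speeds below this (m/s) are treated as stationary,
    which absorbs small Doppler noise. The value is an assumption, not a spec.
    """
    moving, static = [], []
    for x, y, z, v in points:
        (moving if abs(v) >= speed_threshold else static).append((x, y, z, v))
    return moving, static

frame = [
    (5.2, 0.1, 0.0, 0.02),    # wall: essentially zero radial velocity
    (3.8, -1.0, 0.1, -1.35),  # pedestrian walking toward the robot
    (12.4, 2.3, 0.2, 0.01),   # parked pallet
]
moving, static = split_moving_static(frame)
print(len(moving), "moving,", len(static), "static")
```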

For complete radar specifications, see the datasheet: https://www.autonomics.tech/radar

Synkar offers sidewalk delivery as a service
https://www.therobotreport.com/synkar-offers-sidewalk-delivery-as-a-service/
Thu, 23 Jun 2022 21:43:01 +0000

Synkar develops a robots-as-a-service sidewalk delivery solution. | Credit: Synkar

Synkar was started by three Brazilian engineers in Toronto, Canada, in 2019. Matheus Theodoro, Evandro Barros and Lucas Assis started the company with the idea of creating a unique indoor/outdoor mobile robot. The company now has offices in both Toronto and Brazil.

The first iteration of the Synkar autonomous delivery robot was designated SD01 (Synkar Delivery 01). The current version is SD02, and the company is working on the release of SD03 by the end of 2022.

The Synkar product family is evolving as the market demands additional features. | Credit: Synkar

Following the development of the robot, the company built a B2B cloud-based platform called SARA. The platform features a rich API that enables business partners to integrate quickly and manage robot operations.

The company intends to offer the delivery service as a Robots-as-a-Service (RaaS) based solution. As such, the company has a roadmap to continue developing SARA and add features as necessary to support future use cases.

First mile food delivery

One of the early use cases for the Synkar mobile robot is food delivery. During the pandemic, however, an interesting application emerged: not last-mile delivery, but what might be called “first mile” delivery. The company worked with an online food delivery app in Brazil to enable mall-based food court (inner-mall) vendors to move customers’ orders from the food court, through the mall, out to the curbside, where human delivery drivers could easily take the orders out of the robots and then drive to the final delivery location.

This immediately improved the efficiency of the delivery drivers and shaved 6-8 minutes off the delivery cycle, as drivers didn’t need to find a place to park and then go into the mall to pick up an order. It also provided an additional revenue stream for the mall-based restaurants at a time when mall attendance was down. A win-win all around.

In fact the company envisions an operating model in which the delivery robots could fulfill orders for multiple online food delivery apps and restaurants at the same time, rather than simply partnering with a single ordering app solution. The company calls this a “shared fleet as a service” business model. One might also imagine food delivery in one order, followed by a parcel delivery a short time later.

The company is already working with several partners to create a reconfigurable cargo carrying container that will enable multiple, locked sections. However, the company states that even with a single order delivery container, customers don’t take other customers’ orders, except by mistake.

Within Brazil, the company is already working with the Brazilian Post service on parcel delivery.

The company has its R&D headquarters and manufacturing in Brazil, and a small team in Canada. This is primarily a result of the impact of the pandemic. In the future, the company plans to outsource manufacturing to other worldwide partners and expand its sales and support teams into other regions as it signs new partnerships and customers.

On the topic of sidewalk regulations and the future for sidewalk robots in cities such as Toronto, where sidewalk robots are currently banned, I talked to co-founder Lucas Assis at length about the issues facing these robotics solutions. Lucas believes that the Toronto city council is being cautious about the deployment of robots on sidewalks and is looking to define the requirements for public safety and safe operations before they allow the robots back onto the sidewalks.

In addition to opportunities in Canada and Brazil, the company is also looking to expand in Ireland as new partnerships develop and opportunities emerge. Following that, the company plans to expand into the United States and Portugal.

The company is currently raising a second investment round of $4M to fund its expansion and build a new fleet of robots for deployment.

MIT researchers help robots navigate uncertain environments
https://www.therobotreport.com/mit-researchers-help-robots-navigate-uncertain-environments/
Tue, 24 May 2022 21:15:44 +0000

MIT researchers have developed a trajectory-planning system for autonomous robots in unpredictable environments. | Source: Jose-Luis Olivares, MIT based on figure courtesy of the researchers

Researchers at MIT have developed a technique that can guide an autonomous robot through unknown environmental conditions. The technique helps a robot avoid obstacles without knowing the size, shape or location of what it could encounter. 

The research team hopes that its findings could help autonomous robots explore remote exoplanets where the robot, and the researchers who programmed it, don’t know what it will encounter on the planet. 

“Future robotic space missions need risk-aware autonomy to explore remote and extreme worlds for which only highly uncertain prior knowledge exists. In order to achieve this, trajectory-planning algorithms need to reason about uncertainties and deal with complex uncertain models and safety constraints,” co-lead author on the paper, Ashkan Jasour, said. 

MIT’s team couldn’t use typical trajectory planning methods that make assumptions about the vehicle, obstacles and environment. These methods are too simplistic for real-world settings. Instead, the team developed an algorithm that could determine the probability of observing different conditions or obstacles at different locations.

The algorithm determines the probability of these events based on a map or images the robot collects with its perception system. This approach formulates trajectory planning as a probabilistic optimization problem, a mathematical programming framework which lets the robot achieve planning objectives while avoiding obstacles. 

“Our challenge was how to reduce the size of the optimization and consider more practical constraints to make it work. Going from good theory to good application took a lot of effort,” Jasour said.

The researchers then used higher-order statistics of the probability distributions of the uncertainties to convert the probabilistic optimization problem into a more straightforward deterministic optimization problem, which can be solved efficiently with off-the-shelf solvers.
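
The flavor of that conversion is easiest to see in the simplest case: a Gaussian chance constraint such as "keep at least this much clearance with 99% probability" tightens into an ordinary deterministic constraint on the mean trajectory. The sketch below shows only that basic Gaussian case; the MIT work handles non-Gaussian uncertainties via higher-order moments, which this toy example does not capture.

```python
from scipy.stats import norm

def deterministic_margin(clearance_m, sigma_m, confidence=0.99):
    """Tighten a probabilistic clearance constraint into a deterministic one.

    For a 1-D Gaussian position error with standard deviation sigma_m,
    requiring P(distance >= clearance_m) >= confidence is equivalent to
    requiring the *mean* distance to exceed clearance_m plus a safety margin.
    """
    k = norm.ppf(confidence)          # about 2.33 for 99%
    return clearance_m + k * sigma_m  # constraint on the nominal (mean) trajectory

# A solver can then enforce: mean_distance_to_obstacle >= deterministic_margin(...)
print(deterministic_margin(clearance_m=0.5, sigma_m=0.2))  # ~0.97 m for 99% confidence
```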

MIT’s team tested the technique in simulated navigation scenarios. In one, an underwater model, the algorithm needed to chart a course from an uncertain position, around obstacles and to a goal region; the system safely reached the goal 99% of the time. Depending on how complex the environment is, the algorithm can plan a safe course in seconds or minutes.

The next step for the team is to create more efficient processes that significantly reduce runtime. Co-authors on the paper include Jasour, a former Computer Science and Artificial Intelligence Laboratory (CSAIL) research scientist who now works at NASA’s Jet Propulsion Laboratory, and Weiqiao Han, a graduate student in the department of electrical engineering and computer science and a member of CSAIL. The senior author on the paper is Brian Williams, a professor of aeronautics and astronautics and a member of CSAIL.

John Deere acquires camera-based perception tech from Light
https://www.therobotreport.com/john-deere-acquires-light-camera-based-perception-platform/
Thu, 19 May 2022 21:45:40 +0000

The John Deere 8R Autonomous Tractor was launched at CES 2022. | Credit: John Deere

John Deere has made another acquisition related to the development of autonomous tractors. John Deere has acquired numerous patents and other intellectual property from Light, which specializes in depth sensing and camera-based perception for autonomous vehicles. John Deere also hired a team of employees from Light. Financial terms of the deal are unknown.

Light was founded in 2013 and, according to Crunchbase, has raised $185.7 million. Its last funding round, a $121 million Series D led by the SoftBank Vision Fund, came in July 2018.

John Deere will integrate Light’s platform, Clarity, into its autonomous tractors. It appears John Deere is using a vision-only approach to autonomy here. Its new 8R autonomous tractor features six pairs of stereo cameras and doesn’t use LiDAR. While LiDAR has become commonplace for autonomous vehicles operating on roads, it appears to be less relevant for navigating a less hectic farm. John Deere also said the vision-based approach is better for identifying and monitoring weeds in real time.

Tesla uses an often-criticized, vision-only approach for its Autopilot Level 2 driving system. But on its website, Light says that “human-like vision is the key to enabling machines to navigate their environment accurately and safely.” Willy Pell, VP of autonomy and new ventures at Blue River Technology, a John Deere company, recently said on The Robot Report Podcast that LiDAR isn’t a good sensor option for autonomous tractors. The acquisition of Light would seem to back that up.

“There were a bunch of reasons [we chose not to use LiDAR], but dust was a major one. They just don’t perform as well in dust,” Pell said. “Another reason is that LiDARs have moving parts and we operate in really rugged environments. And the other reason is that we’re actually just not going that fast. We’re going 10 miles per hour, and we don’t have to see 90 meters ahead of us. We need to see 20 meters ahead of us. So we ended up with stereo cameras.”

Light offers up a performance chart for its Clarity platform.

According to Light, its camera-based perception platform can see 3D structures at a range from 10 cm to 1000 m and captures 95 million points per second. It uses signal-processing to build a 3D view of the surrounding environment at 30 times a second.

Here is how Light describes its technology: “Light’s multi-camera depth perception platform improves upon existing stereo vision systems by using additional cameras, novel calibration, as well as unique signal processing to provide unprecedented depth quality across the camera field of view. With temporally consistent, full field of view depth that is intrinsically unified with the reference camera’s image at a pixel level, perception engineers are unshackled from the existing constraints of depth range, frequency, and even errors attributed to sensor fusion.”

John Deere making moves

This is the latest in a string of moves by John Deere. In April 2022, it formed a joint venture with GUSS Automation, a Kingsburg, California-based pioneer of semi-autonomous orchard and vineyard sprayers. Through the joint venture, Deere will help GUSS further collaborate with the Deere sales channel and GUSS will continue its innovation and product development to best serve customers, Davison said.

GUSS was founded in 2018 and has approximately 35 full-time employees. GUSS will retain its employees, brand name, and trademark, and continue to operate from its current location. GUSS employees, customers, and business partners should notice little change in daily operations resulting from the joint venture.

In August 2021, John Deere acquired Bear Flag Robotics for $250 million. Bear Flag Robotics is a Calif.-based developer of autonomous driving technology for tractors. Founded in 2017, Bear Flag Robotics retrofits its autonomy stack onto existing tractors. It uses cameras, LiDAR and radar technology for redundant, 360-degree situational awareness on a farm. The tractors can either be bought or rented under a robotics as a service (RaaS) agreement in which it charges per acre.

Of course, John Deere’s quest for autonomy was kicked off by its acquisition of Blue River Technology in 2017 for $305 million. Pell was recently a guest on The Robot Report Podcast to discuss how pivotal an acquisition this was for John Deere and how it resulted in the company’s autonomy as we know it today. You can listen to the interview below. It starts at the 27:02 mark of the episode.

Updated at 10:35 AM on May 20, 2022: John Deere has since clarified that it didn’t acquire all of Light.

RGo Robotics exits stealth with $20M in funding
https://www.therobotreport.com/rgo-robotics-exits-stealth-with-20m-in-funding/
Mon, 09 May 2022 19:24:37 +0000

RGo Robotics’ Perception Engine allows AMRs to perceive and navigate their environment. | Source: RGo Robotics

Editor’s Note: Amir Bousani, the co-founder and CEO of RGo Robotics, is taking part in a panel at the Robotics Summit & Expo on May 10-11 in Boston. The panel, titled “Innovations in Sensors, Sensing and Robot Vision” will discuss the latest advances in sensing products and technologies, including use cases highlighting important trends. The session will take place on May 10 from 4:15 PM to 5:00 PM. 

RGo Robotics announced that it raised $20 million in funding and that its Perception Engine is available for customers. RGo’s Perception Engine, which comes with hardware and software components, helps mobile robots understand complex environments and operate autonomously within them. 

“Most mobile robots today are still blind and unable to navigate intelligently in dynamic and complex environments, and we see first-hand how hard it is for machine and robot manufacturers to develop basic visual perception on their own,” Amir Bousani, the CEO and co-founder of RGo, said. “Our technology changes this. Leveraging the most advanced AI and vision technologies, Perception Engine allows mobile machines to see and understand the world around them so they can move autonomously, safely and intelligently in any environment.”

RGo’s Perception Engine includes a hardware component in the form of a reference design, and a software development toolkit which can provide real-time data on the edge. The engine learns its environment and provides layers of information so the robot can function naturally in real-life settings. 

The engine provides perception data over its API to the robot control system, which uses it to plan paths and behave autonomously. RGo’s engine has been through extensive field trials in outdoor and indoor environments, according to the company. Its effectiveness has secured the company more than $10 million in customer deals.
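
On the integration side, that pattern usually reduces to reading a pose stream and handing each estimate to the robot's planner. The loop below is a generic illustration with invented stand-in callables; RGo's actual SDK calls and data fields are defined in its documentation.

```python
import time
from typing import Callable, Tuple

Pose = Tuple[float, float, float]  # (x in m, y in m, heading in rad)

def control_loop(get_pose: Callable[[], Pose],
                 send_command: Callable[[Pose], None],
                 rate_hz: float = 20.0,
                 cycles: int = 100) -> None:
    """Poll a pose source and feed each estimate to the robot's own planner/controller.

    `get_pose` and `send_command` are hypothetical stand-ins for the vendor SDK call
    and the robot control stack; the real names come from the vendor's documentation.
    """
    period = 1.0 / rate_hz
    for _ in range(cycles):
        pose = get_pose()      # latest pose estimate from the perception engine
        send_command(pose)     # plan the next motion command from that pose
        time.sleep(period)

# Tiny demo with fake callables standing in for the SDK and controller.
control_loop(lambda: (1.25, 0.40, 0.03),
             lambda pose: print("planning from pose", pose),
             rate_hz=5, cycles=3)
```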

“As the use of robotics continues to exponentially grow across industries, RGo’s solution is a game changer. Manufacturers will be able to offer more intelligent mobile robots and accelerate time-to-market without having to focus their R&D teams on this extremely complex technology area,” Nilanjana Bhowmik, co-founder and general partner at Converge, RGo’s strategic partner, said.

MoreTech Ventures led RGo’s funding round. The company plans to use the funding to continue expanding its R&D and commercial teams. RGo was founded in 2018 and had operated in stealth until now.

RGo is a 2022 RBR50 Robotics Innovation Award winner. The company’s Perception Engine earned the award for its low-cost, low-power hardware and effective software.
