Sensors / Sensing Systems Archives - The Robot Report
https://www.therobotreport.com/category/technologies/sensors-sensing/
Robotics news, research and analysis

Capra Robotics’ AMRs to use RGo Perception Engine
April 5, 2023

The post Capra Robotics’ AMRs to use RGo Perception Engine appeared first on The Robot Report.


RGo Robotics, a company developing artificial perception technology that enables mobile robots to understand complex surroundings and operate autonomously, announced significant strategic updates. The announcements include leadership appointments, new customers and an upcoming product release.

RGo develops AI-powered technology that gives autonomous mobile robots 3D, human-level perception. Its Perception Engine lets mobile robots understand complex surroundings and operate autonomously, integrating with them to deliver centimeter-scale position accuracy in any environment. In Q2 2023, RGo said it will release the next iteration of its software, which will include:

  • An indoor-outdoor mode: a breakthrough capability that allows mobile robots to navigate in all environments, both indoors and outdoors.
  • A high-precision mode that enables millimeter-scale precision for docking and similar use cases.
  • Control Center 2.0: a redesigned configuration and admin interface. This new version supports global map alignment, advanced exploration capabilities and new map-sharing utilities.

RGo separately announced support for NVIDIA Jetson Orin System-on-Modules that enables visual perception for a variety of mobile robot applications.

RGo will exhibit its technology at LogiMAT 2023, Europe’s biggest annual intralogistics tradeshow, from April 25-27 in Stuttgart, Germany, at Booth 6F59. The company will also sponsor and host a panel session, “Unlocking New Applications for Mobile Robots,” at the Robotics Summit and Expo in Boston from May 10-11.

Leadership announcements

RGo also announced four leadership appointments: Yael Fainaro as chief business officer and president; Mathieu Goy as head of European sales; Yasuaki Mori as executive consultant for APAC market development; and Amy Villeneuve as a member of the board of directors.

“It is exciting to have reached this important milestone. The new additions to our leadership team underpin our evolution from a technology innovator to a scaling commercial business model including new geographies,” said Amir Bousani, CEO and co-founder, RGo Robotics.

Goy, based in Paris, and Mori, based in Tokyo, join with extensive sales experience in the European and APAC markets. RGo is establishing an initial presence in Japan this year with growth in South Korea planned for late 2023.


“RGo has achieved impressive product maturity and growth since exiting stealth mode last year,” said Fainaro. “The company’s vision-based localization capabilities are industrial-grade, extremely precise and ready today for even the most challenging environments. This, together with higher levels of 3D perception, brings tremendous value to the rapidly growing mobile robotics market. I’m looking forward to working with Amir and the team to continue growing RGo in the year ahead.”

Villeneuve joins RGo’s board of directors with leadership experience in the robotics industry, including her time as the former COO and president of Amazon Robotics. “I am very excited to join the team,” said Villeneuve. “RGo’s technology creates disruptive change in the industry. It reduces cost and adds capabilities to mobile robots in logistics, and enables completely new applications in emerging markets including last-mile delivery and service robotics.”

Customer traction

After comprehensive field trials in challenging indoor and outdoor environments, RGo continued its commercial momentum with new customers. The design wins are with market-leading robot OEMs across multiple vertical markets, ranging from logistics and industrial autonomous mobile robots to forklifts, outdoor machinery and service robots.

Capra Robotics, an award-winning mobile robot manufacturer based in Denmark, selected RGo’s Perception Engine for its new Hircus mobile robot platform.

“RGo continues to develop game-changing navigation technology,” said Niels Jul Jacobsen, CEO of Capra and founder of Mobile Industrial Robots. “Traditional localization sensors either work indoors or outdoors – but not both. Combining both capabilities into a low-cost, compact and robust system is a key aspect of our strategy to deliver mobile robotics solutions to the untapped ‘interlogistics’ market.”

Luxonis releases DepthAI ROS driver
February 24, 2023


Luxonis offers high-resolution cameras with depth vision and on-chip machine learning. | Source: Luxonis

Luxonis announced the release of its newest DepthAI ROS driver for its stereo depth OAK cameras. The driver aims to make the development of ROS-based software easier. 

When using the DepthAI ROS driver, almost everything is parameterized with ROS 2 parameters and dynamic reconfigure, giving users more flexibility to customize OAK cameras for their unique use cases.

The DepthAI ROS driver is being developed on ROS 2 Humble and ROS 1 Noetic, allowing users to take advantage of ROS composition and nodelet mechanisms. The driver supports 2D and spatial detection as well as semantic segmentation networks.

The driver offers several modes that users can run their camera in, depending on the use case. For example, the camera can publish spatial neural-network detections alongside an RGBD point cloud. Alternatively, the driver can stream data straight from the sensors for host-side processing, calibration and modular camera setups.
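To make the parameter-driven configuration concrete, a ROS 2 YAML parameter file along the following lines could select a mode and tune streams at launch. Note that the node name and every parameter key below are hypothetical placeholders for illustration, not the driver's actual keys; consult the depthai-ros documentation for the real names.

```yaml
# Hypothetical ROS 2 parameter file for a DepthAI OAK camera node.
# All names below are illustrative placeholders, not the driver's
# actual parameters -- check the depthai-ros docs for real ones.
/oak:
  ros__parameters:
    camera:
      pipeline_mode: "RGBD"   # e.g. spatial NN detections + point cloud
      enable_imu: true
    rgb:
      fps: 30.0
      resolution: "1080P"
    stereo:
      depth_filter_size: 5    # onboard depth filter tuning
```

Such a file would typically be passed to the node at launch (e.g. via a `params_file` launch argument), with individual values adjustable afterward through dynamic reconfigure.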




With the driver, users can set parameters such as exposure and focus for individual cameras at runtime, as well as IR LED power for better depth accuracy and night vision. Users can also experiment with the onboard depth filter parameters.

The driver enables encoding to get more bandwidth from compressed images and provides an easy way to integrate a multi-camera setup. It also provides Docker support for easy integration; users can build an image themselves or use one from Luxonis’ DockerHub repository.

Users can also reconfigure their cameras quickly using ‘stop’ and ‘start’ services. The driver lets users run low-quality streams and switch to higher quality when needed, or switch between different neural networks to get their robot the data it needs.

Earlier this month, Luxonis announced a partnership with ams OSRAM. As part of the partnership, Luxonis will use OSRAM’s Belago 1.1 Dot Projector in its 3D vision solutions for automatic guided vehicles (AGVs), robots, drones and more.

LiDAR makers Ouster, Velodyne complete merger
February 15, 2023


Ouster and Velodyne have completed their merger. The combined company will operate under the name Ouster and will trade on the New York Stock Exchange under the ticker “OUST.” 

Velodyne ceased trading on the NASDAQ after markets closed on the day the merger was completed. Each Velodyne share was exchanged for 0.8204 shares of Ouster common stock.
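The fixed exchange ratio makes the conversion a one-line calculation. The sketch below uses an arbitrary example holding to show how a Velodyne position maps to Ouster shares; the assumption that fractional shares are settled separately (typically in cash) is ours, not stated in the announcement.

```python
# Convert a Velodyne share count to Ouster shares at the merger's
# fixed exchange ratio of 0.8204 Ouster shares per Velodyne share.
EXCHANGE_RATIO = 0.8204

def ouster_shares(velodyne_shares: int) -> tuple[int, float]:
    """Return (whole Ouster shares, fractional remainder)."""
    total = velodyne_shares * EXCHANGE_RATIO
    whole = int(total)
    return whole, total - whole

# Example: a hypothetical holder of 1,000 Velodyne shares
whole, fraction = ouster_shares(1_000)
print(whole, round(fraction, 4))  # 820 whole shares, 0.4 fractional
```
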

The combined company has over 850 customers spanning the automotive, industrial, robotics and smart infrastructure industries. Ouster expects to retain about 350 employees and will be headquartered in San Francisco, with other key offices in the Americas, Europe and Asia-Pacific. 

“We’re thrilled to have completed the merger with Velodyne so quickly, further boosting our financial position and our ability to accelerate LiDAR adoption,” Angus Pacala, CEO of Ouster, said. “Together, we have an even stronger team backed by a healthy balance sheet, new channel partners, and a wide selection of positive-margin products to serve a diverse set of customers and win more deals than ever before. We expect our innovative digital LiDAR roadmap, amplified by exciting new software solutions, to further expand our serviceable market and catalyze growth across the business.”




Ouster now has an intellectual property portfolio with 173 granted and 504 pending patents. The combined company had a cash balance of over $315 million as of December 31, 2022. Ouster said it is on track to exceed its estimate that the combined company will save at least $75 million in operating costs within the first nine months of the transaction’s close.

“The combined Ouster is stronger than ever, led by an esteemed executive leadership team and Board with deep company, industry, and financial expertise,” Dr. Ted Tewksbury, executive chair of Ouster’s Board of Directors, said. “Ouster is well positioned as a global leader in LiDAR backed by the talent, products, and technology roadmap to make performant and affordable LiDAR solutions pervasive while accelerating time to profitability and enhancing value for stockholders.”

The combined company’s executive leadership team is made up of:

  • Angus Pacala, chief executive officer
  • Mark Frichtl, chief technology officer
  • Mark Weinswig, chief financial officer
  • Darien Spencer, chief operations officer
  • Nate Dickerman, president of field operations
  • Megan Chung, general counsel

Cepton raises $100M for LiDAR sensors
January 23, 2023


Cepton’s Helius smart LiDAR system. | Source: Cepton

Cepton, a Silicon Valley-based developer of LiDAR products, has raised $100 million from Koito Manufacturing Co., its automotive Tier 1 partner and current shareholder.

The investment comes in the form of convertible preferred stock and was approved during a special meeting of Cepton stockholders. The investment will be convertible starting on the first anniversary of the issue date and can be converted into shares of Cepton’s common stock at an initial conversion price of $2.585 per share.
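Under these terms, the number of common shares the $100 million investment could convert into follows directly from the stated conversion price. A quick sketch, ignoring any accrued dividends or anti-dilution adjustments (which the announcement does not detail):

```python
# Common shares issuable on conversion of the preferred investment
# at the stated initial conversion price.
INVESTMENT = 100_000_000   # $100M convertible preferred
CONVERSION_PRICE = 2.585   # $ per common share

shares = int(INVESTMENT / CONVERSION_PRICE)  # whole shares only
print(f"{shares:,}")  # roughly 38.7 million shares
```
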

Cepton offers a range of LiDAR products that include the Vista-X family of automotive-grade LiDARs and its Sora-X family of industrial-grade LiDARs. The company also offers a smart LiDAR system called Helius. Helius combines Cepton’s LiDAR technology with edge computing and perception software to provide 3D perception. It can be used for smart cities, smart spaces and other applications to provide object detection, tracking and classification.

“We’re happy to announce the closing of the preferred stock investment as we deepen our partnership with Koito,” said Cepton co-founder Jun Pei. “We plan to deploy the additional capital to help fund our next stage of growth, continue series production execution, and expand our collaboration efforts towards winning additional automotive OEM programs.”



Cepton was founded in 2016 and is headquartered in San Jose, California. The company also has a presence in Germany, Canada, Japan, India and China. With the latest round of funding, Cepton has raised over $270 million, according to Crunchbase. 

Koito has been working in automotive lighting since it started in 1915. The Koito Group consists of 31 companies spread across 13 countries worldwide.

“We’re excited to have completed our third investment in Cepton, which solidifies our commitment to lidar and increasing automotive safety for drivers worldwide,” said Koito president Michiaki Kato. “This is an important year for us as we work towards commercialization and scale manufacturing of our LiDAR products. We have a strong track record with Cepton as a partner and look forward to achieving our mutual goal of becoming the leader within LiDAR.”

Teledyne releases Hydra3D+ Time-of-Flight sensor
January 13, 2023


Teledyne e2v, a part of Teledyne Technologies, has released the Hydra3D+, a new Time-of-Flight (ToF) CMOS image sensor with 832 x 600-pixel resolution that is tailored for versatile 3D detection and measurement.

Designed with Teledyne e2v’s proprietary CMOS technology, Hydra3D+ features a brand-new 10 µm three-tap pixel which provides very fast transfer times (starting from 10 ns) and displays high sensitivity in the NIR wavelength, alongside excellent demodulation contrast. This combination enables the sensor to operate in real time without motion artifacts (even if there are fast-moving objects in the scene) and with excellent temporal noise at short ranges, essential in applications such as pick and place, logistics, factory automation, and factory safety. An innovative on-chip multi-system management feature enables the sensor to work alongside multiple active systems without the interference that can lead to false measurements.
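The announcement doesn't give Teledyne's ranging math, but the underlying time-of-flight principle is straightforward: distance is half the round-trip time of light, and for continuous-wave modulation the maximum unambiguous range is set by the modulation frequency. A generic sketch of both relations (illustrative only, not Teledyne's implementation):

```python
# Generic time-of-flight relations, not Teledyne's implementation.
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance in meters for a measured round-trip time in seconds:
    d = c * t / 2 (light travels out and back)."""
    return C * round_trip_s / 2.0

def unambiguous_range(mod_freq_hz: float) -> float:
    """Max unambiguous range for CW modulation at frequency f: c / (2f)."""
    return C / (2.0 * mod_freq_hz)

print(round(tof_distance(66.713e-9), 2))   # ~10 m for a ~67 ns round trip
print(round(unambiguous_range(20e6), 2))   # ~7.49 m at 20 MHz modulation
```
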

The excellent sensitivity of Hydra3D+ enables it to effectively manage lighting power and handle a wide range of reflectivity. Its high resolution, with powerful on-chip HDR featuring on-the-fly flexible configuration, enables the best trade-off between application-level parameters such as distance range, object reflectivity and frame rate. This makes it suitable for mid- to long-range distances and outdoor applications such as automated guided vehicles, surveillance, ITS, and building construction.

The sensor has been thoughtfully designed for customers seeking real-time and flexible 3D detection with uncompromised 3D performance. It offers large field-of-view scenes captured in both 2D and 3D by a compact sensor, making the system very effective.

“Today, many Time-of-Flight sensors suffer from motion artifacts and can’t instantly perform in changing operating conditions. With Hydra3D+, our customers can easily achieve reliable 3D measurement with the highest levels of 3D performance, uncompromised image quality in both 2D and 3D modes, and in all distance ranges and conditions even where multiple systems operate or in outdoor environments,” said Ha Lan Do Thu, marketing manager for 3D imaging at Teledyne e2v.

Documentation, samples, and development tools are now available upon request. In addition, several proprietary modeling tools support customers in their assessment of the operation of Hydra3D+.

Editor’s Note: This article first appeared on sister website Design World.

 

Seoul Robotics releases 3D perception software SENSR 3.0
January 10, 2023


Seoul Robotics released an updated version of its perception platform SENSR 3.0. | Seoul Robotics

Seoul Robotics introduced SENSR 3.0, the most advanced iteration of the SENSR 3D perception platform to deliver increased ease of use and simplify the deployment of large-scale 3D systems across an array of industries. With the release of SENSR 3.0, Seoul Robotics is furthering its mission to make groundbreaking, comprehensive 3D perception easily accessible for any application and user, regardless of previous experience with this technology.

SENSR 3.0 features new updates and added functionality that aim to make it easier to implement, navigate and scale 3D systems. When embedded into 3D sensors such as LiDAR, SENSR uses AI deep learning to track, detect and identify hundreds of objects at once within a 4-centimeter range. As it collects more data, SENSR 3.0 continues to improve over time, unlocking unparalleled insights into environments. Included with the software are QuickTune, a new snap-to-point tool to expedite sensor calibration; and QuickSite, a site simulation tool that enables users to virtually design and scale 3D systems, reducing installation times.

QuickTune makes it easier for any user to install and calibrate 3D systems, whether an expert or a novice. During setup, QuickTune finds commonalities in an environment, such as a wall or corner, and automatically calibrates multiple sensors in a system to speed up deployments. In addition to leveraging QuickTune to automatically identify a common point, users also have the option to select a specific spot for sensors to calibrate around. QuickTune is especially valuable for multi-sensor installations, requiring just a few clicks to dramatically reduce calibration time.

QuickSite removes the uncertainty around setup, leveraging location information to automatically calculate where to position sensors to optimize insights without having to physically travel to an installation site and take measurements. Through a greater understanding of a location and its size, the cloud-based QuickSite tool can account for positions and angles and virtually adjust sensor positions until coverage is optimal. By simulating a setup outcome, Seoul Robotics is ensuring a more accurate and robust design while reducing the installation time from as long as a week to less than an hour.

Because the SENSR platform is both hardware- and sensor-agnostic, it is compatible with a range of different systems and can be configured depending on the application, needs and budget. This flexibility, coupled with the platform’s sensor fusion capabilities, makes it highly scalable even across large footprints. Furthering the platform’s versatility, SENSR 3.0 enables dockerization, simplifying the process of building and deploying the software for different solutions. Dockers also support compatibility with other applications on the same device and broaden the pool of compatible hardware.

Additionally, Seoul Robotics has added more guidance to the SENSR 3.0 setup and maintenance platform to make it even easier to navigate, and SENSR 3.0 is verified to have OS support on multiple versions of Ubuntu.

Organizations are increasingly looking for solutions that can provide an additional dimension of intelligence without the privacy concerns of always-on recordings. With the price point of 3D systems now in the same bracket as 2D cameras, a growing number of cities, states and private companies are leveraging this technology to create smarter, safer, and more efficient spaces. For example, Seoul Robotics works with partners across the Intelligent Transport Systems, security, retail, airport, rail and smart city industries to construct and deploy these transformative solutions that avoid collisions, make traffic flow more efficient, and monitor spaces for unauthorized personnel.

“Imagine what we can accomplish if we can more accurately perceive our world beyond what’s visible to the human eye: roadways will be safer for both drivers and pedestrians, stores are optimized based on the customer journey, and airports experience reduced wait times. Those are just the immediate benefits that come from installing 3D systems,” William Muller, Vice President of Business Development at Seoul Robotics, said. “With SENSR 3.0, never before has it been so simple to set up, calibrate, and scale a 3D system, and this will revolutionize how companies gain value from our technology.”

Remembering robotics companies we lost in 2022
December 27, 2022

Running a successful robotics company is never easy. Unfortunately, these companies found out it’s even harder during a pandemic and supply chain crisis.


There are many reasons robotics companies fail. From an ill-conceived idea to poor execution or the inability to raise funding, building and running a sustainable robotics company is challenging.

This is never a fun recap to write. We don’t want to see startups fail, but inevitably many do. The last couple of years have been especially difficult thanks to a global pandemic, economic uncertainty and ongoing supply chain issues. But perhaps some lessons can be learned from the companies that couldn’t survive.

Here are some of the robotics companies we’ll unfortunately remember losing in 2022.

Argo AI (2016-2022)

Argo AI, the self-driving company previously backed by Ford and Volkswagen, abruptly closed its doors in October. For most, this will be the most surprising shutdown on the list. When news broke about the shutdown, Ford said its plan was to shift its focus away from funding Argo AI’s development of Level 4 autonomous driving technology and towards creating its own Level 2 and Level 3 driving systems.

“We still believe in Level 4 autonomy that it will have a big impact on our business of moving people,” Ford’s CEO and President Jim Farley said at the time. “We’ve learned though, in our partnership with Argo and after our own internal investments, that we will have a very long road. It’s estimated that more than $100 billion has been invested in the promise of Level 4 autonomy. And yet no one has defined a profitable business model at scale.”

Farley continued, “Deploying L4 broadly, perhaps the toughest technical problem of our time, will require significant breakthroughs going forward in many areas: reliable and low-cost sensing, it’s not the case today; algorithms that can operate on limited compute resources without constraining the operating time and domain of an electric vehicle; breakthroughs in neural networks that can learn to operate a car more safely than a human, even in very complex urban environments.”

“We’re optimistic about a future for L4 ADAS, but profitable, fully autonomous vehicles at scale are a long way off and we won’t necessarily have to create that technology ourselves.”

Argo AI spun out of Carnegie Mellon in 2016 and came out of stealth in 2017 with a $1 billion investment from Ford. Since then, it raised another $2.6 billion, primarily from Ford and VW, and secured partnerships with Walmart and Lyft.

Kitty Hawk (2010-2022)

After more than a decade of trying to make autonomous flying cars, Kitty Hawk closed its doors in September. The company was founded in 2010 by Sebastian Thrun, who previously founded and led Google’s self-driving car project, which we now know as Waymo.

Kitty Hawk built a number of different aircraft, and in 2021 demonstrated a beyond-visual-line-of-sight flight in Ohio. In June 2021, Kitty Hawk acquired 3D Robotics, a drone company that was once a competitor to DJI. As part of the acquisition, 3D Robotics co-founder Chris Anderson became Kitty Hawk’s chief operating officer. Kitty Hawk said at the time its new focus was on developing a remote-piloted electric vertical takeoff and landing (eVTOL) aircraft.

After the company shut down, Thrun said that “no matter how hard we looked, we could not find a path to a viable business.”

Local Motors (2007-2022)


Local Motors, which was building Olli the autonomous shuttle, shut down in early January. Local Motors was founded in 2007, but didn’t start dipping its toes into the world of autonomous vehicles until 2016 when it launched Olli. The company closed due to a lack of funding.

Olli 1.0 was a low-speed pod that could drive for 60 miles on a single charge. The shuttle was designed for environments like hospitals, military bases and universities. In 2019, Local Motors upgraded to Olli 2.0 with a top speed of 25 miles per hour and the ability to run for 100 miles on a single charge.

In October 2020, the company announced it would be testing Olli on the streets of Toronto. Olli hit the streets in 2021, but would only carry out tests until December, when an Olli 1.0 shuttle collided with a tree, resulting in the attendant being critically injured. After the collision, the City of Toronto stopped its trials of the self-driving shuttles. An investigation by the Durham Regional Police Service found that the shuttle was being operated manually during the accident.

The company raised a total of $15.3 million in funding over six rounds, according to Crunchbase.

Perceptive Automata (2015-2022)

Perceptive Automata was a Boston-based developer of human behavior understanding AI for autonomous vehicles and robots. According to co-founder and CTO Sam Anthony, Perceptive Automata went “kablooey” after it failed to close Series B funding.

Anthony said that the shutdown snuck up on him and the staff. “The part that was lousy was how it went down for the staff. There was a sense that we were blindsided by it falling apart,” he said. “That said, I’m not sure we should’ve been blindsided by it. Part of being a VC-funded company is that you have fairly specific marks you have to hit. If you don’t hit them, the path is cloudy at best. Combined with other factors outside of our control, we were in a tough spot.”

Perceptive Automata raised $20 million since it was founded in 2015.

Skyward (2013-2022)

Skyward built a software platform that helped customers manage drone workflows, including training crews, planning missions, accessing controlled airspace and more. It was acquired by Verizon in 2017 before being shut down in May. At the time of the acquisition, Verizon said it planned to use the company’s technology to streamline drone operation management through one platform.

Skyward sent its customers an email to announce the closure, which came as a surprise to many. Verizon said the decision to shutter Skyward “was about market agility and ensuring that Verizon continues to focus on areas that provide both near and mid-term growth opportunities.”

The company raised a total of $8.2 million in funding over four rounds, according to Crunchbase.

Chowbotics (2014-2022)


DoorDash shut down its subsidiary Chowbotics less than 18 months after acquiring the business. Chowbotics built Sally, a vending machine-like robot that made salads and other fresh meals. It should be noted that many in the industry have questioned whether Sally qualifies as a robot.

“At DoorDash, we create an environment to build new products and set high standards to determine when to scale, continue, or cut back investments,” a DoorDash spokesperson said. “We’re always looking for new ways to serve our merchants, exceed consumers’ increasingly higher expectations, and complement our logistics infrastructure.”

Chowbotics was founded in 2014 and acquired by DoorDash in February 2021 for an undisclosed amount. At the time of the acquisition, DoorDash wanted to explore how to deploy Chowbotics’ technology across restaurants. It hoped Sally could help restaurants expand their menu or allow salad bars to pop up in more locations without needing more manpower.

Fifth Season (2016-2022)

Fifth Season was a Pittsburgh-based company that used robotics to grow and harvest various leafy vegetables, which were then packaged and sold as salads, mixed greens or variety packs. The Carnegie Mellon University spinout was founded in 2016 and raised more than $75 million in investment before shutting down in October.

Fifth Season had about 100 employees, including about 20 who worked shifts at a 60,000-square-foot indoor farming facility in Braddock, Pa.

Rovenso (2016-2022)

Rovenso was a Switzerland-based company developing autonomous robots for security and safety monitoring of industrial sites. The company was founded in 2016 and raised $2.8 million in funding, according to Crunchbase.

Thomas Estier, co-founder and CEO of Rovenso, posted about the shutdown on LinkedIn, saying he and the team didn’t understand the impact of COVID on business development and component sourcing.

IDS launches new higher resolution Ensenso N 3D camera
December 22, 2022

The post IDS launches new higher resolution Ensenso N 3D camera appeared first on The Robot Report.

The resolution and accuracy have almost doubled on the Ensenso N camera while the price has remained the same. | Credit: IDS

The Ensenso N-series 3D cameras have a compact body made of aluminum or a plastic composite, depending on the model, and a pattern projector built right in. They can be used to take pictures of both still and moving objects. The integrated projector projects a high-contrast texture onto the objects in question.

A pattern mask with a random dot pattern fills in texture on surfaces that have none or only faintly detectable structure of their own. This enables the cameras to produce detailed 3D point clouds even in poor lighting.
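
As a rough illustration of how such a mask works (the parameter values here are hypothetical, not IDS's actual pattern), a random dot mask can be sketched in a few lines:

```python
import random

def random_dot_pattern(width, height, fill=0.15, seed=42):
    """Binary random-dot mask: 1 = projected dot, 0 = dark.

    Projecting such a mask onto a texture-poor surface gives a stereo
    matcher unique local neighborhoods to correlate between cameras.
    """
    rng = random.Random(seed)
    return [[1 if rng.random() < fill else 0 for _ in range(width)]
            for _ in range(height)]

pattern = random_dot_pattern(64, 48)
dot_fraction = sum(map(sum, pattern)) / (64 * 48)
```

Because the dots are random, any small window of the pattern is (with high probability) unique, which is exactly what correlation-based stereo matching needs.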

The Ensenso models N31, N36, N41 and N46 supersede the previously available N30, N35, N40 and N45. Visually, the cameras are identical to their predecessors. Internally, however, they use the new Sony IMX392 sensor, which raises the resolution from 1.3 MP to 2.3 MP. All cameras are pre-calibrated and therefore easy to set up. The Ensenso selector on the IDS website helps to choose the right model.

With Ensenso N, users can choose from a series of 3D cameras that deliver reliable 3D information for a wide range of applications, whether mounted in a fixed position or moved around by a robot arm. The cameras prove their worth in single-item picking, remotely controlled industrial robots, logistics, and even high-volume laundry automation.

The most recent update of the IDS NXT software adds anomaly detection alongside object detection and classification, requiring only minimal training data to reliably identify both known and unknown deviations.

LUCID launches the Atlas10 camera featuring an ultraviolet sensor
https://www.therobotreport.com/lucid-launches-the-atlas10-camera-featuring-an-ultraviolet-sensor/
Wed, 21 Dec 2022: LUCID expands its advanced sensing portfolio with the Atlas10 camera featuring the Sony IMX487 ultraviolet (UV) sensor.

The post LUCID launches the Atlas10 camera featuring an ultraviolet sensor appeared first on The Robot Report.

LUCID Vision Labs, Inc., today announced the series production of its new Atlas10 camera featuring the Sony IMX487 ultraviolet (UV) sensor.

The ATX081S-UC 10GigE PoE+ UV camera, equipped with the high-UV-sensitivity 8.1 MP Sony IMX487 global shutter CMOS sensor, is capable of capturing images across the ultraviolet spectrum in the 200 to 400 nm range. Utilizing Sony’s unique Pregius S back-illuminated pixel structure, the Atlas10 camera’s high UV sensitivity makes it ideal for industrial applications requiring greater precision, such as inspection of transparent materials (plastic and PET), semiconductor pattern defect inspection, material sorting and more.

The Atlas10 10GBASE-T camera is known for its industrial reliability, offering Power over Ethernet (PoE+), robust M12 and M8 connectors, Active Sensor Alignment for superior optical performance, and a wide ambient temperature range of -20°C to 55°C.

“The Atlas10 UV offers excellent sensitivity in the UV wavelength and is packed with industrial features designed to provide high-speed and reliable operation in challenging environments,” says Rod Barman, President at LUCID Vision Labs. “The Sony IMX487 offers improved quantum efficiency, high dynamic range and reduced noise, enabling high-quality imaging for a broad range of advanced sensing applications.”

The Atlas10 is a GigE Vision and GenICam-compliant camera capable of 10 Gbps data transfer rates and allows the use of standard CAT6 cables up to 25 meters. Atlas10 features Power over Ethernet (PoE+) that simplifies integration and reduces cost.
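
As a back-of-the-envelope check (assuming 8-bit pixels and ignoring GigE Vision protocol overhead, neither of which LUCID specifies here), the 10 Gbps link puts an upper bound on the frame rate of an 8.1 MP sensor:

```python
# Back-of-the-envelope bound on frame rate over a 10 Gbps link.
# Assumes 8-bit (Mono8) pixels and ignores protocol overhead; both
# assumptions are ours, not LUCID's published specification.
link_bps = 10e9        # 10GigE raw line rate
pixels = 8.1e6         # 8.1 MP Sony IMX487
bits_per_pixel = 8     # assumed Mono8; 10/12-bit formats lower the bound

max_fps = link_bps / (pixels * bits_per_pixel)
print(f"{max_fps:.0f} fps")  # roughly 154 fps upper bound
```

Actual throughput depends on the pixel format, packet overhead and the camera's own readout limits, so the real figure will be lower.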

All LUCID cameras conform to the GigE Vision 2.0 and GenICam3 standards and are supported by LUCID’s own Arena software development kit. The Arena SDK provides customers with easy access to the latest industry standards and software technology. The SDK supports Windows, 64-bit Linux and Linux ARM operating systems, and the C, C++, C# and Python programming languages.

Indy Autonomous Challenge returns to CES 2023 to spotlight autonomous racing
https://www.therobotreport.com/indy-autonomous-challenge-returns-to-ces-2023-to-spotlight-autonomous-racing/
Fri, 16 Dec 2022: The Indy Autonomous Challenge returns to Las Vegas at CES 2023 with more powerful engines and improved sensors and computing infrastructure onboard.

The post Indy Autonomous Challenge returns to CES 2023 to spotlight autonomous racing appeared first on The Robot Report.

The Dallara AV-21 racecar.

The Indy Autonomous Challenge (IAC) returns to CES 2023 to push the boundaries of head-to-head autonomous racing and showcase the future of autonomous mobility. The IAC will have a significant presence throughout CES 2023 with the Autonomous Challenge @ CES at the Las Vegas Motor Speedway on January 7 and an exhibit in partnership with the Indiana Economic Development Corporation (IEDC) in the LVCC West Hall booth #3601.

The racing will take place at the Las Vegas Motor Speedway on Saturday, January 7, featuring nine teams seeking to break autonomous racing world records. The IAC competition rules call for a single-elimination tournament with multiple rounds of high-speed head-to-head passing matches between AV-21 racecars, culminating in a championship round.
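
A single-elimination format roughly halves the field each round, so a nine-team field needs four rounds, with one team taking a bye or play-in. A small sketch of the bracket mechanics (the match-outcome rule here is an arbitrary stand-in, not the IAC's actual pairing rules):

```python
import math

def rounds_needed(teams):
    """Number of head-to-head rounds to reduce `teams` entrants to one champion."""
    return math.ceil(math.log2(teams))

def run_bracket(teams, winner=min):
    """Play out a bracket; `winner` decides each match (here: lexicographic)."""
    field = list(teams)
    while len(field) > 1:
        nxt = []
        # an odd team out gets a bye into the next round
        if len(field) % 2:
            nxt.append(field.pop())
        nxt.extend(winner(field[i], field[i + 1]) for i in range(0, len(field), 2))
        field = nxt
    return field[0]

print(rounds_needed(9))  # 4 rounds for a 9-team field
```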

All CES attendees are welcome to attend the race event at the Las Vegas Motor Speedway and experience infield festivities with the IAC, including a WatchZone with trackside viewing, concession areas, a DJ and more.

The following university teams are competing in the Autonomous Challenge @ CES 2023:

On Saturday, January 7, 2023, at the Las Vegas Motor Speedway, time trials and elimination rounds begin at 10 a.m. PST, followed by a live broadcast of the semifinal and final competition rounds starting at 1 p.m. PST. If you’re not able to attend the event live, you can watch the broadcast via a link available on the IAC website.

The prior event took place at the Texas Motor Speedway in November 2022, and the rebroadcast of that event is currently available to watch online.

About IAC: The Indy Autonomous Challenge (IAC) organizes racing competitions among university-affiliated teams from around the world to program fully autonomous racecars and compete in a series of history-making events at iconic tracks. Based in Indiana, the IAC is working to establish a hub for performance automation in the state and is harnessing the power of innovative competitions to attract the best and brightest minds from around the globe to advance the state of the art in the safety and performance of automated vehicles. The IAC started more than two years ago as a $1 million prize competition, with 31 university teams signing up to compete, representing top engineering and technology programs from 15 U.S. states and 10 countries.

LiDAR maker Quanergy files Chapter 11 bankruptcy
https://www.therobotreport.com/lidar-maker-quanergy-files-chapter-11-bankruptcy/
Thu, 15 Dec 2022: Quanergy went public just 10 months ago via a SPAC with China’s CITIC Capital, at an implied $1.4 billion equity value.

The post LiDAR maker Quanergy files Chapter 11 bankruptcy appeared first on The Robot Report.


Quanergy’s M1 Edge 2D LiDAR. | Credit: Quanergy

After going public 10 months ago via a SPAC with China’s CITIC Capital, at an implied $1.4 billion equity value, LiDAR maker Quanergy has filed for Chapter 11 bankruptcy. Quanergy is now looking for a buyer under section 363 of the Bankruptcy Code.

Quanergy said it expects to continue operations during the Chapter 11 process and seeks to complete an expedited sale process with Bankruptcy Court approval. To help fund and protect its operations, Quanergy intends to use available cash on hand along with normal operating cash flows to fund post-petition operations and costs in the ordinary course.

Quanergy also said CEO Kevin Kennedy will retire effective December 31, 2022. 

“It has been my honor to serve as CEO at Quanergy for the past 2.5 years,” said Kevin Kennedy, Chief Executive Officer of Quanergy. “During this time, the company shifted our technology focus towards security and industrial applications which enabled the company to grow revenue by serving customer needs in a new marketplace.”

The company will transition its executive leadership to a newly appointed chief restructuring officer and president, Lawrence Perkins.

“Quanergy has made considerable efforts to address ongoing financial challenges stemming from volatile capital market conditions,” said Perkins. “Despite these challenges, the Company has seen improving demand in the security, smart spaces, and industrial markets, and improvements in supply chain conditions. We are confident that Quanergy’s efforts have positioned the company for a value-maximizing transaction during the Chapter 11 sale process. During the process, we will continue to prioritize the needs of our customers and I am thankful to the entire Quanergy team for their continued efforts and contributions to the business.”

For Q3 2022, which ended on September 30, Quanergy reported revenue of $2.3 million, which it said at the time was near the top end of its guidance range and up 104% year-over-year. It reported a third-quarter GAAP net loss of $17.7 million, compared to $19 million in the third quarter of 2021, and a third-quarter adjusted EBITDA loss of $12.3 million, compared to $6.1 million in the third quarter of 2021.

Revenue for its 2021 fiscal year was $3.9 million, up 30% year-over-year from $3 million in 2020. Quanergy said it shipped 1,065 LiDAR sensors for the full year.
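
Those year-over-year figures can be sanity-checked with simple arithmetic; the implied prior-year quarter below is derived, not a reported number:

```python
def yoy_growth_pct(current, prior):
    """Year-over-year growth, in percent."""
    return (current - prior) / prior * 100

# FY2021 vs. FY2020: $3.9M over $3.0M matches the reported 30% growth
fy_growth = yoy_growth_pct(3.9, 3.0)

# Q3 2022 revenue of $2.3M, reported as up 104% YoY, implies a
# prior-year quarter of roughly $1.13M (derived, not reported)
implied_q3_2021 = 2.3 / 2.04
```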

Tough times for LiDAR makers?

Quanergy isn’t the only LiDAR developer to be struggling financially. Last month, German LiDAR developer Ibeo Automotive Systems GmbH filed for insolvency because it could not secure further growth financing. MicroVision, a developer of MEMS-based solid-state automotive LiDAR and advanced driver-assistance systems (ADAS) solutions, swooped in to acquire certain assets of Ibeo for $15.8 million.

AEye reported revenue of $768,000 in its fiscal third quarter of 2022 with a GAAP loss of $23.6 million. Non-GAAP losses totaled $17 million in the quarter. AEye ended Q3 with $112.2 million in cash, cash equivalents and marketable securities.

Ouster and Velodyne, two prominent LiDAR companies, recently announced they’re merging in an all-stock transaction. The agreement was signed on November 4, 2022, and is expected to be completed in the first half of 2023.

Last year, Kyle Vogt, co-founder and CEO of autonomous driving company Cruise, said the LiDAR industry would consolidate. The issue, according to Vogt, is the projected revenue comes from “entirely overlapping potential customers, with very little discount applied to future projections.”

Inuitive sensor modules bring VSLAM to AMRs
https://www.therobotreport.com/inuitive-sensor-modules-vslam-amrs/
Tue, 13 Dec 2022: New sensor modules add depth sensing and image processing with AI and VSLAM capabilities.

The post Inuitive sensor modules bring VSLAM to AMRs appeared first on The Robot Report.


Inuitive introduces the M4.5S (center) and M4.3WN (right) sensor modules that add VSLAM for AMRs and AGVs.

Inuitive, an Israel-based developer of vision-on-chip processors, launched its M4.5S and M4.3WN sensor modules. Designed to integrate into robots and drones, both sensor modules are built around the NU4000 vision-on-chip (VoC) processor, which adds depth sensing and image processing with AI and Visual Simultaneous Localization and Mapping (VSLAM) capabilities.

The M4.5S provides robots with enhanced depth-from-stereo sensing along with obstacle detection and object recognition. It features a field of view of 88×58 degrees, a minimum sensing range of 9 cm (3.54 in.) and a wide operating temperature range of up to 50 degrees Celsius (122 degrees Fahrenheit). The M4.5S supports the Robot Operating System (ROS) and has an SDK that is compatible with Windows, Linux and Android.
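
For a sense of scale, the published field of view translates into a sensing footprint via basic trigonometry (a sketch under the usual pinhole assumption; the datasheet may define FOV differently):

```python
import math

def fov_extent_m(fov_deg, distance_m):
    """Linear extent of the viewed area at a distance, given an angular FOV."""
    return 2 * distance_m * math.tan(math.radians(fov_deg) / 2)

# M4.5S: 88 x 58 degree FOV -> footprint at 1 m
width_m = fov_extent_m(88, 1.0)    # about 1.93 m wide
height_m = fov_extent_m(58, 1.0)   # about 1.11 m tall
```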

The M4.3WN features tracking and VSLAM navigation based on fisheye cameras and an IMU together with depth sensing and on-chip processing. This enables free navigation, localization, path planning, and static and dynamic obstacle avoidance for AMRs and AGVs. The M4.3WN is designed in a metal case to serve in industrial environments.

“Our new all-in-one sensor modules expand our portfolio targeting the growing market of autonomous mobile robots. Together with our category-leading vision-on-chip processor, we now enable robotic devices to look at the world with human-like visual understanding,” said Shlomo Gadot, CEO and co-founder of Inuitive. “Inuitive is fully committed to continuously developing the best performing products for our customers and becoming their supplier of choice.”

The M4.5S and M4.3WN sensor modules’ primary processing unit is Inuitive’s all-in-one NU4000 processor. Both modules are equipped with depth and RGB sensors that are controlled and timed by the NU4000. Data generated by the sensors is processed in real time at a high frame rate by the NU4000 and then used to generate depth information for the host device.
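
The on-chip depth computation follows the standard stereo relation Z = f·B/d (focal length times baseline, divided by disparity); the numbers below are purely illustrative, not Inuitive's calibration values:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only: 700 px focal length, 5 cm baseline.
near = depth_from_disparity(700, 0.05, 100)  # larger disparity -> closer object
far = depth_from_disparity(700, 0.05, 50)
```

The relation also explains the 9 cm minimum range: very close objects produce disparities larger than the matcher's search window can handle.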

MicroVision acquiring LiDAR maker Ibeo
https://www.therobotreport.com/microvision-acquiring-lidar-maker-ibeo/
Fri, 02 Dec 2022: Ibeo recently filed for insolvency because it could not secure further growth financing.

The post MicroVision acquiring LiDAR maker Ibeo appeared first on The Robot Report.

MicroVision, a developer of MEMS-based solid-state automotive LiDAR and advanced driver-assistance systems (ADAS) solutions, is acquiring certain assets from Hamburg, Germany-based Ibeo Automotive Systems GmbH for up to 15 million euros ($15.8M USD). The acquisition combines MAVIN LiDAR with Ibeo perception software features into the MicroVision ASIC for automotive OEMs.

Ibeo recently filed for insolvency because it could not secure further growth financing. MicroVision said the acquisition will expand its multi-market strategy focusing on industrial, smart infrastructure, robotics, and commercial vehicle segments with Ibeo’s flash-based sensor.

The combined company is expected to have revenue streams from existing and new product lines ranging from software, Ibeo’s flash-based LiDAR and MicroVision’s scanning LiDAR sensor, as well as other combinations of hardware with perception software solutions.

Ibeo Automotive Systems GmbH developed the SCALA sensor and launched it into serial production with a Tier 1 supplier; today it is used by premium OEMs such as Audi, Mercedes and Stellantis, while Ibeo’s software solutions are used by BMW and VW. Under the terms of the agreement, MicroVision will acquire certain Ibeo assets, IP and teams to operate within the MicroVision organization as of the closing date, which is expected to be in the first half of 2023.

“This is an exciting time as we welcome the Ibeo team to the MicroVision family. We believe this is the winning combination to accelerate our strategic plan at the exact right time. Our best-in-class hardware solution paired with existing perception features added to our ASIC, accelerated by the Ibeo software and automotive qualification experience, presents a significantly advanced solution for OEM,” said Sumit Sharma, CEO of MicroVision. “I’m also very excited about the immediate expansion of our multi-market strategy with Ibeo’s sensor and hardware.”

The acquisition will enable MicroVision to accelerate its timeline for delivering a complete LiDAR and perception software solution. Ibeo’s perception software will be ported to the MicroVision digital ASIC, with compatibility demonstrations available by early Q2 2023.

Forecasted revenue of $8 million to $15 million is expected from new and existing customers, including top-tier German and U.S. OEMs as well as non-automotive multi-market customers.

The combined engineering teams in Hamburg, Nuremberg and Redmond, Washington will continue developing LiDAR hardware, perception software, ASIC, auto-annotation software, and other innovative ADAS and autonomous driving products.

RoboSense launches flash solid-state LiDAR
https://www.therobotreport.com/robosense-launches-flash-solid-state-lidar/
Wed, 23 Nov 2022: RoboSense officially launched RS-LiDAR-E1, a flash solid-state LiDAR that sees 360°, based on its in-house, custom-developed chips.

The post RoboSense launches flash solid-state LiDAR appeared first on The Robot Report.


RoboSense announced its latest solid-state LiDAR at the Tech Day event. | Source: RoboSense

RoboSense, a provider of smart LiDAR sensor systems, held a new product launch and Tech Day event. During the conference, RoboSense officially launched the RS-LiDAR-E1 (E1), a flash solid-state LiDAR that sees 360°, built on the company’s in-house, custom-developed chips and flash technology platform. RoboSense also held, jointly with Luxshare-ICT, a leading Chinese electronics manufacturer, the unveiling ceremony for Luxsense, a new smart manufacturing joint venture.

RoboSense launched the E1, an automotive-grade flash solid-state LiDAR. It serves as a new product platform featuring area-array transceiver technology built around application-specific chips. The E1 is designed for large-scale series production with a simple bill of materials and no moving parts, and it excels in detection performance, cost efficiency, and automotive-grade safety and reliability. As a key piece in realizing the core functions of autonomous driving, the E1 will help partners further bridge the gap in smart driving perception and improve the all-scenario perception capability of automated and autonomous vehicles.

4 core features of E1

  • Horizontal FOV of 120°, which ensures 360° coverage without blind zones using a minimum number of sensors.
  • Vertical FOV of 90°, which allows the perception area to cover both blind zones on the ground and the lateral view.
  • Ultra-high frame rate of over 25 Hz, capturing target objects’ motion states and predicting their movements faster.
  • Detection range of 30 m at 10% reflectivity, which enables better perception, planning and control.
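
The first bullet's claim, full 360° coverage from 120° sensors with the fewest possible units, is a one-line calculation:

```python
import math

def min_sensors(coverage_deg, fov_deg):
    """Fewest sensors whose combined horizontal FOV tiles the full coverage angle."""
    return math.ceil(coverage_deg / fov_deg)

n = min_sensors(360, 120)  # three E1 units ring a vehicle without blind zones
```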

The E1 uses RoboSense’s first in-house custom chips for its flash solid-state LiDAR platform and its first 2D electronic scanning technology. With highly integrated chips that incorporate the three core functions of transmission, reception and processing, the E1 greatly streamlines circuit design and production, creating the performance and cost advantages needed to meet the durability and reliability requirements of blind-spot LiDARs in the automotive market.

To ensure product performance and rapidly scale production capacity, RoboSense launched the first and only CNAS-certified LiDAR lab to analyze LiDARs and their components, and developed a complementary smart manufacturing system to produce this high-tech precision sensor.

During the event, leaders of RoboSense and Luxshare jointly held the unveiling ceremony for Luxsense. Cumulative investment in the first phase of the RoboSense smart manufacturing system exceeded 1 billion RMB (over $139 million). The plant area exceeds 55,000 square meters and includes nearly 20 automated production lines built with highly intelligent production software, achieving a top-level production efficiency of “a LiDAR every 12 seconds” and guaranteeing capacity by connecting the company’s Shenzhen, Dongguan and Guangzhou plants.

Additionally, RoboSense announced a new strategic financing round that attracted top industry investors, including car companies with self-owned brands, emerging automakers, top luxury supercar brands, leading commercial vehicle companies, supply chain pioneers and tier 1 institutions. Greatly empowered by these capital investments from across the industry, RoboSense has gained unprecedented momentum and sustainability.

Video podcast episode featuring interview with Tatum Robotics founder
https://www.therobotreport.com/video-podcast-episode-featuring-interview-with-tatum-robotics-founder/
Tue, 22 Nov 2022: This special video podcast episode features an American Sign Language translated edition of The Robot Report Podcast episode 98, an interview with Tatum Robotics founder Samantha Johnson.

The post Video podcast episode featuring interview with Tatum Robotics founder appeared first on The Robot Report.

This week, we have a special video edition of The Robot Report podcast. This is the video feed from our recent interview with Tatum Robotics founder and CEO, Samantha Johnson. The video features American Sign Language (ASL) translation so that hearing-impaired individuals can also enjoy the content.

Tatum Robotics is building a robotic device shaped like a human hand and arm that can mimic a human translator for deafblind individuals. Currently, deafblind individuals communicate by touching the hand of their translator, who uses fingerspelling and ASL signs to communicate.

Tatum Robotics is building a robotic analog to the human hand, designed to replicate the interaction between a translator and a deafblind user. Ultimately, Tatum Robotics wants to open up the world of ebooks for deafblind individuals. This will be followed by remote communication (i.e., over the web) between hearing and deafblind individuals, or even between two deafblind individuals.

As Samantha Johnson discusses in the video, deafblind individuals have often been isolated and bored for long periods of time, with no ability to communicate without a translator.

An early prototype of the Tatum Robotics communication robot for deafblind individuals. | Credit: Tatum Robotics

We want to thank the ASL translators on this project, Tymber Marsh and Sean Havas, for their amazing translation skills. Tatum Robotics is currently recruiting additional ASL signers to contribute their unique ASL techniques to the robot design. If you are interested, contact Tatum Robotics directly to learn how you can contribute.
