Sensors, machine vision, and feedback for robotic designs

Capra Robotics' AMRs to use RGo Perception Engine (April 5, 2023)

RGo Robotics, a company developing artificial perception technology that enables mobile robots to understand complex surroundings and operate autonomously, announced significant strategic updates. The announcements include leadership appointments, new customers and an upcoming product release.

RGo develops AI-powered technology for autonomous mobile robots, allowing them to achieve 3D, human-level perception. Its Perception Engine gives mobile robots the ability to understand complex surroundings and operate autonomously. It integrates with mobile robots to deliver centimeter-scale position accuracy in any environment. In Q2 2023, RGo said it will release the next iteration of its software that will include:

  • An indoor-outdoor mode: a breakthrough capability that allows mobile robots to operate in all environments, both indoors and outdoors.
  • A high-precision mode that enables millimeter-scale precision for docking and similar use cases.
  • Control Center 2.0: a redesigned configuration and admin interface. This new version supports global map alignment, advanced exploration capabilities and new map-sharing utilities.

RGo separately announced support for NVIDIA Jetson Orin System-on-Modules that enables visual perception for a variety of mobile robot applications.

RGo will exhibit its technology at LogiMAT 2023, Europe’s biggest annual intralogistics tradeshow, from April 25-27, in Stuttgart, Germany at Booth 6F59. The company will also sponsor and host a panel session “Unlocking New Applications for Mobile Robots” at the Robotics Summit and Expo in Boston from May 10-11.

Leadership announcements

RGo also announced four leadership appointments. These include Yael Fainaro as chief business officer and president; Mathieu Goy as head of European sales; Yasuaki Mori as executive consultant for APAC market development; and Amy Villeneuve as a member of the board of directors.

“It is exciting to have reached this important milestone. The new additions to our leadership team underpin our evolution from a technology innovator to a scaling commercial business model including new geographies,” said Amir Bousani, CEO and co-founder, RGo Robotics.

Goy, based in Paris, and Mori, based in Tokyo, join with extensive sales experience in the European and APAC markets. RGo is establishing an initial presence in Japan this year with growth in South Korea planned for late 2023.


“RGo has achieved impressive product maturity and growth since exiting stealth mode last year,” said Fainaro. “The company’s vision-based localization capabilities are industrial-grade, extremely precise and ready today for even the most challenging environments. This, together with higher levels of 3D perception, brings tremendous value to the rapidly growing mobile robotics market. I’m looking forward to working with Amir and the team to continue growing RGo in the year ahead.”

Villeneuve joins RGo’s board of directors with leadership experience in the robotics industry, including her time as COO and president of Amazon Robotics. “I am very excited to join the team,” said Villeneuve. “RGo’s technology creates disruptive change in the industry. It reduces cost and adds capabilities to mobile robots in logistics, and enables completely new applications in emerging markets including last-mile delivery and service robotics.”

Customer traction

After comprehensive field trials in challenging indoor and outdoor environments, RGo continued its commercial momentum with new customers. The design wins are with market-leading robot OEMs across multiple vertical markets, ranging from logistics and industrial autonomous mobile robots to forklifts, outdoor machinery and service robots.

Capra Robotics, an award-winning mobile robot manufacturer based in Denmark, selected RGo’s Perception Engine for its new Hircus mobile robot platform.

“RGo continues to develop game-changing navigation technology,” said Niels Jul Jacobsen, CEO of Capra and founder of Mobile Industrial Robots. “Traditional localization sensors either work indoors or outdoors – but not both. Combining both capabilities into a low-cost, compact and robust system is a key aspect of our strategy to deliver mobile robotics solutions to the untapped ‘interlogistics’ market.”

Activ Surgical completes first case with ActivSight (January 10, 2023)

Activ Surgical announced that it has completed its first case with ActivSight Intelligent Light, a module that can be attached to laparoscopic and robotic systems to provide enhanced visualization during surgery. 

The surgery was performed on December 22, 2022, at the Ohio State University Wexner Medical Center. Matthew Kalady, MD, FASCRS, the Chief of the Division of Colon and Rectal Surgery at the Wexner Medical Center, performed a laparoscopic left colectomy, which is the surgical removal of the left side of the large bowel, a procedure typically performed because of colon cancer. Dr. Kalady performed the surgery using the colorectal AI mode within ActivSight. 

“While using one of ActivSight’s intraoperative visual overlays, the dye-free ActivPerfusion Mode, I was able to clearly see key critical structures in the surgical site and tissue perfusion in real-time,” Dr. Kalady said in a release. “With the press of a button, the colorectal AI mode was enabled, removing distractions of background signals from non-bowel tissue and clearly focusing on perfusion to the colon. There was a clear difference in visualization during AI mode.”

In ActivSight’s ActivPerfusion mode, the system uses laser speckle technology on the entire view to show blood perfusion. When used with the device’s colorectal AI mode, the system isolates the ActivPerfusion display to the targeted tissue, which, in the case of the surgery performed by Dr. Kalady, was the colon. This makes it easier for the surgeon to focus on the targeted tissue and not become distracted by seeing blood perfusion across the entire scene. 

“With this procedure, we have shown that we can deploy proprietary models that have been trained with our datasets, annotated with our experts and our pipeline, and developed with our team and partners,” Activ Surgical CEO Shah-Bugaj said. “We are collaborating with global technology leaders to assist us with optimizing storage, integration, and inference. When all of this advanced tech is installed in the OR, our novel sensing brings it to life, and the results are incredible.”

Activ Surgical is currently conducting a clinical study with the Wexner Medical Center. The study seeks to determine the utility and usability of ActivSight. 

ActivSight was cleared by the FDA in 2021 and has been used in first-in-human/IRB studies. The module received CE Mark approval in 2022, allowing Activ Surgical to commercialize the enhanced imaging system across the European Union (EU). 

Orbbec, Microsoft launch 3D vision camera (January 5, 2023)

Orbbec and Microsoft released its latest 3D camera the Femto Mega. | Source: Orbbec

3D camera manufacturer Orbbec launched its latest product, the Femto Mega, at CES 2023. Femto Mega was built in partnership with Microsoft. The depth camera uses Microsoft’s time-of-flight (ToF) technology for precise scene understanding over a wide 120-degree field of view and a broad range from 0.25 m to 5.5 m. The 1-megapixel depth camera is complemented by a high-performance 4K-resolution RGB camera with a 90° FOV.

A built-in NVIDIA Jetson Nano is used to run advanced depth vision algorithms to convert raw data to precise depth images. This eliminates the need for an external PC or compute device, Orbbec said.

The camera can be directly connected to servers or the cloud using the Power over Ethernet (PoE) connection for both data and power. The device also has USB-C 3.2 and DC power supply connectors. A 6DOF IMU module provides orientation. The universal trigger control system provides accurate frame synchronization and uses standard ethernet cables for multi-camera and multi-sensor networks. The SDK enables setup and registration, and a set of APIs allows integration with various applications.
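The SDK’s “registration” step mentioned above generally means mapping each depth pixel into the RGB camera’s frame so every 3D point can carry a color. A minimal numpy sketch of that projection follows; the intrinsics, extrinsics and baseline are placeholder values for illustration, not Femto Mega calibration data, which the camera’s SDK would normally supply and apply for you.

```python
import numpy as np

# Placeholder intrinsics/extrinsics (illustrative only, not Femto Mega calibration data).
K_depth = np.array([[525.0, 0, 320.0], [0, 525.0, 288.0], [0, 0, 1.0]])
K_color = np.array([[1100.0, 0, 960.0], [0, 1100.0, 540.0], [0, 0, 1.0]])
R = np.eye(3)                       # rotation from depth frame to color frame
t = np.array([0.05, 0.0, 0.0])      # assumed 5 cm offset between the two sensors

def register_depth_to_color(depth_m, color):
    """Project each depth pixel into the color image and return colored 3D points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m.ravel()
    valid = z > 0
    # Back-project depth pixels to 3D points in the depth camera frame.
    x = (u.ravel() - K_depth[0, 2]) * z / K_depth[0, 0]
    y = (v.ravel() - K_depth[1, 2]) * z / K_depth[1, 1]
    pts = np.stack([x, y, z], axis=1)[valid]
    # Transform into the color camera frame and project with its intrinsics.
    pts_c = pts @ R.T + t
    uc = (K_color[0, 0] * pts_c[:, 0] / pts_c[:, 2] + K_color[0, 2]).astype(int)
    vc = (K_color[1, 1] * pts_c[:, 1] / pts_c[:, 2] + K_color[1, 2]).astype(int)
    inside = (uc >= 0) & (uc < color.shape[1]) & (vc >= 0) & (vc < color.shape[0])
    return pts_c[inside], color[vc[inside], uc[inside]]
```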

“Orbbec’s Femto Mega extends the use of Microsoft’s depth technology, used in Hololens and Azure Kinect DevKit, to a broad range of industrial applications,” Jon Yee, the depth PM director at Microsoft, said. “This camera is a result of a close collaboration between our teams and will be an essential tool helping AI developers to add depth perception to computer vision.”

“Our large-scale cargo digitization solution is built using Microsoft’s Azure Kinect and Azure and is commercially deployed in Singapore. It is also in commercial trials for flight capacity optimization and digital handling at multiple international airports,” Dr. Suraj Nair, CTO at Speedcargo Technologies, Singapore, said. “Orbbec’s Femto Mega enables us to maintain compatibility with our current system while reducing the size, cost and complexity of our solution. This will enable ease of scaling our operations to new locations.”

“Femto Mega is aimed at expanding the use of 3D vision in various industry solutions,” Amit Banerjee, head of platform and partnerships at Orbbec, said. “We’re excited to introduce this new category-leading intelligent camera as part of Orbbec’s new depth vision platform.”

Owl AI launches 3D Thermal Ranger evaluation kit (January 3, 2023)

Owl AI offers a monocular 3D thermal imaging product that can help cars see at night and determine how far they are from living objects. | Source: Owl AI

Owl Autonomous Imaging (Owl AI) has launched a monocular 3D Thermal Ranger computer vision product for advanced driver assistance systems (ADAS) and autonomous vehicles.

The company has also launched an Evaluation Kit for the Thermal Ranger that gives Tier 1 and OEM automotive companies the ability to easily evaluate Owl AI’s Thermal Ranger imaging solution for use in their Pedestrian Automatic Emergency Braking (PAEB) applications. 

Thermal imaging can allow systems to identify and range objects in complete darkness or blinding light, conditions that traditional autonomous vehicle and ADAS sensors, like cameras, typically struggle with. Owl AI’s monocular thermal camera system enables 2D and 3D perception for object classification, 3D segmentation of objects, RGB-to-thermal fusion and distance measurements. 

“Our unique, patented solutions deliver panoramic thermal imaging and dense range maps that are superior to HDR RGB cameras, and/or LiDAR or RADAR sensors for difficult night and blinding light situations,” Chuck Gershman, CEO of Owl Autonomous Imaging, said in a release. “Unlike cumbersome stereo camera approaches, a single Owl AI 3D thermal camera delivers distance information throughout the entire field of view (FOV) and is immune to vehicle vibration for reliable and robust mapping. Thermal sensing is an important camera modality which can see in complete darkness as well as in blinding light, which is critical for pedestrian safety.”

Thermal imaging systems like Owl AI’s can be especially useful for identifying living objects, like pedestrians, cyclists and animals, in all conditions, whether it be in the middle of the day, at night or during harsh weather. 

Owl AI’s 3D Thermal Ranger provides VGA image resolution, but the company hopes that in the near future the Ranger can provide a 150x improvement in resolution and point-cloud density over other sensing modalities. The system can detect pedestrians, cyclists, animals and other vehicles while also calculating their position and direction. 
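To make that “position and direction” output concrete, the toy sketch below combines a per-pixel range map with a detector bounding box: the median range inside the box gives distance, and the box center’s offset from the image center, scaled by the field of view, gives a bearing. The field of view and image size are placeholder assumptions, not Owl AI parameters, and the real system fuses far more information than this.

```python
import numpy as np

HFOV_DEG = 75.0   # placeholder horizontal field of view, not an Owl AI spec

def object_range_and_bearing(range_map_m, box, image_width):
    """Estimate distance and bearing to a detected object from a per-pixel range map.

    range_map_m: 2D array of ranges in meters (e.g. from a thermal ranging camera)
    box: (x0, y0, x1, y1) pixel bounding box from the detector
    """
    x0, y0, x1, y1 = box
    patch = range_map_m[y0:y1, x0:x1]
    distance = float(np.median(patch[patch > 0]))        # robust to missing pixels
    # Bearing: horizontal offset of the box center from the optical axis.
    cx = (x0 + x1) / 2.0
    deg_per_px = HFOV_DEG / image_width
    bearing = (cx - image_width / 2.0) * deg_per_px      # degrees, positive = right of center
    return distance, bearing

# Example: a pedestrian-sized box in a synthetic 640x512 range image
rng = np.full((512, 640), 12.0)
print(object_range_and_bearing(rng, (300, 200, 340, 320), 640))  # ~ (12.0 m, ~0 deg)
```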

A breakdown of Owl AI’s thermal imaging software. | Source: Owl AI

The company’s Evaluation Kit for the Thermal Ranger platform consists of a hardware and software kit that can help automotive companies to evaluate the platform for use in their applications. The Evaluation Kit supports SAE L2, L2+, L3 and L4 requirements. 

The Thermal Ranger Platform consists of a thermal imaging camera, an NVIDIA Jetson AGX Orin AI processor and the Owl AI software suite, including Convolutional Neural Networks (CNNs), ROS applications, artificial intelligence (AI) and machine learning (ML) frameworks, drivers and the necessary cables and adapters. 

The system also comes with all required software installed and ready to use. This includes the software required for the operation of the NVIDIA processor, as well as the following modules: 

  • Owl AI/ML Neural Networks
  • Autonomous Emergency Braking application
  • 3D Birds-eye-view application
  • Object segmentation
  • Raw thermal video viewer
  • Raw thermal video recorder
  • Thermal with both 2D and 3D bounding boxes and colorized range data

Owl AI was founded in 2018 by Chuck Gershman and Eugene Petilli. The company has raised a total of $16.2 million to date, according to Crunchbase.

IDS launches new higher resolution Ensenso N 3D camera (December 22, 2022)

The resolution and accuracy have almost doubled on the Ensenso N camera while the price has remained the same. | Credit: IDS

The Ensenso N-series 3D cameras have a compact body made of aluminum or a plastic composite, depending on the model, and a pattern projector built right in. They can be used to take pictures of both still and moving objects. The integrated projector projects a high-contrast texture onto the objects in question.

A pattern mask with a random dot pattern adds texture to surfaces that have little or no visible structure of their own. This makes it possible for the cameras to produce detailed 3D point clouds even when the lighting is poor.
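With the projected dots giving the stereo matcher texture to lock onto, depth then follows from the usual triangulation relation Z = f·B/d (focal length times baseline over disparity). The short sketch below uses made-up focal-length and baseline values rather than Ensenso calibration data, and also hints at why a higher-resolution sensor tightens accuracy: a one-pixel matching error costs less depth when pixels are finer.

```python
# Depth from stereo disparity: Z = f * B / d
# f: focal length in pixels, B: stereo baseline in meters, d: disparity in pixels.
# The values below are illustrative, not Ensenso N calibration constants.
f_px = 1400.0        # focal length expressed in pixels
baseline_m = 0.10    # assumed 10 cm baseline between the two imagers

def depth_from_disparity(disparity_px):
    return f_px * baseline_m / disparity_px

for d in (20.0, 50.0, 100.0):
    print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(d):.2f} m")

# A one-pixel matching error at 50 px disparity shifts depth by roughly
# f*B/50 - f*B/51, about 5.5 cm here, which is why more pixels improve accuracy.
```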

The Ensenso models N31, N36, N41 and N46 supersede the previously available N30, N35, N40 and N45. Visually, the cameras are identical to their predecessors. Internally, however, the cameras leverage the new IMX392 sensor from Sony. This sensor has a higher resolution of 2.3 MP over the prior 1.3 MP. All cameras are pre-calibrated and therefore easy to set up. The Ensenso selector on the IDS website helps to choose the right model.

With Ensenso N, users can choose from a series of 3D cameras that give reliable 3D information for a wide range of applications, whether they are fixed in place or being moved around by a robot arm. The cameras show their worth when they are used to pick up single items, support industrial robots that are controlled remotely, help with logistics, and even help to automate high-volume laundry.

The most recent update of the IDS NXT software includes the ability to detect anomalies in addition to object detection and classification. Only a minimum of training data is required to reliably identify both known and unknown deviations.

LUCID launches the Atlas10 camera featuring an ultraviolet sensor (December 21, 2022)

LUCID Vision Labs, Inc., today announced the series production of its new Atlas10 camera featuring the Sony IMX487 ultraviolet (UV) sensor.

The ATX081S-UC 10GigE PoE+ UV camera, equipped with the high-UV-sensitivity 8.1 MP Sony IMX487 global shutter CMOS sensor, is capable of capturing images across the ultraviolet light spectrum in the 200 to 400 nm range. Utilizing the unique back-illuminated pixel structure of Sony’s Pregius S technology, the Atlas10 camera’s high UV sensitivity makes it ideal for industrial applications requiring greater precision, such as inspection of transparent materials (plastic and PET), semiconductor pattern defect inspection, material sorting and more.

The Atlas10 10GBASE-T camera is known for its industrial reliability, offering Power over Ethernet (PoE+), robust M12 and M8 connectors, Active Sensor Alignment for superior optical performance, and a wide ambient temperature range of -20°C to 55°C.

“The Atlas10 UV offers excellent sensitivity in the UV wavelength and is packed with industrial features designed to provide high-speed and reliable operation in challenging environments,” says Rod Barman, President at LUCID Vision Labs. “The Sony IMX487 offers improved quantum efficiency, high dynamic range and reduced noise, enabling high-quality imaging for a broad range of advanced sensing applications.”

The Atlas10 is a GigE Vision and GenICam-compliant camera capable of 10 Gbps data transfer rates and allows the use of standard CAT6 cables up to 25 meters. Atlas10 features Power over Ethernet (PoE+) that simplifies integration and reduces cost.

All LUCID cameras conform to the GigE Vision 2.0 and GenICam3 standards and are supported by LUCID’s own Arena software development kit. The Arena SDK provides customers with easy access to the latest industry standards and software technology. The SDK supports Windows, Linux 64bit and Linux ARM operating systems, and C, C++, C# and Python programming languages.

Inuitive sensor modules bring VSLAM to AMRs (December 13, 2022)

Inuitive introduces the M4.5S (center) and M4.3WN (right) sensor modules that add VSLAM for AMRs and AGVs.

Inuitive, an Israel-based developer of vision-on-chip processors, launched its M4.5S and M4.3WN sensor modules. Designed to integrate into robots and drones, both sensor modules are built around the NU4000 vision-on-chip (VoC) processor, which adds depth sensing and image processing with AI and Visual Simultaneous Localization and Mapping (VSLAM) capabilities.

The M4.5S provides robots with enhanced depth from stereo sensing along with obstacle detection and object recognition. It features a field of view of 88x58 degrees, a minimum sensing range of 9 cm (3.54″) and a wide operating temperature range of up to 50 degrees Celsius (122 degrees Fahrenheit). The M4.5S supports the Robot Operating System (ROS) and has an SDK that is compatible with Windows, Linux and Android.

The M4.3WN features tracking and VSLAM navigation based on fisheye cameras and an IMU together with depth sensing and on-chip processing. This enables free navigation, localization, path planning, and static and dynamic obstacle avoidance for AMRs and AGVs. The M4.3WN is designed in a metal case to serve in industrial environments.

“Our new all-in-one sensor modules expand our portfolio targeting the growing market of autonomous mobile robots. Together with our category-leading vision-on-chip processor, we now enable robotic devices to look at the world with human-like visual understanding,” said Shlomo Gadot, CEO and co-founder of Inuitive. “Inuitive is fully committed to continuously developing the best performing products for our customers and becoming their supplier of choice.”

The M4.5S and the M4.3WN sensor modules’ primary processing unit is Inuitive’s all-in-one NU4000 processor. Both modules are equipped with depth and RGB sensors that are controlled and timed by the NU4000. Data generated by the sensors is processed in real time at a high frame rate by the NU4000 and then used to generate depth information for the host device.

Stereolabs launches ZED-X cameras for indoor, outdoor robots (December 7, 2022)

Stereolabs today launched the ZED-X stereo camera line, which is designed for robots operating in indoor and outdoor environments. With navigation and obstacle detection capabilities, Stereolabs said the ZED-X is designed for robots in agriculture, construction, logistics and last-mile delivery.

Stereolabs said the ZED-X features an IP66-rated aluminum enclosure, GMSL2 connection and native multi-camera synchronization. Available in two form factors, the ZED-X and the ZED-X Mini, the stereo cameras provide 3D perception at a range of 0.2 to 20 meters for navigation, and up close at a range of 0.08 to 12.5 meters for object detection during core process automation.

It features a 1920×1200 global shutter RGB sensor, rendering capabilities up to 120 fps, and a 3.0 µm pixel size for both low-light and bright conditions. Stereolabs said the built-in IMU combines a 16-bit digital triaxial accelerometer and a 16-bit digital triaxial gyroscope for accurate detection of motion and measurement of orientation.

Stereolabs said the ZED-X cameras use a secure GMSL2 connection to support high-speed video data transfer. In a multigigabit point-to-point connection, GMSL2 transfers raw video data from the ZED-X to an AI gateway at a speed of up to 6 Gbps. For large robots, the company said additional cameras can be placed farther from the gateway, at up to 15 m distance, while still delivering lower latency with less power and a higher frame rate than USB 3.0, without EMI.

The ZED-X camera has been optimized for use with NVIDIA’s Jetson AGX Orin supercomputer. Each Jetson module can control four ZED-X cameras, reducing cost, weight and onboard space requirements.

In conjunction with the release of the ZED-X, Stereolabs is also launching a multi-camera management platform, ZED Hub, and a new 4.0 version of the ZED SDK. This combined solution enables a 360-degree surround view around the robot, which is necessary for safe navigation. Users can now fuse data from multiple cameras automatically.
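Mechanically, fusing several cameras into one surround view comes down to transforming each camera’s points into a common robot frame using known mounting extrinsics and stacking the results, which the ZED SDK’s fusion tooling automates along with calibration and time synchronization. The numpy sketch below illustrates only that geometric step; the four mounting poses are invented placeholders, not a ZED-X reference layout.

```python
import numpy as np

def pose(yaw_deg, tx, ty, tz):
    """4x4 homogeneous transform for a camera mounted at (tx, ty, tz), rotated about z."""
    yaw = np.radians(yaw_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(yaw), -np.sin(yaw), 0],
                 [np.sin(yaw),  np.cos(yaw), 0],
                 [0, 0, 1]]
    T[:3, 3] = [tx, ty, tz]
    return T

# Placeholder extrinsics: four cameras facing the front, right, back and left of the robot.
extrinsics = [pose(0, 0.3, 0, 0.5), pose(90, 0, 0.2, 0.5),
              pose(180, -0.3, 0, 0.5), pose(270, 0, -0.2, 0.5)]

def fuse_point_clouds(clouds, extrinsics):
    """Transform each camera's Nx3 point cloud into the robot base frame and stack them."""
    fused = []
    for pts, T in zip(clouds, extrinsics):
        homog = np.hstack([pts, np.ones((len(pts), 1))])
        fused.append((homog @ T.T)[:, :3])
    return np.vstack(fused)

clouds = [np.random.rand(100, 3) for _ in extrinsics]   # stand-in for per-camera output
print(fuse_point_clouds(clouds, extrinsics).shape)       # (400, 3)
```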

“Today’s robots need to navigate in harsh environments and respond quickly as they operate. Building an affordable, industrial-grade surround 3D perception solution is critical to production-scale deployment of next-generation robotics and smart analytics,” said Cecile Schmollgruber, CEO of Stereolabs. “Our camera-based solution dramatically simplifies 360-degree spatial perception, and is backed by an ecosystem of tools to integrate and control them at a price point that makes it easy to add 3D vision to any machine.”

The ZED-X is available now for preorder at $599, and the ZED-X Mini at $549 per camera. The ZED SDK 4.0 and ZED Hub management software are available immediately and are compatible with the entire ZED range of cameras.

Robotics Engineering Week to address critical development issues (November 4, 2022)

Robotics Engineering Week, produced by The Robot Report and WTWH Media, kicks off on Tuesday, November 8, 2022. Those interested in attending can still register for all sessions of the event.

Despite the monumental potential of new robotics-enabling technology and substantial social and business drivers, the pace of development for new robotics technologies, products and services has been painfully slow.

The complexity of developing robotics systems, together with the unending crush of technological innovation, has hampered innovation and slowed robotics product releases. This, in turn, has placed companies – both start-ups and mature firms – at risk.

Robotics Engineering Week is a digital event series featuring keynotes and panels designed to deliver the information and guidance that engineers, technical managers, business development professionals, researchers and others need to build the next generation of commercial robotics systems more quickly and easily. 

The complete agenda for Robotics Engineering Week is below.


Tuesday, Nov. 8

Session: Intelligent Sensing for Object Recognition, Manipulation and Control
Brual Shah, Co-Founder and CTO, GrayMatter Robotics, and Jeff Mahler, Co-founder and CTO, Ambi Robotics
11:00 AM ET

Grasping and manipulation, the ability to directly and physically interact with and modify objects in the environment, is perhaps the greatest differentiator between robotic systems and all other classes of automated systems. Many types of robots make use of advanced sensing solutions – from tactile, to vision, proprioceptive, and more – to identify, pick up and operate on all manner of objects, with goals ranging from providing human-like dexterity and autonomous manipulation, to high precision repeatability, and on to superhuman strength and endurance. During this Robotics Engineering session, attendees will learn of the latest sensing technologies and techniques commercially available to support object recognition, grasping, manipulation and control, as well as solutions emerging from the lab that will allow for whole new classes of robotics applications.

Session: Using Simulation for the Design and Development of Robotics Systems
Erin Rapacki-Bishop, Senior PMM Robotics & Isaac Sim, NVIDIA
2:00 PM ET

The development of robots and robotic technology requires the mastery of multiple disciplines, primarily software development and mechanical and electrical engineering. Robotics development is made even more difficult as it is limited by embedded and real-time constraints. Commercial viability adds additional burdens for the robotics developer. Solution providers have responded to these difficulties by providing a whole host of robotics design and development tools, simulation and testing tools, as well as ready-made robotic ‘platforms’, all of which dramatically simplify the job of designing, developing, testing and manufacturing robots and robotic products. This Robotics Engineering Week session will provide an overview of current robotics development solutions, as well as highlight development trends.


Wednesday, Nov. 9

Session: Grounding Your Cloud-based Robotics Initiatives for Success
Andrei Kholodni, Principal Technologist, Wind River, and Brian Gerkey, CEO/Cofounder, Open Robotics
11:00 AM ET

Machine learning (and deep learning) technologies and techniques have found great success in enabling advanced robotics capabilities such as decision-making, object identification, vision processing, autonomous navigation, motor control, sensor integration and other functions, as well as speech, facial and emotion recognition. Moreover, robotics designers and engineers can also take advantage of different types of distributed execution architectures – edge, fog and cloud – to optimize their systems and their intended applications. While the large number and variety of machine learning alternatives for robotics development and deployment is beneficial, they can also result in confusion and indecision, particularly given the rapid rate of technological innovation and product introduction. In this Robotics Engineering Week session, designed to provide some much-needed clarity, attendees will learn how the latest AI and machine learning technologies and techniques can be employed in ground-based, aerial and maritime systems to make robots more intelligent and functional. 

Session: Advanced Motion Control Solutions for Robotics Systems
Brian Coyne, VP of Engineering, Harmonic Drive LLC
2:00 PM ET

‘Motion’ in the physical world, whether in the form of changing place, position or posture, is perhaps the greatest differentiator between robotic systems and all other classes of engineered products. It is motion that makes robotics systems ‘robotic’, and it is advances in motion control technologies that have spurred robotics innovation, with the result that there has been a dramatic increase in the use of robotics technologies and products around the globe. In this Robotics Engineering Week session, attendees will learn how support for robotic motion control has improved with the introduction of new products and technologies, and how they allow for new capabilities, new applications, and entry into new markets. Case studies and product examples will be used to highlight salient points. 


Thursday, Nov. 10

Session: Intelligent Vision and Sensing Solutions for Autonomous Mapping and Navigation
Paul Baim, VP Product Management & Systems Engineering, DreamVu Inc
11:00 AM ET

Commercial robotic systems typically require multiple types of sensors to capture information about the physical world, which, following fusion and further processing, allows them to localize themselves, navigate while avoiding obstacles, and provide additional information. The number, type, and quality of the onboard sensors vary depending on the price and target application for the platform. Common sensor types include 2D/3D imaging sensors (cameras), 1D and 2D laser rangefinders, 2D and 3D sonar sensors, 3D high-definition LiDAR, accelerometers, GPS and more. Thankfully, solution providers continue to release low-cost, increasingly powerful products, and new sensing technologies are always emerging. In this Robotics Engineering Week session, attendees will learn of the latest advances in sensing products and technologies, including use cases highlighting important trends and examples of the latest sensing trends and techniques.

Session: Motion Control for Healthcare Robotics Applications: Functional Requirements, Critical Capabilities
Prabh Gowrisankaran, VP of Engineering & Strategy, Performance Motion Devices
2:00 PM

Healthcare robotics share many areas of technical commonality with electrically powered medical devices, as well as the common goal of improving patient care. A key difference, however, is that for all robotics systems, motion and movement in the physical world is expected. For robots, motion (and motion control) is presumed and definitional. As such, motion control technologies and techniques are central considerations for any robotics engineering initiative. Compared to industrial and consumer motion control technologies, motion control solutions for healthcare applications typically have different, and often very stringent, functional requirements in areas such as safety, reliability, tolerances, cleanability, sterilization and more. In this Robotics Engineering Week session, attendees will learn about the leading functional requirements and critical capabilities of motion control solutions for healthcare robotics applications.

Zivid introduces 3D camera for deeper reach bin picking (October 12, 2022)

The L100 is designed to enable robotic picking in deeper, larger bins that are typical of the manufacturing industry. | Source: Zivid

Zivid announced the second member of its 3D camera family – the Zivid Two L100. The L100 has been developed to address the market’s appetite for a high-performance 3D camera with industrial-grade reliability that can tackle the larger, deeper bins typically seen in the manufacturing industry. With an extended working distance and larger working volume, it is also suited for machine tending and parcel induction operations in logistics scenarios.

The company’s existing industrial 3D camera has made a significant impact across multiple industries since its launch in 2021, demonstrating consistent and reliable calibration performance over time while deployed in challenging industrial settings. Equally suited for static and robot-mounted operation, with a short baseline of 120 mm and weighing in at a mere 940 grams, the L100 builds on those proven industrial-grade credentials.

The L100 is built on the established Zivid Two platform. It offers the same high resolution of 2.3 megapixels and has comparable characteristics in terms of point cloud fidelity, spatial resolution, point precision and dimensional trueness. It is fully compatible with the company’s SDK 2.8 and Zivid Studio development tools. Existing codebases developed with the Zivid Two can be used with this new 3D camera with ease. It is enclosed in the same robust magnesium enclosure as the Zivid Two and is compatible with the full range of Zivid Two accessories. All in all, it is a simple plug-in replacement for an existing Zivid Two when extra distance and reach are necessary.

L100 highlights:

  • Longer focus distance of 100 cm
  • Longer recommended working distance of 60 to 160 cm
  • Field of view: 105 x 62 cm at 100 cm
  • 2.3 megapixels resolution
  • Point cloud data as XYZ + RGB + SNR
  • High Dynamic Range of up to 23 stops
  • Spatial resolution of 0.56 mm @ 100 cm
  • Dimensional trueness > 99.7%
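As a quick sanity check, the spatial-resolution figure in the list follows from dividing the field of view by the number of pixels across it. The pixel count used below (roughly 1,900 columns for a 2.3 MP imager) is an assumption for illustration, not a published Zivid specification.

```python
# Rough sanity check of spatial resolution from field of view and pixel count.
# A sensor width of ~1900 px is an assumption for a 2.3 MP imager, not a Zivid spec.
fov_width_cm = 105.0       # field-of-view width at 100 cm, from the list above
pixels_across = 1900
spatial_res_mm = fov_width_cm * 10.0 / pixels_across
print(f"~{spatial_res_mm:.2f} mm per pixel at 100 cm")   # ~0.55 mm, close to the 0.56 mm figure
```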

“We have a constant dialog with our customers to hear what they want and need for their use cases. Zivid Two L100 is a result of such conversations. The L100 maintains the ability to always capture superbly accurate, detailed point clouds with robot-mounted flexibility. This enables our customers to consistently empty the bin, ensuring uninterrupted productivity. What the L100 introduces is a longer working distance to accommodate the longer gripper tools that are popular in bin-picking. Consequently, this 3D camera enables error-free picking from deeper bins without the robot and 3D camera having to enter the bin region,” Øyvind Theie, Zivid’s VP of Product, said. 

The introduction of the L100 with a longer working distance means that the original Zivid Two will now become the Zivid Two M70. The new identification scheme suffix allows customers to easily identify the focus distance of the Zivid Two 3D camera in centimeters.

The Zivid Two L100 industrial 3D camera is currently in volume production and available for order from the company’s sales and distribution partners.

Teledyne FLIR debuts SIRAS drone (September 15, 2022)

The IP-54-rated aircraft features a 31-minute flight time, radar-based front collision avoidance, and backpack portability, so professional UAV pilots can fly safely when and where the mission demands. | Credit: FLIR

Teledyne FLIR, part of Teledyne Technologies Incorporated, launched SIRAS, a professional drone that includes a quick-connect dual radiometric thermal and visible camera payload. Engineered for data security, performance, and affordability, SIRAS is optimized for industrial and utility inspection, public safety, firefighting, and search and rescue missions.

Teledyne acquired FLIR for $8 billion in a major merger early in 2021. Since the acquisition, the new division has continued to engineer new products and expand its portfolio of drone-specific options.

“Designed to provide pilots with the flexibility to get the job done, SIRAS delivers a geofence-free flight experience with thermal and visible imaging capabilities at $9,695 USD,” said Mike Walters, vice president of product management, Teledyne FLIR. “SIRAS is the only enterprise drone to currently incorporate the patented MSX technology, which overlays the edge detail from the visible camera on the thermal image to provide critical information in real-time.”

The IP-54-rated aircraft features a 31-minute flight time, radar-based front collision avoidance, and backpack portability, so professional UAV pilots can fly safely when and where the mission demands. The included Vue TV128 payload features a quick-connect gimbal, which provides imagery compatible with FLIR Thermal Studio and leading third-party photogrammetry applications. The 16MP visible camera can zoom 128x to pinpoint details. The integrated 640×512 pixel, radiometric Boson provides best-in-class thermal imagery, 5x digital zoom, and temperature measurement of every pixel in the scene.
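MSX itself is patented Teledyne FLIR technology, but the general idea of overlaying visible-light edge detail onto a thermal frame can be approximated in a few lines of OpenCV. The sketch below is a rough stand-in for illustration only, not FLIR’s algorithm, and the blend weight and Canny thresholds are arbitrary choices.

```python
import cv2
import numpy as np

def edge_enhanced_thermal(thermal_gray, visible_bgr, edge_weight=0.35):
    """Blend visible-light edges onto a thermal image (a rough MSX-style approximation)."""
    # Match the visible frame to the thermal resolution.
    vis_gray = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2GRAY)
    vis_gray = cv2.resize(vis_gray, (thermal_gray.shape[1], thermal_gray.shape[0]))
    # Extract edge detail from the visible image.
    edges = cv2.Canny(vis_gray, 80, 160)
    # Colorize the thermal frame and add the edges on top.
    thermal_color = cv2.applyColorMap(thermal_gray, cv2.COLORMAP_JET)
    edges_bgr = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
    return cv2.addWeighted(thermal_color, 1.0, edges_bgr, edge_weight, 0)

# Example with synthetic frames at the Boson's 640x512 resolution
thermal = np.random.randint(0, 255, (512, 640), dtype=np.uint8)
visible = np.random.randint(0, 255, (512, 640, 3), dtype=np.uint8)
overlay = edge_enhanced_thermal(thermal, visible)
```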

With a startup time of one minute, pilots can get eyes on the scene quickly and maintain control via a dual-band radio (2.4/5.8 GHz) connection, while hot-swappable batteries ensure efficient operation. To improve data security, SIRAS stores imagery on an onboard SD card and does not include cloud connection capability. Furthermore, pilots are not required to create an online profile, increasing ease of use and reducing potential unintended online data access.

The SIRAS aircraft was designed in collaboration with and is manufactured by Coretronic Intelligent Robotics Corporation (CIRC) in Taiwan, a subsidiary of Coretronic Group. Final payload integration and quality control are completed in the USA.

SIRAS will begin shipping in the fourth quarter of 2022 in the US. SIRAS is dual use and classified under US Department of Commerce jurisdiction as EAR 6A003.b.4.a.

Mobot raises $12.5M, launches robotic app testing platform (August 19, 2022)

Mobot uses robots to quality test apps. | Source: Mobot

Mobot announced that it brought in $12.5 million in Series A funding and that it launched its quality assurance-as-a-service (QA-as-a-service) platform for app testing. The company uses mechanical robots to automate the testing of repetitive, human-like functions in a real-world setting.

To use Mobot’s platform, users start by recording a video of the test they want performed. This step doesn’t require writing test plans or code, just recording a simple video. Next, users upload the video using the company’s self-serve test plan tool and tell Mobot how they want the test to be run, whether on iOS, Android or another platform. 

The Mobot team then converts the recorded test into an automated test using the Mobot platform and mechanical robots. Then, it’s time for the robots to take over and begin testing the mobile app. During testing, the robots record all results, data and reports within the platform. 

Users can view all of the results in the Mobot platform, and compare them with baseline test results, pass or fail flags and bug notes from their CSM that ensure noise is eliminated. 

Mobot’s platform can automate tests for a number of challenging use cases, including complex hardware and software interaction, streaming data stability, backward compatibility testing and critical action stability. The platform is currently being used by companies like Citizen, Persona, Branch, Mapbox and Radar. 

“Mobot has helped us increase our App Store rating from 4.2 to 4.8 and achieve a 99.9% crash-free rate,” Swamy Ramaswamy, CTO and COO at Sandboxx, said. “Our app is responsible for helping military service members send and receive physical letters with family, so stability is crucial. Mobot is a critical part of our QA workflow and regularly uncovers issues that weren’t surfaced by our internal software testing process.”

The platform eliminates thousands of hours of manual testing, increases testing efficiency and captures more bugs in-app before app store launches than software can do on its own. Mobot’s funding round was led by Cota Capital and included participation from Heavybit, Uncorrelated Ventures and more. 

“The limitations of QA software mean too many people hours are burned testing mobile apps to make sure they ship right the first time. Mobot’s non-obvious insight to use physical robots for manual testing is an ingenious solution,” Adit Singh, partner at Cota Capital, said. “Its fleet of robots are more reliable and do the job with accuracy and consistency. Mobot helps eliminate tedious, manual testing so customers can ship their mobile apps with confidence, every time.”

MIT helps robots retrieve objects buried in a pile (July 12, 2022)

Tara Boroushaki, Fadel Adib and Nazish Naeem (from left to right) working with FuseBot. | Source: James Day, MIT Media Lab

Researchers at the Massachusetts Institute of Technology (MIT) developed a system that can help robots retrieve target items from a pile of things as long as some of the items have RFID tags.

RFID tags reflect signals sent to them from an antenna. According to a recent market report from Accenture, more than 90% of U.S. retailers use RFID tags. However, these tags aren’t universal, and oftentimes in e-commerce warehouses employees are dealing with piles of objects that have a mix of tagged and untagged items.

The system created at MIT, called FuseBot, builds off of previous work from the school in which researchers demonstrated a robotic arm that combines visual information and radio frequency (RF) signals to find objects that were tagged with RFID tags. Now, FuseBot can find items even if the target item doesn’t have a tag, as long as some items within the pile do.

“What this paper shows, for the first time, is that the mere presence of an RFID-tagged item in the environment makes it much easier for you to achieve other tasks in a more efficient manner. We were able to do this because we added multimodal reasoning to the system — FuseBot can reason about both vision and RF to understand a pile of items,” Fadel Adib, associate professor in the Department of Electrical Engineering and director of the Signal Kinetics group in the MIT Media Lab, said.

The system uses a robotic arm that is equipped with a video camera and RF antenna. It uses the camera to scan a pile of objects and create a 3D model of the environment. At the same time, FuseBot sends signals from its antenna to locate the RFID tags in the pile. These radio waves can pass through most solid surfaces, so the robot is able to see into the pile.

FuseBot knows that its target item doesn’t have a tag, so it knows that the item won’t be at the exact location as any of the tags. It combines the information it gathered from its antenna with its current 3D model of the environment and then highlights areas where the target item could be located.

FuseBot uses this information to reason about the objects in the pile and the location of RFID tags to decide which items to remove to get to the targeted item. The team’s goal is for FuseBot to find the item in as few moves as possible.

This type of reasoning can be difficult for FuseBot, as the robot doesn’t know how the objects are oriented under the pile or how soft or hard the object is. Some objects on the bottom of the pile could even be deformed because of heavier objects pressing into them. To overcome these obstacles, the robot uses probabilistic reasoning. It uses the information it does have about the size and shape of an object and its nearest RFID tag location to create a model of the 3D space an object is likely to occupy.
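The researchers’ exact formulation lives in their paper, but the flavor of that reasoning can be shown with a toy 2D version: grid cells near a non-target item’s RFID tag are unlikely to hide the untagged target, and the robot prefers to remove the surface item covering the most remaining probability mass. The grid size, footprints and scoring below are illustrative assumptions, not the FuseBot implementation.

```python
import numpy as np

GRID = 20                       # toy 20x20 top-down grid over the pile (illustrative)

def target_probability_map(tag_cells, footprint=2):
    """Uniform prior over the pile, suppressed around RFID-tagged (non-target) items."""
    p = np.ones((GRID, GRID))
    for (r, c) in tag_cells:
        p[max(r - footprint, 0):r + footprint + 1,
          max(c - footprint, 0):c + footprint + 1] = 0.05
    return p / p.sum()

def best_item_to_remove(prob_map, surface_items):
    """Pick the surface item whose footprint covers the most target-probability mass."""
    def covered_mass(item):
        r0, c0, r1, c1 = item["footprint"]
        return prob_map[r0:r1, c0:c1].sum()
    return max(surface_items, key=covered_mass)

# Two tagged items are located from RF measurements; the untagged target is elsewhere.
prob = target_probability_map(tag_cells=[(5, 5), (14, 12)])
surface = [{"name": "box",     "footprint": (3, 3, 8, 8)},
           {"name": "sweater", "footprint": (9, 2, 16, 9)}]
print(best_item_to_remove(prob, surface)["name"])   # removes the item hiding more unknown area
```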

After removing each object, FuseBot scans the pile again and reasons again about which object will be the next best to remove.

“If I give a human a pile of items to search, they will most likely remove the biggest item first to see what is underneath it. What the robot does is similar, but it also incorporates RFID information to make a more informed decision. It asks, ‘How much more will it understand about this pile if it removes this item from the surface?’” Tara Boroushaki, research assistant in the Signal Kinetics group, said.

The MIT team ran more than 180 experimental trials with FuseBot using piles of household items, like office supplies, stuffed animals and clothing. Each new pile had a random number of items and RFID-tagged items.

FuseBot was able to extract its target item 95% of the time, an increase from the 84% success rate of similar robotic systems. It was able to get to the target item with 40% fewer moves than typical systems, allowing it to retrieve items more than twice as fast.

Moving forward, the MIT team hopes to incorporate more complex models into the FuseBot so that it does better with soft, deformable objects. The team is also interested in moving objects in different ways, such as a robotic arm that pushes items out of the way instead of grabbing them.

Analog Devices launches affordable iToF 3D camera for robotics applications (June 30, 2022)

The Analog Devices time-of-flight 3D camera has a variety of robotics and automation applications. | Source: Analog Devices

Analog Devices, Inc. announced the industry’s first high-resolution, industrial-quality, indirect time-of-flight (iToF) module for 3D depth sensing and vision systems. With a one-megapixel sensor, the new ADTF3175 module offers image resolution that is double or triple the pixel count of competitive solutions. The camera offers +/- 3 mm accuracy for machine vision applications ranging from industrial automation to logistics, healthcare and augmented reality.

The ADTF3175 camera is unique in its design and is an alternative to existing stereo 3D and LiDAR camera solutions, offering another way to extract 3D data from a scene. For robotic machine builders, this new sensor option offers higher resolution than other iToF cameras currently on the market.
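Indirect ToF does not time individual photon round trips; it measures the phase shift of a modulated illumination signal and converts it to distance as d = c·Δφ / (4π·f_mod). The short numerical illustration below uses a generic modulation frequency, not the ADTF3175’s actual settings; practical modules typically combine several frequencies to unwrap phase and extend the usable range.

```python
import math

C = 299_792_458.0            # speed of light, m/s

def itof_depth_m(phase_shift_rad, mod_freq_hz):
    """Indirect time-of-flight: distance from the phase shift of a modulated signal."""
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

def unambiguous_range_m(mod_freq_hz):
    """Beyond this range the phase wraps around and the depth becomes ambiguous."""
    return C / (2 * mod_freq_hz)

f_mod = 75e6                                     # example modulation frequency, not an ADI spec
print(itof_depth_m(math.pi / 2, f_mod))          # quarter-cycle phase shift -> ~0.5 m
print(unambiguous_range_m(f_mod))                # ~2 m before phase wrapping at this frequency
```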

The ADTF3175 is a fully calibrated time-of-flight 3D camera. | Source: Analog Devices

The ADTF3175 is fully calibrated for depth data and ready to image a scene. The module in its current form is designed to be integrated into a larger structure such as a mobile robot base, robotic gripper or camera body. Analog Devices offers other chip sets that easily integrate with the module and process the output from the ADTF3175.

The robust, high-resolution module is specifically designed to perform in a range of environmental settings and leverages state-of-the-art triple junction vertical-cavity surface-emitting laser (VCSEL) technology from Lumentum Operations LLC, a leading provider of VCSEL arrays for light detection and ranging (LiDAR) and 3D sensing applications, to enable sensing in a wide range of lighting conditions. 

The ADTF3175 offers both greyscale and 3D annotated image frames from the camera. | Credit: Analog Devices

“We are thrilled to work with ADI on solutions for the industry’s most demanding and highest resolution 3D sensing applications, ranging from extended reality to industrial applications like robotics, intelligent buildings, and logistics systems,” said Téa Williams, Senior Vice President and General Manager of 3D Sensing at Lumentum. “Our 10W VCSEL arrays allow ADI to enable more capable sensing and vision systems that can operate under a wide range of lighting conditions and thereby removing environmental obstacles to broader and more rapid machine vision deployment.”

The ADTF3175 features an infrared illumination source with optics, laser diode and driver, and a receiver path with a lens and an optical band-pass filter. The module also includes flash memory for calibration and firmware storage plus power regulators to generate local supply voltages. It comes pre-programmed with several operating modes that are optimized for long and short range.

“Machine vision needs to make the leap to perceiving smaller, more subtle objects faster in industrial environments that often include harsh conditions and multiple stimuli,” said Tony Zarola, Senior Director for ToF at Analog Devices. “The ADTF3175’s unmatched resolution and accuracy allows vision and sensing systems – including industrial robots – to take on more precision-oriented tasks by enabling them to better understand the space they’re operating in and ultimately improve productivity. Bringing this to market helps bridge a major gap and accelerates deployment of the next generation of automation solutions and critical logistics systems.”

The ADTF3175 module will be accompanied with an open-source reference design for implementing the full system, all of the required drivers and access to ADI’s sophisticated depth processing capabilities. ADI also offers guidance on how to achieve Class One eye safety certification for the end product.

Analog Devices is partnering with Microsoft for some of the software elements in the imaging solution.

Analog Devices Product Line Director Erik Barnes states that preproduction samples are available for evaluation now, and full scale production will begin in Fall 2022. Price of the unit is $197 in 1,000 unit quantities, making this an affordable 3D sensing option.


ADTF3175 Module Technical Specs

  • Unit Size: 42mm x 31mm x 15.1mm
  • Image resolution: 1024 x 1024
  • Field of View (FOV): 75° x 75°
  • Accuracy, 0.4 to 4m: +/- 3mm
  • QMP (512 x 512 binned)
    • Short range (~0.1m – 1m)
    • Long range (~ 0.4 – 5m)
  • 1MP (1024 x 1024)
    • Short range (~0.1m – 1m)
    • Long range (~0.4 – 4m)
  • Reflectance range: 15% – 90%
  • Ambient temperature: -20°C to 65°C
  • Equivalent sunlight < 5,000 lux
    • ~1 W/m2 @ 940 nm +/-25nm

Download data sheet and order samples: https://www.analog.com/en/products/adtf3175

Teledyne gives Sapera Vision software an AI upgrade (May 20, 2022)

Teledyne announced that its Sapera Vision Software Edition 2022-05 is now available. Sapera Vision Software from Teledyne DALSA offers field proven image acquisition, control, image processing and artificial intelligence functions to design, develop and deploy high-performance machine vision applications. The new upgrades include enhancements to its AI training graphical tool Astrocyte and its image processing and AI libraries tool Sapera Processing.

“We are excited to introduce new features to the Astrocyte and Sapera Processing packages. With the new tiling feature users can detect the smallest defects on large images at native resolution,” Brandon Hunt, Product Manager for Teledyne’s Vision Solutions group, said. “In this latest update we have included improved anomaly detection algorithms, live video acquisition from frame grabbers, and new functionality that delivers increased performance and a better user experience. Sapera Processing 9.30 offers improvements on the AI and 3D tool.” 

Sapera Vision Software is ideal for applications such as surface inspection on metal plates, location and identification of hardware parts, detection and segmentation of vehicles and noise reduction on x-ray medical images.

New features in this release:

  • Tiling on Large Images – A mechanism for handling large images at native resolution to avoid losing precision on very small defects/objects.
  • Increased Performance on Anomaly Detection – A new Anomaly Detection algorithm with better performance on low to medium resolutions.
  • YOLOX Object Detection – An additional object detection algorithm with higher performance and lower footprint.
  • Processing of Non-Square Images – A flexible mechanism for preparing images for training and inference without distorting the original aspect ratio.
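As a generic illustration of the tiling idea in the first bullet, running detection on overlapping crops at native resolution so tiny defects are not lost to downscaling, the sketch below slices a large image into tiles, runs a detector per tile and shifts the resulting boxes back into full-image coordinates. The tile size, overlap and stub detector are placeholders, not the Sapera Processing API.

```python
import numpy as np

def iter_tiles(image, tile=1024, overlap=128):
    """Yield overlapping tiles of a large image along with their top-left offsets."""
    h, w = image.shape[:2]
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            yield image[y:y + tile, x:x + tile], (x, y)   # edge tiles may be smaller

def detect_small_defects(image, detector):
    """Run a detector tile by tile at native resolution and merge results to image coords."""
    detections = []
    for patch, (ox, oy) in iter_tiles(image):
        for (x0, y0, x1, y1, score) in detector(patch):
            detections.append((x0 + ox, y0 + oy, x1 + ox, y1 + oy, score))
    return detections

# Stand-in detector that returns no boxes; a trained model would be plugged in here.
dummy_detector = lambda patch: []
big_image = np.zeros((4000, 6000), dtype=np.uint8)
print(len(detect_small_defects(big_image, dummy_detector)))   # 0 with the stub detector
```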

Teledyne DALSA is a part of Teledyne’s Vision Solutions group and a leader in the design, manufacture and deployment of digital imaging components for machine vision. Teledyne DALSA image sensors, cameras, smart cameras, frame grabbers, software and vision solutions are at the heart of thousands of inspection systems around the world and across multiple industries.
