Microprocessors / SoCs Archives - The Robot Report
https://www.therobotreport.com/category/technologies/microprocessors-socs/

Inuitive announces NU4100 IC robotics processor
Mon, 26 Sep 2022 | https://www.therobotreport.com/inuitive-announces-nu4100-ic-robotics-processor/


Inuitive’s NU4100 IC can be used for robotics, drones, VR and edge-AI applications. | Source: Inuitive

Inuitive Ltd., a Vision-on-Chip processor company, announced the launch of its NU4100, an expansion of its vision and AI IC portfolio. Based on Inuitive’s unique architecture and an advanced 12nm process technology, the NU4100 IC supports an integrated dual-channel 4K ISP, enhanced AI processing and depth sensing in a single-chip, low-power design, which the company says sets a new industry standard for edge-AI performance.

The NU4100 is the second generation of the NU4x00 series of products. The NU4x00 series is ideal for robotics, drones, VR, and edge-AI applications that demand multiple sensor aggregation, processing, packing and streaming. It is specifically designed for robots and other applications that must sense and analyze the environment using three, six or more cameras, as they make real-time actionable decisions based on that input.

“Robot designers demand higher resolutions, an ever-increasing number of channels, and high-performing, enhanced AI and VSLAM capabilities,” Shlomo Gadot, Inuitive’s CEO, said. “The NU4100 addition to the Vision-on-Chip series of processors is a true revolution, based on all integrated vision capabilities combined in a single, complete-mission computer chip. The integrated dual-camera ISP provides much-needed flexibility without having to add more components, which, in turn, would require additional processing power at a higher price point.”

Mr. Gadot also said, “Inuitive is committed to bringing the most advanced technology to the market. NU4500, the next processor in our roadmap, is planned for tape-out in Q1 2023 with an additional eight ARM A55 cores, more than double the AI compute power, and an H.265 & H.264 video encoder and decoder, and is to be the ultimate single-chip solution for robotics applications.”

The NU4100 supports multi-camera designs and can simultaneously process and stream two imager channels of up to 12MP, or 4K resolution, each at 60 frames per second (fps), while running advanced AI networks. The IC raises the level of integration for products using Inuitive technology, increasing AI processing power 2X-4X while consuming 20% less power than Inuitive’s first generation.
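
For a sense of the data rates involved, here is a back-of-the-envelope calculation of the raw pixel throughput implied by that dual-stream spec; the bit depth is an assumption for illustration, not an Inuitive specification.

```python
# Rough throughput of two simultaneous 12MP streams at 60 fps.
# BITS_PER_PIXEL is an assumed raw bit depth, not a published spec.
MEGAPIXELS = 12e6      # pixels per frame, per channel
FPS = 60               # frames per second
CHANNELS = 2           # simultaneous imager channels
BITS_PER_PIXEL = 10    # assumption; depends on sensor readout mode

pixels_per_sec = MEGAPIXELS * FPS * CHANNELS           # 1.44e9 px/s
gbits_per_sec = pixels_per_sec * BITS_PER_PIXEL / 1e9  # ~14.4 Gbit/s
print(f"{pixels_per_sec / 1e9:.2f} Gpx/s, ~{gbits_per_sec:.1f} Gbit/s raw")
```

Even at modest bit depths, the chip is moving on the order of tens of gigabits per second of raw image data while the AI engine runs.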

The new NU4100 has already been adopted by consumer electronics and metaverse industry leaders, who have secured it for their upcoming products. Customer products powered by the NU4100 will be available starting in Q1 2023.

“Robots are increasingly reliant on vision processors. Their ability to perceive and understand the environment is fundamental to achieving a higher level of robot autonomy,” Dor Zepeniuk, CTO and VP of Product at Inuitive, said. “Processing streams of input from multiple cameras expands the robot’s independence and flexibility, while the integrated dual-channel 4K ISP improves the system’s capabilities. Both, in turn, serve the end goal of designing powerful products that are lower in cost.”

Main features and capabilities of the new NU4100 include:

  • Proprietary Inuitive Depth Vision Accelerators (IDVA):
    • High-throughput, low-latency, depth-from-stereo HW engine
    • SLAM HW Accelerators
    • General purpose Imaging/Vision engines
  • Dual-camera ISP unit – up to 12MP per video stream
  • Dual-core vision DSP with 384 GOPS – optimized for computer vision functions
  • Efficient AI engine with 3.2 TOPS of processing power for DNNs
  • ARM Cortex-A5 CPU running Linux OS
  • Connectivity for up to six camera devices
  • Fast interfaces – USB 3.0, MIPI CSI/DSI – Rx & Tx, LPDDR4 and more

The high-resolution imaging and advanced AI processing provided by the new IC can benefit many other edge-AI applications. Industry 4.0 facilities, for example, can leverage the high edge-AI performance and image resolution for improved process control and a higher level of automation. Likewise, drones can use the ISP and neural network-based vision effects, such as low-light enhancement, to operate autonomously in both dark and lit environments.

The NU4100 samples are already available, and the IC will be ready for mass production by January 2023.

Acceleration Robotics, AMD partner to design robotic compute architectures with ROS
Fri, 23 Sep 2022 | https://www.therobotreport.com/acceleration-robotics-amd-partner-to-design-robotic-compute-architectures-with-ros/


AMD’s system-on-modules and system-on-chips are the target of the company’s collaboration with Acceleration Robotics. | Source: AMD

Acceleration Robotics and AMD are working together to develop new robotic capabilities for AMD’s Kria system-on-modules (SoMs) and adaptive system-on-chips (SoCs). Acceleration Robotics is a robotics semiconductor startup based in Basque Country, Spain. 

Before Xilinx was acquired by AMD, the company and Acceleration Robotics worked together to create robotics-specific hardware designs, with the goal of allowing roboticists to use Xilinx’s adaptive computing solutions with the Robot Operating System (ROS).

ROS is used by many robotics companies to build real, commercial robots. About 55% of the total commercial robots to be shipped in 2024 will use ROS, according to ABI Research, a market-foresight advisory firm. 

Now, Acceleration Robotics is expanding its partnership and working with AMD and AMD products optimized for ROS 2. The collaboration will be led by Victor Mayoral-Vilches, a former Systems Architect at Xilinx and founder of Acceleration Robotics. 

“We are excited to see our continued effort with Xilinx extending into AMD and look forward to a close collaboration with their engineering teams on producing architectural blueprints for robotics using ROS,” Mayoral-Vilches said. “AMD’s technology is a perfect fit for robots, wherein latency and determinism rule over everything else. We’re especially thrilled to explore how AMD FPGAs, adaptive SoCs and SOMs, as well as other compute solutions can be mixed together to create robot-specific processing units, what we call Robotic Processing Units (RPUs).”

The collaboration will focus on improving robotics computations with ROS 2 for areas like the robotics message-passing infrastructure, robotics perception, control or navigation, among other areas. 
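
To ground what “message-passing infrastructure” means here: in ROS 2, nodes exchange data over publish/subscribe topics, and that pub/sub path is one of the latency-critical pieces acceleration work targets. Below is a minimal, generic rclpy publisher; it is an illustrative ROS 2 example, not code from either company.

```python
# Minimal ROS 2 (rclpy) publisher: the pub/sub message path whose latency
# and determinism hardware-acceleration efforts aim to improve.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class TickPublisher(Node):
    def __init__(self):
        super().__init__('tick_publisher')
        self.pub = self.create_publisher(String, 'ticks', 10)
        self.count = 0
        self.create_timer(0.1, self.on_timer)  # publish at 10 Hz

    def on_timer(self):
        msg = String()
        msg.data = f'tick {self.count}'
        self.count += 1
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(TickPublisher())


if __name__ == '__main__':
    main()
```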

AMD completed its acquisition of Xilinx in February 2022 for $49 billion. Xilinx’s technology found its home in AMD’s Adaptive and Embedded Computing Group, a newly formed segment of the company. 

Acceleration Robotics was founded in 2020. The company designs customized compute “brains” to help shorten robots’ response times. Its main product offering is ROBOTCORE, a hardware acceleration framework for ROS that helps developers build custom compute architectures to make robots faster, more deterministic and more power-efficient.

NVIDIA releases robotics development tools at GTC
Tue, 20 Sep 2022 | https://www.therobotreport.com/nvidia-releases-new-robotics-development-tools-gtc/

NVIDIA today announced a number of new tools for robotics developers during its GTC event, including Jetson Orin Nano system-on-modules, updates to its Nova Orin reference platform for autonomous mobile robots (AMRs) and cloud-based availability for its Isaac Sim technology.

NVIDIA expanded its Jetson lineup with the launch of Jetson Orin Nano system-on-modules for entry-level edge AI and robotics applications. The new Orin Nano delivers up to 40 trillion operations per second (TOPS) of AI performance, which NVIDIA said is 80x the performance of the prior generation, in the smallest Jetson form factor yet.

Jetson Orin features an NVIDIA Ampere architecture GPU, Arm-based CPUs, next-generation deep learning and vision accelerators, high-speed interfaces, fast memory bandwidth and multimodal sensor support. The Orin Nano modules will be available in two versions: the Orin Nano 8GB delivers up to 40 TOPS with power configurable from 7W to 15W, while the 4GB version delivers up to 20 TOPS with power configurable from 5W to 10W.

Orin Nano is supported by the NVIDIA JetPack software development kit and is powered by the same NVIDIA CUDA-X accelerated computing stack used to create AI products in such fields as industrial IoT, manufacturing, smart cities and more.
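
One quick way to confirm that accelerated stack is visible from Python on a Jetson module is through a CUDA-enabled PyTorch build, a common layer on top of JetPack. A minimal sketch, assuming JetPack and a CUDA build of PyTorch are installed:

```python
# Report the GPU that the CUDA stack exposes to Python.
# Assumes a CUDA-enabled PyTorch wheel on top of JetPack.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, {props.total_memory / 1e9:.1f} GB, "
          f"{props.multi_processor_count} SMs")
else:
    print("CUDA not available; check the JetPack/driver install")
```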

The Jetson Orin Nano modules will be available in January 2023 starting at $199.

“Over 1,000 customers and 150 partners have embraced Jetson AGX Orin since NVIDIA announced its availability just six months ago, and Orin Nano will significantly expand this adoption,” said Deepu Talla, vice president of embedded and edge computing at NVIDIA. “With an orders-of-magnitude increase in performance for millions of edge AI and ROS developers today, Jetson Orin is the ideal platform for virtually every kind of robotics deployment imaginable.”

The Orin Nano modules are form-factor and pin-compatible with the previously announced Orin NX modules. Full emulation support allows customers to get started developing for the Orin Nano series today using the AGX Orin developer kit. This gives customers the flexibility to design one system to support multiple Jetson modules and easily scale their applications.

The Jetson Orin platform is designed to solve tough robotics challenges and brings accelerated computing to over 700,000 ROS developers. Combined with the powerful hardware capabilities of Orin Nano, enhancements in the latest NVIDIA Isaac software for ROS put increased performance and productivity in the hands of roboticists.

Jetson Orin has seen broad support across the robotics and embedded computing ecosystem, including from Canon, John Deere, Microsoft Azure, Teradyne, TK Elevator and many more.


Isaac Sim in the clouds
NVIDIA also announced three ways to access its Isaac Sim robotics simulation platform in the cloud:

  • It will soon be available on the new NVIDIA Omniverse Cloud platform
  • It’s available now on AWS RoboMaker
  • Developers can now download it from NVIDIA NGC and deploy it to any public cloud

Roboticists will be able to generate large datasets from physically accurate sensor simulations to train the AI-based perception models on their robots. Synthetic data generated in these simulations improves model performance and provides training examples that often can’t be collected in the real world.
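
The technique behind that claim is usually domain randomization: scene parameters are sampled at random, and because the simulator knows them exactly, every rendered frame arrives with perfect ground-truth labels. The sketch below illustrates the idea in plain numpy; it is schematic only and does not use Isaac Sim’s APIs.

```python
# Schematic domain randomization: sample scene parameters that a renderer
# would consume; the same parameters double as exact ground-truth labels.
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_scene():
    return {
        "object_pose_xyz": rng.uniform(-1.0, 1.0, size=3).tolist(),  # meters
        "object_yaw_rad": float(rng.uniform(0, 2 * np.pi)),
        "light_intensity": float(rng.uniform(100, 2000)),  # arbitrary units
        "camera_height_m": float(rng.uniform(0.5, 2.0)),
    }

dataset = []
for i in range(1000):
    scene = sample_scene()
    # A real pipeline would render an image here; the sampled parameters
    # provide the labels (e.g., the object pose) for free.
    dataset.append({"id": i, "image": None, "label": scene["object_pose_xyz"]})

print(f"generated {len(dataset)} labeled synthetic samples")
```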

NVIDIA said the upcoming release of Isaac Sim will include NVIDIA cuOpt, a real-time fleet task-assignment and route-planning engine for optimizing robot path planning. Tapping into the accelerated performance of the cloud, teams can make dynamic, data-driven decisions, whether designing the ideal warehouse layout or optimizing active operations.
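
cuOpt’s own solver and API are not shown here, but the fleet task-assignment problem it accelerates can be illustrated with a toy greedy heuristic that hands each task to the nearest free robot:

```python
# Toy fleet task assignment: greedily give each task to the nearest free
# robot. Illustrates the problem class cuOpt tackles at scale; this is not
# cuOpt's algorithm or API.
import math

robots = {"r1": (0.0, 0.0), "r2": (10.0, 0.0), "r3": (5.0, 8.0)}  # positions
tasks = [(1.0, 1.0), (9.0, 2.0), (4.0, 7.0)]                      # pick points

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

free = dict(robots)
for task in tasks:
    best = min(free, key=lambda name: dist(free[name], task))
    print(f"task at {task} -> robot {best}")
    del free[best]
```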

Nova Orin updates
NVIDIA also shared new details about three Nova Orin reference platform configurations for AMRs. Two use a single Jetson AGX Orin — which runs the NVIDIA Isaac robotics stack and the Robot Operating System (ROS) with the GPU-accelerated framework — and one relies on two Orin modules.

NVIDIA said the Nova Orin platform is designed to improve reliability and reduce development costs for building and deploying AMRs. Nova Orin provides industrial-grade configurations of sensors, software and GPU-computing capabilities.

The Nova Orin reference architecture designs are provided for specific use cases. There is one Orin-based design without safety-certified sensors, and one that includes them, along with a safety programmable logic controller. The third architecture has a dual Orin-based design that depends on vision AI for enabling functional safety.

Sensor support is included for stereo cameras, LiDAR, ultrasonic sensors and inertial measurement units. The chosen sensors have been selected to balance performance, price and reliability for industrial applications. The stereo cameras and fisheye cameras are custom designed by NVIDIA in coordination with camera partners. All sensors are calibrated and time-synchronized, and come with drivers for reliable data capture. These sensors allow AMRs to detect objects and obstacles across a wide range of situations while also enabling simultaneous localization and mapping (SLAM).
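
Time synchronization matters because fusion and SLAM algorithms must pair measurements taken at the same instant. In ROS 2, stamped streams are commonly aligned with the message_filters package, as in the sketch below; the topic names are placeholders, not Nova Orin’s actual interfaces.

```python
# Pair camera and IMU messages whose timestamps are within 10 ms of each
# other. Topic names are placeholders for whatever the sensor drivers publish.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, Imu
import message_filters


class FusionNode(Node):
    def __init__(self):
        super().__init__('sensor_fusion')
        cam = message_filters.Subscriber(self, Image, '/camera/image_raw')
        imu = message_filters.Subscriber(self, Imu, '/imu/data')
        sync = message_filters.ApproximateTimeSynchronizer(
            [cam, imu], queue_size=30, slop=0.01)
        sync.registerCallback(self.on_pair)

    def on_pair(self, image_msg, imu_msg):
        self.get_logger().info('got a time-aligned camera/IMU pair')


def main():
    rclpy.init()
    rclpy.spin(FusionNode())


if __name__ == '__main__':
    main()
```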

NVIDIA provides two LiDAR options, one for applications that don’t need sensors certified for functional safety, and the other for those that do. In addition to these 2D LiDARs, Nova Orin supports 3D LiDAR for mapping and ground-truth data collection.

The base OS includes drivers and firmware for all the hardware and adaptation tools, as well as design guides for integrating it with robots. Nova can be integrated with a ROS-based robot application. The sensors will have validated models in Isaac Sim for application development and testing without the need for an actual robot.

The cloud-native data acquisition tools eliminate the arduous task of setting up data pipelines for the vast amount of sensor data needed for training models, debugging and analytics. State-of-the-art GEMs developed for Nova sensors are GPU accelerated with the Jetson Orin platform, providing key building blocks such as visual SLAM, stereo depth estimation, obstacle detection, 3D reconstruction, semantic segmentation and pose estimation.
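
Stereo depth estimation, one of the GEMs listed above, rests on the textbook pinhole relation: depth equals focal length times baseline divided by disparity. A short sketch with illustrative numbers (not Nova Orin camera parameters):

```python
# Depth from disparity for a calibrated stereo pair: Z = f * B / d.
# The focal length and baseline below are assumed example values.
import numpy as np

focal_px = 700.0    # focal length in pixels (assumed)
baseline_m = 0.12   # distance between the two cameras in meters (assumed)

disparity_px = np.array([70.0, 35.0, 7.0])   # matched-pixel offsets
depth_m = focal_px * baseline_m / disparity_px

for d, z in zip(disparity_px, depth_m):
    print(f"disparity {d:5.1f} px -> depth {z:4.1f} m")
```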

Nova Orin supports secure over-the-air updates, as well as device management and monitoring, to enable easy deployment and reduce the cost of maintenance. Its open, modular design enables developers to use some or all capabilities of the platform and extend it to quickly develop robotics applications.

NVIDIA is working closely with regulatory bodies to develop vision-enabled safety technology to further reduce cost and improve the reliability of AMRs.

How AI chipset bans could impact Chinese robotics companies
Thu, 01 Sep 2022 | https://www.therobotreport.com/how-ai-chipset-bans-could-impact-chinese-robotics-companies/

The US is restricting the sale of the most powerful artificial intelligence processors to China.

NVIDIA and AMD said on Wednesday that the United States government has ordered them to halt exports of certain AI chipsets to China, which is the world’s second-largest economy. Both companies now require licenses for the sale of AI chipsets to China.

The restrictions cover NVIDIA’s A100 and upcoming H100 integrated circuits, and any systems that include them. AMD said the new license requirements will stop its MI250 chips from being exported to China, but it doesn’t foresee a material impact on its business. NVIDIA, however, said the move could result in a loss of $400 million in sales this year.

NVIDIA said U.S. officials told it the new rule “will address the risk that products may be used in, or diverted to, a ‘military end use’ or ‘military end user’ in China.” NVIDIA, which has development and manufacturing operations in China, announced that the export restrictions do not cover the movement of materials related to the development and manufacturing of the H100 chip. The company confirmed it will be allowed to fulfill orders of the A100 and complete development of its H100 chip through its Hong Kong facility until Sept. 1, 2023.

“We are working with our customers in China to satisfy their planned or future purchases with alternative products and may seek licenses where replacements aren’t sufficient,” NVIDIA said in a statement. “The only current products that the new licensing requirement applies to are A100, H100 and systems such as DGX that include them.”

The NVIDIA A100 and H100 chipsets are Tensor Core GPUs designed to process enterprise workloads. NVIDIA’s ninth-generation H100 data center GPU features 80 billion transistors. Built on the Hopper architecture, the new accelerator is advertised as “the world’s largest and most powerful accelerator,” making it suited to intensive high-performance computing (HPC) and AI simulations.

The NVIDIA A100 Tensor Core GPU, launched in 2020, powered the highest-performing elastic data centers for artificial intelligence (AI), data analytics and HPC at the time. Its Ampere architecture provides up to 20X higher performance than its predecessors.

For AMD, the restrictions cover the Instinct MI250 Accelerator. Accelerators from AMD’s latest Instinct MI200 family are designed to fuel discoveries in mainstream servers and supercomputers, including some of the largest exascale systems, so that researchers may take on problems as diverse as climate change and vaccine development.

Impact on Chinese robotics companies

A number of robotic applications leverage AI to operate effectively. These include vision guidance for industrial robots performing bin picking, sorting and palletizing. Autonomous mobile robots use AI for perception and obstacle avoidance. Within the warehouse, many suppliers employ AI to optimize daily workflows, inventory placement and both goods-to-person and person-to-goods operations.

The class of GPUs coming under export controls is arguably the most powerful on the market. The chips in question are designed to be deployed in data center applications and embedded in enterprise-class servers. The likely reason for the export ban is that the US government wants to keep these chips from being diverted to other (i.e. military) applications.

These servers require lots of power, cooling and high-speed network connections to do their work. As a result, according to multiple sources The Robot Report spoke to, these specific chips are not likely to be engineered into embedded systems like a robot controller or an autonomous mobile robot controller. However, these GPUs are important for training deep learning or reinforcement learning models for a robotic application. Model training is computationally intensive and these chips help to accelerate that task.

Typically, model training is done using a set of servers in a data center, accessed over the network. Amazon, Google and Microsoft deploy thousands of these types of servers to support their web services. Many AMR, drone and robot manufacturers around the world are deploying the NVIDIA Jetson as an embedded controller, and this class of technology is not included in any restrictions.
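
The division of labor described above, training in the data center and inference on an embedded module, is easy to see in code. A minimal sketch with a placeholder model: train on whatever GPU is available, then export the network once for edge deployment (for example, to be consumed by an embedded runtime on a Jetson).

```python
# Train a placeholder model on a data-center GPU, then export it once for
# embedded inference. The network, data and shapes are all stand-ins.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):  # stand-in for the compute-heavy training phase
    x = torch.randn(32, 128, device=device)
    y = torch.randint(0, 10, (32,), device=device)
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# One-time export for deployment on an edge device.
torch.onnx.export(model.cpu(), torch.randn(1, 128), "policy.onnx")
print("exported policy.onnx")
```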

A source said if the NVIDIA block “is only on Ampere and Hopper, then that isn’t as severe as many (companies) still use the Jetson Xavier, which uses the predecessor to Ampere and works quite well.” However, the source added if future bans impact these lower-performance GPUs, then Chinese robotics companies could be in bad shape and could have to find alternative solutions.

Related impacts of US policy decisions on robotics

US policy has impacted Chinese robotics manufacturers in other ways recently.

In November 2021, the SPAC merger for autonomous trucking company Plus was blocked because of “developments in the regulatory environment outside of the United States.” At the time the story was reported, Plus had a partnership with China’s FAW, which is the world’s largest heavy truck manufacturer. Plus was working with some of the largest fleets in the US and China to pilot commercial freight operations.

A source told The Robot Report the US military is rejecting exoskeletons where the only part that was made in China is the cloth around them. 

Chinese company HIK Robotics has been banned from selling in the US due in part to the use of its cameras and vision technology by the Chinese government for surveillance operations. As a result, the company is focusing on sales in China and Korea.

Toposens introduces processing unit for 3D collision avoidance
Mon, 11 Apr 2022 | https://www.therobotreport.com/toposens-introduces-processing-unit-for-3d-collision-avoidance/


The Toposens Processing Unit DK. | Source: Toposens

Munich-based startup Toposens released its plug-and-play Toposens Processing Unit DK as a gateway between Toposens 3D Ultrasonic Sensors and customer applications.

The Toposens Processing Unit DK (TPU DK) makes it easier to integrate the company’s ultrasonic sensing technology into a customer’s existing platform to evaluate 3D collision avoidance. It interconnects the company’s 3D Ultrasonic Echolocation Sensor “ECHO ONE DK” to an automated guided vehicle (AGV) or autonomous mobile robot (AMR) infrastructure, enabling AGVs and AMRs to avoid collisions with all kinds of obstacles.

The TPU DK features advanced data-processing algorithms that communicate to an AGV’s or AMR’s control system, via standardized outputs, whether an obstacle violates the user-configurable warning and stop zones. This removes the hassle of complicated integration procedures. The user-friendly graphical user interface (GUI) visualizes the adjustable 3D warning and stop zones, which are configurable to suit different use cases.

Individual sensor parameters can also be adjusted in the GUI, together with the sensor’s position on the vehicle. Alternatively, the sensor offers a “fusion mode,” in which the TPU DK delivers pre-filtered point cloud data to another processing device via Ethernet.
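
Conceptually, the warning/stop-zone logic reduces to testing detected 3D points against regions defined in the vehicle frame. The sketch below uses two made-up box-shaped zones; the TPU DK’s actual zone model, shapes and thresholds may differ.

```python
# Classify detected 3D points against box-shaped warning and stop zones
# defined in the vehicle frame. Zone extents here are invented examples.
import numpy as np

STOP_ZONE = {"x": (0.0, 0.5), "y": (-0.4, 0.4), "z": (0.0, 1.5)}  # meters
WARN_ZONE = {"x": (0.0, 1.5), "y": (-0.8, 0.8), "z": (0.0, 1.5)}

def any_point_in_zone(points, zone):
    (x0, x1), (y0, y1), (z0, z1) = zone["x"], zone["y"], zone["z"]
    inside = ((points[:, 0] >= x0) & (points[:, 0] <= x1) &
              (points[:, 1] >= y0) & (points[:, 1] <= y1) &
              (points[:, 2] >= z0) & (points[:, 2] <= z1))
    return bool(inside.any())

cloud = np.array([[1.2, 0.1, 0.9],    # obstacle ahead, in the warning zone
                  [3.0, 2.0, 1.0]])   # far away, outside both zones
if any_point_in_zone(cloud, STOP_ZONE):
    print("STOP")
elif any_point_in_zone(cloud, WARN_ZONE):
    print("SLOW DOWN")
else:
    print("clear")
```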

“The combination of Toposens ECHO ONE DK and Toposens Processing Unit DK provides our customers with unprecedented 3D ultrasonic collision avoidance for AGVs and AMRs. By detecting even the most complex objects in 3D space with a small blind zone, customers are able to reduce costly accidents while ensuring the highest safety in any industrial environment,” Alexander Rudoy, CTO and co-founder of the company, said.

The TPU DK offers multiple communication interfaces for flexible deployment, provides a user-friendly GUI and consumes less than five watts. Further information is available on the TPU DK product page.

ADLINK partners with Qualcomm on SMARC module
Mon, 06 Dec 2021 | https://www.therobotreport.com/adlink-partners-qualcomm-smarc-module-robots-drones/


ADLINK Technology LEC-RB5 SMARC module. | Photo Credit: ADLINK Technology

ADLINK Technology recently released its LEC-RB5 SMARC module, the company’s first SMARC AI-on-Module based on a Qualcomm processor. The Qualcomm QRB5165 processor is designed for robotics and drone applications and integrates several IoT technologies in a single solution. SMARC stands for Smart Mobility Architecture.

The LEC-RB5 SMARC module provides on-device artificial intelligence (AI) capabilities, support for up to six cameras, and low power consumption. ADLINK said the LEC-RB5 SMARC module is capable of powering robots and drones in consumer, enterprise, defense, industrial and logistics sectors.

“Qualcomm Technologies’ portfolio of leading robotics and drones solutions is driving next-generation use cases including autonomous deliveries, mission critical use cases, commercial and enterprise drone applications and more,” said Dev Singh, senior director, business development and general manager of robotics, drones and intelligent machines, Qualcomm Technologies. “The Qualcomm QRB5165 solution supports the development of next generation high-compute, AI-enabled, low power robots and drones for the consumer, enterprise, defense, industrial and professional service sectors that can be connected by 5G. The ADLINK LEC-RB5 SMARC module will support the proliferation of 5G in robotics and intelligent systems.”

For robotics developers, ADLINK Technology said the LEC-RB5 SMARC module provides the capability to build robots for use in harsh industrial conditions and in temperatures that range from -30° to +85°C. The ADLINK LEC-RB5 SMARC module features:

  • Qualcomm Kryo 585 CPU (8x Arm Cortex-A77 cores)
  • Qualcomm Hexagon Tensor Accelerator (HTA) running up to 15 trillion operations per second (TOPS)
  • Support for six cameras: MIPI CSI cameras on CSI0 (2 lanes) and CSI1 (4 lanes)
  • Less than 12W of power consumption
  • Small 82 x 50 mm form factor

The LEC-RB5 is part of ADLINK’s portfolio of SMARC form factors that support both ARM and x86 designs. The module provides enhancements for computer vision applications with reduced latencies for real-time image processing decisions, freeing up capacity for other critical AI applications while delivering mobile-optimized CV experiences.

Hailo raises $136M to scale edge processors
Tue, 12 Oct 2021 | https://www.therobotreport.com/hailo-raises-136m-to-scale-edge-processors/


The Hailo-8 AI Processor for Edge Devices.

Israeli chipmaker Hailo raised $136 million in Series C funding today. The funding will be used to continue the development of its Hailo-8 processor. It also aims to expand into new and existing markets.

The Series C round was led by Poalim Equity and Gil Agmon. The round was joined by existing investors, including prominent Israeli entrepreneur and Hailo Chairman Zohar Zisapel, ABB Technology Ventures, Latitude Ventures, OurCrowd, and new investors, including Carasso Motors, Comasco, Shlomo Group, Talcar Corporation Ltd., and Automotive Equipment.

The round raised Hailo’s total funding to $224 million. According to Reuters, a source close to the company said the financing was done at a $1 billion valuation, making Hailo a unicorn.

The Hailo-8 processor features up to 26 tera-operations per second (TOPS), and it is smaller than a penny. The company claimed its Hailo-8 processor can perform deep learning tasks such as object detection and segmentation in real time with minimal power consumption, size, and cost.

“We are honored by this milestone round for an edge AI chip company and will use these significant resources to accelerate our aggressive plan to make advanced AI edge solutions more accessible to industries across the globe,” said Orr Danon, CEO and co-founder of Hailo. “This tremendous support is a testament to our unparalleled edge AI product line, and we look forward to empowering even smarter and swifter devices, and thus, a more robust future powered by AI.”

Hailo recently launched both its M.2 and Mini PCIe high-performance AI acceleration modules for empowering edge devices. It also recently partnered with MicroSys Electronics to launch the miriac AIP-LX2160A embedded platform, which can host up to five integrated Hailo-8 AI accelerator modules. The company claimed this solution offers processing performance and deep learning capabilities of up to 130 TOPS. Typical markets for the new application-ready bundle include collaborative robotics.

Hailo said it doubled its customer base to more than 100 clients over the last two fiscal quarters of 2021, as more enterprises seek out AI solutions that empower sensors and smart devices at lower cost and energy consumption with greater performance. The company has expanded its presence globally over the last year, opening offices in Tokyo, Taipei, Munich and Silicon Valley. In addition, the company recently announced a partnership with Macnica, a leading global semiconductor distributor in Japan.

‘Robomorphic computing’ aims to quicken robots’ response time
Thu, 21 Jan 2021 | https://www.therobotreport.com/robomorphic-computing-hasten-robots-response-time/


MIT developed ‘robomorphic computing,’ an automated way to design custom hardware to speed up a robot’s operation. | Credit: Jose-Luis Olivares, MIT

Contemporary robots can move quickly. “The motors are fast, and they’re powerful,” says Sabrina Neuman.

Yet in complex situations, like interactions with people, robots often don’t move quickly. “The hang up is what’s going on in the robot’s head,” she adds.

Perceiving stimuli and calculating a response takes a “boatload of computation,” which limits reaction time, says Neuman, who recently graduated with a PhD from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Neuman has found a way to fight this mismatch between a robot’s “mind” and body. The method, called “robomorphic computing,” uses a robot’s physical layout and intended applications to generate a customized computer chip that minimizes the robot’s response time.

The advance could fuel a variety of robotics applications, including, potentially, frontline medical care of contagious patients. “It would be fantastic if we could have robots that could help reduce risk for patients and hospital workers,” says Neuman.

Neuman will present the research at April’s International Conference on Architectural Support for Programming Languages and Operating Systems. MIT co-authors include graduate student Thomas Bourgeat and Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering and Neuman’s PhD advisor. Other co-authors include Brian Plancher, Thierry Tambe, and Vijay Janapa Reddi, all of Harvard University. Neuman is now a postdoctoral NSF Computing Innovation Fellow at Harvard’s School of Engineering and Applied Sciences.

Related: How Boston Dynamics’ robots learned to dance

There are three main steps in a robot’s operation, according to Neuman. The first is perception, which includes gathering data using sensors or cameras. The second is mapping and localization: “Based on what they’ve seen, they have to construct a map of the world around them and then localize themselves within that map,” says Neuman. The third step is motion planning and control — in other words, plotting a course of action.

These steps can take time and an awful lot of computing power. “For robots to be deployed into the field and safely operate in dynamic environments around humans, they need to be able to think and react very quickly,” says Plancher. “Current algorithms cannot be run on current CPU hardware fast enough.”

Neuman adds that researchers have been investigating better algorithms, but she thinks software improvements alone aren’t the answer. “What’s relatively new is the idea that you might also explore better hardware.” That means moving beyond a standard-issue CPU processing chip that comprises a robot’s brain — with the help of hardware acceleration.

Hardware acceleration refers to the use of a specialized hardware unit to perform certain computing tasks more efficiently. A commonly used hardware accelerator is the graphics processing unit (GPU), a chip specialized for parallel processing. These devices are handy for graphics because their parallel structure allows them to simultaneously process thousands of pixels. “A GPU is not the best at everything, but it’s the best at what it’s built for,” says Neuman. “You get higher performance for a particular application.”

Most robots are designed with an intended set of applications and could therefore benefit from hardware acceleration. That’s why Neuman’s team developed robomorphic computing.

The system creates a customized hardware design to best serve a particular robot’s computing needs. The user inputs the parameters of a robot, like its limb layout and how its various joints can move. Neuman’s system translates these physical properties into mathematical matrices. These matrices are “sparse,” meaning they contain many zero values that roughly correspond to movements that are impossible given a robot’s particular anatomy. (Similarly, your arm’s movements are limited because it can only bend at certain joints — it’s not an infinitely pliable spaghetti noodle.)
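
The computational payoff of that sparsity is concrete: a sparse matrix library, much like the specialized hardware Neuman’s system generates, stores and multiplies only the non-zero entries. A toy illustration with scipy (the matrix values are invented, not a real robot model):

```python
# A sparse kinematics-style matrix: zeros mark joint interactions that the
# robot's morphology rules out, so they are never stored or multiplied.
import numpy as np
from scipy.sparse import csr_matrix

dense = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.3, 1.0, 0.0, 0.0],
    [0.0, 0.5, 1.0, 0.0],
    [0.0, 0.0, 0.2, 1.0],
])  # invented values for a 4-joint chain
sparse = csr_matrix(dense)
print(f"{sparse.nnz} non-zeros out of {dense.size} entries")

v = np.ones(4)
print(sparse @ v)  # only the 7 stored products are computed
```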

Related: 8 degrees of difficulty for autonomous navigation

The system then designs a hardware architecture specialized to run calculations only on the non-zero values in the matrices. The resulting chip design is therefore tailored to maximize efficiency for the robot’s computing needs. And that customization paid off in testing.

Hardware architecture designed using this method for a particular application outperformed off-the-shelf CPU and GPU units. While Neuman’s team didn’t fabricate a specialized chip from scratch, they programmed a customizable field-programmable gate array (FPGA) chip according to their system’s suggestions. Despite operating at a slower clock rate, that chip performed eight times faster than the CPU and 86 times faster than the GPU.

“I was thrilled with those results,” says Neuman. “Even though we were hamstrung by the lower clock speed, we made up for it by just being more efficient.”

Plancher sees widespread potential for robomorphic computing. “Ideally we can eventually fabricate a custom motion-planning chip for every robot, allowing them to quickly compute safe and efficient motions,” he says. “I wouldn’t be surprised if 20 years from now every robot had a handful of custom computer chips powering it, and this could be one of them.” Neuman adds that robomorphic computing might allow robots to relieve humans of risk in a range of settings, such as caring for COVID-19 patients or manipulating heavy objects.

“This work is exciting because it shows how specialized circuit designs can be used to accelerate a core component of robot control,” says Robin Deits, a robotics engineer at Boston Dynamics who was not involved in the research. “Software performance is crucial for robotics because the real world never waits around for the robot to finish thinking.” He adds that Neuman’s advance could enable robots to think faster, “unlocking exciting behaviors that previously would be too computationally difficult.”

Neuman next plans to automate the entire system of robomorphic computing. Users will simply drag and drop their robot’s parameters, and “out the other end comes the hardware description. I think that’s the thing that’ll push it over the edge and make it really useful.”

Editor’s Note: This article was republished from MIT News.

Researchers develop powerful optical neuromorphic processor
Fri, 08 Jan 2021 | https://www.therobotreport.com/researchers-develop-powerful-optical-neuromorphic-processor/


Dr. Xingyuan Xu with the integrated optical microcomb chip, which forms the core part of the optical neuromorphic processor. | Credit: Swinburne University

An international team of researchers, led by Swinburne University of Technology, demonstrated what it claimed is the world’s fastest and most powerful optical neuromorphic processor for artificial intelligence (AI). It operates at more than 10 trillion operations per second (TeraOPs/s) and is capable of processing ultra-large-scale data.

The researchers said this breakthrough represents an enormous leap forward for neural networks and neuromorphic processing in general. It could benefit autonomous vehicles and data-intensive machine learning tasks such as computer vision.

Artificial neural networks can ‘learn’ and perform complex operations with wide applications. Inspired by the biological structure of the brain’s visual cortex system, artificial neural networks extract key features of raw data to predict properties and behaviour with unprecedented accuracy and simplicity.

The team was able to dramatically accelerate the computing speed and processing power of the optical neural networks. The team demonstrated an optical neuromorphic processor operating more than 1000 times faster than any previous processor, with the system also processing ultra-large scale images – enough to achieve full facial image recognition. Here is the researchers’ full paper, “11 TOPS photonic convolutional accelerator for optical neural networks” (PDF).

“This breakthrough was achieved with ‘optical micro-combs’, as was our world-record internet data speed reported in May 2020,” said Professor Moss, Director of Swinburne’s Optical Sciences Centre.

While state-of-the-art electronic processors such as the Google TPU can operate beyond 100 TeraOPs/s, this is done with tens of thousands of parallel processors, according to the researchers. In contrast, the optical system demonstrated by the team uses a single processor and was achieved using a new technique of simultaneously interleaving the data in time, wavelength and spatial dimensions through an integrated micro-comb source.

Operating principle of the photonic convolutional accelerator. | Credit: Swinburne University

Micro-combs are relatively new devices that act like a rainbow made up of hundreds of high-quality infrared lasers on a single chip. They are much faster, smaller, lighter and cheaper than any other optical source.

“In the 10 years since I co-invented them, integrated micro-comb chips have become enormously important and it is truly exciting to see them enabling these huge advances in information communication and processing. Micro-combs offer enormous promise for us to meet the world’s insatiable need for information,” says Professor Moss.

“This processor can serve as a universal ultrahigh bandwidth front end for any neuromorphic hardware —optical or electronic based — bringing massive-data machine learning for real-time ultra-high bandwidth data within reach,” said co-lead author of the study, Dr Xu, Swinburne alum and postdoctoral fellow with the Electrical and Computer Systems Engineering Department at Monash University.

“We’re currently getting a sneak peek of how the processors of the future will look. It’s really showing us how dramatically we can scale the power of our processors through the innovative use of microcombs,” Dr Xu explained.

RMIT’s Professor Mitchell adds, “This technology is applicable to all forms of processing and communications – it will have a huge impact. Long term, we hope to realise fully integrated systems on a chip, greatly reducing cost and energy consumption.”

“Convolutional neural networks have been central to the artificial intelligence revolution, but existing silicon technology increasingly presents a bottleneck in processing speed and energy efficiency,” said Professor Damien Hicks, from Swinburne and the Walter and Elizabeth Hall Institute.

He added, “This breakthrough shows how a new optical technology makes such networks faster and more efficient and is a profound demonstration of the benefits of cross-disciplinary thinking, in having the inspiration and courage to take an idea from one field and using it to solve a fundamental problem in another.”

Editor’s Note: This article was republished from Swinburne University of Technology.

NVIDIA DRIVE AGX Orin to power next generation of Einride autonomous Pods
Thu, 03 Dec 2020 | https://www.therobotreport.com/orin-nvidia-drive-agx-power-next-generation-einride-autonomous-pods/

Einride AB said today that the next generation of its Einride Pods will use NVIDIA Corp.’s DRIVE AGX Orin autonomous vehicle computing platform to handle high-speed, unmanned operations.

“Safety and functionality in autonomous drive are achieved in two ways: diversity and redundancy,” stated Pär Degerman, chief technology officer of Stockholm-based Einride. “To capture and account for the diversity in a myriad of operational scenarios, and to develop the redundancy necessary to improve functionality, we need the most advanced processors possible, and that’s where NVIDIA Orin comes in.”

Einride AETs to rely on Orin SoC

The company said its Einride Pods are unmanned and 100% electric trucks intended to make freight transport more sustainable and cost-effective. It has been working closely with NVIDIA since 2018, said Robert Falck, founder and CEO of Einride.

The next-generation Autonomous Electric Transport, or AET vehicles, will have SAE Level 4 autonomy. Einride said its next-generation Pod will take software-defined AET functionality from fenced-area operation (AET 1) to highways (AET 4), meeting a sizable portion of global transport needs.

This capability is enabled by the NVIDIA DRIVE AGX platform and its Orin system on a chip (SoC), which consists of 17 billion transistors featuring NVIDIA’s next-generation GPU architecture, Arm Hercules-AE CPU cores, new deep-learning accelerators and other autonomous vehicle processors.

Einride’s Pods previously used the DRIVE AGX Xavier system, Falck told The Robot Report. In comparison, a single Orin SoC delivers more than 200 trillion operations per second (TOPS), nearly seven times the performance of NVIDIA’s previous SoC, while being three times more power-efficient.


Bringing high-performance compute to the roads

“Einride’s next-generation Pods will be able to achieve scalability and autonomous functionality by leveraging high-performance, energy-efficient NVIDIA computing in the vehicle,” said Danny Shapiro, senior director of automotive at NVIDIA. “Our Orin SoC is born out of the data center — delivering the massive compute capability necessary to enable Einride to bring to market transport solutions that can safely increase productivity and improve utilization.”

“Cutting-edge freight transport solutions require the very best in capability,” Falck said. “NVIDIA AI technology has been instrumental to our development thus far, and we are excited to take the next step together to scale our functionality and the availability of the next-generation Pod worldwide.”

The new Pods are available for preorder worldwide and will begin shipping with AET 1 and 2 functionality by the end of 2021. Einride said it expects Pods at AET 3 and 4 by 2023.

CronAI senseEDGE platform joins Ouster lidar to bring 3D perception to robots, vehicles
Thu, 12 Nov 2020 | https://www.therobotreport.com/cronai-senseedge-joins-ouster-lidar-bring-3d-edge-perception/

Autonomous vehicles and robots need accurate perception to understand the world and interact with objects. Perception data processing provider CronAI yesterday announced that it has partnered with lidar supplier Ouster Inc. The companies said they will combine CronAI’s senseEDGE platform with Ouster’s lidar sensors to enable developers to create intelligent systems with 3D perception.

San Francisco-based Ouster was founded in 2015 and said its digital lidar architecture makes its sensors reliable, compact, and affordable while delivering camera-like image quality. The company has more than 800 customers in the robotics, transportation, and smart infrastructure industries, and it has secured $140 million in total funding.

Struck by the absence of intelligent, scalable, purpose-built “whole solutions” for 3D sensor data perception processing while developing applications for the defense industry, Tushar Chhabra and Saurav Argawala set out to bridge these critical gaps and break down performance bottlenecks when they founded CronAI in 2015. The New Delhi, India-based company is developing a 3D sensor data edge-processing platform for applications across mobility, transport infrastructure, smart spaces, automation, and security.

CronAI senseEDGE built from the ground up

CronAI said its senseEDGE platform is designed to address the acceleration requirements of 3D perception processing at the edge. The company described it as “a ground-breaking, artificially evolving, self-optimizing heterogeneous FPGA-based [field-programmable gate array] edge platform.”

SenseEDGE is intended to bridge the gap between complex 3D sensing dynamics and real-world applications, according to CronAI. It includes accelerated algorithms for 3D sensing, scalability to support next-generation algorithms and 3D sensing modalities, and user-optimized throughput and latency. SenseEDGE also has continuous real-time self-adaptation capabilities, the company said.


senseEDGE is an edge inference 3D sensor data perception processing platform. Source: CronAI

Ouster partners to get early access

The companies said they are providing Ouster’s partner and customer communities with early access to CronAI’s perception platform.

“This partnership is significant, representing collaboration between a leader in lidar sensor technologies and an AI software and hardware platform for 3D perception,” stated Tushar Chhabra, co-founder and CEO of CronAI. “Our heterogeneous software platform is specifically designed for processing 3D sensor data and for computer vision, and through our work together, Ouster can focus on building quality sensor technologies, while their customers can significantly mitigate their investment risks.”

“We’ve architected our platform to leverage the features of FPGAs supporting sequential, parallel and mixed workloads at lower precisions,” he said. “That all adds up to improvement in throughput, lower latency, and reduced power consumption. Now innovators can develop solutions using 3D sensors without the performance bottlenecks of traditional GPU compute hardware.”

Ouster said the acceleration of 3D sensor data perception on an edge-inference platform will enhance the value of its lidar technology by reducing customers’ research and development spending, reducing time to market, and increasing the efficiency of deployments.

“We’re proud to add CronAI to our partner ecosystem,” said Clement Kong, general manager of Asia-Pacific at Ouster. “A 3D perception platform with edge processing will help customers accelerate their projects and unlock the perception capabilities of high-resolution digital lidar. CronAI’s platform can help us reshape the autonomous world — unlocking the potential of smart spaces, transforming the way we secure critical infrastructure, and enabling new levels of industrial automation.”

By accelerating algorithms and neural networks on the product edge, Ouster said its customers can rapidly innovate and achieve higher performance per watt, performance per dollar, and throughput per TOP. The partnership with CronAI aims to overcome the traditional software and processing bottlenecks that customers have faced when building applications and solutions with 3D sensors.

CronAI recently opened its early access program to partners and began porting customer and third-party perception software. It is now accepting pre-orders for development kits, which will ship in early 2021.

Aeva makes SPAC deal to scale up production of 4D lidar on a chip for self-driving cars
Tue, 03 Nov 2020 | https://www.therobotreport.com/aeva-spac-deal-scale-production-4d-lidar-chip-self-driving-cars/


Source: Aeva

Special purpose acquisition company InterPrivate Acquisition Corp. yesterday announced that it has entered into a definitive agreement for a business combination with lidar maker Aeva. Mountain View, Calif.-based Aeva claims to be the first company to provide a perception platform built from the ground up on silicon photonics for mass-scale application in automotive, consumer electronics, and other sectors.

Soroush Salehian and Mina Rezk, former engineering leaders at Apple and Nikon, founded Aeva in 2017. The company said it is building the next generation of sensing and perception for autonomous vehicles and beyond. Aeva has a multidisciplinary team of more than 100 experienced leaders, engineers and operators, and said it is actively engaged with 30 of the top players in automated and autonomous driving across passenger, trucking and mobility.

InterPrivate is a blank-check company organized for the purpose of effecting a merger, share exchange, asset acquisition, stock purchase, recapitalization, reorganization, or other similar business combination with one or more businesses or entities. The special purpose acquisition company (SPAC) is controlled by affiliates of Chairman and CEO Ahmed M. Fattouh and InterPrivate LLC, a firm founded by Fattouh that invests on behalf of a consortium of family offices in partnership with independent sponsors from the private equity and venture capital industries.

Aeva offers 4D lidar on a chip

Aeva said its 4D LiDAR on Chip combines instant velocity measurements and long-range performance at affordable costs for commercialization at silicon scale.

With its software stack, Aeva plans to scale its perception platform to a range of industries beyond automotive, including consumer electronics, consumer health, industrial robotics, and security.

Unlike legacy lidar, which relies on time-of-flight (ToF) technology and measures only depth and reflectivity, Aeva uses frequency modulated continuous wave (FMCW) technology to measure velocity in addition to depth, reflectivity, and inertial motion. The company added that this draws significantly less power than other available technologies, including ToF, to bring perception to broad applications at an industry-leading cost.
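
The velocity measurement falls out of the FMCW math. With a triangular chirp, the beat frequencies measured on the up- and down-ramps mix a range term with a Doppler term, and the two can be separated with a sum and a difference. The sketch below uses illustrative parameters, not Aeva’s design values:

```python
# Separate range and radial velocity from triangular-chirp FMCW beat
# frequencies. All parameter values are illustrative assumptions.
C = 3.0e8             # speed of light, m/s
WAVELENGTH = 1.55e-6  # telecom-band lidar wavelength, m (assumed)
BANDWIDTH = 1.0e9     # chirp bandwidth, Hz (assumed)
T_RAMP = 10e-6        # duration of one ramp, s (assumed)

def range_and_velocity(f_beat_up, f_beat_down):
    f_range = (f_beat_up + f_beat_down) / 2    # range-only component
    f_doppler = (f_beat_down - f_beat_up) / 2  # Doppler component
    rng = C * T_RAMP * f_range / (2 * BANDWIDTH)
    vel = WAVELENGTH * f_doppler / 2           # positive = approaching
    return rng, vel

# Example: a target about 50 m away closing at about 10 m/s.
r, v = range_and_velocity(f_beat_up=20.4e6, f_beat_down=46.2e6)
print(f"range {r:.1f} m, radial velocity {v:.2f} m/s")
```

Time-of-flight systems, by contrast, must infer velocity by differencing positions across frames, which is one reason the instant per-point velocity of FMCW is valuable for perception.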

“From the beginning, we believed that the only way to achieve the holy grail of lidar is to be integrated on a chip,” stated Mina Rezk, co-founder and chief technology officer of Aeva. “Over the last four years, we did it by leveraging Aeva’s unique coherent FMCW approach. With today’s announcement, we can use our development efforts to expand into new markets that were simply not possible before.”


4D LiDAR on Chip. Source: Aeva

Partners for production

In September, Aeva announced a production partnership with ZF Friedrichshafen, one of the world’s largest Tier 1 automotive suppliers, to supply what it described as the first automotive-grade 4D lidar to global OEM customers. The partnership will combine Aeva’s expertise in FMCW lidar with ZF’s experience in mass production of automotive-grade sensors.

“Our vision has been to create a fundamentally new sensing system to enable perception across all devices,” said Soroush Salehian, co-founder and CEO of Aeva. “This milestone accelerates our journey toward delivering the next paradigm in perception to mass-market applications, not just in automotive, but [also] consumer and beyond.”

In 2019, Aeva announced a partnership with Audi’s Autonomous Intelligent Driving entity. Aeva has also partnered with multiple other passenger car, trucking, and mobility platforms to further adoption of advanced driver-assistance systems (ADAS) and autonomous applications.


SPAC combination worth $2.1B

Aeva said it plans to use 100% of the net proceeds from the transaction to accelerate its growth and commercialization. The combined company will have an implied pro forma equity value of approximately $2.1 billion at closing, and Aeva’s existing stockholders will hold approximately 80% of the issued and outstanding shares of common stock immediately following the closing.

The transaction will provide Aeva with up to $363 million in gross proceeds, including InterPrivate’s $243 million held in trust and a $120 million fully committed common stock PIPE (private investment in public equity) at $10 per share, with investments from Adage Capital and Porsche SE, the major shareholder of VW Group.

“We look forward to our combination with Aeva, which was the clear stand-out amongst the 100+ merger targets we evaluated,” said Ahmed Fattouh, chairman and CEO of InterPrivate. “The company’s breakthrough technology combines the key advantages of lidar, radar, motion sensing, and vision in a single compact chip.”

“As a result of this transaction, including the upsized PIPE private placement, Aeva is not expected to require any additional funding to achieve significant cash flow through its commercial partnerships with world class customers,” he said. “Soroush, Mina and their team are revolutionizing sensing solutions, not only for the automotive industry, but ultimately across all devices.”

Related content: The Robot Report Podcast: Robotics investment trends; are Amazon drone deliveries coming?

Aeva transaction details

All Aeva stockholders, including Lux Capital, Canaan Partners, and Lockheed Martin, will retain their equity holdings through Aeva’s transition into a publicly listed company. Aeva previously raised $45 million in Series A funding in October 2018. Upon closing of the transaction, the combined company will be renamed “Aeva Inc.” and is expected to continue to be listed on the New York Stock Exchange and trade under the ticker symbol “AEVA.”

“Cash proceeds in connection with the transaction will be funded through a combination of (i) the issuance of approximately $120 million of common stock through a fully committed private placement at $10.00 per share, including investments from Adage Capital and Porsche SE, (ii) the issuance of $1.7 billion of new common stock of InterPrivate to current stockholders of Aeva subject to customary adjustments and (iii) $243 million of cash held in trust assuming no redemptions by InterPrivate’s existing public stockholders,” said the companies.

Completion of the proposed business combination is subject to, among other things, approval by InterPrivate and Aeva stockholders and the satisfaction or waiver of other customary closing conditions, including a registration statement being declared effective by the Securities and Exchange Commission. The transaction is expected to close in the first quarter of 2021. Following completion, Aeva will retain its management team.

NVIDIA Jetson can enable hybrid edge/cloud processing for robots, says Formant CEO
https://www.therobotreport.com/jetson-helps-robotics-developers-cloud-edge-formant-ceo/
Tue, 06 Oct 2020

When using Jetson devices, Formant users can now enable real-time video and image analytics in the cloud and perform PII scrubbing at the edge, writes CEO Jeff Linnell in this column.

Formant says it uses NVIDIA’s Jetson with its cloud platform for next-generation robotics controls. Source: Formant

Robotics began at the edge. Early robots were massive, immobile machines operating on factory floors, with plenty of space for storing what little data they required locally. Over recent years, however, robots have left the factory floor and are moving around in an increasing number and variety of environments. These robots are no longer refrigerator-sized automata punching out widgets.

Now, rather than worrying about workers bumping into robots, we have to worry about robots bumping into workers. The new unstructured environments that autonomous systems are venturing into are invariably fraught with obstacles and challenges. Humans can assist, but we need data, lots of it, and in real time. Companies like Formant have used cloud technology to meet those needs. We’ve enabled companies to observe, operate, and analyze this new wave of robotic fleets remotely and intuitively through our cloud platform.

This notion, however, of everything being pushed to the cloud all of the time, is beginning to come into question. Using the integrated GPU cores in NVIDIA’s Jetson platform, we can swing the pendulum back in the direction of the edge and reap its advantages. The combination of GPU-optimized edge data processing and Formant’s observability and teleoperation platform creates an efficient command-and-control center right out of the box.

When using Jetson devices, Formant users can now enable real-time video and image analytics in the cloud and perform PII scrubbing at the edge, ultimately sustaining more reliable connections and better privacy protections. This also allows for much greater cloud/edge portability, as the very same algorithms can run in both places. This hybridized model allows one to do away with “one-size-fits-all” solutions and opt for balance between one’s in-cloud and on-device operations.
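
As a concrete illustration of what edge-side PII scrubbing can look like, here is a minimal sketch that blurs detected faces before a frame ever leaves the device. It assumes OpenCV and its bundled Haar cascade; this is an illustrative stand-in, not Formant’s actual pipeline.

```python
# Minimal sketch of PII scrubbing at the edge: blur faces before a frame
# is uploaded. Assumes OpenCV; illustrative only, not Formant's pipeline.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def scrub_frame(frame):
    """Return a copy of the frame with every detected face blurred."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    out = frame.copy()
    for (x, y, w, h) in faces:
        roi = out[y:y + h, x:x + w]
        out[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return out

# Scrub locally, then hand only the sanitized frame to the uploader.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    safe = scrub_frame(frame)
    cv2.imwrite("sanitized.jpg", safe)  # stand-in for the cloud upload step
cap.release()
```

In a hybrid deployment, only the sanitized frames would be buffered or streamed to the cloud, which is what sustains the privacy protections described above.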

Find balance with cutting-edge encoding

An optimal teleoperation experience requires the proper balance among latency, quality, and computational availability. In the past, striking that balance wasn’t easy: prioritizing any one of the three came at the expense of the other two. By applying Formant’s tooling and the portability of NVIDIA’s DeepStream SDK, users can readjust that balance as needed to optimize data management for their specific use case.

Formant and Jetson. Source: Formant

The most immediately-useful capability we gain by using Jetson is hardware-accelerated video encoding. When Formant detects that you are using a Jetson or compatible device, it unlocks the option to automatically perform H.264 encoding at the edge. This enables high-quality transmission of full-motion video with substantially diminished bandwidth requirements, lower latency, and lower storage requirements if buffering the data for later retrieval.
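
On Jetson devices, one common route to the hardware encoder is GStreamer. The sketch below builds such a pipeline from Python, assuming JetPack’s nvv4l2h264enc element is available; the capture device, resolution, and bitrate are illustrative choices, not Formant’s production settings.

```python
# Sketch of hardware-accelerated H.264 encoding on a Jetson via GStreamer.
# Assumes JetPack provides the nvv4l2h264enc element; device, resolution,
# and bitrate are illustrative, not Formant's production settings.
import shlex
import subprocess

def jetson_h264_command(device: str = "/dev/video0",
                        bitrate_bps: int = 4_000_000,
                        out_path: str = "clip.mp4") -> str:
    """Build a gst-launch-1.0 command that captures 1080p30 video and
    encodes it on the Jetson's dedicated encoder block, leaving the CPU
    cores mostly free for other work."""
    return (
        f"gst-launch-1.0 -e v4l2src device={device} "
        f"! video/x-raw,width=1920,height=1080,framerate=30/1 "
        f"! nvvidconv "                            # move frames into NVMM memory
        f"! nvv4l2h264enc bitrate={bitrate_bps} "  # hardware H.264 encoder
        f"! h264parse ! mp4mux "
        f"! filesink location={out_path}"
    )

cmd = jetson_h264_command()
print(cmd)                              # inspect the pipeline before running
subprocess.run(shlex.split(cmd), check=True)
```

The same pipeline string can be handed to a DeepStream or GStreamer application instead of gst-launch-1.0; the point is that the encode stage runs on dedicated silicon rather than on the CPU cores.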

When it comes to measuring the performance of a teleoperation system, latency is one of the most important criteria to consider. This is even more so the case when dealing with video encoding, an exceedingly resource-intensive process that can easily introduce latency if pushed to the limit.

In our tests, encoding 1080p video at 30 frames per second without hardware acceleration pegged all six of our cores at 100% utilization, introducing a fair amount of latency into the pipeline. With our Jetson implementation activated, however, average CPU utilization for the system dropped below 25%. This not only improved latency significantly, it also freed up the CPU for other activities.

NVIDIA Jetson CPU load. Source: NVIDIA, Formant

Jetson could lead to the future of hybrid robotics

With Formant, one has the ability to fine-tune and balance what operations occur at the edge and in the cloud. We think this flexibility is a huge milestone in the business of robotics. Just imagine your chief financial officer saying that your LTE bill is way too high, your engineering team deciding that they need to use cheaper devices and smaller batteries, or that new data transmission and sovereignty regulations have just been passed. With the ability to determine what is done at the edge and what is done on the cloud, these otherwise heavy lifts become as simple as checking a box and swinging the pendulum.

At the moment, these are crucial decisions, and the edge/cloud balance must be struck by engineers. Formant provides interfaces to let you tune the system easily. Looking ahead, we envision automated, dynamic “load balancing” between the edge and the cloud: you define rules and budgets, and the system optimizes around them.

For example, when your robot is connected to Wi-Fi, power, and idle, you could automatically use this spare time and power to leverage the GPU for semantic labeling and data enrichment, then upload the data to the cloud while the bandwidth is cheap.
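
Such a rule might boil down to a simple policy check. The sketch below is a hypothetical illustration of that kind of edge/cloud decision; the state fields and thresholds are invented for the example and are not Formant’s API.

```python
# Hypothetical sketch of an edge/cloud "load balancing" rule of the kind
# described above. The RobotState fields and thresholds are illustrative;
# this is not Formant's actual API.
from dataclasses import dataclass

@dataclass
class RobotState:
    on_wifi: bool           # cheap Wi-Fi vs. metered LTE
    on_shore_power: bool    # plugged in, battery not a concern
    idle: bool              # no mission currently running
    gpu_utilization: float  # 0.0 to 1.0

def should_enrich_at_edge(state: RobotState) -> bool:
    """Run GPU semantic labeling / data enrichment locally only when the
    robot has spare power and cheap bandwidth for the follow-up upload."""
    return (
        state.on_wifi
        and state.on_shore_power
        and state.idle
        and state.gpu_utilization < 0.2
    )

state = RobotState(on_wifi=True, on_shore_power=True, idle=True,
                   gpu_utilization=0.05)
if should_enrich_at_edge(state):
    print("enrich at edge, then upload")   # e.g., label data, sync to cloud
else:
    print("defer: stream raw data or wait for a better window")
```

The value of the hybrid model is that changing such a rule is a configuration edit, not a re-architecture.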

There are clear reasons to choose either the cloud or the edge for your computation. It’s equally evident that this line will continue to shift and evolve.

Related content: The Robot Report Podcast: Tele-operating Spot, Veo on manufacturing challenges, Amazon goes to the mall

About the author:

Jeff Linnell is founder and CEO of Formant. He was previously head of product-robotics at Google and director of robotics at X, “the moonshot factory.”

Jetson Nano 2GB designed by NVIDIA to make AI, robotics development more accessible
https://www.therobotreport.com/jetson-nano-2gb-nvidia-makes-starting-ai-robotics-affordable/
Mon, 05 Oct 2020

NVIDIA said its Jetson Nano 2GB kit is designed to enable students and developers to affordably get started with AI, robotics, and IoT.


NVIDIA Corp. today expanded its Jetson AI at the Edge platform with an entry-level developer kit priced at just $59. The company said Jetson Nano 2GB makes artificial intelligence and robotics available to a new generation of students, educators, and hobbyists.

The Jetson Nano 2GB Developer Kit is designed to enable people to learn about AI by creating hands-on projects in areas such as robotics and the Internet of Things (IoT). Santa Clara, Calif.-based NVIDIA also announced the availability of free online training and AI certification programs, which will supplement the many open-source projects, how-tos, and videos contributed by thousands of developers in its Jetson community.

“While today’s students and engineers are programming computers, in the near future they’ll be interacting with, and imparting AI to, robots,” stated Deepu Talla, vice president and general manager of edge computing at NVIDIA. “The new Jetson Nano is the ultimate starter AI computer that allows hands-on learning and experimentation at an incredibly affordable price.”

Kit joins Jetson AI at the Edge platform

The Jetson Nano 2GB Developer Kit is the latest offering in NVIDIA’s Jetson AI at the Edge platform, which ranges from entry-level AI devices to advanced platforms for fully autonomous machines.

Jetson Nano 2GB is supported by the JetPack software development kit (SDK), which comes with the NVIDIA container runtime and a full Linux software development environment. The company said this allows developers to package their applications for Jetson, with all their dependencies, into a single container that is designed to work in any deployment.

The SDK is powered by the same CUDA-X accelerated computing stack used to create breakthrough AI products in self-driving cars, industrial IoT, healthcare, smart cities, and more, according to NVIDIA. The company was a 2020 RBR50 innovation award winner.

“NVIDIA’s Jetson is driving the biggest revolution in industrial AIoT,” said Jim McGregor, principal analyst at Tirias Research. “With the new Jetson Nano 2GB, NVIDIA opens up AI learning and development to a broader audience, using the same software stack as its data center AI computing platform.”

In addition, with the performance and capability to run a diverse set of AI models and frameworks, the Jetson Nano 2GB Developer Kit provides a scalable platform for learning and creating AI applications as they evolve, said NVIDIA.

“At Booz Allen, we seek to empower people to change the world,” said Drew Farris, director of analytics and AI research at Booz Allen Hamilton. “We’re using NVIDIA Jetson to train new technical resources as AI becomes critical for enterprises and personnel leveraging AI to solve the most difficult global challenges.”


Jetson Nano 2GB gets ecosystem support

NVIDIA said its Jetson Nano 2GB Developer Kit has received strong endorsements from organizations, enterprises, educators, and partners in the embedded computing ecosystem.

“Acquiring new technical skills with a hands-on approach to AI learning becomes critical as AIoT drives the demand for interconnected devices and increasingly complex industrial applications,” said Matthew Tarascio, vice president of AI at Lockheed Martin. “We’ve used the NVIDIA Jetson platform as part of our ongoing efforts to train and prepare our global workforce for the AI revolution.”

“The Duckietown educational platform provides a hands-on, scaled down, accessible version of real-world autonomous systems,” said Emilio Frazzoli, professor of dynamic systems and control at ETH Zurich. “Integrating NVIDIA’s Jetson Nano power in Duckietown enables unprecedented, affordable access to state-of-the-art compute solutions for learning autonomy.”

“We know how important it is to provide all students with opportunities to impact the future of technology,” said Christine Nguyen, STEM curriculum director at Boys & Girls Club of Western Pennsylvania. “We’re excited to utilize the NVIDIA Jetson AI Specialist certification materials with our students as they work toward becoming leaders in the fields of AI and robotics.”

The Jetson Nano 2GB Developer Kit will be available at the end of the month for $59 through NVIDIA’s distribution channels.

Hailo-8 powers new M.2 and Mini PCIe AI acceleration modules for edge devices
https://www.therobotreport.com/hailo-8-ai-acceleration-modules-edge-devices/
Wed, 30 Sep 2020

Hailo said its new M.2 and Mini PCIe modules, which use the Hailo-8 AI processor, will make it easier than current alternatives for developers to build high performance into edge devices.

The Hailo-8 processor powers two new modules. Source: Hailo Technologies

Major chip makers are not alone in offering products to help developers build artificial intelligence into edge devices such as mobile robots. Hailo Technologies Ltd. today launched the M.2 and Mini PCIe modules, which it said can perform better than competing chips, can be integrated into standard frameworks, and can enable a variety of smart machines, including robots.

Tel Aviv, Israel-based Hailo was founded in 2017 by members of the Israel Defense Forces’ elite technology unit. The company claimed that its Hailo-8 processor can perform deep learning tasks such as object detection and segmentation in real time with minimal power consumption, size, and cost. Earlier this year, Hailo raised $60 million in Series B funding and partnered with Foxconn Technology Group and Socionext Inc. to produce the latest generation of BOXiedge. Investors include NEC and ABB.

The startup claimed that its AI modules can help developers build edge devices that are both high-performing and cost-effective. For example, fanless AI edge boxes are in high demand because they allow many cameras or sensors to be connected to a single intelligent processing device in outdoor deployments, it said.

“For a project manager building a robotics application, there have not been many choices for what they can integrate into a solution,” said Liran Bar, vice president of business development at Hailo. “We offer a structure-defined dataflow architecture, built out of memory, control, and compute resources distributed in an efficient and flexible way. Our innovation comes from the architecture, not using dedicated memory or highly advanced nodes.”

Comparing AI modules

Hailo said its processor compares favorably with competitors in terms of frames per second (FPS) across multiple neural network benchmarks. Based on published figures, Hailo said its AI modules deliver an FPS rate 26 times higher than Intel’s Myriad-X modules and 13 times higher than Google’s Edge TPU modules.

Hailo-8 versus published benchmarks for the Intel Myriad-X and Google Edge TPU. Source: Hailo

“NVIDIA is good for proofs of concept, but users face challenges with power, size, and cost,” Bar told The Robot Report. “On the software side, we provide the drivers for x86 or any embedded device.”

The Hailo-8 M.2 module is already integrated into the next generation of Foxconn’s BOXiedge with no redesign required for the printed circuit board (PCB), and it provides market-leading energy efficiency for standalone AI inference nodes, said Hailo.

“The integration of Hailo’s M.2 AI module into our BOXiedge is revolutionizing our next-generation edge computing devices and will enable us to continue supporting our mission to create innovative, efficient, and competitive products for the electronics industry,” said Dr. Gene Liu, vice president of the Semiconductor Subgroup at Foxconn. “Hailo’s M.2 and Mini PCIe modules, together with the high-performance Hailo-8 AI chip, will allow many rapidly evolving industries to adopt advanced technologies in a very short time, ushering in a new generation of high performance, low power, and smarter AI-based solutions.”

Hailo said its AI acceleration modules also integrate into standard frameworks, such as TensorFlow and ONNX, which are both supported by Hailo‘s comprehensive Dataflow Compiler. Customers can easily port their neural networks into the Hailo-8 processor, ensuring high performance, enabling smarter AI products, and accelerating time to market, claimed the company.
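
Hailo’s Dataflow Compiler itself is proprietary, but the ONNX interchange format it consumes is standard. As an illustration of the first porting step, here is how one might export a trained PyTorch network to ONNX before handing it to such a compiler; the model choice and file names are placeholders, not part of Hailo’s toolchain.

```python
# Illustrative first step for porting a network to an ONNX-consuming
# compiler such as Hailo's: export the trained model to ONNX. The model
# and file names are placeholders, not part of Hailo's toolchain.
import torch
import torchvision.models as models

model = models.mobilenet_v2(weights=None).eval()  # stand-in for your network
dummy_input = torch.randn(1, 3, 224, 224)         # NCHW input the chip will see

torch.onnx.export(
    model,
    dummy_input,
    "network.onnx",          # file the dataflow compiler would ingest
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)
print("exported network.onnx")
```

From there, a vendor toolchain such as Hailo’s Dataflow Compiler would typically take over quantization and mapping of the network onto the chip’s compute fabric.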

With the Hailo-8 AI processor delivering 26 tera operations per second (TOPS) at a power efficiency of 3 TOPS/W (implying roughly 8.7 W at full throughput), the modules can be plugged into any existing edge device with the appropriate M.2 or Mini PCIe sockets, said Hailo. This helps deliver high performance while reducing latency and improving privacy, it said.

“We don’t need to convince users of the need for AI; we just need to demonstrate better performance,” said Bar. “We offer a newer module than Intel or Google, and for a customer like Foxconn to change, these are not theoretical numbers. We feel very strongly about what we’re doing, which is why we’re comparing our modules with publicly available information.”

“Intel’s investment in Habana or NVIDIA’s purchase of Arm will not have any impact on Hailo because Arm has controllers, but no AI on the edge,” he added. “Their impact will be on the cloud, while we have a dedicated solution for edge applications based on neural networks.”

Hailo-8 intended to accelerate time to market

Hailo said developers can plug its new AI modules into edge devices for a variety of sectors including automotive, Industry 4.0, healthcare, smart homes, smart cities, and retail. The M.2 and Mini PCIe modules can optimize time to market with a standard form factor, it said.

“Manufacturers across industries understand how crucial it is to integrate AI capabilities into their edge devices,” stated Orr Danon, CEO of Hailo. “Simply put, solutions without AI can no longer compete.”

“Our new Hailo-8 M.2 and Mini PCIe modules will empower companies worldwide to create new powerful, cost-efficient, innovative AI-based products with a short time to market — while staying within the systems’ thermal constraints,” he said.

Hailo AI module using the Hailo-8. Source: Hailo

The Hailo-8 AI modules are already being integrated by select customers worldwide. “We’re already in production with some customers, and samples are now available,” Bar said. “We expect to go into mass production in the second half of 2020.”

“When we engage with customers, they have the options of the development platform, debugging tools, and direct support,” he added.

More information on the Hailo-8 M.2 and Mini PCIe AI modules is available on Hailo’s website.
