The suite is intended to deliver an accelerated path to explore and implement differentiated machine vision applications that leverage the performance and efficiency of event-based vision.
Said to be the industry’s most comprehensive suite of software tools and code samples, the Metavision suite will be available for free across the full development cycle, from initial adoption and evaluation through commercial development to the release of market-ready products.
With this advanced toolkit, engineers will be able to develop computer vision applications on a PC for a wide range of markets, including industrial automation, IoT, surveillance, mobile, medical, automotive and more.
The free modules in Metavision Intelligence 3.0 are available through C++ and Python APIs and include a comprehensive machine learning toolkit. The suite also offers a no-code option through the Studio tool, which lets users play back the freely provided pre-recorded datasets without owning an event camera. Users who do own an event camera can stream or record events from it within seconds.
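As a rough illustration of what that streaming looks like through the Python API, here is a minimal sketch based on the SDK's EventsIterator interface; module paths and defaults may differ between SDK versions, and the empty input path (assumed here to open the first connected camera) can be replaced with the path of a pre-recorded file.

```python
from metavision_core.event_io import EventsIterator

# Empty path: open the first connected event camera.
# Alternatively, pass the path of a pre-recorded RAW file.
mv_iterator = EventsIterator(input_path="", delta_t=10000)  # 10 ms slices

for evs in mv_iterator:
    # Each slice is a structured numpy array with fields
    # x, y (pixel), p (polarity) and t (timestamp in microseconds).
    if evs.size > 0:
        print(f"{evs.size} events between t={evs['t'][0]} and t={evs['t'][-1]} us")
```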
In total, the suite consists of 95 algorithms, 67 code samples and 11 ready-to-use applications. The plug-and-play algorithms include high-speed counting, vibration monitoring, spatter monitoring, object tracking, optical flow, ultra-slow-motion, machine learning and others. The suite provides both C++ and Python APIs as well as extensive documentation and a wide range of samples organised by implementation level to incrementally introduce the concepts of event-based machine vision, following the common pattern sketched below.
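Most of these algorithms share the same usage pattern: decode event slices, feed them to an algorithm object, and consume its output. The sketch below illustrates that pattern with the frame-generation algorithm as it appears in the SDK's Python samples; constructor arguments and the callback signature are assumptions that may vary by SDK version, and "recording.raw" is a placeholder file name.

```python
import cv2
from metavision_core.event_io import EventsIterator
from metavision_sdk_core import PeriodicFrameGenerationAlgorithm

mv_iterator = EventsIterator(input_path="recording.raw", delta_t=10000)
height, width = mv_iterator.get_size()

# Accumulate incoming events into displayable frames at a fixed rate.
frame_gen = PeriodicFrameGenerationAlgorithm(width, height, fps=25)
frame_gen.set_output_callback(
    lambda ts, frame: (cv2.imshow("events", frame), cv2.waitKey(1)))

for evs in mv_iterator:
    frame_gen.process_events(evs)
```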
“We have seen a significant increase in interest and use of Event-Based Vision and we now have an active and fast-growing community of more than 4,500 inventors using Metavision Intelligence since its launch. As we are opening the event-based vision market across many segments, we decided to boost the adoption of MIS throughout the ecosystem targeting 40,000 users in the next two years. By offering these development aids, we can accelerate the evolution of event-based vision to a broader range of applications and use cases and allow for each player in the chain to add its own value,” said Luca Verre, co-founder and CEO of Prophesee.
The latest release includes enhancements that help speed up time to production, allowing developers to stream their first events in minutes, or even build their own event camera from scratch using the provided open-source camera plugins as a base.
They now also have the tools to port their developments to Windows or Ubuntu operating systems. Metavision Intelligence 3.0 also gives access to the full potential of advanced sensor features (e.g. anti-flicker filtering, bias adjustment) by providing source-code access to key sensor plugins.
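By way of illustration, tuning sensor biases from Python might look like the sketch below. It assumes the HAL-style facility interface used by the SDK (initiate_device and the low-level biases facility); bias names and legal ranges are sensor-specific, so the value shown is a placeholder.

```python
from metavision_core.event_io.raw_reader import initiate_device

device = initiate_device(path="")   # open the first connected camera
biases = device.get_i_ll_biases()   # low-level bias facility

print(biases.get_all_biases())      # inspect the current settings

# Raise the ON-contrast threshold; the name and value are placeholders
# and depend on the sensor generation.
biases.set("bias_diff_on", 120)
```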
The Metavision Studio tool has also enhanced the user experience, with improvements to the onboarding guidance, the UI, and the region-of-interest (ROI) and bias setup processes.
The core ML modules include an open-source event-to-video converter as well as a video-to-event simulator. The event-to-video converter uses a pretrained neural network to reconstruct grayscale images from events. This allows users to make the best use of their existing frame-based development resources to process event-based data and build algorithms on top of it.
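Conceptually, such converters first bin events into a tensor representation and then run it through a recurrent reconstruction network (E2VID-style). The sketch below shows only the first half of that pipeline; the voxel-grid layout is a common convention in the literature rather than the suite's documented API, and `model` is a hypothetical handle for the shipped pretrained network.

```python
import numpy as np
import torch

def events_to_voxel_grid(evs, height, width, num_bins=5):
    """Accumulate signed polarity into a (num_bins, H, W) voxel grid,
    the usual input representation for event-to-video networks."""
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    t = evs["t"].astype(np.float64)
    # Normalize timestamps into [0, num_bins) and bucket them.
    tn = (t - t[0]) / max(t[-1] - t[0], 1) * (num_bins - 1e-9)
    bins = tn.astype(np.int64)
    pol = 2 * evs["p"].astype(np.float32) - 1  # {0,1} -> {-1,+1}
    np.add.at(grid, (bins, evs["y"], evs["x"]), pol)
    return torch.from_numpy(grid)

# `model` stands in for the suite's pretrained reconstruction network
# (hypothetical handle; load it per the SDK's ML documentation):
# frame = model(events_to_voxel_grid(evs, 480, 640).unsqueeze(0))
```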
The video-to-event pipeline breaks down the data-scarcity barrier in the event-based domain by enabling the conversion of conventional frame-based datasets into event-based datasets.
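The underlying idea is the standard event-camera model: a pixel emits an event whenever its log intensity changes by more than a contrast threshold since the last event at that pixel. A deliberately naive sketch of that conversion follows; the suite's actual simulator also models sub-frame interpolation, refractory periods and sensor noise.

```python
import numpy as np

def frames_to_events(frames, timestamps_us, threshold=0.15, eps=1e-3):
    """Naive event simulation from a list of grayscale frames: emit an
    event when per-pixel log intensity moves by more than `threshold`."""
    ref = np.log(frames[0].astype(np.float64) + eps)
    events = []
    for frame, ts in zip(frames[1:], timestamps_us[1:]):
        logf = np.log(frame.astype(np.float64) + eps)
        diff = logf - ref
        for pol, mask in ((1, diff >= threshold), (0, diff <= -threshold)):
            ys, xs = np.nonzero(mask)
            events += [(x, y, pol, ts) for x, y in zip(xs, ys)]
            ref[mask] = logf[mask]  # reset reference at fired pixels
        # Note: all events from one frame share its timestamp here;
        # real simulators interpolate timestamps between frames.
    return np.array(events, dtype=[("x", "<u2"), ("y", "<u2"),
                                   ("p", "u1"), ("t", "<u8")])
```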