Summary
- Profile Type
- Technology offer
- POD Reference
- TOHR20230727020
- Term of Validity
- 21 August 2023 - 20 August 2025
- Company's Country
- Croatia
- Type of partnership
- Commercial agreement with technical assistance
- Targeted Countries
- All countries
Contact the EEN partner nearest to you for more information.
General information
- Short Summary
- A 3D visual sensor connects pixels from a 2D video camera stream with real-world 3D coordinates of the scene in front of the camera. The machine vision system uses the visual sensor to export the location and dimensions, in meters and in real-world coordinates, of objects detected in the video stream. It enables any camera, and any number of cameras, to be used as visual sensors.
- Full Description
-
The company's inception traces back to the entertainment sector. A decade ago, they pioneered the world's first visual stage tracking system, a technology that used a video camera and an operator to track stage performers with moving-head lights. The customer response was very positive: users were captivated by the system's seamless operation and pinpoint accuracy. At the same time, there was clear demand for integrating a 3D stage model and for adopting ultra-wide-angle lenses to expand the camera's field of view.
Their system, like most vision-based counterparts, was built on a pixel-based architecture. The call for a 3D implementation prompted a transition from pixel-based metrics to real-world coordinates expressed in meters. To tackle this challenge, the company devised an innovative camera model with a unique method for rectifying lens distortion. This camera model effectively transforms any video camera into a visual sensor capable of streaming metadata alongside video content.
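As an illustration of the underlying idea (not the company's proprietary camera model), a minimal textbook pinhole sketch shows how, once lens distortion has been rectified, a pixel can be mapped to a location in meters; all function and parameter names here are hypothetical:

```python
def pixel_to_ground(u, v, fx, fy, cx, cy, cam_height):
    """Map a pixel (u, v) to ground-plane coordinates in meters,
    assuming an ideal (distortion-free) pinhole camera looking
    horizontally from height cam_height above flat ground.

    (fx, fy) are focal lengths in pixels; (cx, cy) is the principal
    point. Returns (forward, left) in meters relative to the mounting
    point, or None when the ray does not hit the ground plane.
    """
    # Viewing-ray direction in camera coordinates:
    # z forward, x right, y down (OpenCV-style convention).
    dx = (u - cx) / fx
    dy = (v - cy) / fy
    if dy <= 0:          # ray points at or above the horizon
        return None
    # Intersect the ray t*(dx, dy, 1) with the ground plane y = cam_height.
    t = cam_height / dy
    forward = t          # z component of the ray direction is 1
    right = t * dx
    return (forward, -right)  # report (forward, left) in meters
```

With the same camera model inverted, real-world coordinates can also be projected back onto the image, which is what lets metadata overlays stay registered to the video.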
In its simplest configuration, a visual sensor ingests a video stream together with the camera's parameters, encompassing factory and configuration data. Its output combines video with location metadata, measured in meters, for each recognized object relative to the camera's mounting point. The visual sensor thus functions as a conduit that streams object location, width, and height alongside the video feed. When a pedestrian or a car is detected in the video stream, the visual sensor reports its location in meters, together with width and height measurements. If the geographical coordinates of the camera's mounting point are known, the current location of the pedestrian or car can also be exported in geographical coordinates.
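A sketch of what such per-object metadata might look like, and how a known mount point could yield geographic coordinates; the record fields and helper below are illustrative assumptions, not the product's actual schema:

```python
import math

EARTH_RADIUS_M = 6_378_137.0  # WGS-84 equatorial radius

# Illustrative metadata record for one detected object: location is
# given in meters east/north of the camera's mounting point.
detection = {"label": "pedestrian", "east_m": 12.5, "north_m": 4.0,
             "width_m": 0.6, "height_m": 1.8}

def offset_to_geo(mount_lat, mount_lon, east_m, north_m):
    """Convert meters east/north of the mounting point to approximate
    latitude/longitude (flat-earth approximation, adequate at the
    short ranges a fixed camera covers)."""
    dlat = math.degrees(north_m / EARTH_RADIUS_M)
    dlon = math.degrees(
        east_m / (EARTH_RADIUS_M * math.cos(math.radians(mount_lat))))
    return mount_lat + dlat, mount_lon + dlon

lat, lon = offset_to_geo(45.815, 15.982,  # hypothetical mount coordinates
                         detection["east_m"], detection["north_m"])
```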
The visual sensor supports various overlays superimposed on the video stream. The first overlay is a layer of metadata with attributes such as location, width, and height. Single-camera visual applications often lack vertical information about detected areas; to address this, a CAD file is employed as an additional layer to provide height data. The CAD file encompasses the camera's mounting location and calibration point, and the undistorted camera image is mapped onto the 3D CAD model, so that each pixel in the image corresponds to a unique 3D coordinate and the visual sensor exports height as the z-coordinate. The CAD file thereby facilitates automatic extraction of area height, and CAD files also form a top-layer overlay on the video stream. For outdoor applications, measurements can be derived from tools such as Google Earth Pro or from depth maps.
The company's machine vision systems, enhanced by visual sensors, offer a significant advantage in multi-camera setups. Because each visual sensor reports location data relative to its camera's mounting point, a machine vision system that knows the geometry of the camera mounts can seamlessly compute location data for detected objects across all cameras, effectively situating those objects in the real world. Consequently, any number of video cameras can be integrated into a unified vision system.
Any camera, combined with diverse geometries, can be harnessed in a stereo vision setup. This yields novel stereo vision systems that are straightforward, cost-effective, easily producible, and resilient. A stereo vision system built on visual sensors exports precise coordinates for detected objects: for instance, when a stick is placed in front of the system, both ends of the stick are identified and x, y, and z coordinates are provided for each end, from which the stick's length is straightforward to determine.
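The stick example reduces to the Euclidean distance between the two reported endpoints; a minimal sketch (the function name is illustrative):

```python
import math

def stick_length(end_a, end_b):
    """Length, in meters, between the (x, y, z) endpoint coordinates
    that a visual-sensor stereo pair reports for a detected stick."""
    return math.dist(end_a, end_b)

# Endpoints reported 1 m apart along x:
# stick_length((0.5, 1.0, 2.0), (1.5, 1.0, 2.0)) -> 1.0
```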
This functionality bears significant implications for mobile platforms such as autonomous vehicles. By employing eight cameras configured as four independent visual-sensor-based stereo vision systems, a vehicle attains comprehensive awareness of its surroundings.
- Advantages and Innovations
-
A visual sensor is software that represents a mathematical model of a video camera.
Different input parameters lead to different mathematical models: panoramic cameras, thermal cameras, and ultra-wide-angle cameras are examples of camera types that yield different models. Each of these models can be integrated into 3D space, a CAD file, the Google Earth Pro application, or any other 3D environment file.
Visual sensors can import a 3D environment file or be part of the 3D environment. In single-camera applications the visual sensor imports the 3D environment; in multi-camera applications the machine vision system maps visual sensors onto its own 3D environment.
The visual sensor can be integrated into existing applications. The company's visual sensor is integrated into one of the leading Video Management Systems for video surveillance, adding automated video tracking with PTZ cameras as well as functionality that displays all detected objects (pedestrians, cars, trucks) on a single map.
A new version of the application that tracks stage performers with moving-head lights has also been developed, as has a stereo vision application that calculates the 3D position of objects in front of the camera.
These applications are just examples of machine vision systems that use visual sensor technology. Autonomous vehicles, robotic positioning, and any application that uses video cameras for object detection will benefit from the transition to visual sensor technology.
- Stage of Development
- Already on the market
- Sustainable Development Goals
- Goal 9: Industry, Innovation and Infrastructure
- IPR status
- IPR applied but not yet granted
Partner Sought
- Expected Role of a Partner
-
The company is actively seeking a partner who can grasp the extensive potential inherent in its technology, spanning numerous sectors.
In essence, the company seeks a partner who not only recognizes the versatility of its technology but also shares its enthusiasm for its potential to revolutionize multiple industries. This partner should possess the attributes necessary to effectively harness and capitalize on that potential, leading to a mutually beneficial and impactful collaboration.
- Type and Size of Partner
- SME <=10
- Type of partnership
- Commercial agreement with technical assistance
Dissemination
- Technology keywords
- 01006012 - Description Image/Video Computing
- 02009015 - Audio / video
- Market keywords
- 08002004 - Robotics
- 08002005 - Machine vision software and systems
- Targeted countries
- All countries