Advanced Object Tracking
Networks of cameras providing live video streams are ubiquitous: they monitor production lines, span entire cities, and cover large industrial facilities.
AGT has developed advanced IoT analytics for tracking objects in real-time video streams across extensive camera networks.
One application is robust, automatic vehicle tracking in real time. It combines optical tracking, camera-to-camera handover, computer vision, machine learning, and prediction, together with automatic camera calibration that adjusts camera position, orientation, and zoom. The system directs mobile forces to the exact locations of suspect vehicles.
These unique algorithms are also used for criminal investigations and other use cases.
Tracking vehicles and other objects between cameras is difficult due to the sheer number of cameras in a city’s or corporate site’s camera network. Tracking objects in outdoor environments adds another layer of complexity, as a useful solution must account for object occlusion, multiple and changing perspectives and scenes, and changing camera conditions. The overall challenge, however, is achieving robust performance with low failure rates.
Our advanced object tracking system generally follows these steps:
- Once an object (e.g., a vehicle of interest) is identified, an initial object category classification is made (e.g., coupe, bus, pickup, van).
- 3D tracking begins: the algorithm quickly learns and separates the different sides or sections of the vehicle/object, and determines its position (GPS coordinates).
- Learning continues during tracking, continuously refining the model of the object’s unique appearance and better distinguishing it from other vehicles.
- Based on movement profiles, predictions of the object’s current position are continuously made. When a different camera in the network gains a better vantage point, tracking is handed over to that camera without interruption. The same predictions let the algorithm cope with temporary occlusion and small gaps between cameras.
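The steps above can be sketched in simplified form: a constant-velocity prediction of the object's position bridges occlusions, and tracking is handed over to whichever camera currently has the best vantage point. This is a minimal illustration, not AGT's actual implementation; the `Camera` and `Tracker` classes, the distance-based view score, and the crude velocity update (a stand-in for a proper Kalman filter) are all assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    name: str
    center: tuple  # (x, y) ground position this camera covers best

    def view_score(self, pos):
        # Higher score = better vantage point (here: closer to coverage center).
        dx, dy = pos[0] - self.center[0], pos[1] - self.center[1]
        return -(dx * dx + dy * dy)

class Tracker:
    def __init__(self, cameras, pos, vel):
        self.cameras = cameras
        self.pos = pos  # last confirmed (x, y) ground position
        self.vel = vel  # estimated displacement per time step
        self.active = max(cameras, key=lambda c: c.view_score(pos))

    def step(self, measurement=None):
        # Predict from the movement profile; during occlusion (no
        # measurement), the prediction alone carries the track forward.
        predicted = (self.pos[0] + self.vel[0], self.pos[1] + self.vel[1])
        if measurement is not None:
            # Update the movement profile from the new observation.
            self.vel = (measurement[0] - self.pos[0],
                        measurement[1] - self.pos[1])
            self.pos = measurement
        else:
            self.pos = predicted
        # Hand over when another camera now has the better vantage point;
        # the track itself continues uninterrupted.
        best = max(self.cameras, key=lambda c: c.view_score(self.pos))
        if best is not self.active:
            self.active = best
        return self.pos, self.active.name
```

A short usage example: a vehicle moving from camera A's coverage toward camera B's is tracked through an occlusion, and the handover happens automatically once B's vantage point wins.

```python
cams = [Camera("cam-A", (0, 0)), Camera("cam-B", (10, 0))]
tracker = Tracker(cams, pos=(0, 0), vel=(1, 0))
tracker.step((1, 0))            # confirmed detection, still on cam-A
for _ in range(6):
    pos, cam = tracker.step()   # occluded: prediction bridges the gap
# pos is now (7, 0) and tracking has been handed over to cam-B
```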
Our algorithm “learns” classes of objects by studying examples, and so can track them. 3D tracking and handover between cameras are made possible by camera network topology determination and accurate geo-positioning. To this end, known lane markings are detected in each camera’s field of view to first determine the camera’s orientation and position relative to the street. A mapping between adjacent cameras is then derived from statistical analysis and matching of tracked objects.
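The lane-marking calibration step can be illustrated as a planar homography fit: known lane markings give image-to-ground point correspondences, from which the standard direct linear transform (DLT) recovers a matrix mapping pixels to ground-plane coordinates. This is a generic textbook sketch using NumPy, not AGT's calibration pipeline; the function names and point values are invented for illustration.

```python
import numpy as np

def fit_homography(image_pts, ground_pts):
    """Estimate H such that ground ~ H @ image in homogeneous coordinates.

    Needs at least four non-degenerate correspondences (DLT method).
    """
    A = []
    for (x, y), (X, Y) in zip(image_pts, ground_pts):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        A.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    # H (up to scale) is the right singular vector for the smallest
    # singular value of A.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)

def pixel_to_ground(H, x, y):
    # Apply the homography and de-homogenize.
    X, Y, W = H @ np.array([x, y, 1.0])
    return X / W, Y / W
```

Once fitted per camera, such a mapping places every detection in a shared ground coordinate frame, which is what makes position prediction and handover between adjacent cameras comparable at all.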