Video surveillance dataset for people matching
Being able to match objects across multiple views is one of the main building blocks for the coordination of multi-camera systems. This dataset has been acquired and annotated to test the ability of algorithms to work in a constraint-free setting: the cameras differ in resolution, position, and frame rate; the scene is non-planar; one camera is hand-held and moves freely; and no calibration information is provided. Each video is 110 seconds long and contains manual annotations of each track, using IDs that are shared between cameras.
Dataset
Ground truth
For each video, a ground truth file is available that contains the bounding boxes of each track, identified by an ID that is the same across all cameras. Each track is defined as an ID followed by a sequence of bounding boxes:

<track>   ::= <id> <bb_list>
<bb_list> ::= <bb> <bb_list> | <bb>
<bb>      ::= <frame> <x> <y> <w> <h>
<id>      ::= string
<x>       ::= integer
<y>       ::= integer
<w>       ::= integer
<h>       ::= integer
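As a minimal sketch of how such a file could be read, the following Python parser assumes (hypothetically, since the exact file layout is not specified here) that each track occupies one line: the ID token followed by repeated `<frame> <x> <y> <w> <h>` quintuples of integers. The function name and layout are illustrative assumptions, not part of the dataset's documentation.

```python
from collections import defaultdict

def parse_ground_truth(text):
    """Parse tracks following the grammar above.

    Assumed layout (hypothetical): one <track> per line, i.e. an ID
    followed by whitespace-separated <frame> <x> <y> <w> <h> quintuples.
    Returns {track_id: [(frame, x, y, w, h), ...]}.
    """
    tracks = defaultdict(list)
    for line in text.splitlines():
        tokens = line.split()
        if not tokens:
            continue  # skip blank lines
        track_id, rest = tokens[0], tokens[1:]
        if len(rest) % 5 != 0:
            raise ValueError(f"malformed track {track_id!r}")
        # Consume the bounding-box list five integers at a time.
        for i in range(0, len(rest), 5):
            frame, x, y, w, h = (int(t) for t in rest[i:i + 5])
            tracks[track_id].append((frame, x, y, w, h))
    return dict(tracks)

# Example with one track ("person1") seen in two frames:
tracks = parse_ground_truth("person1 0 10 20 30 40 1 12 21 30 40")
# tracks == {"person1": [(0, 10, 20, 30, 40), (1, 12, 21, 30, 40)]}
```

Because IDs are shared between cameras, merging the dictionaries parsed from several cameras' files by key would directly associate each person's tracks across views.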
Publications
Contacts and acknowledgements
This is a joint work between Università degli Studi di Genova (IT) and Queen Mary University of London (UK).