PhD Forum: Coverage redundancy in visual sensor networks

Arezoo Vejdanparast*

*Corresponding author for this work

    Research output: Chapter in Book / Published conference output (Conference publication)


    When a network of cameras with adjustable zoom lenses is tasked with object coverage, an important question is how to determine the optimal zoom level for each camera. While covering a smaller area allows for a higher detection likelihood, overlapping fields of view introduce a redundancy that is vital to fault tolerance and to acquiring multiple perspectives of targets. In our research, we study the coverage redundancy problem in visual sensor networks, formalised as k-coverage (e.g. [1, 3, 4]), where k represents the number of cameras covering a specific point. However, we are not trying to cover static points in the environment but rather mobile points that may change their position over time.

    The visual sensor network (VSN) we study is composed of omni-directional cameras C = {c1, c2, . . ., ci, . . ., cn} with 360-degree views, each equipped with an adjustable zoom lens. The circular field of view (FOV) of camera ci has a range ri that corresponds to its current zoom level. The network is tasked to cover a set of moving objects O = {o1, o2, . . ., oj, . . ., om}. We consider a point to be covered by a camera if it lies within the camera's FOV and is detectable by that camera.

    To address detectability, inspired by the work of Esterle et al. [2], we propose a resolution-based confidence model for our cameras. For a camera with a given resolution, the model estimates the classification success rate, based on pixel density, for each object within that camera's FOV. If the rate is above a certain threshold τ, the object is marked as covered. This enables each individual camera to identify the number of objects it covers at runtime.

    Before describing the details of our confidence model, we note a few important factors that affect its accuracy: i) the current zoom level has an inherent impact on the quality of the coverage, as it invokes a trade-off between the size of the area covered by the camera and the quality of the acquired information; ii) the position of the object matters, since the distance between the object and the camera affects the quality of the obtained information, and the camera clearly acquires more information about closer objects than about farther ones; iii) the size of the object can also affect the accuracy of the results. Other factors, such as camera type and environmental lighting, might additionally lead to varying results; we factor these out in our profiling experiments. We propose the resolution-based confidence model using the template illustrated in Figure 1.
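    The coverage test described above can be sketched in a few lines. The following is a minimal illustrative stand-in, not the profiled model from the paper: the `confidence` function, the specific distance/size/resolution formula, and the threshold value are all assumptions made only to show how a camera could mark objects as covered and how the k in k-coverage is obtained.

    ```python
    import math
    from dataclasses import dataclass

    @dataclass
    class Camera:
        x: float
        y: float
        r: float           # FOV radius corresponding to the current zoom level
        resolution: float  # illustrative resolution parameter

    @dataclass
    class Obj:
        x: float
        y: float
        size: float        # illustrative object size

    TAU = 0.5  # confidence threshold tau (value assumed for illustration)

    def confidence(cam: Camera, obj: Obj) -> float:
        """Toy stand-in for the resolution-based confidence model:
        confidence grows with resolution and object size, and falls
        with camera-object distance; it is zero outside the circular FOV."""
        d = math.hypot(cam.x - obj.x, cam.y - obj.y)
        if d > cam.r:
            return 0.0
        return min(1.0, obj.size * math.sqrt(cam.resolution) / (1.0 + d))

    def covered_by(cam: Camera, obj: Obj, tau: float = TAU) -> bool:
        """An object counts as covered if the confidence exceeds tau."""
        return confidence(cam, obj) >= tau

    def k_coverage(cams: list, obj: Obj, tau: float = TAU) -> int:
        """The k in k-coverage: how many cameras currently cover the object."""
        return sum(covered_by(c, obj, tau) for c in cams)
    ```

    Under this sketch, each camera evaluates its own confidence locally, so the per-object k-count can be computed at runtime without a central map of the environment, matching the decentralised setting of the work.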

    Original language: English
    Title of host publication: Proceedings of the 12th International Conference on Distributed Smart Cameras, ICDSC 2018
    ISBN (Electronic): 9781450365116
    Publication status: Published - 3 Sept 2018
    Event: 12th International Conference on Distributed Smart Cameras, ICDSC 2018 - Eindhoven, Netherlands
    Duration: 3 Sept 2018 - 4 Sept 2018


    Conference: 12th International Conference on Distributed Smart Cameras, ICDSC 2018


    • Decentralised systems
    • Machine learning approaches
    • Online learning
    • Smart cameras
    • Visual sensor networks


