VaBUS: Edge-Cloud Real-Time Video Analytics via Background Understanding and Subtraction

IEEE JSAC, 2023
Hanling Wang1,2, Qing Li2, Heyang Sun3, Zuozhou Chen2, Yingqian Hao6, Junkun Peng1,2, Zhenhui Yuan4, Junsheng Fu5, Yong Jiang1,2
1Tsinghua Shenzhen International Graduate School, Shenzhen, China
2Peng Cheng Laboratory, Shenzhen, China
3School of Software, Southeast University, Nanjing, Jiangsu 211189, China
4Northumbria University, Newcastle upon Tyne, United Kingdom
5Zenseact, Gothenburg 41756, Sweden
6School of Software, Jilin University, Changchun, Jilin 130012, China
[Paper] [Code]

Problem to solve

Edge-cloud collaborative video analytics with low bandwidth consumption.

Method

In this work, we exploit the contextual information of video data from surveillance cameras. Since surveillance cameras generate video frames with a static background, certain characteristics tend to persist over time, such as the regions where objects may appear and the typical size of objects at a given location. By exploiting these context-dependent characteristics, we design VaBUS, a new real-time Video analytics system based on Background Understanding and Subtraction. Specifically, VaBUS first reconstructs the background from the camera's video feed in the cloud and transfers the background image to the edge with minimal overhead; the edge then sends the cloud only the useful foreground pixels that may contain objects of interest for inference. As a result, VaBUS has the potential to significantly reduce the bandwidth consumption between the edge and the cloud while achieving high inference accuracy in the task-oriented communication scenario. Ideally, the RoIs in a video frame contain only the objects to be detected, and the bits for the remaining regions never enter the network, i.e., the RoIs are optimally compressed in the semantic sense. To the best of our knowledge, the context characteristics of the static background of surveillance cameras have not been systematically exploited by prior work in the real-time video analytics scenario.

Result

To validate the feasibility of VaBUS, we implement a prototype in Python and C++. With the prototype, we conduct comprehensive experiments on four real-world datasets. Results show that VaBUS 1) reduces bandwidth consumption by 25.0%-76.9% while achieving 90.7% inference accuracy, 2) incurs only 477.5 ms of latency and 10% CPU usage overhead compared with a baseline approach, and 3) achieves 68% offline estimation accuracy, outperforming both the optical-flow- and motion-vector-based methods.


Bibtex

@article{wang2022vabus,
  title={VaBUS: Edge-Cloud Real-Time Video Analytics via Background Understanding and Subtraction},
  author={Wang, Hanling and Li, Qing and Sun, Heyang and Chen, Zuozhou and Hao, Yingqian and Peng, Junkun and Yuan, Zhenhui and Fu, Junsheng and Jiang, Yong},
  journal={IEEE Journal on Selected Areas in Communications},
  volume={41},
  number={1},
  pages={90--106},
  year={2023},
  publisher={IEEE}
}