Using Video Feeds from Public Traffic Cameras and Computer Vision to Analyze Social Distancing and Travel Patterns during the COVID-19 Pandemic
Researchers at the Connected Cities for Smart Mobility towards Accessible and Reliable Transportation (C2SMART) University Transportation Center (UTC), led by Professor Kaan Ozbay at New York University, have developed a continuous, real-time pedestrian detection framework that uses public traffic camera feeds and deep learning-based video processing to analyze sidewalk and roadway density. This framework allows researchers to capture critical data on pedestrian, cyclist, and vehicle flows and densities without any additional infrastructure investment. It also provides data that help answer both traditional transportation planning questions and novel ones, such as how often pedestrians maintained the recommended “6 feet” of social distance during the COVID-19 pandemic.
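The “6 feet” check can be sketched as a pairwise-distance test over detected pedestrian positions. The following is a minimal illustration, not the project's actual implementation; it assumes detections have already been projected from image pixels onto ground-plane coordinates measured in feet (a calibration step not detailed here):

```python
from itertools import combinations
from math import dist

SIX_FEET = 6.0  # CDC-recommended social distance during COVID-19

def distancing_violations(centroids):
    """Count pedestrian pairs standing closer than 6 feet apart.

    centroids: list of (x, y) ground-plane positions in feet, e.g.
    obtained by projecting each detected bounding box's foot point
    through a camera-to-ground homography (assumed done upstream).
    """
    return sum(
        1
        for a, b in combinations(centroids, 2)
        if dist(a, b) < SIX_FEET
    )
```

For example, with pedestrians at (0, 0), (4, 0), and (20, 0), only the first pair is within 6 feet, so the function reports one violation.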
This research demonstrates the feasibility of tracking density and physical distancing, metrics that have historically been harder to measure than volumes and congestion. Using publicly available footage from existing traffic cameras in New York City and Seattle, researchers from multiple consortium universities, including both graduate and undergraduate students, trained computer vision models to identify vehicles, pedestrians, and other objects on city blocks where traffic cameras had already been installed. Because the existing camera feeds are low resolution and the pipeline converts vehicles, cyclists, and pedestrians into anonymous, untraceable objects, this privacy-preserving visual recognition process prevents the collection or leakage of any identifying information about the human subjects in the camera footage.
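One way such privacy preservation can work in practice is to discard imagery and bounding boxes immediately after detection, retaining only aggregate counts and densities. The sketch below illustrates this idea; the class labels and the `sidewalk_area_sqft` parameter are assumptions for illustration, not the project's actual data schema:

```python
from collections import Counter

def frame_density(detections, sidewalk_area_sqft):
    """Reduce one frame's detector output to anonymous aggregates.

    detections: list of class labels (e.g. "pedestrian", "cyclist",
    "vehicle") emitted by an object detector for a single frame.
    Imagery and box coordinates are dropped here, so nothing that
    could identify an individual is retained downstream.
    Returns per-class counts and pedestrians per 1,000 sq ft.
    """
    counts = Counter(detections)
    density = 1000.0 * counts["pedestrian"] / sidewalk_area_sqft
    return counts, density
```

Aggregating only counts per frame lets the framework report sidewalk density over time while keeping every detected person untraceable.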