Beyond visual inspection: capturing neighborhood dynamics with historical Google Street View and deep learning-based semantic segmentation
Abstract
While street view imagery has accumulated over the years, its use to date has been largely limited to cross-sectional studies. This study explores ways to utilize historical Google Street View (GSV) images for the investigation of neighborhood change. Using data for Santa Ana, California, an experiment is conducted to assess to what extent deep learning-based semantic segmentation, which processes historical images far more efficiently than visual inspection, enables one to capture changes in the built environment. More specifically, semantic segmentation results are compared for (1) 248 sites with construction or demolition of buildings and (2) two sets of the same number of randomly selected control cases without such activity. It is found that deep learning-based semantic segmentation can detect nearly 75% of the construction or demolition sites examined, while screening out over 60% of the control cases. The results suggest that it is particularly effective in detecting changes in the built environment with historical GSV images in areas with more buildings, less pavement, and larger-scale construction (or demolition) projects. False-positive outcomes, however, can emerge due to the imperfection of the deep learning model and the misalignment of GSV image points across years, showing some methodological challenges to be addressed in future research.
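The comparison the abstract describes, segmenting GSV images from different years and flagging sites where built-environment classes shift, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the class index, threshold value, and function names are assumptions for the sake of the example.

```python
import numpy as np

# Hypothetical label index for the "building" class in a segmentation map;
# actual indices depend on the model and training dataset used.
BUILDING_CLASS = 2

def building_share(seg_map):
    """Fraction of pixels labeled as building in a segmentation map."""
    return float(np.mean(seg_map == BUILDING_CLASS))

def flag_change(seg_before, seg_after, threshold=0.05):
    """Flag a site as a likely construction/demolition site when the
    building-pixel share shifts by more than `threshold` between years.
    The 0.05 threshold is an illustrative assumption."""
    return abs(building_share(seg_after) - building_share(seg_before)) > threshold

# Toy example: a 4x4 scene gains building pixels between two GSV vintages.
before = np.zeros((4, 4), dtype=int)
after = before.copy()
after[:2, :] = BUILDING_CLASS  # half the frame is now building
print(flag_change(before, after))  # True: share jumps from 0.0 to 0.5
```

In practice, the false positives the abstract mentions would arise here when camera positions differ between vintages, so the same unchanged scene yields different pixel shares.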