For many, gazing at old photographs of the city evokes feelings of nostalgia and curiosity — what was it like to walk through Manhattan in the 1940s? How much has the street changed since I grew up? While Google Street View lets people see what an area looks like today, what if you want to explore what places looked like in the past?
To create rewarding “time travel” experiences for both research and entertainment, we are introducing rǝ (pronounced “re-turn”), an open source, scalable system running on Google Cloud and Kubernetes that reconstructs cities from historical maps and photos, building on the suite of open source tools we launched earlier this year. Echoing the common prefix meaning “again”, the name rǝ is meant to evoke reconstruction, research, recreation, and remembering. This crowdsourcing research effort has three parts:
- A crowdsourcing platform that allows users to upload historical maps of cities, georectify them (that is, match them to real-world coordinates), and vectorize them
- A temporal map server that shows how city maps change over time
- A 3D experience platform that runs on top of the rǝ map server and uses deep learning to reconstruct 3D buildings from limited historical image and map data
Our goal is for rǝ to become a compendium that lets history buffs virtually experience historical cities from around the world, helps researchers, policymakers, and educators, and provides a dose of nostalgia for everyday users.
Crowdsourced data from historical maps
Recreating how cities looked at scale is a challenge: historical image data is harder to work with than modern data, because there are far fewer images available and far less metadata captured with them. To tackle this, the rǝ maps module is an open source suite of tools that work together to create a map server with a time dimension, allowing users to jump back and forth between time periods using a slider. These tools let users upload scans of historical printed maps, georectify them to match real-world coordinates, and then convert them to vector format by tracing their geographic features. The vectorized maps are then served from a tile server and rendered as slippy maps, which let users zoom in and pan around.
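As a rough illustration of how a time dimension can sit on top of a standard slippy-map tile scheme, the sketch below requests vector tiles for a given year from a hypothetical time-aware endpoint. The URL template, the year path segment, and the example coordinates are assumptions for illustration only, not the actual rǝ API.

```python
import math

import requests


def latlon_to_tile(lat: float, lon: float, zoom: int) -> tuple[int, int]:
    """Convert WGS84 coordinates to standard XYZ (slippy map) tile indices."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y


def fetch_historical_tile(lat: float, lon: float, zoom: int, year: int) -> bytes:
    """Fetch a vector tile for a given year from a hypothetical time-aware tile server."""
    x, y = latlon_to_tile(lat, lon, zoom)
    # Hypothetical URL template: a real deployment would define its own endpoint.
    url = f"https://tiles.example.org/{year}/{zoom}/{x}/{y}.pbf"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.content  # Mapbox Vector Tile (protobuf) payload


# Example: request tiles for lower Manhattan as it was mapped in 1940.
tile_bytes = fetch_historical_tile(40.7075, -74.0113, zoom=16, year=1940)
```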
The entry point to the maps module is Warper, a web application that lets users upload a historical map image and georectify it by identifying control points on the historical map and corresponding points on a base map. The next application, Editor, lets users load a georectified historical map as a background and then trace its geographic features (for example, building footprints and roads). This traced data is stored in OpenStreetMap (OSM) vector format. It is then converted into vector tiles and served from the Server application, a vector tile server. Finally, our map renderer, Kartta, visualizes the spatiotemporal vector tiles, allowing users to navigate space and time on historical maps. These tools are built on numerous open source resources, including OpenStreetMap, and we intend for our tools and data to be completely open source as well.
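To make the georectification step concrete, here is a minimal sketch of fitting an affine transform from pixel control points to real-world coordinates with least squares. It is a stand-in for what a tool like Warper does internally; the fitting approach and the example coordinates below are not taken from the project itself.

```python
import numpy as np


def fit_affine(pixel_pts: np.ndarray, world_pts: np.ndarray) -> np.ndarray:
    """Fit a 2x3 affine transform mapping (col, row) pixels to (lon, lat).

    Both inputs have shape (N, 2) with N >= 3 control point pairs.
    """
    design = np.hstack([pixel_pts, np.ones((len(pixel_pts), 1))])  # rows of [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(design, world_pts, rcond=None)    # least-squares fit
    return coeffs.T  # shape (2, 3)


def apply_affine(affine: np.ndarray, pixel_pts: np.ndarray) -> np.ndarray:
    """Map pixel coordinates through the fitted affine transform."""
    design = np.hstack([pixel_pts, np.ones((len(pixel_pts), 1))])
    return design @ affine.T


# Four hand-picked control points (pixel -> lon/lat); values are illustrative only.
pixels = np.array([[120, 80], [930, 95], [910, 760], [140, 770]], dtype=float)
world = np.array([[-74.015, 40.712], [-74.000, 40.712],
                  [-74.000, 40.702], [-74.015, 40.702]])
affine = fit_affine(pixels, world)
print(apply_affine(affine, np.array([[525.0, 420.0]])))  # lon/lat near the map centre
```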
3D experience
The 3D models module is designed to reconstruct the detailed, full 3D structures of historical buildings from the associated images and map data, organize these 3D models properly in a repository, and render them on the historical maps with a time dimension.
In many cases, only one historical image of a building is available, which makes 3D reconstruction an extremely challenging problem. To address this challenge, we developed a coarse-to-fine reconstruction-by-recognition algorithm.
Starting with the footprint on the map and the façade region in the historical image (both annotated by crowdsourcing or detected by automated algorithms), the footprint of an input building is extruded upward to generate its coarse 3D structure. The height of this extrusion is set from the number of floors recorded in the corresponding metadata in the maps database.
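A minimal sketch of that extrusion step is shown below: a footprint polygon is lifted into a prism whose height is the floor count multiplied by an assumed per-floor height. The 3-metre storey height and the mesh layout are illustrative assumptions, not values from the project.

```python
import numpy as np

FLOOR_HEIGHT_M = 3.0  # assumed storey height; not a value specified by the project


def extrude_footprint(footprint: np.ndarray, num_floors: int):
    """Extrude a 2D footprint polygon of shape (N, 2) into a coarse prism mesh.

    Returns (vertices, faces): 2N vertices in 3D plus a bottom cap, a top cap,
    and one quad face per wall segment (faces are lists of vertex indices).
    """
    n = len(footprint)
    height = num_floors * FLOOR_HEIGHT_M
    bottom = np.hstack([footprint, np.zeros((n, 1))])
    top = np.hstack([footprint, np.full((n, 1), height)])
    vertices = np.vstack([bottom, top])

    faces = [list(range(n)),                     # bottom cap
             list(range(2 * n - 1, n - 1, -1))]  # top cap, reversed for an outward normal
    for i in range(n):                           # one quad per wall segment
        j = (i + 1) % n
        faces.append([i, j, n + j, n + i])
    return vertices, faces


# Example: a 20 m x 12 m rectangular footprint with five floors.
verts, faces = extrude_footprint(np.array([[0, 0], [20, 0], [20, 12], [0, 12]], float), 5)
```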
In parallel, rather than directly inferring the detailed 3D structure of each façade as a whole, the 3D reconstruction pipeline recognizes all individual constituent components (for example, windows, entries, and stairs) and reconstructs their 3D structures separately according to their categories. These detailed structures are then merged with the coarse structure to form the final 3D mesh. The results are stored in a 3D repository, ready for rendering.
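In the same spirit, the following sketch places one detected window bounding box from a façade photo onto a wall of the coarse prism as a recessed 3D box. It assumes the photo spans the wall exactly edge to edge and uses a fixed inset depth; both are simplifications for illustration, not part of the actual pipeline.

```python
import numpy as np


def window_box_on_wall(bbox_px, image_size, wall_start, wall_end, wall_height,
                       depth_m=0.2):
    """Map a window bounding box (x0, y0, x1, y1 in pixels, origin at top-left)
    onto a vertical wall as eight 3D corner points, recessed into the facade.

    wall_start and wall_end are the (x, y) ground coordinates of the wall's corners.
    Assumes the facade photo spans the wall exactly edge to edge.
    """
    img_w, img_h = image_size
    x0, y0, x1, y1 = bbox_px
    start = np.asarray(wall_start, dtype=float)
    along = np.asarray(wall_end, dtype=float) - start  # wall direction in plan view

    # Horizontal extent: image-width fractions become wall-length fractions.
    p0 = start + along * (x0 / img_w)
    p1 = start + along * (x1 / img_w)
    # Vertical extent: image rows grow downward, building heights grow upward.
    z_top = wall_height * (1.0 - y0 / img_h)
    z_bot = wall_height * (1.0 - y1 / img_h)

    # Recess the window volume into the wall along the inward normal.
    normal = np.array([-along[1], along[0]]) / np.linalg.norm(along)
    inset = -normal * depth_m
    corners = [[*p, z] for p in (p0, p1, p0 + inset, p1 + inset)
               for z in (z_bot, z_top)]
    return np.array(corners)  # shape (8, 3)


# Example: a detected window in a 1000x800 photo of a 20 m wide, 15 m tall wall.
corners = window_box_on_wall((320, 250, 420, 400), (1000, 800),
                             (0.0, 0.0), (20.0, 0.0), wall_height=15.0)
```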
The key technology powering this feature is a set of state-of-the-art deep learning models:
- Faster region-based convolutional neural networks (Faster R-CNN) were trained on façade component annotations for each target semantic class (for example, windows, entries, and stairs) to localize bounding-box-level instances in historical images (a sketch using off-the-shelf models follows this list).
- DeepLab, a semantic segmentation model, was trained to provide pixel-level labels for each semantic class.
- A specially designed neural network was trained to enforce high-level regularities within the same semantic class. This ensures that the windows generated on a façade are equally spaced and consistent in shape. It also promotes consistency across different semantic classes, such as stairs, to ensure they are placed in reasonable positions and have consistent sizes relative to the associated entryways.
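As a hedged illustration of the detection and segmentation pieces, the sketch below runs off-the-shelf torchvision models (a COCO-pretrained Faster R-CNN and a pretrained DeepLabv3) on a façade photo. The project's own models were trained on façade component annotations, so the pretrained weights, class labels, and the facade.jpg filename here are stand-ins only.

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.segmentation import deeplabv3_resnet50

# Stand-in models: generic pretrained weights, not the facade-trained models described above.
detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
segmenter = deeplabv3_resnet50(weights="DEFAULT").eval()

image = Image.open("facade.jpg").convert("RGB")  # hypothetical historical photo
tensor = transforms.ToTensor()(image)
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

with torch.no_grad():
    # Bounding-box level instances (windows, entries, stairs in the real pipeline).
    detections = detector([tensor])[0]
    boxes = detections["boxes"][detections["scores"] > 0.7]

    # Pixel-level labels for each semantic class.
    seg_logits = segmenter(normalize(tensor).unsqueeze(0))["out"]  # (1, C, H, W)
    seg_labels = seg_logits.argmax(dim=1).squeeze(0)               # (H, W) class map

print(f"{len(boxes)} confident boxes; label map shape {tuple(seg_labels.shape)}")
```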
Key results
Conclusion
With rǝ, we have developed tools that facilitate crowdsourcing to tackle the main challenge of insufficient historical data when recreating virtual cities. The 3D experience is still a work in progress, and we aim to improve it with future updates. We hope rǝ becomes a nexus for an active community of hobbyists and casual users who not only take advantage of our historical datasets and open source code, but actively contribute to both.