Image Based Modeling from Spherical Photogrammetry and Structure from Motion: the Case of the Treasury, Nabatean Architecture in Petra

This research presents an efficient, low-cost methodology to obtain a metric and photorealistic survey of a complex architecture. Photomodeling is an already tested interactive approach that produces a detailed 3D reconstruction quickly: a rough surface is created, oriented images are back-projected onto it in real time, and the model is then refined by checking the coincidence between the surface and the projected texture. The challenge of this research is to combine the advantages of two technologies already established and used in many projects: Spherical Photogrammetry and Structure from Motion (the Photosynth web service and Bundler + CMVS2 + PMVS2). The input images are taken from the same points of view used for the panoramic photos, taking care to use the appropriate projections: equirectangular for Spherical Photogrammetry and rectilinear for the Photosynth web service. Spherical Photogrammetry is already known for its metric accuracy and acquisition speed, but the restitution step is time-consuming because homologous points must be recognized manually across different panoramas. In Photosynth, instead, the restitution is quick and automated: the provided point clouds are useful benchmarks from which to start the model reconstruction, even though they lack detail and scale. The proposed workflow requires ad-hoc tools to capture high-resolution rectilinear panoramic images and to visualize Photosynth point clouds and camera orientation parameters; all of them are developed in the VVVV programming environment. The 3DStudio Max environment is then chosen for its performance in interactive modeling, UV mapping parameter handling and real-time visualization of the texture projected on the model surface. Experimental results show how a photorealistic 3D model can be obtained by using the scale of the Spherical Photogrammetry restitution to orient the web-provided point clouds.
Moreover, the proposed research highlights how the model reconstruction can be sped up without losing metric or photometric accuracy. At the same time, since the two pipelines share the same panorama dataset, it offers a useful chance to compare the orientations produced by the two technologies (Spherical Photogrammetry and Structure from Motion).


INTRODUCTION
This research aims to improve an already tested approach for the metric and photorealistic reconstruction of a complex architecture: Interactive Image Based Modeling [1]. Through photomodeling it is possible to produce a detailed 3D surface reconstruction quickly, thanks to real-time texture projection on the model surfaces.
The challenge here is to combine the advantages of two technologies already established and used in many projects: Spherical Photogrammetry and Structure from Motion (SfM) (Figure 1). The main difference with respect to previous experiences [2] is the type of planar photo used as input for the SfM: here the images are shots acquired by a purpose-built VR tool, which saves a high-resolution frame of the entire subject. Key points of the proposed research are: low-cost instrumentation, a single operator, a single PC, measurement accuracy, procedural speed, real-time control of the result, high-resolution photorealistic textures and shareable output.
All the working phases are tested and optimized to run quickly on an ordinary PC.

INSTRUMENT AND PHOTO ACQUISITION
The input data for Spherical Photogrammetry are panoramic images with equirectangular projection, obtained by stitching different photos taken from the same point while rotating around a pivot (Figure 2).
This allows working with high-resolution images and capturing extensive (complete, if necessary) photographic scene information.
The acquisition instruments are only a reflex camera (Canon 60D, 18 MPx, 50 mm zoom lens, 35 mm eq.), a long-lens monopod bracket and a tripod. The head is adapted so that the camera nodal point lies at the intersection of the two rotation axes, thus creating a panoramic head. This system guarantees good stability during acquisition. Because of hardware limitations, some images are resized to a resolution of 10000x10000 px. Software and hardware restrictions are known problems in this research field; on the other hand, they drive the optimization of the entire survey process.

SPHERICAL PHOTOGRAMMETRY SURVEY
Spherical Photogrammetry [3,4,5,6,7,8] is particularly suitable for metric recording in architecture and archaeology, and is characterized by a proven precision. Making use of low-cost instrumentation and few steps, Sp.Ph. allows the acquired panoramas to be oriented and some shape primitives to be obtained (.dxf format) (Figure 3). The orientation and restitution procedures are manual: homologous points are collected by the user one by one. In this way Sp.Ph. gives an accurate orientation of the images, while the subsequent 3D point restitution remains time-consuming. This manual restitution is the main problem for complex architecture.
The reliability of the Sp.Ph. approach allows it to be used as the reference system to orient and scale the subsequent SfM 3D models.

INTEGRATION METHODOLOGY
By complementing the two techniques it is possible to combine a large amount of 3D geometric information (from SfM) with the precision of Sp.Ph.
The SfM approach [9,10,11,12] allows good 3D models to be recovered from scattered photos, but these are not directly suitable for photogrammetric use: there is no information about scale or survey precision.
Several SfM tools are available. They carry out a fully automatic 3D reconstruction of the subjects visible in multiple images by means of automated operations: image matching, camera calibration and dense point cloud creation.
In this experience two different SfM-based tools are tested: the Photosynth web service and the SfM Toolkit (Bundler + CMVS2 + PMVS2). They are investigated by comparing their point cloud results with the lines and shapes coming from Sp.Ph. (Figure 4). A good way to integrate different survey techniques and obtain a clear comparison is to visualize all the results in a single 3D digital environment.
One research goal is to test the flexibility of the approach in integrating data from different survey techniques, laser scan data and direct survey measurements when available; these could in fact provide useful hints for a reliable precision comparison. The idea of obtaining these planar projections quickly came during a virtual navigation with experimental tools.

VR TOOLS TO CAPTURE HIGH RESOLUTION PLANAR PHOTOS FROM EQUIRECTANGULAR PANORAMA
A VR tool [13] was created to allow interactive navigation of the high-resolution spherical panoramas.
The software is scalable and flexible, making it possible to develop a large number of functions.
In this case, plug-ins were developed to automate the UV spherical mapping and to save the planar projection currently visualized (Figure 5).
At the same time, other information about the virtual camera is stored, first of all the FOV (Figure 6).
The user can navigate and choose the frame to save, in a common image file format, as a very high-resolution photo of up to 64 MPx.
The powerful render engine enables real-time rendering at a maximum resolution of 8096x8096 px. This procedure minimizes image distortion for two main reasons: the stitching software produces undistorted images, and the VR tool adds no perspective distortion. This is confirmed by the fact that only two camera parameters (radial distortion: k1, k2) come out of one of the two SfM processes used (Fig. 8-a, 8-b).
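The planar-projection step can be illustrated with a minimal sketch (not the actual VVVV plug-in, whose code the paper does not give): for each pixel of the desired rectilinear frame, a viewing ray is built from the virtual pinhole camera and converted to longitude/latitude, then looked up in the equirectangular panorama. Frame sizes, FOV values and function names here are illustrative.

```python
import math

def rectilinear_ray(px, py, width, height, fov_deg):
    """Unit direction of the ray through pixel (px, py) of a rectilinear
    (pinhole) frame with horizontal field of view fov_deg; the camera
    looks along +z."""
    f = (width / 2.0) / math.tan(math.radians(fov_deg) / 2.0)  # focal in px
    x = px - width / 2.0
    y = py - height / 2.0
    n = math.sqrt(x * x + y * y + f * f)
    return x / n, y / n, f / n

def ray_to_equirect(dx, dy, dz, pano_w, pano_h):
    """Equirectangular pixel hit by the unit direction (dx, dy, dz)."""
    lon = math.atan2(dx, dz)                      # -pi .. pi
    lat = math.asin(max(-1.0, min(1.0, dy)))      # -pi/2 .. pi/2
    u = (lon / (2.0 * math.pi) + 0.5) * (pano_w - 1)
    v = (lat / math.pi + 0.5) * (pano_h - 1)
    return int(round(u)), int(round(v))

def extract_frame(pano, frame_w, frame_h, fov_deg):
    """Nearest-neighbour resampling of a rectilinear frame looking along
    the panorama's +z axis; pano is a row-major 2D list of pixel values."""
    pano_h, pano_w = len(pano), len(pano[0])
    frame = []
    for py in range(frame_h):
        row = []
        for px in range(frame_w):
            d = rectilinear_ray(px, py, frame_w, frame_h, fov_deg)
            u, v = ray_to_equirect(*d, pano_w, pano_h)
            row.append(pano[v][u])
        frame.append(row)
    return frame
```

Because the sphere is sampled analytically, no lens model is involved, which is consistent with the paper's observation that such frames carry essentially no perspective distortion.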

Photosynth
Photosynth is the first software used to carry out the research. It is a user-friendly web service that automatically orients scattered photos in a few online steps. Its web interface allows the user to navigate a virtual scene where the oriented photos are combined with the produced point cloud. Lastly, camera calibration data and tie points are exported with an additional piece of software (SynthExport).
Different tests were carried out, but only two of them are reported hereafter (Figure 7). The model is then scaled and rototranslated according to the same reference system used in the restitution from the panoramas.
Experimental results are visualized in the same working environment to underline how the SfM result (test with a second image set) differs in terms of orientation from the restitution coming from Sp.Ph. Figure 9 shows this comparison: Sp.Ph. results on the left (restitution lines in red and panorama stations in green) and Photosynth outputs on the right (oriented point clouds in black and image stations in blue). As done previously with Photosynth, the same two photo sets are tested: the first from the digital camera and the second from the VR tool.
VR shots lack the Exif information used to extract the CCD width, but it is possible to pair them with their previously stored FOV info (Fig. 6-a).
Starting from an existing model, it is possible to modify the image Exif information by inserting the corresponding focal length values (35 mm eq.) (Figure 10). Experimental results underline the performance of the proposed approach: the extracted point clouds are characterized by good coverage, good detail and reduced noise.
The result is not far from the feeling of dealing with a laser scan.
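The Exif repair described above amounts to a simple conversion, sketched below under the usual convention that a 35 mm-equivalent focal length refers to a 36 mm-wide frame; function names are illustrative and not part of any of the cited tools.

```python
import math

FULL_FRAME_WIDTH_MM = 36.0  # width of a 35 mm film frame

def focal_pixels_from_35mm_eq(focal_35mm_eq, image_width_px):
    """Focal length in pixels from a 35 mm-equivalent focal length:
    f[px] = f35 * W[px] / 36, since the 35 mm frame is 36 mm wide."""
    return focal_35mm_eq * image_width_px / FULL_FRAME_WIDTH_MM

def hfov_deg(focal_px, image_width_px):
    """Horizontal field of view implied by a pixel focal length; this is
    the quantity the VR tool stores for each saved frame (Fig. 6-a)."""
    return 2.0 * math.degrees(math.atan(image_width_px / (2.0 * focal_px)))
```

This is why the stored FOV can stand in for the missing Exif data: either quantity determines the other once the image width is known.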
To test the experimental results against Sp.Ph., the second photo set is again chosen: it shares its projection centers with the oriented panoramas (Figure 12). The MeshLab environment is chosen to visualize and process the point cloud because of its advanced mesh processing system. Of particular interest is the possibility of aligning meshes while taking variations in scale into account: this is necessary for point clouds not collected directly by a 3D scanner device.

POINT CLOUD ORIENTATION AND MESH OPTIMIZATION
All the restitutions are oriented according to the reference system used in the Sp.Ph. (Figure 13).
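Orienting an SfM cloud onto Sp.Ph. restituted points is, in essence, the estimation of a seven-parameter similarity transform (scale, rotation, translation) from a few corresponding points. The paper does not state which algorithm is used; one common closed-form choice is Umeyama's method, sketched here with NumPy as an assumption, not as the authors' implementation.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity (scale s, rotation R, translation t)
    mapping src onto dst: dst ~ s * R @ src + t (Umeyama, 1991).
    src, dst: (N, 3) arrays of corresponding points, N >= 3, not collinear."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    cs, cd = src - mu_s, dst - mu_d
    cov = cd.T @ cs / len(src)                 # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))         # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    var_s = (cs ** 2).sum() / len(src)         # variance of source points
    s = np.trace(np.diag(S) @ D) / var_s
    t = mu_d - s * (R @ mu_s)
    return s, R, t
```

With the transform estimated from a handful of restituted Sp.Ph. points, the whole SfM cloud can be scaled and rototranslated into the Sp.Ph. reference system in one step.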
The mesh is made up of 1,500,000 triangles, too many to be handled in the graphic environment used to draw surfaces. The mesh is therefore optimized, trying not to lose important information about the subject's geometry (Figure 14-a: point cloud orientation according to some points restituted using the Sp.Ph. method). Visualizing the results in a single reference system makes it possible to add a visual check and underline the differences between camera orientation parameters (Figure 15).
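The paper does not specify the decimation algorithm. As one simple illustration of the idea, vertex clustering snaps vertices to a regular grid and replaces each occupied cell with the centroid of its points, trading fine detail for a bounded vertex count; the function and parameter names below are hypothetical.

```python
def grid_decimate(points, cell):
    """Cluster 3D points on a regular grid of size `cell` and replace each
    occupied cell by the centroid of its points - a crude stand-in for the
    mesh optimization step described in the text."""
    cells = {}
    for p in points:
        key = tuple(int(c // cell) for c in p)  # integer grid coordinates
        cells.setdefault(key, []).append(p)
    # one centroid per occupied cell
    return [tuple(sum(c) / len(pts) for c in zip(*pts))
            for key, pts in cells.items()]
```

A larger `cell` gives a coarser but lighter model, which is the trade-off the text describes between triangle count and geometric fidelity.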
The mesh from PMVS2 and the lines restituted using the Sp.Ph. approach are very similar: it is difficult to highlight differences without some other kind of control.

TEXTURE MAPPING AND IMAGE BASED MODELING
A high-resolution texture is a valid alternative to heavy 3D geometric detail, which wastes hardware resources. Texture mapping is already known [2] to be possible thanks to the panorama orientation and the UV mapping parameters.
Because the center and the orientation of the spherical projection are known, a datasheet can be created to automate the computation of the UV mapping tiling and offset (Figure 16-a). The scene is then ready to perform spherical projections over any surface (Figure 16-b).
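The UV computation behind this spherical projection can be sketched as follows: for each surface point, the direction from the panorama station is converted to longitude/latitude and then to equirectangular UV coordinates. This simplified version assumes the panorama's axes are aligned with the world frame; the real datasheet also accounts for the station's rotation via the tiling/offset parameters.

```python
import math

def spherical_uv(point, center):
    """Equirectangular UV coordinate (0..1) under which a panorama taken
    at `center` sees the 3D point `point`, assuming the panorama frame is
    axis-aligned (a real setup adds the station's rotation)."""
    dx = point[0] - center[0]
    dy = point[1] - center[1]
    dz = point[2] - center[2]
    lon = math.atan2(dy, dx)                    # azimuth around the station
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    lat = math.asin(dz / r)                     # elevation
    u = lon / (2.0 * math.pi) + 0.5
    v = 0.5 - lat / math.pi                     # v = 0 at the zenith
    return u, v
```

Evaluating this per vertex is exactly what the UV spherical mapping automates, which is why the projected texture lines up with the surface when the station parameters are correct.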
In particular, the texture projection on the SfM mesh reveals a good fit: it validates the experimental procedure and shows that the extracted mesh can be used as a good rough starting model from which to redraw the 3D surfaces. The mesh is therefore imported into the graphic environment as a useful aid to speed up Interactive Image Based Modeling, devised to draw the geometric elements of an architecture under photogrammetric control. The imported mesh can now be enhanced under the review of the photogrammetric geometric data and of the texture mapped on the surface.
Real-time rendering allows a photorealistic texture to be visualized during the modeling phase (Figure 17), a useful aid for checking the drawn geometries. The resulting point clouds perform well in terms of coverage and accuracy and can be oriented in accordance with the Spherical Photogrammetry restitution.
Then, with the aim of building a 3D digital model, the mesh can be optimized, resized and imported into a graphic environment, where it becomes a good rough 3D reference to aid the modeling phase.
Image Based Modeling (Figures 18, 19) can in this way take advantage of three different benchmarks:
- restituted data from Spherical Photogrammetry
- the optimized mesh from Bundler + PMVS2
- the high-resolution texture projected on the surface
Step by step, the combination of these three references drives the modeler towards a photorealistic description of a complex architecture, without losing speed or hardware performance.

Figure 1: More photogrammetric survey information in a single 3D environment

a) Create Points.exe: manual identification of homologous points
b) Sphera.exe: panorama orientation tool by Prof. Fangi
c) Panorama orientation info
d) Geometric output (.dxf): restitution lines in red, acquisition stations in green, collimation rays in gray

Figure 3: Spherical Photogrammetry process

The 3D model scale is given through a direct measure, while the reference system origin and the model orientation are fixed by the user.

Figure 4: Photogrammetric data visualization and management in 3D graphic software

To make the comparison possible (SfM versus Sp.Ph.), it was chosen to work with the same nodal point: high-resolution planar images of the entire subject are acquired in the laboratory from the same panoramas used as input for the Sp.Ph. process.
a) VR GUI: after the panorama and frame are chosen, a mouse click saves a very high-resolution shot
b) The nodal point of the (virtual) pinhole camera is at the center of the textured sphere

Figure 5: High resolution photo acquisition from spherical panorama

Figure 6: High resolution images and their information

Figure 7: Experimental tests: from the photo set acquisition to the point clouds

Figure 10: Exif info association and XML database update

It is necessary to update the .XML database by adding the new camera and its hypothetical sensor width (Figure 10-c). Problems with the hardware engine and time-consuming elaborations can be overcome by resizing the images: a maximum resolution of 5120x5120 px is processed for all 9 images. The .ply files store the restitutions with the RGB information associated with the point clouds (Figure 11).

Figure 12: Dense point clouds from the combination of the VR and SfM toolkits

Figure 15: SfM and Sp.Ph. geometrical survey in the same 3D environment

Figure 16: Orientation and UV spherical mapping

Figure 17: Texture projection and mesh control

Figure 18: Image Based Modeling of some elements

In each working phase, thanks to innovative interactive systems, the outputs allow efficient real-time visualization of the complex architectural model. The performance of the proposed photogrammetric tools favours testing and investigation. Lastly, using the same panorama dataset, the proposed research offers a useful chance to compare Spherical Photogrammetry and Structure from Motion in terms of orientation and accuracy. Moreover, the research underlines how this combination speeds up interactive Image Based Modeling, taking advantage of a correct image orientation (Spherical Photogrammetry) and of a large extracted point cloud as a geometric reference.

Figure 19: Low Poly Model with high resolution texture