…camera, which provides a resolution of 1536 x 2034 pixels at a focal length of 20 mm.

Measurement Configuration

Since complete 3D coverage of the Al-Khasneh facade cannot be achieved from a single station, data was collected from three different viewpoints with five scans in total in order to resolve the occlusions. Choosing the viewpoint positions is an important phase of the survey for such a monument, since potential sensor stations are restricted by the mountainous environment surrounding Al-Khasneh. Three positions were selected: one at the entrance area of the monument, one to the left of the monument, and one at an elevated viewpoint. Since the vertical field of view of the laser scanner could not cover the complete facade from these positions in a single scan, the left and the elevated stations were each captured with two scans from the same position, taking into consideration sufficient overlapping regions to allow for a subsequent integration. In total, the five scans resulted in almost 5 million collected points.

All acquired 3D data have been processed using PolyWorks from InnovMetric Software. The model of the Al-Khasneh facade was obtained by merging the five scans, each in an independent coordinate system, into an absolute coordinate system. After registration of the scans using corresponding points, the software constructs a non-redundant surface representation, in which each part of the measured object is described only once. The result of the combination of the five laser scans is given in Figure 2. The produced model has an average resolution of 2 cm and consists of more than 10 million triangles. In addition to the exterior survey, a 360-degree scan was collected from a station in the interior of Al-Khasneh, which resulted in 19 million points. Figure 3 shows the point cloud of this scan with colour information overlaid.

Figure 2. 3D model of Al-Khasneh created from 5 scans

3 INTEGRATED DATA PROCESSING

Although the 3D model produced by the laser scanner contains a large number of triangles representing the surfaces, it can still be difficult to recognize and localize the outlines of surface features. An example of this type of feature, which is clearly visible in an image, is depicted in Figure 4. This data was collected at the left door of Al-Khasneh. As can be seen from the corresponding meshed 3D model shown in Figure 5, the cracks and edge outlines are lost, since they lie beyond the resolution of the available laser data.

Figure 3. 360-degree scan of the interior of Al-Khasneh

In order to improve the visual quality of such details, a hybrid approach combining data from the laser scanner and the digital imagery was developed. For the integration, all data sets have to be coregistered in a first processing step. This is realized by aligning edges extracted from both data sources using an algorithm developed by [Klinec and Fritsch, 2020]. After position and orientation parameters are computed for the sensor stations, distance images are generated from the point cloud in order to provide the missing third dimension for the available images. Finally, an integrated segmentation process based on the image data is used to support the extraction of the details and the surface feature outlines from the distance images. Additionally, the approach applies a semi-automated feature extraction from images to bridge gaps in the laser scanner data. By these means, details can be added that are necessary for generating a more realistic perception of the scene volume.
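Both the merging of the five scans described in the previous section and the coregistration steps of the hybrid approach rely on estimating transformation parameters from corresponding primitives. As a minimal illustration of the simplest case, point correspondences, the following sketch computes a least-squares rigid-body transformation via SVD (the Kabsch/Horn solution). This is not the PolyWorks implementation used in the project, and the point arrays are hypothetical examples.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) so that R @ src_i + t ~ dst_i.

    src, dst: (N, 3) arrays of corresponding points from two scans.
    Uses the SVD-based solution (Kabsch/Horn); assumes N >= 3
    non-collinear correspondences.
    """
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical corresponding points picked in two overlapping scans
scan_a = np.array([[0.0, 0.0, 0.0], [1.2, 0.1, 0.0],
                   [0.3, 2.0, 0.5], [1.0, 1.0, 1.5]])
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
scan_b = scan_a @ R_true.T + np.array([10.0, 5.0, 2.0])

R, t = estimate_rigid_transform(scan_a, scan_b)
aligned = scan_a @ R.T + t           # scan_a expressed in scan_b's frame
print(np.allclose(aligned, scan_b))  # True for this noise-free example
```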
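The distance images mentioned above can, in principle, be obtained by projecting the registered point cloud into each image and storing, per pixel, the distance to the nearest surface point. The sketch below illustrates this under a simple pinhole-camera assumption with a z-buffer style nearest-point rule; the actual camera model and the handling of pixels without a projected point are not taken from the paper, and in practice such gaps would have to be interpolated or flagged because the laser resolution is lower than the image resolution.

```python
import numpy as np

def distance_image(points, R, t, f, width, height, pixel_size):
    """Project a point cloud into a pinhole camera and keep, per pixel,
    the smallest distance to the projection centre (z-buffer rule).

    points     : (N, 3) world coordinates of the laser points
    R, t       : exterior orientation (world -> camera), X_cam = R @ X + t
    f          : focal length, same unit as pixel_size
    width, height, pixel_size : image format, principal point at the centre
    """
    dist_img = np.full((height, width), np.inf)
    cam = points @ R.T + t                # points in camera coordinates
    cam = cam[cam[:, 2] > 0]              # keep points in front of the camera
    # Pinhole projection to pixel coordinates
    u = (f * cam[:, 0] / cam[:, 2]) / pixel_size + width / 2.0
    v = (f * cam[:, 1] / cam[:, 2]) / pixel_size + height / 2.0
    d = np.linalg.norm(cam, axis=1)       # distance to the projection centre
    for ui, vi, di in zip(u.astype(int), v.astype(int), d):
        if 0 <= ui < width and 0 <= vi < height and di < dist_img[vi, ui]:
            dist_img[vi, ui] = di         # keep the nearest surface point
    return dist_img
```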
In the following paragraphs the hybrid approach will be discussed in more detail. The left door of Al-Khasneh depicted in Figures 4 and 5 is processed as an example.

Figure 4. The left door of Al-Khasneh

Figure 5. Meshed model for the left door

Figure 6. Two intersecting planar surfaces, one of them represented by grid lines for demonstration purposes

Data Coregistration

The quality of the registration process, which aligns the laser scanner data with the imagery, is a crucial factor for the aspired combined processing. This registration can be realized if corresponding coordinates are available in both systems. Since the accurate detection and measurement of point correspondences can be difficult, especially for point clouds from laser scanning, straight lines are measured in the image and in the laser data as corresponding elements. These lines are then used by a shape matching followed by a modified spatial resection [Klinec and Fritsch, 2020]. The algorithm transforms the 3D straight lines extracted from the laser data and the corresponding 2D image lines, each given by two points, into a parameterized representation. Then the unknown exterior parameters of the image are determined by spatial resection.
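The core geometric condition behind such a line-based spatial resection is that each image line, together with the projection centre, spans an interpretation plane in which the corresponding 3D object line must lie. The sketch below sets up these coplanarity residuals for a set of line correspondences; it is only a generic illustration of that condition, not the algorithm of [Klinec and Fritsch, 2020], and the parameterization and units are assumptions.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation from world to camera frame, built from three Euler angles."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def line_residuals(params, image_lines, object_lines, f):
    """Coplanarity residuals for 2D/3D line correspondences.

    params       : [omega, phi, kappa, X0, Y0, Z0] exterior orientation
    image_lines  : list of (p1, p2), 2D endpoints of each image line (mm)
    object_lines : list of (P1, P2), 3D endpoints of each object line
    f            : focal length (mm)

    Each image line and the projection centre span an interpretation
    plane; both endpoints of the corresponding 3D line must lie in it.
    """
    R = rotation_matrix(*params[:3])
    C = np.asarray(params[3:6])
    res = []
    for (p1, p2), (P1, P2) in zip(image_lines, object_lines):
        # Rays from the projection centre through the image line endpoints
        # (photogrammetric convention, image plane at z = -f)
        r1 = np.array([p1[0], p1[1], -f])
        r2 = np.array([p2[0], p2[1], -f])
        n = np.cross(r1, r2)              # normal of the interpretation plane
        for P in (P1, P2):
            res.append(n @ (R @ (np.asarray(P) - C)))
    return np.asarray(res)

# With at least three non-parallel line correspondences these residuals
# can be minimized, e.g. with scipy.optimize.least_squares, to recover
# the unknown exterior orientation of the image.
```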