3D reconstruction by photogrammetry and 4D deformation measurement



Page 1: 3D reconstruction by photogrammetry and 4D deformation measurement

Name: Muhammad Irsyadi Firdaus Student ID: P66067055

DIGITAL PHOTOGRAMMETRY

3D reconstruction by photogrammetry and 4D deformation measurement

1. Objective In this project, we want to generate a 3D reconstruction of an object and perform a 4D deformation measurement.

2. Material and Methods • Equipment Several pieces of equipment were used in this project: - Hardware: a Sony Alpha 6300 camera to photograph the object. The camera and its specifications are shown in Figure 1 and Table 1.

Figure 1. Sony Alpha 6300 Camera

Table 1. Sony Alpha 6300 Camera Specification

Specification      Description
Focal Length       20 mm
Pixel Size         3.92 micron
Sensor Size        23.5 x 15.6 mm
Image Size         6000 x 4000 pixel
Sensor Type        CMOS
Effective Pixels   24.2 megapixel
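As a rough sanity check on these specifications, the object-space size of one pixel follows from the pinhole relation GSD = pixel size × distance / focal length. The sketch below is an illustration only; the 0.5 m camera-to-object distance is an assumed example value, not one reported in this project.

```python
# Object-space resolution implied by the camera specs in Table 1.
# The 0.5 m camera-to-object distance is an assumed example value.
focal_length_mm = 20.0   # Table 1
pixel_size_um = 3.92     # Table 1
distance_m = 0.5         # assumed shooting distance

# Pinhole model: GSD = pixel size * distance / focal length
gsd_mm = (pixel_size_um / 1000.0) * (distance_m * 1000.0) / focal_length_mm
print(f"object resolution per pixel: {gsd_mm:.3f} mm")
```

At half a metre, one pixel covers roughly a tenth of a millimetre on the object, which is consistent with the sub-millimetre errors reported later in this project.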

- Software a. Agisoft PhotoScan Pro, to generate the 3D model. b. CloudCompare, to compare the point clouds before and after deformation.

• Material The objects of this project are a deflated ball and a full ball, photographed with the Sony Alpha 6300 camera (20 mm focal length, 6000 x 4000 pixel images, 23.5 x 15.6 mm sensor). To generate the 3D model of an object, I first took several images of it from many directions to cover the whole object shape. In some detailed areas


such as curved or complex areas, I took more pictures than elsewhere to clearly capture the object texture. In total, 72 images of the deflated ball and 65 images of the full ball were used in this project. The objects can be seen in Figure 2. For the accuracy analysis, coded targets were placed on the object at several locations and photographed together with it, so the pictures used to generate the 3D model also include the coded targets. The known distance between the coded targets is then entered into Agisoft PhotoScan to set the scale of the 3D model. In addition, to measure the deformation between the full ball and the deflated ball, we can use CloudCompare and calculate the cloud-to-cloud distance. Make sure every image has been aligned; it is necessary to create the same conjugate points on each 3D object.


Figure 2. (a) full ball image, (b) deflated ball image

• Methods To generate a 3D reconstruction of an object and a 4D deformation measurement, several steps were carried out following the project workflow shown in Figure 3.


[Workflow, applied in parallel to the two image sets (deflated ball and full ball): Object Images → Align Photos → Camera Calibration and Optimization → Add Coded Targets and Distance → Re-Optimise → Build Dense Point Cloud → Build Mesh Model → Build Textured Model → 3D Model Analysis; the two branches then join for the Qualitative and Quantitative Accuracy Analysis and the Conclusion.]

Figure 3. Workflow for generating the 3D reconstruction of an object and the 4D deformation measurement

We have two objects: a deflated ball and a full ball. The steps to generate the 3D models of both objects are the same. The detailed workflow is as follows:
1. In Agisoft PhotoScan, align the object photos to create the sparse point cloud.
2. After aligning the photos, perform camera calibration and optimization to improve the sparse point cloud quality. Here we can create a bounding region box to select the specific object we want to model.
3. Add the known distance between the coded targets to the model by creating four markers on different photos.
4. Build the dense point cloud from the sparse point cloud data. After the dense point cloud is built, remove data we do not want to process, such as outlier points.
5. Build the shaded, solid, and wireframe mesh models.
6. Build the textured model.


7. Analyze the textured 3D model, for example by looking for parts of the object that are not visible.
8. Export the model point cloud to a .las file for the accuracy analysis.
9. Perform the qualitative and quantitative accuracy analysis of the point clouds of the two 3D models.
10. Conclude from the analysis.
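Step 3 of the workflow above can be illustrated numerically: once two coded-target markers with a known real-world separation have been measured in model coordinates, the scale factor is simply the ratio of the two distances. A minimal sketch, with made-up marker coordinates and an assumed 0.20 m target separation (not values from this project):

```python
import math

# Hypothetical marker positions in arbitrary (unscaled) model units.
marker_a = (0.0, 0.0, 0.0)
marker_b = (2.0, 0.0, 0.0)
known_distance_m = 0.20   # assumed taped distance between the coded targets

# Scale factor converting model units to metres
model_distance = math.dist(marker_a, marker_b)
scale = known_distance_m / model_distance

# Any point of the model can now be expressed in metres
point = (1.0, 3.0, -2.0)
point_m = tuple(c * scale for c in point)
print(scale, point_m)
```

PhotoScan solves this internally when a scale bar is defined between markers; the sketch only shows why a single known distance is enough to fix the scale of the whole model.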

3. Results and Analysis

a. Dense Point Cloud, Mesh and Textured 3D Model The first main objective of this project is to generate a 3D model following the Agisoft PhotoScan workflow. In this project we obtained several results from the PhotoScan process: the sparse point cloud, the dense point cloud, the mesh model, and the textured model. The result images can be seen in Figures 4 and 5.

Figure 4. Full ball. (a) Sparse Point Cloud, (b) Dense Point Cloud, (c) Dense Cloud Classes, (d) Shaded, (e) Solid, (f) Wireframe Mesh Object, and (g) Textured Object



Figure 5. Deflated ball. (a) Sparse Point Cloud, (b) Dense Point Cloud, (c) Shaded, (d) Solid, (e) Wireframe Mesh Object, (f) Dense Cloud Classes, and (g) Textured Object
From the results above, we can see that the 3D model of this ball is generated in great detail. The spherical shape and the color of the object can be seen clearly and look like the real appearance captured in the images. The 3D textured model is compared with a real image in Figure 6.

Figure 6. The Appearance of (a) 3D Textured Model and (b) Real Image of ball



From Figure 6, we can see that the spherical shape of the object is generated clearly. The model color also looks similar to the real color in the images. Some light shadows were captured and generated together with the model. This is a good result for generating a 3D model with a non-metric digital camera. In my assessment, the 3D model result was influenced by four factors:
1. The lighting or environmental conditions around the object. To generate a 3D model, we capture the object images with a digital camera, which is a passive sensor that only receives the visible light reflected from the object in front of it. If we capture the images in poor lighting, the 3D result can have poor visibility or a very dark texture color.

2. The object itself. Some objects may be difficult to reconstruct because of their material or shape. In this project I chose a rubber ball as the object, and the 3D model result is good because the object does not move and reflects visible light to the camera well. More experiments on different object materials are needed to understand the effect of the material itself.

3. The image-taking technique, including the camera-to-object distance and the image overlap. When taking pictures, we need to consider the distance to the object: if we shoot too far from it, the 3D model will suffer because some object details are lost or cannot be recognized. The other factor is image overlap. To create a good 3D model, high-overlap images are needed to avoid gaps between two images; more overlap gives a better-quality 3D model. Since I used 72 images of the deflated ball and 65 images of the full ball, it makes sense that I got a very good 3D model result.

4. The camera specification and settings. A high-resolution, stable camera can produce a good 3D model. A camera that can capture small object details without blur provides very good images, which means the 3D model can be generated in more detail than with a low-resolution camera.
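The overlap factor above can be estimated from the camera geometry: the sensor footprint at the shooting distance fixes how far the camera may move between exposures. The sketch below uses the sensor width and focal length from Table 1, but the 0.5 m distance and 5 cm camera step are assumed example values, not measurements from this project.

```python
# Rough forward-overlap estimate between two consecutive exposures.
sensor_width_mm = 23.5   # Table 1
focal_length_mm = 20.0   # Table 1
distance_m = 0.5         # assumed camera-to-object distance
baseline_m = 0.05        # assumed camera movement between shots

# Width of the scene covered by one image at that distance (pinhole model)
footprint_m = sensor_width_mm / focal_length_mm * distance_m

# Fraction of the footprint shared by the two exposures
overlap = 1.0 - baseline_m / footprint_m
print(f"footprint {footprint_m:.3f} m, overlap {overlap:.1%}")
```

With these assumed numbers the overlap stays above 90%, which is the regime where image matching works reliably; larger steps between shots reduce the overlap and the matching quality.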

Besides these good points, I found a case that occurred and gave the result a small distortion. This problem appeared on the bottom of the 3D model, as can be seen in Figure 7.

Figure 7. Textured Object (a) full ball and (b) deflated ball



From Figure 7, we can see that some areas have dense point cloud distortion, and some areas of the object are missing and appear as holes. This problem is due to several factors. The first comes from the photo alignment process and the number of images used. The textured model was generated from the dense point cloud produced by the alignment process, which is theoretically based on image matching. To see the relationship between the alignment process and the number of photos, we can inspect the dense point cloud of the object, shown in Figure 8.

Figure 8. Dense point cloud (a) full ball and (b) deflated ball

From Figure 8, we can see some outlier points at the boundary of the object. I already deleted some outlier points, but a few remain. The dense point cloud was generated from the sparse point cloud produced by the photo alignment process, and the alignment is affected by how many images we use. Although I processed 72 images of the deflated ball and 65 images of the full ball, only a few of them covered the bottom of the object, so we only get a little information about this area; more images are needed to recover the bottom. We can therefore conclude that using more images to build the 3D model gives a better-quality result. The picture-taking technique also affects the 3D model. As mentioned before, we need to take the object distance and the image overlap into consideration. I checked every image I took and found that for the bottom area I captured the images from a little too far away, and no image was captured from the bottom direction. The bottom-area images and the camera positions of this project can be seen in Figure 9.



Figure 9. Camera Position. (a) full ball, (b) deflated ball

We can see in Figure 9 that no camera position was located below the object. The bottom-area images were captured as tilted images taken from upper positions, so no information is available on the bottom of the object. In my opinion, if we want a good result in a given direction, we also need to take images from that direction. From this discussion we can conclude that taking pictures far from the object decreases the captured detail, and that an area with no image information becomes a blank space or a hole on the object. In addition, the shape of the object also affects the outcome of the 3D model: in this case the object is spherical, so it requires more photos because some parts of the object are difficult to capture.

b. Qualitative and Quantitative Accuracy After analyzing the appearance of the 3D model results, we can analyze the qualitative and quantitative accuracy of the models themselves. First we load the point cloud data of the two objects. In CloudCompare, we run the alignment process to bring the two point clouds close to each other. After the process finished, the point clouds had moved closer together, as shown in Figure 10.



Figure 10. Result of Aligning Two Pointcloud Data

The yellow point cloud represents the full ball and the red point cloud represents the deflated ball. From the figure above we can see that the two point clouds were not aligned well; we can still see the differences between the objects, and the full-ball point cloud sits below the deflated-ball point cloud. In my assessment, this problem was caused by the preceding alignment process, in which I used the full-ball point cloud as the reference and set four conjugate points. The positions of the conjugate points can be seen in Figure 11.

Figure 11. Conjugate Points Position on 3D Model Pointcloud Data
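The point-pair alignment performed here can be illustrated in a deliberately simplified form: if rotation and scale are ignored, the best translation maps the centroid of the moving cloud's conjugate points onto the centroid of the reference's. The coordinates below are made-up examples; CloudCompare's actual point-pair registration also solves for rotation (and optionally scale).

```python
# Translation-only sketch of conjugate-point alignment.
# Four hypothetical conjugate points per cloud (not this project's data).
reference_pts = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (0, 0, 2)]  # full ball
moving_pts    = [(1, 1, 0), (3, 1, 0), (1, 3, 0), (1, 1, 2)]  # deflated ball

def centroid(pts):
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

# Best translation (least squares) maps centroid onto centroid
c_ref, c_mov = centroid(reference_pts), centroid(moving_pts)
shift = tuple(r - m for r, m in zip(c_ref, c_mov))

aligned = [tuple(p[i] + shift[i] for i in range(3)) for p in moving_pts]
print(shift, aligned)
```

The quality of the result depends directly on how many conjugate points are used and how well they are spread over the object, which is why the four clustered points used here left a residual offset between the clouds.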

After analyzing the qualitative accuracy, we can analyze the quantitative accuracy between these two point clouds. Because I analyze the accuracy between


two data sets, this accuracy is a relative accuracy. After the alignment process, the RMS error was 0.431 mm, with the transformation matrix:

From this result, the RMS error of only 0.431 mm means the alignment process performed well in generating the aligned point cloud data. We can also analyze the accuracy of the 3D model itself from the error of the scale bar measurements on the model. In this project, four measurements were set from four markers in different images, as shown in Figure 12. From the four measurements, I got an error of 0.0002 m, meaning that any measurement on this model has a position error of about 0.2 mm for the full ball object and 0.3 mm for the deflated ball object.
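An RMS error like the 0.431 mm reported above is the root mean square of the per-point alignment residuals. A small sketch with made-up residual values (the individual residuals of this project are not listed in the report):

```python
import math

# Hypothetical signed point-to-point residuals of an alignment, in mm.
residuals_mm = [0.3, -0.5, 0.45, -0.4]

# RMS = sqrt(mean of squared residuals)
rms_mm = math.sqrt(sum(r * r for r in residuals_mm) / len(residuals_mm))
print(f"RMS error: {rms_mm:.3f} mm")
```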

Figure 12. Marker (a) full ball and (b) deflated ball

c. 4D Deformation Measurement To learn more about the deformation between the full ball and the deflated ball, we can calculate the distance between the two point clouds and show it as a distance color scale on the 4D model point cloud. To calculate the distance, I set the full-ball point cloud as the reference. After running the cloud-to-cloud distance function, I got the result shown in Figure 13. In this project, the mean deformation is 8.395 mm, the maximum deformation is 9.802 cm, and the standard deviation is 4.039 mm. The mean deformation between the two point clouds is thus 8.395 mm, a fairly small value, which is consistent with how the point clouds were aligned. We can check this against the color scale bar: blue represents the smallest deformation, while red represents the greatest deformation.
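The cloud-to-cloud distance behind these statistics can be sketched as a nearest-neighbour search: for every point of the compared cloud, take the distance to its closest point in the reference cloud, then summarize with mean, maximum, and standard deviation. The brute-force toy version below uses made-up points; real clouds have millions of points, so CloudCompare uses an octree rather than brute force.

```python
import math

# Tiny made-up clouds standing in for the two ball models.
reference = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]  # full ball (ref)
compared  = [(0, 0, 0.1), (1, 0, 0.2), (0.5, 1, 0)]       # deflated ball

# Nearest-neighbour distance from each compared point to the reference
dists = [min(math.dist(p, q) for q in reference) for p in compared]

mean_d = sum(dists) / len(dists)
max_d = max(dists)
std_d = math.sqrt(sum((d - mean_d) ** 2 for d in dists) / len(dists))
print(mean_d, max_d, std_d)
```

Note that this measures the distance to the nearest *point*, not to the true surface, so the statistic slightly overestimates the deformation where the reference cloud is sparse.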



Figure 13. Distribution of deformation between Two Point cloud Data

The explanation of this condition is:
- The alignment process can affect the result. As explained before, the number of conjugate points and their locations affect the result; using more points at well-separated locations can increase the accuracy and quality of the aligned point cloud.
- The scale of the point cloud data. In CloudCompare, I can see the coordinate scale of the two data sets: the full-ball point cloud has a scale factor of 1, and the deflated-ball point cloud also has a scale factor of 1. In the alignment process, we need to make all the data the same scale, or adjust the scales, so that both data sets and the result share the same scale factor.

4. Conclusion From the results and analysis above, we conclude:
a. Capturing object images in poor lighting can give the 3D result bad visibility.
b. A 3D model of a static, solid object can be generated in good quality.
c. To create a good 3D model, high-overlap images are needed to avoid gaps between images.
d. A 3D model can be generated in more detail with a high-resolution camera.
e. Using more images to build the 3D model gives a better-quality result.
f. Taking pictures far from the object decreases the captured detail.
g. An area with no image information appears as a blank space or a hole on the object.
h. In image matching, more conjugate points at separate locations are needed to produce a high-accuracy 3D model.
i. The RMS error of the alignment process is 0.431 mm, while the mean deformation is 8.395 mm.
j. The distance between the two point clouds is affected by the point cloud alignment process and by the scale of the point cloud data.
k. The scale measurement accuracy on the 3D model has a 0.2 mm position error.