Hello everyone, Grzegorz Baran here. In version 1.6 Agisoft introduced a new photogrammetry reconstruction mode. In this video I decided to give it a try and share the results with you. So this time I am going to present a full photogrammetry workflow where I captured a cliff rock formation and turned it into a vista prop. In detail, I am going to show: the capture of a cliff rock formation with a drone; image postprocessing, with a different way of removing ambient shadows, in PhotoLab 3; and a full 3D reconstruction in Metashape using the new reconstruction mode. I will build a low-poly model in ZBrush, which I am going to UV map in RizomUV. I will bake textures using Substance Designer, then fix missing areas in Substance Painter, and finally I am going to compose a quick scene in Marmoset Toolbag 3 to present the result. I hope someone finds this video interesting and fun to watch. Let's begin.

The new reconstruction mode was introduced in version 1.6.0 of Metashape. This mode is way faster and far less resource-demanding for reconstruction when compared with the other modes, as well as with previous Metashape
versions. Bear in mind that the new mode works only in *Arbitrary* reconstruction mode with *Depth Maps* selected as the source. The Height Field mode, as well as Arbitrary mode based on a dense cloud as the data source, work as before: they have no 'out-of-core' implementation and no GPU support. This new method can benefit from additional VRAM and also from fast access to the project folder, as quite a lot of intermediate data is stored there and repeatedly read and rewritten. So using a fast SSD to store the project files should give faster reconstruction and will likely outperform a slow HDD. Reconstruction should also be significantly faster than in versions before 1.6. Last but not least, bear in mind that the current depth-map-based reconstruction mode gives a somewhat lower-density mesh than the one based on a dense cloud. Fortunately these are things Agisoft is currently working on, so they should be fixed soon.

Let's move to the capture part then. For this capture I picked a pretty cloudy but windy day. The sun was shining through the clouds from time to time, but that was fine, since material capture wasn't my top priority this time. I wanted to find and capture a larger but consistent rock formation which could be used as a fairly generic and universal vista prop. The low tide gave better access to interesting rock formations and exposed the bottom parts of the rocks, but I had to be careful: the high tide was coming, and without paying attention I could end up in the water. It took me a while, but finally I found a really nice cliff wall with an interesting shape. Since I was planning to capture it and turn it into a vista prop, my aim was to get full image coverage from every possible angle. The only fast and easy way to capture this type of prop is to use a drone; without one I wouldn't have access to a big part of this cliff. Since the ground wasn't flat, I decided on a hand take-off. For this capture I set the drone's camera f-stop to 4 but left the ISO in auto mode. Luckily the light was good enough and all images were captured at ISO 100. This is the setting I have found most optimal for the Mavic 2 Pro and close surface scanning.
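The claim that f/4 keeps close-range shots sharp edge to edge can be pictured with a hyperfocal-distance estimate. A minimal sketch, assuming the Mavic 2 Pro's roughly 10.26 mm focal length and a circle of confusion of about 0.011 mm for its 1-inch sensor (both assumed specs, not figures from the video):

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    """Hyperfocal distance: focus here and everything from H/2 to infinity is acceptably sharp."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

f = 10.26      # Mavic 2 Pro focal length in mm (assumed spec)
coc = 0.011    # circle of confusion for a 1" sensor in mm (assumed)
H = hyperfocal_mm(f, 4.0, coc)
print(f"hyperfocal ~ {H / 1000:.1f} m, sharp from ~ {H / 2000:.1f} m to infinity")
```

Under these assumptions the hyperfocal distance at f/4 comes out around 2.4 m, so focusing a couple of metres out keeps everything from roughly 1.2 m to infinity acceptably sharp, which fits the close-surface scanning distances used here.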
It gives a short exposure time, while the aperture value is small enough to compensate for the distance to some degree and give images that are sharp at the edges too. Before I started the scan I took a few images of a colour checker for future white balance correction and colour calibration. Unfortunately, since this cliff is quite huge and has limited access, I didn't set up any rulers as a scale reference. Scanning takes some time; especially with larger props it is quite easy to get lost and leave some areas uncovered while overshooting others. This is why it is worth keeping the scanning path organised. And this is the path I have found most efficient and useful when scanning larger vertical props with a drone: I simply fly in vertical lines, trying to keep the same distance to the subject. So I fly up taking a series of pictures until there is nothing more to capture, then I move the drone to the next virtual column and fly down taking another series of images. I repeat these steps over and over until everything is covered. And this is exactly what I have done here.
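The column-by-column pattern just described is essentially a vertical boustrophedon. A minimal sketch of how such a waypoint list could be generated; the wall size and photo spacing below are made-up illustration values, not measurements from this capture:

```python
def column_scan_waypoints(width_m, height_m, spacing_m):
    """Vertical boustrophedon scan: fly up one column, shift sideways, fly down the next."""
    waypoints = []
    x, going_up = 0.0, True
    while x <= width_m:
        ys = [y * spacing_m for y in range(int(height_m / spacing_m) + 1)]
        if not going_up:
            ys.reverse()               # descend on every second column
        waypoints += [(round(x, 2), y) for y in ys]
        x += spacing_m
        going_up = not going_up
    return waypoints

# hypothetical 10 m wide, 6 m tall wall, one photo every 2 m
path = column_scan_waypoints(10, 6, 2)
```

With these numbers the path alternates up and down through six columns of four shots each, 24 waypoints in total, which is exactly the coverage pattern flown here.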
To make sure I didn't miss anything I also took a few shots from a larger distance, as even lower-quality data is better than no data. There were a few birds flying around, including some crazy seagulls, but I have learnt that seagulls need more space to attack. They are predators, but they usually dive from above to hit the target and pull up when done. That works with fish in the open sea, but not in this case, since the cliff wall limits the space. So as long as I kept the drone quite close to the cliff wall, it was safe, and the seagulls just flew around in big circles. I am not saying that these seagulls wanted to attack the drone, just that it would be quite hard for them if they tried. Finally I managed to capture 274 key images, plus a few additional ones for colour calibration. All captured images were fine. When the capture was over I hand-landed the drone and packed it back into my backpack. The next step is photo editing, to get the best from all the captured images. For photo editing I use PhotoLab 3.
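As an aside, the neutral-patch white-balance pick we are about to use conceptually boils down to a per-channel gain that turns the sampled colour-checker patch grey. A rough numpy sketch of the idea, not of PhotoLab's actual (proprietary) implementation:

```python
import numpy as np

def white_balance_gains(patch_rgb):
    """Per-channel gains that map a sampled neutral patch back to grey (grey-patch method)."""
    patch = np.asarray(patch_rgb, dtype=float)
    grey = patch.mean()          # target: equal R, G, B at the patch's overall brightness
    return grey / patch

# a slightly warm neutral patch sampled from the colour checker (made-up values)
gains = white_balance_gains([0.55, 0.50, 0.45])
balanced = np.array([0.55, 0.50, 0.45]) * gains   # channels become equal
```

Applying the same gains to every image shot under the same light is what makes the one-click, select-all correction work.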
Next let’s jump into the ‘Customise’ section
and select the image which is going to be used to set up the white balance. With this image selected, let's press Ctrl+A to select all images. Next let's zoom in to see the colour checker and pick a neutral patch with the White Balance colour picker. This sets the white balance based on that value for all selected images. Next let's remove some ambient occlusion shadows using the Selective Tone section. Since we work on RAW images we work on very dense colour data, and it is totally fine to shift colours and push values, because in RAW we have far more data than we really need. So even if we limit the histogram area, we still have a lot of real, capture-based data to pick from. Just to remind you of one of my previous videos: this is the histogram-based chart showing how many colours RAW files can store, depending on their bit depth. It means that as long as we operate on dense 14-bit RAW data, we can consider any colour shift within the dynamic range lossless. And this is the stage where we can easily remove any ambient occlusion shadows or calm down highlights, simply by playing with the options in the Selective Tone section. It would affect the reconstruction quality if we used 8 bits for reconstruction, but as long as we export the result to 16 bits and use that 16-bit result for reconstruction, the photogrammetry software gets more data than it even needs. It is good to understand that photogrammetry reconstruction isn't based on shadows but on the position of points. As long as the photogrammetry software has enough information to tell neighbouring pixels apart, and at 16 bits it definitely has, we should be totally fine. We should also turn the crop correction off, as we don't want PhotoLab to remove anything from our images. We should turn any geometry distortion correction off too, as the photogrammetry software will handle it far better. And with that we should be ready to export the data for reconstruction. We just need to make sure we export in a format which supports 16 bits, like TIFF or PNG; I usually use TIFF. Since we are converting 275 images this way, it is going to take a while. When done, we can jump into Metashape and load those images for reconstruction.

The next step is the reconstruction. First, to reconstruct the subject we need to load and align all the images in 3D space. I did it by selecting:
Workflow and then Add Photos. I skipped the image with myself holding the colour checker and selected the first image with the actual rock formation; then I moved to the last one and, with Shift pressed, selected it along with everything in between. Next we need to align those images in 3D space. I did it by jumping into Workflow and selecting Align Photos. Let's try it with the default settings. Since we work on 16-bit images it should be enough, but in case not all images get aligned, we can play with the numbers in the advanced settings and increase the values. Depending on the alignment settings, this process can take from a couple of minutes to even a few hours. This alignment took about 2 hours, so let's skip the video to the moment when it was over. When the alignment is done, all images should have a tick. This marking means that the image was aligned and is going to be used for reconstruction. With all images aligned, we are sure we are going to use all the data we have collected. After image alignment we can see the camera positions and tie points. Tie points are navigational points shared between cameras, used to estimate positioning in 3D space for the reconstruction. The blue planes represent the camera positions at the moments when the images were taken. It looks like this rock formation is quite well covered from close and medium distance, and it should be enough information for reconstruction. The box around the tie points limits the reconstruction area; let's extend it a bit to cover the skipped part on the right side. I think everything looks OK, so let's move on to the next step, which is the reconstruction itself. As you can see, we get more options once the images are aligned. Since we want to proceed with a full 3D reconstruction, let's use the reconstruction mode I mentioned at the beginning of this video. To do this we need to select Workflow from the top tab and pick the option to generate a mesh.
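As a quick aside before the mesh settings: the bit-depth argument from the photo-editing step is easy to sanity-check with simple arithmetic.

```python
levels = {bits: 2 ** bits for bits in (8, 14, 16)}   # tonal levels per channel

# even if Selective Tone squeezes the capture into half of its tonal range,
# 14-bit data keeps far more gradation than an 8-bit export could hold
compressed_14bit = levels[14] // 2
headroom_vs_8bit = compressed_14bit / levels[8]
```

Even after halving the usable tonal range, 14-bit data still holds 8,192 levels per channel, 32 times the 256 levels of an 8-bit export, so tonal shifts followed by a 16-bit export cost the reconstruction nothing.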
As I mentioned, the new mode works only with Depth Maps used as the data source and only in Arbitrary 3D mode. To get the mesh as dense as possible, let's keep the quality at Ultra High with the maximum face count. Since I want colour to be stored in the vertices, let's make sure the option to calculate vertex colours is active. Next let's hit the OK button and start the reconstruction. Let's speed time up a bit, since the full 3D reconstruction took almost 22 hours. In previous versions of Metashape it would probably have taken even longer, and would very likely have crashed at the end, giving nothing back. The mesh we generated has 56 million faces and is quite accurate. Unfortunately I didn't manage to cover everything and there are a few gaps left. These gaps can easily be covered by the low-poly model and filled with texture data using the Clone Tool in Substance Painter at a later stage. But before that, let's save the model and export it as a high-poly source for baking. Exporting such a dense FBX file might take up to an hour; this one took 40 minutes. Since it is very hard to navigate ZBrush with such a heavy mesh, we need to decimate it in Metashape to create a lighter version. We can do that easily by running the Decimate Mesh tool from the Tools > Mesh menu. I think 5 million polygons should be enough to keep all the shape details we might need in the next steps, so let's set the decimation to 5 million and press OK... or no, let's pick 10 million, as ZBrush should handle 10 million without any problems. Just be careful not to save any changes after the decimation is done, so we keep the heavy model in the project and can re-decimate it to whatever value we need. As you can see, 10 million is enough to store all these shape details, so it is definitely enough as a reference for building a low-poly model in ZBrush. Decimation itself is quite fast, and when it finishes we can export the decimated mesh as another FBX file. Exporting 10 million polys took me just 3 minutes.
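A rough back-of-envelope hints at why these exports take so long. The sketch below assumes roughly half as many vertices as faces, 32-bit indices, and nine floats per vertex (position, normal, vertex colour); these are my own approximations of raw geometry size, not anything FBX-specific:

```python
def mesh_size_gb(faces, floats_per_vertex=9):
    """Rough binary size of a triangle mesh: index buffer plus vertex buffer."""
    vertices = faces // 2                             # V ~ F/2 for a closed triangle mesh
    index_bytes = faces * 3 * 4                       # three 32-bit indices per triangle
    vertex_bytes = vertices * floats_per_vertex * 4   # position + normal + vertex colour
    return (index_bytes + vertex_bytes) / 1024 ** 3

full = mesh_size_gb(56_000_000)       # the raw reconstruction
decimated = mesh_size_gb(10_000_000)  # the ZBrush reference copy
```

That is about 1.6 GB of raw geometry for the full mesh versus roughly 0.3 GB after decimating to 10 million faces, and a real FBX file carries extra structure on top of that, so the 40-minute export is no surprise.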
When done we can close Metashape, without saving, so as not to overwrite the high-poly mesh with its decimated version. Now that we have a high-poly model for baking and a medium-poly model as a reference, we need to build a fully functional low-poly version and bake all the high-poly information into its textures.

So let's jump into ZBrush and load our 10-million-face model as a reference for building the low-poly one. Next let's bring up the Topology Brush and create the low-poly model using the detailed model as our reference. The Topology Brush is a very handy and efficient tool for this job. Let's set the proper spline accuracy using the Draw Distance slider and start building the model. With the Topology Brush we simply draw lines; where they cross each other they create connection points, so we don't need to be very accurate when drawing the lines. What really matters are the intersections. Areas enclosed by 3 or 4 connection points are filled with a face. Because we are going to project this low-poly model onto our heavy reference, we don't need to be very accurate; we just need to focus on the main topology, following the main shape. This way we will get enough geometry to wrap around the details once we subdivide the mesh. When done, we need to separate the low-poly mesh from the reference and project it onto the reference's surface to make sure it aligns properly. To get a bit more information from the geometry itself, let's subdivide it a little and project it onto the 10-million-poly mesh, and repeat that until we get the density we want. I think at the 5th subdivision level the mesh is dense enough to carry all the silhouette details. For a vista prop seen from a long distance, the 1st or 2nd subdivision level would be totally enough. But since this is a vista prop for a still shot in a Marmoset Toolbag scene, I can go a bit crazier and not worry about performance too much. Just bear in mind that nothing in the workflow changes: no matter what mesh density I pick, the next steps are exactly the same.
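Those subdivision levels multiply the face count fast: each ZBrush subdivision roughly quadruples it. A tiny sketch, using a hypothetical 2,000-face base cage rather than the actual count from the video:

```python
def faces_after_subdivision(base_faces, levels):
    """Each subdivision level quadruples the face count (4^levels growth)."""
    return base_faces * 4 ** levels

# face count of a hypothetical 2,000-face retopo cage after each level
counts = [faces_after_subdivision(2_000, level) for level in range(6)]
```

The 4x growth per level is why level 1 or 2 is plenty for a distant game vista, while level 5, already in the millions, is only affordable for a still render.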
If it were a game asset, I would go with a much lower density. Now the mesh is ready to be exported. To make the final scene a bit more interesting, let's also create something for the ground surface and export it too. Now it's time to UV map the mesh. UV mapping can be done directly in ZBrush with the UV Master tool, but there is a far better tool designed just for this, and that is the tool I am going to use. So for UV mapping, let's jump into RizomUV. To start, we drag and drop in the mesh we made. We can UV map it manually, half-manually, or trust RizomUV completely and let it do the job for us. Since we are going to use a dedicated texture without any overlapping, with information baked directly from the high-poly model, let's save some time and let RizomUV do the full job. Let's move into the Auto Seams and Full Auto UVs tab, select the Auto Select Mosaic mode, and set the number of cuts to 3; this means less distortion but more cuts. Next let's activate overlap and stretching detection, as it is important for us not to have anything baked into the same space twice, and proceed with the UV mapping. The Auto Select Edges button selects the edges for cutting, following the rules we set. We can change the rules and recalculate, or hit the second button if we are OK with what we see. The next button cuts and flattens the UV shells. When done, it is time to hit the last button, which rescales and packs all the UV islands within a single UDIM, using as much space as possible. Now it's time to hit Ctrl+S to save our UV mapping to the FBX file and consider the UV mapping done. Next we bake all the colour and height information from the high-poly model onto the low-poly one we made. To do this, let's jump into Substance Designer and use its baker. When it opens, let's create any substance; it doesn't matter which, since we need it just to run the baker. Next let's bring in the low-poly model we want to use for baking and activate the baker on it.
Let's configure the baker window and start the baking process. Looks like I missed Vertex Color as the source for the colour map, so let's fix that and rebake just the colour map. Now it looks like we have a full set of textures to work with, so it's time to jump into Painter and fix what's left to be fixed. Since we didn't bake a roughness map, let's generate one in Substance Designer using our bakes. I quickly made one by processing the albedo with a highpass filter; next I multiplied the result by the ambient occlusion map, since we don't want anything hidden in shadow to be reflective; and at the end, since it is a roughness map, I inverted all the values. Now it's time to jump into Painter for the final tweaks. Let's bring in our UV-mapped low-poly rock formation. Next let's add a Fill layer and fill the channels with the baked textures; first we need to import them into the project and add them to the channels. To be able to use the Clone Tool we need to apply a Paint effect to the layer and set it to Passthrough for every channel. With Paint active, we can activate the Clone Tool and paint on the mesh. By pressing 'V' we select the area we want to clone from. Now we need to find the areas to fix and overpaint them with correct data. It is worth switching between channels for better visibility; that can be done by pressing the 'C' key, and with 'M' we come back to the material view. Looks like I missed the ambient occlusion map... so let's bring it in then. OK, looks like everything is done. Now we can export all the textures and apply them in the next step, in our render scene in Marmoset Toolbag. Let's open Marmoset Toolbag then and put all the elements together.
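The roughness recipe described a moment ago (highpass the albedo, multiply by ambient occlusion, invert) can be sketched outside Substance Designer as well. A minimal numpy version, assuming single-channel float maps in the 0-1 range; a simple box blur stands in for the low-pass step, so this approximates the idea rather than Substance's exact node output:

```python
import numpy as np

def box_blur(img, radius):
    """Box blur used as the low-pass step of the highpass filter."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def roughness_from_bakes(albedo, ao, radius=4):
    """Highpass the albedo, attenuate by AO, then invert - the recipe from the video."""
    highpass = albedo - box_blur(albedo, radius) + 0.5   # recentre around mid-grey
    return np.clip(1.0 - highpass * ao, 0.0, 1.0)

# stand-in maps in place of the real bakes
albedo = np.random.default_rng(0).random((64, 64))
ao = np.ones((64, 64))
rough = roughness_from_bakes(albedo, ao)
```

The AO multiply is what keeps crevices dull: wherever occlusion pulls the highpass toward zero, the inversion pushes roughness toward one.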
To begin, let's bring our main hero, the rock formation, into the scene, and apply all the textures we made. Since this rock formation looks poor without context, let's add some: let's bring in both ground meshes we made earlier. This one will be used as the solid ground. Next let's bring in the other one and set it up as a background water plane. Since this is not the subject of this video, to cover the ground planes I will just use two environment materials I have already made. For the beach I am going to pick one I captured a while ago; for the water I will use a procedural water material I made ages ago too. It isn't the best, but it should do the job. Next let's drag both materials onto the meshes and set the tiling to a value that makes sense. Then let's copy the cliff, place it at a longer distance, and rearrange the scene to build a nice and interesting composition. To make the lighting a bit more interesting, let's apply one of the sky maps I captured a while ago. Let's add an additional light which will cast a sharp shadow, by tapping on the HDRI preview map, and adjust its angle and brightness. Next let's add some fog, set up the render, and finally bring some life to the scene. A bit more tweaking until we are happy with what we see, and I guess we can consider the scene done. I really hope you've found something useful in this video. If you found it interesting and want me to create more content like this, please drop a comment, leave a thumbs up, and if you haven't already, please subscribe to my channel. Big thanks to those who have done it already, as it really motivates and helps me to move forward and create more content I can share with you. I guess that's it, and hopefully see you in the next video. Bye 🙂