Introductory Guide to Photogrammetry using Agisoft Metashape

Updated: Jun 20

I created this introductory guide to photogrammetry. It is very basic and explains my process for creating 3D models using Agisoft Metashape.


Photogrammetry is the process of using multiple photos of a real-world object to create a 3D digital replica. It’s best suited to objects that would be time-consuming to produce in 3D sculpting software. Photoscan (now Metashape) and other photogrammetry tools let you take 2D photographs of a given object, person, or space and build a 3D model, which can then be imported into Unity, Unreal, or Cinema 4D. Photogrammetry models are great for documentary-based projects because you can bring your subject (whether it’s a person, an object, or a physical space) into the scene, as opposed to using a computer-generated representation. You’ll also get a photorealistic texture for your model.

Shooting for photogrammetry

Some object characteristics are problematic for the software. Avoid the following:

-Reflective or shiny objects (you can apply hairspray or baby powder to some objects to dull shine or reflection)

-Objects with few distinguishing features (large single-colour walls, simple symmetrical shapes)

-Moving objects (people in motion, a tree/branches moving because of the wind)

-Objects with small, hair-like or fuzzy details

-All-black or all-white objects. It just won't work.

Lighting conditions


VERY IMPORTANT: The model will look better and will be easier to clean if the object is not under direct light. Make sure it doesn’t have a shadow on one side and strong light on the other; light is best when diffused and consistent all around the object. Highlights and shadows will show up in the final texture, and direct light from one side, or shadows, can also throw off the software when it constructs the model. Ideally you want a texture with no baked-in shadows, so that you can create the lighting conditions in Unity. A photoset of consistently lit, well-exposed photos is ideal. I experimented with diffused light and avoided direct light inside my apartment: I waited for the right time of day (around 4 pm) to get good light conditions and took all the photos as fast as possible to avoid any light changes.

Camera focus (DSLR cameras)

While researching and testing, I made sure that in each photo the object was completely in focus. Some recommendations regarding focus:

-The background can be out of focus, but if the object is oblong, make sure your f-stop is high enough (f/8 or above) to capture the full depth of the object.

-Make sure there’s no motion blur in the photos. The shutter speed should be fast enough to ensure this (around 1/80 s or faster). A high f-stop and a fast shutter speed mean you will likely need to shoot either in daylight or with a light kit.

-If you don’t have a lighting kit you can use natural light, but remember that it can be tricky because the lighting conditions change quickly.

I took the photos with a DSLR. I kept the settings on “manual” and used the same lens for all the photos in the set. I personally don’t own different lenses, but it is recommended to select the lens most appropriate for your shooting environment and subject. I avoided autofocus because my object is not that big, and with autofocus the photos would not always be focused on the whole object.
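The two rules of thumb above (f/8 or higher, 1/80 s or faster) can be wrapped in a tiny sanity check. This is just a sketch of the arithmetic; the thresholds come from my own tests, not from any hard rule:

```python
def settings_ok(f_stop, shutter_denominator, min_f=8.0, min_shutter_denom=80):
    """Rule-of-thumb check for photogrammetry exposure settings.
    `shutter_denominator` is the n in a 1/n-second exposure."""
    issues = []
    if f_stop < min_f:
        issues.append("f-stop too low: depth of field may not cover the object")
    if shutter_denominator < min_shutter_denom:
        issues.append("shutter too slow: risk of motion blur")
    return issues  # an empty list means the settings pass both checks

print(settings_ok(11, 125))  # []
print(settings_ok(5.6, 60))  # both warnings
```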


Smartphone camera (iPhone)

Honestly, the photos were just as good as the DSLR ones. I have made 3D models using both portrait and landscape photos and they worked great. In my experience, it is not about the camera but about the time and effort you dedicate to taking the photos.

If you are using a smartphone to take the photos it most likely means you will not use a tripod. So be very careful and make sure you take a lot of photos!


Capturing the photos

A photoset should cover the entire object or surface you want to turn into a 3D model. For small objects, I recommend using a cake turntable stand, as it lets the camera stay in the same position while you rotate the object as easily as possible. I also made a simple setup to help me save time masking the photos: I used a small table I had as a background, to make sure I wouldn’t get noise points from other objects in my point cloud.

If you’re shooting a single surface, such as a building facade, your shooting angles should look like this: (UPLOAD IMAGE OF SURFACE AND OBJECT)


I took enough images that every surface of the object was covered, and made sure the IMAGES OVERLAP AT LEAST 50%. There is nothing more satisfying than admiring an almost perfect photo alignment. A photoset can be anywhere from 15 to several hundred images, depending on the object and the desired level of accuracy. In my experience, the more the better, even if the object is small. In all my tests I took three rings of photos: a top-level ring looking down at the object, a mid-level ring capturing the entire object, and a low-level ring capturing the entire object from below. I also experimented with adding photos of details. Most of my tests had between 36 and 300 photos.
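As a rough sketch of the arithmetic behind the 50% overlap rule: on a turntable, each new frame may advance only by the non-overlapping part of the camera's field of view. The 40° field of view below is an assumption (roughly a 50 mm-equivalent lens); substitute your own lens.

```python
import math

def shots_per_ring(horizontal_fov_deg, overlap=0.5):
    """Estimate how many photos a full 360-degree ring needs so that
    consecutive frames overlap by at least `overlap` (a fraction 0-1)."""
    step = horizontal_fov_deg * (1.0 - overlap)  # usable rotation per shot
    return math.ceil(360.0 / step)

print(shots_per_ring(40, overlap=0.5))   # 18 photos per ring
print(shots_per_ring(40, overlap=0.75))  # 36 for a detail-heavy object
```

This also explains why photosets grow quickly: three rings at 50% overlap is already around 54 photos.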

Example A:


In Example A the photos were taken with a DSLR camera, a tripod, and a home-made turntable (a white piece of cardboard on top of a box, rotating just the cardboard). The photos overlap by about 50% and there are three major rings. There is also one photo of the top of the object.

As a result of this detailed photoset, the 3D object had great texture and captured fine details, even dust.


Example B:


Photos were taken with an iPhone 7 Plus. No tripod or turntable was used. As you can see, the alignment is not as perfect as in Example A, but the 3D model was still very good, and it took less time to take the photos. A white background was used (a wall and a white piece of paper). Some people recommend a green screen because it makes it easier to clean up the object; if you don't have a green screen, white paper always works.


Mistakes to avoid:


The object is a complex origami model. A tripod and a DSLR camera were used, but only 39 photos were taken. Also, as you can see, the photos are not well aligned; that is because the photographer kept moving the tripod and adjusting the camera focus on several occasions. I recommend not moving the tripod or adjusting the camera focus once you start taking photos; only move the tripod when starting a new ring of photos. As you can also see, no photos of the back of the model were taken, hence the incomplete 3D model.


When it comes to taking photos for your photogrammetry model, think "the more detailed photos the better". If the object has a lot of detail, add an extra ring and make sure the photos overlap as much as possible.


Important takeaways:

-Make sure there is enough space between the object and the background; otherwise Photoscan will add parts of the background to your object. This can be cleaned first during the Dense Point Cloud part of the Workflow, and then in other software. In my experience it is better and quicker to take your time with the photos and make sure they are right for Photoscan; don't rush it. The time invested in taking photos will save a lot of editing time in Photoscan and Blender. If you edit the photogrammetry model in Photoscan, it is easier and more effective to do it during the Dense Cloud stage, because once you proceed to building the mesh the scan won't look as sharp and editing is trickier.


-Use a tripod. This is important! I tried a couple of tests without a tripod, but the results were messy.

-Using a turntable helped the photo alignment and produced cleaner 3D models.

-Preferably use a white background; it will save time when processing the photogrammetry model. Photos with a busy background mean the software will also create point cloud information for the background, which can double or triple the processing time in Photoscan, and the extra points will add noise to the photogrammetry version of your object. Some people recommend a green screen because it makes the background points easier to identify when cleaning up the dense cloud. I didn't have a green screen available, so I used white; the cleaning process took a little longer and I had to make sure to capture the object's details well.


-If you look closely, I used sticky notes with drawn signs (arrows and circles) placed around the turntable. This was recommended in several of the tutorials I watched; it is supposed to help Photoscan identify the different sides/areas of the turntable.

PROCESSING THE PHOTOS IN PHOTOSHOP

Once the photos were taken, I decided to colour-process them in Photoshop before adding them to Photoscan. I saw a tutorial on YouTube and, to be honest, the best 3D models I have made used this process. It is an extra step and it is not strictly necessary, so feel free to skip it, but it genuinely adds a lot of detail to the 3D models.


Steps:

1. Open the photos in Photoshop using Camera Raw.

2. Decide which are your best photos. This is the moment to delete the photos that won't be used; make sure to remove any repeated or shaky photos.

3. Increase the vibrance so the colours look brighter. Make sure all the photos have the same settings.

4. Save all the photos numbered from 1 to x. (This is important: the software will recognize the order of the numbers.)
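If you have many photos, the numbering step can be automated. A minimal sketch in Python; the `.jpg` extension is an assumption, so adapt it to whatever format you export:

```python
from pathlib import Path

def number_photos(folder, ext=".jpg"):
    """Rename every photo in `folder` to 1.jpg, 2.jpg, ... in sorted order.
    Assumes no file in the folder is already named with a bare number."""
    photos = sorted(Path(folder).glob(f"*{ext}"))
    for i, photo in enumerate(photos, start=1):
        photo.rename(photo.with_name(f"{i}{ext}"))
    return len(photos)
```

Run it on a copy of the folder the first time, since renames are not reversible.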



Once I have my edited photos in a specific folder, I open Photoscan.


Processing the photos using Photoscan

Photoscan has a very user-friendly interface. To make a model you basically follow the Workflow tab step by step. As with any other software, there are some tricks that are important to know. Go to Workflow > Add Photos to import your photo set. Once imported, it will be added as a new Chunk.
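The same Workflow menu steps can also be scripted from the application's built-in Python console. This is only a sketch: the module name and the exact method names vary between PhotoScan and recent Metashape versions (for example, the dense cloud step was later renamed), so treat every call below as an assumption to check against your version's API reference.

```python
# The Workflow menu steps, in the order the guide follows them.
WORKFLOW = ["Add Photos", "Align Photos", "Build Dense Cloud",
            "Build Mesh", "Build Texture", "Export Model"]

def run_workflow(photo_paths, export_path):
    """Sketch of automating the Workflow menu. Runs only inside the
    application, where the `Metashape` module is available."""
    import Metashape  # not importable outside Metashape/PhotoScan
    doc = Metashape.app.document
    chunk = doc.addChunk()
    chunk.addPhotos(photo_paths)     # Workflow > Add Photos
    chunk.matchPhotos()              # Workflow > Align Photos (matching)
    chunk.alignCameras()             # Workflow > Align Photos (alignment)
    chunk.buildDepthMaps()
    chunk.buildDenseCloud()          # "Build Point Cloud" in newer versions
    chunk.buildModel()               # Workflow > Build Mesh
    chunk.buildUV()
    chunk.buildTexture()             # Workflow > Build Texture
    chunk.exportModel(export_path)   # File > Export Model
```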


From several YouTube and Udemy tutorials, I learned to use separate chunks depending on the object. I tried a big tree trunk, dividing it into three chunks: top, trunk, and bottom. Combining the three chunks was not difficult, though it implied a little more work: you have to define marker points on the object so Photoscan knows how to merge them properly. The process definitely produced a better outcome.

In one of my tests, the sign, I masked each photo to perfection. This was very time-consuming, and the only reason I did it was to learn the process and see the final result. It took me about 4 hours of making sure everything was perfect. As a result, I didn't have to clean any of the Dense Cloud, and I am very proud of the outcome of that particular asset. Would I ever mask to perfection again? Yes, but I wouldn't do it for every single scan. I find masking helpful when you have a messy background; in that case, you can leave out things you don't want Photoscan to read and process.


I came across an "express" masking technique: you mask one of the photos in Photoshop and then upload it as a mask in Photoscan. I guess this technique is good if you are scanning an object with a particular shape, but it didn't work for any of my tests.


To align the photos, go to Workflow > Align Photos.


Once your cameras have aligned, you'll see a sparse point cloud that looks vaguely like the object. If you hit the camera icon along the top, you'll see the aligned camera angles in blue; they should reflect the angles from which you shot the object. The first couple of times I tested, the alignment was a mess and not all the photos would align.


Then continue to build the Dense Cloud: go to Workflow > Build Dense Cloud. You will see a dense point cloud that should look like your object. To the right of the crop button at the top is a series of view options; if you don't see the dense cloud right away, click the dense cloud icon.


Along the top there is also a marquee selection tool with different selection shapes. At this stage, you will want to eliminate all the extra points. This is a very important step and worth the meticulous effort, because it means your mesh will only be built from accurate points.


I recommend cleaning the 3D Model right after building the Dense Cloud.


By cleaning I mean deleting the cloud points you don't need in your photogrammetry model. If it is your first time making a photogrammetry model, I highly recommend finishing a full Workflow pass before you clean anything; this will give you a better idea of what I mean by "clean up".

-Delete the EXTRA POINTS IN BETWEEN SPACES.

-Delete the cloud points you won't use in your final object. This saves time and effort later.
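Photoscan's selection tools do this interactively, but conceptually the cleanup is just a filter over the points. A toy sketch in plain Python with synthetic data; the near-white threshold is an assumption that only makes sense for a white-background shoot like mine:

```python
def drop_background_points(points, white_threshold=240):
    """Keep only points whose colour is not near-white background.
    Each point is a tuple (x, y, z, r, g, b) with 0-255 colours."""
    return [p for p in points
            if not all(c >= white_threshold for c in p[3:6])]

cloud = [
    (0.1, 0.2, 0.3, 120, 80, 40),    # brown point on the object
    (0.9, 0.9, 0.0, 250, 250, 248),  # near-white background point
]
print(drop_background_points(cloud))  # keeps only the brown point
```

This is also why a green screen helps: a saturated green is even easier to separate from object colours than white is.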


What happens if you decide not to clean your 3D model? It depends on how you took the photos and the background you used. In the next examples you will see why I needed to clean up the extra cloud points.

Examples:


In both examples, I didn't clean up the dense point cloud or mask the photos. In the first, the extra points between the leaves simply stuck together. In the second, you can see a lot of the white background attached to the tree. Cleaning up is not a complicated process, but it does demand time; you can avoid these kinds of mistakes by investing more time in taking the photos and adjusting the focus. If I were working on a project that had to look perfect and incredible, I would rather mask all the photos before processing them than clean the dense cloud.


Next, build the mesh by going to Workflow > Build Mesh. You will have a model (see below). The model doesn't have a texture yet; it's just using per-vertex colour here.

To get the texture, go to Workflow > Build Texture.


The default settings here are fine, although if your texture has any weird blending issues, you can rebuild it with different mapping modes.




To export, go to File > Export Model. Recommended file formats are OBJ and FBX.


BLENDER: Blender is great software for editing the UVs, which is important for the high-poly mesh. The UVs will be better than the ones generated by Photoscan.


It is possible to unwrap the UVs of the 3D model generated in Photoscan and fix the mesh. In Blender, you can also edit the orientation and positioning of the 3D model.


UNITY: My first attempt was to download a free skybox and some assets to build a nicer scene for some of my models, and I ran into issues related to lighting. Photos contain lighting information from the environment at the time of the shoot. To be able to apply virtual lighting, you need to remove all lighting information from the generated textures. This was one of the main issues I had; it took me a couple of days of research to get to this point.


Unity has the De-Lighting tool on the Asset Store. It enables artists and developers to remove lighting information from photogrammetry textures so that the final assets can be used under any lighting condition.


There are multiple ways of de-lighting a texture. Some of them require more information about the environment lighting captured at the time of the shoot.


The Unity light-removal tool is not perfect, and for some textures it can be challenging to remove light automatically. In that case Photoshop can be used to remove lighting artistically (based on your own skills rather than an automatic process). It is more time-consuming, and in cases with many lights, coloured lights, or significant global illumination, it can be difficult to do. The light-removal process in Photoshop is based on the world-space normal map and the ambient occlusion map, and is done in a 32-bit format. Unity has also created a step-by-step guide that shows how to use a layered shader to achieve the same level of quality. Photogrammetry gives a high-quality result but requires a very high texture resolution to preserve details; this is impractical for game authoring due to memory budgets, and it doesn't let you add any variation to the object.
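A very rough intuition for de-lighting (a toy illustration only, not Unity's actual algorithm): if an ambient-occlusion map approximates how much light reached each pixel, dividing the photographed colour by it pushes shadowed areas back toward their unlit albedo. The `floor` value is an assumption to keep deep shadows from blowing out.

```python
def delight(pixel_rgb, occlusion, floor=0.05):
    """Approximate de-lighting of one texture pixel.
    `pixel_rgb` is the photographed colour as 0-1 floats;
    `occlusion` is the AO value (1 = fully lit, 0 = fully shadowed)."""
    ao = max(occlusion, floor)  # avoid dividing by near-zero in deep shadow
    return tuple(min(c / ao, 1.0) for c in pixel_rgb)

# A pixel photographed in half-shadow recovers a brighter albedo:
print(delight((0.2, 0.15, 0.1), occlusion=0.5))  # (0.4, 0.3, 0.2)
```

Real de-lighting also uses the normal map to estimate directional light, which is why the Photoshop process described above needs both maps.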

A layered shader defines the visual as a combination of individual materials, and these materials are tileable: they can wrap around objects and be reused on different objects. Using a combination of materials lets you achieve visual quality similar to a high-resolution texture while using low-resolution textures, which saves memory.

Resources: https://www.udemy.com/from-photo-to-3d-model-photogrammetry-tutorial-series/

https://unity.com/solutions/photogrammetry

https://unity3d.com/files/solutions/photogrammetry/Unity-Photogrammetry-Workflow_2017-07_v2.pdf

http://gmv.cast.uark.edu/modeling/software-visualization/unity-software-visualization/workflow-unity-software-visualization/a-walkable-unity-demo/

YouTube Videos:

PhotoScan Guide Part 1 - Getting Started with Basics: https://www.youtube.com/watch?v=LeU_2SHwhqI

PhotoScan Guide Part 2: Turntable Tutorial: https://www.youtube.com/watch?v=9_F-b2hxP_o

PhotoScan Guide Part 3: Natural Environment Scanning: https://youtu.be/diQAJO4sghQ

PhotoScan Guide Part 4: Man-Made Environments: https://youtu.be/3K1MmSd-0GM

Agisoft Photoscan Pro - basic workflow: https://youtu.be/6uI9_5T3d5U