
6. 3D Scanning and printing

I have no prior experience with this week’s assignment topics. I will be using the photogrammetry method of 3D scanning, which uses computer vision algorithms to build 3D models from ordinary photographs that contain no depth information. The outcome of this week’s learning was really satisfying.

Attempt I

To start with something user-friendly, I tried SCANN3D for Android.

On opening the app we are greeted with the screen shown on the left-hand side. Click on New Model. There are two ways to take photos: manual or guided. In manual mode, we should take at least 20 pictures of the object, each at an angle slightly offset from the previous one.

The important part of taking pictures for photogrammetry is that each photo should overlap parts of the previous and next shots. This ensures the algorithms can find common points in consecutive pictures and establish connections between them.

Guided mode helps with this overlap because it shows red dots on the screen if the current shot is too far from the previous one. Lines at the red dots indicate the required inclination; moving the phone in the direction of the red lines turns the dots green, as shown in the third picture. After taking more than 20 pictures, press the tick button at the bottom right.

The app takes some time to process, depending on the number of photos taken. Only a low-quality 3D model with a small number of polygons is available to free users.

Then comes the next gripe: to download the file you need to subscribe to the app (which I didn’t do), so I am attaching a video of the model instead.

Attempt II

I used the same photos I took with the SCANN3D app for this trial as well. The photos are saved on the phone in the SCANN3D > Imagesets folder.

This time I used Meshroom, a free and open-source photogrammetry software with a node-based UI. Meshroom only works on PCs with a CUDA-capable graphics card, which means you need a comparatively recent Nvidia GPU to use it.

The UI has three panes along the top, one of which is the Images pane. Drag and drop all the photos into the Images pane, click the Start button at the top, and you will get a textured mesh. (Well, that is the theory.)

I got a terrible 3D model with a 56 MB file size (which cannot be uploaded to a free Sketchfab account).

Notice the two separate planes, with the two sides of the shoe at different sizes on each plane.

It is very hard to find good tutorials on troubleshooting Meshroom, but I found out what went wrong with this trial.

Photogrammetry depends on overlap between consecutive shots. The 3D Viewer pane shows the level of detail the software has reconstructed so far.

The box-shaped markers in the pane represent the positions of the camera in 3D space. It is clearly visible that there are gaps between shots, which broke the continuity between frames.

So I thought I might as well shoot a video, extract frames from it, and feed them to Meshroom.

Attempt III

I shot a 1:30 minute video on my old phone. It was late at night and the indoor lighting was not adequate, resulting in a blurry video. I made sure to record in 4K resolution with the exposure and ISO locked, using the Open Camera app.

The video was transferred to the PC and ffmpeg was used to extract the frames. I used the command

ffmpeg -i 1.mp4 thumb%04d.jpg -hide_banner

This resulted in 2017 photos. I added all of them to the Images pane and clicked Start. There is no “expected time to finish” in Meshroom, but from my experience with the first attempt I figured I would let the PC do its job and went to sleep. In the morning, I saw the process had stopped at the node where the mesh is made. I clicked the Start button and the software tried to resume from where it stopped, but it kept failing.

As before, there was no easy way to find out what was wrong, but I assumed it happened because of the huge number of pictures involved. The ffmpeg command I used did not explicitly specify how many frames I needed, so it extracted frames at the default rate of 25 frames per second.
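
A quick way to confirm the source frame rate and duration before extracting (and so estimate how many images ffmpeg will write) is ffprobe; this is just a sanity check I would run, using the same 1.mp4 filename as above:

# print the video stream's frame rate and duration
ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate,duration -of default=noprint_wrappers=1 1.mp4

Multiplying the two values gives the expected frame count: a clip of roughly a minute and a half at the default 25 frames per second works out to around two thousand images, which matches the pile I got.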

I looked around on the internet for a way to restrict the number of frames generated and found this command:

ffmpeg -i input.mp4 -r 3 -f image2 image-%2d.png

Here,

  • -r – Sets the frame rate, i.e. the number of frames to be extracted into images per second. The default value is 25.
  • -f – Indicates the output format, i.e. an image format in our case.
  • image-%2d.png – Indicates how the extracted images are named. In this case, the names will be image-01.png, image-02.png, image-03.png and so on. If you use %3d, the names will be image-001.png, image-002.png and so on.
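
As a side note, the same thinning can be done with ffmpeg’s fps filter instead of the -r output option; this is just an alternative sketch with example filenames, not the command I actually ran:

# keep only 3 frames per second of video using the fps filter
ffmpeg -i input.mp4 -vf fps=3 image-%3d.png

Either way, at 3 frames per second a clip of about a minute and a half yields roughly 270 images instead of the two thousand plus produced at the default 25 fps.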

This reduced the time it took to go through each node, but the same issue recurred. By then it was midday with bright sun overhead, so I decided to take another set of pictures and try again.

Attempt IV

It is not recommended to take pictures for photogrammetry under a focused light, as the resulting textures will have dark and bright areas on different parts of the model, which looks awkward when the 3D model is lit and shadowed at render time. I didn’t bother about any of that and started taking photos.

I took 43 pictures in total at 13 MP resolution, with very low ISO and neutral exposure.

The images had good detail and very little noise because of the bright light. This time around, I took great care to make sure that I had enough overlap between my shots. The images were dropped into the Images pane and the Start button clicked.

As the software went through each node, I could see that I had done a good job of spacing the shots this time.

I found that meshing was taking an incredibly long time to finish. I also noticed that the point cloud generated before the mesh looked very detailed compared to my first attempt.

So I assumed this level of detail was too much for my PC, and after clicking on the “Meshing” node in the graph pane I reduced the two point-count parameters in its attributes section by three zeroes.

It did the trick, and the result was a highly detailed 3D model.

There were three areas where the polygons were not connected. However, I decided to stop my photogrammetry experiments here for this week. Next time I will learn how to take advantage of cloud rendering and leave the heavy lifting to a better machine.

Even with this output, the level of detail is pretty good.

All the files generated by a node can be opened directly by right-clicking on the respective node in the graph pane. I zipped the files inside the texturing folder and uploaded them to Sketchfab.
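
Sketchfab accepts a zip containing the mesh together with its textures, so the archive can be made from the command line; the folder name below is only a placeholder for whatever folder the Texturing node opens:

# bundle the textured mesh and its texture images for Sketchfab
# (texturing_output stands for the folder opened from the Texturing node)
zip -r shoe_scan.zip texturing_output/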

I saw that the model had a lot of unwanted polygons representing the floor. Since all I wanted was the shoe, I decided to use Blender to trim off the edges.

3D Printing

I used this tutorial from YouTube to make a 3D file for 3D printing. It was a good exercise for getting familiar with Rhino, which I skipped during CAD week. I followed the tutorial node by node, except for removing the discs in the final design. The design cannot be made subtractively because the tubes that form the structure are circular in cross-section, and those circular shapes inside the tower cannot be maintained with subtractive manufacturing.

We have the Ultimaker 2+ in our lab. The workflow is very user-friendly. The filament was already loaded in the machine, but to understand the filament change process we can follow this guide.

After the material is loaded and we have made an .stl file, we have to slice it. For slicing, we use the Cura software.

Open the .stl file from File >> Open. After the 3D file is loaded, we have options to change the 3D printer, material, layer thickness, infill %, support, and adhesion.

If we want to tweak further, click on any one of the last four parameters mentioned above and click “Custom” at the bottom.

First read about what these parameters do before changing the values.

Click Slice at the bottom after all the tweaking is done.

After slicing, a pop-up appears with the approximate time needed for the job and an option to preview it.

Move the vertical slider to see the layer-by-layer changes. Click the play button at the bottom to preview the movement of the print head within the selected layer.

Click “Save to File” to copy the G-code to the SD card and eject it safely to prevent data corruption. Insert the SD card into the machine and, using the jog wheel, select Print and choose the G-code file. I disabled supports when slicing and the output was close to my intended design, so I did not have to spend time breaking off supports afterwards.