
5. 3D Printing & Scanning

I. Group assignment

  • Test the design rules for your printer(s) and define its limits.

Test File

General process

  1. Load the test file into the Cura slicer.

  2. Set the printing parameters.

  3. Slice and verify the preview.

  4. Export the G-code.

  5. Load the G-code into OctoPrint (a scripted version of steps 5 to 7 is sketched after this list).

  6. Pre-heat the nozzle and bed.

  7. Launch the print once the nozzle and bed have reached the desired temperatures.

  8. Keep an eye on the print for the first few layers.
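
A minimal sketch of how steps 5 to 7 could be scripted against OctoPrint's REST API, assuming a default OctoPi hostname and a placeholder API key (the pre-heat itself is handled by the temperatures embedded in the sliced G-code):

```python
# Sketch: upload sliced G-code to OctoPrint and start the print via its REST
# API. The hostname and API key below are placeholders, not our lab's values.
import requests

OCTOPRINT_URL = "http://octopi.local"    # assumption: default OctoPi hostname
API_KEY = "YOUR_OCTOPRINT_API_KEY"       # from OctoPrint > Settings > API

with open("test.gcode", "rb") as gcode:
    response = requests.post(
        f"{OCTOPRINT_URL}/api/files/local",
        headers={"X-Api-Key": API_KEY},
        files={"file": gcode},
        # 'select' and 'print' tell OctoPrint to queue and start the job;
        # heating happens via the start G-code before any movement.
        data={"select": "true", "print": "true"},
    )
response.raise_for_status()
print("Upload accepted:", response.status_code)
```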

1. Creality CR-10S5

The CR-10S5 at the lab is equipped with a 1 mm nozzle. I loaded the test model into Cura and entered the corresponding printer settings.

While verifying the slice, I noticed that most of the tests were missing from the preview: only translucent outlines remained where the geometries should have been. This indicated that these elements could not be printed with the current parameters.

I tried scaling the model and observed whether it had any effect on the slice preview, which it did. By scaling up by a factor of 2, I obtained a slice preview that was almost complete: all bridge tests were present, as well as all overhang tests. The likely explanation is that features thinner than the 1 mm line width are dropped by the slicer, so doubling the scale brought them above the printable threshold. I printed the model both at its original size and scaled up by a factor of 2, as shown below.
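
The numbers work out as follows: the slicer cannot lay down an extrusion narrower than one line width, so with a 1 mm nozzle any feature thinner than roughly 1 mm vanishes from the preview. A minimal sketch, with illustrative wall thicknesses (placeholders, not the actual test file's values):

```python
# Sketch: features narrower than one extrusion line width cannot be sliced,
# which is why they show up only as translucent ghosts in Cura.
LINE_WIDTH_MM = 1.0   # matches the CR-10S5's 1 mm nozzle
SCALE = 2.0           # the scale factor that made the slice complete

# Illustrative wall thicknesses in mm (placeholders for the test model's values)
walls = [0.3, 0.5, 0.8, 1.2, 2.0]

for w in walls:
    original = "printable" if w >= LINE_WIDTH_MM else "dropped from slice"
    scaled = "printable" if w * SCALE >= LINE_WIDTH_MM else "dropped from slice"
    print(f"{w:.1f} mm wall: {original:>18} | at x2 scale: {scaled}")
```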

(Photos: the test prints at original size and at 2x scale)

2. Comparison of all printers

|  | RepRap | Anycubic Predator | Creality Ender 3 Pro | Creality CR-10S5 | Photon | Prusa |
| --- | --- | --- | --- | --- | --- | --- |
| Filament | PLA | PLA | PLA | PLA | PLA | PLA |
| Nozzle diameter (mm) | 0.6 | 0.4 | 0.4 | 1.0 | N.A. | 0.4 |
| Cooling fan | No | No | No | Yes | No | No |
| Bed temperature (°C) | 40 | 40 | 60 | 60 | N.A. | 60 |
| Nozzle temperature (°C) | 250 | 240 | 200 | 220 | N.A. | N.A. |

(The Photon is a resin printer, hence the N.A. entries.)

II. Individual assignment

  • Design and 3D print an object (small, a few cm³, limited by printer time) that could not be made subtractively.
  • 3D scan an object (and optionally print it).

1. 3D Printing

a. Vertebra

As an initial test, I printed the vertebra model I produced in week 3 on one of the Anycubic Predators. I loaded the .STL file into Cura, oriented the model to keep the required supports to a strict minimum, and launched the print using the following parameters.

| Print Characteristics | Details |
| --- | --- |
| Material | PLA |
| Filament diameter | 1.75 mm |
| Color | Red |
| Print time | 90 minutes |

| Print Parameters | Details |
| --- | --- |
| Nozzle diameter | 0.4 mm |
| Nozzle temperature | 220 °C |
| Plate temperature | 60 °C |
| Profile | Fine |
| Layer height | 0.2 mm |
| Infill | 15% |
| Infill pattern | Grid |

The temperatures used were the ones specified for the batch of PLA I was using.
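
For reference, the pre-heat step corresponds to a handful of standard Marlin commands at the top of the exported G-code. A sketch generating that preamble from the parameters above (the exact start script is printer-specific):

```python
# Sketch: the start-G-code pattern that implements the pre-heat step for the
# parameters above, using standard Marlin commands.
NOZZLE_C = 220  # PLA nozzle temperature from the table above
BED_C = 60      # PLA bed temperature from the table above

start_gcode = "\n".join([
    f"M140 S{BED_C}",     # start heating the bed (non-blocking)
    f"M104 S{NOZZLE_C}",  # start heating the nozzle (non-blocking)
    f"M190 S{BED_C}",     # wait until the bed reaches temperature
    f"M109 S{NOZZLE_C}",  # wait until the nozzle reaches temperature
    "G28",                # home all axes before printing
])
print(start_gcode)
```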

(Photos: Cura preview, print in progress, and the finished vertebra from several angles)

b. Observations & conclusion

In retrospect, I probably should have chosen a simpler geometry to print. The vertebra model had no flat surfaces and quite a few overhanging elements, which meant it needed a lot of supports. As the pictures above show, I wasn't able to remove all of the supports, as some were completely bound to the surface of the print. The surface was therefore rough in places, and the print as a whole lacked essential details.

While the issues encountered here are not critical, since the printed vertebra is not a component of a larger system, I want to avoid them in the future. It is essential to orient a model so that it requires the least amount of support, and choosing simpler geometries makes the printing process more straightforward. Finally, one should always verify beforehand which geometries can be printed effectively.

2. Scanning an object

Objects, spaces and topologies can be modelled as 3D meshes or point clouds by recording their surface geometry with cameras or specialised scanners.

|  | 3D Scanning | Photogrammetry |
| --- | --- | --- |
| Description | The shape of an object is captured by a 3D scanning device recording millions of data points to create a dense point cloud. | Making measurements from photographs: multiple overlapping photographs taken from different angles are reconstructed into a 3D model by computational algorithms. |
| Equipment | 3D scanning device + software (capture and post-processing) | Camera + post-processing software |
| Subject | Object / human body / other | Object / human body / other |
| Environment | Some scanners cannot work under natural light, others near metallic surroundings; the limitation is dictated by the technology used. | Anywhere with light. Issues with reflective, shiny, featureless and smooth surfaces, and repetitive patterns. |
| Accuracy | Very accurate: from 2 mm down to under 0.25 mm for high-end scanners. | Depends on several factors: camera resolution, software, angles, distance, number of pictures. |
| Visual quality | Texture limited by the camera used; scale is correct by default. | Better texture quality, but scaling is approximate. |
| Potential uses | Recreating parts and components with a high level of precision. | Modelling topologies. |

Note: photogrammetry requires around 100 pictures on average. Depending on the size of the object of interest, a single-camera setup may prove time-consuming.

a. Photogrammetry - COLMAP

There is a wide spectrum of free photogrammetry post-processing software, the best of which is generally considered to be Meshroom. However, Meshroom requires an NVIDIA GPU, which my Mac doesn't have. I therefore looked for alternatives and went with COLMAP, an open-source pipeline with both a graphical and a command-line interface.

I installed COLMAP by building it from source, following the instructions on its official site. Once installed, I launched the software and its GUI using the following commands:

```
colmap -h
colmap gui
```

(Screenshot: the COLMAP interface)

I took about a hundred pictures of a clay structure I found at the lab and launched a reconstruction project in COLMAP. The structure can be seen below.

(Photo: the clay structure)

The process took a little over an hour to complete. However, the outcome was not what I had hoped for: COLMAP placed all the estimated camera positions on one side of the structure, even though I had covered all 360 degrees around it. Perhaps the symmetrical geometry of the structure confused the software, leading to a partial reconstruction.

(Picture: the reconstruction result)

I restarted the process, this time following COLMAP's tutorial as I should have from the start. The resulting model was much better than what I had previously obtained, but a lot of information was still missing, perhaps due to a lack of pictures. Nevertheless, I focused on extracting as much information as possible from what I had.
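
For repeat runs, the same pipeline can be scripted instead of driven through the GUI. A sketch using COLMAP's automatic_reconstructor command, with placeholder paths:

```python
# Sketch: scripting COLMAP's automatic reconstruction pipeline instead of the
# GUI. 'automatic_reconstructor' bundles feature extraction, matching, and
# sparse/dense reconstruction into a single command.
import subprocess

workspace = "clay_structure"         # output folder (placeholder path)
images = "clay_structure/images"     # the ~100 photos of the object

subprocess.run(
    [
        "colmap", "automatic_reconstructor",
        "--workspace_path", workspace,
        "--image_path", images,
    ],
    check=True,  # raise an error if COLMAP exits unsuccessfully
)
```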

b. 3D Scanning on smartphones

Qlone App

The Qlone app uses a smartphone camera to scan relatively small objects placed on a special 'mat'. When an object is placed on the mat, as shown below, and viewed through the phone's camera within the app, an AR dome is generated around the object, indicating the angles the user still needs to cover. Upon completion, a textured 3D model is generated and can be viewed in 360 degrees from the app.

I tested Qlone by scanning the vertebra I had previously printed. The resulting mesh did a great job recreating the print's surface; however, it wasn't able to detect the hole at its centre. Nevertheless, the result was quite decent considering the ease of use and the speed at which objects can be scanned. The only downside is that exporting the mesh as a 3D CAD model requires the premium version of the app.

(Screenshot: the Qlone scan of the vertebra)

LiDAR Sensing - Capture & Scandy

Some phones are equipped with LiDAR (Light Detection and Ranging) sensors, generally on their front-facing cameras for face recognition. Applications such as Capture allow these sensors to be used for 3D scanning: laser measurements are combined with pictures taken by the camera to add texture to the scans. Capture is easy to use; its interface is similar to that of any smartphone camera application. I installed it on my iPhone 12 and used it to scan my face.

(Screenshot: the Capture interface)

I exported the scan as an .obj file. The recreated geometry is made up of a point cloud and may not be compatible with certain CAD tools. I converted it to a mesh using the Poisson reconstruction function in MeshLab, following this tutorial. The resulting mesh can be seen below. All in all, the app works fine; it does, however, pick up much of the surroundings, which adds noise to a scan or even aborts it when the app loses track of its position. I do not see how it can be useful to me right now; perhaps it would be if I had a LiDAR sensor on my phone's regular camera.
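
The same conversion can also be scripted. A sketch using Open3D's screened Poisson reconstruction, assuming the Capture export has first been saved as a PLY point cloud (file names are placeholders):

```python
# Sketch: point-cloud-to-mesh conversion scripted with Open3D instead of the
# MeshLab GUI. File names below are placeholders for the Capture export.
import open3d as o3d

pcd = o3d.io.read_point_cloud("face_scan.ply")  # placeholder file name

# Poisson reconstruction requires per-point normals
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30)
)

# Screened Poisson surface reconstruction; higher depth gives a finer mesh
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)
o3d.io.write_triangle_mesh("face_mesh.ply", mesh)
```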

(Screenshots: the MeshLab stages, from point cloud to reconstructed mesh)

c. Conclusion

All in all, this process is extremely hardware-dependent. Free and accessible solutions produce outcomes of decent quality, but higher levels of precision require substantial investment.

III. Files


Last update: June 22, 2021