6. Scanning and Printing

3D printer and Kinect

Aims

I have used a 3D printer before and I am aware of its limitations. I have also used the Kinect for real-time scanning, to capture a gesture and actuate something with it. So this week I tried to think a bit outside the box. For printing, I want to generate the g-code for the machine by designing the toolpath directly, instead of modeling a geometry and then using a slicer to make the path. Some geometries are easy to describe as lines but very hard and complex to define volumetrically. For the scanning part, I want to try photogrammetry. It is very interesting how much data can be extracted from an image.

3D Printing

For the 3D printing, despite my original goal, I decided to print a gyroid-helicoid geometry. The amazing thing about slicing this geometry is that, although it has an empty spot in the center, the overlapping layers mean it does not need any support for printing. I modeled the geometry in Grasshopper and prepared the file in the Cura software. Using Cura was very easy; the interface is very user friendly.

Process of 3D printing

In general, the printing process starts with a 3D model, which can be downloaded or modeled in 3D software. Once you have the mesh geometry of your object, you import it into a slicing software. This software analyzes the geometry and slices your object based on the machine, the material, and the layer thickness you are going to use. Other parameters you need to set are the skirt lines, which are a few offset lines around the initial layer, and the wall thickness and infill, where you specify the thickness of the border contour and the density of the material inside the volume of the object. From these settings the slicer generates g-code for the selected machine or the firmware the machine is using. Once you have the file, you can use the preset routines that most 3D printers have to prepare the machine for printing: homing the nozzle and leveling the bed if necessary. Then you can preheat the bed and the filament to make sure everything runs smoothly. Now you can send the file to print, either via USB/SD card or, if the printer is connected to the network, directly over the network. Make sure the first layers adhere well to the bed and the nozzle position is right. Let the machine run, but check it from time to time for unexpected errors. Typical errors are layer shifting caused by mechanical failure, or a drop in nozzle temperature, which reduces the material flow.
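The machine-preparation steps above map onto a short g-code preamble that slicers typically prepend before the toolpath. A minimal Python sketch that emits such a preamble using standard Marlin-style commands; the temperatures are illustrative values for PLA, not settings from this assignment:

```python
def start_gcode(bed_temp=60, nozzle_temp=200):
    # Typical Marlin-style start sequence: home, set temperatures,
    # wait until they are reached, then reset the extruder axis.
    lines = [
        "G28",                  # home all axes
        "G90",                  # absolute positioning
        f"M140 S{bed_temp}",    # set bed temperature (no wait)
        f"M104 S{nozzle_temp}", # set nozzle temperature (no wait)
        f"M190 S{bed_temp}",    # wait for bed to reach temperature
        f"M109 S{nozzle_temp}", # wait for nozzle to reach temperature
        "G92 E0",               # reset extruder position
    ]
    return "\n".join(lines)

print(start_gcode())
```

Setting both temperatures first and only then waiting (M190/M109) lets the bed and nozzle heat in parallel, which is what most slicer presets do.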

Group assignment

For the group assignment we had to test the limitations of the machine. I started by printing Neil's test files. Each of them challenges one aspect of the printing process. At the beginning I made a mistake and forgot to add support for the clearance test, so I ended up making a lot of bird houses. :))

Fig-1: The printing test for bridging distance
Fig-2: Bridge test final result
Fig-3: Bridge test dimension check
Fig-4: The failure of the spacing test (I forgot to add support)
Fig-5: Reprint of the spacing test with supports
Fig-6: Reprint of the spacing test without supports

Design

Fig01

Fig01- Parametric design of the gyroid

Fig02

Fig02- add the thickness to mesh and bake the geometry

Fig03

Fig03- export the file as STL

3D model


Slicing

Fig01

Fig01- Cura environment

Fig02

Fig02- add printer

Fig03

Fig03- import the file as STL

Fig04

Fig04- Adding the Print setting

Fig05

Fig05- final slicing

Fig06

Fig06- Checking the simulation


Printing

After generating the g-code we have to copy it onto the machine's SD card. Meanwhile we can set the printer to preheat the bed and nozzle so it is ready for printing. I should also mention that I am using PLA.

Fig01

Fig01- start heating up

Fig02

Fig02- I am using Anycubic which is a delta system printer

Fig03

Fig03- start printing the plate

Fig04

Fig04- Printing the early layers(set to be slower)

Fig05

Fig05- the last layers

Fig06

Fig06- Final object


For the next-level practice I designed a meta-ball geometry, and I would like to translate the curves directly into g-code for the 3D printer. //To be added
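As a first step toward that goal, here is a rough Python sketch of the idea: take a polyline of (x, y, z) points and emit G1 moves, with the extrusion value growing in proportion to the distance travelled. The `e_per_mm` flow constant is a hypothetical placeholder; a real value must be derived from layer height, line width, and filament diameter:

```python
import math

def polyline_to_gcode(points, feedrate=1200, e_per_mm=0.033):
    # e_per_mm is a made-up flow constant for illustration only.
    # Travel to the start point without extruding, then extrude along
    # each segment, accumulating the absolute E value.
    x, y, z = points[0]
    gcode = [f"G0 F{feedrate} X{x:.3f} Y{y:.3f} Z{z:.3f}"]
    e = 0.0
    for p0, p1 in zip(points, points[1:]):
        e += math.dist(p0, p1) * e_per_mm
        gcode.append(f"G1 X{p1[0]:.3f} Y{p1[1]:.3f} Z{p1[2]:.3f} E{e:.5f}")
    return "\n".join(gcode)

# One square perimeter at the first layer height:
square = [(0, 0, 0.2), (20, 0, 0.2), (20, 20, 0.2), (0, 20, 0.2), (0, 0, 0.2)]
print(polyline_to_gcode(square))
```

This skips everything a slicer normally adds (preamble, retraction, cooling), but it shows that a curve really is just a list of coordinates away from being printable.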


Class notes:

History

RepRap was a project to build a self-replicating machine and bring 3D printing to the community. Darwin was their first printer.

Additive vs. subtractive manufacturing:

As I understood it, in additive manufacturing such as 3D printing, instead of removing material from stock we deposit material where it is needed, whereas in subtractive manufacturing such as laser cutting we remove material. This means subtractive production wastes material along the way, a problem 3D printers do not have. Despite this advantage, additive fabrication does not reach the resolution of subtractive techniques, because we are always limited by the nozzle diameter.

Machine parameters:

  • support filament
  • extruder
  • build platform

Resolution vs. time:

Thinner layers need more time to print and are visually not so different!

Curves can get closer to the design by increasing the resolution.

This variable can be adjusted for each layer of the print.
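The trade-off can be put into numbers: at a fixed speed each layer takes roughly the same time, so print time scales with the layer count. A toy calculation, ignoring travel moves and speed changes:

```python
import math

def layer_count(object_height_mm, layer_height_mm):
    # Each layer takes roughly the same time at a fixed speed, so
    # halving the layer height roughly doubles the print time.
    return math.ceil(object_height_mm / layer_height_mm)

for lh in (0.3, 0.2, 0.1):
    print(f"{lh} mm layers -> {layer_count(50, lh)} layers")
```

For a 50 mm tall part, going from 0.2 mm to 0.1 mm layers doubles the layer count from 250 to 500, which is why the finer setting is usually reserved for visible or curved parts.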

Shell thickness and patterns:

By adjusting these factors we can add strength to our final piece.

0.02 - 0.04

There are chemicals we can use for post-production.

Gravity problem:

If the angle gets bigger than 45 degrees, the bottom layer cannot support the top layer and we need to add support. This angle varies slightly from machine to machine due to small variations in nozzles and materials.
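This rule of thumb can be checked per mesh face: a triangle is an overhang when its outward normal points close enough to straight down. A small sketch assuming the common 45-degree limit; a real slicer would run this over every face of the STL:

```python
import math

def needs_support(normal, max_overhang_deg=45.0):
    # A face needs support when its outward normal lies within
    # (90 - limit) degrees of straight down (0, 0, -1).
    nx, ny, nz = normal
    if nz >= 0:
        return False  # faces looking up or sideways are printable
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    angle_from_down = math.degrees(math.acos(-nz / length))
    return angle_from_down < (90.0 - max_overhang_deg)

print(needs_support((0, 0, -1)))  # flat ceiling: needs support
print(needs_support((1, 0, 0)))   # vertical wall: printable
print(needs_support((1, 0, -1)))  # exactly 45 degrees: treated as printable
```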

Some 3D-printing software optimizes the supports to reduce waste and give a cleaner outcome.

The orientation of the object on the 3D printer is very important, and it depends on the application of the object.

Normal vs. remote extruder

In the normal (direct-drive) setup the filament feeder sits on top of the nozzle, while in the remote (Bowden) setup the filament is fed to the nozzle through a tube. The difference is in speed and heat control.

For the remote type, which is the most common one now, the nozzle needs to be preheated before sending the file.

Bio-printer: extruding edible bio-material composed from food waste.


3D Scanning by photogrammetry

As I said at the top, my aim for 3D scanning is to use photogrammetry to scan some fruits. I hope I can embed this technique in the vending machine of my final project so it can recognize objects automatically. I know there are image-analysis tools for object detection, such as the OpenCV library; I think having a mesh of the objects could improve the system.

Photogrammetry process

Photogrammetry is a remote-sensing technique where you rebuild a 3D object from 2D images. It uses image features to understand the camera's movement relative to the object. Based on the position of the camera and the image plane, it locates each feature point in space. This triangulation happens between every pair of pictures, producing a point cloud that can be translated into a mesh with the help of voxels and the marching-cubes algorithm. The technique works best for aerial scanning rather than for objects; it is still possible to use it for objects, but you need feature-rich images with good lighting. Many other techniques based on this simple concept have been developed to fill its gaps: for instance, there are many image-analysis steps to prepare the image feed, and various mesh post-processing methods to convert the points into a smooth mesh that mimics the texture well.
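The triangulation step can be sketched for a single feature seen by two cameras: each camera contributes a ray through the matched feature, and the 3D point is taken at the closest approach of the two rays. A pure-Python toy, with no lens model or bundle adjustment:

```python
def closest_point_between_rays(o1, d1, o2, d2):
    # Each ray is origin + t * direction. Solve for the parameters that
    # minimize the distance between the rays, then return the midpoint
    # of the two closest points: the triangulated feature position.
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    w = [a - b for a, b in zip(o1, o2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b  # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = [o + s * dd for o, dd in zip(o1, d1)]
    p2 = [o + t * dd for o, dd in zip(o2, d2)]
    return [(x + y) / 2 for x, y in zip(p1, p2)]

# Two cameras at (0,0,0) and (2,0,0) both see a feature at (1,0,1):
print(closest_point_between_rays((0, 0, 0), (1, 0, 1), (2, 0, 0), (-1, 0, 1)))
```

With noisy real images the two rays never quite intersect, which is exactly why the midpoint (rather than an exact intersection) is used.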

Meshroom

I used Meshroom for this. The interface is really straightforward, and the good thing is that it is based on Python and executable from the command line, which will allow me to incorporate it into my system later.

Fig01

Fig01- Meshroom environment

As you can see, several windows open when you run the program. One is the interface, and you can follow the commands running in the background in the cmd window that opens with the software.
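Because it runs from the command line, a scan can be driven from a script. This sketch only builds the command: `meshroom_batch` and its `--input`/`--output` flags are the names used by recent Meshroom releases (older ones shipped `meshroom_photogrammetry` instead), so verify them against your install before running:

```python
import subprocess

def build_meshroom_command(image_dir, output_dir,
                           executable="meshroom_batch"):
    # Executable name and flags are assumptions based on recent
    # Meshroom releases; check your installed version.
    return [executable, "--input", image_dir, "--output", output_dir]

cmd = build_meshroom_command("./photos/apple", "./scans/apple")
print(" ".join(cmd))
# To actually run it (requires Meshroom on the PATH):
# subprocess.run(cmd, check=True)
```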

Tricks for taking photos

My experience with this software started with many failures, until I realized that the images I feed the software are quite important.

I also learned that there is metadata embedded in each image which helps with positioning the cameras.

Another important fact is that recording a video and extracting the frames will not help, since the frames lose all their metadata.

Fig02

Fig02- metadata lost in export of frames from video
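The missing-metadata problem is easy to check: JPEGs straight from a camera carry an EXIF APP1 segment, while frames exported from a video usually do not. A simplified stdlib-only check that only scans for the APP1/`Exif` marker (not a full JPEG parser):

```python
def has_exif(jpeg_bytes):
    # A JPEG starts with the SOI marker FF D8; EXIF metadata lives in an
    # APP1 segment (marker FF E1) whose payload begins with b"Exif\x00\x00".
    if not jpeg_bytes.startswith(b"\xff\xd8"):
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        seg_len = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + seg_len  # skip marker bytes plus the segment payload
    return False

# Synthetic examples: one with an EXIF APP1 segment, one without.
camera_jpeg = b"\xff\xd8\xff\xe1\x00\x10Exif\x00\x00" + b"\x00" * 10
video_frame = b"\xff\xd8\xff\xdb\x00\x04\x00\x00"
print(has_exif(camera_jpeg), has_exif(video_frame))
```

Running something like this over an image set before feeding it to Meshroom would catch stripped frames early, instead of after a failed reconstruction.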

Last but not least, it is a great technique for aerial scanning but not good for object scanning.

Fig03

Fig03- final result with apple

Fig04

Fig04- final result with apple

Final 3D model after post-production

In the post-production process I used Blender to remove the extended parts I did not need. I also reduced the number of faces a bit. Furthermore, I added some modifiers to make the overall look smoother.

Fig-1: adding modifier
Fig-2: remove unnecessary faces

Conclusion

I think it would be hard to use this tool for object recognition. It is also not very convenient to have this system in a vending machine, since it needs many images from different angles. I believe I will have a better chance with OpenCV for this task.

Files

You can download the files for this assignment from the following link.

Final Project Contribution

For my final project I used the FDM 3D-printing technique with PETG and PLA to make a stand for my device. I first tried PETG filament to get a translucent effect, but due to some tolerance on the bent part, the railing did not fit. So I had to re-print it with a new design and a couple of adjustments. The print itself was easy, since there were no overhangs or angles over 40 degrees.