Making Stuff with your Mobile Phone.



PXL_20220528_004711686.jpg.2d633062d9b0c50f18d8f747652a9479.jpg

 

Maybe you have been in the market for a new phone over the last couple of years. If so, you may have noticed more and more features popping up in the marketing material, some gimmicky, some potentially useful. Two you may have noticed recently are 3D scanning and LIDAR.

 

Maybe you thought to yourself, "Are these useful things for a scale modeller? And if so, are they a decent stab at it, or just a half-hearted attempt intended to allow teenage girls to make 3D images of their soft toys and friends' heads?"

 

Maybe you have wondered how to use them and what you can do with them. I'll attempt here to give a very brief rundown on their potential usefulness, practicality and results. Maybe you will then think it sounds like too much faffing around, or perhaps you will think you'd like to give it a go. Either way, I'll have helped you out a bit, I reckon.

 

Firstly, the iPhone 12 and later iPads have a LIDAR camera. This sounds high tech, but what is it? Basically it is meant to enable you to 3D scan your surroundings and create walkthrough environments. I don't have an iPhone so I have no idea how good it is, though the reviews I've read of its abilities are a bit meh. It seems best at its other duty of improving low-light photographs, but if you have an iPhone maybe you would like to play around with it and see how you go.

 

Another increasingly hyped feature is 3D scanning, either as a specialised feature of the phone camera, or as an app you can install which relies on the phone's camera to take a decent enough set of images of a subject for the app to spit out a 3D model of that subject.

 

A variation of the above is to use the camera to create a dataset that is imported into a commercial photogrammetry programme on a Windows PC or Apple Mac, which likewise can produce a 3D model of the subject.

 

A quick Google search for photogrammetry apps will list a number, many free, which can be used to do this.

 

I've been dabbling in photogrammetry for the last three years, using a five-year-old high-end point-and-shoot camera, with satisfactory results, so in theory the latest camera phones should be capable of taking images detailed enough to be used in a photogrammetry package or app to create a physical 3D model.

 

But first, what is photogrammetry?

 

Photogrammetry, as far as modelmaking is concerned, is the process of taking an array of photos of a subject and importing those photos into an application which recreates the subject as a 3D mesh. This mesh can ultimately be transferred to a 3D printer and a model of the subject can be printed.

 

Sounds magic, and it is, within limits. Firstly, the list of things you can't scan successfully is longer than the list of things that you can. For instance, don't think you can rock up to the local steam museum with your mobile, snap some loco from all angles and create a model of it, because that will be beyond the limits.

 

Similarly, shiny objects like cars will be problematic because the software will not handle reflections. Large non-shiny items are more successful, but the datasets can be huge, and the processing power required will tax all but the most powerful PCs and probably take many, many hours.

 

Still, for small scenic items it is a viable process. My list of successfully modelled subjects is around 150 now, mainly gravestones and memorials, since masonry is non-shiny and headstones are mostly a size that makes photographing them by walking in a circle around them an easy, quick process.

 

Similarly, difficult-to-model details on buildings, like carved ornamentation or fancy corbels or brackets, which are tricky to produce accurately, especially if a number of uniform ones are required, can be done successfully.

 

So, in a bit more detail - what is the process?

 

Firstly, find a subject. An ideal first attempt is a headstone, preferably with some carving or fancy stonework to make it interesting, and one that is accessible from all sides, so not crammed up against other stones. In this example I will use this one, commemorating one James Donaldson, in Rookwood Cemetery, Sydney:

 

P1110533.JPG.49148a0138bef210e3523e65bd146878.JPG

 

 

 

Secondly, pick the right weather and sun conditions. Ideally, the less sunshine, and thus the fewer shadows to hide details, the better the result will be. Also, if the sun is low in the sky you will have less success, since some of the photos will be taken directly into the sun, which will probably create an image that cannot be used because the subject will most likely be a silhouette. So, avoid early morning or late afternoon. Basically, a cloudy day around midday is best.

 

Third, try to use a camera on which you can set the aperture to at least f/8. The larger the depth of field, the more detail will be in focus. Additionally, because the software relies on the background of the images to locate them in relation to each other, the clearer the background, the more success the application will have. If you are using a mobile phone, check its advanced settings to see if you can specify the aperture, or at the very least ensure that it isn't set to a 'portrait' mode or anything that renders a soft focus to the background or creates a vignette effect. You want as much of the picture to be in focus as possible.
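To see why a smaller aperture keeps more of the scene sharp, the standard hyperfocal-distance formula is a handy rule of thumb: focus at this distance and everything from half of it to infinity is acceptably sharp. This is a general photography formula, not something from any photogrammetry package, and the numbers below are purely illustrative:

```python
def hyperfocal_mm(focal_length_mm: float, f_number: float, coc_mm: float) -> float:
    """Hyperfocal distance: H = f^2 / (N * c) + f, where f is focal length,
    N the f-number (aperture) and c the circle of confusion."""
    return focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm

# Illustrative numbers only: a compact camera with a 10 mm lens at f/8
# and a 0.015 mm circle of confusion.
h = hyperfocal_mm(10.0, 8, 0.015)
print(f"hyperfocal distance ~ {h / 1000:.2f} m")
```

Doubling the f-number roughly halves the hyperfocal distance, which is why stopping down to f/8 or beyond pulls so much more of the background into focus.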

 

Time to get started! Firstly, position the subject in the viewfinder and take a good sharp picture, trying to fill the viewfinder with the subject. Then step to the left or right and take another, again trying to fill the viewfinder with the whole subject. Imagine the numbers on a clock and replicate those positions around the subject, taking a photo at each position until you have completed a circuit of the subject and have a dataset of between a dozen and twenty or so images. If the top of the subject was too high to get into the shots, hold the camera up, angle it down onto the top, and take more shots by walking around as before with the camera pointed down at the top of the subject. Review the photos to make sure they are sharp, that you didn't have your finger over the lens and so on, and retake any that have issues.

You should now have a dataset comprising a set of photographs, similar to this:

 


1809154182_Screenshot(463).png.db8090053d448907b4771ef33e548415.png

132235510_Screenshot(464).png.e11c1ae3e199b6cf84386f03e162fe7f.png
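If it helps to picture the "numbers on a clock" circuit, the viewpoints are just evenly spaced points on a circle around the subject. This little sketch (not part of any photogrammetry package, just to make the geometry concrete) prints the twelve ground positions for a circuit of a chosen radius:

```python
import math

def camera_positions(radius_m: float, n_positions: int = 12):
    """Evenly spaced camera positions around a subject at the origin,
    like the numbers on a clock face.  Returns (x, y) in metres."""
    positions = []
    for i in range(n_positions):
        angle = 2 * math.pi * i / n_positions
        positions.append((radius_m * math.cos(angle), radius_m * math.sin(angle)))
    return positions

# A 2 m circuit gives 12 viewpoints spaced roughly a metre apart.
for x, y in camera_positions(2.0):
    print(f"x={x:+.2f} m, y={y:+.2f} m")
```

Twelve stops is simply the clock analogy; more overlap between neighbouring shots only helps the matching step later.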

 

Now it's time to go home and import the data into a photogrammetry package.

 

I use 3Dflow Zephyr. It has a free version and a paid version. The free one has a couple of limitations which, to be frank, I have never hit: the maximum dataset is 50 photos, and there are a couple of restrictions on export formats, which for what we are doing here are not an issue. I have a paid licence for the Lite version, but I could have achieved everything I have done using the free version.

 

Other options are out there, including some that run on mobile phones and Apple devices. 

 

Whichever is chosen, the basic process is pretty much the same. Once the programme is opened and a new project started, the photos are imported and the programme will ask a few details about how accurate you want it to be, the fidelity of the data and so on. Most packages have default settings that you can select to simplify this process: you put in the subject type (human, building, landscape etc.) and the programme will set itself up accordingly.

 

The first thing it does is create a sparse point cloud: it analyses each photo, matches them to each other and creates a 'cloud' of data points floating in space, which will appear as a ghostly, sparse image of your subject. You may also see spurious points or clumps of points around the image, which may be other things in your photos - bushes, rocks etc. - that the programme has detected and recreated. This is the time to delete them, and packages have a variety of ways of doing this: automatically, based on their distance from the subject; by selecting a bounding box; or by deleting them manually.
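The automatic clean-up tools work on the same idea you would use by hand: points a long way from the main body of the cloud are probably rubbish. As a hedged illustration (a crude stand-in for what packages like Zephyr do, not their actual algorithm), here is the simplest version of that filter in plain Python:

```python
def remove_outliers(points, k=2.0):
    """Drop points whose distance from the cloud centroid exceeds
    k times the mean distance.  'points' is a list of (x, y, z) tuples."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    dists = [((p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2) ** 0.5
             for p in points]
    mean_d = sum(dists) / n
    return [p for p, d in zip(points, dists) if d <= k * mean_d]

# A tight cluster (the headstone) plus one stray point (a distant bush).
cloud = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0), (0.1, 0.1, 0), (50, 50, 50)]
print(remove_outliers(cloud))
```

Real packages use more robust statistics, but the principle - measure how far each point sits from its neighbours and cull the stragglers - is the same.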

 

The cloud should look like this, though here it is rather hard to see!

 

1791962579_Screenshot(465).png.39adbd01f9b0f5bf811b19e35a9615b3.png

 

 

Then a dense cloud is created by correlating the sparse cloud and creating data for the intermediate pixels. It should look like this.

163170255_Screenshot(466).png.ac2d6677e91652a9be2fa5821dd3d761.png

 

From there, the next step is the creation of the 3D mesh. Depending upon your photogrammetry package, each of these steps may be a separate process which you need to kick off manually after the previous one completes, or it may be an automatic workflow.

 

Depending on the size of your dataset and the power of your processor, these steps may take a few minutes each, or several hours. With some photogrammetry packages, having a 3D graphics card can be a big help, because the programme will use the card for additional processing power.

Anyway, the 3D mesh is created and draped over the points in the cloud. This mesh is basically a grid-like construction, and at this point it can be exported in a format such as .obj that can be imported into the slicing software of a 3D printer.
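That grid-like construction is less mysterious than it sounds: a .obj file is just a plain-text list of vertices and the triangles joining them. This sketch (a hypothetical minimal writer, not what any package actually emits, since real exports carry normals, textures and so on) shows the bare bones of the format:

```python
def write_obj(path, vertices, faces):
    """Write a minimal Wavefront .obj: one 'v' line per vertex and one
    'f' line per triangle.  OBJ face indices are 1-based."""
    with open(path, "w") as fh:
        for x, y, z in vertices:
            fh.write(f"v {x} {y} {z}\n")
        for a, b, c in faces:
            fh.write(f"f {a + 1} {b + 1} {c + 1}\n")

# The simplest possible 'mesh': one square built from two triangles.
vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
faces = [(0, 1, 2), (0, 2, 3)]
write_obj("square.obj", vertices, faces)
```

A photogrammetry export is exactly this, just with hundreds of thousands of lines, which is why slicers and editors like Meshmixer can all read it.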

 

349261164_Screenshot(467).png.9bb602aa1b07ec04212e1849435fac2f.png

 

 

The output .obj file will describe a mesh of some arbitrary size (maybe 4 mm tall, maybe 400 metres), since the actual dimensions of the subject are not recorded by the photogrammetry package.

 

Fortunately, the final model can be rescaled to the correct scale size using a free package such as Meshmixer. Meshmixer is a powerful tool which can also be used to edit out any lumps and bumps and fix any holes in the mesh. It can also hollow out large models to save resin.
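The rescaling itself is just one sum, whichever tool you do it in: work out how tall the model should be at your chosen scale, then divide by the arbitrary height the package gave the mesh. A sketch, with entirely hypothetical numbers:

```python
def rescale_factor(real_height_mm: float, scale_denominator: float,
                   mesh_height: float) -> float:
    """Factor to multiply the mesh by so it prints at the right scale size.
    'mesh_height' is the arbitrary height the photogrammetry package produced."""
    target_mm = real_height_mm / scale_denominator
    return target_mm / mesh_height

# Hypothetical example: a 1.8 m headstone in 1:76 (OO gauge), from a mesh
# that happens to be 412.0 units tall.
factor = rescale_factor(1800.0, 76, 412.0)
print(f"scale mesh by {factor:.5f} to get a {1800.0 / 76:.1f} mm tall model")
```

So the only real-world measurement you need to note at the graveyard is one overall dimension of the subject; everything else scales with it.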


1486935243_Screenshot(468).png.b097f6e39d64d477c4d7ef95be970645.png

 

The next step is off to the 3D printer slicing software, then hopefully a successful print. In my case I was lucky to have access to a Stratasys J55 full-colour printer; the results are pretty accurate in detail and colour, I reckon.

 

PXL_20220528_004449751.jpg.0286da993e562aac4e1734fd0f197905.jpg

 

 

 

So there it is. Another string to your modelling bow, perhaps. Maybe you have been putting off getting a 3D printer because you think learning CAD is a step too far; here is another way to produce files for it. Maybe you have goth kids that you want to spend more time with. Kids love taking photos of things with their phones, and if they are goths then even better: just join them in their favourite graveyard and the whole family can join in collecting datasets.

 

The process does take a bit of practice, particularly editing the point cloud and performing any mesh edits that might be needed, but it is learnable, and the results can be very satisfying.

 

PXL_20220528_003854638.jpg.d120934d88eb76b1b9403c1b326e6fab.jpg

PXL_20220528_003817193.jpg.55d8a8ce5a3da0da67b58964252a4ebc.jpg

PXL_20220528_004526849.jpg.bf2e6064a4287878fd95425171be1fb7.jpg

1636114888_PXL_20220528_004412731.MP(1).jpg.1073736dde7aad649340f33c594e1fc1.jpg

 

PXL_20220528_004728719.jpg.dd5aa53a54ddadd0f70594dc313dba96.jpg

 

 

 


 

 

 

 


 

Edited by monkeysarefun