
Hey trendsetters, is anyone using photogrammetry software such as Agisoft Photoscan, RealityCapture, 3DF Zephyr or similar?

 

I'm thinking they could be a more reasonably priced alternative to buying a 3D scanner.

 

Leaning towards either Photoscan or Zephyr, given they are a one-off cost compared to the monthly licence fee of RealityCapture, but RC does look fast and accurate.

 

So I was just wondering if anyone has experience of any of the above or similar, with regard to making scans of smallish items (and maybe people and pets, if they stay still...) for 3D modelling.

 

Not interested in cloud-based uploading solutions...

 


OK, so I've found you can get 3DF Zephyr for free. It's limited, but it should do all I need, and the system requirements are a lot lower than some of the others; the trade-off is that processing files will take a lot longer, but I can put up with that.

 

https://www.3dflow.net/3df-zephyr-free/

 

Time to play!


So I've been having a play around with 3DF Zephyr and actually got something 3D out of it at the end...

 

For anyone wondering what photogrammetry even is (which included me this time last week), it's basically the process of taking measurements from photographs. That sounds uninspiring, but what has my interest is that it means you can create a 3D mesh from a series of photographs of an object and then use that mesh to create a 3D model.
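Zephyr's internals aren't public, but as a rough illustration of what "taking measurements from photographs" means in practice, here is a minimal two-view sketch using OpenCV. The filenames and the focal-length guess are placeholders, and a real pipeline chains many views together and refines everything with bundle adjustment.

```python
# Toy two-view "structure from motion": match features between two
# overlapping photos, recover the relative camera pose, and triangulate
# the matches into a sparse 3D point cloud.
import cv2
import numpy as np

img1 = cv2.imread("shot_01.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder filenames
img2 = cv2.imread("shot_02.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(5000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

# Guessed intrinsics: focal length in pixels, principal point at the centre.
f, cx, cy = 3000.0, img1.shape[1] / 2, img1.shape[0] / 2
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]])

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Triangulate the inlier matches; each column of pts4d is a homogeneous point.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
good = mask.ravel() > 0
pts4d = cv2.triangulatePoints(P1, P2, pts1[good].T, pts2[good].T)
cloud = (pts4d[:3] / pts4d[3]).T  # N x 3 sparse cloud
print(f"{len(cloud)} sparse points from {good.sum()} inlier matches")
```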

 

For instance, take this frilled neck lizard garden ornament...

 

post-22541-0-48901500-1539604232_thumb.jpg

 

Place it on a pedestal outside in natural light and take a series of photographs of it from all angles, all at roughly the same distance, so that you have an overlapping web of photos. I used a reasonable point-and-shoot camera; I didn't need a DSLR or anything fancy.

 

The pictures are imported into the software and an initial coarse point cloud is computed. At the end of this process I could see a kind of frill-necked-lizard-shaped ghostly image in there, with the photos arranged around it in mid-air, replicating their individual positions.

 

post-22541-0-80008400-1539604272_thumb.jpg

 

Next, a finer, more detailed point cloud is computed, based on the initial one. This process took around 15 minutes on my mid-range, three-year-old laptop with the image settings left at their defaults.

 

post-22541-0-11326300-1539604293_thumb.jpg

 

This resulted in a more detailed image of the lizard. Next I selected all the points that fell outside the lizard and deleted them, before creating a 3D mesh and running the export process to create an .stl file suitable for importing into the Anycubic Photon slicing program. I am using the trial version of 3DF Zephyr Lite, which allows exporting in .stl format.
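If you would rather script the clean-up and meshing, here is a hedged sketch of the same crop-then-mesh-then-export flow using the open3d library, assuming the dense cloud has first been exported to a .ply file; the filename and box bounds are invented.

```python
# Crop stray points away, reconstruct a surface, and save an STL for the
# slicer -- the "delete outside points, then mesh" step done in script form.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("lizard_dense.ply")  # assumed dense-cloud export

# Keep only points inside a hand-tuned box around the subject (bounds here
# are invented; adjust to your cloud's units).
box = o3d.geometry.AxisAlignedBoundingBox(min_bound=(-0.2, -0.2, 0.0),
                                          max_bound=(0.2, 0.2, 0.5))
pcd = pcd.crop(box)

pcd.estimate_normals()  # Poisson reconstruction needs oriented normals
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

# Trim the low-support triangles Poisson invents in empty space.
d = np.asarray(densities)
mesh.remove_vertices_by_mask(d < np.quantile(d, 0.05))

mesh.compute_triangle_normals()  # STL export needs triangle normals
o3d.io.write_triangle_mesh("lizard.stl", mesh)
```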

 

The free version doesn't export .stl, but you can export an .obj file, which can be imported into any number of free 3D packages such as Meshmixer, FlashPrint or even the freebie 3D app that comes with Windows 10. All these packages let you subsequently export an .stl file.
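If you'd rather not install a GUI package just for the conversion, the trimesh Python library can do the same .obj-to-.stl step in a couple of lines (filenames assumed):

```python
# .obj in, .stl out, with the trimesh library.
import trimesh

trimesh.load("lizard.obj").export("lizard.stl")
```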

 

Because it takes no longer to print a whole bed full of objects than it does to print one, I filled the printer up with lizard ornaments in various scales: 1 inch to the foot for the big ones, but, just for fun, I also included some at 7mm to the foot and 4mm to the foot.
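For anyone wanting to script the rescaling rather than do it in the slicer, the arithmetic is simple enough; a sketch with trimesh, assuming the mesh is modelled at real-world size in millimetres:

```python
# "X mm (or inches) to the foot" means X on the model per 304.8 mm of real
# thing, so the scale factor is X / 304.8.
import trimesh

scales = {
    "1in_to_ft": 25.4 / 304.8,  # 1:12
    "7mm_to_ft": 7.0 / 304.8,   # ~1:43.5
    "4mm_to_ft": 4.0 / 304.8,   # ~1:76.2
}

full_size = trimesh.load("lizard.stl")  # assumed modelled at real-world size in mm
for name, factor in scales.items():
    scaled = full_size.copy()
    scaled.apply_scale(factor)
    scaled.export(f"lizard_{name}.stl")
```

As a sanity check, an ornament roughly 200mm tall multiplied by 4/304.8 comes out at about 2.6mm, which matches the 4mm-scale prints described below.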

 

post-22541-0-26917400-1539604855_thumb.jpg

 

Once processed by the slicing software, it's off to the printer.

 

An hour and a bit later, here they are... The 4mm-scale ones are just 2.6mm high, so they are a bit hard to make out (I put them on a giant base so I could find them in the alcohol bath the prints get after printing...). The software has settings for finer detail; I just used the defaults in the interests of getting a result, but I imagine you could get more detail into the final output by changing the settings to high detail.

 

post-22541-0-86969300-1539605446_thumb.jpg


  • 3 weeks later...

Does this photogrammetry software rely on having a flat ground reference plane?

The technique would be a boon if it could capture details attached to rolling stock such as axleboxes, buffers and hoses.

 

The Nim.

 

No, basically you just take lots of pics of the object from all angles and it recreates it floating in space. Check this out...

 

https://www.youtube.com/watch?v=kdtLtBt9dJY

 

Been hanging out in the local graveyard refining my photogrammetry skilz...

 

post-22541-0-25744500-1541238142_thumb.jpg

 

post-22541-0-37542800-1541237906_thumb.jpg

 

post-22541-0-19607600-1541237932_thumb.jpg

 

 

 

 


FWIW I've used 3DF Zephyr (free). It can use up to 50 photos, but unless you have the right graphics card it can take a while to render. It works best with a plain background, or so they say. I took 20 photos of our little summerhouse and it threw out 14 of them, but the resulting 3D file was pretty good for only six images. I've yet to find something that'll fit on a table top that is worthwhile for me to machine as a 3D model. Our dog doesn't stay still enough.

 

In the meantime, I've been downloading some of the images from Sketchfab and editing them using MeshLab. MeshLab is pretty complex and unfortunately has no 'undo', but I guess you get what you pay for. I've found the forum on Sketchfab good for answering the one or two queries I've had with regard to MeshLab, since I find the MeshLab tutorials exceedingly annoying - the terrible intro, and the speed at which topics are covered. (Do not bother with MeshLab's suggested email help.)


Yes, 3DF Zephyr uses NVIDIA graphics cards to aid processing. Luckily my laptop has an NVIDIA GeForce something-or-other in it, which has proved a little disappointing for recent high-end games but is useful here.

 

It's worth reading the photography tips in the Zephyr manual to get the most out of the photos. A well-composed photo from an artistic point of view isn't necessarily useful for photogrammetry. The two main points I've found make a big difference are to ensure a large depth of field to maximise what is in focus (i.e. no arty soft-focused backgrounds) - I've been using an f/8 aperture setting to keep a lot more detail in focus. Also, objects silhouetted against a bright sky are useless if the details are in shadow, so you need to bump up the exposure on the object in those situations.
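For the curious, the effect of f/8 can be put into numbers with the standard hyperfocal-distance formula; this is a rough calculator, and the circle-of-confusion value is an assumed figure for a small sensor:

```python
# Near and far limits of acceptable focus from the hyperfocal-distance
# formula. The 0.005 mm circle of confusion is an assumed small-sensor value.
def dof_mm(focal_mm, f_number, subject_mm, coc_mm=0.005):
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm  # hyperfocal distance
    near = h * subject_mm / (h + (subject_mm - focal_mm))
    far = h * subject_mm / (h - (subject_mm - focal_mm)) if subject_mm < h else float("inf")
    return near, far

# e.g. a 10 mm lens at f/8 focused 500 mm away:
near, far = dof_mm(10, 8, 500)
print(f"in focus from roughly {near:.0f} mm to {far:.0f} mm")  # ~420-620 mm
```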

 

The other thing is to ensure a large overlap between pictures - 70% or more is the recommended overlap. I found this can make a big difference when shooting the gravestones. I'd originally assumed that because the edges of the gravestones are not very detailed I only needed to take one side-on shot, but the software needs to be able to form a continuous image, and it could not place the sides in the right location because I hadn't given it enough data to work on; hence the back of the gravestone couldn't be modelled either, because it couldn't tie that into the model. Reshooting the sides from several different angles, ensuring a cohesive link from the front, around the side, to the back of the gravestone, fixed this.
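As a back-of-envelope planning aid (my own rough approximation, not anything from the manual), you can turn the 70% figure into a shot count per circuit of the object:

```python
# Treat each new frame as the camera swung round by fov * (1 - overlap)
# degrees; dividing that into 360 gives a rough shots-per-orbit figure.
import math

def shots_per_orbit(sensor_width_mm, focal_mm, overlap=0.7):
    fov = 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_mm)))
    return math.ceil(360 / (fov * (1 - overlap)))

# e.g. a ~6.2 mm wide small sensor with a 10 mm lens at 70% overlap:
print(shots_per_orbit(6.2, 10))  # about 35 photos for one full circuit
```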

 

Also, shiny objects don't work so you can't do your new car.

 

There is a menu option inside the software that you can use to evaluate the images. It rates each one with a score out of 5 or so, so you can see how worthwhile each pic is to use. Initially I'd get one or two photos rated between 2 and 3, and the vast majority rated under 1. Anything under 0.5 is apparently useless, so I was wasting a lot of pictures.
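Zephyr doesn't document how that score is computed, but if you want to cull obviously soft shots before importing, a common stand-in is the variance-of-Laplacian sharpness proxy; the folder name and cut-off below are assumptions:

```python
# Score every photo in a folder by variance of the Laplacian; soft or
# blurry frames come out low. Folder name and threshold are made up.
import cv2
from pathlib import Path

for path in sorted(Path("shoot").glob("*.jpg")):
    img = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    score = cv2.Laplacian(img, cv2.CV_64F).var()
    flag = "  <- probably too soft" if score < 100 else ""
    print(f"{path.name}: {score:.0f}{flag}")
```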

 

Finally, the Masquerade tool is very useful for situations like graveyards, where there are a lot of similar-coloured objects in the near background that can confuse the software when it's trying to work out the shape of the object. You draw a red stroke on the object you want modelled and a blue one on the background items you want it to ignore, and the software processes accordingly. This also speeds up processing, because it doesn't spend time on trees etc. in the vicinity.
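Masquerade itself is a GUI tool, but the underlying idea - seeding a foreground/background split and letting the software fill in the rest - is the same family as OpenCV's GrabCut; here is a rough sketch using a crude rectangle instead of strokes, with a hypothetical filename:

```python
# GrabCut: seed a foreground/background split from a rough rectangle,
# then keep only the (probable) foreground pixels as a mask image.
import cv2
import numpy as np

img = cv2.imread("grave_01.jpg")  # placeholder filename
mask = np.zeros(img.shape[:2], np.uint8)
rect = (50, 50, img.shape[1] - 100, img.shape[0] - 100)  # crude subject box

bgd = np.zeros((1, 65), np.float64)  # model buffers GrabCut requires
fgd = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

keep = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
cv2.imwrite("grave_01_mask.png", keep.astype(np.uint8))
```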


Following through on my hypothesis, I did some testing using pictures of one specific truck from a truck sales site. The original sample was 200 pictures, many of which were of tyres and transmission; I narrowed it down to about 49 pics of relevant parts and ran them through 3DF Zephyr. It accepted 31 of the images and produced a 3D model of... a small patch of the truck body and the back corner of the cab. I then tried fewer images, ones that concentrated on the cab, and the result was a model of the site's watermark rather than anything on the truck.

 

Conclusion: photogrammetry has lots of potential, but so far it doesn't produce good results from collections of found images.


Just confirming: you used downloaded pics from the internet? I can't see that working any more successfully than googling "Big Ben" or "Sydney Opera House", downloading all the images that came up and sticking them into the program. The software expects the images to be overlapping and ideally taken from the same distance - it relies greatly on a parallel axis through the pictures to do its stuff - so essentially you need to create a dataset of images specifically intended for the software.

 

I assume the pics would have been cropped etc. by the uploader - again, any post-processing (photoshopping, cropping etc.) is not recommended, because it reduces the amount of data or distorts it. The fact you mention a watermark indicates some kind of post-processing, so I assume the uploader would have also cropped, brightened and so on.

Also, I assume the pics were taken with someone's iPhone or similar - not really a suitable camera, as such, due to its small sensor size.

 

Here's a quick excerpt from the 3Dflow photo guide:

 

Pixel size must be higher than 2 µm; therefore it is strongly recommended to use a camera sensor bigger than 1/2.3″, even though smaller sensors may be used depending on the accuracy you want to reach (see the quick pixel-pitch check after this list);

Keep the subject in the centre of the frame;

Avoid direct light sources that may cast shadows and hide surface areas;

Avoid high ISO values, since the noise may set back the Structure from Motion phase in Zephyr;

Keep a high aperture value (f/8 – f/16) when possible, as it helps get a deep depth of field in the pictures;

Avoid blurry photos. Using good-quality cameras and having good illumination can help. Consider also using a tripod if necessary.

Have lots of overlap in each photo (70-80%). This is probably the most important tip. Shoot as many photos as you can. Each part of the scene you're shooting should appear in at least three separate views taken from different locations. This is a minimum requirement; taking more photos will likely improve the final results.

Limit the angles between photos. When moving around objects, try to keep the angle between each photo very low.

Shoot scenes with lots of detail and texture. The visual texture in the photos is what ties them together. 3DF Zephyr does not work well with uniform or highly repetitive textures, and does not work at all with specular or transparent objects. You can, however, mask out certain areas using 3DF Masquerade (bundled with 3DF Zephyr).

Don't try to adjust lens distortion, as that error is quite essential during the first phase of Structure from Motion.
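To put the 2µm pixel-size rule into numbers: pixel pitch is roughly sensor width divided by horizontal pixel count. The sensor widths below are nominal figures for the common formats.

```python
# Pixel pitch ~= sensor width / horizontal pixel count (result in um).
SENSOR_WIDTH_MM = {"1/2.3in": 6.17, "1in": 13.2, "APS-C": 23.5, "full-frame": 36.0}

def pixel_pitch_um(sensor, h_pixels):
    return SENSOR_WIDTH_MM[sensor] / h_pixels * 1000

print(f"{pixel_pitch_um('1/2.3in', 4608):.2f}")  # 16 MP compact: ~1.34 um, under the limit
print(f"{pixel_pitch_um('APS-C', 6000):.2f}")    # 24 MP DSLR: ~3.92 um, comfortably over
```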

 

I went to the local park and shot off 50 pics of one of the WW1 memorials (since it is Armistice Day...) and ran them through the software. This is a first attempt on a 'mechanical' subject and I didn't know if it would work on an object with many small protruding parts (up to now I've been practising on reasonably simple things like gravestones to hone my skills and understanding), but I think the initial result - although needing some fine-tuning and some additional photos of some of its parts, especially the area around the mounts - has come out reasonably well. I can at least recognise what it is!

 

post-22541-0-03072500-1541977007_thumb.jpg


The dataset I used was one person's images of one specific vehicle, not just random images. Many were taken from a similar distance and were overlapping, but admittedly the number that were not similar would be enough to throw off the result.

 

I'm not sure how much processing was actually done on these pictures, but the point is moot: they simply do not contain enough data. I knew there wasn't enough data for a great result; I hoped there might be enough for a fair result; I would have been willing to move forward with even the most basic result. One thing I did not realise until your reply is that the recommended sensor size puts it out of range for most non-DSLR/DSLM camera users, as pretty much everything else uses 1/2.3″ or smaller.

 

I made a hypothesis, tested it, observed the result and came to the conclusion that the hypothesis was incorrect. I was not trying to pass judgement on photogrammetry.


Did you use 3DF Masquerade to blank out the background in the memorial images? Do you have to apply it to every image, or can you somehow apply it to the result? How many of your 50 images did it actually use? I found, for the test I made earlier, that my phone gave good enough results - obviously it gives a large depth of field. I must try again, if I can get a suitable table-top subject. I suppose I could practise on anything, but if I really wanted to 'get into this' I'd invest in a suitable graphics card. The advantage of table-top, compared with your outdoor memorial, is that for areas that need more images you can easily add them to your collection. Makes it simpler for practising.


Quoting the reply above: "The dataset I used was one person's images of one specific vehicle... I made a hypothesis, tested it, observed the result and came to the conclusion that the hypothesis was incorrect."

 

I didn't take your post as a dismissal of the software or anything, honest! From your post it sounded like you'd downloaded images from a car sales website or similar, where a dealer had posted just general shots of the vehicle. Apologies for the misunderstanding.

 

As for sensor size, the bigger the better is the recommendation, but the camera I'm using is basically a point-and-shoot. It's a Panasonic FZ200, which I got from a grey importer for around $350 a couple of years ago.

 

Buying an Anycubic Photon resin printer and being blown away by the level of detail it can reproduce led me to look for detailed things to print. I started at the Scan the World site, which is a huge collection of 3D scans of statuary and antiquities from around the world. I noticed that many of these were acquired with various photogrammetry programs, hence my initial post here.

 

I'm not sure how worthwhile it will be for mechanical, highly accurate or large items. My intention is to see if it can reproduce the smaller detailed items found around the place that are a bit painful to model, such as dumpsters, wheelie bins, and the electricity and telecoms boxes you see dotted about, plus obviously headstones, statuary and so on. The initial results are encouraging enough to press on.

 

This morning, for instance, I spent 10 minutes photographing a council wheelie bin from all angles - I'll probably get a visit from the anti-terror squad if anyone spotted me, such is the world we live in these days...


Quoting the questions above: "Did you use 3DF Masquerade to blank out the background in the memorial images? ... How many of your 50 images did it actually use?"

 

I did have a play with Masquerade last week and diligently went through each photo in the set, masking out the background. I need to play around with it more, because that is a bit of a long process; hopefully it is clever enough to need only a couple of images masked.

 

I've been using the trial copy of Lite, which gives me two weeks of unlimited images, and I've found that if you shoot enough from all angles it can figure out the subject versus the background by itself. It probably takes longer to process, but I just kick it off before I go to bed and when I get up it's all done, like the shoemaker in the story with the elves. One thing I do do is that once the initial sparse cloud has been generated, I bring up the bounding box and resize it to fit as closely around the subject as possible; this eliminates all the extraneous background points during the next processing steps.

 

I'll end up buying the Lite licence when the trial is up. Firstly, it's amazing how quickly you can end up with 70 or 80 shots of even the simplest object and then have to go through culling them, while ensuring that you haven't deleted too many from one particular viewpoint; but also because I reckon the developers deserve to make a bit out of the use I'm making of it.


  • 1 year later...

I had a day in town on a course recently and thought I'd test out 3Dflow by using it on the Archibald Fountain in Sydney's Hyde Park. The fountain has several statues which, for a change here, are of a different subject from our usual bunch of discoverers, explorers and governors.

 

Many a young Sydneysider got his first glimpse of a nude boosie by checking out the statue of Diana, or giggled at the unimpressively endowed bloke killing the Minotaur - or wondered who the nude guy with the sheep was - presumably some New Zealander.

 

Anyway, it was a challenge to get clean photos of all the figures, as the fountain is a large polygon you have to walk around to get the shots, and at any one time at least one of the figures was obscured by the rest of the fountain. The spray from the fountain also helped obscure the figures. I concentrated on the three statues and the central figure, pacing around the edge of the fountain and taking a shot of each figure every couple of feet, while being hampered by joggers, school kids on excursions and selfie-taking tourists.

 

archr_air.jpg.58a80bcd964195e01df29b3145c3dc15.jpg

 

The software had no issues with the photos I did get, and I was surprised at the decent job it made of them. In fact the only fail I had was with the central figure, the one I assumed WOULD work; the fountain jets obscured his bum and legs too much, so the rear of the model was just a garbled mess.

 

The rest came out OK - in these pics they are still uncured, so they have a shininess to them that hides some of the detail.

 

realdiana1.jpg.dbfa95464522d10f5653336ce34b1528.jpg


 

 

 

diana.JPG.daf730072b87094cd409d85681a704d7.JPG

 

The bottom of her bow is just on the too-fine side to print successfully at this size, and there are two bumps - one on her shoulder and one on her head - that are a result of the software merging background details with her. It's easier to take these off the final physical model than to muck around editing the point cloud in 3DF Zephyr.

 

realMinatour.jpg.5a64103cd3e74ef9c583c4d0db0f96e3.jpg

Minotaur.JPG.91affba4542d58d93dc536cf8fce31bd.JPG

 

I forgot to place a support under his raised foot, so that didn't print, and his sword blade broke off as I was removing the print from the printer plate.

 

pan_real1.jpg.c2fb6cab612f6c8968a696cb1458108e.jpg

 

pan.JPG.242e21adbdca1c47e4d375633309db7c.JPG

 

Quite pleased with the detail here - the goat's antlers and the sheep's horns, for instance.

 

Anyway, it was just a test rather than a hope of having my own miniature Archibald Fountain, so I'm happy with the result. They are astronomically better than anything I could achieve by carving by hand or, given my artistic skills, modelling in, say, Blender.

 

Printed on the Photon using Monocure grey resin, which I have found doesn't produce as crisp a result as the Anycubic resins, so I'll probably reprint them in Anycubic resin and compare.

 

 

 

 

