Well, times are weird, but it's good to have something to take one's mind off all the terrible events in the outside world, and this project is one of those things. I am posting this short update because the ‘Gigapixel Project’ is now fully functional – some bugs remain, but I hope to iron those out during this period of ‘lock-down’.
So how have things changed since the last post I made about a month ago? Since then, I have completed the third (Y) axis and have the whole machine up and running. I am leaving some bells and whistles until later. For test purposes, for example, I have made the ‘Gigapixel’ menu somewhat simpler than it will eventually be, and I have set the ‘settle’ and ‘exposure’ durations to be short – just a few seconds – to speed up testing of changes to the software. I have also set the X and Y overlaps between the pictures in a ‘gigastack’ to 50%. I rediscovered that old chestnut, C++ rounding errors, and I am putting up with them for now. Any programmers reading this post will know the problem: you do floating-point arithmetic but need an integer answer rounded up, so you either add 0.5 to the float result before truncating (which actually rounds to the nearest integer rather than up) or use the ‘math.h’ library that I recently rediscovered (!), which among many other very useful math functions provides a round-up command, ceil(value). Putting this bug right will be easy. There are some other bugs, but all of them are pretty straightforward to fix. I have been excited to actually get the first pictures out of the machine, so they can wait for now.
I have added a menu that calculates the best Z step for an object given the magnification, f-number etc. Reassuringly, it returns exactly the same results as the tables on this subject on the Zerene Systems pages (https://zerenesystems.com/cms/stacker/docs/tables/macromicrodof). The ‘jog’ menu now reports the position in um from where one started jogging. This is useful because ‘jog’ can be used to determine the area one wants to include in a ‘gigastack’, as well as the front and back of an object for a Z stack. The workflow for a gigastack goes something like this:

1) Determine the size of the field of view, i.e. what you can see on the display screen of the camera.

2) Determine the size of the object you want to create a gigastack from in the X, Y and Z planes – ‘jog’ can be used for this, or in some circumstances you can just use a ruler or even plain guesstimate.

3) Get the best Z step for the object from the ‘calculator’.

Once you have provided this information, the programme will tell you how many X and Y fields it will take to cover the object with 50% overlap in each dimension, how many Z stacks it will make, and how many pictures there will be in each. Even a modest run of, say, 5 X moves and 5 Y moves with 20 slices in each Z stack means a lot of photos – 500 to be precise; 10 x 10 x 50 would be 5,000. WOW! You will also need to process the individual Z stacks to boil each one down to a single ‘all-in-focus’ picture. After this it is over to ICE (see previous post), or Affinity Photo for smaller panoramas, to stitch everything together. A 5X x 5Y image from my camera will result in a 0.5 GPix image, or 2 GPix in hi-res mode.
As I said above, ‘it’s a lot of photos’, so there is an issue with the capacity of the flash guns to provide all those flashes, even when set to, say, 1/64th normal power. To address this, I am in the process of 3D printing ‘faux’ batteries that will allow the flashes to be powered from an external power supply, so they can meet the demand placed upon them and also recharge more rapidly. A similar problem may exist for the camera. I am using the E-M1 Mk 2’s electronic shutter to extend battery life and reduce wear. For now, I have been testing the setup without the flashes, instead using just the LED illuminators built into the Meike 320 flash guns. This is highly sub-optimal in terms of picture quality, but good for tests because it eliminates the time the flashes would otherwise need to recharge.
After many false starts with rails running backwards and all kinds of other mishaps caused by programming errors, I got my first fully automated gigapixel image (actually much less than a GPix, but hey, I am testing things out!). The images are pretty poor because I am not using flash, the rails are moving on with only short settle times, and lots of other excuses…. My first successful image was 3X x 3Y by 3Z, so just 27 images in 9 stacks of 3. Despite the poor images, I have to say I was pretty pleased. It’s eerie watching the camera/object move in three dimensions as the images are acquired. I am pretty impressed with the precision of the rails, even though they currently have 8 mm leads (I will replace them with 1 mm-lead lead-screws when (if?) China opens for business again). The programme rewinds the rails to the original point at the lower right-hand corner when it is finished, and it seems to be spot on when it does this, even when working at higher magnifications.
Unimpressive though it may be, here is the first image from the setup – for me it’s a milestone, so it may look better to me than to anyone else! Indeed, I am sure it does. The Z stack depth, and the number of photos within each stack, were insufficient to provide a crisp image at every point, but hey, once again it’s a test. I will add some better photos as I generate them, and also a video.
1 April 2020: I have added a YouTube video to show how the software works. I will add a 2nd movie to show the setup creating a macro panorama.
Finally, given the terrible times the world is suffering – I hope everyone remains safe. “All men are brothers now” (and sisters) – let’s not forget it.