It soon became apparent that the K40 was not going to be that useful unless several upgrades were made. First of all, the useless bed/clamp thingy supplied with the machine is far too small. It might be OK for tiny jobs, but the gantry of the machine is capable of travelling over a much greater area. To make use of the whole area covered by the movement of the gantry, the old bed has to be completely removed. This done, some people then prop their work on a lab jack. This enables the machine to function over approximately an A4-sized area and also allows objects of different thicknesses to be placed where one wants with respect to the focus of the laser.

My solution was rather different. I bought a piece of perforated steel plate that is pierced over its entire extent by 8mm holes. Using the aluminium from the bed supplied with the laser and using the steel plate as a template, I drilled an 8mm hole at each corner of the plate and used these to mount 4 pieces of 120mm-long threaded rod. These were locked in place with nuts on either side of the aluminium plate. I then placed some washers over these along with 4 springs. The steel plate was then slid over the threaded rods and a nut placed on each of them so that by screwing them up or down one can easily alter the height of the bed.

In my initial experiments, I found that unless the material being cut is raised from any surface beneath it, you get a lot of staining from the tar etc. that is emitted when it burns. Thus, I decided to use small N52 neodymium magnets to raise the material to the correct height and stand it off from the steel plate. More magnets on top, attracted to those beneath, hold the work-material flat. The whole setup is pretty versatile and the height adjustments can often be made by using 2, 3 or 4 of the magnets beneath the material. If that isn’t enough, the bed height can quickly be adjusted with a spanner to any height one wants.
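As a rough guide to how many magnets to stack under the work, a quick back-of-the-envelope calculation does the job. The figures below (a 50mm focal-plane height above the steel bed and 3mm-thick magnets) are purely illustrative assumptions – measure your own machine and magnets:

```python
# Rough helper for working out how many N52 magnets to stack under a piece
# of material so that its TOP surface sits at the laser's focal plane.
# Both constants are illustrative assumptions, not measured K40 values.

FOCAL_PLANE_HEIGHT = 50.0  # mm above the steel bed (assumed)
MAGNET_THICKNESS = 3.0     # mm per magnet (assumed)

def magnets_needed(material_thickness_mm: float) -> int:
    """Number of magnets so that the material's top face is in focus."""
    gap = FOCAL_PLANE_HEIGHT - material_thickness_mm
    if gap < 0:
        raise ValueError("Material too thick - lower the bed with the nuts instead")
    # Round to the nearest whole magnet; fine-tune with the threaded rods.
    return round(gap / MAGNET_THICKNESS)
```

Any residual error is then taken up by winding the four bed nuts up or down.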
To aid in setting the height, I cut some plywood gauges. The whole bed can easily be slid in and out of the machine. Care is required to align it (blue arrows on the Y axis) so that the laser head cannot hit it or the screwed rods. I will add some plywood rings to locate the bed on the chassis such that whenever it is removed, it can be dropped back in exactly the same location.
At present, my smoke extraction is, to say the least, ‘suboptimal’. While the new OEM fan that has replaced the old slide-in bathroom-extractor setup is far better, I have it attached to 80mm-diameter ducting. I wish I had made an adapter and used larger-diameter ducting.
A second modification is required in order to use the whole area covered by the movement of the gantry: the smoke outlet duct needs to be trimmed back by about 2cm. I did this using a Dremel cutting disc, without removing the duct from the chassis. The reason I cut it in place is that, like all the fastenings on the K40, none are captive, so in order to remove and refit anything you need access to both sides of the sheet metal – and I’ll say it again, the quality of the nuts, bolts and screws is truly awful!

You will find statements on the web that the laser power a little way beyond the focal point is too dispersed to do much damage. That is not true, as far as either your eyes or the big hole to be found in the bottom of the K40 chassis are concerned. I ran the cutter and found that I had set light to the bench under the machine! I think the big hole may be necessary for good extraction (I am not sure about this), so I placed a piece of aluminium sheet beneath the machine to protect the bench. I should probably emphasize here that a K40 should be kept under constant watch – never leave it running when you are not there and keep a fire extinguisher handy.

To further improve extraction, and so that the fan on the K40 has less work to do, I laser cut the parts for an in-line fan box to go near the exhaust point and installed an old 18W 12V 120mm fan I had lying around. Finally, I sealed up any leaks on the outlet side of the fans – mine leaked a lot from the outlet side of the K40 fan – with silicone.
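For anyone doubting that a beam past focus can still do damage, a crude estimate makes the point: power density only falls off with the square of the beam radius. The divergence angle and beam waist below are guesses for illustration, not K40 measurements:

```python
import math

# Why a 'defocused' CO2 beam can still start fires. The divergence
# half-angle and focal-spot radius below are assumed example values,
# not measured properties of a K40.

POWER_W = 40.0          # nominal tube power
HALF_ANGLE_RAD = 0.025  # assumed divergence past focus (~1.4 degrees)

def power_density(distance_mm: float, waist_mm: float = 0.1) -> float:
    """Approximate power density (W/mm^2) at a distance past the focal point."""
    radius = waist_mm + distance_mm * math.tan(HALF_ANGLE_RAD)
    return POWER_W / (math.pi * radius ** 2)
```

On these assumed figures, even 100mm past focus the density is still on the order of watts per square millimetre – easily enough to char a wooden bench.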
Next up for attention was the provision of an air assist. Blowing air over the point impacted by the laser creates a much better cut: it puts out the flame caused by the heat of the laser, greatly reduces charring, increases cutting efficiency and reduces the width of the kerf (the width of the cut line). My first attempt at an air assist consisted simply of a 3D-printed clamp holding a piece of 6mm aluminium tube bent so that air from my shop air-compressor was directed at the focal point of the laser. While this worked really well, it turned out to have one major disadvantage – because the air comes from one side, the air assist is more effective when the head is moving in the direction of the air flow than when it is moving against it. Though such a setup is recommended by a number of K40 aficionados, if you use it then the kerf will be different in the two axes. If you need parts that are as accurate as possible, the difference in the width of the cut line is significant – about 0.5mm. The way round this problem is to direct the air-flow in line with the laser beam. There are several designs for 3D-printed air-assists that do this available on Thingiverse. However, I chose to buy a new laser head from CloudRay. This had a couple of advantages; the laser head came with a new lens and mirror to replace the existing low-quality ones supplied with the K40. While I was at it, I decided to buy a set of 3 new gold-on-silicon mirrors. Thus, mirrors 1-3 were replaced and I had one as a spare.
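If you stick with a side-blown assist, the practical consequence is that your drawings need different kerf compensation in X and Y. A minimal sketch of the arithmetic, with illustrative kerf values (measure your own from test cuts):

```python
# Simple kerf compensation: to get a part of the desired finished size,
# the drawn dimension has to be larger by one full kerf width (half a
# kerf is lost on each side of the part). Kerf values are examples only.

def compensated_size(desired_mm: float, kerf_mm: float) -> float:
    """Dimension to draw so the finished part comes out at desired_mm."""
    return desired_mm + kerf_mm

# Coaxial air assist: one kerf value serves both axes.
x_coax = compensated_size(50.0, 0.2)
y_coax = compensated_size(50.0, 0.2)

# Side-blown assist: the kerfs can differ by ~0.5mm between axes,
# so a nominally square part needs two different compensations.
x_side = compensated_size(50.0, 0.2)
y_side = compensated_size(50.0, 0.7)
```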
The new lens and mirrors were a revelation. The power required to cut things fell by about 50%. The new air-assist head ensured that objects that were meant to be square came out square. A further revelation was the discovery of the ‘ACD method’ of aligning the laser (see https://www.youtube.com/watch?v=oSzFJGCPapc&ab_channel=MakerMonkeyIndustries on YouTube). Once you have understood this method, alignment becomes a doddle. It is THE K40 video to watch.
Next, I decided it would be a good idea to be able to keep an eye on the cooling water flow and temperature. To this end, I bought a cheap dual thermometer from AliExpress and also a flow switch. I simply taped the temperature senders to the inlet and outlet tubes for the laser and then encased them in pipe insulation. The thermometers are very accurate and you can easily measure the difference in the temperature of the cooling water on entry to and exit from the laser. I have wired the flow switch to a flashing LED/buzzer to warn if for any reason the cooling water flow should stop. Since my workshop is cold in winter, I also bought a cheap aquarium heater and placed this in the cooling water reservoir. For cooling, I used de-ionised water. You can find a lot of nonsense about how you have to use distilled water – deionised water is fine, either will do.
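A nice side-effect of having senders on both the inlet and outlet is that, together with a known flow rate, you can estimate how much heat the tube is dumping into the water. A simple sketch (the 2 L/min flow and 0.5°C rise below are just example figures, not measurements from my machine):

```python
# Estimate the heat the cooling water carries away from the laser tube,
# given the flow rate and the inlet/outlet temperature difference
# measured with the two taped-on thermometer senders.

WATER_HEAT_CAPACITY = 4186.0  # J/(kg*K), specific heat of water

def heat_removed_watts(flow_l_per_min: float, delta_t_c: float) -> float:
    """Heat carried away by the cooling water, in watts."""
    mass_flow_kg_s = flow_l_per_min / 60.0  # 1 litre of water ~ 1 kg
    return mass_flow_kg_s * WATER_HEAT_CAPACITY * delta_t_c
```

For example, 2 L/min with a 0.5°C rise works out at roughly 70W of heat removed.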
While on the subject of nonsense, there is a great deal of misinformation concerning the technicalities of using a K40. The first piece I would emphasize is the notion that it is safe to operate with the lid open. There are dozens of videos of people doing this on YouTube. Just don’t do it unless you are wearing eye-protection – think about the burns in my bench caused by the defocused laser and imagine the beam hitting a reflective surface and bouncing into your eye. Other nonsense: there are videos suggesting the laser head lens goes in the wrong way up – it goes in flat side down. While on the subject of the lens, there are warnings on the web that the lens is toxic. It is, but I wouldn’t worry too much: ZnSe is toxic, but the LD50 is 5g per kg. That is no reason not to treat it with care – if you must use your fingers, wash your hands afterwards, or better still wear gloves – but even if you ate the lens it would not kill you (don’t try that!).
Next, I decided that the potentiometer supplied with the machine doesn’t really provide much accuracy when trying to set the laser tube current. I swapped it out for a 5KOhm 10-turn pot with a counter (see pic). This allows for much greater precision and reliability in setting the laser current. Eventually, I will add a voltmeter to monitor the control voltage to the laser and give a more visible display of the power setting. Note that on a 10-turn pot, the ‘wiper’ is the terminal at the end of the potentiometer, not the one in the middle as it is on a normal pot. Further, once it is wired up, test that everything is OK with a tiny blip on the laser test button. I suggest that because I was supplied with a faulty pot and it caused the laser to run at maximum output.
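The gain from the 10-turn pot is easy to quantify: the same control-voltage span is spread over ten times the rotation. A small sketch, assuming a 0-5V control span and treating a ‘turn’ as a full 360° for simplicity (check what your PSU’s control input actually expects):

```python
# Resolution comparison between a single-turn and a 10-turn pot,
# assuming the K40 PSU control input spans 0-5 V (an assumption -
# check your own supply) and idealising a turn as 360 degrees.

V_MAX = 5.0  # assumed control-voltage span

def volts_per_degree(turns: float) -> float:
    """Control-voltage change per degree of knob rotation."""
    return V_MAX / (turns * 360.0)

single_turn = volts_per_degree(1)   # coarse: ~14 mV per degree
ten_turn = volts_per_degree(10)     # fine:  ~1.4 mV per degree
```

In other words, the 10-turn pot gives ten times finer control of the tube current for the same twist of the wrist.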
With the ‘upgrades’ above, the K40 will cut 3mm baltic ply with only 9mA of current at 10mm/s. The edges of the ply are not horribly burnt. The top surfaces are clean, as are the back ones. This is a major improvement over the results I obtained from the unmodified machine (14mA at 6mm/s). When I measured the width of the kerf, it was practically zero, and a 50mm square piece of ply was both the right size and truly square. I have one remaining thing to do, and that is to fix the broken Y-tension belt adjuster. For that, the whole gantry has to be removed, so my next post will be about what can be tuned when that is done and the addition of a new control panel incorporating temperature and control voltage monitors.
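A rough way to compare the before/after cutting figures is energy delivered per millimetre of cut, which scales as current divided by speed if the tube voltage is roughly constant (an assumption, but a reasonable one for a relative comparison):

```python
# Relative cutting energy per mm of travel, in arbitrary units.
# Assumes roughly constant tube voltage, so current/speed is a
# fair basis for a before/after comparison.

def relative_energy_per_mm(current_ma: float, speed_mm_s: float) -> float:
    return current_ma / speed_mm_s

before = relative_energy_per_mm(14, 6)   # unmodified machine
after = relative_energy_per_mm(9, 10)    # after lens/mirror/air-assist upgrades

saving = 1 - after / before  # fraction less energy needed per mm of cut
```

On these numbers the upgraded machine needs roughly 60% less energy per millimetre of cut – a figure that folds in both the lower current and the higher speed.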
This is not the usual sort of blog post that I make on this site. Normally, those posts are restricted to the use of electronics to enable some or other aspect of photography, or just to photography or electronics alone. In this case, the post is about my first impressions of a K40 laser cutter and my first attempts to ‘sort it’! I bought this 40W CO2 Laser Cutter from Vevor here in France. The term ‘K40’ is a generic name given to a range of cheap Chinese laser cutters. Since they cost about 350 Euros, they are very….very….very cheap for what they are. If the machines were made in the USA, UK etc. the price would probably have a zero on the end. However, the fact that they are cheap really does show! The manufacturing quality, quality control, and the optical and mechanical alignment on delivery range from fair to terrible. That said, the results of others show that with some tweaking, the machine can be made reasonably accurate and extremely effective. I should say right from the outset that this, Part 1 of what is likely to be a 3-part post, is based on my initial impressions of the machine and that, as yet, I have not even fired the laser because I don’t consider it safe (!). I won’t do that until I consider the machine electrically and optically safe. While I know nothing about laser cutters, I have fair experience with lasers as components of complex optical systems such as confocal laser scanning microscopes and other bits of scientific instrumentation.
Out of the box, it is pretty clear that some of the parts are nothing short of rubbish and that is where they should probably go (or better, into the recycle bin!). The machines are intended for cutting/engraving over only a near postcard-sized area and arrive with a clamping bed designed to hold small objects such as the heads of rubber stamps. However, with that (useless) bed removed, the cutting area can easily be expanded to ~A4 or larger. I was a bit surprised by some of the YouTube and other posts about this machine: some were misleading or just plain wrong, being based on older-specification machines and thus not particularly relevant to later models, and many showed a horrible disregard for laser safety and should be taken down for that reason alone. I bought a pair of laser safety goggles designed to block 10600nm CO2 lasers – the machine should never be operated with the laser tube compartment or the cutting area compartment open without the user wearing safety goggles…never. People say that the orange observation window is laser safe, and it probably is safe to observe the cutter in action through it without safety goggles, but why take the chance? You only get one pair of eyes and a 40W IR laser could take them out in an instant – you cannot see IR light, so your blink reflex will give you no protection and your eyelids would in any case instantly be burnt by the beam.
It is important to know that the construction of K40s varies in detail depending on the manufacturer, the exact ‘model’ and when it was made. Thus, much of the information you can find on-line is out-of-date – even that on what are considered ‘go to’ sites dedicated to the K40. It seems to me that the manufacturers have responded to some of the obvious criticisms of the machine with minor modifications to the frame and chassis. You really need to check your machine matches the description below (or on any web site from which you take advice) before trying to apply any fixes otherwise you are going to waste a lot of time and possibly damage things.
Mechanically the machine effectively consists of 3 parts that I will refer to as the enclosure or chassis, the X-Y frame and the gantry. In my nomenclature, the enclosure is the big pressed steel box the whole thing comes in. The X-Y frame is the rigid(ish) 4 sided pressed steel framework to which the stepping motors and the gantry are attached. The gantry is the arm that is powered up and down along the Y axis and that powers the laser head along the X axis.
Let’s start with electrical safety. I have no idea how electrically safe the power supply and control boards inside the RHS compartment of the chassis are. Probably they are OK…maybe. What I did want to find out right away was how well grounded this metal box is. Voltages inside it run up to 20,000V. That could mean instant death, so the box needs to be grounded so that the metalwork you can touch is at earth potential, just like you are. There is a lot of stuff on the web about the poor connection of the earth point on the back of the machine; though my machine’s chassis was definitely grounded, it was not via this point. The red socket there is actually insulated from the chassis: it has a plastic mount that prevents the metal of the connector touching the chassis, plus there is a thick layer of paint on both sides of the chassis. Clearly, for some reason the manufacturer did not want a connection to earth at this point on the chassis. However, the wire connected to the plastic earth socket is connected to the chassis and derives directly from the earth pin on the kettle-style socket on the left of the machine at the back. The web also has lots of articles saying you should not trust the white auxiliary mains sockets to the right of the kettle connections. I do not know about that, but on my machine they all have earth connections, again directly derived from the earth on the kettle-socket. I cannot see any reason why the ancillary red plastic earth socket should not be connected to the chassis directly where it penetrates it, but it isn’t. I wanted an earth point I could rely on, so I threw the plastic socket away, scraped the paint to bare metal on both sides of the chassis and created a ground point at that hole using the bolt – washer – solder connector – shake-proof washer – chassis metal – shake-proof washer – solder connector – washer – nut ‘sandwich’ shown in the diagram below.
Note that I don’t intend to connect anything to the earth point I created – I just wanted to know that there is a properly constructed connection to earth for the enclosure. I also checked that the socket the laser cutter will be plugged into had a reliable earth – you would be surprised how many sockets do not! Using a multimeter, I checked the resistance from the chassis to the earth pin of the cutter’s mains plug, and from the socket to the grounding stake outside my workshop. 5 Ohms is generally considered the maximum resistance for such a connection to earth – mine was 2 Ohms – the ground is dry at the moment – I will water it.
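Why the 5-Ohm rule of thumb matters: under a fault, the voltage the chassis floats to relative to true earth is roughly the fault current times the earth-path resistance. A toy calculation (the 10A fault current is a made-up example, not a K40 figure):

```python
# Rough chassis 'touch voltage' during a fault: the chassis rises to
# approximately I_fault * R_earth above true earth until the breaker
# or fuse clears the fault. Fault current below is a made-up example.

def touch_voltage(fault_current_a: float, earth_resistance_ohm: float) -> float:
    return fault_current_a * earth_resistance_ohm

# My measured 2-Ohm path vs the 5-Ohm rule of thumb, for a
# hypothetical 10 A fault:
v_measured = touch_voltage(10, 2)  # chassis at ~20 V above earth
v_limit = touch_voltage(10, 5)     # chassis at ~50 V above earth
```

The lower the earth-path resistance, the lower the chassis voltage you could be exposed to while the fault clears.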
Now that I was happy the machine was well-earthed, I switched it on, but only after making sure the laser would truly be OFF. The machine homed appropriately, the lights came on etc. I made sure the laser ON switch was at OFF. I consider that switch unsafe – the travel of the push button is tiny and you could easily mistake OFF for ON. Using a multimeter, I checked that the lid interlock safety switch went open circuit when the lid was not closed – it did. For now, that was as far as I went with the electrical connections. The electrical compartment on the RHS of the machine and that for the laser were both delivered with bolts to keep them shut. I have removed those bolts and will install two extra safety switches that will cut the power to the laser if either lid is lifted a millimetre or two.
The next thing was to check the alignment of the mechanics of the machine. Pretty clearly there was a significant problem: the gap between the gantry and the bottom bar of the frame was clearly much greater at one end than the other! It was about 5 to 10mm out. However, before sorting this I checked that the frame was square. It wasn’t far out, but it was not square. The construction of the machine is such that whether the frame is square or not depends both on the welding and bolting together of the folded pressed-steel frame and on how the frame has been bolted to the enclosure. There are four 10mm bolts connecting the frame to the chassis. If the frame is a little bit out of square (say 1mm or less), as mine was, it can be squared-up by releasing the front two bolts, applying some judicious leverage and then re-tightening the bolts. What’s happening here is that the bolt holes in the chassis and frame are a little over-size and that can be used to advantage. I guess the rear bolts could also be used in the same way. Note that there are no washers and the nuts are not captive – you will need 2 spanners. If the frame were more out of square than this, I guess you would have to take all 4 bolts out and remove the frame to get it square with some judicious leverage. So, with the frame squared-up in X and Y, I checked that the Z axis ran level in the plane of the frame. It did, but I suspect I will make further checks on this when the bed is back in the machine because ultimately one wants the laser to be focused on the bed over its entire extent.
With the frame reasonably square, it is possible to sort out the gantry to ensure it runs at right-angles to the frame. As I said above, mine definitely did not. There is an easy way to make sure the gantry runs true. I removed the two bolts connecting the black Y-axis stepper motor cover to the XY-frame. Under this is a motor with shafts that connect the LHS and RHS belts that drive the gantry on the Y axis. The motor is connected to the belt drive on the LHS via a long shaft. There is a crude rigid coupling connecting the motor shaft and the longer drive shaft. It has 8 rather crappy screws. I undid all of these (you might only have to undo 4), thus uncoupling the LHS and RHS belt drives. This done, you can rotate the Y-axis motor shaft until the gantry is square to the frame and re-lock the screws, doing your best to keep the shaft central within the coupling. After doing this, everything was within about 0.5mm of true.
Why was the gantry so far out-of-true? It is possible that this was because the belt on the RHS was loose and the drive pulley had skipped some of the notches on the belt after being bounced about in transit (but see below re: optical alignment). Either that, or it was just how it was put together (more likely – see below). It is relatively easy to change the two Y-belt tensions. There are two holes in the back of the machine that allow you to access the tensioning screws – a process made much easier if you put a small torch inside the machine to illuminate them and get your eye level with the holes so you can see to position a screwdriver. The RHS tensioner on my machine seemed a bit stuck, but I was able to get the belt reasonably tight – I figure you should be able to depress the belt about half the gap between one side and the other with a light finger press. You can tension the X belt by positioning the gantry such that you can access the tensioning screws (not the Allen-headed bolts that hold the end of the gantry to the slider!) via the big hole in the LHS of the power supply compartment.
With these preliminary mechanical adjustments done – and without them there was little hope of aligning the optics – I turned my attention to the light path. Just eyeballing it, it was pretty clear there was no way the light from mirror #2 on the gantry would ever hit mirror #3 in the laser head, because that head was clearly twisted through several degrees. Given how tightly it was screwed to its mount, I doubt that this had shifted in transit; rather, I think that perhaps an attempt had been made at the factory to get the laser beam to hit the laser head by turning it in its mount. This makes some sense given how out of square the gantry had been to the X-Y frame. Classically, to align the light path you use tape over the mirrors to see where the laser burns holes in it (the ‘burn’ technique). I had absolutely no wish to activate the laser while it was so clearly miles out of alignment. Instead, I designed and 3D-printed a laser alignment tool so that I could use ‘the reverse alignment method’ (my laser alignment tool is available to download on Thingiverse). This technique involves shining a red laser back along the optical path to the laser tube. In theory, if you set up the mirrors so the light lands back at the exit of the laser tube, then the CO2 laser beam should retrace that path. However, there is a flaw in that argument: the laser beam may not enter the 1st mirror at the same angle as the laser pointer alignment beam left it. As a result, you might end up having to rejig the laser tube’s positioning to get the angle right. In practice that is a pain and the ‘burn’ technique is more practical. However, this does not mean that the reverse alignment technique is, as some have claimed, ‘useless’ (‘a solution looking for a problem’). It will get you from a hopelessly out-of-line situation to one in which you are not too far off. My reverse alignment laser showed things were as I had expected: hopelessly out of line.
My first steps were to completely disassemble the laser head, deburr the mount, and clean up the turned parts, the 3rd mirror and the lens. I could see where the CO2 laser beam had struck the mirror, so clearly someone at the factory had at least tried to align it! With the laser head cleaned up, I cleaned the other two mirrors – they were all filthy – with isopropyl alcohol on a cotton wool bud. Next, using the reverse alignment tool, I aimed the laser pointer beam at mirror #2. It would not hit that mirror but always fell below it. I decided to ‘shim up’ the LHS laser head mount hex spacer (see picture). One turn on the hex spacers was enough to bring the laser to the centre of the mirror. A few tweaks on the adjustment screws for mirror 2 brought the beam onto mirror 3 and minor adjustments on mirror 1 brought the beam into the centre of the exit from the CO2 laser tube. Actually, centring doesn’t matter much. What is important is that the beam travels parallel to the X and Y axes and thus does not move when the laser head is moved from a near to a far point, and the same when the gantry is moved from close to far from mirror 1. There are some good YouTube videos on how to achieve this (do a quick search to find one you like), so all I will say here is that you start with masking tape on mirror mount #2 and get the reverse alignment spot steady on that mirror at all distances, then move the tape to mirror #1 and arrange for the same there. Now, the machine IS NOT aligned for the CO2 laser, but it probably isn’t far out and it is time to use the ‘burn’ technique. More of this in Part 2 of this blog entry. My conclusion at this point is that the optical components are all cheap and nasty and at some point better mirrors and a better lens will be required. They are ‘functional’ but no more than that. However, the laser tube looks good!
The next post will deal with firing up the laser, getting it properly aligned, sorting out the bed and some other things.
The STL files and code for the control box of the XYZ macro rail described in my blog are now available at https://www.thingiverse.com/thing:5146275 and https://github.com/pgmobbs/Gigapixel-MPFR. I would reiterate that I cannot help further. It is a fairly advanced project requiring some knowledge of C++ and the use of Arduinos. There is huge room for improvement of the code. In particular, it needs rebuilding using a stepper motor library rather than, as at present, with crude direct control of the EasyStepper boards. I would be very pleased to see someone rewrite the code. If someone does, please send me your changes! However, as it is, the code works and allows the features described in the 3 articles on it in my blog (above – use the search facility).
The little Olympus FL-LM3 flash gun is a work of genius. One of my favourite photographers, Robin Wong, has made an excellent YouTube video about this flash (see https://www.youtube.com/watch?v=tj4oDtUlpsI&ab_channel=RobinWong). I will only repeat here a few of the reasons he gives for the flash being so clever, because what I really want to do is to point out that, with a suitable extension cord, it can be used off camera. I don’t know who it was at Olympus who thought of it, but the addition of a fourth contact on the hot-shoe of Olympus cameras that could carry power to the flash was a brilliant idea. It enabled Olympus to move away from the pop-up flash found on many cameras and instead to provide a small but versatile flash gun that could be mounted on the hot-shoe; because the flash is powered by the camera’s internal battery, it could be made very small. Some neat engineering allowed the flash head to be rotated and inclined just like those found on much larger units, enabling bounce flash and much greater control over illumination. Further, the little FL-LM3 also works as a flash commander and can control off-camera flashes by radio. With a Guide Number of only 9, it isn’t the most powerful flash, but it is as versatile as any and it is the lightest and smallest TTL commander flash available – a lot smaller and lighter than anything comparable! Plus, it’s a real bargain – about £60 if you have to buy it as a separate piece of kit, but it came free with my Olympus EM1 Mk2.
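For anyone wondering whether a Guide Number of 9 is enough: the GN relates aperture to flash range (GN = f-number × distance in metres, at ISO 100), and at macro distances the numbers work out fine. A quick sketch:

```python
# Flash range from the Guide Number: GN = f_number * distance, so the
# furthest subject a full-power flash can correctly expose at ISO 100
# is GN / f_number (distance in metres).

GN = 9.0  # FL-LM3 guide number at ISO 100

def max_distance_m(f_number: float) -> float:
    """Furthest subject a full-power flash can correctly expose."""
    return GN / f_number
```

At f/8 – a typical macro working aperture – full power reaches about 1.1m, far more than a subject a few centimetres from the diffuser needs.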
I spend most of my time on macro photography. I have a variety of speedlights with various DIY diffusers. Diffusers are essential to avoid specular highlights in images taken with a flash. I moved from Nikon to Micro 4/3rds in order to get a camera I could lug around all day in the field while lying down, squatting, and climbing over gates and fences. Until recently, I made a lot of use of a Yongnuo 560 IV flash gun (cheap!) with a really big 3D-printed diffuser. I had to mount the flash on camera because I found it very difficult to take flash photos of the quality taken by, for example, Robin Wong. Robin holds the camera in one hand and the flash plus diffuser in the other. I find I need both hands on the camera, not least of all because I like back-button focus. Having the flash off-camera allows for greater versatility – dramatic lighting from one side etc. However, my hands just aren’t able to hold the camera and a big flash like the Yongnuo for any period of time. The alternative is to mount a flash on the camera and use both hands. Some people use the small pop-up flash found on other makes of camera and rig up an inclined diffuser on the lens that scatters light onto subjects near the front of the objective. I thought about doing that with the FL-LM3. However, it occurred to me that if one could find a suitable cord that simply replicated the contacts of the camera on a remote shoe, it should be possible to use the FL-LM3 just like Robin does or, because it is so light, to make a mount to move it nearer to the end of the lens. I found a suitable cord, the JJC FC-03. This cord has all the contacts found on the camera and thus gives full TTL control over an off-camera flash.
In the end, I decided to mount the flash on the lens using a simple 3D-printed articulated arm mounted on a clamp that fits onto the lens hood of my Olympus 60mm macro lens. I made two identical clamps – the rear one holds the flash, which I can set at any angle and at any position around the lens, and the second clamp holds a very simple 3D-printed diffuser. I plan to print a diffuser that clamps onto the flash itself, but I haven’t got round to it yet. Meantime, I learnt from someone on DPReview that should you be in a restaurant that has butter in little white plastic containers with a tear-off film on the top, the container is a push fit over the FL-LM3 flash head – a little diffuser for free! Eat the butter first!
Flash diffusers have become something of a DIY industry and making your own is fun. I have found that a good way to test materials is to try shining a bright torch through them. If you still get a bright spot, the diffusion is insufficient. If you get nice, even illumination larger than the field of view, you may be onto a good material. Fancy shapes can make sense, but I have found that though a flat sheet may waste some light, it can work almost as well. Double diffusers work particularly well: one on the front of the flash and the other a little further forward.
Below are some pictures of the set-up I am currently experimenting with – to get something approaching acceptable light I used two sheets of kitchen towel, one on a ‘natural’ PLA sheet held on the clamp on the lens and the other just taped over the flash itself. I am thinking about how to add a 2nd flash because, if I can find a way to do that, while it will not rival the £380 Olympus STF-8 dual macro flash set-up, it will have cost me nothing because I already have some units I could use as the 2nd flash. I have also printed mounts to enable the flash to be used on my Laowa Super-Macro and my Nikon 105mm Micro lenses. I think having the flash near the end of the lens is really useful if you want to do ‘super-macro’ photography.
As ever, just click the photo to see an enlarged version….
I don’t really do product reviews, but just occasionally something comes along that no one else seems to want to review – or at least, to review in such a way as to enable people to make a very specific decision on whether or not something will work with this lens or that. If you are reading this page, it is probably because you have an interest in one or other aspect of macro photography. Many macro photographers who operate in the field rather than in the studio have, like me, opted for a lightweight camera system – in many cases the Micro 4/3rds ecosystem, which is dominated by Panasonic and Olympus. Many of us came to this system from either Canon or Nikon. I came to it from the latter brand, where the ‘must have’ macro lens was the 105mm Micro Nikkor f2.8 G AF-S VR IF ED. This is a truly superb macro lens, but it’s a good deal heavier than the Olympus M.Zuiko Digital ED 60mm f2.8 Macro. If I had to choose which one I preferred based only upon ultimate sharpness, then I think the Nikon lens would win. At the moment, Olympus has its problems, but it has promised to come out with a new 100mm macro lens in its Pro series. The arrival date is unknown. The Pro lenses are fantastic; I own a couple of Pro zooms and they are brilliantly well-made and fantastically sharp. Indeed, when I need a bit more reach, I reach (!) for my 40-150mm Pro zoom because, at its closest focus, it does a pretty good job of being a near-macro lens and it provides the subject distance you need so as not to scare creatures away. However, for smaller subjects its reproduction ratio means that it can’t really hack it and it is not as sharp as a true macro prime.
Some time ago, it occurred to me that my old 105mm Nikkor lens would provide the equivalent of 210mm of reach on my Olympus EM1 Mk2 camera. However, the options for using it while retaining autofocus, the EXIF data and the in-lens vibration reduction system (VR) were pretty slim/expensive. Then along came the Viltrox NF-M1 adapter. It isn’t cheap but it is cheaper than any equivalent that I have been able to find, and it lets you use most of your old AF Nikon lenses (there is a compatibility table on the Viltrox website). Further, if you have the Olympus 1.4X teleconverter, and perhaps the 2.0X as well, they will fit within the NF-M1, allowing a further extension of the reach. Since there are no optics within the NF-M1 there is nothing to degrade the image quality. So for the price of the Viltrox, if you are a Micro 4/3rds camera owner and you have the 105mm Micro Nikkor, you can have a 210mm or longer macro lens with the option to further extend its reach.
There isn’t much to say about the technical capabilities of the Viltrox adapter itself that you could not simply get from their website (https://viltroxstore.com/). It’s a solidly made piece of kit with the opportunity to update its firmware via a USB port. It can be switched from manual focus to automatic. It does everything it claims to – it keeps all the EXIF data and displays everything that you would normally expect of a native Olympus lens on the camera’s LCD and EVF. It is reasonably snappy when used with single AF but the 105mm Nikkor hunts a bit sometimes (though my recollection is that it did that on my Nikon cameras!). My impression is that the AF is slowed down somewhat, but since I use manual focus so much that doesn’t bother me a lot. However, it simply doesn’t work at all with CAF. VR works but I cannot tell if the images are any better than when VR is switched off and you just use the EM1 body’s built-in 5-axis stabilization on the sensor. The latter is so good that I don’t think the VR matters. My EM1 Mk2’s focus peaking and magnification focusing aids also work with the adapter.
I first discovered that Olympus teleconverters could work with lenses other than those for which they were intended when someone on DPReview pointed out that the ‘nose’ of the converter would sit within a Pixco, and perhaps some other, 10mm extension rings. I tried that trick with the 60mm Olympus macro lens and the results were pretty impressive. When I bought the NF-M1 it looked from the pictures of it that it too would accommodate the nose of the Olympus teleconverters….and it does. I was not sure the combination would be usable because the NF-M1 is a bit over 25mm deep and thus moves the final element of the teleconverter a little way back from the first element of the objective lens. However, much to my surprise, it not only fits but gives remarkably sharp images, and the lens can focus from as close as you are likely to want to get (I haven’t precisely established just how close!) all the way out to infinity. Thus, if you want it, you can have a 294mm macro lens using the 1.4X converter or a 420mm with the 2X! The question is, how well does it work? I only have the 1.4X converter so I cannot speak for the performance of the longer combination. Clearly, you will lose some light but otherwise the answer is: pretty well. Below are some initial pictures taken using the NF-M1 with the 105mm Nikkor lens, with and without the teleconverter. I was stuck inside by the foggy weather so I am afraid the pictures are of everyday objects. The single fibres in the image of the wool are about 20 microns thick. I could definitely have got closer but I was hand-holding the camera – the lack of signs of shake in the images is down to the brevity of the flash. Doubtless greater resolution could be had by finding the sweet spot for the aperture. In order to upload these images I have had to export them from Affinity Photo as rather smaller jpegs than I would wish – the cropped photo is at highest resolution.
Clicking on the photos will allow you to see them at the resolution at which they were uploaded.
Bottom line, it is a well-made gadget that is a boon if you are a micro 4/3rds user with any suitable Nikon lenses. It works well with the Nikkor 105mm Micro and you can use it with the Olympus teleconverters. The only down-side is the price – about 200 Euros or £180 and the incongruity of using a heavy lens with a camera you probably chose because it was so small and light!
Is it really worth printing your own photographs or should you send them to a laboratory? It’s a difficult question because it depends on so many things. Not least of these questions are ‘why?’ and ‘how?’. Maybe ‘why?’ is the best place to start.
Before photography went digital, more-or-less everyone who had a camera had no choice but to have the film developed and printed. You could only do this yourself if you had room for a darkroom and had all the apparatus required, including an enlarger. Despite the cost of those things, many amateur photographers opted to develop their own films and print their own pictures. Without any doubt, while much of the skill in film-based photography involved spotting and composing a picture, and having a sense of the moment when to press the shutter button, there was also a lot of art involved in how one dealt with the negative and the print in the darkroom. While many professional photographers from ‘the time of film’ (ok, I know, it ain’t over yet!) employed a darkroom technician to do the magic for them, they pretty much all oversaw the process, and did so with a view to ensuring that their images came out just the way they wanted. Amateur photographers who were not fortunate enough to have a darkroom mostly just sent their films away and, a while later, received the prints in the mail. Either that, or they took their films to a chemist’s or photography shop that offered a similar service. Having both developed my own films and printed them, and also having used commercial darkrooms, I can attest to just how disappointing most of the results turned out to be if you sent the films away – B&W that was gray and a bit grayer, colours that were washed out. And though ideally one composes a picture in the viewfinder, many shots benefit from a crop, or from the ‘black magic’ of dodging and burning and other of the darkroom dark (?) arts. Bottom line? As a keen amateur who aspired to being a ‘photographer’, it was hard to get what you wanted from film unless you took the whole process into your own hands.
The number of photographs that are taken these days is huge. One estimate puts it at 1,500,000,000,000 a year. Of those 1.5 trillion photos, how many are ever seen by anyone other than the photographer? Indeed, how many are ever viewed again by the photographer themselves? Perhaps the photographer didn’t even ‘see’ the photo when they took it? So, where do these photos all go? Mostly, they never see the light of day again. However, many are posted on social media platforms or retained on a memory stick, SD card or in phone memory. I would argue that the ‘lifetime’, by which I mean the time between when the photo was taken and when it is last viewed by anyone, is extremely short. Indeed, the lifetime is probably mostly only slightly longer than that involved in pressing the shutter button. However, there remains an appetite for the printed image; billions of photos ARE printed every year and the appetite for the print appears to be growing. The figures I could find suggested that the Global Photo Printing Market is expected to grow from USD 13,125.4 million in 2017 to USD 26,113.0 million by 2023. I do not know how many of the customers for those prints will be satisfied by what they receive back from the printing services they use, but I suspect it will be a bit like back in the old days – though with the major advantage that digital photography offers a preview of the image; you couldn’t see what was on film until it was developed and printed.
While I suspect that the majority of the digital images that end up being printed go directly from the camera to an on-line printing service, those services, and most devices that take pictures, offer the facility to ‘process’ the image. That is to say, you can enhance the image with the equivalent of what used to be darkroom processes, such as correcting for under- or over-exposure, bringing up the highlights, and all manner of other digital jiggery-pokery. For a minority of photographs, then, the keen amateur photographer can still have a hand in the way a print will look. The keenest of us can use specialised software like Photoshop, Lightroom, Affinity etc. to really get to grips with the final appearance of the image. The one step that is generally outside the scope of most people is the printing process itself and, as I can witness, the print you get back from even the best of print services will not, and for technical reasons cannot, look like it did on a screen. Why not? Well, because light from a print is reflected from its surface while that on a screen is created by light transmitted through a surface, and because the colour-rendition of a screen, unless specially calibrated, will not match that of the printed photo.
Well, that has been a long-winded ramble but the ‘why’ should be clear – printing your own photos is the only way to take total control of how an image will appear. Printing a photo creates something that has a lifetime much longer than that of a digital photo viewed on a screen. I would argue that the history of photography will forever be written in prints that can be hung on a wall. For the individual photographer, the family album or the day-to-day photographic records of holidays will always best be kept on paper. I wonder just how many digital images will never be viewed again because it’s too much bother to recover them from an outdated storage medium?
There are some other aspects to ‘why?’. While you can hang a screen on a wall, picture frames are cheap. While I have put many pictures on-line, I would argue that pictures are intended to be seen and there is no better place for that than on a wall. My reasoning is that the digital world is something that most people skate through at a very superficial level, and that a picture on a wall has an impact that lasts longer both in terms of how long people view it and how long they remember it. Finally, I have been most impressed by the effects of the environment in which photos are viewed. Pictures displayed on the walls of a museum or gallery, or indeed in the pages of a family album, exist in a context that cannot be replicated by a screen. I think that to suggest otherwise is to argue that the Mona Lisa viewed in a book is the same as viewing it in the Louvre – it just ain’t!
So what of ‘how?’. Well, that’s fairly straightforward: buy a printer. I recently purchased a Canon 100S (A3+). It’s a dye-based printer, about the best of the bunch where A3+ printers are concerned, and like many other printers it is probably sold at less than it costs to build, on the expectation that you will end up buying the expensive OEM inks. Why dye inks? The only reasons to choose dye over pigment inks are that the colour rendition of dye inks, particularly of black, is better than that of pigment ink, and that dye inks are cheaper. The reason to buy a pigment ink printer is that the ink is more permanent – many tens of years against 20 or 30 years. I have been stunned by the quality of the prints produced by this printer. How do the costs compare with a print-house? There is no doubt that it is cheaper, but not hugely so. An A3 print works out at about £2 for the paper and about the same for the ink. By comparison, Whitewall, a reasonably professional print-house, would charge about £15 for the same image – not including postage charges, which are significant. In the end, however, it all comes down to the ‘why?’ – control over the process and the satisfaction of ending up with something that is truly all your own work, that you can hang on a wall and that has more than the most fleeting of an existence.
Well, as usual, my puzzler is puzzled about quite a few things. One of them is to do with photography: the way in which the mechanical shutter curtains coordinate with a flash, and how the same thing is achieved with an electronic shutter. You see, my Olympus OMD EM-1 Mark II has both electronic and mechanical shutters.
Most mechanical shutters consist of a front and a rear curtain. We don’t often think about how they operate but if you want to see, there is a high-speed video at this link (https://www.youtube.com/watch?v=ptfSW4eW25g). The link shows the shutter operation for a DSLR, a camera with a mirror, but the two-curtain system is much the same as that found in a mirror-less camera. In a mirror-less camera, the imaging chip is normally on and open to the light entering through the lens. The image the chip is producing is displayed on the LCD display and/or in the electronic view-finder (EVF). When you press the shutter button the front curtain of the shutter falls and stops light from entering; after this the sensor chip is switched off, and any information that may have been present in the pixels is erased. The sensor chip is then turned back on and the camera opens the front curtain, light pours in forming an image, and the exposure is terminated by the rear curtain of the shutter, which rises to block off the light. For relatively long exposures, longer than say 1/250th of a second, the two curtains move independently; the front curtain is fully open before the rear curtain terminates the exposure. For shorter exposure times the curtains move together, the front curtain moving up with the rear one, creating a slit the width of which determines just how long any row of pixels is exposed to the incoming light. The “shutter rate” is defined by the speed at which the shutter curtains can move and the “exposure time” by how long any row of pixels sees the light. A third factor is “shutter lag”, which can be thought of as the ‘thinking time’ of the camera – when you press the shutter button, nothing in the way of image formation actually occurs; rather, the camera’s electronics get themselves ready to take a picture, the front curtain falls, and the imaging chip is erased.
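Just to make the slit idea concrete, here is a little Python sketch of a two-curtain shutter. It is only a toy model, and the 1/320th-of-a-second curtain travel time is an assumed figure:

```python
def slit_width_fraction(exposure_s, curtain_travel_s):
    """Fraction of the sensor height exposed at any instant by a
    two-curtain focal-plane shutter (toy model).

    If the exposure is longer than the curtain travel time, the front
    curtain is fully open before the rear one starts moving: the whole
    frame (fraction 1.0) sees light at once and a flash can be synced.
    Otherwise the curtains travel together, forming a slit whose width
    is exposure_time / travel_time of the frame height.
    """
    return min(1.0, exposure_s / curtain_travel_s)

# Assumed curtain travel time of ~1/320th of a second.
travel = 1 / 320

print(slit_width_fraction(1 / 250, travel))   # whole frame open at once
print(slit_width_fraction(1 / 8000, travel))  # narrow slit, ~4% of the frame
```

At 1/8000th of a second the “shutter” is really just a narrow travelling slit, which is exactly why a brief flash would only light a band of the frame.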
Shutter lag is important because if you trigger a camera using a laser trigger or other device in order to image a fast-moving object, the trigger pulse is sent to the camera in a few microseconds or less, but the camera doesn’t get round to recording an image for the period of the shutter lag, which can be many milliseconds (a typical value for a DSLR is about 50ms). Mostly shutter lag is short enough that it doesn’t matter much. However, it does if you want to catch an image of a speeding bullet. This is because you can trigger the camera as quickly as you like but by the time the processes described above have occurred, the bullet has long gone. The solution to that problem is to work in the dark with the shutter open and then form the image with a very brief flash of light. It should be clear that there is a big difference between the ‘shutter rate’ (the speed at which the shutter curtains rise and fall – relatively slow processes) and the ‘exposure time’ (how long a row of pixels is exposed to the light). Some point and shoot cameras have such long shutter lag times that even slow-moving objects have beetled-off between pressing the shutter-release button and the camera getting round to taking the picture – very annoying!
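To put some numbers on the bullet problem, here is a quick Python sketch. The 300 m/s bullet speed and the microsecond-scale trigger delay are assumed figures; the 50ms lag is the typical DSLR value quoted above:

```python
def travel_during_lag(speed_m_s, lag_s):
    """Distance an object moves between the trigger pulse and the
    actual start of the exposure (i.e. during the shutter lag)."""
    return speed_m_s * lag_s

# A rifle bullet at an assumed ~300 m/s and a 50 ms shutter lag:
print(travel_during_lag(300, 0.050))   # ~15 m - the bullet is long gone

# Working in the dark with the shutter already open, the effective
# 'lag' is just the few-microsecond delay of the flash trigger:
print(travel_during_lag(300, 5e-6))    # ~1.5 mm - well within the frame
```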
When you want to take a flash photograph it is important that the whole of the sensor chip is exposed to the light at the same time and thus that neither of the shutter curtains is blocking the light. It is for that reason that the flash ‘sync speed’ (exposure time) is limited to 1/250th of a second or slower, because for shorter exposures the two shutter curtains have to travel together forming a slit, which would lead to only a band of pixels on the sensor chip being exposed to light during the brief flash from the strobe.
There are just a couple of other things to mention about flash photography and shutter curtains. Most cameras allow you to fire the flash either when the front curtain first opens, i.e. at the beginning of the exposure, or at its end when the rear curtain is about to rise. What difference does that make? Well, if you take, say, a 1-second exposure in dim light and have the flash go off at the beginning (front curtain sync), an object moving from left-to-right in the frame will be shown frozen on the left of the frame due to the light from the flash and then, by virtue of the ambient light, as a blur to the right as it continues to move across the frame. The opposite will be true if the flash is fired at the end of the exposure (rear curtain sync). Whether you want one or the other depends on how you want the image to look.
Try as I might, I was unable to find out exactly how a flash gun can be synced to an electronic shutter. Nearly all cheap point and shoot cameras have flashes that sync perfectly with their electronic shuttering. Though unknown to many of its owners, the Olympus OMD EM-1 Mk2 camera that I own can also sync a flash to its electronic shutter. That this is little known is probably because the option is buried deep within that camera’s somewhat complex menu system.
I decided I needed to understand how electronic shuttering works. Normally, a mirror-less camera is in ‘live view’ mode with the images it captures being sent to the LCD and/or EVF. When the shutter button is pressed, the imaging chip is erased and then, for a period commensurate with the exposure time, allowed to accumulate information about the light intensity, which is then read out and stored. That process rolls down the sensor chip so once again there is a difference between the “shutter rate” – the rate at which the process described above rolls down the sensor chip – and the exposure time – the length of time the pixels on the chip accumulate light.
The well-known “rolling shutter effect” is a direct result of the relatively slow shutter rates of both electronic and mechanical shutters….but which of them has the faster shutter rate? Here are some pictures I have taken to try to demonstrate the rolling shutter effect. Note they also illustrate the difference between exposure time and shutter rate – the exposure time here is 1/8000th of a second but the shutter rate is much slower (for various reasons, I think it takes about 1/320th of a second for the pair of curtains to move across the sensor in my camera). It’s pretty clear from the photos that the distortion caused by the shutter rate is much worse for the electronic shutter, which I think implies that it takes a lot longer to complete a readout of the whole sensor than it does for the mechanical shutter curtains to travel across the sensor chip. Interestingly, and for reasons I don’t quite understand, the image from the electronic shutter is crisper. I guess the more marked rolling shutter effect when using the electronic shutter is unsurprising because, if my suppositions are correct, then for the mechanical shutter all the pixel rows were cleared before the shutter curtains started to travel and are read out after they have closed. Thus, the distortion is solely the result of the time it takes the slit between the shutter curtains to expose the sensor. Whereas for the electronic shutter, even though the exposure times it is capable of are far shorter than for the mechanical shutter, the shutter rate is quite a lot slower because row after row of pixels has to be cleared, set to on and read out during the exposure.
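To put rough numbers on the skew, here is a little Python sketch. The 5000 pixels-per-second object speed and the 1/20th-of-a-second electronic readout time are assumed, illustrative figures; the 1/320th-of-a-second curtain travel is my own estimate mentioned above:

```python
def skew_pixels(object_speed_px_s, scan_time_s):
    """Horizontal skew (in pixels) of a moving object caused by the
    finite time the shutter takes to sweep the frame: the top and
    bottom rows are captured scan_time_s apart, so the object shifts
    by speed * scan_time between them (toy model)."""
    return object_speed_px_s * scan_time_s

# An object crossing the frame at an assumed 5000 px/s:
print(skew_pixels(5000, 1 / 320))  # ~16 px with the mechanical curtains
print(skew_pixels(5000, 1 / 20))   # 250 px with a (slower) electronic readout
```

The skew depends only on the shutter rate, not the exposure time, which is why a 1/8000th-of-a-second exposure can still show gross distortion.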
So while electronic shutters can manage much shorter exposure times than their mechanical equivalents they have the disadvantage due to their slower shutter rate, of distorting moving objects. The big advantages of the electronic shutter are that nothing moves – it is err umm entirely ‘electronic’ so its operation is silent, there is no mechanical judder and thus no camera wobble to blur the image, and there is no wear and tear as there would be were it mechanical. A further big disadvantage of electronic shutters is the relatively slow flash sync speed.
As mentioned above, it seems a relatively poorly known fact that you can use a xenon flash with the electronic shutter of the Olympus OMD EM-1 Mk2 camera. At first sight this may appear a bit of a paradox because most of the descriptions you find on-line of the operation of an electronic shutter imply the sensor is being cleared, exposed, and read out a line at a time. If only a single line were available to the light at any one time then a brief flash would only be caught by a few rows of pixels. I was puzzled and could not find an answer on-line as to exactly how an electronic shutter operates during brief exposures, so I set out to try to determine the answer by experiment. Since my camera can, at its maximum sync speed, sync a flash regardless of the power setting and thus the brevity of the light output, I reasoned that all the rows of pixels must be available to receive information when the flash fires. The exposure time in the picture below is 1/50th of a second (the maximum sync speed) and the flash duration is about 1/8000th of a second, but the frame is evenly illuminated – as it is even when the flash duration is reduced to about 1/25,000th of a second by turning the flash power down to a minimum. So, I believe that all the camera’s pixels must be cleared and then rapidly set to ‘on’ line-by-line, just as they would be behind the mechanical shutter prior to the exposure. After that, they are allowed to accumulate light for 1/50th of a second before being read out, also line-by-line, very quickly. So, my tentative conclusion is that the sync speed limit of 1/50th of a second results from the time it takes to set all the sensor’s pixels to ‘on’ and read them all out. If you try to take a picture with a shorter exposure, only some of the rows of pixels are available to form an image.
A flash photo taken using the electronic shutter of the EM-1 Mk2 – maximum flash sync speed is 1/50th of a second.
With higher shutter speeds, using the electronic shutter with a flash results in an image only being formed in the lower half of the frame.
But how does an electronic shutter operate when set to faster exposure times? I decided to try to discover the answer by using a camera trigger to fire my camera and then, after a delay to allow for the shutter lag, to trigger a flash set to a very low power so that the flash duration is extremely brief. In this setup, the pixels that are available to receive light should be obvious in the images formed. What the images show is that there is a band of pixels available to receive the light that travels across the surface of the chip during the exposure, and that the width of the band is directly proportional to the exposure time. Presumably, at the leading edge of the stripe, rows of pixels are being made available to receive light and at the lower edge, those pixels are being turned off and the image information read out. The images are similar to those you would expect from a mechanical shutter but I haven’t made a mistake – this is the Olympus’ electronic shutter! Given what appears to be a mode of operation not dissimilar to that of the moving slit found in a mechanical shutter, could the exaggerated rolling shutter effect associated with the electronic versus the mechanical shutter simply be down to the slower shutter rate? The maximum number of frames that the EM-1 can manage in high-speed silent mode is about 60 per second (~17 ms per frame), suggesting a maximum scan rate of at least that. The 20 ms (1/50th of a second) minimum exposure time to sync a flash in electronic shutter mode suggests a frame rate somewhat lower than that when in normal silent shutter mode. However, both rates are significantly slower than that achieved by the mechanical shutter – a frame in less than 4 ms (again, note exposure time and shutter rate are different things). So, perhaps it is indeed all down to shutter rate.
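My guess at how the travelling band works can be sketched in Python. This is a toy model with assumed numbers – 1000 rows and a 1/100th-of-a-second top-to-bottom scan – chosen only to show the principle, not measured from the camera:

```python
def rows_lit_by_flash(n_rows, exposure_s, scan_time_s, flash_at_s):
    """Toy model of a rolling electronic shutter: row i starts
    integrating at i/n_rows * scan_time_s and stays live for
    exposure_s. Returns the rows that are live when a
    (near-instantaneous) flash fires flash_at_s after row 0 opens."""
    lit = []
    for i in range(n_rows):
        start = i / n_rows * scan_time_s
        if start <= flash_at_s <= start + exposure_s:
            lit.append(i)
    return lit

# Assumed: 1000 rows, ~1/100 s scan. With the exposure (1/50 s) longer
# than the scan, there is a moment when every row is live at once, so
# the whole frame catches the flash:
print(len(rows_lit_by_flash(1000, 1 / 50, 1 / 100, 0.015)))

# At a 1/500 s exposure only a travelling band of rows is live at any
# instant, so the flash lights just a stripe of the frame:
print(len(rows_lit_by_flash(1000, 1 / 500, 1 / 100, 0.005)))
```

The model reproduces both observations: at the sync speed the flash lights every row, and at shorter exposures it lights a band whose height is proportional to the exposure time.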
If you know better what is going on PLEASE let me know because it is puzzling my puzzler – lots!
I am not a photographer. I just take photographs. I greatly admire photographers who capture moments that no one else can, and I wish I could do the same. However, I very much enjoy ‘the world of small things’ that my cameras and lenses allow me to explore. While I have little talent as a true photographer, I take some solace from having taught myself how to take tolerably good macro photos. In this post, the first of several on macro technique, I have dared to share a little of what I think I may have learnt.
When I first started in the world of ‘macro’, I found the techniques described on the web and in some books extremely confusing. Indeed, one author often flatly contradicted another. Below, I use those contradictions to explore the techniques that I now employ. However, there is a big question that everyone considering macro photography needs to answer before they begin: why do you want to take macro photographs? I use macro photography for several purposes. Firstly, to collect images of insects and plants so that I can later sit down to identify them in a field guide. Secondly, to see features that I find beautiful but that are invisible to my naked eye. Finally, I do it because it opens my eyes and reveals a hidden world that we spend most of our time ignoring – and I love it. My techniques are far from perfect, and some of my preferences, like traveling with minimal gear, serve purposes other than getting to optical perfection heaven! Others use techniques totally different to mine and they may take better shots than I do. Below are the things that work for me.
Contradiction no 1. You need a sophisticated camera – any camera, even your phone will do.
Well, this wasn’t really a question for me; I just love optical technology and have worked with the best of it all my life, so I have always had an SLR or, more recently, a fancy mirror-less camera. It is true that you can take excellent macro shots with any camera and, indeed, your phone. However, you will have much better chances of success and generate much more detailed pictures with a good interchangeable lens camera equipped with a macro lens. You can get excellent results with a standard lens and extension tubes or with an accessory lens, but you really cannot beat a macro lens for sharpness. I have three: an Olympus ED f2.8 60mm (for micro 4/3rds), an AF-S VR Micro-Nikkor 105mm f2.8G IF-ED (for Nikon SLRs), and a Laowa (Venus) 25mm f/2.8 2.5-5x. The latter is a specialist lens for extreme macro and the first two are fantastically sharp macro lenses for two different camera systems. Macro lenses also make excellent portrait lenses. For me, a key characteristic of a good macro camera is not how sophisticated its ‘bells and whistles’ are, but rather how it ‘handles’. Two aspects of handling are key: the system has to be light and it has to be easy to maneuver. It is true that you could take brilliant macro shots of plants or dead insects with a massive plate camera. However, spend an hour or two on a hot day hiking over rough ground – kneeling, squatting and lying down over and over again, and trying to maneuver your camera into position between grass stems – and you soon come to appreciate ‘small and light’. I love my Nikon and 105mm macro lens but that rig weighs nearly twice as much as my Olympus OM-D EM1 Mk2 with its comparatively tiny 60mm macro lens, and thus I nearly always use the latter. The Laowa is a very specialized lens, a kind of zoom microscope, for use only on a stand of some kind – more of that in another post.
I would add that I also find my Olympus 40-150mm f/2.8 Pro zoom useful because, though it is not strictly a ‘macro’ lens (i.e. it doesn’t go to a 1:1 reproduction ratio), it is good for ~1:5, is unbelievably sharp for a zoom, and is great for larger objects.
Contradiction no 2. You need a tripod – you can take good shots handheld.
If it’s flighty insects you are interested in then I find a tripod next to useless, and even a monopod is way too clumsy and hard to maneuver. There are occasions where a tripod can be useful, for example when ‘focus-stacking’ or taking HDR photos, but mostly by the time you have it set up, the thing you are trying to photograph will have long gone! A tripod can be of use when photographing plants or insects that remain motionless but even then, a camera with a good image stabilization system can make it a lot easier to clamber over a barbed wire fence! If you want the perfect shot of, say, an orchid, you can protect it from the wind with a ‘light tent’ and light it with the sun plus accessory flashes – you will get brilliant studio-quality shots. If that’s what you want, then that is what you should do, though 90% of the time you will do nearly as well with a handheld camera and daylight. On the whole, I believe in traveling with your camera and taking as little kit as possible. I will not mention it elsewhere but something that seriously transformed my experience of macro photography was to ditch the neck-strap that came with my camera. I swapped it for a Peak Design ‘Slide Lite’ strap. This allows the camera to hang by my side, around my neck if I want (I don’t!), and lets me slide it rapidly into place in front of my eyes. Quick adjusters allow me to change instantly the height at which the camera hangs. Straps like the ones from Peak seem so expensive (£55!) – grin and bear it – they make a big difference.
At some point you will want a tripod. Get one that is light but as solid as possible, and that has a ‘beam arm’ that can be swung out at right angles to the tripod. It needs to be possible to set it really low to the ground as well as to use it more conventionally. Tripods are a very personal choice. Try to borrow some or watch other photographers wrestling with their octopuses! Mine seldom leaves home but I do use it. Monopods? Yes, maybe. I find I can use a hiking stick as a support – you hold the camera against the stick and slide the camera down to where you want it. You can buy monopods that are also hiking sticks. Bean-bags – why not? But I never use them.
In summary, if I had to say what is key to a stable camera in most of the situations in which I take macro shots, it’s a good image stabilization system and good camera-holding technique (see below). The 5-axis IBIS system in the Olympus is fantastic and, being part of the camera body, it works with all lenses.
Contradiction no 3. You can only get really sharp pictures with a flash – you don’t need one.
Both are true. I mostly take three kinds of pictures, all handheld: without a flash, with a flash used as a fill-in light source, and with a flash where I try to eliminate all natural light. On a bright day in the South of France with the ASA set to 200 you can take ‘sharp enough’ pictures without any kind of lighting other than the sun. Indeed, most of the pictures I take are taken that way. Pictures taken by natural light look, err umm, ‘natural’. Anything they lack in ultimate sharpness is often only visible to ‘pixel peepers’ and you have the delight of traveling light and being instantly ready for a shot. I say traveling light because a naked speed-light mounted directly on the top of the camera is not the way to take good macro shots. To look even vaguely natural, the light from a flash needs to be diffused. The little pull-out or clip-on diffusers for a flash are handy but not very effective at their job. You need something much bigger – a proper macro soft-box. There are lots of suggestions for home-built soft-boxes on the web. You can make a really effective one with cardboard, duct tape, and tracing- or tissue paper. A soft-box will provide all the light you need but, because the light comes from the diffuser rather than directly from the flash head, harsh shadows and specular highlights are reduced. The result is a softer, more natural-looking photograph. With a big soft-box it matters less where the flash is mounted and, though off camera would probably be better than on, the rig becomes harder to handle, so mine stays firmly on the hot-shoe. Ring flashes, dual macro flashes etc. all have their place but frankly a manual flash with a soft-box is a cheap and effective solution in most situations. Fill-in flash lets you operate at the optimal f number and exposure time (see below) as well as filling in the shadows. As a result, the photos taken can be sharper than when using natural light alone.
There are occasions when you may want to eliminate nearly all natural light and depend solely on the light from a flash-tube. What follows is only relevant to that situation. If you want truly sharp photos then there is nothing to beat a flash gun with the intensity turned down. The duration of the light flash from a speed-light is related to the intensity of the flash. Elsewhere on this blog, you will find an article where I measure the duration of a flash from a speed-light. At full power my Yongnuo 560 III speed-lights have a flash duration of about 1/300th of a second. That is really slow. Why? Because when you have a tiny object in front of your macro lens even a slow movement translates into a fast movement in terms of the number of pixels the object moves across on the camera sensor. So for example, if an object contains features such as the hairs on an insect body that have dimensions of say 10um, then at a 1:1 magnification a movement at 1mm/s smears the image by about 3.3um during a full-power flash – and a more realistic 10mm/s of hand tremor or breeze smears it by 33um. The pixel pitch on my Olympus camera is about 3.3um so instead of occupying ~3 pixels on the sensor, a feature on the scale above now appears on 10 of them – it has been badly blurred. Even 1mm/s is only 3.6m/hour, which is nothing compared to wind-speeds, the movements insects make or indeed, the trembling hands of the would-be macro photographer! This is not to say that you cannot get a decent photograph with a 1/300th exposure but if you turn the flash intensity down to say 1/32nd of full power, the flash lasts only 1/13000th of a second, a duration that will freeze almost all motion. While we have a way of freezing motion, this will not make up for a lack of focus – that is a separate matter (see below).
Someone reading this might ask why I started out by saying you may want to eliminate all natural light. When you take a flash photograph the sync speed is generally 1/250th of a second or slower. So, if there is a lot of natural light reaching the sensor, this will produce an image that is superimposed on the one produced by the flash; the image from the flash may be sharp but the one formed by natural light may, for the reasons I describe above, be in a different position on the sensor, or otherwise blurred. Ways round the problem of blur caused by having both natural light and some light from a flash are to use a tripod, to minimize and put up with the blur, or to eliminate as much of one light source as you can. The object of the kind of macro photography I am discussing here is to make the light from the flash as predominant as possible. Paradoxically, when that is the object, areas of shade then become the best places to take pictures. If you can’t work in the shade you can try to use your shadow to block out the light from the sun or choose an overcast day.
So how do you use a flash to overwhelm natural light? The key is to set up the camera so that natural light forms little or nothing in the way of an image. This can be done by selecting a low ASA – I usually use something between 64 and 200 ASA. Adding a neutral density filter (ND10 is good) will in most situations reduce the natural light entering the camera at 1/250th of a second to very low levels. The flash employed needs to be of sufficient luminous intensity (high guide number) to generate an image when operating at relatively low power levels, say 1/8th to 1/64th of maximum power. This will ensure that the flashes are of very short duration. For the flash to be bright enough, you need to be relatively close to the subject, say less than 0.5m. The quality of the photo will be best when a large soft-box is employed. Pictures taken this way can look dramatic and perhaps unnatural but will, if the focus is good, be critically sharp. By using a flash diffuser and shooting in RAW so that you can bring up the background in post-processing, it is possible to produce tolerably natural-looking and sharp photos. It also provides, in combination with a suitable trigger (see past blog posts), a way of taking pictures of insects in flight. The photographers out there who enjoy ‘high-speed photography’ to take pictures of projectiles, smashing pumpkins etc. will recognize the similarity between the techniques they employ and those described here. One thing worth adding here is that while motion-induced blur may detract from the technical excellence of a photo, it may well add to its artistic impact.
Entire books could be written about macro flash photography using several speed-lights and a ‘macro-studio’ but that is a subject for another post.
Contradiction 3 – You need to use a very small aperture to ensure you get things in focus – you need to use a large aperture to ensure things are sharp. (Focus!).
Really, we need to think here about the whole subject of ‘getting things in focus’. Nothing is more important in macro photography than focus; even if you are aiming for an artistic result, you need to know what will, and will not, be in focus. The reason that focus is so important is that when you are close up to an object the depth of field (dof) becomes very small. My 60mm macro lens focused on an object 30cm away and set at f/8.0 has a dof of 4.6mm. Move in to the closest it will focus (19cm) and open it up to its maximum aperture (f/2.8) and the dof is 0.6mm! Pick up a short pencil and point it at a nearby edge and watch the point of it moving about – natural tremor makes focusing at high magnifications while hand-holding a camera very challenging. It doesn’t matter if you employ the techniques above to freeze motion; hand tremor and the sheer physical difficulty of setting the lens to a perfect focus mean that while you may have a photograph with some part of the object of interest in focus, it may not be the ‘right’ part. Anyway, what is the right part? Let’s answer that question first. Obviously, it’s the bit you wanted to be in focus, which for an insect will usually be the compound eye. Other things may be a little fuzzy but it’s the eye that draws the eye! In many cases, no matter how stopped down the lens may be, the available dof will mean only certain features can be in focus in a single image (see below re: stacking). So, without using a tripod, how do you ensure a good focus? You could stop the lens down to say f/22. For the closest focus of my 60mm macro lens this will increase the dof to 4.3mm but that will cause diffraction blur and, worse still, unless the predominant light is from a flash, the increased exposure time necessary will mean camera shake may increase to unacceptable levels. What to do?
Different photographers have different ways of holding a camera. My way, if I can, is to choose kneeling, standing or lying down over squatting. Stability of your body and arms is everything. I keep my arms tucked into my sides and hold my breath from the moment I start to focus until after I have made the exposures, my right hand on the camera grip and my left on the focus barrel. My camera is always set to single AF plus manual with ‘back button focus’. ‘Back button focus’ means setting the camera so that a button on the back of the camera takes over from the shutter release button where focusing is concerned while the shutter button just, err, releases the shutter! Back button focus sounds trivial but actually it is key to getting sharp photos – try it. I use a single small central focus point for the AF. Usually, particularly if I have set the focus limiter on the lens to a suitable range, the focus will lock – but not always. If it doesn’t, I use the focus barrel to get into focus. It’s good to have a macro lens with a long ‘throw’, that is to say one where a large movement of the focusing ring causes only a small change in the focal point. I find the whole process of focusing is much easier if you have a focus peaking setting on your camera. Focus peaking highlights, in a colour of your choice, the pixels on the edges of objects that are in focus. It isn’t a ‘must have’ but it is extremely useful. I always have my camera set to use its electronic shutter so there is no noise or shutter shudder and minimal ‘shutter lag’ (the time it takes after pressing the release button for the camera to actually take the picture). My camera is set to manual exposure so generally, I will have already set the aperture and shutter speed as I want them using a nearby plant or bush to get things right. If the subject is very difficult, by which I mean the wind is blowing or it is moving around, I may set the camera to rattle off quite a few frames a second.
Sometimes I will also bracket the exposure in either speed or aperture. When taking either single or multiple frames, I use an almost imperceptible rocking of my body to achieve perfect focus before hitting the shutter button. Using your camera as a machine gun seems an attractive way to go until you have to sort through the pictures to find one, if any, that is sharp.
To go back to the contradiction of a large versus a small aperture, the truth is that these things are trade-offs between the dof available at any particular aperture, the loss of resolution that results from diffraction when light passes through a narrow aperture, and ‘artistic’ considerations such as the ‘bokeh’ (the blur of objects in the background). The optical testing of lenses using charts dominates magazine reviews but in fact the most important aspect of a lens is how it performs in the real world. I generally seek to use an aperture around the best for minimal diffraction and best optical performance (f/5.6 to f/8.0 for the Oly 60mm) but shift from this if needs must. I generally try to work with shutter speeds higher than 1/300th but sometimes, again, needs must. Similarly, I’ll work at low ASAs (200) to minimise noise but if trying to shoot a Hummingbird Hawk-moth, I’ll go to much higher ASAs to get say 1/8000th of a second exposures to freeze its motion. Bottom line: everything is a trade-off but it is as well to understand the principles involved.
Contradiction 4. You can’t have everything in focus at once. (Stacking).
Well, that isn’t altogether true. By using a focus slide, or the ability of some cameras to take a series of photographs at different planes of focus, it is possible to have much more in focus than would be possible in a single shot. You do not need a fancy camera to do focus stacking. It can be done with a simple mechanical device that allows the camera to be moved forward in steps in the same orientation, or by focusing the lens at different points in different exposures, or by using the ability of some cameras to do the equivalent of these maneuvers using the focusing mechanisms internal to the camera’s lens. Focus stacking, no matter how it is done, takes time and thus is not suitable for subjects that may move. The process generally requires the camera to be fixed on a tripod or other support. It is a powerful technique and is capable of producing some great pictures. It requires post-processing software that can remove the out-of-focus information, then align and combine the images taken. I may cover this in another post.
Contradiction 5. You will get the best macro shots on a butterfly farm or with home-reared specimens – it’s better in the field. The same might be said of botanical specimens?
Both are true. You can be pretty sure that a day out at a butterfly farm will provide some fabulous opportunities for shots of exotic butterflies from faraway countries – so why not? If you can get some butterfly pupae and have them emerge at home, that too will provide some fantastic opportunities for macro shots that you would be very unlikely to be able to take in ‘the wild’. However, neither of those two options allows for the fun of ‘hunting’ with a camera. I am going to boast that I am quite a good hunter but…. I used to be hopeless. Now, I know my prey. I know when and where different butterflies and other insects will appear, where they are likely to settle and be least disturbed by my presence, how to move to reduce the possibility that they will fly off, and when to wait and when to move. It’s good to have read about the habits of the things you seek to photograph and which habitats they prefer – see for example Thomas and Lewington’s wonderful book, ‘The Butterflies of Britain and Ireland’. You soon learn not to let your shadow fall over a butterfly – it will fly away – not to come between a butterfly and the sun, and to move slowly and wear subdued colours. When you look at a butterfly, you will see a dark spot in the centre of its eye. This is the ‘pseudopupil’ and it is the region of the eye that is directed towards the point from which the eye is being observed. When you can see the pseudopupil the butterfly can see you, and the bigger it appears, the better its ability to see you. Start by taking shots at a distance then move slowly towards an insect of interest, taking more shots as you go. Always watch the histogram! Nothing will make a poorly exposed picture great, though shooting in RAW will offer more post-processing opportunities to get things right.
A PS…some other notes on the camera settings I use. You’ll find a lot of macro photographers who say you should use Aperture Priority (AP) – after all, what could be more important than what is and isn’t in focus? To my mind AP is great if the object isn’t going to move much, but if it is, then Shutter Priority comes into its own because close up, even the tiniest movement of the object or your hand will cause significant blur; often, setting a fast shutter speed is the key to a tack-sharp photo. Full manual is the way to go if you want complete control but beware what I call the ‘time to flight’ factor – how long is that creature going to stay there? Quite often it’s a very short time indeed and for that, having everything set up so you can press the shutter button really fast is key. As a result, auto-ISO and setting a minimum shutter speed of say 1/350th or 1/500th with Single or Continuous AF plus manual will get you a shot you would otherwise miss – insects don’t wait to have their pictures taken!
Well, times are weird but it’s good to have something to take one’s mind off all the terrible events in the outside world and this project is one of those things. I am posting this short update because the ‘Gigapixel Project’ is now fully functional – some bugs remain but I hope to be able to iron those out during this period of ‘lock-down’.
So how have things changed since the last post I made about a month ago? Since then, I have completed the 3rd (Y) axis and have the whole machine up and running. I am leaving some bells and whistles until later. For example, for test purposes I have made the ‘Gigapixel’ menu somewhat simpler than it will be in the end. I have set the durations for the ‘settle’ and ‘exposure’ times to be short – just a few seconds. The reason for this is to speed up the time it takes to test changes to the software. I have also set the X and Y overlaps between the pictures in a ‘gigastack’ to be 50%. I rediscovered that old chestnut, C++ rounding errors, and I am putting up with them for now. Any programmers among those reading this post will know the problem – you do float arithmetic but you need an integer answer rounded up, so you add 0.5 to the float result before truncating (which actually rounds to the nearest integer, not up)…..or use the ‘math.h’ library that I recently rediscovered (!), which provides, among many other very useful math functions, a true round-up command (ceil(value)). Putting this bug right will be easy. There are some others but all of them are pretty straightforward to fix. I have been excited to actually get the first pictures out of the machine so they can wait for now.
I have added a menu that calculates the best Z step for an object given the magnification, f number etc. It returns exactly the same result as the tables on this subject on the Zerene Systems pages (https://zerenesystems.com/cms/stacker/docs/tables/macromicrodof), which is reassuring. The ‘jog menu’ now returns the position in um from where one started jogging. This is useful because one can use ‘jog’ to determine the area one wants to include in a ‘gigastack’ as well as the front and back of an object for a Z stack. The workflow for a gigastack goes something like this: 1) determine the size of the field of view, i.e. what you can see on the display screen of the camera; 2) determine the size of the object you want to create a gigastack from in the X, Y and Z planes – ‘jog’ can be used for this, or in some circumstances you can just use a ruler or even plain guesstimate things; 3) from the ‘calculator’ get the best Z step for the object. After you have provided this information the programme will tell you how many X and Y fields it will take to cover the object with 50% overlap in each dimension, how many Z stacks it will make, and how many pictures there will be in each of these. Obviously, even if you are demanding just, say, 5 X moves and 5 Y moves with 20 slices in each Z stack, you are going to have a lot of photos – 500 to be precise. And 10 x 10 x 50 = 5,000. WOW! Also, you will need to process the individual Z stacks to boil them down to single ‘all-in-focus’ pictures. After this it is over to ICE (see previous post), or Affinity Photo for smaller panoramas, to stitch everything together. A 5X x 5Y image from my camera will result in a 0.5GPix result or 2GPix in hi-res mode.
As I said above, ‘it’s a lot of photos’, so there is an issue to do with the capacity of the flash guns to provide all the flashes even if set to say 1/64th of normal power. To address this problem, I am in the process of 3D printing ‘faux’ batteries that will allow the flashes to be powered from an external power supply so they can meet the demand placed upon them and also recharge more rapidly. A similar problem may exist for the camera. I am using the EM1 Mk 2’s electronic shutter to give an extended battery time and reduce wear. For now, I have been testing the setup without the flashes – instead, just using the LED illuminators built into the Meike 320 flash guns. This is highly sub-optimal in terms of the quality of the pictures but good for tests because it eliminates the time that the flashes would otherwise need to recharge.
After many false starts with rails running backwards and all kinds of other things that resulted from programming errors, I got my first fully automated gigapixel image (actually much less than a GPix but hey, I am testing things out!). The images are pretty poor because I am not using a flash, the rails are moving on with short settle times, and lots of other excuses…. My first successful image was 3X x 3Y by 3Z, so just 27 images in 9 stacks of 3. Despite the poor images, I have to say I was pretty pleased. It’s eerie watching the camera/object move in three dimensions as the images are acquired. I am pretty impressed with the precision of the rails even though they currently have 8mm leads (I will replace them with 1mm lead lead-screws when (if?) China opens for business again). The programme rewinds the rails to the original point at the lower right-hand corner when it is finished and seems to be spot on when it does this, even when working at higher magnifications.
Unimpressive though it may be, here is the first image from the setup – for me it’s a milestone so it may look better to me than to anyone else! Indeed, I am sure it does. The Z stack depth, and the number of photos within each stack, were insufficient to provide a crisp image at every point but hey, once again it’s a test. I will add some better photos as I generate them and also a video.
1 April 2020: I have added a YouTube video to show how the software works. I will add a 2nd movie to show the setup creating a macro panorama.
Well, a certain number of things have come together while some others haven’t. I am waiting for an 8mm lead-screw with a 2mm lead to replace the current Z axis. This was ordered from China a little while ago but I suspect that, given the problems with the coronavirus, there may be a long delay. When I get it, the current Z axis that is functioning on an 8mm lead will become the Y axis. However, with two axes working it became possible to create the XZ table and test some aspects of both its function and that of the available software. This post reports on the progress so far. So, below is a picture of the XZ table before I properly installed it on two 60cm 2020 maker rails. The base of the X rail has holes for the installation of 4 ‘tee-nuts’ which allow the whole XZ assembly to slide forwards to give a wider range of distance from the camera’s sensor plane to the object being photographed. This seems a good idea because it will allow me to use both my Olympus 60mm macro and my Laowa 2.5x to 5x ultra-macro lenses, thus covering object sizes from about 15cm down to a couple of millimeters across. I have to say that 2020 rail is amazing stuff and I can see all kinds of possibilities for its use, and that of 2040, 4040 etc., elsewhere in my photography projects. Also worth mentioning is the performance of the 3D printed parts. Printed in black PETG 15-20mm thick with 20% infill and 6 vertical shells, they are incredibly strong and there is no ‘give’ or sag.
The 3D printed components temporarily connected together to check the fit. In the end I changed my ideas about how to employ the rails, mounting them across the width of the table – see below.
I decided to make a compromise between having the Y axis over the edge of a table and having it installed on the same surface as the XZ table. I created a box about 20cm high by 60cm long. The maker rails are bolted to this using tee-nuts. For smaller objects the Y rail can be installed directly on the front of the box – for larger ones it can operate over the edge of a table. My objective is to enable a significant distance between the object being photographed and the background, thus putting the background well out of focus. In addition to the XZ table, the 2020 rails carry the brackets for two compact Meike 320P flash units. The brackets allow for a lot of flexibility in the positioning of the flashes.
My Olympus OM-D EM1 Mk2 with the Laowa super-macro lens mounted on the XZ rails (note the 2020 rails; these span the width of the X rail). Y movements are temporarily supplied by a two-axis manual focus rail. This is mounted on a board that bolts to the front of the set-up. A lash-up but a functional one!
Although I have yet to make the diffusers for the flashes, it seemed like a good idea to try out some of the software. I have started out by making some small stitched macro images. By small I mean up to 40 x 21MPix images, i.e. getting on for a gigapixel when combined. Because I wanted to test the software I might use for stitching, I did not Z stack for these tests. Had I done that, I would be dealing with image sets consisting of upwards of several hundred photos! I am a great admirer of Affinity Photo so I started out trying to stitch with it. It worked but it was very slow and ‘fell over’ unless it had a huge amount of space available to it on my SSD. I tried Hugin but it was just too clunky and seemed not to be able to cope at all. Finally, I turned to Microsoft ICE. I pointed ICE at a series of 34 overlapping RAW images and to my amazement it completed the stitching within 180 seconds, compared to 15 minutes when using Affinity. The results were to my mind excellent…….
This image is of course tiny by comparison to the original – such are the restrictions of WordPress! However, even though the flashes were of slightly variable intensity, the result is seamless.
A detail from the image above, again at very reduced resolution.
To make the image shown above, while the X rail motor handled the X movements, I had to make the required Y movements by hand. Obviously, the ultimate aim is to have motion in all three axes handled by software. I am happy with the X and Z performance of the mostly-printed focus slides. Next steps are to create the 3rd rail, write some more of the software to handle movement in all three axes, and make the flash diffusers etc.