Monday, September 24, 2012
There has been a lot of interest in this thread in recent times so I’ve provided a short summary, mainly for the benefit of newcomers to DSLR astrophotography - I hope it’s helpful.
Dithering is, in my view, one of the most useful and effective image acquisition techniques for increasing SNR with DSLR cameras, saving hours of processing time and sparing you frustratingly poor results.
There has been some discussion on various forums about substituting dithering for calibration. It sounds attractive, but:
***dithering is not intended to be a substitute for calibration - I recommend reading this, as well***
Dithering and image reduction serve different purposes with the same aim: increased SNR. Sensor temperature must be considered when using DSLRs, which requires dark subtraction. Dithering will, however, hide the majority of temperature-related calibration errors and inaccuracies, as well as several other types of artifact… read on.
You can also read about dithering in Berry and Burnell’s, “Handbook of Astronomical Image Processing,” where they recommend displacement of images by at least 12 pixels. There are several informative academic papers on-line, as well.
Backyard EOS has dithering capability, though I have never used the program. My setup is Arduino based, controlling the RA or DEC axis between images - simply slewing slightly so that the target lands on the sensor displaced by 10 - 15 pixels or more from the previous frame. It's that basic.
The comparison image is intended to accentuate the underlying issues with the image on the right. No attempt has been made to minimize the effect with post processing. The image was stacked and stretched - please note that the red streaks were not evident in individual subs and only appeared after integration. In fact, I naively spent hours trying to salvage that image - a complete waste of time. The image on the left was taken with the same camera, dithered.
Rather than spending time eradicating/covering up unsightly problems, time was spent lifting out detail, which in the image on the right was partly obliterated by poor acquisition - no dithering.
Here is the pattern I follow… it keeps the image within the sensor boundary. I use a look up table in the Arduino program to schedule the correct hand controller button activation.
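The author's Arduino lookup table isn't shown, but a box-shaped spiral of the kind described can be sketched as follows. This is an illustrative Python sketch only, not the author's code; the function name and parameters are hypothetical, and the 12-pixel step follows the Berry and Burnell recommendation mentioned above.

```python
# Illustrative sketch (not the author's Arduino code): a square-spiral
# dither pattern. Offsets are in pixels relative to the starting
# position; the spiral stays inside a small box, so the target never
# wanders far from the centre of the sensor.

def spiral_offsets(step=12, turns=2):
    """Generate (ra, dec) pixel offsets tracing a square spiral."""
    x = y = 0
    offsets = [(x, y)]
    leg = 1
    # Directions cycle: right, up, left, down.
    dirs = [(1, 0), (0, 1), (-1, 0), (0, -1)]
    d = 0
    while leg <= 2 * turns:
        for _ in range(2):          # two legs of each length per turn
            dx, dy = dirs[d % 4]
            for _ in range(leg):
                x += dx * step
                y += dy * step
                offsets.append((x, y))
            d += 1
        leg += 1
    return offsets

# Each entry would be translated into a timed hand-controller button
# press in RA or DEC between exposures.
pattern = spiral_offsets(step=12, turns=2)
```

Each successive position is exactly one step (12 pixels here) from the last, and the whole pattern stays within a bounded box, satisfying both goals: every frame lands on different pixels, and the target stays in the FOV.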
While struggling to produce a decent image, it became apparent that the physical and electronic characteristics of DSLR CMOS sensors demand more attention, and that an understanding of the limitations of the camera's sensor is essential.
I went to the trouble of replacing the factory IR filter with a special purpose astronomical filter to increase the transmission of Ha wavelengths and built a sensor cooling system to reduce dark current/thermal noise - this all worked quite well. But, there is more to sensor technology and manufacture than meets the eye.
For instance, the anti-aliasing (AA) filter, which forms part of the dust reduction system in Canon DSLR cameras, is designed to reduce moiré, an artifact produced by the Bayer (RGB) colour matrix. Reducing moiré, however, tends to soften the image, even with a focus mask at 'perfect focus' - consequently, for astrophotography the AA filter is best removed.
But that’s not the end of the story. Among the millions of pixels that make up the sensor light gathering matrix, a small percentage are dead (don’t work). Some are on all the time (hot) and overall, pixels differ in their ability to convert photons to electrons. For daylight photography this isn’t a problem, as a rule.
There’s more. The pixel matrix and associated electronics produce fixed pattern noise. Heating of the sensor during extended operation also increases noise. Random noise is a function of arriving photons and is different for every frame. The optical properties and cleanliness of the sensor, filters and lens also produce artifacts.
Combined, this all conspires to ruin the image to which you have dedicated copious amounts of time at the wrong end of the day, when perhaps, you should be sleeping.
So what’s the solution? In principle, reducing the effects of the deleterious electronic and physical influences inherent in the optical system is quite simple; in practice, however, it involves another layer of complexity - that is, DITHERING!
Dithering is an authentic solution because it addresses noise suppression, optical and sensor artifacts - dithering does not replace proper image calibration techniques. It will however, greatly improve results and avoid problems that no amount of calibration or sensible image processing can resolve.
So, what is dithering?
Dithering is the practice of shifting the camera's pointing slightly between images, so that each new image is offset from the previous one. The scene is then sampled by different pixels; ideally, the image moves by 10 - 15 pixels from the position of the preceding exposure. With careful management the target remains well within the sensor boundary.
Dithering can be random - using a hand controller and estimating the offset by timing the button push in DEC or RA - or, better still, automated, dithering in a box-shaped spiral or maze pattern. The goals are to avoid a succession of images occupying the same or neighbouring pixels (which produces poor results) and to prevent the target moving out of the FOV.
Dithering, particularly with DSLR cameras, improves signal to noise ratio for very little effort. It can be win-win for the astrophotographer.
Executed properly, dithering deals effectively with random noise, hides hot and cold pixels, improves flat fielding and enables sub-pixel sampling; capturing the image over a range of pixels means we are not sampling the same - and possibly less efficient - pixels repeatedly for the same object location.
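The hot-pixel point is worth a quick demonstration. A toy numerical sketch (all numbers invented for illustration): a hot pixel is fixed on the sensor, so once the dithered frames are re-aligned on the sky it falls on a different image position in every frame, and a median combine rejects it, while a plain mean does not.

```python
# Toy demonstration of why dithering hides hot pixels: after alignment,
# a sensor-fixed hot pixel lands at a different sky position in every
# frame, so a median combine rejects it. Values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_frames, size = 6, 32
true_sky = 100.0                     # flat "sky" level, arbitrary units

frames = []
for i in range(n_frames):
    frame = true_sky + rng.normal(0, 5, (size, size))  # sky + noise
    # Hot pixel fixed on the sensor; the dither offset shifts it to a
    # different position in each aligned frame.
    frame[5 + 2 * i, 5] = 10000.0
    frames.append(frame)

stack = np.stack(frames)
mean_combine = stack.mean(axis=0)
median_combine = np.median(stack, axis=0)

# The mean still carries the hot pixels; the median rejects them.
print(mean_combine.max() > 1000)    # True - hot pixels leak through
print(median_combine.max() < 200)   # True - outliers rejected
```

Without dithering, the hot pixel sits at the same position in every frame and no amount of averaging or rejection will remove it - which is exactly the red-streak problem shown in the comparison image.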
Calibration is not always as effective as we would like with DSLR images. And even if the images weren’t calibrated, a dithered stack would produce pretty good results.
Monday, July 2, 2012
For something completely different.
Edit: The latest information indicates that this is a terrestrial spider - it enjoys the seaside life! Could it be an Opilionid or Harvestman? Spiders generally have leg hair to one degree or another. The Australian Daddy Long Legs, which appears to have hairless legs, is a tangle web spider. However, 'Daddy Long Legs' is also applied to the order Opiliones, or Harvestmen, which are Arachnids but not spiders.
There is a little fan shaped cove with rocky promontories at either end near Torquay (Victoria, Australia). Walking this stretch of beach at low tide one warm sunny day in May 2011, I noticed a small creature, perhaps 5 - 6mm across the leg tips, hurrying along the waterline, occasionally covered by a light wash. As the water receded, this leggy little organism, having resisted the flow, continued on its way.
A wisp of life, it seemed to be feeding near the low tide water line over smooth packed sand, scurrying here and there, stopping suddenly, then moving on with equal energy. Fortunately I was carrying a camera with a macro lens, and managed to squeeze off one shot in focus while chasing this little fellow around the beach - losing sight of him, standing up to find him again, a bright little speck on the sand.
I made several inquiries to various institutions without much joy. Without a 'sample' there was little to identify. Eventually, however, almost a year later, it was suggested that the arthropod is most likely a species of Sea Spider (Pycnogonida), which inhabit the oceans from the shore to the deep, worldwide.
I’m intrigued by what appears to be a proboscis attached by a socket to the front of a dorsal appendage sweeping backward over the body, like a trowel handle. The other legs appear to be arranged asymmetrically. The abdomen gives the appearance of a terrestrial Daddy Long Legs. The eyes, blue and arranged either side of the base of the dorsal appendage. Fascinating little guy!
Wednesday, May 16, 2012
The Bow Tie focus mask, described here, is derived from Carey and Lord focus masks, which are types of diffraction gratings, similar to the well known Bahtinov mask.
The bow tie mask was purpose designed to suit a small aperture, short focal length lens. The four obstructions are intended to produce splayed double spikes, similar to the Carey mask, while eliminating the grating typical of focus mask designs. The wide obstructions and absence of grating increase the brightness of the diffraction spikes - discernible with a small lens.
The junction of the obstructions also provides an area of certainty. A central spike perpendicular to the double splay is generated at focus. This spike is not present otherwise. Another phenomenon of this design is the presence of red and/or blue fill within the splay of each pair of spikes.
The bow tie mask is easy to make. A flat section of rigid plastic is easily cut to shape with a hobby knife and steel rule. The clear plastic can be coated with black indelible marker. Sharp straight edges are essential.
Using the bow tie mask is straightforward. Equal spacing of each pair of spikes and the presence of the perpendicular spike indicate focus.
Saturday, December 10, 2011
The links below describe the base cooling system, using a full spectrum modified Canon 1000D/XS/Kiss F (or 450D, which is of similar construction), fitted with an Astronomik UV/IR Clip-in filter. The notes are divided into 3 main parts and sub-sections, mainly to keep file size reasonable.
Note: Please read this.
Why cooling - a very basic explanation
For anyone not familiar with the reasons for cooling a digital camera sensor: the purpose is to reduce dark current (thermal noise) generated during long exposures - the result of sensor heating.
Reducing the temperature at which an image is acquired improves its quality because signal to noise ratio (SNR) is improved.
The cooling system described here is capable of reducing sensor temperature by between 18 and 30C, depending on Thermoelectric module (TEC) and heatsink rating.
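To get a feel for what that temperature drop buys you, here is a back-of-the-envelope sketch. It assumes the common rule of thumb that dark current in silicon sensors roughly doubles for every ~6C rise; the exact doubling temperature varies by sensor, so the numbers are indicative only.

```python
# Rough estimate: dark current approximately doubles every ~6 C
# (a common rule of thumb; the true figure varies by sensor).
# Cooling by dT degrees therefore cuts dark current by about 2**(dT/6).

def dark_current_reduction(delta_t_c, doubling_c=6.0):
    """Approximate factor by which dark current falls when the
    sensor is cooled by delta_t_c degrees Celsius."""
    return 2 ** (delta_t_c / doubling_c)

# Cooling by 18 C: roughly an 8x reduction in dark current.
print(round(dark_current_reduction(18), 1))   # -> 8.0
# Cooling by 30 C: roughly 32x.
print(round(dark_current_reduction(30), 1))   # -> 32.0
```

So the 18 - 30C reduction quoted above corresponds, very roughly, to dark current falling by a factor of around 8 to 30.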
Sample images taken with a cooled Canon 1000D and a 200mm prime lens at f5.6 and f6.3.
Having completed this prototype - short of 3D printing the electronics compartment and light exclusion shroud, which is black cloth tape at the moment - the electronics package is performing surprisingly well.
Cool down from 16C to -5C took approximately 2 minutes. Once at temperature (-5C), the on-temperature/setpoint LED came on and remained on, except for a few brief moments, which I suspect were sensor read errors/spikes, throughout the first run of dark frames. The changes to the circuitry have been more successful than anticipated.
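The regulation behaviour described - cool down to the setpoint, then hold it with only brief excursions - can be sketched as a simple on/off controller with hysteresis. This is an illustrative Python simulation, not the author's Arduino/Teensy code; the thermal model is a toy first-order approximation and every constant is invented.

```python
# Illustrative sketch (not the author's Arduino/Teensy code): bang-bang
# setpoint control with hysteresis, of the kind used to hold a
# TEC-cooled sensor at -5 C. All rates and constants are made up.

def simulate_tec(setpoint=-5.0, start=16.0, hysteresis=0.5,
                 cool_rate=0.2, warm_rate=0.05, steps=600):
    """Step a toy sensor-temperature model under on/off control.
    Returns the list of temperatures, one per time step."""
    temp = start
    tec_on = True
    history = []
    for _ in range(steps):
        if temp <= setpoint - hysteresis:
            tec_on = False          # cold enough: switch the TEC off
        elif temp >= setpoint + hysteresis:
            tec_on = True           # too warm: switch the TEC on
        temp += -cool_rate if tec_on else warm_rate
        history.append(temp)
    return history

temps = simulate_tec()
# After the initial cool-down, the temperature oscillates within the
# hysteresis band around -5 C - the "on-temperature" LED behaviour.
```

The hysteresis band is what keeps the on-temperature LED from flickering constantly; the brief off moments the author observed would correspond to the temperature (or a noisy sensor reading) momentarily crossing the band edge.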
An advantage of regulated cooling is the ability to acquire dark libraries - typically at the system presets of 5C, 0C and -5C, at ISO 800 and 1600, and various exposure times. Most of my imaging, however, seldom exceeds 210 seconds, unguided.
Given the above, my strategy is to image at 800 and 1600ISO for most targets and utilize gain over extended exposure time, to acquire fine detail. This is arbitrary at the moment and subject to change with experience.
Appendix 1 Canon 1000D Thermoelectric Cooling Conversion - PCB etching. This is the basic board - updated.
Appendix 2 Canon 1000D Thermoelectric Cooling Conversion - Arduino/Teensy Code txt file. New version - corrected error to pwmV code.
Appendix 3 Canon 1000D Thermoelectric Cooling Conversion - Arduino/Teensy Code - .ino file New version - corrected error to pwmV code.
Monday, November 28, 2011
Basic astrophotography image processing in GIMP - Part 2: increasing SNR (image alignment, integration and enhancement)
I thought this section deserved more attention. Picking up where we left off in part 1, we discuss combining images - in astrophotography jargon, stacking and aligning; more correctly, registration.
Please remember that these tutorials are intended for beginners, using very basic equipment and software. The methodology is the basics of image calibration and processing, but very much hands on, using what we have at our disposal.
Recapping, the purpose of combining images is to increase the signal to noise ratio (SNR); that is, less noise and more signal, improving the overall appearance of our combined final image - our integrated image (more jargon).
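The square-root rule behind this is easy to verify numerically. A quick simulation (all numbers illustrative): averaging N frames whose noise is random and independent improves SNR by roughly sqrt(N).

```python
# Numerical check of the sqrt(N) rule: averaging N frames with
# independent random noise improves SNR by about sqrt(N).
# Pure simulation; signal and noise levels are arbitrary.
import numpy as np

rng = np.random.default_rng(42)
signal, sigma, n_pixels = 50.0, 10.0, 100_000

def snr_of_average(n_frames):
    """SNR of the pixel-wise average of n_frames simulated frames."""
    frames = signal + rng.normal(0, sigma, (n_frames, n_pixels))
    avg = frames.mean(axis=0)
    return avg.mean() / avg.std()

snr_1 = snr_of_average(1)     # about 50/10 = 5
snr_16 = snr_of_average(16)   # about 5 * sqrt(16) = 20
```

One frame gives an SNR of about 5; sixteen frames give about 20 - a factor of sqrt(16) = 4, exactly as the rule predicts.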
We are going to select the best light frames and combine them into a single image. But, noise reduction strategies start before uploading images to our computer. We employ a nifty method during image capture; that is, we make sure that our images are slightly offset one from the other during the imaging session (yet more jargon). The technical term for this is dithering, a science and a separate discussion altogether.
For our purposes, however, we will take advantage of our fixed set up. The stars move across the sky from East to West at 15.041 degrees/hour (the sidereal rate), so we simply let the stars drift across the camera sensor between exposures. Of course, after a while the object that we are imaging will drift out of view, but for 6 or 10 images there should be no need to recenter our target.
In part 1 we exposed for 10 seconds. Adding a 3 second delay between exposures ensures that a few pixels separate each image from the previous one - in effect offsetting our images. Very crude dithering, but effective all the same. Furthermore, once complete, our total exposure time is 60 seconds vs 10 seconds, and SNR increases by the square root of the number of combined images: combining 2 images increases SNR by a factor of 1.414 - approximating for our purposes.
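The "few pixels" claim can be checked with the sidereal rate. A hedged worked example - the focal length and pixel size below are illustrative values, not the author's equipment - assuming a star near the celestial equator:

```python
# Worked example of drift dithering: how far does the sky move across
# the sensor during the 3 second gap between exposures on a fixed
# tripod? Focal length and pixel size are example values only.
import math

def drift_pixels(seconds, focal_mm, pixel_um, declination_deg=0.0):
    """Pixels of sidereal drift on a fixed (untracked) camera."""
    # Sidereal rate: ~15.041 arcseconds of sky per second of time,
    # scaled by cos(declination) away from the celestial equator.
    drift_arcsec = 15.041 * seconds * math.cos(math.radians(declination_deg))
    # Plate scale in arcsec/pixel: 206.265 * pixel size (um) / focal (mm).
    scale = 206.265 * pixel_um / focal_mm
    return drift_arcsec / scale

# 50 mm lens, 5.7 um pixels, 3 s gap: plate scale ~23.5 arcsec/pixel,
# drift ~45 arcsec, i.e. roughly 2 pixels of offset per gap.
offset = drift_pixels(3, focal_mm=50, pixel_um=5.7)
```

So with a short lens, a 3 second gap really does shift the stars by a couple of pixels per exposure - crude dithering for free. Longer focal lengths drift proportionally faster.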
So, starting where we left off in part 1, the image below shows the second and third images in our set of calibrated light images - we have already aligned the bottom and second image in the stack. Here the third image is selected, with Mode set to Difference (and View 8:1, for clarity). In this mode the layer shows the difference between the two images as they came out of the camera, and we can use the Move tool to drag the difference layer into alignment with the image below.
And this is the result in Difference mode. The pixels have been aligned.
We then set Mode to Normal and select the image above, by selecting its ‘eye’ and highlighting the layer, setting its Mode to Difference. As before we drag the image into alignment with the image below, and so on up the stack.
Note: We loaded our images, File > Open as Layers, and need to deselect the ‘eyes’ of the images above the image that we are dragging so that it is visible.
The image below is the first of our image stack (the ‘eyes’ above it are deselected to make it visible). It’s noisy.
Let's see what happens when we average the images. With Mode set to Normal for all images (all 'eyes' selected), we set the Opacity slider of the bottom image to 100% - the default setting. Each layer above then gets an opacity of 1/n, where n is its position counting from the bottom: the second image 50%, third 33%, 4th 25%, 5th 20% and our 6th image 16.7%. (A commonly suggested alternative is to halve the opacity at each layer - 50%, 25%, 12.5% and so on - but that weights the frames unequally; 1/n gives a true average.)
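A quick sketch of the averaging arithmetic: for the composite to be a true equal-weight average, the k-th layer counted from the bottom (starting at 1) needs opacity 1/k - i.e. 100%, 50%, 33%, 25%, 20%, 16.7%. The check below uses single numbers in place of whole images, since Normal-mode compositing applies the same formula per pixel.

```python
# Compositing layer k (from the bottom, starting at 1) at opacity 1/k
# over the result of the layers below yields the plain mean of all
# frames. Single numbers stand in for images; the per-pixel maths is
# identical for Normal-mode layers.

def composite_average(frames):
    """Composite frames bottom-up with opacity 1/k for the k-th layer."""
    result = frames[0]              # bottom layer at 100%
    for k, frame in enumerate(frames[1:], start=2):
        opacity = 1.0 / k
        result = opacity * frame + (1 - opacity) * result
    return result

frames = [10.0, 20.0, 30.0, 40.0, 50.0, 60.0]
print(round(composite_average(frames), 6))   # -> 35.0, the plain mean
```

Halving the opacities instead (50%, 25%, 12.5%…) would give the upper layers progressively less weight, so the stack would not be a true average of the six frames.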
As you proceed up the layers, note the change - dithering has worked to good effect, and bad pixels that were not removed during calibration are hidden behind good pixels. Additionally, because much of the noise is random, the image becomes less noisy as layers are averaged. If we had 50 or 100 images, noise would be reduced even further. Still, for 6 images the result is impressive - as below - and much smoother.
Just to finish things off, Image > Flatten fuses all the layers together. Then apply a sharpening algorithm to the luminosity. This can be found at FX-Foundry > Photo > Sharpen > Luminosity Sharpen. You can also use Filters > Enhance > Sharpen (Smart Redux), or any of the sharpening algorithms available for GIMP. Avoid unsharp mask if you can; it tends to overdo the image (my personal view).
And here is our completed image.
For comparison, the image below is the final image from part 1, which is a single layer, as opposed to 6 layers in the image above.
Comparing the position of the constellation Orion on the frames shown, it should be evident that any one of our light images may be selected as the base or background image, framing the scene as preferred. Terrestrial objects do not align in any case, and we have to live with that.
The availability of free programs to perform calibration, registration and integration, and then using GIMP to finish off with brightness, contrast, colour and enhancement, makes the process much easier. (Keep in mind that images that contain terrestrial objects may interfere with alignment in some programs, essentially designed to align stars).
The next step, perhaps, is to use RegiStax or Deep Sky Stacker (DSS) to do all the heavy lifting (calibration, registration and integration of our images) and follow up with GIMP. Now we are getting into serious amateur stuff. But we can still use our fixed tripod/camera set up to take beautiful shots of the Milky Way, well beyond the reach of the human eye.
StarTools is a new and innovative astrophotography processing program. The author has gone to extraordinary lengths to create a program applicable to amateur and professional astrophotographers alike. ST is an image processing toolbox that is non-destructive to your image data; if desired, it will track every processing step, among many other attributes, including intelligent, detail-sensitive noise reduction applied to the final draft image. I suggest reading what the author has to say and trying out the demo version, which is fully functional except for image saving. Be patient - there is a learning curve associated with all image processing applications.
Perhaps you need one of these.