Thursday, May 1, 2014
This thread has its origins in Craig Stark’s article about the non-linearity of DSLR RAW data, as well as discussions on various forums about preprocessing of DSLR RAW frames. The non-linearity of DSLR RAW data, compared with linear 16-bit CCD data, is a product of camera firmware corrections designed to create a RAW image that is easily processed, out of the box!
The issues discussed here became more evident when I started preprocessing cooled, temperature-regulated DSLR images. Prior to this, I used software package defaults, with mixed results. That said, I have not ruled out applying the same approach to uncooled DSLR RAW data; my experiments show similarities. However, if you have one of those supercooled DSLRs, perhaps dark frames are a thing of the past, and a carefully constructed master bias and flat may be all that is required.
The point of this discussion is that problems arise with DSLR RAW data when dark frames are bias subtracted and the master dark frame is then subtracted from the light frames - as one would do if scaling the darks. This is a process applied to linear data and is not suited to DSLR RAW data calibration. Linear processes applied to DSLR data may produce less than optimal and sometimes devastating results due to data truncation.
I also take the view that DSLR RAW data, because it is processed in the camera, is relatively clean to begin with, and that applying pixel rejection algorithms to calibration frames is unnecessary. Applied to light frames, yes, as they have been exposed to unfriendly light sources. For darks, bias and flats, no; I now leave them untouched to prevent inadvertent data subtraction. I may be corrected on this, but for now that’s my position.
A note on bias frames: CMOS sensors do not have a bias as such; instead they produce a pattern of fixed noise that, to all intents and purposes, may be considered bias and applied in the same way.
The practice of not calibrating dark frames is commonplace, and has been for years: “the-bias-is-in-the-dark.” However, I am not convinced that the advantages and reasons, as applied to DSLR RAW data, are necessarily understood; “it’s always been done that way.” Where dark scaling is retained as a valid option (a default in some image processing software), the sheer size of the data sets cosmetically masks calibration errors, and perhaps personal commitment to the software package prevents further investigation, despite less than optimal results.
My experience on-line has been mixed. In seeking out the whys and wherefores of DSLR RAW data preprocessing, I got the impression that perhaps I was the only person who “didn’t get it.” Yet the answers indicated that the subject is not widely understood, and if the number of newbies who avoid preprocessing to any useful extent is an indication of their frustrations, then DSLR RAW data preprocessing needs better explaining. I am amazed by the way some individuals change cameras, scopes, mounts and software to better their experience, yet never seem to arrive at the conclusion that it is their application at fault, not the equipment.
These days, my DSLR RAW data sets comprise darks, flats and lights, as well as bias frames with which to calibrate the flat frames. However, bias frames can be substituted with dark flats; that is, frames of the same duration as the flats, except with the lens cap on. The dark flats take the place of the bias; I tend to think this is the better option.
AstroArt preprocessing is a one-time operation and, having used it several times now, I find it ideal and uncomplicated. In PixInsight, the master bias (dark flat), master dark and master flat frames are prepared separately, to avoid bias subtraction of the dark and light frames. Either way, pixel rejection is applied to light frames only. Options vary with the software package and the number of frames; PixInsight in particular offers a great deal of control over the result.
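As a concrete illustration of the workflow above, here is a minimal Python/numpy sketch. The function names and the simple average combine are my own illustration, not any package’s actual routines; real software uses more sophisticated combine methods, but the calibration arithmetic is the point.

```python
import numpy as np

def master(frames):
    # Average-combine calibration frames -- no pixel rejection,
    # and no bias subtraction from the darks (bias-in-the-dark).
    return np.mean(np.stack(frames), axis=0)

def calibrate_light(light, master_dark, master_flat, master_dark_flat):
    light = np.asarray(light, dtype=np.float64)
    # Subtract the uncalibrated master dark: bias and dark current
    # come out together, with no dark scaling and no truncation.
    dark_subtracted = light - master_dark
    # Flats are calibrated with dark flats (or bias) and normalised.
    flat = master_flat - master_dark_flat
    flat = flat / np.mean(flat)
    return dark_subtracted / flat
```

Note that the master dark goes into the light frames untouched; only the flats are calibrated before use.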
I hope this demystifies image calibration of DSLR RAW data and validates the practice of not using a calibrated dark frame. The bias_in_the_dark (BITD) methodology works because the bias is not subtracted from the dark; doing so truncates the dark frames, which in turn truncates the light frames, an effect further compounded if bias is also subtracted. Flats and bias frames are reported to be more linear than dark or light frames (probably a function of exposure time) and therefore flats are not adversely affected by bias subtraction.
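The truncation effect itself is easy to demonstrate. The numbers below are synthetic (a made-up bias level and read noise), but the mechanism is the one described above: wherever read noise pushes a dark pixel below the bias, unsigned storage clips the difference at zero, and the clipped master dark no longer averages to the true dark current.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic frames: a 1000 ADU bias level with 8 ADU read noise,
# plus 5 ADU of dark current in the dark frame.
bias = rng.normal(1000.0, 8.0, 100_000)
dark = rng.normal(1000.0, 8.0, 100_000) + 5.0

signed = dark - bias                  # float subtraction: mean is ~5 ADU
clipped = np.clip(signed, 0, None)    # what unsigned 16-bit storage does

# The clipped result is biased high: the negative half of the noise
# distribution has been thrown away, so the "dark current" it reports
# is wrong, and that error propagates into every calibrated light.
print(signed.mean(), clipped.mean())
```

The signed mean recovers the 5 ADU dark current; the clipped mean overshoots it by a couple of ADU, and that systematic error ends up in the lights.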
I encourage readers to source Craig Stark’s excellent article on DSLR RAW non-linearity, and hopefully a better understanding of the issues will lead to a better experience, particularly for newcomers. If forum discussions are anything to go by, no doubt there will be disagreements and those who know better - who don’t objectively separate their experiences - a personal frustration I have with some forums. However, in the interests of transparency, I am open to alternative viewpoints.
Comments are closed, but there is always email.
Saturday, December 29, 2012
For something completely different: “The Black Generic” (say that with a thick Scottish accent), presented here for posterity. See further down for a small snail pattern that was very effective in small dams, too.
“The Generic” (which never had a name, until yesterday) was an attempt to accentuate the form of nymphs - and it worked. The hook shank bent upwards, giving the body a curved posture; the long flared tail, tight skinny abdomen and bulbous bushy thorax were intentional. I noticed that variations were less successful, a thinly tied or short-fibre thorax being the main flaws. It needs to look curvy, leggy, cheeky and thoroughly provocative - hey! look at me, I’m here, come and get me!
My rough sketch doesn’t do it justice and I no longer have my fly tying gear.
Fished in shallow runs - deep, sinking and on the rise - this fly caught plenty of fish in streams north of Melbourne some 25 years ago, when I had the time to fish - Deep Creek and Jacksons Creek in particular, when they were healthy, before the drought devastated them. I understand that recent rains have regenerated the area and that fish are more plentiful.
The appearance is a long bulging thorax with a very bushy hackle to 2/3 of the body length. The abdomen is a thinly wound single layer of silk with tightly wound silver wire segments and a long flared tail.
Hook size: 14-16 longshank - slightly bent upward ~2/3 from the eye.
Body : Black silk.
Tail: Soft black hen, 6-9 curved fibres.
Segment: Silver wire - anterior only.
Carapace: Black crow wing.
Hackle: 3 dark brown - black Ostrich herl with long fibres.
Head: Tie off crow wing with 3 half hitches and clip to cover eyelet - a flared finish.
Starting near the hook bend (not the 2/3 bend), tie in the tail, flaring with turns of silk. Tie in the wire and wind the body tightly (a single layer) to the bend (2/3 bend). Wind in narrow segments with the wire and tie off and clip at the 2/3 bend.
At and continuing from the 2/3 bend, tie in a good bunch of crow wing and several Ostrich herl. Wind silk to eye and half hitch. Wind Ostrich herl to within ~1/2mm of the eye and secure with half hitches. Bring crow wing over and tie off with half hitches immediately behind the eye and trim to cover the eye.
It may be necessary to tease out the hackle under the crow wing with a dubbing needle.
You may wax the thread, but I don’t. I prefer the fly waterlogged and sinking fast in moving or deep water.
Tying the crow wing shiny side up is an alternative presentation.
Small Snail pattern:
A second and very effective little fly was a very small snail pattern. Hook size 16. Olive thread. A few winds of olive chenille with an olive carapace and a few turns of yellow chenille - about half and half.
Fished above weed beds in particular. Very easy fly to make.
Monday, September 24, 2012
There has been a lot of interest in this thread in recent times, so I’ve provided a short summary, mainly for the benefit of newcomers to DSLR astrophotography - I hope it’s helpful. Dithering is, in my view, one of the most useful and effective image acquisition techniques for increasing SNR with DSLR cameras, saving hours of processing time and sparing you frustratingly poor results.
There has been a bit of discussion on various forums about substituting dithering for calibration. Sounds attractive, but:
***dithering is not intended to be a substitute for calibration***
Dithering and image reduction serve different purposes with the same aim - increased SNR. Temperature must be considered when using DSLRs, requiring dark subtraction. Dithering however, will hide the majority of temperature related calibration errors/inaccuracies as well as several other types of artifacts… read on.
You can also read about dithering in Berry and Burnell’s, “Handbook of Astronomical Image Processing,” where they recommend displacement of images by at least 12 pixels. There are several informative academic papers on-line, as well.
Backyard EOS has dithering capability, however, I have never used the program. My setup is Arduino based, controlling the RA or DEC axis between images, simply by slewing to present the camera to the target, displaced by 10 - 15 pixels or more between images - it’s that basic.
The comparison image is intended to accentuate the underlying issues with the image on the right. No attempt has been made to minimize the effect with post-processing. The image was stacked and stretched - please note that the red streaks were not evident in individual subs and only appeared after integration. In fact, I naively spent hours trying to salvage that image - a complete waste of time. The image on the left was taken with the same camera, dithered.
Rather than spending time eradicating/covering up unsightly problems, time was spent lifting out detail, which in the image on the right was partly obliterated by poor acquisition - no dithering.
Here is the pattern I follow… it keeps the image within the sensor boundary. I use a look up table in the Arduino program to schedule the correct hand controller button activation.
Struggling to produce a decent image, I realised that the physical and electronic characteristics of DSLR CMOS sensors demand more attention, and that an understanding of the limitations of the camera’s sensor was essential.
I went to the trouble of replacing the factory IR filter with a special purpose astronomical filter to increase the transmission of Ha wavelengths and built a sensor cooling system to reduce dark current/thermal noise - this all worked quite well. But, there is more to sensor technology and manufacture than meets the eye.
For instance, the anti-aliasing filter, which forms part of the dust reduction system in Canon DSLR cameras, is designed to reduce moiré, an artifact produced by the Bayer (RGB) colour matrix. However, the reduction of moiré tends to soften the image, even with a focus mask at ‘perfect focus’ - consequently, for astrophotography, the AA filter should be removed.
But that’s not the end of the story. Among the millions of pixels that make up the sensor light gathering matrix, a small percentage are dead (don’t work). Some are on all the time (hot) and overall, pixels differ in their ability to convert photons to electrons. For daylight photography this isn’t a problem, as a rule.
There’s more. The pixel matrix and associated electronics produce fixed pattern noise. Heating of the sensor during extended operation also increases noise. Random noise is a function of arriving photons and is different for every frame. The optical properties and cleanliness of the sensor, filters and lens also produce artifacts.
Combined, this all conspires to ruin the image to which you have dedicated copious amounts of time at the wrong end of the day, when perhaps, you should be sleeping.
So what’s the solution? In principle, reducing the effects of the deleterious electronic and physical influences inherent in the optical system is quite simple; in practice, however, it involves another layer of complexity; that is, DITHERING!
Dithering is an authentic solution because it addresses noise suppression, optical and sensor artifacts - dithering does not replace proper image calibration techniques. It will however, greatly improve results and avoid problems that no amount of calibration or sensible image processing can resolve.
So, what is dithering?
Dithering is the practice of shifting the sensor (camera) between images, so that each new image is slightly offset from the previous image. The image is sampled by different pixels; that is, the image moves, ideally, by 10 - 15 pixels, from the position of the preceding exposure. With careful management the target image remains well within the sensor boundary.
Dithering can be random using a hand controller and estimating the offset by timing the button push in DEC or RA. Better still, an automated system that dithers in a box shaped spiral, or maze pattern. The goal is to avoid a succession of images occupying the same or neighbouring pixels (which produces poor results) and to prevent the target object moving out of the FOV.
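For the automated case, a box-shaped spiral is easy to generate. The sketch below is my own illustration, not taken from any particular acquisition package: it yields a sequence of cumulative pixel offsets that step by a fixed amount, turn 90 degrees at the end of each leg, and stay bounded so the target remains well inside the FOV.

```python
def spiral_offsets(n, step=12):
    """Generate n cumulative dither offsets in a box-shaped spiral.

    step is the displacement per move in pixels -- Berry and Burnell
    suggest at least 12. No two pointings repeat, and the pattern
    grows outward slowly, keeping the target within the sensor.
    """
    x = y = 0
    dx, dy = step, 0
    offsets = [(0, 0)]
    run = 1                            # moves in the current leg
    while len(offsets) < n:
        for _ in range(2):             # two legs per run length
            for _ in range(run):
                x, y = x + dx, y + dy
                offsets.append((x, y))
                if len(offsets) == n:
                    return offsets
            dx, dy = -dy, dx           # turn 90 degrees
        run += 1
    return offsets
```

Each consecutive pair of offsets differs by exactly one `step`-pixel move in RA or DEC, which maps directly onto timed hand-controller button pushes or, as in my case, an Arduino look-up table.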
Dithering, particularly with DSLR cameras, improves the signal-to-noise ratio for very little effort. It can be a win-win for the astrophotographer.
Executed properly, dithering deals effectively with random noise, hides hot and cold pixels, improves flat fielding and sub-pixel sampling; that is, capturing the image over a range of pixels means that we are not sampling the same and possibly less efficient pixels repeatedly for the same object location.
Calibration is not always as effective as we would like with DSLR images. And even if the images weren’t calibrated, a dithered stack would produce pretty good results.
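The hot-pixel claim can be demonstrated with a toy simulation. Everything here is synthetic (a random “sky,” a single hot pixel, perfect registration by integer shifts), but it shows the mechanism: after the dithered frames are aligned, the hot pixel falls on a different sky position in each frame, so a median combine rejects it outright.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.normal(100.0, 5.0, (32, 32))              # the "sky" we want to recover
offsets = [(0, 0), (3, 0), (3, 3), (0, 3), (-3, 3)]   # dither pattern, in pixels

frames = []
for dx, dy in offsets:
    frame = np.roll(scene, (dy, dx), axis=(0, 1))     # dithered pointing
    frame[16, 16] = 4000.0                            # hot pixel, fixed on the sensor
    frames.append(frame)

# Register the frames back to a common reference, then median-combine.
aligned = [np.roll(f, (-dy, -dx), axis=(0, 1)) for f, (dx, dy) in zip(frames, offsets)]
stacked = np.median(np.stack(aligned), axis=0)

# Dithered: the hot pixel lands on a different sky pixel in every aligned
# frame, so at each location only one of five samples is bad -- the median
# ignores it. Undithered, the hot pixel sits on the same sky pixel in
# every frame and survives any combine.
```

This is exactly why an undithered stack keeps the hot pixels: a median can only reject a defect if it moves relative to the sky between frames.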
Monday, July 2, 2012
For something completely different.
One day in May last year, I was walking the dogs along a Bass Strait beach, just near Torquay (Victoria, Australia). There is a little fan-shaped cove with rocky promontories at either end. The sun was warm, the sea lapping gently on the shore at low tide. Perhaps it was the light, but as I traversed this section of the beach I noticed a small creature, perhaps 5-6mm (leg tips), hurrying along the waterline, occasionally covered by a light wash. Once the water had receded, this leggy little organism, having resisted the water flow, continued on its way.
A wisp of life, it seemed to be feeding near the low tide water line over smooth packed sand. It hurried here and there, stopping suddenly, then moving on with equal energy. Taking a camera with me that day was fortuitous. I managed to squeeze off one shot in focus while chasing this little fellow around the beach, losing sight of him, standing up to find him again, a bright little speck on the sand.
I made several inquiries to various institutions without much joy. Without a ‘sample’ there was little to identify. Eventually, however, and almost a year later, it was suggested that the arthropod is most likely a species of Sea Spider or Pycnogonida, which inhabit the oceans from the shore to the deep, worldwide.
I’m intrigued by what appears to be a proboscis attached by a socket to the front of a dorsal appendage sweeping backward over the body, like a trowel handle. The other legs appear to be arranged asymmetrically. The abdomen gives the appearance of a terrestrial Daddy Long Legs. The eyes, blue and arranged either side of the base of the dorsal appendage. Fascinating little guy!