Monday, November 28, 2011
Basic astrophotography image processing in GIMP - Part 2: increasing SNR (image alignment, integration and enhancement)
I thought this section deserved more attention. Picking up where we left off in part 1, we discuss combining images - in astrophotography jargon, stacking - and aligning them - more correctly, registration.
Please remember that these tutorials are intended for beginners, using very basic equipment and software. The methodology is the basics of image calibration and processing, but very much hands on, using what we have at our disposal.
Recapping, the purpose of combining images is to increase the signal to noise ratio (SNR); that is, less noise and more signal, improving the overall appearance of our combined final image - our integrated image (more jargon).
We are going to select the best light frames and combine them into a single image. But, noise reduction strategies start before uploading images to our computer. We employ a nifty method during image capture; that is, we make sure that our images are slightly offset one from the other during the imaging session (yet more jargon). The technical term for this is dithering, a science and a separate discussion altogether.
For our purposes, however, we will take advantage of our fixed setup. Since the stars move across the sky from East to West at 15.0416 degrees/hour (the sidereal rate), we simply let them drift across the camera sensor between exposures. Of course, after a while the object that we are imaging will drift out of view, but for 6 or 10 images there should be no need to recenter our target.
In part 1 we exposed for 10 seconds. Adding a 3 second delay between exposures ensures that a few pixels separate each image from the previous one - in effect offsetting our images. Very crude dithering - effective all the same. Furthermore, once complete, our total exposure time is 60 seconds vs 10 seconds. Note, though, that SNR increases with the square root of the number of combined images: combining 2 images increases SNR by a factor of about 1.414.
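To put rough numbers on the drift and the SNR gain, here is a quick sketch. The 5 micron pixel pitch and 50mm focal length are assumptions for illustration only, not values from this tutorial:

```python
import math

# Sidereal rate: 15.0416 deg/hr conveniently equals 15.0416 arcsec/sec
# (15.0416 deg/hr * 3600 arcsec/deg / 3600 sec/hr)
drift_arcsec_per_sec = 15.0416

# Assumed camera: 5 micron pixels behind a 50 mm lens (hypothetical values)
plate_scale = 206.265 * 5 / 50      # arcsec per pixel, ~20.6
gap = 10 + 3                        # seconds between the starts of consecutive frames
offset_px = drift_arcsec_per_sec * gap / plate_scale
print(round(offset_px, 1))          # ~9.5 pixels of offset between frames

# SNR grows with the square root of the number of frames combined
print(round(math.sqrt(2), 3))       # 2 frames -> 1.414x
print(round(math.sqrt(6), 2))       # 6 frames -> 2.45x
```

So with this (assumed) setup, even the crude 3 second pause dithers the frames by several pixels, which is all that is needed.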
So, starting where we left off in part 1, the image below shows the second and third images in our set of calibrated light images - we have already aligned the bottom and second images in the stack. In this case, the third image is selected with Mode set to Difference (and View 8:1, for clarity). This layer now appears transparent, showing the difference between the two images as they came out of the camera. We can use the Move tool to drag the transparent (difference) layer into alignment with the image below.
And this is the result in Difference mode. The pixels have been aligned.
We then set that layer's Mode back to Normal, select the next image up the stack by clicking its ‘eye’ and highlighting the layer, and set its Mode to Difference. As before, we drag the image into alignment with the image below, and so on up the stack.
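The same whole-pixel alignment can be automated outside GIMP. The sketch below (assuming NumPy is available) estimates the shift between two frames from the peak of their FFT-based cross-correlation; `estimate_shift` is a hypothetical helper written for this post, not part of GIMP or any tool mentioned above:

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the (dy, dx) whole-pixel shift of img relative to ref
    via the peak of their FFT-based cross-correlation."""
    ref = ref - ref.mean()
    img = img - img.mean()
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # the correlation peak wraps around; map to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# A dithered frame is (to first order) just the reference shifted a few pixels
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
img = np.roll(ref, (3, -5), axis=(0, 1))
print(estimate_shift(ref, img))  # (3, -5); undo with np.roll(img, (-3, 5), axis=(0, 1))
```

This is essentially what dedicated stacking software does, with sub-pixel refinement and star detection on top.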
Note: We loaded our images with File > Open as Layers, and we need to deselect the ‘eyes’ of the images above the image that we are dragging so that it is visible.
The image below is the first of our image stack (the ‘eyes’ above it are deselected to make it visible). It’s noisy.
Let’s see what happens when we average the images; that is, with Mode set to Normal for all images (all ‘eyes’ selected), we leave the Opacity slider of the bottom image at 100% - the default setting. Select the second image and set it to 50%, the third to 33%, the 4th to 25%, the 5th to 20% and our 6th image to 17%. (In general, set the nth layer from the bottom to an opacity of 100/n, so that every frame contributes equally to the average.)
As you proceed up the layers, note the change - dithering has worked to good effect, and pixels that were not removed during calibration are hidden behind good pixels. Additionally, because ambient noise is random, the image becomes less noisy. If we had 50 or 100 images, noise would be reduced even further. Still, for 6 images the result is impressive - as below - and much smoother.
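The statistics behind this are easy to demonstrate. This sketch (assuming NumPy) simulates six frames of a flat "sky" with random noise and averages them, mimicking what the layer opacities approximate:

```python
import numpy as np

rng = np.random.default_rng(42)
sky, noise_sigma, n_frames = 100.0, 10.0, 6

# six noisy exposures of the same scene
frames = [sky + rng.normal(0.0, noise_sigma, (200, 200)) for _ in range(n_frames)]
stack = np.mean(frames, axis=0)

print(round(float(np.std(frames[0])), 1))  # ~10.0, single-frame noise
print(round(float(np.std(stack)), 1))      # ~4.1, close to 10 / sqrt(6) = 4.08
```

The averaged stack's noise drops by very nearly the predicted factor of sqrt(6), which is why even a handful of frames makes such a visible difference.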
Just to finish things off, Image > Flatten fuses all the layers together. Then apply a sharpening algorithm to the luminosity. This can be found at FX-Foundry > Photo > Sharpen > Luminosity Sharpen. You can also use Filters > Enhance > Sharpen (Smart Redux), or any of the sharpening algorithms available for GIMP. Avoid unsharp mask if you can; it tends to overdo the image (my personal view).
And here is our completed image.
For comparison, the image below is the final image from part 1, which is a single layer, as opposed to 6 layers in the image above.
Comparing the position of the constellation Orion on the frames shown, it should be evident that any one of our light images may be selected as the base or background image, framing the scene as preferred. Terrestrial objects do not align in any case, and we have to live with that.
The availability of free programs to perform calibration, registration and integration, and then using GIMP to finish off with brightness, contrast, colour and enhancement, makes the process much easier. (Keep in mind that images that contain terrestrial objects may interfere with alignment in some programs, essentially designed to align stars).
The next step, perhaps, is to use RegiStax or Deep Sky Stacker (DSS) to do all the heavy lifting (calibration, registration and integration of our images) and follow up with GIMP. Now we are getting into serious amateur territory. But we can still use our fixed tripod/camera setup to take beautiful shots of the Milky Way, revealing far more than the human eye can see.
StarTools is a new and innovative astrophotography processing program. The author has gone to extraordinary lengths to create a program applicable to amateur and professional astrophotographers alike. ST is an image processing toolbox that is non-destructive to your image data. If desired, it will track every processing step, among many other features, including intelligent, detail-sensitive noise reduction applied to the final image. I suggest reading what the author has to say and trying out the demo version, which is fully functional except for image saving. Be patient; there is a learning curve associated with all image processing applications.
Perhaps you need one of these.
Saturday, November 26, 2011
EDIT: When I first wrote this, it followed several years of dabbling in various designs and versions, hand and motor driven. There are of course many ways to skin a cat, and this is just one of them. Think of this blog as useful information, if in fact it is useful. I am a fan of the curved rod design for its simplicity. That design was probably a response to frustration over the inherent tangent arm error of flat-board, straight-shaft designs - anyway, if this is helpful, have fun…
Double Arm Drives have been used to photograph the night sky for over 20 years. Originally designed by Dave Trott, based on the Haig or Scotch mount (otherwise known as a Barn Door Tracker), the Double Arm Drive is a camera platform used to track and capture images of celestial objects using long exposure times. This design is conventional and attempts to refine the tracking performance of the double arm drive. Hence, the Tangent Error Minimized “Preloaded” (TEM) Tracker - this is a prototype.
However, before proceeding this neat little design may be preferable for some readers. It is small and compact and can be driven using the Arduino electronics described later in this post, if desired. I really like what Gary Seronik has done with this design.
This how-to is intended for a wide audience; consequently, there is lots of info, and no previous experience is assumed…
A few useful notes
The unit had to be easy to build and accurate. A steel rule, sharp pencil, basic tools and a small drill, should be all that’s needed. Having said that, built-in adjustments can be used to fine tune performance to overcome minor fabrication errors.
I’ve provided as much detail as possible, along with the Arduino code, PCB template and Eagle board, for those who would like to have a commercially made Arduino shield.
In hindsight, the conventional layout is best - the camera arm, as shown in the images of the prototype, is not too stable and needs restraining to prevent the camera and lens toppling.
Although stepper motors are reliable and accurate, vibration can be a problem. A solution is gearing, which also increases torque at the drive shaft or using an Easy Driver with micro stepping.
Together, the dimensions of the tracker and packing the camera arm hinge with 4 thicknesses of 80gsm A4 paper (0.4mm) improve tracking overall.
Linux/MacOS RAW image preprocessing bash script with graphical interface and instructions. This should properly calibrate your RAW images, preparing them for deBayering, alignment and stacking.
The reader may want to postprocess their images. A low cost solution is GIMP; however, Deep Sky Stacker and StarTools make a more sophisticated image processing combination.
Front and rear views - the reduction drive is more effective, increasing torque at the drive screw and minimizing stepper motor resonance.
The Lagoon Nebula M8 and M20, the Trifid Nebula - composite of 9 x 30 second frames. Tail of Scorpius toward the centre of the Galaxy - M7, M6 (the Butterfly Cluster) and the Cat's Paw Nebula - 21 x 30 second frames. Taken with a FujiFilm X-Pro1, 60mm, f/2.4, ISO 800, with preprocessing in PixInsight (Deep Sky Stacker is free) and post processing in StarTools. I went to the trouble of taking bias, dark and flat calibration frames.
The Equatorial Wedge (EW) provides adjustment of altitude (latitude), limited to the range of latitudes in which the device is expected to be used. If the tracker is attached to an adjustable tripod directly by the Altitude board, the Azimuth board is not required and may be omitted, although an EW is a more rigid design and easier to set up, as shown above.
For simplicity of construction the Conventional Layout is recommended. Accurate dimensions and ensuring that the Tracker is flat when closed will ensure that it performs as expected. All Tracker dimensions are metric (unless otherwise stated), including the Drive Shaft thread.
For the non-metric world, imperial measurements for use with the 1/4 inch 20 tpi drive screw can be found at the bottom of the page in the appendix;
Design and performance testing
The dimensions of the TEM Tracker provide for very accurate tracking in the first 15 to 20 minutes of operation and subsequent tracking error is minor to 60 minutes. A design goal was accurate tracking for up to 60 minutes. In practice, performance is very accurate up to 90 minutes.
Some helpful definitions
Sidereal rate: The rate at which the Earth rotates on its axis relative to the stars - approximately 15.0416 degrees/hour.
Drive cycle: From boards closed to 60 minutes (zero to nominally, 15.0416 degrees).
Contact Point: The physical point at which the Drive Arm lifts the Camera Arm - 349.95mm (350mm).
Optimal Contact Point: The position at which the contact point ‘would’ intersect the Camera Arm, if it were to move (optimally) throughout the drive cycle. In practice, this is too complex to implement mechanically.
Points of Rotation: Hinge and pinion centres should line up when the Tracker is closed, except that the Camera Arm hinge is slightly elevated. The performance of the Tracker is predicated on this arrangement - it’s the zero datum.
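The sidereal rate quoted above follows directly from the length of the sidereal day (23 h 56 m 4.1 s), as this quick check shows:

```python
# One full 360-degree rotation of the Earth relative to the stars takes
# one sidereal day, not one 24-hour solar day.
sidereal_day_hours = 23 + 56 / 60 + 4.1 / 3600   # ~23.9345 hours
rate_deg_per_hour = 360 / sidereal_day_hours
print(round(rate_deg_per_hour, 4))               # 15.0411, i.e. ~15.04 deg/hour
```

The commonly quoted 15.0416 figure used throughout this post agrees with this to four significant figures, which is far finer than the mechanical tolerances involved.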
Straining at Gnats
A spreadsheet was used to calculate Drive Arm and Camera Arm dimensions, with tracking tolerances set to 4 decimal places of a degree, using the following fixed parameters:
motor speed, 1 rpm; drive screw pitch, 1mm per thread (M6, or M8 fine - which has the same 1mm pitch as M6).
Camera Arm - Drive Arm Trend
Optimised angular displacement of the Camera Arm was calculated to 4 decimal places at 1 minute intervals for 60 minutes; i.e., 15.0416/60 degrees per minute. Optimal contact points were determined to match the displacement of the Camera Arm at these intervals, the start and end points being 349.95mm (nominally 350mm) and 347.11mm, respectively.
With the contact point fixed at 350mm (349.95mm) the Camera Arm is driven through 14.9517 degrees (in 60 minutes). If the contact point is fixed at 347.11 mm the Camera Arm is driven through 15.0416 degrees, which is optimal but problematic, because error is introduced during the early part of the drive cycle. The object is to drive the Camera Arm between these two points and take advantage of accurate performance at both ends of the drive cycle. This can be achieved by raising the Camera Arm hinge 0.4mm (4 thicknesses of 80gsm paper).
Calculating contact points made it possible to verify the arc derived from the CAD program; angles subtended from the Camera Arm hinge to the Camera Arm arc correspond very closely to the optimal contact points.
How did it shape up - Performance
A Canon G9, fitted with a 2x tele-converter and with the camera set at 24x (digital) zoom - an approximate focal length of 1600mm - was used to take 10 x 64 second exposures of Spica (southern hemisphere) over 22 minutes. Five of these were stacked, showing no apparent trailing; the others, subject to atmospheric distortion and vibration due to construction faults, were discarded, although trailing was not apparent in them either. Spica1 and Spica5 are the first and last in the series of 10 exposures - true!
Accurate tracking was observed for more than 30 minutes; that is, 15 minutes to resolve polar alignment using the drift method, 10 minutes to verify tracking and 22 minutes of photography, including a period of approximately 5 minutes where the setup was unattended after the shooting cycle was complete.
To show that the images are aligned and verify the ‘authenticity’ of tracking, in-camera software (CHDK) was used to combine/stack the Spica images. Compare the 5 sub exposures.
Software control of motor speed is optimal because it eliminates a variable that tends to mask other errors, such as construction faults and/or poor polar alignment.
Programming an ‘Arduino’ board, fitted with a motor shield provides very accurate and consistent motor speed. This arrangement was used to test the tracker. Alternatively, Google other types of conventional circuitry.
Planning is the key to acquiring quality exposures, which depends, in part, on proper polar alignment.
Device leveling, latitude setting and finding True North or True South (depending on hemisphere) are essential to accurate polar alignment - finding TN or TS can be the most difficult and frustrating nightly chore. Setting up references/datums during the day minimizes efforts in the dark, when we should be imaging.
If you have access to Google Maps, TN/TS can be referenced to natural lines, buildings or fence lines by measuring the angle between a reference line and TN/TS (which is, of course, vertically up and down the page in a screen shot).
Locate two legs of the tripod on the reference line and the third perpendicular to it. Now point the axis of the drive arm hinge to TN/TS; that is, offset from the reference line by the angular difference measured from the Google Maps screen shot.
Next, level the azimuth/base board of the tracker, set the latitude of your location by adjusting the altitude board up or down, and check the alignment with TN or TS for accuracy.
Having completed this task once, nightly set up at the same location and datums, perhaps marked on the ground, is a 3 minute job. If you have a GPS equipped phone/tablet, record the latitude and longitude of the location.
A polar alignment scope, if you have one, is the traditional polar alignment method - wide field imaging at short focal lengths is tolerant of small polar alignment error.
Shoot an image and check for drift - elongated stars. Make very small adjustments in azimuth (rotating the azimuth board) and latitude (adjusting the angle of the altitude board) to further improve polar alignment. That is, until stars are round for the chosen exposure time.
Providing the Critical Dimensions, Points of Rotation and other design conventions are observed, performance should be consistent in various configurations.
Prepare the drive end first, before committing to other measurements, referencing everything to the centre of the Drive Shaft, and make sure that the 20mm (nominal) drive shaft holes in the top and bottom boards are aligned prior to marking the location of other components. Marking out the motor/drive shaft assembly end first will minimize construction errors, in particular the placement of the hinges.
The boards pictured are 17mm ply coated with laminate - a cut-off picked up at a timber yard. This material is used for concrete form-work and is very stable - resists warping etc.
Notice that the motor is mounted on the top board and hinged. It may be mounted on the bottom board in a similar fashion - a matter of preference. Importantly, the centre of the drive shaft should be coincident with the centre of the motor mount hinge and the centre of the Drive Nut pinions. It may be necessary to ‘pack the motor up’ to provide clearance between the Drive Nut and the motor shaft.
An easy way to make Drive Shaft pinions, and have them match up with the Motor Mount hinges, is to cut the ends off the hinges to be used for the Motor Mount. The part with the pin is retained (see photo); additional holes are drilled to accept locking screws - use tape to hold things in place while drilling.
Another refinement is the use of springs on the pinions to minimize slack in the assembly. Alternatively, remove the pins and tap threads to fit grub screws for centering the Drive Nut (recommended).
While it is important to ensure that everything is properly aligned during construction, it is recommended that the Tracker be started slightly open - say 10 - 15mm - to stabilize the drive shaft and pinion. With the Tracker closed the drive shaft tends to lean, due to its proximity to the drive nut pinion assembly.
Nylon nuts and bolts can be easily modified with side-cutters, and are useful replacements for hinge pins and pinions - they tend to reduce the transmission of motor resonance. Nylon threads are noticeably tighter.
Tip - place a small ball of Blutak on the end of the screw before pushing it into the hinge - this will further isolate hard surfaces without compromising rigidity.
Azimuth and Altitude
If intending to mount the drive on an adjustable (sturdy) tripod, the azimuth board may be omitted.
Be careful of heavy telephoto lenses that may topple the Camera Arm - restraint is necessary.
EDIT: Update - for L293D read L293NE, which seems to run the stepper smoothly. Half stepping included in Arduino code - see acknowledgements for author credit.
The Printed Circuit Board (PCB) is designed as a motor shield and fits on top of the Arduino board. It utilises an L293D or SN754410NE H-Bridge bipolar stepper driver, and a ULN2003AN (or similar) to drive a unipolar stepper motor. A three position switch selects Forward, Stop and Reverse and a ‘Kill Switch’ stops the motor once the Drive Arm is back in the start position; the motor is held in position with its coils energised. Turn off supply power to rotate by hand, if necessary.
The L293D is probably a better choice because it has in-built protection to prevent damage to your Arduino from voltage spikes generated by the motor; the SN754410NE does not. However, the use of the Arduino pull-up resistors may well serve to provide additional protection; no problems have been experienced to date.
The L293D and SN754410NE use two separate power sources, one for the chip and one for the motor. As such, the motor shield is designed to provide several control configurations. For example, the SN754410NE may utilise a “power-off” kill switch, or the Arduino logic. Similarly, for the L293D, the board may also be configured to remove power from the logic and power supply. This is more a by-product of design evolution than a deliberate feature.
The ULN2003AN Darlington Array, drives a 5 or 6 wire uni-polar motor. Changing the pin allocation in the ‘Global’ section of the ‘Wiring’ program is necessary with the current program.
Fitting a heat sink to the 78xx voltage regulator (xx = the motor supply voltage) and attaching a cooling fan will be necessary where more powerful stepper motors, drawing large amounts of current, are used.
A 5 volt bi-polar motor or 5 or 6 wire unipolar is adequate for the job, unless you have other requirements. Besides, there are several motor shields available for the Arduino if you prefer an alternative, for some reason.
Ebay has a plethora of unipolar 5v geared stepper motors for sale from Hong Kong (the 28BYJ-48 - advertised at 64 steps, it actually has 32 steps/rev and a 64:1 reduction gearbox) - set the stepper speed and change the motorStep line of your Arduino script to suit your motor. Otherwise, steppers come in various grades and step counts - gearing of some type is highly recommended to reduce resonance.
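Working out the step timing for such a motor is straightforward. This sketch uses the 28BYJ-48 figures given above, treating the gear ratio as exactly 64:1 (the real gearbox is closer to 63.68:1), to find the step interval needed for 1 rpm at the output shaft:

```python
internal_steps = 32                 # full steps per revolution of the rotor
gear_ratio = 64                     # nominal output reduction (actually ~63.68:1)
steps_per_output_rev = internal_steps * gear_ratio
print(steps_per_output_rev)         # 2048

# For 1 rpm at the output shaft: one revolution every 60 seconds
step_interval_ms = 60_000 / steps_per_output_rev
print(round(step_interval_ms, 2))   # 29.3 ms between full steps
```

In the Arduino sketch this corresponds to setting the stepper speed so that steps arrive roughly every 29 ms; half stepping doubles the count and halves the interval.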
Arduino motor shield
Direction and Kill Switch wiring
Copy and Paste the Arduino code to your editor and upload to the board.
The PCB pdf file prints the actual size of the shield to fit the Arduino (Diecimila or similar) - it was printed directly from Eagle. Print to a transfer medium, then iron onto a single sided board for etching. It may be wise to print to paper first, cut it out, and check the fit against the Arduino board. A laser printer is required, as well as 1mm and 0.8mm drills, and a fine hacksaw and file to cut the board to shape.
Refer to the parts list and use the image of the Arduino Motor Shield for guidance (note the two jumpers - logic setup). The 100uf capacitor is nearest the diode and 4 pin connection header, the 1uf capacitor is at the back of the shield. The L293D (SN754410NE) is the IC to the front of the image/board. The ULN 2003AN is located at the back of the board.
The Direction Switch is an 8 pin, 3 position sliding switch. The terminal layout, as shown, is 3 + 1 and 1 + 3. The limit switch, when closed, sets Pin 2 LOW. Note that in the Stop and Reverse positions Pin 3 is always LOW. Forward sets Pins 2 and 3 HIGH, overriding the limit switch.
If problems are experienced getting the stepper motor to rotate; i.e., it ticks one way then the other, the motor wiring will need rearranging in the socket. If the motor turns the wrong way, plug the socket in the opposite way.
If intending to have a board made commercially, use the “Eagle Board Milling” file.
The “PCB Etching” file has bigger pads to improve adhesion during image transfer (ironing) and provides more copper for better adhesion to the board.
Warning: the program makes use of the pull-up resistors on the Arduino board for voltage protection. No external resistors have been used in this design. Use of the L293D is recommended because it has in-built protection.
“Section 3” Concluding
It has been 2 years since designing the Tracker, and it is safe to say that it provides very accurate tracking up to 90 minutes, consistent with accurate polar alignment.
Large Imperial version:
Similar profile to the Metric version, for exposures up to and beyond 60 minutes - say 90 minutes.
Drive Arm hinge - Drive Nut pinion / Drive Shaft centre = 16 inches; Drive Arm hinge - Contact Point = 14 inches; Drive Arm and Camera Arm hinge = 4 inches. Pack up the Camera Arm hinge with 2 layers of 80 gsm paper, because the uncorrected error after 60 minutes is half that of the metric version.
Compact Imperial version (see Section 3 Acknowledgements):
Indicates superior tracking up to 40 - 45 minutes with no camera arm correction (packing up, as in the TEM Tracker design) and may be ideal for hand driven exposures of shorter duration. A computerised, motor driven version should demonstrate exceptional tracking to 42 minutes - more than enough.
DA hinge - DN pinion / DS centre = 14” ; CA hinge - CP = 12.92” ; DA hinge - CA hinge = 1.9”. No packing is required. Calipers may be useful for measuring down to 1/100”.
Dave Trott, the original designer of the Double Arm Drive, proposed the concept in Sky & Telescope magazine in 1988. His website, which contains a wealth of information, is also beautifully designed.
My brother, the interested sceptic, and the brains behind the spreadsheet. The spreadsheet enabled experimentation with various component dimensions.
Mike Mohaupt - whose Compact Imperial design prompted further research to optimise performance, which provided the data for 1/4” 20tpi dimensions.
Open source software (Linux) - Qcad.
Arduino half-step library. Note: the Stepper.cpp file above has been modified to suit Arduino 1.x (WProgram.h changed to Arduino.h).
Not forgetting Stellarium, an excellent open source desktop planetarium.
GIMP, the image manipulation program, another open source astronomical imaging tool.
The CHDK developers and many excellent sites devoted to digital astrophotography and Double Arm Drive design.
This work is licensed under a Creative Commons Attribution-Noncommercial 2.5 Australia License.
The information on this site is provided in good faith. The author/owner of the material of this site accepts no responsibility for reader/user outcomes, of any nature, directly or indirectly associated with this and/or any other site associated with, or affiliated, by any means or interpretation. Please use the information freely, at your own risk.
I wrote this several years ago. It was intended for Slackware and now also covers Ubuntu 8.10, 9.04, 10.04 and 11.x (not tested). However, it may be getting a little out of date and is provided for reference only.
IRDA (FIR mode) on Toshiba laptop with an smsc-ircc IrDA device and no BIOS setting.
Because the laptop came with IRDA, this was more of a challenge than anything else, and more difficult than first imagined. Most people get SIR working, I didn’t!
Acknowledgements to the various IR sites - GMane in particular.
If your laptop (Toshiba) is equipped with an “ISA bridge: Intel Corporation 82801DBM (ICH4-M) LPC Interface Bridge”, and a 24cc controller or similar, it will require the smsc-ircc2 kernel module driver. Patches are added from time to time and may be viewed on Gmane; http://blog.gmane.org/gmane.linux.irda.general
If you want to support a specific combination of bridge and controller Gmane may be a good place to start, to see if your combination is supported.
Slackware 11.0, 12.0 and 12.1 running a recent 2.6.x kernel, and more recently ubuntu up to 11.x
Please read the documentation for your distribution.
NOTE: The smsc-ircc2 module is experimental and may break your system.
Latest irda-utils, openobex and a recent 2.6.x kernel (still working with ubuntu 2.6.27-9-generic and later).
Not an issue with ubuntu
Networking > IRDA (compiled as modules).
ISA and Serial support enabled (SIR capable).
The smsc-ircc2 module is experimental, therefore it is necessary to set > Code maturity level options > Prompt for development and/or incomplete code/drivers = y.
Please refer to the many howto’s on compiling and installing the linux kernel.
PCI by name (the relevant bits)
00:1f.0 ISA bridge: Intel Corporation 82801DBM (ICH4-M) LPC Interface Bridge (rev 03)
Flags: bus master, medium devsel, latency 0
PCI by numbers
#lspci -v -n
00:1f.0 0601: 8086:24cc (rev 03)
Flags: bus master, medium devsel, latency 0
Install the software and then create the IrDA devices (linux irda howto - 2.6 kernel)
# mknod /dev/ircomm0 c 161 0
# mknod /dev/ircomm1 c 161 1
# mknod /dev/irlpt0 c 161 16
# mknod /dev/irlpt1 c 161 17
# mknod /dev/irnet c 10 187
# chmod 666 /dev/ir*
Set the aliases in /etc/modprobe.d - kernel 2.6.x requires a separate file, e.g. /etc/modprobe.d/smsc-ircc2 will do.
Regardless of options placed in modprobe.d, I chose to pass the required options during modprobe. It was impossible to load the module otherwise.
alias irda0 smsc-ircc2
alias tty-ldisc-11 irtty-sir
alias char-major-161 ircomm-tty
alias char-major-10-187 irnet
For information on your chip, run smcinit
NOTE: Other than for setup, never run smcinit to initialize the smsc-ircc IrDA device - it will prevent IR from working.
SIR ioport: 0x3f8
FIR ioport: 0x130
FIR interrupt: 3
FIR DMA: 3
Detected IO hub vendor id: 0x8086
Detected IO hub device id: 0x24cc
Detected Chip id: 0x7a
SIR ioport register write: 0xfe read: 0xfe
FIR interrupt register write: 0x3 read: 0x3
FIR ioport register write: 0x26 read: 0x26
FIR dma register write: 0x3 read: 0x3
Initialization of the SMC 47Nxxx succeeded
Windows device manager indicates the following values
I/O 02f8 - 02FF (SIR)
I/O 0130-0137 (FIR)
IRQ 07 (FIR)
DMA 01 (FIR)
There are differences in some values between Windows and smcinit. I used the smcinit values.
The SIR serial device in this case is /dev/ttyS0
but may vary depending on your hardware.
/dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 4
From smcinit above, SIR ioport: 0x3f8 = /dev/ttyS0 Port: 0x03f8.
dmesg will provide the same information concerning your serial ports, however, it may be necessary to match the correct serial driver with the irda hardware, as they may vary from machine to machine.
To initialize FIR, first disable the serial device
#setserial /dev/ttyS0 uart none
or whatever your SIR port /dev/ttySX is.
Load the smsc-ircc2 module using the values provided by smcinit;
# modprobe smsc-ircc2 -v --ignore-install ircc_dma=3 ircc_irq=3 ircc_fir=0x130 ircc_sir=0x3f8
--ignore-install was not always necessary, but occasionally the module would not load without it. See “man modprobe” for details. ircc_dma=7 also works; otherwise, the values are fixed as far as I can tell. -v is for debugging.
# dmesg | tail
Detected unconfigured Toshiba laptop with Intel 8281DBM LPC bridge SMSC IrDA chip, pre-configuring device.
Setting up Intel 82801 controller and SMSC device
Overriding FIR address 0x0130
Overriding SIR address 0x03f8
SMsC IrDA Controller found
IrCC version 2.0, firport 0x130, sirport 0x3f8, dma=3, irq=3
No transceiver found. Defaulting to Fast pin select
IrDA: Registered device irda0
#irattach irda0 -s
If all has gone well you should see something similar to this in /var/log/messages
Oct 20 16:32:38 localhost irattach: executing: '/sbin/modprobe irda0'
Oct 20 16:32:38 localhost irattach: executing: 'echo xx > /proc/sys/net/irda/devname'
Oct 20 16:32:38 localhost irattach: executing: 'echo 1 > /proc/sys/net/irda/discovery'
Oct 20 16:32:38 localhost irattach: Starting device irda0
“xx” is the laptop - 1 device discovered.
Then run irdadump to verify the whole process. You should see your computer and any device that you used to test the link. In this case a Palm.
07:32:53.284907 xid:cmd 286e7df5 > ffffffff S=6 s=5 (14)
07:32:53.374893 xid:cmd 286e7df5 > ffffffff S=6 s=* xx hint=0400 [ Computer ] (18)
07:32:54.462330 xid:rsp 286e7df5 < 3ea004c9 S=6 s=5 zz hint=8220 [ PDA/Palmtop IrOBEX ] (20)
07:32:55.834526 xid:cmd 286e7df5 > ffffffff S=6 s=0 (14)
xx is the computer name, zz is the Palm username.
And, just to be sure, the following shows the Palm device.
# cat /proc/sys/net/irda/discovery
IrLMP: Discovery log:
nickname: zz, hint: 0x8220, saddr: 0x286e7df5, daddr: 0x3ea004c9
This start|stop|restart script is adapted from the slmodemd script. I added module loading and unloading to ensure that all relevant modules are loaded before the smsc-ircc2 module (otherwise IR will not work), and to unload the modules when stopping IR, ready for the next start. Not strictly necessary, but it is cleaner and prevents problems.
NOTE: Don’t use this script in ubuntu
#!/bin/sh
# /etc/rc.d/rc.irda - start/stop IrDA, adapted from the slmodemd script
case "$1" in
start)
  echo -n "Starting irda:"
  [ -x /sbin/setserial ] && /sbin/setserial /dev/ttyS0 uart none
  /sbin/modprobe smsc-ircc2 --ignore-install ircc_dma=3 ircc_irq=3 ircc_fir=0x130 ircc_sir=0x3f8
  /usr/sbin/irattach irda0 -s ;;
stop)
  echo "Shutting down irda"
  killall irattach 2>/dev/null
  /sbin/modprobe -r smsc-ircc2 ;;
restart)
  $0 stop; sleep 1; $0 start ;;
*)
  echo "usage $0 start|stop|restart" ;;
esac
Make it executable - as su or sudo
#chmod +x /etc/rc.d/rc.irda
Run as su or sudo
Irlan, irnet, rfcomm, phone and pda connections etc, are adequately explained in other tutorials.
I did manage to sync the Palm with my desktop. Set /dev/ircomm0 in Kpilot or Jpilot preferences. If you are using Gnome, it’s under Evolution, Edit>Synchronisation options… menu, or the Gnome Preferences menu.
ubuntu setup: up to and including 11.x should be OK.
1. Ubuntu kernel has smsc-ircc2 module configured.
2. In /etc/modprobe.d/irda-utils add line: alias irda0 smsc-ircc2
3. In /etc/default/irda-utils edit: DEVICE="irda0" SETSERIAL="/dev/ttyS0" SMCINIT="no"
4. In /etc/init.d/irda-setup, under FIR="smsc-ircc2"; add the line:
OPTIONS="--ignore-install ircc_dma=3 ircc_irq=3 ircc_fir=0x130 ircc_sir=0x3f8"
NOTE: Invoking smcinit, or setting SMCINIT to "yes" in /etc/default/irda-utils, prevents the operation of IR. Ensure other related modules are loaded before invoking /etc/init.d/irda-utils start.
It has been some time since I added anything useful to this blog, so here is a tutorial for beginners on astronomical image processing using GIMP. GIMP is open source and free; Photoshop is proprietary software that performs similar functions.
This tutorial is intended to provide a method for photographers of any skill level to try their hand at astrophotography, without the need for specialist equipment. Anyone with a digital camera and a tripod can easily take images of the night sky and produce satisfying results with basic, and in this case free software.
And, yes! The process described here is similar to Pixinsight, DSS, AstroArt and other astrophotography processing software, but is really intended for anyone with a passing interest or just starting out.
There is a change to this tutorial, the result of a closer look at DSLR image processing. Some of the conventions applied to DSLR images are really intended for dedicated astro CCD images. Notes are provided explaining the differences.
A very basic introduction
This tutorial uses jpeg images, because all digital cameras produce jpeg images. Serious astrophotographers, using DSLR cameras, shoot RAW. Forget about this for now. If you can't shoot RAW, jpeg is just fine.
There are several reasons for calibrating astronomical images. Primarily, to make our images look better, we want to reduce noise and retain detail in the final image, that is, we want to increase the Signal to Noise ratio (SNR). If we expose too long, finer detail is obliterated, too short and the image is dominated by noise. In any case, because we are taking images in low light (at night), noise is a problem. So what is the best exposure time to use?
If you have a tracking device that follows the stars (see the Tracker page - you may wish to build one) and the sky at your location is polluted by suburban lighting, then depending on the ISO setting, 1 - 5 minutes is usual. At a dark site (no suburban lights), exposures will be much longer - counterintuitive, perhaps, and a separate discussion altogether. Astrophotography can be complex. Here we wish to deal with the basics.
If you have a fixed tripod, you will most likely use a high ISO (1600 - 3200, perhaps higher), a short focal length lens, wide aperture and short exposures. At 24mm, stars start to show trailing after about 10 seconds. This tutorial is based on the calibration of a single 10 second exposure - 24mm, f/4.0, ISO 1600. Don't be put off by all of this. If you have any sort of camera with a manual setting, particularly the ability to take longer exposures, that will do for now - this tutorial is designed for you.
You will need to know where to find certain functions in GIMP. Referring to the File menu - I use the convention, File > Open as Layers, to indicate that you need to select File and Open as Layers. Another, Image > Flatten. And, Windows > Dockable Dialogs > Layers. Another, View > Zoom > 4:1 (400%). You will use all of these in this tutorial. Photoshop has something similar.
Digital images taken under low light conditions are noisy. The noise is produced by the electronic, thermal and optical properties of the camera and lens, plus ambient noise. Ambient noise changes from image to image (basically, a consequence of light conditions at the time). Because ambient noise is random, combining several images averages it out remarkably well, and we can pretty much forget about it for now.
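To see why combining helps, here is a small Python sketch - simulated numbers, not camera data. Averaging 16 fake "frames" of the same signal plus random noise cuts the noise by roughly the square root of 16, a factor of 4, which is exactly the SNR improvement stacking gives us.

```python
import random
import statistics

random.seed(42)

SIGNAL = 100.0   # the "true" pixel value we are trying to record
SIGMA = 10.0     # standard deviation of the random (ambient) noise
N_FRAMES = 16
N_PIXELS = 2000

def noisy_frame():
    """Simulate one frame: the true signal plus random noise at each pixel."""
    return [SIGNAL + random.gauss(0, SIGMA) for _ in range(N_PIXELS)]

# Average the frames pixel by pixel, as stacking does.
frames = [noisy_frame() for _ in range(N_FRAMES)]
stacked = [sum(pix) / N_FRAMES for pix in zip(*frames)]

print(round(statistics.stdev(frames[0]), 1))  # noise in one frame, close to SIGMA
print(round(statistics.stdev(stacked), 1))    # close to SIGMA / sqrt(N_FRAMES)
```

The signal stays at 100 either way; only the scatter around it shrinks.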
Sources of noise…
Electronic (Bias) - when the camera takes a photo, the sensor is activated electronically and this leaves a characteristic pattern of noise (cross hatch on the Canon 1000D), easily seen in a low light image. We take a bias frame and subtract it from our image or images.
Thermal (Dark) - as the camera sensor acquires an image over a long duration it gets hot. Heat produces a characteristic thermal noise signature. We take a dark frame and subtract it from our image or images.
Note: The bias is present in all images… it's really a fixed pattern typical of CMOS sensors - CCDs have a true bias, but we DSLR users can think of them as the same. We will treat them the same way - very convenient.
Optical (Flat) - the combined camera sensor and lens (optical train) has a characteristic appearance that shows up as dust spots, vignetting and variations in individual sensor pixels (not all pixels are equal in operation or light gathering capability). We take a flat frame and divide it into our image or images.
Ambient (Random) - reduced by combining images.
In fact, it's a little more complicated, in that we take several bias, dark and flat frames and average each set to create a master bias, master dark and master flat, thereby obtaining a better average of each type of noise.
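As a toy illustration of that arithmetic, here is the whole calibration in a few lines of Python. The pixel values are made up (a four pixel "image"), and scaling the flat to a mean of 1 is one common convention, not the only one:

```python
# Toy frames as flat lists of pixel values (0-255). All numbers are invented.
bias_frames = [[10, 11, 10, 12], [11, 10, 12, 10], [12, 12, 10, 11]]
dark_frames = [[14, 16, 15, 18], [15, 15, 16, 17]]
flat_frames = [[200, 190, 180, 160], [202, 192, 178, 162]]
light       =  [120, 130,  90, 200]

def average(frames):
    """Per-pixel mean: this is how each master frame is built."""
    return [sum(pix) / len(frames) for pix in zip(*frames)]

master_bias = average(bias_frames)
# Subtract the master bias from the averaged darks and flats.
master_dark = [d - b for d, b in zip(average(dark_frames), master_bias)]
master_flat = [f - b for f, b in zip(average(flat_frames), master_bias)]

# Calibrate the light: (light - bias - dark) / flat, flat scaled to mean 1.
flat_mean = sum(master_flat) / len(master_flat)
calibrated = [
    (l - b - d) / (f / flat_mean)
    for l, b, d, f in zip(light, master_bias, master_dark, master_flat)
]
```

Subtraction removes the fixed bias and thermal patterns; division by the flat lifts pixels that the optics darkened (vignetted corners, dust spots).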
Let’s leave it at that. There is a good deal more to effective astro imaging. For now, we want to calibrate our newly acquired image and impress our friends and family - if indeed, they can be impressed.
The light image
We will use a single image in this example. The constellation Orion rising in the East. The Great Orion Nebula (M42) can be seen in the handle of Orion's sword. A front yard perspective - tree, shrubs and power lines. The camera was mounted on a fixed tripod, lens focal length 24mm, ISO 1600, focal ratio f/4.0, 10 seconds exposure time. Note the characteristic brown hue of light pollution. Because it's a single exposure, noise is quite evident.
The calibration frames
We begin by taking bias, dark and flat frames. Please note: the images shown have been processed to show the detail. Out of the camera, bias and dark frames appear as black frames. The flat will be a uniform light colour. I used Colors > Auto > Equalize, to reveal the noise.
Bias - cover the lens (put the lens cap on, or cover with something that won't let light into the lens) and set the camera's fastest shutter speed - no need to change any other settings. Press the shutter release - 1 bias frame.
Dark - cover the lens as before and set the exposure time to that of the light frame. In this case 10 seconds. Press the shutter release - 1 dark frame.
Note the bias noise in the dark frame below. Because the dark is a 10 second exposure, there is not a lot of thermal noise and it looks very similar to the bias. Look closely and you will see the differences.
Flat - a bit more complicated, but well worth the effort, as you will see. This can be done in various ways, and each astrophotographer has their own tried and true method. For our purposes, we want an image of what the camera sensor and lens themselves contribute - the inside-the-optics stuff. Focus must be the same as for the light frame. Here is an easy method, adequate for the purposes of this tutorial:
On your computer desktop open GIMP (Photoshop) and create a New frame. Maximize to fill the screen - the default is a white frame. If you wish, place a sheet of A4 paper over the computer display. Hold the camera lens (objective) so that it just touches the paper (as close as possible). Now, set the shutter speed to give an exposure (referring to the image preview histogram) of approximately 70 - 75% (make sure auto focus is off). Press the shutter - 1 flat frame.
Until you are familiar with your camera, taking flats may require some experimentation to get right. This flat was taken at ISO 1600 and is quite noisy, but it's OK to take flats at the lowest ISO provided by the camera. Note the vignetting at the corners (dark areas) and the dust spots on the face of the low pass filter, which is exposed to the air.
Calibrating the calibration frames
That’s right! First we must calibrate the calibration frames! You may have guessed it, but we need to subtract the bias from the dark, flat and light frames. This is easily done in GIMP (or Photoshop). With GIMP open on your desktop, File > Open as Layers, the dark and bias frames. You may need to go to the Windows > Dockable Dialogs > Layers, and open the Layers dialog, if it’s not already displayed.
Note: We can elect not to subtract the bias frame from the dark frame. But we will subtract the bias from the flat. We would only subtract the bias from the dark if we intended scaling the dark (very technical, but for completeness included here).
It is inconvenient, but the first image opened as layers is automatically set as 'Background' - so we are left to deduce which is the bias and which is the other frame. There are only two, fortunately, but still use a proper naming convention; that is, bias, dark and light should be the names of our frames for this tutorial (it's easy to mix them up otherwise). Set the dark frame as the background image (that is, at the bottom of the layer stack). The bias will be the image above. Highlight the bias and set Mode to Subtract. Now flatten the image (Image > Flatten) and save as masterdark.jpg.
Do the same with the flat and light images, saving as masterflat.jpg and biassubtractedlight.jpg - we have our master dark and master flat frames and a bias subtracted light frame. The bias is of course the master bias and the only frame that does not require calibration in this instance.
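For the curious, Subtract layer mode works per channel: the value of the upper layer is taken away from the layer below, and negative results are clipped to 0 (pixel values cannot go below black). A tiny Python sketch with hypothetical pixel values:

```python
def subtract_mode(bottom, top):
    """Per-channel subtract, as a GIMP Subtract layer does:
    bottom minus top, clipped at 0 so values never go negative."""
    return [max(b - t, 0) for b, t in zip(bottom, top)]

dark = [40, 15, 200]   # hypothetical values from the dark frame (bottom layer)
bias = [12, 20, 12]    # bias frame (top layer, Mode = Subtract)
print(subtract_mode(dark, bias))  # [28, 0, 188] - note the clip at 0
```

The clipping is why order matters: the noisier frame goes on the bottom, the frame being subtracted on top.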
Now, I have used different names for each image, because that's what I was using when I created the screen shots for this tutorial - still a naming convention that I understand. However, the names suggested above more accurately represent the state of calibration of the dark, flat and light frames.
For an imaging run of several hours, we might take 40 or 50 bias frames, 30 or 40 dark frames and 20 or 30 flat frames and combine the bias frames to create a master bias, and subtract it from the dark, flat and light frames, as we have done here. We may even shoot the flat frames at a lower ISO and require a separate set of bias frames with which to calibrate - naming conventions and ordering our folder structure is very important for smooth execution of the calibration and processing task.
Applying the calibration frames
Let’s keep in mind that the way in which we are approaching this task is slightly different to a sophisticated program such as Pixinsight. But not too different. The principles are essentially the same - we are working with what we have.
With GIMP on the desktop, File > Open as Layers the biassubtractedlight.jpg, the masterdark.jpg and the masterflat.jpg. The biassubtracted light is set as the bottom image; next is the masterdark with Mode set to Subtract. The top image is the masterflat with Mode set to Divide. The result is shown below.
What have we done? Indeed, what have we done? Well, we have calibrated all the images by subtracting the master bias from each dark, flat and light frame and then subtracted the dark from the light frame, dividing the result by the flat frame. However, see the note below.
Note: As in the previous note, we can make this much easier by subtracting the bias from the flat frame only, and use the dark as is - that is, do not subtract the bias from the dark. This is a better method for use with DSLRs. However, the method shown here works quite well with GIMP.
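Divide mode is the interesting one: dividing by the flat brightens the areas where the flat is dark (the vignetted corners and dust shadows), evening out the field. The exact formula below is my assumption of how a legacy Divide mode scales the result back into the 0-255 range; the behaviour, not the constants, is the point:

```python
def divide_mode(bottom, top):
    """Roughly what a Divide layer mode does per channel (the precise
    formula is an assumption): bottom divided by top, rescaled to 0-255.
    Where the flat (top) is dark, the result is brightened."""
    return [min(255, 256 * b // (t + 1)) for b, t in zip(bottom, top)]

light = [100, 100, 100]          # a uniform sky, after dark subtraction
flat  = [255, 200, 150]          # flat: darker towards a vignetted corner
print(divide_mode(light, flat))  # the vignetted pixels come out brighter
```

A pixel where the flat is at full value passes through unchanged; the darker the flat, the stronger the correction.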
Now! If we had a stack of light images, we would do the same for each and combine them to reduce the ambient noise. Effectively, we have increased the SNR. Take a look at the frame below. There are two very ugly blue pixels (cold/dead pixels). My camera sensor has several of these, as well as bright red hot pixels (always on).
Voila! No blue pixel - it was definitely noise. After applying the master dark the image is looking better. The thermal, reddish appearance is diminished too.
One additional step will improve the result and get rid of that brown sky. If we normalize the masterflat, we set the pixel values to upper and lower limits, providing ourselves with some colour calibration. (This can be done in situ by deselecting the 'eye' for the light and dark frames, leaving the flat as the active image, then selecting Colors > Auto > Normalize.) The master flat can be normalized beforehand if desired. Don't forget to re-select the eyes for the light and dark frames.
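Normalize is just a linear stretch: the darkest value is pulled down to 0 and the brightest up to 255. A Python sketch of the idea, applied to one channel with hypothetical values (a real image would have three channels):

```python
def normalize(channel):
    """Linear stretch: the darkest pixel becomes 0, the brightest 255.
    Everything in between is rescaled proportionally."""
    lo, hi = min(channel), max(channel)
    if hi == lo:
        return channel[:]   # a flat channel: nothing to stretch
    return [round((v - lo) * 255 / (hi - lo)) for v in channel]

# A compressed, brownish sky channel spread out to the full range:
print(normalize([60, 80, 100, 120]))  # -> [0, 85, 170, 255]
```

Stretching each channel to the same limits is what pulls the brown light-pollution cast back towards a neutral sky.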
While originally a single frame of only 10 seconds, the appearance is greatly improved. Contrast, brightness, smoothness and colour balance - particularly the sky - are a better representation of what was captured by the camera sensor, which is much more sensitive than the human eye. Compare the 3 light images at each stage of processing and notice the steady improvement. Now it needs to be cropped… as you please.
A word on combining (stacking) images in GIMP
The composition of the example image does not lend itself to automatic alignment, even though there is a GIMP astrophotography package that has a stacking tool, among other things. Still, it is possible to stack images manually in GIMP. For calibration frames it's very easy.
For example, if we did take several bias, dark and flat frames, File > Open as Layers, all the bias images and apply the Average script downloadable from the GIMP repository. Flatten and save as masterbias. Do the same with the darks and flats (now we’re getting into the jargon).
It's a little different with light images, because the stars will have moved between exposures (this is actually a good thing for noise reduction - and similar to dithering). Part 2 covers stacking and alignment, noise reduction and basic image enhancement.