Monday, September 24, 2012
There has been a lot of interest in this thread in recent times so I’ve provided a short summary, mainly for the benefit of newcomers to DSLR astrophotography - I hope it’s helpful.
Dithering is, in my view, one of the most useful and effective image-acquisition techniques for increasing SNR with DSLR cameras, saving hours of processing time and sparing you frustratingly poor results.
There has been a bit of discussion on various forums about substituting dithering for calibration. It sounds attractive, but:
***dithering is not intended to be a substitute for calibration - I recommend reading this, as well***
Dithering and image reduction serve different purposes with the same aim - increased SNR. Temperature must be considered when using DSLRs, requiring dark subtraction. Dithering, however, will hide the majority of temperature-related calibration errors/inaccuracies, as well as several other types of artifacts… read on.
You can also read about dithering in Berry and Burnell’s, “Handbook of Astronomical Image Processing,” where they recommend displacement of images by at least 12 pixels. There are several informative academic papers on-line, as well.
Backyard EOS has dithering capability, however, I have never used the program. My setup is Arduino based, controlling the RA or DEC axis between images, simply by slewing to present the camera to the target, displaced by 10 - 15 pixels or more between images - it’s that basic.
The comparison image is intended to accentuate the underlying issues with the image on the right. No attempt has been made to minimize the effect with post processing. The image was stacked and stretched - please note that the red streaks were not evident in individual subs and only appeared after integration. In fact, I naively spent hours trying to salvage that image - a complete waste of time. The image on the left was taken with the same camera, dithered.
Rather than spending time eradicating/covering up unsightly problems, time was spent lifting out detail, which in the image on the right was partly obliterated by poor acquisition - no dithering.
Here is the pattern I follow… it keeps the image within the sensor boundary. I use a look up table in the Arduino program to schedule the correct hand controller button activation.
While struggling to produce a decent image, it became apparent that the physical and electronic characteristics of DSLR CMOS sensors demand more attention, and that an understanding of the limitations of the camera's sensor is essential.
I went to the trouble of replacing the factory IR filter with a special purpose astronomical filter to increase the transmission of Ha wavelengths and built a sensor cooling system to reduce dark current/thermal noise - this all worked quite well. But, there is more to sensor technology and manufacture than meets the eye.
For instance, the anti-aliasing filter, which forms part of the dust reduction system in Canon DSLR cameras, is designed to reduce moiré, an artifact produced by the Bayer (RGB) colour matrix. However, reducing moiré tends to soften the image, even with a focus mask at ‘perfect focus’ - consequently, the AA filter should be removed.
But that’s not the end of the story. Among the millions of pixels that make up the sensor light gathering matrix, a small percentage are dead (don’t work). Some are on all the time (hot) and overall, pixels differ in their ability to convert photons to electrons. For daylight photography this isn’t a problem, as a rule.
There’s more. The pixel matrix and associated electronics produce fixed pattern noise. Heating of the sensor during extended operation also increases noise. Random noise is a function of arriving photons and is different for every frame. The optical properties and cleanliness of the sensor, filters and lens also produce artifacts.
Combined, this all conspires to ruin the image to which you have dedicated copious amounts of time at the wrong end of the day, when perhaps, you should be sleeping.
So what’s the solution? In principle, reducing the effects of the deleterious electronic and physical influences inherent in the optical system is quite simple; in practice, however, it involves another layer of complexity - that is, DITHERING!
Dithering is an authentic solution because it addresses noise suppression, optical and sensor artifacts - dithering does not replace proper image calibration techniques. It will however, greatly improve results and avoid problems that no amount of calibration or sensible image processing can resolve.
So, what is dithering?
Dithering is the practice of shifting the sensor (camera) between images, so that each new image is slightly offset from the previous image. The image is sampled by different pixels; that is, the image moves, ideally, by 10 - 15 pixels, from the position of the preceding exposure. With careful management the target image remains well within the sensor boundary.
Dithering can be random using a hand controller and estimating the offset by timing the button push in DEC or RA. Better still, an automated system that dithers in a box shaped spiral, or maze pattern. The goal is to avoid a succession of images occupying the same or neighbouring pixels (which produces poor results) and to prevent the target object moving out of the FOV.
Dithering, particularly with DSLR cameras, improves signal to noise ratio for very little effort. It can be a win-win for the astrophotographer.
Executed properly, dithering deals effectively with random noise, hides hot and cold pixels, and improves flat fielding and sub-pixel sampling; that is, capturing the image over a range of pixels means we are not sampling the same, and possibly less efficient, pixels repeatedly for the same object location.
Calibration is not always as effective as we would like with DSLR images. And even if the images weren’t calibrated, a dithered stack would produce pretty good results.
Wednesday, May 16, 2012
The Bow Tie focus mask, described here, is derived from Carey and Lord focus masks, which are types of diffraction gratings, similar to the well known Bahtinov mask.
The bow tie mask was purpose designed to suit a small aperture, short focal length lens. The four obstructions are intended to produce splayed double spikes, similar to the Carey mask, while eliminating the grating typical of focus mask designs. The wide obstructions and absence of grating increases the brightness of the diffraction spikes - discernible with a small lens.
The junction of the obstructions also provides an area of certainty. A central spike perpendicular to the double splay is generated at focus. This spike is not present otherwise. Another phenomenon of this design is the presence of red and/or blue fill within the splay of each pair of spikes.
The bow tie mask is easy to make. A flat section of rigid plastic is easily cut to shape with a hobby knife and steel rule. The clear plastic can be coated with black indelible marker. Sharp straight edges are essential.
Using the bow tie mask is straightforward. Equal spacing of each pair of spikes and the presence of the perpendicular spike indicate focus.
Saturday, December 10, 2011
The links below describe version 5 of the base cooling system, using a full spectrum modified Canon 1000D/XS/Kiss F (or 450D, which is of similar construction), fitted with an Astronomik UV/IR Clip-in filter. The notes are divided into 3 main parts and sub-sections, mainly to keep file size reasonable.
UPDATE: New Teensy code. Removed the delay() function used in the previous version. This code is a variation on the ‘blink without delay’ state machine code, which may be found on the Adafruit site. If it is not working for you, try reassigning pins, referring to the old code further down the page. This version also runs a small 1 inch OLED, which is not essential. It also has pin allocations for the automatic switching of sensor heating and telescope dew heating, plus a wider setpoint range - +15 to -10C.
This is the 450D version. The heat load is quite a bit more than the 1000D and it uses a slightly different profile. I have tested this code quite a bit and it is working well. It does dither over a few seconds, roughly ±1C either side of the setpoint - slightly different behaviour than with the use of delay().
This is the 1000D version. Heat load is about 40% less than the 450D, consequently the profile is different.
Previous 450D Teensy code with delay() function. Use this if not getting results with the new version.
Otherwise stick with this version Teensy Code - .ino file This should work for 1000D and 450D.
Note: Please read this.
Note: A better resistance heater for the camera sensor. I have removed the sensor heater notes. The one shown in the following link is far superior, as is the use of clear glass in place of the low pass filter. The advantage is less energy and less reduction in the maximum cooling differential (<1C from my measurements).
If the low pass filter is retained, keep in mind that it is also an anti-aliasing filter and may soften images - maybe that isn't such a bad thing? This works a treat - tested at -10C.
Why cooling - a very basic explanation
For anyone not familiar with the reasons for cooling a digital camera sensor: the purpose is to reduce dark current (thermal noise) generated during long exposures - the result of sensor heating.
Reducing the temperature at which an image is acquired improves its quality because signal to noise ratio (SNR) is improved.
The cooling system described here, depending on the Thermoelectric module (TEC) and heatsink rating, is capable of reducing sensor temperature by 18 to 30C.
Having completed this prototype - short of 3D printing the electronics compartment and light-exclusion shroud, which is black gaffer tape at the moment - the electronics package is performing surprisingly well.
Cool down from 16C to -5C took approximately 2 minutes. Once at temperature (-5C), the on-temperature/setpoint LED came on and remained on, except for a few brief moments, which I suspect were sensor read errors/spikes, throughout the first run of dark frames. The changes to the circuitry have been more successful than anticipated.
An advantage of regulated cooling is the acquisition of dark libraries. Typically, the system presets are 5C, 0C and -5C, at ISO 800 and 1600 and various exposure times. However, most of my imaging seldom exceeds 210 seconds, unguided. The range is extended to -10C with the improved sensor defogger or the use of an argon bag - links above.
Note: The notes that follow are dated in some places. The battery compartment idea incorporates camera power and is less intrusive. It is experimental at this time and up for redesign of the PCB.
It was possible to include all the cooling, heating and power supply requirements on one double sided board. The hardware is otherwise unchanged, except for a number of surface mount components. The board is better manufactured professionally.
If anything, the code could be improved and I will work on that in due course.
Appendix 1 Canon 1000D Thermoelectric Cooling Conversion - PCB etching. This is the basic board - updated.
Appendix 2 Canon 1000D Thermoelectric Cooling Conversion - Arduino/Teensy Code txt file. New version - corrected error to pwmV code.
Appendix 3 Canon 1000D Thermoelectric Cooling Conversion - Arduino/Teensy Code - .ino file New version - corrected error to pwmV code.
Monday, November 28, 2011
Basic astrophotography image processing in GIMP - Part 2: increasing SNR (image alignment, integration and enhancement)
I thought this section deserved more attention. Picking up where we left off in part 1, we discuss combining images - in astrophotography jargon, stacking and aligning, or more correctly, registration.
Please remember that these tutorials are intended for beginners, using very basic equipment and software. The methodology is the basics of image calibration and processing, but very much hands on, using what we have at our disposal.
Recapping, the purpose of combining images is to increase the signal to noise ratio (SNR); that is, less noise and more signal, improving the overall appearance of our combined final image - our integrated image (more jargon).
We are going to select the best light frames and combine them into a single image. But, noise reduction strategies start before uploading images to our computer. We employ a nifty method during image capture; that is, we make sure that our images are slightly offset one from the other during the imaging session (yet more jargon). The technical term for this is dithering, a science and a separate discussion altogether.
For our purposes, however, we will take advantage of our fixed setup. Noting that the stars move across the sky from East to West at 15.0416 degrees/hour (the sidereal rate), we let them drift across the camera sensor between exposures. Of course, after a while the object we are imaging will drift out of view, but for 6 or 10 images there should be no need to recenter our target.
In part 1 we exposed for 10 seconds. Adding a 3 second delay between exposures ensures that a few pixels separate each image from the previous one - in effect offsetting our images. Very crude dithering, but effective all the same. Furthermore, once complete, our total exposure time is 60 seconds vs 10 seconds. SNR increases by the square root of the number of combined images: 2 images increase SNR by a factor of 1.414 - approximating for our purposes.
So, starting where we left off in part 1, the image below shows the second and third images in our set of calibrated light images - we have already aligned the bottom and second image in the stack. In this case, the third image is selected with Mode set to Difference (and View 8:1, for clarity). This layer is transparent, showing the difference between the two images as they came out of the camera. We can use the drag tool to align the transparent (difference layer) with the image below.
And this is the result in Difference mode. The pixels have been aligned.
We then set Mode to Normal and select the image above, by selecting its ‘eye’ and highlighting the layer, setting its Mode to Difference. As before we drag the image into alignment with the image below, and so on up the stack.
Note: We loaded our images, File > Open as Layers, and need to deselect the ‘eyes’ of the images above the image that we are dragging so that it is visible.
The image below is the first of our image stack (the ‘eyes’ above it are deselected to make it visible). It’s noisy.
Let’s see what happens when we average the images; that is, with Mode set to Normal for all images (all ‘eyes’ selected), we leave the Opacity of the bottom image at 100% - the default setting. Select the second image and set it to 50%, the third to 33%, the 4th to 25%, the 5th to 20% and our 6th image to 17% - each layer’s opacity is 100 divided by its position in the stack, so all six frames contribute equally to the average.
As you proceed up the layers, note the change - dithering has been to good effect and pixels that were not removed during calibration are hidden behind good pixels. Additionally, because ambient noise is random the image is becoming less noisy. If we had 50 or 100 images, noise would be reduced even further. Still, for 6 images the result is impressive - as below - and much smoother.
Just to finish things off, Image > Flatten, to fuse all the layers together. Apply a sharpening algorithm to the luminosity layer. This can be found at FX-Foundry > Photo > Sharpen > Luminosity Sharpen. You can also use Filters > Enhance > Sharpen (Smart Redux), or any of the sharpening algorithms available for GIMP. Avoid unsharp mask if you can; it too tends to overdo the image (my personal view).
And here is our completed image.
For comparison, the image below is the final image from part 1, which is a single layer, as opposed to 6 layers in the image above.
Comparing the position of the constellation Orion on the frames shown, it should be evident that any one of our light images may be selected as the base or background image, framing the scene as preferred. Terrestrial objects do not align in any case, and we have to live with that.
The availability of free programs to perform calibration, registration and integration, and then using GIMP to finish off with brightness, contrast, colour and enhancement, makes the process much easier. (Keep in mind that images that contain terrestrial objects may interfere with alignment in some programs, essentially designed to align stars).
The next step perhaps, is to use RegiStax or Deep Sky Stacker (DSS) to do all the heavy lifting (calibration, registration and integration of our images) and follow up with GIMP. Now we are getting into serious amateur stuff. But, we can still use our fixed tripod/camera set up, to take beautiful shots of the Milky Way, well beyond the spectrum of the human eye.
StarTools is a new and innovative astrophotography processing program. The author has gone to extraordinary lengths to create a program applicable to amateur and professional astrophotographers. ST is an image processing toolbox that is non-destructive to your image data. If desired, it will track every processing step, among many other attributes, including intelligent, detail-sensitive noise reduction applied to the final draft image. But I suggest reading what the author has to say and trying out the demo version, which is fully functional except for image saving. Be patient. There is a learning curve associated with all image processing applications.
Perhaps you need one of these.
Saturday, November 26, 2011
EDIT: When I first wrote this, it followed several years of dabbling in various designs and versions, hand and motor driven. There are of course many ways to skin a cat, and this is just one of them. Think of this blog as useful information, if in fact it is useful. I am a fan of the curved rod design for its simplicity. This design was probably a response to frustration over the inherent tangent arm error of flat board, straight shaft designs - anyway, if this is helpful, have fun…
Double Arm Drives have been used to photograph the night sky for over 20 years. Originally designed by Dave Trott, based on the Haig or Scotch mount (otherwise known as a Barn Door Tracker), the Double Arm Drive is a camera platform used to track and capture images of celestial objects using long exposure times. This design is conventional and attempts to refine the tracking performance of the double arm drive. Hence, the Tangent Error Minimized “Preloaded” (TEM) Tracker - this is a prototype.
However, before proceeding this neat little design may be preferable for some readers. It is small and compact and can be driven using the Arduino electronics described later in this post, if desired. I really like what Gary Seronik has done with this design.
This how-to is intended for a wide audience; consequently, there is lots of info and no previous experience is assumed…
A few useful notes?
The unit had to be easy to build and accurate. A steel rule, sharp pencil, basic tools and a small drill, should be all that’s needed. Having said that, built-in adjustments can be used to fine tune performance to overcome minor fabrication errors.
I’ve provided as much detail as possible, along with the Arduino code, PCB template and Eagle board, for those who would like to have a commercially made Arduino shield.
In hindsight, the conventional layout is best - the camera arm, as shown in the images of the prototype, is not too stable and needs restraining to prevent the camera and lens toppling.
Although stepper motors are reliable and accurate, vibration can be a problem. A solution is gearing, which also increases torque at the drive shaft or using an Easy Driver with micro stepping.
Together, the dimensions of the tracker and packing the camera arm hinge with 4 thicknesses of 80GSM A4 paper (0.4mm) improve tracking overall.
Linux/MacOS RAW image preprocessing bash script with graphical interface and instructions. This should properly calibrate your RAW images, preparing them for deBayering, alignment and stacking.
The reader may want to post-process their images. A low cost solution is GIMP; however, Deep Sky Stacker and StarTools are a more sophisticated image processing solution.
Front and rear views - the reduction drive is more effective, increasing torque at the drive screw and minimizing stepper motor resonance.
The Lagoon Nebula M8 and M20 the Trifid Nebula. Composite of 9 x 30 second frames. Tail of Scorpius toward the centre of the Galaxy - M7, M6 the Butterfly Cluster and the Cat’s Paw Nebula - 21 x 30 second frames. Taken with a FujiFilm X-Pro1, 60mm, f/2.4, ISO 800, with preprocessing in PixInsight (Deep Sky Stacker is free) and post-processing in StarTools. I went to the trouble of taking bias, dark and flat calibration frames.
The Equatorial Wedge (EW) provides adjustment of altitude (latitude), limited to a range of latitudes in which the device is expected to be used. If attached to an adjustable tripod, directly to the Altitude board, the Azimuth board is not required and may be omitted. Although, an EW is a more rigid design and easier to set up, as shown above.
For simplicity of construction the Conventional Layout is recommended. Accurate dimensions and ensuring that the Tracker is flat when closed will ensure that it performs as expected. All Tracker dimensions are metric (unless otherwise stated), including the Drive Shaft thread.
For the non-metric world, imperial measurements for use with the 1/4 inch 20 tpi drive screw can be found at the bottom of the page in the appendix;
Design and performance testing
The dimensions of the TEM Tracker provide for very accurate tracking in the first 15 to 20 minutes of operation and subsequent tracking error is minor to 60 minutes. A design goal was accurate tracking for up to 60 minutes. In practice, performance is very accurate up to 90 minutes.
Some helpful definitions
Sidereal rate: The rate at which the Earth rotates on its axis - approximately 15.0416 degrees/hour.
Drive cycle: From boards closed to 60 minutes (zero to nominally, 15.0416 degrees).
Contact Point: The physical point at which the Drive Arm lifts the Camera Arm - 349.95mm (350mm).
Optimal Contact Point: The position at which the contact point ‘would’ intersect the Camera Arm, if it were to move (optimally) throughout the drive cycle. In practice, too complex.
Points of Rotation: Hinge and pinion centers should line up when the Tracker is closed, except that the Camera Arm hinge is slightly elevated. The performance of the Tracker is predicated on this arrangement - it’s the zero datum.
Straining at Gnats
A spreadsheet was used to calculate Drive Arm and Camera Arm dimensions, with tracking tolerances set to 4 decimal places of a degree, using the following fixed parameters;
motor speed, 1 rpm; drive screw pitch, 1 thread/mm (M6 (6mm), or M8 fine, which has the same 1mm pitch as M6).
Camera Arm - Drive Arm Trend
Optimised angular displacement of the Camera Arm was calculated to 4 decimal places at 1 minute intervals for 60 minutes; i.e., 15.0416/60. Optimal contact points were determined to match the displacement of the Camera Arm at these intervals. The start and end points are 349.95mm (350mm) and 347.11mm, respectively.
With the contact point fixed at 350mm (349.95mm) the Camera Arm is driven through 14.9517 degrees (in 60 minutes). If the contact point is fixed at 347.11 mm the Camera Arm is driven through 15.0416 degrees, which is optimal but problematic, because error is introduced during the early part of the drive cycle. The object is to drive the Camera Arm between these two points and take advantage of accurate performance at both ends of the drive cycle. This can be achieved by raising the Camera Arm hinge 0.4mm (4 thicknesses of 80gsm paper).
Calculating contact points made it possible to verify the arc derived from the CAD program; angles subtended from the Camera Arm hinge to the Camera Arm arc correspond very closely to the optimal contact points.
How did it shape up - Performance
A Canon G9, fitted with a 2x tele-converter lens and the camera set to 24x zoom - an approximate focal length of 1600mm - was used to take 10 x 64 second exposures of Spica (southern hemisphere) over 22 minutes, of which 5 were stacked, showing no apparent trailing. The others, affected by atmospheric distortion and vibration due to construction faults, were discarded, although trailing was not apparent in them either. Spica1 and Spica5 are the first and last in the series of 10 exposures - true!
Accurate tracking was observed > 30 minutes; that is, 15 minutes to resolve polar alignment using the drift method, 10 minutes to verify tracking and 22 minutes of photography, including a period of approximately 5 minutes where the setup was unattended after the shooting cycle was complete.
To show that the images are aligned and verify the ‘authenticity’ of tracking, in-camera software (CHDK) was used to combine/stack the Spica images. Compare the 5 sub exposures.
Software control of motor speed is optimal because it eliminates a variable that tends to mask other errors, such as construction faults and/or poor polar alignment.
Programming an ‘Arduino’ board, fitted with a motor shield provides very accurate and consistent motor speed. This arrangement was used to test the tracker. Alternatively, Google other types of conventional circuitry.
Planning is the key to acquiring quality exposures, which depends, in part, on proper polar alignment.
Device leveling, latitude setting and finding True North or True South (depending on hemisphere) is essential to accurate polar alignment - finding TN or TS can be the most difficult and frustrating nightly chore. Setting up references/datums during the day minimizes efforts in the dark when we should be imaging.
If you have access to Google Maps, TN/TS can be referenced to natural lines, buildings or fence lines by measuring the angle between a reference line and TN/TS (which is, of course, vertically up and down the page/screen shot).
Locate two legs of the tripod on the reference line and the third perpendicular to the reference line. Now point the axis of the drive arm hinge to TN/TS; that is, the angular difference between the reference line and TN/TS measured from Google map.
Next level the azimuth/base board of the tracker, set the latitude at your location by adjusting the altitude board up or down and check alignment with TN or TS for accuracy.
Having completed this task once, nightly set up at the same location and datums, perhaps marked on the ground, is a 3 minute job. If you have a GPS equipped phone/tablet, record the latitude and longitude of the location.
A polar alignment scope, if you have one, is the traditional polar alignment method - wide field imaging at short focal lengths is tolerant of small polar alignment error.
Shoot an image and check for drift - elongated stars. Make very small adjustments in azimuth (rotating the azimuth board) and latitude (adjusting the angle of the altitude board) to further improve polar alignment. That is, until stars are round for the chosen exposure time.
Providing the Critical Dimensions, Points of Rotation and other design conventions are observed, performance should be consistent in various configurations.
Prepare the drive end first, before committing to other measurements, referencing everything to the centre of the Drive Shaft and making sure that the 20mm (nominal) drive shaft holes in the top and bottom boards are aligned prior to marking the location of other components. Marking out the motor/drive shaft assembly end first will minimize construction errors, in particular the placement of hinges.
The boards pictured are 17mm ply coated with laminate - a cut-off picked up at a timber yard. This material is used for concrete form-work and is very stable - resists warping etc.
Notice that the motor is mounted on the top board and hinged. It may be mounted on the bottom board in a similar fashion - a matter of preference. Importantly, the centre of the drive shaft should be coincident with the centre of the motor mount hinge and the centre of the Drive Nut pinions. It may be necessary to ‘pack the motor up’ to provide clearance between the Drive Nut and the motor shaft.
An easy way to make Drive Shaft pinions, and have them match up with the Motor Mount hinges, is to cut the ends off the hinges to be used for the Motor Mount. The part with the pin is retained (see photo); additional holes are drilled to accept locking screws - use tape to hold things in place while drilling.
Another refinement is the use of springs on the pinions to minimize slack in the assembly. Alternatively, remove the pins and tap threads to fit grub screws for centering the Drive Nut (recommended).
While it is important to ensure that everything is properly aligned during construction, it is recommended that the Tracker be started slightly open - say 10 - 15mm - to stabilize the drive shaft and pinion. With the Tracker closed the drive shaft tends to lean, due to its proximity to the drive nut pinion assembly.
Nylon nuts and bolts can be easily modified with side-cutters, and are useful replacements for hinge pins and pinions - they tend to reduce the transmission of motor resonance. Nylon threads are noticeably tighter.
Tip - place a small ball of Blutak on the end of the screw before pushing it into the hinge - this will further isolate hard surfaces without compromising rigidity.
Azimuth and Altitude
If intending to mount the drive on an adjustable (sturdy) tripod, the azimuth board may be omitted.
Be careful of heavy telephoto lenses that may topple the Camera Arm - restraint is necessary.
EDIT: Update - for L293D read L293NE, which seems to run the stepper smoothly. Half stepping included in Arduino code - see acknowledgements for author credit.
The Printed Circuit Board (PCB) is designed as a motor shield and fits on top of the Arduino board. It utilises an L293D or SN754410NE H-Bridge bipolar stepper driver, and a ULN2003AN (or similar) to drive a unipolar stepper motor. A three position switch selects Forward, Stop and Reverse and a ‘Kill Switch’ stops the motor once the Drive Arm is back in the start position; the motor is held in position with its coils energised. Turn off supply power to rotate by hand, if necessary.
The L293D is probably a better choice because it has in-built protection to prevent damage to your Arduino from voltage spikes generated by the motor; the SN754410NE does not. However, the use of the Arduino pull-up resistors may well serve to provide additional protection; no problems have been experienced to date.
The L293D and SN754410NE use two separate power sources, one for the chip and one for the motor. As such, the motor shield is designed to provide several control configurations. For example, the SN754410NE may utilise a “power-off” kill switch, or the Arduino logic. Similarly, for the L293D, the board may also be configured to remove power from the logic and power supply. This is more a product of design evolution than a deliberate feature.
The ULN2003AN Darlington Array, drives a 5 or 6 wire uni-polar motor. Changing the pin allocation in the ‘Global’ section of the ‘Wiring’ program is necessary with the current program.
Fitting a heat sink to the 78xx regulator (xx = the motor supply voltage) and attaching a cooling fan will be necessary where more powerful stepper motors, drawing large amounts of current, are used.
A 5 volt bi-polar motor or 5 or 6 wire unipolar is adequate for the job, unless you have other requirements. Besides, there are several motor shields available for the Arduino if you prefer an alternative, for some reason.
Ebay has a plethora of unipolar 5v geared stepper motors for sale from Hong Kong (28BYJ-48 - advertised at 64 steps, it actually has 32 steps/rev and a 1/64 gear ratio) - set the stepper speed and change the motorStep line of your Arduino script to suit your motor. Otherwise, steppers come in various grades and step counts - gearing of some type is highly recommended to reduce resonance.
Arduino motor shield
Direction and Kill Switch wiring
Copy and Paste the Arduino code to your editor and upload to the board.
The PCB pdf file prints the actual size of the shield to fit the Arduino (Diecimila or similar); it was printed directly from Eagle. Print to a transfer medium, then iron onto a single-sided board for etching. It may be wise to print to paper first, cut it out, and check its fit against the Arduino board. A laser printer is required, as well as 1 mm and 0.8 mm drill bits and a fine hacksaw and file to cut the board to shape.
Refer to the parts list and use the image of the Arduino Motor Shield for guidance (note the two jumpers, which set up the logic). The 100 uF capacitor is nearest the diode and the 4-pin connection header; the 1 uF capacitor is at the back of the shield. The L293D (SN754410NE) is the IC at the front of the image/board. The ULN2003AN is located at the back of the board.
The Direction Switch is an 8-pin, 3-position sliding switch. The terminal layout, as shown, is 3 + 1 and 1 + 3. The limit switch, when closed, sets Pin 2 LOW. Note that in the Stop and Reverse positions, Pin 3 is always LOW. Forward sets Pins 2 and 3 HIGH, overriding the limit switch.
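The switch and limit-switch behaviour described above can be modelled as a small truth table. The sketch below is only an illustration of that logic under the assumptions stated in the comments; the names and structure are not from the author’s program.

```cpp
#include <cassert>

// Model of the direction/kill-switch logic: Pin 2 carries the limit
// ('kill') switch state, Pin 3 the direction. Names are illustrative.
enum class Mode { Forward, Stop, Reverse };

struct Inputs { bool pin2High; bool pin3High; };

// Forward drives Pins 2 and 3 HIGH, overriding the limit switch.
// Stop and Reverse leave Pin 3 LOW; a closed limit switch pulls Pin 2 LOW.
Inputs readSwitch(Mode m, bool limitClosed) {
    if (m == Mode::Forward) return {true, true};
    return {!limitClosed, false};
}

// The motor runs unless stopped, or unless the closed limit switch has
// pulled Pin 2 LOW (which Forward overrides).
bool motorEnabled(Mode m, bool limitClosed) {
    if (m == Mode::Stop) return false;
    Inputs in = readSwitch(m, limitClosed);
    return in.pin2High || m == Mode::Forward;
}
```

This matches the behaviour described: the kill switch halts a Reverse slew once the Drive Arm reaches the start position, while Forward always runs.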
If problems are experienced getting the stepper motor to rotate, i.e. it ticks one way then the other, the motor wiring will need rearranging in the socket. If the motor turns the wrong way, plug the socket in the opposite way around.
If intending to have a board made commercially, use the “Eagle Board Milling” file.
The “PCB Etching” file has bigger pads to improve adhesion during image transfer (ironing) and provides more copper for better adhesion to the board.
Warning: the program makes use of the pull-up resistors on the Arduino board for voltage protection; no external resistors have been used in this design. Use of the L293D is recommended because it has in-built protection.
“Section 3” Concluding
It has been two years since the Tracker was designed, and it is safe to say that it provides very accurate tracking for exposures up to 90 minutes, consistent with accurate polar alignment.
Large Imperial version:
Similar profile to the Metric version, for exposures up to and beyond 60 minutes - say 90 minutes.
Drive Arm hinge - Drive Nut pinion / Drive Shaft centre = 16 inches; Drive Arm hinge - Contact Point = 14 inches; Drive Arm and Camera Arm hinge = 4 inches. Pack up the Camera Arm hinge with 2 layers of 80 gsm paper; the uncorrected error after 60 minutes is half that of the Metric version.
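As a first-order sanity check on these dimensions, the 16-inch hinge-to-drive-nut distance can be treated as a simple tangent-drive arm: the nut must then advance at arm length × sidereal rate, which fixes the drive rod speed for a 20 tpi rod. The double-arm geometry exists precisely to correct the tangent error, so this is only an approximation of the required rate, not the author’s exact drive figure, and the 20 tpi thread is assumed from the 1/4" 20tpi dimensions mentioned later.

```cpp
#include <cassert>

// Sidereal rate and assumed geometry (tangent-drive approximation).
constexpr double PI = 3.14159265358979323846;
constexpr double SIDEREAL_DAY_S = 86164.09;             // seconds
constexpr double OMEGA = 2.0 * PI / SIDEREAL_DAY_S;     // rad/s
constexpr double ARM_IN = 16.0;                         // hinge to drive nut, inches
constexpr double TPI = 20.0;                            // drive rod threads per inch

// Required drive rod speed in revolutions per minute (~1.4 RPM here).
double driveRodRpm() {
    double inchesPerSec = ARM_IN * OMEGA;  // required nut advance rate
    return inchesPerSec * TPI * 60.0;      // rod revolutions per minute
}
```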
Compact Imperial version (see Section 3 Acknowledgements):
This indicates superior tracking up to 40 - 45 minutes with no camera arm correction (packing up, as in the Tracker design), and it may be ideal for hand-driven exposures of shorter duration. A computerised, motor-driven version should demonstrate exceptional tracking out to 42 minutes - more than enough.
DA hinge - DN pinion / DS centre = 14” ; CA hinge - CP = 12.92” ; DA hinge - CA hinge = 1.9”. No packing is required. Calipers may be useful for measuring down to 1/100”.
Dave Trott, the original designer of the Double Arm Drive, proposed the concept in Sky and Telescope magazine in 1988. His web-site, which contains a wealth of information, is also beautifully designed.
My brother, the interested sceptic and the brains behind the spreadsheet, which enabled experimentation with various component dimensions.
Mike Mohaupt - whose Compact Imperial design prompted further research to optimise performance, which provided the data for 1/4” 20tpi dimensions.
Open source software (Linux) - Qcad.
Arduino half-step library. Note: the Stepper.cpp file above has been modified to suit Arduino 1.x (WProgram.h changed to Arduino.h).
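If you need the library to build under both old and new IDEs, a common Arduino portability idiom is to select the core header at compile time rather than renaming it outright; a fragment along these lines at the top of Stepper.cpp would cover both cases (this is a general idiom, not the author’s modification):

```cpp
// Select the correct core header for the IDE version: Arduino 1.x
// renamed WProgram.h to Arduino.h, and defines ARDUINO >= 100.
#if ARDUINO >= 100
  #include "Arduino.h"
#else
  #include "WProgram.h"
#endif
```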
Not forgetting Stellarium, an excellent open source desktop planetarium.
GIMP, the image manipulation program: another open source tool well suited to astronomical image processing.
The CHDK developers and many excellent sites devoted to digital astrophotography and Double Arm Drive design.
This work is licensed under a Creative Commons Attribution-Noncommercial 2.5 Australia License.
The information on this site is provided in good faith. The author/owner of the material of this site accepts no responsibility for reader/user outcomes, of any nature, directly or indirectly associated with this and/or any other site associated with, or affiliated, by any means or interpretation. Please use the information freely, at your own risk.