Welcome to iraf.net Saturday, May 11 2024 @ 12:56 PM GMT
hawcheng | 06/15/2010 07:16PM (Read 2995 times)
Status: offline | Registered: 03/22/2010 | Posts: 9
General question on overscan fitting and bias frame subtraction : why do people do both? It seems that simply subtracting a bias frame from one's program frames would accomplish the same thing, as the bias frames ought to have the same bias level as the program frames.
AnTaR3s | 06/15/2010 07:16PM
Status: offline | Registered: 10/24/2009 | Posts: 58
I also wondered how an overscan correction is mathematically applied to a science frame.
One note from me: the bias level is not the same throughout an entire run. Normally one takes bias frames before and after a run; check the mean level across all of them and you will see that the values vary with time. For our CCD (SBIG STL-6303E) this variation can be up to ~4 ADU between frames taken before and after the observation. So the mean level in an overscan region could serve as an initial guess or a scaling factor for the reduction, but I'm sure someone here knows the correct answer.

cheers
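On the "how is it mathematically applied" part: a common approach is to average the overscan columns row by row, fit a low-order function to that average, and subtract the fit from every image column before trimming. A minimal NumPy sketch, using a synthetic frame (the 1024x1024 size, 32 overscan columns, bias level, and noise figures are invented for illustration):

```python
import numpy as np

# Hypothetical raw frame: 1024x1024 science pixels plus 32 overscan columns,
# all sitting on a made-up bias level of ~1000 ADU with read noise.
rng = np.random.default_rng(0)
bias_level = 1000.0
frame = rng.normal(bias_level, 5.0, size=(1024, 1024 + 32))

overscan = frame[:, 1024:]          # overscan region (not light-sensitive)
row_means = overscan.mean(axis=1)   # mean bias level for each row

# Fit a low-order polynomial to the row means to smooth out read noise,
# then evaluate the fit at every row.
rows = np.arange(frame.shape[0])
coeffs = np.polyfit(rows, row_means, deg=1)
fit = np.polyval(coeffs, rows)

# Subtract the fitted level from every column, then trim off the overscan.
corrected = frame[:, :1024] - fit[:, np.newaxis]
```

A constant or low-order fit along the readout direction is the usual choice; a high-order fit would just follow the read noise.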
jturner | 06/15/2010 07:16PM
Status: offline | Registered: 12/29/2005 | Posts: 165
Right, as Antares says, the overscan accounts for any drift in the overall mean level with time. The bias frames capture the pixel-to-pixel structure, whilst the overscan gives the necessary additive zero-point correction for each frame.

James
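In other words, the two corrections are complementary: the overscan removes the per-frame additive level (which drifts), and the overscan-corrected master bias removes the fixed pixel-to-pixel pattern. A toy NumPy sketch, with all levels and noise figures invented, and the per-frame mean standing in for a real overscan measurement for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)
ny, nx = 512, 512

# Fixed pixel-to-pixel bias structure (this is what the master bias removes).
structure = rng.normal(0.0, 2.0, size=(ny, nx))

def readout(level):
    """Simulate one raw frame: fixed structure + overall level + read noise."""
    return structure + level + rng.normal(0.0, 3.0, size=(ny, nx))

# Master bias: average of bias frames, each with its own overall level
# removed first (here the frame mean stands in for the overscan level).
bias_frames = [readout(1000.0 + d) for d in (0.0, 1.0, 2.0)]
master_bias = np.mean([b - b.mean() for b in bias_frames], axis=0)

# Science frame taken later, after the mean level has drifted by ~4 ADU.
science = readout(1004.0)
overscan_level = science.mean()     # stand-in for the real overscan fit
reduced = science - overscan_level - master_bias
```

After both steps the drifted level and the fixed structure are gone, leaving only read noise, which is why neither correction alone is sufficient when the level drifts between the bias frames and the science frames.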