GMOS reductions – My Prescription

Our research group is in the middle of receiving data from Gemini/GMOS, so I've written out here a thorough (but readable) reference for the steps needed to reduce the data that comes back from the 'scope. Note that Gemini has its own GMOS reductions package that needs to be installed alongside your current version of IRAF.

DISCLAIMER: The steps below were derived from the Gemini Help Pages coupled with trial and error. They have taken this form based on necessities for my own science, and therefore may not perfectly apply to anyone else’s data.

Step 0 – Organization

The first and foremost thing to do is organize your files in a way that makes sense to you. My preference is to first set a home directory for the reductions (hereafter referred to as homedir), and then make individual directories for each object. I also create directories for the standard star observations and the twilight observations (should those be required). There are also 'master bias' files that are created and made available with the downloaded data; these I also put in their own directory within homedir. The format would look like this:

/homedir/
/homedir/STANDARD/
/homedir/BIAS/
/homedir/OBJNAME1/
/homedir/OBJNAME2/

Within each OBJNAME/ directory I will put the Science Images, the Flats, the CuArc, as well as any text/log files that I find useful. For instance, I let IRAF keep a logfile for me, and I also create a text file where I record all the commands that I use for each object. This will be made clearer below. However you do it, organizing the files in a meaningful/logical/consistent way will save you LOADS of time in the future.
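If you like, the whole layout above can be created in one go from the shell; "homedir" and the OBJNAME directories are of course placeholders for your own paths and targets:

```shell
# Create the reduction tree under the home directory.
# "homedir" and the object names are placeholders -- substitute your own.
mkdir -p homedir/STANDARD homedir/BIAS homedir/OBJNAME1 homedir/OBJNAME2
```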

A quick note on central wavelength settings: Gemini typically breaks observations into two groups by using two different central wavelength settings. The benefit to this is you are able to remove the effects of chip gaps on your final spectrum. The downside is you must reduce each wavelength setting’s observations separately and combine at the end.
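A bit of bookkeeping helps with this split. Below is a sketch (not part of the official reduction) assuming you have noted each file's central wavelength, e.g. by reading it out of the FITS headers; the filenames and wavelengths here are hypothetical:

```python
from collections import defaultdict

# Hypothetical central wavelengths (nm); in practice you would read these
# from each file's header rather than typing them by hand.
centwave = {
    "Science_1.fits": 700.0,
    "Science_2.fits": 700.0,
    "Science_3.fits": 710.0,
    "Science_4.fits": 710.0,
}

# Group filenames by central wavelength setting, so each group can be
# reduced separately and combined only at the very end.
groups = defaultdict(list)
for fname, cw in sorted(centwave.items()):
    groups[cw].append(fname)

for cw, files in sorted(groups.items()):
    print(cw, ",".join(files))
```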

Step 1 – Getting IRAF set

Upon opening IRAF, run:

unlearn gemini
unlearn gmos
unlearn gemtools
set stdimage=imtgmos2
set homedir="home directory location"

The three 'unlearn' commands make sure that any work I did prior won't affect how I reduce this specific object. I also take this opportunity to create a log file, 'GN.OBJNAME.log' (the name records that this object was observed with Gemini North); every command from here on will be recorded in the log file, which is helpful for debugging problems later.
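Creating the log file is a one-liner. The name is just my convention; as far as I can tell, the GMOS tasks default their logfile parameter to the package-level setting, so assigning it once at the top of the session should be enough (double-check with epar if your version behaves differently):

```
gmos.logfile = "GN.OBJNAME.log"
```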

Step 2 – ‘Preparing’ the data

This command is all about ‘preparing’ the data for reductions.

gprepare *.fits fl_addmdf+

Run the command on all FITS files. The parameter 'fl_addmdf+' tells IRAF to attach the Mask Definition File (MDF) to the image. This tells IRAF what mask the light went through (longslit/multislit), and is pulled from the header. I've had to turn this parameter on explicitly, though that may not be the case for others. Always check your parameter list before running a command (i.e., epar gprepare). Note that any GMOS IRAF task, when run, creates a new file with the same name but prepends a letter to indicate which task it was run through. In the case of gprepare, it prepends the letter 'g' to the front of each filename (the original files are not changed).
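Because each task prepends a letter, filenames accumulate a prefix chain as they move through the pipeline. Here is a tiny bookkeeping sketch of that chain, with the task-to-prefix mapping taken from the steps in this post:

```python
# Each GMOS task prepends a short prefix to its input filename.
# The order below mirrors the pipeline described in this post.
PREFIXES = [
    ("gprepare", "g"),
    ("gscrrej", "c"),
    ("gsreduce", "gs"),
    ("gstransform", "t"),
    ("gsextract", "e"),
    ("gscalibrate", "c"),
]

def pipeline_name(raw, through=len(PREFIXES)):
    """Return the filename after running the first `through` tasks."""
    name = raw
    for _task, prefix in PREFIXES[:through]:
        name = prefix + name
    return name

print(pipeline_name("Science_1.fits"))  # -> cetgscgScience_1.fits
```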

Step 3 – Cosmic Ray Rejection

There are lots of routines out there to reject cosmic rays. I simply use the one in the GMOS IRAF package. There may be better options. I only run the cosmic ray rejection on science images.

gscrrej gScience_1.fits cgScience_1.fits
gscrrej gScience_2.fits cgScience_2.fits
…
gscrrej gScience_N.fits cgScience_N.fits

Again, to keep track of the files properly, run this command individually on each g*.fits image, letting it add a 'c' prefix to the front of each output file (the original g*.fits files are unchanged).
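Rather than typing each pair by hand, you can generate the command list from the shell. This sketch only prints the commands for you to paste into the IRAF session; it assumes the prepared science frames follow the gScience_*.fits naming used here:

```shell
# Print one gscrrej command per prepared science frame.
# Assumes the gScience_*.fits files in this directory are the science images.
for f in gScience_*.fits; do
    [ -e "$f" ] || continue   # skip if the glob matched nothing
    echo "gscrrej ${f} c${f}"
done
```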

Step 4 – Creating a Master Flat(s)

gsflat GCALflat_1.fits,GCALflat_2.fits,GCALflat_N.fits master_flat.fits fl_over- fl_trim+ nbiascontam=4 fl_bias+ bias="homedir$BIAS/master_bias.fits" fl_dark- fl_fixpix- fl_inter+ function="chebyshev" order=15 fl_detec+ ovs_flinter- fl_vardq+

There can be any number of flat images; however, note that this command will fail should the central wavelength settings of the input images differ. As Gemini typically splits your observations into two groups (offset by 10 nm in central wavelength), you will get one (or possibly more) GCALflat for each setting. You must create a master flat for each individual wavelength setting; read: run this command twice, once per wavelength setting. Notice I've also removed the bias from the master flat using a 'master_bias.fits' image, which is provided by the observatory. Note that the fl_over parameter is turned off; we do not need the overscan correction if we are using the bias image.
Upon running this command, you will be required to interactively fit the response of each individual GMOS chip (there could be either 3 or 6, depending on the type of read-out from the telescope). The fl_inter+ parameter turns this interactivity on.

Step 5 – Reduce the Science Images

gsreduce cgScience_1.fits,cgScience_2.fits,cgScience_N.fits fl_inter+ fl_over- fl_trim+ nbiascontam=4 fl_bias+ bias="homedir$BIAS/master_bias.fits" fl_dark- fl_flat+ flatim="master_flat.fits" fl_gmosaic+ fl_fixpix+ fl_cut+ fl_gsappwave+ ovs_flinter- fl_vardq+ yoffset=5.0

Again, you will have to run this command twice, once for each central wavelength setting, being careful to indicate in the command which master flat goes with which science images. Note the fl_gmosaic parameter is turned on; this means the 3 (or 6) individual chip images will be mosaicked into one full image.

Step 6 – Wavelength Calibration

This is a two-step process. First, use the ‘gsreduce’ command from above to reduce the CuArc files (you can run all together, as wavelength setting here doesn’t matter).

gsreduce gCuArc_1.fits,gCuArc_2.fits fl_over- fl_trim+ fl_bias- fl_dark- fl_flat- fl_cut+ fl_gsappwave+ yoffset=5.0

After reducing the two CuArc files, you run:

gswavelength gsgCuArc_1.fits,gsgCuArc_2.fits fl_inter+ nsum=5 step=5 function="chebyshev" order=6 fitcxord=5 fitcyord=4

Note that the 'gsreduce' command adds a 'gs' prefix to the front of the filename. The 'gswavelength' command takes the arc lamp spectra and matches them to a list of already-known emission features in that spectrum. Therefore, this is where you want to take care that the emission features are identified properly. This will require you to manually check the features found by the computer for validity.

Step 7 – Apply the Transformation

This is a two-step process as well. First, apply the transformation to the CuArc files themselves; this ensures the transformation will work.

gstransform gsgCuArc_1.fits wavtraname=gsgCuArc_1.fits
gstransform gsgCuArc_2.fits wavtraname=gsgCuArc_2.fits

By 'applying the transformation' you are applying the wavelength calibration to the images. In this first step, you apply it to the output from 'gswavelength' above; this is done to check the rectification of the lamps. If there are no errors upon running the above commands, you may move forward with:

gstransform gscgScience_1.fits,gscgScience_2.fits,gscgScience_N.fits wavtraname=gsgCuArc_1.fits fl_vardq+
gstransform gscgScience_1.fits,gscgScience_2.fits,gscgScience_M.fits wavtraname=gsgCuArc_2.fits fl_vardq+

Make sure you apply the correct CuArc transformation to the appropriate science images. If you use a CuArc from one wavelength setting on science images from the other setting, your wavelength calibration will be wrong.
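A small sanity check before running gstransform can save you from this mistake. As before, this is just a bookkeeping sketch that assumes you have recorded each frame's central wavelength (the filenames and values below are hypothetical):

```python
# Hypothetical central wavelengths (nm) recorded for each frame.
centwave = {
    "gscgScience_1.fits": 700.0,
    "gscgScience_2.fits": 710.0,
    "gsgCuArc_1.fits": 700.0,
    "gsgCuArc_2.fits": 710.0,
}

def check_pairing(science, arc):
    """Raise if the science frame and arc were taken at different settings."""
    if centwave[science] != centwave[arc]:
        raise ValueError(
            f"{science} ({centwave[science]} nm) does not match "
            f"{arc} ({centwave[arc]} nm)"
        )

check_pairing("gscgScience_1.fits", "gsgCuArc_1.fits")  # matched: no error
```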

Step 8 – EXTRACT!

Before moving forward, it is recommended you open the individual 2D spectra (now with a tgscg* prefix) in ds9 first, in order to know where the spectrum actually is when cutting it out.

gsextract tgscgScience_1.fits fl_inter+ find+ back=fit bfunct="chebyshev" border=1 tfunct="spline3" torder=5 tnsum=20 tstep=50 refimage="" apwidth=1.3 recent+ trace+ fl_vardq+ weights="variance"
gsextract tgscgScience_2.fits fl_inter+ find+ back=fit bfunct="chebyshev" border=1 tfunct="spline3" torder=5 tnsum=20 tstep=50 refimage="" apwidth=1.3 recent+ trace+ fl_vardq+ weights="variance"
…
gsextract tgscgScience_N.fits fl_inter+ find+ back=fit bfunct="chebyshev" border=1 tfunct="spline3" torder=5 tnsum=20 tstep=50 refimage="" apwidth=1.3 recent+ trace+ fl_vardq+ weights="variance"

Step 9 – Calibrate the Extracted Spectra

Extracting from a 2D image carries along all the non-uniformities of the CCD response. The next command corrects for extinction and calibrates to a flux scale using sensitivity spectra (produced by the 'gsstandard' routine). Note this requires you to have already created the calibration files from the standard star observations coupled with the science observations. Each science image will be calibrated individually, using the sensitivity curve from its specific central wavelength setting.

gscalibrate etgscgScience_1.fits sfunc="homedir$STANDARD/sensA.fits"
gscalibrate etgscgScience_2.fits sfunc="homedir$STANDARD/sensA.fits"
…
gscalibrate etgscgScience_N.fits sfunc="homedir$STANDARD/sensB.fits"

In the above example, 'sensA.fits' and 'sensB.fits' represent the sensitivity files built from the standard star observations, one per central wavelength setting. For any given data reduction session, there will typically be only two sensitivity files needed.

Step 10 – Coadd the Spectra

This is pretty much the last step. Each spectrum has been reduced and calibrated individually; now it is time to add all the spectra together. This is just a result of how Gemini observes targets (2 spectra … shift central wavelength setting … 2 spectra).

sarith cetgscgScience_1.fits[SCI,1] + cetgscgScience_2.fits[SCI,1] + … + cetgscgScience_N.fits[SCI,1] c1.fits
sarith cetgscgScience_1.fits[SCI,1] + cetgscgScience_2.fits[SCI,1] + … + cetgscgScience_M.fits[SCI,1] c2.fits
sarith c1.fits + c2.fits OBJNAME.fits

The format here is schematic: first coadd the N individual spectra at the first central wavelength setting, then coadd the M individual spectra at the second central wavelength setting, and only THEN coadd those two spectra to get the final object spectrum. Note that sarith operates on two operands at a time, so the multi-term sums above are shorthand for a chain of pairwise additions.
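The arithmetic itself is trivial; as a pure-Python sketch of the coadd order (toy numbers standing in for real calibrated spectra):

```python
# Toy flux arrays standing in for the calibrated 1-D spectra.
setting1 = [[1.0, 2.0, 3.0], [1.2, 1.8, 3.1]]   # N spectra, first setting
setting2 = [[0.9, 2.1, 2.9], [1.1, 2.0, 3.0]]   # M spectra, second setting

def coadd(spectra):
    """Element-wise sum of equal-length spectra (what sarith '+' does)."""
    return [sum(vals) for vals in zip(*spectra)]

c1 = coadd(setting1)      # coadd within the first setting
c2 = coadd(setting2)      # coadd within the second setting
final = coadd([c1, c2])   # only then combine the two settings
print(final)
```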

Effectively, data reduction is done at this point; however, one more step is required: normalization. In order to compare this spectrum properly to other spectra, you must have its continuum scaled appropriately. That will be the subject of another post. Eventually.