I’ve been working on reducing, plotting, and normalizing Gemini spectra over the last little while. Here’s a final product.
I’ve been a bit behind posting about this, so this is the last post regarding queue updates for 2013B. That’s actually good news, though! Let me explain. Our 2013B CFHT program collects data from August to January. MegaCam, the instrument we use, is on the telescope for roughly 15 nights at a time; it is then swapped out for one of the other major instruments at CFHT, ESPaDOnS or WIRCam. You can see the schedule here. Our program finished data collection by 8 October 2013. It could have taken all semester to get this data in, yet it’s all done already. So that’s great news! You can see the last data release I posted about here. But here’s the final completed map:
Our research group is in the middle of receiving data from Gemini/GMOS, so I’ve written up here a thorough (but readable) reference for the steps needed to run reductions on the data that comes back from the ’scope. Note that Gemini has its own GMOS reductions package, which needs to be installed alongside your current version of IRAF.
DISCLAIMER: The steps below were derived from the Gemini Help Pages coupled with trial and error. They have taken this form based on necessities for my own science, and therefore may not perfectly apply to anyone else’s data.
Step 0 – Organization
The first and foremost thing to do is organize your files in a way that makes sense to you. My preference is to first set a home directory for the reductions (hereafter referred to as homedir), and then make individualized directories for each object. I also create directories for the standard star observations, and twilight observations (should those be required). There are also ‘master bias’ files that are created and made available with the downloaded data; these I also put in their own directory within homedir. The format would look like this:
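As a concrete sketch, a few lines of Python can build that kind of layout (all of the directory names here, like OBJ1 and TWILIGHT, are my own illustrative placeholders, not anything Gemini requires):

```python
import os

# Illustrative layout only: a home directory for the reductions, plus one
# directory per object, one for standard stars, one for twilights, and one
# for the observatory-provided master biases.
homedir = "reductions"
subdirs = ["OBJ1", "OBJ2", "STANDARD", "TWILIGHT", "BIAS"]

for d in subdirs:
    # exist_ok=True makes the snippet safe to re-run
    os.makedirs(os.path.join(homedir, d), exist_ok=True)

print(sorted(os.listdir(homedir)))
```

However you build it, the point is one predictable place for each type of file.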
Within each OBJNAME/ directory I will put the Science Images, the Flats, the CuArc, as well as any text/log files that I find useful. For instance, I let IRAF keep a logfile for me, and I also create a text file where I record all the commands that I use for each object. This will be made clearer below. However you do it, organizing the files in a meaningful/logical/consistent way will save you LOADS of time in the future.
A quick note on central wavelength settings: Gemini typically breaks observations into two groups by using two different central wavelength settings. The benefit to this is you are able to remove the effects of chip gaps on your final spectrum. The downside is you must reduce each wavelength setting’s observations separately and combine at the end.
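To see why two central wavelength settings remove the chip gaps, here’s a toy numpy sketch (completely synthetic spectra, with the gaps marked as NaN — this is not part of the Gemini pipeline):

```python
import numpy as np

# Two synthetic spectra of the same flat continuum, taken at central
# wavelength settings offset from one another, so the detector chip gaps
# (marked NaN) land on different pixels of the common wavelength grid.
wave = np.linspace(400.0, 700.0, 300)   # nm
spec_a = np.ones_like(wave)
spec_a[100:110] = np.nan                # chip gap in setting A
spec_b = np.ones_like(wave)
spec_b[150:160] = np.nan                # chip gap in setting B

# Averaging while ignoring NaNs fills each gap with the other setting's data
combined = np.nanmean(np.vstack([spec_a, spec_b]), axis=0)

print(np.isnan(combined).any())   # False: no gaps remain
```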
Step 1 – Getting IRAF set
Upon opening IRAF, run:
set homedir='home directory location'
unlearn gemini
unlearn gemtools
unlearn gmos
The three ‘unlearn’ commands make sure that any work I did prior to this won’t affect how I reduce this specific object. I also take this opportunity to create a log file, ‘GN.OBJNAME.log’ (the ‘GN’ refers to the fact that this object was observed using Gemini North). Each command from here on will be recorded in the log file, which is helpful for debugging problems later.
Step 2 – ‘Preparing’ the data
This command is all about ‘preparing’ the data for reductions.
gprepare *.fits fl_addmdf+
Run the command on all FITS files. The parameter ‘fl_addmdf+’ tells IRAF to attach the Mask Definition File (MDF) to the image; this records which mask the light went through (longslit/multislit), and is pulled from the header. I’ve had to specifically turn this parameter on, though that may not necessarily be the case for others. Always check your parameter list before running a command (i.e., $ epar gprepare). Note that any GMOS IRAF task, when run, creates a new file with the same name but prepends a prefix to indicate which task it was run through. In the case of gprepare, it adds the letter ‘g’ to the front of each filename (the original files are not changed).
Step 3 – Cosmic Ray Rejection
There are lots of routines out there to reject cosmic rays. I simply use the one in the GMOS IRAF package. There may be better options. I only run the cosmic ray rejection on science images.
gscrrej gScience_1.fits cgScience_1.fits
gscrrej gScience_2.fits cgScience_2.fits
gscrrej gScience_N.fits cgScience_N.fits
To keep the files organized properly, run this command individually on each g*.fits image, and have it add a ‘c’ prefix to the front of each output file (the original g*.fits files will be unchanged).
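Since the same command is repeated once per frame, I find it handy to generate the calls with a few lines of Python first (this is just string bookkeeping on my part, not an IRAF feature):

```python
def gscrrej_commands(filenames):
    """Build one gscrrej call per gprepare'd frame, prepending 'c' to the output."""
    return ["gscrrej {} c{}".format(f, f) for f in sorted(filenames)]

for cmd in gscrrej_commands(["gScience_1.fits", "gScience_2.fits"]):
    print(cmd)
```

You can then paste the printed lines into the IRAF session (and into your per-object command log).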
Step 4 – Creating a Master Flat(s)
gsflat GCALflat_1.fits,GCALflat_2.fits,GCALflat_N.fits master_flat.fits fl_over- fl_trim+ nbiascontam=4 fl_bias+ bias='homedir$BIAS/master_bias.fits' fl_dark- fl_fixpix- fl_inter+ function='chebyshev' order=15 fl_detec+ ovs_flinter- fl_vardq+
There can be any number of flat images; however, note that this command will fail should the central wavelength settings of the images differ. As Gemini typically splits your observations (by 10 nm in central wavelength) into two groups, you will get one (or possibly more) GCAL flat for each setting. You must create a master flat for each individual wavelength setting; READ: run this command twice, once for each wavelength setting. Notice I’ve also removed the bias from the master flat using a ‘master_bias.fits’ image, which is provided by the observatory. Note that the fl_over parameter is turned off; we do not need the overscan correction if we are using the bias image.
Upon running this command, you will be required to interactively analyze each individual Gemini chip (either 3 or 6, depending on the type of read-out from the telescope). The fl_inter+ parameter toggles this interactivity.
Step 5 – Reduce the Science Images
gsreduce cgScience_1.fits,cgScience_2.fits,cgScience_N.fits fl_inter+ fl_over- fl_trim+ nbiascontam=4 fl_bias+ bias='homedir$BIAS/master_bias.fits' fl_dark- fl_flat+ flatim='master_flat.fits' fl_gmosaic+ fl_fixpix+ fl_cut+ fl_gsappwave+ ovs_flinter- fl_vardq+ yoffset=5.0
Again, you will have to run this command twice, once for each central wavelength setting, being careful to indicate in the command which master flat to use with which science images. Note the fl_gmosaic parameter is turned on; this means the 3 (or 6) individual chip images will be mosaicked into one full image.
Step 6 – Wavelength Calibration
This is a two-step process. First, use the ‘gsreduce’ command from above to reduce the CuArc files (you can run all together, as wavelength setting here doesn’t matter).
gsreduce gCuArc_1.fits,gCuArc_2.fits fl_over- fl_trim+ fl_bias- fl_dark- fl_flat- fl_cut+ fl_gsappwave+ yoffset=5.0
After reducing the two CuArc files, you run:
gswavelength gsgCuArc_1.fits,gsgCuArc_2.fits fl_inter+ nsum=5 step=5 function='chebyshev' order=6 fitcxord=5 fitcyord=4
Note that the ‘gsreduce’ command adds a ‘gs’ prefix to the front of the filename. The ‘gswavelength’ command takes the arc lamp spectra and matches them to a list of already-known emission features in that spectrum. This is therefore where you want to take care in making sure the emission features are identified properly; you will need to manually check the features found by the computer for validity.
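Under the hood, a wavelength solution is just a smooth fit of known line wavelengths against their measured pixel positions. Here’s a stripped-down numpy illustration of that idea (the line list below is synthetic, generated from a smooth dispersion relation; gswavelength does far more, including the 2D fit controlled by fitcxord/fitcyord):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Synthetic arc-line list: pixel centroids and their 'known' wavelengths,
# generated here from a smooth (quadratic) pixel-to-wavelength relation.
pix = np.linspace(100.0, 2900.0, 8)
lam = 4500.0 + 0.74 * pix + 1.5e-6 * pix**2   # Angstroms, made up

# Rescale pixels to roughly [-1, 1] before fitting, for numerical stability
x = (pix - 1500.0) / 1400.0
coeffs = C.chebfit(x, lam, deg=3)             # cf. gswavelength's 'order'
residuals = C.chebval(x, coeffs) - lam

print(np.abs(residuals).max() < 1e-6)         # the fit recovers the relation
```

With real arcs, the residuals at each identified line are what you inspect interactively to catch misidentified features.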
Step 7 – Apply the Transformation
This is a two-step process as well. First, apply the transformation to the CuArc files themselves; this ensures the transformation will work.
gstransform gsgCuArc_1.fits wavtraname=gsgCuArc_1.fits
gstransform gsgCuArc_2.fits wavtraname=gsgCuArc_2.fits
By ‘applying the transformation’ you are applying a wavelength calibration to the images. In this first step, you apply it to the output from ‘gswavelength’ above; this is done to check the rectification of the lamps. If there are no errors upon running the above command, you may move forward with:
gstransform gscgScience_1.fits,gscgScience_2.fits,gscgScience_N.fits wavtraname=gsgCuArc_1.fits fl_vardq+
gstransform gscgScience_1.fits,gscgScience_2.fits,gscgScience_M.fits wavtraname=gsgCuArc_2.fits fl_vardq+
Make sure you apply the correct CuArc transformation to the appropriate science images. If you use a CuArc from one wavelength setting on science images taken at the other setting, your wavelength calibration will be wrong.
Step 8 – EXTRACT!
Before moving forward, it is recommended you open the individual 2D spectra (now with a tgscg* prefix) in ds9 first, in order to know where the spectrum actually is when cutting it out.
gsextract tgscgScience_1.fits fl_inter+ find+ back=fit bfunct='chebyshev' border=1 tfunct='spline3' torder=5 tnsum=20 tstep=50 refimage='' apwidth=1.3 recent+ trace+ fl_vardq+ weights='variance'
gsextract tgscgScience_2.fits fl_inter+ find+ back=fit bfunct='chebyshev' border=1 tfunct='spline3' torder=5 tnsum=20 tstep=50 refimage='' apwidth=1.3 recent+ trace+ fl_vardq+ weights='variance'
gsextract tgscgScience_N.fits fl_inter+ find+ back=fit bfunct='chebyshev' border=1 tfunct='spline3' torder=5 tnsum=20 tstep=50 refimage='' apwidth=1.3 recent+ trace+ fl_vardq+ weights='variance'
Step 9 – Calibrate the Extracted Spectra
Extracting from a 2D image carries with it all the non-uniformities of the CCD response. The next command corrects for extinction and calibrates to a flux scale using sensitivity spectra (produced by the ‘gsstandard’ routine). Note this requires you to have already created those calibration files using the standard star observations. Each science image is calibrated individually, using the sensitivity curve from its specific central wavelength setting.
gscalibrate etgscgScience_1.fits sfunc='homedir$STANDARD/sensA.fits'
gscalibrate etgscgScience_2.fits sfunc='homedir$STANDARD/sensA.fits'
gscalibrate etgscgScience_N.fits sfunc='homedir$STANDARD/sensB.fits'
In the above example, ‘sensA.fits’ and ‘sensB.fits’ represent the sensitivity files built from the standard star observations, one per central wavelength setting. For any given data reduction session, you will typically need only two.
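Schematically, what the flux calibration does can be written in a few lines. This is a toy sketch under one common convention, with invented numbers; the real gscalibrate consults extinction tables and header keywords, and handles units and dispersion properly:

```python
import numpy as np

counts   = np.array([1000.0, 1200.0, 900.0])  # extracted counts (synthetic)
exptime  = 300.0                               # seconds
airmass  = 1.2
ext_coef = np.array([0.25, 0.15, 0.10])       # extinction, mag per airmass
sens_mag = np.array([30.0, 30.5, 29.8])       # sensitivity, in magnitudes

# Correct for atmospheric extinction, then divide out exposure time and the
# instrument response encoded in the sensitivity function.
counts_corr = counts * 10.0 ** (0.4 * ext_coef * airmass)
flux = counts_corr / (exptime * 10.0 ** (0.4 * sens_mag))

print(np.all(flux > 0.0))
```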
Step 10 – Coadd the Spectra
This is pretty much the last step. Each spectrum has been reduced and calibrated individually; now it is time to add all the spectra together. This is just a result of how Gemini observes targets (2 spectra … shift central wavelength setting … 2 spectra).
sarith cetgscgScience_1.fits[SCI,1] + cetgscgScience_2.fits[SCI,1] + … + cetgscgScience_N.fits[SCI,1] c1.fits
sarith cetgscgScience_1.fits[SCI,1] + cetgscgScience_2.fits[SCI,1] + … + cetgscgScience_M.fits[SCI,1] c2.fits
sarith c1.fits + c2.fits OBJNAME.fits
The format here is to show that you must first coadd the N individual spectra on the first central wavelength setting, then coadd the M individual spectra on the second central wavelength setting. THEN you may coadd those two spectra to get the final object spectrum.
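As a toy illustration of that order of operations (synthetic constant spectra in place of real data, numpy in place of sarith):

```python
import numpy as np

grid = np.linspace(500.0, 600.0, 101)   # common wavelength grid, nm

def regrid(wave, flux):
    """Linearly interpolate a spectrum onto the common grid."""
    return np.interp(grid, wave, flux)

# N = 2 exposures at setting 1, M = 2 exposures at setting 2 (all flat
# continua here, so the sums are easy to check by eye)
setting1 = [regrid(np.linspace(499, 601, 80), np.full(80, 1.0)),
            regrid(np.linspace(499, 601, 90), np.full(90, 1.0))]
setting2 = [regrid(np.linspace(498, 602, 85), np.full(85, 2.0)),
            regrid(np.linspace(498, 602, 95), np.full(95, 2.0))]

c1 = np.sum(setting1, axis=0)   # coadd of setting 1
c2 = np.sum(setting2, axis=0)   # coadd of setting 2
final = c1 + c2                 # final object spectrum

print(final[0])   # 1 + 1 + 2 + 2 = 6.0
```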
Effectively, data reductions are done at this point, however, another step is required: normalization. In order to be able to compare this spectrum properly to other spectra, you must have its continuum scaled appropriately. This will be the subject of another post. Eventually.
In the fall of 2012, I PI’d a proposal to get 675 images taken by the Canada-France-Hawaii Telescope (CFHT) and the MegaCam instrument. We were fortunate enough to have the proposal accepted, and subsequently received our data. I wrote about it many times on my blog [queue update 1, queue update 5, and serendipitous asteroids one and two, just to name a few]. Well, the fall of 2013 is looking to be equally fruitful. We have re-submitted a similarly driven proposal to re-observe the same pointings from the last data run, with the aim of finding things that have changed in the past year. We expect to find some objects that have either gotten brighter or dimmer, which we will then submit for extra observations (known as a target of opportunity) at Gemini. The above shows the number of pointings that have been re-observed thus far, one month into the observing semester. [ASIDE: ‘observing semesters’ are broken into semester A, February–July, and semester B, August–January].
It is clear from observations of the Milky Way, galaxies, and galaxy clusters that our theoretical understanding of gravity as we know it (i.e., Newton and Einstein) does not explain the orbital velocities of stars in galaxies, nor the movements of galaxies through large clusters. Dark matter is heralded by the majority of astronomers (though certainly not all) as the explanation for these observations; however, modified gravity offers a solution to the problem that does not invoke the need for unseen matter. In a 2006 paper, Douglas Clowe of the Steward Observatory (and collaborators) published what they claim is the first empirical proof of the existence of dark matter. This blog post is a summary of that paper, with a small amount of necessary background.
Dark matter was first posited as an explanation for astronomical observations by Jan Oort in 1932. Oort (for whom the Oort Cloud is named) published this idea in the Bulletin of the Astronomical Institutes of the Netherlands; the original work can be found here. Oort observed that the velocities of stars perpendicular to the Galactic plane could not be explained by the mass of the Galactic plane we observe. He concluded that there may be some invisible matter (which he dubbed ‘dark matter’) that could explain the velocities observed.
In the Coma Cluster, Fritz Zwicky made a similar observation. He published his first paper on the subject in 1933, titled Die Rotverschiebung von extragalaktischen Nebeln (‘The redshift of extragalactic nebulae’); however, the more definitive work is found in Zwicky (1937). By measuring the orbital velocities of galaxies in the cluster, and assuming the system was in equilibrium, Zwicky could infer the mass of the system. This is done using the Virial Theorem, which states 2⟨T⟩ = −⟨U⟩, where ⟨T⟩ is the time-averaged total kinetic energy of the system and ⟨U⟩ is the time-averaged total potential energy of the system. Zwicky found that the average mass per galaxy was much higher than expected, given the amount of light coming from each galaxy. For a quick (but accurate/useful) run-down of the Zwicky observations and results, see the summary written by Michael Richmond: ‘Using the Virial Theorem.’
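Plugging in round numbers shows the spirit of the calculation (the values are illustrative, not Zwicky’s actual measurements; the factor of 3 comes from taking T ≈ (3/2)Mσ² for a line-of-sight velocity dispersion σ and U ≈ −GM²/R):

```python
# Back-of-the-envelope virial mass of a cluster from 2<T> = -<U>,
# which reduces to M = 3 * sigma^2 * R / G under the assumptions above.
G     = 6.674e-11     # m^3 kg^-1 s^-2
M_sun = 1.989e30      # kg
Mpc   = 3.086e22      # m

sigma = 1000e3        # line-of-sight velocity dispersion, m/s (illustrative)
R     = 1.0 * Mpc     # characteristic cluster radius (illustrative)

M = 3.0 * sigma**2 * R / G        # virial mass estimate, kg
print("~%.1e solar masses" % (M / M_sun))
```

which lands at several times 10¹⁴ solar masses, far more than the luminous galaxies can account for.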
More recently, astronomers have found evidence of the same observational/theoretical discrepancy in the rotation curves of galaxies. This was first pointed out by Roberts (1976), but expanded on quickly in the literature. An excellent study by Rubin et al. (1978) investigated 10 spiral galaxies, measuring their rotation curves.
The rotation curves above flatten out after distances of roughly 5 kpc. Given the matter observed in the galaxy within these radii, you would expect the rotation curves to fall off to near zero, rather than to continue at relatively high velocities to large radii. The stars in galaxies are moving at speeds that are not possible given the matter we can observe.
Either way you look at the work of Oort, Zwicky, and many others since, astronomical observations do not agree with what the theories predict. The explanations for the observations above fall into two camps: dark matter, or modified gravity. Either there is a large amount of unobserved mass that forces the stars/galaxies to move as they do, or our theory of how gravity works is flawed. The latter was first proposed by Milgrom (1983). Milgrom proposed that it is not hidden or ‘dark’ matter that is creating the effects we see, but that perhaps the law of inertia is incomplete. He argued that if you assume that, in the limit of small accelerations a ≪ a₀, a particle of mass m subject to a Newtonian gravitational acceleration g_N actually feels an acceleration a satisfying a²/a₀ = g_N, you can explain the discrepancies in the observations.
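A quick numerical contrast (with a made-up galaxy mass and radii) shows why that deep-MOND limit produces flat rotation curves: setting the circular acceleration v²/r equal to a = √(g_N a₀) gives v⁴ = G M a₀, independent of radius:

```python
import numpy as np

G   = 6.674e-11      # m^3 kg^-1 s^-2
a0  = 1.2e-10        # m s^-2, Milgrom's acceleration constant
M   = 1.0e41         # kg (~5e10 solar masses of visible matter, illustrative)
kpc = 3.086e19       # m

r = np.array([5.0, 10.0, 20.0]) * kpc

v_newton = np.sqrt(G * M / r)      # Newtonian: falls off as r^(-1/2)
v_mond   = (G * M * a0) ** 0.25    # deep-MOND: independent of radius

print(v_newton / 1e3)              # km/s, decreasing with radius
print(v_mond / 1e3)                # km/s, a single flat value
```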
Much research has been dedicated to distinguishing between the two different explanations. A long list of work can be found on that subject but a good start would be Buote et al. (2002). Such experiments, while favouring the dark matter hypothesis, fall victim to some assumptions (for instance, mass distribution and symmetry) that left room for counterarguments. One of the first works to provide more definitive proof of one explanation over the other was focused on the Bullet Cluster.
‘The actual existence of dark matter can only be confirmed either by a laboratory detection or, in an astronomical context, by the discovery of a system in which the observed baryons and the inferred dark matter are spatially segregated. An ongoing galaxy cluster merger is such a system.’
The above is taken from Clowe et al. 2006, ApJL, 648, L109. In this post, I summarize the findings of this paper.
Title: A Direct Empirical Proof of the Existence of Dark Matter
Abstract: We present new weak lensing observations of 1E0657-558 (z=0.296), a unique cluster merger, that enable a direct detection of dark matter, independent of assumptions regarding the nature of the gravitational force law. Due to the collision of two clusters, the dissipationless stellar component and the fluid-like X-ray emitting plasma are spatially segregated. By using both wide-field ground based images and HST/ACS images of the cluster cores, we create gravitational lensing maps which show that the gravitational potential does not trace the plasma distribution, the dominant baryonic mass component, but rather approximately traces the distribution of galaxies. An 8-sigma significance spatial offset of the center of the total mass from the center of the baryonic mass peaks cannot be explained with an alteration of the gravitational force law, and thus proves that the majority of the matter in the system is unseen.
Galaxy clusters contain not only the galaxies (~2% of the mass), but also intergalactic plasma (~10% of the mass) and, assuming the null hypothesis, dark matter (~88% of the mass). Over time, the gravitational attraction of all these parts naturally pushes them to be spatially coincident. If two galaxy clusters collide/merge, however, we will observe each part of the cluster behaving differently: galaxies behave as collisionless particles, while the plasma experiences ram pressure. Throughout the collision of two clusters, the galaxies will therefore become separated from the plasma. This is seen clearly in the cluster 1E 0657-558 (hereafter the Bullet Cluster). In Fig. 1, the galaxies of both concentrations are spatially separated from the (purple) plasma.
‘In the absence of dark matter, the gravitational potential will trace the dominant visible matter component, which is the X-ray plasma. If, on the other hand, the mass is indeed dominated by collisionless dark matter, the potential will trace the distribution of that component, which is expected to be spatially coincident with the collisionless galaxies’ (Clowe et al. 2006).
To test this hypothesis, the gravitational potential of the system must be mapped in order to determine where most of the mass is, and to see with which part of the cluster it coincides.
Mapping the Gravitational Potential
To map the gravitational potential energy of the Bullet Cluster, the authors used weak gravitational lensing. [side note: In general, gravitational lensing occurs when a massive foreground object bends the light of background objects. This phenomenon is a result of the curvature of space-time due to mass, and follows directly from the work of Albert Einstein. Check out this quick guide to lensing by NASA]. Weak lensing is the measurement of small/weak distortions of the images of background objects (like galaxies) caused by the gravitational deflection of light by a foreground cluster’s mass. As the deflections are very small, a statistical approach using a large number of background sources is needed to quantify the mass distribution of something in the foreground (like a cluster collision).
In this work, the authors used data from the European Southern Observatory (ESO) Very Large Telescope, the ESO/Max Planck Gesellschaft 2.2m telescope, the Magellan 6.5m telescope, and the Hubble Space Telescope to create a very large optical data set of the galaxies behind the Bullet Cluster. The more background galaxies observed, the larger the statistical set that maps the gravitational potential of the Bullet Cluster, and therefore the more accurate the map. The deflections caused by the Bullet Cluster stretch the images of the background galaxies preferentially in the direction perpendicular to the direction toward the cluster’s centre of mass. A perfect example of this can be seen in Abell 2218, in Fig. 2. Note, this is not an example of weak gravitational lensing, as the stretching of the galaxies is very large.
In weak lensing, the imparted ellipticity is typically comparable to or smaller than the galaxy’s intrinsic ellipticity, and thus the distortion is only measurable statistically, with large numbers of background galaxies. Using the above data, the authors measure the ellipticity of the background galaxies from their brightness distributions. The ellipticity of each galaxy is thus a direct measurement of the reduced shear (stretching), g = γ/(1 − κ), where γ is the shear and κ is the convergence. These are parameters used to measure gravitational lensing effects, described in Table 1 below:
It is important to note that in Newtonian gravity, κ is equal to the surface mass density of the lens divided by a scaling constant. In modified gravity models, κ is no longer linearly related to the surface mass density but is instead a nonlocal function that scales as the mass raised to a power (Clowe et al. 2006). It is this difference that allows the authors to compare nonstandard models of gravity with Newtonian gravity. In the paper, the authors calculate and obtain a 2D map of the convergence κ across the image of the Bullet Cluster; this map has been overlaid on the optical and X-ray images in Fig. 3.
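The statistical nature of the measurement is easy to demonstrate with a toy simulation (all values invented): each galaxy’s measured ellipticity is its random intrinsic ellipticity plus the reduced shear g = γ/(1 − κ), and averaging over many galaxies recovers g even though no single galaxy constrains it:

```python
import numpy as np

rng = np.random.default_rng(42)

gamma, kappa = 0.03, 0.10
g_true = gamma / (1.0 - kappa)           # reduced shear

n = 200_000                              # number of background galaxies
e_intrinsic = rng.normal(0.0, 0.3, n)    # random intrinsic ellipticities
e_measured  = e_intrinsic + g_true       # lensing adds a tiny coherent shear

g_est = e_measured.mean()                # averaging recovers the shear signal
print(abs(g_est - g_true) < 0.005)
```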
The above figure shows, in green contours, the map of the gravitational potential of the Bullet Cluster as measured by the lensing effects on the galaxies in the background. The peaks of the contours are slightly offset from the brightest galaxies of their respective subclusters, yet are significantly offset (at 8σ) from the centroids of their respective plasma clouds.
Where’s the Baryonic Mass?
Having the lensing contour map in place, the authors then measure the mass and location of the baryonic matter. To measure the mass of the plasma clouds, the authors made use of a multicomponent three-dimensional cluster model fit to the Chandra X-ray image. Stellar masses were measured using a mass-to-light ratio, based on the I-band luminosity of all galaxies equal in brightness or fainter than the brightest galaxy in the cluster.
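The stellar-mass side of that comparison is conceptually just one multiplication (both numbers below are invented placeholders, not the paper’s measurements):

```python
# Schematic mass-to-light estimate: total I-band luminosity of the cluster
# galaxies times an assumed stellar mass-to-light ratio.
L_I_total = 7.0e11   # summed I-band luminosity, solar luminosities (invented)
ML_ratio  = 2.0      # assumed stellar M/L in solar units (invented)

M_stellar = ML_ratio * L_I_total
print("~%.1e solar masses in stars" % M_stellar)
```

The heavy lifting in the paper is in measuring the plasma mass from the X-ray model fit, not in this step.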
It’s clear from the measured masses that the amount of mass in the stellar component is smaller than the amount of mass in the X-ray plasma by a large factor. Regardless, the centroids of the gravitational potential map (Fig. 3) are aligned with the stellar components, indicating most of the mass is there. As concluded by the paper, ‘any nonstandard gravitational force that scales with baryonic mass will fail to reproduce these observations.’
Wikipedia – Bullet Cluster
Astronomy Picture of the Day – Bullet Cluster, 24 August 2006
NASA Press Release – A Matter of Fact: Dark Matter Proven
PBS Special on YouTube – The Dark Matter Mystery
Follow up object: This galaxy cluster collision also exhibits similar behaviour: MACS_J0025.4-1222