Black holes are 230 years old

On this day in history, 230 years ago, the world learned of the idea of a black hole.
The first known description of a black hole appears in a letter written by John Michell and sent to Henry Cavendish in 1783. In the letter Michell writes:

If the semi-diameter of a sphere of the same density as the Sun were to exceed that of the Sun in the proportion of five hundred to one, […] supposing light to be attracted by the same force in proportion to its [mass] with other bodies, all light emitted from such a body would be made to return towards it, by its own proper gravity. -Michell

The letter was written on the 26th of May 1783. It was then read at a meeting of the Royal Society in London on the 27th of November of the same year. Finally, it was published in the society’s journal, Philosophical Transactions of the Royal Society of London, on the 1st of January 1784 [aside: The Royal Society, a group of natural philosophers and scientists, formed on the 28th of November 1660, making it over 350 years old!].
Michell reached his conclusion by using Newton’s laws. Sir Isaac Newton had published his laws of motion and gravitation in the Principia Mathematica in the year 1687, about a century before Michell’s letter to Cavendish. You can use Newton’s laws to ask the question: how fast do I have to be moving in order to escape the gravitational pull of Earth? The answer is found by equating your kinetic energy (Ek, the energy associated with your motion away from the planet) and your gravitational potential energy (Eg, the energy associated with Earth pulling on you):

(1)   \begin{equation*} Ek=Eg \end{equation*}

(2)   \begin{equation*} {1 \over 2} mv^2 = {GMm \over r} \end{equation*}

where M is the mass of the Earth, m is your mass, v is your speed, G is the universal gravitational constant, and r is the radius of the Earth. Playing with the equation a bit (and entering in the M and r of Earth), you get:

(3)   \begin{equation*} v=\sqrt{{2GM \over r}}=11.2\ \mathrm{km/s} \end{equation*}

Therefore, in order to escape Earth’s gravity, you need to have a starting speed of 11.2 km/s. This is called, happily, ‘escape velocity.’ You can use this equation on any object whose mass and radius you know (escape velocity for: the Moon=2.4 km/s, Mars=5.0 km/s, and so on).
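
Equation (3) is easy to check numerically. Here is a quick sketch of the calculation; the masses and radii below are standard reference values (not from the text), so treat this as illustrative:

```python
import math

G = 6.674e-11  # universal gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(mass_kg, radius_m):
    """v = sqrt(2GM/r), returned in km/s."""
    return math.sqrt(2 * G * mass_kg / radius_m) / 1000.0

# Reference (mass, radius) pairs in SI units
bodies = {
    "Earth": (5.972e24, 6.371e6),
    "Moon":  (7.342e22, 1.737e6),
    "Mars":  (6.417e23, 3.390e6),
}

for name, (m, r) in bodies.items():
    print(f"{name}: {escape_velocity(m, r):.1f} km/s")
```

This reproduces the 11.2, 2.4, and 5.0 km/s figures quoted above.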
Michell’s mental leap was to ask of the equation: ‘what if an object’s escape velocity were the speed of light?’ What would the ratio of mass to radius of that object be? The speed of light was measured at the time to be 295 000 km/s (Bradley, 1728). So, crunching the numbers, Michell found that an object with the same density as the Sun, but with a radius 500x bigger, would have a surface escape velocity equal to the speed of light. Therefore, ‘all light emitted from such a body would be made to return towards it, by its own proper gravity.’
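
That number-crunching can be sketched the same way. At fixed density, mass scales as the cube of the radius, so escape velocity grows linearly with radius; the solar values below are modern reference figures, not Michell’s own:

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg (modern reference value)
R_SUN = 6.957e8        # solar radius, m (modern reference value)
C_BRADLEY = 295_000.0  # km/s, the Bradley (1728) figure quoted above

def escape_velocity(mass_kg, radius_m):
    return math.sqrt(2 * G * mass_kg / radius_m) / 1000.0  # km/s

# Same density, radius 500x larger: mass scales by 500^3
scale = 500
v_dark_star = escape_velocity(M_SUN * scale**3, R_SUN * scale)

print(f"Sun's surface escape velocity: {escape_velocity(M_SUN, R_SUN):.0f} km/s")
print(f"Dark star (500x radius):       {v_dark_star:.0f} km/s")
print("Exceeds the measured light speed?", v_dark_star > C_BRADLEY)
```

The 500x body’s escape velocity comes out to exactly 500 times the Sun’s (~600 km/s), comfortably exceeding the speed of light as measured at the time.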
Astounding! This was the first time anyone had ever supposed there to be an object with a gravitational potential well deep enough to capture light. Unfortunately, Michell’s work was not rediscovered until the 1970s. Until that time, Pierre-Simon Laplace was considered the first to propose the idea.

A slight addendum to the story: Michell based his proposal on the idea that light would be attracted to matter via Newton’s laws. This was accepted at the time, but research into the particle vs. wave theory of light in the 1800s showed that light could not act in such a way. This squashed the idea of black holes (or ‘dark stars’ as they were then called) until a gentleman by the name of Albert Einstein developed General Relativity and showed that light follows the curved space created by a massive object. This reinstated a scientific basis for the idea of a black hole.
John Michell was a polymath: geologist, mathematician, physicist, astronomer, and more. He contributed deeply to many fields (including seismology and magnetism).

Suggested Reading:

Philosophical Transactions – The original letter written to Cavendish
American Museum of Natural History – John Michell and Black Holes
Astronomy Society of Edinburgh – Black Holes History
The American Physical Society – John Michell Anticipates Black Holes

CFHT 2013B queue update, Part 2: The End

I’ve been a bit behind posting about this, so this is the last post regarding queue updates for 2013B. But that’s good news! Let me explain. Our 2013B CFHT program collects data from August to January. MegaCam, the instrument we use, is on the telescope for roughly 15 nights at a time. It is then swapped out for one of the other major instruments at CFHT: ESPaDOnS or WIRCam. You can see the schedule here. Our program finished its data collection by 8 October 2013. It could have taken all semester to get this data in, yet it’s all done now. So that’s great news! You can see the last data release I posted about here. But here’s the final completed map:

The green indicates a pointing that's been observed from our observing program. Clearly, all pointings have been observed!

GMOS reductions – My Prescription

Our research group is in the middle of receiving data from Gemini/GMOS, so I’ve written out here a thorough (but readable) reference for the steps needed to run reductions on the data that comes back from the ‘scope. Note, Gemini has its own GMOS reductions package that needs to be installed alongside your current version of IRAF.

DISCLAIMER: The steps below were derived from the Gemini Help Pages coupled with trial and error. They have taken this form based on necessities for my own science, and therefore may not perfectly apply to anyone else’s data.

Step 0 – Organization

The first and foremost thing to do is organize your files in a way that  makes sense to you. My preference is to first set a home directory for the reductions (hereafter referred to as homedir), and then make individualized directories for each object. I also create directories for the standard star observations, and twilight observations (should those be required). There are also ‘master bias’ files that are created and made available with the downloaded data; these I also put in their own directory within homedir. The format would look like this:

$ /homedir
$ /homedir/STANDARD/
$ /homedir/BIAS/
$ /homedir/OBJNAME1/
$ /homedir/OBJNAME2/

Within each OBJNAME/ directory I will put the Science Images, the Flats, the CuArc, as well as any text/log files that I find useful. For instance, I let IRAF keep a logfile for me, and I also create a text file where I record all the commands that I use for each object. This will be made clearer below. However you do it, organizing the files in a meaningful/logical/consistent way will save you LOADS of time in the future.
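
The layout above can be set up in one go; here’s a minimal sketch, where ‘homedir’ and the OBJNAME directories are placeholders for your own paths and targets:

```python
import os

homedir = "homedir"  # placeholder: your reductions home directory
subdirs = ["STANDARD", "BIAS", "OBJNAME1", "OBJNAME2"]

for d in subdirs:
    # exist_ok lets you re-run this safely on an existing tree
    os.makedirs(os.path.join(homedir, d), exist_ok=True)
```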

A quick note on central wavelength settings: Gemini typically breaks observations into two groups by using two different central wavelength settings. The benefit to this is you are able to remove the effects of chip gaps on your final spectrum. The downside is you must reduce each wavelength setting’s observations separately and combine at the end.

Step 1 – Getting IRAF set

Upon opening IRAF, run:

fitsutil
gemini
gmos
unlearn gemini
unlearn gmos
unlearn gemtools
set stdimage=imtgmos2
set homedir='home directory location'
gmos.logfile='GN.OBJNAME.log'

The 3 ‘unlearn’ commands make sure that all work I did prior to this won’t affect how I reduce this specific object. I also take this opportunity to create a log file ‘GN.OBJNAME.log.’ This refers to the fact that this object was observed using Gemini North; each command from hereon will be recorded in the log file, which is helpful for debugging problems later.

Step 2 – ‘Preparing’ the data

This command is all about ‘preparing’ the data for reductions.

gprepare *.fits fl_addmdf+

Run the command on all fits files. The parameter ‘fl_addmdf+’ indicates to IRAF that it should attach the Mask Definition File (MDF) to the image. This tells IRAF what mask the light went through (longslit/multislit), and is pulled from the header. I’ve had to specifically turn this parameter on, though that may not necessarily be the case for others. Always check your parameter list before running a command (i.e., $ epar gprepare). Note that any GMOS IRAF package, when run, creates a new file with the same name but with a prefix appended to indicate which package it was run through; the original files are not changed. In the case of gprepare, it appends the letter ‘g’ to the front of each filename.

Step 3 – Cosmic Ray Rejection

There are lots of routines out there to reject cosmic rays. I simply use the one in the GMOS IRAF package. There may be better options. I only run the cosmic ray rejection on science images.

gscrrej gScience_1.fits cgScience_1.fits
gscrrej gScience_2.fits cgScience_2.fits

.

gscrrej gScience_N.fits cgScience_N.fits

Again, in an effort to keep the files organized, you should run this command individually on each g*.fits image, and have it add a ‘c’ prefix to the front of each file (the original g*.fits files remain unchanged).
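
Since the routine is run file by file with a fixed prefixing convention, the command list is easy to generate rather than type out; a small sketch with placeholder filenames:

```python
# Prepared science images from Step 2 (placeholder names)
science = [f"gScience_{i}.fits" for i in range(1, 4)]

# gscrrej writes a new file with a 'c' prepended to the input name
commands = [f"gscrrej {f} c{f}" for f in science]
for cmd in commands:
    print(cmd)
```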

Step 4 – Creating a Master Flat(s)

gsflat GCALflat_1.fits,GCALflat_2.fits,GCALflat_N.fits master_flat.fits fl_over- fl_trim+ nbiascontam=4 fl_bias+ bias='homedir$BIAS/master_bias.fits' fl_dark- fl_fixpix- fl_inter+ function='chebyshev' order=15 fl_detec+ ovs_flinter- fl_vardq+

There can be a number of flat images; however, note that this command will fail should the central wavelength settings of the images differ. As Gemini typically splits your observations (by 10 nm in central wavelength) into two groups, you will get one (or possibly more) GCALflat(s) for each setting. You must create a master flat for each individual wavelength setting; READ: run this command twice, once for each wavelength setting. Notice I’ve also removed the bias from the master flat using a ‘master_bias.fits’ image; this is provided by the observatory. Note that the fl_over parameter is turned off; we do not need the overscan correction if we are using the bias image.
Upon running this command, you will be required to interactively analyze each individual Gemini Chip (this could be either 3 or 6 depending on the type of read-out from the telescope). The fl_inter+ parameter turns this on/off.
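
The ‘one master flat per central wavelength setting’ bookkeeping can be sketched as a simple grouping step. The wavelength values below are illustrative stand-ins for numbers you would read out of each image header (the header read itself is omitted):

```python
from collections import defaultdict

# filename -> central wavelength in nm (placeholder values)
flats = {
    "gGCALflat_1.fits": 700,
    "gGCALflat_2.fits": 710,
    "gGCALflat_3.fits": 700,
    "gGCALflat_4.fits": 710,
}

groups = defaultdict(list)
for name, wave in flats.items():
    groups[wave].append(name)

# One gsflat call per wavelength setting
for wave, names in sorted(groups.items()):
    print(f"gsflat {','.join(names)} master_flat_{wave}.fits ...")
```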

Step 5 – Reduce the Science Images

gsreduce cgScience_1.fits,cgScience_2.fits,cgScience_N.fits fl_inter+ fl_over- fl_trim+ nbiascontam=4 fl_bias+ bias='homedir$BIAS/master_bias.fits' fl_dark- fl_flat+ flatim='master_flat.fits' fl_gmosaic+ fl_fixpix+ fl_cut+ fl_gsappwave+ ovs_flinter- fl_vardq+ yoffset=5.0

Again, you will have to run this command once for each central wavelength setting (twice in total), being careful to make sure you indicate in the command which master flat to use with which science images. Note the fl_gmosaic parameter is turned on; this indicates the 3 (or 6) individual images from the chips will be mosaicked into one full image.

Step 6 – Wavelength Calibration

This is a two-step process. First, use the ‘gsreduce’ command from above to reduce the CuArc files (you can run all together, as wavelength setting here doesn’t matter).

gsreduce gCuArc_1.fits,gCuArc_2.fits fl_over- fl_trim+ fl_bias- fl_dark- fl_flat- fl_cut+ fl_gsappwave+ yoffset=5.0

After reducing the two CuArc files, you run:

gswavelength gsgCuArc_1.fits,gsgCuArc_2.fits fl_inter+ nsum=5 step=5 function='chebyshev' order=6 fitcxord=5 fitcyord=4

Note that the ‘gsreduce’ command will add a ‘gs’ prefix to the front of the filename. The ‘gswavelength’ command takes the arc lamp spectra and matches them to a list of already-known emission features in that spectrum. Therefore, this is where you want to take care in making sure the emission features are identified properly. This will require you to manually check the emission features found by the computer for validity.

Step 7 – Apply the Transformation

This is a two-step process as well. First, apply the transformation to the CuArc files themselves; this ensures the transformation will work.

gstransform gsgCuArc_1.fits wavtraname=gsgCuArc_1.fits
gstransform gsgCuArc_2.fits wavtraname=gsgCuArc_2.fits

‘Applying the transformation’ means applying the wavelength calibration to the images. In this first step you apply it to the output from ‘gswavelength’ above; this is done to check that the lamp spectra are rectified correctly. If there are no errors upon running the above commands, you may move forward with:

gstransform gscgScience_1.fits,gscgScience_2.fits,gscgScience_N.fits wavtraname=gsgCuArc_1.fits fl_vardq+
gstransform gscgScience_1.fits,gscgScience_2.fits,gscgScience_M.fits wavtraname=gsgCuArc_2.fits fl_vardq+

Make sure you apply the correct CuArc transformation to the appropriate Science images. If you use a CuArc from one wavelength setting on science images from a different wavelength setting, your wavelength calibration will be wrong.

Step 8 – EXTRACT!

Before moving forward, it is recommended you open the individual 2D spectra (now with a tgscg* prefix) in ds9 first, in order to know where the spectrum actually is when cutting it out.

gsextract tgscgScience_1.fits fl_inter+ find+ back=fit bfunct='chebyshev' border=1 tfunct='spline3' torder=5 tnsum=20 tstep=50 refimage='' apwidth=1.3 recent+ trace+ fl_vardq+ weights='variance'
gsextract tgscgScience_2.fits fl_inter+ find+ back=fit bfunct='chebyshev' border=1 tfunct='spline3' torder=5 tnsum=20 tstep=50 refimage='' apwidth=1.3 recent+ trace+ fl_vardq+ weights='variance'

.
.

gsextract tgscgScience_N.fits fl_inter+ find+ back=fit bfunct='chebyshev' border=1 tfunct='spline3' torder=5 tnsum=20 tstep=50 refimage='' apwidth=1.3 recent+ trace+ fl_vardq+ weights='variance'

Step 9 – Calibrate the Extracted Spectra

Extracting from a 2D image carries along with it all the non-uniformities of the CCD response. The next command corrects for extinction and calibrates to a flux scale using sensitivity spectra (produced by the ‘gsstandard’ routine). Note this requires the data reducer to have already created the calibration files using the standard star observations coupled with the science observations. Each Science image will be calibrated individually, using the sensitivity curve from its specific central wavelength setting.

gscalibrate etgscgScience_1.fits sfunc='homedir$STANDARD/sensA.fits'
gscalibrate etgscgScience_2.fits sfunc='homedir$STANDARD/sensA.fits'

.
.

gscalibrate etgscgScience_N.fits sfunc='homedir$STANDARD/sensB.fits'

Again, in the above example, ‘sensA.fits’ and ‘sensB.fits’ represent the sensitivity files derived from the standard star observations, one per central wavelength setting. For any given data reduction session, there will typically be only two sensitivity files needed.

Step 10 – Coadd the Spectra

This is pretty much the last step. Each spectrum has been reduced/calibrated individually; now it is time to add all the spectra together. This is just a result of how Gemini observes targets (2 spectra … shift central wavelength setting … 2 spectra …).

sarith cetgscgScience_1.fits[SCI,1] + cetgscgScience_2.fits[SCI,1] + … + cetgscgScience_N.fits[SCI,1] c1.fits
sarith cetgscgScience_1.fits[SCI,1] + cetgscgScience_2.fits[SCI,1] + … + cetgscgScience_M.fits[SCI,1] c2.fits
sarith c1.fits + c2.fits OBJNAME.fits

The format here is to show that you must first coadd the N individual spectra on the first central wavelength setting, then coadd the M individual spectra on the second central wavelength setting. THEN you may coadd those two spectra to get the final object spectrum.
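
The coadd bookkeeping above can be sketched as string-building; the filenames are placeholders that follow the cumulative prefix convention (c-e-t-gs-c-g):

```python
def coadd_expr(files, out):
    """Build a sarith-style sum over the [SCI,1] extensions."""
    terms = " + ".join(f"{f}[SCI,1]" for f in files)
    return f"sarith {terms} {out}"

# N spectra at the first setting, M at the second (placeholder lists)
setting_a = [f"cetgscgScience_{i}.fits" for i in (1, 2)]
setting_b = [f"cetgscgScience_{i}.fits" for i in (3, 4)]

print(coadd_expr(setting_a, "c1.fits"))
print(coadd_expr(setting_b, "c2.fits"))
print(coadd_expr(["c1.fits", "c2.fits"], "OBJNAME.fits"))
```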

Effectively, the data reductions are done at this point; however, one more step is required before comparing spectra: normalization. In order to compare this spectrum properly to other spectra, you must have its continuum scaled appropriately. This will be the subject of another post. Eventually.

Domes at Sunset

Taken at 6:06pm EDT on 3 October 2013. The two domes of the York University Astronomical Observatory bathed in a warm sunset. The picture is taken facing North East.

Also, George is at bottom left.

CFHT 2013B queue update, Part 1

This map shows the number of pointings completed to 31 August 2013 for our CFHT 2013B data run.

In the fall of 2012, I PI’d a proposal to get 675 images taken by the Canada-France-Hawaii Telescope (CFHT) and the MegaCam instrument. We were fortunate enough to have the proposal accepted, and subsequently received our data. I wrote about it many times on my blog [queue update 1, queue update 5, and serendipitous asteroids one and two, just to name a few]. Well, the fall of 2013 is looking to be equally fruitful. We have re-submitted a similarly driven proposal to re-observe the same pointings from the last data run, with the aim of finding things that have changed in the past year. We expect to find some objects that have either gotten brighter or dimmer over the last year, which we will then submit for extra observations (known as a target of opportunity) at Gemini. The map above shows the number of pointings that have been re-observed thus far, one month into the observing semester. [ASIDE: ‘observing semesters’ are broken into semester A: February-July, and semester B: August-January].