Notes on ‘A new distance measure using AGN’ [sic]

For the Astronomy Journal Club this week I will be presenting the paper entitled ‘A New Cosmological Distance Measure Using Active Galactic Nuclei,’ authored by Watson, Denney, Vestergaard, and Davis. This work was published in The Astrophysical Journal Letters, 740:L59, on 20 October 2011.

A link to the arXiv paper:


Finding reliable methods to determine distances, especially LARGE distances, has been an ongoing problem throughout the history of astrophysics. Only very recently have we come to realize that Type Ia supernovae can be used accurately as standard candles, a feat that led to our firm observation of the accelerated expansion of the universe. So far, supernovae have only been used out to z = 1.7, and are unlikely to go further than z = 2 (see Riess et al.). Due to their large numbers and their luminosity extending out to very high redshift (0.3 < z < 7), AGN have long been looked to as candidates, but have never been confirmed to be standard candles in any way. If they could be used, they would provide reliable distance measures far enough away to distinguish between dark energy models.


Broad-Line Reverberation Mapping (see Peterson 1993)

AGN are powered by a supermassive black hole (SMBH) at the centre of the galaxy onto which matter is accreting. Friction in the infalling material generates a large amount of energy across the EM spectrum. Surrounding the SMBH, at some distance away, is a large amount of high-velocity gas: the broad-line region (BLR), whose clouds produce the broad emission features seen in the spectra of quasars. The BLR is directly controlled by the ionizing photons from the continuum source. For instance, the size of the BLR (meaning its distance from the central source) is directly connected to the number of ionizing photons and the optical depth of the gas. The ionizing flux drops with distance following the inverse square law, which means the radius of the BLR must be proportional to the square root of the luminosity.

(1)   \begin{equation*} L = 4 \pi r^2 F \end{equation*}

Therefore, if we were able to establish the size of the BLR, r, and measure the flux, F, we would obtain the Luminosity of the quasar, and hence a measure of its Luminosity Distance. Hence r \propto \sqrt{L}.

Aside: The Luminosity Distance is typically denoted D_L in the equation:

(2)   \begin{equation*} D_L = \sqrt{\frac{L}{4 \pi F}} \end{equation*}

If you measured the flux for some source of light, and had a priori knowledge of the source’s Luminosity, you could then measure the value D_L, the ‘luminosity distance’ to the source of light. In the case of a quasar, we are able to measure the flux on Earth, and use Reverberation Mapping to measure the quasar’s Luminosity, leading to a direct measurement of the quasar’s Luminosity Distance.
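As a toy illustration (my own numbers, not from the paper), equation (2) in code, with luminosity and flux in consistent units:

```python
import math

def luminosity_distance(luminosity, flux):
    """D_L = sqrt(L / (4*pi*F)); units must be consistent
    (e.g. L in W and F in W/m^2 gives D_L in metres)."""
    return math.sqrt(luminosity / (4.0 * math.pi * flux))

# Toy check: a source of L = 400*pi W seen at F = 1 W/m^2
# lies at D_L = 10 m.
print(luminosity_distance(400.0 * math.pi, 1.0))
```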

The BLR emission lines are emitting photons reprocessed from the continuum photons; therefore the emission lines should vary in response to changes in the continuum with some associated time lag \tau. The time lag is simply distance divided by speed, in this case \tau = r/c. Therefore, measuring the time delay gives a measure of the BLR radius; this is known as ‘Reverberation Mapping.’ In practice this is done by measuring the time lag between changes in the continuum luminosity of the AGN and the luminosity of a bright emission line, like C IV or H\beta. This time lag will be proportional to the square root of L:
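A quick worked example of \tau = r/c with an illustrative lag (the 20-day figure is mine, not the paper's):

```python
# Illustrative numbers: a 20-day H-beta lag converted to a BLR radius.
C_M_S = 2.998e8        # speed of light in m/s
DAY_S = 86400.0        # seconds per day
PARSEC_M = 3.086e16    # metres per parsec

tau_days = 20.0
r_pc = C_M_S * tau_days * DAY_S / PARSEC_M  # r = c * tau
print(f"BLR radius: {r_pc:.3f} pc")  # ~0.017 pc
```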

(3)   \begin{equation*} \tau \propto r \propto \sqrt{L} \propto D_L \sqrt{F} \end{equation*}

From Earth we measure Flux, not Luminosity, so calculating \tau/\sqrt{F} gives a measure of the Luminosity Distance.
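A minimal sketch of how this indicator yields a distance once calibrated against one object of known distance (the function name and numbers are mine; 23.5 Mpc is roughly the distance implied by the NGC 3227 distance modulus quoted below):

```python
import math

def agn_distance(tau, flux, tau_cal, flux_cal, d_cal):
    """Since tau/sqrt(F) is proportional to D_L, the ratio of the
    indicator between an object and a calibrator of known distance
    gives the object's distance (same units as d_cal)."""
    return d_cal * (tau / math.sqrt(flux)) / (tau_cal / math.sqrt(flux_cal))

# Toy check: twice the lag at the same flux means twice the distance.
print(agn_distance(2.0, 1.0, 1.0, 1.0, 23.5))  # 47.0
```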

Recent advances in determining the lag (most importantly: correcting for the contaminating effects of the host galaxy, reobserving AGN to get better time lag measurements, and extending to lower luminosity quasars) have shown that the relationship r \propto \sqrt{L} holds well across four orders of magnitude in Luminosity.


The authors used a sample of quasars whose lags between H\beta and the 5100\AA\ continuum flux are available, and which have had the host galaxy's contribution removed from the measured flux to obtain the AGN continuum flux. Removing the host galaxy contribution has been shown to be very important (see Bentz et al. 2009a). The authors also correct the AGN fluxes for Galactic extinction. They then calibrate their sample's \tau/\sqrt{F} relationship using the absolute distance measurement to the source NGC 3227. Finally, they plot their \tau/\sqrt{F}-derived distances against the distances predicted by Hubble's Law (using the most recent WMAP cosmology), with a dotted line showing equality between the two. The AGN distance estimates clearly follow the best current cosmology's Hubble distances to good accuracy.

A Note on Absolute Calibration:

Absolute calibration was done using the galaxy NGC 3227, based on the distance modulus m - M = 31.86 \pm 0.24 determined by Tonry et al. 2001.
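As a sanity check (my own conversion, not stated in the paper), the quoted distance modulus corresponds to a distance of roughly 23-24 Mpc:

```python
# m - M = 5 * log10(D / 10 pc)  =>  D = 10**(1 + (m - M) / 5) pc
mu = 31.86                          # NGC 3227 distance modulus
d_mpc = 10 ** (1.0 + mu / 5.0) / 1e6
print(f"NGC 3227 distance: {d_mpc:.1f} Mpc")
```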


The authors then look at the sources of scatter in the \tau / \sqrt{F} relation, and ways in which this scatter could be reduced in the immediate future. The estimated scatter in their AGN Hubble diagram is 0.2 dex, equivalent to 0.5 mag in the distance modulus:

A Note on the AGN Hubble Diagram: The luminosity distance indicator is plotted as a function of redshift for the 38 AGNs with H\beta lag measurements. The luminosity distance and distance modulus are both plotted on the right-hand axis (calibrated via NGC 3227). The current best cosmology is plotted as a solid line, and cosmologies with no dark energy component are plotted as dashed and dotted lines. The lower portion of the figure shows the ratio of the data to the current cosmology.

SCATTER DUE TO OBSERVATIONAL UNCERTAINTY: The authors indicate this scatter may be reduced significantly simply by making multiple measurements of the reverberation lag. For example, over a dozen observations will reduce an object's observational uncertainty to 0.05 dex, whereas an object that has been observed only once carries an inherent scatter of 0.14 dex. This translates to 0.13 mag vs. 0.35 mag in observational uncertainty. They show (for NGC 5548) that there is very little intrinsic variation in the \tau/\sqrt{F} relation for a given object, indicating that repeated observations mean less scatter.

SCATTER DUE TO EXTINCTION: The authors indicate that a likely source of scatter is extinction associated with the AGN and its host galaxy. This is highlighted using the object NGC 3516, whose internal extinction was estimated via two different methods. Taking that extinction into account, the object moves closer to the best-fit line. Therefore, an accurate correction for internal extinction should reduce the scatter in the diagram.

SCATTER DUE TO INCORRECT LAG MEASUREMENTS: The authors indicate that a very small number of misidentified lags can contribute a very large fraction of the scatter that is in excess of the observational uncertainty. This could easily be fixed by repeated measurements of the time lags. The clearest case is the object NGC 7469, whose lag was most likely an incorrect measurement; re-calculation shows that it should lie much closer to the best-fit line.
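If the single-epoch scatter behaved like an independent random error, repeated measurements would average it down as 1/\sqrt{N}. That scaling is my simplifying assumption for illustration; the paper's exact figures need not follow it:

```python
import math

def averaged_scatter(sigma_single, n_obs):
    """Scatter after averaging n_obs independent measurements,
    assuming simple 1/sqrt(N) statistics (an assumption for
    illustration, not the paper's exact model)."""
    return sigma_single / math.sqrt(n_obs)

# Single-epoch scatter of 0.14 dex, as quoted in the notes above.
for n in (1, 4, 12):
    print(n, round(averaged_scatter(0.14, n), 3))
```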

The scatter in the AGN Hubble diagram above, measured at 0.2 dex, could easily be decreased to 0.08 dex by increasing the number of observations per source, selecting more reliable lags, and improving intrinsic extinction estimates. This translates to a decrease in uncertainty from 0.50 mag to 0.20 mag.
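The dex-to-magnitude conversions quoted in these notes (0.2 dex to 0.5 mag, 0.14 dex to 0.35 mag, and so on) all follow from the definition of the magnitude scale, mag = 2.5 log10(luminosity ratio):

```python
# A scatter of s dex in luminosity corresponds to 2.5 * s magnitudes.
def dex_to_mag(dex):
    return 2.5 * dex

print(round(dex_to_mag(0.2), 2))   # 0.5
print(round(dex_to_mag(0.08), 2))  # 0.2
```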


The use of the AGN Hubble diagram will extend from z = 0.3 to 4, where the power to discriminate between dark energy models lies. The r \propto \sqrt{L} relationship holds over four decades of luminosity, and there is no reason for it not to hold at higher redshifts, as it is based on photoionization physics. However, going to higher redshifts requires much longer temporal baselines. This is because 1) as redshift increases, so do the observed-frame lags, due to time dilation, and 2) at higher redshift we observe more luminous AGNs, which have larger BLRs and hence longer rest-frame lags. At z = 2, the H\beta time lag is roughly 2 years, and by z = 2.5 the minimum time required to measure a lag approaches 10 years. This is solved by using the C IV 1550\AA\ line, which is easily seen at high redshifts and is emitted much closer to the continuum source (i.e., higher ionization, closer to the source), which means shorter time lags. C IV has already been used on one occasion for time lag measurements, and its lags should be shorter than H\beta's by a factor of 3 or so, making C IV time lags useful out to z = 4. It is conceivable that N V could be used out to z = 6; however, that emission is severely blended with the Ly\alpha flux. C IV also has the added value of not requiring host-galaxy removal.
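The two effects above can be sketched numerically (illustrative numbers of mine, anchored to the ~2-year observed H\beta lag at z = 2 quoted above):

```python
# Observed-frame lag grows as (1 + z) through time dilation, and
# C IV is assumed to form ~3x closer in than H-beta, as in the notes.
def observed_lag(rest_lag_years, z):
    return rest_lag_years * (1.0 + z)

hbeta_rest = 2.0 / 3.0  # rest-frame lag (yr) chosen so the observed
                        # H-beta lag at z = 2 comes out near 2 yr
print(observed_lag(hbeta_rest, 2.0))         # ~2 yr for H-beta
print(observed_lag(hbeta_rest / 3.0, 2.0))   # ~3x shorter for C IV
```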


The authors note that since the r \propto \sqrt{L} relationship is very tight, it indicates that the ionization parameter is close to constant across the sample. It is not surprising that this parameter is roughly constant; however, we have little understanding of why the gas density takes the same value in a given region of the BLR, across sources, and over a wide range of L. The authors indicate that precisely how constant the density is will be a limiting factor in the accuracy of the AGN Hubble diagram.

Final Notes:

The C IV emission line occurs at 1550\AA, the H\beta line occurs at 4860\AA.


ionization parameter equation
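For reference, the standard photoionization definition of the ionization parameter (textbook form, not transcribed from the paper) is

(4)   \begin{equation*} U = {Q({\rm H}) \over 4 \pi r^2 c \, n_{\rm H}} \end{equation*}

where Q(H) is the rate of hydrogen-ionizing photons from the continuum source, r is the distance to the gas, and n_{\rm H} is the hydrogen density. At fixed U and n_{\rm H}, r \propto \sqrt{Q({\rm H})} \propto \sqrt{L}, which is the relation underlying the whole method.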



how bad are the effects of the host galaxy?

why is NGC 7469 a bad measurement? what did they do wrong?
