Notes on ‘A new distance measure using AGN’ [sic]

For the Astronomy Journal Club this week I will be presenting the paper entitled ‘A New Cosmological Distance Measure Using Active Galactic Nuclei,’ authored by Watson, Denney, Vestergaard, and Davis. This work was published in The Astrophysical Journal Letters, 740:L59, on 20 October 2011.

A link to the arXiv paper:


Finding reliable methods to determine distances, especially LARGE distances, has been an ongoing problem throughout the history of astrophysics. Only very recently have we come to realize that Type Ia supernovae can be used as accurate standard candles, a feat that led to our firm observation of the accelerated expansion of the universe. So far, supernovae have only been used out to z=1.7, and are unlikely to go further than z=2 (see Riess et al.). Due to their large numbers and their luminosity extending out to very high redshift (0.3 < z < 7), AGN have long been looked to as candidates, but have never been confirmed as standard candles in any way. If they could be used, this would create reliable distance measures far enough away to distinguish between dark energy models.


Broad-line Reverberation Mapping – see Peterson 1993 –

AGN are powered by a supermassive black hole at the centre of the galaxy onto which matter is accreting. Friction in the infalling material generates a large amount of energy across the EM spectrum. Surrounding the SMBH, at some distance away, is a large amount of high-velocity gas clouds that produce the broad emission features seen in the spectra of quasars: the broad-line region (BLR). The BLR is directly controlled by the ionizing photons from the continuum source. For instance, the size of the BLR (meaning its distance from the central source) is directly connected to the number of ionizing photons and the optical depth of the gas. The ionizing flux drops with distance following the inverse square law, which means the radius of the BLR must be proportional to the square root of the luminosity:

(1)   \begin{equation*} L = 4 \pi r^2 F \end{equation*}

Therefore, if we were able to establish the size of the BLR, r, and measure the flux, we would obtain the luminosity of the quasar, and hence a measure of its luminosity distance. Hence, r \propto \sqrt{L}.

Aside: The Luminosity Distance is typically denoted D_L in the equation:

(2)   \begin{equation*} D_L = \sqrt{{L \over 4 \pi F}} \end{equation*}

If you measured the flux for some source of light, and had a priori knowledge of the source’s Luminosity, you could then measure the value D_L, the ‘luminosity distance’ to the source of light. In the case of a quasar, we are able to measure the flux on Earth, and use Reverberation Mapping to measure the quasar’s Luminosity, leading to a direct measurement of the quasar’s Luminosity Distance.
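As a minimal numeric sketch of Eq. (2): the luminosity and flux below are made-up illustrative values, not numbers from the paper.

```python
import math

def luminosity_distance(L, F):
    """D_L = sqrt(L / (4*pi*F)); consistent units in, distance out
    (erg/s and erg/s/cm^2 give cm)."""
    return math.sqrt(L / (4.0 * math.pi * F))

# Illustrative values: a source with roughly the Sun's luminosity
# observed at a flux of 1e-10 erg/s/cm^2.
L_src = 3.8e33   # erg/s (assumed, for illustration only)
F_obs = 1.0e-10  # erg/s/cm^2 (assumed)
d_cm = luminosity_distance(L_src, F_obs)
d_pc = d_cm / 3.086e18  # cm per parsec
print(f"D_L ~ {d_pc:.0f} pc")
```

For a quasar, the same arithmetic applies once reverberation mapping supplies L.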

The BLR emission lines are photons reprocessed from the continuum photons, so the emission lines should vary in response to changes in the continuum with some associated time lag \tau. The time lag is simply distance divided by speed, in this case \tau = r/c. Therefore, measuring the time delay gives a measure of the BLR radius; this is known as ‘Reverberation Mapping.’ In practice, this is done by measuring the time lag between changes in the continuum luminosity of the AGN and the luminosity of a bright emission line, like Civ or H\beta. This time lag will be proportional to the square root of L:

(3)   \begin{equation*} \tau \propto r \propto \sqrt{L} \propto D_L \sqrt{F} \end{equation*}

From Earth, we measure flux, not luminosity, so calculating \tau/\sqrt{F} gives a measure of the luminosity distance.
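A sketch of how \tau/\sqrt{F} behaves as a relative distance indicator; the lags and fluxes here are invented for illustration.

```python
import math

def blr_radius_light_days(lag_days):
    """r = c * tau: a lag in days is the BLR radius in light-days."""
    return lag_days

def relative_distance(lag_days, flux):
    """tau / sqrt(F): proportional to D_L. One absolute calibration
    (e.g. NGC 3227) is needed to turn this into a physical distance."""
    return lag_days / math.sqrt(flux)

# Two hypothetical AGN with identical lags (hence the same intrinsic
# luminosity) but a factor-of-4 flux difference must differ in
# distance by a factor of 2:
d_near = relative_distance(20.0, 4.0e-14)
d_far = relative_distance(20.0, 1.0e-14)
print(d_far / d_near)
```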

Recent advances in determining the lag (most importantly: correcting for contamination by the host galaxy, reobserving AGN to get better time lag measurements, and extending to lower luminosity quasars) have shown that the relationship r \propto \sqrt{L} holds well across four orders of magnitude in luminosity.


The authors used a sample of quasars whose lags between H\beta and the 5100\AA continuum flux are available, and which have had the light of the host galaxy removed from the measured flux to obtain the AGN continuum flux. Removing the host galaxy contribution has been shown to be very important (see Bentz et al. 2009a). The authors also correct the AGN fluxes for Galactic extinction. They then calibrate their sample’s \tau/\sqrt{F} relationship using the absolute distance measurement to the source NGC 3227. The authors plot a comparison of their \tau/\sqrt{F}-derived distances against the distances predicted using Hubble’s Law (and the most recent WMAP cosmology), with a dotted line showing equality between the two. The AGN distance estimates clearly follow the best current cosmology’s Hubble distances to good accuracy.

A Note on Absolute Calibration:

Absolute calibration was done using the galaxy NGC 3227, based on the distance modulus m-M=31.86 \pm 0.24 determined by Tonry et al. 2001.
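The distance modulus converts to a physical distance via d[pc] = 10^{(m-M+5)/5}; a quick check on the NGC 3227 value:

```python
def modulus_to_mpc(mu):
    """Distance modulus m - M -> distance in Mpc."""
    return 10.0 ** ((mu + 5.0) / 5.0) / 1.0e6

# NGC 3227 calibration (Tonry et al. 2001): m - M = 31.86 +/- 0.24
d = modulus_to_mpc(31.86)
print(f"NGC 3227 is ~{d:.1f} Mpc away")
```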


The authors then look at the sources of scatter in the \tau / \sqrt{F} relation, and ways in which this scatter could be reduced in the immediate future. The estimated scatter in their AGN Hubble diagram is 0.2 dex, equivalent to 0.5 mag in the distance modulus:

A Note on the AGN Hubble Diagram: The luminosity distance indicator is plotted as a function of redshift for the 38 AGNs with H\beta lag measurements. The luminosity distance and distance modulus are both plotted on the right (calibrated via NGC 3227). The current best cosmology is plotted as a solid line. Cosmologies with no dark energy component are plotted as dashed and dotted lines. The lower portion of the image shows a ratio of the data compared to the current cosmology.

SCATTER DUE TO OBSERVATIONAL UNCERTAINTY: The authors indicate this scatter may be reduced significantly simply by making multiple measurements of reverberation. For example, over a dozen observations will reduce an object’s observational uncertainty to 0.05 dex, whereas an object that has been observed only once carries an inherent scatter of 0.14 dex. This translates to 0.13 mag vs. 0.35 mag in observational uncertainty. They show (for NGC 5548) that there is very little intrinsic variation in the \tau/\sqrt{F} relation for a given object, indicating that repeated observations reduce the scatter.

SCATTER DUE TO EXTINCTION: The authors indicate a likely source of scatter is extinction associated with the AGN and its host galaxy. This is highlighted using the object NGC 3516, whose internal extinction was estimated via two different methods. Taking that into account, the object moves closer to the best-fit line. Therefore, an accurate correction for internal extinction should reduce the scatter in the diagram.

SCATTER DUE TO INCORRECT LAG MEASUREMENTS: The authors indicate that a very small number of misidentified lags can contribute a very large fraction of the scatter that is in excess of the observational uncertainty. This could easily be fixed by repeated measurements of the time lags. This is most obvious in the object NGC 7469, whose lag was most likely an incorrect measurement. Re-calculation shows that it should sit much closer to the best-fit line.

The scatter in the AGN Hubble diagram above, measured at 0.2 dex, could easily be decreased to 0.08 dex by increasing the number of observations per source, selecting more reliable lags, and making better intrinsic extinction estimates. This translates to a decrease in uncertainty from 0.50 mag to 0.20 mag.
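A rough sketch of the averaging argument. The 1/sqrt(N) averaging of independent measurements is my assumption (the notes only quote the single-epoch and “over a dozen” values), and the dex-to-magnitude factor of 2.5 is inferred from the quoted pairs (0.2 dex ↔ 0.5 mag, 0.14 dex ↔ 0.35 mag):

```python
import math

SINGLE_OBS_SCATTER_DEX = 0.14  # single-epoch scatter quoted in the notes

def scatter_after_n_obs(n):
    # Assumption: independent measurements average down as 1/sqrt(N).
    return SINGLE_OBS_SCATTER_DEX / math.sqrt(n)

def dex_to_mag(dex):
    # 1 dex in luminosity corresponds to 2.5 magnitudes.
    return 2.5 * dex

print(f"{dex_to_mag(0.14):.2f} mag")            # matches the 0.35 mag quoted
print(f"{scatter_after_n_obs(13):.3f} dex")     # ~0.04 dex after 13 epochs
```

Under this assumption, a dozen or so epochs lands near the 0.05 dex figure the authors quote.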


The use of the AGN Hubble diagram will extend from z=0.3 to 4, where the power to discriminate dark energy models lies. The r \propto \sqrt{L} relationship holds over four decades of luminosity, and there is no reason for it not to hold at higher redshifts, as it is based on photoionization physics. However, going to higher redshifts requires much longer temporal baselines. This is because, 1) as redshift increases, so do the observed-frame lags due to time dilation effects, and 2) at higher redshift we observe more luminous AGNs, which have larger BLRs and hence longer rest-frame lags. At z=2, the observed H\beta time lag is roughly 2 years, and at z=2.5 the minimum time required to measure a lag approaches 10 years. This is solved by using the Civ 1550\AA line, which is easily seen at high redshifts and is emitted much closer to the continuum source (i.e., higher ionization, closer to the source), which means shorter time lags. Civ has already been used on one occasion for time lag measurements; its lags should be shorter than H\beta’s by a factor of 3 or so, making Civ time lags useful out to z=4. It is conceivable that Nv may also be used out to z=6; however, that emission is severely blended with the Ly\alpha flux. Civ also has the added value of not requiring host-galaxy removal.
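The time-dilation point can be made concrete. The 240-day rest-frame H\beta lag below is an illustrative value (chosen so the z=2 observed lag matches the ~2 years quoted above); the factor-of-3 Civ shortening is from the text:

```python
def observed_lag_days(rest_lag_days, z):
    """Cosmological time dilation stretches lags by (1 + z)."""
    return rest_lag_days * (1.0 + z)

REST_HBETA_LAG = 240.0               # days; illustrative value only
REST_CIV_LAG = REST_HBETA_LAG / 3.0  # Civ lags ~3x shorter (per the text)

print(observed_lag_days(REST_HBETA_LAG, 2.0))  # 720 days, i.e. ~2 years
print(observed_lag_days(REST_CIV_LAG, 2.0))    # 240 days
```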


The authors note that since the r \propto \sqrt{L} relationship is very tight, it indicates that the ionization parameter is close to constant across the sample. It’s not surprising that the parameter is roughly constant; however, we have little understanding of why the density takes the same value in a given region of the BLR, across sources, and over a wide range of L. The authors indicate that precisely how constant the density is will be a limiting factor in the accuracy of the AGN Hubble diagram.

Final Notes:

The Civ emission line occurs at 1550\AA; the H\beta line occurs at 4860\AA.
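Which line is observable at a given redshift follows from \lambda_{obs} = \lambda_{rest}(1 + z); a quick sketch using the rest wavelengths above:

```python
def observed_wavelength(rest_angstrom, z):
    """lambda_obs = lambda_rest * (1 + z)."""
    return rest_angstrom * (1.0 + z)

CIV, HBETA = 1550.0, 4860.0  # rest wavelengths in Angstroms

# At z = 4, Civ is redshifted to the red end of the optical,
# while H-beta lands deep in the infrared:
print(observed_wavelength(CIV, 4.0))    # 7750.0 A
print(observed_wavelength(HBETA, 4.0))  # 24300.0 A
```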


ionization parameter equation

Civ: 1550


how bad is the effects of the host galaxy?

why is NGC 7469 a bad measurement? what did they do wrong?

Notes on ‘Towards a Unified AGN Structure’

Notes are based on the (submitted) paper ‘Towards a Unified AGN Structure,’ by Kazanas et al.


‘The notion of AGN as an astronomical object of solar system dimensions and luminosity surpassing that of a galaxy has been with us for about half a century… accretion onto a black hole as the source of the observed radiation…’

Spectroscopically inferred components (BLR, molecular torus, radio jets) led to the well-known Urry & Padovani (1995) structure of the AGN, which is simply an arrangement of the components, but with no physical motivation to support it:

Urry & Padovani, 1995, PASP, 107, 803

For instance, statistical analyses indicate the torus to have roughly h/R=1, but this is not supported by hydrostatic equilibrium. The Urry & Padovani picture also does not include UV and Xray outflows. The above components are independent, with physical properties assigned as needed to explain the observations. It is of note that the simultaneous presence of Xray and UV absorption outflowing at high velocities implies they belong to the same outflowing plasma, but this ALSO has lacked a physical explanation.

Murray et al. (1995) proposed that AGN outflows are driven off the inner regions of QSO accretion disks by UV and optical line radiation pressure, which can achieve the outflow velocities observed. Along with Proga et al., these works show that efficient wind driving by line pressure requires X-ray shielding, otherwise overionization occurs and line driving no longer works. A ‘failed wind’ from the innermost regions could provide this shielding, and the fact that BALs are X-ray weak supports this.

The obvious conclusion is that AGN outflows MUST be included in our picture of the AGN structure, but the broad range of observed velocities and ionization parameters makes this very difficult without some kind of underlying physical principle/structure/physics/etc. This paper proposes a wind dynamical model that simultaneously explains the outflows in UV and Xray, and extends to all accretion powered objects: Seyferts, BALs, XRBs.

Xray Spectroscopy, Warm Absorbers

Modern Xray spectroscopy confirms and details absorption from many ionization states of N, Ne, Mg, Na, Si, Ni, Fe, and O, all of it blueshifted, indicating a warm, outflowing absorber. In quasars specifically, most absorption in this regime is in the Fe-K features [Fe-K features are iron transitions involving the K shell]. Outflowing Xray absorption shows up in both BAL and non-BAL objects. Note the range of velocities, upwards of 0.8c! George Chartas has papers on these.

The ionization parameter in a chunk of gas is defined as:

    \[ \xi = {L \over n r^2} \]

where L is the ionizing luminosity, n the local gas density, and r the distance from the source.
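A numeric sketch of \xi; all the input values here are invented for illustration.

```python
def ionization_parameter(L, n, r):
    """xi = L / (n * r^2), with L in erg/s, n in cm^-3, r in cm."""
    return L / (n * r * r)

# Invented values: L = 1e44 erg/s, n = 1e10 cm^-3,
# r = 10 light-days ~ 2.6e16 cm.
xi = ionization_parameter(1.0e44, 1.0e10, 2.6e16)
print(f"xi ~ {xi:.0f} erg cm / s")
```

Note that if n(r) \propto 1/r, then \xi \propto 1/r: a single wind naturally spans a broad range of ionization states, which is the point the AMD discussion turns on.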

Absorption Measure Distribution

The fact that species such as FeXXV, MgV, and OI are detected in Xray spectra indicates there MUST be a diverse set of ionization parameters throughout the wind; i.e., the wind has a density distribution that produces ionic column densities sufficiently large to be detected. Originally this was modeled as multiple components with static \xi values; however, later studies fit the Xray absorption data with a continuous distribution of \xi. This has been called the Absorption Measure Distribution (AMD). In formula:

\[ AMD(\xi) = {dN_H \over d\log \xi} \]

Definition: the hydrogen-equivalent column density of specific ions, N_H, per decade of ionization parameter \xi, as a function of \xi.
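By this definition the total column is the integral of the AMD over log \xi. A small numeric sketch; the flat AMD level of 1e21 cm^-2 per decade is made up:

```python
def total_column(amd, log_xi_min, log_xi_max, steps=1000):
    """N_H = integral of AMD over log(xi), via the trapezoid rule."""
    h = (log_xi_max - log_xi_min) / steps
    total = 0.0
    for i in range(steps):
        a = log_xi_min + i * h
        total += 0.5 * (amd(a) + amd(a + h)) * h
    return total

# A flat AMD: the same column per decade of xi (level is invented).
flat_amd = lambda log_xi: 1.0e21  # cm^-2 per decade

N_H = total_column(flat_amd, -1.0, 3.0)  # integrate over 4 decades of xi
print(N_H)
```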

Aside: How to calculate AMD? (from Holczer et al. 2007)

‘The total hydrogen column along the line of sight can be expressed as an integral over its distribution in log \xi.’ A given object will have multiple, highly ionized species from which to glean an ionic column density. This is done through fitting routines, which are not the topic here. Assume you have the ionic columns for all species in your Xray spectrum. You must then fit a distribution dN_H/d\log \xi that recreates the observed column densities in all ions.

For a monotonic distribution of the wind density n(r) with radius r, determining the AMD is the same as determining the ionized wind’s density dependence on r. So, determine the AMD, and you can determine how the density of the wind changes with radius. All outflows for which an AMD can be established imply a flat to modestly increasing AMD with \xi, which says that n(r) \propto 1/r.

The AMD approach suggests a power-law dependence of the wind density on r, i.e., n \propto r^{-s}, where s = (2\alpha+1)/(\alpha+1); for the nearly flat AMDs observed this gives n \propto 1/r. There are some physics I don’t quite get here, but the result is: ‘for radiatively driven winds, the column DECREASES with INCREASING ionization and INCREASING velocity, in significant disagreement with the dependence found by Holczer and Behar.’
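The slope relation can be checked numerically; a flat AMD (\alpha = 0) gives exactly the n \propto 1/r profile quoted above:

```python
def density_slope(alpha):
    """s in n(r) ~ r^{-s} from the AMD power-law slope alpha:
    s = (2*alpha + 1) / (alpha + 1)."""
    return (2.0 * alpha + 1.0) / (alpha + 1.0)

print(density_slope(0.0))  # flat AMD -> s = 1, i.e. n ~ 1/r
print(density_slope(0.5))  # modestly rising AMD -> s between 1 and 2
```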

So, while it is hard to make radiation pressure work with the AMD (i.e., match the density profile the AMD implies), MHD winds off accretion disks CAN work with it.

From the paper, the AMD slopes in their data imply a density profile n(r) \propto r^{-\alpha} with 1<\alpha<1.3, ruling out the standard assumption of n(r) \propto r^{-2}. An MHD wind CAN produce this density relationship. The AMD discussion here comes from Behar (2009), ApJ, 703, 1346. ALSO note that a paper was able to show that an XRB wind could not have been driven by radiation or Xray heating, so magnetism must be invoked… MHD will solve all our problems.

MHD wind model

MHD winds are launched by poloidal magnetic fields under the combined action of rotation, gravity, and magnetic stresses (see Blandford & Payne 1982). The Grad-Shafranov equation is the force balance in the \theta-direction; the solution of this equation provides the angular dependence of all the fluid and magnetic field variables given their initial values at (r_0, 90°). The Grad-Shafranov equation has the form of a wind equation with several critical points; in our case the Alfven point is most important.

Winds, in general, are known to be self-similar when:

1. The radius is normalized to the Schwarzschild radius r_s (x = r/r_s, with r_s \approx 3\,(M/M_\odot) km, where M is the mass)

2. The mass flux \dot{M} is expressed in units of the Eddington accretion rate, \dot{m} = \dot{M}/\dot{M}_{\rm Edd} \propto \dot{M}/M

3. Their velocities are Keplerian, v = x^{-1/2} c
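A sketch of the dimensionless scalings 1–3. The r_s ≈ 3 km per solar mass figure is the standard Schwarzschild value, and the example masses are generic, not from the paper:

```python
def schwarzschild_radius_km(mass_msun):
    """r_s = 2GM/c^2 ~ 3 km per solar mass."""
    return 3.0 * mass_msun

def keplerian_velocity_c(x):
    """v = x^{-1/2} in units of c, where x = r / r_s."""
    return x ** -0.5

# Self-similarity: at the same dimensionless radius x, a 10 M_sun XRB
# and a 1e8 M_sun AGN have the same v/c; only the physical size differs.
x = 100.0
print(keplerian_velocity_c(x))         # ~0.1 c
print(schwarzschild_radius_km(10.0))   # ~30 km   (stellar-mass BH)
print(schwarzschild_radius_km(1.0e8))  # ~3e8 km  (AGN)
```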

This paper is able to show that equations for accretion, winds, photoionization, and the AMD are all independent of the object’s mass. **what about density in accretion and winds?

The scalings for ionization, velocity, and AMD of the winds are independent of mass, yet AGN, GBHCs, and XRBs all have different X-ray absorber properties. The ‘self-similarity’ is broken by the dependence of the ionizing flux on the mass of the accreting object, which therefore leads to different wind ionization properties.


The authors present a broad-strokes picture of the 2D AGN structure, which covers many decades of radius and frequency, and supplements UP95 with an outflow launched across the entire disk area, with velocity roughly equal to the local Keplerian velocity at each launch radius. Here is an image showing their model.

The outflows have the property that their ionization, velocity, and AMD scale mainly with the accretion rate \dot{m}. They should be applicable to all accretion powered sources, from galactic accreting black holes to quasars. The mass, M, sets the overall scale of the luminosity and size. The authors note the ionization structure depends on the spectrum of the ionizing radiation, which breaks the scale invariance of the wind flow on M.

The crucial and fundamental aspect of the underlying MHD wind model is its ability to produce density profiles that decrease as 1/r. This property allows the ionization parameter to decrease with distance, while still providing sufficient column to allow the detection of both high and low ionization species in AGN Xray spectra. This specific density scaling also allows the incorporation of torus physics.

The price to pay for the density distribution we seek is the need to invoke winds whose mass flux increases with distance from the source (as r^{1/2}).

A radiation driven wind, while 2D in the region of launch, will appear radial at sufficiently large distance, producing density profiles that go as 1/r^2. As appealing as radiative line driving is, there is little evidence for it. Magnetic fields, such as those in MHD winds, appear to have the right amount of momentum to drive a wind with the required \dot{m} \propto r^{1/2}.