Notes on ‘A new distance measure using AGN’ [sic]

For the Astronomy Journal Club this week I will be presenting the paper entitled ‘A New Cosmological Distance Measure Using Active Galactic Nuclei,’ authored by Watson, Denney, Vestergaard, and Davis. This work was published in The Astrophysical Journal Letters, 740:L59, on 20 October 2011.

A link to the arXiv paper: http://lanl.arxiv.org/abs/1109.4632

MOTIVATION:

Finding reliable methods to determine distances, especially LARGE distances, has been an ongoing problem throughout the history of astrophysics. Only very recently have we come to realize that Type Ia supernovae can be used accurately as standard candles, a feat that led to our firm observation of the accelerated expansion of the universe. So far, supernovae have only been used out to z = 1.7, and are unlikely to go further than z = 2 (see Riess et al.). Due to their large numbers and their luminosity extending out to very high redshift (0.3 < z < 7), AGN have long been looked to as standard candles, but never confirmed as such in any way. If they could be used, they would provide reliable distance measures far enough away to distinguish between dark energy models.

METHOD:

Broad-line Reverberation Mapping – see Peterson 1993 – http://adsabs.harvard.edu/abs/1993PASP..105..247P

AGN are powered by a supermassive black hole (SMBH) at the centre of the galaxy, onto which matter is accreting. The friction of the infalling material generates a large amount of energy across the EM spectrum. Surrounding the SMBH, at some distance, is a large amount of high-velocity gas clouds, the broad-line region (BLR), which produce the emission features seen in the spectra of quasars. The BLR is directly controlled by the ionizing photons from the continuum source. For instance, the size of the BLR (meaning, its distance from the central source) is directly connected to the number of ionizing photons and the optical depth of the gas. The ionizing flux drops with distance following the inverse square law, which means the radius of the BLR must be proportional to the square root of the luminosity.

(1)   \begin{equation*} L = 4 \pi r^2 F \end{equation*}

Therefore, if we were able to establish the size of the BLR, r, and measure the flux, it would lead to a measure of the luminosity of the quasar, and hence a measure of its luminosity distance. Hence, r \propto \sqrt{L}.

Aside: The Luminosity Distance is typically denoted D_L in the equation:

(2)   \begin{equation*} D_L = \sqrt{{L \over 4 \pi F}} \end{equation*}

If you measured the flux for some source of light, and had a priori knowledge of the source’s Luminosity, you could then measure the value D_L, the ‘luminosity distance’ to the source of light. In the case of a quasar, we are able to measure the flux on Earth, and use Reverberation Mapping to measure the quasar’s Luminosity, leading to a direct measurement of the quasar’s Luminosity Distance.
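The relation in equation (2) is simple enough to sanity-check numerically. A minimal sketch (the luminosity and distance values below are made up for illustration):

```python
import math

def luminosity_distance(L, F):
    """Invert L = 4*pi*D_L**2 * F for D_L; units must be consistent
    (L in erg/s and F in erg/s/cm^2 give D_L in cm)."""
    return math.sqrt(L / (4.0 * math.pi * F))

# Sanity check with assumed numbers: a source of luminosity L placed
# at 1 Mpc should come back out at 1 Mpc.
MPC_CM = 3.086e24                      # 1 Mpc in cm
L = 1e44                               # erg/s, a typical AGN luminosity
F = L / (4.0 * math.pi * MPC_CM**2)    # flux that source would show at 1 Mpc
print(luminosity_distance(L, F) / MPC_CM)   # ~1.0
```

The point is simply that flux is what we measure; a luminosity from reverberation mapping is what turns it into a distance.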

The BLR emission lines are emitting photons reprocessed from the continuum photons; therefore, the emission lines should vary in response to changes in the continuum with some associated time lag \tau. The time lag is simply distance over speed, in this case \tau = r/c. Therefore, measuring the time delay yields a measure of the BLR radius; this is known as ‘Reverberation Mapping.’ In practice, this is done by measuring the time lag between changes in the continuum luminosity of the AGN and the luminosity of a bright emission line, like Civ or H\beta. This time lag will be proportional to the square root of L:

(3)   \begin{equation*} \tau \propto r \propto \sqrt{L} \propto D_L \sqrt{F} \end{equation*}

From Earth, we measure flux, not luminosity, so calculating \tau/\sqrt{F} gives a measure of the luminosity distance.

Recent advancements in determining the lag, most importantly improving the removal of contaminating effects of the host galaxy, reobserving AGN to get better time lag measurements, and extending to lower luminosity quasars, have shown that the relationship r \propto \sqrt{L} holds well across four orders of magnitude in luminosity.
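The lag itself is found by cross-correlating the continuum and emission-line light curves. Real campaigns use interpolated cross-correlation functions on irregularly sampled data; this is only a toy, evenly sampled sketch of the idea, with a made-up light curve:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)                       # days, even sampling for simplicity
continuum = rng.normal(size=t.size)
continuum = np.convolve(continuum, np.ones(10) / 10, mode="same")  # smoothed driver

true_lag = 15                            # days; the line light curve echoes the continuum
line = np.roll(continuum, true_lag)

# Cross-correlate over trial lags and take the peak as the measured lag.
trial_lags = range(60)
ccf = [np.corrcoef(continuum[: -lag or None], line[lag:])[0, 1] for lag in trial_lags]
measured = int(np.argmax(ccf))
print(measured)                          # 15
```

The peak of the correlation recovers the input lag; with real data, the uncertainty on this peak is what drives the 0.14 dex single-epoch scatter discussed below.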

DATA USED:

The authors used a sample of quasars whose lags between H\beta and the 5100\AA continuum flux are available, and which have had the effects of the host galaxy removed from the measured flux to obtain the AGN continuum flux. Removing the host galaxy contribution has been shown to be very important (see Bentz et al. 2009a). The authors correct the AGN for Galactic extinction. They then calibrate their sample’s \tau/\sqrt{F} relationship using the absolute distance measurement to the source NGC 3227. The authors plot a comparison of their \tau/\sqrt{F}-derived distances against the predicted distances using Hubble’s Law (and the most recent WMAP cosmology), with a dotted line showing equality between the two. The AGN distance estimates clearly follow the best current cosmology Hubble distances to good accuracy.

A Note on Absolute Calibration:

Absolute calibration was done using the galaxy NGC 3227, based on the distance modulus m - M = 31.86 \pm 0.24 determined by Tonry et al. 2001.
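The distance modulus converts to a physical distance via m - M = 5 log10(d) - 5 (d in pc); a quick check of what the NGC 3227 calibration implies:

```python
def modulus_to_distance_mpc(mu):
    """Distance in Mpc from distance modulus mu = m - M = 5*log10(d_pc) - 5."""
    return 10 ** ((mu + 5.0) / 5.0) / 1e6

# NGC 3227 modulus and its quoted uncertainty (Tonry et al. 2001):
for mu in (31.86 - 0.24, 31.86, 31.86 + 0.24):
    print(round(modulus_to_distance_mpc(mu), 1))   # ~21.1, 23.6, 26.3 Mpc
```

So the whole AGN distance ladder here is anchored to a single galaxy at roughly 24 Mpc, with the \pm0.24 mag error translating directly into a global distance uncertainty.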

RESULTS:

The authors then look at the sources of scatter in the \tau/\sqrt{F} relation, and ways in which this scatter could be reduced in the immediate future. The estimated scatter in their AGN Hubble diagram is 0.2 dex, equivalent to 0.5 mag in the distance modulus:

A Note on the AGN Hubble Diagram: The luminosity distance indicator is plotted as a function of redshift for the 38 AGNs with H\beta lag measurements. The luminosity distance and distance modulus are both plotted on the right (calibrated via NGC 3227). The current best cosmology is plotted as a solid line. Cosmologies with no dark energy component are plotted as dashed and dotted lines. The lower portion of the image shows a ratio of the data compared to the current cosmology.

SCATTER DUE TO OBSERVATIONAL UNCERTAINTY: The authors indicate this scatter may be reduced significantly simply by making multiple measurements of reverberation. For example, over a dozen observations will reduce an object’s observational uncertainty to 0.05 dex, whereas an object that has been observed only once carries an inherent scatter of 0.14 dex. This translates to 0.13 mag vs. 0.35 mag in observational uncertainty. They show (for NGC 5548) that there is very little intrinsic variation in the \tau/\sqrt{F} relation for a given object, indicating that repeated observations mean less scatter.

SCATTER DUE TO EXTINCTION: The authors indicate a likely source of scatter is extinction associated with the AGN and its host galaxy. This is highlighted using the object NGC 3516, whose internal extinction was estimated via two different methods. Taking that into account, the object moves closer to the best-fit line. Therefore, an accurate correction for internal extinction should reduce the scatter in the diagram.

SCATTER DUE TO INCORRECT LAG MEASUREMENTS: The authors indicate that a very small number of misidentified lags can contribute a very large fraction of the scatter that is in excess of the observational uncertainty. This could easily be fixed by repeated measurements of the time lags. This is most obvious in the object NGC 7469, which was most likely an incorrect measurement. Re-calculation shows that it should be much closer to the best-fit line.

The scatter in the AGN Hubble diagram above, measured at 0.2 dex, could easily be decreased to 0.08 dex by increasing the number of observations per source, selecting more reliable lags, and making better intrinsic extinction estimates. This translates to a decrease in uncertainty from 0.50 mag to 0.20 mag.
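The dex-to-magnitude conversions quoted throughout this section are just the factor of 2.5 between logarithmic flux ratios and magnitudes:

```python
def dex_to_mag(dex):
    """Scatter in log10(flux or luminosity), in dex, expressed in
    magnitudes: 1 dex = 2.5 mag."""
    return 2.5 * dex

# The conversions quoted in the text:
for dex in (0.2, 0.14, 0.05, 0.08):
    print(dex, "dex =", round(dex_to_mag(dex), 3), "mag")   # 0.5, 0.35, 0.125, 0.2 mag
```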

PROSPECTS FOR EXTENSION TO HIGH REDSHIFT:

The use of the AGN Hubble diagram will extend from z = 0.3 to 4, where the power to discriminate dark energy models lies. The r \propto \sqrt{L} relationship holds over four decades of luminosity, and there is no reason for it not to hold at higher redshifts, as it is based on photoionization physics. However, going to higher redshifts requires much longer temporal baselines. This is because, 1) as redshift increases, so do the observed-frame lags due to time dilation effects, and 2) at higher redshift we observe more luminous AGNs, which have larger BLRs, and hence longer rest-frame lags. At z=2, the H\beta time lag is roughly 2 years; however, at z=2.5, the minimum time required to measure a lag approaches 10 years. This is solved by using the Civ 1550\AA line, which is easily seen at high redshifts and is expected to lie much closer to the continuum source (i.e., higher ionization, closer to the source), which means shorter time lags. Civ has already been used on one occasion for time lag measurements, and its lags should be shorter than H\beta by a factor of 3 or so, making Civ time lags useful out to z=4. It is conceivable that Nv may also be used out to z=6; however, that emission is severely blended with the Ly\alpha flux. Civ also has the added value of not requiring host-galaxy removal.
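Point 1 above is just cosmological time dilation stretching the lag; a small sketch with illustrative numbers (the 2 yr Hβ lag and factor-of-3 Civ shortening are the values quoted in the text, applied here in a simplified way):

```python
def observed_lag_yr(rest_lag_yr, z):
    """Observed-frame lag stretched by cosmological time dilation:
    tau_obs = tau_rest * (1 + z)."""
    return rest_lag_yr * (1.0 + z)

# Illustration: a 2 yr rest-frame Hbeta lag at z = 2 means a ~6 yr campaign,
# while a Civ lag ~3x shorter stays tractable.
print(observed_lag_yr(2.0, 2.0))                 # 6.0
print(round(observed_lag_yr(2.0 / 3.0, 2.0), 2))  # 2.0
```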

AUTHOR’S NOTE ON IONIZATION PARAMETER/DENSITY:

The authors note that since the r \propto \sqrt{L} relationship is very tight, it indicates that the ionization parameter is close to constant across the sample. It is not surprising that the parameter is constant; however, we have little understanding of why the density takes the same value in a given region of the BLR, across sources, and for a wide range of L. The authors indicate that precisely how constant the density is will be a limiting factor in the accuracy of the AGN Hubble diagram.

Final Notes:

The Civ emission line occurs at 1550\AA, the H\beta line occurs at 4860\AA.

questions:

ionization parameter equation

Civ: 1550

Hbeta: 4860

how bad is the effects of the host galaxy?

why is NGC 7469 a bad measurement? what did they do wrong?

Notes on ‘Towards a Unified AGN Structure’

Notes are based on the (submitted) paper ‘Towards a Unified AGN Structure,’ by Kazanas et al.

Motivation:

‘The notion of AGN as an astronomical object of solar system dimensions and luminosity surpassing that of a galaxy has been with us for about half a century… accretion onto a black hole as the source of the observed radiation…’

Spectroscopically inferred components (BLR, molecular torus, radio jets) led to the well-known Urry & Padovani (1995) structure of the AGN, which is simply an arrangement of the components, but with no physical motivation to support it:

Urry & Padovani, 1995, PASP, 107, 803

For instance, statistical analyses indicate the torus to have roughly h/R = 1, but this is not supported by hydrostatic equilibrium. The Urry & Padovani picture also does not include UV and Xray outflows. The above components are independent, with physical properties assigned as needed to understand the observations. It is of note that the simultaneous presence of Xray and UV absorption outflowing at high velocities implies they belong to the same outflowing plasma, but this ALSO has lacked a physical explanation.

Murray et al. (1995) proposed that AGN outflows are driven off the inner regions of the QSO accretion disks by UV and optical line radiation pressure to achieve the outflow velocities observed. Along with Proga et al., these works show that efficient wind driving by line pressure requires X-ray shielding, otherwise overionization occurs and line driving no longer works. A ‘failed wind’ from the innermost regions could provide this shielding, and the fact that BALs are X-ray weak supports this.

The obvious conclusion is that AGN outflows MUST be included in our picture of AGN structure, but the broad range of observed velocities and ionization parameters makes this very difficult without some kind of underlying physical principle/structure. This paper proposes a wind dynamical model that simultaneously explains the outflows in UV and Xray, and extends to all accretion powered objects: Seyferts, BALs, XRBs.

Xray Spectroscopy, Warm Absorbers

Modern Xray spectroscopy confirms and details absorption from many ionization states of N, Ne, Mg, Na, Si, Ni, Fe, and O, with all absorption blueshifted, indicating a warm, outflowing absorber. In quasars specifically, most absorption in this regime is in the Fe-K features [Fe-K features are iron transitions involving the K shell]. Outflowing Xray absorption shows up in both BAL and non-BAL quasars. Note the range of velocities, upwards of 0.8c! George Chartas has papers on these.

The ionization parameter in a chunk of gas is defined as:

    \[ \xi = {L \over n r^2} \]

where L is the ionizing luminosity, n the local gas density, and r the distance from the source.
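In code, with a set of illustrative (made-up) numbers to get a feel for the scale:

```python
def ionization_parameter(L_ion, n, r):
    """xi = L / (n * r^2); with L in erg/s, n in cm^-3, and r in cm,
    xi comes out in erg cm s^-1, the usual warm-absorber units."""
    return L_ion / (n * r**2)

# Illustrative numbers only: a 1e44 erg/s ionizing luminosity seen by
# gas of density 1e5 cm^-3 at 1 pc:
PC_CM = 3.086e18   # 1 pc in cm
print(ionization_parameter(1e44, 1e5, PC_CM))   # ~105 erg cm/s
```

The point of the AMD discussion below is that no single (n, r) pair, and hence no single \xi, describes a real wind.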

Absorption Measure Distribution

The fact that species such as FeXXV, MgV, and OI are detected in Xray spectra indicates there MUST be a diverse set of ionization parameters throughout the wind; i.e., the wind has a density distribution that produces ionic column densities sufficiently large to be detected. This was originally modelled as multiple components with static \xi values; however, later studies fit the Xray absorption data with a continuous distribution of \xi. This has been called the Absorption Measure Distribution (AMD). In formula:

\[ AMD(\xi) = {dN_H \over d\log \xi} \]

Definition: the hydrogen-equivalent column density of specific ions, N_H, per decade of ionization parameter \xi, as a function of \xi.

Aside: How to calculate AMD? (from Holczer et al. 2007)

‘The total hydrogen column along the line of sight can be expressed as an integral over its distribution in log \xi.’ A given object will have multiple, highly ionized species from which to glean an ionic column density. This is done through fitting routines, which are not the topic here. Assume you have the ionic columns for all species in your Xray spectrum. You must then fit a distribution dN_H/d\log \xi that recreates the observed column densities in all ions.

‘For a monotonic distribution of the wind density n(r) with radius r, determination of the AMD is the same as determining the ionized wind’s density dependence on r.’ So, determine the AMD, and you can determine how the density of the wind changes with radius. All outflows where the AMD can be established imply a flat to modestly increasing AMD with \xi, which says that n(r) \propto 1/r.

The AMD approach suggests a power-law dependence of the wind density on r, i.e., n \propto r^{-s}, where s = (2\alpha+1)/(\alpha+1), which for these AMD slopes gives n \propto 1/r. There is some physics here I don’t quite get, but the result is: ‘for radiatively driven winds, the column DECREASES with INCREASING ionization and INCREASING velocity, in significant disagreement with the dependence found by Holczer and Behar.’
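The mapping from AMD slope to density profile is a one-liner; a sketch using the s = (2\alpha+1)/(\alpha+1) relation quoted above (the \alpha = 0.3 example value is my own, for illustration):

```python
def density_index(alpha):
    """Wind density power-law index s in n ~ r^(-s), from the AMD slope
    alpha via the relation quoted in the text: s = (2a + 1) / (a + 1)."""
    return (2.0 * alpha + 1.0) / (alpha + 1.0)

# A flat AMD (alpha = 0) gives s = 1, i.e. n ~ 1/r; a modestly rising AMD
# barely changes that, and s = 2 (the spherical-wind value) is only
# approached as alpha grows very large.
print(density_index(0.0))   # 1.0
print(density_index(0.3))   # ~1.23
```

This is why a flat-to-modestly-rising AMD singles out n \propto 1/r and rules out the n \propto 1/r^2 spherical wind.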

So, while it is hard to make radiation pressure work with the AMD (i.e., match the density profile implied by the AMD), MHD winds off accretion disks CAN work with it.

From the paper, the AMD slopes in their data say the density profile should be n(r) \propto r^{-\alpha} with 1 < \alpha < 1.3, ruling out the standard assumption of n(r) \propto r^{-2}. An MHD wind CAN produce this density relationship. The AMD discussion here comes from Behar (2009), ApJ, 703, 1346. ALSO note that a paper was able to show that an XRB wind could not have been driven by radiation or Xray heating, thus magnetism must be invoked… MHD will solve all our problems.

MHD wind model

MHD winds are launched by poloidal magnetic fields under the combined action of rotation, gravity, and magnetic stresses (see Blandford & Payne 1982). The Grad-Shafranov equation is the force balance in the \theta-direction; the solution of this equation provides the angular dependence of all the fluid and magnetic field variables, given their initial values at (r_0, 90°). The Grad-Shafranov equation has the form of a wind equation with several critical points; in our case the Alfven point is most important.

Winds, in general, are known to be self-similar when:

1. The radius is normalized to the Schwarzschild radius r_s (x = r/r_s, with r_s = 3M km, where M is the mass in solar masses)

2. The mass flux \dot{M} is expressed in units of the Eddington accretion rate, \dot{m} = \dot{M}/\dot{M}_{Edd} \propto \dot{M}/M

3. Their velocities are Keplerian, v = x^{-1/2} c
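The three normalizations above can be sketched numerically. A minimal example (the 1e8 solar-mass black hole and the launch radius x = 1000 are assumed values, purely for illustration):

```python
C_KMS = 299792.458   # speed of light in km/s

def schwarzschild_radius_km(M_solar):
    """r_s ~ 3 km per solar mass (2GM/c^2, as rounded in the text)."""
    return 3.0 * M_solar

def keplerian_speed_kms(x):
    """v = x^(-1/2) * c, for normalized radius x = r / r_s."""
    return C_KMS / x**0.5

# At fixed x the speed is the same for ANY mass; only the physical radius
# scales with M. Illustration for a 1e8 M_sun black hole at x = 1000:
print(schwarzschild_radius_km(1e8))          # 300000000.0 km
print(round(keplerian_speed_kms(1000.0)))    # 9480 km/s
```

This mass-independence at fixed x is exactly the self-similarity the paper exploits.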

This paper is able to show that equations for accretion, winds, photoionization, and the AMD are all independent of the object’s mass. **what about density in accretion and winds?

The scalings for the ionization, velocity, and AMD of the winds are independent of mass, yet AGN, GBHCs, and XRBs all have different X-ray absorber properties. The ‘self-similarity’ is broken by the dependence of the ionizing flux on the mass of the accreting object, which leads to different wind ionization properties.

Summary

The authors presented a broad-strokes picture of the 2D AGN structure, which covers many decades of radius and frequency, and supplements UP95 with an outflow launched across the entire disk area, with velocity roughly equal to the local Keplerian velocity at each launch radius. Here is an image showing their model.

The outflows have the property that their ionization, velocity, and AMD scale mainly with the accretion rate \dot{m}. They should be applicable to all accretion powered sources, from galactic accreting black holes to quasars. The mass, M, sets the overall scale of the luminosity and size. The authors note the ionization structure depends on the spectrum of the ionizing radiation, which breaks the scale invariance on M of the wind flow.

The crucial and fundamental aspect of the underlying MHD wind model is its ability to produce density profiles that decrease as 1/r. This property allows the ionization parameter to decrease with distance, while still providing sufficient column to allow the detection of both high and low ionization states in AGN Xray spectra. This specific density scaling also allows the incorporation of torus physics.

The price to pay for the density distribution we seek is the need to invoke winds whose mass flux increases with distance from the source (as r^{1/2}).

A radiation driven wind, while 2D in the region of launch, will appear radial at sufficiently large distances, producing density profiles that go as 1/r^2. As appealing as radiative line driving is, there is little evidence for it. Magnetic fields, such as those in MHD models, appear to have the right amount of momentum to drive a wind with the required \dot{m} \propto r^{1/2}.

CFHT queue update, Part 5: The End

The observing team at CFHT was able to sneak my last observing group in mid January, allowing me to get 100% completeness on the proposal we submitted last year. Now it’s on to the data reduction and this year’s round of proposals.

Quasar Club: BAL Variability on Short Time-Scales

Title: Variability in quasar broad absorption line outflows III: What happens on the shortest time-scales?
Authors: Capellupo, Hamann, Shields, Halpern, Barlow
Abs: Broad absorption lines (BALs) in quasar spectra are prominent signatures of high-velocity outflows, which might be present in all quasars and could be a major contributor to feedback to galaxy evolution. Studying the variability in these BALs allows us to further our understanding of the structure, evolution, and basic physical properties of the outflows. This is the third paper in a series on a monitoring programme of 24 luminous BAL quasars at redshifts 1.2 < z < 2.9. We focus here on the time-scales of variability in CIV 1549A BALs in our full multi-epoch sample, which covers time-scales from 0.02-8.7 yr in the quasar rest-frame. Our sample contains up to 13 epochs of data per quasar, with an average of 7 epochs per quasar. We find that both the incidence and the amplitude of variability are greater across longer time-scales. Part of our monitoring programme specifically targeted half of these BAL quasars at rest-frame time-scales <2 months. This revealed variability down to the shortest time-scales we probe (8-10 days). Observed variations in only portions of BAL troughs or in lines that are optically thick suggest that at least some of these changes are caused by clouds (or some type of outflow substructures) moving across our lines of sight. In this crossing cloud scenario, the variability times constrain both the crossing speeds and the absorber locations. Typical variability times of order ~1 year indicate crossing speeds of a few thousand km/s and radial distances near ~1 pc from the central black hole. However, the most rapid BAL changes occurring in 8-10 days require crossing speeds of 17 000 – 84 000 km/s and radial distances of only 0.001-0.02 pc. These speeds are similar to or greater than the observed radial outflow speeds, and the inferred locations are within the nominal radius of the broad emission line region.
arXiv: 1211.4868

Notes and Thoughts:
This is a 3 paper series: see Paper I and Paper II for more info.
Paper on a spectroscopic monitoring campaign for 24 BAL quasars over 1.2<z<2.9, monitoring specifically Civ 1549. Time-scales covered in this paper are 0.02 – 8.7 yr rest frame.
Results: typical variability of order 1 year suggests orbital crossing speed of ~1000 km/s at 1pc, typical variability of order 8-10 days suggests orbital crossing speed of ~10000 to 100000 km/s at 0.001-0.02pc.
Paper II covers the two major scenarios of variability: change in ionization vs. clouds moving across our line of sight. They cannot rule out either scenario. Need to write up a solid comparison of these two scenarios.
Figure 1 – the amplitude of variability gets larger with longer timescales. This is in keeping with other results (Gibson, for instance); i.e., you have to wait longer to get bigger changes in absorption.
Out of the 17 quasars for which Capellupo has data on these shortest timescales, only 2 exhibited Civ BAL variability.
object 1 – 1246-0542 – secure variability over 8 days rest frame
Civ is obviously variable in two places; they shifted this to see if Nv and Siv are also. Nv is; Siv is not. They note that over their observations the BAL returned to its previous state from 25 days earlier. They also have further epochs on this target, which show variability in the same velocity bins. Further confirmation of variability on short timescales.
object 2 – 0842+3431 – secure variability over 10 days rest frame
They match the Civ to where the Siv should be and find potential variability. Previous data from Lick (early 1991 and late 1991) show short-term variability in the same velocity bin for Siv, but not Civ.
Method by which to examine Civ variability over all timescales (see Figure 9)
To examine the relationship of Civ BAL variability with time-scale across the full measured range from 0.02 to 8.7 yr, the authors compare the BALs in each pair of observations at all velocities in every quasar, then count the occurrences of Civ BAL variability [Note: this uses the definition of BAL variability in Paper I]. They create a probability by dividing the number of occurrences of variability by the number of measurements, where a pair is one measurement, and plot the measured probability of detecting Civ BAL variability against delta T in Fig. 9. [Note: these are probabilities for detecting variability between two measurements separated by deltaT, NOT for detecting variability at any time in that deltaT time-frame. In other words, if a quasar varied and then returned to its initial state within deltaT, it would not contribute to the plot.]
The authors did much analysis on removing possible biases from Figure 9. Removing the bias from the quasars observed most often (which are known to be variable) changes little. They also addressed the bias from spectra being clustered in time rather than spread over all timescales. They do indeed find a slight bias towards greater variability fractions at longer time-scales.
Summary of Results:
Paper I describes general trends in Civ BAL variability and finds that variability occurs more often at higher outflow velocities and in shallower troughs.
Paper I and II note that variability typically occurs in just portions of troughs
Paper II also directly compares variability in Civ to Siv. Siv BALs are more likely to vary than Civ BALs; perhaps this is due to the tendency for weaker lines to vary more.
Paper III detects a strong trend towards greater variability fractions over longer time-scales. Both the incidence and amplitude of variability increase with deltaT.
Paper III detects variability down to deltaT of 8 days.

Discussion:
causes of variability
time scales of variability help constrain the location of the outflowing gas, but the interpretation depends heavily on the cause
1. change of ionization in far UV
2. outflow cloud moving across line of sight
Changes in ionizing flux incident on the entire outflow should cause global changes in the ionization of the flow. A change of covering fraction due to moving clouds is less likely when absorption regions at different velocities vary in concert, because this would require coordinated movements among outflow structures at different velocities. However, changes in narrow portions of the BAL fit more naturally with crossing clouds. Variability in saturated Civ absorption lines strongly favours the crossing cloud scenario. Other possibilities: a change in the size of the continuum source, or instabilities within the flows themselves.
Implications of variability
Ionization:
important: ionization changes require significant variations in the quasar’s incident (ionizing) flux. This seems unlikely because this study’s sample consists mostly of luminous quasars, which have a smaller amplitude of continuum variability, and the amplitude shrinks at shorter timescales [reference?]
Interesting result from Misawa et al. (2007) [they detect variability in a mini-BAL over 16 days, also too short for continuum variability]: they propose a screen of gas with varying optical depth located between the continuum source and the absorbing gas. The screen co-rotates with the disk, and if the screen is clumpy, then as it rotates it will let less/more continuum light through, and thus change the ionization parameters of the absorbing gas quickly. Another way of creating ionization changes on short timescales is hotspots on an accretion disk. However, ALL of these scenarios predict global changes across BAL troughs, which we typically do not see.
Crossing Clouds:
constant ionization and density; variability is due to a component moving across the continuum source. Use the time scale to estimate the speed, given a geometry for the emission and absorption regions. Estimating a characteristic diameter of the continuum source at 1500 Ang using observed fluxes at this wavelength and a standard bolometric correction factor, they find D1500 ~ 0.008 pc, and the BEL is Dciv ~ 0.3 pc.
results from the disk/knife-edge geometry: 17000 km/s to 84000 km/s TRANSVERSE, and distances of 0.001 pc, which is smaller than the UV continuum estimate of 0.004 pc! Interesting
These results provide much smaller distance constraints than the changing ionization scenario. There is indirect evidence for the crossing cloud scenario because the variations at -17 200 km/s appear to be in an optically thick trough.
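The crossing-speed estimate is, at heart, just size over timescale. A rough sketch assuming the cloud must traverse the D1500 ~ 0.008 pc continuum source quoted above (the ~1 yr timescale is the "typical variability" case from the abstract; the paper's detailed geometry will differ):

```python
PC_KM = 3.086e13     # 1 pc in km
YR_S = 3.156e7       # 1 yr in s

def crossing_speed_kms(size_pc, dt_yr):
    """Transverse speed needed to cross a structure of diameter size_pc
    in a rest-frame variability time dt_yr (years)."""
    return size_pc * PC_KM / (dt_yr * YR_S)

# ~1 yr variability across the D1500 ~ 0.008 pc continuum source:
print(round(crossing_speed_kms(0.008, 1.0)))   # ~7800 km/s
```

That lands in the "few thousand km/s" regime quoted for year-long variability; squeezing the timescale to days is what pushes the speeds toward the 17000-84000 km/s extremes.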
Follow Up:
Read Papers I and II for understanding of what exactly is called variability, and comparison on Ionization vs. Absorption.

Finding Asteroid #2: Serendipity…..again!

Late last year I posted an article on finding an asteroid, serendipitously, in my data from CFHT (see the post here). Well… I found another one! Luckily, this new asteroid happened to pass by a beautiful example of an edge-on spiral galaxy.

Can you see it? There are about 20 min between the first two exposures, and almost 4 hours before the third.

You may be wondering, though, why the three images above do not look like the beautiful pictographs we are used to seeing (these, for instance). Why are my images black and white? Why is that edge-on galaxy so boring looking? Let me explain.

We take images of the sky using a CCD, which is just an array of light sensitive pixels. These pixels are sensitive to a rather large range of light, extending down to the near-UV, through optical, into the near-infrared. But, the pixels do not care what kind of light they detect. When a photon hits the pixel, a photon of any wavelength,* the pixel records it. In order to make the beautiful astronomy pictures we see, it takes a few more steps:

First, you have to filter out the colours you want individually. The above three images are the same patch of sky imaged using three different filters: the g filter (390nm – 595nm), the r filter (595nm – 680nm), and the i filter (680nm – 830nm).** While all the images look black and white, they are in fact exposed using three completely different ranges of colours of light. Again, the pixels do not care what colour. Second, you have to combine the various filtered light images into one, while imposing a colour scheme that matches what the colour should be in the wavelength ranges of the filters that you used. For instance, since the above image uses g, r, and i, I would add the images together with X amount of light that is coloured near 500nm for g, Y amount of light coloured near 650nm for r, and Z amount of light coloured near 750nm for i.
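At its simplest, that second step is just stacking the three filtered frames into RGB channels with per-channel weights. A toy sketch with random arrays standing in for real exposures (real composites use far more careful stretches than this):

```python
import numpy as np

def simple_rgb(i_img, r_img, g_img, weights=(1.0, 1.0, 1.0)):
    """Stack i/r/g frames as the R/G/B channels of one colour image.

    Each frame is normalized to [0, 1] on its own; the weights play the
    role of the 'X, Y, Z amounts' of colour described above.
    """
    def stretch(img, w):
        img = img - img.min()          # shift so the darkest pixel is 0
        peak = img.max()
        return w * (img / peak) if peak > 0 else img

    channels = [stretch(f, w) for f, w in zip((i_img, r_img, g_img), weights)]
    return np.clip(np.dstack(channels), 0.0, 1.0)

# Toy 64x64 frames standing in for real g, r, i exposures:
rng = np.random.default_rng(1)
g, r, i = (rng.random((64, 64)) for _ in range(3))
rgb = simple_rgb(i, r, g)
print(rgb.shape)   # (64, 64, 3)
```

The result is an (height, width, 3) array that any image library will display as a colour picture; tuning the weights is what sets the final colour balance.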

A final thought: For those of you who have looked through a telescope at things like the Orion Nebula, Andromeda Galaxy, the Ring Nebula, or any other cool astronomical object, you may notice that these objects appear black and white to our eye as well, and not like the vibrant images I linked to just now. There is no CCD involved, so why are they black and white? The answer lies in our eye’s biology. We have two different, highly specialized light-sensing cells in our eye: rods and cones. Rods are used to detect light vs. dark, contrast, brightness. They see in black and white. Cones are used to detect different colours; there are three kinds of cones for red, green, and blue (RGB). Cones are far less sensitive than rods; in fact, rods can sense light up to 100x dimmer than cones can. So when we look through a telescope at a distant nebula, there is, in fact, RGB light travelling through space to us, but it is not intense enough to activate our cones. The rods DO perceive the light, and give us a great contrast image of the astronomical object.

Do not take this to mean that there is no colour in space, and that we photoshop it in. No, the colour is there, we just need to tease it out!

________________________________

* Wavelength is the measure of the distance from peak to peak of the light wave. Ultraviolet light is 10nm to 400nm, optical light is from 400nm to 800nm, and infrared is 800nm to 2500nm. These are rough numbers.
** You can see my research post on how I sorted my CFHT targets into different redshift bins using the filters here. It also has a nice graph showing the wavelength range of the u,g,r,i,z filters.