A seemingly trivial question came up when I reorganised some of my teaching materials. We all know that light intensity decreases as the distance between the light source and the observer increases. Hence, the apparent magnitude of a celestial object is not a measure of its luminosity, or more precisely, the amount of light energy the object emits per unit time in a certain specified frequency window. That is why astronomers use the notion of absolute magnitude, defined as the apparent magnitude the object would have if observed at a distance of 10 parsecs (pc), as a measure of its luminosity, or more precisely, the power of light the object emits in that specified frequency window. (By definition, a celestial object is 1 pc from the Sun if its angle of parallax equals 1 arcsecond.) My question is: why is the distance used in this definition set to 10 pc?
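To make the definition concrete: since brightness falls off as the square of distance, the apparent magnitude m and absolute magnitude M of an object at distance d (in pc) obey the standard distance-modulus relation M = m - 5 log10(d/10). A minimal sketch in Python (the function name is my own, for illustration):

    import math

    def absolute_magnitude(m_apparent, distance_pc):
        # Moving a source from distance_pc to the reference distance of
        # 10 pc changes its magnitude by 5*log10(distance_pc/10);
        # larger magnitudes mean fainter objects.
        return m_apparent - 5 * math.log10(distance_pc / 10.0)

    # Example: Vega, apparent magnitude 0.03 at about 7.68 pc,
    # comes out with an absolute magnitude of roughly 0.6.
    print(absolute_magnitude(0.03, 7.68))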
Since light intensity depends on the luminosity of the light source as well as on the distance between the light source and the observer, the above definition of absolute magnitude is truly a measure of luminosity because the latter variable is fixed. Surely, the definition of absolute magnitude would work perfectly well if we replaced the 10 pc by any other arbitrary but fixed positive distance from the celestial object. However, some choices of this fixed distance are more natural than others. Although contemporary astronomers often investigate the absolute magnitudes of objects such as nebulae and galaxies, stars were the very first objects whose luminosities astronomers wanted to find. Note that astronomers measure the distances of nearby stars by the method of trigonometric parallax. In other words, the parallax angles of nearby stars are what we can directly measure (see Figure 1). That is why astronomers usually express interstellar distances in pc rather than in light years or in metres. In fact, the typical distance between two neighbouring stars is about 1 pc; and the distance from the Sun to the nearest star, Proxima Centauri, is about 1.3 pc. In this regard, either 1 pc or 1.3 pc appears to be a more natural choice in defining absolute magnitude. (In contrast, the use of 1 light year as the fixed distance is less natural.) So why did we adopt 10 pc in the definition of absolute magnitude in the first place?
Figure 1: Method of Trigonometric Parallax
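Since the distance in pc is simply the reciprocal of the parallax angle in arcseconds, the conversion is a one-liner. Note in passing that the defining distance of 10 pc corresponds to a parallax of exactly 0.1 arcseconds, which will matter later in the story. (A minimal sketch; the function name is my own.)

    def distance_in_pc(parallax_arcsec):
        # A star whose parallax angle is p arcseconds lies 1/p parsecs away.
        return 1.0 / parallax_arcsec

    print(distance_in_pc(0.7685))  # Proxima Centauri: about 1.3 pc
    print(distance_in_pc(0.1))     # the defining distance: exactly 10 pc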
I was surprised that nobody had asked me this question in class, nor had anyone in the Space Museum lectures. And at this point, you should realize that this seemingly trivial question is not that trivial after all. I tried in vain to answer it by asking my colleagues and friends, by consulting a few books on astronomy and the history of astronomy, and by searching the web using keywords such as "origin of absolute magnitude".
After searching for the answer for a whole afternoon, I decided to switch my strategy. Rather than relying on secondary sources, I should track down the original one. I began by digging up the origin of the term "absolute magnitude" through the online version of the Oxford English Dictionary. To my delight, it quotes a 1902 paper by J. C. Kapteyn [1]. To my further delight, thanks to the NASA Astrophysics Data System (http://adswww.harvard.edu/index.html), this paper from a long-discontinued journal is freely available to the public on the web!
In that paper, Kapteyn investigated, among other things, the distribution of stars as a function of luminosity, proper motion and parallax. The brighter stars he used in his study were drawn from the Auwers-Bradley catalogue, which contained J. Bradley's observations of 1750-1763, reduced to modern form by incorporating the extremely accurate measurements of A. von Auwers in the 1890s. Furthermore, Kapteyn used an earlier (inaccurate) result that the Sun appears 4×10^10 times brighter than Vega as seen from Earth, which is about half the currently accepted value. Then Kapteyn stated,
"From these data it can be easily derived that the sun, when transferred to a distance corresponding to the parallax = 0″.10, would have the apparent magnitude 5.48. I will adopt 5ᵐ.5, which accidentally agrees with the mean magnitude of the Bradley stars. ... We further define the absolute magnitude (M) of a star ... as the apparent magnitude which that star would have if it was transferred to a distance from the sun corresponding to a parallax of 0″.1."
So it is clear that the adoption of 10 pc as the defining distance for absolute magnitude is rather arbitrary and coincidental, a point already emphasized by Kapteyn himself in his paper. And after its adoption by A. S. Eddington in his 1914 book [2], the definition of absolute magnitude was carved in stone.
We learn two things here. First, the 10 pc used to define absolute magnitude is conventional rather than "natural". Second, this episode vividly demonstrates the power of the web in solving academic problems, provided you know how to use it effectively.
References:
[1] J. C. Kapteyn, Publications of the Kapteyn Astronomical Laboratory Groningen, no. 11, pp. 3-32 (1902).
[2] A. S. Eddington, Stellar Movements and the Structure of the Universe, Macmillan, London (1914).