How do you give the sun a “CT” scan?

In 2016, the Solar High-Resolution Imaging Research Team at the Institute of Optics and Electronics, Chinese Academy of Sciences, successfully developed a multi-band tomographic imaging system for the solar atmosphere, the largest of its kind in the world at the time, and media reports described it as giving the sun a “CT” scan. But why give the sun a “CT”, and how exactly is it done? Let's start from the source and trace the evolution of this technology.

The solar spectrum, from color to Fraunhofer lines

Strictly speaking, solar “tomography” here is not the tomography of medical imaging, but simultaneous multispectral imaging of the sun at different wavelengths. So before we begin, we need a short primer on the solar spectrum.

In the early days of solar physics, scientists began with the study of the sun's color. The most famous example is Newton's prism experiment. When a ray of sunlight passes through a triangular prism, its components of different wavelengths are refracted by different amounts and spread out into a rainbow-like band of colors, as shown in the image below. In other words, white sunlight is composed of rainbow-like multicolored light. This dispersed arrangement of different colors (wavelengths) is a spectrum. Later, as the scientific understanding of light deepened (light is an electromagnetic wave), the word spectrum came to describe the wavelength distribution of all electromagnetic waves.


Color is the human visual system's perception of electromagnetic waves within a limited range of wavelengths (the visible band). From gamma rays to radio waves, everything is electromagnetic radiation, but most of it is invisible to us humans; the small portion our visual system can detect is called visible light.

After Newton, the English chemist and physicist William Hyde Wollaston discovered in 1802 that there were dark (colorless) lines in the dispersed solar spectrum. He took them to be boundaries between the colors and, without studying them further, missed the chance to open up a new discipline. Fifteen years later, Joseph von Fraunhofer invented the spectrometer based on the diffraction grating and independently rediscovered the dark lines in the solar spectrum, cataloguing 574 of them and ruling out Wollaston's guess that they were color boundaries. At the time, however, Fraunhofer's interest was not in the solar spectrum itself, and he did not pursue the theory behind these features. Instead he accurately measured the wavelength of each dark line with his grating spectrometer and used them to calibrate the refractive index of glass (he was the finest glass maker in the world at the time). After later researchers worked out the origin of these dark lines, they were named the Fraunhofer lines in honor of the man whose epitaph reads “He brought us closer to the stars.”


Spectroscopic analysis, the key to modern astronomy

The mystery of these dark lines was not solved until 1859. It was already known at the time that different metals or metal compounds (often called metal salts, such as sodium chloride, a sodium salt) could change the color of a flame: salts of sodium, potassium, lithium, copper and so on give an otherwise colorless flame a characteristic color when burned, a phenomenon called the flame test. Around 1858, the German chemist Robert Wilhelm Bunsen reasoned in reverse: since different substances produce flames of different colors, could flame colors be used to analyze and distinguish the elements? He invented the nearly colorless-flame burner that bears his name to test the flames of various metals and metal salts. But this method had poor color resolution and could not handle solutions of some metal salts. Later his friend, the German physicist Gustav Kirchhoff, suggested using a spectrometer rather than the bare flame color to distinguish the elements.


After accumulating a large amount of experimental data, they confirmed that each element produces a unique set of spectral lines, that is, specific bright or dark lines (depending on the illumination) at specific wavelengths, and they plotted the characteristic spectra of several common substances. Using this method they also discovered two new elements, cesium and rubidium.

In one experiment, Kirchhoff found that when sunlight and the light of a sodium flame entered the spectrometer together, the bright emission line that had been visible turned into a dark line. He then used limelight, thought at the time to be a continuous source, as the illumination, and again found that the bright-line positions in the spectrum became dark. After a series of verifications, they finally concluded that a substance heated on its own shows bright lines (emission lines), while the gas molecules or atoms of the same substance, illuminated by light with a continuous spectrum, show dark lines (absorption lines). They then turned to the solar spectrum and its dark lines, which they attributed to absorption by sodium and other elements in the atmosphere at the sun's surface as solar radiation travels from the inside outward (later studies showed that some of the dark lines are caused by absorption by certain elements in the Earth's atmosphere). Combining this with the work at hand, since spectra can reveal chemical composition, they immediately realized that the composition of the sun's material could also be determined by studying these dark lines! More than a hundred years later, one can still sense the ecstasy and excitement of that conclusion. It was unthinkable at the time, epoch-making for the study of the distant sun and stars, and it opened the door to the field of astronomical spectroscopy. By spectral analysis they identified elements such as calcium, nickel, iron and hydrogen on the sun. Many years of further research showed that the sun's chemical composition is similar to the Earth's, though in different proportions.


Here is an interesting episode. The chemical element helium is also called the “sun element”; its English name, Helium, comes from Helios, the sun god of Greek mythology. That is because, 27 years before helium was found on Earth in 1895, the French astronomer Pierre Jules César Janssen and the British scientist Joseph Norman Lockyer had independently discovered this unknown element by observing the solar spectrum, and named it accordingly.

Briefly: Newton's study of color opened the door to spectroscopy. In the early 19th century, Wollaston and Fraunhofer discovered the absorption lines in the otherwise continuous solar spectrum. Meanwhile, chemists had begun to identify elements by the flame test, the characteristic flame colors of different elements, and the physicist Kirchhoff finally established the connection between an element's emission lines and the absorption lines of the solar spectrum, opening the door to spectrum-based analysis of the material of celestial bodies.

The layered structure of the solar atmosphere and “CT” imaging

After two hundred years of development, people finally understood the solar spectrum and the Fraunhofer lines, and built a spectrum-based astronomical spectroscopy capable of precise observations of the distant universe. Besides identifying the material composition of the sun and other celestial bodies, spectroscopy can also measure their rotation speed (via the Doppler effect), temperature and density, and from these infer energy sources, transport mechanisms, and more. Today this technique is one of our most important means of studying the sun.
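As a quick illustration of the Doppler measurement just mentioned, the sketch below converts an observed shift of a spectral line into a line-of-sight velocity using the classical formula v = c·Δλ/λ₀. The shift value is an invented example, not a number from this article.

```python
# Illustrative sketch: line-of-sight velocity from the Doppler shift
# of a spectral line, v = c * delta_lambda / lambda_rest.
# The observed wavelength below is a hypothetical example value.

C = 299_792_458.0  # speed of light, m/s

def doppler_velocity(lambda_rest_nm: float, lambda_obs_nm: float) -> float:
    """Return line-of-sight velocity in m/s (positive = receding)."""
    return C * (lambda_obs_nm - lambda_rest_nm) / lambda_rest_nm

# Example: the H-alpha line (rest wavelength 656.281 nm) observed
# shifted redward by 0.004 nm, e.g. by rotation at the solar limb.
v = doppler_velocity(656.281, 656.285)
print(f"{v:.0f} m/s")  # about 1827 m/s, i.e. roughly 1.8 km/s
```

Measuring the shift of the same line on the approaching and receding limbs of the sun is exactly how its rotation speed can be read off spectroscopically.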

Spectral analysis tells us the composition of the sun's atmosphere, but wouldn't it be even better to see images of the solar surface directly? Imaging plays an irreplaceable role in the study of solar energy transport and material evolution, and it is the other important tool of solar physics: high-resolution imaging. The most important factor determining resolution is the telescope's aperture, which is why telescopes keep getting bigger.

But a large-aperture telescope alone is not enough. The solar atmosphere is divided into the photosphere, the chromosphere and the corona, the photosphere and chromosphere together being about 2,500 km thick. The surface structures of the sun we usually observe, such as granules and sunspots, come mainly from the photosphere.


As mentioned earlier, as the sun's continuous spectrum radiates from the inside out, absorption by certain elements in the solar atmosphere produces the Fraunhofer absorption lines. So scientists asked: if a filter with an extremely narrow transmission bandwidth could be developed, so that imaging is done only in one such spectral line, could we obtain an image of the layer of the solar atmosphere where that element resides? The answer is yes. But this may seem puzzling: didn't we just say that elements in the solar atmosphere absorb light at the corresponding wavelengths? How can there still be an image? And why is the image in that spectral line an image of the layer where the element sits? To explain, consider the figure below, where we take absorption by a hydrogen layer as the example. Although the sun radiates in all directions, given the distance between the Earth and the sun, the Earth receives sunlight only within a tiny angle, so here we assume that only radiation in one direction reaches the Earth (parallel light).


A great many photons are emitted from the sun's photosphere. If there were no absorbing layer in the solar atmosphere, the light heading toward Earth would be collected by telescopes to give an image of the photosphere. When light from the photosphere reaches the hydrogen layer, sunlight at a wavelength of 656.281 nm is absorbed by hydrogen atoms; but an excited hydrogen atom is not stable, and it re-emits the photon after a short time. The direction of the re-emitted photon, however, is random, so many photons originally heading toward Earth are redirected by this “absorption and re-emission” process and never enter our telescopes. This is why the solar spectrum shows a dark line at the wavelength of the hydrogen line (656.281 nm); note that the energy there is only weakened relative to other bands, not reduced to zero. Since the photons we do receive at that wavelength were last emitted from the hydrogen layer, imaging in this band naturally yields an image of the hydrogen layer. In this way, by imaging in the Hα band (a hydrogen absorption line with central wavelength 656.281 nm), we obtain an image of the sun's chromosphere.
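The “absorption then random re-emission” picture above can be sketched numerically. In this toy Monte Carlo (every parameter is invented purely for illustration), a line-core photon heading toward Earth either passes through the hydrogen layer untouched or is absorbed and re-emitted in a random direction, of which only a small fraction still points at Earth, so the line comes out dark but not black.

```python
import random

# Toy Monte Carlo of the "absorption and re-emission" picture.
# All parameters are invented for illustration only.
random.seed(0)

N_PHOTONS = 100_000
P_ABSORB = 0.9            # chance a line-core photon interacts with the H layer
EARTH_SOLID_ANGLE = 0.01  # fraction of random re-emission directions toward Earth

reaching_earth = 0
for _ in range(N_PHOTONS):
    if random.random() > P_ABSORB:
        reaching_earth += 1   # passed through without interacting
    elif random.random() < EARTH_SOLID_ANGLE:
        reaching_earth += 1   # re-emitted and, by luck, still toward Earth

# Relative intensity at the line core compared with the continuum:
print(f"{reaching_earth / N_PHOTONS:.3f}")  # close to 0.109: dimmed, not zero
```

The surviving fraction is roughly P_pass + P_absorb × Ω_Earth = 0.1 + 0.9 × 0.01 ≈ 0.109, matching the article's note that the line-core energy is only weakened, not eliminated.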

Further research found that particular elements and spectral lines sample different heights of the solar atmosphere, so different absorption lines can be used to study specific solar-physics problems. For example, the hydrogen Hα line mentioned above forms in the middle of the chromosphere, while one of the absorption lines of calcium, the Ca II infrared line at 854.21 nm, forms mainly at the bottom of the chromosphere.
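The line-to-layer correspondence above amounts to a small lookup table. The sketch below records only the lines named in this article, with the formation heights stated loosely; picking a filter wavelength is, conceptually, a lookup in such a table.

```python
# Spectral lines mentioned in the text and the solar-atmosphere layer
# they are commonly used to observe (heights stated loosely).
LINE_TO_LAYER = {
    "H-alpha 656.281 nm": "middle chromosphere",
    "Ca II IR 854.21 nm": "bottom of chromosphere",
    "continuum (no strong line)": "photosphere",
}

def layer_for(line: str) -> str:
    """Return the atmospheric layer imaged when filtering on this line."""
    return LINE_TO_LAYER.get(line, "unknown layer")

print(layer_for("H-alpha 656.281 nm"))  # middle chromosphere
```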

Having come this far, the idea of giving the solar atmosphere a “CT” scan is self-evident: if several absorption lines formed at different heights are imaged at high resolution simultaneously, it is equivalent to slicing the solar atmosphere, yielding images of the material structure and morphology of its different layers.

The idea is there, but realizing it is hard. To precisely locate the height of a particular element, imaging must be restricted to its characteristic spectral line, that is, filtered to an extremely narrow band around that wavelength, to exclude the influence of light from other layers. To obtain an image of a single layer, the bandwidth used for imaging is typically only a few tens of picometers, about one millionth of the width of a human hair. This raises two problems: developing such extremely narrow-band filters, and coping with the shortage of light caused by imaging in so narrow a band. If several bands are to be imaged simultaneously, these are the challenges engineering practice must face. Fortunately, after many years of technical accumulation and research, the solar team at the Institute of Optics and Electronics broke through a number of key technologies and successfully developed a 7-band solar tomographic imaging system. It is currently the multi-band tomographic imaging system with the largest number of bands in the world; the solar heights corresponding to its detection wavelengths cover the photosphere, the bottom of the chromosphere, the middle of the chromosphere and the top of the chromosphere, providing technical support for monitoring solar activity.
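The “one millionth of a hair's width” comparison above can be checked with a line of arithmetic. The hair diameter of 70 micrometers is a typical assumed value, and the bandwidth is taken as 70 pm to match the article's “a few tens of picometers”.

```python
# Sanity check of the bandwidth comparison in the text.
# Hair diameter of 70 micrometers is a typical assumed value.
hair_width_m = 70e-6          # 70 micrometers
filter_bandwidth_m = 70e-12   # "a few tens of picometers", take 70 pm

ratio = hair_width_m / filter_bandwidth_m
print(f"hair width / filter bandwidth = {ratio:.0e}")  # 1e+06
```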

At the end of 2019, solar high-resolution observation at the Institute of Optics and Electronics set another record with the successful development of China's first 2-meter solar telescope. Together with the sun-“CT” instrument with the world's largest number of channels, it is fair to say that China's solar physics research has a very promising future.