
Stellar Interferometry

Welcome

If you have found this page, it is most likely because you want to understand the seemingly “obscure” mechanisms that govern stellar interferometry. This branch of astronomy can feel counterintuitive at first, but it isn’t nearly as obscure as one might think. Using some necessary simplifications, this page aims to teach you the fundamentals of what makes this field interesting, unique, and necessary in today's science.
This page presents the core concepts of stellar interferometry without resorting to mathematical equations, but a brief overview of the units you might encounter is helpful:
  • 1µm is shorthand for 0.001 mm, or about 0.00004 inch.
  • The wavelengths we’ll use and discuss the most span from 0.25 µm to 5.0 µm, covering visible light (≈0.4 µm to 0.7 µm) and reaching into the near-infrared (≈0.7 µm to 5.0 µm).
  • K (kelvin) is the unit used in physics for temperature. Its scale is shifted by 273.15 relative to the Celsius scale, so 0°C corresponds to 273.15 K and 1°C to 274.15 K. 0 K is the lowest possible temperature in the universe.
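If you prefer to see units as code, here is a tiny Python sketch of the shift between the Celsius and kelvin scales (the function name is ours):

```python
def celsius_to_kelvin(t_celsius):
    """The kelvin scale is the Celsius scale shifted by 273.15."""
    return t_celsius + 273.15

print(celsius_to_kelvin(0))        # 273.15 -> the freezing point of water
print(celsius_to_kelvin(-273.15))  # 0.0    -> absolute zero
```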
A quick disclaimer also to our American readers – this site mostly uses metric units, since those are used, amongst other SI units, in the scientific works this site is based on.
Throughout the site, you will be able to interact with the illustrations and animations used to describe the chapters' contents. As you will notice, many of those illustrations are not to scale: large objects, like the telescopes, have to be shrunk, while atoms and photons have to be “zoomed in” and slowed down to be observable. They can be triggered by your mouse’s movements or clicks and through the sliders and buttons on the right. However, they are best used on computer screens with a resolution of at least 1,440 px in width.
Here is a first example for you.
Try it out!

Figure 0: This is an example of such animations you can interact with. Change the slider and push the button on the right to see how they affect the animation. Click and drag on it to interact with it directly.

Light

Photons

A photon is an elementary particle that is massless and travels through space at a constant and maximal speed, the speed of light. It is famous for behaving both like a particle, e.g. by colliding with other particles, and like a wave, e.g. by interfering with other electromagnetic radiation (more about that later on).
But most of all, photons are famous for being the particles our eyes evolved to detect. Some photons, with the right amount of energy, trigger reactions at the back of our eyes, telling our brain how many of them come from which direction, and since each energy corresponds to a colour, our brain can interpret what colour comes from what direction. That is also how normal optical telescopes work. However, most energy levels are quite faint and rare on Earth, so our eyes specialised in only a narrow group of those colours. Most photons are therefore invisible to our eyes.
So, how does one distinguish the energy of photons if they have no mass and all travel at the same speed? In fact, photons are electromagnetic waves, or light; these waves may all travel at the same speed, but they have different wavelengths. The wavelength is the distance between two repetitions of a wave, like the peaks. Visible light has wavelengths of about 0.4 to 0.7 µm. X-rays have wavelengths about 1,000 times smaller, whereas radio waves may have lengths measured in km.
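For readers who like code, the link between wavelength and energy can be sketched in a few lines of Python. It uses the relation that a photon’s energy is Planck’s constant times the speed of light divided by the wavelength; the example wavelengths (green light and a near-infrared photon) are our own illustrative choices:

```python
H = 6.62607015e-34   # Planck constant, J*s
C = 299_792_458.0    # speed of light in vacuum, m/s

def photon_energy_joules(wavelength_m):
    """Energy of a single photon: the shorter the wavelength, the higher the energy."""
    return H * C / wavelength_m

# A green photon (0.55 um) vs. a near-infrared photon (2.2 um):
e_green = photon_energy_joules(0.55e-6)
e_nir = photon_energy_joules(2.2e-6)
print(e_green / e_nir)  # 4.0 -> the green photon carries 4x the energy
```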
The maximal height of the light wave is called its amplitude, A. The wave’s amplitude sets the light’s intensity (how many photons per area per time), while a single photon’s energy depends only on its frequency (or wavelength). Sine waves oscillate, meaning that over time, the amplitude of the wave at a given point goes up in the positive direction and then down in the negative direction, from A to -A. To better visualise the amplitude in the illustrations, the higher its absolute value, the brighter the pixels. But if you want to see whether the wave points up or down, you can turn on the differentiation of both directions, in which case the blue part points in the positive direction and the red part in the negative direction.

Figure 1: The upper part shows a photon, as a particle, moving through space. In the middle is the sine wave with that photon’s wavelength, and finally the representation of the photon travelling as a particle and a wave.

But where do photons come from? In the case of astronomy, the most important sources are stars. In a star’s core, a vast number of nuclear reactions occur, mostly the fusion of hydrogen nuclei into helium. These reactions emit a tremendous amount of energy carried by very energetic gamma-ray photons (at several characteristic and discrete energies). Over the course of their journey to the star’s surface, they interact with a variety of very closely packed atoms and electrons: they bounce, collide, get absorbed and re-emitted. A star’s matter is so closely packed, and the photon’s “bounces” are so chaotic, that it takes a single photon literally thousands of years to travel out of a star.
Once the photons have reached the outer layer of a star, its well-named photosphere, each has lost a different amount of energy, changing its colour, and now leaves at full speed for a long journey through outer space.

Figure 2: A photon emitted during nuclear fusion in a star’s core exiting the star after a random number of interactions inside it. It stays trapped longer in the core, since the core is far denser than the surface. Click on the star to add a photon.

After exiting a star, the photons can either travel directly towards the Earth or interact with other objects on the way. Those objects’ matter in turn absorbs photons of certain energy levels, heating up, and then cools down by re-emitting at, once again, lower energies. All the photons whose energies do not allow them to interact with those atoms pass by; the material is then transparent to that light.
That is partly how planets and gaseous clouds can be measured.

Figure 3: A layer of atoms gets heated up by incoming radiation. That matter is not transparent to the incoming photons, so they get absorbed and re-emitted multiple times, losing energy, until they escape with a new wavelength. (Photons outside the visible range are represented in white.)

Black Body

As discussed above, photons come in a wide variety of energy levels, or wavelengths. A warm source, i.e. a material that has been heated up through radiation, either from its core or from other objects shining on it, emits a different amount of photons at each energy depending on its temperature (releasing the absorbed energy). The less transparent and reflective an object is, the more radiation it absorbs, and the darker and warmer it gets. Hence, an object that absorbed all incoming radiation would be perfectly black. This theoretical construct is therefore called a “black body”.

Figure 4: The black-body spectrum at various temperatures as described by Planck’s law. The number shown at the bottom is the spectrum’s peak wavelength. You can see that the peak of our Sun’s spectrum (surface temperature of about 5,700 K) lies in the light visible to our eyes. See also how warmer bodies radiate more energy than colder ones. The source's colour is the sum of each wavelength's intensities.

However, this black body cannot absorb an infinite amount of energy. In fact, it has to settle at a constant temperature and stay in equilibrium with its surroundings by re-emitting the absorbed energy. The warmer a body is, the more energy it has to rid itself of, and the brighter it gets. A star can very well be described as a black body, since the emission originating from its core is far greater than any absorption or reflection occurring at its surface. Planets can also be described quite well as black bodies, since they absorb and re-emit a larger amount of radiation than they reflect.
The radiation of a black body is emitted equally in all directions, per unit area perpendicular to the direction of emission. This emission intensity was mathematically described by Max Planck, in a law named after him. This law shows that the energies aren’t evenly distributed, but are rather concentrated in a peak. That peak gets narrower and steeper with increasing temperature.
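Even though this site avoids equations, readers comfortable with code can play with Planck’s law directly. The sketch below evaluates the spectral radiance of a black body and uses Wien’s displacement law to find the spectrum’s peak wavelength (the one shown at the bottom of Figure 4); the chosen temperatures are illustrative:

```python
import math

H = 6.62607015e-34    # Planck constant, J*s
C = 299_792_458.0     # speed of light, m/s
KB = 1.380649e-23     # Boltzmann constant, J/K

def planck_spectral_radiance(wavelength_m, temp_k):
    """Planck's law: spectral radiance of a black body, in W / m^2 / sr / m."""
    x = H * C / (wavelength_m * KB * temp_k)
    return (2 * H * C**2 / wavelength_m**5) / math.expm1(x)

def wien_peak_wavelength(temp_k):
    """Wien's displacement law: the wavelength where the spectrum peaks, in metres."""
    return 2.897771955e-3 / temp_k

sun_peak = wien_peak_wavelength(5700)   # our Sun's surface temperature
print(round(sun_peak * 1e6, 2))         # 0.51 (um) -> green visible light
# A warmer body radiates more at every wavelength:
print(planck_spectral_radiance(0.5e-6, 6000) > planck_spectral_radiance(0.5e-6, 5000))  # True
```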

Atmospheric Transmission

Just like any other layer of matter, the atmosphere not only lets light pass through it, but also absorbs, reflects and re-emits vast amounts of it. Due to its composition, some wavelengths are completely blocked, while others pass through nearly undisturbed. For instance, the presence of ozone (a molecule composed of 3 oxygen atoms) in the atmosphere blocks a large amount of the ultraviolet light (the shortest wavelengths) that would otherwise be dangerous for our health.
These effects make astronomical observations from the ground particularly difficult at certain wavelengths. It can be even worse for cold sources, whose light is so faint that it might not be observable from the ground at all. This is one of the reasons why space telescopes might be preferred for observing certain objects or physical phenomena.

Figure 5a: The absorption, reflection, and scattering caused by the Earth’s atmosphere. The higher the line, the less light of that wavelength travels through. The lower the line, the less effect the atmosphere has on that wavelength.

The parts of the electromagnetic spectrum in which the atmosphere is transparent to radiation are called atmospheric windows. The largest of them spans centimetre- to metre-long wavelengths, i.e. radio waves. Other windows are found, for example, in the infrared around 10 µm, opened by a gap in the absorption spectrum of water vapour and carbon dioxide.
Since we have learned that astronomical objects do not only emit visible light, but also light at many other wavelengths, we can understand why astronomers try to measure these energy levels as well. A good way for astronomers to make these measurements from the ground is to use interferometry instead of single telescopes, but more about that later on.

Figure 5b: The part of the original black-body emission (normalised, i.e. divided by its maximum) that reaches the Earth's surface after travelling through the atmosphere. The higher the line, the more light of that wavelength travels through. If the line reaches 100%, the peak at that wavelength is fully transmitted.

Temperature:

Interferometry

Angular Resolution

When we look at light from afar, we quickly notice that our ability to recognise details decreases with increasing distance. This is partly due to individual medical and biological limitations of our body, but there is an obvious physical cause that needs to be addressed: things seem smaller from afar. A leaf on a tree appears larger when we stand right before it than when we are metres away, even though it still has the same physical size. This apparent size is called angular size, since it is the angle that the object occupies in our field of view that changes. At a certain point, our angular resolution isn’t good enough to distinguish objects or details that appear to us at angles that are too small. For an average human, that angular resolution lies around 1/60th of a degree, or 1 arcminute.
Let us take a short detour through the land of trigonometry to address the subdivision of angles. A full circle is divided into 360°. Together, the human eyes cover about 210° horizontally and 135° vertically. One degree corresponds to a 1.75 m high object (an average human) seen from a 100 m distance. A degree is then divided into 60 minutes, 1 minute being written as 0°1’. Again from a 100 m distance, this corresponds to 3 cm; transferred to the size of the Earth, 1 minute of the angle spanned from the centre of the Earth would be 2 km wide on the equator. A minute can in turn be divided into 60 seconds, 1 second being written as 0°0’1”. From a 100 m distance, this corresponds to 500 µm, or half a millimetre (the thickness of 5 sheets of paper). Finally, the arcsecond is subdivided decimally.
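The numbers in this detour come from simple trigonometry: the physical size an angle covers at a given distance is the distance times the tangent of the angle. A small Python sketch (function name ours) reproduces them:

```python
import math

def apparent_size(distance_m, angle_deg):
    """Physical size (in metres) that an angle subtends at a given distance."""
    return distance_m * math.tan(math.radians(angle_deg))

d = 100.0  # all the examples in the text use a 100 m distance
print(round(apparent_size(d, 1.0), 2))            # 1.75  (m)  -> one degree ~ a human
print(round(apparent_size(d, 1 / 60) * 100, 1))   # 2.9   (cm) -> one arcminute
print(round(apparent_size(d, 1 / 3600) * 1e3, 2)) # 0.48  (mm) -> one arcsecond
```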

Since from Earth the size of a star can only be measured through its angular size, the angular resolution is the key parameter of a telescope. Telescopes are expected to measure objects with angular sizes of less than an arcsecond. The angular resolution θ of a telescope is proportional to the observed wavelength λ and inversely proportional to its diameter D, so θ ∝ λ ÷ D, or to be precise θ = 1.22 • λ ÷ D. To observe objects that appear small, it is therefore useful to measure at wavelengths as short as possible with telescopes as large as possible. So to observe longer wavelengths (like for observing cold objects) we should use very large telescopes (see radio telescopes like Arecibo with its 300 m diameter), whereas smaller telescopes already offer good resolution in the visible range. However, the presence of the atmosphere sets a lower limit on the resolutions possible from Earth. Those limits can only be overcome through telescopes equipped with adaptive optics, correcting the atmosphere’s effects in real time, or, as we will learn, through interferometry.
But for reasons we will explain later on, combining the light of two telescopes using interferometry gives us a resolution, this time called R, depending not on the diameter, but on the separation of the telescopes, the baseline B of the pair, so R = λ ÷ B. Suddenly, it is far cheaper to achieve greater resolutions by separating the telescopes by hundreds of metres, kilometres, or, in the case of VLBI (very-long-baseline interferometry), even thousands of kilometres. This technique was famously used to reconstruct the first image of a black hole in 2019. In interferometric imaging, the resolution is measured in milliarcseconds, or in short mas, i.e. thousandths of arcseconds, or 0°0’0.001”. 1 milliarcsecond, or 1 mas, is the angular size of a typical bacterium (0.5 µm) seen from 100 m away.
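The two resolution formulas above are easy to compare in code. The sketch below evaluates both for a wavelength of 2.2 µm (near-infrared) with an 8 m mirror and a 130 m baseline; these particular numbers are illustrative choices, not measurements from the text:

```python
import math

RAD_TO_MAS = math.degrees(1) * 3600 * 1000  # radians -> milliarcseconds

def telescope_resolution_mas(wavelength_m, diameter_m):
    """Diffraction limit of a single dish: theta = 1.22 * lambda / D."""
    return 1.22 * wavelength_m / diameter_m * RAD_TO_MAS

def interferometer_resolution_mas(wavelength_m, baseline_m):
    """Resolution of a telescope pair: R = lambda / B."""
    return wavelength_m / baseline_m * RAD_TO_MAS

# An 8 m telescope vs. a 130 m baseline, both observing at 2.2 um:
print(round(telescope_resolution_mas(2.2e-6, 8.0), 1))       # 69.2 mas
print(round(interferometer_resolution_mas(2.2e-6, 130.0), 1))  # 3.5 mas
```

The pair of modest telescopes beats the single large mirror by a factor of about twenty, which is the whole point of the technique.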

Pixel density:

Size on screen:

You stand at:

Interference

As we talked about how to define light, we teased its wave nature. A synonym for light is electromagnetic radiation. This radiation has, as its name says, 2 components: an electric and a magnetic one. Both are sine waves, oscillating perpendicular to one another. However, for simplicity, we will only focus on a single sine wave throughout the chapters, which corresponds to linearly polarised light.
So, we said that light has a wavelength (the length of the wave until it repeats itself). This wavelength can also be used to describe its frequency: the frequency of a wave corresponds to its speed, here the speed of light, divided by its wavelength, and describes how often per second a full wave passes through a point in space. To measure this frequency, we could pick any point in space and observe how the wave travels through it. But when we observe multiple waves together, we can see that their amplitudes, or heights, might differ at that point and at that time, even if they have the same wavelength. For instance, one wave could be at its peak while the other is still ascending. In this case, we say that they have different phases. The phase is the offset between two identical points in the cycles of two waves. For simplicity, we assume the first wave to have a phase of 0. Then, if the phase φ of the other wave is also 0, both waves are in an identical state. Since they share a common point in space and time, the observed amplitude of the combined waves is their sum, in this case the sum of both their amplitudes. In the case where φ shifts the second wave by half a wavelength, the waves point in opposite directions, summing at every point to an amplitude of 0.
When the combined amplitude of the waves is greater than a single one of them, we say they interfere constructively; otherwise, if it is smaller than any of their amplitudes, the waves interfere destructively. This works analogously to the active noise cancelling in some headphones, producing a sound that destructively interferes with the incoming sound from the environment.
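The two extreme cases described above, waves in step and waves shifted by half a wavelength, can be checked with a few lines of Python (the function is our own illustration):

```python
import math

def combined_amplitude(phase_shift, x):
    """Sum of two unit-amplitude sine waves, the second one shifted by `phase_shift`."""
    return math.sin(x) + math.sin(x + phase_shift)

# In phase (phi = 0): the amplitudes add up -> constructive interference.
print(round(combined_amplitude(0.0, math.pi / 2), 3))      # 2.0
# Shifted by half a wavelength (phi = pi): the waves cancel everywhere.
print(round(combined_amplitude(math.pi, math.pi / 2), 3))  # 0.0
```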

Figure 6: The upper plot shows a travelling sine wave with the given wavelength λ. The second plot is a sine wave of the same wavelength λ but shifted by a phase difference of φ. The third plot shows their sum.

Finally, keep in mind that this sine wave is only a slice of a whole wavefront that gets emitted by the source. A ray of light is at first a circular wavefront, a set of these sine waves travelling in all directions from the point where it is emitted. This wavefront, when seen from far enough away, then appears to become a straight line, or planar wave. It isn’t until the wavefront interacts with something that we can say a dot-like photon travelled from A to B. Until it reaches B, it has no particular direction of preference. But that involves quantum-physical considerations beyond the scope of what we will discuss here.

Phase difference:

Double Slit

Interferometry as a physical principle really made its grand entrance 200 years ago, when Thomas Young used a clever experimental setup to demonstrate and study the wave nature of light.
His double-slit experiment now counts as one of the most famous experiments in physics and goes like this:
Shine a planar (linear) wavefront of coherent light (with a single wavelength, like a laser) onto a screen with 2 pinholes (or slits) in it. You would expect the image observed on the other end to be 2 points or circles opposite the holes. However, the observed image resembles more a series of dots, brighter in the middle and getting dimmer and dimmer towards the sides.
These counterintuitive results can be understood quite easily when thinking about the wave nature of the light.
As the wavefront arrives at the first screen, it diffracts when hitting the holes. A simple approximation of light diffraction is to imagine that the linear wavefront is in fact the sum of infinitely many circular waves that cancel each other out through interference, except for the forward-facing slices. When the wavefront then hits the screen, some of those circular waves lose their neighbours, and therefore their interference partners. If we imagine the hole to be infinitely small, then only one circular wave source would pass through, and the resulting wave behind the screen would be perfectly circular. If the hole is large enough, the waves at its centre lose only neighbours so far away that they are barely affected, so the outgoing wavefront stays more or less linear. This effect of diffraction is the fundamental reason why the angular resolution of a telescope is limited by θ ∝ λ ÷ D: the effect of diffraction becomes too strong when the telescope’s dish is too small compared to the observed wavelength.

Figure 7: An incoming planar wavefront hits a screen with two pinholes in it and a second screen where the squared visibility is measured. The middle shows how the visibility is distributed between the two screens. When hovering over it, you can isolate the incoming waves affecting the mouse’s position. See how blue (positive) and red (negative) amplitudes interact.

This approximation holds very well for the pinholes in Young’s experiment, so we can now imagine two circular waves coming out of the pinholes as if those were two sources. Those sources have the same phase and the same wavelength, since they originate from a perfectly linear wavefront. So, slicing the circles up into single wavefronts, we can again show how they interfere with one another (see the right illustration as you hover over it with your mouse). When one pinhole is further away from a point than the other, a phase shift occurs, and again, constructive or destructive interference might happen. The measured result is the famous interferometric fringe pattern, with its alternation of bright and dark zones. Here, the measured visibility is independent of the direction the amplitude points in (positive or negative); in fact, in interferometry it is most commonly its squared value, a positive number, that is measured.
Already, we can imagine those two slits being telescopes, sampling the incoming wavefront of a faraway star and then combining their light to obtain an interference pattern.
N.B.: Over a century after Young presented his experiment’s results, physicists discovered that the same behaviour can be observed in electrons, and later in atoms and molecules as well, demonstrating the wave nature of matter itself.
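For the curious, the fringe pattern itself can be sketched in a few lines. The path difference between the two slits to a point seen at a given angle is the slit separation times the sine of that angle, and the measured (squared) intensity follows a cosine-squared law; the diffraction envelope of each individual slit is ignored in this simplification, and the slit spacing and wavelength are illustrative:

```python
import math

def fringe_intensity(slit_separation_m, wavelength_m, angle_rad):
    """Normalised two-slit interference pattern (single-slit envelope ignored).

    The path difference between the slits is d * sin(angle); the squared
    visibility is cos^2 of half the resulting phase difference.
    """
    delta_phi = 2 * math.pi * slit_separation_m * math.sin(angle_rad) / wavelength_m
    return math.cos(delta_phi / 2) ** 2

d, lam = 10e-6, 0.5e-6   # 10 um slit spacing, green light
print(fringe_intensity(d, lam, 0.0))  # 1.0 -> central bright fringe
first_dark = math.asin(lam / (2 * d))
print(round(fringe_intensity(d, lam, first_dark), 3))  # 0.0 -> first dark fringe
```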

Space between the slits:

Phase Shift

When we look at the stars in the sky, we enjoy their faint twinkling. This variation in intensity that our eyes perceive, however, isn’t due to variations in the stars’ emission, since, as discussed, the black-body radiation happens continuously, but rather, once again, to the atmosphere.
As light travels through the gases and vapour pockets hanging in the sky, it might get refracted by local changes in density. Refraction is the change, or bending, of light’s direction when it passes from one medium to another. However, this change in direction is a complex consequence of a simple cause: light slows down as it encounters different densities of different substances. The famous speed of light, 299,792,458 m/s, is in fact its speed in a vacuum, a total absence of matter, and its maximal speed. In air, it slows down only a little, to 299,702,547 m/s, but in water, it slows down to about 225,000,000 m/s. When the light leaves one medium for another, it returns to the corresponding speed.
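The speeds quoted above all follow from one number per medium, its refractive index n: light travels at the vacuum speed divided by n. A quick sketch (with the commonly quoted indices of about 1.0003 for air and 1.333 for water) reproduces them:

```python
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

def speed_in_medium(refractive_index):
    """Light slows down to c / n inside a medium with refractive index n."""
    return C_VACUUM / refractive_index

print(round(speed_in_medium(1.0003)))        # 299702547 -> air
print(round(speed_in_medium(1.333) / 1e6))   # 225 (million m/s) -> water
```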

Figure 8: The double-slit experiment with turbulence. Drag the turbulence to see how it shifts the fringe pattern’s phase. Large turbulences overlapping above both slits will have less effect since they influence both waves.

A consequence of this is that the part of a wavefront that encounters, let’s say, a small cloud of water vapour gets slowed down for a very short moment compared to the rest of the wavefront. This short delay causes a phase shift between the regions of the wave. This atmospheric effect is called seeing and has multiple possible origins and consequences. It often makes the angular resolution of telescopes far worse than the relation θ ∝ λ ÷ D would suggest. For instance, a single telescope with a 10 m diameter should expect an angular resolution of around 10 milliarcseconds in visible light, but the seeing degrades the incoming light so much that, without further techniques to correct it, the resolution cannot be pushed below about 1 arcsecond.
So, what would happen to the fringe pattern in Young’s experiment if such seeing occurred? Think back to how the fringe pattern is obtained as the interference of sine waves: the phase difference of both waves at a certain point is proportional to the difference of the distances between that point and each slit.
If the wave arrives later at one pinhole than at the other, that initial phase difference acts like an additional distance between that pinhole and each point behind it. So the observed fringe pattern moves in the direction of the “latest” hole, proportionally to its phase difference.

Space between the slits:

Size of the turbulence:

Source Coherence

Sources observed by astronomers aren’t, however, perfect theoretical constructs like point sources, which have no size, no speed, no inhomogeneities and emit perfectly regular light. In fact, the purpose of stellar imaging, of which stellar interferometry is a branch, is to resolve the structure of the light’s source in space.

Figure 9: The spatially incoherent source consists of 3 stars with different intensities. The intensity of star A is set to 1. Move the stars to see how the pattern changes. Use the slider to move away from the source and see how the waves flatten out and gain in predictability, or coherence.

If we look at how the light of an irregular structure, like a binary star system, interferes, it seems chaotic from up close. At best, we can recognise some kind of fringe pattern discussed before; at worst, it looks like a noisy soup with no structure. We call this kind of structure incoherent, as opposed to coherent like the planar wavefronts.
However, as these waves travel a huge distance through space, the apparent chaos smooths out. When observed from Earth, the seemingly random ripples become well-behaved, nearly coherent again. This behaviour is described by the van Cittert–Zernike theorem, which states that the degree of coherence of the observed light correlates with the apparent shape of the source. In the case of the interferometric fringes, coherent light gives us sharp edges like before, with a high contrast between the fringes. If, however, the source is particularly incoherent, the fringes get washed out.
The pattern of how coherence decreases with telescope separation contains the information needed to reconstruct the brightness distribution of the source. Thanks to the complex mathematics established in the van Cittert–Zernike theorem, interference patterns don’t just tell us how the source shines, but tell us what shape it appears to have.
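As a concrete taste of the van Cittert–Zernike theorem, the sketch below computes the fringe contrast (visibility) of the simplest structured source, a binary of two equally bright points: its visibility is the modulus of the cosine of π·B·ρ/λ, where ρ is the angular separation. The wavelength and separation are illustrative values of our own:

```python
import math

MAS_TO_RAD = math.radians(1 / 3600 / 1000)  # milliarcseconds -> radians

def binary_visibility(baseline_m, wavelength_m, separation_mas):
    """Fringe contrast of an equal-brightness binary.

    The complex visibility of two equal point sources separated by the
    angle rho is (1 + exp(-2*pi*i*B*rho/lambda)) / 2, whose modulus is
    |cos(pi * B * rho / lambda)|.
    """
    rho = separation_mas * MAS_TO_RAD
    return abs(math.cos(math.pi * baseline_m * rho / wavelength_m))

lam, rho_mas = 2.2e-6, 2.0
b_null = lam / (2 * rho_mas * MAS_TO_RAD)  # baseline where the fringes vanish
print(round(binary_visibility(1.0, lam, rho_mas), 3))     # 1.0 -> too short to resolve the pair
print(round(binary_visibility(b_null, lam, rho_mas), 3))  # 0.0 -> the pair is fully resolved
```

Measuring how the contrast drops as the baseline grows is exactly how the separation (and shape) of the source is recovered.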

Distance from the sources:

Intensity B:

Intensity C:

Delay-line

Thanks to the van Cittert–Zernike theorem, we have established that it is of crucial importance to measure the same wavefront with the different telescopes to obtain the information about the source’s shape. To ensure this, we have to find a way to make the distances from the telescopes to the source identical. One might think that small differences in the telescopes’ positions can be neglected after the tremendous distance the light has travelled from outer space to the Earth, but remember the comparatively short wavelengths we observe. Those are the distances that count, so the paths between the telescopes and the source must be equal to a very high precision.

Figure 10: Two telescopes observing the same source at an angle. A delay line of length D / 2 compensates for the path difference. Drag your mouse around to change the angle. When moving the cursor for the additional delay, you can see how a slight change in the delay line’s length can change the measured visibility (lowest square).

Due to the Earth’s rotation and the inclination at which the telescopes observe certain sources, the path difference of the light, D, will differ for each observation and will change over time during the observation. Hence, a solution must be found to dynamically correct that path difference. Such a solution is to artificially lengthen the path of the light captured by the nearer telescope by making it travel down a delay line of length D / 2. This delay line is a system of mirrors mounted on a carriage. The captured ray is deflected by a mirror towards a moving carriage at a distance of D / 2. The carriage carries a mirror system sending the ray back D / 2 until it meets a new mirror directing it into the laboratory, where it will be combined with the other ray. Both will then have travelled the same distance.
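In the simplified flat-ground geometry of Figure 10, the path difference D is just the baseline times the sine of the angle between the source direction and the zenith, and the carriage sits at D / 2 because the beam travels that stretch twice. A small sketch with illustrative numbers:

```python
import math

def path_difference(baseline_m, zenith_angle_deg):
    """Extra distance D the wavefront travels to reach the farther telescope."""
    return baseline_m * math.sin(math.radians(zenith_angle_deg))

def carriage_position(baseline_m, zenith_angle_deg):
    """The delay-line carriage sits at D / 2: the beam travels it out and back."""
    return path_difference(baseline_m, zenith_angle_deg) / 2

# A 100 m baseline observing a source 30 degrees away from the zenith:
print(round(path_difference(100.0, 30.0), 3))    # 50.0 (m) of delay to compensate
print(round(carriage_position(100.0, 30.0), 3))  # 25.0 (m) of carriage travel
```

As the Earth rotates and the angle changes, the carriage has to glide continuously to keep both paths equal to within a fraction of a wavelength.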

Additional delay error:

Self Calibration

However, atmospheric turbulence causes great phase shifts over the measured range. This already causes trouble in single-telescope astronomical observations with larger mirrors. New techniques have evolved that allow the telescopes’ mirrors to be deformed from behind, compensating for the seeing over their surface. However, when combining the beams from multiple telescopes, the phases of all telescopes need to be corrected further to be adequately measured. We have seen that turbulence causes the fringe pattern of the combined rays from two telescopes to shift proportionally to the phase shift caused by the turbulence. Now let us imagine three telescopes 1, 2 and 3, where turbulence shifts the incoming light at 2. When we combine the rays of the pair 1–2, the fringe pattern will be shifted towards 2. In the case of the 2–3 pair, it will also be shifted towards 2, but this time in the opposite direction to 1–2. If we then add the shifts of 1–2, 2–3 and 3–1, we should obviously obtain a sum of 0, hence cancelling the effect of the turbulence on the measured phase. In general, the sum of the three measured phases isn’t always 0, but a value called the closure phase. This quantity, once all turbulence and instrumental phase drifts are cleaned out, gives further information about the source’s asymmetry. For symmetric sources, like in our example, it is near 0; for asymmetric sources it can range from -π to +π.

Figure 11: Like in Figure 8, the planar wavefront coming from a point-like source is disturbed by turbulence. The first line of phases shows the phase shift of each telescope caused by the turbulence. The second line shows the shift in the fringes’ phase between the telescope pairs 1–2, 2–3 and 3–1. Finally, the last value shows the sum of the 3 phases of the patterns adding to 0 (the closure phase of a point-like source), hence cancelling out the turbulence’s effect.

Of course, in real cases, the turbulence causing the seeing is far more complex, affecting all telescopes; still, summing up the phases of all pairs cancels even those complex disturbances. Hence, to measure the phase of the incoming wave efficiently, we need at least 3 telescopes, and the number of data points increases, with decreasing error, the more pairs are available, i.e. the more telescopes are used. Modern optical interferometry commonly uses at least 3 to 4 telescopes (often far more) to measure the closure phases and improve the subsequent image reconstruction.
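The cancellation around the triangle can be verified directly in code. Each telescope picks up its own unknown atmospheric phase error, each pair measures the difference of the errors of its two telescopes, and summing around the loop makes every error appear once with a plus and once with a minus sign (the error values below are arbitrary illustrations):

```python
def closure_phase(phi_12, phi_23, phi_31):
    """Sum of the measured fringe phases around a telescope triangle."""
    return phi_12 + phi_23 + phi_31

# True (turbulence-free) fringe phases of a point-like source are all 0.
true_12 = true_23 = true_31 = 0.0
# Turbulence adds an unknown phase error above each telescope:
e1, e2, e3 = 0.5, -1.25, 0.75
measured_12 = true_12 + (e2 - e1)
measured_23 = true_23 + (e3 - e2)
measured_31 = true_31 + (e1 - e3)
# Around the triangle, each error appears once with + and once with -:
print(closure_phase(measured_12, measured_23, measured_31))  # 0.0
```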

Size of the turbulence:

The u–v Plane

But what data is the telescope pair actually measuring? We saw, in our example of the double-slit experiment, that when looking at the experiment from above, we can measure the fringes’ intensities along a single line and plot them on a 2D graph. In that case, we looked at a simple slice of the actual fringe pattern. In reality, however, the fringes are spread over a 2D screen. If the pinholes’ coordinates on the screen are given in the (x, y) plane, the equivalent plane on the observing screen is measured in (u, v).
While the field of view (x, y) describes the image as we would normally see it in the sky, the (u, v) plane describes the so-called Fourier transform of that image. To recreate the image of the source, scientists have to figure out the back transformation based on the measured visibilities. When a pair of telescopes measures a visibility, this corresponds to 2 points in the u–v plane, (u, v) and (-u, -v), separated by a distance proportional to the baseline and with an inclination corresponding to the angle of the baseline relative to the observed plane. Each measurement we obtain with an interferometer is not a direct picture of the source, but rather a single Fourier component of it.
An analogy for a single measurement can be found in music: a single measurement of an interferometer corresponds to listening to a single note from a song. You hear what frequency the note has and how loud it is, but you do not recognise the song until you have heard enough notes.
Hence, it is the goal of astronomers to make measurements that best cover the u–v plane.
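The mapping from a telescope pair to its two symmetric u–v points can be sketched in a few lines of Python. The baseline components and wavelength below are illustrative example values, not data from a real array:

```python
# Convert a baseline's east and north components (metres) and the
# observing wavelength (metres) into the pair of u-v points it samples.
def uv_points(b_east_m, b_north_m, wavelength_m):
    u = b_east_m / wavelength_m   # spatial frequency, east-west
    v = b_north_m / wavelength_m  # spatial frequency, north-south
    # One visibility measurement fills two point-symmetric spots.
    return (u, v), (-u, -v)

# Example: 100 m east, 50 m north, observed at 2.2 micrometres (near-infrared).
p1, p2 = uv_points(100.0, 50.0, 2.2e-6)
print(p1, p2)
```

Note how a longer baseline or a shorter wavelength pushes the sampled points further out in the u–v plane, i.e. towards finer angular detail.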

Figure 12a: The field of view (FOV) showing a set of different objects, differing in size and intensity. Drag the sources to change their positions. The size of the FOV depends on the angular resolution of the telescope or telescope array. So with a fixed D or B, it shrinks or grows with the wavelength.

Figure 12b: The u–v plane corresponding to the FOV above, showing the 2D interference pattern of the observed objects.

Number of Objects:

Size object 1:

Size object 2:

Intensity 2:

Size object 3:

Intensity 3:

Aperture Synthesis

Figure 13: Depending on the incoming ray’s angle, the projected baseline (yellow) will change its length. On the right is the fringe pattern of a binary system of point-like sources (similar to the double-slit experiment) spread out on the u–v plane. There you can see which points of the u–v plane the telescope pair is measuring. Keep in mind that the pattern is unknown to the researchers when observing a source, and their goal is to best match a model to the measured values to reconstruct the source’s image.

We said that pairs of telescopes combining their light have an angular resolution R = λ ÷ B, inversely proportional to their separation, or baseline, B. In fact, the angular resolution of a whole array is as good as that of its longest-baseline pair. However, where a single telescope directly measures the image of a source it can resolve, a telescope array only gives us the visibilities, and at best the phases, of a set of points in the fringe pattern.
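The relation R = λ ÷ B can be evaluated directly. A minimal Python sketch, using an 8.2 m mirror and a roughly 130 m baseline as assumed example values (comparable in spirit to a single UT versus a long VLT baseline), both at a 2.2 µm observing wavelength:

```python
import math

RAD_TO_MAS = 180 / math.pi * 3600 * 1000  # radians -> milliarcseconds

def angular_resolution_mas(wavelength_m, baseline_m):
    """R = lambda / B, converted to milliarcseconds."""
    return wavelength_m / baseline_m * RAD_TO_MAS

# Single 8.2 m mirror vs. a ~130 m baseline, both at 2.2 micrometres.
print(round(angular_resolution_mas(2.2e-6, 8.2), 1))    # 55.3 mas
print(round(angular_resolution_mas(2.2e-6, 130.0), 1))  # 3.5 mas
```

The roughly fifteen-fold gain in resolution comes entirely from the longer effective aperture, which is the whole point of combining the telescopes.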
To reconstruct the true image of a source exactly from its interferometric data, the whole pattern would have to be measured, meaning an infinite number of telescopes each taking a different sample of the pattern. Even though the number of baselines (BL) grows quickly with each new telescope (2T: 1 BL, 3T: 3 BL, 4T: 6 BL, 5T: 10 BL, 6T: 15 BL, …), achieving sufficient data coverage through the array’s size alone is not feasible. Image reconstruction has to be approximated from partial data collections, but a quick and easy way to increase the number of measurements exists.
Simply prolong the duration of the observation: the rotation of the Earth changes the apparent separation of the telescopes as seen from the observed source, virtually moving the telescopes around. The projected baseline can be understood as the shadow of the baseline in the direction of the incoming light. If the light arrives perpendicular to the baseline, the projected baseline is at its maximum and as long as the real baseline; however, when the light is parallel to the baseline, the length of the projected baseline can shrink to 0. Since the longer baselines best provide information about small-scale structure and the shorter baselines about large-scale structure, a wide range of baselines is needed for efficient image reconstruction. It is important to cover as much of the u–v plane as possible, but even in the case of repeated measurements, measuring the same value twice improves the measurement’s error and uncertainty.
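The shadow picture of the projected baseline can be captured in one line of trigonometry. In this hedged Python sketch, theta is the angle between the incoming light and the baseline direction, and the 100 m baseline is an arbitrary example value:

```python
import math

# Projected baseline: the "shadow" of the real baseline as seen from the
# source. theta_deg is the angle between the incoming light and the
# baseline direction; the baseline length is purely illustrative.
def projected_baseline(b_m, theta_deg):
    return b_m * math.sin(math.radians(theta_deg))

B = 100.0  # real baseline in metres
for theta in (90, 60, 30, 0):  # light perpendicular ... parallel to baseline
    print(theta, round(projected_baseline(B, theta), 1))
# 90 -> 100.0 (full baseline), 30 -> 50.0, 0 -> 0.0
```

As the Earth rotates, theta drifts continuously, so a single physical pair of telescopes sweeps out a whole track of projected baselines, and therefore a track of points, in the u–v plane.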

Angle on the ground:

Measurement

The VLT

Figure 14: The VLT as seen from above (1 m ≈ 1.5 px). Place telescopes as you like. You have 4 UTs or 4 ATs at your disposal. However, AT and UT units cannot be used together. Also, each line may hold only a single AT. The ellipse that appears between the telescopes is the equivalent dish of a single telescope with the same angular resolution as the chosen array, directed in the same direction. The line that crosses the image shows the aim direction of the telescopes.

Figure 15: A visualisation of the night sky as it could be seen from the VLT. Choose the position in the sky of the source (red cross) you want to observe. The source has the configuration you chose in Figure 12a. Then start the simulation to see the stars turn in the sky. The green cross marks the point in the sky towards which the Earth’s axis points (the celestial south pole), so this is the point around which all stars appear to rotate.

The Very Large Telescope (VLT) is an astronomical facility operated since 1998 by the European Southern Observatory, located on Cerro Paranal in the Atacama Desert of northern Chile. It consists of four individual Unit Telescopes (UTs), each equipped with a mirror that measures 8.2 metres (27 ft) in diameter. These optical telescopes are generally used separately but can be combined to achieve a very high angular resolution. The VLT array is also complemented by four movable Auxiliary Telescopes (ATs) which are 1.8 metre (5.9 ft) in diameter. The VLT is capable of observing both visible and infrared wavelengths. Each individual telescope can detect objects that are roughly 4,000,000,000 times fainter than what can be seen with the naked eye. When all the telescopes are combined, the facility can achieve an angular resolution of approximately 2 mas, way smaller than the 50 mas achieved by a single UT. The VLT is one of the most productive facilities for astronomy, second only to the Hubble Space Telescope in terms of the number of scientific papers produced from facilities operating at visible wavelengths. Some of the pioneering observations made using the VLT include the first direct image of an exoplanet, the tracking of stars orbiting around the supermassive black hole at the centre of the Milky Way, and observations of the afterglow of the furthest known gamma-ray burst.
The different beam combiners allow up to four telescopes to be used at the same time.

Measured Data

To compare the obtained observational results, we can now plot the measured phases and visibilities as a function of the telescopes’ separation projected onto the u–v plane. The x-axis is therefore not the baseline length B itself, but B ÷ λ. By comparing these values with those predicted by various models, scientists can approximate the source’s actual appearance.

Figure 16: The squared visibilities of the source set up in Figure 12a as measured by the baselines selected in Figure 14 over the time of the simulation (while the source is visible in the sky, Figure 15), plotted against the spatial frequency.

Figure 17: The closure phase of the source set up in Figure 12a as measured by the baselines selected in Figure 14 over the time of the simulation (while the source is visible in the sky, Figure 15), plotted against the spatial frequency. (Shown only if at least three telescopes have been placed.)

Conclusion

Results

As in all sciences, most results achieved using stellar interferometry are mainly of interest to the people active in that line of work. But from time to time, an unexpected discovery changes how we all perceive our universe. Among those, the VLTI was used by the team of researchers awarded the 2020 Nobel Prize in Physics to observe the movement of the stars orbiting the black hole at the centre of the Milky Way.
It also gave us images of star systems such as the binary Eta Carinae, whose two massive stars expel incredibly strong and fast winds at each other, and which has puzzled astronomers since 1827, when its Great Eruption made it shine surprisingly brightly in the sky for 18 years.
It was thanks to the VLTI that, in 2019, a group of researchers could for the first time study an exoplanet, HR 8799e, using optical interferometry, as opposed to the far more commonly used radio interferometry, giving new insight into its structure and composition.
Amongst many other observations, in 2017 an image reconstruction of the surface of Antares gave us the first image of the surface of a star other than our own Sun.
Stellar interferometry lets us see stars as more than mere points of light in the night sky: we can observe their structure, their orbiting partners or planets, and even how a black hole bends space-time at the heart of our Galaxy.
With the emergence of AI, new instruments, and new generations of researchers, it will surely bring us even more astonishing insights into the vast and complex universe we live in.

Go Further

If you are now curious to learn more about stellar interferometry, YouTube offers many resources on the field.
Very long baseline interferometry was a central subject in D. Muller’s video “How did they actually take this picture?”, on his channel Veritasium, about the famous image of the black hole in the centre of our Galaxy.
Focusing on exoplanets, the documentary “Chasseurs de mondes” by P. Baud on his channel Axolotl gives an insight into different interferometric observatories, including the VLTI, and how they are useful to detect planets hundreds of lightyears from us. The German television programme TerraX has an episode titled “So funktionieren die Mega-Teleskope ELT und VLT!”, also giving insight into the upcoming single telescope ELT. In the same year, an episode about the radio telescope array ALMA was also uploaded under the title “Wie funktioniert das ALMA-Teleskop?”.
For further reading on the subject, I can recommend for physics students the books “Principles of Stellar Interferometry” by A. Glindemann and “An Introduction to Optical Stellar Interferometry” by A. Labeyrie, S. G. Lipson, and P. Nisenson, or the homepage of ESO.
Finally, interferometers can also be found in fiction, such as Robert Zemeckis’ movie “Contact”, in which Jodie Foster plays a researcher at the VLA (Very Large Array), amongst others.