Engineering challenges involved in the most accurate sky survey yet
For the longest time, we have looked at the night sky in wonder. In the last few decades, huge strides have been made in understanding what's going on 'out there', but we know only a fraction of what we could.
Slowly, starting from hand-drawn diagrams based on naked-eye observations, we have created ever more complex maps. With the development of the telescope, Galileo found and drew the moons of Jupiter. More detail was added when the huge telescopes of the 20th Century started to generate photographic plates, each containing untold numbers of stars, galaxies and the like.
This knowledge has been largely confined to use by the astronomical community. But recently, there have been moves to make this data available to anyone who wants to use it.
The Sloan Digital Sky Survey (SDSS) is one such project. For eight years, the SDSS used a 120Mpixel camera to image 1.5 square degrees of the sky at a time, collecting deep multicolour images of more than a quarter of the sky. The 3D maps generated contain more than 930,000 galaxies and more than 120,000 quasars. This data has since been released to scientists and the general public and the SDSS will continue to collect and release data until 2014.
Pushing the boundaries
But the SDSS' achievements are set to be dwarfed by a project currently in development. From a mountain top in Chile, the 8.4m Large Synoptic Survey Telescope (LSST) will survey the entire visible sky deeply in multiple colours using a 3.2Gpixel digital camera.
The project is posing some engineering challenges. Victor Krabbendam, LSST telescope and site project manager, said there are three top level challenges. "Firstly, there's a massive data collection and analysis process, then there's building this huge camera. Finally, there's the distributed communications and control of a large telescope on a remote mountain top."
There's a range of reasons why the LSST project is being developed, beyond mapping the visible sky. "We're focused on being able to get a single database of the sky," Krabbendam noted. "This will be of use to planetary scientists, high energy physicists and astronomers. We're also working on a mechanism to get information to the public. It will be community based science," he affirmed. "Everyone will be able to access data."
The LSST project has its origins in the late 20th Century, driven by astronomers and high energy physicists. "Astronomers wanted to extend the Sloan Survey," said Krabbendam, "while the physicists realised they could focus their experiments on the cosmos to investigate dark matter and energy by looking for the smallest effects."
People like LSST director Tony Tyson, from the University of California, Davis, and Roger Angel, from the University of Arizona's Department of Astronomy, were talking about wide field variations to large telescopes. "All had similar desires," Krabbendam continued. "When they got together and pushed for one device, the LSST started to gel."
Recently, more than 250 leading scientists got together to work out how they would use the LSST and the science they could do on the data. Krabbendam noted: "They can see some serious science can be done with the database."
One of the prime challenges for physicists is the search for dark matter and dark energy. "The universe is not only expanding, but also accelerating," Krabbendam explained, "and we can only 'see' about 5% of the matter it would take to explain it."
Although the LSST has been a decade in the planning, that's not unusual: Krabbendam said most large telescope projects have a similar gestation period. "They're expensive and complicated. Just getting the ideas together and finding funding takes that long."
A boost to the project came in 2007, with an infusion of private funding from the likes of Charles Simonyi and Bill Gates. "That allowed us to start work on the primary mirror," he said, "and to expand sensor R&D."
The LSST will cost about $500 million to build, with construction projected to start in 2014; the integration process will take about six years to complete.
Capturing data will be a repetitive process: point, shoot and move, spending 34s on each field. "Each 15s exposure is followed by a 2s data read out," Krabbendam said. "Then another 15s exposure is taken of the same part of the sky. The camera is then pointed to the next position in 5s, so it's roughly a 40s cadence. And we'll be doing this every night for 10 years."
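As a rough sanity check, those quoted times can simply be added up. A minimal Python sketch, in which the ten hour observing night is an illustrative assumption rather than a project figure:

EXPOSURE_S = 15   # single exposure length, as quoted
READOUT_S  = 2    # camera read out after each exposure
SLEW_S     = 5    # repointing to the next field

time_on_field = 2 * (EXPOSURE_S + READOUT_S)   # two back to back exposures: 34s
visit_cadence = time_on_field + SLEW_S         # add the slew: 39s, 'roughly 40s'

visits_per_night = int(10 * 3600 / visit_cadence)   # ~900 fields in a ten hour night
print(time_on_field, visit_cadence, visits_per_night)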
Why 15s? "We want to average out atmospheric turbulence, but limit other effects. With too short an exposure, you get a distorted image from the atmosphere; too long, and things could get distorted by the instrument or the motion of observed objects."
It's likely that a given area of the sky will be revisited up to 1000 times in ten years.
Nor will the next exposure simply be the adjoining piece of sky. "We have a complicated scheduling algorithm that allows us to look for the next most important field. On any given night, we will go back to the same location, to look for changes and then return again in a couple of days," he continued.
"Atmospheres change and you want to get good shots at the right intervals. We have to react to the conditions and be smarter than being sequential across the sky."
Another reason for multiple imaging is to catch rare events – 'things that go bump in the night', according to Krabbendam. "Astronomers have always known that things happen and then disappear: stars explode and things move. As we process the data, we can see if something is happening and send out alerts quickly."
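The alert pipeline itself is not described here, but the underlying idea, comparing each new exposure with a template of the same field and flagging significant changes, can be sketched simply; the thresholds and the use of plain NumPy arrays are illustrative assumptions.

import numpy as np

def find_transients(new_image, template, noise_sigma, threshold=5.0):
    # Flag pixels that have brightened or faded by more than 'threshold' sigma.
    diff = np.abs(new_image - template) > threshold * noise_sigma
    return [(int(r), int(c)) for r, c in zip(*np.nonzero(diff))]

rng = np.random.default_rng(0)
template = rng.normal(100.0, 1.0, size=(64, 64))     # static sky plus noise
new = template + rng.normal(0.0, 1.0, size=(64, 64))
new[40, 12] += 50.0                                  # a 'thing that went bump'

print(find_transients(new, template, noise_sigma=2.0 ** 0.5))   # typically [(40, 12)]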
The LSST will also take images using six filters tuned to specific wavelengths, allowing the colours within objects to be seen. Changing a filter takes two minutes, so the scheduler has to decide when it should be done. "We could take another six exposures in that time," Krabbendam pointed out.
Images will be captured using an array of 189 sensors on the camera's 64cm focal plane. The sensors are arranged on rafts of 3 x 3 and each sensor will output its data across 16 channels. "It's the only way we can read out 3.2Gpixel in 2s," said Krabbendam.
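A quick read out budget, using only the figures quoted above, shows why that degree of segmentation is needed:

SENSORS, CHANNELS = 189, 16
PIXELS, READOUT_S = 3.2e9, 2.0

total_channels   = SENSORS * CHANNELS                     # 3024 parallel outputs
rate_per_channel = PIXELS / total_channels / READOUT_S    # ~530 kpixel/s each
print(total_channels, round(rate_per_channel))
# Reading each sensor through a single output would instead demand around
# 8.5 Mpixel/s, far beyond the rates at which low noise scientific CCD
# outputs are normally run.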
The LSST will capture huge amounts of data. "Our data management system will be handling 15Tbyte per night," Krabbendam said. "That's roughly 3000 DVDs a night."
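That nightly volume is broadly consistent with the camera and cadence figures; the 2 bytes per pixel and ~900 visits per night used in the sketch below are illustrative assumptions, not project numbers.

PIXELS_PER_EXPOSURE = 3.2e9
BYTES_PER_PIXEL     = 2      # assumed 16bit raw samples
EXPOSURES_PER_VISIT = 2
VISITS_PER_NIGHT    = 900    # from the roughly 40s cadence over a ten hour night

raw_tbyte = PIXELS_PER_EXPOSURE * BYTES_PER_PIXEL * EXPOSURES_PER_VISIT * VISITS_PER_NIGHT / 1e12
print(round(raw_tbyte), round(raw_tbyte * 1e12 / 4.7e9))   # ~12 Tbyte, ~2500 DVDs
# Calibration frames and processed data products push the total towards the
# quoted 15 Tbyte, or roughly 3000 DVDs, per night.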
There's also local machine control and processing. "The mirror system, for example, needs to do full DSP with feed-forward logic while consuming no more than 200mW. The electronics needs to be close by and that is a contamination issue."
Getting sensors of the quality needed for the LSST is another challenge. Krabbendam said: "Over the last four or five years, we've engaged with a few vendors to prototype sensors. We have very specific requirements, including quantum efficiency across a large optical spectrum and high segmentation. While each of the requirements pursued for the LSST has been met before, they haven't been combined in one package. Even if companies can make these science grade sensors, can they produce them fast enough?"
Data processing
Once the data has been captured, it will be transmitted from the mountain top to a base facility some 90km away using a ten core 10Gbit/s fibre cable. "The data will be processed and ordered, then sent to an archive facility in the US," Krabbendam said. There is enough storage capacity on the mountain for three days of data, should there be transmission problems. "We wouldn't be able to do the processing, but we wouldn't lose data or time on the sky," he added.
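Some simple arithmetic, treating the 15Tbyte figure and the link capacity as given, shows the margins involved:

NIGHTLY_TBYTE = 15
LINK_BIT_S    = 10 * 10e9     # ten fibre cores at 10Gbit/s each
BUFFER_NIGHTS = 3

transfer_minutes = NIGHTLY_TBYTE * 1e12 * 8 / LINK_BIT_S / 60
buffer_tbyte     = NIGHTLY_TBYTE * BUFFER_NIGHTS
print(round(transfer_minutes), buffer_tbyte)   # ~20 minutes at full rate; 45 Tbyte buffer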
Then there's the small matter of moving a 340tonne telescope in sync with what the camera is doing. "Along with the observatory dome, which weighs 400tonnes," said Krabbendam. "It's a demanding communications and control problem to do that safely." The telescope will move at 7°/s and accelerate and decelerate rapidly while being positioned with micron accuracy.
There are several thousand control points. "All have to be communicated with and processed to keep the machine under control: this involves robust communication at something like 9Mbit/s. And we save the engineering data too, because we've found over the years that scientists want to make sure they're getting the most out of the data and they find interesting correlations with the instrument," he concluded.
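How a telemetry stream of that order might break down can be sketched with entirely hypothetical figures; none of the per-point rates or message sizes below comes from the LSST design.

CONTROL_POINTS   = 3000   # 'several thousand'
SAMPLES_PER_SEC  = 10     # assumed average update rate per point
BYTES_PER_SAMPLE = 40     # value, timestamp, addressing and protocol overhead (assumed)

print(CONTROL_POINTS * SAMPLES_PER_SEC * BYTES_PER_SAMPLE * 8 / 1e6)   # ~9.6 Mbit/s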
Sensing success
LSST is working with a number of companies to develop prototype CCDs, including imaging specialist e2v.
The 4k x 4k sensors have pixels on a 10µm pitch. Because of the need for these large arrays to be read quickly and to maintain a read noise of less than five electrons, the sensors have 16 outputs.
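Those sensor numbers tie in with the camera's headline figures, as a little arithmetic shows; the near complete tiling of a 64cm circle in the sketch below ignores guide sensors and the gaps between devices.

import math

PIXELS_PER_SIDE, PITCH_UM, DEVICES = 4096, 10, 189

total_gpixel   = PIXELS_PER_SIDE ** 2 * DEVICES / 1e9        # ~3.17, the '3.2Gpixel' camera
device_side_mm = PIXELS_PER_SIDE * PITCH_UM / 1000           # ~41mm square devices
fill = DEVICES * device_side_mm ** 2 / (math.pi * 320 ** 2)  # fraction of a 64cm circle

print(round(total_gpixel, 2), round(device_side_mm), round(fill, 2))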
Peter Pool, e2v's chief CCD engineer, explained the challenges. "The sensors need higher than normal near infrared (NIR) response and good quantum efficiency across the visible range. A lot of the things LSST will look at are red shifted, which means any sensor will need to be red efficient. However, silicon becomes increasingly transparent in the NIR, with a cut off at 1.06µm, which causes problems."
A solution is to make the silicon thicker, but that compromises the spatial resolution. And, to get good quantum efficiency, the sensors need back illumination with a surface structure designed to collect all charge generated by absorbed photons.
"Normally," said Pool, "silicon CCDs are 16µm thick. Our LSST prototypes are 100µm thick. Because the depth of focus is small, we need surface flatness to be no worse than ±2.5µm across the sensor and from chip to chip. Normally, this is ±10µm, so we're pushing our technology harder and this inevitably brings new engineering challenges."
The LSST focal plane will comprise 189 CCDs and, for maximum efficiency, the device mounting needs to allow very close butting, with minimal dead space between devices.
Because the sensors also need a good modulation transfer function at all wavelengths, the silicon needs to be over-depleted. To get what Pool calls 'a good balance', e2v is using a process in which an additional field can be placed across the silicon – for example, a 70V bias. "A field strong enough to deplete to greater than the silicon thickness ensures a high enough field within the silicon for electrons to reach saturation velocity almost immediately," Pool added.
Achieving the necessary depletion on thick silicon needs more than just a high depletion voltage. "The silicon needs to be very lightly doped (several thousand ohm-cm)," Pool explained. "The voltages required to achieve suitable depletion would not normally allow the inclusion of a low noise output circuit, so a structure has been developed in which the output circuit is effectively isolated from a substrate bias of some 70V, which extends the depletion thickness."
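A textbook one sided junction estimate shows why that combination works; the 5000 ohm-cm resistivity and the bulk mobility used below are illustrative values for 'several thousand ohm-cm' material, not e2v's figures.

import math

RHO_OHM_CM = 5000                 # 'several thousand ohm-cm' (illustrative)
MU_CM2_VS  = 1350                 # assumed bulk electron mobility
Q          = 1.602e-19            # electron charge, C
EPS_SI     = 11.7 * 8.854e-14     # silicon permittivity, F/cm
BIAS_V     = 70                   # substrate bias quoted above

doping   = 1.0 / (Q * MU_CM2_VS * RHO_OHM_CM)                  # ~9e11 cm^-3
depth_um = math.sqrt(2 * EPS_SI * BIAS_V / (Q * doping)) * 1e4
print(round(depth_um))   # ~300um, comfortably beyond the 100um device thickness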
The sensors are being built on what Pool calls a 'standard e2v process'. "But in this instance, it's bulk silicon, rather than epitaxial. With epi, we can clean up the silicon by defining the oxygen concentration in the substrate, causing precipitation of impurities."
e2v has been developing the prototypes for two years and will complete the project this year, after which it will supply samples to LSST and, hopefully, move into production.