Magnus McEwen-King is managing director of Optasense, a company that was spun out of defence technology company QinetiQ to develop fibre optic sensing solutions for non-military markets. "Distributed acoustics is our poster child," said McEwen-King. "It is the one everybody readily understands, but we also do distributed temperature sensing, distributed magnetic sensing, distributed impedance sensing and pressure sensing: it is DxS – distributed something sensing.
"We have found that acoustics is the one that is applicable in most industries. Acoustics can pick up leaks on a pipe, someone drilling near a pipe or walking in a field potentially to attack a facility – they all have an acoustic or vibration element."
The Optasense distributed sensing system effectively turns the entire cable into a sensor, or at least a series of adjacent virtual sensors. Light is fired down a fibre in pulses which determine the length of the virtual sensor – a 10m pulse means each 10m length of fibre is a virtual sensor that is adjacent to the next one. A 50km fibre – which is getting close to the practical limit – would, therefore, have 5000 10m sensors.
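In code, the geometry amounts to nothing more than dividing the fibre into adjacent, fixed-length segments; the short sketch below simply reproduces the example figures quoted above and is not a description of Optasense's implementation.

```python
# A minimal sketch of the virtual-sensor geometry described above: the fibre is
# treated as a chain of adjacent, fixed-length virtual sensors. Illustrative only.

def virtual_sensors(fibre_length_m: float, sensor_length_m: float) -> list[tuple[float, float]]:
    """Return the (start, end) position in metres of each adjacent virtual sensor."""
    count = int(fibre_length_m // sensor_length_m)
    return [(i * sensor_length_m, (i + 1) * sensor_length_m) for i in range(count)]

sensors = virtual_sensors(50_000, 10)   # 50km fibre, 10m virtual sensors
print(len(sensors))                     # 5000 adjacent virtual sensors
print(sensors[0], sensors[-1])          # (0, 10) ... (49990, 50000)
```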
Most of the light effectively falls out of the far end of the fibre, but a small amount is reflected back to the transmission end as a consequence of an effect called Rayleigh backscatter. This phenomenon can be used to quantify changes in the fibre – acoustics or vibration causes strain in the fibre, which, due to Rayleigh backscatter, causes variations in the number of photons returning to the transmission end.
Slicing the backscatter signal
Using time division multiplexing, the Rayleigh backscatter signal is sliced so that each part of the signal corresponds to a certain distance and therefore a certain virtual sensor. Slicing and dicing is done using coherent optical time domain reflectometry (COTDR), a technique that mixes the received optical signal with a local optical oscillator in order to provide an electrical signal at an intermediate frequency, which can then be filtered to reduce noise.
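The slicing idea can be sketched as follows. This covers only the time-of-flight binning step, not the coherent detection itself, and the group index of 1.468 is an assumed typical value for silica fibre rather than a quoted figure.

```python
# Illustrative slicing of one backscatter trace into virtual-sensor channels by
# time of flight. Not Optasense's implementation; the group index is an assumption.

import numpy as np

C = 2.998e8           # speed of light in vacuum, m/s
GROUP_INDEX = 1.468   # assumed group index of silica fibre

def sample_distance_m(return_time_s: np.ndarray) -> np.ndarray:
    """Round-trip time -> position along the fibre (factor 2 for the two-way path)."""
    return (C / GROUP_INDEX) * return_time_s / 2.0

def slice_into_channels(trace: np.ndarray, adc_rate_hz: float,
                        sensor_length_m: float) -> np.ndarray:
    """Group the samples of one returning pulse into per-sensor (per-channel) bins."""
    t = np.arange(trace.size) / adc_rate_hz        # return time of each ADC sample
    channel = (sample_distance_m(t) // sensor_length_m).astype(int)
    n_channels = channel.max() + 1
    # One value per virtual sensor per pulse, here simply the mean of its samples
    return np.array([trace[channel == c].mean() for c in range(n_channels)])
```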
"Within COTDR, the coherent piece is the really relevant part," said McEwen-King. "What is unique about the COTDR capability is that each individual segment is its own microphone and is sampled independently and simultaneously in relation to every other virtual sensor on that cable. That particular feature enables us to give each microphone its own characteristics – different algorithms can be used to respond to different sections of the fibre."
Most similar systems on the market, says McEwen-King, are essentially energy sensors, which means that information received about any particular segment can only be used in isolation. "We have phase and amplitude coherence, which means Optasense is a true acoustic system, providing the full fidelity of comparative measurement across every single sensor. As other systems on the market are only energy sensors, you can't compare one channel to another one because phase or amplitude may be out of sync. Our system isn't like that. Essentially, Optasense is equivalent to a geophone and, as each sensor is phase and amplitude coherent to the adjacent one, we have a true seismic array or sonar array."
By analysing the signals from different sensors, it is possible to tell if someone was walking towards the fibre, how many people there were and their size. If a tractor was ploughing an adjacent field, it would not be a cause for concern, but if it stopped adjacent to the fibre, which could be along a buried pipe, and digging started, it could be time to issue an alarm.
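A toy illustration of the kind of cross-channel analysis this makes possible is sketched below. The peak-tracking approach, frame rate and channel spacing are illustrative assumptions, not a description of Optasense's algorithms.

```python
# Illustrative only: track the loudest virtual sensor over time to estimate the
# position and speed of a disturbance moving along the fibre.

import numpy as np

def track_disturbance(energy: np.ndarray, sensor_length_m: float, frame_rate_hz: float):
    """energy: array of shape (n_frames, n_channels), acoustic energy per frame and channel."""
    peak_channel = energy.argmax(axis=1)             # loudest sensor in each frame
    position_m = peak_channel * sensor_length_m      # its position along the fibre
    t = np.arange(energy.shape[0]) / frame_rate_hz
    speed_m_per_s = np.polyfit(t, position_m, 1)[0]  # slope: movement along the fibre
    return position_m, speed_m_per_s
```

A disturbance whose position along the fibre stops changing while its energy stays high – the tractor that has parked and started digging – is exactly the sort of pattern such an analysis would flag.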
Bandwidth inversely proportional to span
The acoustic bandwidth is very broad, but inversely proportional to the length of the cable: the pulse repetition rate is limited by the round-trip time of light in the fibre, and the usable acoustic bandwidth is half that rate. Typically, at 40km, the sample rate is 2.5kHz, which results in an acoustic bandwidth of 1.25kHz, while a 4km fibre provides an acoustic bandwidth of 12.5kHz. However, depending on the sensitivity requirements, it can vary from 0Hz ('virtual DC') to 125kHz.
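Those figures follow directly from the round-trip time. The short calculation below reproduces them; the group index of 1.468 is an assumed typical value for silica fibre, not a quoted figure.

```python
# The pulse repetition rate cannot exceed 1 / (round-trip time of the fibre),
# and the usable acoustic bandwidth is half that rate (Nyquist).

C = 2.998e8           # speed of light in vacuum, m/s
GROUP_INDEX = 1.468   # assumed group index of silica fibre

def acoustic_bandwidth_hz(fibre_length_m: float) -> float:
    round_trip_s = 2 * fibre_length_m * GROUP_INDEX / C
    return (1.0 / round_trip_s) / 2.0

print(acoustic_bandwidth_hz(40_000))   # ~1.28kHz, matching the ~1.25kHz quoted for 40km
print(acoustic_bandwidth_hz(4_000))    # ~12.8kHz, matching the ~12.5kHz quoted for 4km
```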
There are three components in the Optasense solution. The first is an optical interrogation unit, which converts the fibre into the acoustic sensor. The second is the processing unit, typically an off-the-shelf Dell blade server that holds all the algorithms and data processing capabilities and which converts the raw acoustic data into meaningful pattern recognition, height/bearing/speed classification and so on. The third component is a control or display capability.
McEwen-King commented: "The proprietary pieces are the optical interrogator unit and the software."
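Purely as a structural sketch – the names below are hypothetical stand-ins, not the product's software – the three-part flow might be wired together like this:

```python
# Hypothetical names throughout; a stand-in for the interrogator -> processing
# -> display flow described above.

from dataclasses import dataclass

@dataclass
class Alarm:
    position_m: float   # where along the fibre the event was detected
    label: str          # e.g. "digging", "footsteps", "vehicle"

def interrogate() -> list[list[float]]:
    """Stand-in for the optical interrogation unit: one acoustic frame per channel."""
    return [[0.1, 0.2, 5.0, 0.1]]   # one frame, four channels, dummy energy values

def process(frames: list[list[float]], sensor_length_m: float = 10.0,
            threshold: float = 1.0) -> list[Alarm]:
    """Stand-in for the blade-server algorithms: naive per-channel energy threshold."""
    return [Alarm(ch * sensor_length_m, "unclassified")
            for frame in frames
            for ch, energy in enumerate(frame) if energy > threshold]

def display(alarms: list[Alarm]) -> None:
    """Stand-in for the control or display capability."""
    for alarm in alarms:
        print(f"alarm at {alarm.position_m:.0f}m: {alarm.label}")

display(process(interrogate()))   # -> "alarm at 20m: unclassified"
```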
Optasense claims a false alarm – a false positive – is not possible using the system, a claim verified under test by the National Physical Laboratory and others. A nuisance alarm, where an alarm is raised on a real event, but not one that is important, is another matter – that is down to how the system is set up. McEwen-King said: "The systems are designed to be 100% accurate in detection, but the longer we have to look at an event, the more accurate we can become with our classification."
He believes there are three reasons why the system offers functionality not available elsewhere. "The first is that we make the fibre. If you think about an analogy with the human body, the fibre becomes the ear with the best signal-to-noise ratios on the market – we can hear things other people can't. Ours is a quantitative measurement device, which means it is phase and amplitude coherent and has the best SNR.
"The second piece is being able to look at the thousands of acoustic channels coming towards you at the same time. Because we have been in military sonar as QinetiQ for decades, we have already developed the systems to handle that broadband of upwards of 100kHz of acoustic bandwidth in many thousand channels. In some of our projects, we have 2 to 3Gbit/s of data coming towards us at the same time. It is a very large data handling issue and so we have developed proprietary techniques to handle and sort that acoustic broadband data in real time.
"And, finally, the third piece is the algorithm – the brain behind the ear. Anybody can make a fibre 'ear', but it is very hard to make sense of what it hears, to sort it all out and provide what we call decision ready data."