How a new coding approach helps optical systems hit 100Gbit/s
Designing high capacity optical systems is getting harder for several reasons: higher data rates mean fewer photons per optical pulse, and signal dispersion is greater. Adding to the challenge, service providers want the reach of 40 and 100Gbit/s systems to match that of existing networks designed for 10Gbit/s lightpaths.
"The optical signal to noise ratio goes down 10dB as line rates go from 10Gbit/s to 100Gbit/s," said Michael Scholten, strategy and technology manager at Vitesse Semiconductor.
Advanced signal processing must be used to compensate for the distortion, while system performance must be enhanced to claw back the lost dBs. "Something needs to change to maintain the same [optical] reach," said David Yeh, AppliedMicro's senior product marketing manager.
This is where forward error correction (fec) comes in and it explains the growing interest in an approach known as soft decision fec (sd-fec).
SD-FEC, which has already been used for networking interfaces such as 10GBase-T, and for radio and satellite communications, differs from traditional fec in that it uses analogue signal levels to indicate the likelihood of a bit being a 1 or a 0, rather than rigid binary values. "You get information from the channel as to a 1 or a 0 and how reliable that piece of info is," said Sameep Dave, principal engineer at ViaSat.
With sd-fec, extra information is fed to the decoder, which equates to an improved electrical coding gain. But this means greater design complexity, a lot more processing horsepower and, in the case of 100Gbit/s optical transmission, a/d converters operating at rates approaching 64Gsample/s.
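The extra channel information is usually expressed as a log-likelihood ratio (LLR): the sign gives the hard decision, the magnitude gives the reliability. A minimal sketch, assuming a BPSK-style channel (bit 0 sent as +1, bit 1 as -1) over additive white Gaussian noise, where the familiar closed form for the LLR applies; the sample values and noise variance are illustrative:

```python
def llr_awgn(y, noise_var):
    """Log-likelihood ratio log(P(bit=0|y)/P(bit=1|y)) for bit 0 sent
    as +1 and bit 1 as -1 over additive white Gaussian noise."""
    return 2.0 * y / noise_var

def hard_decision(y):
    """Traditional hard-decision slicer: sign only, reliability discarded."""
    return 0 if y >= 0 else 1

# Two received samples that slice to the same bit...
strong = llr_awgn(0.9, 0.5)   # well clear of the threshold
weak = llr_awgn(0.05, 0.5)    # barely positive
# ...but carry very different reliability for the decoder:
print(hard_decision(0.9), round(strong, 2))   # 0 3.6
print(hard_decision(0.05), round(weak, 2))    # 0 0.2
```

Both samples give the decoder the same hard bit; only the soft values tell it which one to trust when the parity checks disagree.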
For 100Gbit/s transmission, the optical industry has chosen a coherent receiver design that uses dual polarisation quadrature phase shift keying (DP-QPSK) modulation. This splits the optical signal across two polarisations, while the data stream is encoded using QPSK. Instead of sending one serial channel at 100Gbit/s, four slower channels are sent in parallel. Since 100Gbit/s transmission allows for an overhead of up to 20% – extra bits sent alongside the data payload – the symbol rate ranges from 28Gbaud to 32Gbaud, hence the high speed a/d converter sampling rates.
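The arithmetic behind those symbol rates can be sketched as follows. DP-QPSK carries four bits per symbol: two from QPSK times two polarisations. The 112 and 128Gbit/s line-rate figures below are assumptions standing in for a 100Gbit/s payload once framing and fec overhead are included:

```python
def dp_qpsk_symbol_rate(total_line_rate_gbps):
    """Symbol rate in Gbaud for DP-QPSK, which carries 4 bits per
    symbol: 2 bits (QPSK) times 2 polarisations."""
    bits_per_symbol = 2 * 2
    return total_line_rate_gbps / bits_per_symbol

# A 100Gbit/s payload plus framing and fec overhead gives roughly
# 112 to 128Gbit/s on the line, matching the quoted range:
print(dp_qpsk_symbol_rate(112))  # 28.0 Gbaud
print(dp_qpsk_symbol_rate(128))  # 32.0 Gbaud
```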
For 40Gbit/s, a standard Reed-Solomon fec code delivers a 6dB gain. Going from 40 to 100Gbit/s incurs a penalty of 4dB, so any 100Gbit/s fec must have a gain in excess of 10dB. Because the a/d converters run at such a high rate for 100Gbit/s transmission, only a two-bit sd-fec scheme is being considered. Samples are thus classified into four levels: a 'strong 0', a 'weak 0', a 'weak 1' or a 'strong 1'.
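The four-level classification amounts to a two-bit quantiser on each a/d sample. A minimal sketch, assuming bit 0 is sent as a positive voltage and bit 1 as negative; the strong/weak threshold value is illustrative:

```python
def soft_decide_2bit(sample, threshold=0.5):
    """Classify an a/d sample into the four two-bit soft-decision
    levels. Bit 0 is assumed sent as a positive voltage, bit 1 as
    negative; the strong/weak threshold is illustrative."""
    if sample >= threshold:
        return "strong 0"
    if sample >= 0:
        return "weak 0"
    if sample > -threshold:
        return "weak 1"
    return "strong 1"

print(soft_decide_2bit(0.9))    # strong 0
print(soft_decide_2bit(0.1))    # weak 0
print(soft_decide_2bit(-0.2))   # weak 1
print(soft_decide_2bit(-1.1))   # strong 1
```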
With 7% overhead codes, the difference between standard hard decision fec and sd-fec is relatively small, but increasing the overhead increases the coding gain difference between the two. "In practice, sd-fec offers a 1.3dB better coding gain," said Scholten.
Several coding schemes can be used for sd-fec, including block turbo codes based on BCH (codes invented by Bose, Chaudhuri and Hocquenghem) and low density parity check (LDPC) codes. Vitesse favours LDPC as the processing requirements are relatively straightforward. ViaSat, which has long used sd-fec for satellite communications, is a proponent of block turbo codes and claims these have implementation advantages over LDPC.
ViaSat describes a block turbo code as a two dimensional array made up of concatenated BCH codes. Each row and column is an extended BCH code. Each row is decoded and the results updated. Each column is then decoded using the updated row information. Each iteration provides 'extrinsic information' such that, with several passes, the decoder converges on the solution.
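The row-and-column structure can be illustrated with a toy product code, substituting single-parity checks for the extended BCH codes and a single hard-decision pass for the soft iterative decoding ViaSat describes. A failing row check and a failing column check intersect at the erroneous bit:

```python
def parity(bits):
    """Even parity over a list of bits."""
    return sum(bits) % 2

def encode_product(data):
    """Toy product code: append even parity to each row, then add a
    row of column parities (single-parity checks stand in for the
    extended BCH codes of a real block turbo code)."""
    rows = [row + [parity(row)] for row in data]
    col_par = [parity([row[c] for row in rows]) for c in range(len(rows[0]))]
    return rows + [col_par]

def correct_single_error(block):
    """Check rows, then columns; one failing row and one failing
    column intersect at the erroneous bit, which is flipped."""
    bad_rows = [r for r, row in enumerate(block) if parity(row) != 0]
    bad_cols = [c for c in range(len(block[0]))
                if parity([row[c] for row in block]) != 0]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        block[bad_rows[0]][bad_cols[0]] ^= 1
    return block

block = encode_product([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
clean = [row[:] for row in block]
block[1][2] ^= 1                      # inject a single channel error
assert correct_single_error(block) == clean
```

A real block turbo code passes soft reliabilities between the row and column decoders over several iterations rather than making one hard pass, but the geometry of intersecting codewords is the same.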
Gilles Garcia, director of Xilinx' wired communications division, said there is little demand for sd-fec in fpgas. "The reason is complexity, not because the fpga can't do it. The issue is implementing it at the right power level."
There is also another challenge in using fpgas for sd-fec. "It's down to the way the [fpga's] I/O is implemented," said Frank Melinn, a Xilinx system architect. "If you look at [multilevel] sd-fec, you'd have to get the sensitivities of the serdes changed. However, we are working with a company developing a coherent receiver [chip] and our fpga will perform hard decision fec and framing."
An sd-fec implementation, meanwhile, will require new I/O and Xilinx has announced plans for this in its next generation 28nm fpgas.
The goal is to include the a/d converters, dsp and sd-fec in one chip. "SD-FEC, together with the coherent receiver, is probably the most difficult design, involving both high speed analogue and complex digital processing," said Yeh.
ViaSat has announced sd-fec for 100Gbit/s. It has adapted its block turbo code technology and has designs available in 65nm and 40nm processes.
Not surprisingly, ViaSat does not implement its 100Gbit/s sd-fec prototype designs using fpgas. Instead, it produces bit-accurate RTL and C implementations. "If you do C simulations, you can generate bit error rate performance curves and we make sure the hardware and C simulations give the same 'bit trueness'," said Dave. This has given ViaSat the confidence to go straight to an asic.
In the absence of a good sd-fec, some vendors have decided to go to market with a hard decision fec. "The 100Gbit/s dsp is very big and [vendors] realised they didn't have the die space for the LDPC fec," said Jim Keszenheimer, ViaSat's business development manager. As a result, the traditional fec has been implemented in a second chip. Other vendors are waiting for an sd-fec to become available and will use it as part of a 40nm process asic.
With block turbo codes and their claimed lower implementation complexity, ViaSat says vendors now have a third option: using either the 65nm or 40nm implementation as part of a single coherent receiver chip.
Moving to a 40nm process lowers the sd-fec design's power consumption and die size. Alternatively, the extra processing performance of a 40nm design could enable more decoding iterations, squeezing an extra 0.1 or 0.2dB of gain out of the sd-fec, said Dave.
Vitesse has been researching sd-fec technology but, for now, has chosen to offer a hard decision fec for 100Gbit/s using its Continuously Interleaved BCH (CI-BCH) scheme. "Due to the implementation complexity – the very high processing rate – sd-fec is difficult to implement," said Scholten.
The scheme is based on BCH(1020,988) codewords; 'continuously interleaved' refers to how the codewords intersect, similar to the 2D array of the block turbo code sd-fec. When decoding a data stream, a corrected bit from one code block benefits an intersecting codeword such that corrections ripple through. "That means more and more errors are corrected ahead of time," said Scholten.
The CI-BCH scheme also allows coding gain to be traded off against latency – an increasingly important issue in optical networking. Vitesse's CI-BCH with 20% overhead delivers a 10.5dB electrical coding gain: more than enough to recover the 4dB optical gain shortfall using DP-QPSK at 100Gbit/s.
Vitesse is making the enhanced fec available as an IP block and as an fpga implementation. "We expect an asic implementation to consume between a third and a half of the power of the fpga," said Andy Ebert, Vitesse's product marketing manager.