While acquisition still predominates, there is an increasing need to handle output data flow from computer to equipment, in applications such as wide format or 3D printing. Based on our experience of working with customers over the years, I’d say perhaps 75% of applications are for acquisition and data capture, while 25% are for the other direction. This means that any general purpose application needs to be able to handle both data acquisition and data output.
This real-world data can be difficult to handle in an embedded system. There are often high volumes of data, which, in the case of data acquisition, must be handled by an interface, then formatted or reconfigured before being sent to a host computer. Some buffering is needed to keep everything moving, and there may be real-time constraints that dictate the timing needed.
But there’s no need to reinvent the wheel. Most data acquisition applications share the same requirements and a similar structure for how data is acquired, processed, transferred and stored.
Data acquisition structure
Figure 1 shows a typical data acquisition system. The first stage is the interface to the peripheral or sensor producing the data, which might be a camera, A/D converter or other sensor.
The data may then be processed or formatted on the fly, and buffered in on-board memory to smooth out any bursts of data from the sensor. Finally, the data is sent to a host computer for display and storage, often using a USB or Ethernet interface. There may also be a control channel for setting up any parameters of the acquisition system, and for starting and stopping it.
How about applications where data flows in the other direction – from a PC to transducers – to control a 3D printer for example? These data output tasks require similar functions to the structure shown in Figure 1, but simply with the data flow reversed.
Tasks of the interface
These functions may be common, but their requirements are not trivial. Buffering the data in RAM requires handling of FIFO (first in, first out) functions. This requires memory accesses to a single external RAM device to be multiplexed for simultaneous reads and writes. It may also be necessary to route multiple data streams through the FIFO, for example if images are being captured from two or more cameras.
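The scheme of partitioning a single RAM device into several independent FIFOs can be sketched in software. The following C fragment is a minimal illustration of the idea, not Orange Tree's actual hardware design; the buffer sizes, channel count and function names are all made up for the example.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define RAM_SIZE  1024
#define NUM_FIFOS 2
#define FIFO_SIZE (RAM_SIZE / NUM_FIFOS)

static uint8_t shared_ram[RAM_SIZE];  /* stands in for the single external RAM device */

typedef struct {
    size_t base;   /* start of this FIFO's region within the shared RAM */
    size_t head;   /* write index, relative to base */
    size_t tail;   /* read index, relative to base */
    size_t count;  /* bytes currently stored */
} fifo_t;

static fifo_t fifos[NUM_FIFOS];

void fifo_init(void) {
    for (int i = 0; i < NUM_FIFOS; i++) {
        fifos[i].base = (size_t)i * FIFO_SIZE;
        fifos[i].head = fifos[i].tail = fifos[i].count = 0;
    }
}

/* Write one byte to channel ch; returns false if that FIFO is full. */
bool fifo_write(int ch, uint8_t byte) {
    fifo_t *f = &fifos[ch];
    if (f->count == FIFO_SIZE) return false;
    shared_ram[f->base + f->head] = byte;
    f->head = (f->head + 1) % FIFO_SIZE;
    f->count++;
    return true;
}

/* Read one byte from channel ch; returns false if that FIFO is empty. */
bool fifo_read(int ch, uint8_t *byte) {
    fifo_t *f = &fifos[ch];
    if (f->count == 0) return false;
    *byte = shared_ram[f->base + f->tail];
    f->tail = (f->tail + 1) % FIFO_SIZE;
    f->count--;
    return true;
}
```

In hardware the reads and writes are multiplexed onto the RAM's single port by an arbiter rather than by sequential function calls, but the bookkeeping per channel is much the same.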
After the data is captured and buffered, the interface needs to send it to the host PC. This can often involve feeding multiple data channels through a single USB or Ethernet connection.
Additionally, in most applications, the data from the sensor must be processed before it is sent to the PC. For example, word lengths might need to be changed from 10bit to 8bit, the format of image data might need to be changed, or frame boundaries added and raw data configured to fit a higher level protocol. While these changes can be handled on the PC, it is often desirable to reconfigure the data in hardware before sending it to the computer, to achieve higher speeds and to preserve host PC resources.
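The word-length change mentioned above can be illustrated in a few lines of C. This sketch simply keeps the 8 most significant bits of each 10bit sample; a real design might round or rescale instead, and the function name is illustrative.

```c
#include <stdint.h>
#include <stddef.h>

/* Reduce 10-bit samples (stored one per uint16_t) to 8 bits by
 * masking to 10 bits and discarding the two least significant bits. */
void convert_10bit_to_8bit(const uint16_t *in, uint8_t *out, size_t n) {
    for (size_t i = 0; i < n; i++) {
        out[i] = (uint8_t)((in[i] & 0x3FF) >> 2);
    }
}
```

In an FPGA this is a trivial wiring operation on the data path, which is exactly why it is cheaper to do in hardware than on the host.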
Standard platform
To address these challenges, it can be useful to use a standard platform that can be re-used across multiple applications – whether supplied by a vendor, or developed in-house. For example, Orange Tree’s ZestDAQ software is a platform for data acquisition and control applications running on the Zest series of USB and Gigabit Ethernet boards, which provide high speed embedded device interconnect.
The platform consists of multiple FPGA logic cores designed to simplify the buffering, formatting and transfer of data between peripherals or sensors and a host computer.
A standard platform can handle the low-level tasks needed, which means that the user needs only to create the application specific code to interface to their peripherals and format the data in a suitable manner for transmission to the host PC.
Common components
Figure 2 shows the cores in the platform, using the example of an Ethernet interface board from Orange Tree. The Ethernet wrapper provides simple access to 16 simultaneous channels transferring data over the network. The wrapper includes a connection management state machine and a data multiplexor to hide the detailed operation of the custom chip handling the low-level Ethernet accesses. It presents 16 parallel, byte-wide, bidirectional FIFOs to the user application, which runs in an FPGA on the board.
Data flows through the SDRAM buffer core, which makes the SDRAM device appear as multiple parallel FIFOs, allowing bulk data to be buffered between the user application and the Ethernet interface. The register interface provides a simple way to read and write control and status registers inside the user application.
Figure 2: Cores in a ZestET2 Ethernet board
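The register interface can be pictured as a small addressable register file that the host reads and writes over the control channel. The sketch below is only an analogy in C; the register names, addresses and count are invented for the example and are not the actual core's.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_REGS    8
#define REG_CONTROL 0   /* e.g. start/stop acquisition (illustrative) */
#define REG_STATUS  1   /* e.g. FIFO levels, error flags (illustrative) */

static uint32_t regs[NUM_REGS];

/* Host writes a control register; rejects out-of-range addresses. */
bool reg_write(uint8_t addr, uint32_t value) {
    if (addr >= NUM_REGS) return false;
    regs[addr] = value;
    return true;
}

/* Host reads back a control or status register. */
bool reg_read(uint8_t addr, uint32_t *value) {
    if (addr >= NUM_REGS) return false;
    *value = regs[addr];
    return true;
}
```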
The same principles could apply to a USB interface board. Here, the standardised platform can provide a way of multiplexing multiple data streams over a single USB connection, with minimal impact on the data transfer rate.
In the example of ZestDAQ, when more than one channel is ready to transfer data, blocks are taken from each channel in a round-robin fashion. The block length can be controlled for each channel individually, and can be either fixed or variable, determined at runtime.
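The round-robin selection described above can be sketched as follows. This is a software model of the scheduling idea only, with an invented channel count and interface, not the ZestDAQ implementation.

```c
#include <stddef.h>

#define NCHAN 3

/* Pick the next channel to service after 'last', skipping channels
 * with no pending data; returns -1 if nothing is ready. */
int next_channel(const size_t pending[NCHAN], int last) {
    for (int i = 1; i <= NCHAN; i++) {
        int ch = (last + i) % NCHAN;
        if (pending[ch] > 0) return ch;
    }
    return -1;
}

/* Transfer one block: drain up to block_len bytes from the chosen
 * channel and return how many bytes were moved. */
size_t transfer_block(size_t pending[NCHAN], int ch, size_t block_len) {
    size_t n = pending[ch] < block_len ? pending[ch] : block_len;
    pending[ch] -= n;
    return n;
}
```

Because idle channels are skipped, a single busy channel is not slowed by the others, which is why the multiplexing has minimal impact on the transfer rate.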
To simplify porting, the software platform gives the wrapper and register cores similar interfaces on both the USB and Ethernet versions, so applications can be moved between the two platforms with little change.
The SDRAM buffer core is also identical on both USB and Ethernet boards. It wraps the SDRAM memory so that it appears as up to 16 independent FIFOs. Each FIFO can have asynchronous input and output clocks and independent input and output widths of 8, 16, 32, 64 or 128bits. Various options are provided for byte enables and packing of byte data into words, as well as big and little endian support. Each channel can be assigned any address range in the SDRAM, allowing each FIFO to be any length.
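The byte-to-word packing and endianness options mentioned above amount to choosing how consecutive bytes are placed within a wider word. A minimal C illustration, assuming a 32bit word (the function and its interface are invented for the example):

```c
#include <stdint.h>

/* Pack four consecutive bytes into one 32-bit word. In big-endian
 * mode the first byte lands in the most significant position; in
 * little-endian mode it lands in the least significant position. */
uint32_t pack_bytes(const uint8_t b[4], int big_endian) {
    if (big_endian)
        return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
               ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
    return ((uint32_t)b[3] << 24) | ((uint32_t)b[2] << 16) |
           ((uint32_t)b[1] << 8)  |  (uint32_t)b[0];
}
```

In the hardware core this packing is selected per channel, so a byte-wide sensor stream can be widened to match the SDRAM data bus without host intervention.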
Benefits of standardisation
Since many data acquisition applications have a similar structure, a standardised platform such as ZestDAQ can be invaluable.
It reduces the custom design work needed for common data acquisition and control architectures, saving time for end users and enabling them to concentrate on their own specialised sensors and applications.
A standard platform means that customers benefit from proven, tested code, which helps to reduce design effort and hence costs, as well as cut time to market and reduce risk.
Author profile
Matt Bowen is software director at Orange Tree Technologies.