There are many scope parameters to consider, but which are the most important?
The more complex our technical environment, the more advanced our measurement equipment needs to be. The oscilloscope is a good example: over the last few years, huge improvements in performance, along with a range of innovations, mean that selecting the right scope is now a difficult process.
Many parameters have to be considered when specifying an oscilloscope, but which is the most important? The answer is bandwidth; without the right bandwidth, the signal of interest will not be displayed correctly. Even when a suitable bandwidth is chosen, sample rate needs to be considered. If the a/d sample rate isn't at least 2.5 times the bandwidth, details will be lost or there will be aliasing.
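As a rough illustration of that rule of thumb, the sketch below works out the minimum sample rate for a few example bandwidths. The factor of 2.5 comes from the text; the bandwidth figures are arbitrary examples, not the specification of any particular scope.

```python
# Minimal sketch of the sample-rate rule of thumb: the a/d sample rate should
# be at least 2.5 times the analogue bandwidth, otherwise fine signal detail
# is lost or aliased. Bandwidth figures below are arbitrary examples.

def min_sample_rate(bandwidth_hz, factor=2.5):
    """Minimum sample rate implied by the 2.5x rule of thumb."""
    return factor * bandwidth_hz

for bw in (100e6, 350e6, 1e9):   # 100 MHz, 350 MHz and 1 GHz bandwidths
    print(f"{bw / 1e6:6.0f} MHz bandwidth -> at least {min_sample_rate(bw) / 1e9:.2f} GSa/s")
```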
Now, what about data memory? Even if the correct bandwidth and sample rate are chosen, signal detail will be lost without enough memory, because longer time/div settings then force the sample rate down. That makes three important parameters. But what about waveform update rate? Even if the three parameters mentioned above are specified correctly, anomalies won't be found unless the scope has a fast waveform update rate.
This process goes on and on; just think about advanced trigger functions, a/d resolution or even the type or size of the screen.
Why does waveform update rate need to be considered at all? The reason lies in digital oscilloscope technology. In analogue oscilloscopes, this parameter was largely unknown because the input signal itself deflected the electron beam vertically, and the time taken to return the beam from right to left was short enough to be ignored. In digital scopes, there is no electron beam – all the digitised data has to be processed and prepared for display. While data is being handled, no 'new' data can be acquired – this is called 'dead time'. The sum of a/d conversion time and dead time represents the complete acquisition time.
The curve we see on the screen represents only a small part of one acquisition cycle. So the more acquisition cycles per second, the shorter the dead time and the higher the ratio of a/d conversion time to dead time. This allows rare anomalies to be detected more quickly, increases confidence in the measurement results and saves money, as the test engineer spends less time finding errors.
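To make the dead-time argument concrete, here is a small worked example. All of the figures (capture window, update rate, glitch rate) are assumptions chosen for illustration, not Rigol specifications, and the waiting-time estimate is only a first-order approximation.

```python
# Dead-time arithmetic: one acquisition cycle = capture (a/d conversion) time
# plus dead time, and the waveform update rate is the number of cycles per
# second. All figures are illustrative assumptions.

capture_time = 140e-9        # assumed time window digitised per cycle
update_rate  = 30_000        # assumed waveforms per second

cycle_time    = 1.0 / update_rate
dead_time     = cycle_time - capture_time
capture_ratio = capture_time / cycle_time      # fraction of real time the scope actually sees

print(f"dead time per cycle : {dead_time * 1e6:.2f} us")
print(f"capture ratio       : {capture_ratio * 100:.3f} % of real time")

# First-order estimate of how long it takes to catch a glitch that occurs
# five times per second, given that only capture_ratio of real time is seen.
glitch_rate = 5.0
print(f"expected time to catch one glitch: {1.0 / (glitch_rate * capture_ratio):.0f} s")
```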
Reducing the dead time – also called blind time – can only be done by using higher performance cpus and/or by integrating functions into hardware (fpga/asic).
To meet these needs, Rigol has developed UltraVision technology (see fig 1), which is based on the interaction between specialised hardware and intelligent software. The a/d converted data is handled by a hardware-based sampling controller. A large memory is connected directly to the controller, so data is stored without loading the cpu. Similarly, the waveform to be displayed is generated by a waveform plotter, again without loading the cpu.
There is one additional point: the display becomes more important because many waveforms are overlaid and displayed in one 'shot'. This means the screen technology has to be adapted to help users to analyse signals. Multilevel intensity grading or multicolour displays are common, where different intensities or colours give an indication of how often an anomaly occurs. Screens in those Rigol scopes with UltraVision technology feature 256 intensity steps.
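The idea behind multilevel intensity grading can be pictured with a short sketch: many overlaid acquisitions are accumulated into a hit-count map and the counts are scaled onto 256 intensity levels, the figure quoted above. This is a conceptual model only, not the scope's actual rendering pipeline.

```python
import numpy as np

# Conceptual model of intensity grading: accumulate overlaid traces into a
# hit-count map, then scale the counts to 256 display levels. Pixels hit by
# many acquisitions end up bright; rare events stay dim but visible.

rng = np.random.default_rng(0)
width, height, levels = 600, 256, 256
hits = np.zeros((height, width), dtype=np.uint32)

for _ in range(2000):                                   # 2000 overlaid acquisitions
    t = np.arange(width)
    trace = 128 + 80 * np.sin(2 * np.pi * t / 150) + rng.normal(0, 3, width)
    rows = np.clip(trace.round().astype(int), 0, height - 1)
    hits[rows, t] += 1                                  # each sample lights one pixel

intensity = (hits / hits.max() * (levels - 1)).astype(np.uint8)
print("intensity levels used:", np.unique(intensity).size, "of", levels)
```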
Consider a clock signal as a typical test performed with an oscilloscope. The signal contains an unwanted glitch – a runt or a spike – which occurs from time to time. The task is to find and analyse the unwanted behaviour: how often does the spike occur, what is the peak voltage and what is the pulse width? There are a number of ways to do this. The simplest, but most time-consuming, way is to acquire data over a long period, then zoom into the signal and scroll manually through the whole sequence looking for the disruption. The challenge here is that if the error is only a short peak, the sampling rate must be high enough to catch the spike. But if you work at a high sample rate, you need a large memory to acquire enough data.
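The memory problem behind this brute-force approach is easy to quantify. The figures below are assumptions for illustration: a 10 ns spike sampled at 1 GSa/s over a 500 ms capture.

```python
# Worked example of the memory problem: resolving a short spike needs a high
# sample rate, but holding that rate over a long capture needs deep memory.
# All figures are illustrative assumptions.

glitch_width   = 10e-9      # 10 ns spike to be resolved
sample_rate    = 1e9        # 1 GSa/s, giving about ten samples across the spike
capture_window = 0.5        # 500 ms of signal captured in one long acquisition

print(f"samples across the glitch : {glitch_width * sample_rate:.0f}")
print(f"memory for one capture    : {sample_rate * capture_window / 1e6:.0f} Mpts")
```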
Another way is to use the record function built into Rigol scopes; this helps to save time by conducting the error search automatically. Data is still captured over a long period, but it is stored as multiple single shots taken with a shorter time base, so a lot of data is saved. The user can then define a 'good' mask and perform a pass/fail test on the recorded data. All 'fails' are highlighted and can be accessed and analysed with ease. Using time stamps and an overall view, it is possible to see how frequently the errors occur.
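A simplified sketch of the principle behind such a mask-based search is shown below. The frames, mask limits and injected glitch are all invented for illustration; a real scope does this in firmware, but the logic of flagging every recorded frame that leaves the 'good' envelope is the same.

```python
import numpy as np

# Principle of a mask-based pass/fail search over recorded frames: every frame
# that leaves the 'good' amplitude envelope is flagged together with its time
# stamp. Frames, limits and the injected spike are invented for illustration.

rng = np.random.default_rng(1)
n_frames, frame_len, frame_interval = 200, 500, 1e-3    # assumed record settings

upper, lower = 1.2, -0.2                                # 'good' mask limits
failures = []

for i in range(n_frames):
    frame = (np.arange(frame_len) % 100 < 50).astype(float)   # idealised clock
    frame += rng.normal(0, 0.02, frame_len)                   # measurement noise
    if i % 37 == 0:
        frame[250] = 1.8                                      # occasional spike
    if frame.max() > upper or frame.min() < lower:
        failures.append((i, i * frame_interval))              # index, time stamp

print(f"{len(failures)} of {n_frames} frames failed the mask")
print("first failures (index, time stamp):", failures[:3])
```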
The third – and most advanced – approach is to combine the record function and available trigger functions. The oscilloscope can be configured such that only the error is recorded. Depending on the error's frequency and time base settings, it is possible to monitor the signal and record the errors for hours. A disadvantage is that information captured between errors will be lost.
Signals of interest are acquired at the maximum sample rate and the available memory is not wasted on storing signals without anomalies. Further analysis can be performed directly on the scope. Automatic measurements with measurement history can be added and displayed in playback mode, or the data can be downloaded to a pc.
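How long such an error-only recording can run is simple arithmetic. The sketch below uses assumed figures for sample rate, capture window per error, memory depth and error rate; none of them is a specification of a particular instrument.

```python
# Rough budget for the 'record only the errors' approach: each triggered error
# is stored as one short frame at full sample rate, so recording continues
# until the frame memory is full. All figures are assumptions.

sample_rate  = 1e9       # 1 GSa/s on the error frames
frame_window = 2e-6      # 2 us captured around each error
memory_depth = 56e6      # assumed total record memory in points

points_per_frame = sample_rate * frame_window
max_frames       = memory_depth // points_per_frame

errors_per_hour = 100    # assumed error rate of the device under test
print(f"points per error frame : {points_per_frame:.0f}")
print(f"frames that fit        : {max_frames:.0f}")
print(f"monitoring time        : {max_frames / errors_per_hour:.0f} hours")
```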
UltraVision technology can provide even more. As standard, all scopes featuring this technology have a deep data memory. But why is this important?
If you need to acquire short or fast signals or disturbances, the main focus has to be on a high sample rate and a high bandwidth (the keyword here is rise time). But if the signal has to be monitored over a longer period, and if there are additional peaks or drop-outs to be analysed, a deep memory is necessary. The mathematical relationship between sampling rate, time base and memory makes this clearer:
sampling rate × (time/div) × number of divisions = data memory
Rearranging this:
(time/div) × number of divisions = data memory / sampling rate
The maximum memory and sampling rate are fixed by the hardware, while the total displayed time is defined by the application. In order to reach the required time period, the sampling rate has to be decreased. A lower sample rate means lower resolution, and small signal details may not be acquired.
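Putting numbers into the rearranged relationship makes the trade-off visible. The memory depth, maximum sample rate and number of horizontal divisions below are assumptions for illustration.

```python
# The rearranged relationship in use: for a given memory depth, the achievable
# sample rate falls as the time base (total displayed time) grows, capped at
# the a/d converter's maximum rate. All figures are assumptions.

def achievable_sample_rate(memory_pts, time_per_div, divisions=12, max_rate=2e9):
    """Sample rate permitted by memory for a given time base, capped at the a/d maximum."""
    total_time = time_per_div * divisions
    return min(max_rate, memory_pts / total_time)

memory_pts = 14e6                              # assumed memory depth
for tdiv in (1e-6, 100e-6, 10e-3, 1.0):
    rate = achievable_sample_rate(memory_pts, tdiv)
    print(f"time/div {tdiv:>8.0e} s -> {rate / 1e6:10.1f} MSa/s")
```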
Fig 2 shows the relationship between these parameters. What does this graph tell us? In short, the larger the memory, the longer you can maintain the maximum sample rate. When longer acquisition times are necessary, it is helpful to keep the sample rate high because the captured waveform can then be zoomed into without losing detail.
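The same arithmetic also illustrates the point made by fig 2: with a deeper memory, the maximum sample rate can be held over a longer total capture time. The maximum rate and memory options below are, again, assumed figures.

```python
# The deeper the memory, the longer the maximum sample rate can be maintained
# before the scope has to throttle it back. Figures are assumptions.

max_rate = 2e9                                 # assumed maximum a/d rate, 2 GSa/s
for memory_pts in (14e3, 14e6, 140e6):         # assumed memory depth options
    held_for = memory_pts / max_rate           # capture time at full sample rate
    print(f"{memory_pts / 1e6:8.3f} Mpts -> full rate held for {held_for * 1e3:8.3f} ms")
```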
Thomas Rottach is an application engineer with Rigol Technologies EU.