Cup Sampling Rate Difference Between M4 & M5

The tower literature indicates that the sampling rate for the cup anemometers is 1 Hz. Since the DAQ system is sampling at 20 Hz, I would expect to see cup measurements in the 20 Hz data files in groups of 20 with similar magnitude. In other words, the cup reports the same wind speed for 20 samples (one second at 20 Hz). This oversampling of the cup instruments gives the cup wind speed a “step function” appearance when the data are viewed at the 20 Hz sampling rate. This “step function” appearance of the cup data is what happens on the M5 tower but not on the M4 tower.

The M4 cup instruments fluctuate at 20 Hz and do not appear to be constrained to “hold” a particular value for one second. Consequently, the M4 cup data does not have the “step function” appearance and looks noticeably different from cup data on the M5 tower.

From an analysis standpoint, these differences are unlikely to be significant since the cup data is generally averaged over much longer time periods than one second. I was just curious why these apparent differences in sampling might be occurring since the instrumentation is nearly identical between the M4 and M5 towers.

I have included a plot from Jan 19, 2013: file 00:00:00. This plot shows the first 5 seconds of the M4 134m Cup and the M5 122m Cup. The “step” nature is clearly visible on the M5 cup but not on the M4 cup.


What you’ve just found is a feature of the data acquisition system (DAS). The DAS scans all of the instruments at 20 Hz. A cup sends a pulse each time it completes a part of a rotation. So, if the wind speed is too low, we won’t get any pulses in a 1/20th-second interval, and so we don’t have a wind speed measurement. But, to keep the data files simple, we record the previous measurement in the data file. This approach is not uncommon, especially on a complex system. You get a hint of this if you look more closely at your data plot; on the right-hand side of the plot you’ll see that there are 14 M5 measurements that are the same, rather than 20. This trick of repeating the previous data reduces the amount of energy in the higher frequency part of the power spectra of the turbulence (you could think of it as being a low-pass filter). You should be able to see a plot below from a randomly-selected data file that shows how the M4 and M5 cups can vary.
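That repeat-the-previous-value behaviour can be sketched in a few lines (a hypothetical illustration, not the actual DAS code; `None` stands in for a scan interval in which no cup pulses arrived):

```python
def sample_and_hold(scans):
    """Mimic the DAS behaviour described above (hypothetical sketch):
    whenever a 1/20th-second scan interval contains no cup pulses
    (marked here as None), repeat the previous valid reading."""
    held = []
    last = 0.0  # assume a zero reading before the record starts
    for reading in scans:
        if reading is not None:
            last = reading
        held.append(last)
    return held

# Three scan intervals with no pulses between two valid readings:
print(sample_and_hold([4.2, None, None, None, 4.5]))
# -> [4.2, 4.2, 4.2, 4.2, 4.5]
```

Repeating the last value like this is a zero-order hold, which is why it acts as a crude low-pass filter on the cup signal.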

At this point it’s worth me reminding all users that the sonic anemometers are the primary measurements for speed on the tower. The sonics have no moving parts, no inertia, and no issues of frequency or gust response. The cups, on the other hand, have a very odd frequency response, show a dependency on the previous measurement, get confused in upflows, and are mounted closer to the tower body.

If you are interested in learning some more about this, try plotting out the power spectra from the sonics and cups. You’ll see that both follow the typical -5/3 decay curve for a turbulence measurement, but the cups show a much higher noise floor (where power is not a function of frequency). Importantly, the higher frequency range does not contain much energy, so that noise does not impact the variance or the turbulence intensity. If you were going to use higher-order moments from the data (i.e., not just the mean and standard deviation, but also skew and kurtosis) I would recommend using the sonics.
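If you want to try this, here is a rough sketch of the mechanics on a synthetic series (a hypothetical stand-in for a real cup record: the integrated noise gives a steeply falling spectrum, and the added white noise mimics a noise floor):

```python
import numpy as np

fs = 20.0                       # Hz, the DAS scan rate
n = 2**14                       # ~13.7 minutes of 20 Hz samples
rng = np.random.default_rng(1)

# Synthetic "turbulence-like" series: integrated noise (red spectrum)
# plus white noise (flat noise floor at high frequency).
x = np.cumsum(rng.normal(size=n)) * 0.02 + rng.normal(scale=0.5, size=n)
x -= x.mean()

# One-sided periodogram via the FFT
freqs = np.fft.rfftfreq(n, d=1 / fs)
power = np.abs(np.fft.rfft(x)) ** 2 / (n * fs)

# On a log-log plot of power vs. freqs, the spectrum falls off at
# low-to-mid frequencies and flattens where the noise floor takes
# over -- the same shape described above for the cups.
```

For real data you would average periodograms over segments (Welch's method) to reduce scatter before comparing the cup and sonic spectra.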

Hope that helps? I’ll try to get this written up in the documentation.

I agree with you about using the Sonics for high-frequency testing, and with the technique of supplying the previous measurement in cases of low wind speed. But I think there is still something different between how the M4 and M5 towers handle the data from the Cups. Although the M4 Cups do have repeated values, these values are usually only repeated for a few data points (perhaps 2 or 3 at 20 Hz). The M5 tower (for all Cups) seems to almost always have at least 20 repeated values, which indicates a pattern of about one second (or more) at 20 Hz. For example, in the case I plotted above, you mentioned that the last set only had 14 data points. It actually had about 20 data points, but since I selected the cutoff point for the data series, the last 6 data points were not shown. This repeating pattern seems to only occur for Cups on the M5 tower. I do not see this extended repeating pattern on the M4 tower.

Your plot also shows this extended repeating pattern for the M5 Cups. Judging by the number of data points in your plot, it looks like you were using about 2- or 3-second averages? If this is true, then the M5 repeating pattern clearly can continue for quite a bit longer than a few seconds.

I realize that this repeating pattern is absorbed in the 10 minute averaging process for the M5 tower and so is probably not very important. It just seems very unusual to me that the M5 tower does this while the M4 tower does not. Of course, I am assuming that the towers have identical equipment. If the equipment was different, the DAQ system was different, or something else was different between the two towers, I could understand why we might see such a difference in the data.


PS: I am primarily using 1 week of 20 Hz data (from Jan. 2013) for the M4 and M5 towers. The repeating pattern that I describe here may not exist outside of this time period.

My plot is the raw 20-Hz data. No averaging other than that imposed by the DAS. Remember that between the tick marks it’s a 20-second interval, so the longest sustained measurement is about 1.5 seconds.

I think you hit the nail on the head by asking about your assumption of identical systems. Unfortunately, the two towers (M4 and M5) are not identical. M5 has more cups, which makes it even more susceptible to the problem. If you look a little closer at my example you’ll see that at higher wind speeds there are samples where the cup data approaches 20 Hz, but at low wind speeds, and particularly while winds are dropping, the measurement intervals extend to 5, 10, or 20 samples. What probably happens there is that the cups are decelerating and stopping, so there is no signal. Then, the DAS just repeats the previous value (as programmed), even though the wind speed may be zero. The software has not changed for a while, so that should still be the case in January.

A suggestion: think about plotting the time taken for the measured value to change, versus the mean value before the change and the size of the change. This should be a good coding exercise, and you’ll see how the DAS is behaving.
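As a possible starting point for that exercise (a hypothetical helper, with a toy array standing in for a 20 Hz cup record):

```python
import numpy as np

def hold_intervals(speeds, fs=20.0):
    """Return (durations_s, held_values): how long each run of identical
    consecutive samples lasted, and the value that was held.  Sketch for
    the exercise suggested above; `speeds` is a 1-D cup record."""
    speeds = np.asarray(speeds, dtype=float)
    change = np.flatnonzero(np.diff(speeds) != 0) + 1  # run boundaries
    edges = np.concatenate(([0], change, [speeds.size]))
    durations = np.diff(edges) / fs      # run length in seconds
    values = speeds[edges[:-1]]          # value held during each run
    return durations, values

d, v = hold_intervals([3.0, 3.0, 3.0, 3.0, 3.1, 3.1, 2.9])
print(d.tolist())  # -> [0.2, 0.1, 0.05]
print(v.tolist())  # -> [3.0, 3.1, 2.9]
```

Scatter-plotting the durations against the held values for a full file should show the long holds clustering at low wind speeds, which is the DAS behaviour described above.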

Ultimately we may switch to 1-second-averaged acquisition on the cups, wind vanes, temperature sensors, and pressure sensors. However, that is going to be difficult to code, will cause problems with the analysis software, and is not something we will do lightly.

Thanks Andy,

Actually, I realized that I made a mistake reading your plot figure. For some reason I thought that there were 10 minutes between your tick marks, not 10 seconds. This is why I thought you were using some kind of averaging to account for the low number of data points.

I understand your figure now.

Also, I agree with you that it is unnecessary to change the code to account for slow-response instruments. We have a similar situation at Texas Tech, where we also have slow-response instruments coupled with fast-response instruments using high-frequency sampling. The slow-response instruments are essentially oversampled, and it is up to the individual user to decide what averaging times or response times are appropriate for a particular project.
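As an illustration of that user-side choice, a minimal block-averaging sketch (hypothetical helper and data, not code from either facility):

```python
import numpy as np

def block_average(series, fs=20.0, avg_seconds=1.0):
    """Down-average an oversampled record to a user-chosen interval.
    Hypothetical helper: any trailing partial block is dropped."""
    n = int(fs * avg_seconds)
    m = len(series) // n
    return np.asarray(series[: m * n], dtype=float).reshape(m, n).mean(axis=1)

# Two seconds of 20 Hz "cup" data reduced to two 1-second means:
raw = np.concatenate([np.full(20, 5.0), np.full(20, 6.0)])
print(block_average(raw))  # -> [5. 6.]
```

Changing `avg_seconds` lets each user pick the averaging time appropriate to the instrument's actual response, without touching the acquisition code.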