This is a geeky post about GPS speedsurfing. You've been warned.
I am currently trying to get a couple of GPS devices that I have developed approved for the GPS Team Challenge. The units use u-blox 8 GPS chips, which are very accurate and provide speed accuracy estimates that can be used to automatically identify "bad" sections, for example artificially high speeds related to crashes.
One of the issues that came up is: at what rate should the data be recorded? Some popular GPS watches record only every few seconds, which is good enough for some uses, but not for speedsurfing. The venerable Locosys GT-31 recorded once per second; current Locosys units record at 5 Hz, every 200 milliseconds. The u-blox chips can record up to 18 Hz, although that limits the chips to using only two global satellite systems; for the highest accuracy, tracking satellites from 3 systems is desirable, which limits recording speed to 10 Hz.
There are theoretical arguments that higher rates are better because they give more accurate data. A big part of that is that random measurement errors tend to cancel each other out, so the more data points you average, the lower the remaining error gets.
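If you want to see that averaging effect in action, here is a tiny Python simulation. The 30-knot speed and 0.2-knot per-sample noise are made-up numbers purely for illustration, not measurements from my units:

```python
import random
import statistics

# Toy simulation: independent, zero-mean speed errors shrink when averaged.
# TRUE_SPEED and NOISE_SD are made-up illustration values, not real data.
TRUE_SPEED = 30.0   # knots, assumed constant over the run
NOISE_SD = 0.2      # per-sample speed error (standard deviation), in knots

def mean_error_of_run_average(rate_hz, seconds=2, trials=10000):
    """Average absolute error of the run-averaged speed at a given sample rate."""
    n = rate_hz * seconds
    errors = []
    for _ in range(trials):
        samples = [TRUE_SPEED + random.gauss(0, NOISE_SD) for _ in range(n)]
        errors.append(abs(statistics.mean(samples) - TRUE_SPEED))
    return statistics.mean(errors)

for rate in (1, 5, 10):
    print(f"{rate:>2} Hz: error of 2-second average ≈ {mean_error_of_run_average(rate):.3f} knots")
```

The printed errors shrink roughly with the square root of the number of samples in the average, which is exactly the effect the rest of this post is about.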
But there are also some practical issues with higher data rates. The resulting larger files are usually not much of an issue, unless you're traveling to a spot that has a slow internet connection, and want to upload data for analysis to web sites like ka72.com.
Slow analysis and drawing speeds can be more of a pain. Most of the current analysis software was developed for 1 Hz data and can get quite slow with large 5 Hz files. Some steps appear to be coded inefficiently, showing N-squared time complexity: they take about 25 times longer for 5 Hz data. With 10 Hz data, add another factor of four, and now we're talking about 100-fold slower than at 1 Hz.
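To put rough numbers on that slowdown: for an O(N²) step, the runtime grows with the square of the number of points. The 2-hour session length below is just an assumed example:

```python
# Rough slowdown of an O(N^2) analysis step at higher sample rates.
# The 2-hour session length is an assumed example, not from my test data.
session_seconds = 2 * 3600

points_1hz = session_seconds * 1
for rate_hz in (1, 5, 10):
    points = session_seconds * rate_hz
    slowdown = (points / points_1hz) ** 2
    print(f"{rate_hz:>2} Hz: {points:>6} points, ~{slowdown:>5.0f}x the 1 Hz runtime")
```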
A bigger issue is that higher data rates can lead to "dropped" points when the logging hardware can't keep up with the amount of data. I recently ran into this issue with my prototypes, and have seen indications of the same problem in data files from a GPS specifically developed for speedsurfing that's currently coming onto the market. Fortunately, the frequency of dropped points in my prototypes is low enough (roughly one point per hour) that the data are still useful.
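Dropped points are easy to spot programmatically: just scan the timestamps for gaps that are larger than the nominal logging interval. This is only a rough sketch of that kind of check, not the actual code running in my prototypes or in any analysis program:

```python
def find_dropped_points(timestamps_ms, rate_hz=10, tolerance=0.5):
    """Return (index, gap) pairs where the gap to the previous fix exceeds the
    nominal interval by more than `tolerance` intervals, i.e. a point was
    likely dropped. `timestamps_ms` is a list of GPS timestamps in milliseconds."""
    nominal = 1000.0 / rate_hz
    gaps = []
    for i in range(1, len(timestamps_ms)):
        dt = timestamps_ms[i] - timestamps_ms[i - 1]
        if dt > nominal * (1 + tolerance):
            gaps.append((i, dt))
    return gaps

# Example with one missing 10 Hz fix between the 300 ms and 500 ms records:
print(find_dropped_points([0, 100, 200, 300, 500, 600]))  # -> [(4, 200)]
```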
To see how much we actually gain from increasing the data acquisition rate to 10 Hz, I did a little experiment. It started with a windsurfing session where I used two prototypes at 10 Hz (for control, I also used 2 or 3 "approved" GPS units). Comparing the data from these two units for the fastest ten 2-second and 10-second runs gives a good idea of the actual accuracy of the devices; the data from the other GPS units help to confirm that.
Next, I took one of the 10 Hz data files and split it into two 5 Hz files by simply writing one record to the first file, the next record to the second file, the third record to the first file again, and so on. This simulates recording at 5 Hz, and gives two 5 Hz files from the same device and session.
In the same way, I created a couple of 1 Hz files from the original file, this time selecting every 10th record for the first file, and every 10th record starting one record later for the second file (a short sketch of both splits follows below).
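In code, both splits are just striding over the records. The sketch below assumes a text log with one record per line (true for NMEA-style logs, but not for binary formats such as u-blox UBX); the file names are hypothetical:

```python
def split_records(records, step, offset=0):
    """Take every `step`-th record starting at `offset`.
    step=2 turns one 10 Hz stream into a 5 Hz stream;
    step=10 turns it into a 1 Hz stream."""
    return records[offset::step]

with open("session_10hz.txt") as f:   # hypothetical file name, one record per line
    records = f.readlines()

five_hz_a = split_records(records, 2, 0)   # records 0, 2, 4, ...
five_hz_b = split_records(records, 2, 1)   # records 1, 3, 5, ...
one_hz_a  = split_records(records, 10, 0)  # every 10th record
one_hz_b  = split_records(records, 10, 1)  # every 10th record, shifted by one

for name, recs in [("session_5hz_a.txt", five_hz_a), ("session_5hz_b.txt", five_hz_b),
                   ("session_1hz_a.txt", one_hz_a),  ("session_1hz_b.txt", one_hz_b)]:
    with open(name, "w") as out:
        out.writelines(recs)
```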
I analyzed the speeds for all these files in GPSResults, put them in a spreadsheet, and calculated the differences. Here are the results:
Looking at the "Average" lines, the observed differences increased from 0.041 knots to 0.074 knots for 2 second runs, and from 0.027 knots to 0.036 knots for 10 second runs. Going down to just one sample per second increased the observed differences more than 2-fold for both 2 and 10 seconds.
The observed differences are close to what sampling theory would predict: the error is inversely proportional to the square root of the number of samples taken. The expected numbers are shown in the "Theoretical error" line above.
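In other words, the "Theoretical error" values are just the 10 Hz averages scaled by the square root of the ratio of the sample rates. Here is that calculation, using the 10 Hz averages quoted above as the baseline:

```python
import math

# Expected increase of the averaged-speed error when the sample rate drops,
# assuming independent errors: error ~ 1 / sqrt(number of samples).
baseline_10hz = {"2 s runs": 0.041, "10 s runs": 0.027}   # observed averages at 10 Hz (knots)

for run, err_10hz in baseline_10hz.items():
    for rate in (5, 1):
        expected = err_10hz * math.sqrt(10 / rate)
        print(f"{run}: expected error at {rate} Hz ≈ {expected:.3f} knots")
```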
But what error is "good enough"? Let's look at the top 10 teams in the September monthly ranking of the GPS Team Challenge to get an idea:
In the 2-second ranking, teams #7 and #8 are just 0.06 knots apart; in the 5x10 second averages, the difference is 0.07 knots. The smallest difference within the 5x10 second category is 0.6 knots, between the teams ranked 8th and 9th.
Looking back at the observed differences at 5 Hz, we see that the average was just 0.036 knots (note that this is actually the average of the absolute differences). For a 5x10 ranking, the expected error would be roughly two-fold smaller, or less than 0.02 knots. That seems quite adequate: it would have given the correct ranking in the 5x10 second category. Errors go down even further for the "longer" categories like nautical mile, 1 hour, and distance. Only in the 2-second category were two teams so close together that the observed average error is similar to the difference in posted speeds.

Note, however, that many speedsurfers still use GT-31 devices that record at 1 Hz, and that even the 5 Hz Locosys units tend to have 2- to 3-fold higher errors than the u-blox prototypes I used in this test. The 2-second category is well known to be the one most prone to errors; even so, it is still a lot better than the "maximum speed" category used in some other GPS competitions. For 5 Hz data, the expected error for single-point "maxima" is about 3-fold higher than for 2-second runs!
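For the curious, the "roughly two-fold smaller" and "about 3-fold higher" factors both come from the same square-root rule; a quick back-of-the-envelope check:

```python
import math

err_10s_5hz = 0.036   # observed single-run 10 s error at 5 Hz (knots), from above

# Averaging five 10 s runs reduces the error by sqrt(5) ≈ 2.2:
print(f"5x10 average: ≈ {err_10s_5hz / math.sqrt(5):.3f} knots")   # < 0.02 knots

# A 2 s run at 5 Hz averages 10 samples; a single-point "max" averages just one:
print(f"single-point max vs 2 s run: ≈ {math.sqrt(10):.1f}x higher error")
```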
So, to answer the question in the title: yes, it is!