Triumph-LS Cluster Average and Relative Accuracy

Shawn Billings

5PLS
Most of my work in the past year has been boundary surveying, requiring me to take the Triumph-LS into some pretty difficult places with phenomenal success. In these places, however, precision is diminished by multipath. For my purposes the reduced precision is still tolerable, and on large boundaries it is in most cases still better than (and acquired much faster than) conventional terrestrial traversing.

Because my work has mostly consisted of pushing the LS in hostile environments, I sometimes forget just how good the Triumph-LS is in good environments with open skies. Last week I was hired to create five control points on a large site for machine control. I worked from two existing control points (Point 5 and Point 553). I set the base directly on 5 and tied into 553 as a check. Point 5 was established in 2003, and Point 553 was established from it in 2006 using static GPS. Both are 12" spikes in stable soil. My new points are 1/2" x 24" rebar. I tied point 553 only once (I had intended to tie it twice, but forgot to return to it). I tied my new points twice each, except for 13, which I tied three times.

Accuracy and Cluster Average Google Earth.png
 

Shawn Billings

5PLS
Vector lengths from the base to the six points were:
553 5139.87'
11 5580.92'
12 6157.32'
13 6025.11'
14 4494.01'
15 3263.18'

J-Field can create automatic cluster averages of points that fall within a user-defined proximity of one another (a rough sketch of this grouping idea follows the screenshots below). I used this to create averaged positions for 11, 12, 13, 14 and 15. Another way to do this is to manually select a point in the point list:

00256_Objects_20170618-19.05.31.png




Then tap the blue coordinate field on the right side of the screen, which brings up the Base/Rover Statistics screen:

00256_Base___Rover_Statistics_20170618-19.05.10.png


Then press the up-arrow hardware button on the left side of the receiver, which brings up a graphic view of the points in the cluster and the resulting average:

00256_Cluster_Point_Statistics_20170618-19.05.48.png
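Purely as an illustration of the proximity-grouping idea described above (J-Field's actual clustering algorithm isn't documented in this post), a minimal Python sketch that groups points falling within a user-defined horizontal tolerance of one another might look like this; the point names and coordinates are hypothetical:

```python
# Illustrative sketch only: this is NOT J-Field's internal logic, just a
# simple grouping of points that fall within a user-defined horizontal
# proximity of one another.
from math import hypot

def cluster_points(points, tolerance_ft):
    """Group points whose horizontal separation is within tolerance_ft.

    points: dict of name -> (northing_ft, easting_ft); all values hypothetical.
    Returns a list of clusters, each a list of point names.
    """
    clusters = []
    for name, (n, e) in points.items():
        for cluster in clusters:
            # Join the first existing cluster that has a point within tolerance.
            if any(hypot(n - points[m][0], e - points[m][1]) <= tolerance_ft
                   for m in cluster):
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

# Two observations of the same mark plus one distant point:
pts = {"13_1": (1000.00, 2000.00), "13_2": (1000.03, 1999.98),
       "14_1": (1500.00, 2500.00)}
print(cluster_points(pts, tolerance_ft=0.50))  # [['13_1', '13_2'], ['14_1']]
```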
 

Shawn Billings

5PLS
The resulting point is a survey point computed as a weighted average using the statistics of each observation. It also carries improved error estimates over any of the individual points used to create it.

00256_Base___Rover_Statistics_20170618-19.13.32.png


(Note that the duration should reflect the total number of seconds from the first and second observations; the epoch count, however, does show the combined number of epochs.)
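The post says the average is weighted by the statistics of each observation but doesn't spell out the exact weighting, so the following is only a minimal sketch of the common inverse-variance approach, with hypothetical values. It also shows why the averaged point carries a smaller error estimate than any single observation:

```python
# Minimal sketch of an inverse-variance weighted average, a common way to
# combine repeated observations. J-Field's exact weighting is not documented
# in this post, so treat this as an illustration, not the actual algorithm.

def weighted_average(observations):
    """observations: list of (value, sigma) pairs for one coordinate component.

    Returns (weighted mean, standard error of the weighted mean).
    Weights are 1/sigma^2, so noisier observations pull the mean less.
    """
    weights = [1.0 / sigma ** 2 for _, sigma in observations]
    total_w = sum(weights)
    mean = sum(w * value for w, (value, _) in zip(weights, observations)) / total_w
    sigma_mean = (1.0 / total_w) ** 0.5  # smaller than any single sigma
    return mean, sigma_mean

# Hypothetical northings (ft) and estimated sigmas for three ties of one point:
obs = [(5000.010, 0.020), (5000.060, 0.050), (5000.020, 0.025)]
print(sum(v for v, _ in obs) / len(obs))  # simple mean: about 5000.030
print(weighted_average(obs))              # mean ~5000.018, sigma ~0.015: pulled toward the tighter ties
```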
 

Shawn Billings

5PLS
You will notice that the average position is not where a simple average would place the point (this is very plain from point 13's average). Let's take a look at the Base/Rover Statistics screen for each of the three observations that make up point 13's average:

First Observation:
00256_Base___Rover_Statistics_20170618-19.20.38.png


Second Observation:
00256_Base___Rover_Statistics_20170618-19.20.47.png


Third Observation:
00256_Base___Rover_Statistics_20170618-19.21.00.png


Note that the RMS and the error ellipse axes for the Second Observation are higher than those for the First and Third Observations.
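To make the downweighting concrete with hypothetical numbers: if the three observations carried horizontal sigmas of 0.02', 0.05' and 0.02', inverse-variance weights of 1/σ² would be 2500, 400 and 2500, so the noisier second observation would contribute only about 7% of the total weight, whereas a simple average would give each observation a third. The actual weights depend on the statistics shown in the screenshots above.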
 

Shawn Billings

5PLS
After completing my point averages, I wanted to look at the relative accuracy of the five points. J-Field has a tool to report this under COGO > Tools > Relative Accuracy:

00256_CoGo_20170618-19.30.12.png 00256_Tools_20170618-19.30.18.png 00256_Relative_Accuracy_20170618-19.30.22.png

I then selected the five points I wanted to compute the relative accuracy between:

00256_Survey_20170618-19.30.45.png


00256_Relative_Accuracy_20170618-19.30.53.png


The utility is designed to provide a pass/fail for a particular accuracy requirement (such as ALTA/NSPS standards). In this case I used the results to justify a statement that the coordinates I determined had a network relative accuracy of less than 0.05' horizontal and 0.10' vertical.
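For context only: the post doesn't show how J-Field combines the per-point statistics into the pairwise figures. As a rough sketch, assuming independent horizontal error estimates for each point and an ALTA/NSPS-style allowable of 2 cm (about 0.07') plus 50 ppm of the distance between the pair, a pairwise check could be written like this; all names and numbers are hypothetical:

```python
# Rough sketch of a pairwise relative-accuracy check. This is not J-Field's
# actual computation; it assumes each point's horizontal error estimate is
# independent and uses an ALTA/NSPS-style allowable of ~2 cm + 50 ppm.
from itertools import combinations
from math import hypot

def relative_accuracy_report(points, tol_const_ft=0.066, tol_ppm=50.0):
    """points: dict of name -> (northing_ft, easting_ft, sigma_h_ft)."""
    for (a, (na, ea, sa)), (b, (nb, eb, sb)) in combinations(points.items(), 2):
        dist = hypot(na - nb, ea - eb)
        rel = hypot(sa, sb)                       # combined estimate, independence assumed
        allow = tol_const_ft + tol_ppm * dist / 1e6
        status = "PASS" if rel <= allow else "FAIL"
        print(f"{a}-{b}: dist={dist:.2f} ft  rel={rel:.3f} ft  allow={allow:.3f} ft  {status}")

# Hypothetical coordinates and horizontal sigmas (ft) for three of the new points:
pts = {"11": (5580.9, 100.0, 0.015),
       "12": (6157.3, 250.0, 0.018),
       "13": (6025.1, -80.0, 0.012)}
relative_accuracy_report(pts)
```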

Selecting "Report" at the bottom right of the screen, creates an HTML report of the relative accuracy that can then be downloaded and printed (currently this isn't available in pdf format, but I would definitely prefer it to HTML, even though HTML seems to work fine).
 

Jim Frame

Well-Known Member
Do you have any terrestrial ties between those points? It'd be interesting to see how those compare. As I indicated in another thread, GPS processor error estimates tend to be optimistic, so making certifications based on those estimates alone would make me uncomfortable.
 

Shawn Billings

5PLS
I did not have any terrestrial ties between these points. They are not intervisible. The error spread between the different observations was less than the error estimates, so I feel pretty confident that they are realistic. My experience with vectors in the open has been that the error estimates are pessimistic.
 

Adam

Well-Known Member
5PLS
Selecting "Report" at the bottom right of the screen, creates an HTML report of the relative accuracy that can then be downloaded and printed (currently this isn't available in pdf format, but I would definitely prefer it to HTML, even though HTML seems to work fine).

It would be good to have that report written to the Project reports we create too. It doesn't get put in there for some reason.
 

Darren Clemons

Well-Known Member
You will notice that the average position is not where a simple average would place the point (this is very plain from point 13's average). Let's take a look at the Base/Rover Statistics screen for each of the three observations that make up point 13's average:

First Observation:
View attachment 6411

Second Observation:
View attachment 6412

Third Observation:
View attachment 6413

Note that the RMS and the error ellipse axes for the Second Observation are higher than those for the First and Third Observations.
Very nice info Shawn. Thanks for posting all this.
Kind of puts a star on what I was referring to in the other thread with Patrick about how much you (or your crews) can "bring home" with the LS.
Nothing we've ever used gives us this kind of detailed information.
 

Patrick Garner

Active Member
JAVAD does seem to take the attitude that more is better. It's refreshing to see, particularly when the LS is compared to other receivers.
 

Sean Joyce

Well-Known Member
I did not have any terrestrial ties between these points. They are not intervisible. The error spread between the different observations was less than the error estimates, so I feel pretty confident that they are realistic. My experience with vectors in the open has been that the error estimates are pessimistic.

This brings up an interesting question. What terrestrial measurement criteria would one use to obtain the same achieved accuracy tolerances (e.g., number of direct and reverse angle observations, forward and back distance measurements, curvature and refraction corrections, systematic errors, equipment calibration, strength of figure, etc.)?
It could be more time consuming to achieve, especially if the project area is not flat.
 

Jim Frame

Well-Known Member
My experience with vectors in the open has been that the error estimates are pessimistic.

There are a lot of variables that go into this. From what I've seen, with short vectors, longish (3-minute) observation times and a GNSS base, the error estimates tend to be somewhat pessimistic in the horizontal and about right in the vertical. With long vectors (> 1 mile), short (< 1 minute) observation times and a GNSS base, the error estimates tend to be optimistic. (Unobstructed locations in both cases.)

I don't have a great example of the former situation; the closest is a recent project in which I did 3-minute observations twice each (different days) on 5 points, with vector lengths ranging from about 1,000 feet to about 2,200 feet. I also measured between the points (but not to the base) with a total station. The error estimate scalars ran from zero to 2 in north and up, and from zero to 1 in east.

I have a better example of the latter, a levee profile with no trees anywhere near it. I had 4 control points that I would check into whenever I passed by, and ended up with 24 check shots spread over 7 days. Observation times were in the 15- to 30-second range (I was looking for blunders, not trying to improve the control). Vector lengths ranged from a little over 1 mile to a little under 3 miles. In an unconstrained adjustment (i.e. the check shots were only compared with themselves, not to established control values), the scalars ranged between 0 and about 3.5 in all 3 dimensions. In a fully constrained adjustment (the control had been positioned via multiple 1-hour static sessions), the scalar range was slightly higher except for one outlier that bumped up the north scalar to 5.4.
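Jim doesn't define the scalars precisely here; one common way to express them is the ratio of an observed residual (or check-shot misfit) to the estimated standard error for that component, where values under 1 suggest pessimistic estimates and values over 1 suggest optimistic ones. A minimal sketch with hypothetical numbers:

```python
# Minimal sketch of one way to express "error estimate scalars": the ratio of
# an observed residual to the estimated standard error for that component.
# How the adjustment software actually computes its scalars may differ.

def error_scalars(residuals, sigmas):
    """residuals, sigmas: per-component values (e.g. north, east, up) in feet.

    A scalar < 1 suggests the error estimate was pessimistic; > 1, optimistic.
    """
    return [abs(r) / s for r, s in zip(residuals, sigmas)]

# Hypothetical check-shot misfits against adjusted values and the estimated sigmas:
print(error_scalars(residuals=[0.010, 0.004, 0.050], sigmas=[0.020, 0.015, 0.030]))
# -> roughly [0.5, 0.27, 1.67]: north/east estimates pessimistic, up optimistic here
```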
 

Sean Joyce

Well-Known Member
There are a lot of variables that go into this. From what I've seen, with short vectors, longish (3-minute) observation times and a GNSS base, the error estimates tend to be somewhat pessimistic in the horizontal and about right in the vertical. With long vectors (> 1 mile), short (< 1 minute) observation times and a GNSS base, the error estimates tend to be optimistic. (Unobstructed locations in both cases.)

I don't have a great example of the former situation; the closest is a recent project in which I did 3-minute observations twice each (different days) on 5 points, with vector lengths ranging from about 1,000 feet to about 2,200 feet. I also measured between the points (but not to the base) with a total station. The error estimate scalars ran from zero to 2 in north and up, and from zero to 1 in east.

I have a better example of the latter, a levee profile with no trees anywhere near it. I had 4 control points that I would check into whenever I passed by, and ended up with 24 check shots spread over 7 days. Observation times were in the 15- to 30-second range (I was looking for blunders, not trying to improve the control). Vector lengths ranged from a little over 1 mile to a little under 3 miles. In an unconstrained adjustment (i.e. the check shots were only compared with themselves, not to established control values), the scalars ranged between 0 and about 3.5 in all 3 dimensions. In a fully constrained adjustment (the control had been positioned via multiple 1-hour static sessions), the scalar range was slightly higher except for one outlier that bumped up the north scalar to 5.4.

Jim;
Do you have to deal with points moving from tremors and earthquakes there in California?
That could be fun:(
 