Verification with Triumph-LS Plus and standard Triumph-LS with Multi-Constellation (Preliminary)

Shawn Billings
5PLS
These recommendations are preliminary. Ultimately the software will be changed to automate these approaches, or even better approaches will be discovered and implemented. In any case, users today can enjoy significant gains in efficiency with the Triumph-LS Plus and the standard Triumph-LS (2-engine firmware) when multi-constellation data is used. With multi-constellation, I do not use the same approach to verification that I used with GPS+Glonass v6+ when working in canopy. What follows comes from some testing I've done, along with observations from other members of the PLS Team:

Preliminary Verification Observations with Multi-Constellation
RTPK and RTK agreement points to a good solution. I can't say this is 100.000% correct, but I'd put the odds that a solution is still a failure when RTPK and RTK agree at 1 in 10,000 or less. Ultimately, I anticipate that this will be incorporated in the action profile, but there will need to be some changes to the software to automate it. For now, I use a White Box button for Post-Processing, which allows the user to start processing at any time during the RTK observation. The benefit is that RTPK can be processing while RTK is still working out the solution. Once the RTPK solution is acquired, you can visually compare it to the RTK solution. If the RTK and RTPK solutions are in close agreement, the likelihood is that the fix is good, even if you only have a handful of RTK epochs.
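
To make the comparison concrete, here is a minimal sketch of the kind of check I do by eye. The Solution fields, the horizontal and vertical tolerances, and the function names are illustrative assumptions of mine; this is not J-Field or Triumph-LS code.

# Minimal sketch of the RTK vs. RTPK agreement check described above.
# The coordinate fields and tolerances (in feet) are assumptions for
# illustration only -- not actual Triumph-LS / J-Field values.
from dataclasses import dataclass
import math

@dataclass
class Solution:
    north: float  # local northing, feet
    east: float   # local easting, feet
    up: float     # local up, feet

def solutions_agree(rtk: Solution, rtpk: Solution,
                    h_tol: float = 0.10, v_tol: float = 0.15) -> bool:
    """Return True when the RTK and RTPK positions agree within tolerance."""
    dh = math.hypot(rtk.north - rtpk.north, rtk.east - rtpk.east)
    dv = abs(rtk.up - rtpk.up)
    return dh <= h_tol and dv <= v_tol

# Example: a pair that agrees both horizontally and vertically.
print(solutions_agree(Solution(1000.02, 5000.01, 350.04),
                      Solution(1000.05, 4999.98, 350.10)))  # True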

Multiple RTPK solutions in agreement can also point to a good solution. First, let me clear up what this is NOT. This is not simply pressing the RTPK Start White Box button several times in an observation and getting the same answer. When the White Box is pressed, the processing starts at the beginning of the session and processes up to when the button was pressed, so each time processing is done with the White Box during the collection of a point, the processor is processing some of the same data. This may still be a good indicator of a fix, seeing that each press of the White Box button gives the same result, but what I'm actually recommending here is having independent RTPK points agree with each other. Eventually, I believe this will be automated in the software as well, but for now, you can collect an RTPK point (even if it doesn't agree with RTK), store the RTPK, then collect another RTPK point and compare. I have seen two back-to-back RTPK sessions agree that were incorrect. Both solutions were 60 seconds in length, and the total for both was about 120 seconds. In the environment I was working in, I should have allowed more time (more on that below). As a result, Alexey and I recommend acquiring three matching RTPK sessions, which should point to a good solution. Of course, once obtained, the cluster average can be used to merge the results of the three points into one.
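
Below is a sketch of how that check and the merge might look in code. The Point structure, the tolerances, and the simple average standing in for the cluster average are my own illustrative assumptions, not anything from J-Field.

# Sketch of the "three matching RTPK observations" rule: keep collecting
# independent RTPK points until three of them mutually agree, then merge
# them with a simple average (standing in for the cluster average).
from dataclasses import dataclass
from itertools import combinations
import math
from typing import List, Optional, Tuple

@dataclass
class Point:
    north: float
    east: float
    up: float

def within_guard(a: Point, b: Point, h_tol: float = 0.10, v_tol: float = 0.15) -> bool:
    """Illustrative 3D agreement test (tolerances in feet)."""
    return (math.hypot(a.north - b.north, a.east - b.east) <= h_tol
            and abs(a.up - b.up) <= v_tol)

def three_that_agree(points: List[Point]) -> Optional[Tuple[Point, Point, Point]]:
    """Return the first trio of stored RTPK points that agree pairwise, else None."""
    for trio in combinations(points, 3):
        if all(within_guard(a, b) for a, b in combinations(trio, 2)):
            return trio
    return None

def cluster_average(trio: Tuple[Point, Point, Point]) -> Point:
    """Merge the three agreeing points into one result."""
    n = len(trio)
    return Point(sum(p.north for p in trio) / n,
                 sum(p.east for p in trio) / n,
                 sum(p.up for p in trio) / n)

# Usage: store each RTPK point and test after every new one.
stored = [Point(100.00, 200.00, 50.00),
          Point(100.03, 199.98, 50.05),
          Point(100.55, 200.40, 50.90),   # an outlier that will be ignored
          Point(99.98, 200.02, 49.97)]
trio = three_that_agree(stored)
if trio:
    print(cluster_average(trio))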

Multiple Engines Fixed can point to a good solution. I ran a test in a medium canopy environment a couple of weeks ago with my Triumph-LS Plus. The test was set up such that when two engines fixed, a point was stored using a single epoch. The engines were then reset and collection automatically started again. I ran this test for about five hours. There were no bad points stored in this five-hour test. The precision wasn't great (horizontally the worst point was 0.2' from the average of the 1400 points collected and vertically the worst was 0.3' from the average), but considering this was only one epoch in a bad environment, I would consider it a great success. More importantly, it demonstrates that having two or more engines fixed simultaneously can indicate a good fix. With the Triumph-LS Plus, this seems to be even more reliable than in the standard Triumph-LS with 2-engine firmware, but the rule is useful in the 2-engine firmware also. It appears that while a bad 2-engine fix can occasionally occur with the standard Triumph-LS, it doesn't last very long, perhaps only a few seconds. Currently it is possible to set up the software to watch for two-engine fixes by watching the consistency counter (which only increments when two or more engines are fixed). The count need not be very high; I have mine set to require a consistency of 1 (the lowest setting available). What it does not do at the moment is increment if two engines fix at different times during the same observation (Engine 1 fixes, then later Engine 2 fixes). This may also prove to indicate a good fix and would be easier to acquire than two simultaneous fixes. It is very, very important with this rule that GPS+Glonass+Galileo+Beidou are being used in the engines. I believe the reason this works is that the two engines are using satellites with very different geometries, and these different geometries will be affected by multipath differently as the signals bounce through trees and off of buildings. It's as if you took the shot now and came back later to take it again under a different constellation, except that here the two different constellation geometries are observed contemporaneously.
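
As a rough illustration of the consistency-counter behavior described above, here is a sketch in which the counter increments only on epochs where two or more engines report a fixed solution at the same time. The engine flags and the threshold of 1 mirror the text; everything else is a placeholder, since the real counter lives in the receiver firmware.

# Sketch of a consistency counter that increments only when at least two
# RTK engines are fixed during the same epoch. Illustrative only.
from typing import Sequence

def update_consistency(counter: int, engine_fixed: Sequence[bool]) -> int:
    """Add one to the counter when two or more engines are fixed this epoch."""
    return counter + 1 if sum(engine_fixed) >= 2 else counter

# Example: epoch-by-epoch fix flags for a 2-engine receiver.
counter = 0
for flags in [(True, False), (True, True), (False, True), (True, True)]:
    counter = update_consistency(counter, flags)
print(counter >= 1)  # True -- the "consistency of 1" requirement is met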

In summation: at present, when collecting critical points under canopy using the standard Triumph-LS (2-engine firmware) or the Triumph-LS Plus with multi-constellation signals, I'm looking for two of the above conditions to pass. Unfortunately, these conditions require more user involvement at present than the old Boundary profile for the standard Triumph-LS with GPS and Glonass only, but reliable results can be obtained much more quickly. Ultimately, I anticipate these approaches (or some more refined version of them) will be implemented in the software, but for now the user must invest himself a bit more than with the earlier versions. So I mostly watch for a consistency greater than 0 plus RTPK agreement, or I shoot the point two or more times, looking for agreement between RTPK results or between the RTPK of one observation and the RTK of another.
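
In code form, the decision I am describing amounts to something like the small sketch below. The three boolean inputs stand for the checks discussed above; how each one is determined is up to the user for now, and the names are mine.

# Sketch of the "two of the above conditions must pass" rule for critical
# points under canopy. Purely illustrative.
def accept_point(rtk_rtpk_agree: bool,
                 rtpk_sessions_agree: bool,
                 multi_engine_fix: bool) -> bool:
    """Accept the observation when at least two verification checks pass."""
    return sum([rtk_rtpk_agree, rtpk_sessions_agree, multi_engine_fix]) >= 2

print(accept_point(True, False, True))   # True
print(accept_point(True, False, False))  # False -- keep observing or re-shoot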

A note about RTPK observation times. RTPK observations with multi-constellation data should not exceed six minutes. Per @Alexey Razumovsky:
My RTPK recommendations with regard to environment: open - 5-30 s; low - 1 min; medium - 3 min; high - 4 min; extreme - 6 min. No sense to stay more than 6 min.
So at six minutes, store whatever the processor gives and start again. In the example I gave above of two bad RTPK results agreeing, I was in a medium environment with only 60-second observations. When I tested again with a minimum observation of 180 seconds, I had a much better success rate and no bad solutions that agreed with the next solution. In other words, at three minutes in the medium environment, two agreeing solutions were always correct. These observation times are NOT appropriate for GPS+Glonass data, and times will need to increase significantly when the correction data does not include Galileo and Beidou (basically the same as DPOS processing).
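
For convenience, Alexey's recommended maximums can be written as a simple lookup. The values come straight from the quote above; the dictionary and wrapper function are just an illustrative convenience of mine.

# Maximum multi-constellation RTPK observation times by environment,
# in seconds, per the recommendations quoted above.
MAX_RTPK_SECONDS = {
    "open": 30,      # 5-30 s
    "low": 60,       # 1 min
    "medium": 180,   # 3 min
    "high": 240,     # 4 min
    "extreme": 360,  # 6 min -- no sense staying longer
}

def max_rtpk_time(environment: str) -> int:
    return MAX_RTPK_SECONDS[environment.lower()]

print(max_rtpk_time("medium"))  # 180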
 

Shawn Billings
5PLS
Thank you, Alexey. You've provided a tool that is changing how we do our fieldwork for the better. Much better.

1. Maybe this is good, or perhaps only two settings are needed: open and canopy. In the open, the processor starts at 30 seconds automatically, then processes from the beginning again at 60 seconds (if the user continues the observation that long) to see if the result at 60 seconds is the same as at 30, and flags the user that the RTPK appears good. For canopy, we do similarly, starting at 120 seconds, then checking each minute after, processing from the beginning, and flagging the user that the RTPK solution appears good if there is agreement.
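
A sketch of the schedule I have in mind is below: the list gives the seconds from the start of the observation at which RTPK would automatically reprocess from the beginning. The mode names and the 6-minute cap are my assumptions.

# Illustrative schedule of automatic RTPK processing times, in seconds.
def rtpk_check_times(mode: str, max_seconds: int = 360) -> list:
    if mode == "open":
        return [30, 60]                               # process at 30 s, confirm at 60 s
    if mode == "canopy":
        return list(range(120, max_seconds + 1, 60))  # 120 s, then every minute to the cap
    raise ValueError("mode must be 'open' or 'canopy'")

print(rtpk_check_times("open"))    # [30, 60]
print(rtpk_check_times("canopy"))  # [120, 180, 240, 300, 360]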

2. Perhaps we can satisfy this question as we discussed previously. Below I describe starting at 60 seconds and then reprocessing at 60-second intervals. For open areas, this is too long, I think, so the setting mentioned above would help: begin processing at 30 seconds, then process again at 60 seconds. If that fails, the process could then jump to the sequence described below.

Three RTPK solutions within the 3D confidence guard.
This will require RTPK to break the raw data file into different sessions. I would recommend that RTPK begin processing automatically each minute:
- At 1 minute, process all data from 0-60 seconds.
- At 2 minutes, process all data from 0-120 seconds. If the second processing result equals the first, session 1 is complete at 2 minutes.
- If not, at 3 minutes process 0-180 seconds. If the third result equals the second, session 1 is complete at 3 minutes.
- If not, at 4 minutes process 0-240 seconds. If the fourth result equals the third, session 1 is complete at 4 minutes.
- If not, at 5 minutes process 0-300 seconds. If the fifth result equals the fourth, session 1 is complete at 5 minutes.
- If not, at 6 minutes process 0-360 seconds. At six minutes the result is stored for session 1 without testing.
A sketch of this sequence, together with the session handling below, follows the session descriptions.

Session 2 performs the same sequence as session 1, except that it begins at the end of session 1 instead of 0. So if session 1 ends at 120 seconds, then session 2 will begin at 121 seconds.

Session 3 performs the same sequence as session 2, except that it begins at the end of session 2. So if session 2 ends at 241 seconds, then session 3 will begin at 242 seconds.

At the end of session 3, the sessions are tested to see if the solutions from each are within the confidence guard of each other (much like RTK fixes are treated as groups in phase 1). If the three sessions do not all agree, a fourth session is collected and the session comparison is performed again. If three of the four agree, then the observation ends. If not, a fifth session is collected, and so on. The goal is to have three sessions that agree with each other. The sessions will not exceed 6 minutes in duration.
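
The whole proposal, sessions plus the three-in-agreement test, might look something like the sketch below. The process_rtpk() callback, the tolerances, and the cap on the number of sessions are placeholders of mine; no real RTPK engine is invoked here.

# Sketch of the proposed logic: within a session, RTPK reprocesses the
# cumulative data each minute and the session closes as soon as two
# successive results agree (or at the 6-minute cap); sessions continue
# back to back until any three of them agree within the confidence guard.
from itertools import combinations
import math
from typing import Callable, List, Tuple

Coord = Tuple[float, float, float]  # (north, east, up), feet

def agree(a: Coord, b: Coord, h_tol: float = 0.10, v_tol: float = 0.15) -> bool:
    return (math.hypot(a[0] - b[0], a[1] - b[1]) <= h_tol
            and abs(a[2] - b[2]) <= v_tol)

def run_session(process_rtpk: Callable[[int, int], Coord],
                start: int, cap: int = 360) -> Tuple[Coord, int]:
    """Process start..start+60, start..start+120, ... until two successive
    results agree or the session reaches the 6-minute cap."""
    previous = None
    elapsed = 60
    while True:
        result = process_rtpk(start, start + elapsed)
        if previous is not None and agree(result, previous):
            return result, start + elapsed   # session complete
        if elapsed >= cap:
            return result, start + elapsed   # stored without testing
        previous = result
        elapsed += 60

def observe(process_rtpk: Callable[[int, int], Coord],
            max_sessions: int = 6) -> List[Coord]:
    """Collect sessions until three of them mutually agree."""
    sessions: List[Coord] = []
    start = 0
    while len(sessions) < max_sessions:
        result, end = run_session(process_rtpk, start)
        sessions.append(result)
        for trio in combinations(sessions, 3):
            if all(agree(a, b) for a, b in combinations(trio, 2)):
                return list(trio)  # observation ends
        start = end + 1            # next session begins where the last one ended
    return []                      # no agreeing trio within the session cap

# Example with a stand-in processor that always returns the same coordinate,
# so every session closes at its second processing pass.
print(observe(lambda start, end: (100.0, 200.0, 50.0)))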

3. This is great news!!
 