LS Vector Data in Star*Net

T.Guisewhite

Active Member
A while back I made a post about trying to get the LS vector data into TBC so I could combine it with terrestrial data...well...at the end of the year MicroSurvey snagged me with a few hundred dollars off of their Star*Net software. I wasn't getting anywhere with Trimble, so I decided to take the plunge with Star*Net.

I got my first Star*Net project here to read in GPS vector data...it took some fiddlin'...but I have now figured out how to get the NGS G-file to read into Star*Net.

It's superior to bringing this into TBC in that it's not just post-processed raw data, but the real-time or DPOS post-processed vectors that you get from the LS...depending on what you choose after DPOS. That is what I was after...so I'm stoked on that!

Only issue...it's losing all my point descriptions in the NGS G-file process...Can anyone help with that?

Here's a screenshot of the Export options on the LS that I've been able to make the Star*Net import work with:
Export_To_NGS_GFile.png

Once you generate the G-file, you have to change the file extension from .Gfile to .ngs, and then Star*Net can import the data as a "Blue Book Gfile".
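If you end up doing that rename over and over, it's easy to script. A minimal sketch in Python (the folder name here is just a placeholder):

```python
from pathlib import Path

# Rename every exported NGS G-file so Star*Net will recognize it
# as a "Blue Book Gfile" (.ngs extension).
for gfile in Path("exports").glob("*.Gfile"):
    gfile.rename(gfile.with_suffix(".ngs"))
```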

With that configuration, here's what it looks like in Star*Net:
Belk Mill Starnet.png


But...see how it doesn't import any codes...any help or input is appreciated!

Thanks,
Timmy G.
 

Shawn Billings

5PLS
Because of the variations in how users create that fifth "description" field, it may be tricky to do. My "description" is actually the combination of the code, attributes, and description from J-Field. Others will have a different combination. Perhaps we can have the vector export use the export settings from the .txt export screen, or we'll simply need to add a "description export" option to the vector export screen. This will probably be a low priority since there are not many users exporting vectors on a routine basis.

The simpler solution, or at least the short-term solution for you, would be to use a spreadsheet program after adjustment to combine descriptions from a J-Field text export with a point list from Star*Net.
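If the spreadsheet step gets repetitive, the same join could be scripted. A rough sketch, assuming comma-delimited files where the point name is the first field and the description the second (the file names and layouts are placeholders; adjust to your own exports):

```python
import csv

# Build a point -> description lookup from the J-Field text export.
with open("jfield_points.csv", newline="") as f:
    desc = {row[0]: row[1] for row in csv.reader(f)}

# Append the matching description to each point in the Star*Net point list.
with open("starnet_points.csv", newline="") as f, \
        open("points_with_desc.csv", "w", newline="") as out:
    writer = csv.writer(out)
    for row in csv.reader(f):
        writer.writerow(row + [desc.get(row[0], "")])
```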
 

T.Guisewhite

Active Member
If I might make a software development suggestion...

I would love to see an .rw5 export option for GPS vector data. I'm fumbling with this G-file one...
Carlson is always doing their best to allow their software to "play well" with everyone. It would seem in line with their philosophy to help Javad in developing this?

A correct .rw5 exporter would enable Javad data to go into TBC... Star*Net... SurvNet... etc... (even if you could only mimic it and had to change the file extension)

When we think about "not many users" exporting vectors... I came from a large "Trimble" (I know I use that word too much) company and recently started my own gig. It's been my "large company" experience that ALL GNSS data was processed from vector data. Everything we did funneled through TBC as a quality-control step.

That's been my biggest hurdle in transitioning to Javad. I NEED a solution for the everyday combining of total station and GPS data where I can maintain a master data set for all my projects.

I guess I see it in the opposite light...if you fix this and make it more streamlined for the adjustment software packages...
My opinion = More users
The big companies may never bite on this if it doesn't integrate easily.

Don't get me wrong...I'm sold on this stuff...but struggling to redevelop my workflow with it.
 

T.Guisewhite

Active Member
Actually...I'll add to that.
The Star*Net folks are itching to add Javad to their lineup of supported equipment.

They sold me on this recent purchase by figuring out for me all the formats that they could import. I went through every checkbox and option on the G-file export, and they figured out which one actually worked.

If you tell me who they should contact...they may work with you to develop this?
I had three of their software developers eager to see screenshots of the LS and its export options.
 

Shawn Billings

5PLS
From that perspective, you can export vector component and variance/covariance data from the .txt file export. In this way, you could create an export that contained whatever description data you wish along with vector components. Star*Net would simply need to be able to correctly parse out the data fields from the text file. We only created the NGS format to make a simple export in an existing, well-known file type.
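As a sketch of what the parsing on the Star*Net side might look like (the column layout below is purely hypothetical, not an actual J-Field or Star*Net format):

```python
# Hypothetical whitespace-delimited vector line:
# FROM  TO  dX  dY  dZ  description...
def parse_vector_line(line: str):
    fields = line.split()
    frm, to = fields[0], fields[1]
    dx, dy, dz = (float(v) for v in fields[2:5])
    description = " ".join(fields[5:])  # whatever trails the components
    return frm, to, (dx, dy, dz), description
```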
 

T.Guisewhite

Active Member
I guess that's what I'm looking for help with...

This is a .gps file that is in the format Star*Net looks at (extension changed to .txt so I could attach it here).
I started to go through this to add descriptions and bailed...

If anyone could help me format a file like this that would be awesome...

This file is what Star*Net outputs when it reads the NGS G-file. All that is missing is the last little piece of the G1 line: the apostrophe and point description.

Should be simple to output this...change the extension from .txt to .gps and link it to the Star*Net project.
I don't know...I'm no programmer...sorry for the runaround.
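Something like this little Python sketch is what I'm imagining (I can't vouch for it; it assumes you already have a point-to-description lookup, and it guesses that the "to" point is the third whitespace-delimited field on each G1 line, so check that against the attached file first):

```python
# Hypothetical lookup of point name -> description (build it from a
# J-Field text export however you like).
desc = {"101": "IPF 5/8 REBAR", "102": "MAG NAIL"}

with open("project.gps") as f, open("project_desc.gps", "w") as out:
    for line in f:
        fields = line.split()
        # Tack 'description onto G1 lines whose "to" point we know.
        # (Guessing from/to are the 2nd and 3rd fields -- verify first.)
        if fields and fields[0] == "G1" and fields[2] in desc:
            line = line.rstrip("\n") + " '" + desc[fields[2]] + "\n"
        out.write(line)
```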
 

Attachments

  • 19-008 4515 Belk Mill Rd.txt
    34.9 KB

Jim Frame

Well-Known Member
My solution to the g-file descriptor problem is a workaround, but it's actually pretty easy to accomplish. In addition to the g-file export, I export an ASCII text coordinate file that contains the descriptions. I then reformat that file to Star*Net's C record format and include those records in my Star*Net dat file, but without any constraints. (If I have a lot of points, I'll store them in their own dat file so I don't clutter up the primary dat file.) Star*Net then knows to combine the descriptions with the vector data.
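The reformat itself is scriptable too. A minimal sketch, assuming a comma-delimited coordinate export of point, north, east, elevation, description (adjust to your own export layout):

```python
import csv

# Turn an ASCII coordinate export into unconstrained Star*Net C records,
# with the description set off by an apostrophe.
with open("coords.csv", newline="") as f, open("descs.dat", "w") as out:
    for pt, north, east, elev, descr in csv.reader(f):
        out.write(f"C {pt} {north} {east} {elev} '{descr}\n")
```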
 

T.Guisewhite

Active Member
That will actually solve another issue I just found...that the NGS G-file does not include any offset measurements.
Your method will keep those in the mix...I'll try that.
 

Sdrake14

Active Member
If I might make a software development suggestion...

I would love to see an .rw5 export option for GPS vector data. I'm fumbling with this G-file one...
Carlson is always doing their best to allow their software to "play well" with everyone. It would seem in line with their philosophy to help Javad in developing this?

A correct .rw5 exporter would enable Javad data to go into TBC... Star*Net... SurvNet... etc... (even if you could only mimic it and had to change the file extension)

When we think about "not many users" exporting vectors... I came from a large "Trimble" (I know I use that word too much) company and recently started my own gig. It's been my "large company" experience that ALL GNSS data was processed from vector data. Everything we did funneled through TBC as a quality-control step.

That's been my biggest hurdle in transitioning to Javad. I NEED a solution for the everyday combining of total station and GPS data where I can maintain a master data set for all my projects.

I guess I see it in the opposite light...if you fix this and make it more streamlined for the adjustment software packages...
My opinion = More users
The big companies may never bite on this if it doesn't integrate easily.

Don't get me wrong...I'm sold on this stuff...but struggling to redevelop my workflow with it.

This is a powerful statement with insight into the mind of a surveyor. All of the proprietary bullying and its hobbling effects are the very reason I abandoned brand loyalty. I have viewed companies such as Javad and Carlson as one solution to this. I highly value the provided ability to be creative in my work and to use tools adaptable to my very real, critical, and all-embracing methods.

The ability of a product to move into Industry 4.0 gracefully is, I think, the only survival tactic that will keep it and its coders relevant. Trends like the Internet of Things also seem to recognize the value of integration between products, and I think they will force compliance or render products obsolete.

Six-pack RTK is relevant, but so too is my plumb bob when I need to verify adjustments to equipment, and so too are angles, distances, and leveling. An adjustment is often also a tool for combining historic record data with today's work, a required solution even when I may not use traditional tools.
 

T.Guisewhite

Active Member
My solution to the g-file descriptor problem is a workaround, but it's actually pretty easy to accomplish. In addition to the g-file export, I export an ASCII text coordinate file that contains the descriptions. I then reformat that file to Star*Net's C record format and include those records in my Star*Net dat file, but without any constraints.

This is along the lines of Jim’s workaround...I figured this out...I’m pretty stoked on it:

So I had not previously been using the “code” field for anything...but it comes in handy for this workaround for getting descriptions into Star*Net.
I set up a code named “C”
For the “Code Description” it looks blank...but there is a [space] in there...
This code can quickly be tagged to all points using batch edit...
20190125-20.32.05_00876_Edit_Code.png


When you go to the text file exporter, I set up a “Preset” called “Star*Net .txt to .dat”.
In this format, the “Code Name” comes first, followed by Point Name, Northing, Easting, and Elevation.
These fields are formatted with the separator “Sep.1” set to [Spc] = Space.
Then insert the “Code Description” followed by “Sep.2” set as a custom output of an [ ‘ ] = apostrophe.
Finally, end with the Description, which in my case is the point description of the located object.
The [+] sign after “Code Description” indicates that it uses the “Sep.2” delimiter...(this took some fiddling)
20190125-20.31.51_00876_Export_File_Format.png

In this format:
Each line begins with C, which is Star*Net’s coordinate input designator.
The fields are delimited by spaces, which is Star*Net’s .dat format.
The trick is the [Space] as the “Code Description” followed by the apostrophe...
What it does is sneak a space in there and make the apostrophe precede the “Description” while still being space-delimited from the Elevation field.
The apostrophe is Star*Net’s delimiter that precedes the point description.

This outputs a .txt as seen below:
Spaces between each field, and the trick Apostrophe in there...
20190125-20.33.19_00876_Export_File_Preview.png
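Each line in that export ends up looking something like this (point number, coordinates, and description made up for illustration):

```
C 101 4915.1449 2391.5142 456.78 'IRON PIN FOUND
```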


Once this is exported, you can change the extension from .txt to .dat and directly link this to the Star*Net Project...No formatting or editing at all.

Then output the NGS “Gfile”.
This file needs its extension changed from .Gfile to .ngs, and it can also be directly imported into Star*Net via their GPS vector importer.

The combination of these two files as separate .dat files does as Jim said...it links the descriptions with the GPS vector data.

I still would love to see an .rw5 file in the future...but I'm satisfied that this method merges the data without any manual editing.

Hoping this helps someone else out there with Star*Net import.

I am currently working with their techs to come up with a custom output of RTK vector data that would skip the two-file approach...If we get that ironed out, I'll post that format also. I suspect...however...it may require an online converter via their help & support webpage...and if that is an extra step, keeping the two-file approach may prove faster and more efficient anyway. It's not too hard to change two file extensions...but please make .rw5 available, as that would be one clean file...
 

Jim Frame

Well-Known Member
Another thing to keep in mind is that redundant observations will have unique point numbers, so you'll need to edit either the g-file or the .gps file to change the point numbers on repeat observations. For example, if I observe control point XRAY and store it as point 2, then later return to XRAY and observe it again as point 6, I'll have to change the 6 to 2 in either the g-file or the .gps file in order to properly adjust the observations. It's easy enough to do that I haven't felt a need to automate the process.
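If it ever does get tedious, though, it would be simple to script against the .gps file. A rough sketch (it guesses that the from and to points are the 2nd and 3rd fields on each G1 line, which you'd want to verify against your own file):

```python
# Map repeat-observation point numbers back to the original point,
# e.g. point 6 was a reobservation of point 2.
aliases = {"6": "2"}

with open("project.gps") as f, open("project_fixed.gps", "w") as out:
    for line in f:
        fields = line.split()
        # On G1 lines, swap the "to" point number if it's an alias.
        if fields and fields[0] == "G1" and fields[2] in aliases:
            fields[2] = aliases[fields[2]]
            line = " ".join(fields) + "\n"
        out.write(line)
```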
 

T.Guisewhite

Active Member
Do you change the “to” number on the G1 line?
Or do you fool around with all the alias commands?

Once these are changed, do you delete them from the coordinate .dat file?
 

Jim Frame

Well-Known Member
I usually edit the g-file, changing the forepoint number of the C record, but if I edit the .gps file instead, then yes, I change the "to" point number on the G1 line.

I'm not consistent about deleting unused point numbers from the .dat file, as it doesn't matter if they're in there or not. If there aren't any observations associated with a point and it's not fixed or partially fixed, it's ignored in the adjustment.
 

T.Guisewhite

Active Member
I was toying with a new idea this week...hoping for some feedback.

I read on here a lot, and Adam has told me that 180 seconds is a good rule-of-thumb threshold to cross in order to avoid bad fixes.

I did a very large survey under full woods canopy this past week, and at every boundary corner I took three 60-second observations in my boundary profile. As you all know well, the three 60-second shots take longer than 3 total minutes...so really I have between 4 and 10 minutes of time on every corner. With this repetition in the woods, I felt real comfortable that it was giving me good solutions, because the checks were good.

I brought these three vectors per point into Star*Net and had good redundancy to process a least-squares adjustment from...plus it gives a weighted solution for the actual resulting coordinates...I love what I see in the network report...the results were phenomenal.

Just curious what you all think of this shortcut to my occupation time in order to achieve not only double checks in the field, but redundancy for the adjustment.
 

Jim Frame

Well-Known Member
I'm not sure there's a lot of difference between the two approaches. With 3 separate shots you're getting more Phase 1 time to weed out bad initial fixes, and three Phase 3 cycles (if I have my phases right) to verify the fix. On the other hand, you have less time to mean the epochs in each of the 3 observations, which should be reflected in the error estimate of each observation, while a full 3-minute observation (however long it takes) should have a smaller error estimate (though probably not by a lot). Perhaps the biggest advantage of doing 3 separate observations is that you have less to lose with each one should the shot bomb out near the end.

FWIW, I now use a 240-second standard for control shots after hearing some folks talk about occasional bad fixes at 180 seconds.
 

Matt Johnson

Well-Known Member
5PLS
I generally discourage dividing an observation like this. The best post-processing results are obtained with a single long observation.
 

Nate The Surveyor

Well-Known Member
I've been using 200 seconds. If it's critical and somewhat obstructed, then it gets done 2x and averaged.
Shot spread is my field indicator this way, so I can do it again if the spread is excessive.
N
 

Sdrake14

Active Member
I generally discourage dividing an observation like this. The best post-processing results are obtained with a single long observation.
Matt, what is J-Field doing if I average 3 independent shots (RTK or PPK)? I remember once in the woods I averaged three 15-minute shots, and it seems like the error statistics changed...?
 

T.Guisewhite

Active Member
I guess my theory on this was that I can watch for any bad fixes as the points are compared in the field?
If one is an outlier...delete...and repeat until three match. The shorter occupations just quicken the process, and there wasn't one of these that I was on for less than 5 total minutes. Many were more than 10 minutes...but I located over 9,000' of boundary between 10 am and 3 pm!

It didn't pass the Chi-Square test because I don't have all my Star*Net error estimates honed in quite yet.
I only had one outlier in the whole process, and it happens to be the one that I left there for a long occupation...So I guess the error ellipse is a direct reflection of the single-vector error estimates from J-Field?

(Actually, I believe I just confirmed it is...almost...identical to the error statistics from J-Field multiplied by the Total Error Factor shown on the network report.)
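(For example, with invented numbers: a J-Field error estimate of 0.008 m on a vector, multiplied by a Total Error Factor of 1.25, would give an adjusted estimate of about 0.008 × 1.25 = 0.010 m.)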

A snap of the Network Report attached...
Each one of these is an adjustment of multiple vectors, so the observations with multiple vectors drastically cut down the error estimates in the report.
If you look at the jumps in the point numbers, you can see how many observations went into each point.

I guess the long and short of this is that it's an example of how redundancy gives a better error estimate on the end result. That was my idea in trying to stay on location for a similar amount of time while collecting as many points as I could. BUT...I also need to give J-Field a chance to do its "thing". {Which is a unique "thing," by the way...knocking it outta' the park here ;)}

Hope I don't wear out this thread talking about adjustments, but I ALMOST always have some sort of total station data to add to my mix...So far I'm real stoked on the results of Star*Net and hope these discussions help someone else down the road.
 

Attachments

  • StarNet.png
    56.6 KB
  • 20190208-11.05.18_00876_Base___Rover_Statistics.png
    33.6 KB
  • Adj Summary.png
    15 KB

Jim Frame

Well-Known Member
The best post-processing results are obtained with a single long observation.

I agree, if you're going to run the file through a baseline processor. But my understanding is that the OP is doing what I usually do: take multiple short (< 10-minute) RTK observations and use the g-file to adjust in Star*Net without reprocessing the observation data. In my case, the "multiple" part means returning to the point hours or days later, not just reobserving under the same conditions.
 