Sunday, November 29, 2015

Field Activity #8: Arc Collector

Introduction

This week's field activity has us creating our own data collection parameters for a feature class in ArcMap by designing our own fields and domains.  Though this may seem like a simple task, ensuring you have every field and domain pertinent to a study is surprisingly complicated, and a well designed feature class requires in-depth planning and commitment before any data is collected.  After creating the feature class I will use the Arc Collector application on my Android phone to collect point locations and the attributes associated with each location.


Methods

In the first portion of this activity we were introduced to the different parameters available when setting up domains in ArcMap.

A domain is a range or list of valid attribute values which can be applied during data collection to streamline the process and help keep the values entered in each field accurate.

Domains can be set to various field types including short and long integer, float, double, text, and date.  These field types help you maintain consistency in the data values, which is especially important when multiple people are collecting data.  For example, if a group were collecting street address data, the word "Avenue" could be recorded a number of ways: "Ave.", "ave.", "Avenue", "avenue", etc.  Each variation registers differently in ArcMap, so a search for "avenue" would not return all of the results.  You can also use domains to set an allowable range of values, such as for temperature, to ensure you don't end up with 300 degrees Fahrenheit when you intended to type 30 degrees.  Used properly, domains eliminate errors which are time consuming to fix later and allow data to be collected faster in the field.

To practice setting up a domain for collecting points on campus, I chose to map some of the signs around campus.  I set my domains up to what I thought would thoroughly cover the variations of signs around campus.  Before completing the assignment I decided to collect a few points to see how well it was set up.  The first sign I collected proved I had set up my domains and fields poorly.  The first issue was that I chose specific colors and did not allow for an "other" color value; my first sign was a color I had not entered in my domain.  My second issue wasn't related to my domain but to my preparation: I had set a "height" field for the signs, but I did not have any measuring device to collect that information properly.  I collected a few other points with no other noticeable issues.  With these lessons learned, I felt I was ready to design and prepare my geodatabase and domains for the second portion of the assignment.

I chose to survey businesses around the Eau Claire area on Thanksgiving Day and record whether they were open or closed.  I recorded the date, whether the business was open or closed, what time an open business would be open until, what time I recorded the data, and the type of business.

I created a new geodatabase within ArcMap and began creating my domains in the Database Properties menu (Fig. 1).

For the date domain, I set the only allowable date as Thanksgiving Day (11/26/2015).  For the open domain I set a coded domain of Yes or No.  I also set a coded domain for the types of businesses, including Bar, Grocery Store, Gas Station, and Other.  I attempted to set the "open until" field to accept any time value, but something went wrong with this domain and I will be investigating what happened.  The time-of-collection domain was set improperly as well, as I could only type values in decimal form and could not use a colon.  I am not sure if this is an ArcMap issue or a problem with the manner in which I set up the domain field.


(Fig. 1) Domain setup for my assignment in ArcMap.
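For anyone who would rather script this setup than click through the Database Properties menu, the same domains can be built with arcpy.  This is only a rough sketch of the idea; the geodatabase path, domain names, and coded values below are my assumptions based on the description above, not the exact names I used.

```python
import arcpy

# Assumed path to the geodatabase created for this assignment
gdb = r"C:\FieldActivity8\Thanksgiving.gdb"

# Coded domain for whether the business was open
arcpy.CreateDomain_management(gdb, "OpenStatus", "Was the business open?",
                              "TEXT", "CODED")
for code, desc in [("Yes", "Yes"), ("No", "No")]:
    arcpy.AddCodedValueToDomain_management(gdb, "OpenStatus", code, desc)

# Coded domain for the type of business
arcpy.CreateDomain_management(gdb, "BusinessType", "Type of business",
                              "TEXT", "CODED")
for biz in ["Bar", "Grocery Store", "Gas Station", "Other"]:
    arcpy.AddCodedValueToDomain_management(gdb, "BusinessType", biz, biz)

# Range domain limiting the collection date to Thanksgiving Day 2015
arcpy.CreateDomain_management(gdb, "SurveyDate", "Collection date",
                              "DATE", "RANGE")
arcpy.SetValueForRangeDomain_management(gdb, "SurveyDate",
                                        "11/26/2015", "11/26/2015")
```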


The next step was to create a feature class for the point locations of the businesses.  After creating the feature class I assigned field names in the Feature Class Properties menu.  I added Date, Time, Open, Opentil, and Business_Type fields.  After adding the fields I applied the above domains to the attributes to assist me in proper data collection (Fig. 2).

(Fig. 2) Feature Class Properties with Field Name and Domains applied.
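The feature class and fields can be scripted the same way.  Below is a minimal arcpy sketch of that step; the field names match the ones listed above, but the geodatabase path, spatial reference, and field types/lengths are assumptions.

```python
import arcpy

gdb = r"C:\FieldActivity8\Thanksgiving.gdb"   # assumed geodatabase path
sr = arcpy.SpatialReference(4326)             # assumed WGS 1984 for the web map

# Point feature class for the surveyed businesses
fc = arcpy.CreateFeatureclass_management(gdb, "Businesses", "POINT",
                                         spatial_reference=sr)

# Add the fields described above and tie them to their domains
arcpy.AddField_management(fc, "Date", "DATE", field_domain="SurveyDate")
arcpy.AddField_management(fc, "Time", "TEXT", field_length=10)
arcpy.AddField_management(fc, "Open", "TEXT", field_length=5,
                          field_domain="OpenStatus")
arcpy.AddField_management(fc, "Opentil", "TEXT", field_length=10)
arcpy.AddField_management(fc, "Business_Type", "TEXT", field_length=30,
                          field_domain="BusinessType")
```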
My next step was to create a map in Arc Collector which I will be able to connect to through the application on my phone.  The map will allow me to display the data I collect.  To understand the steps required to create the map I was directed to http://doc.arcgis.com/en/collector/ by my professor.  This was the same process I used when I practiced collecting data around campus.  Next, I created a map for my Thanksgiving Business Survey using the Streets basemap from ArcGIS.


Once the previous steps were completed I was ready to go out and collect my data.  The weather was not ideal, as it was sleeting/snowing and the temperature was around 30 degrees Fahrenheit.  Due to the inclement weather, all of my data points were collected from inside my car.

After collecting all of my points I went back to the map I created in ArcGIS to confirm all of my data was properly stored.  From the map window I could see the attribute data for the point locations I had entered (Fig. 3).
(Fig. 3) Display of Thanksgiving data points on ArcGIS.com under my personal content.
I searched for a way to change how the point locations were symbolized on the map, but I was not successful.


Discussion

Creating a geodatabase with a single feature class and attributes does not seem like an important or particularly complicated task.  However, it is both important and complicated.  The extra work you do ahead of time will save you endless hours at the end of a project.

In my case I made errors in both the fields and domains I created.  The errors could easily be overcome by taking notes in the field and editing the fields when I got back to my computer.  However, had I taken a little more care and done some preliminary scouting, I would have been able to eliminate almost all of the errors I encountered.  I feel this is probably the most important takeaway from this assignment.  You can sit at a computer and contemplate all of the factors you will run into in the field, but you will never think of them all.  Giving yourself the flexibility to add notes and "other" values to properly account for the variables you didn't think of is a must.

Additionally, I believe if you have the opportunity to head out into the field and "scout" or test your fields and domains, you should.  This will quickly bring issues to your attention so you can fix them before the actual data collection occurs.  If a scouting mission is not feasible, examining other databases similar to yours may also bring attention to issues you have not thought of.

I am assuming you can download or extract the data from Arc Collector for use in ArcMap but I have not experimented with that.  I was disappointed when I was unable to change the way the point locations were displayed on the map.  I believe the majority of this needed to be preset when the map was created online.  This is just one more preliminary step you must complete to be successful with Arc Collector.

Conclusion

Preparation is the key when it comes to creating a geodatabase and feature classes from scratch.  The more precise work you do ahead of time, the less time you will have to spend at the end fixing errors in the data.  Taking quality notes and having an "other" option is also crucial due to unscripted occurrences in the field.

Saturday, November 21, 2015

Field Activity #7: Topographical Survey


Introduction

Field Activity #7 was a combination of two separate class periods focusing on two different surveying techniques and tools used to create a topographical survey.  Our class utilized a dual-frequency GPS the first week and a total station the second week.  Both devices are survey-grade units, and we will be comparing their accuracy and usability.

Dual-Frequency GPS

The TopCon HiPer SR, TopCon Tesla, and a MiFi hotspot are the three components of the dual-frequency GPS system we used in this exercise.  To achieve the accuracy the units are capable of, all the components must be used together.

The TopCon HiPer SR unit is a GPS receiver/antenna (Fig 1).  The TopCon has a static horizontal accuracy of approximately 3mm and a static vertical accuracy of approximately 5mm.


(Fig. 1) TopCon HiPer SR

The TopCon Tesla is a field controller allowing you to do a multitude of functions (Fig. 2).  For this exercise we will be using it to collect our location and elevation while it is paired with the HiPer SR.

(Fig. 2) TopCon Tesla handheld unit

To record the most accurate data possible, the HiPer must be held perfectly vertical.  By attaching the HiPer to a tripod fitted with a level we were able to achieve a near-perfect vertical angle.  To properly record the elevation (Z value) we must also know the height of the HiPer; the tripod we used was 2 meters in length, and there is a parameter in the Tesla to account for this.


Total Station

The total station setup utilized the HiPer and Tesla from the dual-frequency section above, along with a Topcon total station (Fig. 3) and a prism (Fig. 4).  The total station is mounted on a heavy-duty adjustable tripod, and the prism is mounted on a monopod pole with a level.

(Fig. 3) Similar unit to the Topcon Total Station we utilized during this exercise.
(Fig. 4) Prism which is mounted on top of a pole to gather point locations.


Methods

Dual-Frequency GPS

For this section of the lab I was assigned a partner; it just happened to be Casey from a few previous field activities.  Before we could collect any data we had to complete the initial setup of the Tesla unit to conduct the survey with the dual-frequency GPS.  After creating a new job within the Tesla we set the parameters for our location and use.  These included Projection: UTM North Zone 15 and Unit of Measurement: Meters; the majority of the remaining parameters were left at their default settings.

After preparing the Tesla we headed out into the "Mall" area of the University of Wisconsin-Eau Claire campus.  Casey and I decided to survey a portion of the stream which runs through campus.  The stream bank offers a fair bit of elevation change, and we felt it would display the accuracy better than subtle elevation change would (Fig. 5).

(Fig. 5)  Study area near the UWEC Campus mall area.


Per the assignment we had to collect at least 100 points.  When we began the data collection Casey was placing and leveling the HiPer and I was operating the Tesla.  She would level the HiPer and give me the go-ahead to "Save" the point.  We continued this method for 25 points and then switched jobs so both of us would get equal experience with both instruments.  We switched again when we reached 50 and 75 points.  Our Tesla registration ran out just before we started the field activity, so we were limited to using the Tesla in demo mode.  In demo mode we were only able to collect 25 points per job, so to get around this issue we created four jobs to collect our 100 points.

(Fig. 6) Casey collecting the first point with the Dual-Frequency GPS.


After collecting our 100 points we exported the data to a flash drive from the Tesla using the To File feature within the Exchange menu.  Once on the flash drive, we opened the four separate text files and copied the data into one combined file.  Utilizing ArcCatalog I created a shapefile from the XY table.  After importing the shapefile into ArcMap I used the Kriging interpolation method to create a display of the elevation change across the data points collected (Fig. 7).


(Fig. 7) Display of the dual frequency elevation data using the Kriging interpolation method.
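The same steps can be scripted once the export format is known.  The sketch below assumes comma-delimited text files with Name, X, Y, and Z columns and UTM Zone 15N coordinates; the file names and field names are placeholders rather than the actual Tesla export layout, and the Kriging settings are just a basic ordinary Kriging setup.

```python
import arcpy
from arcpy.sa import Kriging, KrigingModelOrdinary

arcpy.env.workspace = r"C:\FieldActivity7"   # assumed working folder
arcpy.CheckOutExtension("Spatial")

# Combine the four demo-mode job exports into one table (placeholder file names)
jobs = ["job1.txt", "job2.txt", "job3.txt", "job4.txt"]
with open("combined.csv", "w") as out:
    out.write("Name,X,Y,Z\n")
    for job in jobs:
        with open(job) as f:
            next(f)                # skip each file's header row
            out.writelines(f)

# Turn the XY table into a point feature class (NAD83 UTM Zone 15N, EPSG 26915)
sr = arcpy.SpatialReference(26915)
arcpy.MakeXYEventLayer_management("combined.csv", "X", "Y", "gps_pts", sr, "Z")
arcpy.CopyFeatures_management("gps_pts", "survey_points.shp")

# Interpolate an elevation surface with a basic ordinary Kriging model
surface = Kriging("survey_points.shp", "Z",
                  KrigingModelOrdinary("SPHERICAL"), 0.5)  # 0.5 m cell size
surface.save("elev_dualfreq.tif")
```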


Total Station

The total station survey started out the same as the dual-frequency survey, with the exception of the weather and a new group.  Our professor was nice enough to assign us to groups of three since there are so many pieces of equipment to use.  For this section of the assignment I worked with Katie and Nik.  We prepared the Tesla in the same manner as in the previous survey.

Following directions from our professor, we used a starting location over a containment box of some sort in the "Mall" area.  This box provided a fixed point which would not move and made leveling the tripod and total station easier.

Before leveling the total station we had to collect the location of the total station (the occupied point) and our backsight points.  We collected these points in the exact same fashion as in the dual-frequency exercise.  These points are used to locate and determine the orientation of the total station.

Once the orientation points were located, the total station was leveled and prepared to collect data points.  We utilized the Bluetooth capability to connect it to the Tesla and collect our data points.  The total station fires a laser at the prism and uses the return information to calculate the location.  Instead of moving the GPS around as with the dual-frequency survey, we just moved the pole-mounted prism around and made sure it was perfectly vertical using the attached level.  Once the pole was vertical we aligned the crosshairs in the total station viewer with the center of the prism and used the Tesla to record the point location and elevation.

Unfortunately the issue with the Tesla was not resolved by the time we collected points with the total station, so we were limited to 25 points for this data collection section.  I repeated the same steps for creating a shapefile and used the Kriging interpolation method to create a display of the elevation.

(Fig. 8) Display of the total station elevation data using the Kriging interpolation method.
Discussion

Since both of the instruments are survey grade, I was interested in comparing their accuracy.  However, due to some technical difficulties and weather, we were not able to collect data points in the same location with both units.

The dual-frequency system is easier to prepare for data collection.  However, if you were going to collect a large number of points in one area it may not be the most efficient method, since the tripod has to be leveled for each point.  For the 100 points we collected I feel it was satisfactory, and once we had a rhythm the leveling wasn't terrible.  If we had to collect 500 or more points I would definitely consider using the total station.

The total station takes a little more work to prepare, between leveling and collecting the backsight points.  Once you have it set up, though, you can collect points in a hurry, and the accuracy is higher provided you have all the parameters set correctly.

When we were setting up the total station we had issues getting the Tesla and HiPer to collect the occupied (OCC) point and the backsight points.  This issue took far too long to fix and pressed our group for time to complete the exercise, due to an incoming storm and a class we had to attend.  Somewhere in the rush we must have missed a parameter.  Comparing the two maps above, you will notice the dual-frequency results are roughly 30 meters higher than the total station results.  This is not actually the case, as the dual-frequency survey area is lower than the area where we set up the total station.  There are a number of parameters in the total station which account for the height dimensions of the equipment, and I am guessing one of those was improperly computed or mis-entered.
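Since the two surveys did not cover the same spot, a simple way to quantify the apparent offset would be to compare summary statistics of the two Kriging surfaces.  A sketch of that check, using placeholder raster names for the two outputs:

```python
import arcpy

arcpy.env.workspace = r"C:\FieldActivity7"   # assumed folder holding both surfaces

# Placeholder names for the dual-frequency and total station Kriging rasters
for raster in ["elev_dualfreq.tif", "elev_totalstation.tif"]:
    stats = {}
    for prop in ("MINIMUM", "MAXIMUM", "MEAN"):
        result = arcpy.GetRasterProperties_management(raster, prop)
        stats[prop] = float(result.getOutput(0))
    print("{}: min {MINIMUM:.2f} m, max {MAXIMUM:.2f} m, mean {MEAN:.2f} m".format(
        raster, **stats))

# A roughly constant ~30 m gap between the two means would point to a missed
# instrument-height or antenna-height parameter rather than real terrain.
```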

Overall, neither of these units is complicated to operate, but they are complicated to operate while achieving accurate and proper results.  To become proficient with them I would need more training to understand the ins and outs of the settings and parameters within the devices.

Lastly, these devices are finicky and can throw temper tantrums at a moment's notice.  The issue we had connecting the Tesla to the HiPer only took restarting the Tesla to fix, but when we turned the Tesla off we could not get it to turn back on.  This happened to multiple groups and even the professor.  I am not sure if this is the case with all of them, but it is an expensive tool to have not work properly at times.  This is just one example of the many connection issues we had.

Conclusion

These field exercises provided me with a great deal of experience with various surveying components and tools.  This was a big change from many of the "low tech" methods we have been using since the beginning of the semester.  To achieve high accuracy, the "high tech" method would be my choice.  Though it can throw you a curveball once in a while, with the proper setup you can achieve unparalleled results.

Sunday, November 1, 2015

Field Activity #6: Navigation with Map and Compass

Introduction

This week's lab is a continuation of Field Activity #5, where I created two different navigation maps.  Our group of three will be utilizing these maps to plot and navigate to five different point locations set up by our professor at the UWEC Priory.  The five points were marked on trees using bright pink ribbon and labeled with the appropriate number.  Our tools for navigation were limited to the UTM map we created and a standard compass (Fig. 3).  Each person in our group had a specific job during the navigation process: one person was the compass holder and pace counter, another stayed back to make sure the pacer stayed on the correct course, and the final person was the runner (going wherever they were needed).  We were given a GPS unit for backup, as from year to year some of the ribbons on the trees seem to disappear.

Methods

To start the exercise our group was given a list of five locations with their corresponding UTM coordinates.  Our first step was to plot the points in the correct locations on our maps.  Each group member plotted the points on their own map, and we compared the point locations on each other's maps to verify they were correct.  The list was organized, but there was excess information cluttering the page and I read the incorrect number for one of the locations and had to correct it.  We then decided on a starting location which was easily identified on our map and plotted it as a point.  After plotting and numbering the points we drew connecting lines between all of them.

(Fig. 1) Casey and Katie preparing their maps for the exercise.



(Fig. 2) My map with point locations and lines drawn. 



After preparing our maps we were given instructions from Dr. Hupy on how to use our compass for navigation.  The first step was to line the edge of the compass up with the line you wish to navigate.  While holding the compass on the line, I adjusted the compass housing (bezel) so the north arrow and the orienting lines on the compass lined up with north and the grid lines on the map.  This gave me the correct bearing angle to our first point.

The next step was to calculate the distance we needed to travel to reach the first point.  Using the centimeter ruler on the baseplate of my compass I measured the distance from our starting location to point 1.  I measured 8 cm which equals 280 meters in the real world.

With the bearing set, my group member Casey took the compass.  Standing at our starting point, she held the back of the compass (the end opposite the direction-of-travel arrow) to her chest and rotated herself until the magnetic needle (red) was in line with the orienting arrow (the "shed"), a technique also referred to as "red in the shed".  With "red in the shed", Casey was now facing in the direction of our first location.

(Fig. 3) An example of a compass similar to the one we used in the exercise.

From our previous calculation we knew the first point was 280 meters from our starting location.  Casey stated her pace count from Field Activity #5 was 67 paces per 100 meters.  This meant she would have to take about 187 paces to reach point 1 if the ground were perfectly level.  However, the ground was not level, and our direction took us into the forest.
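The arithmetic for a leg like this is simple enough to jot down as a small formula.  The sketch below just reproduces the numbers above; the 1:3,500 scale is inferred from the 8 cm = 280 m measurement, not a value printed on the map.

```python
# Converting a measured map distance into ground distance and a pace count.
MAP_SCALE = 3500           # inferred: 8 cm x 3,500 = 28,000 cm = 280 m
PACES_PER_100M = 67        # Casey's pace count from Field Activity #5

def leg(map_cm, scale=MAP_SCALE, paces_per_100m=PACES_PER_100M):
    ground_m = map_cm * scale / 100.0           # map cm -> ground meters
    paces = ground_m * paces_per_100m / 100.0   # paces needed on level ground
    return ground_m, paces

ground_m, paces = leg(8.0)
print("Ground distance: {:.0f} m, about {:.1f} paces".format(ground_m, paces))
# -> Ground distance: 280 m, about 187.6 paces (the 187 steps used above)
```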

I had stayed back where we started to make sure her direction stayed true.  This method was only effective until she hit the tree line and then she was gone.  I caught up with her at the tree line and continued to watch her bearing.  Casey walked her 187 steps and we did not see a marker anywhere.  I remained at the location where her 187th step was, while she continued on her bearing direction to see if she could locate the point.  Meanwhile our runner (Katie) was wandering out front of our location to see if she could spot the marker for the point.

(Fig. 4) Casey and Katie searching for the first point location.


It wasn't long before Katie located a marked tree we believed to be the point.  After further inspection of the marker it was not our point.  At this point we were not sure how far off we were, so we consulted the GPS to compare our location to where we should have been.  After checking the GPS we realized we still had not gone far enough in the bearing direction and were off to the east a slight bit.
(Fig. 5)  The first tree we located with a marker, though it wasn't the one we were looking for.



After a short distance we ran into another marker on a tree.  We believed this marker was the point we were in search of.  Assuming this was our first point, we consulted the map and calculated the bearing and distance to the second point.  I took the job of pacing and holding the bearing with the compass.  We were at the edge of a very steep ravine, so pacing was not going to yield accurate results.  Before I started walking I judged the distance across and used my pace count to estimate how many steps it would take me to walk it.  After reaching the opposite side of the ravine and checking my bearing, I continued pace counting along my bearing.  After a very short distance Katie (I believe) found a marker on a tree.  Now we were confused again.  We consulted the GPS and confirmed we were finally at the proper location for point 1.  Inspecting the map more closely, there were contour lines which should have given us an idea we were on the wrong side of the ravine when we thought we had found the point.  (More on this in the discussion.)

(Fig. 6)  Casey (Left) and myself (Right) examining the map for navigation from point 1 to point 2. 



Resetting my count and bearing, we headed for point 2 using our prior calculations.  The terrain made it difficult to keep a good pace count and hold your direction, and there were a number of ravines I would have had to cross to keep my bearing true.  I employed the same method of estimating the distance across them.  When we arrived at the spot where I felt point 2 should have been, it was nowhere to be seen.  Having been down this road before, we consulted the GPS to check which direction we were off.  This time we had traveled far enough, but our bearing was off slightly, which put us a little to the west.  We were still at the top of the ravine, so we knew we had to descend and guessed we would have to go up the other side.  When we hit the bottom of the ravine we checked the GPS, and it seemed as if we had gone too far to the east.  After some wandering, we located the tree marker lying in the bottom of the ravine.  Using the GPS we finally located point 2, which was about halfway down the ravine slope.

From point 2 we calculated the bearing and distance to point 3.  Katie took her turn at pacing and holding the bearing, which took us back up the slope of the ravine.  When we arrived where Katie said the location should be, we could not immediately see it.  After about 30 seconds I located the tree with the marker.  Katie was almost perfect on distance, but her bearing was off very slightly.

We used the same procedures for points 4 and 5.  We used the GPS to find location 4, as the marker on the tree was not there.  As we navigated from point 4 to point 5 it was getting close to the end of class time and darkness.  Casey took another turn at pacing.  When we reached where the location should have been, we didn't immediately see it.  For the last time we consulted the GPS; we were not far off, but after a quick search we decided to head back to the parking lot where we started the activity.  We used our map to get a bearing to follow for our return trip and successfully navigated back to our origin.

Discussion

This activity was full of learning experiences.  I was familiar with pacing prior to this activity; however, I had never performed pacing in terrain which varied in elevation this much.  The majority of my experience was on mowed trails which were relatively flat.  The method I employed for estimating distance across the ravines worked fairly well, but I am sure there is a better method.

The map I created for this exercise left a fair bit to be desired.  Prior to heading out into the field I felt as though the imagery of the trees would be helpful in navigation, but the basemap imagery did not help us locate ourselves on the map while in the forest.  The second issue I had with the map was the contour lines.  The spacing was probably adequate, but the labels did not allow a quick assessment of the actual topography.  I believe using the DEM to create a better basemap and display of the topography would have greatly assisted us in our navigation.

The 50 m spacing of the grid lines made it tough to plot an exact location for the points.  Add that grid coarseness to the size of the dot we made on the map with the marker and you already have roughly a 10 meter approximation of location.  In the woods, with foliage still on the trees, this approximation makes places difficult to locate.  Additionally, I made the grid color fairly transparent so as not to hinder the visibility of the contour lines.  What I did not consider was that we had to use these grid lines to set our bearings.  Doing it over again I would make them easier to see for a more accurate bearing.  Smaller grid spacing, a larger scale, and an alternate basemap would have greatly increased our navigation accuracy.

Despite the issues we had with the first point, overall our navigation was fairly accurate.  I overlaid our track log from the GPS on the DEM along with point locations 1-5.  From this map you can see we were in the ballpark of all the point locations, and you can also see the trouble we had locating the second point.  The fifth point, the one we gave up on due to darkness, is depicted at the bottom of the ravine, which we would not have been able to see from the top.

(Fig. 7) DEM with track log from GPS, and point locations.

I don't feel the above DEM truly shows the elevation changes we encountered while trying to navigate.  I imported the DEM into ArcScene to create a 3D image of the landscape (Fig. 8)

(Fig. 8) 3 dimensional image of the terrain we navigated.

My fellow classmate Peter Sawall, who helped me create the contour lines in Field Activity #5, showed me how to create an elevation profile using the DEM (Fig. 9).  You can see from the image that the elevation change was significant, especially for Eau Claire County, Wisconsin.

(Fig. 9)  Elevation profile created with ArcMap.

Conclusion

Overall, this field activity was very educational in the trials and tribulations of navigation.  One of the biggest factors which will aid you in navigation is prior planning.  Learning and exploring your landscape prior to heading out into the field will greatly assist you in whatever your endeavor might be.  To be proficient at navigation in the woods, I would definitely need more practice and training.

Make sure to check out my group mates' blog posts.

Casey's Blog
Katie's Blog

Thanks again to Peter Sawall for assisting me with advanced features in ArcMap.  Make sure to check out Peter's Blog.

Saturday, October 24, 2015

Field Activity #5: Development of a Field Navigation Map

Introduction

This week's activities required us to create two field navigation maps.  We will be using the maps in the following weeks, along with a compass, to locate a number of locations within the area of the Priory in Eau Claire, WI.  The Priory is a wooded, 120-acre off-campus property with residence halls owned by the University of Wisconsin-Eau Claire.  The ample space makes it a prime location for navigational training.  In addition to creating the navigation maps, we will also be learning a skill called pace counting.

Dr. Hupy set up a geodatabase with a plethora of data for us to use while constructing our navigation maps.  One of the few requirements was that one map had to have a grid in the Universal Transverse Mercator (UTM) coordinate system with 50 meter spacing (or finer), and the other map had to have a grid in the Geographic Coordinate System (GCS).  The only other requirement was that the maps be created in landscape format and sized for 11x17 paper.  The rest of the design details were left up to us using the given data.

The UTM coordinate system is broken into 60 different zones, each of which is 6 degrees of longitude wide.  Each zone is also split into a northern and a southern half, with the split coming at the equator.

(Fig. 1) UTM zone layout (http://www.gpsinformation.org/utm-zones.gif)
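Because each zone is 6 degrees of longitude wide starting at 180° W, the zone number for any point can be computed directly.  A quick sketch using the standard formula (it ignores the special zones around Norway and Svalbard), with Eau Claire's approximate coordinates as the test value:

```python
import math

def utm_zone(lon_deg, lat_deg):
    """Return the UTM zone number and hemisphere for a longitude/latitude pair."""
    zone = int(math.floor((lon_deg + 180.0) / 6.0)) + 1
    hemisphere = "North" if lat_deg >= 0 else "South"
    return zone, hemisphere

# Approximate coordinates for Eau Claire, WI
print(utm_zone(-91.5, 44.8))   # -> (15, 'North'), i.e. UTM Zone 15 North
```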

GCS is different in that it uses latitude and longitude to determine your location on the earth.  The location is stated in decimal degrees, which can then be converted to degrees, minutes, and seconds with simple math.  One issue with using GCS for navigation is that there is no built-in ground distance to tell you how far you have traveled or have left to travel.

(Fig. 2) Geographic Coordinate System layout. (http://newdiscoveryzhou.blogspot.com/2006_08_01_archive.html)
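The decimal degrees to degrees-minutes-seconds conversion mentioned above really is just multiplication by 60.  A small sketch, using an approximate latitude for Eau Claire as the example value:

```python
def dd_to_dms(dd):
    """Convert decimal degrees to (degrees, minutes, seconds)."""
    sign = -1 if dd < 0 else 1
    dd = abs(dd)
    degrees = int(dd)
    minutes = int((dd - degrees) * 60)
    seconds = (dd - degrees - minutes / 60.0) * 3600.0
    return sign * degrees, minutes, seconds

# Approximate latitude of Eau Claire, WI
print(dd_to_dms(44.811))   # -> (44, 48, 39.6), i.e. 44 deg 48 min 39.6 sec N
```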



Methods

Pace Counting

Before we started constructing the maps, Dr. Hupy discussed a basic navigation skill called pace counting.  Pace counting is knowing the distance you have walked based on the number of steps you have taken.  After the introduction to pace counting, our entire class went outside to practice and learn our personal pace count for a 100 meter distance.

Using a laser distance finder, Dr. Hupy measured out a 100 meter distance along the sidewalk of the Davies parking lot on campus.  The weather was not particularly pleasant, as it was raining lightly and the temperature was around 55 degrees Fahrenheit.  Dr. Hupy had some trouble getting the laser distance finder to function properly at the beginning of our test, but after a couple of attempts he felt he had an accurate reading from the laser.

We were instructed to walk from point A to point B (Fig. 3) while counting only the steps we took with our right leg, then do the same thing returning from point B to point A to make sure we were consistent.  I ended up with 57 and 58 right-leg paces for the 100 meter distance.

(Fig. 3) Point locations for the 100 meter distance.  (Basemap snipped from Google Maps)
Pace counting on a flat surface like the sidewalk gives us a good baseline to start from.  However, things are dramatically different when you attempt to count your paces in the woods.  Stepping over logs and going up hills while trying to maintain your direction are just a few of the struggles you will encounter, all while trying to count and keep your paces the same length as you practiced.
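For reference, a personal pace length is just the course distance divided by the pace count.  Using my two counts from above:

```python
# Personal pace length from the 100 meter practice course
course_m = 100.0
counts = [57, 58]                       # right-leg paces out and back

avg_count = sum(counts) / float(len(counts))
pace_length_m = course_m / avg_count
print("Average count: {:.1f} paces, pace length: {:.2f} m per pace".format(
    avg_count, pace_length_m))
# -> Average count: 57.5 paces, pace length: 1.74 m per pace
```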

Map Design

We were assigned groups for this navigation project, but we were to construct our maps separately.  After everyone in our group (three of us total) had completed their map design, we had to choose which group member's maps we would have printed and use for our navigation in the following weeks.

Exploring the data Dr. Hupy had provided was the first step of the map design portion of the exercise.  Having looked through blogs from past classes I had a good idea of what I wanted to design, but I still perused the data within the geodatabase.

The basemap was the first element I explored within ArcMap.  There were a few raster images in the data to choose from: a black and white aerial (Fig. 4), a color aerial taken in the fall (Fig. 5), and a scanned topographic map (Fig. 6).

(Fig. 4) Black and white aerial image of the Priory.

(Fig. 5) Color aerial image of the Priory.

(Fig. 6) Topographic map of the Priory.
After a brief examination it was clear to me the color image was the one to choose, mainly because its level of detail would allow my group to discern different objects during our navigation.  ArcMap also has the option of importing different basemaps, so using the boundary from the geodatabase we were given, I imported the Imagery basemap from ArcMap (Fig. 7).  As you can see in the photo, the vegetation is very thick and green.  Our area was already experiencing fall, and the majority of leaves were in the process of changing colors.  For these reasons I did not feel this basemap would suit our needs.

(Fig. 7) Imagery basemap of the Priory.

Contour line feature classes were the next map element I explored within the data.  I found two different contour line files to examine: one with 5 meter spacing between contour lines and the other with 2 ft (0.609 meter) spacing.  The 2 ft contours were far too tight to see any detail on the map when overlaid on the basemap (Fig. 8).  The tightness of the lines would have hindered navigation by making the map far too "busy" to understand where on the map we were located.  Additionally, I knew another set of lines was going to be added for the coordinate system grid, rendering this option useless.

(Fig. 8) Color aerial image with 2 ft. contour lines.
Examining the 5 meter contour line data, I felt the lines were spaced more appropriately (Fig. 9); however, my initial feeling was they might be spaced too far apart.  Dr. Hupy then informed us the data used to create the 5 meter contours was far from the best quality and he felt better options could be created or found.  He suggested talking with one of our classmates, Peter Sawall, who has been doing a lot of work with Lidar.

(Fig. 9) Color aerial image with 5 meter contour lines.
Talking with Peter, he suggested using the latest Lidar data from Eau Claire County to create a new contour map at whatever interval I felt was appropriate.  I told him this sounded like a great idea, but I have had very little exposure to Lidar and had no idea how to process the data to produce a contour map.  Peter assured me the process was easy and he would walk me through the steps.  Peter's "walk-through" was more like a solid run for me; he was working on his maps at the same time, so I did not take the time to slow him down to fully understand the process.  Peter has already done extensive work at the Priory, so we knew which files I needed to complete the task, and we have the current Lidar data from the county on our server at the university.  Looking at the geoprocessing history in ArcMap, we used the following tools: Create LAS Dataset, Make LAS Dataset Layer, and LAS Dataset to Raster.  The final tool was the Contour tool, which creates contour lines from the raster image.  Within the Contour tool you can set the contour spacing; I ran the tool three times with 5 ft, 7 ft, and 12 ft spacings.

(Fig. 10) 12 ft. contour created from a raster image.
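The pipeline Peter walked me through can be reconstructed from the tool names in the geoprocessing history.  The sketch below is my approximation of it, not his exact script; the LAS file names, cell size, ground class code, and output names are assumptions, and the Contour tool needs the 3D Analyst extension.

```python
import arcpy

arcpy.CheckOutExtension("3D")
arcpy.env.workspace = r"C:\FieldActivity5"   # assumed working folder

# 1. Gather the county LAS tiles covering the Priory (placeholder file names)
arcpy.CreateLasDataset_management(["tile1.las", "tile2.las"], "priory.lasd")

# 2. Keep only ground returns (class code 2) so trees and buildings drop out
arcpy.MakeLasDatasetLayer_management("priory.lasd", "ground_lyr", 2)

# 3. Interpolate the ground points into a bare-earth elevation raster
arcpy.LasDatasetToRaster_conversion("ground_lyr", "priory_dem.tif", "ELEVATION",
                                    "BINNING AVERAGE LINEAR", "FLOAT",
                                    "CELLSIZE", 2)

# 4. Generate contour lines; I ran this step with 5, 7, and 12 ft intervals
arcpy.Contour_3d("priory_dem.tif", "contours_12ft.shp", 12)
```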
After creating the contour lines, I inspected the different spacings over my basemap of the Priory.  Both the 5 ft and 7 ft spacings were too tight for my desired visibility of the basemap; the 12 ft contours were exactly what I had envisioned.  Next I had to clean up the map area a little bit.  Within the data we were given was a feature class with the boundary of the area we would be navigating within.  The extent of the contour lines fell outside the navigational boundary, so I used the Clip tool in ArcMap to eliminate the lines which fell outside it.  Eliminating the excess lines helps reduce the clutter once the grid lines are added in the next step.  I took one last look through the data and did not see any other features of interest for my map.

(Fig. 11) Color aerial image with the created 12 ft. contour lines.
Now to add the grids, which will be one of the biggest aids in our navigation.  ArcMap has a built-in feature to draw the coordinate grid of your map over the top of the features.  To accomplish this you must be in ArcMap's layout view.  Once in layout view, open the Data Frame properties and go to the Grids tab.  From within this tab you can create a new measured grid using your coordinate system.  Adjusting the properties after you create the grid is probably the most challenging part.  Per Dr. Hupy's recommendations, I changed the settings so the critical numbers in the grid labels are black and the leading digits (which stay the same on a map of this scale) are gray (Fig. 12).  I also adjusted the spacing to what I felt was appropriate, and the last adjustment I made was the amount of "precision" in the coordinate labels.  After creating the grid for my UTM map, I created a new ArcMap file, added the same basemap, boundary, and contour lines, and proceeded to add the grid for my GCS map.  I had to tinker with the same property settings, grid spacing, label colors, and number precision, as I did for the UTM map.

(Fig. 12)  Layout & Design for my grid labels.

The last step was to put all of the finishing touches on the maps to make them useful and cartographically pleasing.  After adding the standard map elements (title, scale, north arrow, legend, sources, my name), I added the coordinate system as well.  I decided on two different scales for the map, not knowing exactly what we will need once we are in the field.  I adjusted the colors of the grid so it would not be as dominant a color, allowing more of the other information on the map to stand out.  I also adjusted the transparency of the contour lines to allow more of the basemap to show through; on the GCS map I made the lines even more transparent than on the UTM map.  On the UTM map I added labels to the contour lines.  These maps will be printed back to back, so I feel having some variation between the two will be beneficial.



(Fig. 13)  UTM navigation map of the Priory.


(Fig. 14) GCS navigation map of the Priory

Discussion

Pace counting and compass navigation are not new to me.  Years ago I was involved with forestry judging, where we were required to navigate trails using a compass, azimuth directions, and distances.  The difference in this exercise is that I will have a map of the area with points plotted on it, and I will have to determine the actual number of steps it will take to get there.  I am very comfortable walking through the woods and feel I have a good sense of distance while walking over, around, and through obstacles.

The part I am not as confident with is the map portion.  Though I have used maps and aerial photographs for hunting and trapping, I based my navigation off landmarks and just walked until I found them.  Using a map with known distances should make the navigation easier and keep the excess wandering to a minimum.

Though these two maps look very similar, they are very different when it comes to how they will be used for navigation.  My initial feeling is the UTM map will be the easier map to navigate with, due to having actual ground distances on the grid.  The GCS map will be useful when using a GPS, since the GPS will give us decimal degrees, allowing us to know exactly where on the map we are located.  After a few test pacings with the GPS, I feel our group will be able to devise a good plan for knowing how far we have traveled.


Conclusion

Designing a navigation map when you have never used one for that purpose can pose a challenge.  Taking notes on what others had written about the project, along with ideas from our professor, proved to be the most valuable information for this exercise.  Combining that information with my own preferences for maps when navigating outside, I feel I came up with the best map I could for my level of understanding.  After the actual navigation exercise I am sure I will come up with changes I would have made.  The best way to learn what works is through experimentation.

Thank you

I would like to say thank you to Peter Sawall for his help with the contour line creation.  For more information about Peter hop over to his blog.

Saturday, October 17, 2015

Field Activity #4: Unmanned Aerial System Mission Planning

Introduction

In week number 4 our class was introduced to unmanned aerial systems (UAS).  The purpose of the class was to give us a basic understanding of the different types of UAS's, the variety of applications, and the software programs used to plan flights and interpret the data collected by UAS's.

UAS Overview 

UAS's are not a brand new idea; however, with recent advances in technology, UAS's are far more accessible to the commercial market than ever before.  The introduction gave us an overview of the different available types of UAS's and the pros and cons of each platform.  Two types covered in our introduction were "fixed wing" and "multicopter".  Fixed wing UAS's are similar to an airplane/glider style craft (Fig. 1), while multicopter UAS's are multi-rotor helicopter style aircraft (Fig. 2).

(Fig. 1) This photo is an example of a fixed wing UAV.  The UAV pictured here is similar to the one we were shown in class. 
One big misconception about UAS's is that they are nothing more than radio controlled (RC) airplanes.  RC airplanes are not equipped with a number of the components which comprise a UAS.  The fixed wing UAS we were introduced to was equipped with a Pixhawk flight controller.  The flight controller on the UAS communicates with a base station on the ground.  When properly programmed, the base station software can fly the airplane without human control.  The Pixhawk flight controller is compatible with open source software called Mission Planner, which is available for free from ArduPilot.  The open source aspect is relatively rare in the UAS field at this time.  The UAS is also equipped with a modem and a radio control receiver (RX).  The modem is used for long range data link communication with the base station, while the RX is used when flying the UAS manually.

All of the UAS's introduced to us were battery operated.  The majority of UAS's available to the public and commercial sector are powered by lithium polymer (LiPo) battery packs.  LiPo batteries are very volatile and can explode if not properly cared for.  Dr. Hupy shared a few instances of people he knows who have had these batteries blow up due to improper storage or misuse.

Applications
 
The uses for UAS's are nearly endless.  UAS's can be fitted with a multitude of cameras and sensors to fit almost any application: search and rescue, land cover analysis, tracking animal populations, and the list just keeps going.


Technical & Pros and Cons

To effectively launch the UAS pictured in Fig. 1, you need to have a few things in place.  One is a fairly large area, not only to launch but also to land the craft.  Due to the weight of the UAS, the other item needed is a launching device; Dr. Hupy showed us a bungee launching system he uses to launch the craft.  To understand how the launch system is configured, take a look at this video link (RV JET Bungee Launch).  Another troublesome issue with fixed wing craft is the turning radius.  If you were trying to fly a grid pattern over a small field, it would take a greater area for the UAS to turn around and get back in line with the grid, thus adding to your flight time.  I will go into these issues in greater detail later in this blog post when I talk about the Mission Planner software.

The above issues are a few of the disadvantages of using a fixed wing UAS.  One advantage of the fixed wing UAS is its long flight time.  The UAS our class was introduced to can fly up to one and a half hours depending on a number of variables, such as wind and the payload of the UAS.  This allows you to cover long distances and gather a lot of data in one flight if desired.

In conclusion, the fixed wing UAS is not good for rapid deployment in confined areas like cities or other urban settings.  It is more suitable for use in rural areas when covering the great distances which require long flight times.


(Fig. 2) This photo is an example of a multicopter.  The UAV pictured here is a quadcopter due to the four rotors used for propulsion.

Multicopters are very similar to their fixed wing brothers in their control features, hardware, and software.  Multicopters are constructed with a varying number of rotors, most commonly between four and eight.

The majority of multicopters are rigged with a gimbal.  A gimbal allows the operator to attach a camera or other types of sensors to the UAS.  The gimbal can be rotated and tilted through the controller, allowing the user to view in a multitude of directions.  Gimbals come in either 2-axis or 3-axis designs.


(Fig. 3) This photo is an example of a gimbal.  This particular gimbal is designed by 3D Robotics (3DR) to carry and operate a GoPro camera.
Certain multicopters can be fitted with high-torque rotors, which allow the multicopter to carry a heavier payload.  However, it takes more energy to turn the rotors and carry the payload, reducing the available flight time.  Overall, the available flight time of a multicopter is shorter than that of a fixed wing UAS.  The biggest advantage of the multicopter is the capability to launch just about anywhere you desire, even with limited space.  Also, when flying a grid pattern over a field, essentially no extra area is needed to turn the multicopter around, which improves the efficiency of the UAS and in turn reduces flight times.

Methods


Following the introduction, Dr. Hupy took the class out to the site of Field Activity 1 (Blog of Field Activity 1) to fly a DJI Phantom (Fig. 4) as a demonstration of the abilities and functions of UAS's.  The DJI is a quadcopter (four rotor) style UAS.  The Phantom had a camera attached to a 2-axis gimbal.  Dr. Hupy attached a small tablet to the controller of the UAS so he would be able to see exactly what the camera was seeing while flying.

(Fig. 4) This a photo of the actual DJI Phantom flown by Dr. Hupy.

(Fig. 5)  An example photo of a tablet attached to the Phantom controller.
Once Dr. Hupy had launched the DJI, he demonstrated the maneuverability and stability of the craft.  He was able to completely remove his hands from the joysticks and the Phantom would hover in place.  He then flew up and down the river's edge taking photos every 5 to 10 seconds.  On the last trip back, one of my fellow students spotted a bird nest in a tree.  Dr. Hupy piloted the DJI above the nest and we were able to look directly into it and see it was empty.

During this time Dr. Hupy discussed the many uses for UAS's.  He discussed how bridge inspections could greatly benefit from the abilities of a UAS.  We were directly beneath a bridge which, a few years ago, had to be inspected.  At that time the inspectors had to tie off with harnesses, physically climb over the edge of the railing, and inspect the underside of the bridge.  Dr. Hupy then flew the UAS over to the bridge and showed how, with the proper equipment, the inspection could have been done just as effectively without endangering the life of the inspector.

Later we had the opportunity to fly the Phantom if we desired.  I took my turn flying the DJI over the edge of the river.  The controls were as sensitive as I expected them to be; small movements of the joystick resulted in immediate movement of the craft.  I found the Phantom to be incredibly easy to control and maneuver for my first time flying the unit.  While I felt in control, a few times I bumped the control in the wrong direction, and by simply letting go of the joystick it stabilized itself and hovered.  I felt the DJI Phantom was a great first UAS to pilot.

The DJI has a warning device which alerts you when the battery is running low.  Dr. Hupy inspected the meter on the controller and told us we still had at least another 10-15 minutes of flight time.

After everyone had an opportunity to fly, Dr. Hupy landed the Phantom and changed its battery.  He then piloted the DJI around a few more areas of interest along the riverbank taking pictures.  He flew over one of the terrain models from exercise 1, which was mostly still intact, and took many photographs for later use.  Dr. Hupy also flew over a "24" which had been crafted along the river's edge with rocks.

The flight time in total took no more than 20-30 minutes, and in that time we collected 322 photographs.  Below you will find a few examples of photographs taken during the flight.



(Fig. 6) This is a photograph of our class watching Dr. Hupy fly the DJI Phantom.

(Fig. 7)  A photograph taken of a surface terrain from exercise one.

(Fig. 8) A photograph of the 24 along the river.

Mission Planner Software

Flying the UAS by hand is not the most effective method to cover a specific area.  After the flight demonstration, Dr. Hupy introduced us to the Mission Planner program.  Mission Planner allows you to create a custom, fully automated flight for a UAS.  With the program you have the ability to adjust every aspect of the flight: you set the area you will be collecting data for and then set the parameters to cover that area.

The main parameters you can set are the altitude and the resolution (spacing of the grid).  Deciding how to set them depends on the equipment you are using and the goal of the flight.  If your objective is high resolution imagery of the ground cover, then you will likely fly lower, which gives a smaller ground distance per pixel (a smaller number, but actually a higher resolution).  Adjusting the altitude has a direct effect on the resolution of the flight, and the higher the resolution, the longer the flight time.

Looking at figures 9 and 10 below, you will see the differences discussed above.  Both flight plans cover the same approximately 7 acre square.  Figure 9 has a very high resolution, but to obtain this resolution the flight will take 24 minutes and record 558 pictures.  Figure 10 shows the same coverage area with the altitude increased by 20 meters.  The increase has reduced the resolution, but not significantly, while reducing the flight time by 11 minutes and the picture count by 409 pictures for the same area.  Depending on the objective of the flight you may need the flight plan in figure 9, but if the detail is higher than the objective requires, you are just using up unneeded time and resources.

(Fig. 9) Higher resolution and lower altitude flight plan from Mission Planner



(Fig. 10) Lower resolution due to higher altitude flight plan from Mission Planner.
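The altitude-to-resolution relationship behind these flight plans is the standard ground sample distance (GSD) calculation.  The sketch below is not Mission Planner's code, just the textbook formula, with nominal Canon S110 numbers (7.44 mm sensor width, 5.2 mm focal length, 4000 pixel image width) assumed for illustration:

```python
def ground_sample_distance(altitude_m, sensor_width_mm=7.44,
                           focal_length_mm=5.2, image_width_px=4000):
    """Approximate ground resolution in cm per pixel for a nadir photograph."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

for alt in (50, 70):
    print("{} m altitude -> {:.2f} cm/pixel".format(alt, ground_sample_distance(alt)))
# -> 50 m altitude -> 1.79 cm/pixel
# -> 70 m altitude -> 2.50 cm/pixel
```

Flying 20 meters higher coarsens the pixel size only modestly, but each photo covers a wider swath, which is where the savings in flight time and picture count in the plans above come from.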

Real Flight Flight Simulator

The next program Dr. Hupy introduced us to was the Real Flight Flight Simulator.  The simulator gave us practice in all aspects of flying fixed wing and multicopter craft.  The objective of this portion of the lab was to pick one fixed wing aircraft and one multicopter and fly each of them for 30 minutes.

In the flight simulator program there was a vast number of options to choose from.  I tried out a number of different aircraft before deciding on the X8 Quadcopter 1260.  Though the X8 has a quadcopter layout, it actually has eight rotors and eight motors.  The additional rotors made it incredibly stable and easy to fly; however, they also seemed to hamper its turning ability slightly, as the X8 was slower to maneuver and change direction.  The additional rotors increased its payload capacity, and in the program it was hauling around what looked like a full-size SLR digital camera.  The downfall to additional rotors is that they require additional power to turn, thus reducing the available flight time.  This craft would be great for high resolution work over a smaller area.

(Fig. 11)  Screenshot from Real Flight Flight Simulator while flying the X8 Quadcopter.
The fixed wing aircraft I chose was a Pipercub float plane.  In comparison, the Pipercub was considerably faster than the X8 Quadcopter, and it was considerably more responsive and quicker turning than the multicopter.  However, when attempting to turn around to land in a narrow space, you had to make sure you made a wide swing to get lined up with the landing area.  I found it easier to land the Pipercub on land compared to landing in the water.  Though a true Pipercub is a gas powered airplane, I can definitely tell there are advantages and disadvantages to using a fixed wing aircraft.  Surveying long and narrow fields or long stretches of roadway would be in the wheelhouse of a fixed wing aircraft.  One tough part of flying the Pipercub was that the program did not give it a gimbal with a camera attached.  Many of the other UAS-styled craft had a camera on the nose and you could switch your flying view to the camera.  Flying the Pipercub was a lot more like flying an RC airplane than actually flying a UAS or conventional plane.

(Fig. 12) Screenshot from Real Flight Flight Simulator while flying the Pipercub. (Yes I crashed into the water shortly after taking the screenshot.)

Pix4D

The final section of our lab session introduced us to a computer program called Pix4D.  Pix4D converts images taken by almost any method into georeferenced maps, mosaics, and 3D models.  There are multiple versions of Pix4D; the version I used was Pix4Dmapper Pro.  The objective was to create a three-dimensional view of the landscape along the river's edge using a digital elevation model (DEM) and an ortho-mosaic image created in Pix4D.

The Pix4D program is probably one of the simplest programs I have ever used.  Once the program is open, you select New Project.  In the new project menu, you select where you want the generated information saved and select the photos you want to use.  Selecting the photos is the most difficult part: I had to make sure I selected enough photos, and those photos had to overlap one another in order for the program to properly depict the landscape.

Since I had time and access to more than one computer, I chose to run all 165 of the photos taken on the east side of the bridge (first part of the flight from earlier).  Had I been in a time crunch I could have achieved satisfactory results with fewer images.  However, my goal was to have the highest resolution depiction of the landscape possible.  The entire process took approximately 4 hours (I was not at the computer when it finished).  After the program worked its magic, I opened the created ortho-mosaic in ArcScene.

(Fig. 13)  Ortho-mosaic created in ArcScene from UAS collected imagery.

To help you better understand the detail captured in this image, I took the original image created in Pix4D and used ArcMap to measure a detail on the map.  The object I selected is the "LL" and the heart just off to the right of the center of the above image.  I measured the end of the second L at approximately 0.49 meters (about 1.6 ft).  The object is clearly discernible in the ortho-mosaic, and the image captured by the UAS displays the individual rocks used to create the LL and heart pattern.

(Fig. 14) Comparison of the image created in ArcScene (top) and the original image created by Pix4D displayed in ArcMap (bottom).

Scenario

For the last part of the lab, I have to provide a consulting report for a scenario.  My consulting report is directed to my supervisor, who will write the final report for the customer.  The scenario is as follows:

   "A Canadian crop consultant who manages 300,000 acres of land in the plains provinces is looking to do drainage modeling, and to monitor the overall health of the crops. She doesn’t have access to anything but coarse DEM data, and the IR satellite imagery is not at a resolution that suits decision making beyond a per field basis. They would like to start out with a pilot project by doing small 80 acre test plots, and then scale up from there if the project works. How might they best implement this plan."

Canadian crop consultation

We definitely have the tools to fully assist this customer.  Without knowing the specific layout of the customer's property or the location of the initial 80 acre test plot, I have used a known 80 acre field to develop a few options.  Using the Mission Planner software I created two scenarios to consider: one flight using the fixed wing 3D Robotics Aero, and one using the 3D Robotics X8 multicopter.

3D Robotics Aero

Specs
Altitude: 50 m
Speed: 23 m/sec
Camera: Canon S110-S120

Results
Ground Resolution: 1.83 cm/pixel
Flight Time: 17 min
Number Images: 494

3D Robotics X8

Specs
Altitude: 50 m
Speed: 8m/sec
Camera: Canon S110-S120

Results
Ground Resolution: 1.83 cm/pixel
Flight Time: 37 min  (This would need to be broken into at least 2 flights)
Number Images: 489

(Fig. 15)  Flight plan for the 3D Robotics Aero.

The ideal situation would be to fly the Aero, for the simple fact that the job could be done in one flight.  If we used the X8 we would need to split the flight at least in half and make two or three flights to cover the 80 acres.  It is possible to increase the altitude to accomplish either flight in a shorter time frame, but we would compromise the ground resolution considerably.
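Most of the flight time gap between the two platforms is simply cruise speed.  Below is a back-of-the-envelope sketch of how the estimates scale; it ignores turns, climb, and overlap details and assumes a square 80 acre plot with 20 m line spacing, so it will not match Mission Planner's numbers exactly.

```python
import math

def rough_flight_time_min(area_acres, speed_m_s, line_spacing_m=20.0):
    """Very rough lawnmower-pattern flight time over a square plot, ignoring turns."""
    side_m = math.sqrt(area_acres * 4046.86)        # side length of a square plot
    num_lines = math.ceil(side_m / line_spacing_m)  # parallel passes needed
    path_m = num_lines * side_m                     # total straight-line distance
    return path_m / speed_m_s / 60.0

print("Aero (23 m/s): {:.0f} min".format(rough_flight_time_min(80, 23)))
print("X8   (8 m/s):  {:.0f} min".format(rough_flight_time_min(80, 8)))
# -> roughly 12 min vs 34 min, the same ballpark as the 17 and 37 minute
#    Mission Planner estimates once turns are included
```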

The Mission Planner estimates above are themselves rough approximations.  The following questions would need to be answered before I could prepare an accurate plan:

  • Is there ample room to launch a fixed wing UAS, or would we need to utilize one of our multicopter platforms?
  • Is the 80 acre test plot just one field or is it multiple small fields which add up to 80 acres?
Either way we approach the flight, we can definitely produce the data the customer is looking for.  Below you will find some sample images of products we could produce for this customer.  Figure 16 shows an array of aerial imagery in which the consultant will be able to see the exact health of the fields through plant health, nutrient needs, and moisture content.  Figure 17 shows an elevation and slope model to better address the water drainage of each field.

(Fig. 16) Crop health imagery we could provide for the customer. (Ascending Technologies)

(Fig. 17) Topographic map for drainage modeling. (NCSU)

Discussion

UAS technology is exploding, and the ability for the public to obtain one has never been easier.  The variations available to the public and commercial field are endless.  The options I spoke of were found through recommendations from Dr. Hupy and my own searching on the internet.  The field of UAS use is just gaining speed, and I look forward to seeing all the uses we haven't even thought of yet.

One use already in place which is fascinating and practical is the use of UAS for precision agriculture.  This technology is being used to assist farmers with proper fertilization and watering practices.  It is my hope that through this type of technology we will be able to demonstrate the effectiveness of no-till farming practices and rotational planting to optimize soil health.  Though farmers across the country are successfully reducing their reliance on pesticides and chemical fertilizers, many do not believe it will work for them.  I feel that through the use of this technology we would be able to physically show them the effects these practices could have on their property and formulate calculations for savings and reduced time in the field.

One thing not discussed much in this lab was the legal issues surrounding UAS use.  From my understanding, UAS regulations vary from state to state, and all are subject to regulations from the Federal Aviation Administration (FAA).  These regulations currently limit the use, and the life-saving potential, that UAS platforms can provide.

Conclusion

Overall, the lab did a great job of introducing us to UAS platforms and giving us a very small taste of their uses.  I am excited to learn more about applications for UAS, and I look forward to taking Dr. Hupy's UAV class in one of the upcoming semesters to learn a great deal more about the ins and outs.  I feel privileged to be at one of the few universities teaching UAS technology to undergraduates.  Having even the most basic knowledge of their use has me continually trying to think of new uses for UAS platforms.

Sources

Ascending Technologies
3D Robotics
NCSU
Pix4D