Wednesday, October 26, 2016

Assignment 6

Introduction:

The goal of this assignment was to do a field survey of trees in Putnam Park on the UWEC campus (Figure A below) using azimuth angles and the distance between a fixed point and each tree being surveyed. This skill matters because there may be situations where all you have is a rangefinder and a little handheld GPS. As my professor says, "Technology will fail you," so the class needs to be prepared to collect data without nice equipment like a survey-grade GPS or a drone. An important part of this lab was data normalization, because the class split into 3 groups in order to collect all of the data. Working together, the class decided that the data should have x, y, distance, azimuth, tree type, diameter at breast height (DBH), and point number fields. The X field is the longitude, the Y field is the latitude, Distance (meters) is the distance between the group and the tree, and Azimuth is the angle from north to the tree being surveyed. Tree type is the species we identified, DBH is the tree's diameter measured at breast height, and the point number denotes which survey point the tree was recorded from.


Figure A, the red box denotes the study area.

Methods:


Materials
  • Tree Diameter Measuring Tape
  • Compass
  • Rangefinder
  • Handheld GPS
  • Field Notebook
  • Smart Phone
The first step was to record the location of each survey point with the handheld GPS. The same coordinates were used for every tree recorded from point 1; likewise, all trees at point 2 share one set of coordinates (point 2 is located about 50 meters west of point 1), and all trees at point 3 share another (point 3 is located about 50 meters east of point 1). See the data below in Figure B. Next, a tree is selected and the compass is used to find the azimuth to it in degrees. The rangefinder is then used to find the distance in meters between the fixed point and the tree. Next comes tree identification; using smartphones, the groups were able to identify the trees easily. Lastly, a class member used the tree diameter measuring tape to measure the tree's diameter at breast height. These steps were repeated for about 10 trees at each of points 1, 2, and 3. After all the data was collected it was entered into an Excel spreadsheet.

Figure B is the data. The x,y values differ between points 1, 2, and 3, but are the same within each point.
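To show what each distance/azimuth pair actually encodes, here is a minimal Python sketch (not part of the lab workflow) that converts one reading into east/north offsets from the survey point; the sample values are made up for illustration.

```python
import math

def tree_offset(distance_m, azimuth_deg):
    """Convert a distance (meters) and azimuth (degrees clockwise from north)
    into east/north offsets from the survey point."""
    az = math.radians(azimuth_deg)
    east = distance_m * math.sin(az)   # x offset in meters
    north = distance_m * math.cos(az)  # y offset in meters
    return east, north

# Made-up example: a tree 23.5 m away at an azimuth of 140 degrees
east, north = tree_offset(23.5, 140.0)
print(f"Tree is {east:.1f} m east and {north:.1f} m north of the point")
```

Turning those offsets into latitude/longitude would require converting meters into degrees at the survey point, which is roughly what the Bearing Distance To Line tool takes care of in the next step.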

The data was then imported into an ArcMap geodatabase using the import single table function. Once the data is a table in ArcMap it is time to run some tools. The first tool used was Bearing Distance To Line. The table was set as the input and each field was matched to the corresponding tool parameter: x to x, y to y, distance to distance, azimuth to azimuth, and tree type as the identifying attribute. Once the lines were created, the Feature Vertices To Points tool was used. This tool placed a point at the end of each line; these endpoints are the trees. To finish it off, the new feature classes and a basemap were added to the map.
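For reference, the same two geoprocessing steps can also be scripted with arcpy. This is only a rough sketch, assuming the spreadsheet was imported as a table named tree_survey with fields named X, Y, Distance, and Azimuth; the actual geodatabase path, table name, and field names from the lab may differ.

```python
import arcpy

arcpy.env.workspace = r"C:\temp\Assignment6.gdb"  # assumed geodatabase path

# Build a line from each survey point out to its tree using the
# distance (meters) and azimuth (degrees clockwise from north) fields.
arcpy.BearingDistanceToLine_management(
    "tree_survey", "tree_lines",
    "X", "Y", "Distance", "METERS", "Azimuth", "DEGREES")

# Drop a point at the far end of every line; these endpoints are the trees.
arcpy.FeatureVerticesToPoints_management("tree_lines", "tree_points", "END")
```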


Results/Discussion:

Figure C is the final azimuth survey map.
The final survey map is accurate: it shows the correct distances from each survey point to each tree, and the GPS points are in the right locations. This did not come without effort, however; the data needed manipulation because the handheld GPS recorded inaccurate coordinates. Figure D below shows the initial map made before the data was fixed.


Figure D shows the bad map made from the initial data.
In Figure D the data at point 2 was in the parking lot, the data for point 1 was too far to the east, and the data for point 3 was located a few miles south of our study area. To remedy this, the XY data was changed using coordinates read from the basemap. Once the Excel file was fixed, the tools were rerun, the new feature classes were added, and they looked correct.
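The coordinate fix was done by hand in Excel, but the same correction could be scripted. Below is a hypothetical pandas sketch; the file name, the Point_Num column, and the corrected coordinates are all placeholders, not the lab's actual values.

```python
import pandas as pd

df = pd.read_excel("tree_survey.xlsx")  # assumed file name

# Corrected coordinates read off the basemap (placeholder values)
corrected = {
    1: (-91.5005, 44.7960),
    2: (-91.5011, 44.7960),
    3: (-91.4999, 44.7960),
}

# Overwrite every row's X/Y with the corrected coordinates for its point
df["X"] = df["Point_Num"].map(lambda p: corrected[p][0])
df["Y"] = df["Point_Num"].map(lambda p: corrected[p][1])

df.to_excel("tree_survey_fixed.xlsx", index=False)
```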

Figure E is the final survey map, but with the individual tree types symbolized by different colours.

Conclusion:

This distance-azimuth survey turned out to be decently successful despite the setback from the bad GPS data, which was easily remedied by using ArcMap to find the correct lat/long values. It was much different from any other survey the class has done thus far, and quite different from using a survey-grade GPS, but as said before, those tools will not always be available. Taking measurements by hand turned out to work well, and the fact that this lab went swimmingly was quite rewarding.

Wednesday, October 19, 2016

Assignment 5

Introduction:

Previously, the class had set out to create a Digital Elevation Model (DEM) of a 1 m x 1 m sandbox terrain by gathering elevation data points relative to an artificial sea level. This was achieved using a stratified sampling system (more about the sampling system in the previous post). The survey data was then entered into a table and imported into ArcGIS to create a point feature class with X, Y, and Z values. Data normalization was a key aspect of this lab, because the data must be organized to get accurate models. Normalization reduces redundancy and limits the effect of human error on the accuracy of the data. In this case it meant that our data was organized into 3 columns: "X_Cell", "Y_Cell", and "Z_Value". Each point has XYZ values that correspond to its position on the x axis, its position on the y axis, and its elevation above or below "sea level." With this data, it is easy to run different types of interpolation to make the DEM rasters. Interpolation is a method of constructing new data points within the range of a discrete set of known data points. Five different interpolation methods were used in the assignment to determine which was the best fit for the DEM.

Methods:

First, the compiled data was imported into a file geodatabase in ArcMap. From there it was used to create a point feature class with XYZ data. The group double-checked to make sure the data was in numeric format, because after a previous minor slip-up Dr. Hupy loves to hang that over our heads. Next, the Geostatistical Analyst, Spatial Analyst, and 3D Analyst extensions were all checked and ready to roll. With these ready, the different forms of interpolation were ready to rumble.

1. Inverse Distance Weighted (IDW) Interpolation
IDW interpolation estimates unmeasured locations as a weighted average of the measured points around them, with closer points weighted more heavily. It makes the assumption that points that are closer together are more alike than points that are farther apart. Figure A below shows that IDW worked well for the group since a stratified approach was used, meaning there were many points clustered in groups.


Figure A: IDW Interpolation
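As a concrete illustration of the weighting idea (not the exact algorithm ArcGIS runs), here is a minimal inverse-distance-weighted estimate in Python using made-up sample points:

```python
import numpy as np

# Made-up sample points: (x_cm, y_cm, z_cm)
samples = np.array([
    [10, 10, -4.0],
    [20, 10, -6.0],
    [10, 20,  2.5],
    [30, 30, 11.0],
])

def idw(x, y, pts, power=2):
    """Estimate z at (x, y) as a distance-weighted average of known points."""
    d = np.hypot(pts[:, 0] - x, pts[:, 1] - y)
    if np.any(d == 0):                 # exactly on a sample point
        return pts[d == 0][0, 2]
    w = 1.0 / d**power                 # nearer points get larger weights
    return np.sum(w * pts[:, 2]) / np.sum(w)

print(idw(15, 15, samples))            # estimate between the clustered samples
```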

2. Natural Neighbor Interpolation
This interpolation technique uses only the values of the measured points immediately surrounding each location; the nearest neighbouring areas carry the most weight, which is why it is often referred to as "area-stealing" interpolation. The Natural Neighbor technique (Figure B) created a very natural-looking DEM, potentially the most realistic of the five.


Figure B: Natural Neighbor Interpolation
3. Kriging Interpolation
Kriging interpolation is in a completely different ballpark from the other interpolation methods. Kriging is a geostatistical method based on statistical models that include autocorrelation. This gives kriging the ability not only to predict the surface but also to provide a measure of the certainty of those predictions. It assumes the surface follows roughly the same pattern all the way across. This method did not capture the sandbox well because it gave too much weight to the flat areas and did not show the peaks well (Figure C below).


Figure C: Kriging Interpolation
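For anyone curious what kriging looks like outside ArcGIS, here is a rough sketch of ordinary kriging using the third-party pykrige package on made-up sample points; the lab itself used the ArcGIS Kriging tool, so this is only an analogy.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Made-up scattered samples (x_cm, y_cm, z_cm)
x = np.array([10.0, 20.0, 10.0, 30.0, 45.0])
y = np.array([10.0, 10.0, 20.0, 30.0, 40.0])
z = np.array([-4.0, -6.0,  2.5, 11.0, -2.0])

# Fit a variogram model and predict on a regular grid
ok = OrdinaryKriging(x, y, z, variogram_model="linear")
gridx = np.arange(0.0, 100.0, 10.0)
gridy = np.arange(0.0, 100.0, 10.0)
zhat, variance = ok.execute("grid", gridx, gridy)

print(zhat.shape)  # predicted surface; variance gives the uncertainty estimate
```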
4. Spline Interpolation
Spline interpolation uses a mathematical function to minimize overall surface curvature while passing directly through each sample point. There are two kinds of spline; the Regularized method was used, which creates a gradually changing surface that often produces values outside of the sample data range. The Spline method (Figure D) was not as useful because it was pulled too strongly by the areas that were sampled more densely (due to the stratified sampling).


Figure D: Spline Interpolation. It became discoloured from the others no matter what I did, so I left its vibrancy.
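SciPy's smoothing spline is not identical to ArcGIS's Regularized Spline, but it sketches the same idea of fitting a smooth surface to scattered points; the sample values below are made up.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Made-up scattered samples (a bivariate spline needs a fair number of points)
rng = np.random.default_rng(0)
x = rng.uniform(0, 100, 40)
y = rng.uniform(0, 100, 40)
z = np.sin(x / 20.0) * 10 - 4           # fake terrain-like elevations

# Fit a smooth surface; s controls how closely it honours the samples
spline = SmoothBivariateSpline(x, y, z, s=len(z))

# Evaluate the fitted surface at a few arbitrary locations
print(spline.ev([25, 50, 75], [25, 50, 75]))
```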
5. Triangulated Irregular Network (TIN) Interpolation
TIN was the final method used; it creates a network of triangles by connecting lines between the data points. With lines running between every point collected, the result is a 1980s arcade-game-looking surface model, much like Figure E below. The TIN method, while blocky, worked well for our group because of the stratified data collection.


Figure E: TIN Surface Model
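The triangulation itself is easy to reproduce outside ArcGIS. Here is a small sketch using SciPy's Delaunay triangulation on made-up points; ArcGIS's TIN tools do more than this, but the core idea of connecting points into triangular facets is the same.

```python
import numpy as np
from scipy.spatial import Delaunay

# Made-up sample points (x_cm, y_cm) with elevations z_cm
xy = np.array([[10, 10], [20, 10], [10, 20], [30, 30], [45, 40], [60, 15]])
z = np.array([-4.0, -6.0, 2.5, 11.0, -2.0, 0.5])

tri = Delaunay(xy)                  # connect the points into triangles
print(len(tri.simplices), "triangles")

# Each triangle's corner elevations define one flat facet of the surface
for corners in tri.simplices:
    print(corners, z[corners])
```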
Because ArcMap only displays 2D imagery, the interpolated rasters had to be brought into ArcScene to be rendered in 3D. Once the rasters were in ArcScene, their base heights were switched to "Floating on a Custom Surface," which gave them a 3D effect. After that, the "Calculate from Extent" option was used to set the amount of vertical exaggeration. The scene was then saved as a layer file and captured as an image, ready to be put back into ArcMap. The image was opened in ArcMap along with the layer file; the layer file contained the data needed to make a legend, and the image was positioned to best show its 3D properties.

Results/Discussion:

At first Spline seemed to be the best interpolation method because of how smooth it is, until it was compared to a picture of the sandbox and it became easy to see that the highs were exaggerated far too much. Natural Neighbor is the real winner of the interpolation methods for the sandbox project because it most accurately represented the curvature of the mountainous regions and rolling prairies.

In conclusion, IDW interpolation provided a reasonably accurate image of the sandbox terrain; however, it skewed some areas based on the stratified groupings and produced awkward spikes where there were no large elevation changes. This could easily be fixed by sampling more uniformly. As said before, Natural Neighbor most accurately portrayed the sample area and captured the terrain fluidly; the only thing that could improve its accuracy is sampling more points. Kriging interpolation did a poor job of capturing the sample area because it attempted to predict and smooth, making the mountain more like a plateau and the lower areas simply flat. As noted before, the Spline interpolation was not good; it exaggerated the highs and lows too much, creating a map that looks more like ski moguls than land. TIN created a great DEM of the sample area; the mountain looks slightly too flat, but that could have been improved with more data points.

Conclusion:
The sampling method chosen turned out to be a success; stratified sampling and Natural Neighbor interpolation worked well together to create a DEM of the sandbox. The survey method was rather archaic in that it used a string/drawn grid and a meter stick rather than a survey-grade GPS, but it got the job done and worked very well. This method would not suit many situations; it would be difficult to do in a busy downtown area, a crowded building, or on private land. But for this assignment, a stratified system with a grid was the best way to accurately sample the sandbox.

Tuesday, October 18, 2016

Assignment 4

Introduction:

Sampling is a key method across all forms of science. It is the process of gathering precise data and measurements over an oftentimes large study area. Precision is especially important when sampling in geography because the resulting maps and data directly affect people's lives. Sampling allows a project to be completed more efficiently and quickly, and can save resources, by measuring a representative subset of a study site. Random, systematic, and stratified are the 3 main types of sampling. Random sampling uses pure chance to select points, so it is entirely unbiased; anything can be selected, but it can also be a poor choice because it can leave large areas unsampled. Systematic sampling collects samples at a regular interval, most commonly on a grid; it is reliable and straightforward, however it can be biased. Stratified sampling is the final method; it divides the study area into known groups and collects data points in proportion to the size of each group. This method can generate accurate data, but if the sizes of the groups are unknown the data can become skewed.
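To make the three schemes concrete, here is a small, purely illustrative Python sketch that draws sample locations over a 1 m x 1 m area with each approach; the strata are made-up rectangles rather than real terrain groups.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random: any location in the box can be chosen
random_pts = rng.uniform(0, 1, size=(30, 2))

# Systematic: a regular grid at a fixed interval
xs, ys = np.meshgrid(np.arange(0.1, 1.0, 0.2), np.arange(0.1, 1.0, 0.2))
systematic_pts = np.column_stack([xs.ravel(), ys.ravel()])

# Stratified: more points in the strata (made-up regions) that need detail
hill = rng.uniform([0.0, 0.0], [0.5, 0.5], size=(20, 2))   # rough terrain: dense
plain = rng.uniform([0.5, 0.5], [1.0, 1.0], size=(10, 2))  # flat terrain: sparse
stratified_pts = np.vstack([hill, plain])

print(len(random_pts), len(systematic_pts), len(stratified_pts))
```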

The lab's objective is to accurately construct an elevation surface of a terrain. The study will be completed using a 1 m x 1 m sandbox in which students build a unique terrain out of sand. This terrain will then be sampled using the students' sampling method of choice, and the measurements entered into a spreadsheet to be used for creating a DEM in ArcGIS.

Methods:

Our group chose to use a stratified sampling technique laid over a systematic sampling grid to make sample collection easier. This was most useful when faced with time constraints. The method is similar to a systematic sampling method, but the shapes of the terrain acted as the "groups" within the total area, and areas of higher and lower elevation received more points. The sample plot is located near UWEC's science building, L.E. Phillips Hall (Figure A below).


Figure A, the completed sandbox terrain with a forest in the east, prairie lands to the west, and other random terrain in the center.
Materials used:


  • String
  • Meter Sticks
  • Data Collection Notebook
  • Thumb Tacks
  • Pencil
  • Samsung Galaxy s6 Edge
The sampling scheme used a grid with 10 cm spacing between reference points along the x and y axes. Sea level (0 cm) was set at the top of the box. Strings were then stretched across the box at the x and y reference points to lay out the grid. This allowed us to create a stratified system of uniform groups within the overall model. See Figures B and C below.




Figure B, the grid lines were drawn in to take advantage of the soft sand.

Figure C shows the tacks displaying areas of higher relief.

After the grid lines and tacks were in place, our group used a meter stick to measure the relief at each location. The values were entered directly into our drawn sampling grid. This created a system that was quick and easy to execute when we were short on time.
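For illustration, here is a tiny Python sketch of how the 10 cm reference grid translates into rows of the spreadsheet; the single reading shown is made up.

```python
# Build the 10-cm reference grid for the 1 m x 1 m box
grid_cm = range(0, 101, 10)          # 0, 10, ..., 100 cm along each axis
cells = [(x, y) for y in grid_cm for x in grid_cm]
print(len(cells), "grid intersections")

# Each measured depth becomes one row of X_Cell, Y_Cell, Z_Value
readings = []
readings.append({"X_Cell": 10, "Y_Cell": 20, "Z_Value": -7})  # made-up reading
print(readings[0])
```

The stratified part of the scheme simply added extra readings between grid intersections wherever the terrain changed quickly, which is how the count grew beyond the grid intersections alone.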


Results/Discussion:

A total of 178 points were collected in the 1 m x 1 m sandbox using the stratified method, ranging from very low at the deepest depression to quite high at the highest peak. Throughout most of the prairie lands in the west, the elevation stayed roughly the same. The sampling method was adequate for what needed to be done, and it became even better once the grid lines were drawn into the sand along the strings.

Minimum: -16 cm
Maximum: 12 cm
Mean Elevation: -4.71 cm
Standard Deviation: 4.64 cm
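Those summary values are easy to recompute from the Z column. A quick sketch with Python's statistics module, using a made-up subset of values rather than the full 178-point table:

```python
import statistics

# Z_Value column from the spreadsheet (made-up subset; the real survey had 178)
z = [-16, -12, -7, -4, -4, -3, -2, 0, 3, 12]

print("Min:", min(z), "cm")
print("Max:", max(z), "cm")
print("Mean:", round(statistics.mean(z), 2), "cm")
print("Std dev:", round(statistics.pstdev(z), 2), "cm")
```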


Conclusion:

The stratified system employed a systematic grid that made data collection easier and more reliable. One of the most important things to consider when sampling is the time and resources available, because as seen in this lab, there were few resources and little time, probably just like the real world. This sampling technique is reliable enough to be used again and could produce good data over a larger sample area. In the next assignment a DEM will be created from our data in ArcGIS, and it will show whether our sampling method was accurate and successful.