Study area of the model, which will determine the timing of sediment transport from the Stansbury Mountains alluvial source (yellow areas) to the Stockton Bar sink (red box).
Model pseudocode, which covers setting up the agents (turtles), defining the source and sink areas, moving and stopping the turtles, and monitoring the time and distance results.
Sample NetLogo code implementing the pseudocode.
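The pseudocode steps above can be sketched in Python as a minimal agent loop: turtles spawn at a source, random-walk toward a sink, and report ticks and distance traveled. The coordinates, step size, and turtle count below are illustrative assumptions, not values from the actual NetLogo model.

```python
import random

random.seed(1)

SOURCE_X, SINK_X = 0.0, 10.0   # hypothetical source and sink positions

class Turtle:
    def __init__(self):
        self.x = SOURCE_X
        self.distance = 0.0
        self.stopped = False

    def move(self):
        step = random.uniform(0.0, 1.0)   # stand-in for a downslope step
        self.x += step
        self.distance += step
        if self.x >= SINK_X:              # stop when the sink is reached
            self.stopped = True

def run_model(n_turtles=5, max_ticks=1000):
    """Advance all turtles each tick; return ticks elapsed and distances."""
    turtles = [Turtle() for _ in range(n_turtles)]
    for tick in range(1, max_ticks + 1):
        for t in turtles:
            if not t.stopped:
                t.move()
        if all(t.stopped for t in turtles):
            return tick, [t.distance for t in turtles]
    return max_ticks, [t.distance for t in turtles]

ticks, distances = run_model()
print(ticks, [round(d, 1) for d in distances])
```

In the actual NetLogo model the `move` rule would encode sediment-transport behavior rather than a uniform random step; the structure (setup, move/stop loop, monitors) is what carries over.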
This is an example of what was produced for each of the three post-fire debris flows. Each map summarizes the impacted parcels, the debris flow path, the locations of weather stations, and the slope of the fire perimeter.
Base map showing debris flow hazard areas (green) and fire perimeters (red) for the three case studies along the Wasatch Front, northern Utah.
Table 1. Wildfire and debris flow characteristics, housing census data and debris flow damage reports.
Table 2. Worst-case-scenario vulnerability risk assessment, with hypothetical mitigation estimates as if calculated at the time of the debris flow event and based on geography alone.
The datasets for this project include Utah rivers and lakes, population, school locations, and water-related land-use areas such as parks and irrigation. This map is an example of the land-use areas and also shows the project's study area.
The methodology of the project. Four buffer zones (500, 1000, 1500, and 2000 feet) were created around the American Fork River between Tibble Fork Dam and Utah Lake. Each dataset (schools, parks, and land use) was clipped to the extent of one of the buffers and then summarized. This was repeated 12 times (4 buffer zones × 3 datasets).
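The buffer-and-summarize workflow above can be sketched without ArcGIS as a distance test: for each feature point, find its distance to the river polyline and tally how many features fall inside each buffer. The river vertices and feature coordinates below are made-up stand-ins, not project data.

```python
import math

RIVER = [(0, 0), (100, 0), (200, 50)]   # hypothetical river polyline (feet)
BUFFERS = [500, 1000, 1500, 2000]       # buffer distances in feet

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to line segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def distance_to_river(p):
    """Shortest distance from p to any segment of the river polyline."""
    return min(point_segment_distance(p, RIVER[i], RIVER[i + 1])
               for i in range(len(RIVER) - 1))

def summarize(points):
    """Count the points that fall within each buffer distance of the river."""
    return {b: sum(1 for p in points if distance_to_river(p) <= b)
            for b in BUFFERS}

schools = [(50, 300), (150, 1200), (300, 2500)]   # hypothetical features
print(summarize(schools))
```

The ArcGIS Buffer + Clip tools do the equivalent with true polygon geometry; this brute-force version only illustrates the logic of the 4 × 3 repetition.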
The contamination exposure results of the accidental, metal-laden sediment release from Tibble Fork Dam.
The Environmental Impact Statement (EIS) critique introduction and summary of findings. The EIS was a revision study on the impact of oil and gas wells in White River National Forest, Colorado. An EIS is intended to study all of the environmental effects of a proposed action, and its possible alternatives.
A short, additional assignment addressed how GIS is used as a tool in the NEPA process. It is primarily used in the examination of affected resource areas, where spatial data is an important and readily available component.
This is the project scope defining the case study objective, which is to develop a dam-source contamination exposure prototype. The project would meet a business need by saving money, reducing federal litigation, and promoting the company's vision of clean water. The project risks outlined were existing third-party competitors, financial risks, and resource (technology and employee) limitations. The key stakeholders identified were associated state and federal divisions, dam owners, and civilians living downstream, such as farmers and American Fork citizens.
Time management document outlining the major milestones of the creation of the prototype. The report also includes the Gantt Chart of this timeline.
Quality management document outlining the Quality Assurance Plan and Quality Control Checklist of how quality will be measured for spatial data as well as prototype accuracy.
A bubble plot was used for exploratory data analysis. It reveals the wide range of delta temperatures, from -207°C to 6.2°C, across the study area. Also of note, a positive value lies very close to a very negative delta temperature, indicating that depth is probably not a contributing factor.
An additional exploratory data method used was the scatterplot. It reveals no correlation between delta temperature and depth.
A histogram was used to understand the distribution of data and showed the delta temperatures have an asymmetrical distribution. Data that is not normally distributed is more difficult to analyze.
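The checks behind the scatterplot and histogram can be sketched numerically: the Pearson correlation between depth and delta temperature (near zero when there is no relationship) and the sample skewness of the delta temperatures (nonzero for an asymmetrical distribution). The values below are invented for illustration, not the actual survey data.

```python
import statistics

depth      = [10, 25, 40, 55, 70, 85]          # hypothetical depths
delta_temp = [0.5, -3.0, 1.2, -0.8, 2.0, -1.5] # hypothetical delta temps

def pearson_r(x, y):
    """Pearson correlation coefficient between two samples."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def skewness(x):
    """Population skewness: third standardized moment."""
    m, s = statistics.mean(x), statistics.pstdev(x)
    return sum((v - m) ** 3 for v in x) / (len(x) * s ** 3)

print(round(pearson_r(depth, delta_temp), 3), round(skewness(delta_temp), 3))
```

A |r| near zero matches the scatterplot's "no correlation" finding; a skewness far from zero matches the histogram's asymmetry.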
A variogram was used to analyze the spatial dependence of the depth values and of the delta temperature values. Because the data are so unevenly distributed and uncorrelated, each variogram could only model a few data points.
Once the data was well understood, kriging methods were employed to predict the delta temperature distribution across the region. The results of Ordinary Kriging and of Kriging with external drift (based on depth) were compared using the Root Mean Square Error of Prediction (RMSEP) cross-validation method.
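The RMSEP comparison amounts to squaring the cross-validation residuals, averaging, and taking the square root; the model with the lower RMSEP wins. The residual lists below are hypothetical stand-ins, not the project's actual cross-validation output.

```python
import math

def rmsep(observed, predicted):
    """Root Mean Square Error of Prediction."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted))
                     / len(observed))

observed        = [1.0, -2.0, 0.5, 3.0]
ok_predictions  = [0.8, -1.5, 0.9, 2.5]   # hypothetical Ordinary Kriging CV
ked_predictions = [1.1, -2.2, 0.4, 3.1]   # hypothetical external-drift CV

print(round(rmsep(observed, ok_predictions), 3),
      round(rmsep(observed, ked_predictions), 3))
```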
The Entity-Relationship (E-R) model diagram is the conceptual model of the proposed database. It describes what data will be used, how they will be stored, and how they relate to one another. Relationship cardinalities, primary keys, and attributes are assigned.
The relational model diagram describes the logic of the database structure. It defines how the relationships will function together and how the attributes respond to that functionality. A relations worksheet covering all relations and a relationships worksheet covering all relationships were created.
The object-oriented model was then created, which better approximates the SDE geodatabase. Associations, subclasses, aggregations, methods, access indicators and geometry were included in this last model diagram.
The final step was the implementation and creation of the proposed database. Feature classes with attributes, relationships and projection systems were created. In addition, geodatabase domains, subtypes and versioning controls were designated.
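The object-oriented model's core ideas (subclassing, aggregation, methods, geometry) can be sketched as Python classes. The class and attribute names below are hypothetical stand-ins, not the geodatabase's actual schema.

```python
class Feature:
    """Base class: every feature has an ID and a geometry type."""
    def __init__(self, feature_id, geometry):
        self.feature_id = feature_id
        self.geometry = geometry

class Dam(Feature):
    """Subclass of Feature with point geometry."""
    def __init__(self, feature_id, owner):
        super().__init__(feature_id, geometry="point")
        self.owner = owner

class RiverSegment(Feature):
    """Subclass of Feature with polyline geometry."""
    def __init__(self, feature_id, length_miles):
        super().__init__(feature_id, geometry="polyline")
        self.length_miles = length_miles

class River:
    """Aggregation: a river is composed of segments."""
    def __init__(self, name):
        self.name = name
        self.segments = []

    def add_segment(self, segment):
        self.segments.append(segment)

    def total_length(self):
        # Method operating on the aggregated parts.
        return sum(s.length_miles for s in self.segments)

river = River("American Fork")
river.add_segment(RiverSegment(1, 4.5))
river.add_segment(RiverSegment(2, 3.0))
print(river.total_length())   # prints 7.5
```

In the SDE geodatabase the same ideas surface as feature classes, subtypes, and relationship classes rather than language-level objects.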
Excerpt #1 from the Python presentation, which explains the first steps of the code: creating an ArcGIS-ready CSV file and reducing the file size without compromising data resolution.
Excerpt #2 from the Python presentation, which describes creating the feature class from the CSV and iterating through every text file in a folder.
Excerpt #3 from the Python presentation, showing the results of the manual method vs. the ArcGIS EBK tool.
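The batch step described in Excerpt #2 can be sketched as a loop over every .txt file in a folder that keeps only the columns needed downstream and writes one ArcGIS-ready CSV. The whitespace-delimited (x, y, value) layout is an assumption for illustration; the real files may differ.

```python
import csv
import glob
import os
import tempfile

def texts_to_csv(folder, out_csv):
    """Merge all .txt files in folder into one CSV with an x,y,value header."""
    with open(out_csv, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["x", "y", "value"])        # header ArcGIS can read
        for path in sorted(glob.glob(os.path.join(folder, "*.txt"))):
            with open(path) as f:
                for line in f:
                    parts = line.split()
                    if len(parts) >= 3:              # skip malformed rows
                        writer.writerow(parts[:3])   # drop extra columns
    return out_csv

# Demo on a temporary folder containing two fake text files.
folder = tempfile.mkdtemp()
with open(os.path.join(folder, "a.txt"), "w") as f:
    f.write("1 2 0.5 extra\n3 4 0.7\n")
with open(os.path.join(folder, "b.txt"), "w") as f:
    f.write("5 6 0.9\n")
out = texts_to_csv(folder, os.path.join(folder, "merged.csv"))
with open(out) as f:
    print(sum(1 for _ in f))   # prints 4: header + 3 data rows
```

Dropping the extra columns at this stage is also where the file-size reduction from Excerpt #1 would happen.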
The application examines the water resources of northern Utah. The polygons represent municipal water usage. The more the user zooms in, the more rivers and streams are drawn. When a river is selected, the related table is queried to provide the user with the 12-month river flow.
The script uses dojox.charting to create a bar chart in a popup window when a river feature is selected. The graph data are the 12 months of river flow for the Great Salt Lake Watershed Boundary.
Histogram of base map views (BMV) showing that the distribution is highly skewed to the right. Linear regression methods require normally distributed data; since the BMV are not (even close), Random Forest regression was used instead.
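The skewness check behind this histogram can be sketched numerically: a right-skewed sample (a made-up stand-in for base map view counts, not the real data) and its skewness before and after a square-root transform, the transform later applied to BMV for the Random Forest model.

```python
import math
import statistics

views = [1, 1, 2, 2, 3, 3, 4, 5, 8, 15, 40, 120]   # hypothetical BMV counts

def skewness(x):
    """Population skewness: third standardized moment."""
    m, s = statistics.mean(x), statistics.pstdev(x)
    return sum((v - m) ** 3 for v in x) / (len(x) * s ** 3)

raw_skew  = skewness(views)
sqrt_skew = skewness([math.sqrt(v) for v in views])
print(round(raw_skew, 2), round(sqrt_skew, 2))
```

The square-root transform reduces but does not eliminate the right skew, which is one reason a tree-based model that makes no normality assumption is the safer choice here.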
Regression analysis cannot work with NULL values, so the datasets were represented as distances (example below: distance to roads). For each dataset, the ArcGIS Euclidean Distance tool was run to calculate, for each cell, the Euclidean distance to the closest source. The resulting raster was then converted back to square-mile polygons for a spatial join to BMV. This was computed for each of the 7 independent variables. The final result was one feature class/table with a distance-to-X independent variable and the corresponding base map views.
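What the Euclidean Distance tool computes can be sketched by brute force: for each cell of a small grid, the straight-line distance to the nearest source cell. The grid size and source locations below are illustrative, not the project's rasters.

```python
import math

SOURCES = [(0, 0), (3, 4)]   # hypothetical source cells (row, col)

def distance_raster(rows, cols, sources):
    """For every cell, the Euclidean distance to the closest source cell."""
    return [[min(math.hypot(r - sr, c - sc) for sr, sc in sources)
             for c in range(cols)]
            for r in range(rows)]

raster = distance_raster(5, 5, SOURCES)
print(raster[0][0], raster[3][4], round(raster[2][2], 2))
```

The ArcGIS tool does this far more efficiently over millions of cells, but the per-cell output is the same idea: a continuous distance surface with zeros at the sources.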
Figure depicting the extreme range of values for base map views, which increases the challenge of the regression modeling. The image on the left is the original base map views dataset. The image on the right shows the base map views erased where they intersect municipalities.
Preliminary results (work in progress) from the R Random Forest regression (using the square root of BMV). Using 500 trees and 4 variables per sample run, the explained variance is 44.6%. The figure shows which variables are most important for the regression. (GLF = golf courses, CB = campgrounds and boat marinas, NRES = natural resources, TRAILS = trails, BIA = Indian reservations, ROADS = roads, DIST = distance.)
Preliminary sample sites for tree measurements (work in progress).