This package is intended to be used in support of data management activities associated with fixed locations in space. The motivating fields include both air and water quality monitoring where fixed sensors report at regular time intervals.
When working with environmental monitoring time series, one of the first things you have to do is create unique identifiers for each individual time series. In an ideal world, each environmental time series would have both a locationID and a sensorID that uniquely identify the spatial location and the specific instrument making measurements. A unique timeseriesID could be produced as locationID_sensorID. Metadata associated with each timeseriesID would contain basic information needed for downstream analysis including at least:

timeseriesID, locationID, sensorID, longitude, latitude, ...
- Multiple sensors placed at a location could be grouped by locationID.
- An extended time series for a mobile sensor would group by sensorID.
- Maps would be created using longitude, latitude.
- Time series would be accessed from a secondary data table with timeseriesID column names.
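As a minimal sketch of this ideal scheme (the identifier values below are hypothetical), a unique timeseriesID might be assembled like this:

# Hypothetical identifiers for one instrument at one fixed location
locationID <- "99a8ee08cfe8c5a9"
sensorID <- "pm25_007"

# Combine them into a unique timeseriesID
timeseriesID <- paste(locationID, sensorID, sep = "_")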
Unfortunately, we are rarely supplied with a truly unique and truly spatial locationID. Instead, we often use sensorID or an associated non-spatial identifier as a stand-in for locationID.
Complications we have seen include:

- GPS-reported longitude and latitude can have "jitter" in the fourth or fifth decimal place, making it challenging to use them to create a unique locationID.
- Sensors are sometimes repositioned in what the scientist considers the "same location".
- Data from a single sensor may go through different processing pipelines using different identifiers and later be brought together as two separate time series.
- The spatial scale of what constitutes a "single location" depends on the instrumentation and the scientific question being asked.
- Deriving location-based metadata from spatial datasets is computationally intensive unless results are saved and identified with a unique locationID.
- Automated searches for spatial metadata occasionally produce incorrect results because of the finite precision of spatial datasets and must be corrected by hand.
A solution to all these problems is possible if we store spatial metadata in simple tables in a standard directory. These tables will be referred to as collections. Location lookups can be performed with geodesic distance calculations where a location is assigned to a pre-existing known location if it is within distanceThreshold meters of that known location. These lookups will be extremely fast.
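Conceptually, each lookup is just a geodesic distance calculation followed by a threshold test. Here is a minimal sketch of that idea using the geodist package; the coordinates and threshold are made-up values, and MazamaLocationUtils performs these lookups for you, so this is purely illustrative:

library(geodist)

# Hypothetical known locations and one newly reported point
known <- data.frame(longitude = c(-120.66, -117.43), latitude = c(47.84, 47.67))
new <- data.frame(longitude = -120.6601, latitude = 47.8395)

# Geodesic distances (in meters) from the new point to each known location
distances <- geodist(new, known, measure = "geodesic")

# Assign the point to the nearest known location if one is close enough
distanceThreshold <- 500
if (min(distances) < distanceThreshold) {
  nearestIndex <- which.min(distances)
}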
If no previously known location is found, the relatively slow (seconds) creation of a new known location metadata record can be performed and then added to the growing collection.
For collections of stationary environmental monitors that only number in the thousands, this entire collection (i.e. “database”) can be stored as either a .rda or .csv file and will be under a megabyte in size, making it fast to load. This small size also makes it possible to store multiple known location files, each created with different locations and different radii to address the needs of different scientific studies.
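Loading a previously saved collection is then a single fast call. A minimal sketch, assuming a collection named "wa_monitors_500" was saved earlier to the current location data directory:

# Load a saved known location table by collection name
locationTbl <- table_load("wa_monitors_500")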
The package comes with some example known location tables to demonstrate this functionality.
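You can list the example datasets included with the package using base R:

# List datasets shipped with MazamaLocationUtils
data(package = "MazamaLocationUtils")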
Let’s take some metadata we have for air quality monitors in Washington state and create a known location table for them.
wa <- get(data("wa_airfire_meta", package = "MazamaLocationUtils"))
names(wa)
## [1] "monitorID" "longitude" "latitude"
## [4] "elevation" "timezone" "countryCode"
## [7] "stateCode" "siteName" "agencyName"
## [10] "countyName" "msaName" "monitorType"
## [13] "siteID" "instrumentID" "aqsID"
## [16] "pwfslID" "pwfslDataIngestSource" "telemetryAggregator"
## [19] "telemetryUnitID"
We can create a known location table for them with a minimum 500 meter separation between distinct locations:
library(MazamaLocationUtils)
# Initialize with standard directories
mazama_initialize()
setLocationDataDir("./data")
wa_monitors_500 <-
  table_initialize() %>%
  table_addLocation(wa$longitude, wa$latitude, distanceThreshold = 500)
Right now, our known locations table contains only automatically generated spatial metadata:
names(wa_monitors_500)
## [1] "locationID" "locationName" "longitude" "latitude" "elevation"
## [6] "countryCode" "stateCode" "county" "timezone" "houseNumber"
## [11] "street" "city" "zip"
Perhaps we would like to import some of the original metadata into our new table. This is a very common use case where non-spatial metadata like site name or agency responsible for a monitor can be added.
Just to make it interesting, let’s assume that our known location table is already large and we are only providing additional metadata for a subset of the records.
# Use a subset of the wa metadata
wa_indices <- seq(5, 65, 5)
wa_sub <- wa[wa_indices,]
# Use a generic name for the location table
locationTbl <- wa_monitors_500
# Find the location IDs associated with our subset
locationID <- table_getLocationID(
  locationTbl,
  longitude = wa_sub$longitude,
  latitude = wa_sub$latitude,
  distanceThreshold = 500
)
# Now add the "siteName" column for our subset of locations
locationData <- wa_sub$siteName

locationTbl <- table_updateColumn(
  locationTbl,
  columnName = "siteName",
  locationID = locationID,
  locationData = locationData
)
# Let's see how we did
locationTbl_indices <- table_getRecordIndex(locationTbl, locationID)
locationTbl[locationTbl_indices, c("city", "siteName")]
## # A tibble: 13 × 2
## city siteName
## <chr> <chr>
## 1 Chelan "Chelan-Woodin Ave"
## 2 La Crosse "Lacrosse-Hill St"
## 3 Tri-Cities "Kennewick-Metaline"
## 4 Sunnyside "Sunnyside-S 16th"
## 5 Inchelium "Inchelium"
## 6 Wellpinit "Wellpinit-Spokane Tribe"
## 7 Lake Forest Park "Lake Forest Park-Town Center"
## 8 Okanogan County "Twisp-Glover St"
## 9 Limestone Junction "Maple Falls-Azure Way"
## 10 Okanogan County "Omak-Colville Tribe"
## 11 Ritzville "Ritzville-Alder St "
## 12 Darrington "Darrington-Fir St"
## 13 Tukwila "Tukwila_Allentown"
Very nice.
The whole point of a known location table is to speed up access to spatial and other metadata. Here’s how we can use it with a set of longitudes and latitudes that are not currently in our table.
# Create new locations near our known locations
lons <- jitter(wa_sub$longitude)
lats <- jitter(wa_sub$latitude)
# Any known locations within 50 meters?
table_getNearestLocation(
  wa_monitors_500,
  longitude = lons,
  latitude = lats,
  distanceThreshold = 50
) %>% dplyr::pull(city)
## [1] NA NA NA NA NA "Wellpinit"
## [7] NA NA NA NA NA NA
## [13] NA
# Any known locations within 500 meters?
table_getNearestLocation(
  wa_monitors_500,
  longitude = lons,
  latitude = lats,
  distanceThreshold = 500
) %>% dplyr::pull(city)
## [1] NA "La Crosse" NA
## [4] "Sunnyside" "Inchelium" "Wellpinit"
## [7] "Lake Forest Park" "Okanogan County" "Limestone Junction"
## [10] "Okanogan County" "Ritzville" NA
## [13] "Tukwila"
# How about 5000 meters?
table_getNearestLocation(
  wa_monitors_500,
  longitude = lons,
  latitude = lats,
  distanceThreshold = 5000
) %>% dplyr::pull(city)
## [1] "Chelan" "La Crosse" "Tri-Cities"
## [4] "Sunnyside" "Inchelium" "Wellpinit"
## [7] "Lake Forest Park" "Okanogan County" "Limestone Junction"
## [10] "Okanogan County" "Ritzville" "Darrington"
## [13] "Tukwila"
Before using MazamaLocationUtils you must first install MazamaSpatialUtils and then install core spatial data with:
library(MazamaSpatialUtils)
setSpatialDataDir("~/Data/Spatial")
installSpatialData()
Once the required datasets have been installed, the easiest way to set things up each session is with:
library(MazamaLocationUtils)
mazama_initialize()
setLocationDataDir("~/Data/KnownLocations")
mazama_initialize() assumes spatial data are installed in the standard location and is just a wrapper for:
MazamaSpatialUtils::setSpatialDataDir("~/Data/Spatial")
MazamaSpatialUtils::loadSpatialData("EEZCountries")
MazamaSpatialUtils::loadSpatialData("OSMTimezones")
MazamaSpatialUtils::loadSpatialData("NaturalEarthAdm1")
MazamaSpatialUtils::loadSpatialData("USCensusCounties")
Every time you table_save() your location table, a backup will be created so you can experiment without losing your work. File sizes are pretty tiny so you don’t have to worry about filling up your disk.
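A minimal sketch of saving a table (the collection name here is hypothetical, and the default output format is assumed):

# Save the table to the location data directory set earlier;
# a backup of any existing file is created automatically
table_save(wa_monitors_500, collectionName = "wa_monitors_500")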
Best wishes for well organized spatial metadata!