5 Minute Tutorial#
OPeNDAP - the vision#
The original vision of OPeNDAP (Cornillon et al., 1993) was to democratize remote data access by establishing the equivalences:
\( \boxed{\text{URL} \approx \text{Remote Dataset}} \)
\( \boxed{\text{URL + Constraints} \approx \text{Subset of Remote Dataset}} \)
That vision led to the development of the DAP2 protocol (formerly known as DODS). Currently, OPeNDAP and Unidata servers implement the modern and broader DAP4 protocol (see the DAP4 specification), continuing to enable the original vision of OPeNDAP.
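For illustration (using the aggExisting.ncml test file from later in this tutorial, and the constraint-expression syntax covered below), appending a dap4.ce query string is what turns a dataset URL into a subset request:
# URL ~ Remote Dataset
base_url = "http://test.opendap.org/opendap/data/ncml/agg/aggExisting.ncml"
# URL + Constraints ~ Subset of Remote Dataset
subset_url = base_url + "?dap4.ce=/T[1:1:1][0:1:0][0:1:0]"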
What pydap enables:#
The internal logic of PyDAP constructs constraint expressions for each URL interactively, hiding the abstraction from the user. Furthermore, using PyDAP as a backend engine for Xarray, the original OPeNDAP vision can be scaled with multi-core parallelism. Nonetheless, a basic understanding of Constraint Expressions comes in handy when aggregating multiple files, and can lead to more efficient workflows.
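As a minimal sketch of this behavior (assuming the aggExisting.ncml test dataset used in Case 1 below), slicing a variable opened with pydap's open_url is what triggers the constrained, server-side request:
from pydap.client import open_url

pyds = open_url("dap4://test.opendap.org/opendap/data/ncml/agg/aggExisting.ncml")
subset = pyds["T"][1, 0, 0]  # pydap translates this slice into a dap4.ce request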
Objectives:#
- Demonstrate how to specify the DAP4 protocol to the remote server.
- Use Xarray with PyDAP as the backend engine to download a subset of remote data in two use-case scenarios: a) an NcML aggregation file (a virtual dataset), and b) across two NetCDF files.
- Demonstrate distinct ways to use Constraint Expressions (CEs), and how these are passed down to the remote server so that subsetting is done by the server, in a data-proximate way, without performance loss on the client side.
Requirements#
- Datasets behind a DAP4-implementing server. For example, the test server: http://test.opendap.org/opendap/.
- pydap>=3.5.8
- xarray>=2025.0
- numpy>=2.0
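A minimal install step (assuming a pip-based notebook environment; dask is added here because the chunked examples below rely on it):
# install requirements; dask is needed for the chunked Xarray examples below
%pip install "pydap>=3.5.8" "xarray>=2025.0" "numpy>=2.0" dask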
Note
The vast majority of NASA’s OPeNDAP servers implement the DAP4 protocol.
from pydap.client import open_url, consolidate_metadata, create_session
import xarray as xr
import numpy as np
# create a session to inspect downloads; the cache_name must contain `debug`
session = create_session(use_cache=True, cache_kwargs={"cache_name": "data/debug_case1"})
session.cache.clear()
Case 1) Subsetting an NcML file#
The file is an NcML file representing a virtually aggregated dataset; it can be found on the test server under the name aggExisting.ncml.
OPeNDAP servers can be configured to produce NcML virtual datasets. Their advantage is that, with a single OPeNDAP URL, a user has access to an entire collection of files from which to subset.
ncml_url = "http://test.opendap.org/opendap/data/ncml/agg/aggExisting.ncml"
dap4_ncml_url = ncml_url.replace("http", "dap4")
print("=============================================================\n Remote DAP4 URL: \n", dap4_ncml_url, "\n=============================================================")
=============================================================
Remote DAP4 URL:
dap4://test.opendap.org/opendap/data/ncml/agg/aggExisting.ncml
=============================================================
ds = xr.open_dataset(
dap4_ncml_url,
engine='pydap',
session=session,
chunks={},
)
ds
<xarray.Dataset> Size: 12kB
Dimensions: (time: 59, lat: 3, lon: 4)
Coordinates:
* lat (lat) float32 12B 41.0 40.0 39.0
* lon (lon) float32 16B -109.0 -107.0 -105.0 -103.0
* time (time) datetime64[ns] 472B 2000-01-01 2000-01-02 ... 2000-02-28
Data variables:
P (time, lat, lon) float64 6kB dask.array<chunksize=(59, 3, 4), meta=np.ndarray>
T (time, lat, lon) float64 6kB dask.array<chunksize=(59, 3, 4), meta=np.ndarray>
Attributes:
title: Example Data
What happens if we download a single data point?#
ds['T']
<xarray.DataArray 'T' (time: 59, lat: 3, lon: 4)> Size: 6kB
dask.array<open_dataset-T, shape=(59, 3, 4), dtype=float64, chunksize=(59, 3, 4), chunktype=numpy.ndarray>
Coordinates:
* lat (lat) float32 12B 41.0 40.0 39.0
* lon (lon) float32 16B -109.0 -107.0 -105.0 -103.0
* time (time) datetime64[ns] 472B 2000-01-01 2000-01-02 ... 2000-02-28
Attributes:
long_name: surface temperature
units: degC
Maps: ('/time', '/lat', '/lon')
Note
The chunking info for T implies that the entire array is treated as a single chunk! This is the standard interpretation that Xarray makes of OPeNDAP URLs. What happens if we download a subset of the data?
# clear the cache to inspect what is being downloaded
session.cache.clear()
%%time
ds['T'].isel(time=1, lon=0, lat=0).load()
CPU times: user 74.1 ms, sys: 11.5 ms, total: 85.6 ms
Wall time: 325 ms
<xarray.DataArray 'T' ()> Size: 8B
array(100.)
Coordinates:
lat float32 4B 41.0
lon float32 4B -109.0
time datetime64[ns] 8B 2000-01-02
Attributes:
long_name: surface temperature
units: degC
Maps: ('/time', '/lat', '/lon')
print("====================================== \n Request sent to the Remote Server:\n ", session.cache.urls()[0].split("?")[-1].split("&dap4.checksum")[0].replace("%5B","[").replace("%5D","]").replace("%3A",":").replace("%2F","/"), "\n====================================== ")
======================================
Request sent to the Remote Server:
dap4.ce=/T[1:1:1][0:1:0][0:1:0]
======================================
The constraint expression is built from Xarray's .isel method and correctly passed to the server, which does all the subsetting work!
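For reference, DAP4 hyperslabs use the inclusive [start:stride:stop] syntax. A small hypothetical helper (hyperslab below is ours, not part of pydap or Xarray) makes the mapping from .isel indices explicit:
def hyperslab(start, stride, stop):
    """Format one inclusive DAP4 index range."""
    return f"[{start}:{stride}:{stop}]"

# isel(time=1, lat=0, lon=0) maps onto the CE the server received above
ce = "/T" + hyperslab(1, 1, 1) + hyperslab(0, 1, 0) + hyperslab(0, 1, 0)
print(ce)  # /T[1:1:1][0:1:0][0:1:0]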
Case 2) Subsetting across two separate files.#
The two files can be found on the test server, named coads_climatology and coads_climatology2. These two datasets share identical spatial dimensions, can be aggregated in time, and have nearly identical sets of variables.
Note
It is important to always check that datasets can be aggregated. PyDAP and Xarray have internal logic to check whether any two or more datasets can be concatenated, but these safety checks only take dimensions and coordinates into account.
An important step will be the use of Constraint Expressions (CEs) to ensure that only the variables of interest are concatenated.
Warning
One of these files has extra variables not present in the other, which we will discard by using CEs.
urls = ["http://test.opendap.org/opendap/data/nc/coads_climatology.nc", "http://test.opendap.org/opendap/data/nc/coads_climatology2.nc"]
dap4_urls = [url.replace("http","dap4") for url in urls]
# constraint expression
dap4_CE = "?dap4.ce=" + ";".join(["/SST", "/COADSX", "/COADSY", "/TIME"])
# Final list of OPeNDAP URLs
dap4ce_urls =[url+dap4_CE for url in dap4_urls]
print("====================================================\nThe following are the DAP4 OPeNDAP URLs \n", dap4ce_urls)
====================================================
The following are the DAP4 OPeNDAP URLs
['dap4://test.opendap.org/opendap/data/nc/coads_climatology.nc?dap4.ce=/SST;/COADSX;/COADSY;/TIME', 'dap4://test.opendap.org/opendap/data/nc/coads_climatology2.nc?dap4.ce=/SST;/COADSX;/COADSY;/TIME']
Note
Q: Why use CEs when Xarray has a .drop_variables method? Because Xarray needs to parse ALL of the metadata associated with the remote file first, and only then drop the variables. Some remote files contain thousands of variables; Xarray would parse all of them just to drop them. With a CE, the server sends constrained metadata associated with only the desired variables.
Warning
Xarray expects the dimensions to be present in the metadata. When constructing CEs, the user needs to make sure to include all the dimensions associated with the variables of interest. In the example above, COADSX, COADSY, and TIME are the dimensions of SST.
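A minimal sketch of the difference (reusing the dap4.ce syntax from the cell above; the two names below are only for illustration):
# requesting SST alone omits its dimension coordinates, which Xarray expects
bad_ce = "?dap4.ce=/SST"
# including every dimension of SST keeps the constrained metadata self-describing
good_ce = "?dap4.ce=" + ";".join(["/SST", "/COADSX", "/COADSY", "/TIME"])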
Consolidate Metadata speeds up the Dataset generation.#
consolidate_metadata(dap4ce_urls, session=session, concat_dim="TIME")
datacube has dimensions ['COADSX[0:1:179]', 'COADSY[0:1:89]'] , and concat dim: `['TIME']`
Note
consolidate_metadata(dap4_urls, concat_dim='...', session=session) downloads the dimensions of the remote files and stores them in a SQLite database for reuse. The session object becomes a way to authenticate, and acts as a database manager! This practice can make workflows roughly 10-100x faster!
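As a hedged sketch of that reuse (the cache_name matches the session created at the top of this tutorial), a later session pointed at the same cache file can read the consolidated metadata instead of re-downloading it:
# a new session backed by the same SQLite cache reuses the stored metadata
reused_session = create_session(use_cache=True, cache_kwargs={"cache_name": "data/debug_case1"})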
Use Xarray logic to download data.#
ds = xr.open_mfdataset(
dap4ce_urls,
engine='pydap',
concat_dim='TIME',
session=session,
combine="nested",
parallel=True,
decode_times=False,
)
ds
<xarray.Dataset> Size: 2MB
Dimensions: (TIME: 24, COADSY: 90, COADSX: 180)
Coordinates:
* COADSX (COADSX) float64 1kB 21.0 23.0 25.0 27.0 ... 375.0 377.0 379.0
* COADSY (COADSY) float64 720B -89.0 -87.0 -85.0 -83.0 ... 85.0 87.0 89.0
* TIME (TIME) float64 192B 366.0 1.096e+03 ... 7.671e+03 8.401e+03
Data variables:
SST (TIME, COADSY, COADSX) float32 2MB dask.array<chunksize=(12, 90, 180), meta=np.ndarray>
Attributes:
history: FERRET V4.30 (debug/no GUI) 15-Aug-96
Unlimited_Dimension: TIME
ds['SST']
<xarray.DataArray 'SST' (TIME: 24, COADSY: 90, COADSX: 180)> Size: 2MB
dask.array<concatenate, shape=(24, 90, 180), dtype=float32, chunksize=(12, 90, 180), chunktype=numpy.ndarray>
Coordinates:
* COADSX (COADSX) float64 1kB 21.0 23.0 25.0 27.0 ... 375.0 377.0 379.0
* COADSY (COADSY) float64 720B -89.0 -87.0 -85.0 -83.0 ... 85.0 87.0 89.0
* TIME (TIME) float64 192B 366.0 1.096e+03 ... 7.671e+03 8.401e+03
Attributes:
long_name: SEA SURFACE TEMPERATURE
history: From coads_climatology
units: Deg C
Maps: ('/TIME', '/COADSY', '/COADSX')
Note
The chunking of SST implies that the entire array within each file is a single chunk! This is the standard interpretation that Xarray makes of OPeNDAP URLs. What if we download a single spatial point from a single remote file?
session.cache.clear()
%%time
ds['SST'].isel(TIME=0, COADSX=0, COADSY=0).load() # this should download a single point from one of the files
CPU times: user 18.1 ms, sys: 17.8 ms, total: 35.8 ms
Wall time: 541 ms
<xarray.DataArray 'SST' ()> Size: 4B
array(-1.e+34, dtype=float32)
Coordinates:
COADSX float64 8B 21.0
COADSY float64 8B -89.0
TIME float64 8B 366.0
Attributes:
long_name: SEA SURFACE TEMPERATURE
history: From coads_climatology
units: Deg C
Maps: ('/TIME', '/COADSY', '/COADSX')
print("====================================== \n Request sent to the Remote Server:\n ", session.cache.urls()[0].split("?")[-1].split("&dap4.checksum")[0].replace("%5B","[").replace("%5D","]").replace("%3A",":").replace("%2F","/"), "\n====================================== ")
======================================
Request sent to the Remote Server:
dap4.ce=/SST[0:1:11][0:1:89][0:1:179]
======================================
The entire variable is unnecessarily downloaded!#
Ideally, we would want to see the following request (in the constraint expression) sent to the remote server:
dap4.ce=/SST[0:1:0][0:1:0][0:1:0]
It appears that xr.open_mfdataset does not pass the slice argument to the server for each remote dataset. Instead, it downloads the whole chunk (i.e., the entire data array) in a single request, subsets it, and then aggregates the data.
How to pass the slice from Xarray to the Remote Server#
The answer is to chunk the dataset when creating it, choosing chunk sizes that match the expected size of your subset. That way the subset will be processed within a single request per remote file.
Warning
If you chunk the dataset with a size smaller than your expected download, you will trigger many downloads per remote file, forcing Xarray to do extra work to assemble the data.
# consolidate metadata again, since the cached metadata was cleared before
consolidate_metadata(dap4ce_urls, session=session, concat_dim="TIME")
datacube has dimensions ['COADSX[0:1:179]', 'COADSY[0:1:89]'] , and concat dim: `['TIME']`
# For a single element in all dimensions, the expected size of the download is:
expected_sizes = {"TIME":1, "COADSX":1, "COADSY":1}
%%time
ds = xr.open_mfdataset(
dap4ce_urls,
engine='pydap',
concat_dim='TIME',
session=session,
combine="nested",
parallel=True,
decode_times=False,
chunks=expected_sizes,
)
session.cache.clear()
CPU times: user 200 ms, sys: 45.3 ms, total: 245 ms
Wall time: 367 ms
ds['SST'] # inspect chunks before download
<xarray.DataArray 'SST' (TIME: 24, COADSY: 90, COADSX: 180)> Size: 2MB
dask.array<concatenate, shape=(24, 90, 180), dtype=float32, chunksize=(1, 1, 1), chunktype=numpy.ndarray>
Coordinates:
* COADSX (COADSX) float64 1kB 21.0 23.0 25.0 27.0 ... 375.0 377.0 379.0
* COADSY (COADSY) float64 720B -89.0 -87.0 -85.0 -83.0 ... 85.0 87.0 89.0
* TIME (TIME) float64 192B 366.0 1.096e+03 ... 7.671e+03 8.401e+03
Attributes:
long_name: SEA SURFACE TEMPERATURE
history: From coads_climatology
units: Deg C
Maps: ('/TIME', '/COADSY', '/COADSX')
%%time
ds['SST'].isel(TIME=0, COADSX=0, COADSY=0).load() # triggers download of an individual chunk
CPU times: user 109 ms, sys: 16.4 ms, total: 126 ms
Wall time: 457 ms
<xarray.DataArray 'SST' ()> Size: 4B
array(-1.e+34, dtype=float32)
Coordinates:
COADSX float64 8B 21.0
COADSY float64 8B -89.0
TIME float64 8B 366.0
Attributes:
long_name: SEA SURFACE TEMPERATURE
history: From coads_climatology
units: Deg C
Maps: ('/TIME', '/COADSY', '/COADSX')
print("====================================== \n Request sent to the Remote Server:\n ", session.cache.urls()[0].split("?")[-1].split("&dap4.checksum")[0].replace("%5B","[").replace("%5D","]").replace("%3A",":").replace("%2F","/"), "\n====================================== ")
======================================
Request sent to the Remote Server:
dap4.ce=/SST[0:1:0][0:1:0][0:1:0]
======================================
Warning: Be cautious about chunking#
We now downloaded exactly what we requested! However, in some scenarios the download can be ~10x slower than when we requested more data!! The slowdown can sometimes be attributed to the number of chunks in the generated dask graph:
- No chunking: downloads the entire array in each file; 2 chunks in 5 dask graphs (one per file).
- Chunking: downloads only the desired element of each file; 388800 chunks in 5 dask graphs.
Ideally, the chunk manager should only trigger the download of a single chunk. However, 388800 chunks were created to ensure the slice is passed to the server, which can sometimes lead to slowdowns on the client side (see the arithmetic sketch below).
In the scenario above, we went to an extreme. It is better to find a compromise in chunk sizes. We demonstrate that below, now subsetting across all time (i.e., across both files).
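A rough sanity check of that chunk count (assuming the per-file array shape of (12, 90, 180) shown above):
import math

# chunks of (1, 1, 1) on a (12, 90, 180) array -> one chunk per element
chunks_per_file = math.prod((12, 90, 180))  # 194400
print(chunks_per_file * 2)                  # 388800 across both files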
consolidate_metadata(dap4ce_urls, session=session, concat_dim="TIME")
datacube has dimensions ['COADSX[0:1:179]', 'COADSY[0:1:89]'] , and concat dim: `['TIME']`
download_sizes = {"COADSY":1} # note that we will subset across all time
%%time
ds = xr.open_mfdataset(
dap4ce_urls,
engine='pydap',
concat_dim='TIME',
session=session,
combine="nested",
parallel=True,
decode_times=False,
chunks=download_sizes,
)
session.cache.clear()
CPU times: user 75.6 ms, sys: 59.9 ms, total: 136 ms
Wall time: 258 ms
ds['SST']
<xarray.DataArray 'SST' (TIME: 24, COADSY: 90, COADSX: 180)> Size: 2MB
dask.array<concatenate, shape=(24, 90, 180), dtype=float32, chunksize=(12, 1, 180), chunktype=numpy.ndarray>
Coordinates:
* COADSX (COADSX) float64 1kB 21.0 23.0 25.0 27.0 ... 375.0 377.0 379.0
* COADSY (COADSY) float64 720B -89.0 -87.0 -85.0 -83.0 ... 85.0 87.0 89.0
* TIME (TIME) float64 192B 366.0 1.096e+03 ... 7.671e+03 8.401e+03
Attributes:
long_name: SEA SURFACE TEMPERATURE
history: From coads_climatology
units: Deg C
Maps: ('/TIME', '/COADSY', '/COADSX')%%time
ds['SST'].isel(COADSX=0, COADSY=0).load()
CPU times: user 19.4 ms, sys: 12.8 ms, total: 32.2 ms
Wall time: 318 ms
<xarray.DataArray 'SST' (TIME: 24)> Size: 96B
array([-1.e+34, -1.e+34, -1.e+34, -1.e+34, -1.e+34, -1.e+34, -1.e+34,
-1.e+34, -1.e+34, -1.e+34, -1.e+34, -1.e+34, -1.e+34, -1.e+34,
-1.e+34, -1.e+34, -1.e+34, -1.e+34, -1.e+34, -1.e+34, -1.e+34,
-1.e+34, -1.e+34, -1.e+34], dtype=float32)
Coordinates:
COADSX float64 8B 21.0
COADSY float64 8B -89.0
* TIME (TIME) float64 192B 366.0 1.096e+03 ... 7.671e+03 8.401e+03
Attributes:
long_name: SEA SURFACE TEMPERATURE
history: From coads_climatology
units: Deg C
Maps: ('/TIME', '/COADSY', '/COADSX')
print("====================================== \n Parallel Requests sent to the Remote Server:\n ", [url.split("?")[-1].split("&dap4.checksum")[0].replace("%5B","[").replace("%5D","]").replace("%3A",":").replace("%2F","/") for url in session.cache.urls()], "\n====================================== ")
======================================
Parallel Requests sent to the Remote Server:
['dap4.ce=/SST[0:1:11][0:1:0][0:1:179]', 'dap4.ce=/SST[0:1:11][0:1:0][0:1:179]']
======================================
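This compromise keeps the dask graph small while still letting the server do the subsetting. A rough count, under the same assumed shapes as before:
# chunks of (12, 1, 180) on a (12, 90, 180) array -> 90 chunks per file;
# the point subset touches exactly one chunk per file, hence the two requests
print(90 * 2)  # 180 chunks total across both files; only 2 downloaded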