[Git][debian-gis-team/pyresample][upstream] New upstream version 1.13.0
Antonio Valentino
gitlab at salsa.debian.org
Sat Sep 14 18:44:27 BST 2019
Antonio Valentino pushed to branch upstream at Debian GIS Project / pyresample
Commits:
b62672cc by Antonio Valentino at 2019-09-14T06:53:12Z
New upstream version 1.13.0
- - - - -
29 changed files:
- .travis.yml
- CHANGELOG.md
- appveyor.yml
- docs/source/API.rst
- docs/source/conf.py
- docs/source/geo_def.rst
- docs/source/geometry_utils.rst
- docs/source/index.rst
- docs/source/plot.rst
- docs/source/swath.rst
- pyresample/_spatial_mp.py
- pyresample/area_config.py
- pyresample/bilinear/xarr.py
- + pyresample/bucket/__init__.py
- pyresample/ewa/_fornav.cpp
- pyresample/ewa/_fornav_templates.cpp
- pyresample/ewa/_ll2cr.c
- pyresample/geometry.py
- pyresample/test/__init__.py
- pyresample/test/test_bilinear.py
- + pyresample/test/test_bucket.py
- pyresample/test/test_geometry.py
- pyresample/test/test_grid.py
- pyresample/test/test_spatial_mp.py
- pyresample/test/test_utils.py
- pyresample/test/utils.py
- pyresample/utils/_proj4.py
- pyresample/version.py
- setup.py
Changes:
=====================================
.travis.yml
=====================================
@@ -1,7 +1,7 @@
language: python
env:
global:
- - PYTHON_VERSION=$PYTHON_VERSION
+ - PYTHON_VERSION=$TRAVIS_PYTHON_VERSION
- NUMPY_VERSION=stable
- MAIN_CMD='python setup.py'
- CONDA_DEPENDENCIES='xarray dask toolz Cython pykdtree sphinx cartopy rasterio pillow matplotlib
@@ -10,7 +10,7 @@ env:
- EVENT_TYPE='push pull_request'
- SETUP_CMD='test'
- CONDA_CHANNELS='conda-forge'
- - CONDA_CHANNEL_PRIORITY='True'
+ - CONDA_CHANNEL_PRIORITY='strict'
matrix:
include:
- env: PYTHON_VERSION=2.7
@@ -18,19 +18,25 @@ matrix:
- env: PYTHON_VERSION=2.7
os: osx
language: generic
- - env: PYTHON_VERSION=3.6
+ - env: PYTHON_VERSION=3.7
os: linux
- - env: PYTHON_VERSION=3.6
+ - env: PYTHON_VERSION=3.7
os: osx
language: generic
+ - env: PYTHON_VERSION=3.7
+ os: windows
+ language: c
+ allow_failures:
+ - os: windows
install:
-- git clone --depth 1 git://github.com/astropy/ci-helpers.git
+# - git clone --depth 1 git://github.com/astropy/ci-helpers.git
+- git clone --depth 1 -b all-the-fixes git://github.com/djhoese/ci-helpers.git
- source ci-helpers/travis/setup_conda.sh
script:
- coverage run --source=pyresample setup.py test
- cd docs && mkdir doctest && sphinx-build -E -n -b doctest ./source ./doctest && cd ..
after_success:
-- if [[ $PYTHON_VERSION == 3.6 ]]; then coveralls; codecov; fi
+- if [[ $PYTHON_VERSION == 3.7 ]]; then coveralls; codecov; fi
deploy:
- provider: pypi
user: dhoese
=====================================
CHANGELOG.md
=====================================
@@ -1,3 +1,40 @@
+## Version 1.13.0 (2019/09/13)
+
+### Issues Closed
+
+* [Issue 210](https://github.com/pytroll/pyresample/issues/210) - Incompatibility with new proj/pyproj versions
+
+In this release 1 issue was closed.
+
+### Pull Requests Merged
+
+#### Bugs fixed
+
+* [PR 213](https://github.com/pytroll/pyresample/pull/213) - Remove extra conversion to dask array
+* [PR 208](https://github.com/pytroll/pyresample/pull/208) - Bugfix bilinear resampler masking ([735](https://github.com/pytroll/satpy/issues/735))
+* [PR 207](https://github.com/pytroll/pyresample/pull/207) - Make output index tiling in bilinear interpolation work with dask
+* [PR 205](https://github.com/pytroll/pyresample/pull/205) - Exclude NaNs from Bucket Average
+* [PR 197](https://github.com/pytroll/pyresample/pull/197) - Fix to_cartopy_crs for latlong projections
+* [PR 196](https://github.com/pytroll/pyresample/pull/196) - Improve handling of EPSG codes with pyproj 2.0+
+
+#### Features added
+
+* [PR 212](https://github.com/pytroll/pyresample/pull/212) - Use slices in bilinear resampler
+* [PR 203](https://github.com/pytroll/pyresample/pull/203) - Add Numpy version limitation for Python 2
+* [PR 198](https://github.com/pytroll/pyresample/pull/198) - Clarify warning if no overlap data and projection
+* [PR 196](https://github.com/pytroll/pyresample/pull/196) - Improve handling of EPSG codes with pyproj 2.0+
+* [PR 192](https://github.com/pytroll/pyresample/pull/192) - Add bucket resampling
+
+#### Documentation changes
+
+* [PR 204](https://github.com/pytroll/pyresample/pull/204) - Add Example for Regular Lat-Lon Grid
+* [PR 201](https://github.com/pytroll/pyresample/pull/201) - fix bug in plot example code
+* [PR 198](https://github.com/pytroll/pyresample/pull/198) - Clarify warning if no overlap data and projection
+* [PR 195](https://github.com/pytroll/pyresample/pull/195) - Update docs for create_area_def and improve AreaDefinition property consistency
+
+In this release 15 pull requests were closed.
+
+
## Version 1.12.3 (2019/05/17)
### Pull Requests Merged
=====================================
appveyor.yml
=====================================
@@ -14,10 +14,9 @@ environment:
NUMPY_VERSION: "stable"
install:
- - "git clone --depth 1 git://github.com/astropy/ci-helpers.git"
+ - "git clone --depth 1 -b all-the-fixes git://github.com/djhoese/ci-helpers.git"
- "powershell ci-helpers/appveyor/install-miniconda.ps1"
- - "SET PATH=%PYTHON%;%PYTHON%\\Scripts;%PATH%"
- - "activate test"
+ - "conda activate test"
build: false # Not a C# project, build stuff at the test step instead.
=====================================
docs/source/API.rst
=====================================
@@ -31,6 +31,11 @@ pyresample.utils
.. automodule:: pyresample.utils
:members:
+pyresample.area_config
+----------------------
+.. automodule:: pyresample.area_config
+ :members:
+
pyresample.data_reduce
----------------------
.. automodule:: pyresample.data_reduce
@@ -45,3 +50,8 @@ pyresample.ewa
--------------
.. automodule:: pyresample.ewa
:members:
+
+pyresample.bucket
+-----------------
+.. automodule:: pyresample.bucket
+ :members:
=====================================
docs/source/conf.py
=====================================
@@ -237,5 +237,5 @@ intersphinx_mapping = {
'pyresample': ('https://pyresample.readthedocs.io/en/stable', None),
'trollsift': ('https://trollsift.readthedocs.io/en/stable', None),
'trollimage': ('https://trollimage.readthedocs.io/en/stable', None),
- 'proj4': ('https://proj4.org', None),
+ 'proj4': ('https://proj.org', None),
}
=====================================
docs/source/geo_def.rst
=====================================
@@ -51,60 +51,48 @@ where
* **upper_right_x**: projection x coordinate of upper right corner of upper right pixel
* **upper_right_y**: projection y coordinate of upper right corner of upper right pixel
-Below are three examples of creating an ``AreaDefinition``:
+Example:
.. doctest::
>>> from pyresample.geometry import AreaDefinition
-
- >>> # a) Using a projection dictionary
>>> area_id = 'ease_sh'
>>> description = 'Antarctic EASE grid'
>>> proj_id = 'ease_sh'
- >>> proj_dict = {'proj': 'laea', 'lat_0': -90, 'lon_0': 0, 'a': 6371228.0, 'units': 'm'}
+ >>> projection = {'proj': 'laea', 'lat_0': -90, 'lon_0': 0, 'a': 6371228.0, 'units': 'm'}
>>> width = 425
>>> height = 425
>>> area_extent = (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)
- >>> area_def = AreaDefinition(area_id, description, proj_id, proj_dict,
+ >>> area_def = AreaDefinition(area_id, description, proj_id, projection,
... width, height, area_extent)
- >>> print(area_def)
+ >>> area_def
Area ID: ease_sh
Description: Antarctic EASE grid
Projection ID: ease_sh
- Projection: {'a': '6371228.0', 'lat_0': '-90.0', 'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}
+ Projection: {'R': '6371228', 'lat_0': '-90', 'lon_0': '0', 'no_defs': 'None', 'proj': 'laea', 'type': 'crs', 'units': 'm', 'x_0': '0', 'y_0': '0'}
Number of columns: 425
Number of rows: 425
Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)
- >>> # b) Using an explicit proj4 string
- >>> proj_string = '+proj=laea +lat_0=-90 +lon_0=0 +a=6371228.0 +units=m'
- >>> area_def = AreaDefinition(area_id, description, proj_id, proj_string,
+You can also specify the projection using a PROJ.4 string
+
+.. doctest::
+
+ >>> projection = '+proj=laea +lat_0=-90 +lon_0=0 +a=6371228.0 +units=m'
+ >>> area_def = AreaDefinition(area_id, description, proj_id, projection,
... width, height, area_extent)
- >>> print(area_def)
- Area ID: ease_sh
- Description: Antarctic EASE grid
- Projection ID: ease_sh
- Projection: {'a': '6371228.0', 'lat_0': '-90.0', 'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}
- Number of columns: 425
- Number of rows: 425
- Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)
- >>> # c) Using an EPSG code in a proj4 string
- >>> proj_string = '+init=EPSG:3409' # Use 'EPSG:3409' with pyproj 2.0+
- >>> area_def = AreaDefinition(area_id, description, proj_id, proj_string,
+or an `EPSG code <https://www.epsg-registry.org/>`_:
+
+.. doctest::
+
+ >>> projection = '+init=EPSG:3409' # Use 'EPSG:3409' with pyproj 2.0+
+ >>> area_def = AreaDefinition(area_id, description, proj_id, projection,
... width, height, area_extent)
- >>> print(area_def)
- Area ID: ease_sh
- Description: Antarctic EASE grid
- Projection ID: ease_sh
- Projection: {'init': 'EPSG:3409'}
- Number of columns: 425
- Number of rows: 425
- Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)
.. note::
- When using pyproj 2.0+, please use the new ``'EPSG:XXXX'`` syntax
+ With pyproj 2.0+, please use the new ``'EPSG:XXXX'`` syntax
as the old ``'+init=EPSG:XXXX'`` is no longer supported.
Creating an ``AreaDefinition`` can be complex if you don't know everything
=====================================
docs/source/geometry_utils.rst
=====================================
@@ -11,7 +11,7 @@ AreaDefinition Creation
The main utility function for creating
:class:`~pyresample.geometry.AreaDefinition` objects is the
-:func:`~pyresample.utils.create_area_def` function. This function will take
+:func:`~pyresample.area_config.create_area_def` function. This function will take
whatever information can be provided to describe a geographic region and
create a valid ``AreaDefinition`` object if possible. If it can't make
a fully specified ``AreaDefinition`` then it will provide a
@@ -46,17 +46,17 @@ and optional arguments:
.. doctest::
- >>> from pyresample import utils
+ >>> from pyresample import create_area_def
>>> area_id = 'ease_sh'
>>> proj_dict = {'proj': 'laea', 'lat_0': -90, 'lon_0': 0, 'a': 6371228.0, 'units': 'm'}
>>> center = (0, 0)
>>> radius = (5326849.0625, 5326849.0625)
>>> resolution = (25067.525, 25067.525)
- >>> area_def = utils.create_area_def(area_id, proj_dict, center=center, radius=radius, resolution=resolution)
+ >>> area_def = create_area_def(area_id, proj_dict, center=center, radius=radius, resolution=resolution)
>>> print(area_def)
Area ID: ease_sh
Description: ease_sh
- Projection: {'a': '6371228.0', 'lat_0': '-90.0', 'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}
+ Projection: {'R': '6371228', 'lat_0': '-90', 'lon_0': '0', 'no_defs': 'None', 'proj': 'laea', 'type': 'crs', 'units': 'm', 'x_0': '0', 'y_0': '0'}
Number of columns: 425
Number of rows: 425
Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)
@@ -68,12 +68,12 @@ keyword arguments can be specified with one value if ``dx == dy``:
.. doctest::
>>> proj_string = '+proj=laea +lat_0=-90 +lon_0=0 +a=6371228.0 +units=m'
- >>> area_def = utils.create_area_def(area_id, proj_string, center=center,
+ >>> area_def = create_area_def(area_id, proj_string, center=center,
... radius=5326849.0625, resolution=25067.525)
>>> print(area_def)
Area ID: ease_sh
Description: ease_sh
- Projection: {'a': '6371228.0', 'lat_0': '-90.0', 'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}
+ Projection: {'R': '6371228', 'lat_0': '-90', 'lon_0': '0', 'no_defs': 'None', 'proj': 'laea', 'type': 'crs', 'units': 'm', 'x_0': '0', 'y_0': '0'}
Number of columns: 425
Number of rows: 425
Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)
@@ -85,25 +85,44 @@ the mercator projection with radius and resolution defined in degrees.
.. doctest::
>>> proj_dict = {'proj': 'merc', 'lat_0': 0, 'lon_0': 0, 'a': 6371228.0, 'units': 'm'}
- >>> area_def = utils.create_area_def(area_id, proj_dict, center=(0, 0),
+ >>> area_def = create_area_def(area_id, proj_dict, center=(0, 0),
... radius=(47.90379019311, 43.1355420077),
... resolution=(0.22542960090875294, 0.22542901929487608),
... units='degrees', description='Antarctic EASE grid')
>>> print(area_def)
Area ID: ease_sh
Description: Antarctic EASE grid
- Projection: {'a': '6371228.0', 'lat_0': '0.0', 'lon_0': '0.0', 'proj': 'merc', 'units': 'm'}
+ Projection: {'a': '6371228.0', 'lat_0': '0', 'lon_0': '0', 'proj': 'merc', 'type': 'crs', 'units': 'm'}
Number of columns: 425
Number of rows: 425
Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)
+The area definition corresponding to a given lat-lon grid (defined by area extent and resolution)
+can be obtained as follows:
+
+.. doctest::
+
+ >>> area_def = create_area_def('my_area',
+ ... {'proj': 'latlong', 'lon_0': 0},
+ ... area_extent=[-180, -90, 180, 90],
+ ... resolution=1,
+ ... units='degrees',
+ ... description='Global 1x1 degree lat-lon grid')
+ >>> print(area_def)
+ Area ID: my_area
+ Description: Global 1x1 degree lat-lon grid
+ Projection: {'lon_0': '0', 'proj': 'latlong', 'type': 'crs'}
+ Number of columns: 360
+ Number of rows: 180
+ Area extent: (-180.0, -90.0, 180.0, 90.0)
+
If only one of **area_extent** or **shape** can be computed from the
information provided by the user, a
:class:`~pyresample.geometry.DynamicAreaDefinition` object is returned:
.. doctest::
- >>> area_def = utils.create_area_def(area_id, proj_string, radius=radius, resolution=resolution)
+ >>> area_def = create_area_def(area_id, proj_string, radius=radius, resolution=resolution)
>>> print(type(area_def))
<class 'pyresample.geometry.DynamicAreaDefinition'>
@@ -118,7 +137,7 @@ AreaDefinition Class Methods
There are four class methods available on the
:class:`~pyresample.geometry.AreaDefinition` class utilizing
-:func:`~pyresample.utils.create_area_def` providing a simpler interface to the
+:func:`~pyresample.area_config.create_area_def` providing a simpler interface to the
functionality described in the previous section.
Hence each argument used below is the same as the ``create_area_def`` arguments
described above and can be used in the same way (i.e. units). The following
@@ -132,7 +151,6 @@ from_extent
.. doctest::
- >>> from pyresample import utils
>>> from pyresample.geometry import AreaDefinition
>>> area_id = 'ease_sh'
>>> proj_string = '+proj=laea +lat_0=-90 +lon_0=0 +a=6371228.0 +units=m'
@@ -142,7 +160,7 @@ from_extent
>>> print(area_def)
Area ID: ease_sh
Description: ease_sh
- Projection: {'a': '6371228.0', 'lat_0': '-90.0', 'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}
+ Projection: {'R': '6371228', 'lat_0': '-90', 'lon_0': '0', 'no_defs': 'None', 'proj': 'laea', 'type': 'crs', 'units': 'm', 'x_0': '0', 'y_0': '0'}
Number of columns: 425
Number of rows: 425
Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)
@@ -161,7 +179,7 @@ from_circle
>>> print(area_def)
Area ID: ease_sh
Description: ease_sh
- Projection: {'a': '6371228.0', 'lat_0': '-90.0', 'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}
+ Projection: {'R': '6371228', 'lat_0': '-90', 'lon_0': '0', 'no_defs': 'None', 'proj': 'laea', 'type': 'crs', 'units': 'm', 'x_0': '0', 'y_0': '0'}
Number of columns: 425
Number of rows: 425
Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)
@@ -173,7 +191,7 @@ from_circle
>>> print(area_def)
Area ID: ease_sh
Description: ease_sh
- Projection: {'a': '6371228.0', 'lat_0': '-90.0', 'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}
+ Projection: {'R': '6371228', 'lat_0': '-90', 'lon_0': '0', 'no_defs': 'None', 'proj': 'laea', 'type': 'crs', 'units': 'm', 'x_0': '0', 'y_0': '0'}
Number of columns: 425
Number of rows: 425
Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)
@@ -189,7 +207,7 @@ from_area_of_interest
>>> print(area_def)
Area ID: ease_sh
Description: ease_sh
- Projection: {'a': '6371228.0', 'lat_0': '-90.0', 'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}
+ Projection: {'R': '6371228', 'lat_0': '-90', 'lon_0': '0', 'no_defs': 'None', 'proj': 'laea', 'type': 'crs', 'units': 'm', 'x_0': '0', 'y_0': '0'}
Number of columns: 425
Number of rows: 425
Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)
@@ -204,7 +222,7 @@ from_ul_corner
>>> print(area_def)
Area ID: ease_sh
Description: ease_sh
- Projection: {'a': '6371228.0', 'lat_0': '-90.0', 'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}
+ Projection: {'R': '6371228', 'lat_0': '-90', 'lon_0': '0', 'no_defs': 'None', 'proj': 'laea', 'type': 'crs', 'units': 'm', 'x_0': '0', 'y_0': '0'}
Number of columns: 425
Number of rows: 425
Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)
@@ -374,7 +392,7 @@ read a single ``AreaDefinition`` named ``corner`` by doing:
>>> print(area_def)
Area ID: corner
Description: Example of making an area definition using shape, upper_left_extent, and resolution
- Projection: {'a': '6371228.0', 'lat_0': '-90.0', 'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}
+ Projection: {'R': '6371228', 'lat_0': '-90', 'lon_0': '0', 'no_defs': 'None', 'proj': 'laea', 'type': 'crs', 'units': 'm', 'x_0': '0', 'y_0': '0'}
Number of columns: 425
Number of rows: 425
Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)
@@ -388,7 +406,7 @@ series of arguments:
>>> print(boundary)
Area ID: ease_sh
Description: Example of making an area definition using shape and area_extent
- Projection: {'a': '6371228.0', 'lat_0': '-90.0', 'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}
+ Projection: {'R': '6371228', 'lat_0': '-90', 'lon_0': '0', 'no_defs': 'None', 'proj': 'laea', 'type': 'crs', 'units': 'm', 'x_0': '0', 'y_0': '0'}
Number of columns: 425
Number of rows: 425
Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)
@@ -429,7 +447,7 @@ An area definition dict can be read using
Area ID: ease_nh
Description: Arctic EASE grid
Projection ID: ease_nh
- Projection: {'a': '6371228.0', 'lat_0': '90.0', 'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}
+ Projection: {'R': '6371228', 'lat_0': '90', 'lon_0': '0', 'no_defs': 'None', 'proj': 'laea', 'type': 'crs', 'units': 'm', 'x_0': '0', 'y_0': '0'}
Number of columns: 425
Number of rows: 425
Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)
@@ -445,7 +463,7 @@ Several area definitions can be read at once using the region names in an argume
Area ID: ease_sh
Description: Antarctic EASE grid
Projection ID: ease_sh
- Projection: {'a': '6371228.0', 'lat_0': '-90.0', 'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}
+ Projection: {'R': '6371228', 'lat_0': '-90', 'lon_0': '0', 'no_defs': 'None', 'proj': 'laea', 'type': 'crs', 'units': 'm', 'x_0': '0', 'y_0': '0'}
Number of columns: 425
Number of rows: 425
Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)
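The new lat-lon grid example above lets ``create_area_def`` derive the grid shape (360 columns, 180 rows) from the area extent and resolution. A minimal sketch of that arithmetic, assuming a regular grid in degrees (the helper name is illustrative, not a pyresample internal):

```python
def grid_shape(area_extent, resolution):
    """Return (rows, cols) for a regular grid covering the extent.

    area_extent is (lon_min, lat_min, lon_max, lat_max) and resolution
    is the cell size in the same units (degrees here).
    """
    lon_min, lat_min, lon_max, lat_max = area_extent
    cols = round((lon_max - lon_min) / resolution)
    rows = round((lat_max - lat_min) / resolution)
    return rows, cols

# Global 1x1 degree grid, matching the doctest output above:
print(grid_shape((-180, -90, 180, 90), 1))  # (180, 360)
```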
=====================================
docs/source/index.rst
=====================================
@@ -16,6 +16,7 @@ Pyresample offers multiple resampling algorithms including:
- Nearest Neighbor
- Elliptical Weighted Average (EWA)
- Bilinear
+- Bucket resampling (count hits per bin, averaging, ratios)
For nearest neighbor and bilinear interpolation pyresample uses a kd-tree
approach by using the fast KDTree implementation provided by the
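Bucket resampling, the new algorithm added in this release, assigns each input sample to an output bin and then aggregates per bin. A hypothetical numpy sketch of the count and average operations on precomputed 1-D bin indices (pyresample's ``BucketResampler`` does the real work with dask on projection coordinates; the function names here are illustrative):

```python
import numpy as np

def bucket_count(bin_index, n_bins):
    """Number of input samples ("hits") landing in each output bin."""
    return np.bincount(bin_index, minlength=n_bins)

def bucket_average(bin_index, values, n_bins):
    """Mean of the values in each bin; NaN where a bin received no hits."""
    counts = np.bincount(bin_index, minlength=n_bins)
    sums = np.bincount(bin_index, weights=values, minlength=n_bins)
    with np.errstate(invalid='ignore', divide='ignore'):
        return sums / counts  # 0/0 -> NaN for empty bins

idx = np.array([0, 0, 2, 1, 2, 2])
vals = np.array([1.0, 3.0, 5.0, 7.0, 6.0, 4.0])
print(bucket_count(idx, 4))          # [2 1 3 0]
print(bucket_average(idx, vals, 4))  # [2. 7. 5. nan]
```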
=====================================
docs/source/plot.rst
=====================================
@@ -130,7 +130,7 @@ Cartopy CRS object.
>>> ax = plt.axes(projection=crs)
>>> ax.coastlines()
>>> ax.set_global()
- >>> plt.imshow(data, transform=crs, extent=crs.bounds, origin='upper')
+ >>> plt.imshow(result, transform=crs, extent=crs.bounds, origin='upper')
>>> plt.colorbar()
>>> plt.savefig('viirs_i04_cartopy.png')
=====================================
docs/source/swath.rst
=====================================
@@ -459,3 +459,12 @@ Example
>>> rows_per_scan = 5
>>> # fornav resamples the swath data to the gridded area
>>> num_valid_points, gridded_data = fornav(cols, rows, area_def, data, rows_per_scan=rows_per_scan)
+
+pyresample.bucket
+-----------------
+
+.. autoclass:: pyresample.bucket.BucketResampler
+ :noindex:
+
+See :class:`~pyresample.bucket.BucketResampler` API documentation for
+the details of method parameters.
=====================================
pyresample/_spatial_mp.py
=====================================
@@ -110,36 +110,15 @@ class cKDTree_MP(object):
class BaseProj(pyproj.Proj):
+ """Helper class for easier backwards compatibility."""
def __init__(self, projparams=None, preserve_units=True, **kwargs):
- # Copy dict-type arguments as they will be modified in-place
- if isinstance(projparams, dict):
- projparams = projparams.copy()
-
- # Pyproj<2 uses __new__ to initiate data and does not define its own __init__ method.
if is_pyproj2():
- # If init is found in any of the data, override any other area parameters.
- if 'init' in kwargs:
- warnings.warn('init="EPSG:XXXX" is no longer supported. Use "EPSG:XXXX" as a proj string instead')
- projparams = kwargs.pop('init')
- # Proj takes params in projparams over the params in kwargs.
- if isinstance(projparams, (dict, str)) and 'init' in projparams:
- warn_msg = '{"init": "EPSG:XXXX"} is no longer supported. Use "EPSG:XXXX" as a proj string instead'
- if isinstance(projparams, str):
- warn_msg = '"+init=EPSG:XXXX" is no longer supported. Use "EPSG:XXXX" as a proj string instead'
- # Proj-dicts are cleaner to parse than strings.
- projparams = proj4_str_to_dict(projparams)
- warnings.warn(warn_msg)
- projparams = projparams.pop('init')
- # New syntax 'EPSG:XXXX'
- if 'EPSG' in kwargs or (isinstance(projparams, dict) and 'EPSG' in projparams):
- if 'EPSG' in kwargs:
- epsg_code = kwargs.pop('EPSG')
- else:
- epsg_code = projparams.pop('EPSG')
- projparams = 'EPSG:{}'.format(epsg_code)
-
- super(BaseProj, self).__init__(projparams=projparams, preserve_units=preserve_units, **kwargs)
+ # have to have this because pyproj uses __new__
+ # subclasses would fail when calling __init__ otherwise
+ super(BaseProj, self).__init__(projparams=projparams,
+ preserve_units=preserve_units,
+ **kwargs)
def is_latlong(self):
if is_pyproj2():
@@ -148,6 +127,7 @@ class BaseProj(pyproj.Proj):
class Proj(BaseProj):
+ """Helper class to skip transforming lon/lat projection coordinates."""
def __call__(self, data1, data2, inverse=False, radians=False,
errcheck=False, nprocs=1):
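The ``Proj`` helper's new docstring describes skipping the transform for lon/lat projections: for geographic coordinate systems the "projection" coordinates already are longitudes and latitudes, so no conversion is needed. A hypothetical standalone sketch of that idea (class and parameter names are illustrative, not the pyresample implementation):

```python
class PassthroughProj:
    """Skip the coordinate transform for geographic (lon/lat) projections."""

    def __init__(self, is_latlong):
        self._is_latlong = is_latlong

    def __call__(self, lons, lats, transform=None, inverse=False):
        if self._is_latlong:
            # lon/lat in, lon/lat out: nothing to do
            return lons, lats
        # otherwise delegate to a real transform function (assumed provided)
        return transform(lons, lats, inverse=inverse)
```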
=====================================
pyresample/area_config.py
=====================================
@@ -335,10 +335,11 @@ def _get_proj4_args(proj4_args):
"""Create dict from proj4 args."""
from pyresample.utils._proj4 import convert_proj_floats
if isinstance(proj4_args, (str, six.text_type)):
- proj_config = proj4_str_to_dict(str(proj4_args))
- else:
- from configobj import ConfigObj
- proj_config = ConfigObj(proj4_args)
+ # float conversion is done in `proj4_str_to_dict` already
+ return proj4_str_to_dict(str(proj4_args))
+
+ from configobj import ConfigObj
+ proj_config = ConfigObj(proj4_args)
return convert_proj_floats(proj_config.items())
@@ -417,12 +418,17 @@ def create_area_def(area_id, projection, width=None, height=None, area_extent=No
description = kwargs.pop('description', area_id)
proj_id = kwargs.pop('proj_id', None)
+ # convert EPSG dictionaries to projection string
+ # (hold on to EPSG code as much as possible)
+ if isinstance(projection, dict) and 'EPSG' in projection:
+ projection = "EPSG:{}".format(projection['EPSG'])
+
# Get a proj4_dict from either a proj4_dict or a proj4_string.
proj_dict = _get_proj_data(projection)
try:
- p = Proj(proj_dict, preserve_units=True)
+ p = Proj(projection, preserve_units=True)
except RuntimeError:
- return _make_area(area_id, description, proj_id, proj_dict, shape, area_extent, **kwargs)
+ return _make_area(area_id, description, proj_id, projection, shape, area_extent, **kwargs)
# If no units are provided, try to get units used in proj_dict. If still none are provided, use meters.
if units is None:
@@ -457,7 +463,7 @@ def create_area_def(area_id, projection, width=None, height=None, area_extent=No
_extrapolate_information(area_extent, shape, center, radius,
resolution, upper_left_extent, units,
p, proj_dict)
- return _make_area(area_id, description, proj_id, proj_dict, shape,
+ return _make_area(area_id, description, proj_id, projection, shape,
area_extent, resolution=resolution, **kwargs)
@@ -482,7 +488,23 @@ def _make_area(area_id, description, proj_id, proj_dict, shape, area_extent, **k
def _get_proj_data(projection):
- """Takes a proj4_dict or proj4_string and returns a proj4_dict and a Proj function."""
+ """Takes a proj4_dict or proj4_string and returns a proj4_dict and a Proj function.
+
+ There is special handling for the "EPSG:XXXX" case where "XXXX" is an
+ EPSG number code. It can be provided as a string `"EPSG:XXXX"` or as a
+ dictionary (when provided via YAML) as `{'EPSG': XXXX}`.
+ If it is passed as a string ("EPSG:XXXX") then the rules of
+ :func:`~pyresample.utils._proj4.proj4_str_to_dict` are followed.
+ If a dictionary and pyproj 2.0+ is installed then the string
+ `"EPSG:XXXX"` is passed to ``proj4_str_to_dict``. If pyproj<2.0
+ is installed then the string ``+init=EPSG:XXXX`` is passed to
+ ``proj4_str_to_dict`` which provides limited information to area
+ config operations.
+
+ """
+ if isinstance(projection, dict) and 'EPSG' in projection:
+ projection = "EPSG:{}".format(projection['EPSG'])
+
if isinstance(projection, str):
proj_dict = proj4_str_to_dict(projection)
elif isinstance(projection, dict):
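The EPSG special-casing added above normalizes a ``{'EPSG': XXXX}`` dictionary (as it may arrive from YAML area configs) into an ``'EPSG:XXXX'`` string before further parsing. A minimal standalone sketch of that normalization (the helper name is hypothetical):

```python
def normalize_epsg(projection):
    """Turn {'EPSG': 3409} into 'EPSG:3409'; pass other inputs through."""
    if isinstance(projection, dict) and 'EPSG' in projection:
        return "EPSG:{}".format(projection['EPSG'])
    return projection

print(normalize_epsg({'EPSG': 3409}))           # EPSG:3409
print(normalize_epsg('+proj=laea +lat_0=-90'))  # unchanged
```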
=====================================
pyresample/bilinear/xarr.py
=====================================
@@ -1,3 +1,4 @@
+"""XArray version of bilinear interpolation."""
import warnings
@@ -15,8 +16,17 @@ from pyresample._spatial_mp import Proj
from pykdtree.kdtree import KDTree
from pyresample import data_reduce, geometry, CHUNK_SIZE
+CACHE_INDICES = ['bilinear_s',
+ 'bilinear_t',
+ 'slices_x',
+ 'slices_y',
+ 'mask_slices',
+ 'out_coords_x',
+ 'out_coords_y']
+
class XArrayResamplerBilinear(object):
+ """Bilinear interpolation using XArray."""
def __init__(self,
source_geo_def,
@@ -24,10 +34,10 @@ class XArrayResamplerBilinear(object):
radius_of_influence,
neighbours=32,
epsilon=0,
- reduce_data=True,
- nprocs=1,
- segments=None):
+ reduce_data=True):
"""
+ Initialize resampler.
+
Parameters
----------
source_geo_def : object
@@ -44,11 +54,7 @@ class XArrayResamplerBilinear(object):
reduce_data : bool, optional
Perform initial coarse reduction of source dataset in order
to reduce execution time
- nprocs : int, optional
- Number of processor cores to be used
- segments : int or None
- Number of segments to use when resampling.
- If set to None an estimate will be calculated
+
"""
if da is None:
raise ImportError("Missing 'xarray' and 'dask' dependencies")
@@ -59,11 +65,16 @@ class XArrayResamplerBilinear(object):
self.distance_array = None
self.bilinear_t = None
self.bilinear_s = None
+ self.slices_x = None
+ self.slices_y = None
+ self.slices = {'x': self.slices_x, 'y': self.slices_y}
+ self.mask_slices = None
+ self.out_coords_x = None
+ self.out_coords_y = None
+ self.out_coords = {'x': self.out_coords_x, 'y': self.out_coords_y}
self.neighbours = neighbours
self.epsilon = epsilon
self.reduce_data = reduce_data
- self.nprocs = nprocs
- self.segments = segments
self.source_geo_def = source_geo_def
self.target_geo_def = target_geo_def
self.radius_of_influence = radius_of_influence
@@ -77,9 +88,9 @@ class XArrayResamplerBilinear(object):
Vertical fractional distances from corner to the new points
s__ : numpy array
Horizontal fractional distances from corner to the new points
- input_idxs : numpy array
+ valid_input_index : numpy array
Valid indices in the input data
- idx_arr : numpy array
+ index_array : numpy array
Mapping array from valid source points to target points
"""
@@ -88,9 +99,9 @@ class XArrayResamplerBilinear(object):
(self.neighbours, self.source_geo_def.size))
# Create kd-tree
- valid_input_idx, resample_kdtree = self._create_resample_kdtree()
+ valid_input_index, resample_kdtree = self._create_resample_kdtree()
# This is a numpy array
- self.valid_input_index = valid_input_idx
+ self.valid_input_index = valid_input_index
if resample_kdtree.n == 0:
# Handle if all input data is reduced away
@@ -99,7 +110,7 @@ class XArrayResamplerBilinear(object):
self.target_geo_def)
self.bilinear_t = bilinear_t
self.bilinear_s = bilinear_s
- self.valid_input_index = valid_input_idx
+ self.valid_input_index = valid_input_index
self.index_array = index_array
return bilinear_t, bilinear_s, valid_input_index, index_array
@@ -120,7 +131,9 @@ class XArrayResamplerBilinear(object):
proj = Proj(self.target_geo_def.proj_str)
# Get output x/y coordinates
- out_x, out_y = _get_output_xy_dask(self.target_geo_def, proj)
+ out_x, out_y = self.target_geo_def.get_proj_coords(chunks=CHUNK_SIZE)
+ out_x = da.ravel(out_x)
+ out_y = da.ravel(out_y)
# Get input x/y coordinates
in_x, in_y = _get_input_xy_dask(self.source_geo_def, proj,
@@ -139,81 +152,23 @@ class XArrayResamplerBilinear(object):
self.index_array = index_array
self.distance_array = distance_array
- return (self.bilinear_t, self.bilinear_s, self.valid_input_index,
- self.index_array)
+ self._get_slices()
- def get_sample_from_bil_info(self, data, fill_value=np.nan,
+ return (self.bilinear_t, self.bilinear_s,
+ self.slices, self.mask_slices,
+ self.out_coords)
+
+ def get_sample_from_bil_info(self, data, fill_value=None,
output_shape=None):
+ """Resample using pre-computed resampling LUTs."""
+ del output_shape
if fill_value is None:
- fill_value = np.nan
- # FIXME: can be this made into a dask construct ?
- cols, lines = np.meshgrid(np.arange(data['x'].size),
- np.arange(data['y'].size))
- cols = da.ravel(cols)
- lines = da.ravel(lines)
- try:
- self.valid_input_index = self.valid_input_index.compute()
- except AttributeError:
- pass
- vii = self.valid_input_index.squeeze()
- try:
- self.index_array = self.index_array.compute()
- except AttributeError:
- pass
-
- # ia contains reduced (valid) indices of the source array, and has the
- # shape of the destination array
- ia = self.index_array
- rlines = lines[vii][ia]
- rcols = cols[vii][ia]
-
- slices = []
- mask_slices = []
- mask_2d_added = False
- coords = {}
- try:
- # FIXME: Use same chunk size as input data
- coord_x, coord_y = self.target_geo_def.get_proj_vectors_dask()
- except AttributeError:
- coord_x, coord_y = None, None
-
- for _, dim in enumerate(data.dims):
- if dim == 'y':
- slices.append(rlines)
- if not mask_2d_added:
- mask_slices.append(ia >= self.target_geo_def.size)
- mask_2d_added = True
- if coord_y is not None:
- coords[dim] = coord_y
- elif dim == 'x':
- slices.append(rcols)
- if not mask_2d_added:
- mask_slices.append(ia >= self.target_geo_def.size)
- mask_2d_added = True
- if coord_x is not None:
- coords[dim] = coord_x
+ if np.issubdtype(data.dtype, np.integer):
+ fill_value = 0
else:
- slices.append(slice(None))
- mask_slices.append(slice(None))
- try:
- coords[dim] = data.coords[dim]
- except KeyError:
- pass
-
- res = data.values[slices]
- res[mask_slices] = fill_value
-
- try:
- p_1 = res[:, :, 0]
- p_2 = res[:, :, 1]
- p_3 = res[:, :, 2]
- p_4 = res[:, :, 3]
- except IndexError:
- p_1 = res[:, 0]
- p_2 = res[:, 1]
- p_3 = res[:, 2]
- p_4 = res[:, 3]
+ fill_value = np.nan
+ p_1, p_2, p_3, p_4 = self._slice_data(data, fill_value)
s__, t__ = self.bilinear_s, self.bilinear_t
res = (p_1 * (1 - s__) * (1 - t__) +
@@ -227,25 +182,107 @@ class XArrayResamplerBilinear(object):
idxs = (res > data_max) | (res < data_min)
res = da.where(idxs, fill_value, res)
+ res = da.where(np.isnan(res), fill_value, res)
shp = self.target_geo_def.shape
if data.ndim == 3:
res = da.reshape(res, (res.shape[0], shp[0], shp[1]))
else:
res = da.reshape(res, (shp[0], shp[1]))
- res = DataArray(da.from_array(res, chunks=CHUNK_SIZE),
- dims=data.dims, coords=coords)
+
+ # Add missing coordinates
+ self._add_missing_coordinates(data)
+
+ res = DataArray(res, dims=data.dims, coords=self.out_coords)
return res
+ def _compute_indices(self):
+ for idx in CACHE_INDICES:
+ var = getattr(self, idx)
+ try:
+ var = var.compute()
+ setattr(self, idx, var)
+ except AttributeError:
+ continue
+
+ def _add_missing_coordinates(self, data):
+ if self.out_coords['x'] is None and self.out_coords_x is not None:
+ self.out_coords['x'] = self.out_coords_x
+ self.out_coords['y'] = self.out_coords_y
+ for _, dim in enumerate(data.dims):
+ if dim not in self.out_coords:
+ try:
+ self.out_coords[dim] = data.coords[dim]
+ except KeyError:
+ pass
+
+ def _slice_data(self, data, fill_value):
+
+ def _slicer(values, sl_x, sl_y, mask, fill_value):
+ if values.ndim == 2:
+ arr = values[(sl_y, sl_x)]
+ arr[(mask, )] = fill_value
+ p_1 = arr[:, 0]
+ p_2 = arr[:, 1]
+ p_3 = arr[:, 2]
+ p_4 = arr[:, 3]
+ elif values.ndim == 3:
+ arr = values[(slice(None), sl_y, sl_x)]
+ arr[(slice(None), mask)] = fill_value
+ p_1 = arr[:, :, 0]
+ p_2 = arr[:, :, 1]
+ p_3 = arr[:, :, 2]
+ p_4 = arr[:, :, 3]
+ else:
+                raise ValueError("Unsupported number of dimensions: %s" % values.ndim)
+
+ return p_1, p_2, p_3, p_4
+
+ values = data.values
+ sl_y = self.slices_y
+ sl_x = self.slices_x
+ mask = self.mask_slices
+
+ return _slicer(values, sl_x, sl_y, mask, fill_value)
+
+ def _get_slices(self):
+ shp = self.source_geo_def.shape
+ cols, lines = np.meshgrid(np.arange(shp[1]),
+ np.arange(shp[0]))
+ cols = np.ravel(cols)
+ lines = np.ravel(lines)
+
+ vii = self.valid_input_index
+ ia_ = self.index_array
+
+ # ia_ contains reduced (valid) indices of the source array, and has the
+ # shape of the destination array
+ rlines = lines[vii][ia_]
+ rcols = cols[vii][ia_]
+
+ try:
+ coord_x, coord_y = self.target_geo_def.get_proj_vectors()
+ self.out_coords['y'] = coord_y
+ self.out_coords['x'] = coord_x
+ self.out_coords_y = self.out_coords['y']
+ self.out_coords_x = self.out_coords['x']
+ except AttributeError:
+ pass
+
+ self.mask_slices = ia_ >= self.source_geo_def.size
+ self.slices['y'] = rlines
+ self.slices['x'] = rcols
+ self.slices_y = self.slices['y']
+ self.slices_x = self.slices['x']
+
def _create_resample_kdtree(self):
- """Set up kd tree on input"""
+ """Set up kd tree on input."""
# Get input information
valid_input_index, source_lons, source_lats = \
_get_valid_input_index_dask(self.source_geo_def,
self.target_geo_def,
self.reduce_data,
- self.radius_of_influence,
- nprocs=self.nprocs)
+ self.radius_of_influence)
# FIXME: Is dask smart enough to only compute the pixels we end up
# using even with this complicated indexing
@@ -266,11 +303,6 @@ class XArrayResamplerBilinear(object):
valid_oi,
reduce_data=True):
"""Query kd-tree on slice of target coordinates."""
-
-# res = da.map_blocks(query_no_distance, tlons, tlats,
-# valid_oi, dtype=np.int, kdtree=resample_kdtree,
-# neighbours=self.neighbours, epsilon=self.epsilon,
-# radius=self.radius_of_influence)
res = query_no_distance(tlons, tlats,
valid_oi, resample_kdtree,
self.neighbours, self.epsilon,
@@ -278,41 +310,9 @@ class XArrayResamplerBilinear(object):
return res, None
-def _get_fill_mask_value(data_dtype):
- """Returns the maximum value of dtype"""
- if issubclass(data_dtype.type, np.floating):
- fill_value = np.finfo(data_dtype.type).max
- elif issubclass(data_dtype.type, np.integer):
- fill_value = np.iinfo(data_dtype.type).max
- else:
- raise TypeError('Type %s is unsupported for masked fill values' %
- data_dtype.type)
- return fill_value
-
-
-def _get_output_xy_dask(target_geo_def, proj):
- """Get x/y coordinates of the target grid."""
- # Read output coordinates
- out_lons, out_lats = target_geo_def.get_lonlats_dask()
-
- # Mask invalid coordinates
- out_lons, out_lats = _mask_coordinates_dask(out_lons, out_lats)
-
- # Convert coordinates to output projection x/y space
- res = da.dstack(proj(out_lons.compute(), out_lats.compute()))
- # _run_proj(proj, out_lons, out_lats)
- #,
- # chunks=(out_lons.chunks[0], out_lons.chunks[1], 2),
- # new_axis=[2])
- out_x = da.ravel(res[:, :, 0])
- out_y = da.ravel(res[:, :, 1])
-
- return out_x, out_y
-
-
-def _get_input_xy_dask(source_geo_def, proj, input_idxs, idx_ref):
+def _get_input_xy_dask(source_geo_def, proj, valid_input_index, index_array):
"""Get x/y coordinates for the input area and reduce the data."""
- in_lons, in_lats = source_geo_def.get_lonlats_dask()
+ in_lons, in_lats = source_geo_def.get_lonlats(chunks=CHUNK_SIZE)
# Mask invalid values
in_lons, in_lats = _mask_coordinates_dask(in_lons, in_lats)
@@ -323,16 +323,15 @@ def _get_input_xy_dask(source_geo_def, proj, input_idxs, idx_ref):
in_lons = da.ravel(in_lons)
in_lons = in_lons.compute()
- in_lons = in_lons[input_idxs]
+ in_lons = in_lons[valid_input_index]
in_lats = da.ravel(in_lats)
in_lats = in_lats.compute()
- in_lats = in_lats[input_idxs]
+ in_lats = in_lats[valid_input_index]
+ index_array = index_array.compute()
# Expand input coordinates for each output location
- # in_lons = in_lons.compute()
- in_lons = in_lons[idx_ref]
- # in_lats = in_lats.compute()
- in_lats = in_lats[idx_ref]
+ in_lons = in_lons[index_array]
+ in_lats = in_lats[index_array]
# Convert coordinates to output projection x/y space
in_x, in_y = proj(in_lons, in_lats)
@@ -340,14 +339,8 @@ def _get_input_xy_dask(source_geo_def, proj, input_idxs, idx_ref):
return in_x, in_y
-def _run_proj(proj, lons, lats):
- return da.dstack(proj(lons, lats))
-
-
def _mask_coordinates_dask(lons, lats):
- """Mask invalid coordinate values"""
- # lons = da.ravel(lons)
- # lats = da.ravel(lats)
+ """Mask invalid coordinate values."""
idxs = ((lons < -180.) | (lons > 180.) |
(lats < -90.) | (lats > 90.))
lons = da.where(idxs, np.nan, lons)
@@ -356,19 +349,23 @@ def _mask_coordinates_dask(lons, lats):
return lons, lats
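As a self-contained illustration of the masking above, a plain-NumPy sketch (hypothetical `mask_coordinates` helper, without the dask dependency):

```python
import numpy as np

def mask_coordinates(lons, lats):
    # Replace out-of-range geographic coordinates with NaN,
    # mirroring _mask_coordinates_dask without dask
    invalid = ((lons < -180.) | (lons > 180.) |
               (lats < -90.) | (lats > 90.))
    return (np.where(invalid, np.nan, lons),
            np.where(invalid, np.nan, lats))

# The second point has an impossible longitude, so both of its
# coordinates become NaN
lons, lats = mask_coordinates(np.array([10., 400.]),
                              np.array([50., 50.]))
```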
-def _get_bounding_corners_dask(in_x, in_y, out_x, out_y, neighbours, idx_ref):
- """Get four closest locations from (in_x, in_y) so that they form a
+def _get_bounding_corners_dask(in_x, in_y, out_x, out_y, neighbours, index_array):
+ """Get bounding corners.
+
+ Get four closest locations from (in_x, in_y) so that they form a
bounding rectangle around the requested location given by (out_x,
out_y).
- """
+ """
# Find four closest pixels around the target location
# FIXME: how to daskify?
# Tile output coordinates to same shape as neighbour info
# Replacing with da.transpose and da.tile doesn't work
- out_x_tile = np.transpose(np.tile(out_x, (neighbours, 1)))
- out_y_tile = np.transpose(np.tile(out_y, (neighbours, 1)))
+ out_x_tile = np.reshape(np.tile(out_x, neighbours),
+ (neighbours, out_x.size)).T
+ out_y_tile = np.reshape(np.tile(out_y, neighbours),
+ (neighbours, out_y.size)).T
# Get differences in both directions
x_diff = out_x_tile - in_x
@@ -378,57 +375,65 @@ def _get_bounding_corners_dask(in_x, in_y, out_x, out_y, neighbours, idx_ref):
# Upper left source pixel
valid = (x_diff > 0) & (y_diff < 0)
- x_1, y_1, idx_1 = _get_corner_dask(stride, valid, in_x, in_y, idx_ref)
+ x_1, y_1, idx_1 = _get_corner_dask(stride, valid, in_x, in_y, index_array)
# Upper right source pixel
valid = (x_diff < 0) & (y_diff < 0)
- x_2, y_2, idx_2 = _get_corner_dask(stride, valid, in_x, in_y, idx_ref)
+ x_2, y_2, idx_2 = _get_corner_dask(stride, valid, in_x, in_y, index_array)
# Lower left source pixel
valid = (x_diff > 0) & (y_diff > 0)
- x_3, y_3, idx_3 = _get_corner_dask(stride, valid, in_x, in_y, idx_ref)
+ x_3, y_3, idx_3 = _get_corner_dask(stride, valid, in_x, in_y, index_array)
# Lower right source pixel
valid = (x_diff < 0) & (y_diff > 0)
- x_4, y_4, idx_4 = _get_corner_dask(stride, valid, in_x, in_y, idx_ref)
+ x_4, y_4, idx_4 = _get_corner_dask(stride, valid, in_x, in_y, index_array)
- # Combine sorted indices to idx_ref
- idx_ref = np.transpose(np.vstack((idx_1, idx_2, idx_3, idx_4)))
+ # Combine sorted indices to index_array
+ index_array = np.transpose(np.vstack((idx_1, idx_2, idx_3, idx_4)))
return (np.transpose(np.vstack((x_1, y_1))),
np.transpose(np.vstack((x_2, y_2))),
np.transpose(np.vstack((x_3, y_3))),
np.transpose(np.vstack((x_4, y_4))),
- idx_ref)
+ index_array)
-def _get_corner_dask(stride, valid, in_x, in_y, idx_ref):
- """Get closest set of coordinates from the *valid* locations"""
+def _get_corner_dask(stride, valid, in_x, in_y, index_array):
+ """Get closest set of coordinates from the *valid* locations."""
# Find the closest valid pixels, if any
idxs = np.argmax(valid, axis=1)
# Check which of these were actually valid
invalid = np.invert(np.max(valid, axis=1))
# idxs = idxs.compute()
- idx_ref = idx_ref.compute()
+ index_array = index_array.compute()
# Replace invalid points with np.nan
x__ = in_x[stride, idxs] # TODO: daskify
- x__ = np.where(invalid, np.nan, x__)
+ x__ = da.where(invalid, np.nan, x__)
y__ = in_y[stride, idxs] # TODO: daskify
- y__ = np.where(invalid, np.nan, y__)
+ y__ = da.where(invalid, np.nan, y__)
- idx = idx_ref[stride, idxs] # TODO: daskify
+ idx = index_array[stride, idxs] # TODO: daskify
return x__, y__, idx
def _get_ts_dask(pt_1, pt_2, pt_3, pt_4, out_x, out_y):
- """Calculate vertical and horizontal fractional distances t and s"""
+ """Calculate vertical and horizontal fractional distances t and s."""
+ def invalid_to_nan(t__, s__):
+ idxs = (t__ < 0) | (t__ > 1) | (s__ < 0) | (s__ > 1)
+ t__ = da.where(idxs, np.nan, t__)
+ s__ = da.where(idxs, np.nan, s__)
+ return t__, s__
    # General case, i.e. where the corners form an irregular rectangle
t__, s__ = _get_ts_irregular_dask(pt_1, pt_2, pt_3, pt_4, out_y, out_x)
+ # Replace invalid values with NaNs
+ t__, s__ = invalid_to_nan(t__, s__)
+
# Cases where verticals are parallel
idxs = da.isnan(t__) | da.isnan(s__)
# Remove extra dimensions
@@ -438,10 +443,12 @@ def _get_ts_dask(pt_1, pt_2, pt_3, pt_4, out_x, out_y):
t_new, s_new = _get_ts_uprights_parallel_dask(pt_1, pt_2,
pt_3, pt_4,
out_y, out_x)
-
t__ = da.where(idxs, t_new, t__)
s__ = da.where(idxs, s_new, s__)
+ # Replace invalid values with NaNs
+ t__, s__ = invalid_to_nan(t__, s__)
+
# Cases where both verticals and horizontals are parallel
idxs = da.isnan(t__) | da.isnan(s__)
# Remove extra dimensions
@@ -452,16 +459,14 @@ def _get_ts_dask(pt_1, pt_2, pt_3, pt_4, out_x, out_y):
t__ = da.where(idxs, t_new, t__)
s__ = da.where(idxs, s_new, s__)
- idxs = (t__ < 0) | (t__ > 1) | (s__ < 0) | (s__ > 1)
- t__ = da.where(idxs, np.nan, t__)
- s__ = da.where(idxs, np.nan, s__)
+ # Replace invalid values with NaNs
+ t__, s__ = invalid_to_nan(t__, s__)
return t__, s__
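The fractional distances `t__` and `s__` computed above are the weights of the standard bilinear formula used earlier in this file (the `p_1 ... p_4` combination). A minimal sketch with hypothetical corner values:

```python
def bilinear_interp(p_1, p_2, p_3, p_4, s__, t__):
    # Combine the four corner values with the horizontal (s) and
    # vertical (t) fractional distances, as in the resampler
    return (p_1 * (1 - s__) * (1 - t__) +
            p_2 * s__ * (1 - t__) +
            p_3 * (1 - s__) * t__ +
            p_4 * s__ * t__)

# At s = t = 0 the result is the first corner value,
# at s = t = 0.5 the average of all four corners
center = bilinear_interp(0.0, 1.0, 2.0, 3.0, 0.5, 0.5)  # 1.5
```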
def _get_ts_irregular_dask(pt_1, pt_2, pt_3, pt_4, out_y, out_x):
"""Get parameters for the case where none of the sides are parallel."""
-
# Get parameters for the quadratic equation
# TODO: check if needs daskifying
a__, b__, c__ = _calc_abc_dask(pt_1, pt_2, pt_3, pt_4, out_y, out_x)
@@ -478,9 +483,12 @@ def _get_ts_irregular_dask(pt_1, pt_2, pt_3, pt_4, out_y, out_x):
# Might not need daskifying
def _calc_abc_dask(pt_1, pt_2, pt_3, pt_4, out_y, out_x):
- """Calculate coefficients for quadratic equation for
- _get_ts_irregular() and _get_ts_uprights(). For _get_ts_uprights
- switch order of pt_2 and pt_3.
+ """Calculate coefficients for quadratic equation.
+
+    In this order the arguments are used by _get_ts_irregular() and
+    _get_ts_uprights(). For _get_ts_uprights(), switch the order of
+    pt_2 and pt_3.
+
"""
    # Pairwise longitudinal separations between reference points
x_21 = pt_2[:, 0] - pt_1[:, 0]
@@ -503,11 +511,12 @@ def _calc_abc_dask(pt_1, pt_2, pt_3, pt_4, out_y, out_x):
def _solve_quadratic_dask(a__, b__, c__, min_val=0.0, max_val=1.0):
- """Solve quadratic equation and return the valid roots from interval
- [*min_val*, *max_val*]
+ """Solve quadratic equation.
- """
+ Solve quadratic equation and return the valid roots from interval
+ [*min_val*, *max_val*].
+ """
discriminant = b__ * b__ - 4 * a__ * c__
# Solve the quadratic polynomial
@@ -525,8 +534,10 @@ def _solve_quadratic_dask(a__, b__, c__, min_val=0.0, max_val=1.0):
def _solve_another_fractional_distance_dask(f__, y_1, y_2, y_3, y_4, out_y):
- """Solve parameter t__ from s__, or vice versa. For solving s__,
- switch order of y_2 and y_3."""
+ """Solve parameter t__ from s__, or vice versa.
+
+ For solving s__, switch order of y_2 and y_3.
+ """
y_21 = y_2 - y_1
y_43 = y_4 - y_3
@@ -541,8 +552,7 @@ def _solve_another_fractional_distance_dask(f__, y_1, y_2, y_3, y_4, out_y):
def _get_ts_uprights_parallel_dask(pt_1, pt_2, pt_3, pt_4, out_y, out_x):
- """Get parameters for the case where uprights are parallel"""
-
+ """Get parameters for the case where uprights are parallel."""
# Get parameters for the quadratic equation
a__, b__, c__ = _calc_abc_dask(pt_1, pt_3, pt_2, pt_4, out_y, out_x)
@@ -557,8 +567,7 @@ def _get_ts_uprights_parallel_dask(pt_1, pt_2, pt_3, pt_4, out_y, out_x):
def _get_ts_parallellogram_dask(pt_1, pt_2, pt_3, out_y, out_x):
- """Get parameters for the case where uprights are parallel"""
-
+ """Get parameters for the case where uprights are parallel."""
    # Pairwise longitudinal separations between reference points
x_21 = pt_2[:, 0] - pt_1[:, 0]
x_31 = pt_3[:, 0] - pt_1[:, 0]
@@ -579,27 +588,10 @@ def _get_ts_parallellogram_dask(pt_1, pt_2, pt_3, out_y, out_x):
return t__, s__
-def _check_data_shape_dask(data, input_idxs):
- """Check data shape and adjust if necessary."""
- # Handle multiple datasets
- if data.ndim > 2 and data.shape[0] * data.shape[1] == input_idxs.shape[0]:
- data = da.reshape(data, data.shape[0] * data.shape[1], data.shape[2])
- # Also ravel single dataset
- elif data.shape[0] != input_idxs.size:
- data = da.ravel(data)
-
- # Ensure two dimensions
- if data.ndim == 1:
- data = da.reshape(data, (data.size, 1))
-
- return data
-
-
def query_no_distance(target_lons, target_lats,
valid_output_index, kdtree, neighbours, epsilon, radius):
"""Query the kdtree. No distances are returned."""
voi = valid_output_index
- shape = voi.shape
voir = da.ravel(voi)
target_lons_valid = da.ravel(target_lons)[voir]
target_lats_valid = da.ravel(target_lats)[voir]
@@ -617,11 +609,9 @@ def query_no_distance(target_lons, target_lats,
def _get_valid_input_index_dask(source_geo_def,
target_geo_def,
reduce_data,
- radius_of_influence,
- nprocs=1):
- """Find indices of reduced inputput data"""
-
- source_lons, source_lats = source_geo_def.get_lonlats_dask()
+ radius_of_influence):
+ """Find indices of reduced input data."""
+ source_lons, source_lats = source_geo_def.get_lonlats(chunks=CHUNK_SIZE)
source_lons = da.ravel(source_lons)
source_lats = da.ravel(source_lats)
@@ -663,7 +653,7 @@ def _get_valid_input_index_dask(source_geo_def,
def lonlat2xyz(lons, lats):
-
+ """Convert geographic coordinates to cartesian 3D coordinates."""
R = 6370997.0
x_coords = R * da.cos(da.deg2rad(lats)) * da.cos(da.deg2rad(lons))
y_coords = R * da.cos(da.deg2rad(lats)) * da.sin(da.deg2rad(lons))
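For illustration, a plain-NumPy equivalent of `lonlat2xyz` (same Earth radius as the dask version):

```python
import numpy as np

R = 6370997.0  # Earth radius used by pyresample

def lonlat2xyz(lons, lats):
    # Geographic degrees -> cartesian 3D coordinates on the sphere
    lons = np.deg2rad(lons)
    lats = np.deg2rad(lats)
    x_coords = R * np.cos(lats) * np.cos(lons)
    y_coords = R * np.cos(lats) * np.sin(lons)
    z_coords = R * np.sin(lats)
    return x_coords, y_coords, z_coords

# (0E, 0N) lands on the positive x axis
x, y, z = lonlat2xyz(0.0, 0.0)
```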
@@ -674,8 +664,7 @@ def lonlat2xyz(lons, lats):
def _create_empty_bil_info(source_geo_def, target_geo_def):
- """Creates dummy info for empty result set"""
-
+ """Create dummy info for empty result set."""
valid_input_index = np.ones(source_geo_def.size, dtype=np.bool)
index_array = np.ones((target_geo_def.size, 4), dtype=np.int32)
bilinear_s = np.nan * np.zeros(target_geo_def.size)
=====================================
pyresample/bucket/__init__.py
=====================================
@@ -0,0 +1,303 @@
+# pyresample, Resampling of remote sensing image data in python
+#
+# Copyright (C) 2019 Pyresample developers
+#
+# This program is free software: you can redistribute it and/or modify it under
+# the terms of the GNU Lesser General Public License as published by the Free
+# Software Foundation, either version 3 of the License, or (at your option) any
+# later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License along
+# with this program. If not, see <http://www.gnu.org/licenses/>.
+
+"""Code for resampling using bucket resampling."""
+
+import dask.array as da
+import xarray as xr
+import numpy as np
+import logging
+from pyresample._spatial_mp import Proj
+
+LOG = logging.getLogger(__name__)
+
+
+class BucketResampler(object):
+
+ """Class for bucket resampling.
+
+ Bucket resampling is useful for calculating averages and hit-counts
+ when aggregating data to coarser scale grids.
+
+    Below are examples of how to use the resampler.
+
+ Read data using Satpy. The resampling can also be done (apart from
+ fractions) directly from Satpy, but this demonstrates the direct
+ low-level usage.
+
+ >>> from pyresample.bucket import BucketResampler
+ >>> from satpy import Scene
+ >>> from satpy.resample import get_area_def
+ >>> fname = "hrpt_noaa19_20170519_1214_42635.l1b"
+ >>> glbl = Scene(filenames=[fname])
+ >>> glbl.load(['4'])
+ >>> data = glbl['4']
+ >>> lons, lats = data.area.get_lonlats()
+ >>> target_area = get_area_def('euro4')
+
+ Initialize the resampler
+
+    >>> resampler = BucketResampler(target_area, lons, lats)
+
+ Calculate the sum of all the data in each grid location:
+
+ >>> sums = resampler.get_sum(data)
+
+ Calculate how many values were collected at each grid location:
+
+ >>> counts = resampler.get_count()
+
+ The average can be calculated from the above two results, or directly
+ using the helper method:
+
+ >>> average = resampler.get_average(data)
+
+ Calculate fractions of occurrences of different values in each grid
+ location. The data needs to be categorical (in integers), so
+ we'll create some categorical data from the brightness temperature
+ data that were read earlier. The data are returned in a
+ dictionary with the categories as keys.
+
+ >>> data = da.where(data > 250, 1, 0)
+ >>> fractions = resampler.get_fractions(data, categories=[0, 1])
+ >>> import matplotlib.pyplot as plt
+ >>> plt.imshow(fractions[0]); plt.show()
+ """
+
+ def __init__(self, target_area, source_lons, source_lats):
+
+ self.target_area = target_area
+ self.source_lons = source_lons
+ self.source_lats = source_lats
+ self.prj = Proj(self.target_area.proj_dict)
+ self.x_idxs = None
+ self.y_idxs = None
+ self.idxs = None
+ self._get_indices()
+ self.counts = None
+
+ def _get_proj_coordinates(self, lons, lats, x_res, y_res):
+ """Calculate projection coordinates and round to resolution unit.
+
+ Parameters
+ ----------
+ lons : Numpy or Dask array
+ Longitude coordinates
+ lats : Numpy or Dask array
+ Latitude coordinates
+ x_res : float
+ Resolution of the output in X direction
+ y_res : float
+ Resolution of the output in Y direction
+ """
+ proj_x, proj_y = self.prj(lons, lats)
+ proj_x = round_to_resolution(proj_x, x_res)
+ proj_y = round_to_resolution(proj_y, y_res)
+
+ return np.stack((proj_x, proj_y))
+
+ def _get_indices(self):
+ """Calculate projection indices.
+
+ Returns
+ -------
+ x_idxs : Dask array
+ X indices of the target grid where the data are put
+ y_idxs : Dask array
+ Y indices of the target grid where the data are put
+ """
+ LOG.info("Determine bucket resampling indices")
+ adef = self.target_area
+
+ lons = self.source_lons.ravel()
+ lats = self.source_lats.ravel()
+
+ # Create output grid coordinates in projection units
+ x_res = (adef.area_extent[2] - adef.area_extent[0]) / adef.width
+ y_res = (adef.area_extent[3] - adef.area_extent[1]) / adef.height
+ x_vect = da.arange(adef.area_extent[0] + x_res / 2.,
+ adef.area_extent[2] - x_res / 2., x_res)
+        # Y vector runs from the top of the area downwards (image orientation)
+ y_vect = da.arange(adef.area_extent[3] - y_res / 2.,
+ adef.area_extent[1] + y_res / 2.,
+ -y_res)
+
+ result = da.map_blocks(self._get_proj_coordinates, lons,
+ lats, x_res, y_res,
+ new_axis=0, chunks=(2,) + lons.chunks)
+ proj_x = result[0, :]
+ proj_y = result[1, :]
+
+ # Calculate array indices
+ x_idxs = ((proj_x - np.min(x_vect)) / x_res).astype(np.int)
+ y_idxs = ((np.max(y_vect) - proj_y) / y_res).astype(np.int)
+
+ # Get valid index locations
+ mask = ((x_idxs >= 0) & (x_idxs < adef.width) &
+ (y_idxs >= 0) & (y_idxs < adef.height))
+ self.y_idxs = da.where(mask, y_idxs, -1)
+ self.x_idxs = da.where(mask, x_idxs, -1)
+
+ # Convert X- and Y-indices to raveled indexing
+ target_shape = self.target_area.shape
+ self.idxs = self.y_idxs * target_shape[1] + self.x_idxs
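The X/Y-to-flat-index conversion above can be sketched in plain NumPy (made-up indices); note how a point masked to (-1, -1) ends up with a negative flat index, which later falls outside the histogram range and is dropped:

```python
import numpy as np

width = 4  # target_shape[1]
x_idxs = np.array([0, 3, 2, -1])  # -1 marks points outside the target
y_idxs = np.array([0, 1, 2, -1])

# Row-major flat index into the raveled target grid,
# as in BucketResampler._get_indices
flat_idxs = y_idxs * width + x_idxs
```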
+
+ def get_sum(self, data, mask_all_nan=False):
+ """Calculate sums for each bin with drop-in-a-bucket resampling.
+
+ Parameters
+ ----------
+ data : Numpy or Dask array
+ mask_all_nan : boolean (optional)
+ Mask bins that have only NaN results, default: False
+
+ Returns
+ -------
+ data : Numpy or Dask array
+ Bin-wise sums in the target grid
+ """
+ LOG.info("Get sum of values in each location")
+ if isinstance(data, xr.DataArray):
+ data = data.data
+ data = data.ravel()
+ # Remove NaN values from the data when used as weights
+ weights = da.where(np.isnan(data), 0, data)
+
+ # Rechunk indices to match the data chunking
+ if weights.chunks != self.idxs.chunks:
+ self.idxs = da.rechunk(self.idxs, weights.chunks)
+
+ # Calculate the sum of the data falling to each bin
+ out_size = self.target_area.size
+ sums, _ = da.histogram(self.idxs, bins=out_size, range=(0, out_size),
+ weights=weights, density=False)
+
+ if mask_all_nan:
+ nans = np.isnan(data)
+ nan_sums, _ = da.histogram(self.idxs[nans], bins=out_size,
+ range=(0, out_size))
+ counts = self.get_count().ravel()
+ sums = da.where(nan_sums == counts, np.nan, sums)
+
+ return sums.reshape(self.target_area.shape)
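`get_sum` leans on a weighted histogram to accumulate per-bin sums; the same trick works with plain NumPy (hypothetical indices and data):

```python
import numpy as np

# Flat target-bin index for each input value; -1 marks invalid points
idxs = np.array([0, 1, 1, 3, -1])
data = np.array([2.0, 1.0, 5.0, 4.0, 9.0])

out_size = 4
# Each bin accumulates the sum of the weights that fall into it;
# indices outside [0, out_size) are silently dropped
sums, _ = np.histogram(idxs, bins=out_size, range=(0, out_size),
                       weights=data)
```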
+
+ def get_count(self):
+ """Count the number of occurrences for each bin using drop-in-a-bucket
+ resampling.
+
+ Returns
+ -------
+ data : Dask array
+ Bin-wise count of hits for each target grid location
+ """
+ LOG.info("Get number of values in each location")
+
+ out_size = self.target_area.size
+
+ # Calculate the sum of the data falling to each bin
+ if self.counts is None:
+ counts, _ = da.histogram(self.idxs, bins=out_size,
+ range=(0, out_size))
+ self.counts = counts.reshape(self.target_area.shape)
+
+ return self.counts
+
+ def get_average(self, data, fill_value=np.nan, mask_all_nan=False):
+ """Calculate bin-averages using bucket resampling.
+
+ Parameters
+ ----------
+ data : Numpy or Dask array
+ Data to be binned and averaged
+        fill_value : float
+            Fill value to replace missing values. Default: np.nan
+        mask_all_nan : boolean (optional)
+            Mask bins that have only NaN results, default: False
+
+ Returns
+ -------
+ average : Dask array
+ Binned and averaged data.
+ """
+ LOG.info("Get average value for each location")
+
+ sums = self.get_sum(data, mask_all_nan=mask_all_nan)
+ counts = self.get_sum(np.logical_not(np.isnan(data)).astype(int),
+ mask_all_nan=False)
+
+ average = sums / da.where(counts == 0, np.nan, counts)
+ average = da.where(np.isnan(average), fill_value, average)
+
+ return average
+
+ def get_fractions(self, data, categories=None, fill_value=np.nan):
+ """Get fraction of occurrences for each given categorical value.
+
+ Parameters
+ ----------
+ data : Numpy or Dask array
+ Categorical data to be processed
+ categories : iterable or None
+ One dimensional list of categories in the data, or None. If None,
+ categories are determined from the data by fully processing the
+ data and finding the unique category values.
+ fill_value : float
+ Fill value to replace missing values. Default: np.nan
+ """
+ if categories is None:
+ LOG.warning("No categories given, need to compute the data.")
+ # compute any dask arrays by converting to numpy
+ categories = np.asarray(np.unique(data))
+ try:
+ num = categories.size
+ except AttributeError:
+ num = len(categories)
+ LOG.info("Get fractions for %d categories", num)
+ results = {}
+ counts = self.get_count()
+ counts = counts.astype(float)
+ # Disable logging for calls to get_sum()
+ LOG.disabled = True
+ for cat in categories:
+ cat_data = da.where(data == cat, 1.0, 0.0)
+
+ sums = self.get_sum(cat_data)
+ result = sums.astype(float) / counts
+ result = da.where(counts == 0.0, fill_value, result)
+ results[cat] = result
+ # Re-enable logging
+ LOG.disabled = False
+
+ return results
+
+
+def round_to_resolution(arr, resolution):
+ """Round the values in *arr* to closest resolution element.
+
+ Parameters
+ ----------
+ arr : list, tuple, Numpy or Dask array
+ Array to be rounded
+ resolution : float
+ Resolution unit to which data are rounded
+
+ Returns
+ -------
+ data : Numpy or Dask array
+ Source data rounded to the closest resolution unit
+ """
+ if isinstance(arr, (list, tuple)):
+ arr = np.array(arr)
+ return resolution * np.round(arr / resolution)
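For reference, a quick usage sketch of the helper above (same logic, reproduced so the example is self-contained):

```python
import numpy as np

def round_to_resolution(arr, resolution):
    # Snap each value to the nearest multiple of the resolution unit
    if isinstance(arr, (list, tuple)):
        arr = np.array(arr)
    return resolution * np.round(arr / resolution)

rounded = round_to_resolution([12.3, 17.8, 25.0], 5.0)  # [10., 20., 25.]
```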
=====================================
pyresample/ewa/_fornav.cpp
=====================================
The diff for this file was not included because it is too large.
=====================================
pyresample/ewa/_fornav_templates.cpp
=====================================
@@ -234,7 +234,7 @@ int compute_ewa(size_t chan_count, int maximum_weight_mode,
u0 = uimg[swath_offset];
v0 = vimg[swath_offset];
- if (u0 < 0.0 || v0 < 0.0 || __isnan(u0) || npy_isnan(v0)) {
+ if (u0 < 0.0 || v0 < 0.0 || __isnan(u0) || __isnan(v0)) {
continue;
}
=====================================
pyresample/ewa/_ll2cr.c
=====================================
The diff for this file was not included because it is too large.
=====================================
pyresample/geometry.py
=====================================
@@ -22,7 +22,7 @@
# You should have received a copy of the GNU Lesser General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
-"""Classes for geometry operations"""
+"""Classes for geometry operations."""
import hashlib
import warnings
@@ -33,7 +33,7 @@ import numpy as np
import yaml
from pyproj import Geod, transform
-from pyresample import CHUNK_SIZE, utils
+from pyresample import CHUNK_SIZE
from pyresample._spatial_mp import Cartesian, Cartesian_MP, Proj, Proj_MP
from pyresample.boundary import AreaDefBoundary, Boundary, SimpleBoundary
from pyresample.utils import proj4_str_to_dict, proj4_dict_to_str, convert_proj_floats
@@ -44,16 +44,25 @@ try:
except ImportError:
DataArray = np.ndarray
+try:
+ from pyproj import CRS
+except ImportError:
+ CRS = None
+
logger = getLogger(__name__)
class DimensionError(ValueError):
+ """Wrap ValueError."""
+
pass
class IncompatibleAreas(ValueError):
"""Error when the areas to combine are not compatible."""
+ pass
+
class BaseDefinition(object):
"""Base class for geometry definitions.
@@ -69,6 +78,7 @@ class BaseDefinition(object):
"""
def __init__(self, lons=None, lats=None, nprocs=1):
+ """Initialize BaseDefinition."""
if type(lons) != type(lats):
raise TypeError('lons and lats must be of same type')
elif lons is not None:
@@ -101,7 +111,7 @@ class BaseDefinition(object):
return self.hash
def __eq__(self, other):
- """Test for approximate equality"""
+ """Test for approximate equality."""
if self is other:
return True
if other.lons is None or other.lats is None:
@@ -135,12 +145,11 @@ class BaseDefinition(object):
return False
def __ne__(self, other):
- """Test for approximate equality"""
-
+ """Test for approximate equality."""
return not self.__eq__(other)
def get_area_extent_for_subset(self, row_LR, col_LR, row_UL, col_UL):
- """Calculate extent for a subdomain of this area
+ """Calculate extent for a subdomain of this area.
Rows are counted from upper left to lower left and columns are
counted from upper left to upper right.
@@ -159,7 +168,6 @@ class BaseDefinition(object):
Ulrich Hamann
"""
-
(a, b) = self.get_proj_coords(data_slice=(row_LR, col_LR))
a = a - 0.5 * self.pixel_size_x
b = b - 0.5 * self.pixel_size_y
@@ -170,7 +178,7 @@ class BaseDefinition(object):
return a, b, c, d
def get_lonlat(self, row, col):
- """Retrieve lon and lat of single pixel
+ """Retrieve lon and lat of single pixel.
Parameters
----------
@@ -180,8 +188,8 @@ class BaseDefinition(object):
Returns
-------
(lon, lat) : tuple of floats
- """
+ """
if self.ndim != 2:
raise DimensionError(('operation undefined '
'for %sD geometry ') % self.ndim)
@@ -242,8 +250,7 @@ class BaseDefinition(object):
SimpleBoundary(s1_lat.squeeze(), s2_lat.squeeze(), s3_lat.squeeze(), s4_lat.squeeze()))
def get_bbox_lonlats(self):
- """Returns the bounding box lons and lats"""
-
+ """Return the bounding box lons and lats."""
s1_lon, s1_lat = self.get_lonlats(data_slice=(0, slice(None)))
s2_lon, s2_lat = self.get_lonlats(data_slice=(slice(None), -1))
s3_lon, s3_lat = self.get_lonlats(data_slice=(-1, slice(None, None, -1)))
@@ -254,7 +261,7 @@ class BaseDefinition(object):
(s4_lon.squeeze(), s4_lat.squeeze())])
def get_cartesian_coords(self, nprocs=None, data_slice=None, cache=False):
- """Retrieve cartesian coordinates of geometry definition
+ """Retrieve cartesian coordinates of geometry definition.
Parameters
----------
@@ -269,6 +276,7 @@ class BaseDefinition(object):
Returns
-------
cartesian_coords : numpy array
+
"""
if cache:
warnings.warn("'cache' keyword argument will be removed in the "
@@ -308,8 +316,7 @@ class BaseDefinition(object):
@property
def corners(self):
- """Returns the corners of the current area.
- """
+ """Return the corners of the current area."""
from pyresample.spherical_geometry import Coordinate
return [Coordinate(*self.get_lonlat(0, 0)),
Coordinate(*self.get_lonlat(0, -1)),
@@ -317,8 +324,10 @@ class BaseDefinition(object):
Coordinate(*self.get_lonlat(-1, 0))]
def __contains__(self, point):
- """Is a point inside the 4 corners of the current area? This uses
- great circle arcs as area boundaries.
+ """Check if a point is inside the 4 corners of the current area.
+
+ This uses great circle arcs as area boundaries.
+
"""
from pyresample.spherical_geometry import point_inside, Coordinate
corners = self.corners
@@ -329,9 +338,10 @@ class BaseDefinition(object):
return point_inside(point, corners)
def overlaps(self, other):
- """Tests if the current area overlaps the *other* area. This is based
- solely on the corners of areas, assuming the boundaries to be great
- circles.
+ """Test if the current area overlaps the *other* area.
+
+ This is based solely on the corners of areas, assuming the
+ boundaries to be great circles.
Parameters
----------
@@ -341,8 +351,8 @@ class BaseDefinition(object):
Returns
-------
overlaps : bool
- """
+ """
from pyresample.spherical_geometry import Arc
self_corners = self.corners
@@ -373,16 +383,13 @@ class BaseDefinition(object):
return False
def get_area(self):
- """Get the area of the convex area defined by the corners of the current
- area.
- """
+ """Get the area of the convex area defined by the corners of the curren area."""
from pyresample.spherical_geometry import get_polygon_area
return get_polygon_area(self.corners)
def intersection(self, other):
- """Returns the corners of the intersection polygon of the current area
- with *other*.
+ """Return the corners of the intersection polygon of the current area with *other*.
Parameters
----------
@@ -392,6 +399,7 @@ class BaseDefinition(object):
Returns
-------
(corner1, corner2, corner3, corner4) : tuple of points
+
"""
from pyresample.spherical_geometry import intersection_polygon
return intersection_polygon(self.corners, other.corners)
@@ -407,8 +415,8 @@ class BaseDefinition(object):
Returns
-------
overlap_rate : float
- """
+ """
from pyresample.spherical_geometry import get_polygon_area
other_area = other.get_area()
inter_area = get_polygon_area(self.intersection(other))
@@ -420,9 +428,10 @@ class BaseDefinition(object):
class CoordinateDefinition(BaseDefinition):
- """Base class for geometry definitions defined by lons and lats only"""
+ """Base class for geometry definitions defined by lons and lats only."""
def __init__(self, lons, lats, nprocs=1):
+ """Initialize CoordinateDefinition."""
if not isinstance(lons, (np.ndarray, DataArray)):
lons = np.asanyarray(lons)
lats = np.asanyarray(lats)
@@ -438,6 +447,7 @@ class CoordinateDefinition(BaseDefinition):
self.__class__.__name__)
def concatenate(self, other):
+ """Concatenate coordinate definitions."""
if self.ndim != other.ndim:
raise DimensionError(('Unable to concatenate %sD and %sD '
'geometries') % (self.ndim, other.ndim))
@@ -448,6 +458,7 @@ class CoordinateDefinition(BaseDefinition):
return klass(lons, lats, nprocs=nprocs)
def append(self, other):
+ """Append another coordinate definition to existing one."""
if self.ndim != other.ndim:
raise DimensionError(('Unable to append %sD and %sD '
'geometries') % (self.ndim, other.ndim))
@@ -457,6 +468,7 @@ class CoordinateDefinition(BaseDefinition):
self.size = self.lons.size
def __str__(self):
+ """Return string representation of the coordinate definition."""
# Rely on numpy's object printing
return ('Shape: %s\nLons: %s\nLats: %s') % (str(self.shape),
str(self.lons),
@@ -464,7 +476,7 @@ class CoordinateDefinition(BaseDefinition):
class GridDefinition(CoordinateDefinition):
- """Grid defined by lons and lats
+ """Grid defined by lons and lats.
Parameters
----------
@@ -485,9 +497,11 @@ class GridDefinition(CoordinateDefinition):
Grid lats
cartesian_coords : object
Grid cartesian coordinates
+
"""
def __init__(self, lons, lats, nprocs=1):
+ """Initialize GridDefinition."""
super(GridDefinition, self).__init__(lons, lats, nprocs)
if lons.shape != lats.shape:
raise ValueError('lon and lat grid must have same shape')
@@ -538,6 +552,7 @@ class SwathDefinition(CoordinateDefinition):
"""
def __init__(self, lons, lats, nprocs=1):
+ """Initialize SwathDefinition."""
if not isinstance(lons, (np.ndarray, DataArray)):
lons = np.asanyarray(lons)
lats = np.asanyarray(lats)
@@ -553,7 +568,7 @@ class SwathDefinition(CoordinateDefinition):
@staticmethod
def _do_transform(src, dst, lons, lats, alt):
- """Helper for 'aggregate' method."""
+ """Run pyproj.transform and stack the results."""
x, y, z = transform(src, dst, lons, lats, alt)
return np.dstack((x, y, z))
@@ -597,6 +612,7 @@ class SwathDefinition(CoordinateDefinition):
return self.hash
def update_hash(self, the_hash=None):
+ """Update the hash."""
if the_hash is None:
the_hash = hashlib.sha1()
the_hash.update(get_array_hashable(self.lons))
@@ -668,6 +684,7 @@ class SwathDefinition(CoordinateDefinition):
return blons, blats
def compute_bb_proj_params(self, proj_dict):
+ """Compute BB projection parameters."""
projection = proj_dict['proj']
ellipsoid = proj_dict.get('ellps', 'WGS84')
if projection == 'omerc':
@@ -741,46 +758,101 @@ class DynamicAreaDefinition(object):
resolution=None, optimize_projection=False, rotation=None):
"""Initialize the DynamicAreaDefinition.
+ Attributes
+ ----------
area_id:
The name of the area.
description:
The description of the area.
projection:
- The dictionary or string of projection parameters. Doesn't have to be complete.
- height, width:
- The shape of the resulting area.
+ The dictionary or string of projection parameters. Doesn't have to
+ be complete. If not complete, ``proj_info`` must be provided to
+ ``freeze`` to "fill in" any missing parameters.
+ width:
+ x dimension in number of pixels, aka number of grid columns
+ height:
+ y dimension in number of pixels, aka number of grid rows
+ shape:
+ Corresponding array shape as (height, width)
area_extent:
The area extent of the area.
+ pixel_size_x:
+ Pixel width in projection units
+ pixel_size_y:
+ Pixel height in projection units
resolution:
- the resolution of the resulting area.
+ Resolution of the resulting area as (pixel_size_x, pixel_size_y) or a scalar if pixel_size_x == pixel_size_y.
optimize_projection:
Whether the projection parameters have to be optimized.
rotation:
Rotation in degrees (negative is cw)
"""
- if isinstance(projection, str):
- proj_dict = proj4_str_to_dict(projection)
- elif isinstance(projection, dict):
- proj_dict = projection
- else:
- raise TypeError('Wrong type for projection: {0}. Expected dict or string.'.format(type(projection)))
-
self.area_id = area_id
self.description = description
- self.proj_dict = proj_dict
- self.width = self.width = width
- self.height = self.height = height
+ self.width = width
+ self.height = height
+ self.shape = (self.height, self.width)
self.area_extent = area_extent
self.optimize_projection = optimize_projection
+ if isinstance(resolution, (int, float)):
+ resolution = (resolution, resolution)
self.resolution = resolution
self.rotation = rotation
+ self._projection = projection
+
+ # check if non-dict projections are valid
+ # dicts may be updated later
+ if not isinstance(self._projection, dict):
+ Proj(projection)
+
+ def _get_proj_dict(self):
+ projection = self._projection
+
+ if CRS is not None:
+ try:
+ crs = CRS(projection)
+ except RuntimeError:
+ # could be incomplete dictionary
+ return projection
+ if hasattr(crs, 'to_dict'):
+ # pyproj 2.2+
+ proj_dict = crs.to_dict()
+ else:
+ proj_dict = proj4_str_to_dict(crs.to_proj4())
+ else:
+ if isinstance(projection, str):
+ proj_dict = proj4_str_to_dict(projection)
+ elif isinstance(projection, dict):
+ proj_dict = projection.copy()
+ else:
+ raise TypeError('Wrong type for projection: {0}. Expected '
+ 'dict or string.'.format(type(projection)))
+
+ return proj_dict
+
+ @property
+ def pixel_size_x(self):
+ """Return pixel size in X direction."""
+ if self.resolution is None:
+ return None
+ return self.resolution[0]
+
+ @property
+ def pixel_size_y(self):
+ """Return pixel size in Y direction."""
+ if self.resolution is None:
+ return None
+ return self.resolution[1]
- # size = (x_size, y_size) and shape = (y_size, x_size)
def compute_domain(self, corners, resolution=None, shape=None):
"""Compute shape and area_extent from corners and [shape or resolution] info.
Corners represents the center of pixels, while area_extent represents the edge of pixels.
+
+ Note that ``shape`` is (rows, columns) and ``resolution`` is
+ (x_resolution, y_resolution); the axis order is flipped between the two.
+
"""
if resolution is not None and shape is not None:
raise ValueError("Both resolution and shape can't be provided.")
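As a sanity check on the ordering note above, here is a minimal stand-alone sketch of the resolution branch of ``compute_domain`` (a hypothetical free function, not the real method; the half-pixel padding follows from corners being pixel centers while ``area_extent`` marks pixel edges):

```python
import numpy as np

def compute_domain(corners, resolution):
    """corners = (xmin, ymin, xmax, ymax) of pixel *centers*;
    resolution = scalar or (x_resolution, y_resolution) in projection units.
    """
    if isinstance(resolution, (int, float)):
        resolution = (resolution, resolution)
    x_resolution, y_resolution = resolution
    # corners are centers, so N pixels span (N - 1) resolution steps
    width = int(np.rint((corners[2] - corners[0]) / x_resolution + 1))
    height = int(np.rint((corners[3] - corners[1]) / y_resolution + 1))
    # area_extent marks pixel *edges*: pad half a pixel on every side
    area_extent = (corners[0] - x_resolution / 2,
                   corners[1] - y_resolution / 2,
                   corners[2] + x_resolution / 2,
                   corners[3] + y_resolution / 2)
    return area_extent, width, height

extent, width, height = compute_domain((0.0, 0.0, 90.0, 40.0), 10.0)
assert (width, height) == (10, 5)
assert extent == (-5.0, -5.0, 95.0, 45.0)
```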
@@ -792,10 +864,9 @@ class DynamicAreaDefinition(object):
x_resolution = (corners[2] - corners[0]) * 1.0 / (width - 1)
y_resolution = (corners[3] - corners[1]) * 1.0 / (height - 1)
else:
- try:
- x_resolution, y_resolution = resolution
- except TypeError:
- x_resolution = y_resolution = resolution
+ if isinstance(resolution, (int, float)):
+ resolution = (resolution, resolution)
+ x_resolution, y_resolution = resolution
width = int(np.rint((corners[2] - corners[0]) * 1.0
/ x_resolution + 1))
height = int(np.rint((corners[3] - corners[1]) * 1.0
@@ -810,8 +881,11 @@ class DynamicAreaDefinition(object):
def freeze(self, lonslats=None, resolution=None, shape=None, proj_info=None):
"""Create an AreaDefinition from this area with help of some extra info.
- lonlats:
- the geographical coordinates to contain in the resulting area.
+ Parameters
+ ----------
+ lonslats : SwathDefinition or tuple
+ The geographical coordinates to contain in the resulting area.
+ A tuple should be ``(lons, lats)``.
resolution:
the resolution of the resulting area.
shape:
@@ -821,16 +895,26 @@ class DynamicAreaDefinition(object):
Resolution and shape parameters are ignored if the instance is created
with the `optimize_projection` flag set to True.
+
"""
+ proj_dict = self._get_proj_dict()
+ projection = self._projection
if proj_info is not None:
- self.proj_dict.update(proj_info)
+ # this is now our complete projection information
+ proj_dict.update(proj_info)
+ projection = proj_dict
if self.optimize_projection:
- return lonslats.compute_optimal_bb_area(self.proj_dict)
+ return lonslats.compute_optimal_bb_area(proj_dict)
if resolution is None:
resolution = self.resolution
- if not self.area_extent or not self.width or not self.height:
- proj4 = Proj(**self.proj_dict)
+ if shape is None:
+ shape = self.shape
+ height, width = shape
+ shape = None if None in shape else shape
+ area_extent = self.area_extent
+ if not area_extent or not width or not height:
+ proj4 = Proj(proj_dict)
try:
lons, lats = lonslats
except (TypeError, ValueError):
@@ -840,12 +924,10 @@ class DynamicAreaDefinition(object):
yarr[yarr > 9e29] = np.nan
corners = [np.nanmin(xarr), np.nanmin(yarr),
np.nanmax(xarr), np.nanmax(yarr)]
- # Note: size=(width, height) was changed to shape=(height, width).
- domain = self.compute_domain(corners, resolution, shape)
- self.area_extent, self.width, self.height = domain
+ area_extent, width, height = self.compute_domain(corners, resolution, shape)
return AreaDefinition(self.area_id, self.description, '',
- self.proj_dict, self.width, self.height,
- self.area_extent, self.rotation)
+ projection, width, height,
+ area_extent, self.rotation)
def invproj(data_x, data_y, proj_dict):
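The ``freeze`` logic above merges ``proj_info`` into the (possibly incomplete) projection dictionary before building the final area. The merge itself is plain dictionary updating, sketched here with hypothetical parameter values:

```python
# Hypothetical incomplete projection: orientation parameters are only known
# once the swath is seen, so they arrive via proj_info at freeze time.
partial_proj = {'proj': 'omerc', 'ellps': 'WGS84'}
proj_info = {'lat_0': 60.0, 'lonc': 25.0}

complete_proj = dict(partial_proj)  # work on a copy, as _get_proj_dict does
complete_proj.update(proj_info)     # freeze() merges the same way

assert complete_proj['proj'] == 'omerc'
assert complete_proj['lat_0'] == 60.0
assert 'lat_0' not in partial_proj  # the dynamic definition is not mutated
```

Copying before updating is why ``_get_proj_dict`` returns ``projection.copy()`` for dict inputs: repeated ``freeze`` calls must not pollute the stored partial projection.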
@@ -907,6 +989,8 @@ class AreaDefinition(BaseDefinition):
Pixel width in projection units
pixel_size_y : float
Pixel height in projection units
+ resolution : tuple
+ the resolution of the resulting area as (pixel_size_x, pixel_size_y).
upper_left_extent : tuple
Coordinates (x, y) of upper left corner of upper left pixel in projection units
pixel_upper_left : tuple
@@ -917,7 +1001,7 @@ class AreaDefinition(BaseDefinition):
pixel_offset_y : float
y offset between projection center and upper left corner of upper
left pixel in units of pixels.
- proj4_string : str
+ proj_str : str
Projection defined as Proj.4 string
cartesian_coords : object
Grid cartesian coordinates
@@ -931,13 +1015,7 @@ class AreaDefinition(BaseDefinition):
def __init__(self, area_id, description, proj_id, projection, width, height,
area_extent, rotation=None, nprocs=1, lons=None, lats=None,
dtype=np.float64):
- if isinstance(projection, str):
- proj_dict = proj4_str_to_dict(projection)
- elif isinstance(projection, dict):
- proj_dict = projection
- else:
- raise TypeError('Wrong type for projection: {0}. Expected dict or string.'.format(type(projection)))
-
+ """Initialize AreaDefinition."""
super(AreaDefinition, self).__init__(lons, lats, nprocs)
self.area_id = area_id
self.description = description
@@ -957,11 +1035,23 @@ class AreaDefinition(BaseDefinition):
self.ndim = 2
self.pixel_size_x = (area_extent[2] - area_extent[0]) / float(width)
self.pixel_size_y = (area_extent[3] - area_extent[1]) / float(height)
- self.proj_dict = convert_proj_floats(proj_dict.items())
self.area_extent = tuple(area_extent)
+ if CRS is not None:
+ self.crs = CRS(projection)
+ self._proj_dict = None
+ else:
+ if isinstance(projection, str):
+ proj_dict = proj4_str_to_dict(projection)
+ elif isinstance(projection, dict):
+ # use the float-converted dict to pass to Proj
+ projection = convert_proj_floats(projection.items())
+ proj_dict = projection
+ else:
+ raise TypeError('Wrong type for projection: {0}. Expected dict or string.'.format(type(projection)))
+ self._proj_dict = proj_dict
# Calculate area_extent in lon lat
- proj = Proj(**proj_dict)
+ proj = Proj(projection)
corner_lons, corner_lats = proj((area_extent[0], area_extent[2]),
(area_extent[1], area_extent[3]),
inverse=True)
@@ -983,6 +1073,17 @@ class AreaDefinition(BaseDefinition):
self.dtype = dtype
+ @property
+ def proj_dict(self):
+ """Return the projection dictionary."""
+ if self._proj_dict is None and hasattr(self, 'crs'):
+ if hasattr(self.crs, 'to_dict'):
+ # pyproj 2.2+
+ self._proj_dict = self.crs.to_dict()
+ else:
+ self._proj_dict = proj4_str_to_dict(self.crs.to_proj4())
+ return self._proj_dict
+
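The ``proj_dict`` property added above converts the CRS lazily and caches the result. The caching pattern in isolation (class and names here are illustrative, not pyresample API):

```python
class LazyDict:
    """Illustrative cache-on-first-access property, as used by proj_dict."""

    def __init__(self, converter):
        self._proj_dict = None      # nothing converted yet
        self._converter = converter

    @property
    def proj_dict(self):
        if self._proj_dict is None:             # first access only
            self._proj_dict = self._converter()
        return self._proj_dict

calls = []
area = LazyDict(lambda: calls.append(1) or {'proj': 'geos'})
assert area.proj_dict == {'proj': 'geos'}
assert area.proj_dict == {'proj': 'geos'}
assert len(calls) == 1   # conversion ran once despite two accesses
```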
def copy(self, **override_kwargs):
"""Make a copy of the current area.
@@ -1007,26 +1108,35 @@ class AreaDefinition(BaseDefinition):
@property
def shape(self):
+ """Return area shape."""
return self.height, self.width
+ @property
+ def resolution(self):
+ """Return area resolution in X and Y direction."""
+ return self.pixel_size_x, self.pixel_size_y
+
@property
def name(self):
+ """Return area name."""
warnings.warn("'name' is deprecated, use 'description' instead.", PendingDeprecationWarning)
return self.description
@property
def x_size(self):
+ """Return area width."""
warnings.warn("'x_size' is deprecated, use 'width' instead.", PendingDeprecationWarning)
return self.width
@property
def y_size(self):
+ """Return area height."""
warnings.warn("'y_size' is deprecated, use 'height' instead.", PendingDeprecationWarning)
return self.height
@classmethod
def from_extent(cls, area_id, projection, shape, area_extent, units=None, **kwargs):
- """Creates an AreaDefinition object from area_extent and shape.
+ """Create an AreaDefinition object from area_extent and shape.
Parameters
----------
@@ -1066,12 +1176,13 @@ class AreaDefinition(BaseDefinition):
Returns
-------
AreaDefinition : AreaDefinition
+
"""
return create_area_def(area_id, projection, shape=shape, area_extent=area_extent, units=units, **kwargs)
@classmethod
def from_circle(cls, area_id, projection, center, radius, shape=None, resolution=None, units=None, **kwargs):
- """Creates an AreaDefinition object from center, radius, and shape or from center, radius, and resolution.
+ """Create an AreaDefinition from center, radius, and shape or from center, radius, and resolution.
Parameters
----------
@@ -1123,13 +1234,14 @@ class AreaDefinition(BaseDefinition):
Notes
-----
* ``resolution`` and ``radius`` can be specified with one value if dx == dy
+
"""
return create_area_def(area_id, projection, shape=shape, center=center, radius=radius,
resolution=resolution, units=units, **kwargs)
@classmethod
def from_area_of_interest(cls, area_id, projection, shape, center, resolution, units=None, **kwargs):
- """Creates an AreaDefinition object from center, resolution, and shape.
+ """Create an AreaDefinition from center, resolution, and shape.
Parameters
----------
@@ -1171,13 +1283,14 @@ class AreaDefinition(BaseDefinition):
Returns
-------
AreaDefinition : AreaDefinition
+
"""
return create_area_def(area_id, projection, shape=shape, center=center,
resolution=resolution, units=units, **kwargs)
@classmethod
def from_ul_corner(cls, area_id, projection, shape, upper_left_extent, resolution, units=None, **kwargs):
- """Creates an AreaDefinition object from upper_left_extent, resolution, and shape.
+ """Create an AreaDefinition object from upper_left_extent, resolution, and shape.
Parameters
----------
@@ -1219,6 +1332,7 @@ class AreaDefinition(BaseDefinition):
Returns
-------
AreaDefinition : AreaDefinition
+
"""
return create_area_def(area_id, projection, shape=shape, upper_left_extent=upper_left_extent,
resolution=resolution, units=units, **kwargs)
@@ -1231,9 +1345,11 @@ class AreaDefinition(BaseDefinition):
@property
def proj_str(self):
+ """Return PROJ projection string."""
return proj4_dict_to_str(self.proj_dict, sort=True)
def __str__(self):
+ """Return string representation of the AreaDefinition."""
# We need a sorted dictionary for a unique hash of str(self)
proj_dict = self.proj_dict
proj_str = ('{' +
@@ -1253,18 +1369,33 @@ class AreaDefinition(BaseDefinition):
__repr__ = __str__
def to_cartopy_crs(self):
+ """Convert projection to cartopy CRS object."""
from pyresample._cartopy import from_proj
bounds = (self.area_extent[0],
self.area_extent[2],
self.area_extent[1],
self.area_extent[3])
- crs = from_proj(self.proj_str, bounds=bounds)
+ if hasattr(self, 'crs') and self.crs.to_epsg() is not None:
+ proj_params = "EPSG:{}".format(self.crs.to_epsg())
+ else:
+ proj_params = self.proj_str
+ if Proj(proj_params).is_latlong():
+ # Convert area extent from degrees to radians
+ bounds = np.deg2rad(bounds)
+ crs = from_proj(proj_params, bounds=bounds)
return crs
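The branch above prefers an EPSG code over a raw PROJ string when one exists. The selection logic on its own, with a hypothetical helper name:

```python
# Sketch of the projection-parameter preference in to_cartopy_crs: use an
# "EPSG:xxxx" string when the CRS has an EPSG code, else the PROJ string.
def pick_proj_params(epsg_code, proj_str):
    if epsg_code is not None:
        return "EPSG:{}".format(epsg_code)
    return proj_str

assert pick_proj_params(4326, '+proj=longlat +datum=WGS84') == 'EPSG:4326'
assert pick_proj_params(None, '+proj=geos +h=35785831') == '+proj=geos +h=35785831'
```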
def create_areas_def(self):
+ """Generate YAML formatted representation of this area."""
+ if hasattr(self, 'crs') and self.crs.to_epsg() is not None:
+ proj_dict = {'EPSG': self.crs.to_epsg()}
+ else:
+ proj_dict = self.proj_dict
+ # pyproj 2.0+ adds a '+type=crs' parameter
+ proj_dict.pop('type', None)
res = OrderedDict(description=self.description,
- projection=OrderedDict(self.proj_dict),
+ projection=OrderedDict(proj_dict),
shape=OrderedDict([('height', self.height), ('width', self.width)]))
units = res['projection'].pop('units', None)
extent = OrderedDict([('lower_left_xy', list(self.area_extent[:2])),
@@ -1276,6 +1407,7 @@ class AreaDefinition(BaseDefinition):
return ordered_dump(OrderedDict([(self.area_id, res)]))
def create_areas_def_legacy(self):
+ """Create area definition in legacy format."""
proj_dict = self.proj_dict
proj_str = ','.join(["%s=%s" % (str(k), str(proj_dict[k]))
for k in sorted(proj_dict.keys())])
@@ -1295,8 +1427,7 @@ class AreaDefinition(BaseDefinition):
return area_def_str
def __eq__(self, other):
- """Test for equality"""
-
+ """Test for equality."""
try:
return ((self.proj_str == other.proj_str) and
(self.shape == other.shape) and
@@ -1305,8 +1436,7 @@ class AreaDefinition(BaseDefinition):
return super(AreaDefinition, self).__eq__(other)
def __ne__(self, other):
- """Test for equality"""
-
+ """Test for equality."""
return not self.__eq__(other)
def update_hash(self, the_hash=None):
@@ -1319,11 +1449,11 @@ class AreaDefinition(BaseDefinition):
return the_hash
def colrow2lonlat(self, cols, rows):
- """
- Return longitudes and latitudes for the given image columns
- and rows. Both scalars and arrays are supported.
- To be used with scarse data points instead of slices
- (see get_lonlats).
+ """Return lons and lats for the given image columns and rows.
+
+ Both scalars and arrays are supported. To be used with sparse
+ data points instead of slices (see get_lonlats).
+
"""
p = Proj(self.proj_str)
x = self.projection_x_coords
@@ -1331,15 +1461,18 @@ class AreaDefinition(BaseDefinition):
return p(y[y.size - cols], x[x.size - rows], inverse=True)
def lonlat2colrow(self, lons, lats):
- """
- Return image columns and rows for the given longitudes
- and latitudes. Both scalars and arrays are supported.
- Same as get_xy_from_lonlat, renamed for convenience.
+ """Return image columns and rows for the given lons and lats.
+
+ Both scalars and arrays are supported. Same as
+ get_xy_from_lonlat, renamed for convenience.
+
"""
return self.get_xy_from_lonlat(lons, lats)
def get_xy_from_lonlat(self, lon, lat):
- """Retrieve closest x and y coordinates (column, row indices) for the
+ """Retrieve closest x and y coordinates.
+
+ Retrieve closest x and y coordinates (column, row indices) for the
specified geolocation (lon,lat) if inside area. If lon,lat is a point a
ValueError is raised if the return point is outside the area domain. If
lon,lat is a tuple of sequences of longitudes and latitudes, a tuple of
@@ -1353,8 +1486,8 @@ class AreaDefinition(BaseDefinition):
:Returns:
(x, y) : tuple of integer points/arrays
- """
+ """
if isinstance(lon, list):
lon = np.array(lon)
if isinstance(lat, list):
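Behind ``get_xy_from_lonlat`` is a projection step followed by an index lookup. A hypothetical, simplified version of the index arithmetic (the real method also handles masked arrays and projects lon/lat first):

```python
# Hypothetical helper: map projection coordinates to (col, row) indices.
def xy_to_colrow(x, y, area_extent, width, height):
    xmin, ymin, xmax, ymax = area_extent
    pixel_size_x = (xmax - xmin) / width
    pixel_size_y = (ymax - ymin) / height
    col = int((x - xmin) / pixel_size_x)
    row = int((ymax - y) / pixel_size_y)   # row 0 is the northernmost line
    if not (0 <= col < width and 0 <= row < height):
        raise ValueError('Point is outside the area domain')
    return col, row

assert xy_to_colrow(5.0, 35.0, (0.0, 0.0, 100.0, 40.0), 10, 4) == (0, 0)
assert xy_to_colrow(95.0, 5.0, (0.0, 0.0, 100.0, 40.0), 10, 4) == (9, 3)
```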
@@ -1393,7 +1526,6 @@ class AreaDefinition(BaseDefinition):
ValueError: if the return point is outside the area domain
"""
-
if isinstance(xm, list):
xm = np.array(xm)
if isinstance(ym, list):
@@ -1434,7 +1566,7 @@ class AreaDefinition(BaseDefinition):
return int(x__), int(y__)
def get_lonlat(self, row, col):
- """Retrieves lon and lat values of single point in area grid
+ """Retrieve lon and lat values of single point in area grid.
Parameters
----------
@@ -1444,14 +1576,14 @@ class AreaDefinition(BaseDefinition):
Returns
-------
(lon, lat) : tuple of floats
- """
+ """
lon, lat = self.get_lonlats(nprocs=None, data_slice=(row, col))
return np.asscalar(lon), np.asscalar(lat)
@staticmethod
def _do_rotation(xspan, yspan, rot_deg=0):
- """Helper method to apply a rotation factor to a matrix of points."""
+ """Apply a rotation factor to a matrix of points."""
if hasattr(xspan, 'chunks'):
# we were given dask arrays, use dask functions
import dask.array as numpy
@@ -1463,6 +1595,7 @@ class AreaDefinition(BaseDefinition):
return numpy.einsum('ji, mni -> jmn', rot_mat, numpy.dstack([x, y]))
def get_proj_vectors_dask(self, chunks=None, dtype=None):
+ """Get projection vectors."""
warnings.warn("'get_proj_vectors_dask' is deprecated, please use "
"'get_proj_vectors' with the 'chunks' keyword argument specified.", DeprecationWarning)
if chunks is None:
@@ -1470,7 +1603,7 @@ class AreaDefinition(BaseDefinition):
return self.get_proj_vectors(dtype=dtype, chunks=chunks)
def _get_proj_vectors(self, dtype=None, check_rotation=True, chunks=None):
- """Helper for getting 1D projection coordinates."""
+ """Get 1D projection coordinates."""
x_kwargs = {}
y_kwargs = {}
@@ -1522,6 +1655,7 @@ class AreaDefinition(BaseDefinition):
return self._get_proj_vectors(dtype=dtype, chunks=chunks)
def get_proj_coords_dask(self, chunks=None, dtype=None):
+ """Get projection coordinates."""
warnings.warn("'get_proj_coords_dask' is deprecated, please use "
"'get_proj_coords' with the 'chunks' keyword argument specified.", DeprecationWarning)
if chunks is None:
@@ -1571,6 +1705,7 @@ class AreaDefinition(BaseDefinition):
@property
def projection_x_coords(self):
+ """Return projection X coordinates."""
if self.rotation != 0:
# rotation is only supported in 'get_proj_coords' right now
return self.get_proj_coords(data_slice=(0, slice(None)))[0].squeeze()
@@ -1578,6 +1713,7 @@ class AreaDefinition(BaseDefinition):
@property
def projection_y_coords(self):
+ """Return projection Y coordinates."""
if self.rotation != 0:
# rotation is only supported in 'get_proj_coords' right now
return self.get_proj_coords(data_slice=(slice(None), 0))[1].squeeze()
@@ -1600,6 +1736,7 @@ class AreaDefinition(BaseDefinition):
Coordinate(corner_lons[3], corner_lats[3])]
def get_lonlats_dask(self, chunks=None, dtype=None):
+ """Get longitudes and latitudes."""
warnings.warn("'get_lonlats_dask' is deprecated, please use "
"'get_lonlats' with the 'chunks' keyword argument specified.", DeprecationWarning)
if chunks is None:
@@ -1627,8 +1764,8 @@ class AreaDefinition(BaseDefinition):
-------
(lons, lats) : tuple of numpy arrays
Grids of area lons and and lats
- """
+ """
if cache:
warnings.warn("'cache' keyword argument will be removed in the "
"future and data will not be cached.", PendingDeprecationWarning)
@@ -1722,7 +1859,8 @@ class AreaDefinition(BaseDefinition):
intersection = data_boundary.contour_poly.intersection(
area_boundary.contour_poly)
if intersection is None:
- logger.debug('Cannot determine appropriate slicing.')
+ logger.debug('Cannot determine appropriate slicing. '
+ "Data and projection area do not overlap.")
raise NotImplementedError
x, y = self.get_xy_from_lonlat(np.rad2deg(intersection.lon),
np.rad2deg(intersection.lat))
@@ -1765,11 +1903,10 @@ class AreaDefinition(BaseDefinition):
def get_geostationary_angle_extent(geos_area):
"""Get the max earth (vs space) viewing angles in x and y."""
-
# get some projection parameters
- req = geos_area.proj_dict['a'] / 1000
- rp = geos_area.proj_dict['b'] / 1000
- h = geos_area.proj_dict['h'] / 1000 + req
+ req = geos_area.proj_dict['a'] / 1000.0
+ rp = geos_area.proj_dict['b'] / 1000.0
+ h = geos_area.proj_dict['h'] / 1000.0 + req
# compute some constants
aeq = 1 - req ** 2 / (h ** 2)
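The ``ap_`` and arccos lines fall outside the hunk, so the following reconstructs the rest of the angle computation under standard geostationary geometry; the WGS84-like radii and nominal satellite height are assumed values, not taken from the diff:

```python
import math

req = 6378.137               # equatorial radius [km]  ('a' / 1000)
rp = 6356.752                # polar radius [km]       ('b' / 1000)
h = 35785.831 + req          # distance from Earth's centre [km]

aeq = 1 - req ** 2 / h ** 2  # matches the constant shown in the diff
ap_ = 1 - rp ** 2 / h ** 2
xmax = math.acos(math.sqrt(aeq))   # max Earth-viewing angle in x [rad]
ymax = math.acos(math.sqrt(ap_))   # max Earth-viewing angle in y [rad]

# the flattened Earth makes the y angle slightly smaller than the x angle
assert 0.15 < ymax < xmax < 0.16   # roughly 8.7 degrees either way
```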
@@ -1787,13 +1924,14 @@ def get_geostationary_bounding_box(geos_area, nb_points=50):
Args:
nb_points: Number of points on the polygon
+
"""
xmax, ymax = get_geostationary_angle_extent(geos_area)
# generate points around the north hemisphere in satellite projection
# make it a bit smaller so that we stay inside the valid area
- x = np.cos(np.linspace(-np.pi, 0, int(nb_points / 2))) * (xmax - 0.0001)
- y = -np.sin(np.linspace(-np.pi, 0, int(nb_points / 2))) * (ymax - 0.0001)
+ x = np.cos(np.linspace(-np.pi, 0, int(nb_points / 2.0))) * (xmax - 0.0001)
+ y = -np.sin(np.linspace(-np.pi, 0, int(nb_points / 2.0))) * (ymax - 0.0001)
ll_x, ll_y, ur_x, ur_y = geos_area.area_extent
@@ -1855,9 +1993,10 @@ class StackedAreaDefinition(BaseDefinition):
"""Definition based on multiple vertically stacked AreaDefinitions."""
def __init__(self, *definitions, **kwargs):
- """Base this instance on *definitions*.
+ """Initialize StackedAreaDefinition based on *definitions*.
*kwargs* used here are `nprocs` and `dtype` (see AreaDefinition).
+
"""
nprocs = kwargs.get('nprocs', 1)
super(StackedAreaDefinition, self).__init__(nprocs=nprocs)
@@ -1869,26 +2008,36 @@ class StackedAreaDefinition(BaseDefinition):
@property
def width(self):
+ """Return width of the area definition."""
return self.defs[0].width
@property
def x_size(self):
+ """Return width of the area definition."""
warnings.warn("'x_size' is deprecated, use 'width' instead.", PendingDeprecationWarning)
return self.width
@property
def height(self):
+ """Return height of the area definition."""
return sum(definition.height for definition in self.defs)
@property
def y_size(self):
+ """Return height of the area definition."""
warnings.warn("'y_size' is deprecated, use 'height' instead.", PendingDeprecationWarning)
return self.height
@property
def size(self):
+ """Return size of the area definition."""
return self.height * self.width
+ @property
+ def shape(self):
+ """Return shape of the area definition."""
+ return (self.height, self.width)
+
def append(self, definition):
"""Append another definition to the area."""
if isinstance(definition, StackedAreaDefinition):
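The width/height/shape properties above define a stack's geometry: every member shares one width while the heights accumulate. In miniature (the stand-in class is illustrative, not pyresample API):

```python
class _Def:
    """Stand-in for an AreaDefinition carrying just its shape."""
    def __init__(self, height, width):
        self.height = height
        self.width = width

defs = [_Def(100, 512), _Def(150, 512)]

width = defs[0].width                 # shared across the stack
height = sum(d.height for d in defs)  # vertical sizes accumulate
shape = (height, width)

assert shape == (250, 512)
assert height * width == 128000       # what the 'size' property would return
```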
@@ -1909,7 +2058,6 @@ class StackedAreaDefinition(BaseDefinition):
def get_lonlats(self, nprocs=None, data_slice=None, cache=False, dtype=None, chunks=None):
"""Return lon and lat arrays of the area."""
-
if chunks is not None:
from dask.array import vstack
else:
@@ -1940,9 +2088,10 @@ class StackedAreaDefinition(BaseDefinition):
return self.lons, self.lats
def get_lonlats_dask(self, chunks=None, dtype=None):
- """"Return lon and lat dask arrays of the area."""
+ """Return lon and lat dask arrays of the area."""
warnings.warn("'get_lonlats_dask' is deprecated, please use "
- "'get_lonlats' with the 'chunks' keyword argument specified.", DeprecationWarning)
+ "'get_lonlats' with the 'chunks' keyword argument specified.",
+ DeprecationWarning)
if chunks is None:
chunks = CHUNK_SIZE # FUTURE: Use a global config object instead
return self.get_lonlats(chunks=chunks, dtype=dtype)
@@ -1956,25 +2105,25 @@ class StackedAreaDefinition(BaseDefinition):
@property
def proj4_string(self):
- """Returns projection definition as Proj.4 string"""
+ """Return projection definition as Proj.4 string."""
warnings.warn("'proj4_string' is deprecated, please use 'proj_str' "
"instead.", DeprecationWarning)
return self.defs[0].proj_str
@property
def proj_str(self):
- """Returns projection definition as Proj.4 string"""
+ """Return projection definition as Proj.4 string."""
return self.defs[0].proj_str
def update_hash(self, the_hash=None):
+ """Update the hash."""
for areadef in self.defs:
the_hash = areadef.update_hash(the_hash)
return the_hash
def _get_slice(segments, shape):
- """Generator for segmenting a 1D or 2D array"""
-
+ """Segment a 1D or 2D array."""
if not (1 <= len(shape) <= 2):
raise ValueError('Cannot segment array of shape: %s' % str(shape))
else:
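``_get_slice`` yields per-segment slices over an array's first axis. A hypothetical 1D-only re-implementation of that chunking (the real generator also covers 2D shapes and yields tuples of slices):

```python
# Split the range [0, length) into roughly `segments` contiguous slices.
def get_slices(segments, length):
    step = int(length / segments) + 1   # chunk size, rounded up
    for start in range(0, length, step):
        yield slice(start, min(start + step, length))

slices = list(get_slices(3, 10))
# 10 rows in 3 segments -> chunk size 4: [0:4], [4:8], [8:10]
assert [(s.start, s.stop) for s in slices] == [(0, 4), (4, 8), (8, 10)]
```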
@@ -1992,8 +2141,7 @@ def _get_slice(segments, shape):
def _flatten_cartesian_coords(cartesian_coords):
- """Flatten array to (n, 3) shape"""
-
+ """Flatten array to (n, 3) shape."""
shape = cartesian_coords.shape
if len(shape) > 2:
cartesian_coords = cartesian_coords.reshape(shape[0] *
@@ -2017,6 +2165,7 @@ def _get_highest_level_class(obj1, obj2):
def ordered_dump(data, stream=None, Dumper=yaml.Dumper, **kwds):
+ """Dump the data to YAML in ordered fashion."""
class OrderedDumper(Dumper):
pass
=====================================
pyresample/test/__init__.py
=====================================
@@ -37,7 +37,8 @@ from pyresample.test import (
test_ewa_fornav,
test_bilinear,
test_data_reduce,
- test_spatial_mp
+ test_spatial_mp,
+ test_bucket
)
import unittest
@@ -61,6 +62,7 @@ def suite():
mysuite.addTests(test_bilinear.suite())
mysuite.addTests(test_data_reduce.suite())
mysuite.addTests(test_spatial_mp.suite())
+ mysuite.addTests(test_bucket.suite())
return mysuite
=====================================
pyresample/test/test_bilinear.py
=====================================
@@ -1,16 +1,20 @@
+"""Test bilinear interpolation."""
import unittest
import numpy as np
+try:
+ from unittest import mock
+except ImportError:
+ import mock
-from pyresample._spatial_mp import Proj
-import pyresample.bilinear as bil
-from pyresample import geometry, utils, kd_tree
-
-
-class Test(unittest.TestCase):
+class TestNumpyBilinear(unittest.TestCase):
+ """Test Numpy-based bilinear interpolation."""
@classmethod
def setUpClass(cls):
+ """Do some setup for the test class."""
+ from pyresample import geometry, kd_tree
+
cls.pts_irregular = (np.array([[-1., 1.], ]),
np.array([[1., 2.], ]),
np.array([[-2., -1.], ]),
@@ -64,114 +68,147 @@ class Test(unittest.TestCase):
cls.idx_ref = idx_ref
def test_calc_abc(self):
+ """Test calculation of quadratic coefficients."""
+ from pyresample.bilinear import _calc_abc
+
# No np.nan inputs
pt_1, pt_2, pt_3, pt_4 = self.pts_irregular
- res = bil._calc_abc(pt_1, pt_2, pt_3, pt_4, 0.0, 0.0)
+ res = _calc_abc(pt_1, pt_2, pt_3, pt_4, 0.0, 0.0)
self.assertFalse(np.isnan(res[0]))
self.assertFalse(np.isnan(res[1]))
self.assertFalse(np.isnan(res[2]))
# np.nan input -> np.nan output
- res = bil._calc_abc(np.array([[np.nan, np.nan]]),
- pt_2, pt_3, pt_4, 0.0, 0.0)
+ res = _calc_abc(np.array([[np.nan, np.nan]]),
+ pt_2, pt_3, pt_4, 0.0, 0.0)
self.assertTrue(np.isnan(res[0]))
self.assertTrue(np.isnan(res[1]))
self.assertTrue(np.isnan(res[2]))
def test_get_ts_irregular(self):
- res = bil._get_ts_irregular(self.pts_irregular[0],
- self.pts_irregular[1],
- self.pts_irregular[2],
- self.pts_irregular[3],
- 0., 0.)
+ """Test calculations for irregular corner locations."""
+ from pyresample.bilinear import _get_ts_irregular
+
+ res = _get_ts_irregular(self.pts_irregular[0],
+ self.pts_irregular[1],
+ self.pts_irregular[2],
+ self.pts_irregular[3],
+ 0., 0.)
self.assertEqual(res[0], 0.375)
self.assertEqual(res[1], 0.5)
- res = bil._get_ts_irregular(self.pts_vert_parallel[0],
- self.pts_vert_parallel[1],
- self.pts_vert_parallel[2],
- self.pts_vert_parallel[3],
- 0., 0.)
+ res = _get_ts_irregular(self.pts_vert_parallel[0],
+ self.pts_vert_parallel[1],
+ self.pts_vert_parallel[2],
+ self.pts_vert_parallel[3],
+ 0., 0.)
self.assertTrue(np.isnan(res[0]))
self.assertTrue(np.isnan(res[1]))
def test_get_ts_uprights_parallel(self):
- res = bil._get_ts_uprights_parallel(self.pts_vert_parallel[0],
- self.pts_vert_parallel[1],
- self.pts_vert_parallel[2],
- self.pts_vert_parallel[3],
- 0., 0.)
+ """Test calculation when uprights are parallel."""
+ from pyresample.bilinear import _get_ts_uprights_parallel
+
+ res = _get_ts_uprights_parallel(self.pts_vert_parallel[0],
+ self.pts_vert_parallel[1],
+ self.pts_vert_parallel[2],
+ self.pts_vert_parallel[3],
+ 0., 0.)
self.assertEqual(res[0], 0.5)
self.assertEqual(res[1], 0.5)
def test_get_ts_parallellogram(self):
- res = bil._get_ts_parallellogram(self.pts_both_parallel[0],
- self.pts_both_parallel[1],
- self.pts_both_parallel[2],
- 0., 0.)
+ """Test calculation when the corners form a parallelogram."""
+ from pyresample.bilinear import _get_ts_parallellogram
+
+ res = _get_ts_parallellogram(self.pts_both_parallel[0],
+ self.pts_both_parallel[1],
+ self.pts_both_parallel[2],
+ 0., 0.)
self.assertEqual(res[0], 0.5)
self.assertEqual(res[1], 0.5)
def test_get_ts(self):
+ """Test get_ts()."""
+ from pyresample.bilinear import _get_ts
+
out_x = np.array([[0.]])
out_y = np.array([[0.]])
- res = bil._get_ts(self.pts_irregular[0],
- self.pts_irregular[1],
- self.pts_irregular[2],
- self.pts_irregular[3],
- out_x, out_y)
+ res = _get_ts(self.pts_irregular[0],
+ self.pts_irregular[1],
+ self.pts_irregular[2],
+ self.pts_irregular[3],
+ out_x, out_y)
self.assertEqual(res[0], 0.375)
self.assertEqual(res[1], 0.5)
- res = bil._get_ts(self.pts_both_parallel[0],
- self.pts_both_parallel[1],
- self.pts_both_parallel[2],
- self.pts_both_parallel[3],
- out_x, out_y)
+ res = _get_ts(self.pts_both_parallel[0],
+ self.pts_both_parallel[1],
+ self.pts_both_parallel[2],
+ self.pts_both_parallel[3],
+ out_x, out_y)
self.assertEqual(res[0], 0.5)
self.assertEqual(res[1], 0.5)
- res = bil._get_ts(self.pts_vert_parallel[0],
- self.pts_vert_parallel[1],
- self.pts_vert_parallel[2],
- self.pts_vert_parallel[3],
- out_x, out_y)
+ res = _get_ts(self.pts_vert_parallel[0],
+ self.pts_vert_parallel[1],
+ self.pts_vert_parallel[2],
+ self.pts_vert_parallel[3],
+ out_x, out_y)
self.assertEqual(res[0], 0.5)
self.assertEqual(res[1], 0.5)
def test_solve_quadratic(self):
- res = bil._solve_quadratic(1, 0, 0)
+ """Test solving quadratic equation."""
+ from pyresample.bilinear import (_solve_quadratic, _calc_abc)
+
+ res = _solve_quadratic(1, 0, 0)
self.assertEqual(res[0], 0.0)
- res = bil._solve_quadratic(1, 2, 1)
+ res = _solve_quadratic(1, 2, 1)
self.assertTrue(np.isnan(res[0]))
- res = bil._solve_quadratic(1, 2, 1, min_val=-2.)
+ res = _solve_quadratic(1, 2, 1, min_val=-2.)
self.assertEqual(res[0], -1.0)
# Test that small adjustments work
pt_1, pt_2, pt_3, pt_4 = self.pts_vert_parallel
pt_1 = self.pts_vert_parallel[0].copy()
pt_1[0][0] += 1e-7
- res = bil._calc_abc(pt_1, pt_2, pt_3, pt_4, 0.0, 0.0)
- res = bil._solve_quadratic(res[0], res[1], res[2])
+ res = _calc_abc(pt_1, pt_2, pt_3, pt_4, 0.0, 0.0)
+ res = _solve_quadratic(res[0], res[1], res[2])
self.assertAlmostEqual(res[0], 0.5, 5)
- res = bil._calc_abc(pt_1, pt_3, pt_2, pt_4, 0.0, 0.0)
- res = bil._solve_quadratic(res[0], res[1], res[2])
+ res = _calc_abc(pt_1, pt_3, pt_2, pt_4, 0.0, 0.0)
+ res = _solve_quadratic(res[0], res[1], res[2])
self.assertAlmostEqual(res[0], 0.5, 5)
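The root selection exercised above — keep a root of a·t² + b·t + c = 0 only if it lies inside an accepted interval — can be sketched in plain numpy (a simplified, hypothetical `pick_root` helper; pyresample's actual `_solve_quadratic` is vectorized and handles more edge cases):

```python
import numpy as np

def pick_root(a, b, c, min_val=0.0, max_val=1.0):
    """Return the root of a*t**2 + b*t + c = 0 inside [min_val, max_val].

    Simplified sketch of the root selection the tests above exercise;
    degenerate (a == 0) and array inputs are not handled here.
    """
    disc = b * b - 4 * a * c
    if disc < 0:
        return np.nan
    sq = np.sqrt(disc)
    for root in ((-b + sq) / (2 * a), (-b - sq) / (2 * a)):
        if min_val <= root <= max_val:
            return root
    return np.nan

# t**2 = 0 -> root 0.0 is inside the default [0, 1]
print(pick_root(1, 0, 0))                # 0.0
# (t + 1)**2 = 0 -> root -1 is only accepted once min_val is widened
print(pick_root(1, 2, 1))                # nan
print(pick_root(1, 2, 1, min_val=-2.))   # -1.0
```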
def test_get_output_xy(self):
+ """Test calculation of output xy-coordinates."""
+ from pyresample.bilinear import _get_output_xy
+ from pyresample._spatial_mp import Proj
+
proj = Proj(self.target_def.proj_str)
- out_x, out_y = bil._get_output_xy(self.target_def, proj)
+ out_x, out_y = _get_output_xy(self.target_def, proj)
self.assertTrue(out_x.all())
self.assertTrue(out_y.all())
def test_get_input_xy(self):
+ """Test calculation of input xy-coordinates."""
+ from pyresample.bilinear import _get_input_xy
+ from pyresample._spatial_mp import Proj
+
proj = Proj(self.target_def.proj_str)
- in_x, in_y = bil._get_output_xy(self.swath_def, proj)
+ in_x, in_y = _get_input_xy(self.swath_def, proj,
+ self.input_idxs, self.idx_ref)
self.assertTrue(in_x.all())
self.assertTrue(in_y.all())
def test_get_bounding_corners(self):
+ """Test calculation of bounding corners."""
+ from pyresample.bilinear import (_get_output_xy,
+ _get_input_xy,
+ _get_bounding_corners)
+ from pyresample._spatial_mp import Proj
+
proj = Proj(self.target_def.proj_str)
- out_x, out_y = bil._get_output_xy(self.target_def, proj)
- in_x, in_y = bil._get_input_xy(self.swath_def, proj,
- self.input_idxs, self.idx_ref)
- res = bil._get_bounding_corners(in_x, in_y, out_x, out_y,
- self.neighbours, self.idx_ref)
+ out_x, out_y = _get_output_xy(self.target_def, proj)
+ in_x, in_y = _get_input_xy(self.swath_def, proj,
+ self.input_idxs, self.idx_ref)
+ res = _get_bounding_corners(in_x, in_y, out_x, out_y,
+ self.neighbours, self.idx_ref)
for i in range(len(res) - 1):
pt_ = res[i]
for j in range(2):
@@ -179,6 +216,9 @@ class Test(unittest.TestCase):
self.assertTrue(np.isfinite(pt_[5, j]))
def test_get_bil_info(self):
+ """Test calculation of bilinear resampling indices."""
+ from pyresample.bilinear import get_bil_info
+
def _check_ts(t__, s__):
for i in range(len(t__)):
# Just check the exact value for one pixel
@@ -196,85 +236,764 @@ class Test(unittest.TestCase):
self.assertTrue(t__[i] <= 1.0)
self.assertTrue(s__[i] <= 1.0)
- t__, s__, input_idxs, idx_arr = bil.get_bil_info(self.swath_def,
- self.target_def,
- 50e5, neighbours=32,
- nprocs=1,
- reduce_data=False)
+ t__, s__, input_idxs, idx_arr = get_bil_info(self.swath_def,
+ self.target_def,
+ 50e5, neighbours=32,
+ nprocs=1,
+ reduce_data=False)
_check_ts(t__, s__)
- t__, s__, input_idxs, idx_arr = bil.get_bil_info(self.swath_def,
- self.target_def,
- 50e5, neighbours=32,
- nprocs=1,
- reduce_data=True)
+ t__, s__, input_idxs, idx_arr = get_bil_info(self.swath_def,
+ self.target_def,
+ 50e5, neighbours=32,
+ nprocs=1,
+ reduce_data=True)
_check_ts(t__, s__)
def test_get_sample_from_bil_info(self):
- t__, s__, input_idxs, idx_arr = bil.get_bil_info(self.swath_def,
- self.target_def,
- 50e5, neighbours=32,
- nprocs=1)
+ """Test resampling using resampling indices."""
+ from pyresample.bilinear import get_bil_info, get_sample_from_bil_info
+
+ t__, s__, input_idxs, idx_arr = get_bil_info(self.swath_def,
+ self.target_def,
+ 50e5, neighbours=32,
+ nprocs=1)
# Sample from data1
- res = bil.get_sample_from_bil_info(self.data1.ravel(), t__, s__,
- input_idxs, idx_arr)
+ res = get_sample_from_bil_info(self.data1.ravel(), t__, s__,
+ input_idxs, idx_arr)
self.assertEqual(res[5], 1.)
# Sample from data2
- res = bil.get_sample_from_bil_info(self.data2.ravel(), t__, s__,
- input_idxs, idx_arr)
+ res = get_sample_from_bil_info(self.data2.ravel(), t__, s__,
+ input_idxs, idx_arr)
self.assertEqual(res[5], 2.)
# Reshaping
- res = bil.get_sample_from_bil_info(self.data2.ravel(), t__, s__,
- input_idxs, idx_arr,
- output_shape=self.target_def.shape)
+ res = get_sample_from_bil_info(self.data2.ravel(), t__, s__,
+ input_idxs, idx_arr,
+ output_shape=self.target_def.shape)
res = res.shape
self.assertEqual(res[0], self.target_def.shape[0])
self.assertEqual(res[1], self.target_def.shape[1])
# Test rounding that is happening for certain values
- res = bil.get_sample_from_bil_info(self.data3.ravel(), t__, s__,
- input_idxs, idx_arr,
- output_shape=self.target_def.shape)
+ res = get_sample_from_bil_info(self.data3.ravel(), t__, s__,
+ input_idxs, idx_arr,
+ output_shape=self.target_def.shape)
# Four pixels are outside of the data
self.assertEqual(np.isnan(res).sum(), 4)
def test_resample_bilinear(self):
+ """Test whole bilinear resampling."""
+ from pyresample.bilinear import resample_bilinear
+
# Single array
- res = bil.resample_bilinear(self.data1,
- self.swath_def,
- self.target_def,
- 50e5, neighbours=32,
- nprocs=1)
+ res = resample_bilinear(self.data1,
+ self.swath_def,
+ self.target_def,
+ 50e5, neighbours=32,
+ nprocs=1)
self.assertEqual(res.shape, self.target_def.shape)
# There are 12 pixels with value 1, all others are zero
self.assertEqual(res.sum(), 12)
self.assertEqual((res == 0).sum(), 4)
# Single array with masked output
- res = bil.resample_bilinear(self.data1,
- self.swath_def,
- self.target_def,
- 50e5, neighbours=32,
- nprocs=1, fill_value=None)
+ res = resample_bilinear(self.data1,
+ self.swath_def,
+ self.target_def,
+ 50e5, neighbours=32,
+ nprocs=1, fill_value=None)
self.assertTrue(hasattr(res, 'mask'))
# There should be 12 valid pixels
self.assertEqual(self.target_def.size - res.mask.sum(), 12)
# Two stacked arrays
data = np.dstack((self.data1, self.data2))
- res = bil.resample_bilinear(data,
- self.swath_def,
- self.target_def)
+ res = resample_bilinear(data,
+ self.swath_def,
+ self.target_def)
shp = res.shape
self.assertEqual(shp[0:2], self.target_def.shape)
self.assertEqual(shp[-1], 2)
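The two-step API tested here (precompute the fractional t/s weights, then sample data with them) ultimately applies standard bilinear weighting of four corner values. A minimal sketch of that weighting, assuming a conventional corner ordering (pyresample's actual corner ordering may differ):

```python
import numpy as np

def bilinear_interp(corners, t, s):
    """Standard bilinear weighting of four corner values.

    corners = (upper_left, upper_right, lower_left, lower_right);
    t and s are fractional distances in [0, 1]. Illustrates the kind
    of weighting get_sample_from_bil_info applies, not its exact code.
    """
    ul, ur, ll, lr = corners
    top = (1 - s) * ul + s * ur
    bottom = (1 - s) * ll + s * lr
    return (1 - t) * top + t * bottom

# A constant field stays constant regardless of t and s
print(bilinear_interp((1., 1., 1., 1.), 0.73, 0.31))  # ~1.0
# Midpoint between 0-valued and 2-valued corners
print(bilinear_interp((0., 2., 0., 2.), 0.5, 0.5))    # 1.0
```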
+class TestXarrayBilinear(unittest.TestCase):
+ """Test XArray/Dask-based bilinear interpolation."""
+
+ def setUp(self):
+ """Do some setup for common things."""
+ import dask.array as da
+ from xarray import DataArray
+ from pyresample import geometry, kd_tree
+
+ self.pts_irregular = (np.array([[-1., 1.], ]),
+ np.array([[1., 2.], ]),
+ np.array([[-2., -1.], ]),
+ np.array([[2., -4.], ]))
+ self.pts_vert_parallel = (np.array([[-1., 1.], ]),
+ np.array([[1., 2.], ]),
+ np.array([[-1., -1.], ]),
+ np.array([[1., -2.], ]))
+ self.pts_both_parallel = (np.array([[-1., 1.], ]),
+ np.array([[1., 1.], ]),
+ np.array([[-1., -1.], ]),
+ np.array([[1., -1.], ]))
+
+ # Area definition with four pixels
+ self.target_def = geometry.AreaDefinition('areaD',
+ 'Europe (3km, HRV, VTC)',
+ 'areaD',
+ {'a': '6378144.0',
+ 'b': '6356759.0',
+ 'lat_0': '50.00',
+ 'lat_ts': '50.00',
+ 'lon_0': '8.00',
+ 'proj': 'stere'},
+ 4, 4,
+ [-1370912.72,
+ -909968.64000000001,
+ 1029087.28,
+ 1490031.3600000001])
+
+ # Input data around the target pixel at 0.63388324, 55.08234642
+ in_shape = (100, 100)
+ self.data1 = DataArray(da.ones((in_shape[0], in_shape[1])), dims=('y', 'x'))
+ self.data2 = 2. * self.data1
+ self.data3 = self.data1 + 9.5
+ lons, lats = np.meshgrid(np.linspace(-25., 40., num=in_shape[0]),
+ np.linspace(45., 75., num=in_shape[1]))
+ self.source_def = geometry.SwathDefinition(lons=lons, lats=lats)
+
+ self.radius = 50e3
+ self.neighbours = 32
+ valid_input_index, output_idxs, index_array, dists = \
+ kd_tree.get_neighbour_info(self.source_def, self.target_def,
+ self.radius, neighbours=self.neighbours,
+ nprocs=1)
+ input_size = valid_input_index.sum()
+ index_mask = (index_array == input_size)
+ index_array = np.where(index_mask, 0, index_array)
+
+ self.valid_input_index = valid_input_index
+ self.index_array = index_array
+
+ shp = self.source_def.shape
+ self.cols, self.lines = np.meshgrid(np.arange(shp[1]),
+ np.arange(shp[0]))
+
+ def test_init(self):
+ """Test that the resampler has been initialized correctly."""
+ from pyresample.bilinear.xarr import XArrayResamplerBilinear
+
+ # With defaults
+ resampler = XArrayResamplerBilinear(self.source_def, self.target_def,
+ self.radius)
+ self.assertTrue(resampler.source_geo_def == self.source_def)
+ self.assertTrue(resampler.target_geo_def == self.target_def)
+ self.assertEqual(resampler.radius_of_influence, self.radius)
+ self.assertEqual(resampler.neighbours, 32)
+ self.assertEqual(resampler.epsilon, 0)
+ self.assertTrue(resampler.reduce_data)
+ # These should be None
+ self.assertIsNone(resampler.valid_input_index)
+ self.assertIsNone(resampler.valid_output_index)
+ self.assertIsNone(resampler.index_array)
+ self.assertIsNone(resampler.distance_array)
+ self.assertIsNone(resampler.bilinear_t)
+ self.assertIsNone(resampler.bilinear_s)
+ self.assertIsNone(resampler.slices_x)
+ self.assertIsNone(resampler.slices_y)
+ self.assertIsNone(resampler.mask_slices)
+ self.assertIsNone(resampler.out_coords_x)
+ self.assertIsNone(resampler.out_coords_y)
+ # self.slices_{x,y} are used in self.slices dict
+ self.assertTrue(resampler.slices['x'] is resampler.slices_x)
+ self.assertTrue(resampler.slices['y'] is resampler.slices_y)
+ # self.out_coords_{x,y} are used in self.out_coords dict
+ self.assertTrue(resampler.out_coords['x'] is resampler.out_coords_x)
+ self.assertTrue(resampler.out_coords['y'] is resampler.out_coords_y)
+
+ # Override defaults
+ resampler = XArrayResamplerBilinear(self.source_def, self.target_def,
+ self.radius, neighbours=16,
+ epsilon=0.1, reduce_data=False)
+ self.assertEqual(resampler.neighbours, 16)
+ self.assertEqual(resampler.epsilon, 0.1)
+ self.assertFalse(resampler.reduce_data)
+
+ def test_get_bil_info(self):
+ """Test calculation of bilinear info."""
+ from pyresample.bilinear.xarr import XArrayResamplerBilinear
+
+ def _check_ts(t__, s__, nans):
+ for i, _ in enumerate(t__):
+ # Just check the exact value for one pixel
+ if i == 5:
+ self.assertAlmostEqual(t__[i], 0.730659147133, 5)
+ self.assertAlmostEqual(s__[i], 0.310314173004, 5)
+ # These pixels are outside the area
+ elif i in nans:
+ self.assertTrue(np.isnan(t__[i]))
+ self.assertTrue(np.isnan(s__[i]))
+ # All the others should have values between 0.0 and 1.0
+ else:
+ self.assertTrue(t__[i] >= 0.0)
+ self.assertTrue(s__[i] >= 0.0)
+ self.assertTrue(t__[i] <= 1.0)
+ self.assertTrue(s__[i] <= 1.0)
+
+ # Data reduction enabled (default)
+ resampler = XArrayResamplerBilinear(self.source_def, self.target_def,
+ self.radius, reduce_data=True)
+ (t__, s__, slices, mask_slices, out_coords) = resampler.get_bil_info()
+ _check_ts(t__.compute(), s__.compute(), [3, 10, 12, 13, 14, 15])
+
+ # Nothing should be masked based on coordinates
+ self.assertTrue(np.all(~mask_slices))
+ # Four values per output location
+ self.assertEqual(mask_slices.shape, (self.target_def.size, 4))
+
+ # self.slices_{x,y} are used in self.slices dict so they
+ # should be the same (object)
+ self.assertTrue(isinstance(slices, dict))
+ self.assertTrue(resampler.slices['x'] is resampler.slices_x)
+ self.assertTrue(np.all(resampler.slices['x'] == slices['x']))
+ self.assertTrue(resampler.slices['y'] is resampler.slices_y)
+ self.assertTrue(np.all(resampler.slices['y'] == slices['y']))
+
+ # self.out_coords_{x,y} are used in self.out_coords dict so they
+ # should be the same (object)
+ self.assertTrue(isinstance(out_coords, dict))
+ self.assertTrue(resampler.out_coords['x'] is resampler.out_coords_x)
+ self.assertTrue(np.all(resampler.out_coords['x'] == out_coords['x']))
+ self.assertTrue(resampler.out_coords['y'] is resampler.out_coords_y)
+ self.assertTrue(np.all(resampler.out_coords['y'] == out_coords['y']))
+
+ # Also some other attributes should have been set
+ self.assertTrue(t__ is resampler.bilinear_t)
+ self.assertTrue(s__ is resampler.bilinear_s)
+ self.assertIsNotNone(resampler.valid_output_index)
+ self.assertIsNotNone(resampler.index_array)
+ self.assertIsNotNone(resampler.valid_input_index)
+
+ # Data reduction disabled
+ resampler = XArrayResamplerBilinear(self.source_def, self.target_def,
+ self.radius, reduce_data=False)
+ (t__, s__, slices, mask_slices, out_coords) = resampler.get_bil_info()
+ _check_ts(t__.compute(), s__.compute(), [10, 12, 13, 14, 15])
+
+ def test_get_sample_from_bil_info(self):
+ """Test bilinear interpolation as a whole."""
+ from pyresample.bilinear.xarr import XArrayResamplerBilinear
+
+ resampler = XArrayResamplerBilinear(self.source_def, self.target_def,
+ self.radius)
+ _ = resampler.get_bil_info()
+
+ # Sample from data1
+ res = resampler.get_sample_from_bil_info(self.data1)
+ res = res.compute()
+ # Check couple of values
+ self.assertEqual(res.values[1, 1], 1.)
+ self.assertTrue(np.isnan(res.values[0, 3]))
+ # Interpolating constant data should preserve the constant value
+ self.assertAlmostEqual(np.nanmin(res.values), 1.)
+ self.assertAlmostEqual(np.nanmax(res.values), 1.)
+ # Check that dimensions are the same
+ self.assertEqual(res.dims, self.data1.dims)
+
+ # Sample from data1, custom fill value
+ res = resampler.get_sample_from_bil_info(self.data1, fill_value=-1.0)
+ res = res.compute()
+ self.assertEqual(np.nanmin(res.values), -1.)
+
+ # Sample from integer data
+ res = resampler.get_sample_from_bil_info(self.data1.astype(np.uint8),
+ fill_value=None)
+ res = res.compute()
+ # Six values should be filled with zeros, which is the
+ # default fill_value for integer data
+ self.assertEqual(np.sum(res == 0), 6)
+
+ @mock.patch('pyresample.bilinear.xarr.setattr')
+ def test_compute_indices(self, mock_setattr):
+ """Test running .compute() for indices."""
+ from pyresample.bilinear.xarr import (XArrayResamplerBilinear,
+ CACHE_INDICES)
+
+ resampler = XArrayResamplerBilinear(self.source_def, self.target_def,
+ self.radius)
+
+ # Set indices to Numpy arrays
+ for idx in CACHE_INDICES:
+ setattr(resampler, idx, np.array([]))
+ resampler._compute_indices()
+ # None of the indices should have been reassigned
+ mock_setattr.assert_not_called()
+
+ # Set indices to a Mock object
+ arr = mock.MagicMock()
+ for idx in CACHE_INDICES:
+ setattr(resampler, idx, arr)
+ resampler._compute_indices()
+ # All the indices should have been reassigned
+ self.assertEqual(mock_setattr.call_count, len(CACHE_INDICES))
+ # The compute should have been called the same amount of times
+ self.assertEqual(arr.compute.call_count, len(CACHE_INDICES))
+
+ def test_add_missing_coordinates(self):
+ """Test coordinate updating."""
+ import dask.array as da
+ from xarray import DataArray
+ from pyresample.bilinear.xarr import XArrayResamplerBilinear
+
+ resampler = XArrayResamplerBilinear(self.source_def, self.target_def,
+ self.radius)
+ bands = ['R', 'G', 'B']
+ data = DataArray(da.ones((3, 10, 10)), dims=('bands', 'y', 'x'),
+ coords={'bands': bands,
+ 'y': np.arange(10), 'x': np.arange(10)})
+ resampler._add_missing_coordinates(data)
+ # X and Y coordinates should not change
+ self.assertIsNone(resampler.out_coords_x)
+ self.assertIsNone(resampler.out_coords_y)
+ self.assertIsNone(resampler.out_coords['x'])
+ self.assertIsNone(resampler.out_coords['y'])
+ self.assertTrue('bands' in resampler.out_coords)
+ self.assertTrue(np.all(resampler.out_coords['bands'] == bands))
+
+ def test_slice_data(self):
+ """Test slicing the data."""
+ import dask.array as da
+ from xarray import DataArray
+ from pyresample.bilinear.xarr import XArrayResamplerBilinear
+
+ resampler = XArrayResamplerBilinear(self.source_def, self.target_def,
+ self.radius)
+
+ # Too many dimensions
+ data = DataArray(da.ones((1, 3, 10, 10)))
+ with self.assertRaises(ValueError):
+ _ = resampler._slice_data(data, np.nan)
+
+ # 2D data
+ data = DataArray(da.ones((10, 10)))
+ resampler.slices_x = np.random.randint(0, 10, (100, 4))
+ resampler.slices_y = np.random.randint(0, 10, (100, 4))
+ resampler.mask_slices = np.zeros((100, 4), dtype=np.bool)
+ p_1, p_2, p_3, p_4 = resampler._slice_data(data, np.nan)
+ self.assertEqual(p_1.shape, (100, ))
+ self.assertTrue(p_1.shape == p_2.shape == p_3.shape == p_4.shape)
+ self.assertTrue(np.all(p_1 == 1.0) and np.all(p_2 == 1.0) and
+ np.all(p_3 == 1.0) and np.all(p_4 == 1.0))
+
+ # 2D data with masking
+ resampler.mask_slices = np.ones((100, 4), dtype=np.bool)
+ p_1, p_2, p_3, p_4 = resampler._slice_data(data, np.nan)
+ self.assertTrue(np.all(np.isnan(p_1)) and np.all(np.isnan(p_2)) and
+ np.all(np.isnan(p_3)) and np.all(np.isnan(p_4)))
+
+ # 3D data
+ data = DataArray(da.ones((3, 10, 10)))
+ resampler.slices_x = np.random.randint(0, 10, (100, 4))
+ resampler.slices_y = np.random.randint(0, 10, (100, 4))
+ resampler.mask_slices = np.zeros((100, 4), dtype=np.bool)
+ p_1, p_2, p_3, p_4 = resampler._slice_data(data, np.nan)
+ self.assertEqual(p_1.shape, (3, 100))
+ self.assertTrue(p_1.shape == p_2.shape == p_3.shape == p_4.shape)
+
+ # 3D data with masking
+ resampler.mask_slices = np.ones((100, 4), dtype=np.bool)
+ p_1, p_2, p_3, p_4 = resampler._slice_data(data, np.nan)
+ self.assertTrue(np.all(np.isnan(p_1)) and np.all(np.isnan(p_2)) and
+ np.all(np.isnan(p_3)) and np.all(np.isnan(p_4)))
+
+ @mock.patch('pyresample.bilinear.xarr.np.meshgrid')
+ def test_get_slices(self, meshgrid):
+ """Test slice array creation."""
+ from pyresample.bilinear.xarr import XArrayResamplerBilinear
+
+ meshgrid.return_value = (self.cols, self.lines)
+
+ resampler = XArrayResamplerBilinear(self.source_def, self.target_def,
+ self.radius)
+ resampler.valid_input_index = self.valid_input_index
+ resampler.index_array = self.index_array
+
+ resampler._get_slices()
+ self.assertIsNotNone(resampler.out_coords_x)
+ self.assertIsNotNone(resampler.out_coords_y)
+ self.assertTrue(resampler.out_coords_x is resampler.out_coords['x'])
+ self.assertTrue(resampler.out_coords_y is resampler.out_coords['y'])
+ self.assertTrue(np.allclose(
+ resampler.out_coords_x,
+ [-1070912.72, -470912.72, 129087.28, 729087.28]))
+ self.assertTrue(np.allclose(
+ resampler.out_coords_y,
+ [1190031.36, 590031.36, -9968.64, -609968.64]))
+
+ self.assertIsNotNone(resampler.slices_x)
+ self.assertIsNotNone(resampler.slices_y)
+ self.assertTrue(resampler.slices_x is resampler.slices['x'])
+ self.assertTrue(resampler.slices_y is resampler.slices['y'])
+ self.assertTrue(resampler.slices_x.shape == (self.target_def.size, 32))
+ self.assertTrue(resampler.slices_y.shape == (self.target_def.size, 32))
+ self.assertEqual(np.sum(resampler.slices_x), 12471)
+ self.assertEqual(np.sum(resampler.slices_y), 2223)
+
+ self.assertFalse(np.any(resampler.mask_slices))
+
+ # Ensure that source geo def is used in masking
+ # Setting target_geo_def to 0-size shouldn't cause any masked values
+ resampler.target_geo_def = np.array([])
+ resampler._get_slices()
+ self.assertFalse(np.any(resampler.mask_slices))
+ # Setting source area def to 0-size should mask all values
+ resampler.source_geo_def = np.array([[]])
+ resampler._get_slices()
+ self.assertTrue(np.all(resampler.mask_slices))
+
+ @mock.patch('pyresample.bilinear.xarr.KDTree')
+ def test_create_resample_kdtree(self, KDTree):
+ """Test that KDTree creation is called."""
+ from pyresample.bilinear.xarr import XArrayResamplerBilinear
+
+ resampler = XArrayResamplerBilinear(self.source_def, self.target_def,
+ self.radius)
+
+ vii, kdtree = resampler._create_resample_kdtree()
+ self.assertEqual(np.sum(vii), 2700)
+ self.assertEqual(vii.size, self.source_def.size)
+ KDTree.assert_called_once()
+
+ @mock.patch('pyresample.bilinear.xarr.query_no_distance')
+ def test_query_resample_kdtree(self, qnd):
+ """Test that query_no_distance is called in _query_resample_kdtree()."""
+ from pyresample.bilinear.xarr import XArrayResamplerBilinear
+
+ resampler = XArrayResamplerBilinear(self.source_def, self.target_def,
+ self.radius)
+ res, none = resampler._query_resample_kdtree(1, 2, 3, 4,
+ reduce_data=5)
+ qnd.assert_called_with(2, 3, 4, 1, resampler.neighbours,
+ resampler.epsilon,
+ resampler.radius_of_influence)
+
+ def test_get_input_xy_dask(self):
+ """Test computation of input X and Y coordinates in target proj."""
+ import dask.array as da
+ from pyresample.bilinear.xarr import _get_input_xy_dask
+ from pyresample._spatial_mp import Proj
+
+ proj = Proj(self.target_def.proj_str)
+ in_x, in_y = _get_input_xy_dask(self.source_def, proj,
+ da.from_array(self.valid_input_index),
+ da.from_array(self.index_array))
+
+ self.assertEqual(in_x.shape, (self.target_def.size, 32))
+ self.assertEqual(in_y.shape, (self.target_def.size, 32))
+ self.assertTrue(in_x.all())
+ self.assertTrue(in_y.all())
+
+ def test_mask_coordinates_dask(self):
+ """Test masking of invalid coordinates."""
+ import dask.array as da
+ from pyresample.bilinear.xarr import _mask_coordinates_dask
+
+ lons, lats = _mask_coordinates_dask(
+ da.from_array([-200., 0., 0., 0., 200.]),
+ da.from_array([0., -100., 0, 100., 0.]))
+ lons, lats = da.compute(lons, lats)
+ self.assertTrue(lons[2] == lats[2] == 0.0)
+ self.assertEqual(np.sum(np.isnan(lons)), 4)
+ self.assertEqual(np.sum(np.isnan(lats)), 4)
+
+ def test_get_bounding_corners_dask(self):
+ """Test finding surrounding bounding corners."""
+ import dask.array as da
+ from pyresample.bilinear.xarr import (_get_input_xy_dask,
+ _get_bounding_corners_dask)
+ from pyresample._spatial_mp import Proj
+ from pyresample import CHUNK_SIZE
+
+ proj = Proj(self.target_def.proj_str)
+ out_x, out_y = self.target_def.get_proj_coords(chunks=CHUNK_SIZE)
+ out_x = da.ravel(out_x)
+ out_y = da.ravel(out_y)
+ in_x, in_y = _get_input_xy_dask(self.source_def, proj,
+ da.from_array(self.valid_input_index),
+ da.from_array(self.index_array))
+ pt_1, pt_2, pt_3, pt_4, ia_ = _get_bounding_corners_dask(
+ in_x, in_y, out_x, out_y,
+ self.neighbours,
+ da.from_array(self.index_array))
+
+ self.assertTrue(pt_1.shape == pt_2.shape ==
+ pt_3.shape == pt_4.shape ==
+ (self.target_def.size, 2))
+ self.assertTrue(ia_.shape == (self.target_def.size, 4))
+
+ # Check which of the locations has four valid X/Y pairs by
+ # finding where there are non-NaN values
+ res = da.sum(pt_1 + pt_2 + pt_3 + pt_4, axis=1).compute()
+ self.assertEqual(np.sum(~np.isnan(res)), 10)
+
+ def test_get_corner_dask(self):
+ """Test finding the closest corners."""
+ import dask.array as da
+ from pyresample.bilinear.xarr import (_get_corner_dask,
+ _get_input_xy_dask)
+ from pyresample import CHUNK_SIZE
+ from pyresample._spatial_mp import Proj
+
+ proj = Proj(self.target_def.proj_str)
+ in_x, in_y = _get_input_xy_dask(self.source_def, proj,
+ da.from_array(self.valid_input_index),
+ da.from_array(self.index_array))
+ out_x, out_y = self.target_def.get_proj_coords(chunks=CHUNK_SIZE)
+ out_x = da.ravel(out_x)
+ out_y = da.ravel(out_y)
+
+ # Some copy&paste from the code to get the input
+ out_x_tile = np.reshape(np.tile(out_x, self.neighbours),
+ (self.neighbours, out_x.size)).T
+ out_y_tile = np.reshape(np.tile(out_y, self.neighbours),
+ (self.neighbours, out_y.size)).T
+ x_diff = out_x_tile - in_x
+ y_diff = out_y_tile - in_y
+ stride = np.arange(x_diff.shape[0])
+
+ # Use lower left source pixels for testing
+ valid = (x_diff > 0) & (y_diff > 0)
+ x_3, y_3, idx_3 = _get_corner_dask(stride, valid, in_x, in_y,
+ da.from_array(self.index_array))
+
+ self.assertTrue(x_3.shape == y_3.shape == idx_3.shape ==
+ (self.target_def.size, ))
+ # Four locations have no data to the lower left of them (the
+ # bottom row of the area)
+ self.assertEqual(np.sum(np.isnan(x_3.compute())), 4)
+
+ @mock.patch('pyresample.bilinear.xarr._get_ts_parallellogram_dask')
+ @mock.patch('pyresample.bilinear.xarr._get_ts_uprights_parallel_dask')
+ @mock.patch('pyresample.bilinear.xarr._get_ts_irregular_dask')
+ def test_get_ts_dask(self, irregular, uprights, parallellogram):
+ """Test that the three separate functions are called."""
+ from pyresample.bilinear.xarr import _get_ts_dask
+
+ # All valid values
+ t_irr = np.array([0.1, 0.2, 0.3])
+ s_irr = np.array([0.1, 0.2, 0.3])
+ irregular.return_value = (t_irr, s_irr)
+ t__, s__ = _get_ts_dask(1, 2, 3, 4, 5, 6)
+ irregular.assert_called_once()
+ uprights.assert_not_called()
+ parallellogram.assert_not_called()
+ self.assertTrue(np.allclose(t__.compute(), t_irr))
+ self.assertTrue(np.allclose(s__.compute(), s_irr))
+
+ # NaN in the first step, good value for that location from the
+ # second step
+ t_irr = np.array([0.1, 0.2, np.nan])
+ s_irr = np.array([0.1, 0.2, np.nan])
+ irregular.return_value = (t_irr, s_irr)
+ t_upr = np.array([3, 3, 0.3])
+ s_upr = np.array([3, 3, 0.3])
+ uprights.return_value = (t_upr, s_upr)
+ t__, s__ = _get_ts_dask(1, 2, 3, 4, 5, 6)
+ self.assertEqual(irregular.call_count, 2)
+ uprights.assert_called_once()
+ parallellogram.assert_not_called()
+ # Only the last value of the first step should have been replaced
+ t_res = np.array([0.1, 0.2, 0.3])
+ s_res = np.array([0.1, 0.2, 0.3])
+ self.assertTrue(np.allclose(t__.compute(), t_res))
+ self.assertTrue(np.allclose(s__.compute(), s_res))
+
+ # Two NaNs in the first step, one of which is filled by the
+ # second, and the last bad value is replaced by the third step
+ t_irr = np.array([0.1, np.nan, np.nan])
+ s_irr = np.array([0.1, np.nan, np.nan])
+ irregular.return_value = (t_irr, s_irr)
+ t_upr = np.array([3, np.nan, 0.3])
+ s_upr = np.array([3, np.nan, 0.3])
+ uprights.return_value = (t_upr, s_upr)
+ t_par = np.array([4, 0.2, 0.3])
+ s_par = np.array([4, 0.2, 0.3])
+ parallellogram.return_value = (t_par, s_par)
+ t__, s__ = _get_ts_dask(1, 2, 3, 4, 5, 6)
+ self.assertEqual(irregular.call_count, 3)
+ self.assertEqual(uprights.call_count, 2)
+ parallellogram.assert_called_once()
+ # Only the last two values should have been replaced
+ t_res = np.array([0.1, 0.2, 0.3])
+ s_res = np.array([0.1, 0.2, 0.3])
+ self.assertTrue(np.allclose(t__.compute(), t_res))
+ self.assertTrue(np.allclose(s__.compute(), s_res))
+
+ # Too large and small values should be set to NaN
+ t_irr = np.array([1.00001, -0.00001, 1e6])
+ s_irr = np.array([1.00001, -0.00001, -1e6])
+ irregular.return_value = (t_irr, s_irr)
+ # Second step also returns invalid values
+ t_upr = np.array([1.00001, 0.2, np.nan])
+ s_upr = np.array([-0.00001, 0.2, np.nan])
+ uprights.return_value = (t_upr, s_upr)
+ # Third step has one new valid value, the last will stay invalid
+ t_par = np.array([0.1, 0.2, 4.0])
+ s_par = np.array([0.1, 0.2, 4.0])
+ parallellogram.return_value = (t_par, s_par)
+ t__, s__ = _get_ts_dask(1, 2, 3, 4, 5, 6)
+
+ t_res = np.array([0.1, 0.2, np.nan])
+ s_res = np.array([0.1, 0.2, np.nan])
+ self.assertTrue(np.allclose(t__.compute(), t_res, equal_nan=True))
+ self.assertTrue(np.allclose(s__.compute(), s_res, equal_nan=True))
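The mocked call counts above encode a fallback chain: the irregular solver runs first, and NaN results are filled from the uprights-parallel and then the parallelogram solvers. The combination step can be sketched with `np.where` (a simplified stand-in for the real `_get_ts_dask` logic, which additionally masks out-of-range values):

```python
import numpy as np

def combine_steps(*steps):
    """Fill NaNs from successive fallback computations.

    Values from the first step are kept where finite; NaNs are
    filled from each later step in turn.
    """
    result = np.asarray(steps[0], dtype=float)
    for step in steps[1:]:
        result = np.where(np.isnan(result), step, result)
    return result

t_irr = np.array([0.1, np.nan, np.nan])   # first step: two failures
t_upr = np.array([3.0, np.nan, 0.3])      # second step fills one
t_par = np.array([4.0, 0.2, 0.3])         # third step fills the rest
print(combine_steps(t_irr, t_upr, t_par))  # [0.1 0.2 0.3]
```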
+
+ def test_get_ts_irregular_dask(self):
+ """Test calculations for irregular corner locations."""
+ from pyresample.bilinear.xarr import _get_ts_irregular_dask
+
+ res = _get_ts_irregular_dask(self.pts_irregular[0],
+ self.pts_irregular[1],
+ self.pts_irregular[2],
+ self.pts_irregular[3],
+ 0., 0.)
+ self.assertEqual(res[0], 0.375)
+ self.assertEqual(res[1], 0.5)
+ res = _get_ts_irregular_dask(self.pts_vert_parallel[0],
+ self.pts_vert_parallel[1],
+ self.pts_vert_parallel[2],
+ self.pts_vert_parallel[3],
+ 0., 0.)
+ self.assertTrue(np.isnan(res[0]))
+ self.assertTrue(np.isnan(res[1]))
+
+ def test_get_ts_uprights_parallel(self):
+ """Test calculation when uprights are parallel."""
+ from pyresample.bilinear import _get_ts_uprights_parallel
+
+ res = _get_ts_uprights_parallel(self.pts_vert_parallel[0],
+ self.pts_vert_parallel[1],
+ self.pts_vert_parallel[2],
+ self.pts_vert_parallel[3],
+ 0., 0.)
+ self.assertEqual(res[0], 0.5)
+ self.assertEqual(res[1], 0.5)
+
+ def test_get_ts_parallellogram(self):
+ """Test calculation when the corners form a parallellogram."""
+ from pyresample.bilinear import _get_ts_parallellogram
+
+ res = _get_ts_parallellogram(self.pts_both_parallel[0],
+ self.pts_both_parallel[1],
+ self.pts_both_parallel[2],
+ 0., 0.)
+ self.assertEqual(res[0], 0.5)
+ self.assertEqual(res[1], 0.5)
+
+ def test_calc_abc(self):
+ """Test calculation of quadratic coefficients."""
+ from pyresample.bilinear.xarr import _calc_abc_dask
+
+ # No np.nan inputs
+ pt_1, pt_2, pt_3, pt_4 = self.pts_irregular
+ res = _calc_abc_dask(pt_1, pt_2, pt_3, pt_4, 0.0, 0.0)
+ self.assertFalse(np.isnan(res[0]))
+ self.assertFalse(np.isnan(res[1]))
+ self.assertFalse(np.isnan(res[2]))
+ # np.nan input -> np.nan output
+ res = _calc_abc_dask(np.array([[np.nan, np.nan]]),
+ pt_2, pt_3, pt_4, 0.0, 0.0)
+ self.assertTrue(np.isnan(res[0]))
+ self.assertTrue(np.isnan(res[1]))
+ self.assertTrue(np.isnan(res[2]))
+
+ def test_solve_quadratic(self):
+ """Test solving quadratic equation."""
+ from pyresample.bilinear.xarr import (_solve_quadratic_dask,
+ _calc_abc_dask)
+
+ res = _solve_quadratic_dask(1, 0, 0).compute()
+ self.assertEqual(res, 0.0)
+ res = _solve_quadratic_dask(1, 2, 1).compute()
+ self.assertTrue(np.isnan(res))
+ res = _solve_quadratic_dask(1, 2, 1, min_val=-2.).compute()
+ self.assertEqual(res, -1.0)
+ # Test that small adjustments work
+ pt_1, pt_2, pt_3, pt_4 = self.pts_vert_parallel
+ pt_1 = self.pts_vert_parallel[0].copy()
+ pt_1[0][0] += 1e-7
+ res = _calc_abc_dask(pt_1, pt_2, pt_3, pt_4, 0.0, 0.0)
+ res = _solve_quadratic_dask(res[0], res[1], res[2]).compute()
+ self.assertAlmostEqual(res[0], 0.5, 5)
+ res = _calc_abc_dask(pt_1, pt_3, pt_2, pt_4, 0.0, 0.0)
+ res = _solve_quadratic_dask(res[0], res[1], res[2]).compute()
+ self.assertAlmostEqual(res[0], 0.5, 5)
+
+ def test_query_no_distance(self):
+ """Test KDTree querying."""
+ from pyresample.bilinear.xarr import query_no_distance
+
+ kdtree = mock.MagicMock()
+ kdtree.query.return_value = (1, 2)
+ lons, lats = self.target_def.get_lonlats()
+ voi = (lons >= -180) & (lons <= 180) & (lats <= 90) & (lats >= -90)
+ res = query_no_distance(lons, lats, voi, kdtree, self.neighbours,
+ 0., self.radius)
+ # Only the second value from the query is returned
+ self.assertEqual(res, 2)
+ kdtree.query.assert_called_once()
+
+ def test_get_valid_input_index_dask(self):
+ """Test finding valid indices for reduced input data."""
+ from pyresample.bilinear.xarr import _get_valid_input_index_dask
+
+ # Do not reduce data
+ vii, lons, lats = _get_valid_input_index_dask(self.source_def,
+ self.target_def,
+ False, self.radius)
+ self.assertEqual(vii.shape, (self.source_def.size, ))
+ self.assertTrue(vii.dtype == np.bool)
+ # No data has been reduced, whole input is used
+ self.assertTrue(vii.compute().all())
+
+ # Reduce data
+ vii, lons, lats = _get_valid_input_index_dask(self.source_def,
+ self.target_def,
+ True, self.radius)
+ # 2700 valid input points
+ self.assertEqual(vii.compute().sum(), 2700)
+
+ def test_create_empty_bil_info(self):
+ """Test creation of empty bilinear info."""
+ from pyresample.bilinear.xarr import _create_empty_bil_info
+
+ t__, s__, vii, ia_ = _create_empty_bil_info(self.source_def,
+ self.target_def)
+ self.assertEqual(t__.shape, (self.target_def.size,))
+ self.assertEqual(s__.shape, (self.target_def.size,))
+ self.assertEqual(ia_.shape, (self.target_def.size, 4))
+ self.assertTrue(ia_.dtype == np.int32)
+ self.assertEqual(vii.shape, (self.source_def.size,))
+ self.assertTrue(vii.dtype == np.bool)
+
+ def test_lonlat2xyz(self):
+ """Test conversion from geographic to cartesian 3D coordinates."""
+ from pyresample.bilinear.xarr import lonlat2xyz
+ from pyresample import CHUNK_SIZE
+
+ lons, lats = self.target_def.get_lonlats(chunks=CHUNK_SIZE)
+ res = lonlat2xyz(lons, lats)
+ self.assertEqual(res.shape, (self.target_def.size, 3))
+ vals = [3188578.91069278, -612099.36103276, 5481596.63569999]
+ self.assertTrue(np.allclose(res.compute()[0, :], vals))
+
+
def suite():
- """The test suite.
- """
+ """Create the test suite."""
loader = unittest.TestLoader()
mysuite = unittest.TestSuite()
- mysuite.addTest(loader.loadTestsFromTestCase(Test))
+ mysuite.addTest(loader.loadTestsFromTestCase(TestNumpyBilinear))
+ mysuite.addTest(loader.loadTestsFromTestCase(TestXarrayBilinear))
return mysuite
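The `lonlat2xyz` test above checks conversion from geographic to cartesian 3D coordinates. A minimal numpy sketch of that conversion, assuming a spherical Earth of radius `R` (pyresample's actual implementation may use a different radius or dask arrays):

```python
import numpy as np

R = 6370997.0  # assumed spherical Earth radius in metres

def lonlat2xyz(lons, lats):
    """Convert longitude/latitude arrays (degrees) to cartesian x, y, z."""
    lons = np.radians(np.asarray(lons, dtype=np.float64))
    lats = np.radians(np.asarray(lats, dtype=np.float64))
    x = R * np.cos(lats) * np.cos(lons)
    y = R * np.cos(lats) * np.sin(lons)
    z = R * np.sin(lats)
    # stack into shape (..., 3) as the test expects
    return np.stack([x, y, z], axis=-1)
```

For example, (lon=0, lat=0) maps to (R, 0, 0) and (lon=90, lat=0) to (0, R, 0).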
=====================================
pyresample/test/test_bucket.py
=====================================
@@ -0,0 +1,219 @@
+import unittest
+import numpy as np
+import dask.array as da
+import dask
+import xarray as xr
+try:
+ from unittest.mock import MagicMock, patch
+except ImportError:
+ # mock is a separate package on Python < 3.3
+ from mock import MagicMock, patch
+
+from pyresample.geometry import AreaDefinition
+from pyresample import bucket
+from pyresample.test.utils import CustomScheduler
+
+
+class Test(unittest.TestCase):
+
+ adef = AreaDefinition('eurol', 'description', '',
+ {'ellps': 'WGS84',
+ 'lat_0': '90.0',
+ 'lat_ts': '60.0',
+ 'lon_0': '0.0',
+ 'proj': 'stere'}, 2560, 2048,
+ (-3780000.0, -7644000.0, 3900000.0, -1500000.0))
+
+ chunks = 2
+ lons = da.from_array(np.array([[25., 25.], [25., 25.]]),
+ chunks=chunks)
+ lats = da.from_array(np.array([[60., 60.00001], [60.2, 60.3]]),
+ chunks=chunks)
+
+ def setUp(self):
+ self.resampler = bucket.BucketResampler(self.adef, self.lons, self.lats)
+
+ @patch('pyresample.bucket.Proj')
+ @patch('pyresample.bucket.BucketResampler._get_indices')
+ def test_init(self, get_indices, prj):
+ resampler = bucket.BucketResampler(self.adef, self.lons, self.lats)
+ get_indices.assert_called_once()
+ prj.assert_called_once_with(self.adef.proj_dict)
+ self.assertTrue(hasattr(resampler, 'target_area'))
+ self.assertTrue(hasattr(resampler, 'source_lons'))
+ self.assertTrue(hasattr(resampler, 'source_lats'))
+ self.assertTrue(hasattr(resampler, 'x_idxs'))
+ self.assertTrue(hasattr(resampler, 'y_idxs'))
+ self.assertTrue(hasattr(resampler, 'idxs'))
+ self.assertTrue(hasattr(resampler, 'get_sum'))
+ self.assertTrue(hasattr(resampler, 'get_count'))
+ self.assertTrue(hasattr(resampler, 'get_average'))
+ self.assertTrue(hasattr(resampler, 'get_fractions'))
+ self.assertIsNone(resampler.counts)
+
+ def test_round_to_resolution(self):
+ """Test rounding to a given resolution."""
+ # Scalar, integer resolution
+ self.assertEqual(bucket.round_to_resolution(5.5, 2.), 6)
+ # Scalar, non-integer resolution
+ self.assertEqual(bucket.round_to_resolution(5.5, 1.7), 5.1)
+ # List
+ self.assertTrue(np.all(bucket.round_to_resolution([4.2, 5.6], 2) ==
+ np.array([4., 6.])))
+ # Numpy array
+ self.assertTrue(np.all(bucket.round_to_resolution(np.array([4.2, 5.6]), 2) ==
+ np.array([4., 6.])))
+ # Dask array
+ self.assertTrue(
+ np.all(bucket.round_to_resolution(da.array([4.2, 5.6]), 2) ==
+ np.array([4., 6.])))
+
+ def test_get_proj_coordinates(self):
+ """Test calculation of projection coordinates."""
+ prj = MagicMock()
+ prj.return_value = ([3.1, 3.1, 3.1], [4.8, 4.8, 4.8])
+ lons = [1., 1., 1.]
+ lats = [2., 2., 2.]
+ x_res, y_res = 0.5, 0.5
+ self.resampler.prj = prj
+ result = self.resampler._get_proj_coordinates(lons, lats, x_res, y_res)
+ prj.assert_called_once_with(lons, lats)
+ self.assertTrue(isinstance(result, np.ndarray))
+ self.assertEqual(result.shape, (2, 3))
+ self.assertTrue(np.all(result == np.array([[3., 3., 3.],
+ [5., 5., 5.]])))
+
+ def test_get_bucket_indices(self):
+ """Test calculation of array indices."""
+ # Ensure nothing is calculated
+ with dask.config.set(scheduler=CustomScheduler(max_computes=0)):
+ self.resampler._get_indices()
+ x_idxs, y_idxs = da.compute(self.resampler.x_idxs,
+ self.resampler.y_idxs)
+ self.assertTrue(np.all(x_idxs == np.array([1709, 1709, 1706, 1705])))
+ self.assertTrue(np.all(y_idxs == np.array([465, 465, 458, 455])))
+
+ def test_get_sum(self):
+ """Test drop-in-a-bucket sum."""
+ data = da.from_array(np.array([[2., 2.], [2., 2.]]),
+ chunks=self.chunks)
+ with dask.config.set(scheduler=CustomScheduler(max_computes=0)):
+ result = self.resampler.get_sum(data)
+
+ result = result.compute()
+ # One bin has two hits of value 2.0, so the max value is 4.0
+ self.assertTrue(np.max(result) == 4.)
+ # Two bins with the same value
+ self.assertEqual(np.sum(result == 2.), 2)
+ # One bin with double the value
+ self.assertEqual(np.sum(result == 4.), 1)
+ self.assertEqual(result.shape, self.adef.shape)
+
+ # Test that also Xarray.DataArrays work
+ data = xr.DataArray(data)
+ with dask.config.set(scheduler=CustomScheduler(max_computes=0)):
+ result = self.resampler.get_sum(data)
+ # One bin has two hits of value 2.0, so the max value is 4.0
+ self.assertTrue(np.max(result) == 4.)
+ # Two bins with the same value
+ self.assertEqual(np.sum(result == 2.), 2)
+ # One bin with double the value
+ self.assertEqual(np.sum(result == 4.), 1)
+ self.assertEqual(result.shape, self.adef.shape)
+
+ # Test masking all-NaN bins
+ data = da.from_array(np.array([[np.nan, np.nan], [np.nan, np.nan]]),
+ chunks=self.chunks)
+ with dask.config.set(scheduler=CustomScheduler(max_computes=0)):
+ result = self.resampler.get_sum(data, mask_all_nan=True)
+ self.assertTrue(np.all(np.isnan(result)))
+ # By default all-NaN bins have a value of 0.0
+ with dask.config.set(scheduler=CustomScheduler(max_computes=0)):
+ result = self.resampler.get_sum(data)
+ self.assertEqual(np.nanmax(result), 0.0)
+
+ def test_get_count(self):
+ """Test drop-in-a-bucket counting."""
+ with dask.config.set(scheduler=CustomScheduler(max_computes=0)):
+ result = self.resampler.get_count()
+ result = result.compute()
+ self.assertTrue(np.max(result) == 2)
+ self.assertEqual(np.sum(result == 1), 2)
+ self.assertEqual(np.sum(result == 2), 1)
+ self.assertTrue(self.resampler.counts is not None)
+
+ def test_get_average(self):
+ """Test averaging bucket resampling."""
+ data = da.from_array(np.array([[2., 4.], [3., np.nan]]),
+ chunks=self.chunks)
+ # Without pre-calculated indices
+ with dask.config.set(scheduler=CustomScheduler(max_computes=0)):
+ result = self.resampler.get_average(data)
+ result = result.compute()
+ self.assertEqual(np.nanmax(result), 3.)
+ self.assertTrue(np.any(np.isnan(result)))
+ # Use a fill value other than np.nan
+ with dask.config.set(scheduler=CustomScheduler(max_computes=0)):
+ result = self.resampler.get_average(data, fill_value=-1)
+ result = result.compute()
+ self.assertEqual(np.max(result), 3.)
+ self.assertEqual(np.min(result), -1)
+ self.assertFalse(np.any(np.isnan(result)))
+
+ # Test masking all-NaN bins
+ data = da.from_array(np.array([[np.nan, np.nan], [np.nan, np.nan]]),
+ chunks=self.chunks)
+ with dask.config.set(scheduler=CustomScheduler(max_computes=0)):
+ result = self.resampler.get_average(data, mask_all_nan=True)
+ self.assertTrue(np.all(np.isnan(result)))
+ # By default all-NaN bins have a value of NaN
+ with dask.config.set(scheduler=CustomScheduler(max_computes=0)):
+ result = self.resampler.get_average(data)
+ self.assertTrue(np.all(np.isnan(result)))
+
+ def test_resample_bucket_fractions(self):
+ """Test fraction calculations for categorical data."""
+ data = da.from_array(np.array([[2, 4], [2, 2]]),
+ chunks=self.chunks)
+ categories = [1, 2, 3, 4]
+ with dask.config.set(scheduler=CustomScheduler(max_computes=0)):
+ result = self.resampler.get_fractions(data, categories=categories)
+ self.assertEqual(set(categories), set(result.keys()))
+ res = result[1].compute()
+ self.assertTrue(np.nanmax(res) == 0.)
+ res = result[2].compute()
+ self.assertTrue(np.nanmax(res) == 1.)
+ self.assertTrue(np.nanmin(res) == 0.5)
+ res = result[3].compute()
+ self.assertTrue(np.nanmax(res) == 0.)
+ res = result[4].compute()
+ self.assertTrue(np.nanmax(res) == 0.5)
+ self.assertTrue(np.nanmin(res) == 0.)
+ # There should be NaN values
+ self.assertTrue(np.any(np.isnan(res)))
+
+ # Use a fill value
+ with dask.config.set(scheduler=CustomScheduler(max_computes=0)):
+ result = self.resampler.get_fractions(data, categories=categories,
+ fill_value=-1)
+
+ # There should not be any NaN values
+ for i in categories:
+ res = result[i].compute()
+ self.assertFalse(np.any(np.isnan(res)))
+ self.assertTrue(np.min(res) == -1)
+
+ # No categories given, need to compute the data once to get
+ # the categories
+ with dask.config.set(scheduler=CustomScheduler(max_computes=1)):
+ result = self.resampler.get_fractions(data, categories=None)
+
+
+def suite():
+ """Create the test suite."""
+ loader = unittest.TestLoader()
+ mysuite = unittest.TestSuite()
+ mysuite.addTest(loader.loadTestsFromTestCase(Test))
+
+ return mysuite
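The drop-in-a-bucket resampling exercised by these tests sums every input value into the target-grid bin its coordinates fall in. A minimal numpy sketch of the idea (`bucket_sum` is a hypothetical helper for illustration, not pyresample's dask-based implementation):

```python
import numpy as np

def bucket_sum(data, y_idxs, x_idxs, shape):
    """Sum data values into a 2D grid of the given shape.

    y_idxs/x_idxs give the target bin of each input value.
    """
    # flatten the 2D bin indices so np.bincount can accumulate them
    flat_idxs = np.ravel_multi_index((y_idxs, x_idxs), shape)
    sums = np.bincount(flat_idxs, weights=data,
                       minlength=shape[0] * shape[1])
    return sums.reshape(shape)
```

With two values landing in the same bin, that bin holds their sum while singly-hit bins keep the original value, mirroring the behaviour asserted in `test_get_sum`.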
=====================================
pyresample/test/test_geometry.py
=====================================
@@ -129,7 +129,19 @@ class Test(unittest.TestCase):
with patch('pyresample._cartopy.warnings.warn') as warn:
# Test that user warning has been issued (EPSG to proj4 string is potentially lossy)
area.to_cartopy_crs()
- warn.assert_called()
+ if projection.startswith('EPSG'):
+ # we'll only get this for the new EPSG:XXXX syntax
+ warn.assert_called()
+
+ # Bounds for latlong projection must be specified in radians
+ latlong_crs = geometry.AreaDefinition(area_id='latlong',
+ description='Global regular lat-lon grid',
+ proj_id='latlong',
+ projection={'proj': 'latlong', 'lon0': 0},
+ width=360,
+ height=180,
+ area_extent=(-180, -90, 180, 90)).to_cartopy_crs()
+ self.assertTrue(np.allclose(latlong_crs.bounds, [-np.pi, np.pi, -np.pi/2, np.pi/2]))
def test_create_areas_def(self):
from pyresample import utils
@@ -183,7 +195,7 @@ class Test(unittest.TestCase):
' area_extent:\n'
' lower_left_xy: [-49739, 5954123]\n'
' upper_right_xy: [1350361, 7354223]'.format(epsg=epsg_yaml)))
- self.assertDictEqual(res, expected)
+ self.assertDictEqual(res, expected)
def test_parse_area_file(self):
from pyresample import utils
@@ -1066,9 +1078,12 @@ class Test(unittest.TestCase):
self.assertEqual(slice(3, 3709, None), slice_y)
def test_proj_str(self):
+ """Test the 'proj_str' property of AreaDefinition."""
from collections import OrderedDict
from pyresample import utils
+ # pyproj 2.0+ adds a +type=crs parameter
+ extra_params = ' +type=crs' if utils.is_pyproj2() else ''
proj_dict = OrderedDict()
proj_dict['proj'] = 'stere'
proj_dict['a'] = 6378144.0
@@ -1081,20 +1096,34 @@ class Test(unittest.TestCase):
[-1370912.72, -909968.64, 1029087.28,
1490031.36])
self.assertEqual(area.proj_str,
- '+a=6378144.0 +b=6356759.0 +lat_0=50.0 +lat_ts=50.0 +lon_0=8.0 +proj=stere')
+ '+a=6378144.0 +b=6356759.0 +lat_0=50.0 +lat_ts=50.0 '
+ '+lon_0=8.0 +proj=stere' + extra_params)
+ # try a omerc projection and no_rot parameters
+ proj_dict['proj'] = 'omerc'
+ proj_dict['alpha'] = proj_dict.pop('lat_ts')
proj_dict['no_rot'] = ''
area = geometry.AreaDefinition('areaD', 'Europe (3km, HRV, VTC)', 'areaD',
proj_dict, 10, 10,
[-1370912.72, -909968.64, 1029087.28,
1490031.36])
self.assertEqual(area.proj_str,
- '+a=6378144.0 +b=6356759.0 +lat_0=50.0 +lat_ts=50.0 +lon_0=8.0 +no_rot +proj=stere')
+ '+a=6378144.0 +alpha=50.0 +b=6356759.0 +lat_0=50.0 '
+ '+lon_0=8.0 +no_rot +proj=omerc' + extra_params)
# EPSG
- projections = ['+init=EPSG:6932']
if utils.is_pyproj2():
- projections.append('EPSG:6932')
- for projection in projections:
+ # With pyproj 2.0+ we expand EPSG to full parameter list
+ full_proj = ('+datum=WGS84 +lat_0=-90 +lon_0=0 +no_defs '
+ '+proj=laea +type=crs +units=m +x_0=0 +y_0=0')
+ projections = [
+ ('+init=EPSG:6932', full_proj),
+ ('EPSG:6932', full_proj)
+ ]
+ else:
+ projections = [
+ ('+init=EPSG:6932', '+init=EPSG:6932'),
+ ]
+ for projection, expected_proj in projections:
area = geometry.AreaDefinition(
area_id='ease-sh-2.0',
description='25km EASE Grid 2.0 (Southern Hemisphere)',
@@ -1102,7 +1131,7 @@ class Test(unittest.TestCase):
projection=projection,
width=123, height=123,
area_extent=[-40000., -40000., 40000., 40000.])
- self.assertEqual(area.proj_str, projection)
+ self.assertEqual(area.proj_str, expected_proj)
def test_striding(self):
"""Test striding AreaDefinitions."""
@@ -1367,6 +1396,7 @@ class TestSwathDefinition(unittest.TestCase):
def test_compute_optimal_bb(self):
"""Test computing the bb area."""
+ from pyresample.utils import is_pyproj2
import xarray as xr
lats = np.array([[85.23900604248047, 62.256004333496094, 35.58000183105469],
[80.84000396728516, 60.74200439453125, 34.08500289916992],
@@ -1386,6 +1416,11 @@ class TestSwathDefinition(unittest.TestCase):
proj_dict = {'gamma': 0.0, 'lonc': -11.391744043133668,
'ellps': 'WGS84', 'proj': 'omerc',
'alpha': 9.185764390923012, 'lat_0': -0.2821013754097188}
+ if is_pyproj2():
+ # pyproj2 adds some extra defaults
+ proj_dict.update({'x_0': 0, 'y_0': 0, 'units': 'm',
+ 'k': 1, 'gamma': 0,
+ 'no_defs': None, 'type': 'crs'})
assert_np_dict_allclose(res.proj_dict, proj_dict)
self.assertEqual(res.shape, (6, 3))
@@ -1705,21 +1740,35 @@ class TestDynamicAreaDefinition(unittest.TestCase):
lats = [50, 66, 66, 50]
result = area.freeze((lons, lats),
resolution=3000,
- proj_info={'lon0': 16, 'lat0': 58})
-
+ proj_info={'lon_0': 16, 'lat_0': 58})
+
+ np.testing.assert_allclose(result.area_extent, (-432079.38952,
+ -872594.690447,
+ 432079.38952,
+ 904633.303964))
+ self.assertEqual(result.proj_dict['lon_0'], 16)
+ self.assertEqual(result.proj_dict['lat_0'], 58)
+ self.assertEqual(result.width, 288)
+ self.assertEqual(result.height, 592)
+
+ # make sure that setting `proj_info` once doesn't
+ # set it in the dynamic area
+ result = area.freeze((lons, lats),
+ resolution=3000,
+ proj_info={'lon_0': 0})
np.testing.assert_allclose(result.area_extent, (538546.7274949469,
5380808.879250369,
1724415.6519203288,
6998895.701001488))
- self.assertEqual(result.proj_dict['lon0'], 16)
- self.assertEqual(result.proj_dict['lat0'], 58)
- self.assertEqual(result.x_size, 395)
- self.assertEqual(result.y_size, 539)
+ self.assertEqual(result.proj_dict['lon_0'], 0)
+ # lat_0 could be provided or not depending on version of pyproj
+ self.assertEqual(result.proj_dict.get('lat_0', 0), 0)
+ self.assertEqual(result.width, 395)
+ self.assertEqual(result.height, 539)
def test_freeze_with_bb(self):
"""Test freezing the area with bounding box computation."""
- area = geometry.DynamicAreaDefinition('test_area', 'A test area',
- {'proj': 'omerc'},
+ area = geometry.DynamicAreaDefinition('test_area', 'A test area', {'proj': 'omerc'},
optimize_projection=True)
lons = [[10, 12.1, 14.2, 16.3],
[10, 12, 14, 16],
@@ -1729,13 +1778,23 @@ class TestDynamicAreaDefinition(unittest.TestCase):
[50, 51, 52, 53]]
import xarray as xr
sdef = geometry.SwathDefinition(xr.DataArray(lons), xr.DataArray(lats))
- result = area.freeze(sdef,
- resolution=1000)
+ result = area.freeze(sdef, resolution=1000)
np.testing.assert_allclose(result.area_extent,
[-336277.698941, 5513145.392745,
192456.651909, 7749649.63914])
self.assertEqual(result.x_size, 4)
self.assertEqual(result.y_size, 18)
+ # Test for properties and shape usage in freeze.
+ area = geometry.DynamicAreaDefinition('test_area', 'A test area', {'proj': 'merc'},
+ width=4, height=18)
+ self.assertEqual((18, 4), area.shape)
+ result = area.freeze(sdef)
+ np.testing.assert_allclose(result.area_extent,
+ (996309.4426, 6287132.757981, 1931393.165263, 10837238.860543))
+ area = geometry.DynamicAreaDefinition('test_area', 'A test area', {'proj': 'merc'},
+ resolution=1000)
+ self.assertEqual(1000, area.pixel_size_x)
+ self.assertEqual(1000, area.pixel_size_y)
def test_compute_domain(self):
"""Test computing size and area extent."""
=====================================
pyresample/test/test_grid.py
=====================================
@@ -201,8 +201,12 @@ class Test(unittest.TestCase):
lat, 52.566998432390619, msg='Resampling of single lat failed')
def test_proj4_string(self):
+ """Test 'proj_str' property of AreaDefinition."""
+ from pyresample.utils import is_pyproj2
proj4_string = self.area_def.proj_str
expected_string = '+a=6378144.0 +b=6356759.0 +lat_ts=50.0 +lon_0=8.0 +proj=stere +lat_0=50.0'
+ if is_pyproj2():
+ expected_string += ' +type=crs'
self.assertEqual(
frozenset(proj4_string.split()), frozenset(expected_string.split()))
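The frozenset comparison in `test_proj4_string` makes the check order-insensitive, since pyproj may emit PROJ.4 parameters in any order. The same trick as a standalone helper (hypothetical name, for illustration):

```python
def proj_strs_equal(a, b):
    """Compare two PROJ.4 strings ignoring parameter order."""
    return frozenset(a.split()) == frozenset(b.split())
```

Note this treats each whitespace-separated `+key=value` token as atomic, so it ignores ordering but not differing values.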
=====================================
pyresample/test/test_spatial_mp.py
=====================================
@@ -30,31 +30,31 @@ import unittest
from pyresample._spatial_mp import BaseProj
-class SpatialMPTest(unittest.TestCase):
- @mock.patch('pyresample._spatial_mp.pyproj.Proj.__init__', return_value=None)
- def test_base_proj_epsg(self, proj_init):
- """Test Proj creation with EPSG codes"""
- if pyproj.__version__ < '2':
- return self.skipTest(reason='pyproj 2+ only')
-
- args = [
- [None, {'init': 'EPSG:6932'}],
- [{'init': 'EPSG:6932'}, {}],
- [None, {'EPSG': '6932'}],
- [{'EPSG': '6932'}, {}]
- ]
- for projparams, kwargs in args:
- BaseProj(projparams, **kwargs)
- proj_init.assert_called_with(projparams='EPSG:6932', preserve_units=mock.ANY)
- proj_init.reset_mock()
+# class SpatialMPTest(unittest.TestCase):
+# @mock.patch('pyresample._spatial_mp.pyproj.Proj.__init__', return_value=None)
+# def test_base_proj_epsg(self, proj_init):
+# """Test Proj creation with EPSG codes"""
+# if pyproj.__version__ < '2':
+# return self.skipTest(reason='pyproj 2+ only')
+#
+# args = [
+# [None, {'init': 'EPSG:6932'}],
+# [{'init': 'EPSG:6932'}, {}],
+# [None, {'EPSG': '6932'}],
+# [{'EPSG': '6932'}, {}]
+# ]
+# for projparams, kwargs in args:
+# BaseProj(projparams, **kwargs)
+# proj_init.assert_called_with(projparams='EPSG:6932', preserve_units=mock.ANY)
+# proj_init.reset_mock()
def suite():
"""The test suite.
"""
- loader = unittest.TestLoader()
+ # loader = unittest.TestLoader()
mysuite = unittest.TestSuite()
- mysuite.addTest(loader.loadTestsFromTestCase(SpatialMPTest))
+ # mysuite.addTest(loader.loadTestsFromTestCase(SpatialMPTest))
return mysuite
=====================================
pyresample/test/test_utils.py
=====================================
@@ -41,39 +41,64 @@ class TestLegacyAreaParser(unittest.TestCase):
def test_area_parser_legacy(self):
"""Test legacy area parser."""
from pyresample import parse_area_file
+ from pyresample.utils import is_pyproj2
ease_nh, ease_sh = parse_area_file(os.path.join(os.path.dirname(__file__), 'test_files', 'areas.cfg'),
'ease_nh', 'ease_sh')
+ if is_pyproj2():
+ # pyproj 2.0+ adds some extra parameters
+ projection = ("{'R': '6371228', 'lat_0': '90', 'lon_0': '0', "
+ "'no_defs': 'None', 'proj': 'laea', 'type': 'crs', "
+ "'units': 'm', 'x_0': '0', 'y_0': '0'}")
+ else:
+ projection = ("{'a': '6371228.0', 'lat_0': '90.0', "
+ "'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}")
nh_str = """Area ID: ease_nh
Description: Arctic EASE grid
Projection ID: ease_nh
-Projection: {'a': '6371228.0', 'lat_0': '90.0', 'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}
+Projection: {}
Number of columns: 425
Number of rows: 425
-Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)"""
+Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)""".format(projection)
self.assertEqual(ease_nh.__str__(), nh_str)
- self.assertIsInstance(ease_nh.proj_dict['lat_0'], float)
-
+ self.assertIsInstance(ease_nh.proj_dict['lat_0'], (int, float))
+
+ if is_pyproj2():
+ projection = ("{'R': '6371228', 'lat_0': '-90', 'lon_0': '0', "
+ "'no_defs': 'None', 'proj': 'laea', 'type': 'crs', "
+ "'units': 'm', 'x_0': '0', 'y_0': '0'}")
+ else:
+ projection = ("{'a': '6371228.0', 'lat_0': '-90.0', "
+ "'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}")
sh_str = """Area ID: ease_sh
Description: Antarctic EASE grid
Projection ID: ease_sh
-Projection: {'a': '6371228.0', 'lat_0': '-90.0', 'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}
+Projection: {}
Number of columns: 425
Number of rows: 425
-Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)"""
+Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)""".format(projection)
self.assertEqual(ease_sh.__str__(), sh_str)
- self.assertIsInstance(ease_sh.proj_dict['lat_0'], float)
+ self.assertIsInstance(ease_sh.proj_dict['lat_0'], (int, float))
def test_load_area(self):
from pyresample import load_area
+ from pyresample.utils import is_pyproj2
ease_nh = load_area(os.path.join(os.path.dirname(__file__), 'test_files', 'areas.cfg'), 'ease_nh')
+ if is_pyproj2():
+ # pyproj 2.0+ adds some extra parameters
+ projection = ("{'R': '6371228', 'lat_0': '90', 'lon_0': '0', "
+ "'no_defs': 'None', 'proj': 'laea', 'type': 'crs', "
+ "'units': 'm', 'x_0': '0', 'y_0': '0'}")
+ else:
+ projection = ("{'a': '6371228.0', 'lat_0': '90.0', "
+ "'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}")
nh_str = """Area ID: ease_nh
Description: Arctic EASE grid
Projection ID: ease_nh
-Projection: {'a': '6371228.0', 'lat_0': '90.0', 'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}
+Projection: {}
Number of columns: 425
Number of rows: 425
-Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)"""
+Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)""".format(projection)
self.assertEqual(nh_str, ease_nh.__str__())
def test_not_found_exception(self):
@@ -96,44 +121,61 @@ class TestYAMLAreaParser(unittest.TestCase):
'test_latlong')
ease_nh, ease_sh, test_m, test_deg, test_latlong = test_areas
+ from pyresample.utils import is_pyproj2
+ if is_pyproj2():
+ # pyproj 2.0+ adds some extra parameters
+ projection = ("{'R': '6371228', 'lat_0': '-90', 'lon_0': '0', "
+ "'no_defs': 'None', 'proj': 'laea', 'type': 'crs', "
+ "'units': 'm', 'x_0': '0', 'y_0': '0'}")
+ else:
+ projection = ("{'a': '6371228.0', 'lat_0': '-90.0', "
+ "'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}")
nh_str = """Area ID: ease_nh
Description: Arctic EASE grid
-Projection: {'a': '6371228.0', 'lat_0': '-90.0', 'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}
+Projection: {}
Number of columns: 425
Number of rows: 425
-Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)"""
+Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)""".format(projection)
self.assertEqual(ease_nh.__str__(), nh_str)
sh_str = """Area ID: ease_sh
Description: Antarctic EASE grid
-Projection: {'a': '6371228.0', 'lat_0': '-90.0', 'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}
+Projection: {}
Number of columns: 425
Number of rows: 425
-Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)"""
+Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)""".format(projection)
self.assertEqual(ease_sh.__str__(), sh_str)
m_str = """Area ID: test_meters
Description: test_meters
-Projection: {'a': '6371228.0', 'lat_0': '-90.0', 'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}
+Projection: {}
Number of columns: 850
Number of rows: 425
-Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)"""
+Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)""".format(projection)
self.assertEqual(test_m.__str__(), m_str)
deg_str = """Area ID: test_degrees
Description: test_degrees
-Projection: {'a': '6371228.0', 'lat_0': '-90.0', 'lon_0': '0.0', 'proj': 'laea', 'units': 'm'}
+Projection: {}
Number of columns: 850
Number of rows: 425
-Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)"""
+Area extent: (-5326849.0625, -5326849.0625, 5326849.0625, 5326849.0625)""".format(projection)
self.assertEqual(test_deg.__str__(), deg_str)
+ if is_pyproj2():
+ # pyproj 2.0+ adds some extra parameters
+ projection = ("{'ellps': 'WGS84', 'lat_0': '27.12', "
+ "'lon_0': '-81.36', 'proj': 'longlat', "
+ "'type': 'crs'}")
+ else:
+ projection = ("{'ellps': 'WGS84', 'lat_0': '27.12', "
+ "'lon_0': '-81.36', 'proj': 'longlat'}")
latlong_str = """Area ID: test_latlong
Description: Basic latlong grid
-Projection: {'ellps': 'WGS84', 'lat_0': '27.12', 'lon_0': '-81.36', 'proj': 'longlat'}
+Projection: {}
Number of columns: 3473
Number of rows: 4058
-Area extent: (-0.0812, 0.4039, 0.0812, 0.5428)"""
+Area extent: (-0.0812, 0.4039, 0.0812, 0.5428)""".format(projection)
self.assertEqual(test_latlong.__str__(), latlong_str)
def test_dynamic_area_parser_yaml(self):
@@ -338,19 +380,29 @@ class TestMisc(unittest.TestCase):
self.assertIsInstance(proj_dict2['lon_0'], float)
# EPSG
- expected = {'+init=EPSG:4326': {'init': 'EPSG:4326'},
- 'EPSG:4326': {'EPSG': 4326}}
+ proj_str = '+init=EPSG:4326'
+ proj_dict_exp = {'init': 'EPSG:4326'}
+ proj_dict = utils._proj4.proj4_str_to_dict(proj_str)
+ self.assertEqual(proj_dict, proj_dict_exp)
+ self.assertEqual(utils._proj4.proj4_dict_to_str(proj_dict), proj_str) # round-trip
- for proj_str, proj_dict_exp in expected.items():
- proj_dict = utils._proj4.proj4_str_to_dict(proj_str)
+ proj_str = 'EPSG:4326'
+ proj_dict_exp = {'init': 'EPSG:4326'}
+ proj_dict_exp2 = {'proj': 'longlat', 'datum': 'WGS84', 'no_defs': None, 'type': 'crs'}
+ proj_dict = utils._proj4.proj4_str_to_dict(proj_str)
+ if 'init' in proj_dict:
+ # pyproj <2.0
self.assertEqual(proj_dict, proj_dict_exp)
- self.assertEqual(utils._proj4.proj4_dict_to_str(proj_dict), proj_str) # round-trip
-
- # Invalid EPSG code (pyproj-2 syntax only)
- self.assertRaises(ValueError, utils._proj4.proj4_str_to_dict, 'EPSG:XXXX')
+ else:
+ # pyproj 2.0+
+ self.assertEqual(proj_dict, proj_dict_exp2)
+ # input != output for this style of EPSG code
+ # EPSG to PROJ.4 can be lossy
+ # self.assertEqual(utils._proj4.proj4_dict_to_str(proj_dict), proj_str) # round-trip
def test_def2yaml_converter(self):
from pyresample import parse_area_file, convert_def_to_yaml
+ from pyresample.utils import is_pyproj2
import tempfile
def_file = os.path.join(os.path.dirname(__file__), 'test_files', 'areas.cfg')
filehandle, yaml_file = tempfile.mkstemp()
@@ -360,8 +412,16 @@ class TestMisc(unittest.TestCase):
areas_new = set(parse_area_file(yaml_file))
areas = parse_area_file(def_file)
for area in areas:
- area.proj_dict.pop('units', None)
+ if is_pyproj2():
+ # pyproj 2.0 adds units back in
+ # pyproj <2 doesn't
+ continue
+ # initialize _proj_dict
+ area.proj_dict # noqa
+ area._proj_dict.pop('units', None)
areas_old = set(areas)
+ areas_new = {area.area_id: area for area in areas_new}
+ areas_old = {area.area_id: area for area in areas_old}
self.assertEqual(areas_new, areas_old)
finally:
os.remove(yaml_file)
@@ -375,18 +435,24 @@ class TestMisc(unittest.TestCase):
transform = Affine(300.0379266750948, 0.0, 101985.0,
0.0, -300.041782729805, 2826915.0)
crs = CRS(init='epsg:3857')
+ if utils.is_pyproj2():
+ # pyproj 2.0+ expands CRS parameters
+ from pyproj import CRS
+ proj_dict = CRS(3857).to_dict()
+ else:
+ proj_dict = crs.to_dict()
source = tmptiff(x_size, y_size, transform, crs=crs)
area_id = 'area_id'
proj_id = 'proj_id'
- name = 'name'
+ description = 'name'
area_def = utils._rasterio.get_area_def_from_raster(
- source, area_id=area_id, name=name, proj_id=proj_id)
+ source, area_id=area_id, name=description, proj_id=proj_id)
self.assertEqual(area_def.area_id, area_id)
self.assertEqual(area_def.proj_id, proj_id)
- self.assertEqual(area_def.name, name)
+ self.assertEqual(area_def.description, description)
self.assertEqual(area_def.width, x_size)
self.assertEqual(area_def.height, y_size)
- self.assertDictEqual(crs.to_dict(), area_def.proj_dict)
+ self.assertDictEqual(proj_dict, area_def.proj_dict)
self.assertTupleEqual(area_def.area_extent, (transform.c, transform.f + transform.e * y_size,
transform.c + transform.a * x_size, transform.f))
@@ -422,6 +488,9 @@ class TestMisc(unittest.TestCase):
source = tmptiff(transform=transform)
proj_dict = {'init': 'epsg:3857'}
area_def = utils._rasterio.get_area_def_from_raster(source, proj_dict=proj_dict)
+ if utils.is_pyproj2():
+ from pyproj import CRS
+ proj_dict = CRS(3857).to_dict()
self.assertDictEqual(area_def.proj_dict, proj_dict)
=====================================
pyresample/test/utils.py
=====================================
@@ -178,3 +178,21 @@ def create_test_latitude(start, stop, shape, twist_factor=0.0, dtype=np.float32)
lat_array = np.repeat(lat_col, shape[1], axis=1)
lat_array += twist_array
return lat_array
+
+
+class CustomScheduler(object):
+ """Scheduler raising an exception if data are computed too many times."""
+
+ def __init__(self, max_computes=1):
+ """Set starting and maximum compute counts."""
+ self.max_computes = max_computes
+ self.total_computes = 0
+
+ def __call__(self, dsk, keys, **kwargs):
+ """Compute dask task and keep track of number of times we do so."""
+ import dask
+ self.total_computes += 1
+ if self.total_computes > self.max_computes:
+ raise RuntimeError("Too many dask computations were scheduled: "
+ "{}".format(self.total_computes))
+ return dask.get(dsk, keys, **kwargs)
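`CustomScheduler` is installed via `dask.config.set(scheduler=...)` so that any hidden compute beyond the allowed budget raises instead of silently running. The counting-guard pattern can be sketched without dask by wrapping an arbitrary callable instead of `dask.get` (hypothetical class, for illustration):

```python
class ComputeGuard(object):
    """Allow a wrapped function to run at most max_calls times, then raise."""

    def __init__(self, func, max_calls=1):
        self.func = func
        self.max_calls = max_calls
        self.total_calls = 0

    def __call__(self, *args, **kwargs):
        # count every invocation and fail loudly once over budget
        self.total_calls += 1
        if self.total_calls > self.max_calls:
            raise RuntimeError("Too many calls: "
                               "{}".format(self.total_calls))
        return self.func(*args, **kwargs)
```

The tests above use `max_computes=0` to assert that building a result graph triggers no computation at all.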
=====================================
pyresample/utils/_proj4.py
=====================================
@@ -15,9 +15,15 @@
#
# You should have received a copy of the GNU Lesser General Public License along
# with this program. If not, see <http://www.gnu.org/licenses/>.
+
from collections import OrderedDict
import six
+try:
+ from pyproj.crs import CRS
+except ImportError:
+ CRS = None
+
def convert_proj_floats(proj_pairs):
"""Convert PROJ.4 parameters to floats if possible."""
@@ -26,9 +32,6 @@ def convert_proj_floats(proj_pairs):
if len(x) == 1 or x[1] is True:
proj_dict[x[0]] = True
continue
- if x[0] == 'EPSG':
- proj_dict[x[0]] = x[1]
- continue
try:
proj_dict[x[0]] = float(x[1])
@@ -41,15 +44,24 @@ def convert_proj_floats(proj_pairs):
def proj4_str_to_dict(proj4_str):
"""Convert PROJ.4 compatible string definition to dict
+ EPSG codes should be provided as "EPSG:XXXX" where "XXXX"
+ is the EPSG number code. It can also be provided as
+ ``"+init=EPSG:XXXX"`` as long as the underlying PROJ library
+ supports it (deprecated in PROJ 6.0+).
+
Note: Key only parameters will be assigned a value of `True`.
"""
- if proj4_str.startswith('EPSG:'):
- try:
- code = int(proj4_str.split(':', 1)[1])
- except ValueError as err:
- six.raise_from(ValueError("Invalid EPSG code '{}': {}".format(proj4_str, err)),
- None) # Suppresses original exception context in python 3
- return OrderedDict(EPSG=code)
+ # convert EPSG codes to equivalent PROJ4 string definition
+ if proj4_str.startswith('EPSG:') and CRS is not None:
+ crs = CRS(proj4_str)
+ if hasattr(crs, 'to_dict'):
+ # pyproj 2.2+
+ return crs.to_dict()
+ proj4_str = crs.to_proj4()
+ elif proj4_str.startswith('EPSG:'):
+ # legacy +init= PROJ4 string and no pyproj 2.0+ to help convert
+ proj4_str = "+init={}".format(proj4_str)
+
pairs = (x.split('=', 1) for x in proj4_str.replace('+', '').split(" "))
return convert_proj_floats(pairs)
@@ -61,11 +73,6 @@ def proj4_dict_to_str(proj4_dict, sort=False):
items = sorted(items)
params = []
for key, val in items:
- if key == 'EPSG':
- # If EPSG code is present, ignore other parameters
- params = ['EPSG:{}'.format(val)]
- break
-
key = str(key) if key.startswith('+') else '+' + str(key)
if key in ['+no_defs', '+no_off', '+no_rot']:
param = key
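The string-to-dict conversion at the bottom of `proj4_str_to_dict` splits the definition on whitespace and on the first `=`, coercing numeric values to float and key-only flags to `True`. A self-contained sketch of just that parsing step (omitting the EPSG/pyproj handling shown in the diff above):

```python
def proj4_str_to_dict(proj4_str):
    """Parse a PROJ.4 string into a dict; key-only flags become True."""
    proj_dict = {}
    for pair in proj4_str.replace('+', '').split():
        parts = pair.split('=', 1)
        if len(parts) == 1:
            # key-only parameters such as +no_defs or +no_rot
            proj_dict[parts[0]] = True
            continue
        try:
            proj_dict[parts[0]] = float(parts[1])
        except ValueError:
            proj_dict[parts[0]] = parts[1]
    return proj_dict
```

Stripping every `+` is a simplification that matches the upstream one-liner; it would also mangle a `+` inside a value, which real PROJ strings rarely contain.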
=====================================
pyresample/version.py
=====================================
@@ -23,9 +23,9 @@ def get_keywords():
# setup.py/versioneer.py will grep for the variable names, so they must
# each be defined on a line of their own. _version.py will just call
# get_keywords().
- git_refnames = " (tag: v1.12.3)"
- git_full = "6b4367ce09ca6040c6107d2fafc31524eb45d1f8"
- git_date = "2019-05-17 10:58:40 -0500"
+ git_refnames = " (HEAD -> master, tag: v1.13.0)"
+ git_full = "790a0bae85cb243c17f0150011e30c834f244e04"
+ git_date = "2019-09-13 08:32:20 +0200"
keywords = {"refnames": git_refnames, "full": git_full, "date": git_date}
return keywords
=====================================
setup.py
=====================================
@@ -25,13 +25,20 @@ import sys
from setuptools import Extension, find_packages, setup
from setuptools.command.build_ext import build_ext as _build_ext
-requirements = ['setuptools>=3.2', 'pyproj>=1.9.5.1', 'numpy>=1.10.0', 'configobj',
+requirements = ['setuptools>=3.2', 'pyproj>=1.9.5.1', 'configobj',
'pykdtree>=1.3.1', 'pyyaml', 'six']
extras_require = {'numexpr': ['numexpr'],
'quicklook': ['matplotlib', 'cartopy', 'pillow'],
'rasterio': ['rasterio'],
'dask': ['dask>=0.16.1']}
+if sys.version_info.major > 2:
+ setup_requires = ['numpy>=1.10.0']
+ requirements.append('numpy>=1.10.0')
+else:
+ setup_requires = ['numpy>=1.10.0,<1.17.0']
+ requirements.append('numpy>=1.10.0,<1.17.0')
+
test_requires = ['rasterio', 'dask', 'xarray', 'cartopy', 'pillow', 'matplotlib', 'scipy']
if sys.version_info < (3, 3):
test_requires.append('mock')
@@ -119,7 +126,7 @@ if __name__ == "__main__":
package_dir={'pyresample': 'pyresample'},
packages=find_packages(),
python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*',
- setup_requires=['numpy'],
+ setup_requires=setup_requires,
install_requires=requirements,
extras_require=extras_require,
tests_require=test_requires,
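The setup.py change pins numpy below 1.17 on Python 2, since numpy 1.17 dropped Python 2 support. The selection logic, extracted as a small helper (hypothetical function, for illustration only):

```python
import sys

def numpy_requirement(version_info=sys.version_info):
    """Return the numpy requirement string for the running Python."""
    if version_info.major > 2:
        return 'numpy>=1.10.0'
    # numpy 1.17+ no longer installs on Python 2
    return 'numpy>=1.10.0,<1.17.0'
```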
View it on GitLab: https://salsa.debian.org/debian-gis-team/pyresample/commit/b62672cc83b9f001f5d291939a18c2fb03697005