[Git][debian-gis-team/mapproxy][master] 5 commits: Add patch to rename async.py to async_.py for Python 3.7 compatibility.

Bas Couwenberg gitlab at salsa.debian.org
Fri Jul 20 19:16:22 BST 2018


Bas Couwenberg pushed to branch master at Debian GIS Project / mapproxy


Commits:
cea15586 by Bas Couwenberg at 2018-07-20T19:32:29+02:00
Add patch to rename async.py to async_.py for Python 3.7 compatibility.

- - - - -
9de23b2a by Bas Couwenberg at 2018-07-20T19:36:01+02:00
Add lintian overrides for embedded JS & fonts.

- - - - -
459b5601 by Bas Couwenberg at 2018-07-20T19:37:32+02:00
Remove documentation outside usr/share/doc.

- - - - -
381ce845 by Bas Couwenberg at 2018-07-20T19:55:29+02:00
Fix 'every time' typo.

- - - - -
7c0a5bdc by Bas Couwenberg at 2018-07-20T19:55:29+02:00
Set distribution to unstable.

- - - - -


7 changed files:

- debian/changelog
- debian/man/mapproxy-util-autoconfig.1.xml
- + debian/mapproxy-doc.lintian-overrides
- + debian/patches/python3.7-async.patch
- debian/patches/series
- + debian/patches/spelling-errors.patch
- debian/rules


Changes:

=====================================
debian/changelog
=====================================
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,11 +1,15 @@
-mapproxy (1.11.0-2) UNRELEASED; urgency=medium
+mapproxy (1.11.0-2) unstable; urgency=medium
 
   * Update Vcs-* URLs for Salsa.
   * Bump Standards-Version to 4.1.5, no changes.
   * Drop ancient X-Python-Version field.
   * Strip trailing whitespace from control & rules files.
+  * Add patch to rename async.py to async_.py for Python 3.7 compatibility.
+  * Add lintian overrides for embedded JS & fonts.
+  * Remove documentation outside usr/share/doc.
+  * Fix 'every time' typo.
 
- -- Bas Couwenberg <sebastic at debian.org>  Sat, 31 Mar 2018 12:25:13 +0200
+ -- Bas Couwenberg <sebastic at debian.org>  Fri, 20 Jul 2018 19:11:25 +0200
 
 mapproxy (1.11.0-1) unstable; urgency=medium
 


=====================================
debian/man/mapproxy-util-autoconfig.1.xml
=====================================
--- a/debian/man/mapproxy-util-autoconfig.1.xml
+++ b/debian/man/mapproxy-util-autoconfig.1.xml
@@ -164,7 +164,7 @@
       define another coverage, disable featureinfo, etc.
       You can do this by editing the output file of course, or you can modify
       the output by defining all changes to an overwrite file.
-      Overwrite files are applied everytime you call
+      Overwrite files are applied every time you call
       <command>mapproxy-util autoconfig</command>.
     </para>
     <para>
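
As an aside for readers unfamiliar with the overwrite mechanism mentioned in the hunk above: the MapProxy documentation describes overwrites as YAML files that are merged into the configuration generated by mapproxy-util autoconfig on every run. The Python sketch below is only a conceptual illustration of such a merge, assuming a plain recursive dictionary merge; the configuration keys are made up for the example, and MapProxy's real merge rules (for lists in particular) may differ.

    # Conceptual sketch only -- not MapProxy's actual implementation.
    # An overwrite is applied on top of the generated configuration each
    # time mapproxy-util autoconfig runs, so manual tweaks are not lost.
    def deep_merge(generated, overwrite):
        """Return 'generated' with 'overwrite' recursively merged on top."""
        merged = dict(generated)
        for key, value in overwrite.items():
            if isinstance(value, dict) and isinstance(merged.get(key), dict):
                merged[key] = deep_merge(merged[key], value)
            else:
                merged[key] = value
        return merged

    # Hypothetical stand-ins for the two YAML documents:
    generated = {'services': {'wms': {'md': {'title': 'Autogenerated'}}}}
    overwrite = {'services': {'wms': {'md': {'title': 'My WMS'}}}}
    print(deep_merge(generated, overwrite))
    # -> {'services': {'wms': {'md': {'title': 'My WMS'}}}}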


=====================================
debian/mapproxy-doc.lintian-overrides
=====================================
--- /dev/null
+++ b/debian/mapproxy-doc.lintian-overrides
@@ -0,0 +1,8 @@
+# libjs-twitter-bootstrap is not compatible
+embedded-javascript-library usr/share/doc/mapproxy/html/_static/bootstrap-*/js/bootstrap.js please use libjs-twitter-bootstrap
+font-in-non-font-package usr/share/doc/mapproxy/html/_static/boot*/fonts/*
+font-outside-font-dir usr/share/doc/mapproxy/html/_static/boots*/fonts/*
+
+# libjs-jquery is not compatible
+embedded-javascript-library usr/share/doc/mapproxy/html/_static/js/jquery* please use libjs-jquery
+


=====================================
debian/patches/python3.7-async.patch
=====================================
--- /dev/null
+++ b/debian/patches/python3.7-async.patch
@@ -0,0 +1,826 @@
+Description: Rename async.py to async_.py to support Python 3.7.
+ async became a reserved keyword in Python 3.7.
+Author: Bas Couwenberg <sebastic at debian.org>
+Forwarded: https://github.com/mapproxy/mapproxy/pull/372
+
+--- a/mapproxy/util/async.py
++++ /dev/null
+@@ -1,343 +0,0 @@
+-# This file is part of the MapProxy project.
+-# Copyright (C) 2011 Omniscale <http://omniscale.de>
+-#
+-# Licensed under the Apache License, Version 2.0 (the "License");
+-# you may not use this file except in compliance with the License.
+-# You may obtain a copy of the License at
+-#
+-#    http://www.apache.org/licenses/LICENSE-2.0
+-#
+-# Unless required by applicable law or agreed to in writing, software
+-# distributed under the License is distributed on an "AS IS" BASIS,
+-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-# See the License for the specific language governing permissions and
+-# limitations under the License.
+-
+-
+-MAX_MAP_ASYNC_THREADS = 20
+-
+-try:
+-    import Queue
+-except ImportError:
+-    import queue as Queue
+-
+-import sys
+-import threading
+-
+-try:
+-    import eventlet
+-    import eventlet.greenpool
+-    import eventlet.tpool
+-    import eventlet.patcher
+-    _has_eventlet = True
+-
+-    import eventlet.debug
+-    eventlet.debug.hub_exceptions(False)
+-
+-except ImportError:
+-    _has_eventlet = False
+-
+-from mapproxy.config import base_config
+-from mapproxy.config import local_base_config
+-from mapproxy.compat import PY2
+-
+-import logging
+-log_system = logging.getLogger('mapproxy.system')
+-
+-class AsyncResult(object):
+-    def __init__(self, result=None, exception=None):
+-        self.result = result
+-        self.exception = exception
+-
+-    def __repr__(self):
+-        return "<AsyncResult result='%s' exception='%s'>" % (
+-            self.result, self.exception)
+-
+-
+-def _result_iter(results, use_result_objects=False):
+-    for result in results:
+-        if use_result_objects:
+-            exception = None
+-            if (isinstance(result, tuple) and len(result) == 3 and
+-                isinstance(result[1], Exception)):
+-                exception = result
+-                result = None
+-            yield AsyncResult(result, exception)
+-        else:
+-            yield result
+-
+-class EventletPool(object):
+-    def __init__(self, size=100):
+-        self.size = size
+-        self.base_config = base_config()
+-
+-    def shutdown(self, force=False):
+-        # there is not way to stop a GreenPool
+-        pass
+-
+-    def map(self, func, *args, **kw):
+-        return list(self.imap(func, *args, **kw))
+-
+-    def imap(self, func, *args, **kw):
+-        use_result_objects = kw.get('use_result_objects', False)
+-        def call(*args):
+-            with local_base_config(self.base_config):
+-                try:
+-                    return func(*args)
+-                except Exception:
+-                    if use_result_objects:
+-                        return sys.exc_info()
+-                    else:
+-                        raise
+-        if len(args[0]) == 1:
+-            eventlet.sleep()
+-            return _result_iter([call(*list(zip(*args))[0])], use_result_objects)
+-        pool = eventlet.greenpool.GreenPool(self.size)
+-        return _result_iter(pool.imap(call, *args), use_result_objects)
+-
+-    def starmap(self, func, args, **kw):
+-        use_result_objects = kw.get('use_result_objects', False)
+-        def call(*args):
+-            with local_base_config(self.base_config):
+-                try:
+-                    return func(*args)
+-                except Exception:
+-                    if use_result_objects:
+-                        return sys.exc_info()
+-                    else:
+-                        raise
+-        if len(args) == 1:
+-            eventlet.sleep()
+-            return _result_iter([call(*args[0])], use_result_objects)
+-        pool = eventlet.greenpool.GreenPool(self.size)
+-        return _result_iter(pool.starmap(call, args), use_result_objects)
+-
+-    def starcall(self, args, **kw):
+-        use_result_objects = kw.get('use_result_objects', False)
+-        def call(func, *args):
+-            with local_base_config(self.base_config):
+-                try:
+-                    return func(*args)
+-                except Exception:
+-                    if use_result_objects:
+-                        return sys.exc_info()
+-                    else:
+-                        raise
+-        if len(args) == 1:
+-            eventlet.sleep()
+-            return _result_iter([call(args[0][0], *args[0][1:])], use_result_objects)
+-        pool = eventlet.greenpool.GreenPool(self.size)
+-        return _result_iter(pool.starmap(call, args), use_result_objects)
+-
+-
+-class ThreadWorker(threading.Thread):
+-    def __init__(self, task_queue, result_queue):
+-        threading.Thread.__init__(self)
+-        self.task_queue = task_queue
+-        self.result_queue = result_queue
+-        self.base_config = base_config()
+-    def run(self):
+-        with local_base_config(self.base_config):
+-            while True:
+-                task = self.task_queue.get()
+-                if task is None:
+-                    self.task_queue.task_done()
+-                    break
+-                exec_id, func, args = task
+-                try:
+-                    result = func(*args)
+-                except Exception:
+-                    result = sys.exc_info()
+-                self.result_queue.put((exec_id, result))
+-                self.task_queue.task_done()
+-
+-
+-def _consume_queue(queue):
+-    """
+-    Get all items from queue.
+-    """
+-    while not queue.empty():
+-        try:
+-            queue.get(block=False)
+-            queue.task_done()
+-        except Queue.Empty:
+-            pass
+-
+-
+-class ThreadPool(object):
+-    def __init__(self, size=4):
+-        self.pool_size = size
+-        self.task_queue = Queue.Queue()
+-        self.result_queue = Queue.Queue()
+-        self.pool = None
+-    def map_each(self, func_args, raise_exceptions):
+-        """
+-        args should be a list of function arg tuples.
+-        map_each calls each function with the given arg.
+-        """
+-        if self.pool_size < 2:
+-            for func, arg in func_args:
+-                try:
+-                    yield func(*arg)
+-                except Exception:
+-                    yield sys.exc_info()
+-            raise StopIteration()
+-
+-        self.pool = self._init_pool()
+-
+-        i = 0
+-        for i, (func, arg) in enumerate(func_args):
+-            self.task_queue.put((i, func, arg))
+-
+-        results = {}
+-
+-        next_result = 0
+-        for value in self._get_results(next_result, results, raise_exceptions):
+-            yield value
+-            next_result += 1
+-
+-        self.task_queue.join()
+-        for value in self._get_results(next_result, results, raise_exceptions):
+-            yield value
+-            next_result += 1
+-
+-        self.shutdown()
+-
+-    def _single_call(self, func, args, use_result_objects):
+-        try:
+-            result = func(*args)
+-        except Exception:
+-            if not use_result_objects:
+-                raise
+-            result = sys.exc_info()
+-        return _result_iter([result], use_result_objects)
+-
+-    def map(self, func, *args, **kw):
+-        return list(self.imap(func, *args, **kw))
+-
+-    def imap(self, func, *args, **kw):
+-        use_result_objects = kw.get('use_result_objects', False)
+-        if len(args[0]) == 1:
+-            return self._single_call(func, next(iter(zip(*args))), use_result_objects)
+-        return _result_iter(self.map_each([(func, arg) for arg in zip(*args)], raise_exceptions=not use_result_objects),
+-                            use_result_objects)
+-
+-    def starmap(self, func, args, **kw):
+-        use_result_objects = kw.get('use_result_objects', False)
+-        if len(args[0]) == 1:
+-            return self._single_call(func, args[0], use_result_objects)
+-
+-        return _result_iter(self.map_each([(func, arg) for arg in args], raise_exceptions=not use_result_objects),
+-                            use_result_objects)
+-
+-    def starcall(self, args, **kw):
+-        def call(func, *args):
+-            return func(*args)
+-        return self.starmap(call, args, **kw)
+-
+-    def _get_results(self, next_result, results, raise_exceptions):
+-        for i, value in self._fetch_results(raise_exceptions):
+-            if i == next_result:
+-                yield value
+-                next_result += 1
+-                while next_result in results:
+-                    yield results.pop(next_result)
+-                    next_result += 1
+-            else:
+-                results[i] = value
+-
+-    def _fetch_results(self, raise_exceptions):
+-        while not self.task_queue.empty() or not self.result_queue.empty():
+-            task_result = self.result_queue.get()
+-            if (raise_exceptions and isinstance(task_result[1], tuple) and
+-                len(task_result[1]) == 3 and
+-                isinstance(task_result[1][1], Exception)):
+-                self.shutdown(force=True)
+-                exc_class, exc, tb = task_result[1]
+-                if PY2:
+-                    exec('raise exc_class, exc, tb')
+-                else:
+-                    raise exc.with_traceback(tb)
+-            yield task_result
+-
+-    def shutdown(self, force=False):
+-        """
+-        Send shutdown sentinel to all executor threads. If `force` is True,
+-        clean task_queue and result_queue.
+-        """
+-        if force:
+-            _consume_queue(self.task_queue)
+-            _consume_queue(self.result_queue)
+-        for _ in range(self.pool_size):
+-            self.task_queue.put(None)
+-
+-    def _init_pool(self):
+-        if self.pool_size < 2:
+-            return []
+-        pool = []
+-        for _ in range(self.pool_size):
+-            t = ThreadWorker(self.task_queue, self.result_queue)
+-            t.daemon = True
+-            t.start()
+-            pool.append(t)
+-        return pool
+-
+-
+-def imap_async_eventlet(func, *args):
+-    pool = EventletPool()
+-    return pool.imap(func, *args)
+-
+-def imap_async_threaded(func, *args):
+-    pool = ThreadPool(min(len(args[0]), MAX_MAP_ASYNC_THREADS))
+-    return pool.imap(func, *args)
+-
+-def starmap_async_eventlet(func, args):
+-    pool = EventletPool()
+-    return pool.starmap(func, args)
+-
+-def starmap_async_threaded(func, args):
+-    pool = ThreadPool(min(len(args[0]), MAX_MAP_ASYNC_THREADS))
+-    return pool.starmap(func, args)
+-
+-def starcall_async_eventlet(args):
+-    pool = EventletPool()
+-    return pool.starcall(args)
+-
+-def starcall_async_threaded(args):
+-    pool = ThreadPool(min(len(args[0]), MAX_MAP_ASYNC_THREADS))
+-    return pool.starcall(args)
+-
+-
+-def run_non_blocking_eventlet(func, args, kw={}):
+-    return eventlet.tpool.execute(func, *args, **kw)
+-
+-def run_non_blocking_threaded(func, args, kw={}):
+-    return func(*args, **kw)
+-
+-
+-def import_module(module):
+-    """
+-    Import ``module``. Import patched version if eventlet is used.
+-    """
+-    if uses_eventlet:
+-        return eventlet.import_patched(module)
+-    else:
+-        return __import__(module)
+-
+-uses_eventlet = False
+-
+-# socket should be monkey patched when MapProxy runs inside eventlet
+-if _has_eventlet and eventlet.patcher.is_monkey_patched('socket'):
+-    uses_eventlet = True
+-    log_system.info('using eventlet for asynchronous operations')
+-    imap = imap_async_eventlet
+-    starmap = starmap_async_eventlet
+-    starcall = starcall_async_eventlet
+-    Pool = EventletPool
+-    run_non_blocking = run_non_blocking_eventlet
+-else:
+-    imap = imap_async_threaded
+-    starmap = starmap_async_threaded
+-    starcall = starcall_async_threaded
+-    Pool = ThreadPool
+-    run_non_blocking = run_non_blocking_threaded
+--- /dev/null
++++ b/mapproxy/util/async_.py
+@@ -0,0 +1,343 @@
++# This file is part of the MapProxy project.
++# Copyright (C) 2011 Omniscale <http://omniscale.de>
++#
++# Licensed under the Apache License, Version 2.0 (the "License");
++# you may not use this file except in compliance with the License.
++# You may obtain a copy of the License at
++#
++#    http://www.apache.org/licenses/LICENSE-2.0
++#
++# Unless required by applicable law or agreed to in writing, software
++# distributed under the License is distributed on an "AS IS" BASIS,
++# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
++# See the License for the specific language governing permissions and
++# limitations under the License.
++
++
++MAX_MAP_ASYNC_THREADS = 20
++
++try:
++    import Queue
++except ImportError:
++    import queue as Queue
++
++import sys
++import threading
++
++try:
++    import eventlet
++    import eventlet.greenpool
++    import eventlet.tpool
++    import eventlet.patcher
++    _has_eventlet = True
++
++    import eventlet.debug
++    eventlet.debug.hub_exceptions(False)
++
++except ImportError:
++    _has_eventlet = False
++
++from mapproxy.config import base_config
++from mapproxy.config import local_base_config
++from mapproxy.compat import PY2
++
++import logging
++log_system = logging.getLogger('mapproxy.system')
++
++class AsyncResult(object):
++    def __init__(self, result=None, exception=None):
++        self.result = result
++        self.exception = exception
++
++    def __repr__(self):
++        return "<AsyncResult result='%s' exception='%s'>" % (
++            self.result, self.exception)
++
++
++def _result_iter(results, use_result_objects=False):
++    for result in results:
++        if use_result_objects:
++            exception = None
++            if (isinstance(result, tuple) and len(result) == 3 and
++                isinstance(result[1], Exception)):
++                exception = result
++                result = None
++            yield AsyncResult(result, exception)
++        else:
++            yield result
++
++class EventletPool(object):
++    def __init__(self, size=100):
++        self.size = size
++        self.base_config = base_config()
++
++    def shutdown(self, force=False):
++        # there is not way to stop a GreenPool
++        pass
++
++    def map(self, func, *args, **kw):
++        return list(self.imap(func, *args, **kw))
++
++    def imap(self, func, *args, **kw):
++        use_result_objects = kw.get('use_result_objects', False)
++        def call(*args):
++            with local_base_config(self.base_config):
++                try:
++                    return func(*args)
++                except Exception:
++                    if use_result_objects:
++                        return sys.exc_info()
++                    else:
++                        raise
++        if len(args[0]) == 1:
++            eventlet.sleep()
++            return _result_iter([call(*list(zip(*args))[0])], use_result_objects)
++        pool = eventlet.greenpool.GreenPool(self.size)
++        return _result_iter(pool.imap(call, *args), use_result_objects)
++
++    def starmap(self, func, args, **kw):
++        use_result_objects = kw.get('use_result_objects', False)
++        def call(*args):
++            with local_base_config(self.base_config):
++                try:
++                    return func(*args)
++                except Exception:
++                    if use_result_objects:
++                        return sys.exc_info()
++                    else:
++                        raise
++        if len(args) == 1:
++            eventlet.sleep()
++            return _result_iter([call(*args[0])], use_result_objects)
++        pool = eventlet.greenpool.GreenPool(self.size)
++        return _result_iter(pool.starmap(call, args), use_result_objects)
++
++    def starcall(self, args, **kw):
++        use_result_objects = kw.get('use_result_objects', False)
++        def call(func, *args):
++            with local_base_config(self.base_config):
++                try:
++                    return func(*args)
++                except Exception:
++                    if use_result_objects:
++                        return sys.exc_info()
++                    else:
++                        raise
++        if len(args) == 1:
++            eventlet.sleep()
++            return _result_iter([call(args[0][0], *args[0][1:])], use_result_objects)
++        pool = eventlet.greenpool.GreenPool(self.size)
++        return _result_iter(pool.starmap(call, args), use_result_objects)
++
++
++class ThreadWorker(threading.Thread):
++    def __init__(self, task_queue, result_queue):
++        threading.Thread.__init__(self)
++        self.task_queue = task_queue
++        self.result_queue = result_queue
++        self.base_config = base_config()
++    def run(self):
++        with local_base_config(self.base_config):
++            while True:
++                task = self.task_queue.get()
++                if task is None:
++                    self.task_queue.task_done()
++                    break
++                exec_id, func, args = task
++                try:
++                    result = func(*args)
++                except Exception:
++                    result = sys.exc_info()
++                self.result_queue.put((exec_id, result))
++                self.task_queue.task_done()
++
++
++def _consume_queue(queue):
++    """
++    Get all items from queue.
++    """
++    while not queue.empty():
++        try:
++            queue.get(block=False)
++            queue.task_done()
++        except Queue.Empty:
++            pass
++
++
++class ThreadPool(object):
++    def __init__(self, size=4):
++        self.pool_size = size
++        self.task_queue = Queue.Queue()
++        self.result_queue = Queue.Queue()
++        self.pool = None
++    def map_each(self, func_args, raise_exceptions):
++        """
++        args should be a list of function arg tuples.
++        map_each calls each function with the given arg.
++        """
++        if self.pool_size < 2:
++            for func, arg in func_args:
++                try:
++                    yield func(*arg)
++                except Exception:
++                    yield sys.exc_info()
++            raise StopIteration()
++
++        self.pool = self._init_pool()
++
++        i = 0
++        for i, (func, arg) in enumerate(func_args):
++            self.task_queue.put((i, func, arg))
++
++        results = {}
++
++        next_result = 0
++        for value in self._get_results(next_result, results, raise_exceptions):
++            yield value
++            next_result += 1
++
++        self.task_queue.join()
++        for value in self._get_results(next_result, results, raise_exceptions):
++            yield value
++            next_result += 1
++
++        self.shutdown()
++
++    def _single_call(self, func, args, use_result_objects):
++        try:
++            result = func(*args)
++        except Exception:
++            if not use_result_objects:
++                raise
++            result = sys.exc_info()
++        return _result_iter([result], use_result_objects)
++
++    def map(self, func, *args, **kw):
++        return list(self.imap(func, *args, **kw))
++
++    def imap(self, func, *args, **kw):
++        use_result_objects = kw.get('use_result_objects', False)
++        if len(args[0]) == 1:
++            return self._single_call(func, next(iter(zip(*args))), use_result_objects)
++        return _result_iter(self.map_each([(func, arg) for arg in zip(*args)], raise_exceptions=not use_result_objects),
++                            use_result_objects)
++
++    def starmap(self, func, args, **kw):
++        use_result_objects = kw.get('use_result_objects', False)
++        if len(args[0]) == 1:
++            return self._single_call(func, args[0], use_result_objects)
++
++        return _result_iter(self.map_each([(func, arg) for arg in args], raise_exceptions=not use_result_objects),
++                            use_result_objects)
++
++    def starcall(self, args, **kw):
++        def call(func, *args):
++            return func(*args)
++        return self.starmap(call, args, **kw)
++
++    def _get_results(self, next_result, results, raise_exceptions):
++        for i, value in self._fetch_results(raise_exceptions):
++            if i == next_result:
++                yield value
++                next_result += 1
++                while next_result in results:
++                    yield results.pop(next_result)
++                    next_result += 1
++            else:
++                results[i] = value
++
++    def _fetch_results(self, raise_exceptions):
++        while not self.task_queue.empty() or not self.result_queue.empty():
++            task_result = self.result_queue.get()
++            if (raise_exceptions and isinstance(task_result[1], tuple) and
++                len(task_result[1]) == 3 and
++                isinstance(task_result[1][1], Exception)):
++                self.shutdown(force=True)
++                exc_class, exc, tb = task_result[1]
++                if PY2:
++                    exec('raise exc_class, exc, tb')
++                else:
++                    raise exc.with_traceback(tb)
++            yield task_result
++
++    def shutdown(self, force=False):
++        """
++        Send shutdown sentinel to all executor threads. If `force` is True,
++        clean task_queue and result_queue.
++        """
++        if force:
++            _consume_queue(self.task_queue)
++            _consume_queue(self.result_queue)
++        for _ in range(self.pool_size):
++            self.task_queue.put(None)
++
++    def _init_pool(self):
++        if self.pool_size < 2:
++            return []
++        pool = []
++        for _ in range(self.pool_size):
++            t = ThreadWorker(self.task_queue, self.result_queue)
++            t.daemon = True
++            t.start()
++            pool.append(t)
++        return pool
++
++
++def imap_async_eventlet(func, *args):
++    pool = EventletPool()
++    return pool.imap(func, *args)
++
++def imap_async_threaded(func, *args):
++    pool = ThreadPool(min(len(args[0]), MAX_MAP_ASYNC_THREADS))
++    return pool.imap(func, *args)
++
++def starmap_async_eventlet(func, args):
++    pool = EventletPool()
++    return pool.starmap(func, args)
++
++def starmap_async_threaded(func, args):
++    pool = ThreadPool(min(len(args[0]), MAX_MAP_ASYNC_THREADS))
++    return pool.starmap(func, args)
++
++def starcall_async_eventlet(args):
++    pool = EventletPool()
++    return pool.starcall(args)
++
++def starcall_async_threaded(args):
++    pool = ThreadPool(min(len(args[0]), MAX_MAP_ASYNC_THREADS))
++    return pool.starcall(args)
++
++
++def run_non_blocking_eventlet(func, args, kw={}):
++    return eventlet.tpool.execute(func, *args, **kw)
++
++def run_non_blocking_threaded(func, args, kw={}):
++    return func(*args, **kw)
++
++
++def import_module(module):
++    """
++    Import ``module``. Import patched version if eventlet is used.
++    """
++    if uses_eventlet:
++        return eventlet.import_patched(module)
++    else:
++        return __import__(module)
++
++uses_eventlet = False
++
++# socket should be monkey patched when MapProxy runs inside eventlet
++if _has_eventlet and eventlet.patcher.is_monkey_patched('socket'):
++    uses_eventlet = True
++    log_system.info('using eventlet for asynchronous operations')
++    imap = imap_async_eventlet
++    starmap = starmap_async_eventlet
++    starcall = starcall_async_eventlet
++    Pool = EventletPool
++    run_non_blocking = run_non_blocking_eventlet
++else:
++    imap = imap_async_threaded
++    starmap = starmap_async_threaded
++    starcall = starcall_async_threaded
++    Pool = ThreadPool
++    run_non_blocking = run_non_blocking_threaded
+--- a/mapproxy/service/wms.py
++++ b/mapproxy/service/wms.py
+@@ -33,7 +33,7 @@ from mapproxy.image.opts import ImageOpt
+ from mapproxy.image.message import attribution_image, message_image
+ from mapproxy.layer import BlankImage, MapQuery, InfoQuery, LegendQuery, MapError, LimitedLayer
+ from mapproxy.layer import MapBBOXError, merge_layer_extents, merge_layer_res_ranges
+-from mapproxy.util import async
++from mapproxy.util import async_
+ from mapproxy.util.py import cached_property, reraise
+ from mapproxy.util.coverage import load_limited_to
+ from mapproxy.util.ext.odict import odict
+@@ -568,7 +568,7 @@ class LayerRenderer(object):
+         render_layers = combined_layers(self.layers, self.query)
+         if not render_layers: return
+ 
+-        async_pool = async.Pool(size=min(len(render_layers), self.concurrent_rendering))
++        async_pool = async_.Pool(size=min(len(render_layers), self.concurrent_rendering))
+ 
+         if self.raise_source_errors:
+             return self._render_raise_exceptions(async_pool, render_layers, layer_merger)
+--- a/mapproxy/client/cgi.py
++++ b/mapproxy/client/cgi.py
+@@ -26,7 +26,7 @@ from mapproxy.source import SourceError
+ from mapproxy.image import ImageSource
+ from mapproxy.client.http import HTTPClientError
+ from mapproxy.client.log import log_request
+-from mapproxy.util.async import import_module
++from mapproxy.util.async_ import import_module
+ from mapproxy.compat.modules import urlparse
+ from mapproxy.compat import BytesIO
+ 
+--- a/mapproxy/cache/s3.py
++++ b/mapproxy/cache/s3.py
+@@ -22,7 +22,7 @@ import threading
+ from mapproxy.image import ImageSource
+ from mapproxy.cache import path
+ from mapproxy.cache.base import tile_buffer, TileCacheBase
+-from mapproxy.util import async
++from mapproxy.util import async_
+ from mapproxy.util.py import reraise_exception
+ 
+ try:
+@@ -111,7 +111,7 @@ class S3Cache(TileCacheBase):
+         return True
+ 
+     def load_tiles(self, tiles, with_metadata=True):
+-        p = async.Pool(min(4, len(tiles)))
++        p = async_.Pool(min(4, len(tiles)))
+         return all(p.map(self.load_tile, tiles))
+ 
+     def load_tile(self, tile, with_metadata=True):
+@@ -139,7 +139,7 @@ class S3Cache(TileCacheBase):
+         self.conn().delete_object(Bucket=self.bucket_name, Key=key)
+ 
+     def store_tiles(self, tiles):
+-        p = async.Pool(min(self._concurrent_writer, len(tiles)))
++        p = async_.Pool(min(self._concurrent_writer, len(tiles)))
+         p.map(self.store_tile, tiles)
+ 
+     def store_tile(self, tile):
+--- a/mapproxy/test/unit/test_async.py
++++ b/mapproxy/test/unit/test_async.py
+@@ -17,7 +17,7 @@ from __future__ import print_function
+ 
+ import time
+ import threading
+-from mapproxy.util.async import imap_async_threaded, ThreadPool
++from mapproxy.util.async_ import imap_async_threaded, ThreadPool
+ 
+ from nose.tools import eq_
+ from nose.plugins.skip import SkipTest
+@@ -49,7 +49,7 @@ class TestThreaded(object):
+ 
+ try:
+     import eventlet
+-    from mapproxy.util.async import imap_async_eventlet, EventletPool
++    from mapproxy.util.async_ import imap_async_eventlet, EventletPool
+     _has_eventlet = True
+ except ImportError:
+     _has_eventlet = False
+--- a/mapproxy/cache/tile.py
++++ b/mapproxy/cache/tile.py
+@@ -42,7 +42,7 @@ from mapproxy.grid import MetaGrid
+ from mapproxy.image.merge import merge_images
+ from mapproxy.image.tile import TileSplitter
+ from mapproxy.layer import MapQuery, BlankImage
+-from mapproxy.util import async
++from mapproxy.util import async_
+ from mapproxy.util.py import reraise
+ 
+ class TileManager(object):
+@@ -250,7 +250,7 @@ class TileCreator(object):
+ 
+     def _create_threaded(self, create_func, tiles):
+         result = []
+-        async_pool = async.Pool(self.tile_mgr.concurrent_tile_creators)
++        async_pool = async_.Pool(self.tile_mgr.concurrent_tile_creators)
+         for new_tiles in async_pool.imap(create_func, tiles):
+             result.extend(new_tiles)
+         return result
+@@ -303,7 +303,7 @@ class TileCreator(object):
+                 return (img, source.coverage)
+ 
+         layers = []
+-        for layer in async.imap(get_map_from_source, self.sources):
++        for layer in async_.imap(get_map_from_source, self.sources):
+             if layer[0] is not None:
+                 layers.append(layer)
+ 
+@@ -358,7 +358,7 @@ class TileCreator(object):
+         main_tile = Tile(meta_tile.main_tile_coord)
+         with self.tile_mgr.lock(main_tile):
+             if not all(self.is_cached(t) for t in meta_tile.tiles if t is not None):
+-                async_pool = async.Pool(self.tile_mgr.concurrent_tile_creators)
++                async_pool = async_.Pool(self.tile_mgr.concurrent_tile_creators)
+                 def query_tile(coord):
+                     try:
+                         query = MapQuery(self.grid.tile_bbox(coord), tile_size, self.grid.srs, self.tile_mgr.request_format,
+--- a/mapproxy/source/mapnik.py
++++ b/mapproxy/source/mapnik.py
+@@ -26,7 +26,7 @@ from mapproxy.layer import MapExtent, De
+ from mapproxy.source import  SourceError
+ from mapproxy.client.log import log_request
+ from mapproxy.util.py import reraise_exception
+-from mapproxy.util.async import run_non_blocking
++from mapproxy.util.async_ import run_non_blocking
+ from mapproxy.compat import BytesIO
+ 
+ try:


=====================================
debian/patches/series
=====================================
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -1,3 +1,5 @@
 offline-tests.patch
 disable-tag_date.patch
 skip-tests-for-missing-files.patch
+python3.7-async.patch
+spelling-errors.patch


=====================================
debian/patches/spelling-errors.patch
=====================================
--- /dev/null
+++ b/debian/patches/spelling-errors.patch
@@ -0,0 +1,30 @@
+Description: Fix spelling errors
+ * everytime -> every time
+Author: Bas Couwenberg <sebastic at debian.org>
+Forwarded: https://github.com/mapproxy/mapproxy/pull/373
+
+--- a/doc/mapproxy_util_autoconfig.rst
++++ b/doc/mapproxy_util_autoconfig.rst
+@@ -78,7 +78,7 @@ Write MapProxy configuration with caches
+ Overwrites
+ ==========
+ 
+-It's likely that you need to tweak the created configuration – e.g. to define another coverage, disable featureinfo, etc. You can do this by editing the output file of course, or you can modify the output by defining all changes to an overwrite file. Overwrite files are applied everytime you call ``mapproxy-util autoconfig``.
++It's likely that you need to tweak the created configuration – e.g. to define another coverage, disable featureinfo, etc. You can do this by editing the output file of course, or you can modify the output by defining all changes to an overwrite file. Overwrite files are applied every time you call ``mapproxy-util autoconfig``.
+ 
+ Overwrites are YAML files that will be merged with the created configuration file.
+ 
+--- a/doc/seed.rst
++++ b/doc/seed.rst
+@@ -411,9 +411,9 @@ Example: Background seeding
+ The ``--duration`` option allows you run MapProxy seeding for a limited time. In combination with the ``--continue`` option, you can resume the seeding process at a later time.
+ You can use this to call ``mapproxy-seed`` with ``cron`` to seed in the off-hours.
+ 
+-However, this will restart the seeding process from the begining everytime the is seeding completed.
++However, this will restart the seeding process from the begining every time the is seeding completed.
+ You can prevent this with the ``--reeseed-interval`` and ``--reseed-file`` option. 
+ You can prevent this with the ``--reeseed-interval`` and ``--reseed-file`` option. 
+-The follwing example starts seeding for six hours. It will seed for another six hours, everytime you call this command again. Once all seed and cleanup tasks were proccessed the command will exit immediately everytime you call it within 14 days after the first call. After 14 days, the modification time of the ``reseed.time`` file will be updated and the re-seeding process starts again.
++The follwing example starts seeding for six hours. It will seed for another six hours, every time you call this command again. Once all seed and cleanup tasks were proccessed the command will exit immediately every time you call it within 14 days after the first call. After 14 days, the modification time of the ``reseed.time`` file will be updated and the re-seeding process starts again.
+ 
+ ::
+ 


=====================================
debian/rules
=====================================
--- a/debian/rules
+++ b/debian/rules
@@ -113,3 +113,5 @@ override_dh_auto_install:
 
 override_dh_install:
 	dh_install --list-missing
+
+	$(RM) debian/*/usr/share/python*-mapproxy/test/schemas/*/*/ReadMe.txt



View it on GitLab: https://salsa.debian.org/debian-gis-team/mapproxy/compare/69b5c8b743fbcb0b46d0e58a157eeacb7c72bf48...7c0a5bdcccd16b53717073ff8794a8a7f2741120
