[med-svn] [python-multipletau] 08/14: New upstream version 0.1.9+ds

Alex Mestiashvili malex-guest at moszumanska.debian.org
Fri Oct 20 21:29:47 UTC 2017


This is an automated email from the git hooks/post-receive script.

malex-guest pushed a commit to branch master
in repository python-multipletau.

commit 44a56eb8e23cda8760e956aacb9007613cea2844
Author: Alexandre Mestiashvili <alex at biotec.tu-dresden.de>
Date:   Fri Oct 20 14:39:35 2017 +0200

    New upstream version 0.1.9+ds
---
 .travis.yml                              |  25 ----
 CHANGELOG                                |  33 +++--
 MANIFEST.in                              |   5 +-
 README.rst                               |  14 +-
 doc/README.md                            |  10 --
 doc/deploy_ghpages.py                    |  83 -----------
 doc/extensions/myviewcode.py             | 240 -------------------------------
 docs/README.md                           |  10 ++
 {doc => docs}/conf.py                    |  83 +++++------
 docs/extensions/fancy_include.py         |  99 +++++++++++++
 {doc => docs}/index.rst                  |   9 +-
 docs/requirements.txt                    |   2 +
 examples/compare_correlation_methods.jpg | Bin 0 -> 58902 bytes
 examples/compare_correlation_methods.png | Bin 90946 -> 0 bytes
 examples/compare_correlation_methods.py  | 214 ++++++++++++---------------
 examples/generate_example_images.py      |  33 +++++
 examples/noise_generator.py              | 148 +++++++++----------
 multipletau/__init__.py                  |   8 +-
 multipletau/_version.py                  |   8 +-
 multipletau/{_multipletau.py => core.py} |   0
 setup.cfg                                |   9 +-
 setup.py                                 |  24 +---
 22 files changed, 392 insertions(+), 665 deletions(-)

diff --git a/.travis.yml b/.travis.yml
deleted file mode 100644
index f582e2a..0000000
--- a/.travis.yml
+++ /dev/null
@@ -1,25 +0,0 @@
-language: python
-python:
-- '2.7'
-- '3.4'
-- '3.5'
-env:
-  global:
-  - GH_REF: github.com/FCS-analysis/multipletau.git
-  - secure: IVoAJNYKGjWbHUGPpe8oOTLhltGrhu0F+xCaRVGs1tTut34BixSSeDgranlRiXZ0wlVOzBGrDHLkoLxFSRRy43BN4TSiv05WLBZba7ypGYBbDrLrG5nFPnT6n9d4ZgFHHHwyvI2ymdSs6/EJwZRXmr2Ehm0HzetA27FB1/Q3kc0=
-notifications:
-  email: false
-install:
-- travis_retry pip install coverage
-- travis_retry pip install coveralls
-- python setup.py develop
-- pip freeze
-script:
-- coverage run --source=multipletau ./setup.py test
-- coverage report -m
-after_success:
-- coveralls --verbose
-- git config credential.helper "store --file=.git/credentials"
-- echo "https://${GH_TOKEN}:@github.com" > .git/credentials
-- if [[ $TRAVIS_PYTHON_VERSION == 3.4 ]]; then pip install numpydoc sphinx; fi
-- if [[ $TRAVIS_PYTHON_VERSION == 3.4 ]]; then python doc/deploy_ghpages.py; fi
diff --git a/CHANGELOG b/CHANGELOG
index aa65120..ba7da60 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -1,23 +1,28 @@
+0.1.9
+ - include docs in sdist
+0.1.8
+ - update documentation and example files
+ - move documentation to readthedocs.io
 0.1.7
-- code cleanup with pep8 and autopep8 
-- always use numpy dtypes
-- fix tests:
+ - code cleanup with pep8 and autopep8 
+ - always use numpy dtypes
+ - fix tests:
     - take into account floating inaccuracies
     - support i386 numpy dtypes
 0.1.6
-- also compute correlation for zero lag time (`G(tau==0)`)
-- support NumPy 1.11
-- add tests to complete code coverage
-- bugfixes:
+ - also compute correlation for zero lag time (`G(tau==0)`)
+ - support NumPy 1.11
+ - add tests to complete code coverage
+ - bugfixes:
   - wrong normalization for cplx array `v` in `correlate` if `normalize==True`
   - wrong normalization in `correlate_numpy` if `normalize==False`
 0.1.5
-- update documentation
-- support Python 3
+ - update documentation
+ - support Python 3
 0.1.4
-- integer and boolean input types are now automatically converted to floats
-- `multipletau.correlate` now works with complex data types
-- `multipletau.correlate` now checks if input data are same objects
-- documentation now contains examples
+ - integer and boolean input types are now automatically converted to floats
+ - `multipletau.correlate` now works with complex data types
+ - `multipletau.correlate` now checks if input data are same objects
+ - documentation now contains examples
 0.1.3
-- first non-cython implementation
+ - first non-cython implementation
diff --git a/MANIFEST.in b/MANIFEST.in
index 6efe489..905c080 100644
--- a/MANIFEST.in
+++ b/MANIFEST.in
@@ -1,6 +1,7 @@
 include CHANGELOG
 include LICENSE
 include README.rst
-recursive-include examples *.py
-recursive-include doc *.py *.md *.rst
+recursive-include examples *.py *.jpg
+recursive-include docs *.py *.md *.rst *.txt
 recursive-include tests *.py *.md test_*.npy
+prune docs/_build
diff --git a/README.rst b/README.rst
index 7270054..17439a4 100644
--- a/README.rst
+++ b/README.rst
@@ -1,7 +1,7 @@
 multipletau
 ===========
 
-|PyPI Version| |Build Status| |Coverage Status|
+|PyPI Version| |Tests Status| |Coverage Status| |Docs Status|
 
 Multiple-tau correlation is computed on a logarithmic scale (fewer
 data points are computed) and is thus much faster than conventional
@@ -10,9 +10,9 @@ correlation on a linear scale such as `numpy.correlate <http://docs.scipy.org/do
 
 Installation
 ------------
-``multipletau`` supports Python 2.6+ and Python 3.3+ with a common codebase.
+Multipletau supports Python 2.6+ and Python 3.3+ with a common codebase.
 The only requirement for ``multipletau`` is `NumPy <http://www.numpy.org/>`__ (for fast
-operations on arrays). Install ``multipletau`` from the Python package index:
+operations on arrays). Install multipletau from the Python package index:
 
 ::
 
@@ -21,7 +21,8 @@ operations on arrays). Install ``multipletau`` from the Python package index:
 
 Documentation
 -------------
-A full code reference including examples is available `here <http://FCS-analysis.github.io/multipletau/>`__.
+
+The documentation, including the reference and examples, is available on `readthedocs.io <https://multipletau.readthedocs.io/en/stable/>`__.
 
 
 Usage
@@ -61,8 +62,9 @@ You can find out what version you are using by typing (in a Python console):
 
 .. |PyPI Version| image:: http://img.shields.io/pypi/v/multipletau.svg
    :target: https://pypi.python.org/pypi/multipletau
-.. |Build Status| image:: http://img.shields.io/travis/FCS-analysis/multipletau.svg
+.. |Tests Status| image:: http://img.shields.io/travis/FCS-analysis/multipletau.svg?label=tests
    :target: https://travis-ci.org/FCS-analysis/multipletau
 .. |Coverage Status| image:: https://img.shields.io/coveralls/FCS-analysis/multipletau.svg
    :target: https://coveralls.io/r/FCS-analysis/multipletau
-
+.. |Docs Status| image:: https://readthedocs.org/projects/multipletau/badge/?version=latest
+   :target: https://readthedocs.org/projects/multipletau/builds/
diff --git a/doc/README.md b/doc/README.md
deleted file mode 100644
index 43f3f86..0000000
--- a/doc/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
-multipletau documentation
-=========================
-
-Install [numpydoc](https://pypi.python.org/pypi/numpydoc):
-
-    pip install numpydoc
-
-To compile the documentation, run
-
-    python setup.py build_sphinx
diff --git a/doc/deploy_ghpages.py b/doc/deploy_ghpages.py
deleted file mode 100644
index 1a98abf..0000000
--- a/doc/deploy_ghpages.py
+++ /dev/null
@@ -1,83 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-"""
-Publish the documentation on GitHub Pages.
-
-Prerequisites
--------------
-
-1. Create empty gh-pages branch:
-
-    git branch gh-pages
-    git checkout gh-pages
-    git symbolic-ref HEAD refs/heads/gh-pages
-    rm .git/index
-    git clean -fdx
-
-
-2. Setup sphinx.
-
-    python setup.py build_sphinx
-   
-   should create a build/sphinx/html folder in the repository root. 
-
-
-3. Create GitHub repo token and encrypt it 
-
-    gem install travis
-    travis encrypt GH_TOKEN="<token>" --add
-    
-
-4. Add the encrypted token to .travis.yml
-
-    env:
-      global:
-      - GH_REF: github.com/<your name>/<your repo>.git
-      - secure: "jdcn3kM/dI0zvVTn0UKgal8Br+745Qc1plaKXHcoKhwcwN+0Q1y5H1BnaF0KV2dWWeExVXMpqQMLOCylUSUmd30+hFqUgd3gFQ+oh9pF/+N72uzjnxHAyVjai5Lh7QnjN0SLCd2/xLYwaUIHjWbWsr5t2vK9UuyphZ6/F+7OHf+u8BErviE9HUunD7u4Q2XRaUF0oHuF8stoWbJgnQZtUZFr+qS1Gc3vF6/KBkMqjnq/DgBV61cWsnVUS1HVak/sGClPRXZMSGyz8d63zDxfA5NDO6AbPVgK02k+QV8KQCyIX7of8rBvBmWkBYGw5RnaeETLIAf6JrCKMiQzlJQZiMyLUvd/WflSIBKJyr5YmUKCjFkwvbKKvCU3WBUxFT2p7trKZip5JWg37OMvOAO8eiatf2FC1klNly1KHADU88QqNoi/0y2R/a+1Csrl8Gr/lXZkW4mMkI2due9epLwccDJtMF8 [...]
-
-5. Add the deploy command to .travis.yml
-
-    after_success:
-    - git config credential.helper "store --file=.git/credentials"
-    - echo "https://${GH_TOKEN}:@github.com" > .git/credentials
-    - if [[ $TRAVIS_PYTHON_VERSION == 3.4 ]]; then pip install numpydoc sphinx; fi
-    - if [[ $TRAVIS_PYTHON_VERSION == 3.4 ]]; then python doc/deploy_ghpages.py; fi
-
-"""
-from __future__ import print_function
-import os
-from os.path import dirname, abspath
-import subprocess as sp
-
-
-# go to root of repository
-os.chdir(dirname(dirname(abspath(__file__))))
-
-# build sphinx
-sp.check_output("python setup.py build_sphinx", shell=True)
-
-# clone into new folder the gh-pages branch
-sp.check_output("git config --global user.email 'travis at example.com'", shell=True)
-sp.check_output("git config --global user.name 'Travis CI'", shell=True)
-sp.check_output("git config --global credential.helper 'store --file=.git/credentials'", shell=True)
-sp.check_output("echo 'https://${GH_TOKEN}:@github.com' > .git/credentials", shell=True)
-sp.check_output("git clone --depth 1 -b gh-pages https://${GH_TOKEN}@${GH_REF} gh_pages", shell=True)
-
-# copy everything from ./build/sphinx/html to ./gh_pages
-#sp.check_output("cp -r ./build/sphinx/html/* ./gh_pages/", shell=True)
-sp.check_output("rsync -rt --del --exclude='.git' --exclude='.nojekyll' ./build/sphinx/html/* ./gh_pages/", shell=True)
-
-# commit changes
-os.chdir("gh_pages")
-sp.check_output("echo 'https://${GH_TOKEN}:@github.com' > .git/credentials", shell=True)
-sp.check_output("git add --all ./*", shell=True)
-
-try:
-    # If there is nothing to commit, then 'git commit' returns non-zero exit status
-    errorcode = sp.check_output("git commit -a -m 'travis bot build {} [ci skip]'".format(os.getenv("TRAVIS_COMMIT")), shell=True)
-    print("git commit returned:", errorcode)
-except:
-    pass
-else:
-    sp.check_output("git push --force --quiet origin gh-pages", shell=True)
-
diff --git a/doc/extensions/myviewcode.py b/doc/extensions/myviewcode.py
deleted file mode 100644
index d49b435..0000000
--- a/doc/extensions/myviewcode.py
+++ /dev/null
@@ -1,240 +0,0 @@
-"""
-    sphinx.ext.viewcode
-    ~~~~~~~~~~~~~~~~~~~
-    Add links to module code in Python object descriptions.
-    :copyright: Copyright 2007-2015 by the Sphinx team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-
-    Edited by Paul Mueller to support imports from submodules. Uses the
-    importlib library. Changes marked with "## EDIT". 2015-02-22
-"""
-
-## EDIT
-import importlib
-##
-
-import traceback
-
-from six import iteritems, text_type
-from docutils import nodes
-
-import sphinx
-from sphinx import addnodes
-from sphinx.locale import _
-from sphinx.pycode import ModuleAnalyzer
-from sphinx.util import get_full_modname
-from sphinx.util.nodes import make_refnode
-from sphinx.util.console import blue
-
-
-def _get_full_modname(app, modname, attribute):
-    try:
-        return get_full_modname(modname, attribute)
-    except AttributeError:
-        # sphinx.ext.viewcode can't follow class instance attribute
-        # then AttributeError logging output only verbose mode.
-        app.verbose('Didn\'t find %s in %s' % (attribute, modname))
-        return None
-    except Exception as e:
-        # sphinx.ext.viewcode follow python domain directives.
-        # because of that, if there are no real modules exists that specified
-        # by py:function or other directives, viewcode emits a lot of warnings.
-        # It should be displayed only verbose mode.
-        app.verbose(traceback.format_exc().rstrip())
-        app.verbose('viewcode can\'t import %s, failed with error "%s"' %
-                    (modname, e))
-        return None
-
-
-def doctree_read(app, doctree):
-    env = app.builder.env
-    if not hasattr(env, '_viewcode_modules'):
-        env._viewcode_modules = {}
-
-    def has_tag(modname, fullname, docname, refname):
-        entry = env._viewcode_modules.get(modname, None)
-        try:
-            analyzer = ModuleAnalyzer.for_module(modname)
-        except Exception:
-            env._viewcode_modules[modname] = False
-            return
-        if not isinstance(analyzer.code, text_type):
-            code = analyzer.code.decode(analyzer.encoding)
-        else:
-            code = analyzer.code
-        if entry is None or entry[0] != code:
-            analyzer.find_tags()
-            entry = code, analyzer.tags, {}, refname
-            env._viewcode_modules[modname] = entry
-        elif entry is False:
-            return
-        _, tags, used, _ = entry
-        if fullname in tags:
-            used[fullname] = docname
-            return True
-
-    for objnode in doctree.traverse(addnodes.desc):
-        if objnode.get('domain') != 'py':
-            continue
-        names = set()
-        for signode in objnode:
-            if not isinstance(signode, addnodes.desc_signature):
-                continue
-            modname = signode.get('module')
-            fullname = signode.get('fullname')
-            refname = modname
-            if env.config.viewcode_import:
-                modname = _get_full_modname(app, modname, fullname)
-            if not modname:
-                continue
-            fullname = signode.get('fullname')
-            
-            ## EDIT
-            fullname, modname = find_modname(fullname, modname)
-            ##
-            
-            if not has_tag(modname, fullname, env.docname, refname):
-                continue
-            if fullname in names:
-                # only one link per name, please
-                continue
-            names.add(fullname)
-            pagename = '_modules/' + modname.replace('.', '/')
-            onlynode = addnodes.only(expr='html')
-            onlynode += addnodes.pending_xref(
-                '', reftype='viewcode', refdomain='std', refexplicit=False,
-                reftarget=pagename, refid=fullname,
-                refdoc=env.docname)
-            onlynode[0] += nodes.inline('', _('[source]'),
-                                        classes=['viewcode-link'])
-            signode += onlynode
-
-
-def env_merge_info(app, env, docnames, other):
-    if not hasattr(other, '_viewcode_modules'):
-        return
-    # create a _viewcode_modules dict on the main environment
-    if not hasattr(env, '_viewcode_modules'):
-        env._viewcode_modules = {}
-    # now merge in the information from the subprocess
-    env._viewcode_modules.update(other._viewcode_modules)
-
-
-## EDIT
-def find_modname(fullname, modname):
-    mod = importlib.import_module(modname)
-    if hasattr(mod, fullname):
-        func = getattr(mod, fullname)
-        modname = func.__module__
-        fullname = func.__name__
-    return fullname, modname
-##
-
-
-def missing_reference(app, env, node, contnode):
-    # resolve our "viewcode" reference nodes -- they need special treatment
-    if node['reftype'] == 'viewcode':
-        return make_refnode(app.builder, node['refdoc'], node['reftarget'],
-                            node['refid'], contnode)
-
-
-def collect_pages(app):
-    env = app.builder.env
-    if not hasattr(env, '_viewcode_modules'):
-        return
-    highlighter = app.builder.highlighter
-    urito = app.builder.get_relative_uri
-
-    modnames = set(env._viewcode_modules)
-
-#    app.builder.info(' (%d module code pages)' %
-#                     len(env._viewcode_modules), nonl=1)
-
-    for modname, entry in app.status_iterator(
-            iteritems(env._viewcode_modules), 'highlighting module code... ',
-            blue, len(env._viewcode_modules), lambda x: x[0]):
-        if not entry:
-            continue
-        code, tags, used, refname = entry
-        # construct a page name for the highlighted source
-        pagename = '_modules/' + modname.replace('.', '/')
-        # highlight the source using the builder's highlighter
-        highlighted = highlighter.highlight_block(code, 'python', linenos=False)
-        # split the code into lines
-        lines = highlighted.splitlines()
-        # split off wrap markup from the first line of the actual code
-        before, after = lines[0].split('<pre>')
-        lines[0:1] = [before + '<pre>', after]
-        # nothing to do for the last line; it always starts with </pre> anyway
-        # now that we have code lines (starting at index 1), insert anchors for
-        # the collected tags (HACK: this only works if the tag boundaries are
-        # properly nested!)
-        maxindex = len(lines) - 1
-        for name, docname in iteritems(used):
-            type, start, end = tags[name]
-            backlink = urito(pagename, docname) + '#' + refname + '.' + name
-            lines[start] = (
-                '<div class="viewcode-block" id="%s"><a class="viewcode-back" '
-                'href="%s">%s</a>' % (name, backlink, _('[docs]')) +
-                lines[start])
-            lines[min(end - 1, maxindex)] += '</div>'
-        # try to find parents (for submodules)
-        parents = []
-        parent = modname
-        while '.' in parent:
-            parent = parent.rsplit('.', 1)[0]
-            if parent in modnames:
-                parents.append({
-                    'link': urito(pagename, '_modules/' +
-                                  parent.replace('.', '/')),
-                    'title': parent})
-        parents.append({'link': urito(pagename, '_modules/index'),
-                        'title': _('Module code')})
-        parents.reverse()
-        # putting it all together
-        context = {
-            'parents': parents,
-            'title': modname,
-            'body': (_('<h1>Source code for %s</h1>') % modname +
-                     '\n'.join(lines)),
-        }
-        yield (pagename, context, 'page.html')
-
-    if not modnames:
-        return
-
-    html = ['\n']
-    # the stack logic is needed for using nested lists for submodules
-    stack = ['']
-    for modname in sorted(modnames):
-        if modname.startswith(stack[-1]):
-            stack.append(modname + '.')
-            html.append('<ul>')
-        else:
-            stack.pop()
-            while not modname.startswith(stack[-1]):
-                stack.pop()
-                html.append('</ul>')
-            stack.append(modname + '.')
-        html.append('<li><a href="%s">%s</a></li>\n' % (
-            urito('_modules/index', '_modules/' + modname.replace('.', '/')),
-            modname))
-    html.append('</ul>' * (len(stack) - 1))
-    context = {
-        'title': _('Overview: module code'),
-        'body': (_('<h1>All modules for which code is available</h1>') +
-                 ''.join(html)),
-    }
-
-    yield ('_modules/index', context, 'page.html')
-
-
-def setup(app):
-    app.add_config_value('viewcode_import', True, False)
-    app.connect('doctree-read', doctree_read)
-    app.connect('env-merge-info', env_merge_info)
-    app.connect('html-collect-pages', collect_pages)
-    app.connect('missing-reference', missing_reference)
-    # app.add_config_value('viewcode_include_modules', [], 'env')
-    # app.add_config_value('viewcode_exclude_modules', [], 'env')
-    return {'version': sphinx.__display_version__, 'parallel_read_safe': True}
\ No newline at end of file
diff --git a/docs/README.md b/docs/README.md
new file mode 100644
index 0000000..3a06242
--- /dev/null
+++ b/docs/README.md
@@ -0,0 +1,10 @@
+multipletau documentation
+=========================
+To install the requirements for building the documentation, run
+
+    pip install -r requirements.txt
+
+To compile the documentation, run
+
+    sphinx-build . _build
+
diff --git a/doc/conf.py b/docs/conf.py
similarity index 84%
rename from doc/conf.py
rename to docs/conf.py
index e5fcc79..4801fc3 100644
--- a/doc/conf.py
+++ b/docs/conf.py
@@ -12,22 +12,29 @@
 # All configuration values have a default; values that are commented out
 # serve to show the default.
 
-import sys
-import os
-
 # If extensions (or modules to document with autodoc) are in another directory,
 # add these directories to sys.path here. If the directory is relative to the
 # documentation root, use os.path.abspath to make it absolute, like shown here.
-#sys.path.insert(0, os.path.abspath('.'))
-
+#
+# import os
+# import sys
+# sys.path.insert(0, os.path.abspath('.'))
 
-sys.path.insert(0, os.path.abspath(os.path.join(os.path.abspath(
-                    os.path.dirname(__file__)), '../')))
+# Get version number from multipletau._version file
+import mock
+import os.path as op
+import sys
+# include parent directory
+pdir = op.dirname(op.dirname(op.abspath(__file__)))
+sys.path.insert(0, pdir)
+# include extensions
+sys.path.append(op.abspath('extensions'))
 
-sys.path.append(os.path.abspath('extensions'))
+# Mock all dependencies
+install_requires = ["numpy"]
 
-# include examples
-sys.path.append(os.path.abspath(os.path.dirname(__file__)+"/../examples"))
+for mod_name in install_requires:
+    sys.modules[mod_name] = mock.Mock()
 
 
 # There should be a file "setup.py" that has the property "version"
@@ -35,6 +42,15 @@ from setup import author, authors, description, name, version, year
 projectname = name
 projectdescription = description
 
+# http://www.sphinx-doc.org/en/stable/ext/autodoc.html#confval-autodoc_member_order
+# Order class attributes and functions in separate blocks
+autodoc_member_order = 'bysource'
+autodoc_mock_imports = install_requires
+
+# Display link to GitHub repo instead of doc on rtfd
+rst_prolog = """
+:github_url: https://github.com/FCS-analysis/multipletau
+"""
 
 # -- General configuration ------------------------------------------------
 
@@ -44,34 +60,14 @@ projectdescription = description
 # Add any Sphinx extension module names here, as strings. They can be
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
-#extensions = [
-#    'sphinx.ext.autodoc',
-#    'sphinx.ext.doctest',
-#    'sphinx.ext.coverage',
-#    'sphinx.ext.pngmath',
-#    'sphinx.ext.viewcode',
-#]
-
-
-extensions = [
-#              'matplotlib.sphinxext.mathmpl',
-#              'matplotlib.sphinxext.only_directives',
-#              'matplotlib.sphinxext.plot_directive',
-#              'sphinx.ext.viewcode',
-#              'ipython_directive',
-              'sphinx.ext.intersphinx',
+
+extensions = ['sphinx.ext.intersphinx',
               'sphinx.ext.autosummary',
               'sphinx.ext.autodoc',
-#              'sphinx.ext.doctest',
-#              'ipython_console_highlighting',
-#              'sphinx.ext.pngmath',
               'sphinx.ext.mathjax',
-#              'sphinx.ext.viewcode',
-#              'sphinx.ext.todo',
-#              'inheritance_diagram',
-              'numpydoc',
-              'myviewcode',  
-#              'hidden_code_block',
+              'sphinx.ext.viewcode',
+              'sphinx.ext.napoleon',
+              'fancy_include',
               ]
 
 
@@ -142,12 +138,7 @@ release = version
 
 # The theme to use for HTML and HTML Help pages.  See the documentation for
 # a list of builtin themes.
-html_theme = 'classic'
-
-# Theme options are theme-specific and customize the look and feel of a theme
-# further.  For a list of options available for each theme, see the
-# documentation.
-html_theme_options = {"stickysidebar": True}
+html_theme = 'default'
 
 # Add any paths that contain custom themes here, relative to this directory.
 #html_theme_path = []
@@ -305,10 +296,6 @@ texinfo_documents = [
 # -----------------------------------------------------------------------------
 # intersphinx
 # -----------------------------------------------------------------------------
-_python_doc_base = 'http://docs.python.org/2.7'
-intersphinx_mapping = {
-    _python_doc_base: None,
-    'http://docs.scipy.org/doc/numpy': None,
-    'http://docs.scipy.org/doc/scipy/reference': None,
-}
-
+intersphinx_mapping = {"python": ('https://docs.python.org/', None),
+                       "numpy": ('http://docs.scipy.org/doc/numpy', None),
+                       }
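
The conf.py rewrite above mocks every runtime dependency so that Sphinx
autodoc can import the package on readthedocs.io without numpy installed.
A minimal sketch of that trick, assuming the `mock` package from
docs/requirements.txt is available (module and package names are
illustrative):

    # Put a Mock object in sys.modules before importing the documented
    # package; attribute lookups on the mock never fail, so module-level
    # "import numpy" statements in the package succeed.
    import sys
    import mock

    for mod_name in ["numpy"]:
        sys.modules[mod_name] = mock.Mock()

    import multipletau  # would raise ImportError without the mock above
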
diff --git a/docs/extensions/fancy_include.py b/docs/extensions/fancy_include.py
new file mode 100644
index 0000000..e0b746f
--- /dev/null
+++ b/docs/extensions/fancy_include.py
@@ -0,0 +1,99 @@
+"""Include single scripts with doc string, code, and image
+
+Use case
+--------
+There is an "examples" directory in the root of a repository,
+e.g. 'include_doc_code_img_path = "../examples"' in conf.py
+(default). An example is a file ("an_example.py") that consists
+of a doc string at the beginning of the file, the example code,
+and, optionally, an image file (png, jpg) ("an_example.png").
+
+
+Configuration
+-------------
+In conf.py, set the parameter
+
+   fancy_include_path = "../examples"
+
+to wherever the included files reside.
+
+
+Usage
+-----
+The directive
+
+   .. fancy_include:: an_example.py
+
+will display the doc string formatted with the first line as a
+heading, a code block with line numbers, and the image file.
+"""
+import io
+import os.path as op
+
+from docutils.statemachine import ViewList
+from docutils.parsers.rst import Directive
+from sphinx.util.nodes import nested_parse_with_titles
+from docutils import nodes
+
+
+class IncludeDirective(Directive):
+    required_arguments = 1
+    optional_arguments = 0
+
+    def run(self):
+        path = self.state.document.settings.env.config.fancy_include_path
+        full_path = op.join(path, self.arguments[0])
+
+        with io.open(full_path, "r") as myfile:
+            text = myfile.read()
+
+        source = text.split('"""')
+        doc = source[1].split("\n")
+        doc.insert(1, "~" * len(doc[0]))  # make title heading
+
+        code = source[2].split("\n")
+
+        # documentation
+        rst = []
+        for line in doc:
+            rst.append(line)
+
+        # image
+        for ext in [".png", ".jpg"]:
+            image_path = full_path[:-3] + ext
+            if op.exists(image_path):
+                break
+        else:
+            image_path = ""
+        if image_path:
+            rst.append(".. figure:: {}".format(image_path))
+            rst.append("")
+
+        # download file
+        rst.append(":download:`{}<{}>`".format(
+            op.basename(full_path), full_path))
+
+        # code
+        rst.append("")
+        rst.append(".. code-block:: python")
+        rst.append("   :linenos:")
+        rst.append("")
+        for line in code:
+            rst.append("   {}".format(line))
+        rst.append("")
+
+        vl = ViewList(rst, "fakefile.rst")
+        # Create a node.
+        node = nodes.section()
+        node.document = self.state.document
+        # Parse the rst.
+        nested_parse_with_titles(self.state, vl, node)
+        return node.children
+
+
+def setup(app):
+    app.add_config_value('fancy_include_path', "../examples", 'html')
+
+    app.add_directive('fancy_include', IncludeDirective)
+
+    return {'version': '0.1'}   # identifies the version of our extension
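
To use the new extension, a conf.py registers it and points
`fancy_include_path` at the directory holding the example scripts, exactly
as docs/conf.py above does. A sketch of the wiring (paths are assumptions):

    # docs/conf.py: make the extension importable and enable it
    import os.path as op
    import sys
    sys.path.append(op.abspath("extensions"))

    extensions = ["fancy_include"]
    fancy_include_path = "../examples"

An .rst page can then embed a script, its doc string, and its image with

    .. fancy_include:: compare_correlation_methods.py
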
diff --git a/doc/index.rst b/docs/index.rst
similarity index 73%
rename from doc/index.rst
rename to docs/index.rst
index b0b6ab7..7b407c5 100644
--- a/doc/index.rst
+++ b/docs/index.rst
@@ -1,5 +1,5 @@
-multipletau reference
-=====================
+multipletau documentation
+=========================
 
 General
 :::::::
@@ -16,7 +16,6 @@ Summary:
     correlate
     correlate_numpy
 
-For a quick overview, see :ref:`genindex`.
 
 Autocorrelation
 ---------------
@@ -33,6 +32,6 @@ Cross-correlation (NumPy)
 
 Examples
 ========
-.. automodule:: compare_correlation_methods
-   :members:
+.. fancy_include:: compare_correlation_methods.py
+
 
diff --git a/docs/requirements.txt b/docs/requirements.txt
new file mode 100644
index 0000000..ce02819
--- /dev/null
+++ b/docs/requirements.txt
@@ -0,0 +1,2 @@
+mock
+sphinx>=1.6.4
diff --git a/examples/compare_correlation_methods.jpg b/examples/compare_correlation_methods.jpg
new file mode 100644
index 0000000..6fb1c49
Binary files /dev/null and b/examples/compare_correlation_methods.jpg differ
diff --git a/examples/compare_correlation_methods.png b/examples/compare_correlation_methods.png
deleted file mode 100644
index 2ced9a3..0000000
Binary files a/examples/compare_correlation_methods.png and /dev/null differ
diff --git a/examples/compare_correlation_methods.py b/examples/compare_correlation_methods.py
index ad30ce4..4dfa459 100644
--- a/examples/compare_correlation_methods.py
+++ b/examples/compare_correlation_methods.py
@@ -1,134 +1,102 @@
 #!/usr/bin/python
 # -*- coding: utf-8 -*-
-""" 
-Comparison of correlation methods
----------------------------------
-Comparison between the :py:mod:`multipletau` correlation methods
-(:py:func:`multipletau.autocorrelate`, :py:func:`multipletau.correlate`) and :py:func:`numpy.correlate`.
+"""Comparison of correlation methods
 
-.. image:: ../examples/compare_correlation_methods.png
-   :align:   center
+This example illustrates the differences between the
+:py:mod:`multipletau` correlation methods
+(:py:func:`multipletau.autocorrelate`,
+:py:func:`multipletau.correlate`) and :py:func:`numpy.correlate`.
 
-Download the 
-:download:`full example <../examples/compare_correlation_methods.py>`.    
+This example requires ``noise_generator.py`` to be present in the
+current working directory.
 """
-from __future__ import print_function
-
+from matplotlib import pylab as plt
 import numpy as np
-import os
-from os.path import abspath, dirname, join
-import sys
-import time
-
-sys.path.insert(0, dirname(dirname(abspath(__file__))))
 
-from noise_generator import noise_exponential, noise_cross_exponential
 from multipletau import autocorrelate, correlate, correlate_numpy
 
-
-def compare_corr():
-    ## Starting parameters
-    N = np.int(np.pi*1e3)
-    countrate = 250. * 1e-3 # in Hz
-    taudiff = 55. # in us
-    deltat = 2e-6 # time discretization [s]
-    normalize = True
-
-    # time factor
-    taudiff *= deltat
-
-    if N < 1e5:
-        do_np_corr = True
-    else:
-        do_np_corr = False
-
-    ## Autocorrelation
-    print("Creating noise for autocorrelation")
-    data = noise_exponential(N, taudiff, deltat=deltat)
-    data -= np.average(data)
-    if normalize:
-        data += countrate
-    # multipletau
-    print("Performing autocorrelation (multipletau).")
-    G = autocorrelate(data, deltat=deltat, normalize=normalize)
-    # numpy.correlate for comparison
-    if do_np_corr:
-        print("Performing autocorrelation (numpy).")
-        Gd = correlate_numpy(data, data, deltat=deltat,
-                             normalize=normalize)
-    else:
-        Gd = G
-    
-    ## Cross-correlation
-    print("Creating noise for cross-correlation")
-    a, v = noise_cross_exponential(N, taudiff, deltat=deltat)
-    a -= np.average(a)
-    v -= np.average(v)
-    if normalize:
-        a += countrate
-        v += countrate
-    Gccforw = correlate(a, v, deltat=deltat, normalize=normalize) # forward
-    Gccback = correlate(v, a, deltat=deltat, normalize=normalize) # backward
-    if do_np_corr:
-        print("Performing cross-correlation (numpy).")
-        Gdccforw = correlate_numpy(a, v, deltat=deltat, normalize=normalize)
-    
-    ## Calculate the model curve for cross-correlation
-    xcc = Gd[:,0]
-    ampcc = np.correlate(a-np.average(a), v-np.average(v), mode="valid")
-    if normalize:
-        ampcc /= len(a) * countrate**2
-    ycc = ampcc*np.exp(-xcc/taudiff)
-
-    ## Calculate the model curve for autocorrelation
-    x = Gd[:,0]
-    amp = np.correlate(data-np.average(data), data-np.average(data),
-                       mode="valid")
-    if normalize:
-        amp /= len(data) * countrate**2
-    y = amp*np.exp(-x/taudiff)
-
-
-    ## Plotting
-    # AC
-    fig = plt.figure()
-    fig.canvas.set_window_title('testing multipletau')
-    ax = fig.add_subplot(2,1,1)
-    ax.set_xscale('log')
-    if do_np_corr:
-        plt.plot(Gd[:,0], Gd[:,1] , "-", color="gray", label="correlate (numpy)")
-    plt.plot(x, y, "g-", label="input model")
-    plt.plot(G[:,0], G[:,1], "-",  color="#B60000", label="autocorrelate")
-    plt.xlabel("lag channel")
-    plt.ylabel("autocorrelation")
-    plt.legend(loc=0, fontsize='small')
-    plt.ylim( -amp*.2, amp*1.2)
-    plt.xlim( Gd[0,0], Gd[-1,0])
-
-    # CC
-    ax = fig.add_subplot(2,1,2)
-    ax.set_xscale('log')
-    if do_np_corr:
-        plt.plot(Gdccforw[:,0], Gdccforw[:,1] , "-", color="gray", label="forward (numpy)")
-    plt.plot(xcc, ycc, "g-", label="input model")
-    plt.plot(Gccforw[:,0], Gccforw[:,1], "-", color="#B60000", label="forward")
-    plt.plot(Gccback[:,0], Gccback[:,1], "-", color="#5D00B6", label="backward")
-    plt.xlabel("lag channel")
-    plt.ylabel("cross-correlation")
-    plt.legend(loc=0, fontsize='small')
-    plt.ylim( -ampcc*.2, ampcc*1.2)
-    plt.xlim( Gd[0,0], Gd[-1,0])
-    plt.tight_layout()
-
-    savename = __file__[:-3]+".png"
-    if os.path.exists(savename):
-        savename = __file__[:-3]+time.strftime("_%Y-%m-%d_%H-%M-%S.png")
-
-    plt.savefig(savename)
-    print("Saved output to", savename)
+from noise_generator import noise_exponential, noise_cross_exponential
 
 
-if __name__ == '__main__':
-    # move mpl import to main so travis automated doc build does not complain
-    from matplotlib import pylab as plt
-    compare_corr()
+# starting parameters
+N = np.int(np.pi * 1e3)
+countrate = 250. * 1e-3  # in Hz
+taudiff = 55.  # in us
+deltat = 2e-6  # time discretization [s]
+normalize = True
+
+# time factor
+taudiff *= deltat
+
+# create noise for autocorrelation
+data = noise_exponential(N, taudiff, deltat=deltat)
+data -= np.average(data)
+if normalize:
+    data += countrate
+# perform autocorrelation (multipletau)
+gac_mt = autocorrelate(data, deltat=deltat, normalize=normalize)
+# numpy.correlate for comparison
+gac_np = correlate_numpy(data, data, deltat=deltat,
+                         normalize=normalize)
+# calculate model curve for autocorrelation
+x = gac_np[:, 0]
+amp = np.correlate(data - np.average(data), data - np.average(data),
+                   mode="valid")
+if normalize:
+    amp /= len(data) * countrate**2
+y = amp * np.exp(-x / taudiff)
+
+# create noise for cross-correlation
+a, v = noise_cross_exponential(N, taudiff, deltat=deltat)
+a -= np.average(a)
+v -= np.average(v)
+if normalize:
+    a += countrate
+    v += countrate
+gcc_forw_mt = correlate(a, v, deltat=deltat, normalize=normalize)  # forward
+gcc_back_mt = correlate(v, a, deltat=deltat, normalize=normalize)  # backward
+# numpy.correlate for comparison
+gcc_forw_np = correlate_numpy(a, v, deltat=deltat, normalize=normalize)
+# calculate the model curve for cross-correlation
+xcc = gac_np[:, 0]
+ampcc = np.correlate(a - np.average(a), v - np.average(v), mode="valid")
+if normalize:
+    ampcc /= len(a) * countrate**2
+ycc = ampcc * np.exp(-xcc / taudiff)
+
+# plotting
+fig = plt.figure(figsize=(8, 5))
+fig.canvas.set_window_title('comparing multipletau')
+
+# autocorrelation
+ax1 = fig.add_subplot(211)
+ax1.plot(gac_np[:, 0], gac_np[:, 1], "-",
+         color="gray", label="correlate (numpy)")
+ax1.plot(x, y, "g-", label="input model")
+ax1.plot(gac_mt[:, 0], gac_mt[:, 1], "-",
+         color="#B60000", label="autocorrelate")
+ax1.legend(loc=0, fontsize='small')
+ax1.set_xlabel("lag channel")
+ax1.set_ylabel("autocorrelation")
+ax1.set_xscale('log')
+ax1.set_xlim(x.min(), x.max())
+ax1.set_ylim(-y.max()*.2, y.max()*1.1)
+
+# cross-correlation
+ax2 = fig.add_subplot(212)
+ax2.plot(gcc_forw_np[:, 0], gcc_forw_np[:, 1], "-",
+         color="gray", label="forward (numpy)")
+ax2.plot(xcc, ycc, "g-", label="input model")
+ax2.plot(gcc_forw_mt[:, 0], gcc_forw_mt[:, 1], "-",
+         color="#B60000", label="forward")
+ax2.plot(gcc_back_mt[:, 0], gcc_back_mt[:, 1], "-",
+         color="#5D00B6", label="backward")
+ax2.set_xlabel("lag channel")
+ax2.set_ylabel("cross-correlation")
+ax2.legend(loc=0, fontsize='small')
+ax2.set_xscale('log')
+ax2.set_xlim(x.min(), x.max())
+ax2.set_ylim(-ycc.max()*.2, ycc.max()*1.1)
+
+plt.tight_layout()
+plt.show()
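
The example is now a flat script suitable for the fancy_include directive.
For orientation, a minimal sketch of the three correlation calls it
compares, with illustrative input data and parameter values; each call
returns a two-column array of lag times and correlation values:

    import numpy as np
    from multipletau import autocorrelate, correlate, correlate_numpy

    a = np.random.random(1000)
    v = np.random.random(1000)

    # multiple-tau correlation on a logarithmic lag-time scale
    g_auto = autocorrelate(a, deltat=2e-6, normalize=True)
    g_cross = correlate(a, v, deltat=2e-6, normalize=True)
    # linear-scale reference implementation based on numpy.correlate
    g_lin = correlate_numpy(a, v, deltat=2e-6, normalize=True)

    lag_times, correlation = g_auto[:, 0], g_auto[:, 1]
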
diff --git a/examples/generate_example_images.py b/examples/generate_example_images.py
new file mode 100644
index 0000000..0d048c2
--- /dev/null
+++ b/examples/generate_example_images.py
@@ -0,0 +1,33 @@
+import os
+import os.path as op
+import sys
+
+import matplotlib.pylab as plt
+
+thisdir = op.dirname(op.abspath(__file__))
+sys.path.insert(0, op.dirname(thisdir))
+
+DPI = 80
+
+
+if __name__ == "__main__":
+    # Do not display example plots
+    plt.show = lambda: None
+    files = os.listdir(thisdir)
+    files = [f for f in files if f.endswith(".py")]
+    files = [f for f in files if not f == op.basename(__file__)]
+    files = sorted([op.join(thisdir, f) for f in files])
+
+    for f in files:
+        fname = f[:-3] + ".jpg"
+        if not op.exists(fname):
+            exec_str = open(f).read()
+            if exec_str.count("plt.show()"):
+                exec(exec_str)
+                plt.savefig(fname, dpi=DPI)
+                print("Image created: '{}'".format(fname))
+            else:
+                print("No image: '{}'".format(fname))
+        else:
+            print("Image skipped (already exists): '{}'".format(fname))
+        plt.close()
diff --git a/examples/noise_generator.py b/examples/noise_generator.py
index 13a2cb8..0995edb 100644
--- a/examples/noise_generator.py
+++ b/examples/noise_generator.py
@@ -1,9 +1,6 @@
 #!/usr/bin/python
 # -*- coding: utf-8 -*-
-"""
-This module contains methods for correlated noise generation.
-
-"""
+"""Methods for correlated noise generation"""
 
 from __future__ import division
 from __future__ import print_function
@@ -12,115 +9,110 @@ import numpy as np
 
 __all__ = ["noise_exponential", "noise_cross_exponential"]
 
+
 def noise_exponential(N, tau=20, variance=1, deltat=1):
-    """
-       Generate exponentially correlated noise.
-       
-       Parameters
-       ----------
-       N : integer
-          Total number of samples
-       tau : float
-          Correlation time of the exponential in `deltat`
-       variance : float
-          Variance of the noise
-       deltat : float
-          Bin size of output array, defines the time scale of `tau`
-       
-       Returns
-       -------
-       a : ndarray
-          Exponentially correlated noise.
+    """Generate exponentially correlated noise.
+
+    Parameters
+    ----------
+    N : integer
+       Total number of samples
+    tau : float
+       Correlation time of the exponential in `deltat`
+    variance : float
+       Variance of the noise
+    deltat : float
+       Bin size of output array, defines the time scale of `tau`
+
+    Returns
+    -------
+    a : ndarray
+       Exponentially correlated noise.
     """
     # time constant (inverse of correlation time tau)
-    g = deltat/tau
+    g = deltat / tau
     # variance
     s0 = variance
-    
+
     # normalization factor (memory of the trace)
     exp_g = np.exp(-g)
-    one_exp_g = 1-exp_g
-    z_norm_factor = np.sqrt(1-np.exp(-2*g))/one_exp_g
-    
+    one_exp_g = 1 - exp_g
+    z_norm_factor = np.sqrt(1 - np.exp(-2 * g)) / one_exp_g
+
     # create random number array
     # generates random numbers in interval [0,1)
     randarray = np.random.random(N)
     # make numbers random in interval [-1,1)
-    randarray = 2*(randarray-0.5)
-    
+    randarray = 2 * (randarray - 0.5)
+
     # simulate exponential random behavior
     a = np.zeros(N)
-    a[0] = one_exp_g*randarray[0]
-    b = 1* a
-    for i in np.arange(N-1)+1:
-        a[i] = exp_g*a[i-1] + one_exp_g*randarray[i]
-    
+    a[0] = one_exp_g * randarray[0]
+    for i in np.arange(N - 1) + 1:
+        a[i] = exp_g * a[i - 1] + one_exp_g * randarray[i]
+
         # Solving the equation iteratively leads to this equation:
-        #j = np.arange(i)
-        #a[i] = a[0]*exp_g**(i) + \
+        # j = np.arange(i)
+        # a[i] = a[0]*exp_g**(i) + \
         #       one_exp_g)*np.sum(exp_g**(i-1-j)*randarray[1:i+1])
-        
-    a = a * z_norm_factor*s0
+
+    a = a * z_norm_factor * s0
     return a
 
 
 def noise_cross_exponential(N, tau=20, variance=1, deltat=1):
+    """Generate exponentially cross-correlated noise.
+
+    Parameters
+    ----------
+    N : integer
+       Total number of samples
+    tau : float
+       Correlation time of the exponential in `deltat`
+    variance : float
+       Variance of the noise
+    deltat : float
+       Bin size of output array, defines the time scale of `tau`
+
+    Returns
+    -------
+    a, randarray : ndarrays
+       Array `a` has positive exponential correlation to the 'truly'
+       random array `randarray`.
     """
-       Generate exponentially cross-correlated noise.
-       
-       Parameters
-       ----------
-       N : integer
-          Total number of samples
-       tau : float
-          Correlation time of the exponential in `deltat`
-       variance : float
-          Variance of the noise
-       deltat : float
-          Bin size of output array, defines the time scale of `tau`
-       
-       Returns
-       -------
-       a, randarray : ndarrays
-          Array `a` has positive exponential correlation to the 'truly'
-          random array `randarray`.
-    """
-    # length of mean0 trace
-    N_steps = N
     # time constant (inverse of correlation time tau)
-    g = deltat/tau
+    g = deltat / tau
     # variance
     s0 = variance
     # normalization factor (memory of the trace)
     exp_g = np.exp(-g)
-    one_exp_g = 1-exp_g
-    z_norm_factor = np.sqrt(1-np.exp(-2*g))/one_exp_g
-    
+    one_exp_g = 1 - exp_g
+    z_norm_factor = np.sqrt(1 - np.exp(-2 * g)) / one_exp_g
+
     # create random number array
     # generates random numbers in interval [0,1)
     randarray = np.random.random(N)
     # make numbers random in interval [-1,1)
-    randarray = 2*(randarray-0.5)
-    
+    randarray = 2 * (randarray - 0.5)
+
     # simulate exponential random behavior
     a = np.zeros(N)
-    a[0] = one_exp_g*randarray[0]
-    
+    a[0] = one_exp_g * randarray[0]
+
     b = np.zeros(N)
-    b[0] = one_exp_g*randarray[0]
+    b[0] = one_exp_g * randarray[0]
     # slow
-    #for i in np.arange(N-1)+1:
+    # for i in np.arange(N-1)+1:
     #    for j in np.arange(i-1):
     #        a[i] += exp_g**j*randarray[i-j]
     #    a[i] += one_exp_g*randarray[i]
     # faster
-    j = np.arange(N+5)
-    for i in np.arange(N-1)+1:
-        a[i] += np.sum(exp_g**j[2:i+1] * randarray[2:i+1][::-1])
-        a[i] += one_exp_g*randarray[i]
-   
-    a *= z_norm_factor*s0
-    randarray = randarray * z_norm_factor*s0
-    
-    return a, randarray
+    j = np.arange(N + 5)
+    for i in np.arange(N - 1) + 1:
+        a[i] += np.sum(exp_g**j[2:i + 1] * randarray[2:i + 1][::-1])
+        a[i] += one_exp_g * randarray[i]
 
+    a *= z_norm_factor * s0
+    randarray = randarray * z_norm_factor * s0
+
+    return a, randarray
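
A short sketch of how the cleaned-up generators are consumed, with
illustrative sample counts and time constants:

    import numpy as np
    from multipletau import autocorrelate
    from noise_generator import noise_exponential, noise_cross_exponential

    # exponentially correlated trace; correlation time is 20 * deltat
    trace = noise_exponential(N=1000, tau=20, deltat=1)
    g = autocorrelate(trace - np.average(trace), deltat=1, normalize=False)

    # `a` is exponentially correlated with the random reference `ref`
    a, ref = noise_cross_exponential(N=1000, tau=20, deltat=1)
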
diff --git a/multipletau/__init__.py b/multipletau/__init__.py
index 31efe66..4e9292c 100644
--- a/multipletau/__init__.py
+++ b/multipletau/__init__.py
@@ -1,8 +1,8 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 u"""
-This package provides a multiple-τ algorithm for Python 2.7 and
-Python 3.x and requires the package :py:mod:`numpy`.
+Multipletau provides a multiple-τ algorithm for Python 2.7 and
+Python 3.x with :py:mod:`numpy` as its sole dependency.
 
 Multiple-τ correlation is computed on a logarithmic scale (fewer
 data points are computed) and is thus much faster than conventional
@@ -71,8 +71,8 @@ The package is straightforward to use. Here is a quick example:
            [   8.        ,  386.39500297]])
 
 """
-from ._multipletau import *
-from ._version import version as __version__
+from .core import autocorrelate, correlate, correlate_numpy  # noqa: F401
+from ._version import version as __version__  # noqa: F401
 
 __author__ = u"Paul Müller"
 __license__ = "BSD (3 clause)"
diff --git a/multipletau/_version.py b/multipletau/_version.py
index 68151fc..b985749 100644
--- a/multipletau/_version.py
+++ b/multipletau/_version.py
@@ -54,14 +54,14 @@ if True:  # pragma: no cover
 
         try:
             out = _minimal_ext_cmd(['git', 'describe', '--tags', 'HEAD'])
-            GIT_REVISION = out.strip().decode('ascii')
+            git_revision = out.strip().decode('ascii')
         except OSError:
-            GIT_REVISION = ""
+            git_revision = ""
 
         # go back to original directory
         os.chdir(olddir)
 
-        return GIT_REVISION
+        return git_revision
 
     def load_version(versionfile):
         """ load version from version_save.py
@@ -86,7 +86,7 @@ if True:  # pragma: no cover
         """
         data = "#!/usr/bin/env python\n" \
             + "# This file was created automatically\n" \
-            + "longversion='{VERSION}'"
+            + "longversion = '{VERSION}'\n"
         try:
             with open(versionfile, "w") as fd:
                 fd.write(data.format(VERSION=version))
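
With the fixed template, the generated version file is PEP 8 clean and
newline-terminated; it looks roughly like this (version string
illustrative):

    #!/usr/bin/env python
    # This file was created automatically
    longversion = '0.1.9'
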
diff --git a/multipletau/_multipletau.py b/multipletau/core.py
similarity index 100%
rename from multipletau/_multipletau.py
rename to multipletau/core.py
diff --git a/setup.cfg b/setup.cfg
index 9c27bff..cf4e2a4 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -1,8 +1,5 @@
 [aliases]
-test=pytest
-
-[egg_info]
-tag_build = 
-tag_date = 0
-tag_svn_revision = 0
+test = pytest
 
+[bdist_wheel]
+universal = 1
diff --git a/setup.py b/setup.py
index 9de41bb..d73330b 100644
--- a/setup.py
+++ b/setup.py
@@ -1,7 +1,5 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
-# To create a distribution package for pip or easy-install:
-# python setup.py sdist
 from os.path import exists, dirname, realpath
 from setuptools import setup
 import sys
@@ -13,19 +11,14 @@ description = 'A multiple-tau algorithm for Python/NumPy.'
 name = 'multipletau'
 year = "2013"
 
-
 sys.path.insert(0, realpath(dirname(__file__))+"/"+name)
-try:
-    from _version import version
-except:
-    version = "unknown"
-
+from _version import version
 
 if __name__ == "__main__":
     setup(
         name=name,
         author=author,
-        author_email='paul.mueller at biotec.tu-dresden.de',
+        author_email='dev at craban.de',
         url='https://github.com/FCS-analysis/multipletau',
         version=version,
         packages=[name],
@@ -33,20 +26,17 @@ if __name__ == "__main__":
         license="BSD (3 clause)",
         description=description,
         long_description=open('README.rst').read() if exists('README.rst') else '',
-        install_requires=["NumPy >= 1.5.1"],
-        keywords=["multiple", "tau", "FCS", "correlation", "spectroscopy",
-                  "fluorescence"],
-        extras_require={'doc': ['sphinx']},
+        install_requires=["numpy >= 1.5.1"],
+        keywords=["multiple tau", "fluorescence correlation spectroscopy"],
         setup_requires=['pytest-runner'],
         tests_require=["pytest"],
         classifiers= [
             'Operating System :: OS Independent',
-            'Programming Language :: Python :: 2.7',
-            'Programming Language :: Python :: 3.3',
-            'Programming Language :: Python :: 3.4',
+            'Programming Language :: Python :: 2',
+            'Programming Language :: Python :: 3',
             'Topic :: Scientific/Engineering :: Visualization',
             'Intended Audience :: Science/Research'
-                     ],
+            ],
         platforms=['ALL']
         )
 
