[med-svn] [python-multipletau] 01/09: Imported Upstream version 0.1.6+ds

Alex Mestiashvili malex-guest at moszumanska.debian.org
Fri Jul 22 13:43:47 UTC 2016


This is an automated email from the git hooks/post-receive script.

malex-guest pushed a commit to branch master
in repository python-multipletau.

commit e8e55fb8e05681d77fa712cb59a5b58c468197ae
Author: Alexandre Mestiashvili <alex at biotec.tu-dresden.de>
Date:   Thu Jul 21 16:48:27 2016 +0200

    Imported Upstream version 0.1.6+ds
---
 .gitignore                                         |  48 ---
 .travis.yml                                        |   3 +-
 CHANGELOG.md => CHANGELOG                          |   9 +-
 MANIFEST.in                                        |   6 +-
 README.md                                          |  51 ---
 README.rst                                         |  68 ++++
 README.txt                                         |  16 -
 doc/README.md                                      |   9 -
 examples/compare_correlation_methods.png           | Bin 80520 -> 90946 bytes
 examples/compare_correlation_methods.py            |  78 ++---
 multipletau/__init__.py                            |   6 +-
 multipletau/_multipletau.py                        | 358 +++++++++++----------
 multipletau/_version.py                            | 132 +++++++-
 setup.cfg                                          |   3 +
 setup.py                                           |  53 +--
 tests/README.md                                    |   5 +-
 tests/data/test_autocorrelate.py_test_ac_m.npy     | Bin 0 -> 7776 bytes
 .../test_autocorrelate.py_test_ac_normalize.npy    | Bin 0 -> 2304 bytes
 .../data/test_autocorrelate.py_test_ac_simple.npy  | Bin 0 -> 2304 bytes
 tests/data/test_correlate.py_test_cc_m.npy         | Bin 0 -> 16400 bytes
 tests/data/test_correlate.py_test_cc_normalize.npy | Bin 0 -> 2304 bytes
 tests/data/test_correlate.py_test_cc_simple.npy    | Bin 0 -> 4528 bytes
 tests/test_ac_cc.py                                | 139 ++++++++
 tests/test_autocorrelate.py                        | 254 +++++++++++++++
 tests/test_basic.py                                |  26 +-
 tests/test_correlate.py                            | 300 +++++++++++++++++
 tests/test_ref_numpy.py                            | 180 +++++++++++
 27 files changed, 1350 insertions(+), 394 deletions(-)

diff --git a/.gitignore b/.gitignore
deleted file mode 100644
index 9110534..0000000
--- a/.gitignore
+++ /dev/null
@@ -1,48 +0,0 @@
-*.py[cod]
-
-# C extensions
-*.so
-
-# Packages
-*.egg
-*.egg-info
-dist
-build
-eggs
-parts
-bin
-var
-sdist
-develop-eggs
-.installed.cfg
-lib
-lib64
-
-# Installer logs
-pip-log.txt
-
-# Unit test / coverage reports
-.coverage
-.tox
-nosetests.xml
-
-# Translations
-*.mo
-
-# Mr Developer
-.mr.developer.cfg
-.project
-.pydevproject
-.settings*
-
-# doc build
-doc/_build/**
-
-
-# other stuff
-*.yml~
-*.py~
-*.md~
-*.in~
-*.txt~
-*.txt~
diff --git a/.travis.yml b/.travis.yml
index 96bb5bd..f582e2a 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -12,9 +12,10 @@ notifications:
 install:
 - travis_retry pip install coverage
 - travis_retry pip install coveralls
+- python setup.py develop
 - pip freeze
 script:
-- coverage run --source=multipletau tests/runtests.py
+- coverage run --source=multipletau ./setup.py test
 - coverage report -m
 after_success:
 - coveralls --verbose
diff --git a/CHANGELOG.md b/CHANGELOG
similarity index 51%
rename from CHANGELOG.md
rename to CHANGELOG
index cbf4fd3..78f92dc 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG
@@ -1,6 +1,13 @@
+0.1.6
+- also compute correlation for zero lag time (`G(tau==0)`)
+- support NumPy 1.11
+- add tests to complete code coverage
+- bugfixes:
+  - wrong normalization for a complex-valued array `v` in `correlate` if `normalize==True`
+  - wrong normalization in `correlate_numpy` if `normalize==False`
 0.1.5
 - update documentation
-- officially support Python 3
+- support Python 3
 0.1.4
 - integer and boolean input types are now automatically converted to floats
 - `multipletau.correlate` now works with complex data types
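
The zero-lag entry added in 0.1.6 shows up as the first row of the
returned array; a minimal sketch, reusing the values from the README
example introduced in this commit:

    import numpy as np
    import multipletau

    a = np.linspace(2, 5, 42)
    v = np.linspace(1, 6, 42)
    G = multipletau.correlate(a, v, m=2)
    # As of 0.1.6 the first row is G(tau == 0);
    # in 0.1.5 the output started at tau == 1.
    print(G[0])  # -> [  0.         569.56097561]
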
diff --git a/MANIFEST.in b/MANIFEST.in
index 6f47c76..73c6c95 100644
--- a/MANIFEST.in
+++ b/MANIFEST.in
@@ -1,7 +1,9 @@
-include CHANGELOG.md
-include README.txt
+include CHANGELOG
+include LICENSE
+include README.rst
 include tests/*.py
 include tests/*.md
+include tests/data/*.npy
 include examples/*.py
 include doc/*.py
 include doc/*.rst
diff --git a/README.md b/README.md
deleted file mode 100644
index cf5d8ef..0000000
--- a/README.md
+++ /dev/null
@@ -1,51 +0,0 @@
-multipletau
-===========
-[![PyPI](http://img.shields.io/pypi/v/multipletau.svg)](https://pypi.python.org/pypi/multipletau)
-[![Travis](http://img.shields.io/travis/FCS-analysis/multipletau.svg)](https://travis-ci.org/FCS-analysis/multipletau)
-[![Coveralls](https://img.shields.io/coveralls/FCS-analysis/multipletau.svg)](https://coveralls.io/r/FCS-analysis/multipletau)
-
-
-
-This repo contains a multiple-tau algorithm for Python
-
-- **multipletau** multiple-tau package, implemented using [numpy](http://www.numpy.org/)
-- **test** testing the algorithm
-- **doc** the source of the [documentation](http://FCS-analysis.github.io/multipletau/)
-
-
-
-
-Installation
-------------
-The package can be installed from the Python package index.
-
-
-    pip install multipletau
-
-
-Usage
------
-
-    >>> import numpy as np
-    >>> import multipletau
-    >>> a = np.linspace(2,5,42)
-    >>> v = np.linspace(1,6,42)
-    >>> multipletau.correlate(a, v, m=2)
-    array([[   1.        ,  549.87804878],
-           [   2.        ,  530.37477692],
-           [   4.        ,  491.85812017],
-           [   8.        ,  386.39500297]])
-
-
-Citing
-------
-The multipletau package should be cited like this (replace "x.x.x" with the actual version of multipletau that you used and "DD Month YYYY" with a matching date).
-
-Paul Müller (2012) _Python multiple-tau algorithm_ (Version x.x.x) [Computer program]. Available at https://pypi.python.org/pypi/multipletau/ (Accessed DD Month YYYY)
-
-You can find out what version you are using by typing (in a Python console):
-
-
-    >>> import multipletau
-    >>> multipletau.__version__
-    '0.1.4'
diff --git a/README.rst b/README.rst
new file mode 100644
index 0000000..7270054
--- /dev/null
+++ b/README.rst
@@ -0,0 +1,68 @@
+multipletau
+===========
+
+|PyPI Version| |Build Status| |Coverage Status|
+
+Multiple-tau correlation is computed on a logarithmic scale (fewer
+data points are computed) and is thus much faster than conventional
+correlation on a linear scale such as `numpy.correlate <http://docs.scipy.org/doc/numpy/reference/generated/numpy.correlate.html>`__.
+
+
+Installation
+------------
+``multipletau`` supports Python 2.6+ and Python 3.3+ with a common codebase.
+The only requirement for ``multipletau`` is `NumPy <http://www.numpy.org/>`__ (for fast
+operations on arrays). Install ``multipletau`` from the Python package index:
+
+::
+
+    pip install multipletau
+
+
+Documentation
+-------------
+A full code reference including examples is available `here <http://FCS-analysis.github.io/multipletau/>`__.
+
+
+Usage
+-----
+
+.. code:: python
+
+    import numpy as np
+    import multipletau
+    a = np.linspace(2,5,42)
+    v = np.linspace(1,6,42)
+    multipletau.correlate(a, v, m=2)
+    array([[   0.        ,  569.56097561],
+           [   1.        ,  549.87804878],
+           [   2.        ,  530.37477692],
+           [   4.        ,  491.85812017],
+           [   8.        ,  386.39500297]])
+
+
+Citing
+------
+The multipletau package should be cited like this
+(replace "x.x.x" with the actual version of multipletau that you used
+and "DD Month YYYY" with a matching date).
+
+Paul Müller (2012) *Python multiple-tau algorithm* (Version x.x.x) [Computer program]. Available at `<https://pypi.python.org/pypi/multipletau/>`__ (Accessed DD Month YYYY)
+
+You can find out what version you are using by typing (in a Python console):
+
+.. code:: python
+
+    >>> import multipletau
+    >>> multipletau.__version__
+    '0.1.4'
+
+
+
+.. |PyPI Version| image:: http://img.shields.io/pypi/v/multipletau.svg
+   :target: https://pypi.python.org/pypi/multipletau
+.. |Build Status| image:: http://img.shields.io/travis/FCS-analysis/multipletau.svg
+   :target: https://travis-ci.org/FCS-analysis/multipletau
+.. |Coverage Status| image:: https://img.shields.io/coveralls/FCS-analysis/multipletau.svg
+   :target: https://coveralls.io/r/FCS-analysis/multipletau
+
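
To put a number on the "much faster" claim above: the base-2 scheme in
`_multipletau.py` produces lenG = m + k*(m//2) + 1 lag channels with
k = floor(log2(N/m)), so the channel count grows only logarithmically
in N. A quick back-of-the-envelope check:

    import numpy as np

    N, m = 10**6, 16
    k = int(np.floor(np.log2(N / m)))  # binning levels: 15
    lenG = m + k * (m // 2) + 1        # multiple-tau lag channels: 137
    # numpy.correlate on a linear scale computes N = 10**6 lag channels,
    # i.e. roughly 7000 times as many as the multiple-tau scheme here.
    print(k, lenG)
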
diff --git a/README.txt b/README.txt
deleted file mode 100644
index 98db327..0000000
--- a/README.txt
+++ /dev/null
@@ -1,16 +0,0 @@
-Multipe-tau correlation is computed on a logarithmic scale (less
-data points are computed) and is thus much faster than conventional
-correlation on a linear scale such as `numpy.correlate`.
-
-Reference
-=========
-The code is fully documented. An online reference is available
-at http://FCS-analysis.github.io/multipletau/.
-
-
-Installation
-============
-The package can be installed from the Python package index.
-
-    pip install multipletau
-
diff --git a/doc/README.md b/doc/README.md
index 856634a..43f3f86 100644
--- a/doc/README.md
+++ b/doc/README.md
@@ -8,12 +8,3 @@ Install [numpydoc](https://pypi.python.org/pypi/numpydoc):
 To compile the documentation, run
 
     python setup.py build_sphinx
-
-
-To upload the documentation to gh-pages, run
-
-    python setup.py commit_doc
-    
-or
-
-    python doc/commit_gh-pages.py
diff --git a/examples/compare_correlation_methods.png b/examples/compare_correlation_methods.png
index 22543db..2ced9a3 100644
Binary files a/examples/compare_correlation_methods.png and b/examples/compare_correlation_methods.png differ
diff --git a/examples/compare_correlation_methods.py b/examples/compare_correlation_methods.py
index 361a0a0..ad30ce4 100644
--- a/examples/compare_correlation_methods.py
+++ b/examples/compare_correlation_methods.py
@@ -3,8 +3,8 @@
 """ 
 Comparison of correlation methods
 ---------------------------------
-Illustration of the difference between 
-:py:func:`multipletau.correlate` and :py:func:`numpy.correlate`.
+Comparison between the :py:mod:`multipletau` correlation methods
+(:py:func:`multipletau.autocorrelate`, :py:func:`multipletau.correlate`) and :py:func:`numpy.correlate`.
 
 .. image:: ../examples/compare_correlation_methods.png
    :align:   center
@@ -37,87 +37,87 @@ def compare_corr():
     # time factor
     taudiff *= deltat
 
-    ##
+    if N < 1e5:
+        do_np_corr = True
+    else:
+        do_np_corr = False
+
     ## Autocorrelation
-    ##
     print("Creating noise for autocorrelation")
     data = noise_exponential(N, taudiff, deltat=deltat)
-    data += - np.average(data)
+    data -= np.average(data)
     if normalize:
         data += countrate
     # multipletau
     print("Performing autocorrelation (multipletau).")
     G = autocorrelate(data, deltat=deltat, normalize=normalize)
     # numpy.correlate for comparison
-    if len(data) < 1e5:
+    if do_np_corr:
         print("Performing autocorrelation (numpy).")
         Gd = correlate_numpy(data, data, deltat=deltat,
                              normalize=normalize)
-    # Calculate the expected curve
-    x = G[:,0]
-    amp = np.correlate(data-np.average(data), data-np.average(data),
-                       mode="valid")
-    if normalize:
-        amp /= len(data) * countrate**2
-    y = amp*np.exp(-x/taudiff)
-
-    ##
+    else:
+        Gd = G
+    
     ## Cross-correlation
-    ##
     print("Creating noise for cross-correlation")
     a, v = noise_cross_exponential(N, taudiff, deltat=deltat)
-    a += - np.average(a)
-    v += - np.average(v)
+    a -= np.average(a)
+    v -= np.average(v)
     if normalize:
         a += countrate
         v += countrate
-    # multipletau
-    Gccforw = correlate(a, v, deltat=deltat, normalize=normalize)
-    Gccback = correlate(v, a, deltat=deltat, normalize=normalize)
-    if len(a) < 1e5:
-        print("Performing autocorrelation (numpy).")
+    Gccforw = correlate(a, v, deltat=deltat, normalize=normalize) # forward
+    Gccback = correlate(v, a, deltat=deltat, normalize=normalize) # backward
+    if do_np_corr:
+        print("Performing cross-correlation (numpy).")
         Gdccforw = correlate_numpy(a, v, deltat=deltat, normalize=normalize)
-    # Calculate the expected curve
-    xcc = Gccforw[:,0]
+    
+    ## Calculate the model curve for cross-correlation
+    xcc = Gd[:,0]
     ampcc = np.correlate(a-np.average(a), v-np.average(v), mode="valid")
-
     if normalize:
         ampcc /= len(a) * countrate**2
     ycc = ampcc*np.exp(-xcc/taudiff)
 
+    ## Calculate the model curve for autocorrelation
+    x = Gd[:,0]
+    amp = np.correlate(data-np.average(data), data-np.average(data),
+                       mode="valid")
+    if normalize:
+        amp /= len(data) * countrate**2
+    y = amp*np.exp(-x/taudiff)
 
-    ##
-    ## Plotting
-    ##
 
+    ## Plotting
     # AC
     fig = plt.figure()
     fig.canvas.set_window_title('testing multipletau')
     ax = fig.add_subplot(2,1,1)
     ax.set_xscale('log')
+    if do_np_corr:
+        plt.plot(Gd[:,0], Gd[:,1] , "-", color="gray", label="correlate (numpy)")
     plt.plot(x, y, "g-", label="input model")
-    plt.plot(G[:,0], G[:,1], "r-", label="autocorrelate")
-    if len(data) < 1e5:
-        plt.plot(Gd[:,0], Gd[:,1] , "b--", label="correlate (numpy)")
+    plt.plot(G[:,0], G[:,1], "-",  color="#B60000", label="autocorrelate")
     plt.xlabel("lag channel")
     plt.ylabel("autocorrelation")
     plt.legend(loc=0, fontsize='small')
     plt.ylim( -amp*.2, amp*1.2)
+    plt.xlim( Gd[0,0], Gd[-1,0])
 
-
-    ## CC
+    # CC
     ax = fig.add_subplot(2,1,2)
     ax.set_xscale('log')
+    if do_np_corr:
+        plt.plot(Gdccforw[:,0], Gdccforw[:,1] , "-", color="gray", label="forward (numpy)")
     plt.plot(xcc, ycc, "g-", label="input model")
-    plt.plot(Gccforw[:,0], Gccforw[:,1], "r-", label="forward")
-    if len(data) < 1e5:
-        plt.plot(Gdccforw[:,0], Gdccforw[:,1] , "b--", label="forward (numpy)")
-    plt.plot(Gccback[:,0], Gccback[:,1], "r--", label="backward")
+    plt.plot(Gccforw[:,0], Gccforw[:,1], "-", color="#B60000", label="forward")
+    plt.plot(Gccback[:,0], Gccback[:,1], "-", color="#5D00B6", label="backward")
     plt.xlabel("lag channel")
     plt.ylabel("cross-correlation")
     plt.legend(loc=0, fontsize='small')
-
     plt.ylim( -ampcc*.2, ampcc*1.2)
+    plt.xlim( Gd[0,0], Gd[-1,0])
     plt.tight_layout()
 
     savename = __file__[:-3]+".png"
diff --git a/multipletau/__init__.py b/multipletau/__init__.py
index 4eee6b6..7c31569 100644
--- a/multipletau/__init__.py
+++ b/multipletau/__init__.py
@@ -25,7 +25,7 @@ Obtaining multipletau
 ---------------------
 If you have Python and :py:mod:`numpy` installed, simply run
 
-    pip install multipletau
+   pip install multipletau
 
 The source code of multipletau is available at
 https://github.com/FCS-analysis/multipletau.
@@ -58,12 +58,14 @@ Usage
 -----
 The package is straightforward to use. Here is a quick example:
 
+
     >>> import numpy as np
     >>> import multipletau
     >>> a = np.linspace(2,5,42)
     >>> v = np.linspace(1,6,42)
     >>> multipletau.correlate(a, v, m=2)
-    array([[   1.        ,  549.87804878],
+    array([[   0.        ,  569.56097561],
+           [   1.        ,  549.87804878],
            [   2.        ,  530.37477692],
            [   4.        ,  491.85812017],
            [   8.        ,  386.39500297]])
diff --git a/multipletau/_multipletau.py b/multipletau/_multipletau.py
index 585dfc8..aa592a5 100755
--- a/multipletau/_multipletau.py
+++ b/multipletau/_multipletau.py
@@ -33,7 +33,6 @@ LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
 NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
 SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 """
-
 from __future__ import division
 
 import numpy as np
@@ -47,23 +46,19 @@ def autocorrelate(a, m=16, deltat=1, normalize=False,
     """ 
     Autocorrelation of a 1-dimensional sequence on a log2-scale.
 
-    This computes the correlation according to 
-    :py:func:`numpy.correlate` for positive :math:`k`  on a base 2
+    This computes the correlation similar to 
+    :py:func:`numpy.correlate` for positive :math:`k` on a base 2
     logarithmic scale.
 
         :func:`numpy.correlate(a, a, mode="full")[len(a)-1:]`
-
     
         :math:`z_k = \Sigma_n a_n a_{n+k}`
 
 
-    Note that only the correlation in the positive direction is
-    computed.
-
     Parameters
     ----------
-    a : array_like
-        input sequence of real numbers
+    a : array-like
+        input sequence
     m : even integer
         defines the number of points on one level, must be an
         even integer
@@ -71,132 +66,142 @@ def autocorrelate(a, m=16, deltat=1, normalize=False,
         distance between bins
     normalize : bool
         normalize the result to the square of the average input
-        signal and the factor `M-k`.
+        signal and the factor :math:`M-k`.
     copy : bool
-        copy input array, set to False to save memory
-    dtype : dtype, optional
-        The type of the returned array and of the accumulator in 
-        which the elements are summed.  By default, the dtype of 
-        `a` is used.
+        copy input array, set to ``False`` to save memory
+    dtype : object to be converted to a data type object
+        The data type of the returned array and of the accumulator
+        for the multiple-tau computation.
 
 
     Returns
     -------
-    autocorrelation : ndarray
-        Nx2 array containing lag time and autocorrelation
-
+    autocorrelation : ndarray of shape (N,2)
+        the lag time (1st column) and the autocorrelation (2nd column).
 
     Notes
     -----
+    .. versionchanged :: 0.1.6
+       Compute the correlation for zero lag time.
+
     The algorithm computes the correlation with the convention of the
     curve decaying to zero.
 
     For experiments like e.g. fluorescence correlation spectroscopy,
-    the signal can be normalized to `M-k` by invoking:
+    the signal can be normalized to :math:`M-k` by invoking ``normalize = True``.           
 
-           normalize = True
+    For normalizing according to the behavior of :py:func:`numpy.correlate`,
+    use ``normalize = False``.
 
-    For emulating the numpy.correlate behavior on a logarithmic
-    scale (default behavior) use:
-
-           normalize = False
+    For complex arrays, this method falls back to the method
+    :func:`correlate`.
 
 
     Examples
     --------
-    >>> from numpy import dtype
+    >>> import numpy as np
     >>> from multipletau import autocorrelate
-    >>> autocorrelate(range(42), m=2, dtype=dtype(float))
-    array([[  1.00000000e+00,   2.29600000e+04],
+    >>> autocorrelate(range(42), m=2, dtype=np.float)
+    array([[  0.00000000e+00,   2.38210000e+04],
+           [  1.00000000e+00,   2.29600000e+04],
            [  2.00000000e+00,   2.21000000e+04],
            [  4.00000000e+00,   2.03775000e+04],
            [  8.00000000e+00,   1.50612000e+04]])
-
     """
-    traceavg = np.average(a)
-    if normalize and traceavg == 0:
-        raise ZeroDivisionError("Normalization not possible. The " +
-                                "average of the input *binned_array* " +
-                                "is zero.")
-
-    trace = np.array(a, dtype=dtype, copy=copy)
-    dtype = trace.dtype
+    assert isinstance(copy, bool)
+    assert isinstance(normalize, bool)
 
-    if dtype.kind in ["b", "i", "u"]:
-        warnings.warn("Converting input data type ({}) to float.".
-                      format(dtype))
-        dtype = np.dtype(float)
-        trace = np.array(a, dtype=dtype, copy=copy)
+    if dtype is None:
+        dtype = np.dtype(a[0].__class__)
+    else:
+        dtype = np.dtype(dtype)
 
     # Complex data
     if dtype.kind == "c":
-        raise NotImplementedError(
-            "Please use `multipletau.correlate` for complex data.")
+        # run cross-correlation
+        return correlate(a=a,
+                         v=a,
+                         m=m,
+                         deltat=deltat,
+                         normalize=normalize,
+                         copy=copy,
+                         dtype=dtype)
+    elif dtype.kind != "f":
+        warnings.warn("Input dtype is not float; casting to np.float!")
+        dtype = np.dtype(np.float)
+
+    # If copy is false and dtype is the same as the input array,
+    # then this line does not have an effect:
+    trace = np.array(a, dtype=dtype, copy=copy)
 
     # Check parameters
-    if np.around(m / 2) != m / 2:
-        mold = 1 * m
-        m = int((np.around(m / 2) + 1) * 2)
+    if m//2 != m/2:
+        mold = m
+        m = np.int((m//2 + 1) * 2)
         warnings.warn("Invalid value of m={}. Using m={} instead"
                       .format(mold, m))
     else:
-        m = int(m)
+        m = np.int(m)
 
-    N = N0 = len(trace)
+    N = N0 = trace.shape[0]
 
     # Find out the length of the correlation function.
     # The integer k defines how many times we can average over
     # two neighboring array elements in order to obtain an array of
     # length just larger than m.
-    k = int(np.floor(np.log2(N / m)))
+    k = np.int(np.floor(np.log2(N/m)))
 
     # In the base2 multiple-tau scheme, the length of the correlation
     # array is (only taking into account values that are computed from
     # traces that are just larger than m):
-    lenG = np.int(np.floor(m + k * m / 2))
+    lenG = m + k*(m//2) + 1
 
     G = np.zeros((lenG, 2), dtype=dtype)
 
     normstat = np.zeros(lenG, dtype=dtype)
     normnump = np.zeros(lenG, dtype=dtype)
 
+    traceavg = np.average(trace)
+
     # We use the fluctuation of the signal around the mean
     if normalize:
         trace -= traceavg
-    if N < 2 * m:
-        # Otherwise the following for-loop will fail:
-        raise ValueError("len(binned_array) must be larger than 2m.")
-    # Calculate autocorrelation function for first m bins
+        assert traceavg != 0, "Cannot normalize: Average of `a` is zero!"
+    
+    # Otherwise the following for-loop will fail:
+    assert N >= 2*m, "len(a) must be larger than 2m!"
+
+    # Calculate autocorrelation function for first m+1 bins
     # Discrete convolution of m elements
-    for n in range(1, m + 1):
-        G[n - 1, 0] = deltat * n
+    for n in range(0, m+1):
+        G[n, 0] = deltat * n
         # This is the computationally intensive step
-        G[n - 1, 1] = np.sum(trace[:N - n] * trace[n:], dtype=dtype)
-        normstat[n - 1] = N - n
-        normnump[n - 1] = N
+        G[n, 1] = np.sum(trace[:N - n] * trace[n:])
+        normstat[n] = N - n
+        normnump[n] = N
     # Now that we calculated the first m elements of G, let us
     # go on with the next m/2 elements.
     # Check if len(trace) is even:
-    if N % 2 == 1:
+    if N%2 == 1:
         N -= 1
     # Add up every second element
-    trace = (trace[:N:2] + trace[1:N + 1:2]) / 2
-    N /= 2
+    trace = (trace[:N:2] + trace[1:N:2]) / 2
+    N //= 2
     # Start iteration for each m/2 values
     for step in range(1, k + 1):
         # Get the next m/2 values via correlation of the trace
-        for n in range(1, int(m / 2) + 1):
-            idx = int(m + n - 1 + (step - 1) * m / 2)
-            if len(trace[:N - (n + m / 2)]) == 0:
+        for n in range(1, m//2 + 1):
+            npmd2 = n + m//2
+            idx = m + n + (step - 1) * m//2
+            if len(trace[:N - npmd2]) == 0:
                 # This is a shortcut that stops the iteration once the
                 # length of the trace is too small to compute a corre-
                 # lation. The actual length of the correlation function
                 # does not only depend on k - We also must be able to
-                # perform the sum with repect to k for all elements.
+                # perform the sum with respect to k for all elements.
                 # For small N, the sum over zero elements would be
                 # computed here.
                 #
-                # One could make this for loop go up to maxval, where
+                # One could make this for-loop go up to maxval, where
                 #   maxval1 = int(m/2)
                 #   maxval2 = int(N-m/2-1)
                 #   maxval = min(maxval1, maxval2)
@@ -211,18 +216,18 @@ def autocorrelate(a, m=16, deltat=1, normalize=False,
                 # k in advance.
                 break
             else:
-                G[idx, 0] = deltat * (n + m / 2) * 2**step
+                G[idx, 0] = deltat * npmd2 * 2**step
                 # This is the computationally intensive step
-                G[idx, 1] = np.sum(trace[:N - (n + m / 2)] *
-                                   trace[(n + m / 2):], dtype=dtype)
-                normstat[idx] = N - (n + m / 2)
+                G[idx, 1] = np.sum(trace[:N - npmd2] *
+                                   trace[npmd2:])
+                normstat[idx] = N - npmd2
                 normnump[idx] = N
         # Check if len(trace) is even:
-        if N % 2 == 1:
+        if N%2 == 1:
             N -= 1
         # Add up every second element
-        trace = (trace[:N:2] + trace[1:N + 1:2]) / 2
-        N /= 2
+        trace = (trace[:N:2] + trace[1:N:2]) / 2
+        N //= 2
 
     if normalize:
         G[:, 1] /= traceavg**2 * normstat
@@ -238,21 +243,21 @@ def correlate(a, v, m=16, deltat=1, normalize=False,
     Cross-correlation of two 1-dimensional sequences
     on a log2-scale.
 
-    This computes the cross-correlation according to
+    This computes the cross-correlation similar to
     :py:func:`numpy.correlate` for positive :math:`k`  on a base 2
     logarithmic scale.
 
-        numpy.correlate(a, v, mode="full")[len(a)-1:]
+        :func:`numpy.correlate(a, v, mode="full")[len(a)-1:]`
 
         :math:`z_k = \Sigma_n a_n v_{n+k}`
 
-    Note that only the correlation
-    in the positive direction is computed.
-
+    Note that only the correlation in the positive direction is computed.
+    To obtain the correlation for negative lag times, swap the input variables
+    ``a`` and ``v``.
 
     Parameters
     ----------
-    a, v : array_like
+    a, v : array-like
         input sequences with equal length
     m : even integer
         defines the number of points on one level, must be an
@@ -261,64 +266,75 @@ def correlate(a, v, m=16, deltat=1, normalize=False,
         distance between bins
     normalize : bool
         normalize the result to the square of the average input
-        signal and the factor `M-k`.
+        signal and the factor :math:`M-k`.
     copy : bool
-        copy input array, set to False to save memory
-    dtype : dtype, optional
-        The type of the returned array and of the accumulator in 
-        which the elements are summed.  By default, the dtype of 
-        `a` is used.
+        copy input array, set to ``False`` to save memory
+    dtype : object to be converted to a data type object
+        The data type of the returned array and of the accumulator
+        for the multiple-tau computation.
 
 
     Returns
     -------
-    crosscorrelation : ndarray
-        Nx2 array containing lag time and cross-correlation
+    cross_correlation : ndarray of shape (N,2)
+        the lag time (1st column) and the cross-correlation (2nd column).
 
 
     Notes
     -----
+    .. versionchanged :: 0.1.6
+       Compute the correlation for zero lag time and correctly normalize
+       the correlation for a complex input sequence `v`.
+
     The algorithm computes the correlation with the convention of the
     curve decaying to zero.
 
     For experiments like e.g. fluorescence correlation spectroscopy,
-    the signal can be normalized to `M-k` by invoking:
+    the signal can be normalized to :math:`M-k` by invoking ``normalize = True``.           
 
-           normalize = True
-
-    For emulating the numpy.correlate behavior on a logarithmic
-    scale (default behavior) use:
-
-           normalize = False
+    For normalizing according to the behavior of :py:func:`numpy.correlate`,
+    use ``normalize = False``.
 
 
     Examples
     --------
-    >>> from numpy import dtype
+    >>> import numpy as np
     >>> from multipletau import correlate
-    >>> correlate(range(42), range(1,43), m=2, dtype=dtype(float))
-    array([[  1.00000000e+00,   2.38210000e+04],
+    >>> correlate(range(42), range(1,43), m=2, dtype=np.float)
+    array([[  0.00000000e+00,   2.46820000e+04],
+           [  1.00000000e+00,   2.38210000e+04],
            [  2.00000000e+00,   2.29600000e+04],
            [  4.00000000e+00,   2.12325000e+04],
            [  8.00000000e+00,   1.58508000e+04]])
 
     """
+    assert isinstance(copy, bool)
+    assert isinstance(normalize, bool)
     # See `autocorrelate` for better-documented code.
     traceavg1 = np.average(v)
     traceavg2 = np.average(a)
-    if normalize and traceavg1 * traceavg2 == 0:
-        raise ZeroDivisionError("Normalization not possible. The " +
-                                "average of the input *binned_array* " +
-                                "is zero.")
+    if normalize:
+        assert traceavg1 != 0, "Cannot normalize: Average of `v` is zero!"
+        assert traceavg2 != 0, "Cannot normalize: Average of `a` is zero!"
 
-    trace1 = np.array(v, dtype=dtype, copy=copy)
-    dtype = trace1.dtype
+    if dtype is None:
+        dtype = np.dtype(v[0].__class__)
+        dtype2 = np.dtype(a[0].__class__)
+        if dtype != dtype2:
+            if dtype.kind == "c" or dtype2.kind == "c":
+                # The user might try to combine complex64 and float128.
+                warnings.warn("Input dtypes not equal; casting to np.complex!")
+                dtype = np.dtype(np.complex)
+            else:
+                warnings.warn("Input dtypes not equal; casting to np.float!")
+                dtype = np.dtype(np.float)
+    else:
+        dtype = np.dtype(dtype)
+
+    if not dtype.kind in ["c", "f"]:
+        warnings.warn("Input dtype is not float; casting to np.float!")
+        dtype = np.dtype(np.float)
 
-    if dtype.kind in ["b", "i", "u"]:
-        warnings.warn(
-            "Converting input data type ({}) to float.".format(dtype))
-        dtype = np.dtype(float)
-        trace1 = np.array(v, dtype=dtype, copy=copy)
+    trace1 = np.array(v, dtype=dtype, copy=copy)
 
     # Prevent traces from overwriting each other
     if a is v:
@@ -327,33 +343,33 @@ def correlate(a, v, m=16, deltat=1, normalize=False,
 
     trace2 = np.array(a, dtype=dtype, copy=copy)
 
+    assert trace1.shape[0] == trace2.shape[0], "`a`,`v` must have same length!"
+
     # Complex data
     if dtype.kind == "c":
-        trace1.imag *= -1
+        np.conjugate(trace1, out=trace1)
 
     # Check parameters
-    if np.around(m / 2) != m / 2:
-        mold = 1 * m
-        m = int((np.around(m / 2) + 1) * 2)
+    if m//2 != m/2:
+        mold = m
+        m = np.int(m//2 + 1) * 2
         warnings.warn("Invalid value of m={}. Using m={} instead"
                       .format(mold, m))
     else:
-        m = int(m)
+        m = np.int(m)
 
-    if len(a) != len(v):
-        raise ValueError("Input arrays must be of equal length.")
 
-    N = N0 = len(trace1)
+    N = N0 = trace1.shape[0]
     # Find out the length of the correlation function.
     # The integer k defines how many times we can average over
     # two neighboring array elements in order to obtain an array of
     # length just larger than m.
-    k = int(np.floor(np.log2(N / m)))
+    k = np.int(np.floor(np.log2(N/m)))
 
     # In the base2 multiple-tau scheme, the length of the correlation
     # array is (only taking into account values that are computed from
     # traces that are just larger than m):
-    lenG = np.int(np.floor(m + k * m / 2))
+    lenG = m + k * m//2 + 1
 
     G = np.zeros((lenG, 2), dtype=dtype)
     normstat = np.zeros(lenG, dtype=dtype)
@@ -361,49 +377,51 @@ def correlate(a, v, m=16, deltat=1, normalize=False,
 
     # We use the fluctuation of the signal around the mean
     if normalize:
-        trace1 -= traceavg1
+        trace1 -= np.conj(traceavg1)
         trace2 -= traceavg2
-    if N < 2 * m:
-        # Otherwise the following for-loop will fail:
-        raise ValueError("len(binned_array) must be larger than 2m.")
-    # Calculate autocorrelation function for first m bins
-    for n in range(1, m + 1):
-        G[n - 1, 0] = deltat * n
-        G[n - 1, 1] = np.sum(trace1[:N - n] * trace2[n:])
-        normstat[n - 1] = N - n
-        normnump[n - 1] = N
+
+    # Otherwise the following for-loop will fail:
+    assert N >= 2*m, "len(a) must be larger than 2m!"
+
+    # Calculate autocorrelation function for first m+1 bins
+    for n in range(0, m + 1):
+        G[n, 0] = deltat * n
+        G[n, 1] = np.sum(trace1[:N - n] * trace2[n:])
+        normstat[n] = N - n
+        normnump[n] = N
     # Check if len(trace) is even:
-    if N % 2 == 1:
+    if N%2 == 1:
         N -= 1
     # Add up every second element
-    trace1 = (trace1[:N:2] + trace1[1:N + 1:2]) / 2
-    trace2 = (trace2[:N:2] + trace2[1:N + 1:2]) / 2
-    N /= 2
+    trace1 = (trace1[:N:2] + trace1[1:N:2]) / 2
+    trace2 = (trace2[:N:2] + trace2[1:N:2]) / 2
+    N //= 2
 
     for step in range(1, k + 1):
         # Get the next m/2 values of the trace
-        for n in range(1, int(m / 2) + 1):
-            idx = int(m + n - 1 + (step - 1) * m / 2)
-            if len(trace1[:N - (n + m / 2)]) == 0:
+        for n in range(1, m//2 + 1):
+            npmd2 = (n + m//2)
+            idx = m + n + (step - 1) * m//2
+            if len(trace1[:N - npmd2]) == 0:
                 # Abort
                 G = G[:idx - 1]
                 normstat = normstat[:idx - 1]
                 normnump = normnump[:idx - 1]
                 break
             else:
-                G[idx, 0] = deltat * (n + m / 2) * 2**step
+                G[idx, 0] = deltat * npmd2 * 2**step
                 G[idx, 1] = np.sum(
-                    trace1[:N - (n + m / 2)] * trace2[(n + m / 2):])
-                normstat[idx] = N - (n + m / 2)
+                    trace1[:N - npmd2] * trace2[npmd2:])
+                normstat[idx] = N - npmd2
                 normnump[idx] = N
 
         # Check if len(trace) is even:
-        if N % 2 == 1:
+        if N%2 == 1:
             N -= 1
         # Add up every second element
-        trace1 = (trace1[:N:2] + trace1[1:N + 1:2]) / 2
-        trace2 = (trace2[:N:2] + trace2[1:N + 1:2]) / 2
-        N /= 2
+        trace1 = (trace1[:N:2] + trace1[1:N:2]) / 2
+        trace2 = (trace2[:N:2] + trace2[1:N:2]) / 2
+        N //= 2
 
     if normalize:
         G[:, 1] /= traceavg1 * traceavg2 * normstat
@@ -416,51 +434,59 @@ def correlate(a, v, m=16, deltat=1, normalize=False,
 def correlate_numpy(a, v, deltat=1, normalize=False,
                     dtype=None, copy=True):
     """
-    Convenience function that wraps around numpy.correlate and
-    returns the data as multipletau.correlate does.
+    Convenience function that wraps around :py:func:`numpy.correlate` and
+    returns the correlation in the same format as :func:`correlate` does.
+
 
     Parameters
     ----------
-    a, v : array_like
+    a, v : array-like
         input sequences
     deltat : float
         distance between bins
     normalize : bool
-        normalize the result to the square of the average input
-        signal and the factor (M-k). The resulting curve follows
+        normalize the result to the square of the average input signal
+        and the factor :math:`M-k`. The resulting curve follows
         the convention of decaying to zero for large lag times.
     copy : bool
-        copy input array, set to False to save memory
-    dtype : dtype, optional
-        The type of the returned array and of the accumulator in 
-        which the elements are summed.  By default, the dtype of 
-        `a` is used.
+        copy input array, set to ``False`` to save memory
+    dtype : object to be converted to a data type object
+        The data type of the returned array.
+
 
     Returns
     -------
-    crosscorrelation : ndarray
-        Nx2 array containing lag time and cross-correlation
-    """
+    cross_correlation : ndarray of shape (N,2)
+        the lag time (1st column) and the cross-correlation (2nd column).
 
-    avg = np.average(a)
-    vvg = np.average(v)
-
-    if dtype is None:
-        dtype = a.dtype
-
-    if len(a) != len(v):
-        raise ValueError("Arrays must be of same length.")
 
+    Notes
+    -----
+    .. versionchanged :: 0.1.6
+       Removed erroneous normalization when `normalize==False`.
+    """
     ab = np.array(a, dtype=dtype, copy=copy)
     vb = np.array(v, dtype=dtype, copy=copy)
 
-    Gd = np.correlate(ab - avg, vb - vvg, mode="full")[len(ab) - 1:]
+    assert ab.shape[0] == vb.shape[0], "`a`,`v` must have same length!"
+
+    avg = np.average(ab)
+    vvg = np.average(vb)
+
+    if normalize:
+        ab -= avg
+        vb -= vvg
+        assert avg != 0, "Cannot normalize: Average of `a` is zero!"
+        assert vvg != 0, "Cannot normalize: Average of `v` is zero!"
+
+    Gd = np.correlate(ab, vb, mode="full")[len(ab) - 1:]
 
     if normalize:
         N = len(Gd)
         m = N - np.arange(N)
         Gd /= m * avg * vvg
-    G = np.zeros((len(Gd), 2))
+    
+    G = np.zeros((len(Gd), 2), dtype=dtype)
     G[:, 1] = Gd
     G[:, 0] = np.arange(len(Gd)) * deltat
     return G
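
The autocorrelate docstring example above can be traced through the new
indexing by hand; a sketch, assuming the base-2 scheme as implemented
in this version:

    import numpy as np
    from multipletau import autocorrelate

    # N = 42 samples, m = 2 points per level:
    # - lags 0..m (i.e. 0, 1, 2) are computed from the full trace,
    # - every further level averages neighbouring bins (halving the
    #   trace) and adds m//2 = 1 lag: tau = (n + m//2) * 2**step,
    #   giving 4, 8, ...
    # Once the binned trace becomes too short, the result is truncated,
    # so only five rows survive for N = 42.
    G = autocorrelate(range(42), m=2, dtype=np.float64)
    print(G[:, 0])  # -> [0. 1. 2. 4. 8.]
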
diff --git a/multipletau/_version.py b/multipletau/_version.py
index 75cddc8..83777cb 100644
--- a/multipletau/_version.py
+++ b/multipletau/_version.py
@@ -1,2 +1,132 @@
 #!/usr/bin/env python
-version = "0.1.5"
+"""
+Determine package version for git repositories.
+
+Each time this file is imported it checks if the ".git" folder is
+present and if so, obtains the version from the git history using
+`git describe`. This information is then stored in the file
+`_version_save.py` which is not versioned by git, but distributed
+along with e.g. pypi.
+"""
+from __future__ import print_function
+
+# Put the entire script into a `True` statement and add the hint
+# `pragma: no cover` to ignore code coverage here.
+if True: # pragma: no cover
+    import imp
+    import os
+    from os.path import join, abspath, dirname
+    import subprocess
+    import sys
+    import time
+    import traceback
+    import warnings
+
+    def git_describe():
+        """
+        Returns a string describing the version returned by the
+        command `git describe --tags HEAD`.
+        If it is not possible to determine the correct version,
+        then an empty string is returned.
+        """
+        # make sure we are in a directory that belongs to the correct
+        # repository.
+        ourdir  = dirname(abspath(__file__))
+        
+        def _minimal_ext_cmd(cmd):
+            # construct minimal environment
+            env = {}
+            for k in ['SYSTEMROOT', 'PATH']:
+                v = os.environ.get(k)
+                if v is not None:
+                    env[k] = v
+            # LANGUAGE is used on win32
+            env['LANGUAGE'] = 'C'
+            env['LANG'] = 'C'
+            env['LC_ALL'] = 'C'
+            out = subprocess.Popen(cmd, stdout = subprocess.PIPE, env=env).communicate()[0]
+            return out
+    
+        # change directory
+        olddir = abspath(os.curdir)
+        os.chdir(ourdir)
+        
+        try:
+            out = _minimal_ext_cmd(['git', 'describe', '--tags', 'HEAD'])
+            GIT_REVISION = out.strip().decode('ascii')
+        except OSError:
+            GIT_REVISION = ""
+            
+        # go back to original directory
+        os.chdir(olddir)
+        
+        return GIT_REVISION
+    
+    
+    def load_version(versionfile):
+        """ load version from version_save.py
+        """
+        longversion = ""
+        try:
+            _version_save = imp.load_source("_version_save", versionfile)
+            longversion = _version_save.longversion
+        except:
+            try:
+                from ._version_save import longversion
+            except:
+                try:
+                    from _version_save import longversion
+                except:
+                    pass
+    
+        return longversion
+    
+    
+    def save_version(version, versionfile):
+        """ save version to version_save.py
+        """
+        data="#!/usr/bin/env python\n"+\
+             "# This file was created automatically\n"+\
+             "longversion='{VERSION}'"
+        try:
+            with open(versionfile, "w") as fd:
+                fd.write(data.format(VERSION=version))
+        except:
+            warnings.warn("Could not write package version to {}.".format(versionfile))
+    
+    versionfile = join(dirname(abspath(__file__)), "_version_save.py")
+    
+    ## Determine the accurate version
+    longversion = ""
+    
+    # 1. git describe
+    try:
+        # Get the version using `git describe`
+        longversion = git_describe()
+    except:
+        pass
+    
+    # 2. previously created version file
+    if longversion == "":
+        # Either this is not a git repository or we are in the
+        # wrong git repository.
+        # Get the version from the previously generated `_version_save.py`
+        longversion = load_version(versionfile)
+    
+    # 3. last resort: date
+    if longversion == "":
+        print("Could not determine version. Reason:")
+        print(traceback.format_exc())
+        ctime = os.stat(__file__)[8]
+        longversion = time.strftime("%Y.%m.%d-%H-%M-%S", time.gmtime(ctime))
+        print("Using creation time to determine version: {}".format(longversion))
+    
+    if not hasattr(sys, 'frozen'):
+        # Save the version to `_version_save.py` to allow distribution using
+        # `python setup.py sdist`.
+        # This is only done if the program is not frozen (e.g. with pyinstaller).
+        if longversion != load_version(versionfile):
+            save_version(longversion, versionfile)
+    
+    # PEP 440-conform development version:
+    version = ".dev".join(longversion.split("-")[:2])
diff --git a/setup.cfg b/setup.cfg
index 861a9f5..9c27bff 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -1,3 +1,6 @@
+[aliases]
+test=pytest
+
 [egg_info]
 tag_build = 
 tag_date = 0
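
With the new `[aliases]` section, `python setup.py test` is dispatched
to pytest (via the `pytest-runner` requirement added in setup.py
below), so the following two invocations should be equivalent,
assuming pytest is installed:

    python setup.py test
    python -m pytest tests/
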
diff --git a/setup.py b/setup.py
index 821da70..9de41bb 100644
--- a/setup.py
+++ b/setup.py
@@ -2,11 +2,9 @@
 # -*- coding: utf-8 -*-
 # To create a distribution package for pip or easy-install:
 # python setup.py sdist
-from os.path import join, dirname, realpath
-from setuptools import setup, find_packages, Command
-import subprocess as sp
+from os.path import exists, dirname, realpath
+from setuptools import setup
 import sys
-from warnings import warn
 
 
 author = u"Paul Müller"
@@ -23,37 +21,6 @@ except:
     version = "unknown"
 
 
-
-class PyDocGitHub(Command):
-    """ Upload the docs to GitHub gh-pages branch
-    """
-    user_options = []
-    def initialize_options(self):
-        pass
-
-    def finalize_options(self):
-        pass
-
-    def run(self):
-        errno = sp.call([sys.executable, 'doc/commit_gh-pages.py'])
-        raise SystemExit(errno)
-
-
-class PyTest(Command):
-    """ Perform pytests
-    """
-    user_options = []
-    def initialize_options(self):
-        pass
-
-    def finalize_options(self):
-        pass
-
-    def run(self):
-        errno = sp.call([sys.executable, 'tests/runtests.py'])
-        raise SystemExit(errno)
-
-
 if __name__ == "__main__":
     setup(
         name=name,
@@ -65,25 +32,21 @@ if __name__ == "__main__":
         package_dir={name: name},
         license="BSD (3 clause)",
         description=description,
-        long_description=open(join(dirname(__file__), 'README.txt')).read(),
+        long_description=open('README.rst').read() if exists('README.rst') else '',
         install_requires=["NumPy >= 1.5.1"],
         keywords=["multiple", "tau", "FCS", "correlation", "spectroscopy",
                   "fluorescence"],
-        extras_require={
-                        'doc': ['sphinx']
-                       },
+        extras_require={'doc': ['sphinx']},
+        setup_requires=['pytest-runner'],
+        tests_require=["pytest"],
         classifiers= [
             'Operating System :: OS Independent',
             'Programming Language :: Python :: 2.7',
-            'Programming Language :: Python :: 3.2',
             'Programming Language :: Python :: 3.3',
+            'Programming Language :: Python :: 3.4',
             'Topic :: Scientific/Engineering :: Visualization',
             'Intended Audience :: Science/Research'
                      ],
-        platforms=['ALL'],
-        cmdclass = {'test': PyTest,
-                    'commit_doc': PyDocGitHub,
-                    },
+        platforms=['ALL']
         )
 
-
diff --git a/tests/README.md b/tests/README.md
index 0d5eb56..9ce9ce9 100644
--- a/tests/README.md
+++ b/tests/README.md
@@ -1,8 +1,9 @@
 ### Test Scripts
 
-This will run all tests:
 
-    python runtests.py
+Execute all tests using `setup.py` in the parent directory:
+
+    python setup.py test
 
 
 ### Running single tests
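
Since each test module in this commit runs its own `test_*` functions
under `__main__`, a single test file can also be executed directly,
e.g.:

    python tests/test_autocorrelate.py
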
diff --git a/tests/data/test_autocorrelate.py_test_ac_m.npy b/tests/data/test_autocorrelate.py_test_ac_m.npy
new file mode 100644
index 0000000..155754c
Binary files /dev/null and b/tests/data/test_autocorrelate.py_test_ac_m.npy differ
diff --git a/tests/data/test_autocorrelate.py_test_ac_normalize.npy b/tests/data/test_autocorrelate.py_test_ac_normalize.npy
new file mode 100644
index 0000000..08825fc
Binary files /dev/null and b/tests/data/test_autocorrelate.py_test_ac_normalize.npy differ
diff --git a/tests/data/test_autocorrelate.py_test_ac_simple.npy b/tests/data/test_autocorrelate.py_test_ac_simple.npy
new file mode 100644
index 0000000..71fb23f
Binary files /dev/null and b/tests/data/test_autocorrelate.py_test_ac_simple.npy differ
diff --git a/tests/data/test_correlate.py_test_cc_m.npy b/tests/data/test_correlate.py_test_cc_m.npy
new file mode 100644
index 0000000..7dcaa48
Binary files /dev/null and b/tests/data/test_correlate.py_test_cc_m.npy differ
diff --git a/tests/data/test_correlate.py_test_cc_normalize.npy b/tests/data/test_correlate.py_test_cc_normalize.npy
new file mode 100644
index 0000000..b7e5c2a
Binary files /dev/null and b/tests/data/test_correlate.py_test_cc_normalize.npy differ
diff --git a/tests/data/test_correlate.py_test_cc_simple.npy b/tests/data/test_correlate.py_test_cc_simple.npy
new file mode 100644
index 0000000..6f56a61
Binary files /dev/null and b/tests/data/test_correlate.py_test_cc_simple.npy differ
diff --git a/tests/test_ac_cc.py b/tests/test_ac_cc.py
new file mode 100644
index 0000000..302615e
--- /dev/null
+++ b/tests/test_ac_cc.py
@@ -0,0 +1,139 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+""" Tests correlation-autocorrelation identity
+"""
+from __future__ import division, print_function
+
+import numpy as np
+import os
+from os.path import abspath, basename, dirname, join, split, exists
+import platform
+import sys
+import warnings
+import zipfile
+
+# Add parent directory to beginning of path variable
+DIR = dirname(abspath(__file__))
+sys.path = [split(DIR)[0]] + sys.path
+
+import multipletau
+
+from test_autocorrelate import get_sample_arrays
+
+
+def test_ac_cc_simple():
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    arrs = get_sample_arrays()
+    
+    rescc = []
+    for a in arrs:
+        r = multipletau.correlate(a=a, v=a,
+                                  m=16,
+                                  deltat=1,
+                                  normalize=False,
+                                  copy=True,
+                                  dtype=np.float)
+        rescc.append(r)
+    
+    rescc = np.concatenate(rescc)
+
+    resac = []
+    for a in arrs:
+        r = multipletau.autocorrelate(a=a,
+                                      m=16,
+                                      deltat=1,
+                                      normalize=False,
+                                      copy=True,
+                                      dtype=np.float)
+        resac.append(r)
+    
+    resac = np.concatenate(resac)
+    
+    assert np.all(resac==rescc)
+
+
+def test_ac_cc_normalize():
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    arrs = get_sample_arrays()
+    
+    res = []
+    for a in arrs:
+        r = multipletau.autocorrelate(a=a,
+                                      m=16,
+                                      deltat=1,
+                                      normalize=True,
+                                      copy=True,
+                                      dtype=np.float)
+        res.append(r)
+    
+    res = np.concatenate(res)
+
+    rescc = []
+    for a in arrs:
+        r = multipletau.correlate(a=a, v=a,
+                                  m=16,
+                                  deltat=1,
+                                  normalize=True,
+                                  copy=True,
+                                  dtype=np.float)
+        rescc.append(r)
+    
+    rescc = np.concatenate(rescc)
+
+    assert np.all(res==rescc)
+
+
+def test_ac_cc_m():
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    arrs = get_sample_arrays()
+
+    ms = [8, 16, 32, 64, 128]
+    a = np.concatenate(arrs)
+
+    res = []    
+    for m in ms:
+        r = multipletau.autocorrelate(a=a,
+                                      m=m,
+                                      deltat=1,
+                                      normalize=False,
+                                      copy=True,
+                                      dtype=np.float)
+        res.append(r)
+    res = np.concatenate(res)
+
+    rescc = []    
+    for m in ms:
+        r = multipletau.correlate(a=a, v=a,
+                                  m=m,
+                                  deltat=1,
+                                  normalize=False,
+                                  copy=True,
+                                  dtype=np.float)
+        rescc.append(r)
+        # test minimal length of array
+        _r2 = multipletau.correlate(a=a[:2*m], v=a[:2*m],
+                                    m=m,
+                                    deltat=1,
+                                    normalize=False,
+                                    copy=True,
+                                    dtype=np.float)
+    
+    rescc = np.concatenate(rescc)
+    assert np.all(res==rescc)
+
+
+if __name__ == "__main__":
+    # Run all tests
+    loc = locals()
+    for key in list(loc.keys()):
+        if key.startswith("test_") and hasattr(loc[key], "__call__"):
+            loc[key]()
diff --git a/tests/test_autocorrelate.py b/tests/test_autocorrelate.py
new file mode 100644
index 0000000..40a12f1
--- /dev/null
+++ b/tests/test_autocorrelate.py
@@ -0,0 +1,254 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+""" Tests autocorrelation algorithm
+"""
+from __future__ import division, print_function
+
+import numpy as np
+import os
+from os.path import abspath, basename, dirname, join, split, exists
+import platform
+import sys
+import warnings
+import zipfile
+
+# Add parent directory to beginning of path variable
+DIR = dirname(abspath(__file__))
+sys.path = [split(DIR)[0]] + sys.path
+
+import multipletau
+
+
+def get_reference_data(funcname, pyfile):
+    adir = os.path.dirname(pyfile)+"/data/"
+    aname = os.path.basename(pyfile)+"_"+funcname+".npy"
+    return np.load(adir + aname)
+
+
+def get_sample_arrays():
+    a = [-4.3,   1,    9, -99.2, 13]
+    b = [9921, 281, 23.5,   5.3, 77]
+    l = [  33,  92,   47,    54, 99]
+    r = [   0,   1,   12,     4,  0] 
+    p = [   1,   4,   .5,     2,  3]
+    arrs = []
+    
+    for ai, bi, li, ri, pi in zip(a,b,l,r,p): 
+        x = np.linspace(ai,bi,li)
+        arr = (x*np.roll(x,ri))**pi
+        arrs.append(arr)
+    
+    return arrs
+
+
+def test_ac_simple():
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    arrs = get_sample_arrays()
+    
+    res = []
+    for a in arrs:
+        r = multipletau.autocorrelate(a=a,
+                                      m=16,
+                                      deltat=1,
+                                      normalize=False,
+                                      copy=True,
+                                      dtype=np.float)
+        res.append(r)
+    
+    res = np.concatenate(res)
+    #np.save(os.path.dirname(__file__)+"/data/"+os.path.basename(__file__)+"_"+myname+".npy", res)
+    ref = get_reference_data(myname, __file__)
+
+    assert np.all(res==ref)
+
+
+def test_ac_normalize():
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    arrs = get_sample_arrays()
+    
+    res = []
+    for a in arrs:
+        r = multipletau.autocorrelate(a=a,
+                                      m=16,
+                                      deltat=1,
+                                      normalize=True,
+                                      copy=True,
+                                      dtype=np.float)
+        res.append(r)
+    
+    res = np.concatenate(res)
+    #np.save(os.path.dirname(__file__)+"/data/"+os.path.basename(__file__)+"_"+myname+".npy", res)
+    ref = get_reference_data(myname, __file__)
+
+    assert np.all(res==ref)
+
+
+def test_ac_m():
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    arrs = get_sample_arrays()
+
+    ms = [8, 16, 32, 64, 128]
+    a = np.concatenate(arrs)
+
+    res = []    
+    for m in ms:
+        r = multipletau.autocorrelate(a=a,
+                                      m=m,
+                                      deltat=1,
+                                      normalize=False,
+                                      copy=True,
+                                      dtype=np.float)
+        res.append(r)
+
+        # test minimal length of array
+        _r2 = multipletau.autocorrelate(a=a[:2*m],
+                                        m=m,
+                                        deltat=1,
+                                        normalize=False,
+                                        copy=True,
+                                        dtype=np.float)
+    
+    res = np.concatenate(res)
+    #np.save(os.path.dirname(__file__)+"/data/"+os.path.basename(__file__)+"_"+myname+".npy", res)
+    ref = get_reference_data(myname, __file__)
+
+    assert np.all(res==ref)
+
+
+def test_ac_copy():
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    arrs = get_sample_arrays()
+
+    res1 = []
+    for a in arrs:
+        r = multipletau.autocorrelate(a=a,
+                                      m=16,
+                                      deltat=1,
+                                      normalize=True,
+                                      copy=True,
+                                      dtype=np.float)
+        res1.append(r)
+
+    res2 = []
+    for a in arrs:
+        r = multipletau.autocorrelate(a=a,
+                                      m=16,
+                                      deltat=1,
+                                      normalize=True,
+                                      copy=False,
+                                      dtype=np.float)
+        res2.append(r)
+    
+    # simple test if result is the same
+    assert np.all(np.concatenate(res1) == np.concatenate(res2))
+
+    arrs = np.concatenate(arrs)
+    refarrs = np.concatenate(get_sample_arrays())
+
+    # make sure the copy function really changes something
+    assert not np.all(arrs == refarrs)
+
+    
+def test_ac_dtype():
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    a = np.round(get_sample_arrays()[0])
+
+
+    # integer
+    rf = multipletau.autocorrelate(a=a,
+                                   m=16,
+                                   deltat=1,
+                                   normalize=True,
+                                   copy=True,
+                                   dtype=np.float)
+
+    ri = multipletau.autocorrelate(a=a,
+                                   m=16,
+                                   deltat=1,
+                                   normalize=True,
+                                   copy=True,
+                                   dtype=np.uint)
+
+    ri2 = multipletau.autocorrelate(a=np.array(a, dtype=np.uint),
+                                    m=16,
+                                    deltat=1,
+                                    normalize=True,
+                                    copy=True,
+                                    dtype=None)
+    
+    assert ri.dtype == np.dtype(np.float), "an unsupported dtype should default to np.float"
+    assert ri2.dtype == np.dtype(np.float), "an unsupported dtype should default to np.float"
+    assert np.all(rf == ri), "result should be the same, because the input is the same"
+    assert np.all(rf == ri2), "result should be the same, because the input is the same"
+
+
+def test_ac_m_wrong():
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    a = get_sample_arrays()[0]
+
+    # reference result with a valid (even integer) m
+    r1 = multipletau.autocorrelate(a=a,
+                                   m=16,
+                                   deltat=1,
+                                   normalize=True,
+                                   copy=True,
+                                   dtype=np.float)
+
+    r2 = multipletau.autocorrelate(a=a,
+                                   m=15,
+                                   deltat=1,
+                                   normalize=True,
+                                   copy=True,
+                                   dtype=np.float)
+
+    r3 = multipletau.autocorrelate(a=a,
+                                   m=15.5,
+                                   deltat=1,
+                                   normalize=True,
+                                   copy=True,
+                                   dtype=np.float)
+
+    r4 = multipletau.autocorrelate(a=a,
+                                   m=14.5,
+                                   deltat=1,
+                                   normalize=True,
+                                   copy=True,
+                                   dtype=np.float)
+
+    r5 = multipletau.autocorrelate(a=a,
+                                   m=16.,
+                                   deltat=1,
+                                   normalize=True,
+                                   copy=True,
+                                   dtype=np.float)
+    assert np.all(r1==r2)
+    assert np.all(r1==r3)
+    assert np.all(r1==r4)
+    assert np.all(r1==r5)
+
+
+if __name__ == "__main__":
+    # Run all tests
+    loc = locals()
+    for key in list(loc.keys()):
+        if key.startswith("test_") and hasattr(loc[key], "__call__"):
+            loc[key]()
+    
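
Each of the tests above regenerates its result with the commented-out
np.save line and compares it against a stored reference via
get_reference_data, which test_autocorrelate.py defines further up and
test_correlate.py imports below. Assuming the helper simply mirrors the
path scheme of the np.save calls, a minimal sketch would be:

    import os
    import numpy as np

    def get_reference_data(funcname, filename):
        # <tests dir>/data/<test file name>_<test function name>.npy,
        # the same path the commented-out np.save calls write to
        adir = os.path.dirname(filename) + "/data/"
        aname = os.path.basename(filename) + "_" + funcname + ".npy"
        return np.load(adir + aname)
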
diff --git a/tests/test_basic.py b/tests/test_basic.py
index 5895d35..0cb994c 100644
--- a/tests/test_basic.py
+++ b/tests/test_basic.py
@@ -14,8 +14,8 @@ from multipletau import autocorrelate, correlate
 
 def test_ac():
     ist = autocorrelate(range(42), m=2, dtype=np.dtype(float))
-
-    soll = np.array([[  1.00000000e+00,   2.29600000e+04],
+    soll = np.array([[  0.00000000e+00,   2.38210000e+04],
+                     [  1.00000000e+00,   2.29600000e+04],
                      [  2.00000000e+00,   2.21000000e+04],
                      [  4.00000000e+00,   2.03775000e+04],
                      [  8.00000000e+00,   1.50612000e+04]])
@@ -23,15 +23,19 @@ def test_ac():
 
 
 def test_cc():
-    soll = correlate(range(42), range(1,43), m=2, dtype=np.dtype(float))
-    ist = np.array([[  1.00000000e+00,   2.38210000e+04],
-                    [  2.00000000e+00,   2.29600000e+04],
-                    [  4.00000000e+00,   2.12325000e+04],
-                    [  8.00000000e+00,   1.58508000e+04]])
+    ist = correlate(range(42), range(1,43), m=2, dtype=np.dtype(float))
+    soll = np.array([[  0.00000000e+00,   2.46820000e+04],
+                     [  1.00000000e+00,   2.38210000e+04],
+                     [  2.00000000e+00,   2.29600000e+04],
+                     [  4.00000000e+00,   2.12325000e+04],
+                     [  8.00000000e+00,   1.58508000e+04]])
     assert np.allclose(soll, ist)
-    
+
     
 if __name__ == "__main__":
-    test_ac()
-    test_cc()
-
+    # Run all tests
+    loc = locals()
+    for key in list(loc.keys()):
+        if key.startswith("test_") and hasattr(loc[key], "__call__"):
+            loc[key]()
+    
\ No newline at end of file
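
The new leading rows in both expected arrays are the zero-lag bins that
autocorrelate and correlate now return. Both values can be checked by
hand:

    >>> sum(i * i for i in range(42))        # zero lag of autocorrelate(range(42))
    23821
    >>> sum(i * (i + 1) for i in range(42))  # zero lag of correlate(range(42), range(1, 43))
    24682

which matches the 2.38210000e+04 and 2.46820000e+04 entries above.
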
diff --git a/tests/test_correlate.py b/tests/test_correlate.py
new file mode 100644
index 0000000..7861d5f
--- /dev/null
+++ b/tests/test_correlate.py
@@ -0,0 +1,300 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+""" Tests correlation algorithm
+"""
+from __future__ import division, print_function
+
+import numpy as np
+import os
+from os.path import abspath, basename, dirname, join, split, exists
+import platform
+import sys
+import warnings
+import zipfile
+
+# Add parent directory to beginning of path variable
+DIR = dirname(abspath(__file__))
+sys.path = [split(DIR)[0]] + sys.path
+
+import multipletau
+
+from test_autocorrelate import get_reference_data
+
+def get_sample_arrays_cplx():
+    a = [-4.3,   1,    9, -99.2, 13]
+    b = [9921, 281, 23.5,   5.3, 77]
+    c = [  12,   0,    2,   1.3, 33]
+    d = [  32,  .1,   -2,   6.3, 88]
+    l = [  33,  92,   47,    54, 99]
+    r = [   0,   1,   12,     4,  0] 
+    p = [   1,   4,   .5,     2,  3]
+    arrs = []
+    
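+    # Each parameter column yields one complex trace of length li:
+    # the real part is (x * roll(x, ri))**pi for x = linspace(ai, bi, li),
+    # the imaginary part is linspace(ci, di, li).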
+    for ai, bi, ci, di, li, ri, pi in zip(a,b,c,d,l,r,p): 
+        x = np.linspace(ai,bi,li)
+        y = np.linspace(ci,di,li)
+        arr = (x*np.roll(x,ri))**pi + 1j*y
+        arrs.append(arr)
+    
+    return arrs
+
+
+def test_cc_simple():
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    arrs = get_sample_arrays_cplx()
+    
+    res = []
+    for a in arrs:
+        r = multipletau.correlate(a=a,
+                                  v=a,
+                                  m=16,
+                                  deltat=1,
+                                  normalize=False,
+                                  copy=True,
+                                  dtype=np.complex)
+        res.append(r)
+    res = np.concatenate(res)
+
+    #np.save(os.path.dirname(__file__)+"/data/"+os.path.basename(__file__)+"_"+myname+".npy", res)
+    ref = get_reference_data(myname, __file__)
+
+    assert np.all(res==ref)
+
+    # also check result of autocorrelate
+    res2 = []
+    for a in arrs:
+        r = multipletau.autocorrelate(a=a,
+                                      m=16,
+                                      deltat=1,
+                                      normalize=False,
+                                      copy=True,
+                                      dtype=np.complex)
+        res2.append(r)
+    res2 = np.concatenate(res2)
+
+    assert np.all(res==res2)
+
+
+def test_cc_normalize():
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    arrs = get_sample_arrays_cplx()
+    
+    res = []
+    for a in arrs:
+        r = multipletau.correlate(a=a.real,
+                                  v=a.imag,
+                                  m=16,
+                                  deltat=1,
+                                  normalize=True,
+                                  copy=True,
+                                  dtype=np.float)
+        res.append(r)
+    
+    res = np.concatenate(res)
+    #np.save(os.path.dirname(__file__)+"/data/"+os.path.basename(__file__)+"_"+myname+".npy", res)
+    ref = get_reference_data(myname, __file__)
+
+    assert np.all(res==ref)
+
+
+def test_cc_m():
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    arrs = get_sample_arrays_cplx()
+
+    ms = [4, 8, 10, 16, 20, 64, 128]
+    a = np.concatenate(arrs)
+
+    res = []    
+    for m in ms:
+        r = multipletau.correlate(a=a,
+                                  v=a,
+                                  m=m,
+                                  deltat=1,
+                                  normalize=False,
+                                  copy=True,
+                                  dtype=np.complex)
+        res.append(r)
+
+        # also test the minimal allowed input length (2m)
+        _r2 = multipletau.correlate(a=a[:2*m],
+                                    v=a[:2*m],
+                                    m=m,
+                                    deltat=1,
+                                    normalize=False,
+                                    copy=True,
+                                    dtype=np.complex)
+    
+    res = np.concatenate(res)
+    #np.save(os.path.dirname(__file__)+"/data/"+os.path.basename(__file__)+"_"+myname+".npy", res)
+    ref = get_reference_data(myname, __file__)
+
+    assert np.all(res==ref)
+
+
+def test_cc_copy():
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    arrs = get_sample_arrays_cplx()
+
+    res1 = []
+    for a in arrs:
+        r = multipletau.correlate(a=a,
+                                  v=a,
+                                  m=16,
+                                  deltat=1,
+                                  normalize=True,
+                                  copy=True)
+        res1.append(r)
+
+    res2 = []
+    for a in arrs:
+        r = multipletau.correlate(a=a,
+                                  v=a,
+                                  m=16,
+                                  deltat=1,
+                                  normalize=True,
+                                  copy=False)
+        res2.append(r)
+    
+    # simple test if result is the same
+    assert np.all(np.concatenate(res1) == np.concatenate(res2))
+
+    arrs = np.concatenate(arrs)
+    refarrs = np.concatenate(get_sample_arrays_cplx())
+
+    # make sure the copy function really changes something
+    assert not np.all(arrs == refarrs)
+
+    
+def test_cc_dtype():
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    a = np.round(get_sample_arrays_cplx()[0].real)
+
+
+    # integer
+    rf = multipletau.correlate(a=a,
+                               v=a,
+                               m=16,
+                               deltat=1,
+                               normalize=True,
+                               copy=True,
+                               dtype=np.float)
+
+    ri = multipletau.correlate(a=a,
+                               v=a,
+                               m=16,
+                               deltat=1,
+                               normalize=True,
+                               copy=True,
+                               dtype=np.uint)
+
+    ri2 = multipletau.correlate(a=np.array(a, dtype=np.uint),
+                                v=np.array(a, dtype=np.uint),
+                                m=16,
+                                deltat=1,
+                                normalize=True,
+                                copy=True,
+                                dtype=None)
+    
+    assert ri.dtype == np.dtype(np.float), "an unsupported dtype should default to np.float"
+    assert ri2.dtype == np.dtype(np.float), "an unsupported dtype should default to np.float"
+    assert np.all(rf == ri), "result should be the same, because the input is the same"
+    assert np.all(rf == ri2), "result should be the same, because the input is the same"
+
+
+def test_cc_dtype2():
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    a = np.round(get_sample_arrays_cplx()[0])
+
+    print("this should issue a warning of unequal input dtypes, casting to complex")
+    rf = multipletau.correlate(a=a.real,
+                               v=a,
+                               m=16,
+                               deltat=1,
+                               normalize=True,
+                               copy=True)
+    assert np.dtype(rf.dtype) == np.dtype(np.complex)
+
+    print("this should issue a warning of unequal input dtypes, casting to float")
+    rf2 = multipletau.correlate(a=a.real,
+                                v=np.array(a.imag, dtype=np.int),
+                                m=16,
+                                deltat=1,
+                                normalize=True,
+                                copy=True)
+    assert np.dtype(rf2.dtype) == np.dtype(np.float)
+
+
+def test_cc_m_wrong():
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    a = get_sample_arrays_cplx()[0]
+
+    # reference result with a valid (even integer) m
+    r1 = multipletau.correlate(a=a,
+                               v=a,
+                               m=16,
+                               deltat=1,
+                               normalize=True,
+                               copy=True)
+
+    r2 = multipletau.correlate(a=a,
+                               v=a,
+                               m=15,
+                               deltat=1,
+                               normalize=True,
+                               copy=True)
+
+    r3 = multipletau.correlate(a=a,
+                               v=a,
+                               m=15.5,
+                               deltat=1,
+                               normalize=True,
+                               copy=True)
+
+    r4 = multipletau.correlate(a=a,
+                               v=a,
+                               m=14.5,
+                               deltat=1,
+                               normalize=True,
+                               copy=True)
+
+    r5 = multipletau.correlate(a=a,
+                               v=a,
+                               m=16.,
+                               deltat=1,
+                               normalize=True,
+                               copy=True)
+
+    assert np.all(r1==r2)
+    assert np.all(r1==r3)
+    assert np.all(r1==r4)
+    assert np.all(r1==r5)
+
+
+if __name__ == "__main__":
+    # Run all tests
+    loc = locals()
+    for key in list(loc.keys()):
+        if key.startswith("test_") and hasattr(loc[key], "__call__"):
+            loc[key]()
+    
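
The m_wrong tests above assert that m=15, 15.5, 14.5 and 16.0 all yield
the same result as m=16, i.e. that m is coerced to the next even
integer. A sketch of coercion logic consistent with those assertions
(the actual multipletau code may differ):

    import numpy as np

    def coerce_m(m):
        m = int(np.ceil(m))  # 15.5 -> 16, 14.5 -> 15
        if m % 2 != 0:       # 15 -> 16; m must be even so bins pair up
            m += 1
        return m

    assert [coerce_m(m) for m in (16, 15, 15.5, 14.5, 16.0)] == [16] * 5
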
diff --git a/tests/test_ref_numpy.py b/tests/test_ref_numpy.py
new file mode 100644
index 0000000..cdf3dfd
--- /dev/null
+++ b/tests/test_ref_numpy.py
@@ -0,0 +1,180 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+"""
+Compare to numpy data.
+"""
+import numpy as np
+from os.path import abspath, dirname, join
+import sys
+
+sys.path.insert(0, dirname(dirname(abspath(__file__))))
+
+import multipletau
+
+from test_correlate import get_sample_arrays_cplx
+
+
+
+def test_corresponds_ac():
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    a = np.concatenate(get_sample_arrays_cplx()).real
+    m=16
+
+    restau = multipletau.autocorrelate(a=1*a,
+                                       m=m,
+                                       copy=True,
+                                       normalize=True,
+                                       dtype=np.float128)
+
+    reslin = multipletau.correlate_numpy(a=1*a,
+                                         v=1*a,
+                                         copy=True,
+                                         normalize=True,
+                                         dtype=np.float128)
+
+    idx = np.array(restau[:,0].real, dtype=int)[:m]
+
+    assert np.allclose(reslin[idx, 1], restau[:m,1])
+
+
+def test_corresponds_ac_nonormalize():
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    a = np.concatenate(get_sample_arrays_cplx()).real
+    m=16
+
+    restau = multipletau.autocorrelate(a=1*a,
+                                       m=m,
+                                       copy=True,
+                                       normalize=False,
+                                       dtype=np.float128)
+
+    reslin = multipletau.correlate_numpy(a=1*a,
+                                         v=1*a,
+                                         copy=True,
+                                         normalize=False,
+                                         dtype=np.float128)
+
+    idx = np.array(restau[:,0].real, dtype=int)[:m+1]
+
+    assert np.allclose(reslin[idx, 1], restau[:m+1,1])
+
+
+def test_corresponds_cc():
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    a = np.concatenate(get_sample_arrays_cplx())
+    m=16
+
+    restau = multipletau.correlate(a=a,
+                                   v=a.imag+1j*a.real,
+                                   m=m,
+                                   copy=True,
+                                   normalize=True,
+                                   dtype=np.complex256)
+
+    reslin = multipletau.correlate_numpy(a=a,
+                                         v=a.imag+1j*a.real,
+                                         copy=True,
+                                         normalize=True,
+                                         dtype=np.complex256)
+
+    idx = np.array(restau[:,0].real, dtype=int)[:m+1]
+
+    assert np.allclose(reslin[idx, 1], restau[:m+1,1])
+
+
+def test_corresponds_cc_nonormalize():
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    a = np.concatenate(get_sample_arrays_cplx())
+    m=16
+
+    restau = multipletau.correlate(a=a,
+                                   v=a.imag+1j*a.real,
+                                   m=m,
+                                   copy=True,
+                                   normalize=False,
+                                   dtype=np.complex256)
+
+    reslin = multipletau.correlate_numpy(a=a,
+                                         v=a.imag+1j*a.real,
+                                         copy=True,
+                                         normalize=False,
+                                         dtype=np.complex256)
+
+    idx = np.array(restau[:,0].real, dtype=int)[:m+1]
+
+    assert np.allclose(reslin[idx, 1], restau[:m+1,1])
+    
+
+def test_corresponds_ac_first_loop():
+    """
+    numpy correlation:
+    G_m = sum_i(a_i * a_{i+m})
+
+    multipletau correlation, 2nd order (bin width 2), for even m:
+    b_j = (a_{2j} + a_{2j+1}) / 2
+    G_m = sum_j(b_j * b_{j+m/2})
+        = 1/4 * sum_i(a_{2i}   * a_{2i+m}   +
+                      a_{2i}   * a_{2i+m+1} +
+                      a_{2i+1} * a_{2i+m}   +
+                      a_{2i+1} * a_{2i+m+1})
+    
+    The values after the first m+1 lag times in the multipletau
+    correlation differ from the normal correlation, because the
+    traces are averaged over two consecutive items, effectively
+    halving the size of the trace. The multiple-tau correlation
+    can be compared to the regular correlation by using an even-
+    sized sequence (here of length 222) in which the elements 2i
+    and 2i+1 are equal, as is done in this test.
+    """
+    myframe = sys._getframe()
+    myname = myframe.f_code.co_name
+    print("running ", myname)
+    
+    a = [ arr / np.average(arr) for arr in get_sample_arrays_cplx() ]
+    a = np.concatenate(a)[:222]
+    # two consecutive elements are the same, so the multiple-tau method
+    # corresponds to the numpy correlation for the first loop.
+    a[::2] = a[1::2]
+    
+    for m in [2,4,6,8,10,12,14,16]:
+        restau = multipletau.correlate(a=a,
+                                       v=a.imag+1j*a.real,
+                                       m=m,
+                                       copy=True,
+                                       normalize=False,
+                                       dtype=np.complex256)
+        
+        reslin = multipletau.correlate_numpy(a=a,
+                                             v=a.imag+1j*a.real,
+                                             copy=True,
+                                             normalize=False,
+                                             dtype=np.complex256)
+        
+        # lag m+2 is the first lag computed after the m+1 initial bins
+        idtau = np.where(restau[:,0]==m+2)[0][0]
+        tau3 = restau[idtau, 1]
+
+        idref = np.where(reslin[:,0]==m+2)[0][0]
+        tau3ref = reslin[idref, 1]
+        
+        assert np.allclose(tau3, tau3ref)
+
+
+if __name__ == "__main__":
+    # Run all tests
+    loc = locals()
+    for key in list(loc.keys()):
+        if key.startswith("test_") and hasattr(loc[key], "__call__"):
+            loc[key]()
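
The identity in the test_corresponds_ac_first_loop docstring can be
verified in plain numpy, independently of multipletau. A short check
with hypothetical demo data:

    import numpy as np

    rng = np.random.RandomState(42)
    a = np.repeat(rng.rand(111), 2)  # 222 samples, elements 2i and 2i+1 equal
    b = (a[::2] + a[1::2]) / 2       # second-register binning, b_j = (a_2j + a_2j+1)/2
    assert np.all(b == a[::2])       # binning loses nothing for such a trace

    k = 3                            # lag in the binned timebase, m = 2k
    j = np.arange(len(b) - k)
    lhs = np.sum(b[:-k] * b[k:])     # sum_j(b_j * b_{j+m/2})
    rhs = 0.25 * np.sum((a[2*j] + a[2*j + 1]) *
                        (a[2*j + 2*k] + a[2*j + 2*k + 1]))
    assert np.allclose(lhs, rhs)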

-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/debian-med/python-multipletau.git


