[med-svn] [pycorrfit] 13/18: Imported Upstream version 0.9.9+dfsg

Alex Mestiashvili malex-guest at moszumanska.debian.org
Fri Jul 22 14:23:43 UTC 2016


This is an automated email from the git hooks/post-receive script.

malex-guest pushed a commit to branch master
in repository pycorrfit.

commit 8e4201ecf979b977fcd143643eb408c15b016482
Author: Alexandre Mestiashvili <alex at biotec.tu-dresden.de>
Date:   Fri Jul 22 15:56:14 2016 +0200

    Imported Upstream version 0.9.9+dfsg
---
 ChangeLog.txt                                  |   8 +-
 README.rst                                     |   4 +-
 doc/PyCorrFit_doc.tex                          |   2 +-
 doc/PyCorrFit_doc_content.tex                  |  30 +--
 pycorrfit/correlation.py                       |   2 +-
 pycorrfit/fit.py                               |  10 +-
 pycorrfit/gui/page.py                          |  37 +--
 pycorrfit/gui/threaded_progress.py             | 256 ++++++++++++++++++++
 pycorrfit/gui/tools/batchcontrol.py            |  80 ++++--
 pycorrfit/gui/tools/statistics.py              |   2 +-
 pycorrfit/readfiles/read_SIN_correlator_com.py | 323 +++++++++++++++++--------
 tests/test_file_formats.py                     |   2 +-
 12 files changed, 593 insertions(+), 163 deletions(-)

diff --git a/ChangeLog.txt b/ChangeLog.txt
index 68700b0..35beef5 100644
--- a/ChangeLog.txt
+++ b/ChangeLog.txt
@@ -1,3 +1,9 @@
+0.9.9
+- Remove admin-requirement during install (Windows)
+- Support newer correlator.com .sin file format (experimental, #135)
+- Add smart progress dialog for fitting (#155)
+- Statistics: check "filename/title" by default (#151)
+- Documentation: fix bad LaTeX commands (#163)
 0.9.8
 - Bugfixes:
   - Indexing error when saving sessions (#154)
@@ -68,7 +74,7 @@
   - Computation of average intensity did not work
     correctly for unequally spaced traces
 - Update .pt3 reader to version 8399ff7401
-- Import traces of .pt3 files (#118)
+- Import traces of .pt3 files (experimental, #118)
   Warning: Absolute values for intensity might be wrong
 0.9.1
 - Tool 'Overlay curves': improve UI (#117)
diff --git a/README.rst b/README.rst
index 24189ba..457b9a2 100644
--- a/README.rst
+++ b/README.rst
@@ -1,7 +1,7 @@
 |PyCorrFit|
 ===========
 
-|PyPI Version| |Build Status Win| |Build Status Mac|
+|PyPI Version| |Build Status Win| |Build Status Mac| |Coverage Status|
 
 A graphical fitting tool for fluorescence correlation spectroscopy (FCS) that comes with support for several file formats, can be applied to a large variety of problems, and attempts to be as user-friendly as possible. Some of the features are
 
@@ -97,3 +97,5 @@ the running time of the build). From there you can download the Windows installe
    :target: https://ci.appveyor.com/project/paulmueller/pycorrfit
 .. |Build Status Mac| image:: https://img.shields.io/travis/FCS-analysis/PyCorrFit/master.svg?label=build_mac
    :target: https://travis-ci.org/FCS-analysis/PyCorrFit
+.. |Coverage Status| image:: https://img.shields.io/coveralls/FCS-analysis/PyCorrFit.svg
+   :target: https://coveralls.io/r/FCS-analysis/PyCorrFit
diff --git a/doc/PyCorrFit_doc.tex b/doc/PyCorrFit_doc.tex
index a11b32a..8a9f74d 100755
--- a/doc/PyCorrFit_doc.tex
+++ b/doc/PyCorrFit_doc.tex
@@ -20,7 +20,7 @@
 
 %Für englische Schriften:
 \usepackage[english]{babel}
-\usepackage{sistyle}
+\usepackage{siunitx}
 
 
 \usepackage[top = 2cm, left = 2.5cm, right = 2cm, bottom = 2.5cm]{geometry}
diff --git a/doc/PyCorrFit_doc_content.tex b/doc/PyCorrFit_doc_content.tex
index 6adcbad..16334ea 100755
--- a/doc/PyCorrFit_doc_content.tex
+++ b/doc/PyCorrFit_doc_content.tex
@@ -90,7 +90,7 @@ The fitting itself is usually explored with a representative data set. Here, the
 \begin{figure}[h]
 \centering
 \includegraphics[width=\linewidth]{PyCorrFit_Screenshot_Main.png}
- \mycaption{user interface of PyCorrFit}{Confocal measurement of nanomolar Alexa488 in aqueous solution. To avoid after-pulsing, the autocorrelation curve was measured by cross-correlating signals from two detection channels using a 50 \% beamsplitter. Fitting reveals the average number of observed particles ($n \approx 6$) and their residence time in the detection volume ($\tau_{\rm diff} = \SI{28}{\mu s})$. \label{fig:mainwin} }
+ \mycaption{user interface of PyCorrFit}{Confocal measurement of nanomolar Alexa488 in aqueous solution. To avoid after-pulsing, the autocorrelation curve was measured by cross-correlating signals from two detection channels using a 50 \% beamsplitter. Fitting reveals the average number of observed particles ($n \approx 6$) and their residence time in the detection volume ($\tau_{\mathrm{diff}} = \SI{28}{\mu s})$. \label{fig:mainwin} }
 \end{figure}
 Together with a system's terminal of the platform on which \textit{PyCorrFit} was installed (Windows, Linux, MacOS X), the \textit{main window} opens when starting the program. The window title bar contains the version of \textit{PyCorrFit} and, if a session was re-opened or saved, the name of the fitting session. A menu bar provides access to many supporting tools and additional information as thoroughly described in \hyref{Chapter}{sec:menub}. 
 
@@ -499,7 +499,7 @@ The factor $q$ combines all the photo-physical quantities associated with fluore
 	\label{eq2}
 	\langle S(t) \rangle = \lim_{t\to\ \infty} \int S(t) \,dt = q \int W(\vec{r})  c \,dV = qn
 	\end{equation}
-\hyref{Equation}{eq2} reveals that $q$ is the instrument dependent molecular brightness (kHz/particle), i.e. the average signal divided by the average number of particles $n$ observed within the effective detection volume $V_{\rm eff} = \int W(\vec{r})  \,dV$. During FCS measurements the detected signal is correlated by computing a normalized autocorrelation function: 
+\hyref{Equation}{eq2} reveals that $q$ is the instrument dependent molecular brightness (kHz/particle), i.e. the average signal divided by the average number of particles $n$ observed within the effective detection volume $V_{\mathrm{eff}} = \int W(\vec{r})  \,dV$. During FCS measurements the detected signal is correlated by computing a normalized autocorrelation function: 
 	\begin{equation}
 	\label{eq3}
 	G(\tau) = \frac{\langle S(t) \cdot S(t+\tau)\rangle}{\langle S(t) \rangle^2}-1 = \frac{\langle \delta S(t) \cdot \delta S(t+\tau)\rangle}{\langle S(t) \rangle^2} = \frac{g(\tau)}{\langle S(t) \rangle^2}
@@ -546,10 +546,10 @@ Solving these integrals for the confocal detection scheme yields a relatively si
 	\label{eq9}
 	G(\tau) = \frac{1}{n} \left(1+\frac{4 D_t \tau}{w_0^2} \right) ^{-1} \left(1+\frac{4D_t \tau}{z_0^2} \right)^{-1/2}
 	\end{equation}
-The inverse intercept $(G(0))^{-1}$ is proportional to the total concentration of oberved particles $C = N/V =  n/V_{\rm eff} = n/ (\pi^{3/2}w_0^2z_0)$. It is common to define the diffusion time $\tau_{\rm diff} = {w_0}^2/4D_t$ and the structural parameter $\textit{SP}=z_0^2/w_0^2$ as a measure of the elongated detection volume. Replacement finally yields the well known autocorrelation function for 3D diffusion in a confocal setup (Model ID 6012)
+The inverse intercept $(G(0))^{-1}$ is proportional to the total concentration of observed particles $C = N/V =  n/V_{\mathrm{eff}} = n/ (\pi^{3/2}w_0^2z_0)$. It is common to define the diffusion time $\tau_{\mathrm{diff}} = {w_0}^2/4D_t$ and the structural parameter $\textit{SP}=z_0^2/w_0^2$ as a measure of the elongated detection volume. Replacement finally yields the well-known autocorrelation function for 3D diffusion in a confocal setup (Model ID 6012)
 	\begin{equation}
 	\label{eq10}
-	G(\tau) \stackrel{\rm def}{=} G^{\rm D}(\tau) = \frac{1}{n} \overbrace{ \left(1+\frac{\tau}{\tau_{\rm {diff}}} \right) ^{-1}}^{\rm 2D} \overbrace{ \left(1+\frac{\tau}{\textit{SP}^2 \, \tau_{\rm {diff}}} \right)^{-1/2}}^{\rm 3D}
+	G(\tau) \stackrel{\mathrm{def}}{=} G^{\mathrm{D}}(\tau) = \frac{1}{n} \overbrace{ \left(1+\frac{\tau}{\tau_{\mathrm{diff}}} \right) ^{-1}}^{\mathrm{2D}} \overbrace{ \left(1+\frac{\tau}{\textit{SP}^2 \, \tau_{\mathrm{diff}}} \right)^{-1/2}}^{\mathrm{3D}}
 	\end{equation}
 For confocal FCS, both the detection volume $W(\vec{r})$ and the propagator for free diffusion $P_\mathrm{d}$ are described by exponentials (Gaussian functions). Therefore, spatial relationships can be factorized for each dimension $xyz$. As a result, \hyref{Equation}{eq10} can be written as a combination of transversal (2D) and longitudinal (3D) diffusion.
 
@@ -558,7 +558,7 @@ For confocal FCS, both the detection volume $W(\vec{r})$ and the propagator for
 Very often in FCS, one observes more than one dynamic property. Besides diffusion driven number fluctuations, a fluorophore usually shows some kind of inherent blinking, due to triplet state transitions (organic dyes) or protonation dependent quenching (GFPs) \cite{Widengren1995}.
 	\begin{equation}
 	\label{eq11}
-	G(\tau) \stackrel{\rm def}{=} G^{\rm T}(\tau) G^{\rm D}(\tau) = \left( 1+ \frac{T}{1-T} \exp\left[-\frac{\tau}{\tau_{\rm trp}} \right] \right)G^{\rm D}(\tau)
+	G(\tau) \stackrel{\mathrm{def}}{=} G^{\mathrm{T}}(\tau) G^{\mathrm{D}}(\tau) = \left( 1+ \frac{T}{1-T} \exp\left[-\frac{\tau}{\tau_{\mathrm{trp}}} \right] \right)G^{\mathrm{D}}(\tau)
 	\end{equation}
 Blinking increases the correlation amplitude $G(0)$ by the triplet fraction $1/(1-T)$. Accordingly, the average number of observed particles is decreased $n = (1-T)/G(0)$. In case of GFP blinking, two different blinking times have been described and the rate equations can get quite complicated.
 Besides photo-physics, the solution may contain mixtures of fluorescent particles with different dynamic properties, e.g. different mobility states or potential for transient binding. Such mixtures show several correlation times in the correlation curve. \hyref{Equation}{eq11} can be derived by considering the correlation functions of an ensemble, which can be built up by the contribution of $n$ single molecules in the sample volume:
@@ -575,17 +575,17 @@ Note that the diffusion propagator $P_{\mathrm{d},ij}$ is now indexed, since the
 Due to the sums in \hyref{Equation}{eq12}, adding up individual contributions of sub-ensembles is allowed. A frequently used expression to cover free diffusion of similarly labelled, differently sized particles is simply the sum of correlation functions, weighted with their relative fractions $F_k = n_k/n$ to the overall amplitude $G(0) = 1/n$:
 	\begin{equation}
 	\label{eq14}
-	G^{\rm D}(\tau) = \sum_{k=1}^m F_k G^{\rm D}(\tau) = \frac{1}{n} \sum_{k=1}^m F_k \left(1+\frac{\tau}{\tau_{{\rm diff},k}} \right) ^{-1} \left(1+\frac{\tau}{\textit{SP}^2 \, \tau_{{\rm diff},k}} \right)
+	G^{\mathrm{D}}(\tau) = \sum_{k=1}^m F_k G^{\mathrm{D}}(\tau) = \frac{1}{n} \sum_{k=1}^m F_k \left(1+\frac{\tau}{\tau_{{\mathrm{diff}},k}} \right) ^{-1} \left(1+\frac{\tau}{\textit{SP}^2 \, \tau_{{\mathrm{diff}},k}} \right)
 	\end{equation}
 Up to three diffusion times can usually be discriminated ($m = 3$) \cite{Meseth1999}. Note that this assumes homogenous molecular brightness of the different diffusion species. One of the molecular brightness values $q_k$ is usually taken as a reference ($\alpha_k = q_k/q_1$). Brighter particles are over-represented \cite{Thompson1991}
 	\begin{equation}
 	\label{eq15}
-	G^{\rm D}(\tau) = \frac{1}{n \left( \sum_k F_k \alpha_k \right)^2} \sum_k F_k \alpha_k^2 G_k^D(\tau)
+	G^{\mathrm{D}}(\tau) = \frac{1}{n \left( \sum_k F_k \alpha_k \right)^2} \sum_k F_k \alpha_k^2 G_k^D(\tau)
 	\end{equation}
-Inhomogeneity in molecular brightness affects both the total concentration of observed particles as well as the real molar fractions $F_k^{\rm cor}$ \cite{Thompson1991}
+Inhomogeneity in molecular brightness affects both the total concentration of observed particles as well as the real molar fractions $F_k^{\mathrm{cor}}$ \cite{Thompson1991}
 	\begin{equation}
 	\label{eq16}
-	n = \frac{1}{G^{\rm D}(0)} \frac{\sum_k F_k^{\rm {cor}} \alpha_k^2}{\left( \sum_k F_k^{\rm {cor}} \alpha_k \right)^2} \quad\mbox {with} \quad F_k^{\rm {cor}} = \frac{F_k/\alpha_k^2}{\sum_k F_k/\alpha_k}
+	n = \frac{1}{G^{\mathrm{D}}(0)} \frac{\sum_k F_k^{\mathrm{cor}} \alpha_k^2}{\left( \sum_k F_k^{\mathrm{cor}} \alpha_k \right)^2} \quad\mbox {with} \quad F_k^{\mathrm{cor}} = \frac{F_k/\alpha_k^2}{\sum_k F_k/\alpha_k}
 	\end{equation}
 
 \subsection{Correcting non-correlated background signal}
@@ -593,12 +593,12 @@ Inhomogeneity in molecular brightness affects both the total concentration of ob
 In FCS, the total signal is composed of the fluorescence and the non-correlated background: $S = F + B$. Non-correlated background signal like shot noise of the detectors or stray light decreases the relative fluctuation amplitude and must be corrected to derive true particle concentrations \cite{Koppel1974,Thompson1991}. In \textit{PyCorrFit}, the background value [kHz] can be manually set for each channel (B1, B2) (\hyref{Figure}{fig:mainwin}). For autocorrelation measurements ($B1 = B [...]
 	\begin{equation}
 	\label{eq17}
-	n = \frac{1}{G^{\rm D}(0)} \left( \frac{S-B}{S} \right)^2 = \frac{1}{(1-T)G(0)} \left( \frac{S-B}{S} \right)^2.
+	n = \frac{1}{G^{\mathrm{D}}(0)} \left( \frac{S-B}{S} \right)^2 = \frac{1}{(1-T)G(0)} \left( \frac{S-B}{S} \right)^2.
 	\end{equation}
 For dual-channel applications with cross-correlation (next section) the amplitudes must be corrected by contributions from each channel \cite{Weidemann2013}
 	\begin{equation}
 	\label{eq18}
-	G_{\times,\rm {cor}}(0) = G_{\times, \rm meas}(0) \left( \frac{S_1}{S_1-B_1} \right) \left( \frac{S_2}{S_2-B_2} \right) 
+	G_{\times,\mathrm{cor}}(0) = G_{\times, \mathrm{meas}}(0) \left( \frac{S_1}{S_1-B_1} \right) \left( \frac{S_2}{S_2-B_2} \right) 
 	\end{equation}
 
 \subsection{Cross-correlation}
@@ -606,17 +606,17 @@ For dual-channel applications with cross-correlation (next section) the amplitud
 Cross-correlation is an elegant way to measure molecular interactions. The principle is to implement a dual-channel setup (e.g. channels 1 and 2), where two, interacting populations of molecules can be discriminated \cite{Foo2012,Ries2010,Schwille1997,Weidemann2002}. In a dual-channel setup, complexes containing particles with both properties will evoke simultaneous signals in both channels. Such coincidence events can be extracted by cross-correlation between the two channels. A promine [...]
 	\begin{equation}
 	\label{eq19}
-	G_\times (\tau) \stackrel{\rm def}{=} G_{12} (\tau) = \frac{\langle \delta S_1(t) \delta S_2(t+\tau)\rangle}{\langle S_1(t) \rangle \langle S_2(t) \rangle} \approx \frac{\langle \delta S_2(t) \delta S_1(t+\tau)\rangle}{\langle S_1(t) \rangle \langle S_2(t) \rangle} =  G_{21} (\tau)
+	G_\times (\tau) \stackrel{\mathrm{def}}{=} G_{12} (\tau) = \frac{\langle \delta S_1(t) \delta S_2(t+\tau)\rangle}{\langle S_1(t) \rangle \langle S_2(t) \rangle} \approx \frac{\langle \delta S_2(t) \delta S_1(t+\tau)\rangle}{\langle S_1(t) \rangle \langle S_2(t) \rangle} =  G_{21} (\tau)
 	\end{equation}
 A finite cross-correlation amplitude $G_{12}(0)$ indicates co-diffusion of complexes containing both types of interaction partners. The increase of the cross-correlation amplitude is linear for heterotypic binding but non-linear for homotypic interactions or higher order oligomers. The absolute magnitude of the cross-correlation amplitude must be calibrated because the chromatic mismatch of the detection volumes (different wavelength, different size) and their spatial displacement ($d_\m [...]
 	\begin{equation}
 	\label{eq20}
-	G(\tau) = \frac{1}{n} \left(1+\frac{4 D_t \tau}{w_0^2} \right) ^{-1} \left(1+\frac{4D_t \tau}{z_0^2} \right)^{-1/2} \exp \left(- \frac{d_\mathrm{x}^2 + d_\mathrm{y}^2}{4 D_t \tau + w_{0,\rm eff}} + \frac{d_\mathrm{z}^2}{4 D_t \tau + z_{0,\rm eff}} \right)
+	G(\tau) = \frac{1}{n} \left(1+\frac{4 D_t \tau}{w_0^2} \right) ^{-1} \left(1+\frac{4D_t \tau}{z_0^2} \right)^{-1/2} \exp \left(- \frac{d_\mathrm{x}^2 + d_\mathrm{y}^2}{4 D_t \tau + w_{0,\mathrm{eff}}} + \frac{d_\mathrm{z}^2}{4 D_t \tau + z_{0,\mathrm{eff}}} \right)
 	\end{equation}
 The ratio between cross- and autocorrelation amplitude is used as a readout which can be linked to the degree of binding. Let us consider a heterodimerization, where channel $1$ is sensitive for green labelled particles ($g$) and channel $2$ is sensitive for red labelled particles ($r$), then the ratio of cross- and autocorrelation amplitudes is proportional to the fraction of ligand bound \cite{Weidemann2002}
 	\begin{eqnarray}
 	\label{eq21}
-	CC_1 \stackrel{\rm def}{=} \frac{G_\times(0)}{G_1(0)} & \propto & \frac{c_{gr}}{c_r} \nonumber \\ CC_2 \stackrel{\rm def}{=} \frac{G_\times(0)}{G_2(0)} &\propto & \frac{c_{gr}}{c_g}
+	CC_1 \stackrel{\mathrm{def}}{=} \frac{G_\times(0)}{G_1(0)} & \propto & \frac{c_{gr}}{c_r} \nonumber \\ CC_2 \stackrel{\mathrm{def}}{=} \frac{G_\times(0)}{G_2(0)} &\propto & \frac{c_{gr}}{c_g}
 	\end{eqnarray}
 Recently, a correction for bleed-through of the signals between the two channels has been worked out \cite{Bacia2012}. The effect on binding curves measured with cross-correlation can be quite dramatic \cite{Weidemann2013}. To treat spectral cross-talk, the experimenter has to determine with single coloured probes how much of the signal (ratio in \%) is detected by the orthogonal, 'wrong' channel ($BT_{12}, BT_{21}$). Usually the bleed-through from the red into the green channel can be n [...]
 	\begin{eqnarray}
@@ -628,7 +628,7 @@ Here, the dashed fluorescence signals are the true contributions from single lab
 \label{eq23}
   \begin{align}
     \frac{c_{gr}}{c_r} & \propto  \frac{CC_1-X_2}{\left( 1-X_2 \right)} \label{eq23a} \\
-    \frac{c_{gr}}{c_g} & \propto  \frac{CC_2-X_2 \left( 1-X_2 \right) \frac{G_1^{\rm D}(0)}{G_2^{\rm D}(0)}}{1+ X_2 \frac{G_1^{\rm D}(0)}{G_2^{\rm D}(0)} - 2 X_2 CC_2} \label{eq23b}
+    \frac{c_{gr}}{c_g} & \propto  \frac{CC_2-X_2 \left( 1-X_2 \right) \frac{G_1^{\mathrm{D}}(0)}{G_2^{\mathrm{D}}(0)}}{1+ X_2 \frac{G_1^{\mathrm{D}}(0)}{G_2^{\mathrm{D}}(0)} - 2 X_2 CC_2} \label{eq23b}
   \end{align}
 \end{subequations}
 As apparent from \hyref{Equations}{eq23}, it is much simpler to use the autocorrelation amplitude measured in the green channel for normalization (\ref{eq23a}) and not the cross-talk affected red  channel (\ref{eq23b}). Finally, the proportionality between the fraction ligand bound and the measured cross-correlation ratio depend solely on the effective detection volumes of all three channels (two auto- and the cross-correlation channels) and must be determined with appropriate positive c [...]
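The closed forms in Equations (10) and (11) of the documentation above translate directly into code. A minimal sketch (function and parameter names are mine for illustration, not PyCorrFit's API):

```python
import numpy as np

def g_diff(tau, n, tau_diff, sp):
    """Eq. (10): autocorrelation for free 3D diffusion in a confocal
    volume, factorized into a transversal (2D) and an axial (3D) term."""
    return (1.0 / n) * (1.0 + tau / tau_diff) ** -1.0 \
                     * (1.0 + tau / (sp ** 2 * tau_diff)) ** -0.5

def g_triplet(tau, n, tau_diff, sp, t_frac, tau_trp):
    """Eq. (11): triplet blinking multiplies the diffusion part and
    raises the amplitude G(0) by the factor 1/(1 - T)."""
    trip = 1.0 + t_frac / (1.0 - t_frac) * np.exp(-tau / tau_trp)
    return trip * g_diff(tau, n, tau_diff, sp)
```

At `tau = 0` this reproduces the amplitude relation from the text, `G(0) = 1/(n(1-T))`, so the number of observed particles can be read off as `n = (1-T)/G(0)`.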
diff --git a/pycorrfit/correlation.py b/pycorrfit/correlation.py
index 6e83e40..2eec6d4 100644
--- a/pycorrfit/correlation.py
+++ b/pycorrfit/correlation.py
@@ -352,7 +352,7 @@ class Correlation(object):
     def fit_parameters(self):
         """parameters that were fitted/will be used for fitting"""
         # Do not return `self._fit_parameters.copy()`, because
-        # some methods of PyCorrFit depende on the array being
+        # some methods of PyCorrFit depend on the array being
         # accessible and changeable with indices.
         return self._fit_parameters
 
diff --git a/pycorrfit/fit.py b/pycorrfit/fit.py
index 27c9e1e..8bb214e 100644
--- a/pycorrfit/fit.py
+++ b/pycorrfit/fit.py
@@ -217,7 +217,7 @@ class Fit(object):
 
         Parameters
         ----------
-        correlations: list of instances of `pycorrfit.Correlation`
+        correlations: list of instances or instance of `pycorrfit.Correlation`
             Correlations to fit.
         global fit : bool
             Perform global fit. The default behavior is
@@ -272,7 +272,7 @@ class Fit(object):
                 self.minimize()
                 # update correlation model parameters
                 corr.fit_parameters = self.fit_parm
-                # save fit instance in correlation class
+                # save fit data in correlation class
                 corr.fit_results = self.get_fit_results(corr)
         else:
             # TODO:
@@ -393,7 +393,7 @@ class Fit(object):
                 # write new model parameters
                 corr.fit_parameters = parameters_global_to_local(self.fit_parm,
                                                                  ii)
-                # save fit instance in correlation class
+                # save fit data in correlation class
                 corr.fit_results = self.get_fit_results(corr)
 
 
@@ -414,7 +414,6 @@ class Fit(object):
              "fit weights" : 1*self.compute_weights(c)
              }
         
-        
         if c.is_weighted_fit:
             d["weighted fit type"] = c.fit_weight_type
             if isinstance(c.fit_weight_data, (int, float)):
@@ -422,8 +421,7 @@ class Fit(object):
 
         if d["fit algorithm"] == "Lev-Mar" and self.parmoptim_error is not None:
             d["fit error estimation"] = self.parmoptim_error
-        
-        
+
         return d
         
 
diff --git a/pycorrfit/gui/page.py b/pycorrfit/gui/page.py
index 92a3b69..1f0707d 100644
--- a/pycorrfit/gui/page.py
+++ b/pycorrfit/gui/page.py
@@ -15,7 +15,7 @@ import wx.lib.scrolledpanel as scrolled
 
 from pycorrfit import models as mdls
 from pycorrfit import fit
-from pycorrfit import Correlation, Fit
+from pycorrfit import Correlation
 
 
 from . import tools
@@ -58,7 +58,7 @@ class FittingPanel(wx.Panel):
         # A list containing page numbers that share parameters with this page.
         # This parameter is defined by the global fitting tool and is saved in
         # sessions.
-        self.GlobalParameterShare = list()
+        self.GlobalParameterShare = []
         # Counts number of Pages already created:
         self.counter = counter
         # Has inital plot been performed?
@@ -328,30 +328,21 @@ class FittingPanel(wx.Panel):
                       to `True`.
         
         """
-        # Make a busy cursor
-        wx.BeginBusyCursor()
-        # Apply parameters
-        # This also applies the background correction, if present
-        self.apply_parameters()
-        # Create instance of fitting class
-        
-        # TODO:
-        # 
-        self.GlobalParameterShare = list()
+        tools.batchcontrol.FitProgressDlg(self, self)
 
-        try:
-            Fit(self.corr)
-        except ValueError:
-            # I sometimes had this on Windows. It is caused by fitting to
-            # a .SIN file without selection proper channels first.
-            print "There was an Error fitting. Please make sure that you\n"+\
-                  "are fitting in a proper channel domain."
-            wx.EndBusyCursor()
-            raise
 
+    def Fit_finalize(self, trigger):
+        """ Things that need be done after fitting
+        """
+        # Reset list of globally shared parameters, because we are only
+        # fitting this single page now.
+        # TODO:
+        # - also remove this page from the GlobalParameterShare list of
+        #   the other pages
+        self.GlobalParameterShare = []
         # Update spin-control values
         self.apply_parameters_reverse()
-        # Plot everthing
+        # Plot everything
         try:
             self.PlotAll(trigger=trigger)
         except OverflowError:
@@ -361,8 +352,6 @@ class FittingPanel(wx.Panel):
             warnings.warn("Could not plot canvas.") 
         # update displayed chi2
         self.updateChi2()
-        # Return cursor to normal
-        wx.EndBusyCursor()
 
 
     def Fit_WeightedFitCheck(self, event=None):
diff --git a/pycorrfit/gui/threaded_progress.py b/pycorrfit/gui/threaded_progress.py
new file mode 100644
index 0000000..58e1859
--- /dev/null
+++ b/pycorrfit/gui/threaded_progress.py
@@ -0,0 +1,256 @@
+# -*- coding: utf-8 -*-
+"""
+PyCorrFit
+
+A progress bar with an abort button that works for long running processes.
+"""
+import time
+import threading
+import traceback as tb
+import wx
+import sys
+
+
+
+class KThread(threading.Thread):
+    """A subclass of threading.Thread, with a kill()
+    method.
+    
+    https://web.archive.org/web/20130503082442/http://mail.python.org/pipermail/python-list/2004-May/281943.html
+
+    The KThread class works by installing a trace in the thread.  The trace
+    checks at every line of execution whether it should terminate itself.
+    So it's possible to instantly kill any actively executing Python code.
+    However, if your code hangs at a lower level than Python, then the
+    thread will not actually be killed until the next Python statement is
+    executed.
+    """
+    def __init__(self, *args, **keywords):
+        threading.Thread.__init__(self, *args, **keywords)
+        self.killed = False
+
+    def start(self):
+        """Start the thread."""
+        self.__run_backup = self.run
+        self.run = self.__run      # Force the Thread to install our trace.
+        threading.Thread.start(self)
+
+    def __run(self):
+        """Hacked run function, which installs the
+        trace."""
+        sys.settrace(self.globaltrace)
+        self.__run_backup()
+        self.run = self.__run_backup
+
+    def globaltrace(self, frame, why, arg):
+        if why == 'call':
+            return self.localtrace
+        else:
+            return None
+
+    def localtrace(self, frame, why, arg):
+        if self.killed:
+            if why == 'line':
+                raise SystemExit()
+        return self.localtrace
+
+    def kill(self):
+        self.killed = True
+
+
+
+class WorkerThread(KThread):
+    """Worker Thread Class."""
+    def __init__(self, target, args, kwargs):
+        """Init Worker Thread Class."""
+        KThread.__init__(self)
+        self.traceback = None
+        self.target = target
+        self.args = args
+        self.kwargs = kwargs
+        # This starts the thread running on creation, but you could
+        # also make the GUI thread responsible for calling this
+        self.start()
+
+    def run(self):
+        """Run Worker Thread."""
+        try:
+            self.target(*self.args, **self.kwargs)
+        except:
+            self.traceback = tb.format_exc()
+
+
+class ThreadedProgressDlg(object):
+    def __init__(self, parent, targets, args=None, kwargs={},
+                 title="Dialog title",
+                 messages=None,
+                 time_delay=2):
+        """ This class implements a progress dialog that can abort during
+        a function call, as opposed to the stock wx.ProgressDialog.
+        
+        Parameters
+        ----------
+        parent : wx object
+            The parent of the progress dialog.
+        targets : list of callables
+            The methods that will be called in each step in the progress.
+        args : list
+            The arguments to the targets. Should match length of targets.
+        kwargs : dict or list of dicts
+            Keyword arguments to the targets. If dict, then the same dict
+            is used for all targets.
+        title : str
+            The title of the progress dialog.
+        messages : list of str
+            The message displayed for each target. Should match length of
+            targets.
+        time_delay : float
+            Time after which the dialog should be displayed. The default
+            is 2 s: the dialog is shown once 2 s have elapsed, or earlier
+            if the overall progress is projected to take longer than 2 s.
+        
+        Attributes
+        ----------
+        aborted : bool
+            Whether the progress was aborted by the user.
+        index_aborted : None or int
+            The index in `targets` at which the progress was aborted.
+        finalize : callable
+            A method that will be called after fitting. Can be overridden
+            by subclasses.
+        
+        Notes
+        -----
+        The progress dialog is only displayed when `time_delay` is
+        shorter than, or projected to be shorter than, the total running
+        time. If the dialog is not displayed, a busy cursor is shown.
+         
+        """
+        wx.BeginBusyCursor()
+        
+        if hasattr(targets, "__call__"):
+            targets = [targets]
+
+        nums = len(targets)
+        
+        if args is None:
+            args = [()]*nums
+        elif isinstance(args, list):
+            # convenience-convert args to tuples
+            if not isinstance(args[0], tuple):
+                args = [ (t,) for t in args ]
+        
+        if isinstance(kwargs, dict):
+            kwargs = [kwargs]*nums
+        
+        if messages is None:
+            messages = [ "item {} of {}".format(a+1, nums) for a in range(nums) ]
+        
+        
+        time1 = time.time()
+        sty = wx.PD_SMOOTH|wx.PD_AUTO_HIDE|wx.PD_CAN_ABORT
+        if len(targets) > 1:
+            sty = sty|wx.PD_REMAINING_TIME
+        dlgargs = [title, "initializing..."]
+        dlgkwargs = {"maximum":nums, "parent":parent, "style":sty }
+        dlg = None
+
+        self.aborted = False
+        self.index_aborted = None
+
+        for jj in range(nums):
+            init = True
+            worker = WorkerThread(target=targets[jj],
+                                  args=args[jj],
+                                  kwargs=kwargs[jj])
+            while worker.is_alive() or init:
+                if (time.time()-time1 > time_delay or
+                    (time.time()-time1)/(jj+1)*nums > time_delay
+                    ) and dlg is None:
+                    dlg = wx.ProgressDialog(*dlgargs, **dlgkwargs)
+                    wx.EndBusyCursor()
+                    
+                init=False
+                time.sleep(.01)
+                if dlg is not None:
+                    if len(targets) == 1:
+                        # no progress bar but pulse
+                        cont = dlg.UpdatePulse(messages[jj])[0]
+                    else:
+                        # show progress until end
+                        cont = dlg.Update(jj+1, messages[jj])[0]
+                    if not cont:
+                        dlg.Destroy()
+                        worker.kill()
+                        self.aborted = True
+                        break
+
+            if self.aborted:
+                self.aborted = True
+                self.index_aborted = jj
+                break
+            
+            if worker.traceback is not None:
+                dlg.Destroy()
+                self.aborted = True
+                self.index_aborted = jj
+                raise Exception(worker.traceback)
+        
+        if dlg is not None:
+            dlg.Hide()
+            dlg.Destroy()
+        wx.EndBusyCursor()
+        wx.BeginBusyCursor()
+        self.finalize()
+        wx.EndBusyCursor()
+
+    def finalize(self):
+        """ You may override this method in subclasses.
+        """
+        pass
+
+
+
+if __name__ == "__main__":
+    # GUI Frame class that spins off the worker thread
+    class MainFrame(wx.Frame):
+        """Class MainFrame."""
+        def __init__(self, parent, aid):
+            """Create the MainFrame."""
+            wx.Frame.__init__(self, parent, aid, 'Thread Test')
+    
+            # Dumb sample frame with two buttons
+            but = wx.Button(self, wx.ID_ANY, 'Start Progress', pos=(0,0))
+    
+            
+            self.Bind(wx.EVT_BUTTON, self.OnStart, but)
+    
+        def OnStart(self, event):
+            """Start Computation."""
+            # Trigger the worker thread unless it's already busy
+            arguments = [ test_class(a) for a in range(10) ]
+            def method(x):
+                x.arg *= 1.1
+                time.sleep(1)
+            tp = ThreadedProgressDlg(self, [method]*len(arguments), arguments)
+            print(tp.index_aborted)
+            print([a.arg for a in arguments])
+    
+    
+    class MainApp(wx.App):
+        """Class Main App."""
+        def OnInit(self):
+            """Init Main App."""
+            self.frame = MainFrame(None, -1)
+            self.frame.Show(True)
+            self.SetTopWindow(self.frame)
+            return True
+    
+    class test_class(object):
+        def __init__(self, arg):
+            self.arg = arg
+    
+    
+    app = MainApp(0)
+    app.MainLoop()
\ No newline at end of file
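An aside, not part of the patch: the abort bookkeeping in the loop above (poll for cancellation, record the first job that did not complete in `index_aborted`, then stop) can be sketched without wx as follows. The names `run_jobs` and `should_abort` are hypothetical, chosen only for this illustration.

```python
# Minimal sketch of the abort bookkeeping used in ThreadedProgressDlg,
# without wx. `run_jobs` and `should_abort` are hypothetical names.
def run_jobs(jobs, should_abort):
    aborted = False
    index_aborted = None
    for jj, job in enumerate(jobs):
        if should_abort(jj):
            # record the first job that did not complete and stop
            aborted = True
            index_aborted = jj
            break
        job()
    return aborted, index_aborted

done = []
jobs = [lambda i=i: done.append(i) for i in range(5)]
aborted, idx = run_jobs(jobs, should_abort=lambda jj: jj == 3)
# aborted is True, idx is 3, done is [0, 1, 2]
```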
diff --git a/pycorrfit/gui/tools/batchcontrol.py b/pycorrfit/gui/tools/batchcontrol.py
index 3e976c2..e38b92a 100644
--- a/pycorrfit/gui/tools/batchcontrol.py
+++ b/pycorrfit/gui/tools/batchcontrol.py
@@ -13,7 +13,8 @@ import wx
 
 from pycorrfit import openfile as opf     # How to treat an opened file
 from pycorrfit import models as mdls
-
+from pycorrfit import Fit
+from pycorrfit.gui.threaded_progress import ThreadedProgressDlg 
 
 # Menu entry name
 MENUINFO = ["B&atch control", "Batch fitting."]
@@ -75,6 +76,7 @@ class BatchCtrl(wx.Frame):
     
     
     def OnApply(self, event):
+        wx.BeginBusyCursor()
         Parms = self.GetParameters()
         modelid = Parms[1]
         # Set all parameters for all pages
@@ -97,6 +99,7 @@ class BatchCtrl(wx.Frame):
                 OtherPage.PlotAll(trigger="parm_batch")
         # Update all other tools fit the finalize trigger.
         self.parent.OnFNBPageChanged(trigger="parm_finalize")
+        wx.EndBusyCursor()
 
 
     def OnClose(self, event=None):
@@ -109,23 +112,24 @@ class BatchCtrl(wx.Frame):
         item = self.dropdown.GetSelection()
         if self.rbtnhere.Value == True:
             if item <= 0:
-                Page = self.parent.notebook.GetCurrentPage()
+                page = self.parent.notebook.GetCurrentPage()
             else:
-                Page = self.parent.notebook.GetPage(item-1)
+                page = self.parent.notebook.GetPage(item-1)
             # Get internal ID
-            modelid = Page.corr.fit_model.id
+            modelid = page.corr.fit_model.id
         else:
             # Get external ID
             modelid = self.YamlParms[item][1]
-        # Fit all pages with right modelid
-        for i in np.arange(self.parent.notebook.GetPageCount()):
-            OtherPage = self.parent.notebook.GetPage(i)
-            if (OtherPage.corr.fit_model.id == modelid and
-                OtherPage.corr.correlation is not None):
-                #Fit
-                OtherPage.Fit_function(noplots=True,trigger="fit_batch")
-        # Update all other tools fit the finalize trigger.
-        self.parent.OnFNBPageChanged(trigger="fit_finalize")
+
+        # Get all pages with right modelid
+        fit_page_list = []
+        for ii in np.arange(self.parent.notebook.GetPageCount()):
+            pageii = self.parent.notebook.GetPage(ii)
+            if (pageii.corr.fit_model.id == modelid and
+                pageii.corr.correlation is not None):
+                fit_page_list.append(pageii)
+
+        FitProgressDlg(self, fit_page_list, trigger="fit_batch")
 
 
     def OnPageChanged(self, Page=None, trigger=None):
@@ -323,4 +327,52 @@ for batch modification.""")
             self.SetSize(panel.GetSize())
             self.mastersizer.Fit(self)
         except:
-            pass
\ No newline at end of file
+            pass
+
+
+
+class FitProgressDlg(ThreadedProgressDlg):
+    def __init__(self, parent, pages, trigger=None):
+        """ A progress dialog for fitting in PyCorrFit
+        
+        This is a convenience class that wraps around `ThreadedProgressDlg`
+        and performs all necessary steps for fitting single pages in PyCorrFit.
+        
+        Parameters
+        ----------
+        parent : wx object
+            The parent of the progress dialog.
+        pages : list of instances of `pycorrfit.gui.page.FittingPanel`
+            The pages with the model and correlation for fitting.
+        trigger : str
+            PyCorrFit internal trigger string.
+        """
+        if not isinstance(pages, list):
+            pages = [pages]
+        self.pages = pages
+        self.trigger = trigger
+        title = "Fitting data"
+        messages = [ "Fitting page #{}.".format(pi.counter.strip("# :")) for pi in pages ]
+        targets = [Fit]*len(pages)
+        args = [pi.corr for pi in pages]
+        # write parameters from page instance to correlation
+        for pi in self.pages:
+            pi.apply_parameters()
+        super(FitProgressDlg, self).__init__(parent, targets, args,
+                                             title=title,
+                                             messages=messages)
+    
+    def finalize(self):
+        """ Do everything that is required after fitting, including
+        cleanup of non-fitted pages.
+        """
+        if self.aborted:
+            # clean up the page whose fit was aborted
+            fin_index = max(0, self.index_aborted-1)
+            pab = self.pages[self.index_aborted]
+            pab.fit_results = None
+            pab.apply_parameters()
+        else:
+            fin_index = len(self.pages)
+
+        # finalize fitting
+        for pi in self.pages[:fin_index]:
+            pi.Fit_finalize(trigger=self.trigger)
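An aside, not part of the patch: the slicing in `finalize` above can be sketched in plain Python. `pages_to_finalize` is a hypothetical helper name; the `max(0, index_aborted - 1)` mirrors the conservative choice in the patch to also skip the page fitted just before the abort.

```python
# Sketch of the finalize slicing above (hypothetical helper name).
def pages_to_finalize(pages, aborted, index_aborted):
    if aborted:
        # mirror the patch: conservatively skip the page fitted
        # just before the aborted one as well
        fin_index = max(0, index_aborted - 1)
    else:
        fin_index = len(pages)
    return pages[:fin_index]

pages_to_finalize(list("abcde"), aborted=True, index_aborted=3)
# -> ['a', 'b']
```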
diff --git a/pycorrfit/gui/tools/statistics.py b/pycorrfit/gui/tools/statistics.py
index 15ee1be..ad69b92 100644
--- a/pycorrfit/gui/tools/statistics.py
+++ b/pycorrfit/gui/tools/statistics.py
@@ -208,7 +208,7 @@ class Stat(wx.Frame):
                 checked[ii] = True
         # A list with additional strings that should be default checked
         # if found somewhere in the data.
-        checklist = ["cpp", "duration", "bg rate", "avg.", "Model name"]
+        checklist = ["cpp", "duration", "bg rate", "avg.", "Model name", "filename/title"]
         for i in range(len(Info)):
             item = Info[i]
             for checkitem in checklist:
diff --git a/pycorrfit/readfiles/read_SIN_correlator_com.py b/pycorrfit/readfiles/read_SIN_correlator_com.py
index 92ee37a..3dc5c2c 100644
--- a/pycorrfit/readfiles/read_SIN_correlator_com.py
+++ b/pycorrfit/readfiles/read_SIN_correlator_com.py
@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
 """
-method to open correlator.com .sin files
+methods to open correlator.com .sin files
 """
 import os
 import csv
@@ -8,94 +8,226 @@ import numpy as np
 
 
 def openSIN(dirname, filename):
-    """ Read data from a .SIN file, usually created by
-        the software using correlators from correlator.com.
-
-            FLXA
-            Version= 1d
-
-            [Parameters]
-            ...
-            Mode= Single Auto
-            ...
-
-            [CorrelationFunction]
-            1.562500e-09	0.000000e+00
-            3.125000e-09	0.000000e+00
-            4.687500e-09	0.000000e+00
-            ...
-            1.887435e+01	1.000030e+00
-            1.929378e+01	1.000141e+00
-            1.971321e+01	9.999908e-01
-            2.013264e+01	9.996810e-01
-            2.055207e+01	1.000047e+00
-            2.097150e+01	9.999675e-01
-            2.139093e+01	9.999591e-01
-            2.181036e+01	1.000414e+00
-            2.222979e+01	1.000129e+00
-            2.264922e+01	9.999285e-01
-            2.306865e+01	1.000077e+00
-            ...
-            3.959419e+02	0.000000e+00
-            4.026528e+02	0.000000e+00
-            4.093637e+02	0.000000e+00
-            4.160746e+02	0.000000e+00
-            4.227854e+02	0.000000e+00
-            4.294963e+02	0.000000e+00
-
-            [RawCorrelationFunction]
-            ...
-
-            [IntensityHistory]
-            TraceNumber= 458
-            0.000000	9.628296e+03	9.670258e+03
-            0.262144	1.001358e+04	9.971619e+03
-            0.524288	9.540558e+03	9.548188e+03
-            0.786432	9.048462e+03	9.010315e+03
-            1.048576	8.815766e+03	8.819580e+03
-            1.310720	8.827210e+03	8.861542e+03
-            1.572864	9.201050e+03	9.185791e+03
-            1.835008	9.124756e+03	9.124756e+03
-            2.097152	9.059906e+03	9.029389e+03
-            ...
-
-        1. We are interested in the "[CorrelationFunction]" section,
-        where the first column denotes tau in seconds and the second row the
-        correlation signal. Values are separated by a tabulator "\t".
-        We do not import anything from the "[Parameters]" section.
-        We have to subtract "1" from the correlation function, since it
-        is a correlation function that converges to "1" and not to "0".
-
-        2. We are also interested in the "[IntensityHistory]" section.
-        If we are only interested in autocorrelation functions: An email
-        from Jixiang Zhu - Correlator.com (2012-01-22) said, that
-        "For autocorrelation mode, the 2nd and 3 column represent the same
-        intensity series with slight delay.  Therefore, they are statistically
-        the same but numerically different."
-        It is therefore perfectly fine to just use the 2nd column.
-
-        Different acquisition modes:
-        Mode            [CorrelationFunction]               [IntensityHistory]
-        Single Auto     2 Colums (tau,AC)                   1 significant
-        Single Cross    2 Colums (tau,CC)                   2
-        Dual Auto       3 Colums (tau,AC1,AC2)              2
-        Dual Cross      3 Colums (tau,CC12,CC21)            2
-        Quad            5 Colums (tau,AC1,AC2,CC12,CC21)    2
-
-        Returns:
-        [0]:
-         N arrays with tuples containing two elements:
-         1st: tau in ms
-         2nd: corresponding correlation signal
-        [1]:
-         N Intensity traces:
-         1st: time in ms
-         2nd: Trace in kHz
-        [2]: 
-         A list with N elements, indicating, how many correlation
-         curves we are importing.
+    """ D
+    
     """
-    openfile = open(os.path.join(dirname, filename), 'r')
+    path = os.path.join(dirname, filename)
+    with open(path) as fd:
+        data = fd.readlines()
+    
+    for line in data:
+        line = line.strip()
+        if line.lower().startswith("mode"):
+            mode = line.split("=")[1].strip().split()
+            # Find out what kind of mode it is
+            
+            # The rationale is that when the mode
+            # consists of single characters separated
+            # by empty spaces, then we have integer mode.
+            if len(mode) - np.sum([len(m) for m in mode]) == 0:
+                return openSIN_integer_mode(path)
+            else:
+                return openSIN_old(path)
+
+
+def openSIN_integer_mode(path):
+    """ Integer mode file format of e.g. flex03lq-1 correlator
+    
+    This is a file format where the type (AC/CC) of the curve is
+    determined using integers in the "Mode=" line, e.g.
+    
+         Mode= 2    3    3    2    0    1    1    0
+    
+    which means the first correlation is CC23, the second CC32,
+    the third CC01, and the fourth CC10. Similarly, 
+    
+        Mode= 1    1    2    2    0    4    4    4
+    
+    would translate to AC11, AC22, CC04, and AC44.
+    """
+    with open(path, 'r') as openfile:
+        data = openfile.readlines()
+
+    # get mode (curve indices)
+    for line in data:
+        line = line.strip()
+        if line.lower().startswith("mode"):
+            mode = line.split("=")[1].strip().split()
+            mode = [ int(m) for m in mode ]
+            assert len(mode) % 2 == 0, "number of mode entries must be even"
+    
+    # build up the lists
+    corr_func = []
+    intensity = []
+    section = ""
+    
+    # loop through lines
+    for line in data:
+        line = line.strip().lower()
+        if line.startswith("["):
+            section = line
+            continue
+        elif (len(line) == 0 or
+              line.count("=")):
+            continue
+
+        if section.count("[correlationfunction]"):
+            corr_func.append(line.split())
+        elif section.count("[intensityhistory]"):
+            intensity.append(line.split())
+
+    # corr_func now contains lag time, and correlations according
+    # to the mode parameters.
+    corr_func = np.array(corr_func, dtype=float)
+    intensity = np.array(intensity, dtype=float)
+
+    timefactor = 1000 # because we want ms instead of s
+    timedivfac = 1000 # because we want kHz instead of Hz
+    intensity[:,0] *= timefactor
+    corr_func[:,0] *= timefactor
+    intensity[:,1:] /= timedivfac
+    
+    # correlator.com correlation is normalized to 1, not to 0
+    corr_func[:,1:] -= 1
+
+    # Now sort the information for pycorrfit
+    correlations = []
+    traces = []
+    curvelist = []
+    
+    for ii in range(len(mode)//2):
+        modea = mode[2*ii]
+        modeb = mode[2*ii+1]
+        
+        if modea == modeb:
+            # curve type AC
+            curvelist.append("AC{}".format(modea))
+            # trace
+            atrace = np.zeros((intensity.shape[0],2), dtype=float)
+            atrace[:,0] = intensity[:, 0]
+            atrace[:,1] = intensity[:, modea+1]
+            traces.append(atrace)
+        else:
+            # curve type CC
+            curvelist.append("CC{}{}".format(modea,modeb))
+            # trace
+            modmin = min(modea, modeb)
+            modmax = max(modea, modeb)
+            tracea = np.zeros((intensity.shape[0],2), dtype=float)
+            tracea[:,0] = intensity[:, 0]
+            tracea[:,1] = intensity[:, modmin+1]
+            traceb = np.zeros((intensity.shape[0],2), dtype=float)
+            traceb[:,0] = intensity[:, 0]
+            traceb[:,1] = intensity[:, modmax+1]            
+            traces.append([tracea, traceb])
+        # correlation
+        corr = np.zeros((corr_func.shape[0],2), dtype=float)
+        corr[:,0] = corr_func[:, 0]
+        corr[:,1] = corr_func[:, ii+1]
+        correlations.append(corr)
+
+    dictionary = {}
+    dictionary["Correlation"] = correlations
+    dictionary["Trace"] = traces
+    dictionary["Type"] = curvelist
+    filelist = list()
+    for _i in curvelist:
+        filelist.append(os.path.basename(path))
+    dictionary["Filename"] = filelist
+    return dictionary
+
+
+def openSIN_old(path):
+    """ Parses the "old" sin file format using an "old" implementation.
+    
+    Read data from a .SIN file, usually created by
+    the software using correlators from correlator.com.
+
+        FLXA
+        Version= 1d
+
+        [Parameters]
+        ...
+        Mode= Single Auto
+        ...
+
+        [CorrelationFunction]
+        1.562500e-09	0.000000e+00
+        3.125000e-09	0.000000e+00
+        4.687500e-09	0.000000e+00
+        ...
+        1.887435e+01	1.000030e+00
+        1.929378e+01	1.000141e+00
+        1.971321e+01	9.999908e-01
+        2.013264e+01	9.996810e-01
+        2.055207e+01	1.000047e+00
+        2.097150e+01	9.999675e-01
+        2.139093e+01	9.999591e-01
+        2.181036e+01	1.000414e+00
+        2.222979e+01	1.000129e+00
+        2.264922e+01	9.999285e-01
+        2.306865e+01	1.000077e+00
+        ...
+        3.959419e+02	0.000000e+00
+        4.026528e+02	0.000000e+00
+        4.093637e+02	0.000000e+00
+        4.160746e+02	0.000000e+00
+        4.227854e+02	0.000000e+00
+        4.294963e+02	0.000000e+00
+
+        [RawCorrelationFunction]
+        ...
+
+        [IntensityHistory]
+        TraceNumber= 458
+        0.000000	9.628296e+03	9.670258e+03
+        0.262144	1.001358e+04	9.971619e+03
+        0.524288	9.540558e+03	9.548188e+03
+        0.786432	9.048462e+03	9.010315e+03
+        1.048576	8.815766e+03	8.819580e+03
+        1.310720	8.827210e+03	8.861542e+03
+        1.572864	9.201050e+03	9.185791e+03
+        1.835008	9.124756e+03	9.124756e+03
+        2.097152	9.059906e+03	9.029389e+03
+        ...
+
+    1. We are interested in the "[CorrelationFunction]" section,
+    where the first column denotes tau in seconds and the second column the
+    correlation signal. Values are separated by a tabulator "\t".
+    We do not import anything from the "[Parameters]" section.
+    We have to subtract "1" from the correlation function, since it
+    is a correlation function that converges to "1" and not to "0".
+
+    2. We are also interested in the "[IntensityHistory]" section.
+    If we are only interested in autocorrelation functions: An email
+    from Jixiang Zhu - Correlator.com (2012-01-22) said, that
+    "For autocorrelation mode, the 2nd and 3 column represent the same
+    intensity series with slight delay.  Therefore, they are statistically
+    the same but numerically different."
+    It is therefore perfectly fine to just use the 2nd column.
+
+    Different acquisition modes:
+    Mode            [CorrelationFunction]               [IntensityHistory]
+    Single Auto     2 Columns (tau,AC)                  1 significant
+    Single Cross    2 Columns (tau,CC)                  2
+    Dual Auto       3 Columns (tau,AC1,AC2)             2
+    Dual Cross      3 Columns (tau,CC12,CC21)           2
+    Quad            5 Columns (tau,AC1,AC2,CC12,CC21)   2
+
+    Returns:
+    [0]:
+     N arrays with tuples containing two elements:
+     1st: tau in ms
+     2nd: corresponding correlation signal
+    [1]:
+     N Intensity traces:
+     1st: time in ms
+     2nd: Trace in kHz
+    [2]:
+     A list with N elements indicating how many correlation
+     curves we are importing.
+    """
+    openfile = open(path, 'r')
     Alldata = openfile.readlines()
     # Find out where the correlation function and trace are
     for i in np.arange(len(Alldata)):
@@ -233,21 +365,16 @@ def openSIN(dirname, filename):
         traces.append([np.array(trace1), np.array(trace2)])
         traces.append([np.array(trace1), np.array(trace2)])
     else:
-        # We assume that we have a mode like this:
-        # Mode= 2 3 3 2 0 1 1 0
-        raise NotImplemented("This format is not yet implemented (issue #135).")
-        # TODO:
-        # - write test function for .sin correlator files
-        # - load correlation and traces from sin files in 2D arrays
-        # -> use this if, elif, elif, else loop to only assign curves
-        #    (to improve code quality)
+        raise NotImplementedError(
+            "'Mode' type '{}' in {} not supported by this method!".format(
+                Mode, path))
         
-    dictionary = dict()
+    dictionary = {}
     dictionary["Correlation"] = correlations
     dictionary["Trace"] = traces
     dictionary["Type"] = curvelist
     filelist = list()
     for i in curvelist:
-        filelist.append(filename)
+        filelist.append(os.path.basename(path))
     dictionary["Filename"] = filelist
     return dictionary
diff --git a/tests/test_file_formats.py b/tests/test_file_formats.py
index d43a125..c08f2b2 100644
--- a/tests/test_file_formats.py
+++ b/tests/test_file_formats.py
@@ -15,7 +15,7 @@ import data_file_dl
 import pycorrfit
 
 # Files that are known to not work
-exclude = ["sin/Correlator.com_Integer-Mode.SIN"]
+exclude = []
 
 
 def test_open():

-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/debian-med/pycorrfit.git



More information about the debian-med-commit mailing list