[med-svn] [smrtanalysis] 01/02: Imported Upstream version 2.3.0

Afif Elghraoui afif-guest at moszumanska.debian.org
Sat May 16 22:13:58 UTC 2015


This is an automated email from the git hooks/post-receive script.

afif-guest pushed a commit to branch master
in repository smrtanalysis.

commit bb310dba49397e687730c98d654f64f57f6ce1b5
Author: Afif Elghraoui <afif at ghraoui.name>
Date:   Sat May 16 02:15:03 2015 -0700

    Imported Upstream version 2.3.0
---
 README.md                                          |   35 +
 add_on_scripts/finetune_distributed/README.txt     |   17 +
 docs/Cannot-See-The-SMRT-Portal-Page.md            |   16 +
 docs/Cannot-create-a-mysql-database.md             |   20 +
 docs/Cannot-import-new-SMRT-Cells.md               |   37 +
 ...he-first-time-due-to-hibernate.dialect-error.md |   17 +
 ...lera-Assembler-deadlocks-in-distributed-mode.md |    5 +
 docs/Command-line-options.md                       |   65 +
 docs/Common-SMRT-Portal-Errors.md                  |   66 +
 ...or-Updating-Symlinks-to-SMRT-Portal-Services.md |   35 +
 ...c-file-using-a-setting-not-exposed-in-the-UI.md |    6 +
 ...iles-you-received-from-your-service-provider.md |   95 +
 docs/Default-parameters-are-set-conservatively.md  |    5 +
 docs/Delete-SMRT-Cells-from-SMRT-Portal.md         |   57 +
 docs/Distributed-Computing-Configuration.md        |  114 +
 .../Environment-variables-are-not-set-correctly.md |   15 +
 ...ME-on-an-existing-SMRT-Analysis-Installation.md |    8 +
 docs/Head-node-may-run-out-of-resources.md         |    3 +
 docs/Home.md                                       |    7 +
 ...-migrate-smrt-analysis-to-a-different-server.md |   56 +
 docs/How-to-uninstall-smrt-analysis.md             |   19 +
 ...ort-and-Manage-SMRT-Cell-Data-to-SMRT-Portal.md |   64 +
 docs/Installation-and-Upgrade-Summary.md           |   48 +
 docs/Installation-assumes-local-mysql-instance.md  |   19 +
 ...stalling-and-upgrading-the-techsupport-tools.md |   15 +
 docs/Instrument-Control-Web-Services-API.md        |  270 ++
 docs/Introduction.md                               |   12 +
 ...he-application-is-downloaded-from-the-server.md |    1 +
 docs/Job-fails-at-exactly-12-hour.md               |    3 +
 ...if-a-soft-link-resolves-to-2-different-paths.md |    5 +
 ...-object-(Symbol-table:-Can't-open-object)\".md" |   55 +
 ...t-of-Programs-in-$SEYMOUR_HOME-analysis-bin-.md |  862 +++++++
 docs/Log-File-Locations.md                         |  127 +
 ...ing-the-SMRT-Analysis-Installation-Directory.md |   91 +
 docs/Navigating-the-SMRT-Pipe-Job-Directory.md     | 2653 ++++++++++++++++++++
 docs/Official-Documentation.md                     |   38 +
 ...-HGAP-Assembly-protocol-fails-in-SMRT-Portal.md |  141 ++
 docs/Reference-upgrades-fail.md                    |   17 +
 ...ing-SMRT-View-in-a-different-tomcat-instance.md |   18 +
 ...class-\"org.slf4j.impl.StaticLoggerBinder\".md" |    4 +
 docs/SMRT-Analysis-Release-Notes-v2.0.1.md         |   16 +
 docs/SMRT-Analysis-Release-Notes-v2.0.md           |  101 +
 docs/SMRT-Analysis-Release-Notes-v2.1.1.md         |   23 +
 docs/SMRT-Analysis-Release-Notes-v2.1.md           |  101 +
 docs/SMRT-Analysis-Release-Notes-v2.2.0.md         |  123 +
 docs/SMRT-Analysis-Release-Notes-v2.2.0.p1.md      |   29 +
 docs/SMRT-Analysis-Release-Notes-v2.2.0.p2.md      |   29 +
 docs/SMRT-Analysis-Release-Notes-v2.2.0.p3.md      |   34 +
 docs/SMRT-Analysis-Software-Installation-v1.4.0.md |  349 +++
 docs/SMRT-Analysis-Software-Installation-v2.0.1.md |  383 +++
 docs/SMRT-Analysis-Software-Installation-v2.0.md   |  386 +++
 docs/SMRT-Analysis-Software-Installation-v2.1.md   |  346 +++
 docs/SMRT-Analysis-Software-Installation-v2.2.0.md |  592 +++++
 docs/SMRT-Analysis-Software-Installation-v2.3.0.md |  629 +++++
 docs/SMRT-Pipe-Reference-Guide-v2.0.md             | 1503 +++++++++++
 docs/SMRT-Pipe-Reference-Guide-v2.1.md             | 1580 ++++++++++++
 docs/SMRT-Pipe-Reference-Guide-v2.2.0.md           | 1569 ++++++++++++
 docs/SMRT-Pipe-Reference-Guide-v2.3.0.md           | 1413 +++++++++++
 docs/SMRT-Pipe-file-structure.md                   |   64 +
 docs/SMRT-Pipe-modules-and-their-parameters.md     |   48 +
 docs/SMRT-Pipe-tools.md                            |    5 +
 ...al-GMAP-\"No-such-file-or-directory\"-Error.md" |   17 +
 docs/SMRT-Portal-Job-Fails.md                      |   59 +
 docs/SMRT-Portal-Job-Status-Does-Not-Update.md     |   55 +
 docs/SMRT-Portal-Lost-administrator-password.md    |   33 +
 docs/SMRT-Portal-freezes.md                        |    6 +
 ...-connecting-to-the-smrtportal-mysql-database.md |    9 +
 ...SMRT-Portal-jobs-are-being-submitted-as-root.md |    8 +
 docs/SMRT-Portal-protocols.md                      |  138 +
 docs/SMRT-View-Crashes-While-Browsing.md           |   13 +
 docs/SMRT-View-Does-Not-Launch.md                  |   42 +
 ...RT-View-Security-Certificate-Warning-Message.md |   16 +
 docs/SMRT-View-does-not-launch.md                  |   12 +
 ...iew-does-not-show-reads-in-the-details-panel.md |   30 +
 ...ded-from-the-server-every-time-you-access-it.md |    5 +
 docs/SMRT-View-is-slow.md                          |   18 +
 docs/SMRT-View-runs-out-of-resources.md            |    3 +
 docs/SMRT-analysis-software-installation-v2.1.1.md |  332 +++
 docs/Secondary-Analysis-Web-Services-API-v2.0.md   | 1513 +++++++++++
 docs/Secondary-Analysis-Web-Services-API-v2.1.md   | 1480 +++++++++++
 docs/Secondary-Analysis-Web-Services-API-v2.2.0.md | 1414 +++++++++++
 docs/Secondary-Analysis-Web-Services-API-v2.3.0.md | 1463 +++++++++++
 ...-java.lang.outofmemoryerror:-java-heap-space.md |   24 +
 docs/Specifying-SMRT-Pipe-inputs.md                |   37 +
 docs/Specifying-SMRT-Pipe-parameters.md            |   83 +
 docs/Step-3:-Extract-the-Tarball.md                |   20 +
 docs/Step-5,-Option-2:-Run-the-Upgrade-Script.md   |   14 +
 docs/Step-5:-Run-the-Installation-Script.md        |   22 +
 ...Installations-Only)-Set-Up-User-Data-Folders.md |   17 +
 ...tallations-Only)-Set-Up-SMRT\302\256-Portal.md" |   19 +
 ...al-and-Automatic-Secondary-Analysis-Services.md |    2 +
 docs/Stopping-Celera-Assembler-jobs.md             |   14 +
 docs/The-Reference-Repository.md                   |   28 +
 docs/The-configure_smrtanalysis.sh-script-fails.md |   24 +
 ...very-slow-when-more-than-20-jobs-are-running.md |    5 +
 docs/The-job-is-very-slow.md                       |   94 +
 docs/Timing-problem-with-jobs.md                   |    5 +
 ...ing---configure_smrtanalysis.sh-Script-Fails.md |    5 +
 ...roubleshooting-Kodos-Secondary-Auto-Analysis.md |  179 ++
 docs/Troubleshooting-the-SMRT-Analysis-Suite.md    |   77 +
 docs/Troubleshooting_everything.md                 |  213 ++
 ...-not-have-the-correct-permissions-to-upgrade.md |   30 +
 docs/Using-the-command-line.md                     |   21 +
 docs/Using-the-techsupport-tools.md                |   20 +
 docs/Verify-the-installation.md                    |   23 +
 ...astructure-is-compatible-with-SMRT-Analysis?.md |   40 +
 ...ta-storage-is-compatible-with-SMRT-Analysis?.md |   23 +
 ...g-the-sync-option-in-SGE,-you-may-see-errors.md |   27 +
 ...You-can-start-SMRT-Portal,-but-cannot-log-in.md |   12 +
 docs/_Footer.md                                    |    3 +
 docs/_Header.md                                    |    1 +
 docs/_preview.md                                   |  381 +++
 docs/qsub:-command-not-found.md                    |   57 +
 ...-\"Unable-to-parse-SF-readId:-%s\"-%-readId.md" |    3 +
 114 files changed, 22653 insertions(+)

diff --git a/README.md b/README.md
new file mode 100644
index 0000000..9366638
--- /dev/null
+++ b/README.md
@@ -0,0 +1,35 @@
+## Overview
+
+SMRT<sup><small>®</small></sup> Analysis is a powerful, open-source bioinformatics software suite available for analysis of DNA sequencing data from [Pacific Biosciences](http://www.pacificbiosciences.com)’ SMRT technology.
+
+Users can choose from a variety of analysis protocols that utilize PacBio<sup><small>®</small></sup> and third-party tools. Analysis protocols include _de novo_ genome assembly, cDNA mapping, DNA base-modification detection, and long-amplicon analysis to determine phased consensus sequences.
+
+The browser-based SMRT Portal GUI offers push-button analysis, allowing the user to create, submit, and monitor analysis jobs. The underlying algorithms can be accessed on the command-line for pipeline development with SMRT Pipe, utilizing LIMS-friendly APIs and a large collection of utilities for working with PacBio bas.h5 files and other common file formats. SMRT View is an application for graphical visualization of processed and annotated sequence data, including kinetics data unique  [...]
+
+SMRT Analysis can run in single, distributed, or mixed modes. An Amazon Web Service (AWS) cloud-based implementation is available.
+
+
+## Getting Support
+
+[__PacBio Developer's Network Website__](http://pacbiodevnet.com)
+
+Visit PacBio DevNet for the most up-to-date downloads, documentation and more.
+
+
+[__PacBio Customer Portal__](http://www.pacbioportal.com)
+
Support for PacBio customers is available through the Customer Portal.
+
+
+[__SMRT Analysis GitHub wiki__](https://github.com/PacificBiosciences/SMRT-Analysis/wiki)
+
+For customers of PacBio [Sequencing Providers](http://www.pacificbiosciences.com/support/sequencing_provider/).
+
+
+
+
+[![githalytics.com alpha](https://cruel-carlota.pagodabox.com/104b77caac44b82e52bce19ad64c9c0b "githalytics.com")](http://githalytics.com/github.com/PacificBiosciences)
+
+
+
+[![githalytics.com alpha](https://cruel-carlota.pagodabox.com/28728759ba8fe51b8c1c0e6b39f6e339 "githalytics.com")](http://githalytics.com/PacificBiosciences/SMRT-Analysis)
diff --git a/add_on_scripts/finetune_distributed/README.txt b/add_on_scripts/finetune_distributed/README.txt
new file mode 100644
index 0000000..25813c3
--- /dev/null
+++ b/add_on_scripts/finetune_distributed/README.txt
@@ -0,0 +1,17 @@
+AUTHOR
+
+Tamas Vince
+
+DATE OF INITIAL COMMIT
+
+???
+
+PURPOSE
+
+This script allows SMRT Analysis to use more threads when running Consensus jobs in distributed mode.  Other non-multi-threaded jobs can still be run using a few cores if NPROC = 1.
+
+USAGE
+
+???
+
+
diff --git a/docs/Cannot-See-The-SMRT-Portal-Page.md b/docs/Cannot-See-The-SMRT-Portal-Page.md
new file mode 100644
index 0000000..25734e0
--- /dev/null
+++ b/docs/Cannot-See-The-SMRT-Portal-Page.md
@@ -0,0 +1,16 @@
+**Cause 1:  The hostname and/or port is incorrect.**  The hostname and port were designated when you installed smrtanalysis. For example, if you want your SMRT Portal URI to be `http://server1:8080/smrtportal/`, the hostname should be set as **server1** and the port should be set as **8080**. To reset these variables, rerun `configure_smrtanalysis.sh` and enter **server1** and **8080** when prompted by the script.
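+
+As a quick check that the portal answers at the configured address, you can request the page headers with `curl` (using the example hostname and port above):
+
+```
+curl -I http://server1:8080/smrtportal/
+```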
+
+**Cause 2:  Networking issues.**  The client computer/laptop cannot see the server. SMRT Portal is a client-server application - this means that the client computer (your laptop) must know that the server computer (server1) exists somewhere in its network. Some institutions require a VPN to log in to their network. Ask your network administrator what hostname to assign to SMRT Portal so that client computers can recognize it.
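+
+To test basic connectivity from the client machine, you can probe the server and port directly (a sketch; `server1` and `8080` are the example values above, and `nc` may not be installed everywhere):
+
+```
+ping -c 3 server1
+nc -zv server1 8080
+```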
+
+**Cause 3:  Tomcat is not on.**  Tomcat is the webserver that hosts SMRT Portal. It must be turned on for SMRT Portal to function. When it is **off**, the ps command below returns only a single line. When it is **on**, the ps command below returns an additional line detailing the path to the tomcatd process.
+
+```
+user at server1$ ps -ef | grep tomcat
+71063    15603 23660  0 16:43 pts/15   00:00:00 grep tomcat
+
+user at server1$ /opt/smrtanalysis/etc/scripts/tomcatd start
+
+user at server1$ ps -ef | grep tomcat
+71063    15603 23660  0 16:43 pts/15   00:00:00 grep tomcat
+71109    16203     1  0 Dec05 ?        00:55:17 /opt/smrtanalysis-1.3.3//redist/java/bin/java -Djava.util.logging.config.file=/opt/smrtanalysis-1.3.3//redist/tomcat/conf/logging.properties -d64 -server -Xmx8g -Djava.library.path=/opt/smrtanalysis-1.3.3//common/lib -Djava.security.auth.login.config=/opt/smrtanalysis-1.3.3//redist/tomcat/conf/kerb5.conf -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.endorsed.dirs=/opt/smrtanalysis-1.3.3//redist/tomcat/endorsed -cl [...]
+```
diff --git a/docs/Cannot-create-a-mysql-database.md b/docs/Cannot-create-a-mysql-database.md
new file mode 100644
index 0000000..9590520
--- /dev/null
+++ b/docs/Cannot-create-a-mysql-database.md
@@ -0,0 +1,20 @@
+### Credentials problem:
+To install SMRT Portal, you **must** know the credentials to log into your mysql server as root.  The ``configure_smrtanalysis.sh`` script will prompt you for the username (default root) and the password (default no password).  
+
+You can test different passwords to see if you can log in by typing the following:
+
+    mysql -u <root> -p <mysql>
+
+If you do not know your password or your system administrator has forgotten it, you may want to try resetting it like so:
+http://dev.mysql.com/doc/refman/5.0/en/resetting-permissions.html
+
+### Bind address problem:
+SMRT Portal has difficulty connecting to the smrtportal mysql database after installation if you have a unique setting in your mysql ``my.cnf`` file.  The following is a typical error when you try to create the first administrator user:
+
+```
+Error listing Users. Case; 'hibernate.dialect' must be set when no Connection available.
+```
+Check whether you are using a non-default bind address:
+`grep bind /etc/mysql/my.cnf`
+
+If you changed the bind address to something other than the default 127.0.0.1, then you need to replace ``localhost`` in the `$SEYMOUR_HOME/redist/tomcat/webapps/smrtportal/WEB-INF/classes/META-INF/persistence.xml` file with the actual IP address or hostname of the server running mysql.
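+
+A minimal sketch of that edit using `sed` (the IP address is a placeholder for your actual mysql host; `-i.bak` keeps a backup of the original file):
+
+```
+sed -i.bak 's/localhost/192.0.2.10/' $SEYMOUR_HOME/redist/tomcat/webapps/smrtportal/WEB-INF/classes/META-INF/persistence.xml
+```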
\ No newline at end of file
diff --git a/docs/Cannot-import-new-SMRT-Cells.md b/docs/Cannot-import-new-SMRT-Cells.md
new file mode 100644
index 0000000..6062206
--- /dev/null
+++ b/docs/Cannot-import-new-SMRT-Cells.md
@@ -0,0 +1,37 @@
+The file structure needed to import SMRT Cells is a top-level directory that contains the `metadata.xml` file and an `Analysis_Results` directory that contains one `bas.h5` file and three `bax.h5` files.  
+
+```
+A01_1
+    m130227_052443_42141_c100505662540000001823074808081360_s1_p0.metadata.xml
+    Analysis_Results
+        m130227_052443_42141_c100505662540000001823074808081360_s1_p0.bas.h5
+        m130227_052443_42141_c100505662540000001823074808081360_s1_p0.1.bax.h5
+        m130227_052443_42141_c100505662540000001823074808081360_s1_p0.2.bax.h5
+        m130227_052443_42141_c100505662540000001823074808081360_s1_p0.3.bax.h5
+A01_2
+    m130227_075201_42141_c100505662540000001823074808081361_s1_p0.metadata.xml
+    Analysis_Results
+        m130227_075201_42141_c100505662540000001823074808081361_s1_p0.bas.h5
+        m130227_075201_42141_c100505662540000001823074808081361_s1_p0.1.bax.h5
+        m130227_075201_42141_c100505662540000001823074808081361_s1_p0.2.bax.h5
+        m130227_075201_42141_c100505662540000001823074808081361_s1_p0.3.bax.h5
+B01_1    
+    m130227_090546_42141_c100505662540000001823074808081362_s1_p0.metadata.xml
+    m130227_090546_42141_c100505662540000001823074808081362_s2_p0.metadata.xml
+    Analysis_Results
+        m130227_090546_42141_c100505662540000001823074808081362_s1_p0.bas.h5
+        m130227_090546_42141_c100505662540000001823074808081362_s1_p0.1.bax.h5
+        m130227_090546_42141_c100505662540000001823074808081362_s1_p0.2.bax.h5
+        m130227_090546_42141_c100505662540000001823074808081362_s1_p0.3.bax.h5
+
+        m130227_090546_42141_c100505662540000001823074808081362_s2_p0.bas.h5
+        m130227_090546_42141_c100505662540000001823074808081362_s2_p0.1.bax.h5
+        m130227_090546_42141_c100505662540000001823074808081362_s2_p0.2.bax.h5
+        m130227_090546_42141_c100505662540000001823074808081362_s2_p0.3.bax.h5
+```
+
+In this example, the sample in well `A01` was sequenced using two SMRT Cells, and the results from each cell were placed in separate `A01_1` and `A01_2` directories. Each SMRT Cell was run with a single long movie, as indicated by the `s1` suffix.
+
+By contrast, the sample in well `B01` was sequenced using two movies, producing two metadata.xml and bas.h5 files distinguished by `s1` and `s2` suffixes under the same `B01_1` top-level directory. The digit before the `s1` suffix indicates the SMRT Cell number in a given 8-pack. The sample in A01 was sequenced using the first `(0_s1)` and second `(1_s1)` SMRT Cell, and the sample in B01 was sequenced using the third `(2_s1)` SMRT Cell in the 8-pack.
+
+Make sure the directory uses this file structure, and execute `ls -l` to make sure that the smrtanalysis user has read permissions to the files and execute permissions to the directories.
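+
+A rough automated version of that check, assuming the smrtanalysis user relies on world (other) permissions; the top-level path is a placeholder:
+
+```
+# directories missing world execute permission
+find /path/to/smrtcells -type d ! -perm -o=x
+# files missing world read permission
+find /path/to/smrtcells -type f ! -perm -o=r
+```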
\ No newline at end of file
diff --git a/docs/Cannot-register-administrator-for-the-first-time-due-to-hibernate.dialect-error.md b/docs/Cannot-register-administrator-for-the-first-time-due-to-hibernate.dialect-error.md
new file mode 100644
index 0000000..4c7d667
--- /dev/null
+++ b/docs/Cannot-register-administrator-for-the-first-time-due-to-hibernate.dialect-error.md
@@ -0,0 +1,17 @@
+## 1.  Check the mysql database daemon:
+SMRT Portal depends on a mysql database backend.  It must be turned on.
+```
+service mysql status
+service mysql start
+```
+
+## 2. Check the mysql bind address:
+SMRT Portal has difficulty connecting to the MySQL database named `smrtportal` after installation if you have a unique setting in your MySQL my.cnf file. The following is a typical error when you try to create the first administrator user:
+
+`Error listing Users. Case; 'hibernate.dialect' must be set when no Connection available.`
+
+To resolve this issue, first check whether the MySQL bind address has been changed to something other than the default 127.0.0.1:
+
+`grep bind /etc/mysql/my.cnf`
+
+If the return is something other than the default 127.0.0.1, then you need to replace localhost in the `$SEYMOUR_HOME/redist/tomcat/webapps/smrtportal/WEB-INF/classes/META-INF/persistence.xml` file with the actual IP address or hostname of the server running MySQL.
\ No newline at end of file
diff --git a/docs/Celera-Assembler-deadlocks-in-distributed-mode.md b/docs/Celera-Assembler-deadlocks-in-distributed-mode.md
new file mode 100644
index 0000000..cd4d4b8
--- /dev/null
+++ b/docs/Celera-Assembler-deadlocks-in-distributed-mode.md
@@ -0,0 +1,5 @@
+This is a known issue when the protocol is run on a PSSC cluster. 
+
+The general workaround is to increase the number of available slots for jobs by either (a) increasing the number of available nodes or (b) decreasing ``NPROC`` in the ``smrtpipe.rc`` file.   
+
+You may also edit an existing spec file to set the number of slots in the SGE settings.
\ No newline at end of file
diff --git a/docs/Command-line-options.md b/docs/Command-line-options.md
new file mode 100644
index 0000000..1b88836
--- /dev/null
+++ b/docs/Command-line-options.md
@@ -0,0 +1,65 @@
+Following are some of the available options for invoking ``smrtpipe.py``:
+
+```
+-D key=value
+```
+
+* Overrides a configuration variable. Configuration variables are key-value pairs that are read from the global file ``smrtpipe.rc`` before starting an analysis. An example is the ``NPROC`` variable which controls the number of simultaneous processors to use during the analysis. To restrict SMRT Pipe to 4 processors, use ``-D NPROC=4``.
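+
+For example, a hypothetical full invocation combining this override with a parameter file (the file names are placeholders, and `xml:input.xml` is the conventional SMRT Pipe input specification):
+
+```
+smrtpipe.py -D NPROC=4 --params=params.xml xml:input.xml
+```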
+
+```
+--debug
+```
+* Activates debugging output in the stderr and log outputs. To set this flag as a default, specify ``DEBUG=True`` in the ``smrtpipe.rc`` file.
+
+```
+--distribute
+```
+* Distributes the computation across a compute cluster. For information on configuring SMRT Pipe for a distributed computation environment, see SMRT Analysis Software Installation. (Add link)
+
+```
+--help
+```
+* Displays information about command-line usage and options, and then exits.
+
+```
+--noreports
+```
+* Turns off the production of XML/HTML/PNG reports.
+
+```
+--nohtml
+```
+* Turns off the conversion of XML reports into HTML. (This conversion **requires** that Java be installed.)
+
+```
+--output=outputDir
+```
+
+* Specifies a root directory to use for all SMRT Pipe outputs for this analysis. SMRT Pipe places outputs in this directory, as well as in data, results, and log subdirectories.
+
+```
+--params=params.xml
+```
+* Specifies a settings XML file for running the pipeline analysis. If this option is **not** specified, SMRT Pipe prints a message and then exits.
+
+```
+--totalCells
+```
+* Specifies that if the number of cells in the job is less than ``totalCells``, the job is **not** marked complete when it finishes. Data from additional cells will be appended to the outputs, until the number of cells reaches ``totalCells``. 
+
+```
+--recover
+```
+* Attempts to rerun a SMRT Pipe analysis starting from the last successful stage. The same initial arguments should be specified in this case.
+
+```
+--version
+```
+* Displays the version number of SMRT Pipe and then exits.
+
+```
+--kill
+```
+* Kills a SMRT Pipe job running in the current directory. This works with the ``--output`` option.
+
+
diff --git a/docs/Common-SMRT-Portal-Errors.md b/docs/Common-SMRT-Portal-Errors.md
new file mode 100644
index 0000000..5e4f8cf
--- /dev/null
+++ b/docs/Common-SMRT-Portal-Errors.md
@@ -0,0 +1,66 @@
+## No permissions to shared memory
+If the smrtanalysis user cannot access shared memory, this error will appear in the smrtpipe.log or master.log file:
+```
+"/opt/smrtanalysis/install/smrtanalysis-2.1.1.128549/redist/python2.7/lib/python2.7/multiprocessing/synchronize.py",
+line 75, in __init__
+    sl = self._semlock = _multiprocessing.SemLock(kind, value, maxvalue)
+OSError: [Errno 13] Permission denied
+```
+This is a general OS error described here: http://stackoverflow.com/questions/2009278/python-multiprocessing-permission-denied
+
+To verify that this is the problem, check the permissions on /dev/shm on your head node and *ALL* of your compute nodes:
+```
+$ ls -ld /dev/shm
+drwxrwxrwt 2 root root          40 2010-01-05 20:34 shm
+```
+
+To fix this problem, grant the smrtanalysis user read and write access to /dev/shm.  One way to do this is simply to open up its permissions:
+```
+sudo chmod 777 /dev/shm
+```
+
+
+
+## Corrupted SMRT Cells
+SMRT Portal only checks that a metadata.xml, bas.h5, and bax.h5 file exist.  It does not check that these files are valid and in the correct format.  If they are in fact corrupted, the smrtpipe job will fail immediately, with this error appearing in the first module after P_Fetch (usually P_Filter):
+
+```
+[ERROR] 2013-11-14 09:27:28,952 [pbpy.smrtpipe.engine.SmrtPipeTasks run 660] > ERROR:root:I/O Error in accessing pls.h5 files. (/opt/smrtanalysis/common/jobs/016/016615/input.chunk002of006.fofn)
+[ERROR] 2013-11-14 09:27:28,953 [pbpy.smrtpipe.engine.SmrtPipeTasks run 660] > ERROR:root:[Errno None] Unable to process H5 file: /path/to/smrtcell/directory/A02_2/Analysis_Results/m131112_224710_42196_c100580172550000001823090204021445_s1_p0.3.bax.h5
+[ERROR] 2013-11-14 09:27:28,953 [pbpy.smrtpipe.engine.SmrtPipeTasks run 660] > None
+[ERROR] 2013-11-14 09:27:28,953 [pbpy.smrtpipe.engine.SmrtPipeTasks run 660] > Your job 8240 ("Ffil016615") has been submitted
+[ERROR] 2013-11-14 09:27:28,953 [pbpy.smrtpipe.engine.SmrtPipeTasks run 660] > Job 8240 exited with exit code 1.
+[ERROR] 2013-11-14 09:27:31,033 [pbpy.smrtpipe.engine.SmrtPipeTasks __run_task 806] task://016615/P_Filter/filter_003of006 returned non-zero exit status (1)
+```
+
+You can verify the validity of your bax.h5 file by doing the following:
+
+1. Verify that the file is the correct size by comparing its size with other bax.h5 files using `ls -l`.
+
+2. Check that the h5 file is valid using any Linux HDF5 command, such as `h5ls`.  The output should look like this if the file is valid:
+```
+Analysis_Results$ h5ls m130929_174907_42175_c100588662550000001823089804281484_s1_p0.1.bax.h5
+PulseData                Group
+ScanData                 Group
+```
+The output will look like this if it is invalid:
+```
+/Analysis_Results$ h5ls m130929_174907_42175_c100588662550000001823089804281484_s1_p0.1.bax.h5
+m130929_174907_42175_c100588662550000001823089804281484_s1_p0.1.log: unable to open file
+```
+3.  Check that there is data in the h5 file using `h5dump`.  The output should look something like this:
+```
+Analysis_Results$ h5dump m130929_174907_42175_c100588662550000001823089804281484_s1_p0.bas.h5 |head
+HDF5 "m130929_174907_42175_c100588662550000001823089804281484_s1_p0.bas.h5" {
+GROUP "/" {
+   GROUP "MultiPart" {
+      DATASET "HoleLookup" {
+         DATATYPE  H5T_STD_U32LE
+         DATASPACE  SIMPLE { ( 163482, 2 ) / ( 163482, 2 ) }
+         DATA {
+         (0,0): 0, 1,
+         (1,0): 1, 1,
+         (2,0): 2, 1,
+```
+
+Unfortunately, if the data is in fact corrupted, SMRT Portal will not be able to use it for any analysis.
\ No newline at end of file
diff --git a/docs/Creating-or-Updating-Symlinks-to-SMRT-Portal-Services.md b/docs/Creating-or-Updating-Symlinks-to-SMRT-Portal-Services.md
new file mode 100644
index 0000000..0a3c18f
--- /dev/null
+++ b/docs/Creating-or-Updating-Symlinks-to-SMRT-Portal-Services.md
@@ -0,0 +1,35 @@
+When installing or updating SMRT Analysis as a non-superuser, sometimes symlinks to Tomcat and Kodos services are not created or changed from the previous installation.  These symlinks will have to be created manually by a user with write permission to `/etc/init.d/`.
+
+## Installation
+
+**sudo user required for installation steps**
+
+### Link to the script from `/etc/init.d/`
+```
+$ sudo rm /etc/init.d/tomcatd
+$ sudo ln -sfn $SMRT_ROOT/admin/bin/smrtportald-initd /etc/init.d/smrtportald-initd
+$ sudo ln -sfn $SMRT_ROOT/admin/bin/kodosd /etc/init.d/kodosd
+```
+
+## Manual operations
+
+### Starting the daemon
+
+```
+$ /etc/init.d/smrtportald-initd start
+$ /etc/init.d/kodosd start
+```
+
+### Stopping the daemon
+
+```
+$ /etc/init.d/smrtportald-initd stop
+$ /etc/init.d/kodosd stop
+```
+
+### Restarting the daemon
+
+```
+$ /etc/init.d/smrtportald-initd restart
+$ /etc/init.d/kodosd restart
+```
\ No newline at end of file
diff --git a/docs/Customizing-a-spec-file-using-a-setting-not-exposed-in-the-UI.md b/docs/Customizing-a-spec-file-using-a-setting-not-exposed-in-the-UI.md
new file mode 100644
index 0000000..43736ea
--- /dev/null
+++ b/docs/Customizing-a-spec-file-using-a-setting-not-exposed-in-the-UI.md
@@ -0,0 +1,6 @@
+To customize a Celera Assembler spec file using a setting not exposed in the UI:
+
+1. Run a job using the Celera Assembler workflow.
+2. Download the spec file or copy from the job data file; both portions of the workflow generate their own spec file.
+3. Edit the spec file and place in a location that is visible to the compute nodes.
+4. Create a new Celera Assembler job using the new spec file(s) in the protocol settings.
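+
+A minimal command-line sketch of steps 2 and 3 (the job path, spec file, and destination are illustrative only):
+
+```
+cp /opt/smrtanalysis/userdata/jobs/016/016440/data/*.spec /shared/specs/
+# edit the copied spec file, then reference /shared/specs/<file>.spec in the protocol settings
+```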
\ No newline at end of file
diff --git a/docs/Data-files-you-received-from-your-service-provider.md b/docs/Data-files-you-received-from-your-service-provider.md
new file mode 100644
index 0000000..b8381ed
--- /dev/null
+++ b/docs/Data-files-you-received-from-your-service-provider.md
@@ -0,0 +1,95 @@
+This page describes some of the data files you received from your service provider.
+
+##Primary Analysis Data##
+
+This is data **directly** generated by a PacBio RS II run. 
+* The ``Primary`` directory includes one subdirectory for **each** run.
+* Each run directory includes a subdirectory for **each** SMRT Cell used in the run.
+* Each SMRT Cell directory includes an ``Analysis_Results`` subdirectory, which contains output files of interest. **Example:**
+
+```
+/path/to/secondary/storage/2420294/0011
+├── Analysis_Results
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.1.bax.h5
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.1.log
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.1.subreads.fasta
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.1.subreads.fastq
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.2.bax.h5
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.2.log
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.2.subreads.fasta
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.2.subreads.fastq
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.3.bax.h5
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.3.log
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.3.subreads.fasta
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.3.subreads.fastq
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.bas.h5
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.sts.csv
+│   └── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.sts.xml
+├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.1.xfer.xml
+├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.2.xfer.xml
+├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.3.xfer.xml
+├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.mcd.h5
+└── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.metadata.xml
+```
+
+For information on the main files of interest, see: 
+
+* [bas.h5 Reference Guide](https://s3.amazonaws.com/files.pacb.com/software/instrument/2.0.0/bas.h5+Reference+Guide.pdf) **(PDF)**:  
+Describes the main output files produced by the primary analysis pipeline: ``bas.h5``, ``.1.bax.h5``, ``.2.bax.h5``, and ``.3.bax.h5``. The ``bax.h5`` files contain base call information from the sequencing run. The ``bas.h5`` file is essentially a pointer to the three ``bax.h5`` files.
+
+* [Metadata Output Guide](https://s3.amazonaws.com/files.pacb.com/software/instrument/2.0.0/Metadata+Output+Guide.pdf) **(PDF)**: Describes the file ``metadata.xml``, which contains top-level information about the data, including what sequencing enzyme and chemistry were used, sample name, and other metadata. 
+
+* [Statistics Output Guide](https://s3.amazonaws.com/files.pacb.com/software/instrument/1.3.1/Statistics+Output+Guide.pdf) **(PDF)**: Describes the file ``sts.xml``, which includes summary statistics from a single movie acquisition.
+
+
+##Secondary Analysis Data##
+
+This is data produced by secondary analysis, which is performed on the primary analysis data generated by the instrument. 
+
+  * All files for a specific job reside in **one** directory that is named according to the job ID number 
+  * Every SMRT Portal job has the following structure. **Example:**
+
+    ```
+    /path/to/smrtanalysis/userdata/jobs/016/016234
+    ├── data/
+    ├── results/ 
+    ├── log/   
+    ├── workflow/ 
+    ├── job.sh
+    ├── input.xml 
+    └── settings.xml
+    ```
+    * ``data`` is a **directory** that contains intermediate and final data files for the analysis job
+    * ``results`` is a **directory** that contains summary statistics and plots for the analysis job
+    * ``log`` is a **directory** that contains all log files for the analysis job
+    * ``workflow`` is a **directory** that contains all the executables for the analysis job
+    * ``job.sh`` is an executable file used by SMRT Portal to run the `smrtpipe.py` analysis job
+    * ``input.xml`` is a .xml file containing a list of input `bax.h5` files used to run the analysis job
+    * ``settings.xml`` is a .xml file containing the parameters needed to perform the analysis job
+
+  * For more detail on specific protocol outputs, see [[Navigating the SMRT Pipe Job Directory]].
+
+Within the ``data`` directory are several types of output files. You can use these data files as input for further downstream processing, pass them on to collaborators, or upload them to public genome sites. Depending on the protocol being performed, the ``data`` directory contains files in the following formats:
+
+* **cmp.h5:**  The primary sequence alignment file for SMRT sequencing data. (Click [here](https://s3.amazonaws.com/files.pacb.com/software/smrtanalysis/1.4/doc/cmp.h5+Reference+Guide.pdf) **(PDF)** for further details.)  
+* **H5:** ``Hierarchical Data Format``; a file-system-like data format. (Click [here](http://www.hdfgroup.org/HDF5/doc/H5.intro.html) for further details.)
+* **SAM:** ``Sequence Alignment Map`` is a generic nucleotide alignment format that describes the alignment of query sequences or sequencing reads to a reference sequence or assembly. (Click [here](http://samtools.sourceforge.net/) for further details.)
+* **BAM:** Binary version of the ``Sequence Alignment Map (SAM)`` format. (Click [here](http://genome.ucsc.edu/goldenPath/help/bam.html) for further details.)
+* **BAI:** The index file for a file generated in the BAM format. (This is a non-standard file type.)
+* **FASTA:** FASTA-formatted sequence files contain either nucleic acid sequence (such as DNA) or protein sequence information. FASTA files can store multiple sequences in a single file. (Click [here](http://en.wikipedia.org/wiki/FASTA_format) for further details.)
+* **GFF:** ``General Feature Format``, used for describing genes and other features associated with DNA, RNA and protein sequences. (Click [here](http://genome.ucsc.edu/FAQ/FAQformat#format3) for further details.)
+* **VCF:** ``Variant Call Format``, a text format for storing sequence variants such as SNPs and indels. (Click [here](http://en.wikipedia.org/wiki/Variant_Call_Format) for further details.)
+* **BED:** Format that defines the data lines displayed in an annotation track. (Click [here](http://genome.ucsc.edu/FAQ/FAQformat#format1) for further details.)
+* **CSV:** Comma-Separated Values file. Can be viewed using Microsoft Excel or a text editor.
+* **GML:** An XML representation of the scaffold graph that results from scaffolding contigs using the AHA hybrid assembly algorithm.
+
+##SMRT Portal Reports##
+
+Your service provider included secondary analysis reports generated using SMRT Portal. 
+* For an explanation of the report fields, click [here](https://s3.amazonaws.com/files.pacb.com/software/smrtanalysis/2.2.0/Reports+-+Terminology.pdf) **(PDF)**. 
+
+##Downloading SMRT Analysis Software##
+
+* The latest version of the SMRT Analysis software is available [here](http://www.pacb.com/devnet/). 
+
+* Pacific Biosciences provides a free Amazon Machine Image that you can use to run SMRT Portal in the cloud. See [here](https://github.com/PacificBiosciences/Bioinformatics-Training/wiki/%22Installing%22-SMRT-Portal-the-easy-way---Launching-A-SMRT-Portal-AMI) for details.
\ No newline at end of file
diff --git a/docs/Default-parameters-are-set-conservatively.md b/docs/Default-parameters-are-set-conservatively.md
new file mode 100644
index 0000000..25fb5cf
--- /dev/null
+++ b/docs/Default-parameters-are-set-conservatively.md
@@ -0,0 +1,5 @@
+``pacBioToCA`` and Celera Assembler require modified parameters for optimal performance on large genomes. 
+
+The default parameters in the ``pacbio.spec`` file are set conservatively for resource usage due to the variety of possible deployments.
+
+For large genomes ( > 100 Mb), we recommend an SGE grid or a high-memory machine (256 GB of RAM) plus fine tuning of the ``ovl*`` settings in the ``pacbio.spec`` file for optimal performance.
\ No newline at end of file
diff --git a/docs/Delete-SMRT-Cells-from-SMRT-Portal.md b/docs/Delete-SMRT-Cells-from-SMRT-Portal.md
new file mode 100644
index 0000000..231e4f7
--- /dev/null
+++ b/docs/Delete-SMRT-Cells-from-SMRT-Portal.md
@@ -0,0 +1,57 @@
+While SMRT Portal jobs can be deleted through the user interface, SMRT Cells can **only** be deleted using the command-line.  The procedure is as follows:
+
+1.  Find the SMRT Cell(s) you wish to delete and write down the inputID.
+
+  Execute:
+
+```
+curl -H "Accept: text/tsv" -d 'options={"rows":0,"columnNames":["inputId","sampleName"]}' http://localhost:8080/smrtportal/api/inputs/
+```
+
+```
+$ curl -H "Accept: text/tsv" -d 'options={"rows":0,"columnNames":["inputId","sampleName"]}' http://localhost:8080/smrtportal/api/inputs/
+  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
+                                 Dload  Upload   Total   Spent    Left  Speed
+  0     0    0     0    0    57      0  21268 --:--:-- --:--:-- --:--:-- 21268inputId   sampleName
+78807   2kbLambda
+78808   test1
+78809   test2
+78810   sampleX
+78811   SampleY
+```
+
+
+   
+2.  Execute the command to delete the cell.
+
+    ```curl -u "<user>:<password>"  -X DELETE http://localhost:8080/smrtportal/api/inputs/<inputID>```
+
+    Use SMRT Portal login credentials, e.g. user = `administrator`.
+
+
+##Delete a SMRT Cell Input for a Specific Job##
+
+In this example, the job number is **82641**:
+```
+$ curl -d 'options={"columnNames":["inputId","sampleName"]}' http://localhost:8080/smrtportal/api/jobs/82641/inputs/
+{
+  "page" : 1,
+  "records" : 1,
+  "total" : 1,
+  "rows" : [ {
+    "inputId" : 135328,
+    "sampleName" : "2kb lambda"
+  } ]
+}
+```
+The inputID is **135328**.
+
+Next, delete the input 135328 using the following command:
+
+```
+$ curl -X DELETE -u user:password http://localhost:8080/smrtportal/api/inputs/135328
+{
+  "success" : true,
+  "message" : "Permanently deleted Input 135328"
+}
+```
\ No newline at end of file
diff --git a/docs/Distributed-Computing-Configuration.md b/docs/Distributed-Computing-Configuration.md
new file mode 100644
index 0000000..990f595
--- /dev/null
+++ b/docs/Distributed-Computing-Configuration.md
@@ -0,0 +1,114 @@
+##Introduction##
+
+SMRT Analysis provides support for grid computing and is compatible with several common batch-queuing systems, or job management schedulers. Pacific Biosciences has explicitly validated Sun Grid Engine (SGE), LSF and PBS.
+
+This section describes setup for SGE and gives guidance for extensions to other Job Management Systems.
+
+**Note**: Celera Assembler 7.0 will **only** work correctly with the SGE job management system. If you are **not** using SGE, you will need to **deactivate** the Celera Assembler protocols so that they do **not** display in SMRT Portal. To do so, rename the following files, located in ``common/protocols``:
+```
+RS_CeleraAssembler.1.xml to RS_CeleraAssembler.1.bak
+filtering/CeleraAssemblerSFilter.1.xml to CeleraAssemblerSFilter.1.bak
+assembly/CeleraAssembler.1.xml to CeleraAssembler.1.bak
+```
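+
+As shell commands, those renames might look like the following (a sketch; the protocols directory is assumed to live under ``$SMRT_ROOT/current/common/protocols``):
+
+```
+cd $SMRT_ROOT/current/common/protocols
+mv RS_CeleraAssembler.1.xml RS_CeleraAssembler.1.bak
+mv filtering/CeleraAssemblerSFilter.1.xml filtering/CeleraAssemblerSFilter.1.bak
+mv assembly/CeleraAssembler.1.xml assembly/CeleraAssembler.1.bak
+```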
+
+
+##Cluster JMS Configuration##
+
+###Global JMS Settings###
+
+Job submission parameters are specified globally in ``$SMRT_ROOT/current/analysis/etc/smrtpipe.rc``.
+
+Additional parameters specific to the JMS are configured in the template files for that JMS.
+
+###Configuring Templates###
+
+The central components for configuring job submission in SMRT Analysis are the **Job Management Templates**.  These files provide a flexible format for specifying how SMRT Analysis communicates with the resident job scheduler.
+
+In most cases, the template files are located in ``$SMRT_ROOT/current/analysis/etc/cluster/<JMS>/``.
+There are **two** templates which must be modified for your system:
+
+* ``$SMRT_ROOT/current/analysis/etc/cluster/<JMS>/start.tmpl`` is the legacy template used for assembly algorithms.
+* ``$SMRT_ROOT/current/analysis/etc/cluster/<JMS>/interactive.tmpl`` is the new template used for resequencing algorithms. The difference between the two is the additional requirement of a sync option in ``interactive.tmpl``. (``kill.tmpl`` is not used.)
+
+**Note**: We are in the process of converting **all** protocols to use only interactive.tmpl.
+
+To customize a JMS for a particular environment, edit or create ``$SMRT_ROOT/current/analysis/etc/cluster/<JMS>/start.tmpl`` and ``$SMRT_ROOT/current/analysis/etc/cluster/<JMS>/interactive.tmpl``.  For example, the installation includes the following sample ``start.tmpl`` and ``interactive.tmpl`` (respectively) for SGE:
+
+```
+qsub -pe smp ${NPROC} -S /bin/bash -V -q secondary -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} ${EXTRAS} ${CMD}
+qsub -S /bin/bash -sync y -V -q secondary -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} -pe smp ${NPROC} ${CMD}
+```
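+
+For comparison, a hypothetical ``start.tmpl`` for PBS using the same template variables might look like this (the queue name and resource syntax must be adapted to your site):
+
+```
+qsub -q secondary -l nodes=1:ppn=${NPROC} -V -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} ${CMD}
+```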
+
+### To support a new JMS:
+
+1. Create a new directory in ``$SMRT_ROOT/current/analysis/etc/cluster/`` under ``NEW_NAME``.
+2. In ``$SMRT_ROOT/current/analysis/etc/smrtpipe.rc``, change the ``CLUSTER_MANAGER`` variable to ``NEW_NAME``, as described in “[Global JMS Settings](#global-jms-settings)”.
+3. Once you have a new JMS directory specified, edit the ``interactive.tmpl`` and ``start.tmpl`` files for your particular setup.
+
+Sample `.tmpl` files for SGE, LSF and PBS are located at ``$SMRT_ROOT/current/analysis/etc/cluster``.
+
+## Specifying the PBS Job Management System
+
+PBS does **not** have a ``-sync`` option, so the interactive.tmpl file runs a script named qsw.py to simulate the functionality. You must edit **both** interactive.tmpl and start.tmpl. 
+
+1. Change the queue name to one that exists on your system. (This is the ``-q`` option.) 
+2. Change the parallel environment to one that exists on your system. (This is the ``-pe`` option.) 
+3. Make sure that ``interactive.tmpl`` calls the ``-PBS`` option.
+
+## Specifying the LSF Job Management System
+
+Create an ``interactive.tmpl`` file by copying the ``start.tmpl`` file and adding the ``-K`` functionality in the ``bsub`` call. Alternatively, you can edit the sample LSF templates.
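+
+A hypothetical LSF ``interactive.tmpl`` along those lines (``-K`` makes ``bsub`` block until the job completes; the queue name and options must match your site):
+
+```
+bsub -K -q secondary -n ${NPROC} -J ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} ${CMD}
+```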
+
+## Specifying other Job Management Systems
+
+We have **not** tested the ``-sync`` functionality on other systems. Find the equivalent to the ``-sync`` option for your JMS and create an ``interactive.tmpl`` file. If there is **no** ``-sync`` option available, you may need to edit the ``qsw.py`` script in ``$SEYMOUR_HOME/analysis/lib/python2.7/pbpy-0.1-py2.7.egg/EGG-INFO/scripts/qsw.py`` to add additional options for wrapping jobs on your system. 
+
+The code for PBS and SGE looks like the following: 
+```
+if '-PBS' in args:
+    args.remove('-PBS')
+    self.jobIdDecoder   = PBS_JOB_ID_DECODER
+    self.noJobFoundCode = PBS_NO_JOB_FOUND_CODE
+    self.successCode    = PBS_SUCCESS_CODE
+    self.qstatCmd       = "qstat"
+else:
+    self.jobIdDecoder   = SGE_JOB_ID_DECODER
+    self.noJobFoundCode = SGE_NO_JOB_FOUND_CODE
+    self.successCode    = SGE_SUCCESS_CODE
+    self.qstatCmd       = "qstat -j"
+```
+
+
+##Controlling Job Distribution in SMRT Portal##
+
+Job distribution is enabled/disabled with the `jobsAreDistributed` parameter in ``$SMRT_ROOT/current/redist/tomcat/webapps/smrtportal/WEB-INF/web.xml``
+
+###Enabling Distributed Processing in SMRT Portal###
+
+To **enable** distributed processing, set `jobsAreDistributed` to `true`:
+```
+<context-param>
+     <param-name>jobsAreDistributed</param-name>
+     <param-value>true</param-value>
+</context-param>
+```
+
+Restart the smrtportald-initd daemon:
+```
+$SMRT_ROOT/admin/bin/smrtportald-initd restart
+```
+
+###Disabling Distributed Processing in SMRT Portal###
+
+To **disable** distributed processing set `jobsAreDistributed` to `false`:
+```
+<context-param>
+     <param-name>jobsAreDistributed</param-name>
+     <param-value>false</param-value>
+</context-param>
+```
+
+Restart the smrtportald-initd daemon:
+```
+$SMRT_ROOT/admin/bin/smrtportald-initd restart
+```
\ No newline at end of file
diff --git a/docs/Environment-variables-are-not-set-correctly.md b/docs/Environment-variables-are-not-set-correctly.md
new file mode 100644
index 0000000..b6d48b2
--- /dev/null
+++ b/docs/Environment-variables-are-not-set-correctly.md
@@ -0,0 +1,15 @@
+All environment variables used by SMRT Analysis are specified in `/opt/smrtanalysis/analysis/etc/smrtpipe.rc`.  A common error occurs when the `TMP` and `SHARED_DIR` variables are set to directories that **do not** exist on your file system.  To correct the error, create the directories in the local file system, and give the directories write permissions.
+
+```
+sudo mkdir /scratch/  
+sudo chmod a+rwx /scratch/  
+
+sudo mkdir /opt/smrtanalysis/common/userdata/shared_dir/  
+sudo chmod a+rwx /opt/smrtanalysis/common/userdata/shared_dir/  
+```
+
+Then assign those directories to the environment variables by editing `smrtpipe.rc`:
+```
+TMP=/scratch/
+SHARED_DIR=/opt/smrtanalysis/common/userdata/shared_dir/
+```
diff --git a/docs/Finding-$SEYMOUR_HOME-on-an-existing-SMRT-Analysis-Installation.md b/docs/Finding-$SEYMOUR_HOME-on-an-existing-SMRT-Analysis-Installation.md
new file mode 100644
index 0000000..59af9e8
--- /dev/null
+++ b/docs/Finding-$SEYMOUR_HOME-on-an-existing-SMRT-Analysis-Installation.md
@@ -0,0 +1,8 @@
+Where is `$SEYMOUR_HOME`?
+
+Often, previous installations of SMRT Analysis will **not** be in the default path `/opt/smrtanalysis`.
+To find `$SEYMOUR_HOME` for a running instance of SMRT Analysis, enter the following:
+
+```
+ps -ef | grep 'redist/tomcat' | perl -ne 'print "$1/\n" if /.*-Dcatalina\.base=(\S*)\/redist\/tomcat.*/'
+```
\ No newline at end of file
diff --git a/docs/Head-node-may-run-out-of-resources.md b/docs/Head-node-may-run-out-of-resources.md
new file mode 100644
index 0000000..82b8ae5
--- /dev/null
+++ b/docs/Head-node-may-run-out-of-resources.md
@@ -0,0 +1,3 @@
+One consequence of using the ``qsw.py`` script is that for **each** job submitted, there is a corresponding process polling the job for completion on the head node. This effectively **limits** the number of jobs which can be run. If too many jobs are running simultaneously, the head node may run out of resources. 
+
+The solution is to submit fewer jobs simultaneously.
\ No newline at end of file
diff --git a/docs/Home.md b/docs/Home.md
new file mode 100644
index 0000000..a601c37
--- /dev/null
+++ b/docs/Home.md
@@ -0,0 +1,7 @@
+Welcome to the SMRT® Analysis wiki!
+
+Top pages:
+* [[Official Documentation]]
+* [[Troubleshooting the SMRT Analysis Suite]]
+* [Bioinformatics Training](https://github.com/PacificBiosciences/Bioinformatics-Training/wiki)
+
diff --git a/docs/How-to-migrate-smrt-analysis-to-a-different-server.md b/docs/How-to-migrate-smrt-analysis-to-a-different-server.md
new file mode 100644
index 0000000..637a005
--- /dev/null
+++ b/docs/How-to-migrate-smrt-analysis-to-a-different-server.md
@@ -0,0 +1,56 @@
+Migrating SMRT Portal to another server is a manual process.  You must migrate the mysql database that stores all the SMRT cell, job and user metadata. You must also update the hostname, and update the paths to the run directory if these have changed.  There are two ways to migrate SMRT Portal to a different server.  
+
+## Option 1:  Install and import
+Choose this option if you are less comfortable with troubleshooting on the command-line. You can also do this if you tried option 2 but it did not work or you encountered unexpected errors that you could not recover from.  The only downside to this option is that you have to import the old SMRT Portal jobs one at a time.
+
+### Step 1: Make a list of all the old job IDs that you wish to keep
+### Step 2: Install the latest SMRT Analysis on the new server
+### Step 3: Import SMRT Cells 
+Go to SMRT Portal --> Design Job --> Import and Manage --> Import SMRT Cells.
+Enter the path to the directory that contains all the SMRT Cells.
+Click Save --> select the new path --> click Scan.
+### Step 4: Import SMRT Portal Jobs
+Go to SMRT Portal --> Design Job --> Import and Manage --> Import Jobs.
+Copy old jobs to the `/opt/smrtanalysis/userdata/jobs_dropbox` directory.
+Select the jobs --> click Scan.
+
+
+
+## Option 2:  Move and re-configure
+Choose this option if you are comfortable with troubleshooting on the command-line.  Please read these instructions and any prompts very carefully.  If this migration does not work, you can always do option 1, which is more robust.
+
+### Step 1. Turn off web services
+```
+SMRT_ROOT=/opt/smrtanalysis
+$SMRT_ROOT/admin/bin/smrtportald-initd stop
+```
+
+### Step 2.  Move the $SMRT_ROOT directory to the new server
+Perform a file system move or copy of the `$SMRT_ROOT` directory and all of its subdirectories from the old host to the new host, and make sure the `smrtanalysis` user has read, write and execute permissions.
+```
+mv $SMRT_ROOT /path/on/new/server
+chmod -R 755 $SMRT_ROOT
+```
+
+Manually check that the following softlinks are valid in the new location:
+
+1. `$SMRT_ROOT/userdata` and its subdirectories should contain the data from the original host.
+
+2. `$SMRT_ROOT/tmpdir` should be a softlink pointing to a local directory that exists on the new host and all its child nodes.  
+
+3. `$SMRT_ROOT/shared_dir` should be a softlink pointing to an NFS-mounted directory that exists on the new host and all its child nodes.
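+
+A sketch of verifying those links on the new host:
+
+```
+ls -ld $SMRT_ROOT/userdata $SMRT_ROOT/tmpdir $SMRT_ROOT/shared_dir
+readlink -f $SMRT_ROOT/tmpdir      # should resolve to an existing local directory
+readlink -f $SMRT_ROOT/shared_dir  # should resolve to an existing NFS-mounted directory
+```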
+
+
+### Step 3.  Reconfigure the hostname
+Run `$SMRT_ROOT/etc/postinstall/scripts/configure_smrtanalysis.sh` and enter the hostname of the new server to configure smrtanalysis on your new server. Skip the database and distributed computing configuration steps.
+
+### Step 4: Refresh paths to SMRT Cells on the new server
+Go to SMRT Portal --> Design Job --> Import and Manage --> Import SMRT Cells.
+Enter the path to the directory that contains all the SMRT Cells.
+Click Save --> select the new path --> click Scan.
+
+### Step 5. Turn on web services
+```
+SMRT_ROOT=/opt/smrtanalysis
+$SMRT_ROOT/admin/bin/smrtportald-initd start
+```
diff --git a/docs/How-to-uninstall-smrt-analysis.md b/docs/How-to-uninstall-smrt-analysis.md
new file mode 100644
index 0000000..c8e69df
--- /dev/null
+++ b/docs/How-to-uninstall-smrt-analysis.md
@@ -0,0 +1,19 @@
+### Step 1:  Turn off services and remove softlinks
+```
+sudo /etc/init.d/tomcatd stop
+sudo /etc/init.d/kodosd stop
+sudo rm /etc/init.d/tomcatd
+sudo rm /etc/init.d/kodosd
+```
+
+### Step 2: Delete the mysql database
+```
+mysql -u smrtportal -psmrtportal --execute "drop database smrtportal"
+```
+
+### Step 3: Delete the smrtanalysis files
+```
+rm -rf <$SEYMOUR_HOME>
+```
+<$SEYMOUR_HOME> is the install location of SMRT Analysis, and is `/opt/smrtanalysis` by default.
+
diff --git a/docs/Import-and-Manage-SMRT-Cell-Data-to-SMRT-Portal.md b/docs/Import-and-Manage-SMRT-Cell-Data-to-SMRT-Portal.md
new file mode 100644
index 0000000..a43a487
--- /dev/null
+++ b/docs/Import-and-Manage-SMRT-Cell-Data-to-SMRT-Portal.md
@@ -0,0 +1,64 @@
+###Primary Analysis Overview###
+
+Once sequencing is initiated, the system’s computational blade center performs real-time signal processing, base calling, and quality assessment. Primary analysis data, including read length distribution, polymerase speed, and quality measurements, are streamed directly to the secondary analysis software. This data, as well as trace and pulse data, are also available through the RS Touch and RS Remote interfaces for quick assessment of a sequenced SMRT Cell.
+
+###What files are transferred to secondary storage from primary analysis on the RSII blade server?###
+
+Below is a typical directory hierarchy of files transferred from the primary analysis blade server to the secondary storage server:
+
+```
+/path/to/secondary/storage/2420294/0011
+├── Analysis_Results
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.1.bax.h5
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.1.log
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.1.subreads.fasta
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.1.subreads.fastq
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.2.bax.h5
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.2.log
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.2.subreads.fasta
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.2.subreads.fastq
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.3.bax.h5
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.3.log
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.3.subreads.fasta
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.3.subreads.fastq
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.bas.h5
+│   ├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.sts.csv
+│   └── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.sts.xml
+├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.1.xfer.xml
+├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.2.xfer.xml
+├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.3.xfer.xml
+├── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.mcd.h5
+└── m140415_143853_42175_c100635972550000001823121909121417_s1_p0.metadata.xml
+
+1 directory, 20 files
+```
+
+
+###What files are required for importing SMRT Cells into SMRT Portal?###
+
+To import SMRT Cells into SMRT Portal, the above directory structure must be preserved.  The minimum requirement for SMRT Cells to be recognized by SMRT Portal is the ``*.metadata.xml`` file and all ``*.bax.h5`` and ``*.bas.h5`` files. The bax.h5 files contain base call information from the sequencing run, and the bas.h5 file is essentially a pointer to the three bax.h5 files. The ``*.metadata.xml`` contains top level information about the data, including what sequencing enzyme and chemi [...]
+
+
+###SMRT Pipe Job Directory Hierarchy###
+
+SMRT Pipe job output directories all share the same basic top-level structure.
+```
+$SMRT_ROOT/userdata/jobs/<JOB_PREFIX>/<JOB_ID>/
+├── data/
+├── log/
+├── movie_metadata/
+├── reference/
+├── results/
+├── workflow/
+├── index.html
+├── input.fofn
+├── input.xml
+├── job.sh
+├── metadata.rdf
+├── settings.xml
+└── vis.jnlp
+```
+
+For more detail on specific protocol outputs, see [[Navigating the SMRT Pipe Job Directory]].
+
+For more information on File Format Specifications, visit [PacBio DevNet](http://www.pacbiodevnet.com).
\ No newline at end of file
diff --git a/docs/Installation-and-Upgrade-Summary.md b/docs/Installation-and-Upgrade-Summary.md
new file mode 100644
index 0000000..f4aa291
--- /dev/null
+++ b/docs/Installation-and-Upgrade-Summary.md
@@ -0,0 +1,48 @@
+Following are the steps for installing SMRT Analysis v1.4.0. For further details, click the links.
+
+1. Select an installation directory to assign to the ``$SEYMOUR_HOME`` environment variable. In this summary, we use ``/opt/smrtanalysis``. 
+
+2. Decide on a sudo user who will perform the installation. In this summary, we use ``<thisuser>``, who belongs to ``<thisgroup>``. 
+**Note**: The user installing SMRT Analysis **must** have sudo access.
+
+3. [Extract the tarball](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/Step-3:-Extract-the-Tarball) and softlink the directories:
+```
+tar -C /opt -xvvzf <tarball_name>.tgz
+ln -s /opt/smrtanalysis-1.4.0 /opt/smrtanalysis
+sudo chown -R <thisuser>:<thisgroup> /opt/smrtanalysis-1.4.0
+```
+4. Edit the setup script ``/opt/smrtanalysis-1.4.0/etc/setup.sh`` to match your installation location:
+```
+SEYMOUR_HOME=/opt/smrtanalysis
+```
+
+5. Run the appropriate script: 
+  * **Option 1**: If you are performing a **fresh** installation, run the [installation script](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/Step-5:-Run-the-Installation-Script):
+```
+  /opt/smrtanalysis/etc/scripts/postinstall/configure_smrtanalysis.sh
+```
+  * **Option 2**: If you are **upgrading** from v1.3.3 to v1.4.0 and want to preserve SMRT Cells, jobs, and users from a previous installation: Run the [upgrade script](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/Step-5,-Option-2:-Run-the-Upgrade-Script).
+```
+  /opt/smrtanalysis/etc/scripts/postinstall/upgrade_and_configure_smrtanalysis.sh
+```
+6. Set up [distributed computing](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/Step-6:-Set-up-Distributed-Computing) by deciding on a job management system (JMS), then edit the following files:
+```
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/start.tmpl
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/interactive.tmpl
+/opt/smrtanalysis/analysis/etc/smrtpipe.rc
+/opt/smrtanalysis/redist/tomcat/webapps/smrtportal/WEB-INF/web.xml
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/kill.tmpl
+```
+**Note:** If you are **not** using SGE, you will need to **deactivate** the Celera Assembler protocols so that they do **not** display in SMRT Portal. To do so, rename the following files, located in ``common/protocols``:
+```
+RS_CeleraAssembler.1.xml to RS_CeleraAssembler.1.bak
+filtering/CeleraAssemblerSFilter.1.xml to CeleraAssemblerSFilter.1.bak
+assembly/CeleraAssembler.1.xml to CeleraAssembler.1.bak
+```
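+For example, assuming the installation path used in this summary, the renames could be done as follows:
+```
+cd /opt/smrtanalysis/common/protocols
+mv RS_CeleraAssembler.1.xml RS_CeleraAssembler.1.bak
+mv filtering/CeleraAssemblerSFilter.1.xml filtering/CeleraAssemblerSFilter.1.bak
+mv assembly/CeleraAssembler.1.xml assembly/CeleraAssembler.1.bak
+```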
+7. **New Installations only**: [Set up user data folders](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/Step-7:-%28New-Installations-Only%29-Set-Up-User-Data-Folders) that point to external storage.
+
+8. **New Installations only**: [Set up SMRT Portal](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/Step-8:-%28New-Installations-Only%29-Set-Up-SMRT%C2%AE-Portal).
+
+9. [Start](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/Step-9:-Start-the-SMRT-Portal-and-Automatic-Secondary-Analysis-Services) the SMRT Portal and Automatic Secondary Analysis Services.
+
+10. [Verify](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/Verify-the-installation) the installation.
\ No newline at end of file
diff --git a/docs/Installation-assumes-local-mysql-instance.md b/docs/Installation-assumes-local-mysql-instance.md
new file mode 100644
index 0000000..c29761f
--- /dev/null
+++ b/docs/Installation-assumes-local-mysql-instance.md
@@ -0,0 +1,19 @@
+The installation assumes that the MySQL instance is **local**. If you need to configure SMRT Analysis to use a **remote** MySQL server, do the following:
+
+1. Execute `$SEYMOUR_HOME/etc/scripts/SMRTPortalSchema.sql` on the remote server to create the smrtportal database:
+```
+mysql -u someUser [-pSomePasswd] < SMRTPortalSchema.sql
+```
+2.  Execute `dbdata.sh` on the remote server to initialize the smrtportal database with data:
+```
+source /opt/smrtanalysis/etc/setup.sh
+/opt/smrtanalysis/etc/scripts/dbdata.sh
+```
+
+3. Edit `$SEYMOUR_HOME/redist/apache-tomcat-7.0.23/webapps/smrtportal/WEB-INF/classes/META-INF/persistence.xml` and, in the `javax.persistence.jdbc.url` value, change `localhost` (and the port, if necessary) in `mysql://localhost:port/smrtportal` to the hostname of the remote host.
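+
+As a hypothetical illustration (the remote host name is a placeholder), the value would change along these lines:
+```
+before:  mysql://localhost:3306/smrtportal
+after:   mysql://remote-db-host:3306/smrtportal
+```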
+
+4. Restart Tomcat:
+```
+$SEYMOUR_HOME/etc/scripts/tomcatd stop
+$SEYMOUR_HOME/etc/scripts/tomcatd start
+``` 
\ No newline at end of file
diff --git a/docs/Installing-and-upgrading-the-techsupport-tools.md b/docs/Installing-and-upgrading-the-techsupport-tools.md
new file mode 100644
index 0000000..44f7edc
--- /dev/null
+++ b/docs/Installing-and-upgrading-the-techsupport-tools.md
@@ -0,0 +1,15 @@
+The technical support tools are packaged as an add-on to SMRT Analysis and must be installed manually for versions 2.1.1 and 2.2; they are included as part of versions 2.3 and later. Note that these installs and upgrades can be performed at any time: they are independent of, and therefore will not affect, the SMRT Analysis installation.
+
+To install the tools for 2.1.1 or 2.2, run the following commands as the SMRT_USER (the user used to install and run SMRT Analysis) on your SMRT Analysis node:
+
+```
+wget https://s3.amazonaws.com/files.pacb.com/software/smrtanalysis/2.2.0/smrtanalysis-addon-support_2.2.0.139662.run
+SMRT_ROOT=<your smrtanalysis installation root dir>
+$SMRT_ROOT/admin/bin/smrtupdater smrtanalysis-addon-support_2.2.0.139662.run
+$SMRT_ROOT/admin/bin/techsupport --action get_techsupport_update
+```
+
+To upgrade the techsupport tools for any version:
+```
+$SMRT_ROOT/admin/bin/techsupport --action get_techsupport_update
+```
\ No newline at end of file
diff --git a/docs/Instrument-Control-Web-Services-API.md b/docs/Instrument-Control-Web-Services-API.md
new file mode 100644
index 0000000..c1b1e71
--- /dev/null
+++ b/docs/Instrument-Control-Web-Services-API.md
@@ -0,0 +1,270 @@
+* [Introduction](#Intro)
+* [Security](#Sec)
+* [LIMS Integration](#LIMS)
+* [Web Services Behavior](#WSB)
+* [HTTP Response Codes](#HCODE)
+* [Sample Sheet Service](#SSS)
+  * [Sample Sheet Validate Function](#SA_Val)
+  * [Sample Sheet Import Function](#SA_Imp)
+  * [Sample Sheet Export Function](#SA_Exp)
+  * [Sample Sheet Update Function](#SA_Upd)
+  * [Sample Sheet Delete Function](#SA_Del)
+* [Job Status Service](#JSS)
+  * [Plate Job Status Function](#JOB_Stat)
+  * [Acquisition Job Status Function](#JOB_Aq)
+  * [Reagent Mixing Job Status Function](#JOB_Rea)
+  * [Primary Analysis Job Status Function](#JOB_PAStat)
+  * [Primary Analysis Query Function](#JOB_PAQuery)
+  * [Transfer Status Query Function](#JOB_Trans)
+
+## <a name="Intro"></a> Introduction
+This document describes the Instrument Control Web Services API provided by Pacific Biosciences. The API allows developers to access some of the functionality provided by the PacBio® System. The API includes functions for:
+* **Registering** plates
+* Obtaining **job status information** about different types of jobs
+
+The latest version of the API and this documentation are available from the PacBio® Developer’s Network at http://www.pacbiodevnet.com.
+
+## <a name="Sec"></a> Security
+Pacific Biosciences does **not** currently enforce any access control at the web service level.
+* We assume that the instrument(s) and web services are hosted **inside** an intranet only, for security reasons.
+
+## <a name ="LIMS"></a> LIMS Integration
+The following example displays a typical use case of LIMS (Laboratory Information Management System) integration using the Instrument Control Web Services API.
+
+1. The LIMS creates and registers plates to the PacBio® System.
+2. The PacBio® System processes the plate job based on the imported sample sheet.
+3. The LIMS periodically queries the plate job status, as well as individual job acquisition status.
+4. The Primary Analysis Pipeline creates one base file for each acquisition job. Based on the results of the job status query, the LIMS decides how to process the Primary Analysis output files.
+
+## <a name ="WSB"></a> Web Services Behavior
+The interfaces for the following services follow the **RESTful** (Representational State Transfer) model for web services. The status of an operation is communicated by way of the HTTP response code:
+
+* If the method indicates **success**, the body of the HTTP response may contain the result of the query, if there is any.
+* If a response **includes data**, the data is formatted in **JSON** (JavaScript Object Notation) format, unless otherwise specified.
+* If a request returns an **error code**, the body of the response contains a JSON object that describes the nature of the error.
+
+## <a name ="HCODE"></a> HTTP Response Codes
+###Return values for queries using the HTTP GET call:###
+
+*  **Return Value**: ``200 OK``  **Explanation:** The web service call returned successfully. The body of the response contains the query result.
+
+*  **Return Value**: ``404 Not Found``  **Explanation:** The object requested was not found.
+
+*  **Return Value**: ``413 Request Entity Too Large``  **Explanation:** The request will return a response that is too large for the client.
+
+*  **Return Value**: ``500 Internal Server Error``  **Explanation:** An internal error occurred while processing the request.
+
+###Return values for queries using the HTTP PUT or POST call:###
+
+*  **Return Value**: ``200 OK``  **Explanation:** The PUT updated an existing object in the system. The PUT or POST operation was successful.
+
+*  **Return Value**: ``201 Created``  **Explanation:** The PUT created a new object on the system. The PUT or POST operation was successful.
+
+*  **Return Value**: ``400 Bad request``  **Explanation:** The contents of the PUT or POST were invalid and the data was rejected.
+
+*  **Return Value**: ``409 Conflict``  **Explanation:** The PUT or POST attempted to modify an existing object that cannot be modified. The PUT or POST operation was unsuccessful.
+
+*  **Return Value**: ``500 Internal Server Error``  **Explanation:** An internal error occurred while processing the request.
+
+**POST** requests **must** have content-type multipart/form-data, and the object(s) being uploaded must be added as input type ``file``.
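+
+For example, a multipart upload with ``curl`` might look like the following sketch (the form field name ``file``, the file name, and the host are placeholders/assumptions):
+```
+curl -s -H 'Expect: ' -F "file=@samplesheet.csv" http://host_ipaddress:8081/SampleSheet/Import
+```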
+
+## <a name ="SSS"></a> Sample Sheet Service
+The Sample Sheet Service includes functions to manage plate registration.
+
+**Important!** Use port number **80** or **8081** when specifying the URL for the following functions.
+
+### <a name="SA_Val"></a> Sample Sheet Validate Function
+Use this function to run validation on the sample sheet being uploaded.
+
+* **URL:**  ``/SampleSheet/Validate``
+* **Method:** ``PUT`` or ``POST``
+* **Input Format:**  Well-formatted native sample sheet format (CSV). For formatting details, see the document [Run Definition File Format](https://s3.amazonaws.com/files.pacb.com/software/instrument/2.1.0/Run+Definition+File+Format.pdf) **(PDF)**.
+* **Returns:** On success, includes a dictionary containing the barcode of the validated sample sheet, in JSON format.
+* **Example:** ``curl -s -H 'Expect: ' -T "filename_to_validate" http://host_ipaddress:8081/SampleSheet/Validate``
+
+### <a name="SA_Imp"></a> Sample Sheet Import Function
+Use this function to **import** a new sample sheet into the system, or to update an existing sample sheet with the same barcode ID (if updates are permitted).
+
+* **URL:**  ``/SampleSheet/Import``
+* **Method:** ``PUT`` or ``POST``
+* **Input Format:**  Well-formatted native sample sheet format (CSV). For formatting details, see the document [Run Definition File Format](https://s3.amazonaws.com/files.pacb.com/software/instrument/2.1.0/Run+Definition+File+Format.pdf) **(PDF)**.
+* **Returns:** On success, includes a dictionary containing the barcode of the imported (and validated) sample sheet, in JSON format.
+* **Example:** ``curl -s -H 'Expect: ' -T "filename_to_import" http://host_ipaddress:8081/SampleSheet/Import``
+
+### <a name="SA_Exp"></a> Sample Sheet Export Function
+Use this function to obtain a previously-imported sample sheet.
+
+* **URL:**  ``/SampleSheet/plate_id``
+* **Method:** ``GET``
+* **Parameters:**  ``plate_id``: The plate ID of the sample sheet.
+* **Returns:** The sample sheet requested, in native sample sheet format (CSV).
+* **Example:** ``curl -s -H 'Expect: ' http://host_ipaddress:8081/SampleSheet/XY12345``
+
+### <a name="SA_Upd"></a> Sample Sheet Update Function
+Use this function to **update** an existing sample sheet.
+
+* **URL:**  ``/SampleSheet/plate_id``
+* **Method:** ``PUT`` or ``POST``
+* **Input Format:**  Well-formatted native sample sheet format (CSV). For formatting details, see the document [Run Definition File Format](https://s3.amazonaws.com/files.pacb.com/software/instrument/2.1.0/Run+Definition+File+Format.pdf) **(PDF)**.
+* **Parameters:**  ``plate_id``: The plate ID of the sample sheet.
+* **Returns:** Success or failure is indicated through the HTTP response code.
+* **Example:** ``curl -s -H 'Expect: ' -T "filename_to_update" http://host_ipaddress:8081/SampleSheet/XY12345``
+
+### <a name="SA_Del"></a> Sample Sheet Delete Function
+Use this function to **delete** an existing sample sheet.
+
+* **URL:**  ``/SampleSheet/plate_id``
+* **Method:** ``DELETE``
+* **Parameters:**  ``plate_id``: The plate ID of the sample sheet.
+* **Returns:** Success or failure is indicated through the HTTP response code.
+* **Example:** ``curl -s -H 'Expect: ' -X DELETE http://host_ipaddress:8081/SampleSheet/XY12345``
+
+## <a name ="JSS"></a> Job Status Service
+The Job Status Service includes functions to obtain the status of different types of jobs. The return type for these queries is a **JobState** structure, which includes two fields:
+
+* **JobStatus:** The JobStatus enumeration value, which can be ``Ready``, ``Running``, ``Aborted``, ``Failed`` or ``Complete``.
+* **CustomState:** An optional text string that provides additional information about the job state.
+
+Following is an example JSON structure for a JobState:
+
+```
+{ "Status": "Ready", "CustomState": "Ready to run." }
+```
+
+### <a name="JOB_Stat"></a> Plate Job Status Function
+Use this function to obtain the job status of a plate job.
+
+* **URL:**  ``/Jobs/Plate/plate_id/Status``
+* **Method:** ``GET``
+* **Parameters:**  ``plate_id``: The plate ID of the sample sheet.
+* **Returns:** A JobState structure in JSON format.
+* **Example:** ``curl -s -v http://host_ipaddress:8081/Jobs/Plate/platebarcode/Status``
+
+### <a name="JOB_Aq"></a> Acquisition Job Status Function
+Use this function to obtain the job status of an acquisition job. The JobStatus object that is returned can include information about the status of **multiple** jobs.
+
+* **URL:**  ``/Jobs/Collection/plate_id/samplewell_num/Status``
+* **Method:** ``GET``
+* **Parameters:**  ``plate_id``: The plate ID of the sample sheet, ``samplewell_num``: The sample well number.
+* **Returns:** An array of JobState structures, in JSON format.
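+* **Example:** A sketch following the pattern of the other status queries (plate barcode and well number are placeholders): ``curl -s http://host_ipaddress:8081/Jobs/Collection/XY12345/1/Status``
+
+The reagent mixing and primary analysis status functions below can be queried the same way, substituting the corresponding URL.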
+
+### <a name="JOB_Rea"></a> Reagent Mixing Job Status Function
+Use this function to obtain the job status of a reagent mixing job.
+
+* **URL:**  ``/Jobs/ReagentMix/plate_id/samplewell_num/Status``
+* **Method:** ``GET``
+* **Parameters:**  ``plate_id``: The plate ID of the sample sheet, ``samplewell_num``: The sample well number.
+* **Returns:** A JobState structure, in JSON format.
+
+### <a name="JOB_PAStat"></a> Primary Analysis Job Status Function
+Use this function to obtain the job status of a primary analysis job. The JobStatus object returned can include information about the status of **multiple** jobs.
+
+* **URL:**  ``/Jobs/PrimaryAnalysis/plate_id/samplewell_num/Status``
+* **Method:** ``GET``
+* **Parameters:**  ``plate_id``: The plate ID of the sample sheet, ``samplewell_num``: The number of the sample well being analyzed.
+* **Returns:** A JobState structure, in JSON format.
+
+### <a name="JOB_PAQuery"></a> Primary Analysis Query Function
+Use this function to obtain information about a primary analysis job.
+
+* **URL:**  ``/Jobs/PrimaryAnalysis/Query``
+* **Method:** ``GET``
+* **Parameters (Optional):**  
+  * ``after``: The **earliest** date and time to search from.
+  * ``before``: The **latest** date and time to search to.
+  * ``max``: The maximum number of entries to return, currently 108.
+  * ``status``: The status of the job to search for. Allowed values: ``None, Pending, Ready, Running, Interrupted, Aborted, Failed, Complete``, and ``SpecificState``.
+  * Parameters are given as ``GET`` arguments.
+* **Returns:** An array of objects (in JSON format) that contain the following members:
+  * ``WhenModified``: The date and time the job was last modified.
+  * ``JobStatus``: The current job status.
+  * ``OutputFilePath``: The output path for the analysis job.
+  * ``AcquisitionNumber``: The acquisition ID.
+  * ``Well``: The sample well for this job.
+  * ``Plate``: The barcode of the plate used.
+* **Examples:** 
+  * ``/Jobs/PrimaryAnalysis/Query?status=Complete&before=2010-03-15&after=2010-01-15&max=100``
+  * ``curl -s -H 'Expect: ' "http://host_ipaddress:8081/Jobs/PrimaryAnalysis/Query?status=Complete&before=2010-03-15&after=2010-01-15&max=100"``
+
+### <a name="JOB_Trans"></a> Transfer Status Query Function
+Use this function to obtain information about a transfer of data for secondary analysis.
+
+* **URL:**  ``/Jobs/TransferQuery``
+* **Method:** ``GET``
+* **Parameters (Optional):**  
+  * ``after``: The **earliest** date and time to search from.
+  * ``before``: The **latest** date and time to search to.
+  * ``max``: The maximum number of entries to return, currently 108.
+  * ``status``: The status of the job to search for. Allowed values: ``None, Pending, Ready, Running, Interrupted, Aborted, Failed, Complete``, and ``SpecificState``.
+  * Parameters are given as ``GET`` arguments.
+* **Returns:** An array of objects (in JSON format) that contain the following members:
+  * ``WhenModified``: The date and time the job was last modified.
+  * ``JobStatus``: The current job status.
+  * ``OutputFilePath``: The output path for the analysis job.
+  * ``AcquisitionNumber``: The acquisition ID.
+  * ``Well``: The sample well for this job.
+  * ``Plate``: The barcode of the plate used.
+* **Examples:** 
+  * ``/Jobs/TransferQuery?status=Complete&before=2010-03-15&after=2010-01-15&max=100``
+  * ``curl -s -H 'Expect: ' "http://host_ipaddress:8081/Jobs/TransferQuery?status=Complete&before=2010-03-15&after=2010-01-15&max=100"``
+
+**Sample C# code to determine when a cell has been transferred, using ``TransferQuery``:**
+
+```
+using System.Collections.Generic;
+using System.IO;
+using System.Net;
+using System.Web.Script.Serialization;
+
+static bool MatchRunIdAndCollectionName(string outputFilePath, string runId
+                                      , string collectionName)
+{
+    if (outputFilePath == null) return false;
+    var afolder = outputFilePath.Split('/');
+    int len = afolder.Length;
+    var folderName = afolder[len - 2].Replace("%20", " ");
+    return (folderName == runId && afolder[len - 1] == collectionName);
+}
+
+// @param uploadDataPath looks like "\\usmp-data3\vol53\RS_DATA_STAGING"
+static void LookForCompleteCell(string uploadDataPath)
+{
+   DirectoryInfo di = new DirectoryInfo(uploadDataPath);
+   // Only look for look-1 metadata files, so we know which cells to query for.
+   FileInfo[] files = di.GetFiles("m*_s1_p0.metadata.xml"
+     , SearchOption.AllDirectories);
+   JavaScriptSerializer jSerialize = new JavaScriptSerializer();
+
+   foreach (FileInfo file in files)
+   {
+      // e.g. m120201_025018_42142_c100208912554400001515048402131234_s1_p0
+      string context = file.Name.Split('.')[0];
+      // Split the movie context into its underscore-delimited fields:
+      // [0]=mYYMMDD, [1]=HHMMSS, [2]=instrument, [3]=cell, [4]=look (s1/s2).
+      var actx = context.Split('_');
+      string dates = actx[0].Substring(1), times = actx[1], instrument = actx[2], look = actx[4];
+      string yr = dates.Substring(0, 2)
+         , mo = dates.Substring(2, 2)
+         , dy = dates.Substring(4, 2)
+         , hr = times.Substring(0, 2)
+         , mi = times.Substring(2, 2)
+         , sec = times.Substring(4, 2);
+      string safter = "20" + yr + "-" + mo + "-" + dy + "T" + hr + ":" + mi + ":" + sec;
+      // Assumes the metadata file sits in <run folder>/<collection folder>/.
+      var collectionName = file.Directory.Name;
+      var runId = file.Directory.Parent.Name; // e.g. 20120131_ETv26097p_scales_442
+      var plate = runId.Substring(0, runId.LastIndexOf('_'));
+      string serviceUrl =
+         "http://pap01-" + instrument
+         +
+         ":8081/Jobs/TransferQuery/Query?status=Complete&platform=Windows&after="
+         + safter;
+      HttpWebRequest webRequest = WebRequest.Create(serviceUrl) as HttpWebRequest;
+      HttpWebResponse webResponse = webRequest.GetResponse() as HttpWebResponse;
+      StreamReader sr = new StreamReader(webResponse.GetResponseStream());
+      var aj = jSerialize.Deserialize<List<Dictionary<string, object>>>(sr.ReadToEnd());
+      var status = aj.Find(d
+                            => MatchRunIdAndCollectionName((string)d["OutputFilePath"], runId, collectionName)
+                               && ("s" + d["IndexOfLook"]) == look);
+      if (status == null) continue;
+      // At this point, we have both look 1 and look 2 (if this is a 2-look movie).
+      // This SMRT Cell is ready for analysis.
+   }
+}
+```
+***
+
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2014, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
\ No newline at end of file
diff --git a/docs/Introduction.md b/docs/Introduction.md
new file mode 100644
index 0000000..ded39a5
--- /dev/null
+++ b/docs/Introduction.md
@@ -0,0 +1,12 @@
+This document describes the underlying command-line interface to SMRT Pipe, and is for use by bioinformaticians working with secondary analysis results.
+
+**SMRT Pipe** is Pacific Biosciences’ underlying analysis framework for secondary analysis functions. It is a Python-based, general-purpose workflow engine that is easily extensible and supports logging, distributed computation, error handling, analysis parameters, and temporary files.
+
+In a typical installation of the SMRT Analysis Software, the SMRT Portal web application calls SMRT Pipe when a job is started. SMRT Portal provides a convenient and user-friendly way to analyze Pacific Biosciences’ sequencing data through SMRT Pipe. Power users gain additional flexibility and customization by instead running SMRT Pipe analyses from the command line.
+
+* The latest version of SMRT Pipe is available **here**.
+
+* SMRT Pipe can also be accessed using the Secondary Analysis Web Services API. For details, see **Secondary Analysis Web Services API**.
+
+**Note:**
+Throughout this documentation, the path ``/opt/smrtanalysis`` is used to refer to the installation directory for SMRT Analysis (also known as ``$SEYMOUR_HOME``). Replace this path with the path appropriate to your installation when using this document.
\ No newline at end of file
diff --git a/docs/Issue:-when-you-try-to-access-SMRT-View,-the-application-is-downloaded-from-the-server.md b/docs/Issue:-when-you-try-to-access-SMRT-View,-the-application-is-downloaded-from-the-server.md
new file mode 100644
index 0000000..c1825b9
--- /dev/null
+++ b/docs/Issue:-when-you-try-to-access-SMRT-View,-the-application-is-downloaded-from-the-server.md
@@ -0,0 +1 @@
+Check the Java temporary file setting to ensure that Java files are being cached. This keeps SMRT View in memory so that you only need to download it once.
\ No newline at end of file
diff --git a/docs/Job-fails-at-exactly-12-hour.md b/docs/Job-fails-at-exactly-12-hour.md
new file mode 100644
index 0000000..f756af1
--- /dev/null
+++ b/docs/Job-fails-at-exactly-12-hour.md
@@ -0,0 +1,3 @@
+If a job submitted to the SGE scheduler runs long enough that it fails after **exactly** 12 hours, you might want to revise the runtime setting in `$SEYMOUR_HOME/analysis/etc/cluster/SGE/interactive.tmpl`.
+
+Change ``-h_rt=12:0:0`` to ``-h_rt=24:0:0`` or more.
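+
+A sketch of the edit (back up the template first; assumes GNU sed and that ``$SEYMOUR_HOME`` is set):
+```
+sed -i.bak 's/h_rt=12:0:0/h_rt=24:0:0/' $SEYMOUR_HOME/analysis/etc/cluster/SGE/interactive.tmpl
+```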
\ No newline at end of file
diff --git a/docs/Job-fails-if-a-soft-link-resolves-to-2-different-paths.md b/docs/Job-fails-if-a-soft-link-resolves-to-2-different-paths.md
new file mode 100644
index 0000000..751b7b6
--- /dev/null
+++ b/docs/Job-fails-if-a-soft-link-resolves-to-2-different-paths.md
@@ -0,0 +1,5 @@
+If you try to run Celera Assembler where ``$SEYMOUR_HOME`` is a soft link that resolves to **two different paths**, the job will **fail**. 
+
+Theoretically, you can solve this problem by using the ``pathMap=`` parameter in the ``*.spec`` file to circumvent the $FindBin::RealBin check by explicitly stating the location of the binaries. However, in practice this does **not** work for the error correction step (P_PacBioToCA) in the current version of Celera Assembler.
+
+This was fixed in the newest release of Celera Assembler, and will be included in a later release of SMRT Pipe.
\ No newline at end of file
diff --git "a/docs/KeyError:-\"unable-to-open-object-(Symbol-table:-Can't-open-object)\".md" "b/docs/KeyError:-\"unable-to-open-object-(Symbol-table:-Can't-open-object)\".md"
new file mode 100644
index 0000000..ced1449
--- /dev/null
+++ "b/docs/KeyError:-\"unable-to-open-object-(Symbol-table:-Can't-open-object)\".md"
@@ -0,0 +1,55 @@
+The error message:
+```
+KeyError: "unable to open object (Symbol table: Can't open object)"
+```
+
+can indicate:
+* A problem with specifying bas.h5/bax.h5 SMRT Pipe inputs.
+* A potentially corrupted HDF5 file.
+
+Starting with SMRT Analysis v2.0, a new file format was introduced: bax.h5. The details are in the bas.h5 Reference Guide at http://files.pacb.com/software/instrument/2.0.0/bas.h5%20Reference%20Guide.pdf.
+
+Users running custom scripts in conjunction with SMRT Pipe must take these changes into account.
+See `Specifying SMRT Pipe Inputs` at: https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Pipe-Reference-Guide-v2.0#wiki-PipeInputs
+
+To diagnose a corrupted HDF5 file:
+```
+$ source $SEYMOUR_HOME/etc/setup.sh
+$ h5debug /path/to/bas.h5
+```
+
+Output should be similar to:
+
+```
+Reading signature at address 0 (rel)
+File Super Block...
+File name:                                         ./m130405_225601_42171_c100502282550000001823069308081393_s1_p0.bas.h5
+File access flags                                  0x00000000
+File open reference count:                         1
+Address of super block:                            0 (abs)
+Size of userblock:                                 0 bytes
+Superblock version number:                         0
+Free list version number:                          0
+Root group symbol table entry version number:      0
+Shared header version number:                      0
+Size of file offsets (haddr_t type):               8 bytes
+Size of file lengths (hsize_t type):               8 bytes
+Symbol table leaf node 1/2 rank:                   4
+Symbol table internal node 1/2 rank:               16
+File status flags:                                 0x00
+Superblock extension address:                      UNDEF (rel)
+Shared object header message table address:        UNDEF (rel)
+Shared object header message version number:       0
+Number of shared object header message indexes:    0
+Address of driver information block:               UNDEF (rel)
+Root group symbol table entry:
+   Name offset into private heap:                  0
+   Object header address:                          96
+   Dirty:                                          No
+   Cache info type:                                Symbol Table
+   Cached entry information:
+      B-tree address:                              136
+      Heap address:                                680
+```
+
+An error message means the file is corrupt.
diff --git a/docs/List-of-Programs-in-$SEYMOUR_HOME-analysis-bin-.md b/docs/List-of-Programs-in-$SEYMOUR_HOME-analysis-bin-.md
new file mode 100644
index 0000000..20898a5
--- /dev/null
+++ b/docs/List-of-Programs-in-$SEYMOUR_HOME-analysis-bin-.md
@@ -0,0 +1,862 @@
+[[ace2contig]]
+
+[[acyclic]]
+
+[[addSVGLinks.py]]
+
+[[align2layouts.py]]
+
+[[amos2ace]]
+
+[[amos2frg]]
+
+[[amos2mates]]
+
+[[amos2sq]]
+
+[[AMOScmp]]
+
+[[AMOScmp-shortReads]]
+
+[[AMOScmp-shortReads-alignmentTrimmed]]
+
+[[amosvalidate]]
+
+[[analyze-read-depth]]
+
+[[analyzeSNPs]]
+
+[[arachne2ctg]]
+
+[[arachne2scaff]]
+
+[[arrive]]
+
+[[arrive2]]
+
+[[asmQC]]
+
+[[asmQC2]]
+
+[[assembleRNAs]]
+
+[[assertCmpH5NonEmpty.py]]
+
+[[astats]]
+
+[[auto-fix-contigs]]
+
+[[autoJoiner]]
+
+[[bacEnd]]
+
+[[bank2contig]]
+
+[[bank2coverage]]
+
+[[bank2fasta]]
+
+[[bank2scaff]]
+
+[[bank-clean]]
+
+[[bank-combine]]
+
+[[bank-mapping]]
+
+[[bankReadsToFasta.py]]
+
+[[bank-report]]
+
+[[bankToCmpH5.py]]
+
+[[bank-transact]]
+
+[[bank-tutorial]]
+
+[[bank-unlock]]
+
+[[barcode-graph]]
+
+[[basH5Compat]]
+
+[[bash5toFasta.py]]
+
+[[bash5tools.py]]
+
+[[bcomps]]
+
+[[benchmark2arachne]]
+
+[[benchmark2ca]]
+
+[[benchmark2mates]]
+
+[[benchmark2ta]]
+
+[[benchmark_qual]]
+
+[[benchmark_seq]]
+
+[[blasr]]
+
+[[bsdb]]
+
+[[build-persistent-bank]]
+
+[[Bundler]]
+
+[[bwt2sa]]
+
+[[ca2ctg]]
+
+[[ca2mates]]
+
+[[ca2scaff]]
+
+[[ca2singletons]]
+
+[[ca2ta]]
+
+[[caPlsFofn2Fastq.py]]
+
+[[casm-breaks]]
+
+[[casm-layout]]
+
+[[castats]]
+
+[[caStatsWrapper.py]]
+
+[[catXML]]
+
+[[cavalidate]]
+
+[[ccache-swig]]
+
+[[ccomps]]
+
+[[cestat-cov]]
+
+[[cgb2ctg]]
+
+[[Chainer]]
+
+[[chimeraDetector.py]]
+
+[[circo]]
+
+[[circularization.py]]
+
+[[clk]]
+
+[[cluster]]
+
+[[clusterOverlapIIDs.py]]
+
+[[clusterSnps]]
+
+[[cmpH5Convert.py]]
+
+[[cmpH5MIndex.py]]
+
+[[cmph5tools.py]]
+
+[[cmpPrintTupleCountTable]]
+
+[[compareSequences.py]]
+
+[[contig2contig]]
+
+[[contig-cmp]]
+
+[[converter]]
+
+[[coords2cam]]
+
+[[copyIpdSummaryDataset.py]]
+
+[[correlatedVariants.py]]
+
+[[count-kmers]]
+
+[[create_gmap_index.sh]]
+
+[[createHybridAfg.py]]
+
+[[createSequenceDictionary]]
+
+[[ctg2fasta]]
+
+[[ctg2umdcontig]]
+
+[[ctgovl]]
+
+[[cvgChop]]
+
+[[cvgStat]]
+
+[[deleteReadsOverlaps.py]]
+
+[[delta2clr]]
+
+[[delta2cvg]]
+
+[[detective]]
+
+[[diffimg]]
+
+[[dijkstra]]
+
+[[dot]]
+
+[[dot2gxl]]
+
+[[dot_builtins]]
+
+[[dotty]]
+
+[[dumpContigsAsReads]]
+
+[[dumpFeatures]]
+
+[[dumpmates]]
+
+[[dumpreads]]
+
+[[eliminateRepeats.py]]
+
+[[esd2esi]]
+
+[[evolve]]
+
+[[excl_seqs]]
+
+[[exonerate]]
+
+[[exonerate-server]]
+
+[[extractContig]]
+
+[[extractContigs.py]]
+
+[[extractUnmappedSubreads.py]]
+
+[[fasta2esd]]
+
+[[fastaannotatecdna]]
+
+[[fastachecksum]]
+
+[[fastaclean]]
+
+[[fastaclip]]
+
+[[fastacomposition]]
+
+[[fastadiff]]
+
+[[fastaexplode]]
+
+[[fastafetch]]
+
+[[fastahardmask]]
+
+[[fastaindex]]
+
+[[fastalength]]
+
+[[fastanrdb]]
+
+[[fastaoverlap]]
+
+[[fastareformat]]
+
+[[fastaremove]]
+
+[[fastarevcomp]]
+
+[[fastasoftmask]]
+
+[[fastasort]]
+
+[[fastasplit]]
+
+[[fastasubseq]]
+
+[[fastatranslate]]
+
+[[fastavalidcds]]
+
+[[fastqToCA]]
+
+[[fattenContig]]
+
+[[fdp]]
+
+[[fileWatcher.py]]
+
+[[filterArtifacts.py]]
+
+[[filter_contig]]
+
+[[filterfrg]]
+
+[[filterPlsH5.py]]
+
+[[findChimeras]]
+
+[[find-duplicate-reads]]
+
+[[find_ends]]
+
+[[findMissingMates]]
+
+[[find-query-breaks]]
+
+[[findTcovSnp]]
+
+[[fixfrg]]
+
+[[fixlib]]
+
+[[fofnToSmrtpipeInput.py]]
+
+[[frg2fasta]]
+
+[[frg2ta]]
+
+[[frg-umd-merge]]
+
+[[gap-closure-reads]]
+
+[[gapFiller.py]]
+
+[[gapFillFromFastas.py]]
+
+[[gap-links]]
+
+[[gatekeeper]]
+
+[[gc]]
+
+[[genome-complexity]]
+
+[[genome-complexity-fast]]
+
+[[gepardcmd.sh]]
+
+[[getlengths]]
+
+[[getN50]]
+
+[[gffToBed.py]]
+
+[[gffToCoverage.py]]
+
+[[gffToVcf.py]]
+
+[[gif2h5]]
+
+[[gmap]]
+
+[[gmap_build]]
+
+[[gmap_home]]
+
+[[gml2gv]]
+
+[[goBambus]]
+
+[[grommit]]
+
+[[grow-readbank]]
+
+[[gv2gxl]]
+
+[[gvcolor]]
+
+[[gvgen]]
+
+[[gvmap]]
+
+[[gvmap.sh]]
+
+[[gvpack]]
+
+[[gvpr]]
+
+[[gxl2dot]]
+
+[[gxl2gv]]
+
+[[h52gif]]
+
+[[h5c++]]
+
+[[h5cc]]
+
+[[h5copy]]
+
+[[h5debug]]
+
+[[h5diff]]
+
+[[h5dump]]
+
+[[h5import]]
+
+[[h5jam]]
+
+[[h5ls]]
+
+[[h5mkgrp]]
+
+[[h5perf_serial]]
+
+[[h5redeploy]]
+
+[[h5repack]]
+
+[[h5repart]]
+
+[[h5stat]]
+
+[[h5unjam]]
+
+[[hash-overlap]]
+
+[[hawkeye]]
+
+[[insertGapColumn]]
+
+[[insert-sizes]]
+
+[[ipcress]]
+
+[[ipdSummary.py]]
+
+[[iterate]]
+
+[[Joiner]]
+
+[[kbandMatcher]]
+
+[[kmer-count]]
+
+[[kmer-cov]]
+
+[[kmer-cov-plot]]
+
+[[kmers]]
+
+[[ktrimfrg]]
+
+[[library-histogram]]
+
+[[listcontigreads]]
+
+[[listGCContent]]
+
+[[list-linked-contigs]]
+
+[[listReadPlacedStatus]]
+
+[[listSingletonMates]]
+
+[[listSurrogates]]
+
+[[lneato]]
+
+[[loadFeatures]]
+
+[[load-overlaps]]
+
+[[loadPulses]]
+
+[[loadSequencingChemistryIntoCmpH5.py]]
+
+[[makeAdapterReport.py]]
+
+[[makeAssemblyFinalReport.py]]
+
+[[makeAssemblyIterationsReports.py]]
+
+[[makeAttributeReport.py]]
+
+[[makeChemistryMapping.py]]
+
+[[make-consensus]]
+
+[[make-consensus_poly]]
+
+[[makeControlReport.py]]
+
+[[makeCoverageReportFromGff.py]]
+
+[[makeDCCSReport.py]]
+
+[[makeFilterStatsReport.py]]
+
+[[makeFilterSubreadReport.py]]
+
+[[makeFilterSubreadSummary.py]]
+
+[[makeHybridFinalReport.py]]
+
+[[makeHybridIterationsReport.py]]
+
+[[makeKineticsReport.py]]
+
+[[makeLoadingReport.py]]
+
+[[makeMappingStatsReport.py]]
+
+[[makeOverviewReport.py]]
+
+[[makePulseKineticsReport.py]]
+
+[[makePulseStatsTable.py]]
+
+[[makeRefDirsFromCmpH5.py]]
+
+[[makeSATReport.py]]
+
+[[makeTopMinorVariantsReport.py]]
+
+[[makeTopVariantsReportHgap.py]]
+
+[[makeTopVariantsReport.py]]
+
+[[makeVariantReportFromGffHgap.py]]
+
+[[makeVariantReportFromGff.py]]
+
+[[manageContigs]]
+
+[[maskAlignedReads.py]]
+
+[[mate-evolution]]
+
+[[mergeBankLayouts.py]]
+
+[[mergeCompositions.py]]
+
+[[mergeConsH5IntoCmpH5.py]]
+
+[[merge-contigs]]
+
+[[mergeVariantsFromFofn.py]]
+
+[[message-count]]
+
+[[message-extract]]
+
+[[message-validate]]
+
+[[mgaps]]
+
+[[minimus]]
+
+[[minimus2]]
+
+[[missing-reads]]
+
+[[mm2gv]]
+
+[[motifMaker.sh]]
+
+[[mummer]]
+
+[[muscle]]
+
+[[nAstats]]
+
+[[neato]]
+
+[[nop]]
+
+[[normalizeScaffold]]
+
+[[nucmer]]
+
+[[nucmer2ovl]]
+
+[[nucmerAnnotate]]
+
+[[osage]]
+
+[[ovl2OVL]]
+
+[[ovl-degr-dist]]
+
+[[pacBioToCA]]
+
+[[parsecasm]]
+
+[[patchwork]]
+
+[[pbbarcode.py]]
+
+[[pbmask]]
+
+[[pbsamtools.py]]
+
+[[pbToCASpecWriter.py]]
+
+[[pbToCAWrapper.py]]
+
+[[persistent-assembly]]
+
+[[persistent-fix]]
+
+[[persistent-fix-contigs]]
+
+[[persistent-read-dist]]
+
+[[phd2afg]]
+
+[[pls2fasta]]
+
+[[plurality]]
+
+[[po-align]]
+
+[[postCAqc]]
+
+[[postnuc]]
+
+[[preassembleFrgs]]
+
+[[prenuc]]
+
+[[preTA]]
+
+[[printScaff]]
+
+[[printTupleCountTable]]
+
+[[prune]]
+
+[[pullTArchive]]
+
+[[pyrosim]]
+
+[[qsw.py]]
+
+[[quiver]]
+
+[[read-cov-plot]]
+
+[[read-evolution]]
+
+[[readinfo2cam]]
+
+[[recallConsensus]]
+
+[[referenceCCSCall.py]]
+
+[[referenceUploader]]
+
+[[removeAdapters]]
+
+[[renameReads]]
+
+[[reorderSAM.py]]
+
+[[rerunMultiTest]]
+
+[[resetFragLibrary]]
+
+[[revContig]]
+
+[[revScaffold]]
+
+[[rotateContig]]
+
+[[runAmos]]
+
+[[runCA]]
+
+[[runCA.clean]]
+
+[[runCA.euk]]
+
+[[runCA.meta]]
+
+[[runCA.prok]]
+
+[[runCASpecWriter.py]]
+
+[[runCA.stub]]
+
+[[runMultiTest]]
+
+[[running-cmp]]
+
+[[runTA]]
+
+[[runTest]]
+
+[[s3Transfer]]
+
+[[sa2bwt]]
+
+[[SAMIO.py]]
+
+[[samodify]]
+
+[[samtoh5]]
+
+[[samtools]]
+
+[[sawriter]]
+
+[[saxonb9]]
+
+[[scaff2fasta]]
+
+[[scaffoldRange2Ungapped]]
+
+[[sccmap]]
+
+[[scrubStalls.py]]
+
+[[sdpMatcher]]
+
+[[secondary]]
+
+[[select-reads]]
+
+[[sfdp]]
+
+[[show-coords]]
+
+[[show-ma-asm]]
+
+[[sim-cover2]]
+
+[[sim-cover-depth]]
+
+[[simpleContigLoader]]
+
+[[simple-overlap]]
+
+[[simplifyLibraries]]
+
+[[sim-shotgun]]
+
+[[singles]]
+
+[[smrtpipe.py]]
+
+[[sort2]]
+
+[[splitContig]]
+
+[[stitchContigs]]
+
+[[summarizeCompareByMovie.py]]
+
+[[summarizeConsensus.py]]
+
+[[summarizeCoverage.py]]
+
+[[summarizeModifications.py]]
+
+[[summarizeMultiTest]]
+
+[[suspiciousfeat2region]]
+
+[[swig]]
+
+[[swMatcher]]
+
+[[syncPerMovieFofn.py]]
+
+[[ta2ace]]
+
+[[tab2ovls]]
+
+[[tandemCollapse]]
+
+[[tarchive2amos]]
+
+[[tarchive2ca]]
+
+[[tigger]]
+
+[[tiling2cam]]
+
+[[toAfg]]
+
+[[toAmos]]
+
+[[toArachne]]
+
+[[trace_comment]]
+
+[[trace_comments]]
+
+[[trace_convert]]
+
+[[trace_scf_dump]]
+
+[[trace_seq]]
+
+[[transitiveOverlap.py]]
+
+[[translate-fasta]]
+
+[[tred]]
+
+[[trimByOvl]]
+
+[[trimContig]]
+
+[[trimends]]
+
+[[trimFastqByQVWindow.py]]
+
+[[trimfrg]]
+
+[[trimLayouts.py]]
+
+[[twopi]]
+
+[[unflatten]]
+
+[[untangle]]
+
+[[updateBankPositions]]
+
+[[updateClrRanges]]
+
+[[updateDeltaClr]]
+
+[[updateLibSizes]]
+
+[[variantCaller.py]]
+
+[[variantCallWithGATK.py]]
+
+[[vcfToGff.py]]
+
+[[vcfUploader.py]]
+
+[[vecfix]]
+
+[[verify-layout]]
+
+[[vimdot]]
+
+[[wgs-7.0]]
+
+[[writeHDFSubset]]
+
+[[xml2grommit]]
+
+[[zipalign]]
+
+[[zipContigs]]
+
diff --git a/docs/Log-File-Locations.md b/docs/Log-File-Locations.md
new file mode 100644
index 0000000..d9633a1
--- /dev/null
+++ b/docs/Log-File-Locations.md
@@ -0,0 +1,127 @@
+<table>
+<tbody>
+<tr>
+<th> Log Name </th>
+<th> Location </th>
+<th> Path </th>
+<th> Purpose </th>
+</tr>
+<tr>
+<td><b> Homer </b></td>
+<td> instrument: pap01 </td>
+<td><code>
+/pacbio/log/homer/InstrumentControl.log.&lt;date&gt; </code></td>
+<td>Records instrument control and workflow events and issues.</td>
+</tr>
+<tr>
+<td><b> Homer-mono </b></td>
+<td> instrument: pap01 </td>
+<td><code> /pacbio/log/homer/homer-mono.&lt;date&gt; </code></td>
+<td> Records mono output. Mono allows .NET code to run on linux.</td>
+</tr>
+<tr>
+<td><b> Otto </b></td>
+<td> instrument: NRT </td>
+<td><code> /pacbio/log/otto/ottolog-&lt;date&gt; </code></td>
+<td>Records file server and movie acquisition events and issues.</td>
+</tr>
+<tr>
+<td><b> Bart </b></td>
+<td> instrument: pap01 </td>
+<td><code>/pacbio/log/bart/Log_&lt;date&gt;</code></td>
+<td>Records any network connection events to the instrument such as calculator, secondary analysis pings,
+and instrument control web services requests.</td>
+</tr>
+<tr>
+<td><b> Pipeline </b></td>
+<td> instrument: pap01 </td>
+<td><code>/data/imports/pap0*/spool/Analysis_Results/&lt;movie_context&gt;.log</code></td>
+<td>Records primary analysis trc, pls, and bas file creation events and issues.</td>
+</tr>
+<tr>
+<td><b> Transfer </b></td>
+<td> instrument: pap02-04 </td>
+<td><code>/pacbio/log/oixfer/oixfer-&lt;date&gt;.log</code></td>
+<td>Records any data transfer attempts and the result of that attempt.</td>
+</tr>
+<tr>
+<td><b> RS Touch </b></td>
+<td> instrument: pap01 </td>
+<td><code> /data/logxfer/guipc/Users/pbi/AppData/<br>
+Roaming/Pacific Biosciences/RS Touch </code></td>
+<td>Records any events and issues relating to RS Touch.</td>
+</tr>
+<tr>
+<td><b> RS Remote </b></td>
+<td> Windows 7 Machine </td>
+<td><code> C:\Users\&lt;username&gt;\AppData\Roaming\Pacific
+Biosciences\RS Remote </code></td>
+<td>Records any events and issues relating to RS Remote.</td>
+</tr>
+<tr>
+<td><b> RS Remote </b></td>
+<td> Windows XP Machine </td>
+<td><code> C:\Documents and Settings\&lt;user name&gt;<br>
+\Application Data\PacificBiosciences\RS Remote </code></td>
+<td>Records any events and issues relating to RS Remote.</td>
+</tr>
+<tr>
+<td><b> SMRT Portal </b></td>
+<td> SMRT Analysis Linux Server </td>
+<td><code>$SEYMOUR_HOME/common/log/smrtportal/<br>
+</code> </td>
+<td>Records any SMRT Portal web GUI events and issues.</td>
+</tr>
+<tr>
+<td><b>Kodos</b></td>
+<td> SMRT Analysis Linux Server </td>
+<td><code>$SEYMOUR_HOME/common/log/autoDaemon/</code></td>
+<td>Records any automatic secondary analysis events and issues.</td>
+</tr>
+<tr>
+<td><b>Tomcat</b></td>
+<td> SMRT Analysis Web Server </td>
+<td><code>$SEYMOUR_HOME/redist/tomcat/logs/</code></td>
+<td>Records web server events and issues.</td>
+</tr>
+<tr>
+<td><b> Reference Uploader </b></td>
+<td> SMRT Analysis Linux Server </td>
+<td><code>$SEYMOUR_HOME/common/log/referenceUploader/</code></td>
+<td>Records any events and issues when a reference is imported to SMRT Portal.</td>
+</tr>
+<tr>
+<td><b>SMRT View</b></td>
+<td> SMRT Analysis Linux Server </td>
+<td><code>$SEYMOUR_HOME/common/log/smrtview/</code></td>
+<td>Records any SMRT View Java events and issues.</td>
+</tr>
+<tr>
+<td><b> SMRT Pipe job </b></td>
+<td> SMRT Analysis Linux Server </td>
+<td><code>$SEYMOUR_HOME/common/jobs/&lt;jobid_prefix&gt;/&lt;jobid&gt;/log/smrtpipe.log</code></td>
+<td>Records all analysis events and issues when running a SMRT Pipe job.</td>
+</tr>
+<tr>
+<td><b> Distributed SMRT Pipe job </b></td>
+<td> SMRT Analysis Linux child node </td>
+<td> Look for the tmp directory in smrtpipe.log.<br>
+It should look something like<br>
+<code>/tmp/tmp6w8WPk/tmpdvRNDh/&lt;job&gt;_err </code><br>
+</td>
+<td>Records all analysis events and issues that happen when the job is spawned off to a child node.<br></td>
+</tr>
+</tbody>
+</table>
\ No newline at end of file
diff --git a/docs/Navigating-the-SMRT-Analysis-Installation-Directory.md b/docs/Navigating-the-SMRT-Analysis-Installation-Directory.md
new file mode 100644
index 0000000..ec7818f
--- /dev/null
+++ b/docs/Navigating-the-SMRT-Analysis-Installation-Directory.md
@@ -0,0 +1,91 @@
+###Reorganized Directory Structure###
+Starting with SMRT Analysis v2.1.0, the directory structure has changed. Instead of the environment variable ``SEYMOUR_HOME``, ``SMRT_ROOT`` is defined as the top-level directory of the SMRT Analysis installation. Please ensure that these variables are not defined explicitly in any ``setup.sh`` files or elsewhere, such as in user ``.bash*`` files, ``/etc/profile``, or scripts in ``/etc/profile.d/``. Although not a strict requirement, we recommend ``SMRT_ROOT=/opt/smrtanalysis/``.
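+
+One way to check for stray definitions (a sketch; extend the file list to match the shells in your environment):
+```
+grep -n 'SEYMOUR_HOME\|SMRT_ROOT' ~/.bash* /etc/profile /etc/profile.d/*.sh 2>/dev/null
+```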
+
+Below is what a ``SMRT_ROOT`` directory hierarchy might look like for v2.3.0 following a series of upgrades/patches ("`->`" = symbolic link):
+
+```
+$SMRT_ROOT
+├── admin -> current/admin
+│   ├── bin
+│   └── log
+├── current -> install/smrtanalysis-2.3.0.139491
+├── install
+│   ├── smrtanalysis-2.1.0.128013
+│   ├── smrtanalysis-2.1.1.128514
+│   ├── smrtanalysis-2.1.1.128514-patch-0.1
+│   ├── smrtanalysis-2.2.0.133377
+│   ├── smrtanalysis-2.2.0.133377-patch-1.134216
+│   ├── smrtanalysis-2.2.0.133377-patch-2.134913
+│   ├── smrtanalysis-2.2.0.133377-patch-3.137015
+│   ├── smrtanalysis_2.3.0.139660
+│   │   ├── admin
+│   │   ├── analysis
+│   │   ├── bin
+│   │   ├── common
+│   │   │   ├── etc
+│   │   │   ├── inputs_dropbox -> userdata/inputs_dropbox
+│   │   │   ├── jobs -> userdata/jobs
+│   │   │   ├── jobs_archive -> userdata/jobs_archive
+│   │   │   ├── jobs_dropbox -> userdata/jobs_dropbox
+│   │   │   ├── lib
+│   │   │   ├── log
+│   │   │   ├── protocols
+│   │   │   ├── references -> userdata/references
+│   │   │   ├── references_dropbox -> userdata/references_dropbox
+│   │   │   ├── test
+│   │   │   ├── userdata -> ../../../userdata
+│   │   │   ├── userdata.d
+│   │   │   └── www
+│   │   ├── doc
+│   │   ├── etc
+│   │   ├── installerdeps
+│   │   ├── licenses
+│   │   ├── miscdeps
+│   │   ├── misclibs
+│   │   ├── postinstall -> etc/scripts/postinstall
+│   │   ├── redist
+│   │   ├── smrtcmds
+│   │   └── support
+│   ├── smrtanalysis-2.3.0.139660-patch-0.1.123456
+│   └── smrtanalysis-addon-internal_2.3.0.139660
+│       ├── admin
+│       ├── etc
+│       └── prerun_files
+├── smrtcmds -> current/smrtcmds
+├── tmpdir -> /tmp
+└── userdata -> /path/to/NFS/mounted/offline_storage
+    ├── database
+    │   ├── current -> mysql
+    │   └── mysql
+    ├── inputs_dropbox
+    ├── jobs
+    │   └── 016
+    │       └── 016437
+    │           ├── data
+    │           ├── log
+    │           ├── movie_metadata
+    │           ├── results
+    │           └── workflow
+    ├── jobs_archive
+    ├── jobs_dropbox
+    ├── log
+    │   ├── log
+    │   └── mysql
+    ├── references
+    │   ├── 2kb_Control
+    │   ├── 4kb_Control
+    │   ├── 600bp_Control
+    │   ├── ecoli
+    │   ├── lambda
+    │   ├── pacbio_barcodes_384
+    │   ├── pacbio_barcodes_paired
+    │   ├── s_cerevisiae
+    │   ├── Standard_v1
+    │   └── Strobe_v1
+    ├── references_dropbox
+    ├── runtime
+    │   ├── run
+    │   └── tmp
+    └── shared_dir
+        └── tmpWY2aRJ
+```
\ No newline at end of file
diff --git a/docs/Navigating-the-SMRT-Pipe-Job-Directory.md b/docs/Navigating-the-SMRT-Pipe-Job-Directory.md
new file mode 100644
index 0000000..2acab82
--- /dev/null
+++ b/docs/Navigating-the-SMRT-Pipe-Job-Directory.md
@@ -0,0 +1,2653 @@
+Below are examples of SMRT Pipe job output directories for analysis protocols in SMRT Analysis v2.2.0.
+
+* [RS_BridgeMapper.1](#rs_bridgemapper1)
+* [RS_HGAP_Assembly.2](#rs_hgap_assembly2)
+* [RS_HGAP_Assembly.3](#rs_hgap_assembly3)
+* [RS_IsoSeq.1](#rs_isoseq1)
+* [RS_Long_Amplicon_Analysis.1](#rs_long_amplicon_analysis1)
+* [RS_Minor_Variant.1](#rs_minor_variant1)
+* [RS_Modification_and_Motif_Analysis.1](#rs_modification_and_motif_analysis1)
+* [RS_Modification_Detection.1](#rs_modification_detection1)
+* [RS_ReadsOfInsert.1](#rs_readsofinsert1)
+* [RS_Resequencing.1](#rs_resequencing1)
+* [RS_Subreads.1](#rs_subreads1)
+
+***
+###RS_BridgeMapper.1###
+```
+├── data
+│   ├── filtered_regions
+│   │   ├── m130828_012641_42161_c100529060070000001823089211101390_s1_p0.1.rgn.h5
+│   │   ├── m130828_012641_42161_c100529060070000001823089211101390_s1_p0.2.rgn.h5
+│   │   └── m130828_012641_42161_c100529060070000001823089211101390_s1_p0.3.rgn.h5
+│   ├── pbbridgemapper_regions
+│   │   ├── m130828_012641_42161_c100529060070000001823089211101390_s1_p0.1.rgn.h5
+│   │   ├── m130828_012641_42161_c100529060070000001823089211101390_s1_p0.2.rgn.h5
+│   │   └── m130828_012641_42161_c100529060070000001823089211101390_s1_p0.3.rgn.h5
+│   ├── aligned_reads.bam
+│   ├── aligned_reads.bam.bai
+│   ├── aligned_reads.cmp.h5
+│   ├── aligned_reads.sam
+│   ├── alignment_summary.gff
+│   ├── chemistry_mapping.xml
+│   ├── consensus.fasta
+│   ├── consensus.fastq.gz
+│   ├── contig_ids.txt
+│   ├── coverage.bed
+│   ├── data.items.json
+│   ├── data.items.pickle
+│   ├── filtered_regions.fofn
+│   ├── filtered_subreads.fasta
+│   ├── filtered_subreads.fastq
+│   ├── filtered_subread_summary.csv
+│   ├── filtered_summary.csv
+│   ├── input.chunk001of001.pbbridgemapper.cmp.h5
+│   ├── input.chunk001of001.pbbridgemapper_regions.fofn
+│   ├── slots.pickle
+│   ├── split_reads.bridgemapper.gz
+│   ├── unmappedSubreads.fasta
+│   ├── variants.bed
+│   ├── variants.gff.gz
+│   └── variants.vcf
+├── log
+│   ├── P_BridgeMapper
+│   │   ├── runBridgeMapper_001of001.log
+│   │   ├── runBridgeMapper.input_fofn.Scatter.log
+│   │   └── runBridgeMapper.split_reads_gz.Gather.log
+│   ├── P_ConsensusReports
+│   │   ├── topVariantsReport.log
+│   │   └── variantsJsonReport.log
+│   ├── P_Fetch
+│   │   ├── adapterRpt.log
+│   │   ├── getChemistry.log
+│   │   ├── overviewRpt.log
+│   │   └── toFofn.log
+│   ├── P_Filter
+│   │   ├── filter_001of001.log
+│   │   ├── filter.rgnFofn.Gather.log
+│   │   ├── filter.summary.Gather.log
+│   │   ├── subreads_001of001.log
+│   │   ├── subreads.subreadFastq.Gather.log
+│   │   ├── subreads.subreads.Gather.log
+│   │   └── subreadSummary.log
+│   ├── P_FilterReports
+│   │   ├── loadingRpt.log
+│   │   ├── statsRpt.log
+│   │   └── subreadRpt.log
+│   ├── P_GenomicConsensus
+│   │   ├── callVariantsWithConsensus_001of001.log
+│   │   ├── callVariantsWithConsensus.consensusFasta.Gather.log
+│   │   ├── callVariantsWithConsensus.consensusFastq.Gather.log
+│   │   ├── callVariantsWithConsensus.contig_list.Scatter.log
+│   │   ├── callVariantsWithConsensus.variantsGff.Gather.log
+│   │   ├── enrichAlnSummary.log
+│   │   ├── makeBed.log
+│   │   ├── makeVcf.log
+│   │   ├── writeContigList.log
+│   │   └── zipVariants.log
+│   ├── P_Mapping
+│   │   ├── align_001of001.log
+│   │   ├── align.cmpH5.Gather.log
+│   │   ├── covGFF.log
+│   │   ├── gff2Bed.log
+│   │   ├── loadChemistry.log
+│   │   ├── repack.log
+│   │   ├── samBam.log
+│   │   ├── sort.log
+│   │   └── unmapped.log
+│   ├── P_MappingReports
+│   │   ├── coverageJsonReport.log
+│   │   └── statsJsonReport.log
+│   ├── master.log
+│   └── smrtpipe.log
+├── movie_metadata
+│   └── m130828_012641_42161_c100529060070000001823089211101390_s1_p0.metadata.xml
+├── results
+│   ├── adapter_observed_insert_length_distribution.png
+│   ├── adapter_observed_insert_length_distribution_thumb.png
+│   ├── coverage_histogram.png
+│   ├── coverage_histogram_thumb.png
+│   ├── coverage_plot_74636eac7dd3a4cfc5b6dce677345e97.png
+│   ├── coverage_plot_74636eac7dd3a4cfc5b6dce677345e97_thumb.png
+│   ├── filtered_subread_report.png
+│   ├── filtered_subread_report_thmb.png
+│   ├── filter_reports_adapters.html
+│   ├── filter_reports_adapters.json
+│   ├── filter_reports_filter_stats.html
+│   ├── filter_reports_filter_stats.json
+│   ├── filter_reports_filter_subread_stats.html
+│   ├── filter_reports_filter_subread_stats.json
+│   ├── filter_reports_loading.html
+│   ├── filter_reports_loading.json
+│   ├── mapped_readlength_histogram.png
+│   ├── mapped_readlength_histogram_thumb.png
+│   ├── mapped_subread_accuracy_histogram.png
+│   ├── mapped_subread_accuracy_histogram_thumb.png
+│   ├── mapped_subreadlength_histogram.png
+│   ├── mapped_subreadlength_histogram_thumb.png
+│   ├── mapping_coverage_report.html
+│   ├── mapping_coverage_report.json
+│   ├── mapping_stats_report.html
+│   ├── mapping_stats_report.json
+│   ├── overview.html
+│   ├── overview.json
+│   ├── post_filter_readlength_histogram.png
+│   ├── post_filter_readlength_histogram_thumb.png
+│   ├── post_filterread_score_histogram.png
+│   ├── post_filterread_score_histogram_thumb.png
+│   ├── pre_filter_readlength_histogram.png
+│   ├── pre_filter_readlength_histogram_thumb.png
+│   ├── pre_filterread_score_histogram.png
+│   ├── pre_filterread_score_histogram_thumb.png
+│   ├── top_variants_report.html
+│   ├── top_variants_report.json
+│   ├── variants_plot_74636eac7dd3a4cfc5b6dce677345e97.png
+│   ├── variants_plot_74636eac7dd3a4cfc5b6dce677345e97_thumb.png
+│   ├── variants_plot_legend.png
+│   ├── variants_report.html
+│   └── variants_report.json
+├── workflow
+│   ├── P_BridgeMapper
+│   │   ├── runBridgeMapper_001of001.sh
+│   │   ├── runBridgeMapper.input_fofn.Scatter.sh
+│   │   └── runBridgeMapper.split_reads_gz.Gather.sh
+│   ├── P_ConsensusReports
+│   │   ├── topVariantsReport.sh
+│   │   └── variantsJsonReport.sh
+│   ├── P_Fetch
+│   │   ├── adapterRpt.sh
+│   │   ├── getChemistry.sh
+│   │   ├── overviewRpt.sh
+│   │   └── toFofn.sh
+│   ├── P_Filter
+│   │   ├── filter_001of001.sh
+│   │   ├── filter.rgnFofn.Gather.sh
+│   │   ├── filter.summary.Gather.sh
+│   │   ├── subreads_001of001.sh
+│   │   ├── subreads.subreadFastq.Gather.sh
+│   │   ├── subreads.subreads.Gather.sh
+│   │   └── subreadSummary.sh
+│   ├── P_FilterReports
+│   │   ├── loadingRpt.sh
+│   │   ├── statsRpt.sh
+│   │   └── subreadRpt.sh
+│   ├── P_GenomicConsensus
+│   │   ├── callVariantsWithConsensus_001of001.sh
+│   │   ├── callVariantsWithConsensus.consensusFasta.Gather.sh
+│   │   ├── callVariantsWithConsensus.consensusFastq.Gather.sh
+│   │   ├── callVariantsWithConsensus.contig_list.Scatter.sh
+│   │   ├── callVariantsWithConsensus.variantsGff.Gather.sh
+│   │   ├── enrichAlnSummary.sh
+│   │   ├── makeBed.sh
+│   │   ├── makeVcf.sh
+│   │   ├── writeContigList.sh
+│   │   └── zipVariants.sh
+│   ├── P_Mapping
+│   │   ├── align_001of001.sh
+│   │   ├── align.cmpH5.Gather.sh
+│   │   ├── covGFF.sh
+│   │   ├── gff2Bed.sh
+│   │   ├── loadChemistry.sh
+│   │   ├── repack.sh
+│   │   ├── samBam.sh
+│   │   ├── sort.sh
+│   │   └── unmapped.sh
+│   ├── P_MappingReports
+│   │   ├── coverageJsonReport.sh
+│   │   └── statsJsonReport.sh
+│   ├── PostWorkflow.details.dot
+│   ├── PostWorkflow.details.html
+│   ├── PostWorkflow.details.svg
+│   ├── PostWorkflow.profile.html
+│   ├── PostWorkflow.rdf
+│   ├── PostWorkflow.summary.dot
+│   ├── PostWorkflow.summary.html
+│   ├── PostWorkflow.summary.svg
+│   ├── Workflow.details.dot
+│   ├── Workflow.details.html
+│   ├── Workflow.details.svg
+│   ├── Workflow.profile.html
+│   ├── Workflow.rdf
+│   ├── Workflow.summary.dot
+│   ├── Workflow.summary.html
+│   └── Workflow.summary.svg
+├── index.html
+├── input.fofn
+├── input.xml
+├── job.sh
+├── metadata.rdf
+├── settings.xml
+├── smrtpipe.stderr
+├── smrtpipe.stdout
+├── toc.xml
+└── vis.jnlp
+```
+***
+###RS_HGAP_Assembly.2###
+```
+├── data
+│   ├── 0-mercounts
+│   │   ├── celera-assembler-C-ms14-cm0.estMerThresh.err
+│   │   ├── celera-assembler-C-ms14-cm0.estMerThresh.out
+│   │   ├── celera-assembler-C-ms14-cm0.mcdat
+│   │   ├── celera-assembler-C-ms14-cm0.mcidx
+│   │   ├── celera-assembler.nmers.obt.fasta
+│   │   └── celera-assembler.nmers.ovl.fasta
+│   ├── 0-mertrim
+│   │   └── mertrim.success
+│   ├── 0-overlaptrim
+│   │   ├── celera-assembler.obtStore
+│   │   │   ├── 0001
+│   │   │   ├── idx
+│   │   │   └── ovs
+│   │   ├── celera-assembler.chimera.err
+│   │   ├── celera-assembler.chimera.log
+│   │   ├── celera-assembler.chimera.summary
+│   │   ├── celera-assembler.finalTrim.err
+│   │   ├── celera-assembler.finalTrim.log
+│   │   ├── celera-assembler.finalTrim.summary
+│   │   ├── celera-assembler.initialTrim.err
+│   │   ├── celera-assembler.initialTrim.log
+│   │   ├── celera-assembler.initialTrim.summary
+│   │   ├── celera-assembler.obtStore.err
+│   │   ├── celera-assembler.obtStore.list
+│   │   └── overlaptrim.success
+│   ├── 0-overlaptrim-overlap
+│   │   ├── 001
+│   │   ├── 000001.out
+│   │   ├── overlap_partition.err
+│   │   ├── overlap.sh
+│   │   ├── ovlbat
+│   │   ├── ovljob
+│   │   └── ovlopt
+│   ├── 1-overlapper
+│   │   ├── 001
+│   │   ├── 000001.out
+│   │   ├── overlap_partition.err
+│   │   ├── overlap.sh
+│   │   ├── ovlbat
+│   │   ├── ovljob
+│   │   └── ovlopt
+│   ├── 3-overlapcorrection
+│   │   ├── 0001.erate
+│   │   ├── 0001.err
+│   │   ├── 0001.frgcorr
+│   │   ├── cat-corrects.err
+│   │   ├── cat-corrects.frgcorrlist
+│   │   ├── cat-erates.eratelist
+│   │   ├── cat-erates.err
+│   │   ├── celera-assembler.erates
+│   │   ├── celera-assembler.erates.updated
+│   │   ├── celera-assembler.frgcorr
+│   │   ├── frgcorr.sh
+│   │   ├── overlapStore-update-erates.err
+│   │   └── ovlcorr.sh
+│   ├── 4-unitigger
+│   │   ├── best.contains
+│   │   ├── best.edges
+│   │   ├── best.singletons
+│   │   ├── celera-assembler.002.bestOverlapGraph.thr000.num000.log
+│   │   ├── celera-assembler.005.buildUnitigs.thr000.num000.log
+│   │   ├── celera-assembler.006.placeContains.thr000.num000.log
+│   │   ├── celera-assembler.007.placeZombies.thr000.num000.log
+│   │   ├── celera-assembler.009.popBubbles.thr000.num000.log
+│   │   ├── celera-assembler.010.mergeSplitJoin.thr000.num000.log
+│   │   ├── celera-assembler.011.cleanup.thr000.num000.log
+│   │   ├── celera-assembler.013.output.thr000.num000.log
+│   │   ├── celera-assembler.fragmentInfo
+│   │   ├── celera-assembler.iidmap
+│   │   ├── celera-assembler.partitioning
+│   │   ├── celera-assembler.partitioningInfo
+│   │   ├── celera-assembler.unused.ovl
+│   │   ├── unitigger.err
+│   │   └── unitigger.success
+│   ├── 5-consensus
+│   │   ├── celera-assembler_001.cns.err
+│   │   ├── celera-assembler_001.fix.err
+│   │   ├── celera-assembler_001.fixes
+│   │   ├── celera-assembler_001.success
+│   │   ├── celera-assembler.fixes
+│   │   ├── celera-assembler.fixes.err
+│   │   ├── celera-assembler.partitioned
+│   │   ├── celera-assembler.partitioned.err
+│   │   ├── consensus.sh
+│   │   └── consensus.success
+│   ├── 5-consensus-coverage-stat
+│   │   ├── celera-assembler.cga.0
+│   │   ├── celera-assembler.log
+│   │   ├── celera-assembler.stats
+│   │   └── computeCoverageStat.err
+│   ├── 5-consensus-insert-sizes
+│   │   ├── celera-assembler.tigStore.distupdate
+│   │   ├── celera-assembler.tigStore.gp
+│   │   ├── estimates.out
+│   │   └── updates.err
+│   ├── 5-consensus-split
+│   │   ├── consensus-fix.out
+│   │   └── splitUnitigs.out
+│   ├── 6-clonesize
+│   │   ├── celera-assembler.tigStore
+│   │   │   ├── seqDB.v001.ctg -> ../../celera-assembler.tigStore/seqDB.v001.ctg
+│   │   │   ├── seqDB.v001.p001.dat -> ../../celera-assembler.tigStore/seqDB.v001.p001.dat
+│   │   │   ├── seqDB.v001.p001.utg -> ../../celera-assembler.tigStore/seqDB.v001.p001.utg
+│   │   │   ├── seqDB.v001.utg -> ../../celera-assembler.tigStore/seqDB.v001.utg
+│   │   │   ├── seqDB.v002.p001.dat -> ../../celera-assembler.tigStore/seqDB.v002.p001.dat
+│   │   │   ├── seqDB.v002.p001.utg -> ../../celera-assembler.tigStore/seqDB.v002.p001.utg
+│   │   │   ├── seqDB.v003.ctg -> ../../celera-assembler.tigStore/seqDB.v003.ctg
+│   │   │   ├── seqDB.v003.dat -> ../../celera-assembler.tigStore/seqDB.v003.dat
+│   │   │   ├── seqDB.v003.utg -> ../../celera-assembler.tigStore/seqDB.v003.utg
+│   │   │   ├── seqDB.v004.ctg -> ../../celera-assembler.tigStore/seqDB.v004.ctg
+│   │   │   ├── seqDB.v004.utg -> ../../celera-assembler.tigStore/seqDB.v004.utg
+│   │   │   ├── seqDB.v005.ctg -> ../../celera-assembler.tigStore/seqDB.v005.ctg
+│   │   │   ├── seqDB.v005.utg -> ../../celera-assembler.tigStore/seqDB.v005.utg
+│   │   │   ├── seqDB.v006.ctg
+│   │   │   ├── seqDB.v006.dat
+│   │   │   ├── seqDB.v006.utg
+│   │   │   ├── seqDB.v007.ctg
+│   │   │   ├── seqDB.v007.utg
+│   │   │   ├── seqDB.v008.ctg
+│   │   │   ├── seqDB.v008.utg
+│   │   │   ├── seqDB.v009.ctg
+│   │   │   ├── seqDB.v009.utg
+│   │   │   ├── seqDB.v010.ctg
+│   │   │   ├── seqDB.v010.utg
+│   │   │   ├── seqDB.v011.ctg
+│   │   │   ├── seqDB.v011.utg
+│   │   │   ├── seqDB.v012.ctg
+│   │   │   ├── seqDB.v012.utg
+│   │   │   ├── seqDB.v013.ctg
+│   │   │   ├── seqDB.v013.utg
+│   │   │   ├── seqDB.v014.ctg
+│   │   │   └── seqDB.v014.utg
+│   │   ├── rezlog
+│   │   │   ├── crocks.i01.log
+│   │   │   ├── rez.i01.log
+│   │   │   ├── rez.i02.log
+│   │   │   ├── stone.i01.log
+│   │   │   └── stone.i02.log
+│   │   ├── stat
+│   │   │   ├── CIfinal0.linkstd_all.cgm
+│   │   │   ├── CIfinal0.linkstd_no_overlap.cgm
+│   │   │   ├── CIfinal0.linkstd_w_overlap.cgm
+│   │   │   ├── CIfinal0.mates_per_link.cgm
+│   │   │   ├── CIGraph_U.nodeendoutdegree.cgm
+│   │   │   ├── CIGraph_U.nodeoutdegree.cgm
+│   │   │   ├── Contigfinal0.linkstd_all.cgm
+│   │   │   ├── Contigfinal0.linkstd_no_overlap.cgm
+│   │   │   ├── Contigfinal0.linkstd_w_overlap.cgm
+│   │   │   ├── Contigfinal0.mates_per_link.cgm
+│   │   │   ├── contig_final.distupdate
+│   │   │   ├── final0.PlacedContig.nodeendoutdegree.cgm
+│   │   │   ├── final0.PlacedContig.nodelength.cgm
+│   │   │   ├── final0.PlacedContig.nodeoutdegree.cgm
+│   │   │   ├── final0.PlacedContig.unitigs.cgm
+│   │   │   ├── final0.Scaffolds.contigs.cgm
+│   │   │   ├── final0.Scaffolds.intra_scaffold_gap_means.cgm
+│   │   │   ├── final0.Scaffolds.intra_scaffold_gap_stds.cgm
+│   │   │   ├── final0.Scaffolds.links_per_edge_w_bac.cgm
+│   │   │   ├── final0.Scaffolds.links_per_edge_wo_bac.cgm
+│   │   │   ├── final0.Scaffolds.Nature.txt
+│   │   │   ├── final0.Scaffolds.nodeendoutdegree.cgm
+│   │   │   ├── final0.Scaffolds.nodelength.cgm
+│   │   │   ├── final0.Scaffolds.nodeoutdegree.cgm
+│   │   │   ├── final0.SingleScaffolds.nodelength.cgm
+│   │   │   ├── final.surrogates_Created.cgm
+│   │   │   ├── final.surrogates_fragsPer.cgm
+│   │   │   ├── final.surrogates_per_repeatCI.cgm
+│   │   │   ├── final.surrogates_ratio.cgm
+│   │   │   ├── final.surrogates_size.cgm
+│   │   │   ├── scaffold_final.distupdate
+│   │   │   └── unitig_initial.distupdate
+│   │   ├── celera-assembler.ckp.13
+│   │   ├── celera-assembler.distupdate
+│   │   ├── celera-assembler.distupdate.err
+│   │   ├── celera-assembler.distupdate.success
+│   │   ├── celera-assembler.timing
+│   │   ├── cgw.out
+│   │   └── cgw.success
+│   ├── 7-0-CGW
+│   │   ├── rezlog
+│   │   │   ├── crocks.i01.log
+│   │   │   ├── rez.i01.log
+│   │   │   └── rez.i02.log
+│   │   ├── stat
+│   │   │   ├── CIfinal0.linkstd_all.cgm
+│   │   │   ├── CIfinal0.linkstd_no_overlap.cgm
+│   │   │   ├── CIfinal0.linkstd_w_overlap.cgm
+│   │   │   ├── CIfinal0.mates_per_link.cgm
+│   │   │   ├── CIGraph_U.nodeendoutdegree.cgm
+│   │   │   ├── CIGraph_U.nodeoutdegree.cgm
+│   │   │   ├── Contigfinal0.linkstd_all.cgm
+│   │   │   ├── Contigfinal0.linkstd_no_overlap.cgm
+│   │   │   ├── Contigfinal0.linkstd_w_overlap.cgm
+│   │   │   ├── Contigfinal0.mates_per_link.cgm
+│   │   │   ├── contig_final.distupdate
+│   │   │   ├── final0.PlacedContig.nodeendoutdegree.cgm
+│   │   │   ├── final0.PlacedContig.nodelength.cgm
+│   │   │   ├── final0.PlacedContig.nodeoutdegree.cgm
+│   │   │   ├── final0.PlacedContig.unitigs.cgm
+│   │   │   ├── final0.Scaffolds.contigs.cgm
+│   │   │   ├── final0.Scaffolds.intra_scaffold_gap_means.cgm
+│   │   │   ├── final0.Scaffolds.intra_scaffold_gap_stds.cgm
+│   │   │   ├── final0.Scaffolds.links_per_edge_w_bac.cgm
+│   │   │   ├── final0.Scaffolds.links_per_edge_wo_bac.cgm
+│   │   │   ├── final0.Scaffolds.Nature.txt
+│   │   │   ├── final0.Scaffolds.nodeendoutdegree.cgm
+│   │   │   ├── final0.Scaffolds.nodelength.cgm
+│   │   │   ├── final0.Scaffolds.nodeoutdegree.cgm
+│   │   │   ├── final0.SingleScaffolds.nodelength.cgm
+│   │   │   ├── final.surrogates_Created.cgm
+│   │   │   ├── final.surrogates_fragsPer.cgm
+│   │   │   ├── final.surrogates_per_repeatCI.cgm
+│   │   │   ├── final.surrogates_ratio.cgm
+│   │   │   ├── final.surrogates_size.cgm
+│   │   │   ├── scaffold_final.distupdate
+│   │   │   └── unitig_initial.distupdate
+│   │   ├── celera-assembler.ckp.9
+│   │   ├── celera-assembler.distupdate
+│   │   ├── celera-assembler.distupdate.err
+│   │   ├── celera-assembler.distupdate.success
+│   │   ├── celera-assembler.timing
+│   │   ├── cgw.out
+│   │   └── cgw.success
+│   ├── 7-1-ECR
+│   │   ├── celera-assembler.ckp.9 -> ../7-0-CGW/celera-assembler.ckp.9
+│   │   ├── celera-assembler.timing
+│   │   ├── extendClearRanges.partitionInfo
+│   │   ├── extendClearRanges.partitionInfo.err
+│   │   └── extendClearRanges.success
+│   ├── 7-2-CGW
+│   │   ├── rezlog
+│   │   │   ├── crocks.i01.log
+│   │   │   ├── rez.i01.log
+│   │   │   └── rez.i02.log
+│   │   ├── stat
+│   │   │   ├── CIfinal0.linkstd_all.cgm
+│   │   │   ├── CIfinal0.linkstd_no_overlap.cgm
+│   │   │   ├── CIfinal0.linkstd_w_overlap.cgm
+│   │   │   ├── CIfinal0.mates_per_link.cgm
+│   │   │   ├── CIGraph_U.nodeendoutdegree.cgm
+│   │   │   ├── CIGraph_U.nodeoutdegree.cgm
+│   │   │   ├── Contigfinal0.linkstd_all.cgm
+│   │   │   ├── Contigfinal0.linkstd_no_overlap.cgm
+│   │   │   ├── Contigfinal0.linkstd_w_overlap.cgm
+│   │   │   ├── Contigfinal0.mates_per_link.cgm
+│   │   │   ├── contig_final.distupdate
+│   │   │   ├── final0.PlacedContig.nodeendoutdegree.cgm
+│   │   │   ├── final0.PlacedContig.nodelength.cgm
+│   │   │   ├── final0.PlacedContig.nodeoutdegree.cgm
+│   │   │   ├── final0.PlacedContig.unitigs.cgm
+│   │   │   ├── final0.Scaffolds.contigs.cgm
+│   │   │   ├── final0.Scaffolds.intra_scaffold_gap_means.cgm
+│   │   │   ├── final0.Scaffolds.intra_scaffold_gap_stds.cgm
+│   │   │   ├── final0.Scaffolds.links_per_edge_w_bac.cgm
+│   │   │   ├── final0.Scaffolds.links_per_edge_wo_bac.cgm
+│   │   │   ├── final0.Scaffolds.Nature.txt
+│   │   │   ├── final0.Scaffolds.nodeendoutdegree.cgm
+│   │   │   ├── final0.Scaffolds.nodelength.cgm
+│   │   │   ├── final0.Scaffolds.nodeoutdegree.cgm
+│   │   │   ├── final0.SingleScaffolds.nodelength.cgm
+│   │   │   ├── final.surrogates_Created.cgm
+│   │   │   ├── final.surrogates_fragsPer.cgm
+│   │   │   ├── final.surrogates_per_repeatCI.cgm
+│   │   │   ├── final.surrogates_ratio.cgm
+│   │   │   ├── final.surrogates_size.cgm
+│   │   │   └── scaffold_final.distupdate
+│   │   ├── celera-assembler.ckp.12
+│   │   ├── celera-assembler.distupdate
+│   │   ├── celera-assembler.distupdate.err
+│   │   ├── celera-assembler.distupdate.success
+│   │   ├── celera-assembler.timing
+│   │   ├── cgw.out
+│   │   └── cgw.success
+│   ├── 7-3-ECR
+│   │   ├── celera-assembler.ckp.12 -> ../7-2-CGW/celera-assembler.ckp.12
+│   │   ├── celera-assembler.timing
+│   │   ├── extendClearRanges.partitionInfo
+│   │   ├── extendClearRanges.partitionInfo.err
+│   │   └── extendClearRanges.success
+│   ├── 7-4-CGW
+│   │   ├── rezlog
+│   │   │   ├── crocks.i01.log
+│   │   │   ├── rez.i01.log
+│   │   │   ├── rez.i02.log
+│   │   │   ├── stone.i01.log
+│   │   │   └── stone.i02.log
+│   │   ├── stat
+│   │   │   ├── CIfinal0.linkstd_all.cgm
+│   │   │   ├── CIfinal0.linkstd_no_overlap.cgm
+│   │   │   ├── CIfinal0.linkstd_w_overlap.cgm
+│   │   │   ├── CIfinal0.mates_per_link.cgm
+│   │   │   ├── CIGraph_U.nodeendoutdegree.cgm
+│   │   │   ├── CIGraph_U.nodeoutdegree.cgm
+│   │   │   ├── Contigfinal0.linkstd_all.cgm
+│   │   │   ├── Contigfinal0.linkstd_no_overlap.cgm
+│   │   │   ├── Contigfinal0.linkstd_w_overlap.cgm
+│   │   │   ├── Contigfinal0.mates_per_link.cgm
+│   │   │   ├── contig_final.distupdate
+│   │   │   ├── final0.PlacedContig.nodeendoutdegree.cgm
+│   │   │   ├── final0.PlacedContig.nodelength.cgm
+│   │   │   ├── final0.PlacedContig.nodeoutdegree.cgm
+│   │   │   ├── final0.PlacedContig.unitigs.cgm
+│   │   │   ├── final0.Scaffolds.contigs.cgm
+│   │   │   ├── final0.Scaffolds.intra_scaffold_gap_means.cgm
+│   │   │   ├── final0.Scaffolds.intra_scaffold_gap_stds.cgm
+│   │   │   ├── final0.Scaffolds.links_per_edge_w_bac.cgm
+│   │   │   ├── final0.Scaffolds.links_per_edge_wo_bac.cgm
+│   │   │   ├── final0.Scaffolds.Nature.txt
+│   │   │   ├── final0.Scaffolds.nodeendoutdegree.cgm
+│   │   │   ├── final0.Scaffolds.nodelength.cgm
+│   │   │   ├── final0.Scaffolds.nodeoutdegree.cgm
+│   │   │   ├── final0.SingleScaffolds.nodelength.cgm
+│   │   │   ├── final.surrogates_Created.cgm
+│   │   │   ├── final.surrogates_fragsPer.cgm
+│   │   │   ├── final.surrogates_per_repeatCI.cgm
+│   │   │   ├── final.surrogates_ratio.cgm
+│   │   │   ├── final.surrogates_size.cgm
+│   │   │   └── scaffold_final.distupdate
+│   │   ├── celera-assembler.asm.cam
+│   │   ├── celera-assembler.ckp.21
+│   │   ├── celera-assembler.distupdate
+│   │   ├── celera-assembler.distupdate.err
+│   │   ├── celera-assembler.distupdate.success
+│   │   ├── celera-assembler.dregs.cam
+│   │   ├── celera-assembler.partitionInfo
+│   │   ├── celera-assembler.partitioning
+│   │   ├── celera-assembler.timing
+│   │   ├── cgw.out
+│   │   └── cgw.success
+│   ├── 7-CGW -> 7-4-CGW
+│   ├── 8-consensus
+│   │   ├── celera-assembler_001.err
+│   │   ├── celera-assembler_001.success
+│   │   ├── celera-assembler.partitioned
+│   │   ├── celera-assembler.partitioned.err
+│   │   ├── consensus.sh
+│   │   └── consensus.success
+│   ├── 9-terminator
+│   │   ├── celera-assembler.asm
+│   │   ├── celera-assembler.asm.err
+│   │   ├── celera-assembler.badMateFragmentIDs
+│   │   ├── celera-assembler.ctg.fasta
+│   │   ├── celera-assembler.ctg.qual
+│   │   ├── celera-assembler.ctg.qv
+│   │   ├── celera-assembler.deg.fasta
+│   │   ├── celera-assembler.deg.qual
+│   │   ├── celera-assembler.deg.qv
+│   │   ├── celera-assembler.iidtouid
+│   │   ├── celera-assembler.posmap.ctginf
+│   │   ├── celera-assembler.posmap.ctglen
+│   │   ├── celera-assembler.posmap.ctglkg
+│   │   ├── celera-assembler.posmap.ctgscf
+│   │   ├── celera-assembler.posmap.deginf
+│   │   ├── celera-assembler.posmap.deglen
+│   │   ├── celera-assembler.posmap.frags
+│   │   ├── celera-assembler.posmap.frgctg
+│   │   ├── celera-assembler.posmap.frgdeg
+│   │   ├── celera-assembler.posmap.frgscf
+│   │   ├── celera-assembler.posmap.frgscf.sorted
+│   │   ├── celera-assembler.posmap.frgutg
+│   │   ├── celera-assembler.posmap.libraries
+│   │   ├── celera-assembler.posmap.libraries.78512
+│   │   ├── celera-assembler.posmap.mates
+│   │   ├── celera-assembler.posmap.scfinf
+│   │   ├── celera-assembler.posmap.scflen
+│   │   ├── celera-assembler.posmap.scflkg
+│   │   ├── celera-assembler.posmap.sfgctg
+│   │   ├── celera-assembler.posmap.sfgscf
+│   │   ├── celera-assembler.posmap.utgctg
+│   │   ├── celera-assembler.posmap.utgdeg
+│   │   ├── celera-assembler.posmap.utginf
+│   │   ├── celera-assembler.posmap.utglen
+│   │   ├── celera-assembler.posmap.utglkg
+│   │   ├── celera-assembler.posmap.utgscf
+│   │   ├── celera-assembler.posmap.varctg
+│   │   ├── celera-assembler.posmap.vardeg
+│   │   ├── celera-assembler.posmap.varscf
+│   │   ├── celera-assembler.qc
+│   │   ├── celera-assembler.scf.fasta
+│   │   ├── celera-assembler.scf.qual
+│   │   ├── celera-assembler.scf.qv
+│   │   ├── celera-assembler.singleton.fasta
+│   │   ├── celera-assembler.utg.fasta
+│   │   ├── celera-assembler.utg.qual
+│   │   └── celera-assembler.utg.qv
+│   ├── celera-assembler.gkpStore
+│   │   ├── clr-NORMAL-01-CLR
+│   │   ├── clr-NORMAL-05-OBTINITIAL
+│   │   ├── clr-NORMAL-06-OBTMERGE
+│   │   ├── clr-NORMAL-07-OBTCHIMERA
+│   │   ├── f2p
+│   │   ├── fnm
+│   │   ├── fnm.000
+│   │   ├── fnm.001
+│   │   ├── fpk
+│   │   ├── fpk.000
+│   │   ├── fpk.001
+│   │   ├── fsb
+│   │   ├── fsb.000
+│   │   ├── fsb.001
+│   │   ├── inf
+│   │   ├── lib
+│   │   ├── plc
+│   │   ├── qnm
+│   │   ├── qnm.000
+│   │   ├── qnm.001
+│   │   ├── qpk
+│   │   ├── qpk.000
+│   │   ├── qpk.001
+│   │   ├── qsb
+│   │   ├── qsb.000
+│   │   ├── qsb.001
+│   │   ├── snm
+│   │   ├── ssb
+│   │   ├── u2i
+│   │   └── uid
+│   ├── celera-assembler.ovlStore
+│   │   ├── 0001
+│   │   ├── corrected
+│   │   ├── idx
+│   │   └── ovs
+│   ├── celera-assembler.tigStore
+│   │   ├── seqDB.v001.ctg
+│   │   ├── seqDB.v001.p001.dat
+│   │   ├── seqDB.v001.p001.utg
+│   │   ├── seqDB.v001.utg
+│   │   ├── seqDB.v002.p001.dat
+│   │   ├── seqDB.v002.p001.utg
+│   │   ├── seqDB.v003.ctg
+│   │   ├── seqDB.v003.dat
+│   │   ├── seqDB.v003.utg
+│   │   ├── seqDB.v004.ctg
+│   │   ├── seqDB.v004.utg
+│   │   ├── seqDB.v005.ctg
+│   │   ├── seqDB.v005.utg
+│   │   ├── seqDB.v006.ctg
+│   │   ├── seqDB.v006.dat
+│   │   ├── seqDB.v006.utg
+│   │   ├── seqDB.v007.ctg
+│   │   ├── seqDB.v007.utg
+│   │   ├── seqDB.v008.ctg
+│   │   ├── seqDB.v008.utg
+│   │   ├── seqDB.v009.ctg
+│   │   ├── seqDB.v009.utg
+│   │   ├── seqDB.v010.ctg
+│   │   ├── seqDB.v010.utg
+│   │   ├── seqDB.v011.ctg
+│   │   ├── seqDB.v011.utg
+│   │   ├── seqDB.v012.ctg
+│   │   ├── seqDB.v012.utg
+│   │   ├── seqDB.v013.ctg
+│   │   ├── seqDB.v013.utg
+│   │   ├── seqDB.v014.ctg
+│   │   ├── seqDB.v014.utg
+│   │   ├── seqDB.v015.ctg
+│   │   ├── seqDB.v015.utg
+│   │   ├── seqDB.v016.ctg
+│   │   ├── seqDB.v016.utg
+│   │   ├── seqDB.v017.ctg
+│   │   ├── seqDB.v017.utg
+│   │   ├── seqDB.v018.ctg
+│   │   ├── seqDB.v018.utg
+│   │   ├── seqDB.v019.ctg
+│   │   ├── seqDB.v019.utg
+│   │   ├── seqDB.v020.ctg
+│   │   ├── seqDB.v020.utg
+│   │   ├── seqDB.v021.ctg
+│   │   ├── seqDB.v021.p001.ctg
+│   │   ├── seqDB.v021.p001.dat
+│   │   ├── seqDB.v021.utg
+│   │   ├── seqDB.v022.p001.ctg
+│   │   └── seqDB.v022.p001.dat
+│   ├── filtered_regions
+│   │   └── m140506_154708_42141_c100642411270000001823129210151426_s1_p0.3.rgn.h5
+│   ├── runCA-logs
+│   │   ├── 1399427150_mp-f131.nanofluidics.com_27879_runCA
+│   │   ├── 1399427150_mp-f131.nanofluidics.com_27885_gatekeeper
+│   │   ├── 1399427156_mp-f131.nanofluidics.com_27908_gatekeeper
+│   │   ├── 1399427156_mp-f131.nanofluidics.com_27910_gatekeeper
+│   │   ├── 1399427156_mp-f131.nanofluidics.com_27913_gatekeeper
+│   │   ├── 1399427156_mp-f131.nanofluidics.com_27915_initialTrim
+│   │   ├── 1399427158_mp-f131.nanofluidics.com_27916_gatekeeper
+│   │   ├── 1399427158_mp-f131.nanofluidics.com_27918_meryl
+│   │   ├── 1399427158_mp-f131.nanofluidics.com_27920_meryl
+│   │   ├── 1399427220_mp-f131.nanofluidics.com_27993_estimate-mer-threshold
+│   │   ├── 1399427220_mp-f131.nanofluidics.com_27995_meryl
+│   │   ├── 1399427221_mp-f131.nanofluidics.com_27996_meryl
+│   │   ├── 1399427221_mp-f131.nanofluidics.com_27998_meryl
+│   │   ├── 1399427221_mp-f131.nanofluidics.com_28001_overlap_partition
+│   │   ├── 1399427223_mp-f131.nanofluidics.com_28159_overlapInCore
+│   │   ├── 1399427223_mp-f131.nanofluidics.com_28163_overlapInCore
+│   │   ├── 1399427223_mp-f131.nanofluidics.com_28165_overlapInCore
+│   │   ├── 1399427223_mp-f131.nanofluidics.com_28166_overlapInCore
+│   │   ├── 1399427223_mp-f131.nanofluidics.com_28167_overlapInCore
+│   │   ├── 1399427223_mp-f131.nanofluidics.com_28168_overlapInCore
+│   │   ├── 1399427223_mp-f131.nanofluidics.com_28169_overlapInCore
+│   │   ├── 1399427223_mp-f131.nanofluidics.com_28170_overlapInCore
+│   │   ├── 1399427223_mp-f131.nanofluidics.com_28171_overlapInCore
+│   │   ├── 1399427223_mp-f131.nanofluidics.com_28172_overlapInCore
+│   │   ├── 1399427223_mp-f131.nanofluidics.com_28178_overlapInCore
+│   │   ├── 1399427256_mp-f131.nanofluidics.com_28235_overlapInCore
+│   │   ├── 1399427267_mp-f131.nanofluidics.com_28254_overlapInCore
+│   │   ├── 1399427269_mp-f131.nanofluidics.com_28273_overlapInCore
+│   │   ├── 1399427273_mp-f131.nanofluidics.com_28299_overlapInCore
+│   │   ├── 1399427275_mp-f131.nanofluidics.com_28337_overlapInCore
+│   │   ├── 1399427279_mp-f131.nanofluidics.com_28356_overlapInCore
+│   │   ├── 1399427281_mp-f131.nanofluidics.com_28374_overlapInCore
+│   │   ├── 1399427283_mp-f131.nanofluidics.com_28392_overlapInCore
+│   │   ├── 1399427284_mp-f131.nanofluidics.com_28413_overlapInCore
+│   │   ├── 1399427285_mp-f131.nanofluidics.com_28450_overlapInCore
+│   │   ├── 1399427285_mp-f131.nanofluidics.com_28451_overlapInCore
+│   │   ├── 1399427304_mp-f131.nanofluidics.com_28474_overlapInCore
+│   │   ├── 1399427306_mp-f131.nanofluidics.com_28515_overlapInCore
+│   │   ├── 1399427312_mp-f131.nanofluidics.com_28533_overlapInCore
+│   │   ├── 1399427321_mp-f131.nanofluidics.com_28552_overlapInCore
+│   │   ├── 1399427348_mp-f131.nanofluidics.com_28601_overlapStoreBuild
+│   │   ├── 1399427352_mp-f131.nanofluidics.com_28707_finalTrim
+│   │   ├── 1399427354_mp-f131.nanofluidics.com_28709_chimera
+│   │   ├── 1399427355_mp-f131.nanofluidics.com_28714_overlap_partition
+│   │   ├── 1399427356_mp-f131.nanofluidics.com_28845_overlapInCore
+│   │   ├── 1399427356_mp-f131.nanofluidics.com_28854_overlapInCore
+│   │   ├── 1399427356_mp-f131.nanofluidics.com_28857_overlapInCore
+│   │   ├── 1399427356_mp-f131.nanofluidics.com_28865_overlapInCore
+│   │   ├── 1399427356_mp-f131.nanofluidics.com_28873_overlapInCore
+│   │   ├── 1399427356_mp-f131.nanofluidics.com_28874_overlapInCore
+│   │   ├── 1399427356_mp-f131.nanofluidics.com_28876_overlapInCore
+│   │   ├── 1399427356_mp-f131.nanofluidics.com_28879_overlapInCore
+│   │   ├── 1399427356_mp-f131.nanofluidics.com_28881_overlapInCore
+│   │   ├── 1399427356_mp-f131.nanofluidics.com_28882_overlapInCore
+│   │   ├── 1399427356_mp-f131.nanofluidics.com_28883_overlapInCore
+│   │   ├── 1399427383_mp-f131.nanofluidics.com_28943_overlapInCore
+│   │   ├── 1399427397_mp-f131.nanofluidics.com_28995_overlapInCore
+│   │   ├── 1399427399_mp-f131.nanofluidics.com_29015_overlapInCore
+│   │   ├── 1399427401_mp-f131.nanofluidics.com_29064_overlapInCore
+│   │   ├── 1399427401_mp-f131.nanofluidics.com_29075_overlapInCore
+│   │   ├── 1399427401_mp-f131.nanofluidics.com_29098_overlapInCore
+│   │   ├── 1399427401_mp-f131.nanofluidics.com_29118_overlapInCore
+│   │   ├── 1399427401_mp-f131.nanofluidics.com_29119_overlapInCore
+│   │   ├── 1399427402_mp-f131.nanofluidics.com_29139_overlapInCore
+│   │   ├── 1399427407_mp-f131.nanofluidics.com_29181_overlapInCore
+│   │   ├── 1399427407_mp-f131.nanofluidics.com_29186_overlapInCore
+│   │   ├── 1399427422_mp-f131.nanofluidics.com_29208_overlapInCore
+│   │   ├── 1399427429_mp-f131.nanofluidics.com_29273_overlapInCore
+│   │   ├── 1399427435_mp-f131.nanofluidics.com_29293_overlapInCore
+│   │   ├── 1399427436_mp-f131.nanofluidics.com_29311_overlapInCore
+│   │   ├── 1399427471_mp-f131.nanofluidics.com_29361_overlapStoreBuild
+│   │   ├── 1399427475_mp-f131.nanofluidics.com_29476_correct-frags
+│   │   ├── 1399427519_mp-f131.nanofluidics.com_29553_correct-olaps
+│   │   ├── 1399427558_mp-f131.nanofluidics.com_29585_overlapStore
+│   │   ├── 1399427558_mp-f131.nanofluidics.com_29588_bogart
+│   │   ├── 1399427564_mp-f131.nanofluidics.com_29638_gatekeeper
+│   │   ├── 1399427566_mp-f131.nanofluidics.com_29646_utgcns
+│   │   ├── 1399437539_mp-f131.nanofluidics.com_45348_utgcnsfix
+│   │   ├── 1399437586_mp-f131.nanofluidics.com_45429_tigStore
+│   │   ├── 1399437586_mp-f131.nanofluidics.com_45433_tigStore
+│   │   ├── 1399437588_mp-f131.nanofluidics.com_45439_gatekeeper
+│   │   ├── 1399437588_mp-f131.nanofluidics.com_45442_splitUnitigs
+│   │   ├── 1399437588_mp-f131.nanofluidics.com_45444_utgcnsfix
+│   │   ├── 1399437588_mp-f131.nanofluidics.com_45447_computeCoverageStat
+│   │   ├── 1399437589_mp-f131.nanofluidics.com_45454_cgw
+│   │   ├── 1399437592_mp-f131.nanofluidics.com_45527_gatekeeper
+│   │   ├── 1399437592_mp-f131.nanofluidics.com_45530_cgw
+│   │   ├── 1399437594_mp-f131.nanofluidics.com_45585_gatekeeper
+│   │   ├── 1399437594_mp-f131.nanofluidics.com_45591_extendClearRangesPartition
+│   │   ├── 1399437594_mp-f131.nanofluidics.com_45599_cgw
+│   │   ├── 1399437595_mp-f131.nanofluidics.com_45608_gatekeeper
+│   │   ├── 1399437595_mp-f131.nanofluidics.com_45614_extendClearRangesPartition
+│   │   ├── 1399437596_mp-f131.nanofluidics.com_45622_cgw
+│   │   ├── 1399437597_mp-f131.nanofluidics.com_45630_gatekeeper
+│   │   ├── 1399437597_mp-f131.nanofluidics.com_45636_gatekeeper
+│   │   ├── 1399437599_mp-f131.nanofluidics.com_45645_ctgcns
+│   │   ├── 1399437757_mp-f131.nanofluidics.com_45941_terminator
+│   │   ├── 1399437758_mp-f131.nanofluidics.com_45943_asmOutputFasta
+│   │   ├── 1399437760_mp-f131.nanofluidics.com_45947_dumpSingletons
+│   │   ├── 1399437760_mp-f131.nanofluidics.com_45949_buildPosMap
+│   │   ├── 1399437761_mp-f131.nanofluidics.com_45952_fragmentDepth
+│   │   ├── 1399437761_mp-f131.nanofluidics.com_45953_fragmentDepth
+│   │   ├── 1399437761_mp-f131.nanofluidics.com_45954_fragmentDepth
+│   │   └── 1399437761_mp-f131.nanofluidics.com_45955_fragmentDepth
+│   ├── aligned_reads.bam
+│   ├── aligned_reads.bam.bai
+│   ├── aligned_reads.cmp.h5
+│   ├── aligned_reads.sam
+│   ├── alignment_summary.gff
+│   ├── celera-assembler.asm
+│   ├── celera-assembler.deg.fasta -> /mnt/secondary/Smrtanalysis/current/common/jobs/078/078512/data/9-terminator/celera-assembler.deg.fasta
+│   ├── celera-assembler.gkpStore.err
+│   ├── celera-assembler.gkpStore.errorLog
+│   ├── celera-assembler.gkpStore.fastqUIDmap
+│   ├── celera-assembler.gkpStore.info
+│   ├── celera-assembler.ovlStore.err
+│   ├── celera-assembler.ovlStore.list
+│   ├── celera-assembler.qc
+│   ├── celera-assembler.scf.fasta -> /mnt/secondary/Smrtanalysis/current/common/jobs/078/078512/data/9-terminator/celera-assembler.scf.fasta
+│   ├── celera-assembler.singleton.fasta -> /mnt/secondary/Smrtanalysis/current/common/jobs/078/078512/data/9-terminator/celera-assembler.singleton.fasta
+│   ├── chemistry_mapping.xml
+│   ├── corrected.fasta
+│   ├── corrected.fastq
+│   ├── corrected.frg
+│   ├── corrections.gff
+│   ├── coverage.bed
+│   ├── data.items.json
+│   ├── data.items.pickle
+│   ├── filtered_regions.fofn
+│   ├── filtered_subreads.fasta
+│   ├── filtered_subreads.fastq
+│   ├── filtered_subread_summary.csv
+│   ├── filtered_summary.csv
+│   ├── polished_assembly.fasta.gz
+│   ├── polished_assembly.fastq.gz
+│   ├── runCA.spec
+│   ├── slots.pickle
+│   └── unmappedSubreads.fasta
+├── log
+│   ├── P_AssemblyPolishing
+│   │   ├── callConsensus.log
+│   │   ├── enrichAlnSummary.log
+│   │   ├── polishedJsonReport.log
+│   │   ├── topCorrectionsJsonReport.log
+│   │   ├── variantsJsonReport.log
+│   │   └── zipPolishedFasta.log
+│   ├── P_CeleraAssembler
+│   │   ├── genFrgFile.log
+│   │   ├── runCaHgap.log
+│   │   └── writeRunCASpec.log
+│   ├── P_Fetch
+│   │   ├── adapterRpt.log
+│   │   ├── getChemistry.log
+│   │   ├── overviewRpt.log
+│   │   └── toFofn.log
+│   ├── P_Filter
+│   │   ├── filter_001of001.log
+│   │   ├── filter.rgnFofn.Gather.log
+│   │   ├── filter.summary.Gather.log
+│   │   ├── subreads_001of001.log
+│   │   ├── subreads.plsFofn.Scatter.log
+│   │   ├── subreads.subreadFastq.Gather.log
+│   │   ├── subreads.subreads.Gather.log
+│   │   └── subreadSummary.log
+│   ├── P_FilterReports
+│   │   ├── loadingRpt.log
+│   │   ├── statsRpt.log
+│   │   └── subreadRpt.log
+│   ├── P_Mapping
+│   │   ├── align_001of001.log
+│   │   ├── align.cmpH5.Gather.log
+│   │   ├── covGFF.log
+│   │   ├── gff2Bed.log
+│   │   ├── loadChemistry.log
+│   │   ├── repack.log
+│   │   ├── samBam.log
+│   │   ├── sort.log
+│   │   └── unmapped.log
+│   ├── P_MappingReports
+│   │   ├── coverageJsonReport.log
+│   │   └── statsJsonReport.log
+│   ├── P_PreAssemblerDagcon
+│   │   ├── filterLongReadsByLength.log
+│   │   ├── hgapAlignForCorrection_001of001.log
+│   │   ├── hgapAlignForCorrection.blasrM4Fofn.Gather.log
+│   │   ├── hgapAlignForCorrection.blasrM4.Gather.log
+│   │   ├── hgapAlignForCorrection.target.Scatter.log
+│   │   ├── hgapCorrection_001of001.log
+│   │   ├── hgapCorrection.fasta.Gather.log
+│   │   ├── hgapCorrection.fastq.Gather.log
+│   │   ├── hgapFilterM4_001of001.log
+│   │   ├── hgapFilterM4.blasrM4Filtered.Gather.log
+│   │   └── preAssemblerJsonReport.log
+│   ├── P_ReferenceUploader
+│   │   └── runUploaderHgap.log
+│   ├── master.log
+│   └── smrtpipe.log
+├── movie_metadata
+│   └── m140506_154708_42141_c100642411270000001823129210151426_s1_p0.metadata.xml
+├── reference
+│   ├── sequence
+│   │   ├── reference.fasta
+│   │   ├── reference.fasta.contig.index
+│   │   ├── reference.fasta.fai
+│   │   ├── reference.fasta.index
+│   │   └── reference.fasta.sa
+│   └── reference.info.xml
+├── results
+│   ├── adapter_observed_insert_length_distribution.png
+│   ├── adapter_observed_insert_length_distribution_thumb.png
+│   ├── corrections.html
+│   ├── corrections.json
+│   ├── coverage_histogram.png
+│   ├── coverage_histogram_thumb.png
+│   ├── coverage_plot_d1f8db8165253b08875c26154fbbab01.png
+│   ├── coverage_plot_d1f8db8165253b08875c26154fbbab01_thumb.png
+│   ├── filtered_subread_report.png
+│   ├── filtered_subread_report_thmb.png
+│   ├── filter_reports_adapters.html
+│   ├── filter_reports_adapters.json
+│   ├── filter_reports_filter_stats.html
+│   ├── filter_reports_filter_stats.json
+│   ├── filter_reports_filter_subread_stats.html
+│   ├── filter_reports_filter_subread_stats.json
+│   ├── filter_reports_loading.html
+│   ├── filter_reports_loading.json
+│   ├── mapped_readlength_histogram.png
+│   ├── mapped_readlength_histogram_thumb.png
+│   ├── mapped_subread_accuracy_histogram.png
+│   ├── mapped_subread_accuracy_histogram_thumb.png
+│   ├── mapped_subreadlength_histogram.png
+│   ├── mapped_subreadlength_histogram_thumb.png
+│   ├── mapping_coverage_report.html
+│   ├── mapping_coverage_report.json
+│   ├── mapping_stats_report.html
+│   ├── mapping_stats_report.json
+│   ├── overview.html
+│   ├── overview.json
+│   ├── polished_coverage_vs_quality.csv
+│   ├── polished_coverage_vs_quality.png
+│   ├── polished_coverage_vs_quality_thumb.png
+│   ├── polished_report.html
+│   ├── polished_report.json
+│   ├── post_filter_readlength_histogram.png
+│   ├── post_filter_readlength_histogram_thumb.png
+│   ├── post_filterread_score_histogram.png
+│   ├── post_filterread_score_histogram_thumb.png
+│   ├── preassembler_report.html
+│   ├── preassembler_report.json
+│   ├── pre_filter_readlength_histogram.png
+│   ├── pre_filter_readlength_histogram_thumb.png
+│   ├── pre_filterread_score_histogram.png
+│   ├── pre_filterread_score_histogram_thumb.png
+│   ├── top_corrections_report.html
+│   ├── top_corrections_report.json
+│   ├── variants_plot_d1f8db8165253b08875c26154fbbab01.png
+│   ├── variants_plot_d1f8db8165253b08875c26154fbbab01_thumb.png
+│   └── variants_plot_legend.png
+├── workflow
+│   ├── P_AssemblyPolishing
+│   │   ├── callConsensus.sh
+│   │   ├── enrichAlnSummary.sh
+│   │   ├── polishedJsonReport.sh
+│   │   ├── topCorrectionsJsonReport.sh
+│   │   ├── variantsJsonReport.sh
+│   │   └── zipPolishedFasta.sh
+│   ├── P_CeleraAssembler
+│   │   ├── genFrgFile.sh
+│   │   ├── runCaHgap.sh
+│   │   └── writeRunCASpec.sh
+│   ├── P_Fetch
+│   │   ├── adapterRpt.sh
+│   │   ├── getChemistry.sh
+│   │   ├── overviewRpt.sh
+│   │   └── toFofn.sh
+│   ├── P_Filter
+│   │   ├── filter_001of001.sh
+│   │   ├── filter.rgnFofn.Gather.sh
+│   │   ├── filter.summary.Gather.sh
+│   │   ├── subreads_001of001.sh
+│   │   ├── subreads.plsFofn.Scatter.sh
+│   │   ├── subreads.subreadFastq.Gather.sh
+│   │   ├── subreads.subreads.Gather.sh
+│   │   └── subreadSummary.sh
+│   ├── P_FilterReports
+│   │   ├── loadingRpt.sh
+│   │   ├── statsRpt.sh
+│   │   └── subreadRpt.sh
+│   ├── P_Mapping
+│   │   ├── align_001of001.sh
+│   │   ├── align.cmpH5.Gather.sh
+│   │   ├── covGFF.sh
+│   │   ├── gff2Bed.sh
+│   │   ├── loadChemistry.sh
+│   │   ├── repack.sh
+│   │   ├── samBam.sh
+│   │   ├── sort.sh
+│   │   └── unmapped.sh
+│   ├── P_MappingReports
+│   │   ├── coverageJsonReport.sh
+│   │   └── statsJsonReport.sh
+│   ├── P_PreAssemblerDagcon
+│   │   ├── filterLongReadsByLength.sh
+│   │   ├── hgapAlignForCorrection_001of001.sh
+│   │   ├── hgapAlignForCorrection.blasrM4Fofn.Gather.sh
+│   │   ├── hgapAlignForCorrection.blasrM4.Gather.sh
+│   │   ├── hgapAlignForCorrection.target.Scatter.sh
+│   │   ├── hgapCorrection_001of001.sh
+│   │   ├── hgapCorrection.fasta.Gather.sh
+│   │   ├── hgapCorrection.fastq.Gather.sh
+│   │   ├── hgapFilterM4_001of001.sh
+│   │   ├── hgapFilterM4.blasrM4Filtered.Gather.sh
+│   │   └── preAssemblerJsonReport.sh
+│   ├── P_ReferenceUploader
+│   │   └── runUploaderHgap.sh
+│   ├── Workflow.details.dot
+│   ├── Workflow.details.html
+│   ├── Workflow.details.svg
+│   ├── Workflow.profile.html
+│   ├── Workflow.rdf
+│   ├── Workflow.summary.dot
+│   ├── Workflow.summary.html
+│   └── Workflow.summary.svg
+├── filtered_longreads.fasta
+├── filtered_longreads.fasta.cutoff
+├── index.html
+├── input.fofn
+├── input.xml
+├── job.sh
+├── metadata.rdf
+├── seeds.m4
+├── seeds.m4.fofn
+├── settings.xml
+├── smrtpipe.stderr
+├── smrtpipe.stdout
+├── toc.xml
+└── vis.jnlp
+```
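+
+The deliverables of this job sit directly under `data/` (`polished_assembly.fasta.gz`, `corrected.fasta`, the `aligned_reads.*` files); the numbered `data/<stage>` directories and `runCA-logs/` hold intermediate Celera Assembler state. As a minimal sketch (plain coreutils, nothing SMRT-specific), a post-run sanity check from inside the job directory might look like:
+```
+# Peek at the polished assembly without unpacking it
+zcat data/polished_assembly.fasta.gz | head -2
+
+# Count polished contigs
+zcat data/polished_assembly.fasta.gz | grep -c '^>'
+
+# Overall pipeline status, then the per-module logs
+tail log/smrtpipe.log
+ls log/P_AssemblyPolishing/
+```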
+***
+###RS_HGAP_Assembly.3###
+```
+├── data
+│   ├── 0-mercounts
+│   │   ├── celera-assembler-C-ms14-cm0.estMerThresh.err
+│   │   ├── celera-assembler-C-ms14-cm0.estMerThresh.out
+│   │   ├── celera-assembler-C-ms14-cm0.mcdat
+│   │   ├── celera-assembler-C-ms14-cm0.mcidx
+│   │   ├── celera-assembler.nmers.obt.fasta
+│   │   └── celera-assembler.nmers.ovl.fasta
+│   ├── 0-mertrim
+│   │   └── mertrim.success
+│   ├── 0-overlaptrim
+│   │   ├── celera-assembler.obtStore
+│   │   │   ├── 0001
+│   │   │   ├── idx
+│   │   │   └── ovs
+│   │   ├── celera-assembler.chimera.err
+│   │   ├── celera-assembler.chimera.log
+│   │   ├── celera-assembler.chimera.summary
+│   │   ├── celera-assembler.finalTrim.err
+│   │   ├── celera-assembler.finalTrim.log
+│   │   ├── celera-assembler.finalTrim.summary
+│   │   ├── celera-assembler.initialTrim.err
+│   │   ├── celera-assembler.initialTrim.log
+│   │   ├── celera-assembler.initialTrim.summary
+│   │   ├── celera-assembler.obtStore.err
+│   │   ├── celera-assembler.obtStore.list
+│   │   └── overlaptrim.success
+│   ├── 0-overlaptrim-overlap
+│   │   ├── 001
+│   │   ├── 000001.out
+│   │   ├── overlap_partition.err
+│   │   ├── overlap.sh
+│   │   ├── ovlbat
+│   │   ├── ovljob
+│   │   └── ovlopt
+│   ├── 1-overlapper
+│   │   ├── 001
+│   │   ├── 000001.out
+│   │   ├── overlap_partition.err
+│   │   ├── overlap.sh
+│   │   ├── ovlbat
+│   │   ├── ovljob
+│   │   └── ovlopt
+│   ├── 3-overlapcorrection
+│   │   ├── 0001.erate
+│   │   ├── 0001.err
+│   │   ├── 0001.frgcorr
+│   │   ├── cat-corrects.err
+│   │   ├── cat-corrects.frgcorrlist
+│   │   ├── cat-erates.eratelist
+│   │   ├── cat-erates.err
+│   │   ├── celera-assembler.erates
+│   │   ├── celera-assembler.erates.updated
+│   │   ├── celera-assembler.frgcorr
+│   │   ├── frgcorr.sh
+│   │   ├── overlapStore-update-erates.err
+│   │   └── ovlcorr.sh
+│   ├── 4-unitigger
+│   │   ├── best.contains
+│   │   ├── best.edges
+│   │   ├── best.singletons
+│   │   ├── celera-assembler.002.bestOverlapGraph.thr000.num000.log
+│   │   ├── celera-assembler.005.buildUnitigs.thr000.num000.log
+│   │   ├── celera-assembler.006.placeContains.thr000.num000.log
+│   │   ├── celera-assembler.007.placeZombies.thr000.num000.log
+│   │   ├── celera-assembler.009.popBubbles.thr000.num000.log
+│   │   ├── celera-assembler.010.mergeSplitJoin.thr000.num000.log
+│   │   ├── celera-assembler.010.mergeSplitJoin.thr028.num000.log
+│   │   ├── celera-assembler.011.cleanup.thr000.num000.log
+│   │   ├── celera-assembler.013.output.thr000.num000.log
+│   │   ├── celera-assembler.fragmentInfo
+│   │   ├── celera-assembler.iidmap
+│   │   ├── celera-assembler.partitioning
+│   │   ├── celera-assembler.partitioningInfo
+│   │   ├── celera-assembler.unused.ovl
+│   │   ├── unitigger.err
+│   │   └── unitigger.success
+│   ├── celera-assembler.gkpStore
+│   │   ├── clr-NORMAL-01-CLR
+│   │   ├── clr-NORMAL-05-OBTINITIAL
+│   │   ├── clr-NORMAL-06-OBTMERGE
+│   │   ├── clr-NORMAL-07-OBTCHIMERA
+│   │   ├── f2p
+│   │   ├── fnm
+│   │   ├── fpk
+│   │   ├── fsb
+│   │   ├── inf
+│   │   ├── lib
+│   │   ├── plc
+│   │   ├── qnm
+│   │   ├── qpk
+│   │   ├── qsb
+│   │   ├── snm
+│   │   ├── ssb
+│   │   ├── u2i
+│   │   └── uid
+│   ├── celera-assembler.ovlStore
+│   │   ├── 0001
+│   │   ├── corrected
+│   │   ├── idx
+│   │   └── ovs
+│   ├── celera-assembler.tigStore
+│   │   ├── seqDB.v001.ctg
+│   │   ├── seqDB.v001.p001.dat
+│   │   ├── seqDB.v001.p001.utg
+│   │   └── seqDB.v001.utg
+│   ├── filtered_regions
+│   │   ├── m140516_004425_sherri_c110042412550000001823111106241441_s1_p0.1.rgn.h5
+│   │   ├── m140516_004425_sherri_c110042412550000001823111106241441_s1_p0.2.rgn.h5
+│   │   └── m140516_004425_sherri_c110042412550000001823111106241441_s1_p0.3.rgn.h5
+│   ├── runCA-logs
+│   │   ├── 1400267302_mp-f114.nanofluidics.com_1862_runCA
+│   │   ├── 1400267302_mp-f114.nanofluidics.com_1868_gatekeeper
+│   │   ├── 1400267310_mp-f114.nanofluidics.com_1872_gatekeeper
+│   │   ├── 1400267310_mp-f114.nanofluidics.com_1874_gatekeeper
+│   │   ├── 1400267310_mp-f114.nanofluidics.com_1877_gatekeeper
+│   │   ├── 1400267310_mp-f114.nanofluidics.com_1879_initialTrim
+│   │   ├── 1400267312_mp-f114.nanofluidics.com_2190_gatekeeper
+│   │   ├── 1400267312_mp-f114.nanofluidics.com_2192_meryl
+│   │   ├── 1400267312_mp-f114.nanofluidics.com_2194_meryl
+│   │   ├── 1400267346_mp-f114.nanofluidics.com_2536_estimate-mer-threshold
+│   │   ├── 1400267346_mp-f114.nanofluidics.com_2538_meryl
+│   │   ├── 1400267347_mp-f114.nanofluidics.com_2539_meryl
+│   │   ├── 1400267347_mp-f114.nanofluidics.com_2541_meryl
+│   │   ├── 1400267347_mp-f114.nanofluidics.com_2544_overlap_partition
+│   │   ├── 1400267348_mp-f114.nanofluidics.com_2717_overlapInCore
+│   │   ├── 1400267348_mp-f114.nanofluidics.com_2723_overlapInCore
+│   │   ├── 1400267348_mp-f114.nanofluidics.com_2724_overlapInCore
+│   │   ├── 1400267348_mp-f114.nanofluidics.com_2725_overlapInCore
+│   │   ├── 1400267348_mp-f114.nanofluidics.com_2726_overlapInCore
+│   │   ├── 1400267348_mp-f114.nanofluidics.com_2727_overlapInCore
+│   │   ├── 1400267348_mp-f114.nanofluidics.com_2728_overlapInCore
+│   │   ├── 1400267348_mp-f114.nanofluidics.com_2729_overlapInCore
+│   │   ├── 1400267348_mp-f114.nanofluidics.com_2730_overlapInCore
+│   │   ├── 1400267348_mp-f114.nanofluidics.com_2731_overlapInCore
+│   │   ├── 1400267348_mp-f114.nanofluidics.com_2732_overlapInCore
+│   │   ├── 1400267367_mp-f114.nanofluidics.com_2775_overlapInCore
+│   │   ├── 1400267380_mp-f114.nanofluidics.com_2794_overlapInCore
+│   │   ├── 1400267407_mp-f114.nanofluidics.com_2842_overlapInCore
+│   │   ├── 1400267410_mp-f114.nanofluidics.com_2863_overlapInCore
+│   │   ├── 1400267435_mp-f114.nanofluidics.com_2884_overlapInCore
+│   │   ├── 1400267436_mp-f114.nanofluidics.com_2904_overlapInCore
+│   │   ├── 1400267436_mp-f114.nanofluidics.com_2923_overlapInCore
+│   │   ├── 1400267440_mp-f114.nanofluidics.com_2941_overlapInCore
+│   │   ├── 1400267447_mp-f114.nanofluidics.com_2960_overlapInCore
+│   │   ├── 1400267456_mp-f114.nanofluidics.com_2979_overlapInCore
+│   │   ├── 1400267457_mp-f114.nanofluidics.com_2997_overlapInCore
+│   │   ├── 1400267467_mp-f114.nanofluidics.com_3538_overlapInCore
+│   │   ├── 1400267473_mp-f114.nanofluidics.com_3640_overlapInCore
+│   │   ├── 1400267476_mp-f114.nanofluidics.com_3659_overlapInCore
+│   │   ├── 1400267480_mp-f114.nanofluidics.com_3677_overlapInCore
+│   │   ├── 1400267480_mp-f114.nanofluidics.com_3701_overlapInCore
+│   │   ├── 1400267480_mp-f114.nanofluidics.com_3713_overlapInCore
+│   │   ├── 1400267536_mp-f114.nanofluidics.com_5237_overlapStoreBuild
+│   │   ├── 1400267538_mp-f114.nanofluidics.com_5351_finalTrim
+│   │   ├── 1400267540_mp-f114.nanofluidics.com_5353_chimera
+│   │   ├── 1400267540_mp-f114.nanofluidics.com_5357_overlap_partition
+│   │   ├── 1400267541_mp-f114.nanofluidics.com_5516_overlapInCore
+│   │   ├── 1400267541_mp-f114.nanofluidics.com_5517_overlapInCore
+│   │   ├── 1400267541_mp-f114.nanofluidics.com_5518_overlapInCore
+│   │   ├── 1400267541_mp-f114.nanofluidics.com_5520_overlapInCore
+│   │   ├── 1400267541_mp-f114.nanofluidics.com_5521_overlapInCore
+│   │   ├── 1400267541_mp-f114.nanofluidics.com_5522_overlapInCore
+│   │   ├── 1400267541_mp-f114.nanofluidics.com_5523_overlapInCore
+│   │   ├── 1400267541_mp-f114.nanofluidics.com_5525_overlapInCore
+│   │   ├── 1400267541_mp-f114.nanofluidics.com_5527_overlapInCore
+│   │   ├── 1400267541_mp-f114.nanofluidics.com_5528_overlapInCore
+│   │   ├── 1400267541_mp-f114.nanofluidics.com_5529_overlapInCore
+│   │   ├── 1400267560_mp-f114.nanofluidics.com_5570_overlapInCore
+│   │   ├── 1400267570_mp-f114.nanofluidics.com_5589_overlapInCore
+│   │   ├── 1400267593_mp-f114.nanofluidics.com_5609_overlapInCore
+│   │   ├── 1400267594_mp-f114.nanofluidics.com_5627_overlapInCore
+│   │   ├── 1400267610_mp-f114.nanofluidics.com_5647_overlapInCore
+│   │   ├── 1400267611_mp-f114.nanofluidics.com_5665_overlapInCore
+│   │   ├── 1400267615_mp-f114.nanofluidics.com_5684_overlapInCore
+│   │   ├── 1400267620_mp-f114.nanofluidics.com_5703_overlapInCore
+│   │   ├── 1400267632_mp-f114.nanofluidics.com_5729_overlapInCore
+│   │   ├── 1400267633_mp-f114.nanofluidics.com_5747_overlapInCore
+│   │   ├── 1400267639_mp-f114.nanofluidics.com_5768_overlapInCore
+│   │   ├── 1400267647_mp-f114.nanofluidics.com_5787_overlapInCore
+│   │   ├── 1400267650_mp-f114.nanofluidics.com_5805_overlapInCore
+│   │   ├── 1400267650_mp-f114.nanofluidics.com_5823_overlapInCore
+│   │   ├── 1400267651_mp-f114.nanofluidics.com_5841_overlapInCore
+│   │   ├── 1400267658_mp-f114.nanofluidics.com_5860_overlapInCore
+│   │   ├── 1400267662_mp-f114.nanofluidics.com_5881_overlapInCore
+│   │   ├── 1400267701_mp-f114.nanofluidics.com_6838_overlapStoreBuild
+│   │   ├── 1400267703_mp-f114.nanofluidics.com_6962_correct-frags
+│   │   ├── 1400267724_mp-f114.nanofluidics.com_6993_correct-olaps
+│   │   ├── 1400267737_mp-f114.nanofluidics.com_7005_overlapStore
+│   │   └── 1400267737_mp-f114.nanofluidics.com_7008_bogart
+│   ├── aligned_reads.bam
+│   ├── aligned_reads.bam.bai
+│   ├── aligned_reads.cmp.h5
+│   ├── aligned_reads.sam
+│   ├── alignment_summary.gff
+│   ├── ca_finished
+│   ├── celera-assembler.gkpStore.err
+│   ├── celera-assembler.gkpStore.errorLog
+│   ├── celera-assembler.gkpStore.fastqUIDmap
+│   ├── celera-assembler.gkpStore.info
+│   ├── celera-assembler.ovlStore.err
+│   ├── celera-assembler.ovlStore.list
+│   ├── chemistry_mapping.xml
+│   ├── corrected.fasta
+│   ├── corrected.fastq
+│   ├── corrected.frg
+│   ├── corrections.gff
+│   ├── coverage.bed
+│   ├── data.items.json
+│   ├── data.items.pickle
+│   ├── draft_assembly.fasta
+│   ├── filtered_regions.fofn
+│   ├── filtered_subreads.fasta
+│   ├── filtered_subreads.fastq
+│   ├── filtered_subread_summary.csv
+│   ├── filtered_summary.csv
+│   ├── polished_assembly.fasta.gz
+│   ├── polished_assembly.fastq.gz
+│   ├── runCA.spec
+│   ├── slots.pickle
+│   ├── unitigs.lst
+│   └── unmappedSubreads.fasta
+├── log
+│   ├── P_AssembleUnitig
+│   │   ├── genFrgFile.log
+│   │   ├── getUnitigs.log
+│   │   ├── runCaToUnitig.log
+│   │   ├── unitigConsensus_001of001.log
+│   │   ├── unitigConsensus.unitigs.Scatter.log
+│   │   ├── unitigConsensus.utgConsensus.Gather.log
+│   │   └── writeRunCASpec.log
+│   ├── P_AssemblyPolishing
+│   │   ├── callConsensus.log
+│   │   ├── enrichAlnSummary.log
+│   │   ├── polishedJsonReport.log
+│   │   ├── topCorrectionsJsonReport.log
+│   │   ├── variantsJsonReport.log
+│   │   └── zipPolishedFasta.log
+│   ├── P_Fetch
+│   │   ├── adapterRpt.log
+│   │   ├── getChemistry.log
+│   │   ├── overviewRpt.log
+│   │   └── toFofn.log
+│   ├── P_Filter
+│   │   ├── filter_001of001.log
+│   │   ├── filter.plsFofn.Scatter.log
+│   │   ├── filter.rgnFofn.Gather.log
+│   │   ├── filter.summary.Gather.log
+│   │   ├── subreads_001of001.log
+│   │   ├── subreads.subreadFastq.Gather.log
+│   │   ├── subreads.subreads.Gather.log
+│   │   └── subreadSummary.log
+│   ├── P_FilterReports
+│   │   ├── loadingRpt.log
+│   │   ├── statsRpt.log
+│   │   └── subreadRpt.log
+│   ├── P_Mapping
+│   │   ├── align_001of001.log
+│   │   ├── align.cmpH5.Gather.log
+│   │   ├── covGFF.log
+│   │   ├── gff2Bed.log
+│   │   ├── loadChemistry.log
+│   │   ├── repack.log
+│   │   ├── samBam.log
+│   │   ├── sort.log
+│   │   └── unmapped.log
+│   ├── P_MappingReports
+│   │   ├── coverageJsonReport.log
+│   │   └── statsJsonReport.log
+│   ├── P_PreAssemblerDagcon
+│   │   ├── filterLongReadsByLength.log
+│   │   ├── hgapAlignForCorrection_001of001.log
+│   │   ├── hgapAlignForCorrection.blasrM4Fofn.Gather.log
+│   │   ├── hgapAlignForCorrection.blasrM4.Gather.log
+│   │   ├── hgapAlignForCorrection.target.Scatter.log
+│   │   ├── hgapCorrection_001of001.log
+│   │   ├── hgapCorrection.fasta.Gather.log
+│   │   ├── hgapCorrection.fastq.Gather.log
+│   │   ├── hgapFilterM4_001of001.log
+│   │   ├── hgapFilterM4.blasrM4Filtered.Gather.log
+│   │   └── preAssemblerJsonReport.log
+│   ├── P_ReferenceUploader
+│   │   └── runUploaderUnitig.log
+│   ├── master.log
+│   └── smrtpipe.log
+├── movie_metadata
+│   └── m140516_004425_sherri_c110042412550000001823111106241441_s1_p0.metadata.xml
+├── reference
+│   ├── sequence
+│   │   ├── reference.fasta
+│   │   ├── reference.fasta.contig.index
+│   │   ├── reference.fasta.fai
+│   │   ├── reference.fasta.index
+│   │   └── reference.fasta.sa
+│   └── reference.info.xml
+├── results
+│   ├── adapter_observed_insert_length_distribution.png
+│   ├── adapter_observed_insert_length_distribution_thumb.png
+│   ├── corrections.html
+│   ├── corrections.json
+│   ├── coverage_histogram.png
+│   ├── coverage_histogram_thumb.png
+│   ├── coverage_plot_b9c24724c0f58d73b9ceadef52bdabd1.png
+│   ├── coverage_plot_b9c24724c0f58d73b9ceadef52bdabd1_thumb.png
+│   ├── filtered_subread_report.png
+│   ├── filtered_subread_report_thmb.png
+│   ├── filter_reports_adapters.html
+│   ├── filter_reports_adapters.json
+│   ├── filter_reports_filter_stats.html
+│   ├── filter_reports_filter_stats.json
+│   ├── filter_reports_filter_subread_stats.html
+│   ├── filter_reports_filter_subread_stats.json
+│   ├── filter_reports_loading.html
+│   ├── filter_reports_loading.json
+│   ├── mapped_readlength_histogram.png
+│   ├── mapped_readlength_histogram_thumb.png
+│   ├── mapped_subread_accuracy_histogram.png
+│   ├── mapped_subread_accuracy_histogram_thumb.png
+│   ├── mapped_subreadlength_histogram.png
+│   ├── mapped_subreadlength_histogram_thumb.png
+│   ├── mapping_coverage_report.html
+│   ├── mapping_coverage_report.json
+│   ├── mapping_stats_report.html
+│   ├── mapping_stats_report.json
+│   ├── overview.html
+│   ├── overview.json
+│   ├── polished_coverage_vs_quality.csv
+│   ├── polished_coverage_vs_quality.png
+│   ├── polished_coverage_vs_quality_thumb.png
+│   ├── polished_report.html
+│   ├── polished_report.json
+│   ├── post_filter_readlength_histogram.png
+│   ├── post_filter_readlength_histogram_thumb.png
+│   ├── post_filterread_score_histogram.png
+│   ├── post_filterread_score_histogram_thumb.png
+│   ├── preassembler_report.html
+│   ├── preassembler_report.json
+│   ├── pre_filter_readlength_histogram.png
+│   ├── pre_filter_readlength_histogram_thumb.png
+│   ├── pre_filterread_score_histogram.png
+│   ├── pre_filterread_score_histogram_thumb.png
+│   ├── top_corrections_report.html
+│   ├── top_corrections_report.json
+│   ├── variants_plot_b9c24724c0f58d73b9ceadef52bdabd1.png
+│   ├── variants_plot_b9c24724c0f58d73b9ceadef52bdabd1_thumb.png
+│   └── variants_plot_legend.png
+├── workflow
+│   ├── P_AssembleUnitig
+│   │   ├── genFrgFile.sh
+│   │   ├── getUnitigs.sh
+│   │   ├── runCaToUnitig.sh
+│   │   ├── unitigConsensus_001of001.sh
+│   │   ├── unitigConsensus.unitigs.Scatter.sh
+│   │   ├── unitigConsensus.utgConsensus.Gather.sh
+│   │   └── writeRunCASpec.sh
+│   ├── P_AssemblyPolishing
+│   │   ├── callConsensus.sh
+│   │   ├── enrichAlnSummary.sh
+│   │   ├── polishedJsonReport.sh
+│   │   ├── topCorrectionsJsonReport.sh
+│   │   ├── variantsJsonReport.sh
+│   │   └── zipPolishedFasta.sh
+│   ├── P_Fetch
+│   │   ├── adapterRpt.sh
+│   │   ├── getChemistry.sh
+│   │   ├── overviewRpt.sh
+│   │   └── toFofn.sh
+│   ├── P_Filter
+│   │   ├── filter_001of001.sh
+│   │   ├── filter.plsFofn.Scatter.sh
+│   │   ├── filter.rgnFofn.Gather.sh
+│   │   ├── filter.summary.Gather.sh
+│   │   ├── subreads_001of001.sh
+│   │   ├── subreads.subreadFastq.Gather.sh
+│   │   ├── subreads.subreads.Gather.sh
+│   │   └── subreadSummary.sh
+│   ├── P_FilterReports
+│   │   ├── loadingRpt.sh
+│   │   ├── statsRpt.sh
+│   │   └── subreadRpt.sh
+│   ├── P_Mapping
+│   │   ├── align_001of001.sh
+│   │   ├── align.cmpH5.Gather.sh
+│   │   ├── covGFF.sh
+│   │   ├── gff2Bed.sh
+│   │   ├── loadChemistry.sh
+│   │   ├── repack.sh
+│   │   ├── samBam.sh
+│   │   ├── sort.sh
+│   │   └── unmapped.sh
+│   ├── P_MappingReports
+│   │   ├── coverageJsonReport.sh
+│   │   └── statsJsonReport.sh
+│   ├── P_PreAssemblerDagcon
+│   │   ├── filterLongReadsByLength.sh
+│   │   ├── hgapAlignForCorrection_001of001.sh
+│   │   ├── hgapAlignForCorrection.blasrM4Fofn.Gather.sh
+│   │   ├── hgapAlignForCorrection.blasrM4.Gather.sh
+│   │   ├── hgapAlignForCorrection.target.Scatter.sh
+│   │   ├── hgapCorrection_001of001.sh
+│   │   ├── hgapCorrection.fasta.Gather.sh
+│   │   ├── hgapCorrection.fastq.Gather.sh
+│   │   ├── hgapFilterM4_001of001.sh
+│   │   ├── hgapFilterM4.blasrM4Filtered.Gather.sh
+│   │   └── preAssemblerJsonReport.sh
+│   ├── P_ReferenceUploader
+│   │   └── runUploaderUnitig.sh
+│   ├── Workflow.details.dot
+│   ├── Workflow.details.html
+│   ├── Workflow.details.svg
+│   ├── Workflow.profile.html
+│   ├── Workflow.rdf
+│   ├── Workflow.summary.dot
+│   ├── Workflow.summary.html
+│   └── Workflow.summary.svg
+├── filtered_longreads.fasta
+├── filtered_longreads.fasta.cutoff
+├── index.html
+├── input.fofn
+├── input.xml
+├── job.sh
+├── metadata.rdf
+├── seeds.m4
+├── seeds.m4.fofn
+├── settings.xml
+├── smrtpipe.stderr
+├── smrtpipe.stdout
+├── toc.xml
+└── vis.jnlp
+```
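+
+Note that this variant runs Celera Assembler only through unitigging (the `data/` stages stop at `4-unitigger`, and the workflow uses `P_AssembleUnitig`/`runCaToUnitig.sh`) and then polishes the resulting draft. The pre-polish sequences are `data/draft_assembly.fasta`, and `data/unitigs.lst` appears to be the unitig list consumed by the `unitigConsensus` scatter step. A quick before/after comparison, as a sketch:
+```
+# Contig counts before and after polishing
+grep -c '^>' data/draft_assembly.fasta
+zcat data/polished_assembly.fasta.gz | grep -c '^>'
+
+# Unitig IDs used by the consensus step
+head data/unitigs.lst
+```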
+***
+###RS_IsoSeq.1###
+```
+├── data
+│   ├── classifyOut
+│   │   ├── hmmer.chimera.dom
+│   │   ├── hmmer.front_end.dom
+│   │   ├── primers.chimera.fa
+│   │   └── primers.front_end.fa
+│   ├── chemistry_mapping.xml
+│   ├── classify_summary.txt
+│   ├── data.items.json
+│   ├── data.items.pickle
+│   ├── isoseq_draft.fasta
+│   ├── isoseq_flnc.fasta
+│   ├── isoseq_nfl.fasta
+│   ├── isoseq_primer_info.csv
+│   ├── m140515_011259_42142_c100633042550000001823121009121405_s1_p0.1.ccs.h5
+│   ├── m140515_011259_42142_c100633042550000001823121009121405_s1_p0.2.ccs.h5
+│   ├── m140515_011259_42142_c100633042550000001823121009121405_s1_p0.3.ccs.h5
+│   ├── reads_of_insert.fasta
+│   ├── reads_of_insert.fastq
+│   └── slots.pickle
+├── log
+│   ├── P_CCS
+│   │   ├── gatherFastx.log
+│   │   ├── generateCCS_001of001.log
+│   │   ├── generateCCS.ccsSentinel.Gather.log
+│   │   ├── generateCCS.inputPlsFofn.Scatter.log
+│   │   ├── readsOfInsertJsonReport.log
+│   │   └── toReadsOfInsertFofn.log
+│   ├── P_Fetch
+│   │   ├── adapterRpt.log
+│   │   ├── getChemistry.log
+│   │   ├── overviewRpt.log
+│   │   └── toFofn.log
+│   ├── P_IsoSeq
+│   │   └── classify.log
+│   ├── P_IsoSeqReports
+│   │   └── generateClassifyReport.log
+│   ├── master.log
+│   └── smrtpipe.log
+├── movie_metadata
+│   └── m140515_011259_42142_c100633042550000001823121009121405_s1_p0.metadata.xml
+├── results
+│   ├── adapter_observed_insert_length_distribution.png
+│   ├── adapter_observed_insert_length_distribution_thumb.png
+│   ├── filter_reports_adapters.html
+│   ├── filter_reports_adapters.json
+│   ├── fulllength_nonchimeric_readlength_hist.png
+│   ├── fulllength_nonchimeric_readlength_hist_thumb.png
+│   ├── isoseq_classify.html
+│   ├── isoseq_classify.json
+│   ├── overview.html
+│   ├── overview.json
+│   ├── reads_of_insert_report.html
+│   ├── reads_of_insert_report.json
+│   ├── roi_accuracy_hist.png
+│   ├── roi_accuracy_hist_thumb.png
+│   ├── roi_npasses_hist.png
+│   ├── roi_npasses_hist_thumb.png
+│   ├── roi_readlength_hist.png
+│   └── roi_readlength_hist_thumb.png
+├── workflow
+│   ├── P_CCS
+│   │   ├── gatherFastx.sh
+│   │   ├── generateCCS_001of001.sh
+│   │   ├── generateCCS.ccsSentinel.Gather.sh
+│   │   ├── generateCCS.inputPlsFofn.Scatter.sh
+│   │   ├── readsOfInsertJsonReport.sh
+│   │   └── toReadsOfInsertFofn.sh
+│   ├── P_Fetch
+│   │   ├── adapterRpt.sh
+│   │   ├── getChemistry.sh
+│   │   ├── overviewRpt.sh
+│   │   └── toFofn.sh
+│   ├── P_IsoSeq
+│   │   └── classify.sh
+│   ├── P_IsoSeqReports
+│   │   └── generateClassifyReport.sh
+│   ├── Workflow.details.dot
+│   ├── Workflow.details.html
+│   ├── Workflow.details.svg
+│   ├── Workflow.profile.html
+│   ├── Workflow.rdf
+│   ├── Workflow.summary.dot
+│   ├── Workflow.summary.html
+│   └── Workflow.summary.svg
+├── index.html
+├── input.fofn
+├── input.xml
+├── job.sh
+├── metadata.rdf
+├── reads_of_insert.fofn
+├── settings.xml
+├── smrtpipe.stderr
+├── smrtpipe.stdout
+└── toc.xml
+```
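+
+For Iso-Seq jobs the classification outputs under `data/` are the main artifacts: `isoseq_flnc.fasta` holds the full-length non-chimeric reads, `isoseq_nfl.fasta` the non-full-length reads, and `classify_summary.txt` the per-category counts. A minimal look at the results:
+```
+# Per-category read counts from classification
+cat data/classify_summary.txt
+
+# Cross-check: full-length non-chimeric vs. non-full-length reads
+grep -c '^>' data/isoseq_flnc.fasta
+grep -c '^>' data/isoseq_nfl.fasta
+
+# Per-read primer information
+head data/isoseq_primer_info.csv
+```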
+***
+###RS_Long_Amplicon_Analysis.1###
+```
+├── data
+│   ├── amplicon_analysis_chimeras_noise.fasta
+│   ├── amplicon_analysis_chimeras_noise.fastq
+│   ├── amplicon_analysis.csv
+│   ├── amplicon_analysis.fasta
+│   ├── amplicon_analysis.fastq
+│   ├── amplicon_analysis.log
+│   ├── amplicon_analysis_summary.csv
+│   ├── chemistry_mapping.xml
+│   ├── data.items.json
+│   ├── data.items.pickle
+│   └── slots.pickle
+├── log
+│   ├── P_AmpliconAssembly
+│   │   ├── ampliconAssemblyReport.log
+│   │   └── generateAmpliconAssembly.log
+│   ├── P_Fetch
+│   │   ├── adapterRpt.log
+│   │   ├── getChemistry.log
+│   │   ├── overviewRpt.log
+│   │   └── toFofn.log
+│   ├── master.log
+│   └── smrtpipe.log
+├── movie_metadata
+│   └── m140507_025855_42161_c110023601820000001823092705201462_s1_p0.metadata.xml
+├── results
+│   ├── adapter_observed_insert_length_distribution.png
+│   ├── adapter_observed_insert_length_distribution_thumb.png
+│   ├── amplicon_analysis_report.html
+│   ├── amplicon_analysis_report.json
+│   ├── filter_reports_adapters.html
+│   ├── filter_reports_adapters.json
+│   ├── overview.html
+│   └── overview.json
+├── workflow
+│   ├── P_AmpliconAssembly
+│   │   ├── ampliconAssemblyReport.sh
+│   │   └── generateAmpliconAssembly.sh
+│   ├── P_Fetch
+│   │   ├── adapterRpt.sh
+│   │   ├── getChemistry.sh
+│   │   ├── overviewRpt.sh
+│   │   └── toFofn.sh
+│   ├── Workflow.details.dot
+│   ├── Workflow.details.html
+│   ├── Workflow.details.svg
+│   ├── Workflow.profile.html
+│   ├── Workflow.rdf
+│   ├── Workflow.summary.dot
+│   ├── Workflow.summary.html
+│   └── Workflow.summary.svg
+├── index.html
+├── input.fofn
+├── input.xml
+├── job.sh
+├── metadata.rdf
+├── settings.xml
+├── smrtpipe.stderr
+├── smrtpipe.stdout
+└── toc.xml
+```
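+
+Here `data/amplicon_analysis.fasta`/`.fastq` carry the consensus amplicon sequences, with sequences flagged as chimeric or noise split out into `amplicon_analysis_chimeras_noise.*`, and the `.csv` files holding per-sequence statistics. A quick inspection, as a sketch:
+```
+# How many consensus sequences were reported, and how many were set aside
+grep -c '^>' data/amplicon_analysis.fasta
+grep -c '^>' data/amplicon_analysis_chimeras_noise.fasta
+
+# Column headers of the summary table
+head -1 data/amplicon_analysis_summary.csv
+```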
+***
+###RS_Minor_Variant.1###
+```
+├── data
+│   ├── aligned_reads.bam
+│   ├── aligned_reads.bam.bai
+│   ├── aligned_reads.cmp.h5
+│   ├── aligned_reads.sam
+│   ├── alignment_summary.gff
+│   ├── chemistry_mapping.xml
+│   ├── coverage.bed
+│   ├── data.items.json
+│   ├── data.items.pickle
+│   ├── m140205_070439_42194_c010032902559900001800000112311641_s1_p0.1.ccs.fasta
+│   ├── m140205_070439_42194_c010032902559900001800000112311641_s1_p0.1.ccs.fastq
+│   ├── m140205_070439_42194_c010032902559900001800000112311641_s1_p0.1.ccs.h5
+│   ├── m140205_070439_42194_c010032902559900001800000112311641_s1_p0.2.ccs.fasta
+│   ├── m140205_070439_42194_c010032902559900001800000112311641_s1_p0.2.ccs.fastq
+│   ├── m140205_070439_42194_c010032902559900001800000112311641_s1_p0.2.ccs.h5
+│   ├── m140205_070439_42194_c010032902559900001800000112311641_s1_p0.3.ccs.fasta
+│   ├── m140205_070439_42194_c010032902559900001800000112311641_s1_p0.3.ccs.fastq
+│   ├── m140205_070439_42194_c010032902559900001800000112311641_s1_p0.3.ccs.h5
+│   ├── minor_variants.csv
+│   ├── minor_variants.log
+│   ├── minor_variants.vcf.gz
+│   ├── reads_of_insert.fasta
+│   ├── reads_of_insert.fastq
+│   ├── slots.pickle
+│   └── unmappedSubreads.fasta
+├── log
+│   ├── P_CCS
+│   │   ├── gatherFastx.log
+│   │   ├── generateCCS_001of001.log
+│   │   ├── generateCCS.ccsSentinel.Gather.log
+│   │   ├── generateCCS.inputPlsFofn.Scatter.log
+│   │   ├── readsOfInsertJsonReport.log
+│   │   └── toReadsOfInsertFofn.log
+│   ├── P_Fetch
+│   │   ├── adapterRpt.log
+│   │   ├── getChemistry.log
+│   │   ├── overviewRpt.log
+│   │   └── toFofn.log
+│   ├── P_Mapping
+│   │   ├── alignCCS_001of001.log
+│   │   ├── alignCCS.cmpH5.Gather.log
+│   │   ├── alignCCS.plsFofn.Scatter.log
+│   │   ├── covGFF.log
+│   │   ├── gff2Bed.log
+│   │   ├── loadChemistry.log
+│   │   ├── repack.log
+│   │   ├── samBam.log
+│   │   ├── sort.log
+│   │   └── unmapped.log
+│   ├── P_MappingReports
+│   │   ├── coverageJsonReport.log
+│   │   └── statsJsonReport.log
+│   ├── P_MinorVariants
+│   │   ├── callVariants.log
+│   │   └── zipVariants.log
+│   ├── master.log
+│   └── smrtpipe.log
+├── movie_metadata
+│   └── m140205_070439_42194_c010032902559900001800000112311641_s1_p0.metadata.xml
+├── results
+│   ├── adapter_observed_insert_length_distribution.png
+│   ├── adapter_observed_insert_length_distribution_thumb.png
+│   ├── coverage_histogram.png
+│   ├── coverage_histogram_thumb.png
+│   ├── coverage_plot_5fad759ed93706f8450c19d6d106bca7.png
+│   ├── coverage_plot_5fad759ed93706f8450c19d6d106bca7_thumb.png
+│   ├── filter_reports_adapters.html
+│   ├── filter_reports_adapters.json
+│   ├── mapped_readlength_histogram.png
+│   ├── mapped_readlength_histogram_thumb.png
+│   ├── mapped_subread_accuracy_histogram.png
+│   ├── mapped_subread_accuracy_histogram_thumb.png
+│   ├── mapped_subreadlength_histogram.png
+│   ├── mapped_subreadlength_histogram_thumb.png
+│   ├── mapping_coverage_report.html
+│   ├── mapping_coverage_report.json
+│   ├── mapping_stats_report.html
+│   ├── mapping_stats_report.json
+│   ├── overview.html
+│   ├── overview.json
+│   ├── reads_of_insert_report.html
+│   ├── reads_of_insert_report.json
+│   ├── roi_accuracy_hist.png
+│   ├── roi_accuracy_hist_thumb.png
+│   ├── roi_npasses_hist.png
+│   ├── roi_npasses_hist_thumb.png
+│   ├── roi_readlength_hist.png
+│   └── roi_readlength_hist_thumb.png
+├── workflow
+│   ├── P_CCS
+│   │   ├── gatherFastx.sh
+│   │   ├── generateCCS_001of001.sh
+│   │   ├── generateCCS.ccsSentinel.Gather.sh
+│   │   ├── generateCCS.inputPlsFofn.Scatter.sh
+│   │   ├── readsOfInsertJsonReport.sh
+│   │   └── toReadsOfInsertFofn.sh
+│   ├── P_Fetch
+│   │   ├── adapterRpt.sh
+│   │   ├── getChemistry.sh
+│   │   ├── overviewRpt.sh
+│   │   └── toFofn.sh
+│   ├── P_Mapping
+│   │   ├── alignCCS_001of001.sh
+│   │   ├── alignCCS.cmpH5.Gather.sh
+│   │   ├── alignCCS.plsFofn.Scatter.sh
+│   │   ├── covGFF.sh
+│   │   ├── gff2Bed.sh
+│   │   ├── loadChemistry.sh
+│   │   ├── repack.sh
+│   │   ├── samBam.sh
+│   │   ├── sort.sh
+│   │   └── unmapped.sh
+│   ├── P_MappingReports
+│   │   ├── coverageJsonReport.sh
+│   │   └── statsJsonReport.sh
+│   ├── P_MinorVariants
+│   │   ├── callVariants.sh
+│   │   └── zipVariants.sh
+│   ├── PostWorkflow.details.dot
+│   ├── PostWorkflow.details.html
+│   ├── PostWorkflow.details.svg
+│   ├── PostWorkflow.profile.html
+│   ├── PostWorkflow.rdf
+│   ├── PostWorkflow.summary.dot
+│   ├── PostWorkflow.summary.html
+│   ├── PostWorkflow.summary.svg
+│   ├── Workflow.details.dot
+│   ├── Workflow.details.html
+│   ├── Workflow.details.svg
+│   ├── Workflow.profile.html
+│   ├── Workflow.rdf
+│   ├── Workflow.summary.dot
+│   ├── Workflow.summary.html
+│   └── Workflow.summary.svg
+├── index.html
+├── input.fofn
+├── input.xml
+├── job.sh
+├── metadata.rdf
+├── reads_of_insert.fofn
+├── settings.xml
+├── smrtpipe.stderr
+├── smrtpipe.stdout
+├── toc.xml
+└── vis.jnlp
+```
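+
+The minor-variant calls land in `data/minor_variants.vcf.gz` (with a CSV companion and a run log), computed from CCS reads mapped in `data/aligned_reads.*` (note the `alignCCS` tasks under `P_Mapping`). A sketch of inspecting them without any SMRT-specific tooling:
+```
+# Variant records, skipping the VCF header
+zcat data/minor_variants.vcf.gz | grep -v '^#' | head
+
+# Alignment record count (the SAM copy avoids needing samtools)
+grep -vc '^@' data/aligned_reads.sam
+```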
+***
+###RS_Modification_and_Motif_Analysis.1###
+```
+├── data
+│   ├── filtered_regions
+│   │   ├── m140309_032717_42161_c100617192550000001823115007181417_s1_p0.1.rgn.h5
+│   │   ├── m140309_032717_42161_c100617192550000001823115007181417_s1_p0.2.rgn.h5
+│   │   └── m140309_032717_42161_c100617192550000001823115007181417_s1_p0.3.rgn.h5
+│   ├── aligned_reads.bam
+│   ├── aligned_reads.bam.bai
+│   ├── aligned_reads.cmp.h5
+│   ├── aligned_reads.sam
+│   ├── alignment_summary.gff
+│   ├── base_mod_contig_ids.txt
+│   ├── chemistry_mapping.xml
+│   ├── consensus.fasta.gz
+│   ├── consensus.fastq.gz
+│   ├── contig_ids.txt
+│   ├── coverage.bed
+│   ├── data.items.json
+│   ├── data.items.pickle
+│   ├── filtered_regions.fofn
+│   ├── filtered_subreads.fasta
+│   ├── filtered_subreads.fastq
+│   ├── filtered_subread_summary.csv
+│   ├── filtered_summary.csv
+│   ├── modifications.csv.gz
+│   ├── modifications.gff.gz
+│   ├── motifs.gff.gz
+│   ├── motif_summary.csv
+│   ├── slots.pickle
+│   ├── temp_kinetics.h5
+│   ├── unmappedSubreads.fasta
+│   ├── variants.bed
+│   ├── variants.gff.gz
+│   └── variants.vcf
+├── log
+│   ├── P_ConsensusReports
+│   │   ├── topVariantsReport.log
+│   │   └── variantsJsonReport.log
+│   ├── P_Fetch
+│   │   ├── adapterRpt.log
+│   │   ├── getChemistry.log
+│   │   ├── overviewRpt.log
+│   │   └── toFofn.log
+│   ├── P_Filter
+│   │   ├── filter_001of001.log
+│   │   ├── filter.plsFofn.Scatter.log
+│   │   ├── filter.rgnFofn.Gather.log
+│   │   ├── filter.summary.Gather.log
+│   │   ├── subreads_001of001.log
+│   │   ├── subreads.subreadFastq.Gather.log
+│   │   ├── subreads.subreads.Gather.log
+│   │   └── subreadSummary.log
+│   ├── P_FilterReports
+│   │   ├── loadingRpt.log
+│   │   ├── statsRpt.log
+│   │   └── subreadRpt.log
+│   ├── P_GenomicConsensus
+│   │   ├── callVariantsWithConsensus_001of001.log
+│   │   ├── callVariantsWithConsensus.consensusFasta.Gather.log
+│   │   ├── callVariantsWithConsensus.consensusFastq.Gather.log
+│   │   ├── callVariantsWithConsensus.contig_list.Scatter.log
+│   │   ├── callVariantsWithConsensus.variantsGff.Gather.log
+│   │   ├── enrichAlnSummary.log
+│   │   ├── makeBed.log
+│   │   ├── makeVcf.log
+│   │   ├── writeContigList.log
+│   │   └── zipVariants.log
+│   ├── P_Mapping
+│   │   ├── align_001of001.log
+│   │   ├── align.cmpH5.Gather.log
+│   │   ├── covGFF.log
+│   │   ├── gff2Bed.log
+│   │   ├── loadChemistry.log
+│   │   ├── repack.log
+│   │   ├── samBam.log
+│   │   ├── sort.log
+│   │   └── unmapped.log
+│   ├── P_MappingReports
+│   │   ├── coverageJsonReport.log
+│   │   └── statsJsonReport.log
+│   ├── P_ModificationDetection
+│   │   ├── addModificationsToAlignmentSummary.log
+│   │   ├── computeModifications_001of001.log
+│   │   ├── computeModifications.contig_list.Scatter.log
+│   │   ├── computeModifications.modificationsCsv.Gather.log
+│   │   ├── computeModifications.modificationsGff.Gather.log
+│   │   ├── computeModifications.tempKineticsH5.Gather.log
+│   │   ├── copyIpdSummary.log
+│   │   ├── modificationJsonReport.log
+│   │   └── writeContigList.log
+│   ├── P_MotifFinder
+│   │   ├── findMotifs.log
+│   │   ├── makeMotifGff.log
+│   │   └── makeMotifPlot.log
+│   ├── master.log
+│   └── smrtpipe.log
+├── movie_metadata
+│   └── m140309_032717_42161_c100617192550000001823115007181417_s1_p0.metadata.xml
+├── results
+│   ├── adapter_observed_insert_length_distribution.png
+│   ├── adapter_observed_insert_length_distribution_thumb.png
+│   ├── coverage_histogram.png
+│   ├── coverage_histogram_thumb.png
+│   ├── coverage_plot_478d2191a276facb454943e01562e3bb.png
+│   ├── coverage_plot_478d2191a276facb454943e01562e3bb_thumb.png
+│   ├── filtered_subread_report.png
+│   ├── filtered_subread_report_thmb.png
+│   ├── filter_reports_adapters.html
+│   ├── filter_reports_adapters.json
+│   ├── filter_reports_filter_stats.html
+│   ├── filter_reports_filter_stats.json
+│   ├── filter_reports_filter_subread_stats.html
+│   ├── filter_reports_filter_subread_stats.json
+│   ├── filter_reports_loading.html
+│   ├── filter_reports_loading.json
+│   ├── kinetic_detections.png
+│   ├── kinetic_detections_thumb.png
+│   ├── kinetic_histogram.png
+│   ├── kinetic_histogram_thumb.png
+│   ├── kinetics_report.html
+│   ├── kinetics_report.json
+│   ├── mapped_readlength_histogram.png
+│   ├── mapped_readlength_histogram_thumb.png
+│   ├── mapped_subread_accuracy_histogram.png
+│   ├── mapped_subread_accuracy_histogram_thumb.png
+│   ├── mapped_subreadlength_histogram.png
+│   ├── mapped_subreadlength_histogram_thumb.png
+│   ├── mapping_coverage_report.html
+│   ├── mapping_coverage_report.json
+│   ├── mapping_stats_report.html
+│   ├── mapping_stats_report.json
+│   ├── motifHistogram.png
+│   ├── motif_summary.html
+│   ├── motif_summary.xml
+│   ├── overview.html
+│   ├── overview.json
+│   ├── post_filter_readlength_histogram.png
+│   ├── post_filter_readlength_histogram_thumb.png
+│   ├── post_filterread_score_histogram.png
+│   ├── post_filterread_score_histogram_thumb.png
+│   ├── pre_filter_readlength_histogram.png
+│   ├── pre_filter_readlength_histogram_thumb.png
+│   ├── pre_filterread_score_histogram.png
+│   ├── pre_filterread_score_histogram_thumb.png
+│   ├── top_variants_report.html
+│   ├── top_variants_report.json
+│   ├── variants_plot_478d2191a276facb454943e01562e3bb.png
+│   ├── variants_plot_478d2191a276facb454943e01562e3bb_thumb.png
+│   ├── variants_plot_legend.png
+│   ├── variants_report.html
+│   └── variants_report.json
+├── workflow
+│   ├── P_ConsensusReports
+│   │   ├── topVariantsReport.sh
+│   │   └── variantsJsonReport.sh
+│   ├── P_Fetch
+│   │   ├── adapterRpt.sh
+│   │   ├── getChemistry.sh
+│   │   ├── overviewRpt.sh
+│   │   └── toFofn.sh
+│   ├── P_Filter
+│   │   ├── filter_001of001.sh
+│   │   ├── filter.plsFofn.Scatter.sh
+│   │   ├── filter.rgnFofn.Gather.sh
+│   │   ├── filter.summary.Gather.sh
+│   │   ├── subreads_001of001.sh
+│   │   ├── subreads.subreadFastq.Gather.sh
+│   │   ├── subreads.subreads.Gather.sh
+│   │   └── subreadSummary.sh
+│   ├── P_FilterReports
+│   │   ├── loadingRpt.sh
+│   │   ├── statsRpt.sh
+│   │   └── subreadRpt.sh
+│   ├── P_GenomicConsensus
+│   │   ├── callVariantsWithConsensus_001of001.sh
+│   │   ├── callVariantsWithConsensus.consensusFasta.Gather.sh
+│   │   ├── callVariantsWithConsensus.consensusFastq.Gather.sh
+│   │   ├── callVariantsWithConsensus.contig_list.Scatter.sh
+│   │   ├── callVariantsWithConsensus.variantsGff.Gather.sh
+│   │   ├── enrichAlnSummary.sh
+│   │   ├── makeBed.sh
+│   │   ├── makeVcf.sh
+│   │   ├── writeContigList.sh
+│   │   └── zipVariants.sh
+│   ├── P_Mapping
+│   │   ├── align_001of001.sh
+│   │   ├── align.cmpH5.Gather.sh
+│   │   ├── covGFF.sh
+│   │   ├── gff2Bed.sh
+│   │   ├── loadChemistry.sh
+│   │   ├── repack.sh
+│   │   ├── samBam.sh
+│   │   ├── sort.sh
+│   │   └── unmapped.sh
+│   ├── P_MappingReports
+│   │   ├── coverageJsonReport.sh
+│   │   └── statsJsonReport.sh
+│   ├── P_ModificationDetection
+│   │   ├── addModificationsToAlignmentSummary.sh
+│   │   ├── computeModifications_001of001.sh
+│   │   ├── computeModifications.contig_list.Scatter.sh
+│   │   ├── computeModifications.modificationsCsv.Gather.sh
+│   │   ├── computeModifications.modificationsGff.Gather.sh
+│   │   ├── computeModifications.tempKineticsH5.Gather.sh
+│   │   ├── copyIpdSummary.sh
+│   │   ├── modificationJsonReport.sh
+│   │   └── writeContigList.sh
+│   ├── P_MotifFinder
+│   │   ├── findMotifs.sh
+│   │   ├── makeMotifGff.sh
+│   │   └── makeMotifPlot.sh
+│   ├── PostWorkflow.details.dot
+│   ├── PostWorkflow.details.html
+│   ├── PostWorkflow.details.svg
+│   ├── PostWorkflow.profile.html
+│   ├── PostWorkflow.rdf
+│   ├── PostWorkflow.summary.dot
+│   ├── PostWorkflow.summary.html
+│   ├── PostWorkflow.summary.svg
+│   ├── Workflow.details.dot
+│   ├── Workflow.details.html
+│   ├── Workflow.details.svg
+│   ├── Workflow.profile.html
+│   ├── Workflow.rdf
+│   ├── Workflow.summary.dot
+│   ├── Workflow.summary.html
+│   └── Workflow.summary.svg
+├── index.html
+├── input.fofn
+├── input.xml
+├── job.sh
+├── metadata.rdf
+├── settings.xml
+├── smrtpipe.stderr
+├── smrtpipe.stdout
+├── toc.xml
+└── vis.jnlp
+```
+***
+###RS_Modification_Detection.1###
+```
+├── data
+│   ├── filtered_regions
+│   │   ├── m140426_093105_42194_c110042112550000001823110806241434_s1_p0.1.rgn.h5
+│   │   ├── m140426_093105_42194_c110042112550000001823110806241434_s1_p0.2.rgn.h5
+│   │   └── m140426_093105_42194_c110042112550000001823110806241434_s1_p0.3.rgn.h5
+│   ├── aligned_reads.bam
+│   ├── aligned_reads.bam.bai
+│   ├── aligned_reads.cmp.h5
+│   ├── aligned_reads.sam
+│   ├── alignment_summary.gff
+│   ├── base_mod_contig_ids.txt
+│   ├── chemistry_mapping.xml
+│   ├── consensus.fasta.gz
+│   ├── consensus.fastq.gz
+│   ├── contig_ids.txt
+│   ├── coverage.bed
+│   ├── data.items.json
+│   ├── data.items.pickle
+│   ├── filtered_regions.fofn
+│   ├── filtered_subreads.fasta
+│   ├── filtered_subreads.fastq
+│   ├── filtered_subread_summary.csv
+│   ├── filtered_summary.csv
+│   ├── modifications.csv.gz
+│   ├── modifications.gff.gz
+│   ├── slots.pickle
+│   ├── temp_kinetics.h5
+│   ├── unmappedSubreads.fasta
+│   ├── variants.bed
+│   ├── variants.gff.gz
+│   └── variants.vcf
+├── log
+│   ├── P_ConsensusReports
+│   │   ├── topVariantsReport.log
+│   │   └── variantsJsonReport.log
+│   ├── P_Fetch
+│   │   ├── adapterRpt.log
+│   │   ├── getChemistry.log
+│   │   ├── overviewRpt.log
+│   │   └── toFofn.log
+│   ├── P_Filter
+│   │   ├── filter_001of001.log
+│   │   ├── filter.plsFofn.Scatter.log
+│   │   ├── filter.rgnFofn.Gather.log
+│   │   ├── filter.summary.Gather.log
+│   │   ├── subreads_001of001.log
+│   │   ├── subreads.subreadFastq.Gather.log
+│   │   ├── subreads.subreads.Gather.log
+│   │   └── subreadSummary.log
+│   ├── P_FilterReports
+│   │   ├── loadingRpt.log
+│   │   ├── statsRpt.log
+│   │   └── subreadRpt.log
+│   ├── P_GenomicConsensus
+│   │   ├── callVariantsWithConsensus_001of001.log
+│   │   ├── callVariantsWithConsensus.consensusFasta.Gather.log
+│   │   ├── callVariantsWithConsensus.consensusFastq.Gather.log
+│   │   ├── callVariantsWithConsensus.contig_list.Scatter.log
+│   │   ├── callVariantsWithConsensus.variantsGff.Gather.log
+│   │   ├── enrichAlnSummary.log
+│   │   ├── makeBed.log
+│   │   ├── makeVcf.log
+│   │   ├── writeContigList.log
+│   │   └── zipVariants.log
+│   ├── P_Mapping
+│   │   ├── align_001of001.log
+│   │   ├── align.cmpH5.Gather.log
+│   │   ├── covGFF.log
+│   │   ├── gff2Bed.log
+│   │   ├── loadChemistry.log
+│   │   ├── repack.log
+│   │   ├── samBam.log
+│   │   ├── sort.log
+│   │   └── unmapped.log
+│   ├── P_MappingReports
+│   │   ├── coverageJsonReport.log
+│   │   └── statsJsonReport.log
+│   ├── P_ModificationDetection
+│   │   ├── addModificationsToAlignmentSummary.log
+│   │   ├── computeModifications_001of001.log
+│   │   ├── computeModifications.contig_list.Scatter.log
+│   │   ├── computeModifications.modificationsCsv.Gather.log
+│   │   ├── computeModifications.modificationsGff.Gather.log
+│   │   ├── computeModifications.tempKineticsH5.Gather.log
+│   │   ├── copyIpdSummary.log
+│   │   ├── modificationJsonReport.log
+│   │   └── writeContigList.log
+│   ├── master.log
+│   └── smrtpipe.log
+├── movie_metadata
+│   └── m140426_093105_42194_c110042112550000001823110806241434_s1_p0.metadata.xml
+├── results
+│   ├── adapter_observed_insert_length_distribution.png
+│   ├── adapter_observed_insert_length_distribution_thumb.png
+│   ├── coverage_histogram.png
+│   ├── coverage_histogram_thumb.png
+│   ├── coverage_plot_c97c20a06416e1ec126e2e280fb0a963.png
+│   ├── coverage_plot_c97c20a06416e1ec126e2e280fb0a963_thumb.png
+│   ├── filtered_subread_report.png
+│   ├── filtered_subread_report_thmb.png
+│   ├── filter_reports_adapters.html
+│   ├── filter_reports_adapters.json
+│   ├── filter_reports_filter_stats.html
+│   ├── filter_reports_filter_stats.json
+│   ├── filter_reports_filter_subread_stats.html
+│   ├── filter_reports_filter_subread_stats.json
+│   ├── filter_reports_loading.html
+│   ├── filter_reports_loading.json
+│   ├── kinetic_detections.png
+│   ├── kinetic_detections_thumb.png
+│   ├── kinetic_histogram.png
+│   ├── kinetic_histogram_thumb.png
+│   ├── kinetics_report.html
+│   ├── kinetics_report.json
+│   ├── mapped_readlength_histogram.png
+│   ├── mapped_readlength_histogram_thumb.png
+│   ├── mapped_subread_accuracy_histogram.png
+│   ├── mapped_subread_accuracy_histogram_thumb.png
+│   ├── mapped_subreadlength_histogram.png
+│   ├── mapped_subreadlength_histogram_thumb.png
+│   ├── mapping_coverage_report.html
+│   ├── mapping_coverage_report.json
+│   ├── mapping_stats_report.html
+│   ├── mapping_stats_report.json
+│   ├── overview.html
+│   ├── overview.json
+│   ├── post_filter_readlength_histogram.png
+│   ├── post_filter_readlength_histogram_thumb.png
+│   ├── post_filterread_score_histogram.png
+│   ├── post_filterread_score_histogram_thumb.png
+│   ├── pre_filter_readlength_histogram.png
+│   ├── pre_filter_readlength_histogram_thumb.png
+│   ├── pre_filterread_score_histogram.png
+│   ├── pre_filterread_score_histogram_thumb.png
+│   ├── top_variants_report.html
+│   ├── top_variants_report.json
+│   ├── variants_plot_c97c20a06416e1ec126e2e280fb0a963.png
+│   ├── variants_plot_c97c20a06416e1ec126e2e280fb0a963_thumb.png
+│   ├── variants_plot_legend.png
+│   ├── variants_report.html
+│   └── variants_report.json
+├── workflow
+│   ├── P_ConsensusReports
+│   │   ├── topVariantsReport.sh
+│   │   └── variantsJsonReport.sh
+│   ├── P_Fetch
+│   │   ├── adapterRpt.sh
+│   │   ├── getChemistry.sh
+│   │   ├── overviewRpt.sh
+│   │   └── toFofn.sh
+│   ├── P_Filter
+│   │   ├── filter_001of001.sh
+│   │   ├── filter.plsFofn.Scatter.sh
+│   │   ├── filter.rgnFofn.Gather.sh
+│   │   ├── filter.summary.Gather.sh
+│   │   ├── subreads_001of001.sh
+│   │   ├── subreads.subreadFastq.Gather.sh
+│   │   ├── subreads.subreads.Gather.sh
+│   │   └── subreadSummary.sh
+│   ├── P_FilterReports
+│   │   ├── loadingRpt.sh
+│   │   ├── statsRpt.sh
+│   │   └── subreadRpt.sh
+│   ├── P_GenomicConsensus
+│   │   ├── callVariantsWithConsensus_001of001.sh
+│   │   ├── callVariantsWithConsensus.consensusFasta.Gather.sh
+│   │   ├── callVariantsWithConsensus.consensusFastq.Gather.sh
+│   │   ├── callVariantsWithConsensus.contig_list.Scatter.sh
+│   │   ├── callVariantsWithConsensus.variantsGff.Gather.sh
+│   │   ├── enrichAlnSummary.sh
+│   │   ├── makeBed.sh
+│   │   ├── makeVcf.sh
+│   │   ├── writeContigList.sh
+│   │   └── zipVariants.sh
+│   ├── P_Mapping
+│   │   ├── align_001of001.sh
+│   │   ├── align.cmpH5.Gather.sh
+│   │   ├── covGFF.sh
+│   │   ├── gff2Bed.sh
+│   │   ├── loadChemistry.sh
+│   │   ├── repack.sh
+│   │   ├── samBam.sh
+│   │   ├── sort.sh
+│   │   └── unmapped.sh
+│   ├── P_MappingReports
+│   │   ├── coverageJsonReport.sh
+│   │   └── statsJsonReport.sh
+│   ├── P_ModificationDetection
+│   │   ├── addModificationsToAlignmentSummary.sh
+│   │   ├── computeModifications_001of001.sh
+│   │   ├── computeModifications.contig_list.Scatter.sh
+│   │   ├── computeModifications.modificationsCsv.Gather.sh
+│   │   ├── computeModifications.modificationsGff.Gather.sh
+│   │   ├── computeModifications.tempKineticsH5.Gather.sh
+│   │   ├── copyIpdSummary.sh
+│   │   ├── modificationJsonReport.sh
+│   │   └── writeContigList.sh
+│   ├── PostWorkflow.details.dot
+│   ├── PostWorkflow.details.html
+│   ├── PostWorkflow.details.svg
+│   ├── PostWorkflow.profile.html
+│   ├── PostWorkflow.rdf
+│   ├── PostWorkflow.summary.dot
+│   ├── PostWorkflow.summary.html
+│   ├── PostWorkflow.summary.svg
+│   ├── Workflow.details.dot
+│   ├── Workflow.details.html
+│   ├── Workflow.details.svg
+│   ├── Workflow.profile.html
+│   ├── Workflow.rdf
+│   ├── Workflow.summary.dot
+│   ├── Workflow.summary.html
+│   └── Workflow.summary.svg
+├── index.html
+├── input.fofn
+├── input.xml
+├── job.sh
+├── metadata.rdf
+├── settings.xml
+├── smrtpipe.stderr
+├── smrtpipe.stdout
+├── toc.xml
+└── vis.jnlp
+```
+***
+###RS_ReadsOfInsert.1###
+```
+├── data
+│   ├── barcoded-fastqs.tgz
+│   ├── barcode.fofn
+│   ├── chemistry_mapping.xml
+│   ├── data.items.json
+│   ├── data.items.pickle
+│   ├── m140330_205701_42160_c100646422550000001823121309101414_s1_p0.1.bc.h5
+│   ├── m140330_205701_42160_c100646422550000001823121309101414_s1_p0.1.ccs.fasta
+│   ├── m140330_205701_42160_c100646422550000001823121309101414_s1_p0.1.ccs.fastq
+│   ├── m140330_205701_42160_c100646422550000001823121309101414_s1_p0.1.ccs.h5
+│   ├── m140330_205701_42160_c100646422550000001823121309101414_s1_p0.2.bc.h5
+│   ├── m140330_205701_42160_c100646422550000001823121309101414_s1_p0.2.ccs.fasta
+│   ├── m140330_205701_42160_c100646422550000001823121309101414_s1_p0.2.ccs.fastq
+│   ├── m140330_205701_42160_c100646422550000001823121309101414_s1_p0.2.ccs.h5
+│   ├── m140330_205701_42160_c100646422550000001823121309101414_s1_p0.3.bc.h5
+│   ├── m140330_205701_42160_c100646422550000001823121309101414_s1_p0.3.ccs.fasta
+│   ├── m140330_205701_42160_c100646422550000001823121309101414_s1_p0.3.ccs.fastq
+│   ├── m140330_205701_42160_c100646422550000001823121309101414_s1_p0.3.ccs.h5
+│   ├── reads_of_insert.fasta
+│   ├── reads_of_insert.fastq
+│   └── slots.pickle
+├── log
+│   ├── P_Barcode
+│   │   ├── barcodeJsonReport.log
+│   │   ├── emitFastqs.log
+│   │   ├── labelZMWs_001of001.log
+│   │   └── labelZMWs.barcodeFofn.Gather.log
+│   ├── P_CCS
+│   │   ├── gatherFastx.log
+│   │   ├── generateCCS_001of001.log
+│   │   ├── generateCCS.ccsSentinel.Gather.log
+│   │   ├── generateCCS.inputPlsFofn.Scatter.log
+│   │   ├── readsOfInsertJsonReport.log
+│   │   └── toReadsOfInsertFofn.log
+│   ├── P_Fetch
+│   │   ├── adapterRpt.log
+│   │   ├── getChemistry.log
+│   │   ├── overviewRpt.log
+│   │   └── toFofn.log
+│   ├── master.log
+│   └── smrtpipe.log
+├── movie_metadata
+│   └── m140330_205701_42160_c100646422550000001823121309101414_s1_p0.metadata.xml
+├── results
+│   ├── adapter_observed_insert_length_distribution.png
+│   ├── adapter_observed_insert_length_distribution_thumb.png
+│   ├── barcode_report.html
+│   ├── barcode_report.json
+│   ├── filter_reports_adapters.html
+│   ├── filter_reports_adapters.json
+│   ├── overview.html
+│   ├── overview.json
+│   ├── reads_of_insert_report.html
+│   ├── reads_of_insert_report.json
+│   ├── roi_accuracy_hist.png
+│   ├── roi_accuracy_hist_thumb.png
+│   ├── roi_npasses_hist.png
+│   ├── roi_npasses_hist_thumb.png
+│   ├── roi_readlength_hist.png
+│   └── roi_readlength_hist_thumb.png
+├── workflow
+│   ├── P_Barcode
+│   │   ├── barcodeJsonReport.sh
+│   │   ├── emitFastqs.sh
+│   │   ├── labelZMWs_001of001.sh
+│   │   └── labelZMWs.barcodeFofn.Gather.sh
+│   ├── P_CCS
+│   │   ├── gatherFastx.sh
+│   │   ├── generateCCS_001of001.sh
+│   │   ├── generateCCS.ccsSentinel.Gather.sh
+│   │   ├── generateCCS.inputPlsFofn.Scatter.sh
+│   │   ├── readsOfInsertJsonReport.sh
+│   │   └── toReadsOfInsertFofn.sh
+│   ├── P_Fetch
+│   │   ├── adapterRpt.sh
+│   │   ├── getChemistry.sh
+│   │   ├── overviewRpt.sh
+│   │   └── toFofn.sh
+│   ├── Workflow.details.dot
+│   ├── Workflow.details.html
+│   ├── Workflow.details.svg
+│   ├── Workflow.profile.html
+│   ├── Workflow.rdf
+│   ├── Workflow.summary.dot
+│   ├── Workflow.summary.html
+│   └── Workflow.summary.svg
+├── index.html
+├── input.fofn
+├── input.xml
+├── job.sh
+├── metadata.rdf
+├── reads_of_insert.fofn
+├── settings.xml
+├── smrtpipe.stderr
+├── smrtpipe.stdout
+└── toc.xml
+```
+***
+###RS_Resequencing.1###
+```
+├── data
+│   ├── filtered_regions
+│   │   ├── m140515_083939_42161_c100569172550000001823095301191501_s1_p0.1.rgn.h5
+│   │   ├── m140515_083939_42161_c100569172550000001823095301191501_s1_p0.2.rgn.h5
+│   │   └── m140515_083939_42161_c100569172550000001823095301191501_s1_p0.3.rgn.h5
+│   ├── aligned_reads.bam
+│   ├── aligned_reads.bam.bai
+│   ├── aligned_reads.cmp.h5
+│   ├── aligned_reads.sam
+│   ├── alignment_summary.gff
+│   ├── chemistry_mapping.xml
+│   ├── consensus.fasta.gz
+│   ├── consensus.fastq.gz
+│   ├── contig_ids.txt
+│   ├── coverage.bed
+│   ├── data.items.json
+│   ├── data.items.pickle
+│   ├── filtered_regions.fofn
+│   ├── filtered_subreads.fasta
+│   ├── filtered_subreads.fastq
+│   ├── filtered_subread_summary.csv
+│   ├── filtered_summary.csv
+│   ├── slots.pickle
+│   ├── unmappedSubreads.fasta
+│   ├── variants.bed
+│   ├── variants.gff.gz
+│   └── variants.vcf
+├── log
+│   ├── P_ConsensusReports
+│   │   ├── topVariantsReport.log
+│   │   └── variantsJsonReport.log
+│   ├── P_Fetch
+│   │   ├── adapterRpt.log
+│   │   ├── getChemistry.log
+│   │   ├── overviewRpt.log
+│   │   └── toFofn.log
+│   ├── P_Filter
+│   │   ├── filter_001of001.log
+│   │   ├── filter.plsFofn.Scatter.log
+│   │   ├── filter.rgnFofn.Gather.log
+│   │   ├── filter.summary.Gather.log
+│   │   ├── subreads_001of001.log
+│   │   ├── subreads.subreadFastq.Gather.log
+│   │   ├── subreads.subreads.Gather.log
+│   │   └── subreadSummary.log
+│   ├── P_FilterReports
+│   │   ├── loadingRpt.log
+│   │   ├── statsRpt.log
+│   │   └── subreadRpt.log
+│   ├── P_GenomicConsensus
+│   │   ├── callVariantsWithConsensus_001of001.log
+│   │   ├── callVariantsWithConsensus.consensusFasta.Gather.log
+│   │   ├── callVariantsWithConsensus.consensusFastq.Gather.log
+│   │   ├── callVariantsWithConsensus.contig_list.Scatter.log
+│   │   ├── callVariantsWithConsensus.variantsGff.Gather.log
+│   │   ├── enrichAlnSummary.log
+│   │   ├── makeBed.log
+│   │   ├── makeVcf.log
+│   │   ├── writeContigList.log
+│   │   └── zipVariants.log
+│   ├── P_Mapping
+│   │   ├── align_001of001.log
+│   │   ├── align.cmpH5.Gather.log
+│   │   ├── covGFF.log
+│   │   ├── gff2Bed.log
+│   │   ├── loadChemistry.log
+│   │   ├── repack.log
+│   │   ├── samBam.log
+│   │   ├── sort.log
+│   │   └── unmapped.log
+│   ├── P_MappingReports
+│   │   ├── coverageJsonReport.log
+│   │   └── statsJsonReport.log
+│   ├── master.log
+│   └── smrtpipe.log
+├── movie_metadata
+│   └── m140515_083939_42161_c100569172550000001823095301191501_s1_p0.metadata.xml
+├── results
+│   ├── adapter_observed_insert_length_distribution.png
+│   ├── adapter_observed_insert_length_distribution_thumb.png
+│   ├── coverage_histogram.png
+│   ├── coverage_histogram_thumb.png
+│   ├── coverage_plot_c4ca4238a0b923820dcc509a6f75849b.png
+│   ├── coverage_plot_c4ca4238a0b923820dcc509a6f75849b_thumb.png
+│   ├── filtered_subread_report.png
+│   ├── filtered_subread_report_thmb.png
+│   ├── filter_reports_adapters.html
+│   ├── filter_reports_adapters.json
+│   ├── filter_reports_filter_stats.html
+│   ├── filter_reports_filter_stats.json
+│   ├── filter_reports_filter_subread_stats.html
+│   ├── filter_reports_filter_subread_stats.json
+│   ├── filter_reports_loading.html
+│   ├── filter_reports_loading.json
+│   ├── mapped_readlength_histogram.png
+│   ├── mapped_readlength_histogram_thumb.png
+│   ├── mapped_subread_accuracy_histogram.png
+│   ├── mapped_subread_accuracy_histogram_thumb.png
+│   ├── mapped_subreadlength_histogram.png
+│   ├── mapped_subreadlength_histogram_thumb.png
+│   ├── mapping_coverage_report.html
+│   ├── mapping_coverage_report.json
+│   ├── mapping_stats_report.html
+│   ├── mapping_stats_report.json
+│   ├── overview.html
+│   ├── overview.json
+│   ├── post_filter_readlength_histogram.png
+│   ├── post_filter_readlength_histogram_thumb.png
+│   ├── post_filterread_score_histogram.png
+│   ├── post_filterread_score_histogram_thumb.png
+│   ├── pre_filter_readlength_histogram.png
+│   ├── pre_filter_readlength_histogram_thumb.png
+│   ├── pre_filterread_score_histogram.png
+│   ├── pre_filterread_score_histogram_thumb.png
+│   ├── top_variants_report.html
+│   ├── top_variants_report.json
+│   ├── variants_plot_c4ca4238a0b923820dcc509a6f75849b.png
+│   ├── variants_plot_c4ca4238a0b923820dcc509a6f75849b_thumb.png
+│   ├── variants_plot_legend.png
+│   ├── variants_report.html
+│   └── variants_report.json
+├── workflow
+│   ├── P_ConsensusReports
+│   │   ├── topVariantsReport.sh
+│   │   └── variantsJsonReport.sh
+│   ├── P_Fetch
+│   │   ├── adapterRpt.sh
+│   │   ├── getChemistry.sh
+│   │   ├── overviewRpt.sh
+│   │   └── toFofn.sh
+│   ├── P_Filter
+│   │   ├── filter_001of001.sh
+│   │   ├── filter.plsFofn.Scatter.sh
+│   │   ├── filter.rgnFofn.Gather.sh
+│   │   ├── filter.summary.Gather.sh
+│   │   ├── subreads_001of001.sh
+│   │   ├── subreads.subreadFastq.Gather.sh
+│   │   ├── subreads.subreads.Gather.sh
+│   │   └── subreadSummary.sh
+│   ├── P_FilterReports
+│   │   ├── loadingRpt.sh
+│   │   ├── statsRpt.sh
+│   │   └── subreadRpt.sh
+│   ├── P_GenomicConsensus
+│   │   ├── callVariantsWithConsensus_001of001.sh
+│   │   ├── callVariantsWithConsensus.consensusFasta.Gather.sh
+│   │   ├── callVariantsWithConsensus.consensusFastq.Gather.sh
+│   │   ├── callVariantsWithConsensus.contig_list.Scatter.sh
+│   │   ├── callVariantsWithConsensus.variantsGff.Gather.sh
+│   │   ├── enrichAlnSummary.sh
+│   │   ├── makeBed.sh
+│   │   ├── makeVcf.sh
+│   │   ├── writeContigList.sh
+│   │   └── zipVariants.sh
+│   ├── P_Mapping
+│   │   ├── align_001of001.sh
+│   │   ├── align.cmpH5.Gather.sh
+│   │   ├── covGFF.sh
+│   │   ├── gff2Bed.sh
+│   │   ├── loadChemistry.sh
+│   │   ├── repack.sh
+│   │   ├── samBam.sh
+│   │   ├── sort.sh
+│   │   └── unmapped.sh
+│   ├── P_MappingReports
+│   │   ├── coverageJsonReport.sh
+│   │   └── statsJsonReport.sh
+│   ├── PostWorkflow.details.dot
+│   ├── PostWorkflow.details.html
+│   ├── PostWorkflow.details.svg
+│   ├── PostWorkflow.profile.html
+│   ├── PostWorkflow.rdf
+│   ├── PostWorkflow.summary.dot
+│   ├── PostWorkflow.summary.html
+│   ├── PostWorkflow.summary.svg
+│   ├── Workflow.details.dot
+│   ├── Workflow.details.html
+│   ├── Workflow.details.svg
+│   ├── Workflow.profile.html
+│   ├── Workflow.rdf
+│   ├── Workflow.summary.dot
+│   ├── Workflow.summary.html
+│   └── Workflow.summary.svg
+├── index.html
+├── input.fofn
+├── input.xml
+├── job.sh
+├── metadata.rdf
+├── settings.xml
+├── smrtpipe.stderr
+├── smrtpipe.stdout
+├── toc.xml
+└── vis.jnlp
+```
+***
+###RS_Subreads.1###
+```
+├── data
+│   ├── filtered_regions
+│   │   ├── m140515_021659_42161_c100569172550000001823095301191500_s1_p0.1.rgn.h5
+│   │   ├── m140515_021659_42161_c100569172550000001823095301191500_s1_p0.2.rgn.h5
+│   │   └── m140515_021659_42161_c100569172550000001823095301191500_s1_p0.3.rgn.h5
+│   ├── chemistry_mapping.xml
+│   ├── data.items.json
+│   ├── data.items.pickle
+│   ├── filtered_regions.fofn
+│   ├── filtered_subreads.fasta
+│   ├── filtered_subreads.fastq
+│   ├── filtered_subread_summary.csv
+│   ├── filtered_summary.csv
+│   └── slots.pickle
+├── log
+│   ├── P_Fetch
+│   │   ├── adapterRpt.log
+│   │   ├── getChemistry.log
+│   │   ├── overviewRpt.log
+│   │   └── toFofn.log
+│   ├── P_Filter
+│   │   ├── filter_001of001.log
+│   │   ├── filter.rgnFofn.Gather.log
+│   │   ├── filter.summary.Gather.log
+│   │   ├── subreads_001of001.log
+│   │   ├── subreads.plsFofn.Scatter.log
+│   │   ├── subreads.subreadFastq.Gather.log
+│   │   ├── subreads.subreads.Gather.log
+│   │   └── subreadSummary.log
+│   ├── P_FilterReports
+│   │   ├── loadingRpt.log
+│   │   ├── statsRpt.log
+│   │   └── subreadRpt.log
+│   ├── master.log
+│   └── smrtpipe.log
+├── movie_metadata
+│   └── m140515_021659_42161_c100569172550000001823095301191500_s1_p0.metadata.xml
+├── results
+│   ├── adapter_observed_insert_length_distribution.png
+│   ├── adapter_observed_insert_length_distribution_thumb.png
+│   ├── filtered_subread_report.png
+│   ├── filtered_subread_report_thmb.png
+│   ├── filter_reports_adapters.html
+│   ├── filter_reports_adapters.json
+│   ├── filter_reports_filter_stats.html
+│   ├── filter_reports_filter_stats.json
+│   ├── filter_reports_filter_subread_stats.html
+│   ├── filter_reports_filter_subread_stats.json
+│   ├── filter_reports_loading.html
+│   ├── filter_reports_loading.json
+│   ├── overview.html
+│   ├── overview.json
+│   ├── post_filter_readlength_histogram.png
+│   ├── post_filter_readlength_histogram_thumb.png
+│   ├── post_filterread_score_histogram.png
+│   ├── post_filterread_score_histogram_thumb.png
+│   ├── pre_filter_readlength_histogram.png
+│   ├── pre_filter_readlength_histogram_thumb.png
+│   ├── pre_filterread_score_histogram.png
+│   └── pre_filterread_score_histogram_thumb.png
+├── workflow
+│   ├── P_Fetch
+│   │   ├── adapterRpt.sh
+│   │   ├── getChemistry.sh
+│   │   ├── overviewRpt.sh
+│   │   └── toFofn.sh
+│   ├── P_Filter
+│   │   ├── filter_001of001.sh
+│   │   ├── filter.rgnFofn.Gather.sh
+│   │   ├── filter.summary.Gather.sh
+│   │   ├── subreads_001of001.sh
+│   │   ├── subreads.plsFofn.Scatter.sh
+│   │   ├── subreads.subreadFastq.Gather.sh
+│   │   ├── subreads.subreads.Gather.sh
+│   │   └── subreadSummary.sh
+│   ├── P_FilterReports
+│   │   ├── loadingRpt.sh
+│   │   ├── statsRpt.sh
+│   │   └── subreadRpt.sh
+│   ├── Workflow.details.dot
+│   ├── Workflow.details.html
+│   ├── Workflow.details.svg
+│   ├── Workflow.profile.html
+│   ├── Workflow.rdf
+│   ├── Workflow.summary.dot
+│   ├── Workflow.summary.html
+│   └── Workflow.summary.svg
+├── index.html
+├── input.fofn
+├── input.xml
+├── job.sh
+├── metadata.rdf
+├── settings.xml
+├── smrtpipe.stderr
+├── smrtpipe.stdout
+└── toc.xml
+```
+***
\ No newline at end of file
diff --git a/docs/Official-Documentation.md b/docs/Official-Documentation.md
new file mode 100644
index 0000000..64cf3ea
--- /dev/null
+++ b/docs/Official-Documentation.md
@@ -0,0 +1,38 @@
+## Release Documentation - Release v2.2.0
+* [[SMRT Analysis Release Notes v2.2.0]]
+* [[SMRT Analysis Release Notes v2.2.0.p1]]
+* [[SMRT Analysis Release Notes v2.2.0.p2]]
+* [[SMRT Analysis Release Notes v2.2.0.p3]]
+* [[SMRT Analysis Software Installation v2.2.0]]
+* [[Secondary Analysis Web Services API v2.2.0]]
+* [[SMRT Pipe Reference Guide v2.2.0]] 
+
+## Release Documentation - Release v2.1
+* [[SMRT Analysis Release Notes v2.1]]
+* [[SMRT Analysis Software Installation v2.1]]
+* [[Secondary Analysis Web Services API v2.1]]
+* [[SMRT Pipe Reference Guide v2.1]] 
+
+## Release Documentation - Release v2.0.1
+* [[SMRT Analysis Release Notes v2.0.1]]
+* [[SMRT Analysis Software Installation v2.0.1]]
+* [Running SMRT Analysis on Amazon](https://s3.amazonaws.com/files.pacb.com/software/smrtanalysis/2.0.1/doc/Running SMRT Analysis on Amazon.pdf)  **(PDF)**
+
+## Release Documentation - Release v2.0
+* [[SMRT Analysis Release Notes v2.0]]
+* [[SMRT Analysis Software Installation v2.0]]
+* [Running SMRT Analysis on Amazon](https://s3.amazonaws.com/files.pacb.com/software/smrtanalysis/2.0.0/doc/Running SMRT Analysis on Amazon.pdf)  **(PDF)**
+
+## User Documentation - Release v2.0
+* [[SMRT Pipe Reference Guide v2.0]] 
+* [SMRT Portal Help v2.0](https://s3.amazonaws.com/files.pacb.com/software/smrtanalysis/2.0.0/doc/smrtportal/help/SMRT_Portal_csh.htm) 
+* [SMRT View Help v2.0](https://s3.amazonaws.com/files.pacb.com/software/smrtanalysis/2.0.0/doc/smrtview/help/SMRT_View.htm) 
+
+
+## File formats - Release v2.0
+* [bas.h5 Reference Guide](https://s3.amazonaws.com/files.pacb.com/software/instrument/2.0.0/bas.h5 Reference Guide.pdf)   **(PDF)**
+
+
+## Web services - Release v2.0
+* [[Secondary Analysis Web Services API v2.0]]
+* [[Instrument Control Web Services API]]
\ No newline at end of file
diff --git a/docs/RS-HGAP-Assembly-protocol-fails-in-SMRT-Portal.md b/docs/RS-HGAP-Assembly-protocol-fails-in-SMRT-Portal.md
new file mode 100644
index 0000000..c0bd6aa
--- /dev/null
+++ b/docs/RS-HGAP-Assembly-protocol-fails-in-SMRT-Portal.md
@@ -0,0 +1,141 @@
+### Step 1:  Check smrtpipe.log
+When any SMRT Portal job fails, you can troubleshoot the errors by looking at the `$SEYMOUR_HOME/common/jobs/<job_id_prefix>/<job_id>/log/smrtpipe.log` file. For more verbose logging, look at `$SEYMOUR_HOME/common/jobs/<job_id_prefix>/<job_id>/log/master.log`.
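+
+A minimal sketch for surfacing the most recent errors from a failed job's log (the job ID below is a placeholder):
+```
+JOB=$SEYMOUR_HOME/common/jobs/016/016437      # placeholder job ID
+grep -n '\[ERROR\]' "$JOB/log/smrtpipe.log" | tail -20
+```
+
+Some common errors related to HGAP are: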
+
+#### Problem 1a:  Not enough long reads.
+```
+Exiting smrtpipe last error: SmrtExit Seed read filter failed, check cutoffs
+```
+
+There are no reads longer than the minimum seed read length filter. Look at the distribution of subread lengths plotted in SMRT Portal, and lower your `Minimum Seed Read Length` parameter. A good rule of thumb is to set this equal to the mean subread length.
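+
+As a quick sketch, the mean subread length can be estimated directly from the job's filtered subreads (this handles multi-line FASTA records):
+```
+awk '/^>/ { if (len) { tot += len; n++ }; len = 0; next }   # close out the previous record
+     { len += length($0) }                                  # accumulate sequence length
+     END { if (len) { tot += len; n++ }
+           if (n) printf "mean subread length: %.0f\n", tot/n }' \
+    data/filtered_subreads.fasta
+```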
+
+#### Problem 1b:  Something failed when trying to write the SPEC file for Celera Assembler.
+```
+TaskExecutionError: task://017112/P_CeleraAssembler/writeRunCASpec returned non-zero exit status (1)
+[ERROR][pbpy.smrtpipe.engine.SmrtPipeTasks run 660] > #!/bin/bash
+[ERROR][pbpy.smrtpipe.engine.SmrtPipeTasks run 660] >
+[ERROR][pbpy.smrtpipe.engine.SmrtPipeTasks run 660] > ########### TASK metadata #############
+[ERROR][pbpy.smrtpipe.engine.SmrtPipeTasks run 660] > # Task        : writeRunCASpec
+[ERROR][pbpy.smrtpipe.engine.SmrtPipeTasks run 660] > # Module      : P_CeleraAssembler
+[ERROR][pbpy.smrtpipe.engine.SmrtPipeTasks run 660] > # TaskType    : None
+[ERROR][pbpy.smrtpipe.engine.SmrtPipeTasks run 660] > # URL         : task://017112/P_CeleraAssembler/writeRunCASpec
+[ERROR][pbpy.smrtpipe.engine.SmrtPipeTasks run 660] > # createdAt   : 2013-06-10 09:04:51.505276
+[ERROR][pbpy.smrtpipe.engine.SmrtPipeTasks run 660] > # ncmds       : 1
+```
+
+If you do not have SGE, please see Step 5. The log outputs the contents of the `writeRunCASpec.sh` script with the `>` prefix, and you can generally find the error within these lines. The script relies heavily on `qstat` and other SGE q-commands to configure Celera Assembler, and assumes standard outputs for these commands. If you have aliased or changed these q-commands on your system, the defaults must be restored. If you do **not** have q-commands in your path, you can add them to your environment, for example as sketched below.
+
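+One common way to put the q-commands on your path, assuming a standard SGE installation rooted at `/opt/sge` with the default cell (adjust `SGE_ROOT` and the cell name for your site):
+```
+export SGE_ROOT=/opt/sge                      # assumed install root
+source $SGE_ROOT/default/common/settings.sh   # standard SGE env script; adds qstat, qsub, qconf to PATH
+which qstat                                   # verify
+```
+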
+##### Example 1:  qstat is not defined.
+```
+[ERROR][pbpy.smrtpipe.engine.SmrtPipeTasks run 660] > raise Exception, res[2]
+[ERROR][pbpy.smrtpipe.engine.SmrtPipeTasks run 660] > Exception: /bin/sh: qstat: command not found
+```
+
+##### Example 2:  qstat has been changed.
+```
+[ERROR][pbpy.smrtpipe.engine.SmrtPipeTasks run 660] > 2013-06-10 19:25:47,420 [INFO] QSTAT :
+[ERROR][pbpy.smrtpipe.engine.SmrtPipeTasks run 660] > ['queuename      
+qtype resv/used/tot. load_avg arch          states', '-------------------------
+--------------------------------------------------------', 'smrtanalysis.q at hostname.com BIP   0/27/170       ---      lx-amd64      ', '\thl:arch=lx-amd64', '\thl:num_proc=192', '\t?v:mem_
+total=0.000', '\t?v:swap_total=0.000', '\t?v:virtual_total=0.000', '\thl:m_topology=SCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSC
+CCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCCSCCCCCC', '\thl:m_socket=32', '\thl:m_core=192', '\thl:m_thread=192', '\t?v:load_
+avg=0.000000', '\t?v:load_short=0.000000', '\t?v:load_
+```
+
+
+#### Problem 1c:  Something failed at the Celera Assembler step.
+
+```
+[ERROR][pbpy.smrtpipe.engine.SmrtPipeTasks __run_task 806] task://017227/P_CeleraAssembler/runCaHgap returned non-zero exit status (1)
+```
+
+This is a pure Celera Assembler problem. Find the `runCaHgap.sh` script in `workflow/P_CeleraAssembler/runCaHgap.sh` and execute it on the command-line:
+
+```
+source /opt/smrtanalysis/etc/setup.sh
+runCA -d /opt/smrtanalysis/common/jobs/017/017227/data -p celera-assembler -s /opt/smrtanalysis/common/jobs/017/017227/data/runCA.spec /opt/smrtanalysis/common/jobs/017/017227/data/corrected.frg
+```
+
+Look in the Celera Assembler logs to find the error; they are located in the `<job_id>/data` directory.
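+
+A hedged sketch for locating the failure in those logs (the exact file layout varies by Celera Assembler version):
+```
+cd /opt/smrtanalysis/common/jobs/017/017227/data
+ls -lt | head                                 # most recently written files first
+grep -ril --include='*.log' error . | head    # logs that mention an error
+```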
+
+#### Problem 1d: Cannot find runCA due to path issue
+Celera Assembler will fail if the runCA script is not in the `$PATH`:
+```
+Cannot find /opt/smrtanalysis-2.0.1/analysis/bin/sgs-7.0/$syst-$arch/bin/runCA
+```
+This will happen if the SMRT Analysis install directory resolves to different paths on the head node and the child nodes.  To fix the problem, make sure the SMRT Analysis install directory (in this case /opt/smrtanalysis) resolves to the same path on all nodes; you can verify this by running the following on each node:
+```
+ls /opt/smrtanalysis/
+```
+If this is not possible, you can edit the runCA script itself. CA figures out its binary path using $FindBin::RealBin from runCA (or from pacBioToCA, depending on which you ran); whatever that returns is what it will try to use. It does this on your head node, so it assumes that the head node of the grid has the same file structure as the other nodes. If this is not the case, there is a pathMap option to runCA which takes a file name with hostname-to-path translations, like:
+```
+node1 /node1/ca/bin
+```
+and it would then run runCA and the other CA binaries from /node1/ca/bin rather than whatever $FindBin::RealBin returned on the head node.
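+
+A sketch of pointing runCA at that translation file (the file path here is illustrative; `pathMap` can be given like any other runCA option):
+```
+runCA -d <work_dir> -p celera-assembler -s runCA.spec pathMap=/full/path/to/pathMap.txt corrected.frg
+```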
+
+
+#### Problem 1e: Celera Assembler step deadlocks
+If you have a lot of CA jobs that are running but not consuming any CPU, that is often a sign of deadlock. 
+Technically it is caused by a queue being filled with jobs that all depend on a job that is waiting to execute on that same queue. Running many large Celera Assembler jobs at once can cause deadlock.  One solution is to specify a different/dedicated queue for the Celera Assembler step itself.  To do this, edit the following file:
+```
+/path/to/smrtanalysis/current/analysis/etc/celeraAssembler/template.spec
+```
+The other parts of SMRT Analysis will still use the settings defined in `/path/to/smrtanalysis/current/analysis/etc/cluster/SGE/interactive.tmpl`.
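+
+A hedged illustration of the kind of grid settings you might add there (`assembly.q` is a hypothetical queue name; see the runCA documentation for the full set of `sge*` options):
+```
+useGrid = 1
+scriptOnGrid = 1
+sge = -q assembly.q
+```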
+
+#### Problem 1f: Resubmitting `job.sh` on the command-line does not work.
+```
+[ERROR] [pbpy.smrtpipe.engine.SmrtPipeTasks run 660] > ERROR: Bank path already exists
+```
+You **cannot** rerun HGAP jobs using `job.sh`. You **must** rerun the jobs using the GUI. 
+
+
+### Step 2.  Check the run parameters for your data set.
+The assembly algorithm does not check that your data matches the parameters that are set. The most critical parameters are `Estimated Genome size` and `Minimum Seed Read Length`.  Look in the `smrtpipe.log` file and find the following lines:
+```
+            p_preassembler.minLongReadLength = 6000   
+            p_celeraassembler.genomeSize = 5000000 
+```
+Make sure you have at least 15X of data exceeding the `Minimum Seed Read Length`, and that your estimated genome size is correct for your organism.
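+
+As a rough sanity check (a sketch; set the cutoff and genome size to the values from your `smrtpipe.log`), you can approximate the coverage contributed by reads at or above the seed cutoff:
+```
+awk -v c=6000 -v g=5000000 '
+    /^>/ { if (len >= c) tot += len; len = 0; next }   # close out the previous record
+    { len += length($0) }                              # accumulate sequence length
+    END { if (len >= c) tot += len
+          printf "seed read coverage: ~%.1fX\n", tot/g }' \
+    data/filtered_subreads.fasta
+```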
+
+### Step 3. Run HGAP on lambda to establish a baseline.
+Run `RS_HGAP_Assembly` using the SMRT Cell in `$SEYMOUR_HOME/common/test/primary/lambda`, and the pre-packaged lambda reference in `$SEYMOUR_HOME/common/userdata/references/lambda`. Make sure you set the `Estimated Genome size` to 40,000 bp and `Minimum Seed Read Length` to 3500 bp. Set these parameters by clicking the box with three dots "..." next to the protocol drop-down menu:
+``` 
+            p_preassembler.minLongReadLength = 3500
+            p_celeraassembler.genomeSize = 40000  
+```
+
+
+If the lambda job also fails, then there is a problem with the software configuration that must be investigated.  
+
+### Step 4.  Check the run parameters for your hardware infrastructure.
+This is the hardest part of the entire process. It requires knowing the hardware architecture of your system and the fix requires iterative and laborious fine-tuning.
+ 
+#### Step 4a. Check that your compute nodes can also submit jobs.
+Celera Assembler requires that your compute hosts are also submit hosts because it submits array jobs. You can list all your execution (compute) hosts using `qconf -sel`, and all your submit hosts using `qconf -ss`.  If these lists are **not** the same, you can add the missing submit hosts by using `qconf -as <hostname>`.
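+
+A minimal bash sketch (GNU `comm`) for finding execution hosts that lack submit rights; note that both commands must report hostnames in the same form (short vs. fully qualified) for the comparison to work:
+```
+comm -23 <(qconf -sel | sort) <(qconf -ss | sort)      # exec hosts missing submit rights
+for h in $(comm -23 <(qconf -sel | sort) <(qconf -ss | sort)); do
+    qconf -as "$h"                                     # grant submit rights
+done
+```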
+
+
+#### Step 4b. Downtune `NPROC` in `$SEYMOUR_HOME/analysis/etc/smrtpipe.rc`.
+
+Look at the `.tmpl` template files in `$SEYMOUR_HOME/analysis/etc/cluster/sge/` and you will notice that certain environment variables are used in the qsub command. These environment variables are set in `$SEYMOUR_HOME/analysis/etc/smrtpipe.rc`, and the most important variable is `NPROC`. NPROC stands for "number of processes" and dictates the number of slots to request when running Celera Assembler. The default value is set to one less than the number of cores on the head-n [...]
+
+1. Edit `$SEYMOUR_HOME/analysis/etc/smrtpipe.rc` such that `NPROC` is a low number (around 5).
+2. Restart tomcat.
+3. Rerun the RS_HGAP_Assembly job using lambda.
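+
+For step 1 above, the edited entry might look like this minimal sketch (keep the syntax of the existing `NPROC` line in your file):
+```
+NPROC = 5
+```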
+ 
+#### Step 4c. Fine tune the `runCA.spec` file.
+The parameters set in the `.tmpl` and `smrtpipe.rc` files are interpreted and written to a `runCA.spec` file found in `$SEYMOUR_HOME/common/jobs/<id_prefix>/<id>/data/runCA.spec`. If no NPROC value results in a successful run, you need to examine other parameters listed at the [Celera Assembler website](http://sourceforge.net/apps/mediawiki/wgs-assembler/index.php?title=RunCA#Sun_Grid_Engine_Options). Pay particular attention to the `Overlap` parameters. Do the following a few times:
+
+1. Edit the `runCA.spec` file.
+2. Rerun the RS_HGAP_Assembly job using the new SPEC file by specifying the full path in the assembly parameters dialog box. (To do this, click the box with three dots "..." next to the protocol drop-down menu.)
+3. Repeat.
+
+### Step 5.  Work around the problem.
+If none of the above steps solve your problem, and you have a small (<10 Mb) genome to assemble, you may want to consider running HGAP in single-node mode. To do this, download the `runCA.spec` file from a failed RS_HGAP_Assembly job. You can find it in the "Data" panel on the bottom-left corner of the SMRT Portal Job view.
+
+Edit the file to turn off distributed computing (the default in v2.0.1):
+```
+useGrid=0
+scriptOnGrid=0
+```
+
+Now designate the new SPEC file by specifying the full path in the assembly parameters dialog box. (To do this, click the box with three dots "..." next to the protocol drop-down menu.)
\ No newline at end of file
diff --git a/docs/Reference-upgrades-fail.md b/docs/Reference-upgrades-fail.md
new file mode 100644
index 0000000..942161c
--- /dev/null
+++ b/docs/Reference-upgrades-fail.md
@@ -0,0 +1,17 @@
+1. An old reference entry whose FASTA headers contain only `refNNNNNN|`-style patterns, such as:
+```
+>ref000001|ref000001
+or
+>ref000001
+```
+This still fails the upgrade script. Check the upgrade log for the names of failed references.
+
+2. Users had directory structures inside the repository for previous versions, such as:
+```
+myrepository/foo/
+myrepository/foo/reference.info.xml
+myrepository/foo/sequence/
+myrepository/foo/sequence/foo.fasta
+myrepository/foo/sequence/foo.fasta.*
+```
+This might confuse the upgrade script. Users might need to manually upgrade references in this layout.
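+
+One way to manually re-register such a reference is with `referenceUploader` (a sketch; the repository path and reference name are illustrative, and flags may differ by version, so check `referenceUploader --help`):
+```
+referenceUploader -c -p /path/to/myrepository -n foo -f /path/to/foo.fasta
+```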
\ No newline at end of file
diff --git a/docs/Running-SMRT-View-in-a-different-tomcat-instance.md b/docs/Running-SMRT-View-in-a-different-tomcat-instance.md
new file mode 100644
index 0000000..036add9
--- /dev/null
+++ b/docs/Running-SMRT-View-in-a-different-tomcat-instance.md
@@ -0,0 +1,18 @@
+SMRT View may be run from its own Tomcat instance by making the following configuration change to SMRT Portal:
+
+Edit the configuration file `$SMRT_ROOT/current/etc/config.xml` to include the host attributes `smrtview` and `smrtviewhttpport` where the `name` and `httpport` attributes are defined.  For example:
+
+Change this:
+```
+     <host name="172.16.13.213" httpport="8080" httpsport=""  />
+```
+to this:
+```
+     <host name="172.16.13.213" httpport="8080" httpsport="" smrtview="172.16.13.213" smrtviewhttpport="8084" />
+```
+Here SMRT View is installed on the same machine as SMRT Portal, with a Tomcat instance listening on port 8084.
+
+SMRT View requires access to the same paths where SMRT Portal is installed (it reads data from the references and jobs folders using the same paths that SMRT Portal sees for these files).  Keep this in mind if installing SMRT View on a different machine than SMRT Portal.
+
+The recommended way to install SMRT View is to follow the instructions at https://github.com/PacificBiosciences/DevNet/wiki/SMRT-View.
+Note that you will likely have to modify the `redist/tomcat/conf/server.xml` file to specify unique ports for the machine that you are running it on, as sketched below.
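+
+As a sketch, the relevant entry in `server.xml` is the HTTP `Connector`; the port here matches the `smrtviewhttpport` example above, but yours may differ:
+```
+<Connector port="8084" protocol="HTTP/1.1"
+           connectionTimeout="20000"
+           redirectPort="8443" />
+```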
+
+Another simple way to install SMRT View is to do a fresh install of SMRT Analysis, choosing different ports for the services when prompted.  You can then launch SMRT View using `$SMRT_ROOT/admin/bin/tomcatd-initd start`.
\ No newline at end of file
diff --git "a/docs/SLF4J:-Failed-to-load-class-\"org.slf4j.impl.StaticLoggerBinder\".md" "b/docs/SLF4J:-Failed-to-load-class-\"org.slf4j.impl.StaticLoggerBinder\".md"
new file mode 100644
index 0000000..3ee69af
--- /dev/null
+++ "b/docs/SLF4J:-Failed-to-load-class-\"org.slf4j.impl.StaticLoggerBinder\".md"
@@ -0,0 +1,4 @@
+You can safely ignore the following error message in the `$SEYMOUR_HOME/common/log/smrtportal.0.log` file:
+```
+SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder"
+```
\ No newline at end of file
diff --git a/docs/SMRT-Analysis-Release-Notes-v2.0.1.md b/docs/SMRT-Analysis-Release-Notes-v2.0.1.md
new file mode 100644
index 0000000..0a9aab0
--- /dev/null
+++ b/docs/SMRT-Analysis-Release-Notes-v2.0.1.md
@@ -0,0 +1,16 @@
+###  New Features
+* Now includes Quiver training for DNA/Polymerase P4.
+* Now includes modification detection using the P4/C2 combination with an updated in silico control. 
+  * Modification identification of 6-methyladenine (6-mA) and 4-methylcytosine (4-mC) is also supported, and is expected to have equivalent performance to previous chemistry releases.
+  * Modification identification of 5-methylcytosine (5-mC) using TET-treated samples is also supported. However, due to a limited training dataset, this application is not yet optimized for the P4/C2 combination. Future releases of the software are expected to have improved TET-converted 5-mC identification as the in silico control is updated with additional training data. 
+
+###  Fixed Issues
+
+* Fixed an Instrument Web Services problem with well status queries. (23191)
+* Removed a time limit to the Sun Grid Engine (SGE) that caused analysis jobs to stop after 12 hours. Any limits must now be placed by your IT department; SMRT® Pipe will **not** limit the run time. (23312)
+* Fixed an issue where sample barcodes were not working properly with multi-streamed data files (bax.h5), and most barcodes were not being recognized. (23136)
+* Modified HGAP defaults so that partial alignments are allowed and Celera® Assembler will run on a single node.
+
+### Known Issues
+
+*  The case-control run scenario for the ``RS_BaseModification`` protocol is not functional. The _in silico_ use case is still supported.  
\ No newline at end of file
diff --git a/docs/SMRT-Analysis-Release-Notes-v2.0.md b/docs/SMRT-Analysis-Release-Notes-v2.0.md
new file mode 100644
index 0000000..1b51d4f
--- /dev/null
+++ b/docs/SMRT-Analysis-Release-Notes-v2.0.md
@@ -0,0 +1,101 @@
+* [Introduction](#Intro)
+* [Installation](#Install)
+* [New Features in v2.0](#New)
+* [Fixed Issues in v2.0](#Fixed)
+* [Known Issues in v2.0](#Known)
+
+
+## <a name="Intro"></a> Introduction
+
+SMRT Analysis software performs automated and distributed secondary analysis of sequencing data generated by the PacBio® System.
+
+## <a name="Install"></a> Installation
+
+For installation instructions, see [SMRT® Analysis Software Installation](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-v2.0).
+
+## <a name="New"></a> New Features in v2.0
+
+###SMRT® Portal###
+* Now includes the ``RS_HGAP_Assembly`` protocol to perform pre-assembly, assembly with Celera Assembler, and assembly polishing with Quiver, all in one integrated workflow.
+* ``RS_Modification_and_Motif_Analysis`` and ``RS_Modification_Detection`` protocols now include improved accuracy for XL-C2 chemistry. Base modification detection with C2 continues to be supported as well.
+* ``RS_Modification_and_Motif_Analysis`` now supports analysis of fractionally methylated or modified base positions within a sample.
+* You can now upload references using SMRT Portal **without** first placing the reference FASTA file in the dropbox by using the **Select Reference Sequence File(s) to Import** dialog.
+* The Motif report now includes separate histograms of modification QV per motif.
+* Now displays **simplified metric names**, including Polymerase Read metrics and Read Length of Insert. To shorten the names, the report tooltip definitions now clarify whether a particular metric refers to the mean or another summary statistic. Several detailed metrics are no longer displayed on the Summary page; they continue to be displayed in the more detailed reports.
+* The **View Log** window now includes a link to download the log file. 
+* A new web service call, the **Cleanup** Function, cleans up duplicate SMRT Cells located at different paths, if the cells are unassociated with jobs. When you scan and import SMRT Cells from SMRT Portal and the same SMRT Cell ID already exists, the existing path is updated to the new location. No duplicate entries are created.
+
+###SMRT® View###
+* Multiple UI enhancements: 
+ * When run in stand-alone mode, uses the native file system browser.
+ * The toolbar includes new, more intuitive icons for base modifications and showing/hiding bases. 
+ * You can now **remove** tracks from the display without exiting SMRT View.
+ * Now displays variant call tracks per barcode.
+
+###SMRT® Pipe###
+* Many SMRT Pipe command-line tools now include improved documentation and command-line examples.
+* Now includes a new examples directory.
+* Quiver was enhanced to reduce errors in long dinucleotide repeats.
+
+###Installation###
+* The installation now checks for version mismatches on compute nodes.
+* Includes many small improvements to the upgrade and post-installation scripts.
+* The ``/opt/smrtanalysis/doc/`` directory in the SMRT Analysis installation now includes a README file pointing to the online documentation.
+* The SMRT Analysis installation documentation is now delivered as a web page: [SMRT® Analysis Software Installation](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-v2.0).
+
+
+## <a name="Fixed"></a> Fixed Issues in v2.0
+
+###SMRT® Pipe###
+* The minor and compound variant detection algorithm now allows adjustment of the minimum coverage requirement. (21474)
+* The long insert control was renamed from ``Strobe_v1`` to ``4kb_Control_c2``, and is intended for use only with C2 sequencing runs. The shorter insert control is now called ``600bp_Control_c2``. (19483)
+* ``bash5tools.py`` now outputs subreads within the HQ region only. (21565)
+* To more accurately show the uniformity of coverage in BLASR mapping, the default is now to place repeats randomly. (21769)
+* Mapping results are now reproducible due to fully deterministic pseudo-randomization. (22126)
+* GMAP download link names were changed for consistency and intuitiveness. (22238)
+* The consensus sequence no longer includes N's by default when there is no evidence. Instead, the reference base is output, but in lowercase. (22227)
+* Now correctly produces a Barcoded Reads FASTQ file for barcode output. (22341)
+
+###SMRT® Pipe - Assembly###
+* The Pre-Assembler protocol now outputs summary metrics for the pre-assembly. (22631)
+* The ``P_PreAssembler`` protocol now correctly writes the ``tmp.fasta`` and ``tmp.fastq`` files to the ``sharedTmp`` directory. (22964)
+* The P_PreAssembler protocol now completes correctly when using SMRT Cell inputs created by different ICS versions. (22205)
+
+###SMRT® Pipe - Reference Uploader###
+* The reference import mechanism now rejects duplicate FASTA sequences. (21947)
+
+###Installation/Upgrades###
+* The ``conf_main.log`` file was moved to ``common/log/install``. (22612)
+* The ``conf_main.log`` file now contains all output from the installation. (22200)
+* The ``upgrade_and_configure`` script now manages soft links and Tomcat and Kodos services. (22599)
+* The ``upgrade_and_configure`` script now displays a warning if reference repository updates are required, as this operation may take a long time. (22324)
+* The CentOS 5.6 binaries for SMRT Analysis now include ``libgfortran.so.3``. (22464)
+
+###SMRT® Analysis System###
+* The sample data sets were updated. (22839)
+* The ``cmp.h5`` file was updated to address barcoding and multi-streamed base files. (22491)
+
+## <a name="Known"></a> Known Issues in v2.0
+
+###SMRT® Portal###
+* In the **Create New** and **View Data** pages, Advanced search may not display the calendar on some browsers. (17275)
+* Exporting Table Data does **not** correctly export Group names. (22567)
+* Exporting Table data from the **Create New Job** page results in an empty version field. (22627)
+* In the **View Data** page, sorting by Groups returns empty values. (22666)
+* In the **SMRT Cell Available** table, sorting on Version returns empty values. You also **cannot** search the table on Version. (22687)
+* You **cannot** select an inactive protocol in the **Manage Secondary Analysis Protocols** page. (22068)
+* You **cannot** add SMRT Cells after saving a job containing no cells. (22400)
+
+###SMRT® Pipe - Mapping###
+* BLASR does **not** process renamed files correctly. (21439)
+* BLASR may align subreads from the same ZMW to different genomic coordinates. (20755)
+
+###SMRT® Pipe - System###
+* MySQL passwords containing ``&`` (ampersand) characters are not handled correctly. (22761)
+* The ``configure_smrtanalysis.sh`` script aborts when [y/n] is not chosen. (22323)
+* ``CCS allpass`` mode produces incorrect results when ``--placeRepeatsRandomly`` is specified. (22935)
+* BaseQV is not passed to the SAM file in the ``P_GMAP`` module. (21884)
+
+***
+
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2013, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
\ No newline at end of file
diff --git a/docs/SMRT-Analysis-Release-Notes-v2.1.1.md b/docs/SMRT-Analysis-Release-Notes-v2.1.1.md
new file mode 100644
index 0000000..f86d618
--- /dev/null
+++ b/docs/SMRT-Analysis-Release-Notes-v2.1.1.md
@@ -0,0 +1,23 @@
+* [Introduction](#Intro)
+* [Installation](#Install)
+* [Fixed Issues in v2.1.1](#Fixed)
+
+## <a name="Intro"></a> Introduction
+
+The SMRT® Analysis software suite performs assembly and variant detection analysis of sequencing data generated by the Pacific Biosciences instrument.
+
+## <a name="Install"></a> Installation
+
+For installation instructions, see [SMRT® Analysis Software Installation](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-v2.1).
+
+## <a name="Fixed"></a> Fixed Issues in v2.1.1
+
+* Fixed an issue that caused the ``RS_HGAP.1`` and ``RS_PreAssembler`` protocols to fail when processing v2.1 data. (24154)
+* Fixed an issue that caused the ``RS_AHA_Scaffolding`` protocol to fail when the ``gapFill`` option is set to ``True``. (24140) 
+* Fixed an issue in SMRT® Portal which caused the wrong reference to be used after a job was saved. (24146) 
+* Fixed an installation issue where a directory was not correctly created. (24135)
+
+***
+
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2013, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
+**P/N 100-293-900**
diff --git a/docs/SMRT-Analysis-Release-Notes-v2.1.md b/docs/SMRT-Analysis-Release-Notes-v2.1.md
new file mode 100644
index 0000000..54a164f
--- /dev/null
+++ b/docs/SMRT-Analysis-Release-Notes-v2.1.md
@@ -0,0 +1,101 @@
+* [Introduction](#Intro)
+* [Installation](#Install)
+* [New Features in v2.1](#New)
+* [New Protocols in v2.1](#Protocols)
+* [Fixed Issues in v2.1](#Fixed)
+* [Known Issues in v2.1](#Known)
+
+## <a name="Intro"></a> Introduction
+
+The SMRT® Analysis software suite performs assembly and variant detection analysis of sequencing data generated by the Pacific Biosciences instrument.
+
+## <a name="Install"></a> Installation
+
+For installation instructions, see [SMRT® Analysis Software Installation](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-v2.1).
+
+## <a name="New"></a> New Features in v2.1
+
+###Important Note:###
+Circular consensus sequencing (CCS) is **no longer** performed on the Blade Center. Instead, you must run the ``RS_ReadsOfInsert`` protocol. That protocol provides more options to determine the highest quality single molecule reads, and includes built-in DNA barcoding support. The Blade Center now processes base calls in real-time, and data is available immediately after a run regardless of insert size and movie time.
+
+###BridgeMapper Protocol###
+* Use the new **BridgeMapper** protocol to find sequences that map to multiple parts of a genome assembly, to help with assembly QC and other tasks. For information on BridgeMapper and its specialized SMRT View visualization mode, see the SMRT View online help.
+
+###SMRT® Pipe - Consensus###
+* Quiver was upgraded to handle diploid variant calling. As a result, the ``RS_Resequencing_CCS_GATK`` and ``RS_Resequencing_GATK`` protocols were **removed** in favor of the diploid Quiver option in the ``RS_Resequencing`` and ``RS_ReadsOfInsert_Resequencing`` protocols.  Note that the ``RS_Resequencing_GATK_Barcode`` protocol **remains** for this release, but we intend to remove it in the next version of SMRT® Analysis.
+
+###SMRT® Pipe - Mapping###
+A new BLASR option, ``-concordant``, aligns all subreads of a ZMW to the region where the longest full-pass subread of that ZMW aligned. This option is now **turned on** by default in the following SMRT Pipe protocols:
+
+* ``RS_Resequencing``
+* ``RS_Modification_and_Motif_Analysis``
+* ``RS_Modification_Detection``
+
+###SMRT® Pipe - Assembly###
+* The new **HGAP 2** protocol has significantly increased performance; assembly time and the memory footprint are dramatically reduced. Cluster time is reduced up to 100-fold, and disk space use is reduced from gigabytes to tens of megabytes. Assembly output is also improved with some new chimera detection and sequencing filtering. To help you compare the differences in assembly results, we include both the original HGAP and the new HGAP 2 protocols in SMRT Portal (``RS_HGAP_Assembly.1`` [...]
+
+* As CCS is no longer performed on the instrument, we **removed** options to use CCS reads for pre-assembly and alignment. To use CCS or Reads of Insert for pre-assembly, please use a FASTQ file of reads. To make sure that subreads all align to the same location, resequencing analysis now uses the alignment of the longest subread from a ZMW to constrain the mapping of all other subreads. (See the BLASR ``-concordant`` option.)
+
+* The Allora algorithm and ``RS_Allora_Assembly`` protocols were **retired** in v2.1. Please use the ``RS_HGAP_Assembly`` protocol instead.
+
+###SMRT® Pipe - Barcoding###
+* Analysis jobs with barcoding (such as those using the ``RS_ReadsOfInsert`` protocol) now provide a simple barcoding report with the number of reads for each barcode in a table. To help improve this table (or any other features), please send us your feedback at PBFeedback@pacificbiosciences.com.
+
+###SMRT® Portal###
+* Added a "Forgot your password?" link in the login screen to reset your password.
+* When a job is created, the SMRT Analysis version is now stored. SMRT Portal displays this version number in the **View Data** tab.
+* Added a new Barcoding report that is generated when running jobs that include barcoding. You can also now specify a score threshold for calling barcodes when setting up a barcoding job.
+
+###SMRT® View###
+* Now displays **BridgeMapper** visualizations, used to find sequences that map to multiple parts of a genome assembly. This can help with assembly QC and other tasks. See the SMRT View online help for details.
+
+###Bioinformatics Tools###
+* ``pbalign`` is a tool which aligns Pacific Biosciences' reads in various formats (e.g., bax.h5/plx.h5/ccs.h5/FASTA/fofn) to reference sequences, and produces alignments in SAM or CMP.H5 format. It is designed to help you align Pacific Biosciences' reads and generate alignments in convenient formats for downstream analysis **without** access to SMRT Portal. For example, you can use ``pbalign`` with the ``--forQuiver`` option to produce alignments in a CMP.H5 file, which has all the puls [...]
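+
+A minimal usage sketch (the input, reference, and output names are hypothetical; ``--forQuiver`` is the option described above):
+```
+pbalign --forQuiver --nproc 8 input.fofn /path/to/reference_repository/lambda out.cmp.h5
+```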
+
+###Base Modifications###
+SMRT® Analysis includes base modification support for P5-C3 (P5 polymerase with C3 sequencing chemistry). The same three types of modification are still supported for identification: 
+* N6-methyladenine
+* N4-methylcytosine
+* Tet-converted 5-methylcytosine (5-carboxylcytosine)
+
+###Installation/Upgrade###
+* Streamlined the directory structure: All administrative scripts are now in the ``$SMRT_ROOT/admin/bin`` directory.
+* Streamlined the directory structure: All analysis data are now in the ``$SMRT_ROOT/userdata`` directory.
+* Install and upgrade procedures now attempt to automatically detect and propagate system configurations, including SGE environment variables.
+* The software tarball is now a “.run” self-extracting executable. 
+
+## <a name="Protocols"></a> New Protocols in v2.1
+
+###RS_ReadsOfInsert###
+* This protocol extracts the biologically meaningful portion of the sequenced read. It replaces CCS on the instrument and produces “reads_of_insert” FASTA and FASTQ files containing reads from the insert sequence of single molecules, optionally splitting by barcode.
+
+###RS_LongAmpliconAnalysis###
+* This protocol enables haplotype analysis by detecting phased variants in consensus sequences for pooled amplicon data, optionally splitting by barcode.
+
+###RS_cDNA_Mapping###
+* This protocol now produces a report and summary statistics on the alignment of cDNA transcripts to a genomic DNA reference using the third-party software tool GMAP. Reads are filtered by length and quality and then mapped against the reference using GMAP to span introns.
+
+## <a name="Fixed"></a> Fixed Issues in v2.1
+
+###SMRT® Pipe - Assembly###
+* The AHA algorithm was refactored to provide the ``pbaha.py`` executable. The ``pbaha.py`` executable allows use of the AHA scaffolding and gapfilling algorithms outside of ``smrtpipe.py``. In SMRT Analysis v2.1.0, ``pbaha.py`` is the **only** way to execute AHA on Pacific Biosciences' reads in FASTA files. (18912). **Note**: AHA ("A Hybrid Assembler") is the Pacific Biosciences hybrid assembly algorithm. It is based on the open source assembly software package AMOS, with additional soft [...]
+
+###SMRT® Pipe - Reference Uploader###
+* Reference creation now terminates with an error if duplicate sequences exist in any of the input FASTA files. (21947)
+
+###SMRT® Analysis Web Services API###
+* The **Create Job** function accepts data via an HTTP POST request. Previously, the job was always saved with the reference embedded in the protocol xml, even if a different reference was specified in the POST data. This issue was fixed so that the reference specified in the POST data takes precedence. (22797)
+
+###SMRT® Portal###
+* Fixed an issue where only 10 groups were visible in the **Add User** dialog’s Groups select box. Now all available groups display. (23591)
+
+## <a name="Known"></a> Known Issues in v2.1
+
+###SMRT® Pipe - Mapping###
+* BLASR does not process renamed files correctly. (21439)
+
+
+***
+
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2013, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
+**P/N 100-262-000**
diff --git a/docs/SMRT-Analysis-Release-Notes-v2.2.0.md b/docs/SMRT-Analysis-Release-Notes-v2.2.0.md
new file mode 100644
index 0000000..828993a
--- /dev/null
+++ b/docs/SMRT-Analysis-Release-Notes-v2.2.0.md
@@ -0,0 +1,123 @@
+* [Introduction](#Intro)
+* [Installation](#Install)
+* [New Features in v2.2.0](#New)
+* [Enhanced Protocols in v2.2.0](#ENH_Protocols)
+* [Fixed Issues in v2.2.0](#Fixed)
+* [Known Issues in v2.2.0](#Known)
+
+## <a name="Intro"></a> Introduction
+
+The SMRT Analysis software suite performs assembly and variant detection analysis of sequencing data generated by the Pacific Biosciences instrument.
+
+## <a name="Install"></a> Installation
+
+For installation instructions, see [SMRT Analysis Software Installation](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-v2.2.0).
+
+## <a name="New"></a> New Features in v2.2.0
+
+###SMRT Analysis###
+
+* **Iso-Seq™** module adds full-length transcript Q/C and clustering steps, in addition to mapping to a reference genome using the GMAP tool.
+
+* **Long-Amplicon Analysis** module now includes enhanced chimera filtering, providing greater confidence in genotyping results.
+
+* **Minor-Variant Analysis** module now uses a more sophisticated model tuned to PacBio reads, the same model used by Quiver. 
+
+* **HGAP 3** (PacBio genome assembly tool) now incorporates a potential **10-fold** speed improvement (wall-clock time) for microbial assembly. The increased speed can dramatically reduce the time required to completely assemble a full microbial genome.  
+  * HGAP 2 is now our **production** assembly version and HGAP 3 is the beta version.  
+  * HGAP 1 is **no longer supported**. We encourage you to migrate to SMRT Analysis v2.2.0.
+
+* Improved SMRT Pipe module interface documentation and examples.
+
+###SMRT Portal###
+
+* **Protocol Selector** groups protocols by application and simplifies navigation for new users. (The feature can be turned off by users after the first use.)
+* **N50 statistics** are now included in many of the generated reports; resequencing reports now display "Concordance" instead of "Accuracy".
+* **Tooltip descriptions** display when editing protocol parameters. 
+* Can now create output FASTA/FASTQ files **without** control reads.
+
+###Installation/Upgrade###
+* One tarball supplied for **all** supported operating systems.
+* Now includes the MySQL® server bundled with the tarball - no external MySQL® server needed.
+* Now includes Celera® Assembler 8.1 bundled with the tarball.
+* Now includes the ``phmmer`` prebuilt binary bundled with the tarball.
+
+## <a name="ENH_Protocols"></a> Enhanced Protocols in v2.2.0
+
+* ``RS_IsoSeq`` **(BETA)**: Classifies PacBio reads into full-length (FL) or non-full-length (non-FL) transcript reads, with optional clustering and mapping steps. **Replaces** the ``RS_cDNA_Mapping`` protocol.
+
+* ``RS_HGAP_Assembly.3`` **(BETA)**: Optimized for speed: 10-fold improvement for small- and midsize genome assembly, providing shorter turnaround time.
+
+* ``RS_Minor_Variant`` **(BETA)**: Calls minor variants in a heterogeneous dataset against a user-provided reference sequence, with frequencies down to 0.5%. **Replaces** the ``RS_Minor_and_Compound_Variants`` protocol.
+
+* ``RS_Long_Amplicon_Analysis``: Includes enhanced chimera filtering. 
+
+* ``RS_HGAP_Assembly.2``, ``RS_HGAP_Assembly.3``: Added support for filtering control reads out of the filtered FASTA/FASTQ files generated by the protocols.
+
+###Obsolete Protocols:###
+* ``RS_HGAP_Assembly.1``: Use ``RS_HGAP_Assembly.2``, which is now our production assembly software.
+* ``RS_cDNA_Mapping``: Use ``RS_IsoSeq`` instead.
+* ``RS_Minor_and_Compound_Variants``: Use ``RS_Minor_Variant`` instead.
+* ``RS_Resequencing_GATK_Barcode``:  
+   * Use ``RS_Subreads`` if your desired output is ``FASTQ`` files containing reads split up by barcodes.
+   * Use ``RS_Resequencing_Barcode`` if your desired output is ``cmp.h5`` files containing reads split up by barcodes. 
+
+**Note:** GATK and associated executables are **no longer** included.
+
+###Protocols Whose Names Changed:###
+* ``RS_Filter_Only``:  Use ``RS_Subreads`` instead.
+* ``RS_Resequencing_ReadsOfInsert``:  Use ``RS_ReadsOfInsert_Mapping`` instead.
+* ``BridgeMapper_Beta``:  Use ``RS_BridgeMapper`` instead.
+
+## <a name="Fixed"></a> Fixed Issues in v2.2.0
+
+###SMRT Portal###
+* Users logged in as **scientist** can now **archive and restore** their own jobs. (24442) 
+* Consolidated and reorganized the protocols. (24622) 
+* Now calls a script to back up the database. (24644)
+* Added the build number to the **About** dialog. (24303)
+* Changed the labeling to "Reads of Insert" in the Protocol Details dialog. (24675)
+
+###SMRT Pipe###
+* Various improvements to reduce memory usage in the resequencing pipeline. (24455)
+* AHA algorithm now works with reference sequence headers that contain space characters. (24407)
+* The user environment is now cached **before** installation, then restored **after** SMRT Analysis is installed. (24668, 24881)
+
+###SMRT Pipe - Assembly###
+* Greater efficiency in the use of cluster resources. (24540) 
+* Added a Contig Depth vs Quality Report. (24429)
+
+###SMRT Pipe - Barcoding###
+* Fixed two issues in ``pbbarcode`` scoring that caused mislabeling in paired mode. (24426)
+
+###SMRT Pipe - Consensus###
+* Improved handling of experiments containing different chemistries, and improved robustness and speed for the P5-C3 chemistry. (23819)
+* Fixed an issue where Quiver gave high confidence to variants in extremely low coverage regions. (24541) 
+* The ``RS_Resequencing_ReadsOfInsert`` protocol does **not** include variant calling. (24374)
+
+###SMRT Pipe - Base Modifications###
+* Made many general scaling improvements; now works with medium-scale genomes, such as Arabidopsis. (24059)
+
+###SMRT Pipe - Long-Amplicon Analysis###
+* Improved chimera detection. (24565)
+
+###SMRT Pipe - Mapping###
+* Replaced ``compareSequences.py`` with ``pbalign.py``. (24093)
+
+## <a name="Known"></a> Known Issues in v2.2.0
+
+###SMRT Analysis###
+
+* SMRT Analysis has **not** been tested with all Job Management systems, and may not work correctly with glusterFS. (22571, 23220)
+* SMRT Analysis was designed to work with **known supported workflows**; that is, the included ``RS_`` protocols. Memory constraints should be considered for non-supported command-line workflows. (24090)
+* ``ReferenceUploader`` fails if there are illegal ``>`` characters in the FASTA file header. (24236) (A pre-check is sketched after this list.)
+* Robust validation of input ``bax.h5`` and ``bas.h5`` files is required when creating a job. (23246)
+* Robust path validation in python is required to avoid NFS issues. (24549)
+* SAM files output by BLASR do **not** conform to the SAM format. (23264)
+* ``pbalign.py`` incorrectly treats a multipart ``bas.h5`` file as a CCS file, and aborts when it can't find data it needs. (24173)
+* Motif Finder software is not yet optimized for high-GC genome base modifications.  (24315)
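+
+As a workaround sketch for the ``ReferenceUploader`` issue above (the file name is hypothetical), this one-liner flags FASTA header lines containing a ``>`` anywhere past the first character:
+```
+awk '/^>/ && index(substr($0, 2), ">") { print FILENAME ": " $0 }' reference.fasta
+```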
+
+***
+
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2014, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
+**P/N 100-321-300**
\ No newline at end of file
diff --git a/docs/SMRT-Analysis-Release-Notes-v2.2.0.p1.md b/docs/SMRT-Analysis-Release-Notes-v2.2.0.p1.md
new file mode 100644
index 0000000..7a82768
--- /dev/null
+++ b/docs/SMRT-Analysis-Release-Notes-v2.2.0.p1.md
@@ -0,0 +1,29 @@
+* [Introduction](#Intro)
+* [Installation](#Install)
+* [Fixed Issues in v2.2.0.p1](#Fixed)
+
+## <a name="Intro"></a> Introduction
+
+The SMRT® Analysis software suite performs assembly and variant detection analysis of sequencing data generated by the Pacific Biosciences® instrument.
+
+## <a name="Install"></a> Installation
+
+For installation instructions, see [SMRT® Analysis Software Installation](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-v2.2.0).
+
+## <a name="Fixed"></a> Fixed Issues in v2.2.0.p1
+* Fixed an issue that caused Quiver jobs for large genomes to fail. (25124)
+* Fixed an issue that caused SMRT Pipe jobs to fail when trying to clean up temp files. (25074)
+* Fixed an issue where ``qstat`` could not be found in HGAP ``CeleraAssembler``. (25055)
+* Fixed an issue that caused BLASR to fail if the SMRT Cell path included any whitespace. (25012)
+* Fixed an issue where Reads of Insert execution failed due to an operating system memory error. (25034)
+* Fixed an issue where ``SEYMOUR_HOME=/opt/smrtanalysis`` was still hard-coded for distributed jobs. (25076)
+* The deleted job service ``fail-orphans`` is now included in the installation. (25049)
+
+###SMRT® Portal###
+* Fixed an issue where SMRT Portal did not start correctly on server reboot. (25067)
+* The **About** box now includes patch information. (25129)
+* When exporting metrics, the .csv output file is now organized with metrics as columns and jobs as rows. (25030)
+
+***
+
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2014, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
diff --git a/docs/SMRT-Analysis-Release-Notes-v2.2.0.p2.md b/docs/SMRT-Analysis-Release-Notes-v2.2.0.p2.md
new file mode 100644
index 0000000..06b826c
--- /dev/null
+++ b/docs/SMRT-Analysis-Release-Notes-v2.2.0.p2.md
@@ -0,0 +1,29 @@
+* [Introduction](#Intro)
+* [Installation](#Install)
+* [Fixed Issues in v2.2.0.p2](#Fixed)
+
+## <a name="Intro"></a> Introduction
+
+The SMRT® Analysis software suite performs assembly and variant detection analysis of sequencing data generated by the Pacific Biosciences® instrument.
+
+## <a name="Install"></a> Installation
+
+For installation instructions, see [SMRT® Analysis Software Installation](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-v2.2.0).
+
+## <a name="Fixed"></a> Fixed Issues in v2.2.0.p2
+* Fixed an issue that caused Quiver jobs for large genomes to fail. (25124)
+* Fixed an issue that caused SMRT Pipe jobs to fail when trying to clean up temp files. (25074)
+* Fixed an issue where ``qstat`` could not be found in HGAP ``CeleraAssembler``. (25055)
+* Fixed an issue that caused BLASR to fail if the SMRT Cell path included any whitespace. (25012)
+* Fixed an issue where Reads of Insert execution failed due to an operating system memory error. (25034)
+* Fixed an issue where ``SEYMOUR_HOME=/opt/smrtanalysis`` was still hard-coded for distributed jobs. (25076)
+* The deleted job service ``fail-orphans`` is now included in the installation. (25049)
+
+###SMRT® Portal###
+* Fixed an issue where SMRT Portal did not start correctly on server reboot. (25067)
+* The **About** box now includes patch information. (25129)
+* When exporting metrics, the .csv output file is now organized with metrics as columns and jobs as rows. (25030)
+
+***
+
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2014, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
diff --git a/docs/SMRT-Analysis-Release-Notes-v2.2.0.p3.md b/docs/SMRT-Analysis-Release-Notes-v2.2.0.p3.md
new file mode 100644
index 0000000..5e37d01
--- /dev/null
+++ b/docs/SMRT-Analysis-Release-Notes-v2.2.0.p3.md
@@ -0,0 +1,34 @@
+* [Introduction](#Intro)
+* [Installation](#Install)
+* [Fixed Issues in v2.2.0.p3](#Fixed)
+
+## <a name="Intro"></a> Introduction
+
+The SMRT® Analysis software suite performs assembly and variant detection analysis of sequencing data generated by the Pacific Biosciences® instrument.
+
+## <a name="Install"></a> Installation
+
+For installation instructions, see [SMRT® Analysis Software Installation](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-v2.2.0).
+
+## <a name="Fixed"></a> Fixed Issues in v2.2.0.p3
+
+* Fixed an issue that caused Minor Variants jobs using references containing lowercase characters to fail. (25358)
+* Fixed an issue that incorrectly caused HGAP jobs to fail for lack of free space. (25326)
+* Fixed an SGE resource issue that caused Iso-Seq jobs to fail. (25301)
+* Fixed an issue that caused Quiver to fail. (25239)
+* Updated the example files located under ``assembly/seymour/dist/doc/examples``. (25162)
+* Fixed an issue that caused fraction methylated estimates to appear downwardly biased. (25260)
+* The ``metrics2`` web service now returns **all** values as numeric types. (24951)
+* The ``metrics2`` web service now formats metrics as columns, and jobs as rows. (25032)
+
+###Installation###
+* Improved error-handling in cases where the installation cannot access a DNS server. (25213)
+* Added a check when starting an upgrade or installation to verify that the system runs a GNU/Linux OS, uses an x86_64 (64-bit) machine architecture, and provides ``libc-2.5`` or later. (25379)
+* Fixed an issue where parameters in ``smrtpipe.rc`` were not being set correctly for **non-SGE** job management systems. (25261)
+
+###SMRT® Portal###
+* Fixed an issue where job metrics displayed in SMRT Portal reports differed from the metrics downloaded in .csv format using the **View Data** page **Metrics** button. (25248)
+
+***
+
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2014, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
diff --git a/docs/SMRT-Analysis-Software-Installation-v1.4.0.md b/docs/SMRT-Analysis-Software-Installation-v1.4.0.md
new file mode 100644
index 0000000..cd21d38
--- /dev/null
+++ b/docs/SMRT-Analysis-Software-Installation-v1.4.0.md
@@ -0,0 +1,349 @@
+# Introduction #
+* This document describes the basic requirements for installing SMRT® Analysis v1.4.0 on a customer system.
+* This document is for use by Field Service and Support personnel, as well as Customer IT.
+
+# System Requirements
+## Operating System ##
+* SMRT Analysis is **only** supported on:
+    * English-language Ubuntu 8.04
+    * English-language Ubuntu 10.04
+    * English-language RedHat/CentOS 5.3
+    * English-language RedHat/CentOS 5.6
+* SMRT Analysis **cannot** be installed on the Mac OS or Windows.
+* Users with alternate versions of Ubuntu or CentOS will likely encounter library errors when running an initial analysis job. The errors in the ``smrtpipe.log`` file indicate which libraries are needed. Install any missing libraries on your system for an analysis job to complete successfully.
+
+## Running SMRT Analysis in the Cloud ##
+Users who do **not** have access to a server with CentOS 5.6 or later, or Ubuntu 10.04 or later, can use the public Amazon Machine Image (AMI). For details, see the document **Running SMRT Analysis on Amazon**, 
+available from the PacBio® Developer’s Network at http://www.pacbiodevnet.com.
+
+## Software Requirement ##
+* MySQL 5
+* bash
+* Perl (v5.8.8)
+
+### Ubuntu:###
+* ``aptitude install mysql-server libxml-parser-perl liblapack3gf libssl0.9.8``
+
+### CentOS 5:###
+* ``yum install mysql-server perl-XML-Parser libgfortran libgfortran44 openssl redhat-lsb``
+
+### CentOS 6:###
+* ``yum install mysql-server perl-XML-Parser compat-libgfortran-41 openssl098e redhat-lsb``
+
+### Client web browser: ###
+We recommend using Firefox® 15 or Google Chrome® 21 web browsers to run SMRT Portal for consistent functionality. We also support Apple’s Safari® and Internet Explorer® web browsers; however some features may not be optimized on these browsers.
+
+### Client Java: ###
+To run SMRT View, we recommend using Java 7 for Windows (Java 7 64-bit for users with a 64-bit OS), and Java 6 for the Mac OS.
+
+## Minimum Hardware Requirements ##
+### 1 head node:###
+* Minimum 16 GB RAM. Larger references such as human may require 32 GB RAM.
+* Minimum 250 GB of disk space
+
+### 3 compute nodes:###
+* 8 cores per node, with 2 GB RAM per core
+* Minimum 250 GB of disk space per node
+* To perform _de novo_ assembly of large genomes using the Celera® Assembler, one of the nodes will need to have considerably more memory. See the Celera Assembler home page for recommendations: http://wgs-assembler.sourceforge.net/.
+
+### Data storage:###
+* 10 TB (Actual storage depends on usage.)
+
+### Network File System Requirement###
+* NFS mounts to the input locations (metadata.xml, bas.h5 files, and so on).
+* NFS mounts to the output locations (``$SEYMOUR_HOME/common/userdata``).
+* ``$SEYMOUR_HOME`` should be viewable by **all** compute nodes.
+* Compute nodes must be able to write back to the job directory.
+
+# Installation and Upgrade Summary
+
+Following are the steps for installing SMRT Analysis v1.4.0. For further details, click the links.
+
+1. Select an installation directory to assign to the ``$SEYMOUR_HOME`` environment variable. In this summary, we use ``/opt/smrtanalysis``. 
+
+2. Decide on a sudo user who will perform the installation. In this summary, we use ``<thisuser>``, who belongs to ``<thisgroup>``. 
+
+3. [Extract the tarball](#Step3) and softlink the directories:
+```
+tar -C /opt -xvvzf <tarball_name>.tgz
+rm /opt/smrtanalysis    # only if the link already exists
+ln -s /opt/smrtanalysis-1.4.0 /opt/smrtanalysis
+sudo chown -R <thisuser>:<thisgroup> /opt/smrtanalysis-1.4.0
+```
+4. Edit the setup script ``/opt/smrtanalysis-1.4.0/etc/setup.sh`` to match your installation location:
+```
+SEYMOUR_HOME=/opt/smrtanalysis
+```
+
+5. Run the appropriate script: 
+  * **Option 1**: If you are performing a **fresh** installation, run the [installation script](#Step5Install):
+```
+  /opt/smrtanalysis/etc/scripts/postinstall/configure_smrtanalysis.sh
+```
+  * **Option 2**: If you are **upgrading** and want to preserve SMRT Cells, jobs, and users from a previous installation: Turn off services and run the [upgrade script](#Step5Upgrade).
+```
+  /opt/smrtanalysis-<old-version-number>/etc/scripts/tomcatd stop
+  /opt/smrtanalysis-<old-version-number>/etc/scripts/kodosd stop
+  /opt/smrtanalysis/etc/scripts/postinstall/upgrade_and_configure_smrtanalysis.sh
+```
+6. Set up [distributed computing](#Step6) by deciding on a job management system (JMS), then edit the following files:
+```
+/opt/smrtanalysis/analysis/etc/smrtpipe.rc
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/start.tmpl
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/interactive.tmpl
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/kill.tmpl
+/opt/smrtanalysis/redist/tomcat/webapps/smrtportal/WEB-INF/web.xml
+```
+**Note:** If you are **not** using SGE, you will need to **deactivate** the Celera Assembler protocols so that they do **not** display in SMRT Portal. To do so, rename the following files, located in ``common/protocols``:
+```
+RS_CeleraAssembler.1.xml to RS_CeleraAssembler.1.bak
+filtering/CeleraAssemblerSFilter.1.xml to CeleraAssemblerSFilter.1.bak
+assembly/CeleraAssembler.1.xml to CeleraAssembler.1.bak
+```
+7. **New Installations only**: [Set up user data folders](#Step7) that point to external storage.
+
+8. **New Installations only**: [Set up SMRT Portal](#Step8).
+
+9. [Start](#Step9) the SMRT Portal and Automatic Secondary Analysis Services.
+
+10. [Verify](#Step10) the installation.
+
+## Bundled with SMRT® Analysis ##
+The following are bundled within the application and should **not** depend on what is already deployed on the system.
+* Java® 1.6
+* Python® 2.5.2
+* Tomcat™ 7.0.23
+
+## Changes from SMRT® Analysis v1.3.3 ##
+See **SMRT Analysis Release Notes (v1.4.0)** for changes and known issues. The latest version of the document resides on the Pacific Biosciences DevNet site; you can link to it from the main SMRT Analysis web page.
+
+## <a name="Step3"></a> Step 3: Extract the Tarball
+
+Extract the tarball to its final destination - this creates a ``smrtanalysis-1.4.0/`` directory. Be sure to use the tarball appropriate to your system - Ubuntu or CentOS.
+
+**Note**: You need to run these commands as sudo if you do not have permission to write to the install folder. If the extracted folder is **not** owned by the user performing the installation (``/opt`` is typically owned by root), change the ownership of the folder and all its contents. 
+
+Example: To change permissions within ``/opt``:
+```
+sudo chown -R <thisuser>:<thisgroup> smrtanalysis-1.4.0
+```
+
+We recommend deploying to ``/opt``:
+```
+tar -C /opt -xvvzf <tarball_name>.tgz
+```
+
+We also recommend creating a symbolic link, ``/opt/smrtanalysis``, that points to ``/opt/smrtanalysis-1.4.0``:
+```
+ln -s /opt/smrtanalysis-1.4.0 /opt/smrtanalysis
+```
+
+This makes subsequent upgrades transparent: you simply repoint the symbolic link at the newly extracted tarball directory.
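+
+For example (the version number here is hypothetical), a later upgrade would only need the link updated:
+```
+ln -sfn /opt/smrtanalysis-1.5.0 /opt/smrtanalysis
+```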
+
+## <a name="Step5Install"></a> Step 5: Run the Installation Script
+
+Run the installation script:
+```
+cd $SEYMOUR_HOME/etc/scripts/postinstall
+./configure_smrtanalysis.sh
+```
+
+The installation script requires the following input:
+* The **system name**. (Default: ``hostname -a``)
+* The **port number** that the services will run under. (Default: ``8080``)
+* The Tomcat **shutdown port**. (Default: ``8005``)
+* The **user/group** to run the services and set permissions for the files. (Default: ``smrtanalysis:smrtanalysis``)
+* The **mysql user name and password** to install the database. (Default: ``root:no password``)
+
+The installation script performs the following:
+* Creates the SMRT Portal database. **Note**: The mysql user performing the install **must** have permissions to alter or create databases. Otherwise, the installer will **reject** the user and prompt for another.
+* Sets the host and port names for various configuration files.
+* Sets the Tomcat/kodos user. The services will run as the specified user.
+* Sets the user and group permissions and ownership of the application to the Tomcat user.
+* Adds links in ``/etc/init.d`` to the Tomcat and kodos services. (The defaults are: ``/etc/init.d/kodosd`` and ``/etc/init.d/tomcatd``.) These are soft links to the actual service files within the application. If a file is already present (for example, tomcatd is already installed), the link can be created with a different name. The permissions of the underlying scripts are limited to the user running the services.
+* Installs the services. The services will automatically restart if the system restarts. (On CentOS, the installer will run ``chkconfig`` to install the services, rather than ``update-rc.d``.)
+
+**Note**: The installer will attempt to run without sudo access first. If this fails, the installer will prompt the user for a sudo password and retry.
+
+## <a name="Step5Upgrade"></a> Step 5, Option 2: Run the Upgrade Script
+
+If you are **upgrading** from v1.3.3 to v1.4.0 and want to preserve SMRT Cells, jobs, and users from a previous installation:
+
+Run ``upgrade_and_configure_smrtanalysis.sh`` to update the database schema and the reference repository entries:
+```
+cd $SEYMOUR_HOME/etc/scripts/postinstall
+./upgrade_and_configure_smrtanalysis.sh
+```
+
+Skip setting up the services: (These should already exist from the previous installation.)
+```
+Now creating symbolic links in /etc/init.d. Continue? [Y/n] n
+```
+
+## <a name="Step6"></a> Step 6: Set up Distributed Computing
+
+SMRT Analysis provides support for distributed computation using an existing job management system. Pacific Biosciences has explicitly validated Sun Grid Engine (SGE), LSF and PBS.
+
+**Note**: Celera Assembler 7.0 will **only** work correctly with the SGE job management system. If you are **not** using SGE, you will need to **deactivate** the Celera Assembler protocols so that they do **not** display in SMRT Portal. To do so, rename the following files, located in ``common/protocols``:
+```
+RS_CeleraAssembler.1.xml to RS_CeleraAssembler.1.bak
+filtering/CeleraAssemblerSFilter.1.xml to CeleraAssemblerSFilter.1.bak
+assembly/CeleraAssembler.1.xml to CeleraAssembler.1.bak
+```
+
+This section describes setup for SGE and gives guidance for extensions to other Job Management Systems.
+
+### Smrtpipe.rc Configuration
+Following are the options in the ``$SEYMOUR_HOME/analysis/etc/smrtpipe.rc`` file that you can set to execute distributed SMRT Pipe runs.
+
+**Link to the SMRT Pipe section when ready**
+
+### Configuring Templates 
+
+The central components for setting up distributed computing in SMRT Analysis are the **Job Management Templates** (JMTs). JMTs provide a flexible format for specifying how SMRT Analysis communicates with the resident JMS. There are **two** templates which must be modified for your system:
+
+* ``start.tmpl`` is the legacy template used for assembly algorithms.
+* ``interactive.tmpl`` is the new template used for resequencing algorithms. The difference between the two is the additional requirement of a sync option in ``interactive.tmpl``. (``kill.tmpl`` is not used.)
+
+**Note**: We are in the process of converting **all** protocols to use only interactive.tmpl.
+
+To customize a JMS for a particular environment, edit or create ``start.tmpl`` and ``interactive.tmpl``. For example, the installation includes the following sample start.tmpl and interactive.tmpl (respectively) for SGE:
+```
+qsub -pe smp ${NPROC} -S /bin/bash -V -q secondary -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} ${EXTRAS} ${CMD}
+qsub -S /bin/bash -sync y -V -q secondary -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} -pe smp ${NPROC} ${CMD}
+```
+### To support a new JMS:
+
+1. Create a new directory in ``etc/cluster/`` under ``NEW_NAME``.
+2. In ``smrtpipe.rc``, change the ``CLUSTER_MANAGER`` variable to ``NEW_NAME``, as described in “Smrtpipe.rc Configuration”.
+3. Once you have a new JMS directory specified, edit the ``interactive.tmpl`` and ``start.tmpl`` files for your particular setup.
+
+Sample SGE, LSF and PBS templates are included with the installation in ``$SEYMOUR_HOME/analysis/etc/cluster``.
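+
+A minimal sketch of those three steps for a hypothetical JMS named ``NEW_NAME``, assuming the SGE samples live in an ``SGE`` subdirectory (as the ``CLUSTER_MANAGER`` default suggests):
+```
+mkdir $SEYMOUR_HOME/analysis/etc/cluster/NEW_NAME
+cp $SEYMOUR_HOME/analysis/etc/cluster/SGE/*.tmpl $SEYMOUR_HOME/analysis/etc/cluster/NEW_NAME/
+# Edit start.tmpl and interactive.tmpl for your scheduler, then set
+# CLUSTER_MANAGER to NEW_NAME in $SEYMOUR_HOME/analysis/etc/smrtpipe.rc
+```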
+
+### Specifying the SGE Job Management System:
+
+For this version (v1.4.0), you must still edit **both** ``interactive.tmpl`` and ``start.tmpl`` as follows:
+
+1. Change ``secondary`` to the queue name on your system. (This is the ``-q`` option.) 
+2. Change ``smp`` to the parallel environment on your system. (This is the ``-pe`` option.) 
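+
+For example, with a hypothetical queue named ``all.q`` and parallel environment ``threads``, the edited ``interactive.tmpl`` would read:
+```
+qsub -S /bin/bash -sync y -V -q all.q -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} -pe threads ${NPROC} ${CMD}
+```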
+
+### Specifying the PBS Job Management System
+
+PBS does **not** have a ``-sync`` option, so the ``interactive.tmpl`` file runs a script named ``qsw.py`` to simulate the functionality. You must edit **both** ``interactive.tmpl`` and ``start.tmpl``. 
+
+1. Change the queue name to one that exists on your system. (This is the ``-q`` option.) 
+2. Change the parallel environment to one that exists on your system. (This is the ``-pe`` option.) 
+3. Make sure that ``interactive.tmpl`` passes the ``-PBS`` option.
+
+### Specifying the LSF Job Management System
+
+Create an ``interactive.tmpl`` file by copying the ``start.tmpl`` file and adding the ``-K`` option to the ``bsub`` call. Alternatively, edit the sample LSF templates.
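+
+A sketch of what the two LSF templates might look like (the queue name is illustrative; only ``-K``, which makes ``bsub`` block until the job finishes, differs between them):
+```
+# start.tmpl
+bsub -n ${NPROC} -q secondary -J ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} ${CMD}
+# interactive.tmpl
+bsub -K -n ${NPROC} -q secondary -J ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} ${CMD}
+```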
+
+### Specifying other Job Management Systems
+
+We have **not** tested the ``-sync`` functionality on other systems. Find the equivalent of the ``-sync`` option for your JMS and create an ``interactive.tmpl`` file. If there is **no** ``-sync`` option available, you may need to edit the ``qsw.py`` script in ``$SEYMOUR_HOME/analysis/lib/python2.7/pbpy-0.1-py2.7.egg/EGG-INFO/scripts/qsw.py`` to add additional options for wrapping jobs on your system. 
+
+The code for PBS and SGE looks like the following: 
+```
+if '-PBS' in args:
+    args.remove('-PBS')
+    self.jobIdDecoder   = PBS_JOB_ID_DECODER
+    self.noJobFoundCode = PBS_NO_JOB_FOUND_CODE
+    self.successCode    = PBS_SUCCESS_CODE
+    self.qstatCmd       = "qstat"
+else:
+    self.jobIdDecoder   = SGE_JOB_ID_DECODER
+    self.noJobFoundCode = SGE_NO_JOB_FOUND_CODE
+    self.successCode    = SGE_SUCCESS_CODE
+    self.qstatCmd       = "qstat -j"
+```
+### Configuring SMRT Portal
+
+Running jobs in distributed mode is **disabled by default** in SMRT Portal.
+To enable distributed processing, set the ``jobsAreDistributed`` value in ``$SEYMOUR_HOME/redist/tomcat/webapps/smrtportal/WEB-INF/web.xml`` to true: 
+```
+<context-param>
+    <param-name>jobsAreDistributed</param-name>
+    <param-value>true</param-value>
+</context-param>
+```
+You will need to restart Tomcat.
+
+The upgrade process will port over the configuration settings from the previous version.
+
+## <a name="Step7"></a> Step 7: (New Installations Only) Set Up User Data Folders
+
+SMRT Analysis saves references and results in its own hierarchy. Note that large amounts of data are generated and storage can get filled up. We suggest that you softlink to an **external** directory with more storage.
+
+All jobs and references, as well as drop boxes, are contained in ``$SEYMOUR_HOME/common/userdata``. You can move this folder to another location, then soft link ``$SEYMOUR_HOME/common/userdata`` to the new location. 
+
+**If performing a fresh installation**, for example:
+```
+mv $SEYMOUR_HOME/common/userdata /my_offline_storage
+ln -s /my_offline_storage/userdata $SEYMOUR_HOME/common/userdata
+```
+
+If **upgrading**, you need to point the new build to the external storage location. For example:
+```
+rm $SEYMOUR_HOME/common/userdata
+ln -s /my_offline_storage/userdata $SEYMOUR_HOME/common/userdata
+```
+
+**Note**: The default protocols and underlying support files within ``common/protocols`` and subfolders were updated **significantly** for v1.4.0. We **strongly recommend** that you recreate protocols for v1.4.0 rather than carry over protocols from previous versions.
+
+## <a name="Step8"></a> Step 8: (New Installations Only) Set Up SMRT® Portal
+
+1. Use your web browser to start SMRT Portal: ``http://HOST:PORT/smrtportal``
+2. Click **Register** at the top right.
+3. Create a user named ``administrator`` (all lowercase). This user is special, as it is the only user that does not require activation on creation.
+4. Enter the user name ``administrator``.
+5. Enter an email address. All administrative emails, such as new user registrations, will be sent to this address.
+6. Enter the password and confirm the password.
+7. Select **Click Here** to access **Change Settings**.
+8. To set up the mail server, enter the SMTP server information and click **Apply**. For email authentication, enter a user name and password. You can also enable Transport Layer Security.
+9. To enable automated submission from a PacBio® RS instrument, click **Add** under the Instrument Web Services URI field. Then, enter the following into the dialog box and click **OK**:
+```
+http://INSTRUMENT_PAP01:8081
+```
+``INSTRUMENT_PAP01`` is the IP address or name (pap01) of the instrument.
+``8081`` is the port for the instrument web service.
+
+10. Select the new URI, then click **Test** to check if SMRT Portal can communicate with the instrument service.
+11. (Optional) You can delete the pre-existing instrument entry by clicking **Remove**.
+
+## <a name="Step9"></a> Step 9: Start the SMRT® Portal and Automatic Secondary Analysis Services
+
+1. Start Tomcat: ``sudo $SEYMOUR_HOME/etc/scripts/tomcatd start``
+2. Start kodos: ``sudo /etc/init.d/kodosd start``
+
+## <a name="Step10"></a> Step 10: Verify the installation
+
+Create a test job in SMRT Portal using canned installation data:
+
+1. Open your web browser and clear the browser cache:
+
+  * **Google Chrome**: Choose **Tools > Clear browsing data**. Choose **the beginning of time** from the drop-down list, then check **Empty the cache** and click **Clear browsing data**.
+  * **Internet Explorer**: Choose **Tools > Internet Options > General**, then under Browsing history, click **Delete**. Check **Temporary Internet files**, then click **Delete**.
+  * **Firefox**: Choose **Tools > Options > Advanced**, then click the **Network** tab. In the Cached Web Content section, click **Clear Now**.
+
+2. Refresh the current page by pressing **F5**.
+3. Log into SMRT Portal by navigating to ``http://HOST:PORT/smrtportal``.
+4. Click **Design Job**.
+5. Click **Import and Manage**.
+6. Click **Import SMRT Cells**.
+7. Click **Add**.
+8. Enter ``/opt/smrtanalysis/common/test/primary``, then click **OK**.
+9. Select the new path and click **Scan**. You should get a dialog saying “One input was scanned.” **Note**: If you are upgrading to v1.4.0, this cell will already have been imported into your system. In addition, the input was downsampled to speed up the test and reduce the overall tarball size.
+10. Click **Design Job**.
+11. Click **Create New**.
+12. Enter a job name and comment.
+13. Select the protocol ``RS_Resequencing.1``.
+14. Under **SMRT Cells Available**, select a lambda cell and click the right-arrow button.
+15. Click **Save** on the bottom right, then click **Start**. The job should complete successfully.
+16. Click the **SMRT View** button. SMRT View should open with tracks displayed, and the reads displayed in the Details panel.
+
+***
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2013, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
+
+Pacific Biosciences, the Pacific Biosciences logo, PacBio, SMRT and SMRTbell are trademarks of Pacific Biosciences in the United States and/or certain other countries. All other trademarks are the sole property of their respective owners.
\ No newline at end of file
diff --git a/docs/SMRT-Analysis-Software-Installation-v2.0.1.md b/docs/SMRT-Analysis-Software-Installation-v2.0.1.md
new file mode 100644
index 0000000..d657d05
--- /dev/null
+++ b/docs/SMRT-Analysis-Software-Installation-v2.0.1.md
@@ -0,0 +1,383 @@
+* [System Requirements](#SysReq)
+  * [Operating System](#OS)
+  * [Running SMRT® Analysis in the Cloud](#Cloud)
+  * [Software Requirement](#SoftReq)
+  * [Minimum Hardware Requirements](#HardReq)
+* [Installation and Upgrade Summary](#Summary)
+* [Bundled with SMRT® Analysis](#Bundled)
+* [Changes from SMRT® Analysis v2.0.0](#Changes)
+
+# <a name="SysReq"></a> System Requirements
+
+## <a name="OS"></a> Operating System
+* SMRT® Analysis is **only** supported on:
+    * English-language **Ubuntu 10.04**
+    * English-language **RedHat/CentOS 5.6**
+    * By request, English-language **Ubuntu 8.04** and **RedHat/CentOS 5.3** are temporarily supported as well.
+* SMRT Analysis **cannot** be installed on the Mac OS or Windows.
+* Users with alternate versions of Ubuntu or CentOS will likely encounter library errors when running an initial analysis job. Install any missing libraries (see the packages listed under Software Requirement below) for an analysis job to complete successfully.
+
+
+## <a name="Cloud"></a> Running SMRT® Analysis in the Cloud ##
+Users who do **not** have access to a server with CentOS 5.6 or later, or Ubuntu 10.04 or later, can use the public Amazon Machine Image (AMI). For details, see the document [Running SMRT Analysis on Amazon](https://s3.amazonaws.com/files.pacb.com/software/smrtanalysis/2.0.0/doc/Running SMRT Analysis on Amazon.pdf).
+
+## <a name="SoftReq"></a> Software Requirement ##
+
+* MySQL 5
+* bash
+* Perl (v5.8.8)
+  * Statistics::Descriptive Perl module: `sudo cpan Statistics::Descriptive`
+
+**Ubuntu:** `sudo aptitude install mysql-server libxml-parser-perl liblapack3gf libssl0.9.8`
+
+**CentOS 5:** `sudo yum install mysql-server perl-XML-Parser libgfortran libgfortran44 openssl redhat-lsb`
+
+**CentOS 6:** `sudo yum install mysql-server perl-XML-Parser compat-libgfortran-41 openssl098e redhat-lsb`
+
+
+### Client web browser: ###
+We recommend using Firefox® 15 or Google Chrome® 21 web browsers to run SMRT Portal for consistent functionality. We also support Apple’s Safari® and Internet Explorer® web browsers; however some features may not be optimized on these browsers.
+
+### Client Java: ###
+To run SMRT View, we recommend using Java 7 for Windows (Java 7 64 bit for users with 64 bit OS), and Java 6 for the Mac OS.
+
+## <a name="HardReq"></a> Minimum Hardware Requirements ##
+
+
+### 1 head node:###
+* Minimum 8 cores, with 2 GB RAM per core. We recommend 16 cores with 4 GB RAM per core for _de novo_ assemblies and larger references such as human
+* Minimum 250 GB of disk space
+
+### Compute nodes:###
+* Minimum 3 compute nodes. We recommend 5 nodes for high utilization focused on _de novo_ assemblies
+* Minimum 8 cores per node, with 2 GB RAM per core. We recommend 16 cores per node with 4 GB RAM per core
+* Minimum 250 GB of disk space per node
+* To perform _de novo_ assembly of large genomes using the Celera® Assembler, one of the nodes will need to have considerably more memory. See the Celera® Assembler home page for recommendations: http://wgs-assembler.sourceforge.net/.
+
+**Note:** It is possible, but not advisable, to install SMRT Analysis on a single-node machine (see the distributed computing section). You will likely be able to submit jobs one SMRT Cell at a time, but time to completion may be long, as the software may not have sufficient resources to complete the job.
+
+
+### Data storage: ###
+* 10 TB (Actual storage depends on usage.)
+
+### Network File System Requirement 
+Please refer to the IT Site Prep guide provided with your instrument purchase for more details.
+
+1. The **SMRT Analysis software directory** ``$SEYMOUR_HOME`` **must** have the same path and be **readable** by the smrtanalysis user across **all** compute nodes via **NFS**. Note that the smrtanalysis user is specified during the installation and does not have to be named "smrtanalysis".  
+
+2. The **SMRT Cell input directories** contain data from the PacBio RS. The `metadata.xml` and `bas.h5` files under this directory **must** have the same path and be **readable** by the smrtanalysis user across **all** compute nodes via **NFS**.  
+
+3. The **SMRT Analysis output directory** ``$SEYMOUR_HOME/common/userdata`` **must** have the same path and be **writable** by the smrtanalysis user across **all** compute nodes via **NFS**.  This directory is usually soft-linked to a large storage volume.
+
+4. The **local temporary directory** ``$TMP``, specified in smrtpipe.rc (default: `/tmp/`), **must** be **writable** by the smrtanalysis user and exist as an independent directory on **all** compute nodes.
+
+5. The **shared temporary directory** ``$SHARED_DIR``, specified in smrtpipe.rc (default: `$SEYMOUR_HOME/common/userdata/shared_dir/`), **must** be **writable** by the smrtanalysis user across **all** compute nodes via **NFS**. This requirement is already covered by (3), but is listed again as guidance for users who want to change the location of this directory. A quick write-access check is sketched below.
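+
+A write-access sanity check, run as the smrtanalysis user on **each** compute node (the paths shown are the defaults above):
+```
+touch /tmp/.smrt_write_test && echo "local TMP writable"
+touch $SEYMOUR_HOME/common/userdata/shared_dir/.smrt_write_test && echo "SHARED_DIR writable"
+```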
+
+
+# <a name="Summary"></a> Installation and Upgrade Summary
+
+Following are the steps for installing and upgrading SMRT Analysis v2.0.1. For further details, click the links.
+
+**IMPORTANT**: The upgrade script works **only** from **v2.0.0 to v2.0.1**. If you are using an older version of SMRT Analysis, you can either perform a fresh installation and manually import old SMRT Cells and jobs, or download and upgrade any intermediate versions (v1.3.0, v1.3.1, v1.3.3, v1.4, v2.0.0).  
+
+
+1. Select an installation directory to assign to the ``$SEYMOUR_HOME`` environment variable. The default is ``/opt/smrtanalysis``. 
+
+2. Decide on a user who will perform the installation. We recommend that a system administrator create a special user with sudo privileges. The default is ``smrtanalysis``, who belongs to the ``smrtanalysis`` group. If you are upgrading, the smrtanalysis user is the owner of the previous ``$SEYMOUR_HOME`` directory (check with ``ls -lLd /opt/smrtanalysis``). Although not recommended, it is possible to install SMRT Analysis as a non-sudo user.
+ 
+3. Extract the tarball and softlink the directories:
+  ```
+  tar -C /opt -xvvzf <tarball_name>.tgz
+  rm /opt/smrtanalysis    # only if the link already exists
+  ln -s /opt/smrtanalysis-2.0.1 /opt/smrtanalysis
+  ```
+
+4. Edit ``/opt/smrtanalysis-2.0.1/etc/setup.sh`` to match your installation location:
+  ```
+  SEYMOUR_HOME=/opt/smrtanalysis
+  ```
+
+5. Run the appropriate script: 
+  * **Option 1**: If you are performing a **fresh** installation, run the [installation script](#Step5Install) and start tomcat and kodos:
+  ```
+  /opt/smrtanalysis/etc/scripts/postinstall/configure_smrtanalysis.sh
+  /opt/smrtanalysis/etc/scripts/tomcatd start
+  /opt/smrtanalysis/etc/scripts/kodosd start
+  ```
+  * **Option 2**: If you are **upgrading** and want to preserve SMRT Cells, jobs, and users from a previous installation: Turn off services in the previous installation, run the [upgrade script](#Step5Upgrade), and turn on services in the current installation.  **Note:** Updating the references may take **several hours**.
+  ```
+  /opt/smrtanalysis-<old-version-number>/etc/scripts/kodosd stop
+  /opt/smrtanalysis-<old-version-number>/etc/scripts/tomcatd stop
+  /opt/smrtanalysis/etc/scripts/postinstall/upgrade_and_configure_smrtanalysis.sh
+  /opt/smrtanalysis-<current-version-number>/etc/scripts/tomcatd start
+  /opt/smrtanalysis-<current-version-number>/etc/scripts/kodosd start
+  ```
+6. **New Installations only:** Set up [distributed computing](#Step6) by deciding on a job management system (JMS), then edit the following files:
+```
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/start.tmpl
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/interactive.tmpl
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/kill.tmpl
+/opt/smrtanalysis/redist/tomcat/webapps/smrtportal/WEB-INF/web.xml
+```
+**Note:** If you are **not** using SGE, you will need to **deactivate** the Celera® Assembler protocols so that they do **not** display in SMRT Portal. To do so, rename the following files, located in ``common/protocols``:
+```
+RS_CeleraAssembler.1.xml to RS_CeleraAssembler.1.bak
+filtering/CeleraAssemblerSFilter.1.xml to CeleraAssemblerSFilter.1.bak
+assembly/CeleraAssembler.1.xml to CeleraAssembler.1.bak
+```
+7. **New Installations only**: [Set up user data folders](#Step7) that point to external storage.
+
+8. **New Installations only**: [Set up SMRT Portal](#Step8).
+
+9. [Verify](#Step9) the installation.
+
+## <a name="Bundled"></a> Bundled with SMRT® Analysis ##
+The following are bundled within the application and should **not** depend on what is already deployed on the system.
+* Java® 1.6
+* Python® 2.5.2
+* Tomcat™ 7.0.23
+
+## <a name="Changes"></a> Changes from SMRT® Analysis v2.0.0 ##
+###  New Features
+* Now includes Quiver training for DNA/Polymerase P4.
+* Now includes modification detection using the P4/C2 combination with an updated in silico control. 
+  * Modification identification of 6-methyladenine (6-mA) and 4-methylcytosine (4-mC) is also supported, and is expected to have equivalent performance to previous chemistry releases.
+  * Modification identification of 5-methylcytosine (5-mC) using TET-treated samples is also supported. However, due to a limited training dataset, this application is not yet optimized for the P4/C2 combination. Future releases of the software are expected to have improved TET-converted 5-mC identification as the in silico control is updated with additional training data. 
+
+###  Fixed Issues
+
+* Fixed an Instrument Web Services problem with well status queries. (23191)
+* Removed a time limit on Sun Grid Engine (SGE) jobs that caused analysis jobs to stop after 12 hours. Any limits must now be placed by your IT department; SMRT® Pipe will **not** limit the run time. (23312)
+* Fixed an issue where sample barcodes were not working properly with multi-streamed data files (bax.h5), and most barcodes were not being recognized. (23136)
+* Modified HGAP defaults so that partial alignments are allowed and Celera® Assembler will run on a single node.
+
+## <a name="Step5Install"></a> Step 5, Option 1 Details: Run the Installation script and turn on services
+
+```
+cd /opt/smrtanalysis/etc/scripts/postinstall
+./configure_smrtanalysis.sh
+/opt/smrtanalysis-<current-version-number>/etc/scripts/tomcatd start
+/opt/smrtanalysis-<current-version-number>/etc/scripts/kodosd start
+```
+
+The installation script requires the following input:
+* The **system name**. (Default: ``hostname -a``)
+* The **port number** that the services will run under. (Default: ``8080``)
+* The Tomcat **shutdown port**. (Default: ``8005``)
+* The **user/group** to run the services and set permissions for the files. (Default: ``smrtanalysis:smrtanalysis``)
+* The **mysql user name and password** to install the database. (Default: ``root:no password``)
+* The Job Management System for your distributed system (SGE)
+  * The **queue name**. (Default: ``secondary``)
+  * The **Parallel environment**. (Default: ``smp``)
+
+The installation script performs the following:
+* Creates the SMRT Portal database. The mysql user performing the install **must** have permissions to alter or create databases. Otherwise, the installer will **reject** the user and prompt for another.
+* Sets the host and port names for various configuration files.
+* Sets the Tomcat/kodos user. The services will run as the specified user.
+* Sets the user and group permissions and ownership of the application to the Tomcat user.
+* Adds links in ``/etc/init.d`` to the tomcat and kodos services if invoked as `root`. (The defaults are: ``/etc/init.d/kodosd`` and ``/etc/init.d/tomcatd``.) These are soft links to the actual service files within the application. If a file is already present (for example, tomcatd is already installed), the link can be created with a different name. The permissions of the underlying scripts are limited to the user running the services.
+* Installs the services. The services will automatically restart if the system restarts. (On CentOS, the installer will run ``chkconfig`` to install the services, rather than ``update-rc.d``.)
+
+
+## <a name="Step5Upgrade"></a> Step 5, Option 2 Details: Run the Upgrade Script
+
+Run the ``upgrade_and_configure_smrtanalysis.sh`` script. This may take **several hours** if you have many references to upgrade:
+  ```
+  /opt/smrtanalysis-<old-version-number>/etc/scripts/kodosd stop
+  /opt/smrtanalysis-<old-version-number>/etc/scripts/tomcatd stop
+  cd /opt/smrtanalysis/etc/scripts/postinstall/
+  ./upgrade_and_configure_smrtanalysis.sh
+  /opt/smrtanalysis-<current-version-number>/etc/scripts/tomcatd start
+  /opt/smrtanalysis-<current-version-number>/etc/scripts/kodosd start
+  ```
+
+The upgrade script performs the following:
+* Preserves SMRT Cells, jobs, and users from a previous installation by updating any smrtportal database schema changes.  
+* Preserves SMRT Cells, jobs, and users from a previous installation by updating the softlink to the ``userdata`` directory.
+* Preserves computing configurations from a previous installation such that steps 6-8 do not need to be repeated.  
+* The upgrade script does **not** port over protocols that were defined in previous versions. This is because protocol files can vary a great deal between versions due to rapid code development and change.  Please recreate any custom protocols you may have.
+
+
+## <a name="Step6"></a> Step 6 Details: (New Installations Only) Set up Distributed Computing
+
+SMRT Analysis provides support for distributed computation using an existing job management system. Pacific Biosciences has explicitly validated Sun Grid Engine (SGE), LSF and PBS.  You only need to configure the software once during initial install.  The upgrade process will port over any configuration settings from the previous version.  This section describes setup for SGE and gives guidance for extensions to other Job Management Systems.
+
+**Note**: Celera® Assembler 7.0 will **only** work correctly with the SGE job management system. If you are **not** using SGE, you will need to **deactivate** the Celera® Assembler protocols so that they do **not** display in SMRT Portal. To do so, rename the following files, located in ``common/protocols``:
+```
+RS_CeleraAssembler.1.xml to RS_CeleraAssembler.1.bak
+filtering/CeleraAssemblerSFilter.1.xml to CeleraAssemblerSFilter.1.bak
+assembly/CeleraAssembler.1.xml to CeleraAssembler.1.bak
+```
+
+
+### Configuring SMRT Portal
+
+Running jobs in distributed mode is **disabled by default** in SMRT Portal. To enable distributed processing, set the `jobsAreDistributed` value in `/opt/smrtanalysis/redist/tomcat/webapps/smrtportal/WEB-INF/web.xml` to ``true``, and then restart Tomcat: 
+
+```
+<context-param>
+    <param-name>jobsAreDistributed</param-name>
+    <param-value>true</param-value>
+</context-param>
+```
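+
+Then restart Tomcat using the bundled service scripts (the same scripts used in Step 5 above):
+```
+/opt/smrtanalysis/etc/scripts/tomcatd stop
+/opt/smrtanalysis/etc/scripts/tomcatd start
+```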
+
+### Smrtpipe.rc Configuration
+Following are the options in the ``/opt/smrtanalysis/analysis/etc/smrtpipe.rc`` file that you can set to execute distributed SMRT Pipe runs.
+
+* ``CLUSTER_MANAGER`` Default value: **SGE**   Text string that points to template files in ``/opt/smrtanalysis/analysis/etc/cluster/``. These files communicate with the Job Management System. SGE is officially supported, but adding new JMSs is straightforward.
+
+
+* ``EXIT_ON_FAILURE`` Default value: **False**   The default behavior is to continue executing tasks as long as possible. Set to ``True`` to specify that smrtpipe.py **not** submit any additional tasks after a failure.
+
+
+* ``MAX_CHUNKS`` Default value: **64**   SMRT Pipe splits inputs into ‘chunks’ during distributed computing. Different tasks use different chunking mechanisms, but ``MAX_CHUNKS`` sets the maximum number of chunks any file or task will be split into. This also affects the maximum number of tasks, and the size of the graph for a job.
+
+
+* ``MAX_THREADS`` Default value: **8**   SMRT Pipe uses one thread per active task to launch, block, and monitor return status for each task. This option limits the number of active threads for a single job. Additional tasks will wait until a thread is freed up before launching.
+
+
+* ``MAX_SLOTS`` Default value: **256**   SMRT Pipe cluster resource management is controlled by the ‘slots’ mechanism. ``MAX_SLOTS`` limits the total number of concurrent slots used by a single job. In a non-distributed environment, this roughly determines the total number of cores to be used at once.
+
+
+* ``NJOBS`` Default value: **64**   Specifies the number of jobs to submit for a distributed job. This applies only to assembly workflows (S_* modules).
+
+
+* ``NPROC`` Default value: **15**
+ * Determines the number of JMS ‘slots’ reserved by compute-intensive tasks.
+ * Determines the number of cores that compute-intensive tasks will attempt to use.
+ * In a distributed environment, NPROC should be at most (total slots - 1). This allows an I/O-heavy single-process task to share a node with a CPU-intensive task that would not otherwise be using the I/O.
+
+
+
+* ``SHARED_DIR`` Default value: **$SEYMOUR_HOME/common/userdata/shared_dir/**.  A **shared writable directory** visible to all nodes.  Used for sharing temporary files that can be used by more than one compute process.
+
+* ``TMP`` Default value: **/tmp/**   Specifies the **local** temporary storage location for creation of temporary files and directories used for fast read/write access. For optimal performance, this should have at least 100 GB of free space. **Important:** Make sure to change this to an **actual** temporary location on the head node and compute nodes. Your jobs will **fail** if the path does not exist.
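+As an illustration, a distributed SGE setup on nodes with 16 slots each and a local ``/scratch`` volume might use values like the following (a sketch; match the ``KEY = value`` style already present in your ``smrtpipe.rc``, and substitute paths that actually exist on your nodes):
+```
+CLUSTER_MANAGER = SGE
+NPROC = 15
+MAX_THREADS = 8
+MAX_SLOTS = 256
+TMP = /scratch
+SHARED_DIR = /opt/smrtanalysis/common/userdata/shared_dir
+```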
+
+
+### Configuring Templates 
+
+The central components for setting up distributed computing in SMRT Analysis are the **Job Management Templates** (JMTs). JMTs provide a flexible format for specifying how SMRT Analysis communicates with the resident Job Management System (JMS). There are **two** templates that must be modified for your system:
+
+* ``start.tmpl`` is the legacy template used for assembly algorithms.
+* ``interactive.tmpl`` is the new template used for resequencing algorithms. The difference between the two is the additional requirement of a sync option in ``interactive.tmpl``. (``kill.tmpl`` is not used.)
+
+**Note**: We are in the process of converting **all** protocols to use only interactive.tmpl.
+
+To customize a JMS for a particular environment, edit or create ``start.tmpl`` and ``interactive.tmpl``. For example, the installation includes the following sample start.tmpl and interactive.tmpl (respectively) for SGE:
+```
+qsub -pe smp ${NPROC} -S /bin/bash -V -q secondary -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} ${EXTRAS} ${CMD}
+qsub -S /bin/bash -sync y -V -q secondary -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} -pe smp ${NPROC} ${CMD}
+```
+### To support a new JMS:
+
+1. Create a new directory in ``etc/cluster/`` under ``NEW_NAME``.
+2. In ``smrtpipe.rc``, change the ``CLUSTER_MANAGER`` variable to ``NEW_NAME``, as described in “Smrtpipe.rc Configuration”.
+3. Once you have a new JMS directory specified, edit the ``interactive.tmpl`` and ``start.tmpl`` files for your particular setup.
+
+Sample SGE, LSF and PBS templates are included with the installation in ``/opt/smrtanalysis/analysis/etc/cluster``.
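+For example, to bootstrap a new configuration from the bundled SGE samples (a sketch; ``NEW_NAME`` is whatever label you set in ``smrtpipe.rc``):
+```
+cd /opt/smrtanalysis/analysis/etc/cluster
+mkdir NEW_NAME
+cp SGE/start.tmpl SGE/interactive.tmpl NEW_NAME/
+```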
+
+### Specifying the SGE Job Management System:
+
+For this version (v2.0.1), you must still edit **both** ``interactive.tmpl`` and ``start.tmpl`` as follows:
+
+1. Change ``secondary`` to the queue name on your system. (This is the ``-q`` option.) 
+2. Change ``smp`` to the parallel environment on your system. (This is the ``-pe`` option.) 
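+For instance, if your cluster used a queue named ``all.q`` and a parallel environment named ``orte`` (both hypothetical names), the edited ``start.tmpl`` command would read:
+```
+qsub -pe orte ${NPROC} -S /bin/bash -V -q all.q -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} ${EXTRAS} ${CMD}
+```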
+
+### Specifying the PBS Job Management System
+
+PBS does **not** have a ``-sync`` option, so the ``interactive.tmpl`` file runs a script named ``qsw.py`` to simulate the functionality. You must edit **both** ``interactive.tmpl`` and ``start.tmpl``. 
+
+1. Change the queue name to one that exists on your system. (This is the ``-q`` option.) 
+2. Change the parallel environment to one that exists on your system. (This is the ``-pe`` option.) 
+3. Make sure that ``interactive.tmpl`` calls the ``-PBS`` option.
+
+### Specifying the LSF Job Management System
+
+Create an ``interactive.tmpl`` file by copying the ``start.tmpl`` file and adding the ``-K`` functionality to the ``bsub`` call. Alternatively, edit the sample LSF templates.
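+A minimal LSF ``interactive.tmpl`` might then contain a single ``bsub`` line such as the following (a sketch using the same template variables as the SGE examples; ``-K`` makes ``bsub`` block until the job completes, and the queue name and slot count must be adjusted for your site):
+```
+bsub -K -q secondary -n ${NPROC} -J ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} ${CMD}
+```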
+
+### Specifying other Job Management Systems
+
+We have **not** tested the ``-sync`` functionality on other systems. Find the equivalent of the ``-sync`` option for your JMS and create an ``interactive.tmpl`` file. If there is **no** ``-sync`` option available, you may need to edit the ``qsw.py`` script in ``/opt/smrtanalysis/analysis/lib/python2.7/pbpy-0.1-py2.7.egg/EGG-INFO/scripts/qsw.py`` to add additional options for wrapping jobs on your system. 
+
+The code for PBS and SGE looks like the following: 
+```
+if '-PBS' in args:
+    args.remove('-PBS')
+    self.jobIdDecoder   = PBS_JOB_ID_DECODER
+    self.noJobFoundCode = PBS_NO_JOB_FOUND_CODE
+    self.successCode    = PBS_SUCCESS_CODE
+    self.qstatCmd       = "qstat"
+else:
+    self.jobIdDecoder   = SGE_JOB_ID_DECODER
+    self.noJobFoundCode = SGE_NO_JOB_FOUND_CODE
+    self.successCode    = SGE_SUCCESS_CODE
+    self.qstatCmd       = "qstat -j"
+```
+### Configuring Submit hosts for Celera® Assembler
+To run Celera® Assembler on a distributed infrastructure, **all** the execute hosts in your queue must also be submit hosts. You can add submit hosts by executing `qconf -as <hostname>` in SGE.
+
+
+## <a name="Step7"></a> Step 7 Details: (New Installations Only) Set Up User Data Folders
+
+SMRT Analysis saves references and results in its own hierarchy. Note that large amounts of data are generated, so storage can fill up quickly. We suggest that you softlink to an **external** directory with more storage.
+
+All jobs and references, as well as drop boxes, are contained in ``/opt/smrtanalysis/common/userdata``. You can move this folder to another location, then soft link ``/opt/smrtanalysis/common/userdata`` to the new location. 
+
+```
+mv /opt/smrtanalysis/common/userdata /my_offline_storage
+ln -s /my_offline_storage/userdata /opt/smrtanalysis/common/userdata
+```
+
+## <a name="Step8"></a> Step 8 Details: (New Installations Only) Set Up SMRT® Portal
+
+1. Use your web browser to start SMRT Portal: ``http://HOST:PORT/smrtportal``
+2. Click **Register** at the top right.
+3. Create a user named ``administrator`` (all lowercase). This user is special, as it is the only user that does not require activation on creation.
+4. Enter the user name ``administrator``.
+5. Enter an email address. All administrative emails, such as new user registrations, will be sent to this address.
+6. Enter the password and confirm the password.
+7. Select **Click Here** to access **Change Settings**.
+8. To set up the mail server, enter the SMTP server information and click **Apply**. For email authentication, enter a user name and password. You can also enable Transport Layer Security.
+9. To enable automated submission from a PacBio® RS instrument, click **Add** under the Instrument Web
+Services URI field. Then, enter the following into the dialog box and click **OK**:
+```
+http://INSTRUMENT_PAP01:8081
+```
+``INSTRUMENT_PAP01`` is the IP address or name (pap01) of the instrument.
+``8081`` is the port for the instrument web service.
+
+10. Select the new URI, then click **Test** to check if SMRT Portal can communicate with the instrument service.
+11. (Optional) You can delete the pre-existing instrument entry by clicking **Remove**.
+
+## <a name="Step9"></a> Step 9: Verify the installation
+
+Create a test job in SMRT Portal using the provided lambda sequence data.  This is data from a single SMRT cell that has been down-sampled to reduce overall tarball size.  If you are upgrading, this cell will already have been imported into your system, and you can skip to step 10 below.
+
+1. Open your web browser and clear the browser cache:
+  * **Google Chrome**: Choose **Tools > Clear browsing data**. Choose **the beginning of time** from the droplist, then check **Empty the cache** and click **Clear browsing data**.
+  * **Internet Explorer**: Choose **Tools > Internet Options > General**, then under Browsing history, click **Delete**. Check **Temporary Internet files**, then click **Delete**.
+  * **Firefox**: Choose **Tools > Options > Advanced**, then click the **Network** tab. In the Cached Web Content section, click **Clear Now**.
+
+2. Refresh the current page by pressing **F5**.
+3. Log into SMRT Portal by navigating to ``http://HOST:PORT/smrtportal``.
+4. Click **Design Job**.
+5. Click **Import and Manage**.
+6. Click **Import SMRT Cells**.
+7. Click **Add**.
+8. Enter ``/opt/smrtanalysis/common/test/primary/lambda``, then click **OK**.
+9. Select the new path and click **Scan**. You should get a dialog saying "One input was scanned."
+10. Click **Design Job**.
+11. Click **Create New**.
+12. Enter a job name and comment.
+13. Select the protocol ``RS_Resequencing.1``.
+14. Under **SMRT Cells Available**, select a lambda cell and click the right-arrow button.
+15. Click **Save** on the bottom right, then click **Start**. The job should complete successfully.
+16. Click the **SMRT View** button. SMRT View should open with tracks displayed, and the reads displayed in the Details panel.
+
+***
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2013, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
+**P/N 100-250-300**
\ No newline at end of file
diff --git a/docs/SMRT-Analysis-Software-Installation-v2.0.md b/docs/SMRT-Analysis-Software-Installation-v2.0.md
new file mode 100644
index 0000000..38ae344
--- /dev/null
+++ b/docs/SMRT-Analysis-Software-Installation-v2.0.md
@@ -0,0 +1,386 @@
+* [System Requirements](#SysReq)
+  * [Operating System](#OS)
+  * [Running SMRT® Analysis in the Cloud](#Cloud)
+  * [Software Requirement](#SoftReq)
+  * [Minimum Hardware Requirements](#HardReq)
+* [Installation and Upgrade Summary](#Summary)
+* [Bundled with SMRT® Analysis](#Bundled)
+* [Changes from SMRT® Analysis v1.4.0](#Changes)
+
+# <a name="SysReq"></a> System Requirements
+
+## <a name="OS"></a> Operating System
+* SMRT® Analysis is **only** supported on:
+    * English-language **Ubuntu 10.04**
+    * English-language **RedHat/CentOS 5.6**
+    * By request, English-language **Ubuntu 8.04** and **RedHat/CentOS 5.3** are temporarily supported as well.
+* SMRT Analysis **cannot** be installed on the Mac OS or Windows.
+* Users with alternate versions of Ubuntu or CentOS will likely encounter library errors when running an initial analysis job. The errors in the ``smrtpipe.log`` file indicate which libraries are needed. Install any missing libraries on your system for an analysis job to complete successfully.
+
+## <a name="Cloud"></a> Running SMRT® Analysis in the Cloud ##
+Users who do **not** have access to a server with CentOS 5.6 or later or Ubuntu 10.04 or later can use the public Amazon Machine Image (AMI). For details, see the document [Running SMRT Analysis on Amazon](https://s3.amazonaws.com/files.pacb.com/software/smrtanalysis/2.0.0/doc/Running%20SMRT%20Analysis%20on%20Amazon.pdf).
+
+## <a name="SoftReq"></a> Software Requirement ##
+
+* MySQL 5
+* bash
+* Perl (v5.8.8)
+  * Statistics::Descriptive Perl module: `sudo cpan Statistics::Descriptive`
+
+**Ubuntu:** `sudo aptitude install mysql-server libxml-parser-perl liblapack3gf libssl0.9.8`
+
+**CentOS 5:** `sudo yum install mysql-server perl-XML-Parser libgfortran libgfortran44 openssl redhat-lsb`
+
+**CentOS 6:** `sudo yum install mysql-server perl-XML-Parser compat-libgfortran-41 openssl098e redhat-lsb`
+
+### Client web browser: ###
+We recommend using the Firefox® 15 or Google Chrome® 21 web browsers to run SMRT Portal for consistent functionality. We also support Apple’s Safari® and Internet Explorer® web browsers; however, some features may not be optimized on these browsers.
+
+### Client Java: ###
+To run SMRT View, we recommend using Java 7 for Windows (Java 7 64 bit for users with 64 bit OS), and Java 6 for the Mac OS.
+
+## <a name="HardReq"></a> Minimum Hardware Requirements ##
+
+
+### 1 head node: ###
+* Minimum 8 cores, with 2 GB RAM per core. Recommended: 16 cores with 4 GB RAM per core for _de novo_ assemblies and larger references such as human.
+* Minimum 250 GB of disk space.
+
+### Compute nodes: ###
+* Minimum 3 compute nodes. Recommended: 5 nodes for high utilization focused on _de novo_ assemblies.
+* Minimum 8 cores per node, with 2 GB RAM per core. Recommended: 16 cores per node with 4 GB RAM per core.
+* Minimum 250 GB of disk space per node.
+* To perform _de novo_ assembly of large genomes using the Celera® Assembler, one of the nodes will need considerably more memory. See the Celera® Assembler home page for recommendations: http://wgs-assembler.sourceforge.net/.
+
+**Note:** It is possible, but not advisable, to install SMRT Analysis on a single-node machine (see the distributed computing section). You will likely be able to submit jobs one SMRT Cell at a time, but the time to completion may be long, as the software may not have sufficient resources to complete the job.
+
+
+### Data storage: ###
+* 10 TB (Actual storage depends on usage.)
+
+### Network File System Requirement 
+1.  The **SMRT Analysis software directory** ``$SEYMOUR_HOME`` must have the same path and be _readable_ by the smrtanalysis user across _all_ compute nodes via **NFS**.
+2.  The **SMRT Cell input directories** containing `metadata.xml` and `bas.h5` files must have the same path and be _readable_ by the smrtanalysis user across _all_ compute nodes via **NFS**.
+3.  The **SMRT Analysis output directory** ``$SEYMOUR_HOME/common/userdata`` must have the same path and be _writable_ by the smrtanalysis user across _all_ compute nodes via **NFS**.  This directory is usually soft linked to a large storage volume.
+4.  The **local temporary directory** ``$TMP``, specified in smrtpipe.rc and defaulting to `/scratch/`, must be _writable_ by the smrtanalysis user and exist as an independent directory on _all_ compute nodes.
+5.  The **shared temporary directory** ``$SHARED_DIR``, specified in smrtpipe.rc and defaulting to `$SEYMOUR_HOME/common/userdata/shared_dir/`, must be _writable_ by the smrtanalysis user across _all_ compute nodes via **NFS**.  This functionality is enabled by (3), but is listed again here to provide guidance to users who want to change the location of this directory.
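+One quick way to sanity-check these requirements is to list the directories from each compute node (a sketch; ``compute-node-01`` is a placeholder hostname, and ``/scratch`` is the example ``$TMP`` location):
+```
+# repeat for every compute node, or script this over your node list
+ssh compute-node-01 'ls -ld /opt/smrtanalysis /opt/smrtanalysis/common/userdata /scratch'
+```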
+
+
+# <a name="Summary"></a> Installation and Upgrade Summary
+
+Following are the steps for installing and upgrading SMRT Analysis v2.0. For further details, click the links.
+
+**It is currently impossible to perform skip-level upgrades. The upgrade script expects that you are upgrading from v1.4 to v2.0. If you are using an older version of SMRT Analysis, you can either perform a fresh installation and manually import old SMRT Cells and jobs, or download and upgrade any intermediate versions (v1.3.0, v1.3.1, v1.3.3, v1.4).**   
+
+
+1. Select an installation directory to assign to the ``$SEYMOUR_HOME`` environmental variable. The default is ``/opt/smrtanalysis``. 
+
+2. Decide on a user who will perform the installation.  The default is ``smrtanalysis``, who belongs to the ``smrtanalysis`` group.  If you are upgrading, the smrtanalysis user is the owner of the previous ``$SEYMOUR_HOME`` directory (e.g., ``ls -l /opt/smrtanalysis-1.4.0``).  
+  * **Option 1:** If this user has sudo privileges, the script will prompt you to add tomcatd and kodosd service softlinks to `/etc/init.d/`.  This allows the daemon processes to start automatically after a server reboot via the softlinks.  Tomcat is the web server that allows SMRT Portal to function, and kodos is the automatic secondary analysis daemon that continually pings the instrument for SMRT Cells to import:
+  ```
+  /etc/init.d/tomcatd start
+  /etc/init.d/kodosd start
+  ```
+  * **Option 2:** If this user does not have sudo privileges, you must start the daemon processes in the original directories:
+  ```
+  /opt/smrtanalysis/etc/scripts/tomcatd start
+  /opt/smrtanalysis/etc/scripts/kodosd start
+  ```
+
+3. [Extract the tarball](#Step3) and softlink the directories:
+  ```
+  tar -C /opt -xvvzf <tarball_name>.tgz
+  rm /opt/smrtanalysis    # only if the softlink already exists
+  ln -s /opt/smrtanalysis-2.0.0 /opt/smrtanalysis
+  ```
+4. Edit ``/opt/smrtanalysis-2.0.0/etc/setup.sh`` to match your installation location:
+  ```
+  SEYMOUR_HOME=/opt/smrtanalysis
+  ```
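+  Command-line use of the SMRT Analysis tools also requires sourcing this file so that ``$SEYMOUR_HOME`` and related variables are set in your shell (a sketch; jobs launched through SMRT Portal set up this environment for you):
+  ```
+  source /opt/smrtanalysis/etc/setup.sh
+  ```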
+
+5. Run the appropriate script: 
+  * **Option 1**: If you are performing a **fresh** installation, run the [installation script](#Step5Install) and start tomcat and kodos:
+  ```
+  /opt/smrtanalysis/etc/scripts/postinstall/configure_smrtanalysis.sh
+  /opt/smrtanalysis/etc/scripts/tomcatd start
+  /opt/smrtanalysis/etc/scripts/kodosd start
+  ```
+  * **Option 2**: If you are **upgrading** and want to preserve SMRT Cells, jobs, and users from a previous installation: Turn off services in the previous installation, run the [upgrade script](#Step5Upgrade), and turn on services in the current installation.  **Note:** Updating the references may take **several hours**.
+  ```
+  /opt/smrtanalysis-<old-version-number>/etc/scripts/kodosd stop
+  /opt/smrtanalysis-<old-version-number>/etc/scripts/tomcatd stop
+  /opt/smrtanalysis/etc/scripts/postinstall/upgrade_and_configure_smrtanalysis.sh
+  /opt/smrtanalysis-<current-version-number>/etc/scripts/tomcatd start
+  /opt/smrtanalysis-<current-version-number>/etc/scripts/kodosd start
+  ```
+6. **New Installations only:** Set up [distributed computing](#Step6) by deciding on a job management system (JMS), then edit the following files:
+```
+/opt/smrtanalysis/analysis/etc/smrtpipe.rc
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/start.tmpl
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/interactive.tmpl
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/kill.tmpl
+/opt/smrtanalysis/redist/tomcat/webapps/smrtportal/WEB-INF/web.xml
+```
+**Note:** If you are **not** using SGE, you will need to **deactivate** the Celera Assembler protocols so that they do **not** display in SMRT Portal. To do so, rename the following files, located in ``common/protocols``:
+```
+RS_CeleraAssembler.1.xml to RS_CeleraAssembler.1.bak
+filtering/CeleraAssemblerSFilter.1.xml to CeleraAssemblerSFilter.1.bak
+assembly/CeleraAssembler.1.xml to CeleraAssembler.1.bak
+```
+7. **New Installations only**: [Set up user data folders](#Step7) that point to external storage.
+
+8. **New Installations only**: [Set up SMRT Portal](#Step8).
+
+9. [Verify](#Step9) the installation.
+
+## <a name="Bundled"></a> Bundled with SMRT® Analysis ##
+The following are bundled within the application and should **not** depend on what is already deployed on the system.
+* Java® 1.6
+* Python® 2.5.2
+* Tomcat™ 7.0.23
+
+## <a name="Changes"></a> Changes from SMRT® Analysis v1.4.0 ##
+See [SMRT Analysis Release Notes v2.0](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Release-Notes-v2.0) for changes and known issues. The latest version of the document resides on the Pacific Biosciences DevNet site; you can link to it from the main SMRT Analysis web page.
+
+## <a name="Step3"></a> Step 3 Details: Extract the Tarball
+
+Extract the tarball to its final destination - this creates a ``smrtanalysis-2.0.0/`` directory. Be sure to use the tarball appropriate to your system - Ubuntu or CentOS.  The following examples assume you have downloaded the tarball to the ``/opt`` directory:
+```
+tar -C /opt -xvvzf <tarball_name>.tgz
+```
+
+Create a symbolic link from ``/opt/smrtanalysis`` to ``/opt/smrtanalysis-2.0.0``:
+```
+rm /opt/smrtanalysis    # only if the softlink already exists
+ln -s /opt/smrtanalysis-2.0.0 /opt/smrtanalysis
+```
+
+## <a name="Step5Install"></a> Step 5, Option 1 Details: Run the Installation script and turn on services
+
+```
+cd /opt/smrtanalysis/etc/scripts/postinstall
+./configure_smrtanalysis.sh
+/opt/smrtanalysis-<current-version-number>/etc/scripts/tomcatd start
+/opt/smrtanalysis-<current-version-number>/etc/scripts/kodosd start
+```
+
+The installation script requires the following input:
+* The **system name**. (Default: ``hostname -a``)
+* The **port number** that the services will run under. (Default: ``8080``)
+* The Tomcat **shutdown port**. (Default: ``8005``)
+* The **user/group** to run the services and set permissions for the files. (Default: ``smrtanalysis:smrtanalysis``)
+* The **mysql user name and password** to install the database. (Default: ``root:no password``)
+* The Job Management System for your distributed system (SGE)
+  * The queue name (Default: ``secondary``)
+  * The parallel environment (Default: ``smp``)
+
+The installation script performs the following:
+* Creates the SMRT Portal database. The mysql user performing the install **must** have permissions to alter or create databases. Otherwise, the installer will **reject** the user and prompt for another.
+* Sets the host and port names for various configuration files.
+* Sets the Tomcat/kodos user. The services will run as the specified user.
+* Sets the user and group permissions and ownership of the application to the Tomcat user.
+* Adds links in ``/etc/init.d`` to the Tomcat and kodos services. (The defaults are: ``/etc/init.d/kodosd`` and ``/etc/init.d/tomcatd``.) These are soft links to the actual service files within the application. If a file is already present (for example, tomcatd is already installed), the link can be created with a different name. The permissions of the underlying scripts are limited to the user running the services.
+* Installs the services. The services will automatically restart if the system restarts. (On CentOS, the installer will run ``chkconfig`` to install the services, rather than ``update-rc.d``.)
+
+
+## <a name="Step5Upgrade"></a> Step 5, Option 2 Details: Run the Upgrade Script
+
+Run the ``upgrade_and_configure_smrtanalysis.sh`` script. This may take **several hours** if you have many references to upgrade:
+  ```
+  /opt/smrtanalysis-<old-version-number>/etc/scripts/kodosd stop
+  /opt/smrtanalysis-<old-version-number>/etc/scripts/tomcatd stop
+  cd /opt/smrtanalysis/etc/scripts/postinstall/
+  ./upgrade_and_configure_smrtanalysis.sh
+  /opt/smrtanalysis-<current-version-number>/etc/scripts/tomcatd start
+  /opt/smrtanalysis-<current-version-number>/etc/scripts/kodosd start
+  ```
+
+The upgrade script performs the following:
+* Preserves SMRT Cells, jobs, and users from a previous installation by applying any smrtportal database schema changes.  
+* Preserves SMRT Cells, jobs, and users from a previous installation by updating the softlink to the ``userdata`` directory.
+* Preserves computing configurations from a previous installation such that steps 6-8 do not need to be repeated.  
+* The upgrade script does **not** port over protocols that were defined in previous versions. This is because protocol files can vary a great deal between versions due to rapid code development and change. Please recreate any custom protocols you may have.
+
+
+## <a name="Step6"></a> Step 6 Details: (New Installations Only) Set up Distributed Computing
+
+SMRT Analysis provides support for distributed computation using an existing job management system. Pacific Biosciences has explicitly validated Sun Grid Engine (SGE), LSF and PBS.  You only need to configure the software once during initial install.  The upgrade process will port over any configuration settings from the previous version.  This section describes setup for SGE and gives guidance for extensions to other Job Management Systems.
+
+**Note**: Celera Assembler 7.0 will **only** work correctly with the SGE job management system. If you are **not** using SGE, you will need to **deactivate** the Celera Assembler protocols so that they do **not** display in SMRT Portal. To do so, rename the following files, located in ``common/protocols``:
+```
+RS_CeleraAssembler.1.xml to RS_CeleraAssembler.1.bak
+filtering/CeleraAssemblerSFilter.1.xml to CeleraAssemblerSFilter.1.bak
+assembly/CeleraAssembler.1.xml to CeleraAssembler.1.bak
+```
+
+
+### Configuring SMRT Portal
+
+Running jobs in distributed mode is **disabled by default** in SMRT Portal. To enable distributed processing, set the `jobsAreDistributed` value in `/opt/smrtanalysis/redist/tomcat/webapps/smrtportal/WEB-INF/web.xml` to ``true``, and then restart Tomcat: 
+
+```
+<context-param>
+  <param-name>jobsAreDistributed</param-name>
+  <param-value>true</param-value>
+</context-param>
+```
+
+
+
+
+### Smrtpipe.rc Configuration
+Following are the options in the ``/opt/smrtanalysis/analysis/etc/smrtpipe.rc`` file that you can set to execute distributed SMRT Pipe runs.
+
+* ``CLUSTER_MANAGER`` Default value: **SGE**   Text string that points to template files in ``/opt/smrtanalysis/analysis/etc/cluster/``. These files communicate with the Job Management System. SGE is officially supported, but adding new JMSs is straightforward.
+
+
+* ``EXIT_ON_FAILURE`` Default value: **False**   The default behavior is to continue executing tasks as long as possible. Set to ``True`` to specify that smrtpipe.py **not** submit any additional tasks after a failure.
+
+
+* ``MAX_CHUNKS`` Default value: **64**   SMRT Pipe splits inputs into ‘chunks’ during distributed computing. Different tasks use different chunking mechanisms, but ``MAX_CHUNKS`` sets the maximum number of chunks any file or task will be split into. This also affects the maximum number of tasks, and the size of the graph for a job.
+
+
+* ``MAX_THREADS`` Default value: **8**   SMRT Pipe uses one thread per active task to launch, block, and monitor return status for each task. This option limits the number of active threads for a single job. Additional tasks will wait until a thread is freed up before launching.
+
+
+* ``MAX_SLOTS`` Default value: **256**   SMRT Pipe cluster resource management is controlled by the ‘slots’ mechanism. ``MAX_SLOTS`` limits the total number of concurrent slots used by a single job. In a non-distributed environment, this roughly determines the total number of cores to be used at once.
+
+
+* ``NJOBS`` Default value: **64**   Specifies the number of jobs to submit for a distributed job. This applies only to assembly workflows (S_* modules).
+
+
+* ``NPROC`` Default value: **15**
+ * Determines the number of JMS ‘slots’ reserved by compute-intensive tasks.
+ * Determines the number of cores that compute-intensive tasks will attempt to use.
+ * In a distributed environment, NPROC should be at most (total slots - 1). This allows an I/O-heavy single-process task to share a node with a CPU-intensive task that would not otherwise be using the I/O.
+
+
+
+* ``SHARED_DIR`` Default value: **$SEYMOUR_HOME/common/userdata/shared_dir/**.  A **shared writable directory** visible to all nodes.  Used for sharing temporary files that can be used by more than one compute process.
+
+* ``TMP`` Default value: **/tmp/**   Specifies the **local** temporary storage location for creation of temporary files and directories used for fast read/write access. For optimal performance, this should have at least 100 GB of free space. **Important:** Make sure to change this to an **actual** temporary location on the head node and compute nodes. Your jobs will **fail** if the path does not exist.
+
+
+### Configuring Templates 
+
+The central components for setting up distributed computing in SMRT Analysis are the **Job Management Templates** (JMTs). JMTs provide a flexible format for specifying how SMRT Analysis communicates with the resident Job Management System (JMS). There are **two** templates that must be modified for your system:
+
+* ``start.tmpl`` is the legacy template used for assembly algorithms.
+* ``interactive.tmpl`` is the new template used for resequencing algorithms. The difference between the two is the additional requirement of a sync option in ``interactive.tmpl``. (``kill.tmpl`` is not used.)
+
+**Note**: We are in the process of converting **all** protocols to use only interactive.tmpl.
+
+To customize a JMS for a particular environment, edit or create ``start.tmpl`` and ``interactive.tmpl``. For example, the installation includes the following sample start.tmpl and interactive.tmpl (respectively) for SGE:
+```
+qsub -pe smp ${NPROC} -S /bin/bash -V -q secondary -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} ${EXTRAS} ${CMD}
+qsub -S /bin/bash -sync y -V -q secondary -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} -pe smp ${NPROC} ${CMD}
+```
+### To support a new JMS:
+
+1. Create a new directory in ``etc/cluster/`` under ``NEW_NAME``.
+2. In ``smrtpipe.rc``, change the ``CLUSTER_MANAGER`` variable to ``NEW_NAME``, as described in “Smrtpipe.rc Configuration”.
+3. Once you have a new JMS directory specified, edit the ``interactive.tmpl`` and ``start.tmpl`` files for your particular setup.
+
+Sample SGE, LSF and PBS templates are included with the installation in ``/opt/smrtanalysis/analysis/etc/cluster``.
+
+### Specifying the SGE Job Management System:
+
+For this version (v2.0.0), you must still edit **both** ``interactive.tmpl`` and ``start.tmpl`` as follows:
+
+1. Change ``secondary`` to the queue name on your system. (This is the ``-q`` option.) 
+2. Change ``smp`` to the parallel environment on your system. (This is the ``-pe`` option.) 
+
+### Specifying the PBS Job Management System
+
+PBS does **not** have a ``-sync`` option, so the ``interactive.tmpl`` file runs a script named ``qsw.py`` to simulate the functionality. You must edit **both** ``interactive.tmpl`` and ``start.tmpl``. 
+
+1. Change the queue name to one that exists on your system. (This is the ``-q`` option.) 
+2. Change the parallel environment to one that exists on your system. (This is the ``-pe`` option.) 
+3. Make sure that ``interactive.tmpl`` calls the ``-PBS`` option.
+
+### Specifying the LSF Job Management System
+
+Create an ``interactive.tmpl`` file by copying the ``start.tmpl`` file and adding the ``-K`` functionality to the ``bsub`` call. Alternatively, edit the sample LSF templates.
+
+### Specifying other Job Management Systems
+
+We have **not** tested the ``-sync`` functionality on other systems. Find the equivalent of the ``-sync`` option for your JMS and create an ``interactive.tmpl`` file. If there is **no** ``-sync`` option available, you may need to edit the ``qsw.py`` script in ``/opt/smrtanalysis/analysis/lib/python2.7/pbpy-0.1-py2.7.egg/EGG-INFO/scripts/qsw.py`` to add additional options for wrapping jobs on your system. 
+
+The code for PBS and SGE looks like the following: 
+```
+if '-PBS' in args:
+    args.remove('-PBS')
+    self.jobIdDecoder   = PBS_JOB_ID_DECODER
+    self.noJobFoundCode = PBS_NO_JOB_FOUND_CODE
+    self.successCode    = PBS_SUCCESS_CODE
+    self.qstatCmd       = "qstat"
+else:
+    self.jobIdDecoder   = SGE_JOB_ID_DECODER
+    self.noJobFoundCode = SGE_NO_JOB_FOUND_CODE
+    self.successCode    = SGE_SUCCESS_CODE
+    self.qstatCmd       = "qstat -j"
+```
+### Configuring Submit hosts for Celera Assembler
+To run Celera Assembler on a distributed infrastructure, **all** the execute hosts in your queue must also be submit hosts. You can add submit hosts by executing `qconf -as <hostname>` in SGE.
+
+
+## <a name="Step7"></a> Step 7 Details: (New Installations Only) Set Up User Data Folders
+
+SMRT Analysis saves references and results in its own hierarchy. Note that large amounts of data are generated, so storage can fill up quickly. We suggest that you softlink to an **external** directory with more storage.
+
+All jobs and references, as well as drop boxes, are contained in ``/opt/smrtanalysis/common/userdata``. You can move this folder to another location, then soft link ``/opt/smrtanalysis/common/userdata`` to the new location. 
+
+```
+mv /opt/smrtanalysis/common/userdata /my_offline_storage
+ln -s /my_offline_storage/userdata /opt/smrtanalysis/common/userdata
+```
+
+## <a name="Step8"></a> Step 8 Details: (New Installations Only) Set Up SMRT® Portal
+
+1. Use your web browser to start SMRT Portal: ``http://HOST:PORT/smrtportal``
+2. Click **Register** at the top right.
+3. Create a user named ``administrator`` (all lowercase). This user is special, as it is the only user that does not require activation on creation.
+4. Enter the user name ``administrator``.
+5. Enter an email address. All administrative emails, such as new user registrations, will be sent to this address.
+6. Enter the password and confirm the password.
+7. Select **Click Here** to access **Change Settings**.
+8. To set up the mail server, enter the SMTP server information and click **Apply**. For email authentication, enter a user name and password. You can also enable Transport Layer Security.
+9. To enable automated submission from a PacBio® RS instrument, click **Add** under the Instrument Web
+Services URI field. Then, enter the following into the dialog box and click **OK**:
+```
+http://INSTRUMENT_PAP01:8081
+```
+``INSTRUMENT_PAP01`` is the IP address or name (pap01) of the instrument.
+``8081`` is the port for the instrument web service.
+
+10. Select the new URI, then click **Test** to check if SMRT Portal can communicate with the instrument service.
+11. (Optional) You can delete the pre-existing instrument entry by clicking **Remove**.
+
+## <a name="Step9"></a> Step 9: Verify the installation
+
+Create a test job in SMRT Portal using the provided lambda sequence data.  This is data from a single SMRT cell that has been down-sampled to reduce overall tarball size.  If you are upgrading, this cell will already have been imported into your system, and you can skip to step 10 below.
+
+1. Open your web browser and clear the browser cache:
+  * **Google Chrome**: Choose **Tools > Clear browsing data**. Choose **the beginning of time** from the droplist, then check **Empty the cache** and click **Clear browsing data**.
+  * **Internet Explorer**: Choose **Tools > Internet Options > General**, then under Browsing history, click **Delete**. Check **Temporary Internet files**, then click **Delete**.
+  * **Firefox**: Choose **Tools > Options > Advanced**, then click the **Network** tab. In the Cached Web Content section, click **Clear Now**.
+
+2. Refresh the current page by pressing **F5**.
+3. Log into SMRT Portal by navigating to ``http://HOST:PORT/smrtportal``.
+4. Click **Design Job**.
+5. Click **Import and Manage**.
+6. Click **Import SMRT Cells**.
+7. Click **Add**.
+8. Enter ``/opt/smrtanalysis/common/test/primary/lambda``, then click **OK**.
+9. Select the new path and click **Scan**. You should get a dialog saying "One input was scanned."
+10. Click **Design Job**.
+11. Click **Create New**.
+12. Enter a job name and comment.
+13. Select the protocol ``RS_Resequencing.1``.
+14. Under **SMRT Cells Available**, select a lambda cell and click the right-arrow button.
+15. Click **Save** on the bottom right, then click **Start**. The job should complete successfully.
+16. Click the **SMRT View** button. SMRT View should open with tracks displayed, and the reads displayed in the Details panel.
+
+***
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2013, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
\ No newline at end of file
diff --git a/docs/SMRT-Analysis-Software-Installation-v2.1.md b/docs/SMRT-Analysis-Software-Installation-v2.1.md
new file mode 100644
index 0000000..cefdbf1
--- /dev/null
+++ b/docs/SMRT-Analysis-Software-Installation-v2.1.md
@@ -0,0 +1,346 @@
+* [Important Changes](#ImportantChanges)
+* [System Requirements](#SysReq)
+  * [Operating System](#OS)
+  * [Running SMRT® Analysis in the Cloud](#Cloud)
+  * [Software Requirement](#SoftReq)
+  * [Minimum Hardware Requirements](#HardReq)
+* [Installation and Upgrade Summary](#Summary)
+  * [Step 1: Decide on a user and an installation directory](#Bookmark_DecideInstallDir)
+  * [Step 2: Create and set the installation directory $SMRT_ROOT](#Bookmark_CreateInstallDir)
+* [Installation and Upgrade Detail](#Details)
+  * [Step 3 Option 1: Run the install script](#Bookmark_InstallDetail)
+  * [Step 3 Option 2: Run the upgrade script](#Bookmark_UpgradeDetail)
+  * [Step 4: Set up distributed computing](#Bookmark_DistributedDetail)
+  * [Step 5: Set up SMRT Portal](#Bookmark_SMRTPortalDetail)
+  * [Step 6: Verify install or upgrade](#Bookmark_VerifyDetail)
+* [Optional Configurations](#Optional)
+  * [Set up userdata directory](#Bookmark_UserdataDetail)
+* [Bundled with SMRT® Analysis](#Bundled)
+* [Changes from SMRT® Analysis v2.0.1](#Changes)
+
+
+# <a name="ImportantChanges"></a> Important Changes
+
+SMRT Analysis migrated to a completely new directory structure starting with v2.1. Instead of ``$SEYMOUR_HOME``, we are now using ``$SMRT_ROOT``, and you will **not** need to specify it explicitly.  We still recommend that ``$SMRT_ROOT`` be set to `/opt/smrtanalysis/`, but the underlying folders will be as follows (arrows indicate softlinks):
+
+```
+/opt/smrtanalysis/
+              admin/
+                   bin/
+                   log/
+
+              current --> softlink to ../install/smrtanalysis-2.1.0
+
+              install/
+                 smrtanalysis-<other versions>/
+                 smrtanalysis-2.1.0/
+
+              userdata/  --> softlink to offline storage location
+              
+```
+
+
+
+# <a name="SysReq"></a> System Requirements
+
+## <a name="OS"></a> Operating System
+* SMRT® Analysis is **only** supported on:
+    * English-language **Ubuntu 12.04, Ubuntu 10.04, Ubuntu 8.04** 
+    * English-language **RedHat/CentOS 6.3, RedHat/CentOS 5.6, RedHat/CentOS 5.3**
+* If you are using alternate versions of Ubuntu or CentOS (not recommended), you should download and install the SMRT Analysis executable that is **older** than the OS installed on your system. (For example, if you are running CentOS 6.4, you should run the CentOS 6.3 executable). The software assumes a uniform operating system across **all** compute nodes.  If you have **different** OS versions on your cluster (not recommended), choose an executable that matches the **oldest** OS on you [...]
+
+* Check for any library errors when running an initial ``RS_Resequencing`` analysis job on lambda. Here are some common packages that need to be installed:
+    * **RedHat/CentOS 5.xxx**: Enter `sudo yum install mysql-server perl-XML-Parser openssl redhat-lsb`
+    * **RedHat/CentOS 6.xxx**: Enter `sudo yum install mysql-server perl-XML-Parser openssl098e redhat-lsb`
+    * **Ubuntu 10.xxx**: Enter `sudo aptitude install mysql-server libxml-parser-perl libssl0.9.8`
+* SMRT Analysis **cannot** be installed on the Mac OS or Windows.
+
+
+## <a name="Cloud"></a> Running SMRT® Analysis in the Cloud ##
+Users who do **not** have access to a server with the supported OS can use the public Amazon Machine Image (AMI). For details, see the document [Running SMRT Analysis on Amazon](https://s3.amazonaws.com/files.pacb.com/software/smrtanalysis/2.1.0/Doc/Running%20SMRT%20Analysis%20on%20Amazon.pdf).
+
+## <a name="SoftReq"></a> Software Requirement ##
+
+* MySQL 5 (`yum install mysql-server`; `apt-get install mysql-server`)
+* bash
+* Perl (v5.10.1)
+  * Statistics::Descriptive Perl module: `sudo cpan Statistics::Descriptive`
+
+
+### Client web browser: ###
+We recommend using the Google Chrome® 21 web browser to run SMRT Portal for consistent functionality. We also support Apple’s Safari® and Internet Explorer® web browsers; however, some features may not be optimized on these browsers.
+
+### Client Java: ###
+To run SMRT View, we recommend using Java 7 for Windows (Java 7 64 bit for users with 64 bit OS), and Java 6 for the Mac OS.
+
+## <a name="HardReq"></a> Minimum Hardware Requirements ##
+
+
+### 1 head node: ###
+* Minimum 8 cores, with 2 GB RAM per core. 
+* Minimum 250 GB of disk space.
+
+### Compute nodes: ###
+* Minimum 3 compute nodes. We recommend 5 nodes for high utilization focused on _de novo_ assemblies.
+* Minimum 8 cores per node, with 2 GB RAM per core. We recommend 16 cores per node with 4 GB RAM per core.
+* Minimum 250 GB of disk space per node.
+* To perform _de novo_ assembly of large genomes using the Celera® Assembler, **one** of the nodes will need to have considerably more memory. See the Celera® Assembler home page for recommendations: http://wgs-assembler.sourceforge.net/.
+
+**Notes:** 
+* It is possible, but **not** advisable, to install SMRT Analysis on a single-node machine (see the distributed computing section). You will likely be able to submit jobs one SMRT Cell at a time, but the time to completion may be long as the software may not have sufficient resources to complete the job.  
+
+* The ``RS_ReadsOfInsert`` protocol can be **compute-intensive**. If you plan to run it on every SMRT Cell, we recommend adding 3 additional 8-core compute nodes with at least 4 GB of RAM per core.
+
+### Data storage: ###
+* 10 TB (Actual storage depends on usage.)
+
+### Network File System Requirement 
+Please refer to the IT Site Prep guide provided with your instrument purchase for more details.
+
+1. The **SMRT Analysis software directory** (We recommend `$SMRT_ROOT=/opt/smrtanalysis`) **must** have the same path and be **readable** by the smrtanalysis user across **all** compute nodes via **NFS**.  
+
+2. The **SMRT Cell input directory** (We recommend `$SMRT_ROOT/pacbio_instrument_data/`) **must** have the same path and be **readable** by the smrtanalysis user across **all** compute nodes via **NFS**.  This directory contains data from the instrument and can either be a directory configured by RS Remote during instrument installation, or a directory you created when you received data from a core lab. 
+
+3. The **SMRT Analysis output directory** (We recommend `$SMRT_ROOT/userdata`) **must** have the same path and be **writable** by the smrtanalysis user across **all** compute nodes via **NFS**. This directory is usually soft-linked to a large storage volume.
+
+4. The **SMRT Analysis temporary directory** is used for fast I/O operations during runtime.  The software accesses this directory from `$SMRT_ROOT/tmpdir`; you can softlink this directory manually (see the sketch below) or via the install script.  This directory should be local (**not** NFS-mounted), be writable by the `smrtanalysis` user, and exist as an independent directory on all compute nodes. 
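+For example, to point the temporary directory at a local ``/scratch`` volume (a sketch; ``/scratch`` must already exist on the head node and on every compute node):
+```
+ln -s /scratch /opt/smrtanalysis/tmpdir
+```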
+
+
+# <a name="Summary"></a> Installation and Upgrade Summary
+
+**Please pay close attention as the upgrade procedure has changed.** 
+
+The following instructions apply to **fresh v2.1 installations** and **v2.0.1 to v2.1 upgrades only**.
+* If you are using an **older** version of SMRT Analysis, you can either perform a fresh installation and manually import old SMRT Cells and jobs, or download and upgrade any intermediate versions (v1.4, v2.0.0, v2.0.1).  
+
+<a name="Bookmark_DecideInstallDir"></a> 
+### Step 1. Decide on a user and an installation directory for the SMRT Analysis software suite.
+
+The SMRT Analysis install directory, `$SMRT_ROOT`, can be any directory as long as the smrtanalysis user has read, write, and execute permissions in that directory.  Historically we have referred to `$SMRT_ROOT` as `/opt/smrtanalysis`.  
+
+We recommend that a system administrator create a special user called `smrtanalysis`, who belongs to the `smrtanalysis` group. This user will own all SMRT Analysis files, daemon processes, and smrtpipe jobs.   
+
+
+<a name="Bookmark_CreateInstallDir"></a> 
+### Step 2. Download the .run executable to the same level as $SMRT_ROOT and create $SMRT_ROOT.  
+
+* **Option 1:** The SMRT Analysis user has sudo privileges.
+
+  ```  
+  cd /opt
+  wget http://path/to/smrtanalysis-os-version.run
+
+  SMRT_ROOT=/opt/smrtanalysis
+  sudo mkdir $SMRT_ROOT
+  sudo chown smrtanalysis:smrtanalysis $SMRT_ROOT
+  ```
+
+* **Option 2:** The SMRT Analysis user does **not** have sudo privileges.
+If you do not have sudo privileges, you can install SMRT Analysis as yourself in your home directory or any other directory you wish to use.  However, you still must have root login credentials for the mysql database.
+
+  ```
+  cd /home/<your_username>
+  wget http://path/to/smrtanalysis-os-version.run
+
+  SMRT_ROOT=/home/<your_username>/smrtanalysis
+  mkdir $SMRT_ROOT
+  ```
+
+### Step 3. Run the installer or upgrade script and start services.  
+
+  * **Option 1**: If you are performing a **fresh** installation, run the installation script and start tomcat and kodos.  [See below for more details.](#Bookmark_InstallDetail)
+  ```
+  cd /opt/
+  bash smrtanalysis-2.1.0.Current_Ubuntu-8.04.run --rootdir $SMRT_ROOT
+  $SMRT_ROOT/admin/bin/tomcatd start
+  $SMRT_ROOT/admin/bin/kodosd start
+  ```
+  
+If you need to rerun the script and have already extracted the file, you can rerun using the `--no-extract` option:
+
+  `bash smrtanalysis-2.1.0.Current_Ubuntu-8.04.run --rootdir $SMRT_ROOT --no-extract`
+
+You can see all other options by invoking the `--help` option:
+
+  `bash smrtanalysis-2.1.0.Current_Ubuntu-8.04.run --help`
+
+
+  * **Option 2**: **Please pay close attention as the upgrade procedure has changed.**  The new procedure requires running a script called ``smrtupdater`` from the old v2.0.1 smrtanalysis directory, which takes the path to the new v2.1 installer as an argument.  [See below for more details.](#Bookmark_UpgradeDetail)
+**IMPORTANT: If `$SMRT_ROOT` is a pre-existing symbolic link (e.g. `/opt/smrtanalysis` --> `/opt/smrtanalysis-2.0.1`), you must manually delete the softlink and create a new directory this time only.**
+**IMPORTANT: Make sure you type `SMRT_PATH_ORIG="$PATH"` exactly as shown in the command below and do not replace it with a real path.  Otherwise, the script will error out because it cannot find bash.**
+  ```
+  /opt/smrtanalysis-2.0.1/etc/scripts/kodosd stop
+  /opt/smrtanalysis-2.0.1/etc/scripts/tomcatd stop
+
+  rm /opt/smrtanalysis
+  mkdir /opt/smrtanalysis
+  SMRT_PATH_ORIG="$PATH" SMRT_ROOTDIR="/opt/smrtanalysis" bash /opt/smrtanalysis-2.0.1/admin/bin/smrtupdater /opt/smrtanalysis-2.1.0.Current_Ubuntu-8.04.run
+
+  /opt/smrtanalysis/admin/bin/tomcatd start
+  /opt/smrtanalysis/admin/bin/kodosd start
+  ```
+
+
+**Note:** For future upgrades beyond v2.1, we expect the upgrade command to be `$SMRT_ROOT/admin/bin/smrtupdater /path/to/smrtanalysis-2.1.0.Current_Ubuntu-8.04.run` 
+
+
+### Step 4. **New Installations only:** Set up distributed computing 
+
+Decide on a job management system (JMS). [See below for more details.](#Bookmark_DistributedDetail)
+
+### Step 5. **New Installations only**: Set up SMRT Portal
+
+Register the administrative user and set up the SMRT Portal GUI. [See below for more details.](#Bookmark_SMRTPortalDetail)
+
+### Step 6. Verify the installation. 
+
+Run a sample SMRT Portal job to verify functionality. [See below for more details.](#Bookmark_VerifyDetail)
+
+
+# <a name="Details"></a> Installation and Upgrade Details
+### <a name="Bookmark_InstallDetail"></a> Step 3, Option 1 Details: Run the Installation script and turn on services
+
+The installation script attempts to discover inputs when possible, and performs the following: 
+
+* Looks for valid hostnames (DNS) and IP Addresses. You must choose one from the list.   
+* Assumes that the user running the script is the designated smrtanalysis user.
+* Installs the Tomcat web server. You will be prompted for:
+  * The **port number** that the tomcat service will run under. (Default: ``8080``)
+  * The **port number** that the tomcat service will use to shutdown. (Default: ``8005``)
+* Creates the smrtportal database in mysql. You will be prompted for:
+  * The mysql administrative user name. (Default: ``root``)
+  * The mysql password. (Default:  no password)
+  * The mysql port number. (Default: ``3306``)
+* Attempts to configure the Job Management System (``SGE``, ``LSF``, ``PBS``, or ``NONE``)
+  * The ``$SGE_ROOT`` directory
+  * The ``$SGE_CELL`` directory name
+  * The ``$SGE_BINDIR`` directory that contains all the q-commands
+  * The queue name
+  * The parallel environment
+* Creates and configures special directories:
+  * The ``$TMP`` directory
+  * The ``$USERDATA`` directory 
+
+
+### <a name="Bookmark_UpgradeDetail"></a> Step 3, Option 2 Details: Run the Upgrade Script
+
+The upgrade script performs the following:
+* Checks that the same user is running the upgrade script
+* Checks for running services
+* Checks that the OS and hardware requirements are still met
+* Transfers computing configurations from a previous installation
+* Upgrades any references as necessary
+* Preserves SMRT Cells, jobs, and users from a previous installation by updating smrtportal database schema changes as necessary
+* Preserves special directory settings
+  * Updates the `$SMRT_ROOT/tmpdir` softlink 
+  * Updates the `$SMRT_ROOT/userdata` softlink
+* The upgrade script does **not** port over protocols that were defined in previous versions of SMRT Analysis. This is because protocol files can vary a great deal between versions due to rapid code development and change. Please **recreate** any custom protocols you may have.
+
+
+### <a name="Bookmark_DistributedDetail"></a> Step 4 Details: Set up Distributed Computing
+
+Pacific Biosciences has explicitly validated Sun Grid Engine (SGE), and provides job submission templates for LSF and PBS. You only need to configure the software once, during the initial install. 
+
+#### Configuring Templates 
+
+The central components for setting up distributed computing in SMRT Analysis are the **Job Management Templates**, which provide a flexible format for specifying how SMRT Analysis communicates with the resident Job Management System (JMS). If you are using a non-SGE job management system, you **must** create or edit the following files:
+```
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/start.tmpl
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/interactive.tmpl
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/kill.tmpl
+```
+
+#### Specifying the PBS Job Management System
+
+PBS does **not** have a ``-sync`` option, so the ``interactive.tmpl`` file runs a script named ``qsw.py`` to simulate the functionality. You must edit **both** ``interactive.tmpl`` and ``start.tmpl``. 
+
+1. Change the queue name to one that exists on your system. (This is the ``-q`` option.) 
+2. Change the parallel environment to one that exists on your system. (This is the ``-pe`` option.) 
+3. Make sure that ``interactive.tmpl`` calls the ``-PBS`` option.
+
+#### Specifying the LSF Job Management System
+
+The LSF equivalent of the SGE `-sync` option is `-K`; it should be provided with the `bsub` command in the `interactive.tmpl` file.
+
+1. Change the queue name to one that exists on your system. (This is the `-q` option.) 
+2. Change the parallel environment to one that exists on your system. (This is the `-pe` option.) 
+3. Make sure that ``interactive.tmpl`` calls the `-K` option.
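+Putting these edits together, the resulting ``bsub`` line in ``interactive.tmpl`` might look like this (a sketch; ``normal`` is a placeholder queue name, and the template variables are assumed to match those used in the SGE samples):
+```
+bsub -K -q normal -n ${NPROC} -J ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} ${CMD}
+```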
+
+
+#### Specifying other Job Management Systems
+
+1. Create a new directory `smrtanalysis/current/analysis/etc/cluster/NEW_JMS`.
+2. Edit `smrtanalysis/current/analysis/etc/smrtpipe.rc`, and change the `CLUSTER_MANAGER` variable to `NEW_JMS`.
+3. Once you have a new JMS directory specified, create and edit the `interactive.tmpl`, `start.tmpl`, and `kill.tmpl` files for your particular setup.
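+For example (a sketch, with ``NEW_JMS`` standing in for whatever label you chose):
+```
+cd smrtanalysis/current/analysis/etc/cluster
+mkdir NEW_JMS
+cp SGE/start.tmpl SGE/interactive.tmpl SGE/kill.tmpl NEW_JMS/
+```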
+
+### <a name="Bookmark_SMRTPortalDetail"></a> Step 5 Details: (New Installations Only) Set Up SMRT® Portal
+
+1. Use your web browser to start SMRT Portal: `http://hostname:port/smrtportal`
+2. Click **Register** at the top right.
+3. Create a user named ``administrator`` (all lowercase). This user is special, as it is the only user that does not require activation on creation.
+4. Enter the user name ``administrator``.
+5. Enter an email address. All administrative emails, such as new user registrations, will be sent to this address.
+6. Enter the password and confirm the password.
+7. Select **Click Here** to access **Change Settings**.
+8. To set up the mail server, enter the SMTP server information and click **Apply**. For email authentication, enter a user name and password. You can also enable Transport Layer Security.
+9. To enable automated submission from a PacBio® RS instrument, click **Add** under the Instrument Web
+Services URI field. Then, enter the following into the dialog box and click **OK**:
+```
+http://INSTRUMENT_PAP01:8081
+```
+``INSTRUMENT_PAP01`` is the IP address or name (pap01) of the instrument.
+``8081`` is the port for the instrument web service.
+
+10. Select the new URI, then click **Test** to check if SMRT Portal can communicate with the instrument service.
+11. (Optional) You can delete the pre-existing instrument entry by clicking **Remove**.
+
+### <a name="Bookmark_VerifyDetail"></a> Step 6: Verify the installation
+
+Create a test job in SMRT Portal using the provided lambda sequence data. This is data from a single SMRT cell that has been down-sampled to reduce overall tarball size. If you are upgrading, this cell will already have been imported into your system, and you can skip to step 10 below.
+
+1. Open your web browser and clear the browser cache:
+  * **Google Chrome**: Choose **Tools > Clear browsing data**. Choose **the beginning of time** from the droplist, then check **Empty the cache** and click **Clear browsing data**.
+  * **Internet Explorer**: Choose **Tools > Internet Options > General**, then under Browsing history, click **Delete**. Check **Temporary Internet files**, then click **Delete**.
+  * **Firefox**: Choose **Tools > Options > Advanced**, then click the **Network** tab. In the Cached Web Content section, click **Clear Now**.
+
+2. Refresh the current page by pressing **F5**.
+3. Log into SMRT Portal by navigating to ``http://HOST:PORT/smrtportal``.
+4. Click **Design Job**.
+5. Click **Import and Manage**.
+6. Click **Import SMRT Cells**.
+7. Click **Add**.
+8. Enter ``/opt/smrtanalysis/current/common/test/primary``, then click **OK**.
+9. Select the new path and click **Scan**. You should get a dialog saying "One input was scanned."
+10. Click **Design Job**.
+11. Click **Create New**.
+12. Enter a job name and comment.
+13. Select the protocol ``RS_Resequencing.1``.
+14. Under **SMRT Cells Available**, select a lambda cell and click the right-arrow button.
+15. Click **Save** on the bottom right, then click **Start**. The job should complete successfully.
+16. Click the **SMRT View** button. SMRT View should open with tracks displayed, and the reads displayed in the Details panel.
+
+## <a name="Optional"></a> Optional Configurations ##
+### Set up Userdata folders ###
+
+The userdata folder, `$SMRT_ROOT/userdata`, expands rapidly because it contains all jobs, references, and drop boxes.  We recommend softlinking this folder to an **external** directory with more storage: 
+
+
+```
+mv /opt/smrtanalysis/userdata /path/to/NFS/mounted/offline_storage
+ln -s /path/to/NFS/mounted/offline_storage /opt/smrtanalysis/userdata
+```
+
+## <a name="Bundled"></a> Bundled with SMRT® Analysis ##
+The following are bundled within the application and should **not** depend on what is already deployed on the system.
+* Java® 1.7
+* Python® 2.7
+* Tomcat™ 7.0.23
+
+## <a name="Changes"></a> Changes from SMRT® Analysis v2.0.1 ##
+See [SMRT Analysis Release Notes v2.1](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Release-Notes-v2.1) for changes and known issues. The latest version of this document resides on the Pacific Biosciences DevNet site; you can also link to it from the main SMRT Analysis web page.
+
+
+***
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2013, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
+**P/N 100-262-100**
\ No newline at end of file
diff --git a/docs/SMRT-Analysis-Software-Installation-v2.2.0.md b/docs/SMRT-Analysis-Software-Installation-v2.2.0.md
new file mode 100644
index 0000000..8bcf4d7
--- /dev/null
+++ b/docs/SMRT-Analysis-Software-Installation-v2.2.0.md
@@ -0,0 +1,592 @@
+* [What's New?] (#whats-new)
+  * [Reorganized Directory Structure] (#reorganized-directory-structure)
+  * [Embedded SMRT Portal Application Database] (#embedded-smrt-portal-application-database)
+  * [Bundled Software] (#bundled-software)
+  * [Release Notes] (#release-notes)
+* [Getting Started] (#getting-started)
+  * [Download SMRT Analysis] (#download-smrt-analysis)
+  * [Installation Summary] (#installation-summary)
+  * [Upgrade Summary] (#upgrade-summary)
+  * [Patch Summary] (#patch-summary)
+* [Installation Guide] (#installation-guide)
+  * [System Requirements] (#system-requirements)
+    * [Hardware Guidelines] (#hardware-guidelines)
+    * [Software Prerequisites] (#software-prerequisites)
+  * [Installation Details] (#installation-details)
+    * [Downloading SMRT Analysis] (#downloading-smrt-analysis)
+    * [Create the SMRT Analysis User] (#create-the-smrt-analysis-user)
+    * [Create the Installation Path] (#create-the-installation-path)
+    * [Run the Installer] (#run-the-installer)
+    * [Apply Patches During Installation] (#apply-patches-during-installation)
+    * [Set up Distributed Computing] (#set-up-distributed-computing)
+    * [Start the SMRT Analysis Services] (#start-the-smrt-analysis-services)
+    * [Set up SMRT Portal] (#set-up-smrt-portal)
+    * [Verify the Installation] (#verify-the-installation)
+    * [Optional Configurations] (#optional-configurations)
+  * [Upgrade Details] (#upgrade-details)
+    * [Supported Upgrade Path](#supported-upgrade-path)
+    * [Run the Upgrader] (#run-the-upgrader)
+    * [Applying Patches During Upgrade] (#applying-patches-during-upgrade)
+    * [Start the SMRT Analysis Services] (#start-the-smrt-analysis-services)
+  * [Known Install Problems and Workarounds] (#known-install-problems-and-workarounds)
+    * [Remote Storage Issues] (#remote-storage-issues)
+    * [ACL Problems] (#acl-problems)
+  * [Advanced Deployment](#advanced-deployment)
+    * [Using Amazon Web Services](#using-amazon-web-services)
+
+
+#What's New?#
+
+###Reorganized Directory Structure###
+Starting with SMRT Analysis v2.1.0, a new directory structure is being employed.  Instead of the environment variable ``$SEYMOUR_HOME``, ``$SMRT_ROOT`` is defined as the top-level directory of the SMRT Analysis installation.  Please ensure that these variables are not defined explicitly in any ``setup.sh`` files or elsewhere, such as in user ``.bash*`` files, ``/etc/profile``, or scripts in ``/etc/profile.d/``.  Although not a strict requirement, we recommend ``SMRT_ROOT=/opt/smrtanalysis/``.
+
+Below is a typical directory hierarchy of ``$SMRT_ROOT`` ("`->`" = symbolic link):
+
+```
+/opt/smrtanalysis
+├── admin -> current/admin
+│   ├── bin
+│   └── log
+├── current -> install/smrtanalysis-2.2.0.133377
+├── install
+│   ├── smrtanalysis-2.1.0.128013
+│   ├── smrtanalysis-2.1.1.128514
+│   ├── smrtanalysis-2.1.1.128514-patch-0.1
+│   ├── smrtanalysis-2.2.0.133377
+│   ├── smrtanalysis-2.2.0.133377-patch-1.134216
+│   └── smrtanalysis-2.2.0.133377-patch-2.134913
+├── README
+├── scripts
+├── tmpdir -> /tmp
+└── userdata -> /path/to/NFS/mounted/offline_storage
+    ├── database
+    ├── inputs_dropbox
+    ├── jobs
+    ├── jobs_archive
+    ├── jobs_dropbox
+    ├── log
+    ├── references
+    ├── references_dropbox
+    ├── runtime
+    └── shared_dir
+```
+
+###Embedded SMRT Portal Application Database###
+Pre-built binaries for MySQL Server are now bundled with the SMRT Analysis suite, providing a standalone, isolated environment, free of external system dependencies for the SMRT Portal application database.  This new architecture allows a more seamless installation and upgrade process and provides an additional measure of data security with automated schema backups.
+
+Upon install/upgrade, the default behavior is to embed the database.  All data from the remote database will be migrated to the embedded server.
+
+The default behavior can be overridden by using the ``--no-bundled-db`` option during install/upgrade.  However, users opting to run an external MySQL instance may have limited support options and are discouraged from doing so.
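+
+For example (a sketch; this assumes the ``--no-bundled-db`` option is passed directly to the installer, alongside ``--rootdir``):
+
+```
+smrtanalysis-2.2.0.133377.run --no-bundled-db --rootdir $SMRT_ROOT
+```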
+
+###Bundled Software###
+SMRT Analysis includes the following third-party software packages bundled within the application; the application should **not** depend on what is already deployed on the system.
+
+* Apache Tomcat™ 7.0.23
+* Celera® Assembler 8.1
+* Docutils 0.8.1
+* GMAP (2014-01-21)
+* HMMER 3.1b1 (May 2013)
+* Java™ SE Runtime Environment (build 1.7.0_02-b13)
+* Mono 3.0.7
+* MySQL® 5.1.73
+* Perl v5.8.8
+* Python® 2.7.3
+* SAMtools 0.1.17
+* Scala 2.9.0 RC3
+
+**Note:** GATK and associated executables are **no longer** included.
+
+##Release Notes##
+See: 
+
+* [SMRT Analysis Release Notes v2.2.0](SMRT-Analysis-Release-Notes-v2.2.0)
+* [SMRT Analysis Release Notes v2.2.0.p1](SMRT-Analysis-Release-Notes-v2.2.0.p1)
+* [SMRT Analysis Release Notes v2.2.0.p2](SMRT-Analysis-Release-Notes-v2.2.0.p2), and 
+* [SMRT Analysis Release Notes v2.2.0.p3](SMRT-Analysis-Release-Notes-v2.2.0.p3) for changes and known issues. 
+
+You can find the latest version of this document on the Pacific Biosciences DevNet site; you can also link to it from the main SMRT Analysis web page.
+
+#Getting Started#
+
+**Note:** This section contains a summary of the commands used for a **quick installation**.  
+
+* Use these commands **only** if you are familiar with the installation/upgrade process.  
+* Proceed to the [Installation Guide] (#installation-guide) for the more detailed procedure.
+
+##Download SMRT Analysis##
+Download SMRT Analysis from PacBio DevNet (http://www.pacbiodevnet.com):
+```
+wget https://s3.amazonaws.com/files.pacb.com/software/smrtanalysis/2.2.0/smrtanalysis-2.2.0.133377.run
+wget https://s3.amazonaws.com/files.pacb.com/software/smrtanalysis/2.2.0/smrtanalysis-2.2.0.133377-patch-3.run
+```
+ 
+##Installation Summary##
+  ```
+  SMRT_ROOT=/opt/smrtanalysis
+  sudo mkdir $SMRT_ROOT
+  sudo chown smrtanalysis:smrtanalysis $SMRT_ROOT
+
+  su -l smrtanalysis
+  smrtanalysis-2.2.0.133377.run -p smrtanalysis-2.2.0.133377-patch-3.run --rootdir $SMRT_ROOT
+
+  $SMRT_ROOT/admin/bin/smrtportald-initd start
+  $SMRT_ROOT/admin/bin/kodosd start
+  ```
+
+##Upgrade Summary##
+
+  ```
+ su -l smrtanalysis
+ SMRT_ROOT=/opt/smrtanalysis
+ $SMRT_ROOT/admin/bin/smrtportald-initd stop 
+ $SMRT_ROOT/admin/bin/smrtupdater -- -p smrtanalysis-2.2.0.133377-patch-3.run smrtanalysis-2.2.0.133377.run
+ $SMRT_ROOT/admin/bin/smrtportald-initd start
+  ```
+
+Once SMRT Portal is installed, proceed to the following sections to complete setup:
+
+1. [Set up SMRT Portal] (#set-up-smrt-portal) (for new installations only)
+2. [Verify the Installation] (#verify-the-installation) (for new installations **and** upgrades)
+
+
+##Patch Summary##
+
+**These two commands must be run as the `smrtanalysis` user.**
+  ```
+  SMRT_ROOT=/opt/smrtanalysis
+  $SMRT_ROOT/admin/bin/smrtportald-initd stop
+  ```
+  
+**These two commands must be run as `root` (e.g., using `sudo`). Skip these commands if the files do not exist.**
+  ```
+  sudo rm /tmp/mysql_XXXXX.sock 
+  sudo rm $SMRT_ROOT/userdata/database/../../error.log 
+  ```  
+
+**These two commands must be run as the `smrtanalysis` user.**
+  ```
+  $SMRT_ROOT/admin/bin/smrtupdater smrtanalysis-2.2.0.133377-patch-3.run
+  $SMRT_ROOT/admin/bin/smrtportald-initd start
+  ```
+
+#Installation Guide#
+##System Requirements##
+
+###Hardware Guidelines###
+
+####Submit Host####
+* Minimum 8 cores, with 2 GB RAM per core.
+* Minimum 250 GB of disk space.
+
+####Execution Hosts####
+* Minimum of 3 nodes. We **recommend** 5 nodes for high utilization focused on _de novo_ assemblies.
+* Minimum of 8 cores per node, with 2 GB RAM per core. We **recommend** 16 cores per node with 4 GB RAM per core.
+* Minimum of 250 GB of disk space per node.
+* To perform _de novo_ assembly of large genomes using Celera® Assembler, **one** of the nodes will need to have considerably more memory. See the Celera® Assembler home page for recommendations: http://wgs-assembler.sourceforge.net/.
+
+For more information, see [What computing infrastructure is compatible with SMRT Analysis?](What-computing-infrastructure-is-compatible-with-SMRT-Analysis%3F)
+
+**Notes:** 
+* It is possible, but **not** advisable, to install SMRT Analysis on a single-node machine (see the distributed computing section). You will likely be able to submit jobs one SMRT Cell at a time, but the time to completion may be long as the software may not have sufficient resources to complete the job.  
+
+* The ``RS_ReadsOfInsert`` protocol can be **compute-intensive**. If you plan to run it on every SMRT Cell, we recommend adding 3 additional 8-core compute nodes with at least 4 GB of RAM per core.
+
+
+###Software Prerequisites###
+
+####Operating Systems####
+
+* SMRT Analysis is supported on:
+    * English-language **Ubuntu: versions 12.04, 10.04, 8.04** 
+    * English-language **RedHat/CentOS: versions 6.3, 5.6, 5.3**
+
+* SMRT Analysis **cannot** be installed on Mac OS® or Windows® systems.
+
+
+
+####Software Dependencies####
+
+* Bash
+* Linux Standard Base (LSB)
+
+These are usually installed by default on most systems. If necessary, use the following commands to ensure that these packages are installed.
+
+**CentOS:**
+
+```
+sudo yum groupinstall "Development Tools"
+sudo yum install redhat-lsb
+```
+
+**Ubuntu:**
+
+```
+sudo apt-get install build-essential lsb-release
+```
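+
+To confirm the prerequisites are in place (both commands ship with the packages above):
+
+```
+bash --version    # confirms bash is available
+lsb_release -a    # prints distributor ID, release, and codename if LSB is present
+```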
+
+####Client Web Browser####
+We recommend using the Google Chrome® 21 web browser to run SMRT Portal for consistent functionality. We also support Apple’s Safari® and Internet Explorer® web browsers; however some features may not be optimized on these browsers.
+
+####Client Java####
+To run SMRT View, we recommend:
+* **Oracle Java:** Java Version 7 Update 45 or later for Linux, Windows, and Mac OS X. 
+* **Apple Java:** Java for OS X 2013-004 (1.6.0_51-b11-457-10M4509) or later.
+
+
+###Network Configuration###
+
+Please refer to the **IT Site Prep** guide provided with your instrument purchase for more details.
+
+See also [What data storage is compatible with SMRT Analysis?](What-data-storage-is-compatible-with-SMRT-Analysis%3F)
+
+####Data Storage####
+* 10 TB (Actual storage depends on usage.)
+
+* The **SMRT Analysis software directory** (we recommend `$SMRT_ROOT=/opt/smrtanalysis`) **must** have the same path and be **readable** by the smrtanalysis user across **all** compute nodes via **NFS**.  
+
+* The **SMRT Cell input directory**  (we recommend `$SMRT_ROOT/pacbio_instrument_data/`) **must** have the same path and be **readable** by the smrtanalysis user across **all** compute nodes via **NFS**.  This directory contains data from the instrument and can either be a directory configured by RS Remote during instrument installation, or a directory you created when you received data from a core lab. 
+
+* The **SMRT Analysis output directory** (we recommend `$SMRT_ROOT/userdata`) **must** have the same path and be **writable** by the smrtanalysis user across **all** compute nodes via **NFS**. This directory is usually softlinked to a large storage volume.
+
+* The **SMRT Analysis temporary directory** is used for fast I/O operations during runtime.  The software accesses this directory from `$SMRT_ROOT/tmpdir`; you can softlink this directory manually or using the install script.  It should be a local directory (**not** NFS-mounted), be writable by the `smrtanalysis` user, and exist as an independent directory on each compute node.
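+
+For example, to point the temporary directory at fast local scratch space (a sketch; ``/scratch`` is an example path and must exist on every node):
+
+```
+# run on each compute node as the smrtanalysis user
+mkdir -p /scratch/smrtanalysis_tmp
+ln -sfn /scratch/smrtanalysis_tmp $SMRT_ROOT/tmpdir   # -fn replaces an existing link
+```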
+
+###Cluster Configuration###
+
+Pacific Biosciences has explicitly validated **Sun Grid Engine (SGE)**, and provides job submission templates for **LSF** and **PBS**. You only need to configure the software **once**, during the initial install. 
+
+##Installation Details##
+
+###Downloading SMRT Analysis###
+Download SMRT Analysis from PacBio DevNet (http://www.pacbiodevnet.com):
+```
+wget https://s3.amazonaws.com/files.pacb.com/software/smrtanalysis/2.2.0/smrtanalysis-2.2.0.133377.run
+```
+Download the latest patch available for your version:
+```
+wget https://s3.amazonaws.com/files.pacb.com/software/smrtanalysis/2.2.0/smrtanalysis-2.2.0.133377-patch-3.run
+```
+
+###Create the SMRT Analysis User###
+We recommend that a system administrator create a special user called `smrtanalysis`, who belongs to the `smrtanalysis` group. This user will own **all** SMRT Analysis files, daemon processes, and smrtpipe jobs.   
+
+
+###Create the Installation Path###
+The SMRT Analysis top-level directory, `$SMRT_ROOT`, can be **any** directory as long as the `smrtanalysis` user has read, write, and execute permissions in that directory.  Historically, we referred to `$SMRT_ROOT` as `/opt/smrtanalysis`.  
+
+If the parent directory of `$SMRT_ROOT` is **not** writable by the SMRT Analysis user, the `$SMRT_ROOT` directory **must** be pre-created with read/write/execute permissions for the SMRT Analysis user.  
+
+
+###Run the Installer###
+The installation script attempts to discover inputs when possible, and performs the following configurations: 
+
+1. Confirms a valid non-root user that will own SMRT Pipe jobs and daemon processes.
+1. Performs system hardware, OS, and software prerequisite checks.
+1. Identifies valid host names and IP addresses recognized by DNS.
+1. Prompts for the Tomcat web server **main** port and **shutdown** port numbers.
+1. Creates and verifies symbolic links to TMP and USERDATA directories.
+1. Configures MySQL server settings and initializes the SMRT Portal database.
+1. Configures distributed or non-distributed SMRT Pipe jobs.
+1. Configures the Job Management System and related parameters for queues and parallel environments.
+
+
+**Option 1:** The SMRT Analysis user has sudo privileges.
+
+For example, suppose `$SMRT_ROOT` is `/opt/smrtanalysis`, `/opt` is writable only by root, and the SMRT Analysis user is `smrtanalysis`, belonging to the group `smrtanalysis`:
+
+  ```
+  SMRT_ROOT=/opt/smrtanalysis
+  sudo mkdir $SMRT_ROOT
+  sudo chown smrtanalysis:smrtanalysis $SMRT_ROOT
+  ```
+
+**Option 2:** The SMRT Analysis user does **not** have sudo privileges.
+
+For example, if you do not have sudo privileges, you can install SMRT Analysis as yourself in your home directory. The directory you create is already owned by you, so no `chown` is needed:
+
+  ```
+  SMRT_ROOT=/home/<your_username>/smrtanalysis
+  mkdir $SMRT_ROOT
+  ```
+  ```
+  smrtanalysis-2.2.0.133377.run -p smrtanalysis-2.2.0.133377-patch-3.run --rootdir $SMRT_ROOT
+  ```
+  
+  If you cancelled out of the install prompt and want to rerun the script without extracting again, you can rerun using the `--no-extract` option:
+
+  ```
+  smrtanalysis-2.2.0.133377.run -p smrtanalysis-2.2.0.133377-patch-3.run --rootdir $SMRT_ROOT --no-extract
+  ```
+
+
+###Apply Patches During Installation###
+If installing **after** a patch has been released for the software, you can install **both** the software and the patch in one command using the ``-p`` option:
+
+  ```
+  smrtanalysis-2.2.0.133377.run -p smrtanalysis-2.2.0.133377-patch-3.run --rootdir $SMRT_ROOT
+  ```
+
+
+###Set up Distributed Computing###
+####Configuring Job Submission Templates####
+Distributed computing is configured by editing three template files:
+```
+$SMRT_ROOT/current/analysis/etc/cluster/<JMS>/start.tmpl
+$SMRT_ROOT/current/analysis/etc/cluster/<JMS>/interactive.tmpl
+$SMRT_ROOT/current/analysis/etc/cluster/<JMS>/kill.tmpl
+```
+
+#####Specifying the SGE Job Management System#####
+The install script will automatically discover the `queue name` and `parallel environment` name based on the SGE installed on your system.  If you want to configure or add options to the qsub command, you must edit the `.tmpl` files manually.  For example, the default ``interactive.tmpl`` looks like the following:
+
+```
+qsub -pe smp ${NPROC} -S /bin/bash -V -q secondary -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} ${EXTRAS} ${CMD}
+```
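+
+For example, a site might point jobs at a different queue and add a memory request (a hypothetical edit; ``myqueue`` and the ``-l mem_free`` resource are assumptions about your site's SGE configuration):
+
+```
+qsub -pe smp ${NPROC} -S /bin/bash -V -q myqueue -l mem_free=2G -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} ${EXTRAS} ${CMD}
+```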
+
+If you are assembling large genomes (>100 Mb) and wish to use the job distribution functionality within Celera Assembler, you must make sure the parallel environment is configured to use the `$pe_slots` allocation rule.  For example, the `smp` parallel environment is configured as follows:
+
+```
+$ qconf -sp smp
+pe_name            smp
+slots              99999
+user_lists         NONE
+xuser_lists        NONE
+start_proc_args    /bin/true
+stop_proc_args     /bin/true
+allocation_rule    $pe_slots
+control_slaves     FALSE
+job_is_first_task  TRUE
+urgency_slots      min
+accounting_summary FALSE
+```
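+
+If your parallel environment uses a different allocation rule, it can be edited in place with standard SGE tooling (shown here for the ``smp`` parallel environment above):
+
+```
+qconf -mp smp   # opens the PE definition in an editor; set allocation_rule to $pe_slots
+```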
+
+#####Specifying the PBS Job Management System#####
+PBS does **not** have a ``-sync`` option, and the ``interactive.tmpl`` file runs a script named ``qsw.py`` to simulate the functionality. You must edit **both** ``interactive.tmpl`` and ``start.tmpl``. 
+
+1. Change the queue name to one that exists on your system. (This is the ``-q`` option.) 
+2. Change the parallel environment to one that exists on your system. (This is the ``-pe`` option.) 
+3. Make sure that ``interactive.tmpl`` calls the ``-PBS`` option.
+
+#####Specifying the LSF Job Management System#####
+The LSF equivalent of the SGE `-sync` option is `-K`; provide it with the `bsub` command in the `interactive.tmpl` file.
+
+1. Change the queue name to one that exists on your system. (This is the `-q` option.) 
+2. Change the parallel environment to one that exists on your system. (This is the `-pe` option.) 
+3. Make sure that ``interactive.tmpl`` calls the `-K` option.
+
+
+#####Specifying other Job Management Systems#####
+1. Create a new directory `$SMRT_ROOT/current/analysis/etc/cluster/NEW_JMS`.
+2. Edit `$SMRT_ROOT/current/analysis/etc/smrtpipe.rc`, and change the `CLUSTER_MANAGER` variable to `NEW_JMS`.
+3. Once you have a new JMS directory specified, create and edit the `interactive.tmpl`, `start.tmpl`, and `kill.tmpl` files for your particular setup, as in the sketch below.
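+
+As a sketch for a hypothetical SLURM setup (SLURM is **not** an officially validated JMS here; the SGE templates merely serve as a starting point):
+
+```
+mkdir $SMRT_ROOT/current/analysis/etc/cluster/SLURM
+cp $SMRT_ROOT/current/analysis/etc/cluster/SGE/*.tmpl \
+   $SMRT_ROOT/current/analysis/etc/cluster/SLURM/
+# set CLUSTER_MANAGER to SLURM in $SMRT_ROOT/current/analysis/etc/smrtpipe.rc,
+# then rewrite the .tmpl files around sbatch/scancel equivalents
+```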
+
+
+###Start the SMRT Analysis Services###
+
+####Start the MySQL and Tomcat Daemons####
+
+The following command will start both Tomcat and MySQL.  You should use this command to restart services.
+```
+$SMRT_ROOT/admin/bin/smrtportald-initd start
+```
+
+MySQL and Tomcat can also be controlled individually for troubleshooting purposes:
+```
+$SMRT_ROOT/admin/bin/mysqld start
+$SMRT_ROOT/admin/bin/tomcatd start
+```
+
+You can check that the services are on or off using the `ps` command:
+```
+ps -ef | grep tomcat
+ps -ef | grep mysql
+```
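+
+As an additional liveness check (hypothetical host and port; substitute the values chosen during installation):
+
+```
+curl -sI http://localhost:8080/smrtportal | head -n 1   # expect an HTTP 200 response
+```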
+
+####Start the Kodos Daemon####
+```
+$SMRT_ROOT/admin/bin/kodosd start
+```
+
+You can check that the services are on or off using the `ps` command:
+```
+ps -ef | grep kodos
+```
+###Set Up SMRT Portal###
+
+Register the administrative user and set up the SMRT Portal GUI:
+
+1. Use a web browser to launch SMRT Portal: `http://hostname:port/smrtportal`
+1. Click **Register** at the top right.
+1. Create a user named ``administrator`` (all lowercase). This user is special, as it is the only user that does **not** require activation on creation.
+1. Enter the user name ``administrator``.
+1. Enter an email address. All administrative emails, such as new user registrations, are sent to this address.
+1. Enter, then confirm the password.
+1. Select **Click Here** to access **Change Settings**.
+1. To set up the mail server, enter the SMTP server information and click **Apply**. For email authentication, enter a user name and password. You can also enable Transport Layer Security.
+1. To enable automated submission from a PacBio instrument, click **Add** under the Instrument Web Services URI field. Then, enter the following into the dialog box and click **OK**:
+   ```
+   http://INSTRUMENT_PAP01:8081
+   ```
+   * ``INSTRUMENT_PAP01`` is the IP address or name (pap01) of the instrument.
+   * ``8081`` is the port for the instrument web service.
+1. Select the new URI, then click **Test** to check if SMRT Portal can communicate with the instrument service.
+1. (Optional) You can delete the pre-existing instrument entry by clicking **Remove**.
+
+
+###Verify the Installation###
+
+Create a test job in SMRT Portal using the provided lambda sequence data. This is data from a single SMRT Cell that has been down-sampled to reduce overall tarball size. If you are upgrading, this cell will already have been imported into your system, and you can skip to step 9 below.
+
+Open your web browser and clear the browser cache:
+
+* **Google Chrome**: Choose **Tools > Clear browsing data**. Choose **the beginning of time** from the droplist, then check **Empty the cache** and click **Clear browsing data**.
+* **Internet Explorer**: Choose **Tools > Internet Options > General**, then under Browsing history, click **Delete**. Check **Temporary Internet files**, then click **Delete**.
+* **Firefox®**: Choose **Tools > Options > Advanced**, then click the **Network** tab. In the Cached Web Content section, click **Clear Now**.
+
+1. Refresh the current page by pressing **F5**.
+2. Navigate to SMRT Portal at ``http://HOST:PORT/smrtportal``, then log in.
+3. Click **Design Job**.
+4. Click **Import and Manage**.
+5. Click **Import SMRT Cells**.
+6. Click **Add**.
+7. Enter ``$SMRT_ROOT/current/common/test/primary``, then click **OK**.
+8. Select the new path and click **Scan**. You should get a dialog saying “One input was scanned.”
+9. Click **Design Job**.
+10. Click **Create New**.
+11. Enter a job name and comment.
+12. Select the protocol ``RS_Resequencing.1``.
+13. Under **SMRT Cells Available**, select a lambda cell and click the right-arrow button.
+14. Click **Save** on the bottom right, then click **Start**. The job should complete successfully.
+15. Click the **SMRT View** button. SMRT View should open with tracks displayed, and the reads displayed in the Details panel.
+
+##Optional Configurations##
+###Set Up User Data Directory###
+
+The user data folder, `$SMRT_ROOT/userdata`, expands rapidly because it contains all jobs, references, and drop boxes.  We recommend softlinking this folder to an **external** directory with more storage: 
+```
+mv $SMRT_ROOT/userdata /path/to/NFS/mounted/offline_storage
+ln -s /path/to/NFS/mounted/offline_storage $SMRT_ROOT/userdata
+```
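+
+To confirm the link resolves as intended and that the external volume has room:
+
+```
+readlink -f $SMRT_ROOT/userdata               # should print the external storage path
+df -h /path/to/NFS/mounted/offline_storage    # check available space on the target
+```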
+
+##Upgrade Details##
+
+###Supported Upgrade Path###
+
+* For SMRT Analysis v2.2.0, **only** upgrades directly from **v2.1.1** or **v2.1.0** are supported.
+
+* SMRT Analysis does **not** support upgrades from SMRT Analysis v2.0.1 or earlier. The recommended upgrade path is to incrementally upgrade to each version, that is:
+
+  ``1.4 -> 2.0.0 -> 2.0.1 -> 2.1.0 -> 2.2.0``
+
+Alternately, you may opt for a [fresh installation] (#installation-details) of SMRT Analysis v2.2.0 and then manually import old SMRT Cells and jobs to preserve analysis history.
+
+See [[Official Documentation]] for upgrading from earlier versions of SMRT Analysis:
+* [[SMRT Analysis Software Installation v2.0.1]]
+* [[SMRT Analysis Software Installation v2.1]]
+
+
+###Run the Upgrader###
+
+Upgrades are handled by the ``smrtupdater`` script, located at ``$SMRT_ROOT/admin/bin/smrtupdater``.
+The script performs the following:
+
+1. Confirms a valid non-root user that will own SMRT Pipe jobs and daemon processes.
+1. Checks for running services, and stops them if needed.
+1. Performs system hardware, OS, and software prerequisite checks.
+1. Transfers computing configurations from the previous installation.
+1. Checks whether the reference repository needs upgrading.
+1. Confirms and validates symbolic links to the TMP and USERDATA directories.
+1. Upgrades the MySQL database.
+ 
+* The upgrade script does **not** port over protocols that were defined in previous versions of SMRT Analysis. This is because protocol files can vary a great deal between versions due to rapid code development and change. Please **recreate** any custom protocols you may have.
+
+  ```
+  $SMRT_ROOT/admin/bin/smrtupdater smrtanalysis-2.2.0.133377.run
+  ```
+
+####Applying Patches During Upgrade####
+
+If you are upgrading **after** a patch has been released for the software, you can upgrade **both** the software and the patch in one command using the ``-- -p`` option.  This passes the ``-p`` option through to ``smrtanalysis-2.2.0.133377.run`` via the ``--`` option of ``smrtupdater``.
+
+ ```
+ $SMRT_ROOT/admin/bin/smrtupdater -- -p smrtanalysis-2.2.0.133377-patch-3.run smrtanalysis-2.2.0.133377.run
+ ```
+
+###Start the SMRT Analysis Services###
+  
+####Start the MySQL and Tomcat Daemons####
+```
+$SMRT_ROOT/admin/bin/smrtportald-initd start
+```
+####Start the Kodos Daemon####
+```
+$SMRT_ROOT/admin/bin/kodosd start
+```
+
+##Known Install Problems and Workarounds##
+
+###Remote Storage Issues###
+In several installations, problems have been encountered with the mysql portion of the install due to the inability of the mysql scripts to change ownership (and possibly to change permissions) of files in ``$SMRT_ROOT/userdata/runtime/tmp``.  In each case, ``userdata`` was linked to remote NFS storage where the problem could be demonstrated with simple tests like creating a temporary file and running ``chown`` on it.  The best method to resolve this problem is to fix the storage issue, but the following workaround can be used instead:
+
+```
+
+SMRT_ROOT=<customer_specific>
+# you can actually put these new directories anywhere on the head node
+# local filesystem but these are shown as an example
+SMRT_DB=$SMRT_ROOT/../smrtanalysis_db
+SMRT_RTTMP=$SMRT_ROOT/../smrtanalysis_runtime_tmp
+SAUSER=<smrtanalysis_user>
+SAGRP=<smrtanalysis_group>
+sudo mkdir $SMRT_DB; sudo chown $SAUSER:$SAGRP  $SMRT_DB
+sudo mkdir $SMRT_RTTMP; sudo chown $SAUSER:$SAGRP  $SMRT_RTTMP
+
+# replace old directories with links to these new ones
+# note that this is safe to do only because this database
+# directory is new with the 2.2 install and you have not
+# yet finished a 2.2 install or used 2.2 yet
+sudo rm -rf $SMRT_ROOT/userdata/database
+sudo rm -rf $SMRT_ROOT/userdata/runtime/tmp
+
+# then as the  <smrtanalysis_user>:
+ln -s $SMRT_DB $SMRT_ROOT/userdata/database
+ln -s $SMRT_RTTMP $SMRT_ROOT/userdata/runtime/tmp
+
+# From there, you should be able to execute the install or upgrade as shown above.
+
+# the following is probably not necessary due to the way that we resolve paths in our scripts,
+# but this will cleanup broken links created during the install
+rm $SMRT_ROOT/userdata/database/mysql/log
+rm $SMRT_ROOT/userdata/database/mysql/runtime
+ln -s $SMRT_ROOT/userdata/log $SMRT_ROOT/userdata/database/mysql/log
+ln -s $SMRT_ROOT/userdata/runtime $SMRT_ROOT/userdata/database/mysql/runtime
+
+```
+
+###ACL Problems###
+If you use ACLs in the ``SMRT_ROOT`` or any of the linked storage, you may have obscure install or execution problems if the "smrtanalysis" user does not have full permissions.  For example, we have seen cases that failed in the middle of an install due to the inability to copy a file with "cp -a" in some of the install scripts.  If you suspect ACL-related problems, try disabling them and retrying.
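+
+For example, ACLs can be inspected, and if appropriate stripped, with the standard ``acl`` tools (a sketch; check with your system administrator before removing ACLs):
+
+```
+getfacl $SMRT_ROOT        # entries beyond the owner/group/other lines indicate ACLs
+setfacl -R -b $SMRT_ROOT  # removes all extended ACL entries recursively
+```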
+
+##Advanced Deployment##
+
+###Using Amazon Web Services###
+Users wishing to run SMRT Analysis in the cloud can use an Amazon Machine Image (AMI) with SMRT Analysis pre-installed. For details, see:
+
+["Installing" SMRT Portal the easy way - Launching a SMRT Portal AMI] (https://github.com/PacificBiosciences/Bioinformatics-Training/wiki/%22Installing%22-SMRT-Portal-the-easy-way---Launching-A-SMRT-Portal-AMI).
+
+
+
+***
+
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2014, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
+**P/N 100-321-100-03**
\ No newline at end of file
diff --git a/docs/SMRT-Analysis-Software-Installation-v2.3.0.md b/docs/SMRT-Analysis-Software-Installation-v2.3.0.md
new file mode 100644
index 0000000..2237ddd
--- /dev/null
+++ b/docs/SMRT-Analysis-Software-Installation-v2.3.0.md
@@ -0,0 +1,629 @@
+* [What's New?] (#whats-new)
+  * [Bundled SMRT Portal Application Database] (#bundled-smrt-portal-application-database)
+  * [SMRT Analysis User Environment Changes] (#smrt-analysis-user-environment-changes)
+  * [Bundled Software] (#bundled-software)
+  * [Release Notes] (#release-notes)
+* [Introduction] (#introduction)
+  * [SMRT Analysis Installation Environment Assumptions] (#smrt-analysis-installation-environment-assumptions)
+  * [Installation Instruction Conventions] (#installation-instruction-conventions)
+* [Getting Started] (#getting-started)
+  * [Download SMRT Analysis] (#download-smrt-analysis)
+  * [Installation Summary] (#installation-summary)
+  * [Upgrade Summary] (#upgrade-summary)
+  * [Patch Summary] (#patch-summary)
+* [Installation Guide] (#installation-guide)
+  * [System Requirements] (#system-requirements)
+    * [Hardware Guidelines] (#hardware-guidelines)
+    * [Software Prerequisites] (#software-prerequisites)
+  * [Installation Details] (#installation-details)
+    * [Downloading SMRT Analysis] (#downloading-smrt-analysis)
+    * [Create the SMRT Analysis User] (#create-the-smrt-analysis-user)
+    * [Create the Installation Path] (#create-the-installation-path)
+    * [Run the Installer] (#run-the-installer)
+    * [Apply Patches During Installation] (#apply-patches-during-installation)
+    * [Set up Distributed Computing] (#set-up-distributed-computing)
+    * [Start the SMRT Analysis Services] (#start-the-smrt-analysis-services)
+    * [Set up SMRT Portal] (#set-up-smrt-portal)
+    * [Verify the Installation] (#verify-the-installation)
+    * [Optional Configurations] (#optional-configurations)
+  * [Upgrade Details] (#upgrade-details)
+    * [Supported Upgrade Path](#supported-upgrade-path)
+    * [Run the Upgrader] (#run-the-upgrader)
+    * [Applying Patches During Upgrade] (#applying-patches-during-upgrade)
+    * [Start the SMRT Analysis Services] (#start-the-smrt-analysis-services)
+  * [Known Installation Problems and Workarounds] (#known-installation-problems-and-workarounds)
+    * [Remote Storage Issues] (#remote-storage-issues)
+    * [ACL Problems] (#acl-problems)
+  * [Advanced Deployment](#advanced-deployment)
+    * [Using Amazon Web Services](#using-amazon-web-services)
+
+
+#What's New?#
+
+###Bundled SMRT Portal Application Database###
+
+Pre-built binaries for MySQL Server are bundled with the SMRT Analysis suite (v2.2.0 and later), providing a standalone, isolated environment, free of external system dependencies, for the SMRT Portal application database.  This architecture allows a more seamless installation and upgrade process and provides an additional measure of data security with automated schema backups.
+
+Upon install/upgrade, the default behavior is to embed the database.  All data from the external database will be migrated to the bundled server.
+
+###SMRT Analysis User Environment Changes###
+To better deploy SMRT Analysis in an increasingly complex variety of user environments, we've implemented a new approach to invoking an isolated and controlled SMRT Analysis environment in which to run SMRT Portal and SMRTpipe commands.  These changes aim to alleviate some of the headaches associated with resolving library version dependencies and permissions restrictions for non-privileged users. 
+
+####"setup.sh"-related Changes####
+```$SMRT_ROOT/current/etc/setup.sh``` sets up the environment for access to SMRT Analysis internals.
+* Sourcing ```$SMRT_ROOT/current/etc/setup.sh``` directly on the command line is no longer needed, and this usage is now deprecated.  Instead, ```$SMRT_ROOT/smrtcmds/bin/smrtshell``` invokes a new shell, similar to ```virtualenv```.
+
+Example usage:
+```
+user@host:~$ $SMRT_ROOT/smrtcmds/bin/smrtshell
+(smrtshell-2.3.0) user@host:~$ smrtpipe.py --version
+smrtpipe.py v1.87.139427
+(smrtshell-2.3.0) user@host:~$
+```
+
+* setup.sh squashes almost all user environment variables.  Exceptions:
+ - ```USER```, ```LOGNAME```, ```PWD```, ```TERM```, ```TERMCAP```, ```HOME```, ```WORKSPACE```, ```MPLCONFIGDIR```, and all ```SMRT_*``` variables.
+* A new user-accessible directory, ```$SMRT_ROOT/smrtcmds/bin```, contains the SMRT Analysis wrapper programs.
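+
+For example, a wrapped command can be invoked without entering ```smrtshell``` first (assuming ```smrtpipe.py``` is among the installed wrappers, as in the example above):
+
+```
+$SMRT_ROOT/smrtcmds/bin/smrtpipe.py --version
+```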
+
+
+####System and User-Specific Locales Are Ignored####
+* The locale is forced to the "C" (a.k.a. POSIX) default.
+* This affects all code run under setup.sh, including SMRT Portal and all command-line usage.
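+
+A quick way to see the effect (run inside ```smrtshell```):
+
+```
+locale   # LANG and the LC_* values should report the C (POSIX) locale
+```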
+
+
+###Bundled Software###
+
+SMRT Analysis includes the following third-party software packages bundled within the application; the application should **not** depend on what is already deployed on the system.
+
+* Apache Tomcat™ 7.0.23
+* Celera® Assembler 8.1
+* Docutils 0.8.1
+* GMAP (2014-01-21)
+* HMMER 3.1b1 (May 2013)
+* Java™ SE Runtime Environment (build 1.7.0_02-b13)
+* Mono 3.0.7
+* MySQL® 5.1.73
+* Perl v5.8.8
+* Python® 2.7.3
+* SAMtools 0.1.17
+* Scala 2.9.0 RC3
+
+**Note:** GATK and associated executables are **no longer** included.
+
+##Release Notes##
+
+See [SMRT Analysis Release Notes v2.3.0](SMRT-Analysis-Release-Notes-v2.3.0) for changes and known issues.
+
+You can find the latest version of this document on the Pacific Biosciences DevNet site; you can also link to it from the main SMRT Analysis web page.
+
+#Introduction#
+
+SMRT Analysis is designed to be installed on a wide variety of 64-bit Linux distributions **without** requiring root access. The installation is all within a single directory tree and includes embedded Apache Tomcat and MySQL servers as well as the whole set of analysis tools and the libraries to support them all.  
+
+The input and output data, as well as the temporary data used by the system, may be located on local or network storage; two directories or softlinks, ``userdata`` and ``tmpdir``, are created in the root directory during the initial installation to point to these locations.
+
+When running on a cluster, **all** machines in the queue need to be able to see the install directory and any remotely mounted directories used for the data at the same paths as those used on the server where the tools are installed.
+
+##SMRT Analysis Installation Environment Assumptions##
+
+* A 64-bit Linux machine with a ``libc`` version greater than 2.5.
+* Installing as the same non-root user (SMRT_USER) that will be used to run the system.
+* If upgrading, the same user with the same root directory (``SMRT_ROOT``) as used previously.
+* The ``SMRT_USER`` has full permissions in the file system in the ``SMRT_ROOT`` directory and in all linked directories for ``tmpdir`` and ``userdata``. (Common problems include NFS setup problems, ACLs, and so on; a quick check is sketched after this list.)
+* When running in distributed mode, all other nodes have the **same path** for ``SMRT_ROOT`` and for all linked directories.
+* During the installation, no other daemons are running on the same ports chosen for the Tomcat and MySQL servers.
+* Post-installation assumptions:
+  * No changes to ``SMRT_USER``, ``SMRT_ROOT``, the local hostname and/or IP that was used to install the system.
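+
+As a minimal sketch of the permission check mentioned above (assumes ``SMRT_ROOT``, ``SMRT_USER``, and ``SMRT_GROUP`` are set as described in the next section; the ``chown`` mirrors the test used to diagnose the remote storage issues covered later):
+
+```
+# run as SMRT_USER; repeat with $SMRT_ROOT replaced by each linked directory
+touch $SMRT_ROOT/.perm_test \
+  && chown $SMRT_USER:$SMRT_GROUP $SMRT_ROOT/.perm_test \
+  && rm $SMRT_ROOT/.perm_test \
+  && echo "permissions OK"
+```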
+
+##Installation Instruction Conventions##
+
+In the instruction commands below, several environment variables are used and must be defined OR you must run the installation with the proper values substituted for these variables.  These variables merely clarify where you have options during the installation process; they are **not** required during or after the install.
+
+* ``SMRT_ROOT``: Full path to the directory in which SMRT Analysis is or will be installed.
+* ``SMRT_USER``: The username of the user running the installation and SMRT Analysis daemons and tools; or was used to do so in previous installs. This should be the same value as ``$(id -nu)``.
+* ``SMRT_GROUP``: The default group name for ``SMRT_USER``. This should be the same value as ``$(id -ng)``.
+
+Below are typical values for ``SMRT_ROOT``, ``SMRT_USER``, and ``SMRT_GROUP``, and although different values can be used, we recommend defaulting to the following conventions for most installations:
+```
+SMRT_ROOT=/opt/smrtanalysis/
+SMRT_USER=smrtanalysis
+SMRT_GROUP=smrtanalysis
+```
+
+#Getting Started#
+
+**Note:** This section contains a summary of the commands used for a **quick installation**.  
+
+* Use these commands **only** if you are familiar with the installation/upgrade process.  
+* Proceed to the [Installation Guide] (#installation-guide) for the more detailed procedure.
+
+##Download SMRT Analysis##
+Download SMRT Analysis from PacBio DevNet (http://www.pacbiodevnet.com):
+```
+wget https://s3.amazonaws.com/files.pacb.com/software/smrtanalysis/2.2.0/smrtanalysis-2.2.0.133377.run
+wget https://s3.amazonaws.com/files.pacb.com/software/smrtanalysis/2.2.0/smrtanalysis-2.2.0.133377-patch-3.run
+```
+ 
+##Installation Summary##
+
+Set installation environment variables.  See [Installation Instruction Conventions] (#installation-instruction-conventions) for details on ```SMRT_*``` variables:
+```
+SMRT_ROOT=/opt/smrtanalysis/
+SMRT_USER=smrtanalysis
+SMRT_GROUP=smrtanalysis
+```
+
+Create SMRT Analysis installation directory:
+```
+sudo mkdir $SMRT_ROOT
+sudo chown $SMRT_USER:$SMRT_GROUP $SMRT_ROOT
+```
+
+Run installation:
+```
+su -l $SMRT_USER
+bash smrtanalysis-2.3.0.xxxxx.run --rootdir $SMRT_ROOT
+```
+
+Start SMRT Analysis daemons:
+```
+$SMRT_ROOT/admin/bin/smrtportald-initd start
+$SMRT_ROOT/admin/bin/kodosd start
+```
+
+##Upgrade Summary##
+
+Set installation environment variables.  See [Installation Instruction Conventions] (#installation-instruction-conventions) for details on ```SMRT_*``` variables:
+```
+SMRT_ROOT=/opt/smrtanalysis/
+SMRT_USER=smrtanalysis
+```
+
+Stop SMRT Analysis daemons and run upgrader:
+```
+su -l $SMRT_USER
+$SMRT_ROOT/admin/bin/smrtportald-initd stop 
+$SMRT_ROOT/admin/bin/smrtupdater smrtanalysis-2.3.0.xxxxx.run
+```
+
+Start SMRT Analysis daemons:
+```
+$SMRT_ROOT/admin/bin/smrtportald-initd start
+$SMRT_ROOT/admin/bin/kodosd start
+```
+
+Once SMRT Portal is installed, proceed to the following sections to complete setup:
+
+1. [Set up SMRT Portal] (#set-up-smrt-portal) (for new installations only)
+2. [Verify the Installation] (#verify-the-installation) (for new installations **and** upgrades)
+
+
+##Patch Summary##
+
+Set installation environment variables.  See [Installation Instruction Conventions] (#installation-instruction-conventions) for details on ```SMRT_*``` variables:
+```
+SMRT_ROOT=/opt/smrtanalysis/
+SMRT_USER=smrtanalysis
+```
+
+Stop SMRT Analysis daemons:
+```
+su -l $SMRT_USER
+$SMRT_ROOT/admin/bin/smrtportald-initd stop
+```
+ 
+**These two commands must be run as the SMRT_USER. They apply the patch and restart the daemons.**
+```
+$SMRT_ROOT/admin/bin/smrtupdater <patchfile>
+$SMRT_ROOT/admin/bin/smrtportald-initd start
+```
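+
+To confirm that the patch was applied, list the install directory; as in the directory hierarchy shown for earlier releases, patched builds appear alongside the base version (``xxxxx`` and ``y`` stand in for the actual build and patch numbers):
+
+```
+ls $SMRT_ROOT/install
+# smrtanalysis-2.3.0.xxxxx    smrtanalysis-2.3.0.xxxxx-patch-y
+```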
+
+#Installation Guide#
+##System Requirements##
+
+###Hardware Guidelines###
+
+####Submit Host####
+* Minimum 8 cores, with 2 GB RAM per core.
+* Minimum 250 GB of disk space.
+
+####Execution Hosts####
+* Minimum of 3 nodes. We **recommend** 5 nodes for high utilization focused on _de novo_ assemblies.
+* Minimum of 8 cores per node, with 2 GB RAM per core. We **recommend** 16 cores per node with 4 GB RAM per core.
+* Minimum of 250 GB of disk space per node.
+* To perform _de novo_ assembly of large genomes using Celera® Assembler, **one** of the nodes will need to have considerably more memory. See the Celera® Assembler home page for recommendations: http://wgs-assembler.sourceforge.net/.
+
+For more information, see [What computing infrastructure is compatible with SMRT Analysis?](What-computing-infrastructure-is-compatible-with-SMRT-Analysis%3F)
+
+**Notes:** 
+* It is possible, but **not** advisable, to install SMRT Analysis on a single-node machine (see the distributed computing section). You will likely be able to submit jobs one SMRT Cell at a time, but the time to completion may be long as the software may not have sufficient resources to complete the job.  
+
+* The ``RS_ReadsOfInsert`` protocol can be **compute-intensive**. If you plan to run it on every SMRT Cell, we recommend adding 3 additional 8-core compute nodes with at least 4 GB of RAM per core.
+
+
+###Software Prerequisites###
+
+####Operating Systems####
+
+* SMRT Analysis is supported on:
+    * English-language **Ubuntu: versions 12.04, 10.04, 8.04** 
+    * English-language **RedHat/CentOS: versions 6.3, 5.6, 5.3**
+
+* SMRT Analysis **cannot** be installed on Mac OS® or Windows® systems.
+
+
+
+####Software Dependencies####
+
+* Bash
+* Linux Standard Base (LSB)
+
+These are usually installed by default on most systems. If necessary, use the following commands to ensure that these packages are installed.
+
+**CentOS:**
+
+```
+sudo yum groupinstall "Development Tools"
+sudo yum install redhat-lsb
+```
+
+**Ubuntu:**
+
+```
+sudo apt-get install build-essential lsb-release
+```
+
+####Client OS####
+
+To run SMRT Portal and SMRT View, we recommend:
+
+* Microsoft Windows...
+* Mac OS X .....
+
+####Client Web Browser####
+We recommend using the Google Chrome® 21 web browser to run SMRT Portal for consistent functionality. We also support Apple’s Safari® and Internet Explorer® web browsers; however some features may not be optimized on these browsers.
+
+####Client Java####
+
+To run SMRT View, we recommend installing the **latest** version of Java. The minimum recommended versions are:
+* **Oracle Java:** Java Version 7 Update 67 or later for Linux, Windows, and Mac OS X. 
+* **Apple Java:** Java for OS X 2013-004 (1.6.0_51-b11-457-10M4509) or later.
+
+###Network Configuration###
+
+Please refer to the **IT Site Prep** guide provided with your instrument purchase for more details.
+
+See also [What data storage is compatible with SMRT Analysis?](What-data-storage-is-compatible-with-SMRT-Analysis%3F)
+
+####Data Storage####
+* 10 TB (Actual storage depends on usage.)
+
+* The **SMRT Analysis software directory** (we recommend `$SMRT_ROOT=/opt/smrtanalysis`) **must** have the same path and be **readable** by the smrtanalysis user across **all** compute nodes via **NFS**.  
+
+* The **SMRT Cell input directory**  (we recommend `$SMRT_ROOT/pacbio_instrument_data/`) **must** have the same path and be **readable** by the smrtanalysis user across **all** compute nodes via **NFS**.  This directory contains data from the instrument and can either be a directory configured by RS Remote during instrument installation, or a directory you created when you received data from a core lab. 
+
+* The **SMRT Analysis output directory** (we recommend `$SMRT_ROOT/userdata`) **must** have the same path and be **writable** by the smrtanalysis user across **all** compute nodes via **NFS**. This directory is usually softlinked to a large storage volume.
+
+* The **SMRT Analysis temporary directory** is used for fast I/O operations during runtime.  The software accesses this directory from `$SMRT_ROOT/tmpdir`; you can softlink this directory manually or using the install script.  It should be a local directory (**not** NFS-mounted), be writable by the `smrtanalysis` user, and exist as an independent directory on each compute node.
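+
+All of the directories above must resolve identically on every node. One way to spot path mismatches before they cause job failures is to compare how each node resolves the key directories. This is only a sketch; it assumes passwordless SSH and a hypothetical ``nodes.txt`` file listing your execution hosts:
+
+```
+# compare resolved paths across all execution hosts
+while read h; do
+  echo "== $h =="
+  ssh "$h" "readlink -f $SMRT_ROOT; readlink -f $SMRT_ROOT/userdata"
+done < nodes.txt
+```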
+
+###Cluster Configuration###
+
+Pacific Biosciences has explicitly validated **Sun Grid Engine (SGE)**, and provides job submission templates for **LSF** and **PBS**. You only need to configure the software **once**, during the initial install. 
+
+##Installation Details##
+
+###Downloading SMRT Analysis###
+Download SMRT Analysis from PacBio DevNet (http://www.pacbiodevnet.com):
+```
+wget https://s3.amazonaws.com/files.pacb.com/software/smrtanalysis/2.2.0/smrtanalysis-2.2.0.133377.run
+```
+Download the latest patch available for your version:
+```
+wget https://s3.amazonaws.com/files.pacb.com/software/smrtanalysis/2.2.0/smrtanalysis-2.2.0.133377-patch-3.run
+```
+
+###Create the SMRT Analysis User###
+We recommend that a system administrator create a special user called `smrtanalysis`, who belongs to the `smrtanalysis` group. This user will own **all** SMRT Analysis files, daemon processes, and smrtpipe jobs. Alternatively, the system may be installed and run by an individual user.  Whatever username is used, the instructions below assume that you are executing them as that user, and refer to that user and its default group as ``SMRT_USER`` and ``SMRT_GROUP``.
+
+
+###Create the Installation Path###
+The SMRT Analysis top-level directory, `$SMRT_ROOT`, can be **any** directory as long as the `smrtanalysis` user has read, write, and execute permissions in that directory.  Historically, we referred to `$SMRT_ROOT` as `/opt/smrtanalysis`.  
+
+If the parent directory of `$SMRT_ROOT` is **not** writable by the SMRT Analysis user, the `$SMRT_ROOT` directory **must** be pre-created with read/write/execute permissions for the SMRT Analysis user.  
+
+
+###Run the Installer###
+The installation script attempts to discover inputs when possible, and performs the following configurations: 
+
+1. Confirms a valid non-root user that will own SMRT Pipe jobs and daemon processes.
+2. Performs system hardware, OS, and software prerequisite checks.
+3. Identifies valid host names and IP addresses recognized by DNS.
+4. Prompts for the Tomcat web server **main** and **shutdown** port numbers.
+5. Creates and verifies symbolic links to the ``TMP`` and ``USERDATA`` directories.
+6. Configures MySQL server settings and initializes the SMRT Portal database.
+7. Configures distributed or non-distributed SMRT Pipe jobs.
+8. Configures the Job Management System and related parameters for queues and parallel environments.
+
+
+  ```
+  # see Installation Instruction Conventions for details about SMRT_* variables
+  # SMRT_ROOT is the directory where you want to install SMRT Analysis
+  sudo mkdir $SMRT_ROOT
+  # SMRT_USER and SMRT_GROUP are the user and group you are using to install SMRT Analysis
+  sudo chown $SMRT_USER:$SMRT_GROUP $SMRT_ROOT
+
+  su -l $SMRT_USER
+  bash smrtanalysis-2.3.0.xxxxx.run --rootdir $SMRT_ROOT
+
+  $SMRT_ROOT/admin/bin/smrtportald-initd start
+  $SMRT_ROOT/admin/bin/kodosd start
+  ```
+  
+  If you cancelled out of the install prompt and want to rerun the script without extracting again, you can rerun using the `--no-extract` option:
+
+  ```
+  bash smrtanalysis-2.3.0.xxxxx.run --rootdir $SMRT_ROOT --no-extract
+  ```
+
+
+###Apply Patches During Installation###
+If installing **after** a patch has been released for the software, you can install **both** the software and the patch in one command using the ``-p`` option:
+
+  ```
+  bash smrtanalysis-2.3.0.xxxxx.run -p smrtanalysis-2.3.0.xxxxx-patch-y.run --rootdir $SMRT_ROOT
+  ```
+
+
+###Set up Distributed Computing###
+####Configuring Job Submission Templates####
+You configure distributed computing by editing three template files:
+```
+$SMRT_ROOT/current/analysis/etc/cluster/<JMS>/start.tmpl
+$SMRT_ROOT/current/analysis/etc/cluster/<JMS>/interactive.tmpl
+$SMRT_ROOT/current/analysis/etc/cluster/<JMS>/kill.tmpl
+```
+
+#####Specifying the SGE Job Management System#####
+The install script will automatically discover the `queue name` and `parallel environment` name based on the SGE installed on your system.  If you want to configure or add options to the qsub command, you must edit the `.tmpl` files manually.  For example, the default ``interactive.tmpl`` looks like the following:
+
+```
+qsub -pe smp ${NPROC} -S /bin/bash -V -q secondary -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} ${EXTRAS} ${CMD}
+```
+
+If you are assembling large genomes (> 100 Mb) and wish to use the job distribution functionality within Celera Assembler, you **must** make sure the parallel environment is configured to use the `$pe_slots` allocation rule.  For example, the `smp` parallel environment is configured as follows:
+
+```
+$ qconf -sp smp
+pe_name            smp
+slots              99999
+user_lists         NONE
+xuser_lists        NONE
+start_proc_args    /bin/true
+stop_proc_args     /bin/true
+allocation_rule    $pe_slots
+control_slaves     FALSE
+job_is_first_task  TRUE
+urgency_slots      min
+accounting_summary FALSE
+```
+
+#####Specifying the PBS Job Management System#####
+PBS does **not** have a ``-sync`` option, and the ``interactive.tmpl`` file runs a script named ``qsw.py`` to simulate the functionality. You must edit **both** ``interactive.tmpl`` and ``start.tmpl``. 
+
+1. Change the queue name to one that exists on your system. (This is the ``-q`` option.) 
+2. Change the parallel environment to one that exists on your system. (This is the ``-pe`` option.) 
+3. Make sure that ``interactive.tmpl`` calls the ``-PBS`` option.
+
+#####Specifying the LSF Job Management System#####
+The LSF equivalent of the SGE `-sync` option is `-K`; provide it with the `bsub` command in the `interactive.tmpl` file.
+
+1. Change the queue name to one that exists on your system. (This is the `-q` option.) 
+2. Change the parallel environment to one that exists on your system. (This is the `-pe` option.) 
+3. Make sure that ``interactive.tmpl`` calls the `-K` option.
+
+
+#####Specifying other Job Management Systems#####
+1. Create a new directory `$SMRT_ROOT/current/analysis/etc/cluster/NEW_JMS`.
+2. Edit `$SMRT_ROOT/current/analysis/etc/smrtpipe.rc`, and change the `CLUSTER_MANAGER` variable to `NEW_JMS`.
+3. Once you have a new JMS directory specified, create and edit the `interactive.tmpl`, `start.tmpl`, and `kill.tmpl` files for your particular setup.
+
+
+###Start the SMRT Analysis Services###
+
+####Start the MySQL and Tomcat Daemons####
+
+The following command will start both Tomcat and MySQL.  You should use this command to restart services.
+```
+$SMRT_ROOT/admin/bin/smrtportald-initd start
+```
+
+MySQL and Tomcat can also be controlled individually for troubleshooting purposes:
+```
+$SMRT_ROOT/admin/bin/mysqld start
+$SMRT_ROOT/admin/bin/tomcatd start
+```
+
+You can check that the services are on or off using the `ps` command:
+```
+ps -ef | grep tomcat
+ps -ef | grep mysql
+```
+
+####Start the Kodos Daemon####
+```
+$SMRT_ROOT/admin/bin/kodosd start
+```
+
+You can check that the services are on or off using the `ps` command:
+```
+ps -ef | grep kodos
+```
+###Set Up SMRT Portal###
+
+Register the administrative user and set up the SMRT Portal GUI:
+
+1. Use a web browser to launch SMRT Portal: `http://hostname:port/smrtportal`
+1. Click **Register** at the top right.
+1. Create a user named ``administrator`` (all lowercase). This user is special, as it is the only user that does **not** require activation on creation.
+1. Enter the user name ``administrator``.
+1. Enter an email address. All administrative emails, such as new user registrations, are sent to this address.
+1. Enter, then confirm the password.
+1. Select **Click Here** to access **Change Settings**.
+1. To set up the mail server, enter the SMTP server information and click **Apply**. For email authentication, enter a user name and password. You can also enable Transport Layer Security.
+1. To enable automated submission from a PacBio instrument, click **Add** under the Instrument Web Services URI field. Then, enter the following into the dialog box and click **OK**:
+   ```
+   http://INSTRUMENT_PAP01:8081
+   ```
+   * ``INSTRUMENT_PAP01`` is the IP address or name (pap01) of the instrument.
+   * ``8081`` is the port for the instrument web service.
+1. Select the new URI, then click **Test** to check if SMRT Portal can communicate with the instrument service.
+1. (Optional) You can delete the pre-existing instrument entry by clicking **Remove**.
+
+
+###Verify the Installation###
+
+Create a test job in SMRT Portal using the provided lambda sequence data. This is data from a single SMRT Cell that has been down-sampled to reduce overall tarball size. If you are upgrading, this cell will already have been imported into your system, and you can skip to step 9 below.
+
+Open your web browser and clear the browser cache:
+
+* **Google Chrome**: Choose **Tools > Clear browsing data**. Choose **the beginning of time** from the droplist, then check **Empty the cache** and click **Clear browsing data**.
+* **Internet Explorer**: Choose **Tools > Internet Options > General**, then under Browsing history, click **Delete**. Check **Temporary Internet files**, then click **Delete**.
+* **Firefox®**: Choose **Tools > Options > Advanced**, then click the **Network** tab. In the Cached Web Content section, click **Clear Now**.
+
+1. Refresh the current page by pressing **F5**.
+2. Navigate to SMRT Portal at ``http://HOST:PORT/smrtportal``, then log in.
+3. Click **Design Job**.
+4. Click **Import and Manage**.
+5. Click **Import SMRT Cells**.
+6. Click **Add**.
+7. Enter ``$SMRT_ROOT/current/common/test/primary``, then click **OK**.
+8. Select the new path and click **Scan**. You should get a dialog saying “One input was scanned.”
+9. Click **Design Job**.
+10. Click **Create New**.
+11. Enter a job name and comment.
+12. Select the protocol ``RS_Resequencing.1``.
+13. Under **SMRT Cells Available**, select a lambda cell and click the right-arrow button.
+14. Click **Save** on the bottom right, then click **Start**. The job should complete successfully.
+15. Click the **SMRT View** button. SMRT View should open with tracks displayed, and the reads displayed in the Details panel.
+
+##Optional Configurations##
+###Set Up User Data Directory###
+
+The user data folder, `$SMRT_ROOT/userdata`, expands rapidly because it contains all jobs, references, and drop boxes.  We recommend softlinking this folder to an **external** directory with more storage: 
+```
+mv $SMRT_ROOT/userdata /path/to/NFS/mounted/offline_storage
+ln -s /path/to/NFS/mounted/offline_storage $SMRT_ROOT/userdata
+```
+
+##Upgrade Details##
+
+###Supported Upgrade Path###
+
+* For SMRT Analysis v2.3.0, **only** upgrades directly from **v2.2.0** are supported.
+
+* SMRT Analysis does **not** support upgrades from SMRT Analysis v2.0.1 or earlier. The recommended upgrade path is to incrementally upgrade to each version, that is:
+
+  ``1.4 -> 2.0.0 -> 2.0.1 -> 2.1.0 -> 2.2.0 -> 2.3.0``
+
+Alternately, you may opt for a [fresh installation] (#installation-details) of SMRT Analysis v2.3.0 and then manually import old SMRT Cells and jobs to preserve analysis history.
+
+See [[Official Documentation]] for upgrading from earlier versions of SMRT Analysis:
+* [[SMRT Analysis Software Installation v2.0.1]]
+* [[SMRT Analysis Software Installation v2.1]]
+* [[SMRT Analysis Software Installation v2.2.0]]
+
+
+###Run the Upgrader###
+
+Upgrades are handled by the ``smrtupdater`` script, located at ``$SMRT_ROOT/admin/bin/smrtupdater``.
+The script performs the following:
+
+1. Confirms a valid non-root user that will own SMRT Pipe jobs and daemon processes.
+1. Checks for running services, and stops them if needed.
+1. Performs system hardware, OS, and software prerequisite checks.
+1. Transfers computing configurations from the previous installation.
+1. Checks whether the reference repository needs upgrading.
+1. Confirms and validates symbolic links to the TMP and USERDATA directories.
+1. Upgrades the MySQL database.
+ 
+* The upgrade script does **not** port over protocols that were defined in previous versions of SMRT Analysis. This is because protocol files can vary a great deal between versions due to rapid code development and change. Please **recreate** any custom protocols you may have.
+
+  ```
+  $SMRT_ROOT/admin/bin/smrtupdater smrtanalysis-2.3.0.xxxxx.run
+  ```
+
+####Applying Patches During Upgrade####
+
+If you are upgrading **after** a patch has been released for the software, you can upgrade **both** the software and the patch in one command using the ``-- -p`` option.  This passes the ``-p`` option through to the ``smrtanalysis-2.3.0.xxxxx.run`` installer via the ``--`` option of ``smrtupdater``.
+
+ ```
+ $SMRT_ROOT/admin/bin/smrtupdater -- -p smrtanalysis-2.3.0.xxxxx-patch-y.run smrtanalysis-2.3.0.xxxxx.run
+ ```
+
+###Start the SMRT Analysis Services###
+  
+####Start the MySQL and Tomcat Daemons####
+```
+$SMRT_ROOT/admin/bin/smrtportald-initd start
+```
+####Start the Kodos Daemon####
+```
+$SMRT_ROOT/admin/bin/kodosd start
+```
+
+##Known Installation Problems and Workarounds##
+
+###Remote Storage Issues###
+In several installations, there were problems with the mysql portion of the installation due to the inability of the mysql scripts to change ownership (and possibly to change permissions) of files in ``$SMRT_ROOT/userdata/runtime/tmp``. In each case, ``userdata`` was linked to remote NFS storage where the problem could be demonstrated with simple tests like creating a temporary file and running ``chown`` on it. 
+
+The best method to resolve this problem is to fix the storage issue, but the following workaround can be used instead:
+
+```
+SMRT_ROOT=<customer_specific>
+# you can actually put these new directories anywhere on the head node
+# local filesystem but these are shown as an example
+SMRT_DB=$SMRT_ROOT/../smrtanalysis_db
+SMRT_RTTMP=$SMRT_ROOT/../smrtanalysis_runtime_tmp
+SAUSER=<smrtanalysis_user>
+SAGRP=<smrtanalysis_group>
+sudo mkdir $SMRT_DB; sudo chown $SAUSER:$SAGRP  $SMRT_DB
+sudo mkdir $SMRT_RTTMP; sudo chown $SAUSER:$SAGRP  $SMRT_RTTMP
+
+# replace old directories with links to these new ones
+# note that this is safe to do only because this database
+# directory is new with the 2.2 install and you have not
+# yet finished or used the 2.2 install
+sudo rm -rf $SMRT_ROOT/userdata/database
+sudo rm -rf $SMRT_ROOT/userdata/runtime/tmp
+
+# then, as the <smrtanalysis_user>:
+ln -s $SMRT_DB $SMRT_ROOT/userdata/database
+ln -s $SMRT_RTTMP $SMRT_ROOT/userdata/runtime/tmp
+
+# From there, you should be able to execute the install or upgrade as shown above.
+
+# the following is probably not necessary due to the way that we resolve paths in our scripts,
+# but this will cleanup broken links created during the install
+rm $SMRT_ROOT/userdata/database/mysql/log
+rm $SMRT_ROOT/userdata/database/mysql/runtime
+ln -s $SMRT_ROOT/userdata/log $SMRT_ROOT/userdata/database/mysql/log
+ln -s $SMRT_ROOT/userdata/runtime $SMRT_ROOT/userdata/database/mysql/runtime
+
+```
+
+###ACL Problems###
+If you use ACLs in the ``SMRT_ROOT`` or any of the linked storage, you may have obscure install or execution problems if the "smrtanalysis" user does **not** have full permissions. For example, we have seen cases that failed in the middle of an install due to the inability to copy a file with "cp -a" in some of the install scripts. If you suspect ACL related problems, try disabling them and retrying.
+
+##Advanced Deployment##
+
+###Using Amazon Web Services###
+Users wishing to run SMRT Analysis in the cloud can use an Amazon Machine Image (AMI) with SMRT Analysis pre-installed. For details, see:
+
+["Installing" SMRT Portal the easy way - Launching a SMRT Portal AMI] (https://github.com/PacificBiosciences/Bioinformatics-Training/wiki/%22Installing%22-SMRT-Portal-the-easy-way---Launching-A-SMRT-Portal-AMI).
+
+
+
+***
+
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2014, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
+**P/N 100-376-500-01**
\ No newline at end of file
diff --git a/docs/SMRT-Pipe-Reference-Guide-v2.0.md b/docs/SMRT-Pipe-Reference-Guide-v2.0.md
new file mode 100644
index 0000000..72ee0f1
--- /dev/null
+++ b/docs/SMRT-Pipe-Reference-Guide-v2.0.md
@@ -0,0 +1,1503 @@
+* [Introduction](#Intro)
+* [Installation](#Install)
+* [Using the Command Line](#CommandLine)
+ * [Command-Line Options](#CommandLineOptions)
+ * [Specifying SMRT Pipe Inputs](#PipeInputs)
+ * [Specifying SMRT Pipe Parameters](#PipeParams)
+* [SMRT Portal Protocols](#PortalProtocols)
+* [SMRT Pipe Modules and Their Parameters](#Modules)
+ * [Global Parameters](#Global)
+ * [P_Fetch Module](#P_Fetch)
+ * [P_Filter Module](#P_Filter)
+ * [P_PreAssembler Module](#P_Pre)
+ * [P_Mapping (BLASR) Module](#P_Map)
+ * [P_GenomicConsensus (Quiver) Module](#P_Quiver)
+ * [P_AnalysisHook Module](#P_Hook)
+ * [Assembly (Allora Assembly) Module](#P_Allora)
+ * [HybridAssembly (AHA Scaffolding) Module](#P_AHA)
+ * [P_GATKVC (GATK Unified Genotyper) Module](#P_GATK)
+ * [P_Modification Detection Module](#P_MOD)
+ * [RS_CeleraAssembler Workflow](#Celera)
+ * [P_CorrelatedVariants (Minor and Compound Variants) Module](#P_Cor)
+ * [P_MotifFinder (Motif Analysis) Module](#P_Motif)
+ * [P_GMAP Module](#P_GMAP)
+ * [P_Barcode Module](#P_BAR)
+* [SMRT Pipe Tools](#Tools)
+* [Building the SMRT Pipe tools manually, without SMRT Portal, SMRT View, or Kodos](#Build_SPTools)
+* [SMRT Pipe File Structure](#Files)
+* [The Reference Repository](#RefRep)
+
+## <a name="Intro"></a> Introduction
+
+This document describes the underlying command-line interface to SMRT Pipe, and is for use by bioinformaticians working with secondary analysis results.
+
+**SMRT Pipe** is Pacific Biosciences’ underlying analysis framework for secondary analysis functions. SMRT Pipe is a Python-based, general-purpose workflow engine. It is easily extensible, and supports logging, distributed computation, error handling, analysis parameters, and temporary files.
+
+In a typical installation of the SMRT Analysis Software, the SMRT Portal web application calls SMRT Pipe when a job is started. SMRT Portal provides a convenient and user-friendly way to analyze Pacific Biosciences’ sequencing data through SMRT Pipe. Power users will find that there is more flexibility and customization available by instead running SMRT Pipe analyses from the command line.
+
+* The latest version of SMRT Pipe is available **here**.
+
+* SMRT Pipe can also be accessed using the Secondary Analysis Web Services API. For details, see **Secondary Analysis Web Services API**.
+
+**Note:**
+Throughout this documentation, the path ``/opt/smrtanalysis`` is used to refer to the installation directory for SMRT Analysis (also known as ``$SEYMOUR_HOME``). Replace this path with the path appropriate to your installation when using this document.
+
+## <a name="Install"></a> Installation
+
+SMRT Pipe is installed as part of the SMRT Analysis software installation. For details, see [SMRT Analysis Software Installation](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-%28v2.0%29).
+
+## <a name="CommandLine"></a> Using the Command Line
+
+In a typical SMRT Analysis installation, SMRT Pipe is in your path after sourcing the ``setup.sh`` file. To do so, enter the following:
+```
+. /opt/smrtanalysis/etc/setup.sh
+```
+
+**Note**: Make sure to replace ``/opt/smrtanalysis`` with the path to your SMRT Analysis installation.
+
+To check that SMRT Pipe is available, enter the following:
+```
+smrtpipe.py --help
+```
+
+This displays a help message describing how to run smrtpipe.py and all of the available command-line options.
+
+You invoke SMRT Pipe with the following command:
+```
+smrtpipe.py [--help] [options] --params=settings.xml xml:inputFile
+```
+
+Logging messages are printed to stderr as well as to a log file (``log/smrtpipe.log``). It is standard practice to redirect the stderr messages to a file in your shell, for example by appending
+``&> smrtpipe.err`` to the command line if running under bash.
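+
+For example, using an input file like the one described in the next section:
+```
+smrtpipe.py --params=settings.xml xml:my_inputs.xml &> smrtpipe.err
+```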
+
+### <a name="CommandLineOptions"></a> Command Line Options
+
+Following are some of the available options for invoking ``smrtpipe.py``:
+
+```
+-D key=value
+```
+
+* Overrides a configuration variable. Configuration variables are key-value pairs that are read from the global file ``smrtpipe.rc`` before starting an analysis. An example is the ``NPROC`` variable which controls the number of simultaneous processors to use during the analysis. To restrict SMRT Pipe to 4 processors, use ``-D NPROC=4``.
+
+```
+--debug
+```
+* Activates debugging output in the stderr and log outputs. To set this flag as a default, specify ``DEBUG=True`` in the ``smrtpipe.rc`` file.
+
+```
+--distribute
+```
+* Distributes the computation across a compute cluster. For information on configuring SMRT Pipe for a distributed computation environment, see [SMRT Analysis Software Installation](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-%28v2.0%29).
+
+```
+--help
+```
+* Displays information about command-line usage and options, and then exits.
+
+```
+--noreports
+```
+* Turns off the production of XML/HTML/PNG reports.
+
+```
+--nohtml
+```
+* Turns off the conversion of XML reports into HTML. (This conversion **requires** that Java be installed.)
+
+```
+--output=outputDir
+```
+
+* Specifies a root directory to use for all SMRT Pipe outputs for this analysis. SMRT Pipe places outputs in this directory, as well as in data, results, and log subdirectories.
+
+```
+--params=params.xml
+```
+* Specifies a settings XML file for running the pipeline analysis. If this option is **not** specified, SMRT Pipe prints a message and then exits.
+
+```
+--totalCells
+```
+* Specifies that if the number of cells in the job is less than ``totalCells``, the job is **not** marked complete when it finishes. Data from additional cells will be appended to the outputs, until the number of cells reaches ``totalCells``. 
+
+```
+--recover
+```
+* Attempts to rerun a SMRT Pipe analysis starting from the last successful stage. The same initial arguments should be specified in this case.
+
+```
+--version
+```
+* Displays the version number of SMRT Pipe and then exits.
+
+```
+--kill
+```
+* Kills a SMRT Pipe job running in the current directory. This works with the ``--output`` option.
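+
+Combining several of these options, a typical distributed run might look like the following sketch (paths and values are illustrative):
+```
+smrtpipe.py --distribute -D NPROC=4 --output=/path/to/jobdir \
+    --params=settings.xml xml:my_inputs.xml &> smrtpipe.err
+```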
+
+### <a name="PipeInputs"></a> Specifying SMRT Pipe Inputs
+
+The input file is an XML file specifying the sequencing data to process. Generally, you specify the inputs as URIs (Universal Resource Identifiers) which are resolved by code internal to SMRT Pipe. In practice, this is most useful to large enterprise users that have a data management scheme and are able to modify the SMRT Pipe code to include their own resolver.
+
+The simpler way to specify inputs is to fully resolve the path to each input file, which, as of version 2.0.0, is a bax.h5 file. (For more information, see the bas.h5 Reference Guide at http://files.pacb.com/software/instrument/2.0.0/bas.h5%20Reference%20Guide.pdf.)
+
+The script ``fofnToSmrtpipeInput.py`` is provided to convert a FOFN (a "file of file names" file) to the input format expected by SMRT Pipe. If ``my_inputs.fofn`` looks like
+```
+/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524962550000001823079609281357_s1_p0.2.bax.h5
+/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524962550000001823079609281357_s1_p0.3.bax.h5
+/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524962550000001823079609281357_s1_p0.1.bax.h5
+```
+or, for SMRT Pipe versions <2.0.0:
+```
+/share/data/run_1/m100923_005722_00122_c15301919401091173_s0_p0.bas.h5
+/share/data/run_2/m100820_063008_00118_c04442556811011070_s0_p0.bas.h5
+```
+
+
+then it can be converted to a SMRT Pipe input XML file by entering:
+```
+fofnToSmrtpipeInput.py my_inputs.fofn > my_inputs.xml
+```
+Following is the resulting XML file for SMRT Pipe version 2.0.0:
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+  <dataReferences>
+    <url ref="run:0000000-0000"><location>/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524
+962550000001823079609281357_s1_p0.2.bax.h5</location></url>
+    <url ref="run:0000000-0001"><location>/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524
+962550000001823079609281357_s1_p0.3.bax.h5</location></url>
+    <url ref="run:0000000-0002"><location>/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524
+962550000001823079609281357_s1_p0.1.bax.h5</location></url>
+  </dataReferences>
+</pacbioAnalysisInputs>
+```
+
+...and for SMRT Pipe versions <2.0.0:
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+ <dataReferences>
+    <url ref="run:0000000-0000"><location>/share/data/
+    /share/data/run_1 m100923_005722_00122_c15301919401091173_s0_p0.bas.h5
+    <url ref="run:0000000-0001"><location>/share/data/
+    /share/data/run_2/m100820_063008_00118_c04442556811011070_s0_p0.bas.h5
+ </dataReferences>
+</pacbioAnalysisInputs>
+```
+
+To run an analysis using these input files, use the following command:
+```
+smrtpipe.py --params=settings.xml xml:my_inputs.xml
+```
+
+The SMRT Pipe input format lets you specify annotations, such as job IDs, job names, and job comments, in a job-management environment. The ``fofnToSmrtpipeInput.py`` application has command-line options for setting these optional attributes.
+
+**Note**: To get help for a script, execute the script with the ``--help`` option and no additional arguments. For example:
+```
+fofnToSmrtpipeInput.py --help
+```
+
+### <a name="PipeParams"></a> Specifying SMRT Pipe Parameters
+
+The ``--params`` option is the most important SMRT Pipe option, and is required for any sophisticated use. The option specifies an XML file that controls:
+
+* The analysis modules to run.
+* The **order** of execution.
+* The **parameters** used by the modules.
+
+The general structure of the settings XML file is as follows:
+```
+<?xml version="1.0"?>
+<smrtpipeSettings>
+
+<protocol>
+...global parameters...
+</protocol>
+
+<module id="module_1">
+...parameters...
+</module>
+
+<module id="module_2">
+...parameters...
+</module>
+
+</smrtpipeSettings>
+```
+
+* The ``protocol`` element sets global parameters that can be used by any module.
+* Each ``module`` element defines an analysis module to run. 
+* The order of the ``module`` elements defines the order in which the modules execute.
+
+SMRT Portal protocol templates are located in: ``$SEYMOUR_HOME/common/protocols/``.
+
+SMRT Pipe modules are located in: 
+``$SEYMOUR_HOME/analysis/lib/pythonx.x/pbpy-0.1-py2.7.egg/pbpy/smrtpipe/modules/``.
+
+You specify parameters by entering a key-value pair in a ``param`` element. 
+* The name of the key is in the name attribute of the ``param`` element.
+* The value of the key is contained in a nested value element. 
+
+For example, to set the parameter named ``reference``, you specify:
+```
+<param name="reference">
+  <value>/share/references/repository/celegans</value>
+</param>
+```
+
+**Note**: To reference a parameter value in other parameters, use the notation ``${variable}`` when specifying a value. For example, to reference a global parameter named home, use it in other parameters as ``${home}``. SMRT Pipe supports arbitrary parameters in the settings XML file, so the use of temporary variables like this can help readability and maintainability.
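+
+A minimal, illustrative fragment (path hypothetical) showing this substitution:
+```
+<protocol>
+  <param name="home">
+    <value>/share/references</value>
+  </param>
+  <param name="reference">
+    <value>${home}/repository/celegans</value>
+  </param>
+</protocol>
+```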
+
+Following is a complete example of a settings file for running filtering, mapping, and consensus steps against the E. coli reference genome:
+```
+<?xml version="1.0" encoding="utf-8"?>
+<smrtpipeSettings>
+ <protocol>
+  <param name="reference">
+   <value>/share/references/repository/ecoli</value>
+  </param>
+ </protocol>
+
+ <module name="P_Filter">
+  <param name="minLength">
+    <value>50</value>
+  </param>
+  <param name="readScore">
+    <value>0.75</value>
+  </param>
+ </module>
+
+ <module name="P_FilterReports" />
+
+ <module name="P_Mapping">
+  <param name="align_opts" hidden="true">
+   <value>--minAccuracy=0.75 --minLength=50 -x </value>
+  </param>
+ </module>
+
+ <module name="P_MappingReports" />
+ <module name="P_Consensus" />
+ <module name="P_ConsensusReports" />
+
+</smrtpipeSettings>
+```
+
+## <a name="PortalProtocols"></a> SMRT Portal Protocols
+
+Following are the secondary analysis protocols included in SMRT Analysis v2.0, with the SMRT Pipe module(s) called by each protocol. Many of these modules are described later in this document.
+
+```
+RS_AHA_Scaffolding
+```
+* P_Filter
+* HybridAssembly
+
+```
+RS_ALLORA_Assembly
+```
+* AlloraSFilter
+* Assembly
+
+```
+RS_ALLORA_Assembly_EC
+```
+* AlloraSFilter
+* Assembly
+
+```
+RS_CeleraAssembler
+```
+* P_PacBioToCA
+* P_CeleraAssembler
+
+```
+RS_Filter_Only
+```
+* P_Filter
+
+```
+RS_Minor_and_Compound_Variants
+```
+* P_Filter
+* BLASR_Minor_and_Compound_Variants
+* P_CorrelatedVariants
+
+```
+RS_Modification_Detection
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+* P_Modification Detection
+
+```
+RS_Modification_and_Motif_Analysis
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+* P_MotifFinder
+
+```
+RS_PreAssembler
+```
+* PreAssemblerSFilter
+* P_PreAssembler
+
+```
+RS_PreAssembler_Allora
+```
+* PreAssemblerSFilter
+* AlloraWithPreAssembler
+
+```
+RS_Resequencing
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+
+```
+RS_Resequencing_CCS
+```
+* P_Filter
+* BLASR_De_Novo_CCS
+* GenomicConsensus_Plurality
+
+```
+RS_Resequencing_CCS_GATK
+```
+* P_Filter
+* BLASR_De_Novo_CCS
+* P_GATKVC
+
+```
+RS_Resequencing_GATK
+```
+* P_Filter
+* P_Mapping
+* P_GATKVC
+
+```
+RS_Resequencing_GATK_Barcode
+```
+* P_Filter
+* BLASR_Barcode
+* P_GATKVC
+
+```
+RS_Site_Acceptance_Test
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+
+```
+RS_cDNA_Mapping
+```
+* P_Filter
+* P_GMAP
+
+```
+11k_Unrolled_Resequencing
+```
+* P_Filter
+* BLASR_Unrolled
+* P_MotifFinder
+* P_Modification Detection
+* P_AnalysisHook
+
+```
+ecoliK12_RS_Resequencing
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+* P_AnalysisHook
+
+```
+lambda_RS_Resequencing
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+* P_AnalysisHook
+
+## <a name="Modules"></a>  SMRT Pipe Modules and their Parameters
+Following is an overview of some of the common modules included in SMRT Pipe and their parameters. Not all modules or parameters are listed here. 
+
+Developers interested in even finer control should look inside the ``validateSettings`` method for each python analysis module. By convention, **all** of the settings known to the analysis module are referenced in this method.
+
+## <a name="Global"></a> Global Parameters
+
+Global parameters are potentially used in multiple modules. In the SMRT Pipe internals, they are accessed in the “global” namespace.  Following are some common global parameters:
+
+```
+reference
+```
+* Specifies the name of a reference repository entry or FASTA file for mapping reads. **Required** for resequencing workflows.
+* Default value: ``None``
+
+```
+control
+```
+* Specifies the name of a reference repository entry or FASTA file for mapping spike-in control reads. **Optional**
+* Default value: ``None``
+
+```
+use_subreads
+```
+* Specifies whether to divide reads into subreads using the adapter region boundaries found by the primary analysis software. **Optional**
+* Default value: ``True``
+
+```
+num_stats_regions
+```
+* Specifies how many regions to use when reporting region statistics such as depth of coverage and variant density. **Optional**
+* Default value: ``500``
+
+## <a name="P_Fetch"></a> P_Fetch Module
+
+This module fetches the input data and generates a file of the file names of the input .pls files for downstream analysis. This module has **no** exposed parameters.
+
+###Output:###
+
+* pls.fofn (file of file names of the input .pls files)
+
+## <a name="P_Filter"></a> P_Filter Module
+
+This module filters and trims the raw reads produced by Pacific Biosciences’ primary analysis software. Options are available for taking the information found in the bas.h5 files and using this to pass reads and portions of reads forward.
+
+###Input:###
+
+* bas.h5 files
+
+###Output:###
+
+* ``data/filtering_summary.csv``: Includes raw metrics and filtering information for each read (not subread) found in the original bas.h5 files.
+* ``rgn.h5`` (one for each input bas.h5 file): Filtering information generated by the module.
+
+###Parameters:###
+
+* ``minLength``  Reads with a high quality region read length below this threshold are filtered out. **(Optional)**
+
+* ``maxLength``  Reads with a high quality region read length above this threshold are filtered out. **(Optional)**
+
+* ``minSubReadLength``  Subreads **shorter** than this length are filtered out.
+
+* ``maxSubReadLength``  Subreads **longer** than this length are filtered out.
+
+* ``minSNR``  Reads with signal-to-noise ratio below this threshold are filtered out. **(Optional)**
+
+* ``readScore`` Reads with a high quality region (Read Quality) score below this threshold are filtered out. **(Optional)**
+
+* ``trim`` Default value = ``True``, Specifies whether to trim reads to the high-quality region. **(Optional)**
+
+* ``artifact``  Reads with a read artifact score less than this (negative) number are filtered out. No number indicates no artifact filtering. Reasonable thresholds are typically between -1000 and -200. **(Optional)**
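+
+For example, a settings fragment applying the length and quality filters described above (values are illustrative):
+```
+<module name="P_Filter">
+  <param name="minLength">
+    <value>500</value>
+  </param>
+  <param name="readScore">
+    <value>0.80</value>
+  </param>
+</module>
+```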
+
+## <a name="P_Pre"></a> P_PreAssembler Module
+
+This module takes as input long reads and short reads in standard formats, aligns the short reads to the long reads, and outputs a consensus from the preassembled short reads using the long reads as seeds.
+**Note:** You **must** run the ``P_Fetch`` and ``P_Filter`` modules before running ``P_PreAssembler`` to get meaningful results.
+
+###Input:###
+
+* **Long reads ("seed reads")**: PacBio pls.h5/bas.h5 file(s) and optionally associated rgn.h5 file(s).
+* **Short reads**: Can be one of the following:
+ * PacBio CCS pls.h5/bas.h5 file(s), without associated rgn.h5 file(s).
+ * Arbitrary high-quality reads in FASTQ format, such as Illumina reads, without Ns.
+ * PacBio pls.h5/bas.h5 file(s): The same reads as used for the long reads. This mode is the first step of HGAP (Hierarchical Genome Assembly Procedure).
+* ``params.xml``
+* ``input.xml``
+
+The module can run on bas.h5 files only, and on bas.h5 and FASTQ files. Following are sample XML inputs for both modes.
+
+###Sample input.xml (bas.h5-only input mode)###
+
+* **Note:** bas.h5 input files must have the suffix bas.h5.
+
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+   <dataReferences>
+      <url ref="run:0000000-0001">
+         <location>
+            /path/to/input.bas.h5
+         </location>
+      </url>
+   </dataReferences>
+</pacbioAnalysisInputs>
+```
+
+###Sample params.xml (bas.h5-only input mode)###
+* This XML parameter file was tested on 90X short reads and 24X long reads.
+
+```
+<module name="P_PreAssembler">
+   <param name="useFastqAsShortReads">
+     <value>False</value>
+   </param>
+   <param name="useFastaAsLongReads">
+     <value>False</value>
+   </param>
+   <param name="useLongReadsInConsensus">
+     <value>False</value>
+   </param>
+   <param name="useUnalignedReadsInConsensus">
+     <value>False</value>
+   </param>
+   <param name="useCCS">
+     <value>False</value>
+   </param>
+   <param name="minLongReadLength">
+     <value>5000</value>
+   </param>
+   <param name="blasrOpts">
+     <value> -minReadLength 200 -maxScore -1000 -bestn 24 -maxLCPLength 16 -nCandidates 24 </value>
+   </param>
+   <param name="consensusOpts">
+     <value> -L </value>
+   </param>
+   <param name="layoutOpts">
+     <value> --overlapTolerance 100 --trimHit 50 </value>
+   </param>
+   <param name="consensusChunks">
+     <value>60</value>
+   </param>
+   <param name="trimFastq">
+     <value>True</value>
+   </param>
+   <param name="trimOpts">
+     <value> --qvCut=59.5 --minSeqLen=500 </value>
+   </param>
+</module>
+```
+
+###Sample params.xml (bas.h5-only input mode, CCS)###
+
+```
+<?xml version="1.0" ?>
+<smrtpipeSettings>
+  <module name="P_Fetch"/>
+  <module name="P_PreAssembler">
+   <param name="useFastqAsShortReads">
+     <value>False</value>
+   </param>
+   <param name="useFastaAsLongReads">
+     <value>False</value>
+   </param>
+   <param name="useLongReadsInConsensus">
+     <value>False</value>
+   </param>
+   <param name="useUnalignedReadsInConsensus">
+     <value>False</value>
+   </param>
+   <param name="blasrOpts">
+     <value>-advanceHalf -noSplitSubreads -ignoreQuality -minMatch 10 -minPctIdentity 70 -bestn 20</value>
+   </param>
+   <param name="layoutOpts">
+     <value>--overlapTolerance=25</value>
+   </param>
+</module>
+</smrtpipeSettings>
+```
+
+###Sample input.xml (FASTQ and bas.h5 input mode)###
+
+* This parameter XML file was tested on 50X 100bp Illumina® reads correcting 15X PacBio long reads.
+
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+   <dataReferences>
+     <url ref="run:0000000-0001">
+   <location>
+     /path/to/input.bas.h5
+   </location>
+   </url>
+     <url ref="fastq:/path/to/input.fastq"/>
+   </dataReferences>
+</pacbioAnalysisInputs>
+```
+
+###Sample params.xml (FASTQ and bas.h5 input mode)###
+
+```
+<?xml version="1.0" ?>
+<smrtpipeSettings>
+  <module name="P_Fetch"/>
+  <module name="P_Filter">
+    <param name="filters">
+       <value>MinRL=1000,MinReadScore=0.80</value>
+    </param>
+    <param name="artifact">
+       <value>-1000</value>
+    </param>
+  </module>
+  <module name="P_PreAssembler">
+    <param name="useFastqAsShortReads">
+       <value>True</value>
+    </param>
+    <param name="useFastaAsLongReads">
+       <value>False</value>
+    </param>
+    <param name="useLongReadsInConsensus">
+       <value>False</value>
+    </param>
+    <param name="useUnalignedReadsInConsensus">
+       <value>False</value>
+    </param>
+    <param name="blasrOpts">
+       <value>-minMatch 8 -minReadLength 30 -maxScore -100 -minPctIdentity 70 -bestn 100</value>
+    </param>
+    <param name="layoutOpts">
+       <value>--overlapTolerance=25</value>
+    </param>
+    <param name="consensusOpts">
+       <value>-w 2</value>
+    </param>
+</module>
+</smrtpipeSettings>
+```
+
+###Output:###
+
+* ``corrected.fasta``, ``corrected.fastq``: Corrected long reads, in FASTA and FASTQ format.
+* ``idmap.csv``: CSV file mapping corrected long-read IDs to original read IDs.
+
+## <a name="P_Map"></a> P_Mapping (BLASR) Module
+
+This module aligns reads against a reference sequence, possibly a multi-contig reference.
+If the ``P_Filter`` module is run first, then **only** the reads which passed filtering are aligned.
+
+###Output:###
+
+* ``data/aligned_reads.cmp.h5``: The pairwise alignments for each read.
+* ``data/alignment_summary.gff``: Summary information.
+
+###Parameters:###
+
+* ``align_opts`` Default value = ``Empty string``, Passes options to the underlying ``compareSequences.py`` script. **(Optional)**
+
+* ``--useCcs=`` Default value = ``None``, A parameter sent to the underlying ``compareSequences.py`` script via the ``align_opts`` parameter value above. Values are ``{denovo|fullpass|allpass}``. **(Optional)**
+
+  * ``denovo``: Maps just the _de novo_ called sequence and reports it. (Does not include quality values.)
+
+  * ``fullpass``: Maps the _de novo_ called sequence, then aligns full passes to the sequence that the _de novo_ called sequence aligns to.
+
+  * ``allpass``: Maps the _de novo_ called sequence, then aligns all passes (even ones that don't span the length of the template) to the sequence the _de novo_ called sequence aligned to.
+
+
+* ``load_pulses``: Default value = ``True``, Specifies whether to load pulse metric information into the cmp.h5 file. **(Optional)**
+
+* ``maxHits``: Default value = ``None``, Attempts to find sub-optimal alignments and report up to this many hits per read. **(Optional)**
+
+* ``minAnchorSize``: Default value = ``None``, Ignores anchors **smaller** than this size when finding candidate hits for dynamic programming alignment. **(Optional)**
+
+* ``maxDivergence``: Default value = ``None``, Specifies maximum divergence between read and reference to allow a mapping. Divergence = (1 - accuracy).
+
+* ``output_bam``: Default value = ``False``, Specifies whether to output a BAM representation of the cmp.h5 file. **(Optional)**
+
+* ``output_sam``: Default value = ``False``, Specifies whether to output a SAM representation of the cmp.h5 file. **(Optional)**
+
+* ``gff2Bed``: Default value = ``False``, Specifies whether to output a BED representation of the depth of coverage summary. **(Optional)**
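+
+For example, an illustrative settings fragment that passes BLASR options through ``align_opts`` and requests SAM output:
+```
+<module name="P_Mapping">
+  <param name="align_opts">
+    <value>--minAccuracy=0.75 --minLength=50 --useCcs=fullpass</value>
+  </param>
+  <param name="output_sam">
+    <value>True</value>
+  </param>
+</module>
+```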
+
+## <a name="P_Quiver"></a> P_GenomicConsensus (Quiver) Module
+
+This module takes the alignments generated by the ``P_Mapping`` module and calls the consensus sequence across the reads.
+
+###Input:###
+
+* ``data/aligned_reads.cmp.h5``: The pairwise alignments for each read.
+
+* ``data/alignment_summary.gff``: Summary information.
+
+###Output:###
+
+* ``data/aligned_reads.cmp.h5``
+
+* ``data/variants.gff.gz``: A gzipped GFF3 file containing variants versus the reference.
+
+* ``data/consensus.fastq.gz``: The consensus sequence in FASTQ format.
+
+* ``data/alignment_summary.gff``, ``data/variants.vcf``: Useful information about variants.
+
+###Parameters:###
+
+* ``makeBed``: Default value = ``True``, Specifies whether to output a BED representation of the variants. **(Optional)**
+
+* ``makeVcf``: Default value = ``True``, Specifies whether to output a VCF representation of the variants. **(Optional)**
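+
+For example, an illustrative settings fragment that disables the BED output while keeping the VCF:
+```
+<module name="P_GenomicConsensus">
+  <param name="makeBed">
+    <value>False</value>
+  </param>
+  <param name="makeVcf">
+    <value>True</value>
+  </param>
+</module>
+```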
+
+## <a name="P_Hook"></a> P_AnalysisHook Module
+
+This module allows you to call executable code as part of a SMRT Pipe analysis. ``P_AnalysisHook`` can be called multiple times in a settings XML file, allowing for an arbitrary number of calls to external (non-SMRT Pipe) code.
+
+###Parameters:###
+
+* ``scriptDir``: Default value = ``None``, All executables in this directory are called serially with the command line ``exeCmd jobDir``, where ``jobDir`` is the root of the SMRT Pipe output for this analysis. **(Optional)**
+
+* ``script``: Default value = ``None``, Path to an executable called with the command line ``exeCmd jobDir``, where ``jobDir`` is the root of the SMRT Pipe output for this analysis. **(Optional)**
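+
+As an illustration, a trivial hook script (paths hypothetical) that archives the variant calls from a finished job; SMRT Pipe invokes it as ``script jobDir``:
+```
+#!/bin/bash
+# Invoked by P_AnalysisHook as: /path/to/archive_variants.sh <jobDir>
+jobDir="$1"
+# data/variants.gff.gz is produced by the P_GenomicConsensus module
+cp "$jobDir/data/variants.gff.gz" /path/to/archive/
+```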
+
+## <a name="P_Allora"></a> Assembly (Allora Assembly) Module
+
+This module takes the trimmed reads which pass filtering and attempts to assemble them into contiguous sequences (contigs).
+
+###Input:###
+
+* ``input.xml`` with bas.h5
+
+* You can also run Allora on raw FASTA or FASTQ files using the following syntax: ``smrtpipe.py --params=settings.xml input.fast[a|q]``
+
+###Sample settings.xml file:###
+
+```
+<?xml version="1.0" ?>
+<smrtpipeSettings>
+  <module id="Assembly" label="Allora v1">
+   <param label="Overlap permissiveness" name="overlapScoreThreshold">
+    <title>
+      Overlap permissiveness affects how strictly or permissively reads and contigs will be joined.
+    </title>
+    <value>700</value>
+    <select>
+     <option value="1300">Least permissive</option>
+     <option value="1000">Less permissive</option>
+     <option value="700">Normal</option>
+     <option value="500">More permissive</option>
+     <option value="300">Most permissive</option>
+    </select>
+   </param>
+   <param label="Expected genome size (bp)" name="genomeSize">
+    <title>
+      The expected genome size helps to estimate coverage and can lead to more accurate assembly.
+    </title>
+    <value>50000000</value>
+    <rule message="Value must be positive" min="1" type="number"/>
+   </param>
+   <param label="Minimum number of iterations" name="minIterations">
+    <title>
+     The minimum number of iterations before the algorithm halts.
+    </title>
+    <value>1</value>
+    <rule max="1000" message="Value must be an integer between 0 and 1000" min="0" type="digits"/>
+   </param>
+   <param label="Maximum number of iterations" name="maxIterations">
+    <title>
+     The maximum number of iterations before the algorithm halts.
+    </title>
+    <value>10</value>
+    <rule max="1000" message="Value must be an integer between 0 and 1000" min="0" type="digits"/>
+   </param>
+   <param name="detectChimeras">
+    <title>
+     Whether to detect chimeras using all-vs-all read comparison.
+    </title>
+    <value>True</value>
+   </param>
+   <param name="detectChimerasOptions">
+    <value>--detector=Iterative:threshold=2</value>
+   </param>
+   <param name="trimLayouts">
+    <value>False</value>
+   </param>
+   <param label="Write an ACE output file." name="outputAce">
+    <value>False</value>
+   </param>
+   <param label="Write an AMOS bank directory." name="outputBank">
+    <value>True</value>
+   </param>
+   <param hidden="true" name="autoParameters">
+    <value>True</value>
+   </param>
+  </module>
+</smrtpipeSettings>
+```
+
+###Output:###
+
+* ``data/assembled.fsta``: A FASTA file containing the assembled contig consensus sequences.
+* ``data/assembled_reads.cmp.h5``: The pairwise alignments for each read against its assembled contig consensus.
+* ``data/assembled_summary.gff``: Summary information about each of the contigs.
+* ``data/assembled.ace``: The assembly, in ACE format. **(Optional)**
+* ``data/assembled.bnk.tar.gz``: The assembly, as a compressed AMOS bank. **(Optional)**
+
+###Parameters:###
+
+* ``overlapScoreThreshold``: Default value = ``700``, The score threshold for accepting an overlap between two reads. The suggested range is between 300 and 1500, with 700 being a typical threshold. **(Optional)**
+
+* ``genomeSize``: Default value = ``100,000 bases``, A rough estimate of the expected size of this genome. This provides an estimate of expected coverage and modulates parameters based on genome size. **(Optional, but strongly recommended.)**
+
+* ``maxIterations``: Default value = ``10``, Specifies the **maximum number** of iterations for progressive assembly. **(Optional)**
+
+* ``minIterations``: Default value = ``4``, Specifies the **minimum number** of iterations for progressive assembly. **(Optional)**
+
+* ``outputAce``: Default value = ``False``, Specifies whether to output an ACE representation of the assembly. **(Optional)**
+
+* ``outputBank``: Default value = ``False``, Specifies whether to output an AMOS representation of the assembly. **(Optional)**
+
+* ``autoParameters``: Default value = ``False``, Specifies whether alignment parameters should be automatically set by the module. **Note:** You can sometimes improve the accuracy of contigs output from the module by feeding these contigs to the Resequencing protocol as a reference sequence, and re-aligning the reads to this reference. The resulting consensus will often improve on the original _de novo_ assembly accuracy.
+
+## <a name="P_AHA"></a> HybridAssembly (AHA Scaffolding) Module
+
+This module scaffolds high-confidence contigs, such as those from Illumina® data, using Pacific Biosciences’ long reads.
+
+###Input:###
+
+* ``settings.xml``: Specifies the parameters to run.
+* ``input.xml``: Specifies the inputs.
+
+``HybridAssembly.py`` uses two kinds of input instead of one:
+
+* A FASTA file of high-confidence sequences to be scaffolded. These are typically contigs assembled from Illumina® short-read sequence data.
+
+* Pacific Biosciences’ long reads, in HDF5 or FASTA format. These are used to join the high-confidence contigs into a scaffold.
+
+###Sample input.xml file:###
+
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+<dataReferences>
+<!-- High-confidence sequences fasta file -->
+<url ref="assembled_contigs:test_contigs.fsta"/>
+<!-- PacBio reads, either in fasta or in bas.h5 format. -->
+<url ref="file:test_reads.fsta" />
+</dataReferences>
+</pacbioAnalysisInputs>
+```
+
+###Sample settings.xml file for long reads, with only customer-facing parameters:###
+
+```
+<?xml version="1.0"?>
+<smrtpipeSettings>
+
+  <!-- HybridAssembly 1.2.0 parameter file for long reads -->
+  <module name="HybridAssembly">
+
+  <!-- General options -->
+  <!-- Parameter schedules are used for iterative hybrid assembly.
+They are given in comma-delimited tuples separated by semicolons. The fields, in order, are:
+- Minimum alignment score (aka Z-score). Higher is more stringent.
+- Minimum number of reads needed to link two contigs. (Redundancy)
+- Minimum subread length to participate in alignment.
+- Minimum contig length to participate in alignment.
+If a tuple contains fewer than 4 fields, defaults will be used for the remaining fields. -->
+
+  <paramSchedule>6,3,75;6,3,75;6,2,75;6,2,75</paramSchedule>
+
+<!-- Untangling occurs after the main scaffolding step. 
+Valid values are "bambus" and "pacbio" (recommended and the default). -->
+  <untangler>pacbio</untangler>
+
+<!-- Gap fill-in can be turned on by setting fillin to True, or off by setting it to False -->
+  <fillin>False</fillin>
+
+<!-- These options allow long reads -->
+  <longReadsAsStrobe>True</longReadsAsStrobe>
+  <blasrOpts>-minMatch 10 -minPctIdentity 70 -bestn 10 -noSplitSubreads</blasrOpts>
+
+<!-- Parallelization options -->
+  <numberProcesses>16</numberProcesses>
+  </module>
+</smrtpipeSettings>
+```
+
+###Output:###
+
+* ``data/scaffold.gml``: A GraphML file that contains the final scaffold. This file can be readily parsed in python using the networkx package.
+
+* ``data/scaffold.fasta``: A FASTA file with a single entry for each scaffold.
+
+To run ``HybridAssembly.py``, enter the following:
+
+```
+smrtpipe.py --params=settings.xml xml:input.xml >& smrtpipe.err
+```
+
+###Parameters:###
+
+* ``paramSchedule``: Default value = ``None``, Specifies parameter schedules used for iterative hybrid assembly. Schedules are in comma-delimited tuples, separated by semicolons. **Example:** ``6,3,75;6,3,75;6,2,75;6,2,75``. The fields, in order, are:
+
+  * Minimum alignment score. Higher is more stringent.
+  * Minimum number of reads needed to link two contigs. (Redundancy)
+  * Minimum subread length to participate in alignment.
+  * Minimum contig length to participate in alignment.
+
+
+* ``untangler``: Default value = ``pacbio``, Untangling occurs **after** the main scaffolding step. Valid values are bambus and pacbio. **(Recommended)**
+
+* ``fillin``: Default value = ``False``, Specifies whether to run the gap fill-in step after scaffolding.
+
+* ``blasrOpts``: Default value = ``-minMatch 10 -minPctIdentity 60 -bestn 10 -noSplitSubreads``, Options passed directly to BLASR for aligning reads to contigs.
+
+* ``maxIterations``: Default value = ``6``, Specifies the maximum number of iterations to use from ``paramSchedule``. If ``paramSchedule`` has more than ``maxIterations`` entries, it is truncated at ``maxIterations``. If it has fewer, the last iteration of ``paramSchedule`` is repeated.
+
+* ``cleanup``: Default value = ``True``, Specifies whether to clean up intermediate files. This can be useful for debugging purposes.
+
+* ``runNucmer``: Default value = ``True``, Specifies whether to use ``Nucmer`` to detect repeat locations. This can improve assemblies, but can be very slow on large highly repetitive genomes.
+
+* ``maxContigsLength``: Default value = ``170000000``, Specifies the **maximum** total length of contigs. **Note:** Overriding this value could be risky.
+
+* ``maxNumContigs``: Default value = ``20000``, Specifies the **maximum** number of contigs. **Note:** Overriding this value could be risky.
+
+* ``maxNumReads``: Default value = ``2000000``, Specifies the **maximum** number of reads. **Note:** Overriding this value could be risky.
+
+* ``gapFillOpts``: Default value = ``""``, Options passed directly to ``gapFiller.py``.
+
+* ``noScaffoldImages``: Default value = ``True``, When ``True``, suppresses production of SVG images of the scaffolds. Creating these files can be expensive for large assemblies, but is recommended for small assemblies.
+
+###Known Issues###
+
+* Depending on the repetitive content of the high-confidence input contigs, a large fraction of the sequence in the contigs can be called repeats. To avoid this, turn off the split repeats step by setting the minimum repeat identity to a number greater than 100, for example:
+```
+<minRepeatIdentity>1000</minRepeatIdentity>
+```
+
+## <a name="P_GATK"></a> P_GATKVC (GATK Unified Genotyper) Module
+
+This module wraps the Broad Institute's GATK Unified Genotyper for Bayesian diploid and haploid SNP calling, using base quality score recalibration and default settings. The module calls both homozygous and heterozygous SNPs.
+
+We recommend that you use a dbSNP file as a prior for base quality score recalibration. By default, the P_GATKVC (GATK Unified Genotyper) module uses a null prior. To use a prior, see the script ``vcfUploader.py``, included with the SMRT Analysis installation.
+
+**Note:** Indel calling and other options are **not** currently supported through SMRT Pipe.
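+
+A minimal settings sketch mirroring the RS_Resequencing_GATK protocol listed earlier (the module itself exposes no required parameters here):
+```
+<module name="P_Filter" />
+<module name="P_Mapping" />
+<module name="P_GATKVC" />
+```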
+
+## <a name="P_MOD"></a> P_Modification Detection Module
+
+This module uses the cmp.h5 output by the ``P_Mapping`` module to:
+
+1. Compare observed IPDs in the cmp.h5 file at each reference position on each strand with control IPDs. Control IPDs are supplied by either an in-silico computational model, or observed IPDs from unmodified “control” DNA.
+
+2. Generate ``modifications.csv`` and ``modifications.gff``, reporting statistics on the IPD comparison.
+
+###Predicted Kinetic Background Control vs Case-Control Analysis###
+
+By default, the control IPDs are generated per-base of the reference with an in-silico model of the expected IPD values for each position, based on sequence context. The computational model is called the **Predicted IPD Background Control**. Even in normal unmodified DNA, the IPD at any particular point will vary. Internal studies at Pacific Biosciences show that most of the variation in mean IPD across a genome can be predicted from a 12-base sequence context surrounding the active site [...]
+
+###Filtering and Trimming###
+
+Some PacBio data features require special attention for good modification detection performance. The module inspects the alignment between the observed bases and the reference sequence. For an IPD measurement to be included in the analysis, the read sequence must match the reference sequence for K around the cognate base; currently, K = 1. The IPD distribution at some locus can be seen as a mixture of the “normal” incorporation process IPD (sensitive to the local sequence context and DNA [...]
+
+**Pauses** are defined as pulses with an IPD >10x longer than the mean IPD at that context. Heuristics are used to filter out the pauses.
+
+###Statistical Testing###
+
+The module tests the hypothesis that IPDs observed at a particular locus in the sample have longer means than IPDs observed at the same locus in unmodified DNA. If a Whole-Genome-Amplified dataset is generated, which removes DNA modifications, the module uses a case-control, two-sample t-test.
+
+The module also provides a pre-calibrated **Predicted Kinetic Background Control** model which predicts the unmodified IPD, given a 12-base sequence context. In that case, the module uses a one-sample t-test, with an adjustment to account for error in the control model.
+
+###Input:###
+
+* ``aligned_reads.cmp.h5``: A standard cmp.h5 file with alignments and IPD information that supplies the kinetic data for modification detection.
+
+* Reference Sequence: The path to a SMRT Portal reference repository entry for the reference sequence used to perform alignments.
+
+###Output:###
+
+* ``modifications.csv``: Contains one row for each (reference position, strand) pair that appeared in the dataset with coverage of at least x. (x defaults to 3, but is configurable using the ``ipdSummary.py --minCoverage`` flag.) The reference position index is 1-based for compatibility with the GFF file in the R environment.
+
+* ``modifications.gff``: Each template position/strand pair whose p-value exceeds the p-value threshold displays as a row. (The default threshold is ``p=0.01`` or ``score=20``.) The file is compliant with the GFF version 3 specification, and the template position is 1-based, per the GFF specification. The strand column refers to the strand carrying the detected modification, which is the opposite strand from those used to detect the modification.
+
+The auxiliary data column of the GFF file contains other statistics useful for downstream analysis or filtering. This includes the coverage level of the reads used to make the call, and +/- 20 bp sequence context surrounding the site.
+
+Results are generally indexed by reference position and reference strand. In all cases, the strand value refers to the strand carrying the modification in the DNA sample. The kinetic effect of the modification is observed in read sequences aligning to the opposite strand, so reads aligning to the positive strand carry information about modification on the negative strand and vice versa. The module **always** reports the strand containing the putative modification.
+
+###Parameters###
+
+* ``identifyModifications``: Default value = ``False``, Specifies whether to use a multi-site model to identify the modification type.
+
+* ``tetTreated``: Default value = ``False``, Specifies whether the sample was TET-treated to amplify the signal of m5C modifications.
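+
+An illustrative settings fragment enabling modification-type identification (module name as referenced in the P_MotifFinder section below):
+```
+<module name="P_ModificationDetection">
+  <param name="identifyModifications">
+    <value>True</value>
+  </param>
+</module>
+```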
+
+## <a name="Celera"></a> RS_Celera Assembler Workflow
+
+This workflow (comprised of the ``P_PacBioToCA`` and ``P_CeleraAssembler`` modules) wraps the Celera® Assembler’s error correction and assembly programs.
+
+For full documentation of pacBioToCA and the Celera Assembler, see
+http://sourceforge.net/apps/mediawiki/wgs-assembler/index.php?title=Main_Page.
+
+The error correction may be run with external high confidence reads, such as those from Illumina® data, or from internally generated CCS reads.
+
+###Input:###
+
+* ``settings.xml``: Specifies the parameters.
+
+* ``input.xml``: Specifies the inputs.
+
+###Sample settings.xml file:###
+
+```
+<module id="P_PacBioToCA" label="PacBioToCA v1" editableInJob="true">
+  <description>This module wraps pacBioToCA, the error correction pipeline of Celera Assembler v7.0</description>
+```
+
+If ``useCCSFastq`` is set to ``True``, CCS reads are used as the high-confidence reads:
+
+```
+  <param name="useCCSFastq" label="Correct With CCS">
+     <title>Use CCS reads to correct long reads</title>
+     <value>True</value>
+     <input type="checkbox" />
+  </param>
+```
+
+To use a FASTQ file, set the value of ``shortReadFastqA`` to the full path.
+
+```
+  <param name="shortReadFastqA" label="FASTQ to Correct With">
+     <title>(Optional) FASTQ file of reads to correct long reads with </title>
+     <input type="text" />
+     <value></value>
+     <rule remote="api/protocols/resource-exists?paramName=shortReadFastqA" required="false" message="File does not exist" />
+  </param>
+```
+
+To use a FASTQ file, set the value of ``shortReadTechnology`` to the sequencing platform that generated the reads: ``sanger``, ``454``, or ``illumina``. (Illumina® is shown in the example.)
+
+```
+  <param name="shortReadTechnology" label="FASTQ Read Type">
+     <title>Sequencing platform used to generate the FASTQ file, if specified</title>
+     <value>illumina</value>
+     <select>
+       <option value="sanger">~1kb (e.g., Sanger and PacBio CCS)</option>
+       <option value="454">~600bp (e.g., 454)</option>
+       <option value="illumina">~100bp (e.g., Illumina)</option>
+     </select>
+  </param>
+```
+
+To use a FASTQ file, set the value of ``shortReadType`` to the correct quality encoding value: ``sanger``, ``solexa``, or ``illumina``. (Illumina® encoding is shown in the example.)
+
+```
+  <param name="shortReadType" label="FASTQ Quality Value Encoding">
+     <title>ASCII encoding of the quality values in the FASTQ file, if specified</title>
+     <value>illumina</value>
+     <select>
+       <option value="sanger">Phred+33 (e.g., Sanger and PacBio fastq)
+       </option>
+       <option value="solexa">Solexa+64 (e.g., Solexa fastq)</option>
+       <option value="illumina">Phred+64 (e.g., Illumina fastq)</option>
+     </select>
+  </param>
+
+  <param name="pbReadMinLength" label="Min fragment length">
+     <title>Minimum length of PacBio RS fragment to keep.</title>
+     <input type="text" />
+     <value>1000</value>
+     <rule type="digits" message="Value must be an integer" required="true" />
+  </param>
+
+  <param name="specInPacBioToCA" label="Pre-defined spec file">
+    <title>Enter the server path to an existing spec file</title>
+    <input type="text" />
+    <rule remote="api/protocols/resource-exists?paramName=specInPacBioToCA" required="false" message="File does not exist" />
+  </param>
+</module>
+
+<module id="P_CeleraAssembler" label="CeleraAssembler v1" editableInJob="true">
+   <description>This module wraps the Celera Assembler v7.0</description>
+
+  <param name="genomeSize" label="Genome Size (bp)">
+     <title>Approximate genome size in base pairs</title>
+     <value>5000000</value>
+     <input type="text" />
+     <rule type="digits" message="Must be a value between 1 and 200000000" min="1"
+     required="true" max="200000000" />
+  </param>
+
+  <param name="defaultFrgMinLen" hidden="true">
+    <input type="text" />
+    <value>1000</value>
+  </param>
+
+  <param name="xCoverage" label="Target Coverage">
+    <title>Fold coverage to target when picking frgMinLen for assembly. Typically 15 to 25.</title>
+    <input type="text" />
+    <value>15</value>
+    <rule type="digits" message="Value must be an integer between 10 and 30, inclusive” min="10" max="30" />
+  </param>
+
+  <param name="ovlErrorRate" label="Overlapper error rate">
+    <title>Overlapper error rate</title>
+    <input type="text" />
+    <value>0.015</value>
+    <rule type="number" message="Value must be numeric" />
+  </param>
+
+  <param name="ovlMinLen" label="Overlapper min length">
+    <title>Overlaps shorter than this length are not computed.</title>
+    <input type="text" />
+    <value>40</value>
+    <rule type="digits" message="Value must be an integer" />
+  </param>
+
+  <param name="specInRunCA" label="Pre-defined spec file">
+    <title>Enter the server path to an existing spec file</title>
+    <input type="text" />
+    <rule remote="api/protocols/resource-exists?paramName=specInRunCA" required="false" message="File does not exist" />
+  </param>
+
+ </module>
+</smrtpipeSettings>
+```
+
+###Output:###
+
+**Note:** Some of the workflow outputs are produced by the Celera Assembler, and some by Pacific Biosciences’ software.
+
+* ``data/runCa.spec``: The specification file used to run the assembly program. The ``P_CeleraAssembler`` module auto-generates the specification file based on the input data and selected parameters. Alternatively, you can provide an explicit specification file.
+
+* ``data/pacBioToCA.spec``: The specification file used to run the error correction program. The ``P_PacBioToCA`` module auto-generates the specification file based on the input data and selected parameters. Alternatively, you can provide an explicit specification file.
+
+* ``data/celera-assembler.asm``: The official output of Celera Assembler’s assembly program.
+
+* ``data/assembled_reads.cmp.h5``: The pairwise alignment for each read against its assembled contig consensus.
+
+* ``data/assembled_summary.gff.gz``: Summary information about each of the contigs.
+
+* ``data/castats.txt``: Assembly statistics report.
+
+To run the error correction and assembly modules, enter the following:
+```
+smrtpipe.py --params=settings.xml xml:input.xml >& smrtpipe.err
+```
+
+## <a name="P_Cor"></a> P_CorrelatedVariants (Minor and Compound Variants) Module
+
+This module calls and correlates rare variants from a sample and provides support for determining whether or not sets of mutations are co-located. **Note:** This only includes SNPs, not indels.
+
+The module takes high-coverage CCS reads that are aligned without quality scores to a similar reference. The module requires the following:
+
+* CCS reads **only**.
+* High coverage (at least 500x).
+* Alignment to reference without using quality scores.
+* The sample **cannot** be highly divergent from the reference.
+
+The algorithm uses simple column counting and a plurality call. While it works well at higher depths (> 500x), it is susceptible to reference bias, systematic alignment error, and sizeable divergence from the reference.
+
+Variants may not only coexist on the same molecule, but they may also be coselected for; that is, inherited together. This algorithm attempts to define a measurable relationship between a set of co-located variants found with some significance on a set of reads.
+
+1. The variant information is read from the GFF input file, then the corresponding cmp.h5 file is searched for reads that cover the variant. Reads that contain the variant are tracked and later assigned any other variants they contain, building a picture of the different haplotypes occurring within the read sets.
+
+2. The frequencies and coverage values for each haplotype are computed. These values will likely deviate (to the downside) from those found in the GFF file as the read set is constrained by whether or not they completely span the region defined by the variant set. Only reads that cover all variants are included in the frequency and coverage calculation.
+
+3. The frequency and coverage values are used to calculate an observed probability of each permutation within the variant set. These probabilities are used to compute the Mutual Information score for the set. Frequency informs mutual information, but does not define it. It is possible to have a lower frequency variant set with a higher mutual information score than a high frequency one.
+
+###Input:###
+
+* A GFF file containing CCS-based variant calls at each position including read information: ID, start, and stop. (start and stop are in (+) strand genomic coordinates.)
+
+* A cmp.h5 alignment file aligned without quality, and with a minimum accuracy of 95%.
+
+* **(Optional)** ``score``: Include the mutual information score in the output. (Default: Don't include.)
+* **(Optional)** ``out``: The output file name. (Default: Output to screen.)
+
+###Output:###
+
+* ``data/rare_variants.gff(.gz)``: Contains rare variant information, accessible from SMRT Portal.
+
+* ``data/correlated_variants.gff``: Accessible from SMRT Portal.
+
+* ``results/topRareVariants.xml``: Viewable report based on the contents of the GFF file.
+
+* CSV file containing the location and count of co-variants. Example:
+
+```
+ref,haplotype,frequency,coverage,percent,mutinf
+ref000001,285-G|297-G,133,2970,4.48,0.263799623501
+ref000001,285-T|286-T,128,2971,4.31,0.256253924909
+ref000001,285-G|406-G,103,2963,3.48,0.217737973781
+ref000001,99-C|285-G,45,2963,1.52,0.113489812305
+ref000001,286-T|406-G,43,2963,1.45,0.109404796397
+ref000001,285-G|286-T,38,2971,1.28,0.0987697454578
+ref000001,99-C|286-T,31,2963,1.05,0.0838430015349
+```
+
+
+## <a name="P_Motif"></a> P_MotifFinder (Motif Analysis) Module
+
+This module finds sequence motifs containing base modifications. The primary application is finding restriction-modification systems in prokaryotic genomes. ``P_MotifFinder`` analyzes the output of the ``P_ModificationDetection`` module.
+
+###Input:###
+
+* ``modifications.csv``: Contains one row for each (reference position, strand) pair that appeared in the dataset with coverage of at least x.
+
+* ``modifications.gff``: Each template position/strand pair whose p-value exceeds the p-value threshold displays as a row.
+
+###Output:###
+
+* ``data/motif_summary.csv``: A summary of the detected motifs, as well as the evidence for motifs.
+
+* ``data/motifs.gff``: A reprocessed version of ``modifications.gff`` (from ``P_ModificationDetection``) containing motif annotations.
+
+###Parameters:###
+
+* ``minScore``: Default value = ``35``, Only considers detected modifications with a Modification QV **above** this threshold.
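+
+For example, an illustrative settings fragment raising the threshold:
+```
+<module name="P_MotifFinder">
+  <param name="minScore">
+    <value>50</value>
+  </param>
+</module>
+```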
+
+## <a name="P_GMAP"></a> P_GMAP Module
+
+This module maps PacBio reads onto a reference as if they were cDNA, allowing for large insertions corresponding to putative introns.
+
+The way SMRT Pipe currently computes accuracy is **incompatible** with these large gaps. As a result, P_GMAP does **not** report an accuracy histogram like the other alignment modules.
+
+GMAP is a third-party tool, and requires a GMAP-format database to be built for the reference before the tool can run. Because building the database is time-consuming and the result is large, it is not built for every reference in advance. Instead, the database is built on the fly **once**, the first time the module is run against a given reference; that first run therefore takes longer.
+
+###Input:###
+
+* ``input.fofn`` (base files): Names of the raw input files used for the analysis.
+
+* ``data/filtered_regions.fofn``
+
+* The path to a reference in the PacBio reference repository.
+
+###Sample params.xml file:###
+
+```
+<?xml version="1.0" ?>
+  <smrtpipeSettings>
+    <protocol id="my_protocol">
+      <param name="reference">
+        <value>/data/references/my_reference</value>
+        <select>
+          <import contentType="text/xml" element="reference" filter="state='active' type='sample'" isPath="true" name="name" value="directory"> /data/references/index.xml</import>
+        </select>
+      </param>
+    </protocol>
+    <module id="P_GMAP" label="GMAP v1">
+  </smrtpipeSettings>
+```
+
+###Output:###
+
+* ``/data/alignment_summary.gff``
+* ``/data/aligned_reads.sam``
+* ``/data/aligned_reads.cmp.h5``
+* ``/results/gmap_quality.xml``
+
+## <a name="P_BAR"></a> P_Barcode Module
+
+This module provides access to the ``pbbarcode`` command-line tools, which you use to identify barcodes in PacBio reads.
+
+###Input:###
+
+* Complete barcode FASTA file: A standard FASTA file with barcodes less than 48 bp in length. Based on the score mode you specify, the barcode file might need to contain an even number of barcodes. **Example:**
+
+```
+<param name="barcode.fasta">
+  <value>/mnt/secondary/Smrtpipe/martin/prod/data/workflows/barcode_complete.fasta</value>
+</param>
+```
+
+* Barcode scoring method: This directly relates to the particular sample preparation used to construct the molecules. Valid options are:
+
+  *  ``symmetric``: Supports barcode designs with two identical barcodes on both sides of a SMRTbell™ template. Example: For barcodes (A, B), molecules are labeled as A--A or B--B.
+
+  * ``asymmetric``: Supports barcode designs with **different** barcodes on each side of a molecule, with no constraints on which barcodes must appear together. Example: For barcodes (A,B,C), the following barcode sets are checked, A--B, A--C, and B--C. There is no orientation, hence there is no difference between A--B and B--A; A--B is arbitrarily chosen because A appears before B in the barcode list.
+
+  * ``paired``: Each set of two barcodes, (1,2), (3,4), ..., (2n-1, 2n), is considered a fixed pair. In contrast, under ``asymmetric`` scoring the barcodes (1, 2, 3, ..., n) are considered separately and aligned alone, yielding n*(n-1)/2 possible barcode labels. (See the sketch following the examples below.)
+
+**Example:**
+```
+<param name="mode">
+  <value>symmetric</value>
+</param>
+```
+
+  * Pad arguments: Define how many bases to include from the adapter and how many bases to include from the insert. Ideally, both are 0, which produces shorter alignments; however, if the adapter-calling algorithm slips slightly, you might lose some sensitivity and/or specificity. Do **not** set these unless you have a compelling use case. **Examples:**
+
+```
+<param name="adapterSidePad">
+   <value>2</value>
+</param>
+
+<param name="insertSidePad">
+   <value>2</value>
+</param>
+```
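+
+The following Python sketch illustrates the label sets that each scoring mode implies, per the descriptions above. The barcode names are hypothetical, and this is an illustration of the labeling conventions only, not ``pbbarcode`` code:
+
+```
+from itertools import combinations
+
+barcodes = ["A", "B", "C", "D"]  # hypothetical barcode names
+
+# symmetric: the same barcode on both sides of the SMRTbell template
+symmetric = ["%s--%s" % (b, b) for b in barcodes]
+
+# asymmetric: every unordered pair; A--B and B--A are the same label,
+# with the barcode listed first in the file named first
+asymmetric = ["%s--%s" % (a, b) for a, b in combinations(barcodes, 2)]
+
+# paired: consecutive barcodes (1,2), (3,4), ... form fixed pairs
+paired = ["%s--%s" % (barcodes[i], barcodes[i + 1])
+          for i in range(0, len(barcodes) - 1, 2)]
+
+print(symmetric)   # ['A--A', 'B--B', 'C--C', 'D--D']
+print(asymmetric)  # ['A--B', 'A--C', 'A--D', 'B--C', 'B--D', 'C--D']
+print(paired)      # ['A--B', 'C--D']
+```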
+
+###Output:###
+
+* ``/data/*.bc.h5``: Barcode calls and their scores for each ZMW.
+
+* ``/data/barcode.fofn``: Contains a list of the barcode call (``bc.h5``) files.
+
+* ``/data/aligned_reads.cmp.h5``
+
+## <a name="Tools"></a> SMRT Pipe Tools
+
+**Tools** are programs that run as part of SMRT Pipe. A module, such as ``P_Mapping``, can call several tools (such as the mapping tools ``summarizeCoverage.py`` or ``compareSequences.py``) to actually perform the underlying processing.
+
+All the tools are located at ``$SEYMOUR_HOME/analysis/bin``.
+
+Use the ``--help`` option to see usage information for each tool. (Some tools are undocumented.)
+
+
+## <a name="Build_SPTools"></a> Building the SMRT Pipe tools manually, without SMRT Portal, SMRT View, or Kodos
+
+###Requirements:###
+
+The BOOST C++ Libraries (http://www.boost.org/), usually located at ``/usr/include`` if they came with the operating system. For this procedure, we assume that the BOOST library is located here: ``/usr/local/boost_1_47_0``:
+
+```
+$ export BOOST=/usr/local/boost_1_47_0
+```
+
+###To prepare the source files:###
+
+Assume that the user saved the source tarball at ``$HOME/Downloads/smrtpipe-sources-2.0.0.tar.gz``:
+```
+$ tar zxf $HOME/Downloads/smrtpipe-sources-2.0.0.tar.gz -C $HOME
+```
+The file structure should look like this before starting to build the software:
+
+```
+├── assembly
+│   ├── build
+│   ├── cpp
+│   ├── java
+│   ├── papers
+│   ├── pbpy
+│   ├── seymour
+│   ├── smrtpipe-doc
+│   └── third-party
+├── bioinformatics
+│   ├── doc
+│   ├── lib
+│   ├── release-utils
+│   ├── third-party
+│   └── tools
+└── common
+    └── ConsensusCore
+```
+
+###To determine where to install smrtpipe.py, and set the environment variable SEYMOUR_HOME to this location:###
+
+```
+$ mkdir $HOME/smrtpipe-build
+$ export SEYMOUR_HOME=$HOME/smrtpipe-build/
+```
+
+###To build the software:###
+```
+$ cd $HOME/smrtpipe-sources-2.0.0/assembly
+$ make
+$ make install
+```
+
+###To test the smrtpipe build:###
+
+```
+$ . $SEYMOUR_HOME/etc/setup.sh
+$ smrtpipe.py --help
+```
+
+This should display the usage information for ``smrtpipe.py``.
+
+
+## <a name="Files"></a> SMRT Pipe File Structure
+
+**Note**: The output of a SMRT Pipe analysis includes more files than described here; interested users should explore the file structure. Following are details about the major files.
+
+```
+ <jobID>/job.sh
+```
+* Contains the SMRT Pipe command line call for the job.
+
+```
+<jobID>/settings.xml
+```
+* Contains the modules (and their associated parameters) to be run as part of the SMRT Pipe run. 
+
+```
+<jobID>/metadata.rdf
+```
+* Contains all important metadata associated with the job. This includes metadata propagated from primary results, links to all reports and data files exposed to users, and high-level summary metrics computed during the job. The file is an entry point to the job by tools such as SMRT Portal and SMRT View. ``metadata.rdf`` is formatted as an RDF-XML file using OWL ontologies. See http://www.w3.org/standards/semanticweb/ for an introduction to Semantic Web technologies.
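+
+A minimal sketch for inspecting this file with the third-party ``rdflib`` Python package (not part of SMRT Analysis; install it separately):
+
+```
+import rdflib
+
+g = rdflib.Graph()
+g.parse("metadata.rdf", format="xml")  # the file is RDF-XML
+
+# Print every (subject, predicate, object) triple in the job metadata.
+for subj, pred, obj in g:
+    print(subj, pred, obj)
+```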
+
+```
+<jobID>/input.fofn
+```
+* This file (“file of file names”) is generated early during a job and contains the file names of the raw input files used for the analysis.
+
+```
+<jobID>/input.xml
+```
+* Used to specify the input files to be analyzed in a job, and is passed on to the command line.
+
+```
+<jobID>/vis.jnlp
+```
+* **Deprecated** - no longer generated in v1.4.0. To visualize data, install SMRT View and choose **File > Open Data from Server**.
+
+```
+log/smrtpipe.log
+```
+* Contains debugging output from SMRT Pipe modules. This is typically shown by way of the **View Log** button in SMRT Portal.
+
+### Data Files ###
+
+The ``Data`` directory is where most raw files generated by the pipeline are stored. (**Note**: The following are example output files - for more details about specific files, see the sections dealing with individual modules.)
+
+```
+aligned_reads.cmp.h5, aligned_reads.sam, aligned_reads.bam
+```
+* Mapping and consensus data from secondary analysis.
+
+```
+alignment_summary.gff
+```
+* Alignment data summarized on sequence regions.
+
+```
+variants.gff.gz
+```
+* All sequence variants called from consensus sequence.
+
+```
+toc.xml
+```
+* **Deprecated** - The master index information for the job outputs is now included in the ``metadata.rdf`` file.
+
+### Results/Reports Files ###
+
+Modules with **Reports** in their name produce HTML reports with static PNG images using XML+XSLT. These reports are located in the ``results`` subdirectory. The underlying XML document for each report is preserved there as well; these files can be useful for data-mining the outputs of SMRT Pipe.
+
+
+## <a name="RefRep"></a> The Reference Repository
+
+The **reference repository** is a file-based data store used by SMRT Analysis to manage reference sequences and associated information. The full description of all of the attributes of the reference repository is beyond the scope of this document, but you need to use some basic aspects of the reference repository in most SMRT Pipe analyses. 
+
+**Example**: Analysis of multi-contig references can **only** be handled by supplying a reference entry from a reference repository.
+
+It is simple to create and use a reference repository:
+
+* A reference repository can be any directory on your system. You can have as many reference repositories as you wish; the input to SMRT Pipe is a fully resolved path to a reference entry, so this can live in any accessible reference repository.
+
+Starting with the FASTA sequence ``genome.fasta``, you upload the sequence to your reference repository using the following command:
+```
+referenceUploader -c -p/path/to/repository -nGenomeName -fgenome.fasta
+```
+
+where:
+
+* ``/path/to/repository`` is the path to your reference repository.
+* ``GenomeName`` is the name to use for the reference entry that will be created.
+* ``genome.fasta`` is the FASTA file containing the reference sequence to upload.
+
+For a large genome, we highly recommend that you produce the BLASR suffix array during this upload step. Use the following command:
+```
+referenceUploader -c -p/path/to/repository -nHumanGenome -fhuman.fasta --Saw='sawriter -welter'
+```
+
+There are many more options for reference management. Consult the man page for ``referenceUploader`` by entering ``referenceUploader -h``.
+
+To learn more about what is being stored in the reference entries, look at the directory containing a reference entry. You will find a metadata description (``reference.info.xml``) of the reference and its associated files. For example, various static indices for BLASR and SMRT View are stored in the sequence directory along with the FASTA sequence.
+
+
+***
+
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2013, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
\ No newline at end of file
diff --git a/docs/SMRT-Pipe-Reference-Guide-v2.1.md b/docs/SMRT-Pipe-Reference-Guide-v2.1.md
new file mode 100644
index 0000000..b20cae0
--- /dev/null
+++ b/docs/SMRT-Pipe-Reference-Guide-v2.1.md
@@ -0,0 +1,1580 @@
+* [Introduction](#Intro)
+* [Installation](#Install)
+* [Using the Command Line](#CommandLine)
+ * [Command-Line Options](#CommandLineOptions)
+ * [Utility Scripts](#UtilityScripts)
+ * [Specifying SMRT Pipe Inputs](#PipeInputs)
+ * [Specifying SMRT Pipe Parameters](#PipeParams)
+* [SMRT Portal Protocols](#PortalProtocols)
+ * [11k_Unrolled_Resequencing](#PRO_11K)
+ * [ecoliK12_RS_Resequencing](#PRO_ECOLI)
+ * [lambda_RS_Resequencing](#PRO_LAM)
+ * [BridgeMapper_beta](#PRO_BM)
+ * [RS_AHA_Scaffolding](#PRO_AHA)
+ * [RS_cDNA_Mapping](#PRO_CDNA)
+ * [RS_CeleraAssembler](#PRO_CEL)
+ * [RS_HGAP_Assembly.1, RS_HGAP_Assembly.2](#PRO_HGAP)
+ * [RS_Long_Amplicon_Analysis (BETA)](#PRO_LAMP)
+ * [RS_Minor_and_Compound_Variants (BETA)](#PRO_MINOR)
+ * [RS_Modification_Detection](#PRO_MOD)
+ * [RS_Modification_and_Motif_Analysis](#PRO_MODM)
+ * [RS_PreAssembler](#PRO_PRE)
+ * [RS_ReadsOfInsert](#PRO_ROI)
+ * [RS_Resequencing](#PRO_RESEQ)
+ * [RS_Resequencing_ReadsOfInsert](#PRO_RESEQ_ROI)
+ * [RS_Resequencing_GATK_Barcode](#PRO_RESEQ_GATK)
+ * [RS_Site_Acceptance_Test](#PRO_SITE)
+* [SMRT Pipe Modules and Their Parameters](#Modules)
+ * [Global Parameters](#Global)
+ * [P_Fetch Module](#P_Fetch)
+ * [P_Filter Module](#P_Filter)
+ * [P_PreAssembler Module](#P_Pre)
+ * [P_PreAssemblerDagcon Module (Beta)](#P_PreDag)
+ * [P_Mapping (BLASR) Module](#P_Map)
+ * [P_GenomicConsensus (Quiver) Module](#P_Quiver)
+ * [P_AssemblyPolishing Module](#P_Polish)
+ * [P_AnalysisHook Module](#P_Hook)
+ * [P_AHA (AHA Scaffolding) Module](#P_AHA)
+ * [P_GATKVC (GATK Unified Genotyper) Module](#P_GATK)
+ * [P_ModificationDetection Module](#P_MOD)
+ * [P_CorrelatedVariants (Minor and Compound Variants) Module](#P_Cor)
+ * [P_MotifFinder (Motif Analysis) Module](#P_Motif)
+ * [P_GMAP Module](#P_GMAP)
+ * [P_Barcode Module](#P_BAR)
+ * [P_AmpliconAssembly Module (Beta)](#P_AMP)
+ * [P_CCS (Reads of Insert) Module](#P_CCS)
+ * [P_BridgeMapper Module (Beta)](#P_Bridge)
+* [SMRT Pipe Tools](#Tools)
+* [Building the SMRT Pipe tools manually, without SMRT Portal, SMRT View, or Kodos](#Build_SPTools)
+* [SMRT Pipe File Structure](#Files)
+* [The Reference Repository](#RefRep)
+
+## <a name="Intro"></a> Introduction
+
+This document describes the underlying command-line interface to SMRT Pipe, and is for use by bioinformaticians working with secondary analysis results.
+
+**SMRT Pipe** is Pacific Biosciences’ underlying analysis framework for secondary analysis functions. SMRT Pipe is a Python-based general-purpose workflow engine. It is easily extensible, and supports logging, distributed computation, error handling, analysis parameters, and temporary files.
+
+In a typical installation of the SMRT Analysis Software, the SMRT Portal web application calls SMRT Pipe when a job is started. SMRT Portal provides a convenient and user-friendly way to analyze Pacific Biosciences’ sequencing data through SMRT Pipe. Power users will find that there is more flexibility and customization available by instead running SMRT Pipe analyses from the command line.
+
+* The latest version of SMRT Pipe is available [here](http://pacificbiosciences.github.io/DevNet/).
+
+* SMRT Pipe can also be accessed using the Secondary Analysis Web Services API. For details, see [Secondary Analysis Web Services API](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/Secondary-Analysis-Web-Services-API-v2.1).
+
+**Note:**
+Throughout this documentation, the path ``/opt/smrtanalysis`` is used to refer to the installation directory for SMRT Analysis (also known as ``$SEYMOUR_HOME``). Replace this path with the path appropriate to your installation when using this document.
+
+## <a name="Install"></a> Installation
+
+SMRT Pipe is installed as part of the SMRT Analysis software installation. For details, see [SMRT Analysis Software Installation](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-v2.1).
+
+## <a name="CommandLine"></a> Using the Command Line
+
+In a typical SMRT Analysis installation, SMRT Pipe is in your path after sourcing the ``setup.sh`` file.  This file declares the ``$SEYMOUR_HOME`` environment variable and also sources two subsequent files, ``$SEYMOUR_HOME/analysis/etc/setup.sh`` and ``$SEYMOUR_HOME/common/etc/setup.sh``.  Do not declare `$SEYMOUR_HOME` in `~/.bashrc` or any other environment setting file because it will cause conflicts.
+
+
+Invoke the ``smrtpipe.py`` script by executing:
+
+```
+. /path/to/smrtanalysis/etc/setup.sh && smrtpipe.py [--help] [options] --params=settings.xml xml:input.xml
+```
+
+Replace ``/path/to/smrtanalysis/`` with the path to your SMRT Analysis installation. This is the same way ``smrtpipe.py`` is invoked by SMRT Portal, via the `job.sh` script.
+
+Logging messages are printed to stderr as well as a log file (``log/smrtpipe.log``). It is standard practice to pipe the stderr messages to a file using redirection in your shell, for example appending 
+``&> smrtpipe.err`` to the command line if running under bash.
+
+### <a name="CommandLineOptions"></a> Command Line Options
+
+Following are some of the available options for invoking ``smrtpipe.py``:
+
+```
+-D key=value
+```
+
+* Overrides a configuration variable. Configuration variables are key-value pairs that are read from the global file ``smrtpipe.rc`` before starting an analysis. An example is the ``NPROC`` variable which controls the number of simultaneous processors to use during the analysis. To restrict SMRT Pipe to 4 processors, use ``-D NPROC=4``.
+
+```
+--debug
+```
+* Activates debugging output in the stderr and log outputs. To set this flag as a default, specify ``DEBUG=True`` in the ``smrtpipe.rc`` file.
+
+```
+--distribute
+```
+* Distributes the computation across a compute cluster. For information on configuring SMRT Pipe for a distributed computation environment, see [SMRT Analysis Software Installation](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-v2.1).
+
+```
+--help
+```
+* Displays information about command-line usage and options, and then exits.
+
+```
+--noreports
+```
+* Turns off the production of XML/HTML/PNG reports.
+
+```
+--nohtml
+```
+* Turns off the conversion of XML reports into HTML. (This conversion **requires** that Java be installed.)
+
+```
+--output=outputDir
+```
+
+* Specifies a root directory to use for all SMRT Pipe outputs for this analysis.  SMRT Pipe places outputs in this directory, as well as in data, results, and log subdirectories.
+
+```
+--params=params.xml
+```
+* Specifies a settings XML file for running the pipeline analysis. If this option is **not** specified, SMRT Pipe prints a message and then exits.
+
+```
+--totalCells
+```
+* Specifies that if the number of cells in the job is less than ``totalCells``, the job is **not** marked complete when it finishes. Data from additional cells will be appended to the outputs, until the number of cells reaches ``totalCells``. 
+
+```
+--recover
+```
+* Attempts to rerun a SMRT Pipe analysis starting from the last successful stage. The same initial arguments should be specified in this case.
+
+```
+--version
+```
+* Displays the version number of SMRT Pipe and then exits.
+
+```
+--kill
+```
+* Kills a SMRT Pipe job running in the current directory. This works in conjunction with the ``--output`` option.
+
+
+### <a name="UtilityScripts"></a> Utility Scripts
+
+For convenience, you can create several utility scripts:
+
+**run_smrtpipe_singlenode.sh**
+
+```
+SMRT_ROOT=/path/to/smrtanalysis/
+. $SMRT_ROOT/common/etc/setup.sh && smrtpipe.py --params=settings.xml xml:input.xml
+```
+
+
+**run_smrtpipe_distribute.sh**
+
+```
+SMRT_ROOT=/path/to/smrtanalysis/
+. $SMRT_ROOT/common/etc/setup.sh && smrtpipe.py --distribute --params=settings.xml xml:input.xml
+```
+
+**run_smrtpipe_debug.sh**
+```
+SMRT_ROOT=/path/to/smrtanalysis/
+. $SMRT_ROOT/common/etc/setup.sh && smrtpipe.py --debug --params=settings.xml xml:input.xml
+```
+
+
+
+### <a name="PipeInputs"></a> Specifying SMRT Pipe Inputs
+
+The input file is an XML file specifying the sequencing data to process. Generally, you specify the inputs as URIs (Uniform Resource Identifiers) which are resolved by code internal to SMRT Pipe. In practice, this is most useful to large enterprise users that have a data management scheme and are able to modify the SMRT Pipe code to include their own resolver.
+
+The simpler way to specify inputs is to **fully resolve** the path to each input file, which, as of v2.0, is a ``bax.h5`` file. For more information, see the [bas.h5 Reference Guide](http://files.pacb.com/software/instrument/2.0.0/bas.h5%20Reference%20Guide.pdf).
+
+The script ``fofnToSmrtpipeInput.py`` is provided to convert a FOFN (a "file of file names" file) to the input format expected by SMRT Pipe. If ``my_inputs.fofn`` looks like
+```
+/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524962550000001823079609281357_s1_p0.2.bax.h5
+/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524962550000001823079609281357_s1_p0.3.bax.h5
+/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524962550000001823079609281357_s1_p0.1.bax.h5
+```
+or, for SMRT Pipe versions **before** v2.1:
+```
+/share/data/run_1/m100923_005722_00122_c15301919401091173_s0_p0.bas.h5
+/share/data/run_2/m100820_063008_00118_c04442556811011070_s0_p0.bas.h5
+```
+
+
+then it can be converted to a SMRT Pipe input XML file by entering:
+```
+fofnToSmrtpipeInput.py my_inputs.fofn > my_inputs.xml
+```
+Following is the resulting XML file for SMRT Pipe v2.1:
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+  <dataReferences>
+    <url ref="run:0000000-0000"><location>/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524
+962550000001823079609281357_s1_p0.2.bax.h5</location></url>
+    <url ref="run:0000000-0001"><location>/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524
+962550000001823079609281357_s1_p0.3.bax.h5</location></url>
+    <url ref="run:0000000-0002"><location>/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524
+962550000001823079609281357_s1_p0.1.bax.h5</location></url>
+  </dataReferences>
+</pacbioAnalysisInputs>
+```
+
+For SMRT Pipe versions **before** v2.1:
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+ <dataReferences>
+    <url ref="run:0000000-0000"><location>/share/data/
+    /share/data/run_1 m100923_005722_00122_c15301919401091173_s0_p0.bas.h5
+    <url ref="run:0000000-0001"><location>/share/data/
+    /share/data/run_2/m100820_063008_00118_c04442556811011070_s0_p0.bas.h5
+ </dataReferences>
+</pacbioAnalysisInputs>
+```
+
+To run an analysis using these input files, use the following command:
+```
+smrtpipe.py --params=settings.xml xml:my_inputs.xml
+```
+
+The SMRT Pipe input format lets you specify annotations, such as job IDs, job names, and job comments, in a job-management environment. The ``fofnToSmrtpipeInput.py`` application has command-line options for setting these optional attributes.
+
+**Note**: To get help for a script, run the script with the ``--help`` option and no additional arguments. For example:
+```
+fofnToSmrtpipeInput.py --help
+```
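+
+To make the input format concrete, here is a minimal Python sketch that generates the v2.1-style input XML from a FOFN using only the standard library. It illustrates the file structure shown above; for real jobs, use ``fofnToSmrtpipeInput.py``:
+
+```
+import sys
+import xml.etree.ElementTree as ET
+
+def fofn_to_input_xml(fofn_path):
+    # Build <pacbioAnalysisInputs><dataReferences><url ...><location>...
+    root = ET.Element("pacbioAnalysisInputs")
+    refs = ET.SubElement(root, "dataReferences")
+    with open(fofn_path) as f:
+        paths = [line.strip() for line in f if line.strip()]
+    for i, path in enumerate(paths):
+        url = ET.SubElement(refs, "url", ref="run:0000000-%04d" % i)
+        ET.SubElement(url, "location").text = path
+    return ET.tostring(root).decode()
+
+if __name__ == "__main__":
+    print(fofn_to_input_xml(sys.argv[1]))
+```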
+
+### <a name="PipeParams"></a> Specifying SMRT Pipe Parameters
+
+The ``--params`` option is the most important SMRT Pipe option, and is required for any sophisticated use. The option specifies an XML file that controls:
+
+* The analysis modules to run.
+* The **order** of execution.
+* The **parameters** used by the modules.
+
+The general structure of the settings XML file is as follows:
+```
+<?xml version="1.0"?>
+<smrtpipeSettings>
+
+<protocol>
+...global parameters...
+</protocol>
+
+<module id="module_1">
+...parameters...
+</module>
+
+<module id="module_2">
+...parameters...
+</module>
+
+</smrtpipeSettings>
+```
+
+* The ``protocol`` element allows you to set global parameters that may be used by any module.
+* Each ``module`` element defines an analysis module to run. 
+* The order of the ``module`` elements defines the order in which the modules execute.
+
+SMRT Portal protocol templates are located in: ``$SEYMOUR_HOME/common/protocols/``.
+
+SMRT Pipe modules are located in: 
+``$SEYMOUR_HOME/analysis/lib/pythonx.x/pbpy-0.1-py2.7.egg/pbpy/smrtpipe/modules/``.
+
+You specify parameters by entering a key-value pair in a ``param`` element. 
+* The name of the key is in the name attribute of the ``param`` element.
+* The value of the key is contained in a nested ``value`` element.
+
+For example, to set the parameter named ``reference``, you specify:
+```
+<param name="reference">
+  <value>/share/references/repository/celegans</value>
+</param>
+```
+
+**Note**: To reference a parameter value in other parameters, use the notation ``${variable}`` when specifying a value. For example, to reference a global parameter named ``home``, use it in other parameters as ``${home}``. SMRT Pipe supports arbitrary parameters in the settings XML file, so the use of temporary variables like this can help readability and maintainability.
+
+Following is a complete example of a settings file for running filtering, mapping, and consensus steps against the E. coli reference genome:
+```
+<?xml version="1.0" encoding="utf-8"?>
+<smrtpipeSettings>
+ <protocol>
+  <param name="reference">
+   <value>/share/references/repository/ecoli</value>
+  </param>
+ </protocol>
+
+ <module name="P_Filter">
+  <param name="minLength">
+    <value>50</value>
+  </param>
+  <param name="readScore">
+    <value>0.75</value>
+  </param>
+ </module>
+
+ <module name="P_FilterReports" />
+
+ <module name="P_Mapping">
+  <param name="align_opts" hidden="true">
+   <value>--minAccuracy=0.75 --minLength=50 -x </value>
+  </param>
+ </module>
+
+ <module name="P_MappingReports" />
+ <module name="P_Consensus" />
+ <module name="P_ConsensusReports" />
+
+</smrtpipeSettings>
+```
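+
+Because the ``module`` elements execute in document order, it can be handy to list a settings file's modules and parameters before launching a job. A minimal standard-library Python sketch (not part of SMRT Pipe):
+
+```
+import xml.etree.ElementTree as ET
+
+tree = ET.parse("settings.xml")
+# Modules run in the order they appear in the file.
+for module in tree.getroot().iter("module"):
+    # Examples in this guide use either the 'name' or the 'id' attribute.
+    print("module:", module.get("name") or module.get("id"))
+    for param in module.findall("param"):
+        value = param.findtext("value", default="")
+        print("  %s = %s" % (param.get("name"), value.strip()))
+```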
+
+## <a name="PortalProtocols"></a> SMRT Portal Protocols
+
+Following are the secondary analysis protocols included in SMRT Analysis v2.1, with the SMRT Pipe module(s) called by each protocol. Many of these modules are described later in this document.
+
+### <a name="PRO_11K"></a> 11k_Unrolled_Resequencing
+
+* Used for 11k plasmidbell resequencing against an unrolled reference.
+* Designed for troubleshooting the performance of 120 minute movies to get a more accurate estimate of the full polymerase read length.
+```
+* P_Filter
+* BLASR_Unrolled
+* P_MotifFinder
+* P_ModificationDetection
+* P_AnalysisHook
+```
+
+### <a name="PRO_ECOLI"></a> ecoliK12_RS_Resequencing:
+
+* Used for E. coli whole genome resequencing.  
+* Reads are filtered, mapped to the E. coli reference sequence, and consensus and variants are identified versus this reference.
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+* P_AnalysisHook
+```
+
+### <a name="PRO_LAM"></a> lambda_RS_Resequencing:
+
+* Used for lambda resequencing.  
+* Reads are filtered, mapped to the lambda reference sequence, and consensus and variants are identified versus this reference.
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+* P_AnalysisHook
+
+```
+
+### <a name="PRO_BM"></a> BridgeMapper_beta:
+
+* Used for whole-genome or targeted resequencing.
+* Returns split alignments of PacBio reads using BLASR. 
+* Reads are filtered by length and quality, mapped to a provided reference sequence, and consensus and variants are identified versus this reference using the Quiver algorithm.
+```
+* P_Filter
+* P_Mapping
+
+```
+
+### <a name="PRO_AHA"></a> RS_AHA_Scaffolding:
+
+* Used for hybrid assembly of genomes up to 10 Mbp.
+* Improves existing assemblies of up to 200 Mb in size by scaffolding with PacBio long reads to join contigs. 
+* Reads are filtered and then used, together with the high-confidence contigs, to build scaffolds using a combination of algorithms developed by Pacific Biosciences and the AMOS open-source project.
+```
+* P_Filter
+* P_AHA
+
+```
+
+### <a name="PRO_CDNA"></a> RS_cDNA_Mapping:
+
+* Aligns reads from cDNA to a genomic DNA reference using the third-party software tool GMAP.
+* Reads are filtered by length and quality and then mapped against the reference using GMAP to span introns.
+```
+* P_Filter
+* P_GMAP
+
+```
+
+### <a name="PRO_CEL"></a> RS_CeleraAssembler:
+
+* Performs _de novo_ assembly of genomes up to 200 Mbp using ``pacBioToCA`` for error correction and Celera® Assembler 7.0 for assembly.
+* Combines long reads (ideally from a 10 kb or longer insert library) with shorter, high-accuracy reads (Reads of Insert reads or reads from another sequencing technology).
+* This workflow (comprised of the ``P_PacBioToCA`` and ``P_CeleraAssembler`` modules) wraps the Celera® Assembler’s error correction and assembly programs. For full documentation of pacBioToCA and the Celera Assembler, see http://sourceforge.net/apps/mediawiki/wgs-assembler/index.php?title=Main_Page.
+* The error correction may be run with external high confidence reads, such as those from Illumina® data, or from internally generated Reads of Insert reads.
+
+####Input:####
+
+* ``settings.xml``: Specifies the parameters.
+
+* ``input.xml``: Specifies the inputs.
+
+####Sample settings.xml file:####
+
+```
+<module id="P_PacBioToCA" label="PacBioToCA v1" editableInJob="true">
+  <description>This module wraps pacBioToCA, the error correction pipeline of Celera Assembler v7.0</description>
+```
+
+If ``useCCSFastq`` is set to ``True``, CCS reads are used as the high-confidence reads:
+
+```
+  <param name="useCCSFastq" label="Correct With CCS">
+     <title>Use CCS reads to correct long reads</title>
+     <value>True</value>
+     <input type="checkbox" />
+  </param>
+```
+
+To use a FASTQ file, set the value of ``shortReadFastqA`` to the full path.
+
+```
+  <param name="shortReadFastqA" label="FASTQ to Correct With">
+     <title>(Optional) FASTQ file of reads to correct long reads with </title>
+     <input type="text" />
+     <value></value>
+     <rule remote="api/protocols/resource-exists?paramName=shortReadFastqA" required="false" message="File does not exist" />
+  </param>
+```
+
+The ``shortReadTechnology`` option selects the platform on which the reads were generated. This sets library feature flags to enable different correction, trimming and untagging algorithms. The default is ``illumina``.
+
+```
+  <param name="shortReadTechnology" label="FASTQ Read Type">
+     <title>Sequencing platform used to generate the FASTQ file, if specified</title>
+     <value>illumina</value>
+     <select>
+       <option value="sanger">~1kb (e.g., Sanger and PacBio CCS)</option>
+       <option value="454">~600bp (e.g., 454)</option>
+       <option value="illumina">~100bp (e.g., Illumina)</option>
+     </select>
+  </param>
+```
+
+The ``shortReadType`` option selects the type of QV encoding (``sanger``, ``solexa``, or ``illumina``) in the FASTQ file. 
+
+```
+  <param name="shortReadType" label="FASTQ Quality Value Encoding">
+     <title>ASCII encoding of the quality values in the FASTQ file, if specified</title>
+     <value>illumina</value>
+     <select>
+       <option value="sanger">Phred+33 (e.g., Sanger and PacBio fastq)
+       </option>
+       <option value="solexa">Solexa+64 (e.g., Solexa fastq)</option>
+       <option value="illumina">Phred+64 (e.g., Illumina fastq)</option>
+     </select>
+  </param>
+
+  <param name="pbReadMinLength" label="Min fragment length">
+     <title>Minimum length of PacBio RS fragment to keep.</title>
+     <input type="text" />
+     <value>1000</value>
+     <rule type="digits" message="Value must be an integer" required="true" />
+  </param>
+
+  <param name="specInPacBioToCA" label="Pre-defined spec file">
+    <title>Enter the server path to an existing spec file</title>
+    <input type="text" />
+    <rule remote="api/protocols/resource-exists?paramName=specInPacBioToCA" required="false" message="File does not exist" />
+  </param>
+</module>
+
+<module id="P_CeleraAssembler" label="CeleraAssembler v1" editableInJob="true">
+   <description>This module wraps the Celera Assembler v7.0</description>
+
+  <param name="genomeSize" label="Genome Size (bp)">
+     <title>Approximate genome size in base pairs</title>
+     <value>5000000</value>
+     <input type="text" />
+     <rule type="digits" message="Must be a value between 1 and 200000000" min="1"
+     required="true" max="200000000" />
+  </param>
+
+  <param name="defaultFrgMinLen" hidden="true">
+    <input type="text" />
+    <value>1000</value>
+  </param>
+
+  <param name="xCoverage" label="Target Coverage">
+    <title>Fold coverage to target when picking frgMinLen for assembly. Typically 15 to 25.</title>
+    <input type="text" />
+    <value>15</value>
+    <rule type="digits" message="Value must be an integer between 10 and 30, inclusive” min="10" max="30" />
+  </param>
+
+  <param name="ovlErrorRate" label="Overlapper error rate">
+    <title>Overlapper error rate</title>
+    <input type="text" />
+    <value>0.015</value>
+    <rule type="number" message="Value must be numeric" />
+  </param>
+
+  <param name="ovlMinLen" label="Overlapper min length">
+    <title>Overlaps shorter than this length are not computed.</title>
+    <input type="text" />
+    <value>40</value>
+    <rule type="digits" message="Value must be an integer" />
+  </param>
+
+  <param name="specInRunCA" label="Pre-defined spec file">
+    <title>Enter the server path to an existing spec file</title>
+    <input type="text" />
+    <rule remote="api/protocols/resource-exists?paramName=specInRunCA" required="false" message="File does not exist" />
+  </param>
+
+ </module>
+</smrtpipeSettings>
+```
+
+####Output:####
+
+**Note:** Some of the workflow outputs are produced by the Celera Assembler, and some by Pacific Biosciences’ software.
+
+* ``data/runCa.spec``: The specification file used to run the assembly program. The ``P_CeleraAssembler`` module auto-generates the specification file based on the input data and selected parameters. Alternatively, you can provide an explicit specification file.
+
+* ``data/pacBioToCA.spec``: The specification file used to run the error correction program. The ``P_PacBioToCA`` module auto-generates the specification file based on the input data and selected parameters. Alternatively, you can provide an explicit specification file.
+
+* ``data/celera-assembler.asm``: The official output of Celera Assembler’s assembly program.
+
+* ``data/assembled_reads.cmp.h5``: The pairwise alignment for each read against its assembled contig consensus.
+
+* ``data/assembled_summary.gff.gz``: Summary information about each of the contigs.
+
+* ``data/castats.txt``: Assembly statistics report.
+
+To run the error correction and assembly modules, enter the following:
+```
+smrtpipe.py --params=settings.xml xml:input.xml >& smrtpipe.err
+```
+
+### <a name="PRO_FILTER"></a> RS_Filter_Only:
+
+* Filters reads based on the minimum read length and read quality specified. No additional analysis is performed.
+```
+* P_Filter
+
+```
+
+### <a name="PRO_HGAP"></a> RS_HGAP_Assembly.1, RS_HGAP_Assembly.2:
+
+* HGAP (Hierarchical Genome Assembly Process) performs high quality _de novo_ assembly using a single PacBio library preparation. 
+* HGAP consists of pre-assembly, _de novo_ assembly with Celera® Assembler, and assembly polishing with Quiver.
+* The HGAP.2 protocol is designed to scale to larger genomes.
+```
+* P_PreAssembler (HGAP.1)
+* P_PreAssemblerDagcon (HGAP.2)
+* P_CeleraAssembler
+* P_Mapping
+* P_AssemblyPolishing 
+
+```
+
+### <a name="PRO_LAMP"></a> RS_Long_Amplicon_Analysis (BETA):
+
+* Used to determine phased consensus sequences for pooled amplicon data. 
+* Up to 20 distinct amplicons can be pooled. Reads are clustered into high-level groups, then each group is phased and consensus is called using the Quiver algorithm.
+* Optionally splits reads by barcode if the sample is barcoded.
+```
+* P_AmpliconAssembly
+* P_Barcode
+
+```
+
+### <a name="PRO_MINOR"></a> RS_Minor_and_Compound_Variants (BETA):
+
+* A single-molecule analysis workflow for detection of minor variants and compound mutations relative to a reference using Reads of Insert data. Suitable for cancer amplicons and low-divergence viral samples.
+```
+* P_Filter
+* BLASR_Minor_and_Compound_Variants
+* P_CorrelatedVariants
+
+```
+
+### <a name="PRO_MOD"></a> RS_Modification_Detection:
+
+* A resequencing analysis that performs base-modification identification for 6-mA, 4-mC, and optionally TET-converted 5-mC. Also performs variant detection.
+* Reads are filtered by length and quality, mapped to a provided reference sequence, and consensus and variants are identified.
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+* P_ModificationDetection
+
+```
+
+### <a name="PRO_MODM"></a> RS_Modification_and_Motif_Analysis:
+
+* A resequencing analysis that identifies common bacterial base modifications (6-mA, 4-mC, and optionally TET-converted 5-mC), and then analyzes the methyltransferase recognition motifs. 
+* Reads are filtered by length and quality, mapped against a specified reference sequence, and then variants are called.
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+* P_ModificationDetection
+* P_MotifFinder
+
+```
+
+### <a name="PRO_PRE"></a> RS_PreAssembler:
+
+* Used to construct a set of highly accurate long reads for use in _de novo_ assembly, using the hierarchical genome assembly process (HGAP).
+* Takes each read exceeding a minimum length, aligns all reads against it, trims the edges, and then takes the consensus.
+```
+* PreAssemblerSFilter
+* P_PreAssembler
+
+```
+
+### <a name="PRO_ROI"></a> RS_ReadsOfInsert:
+
+* Generates reads from the insert sequence of single molecules, optionally splitting by barcode.
+* Used to estimate the length of the insert sequence loaded onto a SMRT® Cell. 
+* Replaces the Circular Consensus Sequencing (CCS) protocol, which has been moved off the primary analysis instrument. 
+* To obtain the closest approximation of CCS as it existed on-instrument, specify ``MinCompletePasses = 2`` and ``MinPredictedAccuracy = 0.9`` in the SMRT® Portal Reads of Insert protocol dialog box.
+
+```
+* P_CCS
+* P_Barcode
+
+```
+
+### <a name="PRO_RESEQ"></a> RS_Resequencing:
+
+* Used for whole-genome or targeted resequencing.
+* Reads are filtered, mapped to a provided reference sequence, and consensus and variants are identified against this reference.
+* Haploid variants and small indels, but **not** diploid variants, are called during consensus.
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+
+```
+
+### <a name="PRO_RESEQ_ROI"></a> RS_Resequencing_ReadsOfInsert:
+
+* Used for whole-genome or targeted resequencing.
+* Reads are filtered, mapped to a provided reference sequence, and consensus and variants are identified against this reference.
+* Haploid variants and small indels, but **not** diploid variants, are called during consensus.
+* Uses Reads of Insert (formerly known as CCS) data during mapping.
+```
+* P_Filter
+* P_CCS
+* BLASR_De_Novo_CCS
+
+```
+
+### <a name="PRO_RESEQ_GATK"></a> RS_Resequencing_GATK_Barcode:
+
+* Used for heterozygous and homozygous SNP calling of targeted regions and whole genomes.
+* Reads are filtered by length and quality, mapped to a provided reference sequence, and consensus and variants are identified versus the reference using the GATK Unified Genotyper.
+* **Note:** This protocol is deprecated, and will be removed in a future release of SMRT Pipe.
+```
+* P_Filter
+* BLASR_Barcode
+* P_GATKVC
+
+```
+
+### <a name="PRO_SITE"></a> RS_Site_Acceptance_Test:
+
+* Site acceptance test workflow for lambda resequencing. 
+* Generates a report displaying site acceptance test metrics.
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+
+```
+
+## <a name="Modules"></a>  SMRT Pipe Modules and their Parameters
+Following is an overview of some of the common modules included in SMRT Pipe and their parameters. Not all modules or parameters are listed here. 
+
+Developers interested in even finer control should look inside the ``validateSettings`` method for each python analysis module. By convention, **all** of the settings known to the analysis module are referenced in this method.
+
+## <a name="Global"></a> Global Parameters
+
+Global parameters are potentially used in multiple modules. In the SMRT Pipe internals, they are accessed in the “global” namespace. Following are some common global parameters:
+
+```
+reference
+```
+* Specifies the name of a reference repository entry or FASTA file for mapping reads. **Required** for resequencing workflows.
+* Default value = ``None``
+
+```
+control
+```
+* Specifies the name of a reference repository entry or FASTA file for mapping spike-in control reads. **(Optional)**
+* Default value = ``None``
+
+```
+use_subreads
+```
+* Specifies whether to divide reads into subreads using the adapter region boundaries found by the primary analysis software. **(Optional)**
+* Default value = ``True``
+
+```
+num_stats_regions
+```
+* Specifies how many regions to use when reporting region statistics such as depth of coverage and variant density. **(Optional)**
+* Default value = ``500``
+
+## <a name="P_Fetch"></a> P_Fetch Module
+
+This module fetches the input data and generates a file of the file names of the input .pls files for downstream analysis. This module has **no** exposed parameters.
+
+###Output:###
+
+* ``pls.fofn`` (File containing file names of the input .pls files)
+
+## <a name="P_Filter"></a> P_Filter Module
+
+This module filters and trims the raw reads produced by Pacific Biosciences’ primary analysis software. Options are available for taking the information found in the bas.h5 files and using this to pass reads and portions of reads forward.
+
+###Input:###
+
+* bas.h5 files (pre v2.1) or bax.h5 files (post v2.1)
+
+###Output:###
+
+* ``data/filtering_summary.csv``: Includes raw metrics and filtering information for each read (not subread) found in the original bas.h5 files.
+* ``rgn.h5`` (one for each input bas.h5 file): Filtering information generated by the module.
+
+###Parameters:###
+
+* ``minLength``  Reads with a high quality region read length below this threshold are filtered out. **(Optional)**
+
+* ``maxLength``  Reads with a high quality region read length above this threshold are filtered out. **(Optional)**
+
+* ``minSubReadLength``  Subreads **shorter** than this length are filtered out.
+
+* ``maxSubReadLength``  Subreads **longer** than this length are filtered out.
+
+* ``minSNR``  Reads with signal-to-noise ratio below this threshold are filtered out. **(Optional)**
+
+* ``readScore`` Reads with a high quality region (Read Quality) score below this threshold are filtered out. **(Optional)**
+
+* ``trim`` Default value = ``True``, Specifies whether to trim reads to the high-quality region. **(Optional)**
+
+* ``artifact``  Reads with a read artifact score less than this (negative) number are filtered out. No number indicates no artifact filtering. Reasonable thresholds are typically between -1000 and -200. **(Optional)**
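+
+A sketch of how these thresholds relate to ``data/filtering_summary.csv`` is shown below. The column names used here (``ReadLength``, ``ReadScore``) are hypothetical; check the header of the CSV produced by your own job:
+
+```
+import csv
+
+MIN_LENGTH = 50        # minLength
+MIN_READ_SCORE = 0.75  # readScore
+
+kept = 0
+with open("data/filtering_summary.csv") as f:
+    for row in csv.DictReader(f):
+        # Hypothetical column names -- adjust to the actual CSV header.
+        if (int(row["ReadLength"]) >= MIN_LENGTH
+                and float(row["ReadScore"]) >= MIN_READ_SCORE):
+            kept += 1
+print("reads passing filters:", kept)
+```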
+
+## <a name="P_Pre"></a> P_PreAssembler Module
+
+This module takes as input long reads and short reads in standard formats, aligns the short reads to the long reads, and outputs a consensus from the preassembled short reads using the long reads as seeds.
+**Note:** You **must** run the ``P_Fetch`` and ``P_Filter`` modules before running ``P_PreAssembler`` to get meaningful results.
+
+###Input:###
+
+* **Long reads ("seed reads")**: PacBio pls.h5/bas.h5 file(s) and optionally associated rgn.h5 file(s).
+* **Short reads**: Can be one of the following:
+ * Arbitrary high-quality reads in FASTQ format, such as Illumina® reads, without Ns.
+ * PacBio pls.h5/bas.h5 file(s): The same reads as used for the long reads. This mode is the first step of HGAP (Hierarchical Genome Assembly Process).
+* ``params.xml``
+* ``input.xml``
+
+The module can run either on bas.h5 files alone, or on bas.h5 files together with a FASTQ file. Following are sample XML inputs for both modes.
+
+###Sample input.xml, bas.h5-only input mode###
+
+* **Note:** bas.h5 input files must have the suffix bas.h5.
+
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+   <dataReferences>
+      <url ref="run:0000000-0001">
+         <location>
+            /path/to/input.bas.h5
+         </location>
+      </url>
+   </dataReferences>
+</pacbioAnalysisInputs>
+```
+
+###Sample params.xml, bas.h5-only input mode###
+* This XML parameter file was tested on 90X short reads and 24X long reads.
+
+```
+<module name="P_PreAssembler">
+   <param name="useFastqAsShortReads">
+     <value>False</value>
+   </param>
+   <param name="useFastaAsLongReads">
+     <value>False</value>
+   </param>
+   <param name="useLongReadsInConsensus">
+     <value>False</value>
+   </param>
+   <param name="useUnalignedReadsInConsensus">
+     <value>False</value>
+   </param>
+   <param name="useCCS">
+     <value>False</value>
+   </param>
+   <param name="minLongReadLength">
+     <value>5000</value>
+   </param>
+   <param name="blasrOpts">
+     <value> -minReadLength 200 -maxScore -1000 -bestn 24 -maxLCPLength 16 -nCandidates 24 </value>
+   </param>
+   <param name="consensusOpts">
+     <value> -L </value>
+   </param>
+   <param name="layoutOpts">
+     <value> --overlapTolerance 100 --trimHit 50 </value>
+   </param>
+   <param name="consensusChunks">
+     <value>60</value>
+   </param>
+   <param name="trimFastq">
+     <value>True</value>
+   </param>
+   <param name="trimOpts">
+     <value> --qvCut=59.5 --minSeqLen=500 </value>
+   </param>
+</module>
+```
+
+###Sample input.xml (FASTQ and bas.h5 input mode)###
+
+* This parameter XML file was tested on 50X 100 bp Illumina® reads correcting 15X PacBio long reads.
+
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+   <dataReferences>
+     <url ref="run:0000000-0001">
+   <location>
+     /path/to/input.bas.h5
+   </location>
+   </url>
+     <url ref="fastq:/path/to/input.fastq"/>
+   </dataReferences>
+</pacbioAnalysisInputs>
+```
+
+###Sample params.xml (FASTQ and bas.h5 input mode)###
+
+```
+<?xml version="1.0" ?>
+<smrtpipeSettings>
+  <module name="P_Fetch"/>
+  <module name="P_Filter">
+    <param name="filters">
+       <value>MinRL=1000,MinReadScore=0.80</value>
+    </param>
+    <param name="artifact">
+       <value>-1000</value>
+    </param>
+  </module>
+  <module name="P_PreAssembler">
+    <param name="useFastqAsShortReads">
+       <value>True</value>
+    </param>
+    <param name="useFastaAsLongReads">
+       <value>False</value>
+    </param>
+    <param name="useLongReadsInConsensus">
+       <value>False</value>
+    </param>
+    <param name="useUnalignedReadsInConsensus">
+       <value>False</value>
+    </param>
+    <param name="blasrOpts">
+       <value>-minMatch 8 -minReadLength 30 -maxScore -100 -minPctIdentity 70 -bestn 100</value>
+    </param>
+    <param name="layoutOpts">
+       <value>--overlapTolerance=25</value>
+    </param>
+    <param name="consensusOpts">
+       <value>-w 2</value>
+    </param>
+</module>
+</smrtpipeSettings>
+```
+
+###Output:###
+
+* ``corrected.fasta``, ``corrected.fastq``: FASTA and FASTQ files of corrected long reads.
+* ``idmap.csv``: CSV file mapping corrected long-read IDs to original read IDs.
+
+## <a name="P_PreDag"></a> P_PreAssemblerDagcon Module (Beta)
+
+This module provides the primary difference in HGAP 2. ``P_PreAssemblerDagcon`` was designed as a drop-in 
+replacement for the correction step in HGAP 1, providing the same functionality much faster and more 
+efficiently than the ``P_PreAssembler`` module.  It includes a simple, alignment-based chimera filter 
+that reduces effects caused by missing SMRTbell™ adapters, such as spurious contigs in assemblies.
+
+Note that the quality values in the FASTQ file for the corrected reads are uniformly set to ``QV24``. This value was determined by mapping corrected reads to a known reference, and appears to work well on a broad set of data. We are considering deriving QV values directly from the data for a future release.
+
+As the HGAP 2 implementation was completely redesigned and includes much new code, it is labeled as "Beta" for this release.  
+
+###Input:###
+
+* Filtered subreads fasta file (generated by ``P_Filter``)
+* ``params.xml``
+* ``input.xml``
+
+The module has a much simpler design and can **only** be run using SMRT Pipe in combination with the filtered subreads (``P_Filter``) module. The auto-seed cutoff still targets 30x seed reads.
+
+###Parameters:###
+
+* ``targetChunks`` How many chunks to split the seed reads (the targets) into. In the example below the value is set to ``6``, which generates approximately 5x (30x/6) worth of sequence per split file, or chunk. If set to ``1``, then set ``splitBestn`` to the same value as ``totalBestn``.
+
+* ``splitBestn`` Must be adjusted based on ``targetChunks``. Set it to roughly 1.5-2x the coverage found in a given split file; setting it too high may produce false positives in some cases, affecting correction, so be careful.
+
+* ``totalBestn`` Default value = ``24``. Based on the total coverage of 30x. The default is sensible in most cases. (See the arithmetic sketch after this list.)
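+
+A minimal Python sketch of the arithmetic behind these settings, under the assumptions stated above (roughly 30x total seed-read coverage):
+
+```
+total_coverage = 30.0  # approximate total seed-read coverage (x)
+target_chunks = 6      # targetChunks
+
+per_chunk = total_coverage / target_chunks  # ~5x of sequence per chunk
+
+# splitBestn: roughly 1.5-2x the coverage found in a given split file.
+split_bestn_low = round(1.5 * per_chunk)   # ~8
+split_bestn_high = round(2.0 * per_chunk)  # ~10
+print(per_chunk, split_bestn_low, split_bestn_high)
+```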
+
+###Sample input.xml,bas.h5-only input mode###
+
+* **Note:** bas.h5 input files must have the suffix bas.h5.
+
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+   <dataReferences>
+      <url ref="run:0000000-0001">
+         <location>
+            /path/to/input.bas.h5
+         </location>
+      </url>
+   </dataReferences>
+</pacbioAnalysisInputs>
+```
+
+###Sample params.xml, bas.h5-only input mode###
+
+```
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<smrtpipeSettings>
+    <module id="P_Filter" >
+        <param name="minLength"><value>100</value></param>
+        <param name="minSubReadLength"><value>500</value></param>
+        <param name="readScore"><value>0.80</value></param>
+    </module>
+    <module id="P_PreAssemblerDagcon">
+        <param name="computeLengthCutoff"><value>true</value></param>
+        <param name="minLongReadLength"><value>6000</value></param>
+        <param name="targetChunks"><value>6</value></param>
+        <param name="splitBestn"><value>11</value></param>
+        <param name="totalBestn"><value>24</value></param>
+        <param name="blasrOpts"><value> -noSplitSubreads -minReadLength 200 -maxScore -1000 -maxLCPLength 16 </value></param>
+    </module>
+</smrtpipeSettings>
+```
+
+###Output:###
+
+* ``data/corrected.fasta``,``data/corrected.fastq``: FASTQ and FASTA file of corrected long reads.
+* ``preassembler_report.json``: JSON-formatted pre-assembly report.
+* ``preassembler_report.html``: HTML-formatted pre-assembly report.
+
+## <a name="P_Map"></a> P_Mapping (BLASR) Module
+
+This module aligns reads against a reference sequence, possibly a multi-contig reference.
+If the ``P_Filter`` module is run first, then **only** the reads which passed filtering are aligned.
+
+###Output:###
+
+* ``data/aligned_reads.cmp.h5``: The pairwise alignments for each read.
+* ``data/alignment_summary.gff``: Summary information.
+
+###Parameters:###
+
+* ``align_opts`` Default value = ``Empty string``, Passes options to the underlying ``compareSequences.py`` script. **(Optional)**
+
+* ``--useCcs=`` Default value = ``None``, A parameter sent to the underlying ``compareSequences.py`` script via the ``align_opts`` parameter value above. Values are ``{denovo|fullpass|allpass}``. **(Optional)**
+
+  * ``denovo``: Maps just the _de novo_ called sequence and reports it. (Does not include quality values.)
+
+  * ``fullpass``: Maps the _de novo_ called sequence, then aligns full passes to the sequence that the _de novo_ called sequence aligns to.
+
+  * ``allpass``: Maps the _de novo_ called sequence, then aligns all passes (even ones that do not span the length of the template) to the sequence the _de novo_ called sequence aligned to.
+
+
+* ``load_pulses``: Default value = ``True``, Specifies whether to load pulse metric information into the cmp.h5 file. **(Optional)**
+
+* ``maxHits``: Default value = ``None``, Attempts to find sub-optimal alignments and report up to this many hits per read. **(Optional)**
+
+* ``minAnchorSize``: Default value = ``None``, Ignores anchors **smaller** than this size when finding candidate hits for dynamic programming alignment. **(Optional)**
+
+* ``maxDivergence``: Default value = ``None``, Specifies maximum divergence between read and reference to allow a mapping. Divergence = (1 - accuracy).
+
+* ``output_bam``: Default value = ``False``, Specifies whether to output a BAM representation of the cmp.h5 file. **(Optional)**
+
+* ``output_sam``: Default value = ``False``, Specifies whether to output a SAM representation of the cmp.h5 file. **(Optional)**
+
+* ``gff2Bed``: Default value = ``False``, Specifies whether to output a BED representation of the depth of coverage summary. **(Optional)**
+
+## <a name="P_Quiver"></a> P_GenomicConsensus (Quiver) Module
+
+This module takes the alignments generated by the ``P_Mapping`` module and calls the consensus sequence across the reads.
+
+###Input:###
+
+* ``data/aligned_reads.cmp.h5``: The pairwise alignments for each read.
+
+* ``data/alignment_summary.gff``: Summary information.
+
+###Output:###
+
+* ``data/aligned_reads.cmp.h5``
+
+* ``data/variants.gff.gz``: A gzipped GFF3 file containing variants versus the reference.
+
+* ``data/consensus.fastq.gz``: The consensus sequence in FASTQ format.
+
+* ``data/alignment_summary.gff, data/variants.vc``: Useful information about variants.
+
+###Parameters:###
+
+* ``makeBed``: Default value = ``True``, Specifies whether to output a BED representation of the variants. **(Optional)**
+
+* ``makeVcf``: Default value = ``True``, Specifies whether to output a VCF representation of the variants. **(Optional)**
+
+## <a name="P_Polish"></a> P_AssemblyPolishing
+
+This module is used in HGAP to polish draft assemblies using Quiver. 
+
+###Input:###
+
+* ``data/aligned_reads.cmp.h5``: The pairwise alignments for each read against the draft assembly.
+
+* ``data/alignment_summary.gff``: Summary information.
+
+###Output:###
+
+* ``data/polished_assembly.fasta.gz``: The consensus sequence in FASTA format.
+
+* ``data/polished_assembly.fastq.gz``: The consensus sequence in FASTQ format.
+
+* ``results/polished_report.html``: HTML-formatted report for the polished assembly.
+
+* ``results/polished_report.xml``: XML-formatted report for the polished assembly.
+
+###Parameters:###
+
+* ``enableMapQVFilter`` Default value = ``True``
+
+## <a name="P_Hook"></a> P_AnalysisHook Module
+
+This module allows you to call executable code as part of a SMRT Pipe analysis. ``P_AnalysisHook`` can be called multiple times in a settings XML file, allowing for an arbitrary number of calls to external (non-SMRT Pipe) code.
+
+###Parameters:###
+
+* ``scriptDir``: Default value = ``None``, All executables in this directory are called serially with the command line ``exeCmd jobDir``, where ``jobDir`` is the root of the SMRT Pipe output for this analysis. **(Optional)**
+
+* ``script``: Default value = ``None``, Path to an executable called with the command line ``exeCmd jobDir``, where ``jobDir`` is the root of the SMRT Pipe output for this analysis. **(Optional)**
+
+## <a name="P_AHA"></a> P_AHA (AHA Scaffolding) Module
+
+This module scaffolds high-confidence contigs, such as those from Illumina® data, using Pacific Biosciences’ long reads.
+
+###Input:###
+
+``P_AHA.py`` uses two kinds of input instead of one:
+
+* A FASTA file of high-confidence sequences to be scaffolded. These are typically contigs assembled from Illumina® short-read sequence data. They are passed to AHA as a reference sequence in the ``settings.xml`` input file.
+
+* Pacific Biosciences’ long reads, in HDF5 format. These are used to join the high-confidence contigs into a scaffold. Note that versions of the AHA Scaffolding algorithm prior to v2.1 accepted reads in FASTA format. After v2.1, users with FASTA formatted reads should use the underlying executable ``pbaha.py``.
+
+###Sample settings.xml file for long reads, with only customer-facing parameters:###
+
+```
+<?xml version="1.0"?>
+<smrtpipeSettings>
+  <global>
+    <param name="reference">
+        <value>/mnt/secondary-siv/references/ecoli_contig</value>
+    </param>
+  </global>
+  <module name="P_Fetch"/>
+  <module name="P_Filter">
+    <param name="minLength">
+        <value>50</value>
+    </param>
+    <param name="minSubReadLength">
+        <value>50</value>
+    </param>
+    <param name="readScore">
+        <value>0.75</value>
+    </param>
+  </module>
+  <module name="P_FilterReports"/>
+  <module name="P_AHA">
+    <param name="fillin">
+        <value>False</value>
+    </param>
+    <param name="blasrOpts">
+        <value>-minMatch 10 -minPctIdentity 70 -bestn 10 -noSplitSubreads</value>
+    </param>
+    <param name="instrumentModel">
+        <value>RS</value>
+    </param>
+    <param name="paramSchedule">
+        <value>6,3,75,100;6,3,75,100;5,3,75,100;6,2,75,100;6,2,75,100;5,2,75,100</value>
+    </param>
+    <param name="maxIterations">
+        <value>6</value>
+    </param>
+    <param name="description">
+        <value>AHA ("A Hybrid Assembler") is the PacBio hybrid assembly algorithm. It is based on the open source assembly software package AMOS, with additional software components tailored to PacBio's long reads and error profile.</value>
+    </param>
+  </module>
+</smrtpipeSettings>
+```
+
+###Output:###
+
+* ``data/scaffold.gml``: A GraphML file that contains the final scaffold. This file can be readily parsed in Python using the ``networkx`` package; see the sketch after this list.
+
+* ``data/scaffold.fasta``: A FASTA file with a single entry for each scaffold.
+
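+A minimal sketch of loading the scaffold graph with the third-party ``networkx`` package. The description above calls ``data/scaffold.gml`` a GraphML file, so ``read_graphml`` is used here; if the file on your system is actually GML-formatted, use ``networkx.read_gml`` instead:
+
+```
+import networkx as nx
+
+g = nx.read_graphml("data/scaffold.gml")
+print(g.number_of_nodes(), "nodes (contigs),",
+      g.number_of_edges(), "edges (links)")
+
+# Inspect the attributes attached to each contig node.
+for node, attrs in g.nodes(data=True):
+    print(node, attrs)
+```
+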
+
+###Parameters:###
+
+* ``paramSchedule``: Default value = ``None``  Specifies parameter schedules used for iterative hybrid assembly. Schedules are in comma-delimited tuples, separated by semicolons. **Example:** ``6,3,75;6,3,75;6,2,75;6,2,75``. The fields, in order, are:
+
+  * Minimum alignment score. Higher is more stringent.
+  * Minimum number of reads needed to link two contigs. (Redundancy)
+  * Minimum subread length to participate in alignment.
+  * Minimum contig length to participate in alignment.
+
+* ``fillin``: Default value = ``False``  Specifies whether to use long reads.
+
+* ``blasrOpts``: Default value = ``-minMatch 10 -minPctIdentity 60 -bestn 10 -noSplitSubreads``  Options passed directly to BLASR for aligning reads to contigs.
+
+* ``maxIterations``: Default value = ``6``  Specifies the maximum number of iterations to use from ``paramSchedule``. If ``paramSchedule`` has more than ``maxIterations`` entries, it is truncated at ``maxIterations``. If it has fewer, the last iteration of ``paramSchedule`` is repeated.
+
+* ``cleanup``: Default value = ``True``  Specifies whether to clean up intermediate files. This can be useful for debugging purposes.
+
+* ``runNucmer``: Default value = ``True``  Specifies whether to use ``Nucmer`` to detect repeat locations. This can improve assemblies, but can be very slow on large highly repetitive genomes.
+
+* ``gapFillOpts``: Default value = ``""``  Options to be passed directly to ``gapFiller.py``.
+
+* ``noScaffoldImages``: Default value = ``True``  Specifies that SVG files of the scaffolds are **not** produced. Creating these files can be expensive for large assemblies, but is recommended for small assemblies.
+
+
+To run ``P_AHA.py``, enter the following:
+
+```
+smrtpipe.py --params=settings.xml xml:input.xml >& smrtpipe.err
+```
+
+###Known Issues###
+
+* Depending on the repetitive content of the high-confidence input contigs, a large fraction of the sequence in the contigs can be called repeats. To avoid this, turn off the split repeats step by setting the minimum repeat identity to a number greater than 100, for example:
+```
+<minRepeatIdentity>1000</minRepeatIdentity>
+```
+
+## <a name="P_GATK"></a> P_GATKVC (GATK Unified Genotyper) Module
+
+This module wraps the Broad Institute's GATK Unified Genotyper for Bayesian diploid and haploid SNP calling, using base quality score recalibration and default settings. The module calls both homozygous and heterozygous SNPs.
+
+**Note:** This module is deprecated, and will be removed in a future release of SMRT Pipe.
+
+We recommend that you use a dbSNP file as a prior for base quality score recalibration. By default, the P_GATKVC (GATK Unified Genotyper) module uses a null prior. To use a prior, see the script ``vcfUploader.py``, included with the SMRT Analysis installation.
+
+**Note:** Indel calling and other options are **not** currently supported through SMRT Pipe.
+
+## <a name="P_MOD"></a> P_ModificationDetection Module
+
+This module uses the cmp.h5 output by the ``P_Mapping`` module to:
+
+1. Compare observed IPDs in the cmp.h5 file at each reference position on each strand with control IPDs. Control IPDs are supplied by either an in-silico computational model, or observed IPDs from unmodified “control” DNA.
+
+2.  Generate ``modifications.csv`` and ``modifications.gff`` reporting statistics on the IPD comparison.
+
+###Predicted Kinetic Background Control vs Case-Control Analysis###
+
+By default, the control IPDs are generated per-base of the reference with an in-silico model of the expected IPD values for each position, based on sequence context. The computational model is called the **Predicted IPD Background Control**. Even in normal unmodified DNA, the IPD at any particular point will vary. Internal studies at Pacific Biosciences show that most of the variation in mean IPD across a genome can be predicted from a 12-base sequence context surrounding the active site [...]
+
+###Filtering and Trimming###
+
+Some PacBio data features require special attention for good modification detection performance. The module inspects the alignment between the observed bases and the reference sequence. For an IPD measurement to be included in the analysis, the read sequence must match the reference sequence in a window of K bases around the cognate base; currently, K = 1. The IPD distribution at some locus can be seen as a mixture of the “normal” incorporation process IPD (sensitive to the local sequence context and DNA [...]
+
+**Pauses** are defined as pulses with an IPD >10x longer than the mean IPD at that context. Heuristics are used to filter out the pauses.
+
+###Statistical Testing###
+
+The module tests the hypothesis that IPDs observed at a particular locus in the sample have longer means than IPDs observed at the same locus in unmodified DNA. If a Whole-Genome-Amplified dataset is generated, which removes DNA modifications, the module uses a case-control, two-sample t-test.
+
+The module also provides a pre-calibrated **Predicted Kinetic Background Control** model which predicts the unmodified IPD, given a 12-base sequence context. In that case, the module uses a one-sample t-test, with an adjustment to account for error in the control model.
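+
+Below is an illustrative sketch of such a case-control comparison; it is **not** the shipped implementation, the IPD values are invented, and it assumes the third-party ``scipy`` and ``numpy`` packages. The p-value is converted to a Phred-style score, matching the ``p=0.01``/``score=20`` correspondence noted below:
+
+```
+# Illustrative only: Welch two-sample t-test on log IPDs at one locus.
+import numpy as np
+from scipy import stats
+
+native_ipds = np.array([1.9, 2.3, 2.8, 2.1, 3.0])    # hypothetical case IPDs
+control_ipds = np.array([1.0, 0.9, 1.2, 1.1, 0.95])  # hypothetical WGA control
+t, p = stats.ttest_ind(np.log(native_ipds), np.log(control_ipds), equal_var=False)
+score = -10 * np.log10(p)  # Phred-style: p = 0.01 corresponds to score 20
+print("t=%.2f p=%.3g score=%.1f" % (t, p, score))
+```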
+
+###Input:###
+
+* ``aligned_reads.cmp.h5``: A standard cmp.h5 file with alignments and IPD information that supplies the kinetic data for modification detection.
+
+* Reference Sequence: The path to a SMRT Portal reference repository entry for the reference sequence used to perform alignments.
+
+###Output:###
+
+* ``modifications.csv``: Contains one row for each (reference position, strand) pair that appeared in the dataset with coverage of at least x. (x defaults to 3, but is configurable using the ``ipdSummary.py --minCoverage`` flag.) The reference position index is 1-based for compatibility with the GFF file in the R environment.
+
+* ``modifications.gff``: Each template position/strand pair that passes the significance threshold displays as a row. (The default threshold is ``p=0.01``, or ``score=20``.) The file is compliant with the GFF version 3 specification, and the template position is 1-based, per the GFF specification. The strand column refers to the strand carrying the detected modification, which is the **opposite** strand from the one the detecting reads aligned to.
+
+The auxiliary data column of the GFF file contains other statistics useful for downstream analysis or filtering. This includes the coverage level of the reads used to make the call, and +/- 20 bp sequence context surrounding the site.
+
+Results are generally indexed by reference position and reference strand. In all cases, the strand value refers to the strand carrying the modification in the DNA sample. The kinetic effect of the modification is observed in read sequences aligning to the opposite strand, so reads aligning to the positive strand carry information about modification on the negative strand and vice versa. The module **always** reports the strand containing the putative modification.
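+
+As a minimal data-mining sketch (the ``coverage`` and ``context`` attribute names are assumptions based on the description above), the following filters ``modifications.gff`` for high-scoring calls:
+
+```
+# Keep GFF3 rows scoring above a stricter-than-default cutoff.
+def parse_attrs(field):
+    return dict(kv.split("=", 1) for kv in field.strip().split(";") if "=" in kv)
+
+with open("data/modifications.gff") as fh:
+    for line in fh:
+        if line.startswith("#"):
+            continue
+        cols = line.rstrip("\n").split("\t")
+        seqid, start, score, strand = cols[0], cols[3], cols[5], cols[6]
+        attrs = parse_attrs(cols[8])
+        if float(score) >= 30:  # default reporting threshold is score=20
+            print(seqid, start, strand, attrs.get("coverage"), attrs.get("context"))
+```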
+
+###Parameters###
+
+* ``identifyModifications``: Default value = ``False``. Specifies whether to use a multi-site model to identify the modification type.
+
+* ``tetTreated``: Default value = ``False``. Specifies whether the sample was TET-treated to amplify the signal of m5C modifications.
+
+## <a name="P_Cor"></a> P_CorrelatedVariants (Minor and Compound Variants) Module
+
+This module calls and correlates rare variants from a sample and provides support for determining whether or not sets of mutations are co-located. **Note:** This only includes SNPs, not indels.
+
+The module takes high-coverage Reads of Insert reads that are aligned without quality scores to a similar reference. The module requires the following:
+
+* Reads of Insert reads **only**.
+* High coverage (at least 500x).
+* Alignment to reference without using quality scores.
+* The sample **cannot** be highly divergent from the reference.
+
+The algorithm uses simple column counting and a plurality call. While it works well at higher depths (> 500x), it is susceptible to reference bias, systematic alignment error, and sizeable divergence from the reference.
+
+Variants may not only coexist on the same molecule, but may also be co-selected; that is, inherited together. This algorithm attempts to define a measurable relationship between a set of co-located variants found with some significance on a set of reads.
+
+1. The variant information is read from the GFF input file, then the corresponding cmp.h5 file is searched for reads that cover the variant. Reads that contain the variant are tracked and later assigned any other variants they contain, building a picture of the different haplotypes occurring within the read sets.
+
+2. The frequencies and coverage values for each haplotype are computed. These values will likely deviate (to the downside) from those found in the GFF file, because the read set is constrained to reads that completely span the region defined by the variant set. Only reads that cover all variants are included in the frequency and coverage calculation.
+
+3. The frequency and coverage values are used to calculate an observed probability of each permutation within the variant set. These probabilities are used to compute the Mutual Information score for the set. Frequency informs mutual information, but does not define it. It is possible to have a lower frequency variant set with a higher mutual information score than a high frequency one.
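+
+As an illustration of step 3, the score for a two-variant set can be computed from haplotype counts over reads spanning both sites; ``mutual_information`` is a hypothetical helper and the counts are invented:
+
+```
+# Illustrative mutual information for a two-variant set.
+import math
+
+def mutual_information(counts):
+    """counts: {(has_variant_a, has_variant_b): n_spanning_reads}"""
+    total = sum(counts.values())
+    p = {k: v / float(total) for k, v in counts.items()}
+    pa = {a: sum(v for (x, _), v in p.items() if x == a) for a in (0, 1)}
+    pb = {b: sum(v for (_, y), v in p.items() if y == b) for b in (0, 1)}
+    return sum(v * math.log(v / (pa[a] * pb[b]), 2)
+               for (a, b), v in p.items() if v > 0)
+
+counts = {(1, 1): 133, (1, 0): 40, (0, 1): 55, (0, 0): 2742}
+print(round(mutual_information(counts), 4))
+```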
+
+###Input:###
+
+* A GFF file containing CCS-based variant calls at each position including read information: ID, start, and stop. (Start and stop are in (+) strand genomic coordinates.)
+
+* A cmp.h5 alignment file aligned without quality, and with a minimum accuracy of 95%.
+
+* **(Optional)** ``score``: Include the mutual information score in the output. (Default value = ``Don't include``)
+* **(Optional)** ``out``: The output file name. (Default value = ``Output to screen``)
+
+###Output:###
+
+* ``data/rare_variants.gff(.gz)``: Contains rare variant information, accessible from SMRT Portal.
+
+* ``data/correlated_variants.gff``: Accessible from SMRT Portal.
+
+* ``results/topRareVariants.xml``: Viewable report based on the contents of the GFF file.
+
+* CSV file containing the location and count of co-variants. Example:
+
+```
+ref,haplotype,frequency,coverage,percent,mutinf
+ref000001,285-G|297-G,133,2970,4.48,0.263799623501
+ref000001,285-T|286-T,128,2971,4.31,0.256253924909
+ref000001,285-G|406-G,103,2963,3.48,0.217737973781
+ref000001,99-C|285-G,45,2963,1.52,0.113489812305
+ref000001,286-T|406-G,43,2963,1.45,0.109404796397
+ref000001,285-G|286-T,38,2971,1.28,0.0987697454578
+ref000001,99-C|286-T,31,2963,1.05,0.0838430015349
+```
+
+
+## <a name="P_Motif"></a> P_MotifFinder (Motif Analysis) Module
+
+This module finds sequence motifs containing base modifications. The primary application is finding restriction-modification systems in prokaryotic genomes. ``P_MotifFinder`` analyzes the output of the ``P_ModificationDetection`` module.
+
+###Input:###
+
+* ``modifications.csv``: Contains one row for each (reference position, strand) pair that appeared in the dataset with coverage of at least x.
+
+* ``modifications.gff``: Each template position/strand pair whose p-value exceeds the p-value threshold displays as a row.
+
+###Output:###
+
+* ``data/motif_summary.csv``: A summary of the detected motifs, as well as the evidence for motifs.
+
+* ``data/motifs.gff``: A reprocessed version of ``modifications.gff`` (from ``P_ModificationDetection``) containing motif annotations.
+
+###Parameters:###
+
+* ``minScore`` Default value = ``35`` Only consider detected modifications with a Modification QV **above** this threshold.
+
+## <a name="P_GMAP"></a> P_GMAP Module
+
+This module maps PacBio reads onto a reference as if they were cDNA, allowing for large insertions corresponding to putative introns.
+
+The way SMRT Pipe currently computes accuracy is **incompatible** with these large gaps. As a result, P_GMAP does **not** report an accuracy histogram like the other alignment modules.
+
+GMAP is a third-party tool, and requires that a GMAP-type database be built before the tool is run. Building the database takes prohibitive time and disk space, and not all references will need it, so the database is built on the fly **once**, the first time the module is run against a reference. This results in an extended execution time for that first run.
+
+###Input:###
+
+* ``input.fofn`` (base files): File containing the names of the raw input files used for the analysis.
+
+* ``data/filtered_regions.fofn``
+
+* The path to a reference in the PacBio reference repository.
+
+###Sample params.xml file:###
+
+```
+<?xml version="1.0" ?>
+  <smrtpipeSettings>
+    <protocol id="my_protocol">
+      <param name="reference">
+        <value>/data/references/my_reference</value>
+        <select>
+          <import contentType="text/xml" element="reference" filter="state='active' type='sample'" isPath="true" name="name" value="directory"> /data/references/index.xml</import>
+        </select>
+      </param>
+    </protocol>
+    <module id="P_GMAP" label="GMAP v1">
+  </smrtpipeSettings>
+```
+
+###Output:###
+
+* ``/data/alignment_summary.gff``
+* ``/data/aligned_reads.sam``
+* ``/data/aligned_reads.cmp.h5``
+* ``/results/gmap_quality.xml``
+
+## <a name="P_BAR"></a> P_Barcode Module
+
+This module provides access to the ``pbbarcode`` command-line tools, which you use to identify barcodes in PacBio reads.
+
+###Input:###
+
+* Complete barcode FASTA file: A standard FASTA file with barcodes less than 48 bp in length. Based on the score mode you specify, the barcode file might need to contain an even number of barcodes. **Example:**
+
+```
+<param name="barcode.fasta">
+  <value>/mnt/secondary/Smrtpipe/martin/prod/data/workflows/barcode_complete.fasta</value>
+</param>
+```
+
+* Barcode scoring method: This directly relates to the particular sample preparation used to construct the molecules. Depending on the scoring mode, the barcodes are grouped together in different ways. Valid options are:
+
+  *  ``symmetric``: Supports barcode designs with two identical barcodes on both sides of a SMRTbell™ template. Example: For barcodes (A, B), molecules are labeled as A--A or B--B.
+
+  * ``paired``: Supports barcode designs with two distinct barcodes on each side of the molecule, with neither barcode appearing without its mate. Minimum example: (ALeft, ARight, BLeft, BRight), where the following barcode sets are checked: ALeft--ARight, BLeft--BRight.
+
+**Example:**
+```
+<param name="mode">
+  <value>symmetric</value>
+</param>
+```
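+
+The following sketch shows how the two modes group a barcode file's entries; ``group_barcodes`` is a hypothetical illustration, not a ``pbbarcode`` function:
+
+```
+# Illustration of the symmetric and paired scoring modes.
+def group_barcodes(names, mode):
+    if mode == "symmetric":
+        return [(n, n) for n in names]  # A--A, B--B
+    if mode == "paired":
+        if len(names) % 2:
+            raise ValueError("paired mode needs an even number of barcodes")
+        it = iter(names)
+        return list(zip(it, it))        # ALeft--ARight, BLeft--BRight
+    raise ValueError("unknown mode: %s" % mode)
+
+print(group_barcodes(["A", "B"], "symmetric"))
+print(group_barcodes(["ALeft", "ARight", "BLeft", "BRight"], "paired"))
+```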
+
+  * Pad arguments: Define how many bases to include from the adapter, and how many bases to include from the insert. Ideally, both are ``0``. This produces shorter alignments; however, if the adapter-calling algorithm slips slightly, you might lose some sensitivity and/or specificity. Do **not** set these unless you have a compelling use case. **Examples:**
+
+```
+<param name="adapterSidePad">
+   <value>2</value>
+</param>
+
+<param name="insertSidePad">
+   <value>2</value>
+</param>
+```
+
+###Output:###
+
+* ``/data/*.bc.h5``: Barcode calls and their scores for each ZMW.
+
+* ``/data/barcode.fofn``: Contains a list of the barcode call (``*.bc.h5``) files.
+
+* ``/data/aligned_reads.cmp.h5``
+
+
+## <a name="P_AMP"></a> P_AmpliconAssembly (Amplicon Analysis) Module
+
+This module finds _de novo_ phased consensus sequences from a pooled set of (possibly diploid) amplicons.
+
+###Input:###
+
+* bas.h5 files
+
+
+###Output:###
+
+* ``data/amplicon_analysis.fasta``:  A FASTA file containing the consensus sequences of each haplotype of each amplicon.
+
+* ``data/amplicon_analysis.fastq``:  A FASTQ file containing the consensus sequences and base-confidence of each haplotype of each amplicon.
+
+* ``data/amplicon_analysis.csv``:  A .csv file containing one row per base in each consensus sequence, with extra metadata about base quality and coverage.
+
+###Parameters:###
+
+* ``minLength`` Default value = ``1000``  Only use subreads longer than this threshold. Should be set to ~75% of the shortest amplicon length.
+
+* ``minReadScore`` Default value = ``0.78``  Only use reads with a ReadScore higher than this value.
+
+* ``maxReads`` Default value = ``2000``  Use at most this number of reads to find results. Values greater than 10000 may cause long run times.
+
+
+
+## <a name="P_CCS"></a> P_CCS (Reads of Insert) Module
+
+This module computes Reads of Insert (CCS) sequences from single-molecule reads. It is used to estimate the length of the insert sequence loaded onto a SMRT® Cell. Reads of Insert **replaces** the Circular Consensus Sequencing (CCS) protocol, which has been moved off the primary analysis instrument.
+
+###Input:###
+
+* bas.h5 files
+
+
+###Output:###
+
+* ``data/<movie_name>.fasta``:  A FASTA file containing the consensus sequences of each molecule passing quality filtering.
+
+* ``data/<movie_name>.fastq``:  A FASTQ file containing the consensus sequences and base quality of each molecule passing quality filtering.
+
+* ``data/<movie_name>.ccs.h5``:  A ccs.h5 (similar to a bas.h5) file containing a representation of the CCS sequences and quality values.
+
+###Parameters:###
+
+**Note**: Use the default values to obtain the closest approximation of CCS as it existed on-instrument.
+
+* ``minFullPasses`` Default value = ``2``  The raw sequence must make at least this number of passes over the insert sequence to emit a CCS read for this ZMW.
+
+* ``minPredictedAccuracy`` Default value = ``0.9``  The minimum allowed value of the predicted consensus accuracy to emit a CCS read for this ZMW.
+
+## <a name="P_Bridge"></a> P_BridgeMapper Module (Beta)
+
+This module creates split alignments of Pacific Biosciences' reads for viewing with SMRT View. The split alignments can be used to infer the presence of assembly errors or structural variation. ``P_BridgeMapper`` works by first using BLASR to get primary alignments for filtered subreads. Then, ``P_BridgeMapper`` calls BLASR again, mapping any portions of those subreads not contained in the primary alignments.
+
+###Input:###
+
+* ``filtered_subreads.fastq``: A FASTQ file containing subreads that passed quality filters.
+
+###Output:###
+
+* ``data/split_reads.bridgemapper.gz``: A gzipped, tab-separated file of split alignments. This file is consumed by SMRT View.
+
+###Parameters:###
+
+* ``minRootLength`` Default value = ``250``  Only consider subreads with primary alignments longer than this threshold.
+
+* ``minAffixLength`` Default value = ``50``  Only report split alignments with secondary alignments longer than this threshold.
+
+## <a name="Tools"></a> SMRT Pipe Tools
+
+**Tools** are programs that run as part of SMRT Pipe. A module, such as ``P_Mapping``, can call several tools (such as the mapping tools ``summarizeCoverage.py`` or ``compareSequences.py``) to actually perform the underlying processing. 
+
+All the tools are located at ``$SEYMOUR_HOME/analysis/bin``.
+
+Use the ``--help`` option to see usage information for each tool. (Some tools are undocumented.)
+
+
+## <a name="Build_SPTools"></a> Building the SMRT Pipe tools manually, without SMRT Portal, SMRT View, or Kodos
+
+
+It is currently not possible to build the SMRT Pipe tools without SMRT Portal, SMRT View, or Kodos.
+
+
+## <a name="Files"></a> SMRT Pipe File Structure
+
+**Note**: The output of a SMRT Pipe analysis includes more files than described here; interested users should explore the file structure. Following are details about the major files.
+
+```
+<jobID>/job.sh
+```
+* Contains the SMRT Pipe command line call for the job.
+
+```
+<jobID>/settings.xml
+```
+* Contains the modules (and their associated parameters) to be run as part of the SMRT Pipe run. 
+
+```
+<jobID>/metadata.rdf
+```
+* Contains all important metadata associated with the job. This includes metadata propagated from primary results, links to all reports and data files exposed to users, and high-level summary metrics computed during the job. The file is an entry point to the job by tools such as SMRT Portal and SMRT View. ``metadata.rdf`` is formatted as an RDF-XML file using OWL ontologies. See http://www.w3.org/standards/semanticweb/ for an introduction to Semantic Web technologies.
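+
+As a speculative sketch, the file can be inspected generically with the third-party ``rdflib`` package (not part of the SMRT Analysis distribution):
+
+```
+# Peek at the first few RDF triples in a job's metadata.rdf.
+import rdflib
+
+g = rdflib.Graph()
+g.parse("metadata.rdf", format="xml")  # the file is RDF-XML
+for subj, pred, obj in list(g)[:10]:
+    print(subj, pred, obj)
+```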
+
+```
+<jobID>/input.fofn
+```
+* This file (“file of file names”) is generated early during a job and contains the file names of the raw input files used for the analysis.
+
+```
+<jobID>/input.xml
+```
+* Used to specify the input files to be analyzed in a job; it is passed on the command line.
+
+```
+log/smrtpipe.log
+```
+* Contains debugging output from SMRT Pipe modules. This is typically shown by way of the **View Log** button in SMRT Portal.
+
+### Data Files ###
+
+The ``data`` directory is where most raw files generated by the pipeline are stored. (**Note**: The following are example output files - for more details about specific files, see the sections dealing with individual modules.)
+
+```
+aligned_reads.cmp.h5, aligned_reads.sam, aligned_reads.bam
+```
+* Mapping and consensus data from secondary analysis.
+
+```
+alignment_summary.gff
+```
+* Alignment data summarized on sequence regions.
+
+```
+variants.gff.gz
+```
+* All sequence variants called from consensus sequence.
+
+```
+toc.xml
+```
+* **Deprecated** - The master index information for the job outputs is now included in the ``metadata.rdf`` file.
+
+### Results/Reports Files ###
+
+Modules with **Reports** in their name produce HTML reports with static PNG images using XML+XSLT. These reports are located in the ``results`` subdirectory. The underlying XML document for each report is also preserved there; these can be useful files for data-mining the outputs of SMRT Pipe.
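+
+As a minimal data-mining sketch (the report filename below is hypothetical; actual names vary by module), the preserved XML can be walked generically with the standard library:
+
+```
+# Print every element with text content in a preserved report XML.
+import xml.etree.ElementTree as ET
+
+tree = ET.parse("results/filterReports_filterStats.xml")  # hypothetical name
+for elem in tree.iter():
+    if elem.text and elem.text.strip():
+        print(elem.tag, elem.text.strip())
+```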
+
+
+## <a name="RefRep"></a> The Reference Repository
+
+The **reference repository** is a file-based data store used by SMRT Analysis to manage reference sequences and associated information. The full description of all of the attributes of the reference repository is beyond the scope of this document, but you need to use some basic aspects of the reference repository in most SMRT Pipe analyses. 
+
+**Example**: Analysis of multi-contig references can **only** be handled by supplying a reference entry from a reference repository.
+
+It is simple to create and use a reference repository:
+
+* A reference repository can be any directory on your system. You can have as many reference repositories as you wish; the input to SMRT Pipe is a fully resolved path to a reference entry, so this can live in any accessible reference repository.
+
+Starting with the FASTA sequence ``genome.fasta``, you upload the sequence to your reference repository using the following command:
+```
+referenceUploader -c -p/path/to/repository -nGenomeName -fgenome.fasta
+```
+
+where:
+
+* ``/path/to/repository`` is the path to your reference repository.
+* ``GenomeName`` is the name to use for the reference entry that will be created.
+* ``genome.fasta`` is the FASTA file containing the reference sequence to upload.
+
+For a large genome, we highly recommend that you produce the BLASR suffix array during this upload step. Use the following command:
+```
+referenceUploader -c -p/path/to/repository -nHumanGenome -fhuman.fasta --saw='sawriter -welter'
+```
+
+There are many more options for reference management. Consult the man page entry for ``referenceUploader`` by entering ``referenceUploader -h``.
+
+To learn more about what is being stored in the reference entries, look at the directory containing a reference entry. You will find a metadata description (``reference.info.xml``) of the reference and its associated files. For example, various static indices for BLASR and SMRT View are stored in the sequence directory along with the FASTA sequence.
+
+
+***
+
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2013, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
+**P/N 001-353-082-05**
diff --git a/docs/SMRT-Pipe-Reference-Guide-v2.2.0.md b/docs/SMRT-Pipe-Reference-Guide-v2.2.0.md
new file mode 100644
index 0000000..50f5d75
--- /dev/null
+++ b/docs/SMRT-Pipe-Reference-Guide-v2.2.0.md
@@ -0,0 +1,1569 @@
+* [Introduction](#Intro)
+* [Installation](#Install)
+* [Using the Command Line](#CommandLine)
+ * [Command-Line Options](#CommandLineOptions)
+ * [Utility Scripts](#UtilityScripts)
+ * [Specifying SMRT Pipe Inputs](#PipeInputs)
+ * [Specifying SMRT Pipe Parameters](#PipeParams)
+* [SMRT Portal Protocols](#PortalProtocols)
+ * [RS_AHA_Scaffolding](#PRO_AHA)
+ * [RS_BridgeMapper](#PRO_BM)
+ * [RS_CeleraAssembler](#PRO_CEL)
+ * [RS_HGAP_Assembly.2](#PRO_HGAP2)
+ * [RS_HGAP_Assembly.3 (Beta)](#PRO_HGAP3)
+ * [RS_IsoSeq (Beta)](#PRO_ISO)
+ * [RS_Long_Amplicon_Analysis (Beta)](#PRO_LAMP)
+ * [RS_Minor_Variant (Beta)](#PRO_MINOR)
+ * [RS_Modification_Detection](#PRO_MOD)
+ * [RS_Modification_and_Motif_Analysis](#PRO_MODM)
+ * [RS_PreAssembler](#PRO_PRE)
+ * [RS_ReadsOfInsert](#PRO_ROI)
+ * [RS_ReadsOfInsert_Mapping](#PRO_ROI_MAP)
+ * [RS_Resequencing](#PRO_RESEQ)
+ * [RS_Site_Acceptance_Test](#PRO_SITE)
+ * [RS_Subreads](#PRO_SUB)
+* [SMRT Pipe Modules and Their Parameters](#Modules)
+ * [Global Parameters](#Global)
+ * [P_AHA (AHA Scaffolding) Module](#P_AHA)
+ * [P_AnalysisHook Module](#P_Hook)
+ * [P_AssemblyPolishing Module](#P_Polish)
+ * [P_AssembleUnitig Module](#P_Unitig)
+ * [P_Barcode Module](#P_BAR)
+ * [P_BridgeMapper Module](#P_Bridge)
+ * [P_CCS (Reads of Insert) Module](#P_CCS)
+ * [P_CorrelatedVariants (Minor and Compound Variants) Module](#P_Cor)
+ * [P_Fetch Module](#P_Fetch)
+ * [P_Filter Module](#P_Filter)
+ * [P_GenomicConsensus (Quiver) Module](#P_Quiver)
+ * [P_IsoSeq Module](#P_ISO)
+ * [P_LongAmpliconAnalysis Module](#P_AMP)
+ * [P_Mapping (BLASR) Module](#P_Map)
+ * [P_MotifFinder (Motif Analysis) Module](#P_Motif)
+ * [P_ModificationDetection Module](#P_MOD)
+ * [P_PreAssembler Module](#P_Pre)
+ * [P_PreAssemblerDagcon Module](#P_PreDag)
+* [SMRT Pipe Tools](#Tools)
+* [Building the SMRT Pipe tools manually, without SMRT Portal, SMRT View, or Kodos](#Build_SPTools)
+* [SMRT Pipe File Structure](#Files)
+* [The Reference Repository](#RefRep)
+
+## <a name="Intro"></a> Introduction
+
+This document describes the underlying command-line interface to SMRT Pipe, and is for use by bioinformaticians working with secondary analysis results.
+
+**SMRT Pipe** is Pacific Biosciences’ underlying analysis framework for secondary analysis functions.  SMRT Pipe is a general-purpose workflow engine based on the Python® programming language. SMRT Pipe is easily extensible, and supports logging, distributed computation, error handling, analysis parameters, and temporary files.
+
+In a typical installation of the SMRT Analysis Software, the SMRT Portal web application calls SMRT Pipe when a job is started. SMRT Portal provides a convenient and user-friendly way to analyze Pacific Biosciences’ sequencing data through SMRT Pipe. Power users will find that there is more flexibility and customization available by instead running SMRT Pipe analyses from the command line.
+
+* The latest version of SMRT Pipe is available [here](http://pacificbiosciences.github.io/DevNet/).
+
+* SMRT Pipe can also be accessed using the Secondary Analysis Web Services API. For details, see [Secondary Analysis Web Services API](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/Secondary-Analysis-Web-Services-API-v2.2.0).
+
+**Note:**
+Throughout this documentation, the path ``/opt/smrtanalysis`` is used to refer to the installation directory for SMRT Analysis (also known as ``$SEYMOUR_HOME``). Replace this path with the path appropriate to your installation when using this document.
+
+## <a name="Install"></a> Installation
+
+SMRT Pipe is installed as part of the SMRT Analysis software installation. For details, see [SMRT Analysis Software Installation](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-v2.2.0).
+
+## <a name="CommandLine"></a> Using the Command Line
+
+In a typical SMRT Analysis installation, SMRT Pipe is in your path after sourcing the ``setup.sh`` file.  This file declares the ``$SEYMOUR_HOME`` environment variable and also sources two subsequent files, ``$SEYMOUR_HOME/analysis/etc/setup.sh`` and ``$SEYMOUR_HOME/common/etc/setup.sh``.  Do **not** declare `$SEYMOUR_HOME` in `~/.bashrc` or any other environment setting file because it will cause conflicts.
+
+
+Invoke the ``smrtpipe.py`` script by executing:
+
+```
+. /path/to/smrtanalysis/etc/setup.sh && smrtpipe.py [--help] [options] --params=settings.xml xml:input.xml
+```
+
+Replace ``/path/to/smrtanalysis/`` with the path to your SMRT Analysis installation. This is the same way ``smrtpipe.py`` is invoked in SMRT Portal using the `job.sh` script.
+
+Logging messages are printed to stderr as well as a log file (``log/smrtpipe.log``). It is standard practice to capture the stderr messages in a file using shell redirection, for example by appending ``&> smrtpipe.err`` to the command line if running under bash.
+
+### <a name="CommandLineOptions"></a> Command-Line Options
+
+Following are some of the available options for invoking ``smrtpipe.py``:
+
+```
+-D key=value
+```
+
+* Overrides a configuration variable. Configuration variables are key-value pairs that are read from the global file ``smrtpipe.rc`` before starting an analysis. An example is the ``NPROC`` variable which controls the number of simultaneous processors to use during the analysis. To restrict SMRT Pipe to 4 processors, use ``-D NPROC=4``.
+
+```
+--debug
+```
+* Activates debugging output in the stderr and log outputs. To set this flag as a default, specify ``DEBUG=True`` in the ``smrtpipe.rc`` file.
+
+```
+--distribute
+```
+* Distributes the computation across a compute cluster. For information on configuring SMRT Pipe for a distributed computation environment, see [SMRT Analysis Software Installation](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-v2.2.0).
+
+```
+--help
+```
+* Displays information about command-line usage and options, and then exits.
+
+```
+--noreports
+```
+* Turns off the production of XML/HTML/PNG reports.
+
+```
+--nohtml
+```
+* Turns off the conversion of XML reports into HTML. (This conversion **requires** that Java be installed.)
+
+```
+--output=outputDir
+```
+
+* Specifies a root directory to use for all SMRT Pipe outputs for this analysis.  SMRT Pipe places outputs in this directory, as well as in data, results, and log subdirectories.
+
+```
+--params=params.xml
+```
+* Specifies a settings XML file for running the pipeline analysis. If this option is **not** specified, SMRT Pipe prints a message and then exits.
+
+```
+--totalCells
+```
+* Specifies that if the number of cells in the job is less than ``totalCells``, the job is **not** marked complete when it finishes. Data from additional cells will be appended to the outputs, until the number of cells reaches ``totalCells``. 
+
+```
+--version
+```
+* Displays the version number of SMRT Pipe and then exits.
+
+```
+--kill
+```
+* Kills a SMRT Pipe job running in the current directory. This works with ``--output``.
+
+```
+smrtpipe.py --examples
+    Name                               Directory
+1   smrtpipe_basemods                  /srv/depot/jdrake/build/doc/examples/smrtpipe_basemods
+2   smrtpipe_assembly_allora           /srv/depot/jdrake/build/doc/examples/smrtpipe_assembly_allora
+3   smrtpipe_assembly_hgap3            /srv/depot/jdrake/build/doc/examples/smrtpipe_assembly_hgap3
+4   smrtpipe_resequencing_barcode      /srv/depot/jdrake/build/doc/examples/smrtpipe_resequencing_barcode
+5   smrtpipe_resequencing              /srv/depot/jdrake/build/doc/examples/smrtpipe_resequencing
+6   smrtpipe_hybrid_aha                /srv/depot/jdrake/build/doc/examples/smrtpipe_hybrid_aha
+```
+* Displays the SMRT Pipe example jobs. These are a useful reference for how different workflows are configured and run through SMRT Pipe.
+
+### <a name="UtilityScripts"></a> Utility Scripts
+
+For convenience, you can create several utility scripts:
+
+**run_smrtpipe_singlenode.sh**
+
+```
+SMRT_ROOT=/path/to/smrtanalysis/
+. $SMRT_ROOT/common/etc/setup.sh && smrtpipe.py --params=settings.xml xml:input.xml
+```
+
+
+**run_smrtpipe_distribute.sh**
+
+```
+SMRT_ROOT=/path/to/smrtanalysis/
+. $SMRT_ROOT/common/etc/setup.sh && smrtpipe.py --distribute --params=settings.xml xml:input.xml
+```
+
+**run_smrtpipe_debug.sh**
+```
+SMRT_ROOT=/path/to/smrtanalysis/
+. $SMRT_ROOT/common/etc/setup.sh && smrtpipe.py --debug --params=settings.xml xml:input.xml
+```
+
+
+
+### <a name="PipeInputs"></a> Specifying SMRT Pipe Inputs
+
+The input file is an XML file specifying the sequencing data to process. Generally, you specify the inputs as URIs (Universal Resource Identifiers) which are resolved by code internal to SMRT Pipe. In practice, this is most useful to large enterprise users that have a data management scheme and are able to modify the SMRT Pipe code to include their own resolver.
+
+The simpler way to specify inputs is to **fully resolve** the path to each input file, which, as of v2.0, is a ``bax.h5`` file. For more information, see the [bas.h5 Reference Guide](http://files.pacb.com/software/instrument/2.0.0/bas.h5%20Reference%20Guide.pdf).
+
+The script ``fofnToSmrtpipeInput.py`` is provided to convert a FOFN (a "file of file names" file) to the input format expected by SMRT Pipe. If ``my_inputs.fofn`` looks like
+```
+/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524962550000001823079609281357_s1_p0.2.bax.h5
+/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524962550000001823079609281357_s1_p0.3.bax.h5
+/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524962550000001823079609281357_s1_p0.1.bax.h5
+```
+or, for SMRT Pipe versions **before** v2.1:
+```
+/share/data/run_1/m100923_005722_00122_c15301919401091173_s0_p0.bas.h5
+/share/data/run_2/m100820_063008_00118_c04442556811011070_s0_p0.bas.h5
+```
+
+
+then it can be converted to a SMRT Pipe input XML file by entering:
+```
+fofnToSmrtpipeInput.py my_inputs.fofn > my_inputs.xml
+```
+Following is the resulting XML file for SMRT Pipe v2.1:
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+  <dataReferences>
+    <url ref="run:0000000-0000"><location>/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524
+962550000001823079609281357_s1_p0.2.bax.h5</location></url>
+    <url ref="run:0000000-0001"><location>/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524
+962550000001823079609281357_s1_p0.3.bax.h5</location></url>
+    <url ref="run:0000000-0002"><location>/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524
+962550000001823079609281357_s1_p0.1.bax.h5</location></url>
+  </dataReferences>
+</pacbioAnalysisInputs>
+```
+
+For SMRT Pipe versions **before** v2.1:
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+  <dataReferences>
+    <url ref="run:0000000-0000"><location>/share/data/run_1/m100923_005722_00122_c15301919401091173_s0_p0.bas.h5</location></url>
+    <url ref="run:0000000-0001"><location>/share/data/run_2/m100820_063008_00118_c04442556811011070_s0_p0.bas.h5</location></url>
+  </dataReferences>
+</pacbioAnalysisInputs>
+```
+
+To run an analysis using these input files, use the following command:
+```
+smrtpipe.py --params=settings.xml xml:my_inputs.xml
+```
+
+The SMRT Pipe input format lets you specify annotations, such as job IDs, job names, and job comments, in a job-management environment. The ``fofnToSmrtpipeInput.py`` application has command-line options for setting these optional attributes.
+
+**Note**: To get help for a script, run the script with the ``--help`` option and no additional arguments. For example:
+```
+fofnToSmrtpipeInput.py --help
+```
+
+### <a name="PipeParams"></a> Specifying SMRT Pipe Parameters
+
+The ``--params`` option is the most important SMRT Pipe option, and is required for any sophisticated use. The option specifies an XML file that controls:
+
+* The analysis modules to run.
+* The **order** of execution.
+* The **parameters** used by the modules.
+
+The general structure of the settings XML file is as follows:
+```
+<?xml version="1.0"?>
+<smrtpipeSettings>
+
+<protocol>
+...global parameters...
+</protocol>
+
+<module id="module_1">
+...parameters...
+</module>
+
+<module id="module_2">
+...parameters...
+</module>
+
+</smrtpipeSettings>
+```
+
+* The ``protocol`` element allows setting global parameters that may be used by all modules.
+* Each ``module`` element defines an analysis module to run. 
+* The order of the ``module`` elements defines the order in which the modules execute.
+
+SMRT Portal protocol templates are located in: ``$SEYMOUR_HOME/common/protocols/``.
+
+SMRT Pipe modules are located in: 
+``$SEYMOUR_HOME/analysis/lib/pythonx.x/pbpy-0.1-py2.7.egg/pbpy/smrtpipe/modules/``.
+
+You specify parameters by entering a key-value pair in a ``param`` element. 
+* The name of the key is in the name attribute of the ``param`` element.
+* The value of the key is contained in a nested value element. 
+
+For example, to set the parameter named ``reference``, you specify:
+```
+<param name="reference">
+  <value>/share/references/repository/celegans</value>
+</param>
+```
+
+**Note**: To reference a parameter value in other parameters, use the notation ``${variable}`` when specifying a value. For example, to reference a global parameter named ``home``, use it in other parameters as ``${home}``.  SMRT Pipe supports arbitrary parameters in the settings XML file, so the use of temporary variables like this can help readability and maintainability.
+
+Following is a complete example of a settings file for running filtering, mapping, and consensus steps against the _E. coli_ reference genome:
+```
+<?xml version="1.0" encoding="utf-8"?>
+<smrtpipeSettings>
+ <protocol>
+  <param name="reference">
+   <value>/share/references/repository/ecoli</value>
+  </param>
+ </protocol>
+
+ <module name="P_Filter">
+  <param name="minLength">
+    <value>50</value>
+  </param>
+  <param name="readScore">
+    <value>0.75</value>
+  </param>
+ </module>
+
+ <module name="P_FilterReports" />
+
+ <module name="P_Mapping">
+  <param name="align_opts" hidden="true">
+   <value>--minAccuracy=0.75 --minLength=50 -x </value>
+  </param>
+ </module>
+
+ <module name="P_MappingReports" />
+ <module name="P_Consensus" />
+ <module name="P_ConsensusReports" />
+
+</smrtpipeSettings>
+```
+
+## <a name="PortalProtocols"></a> SMRT Portal Protocols
+
+Following are the secondary analysis protocols included in SMRT Analysis v2.2.0, with the SMRT Pipe module(s) called by each protocol. Many of these modules are described later in this document.
+
+### <a name="PRO_AHA"></a> RS_AHA_Scaffolding:
+
+* Used for hybrid assembly of genomes up to 200 Mb in size with PacBio reads.
+* Improves existing assemblies up to 200 Mb in size by scaffolding with PacBio long reads to join contigs.
+* Reads are filtered and assembled with high confidence contigs into scaffolds using a combination of algorithms developed by Pacific Biosciences and the AMOS open-source project.
+```
+* P_Filter
+* P_AHA
+```
+
+### <a name="PRO_BM"></a> RS_BridgeMapper:
+
+* Used for troubleshooting _de novo_ assemblies, variants, indels, and so on.
+* Returns split alignments of PacBio reads using BLASR. 
+* Reads are filtered by length and quality, mapped to a provided reference sequence, and consensus and variants are identified versus this reference using the Quiver algorithm.
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+* P_BridgeMapper
+```
+
+### <a name="PRO_CEL"></a> RS_CeleraAssembler:
+
+* Performs _de novo_ assembly of genomes up to 200 Mbp using ``pacBioToCA`` for error correction and Celera® Assembler 8.1 for assembly.
+* Combines long reads (ideally from a 10 kb or longer insert library) with shorter, high-accuracy reads (Reads of Insert reads or reads from another sequencing technology).
+* This workflow (comprised of the ``P_PacBioToCA`` and ``P_CeleraAssembler`` modules) wraps the Celera® Assembler’s error correction and assembly programs. For full documentation of pacBioToCA and the Celera Assembler, see http://sourceforge.net/apps/mediawiki/wgs-assembler/index.php?title=Main_Page.
+* The error correction may be run with external high confidence reads, such as those from Illumina® data, or from internally generated Reads of Insert reads.
+
+####Input:####
+
+* ``settings.xml``: Specifies the parameters.
+
+* ``input.xml``: Specifies the inputs.
+
+####Sample settings.xml file:####
+
+```
+<module id="P_PacBioToCA" label="PacBioToCA v1" editableInJob="true">
+  <description>This module wraps pacBioToCA, the error correction pipeline of Celera Assembler v7.0</description>
+```
+
+To use a FASTQ file, set the value of ``shortReadFastqA`` to the full path.
+
+```
+  <param name="shortReadFastqA" label="FASTQ to Correct With">
+     <title>(Optional) FASTQ file of reads to correct long reads with </title>
+     <input type="text" />
+     <value></value>
+     <rule remote="api/protocols/resource-exists?paramName=shortReadFastqA" required="false" message="File does not exist" />
+  </param>
+```
+
+The ``shortReadTechnology`` option selects the platform on which the reads were generated. This sets library feature flags to enable different correction, trimming and untagging algorithms. The default is ``illumina``.
+
+```
+  <param name="shortReadTechnology" label="FASTQ Read Type">
+     <title>Sequencing platform used to generate the FASTQ file, if specified</title>
+     <value>illumina</value>
+     <select>
+       <option value="sanger">~1kb (e.g., Sanger and PacBio CCS)</option>
+       <option value="454">~600bp (e.g., 454)</option>
+       <option value="illumina">~100bp (e.g., Illumina)</option>
+     </select>
+  </param>
+```
+
+The ``shortReadType`` option selects the type of QV encoding (``sanger``, ``solexa``, or ``illumina``) in the FASTQ file. 
+
+```
+  <param name="shortReadType" label="FASTQ Quality Value Encoding">
+     <title>ASCII encoding of the quality values in the FASTQ file, if specified</title>
+     <value>illumina</value>
+     <select>
+       <option value="sanger">Phred+33 (e.g., Sanger and PacBio fastq)
+       </option>
+       <option value="solexa">Solexa+64 (e.g., Solexa fastq)</option>
+       <option value="illumina">Phred+64 (e.g., Illumina fastq)</option>
+     </select>
+  </param>
+
+  <param name="pbReadMinLength" label="Min fragment length">
+     <title>Minimum length of PacBio RS fragment to keep.</title>
+     <input type="text" />
+     <value>1000</value>
+     <rule type="digits" message="Value must be an integer" required="true" />
+  </param>
+
+  <param name="specInPacBioToCA" label="Pre-defined spec file">
+    <title>Enter the server path to an existing spec file</title>
+    <input type="text" />
+    <rule remote="api/protocols/resource-exists?paramName=specInPacBioToCA" required="false" message="File does not exist" />
+  </param>
+</module>
+
+<module id="P_CeleraAssembler" label="CeleraAssembler v1" editableInJob="true">
+   <description>This module wraps the Celera Assembler v7.0</description>
+
+  <param name="genomeSize" label="Genome Size (bp)">
+     <title>Approximate genome size in base pairs</title>
+     <value>5000000</value>
+     <input type="text" />
+     <rule type="digits" message="Must be a value between 1 and 200000000" min="1"
+     required="true" max="200000000" />
+  </param>
+
+  <param name="defaultFrgMinLen" hidden="true">
+    <input type="text" />
+    <value>1000</value>
+  </param>
+
+  <param name="xCoverage" label="Target Coverage">
+    <title>Fold coverage to target when picking frgMinLen for assembly. Typically 15 to 25.</title>
+    <input type="text" />
+    <value>15</value>
+    <rule type="digits" message="Value must be an integer between 10 and 30, inclusive” min="10" max="30" />
+  </param>
+
+  <param name="ovlErrorRate" label="Overlapper error rate">
+    <title>Overlapper error rate</title>
+    <input type="text" />
+    <value>0.015</value>
+    <rule type="number" message="Value must be numeric" />
+  </param>
+
+  <param name="ovlMinLen" label="Overlapper min length">
+    <title>Overlaps shorter than this length are not computed.</title>
+    <input type="text" />
+    <value>40</value>
+    <rule type="digits" message="Value must be an integer" />
+  </param>
+
+  <param name="specInRunCA" label="Pre-defined spec file">
+    <title>Enter the server path to an existing spec file</title>
+    <input type="text" />
+    <rule remote="api/protocols/resource-exists?paramName=specInRunCA" required="false" message="File does not exist" />
+  </param>
+
+ </module>
+</smrtpipeSettings>
+```
+
+####Output:####
+
+**Note:** Some of the workflow outputs are produced by Celera® Assembler, and some by Pacific Biosciences’ software.
+
+* ``data/runCa.spec``: The specification file used to run the assembly program. The ``P_CeleraAssembler`` module auto-generates the specification file based on the input data and selected parameters. Alternatively, you can provide an explicit specification file.
+
+* ``data/pacBioToCA.spec``: The specification file used to run the error correction program. The ``P_PacBioToCA`` module auto-generates the specification file based on the input data and selected parameters. Alternatively, you can provide an explicit specification file.
+
+* ``data/celera-assembler.asm``: The official output of Celera Assembler’s assembly program.
+
+* ``data/assembled_reads.cmp.h5``: The pairwise alignment for each read against its assembled contig consensus.
+
+* ``data/assembled_summary.gff.gz``: Summary information about each of the contigs.
+
+* ``data/castats.txt``: Assembly statistics report.
+
+To run the error correction and assembly modules, enter the following:
+```
+smrtpipe.py --params=settings.xml xml:input.xml >& smrtpipe.err
+```
+
+
+### <a name="PRO_HGAP2"></a> RS_HGAP_Assembly.2:
+
+* HGAP (Hierarchical Genome Assembly Process) performs high quality _de novo_ assembly using a single PacBio library preparation. 
+* HGAP consists of pre-assembly, _de novo_ assembly with Celera® Assembler, and assembly polishing with Quiver.
+* The protocol is optimized for **quality.**
+
+```
+* P_PreAssembler
+* P_CeleraAssembler
+* P_Mapping
+* P_AssemblyPolishing 
+```
+
+### <a name="PRO_HGAP3"></a> RS_HGAP_Assembly.3 (Beta):
+
+* HGAP (Hierarchical Genome Assembly Process) performs high quality _de novo_ assembly using a single PacBio library preparation. 
+* HGAP consists of pre-assembly, _de novo_ assembly with PacBio's ``AssembleUnitig``, and assembly polishing with Quiver.
+* The protocol is optimized for **speed.**  It introduces a new unitig consensus caller that is substantially faster than the one included with ``P_CeleraAssembler``.  This protocol is designed with larger genomes in mind, but can also be used as a replacement for ``RS_HGAP_Assembly.2``, which will eventually be deprecated.
+
+To see an example of how to set up and run ``RS_HGAP_Assembly.3`` using ``smrtpipe.py``, take a look at the ``smrtpipe_assembly_hgap3`` example included with ``smrtpipe.py``.
+
+```
+smrtpipe.py --examples
+    Name                               Directory
+1   smrtpipe_basemods                  /srv/depot/jdrake/build/doc/examples/smrtpipe_basemods
+2   smrtpipe_assembly_hgap3            /srv/depot/jdrake/build/doc/examples/smrtpipe_assembly_hgap3
+3   smrtpipe_resequencing_barcode      /srv/depot/jdrake/build/doc/examples/smrtpipe_resequencing_barcode
+4   smrtpipe_resequencing              /srv/depot/jdrake/build/doc/examples/smrtpipe_resequencing
+```
+
+
+```
+* P_PreAssemblerDagcon
+* P_AssembleUnitig
+* P_Mapping
+* P_AssemblyPolishing 
+```
+
+### <a name="PRO_ISO"></a> RS_IsoSeq (Beta):
+
+* Reads are filtered by length and quality and then mapped against the reference using GMAP to span introns.
+```
+* P_CCS
+* P_IsoSeq
+```
+
+### <a name="PRO_LAMP"></a> RS_Long_Amplicon_Analysis (Beta):
+
+* Used to determine phased consensus sequences for pooled amplicon data. 
+* Can pool up to 20 distinct amplicons. Reads are clustered into high-level groups, then each group is phased and consensus is called using the Quiver algorithm.
+* Filters chimeric sequences.
+* Optionally splits reads by barcode if the sample is barcoded.
+```
+* P_LongAmpliconAnalysis
+* P_Barcode
+```
+
+### <a name="PRO_MINOR"></a> RS_Minor_Variant (Beta):
+
+* Used to call minor variants in a heterogeneous data set against a user-provided reference sequence.
+```
+* P_CCS
+* P_Mapping
+* P_CorrelatedVariants
+```
+
+### <a name="PRO_MOD"></a> RS_Modification_Detection:
+
+* A resequencing analysis that identifies common bacterial base modifications (6-mA, 4-mC, and optionally TET-converted 5-mC). 
+* Reads are filtered by length and quality, mapped against a specified reference sequence, and then variants are called.
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+* P_ModificationDetection
+```
+
+### <a name="PRO_MODM"></a> RS_Modification_and_Motif_Analysis:
+
+* A resequencing analysis that identifies common bacterial base modifications (6-mA, 4-mC, and optionally TET-converted 5-mC), and then analyzes the methyltransferase recognition motifs. 
+* Reads are filtered by length and quality, mapped against a specified reference sequence, and then variants are called.
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+* P_ModificationDetection
+* P_MotifFinder
+```
+
+### <a name="PRO_PRE"></a> RS_PreAssembler:
+
+* Used to build a set of highly accurate long reads for use in _de novo_ assembly, using the hierarchical genome assembly process (HGAP).
+* Takes each read exceeding a minimum length, aligns all reads against it, trims the edges, and then takes the consensus.
+```
+* PreAssemblerSFilter
+* P_PreAssembler
+```
+
+### <a name="PRO_ROI"></a> RS_ReadsOfInsert:
+
+* Used to estimate the length of the insert sequence loaded onto a SMRT Cell. 
+* Generates reads from the insert sequence of single molecules, optionally splitting by barcode.
+* Replaces the Circular Consensus Sequencing (CCS) protocol, which has been moved off the primary analysis instrument. 
+* To obtain the closest approximation of CCS as it existed on-instrument, specify ``MinCompletePasses = 2`` and ``MinPredictedAccuracy = 0.9`` in the SMRT Portal Reads of Insert protocol dialog box.
+
+```
+* P_CCS
+* P_Barcode
+```
+
+### <a name="PRO_ROI_MAP"></a> RS_ReadsOfInsert_Mapping:
+
+* Used for whole-genome or targeted resequencing.
+* Reads are filtered, then mapped to a provided reference sequence.
+* Haploid variants and small indels, but **not** diploid variants, are called during consensus.
+* Uses Reads of Insert (formerly known as CCS) data during mapping.
+```
+* P_Filter
+* P_CCS
+* BLASR_De_Novo_CCS
+```
+
+### <a name="PRO_RESEQ"></a> RS_Resequencing:
+
+* Used for whole-genome or targeted resequencing.
+* Reads are filtered, mapped to a provided reference sequence, and consensus and variants are identified against this reference.
+* Haploid variants and small indels, but **not** diploid variants, are called during consensus.
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+```
+
+
+### <a name="PRO_SITE"></a> RS_Site_Acceptance_Test:
+
+* Site acceptance test workflow for lambda resequencing. 
+* Generates a report displaying site acceptance test metrics.
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+```
+
+### <a name="PRO_SUB"></a> RS_Subreads:
+
+* Filters reads based on the minimum read length and read quality specified.
+```
+* P_Filter
+```
+
+## <a name="Modules"></a>  SMRT Pipe Modules and their Parameters
+Following is an overview of some of the common modules included in SMRT Pipe and their parameters. Not all modules or parameters are listed here. 
+
+Developers interested in even finer control should look inside the ``validateSettings`` method for each Python analysis module. By convention, **all** of the settings known to the analysis module are referenced in this method.
+
+## <a name="Global"></a> Global Parameters
+
+Global parameters are potentially used in multiple modules. In the SMRT Pipe internals, they are accessed in the “global” namespace. Following are some common global parameters:
+
+```
+reference
+```
+* Specifies the name of a reference repository entry or FASTA file for mapping reads. **Required** for resequencing workflows.
+* Default value = ``None``
+
+```
+control
+```
+* Specifies the name of a reference repository entry or FASTA file for mapping spike-in control reads. **(Optional)**
+* Default value = ``None``
+
+```
+use_subreads
+```
+* Specifies whether to divide reads into subreads using the adapter region boundaries found by the primary analysis software. **(Optional)**
+* Default value = ``True``
+
+```
+num_stats_regions
+```
+* Specifies how many regions to use when reporting region statistics such as depth of coverage and variant density. **(Optional)**
+* Default value = ``500``
+
+## <a name="P_Fetch"></a> P_Fetch Module
+
+This module fetches the input data and generates a file of the file names of the input .pls files for downstream analysis. This module has **no** exposed parameters.
+
+###Output:###
+
+* ``pls.fofn`` (File containing file names of the input .pls files)
+
+## <a name="P_Filter"></a> P_Filter Module
+
+This module filters and trims the raw reads produced by Pacific Biosciences’ primary analysis software. Options are available for taking the information found in the bas.h5 files and using this to pass reads and portions of reads forward.
+
+###Input:###
+
+* ``bas.h5`` files (pre v2.1) or ``bax.h5`` files (post v2.1)
+
+###Output:###
+
+* ``data/filtering_summary.csv``: Includes raw metrics and filtering information for each read (not subread) found in the original bas.h5 files.
+* ``rgn.h5`` (one for each input bas.h5 file): Filtering information generated by the module.
+
+###Parameters:###
+
+* ``minLength``  Reads with a high quality region read length **below** this threshold are filtered out. **(Optional)**
+
+* ``maxLength``  Reads with a high quality region read length **above** this threshold are filtered out. **(Optional)**
+
+* ``minSubReadLength``  Subreads **shorter** than this length are filtered out.
+
+* ``maxSubReadLength``  Subreads **longer** than this length are filtered out.
+
+* ``minSNR``  Reads with signal-to-noise ratio **below** this threshold are filtered out. **(Optional)**
+
+* ``readScore`` Reads with a high quality region (Read Quality) score **below** this threshold are filtered out. **(Optional)**
+
+* ``trim`` Default value = ``True``. Specifies whether to trim reads to the high-quality region. **(Optional)**
+
+* ``artifact``  Reads with a read artifact score less than this (negative) number are filtered out. No number indicates no artifact filtering. Reasonable thresholds are typically between -1000 and -200. **(Optional)**
+
+## <a name="P_Pre"></a> P_PreAssembler Module
+
+This module takes as input long reads and short reads in standard formats, aligns the short reads to the long reads, and outputs a consensus from the preassembled short reads using the long reads as seeds.
+**Note:** You **must** run the ``P_Fetch`` and ``P_Filter`` modules before running ``P_PreAssembler`` to get meaningful results.
+
+###Input:###
+
+* **Long reads ("seed reads")**: PacBio pls.h5/bas.h5 file(s) and optionally associated rgn.h5 file(s).
+* **Short reads**: Can be one of the following:
+ * Arbitrary high-quality reads in FASTQ format, such as Illumina® reads, without Ns.
+ * PacBio pls.h5/bas.h5 file(s): The same reads as used for the long reads. This mode is the first step of HGAP (the Hierarchical Genome Assembly Process).
+* ``params.xml``
+* ``input.xml``
+
+The module can run on bas.h5 files only, and on bas.h5 and FASTQ files. Following are sample XML inputs for both modes.
+
+###Sample input.xml, bas.h5-only input mode###
+
+* **Note:** bas.h5 input files must have the suffix bas.h5.
+
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+  <dataReferences>
+    <url ref="run:0000000-0001">
+      <location>/path/to/input.bas.h5</location>
+    </url>
+  </dataReferences>
+</pacbioAnalysisInputs>
+```
+
+###Sample params.xml, bas.h5-only input mode###
+* This XML parameter file was tested on 90X short reads and 24X long reads.
+
+```
+<module name="P_PreAssembler">
+   <param name="useFastqAsShortReads">
+     <value>False</value>
+   </param>
+   <param name="useFastaAsLongReads">
+     <value>False</value>
+   </param>
+   <param name="useLongReadsInConsensus">
+     <value>False</value>
+   </param>
+   <param name="useUnalignedReadsInConsensus">
+     <value>False</value>
+   </param>
+   <param name="useCCS">
+     <value>False</value>
+   </param>
+   <param name="minLongReadLength">
+     <value>5000</value>
+   </param>
+   <param name="blasrOpts">
+     <value> -minReadLength 200 -maxScore -1000 -bestn 24 -maxLCPLength 16 -nCandidates 24 </value>
+   </param>
+   <param name="consensusOpts">
+     <value> -L </value>
+   </param>
+   <param name="layoutOpts">
+     <value> --overlapTolerance 100 --trimHit 50 </value>
+   </param>
+   <param name="consensusChunks">
+     <value>60</value>
+   </param>
+   <param name="trimFastq">
+     <value>True</value>
+   </param>
+   <param name="trimOpts">
+     <value> --qvCut=59.5 --minSeqLen=500 </value>
+   </param>
+</module>
+```
+
+###Sample input.xml (FASTQ and bas.h5 input mode)###
+
+* This configuration was tested on 50X 100 bp Illumina® reads correcting 15X PacBio long reads.
+
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+   <dataReferences>
+      <url ref="run:0000000-0001">
+         <location>
+            /path/to/input.bas.h5
+         </location>
+      </url>
+      <url ref="fastq:/path/to/input.fastq"/>
+   </dataReferences>
+</pacbioAnalysisInputs>
+```
+
+###Sample params.xml (FASTQ and bas.h5 input mode)###
+
+```
+<?xml version="1.0" ?>
+<smrtpipeSettings>
+  <module name="P_Fetch"/>
+  <module name="P_Filter">
+    <param name="filters">
+       <value>MinRL=1000,MinReadScore=0.80</value>
+    </param>
+    <param name="artifact">
+       <value>-1000</value>
+    </param>
+  </module>
+  <module name="P_PreAssembler">
+    <param name="useFastqAsShortReads">
+       <value>True</value>
+    </param>
+    <param name="useFastaAsLongReads">
+       <value>False</value>
+    </param>
+    <param name="useLongReadsInConsensus">
+       <value>False</value>
+    </param>
+    <param name="useUnalignedReadsInConsensus">
+       <value>False</value>
+    </param>
+    <param name="blasrOpts">
+       <value>-minMatch 8 -minReadLength 30 -maxScore -100 -minPctIdentity 70 -bestn 100</value>
+    </param>
+    <param name="layoutOpts">
+       <value>--overlapTolerance=25</value>
+    </param>
+    <param name="consensusOpts">
+       <value>-w 2</value>
+    </param>
+</module>
+</smrtpipeSettings>
+```
+
+###Output:###
+
+* ``corrected.fasta``, ``corrected.fastq``: FASTA and FASTQ files of corrected long reads.
+* ``idmap.csv``: CSV file mapping corrected long read IDs to original read IDs.
+
+## <a name="P_PreDag"></a> P_PreAssemblerDagcon Module
+
+This module provides the primary difference in ``RS_HGAP_Assembly.3``. ``P_PreAssemblerDagcon`` was designed as a drop-in replacement for the correction step in ``RS_HGAP_Assembly.2``, providing the same functionality much faster and more efficiently than the ``P_PreAssembler`` module.  It includes a simple, alignment-based chimera filter that reduces effects caused by missing SMRTbell™ adapters, such as spurious contigs in assemblies.
+
+Note that the quality values in the FASTQ file for the corrected reads are uniformly set to ``QV24``. This value was determined by mapping corrected reads to a known reference and appears to work well on a broad set of data.  We are considering deriving QV values directly from the data for a future release.
+
+As the ``RS_HGAP_Assembly.3`` implementation was completely redesigned and includes much new code, it is labeled as "Beta" for this release.  
+
+###Input:###
+
+* Filtered subreads fasta file (generated by ``P_Filter``)
+* ``params.xml``
+* ``input.xml``
+
+The module has a much simpler design and can **only** be run using smrtpipe in combination with the
+filtered subreads module. The auto-seed cutoff still targets 30x seed reads.
+
+###Parameters:###
+
+* ``targetChunks``: How many chunks to split the seed reads (target) into. In the example below, the value is set to ``6``, which generates approximately 5x (30x/6) worth of sequence per split file, or chunk. If set to ``1``, set ``splitBestn`` to the same value as ``totalBestn``.
+
+* ``splitBestn``: Must be adjusted based on ``targetChunks``. Set it to roughly 1.5 - 2 times the coverage found in a given split file. Too high a value may produce false positives in some cases, adversely affecting correction, so adjust with care.
+
+* ``totalBestn``: Default value = ``24``.  Based on a total coverage of 30x.  The default is sensible
+in most cases.
+
+###Sample input.xml, bas.h5-only input mode###
+
+* **Note:** bas.h5 input files must have the suffix bas.h5.
+
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+   <dataReferences>
+      <url ref="run:0000000-0001">
+         <location>
+            /path/to/input.bas.h5
+         </location>
+      </url>
+   </dataReferences>
+</pacbioAnalysisInputs>
+```
+
+###Sample params.xml, bas.h5-only input mode###
+
+```
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<smrtpipeSettings>
+    <module id="P_Filter" >
+        <param name="minLength"><value>100</value></param>
+        <param name="minSubReadLength"><value>500</value></param>
+        <param name="readScore"><value>0.80</value></param>
+    </module>
+    <module id="P_PreAssemblerDagcon">
+        <param name="computeLengthCutoff"><value>true</value></param>
+        <param name="minLongReadLength"><value>6000</value></param>
+        <param name="targetChunks"><value>6</value></param>
+        <param name="splitBestn"><value>11</value></param>
+        <param name="totalBestn"><value>24</value></param>
+        <param name="blasrOpts"><value> -noSplitSubreads -minReadLength 200 -maxScore -1000 -maxLCPLength 16 </value></param>
+    </module>
+</smrtpipeSettings>
+```
+
+###Output:###
+
+* ``data/corrected.fasta``, ``data/corrected.fastq``: FASTA and FASTQ files of corrected long reads.
+* ``preassembler_report.json``: JSON-formatted pre-assembly report.
+* ``preassembler_report.html``: HTML-formatted pre-assembly report.
+
+## <a name="P_Map"></a> P_Mapping (BLASR) Module
+
+This module aligns reads against a reference sequence, possibly a multi-contig reference.
+If the ``P_Filter`` module is run first, then **only** the reads which passed filtering are aligned.
+
+###Output:###
+
+* ``data/aligned_reads.cmp.h5``: The pairwise alignments for each read.
+* ``data/alignment_summary.gff``: Summary information.
+
+###Parameters:###
+
+* ``pbalign_opts`` Default value = ``Empty string``, Passes options to the underlying ``pbalign.py`` script. **(Optional)**
+
+* ``--useccs=`` Default value = ``None``, A parameter sent to the underlying ``pbalign.py`` script via the ``pbalign_opts`` parameter value above. Values are ``{useccsdenovo|useccs|useccsall}``. **(Optional)**
+
+  * ``useccsdenovo``: Maps just the _de novo_ called sequence and reports it. (Does not include quality values.)
+
+  * ``useccs``: Maps the _de novo_ called sequence, then aligns full passes to the sequence that the _de novo_ called sequence aligns to.
+
+  * ``useccsall``: Maps the _de novo_ called sequence, then aligns all passes (even ones that do not span the length of the template) to the sequence the _de novo_ called sequence aligned to.
+
+
+* ``load_pulses``: Default value = ``True``, Specifies whether to load pulse metric information into the cmp.h5 file. **(Optional)**
+
+* ``maxHits``: Default value = ``None``, Attempts to find sub-optimal alignments and report up to this many hits per read. **(Optional)**
+
+* ``minAnchorSize``: Default value = ``None``, Ignores anchors **smaller** than this size when finding candidate hits for dynamic programming alignment. **(Optional)**
+
+* ``maxDivergence``: Default value = ``None``, Specifies maximum divergence between read and reference to allow a mapping. Divergence = (1 - accuracy).
+
+* ``sambam``: Default value = ``False``, Specifies whether to output a BAM representation of the cmp.h5 file. **(Optional)**
+
+* ``gff2Bed``: Default value = ``False``, Specifies whether to output a BED representation of the depth of coverage summary. **(Optional)**
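+
+As a sketch, the options above can be combined in a ``P_Mapping`` module block such as the following (values are illustrative; note that ``--useccs`` travels inside the ``pbalign_opts`` value):
+
+```
+<module name="P_Mapping">
+   <!-- pass-through options for pbalign.py; useccsall aligns all CCS passes -->
+   <param name="pbalign_opts">
+      <value>--useccs=useccsall</value>
+   </param>
+   <param name="load_pulses">
+      <value>True</value>
+   </param>
+   <!-- report up to 10 sub-optimal hits per read (illustrative) -->
+   <param name="maxHits">
+      <value>10</value>
+   </param>
+   <param name="gff2Bed">
+      <value>True</value>
+   </param>
+</module>
+```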
+
+
+## <a name="P_ISO"></a> P_IsoSeq Module
+
+This module is used for cDNA analysis, including cDNA read quality control and _de novo_ consensus isoform prediction.
+
+1. The module generates reads of insert from SMRT Cell cDNA molecules, removes cDNA primers and poly(A) sequences from reads, and then classifies reads of insert into full-length or non-full-length, chimeric or non-chimeric reads. 
+2. The module **optionally** predicts _de novo_ consensus isoforms from classified reads. 
+3. Finally, the module maps classified reads and predicted consensus isoforms to a provided reference sequence.
+
+###Input:###
+
+* ``input.fofn``:  A file containing the file names of PacBio movies produced by the ``P_Fetch`` module.
+
+* ``reads_of_insert.fasta``: A FASTA file containing reads of insert produced by the ``P_CCS`` module.
+
+* ``reads_of_insert.fofn``:  A file containing the file names of reads of insert ccs.h5 files produced by the ``P_CCS`` module.
+
+###Output:###
+
+* ``isoseq_draft.fasta``: A FASTA file containing all classified reads of insert.
+
+* ``isoseq_flnc.fasta``: A FASTA file containing full-length non-chimeric reads generated by ``pbtranscript.py classify`` in the ``P_IsoSeq`` classify task.
+
+* ``isoseq_nfl.fasta``: A FASTA file containing non-full-length reads generated by ``pbtranscript.py classify`` in the ``P_IsoSeq`` classify task.
+
+* ``aligned_flnc_reads_of_insert.sam|bam|cmp.h5``: A SAM|BAM|cmp.h5 file containing alignments of full-length non-chimeric reads, and the reference sequence.
+
+* ``consensus_isoforms.fasta``: A FASTA file containing predicted consensus isoforms generated by the ``P_IsoSeq`` cluster task. These isoforms are **not** Quiver-polished.
+
+* ``polished_high_qv_consensus_isoforms.fasta|q``: A FASTA/FASTQ file containing polished high-QV consensus isoforms generated by the ``P_IsoSeq`` cluster task. Produced **only** if the ‘Call Quiver to polish consensus isoforms’ option is specified.
+
+* ``polished_low_qv_consensus_isoforms.fasta|q``: A FASTA/FASTQ file containing polished low-QV consensus isoforms generated by the ``P_IsoSeq`` cluster task. Produced **only** if the ‘Call Quiver to polish consensus isoforms’ option is specified.
+
+* ``aligned_consensus_isoforms.sam|bam|cmp.h5``: A SAM|BAM|cmp.h5 file containing alignments of predicted consensus isoforms, and the reference sequence.
+
+###Parameters:###
+
+* ``minScore``: Default value = ``10``,  The minimum phmmer score to detect a primer in a read.
+
+* ``cluster``: Default value = ``False``,  Specifies whether or not to predict _de novo_ consensus isoforms using the ICE (Iterative Clustering and Error correction) algorithm.
+
+* ``cDNASize``: Default value = ``under1k``,  The estimated cDNA size. Values are ``{under1k|between1k2k|between2k3k|above3k}``.
+
+* ``quiver``: Default value = ``False``,  Specifies whether or not to call Quiver to polish consensus isoforms. 
+
+* ``gmap_n``: Default value = ``0``,  The maximum number of paths to show per isoform (that is, the GMAP ``--npaths`` option). If set to ``0``, GMAP will output **two** paths if chimeras are detected; **one** path if chimeras are not detected. 
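+
+A hedged sketch of how these parameters might appear together in a settings file, using the module name as given in this section (values illustrative, not recommendations):
+
+```
+<module name="P_IsoSeq">
+   <param name="minScore">
+      <value>10</value>
+   </param>
+   <!-- enable ICE clustering and Quiver polishing -->
+   <param name="cluster">
+      <value>True</value>
+   </param>
+   <param name="cDNASize">
+      <value>between1k2k</value>
+   </param>
+   <param name="quiver">
+      <value>True</value>
+   </param>
+   <param name="gmap_n">
+      <value>0</value>
+   </param>
+</module>
+```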
+
+## <a name="P_Quiver"></a> P_GenomicConsensus (Quiver) Module
+
+This module takes the alignments generated by the ``P_Mapping`` module and calls the consensus sequence across the reads.
+
+###Input:###
+
+* ``data/aligned_reads.cmp.h5``: The pairwise alignments for each read.
+
+* ``data/alignment_summary.gff``: Summary information.
+
+###Output:###
+
+* ``data/aligned_reads.cmp.h5``
+
+* ``data/variants.gff.gz``: A gzipped GFF3 file containing variants versus the reference.
+
+* ``data/consensus.fastq.gz``: The consensus sequence in FASTQ format.
+
+* ``data/alignment_summary.gff``, ``data/variants.vcf``: Useful information about variants.
+
+###Parameters:###
+
+* ``makeBed``: Default value = ``True``, Specifies whether to output a BED representation of the variants. **(Optional)**
+
+* ``makeVcf``: Default value = ``True``, Specifies whether to output a VCF representation of the variants. **(Optional)**
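+
+For example, a minimal sketch of a ``P_GenomicConsensus`` module block requesting both BED and VCF representations of the variants (these are the documented defaults):
+
+```
+<module name="P_GenomicConsensus">
+   <param name="makeBed">
+      <value>True</value>
+   </param>
+   <param name="makeVcf">
+      <value>True</value>
+   </param>
+</module>
+```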
+
+## <a name="P_Unitig"></a> P_AssembleUnitig
+
+This module is new to HGAP.3. It runs ``P_CeleraAssembler`` configured to assemble corrected reads into unitigs, truncating the traditional ``P_CeleraAssembler`` workflow after the unitigger stage.  This avoids the time-consuming unitig consensus stage (CA/utgcns) built into ``P_CeleraAssembler`` in favor of our own, much faster, unitig consensus caller [PB/utgcns](https://github.com/pbjd/pbutgcns).
+
+###Input:###
+
+* ``corrected.fastq``: FASTQ file of corrected long seed reads generated by ``pbdagcon`` during the pre-assembler stage.
+
+###Output:###
+
+* ``draft_consensus.fasta``: A decent first cut of the assembly (typically ~QV30). Contains both contigs and degenerates.
+
+###Parameters:###
+
+* ``Genome Size``: Approximate size of the sample genome.
+* ``Target Coverage``: How much coverage to allow into the assembly.
+
+## <a name="P_Polish"></a> P_AssemblyPolishing
+
+This module is used in HGAP to polish draft assemblies using Quiver. 
+
+###Input:###
+
+* ``data/aligned_reads.cmp.h5``: The pairwise alignments for each read against the draft assembly.
+
+* ``data/alignment_summary.gff``: Summary information.
+
+###Output:###
+
+* ``data/polished_assembly.fasta.gz``: The consensus sequence in FASTA format.
+
+* ``data/polished_assembly.fastq.gz``: The consensus sequence in FASTQ format.
+
+* ``results/polished_report.html``: HTML-formatted report for the polished assembly.
+
+* ``results/polished_report.xml``: XML-formatted report for the polished assembly.
+
+###Parameters:###
+
+* ``enableMapQVFilter`` Default value = ``True``
+
+
+## <a name="P_Hook"></a> P_AnalysisHook Module
+
+This module allows you to call executable code as part of a SMRT Pipe analysis. ``P_AnalysisHook`` can be called multiple times in a settings XML file, allowing for an arbitrary number of calls to external (non-SMRT Pipe) code.
+
+###Parameters:###
+
+* ``scriptDir``: Default value = ``None``, All executables in this directory are called serially with the command line ``exeCmd jobDir``, where ``jobDir`` is the root of the SMRT Pipe output for this analysis. **(Optional)**
+
+* ``script``: Default value = ``None``, Path to an executable called with the command line ``exeCmd jobDir``, where ``jobDir`` is the root of the SMRT Pipe output for this analysis. **(Optional)**
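+
+For example, a sketch of a ``P_AnalysisHook`` module block that runs a single hook script. The path is hypothetical; the executable receives the job directory as its only argument:
+
+```
+<module name="P_AnalysisHook">
+   <!-- hypothetical executable; invoked as: my_hook.sh <jobDir> -->
+   <param name="script">
+      <value>/path/to/my_hook.sh</value>
+   </param>
+</module>
+```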
+
+## <a name="P_AHA"></a> P_AHA (AHA Scaffolding) Module
+
+This module scaffolds high-confidence contigs, such as those from Illumina® data, using Pacific Biosciences’ long reads.
+
+###Input:###
+
+``P_AHA.py`` uses two kinds of input:
+
+* A FASTA file of high-confidence sequences to be scaffolded. These are typically contigs assembled from Illumina® short-read sequence data. They are passed to AHA as a reference sequence in the ``settings.xml`` input file.
+
+* Pacific Biosciences’ long reads, in HDF5 format. These are used to join the high-confidence contigs into a scaffold. Note that versions of the AHA Scaffolding algorithm prior to v2.1 accepted reads in FASTA format. As of v2.1, users with FASTA-formatted reads should use the underlying executable ``pbaha.py``.
+
+###Sample settings.xml file for long reads, with only customer-facing parameters:###
+
+```
+<?xml version="1.0"?>
+<smrtpipeSettings>
+  <global>
+    <param name="reference">
+        <value>/mnt/secondary-siv/references/ecoli_contig</value>
+    </param>
+  </global>
+  <module name="P_Fetch"/>
+  <module name="P_Filter">
+    <param name="minLength">
+        <value>50</value>
+    </param>
+    <param name="minSubReadLength">
+        <value>50</value>
+    </param>
+    <param name="readScore">
+        <value>0.75</value>
+    </param>
+  </module>
+  <module name="P_FilterReports"/>
+  <module name="P_AHA">
+    <param name="fillin">
+        <value>False</value>
+    </param>
+    <param name="blasrOpts">
+        <value>-minMatch 10 -minPctIdentity 70 -bestn 10 -noSplitSubreads</value>
+    </param>
+    <param name="instrumentModel">
+        <value>RS</value>
+    </param>
+    <param name="paramSchedule">
+        <value>6,3,75,100;6,3,75,100;5,3,75,100;6,2,75,100;6,2,75,100;5,2,75,100</value>
+    </param>
+    <param name="maxIterations">
+        <value>6</value>
+    </param>
+    <param name="description">
+        <value>AHA ("A Hybrid Assembler") is the PacBio hybrid assembly algorithm. It is based on the open source assembly software package AMOS, with additional software components tailored to PacBio's long reads and error profile.</value>
+    </param>
+  </module>
+</smrtpipeSettings>
+```
+
+###Output:###
+
+* ``data/scaffold.gml``: A GraphML file that contains the final scaffold. This file can be readily parsed in the Python programming language using the ``networkx`` package.
+
+* ``data/scaffold.fasta``: A FASTA file with a single entry for each scaffold.
+
+
+###Parameters:###
+
+* ``paramSchedule``: Default value = ``None``  Specifies parameter schedules used for iterative hybrid assembly. Schedules are comma-delimited tuples, separated by semicolons. **Example:** ``6,3,75,100;6,3,75,100;6,2,75,100;6,2,75,100``. The fields, in order, are:
+
+  * Minimum alignment score. Higher is more stringent.
+  * Minimum number of reads needed to link two contigs. (Redundancy)
+  * Minimum subread length to participate in alignment.
+  * Minimum contig length to participate in alignment.
+
+* ``fillin``: Default value = ``False``  Specifies whether to use long reads.
+
+* ``blasrOpts``: Default value = ``-minMatch 10 -minPctIdentity 60 -bestn 10 -noSplitSubreads``  Options passed directly to BLASR for aligning reads to contigs.
+
+* ``maxIterations``: Default value = ``6``  Specifies the maximum number of iterations to use from ``paramSchedule``. 
+  * If ``paramSchedule`` contains **more** iterations than ``maxIterations``, it is truncated at ``maxIterations``. 
+  * If ``paramSchedule`` contains **fewer** iterations than ``maxIterations``, the last iteration of ``paramSchedule`` is repeated.
+
+* ``cleanup``: Default value = ``True``  Specifies whether to clean up intermediate files. This can be useful for debugging purposes.
+
+* ``runNucmer``: Default value = ``True``  Specifies whether to use ``Nucmer`` to detect repeat locations. This can improve assemblies, but can be very slow on large highly repetitive genomes.
+
+* ``gapFillOpts``: Default value = ``“”``  Options to be passed directly to ``gapFiller.py``.
+
+* ``noScaffoldImages``: Default value = ``True``  Specifies that SVG files of the scaffolds are **not** produced. Creating these files can be expensive for large assemblies, but is recommended for small assemblies.
+
+
+To run ``P_AHA.py``, enter the following:
+
+```
+smrtpipe.py --params=settings.xml xml:input.xml >& smrtpipe.err
+```
+
+###Known Issues###
+
+* Depending on the repetitive content of the high-confidence input contigs, a large fraction of the sequence in the contigs can be called repeats. To avoid this, turn off the split repeats step by setting the minimum repeat identity to a number greater than 100, for example:
+```
+<minRepeatIdentity>1000</minRepeatIdentity>
+```
+
+## <a name="P_MOD"></a> P_ModificationDetection Module
+
+This module uses the cmp.h5 output by the ``P_Mapping`` module to:
+
+1. Compare observed IPDs in the cmp.h5 file at each reference position on each strand with control IPDs. Control IPDs are supplied by either an in-silico computational model, or observed IPDs from unmodified “control” DNA.
+
+2.  Generate ``modifications.csv`` and ``modifications.gff`` reporting statistics on the IPD comparison.
+
+###Predicted Kinetic Background Control vs Case-Control Analysis###
+
+By default, the control IPDs are generated per-base of the reference with an in-silico model of the expected IPD values for each position, based on sequence context. The computational model is called the **Predicted IPD Background Control**. Even in normal unmodified DNA, the IPD at any particular point will vary. Internal studies at Pacific Biosciences show that most of the variation in mean IPD across a genome can be predicted from a 12-base sequence context surrounding the active site [...]
+
+###Filtering and Trimming###
+
+Some PacBio data features require special attention for good modification detection performance. The module inspects the alignment between the observed bases and the reference sequence. For an IPD measurement to be included in the analysis, the read sequence must match the reference sequence for K around the cognate base; currently, K = 1. The IPD distribution at some locus can be seen as a mixture of the “normal” incorporation process IPD (sensitive to the local sequence context and DNA [...]
+
+**Pauses** are defined as pulses with an IPD >10x longer than the mean IPD at that context. Heuristics are used to filter out the pauses.
+
+###Statistical Testing###
+
+The module tests the hypothesis that IPDs observed at a particular locus in the sample have longer means than IPDs observed at the same locus in unmodified DNA. If a Whole-Genome-Amplified dataset is generated, which removes DNA modifications, the module uses a case-control, two-sample t-test.
+
+The module also provides a pre-calibrated **Predicted Kinetic Background Control** model which predicts the unmodified IPD, given a 12-base sequence context. In that case, the module uses a one-sample t-test, with an adjustment to account for error in the control model.
+
+###Input:###
+
+* ``aligned_reads.cmp.h5``: A standard cmp.h5 file with alignments and IPD information that supplies the kinetic data for modification detection.
+
+* ``Reference Sequence``: The path to a SMRT Portal reference repository entry for the reference sequence used to perform alignments.
+
+###Output:###
+
+* ``modifications.csv``: Contains one row for each (reference position, strand) pair that appeared in the dataset with coverage of at least x. (x defaults to 3, but is configurable using the ``ipdSummary.py --minCoverage`` flag.) The reference position index is 1-based for compatibility with the GFF file in the R environment.
+
+* ``modifications.gff``: Each template position/strand pair whose p-value exceeds the p-value threshold is displayed as a row. (The default threshold is ``p=0.01`` or ``score=20``.) The file is compliant with the GFF version 3 specification, and the template position is 1-based, per the GFF specification. The strand column refers to the strand carrying the detected modification, which is the **opposite** strand from those used to detect the modification.
+
+The auxiliary data column of the GFF file contains other statistics useful for downstream analysis or filtering. This includes the coverage level of the reads used to make the call, and +/- 20 bp sequence context surrounding the site.
+
+Results are generally indexed by reference position and reference strand. In all cases, the strand value refers to the strand carrying the modification in the DNA sample. The kinetic effect of the modification is observed in read sequences aligning to the opposite strand, so reads aligning to the positive strand carry information about modification on the negative strand and vice versa. The module **always** reports the strand containing the putative modification.
+
+###Parameters###
+
+* ``identifyModifications``: Default value = ``False``, Specifies whether to use a multi-site model to identify the modification type.
+
+* ``tetTreated``: Default value = ``False``, Specifies whether the sample was TET-treated to amplify the signal of 5-mC modifications.
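+
+A minimal sketch of a ``P_ModificationDetection`` module block using these parameters, for example to identify modification types in a TET-treated sample:
+
+```
+<module name="P_ModificationDetection">
+   <!-- use the multi-site model to call the modification type -->
+   <param name="identifyModifications">
+      <value>True</value>
+   </param>
+   <!-- set to True only if the sample was TET-treated (5-mC signal amplification) -->
+   <param name="tetTreated">
+      <value>True</value>
+   </param>
+</module>
+```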
+
+## <a name="P_Cor"></a> P_CorrelatedVariants (Minor and Compound Variants) Module
+
+This module calls and correlates rare variants from a sample and provides support for determining whether or not sets of mutations are co-located. **Note:** This only includes SNPs, not indels.
+
+The module takes high-coverage Reads of Insert reads that are aligned without quality scores to a similar reference. The module requires the following:
+
+* Reads of Insert reads **only**.
+* High coverage (at least 500x).
+* Alignment to reference without using quality scores.
+* The sample **cannot** be highly divergent from the reference.
+
+The algorithm uses simple column counting and a plurality call. While it works well with higher depths (> 500x), it does suffer from reference bias, systematic alignment error, and sizeable divergence from the reference.
+
+Variants may not only coexist on the same molecule, but they may also be co-selected for; that is, inherited together. This algorithm attempts to define a measurable relationship between a set of co-located variants found with some significance on a set of reads.
+
+1. The variant information is read from the GFF input file, then the corresponding cmp.h5 file is searched for reads that cover the variant. Reads that contain the variant are tracked and later assigned any other variants they contain, building a picture of the different haplotypes occurring within the read sets.
+
+2. The frequencies and coverage values for each haplotype are computed. These values will likely be lower than those found in the GFF file, as the read set is constrained to reads that completely span the region defined by the variant set. Only reads that cover **all** variants are included in the frequency and coverage calculation.
+
+3. The frequency and coverage values are used to calculate an observed probability of each permutation within the variant set. These probabilities are used to compute the Mutual Information score for the set. Frequency informs mutual information, but does not define it. It is possible to have a lower frequency variant set with a higher mutual information score than a high frequency one.
+
+###Input:###
+
+* A GFF file containing CCS-based variant calls at each position including read information: ID, start, and stop. (Start and stop are in (+) strand genomic coordinates.)
+
+* A cmp.h5 alignment file aligned without quality, and with a minimum accuracy of 95%.
+
+* **(Optional)** ``score``: Include the mutual information score in the output. (Default value = ``Don't include``)
+* **(Optional)** ``out``: The output file name. (Default value = ``Output to screen``)
+
+###Output:###
+
+* ``data/rare_variants.gff(.gz)``: Contains rare variant information, accessible from SMRT Portal.
+
+* ``data/correlated_variants.gff``: Accessible from SMRT Portal.
+
+* ``results/topRareVariants.xml``: Viewable report based on the contents of the GFF file.
+
+* CSV file containing the location and count of co-variants. Example:
+
+    ```
+    ref,haplotype,frequency,coverage,percent,mutinf
+    ref000001,285-G|297-G,133,2970,4.48,0.263799623501
+    ref000001,285-T|286-T,128,2971,4.31,0.256253924909
+    ref000001,285-G|406-G,103,2963,3.48,0.217737973781
+    ref000001,99-C|285-G,45,2963,1.52,0.113489812305
+    ref000001,286-T|406-G,43,2963,1.45,0.109404796397
+    ref000001,285-G|286-T,38,2971,1.28,0.0987697454578
+    ref000001,99-C|286-T,31,2963,1.05,0.0838430015349
+    ```
+
+
+## <a name="P_Motif"></a> P_MotifFinder (Motif Analysis) Module
+
+This module finds sequence motifs containing base modifications. The primary application is finding restriction-modification systems in prokaryotic genomes. ``P_MotifFinder`` analyzes the output of the ``P_ModificationDetection`` module.
+
+###Input:###
+
+* ``modifications.csv``: Contains one row for each (reference position, strand) pair that appeared in the dataset with coverage of at least x.
+
+* ``modifications.gff``: Each template position/strand pair whose p-value exceeds the p-value threshold displays as a row.
+
+###Output:###
+
+* ``data/motif_summary.csv``: A summary of the detected motifs, as well as the evidence for motifs.
+
+* ``data/motifs.gff``: A reprocessed version of ``modifications.gff`` (from ``P_ModificationDetection``) containing motif annotations.
+
+###Parameters:###
+
+* ``minScore`` Default value = ``35`` Only consider detected modifications with a Modification QV **above** this threshold.
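+
+A minimal sketch of a ``P_MotifFinder`` module block; the threshold shown is simply the default:
+
+```
+<module name="P_MotifFinder">
+   <!-- only modifications with Modification QV above this value are considered -->
+   <param name="minScore">
+      <value>35</value>
+   </param>
+</module>
+```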
+
+## <a name="P_BAR"></a> P_Barcode Module
+
+This module provides access to the ``pbbarcode`` command-line tools, which you use to identify barcodes in PacBio reads.
+
+###Input:###
+
+* Complete barcode FASTA file: A standard FASTA file with barcodes less than 48 bp in length. Based on the score mode you specify, the barcode file might need to contain an even number of barcodes. **Example:**
+
+  ```
+<param name="barcode.fasta">
+  <value>/mnt/secondary/Smrtpipe/martin/prod/data/workflows/barcode_complete.fasta</value>
+</param>
+  ```
+
+* Barcode scoring method: This directly relates to the particular sample preparation used to construct the molecules. Depending on the scoring mode, the barcodes are grouped together in different ways. Valid options are:
+
+  *  ``symmetric``: Supports barcode designs with two identical barcodes on both sides of a SMRTbell™ template. Example: For barcodes (A, B), molecules are labeled as A--A or B--B.
+
+  * ``paired``: Supports barcode designs with two distinct barcodes on each side of the molecule, with neither barcode appearing without its mate. Minimum example: (ALeft, ARight, BLeft, BRight), where the following barcode sets are checked: ALeft--ARight, BLeft--BRight. **Example:** 
+
+  ```
+<param name="mode">
+  <value>symmetric</value>
+</param>
+  ```
+
+* Pad arguments: Defines how many bases to include from the adapter, and how many bases to include from the insert. Ideally, this is ``0`` and ``0``. This produces shorter alignments; however, if the adapter-calling algorithm slips a little, you might lose some sensitivity and/or specificity. Do **not** set these unless you have a compelling use case. **Examples:**
+
+    ```
+<param name="adapterSidePad">
+   <value>2</value>
+</param>
+<param name="insertSidePad">
+   <value>2</value>
+</param>
+    ```
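+
+Putting the fragments above together, a ``P_Barcode`` module block might look like the following sketch (the FASTA path is hypothetical, and the pad values are left at the recommended ``0``):
+
+```
+<module name="P_Barcode">
+   <!-- hypothetical path to the complete barcode FASTA file -->
+   <param name="barcode.fasta">
+      <value>/path/to/barcode_complete.fasta</value>
+   </param>
+   <param name="mode">
+      <value>symmetric</value>
+   </param>
+   <param name="adapterSidePad">
+      <value>0</value>
+   </param>
+   <param name="insertSidePad">
+      <value>0</value>
+   </param>
+</module>
+```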
+
+###Output:###
+
+* ``/data/*.bc.h5``: Barcode calls and their scores for each ZMW.
+
+* ``/data/barcode.fofn``: A file containing the file names of the barcode call (bc.h5) files.
+
+* ``/data/aligned_reads.cmp.h5``
+
+
+## <a name="P_AMP"></a> P_LongAmpliconAnalysis Module
+
+This module finds _de novo_ phased consensus sequences from a pooled set of (possibly diploid) amplicons.
+
+###Input:###
+
+* bas.h5 files
+
+
+###Output:###
+
+* ``data/amplicon_analysis.fasta/q``:  A FASTA/FASTQ file containing the high-quality, non-chimeric sequences found.
+
+* ``data/amplicon_analysis_chimeras_noise.fasta/q``:  A FASTA/FASTQ file containing the low-quality, chimeric sequences found.
+
+* ``data/amplicon_analysis_summary.csv``:  A .csv file containing summary information about each read.
+
+* ``data/amplicon_analysis.csv``:  A .csv file containing coverage and QV information at the per-base level.
+
+###Parameters:###
+
+* ``minLength`` Default value = ``1000``  Only use subreads longer than this threshold. Should be set to ~75% of the shortest amplicon length.
+
+* ``minReadScore`` Default value = ``0.78``  Only use reads with a ReadScore higher than this value.
+
+* ``maxReads`` Default value = ``2000``  Use at most this number of reads to find results. Values greater than 10000 may cause long run times.
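+
+As a sketch, for a pool whose shortest amplicon is about 4 kb, these parameters might be set as follows (values illustrative, derived from the guidance above):
+
+```
+<module name="P_LongAmpliconAnalysis">
+   <!-- ~75% of the shortest amplicon length -->
+   <param name="minLength">
+      <value>3000</value>
+   </param>
+   <param name="minReadScore">
+      <value>0.78</value>
+   </param>
+   <param name="maxReads">
+      <value>2000</value>
+   </param>
+</module>
+```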
+
+
+
+## <a name="P_CCS"></a> P_CCS (Reads of Insert) Module
+
+This module computes Reads of Insert (CCS) sequences from single-molecule reads. It is used to estimate the length of the insert sequence loaded onto a SMRT Cell. Reads of Insert **replaces** the Circular Consensus Sequencing (CCS) protocol, which has been moved off the primary analysis instrument. 
+
+###Input:###
+
+* bas.h5 files
+
+
+###Output:###
+
+* ``data/<movie_name>.fasta``:  A FASTA file containing the consensus sequences of each molecule passing quality filtering.
+
+* ``data/<movie_name>.fastq``:  A FASTQ file containing the consensus sequences and base quality of each molecule passing quality filtering.
+
+* ``data/<movie_name>.ccs.h5``:  A ccs.h5 (similar to a bas.h5) file containing a representation of the CCS sequences and quality values.
+
+###Parameters:###
+
+**Note**: Use the default values to obtain the closest approximation of CCS as it existed on-instrument.
+
+* ``minFullPasses`` Default value = ``2``  The raw sequence must make at least this number of passes over the insert sequence to emit a CCS read for this ZMW.
+
+* ``minPredictedAccuracy`` Default value = ``0.9``  The minimum allowed value of the predicted consensus accuracy to emit a CCS read for this ZMW.
+
+* ``minLength`` Default value = ``None``  The minimum length of CCS reads in bases.  **(Optional)**
+
+* ``maxLength`` Default value = ``None``  The maximum length of CCS reads in bases.  **(Optional)**
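+
+For example, the on-instrument CCS behavior described above corresponds to the defaults, sketched here as an explicit ``P_CCS`` module block:
+
+```
+<module name="P_CCS">
+   <param name="minFullPasses">
+      <value>2</value>
+   </param>
+   <param name="minPredictedAccuracy">
+      <value>0.9</value>
+   </param>
+</module>
+```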
+
+## <a name="P_Bridge"></a> P_BridgeMapper Module
+
+This module creates split alignments of Pacific Biosciences' reads for viewing with SMRT View. The split alignments can be used to infer the presence of assembly errors or structural variation. ``P_BridgeMapper`` works by first using BLASR to get primary alignments for filtered subreads. Then, ``P_BridgeMapper`` calls BLASR again, mapping any portions of those subreads not contained in the primary alignments.
+
+###Input:###
+
+* ``input.fofn``: A file containing the file names of the raw input files used for the analysis.
+* ``data/aligned_reads.cmp.h5``: The initial alignments for each subread.
+
+###Output:###
+
+* ``data/split_reads.bridgemapper.gz``: A gzipped, tab-separated file of split alignments. This file is consumed by SMRT View. 
+
+**Note:** The meanings of some of the columns in this file have changed:
+  * The columns for BLASR scores now contain placeholder values. 
+  * The columns for the starts and ends of alignments now follow the convention used in cmp.h5 files: Start is **always** less than end, regardless of the orientation of the alignment.
+
+###Parameters:###
+
+* ``minRootLength`` Default value = ``250``  Only consider subreads with primary alignments longer than this threshold.
+
+* ``minAffixLength`` Default value = ``50``  Only report split alignments with secondary alignments longer than this threshold.
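+
+A minimal sketch of a ``P_BridgeMapper`` module block; the values shown are simply the defaults:
+
+```
+<module name="P_BridgeMapper">
+   <param name="minRootLength">
+      <value>250</value>
+   </param>
+   <param name="minAffixLength">
+      <value>50</value>
+   </param>
+</module>
+```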
+
+## <a name="Tools"></a> SMRT Pipe Tools
+
+**Tools** are programs that run as part of SMRT Pipe. A module, such as ``P_Mapping``, can call several tools (such as the mapping tools ``summarizeCoverage.py`` or ``compareSequences.py``) to actually perform the underlying processing. 
+
+All the tools are located at ``$SEYMOUR_HOME/analysis/bin``.
+
+Use the ``--help`` option to see usage information for each tool. (Some tools are undocumented.)
+
+
+## <a name="Build_SPTools"></a> Building the SMRT Pipe tools manually, without SMRT Portal, SMRT View, or Kodos
+
+
+It is currently **not** possible to build the SMRT Pipe tools without SMRT Portal, SMRT View, or Kodos.
+
+
+## <a name="Files"></a> SMRT Pipe File Structure
+
+**Note**: The output of a SMRT Pipe analysis includes more files than described here; interested users should explore the file structure. Following are details about the major files.
+
+```
+ <jobID>/job.sh
+```
+* Contains the SMRT Pipe command line call for the job.
+
+```
+<jobID>/settings.xml
+```
+* Contains the modules (and their associated parameters) to be run as part of the SMRT Pipe run. 
+
+```
+<jobID>/metadata.rdf
+```
+* Contains all important metadata associated with the job. This includes metadata propagated from primary results, links to all reports and data files exposed to users, and high-level summary metrics computed during the job. The file is an entry point to the job by tools such as SMRT Portal and SMRT View. ``metadata.rdf`` is formatted as an RDF-XML file using OWL ontologies. See http://www.w3.org/standards/semanticweb/ for an introduction to Semantic Web technologies.
+
+```
+<jobID>/input.fofn
+```
+* This file (“file of file names”) is generated early during a job and contains the file names of the raw input files used for the analysis.
+
+```
+<jobID>/input.xml
+```
+* Used to specify the input files to be analyzed in a job, and is passed on to the command line.
+
+```
+log/smrtpipe.log
+```
+* Contains debugging output from SMRT Pipe modules. This is typically shown by way of the **View Log** button in SMRT Portal.
+
+### Data Files ###
+
+The ``Data`` directory is where most raw files generated by the pipeline are stored. (**Note**: The following are example output files - for more details about specific files, see the sections dealing with individual modules.)
+
+```
+aligned_reads.cmp.h5, aligned_reads.sam, aligned_reads.bam
+```
+* Mapping and consensus data from secondary analysis.
+
+```
+alignment_summary.gff
+```
+* Alignment data summarized on sequence regions.
+
+```
+variants.gff.gz
+```
+* All sequence variants called from consensus sequence.
+
+```
+toc.xml
+```
+* **Deprecated** - The master index information for the job outputs is now included in the ``metadata.rdf`` file.
+
+### Results/Reports Files ###
+
+Modules with **Reports** in their name produce HTML reports with static PNG images using XML+XSLT. These reports are located in the ``results`` subdirectory. The underlying XML document for each report is also preserved there; these can be useful files for data-mining the outputs of SMRT Pipe.
+
+
+## <a name="RefRep"></a> The Reference Repository
+
+The **reference repository** is a file-based data store used by SMRT Analysis to manage reference sequences and associated information. The full description of all of the attributes of the reference repository is beyond the scope of this document, but you need to use some basic aspects of the reference repository in most SMRT Pipe analyses. 
+
+**Example**: Analysis of multi-contig references can **only** be handled by supplying a reference entry from a reference repository.
+
+It is simple to create and use a reference repository:
+
+* A reference repository can be **any** directory on your system. You can have as many reference repositories as you wish; the input to SMRT Pipe is a fully resolved path to a reference entry, so this can live in any accessible reference repository.
+
+Starting with the FASTA sequence ``genome.fasta``, you upload the sequence to your reference repository using the following command:
+```
+referenceUploader -c -p/path/to/repository -nGenomeName -fgenome.fasta
+```
+
+where:
+
+* ``/path/to/repository`` is the path to your reference repository.
+* ``GenomeName`` is the name to use for the reference entry that will be created.
+* ``genome.fasta`` is the FASTA file containing the reference sequence to upload.
+
+**Notes on FASTA files to be used in the reference repository:**
+
+* The FASTA header should **not** be processed in any way when imported into the
+reference repository. 
+* The FASTA header **cannot** contain any tab characters, colons, double quotes, or additional ``>`` characters beyond the standard demarcation of the start of the header. 
+* Within a multi-sequence FASTA file, each header must be **unique**.
+
+
+For a large genome, we highly recommend that you produce the BLASR suffix array during this upload step. Use the following command:
+```
+referenceUploader -c -p/path/to/repository -nHumanGenome -fhuman.fasta --saw='sawriter -welter'
+```
+
+There are many more options for reference management. Consult the usage information for ``referenceUploader`` by entering ``referenceUploader -h``.
+
+To learn more about what is being stored in the reference entries, look at the directory containing a reference entry. You will find a metadata description (``reference.info.xml``) of the reference and its associated files. For example, various static indices for BLASR and SMRT View are stored in the sequence directory along with the FASTA sequence.
+
+
+***
+
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2014, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
+**P/N 001-353-082-05**
\ No newline at end of file
diff --git a/docs/SMRT-Pipe-Reference-Guide-v2.3.0.md b/docs/SMRT-Pipe-Reference-Guide-v2.3.0.md
new file mode 100644
index 0000000..2f753f1
--- /dev/null
+++ b/docs/SMRT-Pipe-Reference-Guide-v2.3.0.md
@@ -0,0 +1,1413 @@
+* [Introduction](#Intro)
+* [Installation](#Install)
+* [Using the Command Line](#CommandLine)
+ * [Command-Line Options](#CommandLineOptions)
+ * [Utility Scripts](#UtilityScripts)
+ * [Specifying SMRT Pipe Inputs](#PipeInputs)
+ * [Specifying SMRT Pipe Parameters](#PipeParams)
+* [SMRT Portal Protocols](#PortalProtocols)
+ * [RS_AHA_Scaffolding](#PRO_AHA)
+ * [RS_BridgeMapper](#PRO_BM)
+ * [RS_HGAP_Assembly.2](#PRO_HGAP2)
+ * [RS_HGAP_Assembly.3 (Beta)](#PRO_HGAP3)
+ * [RS_IsoSeq (Beta)](#PRO_ISO)
+ * [RS_Long_Amplicon_Analysis (Beta)](#PRO_LAMP)
+ * [RS_Minor_Variant (Beta)](#PRO_MINOR)
+ * [RS_Modification_Detection](#PRO_MOD)
+ * [RS_Modification_and_Motif_Analysis](#PRO_MODM)
+ * [RS_PreAssembler](#PRO_PRE)
+ * [RS_ReadsOfInsert](#PRO_ROI)
+ * [RS_ReadsOfInsert_Mapping](#PRO_ROI_MAP)
+ * [RS_Resequencing](#PRO_RESEQ)
+ * [RS_Site_Acceptance_Test](#PRO_SITE)
+ * [RS_Subreads](#PRO_SUB)
+* [SMRT Pipe Modules and Their Parameters](#Modules)
+ * [Global Parameters](#Global)
+ * [P_AHA (AHA Scaffolding) Module](#P_AHA)
+ * [P_AnalysisHook Module](#P_Hook)
+ * [P_AssemblyPolishing Module](#P_Polish)
+ * [P_AssembleUnitig Module](#P_Unitig)
+ * [P_Barcode Module](#P_BAR)
+ * [P_BridgeMapper Module](#P_Bridge)
+ * [P_CCS (Reads of Insert) Module](#P_CCS)
+ * [P_Fetch Module](#P_Fetch)
+ * [P_Filter Module](#P_Filter)
+ * [P_GenomicConsensus (Quiver) Module](#P_Quiver)
+ * [P_IsoSeqClassify Module](#P_ISO_CLASS)
+ * [P_IsoSeqCluster Module](#P_ISO_CLUS)
+ * [P_LongAmpliconAnalysis Module](#P_AMP)
+ * [P_Mapping (BLASR) Module](#P_Map)
+ * [P_MotifFinder (Motif Analysis) Module](#P_Motif)
+ * [P_ModificationDetection Module](#P_MOD)
+ * [P_PreAssembler Module](#P_Pre)
+ * [P_PreAssemblerDagcon Module](#P_PreDag)
+* [SMRT Pipe Tools](#Tools)
+* [Building the SMRT Pipe tools manually, without SMRT Portal, SMRT View, or Kodos](#Build_SPTools)
+* [SMRT Pipe File Structure](#Files)
+* [The Reference Repository](#RefRep)
+
+## <a name="Intro"></a> Introduction
+
+This document describes the underlying command-line interface to SMRT Pipe, and is for use by bioinformaticians working with secondary analysis results.
+
+**SMRT Pipe** is Pacific Biosciences’ underlying analysis framework for secondary analysis functions.  SMRT Pipe is a general-purpose workflow engine based on the Python® programming language. SMRT Pipe is easily extensible, and supports logging, distributed computation, error handling, analysis parameters, and temporary files.
+
+In a typical installation of the SMRT Analysis Software, the SMRT Portal web application calls SMRT Pipe when a job is started. SMRT Portal provides a convenient and user-friendly way to analyze Pacific Biosciences’ sequencing data through SMRT Pipe. Power users will find that there is more flexibility and customization available by instead running SMRT Pipe analyses from the command line.
+
+* The latest version of SMRT Pipe is available [here](http://pacificbiosciences.github.io/DevNet/).
+
+* SMRT Pipe can also be accessed using the Secondary Analysis Web Services API. For details, see [Secondary Analysis Web Services API](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/Secondary-Analysis-Web-Services-API-v2.2.0).
+
+**Note:**
+Throughout this documentation, the path ``/opt/smrtanalysis`` is used to refer to the installation directory for SMRT Analysis (also known as ``$SEYMOUR_HOME``). Replace this path with the path appropriate to your installation when using this document.
+
+## <a name="Install"></a> Installation
+
+SMRT Pipe is installed as part of the SMRT Analysis software installation. For details, see [SMRT Analysis Software Installation](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-v2.2.0).
+
+## <a name="CommandLine"></a> Using the Command Line
+
+In a typical SMRT Analysis installation, SMRT Pipe is in your path after sourcing the ``setup.sh`` file.  This file declares the ``$SEYMOUR_HOME`` environment variable and also sources two subsequent files, ``$SEYMOUR_HOME/analysis/etc/setup.sh`` and ``$SEYMOUR_HOME/common/etc/setup.sh``.  Do **not** declare `$SEYMOUR_HOME` in `~/.bashrc` or any other environment setting file because it will cause conflicts.
+
+
+Invoke the ``smrtpipe.py`` script by executing:
+
+```
+. /path/to/smrtanalysis/etc/setup.sh && smrtpipe.py [--help] [options] --params=settings.xml xml:input.xml
+```
+
+Replace ``/path/to/smrtanalysis/`` with the path to your SMRT Analysis installation. This is the same way ``smrtpipe.py`` is invoked by SMRT Portal through the `job.sh` script.
+
+Logging messages are printed to stderr as well as to a log file (``log/smrtpipe.log``). It is standard practice to redirect the stderr messages to a file, for example by appending 
+``&> smrtpipe.err`` to the command line if running under bash.
+
+### <a name="CommandLineOptions"></a> Command-Line Options
+
+Following are some of the available options for invoking ``smrtpipe.py``:
+
+```
+-D key=value
+```
+
+* Overrides a configuration variable. Configuration variables are key-value pairs that are read from the global file ``smrtpipe.rc`` before starting an analysis. An example is the ``NPROC`` variable which controls the number of simultaneous processors to use during the analysis. To restrict SMRT Pipe to 4 processors, use ``-D NPROC=4``.
+
+```
+--debug
+```
+* Activates debugging output in the stderr and log outputs. To set this flag as a default, specify ``DEBUG=True`` in the ``smrtpipe.rc`` file.
+
+```
+--distribute
+```
+* Distributes the computation across a compute cluster. For information on configuring SMRT Pipe for a distributed computation environment, see [SMRT Analysis Software Installation](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-v2.2.0).
+
+```
+--help
+```
+* Displays information about command-line usage and options, and then exits.
+
+```
+--noreports
+```
+* Turns off the production of XML/HTML/PNG reports.
+
+```
+--nohtml
+```
+* Turns off the conversion of XML reports into HTML. (This conversion **requires** that Java be installed.)
+
+```
+--output=outputDir
+```
+
+* Specifies a root directory to use for all SMRT Pipe outputs for this analysis.  SMRT Pipe places outputs in this directory, as well as in data, results, and log subdirectories.
+
+```
+--params=params.xml
+```
+* Specifies a settings XML file for running the pipeline analysis. If this option is **not** specified, SMRT Pipe prints a message and then exits.
+
+```
+--totalCells
+```
+* Specifies that if the number of cells in the job is less than ``totalCells``, the job is **not** marked complete when it finishes. Data from additional cells will be appended to the outputs, until the number of cells reaches ``totalCells``. 
+
+```
+--version
+```
+* Displays the version number of SMRT Pipe and then exits.
+
+```
+--kill
+```
+* Kills a SMRT Pipe job running in the current directory. This works with ``--output``.
+
+```
+smrtpipe.py --examples
+    Name                               Directory
+1   smrtpipe_basemods                  /srv/depot/jdrake/build/doc/examples/smrtpipe_basemods
+2   smrtpipe_assembly_allora           /srv/depot/jdrake/build/doc/examples/smrtpipe_assembly_allora
+3   smrtpipe_assembly_hgap3            /srv/depot/jdrake/build/doc/examples/smrtpipe_assembly_hgap3
+4   smrtpipe_resequencing_barcode      /srv/depot/jdrake/build/doc/examples/smrtpipe_resequencing_barcode
+5   smrtpipe_resequencing              /srv/depot/jdrake/build/doc/examples/smrtpipe_resequencing
+6   smrtpipe_hybrid_aha                /srv/depot/jdrake/build/doc/examples/smrtpipe_hybrid_aha
+```
+* Displays the SMRT Pipe example jobs, a useful reference for how different workflows are configured and run through SMRT Pipe.
+
+### <a name="UtilityScripts"></a> Utility Scripts
+
+For convenience, you can create several utility scripts:
+
+**run_smrtpipe_singlenode.sh**
+
+```
+SMRT_ROOT=/path/to/smrtanalysis/
+.  $SMRT_ROOT/common/etc/setup.sh && smrtpipe.py  --params=settings.xml   xml:input.xml
+```
+
+
+**run_smrtpipe_distribute.sh**
+
+```
+SMRT_ROOT=/path/to/smrtanalysis/
+.   $SMRT_ROOT/common/etc/setup.sh && smrtpipe.py  --distribute --params=settings.xml   xml:input.xml
+```
+
+**run_smrtpipe_debug.sh**
+```
+SMRT_ROOT=/path/to/smrtanalysis/
+.   $SMRT_ROOT/common/etc/setup.sh && smrtpipe.py  --debug --params=settings.xml   xml:input.xml
+```
+
+
+
+### <a name="PipeInputs"></a> Specifying SMRT Pipe Inputs
+
+The input file is an XML file specifying the sequencing data to process. Generally, you specify the inputs as URIs (Universal Resource Identifiers) which are resolved by code internal to SMRT Pipe. In practice, this is most useful to large enterprise users that have a data management scheme and are able to modify the SMRT Pipe code to include their own resolver.
+
+The simpler way to specify inputs is to **fully resolve** the path to each input file, which, as of v2.0, is a ``bax.h5`` file. For more information, see [bas.h5 Reference Guide](http://files.pacb.com/software/instrument/2.0.0/bas.h5%20Reference%20Guide.pdf).
+
+The script ``fofnToSmrtpipeInput.py`` is provided to convert a FOFN (a "file of file names" file) to the input format expected by SMRT Pipe. If ``my_inputs.fofn`` looks like
+```
+/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524962550000001823079609281357_s1_p0.2.bax.h5
+/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524962550000001823079609281357_s1_p0.3.bax.h5
+/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524962550000001823079609281357_s1_p0.1.bax.h5
+```
+or, for SMRT Pipe versions **before** v2.1:
+```
+/share/data/run_1/m100923_005722_00122_c15301919401091173_s0_p0.bas.h5
+/share/data/run_2/m100820_063008_00118_c04442556811011070_s0_p0.bas.h5
+```
+
+
+then it can be converted to a SMRT Pipe input XML file by entering:
+```
+fofnToSmrtpipeInput.py my_inputs.fofn > my_inputs.xml
+```
+Following is the resulting XML file for SMRT Pipe v2.1:
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+  <dataReferences>
+    <url ref="run:0000000-0000"><location>/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524
+962550000001823079609281357_s1_p0.2.bax.h5</location></url>
+    <url ref="run:0000000-0001"><location>/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524
+962550000001823079609281357_s1_p0.3.bax.h5</location></url>
+    <url ref="run:0000000-0002"><location>/mnt/data/2770276/0006/Analysis_Results/m130512_050747_42209_c100524
+962550000001823079609281357_s1_p0.1.bax.h5</location></url>
+  </dataReferences>
+</pacbioAnalysisInputs>
+```
+
+For SMRT Pipe versions **before** v2.1:
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+ <dataReferences>
+    <url ref="run:0000000-0000"><location>/share/data/run_1/m100923_005722_00122_c15301919401091173_s0_p0.bas.h5</location></url>
+    <url ref="run:0000000-0001"><location>/share/data/run_2/m100820_063008_00118_c04442556811011070_s0_p0.bas.h5</location></url>
+ </dataReferences>
+</pacbioAnalysisInputs>
+```
+
+To run an analysis using these input files, use the following command:
+```
+smrtpipe.py --params=settings.xml xml:my_inputs.xml
+```
+
+The SMRT Pipe input format lets you specify annotations, such as job IDs, job names, and job comments, in a job-management environment. The ``fofnToSmrtpipeInput.py`` application has command-line options for setting these optional attributes.
+
+**Note**: To get help for a script, run the script with the ``--help`` option and no additional arguments. For example:
+```
+fofnToSmrtpipeInput.py --help
+```
+
+### <a name="PipeParams"></a> Specifying SMRT Pipe Parameters
+
+The ``--params`` option is the most important SMRT Pipe option, and is required for any sophisticated use. The option specifies an XML file that controls:
+
+* The analysis modules to run.
+* The **order** of execution.
+* The **parameters** used by the modules.
+
+The general structure of the settings XML file is as follows:
+```
+<?xml version="1.0"?>
+<smrtpipeSettings>
+
+<protocol>
+...global parameters...
+</protocol>
+
+<module id="module_1">
+...parameters...
+</module>
+
+<module id="module_2">
+...parameters...
+</module>
+
+</smrtpipeSettings>
+```
+
+* The ``protocol`` element allows setting global parameters that can be used by all modules.
+* Each ``module`` element defines an analysis module to run. 
+* The order of the ``module`` elements defines the order in which the modules execute.
+
+SMRT Portal protocol templates are located in: ``$SEYMOUR_HOME/common/protocols/``.
+
+SMRT Pipe modules are located in: 
+``$SEYMOUR_HOME/analysis/lib/pythonx.x/pbpy-0.1-py2.7.egg/pbpy/smrtpipe/modules/``.
+
+You specify parameters by entering a key-value pair in a ``param`` element. 
+* The name of the key is in the ``name`` attribute of the ``param`` element.
+* The value of the key is contained in a nested ``value`` element. 
+
+For example, to set the parameter named ``reference``, you specify:
+```
+<param name="reference">
+  <value>/share/references/repository/celegans</value>
+</param>
+```
+
+**Note**: To reference a parameter value in other parameters, use the notation ``${variable}`` when specifying a value. For example, to reference a global parameter named ``home``, use it in other parameters as ``${home}``.  SMRT Pipe supports arbitrary parameters in the settings XML file, so the use of temporary variables like this can help readability and maintainability.
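+
+For example, a global parameter defined in the ``protocol`` element can be reused when defining other parameters (paths illustrative):
+
+```
+<protocol>
+  <param name="home">
+    <value>/share/references/repository</value>
+  </param>
+  <!-- ${home} expands to the value of the global "home" parameter -->
+  <param name="reference">
+    <value>${home}/celegans</value>
+  </param>
+</protocol>
+```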
+
+Following is a complete example of a settings file for running filtering, mapping, and consensus steps against the _E. coli_ reference genome:
+```
+<?xml version="1.0" encoding="utf-8"?>
+<smrtpipeSettings>
+ <protocol>
+  <param name="reference">
+   <value>/share/references/repository/ecoli</value>
+  </param>
+ </protocol>
+
+ <module name="P_Filter">
+  <param name="minLength">
+    <value>50</value>
+  </param>
+  <param name="readScore">
+    <value>0.75</value>
+  </param>
+ </module>
+
+ <module name="P_FilterReports" />
+
+ <module name="P_Mapping">
+  <param name="align_opts" hidden="true">
+   <value>--minAccuracy=0.75 --minLength=50 -x </value>
+  </param>
+ </module>
+
+ <module name="P_MappingReports" />
+ <module name="P_Consensus" />
+ <module name="P_ConsensusReports" />
+
+</smrtpipeSettings>
+```
+
+## <a name="PortalProtocols"></a> SMRT Portal Protocols
+
+Following are the secondary analysis protocols included in SMRT Analysis v2.2.0, with the SMRT Pipe module(s) called by each protocol. Many of these modules are described later in this document.
+
+### <a name="PRO_AHA"></a> RS_AHA_Scaffolding:
+
+* Used for hybrid assembly of genomes up to 200 Mb in size with PacBio reads.
+* Improves existing assemblies up to 200 Mb in size by scaffolding with PacBio long reads to join contigs. 
+* Reads are filtered, then assembled together with high-confidence contigs into scaffolds using a combination of algorithms developed by Pacific Biosciences and the AMOS open-source project.
+```
+* P_Filter
+* P_AHA
+```
+
+### <a name="PRO_BM"></a> RS_BridgeMapper:
+
+* Used for troubleshooting _de novo_ assemblies, variants, indels, and so on.
+* Returns split alignments of PacBio reads using BLASR. 
+* Reads are filtered by length and quality, mapped to a provided reference sequence, and consensus and variants are identified versus this reference using the Quiver algorithm.
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+* P_BridgeMapper
+```
+
+### <a name="PRO_HGAP2"></a> RS_HGAP_Assembly.2:
+
+* HGAP (Hierarchical Genome Assembly Process) performs high quality _de novo_ assembly using a single PacBio library preparation. 
+* HGAP consists of pre-assembly, _de novo_ assembly with Celera® Assembler, and assembly polishing with Quiver.
+* The protocol is optimized for **quality.**
+
+```
+* P_PreAssembler
+* P_CeleraAssembler
+* P_Mapping
+* P_AssemblyPolishing 
+```
+
+### <a name="PRO_HGAP3"></a> RS_HGAP_Assembly.3 (Beta):
+
+* HGAP (Hierarchical Genome Assembly Process) performs high quality _de novo_ assembly using a single PacBio library preparation. 
+* HGAP consists of pre-assembly, _de novo_ assembly with PacBio's ``AssembleUnitig``, and assembly polishing with Quiver.
+* The protocol is optimized for **speed.**  It introduces a new unitig consensus caller that is substantially faster than the one included with ``P_CeleraAssembler``.  This protocol is designed with larger genomes in mind, but can also be used as a replacement for ``RS_HGAP_Assembly.2``, which will eventually be deprecated.
+
+To see an example of how to set up and run ``RS_HGAP_Assembly.3`` using ``smrtpipe.py``, take a look at the ``smrtpipe_assembly_hgap3`` example included with ``smrtpipe.py``.
+
+```
+smrtpipe.py --examples
+    Name                               Directory
+1   smrtpipe_basemods                  /srv/depot/jdrake/build/doc/examples/smrtpipe_basemods
+2   smrtpipe_assembly_hgap3            /srv/depot/jdrake/build/doc/examples/smrtpipe_assembly_hgap3
+3   smrtpipe_resequencing_barcode      /srv/depot/jdrake/build/doc/examples/smrtpipe_resequencing_barcode
+4   smrtpipe_resequencing              /srv/depot/jdrake/build/doc/examples/smrtpipe_resequencing
+```
+
+
+```
+* P_PreAssemblerDagcon
+* P_AssembleUnitig
+* P_Mapping
+* P_AssemblyPolishing 
+```
+
+### <a name="PRO_ISO"></a> RS_IsoSeq (Beta):
+
+* Reads of insert are generated from SMRT Cell cDNA molecules, filtered by length and quality, classified into full-length or non-full-length and chimeric or non-chimeric reads, and then mapped against the reference using GMAP to span introns.
+
+* _de novo_ consensus isoforms are optionally predicted from classified reads of insert using the ICE (Iterative Clustering and Error Correction) algorithm, then optionally polished via Quiver and classified into High-QV and Low-QV isoforms.
+
+```
+* P_CCS
+* P_IsoSeqClassify
+* P_IsoSeqCluster
+```
+
+### <a name="PRO_LAMP"></a> RS_Long_Amplicon_Analysis (Beta):
+
+* Used to determine phased consensus sequences for pooled amplicon data. 
+* Can pool up to 20 distinct amplicons. Reads are clustered into high-level groups, then each group is phased and consensus is called using the Quiver algorithm.
+* Filters chimeric sequences.
+* Optionally splits reads by barcode if the sample is barcoded.
+```
+* P_LongAmpliconAnalysis
+* P_Barcode
+```
+
+### <a name="PRO_MINOR"></a> RS_Minor_Variant (Beta):
+
+* Used to call minor variants in a heterogeneous data set against a user-provided reference sequence.
+```
+* P_CCS
+* P_Mapping
+```
+
+### <a name="PRO_MOD"></a> RS_Modification_Detection:
+
+* A resequencing analysis that identifies common bacterial base modifications (6-mA, 4-mC, and optionally TET-converted 5-mC). 
+* Reads are filtered by length and quality, mapped against a specified reference sequence, and then variants are called.
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+* P_ModificationDetection
+```
+
+### <a name="PRO_MODM"></a> RS_Modification_and_Motif_Analysis:
+
+* A resequencing analysis that identifies common bacterial base modifications (6-mA, 4-mC, and optionally TET-converted 5-mC), and then analyzes the methyltransferase recognition motifs. 
+* Reads are filtered by length and quality, mapped against a specified reference sequence, and then variants are called.
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+* P_ModificationDetection
+* P_MotifFinder
+```
+
+### <a name="PRO_PRE"></a> RS_PreAssembler:
+
+* Used to build a set of highly accurate long reads for use in _de novo_ assembly, using the hierarchical genome assembly process (HGAP).
+* Takes each read exceeding a minimum length, aligns all reads against it, trims the edges, and then takes the consensus.
+```
+* PreAssemblerSFilter
+* P_PreAssembler
+```
+
+### <a name="PRO_ROI"></a> RS_ReadsOfInsert:
+
+* Used to estimate the length of the insert sequence loaded onto a SMRT Cell. 
+* Generates reads from the insert sequence of single molecules, optionally splitting by barcode.
+* Replaces the Circular Consensus Sequencing (CCS) protocol, which has been moved off the primary analysis instrument. 
+* To obtain the closest approximation of CCS as it existed on-instrument, specify ``MinCompletePasses = 2`` and ``MinPredictedAccuracy = 0.9`` in the SMRT Portal Reads of Insert protocol dialog box.
+
+```
+* P_CCS
+* P_Barcode
+```
+
+### <a name="PRO_ROI_MAP"></a> RS_ReadsOfInsert_Mapping:
+
+* Used for whole-genome or targeted resequencing.
+* Reads are filtered, then mapped to a provided reference sequence.
+* Haploid variants and small indels, but **not** diploid variants, are called during consensus.
+* Uses Reads of Insert (formerly known as CCS) data during mapping.
+```
+* P_Filter
+* P_CCS
+* BLASR_De_Novo_CCS
+```
+
+### <a name="PRO_RESEQ"></a> RS_Resequencing:
+
+* Used for whole-genome or targeted resequencing.
+* Reads are filtered, mapped to a provided reference sequence, and consensus and variants are identified against this reference.
+* Haploid variants and small indels, but **not** diploid variants, are called during consensus.
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+```
+
+
+### <a name="PRO_SITE"></a> RS_Site_Acceptance_Test:
+
+* Site acceptance test workflow for lambda resequencing. 
+* Generates a report displaying site acceptance test metrics.
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+```
+
+### <a name="PRO_SUB"></a> RS_Subreads:
+
+* Filters reads based on the minimum read length and read quality specified.
+```
+* P_Filter
+```
+
+## <a name="Modules"></a>  SMRT Pipe Modules and their Parameters
+Following is an overview of some of the common modules included in SMRT Pipe and their parameters. Not all modules or parameters are listed here. 
+
+Developers interested in even finer control should look inside the ``validateSettings`` method for each Python analysis module. By convention, **all** of the settings known to the analysis module are referenced in this method.
+
+## <a name="Global"></a> Global Parameters
+
+Global parameters are potentially used in multiple modules. In the SMRT Pipe internals, they are accessed in the “global” namespace. Following are some common global parameters:
+
+```
+reference
+```
+* Specifies the name of a reference repository entry or FASTA file for mapping reads. **Required** for resequencing workflows.
+* Default value = ``None``
+
+```
+control
+```
+* Specifies the name of a reference repository entry or FASTA file for mapping spike-in control reads. **(Optional)**
+* Default value = ``None``
+
+```
+use_subreads
+```
+* Specifies whether to divide reads into subreads using the adapter region boundaries found by the primary analysis software. **(Optional)**
+* Default value = ``True``
+
+```
+num_stats_regions
+```
+* Specifies how many regions to use when reporting region statistics such as depth of coverage and variant density. **(Optional)**
+* Default value = ``500``
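+
+In a ``settings.xml`` file, global parameters are set inside a ``<global>`` block (as in the ``P_AHA`` sample later in this document). The following sketch is illustrative; the reference path is a placeholder:
+
+```
+<global>
+  <param name="reference">
+    <value>/path/to/repository/ecoli</value>
+  </param>
+  <param name="use_subreads">
+    <value>True</value>
+  </param>
+</global>
+```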
+
+## <a name="P_Fetch"></a> P_Fetch Module
+
+This module fetches the input data and generates a file of the file names of the input .pls files for downstream analysis. This module has **no** exposed parameters.
+
+###Output:###
+
+* ``pls.fofn`` (File containing file names of the input .pls files)
+
+## <a name="P_Filter"></a> P_Filter Module
+
+This module filters and trims the raw reads produced by Pacific Biosciences’ primary analysis software. Options control how the information found in the bas.h5 files is used to pass reads, and portions of reads, forward.
+
+###Input:###
+
+* ``bas.h5`` files (pre v2.1) or ``bax.h5`` files (post v2.1)
+
+###Output:###
+
+* ``data/filtering_summary.csv``: Includes raw metrics and filtering information for each read (not subread) found in the original bas.h5 files.
+* ``rgn.h5`` (one for each input bas.h5 file): Filtering information generated by the module.
+
+###Parameters:###
+
+* ``minLength``  Reads with a high quality region read length **below** this threshold are filtered out. **(Optional)**
+
+* ``maxLength``  Reads with a high quality region read length **above** this threshold are filtered out. **(Optional)**
+
+* ``minSubReadLength``  Subreads **shorter** than this length are filtered out.
+
+* ``maxSubReadLength``  Subreads **longer** than this length are filtered out.
+
+* ``minSNR``  Reads with signal-to-noise ratio **below** this threshold are filtered out. **(Optional)**
+
+* ``readScore`` Reads with a high quality region (Read Quality) score **below** this threshold are filtered out. **(Optional)**
+
+* ``trim`` Default value = ``True``, Specifies whether to trim reads to the high-quality region. **(Optional)**
+
+* ``artifact``  Reads with a read artifact score less than this (negative) number are filtered out. No number indicates no artifact filtering. Reasonable thresholds are typically between -1000 and -200. **(Optional)**
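+
+For illustration, a minimal ``P_Filter`` block in ``settings.xml`` might look like the following sketch; the threshold values echo the examples elsewhere in this document and are not recommendations:
+
+```
+<module name="P_Filter">
+  <param name="minLength">
+    <value>50</value>
+  </param>
+  <param name="minSubReadLength">
+    <value>50</value>
+  </param>
+  <param name="readScore">
+    <value>0.75</value>
+  </param>
+</module>
+```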
+
+## <a name="P_Pre"></a> P_PreAssembler Module
+
+This module takes as input long reads and short reads in standard formats, aligns the short reads to the long reads, and outputs a consensus from the preassembled short reads using the long reads as seeds.
+**Note:** You **must** run the ``P_Fetch`` and ``P_Filter`` modules before running ``P_PreAssembler`` to get meaningful results.
+
+###Input:###
+
+* **Long reads ("seed reads")**: PacBio pls.h5/bas.h5 file(s) and optionally associated rgn.h5 file(s).
+* **Short reads**: Can be one of the following:
+ * Arbitrary high-quality reads in FASTQ format, such as Illumina® reads, without Ns.
+ * PacBio pls.h5/bas.h5 file(s): The same reads as used for the long reads. This mode is the first step of HGAP (Hierarchical Genome Assembly Process).
+* ``params.xml``
+* ``input.xml``
+
+The module can run either on bas.h5 files alone, or on bas.h5 files together with a FASTQ file. Following are sample XML inputs for both modes.
+
+###Sample input.xml, bas.h5-only input mode###
+
+* **Note:** bas.h5 input files must have the suffix bas.h5.
+
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+   <dataReferences>
+      <url ref="run:0000000-0001">
+         <location>
+            /path/to/input.bas.h5
+         </location>
+      </url>
+   </dataReferences>
+</pacbioAnalysisInputs>
+```
+
+###Sample params.xml, bas.h5-only input mode###
+* This XML parameter file was tested on 90X short reads and 24X long reads.
+
+```
+<module name="P_PreAssembler">
+   <param name="useFastqAsShortReads">
+     <value>False</value>
+   </param>
+   <param name="useFastaAsLongReads">
+     <value>False</value>
+   </param>
+   <param name="useLongReadsInConsensus">
+     <value>False</value>
+   </param>
+   <param name="useUnalignedReadsInConsensus">
+     <value>False</value>
+   </param>
+   <param name="useCCS">
+     <value>False</value>
+   </param>
+   <param name="minLongReadLength">
+     <value>5000</value>
+   </param>
+   <param name="blasrOpts">
+     <value> -minReadLength 200 -maxScore -1000 -bestn 24 -maxLCPLength 16 -nCandidates 24 </value>
+   </param>
+   <param name="consensusOpts">
+     <value> -L </value>
+   </param>
+   <param name="layoutOpts">
+     <value> --overlapTolerance 100 --trimHit 50 </value>
+   </param>
+   <param name="consensusChunks">
+     <value>60</value>
+   </param>
+   <param name="trimFastq">
+     <value>True</value>
+   </param>
+   <param name="trimOpts">
+     <value> --qvCut=59.5 --minSeqLen=500 </value>
+   </param>
+</module>
+```
+
+###Sample input.xml (FASTQ and bas.h5 input mode)###
+
+* These settings were tested on 50X of 100 bp Illumina® reads correcting 15X PacBio long reads.
+
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+   <dataReferences>
+      <url ref="run:0000000-0001">
+         <location>
+            /path/to/input.bas.h5
+         </location>
+      </url>
+      <url ref="fastq:/path/to/input.fastq"/>
+   </dataReferences>
+</pacbioAnalysisInputs>
+```
+
+###Sample params.xml (FASTQ and bas.h5 input mode)###
+
+```
+<?xml version="1.0" ?>
+<smrtpipeSettings>
+  <module name="P_Fetch"/>
+  <module name="P_Filter">
+    <param name="filters">
+       <value>MinRL=1000,MinReadScore=0.80</value>
+    </param>
+    <param name="artifact">
+       <value>-1000</value>
+    </param>
+  </module>
+  <module name="P_PreAssembler">
+    <param name="useFastqAsShortReads">
+       <value>True</value>
+    </param>
+    <param name="useFastaAsLongReads">
+       <value>False</value>
+    </param>
+    <param name="useLongReadsInConsensus">
+       <value>False</value>
+    </param>
+    <param name="useUnalignedReadsInConsensus">
+       <value>False</value>
+    </param>
+    <param name="blasrOpts">
+       <value>-minMatch 8 -minReadLength 30 -maxScore -100 -minPctIdentity 70 -bestn 100</value>
+    </param>
+    <param name="layoutOpts">
+       <value>--overlapTolerance=25</value>
+    </param>
+    <param name="consensusOpts">
+       <value>-w 2</value>
+    </param>
+  </module>
+</smrtpipeSettings>
+```
+
+###Output:###
+
+* ``corrected.fasta``, ``corrected.fastq``: FASTA and FASTQ files of corrected long reads.
+* ``idmap.csv``: A CSV file mapping corrected long read IDs to original read IDs.
+
+## <a name="P_PreDag"></a> P_PreAssemblerDagcon Module
+
+This module provides the primary difference in ``RS_HGAP_Assembly.3``. ``P_PreAssemblerDagcon`` was designed as a drop-in replacement for the correction step in ``RS_HGAP_Assembly.2``, providing the same functionality much faster and more efficiently than the ``P_PreAssembler`` module.  It includes a simple, alignment-based chimera filter that reduces effects caused by missing SMRTbell™ adapters, such as spurious contigs in assemblies.
+
+Note that the quality values in the FASTQ file for the corrected reads are uniformly set to ``QV24``. This value was determined by mapping corrected reads to a known reference and appears to work well on a broad set of data.  We are considering deriving QV values directly from the data for a future release.
+
+As the ``RS_HGAP_Assembly.3`` implementation was completely redesigned and includes much new code, it is labeled as "Beta" for this release.  
+
+###Input:###
+
+* Filtered subreads FASTA file (generated by ``P_Filter``)
+* ``params.xml``
+* ``input.xml``
+
+The module has a much simpler design and can **only** be run using smrtpipe in combination with the
+filtered subreads module. The automatic seed-read cutoff still targets 30x of seed reads.
+
+###Parameters:###
+
+* ``targetChunks``: How many chunks to split the seed reads (target) into. In the example below
+the value is set to ``6``, which generates approximately 5x (30x/6) worth of sequence per split file,
+ or chunk. If set to ``1``, then set ``splitBestn`` to the same value as ``totalBestn``.
+
+* ``splitBestn``: Must be adjusted based on ``targetChunks``. Set it to roughly 1.5 - 2 times the coverage found in a
given split file; values that are too high may produce false positives in some cases, affecting correction, so be
careful.
+
+* ``totalBestn``: Default value = ``24``, based on a total coverage of 30x. The default is sensible
in most cases.
+
+###Sample input.xml, bas.h5-only input mode###
+
+* **Note:** bas.h5 input files must have the suffix bas.h5.
+
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+   <dataReferences>
+      <url ref="run:0000000-0001">
+         <location>
+            /path/to/input.bas.h5
+         </location>
+      </url>
+   </dataReferences>
+</pacbioAnalysisInputs>
+```
+
+###Sample params.xml, bas.h5-only input mode###
+
+```
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<smrtpipeSettings>
+    <module id="P_Filter" >
+        <param name="minLength"><value>100</value></param>
+        <param name="minSubReadLength"><value>500</value></param>
+        <param name="readScore"><value>0.80</value></param>
+    </module>
+    <module id="P_PreAssemblerDagcon">
+        <param name="computeLengthCutoff"><value>true</value></param>
+        <param name="minLongReadLength"><value>6000</value></param>
+        <param name="targetChunks"><value>6</value></param>
+        <param name="splitBestn"><value>11</value></param>
+        <param name="totalBestn"><value>24</value></param>
+        <param name="blasrOpts"><value> -noSplitSubreads -minReadLength 200 -maxScore -1000 -maxLCPLength 16 </value></param>
+    </module>
+</smrtpipeSettings>
+```
+
+###Output:###
+
+* ``data/corrected.fasta``, ``data/corrected.fastq``: FASTA and FASTQ files of corrected long reads.
+* ``preassembler_report.json``: JSON-formatted pre-assembly report.
+* ``preassembler_report.html``: HTML-formatted pre-assembly report.
+
+## <a name="P_Map"></a> P_Mapping (BLASR) Module
+
+This module aligns reads against a reference sequence, possibly a multi-contig reference.
+If the ``P_Filter`` module is run first, then **only** the reads which passed filtering are aligned.
+
+###Output:###
+
+* ``data/aligned_reads.cmp.h5``: The pairwise alignments for each read.
+* ``data/alignment_summary.gff``: Summary information.
+
+###Parameters:###
+
+* ``maxHits``: Default value = ``10``, Attempts to find sub-optimal alignments and report up to this many hits per read. **(Optional)**
+
+* ``minAnchorSize``: Default value = ``12``, Ignores anchors **smaller** than this size when finding candidate hits for dynamic programming alignment. **(Optional)**
+
+* ``maxDivergence``: Default value = ``30``, Specifies maximum divergence between read and reference to allow a mapping. Divergence = (1 - accuracy).
+
+* ``placeRepeatsRandomly``: Default value = ``True``, Specifies that if BLASR maps a read to more than one location with equal probability, it **randomly** selects one of those locations as the best location. If **not** set, BLASR defaults to the first location on the list of matches.
+
+* ``pbalign_opts``: Default value = ``--seed=1 --minAccuracy=0.75 --minLength=50 --concordant --algorithmOptions="-useQuality"``, Specifies default options passed to the underlying ``pbalign`` script.
+
+* ``pbalign_advanced_opts``: Default value = ``Empty string``, Passes advanced options to the underlying ``pbalign.py`` script. **(Optional)**  **Note:** This option is now exposed in SMRT Portal to give advanced users more freedom to pass non-standard parameters to the underlying ``pbalign`` script. However, this option must be used with care.
+
+  * ``--useccs``: Default value = ``None``, A parameter sent to the underlying ``pbalign.py`` script via the ``pbalign_opts`` or ``pbalign_advanced_opts`` parameters. Values are ``{useccsdenovo|useccs|useccsall}``. **(Optional)**
+
+     * ``useccsdenovo``: Maps just the _de novo_ called sequence and report. (Does **not** include quality values.)
+
+     * ``useccs``: Maps the _de novo_ called sequence, then aligns full passes to the sequence that the _de novo_ called sequence aligns to.
+
+     * ``useccsall``: Maps the _de novo_ called sequence, then aligns all passes (even ones that do not span the length of the template) to the sequence the _de novo_ called sequence aligned to.
+
+* ``sambam``: Default value = ``False``, Specifies whether to output a BAM representation of the cmp.h5 file. **(Optional)**
+
+* ``gff2Bed``: Default value = ``False``, Specifies whether to output a BED representation of the depth of coverage summary. **(Optional)**
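+
+As a sketch, a ``P_Mapping`` block in ``settings.xml`` could set a few of these parameters explicitly; the values shown simply restate the documented defaults:
+
+```
+<module name="P_Mapping">
+  <param name="maxHits">
+    <value>10</value>
+  </param>
+  <param name="minAnchorSize">
+    <value>12</value>
+  </param>
+  <param name="maxDivergence">
+    <value>30</value>
+  </param>
+</module>
+```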
+
+## <a name="P_ISO_CLASS"></a> P_IsoSeqClassify Module
+
+This module is used for cDNA analysis, including cDNA reads quality control and mapping to a provided reference.
+
+1. The module generates reads of insert from SMRT Cell cDNA molecules, removes cDNA primers and poly(A) sequences from reads, and then classifies reads of insert into full-length or non-full-length, chimeric or non-chimeric reads.
+2. Finally, the module maps classified reads and predicted consensus isoforms to a provided reference sequence.
+
+###Input:###
+
+* ``input.fofn``:  A file containing the file names of PacBio movies produced by the ``P_Fetch`` module.
+
+* ``reads_of_insert.fasta``: A FASTA file containing reads of insert produced by the ``P_CCS`` module.
+
+* ``reads_of_insert.fofn``:  A file containing the file names of reads of insert ccs.h5 files produced by the ``P_CCS`` module.
+
+###Output:###
+
+* ``isoseq_draft.fasta``: A FASTA file containing all classified reads of insert.
+
+* ``isoseq_flnc.fasta``: A FASTA file containing full-length non-chimeric reads generated by ``pbtranscript.py classify`` in the ``P_IsoSeqClassify`` module.
+
+* ``isoseq_nfl.fasta``: A FASTA file containing non-full-length reads generated by ``pbtranscript.py classify`` in the ``P_IsoSeqClassify`` module.
+
+* ``isoseq_primer_info.csv``: A CSV file describing classified reads of insert, including the read ID, the strand, whether the 5’ primer is seen, whether the polyA tail is seen, whether the 3’ primer is seen, the positions of the 5’ primer, polyA tail, and 3’ primer, the ID of the primer seen in this read, and whether the read is chimeric.
+
+* ``classify_summary.txt``: A text file summarizing ``P_IsoSeqClassify`` results.
+
+###Parameters:###
+
+* ``minSeqLen``: Default value = ``300``, Minimum length of reads of insert to analyze.
+
+* ``customizedPrimerFa``: Default value = ``Empty string``. A FASTA file containing customized primers, which will be used to detect primers in reads of insert. **(Optional)**
+
+* ``ignorePolyA``: Default value = ``False``, Specifies whether or not full-length reads of insert require polyA tails.
+
+* ``gmap_n``: Default value = ``0``,  The maximum number of paths to show per isoform (that is, the GMAP ``--npaths`` option). If set to ``0``, GMAP will output **two** paths if chimeras are detected; **one** path if chimeras are not detected. 
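+
+A ``settings.xml`` sketch for this module, assuming the parameter names above map directly to ``<param>`` entries (the values restate the documented defaults):
+
+```
+<module name="P_IsoSeqClassify">
+  <param name="minSeqLen">
+    <value>300</value>
+  </param>
+  <param name="ignorePolyA">
+    <value>False</value>
+  </param>
+  <param name="gmap_n">
+    <value>0</value>
+  </param>
+</module>
+```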
+
+## <a name="P_ISO_CLUS"></a> P_IsoSeqCluster Module
+
+This module is used for cDNA analysis, including _de novo_ consensus isoform prediction and polishing.
+
+1. The module **optionally** predicts _de novo_ consensus isoforms from classified reads of insert using the ICE (Iterative Clustering and Error Correction) algorithm.
+
+2. The module **optionally** polishes predicted consensus isoforms using Quiver and classifies the polished isoforms into high-QV or low-QV isoforms based on user-specified criteria.
+
+###Input:###
+
+* ``input.fofn``:  A file containing the file names of PacBio movies produced by the ``P_Fetch`` module.
+
+* ``reads_of_insert.fofn``:  A file containing the file names of reads of insert ccs.h5 files produced by the ``P_CCS`` module.
+
+* ``isoseq_flnc.fasta``: A FASTA file containing full-length non-chimeric reads generated by ``pbtranscript.py classify`` in the ``P_IsoSeqClassify`` module.
+
+* ``isoseq_nfl.fasta``: A FASTA file containing non-full-length reads generated by ``pbtranscript.py classify`` in the ``P_IsoSeqClassify`` module.
+
+###Output:###
+
+* ``consensus_isoforms.fasta``: A FASTA file containing predicted consensus isoforms generated by the ``P_IsoSeqCluster`` module. These isoforms are **not** quiver-polished.
+
+* ``polished_high_qv_consensus_isoforms.fasta|q``: A FASTA/FASTQ file containing polished high-QV consensus isoforms generated by the ``P_IsoSeqCluster`` module. Produced **only** if the ‘Call quiver to polish consensus isoforms’ option is specified.
+
+* ``polished_low_qv_consensus_isoforms.fasta|q``: A FASTA/FASTQ file containing polished low-QV consensus isoforms generated by the ``P_IsoSeqCluster`` module. Produced **only** if the ‘Call quiver to polish consensus isoforms’ option is specified.
+
+* ``isoseq_cluster_info.csv``: A CSV file containing information on predicted isoforms. Each line includes: the cluster ID, the ID of a read which supports this cluster, and whether this supporting read is full-length or non-full-length.
+
+* ``cluster_summary.txt``: A text file summarizing ``P_IsoSeqCluster`` results.
+
+* ``aligned_consensus_isoforms.sam|bam|cmp.h5``: A SAM|BAM|cmp.h5 file containing alignments of predicted consensus isoforms to the reference sequence.
+
+###Parameters:###
+
+* ``cluster``: Default value = ``False``, Specifies whether or not to predict _de novo_ consensus isoforms using the ICE (Iterative Clustering and Error Correction) algorithm.
+
+* ``cDNASize``: Default value = ``under1k``, Specifies the estimated cDNA size. Values are ``{"under1k"|"between1k2k"|"between2k3k"|"above3k"}``
+
+* ``quiver``: Default value = ``False``, Specifies whether or not to call Quiver to polish consensus isoforms.
+
+* ``hq_quiver_min_accuracy``: Default value = ``0.99``, Specifies the minimum Quiver accuracy needed to classify a polished isoform as high-QV.
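+
+A ``settings.xml`` sketch enabling clustering and Quiver polishing, again assuming the parameter names above map directly to ``<param>`` entries:
+
+```
+<module name="P_IsoSeqCluster">
+  <param name="cluster">
+    <value>True</value>
+  </param>
+  <param name="cDNASize">
+    <value>under1k</value>
+  </param>
+  <param name="quiver">
+    <value>True</value>
+  </param>
+  <param name="hq_quiver_min_accuracy">
+    <value>0.99</value>
+  </param>
+</module>
+```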
+
+## <a name="P_Quiver"></a> P_GenomicConsensus (Quiver) Module
+
+This module takes the alignments generated by the ``P_Mapping`` module and calls the consensus sequence across the reads.
+
+###Input:###
+
+* ``data/aligned_reads.cmp.h5``: The pairwise alignments for each read.
+
+* ``data/alignment_summary.gff``: Summary information.
+
+###Output:###
+
+* ``data/aligned_reads.cmp.h5``
+
+* ``data/variants.gff.gz``: A gzipped GFF3 file containing variants versus the reference.
+
+* ``data/consensus.fastq.gz``: The consensus sequence in FASTQ format.
+
+* ``data/alignment_summary.gff, data/variants.vc``: Useful information about variants.
+
+###Parameters:###
+
+* ``makeBed``: Default value = ``True``, Specifies whether to output a BED representation of the variants. **(Optional)**
+
+* ``makeVcf``: Default value = ``True``, Specifies whether to output a VCF representation of the variants. **(Optional)**
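+
+For example, a ``P_GenomicConsensus`` block in ``settings.xml`` could turn off the extra variant representations; this sketch inverts the defaults purely for illustration:
+
+```
+<module name="P_GenomicConsensus">
+  <param name="makeBed">
+    <value>False</value>
+  </param>
+  <param name="makeVcf">
+    <value>False</value>
+  </param>
+</module>
+```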
+
+## <a name="P_Unitig"></a> P_AssembleUnitig
+
+This module is new to HGAP.3. It calls ``P_CeleraAssembler configure`` to assemble corrected reads into unitigs, truncating the traditional ``P_CeleraAssembler`` workflow after the unitigger stage.  This avoids the time-consuming unitig consensus stage (CA/utgcns) built into ``P_CeleraAssembler`` in favor of our own, much faster, unitig consensus caller, [PB/utgcns](https://github.com/pbjd/pbutgcns).
+
+###Input:###
+
+* ``corrected.fastq``: FASTQ file of corrected long seed reads generated by ``pbdagcon`` during the pre-assembler stage.
+
+###Output:###
+
+* ``draft_consensus.fasta``: A decent first cut of the assembly (typically ~QV30). Contains both contigs and degenerates.
+
+###Parameters:###
+
+* ``Genome Size``: Approximate size of the sample genome.
+* ``Target Coverage``: How much coverage to allow into the assembly.
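+
+A sketch of how these might appear in ``settings.xml``. **Note:** ``genomeSize`` and ``targetCoverage`` are hypothetical stand-ins for the UI labels above; check the module's ``validateSettings`` method for the actual keys:
+
+```
+<module name="P_AssembleUnitig">
+  <!-- Hypothetical parameter names standing in for the UI labels; verify against validateSettings. -->
+  <param name="genomeSize">
+    <value>5000000</value>
+  </param>
+  <param name="targetCoverage">
+    <value>25</value>
+  </param>
+</module>
+```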
+
+## <a name="P_Polish"></a> P_AssemblyPolishing
+
+This module is used in HGAP to polish draft assemblies using Quiver. 
+
+###Input:###
+
+* ``data/aligned_reads.cmp.h5``: The pairwise alignments for each read against the draft assembly.
+
+* ``data/alignment_summary.gff``: Summary information.
+
+###Output:###
+
+* ``data/polished_assembly.fasta.gz``: The consensus sequence in FASTA format.
+
+* ``data/polished_assembly.fastq.gz``: The consensus sequence in FASTQ format.
+
+* ``results/polished_report.html``: HTML-formatted report for the polished assembly.
+
+* ``results/polished_report.xml``: XML-formatted report for the polished assembly.
+
+###Parameters:###
+
+* ``enableMapQVFilter`` Default value = ``True``, Specifies whether to filter out alignments with low mapping quality (MapQV) before polishing. **(Optional)**
+
+
+## <a name="P_Hook"></a> P_AnalysisHook Module
+
+This module allows you to call executable code as part of a SMRT Pipe analysis. ``P_AnalysisHook`` can be called multiple times in a settings XML file, allowing for an arbitrary number of calls to external (non-SMRT Pipe) code.
+
+###Parameters:###
+
+* ``scriptDir``: Default value = ``None``, All executables in this directory are called serially with the command line ``exeCmd jobDir``, where ``jobDir`` is the root of the SMRT Pipe output for this analysis. **(Optional)**
+
+* ``script``: Default value = ``None``, Path to an executable called with the command line ``exeCmd jobDir``, where ``jobDir`` is the root of the SMRT Pipe output for this analysis. **(Optional)**
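+
+As a minimal sketch, a hook that invokes a single user-supplied executable (``/path/to/my_hook.sh`` is a hypothetical script):
+
+```
+<module name="P_AnalysisHook">
+  <!-- /path/to/my_hook.sh is a hypothetical user script; it is invoked as "my_hook.sh jobDir". -->
+  <param name="script">
+    <value>/path/to/my_hook.sh</value>
+  </param>
+</module>
+```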
+
+## <a name="P_AHA"></a> P_AHA (AHA Scaffolding) Module
+
+This module scaffolds high-confidence contigs, such as those from Illumina® data, using Pacific Biosciences’ long reads.
+
+###Input:###
+
+``P_AHA.py`` uses two kinds of input:
+
+* A FASTA file of high-confidence sequences to be scaffolded. These are typically contigs assembled from Illumina® short-read sequence data. They are passed to AHA as a reference sequence in the ``settings.xml`` input file.
+
+* Pacific Biosciences’ long reads, in HDF5 format. These are used to join the high-confidence contigs into a scaffold. Note that versions of the AHA Scaffolding algorithm prior to v2.1 accepted reads in FASTA format. After v2.1, users with FASTA formatted reads should use the underlying executable ``pbaha.py``.
+
+###Sample settings.xml file for long reads, with only customer-facing parameters:###
+
+```
+<?xml version="1.0"?>
+<smrtpipeSettings>
+  <global>
+    <param name="reference">
+        <value>/mnt/secondary-siv/references/ecoli_contig</value>
+    </param>
+  </global>
+  <module name="P_Fetch"/>
+  <module name="P_Filter">
+    <param name="minLength">
+        <value>50</value>
+    </param>
+    <param name="minSubReadLength">
+        <value>50</value>
+    </param>
+    <param name="readScore">
+        <value>0.75</value>
+    </param>
+  </module>
+  <module name="P_FilterReports"/>
+  <module name="P_AHA">
+    <param name="fillin">
+        <value>False</value>
+    </param>
+    <param name="blasrOpts">
+        <value>-minMatch 10 -minPctIdentity 70 -bestn 10 -noSplitSubreads</value>
+    </param>
+    <param name="instrumentModel">
+        <value>RS</value>
+    </param>
+    <param name="paramSchedule">
+        <value>6,3,75,100;6,3,75,100;5,3,75,100;6,2,75,100;6,2,75,100;5,2,75,100</value>
+    </param>
+    <param name="maxIterations">
+        <value>6</value>
+    </param>
+    <param name="description">
+        <value>AHA ("A Hybrid Assembler") is the PacBio hybrid assembly algorithm. It is based on the open source assembly software package AMOS, with additional software components tailored to PacBio's long reads and error profile.</value>
+    </param>
+  </module>
+</smrtpipeSettings>
+```
+
+###Output:###
+
+* ``data/scaffold.gml``: A GraphML file that contains the final scaffold. This file can be readily parsed in the Python programming language using the ``networkx`` package.
+
+* ``data/scaffold.fasta``: A FASTA file with a single entry for each scaffold.
+
+
+###Parameters:###
+
+* ``paramSchedule``: Default value = ``None``  Specifies parameter schedules used for iterative hybrid assembly. Schedules are comma-delimited tuples, separated by semicolons. **Example:** ``6,3,75,100;6,3,75,100;6,2,75,100;6,2,75,100``. The fields, in order, are:
+
+  * Minimum alignment score. Higher is more stringent.
+  * Minimum number of reads needed to link two contigs. (Redundancy)
+  * Minimum subread length to participate in alignment.
+  * Minimum contig length to participate in alignment.
+
+* ``fillin``: Default value = ``False``  Specifies whether to use long reads.
+
+* ``blasrOpts``: Default value = ``-minMatch 10 -minPctIdentity 60 -bestn 10 -noSplitSubreads``  Options passed directly to BLASR for aligning reads to contigs.
+
+* ``maxIterations``: Default value = ``6``  Specifies the maximum number of iterations to use from ``paramSchedule``. 
+  * If ``paramSchedule`` is **larger** than ``maxIterations``, it will be truncated at ``maxIterations``. 
+  * If ``paramSchedule`` is **smaller** than ``maxIterations``, the last iteration of ``paramSchedule`` is repeated.
+
+* ``cleanup``: Default value = ``True``  Specifies whether to clean up intermediate files. This can be useful for debugging purposes.
+
+* ``runNucmer``: Default value = ``True``  Specifies whether to use ``Nucmer`` to detect repeat locations. This can improve assemblies, but can be very slow on large highly repetitive genomes.
+
+* ``gapFillOpts``: Default value = ``“”``  Options to be passed directly to ``gapFiller.py``.
+
+* ``noScaffoldImages``: Default value = ``True``  Specifies that SVG files of the scaffolds are **not** produced. Creating these files can be expensive for large assemblies, but is recommended for small assemblies.
+
+
+To run ``P_AHA.py``, enter the following:
+
+```
+smrtpipe.py --params=settings.xml xml:input.xml >& smrtpipe.err
+```
+
+###Known Issues###
+
+* Depending on the repetitive content of the high-confidence input contigs, a large fraction of the sequence in the contigs can be called repeats. To avoid this, turn off the split repeats step by setting the minimum repeat identity to a number greater than 100, for example:
+```
+<minRepeatIdentity>1000</minRepeatIdentity>
+```
+
+## <a name="P_MOD"></a> P_ModificationDetection Module
+
+This module uses the cmp.h5 output by the ``P_Mapping`` module to:
+
+1. Compare observed IPDs in the cmp.h5 file at each reference position on each strand with control IPDs. Control IPDs are supplied by either an in-silico computational model, or observed IPDs from unmodified “control” DNA.
+
+2.  Generate ``modifications.csv`` and ``modifications.gff`` reporting statistics on the IPD comparison.
+
+###Predicted Kinetic Background Control vs Case-Control Analysis###
+
+By default, the control IPDs are generated per-base of the reference with an in-silico model of the expected IPD values for each position, based on sequence context. The computational model is called the **Predicted IPD Background Control**. Even in normal unmodified DNA, the IPD at any particular point will vary. Internal studies at Pacific Biosciences show that most of the variation in mean IPD across a genome can be predicted from a 12-base sequence context surrounding the active site [...]
+
+###Filtering and Trimming###
+
+Some PacBio data features require special attention for good modification detection performance. The module inspects the alignment between the observed bases and the reference sequence. For an IPD measurement to be included in the analysis, the read sequence must match the reference sequence for K bases around the cognate base; currently, K = 1. The IPD distribution at some locus can be seen as a mixture of the “normal” incorporation process IPD (sensitive to the local sequence context and DNA [...]
+
+**Pauses** are defined as pulses with an IPD >10x longer than the mean IPD at that context. Heuristics are used to filter out the pauses.
+
+###Statistical Testing###
+
+The module tests the hypothesis that IPDs observed at a particular locus in the sample have longer means than IPDs observed at the same locus in unmodified DNA. If a Whole-Genome-Amplified dataset is generated, which removes DNA modifications, the module uses a case-control, two-sample t-test.
+
+The module also provides a pre-calibrated **Predicted Kinetic Background Control** model which predicts the unmodified IPD, given a 12-base sequence context. In that case, the module uses a one-sample t-test, with an adjustment to account for error in the control model.
+
+###Input:###
+
+* ``aligned_reads.cmp.h5``: A standard cmp.h5 file with alignments and IPD information that supplies the kinetic data for modification detection.
+
+* ``Reference Sequence``: The path to a SMRT Portal reference repository entry for the reference sequence used to perform alignments.
+
+###Output:###
+
+* ``modifications.csv``: Contains one row for each (reference position, strand) pair that appeared in the dataset with coverage of at least x. (x defaults to 3, but is configurable using the ``ipdSummary.py --minCoverage`` flag.) The reference position index is 1-based for compatibility with the GFF file in the R environment.
+
+* ``modifications.gff``: Each template position/strand pair whose p-value exceeds the p-value threshold displays as a row. (The default threshold is ``p=0.01`` or ``score=20``.) The file is compliant with the GFF version 3 specification, and the template position is 1-based, per the GFF specification. The strand column refers to the strand carrying the detected modification, which is the **opposite** strand from those used to detect the modification.
+
+The auxiliary data column of the GFF file contains other statistics useful for downstream analysis or filtering. This includes the coverage level of the reads used to make the call, and +/- 20 bp sequence context surrounding the site.
+
+Results are generally indexed by reference position and reference strand. In all cases, the strand value refers to the strand carrying the modification in the DNA sample. The kinetic effect of the modification is observed in read sequences aligning to the opposite strand, so reads aligning to the positive strand carry information about modification on the negative strand and vice versa. The module **always** reports the strand containing the putative modification.
+
+###Parameters:###
+
+* ``identifyModifications``: Default value = ``False``, Specifies whether to use a multi-site model to identify the modification type.
+
+* ``tetTreated``: Default value = ``False``, Specifies whether the sample was TET-treated to amplify the signal of 5-mC modifications.
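+
+A ``settings.xml`` sketch enabling modification identification for a TET-treated sample, assuming the parameter names above map directly to ``<param>`` entries:
+
+```
+<module name="P_ModificationDetection">
+  <param name="identifyModifications">
+    <value>True</value>
+  </param>
+  <param name="tetTreated">
+    <value>True</value>
+  </param>
+</module>
+```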
+
+## <a name="P_Motif"></a> P_MotifFinder (Motif Analysis) Module
+
+This module finds sequence motifs containing base modifications. The primary application is finding restriction-modification systems in prokaryotic genomes. ``P_MotifFinder`` analyzes the output of the ``P_ModificationDetection`` module.
+
+###Input:###
+
+* ``modifications.csv``: Contains one row for each (reference position, strand) pair that appeared in the dataset with coverage of at least x.
+
+* ``modifications.gff``: Each template position/strand pair whose p-value exceeds the p-value threshold displays as a row.
+
+###Output:###
+
+* ``data/motif_summary.csv``: A summary of the detected motifs, as well as the evidence for motifs.
+
+* ``data/motifs.gff``: A reprocessed version of ``modifications.gff`` (from ``P_ModificationDetection``) containing motif annotations.
+
+###Parameters:###
+
+* ``minScore`` Default value = ``35`` Only consider detected modifications with a Modification QV **above** this threshold.
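+
+A minimal ``settings.xml`` sketch raising this threshold above the default; the value ``40`` is illustrative only:
+
+```
+<module name="P_MotifFinder">
+  <param name="minScore">
+    <value>40</value>
+  </param>
+</module>
+```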
+
+## <a name="P_BAR"></a> P_Barcode Module
+
+This module provides access to the ``pbbarcode`` command-line tools, which you use to identify barcodes in PacBio reads.
+
+###Input:###
+
+* Complete barcode FASTA file: A standard FASTA file with barcodes less than 48 bp in length. Based on the score mode you specify, the barcode file might need to contain an even number of barcodes. **Example:**
+
+  ```
+<param name="barcode.fasta">
+  <value>/mnt/secondary/Smrtpipe/martin/prod/data/workflows/barcode_complete.fasta</value>
+</param>
+  ```
+
+* Barcode scoring method: This directly relates to the particular sample preparation used to construct the molecules. Depending on the scoring mode, the barcodes are grouped together in different ways. Valid options are:
+
+  *  ``symmetric``: Supports barcode designs with two identical barcodes on both sides of a SMRTbell™ template. Example: For barcodes (A, B), molecules are labeled as A--A or B--B.
+
+  * ``paired``: Supports barcode designs with two distinct barcodes on each side of the molecule, with neither barcode appearing without its mate. Minimum example: (ALeft, ARight, BLeft, BRight), where the following barcode sets are checked: ALeft--ARight, BLeft--BRight. **Example:** 
+
+  ```
+<param name="mode">
+  <value>symmetric</value>
+</param>
+  ```
+
+* Pad arguments: Defines how many bases to include from the adapter, and how many bases to include from the insert. Ideally, this is ``0`` and ``0``. This produces shorter alignments; however, if the adapter-calling algorithm slips slightly, you might lose some sensitivity and/or specificity. Do **not** set these unless you have a compelling use case. **Examples:**
+
+    ```
+<param name="adapterSidePad">
+   <value>2</value>
+</param>
+<param name="insertSidePad">
+   <value>2</value>
+</param>
+    ```
+
+###Output:###
+
+* ``/data/*.bc.h5``: Barcode calls and their scores for each ZMW.
+
+* ``/data/barcode.fofn``: A file containing the names of the barcode call (bc.h5) files.
+
+* Other files are output based on the protocol used to call the ``P_Barcode`` module. Example:
+``/data/aligned_reads.cmp.h5``, returned by the RS_Resequencing_Barcode protocol.
+
+
+
+
+## <a name="P_AMP"></a> P_LongAmpliconAnalysis Module
+
+This module finds _de novo_ phased consensus sequences from a pooled set of (possibly diploid) amplicons.
+
+###Input:###
+
+* bas.h5 files
+
+
+###Output:###
+
+* ``data/amplicon_analysis.fasta/q``:  A FASTA/FASTQ file containing the high-quality, non-chimeric sequences found.
+
+* ``data/amplicon_analysis_chimeras_noise.fasta/q``:  A FASTA/FASTQ file containing the low-quality, chimeric sequences found.
+
+* ``data/amplicon_analysis_summary.csv``:  A .csv file containing summary information about each read.
+
+* ``data/amplicon_analysis.csv``:  A .csv file containing coverage and QV information at the per-base level.
+
+###Parameters:###
+
+* ``minLength`` Default value = ``1000``  Only use subreads longer than this threshold. Should be set to ~75% of the shortest amplicon length.
+
+* ``minReadScore`` Default value = ``0.78``  Only use reads with a ReadScore higher than this value.
+
+* ``maxReads`` Default value = ``2000``  Use at most this number of reads to find results. Values greater than 10000 may cause long run times.
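+
+A ``settings.xml`` sketch restating the documented defaults, assuming the parameter names above map directly to ``<param>`` entries:
+
+```
+<module name="P_LongAmpliconAnalysis">
+  <param name="minLength">
+    <value>1000</value>
+  </param>
+  <param name="minReadScore">
+    <value>0.78</value>
+  </param>
+  <param name="maxReads">
+    <value>2000</value>
+  </param>
+</module>
+```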
+
+
+
+## <a name="P_CCS"></a> P_CCS (Reads of Insert) Module
+
+This module computes Read of Insert/CCS sequences from single-molecule reads. It is used to estimate the length of the insert sequence loaded onto a SMRT Cell. Reads of Insert **replaces** the Circular Consensus Sequencing (CCS) protocol, which has been moved off the primary analysis instrument. 
+
+###Input:###
+
+* bas.h5 files
+
+
+###Output:###
+
+* ``data/<movie_name>.fasta``:  A FASTA file containing the consensus sequences of each molecule passing quality filtering.
+
+* ``data/<movie_name>.fastq``:  A FASTQ file containing the consensus sequences and base quality of each molecule passing quality filtering.
+
+* ``data/<movie_name>.ccs.h5``:  A ccs.h5 (similar to a bas.h5) file containing a representation of the CCS sequences and quality values.
+
+###Parameters:###
+
+**Note**: Use the default values to obtain the closest approximation of CCS as it existed on-instrument.
+
+* ``minFullPasses`` Default value = ``2``  The raw sequence must make at least this number of passes over the insert sequence to emit a CCS read for this ZMW.
+
+* ``minPredictedAccuracy`` Default value = ``0.9``  The minimum allowed value of the predicted consensus accuracy to emit a CCS read for this ZMW.
+
+* ``minLength`` Default value = ``None``  The minimum length of CCS reads in bases.  **(Optional)**
+
+* ``maxLength`` Default value = ``None``  The maximum length of CCS reads in bases.  **(Optional)**
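+
+As a sketch, the on-instrument CCS approximation described above corresponds to the following ``settings.xml`` block (the values restate the documented defaults):
+
+```
+<module name="P_CCS">
+  <param name="minFullPasses">
+    <value>2</value>
+  </param>
+  <param name="minPredictedAccuracy">
+    <value>0.9</value>
+  </param>
+</module>
+```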
+
+## <a name="P_Bridge"></a> P_BridgeMapper Module
+
+This module creates split alignments of Pacific Biosciences' reads for viewing with SMRT View. The split alignments can be used to infer the presence of assembly errors or structural variation. ``P_BridgeMapper`` works by first using BLASR to get primary alignments for filtered subreads. Then, ``P_BridgeMapper`` calls BLASR again, mapping any portions of those subreads not contained in the primary alignments.
+
+###Input:###
+
+* ``input.fofn``: A file containing the file names of the raw input files used for the analysis.
+* ``data/aligned_reads.cmp.h5``: The initial alignments for each subread.
+
+###Output:###
+
+* ``data/split_reads.bridgemapper.gz``: A gzipped, tab-separated file of split alignments. This file is consumed by SMRT View. 
+
+**Note:** The meanings of some of the columns in this file have changed:
+  * The columns for BLASR scores now contain placeholder values. 
+  * The columns for the starts and ends of alignments now follow the convention used in cmp.h5 files: Start is **always** less than end, regardless of the orientation of the alignment.
+
+###Parameters:###
+
+* ``minRootLength`` Default value = ``250``  Only consider subreads with primary alignments longer than this threshold.
+
+* ``minAffixLength`` Default value = ``50``  Only report split alignments with secondary alignments longer than this threshold.
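+
+A ``settings.xml`` sketch restating the documented defaults for this module, assuming the parameter names above map directly to ``<param>`` entries:
+
+```
+<module name="P_BridgeMapper">
+  <param name="minRootLength">
+    <value>250</value>
+  </param>
+  <param name="minAffixLength">
+    <value>50</value>
+  </param>
+</module>
+```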
+
+## <a name="Tools"></a> SMRT Pipe Tools
+
+**Tools** are programs that run as part of SMRT Pipe. A module, such as ``P_Mapping``, can call several tools (such as the mapping tools ``summarizeCoverage.py`` or ``compareSequences.py``) to actually perform the underlying processing. 
+
+All the tools are located at ``$SEYMOUR_HOME/analysis/bin``.
+
+Use the ``--help`` option to see usage information for each tool. (Some tools are undocumented.)
+
+
+## <a name="Build_SPTools"></a> Building the SMRT Pipe tools manually, without SMRT Portal, SMRT View, or Kodos
+
+
+It is currently **not** possible to build the SMRT Pipe tools without SMRT Portal, SMRT View, or Kodos.
+
+
+## <a name="Files"></a> SMRT Pipe File Structure
+
+**Note**: The output of a SMRT Pipe analysis includes more files than described here; interested users should explore the file structure. Following are details about the major files.
+
+```
+ <jobID>/job.sh
+```
+* Contains the SMRT Pipe command line call for the job.
+
+```
+<jobID>/settings.xml
+```
+* Contains the modules (and their associated parameters) to be run as part of the SMRT Pipe run. 
+
+```
+<jobID>/metadata.rdf
+```
+* Contains all important metadata associated with the job. This includes metadata propagated from primary results, links to all reports and data files exposed to users, and high-level summary metrics computed during the job. The file is an entry point to the job by tools such as SMRT Portal and SMRT View. ``metadata.rdf`` is formatted as an RDF-XML file using OWL ontologies. See http://www.w3.org/standards/semanticweb/ for an introduction to Semantic Web technologies.
+
+```
+<jobID>/input.fofn
+```
+* This file (“file of file names”) is generated early during a job and contains the file names of the raw input files used for the analysis.
+
+```
+<jobID>/input.xml
+```
+* Used to specify the input files to be analyzed in a job, and is passed on to the command line.
+
+```
+log/smrtpipe.log
+```
+* Contains debugging output from SMRT Pipe modules. This is typically shown by way of the **View Log** button in SMRT Portal.
+
+### Data Files ###
+
+The ``Data`` directory is where most raw files generated by the pipeline are stored. (**Note**: The following are example output files - for more details about specific files, see the sections dealing with individual modules.)
+
+```
+aligned_reads.cmp.h5, aligned_reads.sam, aligned_reads.bam
+```
+* Mapping and consensus data from secondary analysis.
+
+```
+alignment_summary.gff
+```
+* Alignment data summarized on sequence regions.
+
+```
+variants.gff.gz
+```
+* All sequence variants called from consensus sequence.
+
+```
+toc.xml
+```
+* **Deprecated** - The master index information for the job outputs is now included in the ``metadata.rdf`` file.
+
+### Results/Reports Files ###
+
+Modules with **Reports** in their name produce HTML reports with static PNG images using XML+XSLT. These reports are located in the ``results`` subdirectory. The underlying XML document for each report is also preserved there; these can be useful files for data-mining the outputs of SMRT Pipe.
+
+
+## <a name="RefRep"></a> The Reference Repository
+
+The **reference repository** is a file-based data store used by SMRT Analysis to manage reference sequences and associated information. The full description of all of the attributes of the reference repository is beyond the scope of this document, but you need to use some basic aspects of the reference repository in most SMRT Pipe analyses. 
+
+**Example**: Analysis of multi-contig references can **only** be handled by supplying a reference entry from a reference repository.
+
+It is simple to create and use a reference repository:
+
+* A reference repository can be **any** directory on your system. You can have as many reference repositories as you wish; the input to SMRT Pipe is a fully resolved path to a reference entry, so this can live in any accessible reference repository.
+
+Starting with the FASTA sequence ``genome.fasta``, you upload the sequence to your reference repository using the following command:
+```
+referenceUploader -c -p/path/to/repository -nGenomeName -fgenome.fasta
+```
+
+where:
+
+* ``/path/to/repository`` is the path to your reference repository.
+* ``GenomeName`` is the name to use for the reference entry that will be created.
+* ``genome.fasta`` is the FASTA file containing the reference sequence to upload.
+
+**Notes on FASTA files to be used in the reference repository:**
+
+* The FASTA header should **not** be processed in any way when imported into the
+reference repository. 
+* The FASTA header **cannot** contain any tab characters, colons, double quotes, or additional ``>`` characters beyond the standard demarcation of the start of the header. 
+* Within a multi-sequence FASTA file, each header must be **unique**.
+
+
+For a large genome, we highly recommend that you produce the BLASR suffix array during this upload step. Use the following command:
+```
+referenceUploader -c -p/path/to/repository -nHumanGenome -fhuman.fasta --saw='sawriter -welter'
+```
+
+There are many more options for reference management. Consult the man page entry for ``referenceUploader`` by entering ``referenceUploader -h``.
+
+To learn more about what is being stored in the reference entries, look at the directory containing a reference entry. You will find a metadata description (``reference.info.xml``) of the reference and its associated files. For example, various static indices for BLASR and SMRT View are stored in the sequence directory along with the FASTA sequence.
+
+
+***
+
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2014, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
+**P/N 001-353-082-06**
\ No newline at end of file
diff --git a/docs/SMRT-Pipe-file-structure.md b/docs/SMRT-Pipe-file-structure.md
new file mode 100644
index 0000000..7868ade
--- /dev/null
+++ b/docs/SMRT-Pipe-file-structure.md
@@ -0,0 +1,64 @@
+**Note**: The output of a SMRT Pipe analysis includes more files than described here; interested users should explore the file structure. Following are details about the major files.
+
+```
+ <jobID>/job.sh
+```
+* Contains the SMRT Pipe command line call for the job.
+
+```
+<jobID>/settings.xml
+```
+* Contains the modules (and their associated parameters) to be run as part of the SMRT Pipe run. 
+
+```
+<jobID>/metadata.rdf
+```
+* Contains all important metadata associated with the job. This includes metadata propagated from primary results, links to all reports and data files exposed to users, and high-level summary metrics computed during the job. The file is an entry point to the job by tools such as SMRT Portal and SMRT View. ``metadata.rdf`` is formatted as an RDF-XML file using OWL ontologies. See http://www.w3.org/standards/semanticweb/ for an introduction to Semantic Web technologies.
+
+```
+<jobID>/input.fofn
+```
+* This file (“file of file names”) is generated early during a job and contains the file names of the raw input files used for the analysis.
+
+```
+<jobID>/input.xml
+```
+* Used to specify the input files to be analyzed in a job, and is passed on to the command line.
+
+```
+<jobID>/vis.jnlp
+```
+* **Deprecated** - no longer generated in v1.4.0. To visualize data, install SMRT View and choose **File > Open Data from Server**.
+
+```
+log/smrtpipe.log
+```
+* Contains debugging output from SMRT Pipe modules. This is typically shown by way of the **View Log** button in SMRT Portal.
+
+## Data Files ##
+
+The ``Data`` directory is where most raw files generated by the pipeline are stored. (**Note**: The following are example output files - for more details about specific files, see the sections dealing with individual modules.)
+
+```
+aligned_reads.cmp.h5, aligned_reads.sam, aligned_reads.bam
+```
+* Mapping and consensus data from secondary analysis.
+
+```
+alignment_summary.gff
+```
+* Alignment data summarized on sequence regions.
+
+```
+variants.gff.gz
+```
+* All sequence variants called from consensus sequence.
+
+```
+toc.xml
+```
+* **Deprecated** - The master index information for the job outputs is now included in the ``metadata.rdf`` file.
+
+## Results/Reports Files ##
+
+Modules with **Reports** in their name produce HTML reports with static PNG images using XML+XSLT. These reports are located in the ``results`` subdirectory. The underlying XML document for each report is preserved there as well; these can be useful files for data-mining the outputs of SMRT Pipe.
\ No newline at end of file
diff --git a/docs/SMRT-Pipe-modules-and-their-parameters.md b/docs/SMRT-Pipe-modules-and-their-parameters.md
new file mode 100644
index 0000000..8c9ce41
--- /dev/null
+++ b/docs/SMRT-Pipe-modules-and-their-parameters.md
@@ -0,0 +1,48 @@
+Following is an overview of some of the common modules included in SMRT Pipe and their parameters. Not all modules or parameters are listed here. 
+
+Developers interested in even finer control should look inside the ``validateSettings`` method for each Python analysis module. By convention, **all** of the settings known to the analysis module are referenced in this method.
+
+## Global Parameters ##
+
+Global parameters are potentially used in multiple modules. In the SMRT Pipe internals, they are accessed in the “global” namespace.  Following are some common global parameters:
+
+```
+reference
+```
+* Specifies the name of a reference repository entry or FASTA file for mapping reads. **Required** for resequencing workflows.
+* Default value: ``None``
+
+```
+control
+```
+* Specifies the name of a reference repository entry or FASTA file for mapping spike-in control reads. **Optional**
+* Default value: ``None``
+
+```
+use_subreads
+```
+* Specifies whether to divide reads into subreads using the adapter region boundaries found by the primary analysis software. **Optional**
+* Default value: ``True``
+
+```
+num_stats_regions
+```
+* Specifies how many regions to use when reporting region statistics such as depth of coverage and variant density. **Optional**
+* Default value: ``500``
+
+## Modules ##
+* [[ P_Fetch Module ]]
+* [[ P_Filter Module ]]
+* [[ P_PreAssembler Module ]]
+* [[ P_Mapping (BLASR) Module ]]
+* [[ P_GenomicConsensus (Quiver) Module ]]
+* [[ P_AnalysisHook Module ]]
+* [[ Assembly (Allora Assembly) Module ]]
+* [[ HybridAssembly (AHA Scaffolding) Module ]]
+* [[ P_GATKVC (GATK Unified Genotyper) Module ]]
+* [[ P_Modification Detection Module ]]
+* [[ RS_CeleraAssembler Workflow ]]
+* [[ P_CorrelatedVariants (Minor and Compound Variants) Module ]]
+* [[ P_MotifFinder (Motif Analysis) Module ]]
+* [[ P_GMAP Module ]]
+* [[ P_Barcode Module ]]
\ No newline at end of file
diff --git a/docs/SMRT-Pipe-tools.md b/docs/SMRT-Pipe-tools.md
new file mode 100644
index 0000000..f9f2e11
--- /dev/null
+++ b/docs/SMRT-Pipe-tools.md
@@ -0,0 +1,5 @@
+**Tools** are programs that run as part of SMRT Pipe. A module, such as P_Mapping, can call several tools (such as the mapping tools summarizeCoverage.py or compareSequences.py) to actually perform the underlying processing. 
+
+All the tools are located at ``$SEYMOUR_HOME/analysis/bin``.
+
+Use the ``--help`` option to see usage information for each tool. (Some tools are undocumented.)
\ No newline at end of file
diff --git "a/docs/SMRT-Portal-GMAP-\"No-such-file-or-directory\"-Error.md" "b/docs/SMRT-Portal-GMAP-\"No-such-file-or-directory\"-Error.md"
new file mode 100644
index 0000000..ddb0790
--- /dev/null
+++ "b/docs/SMRT-Portal-GMAP-\"No-such-file-or-directory\"-Error.md"
@@ -0,0 +1,17 @@
+There is a bug in SMRT Analysis v1.4.0 affecting only the `RS_Transcriptome_Mapping` protocol. The error occurs because the paths to the GMAP scripts are hard-coded instead of resolved relative to `$SEYMOUR_HOME`. Look for the following error in your `smrtpipe.log` file:
+
+```
+[ERROR] 2013-04-17 17:41:30,404 [pbpy.smrtpipe.engine.SmrtPipeTasks run 655] > Can't exec "/home/NANOFLUIDICS/build/workspace/secondary-1.4-centos56/_output/smrtanalysis-1.4.0/analysis/bin/gmap_home/bin/fa_coords": No such file or directory at /opt/smrtanalysis/analysis/bin/gmap_build line 116.
+```
+
+If you see this error, edit lines 6 and 16 of `$SEYMOUR_HOME/analysis/bin/gmap_home/bin/gmap_build` to restore full functionality.
+
+Line 6 should be:
+
+`$gmapdb = $ENV{"SEYMOUR_HOME"} . "/analysis/bin/gmap_home/share";`
+
+Line 16 should be:
+
+`$bindir = $ENV{"SEYMOUR_HOME"} . "/analysis/bin/gmap_home/bin";`
\ No newline at end of file
diff --git a/docs/SMRT-Portal-Job-Fails.md b/docs/SMRT-Portal-Job-Fails.md
new file mode 100644
index 0000000..ecf090c
--- /dev/null
+++ b/docs/SMRT-Portal-Job-Fails.md
@@ -0,0 +1,59 @@
+When a SMRT Portal job fails, please do the following before filing a case or issue.
+
+### Step 1: Run a Lambda test job 
+Before looking at the `master.log` file, run a Lambda test job to determine if the problem is with the **software** or with the **data**. Run a RS_Resequencing job using the pre-packaged SMRT Cell in `$SEYMOUR_HOME/common/test/primary/lambda` and the pre-packaged reference lambda sequence in `$SEYMOUR_HOME/common/userdata/reference/lambda`.  
+
+### Step 2a: Investigate the data if the Lambda job succeeds
+If the Lambda job succeeds, then the **software is working fine** and you must investigate the data. Not all SMRT Portal jobs will succeed; a job will fail if your data is not appropriate for the analysis.
+
+
+1.  Do I have a corrupted SMRT Cell?
+    https://github.com/PacificBiosciences/SMRT-Analysis/wiki/Common-smrt%C2%AE-portal-errors
+
+2.  Do I have too little data to run this job? 
+
+    e.g. RS_HGAP2/3 jobs will fail if you have less than 20x coverage for the genome size you selected
+
+    e.g. RS_Resequencing jobs will fail at the genomic consensus step if you have less than 1x coverage of your genome
+
+3.  Do I have too much data to run this job?
+
+    e.g. RS_HGAP2/3 jobs may time out if you have >200x coverage for the genome size you selected
+
+    e.g. SMRT View will not display reads if you have more than 1000x coverage 
+
+
+### Step 2b: Investigate distributed computing if the Lambda job fails
+Misconfigured distributed computing environments are a common source of problems in SMRT Analysis.  Turn off distributed computing by editing `web.xml` and restarting smrtportal-initd, then run another Lambda test job.
+
+```
+SMRT_ROOT=/opt/smrtanalysis
+vi $SMRT_ROOT/redist/apache-tomcat-7.0.23/webapps/smrtportal/WEB-INF/web.xml
+```
+
+Change the `jobsAreDistributed` parameter to `false` to turn off distributed computing in SMRT Portal:
+```
+   <param-name>jobsAreDistributed</param-name>
+   <param-value>false</param-value>
+```
+
+Restart SMRT Portal daemons:
+```
+$SMRT_ROOT/admin/bin/smrtportal-initd stop
+$SMRT_ROOT/admin/bin/smrtportal-initd start
+```
+
+### Step 3a: Investigate the distributed computing configuration if the single-node Lambda job succeeds
+Read the detailed section carefully and ask your cluster administrator to assist. https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-v2.2.0#set-up-distributed-computing
+
+### Step 3b: Investigate the master.log if the single-node Lambda job fails
+Look for lines that begin with `[ERROR]` in the file located at `$SEYMOUR_HOME/common/jobs/<job_id_prefix>/<job_id>/log/master.log`.  If this file does not exist because the job failed **immediately** and no job directory was created, look for errors written to `$SEYMOUR_HOME/common/log/smrtportal/smrtportal.0.log`.  
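+A quick way to surface those lines (the job directory placeholders follow the layout above):
+
+```
+grep '\[ERROR\]' $SEYMOUR_HOME/common/jobs/<job_id_prefix>/<job_id>/log/master.log
+```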
+
+
+### Step 4: Report an issue 
+Please file a case at the [customer portal](http://www.pacbioportal.com) if you have an instrument (for an expedited response), or file an issue at [github issues](https://github.com/PacificBiosciences/SMRT-Analysis/issues), and provide the following:
+
+1.  Did a distributed RS_Resequencing lambda job succeed?
+2.  Did a single-node RS_Resequencing lambda job succeed?
+3.  Paste the `[ERROR]` lines from the `master.log` file 
+
diff --git a/docs/SMRT-Portal-Job-Status-Does-Not-Update.md b/docs/SMRT-Portal-Job-Status-Does-Not-Update.md
new file mode 100644
index 0000000..7be5974
--- /dev/null
+++ b/docs/SMRT-Portal-Job-Status-Does-Not-Update.md
@@ -0,0 +1,55 @@
+SMRT Portal job statuses are updated using a web services call. If the update fails, a job may appear to be stuck in "filtering" or "mapping" status in SMRT Portal when, in fact, it has completed according to the `smrtpipe.log` file.  
+
+### Step 1:  Check that the web services are alive.
+
+Execute `curl http://<hostname>:<port>/smrtportal/api`. You should get the following response: 
+```
+{
+  "success" : true,
+  "message" : "Web services are alive"
+}
+```
+
+Try this command from the head node as well as from any child nodes. Most update issues are due to hostname mapping problems, such as a child node not recognizing the head node by the hostname provided. If the call fails from a child node, examine your network configuration.  
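+One quick check, run from a child node (the head node's hostname and port are placeholders for your own values):
+
+```
+# Does the head node's name resolve on this child node?
+getent hosts <head_node_hostname>
+# Can this child node reach the SMRT Portal web services?
+curl http://<head_node_hostname>:<port>/smrtportal/api
+```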
+
+
+### Step 2:  Reset the hostname.
+
+Execute `$SEYMOUR_HOME/postinstall/configure_smrtanalysis.sh`. When setting the hostname, you may want to consider the following:
+* The hostname needs to be recognized by **all** compute nodes.
+* The hostname needs to be recognized by **all** external clients.
+* The hostname should be immutable between reboots; that is, use a static IP address or a DNS-recognized name.
+
+
+### Step 3: Verify the reported status of your job.
+
+Execute `curl http://<hostname>:<port>/smrtportal/api/jobs/<job_id>/status`.  You should get a response similar to the following:
+```
+{
+  "jobStatusId" : 1856353,
+  "jobId" : 56632,
+  "code" : "Filtering",
+  "jobStage" : null,
+  "moduleName" : null,
+  "percentComplete" : 100,
+  "message" : "Successfully completed smrtpipe job",
+  "name" : null,
+  "whenCreated" : "2013-03-04T13:03:46-0800",
+  "whenModified" : "2013-03-04T13:03:46-0800",
+  "modifiedBy" : null,
+  "createdBy" : "smrtpipe",
+  "description" : null
+}
+```
+
+
+### Step 4: Reset the status to "Complete".
+
+Execute `curl -u <administrator>:<password> -d 'progress={"code":"Completed"}' http://<hostname>:8080/smrtportal/api/jobs/<job_id>/status`.  You should get the following response:
+
+```
+{
+  "success" : true,
+  "message" : "Job status updated"
+}
+```
\ No newline at end of file
diff --git a/docs/SMRT-Portal-Lost-administrator-password.md b/docs/SMRT-Portal-Lost-administrator-password.md
new file mode 100644
index 0000000..7b6f303
--- /dev/null
+++ b/docs/SMRT-Portal-Lost-administrator-password.md
@@ -0,0 +1,33 @@
+### Option 1: You have another user account with administrator privileges, and you know the password for that user account: 
+
+  1.  Log in to SMRT Portal with a user account that has the administrator role.
+  2.  Go to **Admin -> Manage Users**.
+  3.  Select the administrator user.
+  4.  Reset the administrator user's password.
+  5.  A temporary password will be emailed to the administrator user's email address.
+  6.  Click the link in the email to login and change the password.
+
+### Option 2: If you have other user accounts, but none have administrator privileges, you will need a mysql login: 
+
+  1. Log in to mysql as root or with the smrtportal account. The password for the smrtportal user is stored in
+    `$SEYMOUR_HOME/redist/tomcat/webapps/smrtportal/WEB-INF/classes/META-INF/persistence.xml` (see the sketch after this list for one way to find it).
+  2. Set the role of a user account to "administrator":
+
+    ```
+    USE smrtportal; 
+    UPDATE User SET role = 'administrator' WHERE name = '[the_user_name]'; 
+    ```
+
+  3. Follow the instructions above to reset the administrator password.
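+One way to pull the credentials out of that file, as a rough sketch (the exact property names vary between releases, so inspect the file directly if this grep misses):
+
+```
+grep -iE 'user|password' $SEYMOUR_HOME/redist/tomcat/webapps/smrtportal/WEB-INF/classes/META-INF/persistence.xml
+```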
+
+
+### Option 3: If you only have one administrator account: 
+
+  1. Log in to mysql as root.
+  2. Delete the administrator user: 
+    ```
+    USE smrtportal;
+    DELETE FROM User WHERE name = 'administrator';
+    ``` 
+  3. Launch SMRT Portal.
+  4. Register with the user name "administrator" and specify a password.
\ No newline at end of file
diff --git a/docs/SMRT-Portal-freezes.md b/docs/SMRT-Portal-freezes.md
new file mode 100644
index 0000000..d9fea02
--- /dev/null
+++ b/docs/SMRT-Portal-freezes.md
@@ -0,0 +1,6 @@
+Use the following commands to stop and then restart Tomcat:
+```
+sudo $SEYMOUR_HOME/etc/scripts/postinstall/tomcatd stop
+sudo $SEYMOUR_HOME/etc/scripts/postinstall/tomcatd start
+```
+To determine if Tomcat is running, enter ``ps -ef | grep tomcat``.
\ No newline at end of file
diff --git a/docs/SMRT-Portal-has-dificulty-connecting-to-the-smrtportal-mysql-database.md b/docs/SMRT-Portal-has-dificulty-connecting-to-the-smrtportal-mysql-database.md
new file mode 100644
index 0000000..5af2165
--- /dev/null
+++ b/docs/SMRT-Portal-has-dificulty-connecting-to-the-smrtportal-mysql-database.md
@@ -0,0 +1,9 @@
+SMRT Portal can have difficulty connecting to the smrtportal mysql database after installation if you have a **non-default** setting in your mysql ``my.cnf`` file. 
+
+Following is the typical error when you try to create the first administrator user: 
+```
+Error listing Users. Cause: 'hibernate.dialect' must be set when no Connection available.
+```
+Enter ``grep bind /etc/mysql/my.cnf``. 
+
+If you changed the bind address to something **other** than the default 127.0.0.1, you need to replace localhost in the ``$SEYMOUR_HOME/redist/tomcat/webapps/smrtportal/WEB-INF/classes/META-INF/persistence.xml`` file with the actual IP address or hostname of the server running mysql.
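+As a minimal sketch of that edit, assuming the standard JDBC URL form ``jdbc:mysql://<host>:<port>/<database>`` and a MySQL server at 192.168.1.50 (substitute your own address):
+
+```
+sed -i 's#jdbc:mysql://localhost#jdbc:mysql://192.168.1.50#' \
+  $SEYMOUR_HOME/redist/tomcat/webapps/smrtportal/WEB-INF/classes/META-INF/persistence.xml
+```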
\ No newline at end of file
diff --git a/docs/SMRT-Portal-jobs-are-being-submitted-as-root.md b/docs/SMRT-Portal-jobs-are-being-submitted-as-root.md
new file mode 100644
index 0000000..d8b66d9
--- /dev/null
+++ b/docs/SMRT-Portal-jobs-are-being-submitted-as-root.md
@@ -0,0 +1,8 @@
+This only happens when the installation steps are not correctly followed.
+```
+smrtanalysis at server$ qstat -u \*
+job-ID  prior   name       user         state submit/start at     queue                          slots ja-task-ID
+-----------------------------------------------------------------------------------------------------------------        
+4298052 0.47166 S52657     root         qw    12/09/2012 22:12:57                                    1
+```
+Edit `/etc/init.d/tomcatd` to replace ```sh $CATALINA_HOME/bin/startup.sh``` with ```su -c "sh $CATALINA_HOME/bin/startup.sh" smrtanalysis```. (Don't forget, as root, to run ```/etc/init.d/tomcatd stop```, `qdel` the jobs that were submitted as root, and then run ```/etc/init.d/tomcatd start```.)
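+A sketch of the full recovery sequence, run as root (assumes an SGE cluster, where `qdel -u root` deletes all jobs owned by root):
+
+```
+/etc/init.d/tomcatd stop
+qdel -u root
+/etc/init.d/tomcatd start
+```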
\ No newline at end of file
diff --git a/docs/SMRT-Portal-protocols.md b/docs/SMRT-Portal-protocols.md
new file mode 100644
index 0000000..90820a7
--- /dev/null
+++ b/docs/SMRT-Portal-protocols.md
@@ -0,0 +1,138 @@
+Following are the secondary analysis protocols included in SMRT Analysis v1.4.0, with the SMRT Pipe module(s) called by each protocol.
+
+```
+RS_AHA_Scaffolding
+```
+* P_Filter
+* HybridAssembly
+
+```
+RS_ALLORA_Assembly
+```
+* AlloraSFilter
+* Assembly
+
+```
+RS_ALLORA_Assembly_EC
+```
+* AlloraSFilter
+* Assembly
+
+```
+RS_CeleraAssembler
+```
+* P_PacBioToCA
+* P_CeleraAssembler
+
+```
+RS_Filter_Only
+```
+* P_Filter
+
+```
+RS_Minor_and_Compound_Variants
+```
+* P_Filter
+* BLASR_Minor_and_Compound_Variants
+* P_CorrelatedVariants
+
+```
+RS_Modification_Detection
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+* P_Modification Detection
+
+```
+RS_Modification_and_Motif_Analysis
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+* P_MotifFinder
+
+```
+RS_PreAssembler
+```
+* PreAssemblerSFilter
+* P_PreAssembler
+
+```
+RS_PreAssembler_Allora
+```
+* PreAssemblerSFilter
+* AlloraWithPreAssembler
+
+```
+RS_Resequencing
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+
+```
+RS_Resequencing_CCS
+```
+* P_Filter
+* BLASR_De_Novo_CCS
+* GenomicConsensus_Plurality
+
+```
+RS_Resequencing_CCS_GATK
+```
+* P_Filter
+* BLASR_De_Novo_CCS
+* P_GATKVC
+
+```
+RS_Resequencing_GATK
+```
+* P_Filter
+* P_Mapping
+* P_GATKVC
+
+```
+RS_Resequencing_GATK_Barcode
+```
+* P_Filter
+* BLASR_Barcode
+* P_GATKVC
+
+```
+RS_Site_Acceptance_Test
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+
+```
+RS_cDNA_Mapping
+```
+* P_Filter
+* P_GMAP
+
+```
+11k_Unrolled_Resequencing
+```
+* P_Filter
+* BLASR_Unrolled
+* P_MotifFinder
+* P_Modification Detection
+* P_AnalysisHook
+
+```
+ecoliK12_RS_Resequencing
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+* P_AnalysisHook
+
+```
+lambda_RS_Resequencing
+```
+* P_Filter
+* P_Mapping
+* P_GenomicConsensus
+* P_AnalysisHook
\ No newline at end of file
diff --git a/docs/SMRT-View-Crashes-While-Browsing.md b/docs/SMRT-View-Crashes-While-Browsing.md
new file mode 100644
index 0000000..02ac9ec
--- /dev/null
+++ b/docs/SMRT-View-Crashes-While-Browsing.md
@@ -0,0 +1,13 @@
+### Step 1:  Show memory usage
+Go to **Tools > Options > Global**, and check `Always show memory usage in status bar`. Observe the memory usage to see if it has reached the maximum on either the client computer (your laptop) or the server (smrtanalysis host).
+
+### Step 2:  Edit vis.jnlp
+Go to `$SEYMOUR_HOME/common/etc/` and edit the `vis.jnlp` file. Find the line that says:
+
+```
+ <resources>
+        <j2se version="1.6+" java-vm-args="-Xms256m -Xmx1024m" href="http://java.sun.com/products/autodl/j2se"/>
+
+```
+
+Edit the `-Xms` and `-Xmx` parameters to allocate more memory to the process. The Java `-Xms` setting is the initial heap size, and `-Xmx` is the maximum heap size.
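+For example, to start with a 512 MB heap and allow growth to 4 GB (the values are illustrative; choose numbers that fit the client's RAM):
+
+```
+<j2se version="1.6+" java-vm-args="-Xms512m -Xmx4096m" href="http://java.sun.com/products/autodl/j2se"/>
+```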
\ No newline at end of file
diff --git a/docs/SMRT-View-Does-Not-Launch.md b/docs/SMRT-View-Does-Not-Launch.md
new file mode 100644
index 0000000..d9ee4dd
--- /dev/null
+++ b/docs/SMRT-View-Does-Not-Launch.md
@@ -0,0 +1,42 @@
+SMRT View is a client-server application.  Underlying problems can generally be traced to issues with the server, client (your laptop), or the network connecting the two.  
+
+### Check the client
+
+**Windows:** SMRT View needs Java 7 to execute the .jnlp file. Check your Java version by running `java -version` in a terminal or at a command prompt.  
+
+**Mac OS:** SMRT View has problems with Java 7 on the Mac OS, but works with Java 6. If you are running Java 7, please follow the instructions on [this apple support page](http://support.apple.com/kb/HT5559) to revert Java Webstart back to 6.
+  
+
+#### Read the Stacktrace
+
+Example 1: Permissions problem
+```
+Caused by: net.sourceforge.jnlp.LaunchException: Fatal: Application Error: Cannot grant permissions to unsigned jars. Application requested security permissions, but jars are not signed. 
+```
+
+To fix this:
+
+1. Open **Control Panel > Java > Security > Manage Certificates**.
+
+2. Delete the certificates associated with SMRT Analysis. 
+
+3. Double-click on the .jnlp file to reopen SMRT View.
+
+
+### Check the network
+
+Open your .jnlp file in a text editor, such as Notepad, and check to make sure the hostname is defined correctly. The first line of your jnlp file should look like this:
+
+```
+<jnlp spec="6.0+" version="1.3.3" codebase="http://localhost:8080/smrtview/axis2-web/app/bin" > 
+```
+
+In the above example, the hostname is incorrectly set to `localhost`, which means that only the server itself can see and open the jnlp file. For a client to also open the jnlp file and run SMRT View, the hostname must be an externally facing name or IP address. You can reset the hostname by rerunning `$SEYMOUR_HOME/etc/scripts/postinstall/configure_smrtanalysis.sh` and entering the new hostname when prompted.
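+To confirm that the corrected codebase URL is reachable from a client machine, you can request its headers directly (hostname and port are placeholders):
+
+```
+curl -I http://<hostname>:8080/smrtview/axis2-web/app/bin
+```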
+
+
+
+### Check the server
+
+Go to the SMRT View homepage at `http://<hostname>:8080/smrtview/`, and click the `Web Services Validation` link to check that all web services are on and that all library dependencies are installed.  Note any errors you see and correct them. Look for errors in the `$SEYMOUR_HOME/common/log/smrtview/` directory.  
+
+Go to the SMRT View homepage at `http://<hostname>:8080/smrtview/`, click the `SMRT View Test Page` link, and launch the example provided. If this example does **not** launch, you may have to re-evaluate what you used as the "hostname" during initial installation. The `configure_smrtanalysis.sh` script uses the return value of `hostname -a` as the default hostname. If you did not change this, or if you have dual DNS, or other more esoteric configurations, you must choose, or configure a ho [...]
\ No newline at end of file
diff --git a/docs/SMRT-View-Security-Certificate-Warning-Message.md b/docs/SMRT-View-Security-Certificate-Warning-Message.md
new file mode 100644
index 0000000..0417af9
--- /dev/null
+++ b/docs/SMRT-View-Security-Certificate-Warning-Message.md
@@ -0,0 +1,16 @@
+**Problem:**
+
+User clicks on the SMRT View link in SMRT Portal and sees the Security Certificate Warning message ``Wrong key usage``, and the application terminates.
+
+**Applies to:**
+
+**All** supported operating system platforms with a Java Runtime (JRE) version 6_35 through 6_40, or version 7_04 through 7_25.
+
+**Background:**
+
+Oracle has released numerous Java Runtime (JRE) updates to address recently discovered security vulnerabilities. The default JRE settings have also changed, turning on the ``certificate revocation checking`` feature. The above listed versions of JREs have a known problem, which causes the check to fail on security certificates issued by Comodo Inc., such as those used by SMRT Portal and SMRT View.
+Here is the full description of the [problem](http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7174966).
+
+**Action:**
+
+The bug was fixed in java7_40 (b28), but as of August 2013 the new version has not yet been made available. We recommend that **all** users update their JRE when the new version becomes available for their operating system. For the time being, you can change the setting for the certificate revocation check using the following [instructions](http://www.java.com/en/download/help/revocation_options.xml).
\ No newline at end of file
diff --git a/docs/SMRT-View-does-not-launch.md b/docs/SMRT-View-does-not-launch.md
new file mode 100644
index 0000000..12a5f55
--- /dev/null
+++ b/docs/SMRT-View-does-not-launch.md
@@ -0,0 +1,12 @@
+If there is a problem starting SMRT View, an error message with diagnostic
+information is displayed. In addition, SMRT View writes logs to two locations on the
+server that you can examine for diagnostic information:
+ 
+```
+/opt/smrtanalysis/common/log/smrtview
+/opt/smrtanalysis/redist/tomcat/logs
+```
+
+
diff --git a/docs/SMRT-View-does-not-show-reads-in-the-details-panel.md b/docs/SMRT-View-does-not-show-reads-in-the-details-panel.md
new file mode 100644
index 0000000..ea0a502
--- /dev/null
+++ b/docs/SMRT-View-does-not-show-reads-in-the-details-panel.md
@@ -0,0 +1,30 @@
+This procedure assumes you are able to open the SMRT View application itself. If not, please follow the steps in this article: [[ SMRT View does not launch ]].
+
+## Step 1: Zoom in and turn on read view 
+
+Make sure you are zoomed in far enough that you can see the bases on the reference genome. Click the "View Reads" icon located in the shortcuts bar at the top of the SMRT View window.
+
+
+## Step 2: Check that your coverage is less than 100,000
+
+SMRT View has a hard cutoff of 100,000 reads to limit memory usage; without this limit, your computer or the server could freeze. To make sure that you are not over the coverage limit, look at the coverage line plot in the "Regions Panel". Check the Y-axis to make sure it is not greater than 100,000.
+
+## Step 3: Check for errors in the SMRT View Experience Index
+
+Go to the SMRT Analysis home page at `http://<your_hostname>:<your_port>/`, and click on "SMRT View Home Page" --> "Experience Index". Note any errors that display.  
+
+### Step 3.1: If the error is as follows:
+```
+An error occurred reading data from test file: /mnt/ngswork/pacbio/smrtanalysis/common/test/jobs/scerevisiae/data/aligned_reads.cmp.h5
+Your system may not be properly configured or some files required for reading HDF data are missing.
+```
+
+1. Check that HDF native library files are in `$SEYMOUR_HOME/common/lib`, e.g. `libjhdf.so` and `libjhdf5.so` for Linux, `libjhdf.jnilib` and `libjhdf5.jnilib` for Mac OS, `jhdf.dll` and `jhdf5.dll` for Windows. Execute: `ls $SEYMOUR_HOME/common/lib`.
+
+2. Check that the script or command line argument used to start tomcat adds `$SEYMOUR_HOME/common/lib` to Java's native library path search, e.g. tomcatd includes `-Djava.library.path=$SEYMOUR_HOME/common/lib`. Execute: `ps -ef | grep tomcat`
+
+3. Restart tomcat: `$SEYMOUR_HOME/etc/scripts/tomcatd restart` 
+Make sure you restart tomcat as the SMRT Analysis user and not 'root' or some other linux user.  You can determine the SMRT Analysis user by checking the ownership of the tomcatd script.  
+  ```
+  ls -l /opt/smrtanalysis/admin/bin/tomcatd
+  ```
\ No newline at end of file
diff --git a/docs/SMRT-View-is-downloaded-from-the-server-every-time-you-access-it.md b/docs/SMRT-View-is-downloaded-from-the-server-every-time-you-access-it.md
new file mode 100644
index 0000000..da49a95
--- /dev/null
+++ b/docs/SMRT-View-is-downloaded-from-the-server-every-time-you-access-it.md
@@ -0,0 +1,5 @@
+Check the Java temporary file setting to ensure that Java files are being cached. This keeps SMRT View in memory so that you only need to download it **once**.
+
+**Windows XP/Windows 7**: Choose **Start > Settings > Control Panel > Java**. On the **General** tab,
+click the **Settings** button. If necessary, select the **Keep temporary files on my computer**
+option and click **OK**.
\ No newline at end of file
diff --git a/docs/SMRT-View-is-slow.md b/docs/SMRT-View-is-slow.md
new file mode 100644
index 0000000..3b0ca52
--- /dev/null
+++ b/docs/SMRT-View-is-slow.md
@@ -0,0 +1,18 @@
+###  1. Reduce Max Bases
+Set the **Details Panel Max Bases** preference to a smaller number. This option specifies the maximum number of bases to display in the Details panel. The **higher** the number, the **slower** SMRT View may be to load files, especially files with high coverage.  
+
+To change the setting: 
+Choose **Tools > Preferences**, click **Details Panel**, select **Fixed Range** or **Dynamic Range**, enter a smaller number into the field, then click **OK**.
+
+### 2. Increase Java memory 
+Change the `JAVA_OPTS` variable (`-Xmx` option) to a larger number in `$SMRT_ROOT/admin/bin/tomcatd-initd`:
+
+```
+export JAVA_OPTS="-d64 -server -Xmx8g"
+```
+
+### 3.  Change to a faster file system (flash, or local).
+This may not be possible, but SMRT View performs better when it can read the cmp.h5 files more quickly. This can be achieved by moving the installation directory to a faster file system, such as flash storage, or by running jobs on a local file system.
+
+### 4.  Split your reference genome
+Split your reference into smaller pieces (e.g. one file per chromosome) and re-run in several jobs.  This decreases the burden on SMRT View and improves performance.
\ No newline at end of file
diff --git a/docs/SMRT-View-runs-out-of-resources.md b/docs/SMRT-View-runs-out-of-resources.md
new file mode 100644
index 0000000..9b4fa38
--- /dev/null
+++ b/docs/SMRT-View-runs-out-of-resources.md
@@ -0,0 +1,3 @@
+SMRT View is designed to use system resources as efficiently as possible, but when using it to visualize large genomes, application and system resource limits can easily be crossed.  When running SMRT View from the same Tomcat instance as SMRT Portal, this will cause SMRT Portal to crash as well.  One way to address this is to increase the available heap as described below.  Another is to run SMRT View on another Tomcat instance (see the SMRT View Alternate Configurations/Running SMRT Vi [...]
+
+In order to increase the heap size for tomcat, edit the file `$SMRT_ROOT/admin/bin/tomcatd-initd` and, in the line starting with `export JAVA_OPTS=`, modify the `-Xmx` argument to a higher value (e.g. change it from `-Xmx8g` to `-Xmx16g`).
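+For example, the edited line might look like the following (16 GB is illustrative; the surrounding options mirror the shipped script):
+
+```
+export JAVA_OPTS="-d64 -server -Xmx16g"
+```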
\ No newline at end of file
diff --git a/docs/SMRT-analysis-software-installation-v2.1.1.md b/docs/SMRT-analysis-software-installation-v2.1.1.md
new file mode 100644
index 0000000..8a1104d
--- /dev/null
+++ b/docs/SMRT-analysis-software-installation-v2.1.1.md
@@ -0,0 +1,332 @@
+* [Important Changes](#ImportantChanges)
+* [System Requirements](#SysReq)
+  * [Operating System](#OS)
+  * [Running SMRT® Analysis in the Cloud](#Cloud)
+  * [Software Requirement](#SoftReq)
+  * [Minimum Hardware Requirements](#HardReq)
+* [Installation and Upgrade Summary](#Summary)
+  * [Step 1: Decide on a user and an installation directory](#Bookmark_DecideInstallDir)
+  * [Step 2: Create and set the installation directory $SMRT_ROOT](#Bookmark_CreateInstallDir)
+* [Installation and Upgrade Detail](#Details)
+  * [Step 3 Option 1: Run the install script](#Bookmark_InstallDetail)
+  * [Step 3 Option 2: Run the upgrade script](#Bookmark_UpgradeDetail)
+  * [Step 4: Set up distributed computing](#Bookmark_DistributedDetail)
+  * [Step 5: Set up SMRT Portal](#Bookmark_SMRTPortalDetail)
+  * [Step 6: Verify install or upgrade](#Bookmark_VerifyDetail)
+* [Optional Configurations](#Optional)
+  * [Set up userdata directory](#Bookmark_UserdataDetail)
+* [Bundled with SMRT® Analysis](#Bundled)
+* [Changes from SMRT® Analysis v2.0.1](#Changes)
+
+
+# <a name="ImportantChanges"></a> Important Changes
+
+SMRT Analysis migrated to a completely new directory structure starting with v2.1. Instead of ``$SEYMOUR_HOME``, we are now using ``$SMRT_ROOT``, and you will **not** need to specify it explicitly.  We still recommend that ``$SMRT_ROOT`` be set to `/opt/smrtanalysis/`, but the underlying folders will be as follows (arrows indicate softlinks):
+
+```
+/opt/smrtanalysis/
+              admin/
+                   bin/
+                   log/
+
+              current --> softlink to ../install/smrtanalysis-2.1.1
+
+              install/
+                 smrtanalysis-<other versions>/
+                 smrtanalysis-2.1.1/
+
+              userdata/  --> softlink to offline storage location
+              
+```
+
+
+
+# <a name="SysReq"></a> System Requirements
+
+## <a name="OS"></a> Operating System
+* SMRT® Analysis is **only** supported on:
+    * English-language **Ubuntu 12.04, Ubuntu 10.04, Ubuntu 8.04** 
+    * English-language **RedHat/CentOS 6.3, RedHat/CentOS 5.6, RedHat/CentOS 5.3**
+* If you are using alternate versions of Ubuntu or CentOS (not recommended), you should download and install the SMRT Analysis executable that is **older** than the OS installed on your system. (For example, if you are running CentOS 6.4, you should run the CentOS 6.3 executable). The software assumes a uniform operating system across **all** compute nodes.  If you have **different** OS versions on your cluster (not recommended), choose an executable that matches the **oldest** OS on you [...]
+
+* Check for any library errors when running an initial ``RS_resequencing`` analysis job on lambda. Here are some common packages that need to be installed:
+    * **RedHat/CentOS 5.xxx**: Enter `sudo yum install mysql-server perl-XML-Parser openssl redhat-lsb`
+    * **RedHat/CentOS 6.xxx**: Enter `sudo yum install mysql-server perl-XML-Parser openssl098e redhat-lsb`
+    * **Ubuntu 10.xxx**: Enter `sudo aptitude install mysql-server libxml-parser-perl libssl0.9.8`
+* SMRT Analysis **cannot** be installed on the Mac OS or Windows.
+
+
+## <a name="Cloud"></a> Running SMRT® Analysis in the Cloud ##
+Users who do **not** have access to a server with the supported OS can use the public Amazon Machine Image (AMI). For details, see the document [Running SMRT Analysis on Amazon](https://s3.amazonaws.com/files.pacb.com/software/smrtanalysis/2.1/doc/Running SMRT Analysis on Amazon.pdf).
+
+## <a name="SoftReq"></a> Software Requirement ##
+
+* MySQL 5 (`yum install mysql-server`; `apt-get install mysql-server`)
+* bash
+* Perl (v5.8.8)
+  * Statistics::Descriptive Perl module: `sudo cpan Statistics::Descriptive`
+
+
+### Client web browser: ###
+We recommend using the Google Chrome® 21 web browser to run SMRT Portal for consistent functionality. We also support Apple's Safari® and Internet Explorer® web browsers; however, some features may not be optimized on these browsers.
+
+### Client Java: ###
+To run SMRT View, we recommend using Java 7 for Windows (64-bit Java 7 for users with a 64-bit OS), and Java 6 for the Mac OS.
+
+## <a name="HardReq"></a> Minimum Hardware Requirements ##
+
+
+### 1 head node: ###
+* Minimum 8 cores, with 2 GB RAM per core. We recommend 16 cores with 4 GB RAM per core for _de novo_ assemblies and larger references such as human.
+* Minimum 250 GB of disk space.
+
+### Compute nodes: ###
+* Minimum 3 compute nodes. We recommend 5 nodes for high utilization focused on _de novo_ assemblies.
+* Minimum 8 cores per node, with 2 GB RAM per core. We recommend 16 cores per node with 4 GB RAM per core.
+* Minimum 250 GB of disk space per node.
+* To perform _de novo_ assembly of large genomes using the Celera® Assembler, **one** of the nodes will need to have considerably more memory. See the Celera® Assembler home page for recommendations: http://wgs-assembler.sourceforge.net/.
+
+**Notes:** 
+* It is possible, but **not** advisable, to install SMRT Analysis on a single-node machine (see the distributed computing section). You will likely be able to submit jobs one SMRT Cell at a time, but the time to completion may be long as the software may not have sufficient resources to complete the job.  
+
+* The ``RS_ReadsOfInsert`` protocol can be **compute-intensive**. If you plan to run it on every SMRT Cell, we recommend adding 3 additional 8-core compute nodes with at least 4 GB of RAM per core.
+
+### Data storage: ###
+* 10 TB (Actual storage depends on usage.)
+
+### Network File System Requirement 
+Please refer to the IT Site Prep guide provided with your instrument purchase for more details.
+
+1. The **SMRT Analysis software directory** (We recommend `$SMRT_ROOT=/opt/smrtanalysis`) **must** have the same path and be **readable** by the smrtanalysis user across **all** compute nodes via **NFS**.  
+
+2. The **SMRT Cell input directory**  (We recommend `$SMRT_ROOT/pacbio_instrument_data/`) **must** have the same path and be **readable** by the smrtanalysis user across **all** compute nodes via **NFS**.  This directory contains data from the instrument and can either be a directory configured by RS Remote during instrument installation, or a directory you created when you received data from a core lab. 
+
+3. The **SMRT Analysis output directory** (We recommend `$SMRT_ROOT/userdata`) **must** have the same path and be **writable** by the smrtanalysis user across **all** compute nodes via **NFS**. This directory is usually soft-linked to a large storage volume.
+
+4. The **SMRT Analysis temporary directory** is used for fast I/O operations during runtime.  The software accesses this directory from `$SMRT_ROOT/tmpdir` and you can softlink this directory manually or using the install script.  This directory should be a local directory (not NFS mounted) and be writable by the `smrtanalysis` user and exist as independent directories on all compute nodes. 
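+A quick sanity check for requirements 1-3, run from the head node (node names and paths are placeholders for your own choices):
+
+```
+for node in compute01 compute02 compute03; do
+  ssh $node "ls -ld /opt/smrtanalysis && touch /opt/smrtanalysis/userdata/.nfs_write_test"
+done
+```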
+
+
+# <a name="Summary"></a> Installation and Upgrade Summary
+
+**Please pay close attention as the upgrade procedure has changed.** 
+
+The following instructions apply to **fresh v2.1.1 installations** and **v2.0.1 to v2.1.1 upgrades only**.
+* If you are using an **older** version of SMRT Analysis, you can either perform a fresh installation and manually import old SMRT Cells and jobs, or download and upgrade any intermediate versions (v1.4, v2.0.0, v2.0.1).  
+
+<a name="Bookmark_DecideInstallDir"></a> 
+### Step 1. Decide on a user and an installation directory for the SMRT Analysis software suite.
+
+The SMRT Analysis install directory, `$SMRT_ROOT`, can be any directory as long as the smrtanalysis user has read, write, and execute permissions in that directory.  Historically we have referred to `$SMRT_ROOT` as `/opt/smrtanalysis`.  
+
+We recommend that a system administrator create a special user called `smrtanalysis`, who belongs to the `smrtanalysis` group. This user will own all SMRT Analysis files, daemon processes, and smrtpipe jobs.   
+
+
+<a name="Bookmark_CreateInstallDir"></a> 
+### Step 2. Create and set the installation directory $SMRT_ROOT.
+If the parent directory of `$SMRT_ROOT` is not writable by the SMRT Analysis user, the `$SMRT_ROOT` directory must be pre-created with read/write/execute permissions for the SMRT Analysis user.  
+
+* **Option 1:** The SMRT Analysis user has sudo privileges.
+For example, if `$SMRT_ROOT` is `/opt/smrtanalysis`, `/opt` is only writable by root, and the SMRT Analysis user is `smrtanalysis` belonging to the group `smrtanalysis`.  
+  ```
+  SMRT_ROOT=/opt/smrtanalysis
+  sudo mkdir $SMRT_ROOT
+  sudo chown smrtanalysis:smrtanalysis $SMRT_ROOT
+  ```
+* **Option 2:** The SMRT Analysis user does **not** have sudo privileges.
+For example, if you do not have sudo privileges, you can install SMRT Analysis as yourself in your home directory; however, you still must have root login credentials for the mysql database.
+  ```
+  SMRT_ROOT=/home/<your_username>/smrtanalysis
+  mkdir $SMRT_ROOT
+  ```
+
+### Step 3. Run the installer or upgrade script and start services.  
+
+  * **Option 1**: If you are performing a **fresh** installation, run the installation script and start tomcat and kodos.  [See below for more details.](#Bookmark_InstallDetail)
+  ```
+  bash smrtanalysis-2.1.1.Current_Ubuntu-8.04.run --rootdir $SMRT_ROOT
+  $SMRT_ROOT/admin/bin/tomcatd start
+  $SMRT_ROOT/admin/bin/kodosd start
+  ```
+  
+  If you need to rerun the script and have already extracted the file, you can rerun using the `--no-extract` option:
+
+  `bash smrtanalysis-2.1.1.Current_Ubuntu-8.04.run --rootdir $SMRT_ROOT --no-extract`
+
+  * **Option 2**: **Please pay close attention as the upgrade procedure has changed.**  The new procedure requires running a script called ``smrtupdater`` from the old v2.0.1 smrtanalysis directory, which takes the path to the new v2.1.1 installer as an argument.  
+**IMPORTANT: If `$SMRT_ROOT` is a pre-existing symbolic link (e.g. `/opt/smrtanalysis`--> `/opt/smrtanalysis-2.0.1`), you must manually delete the softlink and create a new directory this time only.** [See below for more details.](#Bookmark_UpgradeDetail)
+  ```
+  /opt/smrtanalysis-2.0.1/etc/scripts/kodosd stop
+  /opt/smrtanalysis-2.0.1/etc/scripts/tomcatd stop
+
+  rm /opt/smrtanalysis
+  mkdir /opt/smrtanalysis
+  SMRT_PATH_ORIG="$PATH" SMRT_ROOTDIR="/opt/smrtanalysis" bash /opt/smrtanalysis-2.0.1/admin/bin/smrtupdater /opt/smrtanalysis-2.1.1.Current_Ubuntu-8.04.run
+
+  /opt/smrtanalysis/admin/bin/tomcatd start
+  /opt/smrtanalysis/admin/bin/kodosd start
+  ```
+
+
+**Note:** For future upgrades beyond v2.1.1, we expect the upgrade command to be `$SMRT_ROOT/admin/bin/smrtupdater /path/to/smrtanalysis-2.1.1.Current_Ubuntu-8.04.run` 
+
+
+### Step 4. **New Installations only:** Set up distributed computing 
+
+Decide on a job management system (JMS). [See below for more details.](#Bookmark_DistributedDetail)
+
+### Step 5. **New Installations only**: Set up SMRT Portal
+
+Register the administrative user and set up the SMRT Portal GUI. [See below for more details.](#Bookmark_SMRTPortalDetail)
+
+### Step 6. Verify the installation. 
+
+Run a sample SMRT Portal job to verify functionality. [See below for more details.](#Bookmark_VerifyDetail)
+
+
+# <a name="Details"></a> Installation and Upgrade Details
+### <a name="Bookmark_InstallDetail"></a> Step 3, Option 1 Details: Run the Installation script and turn on services
+
+The installation script attempts to discover inputs when possible, and performs the following: 
+
+* Looks for valid hostnames (DNS) and IP Addresses. You must choose one from the list.   
+* Assumes that the user running the script is the designated smrtanalysis user.
+* Installs the Tomcat web server. You will be prompted for:
+  * The **port number** that the tomcat service will run under. (Default: ``8080``)
+  * The **port number** that the tomcat service will use to shutdown. (Default: ``8005``)
+* Creates the smrtportal database in mysql. You will be prompted for:
+  * The mysql administrative user name. (Default: ``root``)
+  * The mysql password. (Default:  no password)
+  * The mysql port number. (Default: ``3306``)
+* Attempts to configure the Job Management System (``SGE``, ``LSF``, ``PBS``, or ``NONE``). You will be prompted for:
+  * The ``$SGE_ROOT`` directory
+  * The ``$SGE_CELL`` directory name
+  * The ``$SGE_BINDIR`` directory that contains all the q-commands
+  * The queue name
+  * The parallel environment
+* Creates and configures special directories:
+  * The ``$TMP`` directory
+  * The ``$USERDATA`` directory 
+
+
+### <a name="Bookmark_UpgradeDetail"></a> Step 3, Option 2 Details: Run the Upgrade Script
+
+The upgrade script performs the following:
+* Checks that the same user who performed the original installation is running the upgrade script
+* Checks for running services
+* Checks that the OS and hardware requirements are still met
+* Transfers computing configurations from a previous installation
+* Upgrades any references as necessary
+* Preserves SMRT Cells, jobs, and users from a previous installation by updating smrtportal database schema changes as necessary
+* Preserves special directories settings
+  * Updates the `$SMRT_ROOT/tmpdir` softlink 
+  * Updates the `$SMRT_ROOT/userdata` softlink
+* The upgrade script does **not** port over protocols that were defined in previous versions of SMRT Analysis. This is because protocol files can vary a great deal between versions due to rapid code development and change. Please **recreate** any custom protocols you may have.
+
+
+### <a name="Bookmark_DistributedDetail"></a> Step 4 Details: Set up Distributed Computing
+
+Pacific Biosciences has explicitly validated Sun Grid Engine (SGE), and provides job submission templates for LSF and PBS. You only need to configure the software once, during the initial install. 
+
+#### Configuring Templates 
+
+The central components for setting up distributed computing in SMRT Analysis are the **Job Management Templates**, which provide a flexible format for specifying how SMRT Analysis communicates with the resident Job Management System (JMS). If you are using a non-SGE job management system, you **must** create or edit the following files:
+```
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/start.tmpl
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/interactive.tmpl
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/kill.tmpl
+```
+
+#### Specifying the PBS Job Management System
+
+PBS does **not** have a ``-sync`` option, so the ``interactive.tmpl`` file runs a script named ``qsw.py`` to simulate the functionality. You must edit **both** ``interactive.tmpl`` and ``start.tmpl``. 
+
+1. Change the queue name to one that exists on your system. (This is the ``-q`` option.) 
+2. Change the parallel environment to one that exists on your system. (This is the ``-pe`` option.) 
+3. Make sure that ``interactive.tmpl`` calls the ``-PBS`` option.
+
+#### Specifying the LSF Job Management System
+
+The equivalent of the SGE ``-sync`` option in LSF is `-K`, and it should be provided with the `bsub` command in the `interactive.tmpl` file.
+
+1. Change the queue name to one that exists on your system. (This is the `-q` option.) 
+2. Change the parallel environment to one that exists on your system. (This is the `-pe` option.) 
+3. Make sure that ``interactive.tmpl`` calls the `-K` option.
+
+
+#### Specifying other Job Management Systems
+
+1. Create a new directory `smrtanalysis/current/analysis/etc/cluster/NEW_JMS`.
+2. Edit `smrtanalysis/current/analysis/etc/smrtpipe.rc`, and change the `CLUSTER_MANAGER` variable to `NEW_JMS` (see the sketch after this list).
+3. Once you have a new JMS directory specified, create and edit the `interactive.tmpl`, `start.tmpl`, and `kill.tmpl` files for your particular setup.
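+For example, assuming the setting keeps a simple key-value form in `smrtpipe.rc` (check the shipped file for the exact syntax):
+
+```
+# smrtanalysis/current/analysis/etc/smrtpipe.rc
+CLUSTER_MANAGER = NEW_JMS
+```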
+
+### <a name="Bookmark_SMRTPortalDetail"></a> Step 5 Details: (New Installations Only) Set Up SMRT® Portal
+
+1. Use your web browser to start SMRT Portal: `http://hostname:port/smrtportal`
+2. Click **Register** at the top right.
+3. Create a user named ``administrator`` (all lowercase). This user is special, as it is the only user that does not require activation on creation.
+4. Enter the user name ``administrator``.
+5. Enter an email address. All administrative emails, such as new user registrations, will be sent to this address.
+6. Enter the password and confirm the password.
+7. Select **Click Here** to access **Change Settings**.
+8. To set up the mail server, enter the SMTP server information and click **Apply**. For email authentication, enter a user name and password. You can also enable Transport Layer Security.
+9. To enable automated submission from a PacBio® RS instrument, click **Add** under the Instrument Web
+Services URI field. Then, enter the following into the dialog box and click **OK**:
+```
+http://INSTRUMENT_PAP01:8081
+```
+``INSTRUMENT_PAP01`` is the IP address or name (pap01) of the instrument.
+``8081`` is the port for the instrument web service.
+
+10. Select the new URI, then click **Test** to check if SMRT Portal can communicate with the instrument service.
+11. (Optional) You can delete the pre-existing instrument entry by clicking **Remove**.
+
+### <a name="Bookmark_VerifyDetail"></a> Step 6: Verify the installation
+
+Create a test job in SMRT Portal using the provided lambda sequence data. This is data from a single SMRT cell that has been down-sampled to reduce overall tarball size. If you are upgrading, this cell will already have been imported into your system, and you can skip to step 10 below.
+
+1. Open your web browser and clear the browser cache:
+
+   * **Google Chrome**: Choose **Tools > Clear browsing data**. Choose **the beginning of time** from the droplist, then check **Empty the cache** and click **Clear browsing data**.
+   * **Internet Explorer**: Choose **Tools > Internet Options > General**, then under Browsing history, click **Delete**. Check **Temporary Internet files**, then click **Delete**.
+   * **Firefox**: Choose **Tools > Options > Advanced**, then click the **Network** tab. In the Cached Web Content section, click **Clear Now**.
+
+2. Refresh the current page by pressing **F5**.
+3. Log into SMRT Portal by navigating to ``http://HOST:PORT/smrtportal``.
+4. Click **Design Job**.
+5. Click **Import and Manage**.
+6. Click **Import SMRT Cells**.
+7. Click **Add**.
+8. Enter ``/opt/smrtanalysis/common/test/primary``, then click **OK**.
+9. Select the new path and click **Scan**. You should get a dialog saying "One input was scanned." 
+10. Click **Design Job**.
+11. Click **Create New**.
+12. Enter a job name and comment.
+13. Select the protocol ``RS_Resequencing.1``.
+14. Under **SMRT Cells Available**, select a lambda cell and click the right-arrow button.
+15. Click **Save** on the bottom right, then click **Start**. The job should complete successfully.
+16. Click the **SMRT View** button. SMRT View should open with tracks displayed, and the reads displayed in the Details panel.
+
+## <a name="Optional"></a> Optional Configurations ##
+### Set up Userdata folders ###
+
+The userdata folder, `$SMRT_ROOT/userdata`, expands rapidly because it contains all jobs, references, and drop boxes.  We recommend softlinking this folder to an **external** directory with more storage: 
+
+
+```
+mv /opt/smrtanalysis/userdata /path/to/NFS/mounted/offline_storage
+ln -s /path/to/NFS/mounted/offline_storage /opt/smrtanalysis/userdata
+```
+
+## <a name="Bundled"></a> Bundled with SMRT® Analysis ##
+The following are bundled within the application and do **not** depend on what is already deployed on the system.
+* Java® 1.7
+* Python® 2.7
+* Tomcat™ 7.0.23
+
+## <a name="Changes"></a> Changes from SMRT® Analysis v2.0.1 ##
+See [SMRT Analysis Release Notes v2.1.1](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Release-Notes-v2.1.1) for changes and known issues. The latest version of this document resides on the Pacific Biosciences DevNet site; you can also link to it from the main SMRT Analysis web page.
+
+
+***
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2013, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
+**P/N 100-299-000**
\ No newline at end of file
diff --git a/docs/Secondary-Analysis-Web-Services-API-v2.0.md b/docs/Secondary-Analysis-Web-Services-API-v2.0.md
new file mode 100644
index 0000000..ee02ba1
--- /dev/null
+++ b/docs/Secondary-Analysis-Web-Services-API-v2.0.md
@@ -0,0 +1,1513 @@
+* [Introduction](#Intro)
+* [Security](#Sec)
+* [Overview](#Ov)
+* [Web Services Behavior](#WSB)
+* [HTTP Response Codes](#HCODE)
+* [Search Conventions](#CONV)
+* [Examples](#EX)
+* [Reference Service](#REF_SVC)
+  * [List References Function](#REF_List_Ref)
+  * [Reference Details Function](#REF_Ref_Det)
+  * [List References by Type Function](#REF_List_Ref_Type)
+  * [Create Reference Function](#REF_Create_Ref)
+  * [Save Reference Function](#REF_Save_Ref)
+  * [Delete Reference Function](#REF_Del_Ref)
+  * [List Reference Dropbox Files Function](#REF_List_DB)
+  * [RSS Feed Function](#REF_RSS)
+* [User Service](#USER)
+  * [List Users Function](#USR_List)
+  * [User Details Function](#USR_Det)
+  * [Create User Function](#USR_Create)
+  * [Save User Function](#USR_Save)
+  * [Delete User Function](#USR_Del)
+  * [Register User Function](#USR_Reg)
+  * [Change Password Function](#USR_CPW)
+  * [Reset Password Function](#USR_RPW)
+  * [List User-Defined Fields Function](#USR_LUDF)
+  * [List User-Defined Field Names Function](#USR_LUDFN)
+* [Secondary Analysis Input Service](#SA_SVC)
+  * [List Secondary Inputs Function](#SA_LInput)
+  * [Secondary Input Details Function](#SA_InputDet)
+  * [Create Secondary Input Function](#SA_CR)
+  * [Save Secondary Input Function](#SA_SA)
+  * [Last Timestamp of Secondary Input Function](#SA_LTime)
+  * [Import Secondary Input Metadata Function](#SA_Imp)
+  * [Scan for New Input Metadata Function](#SA_Scan)
+  * [Delete Secondary Input Function](#SA_Del)
+  * [Compatibility Function](#SA_Comp)
+  * [Groups Function](#SA_Group)
+  * [Cleanup Function](#SA_Clean)
+* [Jobs Service](#JOB_SVC)
+  * [List Jobs Function](#JOB_List)
+  * [List Jobs by Status Function](#JOB_ListBStatus)
+  * [List Jobs By Protocol Function](#JOB_ListBProt)
+  * [Job Details Function](#JOB_Det)
+  * [Create Job Function](#JOB_CR)
+  * [Save Job Function](#JOB_Save)
+  * [Delete Job Function](#JOB_Del)
+  * [Archive Job Function](#JOB_Arch)
+  * [Restore Archived Job Function](#JOB_RestArch)
+  * [Get Job Metrics Function](#JOB_Metrics)
+  * [Get Job Protocol Function](#JOB_Prot)
+  * [Set Job Protocol Function](#JOB_SetProt)
+  * [Get Job Inputs Function](#JOB_Input)
+  * [Start Job Function](#JOB_Start)
+  * [Get Job Status Function](#JOB_GetStatus)
+  * [Update Job Status Function](#JOB_UpStatus)
+  * [Job History Function](#JOB_Hist)
+  * [Job Log Function](#JOB_Log)
+  * [Analysis Table of Content Function](#JOB_TOC)
+  * [Job Analysis File Function](#JOB_File)
+  * [Mark Job Complete Function](#JOB_COmplete)
+  * [List Jobs in Dropbox Function](#JOB_inDrop)
+  * [Import Job Function](#JOB_Import)
+  * [Job Overview Function](#JOB_OV)
+  * [Job Last Heartbeat Function](#JOB_Heart)
+  * [Job Raw-Read Function](#JOB_RR)
+* [Protocol Service](#PRO_SVC)
+  * [List Protocols Function](#PRO_List)
+  * [List Protocol Names Function](#PRO_ListNames)
+  * [Protocol Details Function](#PRO_Det)
+  * [Create Protocol Function](#PRO_CR)
+  * [Update Protocol Function](#PRO_UP)
+  * [Delete Protocol Function](#PRO_Del)
+* [Sample Sheet Service](#SAM_SVC)
+  * [Validate Sample Sheet Function](#SAM_Val)
+* [Settings Service](#SET_SVC)
+  * [Check Free Disk Space Function](#SET_CheckSpace)
+  * [Get Job Dropbox Function](#SET_GetDrop)
+  * [Set Job Dropbox Function](#SET_SetDrop)
+  * [Get Reference Sequence Dropbox Function](#SET_GetRefDrop)
+  * [Set Reference Sequence Dropbox Function](#SET_SetRefDrop)
+  * [Get SMTP Host Function](#SET_GetSMTP)
+  * [Set SMTP Host Function](#SET_SetSMTP)
+  * [Send Test Email Function](#SET_Email)
+  * [Get Input Paths Function](#SET_GetPath)
+  * [Add Input Paths Function](#SET_AddPath)
+  * [Remove Input Paths Function](#SET_AddPath)
+  * [Validate Path for use in pbids Function](#SET_ValPath)
+  * [Get Instrument URIs Function](#SET_GetURI)
+  * [Add Instrument URIs Function](#SET_SetURI)
+  * [Remove Instrument URIs Function](#SET_DelURI)
+  * [Test Instrument URIs Function](#SET_TestURI)
+  * [Check Anonymous UI Access Function](#SET_CheckUI)
+  * [Set Anonymous UI Access Function](#SET_SetUI)
+  * [Check Anonymous Web Services Access Function](#SET_CheckWS)
+  * [Set Anonymous Web Services Access Function](#SET_SetWS)
+  * [Set Anonymous Web and UI Access Function](#SET_SetUIWS)
+  * [Get Job Archive Directory Function](#SET_GetArch)
+  * [Set Job Archive Directory Path Function](#SET_SetArch)
+* [Groups Service](#GR_SVC)
+  * [Create Group Function](#GR_CR)
+  * [Save Group Function](#GR_Save)
+  * [Delete Group Function](#GR_Del)
+  * [List Group Names Function](#GR_ListNames)
+  * [List Groups Function](#GR_List)
+
+***
+
+## <a name="Intro"></a> Introduction
+
+This document describes the Secondary Analysis Web Services API provided by Pacific Biosciences. The API allows developers to search, submit and manage secondary analysis jobs, data, results, and user accounts.
+
+Secondary Analysis Web Services follow the **REST** (Representational State Transfer) model for web services, and use the JSON (JavaScript Object Notation) format. The web services:
+
+* Run as the server-side layer for managing secondary analysis jobs.
+* Maintain data integrity in the secondary analysis database and file system.
+* Act as a layer on top of SMRT® Pipe, the lower-level code that performs secondary analysis processing.
+* Support AJAX access from web clients and can be used from the command line with wget or curl; from scripting languages (PHP, Python, Perl); and from Java and C#.
+
+The API includes functions for:
+* Managing **reference sequences**
+* Managing **user accounts** and **passwords**
+* Managing **groups of users**
+* Managing **instrument output** (SMRT® Cell data)
+* Managing secondary analysis **jobs**
+* Managing **protocols**
+* Validating **sample sheets**
+* Managing **settings**
+
+The latest version of the API and this documentation are available from the PacBio® Developer’s Network at http://www.pacbiodevnet.com.
+
+## <a name="Sec"></a> Security
+
+* Anonymous read-only access to web services is enabled by **default**.
+* Services that **create** and **modify** data require authentication.
+* Authentication is enforced for administrator, scientist and technician-level access and **cannot** be disabled.
+* An application setting (``restrictAccess``) in the ``web.xml`` file turns authentication on or off for all web services that are **not** solely for administrators.
+
+
+## <a name="Ov"></a> Overview
+
+Secondary Analysis Web Services API:
+
+* Run in or under a standard Linux/Apache environment, and can be accessed from Windows, Mac OS or Linux.
+* Require MySQL.
+* Are installed as part of the secondary analysis system, and require a one-time configuration. Any additional changes can be made using SMRT® Portal.
+* Require that SMRT® Pipe be correctly configured and working.
+
+## <a name="WSB"></a> Web Services Behavior
+
+* URLs and parameters are all **case-sensitive.**
+
+* Most requests use the HTTP ``GET`` command to retrieve an object in JSON (JavaScript Object Notation) format: GET data to view details of an object: ``/{objects}/{id_or_name}`` **Example:**
+``curl http://{server}/smrtportal/api/jobs/12345``
+
+* **Deleting objects** uses the HTTP DELETE command: ``DELETE data: /{objects}/{id_or_name}``. **Example:** ``curl -X DELETE -u administrator:somepassword http://{server}/smrtportal/api/jobs/12345``
+
+* **Saving objects to the server, manipulating objects, and operating on the server:** These use the HTTP POST command common to standard HTML forms. This is **not** the same as for file uploads, which use a different mime type (multipart form data). In this case, the request body consists of key-value form pairs. POST data to create a new object: ``/{objects}``. **Example:**
+
+```
+curl -d 'data={the job returned from the GET method, with some edits}' http://{server}/smrtportal/api/
+jobs/12345
+```
+
+* **Saving objects to the server** also supports the PUT and POST commands with alternative content-types, such as application/json and text/xml. In this case, the request body consists of JSON or XML, and contains no key-value form pairs: PUT/POST data to save/update objects: ``/{objects}``
+
+* Most of the time you use ``/{objects}/create`` for both ways of saving objects.
+
+* Web services requiring authentication use the HTTP header’s Authorization feature. **Example:**
+``curl -u "janeuser:somepassword" http://server/secret/sauce``. Alternatively, you could log in using the users/log-on method and store the cookie for use with future web service calls.
+
+* **Creating objects** can be done with an HTTP POST using the ``/create`` method, or with an HTTP PUT with JSON or XML as the request body. The PUT method is considered more of a REST "purist" approach, whereas POST is more widely supported by web browsers.
+
+* By default, most web services return JSON. However, it’s possible in most cases to change the result format by adding an Accept header to the request. Most methods will support ``Accept: text/xml`` as well as ``application/json``, ``text/csv`` and ``text/tsv`` (tab-separated values).
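+For example, combining the conventions above to fetch the job list as tab-separated values (the server name is a placeholder, as in the other examples):
+
+```
+curl -H "Accept: text/tsv" http://{server}/smrtportal/api/jobs
+```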
+
+### Passing Arguments ###
+
+* Arguments that are primitive types can be passed like standard HTTP POST parameters: ``param1=value1&param2=value2``
+
+* Arguments that are objects should be serialized as JSON: ``param1={"name1":"value1","name2":"value2"}``
+
+* When using an HTTP PUT, simply pass the JSON or XML object in the request body:
+``{"name1": "Value1", "name2": "Value2"}``
+
+### Date and Time Format ###
+* All dates and times are in the ISO 8601 Universal Time format.
+
+## <a name="HCODE"></a> HTTP Response Codes
+
+### Success Conditions ###
+
+When successful, a web services call returns an object or list of objects serialized as JSON, unless a different format is requested using an ``Accept`` header. You can deserialize the object in any language as a dictionary/hashtable, a list, or a list of dictionary/hashtables. For more advanced use, you can create custom, strongly-typed objects.
+
+For service calls that **don't** return data from a server, a Notice message object with a uniform signature is returned. For example: ``{"success": true, "message": "It worked"}``
+
+* **Return Value:** ``200 OK``  **Explanation:** The web service call returned successfully. The
+body of the response contains the requested JSON object. For function calls, the response may be a
+simple status message object.
+
+* **Return Value:** ``201 Created``  **Explanation:** The web service created a new object on the server. A simple PrimaryKey object is returned, such as: ``{"idName":"id","idValue":12345}``.
+The response will contain a header: ``Location: http://where/the/new/object/is``
+
+### Error Conditions ###
+
+When errors occur, the web services return an HTTP error code. The body of the response contains a standard JSON object that can be uniformly deserialized as a strongly typed object, or left as a dictionary/hashtable. For example: ``{"success":false, "type":"IllegalArgumentException",
+"message":"Job id cannot be null"}``
+
+* **Return Value:** ``400 Bad request``  **Explanation:** The arguments were incorrect, or the web service was called incorrectly.
+
+* **Return Value:** ``403 Forbidden``  **Explanation:** The web service requires authentication, and the credentials in the HTTP header’s Authorization section were rejected.
+
+* **Return Value:** ``404 Not Found``  **Explanation:** The search or web service call did not find the
+requested object.
+
+* **Return Value:** ``409 Conflict``  **Explanation:** The attempt to update or delete an object failed.
+
+* **Return Value:** ``413 Request Entity Too Large``  **Explanation:** When searching a large database table, there may be practical limits to how many records can be returned. The query asked for too many records.
+
+* **Return Value:** ``500 Internal Server Error``  **Explanation:** An internal error occurred.
+
+## <a name="CONV"></a> Search Conventions
+
+Lists of objects are retrieved using either the HTTP GET or POST commands. For objects with a small number of members, a JSON list is returned. Searching and filtering are possible through web services; see the documentation for the jqGrid plugin at http://www.trirand.com/jqgridwiki/.
+
+``GET the full list: /jobs``
+
+For objects with a **large** number of records (such as secondary analysis jobs and instrument output), results are paged. A wrapper object specifies the page number, total number of records, rows per page, and the list of objects themselves. The data structure is taken directly from the jqGrid plugin; for details see http://www.trirand.com/blog. Following is a sample structure: ``{"page":1,"records":50,"total":510,"rows":[{object1},{obj2}]}`` where:
+
+* ``page`` is the current page.
+* ``records`` is the number of rows on the current page.
+* ``total`` is the total number of rows.
+* ``rows`` is a list of objects for the current page.
+
+### Usage ###
+
+* GET the first page: ``/{objects}``  **Example:**  ``curl http://{server}/smrtportal/api/jobs``
+
+* POST search or filtering options to the same url: ``/{objects}`` **Example:**  ``curl -d
+'options={"page":2,"rows":10,"sortOrder":"desc","sortBy":"jobId"}' http://{server}/smrtportal/api/jobs``
+
+The set of search and filtering parameters available is extensive, flexible, and is also derived from the jqGrid plugin. Key options include:
+
+* **Option:** ``page``  **Values:** ``int``  **Description:** Page number, starting from 1.
+
+* **Option:** ``rows``  **Values:** ``int``  **Description:** Rows per page. If the requested number is too large, a ``413 Request Entity Too Large`` error is generated.
+
+* **Option:** ``sortOrder``  **Values:** ``asc`` or ``desc``  **Description:** Sort order, ascending or descending.
+
+* **Option:** ``sortBy``  **Values:** ``String``; object property name  **Description:** Name of the column property to sort on. Example: ``jobId``.
+
+Arguments can be passed as JSON objects. For example: ``options={"sortOrder":"asc", "sortBy":"name", "page":1}``
+
+## <a name="EX"></a> Examples
+The documentation for commonly used methods includes sample curl commands and sample return values. The examples assume a user named ``administrator`` and a PacBio® instrument located at ``http://pssc1:8080/``.
+
+## <a name="REF_SVC"></a> Reference Service
+The Reference Service includes functions that you use to manage the reference sequences used in secondary analysis. (Reference sequences are used to map reads against a reference genome for resequencing and for filtering reads.)
+
+### <a name="REF_List_Ref"></a> List References Function
+Use this function to list the reference sequences available on the system.
+
+* **URL:** ``/reference-sequences``  
+* **Method:** ``GET`` or ``POST``
+* **Parameters:**  ``options=SearchOptions`` (Only ``sortOrder`` and ``sortBy`` are supported.)
+* **Returns:** ``PagedList<ReferenceEntry>``
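+* **Example** (a minimal sketch; the server follows the conventions in the Examples section):
+```
+curl http://pssc1:8080/smrtportal/api/reference-sequences
+```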
+
+### <a name="REF_Ref_Det"></a> Reference Details Function
+Use this function to obtain details about a specific reference sequence.
+
+* **URL:** ``/reference-sequences/{id}``  
+* **Method:** ``GET``
+* **Parameters:**  ``id=string``
+* **Returns:** ``ReferenceEntry``
+
+### <a name="REF_List_Ref_Type"></a> List References by Type Function
+Use this function to list the reference sequences available on the system by their **type**.
+
+* **URL:** ``/reference-sequences/by-type/{name}``  
+* **Method:** ``GET`` or ``POST``
+* **Parameters:**  
+  * ``name=control`` or ``name=sample``
+  * ``options=SearchOptions`` (Only ``sortOrder``, ``sortBy`` and ``columnNames`` are supported.)
+* **Returns:** ``PagedList<ReferenceEntry>``
+
+### <a name="REF_Create_Ref"></a> Create Reference Function
+Use this function to **create** a new reference sequence.
+
+* **URL:** ``/reference-sequences/create`` (Using POST), ``/reference-sequences`` (Using PUT)  
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:**  
+  * ``data=ReferenceSequence`` (Using POST)
+  * ``ReferenceSequence`` (Using PUT)
+* **Returns:** ``PrimaryKey``
+
+### <a name="REF_Save_Ref"></a> Save Reference Function
+Use this function to **save** a reference sequence.
+
+* **URL:**  ``/reference-sequences/{id}``
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:**  
+  * ``id=string``
+  * ``data=ReferenceSequence``
+* **Returns:** ``A notice message object.``
+
+### <a name="REF_Del_Ref"></a> Delete Reference Function
+Use this function to **delete** a reference sequence.
+
+* **URL:**  ``/reference-sequences/{id}``
+* **Method:** ``DELETE``
+* **Parameters:** ``id=string``
+* **Returns:** ``A notice message object``
+
+### <a name="REF_List_DB"></a> List Reference Dropbox Files Function
+Use this function to list the reference files located in the Reference Sequence Dropbox.
+
+* **URL:**  ``/reference-sequences/dropbox-files``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+
+### <a name="REF_RSS"></a> RSS Feed Function
+Use this function to access an RSS feed which lists when secondary analysis jobs complete or fail.
+
+* **URL:**  ``/rss``
+* **Method:** ``GET``
+* **Returns:** ``An RSS XML file.``
+
+## <a name="USER"></a> User Service
+The User Service includes functions used to manage **users**, **roles** and **passwords**.
+
+### <a name="USR_List"></a> List Users Function
+Use this function to list users on the system. **(Administrators only)**
+
+* **URL:**  ``/users``
+* **Method:** ``GET`` or ``POST``
+* **Parameters:** ``options=SearchOptions``  (Only ``sortOrder``, ``sortBy`` and ``columnNames`` are supported.)
+* **Returns:** ``PagedList<User>``
+
+### <a name="USR_Det"></a> User Details Function
+Use this function to obtain information about a specific user. **(Administrators only)**
+
+* **URL:**  ``/users``
+* **Method:** ``GET``
+* **Parameters:** ``userName=string`` 
+* **Returns:** ``User``
+
+### <a name="USR_Create"></a> Create User Function
+Use this function to **add** a new user to the system. Note that the user needs to be registered to gain access. **(Administrators only)**
+
+* **URL:**  ``/users/create`` (Using POST), ``/users`` (Using PUT)
+* **Method:** ``POST``, ``PUT``
+* **Parameters:** 
+ * ``data=User`` (Using POST)
+ * ``User`` (Using PUT)
+* **Returns:** ``PrimaryKey``
+
+### <a name="USR_Save"></a> Save User Function
+Use this function to **save** changes made to a user. **(Administrators only)**
+
+* **URL:**  ``/users/{userName}``
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:** 
+ * ``userName=string``
+ * ``data=User``
+* **Returns:** ``A notice message object.``
+
+### <a name="USR_Del"></a> Delete User Function
+Use this function to **delete** a user from the system. **(Administrators only)**
+
+* **URL:**  ``/users/{userName}``
+* **Method:** ``DELETE``
+* **Parameters:** ``userName=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="USR_Reg"></a> Register User Function
+Use this function to register a new user.
+
+* **URL:**  ``/users/register``
+* **Method:** ``POST``
+* **Parameters:** 
+ * ``data=User``  **(Required)**
+ * ``userName=string``
+ * ``email=string``
+ * ``password=string``
+ * ``confirmPassword=string``
+* **Returns:** ``User``
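+* **Example** (a sketch; the field values are hypothetical, and this assumes the listed fields are passed inside the ``User`` object):
+```
+curl -d 'data={"userName":"janeuser","email":"janeuser@example.com","password":"somepassword#1","confirmPassword":"somepassword#1"}' http://pssc1:8080/smrtportal/api/users/register
+```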
+
+### <a name="USR_CPW"></a> Change Password Function
+Use this function to change a user’s password with a specified replacement password. This functionality is available to administrators for **all** passwords.
+
+* **URL:**  ``/users/{userName}/change-password``
+* **Method:** ``POST``
+* **Parameters:** 
+ * ``data=User``  **(Required)**
+ * ``userName=string``
+ * ``newPassword=string``
+ * ``password=string``
+ * ``confirmPassword=string``
+* **Returns:** ``A notice message object.``
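+* **Example** (a sketch with hypothetical values, authenticating as the administrator from the Examples section; this assumes the fields are passed inside the ``User`` object):
+```
+curl -u administrator:administrator#1 -d 'data={"userName":"janeuser","password":"somepassword#1","newPassword":"newpassword#1","confirmPassword":"newpassword#1"}' http://pssc1:8080/smrtportal/api/users/janeuser/change-password
+```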
+
+### <a name="USR_RPW"></a> Reset Password Function
+Use this function to reset a user’s password. The user is then asked to change their password. **(Administrators only)**
+
+* **URL:**  ``/users/{userName}/reset-password``
+* **Method:** ``GET``
+* **Returns:** ``A notice message object.``
+
+### <a name="USR_LUDF"></a> List User-Defined Fields Function
+Use this function to obtain a list of user-defined fields. These fields are created using the RS Remote software. If a run specified a secondary analysis protocol, these fields (if defined) propagate throughout the secondary analysis pipeline.
+
+* **URL:**  ``/custom-fields``
+* **Method:** ``GET``
+* **Parameters:** ``options=SearchOptions`` (Only ``sortOrder``, ``sortBy`` and ``columnNames`` are supported.)
+* **Returns:** ``PagedList<CustomField>``
+
+### <a name="USR_LUDFN"></a> List User-Defined Field Names Function
+Use this function to obtain a list of the names of **user-defined fields**. These fields are created using the RS Remote software. If a run specified a secondary analysis protocol, these fields (if defined) propagate throughout the secondary analysis pipeline.
+
+* **URL:**  ``/custom-fields/names``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+
+## <a name="SA_SVC"></a> Secondary Analysis Input Service
+The Secondary Analysis Input Service includes functions used to manage the data associated with each SMRT® Cell that is included in a secondary analysis job.
+
+### <a name="SA_LInput"></a> List Secondary Inputs Function
+Use this function to obtain a list of secondary analysis input.
+
+* **URL:**  ``/inputs``
+* **Method:** ``GET`` or ``POST``
+* **Parameters:** ``options=SearchOptions``
+* **Returns:** ``PagedList<Input>``
+
+### <a name="SA_InputDet"></a> Secondary Input Details Function
+Use this function to obtain details for a specified secondary analysis input.
+
+* **URL:**  ``/inputs``
+* **Method:** ``GET``
+* **Parameters:** ``id=int``
+* **Returns:** ``Input``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/inputs
+{
+"page" : 1,
+"records" : 1,
+"total" : 1,
+"rows" : [ {
+"adapterSequence" : "ATCTCTCTCttttcctcctcctccgttgttgttgttGAGAGAGAT",
+"bindingKitBarcode" : "000001001546011123111",
+"bindingKitControl" : "Standard_v1",
+"bindingKitExpirationDate" : "2011-12-31T00:00:00-0800",
+...
+} ]
+}
+```
+
+### <a name="SA_CR"></a> Create Secondary Input Function
+Use this function to **create** secondary analysis input.
+
+* **URL:**  ``/inputs/create`` (Using POST), ``/inputs`` (Using PUT)
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  
+  * ``data=Input`` (Using POST)
+  * ``Input`` (Using PUT)
+* **Returns:** ``PrimaryKey``
+
+### <a name="SA_SA"></a> Save Secondary Input Function
+Use this function to **save** secondary analysis input.
+
+* **URL:**  ``/inputs/{id}``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  
+  * ``data=Input``
+  * ``id=int``
+* **Returns:** ``A notice message object.``
+
+### <a name="SA_LTime"></a> Last Timestamp of Secondary Input Function
+Use this function to obtain the time of the last secondary analysis input saved to the database.
+
+* **URL:**  ``/inputs/last-timestamp``
+* **Method:** ``GET``
+* **Returns:** ``Date``
+
+### <a name="SA_Imp"></a> Import Secondary Input Metadata Function
+Use this function to **import** secondary analysis input.
+
+* **URL:**  ``/inputs/import``
+* **Method:** ``POST``
+* **Parameters:**  ``data=array of Collections from instrument``
+* **Returns:** ``List<Input>``
+
+### <a name="SA_Scan"></a> Scan for New Input Metadata Function
+Use this function to **scan** for secondary analysis input.
+
+* **URL:**  ``/inputs/scan``
+* **Method:** ``POST``
+* **Parameters:**  ``paths=array of string``
+* **Returns:** ``List<Input>``
+* **Example:** ``curl -u administrator:administrator#1 -d 'paths=["/data/smrta/smrtanalysis/common/inputs_dropbox"]'
+http://secondary_host:8088/smrtportal/api/inputs/scan``
+* **Python code example:**
+
+```
+import os
+import logging
+import urllib
+import urllib2
+import json
+import base64
+
+log = logging.getLogger(__name__)
+
+class DefaultProgressErrorHandler(urllib2.HTTPDefaultErrorHandler):
+    def http_error_default(self, req, fp, code, msg, headers):
+        result = urllib2.HTTPError(req.get_full_url(), code, msg, headers, fp)
+        result.status = code
+        return result
+
+def request_to_string(request):
+    "for debugging"
+    buffer = []
+    buffer.append('Method: %s' % request.get_method())
+    buffer.append('Host: %s' % request.get_host())
+    buffer.append('Selector: %s' % request.get_selector())
+    buffer.append('Data: %s' % request.get_data())
+    return os.linesep.join(buffer)
+
+def scan():
+    url = 'http://localhost:8080/smrtportal/api/inputs/scan'
+    # Use a real collection path here, e.g. /opt/testdata/LIMS/2311013/0002
+    c_path = '/data/smrta/smrtanalysis/common/inputs_dropbox'
+    # To scan more than one path, encode a list with doseq, e.g.
+    # urllib.urlencode({'paths[]': [path1, path2]}, doseq=True); whether the
+    # service accepts repeated 'paths[]' values is an assumption; the curl
+    # example above passes a JSON array instead.
+    scan_data = urllib.urlencode({'paths[]': c_path})
+    request = urllib2.Request(url, data=scan_data)
+    request.add_header('User-Agent', 'admin-user')
+    key = 'Basic %s' % (base64.b64encode("administrator:administrator#1"))
+    request.add_header('Authorization', key)
+    opener = urllib2.build_opener(DefaultProgressErrorHandler())
+    response = opener.open(request)
+    # The response body is a JSON list with one PrimaryKey per scanned path.
+    retList = json.loads(response.read())
+    return retList[0]['idValue']
+
+def saveJob(inputId):
+    url = 'http://localhost:8080/smrtportal/api/jobs/create'
+    job = {
+        'name':'test_job',
+        'createdBy':'admin',
+        'protocolName':'RS_Filter_Only.1',
+        'groupNames':['all'],
+        'inputIds':[inputId]
+    }
+    job_data = urllib.urlencode( {'data': json.dumps(job)  } )
+
+    request = urllib2.Request(url, data=job_data)
+    print request_to_string(request)
+
+    request.add_header('User-Agent', 'admin-user')
+    key = 'Basic %s' % (base64.b64encode("administrator:administrator#1"))
+    request.add_header('Authorization', key)
+
+    opener = urllib2.build_opener(DefaultProgressErrorHandler())
+    response = opener.open(request)
+    ret = json.loads( response.read() )
+    return ret['idValue']
+
+def startJob(jobId):
+    url = 'http://localhost:8080/smrtportal/api/jobs/{i}/start'.format(i=jobId)
+
+    #This is a GET
+    request = urllib2.Request(url)
+    print request_to_string(request)
+
+    request.add_header('User-Agent', 'admin-user')
+    key = 'Basic %s' % (base64.b64encode("administrator:administrator#1"))
+    request.add_header('Authorization', key)
+
+    opener = urllib2.build_opener(DefaultProgressErrorHandler())
+    response = opener.open(request)
+    ret = json.loads( response.read() )
+    print( ret )
+
+def test():
+    inputId = scan()
+    print( 'Scanned inputId = %s' % inputId )
+    jobId = saveJob(inputId)
+    print( 'jobId = %s' % jobId )
+    startJob(jobId)
+
+if __name__ == '__main__':
+    test()
+```
+
+### <a name="SA_Del"></a> Delete Secondary Input Function
+Use this function to **delete** specified secondary analysis input. **(Scientists and administrators only)**
+
+* **URL:**  ``/inputs/{id}``
+* **Method:** ``DELETE``
+* **Parameters:**  ``id=int``
+* **Returns:** ``A notice message object.``
+
+### <a name="SA_Comp"></a> Compatibility Function
+Use this function to return information specifying whether the SMRT® Cell inputs for the job are compatible. Mixing data that was generated using v1.2.1 primary analysis software with data generated with later versions may fail. (v1.2.1 calculated Quality Values differently than later versions.)
+
+* **URL:**  ``/inputs/compatibility``
+* **Method:** ``GET``
+* **Parameters:**  ``ids=[array of ids]``
+* **Returns:** ``JSON object specifying whether or not the inputs are compatible.``
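+* **Example** (a sketch; input ID 78807 is reused from the Create Job example below, and the brackets are quoted so the shell does not interpret them):
+```
+curl 'http://pssc1:8080/smrtportal/api/inputs/compatibility?ids=[78807]'
+```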
+
+### <a name="SA_Group"></a> Groups Function
+Use this function to add group information to secondary analysis input.
+
+* **URL:**  ``/inputs/{INPUT_ID}/groups``
+* **Method:** ``POST``
+* **Parameters:**  ``data=[names of groups]``  **Example:** ``data=["grp1", "grp2"]``
+* **Returns:** ``A notice message object.``
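+* **Example** (a sketch; the input ID and group names are hypothetical):
+```
+curl -u administrator:administrator#1 -d 'data=["grp1", "grp2"]' http://pssc1:8080/smrtportal/api/inputs/78807/groups
+```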
+
+### <a name="SA_Clean"></a> Cleanup Function
+Use this function to delete any input that is **unassociated** with a job and has an invalid
+or empty collectionPathUri. This is useful for cleaning up duplicate SMRT Cells located at different paths. When you scan and import SMRT Cells from SMRT Portal and the same SMRT Cell ID already exists, the existing path is updated to the new location. No duplicate entries are created. **(Scientists and administrators only)**
+
+* **URL:**  ``/inputs/{INPUT_ID}/cleanup``
+* **Method:** ``DELETE``
+* **Returns:** ``A notice message object that includes the list of deleted input IDs.``
+
+
+## <a name="JOB_SVC"></a> Jobs Service
+The Jobs Service includes functions used to manage secondary analysis jobs.
+
+### <a name="JOB_List"></a> List Jobs Function
+Use this function to obtain a list of **all** secondary analysis jobs.
+
+* **URL:**  ``/jobs``
+* **Method:** ``GET`` or ``POST``
+* **Parameters:**  ``options=SearchOptions``
+* **Returns:** ``PagedList<Job>``
+* **Example:**
+
+```
+curl -d 'options={"filters":{"rules":[{"field":"createdBy","op":"eq","data":"AutomationSystem"},{"field":"jobId","op":"lt","data":"30000"}],"groupOp":"and"},"columnNames":["jobId"],"rows":"0"}' http://pssc1:8080/smrtportal/api/jobs
+{
+"page" : 1,
+"records" : 57,
+"total" : 1,
+"rows" : [ {
+"jobId" : 26392
+}, {
+"jobId" : 26360
+}, {
+"jobId" : 26359
+}, {
+...
+}]
+}
+```
+
+### <a name="JOB_ListBStatus"></a> List Jobs by Status Function
+Use this function to obtain a list of secondary analysis jobs, based on their **job status**.
+
+* **URL:**  ``/jobs/by-status/{jobStatus}``
+* **Method:** ``GET`` or ``POST``
+* **Parameters:**  ``options=SearchOptions``
+* **Returns:** ``PagedList<Job>``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/by-status/Completed
+{
+"page" : 1,
+"records" : 25,
+"total" : 3,
+"rows" : [ {
+"automated" : true,
+"collectionProtocol" : "Standard Seq v2",
+...
+} ]
+}
+```
+
+### <a name="JOB_ListBProt"></a> List Jobs By Protocol Function
+Use this function to list secondary analysis jobs that use a specified **protocol**, optionally filtered by job status.
+
+* **URL:**  ``/jobs/by-protocol/{protocol}``
+* **Method:** ``GET`` or ``POST``
+* **Parameters:**  ``protocol=string``, ``options=SearchOptions``, ``jobStatus=`` status code such as ``NotStarted``.
+* **Returns:** ``PagedList<Job>``
+* **Example:**
+```
+curl -d 'jobStatus=Completed' http://pssc1:8080/smrtportal/api/jobs/by-protocol/RS_resequencing.1
+{
+"page" : 1,
+"records" : 1,
+"total" : 1,
+"rows" : [ {
+"automated" : false,
+...
+"whenStarted" : null
+} ]
+}
+```
+
+### <a name="JOB_Det"></a> Job Details Function
+Use this function to display **details** of a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}`` or ``/jobs/by-name/{name}``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``, ``name=string``
+* **Returns:** ``Job``
+* **Examples:**
+```
+By ID:
+curl -u administrator:administrator#1 http://pssc1:8080/smrtportal/api/jobs/016437
+{
+"jobId" : 16437,
+"protocolName" : "RS_Site_Acceptance_Test.1",
+"referenceSequenceName" : "lambda",
+"jobStatus" : "Completed",
+...
+"whenModified" : "2012-01-31T09:12:48-0800",
+"modifiedBy" : null
+}
+By Name:
+curl -u administrator:administrator#1 http://pssc1:8080/smrtportal/api/jobs/by-name/2311084_0002
+{
+"jobId" : 16437,
+"protocolName" : "RS_Site_Acceptance_Test.1",
+"referenceSequenceName" : "lambda",
+"jobStatus" : "Completed",
+...
+"whenModified" : "2012-01-31T09:12:48-0800",
+"modifiedBy" : null
+}
+```
+
+### <a name="JOB_CR"></a> Create Job Function
+Use this function to **create** a new secondary analysis job.
+
+* **URL:**  ``/jobs/create`` (Using POST), ``/jobs`` (Using PUT)
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:**  ``data=Job`` (Using POST), ``job`` (Using PUT). In both cases, the name must be **unique**, and ``CreatedBy`` must be **non-null**.
+* **Returns:** ``PrimaryKey``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d 'data={"name":"DemoJobName", "createdBy":"testuser", "description":"demo job", "protocolName":"RS_Resequencing.1", "groupNames":["all"], "inputIds":["78807"]}' http://pssc1:8080/smrtportal/api/jobs/create
+{
+"idValue" : 16478,
+"idProperty" : "jobId"
+}
+```
+
+### <a name="JOB_Save"></a> Save Job Function
+Use this function to **save** a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``id=int``, ``data=Job``
+* **Returns:** ``A notice message object.``
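+* **Example** (a sketch; as described under Web Services Behavior, first GET the job, edit the returned JSON, then POST it back; the body shown here is abbreviated):
+```
+curl -u administrator:administrator#1 -d 'data={"jobId":16478, "name":"DemoJobName", "description":"updated description", ...}' http://pssc1:8080/smrtportal/api/jobs/16478
+```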
+
+### <a name="JOB_Del"></a> Delete Job Function
+Use this function to **delete** a secondary analysis job. **(Administrators only)**
+
+* **URL:**  ``/jobs/{id}``
+* **Method:** ``DELETE``
+* **Parameters:**  ``id=int``
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl -u "administrator:administrator#1" -X DELETE http://pssc1:8080/smrtportal/api/jobs/16478
+{
+"success" : true,
+"message" : "Job 16478 has been permanently deleted"
+}
+```
+
+### <a name="JOB_Arch"></a> Archive Job Function
+Use this function to **archive** a secondary analysis job. **(Administrators only)**
+
+* **URL:**  ``/jobs/{id}/archive`` (Using GET), ``/jobs/archive`` (Using POST)
+* **Method:** ``GET``, ``POST``
+* **Parameters:**  ``id=int``  (Using GET),  ``ids=int[]``  (Using POST)
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d 'ids=[16437,16438]' http://pssc1:8080/smrtportal/api/jobs/archive
+{
+"success" : true,
+"message" : "Archived 2 jobs."
+}
+```
+
+### <a name="JOB_RestArch"></a> Restore Archived Job Function
+Use this function to **restore** a secondary analysis job that was archived. **(Administrators only)**
+
+* **URL:**  ``/jobs/{id}/restore`` (Using GET), ``/jobs/restore`` (Using POST)
+* **Method:** ``GET``, ``POST``
+* **Parameters:**  ``id=int``  (Using GET),  ``ids=int[]``  (Using POST)
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d 'ids=[16437,16438]' http://pssc1:8080/smrtportal/api/jobs/restore
+{
+"success" : true,
+"message" : "Restored 2 jobs."
+}
+```
+
+### <a name="JOB_Metrics"></a> Get Job Metrics Function
+Use this function to retrieve **metrics** for one or more secondary analysis jobs, in CSV format.
+
+* **URL:**  ``/jobs/{id}/metrics`` (Using GET), ``/jobs/metrics`` (Using POST)
+* **Method:** ``GET``, ``POST``
+* **Parameters:**  ``id=int``  (Using GET),  ``ids=int[]``  (Using POST)
+* **Returns:** ``A notice message object`` containing the following fields:
+ * **Job ID:** Numeric ID for the job.
+ * **Job Name:** Name given to the job when created in SMRT Portal.
+ * **Adapter Dimers (%):** The % of pre-filter ZMWs which have observed inserts of 0-10bp. These are likely adapter dimers.
+ * **Short Inserts (%):** The % of pre-filter ZMWs which have observed inserts of 11-100bp. These are likely short fragment contamination.
+ * **Medium Insert (%):**
+ * **Pre-Filter Polymerase Read Bases:** The number of bases in the polymerase reads before filtering, including adaptors.
+ * **Post-Filter Polymerase Read Bases:** The number of bases in the polymerase reads after filtering, including adaptors.
+ * **Pre-Filter Polymerase Reads:** The number of polymerases generating trimmed reads before filtering. Polymerase reads include bases from adaptors and multiple passes around a circular template.
+ * **Post-Filter Polymerase Reads:** The number of polymerases generating trimmed reads after filtering. Polymerase reads include bases from adaptors and multiple passes around a circular template.
+ * **Pre-Filter Polymerase Read Length:** The mean trimmed read length of all polymerase reads before filtering. The value includes bases from adaptors as well as multiple passes around a circular template.
+ * **Post-Filter Polymerase Read Length:** The mean trimmed read length of all polymerase reads after filtering. The value includes bases from adaptors as well as multiple passes around a circular template.
+ * **Pre-Filter Polymerase Read Quality:** The mean single-pass read quality of all polymerase reads before filtering.
+ * **Post-Filter Polymerase Read Quality:** The mean single-pass read quality of all polymerase reads after filtering.
+ * **Coverage:** The mean depth of coverage across the reference sequence.
+ * **Missing Bases (%):** The percentage of the reference sequence that has zero coverage.
+ * **Post-Filter Reads:**  The number of reads that passed filtering.
+ * **Mapped Subread Accuracy:** The mean accuracy of post-filter subreads that mapped to the reference sequence.
+ * **Mapped Reads:** The number of post-filter reads that mapped to the reference sequence.
+ * **Mapped Subreads:** The number of post-filter subreads that mapped to the reference sequence.
+ * **Mapped Polymerase Bases:** The number of post-filter bases that mapped to the reference sequence. 
+ * **Mapped Subread Bases:** The number of post-filter bases that mapped to the reference sequence. This does not include adapters.
+ * **Mapped Polymerase Read Length:** The mean trimmed read length of all polymerase reads. The value includes bases from adaptors as well as multiple passes around a circular template.
+ * **Mapped Subread Length:** The mean read length of post-filter subreads that mapped to the reference sequence. This does not include adapters.
+ * **Mapped Polymerase Read Length 95%:** The 95th percentile of read length of post-filter polymerase reads that mapped to the reference sequence.
+ * **Mapped Read Length of Insert:** The average length of the Read of Insert, which is a representative read of a DNA molecule from a single ZMW; that is, the sequence of a DNA molecule read from a single ZMW. On circularized SMRTbell™  templates that are shorter than the read length, a Read of Insert length distribution will closely resemble the insert size distribution.
+ * **Mapped Polymerase Read Length Max:** The maximum read length of post-filter polymerase reads that mapped to the reference sequence.
+ * **Mapped Full Subread Length:**  The lengths of full subreads, which includes only mapped subreads. Full subreads are subreads flanked by two adapters.
+ * **First Subread Length:**
+ * **Reads Starting Within 50 bp (%):**
+ * **Reads Starting Within 100 bp (%):**
+ * **Reference Length:** The length of the reference sequence.
+ * **Bases Called (%):** The percentage of reference sequence that has ≥ 1x coverage. % Bases Called + % Missing Bases should equal 100.
+ * **Consensus Accuracy:** The accuracy of the consensus sequence compared to the reference.
+ * **Coverage:** The mean depth of coverage across the reference sequence.
+ * **SMRT Cells:** The number of SMRT Cells used in the job.
+ * **Movies:** The number of movies generated in the job.
+* **Example Notice Message Object returned:**
+```
+{
+  "Job ID" : 58765,
+  "Job Name" : "20130404_891_Final_v2_q1",
+  "Adapter Dimers (%)" : "0.46",
+  "Short Inserts (%)" : "0.04",
+  "Medium Insert (%)" : "0.03",
+  "Pre-Filter Polymerase Read Bases" : "342682225",
+  "Post-Filter Polymerase Read Bases" : "321587405",
+  "Pre-Filter Polymerase Reads" : "450918",
+  "Post-Filter Polymerase Reads" : "103439",
+  "Pre-Filter Polymerase Read Length" : "760",
+  "Post-Filter Polymerase Read Length" : "3109",
+  "Pre-Filter Polymerase Read Quality" : "0.203",
+  "Post-Filter Polymerase Read Quality" : "0.844",
+  "Coverage" : "137.94",
+  "Missing Bases (%)" : "0.00",
+  "Post-Filter Reads" : "103439",
+  "Mapped Subread Accuracy" : "86.49",
+  "Mapped Reads" : "95308",
+  "Mapped Subreads" : "117348",
+  "Mapped Polymerase Bases" : "278385730",
+  "Mapped Subread Bases" : "274408990",
+  "Mapped Polymerase Read Length" : "2921",
+  "Mapped Subread Length" : "2338",
+  "Mapped Polymerase Read Length 95%" : "7997",
+  "Mapped Read Length of Insert" : "2564",
+  "Mapped Polymerase Read Length Max" : "15085",
+  "Mapped Full Subread Length" : "2025",
+  "First Subread Length" : "2488",
+  "Reads Starting Within 50 bp (%)" : "0.06",
+  "Reads Starting Within 100 bp (%)" : "0.06",
+  "Reference Length - Campylobacter_891_8523_chromosome|quiver" : "1853005",
+  "Bases Called (%) - Campylobacter_891_8523_chromosome|quiver" : "100.00",
+  "Consensus Accuracy - Campylobacter_891_8523_chromosome|quiver" : "100.0000",
+  "Coverage - Campylobacter_891_8523_chromosome|quiver" : "137.94",
+  "SMRT Cells" : "6",
+  "Movies" : "6"
+}
+```
+* **Example command to call the function:**
+```
+curl -u administrator:administrator#1 -d 'ids=[16437,16438]' http://pssc1:8080/smrtportal/api/jobs/metrics
+[ {
+"Job ID" : 16437,
+"Job Name" : "2311084_0002",
+...
+},{
+...
+}]
+```
+
+### <a name="JOB_Prot"></a> Get Job Protocol Function
+Use this function to **return the protocol** used by a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/protocol``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``Protocol XML document``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/protocol
+<smrtpipeSettings>
+<protocol version="1.3.0" id="RS_Site_Acceptance_Test.1" editable="false">
+<param name="name" label="Protocol Name" editable="false">
+...
+<fileName>settings.xml</fileName>
+</smrtpipeSettings>
+```
+
+### <a name="JOB_SetProt"></a> Set Job Protocol Function
+Use this function to **specify the protocol** used by a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/protocol``
+* **Method:** ``POST``
+* **Parameters:**  ``id=int``, ``data=Xml(escaped)``. The XML is escaped for transmission from a web browser, for example using the JavaScript ``escape`` function.
+* **Returns:** ``A notice message object.``
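+* **Example** (a sketch; this assumes a local ``settings.xml`` protocol file, and uses curl's ``--data-urlencode name@file`` form to escape the XML):
+```
+curl -u administrator:administrator#1 --data-urlencode 'data@settings.xml' http://pssc1:8080/smrtportal/api/jobs/16478/protocol
+```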
+
+### <a name="JOB_Input"></a> Get Job Inputs Function
+Use this function to return information about the SMRT® Cell data used for a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/inputs``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``PagedList<Input>``
+
+### <a name="JOB_Start"></a> Start Job Function
+Use this function to **start** a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/start``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``JobStatus``
+* **Example:**
+```
+curl -u administrator:administrator#1 http://pssc1:8080/smrtportal/api/jobs/16479/start
+{
+"jobStatusId" : 1775,
+"jobId" : 16479,
+"code" : "Submitted",
+"jobStage" : null,
+"moduleName" : null,
+"percentComplete" : 0,
+"message" : "Job submitted",
+"name" : null,
+"description" : null,
+"whenCreated" : null,
+"createdBy" : null,
+"whenModified" : null,
+"modifiedBy" : null
+}
+```
+
+### <a name="JOB_GetStatus"></a> Get Job Status Function
+Use this function to obtain the **status** of a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/status``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``JobStatus``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16479/status
+{
+"jobStatusId" : 1780,
+"jobId" : 16479,
+"code" : "In Progress",
+"jobStage" : "Filtering",
+"moduleName" : "P_FilterReports/adapterRpt",
+"percentComplete" : 100,
+"message" : "task://016479/P_FilterReports/adapterRpt complete",
+"name" : null,
+"description" : null,
+"whenCreated" : "2012-02-03T17:38:06-0800",
+"createdBy" : "smrtpipe",
+"whenModified" : "2012-02-03T17:38:06-0800",
+"modifiedBy" : null
+}
+```
+
+### <a name="JOB_UpStatus"></a> Update Job Status Function
+Use this function to **modify the status** of a secondary analysis job. **(Scientists and administrators only)**
+
+* **URL:**  ``/jobs/{id}/status``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``id=int``, ``progress=JobStatus``
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d 'progress={"code":"Failed"}' http://pssc1:8080/smrtportal/api/jobs/16471/status
+{
+"success" : true,
+"message" : "Job status updated"
+}
+```
+
+### <a name="JOB_Hist"></a> Job History Function
+Use this function to obtain the **history** of a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/history``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``List<JobStatus>``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/history
+[ {
+"jobStatusId" : 1773,
+"jobId" : 16437,
+"code" : "Completed",
+"jobStage" : null,
+"moduleName" : null,
+"percentComplete" : 0,
+"message" : null,
+"name" : null,
+"description" : null,
+"whenCreated" : "2012-02-03T17:13:31-0800",
+"createdBy" : null,
+"whenModified" : "2012-02-03T17:13:31-0800",
+"modifiedBy" : null
+}, {
+...
+}]
+```
+
+### <a name="JOB_Log"></a> Job Log Function
+Use this function to obtain the **log** for a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/log``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``Text file``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/log
+[INFO] 2012-01-30 23:52:41,437 [SmrtPipeContext 139] Configuration override for PROGRESS_URL: Old: --> New: http://pssc1:8080/smrtportal/api
+[INFO] 2012-01-30 23:52:41,437 [SmrtPipeContext 150] Changing working directory to /tmp/tmpTKPKi4
+...
+[INFO] 2012-01-31 00:35:10,443 [SmrtPipeContext 362] Removed 2 temporary directories
+[INFO] 2012-01-31 00:35:10,450 [SmrtPipeContext 365] Removed 1 temporary files
+[INFO] 2012-01-31 00:35:10,450 [SmrtPipeMain 394] Successfully exiting smrtpipe
+***
+```
+
+### <a name="JOB_TOC"></a> Analysis Table of Content Function
+Use this function to return a JSON object listing the reports and data files that were generated for a secondary analysis job. This function is used primarily by SMRT® Portal to display the report and data links in the View Data/Job Details page.
+
+* **URL:**  ``/jobs/{id}/contents``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``JSON object listing contents.``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/contents
+{
+"reportGroups" : [ {
+"name" : "General",
+"members" : [ {
+"group" : "General",
+"title" : "Workflow",
+"links" : [ {
+"path" : "workflow/Workflow.summary.html",
+"format" : "text/html"
+...
+}
+```
+
+### <a name="JOB_File"></a> Job Analysis File Function
+Use this function to obtain any specified **file** that was generated during a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/contents/{file}`` or ``/jobs/{id}/contents/{dir}/{file}``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``, ``file=filename``, ``dir=directory``
+* **Returns:** ``Data file, report XML, image, and so on.``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/contents/results/overview.xml
+<?xml version="1.0" encoding="UTF-8"?>
+<report>
+<layout onecolumn="true"/>
+<title>General Attribute Report</title>
+<attributes>
+<attribute id="n_smrt_cells" name="# of SMRT Cells" value="1">1</attribute>
+<attribute id="n_movies" name="# of Movies" value="2">2</attribute>
+</attributes>
+</report>
+```
+
+### <a name="JOB_COmplete"></a> Mark Job Complete Function
+Use this function to specify that a job using more than one SMRT® Cell is complete.
+
+* **URL:**  ``/jobs/{id}/complete``
+* **Method:** ``GET``
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16477/complete
+{
+"jobStatusId" : 1844,
+"jobId" : 16477,
+"code" : "Submitted",
+"jobStage" : null,
+"moduleName" : null,
+"percentComplete" : 0,
+"message" : "Job submitted",
+"name" : null,
+"description" : null,
+"whenCreated" : null,
+"createdBy" : null,
+"whenModified" : null,
+"modifiedBy" : null
+}
+```
+
+### <a name="JOB_inDrop"></a> List Jobs in Dropbox Function
+Use this function to list the jobs located in the Job Import Dropbox.
+
+* **URL:**  ``/jobs/dropbox-paths``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+* **Example:**
+```
+curl -u administrator:administrator#1 http://pssc1:8080/smrtportal/api/jobs/dropbox-paths
+[ "999991" ]
+```
+
+### <a name="JOB_Import"></a> Import Job Function
+Use this function to **import** a job located in the Job Import Dropbox.
+
+* **URL:**  ``/jobs/import``
+* **Method:** ``POST``
+* **Parameters:** ``paths=array of strings``
+* **Returns:** ``List<PrimaryKey>``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d 'paths=["/opt/smrtanalysis/common/jobs_dropbox/035169"]' http://pssc1:8080/smrtportal/api/jobs/import
+[ {
+"idValue" : 16480,
+"idProperty" : "jobId"
+} ]
+```
+
+### <a name="JOB_OV"></a> Job Overview Function
+Use this function to obtain metrics and reports for a job. (These are displayed on the SMRT® Portal Overview page.)
+
+* **URL:**  ``/jobs/{id}/overview``
+* **Method:** ``GET``
+* **Returns:** ``JSON object listing job attributes from the file metadata.rdf.``
+* **Example:**
+```
+curl -u administrator:administrator#1 http://pssc1:8080/smrtportal/api/jobs/16437/overview
+{
+"metrics" : {
+"Job ID" : 16437,
+"Job Name" : "2311084_0002",
+"# of SMRT Cells" : "1",
+"% Adapter Dimer (0-10bp)" : "0.34",
+"% Short Insert (11-100bp)" : "0.13",
+"# of Control Reads" : "208",
+"% Control Reads" : "0.33",
+"Mean Accuracy of Control Reads" : "76.07",
+"Mean Mapped Readlength of Control Reads" : "62",
+"95th Percentile Mapped Readlength of Control Reads" : "81",
+"Post-Filter # of Bases" : "264941458",
+"Post-Filter # of Reads" : "63280",
+"Post-Filter Mean Readlength" : "2793",
+"Post-Filter Mean Read Quality" : "0.834",
+"Mean Depth of Coverage" : "2556.35",
+"% Missing Bases" : "0.00",
+"Mean Mapped Subread Accuracy" : "83.24",
+"# of Mapped Reads" : "58774",
+"Mean Mapped Readlength" : "2675",
+"Mean Mapped Subread Readlength" : "668",
+"# of Movies" : "2"
+},
+"thumbnails" : [ {
+"report" : "Adapters",
+"title" : "Observed Insert Length Distribution Histogram",
+"link" : "results/filterReports_adapters.xml",
+"image" : "results/adapter_observed_insert_length_distribution_thumb.png"
+}, {
+"report" : "Coverage",
+"title" : "Depth of Coverage Across Reference",
+"link" : "results/coverage.xml",
+"image" : "results/coveragePlot_ref000001_thmb.png"
+}, {
+"report" : "Coverage",
+"title" : "Depth of Coverage Histogram",
+"link" : "results/coverage.xml",
+"image" : "results/coverageHistogram_thmb.png"
+}, {
+"report" : "Mapping",
+"title" : "Accuracy Histogram",
+"link" : "results/quality.xml",
+"image" : "results/accuracyHistogram_thmb.png"
+} ]
+}
+```
+
+### <a name="JOB_Heart"></a> Job Last Heartbeat Function
+Use this function to find out if a job is still alive.
+
+* **URL:**  ``/jobs/{id}/status/heartbeat``
+* **Method:** ``GET``
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d "data={'lastHeartbeat':'2011-06-20T00:50:20-0700'}" http://pssc1:8080/smrtportal/api/jobs/016471/status/heartbeat
+{
+"success" : true,
+"message" : "Job lastHeartbeat status updated"
+}
+```
+
+### <a name="JOB_RR"></a> Job Raw-Read Function
+Use this function to download a data file generated by a job.
+
+* **URL:**  ``/jobs/{id}/raw-reads``
+* **Method:** ``GET``
+* **Returns:** ``Data file, report XML, image, and so on.``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/raw-reads?format=fasta
+```
+
+## <a name="PRO_SVC"></a> Protocol Service
+The Protocol Service includes functions that you use to manage the protocols used by secondary analysis jobs.
+
+### <a name="PRO_List"></a> List Protocols Function
+Use this function to obtain all the **active** and **inactive** protocols in the system.
+
+* **URL:**  ``/protocols``
+* **Method:** ``GET``
+* **Returns:** ``PagedList<Protocol>``
+
+### <a name="PRO_ListNames"></a> List Protocol Names Function
+Use this function to obtain the names of all the **active** protocols in the system.
+
+* **URL:**  ``/protocols/names``
+* **Method:** ``GET``
+* **Returns:** ``List<string>``
+
+### <a name="PRO_Det"></a> Protocol Details Function
+Use this function to obtain **details** about a protocol.
+
+* **URL:**  ``/protocols/{id}``
+* **Method:** ``GET``
+* **Parameters:**  ``id=string``
+* **Returns:** ``An XML protocol file.``
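+* **Example** (a sketch; the protocol ID is taken from the Create Job example above):
+```
+curl http://pssc1:8080/smrtportal/api/protocols/RS_Resequencing.1
+```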
+
+### <a name="PRO_CR"></a> Create Protocol Function
+Use this function to **add** a new protocol to the system.
+
+* **URL:**  ``/protocols/create`` (Using POST), ``/protocols`` (Using PUT)
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``data=Xml(escaped)`` (Using POST), ``Xml`` (Using PUT). When using POST, the XML is escaped for transmission from a web browser, for example using the JavaScript ``escape`` function.
+* **Returns:** ``PrimaryKey``
+
+### <a name="PRO_UP"></a> Update Protocol Function
+Use this function to **update** a protocol.
+
+* **URL:**  ``/protocols/{id}``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``id=string``, ``data=Xml``
+* **Returns:** ``A notice message object.``
+
+### <a name="PRO_Del"></a> Delete Protocol Function
+Use this function to **permanently delete** a protocol.
+
+* **URL:**  ``/protocols/{id}``
+* **Method:** ``DELETE``
+* **Parameters:**  ``id=string``
+* **Returns:** ``A notice message object.``
+
+## <a name="SAM_SVC"></a> Sample Sheet Service
+The Sample Sheet Service includes a function to validate a specified sample sheet.
+
+### <a name="SAM_Val"></a> Validate Sample Sheet Function
+
+* **URL:**  ``/sample-sheets/validate``
+* **Method:** ``POST``
+* **Parameters:**  ``sampleSheet=SampleSheet``
+* **Returns:** ``A notice message object.``
+
+## <a name="SET_SVC"></a> Settings Service
+The Settings Service includes functions that you use to manage the SMTP host, send test email, manage instrument URIs, and manage the file input paths where SMRT® Portal looks for secondary analysis input, reference sequences, and jobs to import.
+
+### <a name="SET_CheckSpace"></a> Check Free Disk Space Function
+Use this function to check how much free space resides on the disk containing the jobs directory, by default located at ``/opt/smrtanalysis/common/jobs``.
+
+* **URL:**  ``/settings/free-space``
+* **Method:** ``GET``
+* **Returns:** ``Floating point value between 0 and 1, representing the fraction of disk space that is free.``
+
+### <a name="SET_GetDrop"></a> Get Job Dropbox Function
+Use this function to obtain the location of the dropbox where SMRT® Portal looks for jobs to import.
+
+* **URL:**  ``/settings/job-dropbox``
+* **Method:** ``GET``
+* **Returns:** ``The path for the job dropbox directory.``
+
+### <a name="SET_SetDrop"></a> Set Job Dropbox Function
+Use this function to **specify** the location of the Job Import Dropbox where SMRT® Portal looks for jobs to import.
+
+* **URL:**  ``/settings/job-dropbox``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``path=string``
+* **Returns:** ``A notice message object.``
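+* **Example** (a sketch; the path shown is the default jobs dropbox from the Import Job example):
+```
+curl -u administrator:administrator#1 -d 'path=/opt/smrtanalysis/common/jobs_dropbox' http://pssc1:8080/smrtportal/api/settings/job-dropbox
+```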
+
+### <a name="SET_GetRefDrop"></a> Get Reference Sequence Dropbox Function
+Use this function to obtain the location of the Reference Sequence Dropbox where SMRT® Portal looks for reference sequences.
+
+* **URL:**  ``/settings/reference-dropbox``
+* **Method:** ``GET``
+* **Returns:** ``The path for the reference sequence dropbox directory.``
+
+### <a name="SET_SetRefDrop"></a> Set Reference Sequence Dropbox Function
+Use this function to **specify** the location of the Reference Sequence Dropbox where SMRT® Portal looks for reference sequences.
+
+* **URL:**  ``/settings/reference-dropbox``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``path=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_GetSMTP"></a> Get SMTP Host Function
+Use this function to obtain the name of the current SMTP host.
+
+* **URL:**  ``/settings/smtp-host``
+* **Method:** ``GET``
+* **Returns:** ``The host name.``
+
+### <a name="SET_SetSMTP"></a> Set SMTP Host Function
+Use this function to **specify** the name of the SMTP host to use.
+
+* **URL:**  ``/settings/smtp-host``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``host=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_Email"></a> Send Test Email Function
+Use this function to send a test email to the administrator, using the specified SMTP Host.
+
+* **URL:**  ``/settings/smtp-host/test``
+* **Method:** ``GET``
+* **Parameters:**  ``host=string``
+* **Returns:** ``A notice message object``; a test email is then sent to the administrator.
+
+### <a name="SET_GetPath"></a> Get Input Paths Function
+Use this function to obtain the file input paths where SMRT® Portal looks for secondary analysis input.
+
+* **URL:**  ``/settings/input-paths``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+
+### <a name="SET_AddPath"></a> Add Input Paths Function
+Use this function to **add** file input paths where SMRT® Portal looks for secondary analysis input.
+
+* **URL:**  ``/settings/input-paths``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``data=array of paths``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_AddPath"></a> Remove Input Paths Function
+Use this function to **remove** file input paths where SMRT® Portal looks for secondary analysis input.
+
+* **URL:**  ``/settings/input-paths``
+* **Method:** ``DELETE``
+* **Parameters:**  ``data=array of paths``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_ValPath"></a> Validate Path for use in pbids Function
+Use this function to validate the URI (Uniform Resource Identifier) path that specifies where the primary analysis data is stored. You specify the path using the RS Remote software; the path uses the ``pbids`` format.
+
+* **URL:**  ``/settings/validate-path``
+* **Method:** ``POST``
+* **Parameters:**  ``path=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_GetURI"></a> Get Instrument URIs Function
+Use this function to obtain the URI (Uniform Resource Identifier) that specifies the location of the PacBio® instrument(s) running the Instrument Control Web Services.
+
+* **URL:**  ``/settings/instrument-uris``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+
+### <a name="SET_SetURI"></a> Add Instrument URIs Function
+Use this function to **add** the URI (Uniform Resource Identifier) that specifies the location of the PacBio® instrument(s) running the Instrument Control Web Services.
+
+* **URL:**  ``/settings/instrument-uris``
+* **Method:** ``POST``,  ``PUT``
+* **Parameters:**  ``data=array of URIs``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_DelURI"></a> Remove Instrument URIs Function
+Use this function to **remove** the URI (Uniform Resource Identifier) that specifies the location of the PacBio® instrument(s) running the Instrument Control Web Services.
+
+* **URL:**  ``/settings/instrument-uris``
+* **Method:** ``DELETE``
+* **Parameters:**  ``data=array of URIs``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_TestURI"></a> Test Instrument URIs Function
+Use this function to **test** the URI (Uniform Resource Identifier) that specifies the location of the PacBio® instrument(s) running the Instrument Control Web Services.
+
+* **URL:**  ``/settings/instrument-uris/test``
+* **Method:** ``POST``
+* **Parameters:**  ``uri=instrument URI``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_CheckUI"></a> Check Anonymous UI Access Function
+Use this function to check whether users have read-only access to SMRT® Portal without logging in. (Users must still log in to **create** or **modify** jobs.)
+
+* **URL:**  ``/settings/restrict-web-access``
+* **Method:** ``GET``
+* **Returns:** ``True/False``
+
+### <a name="SET_SetUI"></a> Set Anonymous UI Access Function
+Use this function to **specify** whether users have read-only access to SMRT® Portal without logging in. (Users must still log in to **create** or **modify** jobs.)
+
+* **URL:**  ``/settings/restrict-web-access``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``value=true|false``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_CheckWS"></a> Check Anonymous Web Services Access Function
+Use this function when your organization has written custom software to access SMRT® Pipe, or integrate with a LIMS system. The function checks whether software can have access to certain web services methods **without** authentication.
+
+* **URL:**  ``/settings/restrict-service-access``
+* **Method:** ``GET``
+* **Returns:** ``True/False``
+
+### <a name="SET_SetWS"></a> Set Anonymous Web Services Access Function
+Use this function when your organization has written custom software to access SMRT® Pipe, or integrate with a LIMS system. The function specifies whether software can have access to certain web services methods **without** authentication. (The software would supply the credentials programmatically.)
+
+* **URL:**  ``/settings/restrict-service-access``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``value=true|false``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_SetUIWS"></a> Set Anonymous Web and UI Access Function
+Use this function to specify 1) Whether a user has read-only access to SMRT® Portal and 2) Whether software can use certain web services methods without authentication.
+
+* **URL:**  ``/settings/restrict-access``
+* **Method:** ``POST``
+* **Parameters:**  ``web=true|false``, ``service=true|false``
+* **Returns:** ``A notice message object.``
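+* **Example** (a sketch; the values are illustrative and assume ``true`` means access is restricted):
+```
+curl -u administrator:administrator#1 -d 'web=false&service=true' http://pssc1:8080/smrtportal/api/settings/restrict-access
+```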
+
+### <a name="SET_GetArch"></a> Get Job Archive Directory Function
+Use this function to obtain the path to the directory used to store archived jobs.
+
+* **URL:**  ``/settings/job-archive``
+* **Method:** ``GET``
+* **Returns:** ``The path for the job archive directory.``
+
+### <a name="SET_SetArch"></a> Set Job Archive Directory Path Function
+Use this function to **set** the path to the directory used to store archived jobs.
+
+* **URL:**  ``/settings/job-archive``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``path=string``
+* **Returns:** ``A notice message object.``
+
+## <a name="GR_SVC"></a> Group Service
+The Group Service includes functions that you use to manage groups of SMRT® Portal users.
+
+### <a name="GR_CR"></a> Create Group Function
+Use this function to **create** a new group of users. **(Administrators only)**.
+
+* **URL:**  ``/groups/create`` (Using POST), ``/groups`` (Using PUT)
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:**  ``data=Group`` (Using POST), ``group`` (Using PUT). In both cases, the name must be **unique**, and ``CreatedBy`` must be **non-null**.
+* **Returns:** ``PrimaryKey``
+
+### <a name="GR_Save"></a> Save Group Function
+Use this function to **save** a specified group of users. **(Administrators only)**.
+
+* **URL:**  ``/groups/{id}``
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:**  ``id=int``, ``data=group``
+* **Returns:** ``A notice message object.``
+
+### <a name="GR_Del"></a> Delete Group Function
+Use this function to **delete** a specified group of users. **(Administrators only)**.
+
+* **URL:**  ``/groups/{id}``
+* **Method:** ``DELETE``
+* **Parameters:**  ``id=int``
+* **Returns:** ``A notice message object.``
+
+### <a name="GR_ListNames"></a> List Group Names Function
+Use this function to get a list of the names of groups of users on the system. **(Administrators only)**.
+
+* **URL:**  ``/groups/names``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+
+### <a name="GR_List"></a> List Groups Function
+Use this function to return information about the groups of users available on the system.  **(Administrators only)**.
+
+* **URL:**  ``/groups``
+* **Method:** ``GET``, ``POST``
+* **Parameters:**  ``options=SearchOptions``
+* **Returns:** ``PagedList<Group>``
+
+***
+
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2013, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
\ No newline at end of file
diff --git a/docs/Secondary-Analysis-Web-Services-API-v2.1.md b/docs/Secondary-Analysis-Web-Services-API-v2.1.md
new file mode 100644
index 0000000..d798484
--- /dev/null
+++ b/docs/Secondary-Analysis-Web-Services-API-v2.1.md
@@ -0,0 +1,1480 @@
+* [Introduction](#Intro)
+* [Security](#Sec)
+* [Overview](#Ov)
+* [Web Services Behavior](#WSB)
+* [HTTP Response Codes](#HCODE)
+* [Search Conventions](#CONV)
+* [Examples](#EX)
+* [Reference Service](#REF_SVC)
+ *  [List References Function](#REF_List_Ref)
+ *  [Reference Details Function](#REF_Ref_Det)
+ *  [List References by Type Function](#REF_List_Ref_Type)
+ *  [Create Reference Function](#REF_Create_Ref)
+ *  [Save Reference Function](#REF_Save_Ref)
+ *  [Delete Reference Function](#REF_Del_Ref)
+ *  [List Reference Dropbox Files Function](#REF_List_DB)
+ *  [RSS Feed Function](#REF_RSS)
+* [User Service](#USER)
+ *  [List Users Function](#USR_List)
+ *  [User Details Function](#USR_Det)
+ *  [Create User Function](#USR_Create)
+ *  [Save User Function](#USR_Save)
+ *  [Delete User Function](#USR_Del)
+ *  [Register User Function](#USR_Reg)
+ *  [Change Password Function](#USR_CPW)
+ *  [Reset Password Function](#USR_RPW)
+ *  [List User-Defined Fields Function](#USR_LUDF)
+ *  [List User-Defined Field Names Function](#USR_LUDFN)
+* [Secondary Analysis Input Service](#SA_SVC)
+  * [List Secondary Inputs Function](#SA_LInput)
+  * [Secondary Input Details Function](#SA_InputDet)
+  * [Create Secondary Input Function](#SA_CR)
+  * [Save Secondary Input Function](#SA_SA)
+  * [Last Timestamp of Secondary Input Function](#SA_LTime)
+  * [Import Secondary Input Metadata Function](#SA_Imp)
+  * [Scan for New Input Metadata Function](#SA_Scan)
+  * [Delete Secondary Input Function](#SA_Del)
+  * [Compatibility Function](#SA_Comp)
+  * [Groups Function](#SA_Group)
+  * [Cleanup Function](#SA_Clean)
+* [Jobs Service](#JOB_SVC)
+  * [List Jobs Function](#JOB_List)
+  * [List Jobs by Status Function](#JOB_ListBStatus)
+  * [List Jobs By Protocol Function](#JOB_ListBProt)
+  * [Job Details Function](#JOB_Det)
+  * [Create Job Function](#JOB_CR)
+  * [Save Job Function](#JOB_Save)
+  * [Delete Job Function](#JOB_Del)
+  * [Archive Job Function](#JOB_Arch)
+  * [Restore Archived Job Function](#JOB_RestArch)
+  * [Get Job Metrics Function](#JOB_Metrics)
+  * [Get Job Protocol Function](#JOB_Prot)
+  * [Set Job Protocol Function](#JOB_SetProt)
+  * [Get Job Inputs Function](#JOB_Input)
+  * [Start Job Function](#JOB_Start)
+  * [Get Job Status Function](#JOB_GetStatus)
+  * [Update Job Status Function](#JOB_UpStatus)
+  * [Job History Function](#JOB_Hist)
+  * [Job Log Function](#JOB_Log)
+  * [Analysis Table of Content Function](#JOB_TOC)
+  * [Job Analysis File Function](#JOB_File)
+  * [Mark Job Complete Function](#JOB_COmplete)
+  * [List Jobs in Dropbox Function](#JOB_inDrop)
+  * [Import Job Function](#JOB_Import)
+  * [Job Last Heartbeat Function](#JOB_Heart)
+  * [Job Raw-Read Function](#JOB_RR)
+* [Protocol Service](#PRO_SVC)
+  * [List Protocols Function](#PRO_List)
+  * [List Protocol Names Function](#PRO_ListNames)
+  * [Protocol Details Function](#PRO_Det)
+  * [Create Protocol Function](#PRO_CR)
+  * [Update Protocol Function](#PRO_UP)
+  * [Delete Protocol Function](#PRO_Del)
+* [Sample Sheet Service](#SAM_SVC)
+  * [Validate Sample Sheet Function](#SAM_Val)
+* [Settings Service](#SET_SVC)
+  * [Check Free Disk Space Function](#SET_CheckSpace)
+  * [Get Job Dropbox Function](#SET_GetDrop)
+  * [Set Job Dropbox Function](#SET_SetDrop)
+  * [Get Reference Sequence Dropbox Function](#SET_GetRefDrop)
+  * [Set Reference Sequence Dropbox Function](#SET_SetRefDrop)
+  * [Get SMTP Host Function](#SET_GetSMTP)
+  * [Set SMTP Host Function](#SET_SetSMTP)
+  * [Send Test Email Function](#SET_Email)
+  * [Get Input Paths Function](#SET_GetPath)
+  * [Add Input Paths Function](#SET_AddPath)
+  * [Remove Input Paths Function](#SET_AddPath)
+  * [Validate Path for Use in pbids Function](#SET_ValPath)
+  * [Get Instrument URIs Function](#SET_GetURI)
+  * [Add Instrument URIs Function](#SET_SetURI)
+  * [Remove Instrument URIs Function](#SET_DelURI)
+  * [Test Instrument URIs Function](#SET_TestURI)
+  * [Check Anonymous UI Access Function](#SET_CheckUI)
+  * [Set Anonymous UI Access Function](#SET_SetUI)
+  * [Check Anonymous Web Services Access Function](#SET_CheckWS)
+  * [Set Anonymous Web Services Access Function](#SET_SetWS)
+  * [Set Anonymous Web and UI Access Function](#SET_SetUIWS)
+  * [Get Job Archive Directory Function](#SET_GetArch)
+  * [Set Job Archive Directory Path Function](#SET_SetArch)
+* [Groups Service](#GR_SVC)
+  * [Create Group Function](#GR_CR)
+  * [Save Group Function](#GR_Save)
+  * [Delete Group Function](#GR_Del)
+  * [List Group Names Function](#GR_ListNames)
+  * [List Groups Function](#GR_List)
+
+***
+
+## <a name="Intro"></a> Introduction
+
+This document describes the Secondary Analysis Web Services API provided by Pacific Biosciences. The API allows developers to search, submit and manage secondary analysis jobs, data, results, and user accounts.
+
+Secondary Analysis Web Services follow the **REST** (Representational State Transfer) model for web services, and use the JSON (JavaScript Object Notation) format. The web services:
+
+* Run as the server-side layer for managing secondary analysis jobs.
+* Maintain data integrity in the secondary analysis database and file system.
+* Act as a layer on top of SMRT® Pipe, the lower-level code that performs secondary analysis processing.
+* Support AJAX access from web clients, and can be used from the command line with wget or curl, from scripting languages (PHP, Python, Perl), and from Java and C#.
+
+The API includes functions for:
+* Managing **reference sequences**
+* Managing **user accounts** and **passwords**
+* Managing **groups of users**
+* Managing **instrument output** (SMRT® Cell data)
+* Managing secondary analysis **jobs**
+* Managing **protocols**
+* Validating **sample sheets**
+* Managing **settings**
+
+The latest version of the API and this documentation are available from the PacBio® Developer’s Network at http://www.pacbiodevnet.com.
+
+## <a name="Sec"></a> Security
+
+* Anonymous read-only access to web services is enabled by **default**.
+* Services that **create** and **modify** data require authentication.
+* Authentication is enforced for administrator, scientist and technician-level access and **cannot** be disabled.
+* An application setting (``restrictAccess``) in the ``web.xml`` file enables or disables authentication for all web services that are **not** restricted to administrators.
+
+
+## <a name="Ov"></a> Overview
+
+Secondary Analysis Web Services API:
+
+* Run under a standard Linux/Apache environment, and can be accessed from Windows, Mac OS or Linux.
+* Require MySQL.
+* Are installed as part of the secondary analysis system, and require a one-time configuration. Any additional changes can be made using SMRT® Portal.
+* Require that SMRT® Pipe be correctly configured and working.
+
+## <a name="WSB"></a> Web Services Behavior
+
+* URLs and parameters are all **case-sensitive.**
+
+* Most requests use the HTTP ``GET`` command to retrieve an object in JSON (JavaScript Object Notation) format. GET data to view details of an object: ``/{objects}/{id_or_name}``. **Example:**
+``curl http://{server}/smrtportal/api/jobs/12345``
+
+* **Deleting objects** uses the HTTP DELETE command: ``DELETE data: /{objects}/{id_or_name}``. **Example:** ``curl -X DELETE -u administrator:somepassword http://{server}/smrtportal/api/jobs/12345``
+
+* **Saving objects to the server, manipulating objects, and operating on the server:** These use the HTTP POST command common to standard HTML forms. This is **not** the same as for file uploads, which use a different mime type (multipart form data). In this case, the request body consists of key-value form pairs. POST data to create or update an object: ``/{objects}`` or ``/{objects}/{id}``. **Example:**
+
+```
+curl -d 'data={the job returned from the GET method, with some edits}' http://{server}/smrtportal/api/jobs/12345
+```
+
+* **Saving objects to the server** also supports the PUT and POST commands with alternative content-types, such as application/json and text/xml. In this case, the request body consists of JSON or XML, and contains no key-value form pairs: PUT/POST data to save/update objects: ``/{objects}``
+
+* In most cases, ``/{objects}/create`` accepts either of these ways of saving objects.
+
+* Web services requiring authentication use the HTTP header’s Authorization feature. **Example:**
+``curl -u "janeuser:somepassword" http://server/secret/sauce``. Alternatively, you can log in using the users/log-on method and store the cookie for use with future web service calls.
+
+* **Creating objects** can be done using an HTTP POST to the ``/create`` method, or by using an HTTP PUT with JSON or XML as the request body. The PUT method is considered more of a REST “purist” approach, whereas POST is more widely supported by web browsers.
+
+* By default, most web services return JSON. However, it’s possible in most cases to change the result format by adding an Accept header to the request. Most methods will support ``Accept: text/xml`` as well as ``application/json``, ``text/csv`` and ``text/tsv`` (tab-separated values).
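+
+The following minimal Python sketch illustrates these conventions, written in the style of the longer scan/save/start example later in this document and assuming the ``localhost:8080`` server and ``administrator`` account used in other examples. It fetches a job as JSON, then re-requests the same object as XML by changing only the ``Accept`` header:
+
+```
+import base64
+import urllib2
+
+def get_job(job_id, accept='application/json'):
+    # Build an authenticated GET request; only the Accept header differs
+    # between the JSON and XML variants of the same call.
+    url = 'http://localhost:8080/smrtportal/api/jobs/%d' % job_id
+    request = urllib2.Request(url)
+    request.add_header('Accept', accept)
+    key = 'Basic %s' % base64.b64encode('administrator:administrator#1')
+    request.add_header('Authorization', key)
+    return urllib2.urlopen(request).read()
+
+print(get_job(16437))               # JSON body
+print(get_job(16437, 'text/xml'))   # same object, serialized as XML
+```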
+
+### Passing Arguments
+
+* Arguments that are primitive types can be passed like the standard HTTP POST parameters: ``param1=value1&param2=value2``
+
+* Arguments that are objects should be serialized as JSON: ``param1={"name1":"value1","name2":"value2"}``
+
+* When using an HTTP PUT, simply pass the JSON or XML object in the request body:
+``{"name1": "Value1", "name2": "Value2"}``
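+
+The snippet below is a sketch of both styles, again assuming the local server used in the other examples. It sends an object argument as a JSON-serialized form field, then sends a JSON object directly as a PUT request body:
+
+```
+import json
+import urllib
+import urllib2
+
+base = 'http://localhost:8080/smrtportal/api'
+
+# Style 1: an object argument serialized as JSON inside a form field (POST).
+options = {'sortOrder': 'asc', 'sortBy': 'name', 'page': 1}
+form_body = urllib.urlencode({'options': json.dumps(options)})
+print(urllib2.urlopen(base + '/jobs', form_body).read())
+
+# Style 2: the object itself is the request body (PUT).
+# Authentication headers are omitted for brevity; see the earlier sketch.
+job = {'name': 'demo_job', 'createdBy': 'admin'}
+request = urllib2.Request(base + '/jobs', data=json.dumps(job))
+request.add_header('Content-Type', 'application/json')
+request.get_method = lambda: 'PUT'  # urllib2 defaults to POST when data is set
+print(urllib2.urlopen(request).read())
+```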
+
+### Date and Time Format
+* All dates and times are in the ISO 8601 Universal Time format.
+
+## <a name="HCODE"></a> HTTP Response Codes
+
+### Success Conditions
+
+When successful, a web services call returns an object or list of objects serialized as JSON, unless a different format is requested using an ``Accept`` header. You can deserialize the object in any language as a dictionary/hashtable, a list, or a list of dictionary/hashtables. For more advanced use, you can create custom, strongly-typed objects.
+
+For service calls that **don’t** return data from a server, a Notice message object with a uniform signature is returned. For example: ``{"success": true, "message": "It worked"}``
+
+* **Return Value:** ``200 OK``  **Explanation:** The web service call returned successfully. The
+body of the response contains the requested JSON object. For function calls, the response may be a
+simple status message object.
+
+* **Return Value:** ``201 Created``  **Explanation:** The web service created a new object on the server. A simple PrimaryKey object is returned, such as: ``{"idName":"id","idValue":12345}``.
+The response will contain a header: Location: ``http://where/the/new/object/is``
+
+### Error Conditions
+
+When errors occur, the web services return an HTTP error code. The body of the response contains a standard JSON object that can be uniformly deserialized as a strongly typed object, or left as a dictionary/hashtable. For example: ``{"success":false, "type":"IllegalArgumentException", "message":"Job id cannot be null"}``
+
+* **Return Value:** ``400 Bad request``  **Explanation:** The arguments were incorrect, or the web service was called incorrectly.
+
+* **Return Value:** ``403 Forbidden``  **Explanation:** The web service requires authentication, and the credentials in the HTTP header’s Authorization section were rejected.
+
+* **Return Value:** ``404 Not Found``  **Explanation:** The search or web service call did not find the
+requested object.
+
+* **Return Value:** ``409 Conflict``  **Explanation:** The attempt to update or delete an object failed.
+
+* **Return Value:** ``413 Request Entity Too Large``  **Explanation:** When searching a large database table, there may be practical limits to how many records can be returned. The query asked for too many records.
+
+* **Return Value:** ``500 Internal Server Error``  **Explanation:** An internal error occurred.
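+
+Because every error body shares this shape, client code can handle all of the above cases uniformly. A minimal sketch, assuming the same local server as in the other examples:
+
+```
+import json
+import urllib2
+
+def call(url):
+    # On HTTP error codes urllib2 raises HTTPError; the response body still
+    # carries the standard {"success": false, ...} JSON object described above.
+    try:
+        return json.loads(urllib2.urlopen(url).read())
+    except urllib2.HTTPError as e:
+        error = json.loads(e.read())
+        raise RuntimeError('%s %s: %s' % (e.code, error.get('type'),
+                                          error.get('message')))
+
+call('http://localhost:8080/smrtportal/api/jobs/999999')  # e.g. 404 Not Found
+```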
+
+## <a name="CONV"></a> Search Conventions
+
+Lists of objects are retrieved using either the HTTP GET or POST commands. For objects with a small number of members, a JSON list is returned. Searching and filtering are possible through web services; see the documentation for the jqGrid plugin at http://www.trirand.com/jqgridwiki/.
+
+```
+GET the full list: /jobs
+```
+
+For objects with a **large** number of records (such as secondary analysis jobs and instrument output), results are paged. A wrapper object specifies the page number, total number of records, total number of pages, and the list of objects themselves. The data structure is taken directly from the jqGrid plugin; for details see http://www.trirand.com/blog. Following is a sample structure: ``{"page":1,"records":510,"total":11,"rows":[{object1},{obj2}]}`` where:
+
+* ``page`` is the current page.
+* ``records`` is the total number of records across all pages.
+* ``total`` is the total number of pages.
+* ``rows`` is a list of objects for the current page.
+
+### Usage
+
+* GET the first page: ``/{objects}``  **Example:**  ``curl http://{server}/smrtportal/api/jobs``
+
+* POST search or filtering options to the same url: ``/{objects}`` **Example:**  ``curl -d 'options={"page":2,"rows":10,"sortOrder":"desc","sortBy":"jobId"}' http://{server}/smrtportal/api/jobs``
+
+The set of search and filtering parameters available is extensive, flexible, and is also derived from the jqGrid plugin. Key options include:
+
+* **Option:** ``page``  **Values:** ``int``  **Description:** Page number, starting from 1.
+
+* **Option:** ``rows``  **Values:** ``int``  **Description:** Rows per page. If the requested number is too large, a ``413 Request Entity Too Large`` error is generated.
+
+* **Option:** ``sortOrder``  **Values:** ``asc`` or ``desc``  **Description:** Sort order, ascending or descending.
+
+* **Option:** ``sortBy``  **Values:** ``String``; object property name  **Description:** ID of the column property to sort on. Example: ``jobId``.
+
+Arguments can be passed as JSON objects. For example: ``options={"sortOrder":"asc", "sortBy":"name", "page":1}``
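+
+Putting these fields together, a client can walk a large job table one page at a time. A sketch, with the server as in the earlier examples and ``total`` used as the page count defined above:
+
+```
+import json
+import urllib
+import urllib2
+
+def iter_jobs(rows_per_page=50):
+    url = 'http://localhost:8080/smrtportal/api/jobs'
+    page = 1
+    while True:
+        options = {'page': page, 'rows': rows_per_page,
+                   'sortOrder': 'desc', 'sortBy': 'jobId'}
+        body = urllib.urlencode({'options': json.dumps(options)})
+        result = json.loads(urllib2.urlopen(url, body).read())
+        for row in result['rows']:
+            yield row
+        if page >= result['total']:  # 'total' is the total number of pages
+            break
+        page += 1
+
+for job in iter_jobs():
+    print(job['jobId'])
+```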
+
+## <a name="EX"></a> Examples
+Commonly-used methods include sample curl commands and sample returned values. The examples include a user named ``administrator`` and a secondary analysis server located at ``http://pssc1:8080/``.
+
+## <a name="REF_SVC"></a> Reference Service
+The Reference Service includes functions that you use to manage the reference sequences used in secondary analysis. (Reference sequences are used to map reads against a reference genome for resequencing and for filtering reads.)
+
+### <a name="REF_List_Ref"></a> List References Function
+Use this function to list the reference sequences available on the system.
+
+* **URL:** ``/reference-sequences``  
+* **Method:** ``GET`` or ``POST``
+* **Parameters:**  ``options=SearchOptions`` (Only ``sortOrder`` and ``sortBy`` are supported.)
+* **Returns:** ``PagedList<ReferenceEntry>``
+
+### <a name="REF_Ref_Det"></a> Reference Details Function
+Use this function to obtain details about a specific reference sequence.
+
+* **URL:** ``/reference-sequences/{id}``  
+* **Method:** ``GET``
+* **Parameters:**  ``id=string``
+* **Returns:** ``ReferenceEntry``
+
+### <a name="REF_List_Ref_Type"></a> List References by Type Function
+Use this function to list the reference sequences available on the system by their **type**.
+
+* **URL:** ``/reference-sequences/by-type/{name}``  
+* **Method:** ``GET`` or ``POST``
+* **Parameters:**  
+  * ``name= control`` or ``name=sample``
+  * ``options=SearchOptions`` (Only ``sortOrder``, ``sortBy`` and ``columnNames`` are supported.)
+* **Returns:** ``PagedList<ReferenceEntry>``
+
+### <a name="REF_Create_Ref"></a> Create Reference Function
+Use this function to **create** a new reference sequence.
+
+* **URL:** ``/reference-sequences/create`` (Using POST), ``/reference-sequences`` (Using PUT)  
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:**  
+  * ``data=ReferenceSequence`` (Using POST)
+  * ``ReferenceSequence`` (Using PUT)
+* **Returns:** ``PrimaryKey``
+
+### <a name="REF_Save_Ref"></a> Save Reference Function
+Use this function to **save** a reference sequence.
+
+* **URL:**  ``/reference-sequences/{id}``
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:**  
+  * ``id=string``
+  * ``data=ReferenceSequence``
+* **Returns:** ``A notice message object``
+
+### <a name="REF_Del_Ref"></a> Delete Reference Function
+Use this function to **delete** a reference sequence.
+
+* **URL:**  ``/reference-sequences/{id}``
+* **Method:** ``DELETE``
+* **Parameters:** ``id=string``
+* **Returns:** ``A notice message object``
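+
+Deletion requires authentication and the HTTP DELETE verb, which urllib2 does not expose directly. A sketch, using ``lambda`` as an illustrative reference id and the server and credentials from the other examples:
+
+```
+import base64
+import urllib2
+
+url = 'http://localhost:8080/smrtportal/api/reference-sequences/lambda'
+request = urllib2.Request(url)
+request.add_header('Authorization',
+                   'Basic %s' % base64.b64encode('administrator:administrator#1'))
+request.get_method = lambda: 'DELETE'   # urllib2 has no native DELETE support
+print(urllib2.urlopen(request).read())  # expect a {"success": ..., "message": ...} notice
+```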
+
+### <a name="REF_List_DB"></a> List Reference Dropbox Files Function
+Use this function to list the reference files located in the Reference Sequence Dropbox.
+
+* **URL:**  ``/reference-sequences/dropbox-files``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+
+### <a name="REF_RSS"></a> RSS Feed Function
+Use this function to access an RSS feed which lists when secondary analysis jobs complete or fail.
+
+* **URL:**  ``/rss``
+* **Method:** ``GET``
+* **Returns:** ``An RSS XML file.``
+
+## <a name="USER"></a> User Service
+The User Service includes functions used to manage **users**, **roles** and **passwords**.
+
+### <a name="USR_List"></a> List Users Function
+Use this function to list users on the system. **(Administrators only)**
+
+* **URL:**  ``/users``
+* **Method:** ``GET`` or ``POST``
+* **Parameters:** ``options=SearchOptions``  (Only ``sortOrder``, ``sortBy`` and ``columnNames`` are supported.)
+* **Returns:** ``PagedList<User>``
+
+### <a name="USR_Det"></a> User Details Function
+Use this function to obtain information about a specific user. **(Administrators only)**
+
+* **URL:**  ``/users``
+* **Method:** ``GET``
+* **Parameters:** ``userName=string`` 
+* **Returns:** ``User``
+
+### <a name="USR_Create"></a> Create User Function
+Use this function to **add** a new user to the system. Note that the user needs to be registered to gain access. **(Administrators only)**
+
+* **URL:**  ``/users/create`` (Using POST), ``/users`` (Using PUT)
+* **Method:** ``POST``, ``PUT``
+* **Parameters:** 
+ * ``data=User`` (Using POST)
+ * ``User`` (Using PUT)
+* **Returns:** ``PrimaryKey``
+
+### <a name="USR_Save"></a> Save User Function
+Use this function to **save** changes made to a user. **(Administrators only)**
+
+* **URL:**  ``/users/{userName}``
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:** 
+ * ``userName=string``
+ * ``data=User``
+* **Returns:** ``A notice message object.``
+
+### <a name="USR_Del"></a> Delete User Function
+Use this function to **delete** a user from the system. **(Administrators only)**
+
+* **URL:**  ``/users/{userName}``
+* **Method:** ``DELETE``
+* **Parameters:** ``userName=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="USR_Reg"></a> Register User Function
+Use this function to register a new user.
+
+* **URL:**  ``/users/register``
+* **Method:** ``POST``
+* **Parameters:** 
+ * ``data=User``  **(Required)**
+ * ``userName=string``
+ * ``email=string``
+ * ``password=string``
+ * ``confirmPassword=string``
+* **Returns:** ``User``
+
+### <a name="USR_CPW"></a> Change Password Function
+Use this function to change a user’s password with a specified replacement password. This functionality is available to administrators for **all** passwords.
+
+* **URL:**  ``/users/{userName}/change-password``
+* **Method:** ``POST``
+* **Parameters:** 
+ * ``data=User``  **(Required)**
+ * ``userName=string``
+ * ``newPassword=string``
+ * ``password=string``
+ * ``confirmPassword=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="USR_RPW"></a> Reset Password Function
+Use this function to reset a user’s password. The user is then asked to change their password. **(Administrators only)**
+
+* **URL:**  ``/users/{userName}/reset-password``
+* **Method:** ``GET``
+* **Returns:** ``A notice message object.``
+
+### <a name="USR_LUDF"></a> List User-Defined Fields Function
+Use this function to obtain a list of user-defined fields. These fields are created using the RS Remote software. If a run specified a secondary analysis protocol, these fields (if defined) propagate throughout the secondary analysis pipeline.
+
+* **URL:**  ``/custom-fields``
+* **Method:** ``GET``
+* **Parameters:** ``options=SearchOptions`` (Only ``sortOrder``, ``sortBy`` and ``columnNames`` are supported.)
+* **Returns:** ``PagedList<CustomField>``
+
+### <a name="USR_LUDFN"></a> List User-Defined Field Names Function
+Use this function to obtain a list of the names of **user-defined fields**. These fields are created using the RS Remote software. If a run specified a secondary analysis protocol, these fields (if defined) propagate throughout the secondary analysis pipeline.
+
+* **URL:**  ``/custom-fields/names``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+
+## <a name="SA_SVC"></a> Secondary Analysis Input Service
+The Secondary Analysis Input Service includes functions used to manage the data associated with each SMRT® Cell that is included in a secondary analysis job.
+
+### <a name="SA_LInput"></a> List Secondary Inputs Function
+Use this function to obtain a list of secondary analysis input.
+
+* **URL:**  ``/inputs``
+* **Method:** ``GET`` or ``POST``
+* **Parameters:** ``options=SearchOptions``
+* **Returns:** ``PagedList<Input>``
+
+### <a name="SA_InputDet"></a> Secondary Input Details Function
+Use this function to obtain details for a specified secondary analysis input.
+
+* **URL:**  ``/inputs``
+* **Method:** ``GET``
+* **Parameters:** ``id=int``
+* **Returns:** ``Input``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/inputs
+{
+"page" : 1,
+"records" : 1,
+"total" : 1,
+"rows" : [ {
+"adapterSequence" : "ATCTCTCTCttttcctcctcctccgttgttgttgttGAGAGAGAT",
+"bindingKitBarcode" : "000001001546011123111",
+"bindingKitControl" : "Standard_v1",
+"bindingKitExpirationDate" : "2011-12-31T00:00:00-0800",
+...
+} ]
+}
+```
+
+### <a name="SA_CR"></a> Create Secondary Input Function
+Use this function to **create** secondary analysis input.
+
+* **URL:**  ``/inputs/create`` (Using POST), ``/inputs`` (Using PUT)
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  
+  * ``data=Input`` (Using POST)
+  * ``Input`` (Using PUT)
+* **Returns:** ``PrimaryKey``
+
+### <a name="SA_SA"></a> Save Secondary Input Function
+Use this function to **save** secondary analysis input.
+
+* **URL:**  ``/inputs/{id}``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  
+  * ``data=Input``
+  * ``id=int``
+* **Returns:** ``A notice message object.``
+
+### <a name="SA_LTime"></a> Last Timestamp of Secondary Input Function
+Use this function to obtain the time of the last secondary analysis input saved to the database.
+
+* **URL:**  ``/inputs/last-timestamp``
+* **Method:** ``GET``
+* **Returns:** ``Date``
+
+### <a name="SA_Imp"></a> Import Secondary Input Metadata Function
+Use this function to **import** secondary analysis input.
+
+* **URL:**  ``/inputs/import``
+* **Method:** ``POST``
+* **Parameters:**  ``data=array of Collections from instrument``
+* **Returns:** ``List<Input>``
+
+### <a name="SA_Scan"></a> Scan for New Input Metadata Function
+Use this function to **scan** for secondary analysis input.
+
+* **URL:**  ``/inputs/scan``
+* **Method:** ``POST``
+* **Parameters:**  ``paths=array of string``
+* **Returns:** ``List<Input>``
+* **Example:** ``curl -u administrator:administrator#1 -d 'paths=["/data/smrta/smrtanalysis/common/inputs_dropbox"]' http://secondary_host:8088/smrtportal/api/inputs/scan``
+* **Python code example:**
+
+```
+import os
+import logging
+import urllib
+import urllib2
+import json
+import base64
+
+log = logging.getLogger(__name__)
+
+class DefaultProgressErrorHandler(urllib2.HTTPDefaultErrorHandler):
+    def http_error_default(self, req, fp, code, msg, headers):
+        result = urllib2.HTTPError(req.get_full_url(), code, msg, headers, fp)
+        result.status = code
+        return result
+
+def request_to_string(request):
+    "for debugging"
+    buffer = []
+    buffer.append('Method: %s' % request.get_method())
+    buffer.append('Host: %s' % request.get_host())
+    buffer.append('Selector: %s' % request.get_selector())
+    buffer.append('Data: %s' % request.get_data())
+    return os.linesep.join(buffer)
+
+def scan():
+    url = 'http://localhost:8080/smrtportal/api/inputs/scan'
+    # Example path; substitute a directory that contains SMRT Cell metadata,
+    # such as /opt/testdata/LIMS/2311013/0002.
+    c_path = '/data/smrta/smrtanalysis/common/inputs_dropbox'
+    # To scan several paths at once, urlencode a sequence of ('paths[]', path)
+    # pairs (urllib.urlencode accepts a list of tuples) instead of a single pair.
+    scan_data = urllib.urlencode({'paths[]': c_path})
+    request = urllib2.Request(url, data=scan_data)
+    request.add_header('User-Agent', 'admin-user')
+    key = 'Basic %s' % (base64.b64encode("administrator:administrator#1"))
+    request.add_header('Authorization', key)
+    opener = urllib2.build_opener(DefaultProgressErrorHandler())
+    response = opener.open(request)
+    # The response body is a JSON list with one entry per scanned path.
+    retList = json.loads(response.read())
+    return retList[0]['idValue']
+
+def saveJob(inputId):
+    url = 'http://localhost:8080/smrtportal/api/jobs/create'
+    job = {
+        'name':'test_job',
+        'createdBy':'admin',
+        'protocolName':'RS_Filter_Only.1',
+        'groupNames':['all'],
+        'inputIds':[inputId]
+    }
+    job_data = urllib.urlencode( {'data': json.dumps(job)  } )
+
+    request = urllib2.Request(url, data=job_data)
+    print request_to_string(request)
+
+    request.add_header('User-Agent', 'admin-user')
+    key = 'Basic %s' % (base64.b64encode("administrator:administrator#1"))
+    request.add_header('Authorization', key)
+
+    opener = urllib2.build_opener(DefaultProgressErrorHandler())
+    response = opener.open(request)
+    ret = json.loads( response.read() )
+    return ret['idValue']
+
+def startJob(jobId):
+    url = 'http://localhost:8080/smrtportal/api/jobs/{i}/start'.format(i=jobId)
+
+    #This is a GET
+    request = urllib2.Request(url)
+    print request_to_string(request)
+
+    request.add_header('User-Agent', 'admin-user')
+    key = 'Basic %s' % (base64.b64encode("administrator:administrator#1"))
+    request.add_header('Authorization', key)
+
+    opener = urllib2.build_opener(DefaultProgressErrorHandler())
+    response = opener.open(request)
+    ret = json.loads( response.read() )
+    print( ret )
+
+def test():
+    inputId = scan()
+    print( 'Scanned inputId = %s' % inputId )
+    jobId = saveJob(inputId)
+    print( 'jobId = %s' % jobId )
+    startJob(jobId)
+
+if __name__ == '__main__':
+    test()
+```
+### <a name="SA_Del"></a> Delete Secondary Input Function
+Use this function to **delete** specified secondary analysis input. **(Scientists and administrators only)**
+
+* **URL:**  ``/inputs/{id}``
+* **Method:** ``DELETE``
+* **Parameters:**  ``id=int``
+* **Returns:** ``A notice message object.``
+
+### <a name="SA_Comp"></a> Compatibility Function
+Use this function to return information specifying whether the SMRT® Cell inputs for the job are compatible. Mixing data that was generated using v1.2.1 primary analysis software with data generated with later versions may fail. (v1.2.1 calculated Quality Values differently than later versions.)
+
+* **URL:**  ``/inputs/compatibility``
+* **Method:** ``GET``
+* **Parameters:**  ``ids=[array of ids]``
+* **Returns:** ``JSON object specifying whether or not the inputs are compatible.``
+
+### <a name="SA_Group"></a> Groups Function
+Use this function to add group information to secondary analysis input.
+
+* **URL:**  ``/inputs/{INPUT_ID}/groups``
+* **Method:** ``POST``
+* **Parameters:**  ``data=[name of groups].``  **Example:** ``data="['grp1', 'grp2']"``
+* **Returns:** ``A notice message object.``
+
+### <a name="SA_Clean"></a> Cleanup Function
+Use this function to delete any input that is **unassociated** with a job and has an invalid
+or empty collectionPathUri. This is useful for cleaning up duplicate SMRT® Cells located at different paths. When you scan and import SMRT® Cells from SMRT® Portal and the same SMRT® Cell ID already exists, the existing path is updated to the new location. No duplicate entries are created. **(Scientists and administrators only)**
+
+* **URL:**  ``/inputs/{INPUT_ID}/cleanup``
+* **Method:** ``DELETE``
+* **Returns:** ``A notice message object that includes the list of deleted input IDs.``
+
+
+## <a name="JOB_SVC"></a> Jobs Service
+The Jobs Service includes functions used to manage secondary analysis jobs.
+
+### <a name="JOB_List"></a> List Jobs Function
+Use this function to obtain a list of **all** secondary analysis jobs.
+
+* **URL:**  ``/jobs``
+* **Method:** ``GET`` or ``POST``
+* **Parameters:**  ``options=SearchOptions``
+* **Returns:** ``PagedList<Job>``
+* **Example:**
+
+```
+curl -d 'options={"filters":{"rules":[{"field":"createdBy","op":"eq","data":"AutomationSystem"},{"field":"jobId","op":"lt","data":"30000"}],"groupOp":"and"},"columnNames":["jobId"],"rows":"0"}' http://pssc1:8080/smrtportal/api/jobs
+{
+"page" : 1,
+"records" : 57,
+"total" : 1,
+"rows" : [ {
+"jobId" : 26392
+}, {
+"jobId" : 26360
+}, {
+"jobId" : 26359
+}, {
+...
+}]
+}
+```
+
+### <a name="JOB_ListBStatus"></a> List Jobs by Status Function
+Use this function to obtain a list of secondary analysis jobs, based on their **job status**.
+
+* **URL:**  ``/jobs/by-status/{jobStatus}``
+* **Method:** ``GET`` or ``POST``
+* **Parameters:**  ``options=SearchOptions``
+* **Returns:** ``PagedList<Job>``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/by-status/Completed
+{
+"page" : 1,
+"records" : 25,
+"total" : 3,
+"rows" : [ {
+"automated" : true,
+"collectionProtocol" : "Standard Seq v2",
+...
+} ]
+}
+```
+
+### <a name="JOB_ListBProt"></a> List Jobs By Protocol Function
+Use this function to list the currently open secondary analysis jobs, based on a specified **protocol**.
+
+* **URL:**  ``/jobs/by-protocol/{protocol}``
+* **Method:** ``GET`` or ``POST``
+* **Parameters:**  ``protocol=string``, ``options=SearchOptions``, ``jobStatus=`` status code such as ``NotStarted``.
+* **Returns:** ``PagedList<Job>``
+* **Example:**
+```
+curl -d 'jobStatus=Completed' http://pssc1:8080/smrtportal/api/jobs/by-protocol/RS_resequencing.1
+{
+"page" : 1,
+"records" : 1,
+"total" : 1,
+"rows" : [ {
+"automated" : false,
+...
+"whenStarted" : null
+} ]
+}
+```
+
+### <a name="JOB_Det"></a> Job Details Function
+Use this function to display **details** of a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}`` or ``/jobs/by-name/{name}``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``, ``name=string``
+* **Returns:** ``Job``
+* **Examples:**
+```
+By ID:
+curl -u administrator:administrator#1 http://pssc1:8080/smrtportal/api/jobs/016437
+{
+"jobId" : 16437,
+"protocolName" : "RS_Site_Acceptance_Test.1",
+"referenceSequenceName" : "lambda",
+"jobStatus" : "Completed",
+...
+"whenModified" : "2012-01-31T09:12:48-0800",
+"modifiedBy" : null
+}
+By Name:
+curl -u administrator:administrator#1 http://pssc1:8080/smrtportal/api/jobs/by-name/2311084_0002
+{
+"jobId" : 16437,
+"protocolName" : "RS_Site_Acceptance_Test.1",
+"referenceSequenceName" : "lambda",
+"jobStatus" : "Completed",
+...
+"whenModified" : "2012-01-31T09:12:48-0800",
+"modifiedBy" : null
+}
+```
+
+### <a name="JOB_CR"></a> Create Job Function
+Use this function to **create** a new secondary analysis job.
+
+* **URL:**  ``/jobs/create`` (Using POST), ``/jobs`` (Using PUT)
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:**  ``data=Job`` (Using POST), ``job`` (Using PUT). In both cases, the name must be **unique**, and ``CreatedBy`` must be **non-null**.
+* **Returns:** ``PrimaryKey``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d 'data={"name":"DemoJobName", "createdBy":"testuser", "description":"demo job", "protocolName":"RS_Resequencing.1", "groupNames":["all"], "inputIds":["78807"]}' http://pssc1:8080/smrtportal/api/jobs/create
+{
+"idValue" : 16478,
+"idProperty" : "jobId"
+}
+```
+
+### <a name="JOB_Save"></a> Save Job Function
+Use this function to **save** a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``id=int``, ``data=Job``
+* **Returns:** ``A notice message object.``
+
+### <a name="JOB_Del"></a> Delete Job Function
+Use this function to **delete** a secondary analysis job. **(Administrators only)**
+
+* **URL:**  ``/jobs/{id}``
+* **Method:** ``DELETE``
+* **Parameters:**  ``id=int``
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl -u "administrator:administrator#1" -X DELETE http://pssc1:8080/smrtportal/api/jobs/16478
+{
+"success" : true,
+"message" : "Job 16478 has been permanently deleted"
+}
+```
+
+### <a name="JOB_Arch"></a> Archive Job Function
+Use this function to **archive** a secondary analysis job. **(Administrators only)**
+
+* **URL:**  ``/jobs/{id}/archive`` (Using GET), ``/jobs/archive`` (Using POST)
+* **Method:** ``GET``, ``POST``
+* **Parameters:**  ``id=int``  (Using GET),  ``ids=int[]``  (Using POST)
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d 'ids=[16437,16438]' http://pssc1:8080/smrtportal/api/jobs/archive
+{
+"success" : true,
+"message" : "Archived 2 jobs."
+}
+```
+
+### <a name="JOB_RestArch"></a> Restore Archived Job Function
+Use this function to **restore** a secondary analysis job that was archived. **(Administrators only)**
+
+* **URL:**  ``/jobs/{id}/restore`` (Using GET), ``/jobs/restore`` (Using POST)
+* **Method:** ``GET``, ``POST``
+* **Parameters:**  ``id=int``  (Using GET),  ``ids=int[]``  (Using POST)
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d 'ids=[16437,16438]' http://pssc1:8080/smrtportal/api/jobs/restore
+{
+"success" : true,
+"message" : "Restored 2 jobs."
+}
+```
+
+### <a name="JOB_Metrics"></a> Get Job Metrics Function
+Use this function to retrieve **metrics** for one or more secondary analysis jobs, in CSV format.
+
+* **URL:**  ``/jobs/{id}/metrics`` (Using GET), ``/jobs/metrics`` (Using POST)
+* **Method:** ``GET``, ``POST``
+* **Parameters:**  ``id=int``  (Using GET),  ``ids=int[]``  (Using POST)
+* **Returns:** ``A notice message object`` containing the following fields:
+ * **Job ID:** Numeric ID for the job.
+ * **Job Name:** Name given to the job when created in SMRT® Portal.
+ * **Adapter Dimers (%):** The % of pre-filter ZMWs which have observed inserts of 0-10 bp. These are likely adapter dimers.
+ * **Short Inserts (%):** The % of pre-filter ZMWs which have observed inserts of 11-100 bp. These are likely short fragment contamination.
+ * **Medium Insert (%):**
+ * **Pre-Filter Polymerase Read Bases:** The number of bases in the polymerase reads before filtering, including adaptors.
+ * **Post-Filter Polymerase Read Bases:** The number of bases in the polymerase reads after filtering, including adaptors.
+ * **Pre-Filter Polymerase Reads:** The number of polymerases generating trimmed reads before filtering. Polymerase reads include bases from adaptors and multiple passes around a circular template.
+ * **Post-Filter Polymerase Reads:** The number of polymerases generating trimmed reads after filtering. Polymerase reads include bases from adaptors and multiple passes around a circular template.
+ * **Pre-Filter Polymerase Read Length:** The mean trimmed read length of all polymerase reads before filtering. The value includes bases from adaptors as well as multiple passes around a circular template.
+ * **Post-Filter Polymerase Read Length:** The mean trimmed read length of all polymerase reads after filtering. The value includes bases from adaptors as well as multiple passes around a circular template.
+ * **Pre-Filter Polymerase Read Quality:** The mean single-pass read quality of all polymerase reads before filtering.
+ * **Post-Filter Polymerase Read Quality:** The mean single-pass read quality of all polymerase reads after filtering.
+ * **Coverage:** The mean depth of coverage across the reference sequence.
+ * **Missing Bases (%):** The percentage of the reference sequence that has zero coverage.
+ * **Post-Filter Reads:**  The number of reads that passed filtering.
+ * **Mapped Subread Accuracy:** The mean accuracy of post-filter subreads that mapped to the reference sequence.
+ * **Mapped Reads:** The number of post-filter reads that mapped to the reference sequence.
+ * **Mapped Subreads:** The number of post-filter subreads that mapped to the reference sequence.
+ * **Mapped Polymerase Bases:** The number of post-filter bases that mapped to the reference sequence. 
+ * **Mapped Subread Bases:** The number of post-filter bases that mapped to the reference sequence. This does not include adapters.
+ * **Mapped Polymerase Read Length:** The mean trimmed read length of all polymerase reads. The value includes bases from adaptors as well as multiple passes around a circular template.
+ * **Mapped Subread Length:** The mean read length of post-filter subreads that mapped to the reference sequence. This does not include adapters.
+ * **Mapped Polymerase Read Length 95%:** The 95th percentile of read length of post-filter polymerase reads that mapped to the reference sequence.
+ * **Mapped Read Length of Insert:** The average length of the Read of Insert, which is a representative read of a DNA molecule from a single ZMW; that is, the sequence of a DNA molecule read from a single ZMW. On circularized SMRTbell™  templates that are shorter than the read length, a Read of Insert length distribution will closely resemble the insert size distribution.
+ * **Mapped Polymerase Read Length Max:** The maximum read length of post-filter polymerase reads that mapped to the reference sequence.
+ * **Mapped Full Subread Length:**  The lengths of full subreads, which includes only mapped subreads. Full subreads are subreads flanked by two adapters.
+ * **First Subread Length:**
+ * **Reads Starting Within 50 bp (%):**
+ * **Reads Starting Within 100 bp (%):**
+ * **Reference Length:** The length of the reference sequence.
+ * **Bases Called (%):** The percentage of reference sequence that has ≥ 1x coverage. % Bases Called + % Missing Bases should equal 100.
+ * **Consensus Accuracy:** The accuracy of the consensus sequence compared to the reference.
+ * **Coverage:** The mean depth of coverage across the reference sequence.
+ * **SMRT Cells:** The number of SMRT® Cells used in the job.
+ * **Movies:** The number of movies generated in the job.
+* **Example Notice Message Object returned:**
+```
+{
+  "Job ID" : 58765,
+  "Job Name" : "20130404_891_Final_v2_q1",
+  "Adapter Dimers (%)" : "0.46",
+  "Short Inserts (%)" : "0.04",
+  "Medium Insert (%)" : "0.03",
+  "Pre-Filter Polymerase Read Bases" : "342682225",
+  "Post-Filter Polymerase Read Bases" : "321587405",
+  "Pre-Filter Polymerase Reads" : "450918",
+  "Post-Filter Polymerase Reads" : "103439",
+  "Pre-Filter Polymerase Read Length" : "760",
+  "Post-Filter Polymerase Read Length" : "3109",
+  "Pre-Filter Polymerase Read Quality" : "0.203",
+  "Post-Filter Polymerase Read Quality" : "0.844",
+  "Coverage" : "137.94",
+  "Missing Bases (%)" : "0.00",
+  "Post-Filter Reads" : "103439",
+  "Mapped Subread Accuracy" : "86.49",
+  "Mapped Reads" : "95308",
+  "Mapped Subreads" : "117348",
+  "Mapped Polymerase Bases" : "278385730",
+  "Mapped Subread Bases" : "274408990",
+  "Mapped Polymerase Read Length" : "2921",
+  "Mapped Subread Length" : "2338",
+  "Mapped Polymerase Read Length 95%" : "7997",
+  "Mapped Read Length of Insert" : "2564",
+  "Mapped Polymerase Read Length Max" : "15085",
+  "Mapped Full Subread Length" : "2025",
+  "First Subread Length" : "2488",
+  "Reads Starting Within 50 bp (%)" : "0.06",
+  "Reads Starting Within 100 bp (%)" : "0.06",
+  "Reference Length - Campylobacter_891_8523_chromosome|quiver" : "1853005",
+  "Bases Called (%) - Campylobacter_891_8523_chromosome|quiver" : "100.00",
+  "Consensus Accuracy - Campylobacter_891_8523_chromosome|quiver" : "100.0000",
+  "Coverage - Campylobacter_891_8523_chromosome|quiver" : "137.94",
+  "SMRT Cells" : "6",
+  "Movies" : "6"
+}
+```
+* **Example command to call the function:**
+```
+curl -u administrator:administrator#1 -d 'ids=[16437,16438]' http://pssc1:8080/smrtportal/api/jobs/metrics
+[ {
+"Job ID" : 16437,
+"Job Name" : "2311084_0002",
+...
+},{
+...
+}]
+```
+
+### <a name="JOB_Prot"></a> Get Job Protocol Function
+Use this function to **return the protocol** used by a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/protocol``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``Protocol XML document``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/protocol
+<smrtpipeSettings>
+<protocol version="1.3.0" id="RS_Site_Acceptance_Test.1" editable="false">
+<param name="name" label="Protocol Name" editable="false">
+...
+<fileName>settings.xml</fileName>
+</smrtpipeSettings>
+```
+You can also return a protocol as a `json` object by specifying a header item:
+```
+curl --verbose -H "accept:application/json" http://localhost:8080/smrtportal/api/jobs/16454/protocol
+```
+
+### <a name="JOB_SetProt"></a> Set Job Protocol Function
+Use this function to **specify the protocol** used by a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/protocol``
+* **Method:** ``POST``
+* **Parameters:**  ``id=int``, ``data=Xml(escaped)``. The XML is escaped for transmission from a web browser, for example using the JavaScript escape function.
+* **Returns:** ``A notice message object.``
+
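+A sketch of submitting an edited protocol, assuming the XML was previously saved from the Get Job Protocol call above and that standard form encoding satisfies the escaping requirement:
+
+```
+import base64
+import urllib
+import urllib2
+
+# settings.xml is assumed to hold the protocol XML returned by a previous
+# GET of /jobs/{id}/protocol, with any edits applied.
+settings_xml = open('settings.xml').read()
+body = urllib.urlencode({'data': settings_xml})  # form encoding escapes the XML
+request = urllib2.Request(
+    'http://localhost:8080/smrtportal/api/jobs/16454/protocol', data=body)
+request.add_header('Authorization',
+                   'Basic %s' % base64.b64encode('administrator:administrator#1'))
+print(urllib2.urlopen(request).read())
+```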
+
+### <a name="JOB_Input"></a> Get Job Inputs Function
+Use this function to return information about the SMRT® Cell data used for a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/inputs``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``PagedList<Input>``
+
+### <a name="JOB_Start"></a> Start Job Function
+Use this function to **start** a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/start``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``JobStatus``
+* **Example:**
+```
+curl -u administrator:administrator#1 http://pssc1:8080/smrtportal/api/jobs/16479/start
+{
+"jobStatusId" : 1775,
+"jobId" : 16479,
+"code" : "Submitted",
+"jobStage" : null,
+"moduleName" : null,
+"percentComplete" : 0,
+"message" : "Job submitted",
+"name" : null,
+"description" : null,
+"whenCreated" : null,
+"createdBy" : null,
+"whenModified" : null,
+"modifiedBy" : null
+}
+```
+
+### <a name="JOB_GetStatus"></a> Get Job Status Function
+Use this function to obtain the **status** of a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/status``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``JobStatus``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16479/status
+{
+"jobStatusId" : 1780,
+"jobId" : 16479,
+"code" : "In Progress",
+"jobStage" : "Filtering",
+"moduleName" : "P_FilterReports/adapterRpt",
+"percentComplete" : 100,
+"message" : "task://016479/P_FilterReports/adapterRpt complete",
+"name" : null,
+"description" : null,
+"whenCreated" : "2012-02-03T17:38:06-0800",
+"createdBy" : "smrtpipe",
+"whenModified" : "2012-02-03T17:38:06-0800",
+"modifiedBy" : null
+}
+```
+
+### <a name="JOB_UpStatus"></a> Update Job Status Function
+Use this function to **modify the status** of a secondary analysis job. **(Scientists and administrators only)**
+
+* **URL:**  ``/jobs/{id}/status``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``id=int``, ``progress=JobStatus``
+* **Returns:** ``PrimaryKey``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d 'progress={"code":"Failed"}' http://pssc1:8080/smrtportal/api/jobs/16471/status
+{
+"success" : true,
+"message" : "Job status updated"
+}
+```
+
+### <a name="JOB_Hist"></a> Job History Function
+Use this function to obtain the **history** of a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/history``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``List<JobStatus>``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/history
+[ {
+"jobStatusId" : 1773,
+"jobId" : 16437,
+"code" : "Completed",
+"jobStage" : null,
+"moduleName" : null,
+"percentComplete" : 0,
+"message" : null,
+"name" : null,
+"description" : null,
+"whenCreated" : "2012-02-03T17:13:31-0800",
+"createdBy" : null,
+"whenModified" : "2012-02-03T17:13:31-0800",
+"modifiedBy" : null
+}, {
+...
+}]
+```
+
+### <a name="JOB_Log"></a> Job Log Function
+Use this function to obtain the **log** for a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/log``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``Text file``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/log
+[INFO] 2012-01-30 23:52:41,437 [SmrtPipeContext 139] Configuration override for PROGRESS_URL: Old: --> New: http://pssc1:8080/smrtportal/api
+[INFO] 2012-01-30 23:52:41,437 [SmrtPipeContext 150] Changing working directory to /tmp/tmpTKPKi4
+...
+[INFO] 2012-01-31 00:35:10,443 [SmrtPipeContext 362] Removed 2 temporary directories
+[INFO] 2012-01-31 00:35:10,450 [SmrtPipeContext 365] Removed 1 temporary files
+[INFO] 2012-01-31 00:35:10,450 [SmrtPipeMain 394] Successfully exiting smrtpipe
+```
+
+### <a name="JOB_TOC"></a> Analysis Table of Content Function
+Use this function to return a JSON object listing the reports and data files that were generated for a secondary analysis job. This function is used primarily by SMRT® Portal to display the report and data links in the View Data/Job Details page.
+
+* **URL:**  ``/jobs/{id}/contents``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``JSON object listing contents.``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/contents
+{
+"reportGroups" : [ {
+"name" : "General",
+"members" : [ {
+"group" : "General",
+"title" : "Workflow",
+"links" : [ {
+"path" : "workflow/Workflow.summary.html",
+"format" : "text/html"
+...
+}
+```
+
+### <a name="JOB_File"></a> Job Analysis File Function
+Use this function to obtain any specified **file** that was generated during a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/contents/{file}`` or ``/jobs/{id}/contents/{dir}/{file}``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``, ``file=filename``, ``dir=directory``
+* **Returns:** ``Data file, report XML, image, and so on.``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/contents/results/overview.xml
+<?xml version="1.0" encoding="UTF-8"?>
+<report>
+<layout onecolumn="true"/>
+<title>General Attribute Report</title>
+<attributes>
+<attribute id="n_smrt_cells" name="# of SMRT Cells" value="1">1</attribute>
+<attribute id="n_movies" name="# of Movies" value="2">2</attribute>
+</attributes>
+</report>
+```
+
+### <a name="JOB_COmplete"></a> Mark Job Complete Function
+Use this function to specify that a job using more than one SMRT® Cell is complete.
+
+* **URL:**  ``/jobs/{id}/complete``
+* **Method:** ``GET``
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16477/complete
+{
+"jobStatusId" : 1844,
+"jobId" : 16477,
+"code" : "Submitted",
+"jobStage" : null,
+"moduleName" : null,
+"percentComplete" : 0,
+"message" : "Job submitted",
+"name" : null,
+"description" : null,
+"whenCreated" : null,
+"createdBy" : null,
+"whenModified" : null,
+"modifiedBy" : null
+}
+```
+
+### <a name="JOB_inDrop"></a> List Jobs in Dropbox Function
+Use this function to list the jobs located in the Job Import Dropbox.
+
+* **URL:**  ``/jobs/dropbox-paths``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+* **Example:**
+```
+curl -u administrator:administrator#1 http://pssc1:8080/smrtportal/api/jobs/dropbox-paths
+[ "999991" ]
+```
+
+### <a name="JOB_Import"></a> Import Job Function
+Use this function to **import** a job located in the Job Import Dropbox.
+
+* **URL:**  ``/jobs/import``
+* **Method:** ``POST``
+* **Parameters:** ``paths=array of strings``
+* **Returns:** ``List<PrimaryKey>``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d 'paths=["/opt/smrtanalysis/common/jobs_dropbox/035169"]' http://pssc1:8080/smrtportal/api/jobs/import
+[ {
+"idValue" : 16480,
+"idProperty" : "jobId"
+} ]
+```
+
+### <a name="JOB_Heart"></a> Job Last Heartbeat Function
+Use this function to find out if a job is still alive.
+
+* **URL:**  ``/jobs/{id}/status/heartbeat``
+* **Method:** ``GET``, ``POST``
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d "data={'lastHeartbeat':'2011-06-20T00:50:20-0700'}" http://pssc1:8080/smrtportal/api/jobs/016471/status/heartbeat
+{
+"success" : true,
+"message" : "Job lastHeartbeat status updated"
+}
+```
+
+### <a name="JOB_RR"></a> Job Raw-Read Function
+Use this function to download a data file generated by a job.
+
+* **URL:**  ``/jobs/{id}/raw-reads``
+* **Method:** ``GET``
+* **Parameters:**  ``format=string`` (for example, ``format=fasta``, as in the example below)
+* **Returns:** ``Data file, report XML, image, and so on.``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/raw-reads?format=fasta
+```
+
+## <a name="PRO_SVC"></a> Protocol Service
+The Protocol Service includes functions that you use to manage the protocols used by secondary analysis jobs.
+
+### <a name="PRO_List"></a> List Protocols Function
+Use this function to obtain all the **active** and **inactive** protocols in the system.
+
+* **URL:**  ``/protocols``
+* **Method:** ``GET``
+* **Returns:** ``PagedList<Protocol>``
+
+### <a name="PRO_ListNames"></a> List Protocol Names Function
+Use this function to obtain the names of all the **active** protocols in the system.
+
+* **URL:**  ``/protocols/names``
+* **Method:** ``GET``
+* **Returns:** ``List<string>``
+
+### <a name="PRO_Det"></a> Protocol Details Function
+Use this function to obtain **details** about a protocol.
+
+* **URL:**  ``/protocols/{id}``
+* **Method:** ``GET``
+* **Parameters:**  ``id=string``
+* **Returns:** ``An XML protocol file.``
+
+You can also return a protocol as a `json` object by specifying a header item:
+```
+curl --verbose -H "accept:application/json" http://localhost:8080/smrtportal/api/protocols/RS_Resequencing.1
+```
+
+### <a name="PRO_CR"></a> Create Protocol Function
+Use this function to **add** a new protocol to the system.
+
+* **URL:**  ``/protocols/create`` (Using POST), ``/protocols`` (Using PUT)
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``data=Xml(escaped)`` (Using POST), ``Xml`` (Using PUT). The XML is escaped for transmission from a web browser, for example using the JavaScript escape function.
+* **Returns:** ``PrimaryKey``
+
+### <a name="PRO_UP"></a> Update Protocol Function
+Use this function to **update** a protocol.
+
+* **URL:**  ``/protocols/{id}``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``id=string``, ``data=Xml``
+* **Returns:** ``A notice message object.``
+
+You can also update a protocol as a `json` object by specifying a header item. A sketch, assuming the protocol JSON has been saved to a local file named ``protocol.json``:
+```
+curl --verbose -H "content-type:application/json" -X PUT -d @protocol.json http://localhost:8080/smrtportal/api/protocols/RS_Resequencing.1
+```
+
+### <a name="PRO_Del"></a> Delete Protocol Function
+Use this function to **permanently delete** a protocol.
+
+* **URL:**  ``/protocols/{id}``
+* **Method:** ``DELETE``
+* **Parameters:**  ``id=string``
+* **Returns:** ``A notice message object.``
+
+## <a name="SAM_SVC"></a> Sample Sheet Service
+The Sample Sheet Service includes a function to validate a specified sample sheet.
+
+### <a name="SAM_Val"></a> Validate Sample Sheet Function
+
+* **URL:**  ``/sample-sheets/validate``
+* **Method:** ``POST``
+* **Parameters:**  ``sampleSheet=SampleSheet``
+* **Returns:** ``A notice message object.``
+
+## <a name="SET_SVC"></a> Settings Service
+The Settings Service includes functions that you use to manage the SMTP host, send test email, manage instrument URIs, and manage the file input paths where SMRT® Portal looks for secondary analysis input, reference sequences, and jobs to import.
+
+### <a name="SET_CheckSpace"></a> Check Free Disk Space Function
+Use this function to check how much free space resides on the disk containing the jobs directory, by default located at ``/opt/smrtanalysis/common/jobs``.
+
+* **URL:**  ``/settings/free-space``
+* **Method:** ``GET``
+* **Returns:** ``Floating point value between 0 and 1, representing the fraction of disk space that is free.``
+
+### <a name="SET_GetDrop"></a> Get Job Dropbox Function
+Use this function to obtain the location of the dropbox where SMRT® Portal looks for jobs to import.
+
+* **URL:**  ``/settings/job-dropbox``
+* **Method:** ``GET``
+* **Returns:** ``The path for the job dropbox directory.``
+
+### <a name="SET_SetDrop"></a> Set Job Dropbox Function
+Use this function to **specify** the location of the Job Import Dropbox where SMRT® Portal looks for jobs to import.
+
+* **URL:**  ``/settings/job-dropbox``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``path=string``
+* **Returns:** ``A notice message object.``
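+
+A sketch of changing this setting programmatically, with the path value illustrative and the server and credentials as in the other examples:
+
+```
+import base64
+import urllib
+import urllib2
+
+url = 'http://localhost:8080/smrtportal/api/settings/job-dropbox'
+body = urllib.urlencode({'path': '/opt/smrtanalysis/common/jobs_dropbox'})
+request = urllib2.Request(url, data=body)   # supplying data= makes this a POST
+request.add_header('Authorization',
+                   'Basic %s' % base64.b64encode('administrator:administrator#1'))
+print(urllib2.urlopen(request).read())      # expect a notice message object
+```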
+
+### <a name="SET_GetRefDrop"></a> Get Reference Sequence Dropbox Function
+Use this function to obtain the location of the Reference Sequence Dropbox where SMRT® Portal looks for reference sequences.
+
+* **URL:**  ``/settings/reference-dropbox``
+* **Method:** ``GET``
+* **Returns:** ``The path for the reference sequence dropbox directory.``
+
+### <a name="SET_SetRefDrop"></a> Set Reference Sequence Dropbox Function
+Use this function to **specify** the location of the Reference Sequence Dropbox where SMRT® Portal looks for reference sequences.
+
+* **URL:**  ``/settings/reference-dropbox``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``path=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_GetSMTP"></a> Get SMTP Host Function
+Use this function to obtain the name of the current SMTP host.
+
+* **URL:**  ``/settings/smtp-host``
+* **Method:** ``GET``
+* **Returns:** ``The host name.``
+
+### <a name="SET_SetSMTP"></a> Set SMTP Host Function
+Use this function to **specify** the name of the SMTP host to use.
+
+* **URL:**  ``/settings/smtp-host``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``host=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_Email"></a> Send Test Email Function
+Use this function to send a test email to the administrator, using the specified SMTP Host.
+
+* **URL:**  ``/settings/smtp-host/test``
+* **Method:** ``GET``
+* **Parameters:**  ``host=string``
+* **Returns:** ``A notice message object. A test email is also sent to the administrator.``
+
+### <a name="SET_GetPath"></a> Get Input Paths Function
+Use this function to obtain the file input paths where SMRT® Portal looks for secondary analysis input.
+
+* **URL:**  ``/settings/input-paths``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+
+### <a name="SET_AddPath"></a> Add Input Paths Function
+Use this function to **add** file input paths where SMRT® Portal looks for secondary analysis input.
+
+* **URL:**  ``/settings/input-paths``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``data=array of paths``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_AddPath"></a> Remove Input Paths Function
+Use this function to **remove** file input paths where SMRT® Portal looks for secondary analysis input.
+
+* **URL:**  ``/settings/input-paths``
+* **Method:** ``DELETE``
+* **Parameters:**  ``data=array of paths``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_ValPath"></a> Validate Path for use in pbids Function
+Use this function to validate the URI (Universal Resource Identifier) path that specifies where the primary analysis data is stored. You specify the path using the RS Remote software; the path uses the ``pbids`` format.
+
+* **URL:**  ``/settings/validate-path``
+* **Method:** ``POST``
+* **Parameters:**  ``path=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_GetURI"></a> Get Instrument URIs Function
+Use this function to obtain the URI (Universal Resource Identifier) that specifies the location of the PacBio® instrument(s) running the Instrument Control Web Services.
+
+* **URL:**  ``/settings/instrument-uris``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+
+### <a name="SET_SetURI"></a> Add Instrument URIs Function
+Use this function to **specify** the URI (Universal Resource Identifier) that specifies the location of the PacBio® instrument(s) running the Instrument Control Web Services.
+
+* **URL:**  ``/settings/instrument-uris``
+* **Method:** ``POST``,  ``PUT``
+* **Parameters:**  ``data=array of URIs``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_DelURI"></a> Remove Instrument URIs Function
+Use this function to **remove** the URI (Universal Resource Identifier) that specifies the location of the PacBio® instrument(s) running the Instrument Control Web Services.
+
+* **URL:**  ``/settings/instrument-uris``
+* **Method:** ``DELETE``
+* **Parameters:**  ``data=array of URIs``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_TestURI"></a> Test Instrument URIs Function
+Use this function to **test** the URI (Universal Resource Identifier) that specifies the location of the PacBio® instrument(s) running the Instrument Control Web Services.
+
+* **URL:**  ``/settings/instrument-uris/test``
+* **Method:** ``POST``
+* **Parameters:**  ``uri=instrument URI``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_CheckUI"></a> Check Anonymous UI Access Function
+Use this function to check whether users have read-only access to SMRT® Portal without logging in. (Users must still log in to **create** or **modify** jobs.)
+
+* **URL:**  ``/settings/restrict-web-access``
+* **Method:** ``GET``
+* **Returns:** ``True/False``
+
+### <a name="SET_SetUI"></a> Set Anonymous UI Access Function
+Use this function to **specify** whether users have read-only access to SMRT® Portal without logging in. (Users must still log in to **create** or **modify** jobs.)
+
+* **URL:**  ``/settings/restrict-web-access``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``value=true|false``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_CheckWS"></a> Check Anonymous Web Services Access Function
+Use this function when your organization has written custom software to access SMRT® Pipe or to integrate with a LIMS system. The function checks whether software can access certain web services methods **without** authentication.
+
+* **URL:**  ``/settings/restrict-service-access``
+* **Method:** ``GET``
+* **Returns:** ``True/False``
+
+### <a name="SET_SetWS"></a> Set Anonymous Web Services Access Function
+Use this function when your organization has written custom software to access SMRT® Pipe or to integrate with a LIMS system. The function specifies whether software can access certain web services methods **without** authentication. (Otherwise, the software must supply credentials programmatically.)
+
+* **URL:**  ``/settings/restrict-service-access``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``value=true|false``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_SetUIWS"></a> Set Anonymous Web and UI Access Function
+Use this function to specify 1) Whether a user has read-only access to SMRT® Portal and 2) Whether software can use certain web services methods without authentication.
+
+* **URL:**  ``/settings/restrict-access``
+* **Method:** ``POST``
+* **Parameters:**  ``web=true|false``, ``service=true|false``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_GetArch"></a> Get Job Archive Directory Function
+Use this function to obtain the path to the directory used to store archived jobs.
+
+* **URL:**  ``/settings/job-archive``
+* **Method:** ``GET``
+* **Returns:** ``The path for the job archive directory.``
+
+### <a name="SET_SetArch"></a> Set Job Archive Directory Path Function
+Use this function to **set** the path to the directory used to store archived jobs.
+
+* **URL:**  ``/settings/job-archive``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``path=string``
+* **Returns:** ``A notice message object.``
+
+## <a name="GR_SVC"></a> Group Service
+The Group Service includes functions that you use to manage groups of SMRT® Portal users.
+
+### <a name="GR_CR"></a> Create Group Function
+Use this function to **create** a new group of users. **(Administrators only)**
+
+* **URL:**  ``/groups/create`` (Using POST), ``/groups`` (Using PUT)
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:**  ``data=Group`` (Using POST), ``group`` (Using PUT). In both cases, the name must be **unique**, and ``CreatedBy`` must be **non-null**.
+* **Returns:** ``PrimaryKey``
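+
+A sketch of creating a group over POST; the ``name`` and ``createdBy`` fields follow the uniqueness and non-null requirements above, the group name shown is illustrative, and the server and credentials are those used in the other examples:
+
+```
+import base64
+import json
+import urllib
+import urllib2
+
+group = {'name': 'sequencing-lab', 'createdBy': 'administrator'}
+body = urllib.urlencode({'data': json.dumps(group)})
+request = urllib2.Request('http://localhost:8080/smrtportal/api/groups/create',
+                          data=body)
+request.add_header('Authorization',
+                   'Basic %s' % base64.b64encode('administrator:administrator#1'))
+print(urllib2.urlopen(request).read())  # returns a PrimaryKey object
+```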
+
+### <a name="GR_Save"></a> Save Group Function
+Use this function to **save** a specified group of users. **(Administrators only)**
+
+* **URL:**  ``/groups/{id}``
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:**  ``id=int``, ``data=group``
+* **Returns:** ``A notice message object.``
+
+### <a name="GR_Del"></a> Delete Group Function
+Use this function to **delete** a specified group of users. **(Administrators only)**
+
+* **URL:**  ``/groups/{id}``
+* **Method:** ``DELETE``
+* **Parameters:**  ``id=int``
+* **Returns:** ``A notice message object.``
+
+### <a name="GR_ListNames"></a> List Group Names Function
+Use this function to get a list of the names of groups of users on the system. **(Administrators only)**
+
+* **URL:**  ``/groups/names``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+
+### <a name="GR_List"></a> List Groups Function
+Use this function to return information about the groups of users available on the system.  **(Administrators only)**
+
+* **URL:**  ``/groups``
+* **Method:** ``GET``, ``POST``
+* **Parameters:**  ``options=SearchOptions``
+* **Returns:** ``PagedList<Group>``
+
+***
+
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2013, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
+P/N 000-999-643-05
\ No newline at end of file
diff --git a/docs/Secondary-Analysis-Web-Services-API-v2.2.0.md b/docs/Secondary-Analysis-Web-Services-API-v2.2.0.md
new file mode 100644
index 0000000..94e3378
--- /dev/null
+++ b/docs/Secondary-Analysis-Web-Services-API-v2.2.0.md
@@ -0,0 +1,1414 @@
+* [Introduction](#Intro)
+* [Security](#Sec)
+* [Overview](#Ov)
+* [Web Services Behavior](#WSB)
+* [HTTP Response Codes](#HCODE)
+* [Search Conventions](#CONV)
+* [Examples](#EX)
+* [Reference Service](#REF_SVC)
+  * [List References Function](#REF_List_Ref)
+  * [Reference Details Function](#REF_Ref_Det)
+  * [List References by Type Function](#REF_List_Ref_Type)
+  * [Create Reference Function](#REF_Create_Ref)
+  * [Save Reference Function](#REF_Save_Ref)
+  * [Delete Reference Function](#REF_Del_Ref)
+  * [List Reference Dropbox Files Function](#REF_List_DB)
+  * [RSS Feed Function](#REF_RSS)
+* [User Service](#USER)
+  * [List Users Function](#USR_List)
+  * [User Details Function](#USR_Det)
+  * [Create User Function](#USR_Create)
+  * [Save User Function](#USR_Save)
+  * [Delete User Function](#USR_Del)
+  * [Register User Function](#USR_Reg)
+  * [Change Password Function](#USR_CPW)
+  * [Reset Password Function](#USR_RPW)
+  * [List User-Defined Fields Function](#USR_LUDF)
+  * [List User-Defined Field Names Function](#USR_LUDFN)
+* [Secondary Analysis Input Service](#SA_SVC)
+  * [List Secondary Inputs Function](#SA_LInput)
+  * [Secondary Input Details Function](#SA_InputDet)
+  * [Create Secondary Input Function](#SA_CR)
+  * [Save Secondary Input Function](#SA_SA)
+  * [Last Timestamp of Secondary Input Function](#SA_LTime)
+  * [Import Secondary Input Metadata Function](#SA_Imp)
+  * [Scan for New Input Metadata Function](#SA_Scan)
+  * [Delete Secondary Input Function](#SA_Del)
+  * [Compatibility Function](#SA_Comp)
+  * [Groups Function](#SA_Group)
+  * [Cleanup Function](#SA_Clean)
+* [Jobs Service](#JOB_SVC)
+  * [List Jobs Function](#JOB_List)
+  * [List Jobs by Status Function](#JOB_ListBStatus)
+  * [List Jobs By Protocol Function](#JOB_ListBProt)
+  * [Job Details Function](#JOB_Det)
+  * [Create Job Function](#JOB_CR)
+  * [Save Job Function](#JOB_Save)
+  * [Delete Job Function](#JOB_Del)
+  * [Archive Job Function](#JOB_Arch)
+  * [Restore Archived Job Function](#JOB_RestArch)
+  * [Get Job Metrics Function](#JOB_Metrics)
+  * [Get Job Protocol Function](#JOB_Prot)
+  * [Set Job Protocol Function](#JOB_SetProt)
+  * [Get Job Inputs Function](#JOB_Input)
+  * [Start Job Function](#JOB_Start)
+  * [Get Job Status Function](#JOB_GetStatus)
+  * [Update Job Status Function](#JOB_UpStatus)
+  * [Job History Function](#JOB_Hist)
+  * [Job Log Function](#JOB_Log)
+  * [Analysis Table of Content Function](#JOB_TOC)
+  * [Job Analysis File Function](#JOB_File)
+  * [Mark Job Complete Function](#JOB_COmplete)
+  * [List Jobs in Dropbox Function](#JOB_inDrop)
+  * [Import Job Function](#JOB_Import)
+  * [Job Last Heartbeat Function](#JOB_Heart)
+  * [Job Raw-Read Function](#JOB_RR)
+* [Protocol Service](#PRO_SVC)
+  * [List Protocols Function](#PRO_List)
+  * [Protocol Details Function](#PRO_Det)
+  * [Update Protocol Function](#PRO_UP)
+  * [Delete Protocol Function](#PRO_Del)
+* [Sample Sheet Service](#SAM_SVC)
+  * [Validate Sample Sheet Function](#SAM_Val)
+* [Settings Service](#SET_SVC)
+  * [Check Free Disk Space Function](#SET_CheckSpace)
+  * [Get Job Dropbox Function](#SET_GetDrop)
+  * [Set Job Dropbox Function](#SET_SetDrop)
+  * [Get Reference Sequence Dropbox Function](#SET_GetRefDrop)
+  * [Set Reference Sequence Dropbox Function](#SET_SetRefDrop)
+  * [Get SMTP Host Function](#SET_GetSMTP)
+  * [Set SMTP Host Function](#SET_SetSMTP)
+  * [Send Test Email Function](#SET_Email)
+  * [Get Input Paths Function](#SET_GetPath)
+  * [Add Input Paths Function](#SET_AddPath)
+  * [Remove Input Paths Function](#SET_DelPath)
+  * [Validate Path for Use in pbids Function](#SET_ValPath)
+  * [Get Instrument URIs Function](#SET_GetURI)
+  * [Add Instrument URIs Function](#SET_SetURI)
+  * [Remove Instrument URIs Function](#SET_DelURI)
+  * [Test Instrument URIs Function](#SET_TestURI)
+  * [Check Anonymous UI Access Function](#SET_CheckUI)
+  * [Set Anonymous UI Access Function](#SET_SetUI)
+  * [Check Anonymous Web Services Access Function](#SET_CheckWS)
+  * [Set Anonymous Web Services Access Function](#SET_SetWS)
+  * [Set Anonymous Web and UI Access Function](#SET_SetUIWS)
+  * [Get Job Archive Directory Function](#SET_GetArch)
+  * [Set Job Archive Directory Path Function](#SET_SetArch)
+  * [Set Enable Wizards](#SET_EnableWizard)
+* [Groups Service](#GR_SVC)
+  * [Create Group Function](#GR_CR)
+  * [Save Group Function](#GR_Save)
+  * [Delete Group Function](#GR_Del)
+  * [List Group Names Function](#GR_ListNames)
+  * [List Groups Function](#GR_List)
+
+***
+
+## <a name="Intro"></a> Introduction
+
+This document describes the Secondary Analysis Web Services API provided by Pacific Biosciences. The API allows developers to search, submit and manage secondary analysis jobs, data, results, and user accounts.
+
+Secondary Analysis Web Services follow the **REST** (Representational State Transfer) model for web services, and use the JSON (JavaScript Object Notation) format. The web services:
+
+* Run as the server-side layer for managing secondary analysis jobs.
+* Maintain data integrity in the secondary analysis database and file system.
+* Act as a layer on top of SMRT Pipe, the lower-level code that performs secondary analysis processing.
+* Support AJAX access from web clients, and can be used from the command line with ``wget`` or ``curl``; from scripting languages (PHP, Python®, Perl); and from the Java® and C# programming languages.
+
+The API includes functions for:
+* Managing **reference sequences**
+* Managing **user accounts** and **passwords**
+* Managing **groups of users**
+* Managing **instrument output** (SMRT Cell data)
+* Managing secondary analysis **jobs**
+* Managing **protocols**
+* Validating **sample sheets**
+* Managing **settings**
+
+The latest version of the API and this documentation are available from the PacBio Developer’s Network at http://www.pacbiodevnet.com.
+
+## <a name="Sec"></a> Security
+
+* Anonymous read-only access to web services is enabled by **default**.
+* Services that **create** and **modify** data require authentication.
+* Authentication is enforced for administrator, scientist and technician-level access and **cannot** be disabled.
+* An application setting (``restrictAccess``) in the ``web.xml`` file turns on or off authentication for all those web services **not** solely for administrators.
+
+
+## <a name="Ov"></a> Overview
+
+Secondary Analysis Web Services API:
+
+* Run in or under a standard Linux®/Apache™ environment, and can be accessed from Windows®, Mac OS® or Linux® operating systems.
+* Require MySQL® software.
+* Are installed as part of the secondary analysis system, and require a one-time configuration. Any additional changes can be made using SMRT Portal.
+* Require that SMRT Pipe be correctly configured and working.
+
+## <a name="WSB"></a> Web Services Behavior
+
+* URLs and parameters are all **case-sensitive.**
+
+* Most requests use the HTTP ``GET`` command to retrieve an object in JSON (JavaScript Object Notation) format. To view details of an object, GET ``/{objects}/{id_or_name}``. **Example:**
+``curl http://{server}/smrtportal/api/jobs/12345``
+
+* **Deleting objects** uses the HTTP DELETE command: ``DELETE data: /{objects}/{id_or_name}``. **Example:** ``curl -X DELETE -u administrator:somepassword http://{server}/smrtportal/api/jobs/12345``
+
+* **Saving objects to the server, manipulating objects, and operating on the server:** These use the HTTP POST command common to standard HTML forms. This is **not** the same as for file uploads, which use a different mime type (multipart form data). In this case, the request body consists of key-value form pairs. POST data to create a new object: ``/{objects}``. **Example:**
+
+```
+curl -d 'data={the job returned from the GET method, with some edits}' http://{server}/smrtportal/api/jobs/12345
+```
+
+* **Saving objects to the server** also supports the PUT and POST commands with alternative content-types, such as application/json and text/xml. In this case, the request body consists of JSON or XML, and contains no key-value form pairs: PUT/POST data to save/update objects: ``/{objects}``
+
+* Most of the time you use ``/{objects}/create`` for both ways of saving objects.
+
+* Web services requiring authentication use the HTTP header’s Authorization feature. **Example:**
+``curl -u "janeuser:somepassword" http://server/secret/sauce``. Alternatively, you can log in using the ``users/log-on`` method and store the cookie for use with future web service calls.
+
+* **Creating objects** can be done using an HTTP POST to the ``/create`` method, or by using an HTTP PUT with JSON or XML as the request body. The PUT method is considered more of a REST “purist” approach, whereas POST is more widely supported by web browsers.
+
+* By default, most web services return JSON. However, it’s possible in most cases to change the result format by adding an Accept header to the request. Most methods will support ``Accept: text/xml`` as well as ``application/json``, ``text/csv`` and ``text/tsv`` (tab-separated values).
+
+Some examples:
+
+JSON (default):
+```
+$ curl http://localhost:8080/smrtportal/api
+{
+  "success" : true,
+  "message" : "Web services are alive"
+}
+```
+XML:
+```
+$ curl -H "Accept: text/xml" http://localhost:8080/smrtportal/api
+<?xml version="1.0" encoding="UTF-8"?>
+<notice>
+     <success>true</success>
+     <message>Web services are alive</message>
+</notice>
+```
+Comma-separated values:
+```
+$ curl -H "Accept: text/csv" http://localhost:8080/smrtportal/api
+"message","success"
+"Web services are alive","true"
+```
+Tab-separated values:
+```
+$ curl -H "Accept: text/tsv" http://localhost:8080/smrtportal/api
+message success
+Web services are alive  true
+```
+And back to JSON:
+```
+$ curl -H "Accept: application/json" http://localhost:8080/smrtportal/api
+{
+  "success" : true,
+  "message" : "Web services are alive"
+}
+```
+
+
+### Passing Arguments
+
+* Arguments that are primitive types can be passed like the standard HTTP POST parameters: ``param1=value1&param2=value2``
+
+* Arguments that are objects should be serialized as JSON: ``param1={"name1":"value1","name2":"value2"}``
+
+* When using an HTTP PUT, simply pass the JSON or XML object in the request body:
+``{"name1": "Value1", "name2": "Value2"}``
+
+### Date and Time Format
+* All dates and times are in the ISO 8601 Universal Time format.
+
+## <a name="HCODE"></a> HTTP Response Codes
+
+### Success Conditions
+
+When successful, a web services call returns an object or list of objects serialized as JSON, unless a different format is requested using an ``Accept`` header. You can deserialize the object in any language as a dictionary/hashtable, a list, or a list of dictionary/hashtables. For more advanced use, you can create custom, strongly-typed objects.
+
+For service calls that **don’t** return data from a server, a Notice message object with a uniform signature is returned. For example: ``{"success": true, "message": "It worked"}``
+
+* **Return Value:** ``200 OK``  **Explanation:** The web service call returned successfully. The
+body of the response contains the requested JSON object. For function calls, the response may be a
+simple status message object.
+
+* **Return Value:** ``201 Created``  **Explanation:** The web service created a new object on the server. A simple PrimaryKey object is returned, such as: ``{"idName":"id","idValue":12345}``.
+The response will contain a ``Location`` header pointing to the new object: ``http://where/the/new/object/is``
+
+### Error Conditions
+
+When errors occur, the web services return an HTTP error code. The body of the response contains a standard JSON object that can be uniformly deserialized as a strongly typed object, or left as a dictionary/hashtable. For example: ``{"success":false, "type":"IllegalArgumentException", "message":"Job id cannot be null"}``
+
+* **Return Value:** ``400 Bad request``  **Explanation:** The arguments were incorrect, or the web service was called incorrectly.
+
+* **Return Value:** ``403 Forbidden``  **Explanation:** The web service requires authentication, and the credentials in the HTTP header’s Authorization section were rejected.
+
+* **Return Value:** ``404 Not Found``  **Explanation:** The search or web service call did not find the
+requested object.
+
+* **Return Value:** ``409 Conflict``  **Explanation:** The attempt to update or delete an object failed.
+
+* **Return Value:** ``413 Request Entity Too Large``  **Explanation:** When searching a large database table, there may be practical limits to how many records can be returned. The query asked for too many records.
+
+* **Return Value:** ``500 Internal Server Error``  **Explanation:** An internal error occurred.
+
+## <a name="CONV"></a> Search Conventions
+
+Lists of objects are retrieved using either the HTTP GET or POST commands. For objects with a small number of members, a JSON list is returned. Searching and filtering are possible through web services; see the documentation for the jqGrid plugin at http://www.trirand.com/jqgridwiki/.
+
+```
+GET the full list: /jobs
+```
+
+For objects with a **large** number of records (such as secondary analysis jobs and instrument output), results are paged. A wrapper object specifies the page number, total number of records, rows per page, and the list of objects themselves. The data structure is taken directly from the jqGrid plugin; for details see http://www.trirand.com/blog. Following is a sample structure: ``{"page":1,"records":50,"total":510,"rows":[{object1},{obj2}]}`` where:
+
+* ``page`` is the current page.
+* ``records`` is the number of rows on the current page.
+* ``total`` is the total number of rows.
+* ``rows`` is a list of objects for the current page.
+
+### Usage
+
+* GET the first page: ``/{objects}``  **Example:**  ``curl http://{server}/smrtportal/api/jobs``
+
+* POST search or filtering options to the same URL: ``/{objects}``  **Example:**  ``curl -d 'options={"page":2,"rows":10,"sortOrder":"desc","sortBy":"jobId"}' http://{server}/smrtportal/api/jobs``
+
+The set of search and filtering parameters available is extensive, flexible, and is also derived from the jqGrid plugin. Key options include:
+
+* **Option:** ``page``  **Values:** ``int``  **Description:** Page number, starting from 1.
+
+* **Option:** ``rows``  **Values:** ``int``  **Description:** Rows per page. If the requested number is too large, a ``413 Request Entity Too Large`` error is generated.
+
+* **Option:** ``sortOrder``  **Values:** ``asc`` or ``desc``  **Description:** Sort order, ascending or descending.
+
+* **Option:** ``sortBy``  **Values:** ``String``; object property name  **Description:** ID of the column property to sort on. Example: ``JobId``.
+
+Arguments can be passed as JSON objects. For example: ``options={"sortOrder":"asc", "sortBy":"name", "page":1}``
+
+## <a name="EX"></a> Examples
+Commonly-used methods include sample curl commands and sample returned values. The examples assume a user named ``administrator`` and a server located at ``http://pssc1:8080/``.
+
+## <a name="REF_SVC"></a> Reference Service
+The References Service includes functions that you use to manage the reference sequences used in secondary analysis. (Reference sequences are used to map reads against a reference genome for resequencing and for filtering reads.)
+
+### <a name="REF_List_Ref"></a> List References Function
+Use this function to list the reference sequences available on the system.
+
+* **URL:** ``/reference-sequences``  
+* **Method:** ``GET`` or ``POST``
+* **Parameters:**  ``options=SearchOptions`` (Only ``sortOrder`` and ``sortBy`` are supported.)
+* **Returns:** ``PagedList<ReferenceEntry>``
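+* **Example** (a minimal sketch, following the conventions of the [Examples](#EX) section):
+```
+curl http://pssc1:8080/smrtportal/api/reference-sequences
+```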
+
+### <a name="REF_Ref_Det"></a> Reference Details Function
+Use this function to obtain details about a specific reference sequence.
+
+* **URL:** ``/reference-sequences/{id}``  
+* **Method:** ``GET``
+* **Parameters:**  ``id=string``
+* **Returns:** ``ReferenceEntry``
+
+### <a name="REF_List_Ref_Type"></a> List References by Type Function
+Use this function to list the reference sequences available on the system by their **type**.
+
+* **URL:** ``/reference-sequences/by-type/{name}``  
+* **Method:** ``GET`` or ``POST``
+* **Parameters:**  
+  * ``name= control`` or ``name=sample``
+  * ``options=SearchOptions`` (Only ``sortOrder``, ``sortBy`` and ``columnNames`` are supported.)
+* **Returns:** ``PagedList<ReferenceEntry>``
+
+### <a name="REF_Create_Ref"></a> Create Reference Function
+Use this function to **create** a new reference sequence.
+
+* **URL:** ``/reference-sequences/create`` (Using POST), ``/reference-sequences`` (Using PUT)  
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:**  
+  * ``data=ReferenceSequence`` (Using POST)
+  * ``ReferenceSequence`` (Using PUT)
+* **Returns:** ``PrimaryKey``
+
+### <a name="REF_Save_Ref"></a> Save Reference Function
+Use this function to **save** a reference sequence.
+
+* **URL:**  ``/reference-sequences/{id}``
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:**  
+  * ``id=string``
+  * ``data=ReferenceSequence``
+* **Returns:** ``A notice message object``
+
+### <a name="REF_Del_Ref"></a> Delete Reference Function
+Use this function to **delete** a reference sequence.
+
+* **URL:**  ``/reference-sequences/{id}``
+* **Method:** ``DELETE``
+* **Parameters:** ``id=string``
+* **Returns:** ``A notice message object``
+
+### <a name="REF_List_DB"></a> List Reference Dropbox Files Function
+Use this function to list the reference files located in the Reference Sequence Dropbox.
+
+* **URL:**  ``/reference-sequences/dropbox-files``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
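+* **Example** (hypothetical; the file name shown is a placeholder):
+```
+curl http://pssc1:8080/smrtportal/api/reference-sequences/dropbox-files
+[ "ecoli_K12.fasta" ]
+```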
+
+### <a name="REF_RSS"></a> RSS Feed Function
+Use this function to access an RSS feed which lists when secondary analysis jobs complete or fail.
+
+* **URL:**  ``/rss``
+* **Method:** ``GET``
+* **Returns:** ``An RSS XML file.``
+
+## <a name="USER"></a> User Service
+The User Service includes functions used to manage **users**, **roles** and **passwords**.
+
+### <a name="USR_List"></a> List Users Function
+Use this function to list users on the system. **(Administrators only)**
+
+* **URL:**  ``/users``
+* **Method:** ``GET`` or ``POST``
+* **Parameters:** ``options=SearchOptions``  (Only ``sortOrder``, ``sortBy`` and ``columnNames`` are supported.)
+* **Returns:** ``PagedList<User>``
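+* **Example** (a sketch; requires administrator credentials):
+```
+curl -u administrator:administrator#1 http://pssc1:8080/smrtportal/api/users
+```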
+
+### <a name="USR_Det"></a> User Details Function
+Use this function to obtain information about a specific user. **(Administrators only)**
+
+* **URL:**  ``/users``
+* **Method:** ``GET``
+* **Parameters:** ``userName=string`` 
+* **Returns:** ``User``
+
+### <a name="USR_Save"></a> Save User Function
+Use this function to **save** changes made to a user. **(Administrators only)**
+
+* **URL:**  ``/users/{userName}``
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:** 
+ * ``userName=string``
+ * ``data=User``
+* **Returns:** ``A notice message object.``
+
+### <a name="USR_Del"></a> Delete User Function
+Use this function to **delete** a user from the system. **(Administrators only)**
+
+* **URL:**  ``/users/{userName}``
+* **Method:** ``DELETE``
+* **Parameters:** ``userName=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="USR_Reg"></a> Register User Function
+Use this function to register a new user.
+
+* **URL:**  ``/users/register``
+* **Method:** ``POST``
+* **Parameters:** 
+ * ``data=User``  **(Required)**
+ * ``userName=string``
+ * ``email=string``
+ * ``password=string``
+ * ``confirmPassword=string``
+* **Returns:** ``User``
+
+### <a name="USR_CPW"></a> Change Password Function
+Use this function to change a user’s password with a specified replacement password. This functionality is available to administrators for **all** passwords.
+
+* **URL:**  ``/users/{userName}/change-password``
+* **Method:** ``POST``
+* **Parameters:** 
+ * ``data=User``  **(Required)**
+ * ``userName=string``
+ * ``newPassword=string``
+ * ``password=string``
+ * ``confirmPassword=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="USR_RPW"></a> Reset Password Function
+Use this function to reset a user’s password. The user is then asked to change their password. **(Administrators only)**
+
+* **URL:**  ``/users/{userName}/reset-password``
+* **Method:** ``GET``
+* **Returns:** ``A notice message object.``
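+* **Example** (a sketch; ``janeuser`` is a placeholder account name):
+```
+curl -u administrator:administrator#1 http://pssc1:8080/smrtportal/api/users/janeuser/reset-password
+```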
+
+### <a name="USR_LUDF"></a> List User-Defined Fields Function
+Use this function to obtain a list of user-defined fields. These fields are created using the RS Remote software. If a run specified a secondary analysis protocol, these fields (if defined) propagate throughout the secondary analysis pipeline.
+
+* **URL:**  ``/custom-fields``
+* **Method:** ``GET``
+* **Parameters:** ``options=SearchOptions`` (Only ``sortOrder``, ``sortBy`` and ``columnNames`` are supported.)
+* **Returns:** ``PagedList<CustomField>``
+
+### <a name="USR_LUDFN"></a> List User-Defined Field Names Function
+Use this function to obtain a list of the names of **user-defined fields**. These fields are created using the RS Remote software. If a run specified a secondary analysis protocol, these fields (if defined) propagate throughout the secondary analysis pipeline.
+
+* **URL:**  ``/custom-fields/names``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+
+## <a name="SA_SVC"></a> Secondary Analysis Input Service
+The Secondary Analysis Input Service includes functions used to manage the data associated with each SMRT Cell that is included in a secondary analysis job.
+
+### <a name="SA_LInput"></a> List Secondary Inputs Function
+Use this function to obtain a list of secondary analysis input.
+
+* **URL:**  ``/inputs``
+* **Method:** ``GET`` or ``POST``
+* **Parameters:** ``options=SearchOptions``
+* **Returns:** ``PagedList<Input>``
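+* **Example** (a sketch using the paging options described under [Search Conventions](#CONV); the ``sortBy`` field name is illustrative):
+```
+curl -d 'options={"page":1,"rows":10,"sortOrder":"desc","sortBy":"inputId"}' http://pssc1:8080/smrtportal/api/inputs
+```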
+
+### <a name="SA_InputDet"></a> Secondary Input Details Function
+Use this function to obtain details for a specified secondary analysis input.
+
+* **URL:**  ``/inputs``
+* **Method:** ``GET``
+* **Parameters:** ``id=int``
+* **Returns:** ``Input``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/inputs
+{
+"page" : 1,
+"records" : 1,
+"total" : 1,
+"rows" : [ {
+"adapterSequence" : "ATCTCTCTCttttcctcctcctccgttgttgttgttGAGAGAGAT",
+"bindingKitBarcode" : "000001001546011123111",
+"bindingKitControl" : "Standard_v1",
+"bindingKitExpirationDate" : "2011-12-31T00:00:00-0800",
+...
+} ]
+}
+```
+
+### <a name="SA_CR"></a> Create Secondary Input Function
+Use this function to **create** secondary analysis input.
+
+* **URL:**  ``/inputs/create`` (Using POST), ``/inputs`` (Using PUT)
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  
+  * ``data=Input`` (Using POST)
+  * ``Input`` (Using PUT)
+* **Returns:** ``PrimaryKey``
+
+### <a name="SA_SA"></a> Save Secondary Input Function
+Use this function to **save** secondary analysis input.
+
+* **URL:**  ``/inputs/{id}``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  
+  * ``data=Input``
+  * ``id=int``
+* **Returns:** ``A notice message object.``
+
+### <a name="SA_LTime"></a> Last Timestamp of Secondary Input Function
+Use this function to obtain the time of the last secondary analysis input saved to the database.
+
+* **URL:**  ``/inputs/last-timestamp``
+* **Method:** ``GET``
+* **Returns:** ``Date``
+
+### <a name="SA_Imp"></a> Import Secondary Input Metadata Function
+Use this function to **import** secondary analysis input.
+
+* **URL:**  ``/inputs/import``
+* **Method:** ``POST``
+* **Parameters:**  ``data=array of Collections from instrument``
+* **Returns:** ``List<Input>``
+
+### <a name="SA_Scan"></a> Scan for New Input Metadata Function
+Use this function to **scan** for secondary analysis input.
+
+* **URL:**  ``/inputs/scan``
+* **Method:** ``POST``
+* **Parameters:**  ``paths=array of string``
+* **Returns:** ``List<Input>``
+* **Example:** ``curl -u administrator:administrator#1 -d 'paths=["/data/smrta/smrtanalysis/common/inputs_dropbox"]' http://secondary_host:8088/smrtportal/api/inputs/scan``
+* **Python® code example:**
+
+```
+import os
+import logging
+import urllib
+import urllib2
+import json
+import base64
+
+log = logging.getLogger(__name__)
+
+class DefaultProgressErrorHandler(urllib2.HTTPDefaultErrorHandler):
+    def http_error_default(self, req, fp, code, msg, headers):
+        result = urllib2.HTTPError(req.get_full_url(), code, msg, headers, fp)
+        result.status = code
+        return result
+
+def request_to_string(request):
+    "for debugging"
+    buffer = []
+    buffer.append('Method: %s' % request.get_method())
+    buffer.append('Host: %s' % request.get_host())
+    buffer.append('Selector: %s' % request.get_selector())
+    buffer.append('Data: %s' % request.get_data())
+    return os.linesep.join(buffer)
+
+def scan():
+    url = 'http://localhost:8080/smrtportal/api/inputs/scan'
+    # Example path to scan; a LIMS-style path such as /opt/testdata/LIMS/2311013/0002 would also work.
+    c_path = '/data/smrta/smrtanalysis/common/inputs_dropbox'
+    # To scan more than one path, repeat the form parameter by encoding a
+    # sequence of pairs, e.g. urllib.urlencode([('paths[]', p) for p in paths]).
+    scan_data = urllib.urlencode({'paths[]': c_path})
+    request = urllib2.Request(url, data=scan_data)
+    request.add_header('User-Agent', 'admin-user')
+    key = 'Basic %s' % (base64.b64encode("administrator:administrator#1"))
+    request.add_header('Authorization', key)
+    opener = urllib2.build_opener(DefaultProgressErrorHandler())
+    response = opener.open(request)
+    # The response body is a JSON list with one entry per path scanned;
+    # each entry carries the new input's idValue.
+    retList = json.loads(response.read())
+    return retList[0]['idValue']
+
+def saveJob(inputId):
+    url = 'http://localhost:8080/smrtportal/api/jobs/create'
+    job = {
+        'name':'test_job',
+        'createdBy':'admin',
+        'protocolName':'RS_Filter_Only.1',
+        'groupNames':['all'],
+        'inputIds':[inputId]
+    }
+    job_data = urllib.urlencode( {'data': json.dumps(job)  } )
+
+    request = urllib2.Request(url, data=job_data)
+    print request_to_string(request)
+
+    request.add_header('User-Agent', 'admin-user')
+    key = 'Basic %s' % (base64.b64encode("administrator:administrator#1"))
+    request.add_header('Authorization', key)
+
+    opener = urllib2.build_opener(DefaultProgressErrorHandler())
+    response = opener.open(request)
+    ret = json.loads( response.read() )
+    return ret['idValue']
+
+def startJob(jobId):
+    url = 'http://localhost:8080/smrtportal/api/jobs/{i}/start'.format(i=jobId)
+
+    #This is a GET
+    request = urllib2.Request(url)
+    print request_to_string(request)
+
+    request.add_header('User-Agent', 'admin-user')
+    key = 'Basic %s' % (base64.b64encode("administrator:administrator#1"))
+    request.add_header('Authorization', key)
+
+    opener = urllib2.build_opener(DefaultProgressErrorHandler())
+    response = opener.open(request)
+    ret = json.loads( response.read() )
+    print( ret )
+
+def test():
+    inputId = scan()
+    print( 'Scanned inputId = %s' % inputId ) 
+    jobId = saveJob(inputId)
+    print( 'jobId = %s' % jobId )     
+    startJob(jobId)
+```
+### <a name="SA_Del"></a> Delete Secondary Input Function
+Use this function to **delete** specified secondary analysis input. **(Scientists and administrators only)**
+
+* **URL:**  ``/inputs/{id}``
+* **Method:** ``DELETE``
+* **Parameters:**  ``id=int``
+* **Returns:** ``A notice message object.``
+
+### <a name="SA_Comp"></a> Compatibility Function
+Use this function to return information specifying whether the SMRT Cell inputs for the job are compatible.
+
+* **URL:**  ``/inputs/compatibility``
+* **Method:** ``GET``
+* **Parameters:**  ``ids=[array of ids]``
+* **Returns:** ``JSON object specifying whether or not the inputs are compatible.``
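+* **Example** (a sketch; ``-G`` makes curl send the URL-encoded ``ids`` parameter with a GET request, and the IDs are placeholders):
+```
+curl -G --data-urlencode 'ids=[78807,78808]' http://pssc1:8080/smrtportal/api/inputs/compatibility
+```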
+
+### <a name="SA_Group"></a> Groups Function
+Use this function to update group information for SMRT Cell inputs.
+
+* **URL:**  ``/inputs/{INPUT_ID}/groups``
+* **Method:** ``POST``
+* **Parameters:**  ``data=[name of groups]``  **Example:** ``data="['grp1', 'grp2']"``
+* **Returns:** ``A notice message object.``
+
+### <a name="SA_Clean"></a> Cleanup Function
+Use this function to delete any input that is **unassociated** with a job and has an invalid or empty ``collectionPathUri``. This is useful for cleaning up duplicate SMRT Cells located at different paths. When you scan and import SMRT Cells from SMRT Portal and the same SMRT Cell ID already exists, the existing path is updated to the new location; no duplicate entries are created. **(Scientists and administrators only)**
+
+* **URL:**  ``/inputs/{INPUT_ID}/cleanup``
+* **Method:** ``DELETE``
+* **Returns:** ``A notice message object that includes the list of deleted input IDs.``
+
+
+## <a name="JOB_SVC"></a> Jobs Service
+The Jobs Service includes functions used to manage secondary analysis jobs.
+
+### <a name="JOB_List"></a> List Jobs Function
+Use this function to obtain a list of **all** secondary analysis jobs.
+
+* **URL:**  ``/jobs``
+* **Method:** ``GET`` or ``POST``
+* **Parameters:**  ``options=SearchOptions``
+* **Returns:** ``PagedList<Job>``
+* **Example:**
+
+```
+curl -d 'options={"filters":{"rules":[{"field":"createdBy","op":"eq","data":"AutomationSystem"},{"field":"jobId","op":"lt","data":"30000"}],"groupOp":"and"},"columnNames":["jobId"],"rows":"0"}' http://pssc1:8080/smrtportal/api/jobs
+{
+"page" : 1,
+"records" : 57,
+"total" : 1,
+"rows" : [ {
+"jobId" : 26392
+}, {
+"jobId" : 26360
+}, {
+"jobId" : 26359
+}, {
+...
+}]
+}
+```
+
+### <a name="JOB_ListBStatus"></a> List Jobs by Status Function
+Use this function to obtain a list of secondary analysis jobs, based on their **job status**.
+
+* **URL:**  ``/jobs/by-status/{jobStatus}``
+* **Method:** ``GET`` or ``POST``
+* **Parameters:**  ``options=SearchOptions``
+* **Returns:** ``PagedList<Job>``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/by-status/Completed
+{
+"page" : 1,
+"records" : 25,
+"total" : 3,
+"rows" : [ {
+"automated" : true,
+"collectionProtocol" : "Standard Seq v2",
+...
+} ]
+}
+```
+
+### <a name="JOB_ListBProt"></a> List Jobs By Protocol Function
+Use this function to list the currently open secondary analysis jobs, based on a specified **protocol**.
+
+* **URL:**  ``/jobs/by-protocol/{protocol}``
+* **Method:** ``GET`` or ``POST``
+* **Parameters:**  ``protocol=string``, ``options=SearchOptions``, ``jobStatus=`` status code such as ``NotStarted``.
+* **Returns:** ``PagedList<Job>``
+* **Example:**
+```
+curl -d 'jobStatus=Completed' http://pssc1:8080/smrtportal/api/jobs/by-protocol/RS_resequencing.1
+{
+"page" : 1,
+"records" : 1,
+"total" : 1,
+"rows" : [ {
+"automated" : false,
+...
+"whenStarted" : null
+} ]
+}
+```
+
+### <a name="JOB_Det"></a> Job Details Function
+Use this function to display **details** of a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}`` or ``/jobs/by-name/{name}``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``, ``name=string``
+* **Returns:** ``Job``
+* **Examples:**
+```
+By ID:
+curl -u administrator:administrator#1 http://pssc1:8080/smrtportal/api/jobs/016437
+{
+"jobId" : 16437,
+"protocolName" : "RS_Site_Acceptance_Test.1",
+"referenceSequenceName" : "lambda",
+"jobStatus" : "Completed",
+...
+"whenModified" : "2012-01-31T09:12:48-0800",
+"modifiedBy" : null
+}
+By Name:
+curl -u administrator:administrator#1 http://pssc1:8080/smrtportal/api/jobs/by-name/2311084_0002
+{
+"jobId" : 16437,
+"protocolName" : "RS_Site_Acceptance_Test.1",
+"referenceSequenceName" : "lambda",
+"jobStatus" : "Completed",
+...
+"whenModified" : "2012-01-31T09:12:48-0800",
+"modifiedBy" : null
+}
+```
+
+### <a name="JOB_CR"></a> Create Job Function
+Use this function to **create** a new secondary analysis job.
+
+* **URL:**  ``/jobs/create`` (Using POST), ``/jobs`` (Using PUT)
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:**  ``data=Job`` (Using POST), ``job`` (Using PUT). In both cases, the name must be **unique**, and ``CreatedBy`` must be **non-null**.
+* **Returns:** ``PrimaryKey``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d 'data={"name":"DemoJobName", "createdBy":"testuser", "description":"demo job", "protocolName":"RS_Resequencing.1", "groupNames":["all"], "inputIds":["78807"]}' http://pssc1:8080/smrtportal/api/jobs/create
+{
+"idValue" : 16478,
+"idProperty" : "jobId"
+}
+```
+
+### <a name="JOB_Save"></a> Save Job Function
+Use this function to **save** a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``id=int``, ``data=Job``
+* **Returns:** ``A notice message object.``
+
+### <a name="JOB_Del"></a> Delete Job Function
+Use this function to **delete** a secondary analysis job. **(Administrators only)**
+
+* **URL:**  ``/jobs/{id}``
+* **Method:** ``DELETE``
+* **Parameters:**  ``id=int``
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl -u "administrator:administrator#1" -X DELETE http://pssc1:8080/smrtportal/api/jobs/16478
+{
+"success" : true,
+"message" : "Job 16478 has been permanently deleted"
+}
+```
+
+### <a name="JOB_Arch"></a> Archive Job Function
+Use this function to **archive** a secondary analysis job. **(Administrators only)**
+
+* **URL:**  ``/jobs/{id}/archive`` (Using GET), ``/jobs/archive`` (Using POST)
+* **Method:** ``GET``, ``POST``
+* **Parameters:**  ``id=int``  (Using GET),  ``ids=int[]``  (Using POST)
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d 'ids=[16437,16438]' http://pssc1:8080/smrtportal/api/jobs/archive
+{
+"success" : true,
+"message" : "Archived 2 jobs."
+}
+```
+
+### <a name="JOB_RestArch"></a> Restore Archived Job Function
+Use this function to **restore** a secondary analysis job that was archived. **(Administrators only)**
+
+* **URL:**  ``/jobs/{id}/restore`` (Using GET), ``/jobs/restore`` (Using POST)
+* **Method:** ``GET``, ``POST``
+* **Parameters:**  ``id=int``  (Using GET),  ``ids=int[]``  (Using POST)
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d 'ids=[16437,16438]' http://pssc1:8080/smrtportal/api/jobs/restore
+{
+"success" : true,
+"message" : "Restored 2 jobs."
+}
+```
+
+### <a name="JOB_Metrics"></a> Get Job Metrics Function
+Use this function to retrieve **metrics** for a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/metrics2``
+* **Method:** ``GET``
+* **Parameter:**  ``id=int``
+* **Returns:** ``List<String>``
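+* **Example** (a minimal sketch, reusing job ``16437`` from the other examples):
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/metrics2
+```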
+
+### <a name="JOB_Prot"></a> Get Job Protocol Function
+Use this function to **return the protocol** used by a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/protocol``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``Protocol XML document``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/protocol
+<smrtpipeSettings>
+<protocol version="1.3.0" id="RS_Site_Acceptance_Test.1" editable="false">
+<param name="name" label="Protocol Name" editable="false">
+...
+<fileName>settings.xml</fileName>
+</smrtpipeSettings>
+```
+You can also return a protocol as a `json` object by specifying a header item:
+```
+curl --verbose -H "accept:application/json" http://localhost:8080/smrtportal/api/jobs/16454/protocol
+```
+
+### <a name="JOB_SetProt"></a> Set Job Protocol Function
+Use this function to **specify the protocol** used by a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/protocol``
+* **Method:** ``POST``
+* **Parameters:**  ``id=int``, ``data=Xml`` (escaped for transmission from a web browser, for example with the JavaScript ``escape`` function)
+* **Returns:** ``A notice message object.``
+
+As with the Get Job Protocol function, you can retrieve the current protocol as a `json` object by specifying a header item:
+```
+curl --verbose -H "accept:application/json" http://localhost:8080/smrtportal/api/jobs/16454/protocol
+```
+
+### <a name="JOB_Input"></a> Get Job Inputs Function
+Use this function to return information about the SMRT Cell data used for a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/inputs``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``PagedList<Input>``
+
+### <a name="JOB_Start"></a> Start Job Function
+Use this function to **start** a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/start``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``JobStatus``
+* **Example:**
+```
+curl -u administrator:administrator#1 http://pssc1:8080/smrtportal/api/jobs/16479/start
+{
+"jobStatusId" : 1775,
+"jobId" : 16479,
+"code" : "Submitted",
+"jobStage" : null,
+"moduleName" : null,
+"percentComplete" : 0,
+"message" : "Job submitted",
+"name" : null,
+"description" : null,
+"whenCreated" : null,
+"createdBy" : null,
+"whenModified" : null,
+"modifiedBy" : null
+}
+```
+
+### <a name="JOB_GetStatus"></a> Get Job Status Function
+Use this function to obtain the **status** of a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/status``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``JobStatus``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16479/status
+{
+"jobStatusId" : 1780,
+"jobId" : 16479,
+"code" : "In Progress",
+"jobStage" : "Filtering",
+"moduleName" : "P_FilterReports/adapterRpt",
+"percentComplete" : 100,
+"message" : "task://016479/P_FilterReports/adapterRpt complete",
+"name" : null,
+"description" : null,
+"whenCreated" : "2012-02-03T17:38:06-0800",
+"createdBy" : "smrtpipe",
+"whenModified" : "2012-02-03T17:38:06-0800",
+"modifiedBy" : null
+}
+```
+
+### <a name="JOB_UpStatus"></a> Update Job Status Function
+Use this function to **modify the status** of a secondary analysis job. **(Scientists and administrators only)**
+
+* **URL:**  ``/jobs/{id}/status``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``id=int``, ``progress=JobStatus``
+* **Returns:** ``PrimaryKey``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d 'progress={"code":"Failed"}' http://pssc1:8080/smrtportal/api/jobs/16471/status
+{
+"success" : true,
+"message" : "Job status updated"
+}
+```
+
+### <a name="JOB_Hist"></a> Job History Function
+Use this function to obtain the **history** of a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/history``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``List<JobStatus>``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/history
+[ {
+"jobStatusId" : 1773,
+"jobId" : 16437,
+"code" : "Completed",
+"jobStage" : null,
+"moduleName" : null,
+"percentComplete" : 0,
+"message" : null,
+"name" : null,
+"description" : null,
+"whenCreated" : "2012-02-03T17:13:31-0800",
+"createdBy" : null,
+"whenModified" : "2012-02-03T17:13:31-0800",
+"modifiedBy" : null
+}, {
+...
+}]
+```
+
+### <a name="JOB_Log"></a> Job Log Function
+Use this function to obtain the **log** for a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/log``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``Text file``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/log
+[INFO] 2012-01-30 23:52:41,437 [SmrtPipeContext 139] Configuration override for PROGRESS_URL: Old: --> New: http://pssc1:8080/smrtportal/api
+[INFO] 2012-01-30 23:52:41,437 [SmrtPipeContext 150] Changing working directory to /tmp/tmpTKPKi4
+...
+[INFO] 2012-01-31 00:35:10,443 [SmrtPipeContext 362] Removed 2 temporary directories
+[INFO] 2012-01-31 00:35:10,450 [SmrtPipeContext 365] Removed 1 temporary files
+[INFO] 2012-01-31 00:35:10,450 [SmrtPipeMain 394] Successfully exiting smrtpipe
+```
+
+### <a name="JOB_TOC"></a> Analysis Table of Content Function
+Use this function to return a JSON object listing the reports and data files that were generated for a secondary analysis job. This function is used primarily by SMRT Portal to display the report and data links in the View Data/Job Details page.
+
+* **URL:**  ``/jobs/{id}/contents``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``JSON object listing contents.``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/contents
+{
+"reportGroups" : [ {
+"name" : "General",
+"members" : [ {
+"group" : "General",
+"title" : "Workflow",
+"links" : [ {
+"path" : "workflow/Workflow.summary.html",
+"format" : "text/html"
+...
+}
+```
+
+### <a name="JOB_File"></a> Job Analysis File Function
+Use this function to obtain any specified **file** that was generated during a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/contents/{file}`` or ``/jobs/{id}/contents/{dir}/{file}``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``, ``file=filename``, ``dir=directory``
+* **Returns:** ``Data file, report XML, image, and so on.``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/contents/results/overview.xml
+<?xml version="1.0" encoding="UTF-8"?>
+<report>
+<layout onecolumn="true"/>
+<title>General Attribute Report</title>
+<attributes>
+<attribute id="n_smrt_cells" name="# of SMRT Cells" value="1">1</attribute>
+<attribute id="n_movies" name="# of Movies" value="2">2</attribute>
+</attributes>
+</report>
+```
+
+### <a name="JOB_COmplete"></a> Mark Job Complete Function
+Use this function to specify that a job using more than one SMRT Cell is complete.
+
+* **URL:**  ``/jobs/{id}/complete``
+* **Method:** ``GET``
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16477/complete
+{
+"jobStatusId" : 1844,
+"jobId" : 16477,
+"code" : "Submitted",
+"jobStage" : null,
+"moduleName" : null,
+"percentComplete" : 0,
+"message" : "Job submitted",
+"name" : null,
+"description" : null,
+"whenCreated" : null,
+"createdBy" : null,
+"whenModified" : null,
+"modifiedBy" : null
+}
+```
+
+### <a name="JOB_inDrop"></a> List Jobs in Dropbox Function
+Use this function to list the jobs located in the Job Import Dropbox.
+
+* **URL:**  ``/jobs/dropbox-paths``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+* **Example:**
+```
+curl -u administrator:administrator#1 http://pssc1:8080/smrtportal/api/jobs/dropbox-paths
+[ "999991" ]
+```
+
+### <a name="JOB_Import"></a> Import Job Function
+Use this function to **import** a job located in the Job Import Dropbox.
+
+* **URL:**  ``/jobs/import``
+* **Method:** ``POST``
+* **Parameters:** ``paths=array of strings``
+* **Returns:** ``List<PrimaryKey>``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d 'paths=["/opt/smrtanalysis/common/jobs_dropbox/035169"]' http://pssc1:8080/smrtportal/api/jobs/import
+[ {
+"idValue" : 16480,
+"idProperty" : "jobId"
+} ]
+```
+
+### <a name="JOB_Heart"></a> Job Last Heartbeat Function
+Use this function to find out if a job is still alive.
+
+* **URL:**  ``/jobs/{id}/status/heartbeat``
+* **Method:** ``GET``
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d "data={'lastHeartbeat':'2011-06-20T00:50:20-0700'}" http://pssc1:8080/smrtportal/api/jobs/016471/status/heartbeat
+{
+"success" : true,
+"message" : "Job lastHeartbeat status updated"
+}
+```
+
+### <a name="JOB_RR"></a> Job Raw-Read Function
+Use this function to download a data file generated by a job.
+
+* **URL:**  ``/jobs/{id}/raw-reads``
+* **Method:** ``GET``
+* **Returns:** ``Data file, report XML, image, and so on.``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/raw-reads?format=fasta
+```
+
+## <a name="PRO_SVC"></a> Protocol Service
+The Protocol Service includes functions that you use to manage the protocols used by secondary analysis jobs.
+
+### <a name="PRO_List"></a> List Protocols Function
+Use this function to obtain all the **active** and **inactive** protocols in the system.
+
+* **URL:**  ``/protocols``
+* **Method:** ``GET``
+* **Returns:** ``PagedList<Protocol>``
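+* **Example** (a minimal sketch):
+```
+curl http://pssc1:8080/smrtportal/api/protocols
+```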
+
+### <a name="PRO_Det"></a> Protocol Details Function
+Use this function to obtain **details** about a protocol.
+
+* **URL:**  ``/protocols/{id}``
+* **Method:** ``GET``
+* **Parameters:**  ``id=string``
+* **Returns:** ``An XML protocol file.``
+
+You can also return a protocol as a `json` object by specifying a header item:
+```
+curl --verbose -H "accept:application/json" http://localhost:8080/smrtportal/api/jobs/16454/protocol
+```
+
+### <a name="PRO_UP"></a> Update Protocol Function
+Use this function to **update** a protocol.
+
+* **URL:**  ``/protocols/{id}``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``id=string``, ``data=Xml``
+* **Returns:** ``A notice message object.``
+
+You can also update a protocol as a `json` object by specifying a header item:
+```
+curl --verbose -H "accept:application/json" http://localhost:8080/smrtportal/api/jobs/16454/protocol
+```
+
+### <a name="PRO_Del"></a> Delete Protocol Function
+Use this function to **permanently delete** a protocol.
+
+* **URL:**  ``/protocols/{id}``
+* **Method:** ``DELETE``
+* **Parameters:**  ``id=string``
+* **Returns:** ``A notice message object.``
+
+## <a name="SAM_SVC"></a> Sample Sheet Service
+The Sample Sheet Service includes a function to validate a specified sample sheet.
+
+### <a name="SAM_Val"></a> Validate Sample Sheet Function
+
+* **URL:**  ``/sample-sheets/validate``
+* **Method:** ``POST``
+* **Parameters:**  ``sampleSheet=SampleSheet``
+* **Returns:** ``A notice message object.``
+
+## <a name="SET_SVC"></a> Settings Service
+The Settings Service includes functions that you use to manage the SMTP host, send test email, manage instrument URIs, and manage the file input paths where SMRT Portal looks for secondary analysis input, reference sequences, and jobs to import.
+
+### <a name="SET_CheckSpace"></a> Check Free Disk Space Function
+Use this function to check how much free space resides on the disk containing the jobs directory, by default located at ``/opt/smrtanalysis/common/jobs``.
+
+* **URL:**  ``/settings/free-space``
+* **Method:** ``GET``
+* **Returns:** ``Floating point value between 0 and 1, representing the fraction of disk space that is free.``
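+* **Example** (a sketch; the value shown is illustrative):
+```
+curl http://pssc1:8080/smrtportal/api/settings/free-space
+0.42
+```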
+
+### <a name="SET_GetDrop"></a> Get Job Dropbox Function
+Use this function to obtain the location of the dropbox where SMRT Portal looks for jobs to import.
+
+* **URL:**  ``/settings/job-dropbox``
+* **Method:** ``GET``
+* **Returns:** ``The path for the job dropbox directory.``
+
+### <a name="SET_SetDrop"></a> Set Job Dropbox Function
+Use this function to **specify** the location of the Job Import Dropbox where SMRT Portal looks for jobs to import.
+
+* **URL:**  ``/settings/job-dropbox``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``path=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_GetRefDrop"></a> Get Reference Sequence Dropbox Function
+Use this function to obtain the location of the Reference Sequence Dropbox where SMRT Portal looks for reference sequences.
+
+* **URL:**  ``/settings/reference-dropbox``
+* **Method:** ``GET``
+* **Returns:** ``The path for the reference sequence dropbox directory.``
+
+### <a name="SET_SetRefDrop"></a> Set Reference Sequence Dropbox Function
+Use this function to **specify** the location of the Reference Sequence Dropbox where SMRT Portal looks for reference sequences.
+
+* **URL:**  ``/settings/reference-dropbox``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``path=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_GetSMTP"></a> Get SMTP Host Function
+Use this function to obtain the name of the current SMTP host.
+
+* **URL:**  ``/settings/smtp-host``
+* **Method:** ``GET``
+* **Returns:** ``The host name.``
+
+### <a name="SET_SetSMTP"></a> Set SMTP Host Function
+Use this function to **specify** the name of the SMTP host to use.
+
+* **URL:**  ``/settings/smtp-host``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``host=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_Email"></a> Send Test Email Function
+Use this function to send a test email to the administrator, using the specified SMTP Host.
+
+* **URL:**  ``/settings/smtp-host/test``
+* **Method:** ``GET``
+* **Parameters:**  ``host=string``
+* **Returns:** ``A notice message object. A test email is then sent to the administrator.``
+
+### <a name="SET_GetPath"></a> Get Input Paths Function
+Use this function to obtain the file input paths where SMRT Portal looks for secondary analysis input.
+
+* **URL:**  ``/settings/input-paths``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+
+### <a name="SET_AddPath"></a> Add Input Paths Function
+Use this function to **add** file input paths where SMRT Portal looks for secondary analysis input.
+
+* **URL:**  ``/settings/input-paths``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``data=array of paths``
+* **Returns:** ``A notice message object.``
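+* **Example** (a sketch; the path is a placeholder):
+```
+curl -u administrator:administrator#1 -d 'data=["/data/smrta/userdata/inputs"]' http://pssc1:8080/smrtportal/api/settings/input-paths
+```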
+
+### <a name="SET_AddPath"></a> Remove Input Paths Function
+Use this function to **remove** file input paths where SMRT Portal looks for secondary analysis input.
+
+* **URL:**  ``/settings/input-paths``
+* **Method:** ``DELETE``
+* **Parameters:**  ``data=array of paths``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_ValPath"></a> Validate Path for use in pbids Function
+Use this function to validate the URI (Universal Resource Identifier) path that specifies where the primary analysis data is stored. You specify the path using the RS Remote software; the path uses the ``pbids`` format.
+
+* **URL:**  ``/settings/validate-path``
+* **Method:** ``POST``
+* **Parameters:**  ``path=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_GetURI"></a> Get Instrument URIs Function
+Use this function to obtain the URI (Universal Resource Identifier) that specifies the location of the PacBio instrument(s) running the Instrument Control Web Services.
+
+* **URL:**  ``/settings/instrument-uris``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+
+### <a name="SET_SetURI"></a> Add Instrument URIs Function
+Use this function to **specify** the URI (Universal Resource Identifier) that specifies the location of the PacBio instrument(s) running the Instrument Control Web Services.
+
+* **URL:**  ``/settings/instrument-uris``
+* **Method:** ``POST``,  ``PUT``
+* **Parameters:**  ``data=array of URIs``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_DelURI"></a> Remove Instrument URIs Function
+Use this function to **remove** the URI (Universal Resource Identifier) that specifies the location of the PacBio instrument(s) running the Instrument Control Web Services.
+
+* **URL:**  ``/settings/instrument-uris``
+* **Method:** ``DELETE``
+* **Parameters:**  ``data=array of URIs``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_TestURI"></a> Test Instrument URIs Function
+Use this function to **test** the URI (Universal Resource Identifier) that specifies the location of the PacBio instrument(s) running the Instrument Control Web Services.
+
+* **URL:**  ``/settings/instrument-uris/test``
+* **Method:** ``POST``
+* **Parameters:**  ``uri=instrument URI``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_CheckUI"></a> Check Anonymous UI Access Function
+Use this function to check whether users have read-only access to SMRT Portal without logging in. (Users must still log in to **create** or **modify** jobs.)
+
+* **URL:**  ``/settings/restrict-web-access``
+* **Method:** ``GET``
+* **Returns:** ``True/False``
+
+### <a name="SET_SetUI"></a> Set Anonymous UI Access Function
+Use this function to **specify** whether users have read-only access to SMRT Portal without logging in. (Users must still log in to **create** or **modify** jobs.)
+
+* **URL:**  ``/settings/restrict-web-access``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``value=true|false``
+* **Returns:** ``A notice message object.``
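+* **Example** (a sketch; requires credentials that may change settings):
+```
+curl -u administrator:administrator#1 -d 'value=false' http://pssc1:8080/smrtportal/api/settings/restrict-web-access
+```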
+
+### <a name="SET_CheckWS"></a> Check Anonymous Web Services Access Function
+Use this function when your organization has written custom software to access SMRT Pipe, or integrate with a LIMS system. The function checks whether software can have access to certain web services methods **without** authentication.
+
+* **URL:**  ``/settings/restrict-service-access``
+* **Method:** ``GET``
+* **Returns:** ``True/False``
+
+### <a name="SET_SetWS"></a> Set Anonymous Web Services Access Function
+Use this function when your organization has written custom software to access SMRT Pipe, or integrate with a LIMS system. The function specifies whether software can have access to certain web services methods **without** authentication. (The software would supply the credentials programmatically.)
+
+* **URL:**  ``/settings/restrict-service-access``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``value=true|false``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_SetUIWS"></a> Set Anonymous Web and UI Access Function
+Use this function to specify: 1) whether a user has read-only access to SMRT Portal, and 2) whether software can use certain web services methods without authentication.
+
+* **URL:**  ``/settings/restrict-access``
+* **Method:** ``POST``
+* **Parameters:**  ``web=true|false``, ``service=true|false``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_GetArch"></a> Get Job Archive Directory Function
+Use this function to obtain the path to the directory used to store archived jobs.
+
+* **URL:**  ``/settings/job-archive``
+* **Method:** ``GET``
+* **Returns:** ``The path for the job archive directory.``
+
+### <a name="SET_SetArch"></a> Set Job Archive Directory Path Function
+Use this function to **set** the path to the directory used to store archived jobs.
+
+* **URL:**  ``/settings/job-archive``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``path=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_EnableWizard"></a> Set Enable Wizards
+Use this function to **enable** or **disable** the Protocol Selector wizard.
+
+* **URL:**  ``/settings/enable-wizards``
+* **Method:** ``GET``
+* **Returns:** ``A notice message object.``
+
+## <a name="GR_SVC"></a> Group Service
+The Group Service includes functions that you use to manage groups of SMRT Portal users.
+
+### <a name="GR_CR"></a> Create Group Function
+Use this function to **create** a new group of users. **(Administrators only)**
+
+* **URL:**  ``/groups/create`` (Using POST), ``/groups`` (Using PUT)
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:**  ``data=Group`` (Using POST), ``group`` (Using PUT). In both cases, the name must be **unique**, and ``CreatedBy`` must be **non-null**.
+* **Returns:** ``PrimaryKey``
+
+### <a name="GR_Save"></a> Save Group Function
+Use this function to **save** a specified group of users. **(Administrators only)**
+
+* **URL:**  ``/groups/{id}``
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:**  ``id=int``, ``data=group``
+* **Returns:** ``A notice message object.``
+
+### <a name="GR_Del"></a> Delete Group Function
+Use this function to **delete** a specified group of users. **(Administrators only)**
+
+* **URL:**  ``/groups/{id}``
+* **Method:** ``DELETE``
+* **Parameters:**  ``id=int``
+* **Returns:** ``A notice message object.``
+
+### <a name="GR_ListNames"></a> List Group Names Function
+Use this function to get a list of the names of groups of users on the system. **(Administrators only)**
+
+* **URL:**  ``/groups/names``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+
+### <a name="GR_List"></a> List Groups Function
+Use this function to return information about the groups of users available on the system.  **(Administrators only)**
+
+* **URL:**  ``/groups``
+* **Method:** ``GET``, ``POST``
+* **Parameters:**  ``options=SearchOptions``
+* **Returns:** ``PagedList<Group>``
+
+***
+
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2014, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
+**P/N 000-999-643-06**
\ No newline at end of file
diff --git a/docs/Secondary-Analysis-Web-Services-API-v2.3.0.md b/docs/Secondary-Analysis-Web-Services-API-v2.3.0.md
new file mode 100644
index 0000000..0720e27
--- /dev/null
+++ b/docs/Secondary-Analysis-Web-Services-API-v2.3.0.md
@@ -0,0 +1,1463 @@
+* [Introduction](#Intro)
+* [Security](#Sec)
+* [Overview](#Ov)
+* [Web Services Behavior](#WSB)
+* [HTTP Response Codes](#HCODE)
+* [Search Conventions](#CONV)
+* [Examples](#EX)
+* [Reference Sequence Service](#REF_SVC)
+  * [List Reference Sequences Function](#REF_List_Ref)
+  * [Reference Sequence Details Function](#REF_Ref_Det)
+  * [List Reference Sequences by Type Function](#REF_List_Ref_Type)
+  * [Create Reference Sequence Function](#REF_Create_Ref)
+  * [Save Reference Sequence Function](#REF_Save_Ref)
+  * [Delete Reference Sequence Function](#REF_Del_Ref)
+  * [List Reference Sequence Dropbox Files Function](#REF_List_DB)
+  * [RSS Feed Function](#REF_RSS)
+* [User Service](#USER)
+  * [List Users Function](#USR_List)
+  * [User Details Function](#USR_Det)
+  * [Create User Function](#USR_Create)
+  * [Save User Function](#USR_Save)
+  * [Delete User Function](#USR_Del)
+  * [Register User Function](#USR_Reg)
+  * [Change Password Function](#USR_CPW)
+  * [Reset Password Function](#USR_RPW)
+  * [List User-Defined Fields Function](#USR_LUDF)
+  * [List User-Defined Field Names Function](#USR_LUDFN)
+* [Secondary Analysis Input Service](#SA_SVC)
+  * [List Secondary Analysis Inputs Function](#SA_LInput)
+  * [Secondary Analysis Input Details Function](#SA_InputDet)
+  * [Create Secondary Analysis Input Function](#SA_CR)
+  * [Save Secondary Analysis Input Function](#SA_SA)
+  * [Last Timestamp of Secondary Analysis Input Function](#SA_LTime)
+  * [Import Secondary Analysis Input Metadata Function](#SA_Imp)
+  * [Scan for New Input Metadata Function](#SA_Scan)
+  * [Delete Secondary Analysis Input Function](#SA_Del)
+  * [Compatibility Function](#SA_Comp)
+  * [Groups Function](#SA_Group)
+  * [Cleanup Function](#SA_Clean)
+* [Jobs Service](#JOB_SVC)
+  * [List Jobs Function](#JOB_List)
+  * [List Jobs by Status Function](#JOB_ListBStatus)
+  * [List Jobs By Protocol Function](#JOB_ListBProt)
+  * [Job Details Function](#JOB_Det)
+  * [Create Job Function](#JOB_CR)
+  * [Save Job Function](#JOB_Save)
+  * [Delete Job Function](#JOB_Del)
+  * [Archive Job Function](#JOB_Arch)
+  * [Restore Archived Job Function](#JOB_RestArch)
+  * [Get Job Metrics Function](#JOB_Metrics)
+  * [Get Job Protocol Function](#JOB_Prot)
+  * [Set Job Protocol Function](#JOB_SetProt)
+  * [Get Job Inputs Function](#JOB_Input)
+  * [Start Job Function](#JOB_Start)
+  * [Get Job Status Function](#JOB_GetStatus)
+  * [Update Job Status Function](#JOB_UpStatus)
+  * [Job History Function](#JOB_Hist)
+  * [Job Log Function](#JOB_Log)
+  * [Analysis Table of Content Function](#JOB_TOC)
+  * [Job Analysis File Function](#JOB_File)
+  * [Mark Job Complete Function](#JOB_COmplete)
+  * [List Jobs in Dropbox Function](#JOB_inDrop)
+  * [Import Job Function](#JOB_Import)
+  * [Job Last Heartbeat Function](#JOB_Heart)
+  * [Job Raw-Read Function](#JOB_RR)
+  * [Get Job Tech Support Files (TGZ) Function](#JOB_Get_TGZ)
+  * [Get Job Tech Support Files (ZIP) Function](#JOB_Get_ZIP)
+* [Protocol Service](#PRO_SVC)
+  * [List Protocols Function](#PRO_List)
+  * [Protocol Details Function](#PRO_Det)
+  * [Update Protocol Function](#PRO_UP)
+  * [Delete Protocol Function](#PRO_Del)
+* [Sample Sheet Service](#SAM_SVC)
+  * [Validate Sample Sheet Function](#SAM_Val)
+* [Settings Service](#SET_SVC)
+  * [Check Free Disk Space Function](#SET_CheckSpace)
+  * [Get Job Dropbox Function](#SET_GetDrop)
+  * [Set Job Dropbox Function](#SET_SetDrop)
+  * [Get Reference Sequence Dropbox Function](#SET_GetRefDrop)
+  * [Set Reference Sequence Dropbox Function](#SET_SetRefDrop)
+  * [Get SMTP Host Function](#SET_GetSMTP)
+  * [Set SMTP Host Function](#SET_SetSMTP)
+  * [Send Test Email Function](#SET_Email)
+  * [Get Input Paths Function](#SET_GetPath)
+  * [Add Input Paths Function](#SET_AddPath)
+  * [Remove Input Paths Function](#SET_DelPath)
+  * [Validate Path for Use in pbids Function](#SET_ValPath)
+  * [Get Instrument URIs Function](#SET_GetURI)
+  * [Add Instrument URIs Function](#SET_SetURI)
+  * [Remove Instrument URIs Function](#SET_DelURI)
+  * [Test Instrument URIs Function](#SET_TestURI)
+  * [Check Anonymous UI Access Function](#SET_CheckUI)
+  * [Set Anonymous UI Access Function](#SET_SetUI)
+  * [Check Anonymous Web Services Access Function](#SET_CheckWS)
+  * [Set Anonymous Web Services Access Function](#SET_SetWS)
+  * [Set Anonymous Web and UI Access Function](#SET_SetUIWS)
+  * [Get Job Archive Directory Function](#SET_GetArch)
+  * [Set Job Archive Directory Path Function](#SET_SetArch)
+  * [Set Enable Wizards](#SET_EnableWizard)
+  * [Get Job Directory Tech Support Glob Patterns Function](#SET_GetJobGlob)
+  * [Set Job Directory Tech Support Glob Patterns Function](#SET_SetJobGlob)
+  * [Get Install Directory Tech Support Glob Patterns Function](#SET_GetTopGlob)
+  * [Set Install Directory Tech Support Glob Patterns Function](#SET_SetTopGlob)
+* [Group Service](#GR_SVC)
+  * [Create Group Function](#GR_CR)
+  * [Save Group Function](#GR_Save)
+  * [Delete Group Function](#GR_Del)
+  * [List Group Names Function](#GR_ListNames)
+  * [List Groups Function](#GR_List)
+
+***
+
+## <a name="Intro"></a> Introduction
+
+This document describes the Secondary Analysis Web Services API provided by Pacific Biosciences. The API allows developers to search, submit and manage secondary analysis jobs, data, results, and user accounts.
+
+Secondary Analysis Web Services follow the **REST** (Representational State Transfer) model for web services, and use the JSON (JavaScript Object Notation) format. The web services:
+
+* Run as the server-side layer for managing secondary analysis jobs.
+* Maintain data integrity in the secondary analysis database and file system.
+* Act as a layer on top of SMRT Pipe, the lower-level code that performs secondary analysis processing.
+* Support AJAX access from web clients, and can be used from the command line with ``wget`` or ``curl``, from scripting languages (PHP, Python®, Perl), and from the Java® and C#® programming languages.
+
+The API includes functions for:
+* Managing **reference sequences**
+* Managing **user accounts** and **passwords**
+* Managing **groups of users**
+* Managing **instrument output** (SMRT Cell data)
+* Managing secondary analysis **jobs**
+* Managing **protocols**
+* Validating **sample sheets**
+* Managing **settings**
+
+The latest version of the API and this documentation are available from the PacBio Developer’s Network at http://www.pacbiodevnet.com.
+
+## <a name="Sec"></a> Security
+
+* Anonymous read-only access to web services is enabled by **default**.
+* Services that **create** and **modify** data require authentication.
+* Authentication is enforced for administrator, scientist and technician-level access and **cannot** be disabled.
+* An application setting (``restrictAccess``) in the ``web.xml`` file turns authentication on or off for all web services that are **not** restricted to administrators.
+
+
+## <a name="Ov"></a> Overview
+
+Secondary Analysis Web Services API:
+
+* Run under a standard Linux®/Apache™ environment, and can be accessed from Windows®, Mac OS® or Linux® operating systems.
+* Require MySQL® software.
+* Are installed as part of the secondary analysis system, and require a one-time configuration. Any additional changes can be made using SMRT Portal.
+* Require that SMRT Pipe be correctly configured and working.
+
+## <a name="WSB"></a> Web Services Behavior
+
+* URLs and parameters are all **case-sensitive.**
+
+* Most requests use the HTTP ``GET`` command to retrieve an object in JSON (JavaScript Object Notation) format. GET data to view the details of an object: ``/{objects}/{id_or_name}``. **Example:**
+``curl http://{server}/smrtportal/api/jobs/12345``
+
+* **Deleting objects** uses the HTTP DELETE command. DELETE data: ``/{objects}/{id_or_name}``. **Example:** ``curl -X DELETE -u administrator:somepassword http://{server}/smrtportal/api/jobs/12345``
+
+* **Saving objects to the server, manipulating objects, and operating on the server:** These use the HTTP POST command common to standard HTML forms. This is **not** the same as for file uploads, which use a different MIME type (multipart form data). In this case, the request body consists of key-value form pairs. POST data to create a new object (``/{objects}/create``) or to update an existing one (``/{objects}/{id}``). **Example:**
+
+```
+curl -d 'data={the job returned from the GET method, with some edits}' http://{server}/smrtportal/api/jobs/12345
+```
+
+* **Saving objects to the server** also supports the PUT and POST commands with alternative content-types, such as application/json and text/xml. In this case, the request body consists of JSON or XML and contains no key-value form pairs. PUT/POST data to save/update objects: ``/{objects}``
+
+* In most cases, you use ``/{objects}/create`` for both ways of saving objects.
+
+* Web services requiring authentication use the HTTP header’s Authorization feature. **Example:**
+``curl -u "janeuser:somepassword" http://server/secret/sauce``. Alternatively, you can log in using the users/log-on method and store the cookie for use with future web service calls.
+
+* **Creating objects** can be done using an HTTP POST to the ``/create`` method, or by using an HTTP PUT with JSON or XML as the request body. The PUT method is considered more of a REST "purist" approach, whereas POST is more widely supported by web browsers.
+
+* By default, most web services return JSON. However, it’s possible in most cases to change the result format by adding an Accept header to the request. Most methods will support ``Accept: text/xml`` as well as ``application/json``, ``text/csv`` and ``text/tsv`` (tab-separated values).
+
+Some examples:
+
+JSON (default):
+```
+$ curl http://localhost:8080/smrtportal/api
+{
+  "success" : true,
+  "message" : "Web services are alive"
+}
+```
+XML:
+```
+$ curl -H "Accept: text/xml" http://localhost:8080/smrtportal/api
+<?xml version="1.0" encoding="UTF-8"?>
+<notice>
+     <success>true</success>
+     <message>Web services are alive</message>
+</notice>
+```
+Comma-separated values:
+```
+$ curl -H "Accept: text/csv" http://localhost:8080/smrtportal/api
+"message","success"
+"Web services are alive","true"
+```
+Tab-separated values:
+```
+$ curl -H "Accept: text/tsv" http://localhost:8080/smrtportal/api
+message success
+Web services are alive  true
+```
+And back to JSON:
+```
+$ curl -H "Accept: application/json" http://localhost:8080/smrtportal/api
+{
+  "success" : true,
+  "message" : "Web services are alive"
+}
+```
+
+
+### Passing Arguments
+
+* Arguments that are primitive types can be passed like standard HTTP POST parameters: ``param1=value1&param2=value2``
+
+* Arguments that are objects should be serialized as JSON: ``param1={"name1":"value1","name2":"value2"}``
+
+* When using an HTTP PUT, simply pass the JSON or XML object in the request body, as in the sketch below:
+``{"name1": "Value1", "name2": "Value2"}``
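+
+A minimal sketch of such a PUT; the server name, target collection, and payload are placeholders:
+
+```
+curl -X PUT -H "Content-Type: application/json" -d '{"name1": "Value1", "name2": "Value2"}' http://{server}/smrtportal/api/{objects}
+```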
+
+### Date and Time Format
+* All dates and times are in the ISO 8601 Universal Time format.
+
+## <a name="HCODE"></a> HTTP Response Codes
+
+### Success Conditions
+
+When successful, a web services call returns an object or list of objects serialized as JSON, unless a different format is requested using an ``Accept`` header. You can deserialize the object in any language as a dictionary/hashtable, a list, or a list of dictionary/hashtables. For more advanced use, you can create custom, strongly-typed objects.
+
+For service calls that **don’t** return data from a server, a Notice message object with a uniform signature is returned. For example: ``{"success": true, "message": "It worked"}``
+
+* **Return Value:** ``200 OK``  **Explanation:** The web service call returned successfully. The
+body of the response contains the requested JSON object. For function calls, the response may be a
+simple status message object.
+
+* **Return Value:** ``201 Created``  **Explanation:** The web service created a new object on the server. A simple PrimaryKey object is returned, such as: ``{"idName":"id","idValue":12345}``.
+The response will contain a header: ``Location: http://where/the/new/object/is``
+
+### Error Conditions
+
+When errors occur, the web services return an HTTP error code. The body of the response contains a standard JSON object that can be uniformly deserialized as a strongly-typed object, or left as a dictionary/hashtable. For example: ``{"success":false, "type":"IllegalArgumentException", "message":"Job id cannot be null"}``
+
+* **Return Value:** ``400 Bad request``  **Explanation:** The arguments were incorrect, or the web service was called incorrectly.
+
+* **Return Value:** ``403 Forbidden``  **Explanation:** The web service requires authentication, and the credentials in the HTTP header’s Authorization section were rejected.
+
+* **Return Value:** ``404 Not Found``  **Explanation:** The search or web service call did not find the
+requested object.
+
+* **Return Value:** ``409 Conflict``  **Explanation:** The attempt to update or delete an object failed.
+
+* **Return Value:** ``413 Request Entity Too Large``  **Explanation:** When searching a large database table, there may be practical limits to how many records can be returned. The query asked for too many records.
+
+* **Return Value:** ``500 Internal Server Error``  **Explanation:** An internal error occurred.
+
+## <a name="CONV"></a> Search Conventions
+
+Lists of objects are retrieved using either the HTTP GET or POST commands. For objects with a small number of members, a JSON list is returned. Searching and filtering are possible through web services; see the documentation for the jqGrid plugin at http://www.trirand.com/jqgridwiki/.
+
+```
+GET the full list: /jobs
+```
+
+For objects with a **large** number of records (such as secondary analysis jobs and instrument output), results are paged. A wrapper object specifies the current page number, the total number of records, the total number of pages, and the list of objects themselves. The data structure is taken directly from the jqGrid plugin; for details see http://www.trirand.com/blog. Following is a sample structure: ``{"page":1,"records":510,"total":51,"rows":[{object1},{obj2}]}`` where:
+
+* ``page`` is the current page number.
+* ``records`` is the total number of records.
+* ``total`` is the total number of pages.
+* ``rows`` is the list of objects for the current page.
+
+### Usage
+
+* GET the first page: ``/{objects}``  **Example:**  ``curl http://{server}/smrtportal/api/jobs``
+
+* POST search or filtering options to the same URL: ``/{objects}``. **Example:**  ``curl -d 'options={"page":2,"rows":10,"sortOrder":"desc","sortBy":"jobId"}' http://{server}/smrtportal/api/jobs``
+
+The set of search and filtering parameters available is extensive, flexible, and is also derived from the jqGrid plugin. Key options include:
+
+* **Option:** ``page``  **Values:** ``int``  **Description:** Page number, starting from 1.
+
+* **Option:** ``rows``  **Values:** ``int``  **Description:** Rows per page. If the requested number is too large, a ``413 Request Entity Too Large`` error is generated.
+
+* **Option:** ``sortOrder``  **Values:** ``asc`` or ``desc``  **Description:** Sort order, ascending or descending.
+
+* **Option:** ``sortBy``  **Values:** ``String``; object property name  **Description:** ID of the column property to sort on. Example: ``jobId``.
+
+Arguments can be passed as JSON objects. For example: ``options={"sortOrder":"asc", "sortBy":"name", "page":1}``
+
+## <a name="EX"></a> Examples
+Commonly-used methods include sample curl commands and sample returned values. The examples assume a user named ``administrator`` and a SMRT Portal server located at ``http://pssc1:8080/``.
+
+## <a name="REF_SVC"></a> Reference Sequence Service
+The Reference Sequence Service includes functions that you use to manage the reference sequences used in secondary analysis. (Reference sequences are used to map reads against a reference genome for resequencing and for filtering reads.)
+
+### <a name="REF_List_Ref"></a> List Reference Sequences Function
+Use this function to list the reference sequences available on the system.
+
+* **URL:** ``/reference-sequences``  
+* **Method:** ``GET`` or ``POST``
+* **Parameters:**  ``options=SearchOptions`` (Only ``sortOrder`` and ``sortBy`` are supported.)
+* **Returns:** ``PagedList<ReferenceEntry>``
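+* **Example** (an illustrative call against the sample server described in the Examples section):
+```
+curl http://pssc1:8080/smrtportal/api/reference-sequences
+```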
+
+### <a name="REF_Ref_Det"></a> Reference Sequence Details Function
+Use this function to obtain details about a specific reference sequence.
+
+* **URL:** ``/reference-sequences/{id}``  
+* **Method:** ``GET``
+* **Parameters:**  ``id=string``
+* **Returns:** ``ReferenceEntry``
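+* **Example** (an illustrative call; ``lambda`` is a hypothetical reference sequence ID):
+```
+curl http://pssc1:8080/smrtportal/api/reference-sequences/lambda
+```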
+
+### <a name="REF_List_Ref_Type"></a> List Reference Sequences by Type Function
+Use this function to list the reference sequences available on the system by their **type**.
+
+* **URL:** ``/reference-sequences/by-type/{name}``  
+* **Method:** ``GET`` or ``POST``
+* **Parameters:**  
+  * ``name=control`` or ``name=sample``
+  * ``options=SearchOptions`` (Only ``sortOrder``, ``sortBy`` and ``columnNames`` are supported.)
+* **Returns:** ``PagedList<ReferenceEntry>``
+
+### <a name="REF_Create_Ref"></a> Create Reference Sequence Function
+Use this function to **create** a new reference sequence.
+
+* **URL:** ``/reference-sequences/create`` (Using POST), ``/reference-sequences`` (Using PUT)  
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:**  
+  * ``data=ReferenceSequence`` (Using POST)
+  * ``ReferenceSequence`` (Using PUT)
+* **Returns:** ``PrimaryKey``
+
+### <a name="REF_Save_Ref"></a> Save Reference Sequence Function
+Use this function to **save** a reference sequence.
+
+* **URL:**  ``/reference-sequences/{id}``
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:**  
+  * ``id=string``
+  * ``data=ReferenceSequence``
+* **Returns:** ``A notice message object``
+
+### <a name="REF_Del_Ref"></a> Delete Reference Sequence Function
+Use this function to **delete** a reference sequence.
+
+* **URL:**  ``/reference-sequences/{id}``
+* **Method:** ``DELETE``
+* **Parameters:** ``id=string``
+* **Returns:** ``A notice message object``
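+* **Example** (an illustrative call; deletion requires authentication, and the reference sequence ID is hypothetical):
+```
+curl -u administrator:administrator#1 -X DELETE http://pssc1:8080/smrtportal/api/reference-sequences/lambda
+```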
+
+### <a name="REF_List_DB"></a> List Reference Sequence Dropbox Files Function
+Use this function to list the reference files located in the Reference Sequence Dropbox.
+
+* **URL:**  ``/reference-sequences/dropbox-files``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
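+* **Example** (an illustrative call against the sample server):
+```
+curl http://pssc1:8080/smrtportal/api/reference-sequences/dropbox-files
+```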
+
+### <a name="REF_RSS"></a> RSS Feed Function
+Use this function to access an RSS feed which lists when secondary analysis jobs complete or fail.
+
+* **URL:**  ``/rss``
+* **Method:** ``GET``
+* **Returns:** ``An RSS XML file.``
+
+## <a name="USER"></a> User Service
+The User Service includes functions used to manage **users**, **roles** and **passwords**.
+
+### <a name="USR_List"></a> List Users Function
+Use this function to list users on the system. **(Administrators only)**
+
+* **URL:**  ``/users``
+* **Method:** ``GET`` or ``POST``
+* **Parameters:** ``options=SearchOptions``  (Only ``sortOrder``, ``sortBy`` and ``columnNames`` are supported.)
+* **Returns:** ``PagedList<User>``
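+* **Example** (an illustrative call; credentials are supplied because this call is restricted to administrators):
+```
+curl -u administrator:administrator#1 http://pssc1:8080/smrtportal/api/users
+```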
+
+### <a name="USR_Det"></a> User Details Function
+Use this function to obtain information about a specific user. **(Administrators only)**
+
+* **URL:**  ``/users``
+* **Method:** ``GET``
+* **Parameters:** ``userName=string`` 
+* **Returns:** ``User``
+
+### <a name="USR_Save"></a> Save User Function
+Use this function to **save** changes made to a user. **(Administrators only)**
+
+* **URL:**  ``/users/{userName}``
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:** 
+ * ``userName=string``
+ * ``data=User``
+* **Returns:** ``A notice message object.``
+
+### <a name="USR_Del"></a> Delete User Function
+Use this function to **delete** a user from the system. **(Administrators only)**
+
+* **URL:**  ``/users/{userName}``
+* **Method:** ``DELETE``
+* **Parameters:** ``userName=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="USR_Reg"></a> Register User Function
+Use this function to register a new user.
+
+* **URL:**  ``/users/register``
+* **Method:** ``POST``
+* **Parameters:** 
+ * ``data=User``  **(Required)**
+ * ``userName=string``
+ * ``email=string``
+ * ``password=string``
+ * ``confirmPassword=string``
+* **Returns:** ``User``
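+* **Example** (a sketch only; all field values are placeholders, and ``data`` carries the serialized User object described above):
+```
+curl -d 'data={"userName":"janeuser","email":"jane@example.com"}' -d 'password=somepassword' -d 'confirmPassword=somepassword' http://pssc1:8080/smrtportal/api/users/register
+```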
+
+### <a name="USR_CPW"></a> Change Password Function
+Use this function to change a user’s password with a specified replacement password. This functionality is available to administrators for **all** passwords.
+
+* **URL:**  ``/users/{userName}/change-password``
+* **Method:** ``POST``
+* **Parameters:** 
+ * ``data=User``  **(Required)**
+ * ``userName=string``
+ * ``newPassword=string``
+ * ``password=string``
+ * ``confirmPassword=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="USR_RPW"></a> Reset Password Function
+Use this function to reset a user’s password. The user is then asked to change their password. **(Administrators only)**
+
+* **URL:**  ``/users/{userName}/reset-password``
+* **Method:** ``GET``
+* **Returns:** ``A notice message object.``
+
+### <a name="USR_LUDF"></a> List User-Defined Fields Function
+Use this function to obtain a list of user-defined fields. These fields are created using the RS Remote software. If a run specified a secondary analysis protocol, these fields (if defined) propagate throughout the secondary analysis pipeline.
+
+* **URL:**  ``/custom-fields``
+* **Method:** ``GET``
+* **Parameters:** ``options=SearchOptions`` (Only ``sortOrder``, ``sortBy`` and ``columnNames`` are supported.)
+* **Returns:** ``PagedList<CustomField>``
+
+### <a name="USR_LUDFN"></a> List User-Defined Field Names Function
+Use this function to obtain a list of the names of **user-defined fields**. These fields are created using the RS Remote software. If a run specified a secondary analysis protocol, these fields (if defined) propagate throughout the secondary analysis pipeline.
+
+* **URL:**  ``/custom-fields/names``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+
+## <a name="SA_SVC"></a> Secondary Analysis Input Service
+The Secondary Analysis Input Service includes functions used to manage the data associated with each SMRT Cell that is included in a secondary analysis job.
+
+### <a name="SA_LInput"></a> List Secondary Analysis Inputs Function
+Use this function to obtain a list of secondary analysis inputs.
+
+* **URL:**  ``/inputs``
+* **Method:** ``GET`` or ``POST``
+* **Parameters:** ``options=SearchOptions``
+* **Returns:** ``PagedList<Input>``
+
+### <a name="SA_InputDet"></a> Secondary Analysis Input Details Function
+Use this function to obtain details for a specified secondary analysis input.
+
+* **URL:**  ``/inputs``
+* **Method:** ``GET``
+* **Parameters:** ``id=int``
+* **Returns:** ``Input``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/inputs
+{
+"page" : 1,
+"records" : 1,
+"total" : 1,
+"rows" : [ {
+"adapterSequence" : "ATCTCTCTCttttcctcctcctccgttgttgttgttGAGAGAGAT",
+"bindingKitBarcode" : "000001001546011123111",
+"bindingKitControl" : "Standard_v1",
+"bindingKitExpirationDate" : "2011-12-31T00:00:00-0800",
+...
+} ]
+}
+```
+
+### <a name="SA_CR"></a> Create Secondary Analysis Input Function
+Use this function to **create** secondary analysis input.
+
+* **URL:**  ``/inputs/create`` (Using POST), ``/inputs`` (Using PUT)
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  
+  * ``data=Input`` (Using POST)
+  * ``Input`` (Using PUT)
+* **Returns:** ``PrimaryKey``
+
+### <a name="SA_SA"></a> Save Secondary Analysis Input Function
+Use this function to **save** secondary analysis input.
+
+* **URL:**  ``/inputs/{id}``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  
+  * ``data=Input``
+  * ``id=int``
+* **Returns:** ``A notice message object.``
+
+### <a name="SA_LTime"></a> Last Timestamp of Secondary Analysis Input Function
+Use this function to obtain the time of the last secondary analysis input saved to the database.
+
+* **URL:**  ``/inputs/last-timestamp``
+* **Method:** ``GET``
+* **Returns:** ``Date``
+
+### <a name="SA_Imp"></a> Import Secondary Analysis Input Metadata Function
+Use this function to **import** secondary analysis input.
+
+* **URL:**  ``/inputs/import``
+* **Method:** ``POST``
+* **Parameters:**  ``data=array of Collections from instrument``
+* **Returns:** ``List<Input>``
+
+### <a name="SA_Scan"></a> Scan for New Input Metadata Function
+Use this function to **scan** for secondary analysis input.
+
+* **URL:**  ``/inputs/scan``
+* **Method:** ``POST``
+* **Parameters:**  ``paths=array of string``
+* **Returns:** ``List<Input>``
+* **Example:** ``curl -u administrator:administrator#1 -d 'paths=["/data/smrta/smrtanalysis/common/inputs_dropbox"]' http://secondary_host:8088/smrtportal/api/inputs/scan``
+* **Python® code example:**
+
+```
+import os
+import logging
+import urllib
+import urllib2
+import json
+import base64
+
+log = logging.getLogger(__name__)
+
+class DefaultProgressErrorHandler(urllib2.HTTPDefaultErrorHandler):
+    """Return HTTP error responses instead of raising, so callers can inspect the status."""
+    def http_error_default(self, req, fp, code, msg, headers):
+        result = urllib2.HTTPError(req.get_full_url(), code, msg, headers, fp)
+        result.status = code
+        return result
+
+def request_to_string(request):
+    """Summarize a request, for debugging."""
+    buffer = []
+    buffer.append('Method: %s' % request.get_method())
+    buffer.append('Host: %s' % request.get_host())
+    buffer.append('Selector: %s' % request.get_selector())
+    buffer.append('Data: %s' % request.get_data())
+    return os.linesep.join(buffer)
+
+def add_auth_headers(request):
+    """Attach HTTP Basic credentials, matching the curl example above."""
+    request.add_header('User-Agent', 'admin-user')
+    key = 'Basic %s' % base64.b64encode('administrator:administrator#1')
+    request.add_header('Authorization', key)
+
+def scan():
+    url = 'http://localhost:8080/smrtportal/api/inputs/scan'
+    # Pass the paths as a single form field holding a JSON array, exactly as in
+    # the curl example above; add entries to the list to scan multiple paths.
+    paths = ['/data/smrta/smrtanalysis/common/inputs_dropbox']
+    scan_data = urllib.urlencode({'paths': json.dumps(paths)})
+    request = urllib2.Request(url, data=scan_data)
+    add_auth_headers(request)
+    opener = urllib2.build_opener(DefaultProgressErrorHandler())
+    response = opener.open(request)
+    # The response is a JSON list with one entry per newly scanned input.
+    retList = json.loads(response.read())
+    return retList[0]['idValue']
+
+def saveJob(inputId):
+    url = 'http://localhost:8080/smrtportal/api/jobs/create'
+    job = {
+        'name': 'test_job',
+        'createdBy': 'admin',
+        'protocolName': 'RS_Filter_Only.1',
+        'groupNames': ['all'],
+        'inputIds': [inputId]
+    }
+    # The job object is serialized as JSON and sent as the 'data' form field.
+    job_data = urllib.urlencode({'data': json.dumps(job)})
+    request = urllib2.Request(url, data=job_data)
+    add_auth_headers(request)
+    print request_to_string(request)
+    opener = urllib2.build_opener(DefaultProgressErrorHandler())
+    response = opener.open(request)
+    ret = json.loads(response.read())
+    return ret['idValue']
+
+def startJob(jobId):
+    url = 'http://localhost:8080/smrtportal/api/jobs/{i}/start'.format(i=jobId)
+    # Starting a job is a GET request (no request body).
+    request = urllib2.Request(url)
+    add_auth_headers(request)
+    print request_to_string(request)
+    opener = urllib2.build_opener(DefaultProgressErrorHandler())
+    response = opener.open(request)
+    ret = json.loads(response.read())
+    print(ret)
+
+def test():
+    inputId = scan()
+    print('Scanned inputId = %s' % inputId)
+    jobId = saveJob(inputId)
+    print('jobId = %s' % jobId)
+    startJob(jobId)
+```
+### <a name="SA_Del"></a> Delete Secondary Analysis Input Function
+Use this function to **delete** specified secondary analysis input. **(Scientists and administrators only)**
+
+* **URL:**  ``/inputs/{id}``
+* **Method:** ``DELETE``
+* **Parameters:**  ``id=int``
+* **Returns:** ``A notice message object.``
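+* **Example** (an illustrative call; the input ID is hypothetical):
+```
+curl -u administrator:administrator#1 -X DELETE http://pssc1:8080/smrtportal/api/inputs/78807
+```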
+
+### <a name="SA_Comp"></a> Compatibility Function
+Use this function to return information specifying whether the SMRT Cell inputs for the job are compatible.
+
+* **URL:**  ``/inputs/compatibility``
+* **Method:** ``GET``
+* **Parameters:**  ``ids=[array of ids]``
+* **Returns:** ``JSON object specifying whether or not the inputs are compatible.``
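+* **Example** (a sketch only; the input IDs are hypothetical and are passed as a JSON array):
+```
+curl 'http://pssc1:8080/smrtportal/api/inputs/compatibility?ids=[78807,78808]'
+```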
+
+### <a name="SA_Group"></a> Groups Function
+Use this function to update group information for SMRT Cell inputs.
+
+* **URL:**  ``/inputs/{INPUT_ID}/groups``
+* **Method:** ``POST``
+* **Parameters:**  ``data=[names of groups]``  **Example:** ``data=["grp1", "grp2"]``
+* **Returns:** ``A notice message object.``
+
+### <a name="SA_Clean"></a> Cleanup Function
+Use this function to delete any input that is **unassociated** with a job and has an invalid or empty ``collectionPathUri``. This is useful for cleaning up duplicate SMRT Cells located at different paths. When you scan and import SMRT Cells from SMRT Portal and the same SMRT Cell ID already exists, the existing path is updated to the new location, so no duplicate entries are created. **(Scientists and administrators only)**
+
+* **URL:**  ``/inputs/{INPUT_ID}/cleanup``
+* **Method:** ``DELETE``
+* **Returns:** ``A notice message object that includes the list of deleted input IDs.``
+
+
+## <a name="JOB_SVC"></a> Jobs Service
+The Jobs Service includes functions used to manage secondary analysis jobs.
+
+### <a name="JOB_List"></a> List Jobs Function
+Use this function to obtain a list of **all** secondary analysis jobs.
+
+* **URL:**  ``/jobs``
+* **Method:** ``GET`` or ``POST``
+* **Parameters:**  ``options=SearchOptions``
+* **Returns:** ``PagedList<Job>``
+* **Example:**
+
+```
+curl -d 'options={"filters":{"rules":[{"field":"createdBy","op":"eq","data":"AutomationSystem"},{"field":"jobId","op":"lt","data":"30000"}],"groupOp":"and"},"columnNames":["jobId"],"rows":"0"}' http://pssc1:8080/smrtportal/api/jobs
+{
+"page" : 1,
+"records" : 57,
+"total" : 1,
+"rows" : [ {
+"jobId" : 26392
+}, {
+"jobId" : 26360
+}, {
+"jobId" : 26359
+}, {
+...
+}]
+}
+```
+
+### <a name="JOB_ListBStatus"></a> List Jobs by Status Function
+Use this function to obtain a list of secondary analysis jobs, based on their **job status**.
+
+* **URL:**  ``/jobs/by-status/{jobStatus}``
+* **Method:** ``GET`` or ``POST``
+* **Parameters:**  ``options=SearchOptions``
+* **Returns:** ``PagedList<Job>``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/by-status/Completed
+{
+"page" : 1,
+"records" : 25,
+"total" : 3,
+"rows" : [ {
+"automated" : true,
+"collectionProtocol" : "Standard Seq v2",
+...
+} ]
+}
+```
+
+### <a name="JOB_ListBProt"></a> List Jobs By Protocol Function
+Use this function to list the currently open secondary analysis jobs, based on a specified **protocol**.
+
+* **URL:**  ``/jobs/by-protocol/{protocol}``
+* **Method:** ``GET`` or ``POST``
+* **Parameters:**  ``protocol=string``, ``options=SearchOptions``, ``jobStatus=`` status code such as ``NotStarted``.
+* **Returns:** ``PagedList<Job>``
+* **Example:**
+```
+curl -d 'jobStatus=Completed' http://pssc1:8080/smrtportal/api/jobs/by-protocol/RS_resequencing.1
+{
+"page" : 1,
+"records" : 1,
+"total" : 1,
+"rows" : [ {
+"automated" : false,
+...
+"whenStarted" : null
+} ]
+}
+```
+
+### <a name="JOB_Det"></a> Job Details Function
+Use this function to display **details** of a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}`` or ``/jobs/by-name/{name}``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``, ``name=string``
+* **Returns:** ``Job``
+* **Examples:**
+```
+By ID:
+curl -u administrator:administrator#1 http://pssc1:8080/smrtportal/api/jobs/016437
+{
+"jobId" : 16437,
+"protocolName" : "RS_Site_Acceptance_Test.1",
+"referenceSequenceName" : "lambda",
+"jobStatus" : "Completed",
+...
+"whenModified" : "2012-01-31T09:12:48-0800",
+"modifiedBy" : null
+}
+By Name:
+curl -u administrator:administrator#1 http://pssc1:8080/smrtportal/api/jobs/by-name/2311084_0002
+{
+"jobId" : 16437,
+"protocolName" : "RS_Site_Acceptance_Test.1",
+"referenceSequenceName" : "lambda",
+"jobStatus" : "Completed",
+...
+"whenModified" : "2012-01-31T09:12:48-0800",
+"modifiedBy" : null
+}
+```
+
+### <a name="JOB_CR"></a> Create Job Function
+Use this function to **create** a new secondary analysis job.
+
+* **URL:**  ``/jobs/create`` (Using POST), ``/jobs`` (Using PUT)
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:**  ``data=Job`` (Using POST), ``job`` (Using PUT). In both cases, the name must be **unique**, and ``CreatedBy`` must be **non-null**.
+* **Returns:** ``PrimaryKey``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d 'data={"name":"DemoJobName", "createdBy":"testuser", "description":"demo job", "protocolName":"RS_Resequencing.1", "groupNames":["all"], "inputIds":["78807"]}' http://pssc1:8080/smrtportal/api/jobs/create
+{
+"idValue" : 16478,
+"idProperty" : "jobId"
+}
+```
+
+### <a name="JOB_Save"></a> Save Job Function
+Use this function to **save** a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``id=int``, ``data=Job``
+* **Returns:** ``A notice message object.``
+
+### <a name="JOB_Del"></a> Delete Job Function
+Use this function to **delete** a secondary analysis job. **(Administrators only)**
+
+* **URL:**  ``/jobs/{id}``
+* **Method:** ``DELETE``
+* **Parameters:**  ``id=int``
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl -u "administrator:administrator#1" -X DELETE http://pssc1:8080/smrtportal/api/jobs/16478
+{
+"success" : true,
+"message" : "Job 16478 has been permanently deleted"
+}
+```
+
+### <a name="JOB_Arch"></a> Archive Job Function
+Use this function to **archive** a secondary analysis job. **(Administrators only)**
+
+* **URL:**  ``/jobs/{id}/archive`` (Using GET), ``/jobs/archive`` (Using POST)
+* **Method:** ``GET``, ``POST``
+* **Parameters:**  ``id=int``  (Using GET),  ``ids=int[]``  (Using POST)
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d 'ids=[16437,16438]' http://pssc1:8080/smrtportal/api/jobs/archive
+{
+"success" : true,
+"message" : "Archived 2 jobs."
+}
+```
+
+### <a name="JOB_RestArch"></a> Restore Archived Job Function
+Use this function to **restore** a secondary analysis job that was archived. **(Administrators only)**
+
+* **URL:**  ``/jobs/{id}/restore`` (Using GET), ``/jobs/restore`` (Using POST)
+* **Method:** ``GET``, ``POST``
+* **Parameters:**  ``id=int``  (Using GET),  ``ids=int[]``  (Using POST)
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d 'ids=[16437,16438]' http://pssc1:8080/smrtportal/api/jobs/restore
+{
+"success" : true,
+"message" : "Restored 2 jobs."
+}
+```
+
+### <a name="JOB_Metrics"></a> Get Job Metrics Function
+Use this function to retrieve **metrics** for a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/metrics2``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``List<String>``
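+* **Example** (an illustrative call, using a job ID that appears elsewhere in this document):
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/metrics2
+```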
+
+### <a name="JOB_Prot"></a> Get Job Protocol Function
+Use this function to **return the protocol** used by a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/protocol``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``Protocol XML document``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/protocol
+<smrtpipeSettings>
+<protocol version="1.3.0" id="RS_Site_Acceptance_Test.1" editable="false">
+<param name="name" label="Protocol Name" editable="false">
+...
+<fileName>settings.xml</fileName>
+</smrtpipeSettings>
+```
+You can also return the protocol as a `json` object by specifying an Accept header:
+```
+curl --verbose -H "accept:application/json" http://localhost:8080/smrtportal/api/jobs/16454/protocol
+```
+
+### <a name="JOB_SetProt"></a> Set Job Protocol Function
+Use this function to **specify the protocol** used by a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/protocol``
+* **Method:** ``POST``
+* **Parameters:**  ``id=int``, ``data=Xml`` (escaped for transmission from a web browser, for example with the JavaScript ``escape`` function)
+* **Returns:** ``A notice message object.``
+
+### <a name="JOB_Input"></a> Get Job Inputs Function
+Use this function to return information about the SMRT Cell data used for a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/inputs``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``PagedList<Input>``
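+* **Example** (an illustrative call, using a job ID that appears elsewhere in this document):
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/inputs
+```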
+
+### <a name="JOB_Start"></a> Start Job Function
+Use this function to **start** a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/start``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``JobStatus``
+* **Example:**
+```
+curl -u administrator:administrator#1 http://pssc1:8080/smrtportal/api/jobs/16479/start
+{
+"jobStatusId" : 1775,
+"jobId" : 16479,
+"code" : "Submitted",
+"jobStage" : null,
+"moduleName" : null,
+"percentComplete" : 0,
+"message" : "Job submitted",
+"name" : null,
+"description" : null,
+"whenCreated" : null,
+"createdBy" : null,
+"whenModified" : null,
+"modifiedBy" : null
+}
+```
+
+### <a name="JOB_GetStatus"></a> Get Job Status Function
+Use this function to obtain the **status** of a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/status``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``JobStatus``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16479/status
+{
+"jobStatusId" : 1780,
+"jobId" : 16479,
+"code" : "In Progress",
+"jobStage" : "Filtering",
+"moduleName" : "P_FilterReports/adapterRpt",
+"percentComplete" : 100,
+"message" : "task://016479/P_FilterReports/adapterRpt complete",
+"name" : null,
+"description" : null,
+"whenCreated" : "2012-02-03T17:38:06-0800",
+"createdBy" : "smrtpipe",
+"whenModified" : "2012-02-03T17:38:06-0800",
+"modifiedBy" : null
+}
+```
+
+### <a name="JOB_UpStatus"></a> Update Job Status Function
+Use this function to **modify the status** of a secondary analysis job. **(Scientists and administrators only)**
+
+* **URL:**  ``/jobs/{id}/status``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``id=int``, ``progress=JobStatus``
+* **Returns:** ``PrimaryKey``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d 'progress={"code":"Failed"}' http://pssc1:8080/smrtportal/api/jobs/16471/status
+{
+"success" : true,
+"message" : "Job status updated"
+}
+```
+
+### <a name="JOB_Hist"></a> Job History Function
+Use this function to obtain the **history** of a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/history``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``List<JobStatus>``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/history
+[ {
+"jobStatusId" : 1773,
+"jobId" : 16437,
+"code" : "Completed",
+"jobStage" : null,
+"moduleName" : null,
+"percentComplete" : 0,
+"message" : null,
+"name" : null,
+"description" : null,
+"whenCreated" : "2012-02-03T17:13:31-0800",
+"createdBy" : null,
+"whenModified" : "2012-02-03T17:13:31-0800",
+"modifiedBy" : null
+}, {
+...
+}]
+```
+
+### <a name="JOB_Log"></a> Job Log Function
+Use this function to obtain the **log** for a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/log``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``Text file``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/log
+[INFO] 2012-01-30 23:52:41,437 [SmrtPipeContext 139] Configuration override for PROGRESS_URL: Old: --> New: http://pssc1:8080/smrtportal/api
+[INFO] 2012-01-30 23:52:41,437 [SmrtPipeContext 150] Changing working directory to /tmp/tmpTKPKi4
+...
+[INFO] 2012-01-31 00:35:10,443 [SmrtPipeContext 362] Removed 2 temporary directories
+[INFO] 2012-01-31 00:35:10,450 [SmrtPipeContext 365] Removed 1 temporary files
+[INFO] 2012-01-31 00:35:10,450 [SmrtPipeMain 394] Successfully exiting smrtpipe
+***
+```
+
+### <a name="JOB_TOC"></a> Analysis Table of Content Function
+Use this function to return a JSON object listing the reports and data files that were generated for a secondary analysis job. This function is used primarily by SMRT Portal to display the report and data links in the View Data/Job Details page.
+
+* **URL:**  ``/jobs/{id}/contents``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``
+* **Returns:** ``JSON object listing contents.``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/contents
+{
+"reportGroups" : [ {
+"name" : "General",
+"members" : [ {
+"group" : "General",
+"title" : "Workflow",
+"links" : [ {
+"path" : "workflow/Workflow.summary.html",
+"format" : "text/html"
+...
+}
+```
+
+### <a name="JOB_File"></a> Job Analysis File Function
+Use this function to obtain any specified **file** that was generated during a secondary analysis job.
+
+* **URL:**  ``/jobs/{id}/contents/{file}`` or ``/jobs/{id}/contents/{dir}/{file}``
+* **Method:** ``GET``
+* **Parameters:**  ``id=int``, ``file=filename``, ``dir=directory``
+* **Returns:** ``Data file, report XML, image, and so on.``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/contents/results/overview.xml
+<?xml version="1.0" encoding="UTF-8"?>
+<report>
+<layout onecolumn="true"/>
+<title>General Attribute Report</title>
+<attributes>
+<attribute id="n_smrt_cells" name="# of SMRT Cells" value="1">1</attribute>
+<attribute id="n_movies" name="# of Movies" value="2">2</attribute>
+</attributes>
+</report>
+```
+
+### <a name="JOB_COmplete"></a> Mark Job Complete Function
+Use this function to specify that a job using more than one SMRT Cell is complete.
+
+* **URL:**  ``/jobs/{id}/complete``
+* **Method:** ``GET``
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16477/complete
+{
+"jobStatusId" : 1844,
+"jobId" : 16477,
+"code" : "Submitted",
+"jobStage" : null,
+"moduleName" : null,
+"percentComplete" : 0,
+"message" : "Job submitted",
+"name" : null,
+"description" : null,
+"whenCreated" : null,
+"createdBy" : null,
+"whenModified" : null,
+"modifiedBy" : null
+}
+```
+
+### <a name="JOB_inDrop"></a> List Jobs in Dropbox Function
+Use this function to list the jobs located in the Job Import Dropbox.
+
+* **URL:**  ``/jobs/dropbox-paths``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+* **Example:**
+```
+curl -u administrator:administrator#1 http://pssc1:8080/smrtportal/api/jobs/dropbox-paths
+[ "999991" ]
+```
+
+### <a name="JOB_Import"></a> Import Job Function
+Use this function to **import** a job located in the Job Import Dropbox.
+
+* **URL:**  ``/jobs/import``
+* **Method:** ``POST``
+* **Parameters:** ``paths=array of strings``
+* **Returns:** ``List<PrimaryKey>``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d 'paths=["/opt/smrtanalysis/common/jobs_dropbox/035169"]' http://pssc1:8080/smrtportal/api/jobs/import
+[ {
+"idValue" : 16480,
+"idProperty" : "jobId"
+} ]
+```
+
+### <a name="JOB_Heart"></a> Job Last Heartbeat Function
+Use this function to find out if a job is still alive.
+
+* **URL:**  ``/jobs/{id}/status/heartbeat``
+* **Method:** ``GET``
+* **Returns:** ``A notice message object.``
+* **Example:**
+```
+curl -u administrator:administrator#1 -d "data={'lastHeartbeat':'2011-06-20T00:50:20-0700'}" http://pssc1:8080/smrtportal/api/jobs/016471/status/heartbeat
+{
+"success" : true,
+"message" : "Job lastHeartbeat status updated"
+}
+```
+
+### <a name="JOB_RR"></a> Job Raw-Read Function
+Use this function to download a data file generated by a job.
+
+* **URL:**  ``/jobs/{id}/raw-reads``
+* **Method:** ``GET``
+* **Returns:** ``Data file, report XML, image, and so on.``
+* **Example:**
+```
+curl http://pssc1:8080/smrtportal/api/jobs/16437/raw-reads?format=fasta
+```
+
+### <a name="JOB_Get_TGZ"></a> Get Job Tech Support Files (TGZ) Function
+Use this function to download a job's Tech Support files in .tgz format.
+
+* **URL:**  ``/jobs/{id}/techsupport/tgz``
+* **Method:** ``GET``
+* **Returns:** ``.tgz file.``
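+* **Example** (a sketch; ``-o`` simply names the local output file, and the job ID is illustrative):
+```
+curl -u administrator:administrator#1 -o job16437_techsupport.tgz http://pssc1:8080/smrtportal/api/jobs/16437/techsupport/tgz
+```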
+
+### <a name="JOB_Get_ZIP"></a> Get Job Tech Support Files (ZIP) Function
+Use this function to download a job's Tech Support files in .zip format.
+
+* **URL:**  ``/jobs/{id}/techsupport/zip``
+* **Method:** ``GET``
+* **Returns:** ``.zip file.``
+
+## <a name="PRO_SVC"></a> Protocol Service
+The Protocol Service includes functions that you use to manage the protocols used by secondary analysis jobs.
+
+### <a name="PRO_List"></a> List Protocols Function
+Use this function to obtain all the **active** and **inactive** protocols in the system.
+
+* **URL:**  ``/protocols``
+* **Method:** ``GET``
+* **Returns:** ``PagedList<Protocol>``
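+* **Example** (an illustrative call against the sample server):
+```
+curl http://pssc1:8080/smrtportal/api/protocols
+```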
+
+### <a name="PRO_Det"></a> Protocol Details Function
+Use this function to obtain **details** about a protocol.
+
+* **URL:**  ``/protocols/{id}``
+* **Method:** ``GET``
+* **Parameters:**  ``id=string``
+* **Returns:** ``An XML protocol file.``
+
+You can also return the protocol as a `json` object by specifying an Accept header:
+```
+curl --verbose -H "accept:application/json" http://localhost:8080/smrtportal/api/protocols/{id}
+```
+
+### <a name="PRO_UP"></a> Update Protocol Function
+Use this function to **update** a protocol.
+
+* **URL:**  ``/protocols/{id}``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``id=string``, ``data=Xml``
+* **Returns:** ``A notice message object.``
+
+### <a name="PRO_Del"></a> Delete Protocol Function
+Use this function to **permanently delete** a protocol.
+
+* **URL:**  ``/protocols/{id}``
+* **Method:** ``DELETE``
+* **Parameters:**  ``id=string``
+* **Returns:** ``A notice message object.``
+
+## <a name="SAM_SVC"></a> Sample Sheet Service
+The Sample Sheet Service includes a function to validate a specified sample sheet.
+
+### <a name="SAM_Val"></a> Validate Sample Sheet Function
+
+* **URL:**  ``/sample-sheets/validate``
+* **Method:** ``POST``
+* **Parameters:**  ``sampleSheet=SampleSheet``
+* **Returns:** ``A notice message object.``
+
+## <a name="SET_SVC"></a> Settings Service
+The Settings Service includes functions that you use to manage the SMTP host, send test email, manage instrument URIs, and manage the file input paths where SMRT Portal looks for secondary analysis input, reference sequences, and jobs to import.
+
+### <a name="SET_CheckSpace"></a> Check Free Disk Space Function
+Use this function to check how much free space resides on the disk containing the jobs directory, by default located at ``/opt/smrtanalysis/common/jobs``.
+
+* **URL:**  ``/settings/free-space``
+* **Method:** ``GET``
+* **Returns:** ``Floating point value between 0 and 1, representing the fraction of disk space that is free.``
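+* **Example** (an illustrative call; the return value is a single floating point number):
+```
+curl http://pssc1:8080/smrtportal/api/settings/free-space
+```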
+
+### <a name="SET_GetDrop"></a> Get Job Dropbox Function
+Use this function to obtain the location of the dropbox where SMRT Portal looks for jobs to import.
+
+* **URL:**  ``/settings/job-dropbox``
+* **Method:** ``GET``
+* **Returns:** ``The path for the job dropbox directory.``
+
+### <a name="SET_SetDrop"></a> Set Job Dropbox Function
+Use this function to **specify** the location of the Job Import Dropbox where SMRT Portal looks for jobs to import.
+
+* **URL:**  ``/settings/job-dropbox``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``path=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_GetRefDrop"></a> Get Reference Sequence Dropbox Function
+Use this function to obtain the location of the Reference Sequence Dropbox where SMRT Portal looks for reference sequences.
+
+* **URL:**  ``/settings/reference-dropbox``
+* **Method:** ``GET``
+* **Returns:** ``The path for the reference sequence dropbox directory.``
+
+### <a name="SET_SetRefDrop"></a> Set Reference Sequence Dropbox Function
+Use this function to **specify** the location of the Reference Sequence Dropbox where SMRT Portal looks for reference sequences.
+
+* **URL:**  ``/settings/reference-dropbox``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``path=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_GetSMTP"></a> Get SMTP Host Function
+Use this function to obtain the name of the current SMTP host.
+
+* **URL:**  ``/settings/smtp-host``
+* **Method:** ``GET``
+* **Returns:** ``The host name.``
+
+### <a name="SET_SetSMTP"></a> Set SMTP Host Function
+Use this function to **specify** the name of the SMTP host to use.
+
+* **URL:**  ``/settings/smtp-host``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``host=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_Email"></a> Send Test Email Function
+Use this function to send a test email to the administrator, using the specified SMTP Host.
+
+* **URL:**  ``/settings/smtp-host/test``
+* **Method:** ``GET``
+* **Parameters:**  ``host=string``
+* **Returns:** ``A notice message object. A test email is then sent to the administrator.``
+
+### <a name="SET_GetPath"></a> Get Input Paths Function
+Use this function to obtain the file input paths where SMRT Portal looks for secondary analysis input.
+
+* **URL:**  ``/settings/input-paths``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+
+### <a name="SET_AddPath"></a> Add Input Paths Function
+Use this function to **add** file input paths where SMRT Portal looks for secondary analysis input.
+
+* **URL:**  ``/settings/input-paths``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``data=array of paths``
+* **Returns:** ``A notice message object.``
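+* **Example** (a sketch; the input path is a placeholder):
+```
+curl -u administrator:administrator#1 -d 'data=["/data/smrta/new_input_path"]' http://pssc1:8080/smrtportal/api/settings/input-paths
+```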
+
+### <a name="SET_AddPath"></a> Remove Input Paths Function
+Use this function to **remove** file input paths where SMRT Portal looks for secondary analysis input.
+
+* **URL:**  ``/settings/input-paths``
+* **Method:** ``DELETE``
+* **Parameters:**  ``data=array of paths``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_ValPath"></a> Validate Path for use in pbids Function
+Use this function to validate the URI (Universal Resource Identifier) path that specifies where the primary analysis data is stored. You specify the path using the RS Remote software; the path uses the ``pbids`` format.
+
+* **URL:**  ``/settings/validate-path``
+* **Method:** ``POST``
+* **Parameters:**  ``path=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_GetURI"></a> Get Instrument URIs Function
+Use this function to obtain the URI (Universal Resource Identifier) that specifies the location of the instrument(s) running the Instrument Control Web Services.
+
+* **URL:**  ``/settings/instrument-uris``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+
+### <a name="SET_SetURI"></a> Add Instrument URIs Function
+Use this function to **add** the URI (Universal Resource Identifier) specifying the location of the instrument(s) running the Instrument Control Web Services.
+
+* **URL:**  ``/settings/instrument-uris``
+* **Method:** ``POST``,  ``PUT``
+* **Parameters:**  ``data=array of URIs``
+* **Returns:** ``A notice message object.``
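+* **Example** (a sketch; the instrument URI is a placeholder):
+```
+curl -u administrator:administrator#1 -d 'data=["http://instrument01:8081/"]' http://pssc1:8080/smrtportal/api/settings/instrument-uris
+```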
+
+### <a name="SET_DelURI"></a> Remove Instrument URIs Function
+Use this function to **remove** the URI (Universal Resource Identifier) that specifies the location of the instrument(s) running the Instrument Control Web Services.
+
+* **URL:**  ``/settings/instrument-uris``
+* **Method:** ``DELETE``
+* **Parameters:**  ``data=array of URIs``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_TestURI"></a> Test Instrument URIs Function
+Use this function to **test** the URI (Universal Resource Identifier) that specifies the location of the instrument(s) running the Instrument Control Web Services.
+
+* **URL:**  ``/settings/instrument-uris/test``
+* **Method:** ``POST``
+* **Parameters:**  ``uri=instrument URI``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_CheckUI"></a> Check Anonymous UI Access Function
+Use this function to check whether users have read-only access to SMRT Portal without logging in. (Users must still log in to **create** or **modify** jobs.)
+
+* **URL:**  ``/settings/restrict-web-access``
+* **Method:** ``GET``
+* **Returns:** ``True/False``
+
+### <a name="SET_SetUI"></a> Set Anonymous UI Access Function
+Use this function to **specify** whether users have read-only access to SMRT Portal without logging in. (Users must still log in to **create** or **modify** jobs.)
+
+* **URL:**  ``/settings/restrict-web-access``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``value=true|false``
+* **Returns:** ``A notice message object.``
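+* **Example** (an illustrative call; the value shown is a placeholder):
+```
+curl -u administrator:administrator#1 -d 'value=false' http://pssc1:8080/smrtportal/api/settings/restrict-web-access
+```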
+
+### <a name="SET_CheckWS"></a> Check Anonymous Web Services Access Function
+Use this function when your organization has written custom software to access SMRT Pipe, or to integrate with a LIMS system. The function checks whether software can access certain web services methods **without** authentication.
+
+* **URL:**  ``/settings/restrict-service-access``
+* **Method:** ``GET``
+* **Returns:** ``True/False``
+
+### <a name="SET_SetWS"></a> Set Anonymous Web Services Access Function
+Use this function when your organization has written custom software to access SMRT Pipe, or to integrate with a LIMS system. The function specifies whether software can access certain web services methods **without** authentication. (The software would supply the credentials programmatically.)
+
+* **URL:**  ``/settings/restrict-service-access``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``value=true|false``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_SetUIWS"></a> Set Anonymous Web and UI Access Function
+Use this function to specify 1) whether a user has read-only access to SMRT Portal, and 2) whether software can use certain web services methods without authentication.
+
+* **URL:**  ``/settings/restrict-access``
+* **Method:** ``POST``
+* **Parameters:**  ``web=true|false``, ``service=true|false``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_GetArch"></a> Get Job Archive Directory Function
+Use this function to obtain the path to the directory used to store archived jobs.
+
+* **URL:**  ``/settings/job-archive``
+* **Method:** ``GET``
+* **Returns:** ``The path for the job archive directory.``
+
+### <a name="SET_SetArch"></a> Set Job Archive Directory Path Function
+Use this function to **set** the path to the directory used to store archived jobs.
+
+* **URL:**  ``/settings/job-archive``
+* **Method:** ``POST``, ``PUT``
+* **Parameters:**  ``path=string``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_EnableWizard"></a> Set Enable Wizards
+Use this function to **enable** or **disable** the Protocol Selector wizard.
+
+* **URL:**  ``/settings/enable-wizards``
+* **Method:** ``GET``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_GetJobGlob"></a> Get Job Directory Tech Support Glob Patterns
+Use this function to **obtain** file-name patterns identifying files in a specified job directory that can be sent to Pacific Biosciences Technical Support for debugging. These files are downloaded as a compressed archive in zip or tgz format. **(Administrators only.)**
+
+* **URL:**  ``/settings/techsupport-job-matchers``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+
+### <a name="SET_SetJobGlob"></a> Set Job Directory Tech Support Glob Patterns
+Use this function to **set** file-name patterns identifying files in a specified job directory that can be sent to Pacific Biosciences Technical Support for debugging. These files are downloaded as a compressed archive in zip or tgz format. **(Administrators only.)**
+
+* **URL:**  ``/settings/techsupport-job-matchers``
+* **Method:** ``POST``
+* **Returns:** ``A notice message object.``
+
+### <a name="SET_GetTopGlob"></a> Get Install Directory Tech Support Glob Patterns
+Use this function to **obtain** file-name patterns identifying files in the Installation directory that can be sent to Pacific Biosciences Technical Support for debugging. These files are downloaded as a compressed archive in zip or tgz format. **(Administrators only.)**
+
+* **URL:**  ``/settings/techsupport-topdir-matchers``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+
+### <a name="SET_SetTopGlob"></a> Set Install Directory Tech Support Glob Patterns
+Use this function to **set** file-name patterns identifying files in the Installation directory that can be sent to Pacific Biosciences Technical Support for debugging. These files are downloaded as a compressed archive in zip or tgz format. **(Administrators only.)**
+
+* **URL:**  ``/settings/techsupport-topdir-matchers``
+* **Method:** ``POST``
+* **Returns:** ``A notice message object.``
+
+## <a name="GR_SVC"></a> Group Service
+The Group Service includes functions that you use to manage groups of SMRT Portal users.
+
+### <a name="GR_CR"></a> Create Group Function
+Use this function to **create** a new group of users. **(Administrators only)**
+
+* **URL:**  ``/groups/create`` (Using POST), ``/groups`` (Using PUT)
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:**  ``data=Group`` (Using POST), ``group`` (Using PUT). In both cases, the name must be **unique**, and ``CreatedBy`` must be **non-null**.
+* **Returns:** ``PrimaryKey``
+
+### <a name="GR_Save"></a> Save Group Function
+Use this function to **save** a specified group of users. **(Administrators only)**
+
+* **URL:**  ``/groups/{id}``
+* **Method:** ``POST`` or ``PUT``
+* **Parameters:**  ``id=int``, ``data=group``
+* **Returns:** ``A notice message object.``
+
+### <a name="GR_Del"></a> Delete Group Function
+Use this function to **delete** a specified group of users. **(Administrators only)**
+
+* **URL:**  ``/groups/{id}``
+* **Method:** ``DELETE``
+* **Parameters:**  ``id=int``
+* **Returns:** ``A notice message object.``
+
+### <a name="GR_ListNames"></a> List Group Names Function
+Use this function to get a list of the names of groups of users on the system. **(Administrators only)**
+
+* **URL:**  ``/groups/names``
+* **Method:** ``GET``
+* **Returns:** ``List<String>``
+
+### <a name="GR_List"></a> List Groups Function
+Use this function to return information about the groups of users available on the system.  **(Administrators only)**
+
+* **URL:**  ``/groups``
+* **Method:** ``GET``, ``POST``
+* **Parameters:**  ``options=SearchOptions``
+* **Returns:** ``PagedList<Group>``
+
+***
+
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2014, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
+**P/N 000-999-643-07**
\ No newline at end of file
diff --git a/docs/Smrtpipe.log-error:-java.lang.outofmemoryerror:-java-heap-space.md b/docs/Smrtpipe.log-error:-java.lang.outofmemoryerror:-java-heap-space.md
new file mode 100644
index 0000000..4608536
--- /dev/null
+++ b/docs/Smrtpipe.log-error:-java.lang.outofmemoryerror:-java-heap-space.md
@@ -0,0 +1,24 @@
+The `java.lang.OutOfMemoryError: Java heap space` error is a common Java error that occurs when the JVM is not allocated sufficient memory prior to running (see http://stackoverflow.com/questions/1596009/java-lang-outofmemoryerror-java-heap-space). Memory requirements for some Java programs are hard-coded in some of the SMRT Analysis scripts, for example:
+
+#### referenceUploader ####
+The script `$SEYMOUR_HOME/analysis/bin/referenceUploader` contains the command-line java call:
+
+`java -jar ${SEYMOUR_HOME}/analysis/lib/java/secondary-analysis-referenceUploader.jar`
+
+where there is no default memory allocation.
+
+#### motifMaker.sh ####
+The script `$SEYMOUR_HOME/analysis/bin/motifMaker.sh` contains the command-line java call:
+
+`java -Xmx4000m -jar ${SEYMOUR_HOME}/analysis/lib/java/motif-maker-0.1.one-jar.jar $@ || exit $?` 
+
+where the default maximum heap allocation is 4 GB.
+
+#### P_GATKVC.py ####
+The script `$SEYMOUR_HOME/analysis/lib/python2.7/pbpy-0.1-py2.7.egg/pbpy/smrtpipe/modules/P_GATKVC.py`  contains the command-line java call:
+
+` java -Xmx4g -Djava.io.tmpdir=%s`
+
+where the default maximum heap is also 4 GB: `-Xmx4g`.
+
+The hard-coded memory limit can be different for every application; depending on how much memory you have on each node, you can increase this value and rerun the program. The Java `-Xms` option sets the initial heap size, and `-Xmx` sets the maximum heap size.
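+
+If a job fails with this error, one remedy is to raise the ``-Xmx`` value in the relevant script and rerun the job. A minimal sketch, assuming ``motifMaker.sh`` is the culprit and your nodes have 8 GB of free memory:
+```
+# Back up the wrapper script, then raise the maximum heap from 4000 MB to 8000 MB:
+cp $SEYMOUR_HOME/analysis/bin/motifMaker.sh $SEYMOUR_HOME/analysis/bin/motifMaker.sh.bak
+sed -i 's/-Xmx4000m/-Xmx8000m/' $SEYMOUR_HOME/analysis/bin/motifMaker.sh
+```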
diff --git a/docs/Specifying-SMRT-Pipe-inputs.md b/docs/Specifying-SMRT-Pipe-inputs.md
new file mode 100644
index 0000000..d06a9dc
--- /dev/null
+++ b/docs/Specifying-SMRT-Pipe-inputs.md
@@ -0,0 +1,37 @@
+The input file is an XML file specifying the sequencing data to process. Generally, you specify the inputs as URIs (Uniform Resource Identifiers) which are resolved by code internal to SMRT Pipe. In practice, this is most useful to large enterprise users that have a data management scheme and are able to modify the SMRT Pipe code to include their own resolver.
+
+The simpler way to specify inputs is to fully resolve the path to each input file, which is almost always a bas.h5 file. The script ``fofnToSmrtpipeInput.py`` is provided to convert a file of bas.h5 file names (a "file of file names", or fofn) to the input format expected by SMRT Pipe. If ``my_inputs.fofn`` looks like
+```
+/share/data/run_1/m100923_005722_00122_c15301919401091173_s0_p0.bas.h5
+/share/data/run_2/m100820_063008_00118_c04442556811011070_s0_p0.bas.h5
+```
+then it can be converted to a SMRT Pipe input XML file by entering:
+```
+fofnToSmrtpipeInput.py my_inputs.fofn > my_inputs.xml
+```
+Following is the resulting XML file:
+```
+<?xml version="1.0"?>
+<pacbioAnalysisInputs>
+ <dataReferences>
+    <url ref="run:0000000-0000"><location>/share/data/
+    run_1 m100923_005722_00122_c15301919401091173_s0_
+    <url ref="run:0000000-0001"><location>/share/data/
+    run_2/m100820_063008_00118_c04442556811011070_s0_
+ </dataReferences>
+</pacbioAnalysisInputs>
+```
+
+To run an analysis using these two bas.h5 files as input, enter the following command:
+```
+smrtpipe.py --params=settings.xml xml:my_inputs.xml
+```
+
+The SMRT Pipe input format lets you specify annotations, such as job IDs, job names, and job comments, in a job-management environment. The ``fofnToSmrtpipeInput.py`` application has command-line options for setting these optional attributes.
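+
+In practice, the fofn itself is usually generated with standard shell tools. A minimal sketch, assuming your bas.h5 files live under ``/share/data`` as in the example above:
+```
+# Collect all bas.h5 files into a file of file names, then convert:
+find /share/data -name '*.bas.h5' > my_inputs.fofn
+fofnToSmrtpipeInput.py my_inputs.fofn > my_inputs.xml
+```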
+
+**Note**: To get help for a script, execute the script with the ``--help`` option and no additional arguments. For example:
+```
+fofnToSmrtpipeInput.py --help
+```
+
+
diff --git a/docs/Specifying-SMRT-Pipe-parameters.md b/docs/Specifying-SMRT-Pipe-parameters.md
new file mode 100644
index 0000000..ce3ba72
--- /dev/null
+++ b/docs/Specifying-SMRT-Pipe-parameters.md
@@ -0,0 +1,83 @@
+The ``--params`` option is the most important SMRT Pipe option, and is required for any sophisticated use. The option specifies an XML file that controls:
+
+* The analysis modules to run.
+* The **order** of execution.
+* The **parameters** used by the modules.
+
+The general structure of the settings XML file is as follows:
+```
+<?xml version="1.0"?>
+<smrtpipeSettings>
+
+<protocol>
+...global parameters...
+</protocol>
+
+<module id="module_1">
+...parameters...
+</module>
+
+<module id="module_2">
+...parameters...
+</module>
+
+</smrtpipeSettings>
+```
+
+* The ``protocol`` element sets global parameters that can be used by all modules.
+* Each ``module`` element defines an analysis module to run. 
+* The order of the ``module`` elements defines the order in which the modules execute.
+
+SMRT Portal protocol templates are located in: ``$SEYMOUR_HOME/common/protocols/``.
+
+SMRT Pipe modules are located in: 
+``$SEYMOUR_HOME/analysis/lib/python2.7/pbpy-0.1-py2.7.egg/pbpy/smrtpipe/modules/``.
+
+You specify parameters by entering a key-value pair in a ``param`` element:
+* The name of the key is given in the ``name`` attribute of the ``param`` element.
+* The value of the key is contained in a nested ``value`` element.
+
+For example, to set the parameter named ``reference``, you specify:
+```
+<param name="reference">
+  <value>/share/references/repository/celegans</value>
+</param>
+```
+
+**Note**: To reference a parameter value in other parameters, use the notation ``${variable}`` when specifying a value. For example, to reference a global parameter named ``home``, use it in other parameters as ``${home}``. SMRT Pipe supports arbitrary parameters in the settings XML file, so the use of temporary variables like this can help readability and maintainability.
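+
+For example, a sketch that factors the repository path into a global parameter named ``home`` (names hypothetical) and reuses it:
+```
+<protocol>
+ <param name="home">
+  <value>/share/references/repository</value>
+ </param>
+ <param name="reference">
+  <value>${home}/celegans</value>
+ </param>
+</protocol>
+```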
+
+Following is a complete example of a settings file for running filtering, mapping, and consensus steps against the *E. coli* reference genome:
+```
+<?xml version="1.0" encoding="utf-8"?>
+<smrtpipeSettings>
+ <protocol>
+  <param name="reference">
+   <value>/share/references/repository/ecoli</value>
+  </param>
+ </protocol>
+
+ <module name="P_Filter">
+  <param name="minLength">
+    <value>50</value>
+  </param>
+  <param name="readScore">
+    <value>0.75</value>
+  </param>
+ </module>
+
+ <module name="P_FilterReports" />
+
+ <module name="P_Mapping">
+  <param name="align_opts" hidden="true">
+   <value>--minAccuracy=0.75 --minLength=50 -x </value>
+  </param>
+ </module>
+
+ <module name="P_MappingReports" />
+ <module name="P_Consensus" />
+ <module name="P_ConsensusReports" />
+
+</smrtpipeSettings>
+```
+
+
diff --git a/docs/Step-3:-Extract-the-Tarball.md b/docs/Step-3:-Extract-the-Tarball.md
new file mode 100644
index 0000000..f16354f
--- /dev/null
+++ b/docs/Step-3:-Extract-the-Tarball.md
@@ -0,0 +1,20 @@
+Extract the tarball to its final destination; this creates a ``smrtanalysis-1.4.0/`` directory. Be sure to use the tarball appropriate to your system (Ubuntu or CentOS).
+
+**Note**: You need to run these commands as sudo if you do not have permission to write to the install folder. If the extracted folder is **not** owned by the user performing the installation (``/opt`` is typically owned by root), change the ownership of the folder and all its contents. 
+
+Example: To change ownership within ``/opt``:
+```
+sudo chown -R <thisuser>:<thisgroup> smrtanalysis-1.4.0
+```
+
+We recommend deploying to ``/opt``:
+```
+tar -C /opt -xvvzf <tarball_name>.tgz
+```
+
+We also recommend creating a symbolic link ``/opt/smrtanalysis`` that points to ``/opt/smrtanalysis-1.4.0``:
+```
+ln -s /opt/smrtanalysis-1.4.0 /opt/smrtanalysis
+```
+
+This makes subsequent upgrades transparent: you simply repoint the symbolic link to the upgraded tarball directory.
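+
+For example, a later upgrade might be activated like this (version number hypothetical):
+```
+tar -C /opt -xvvzf smrtanalysis-1.4.1.tgz
+ln -sfn /opt/smrtanalysis-1.4.1 /opt/smrtanalysis
+```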
\ No newline at end of file
diff --git a/docs/Step-5,-Option-2:-Run-the-Upgrade-Script.md b/docs/Step-5,-Option-2:-Run-the-Upgrade-Script.md
new file mode 100644
index 0000000..a159e28
--- /dev/null
+++ b/docs/Step-5,-Option-2:-Run-the-Upgrade-Script.md
@@ -0,0 +1,14 @@
+If you are **upgrading** from v1.3.3 to v1.4.0 and want to preserve SMRT Cells, jobs, and users from a previous installation:
+
+Run ``upgrade_and_configure_smrtanalysis.sh`` to update the database schema and the reference repository entries:
+```
+cd $SEYMOUR_HOME/etc/scripts/postinstall
+./upgrade_and_configure_smrtanalysis.sh
+```
+
+Skip setting up the services, as these should already exist from the previous installation. When prompted, answer ``n``:
+```
+Now creating symbolic links in /etc/init.d. Continue? [Y/n] n
+```
+
+The upgrade process will port over the configuration settings from the previous version.
\ No newline at end of file
diff --git a/docs/Step-5:-Run-the-Installation-Script.md b/docs/Step-5:-Run-the-Installation-Script.md
new file mode 100644
index 0000000..7fa5254
--- /dev/null
+++ b/docs/Step-5:-Run-the-Installation-Script.md
@@ -0,0 +1,22 @@
+Run the installation script:
+```
+cd $SEYMOUR_HOME/etc/scripts/postinstall
+./configure_smrtanalysis.sh
+```
+
+The installation script requires the following input:
+* The **system name**. (Default: ``hostname -a``)
+* The **port number** that the services will run under. (Default: ``8080``)
+* The Tomcat **shutdown port**. (Default: ``8005``)
+* The **user/group** to run the services and set permissions for the files. (Default: ``smrtanalysis:smrtanalysis``)
+* The **mysql user name and password** to install the database. (Default: ``root:no password``)
+
+The installation script performs the following:
+* Creates the SMRT Portal database. **Note**: The mysql user performing the install **must** have permissions to alter or create databases. Otherwise, the installer will **reject** the user and prompt for another.
+* Sets the host and port names for various configuration files.
+* Sets the Tomcat/kodos user. The services will run as the specified user.
+* Sets the user and group permissions and ownership of the application to the Tomcat user.
+* Adds links in ``/etc/init.d`` to the Tomcat and kodos services. (The defaults are: ``/etc/init.d/kodosd`` and ``/etc/init.d/tomcatd``.) These are soft links to the actual service files within the application. If a file is already present (for example, tomcatd is already installed), the link can be created with a different name. The permissions of the underlying scripts are limited to the user running the services.
+* Installs the services. The services will automatically restart if the system restarts. (On CentOS, the installer will run ``chkconfig`` to install the services, rather than ``update-rc.d``.)
+
+**Note**: The installer will attempt to run without sudo access first. If this fails, the installer will prompt the user for a sudo password and retry.
\ No newline at end of file
diff --git a/docs/Step-7:-(New-Installations-Only)-Set-Up-User-Data-Folders.md b/docs/Step-7:-(New-Installations-Only)-Set-Up-User-Data-Folders.md
new file mode 100644
index 0000000..d0112dd
--- /dev/null
+++ b/docs/Step-7:-(New-Installations-Only)-Set-Up-User-Data-Folders.md
@@ -0,0 +1,17 @@
+SMRT Analysis saves references and results in its own hierarchy. Large amounts of data are generated, and storage can fill up quickly, so we suggest that you softlink to an **external** directory with more storage.
+
+All jobs and references, as well as drop boxes, are contained in ``$SEYMOUR_HOME/common/userdata``. You can move this folder to another location, then soft link ``$SEYMOUR_HOME/common/userdata`` to the new location. 
+
+**If performing a fresh installation**, for example:
+```
+mv $SEYMOUR_HOME/common/userdata /my_offline_storage
+ln -s /my_offline_storage/userdata $SEYMOUR_HOME/common/userdata
+```
+
+If **upgrading**, you need to point the new build to the external storage location. For example:
+```
+rm $SEYMOUR_HOME/common/userdata
+ln -s /my_offline_storage/userdata $SEYMOUR_HOME/common/userdata
+```
+
+**Note**: The default protocols and underlying support files within ``common/protocols`` and subfolders were updated **significantly** for v1.4.0. We **strongly recommend** that you recreate protocols for v1.4.0 rather than carry over protocols from previous versions.
\ No newline at end of file
diff --git "a/docs/Step-8:-(New-Installations-Only)-Set-Up-SMRT\302\256-Portal.md" "b/docs/Step-8:-(New-Installations-Only)-Set-Up-SMRT\302\256-Portal.md"
new file mode 100644
index 0000000..f951015
--- /dev/null
+++ "b/docs/Step-8:-(New-Installations-Only)-Set-Up-SMRT\302\256-Portal.md"
@@ -0,0 +1,19 @@
+1. Use your web browser to start SMRT Portal: ``http://HOST:PORT/smrtportal``
+2. Click **Register** at the top right.
+3. Create a user named ``administrator`` (all lowercase). This user is special, as it is the only user that does not require activation on creation.
+4. Enter the user name ``administrator``.
+5. Enter an email address. All administrative emails, such as new user registrations, will be sent to this address.
+6. Enter the password and confirm the password.
+7. Select **Click Here** to access **Change Settings**.
+8. To set up the mail server, enter the SMTP server information and click **Apply**. For email authentication, enter a user name and password. You can also enable Transport Layer Security.
+9. To enable automated submission from a PacBio® RS instrument, click **Add** under the Instrument Web Services URI field. Then, enter the following into the dialog box and click **OK**:
+```
+http://INSTRUMENT_PAP01:8081
+```
+``INSTRUMENT_PAP01`` is the IP address or name (pap01) of the instrument.
+``8081`` is the port for the instrument web service.
+
+
+10. Select the new URI, then click **Test** to check if SMRT Portal can communicate with the instrument service.
+11. (Optional) You can delete the pre-existing instrument entry by clicking **Remove**.
\ No newline at end of file
diff --git a/docs/Step-9:-Start-the-SMRT-Portal-and-Automatic-Secondary-Analysis-Services.md b/docs/Step-9:-Start-the-SMRT-Portal-and-Automatic-Secondary-Analysis-Services.md
new file mode 100644
index 0000000..996b376
--- /dev/null
+++ b/docs/Step-9:-Start-the-SMRT-Portal-and-Automatic-Secondary-Analysis-Services.md
@@ -0,0 +1,2 @@
+1. Start Tomcat: ``sudo $SEYMOUR_HOME/etc/scripts/tomcatd start``
+2. Start kodos: ``sudo /etc/init.d/kodosd start``
\ No newline at end of file
diff --git a/docs/Stopping-Celera-Assembler-jobs.md b/docs/Stopping-Celera-Assembler-jobs.md
new file mode 100644
index 0000000..d58c775
--- /dev/null
+++ b/docs/Stopping-Celera-Assembler-jobs.md
@@ -0,0 +1,14 @@
+Celera Assembler SGE jobs **cannot** be stopped by using the smrtpipe ``--kill`` command in the SMRT Pipe job directory. This is because during the Celera Assembler workflow, some jobs are submitted to SGE by SMRT Pipe (such as jobs submitted by the SFilter module) and others are submitted by Celera Assembler (such as overlapper jobs). Celera Assembler-submitted jobs typically have names that end with ``_asm`` or ``_celera``.
+
+Monitor and possibly terminate Celera Assembler-submitted jobs manually by using the ``qstat`` and ``qdel`` commands:
+
+```
+# Look for assembler jobs run by smrtanalysis in my_queue:
+
+qstat -U smrtanalysis -q my_queue | grep _asm
+3637705 0.00000 rCA_asm    smrtanalysis      qw    09/10/2012 09:26:53
+
+# Manually delete the job:
+
+qdel 3637705
+```
\ No newline at end of file
diff --git a/docs/The-Reference-Repository.md b/docs/The-Reference-Repository.md
new file mode 100644
index 0000000..e28f5ef
--- /dev/null
+++ b/docs/The-Reference-Repository.md
@@ -0,0 +1,28 @@
+The **reference repository** is a file-based data store used by SMRT Analysis to manage reference sequences and associated information. The full description of all of the attributes of the reference repository is beyond the scope of this document, but you need to use some basic aspects of the reference repository in most SMRT Pipe analyses. 
+
+**Example**: Analysis of multi-contig references can **only** be handled by supplying a reference entry from a reference repository.
+
+It is simple to create and use a reference repository:
+
+* A reference repository can be any directory on your system. You can have as many reference repositories as you wish; the input to SMRT Pipe is a fully resolved path to a reference entry, so this can live in any accessible reference repository.
+
+Starting with the FASTA sequence ``genome.fasta``, you upload the sequence to your reference repository using the following command:
+```
+referenceUploader -c -p/path/to/repository -nGenomeName -fgenome.fasta
+```
+
+where:
+
+* ``/path/to/repository`` is the path to your reference repository.
+* ``GenomeName`` is the name to use for the reference entry that will be created.
+* ``genome.fasta`` is the FASTA file containing the reference sequence to upload.
+
+For a large genome, we highly recommend that you produce the BLASR suffix array during this upload step. Use the following command:
+```
+referenceUploader -c -p/path/to/repository -nHumanGenome -fhuman.fasta --saw='sawriter -welter'
+```
+
+There are many more options for reference management. Consult the help for ``referenceUploader`` by entering ``referenceUploader -h``.
+
+To learn more about what is being stored in the reference entries, look at the directory containing a reference entry. You will find a metadata description (``reference.info.xml``) of the reference and its associated files. For example, various static indices for BLASR and SMRT View are stored in the sequence directory along with the FASTA sequence.
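+
+As a sketch, listing a newly created entry might show something like this (exact layout varies by version):
+```
+$ ls /path/to/repository/GenomeName
+reference.info.xml  sequence/
+```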
\ No newline at end of file
diff --git a/docs/The-configure_smrtanalysis.sh-script-fails.md b/docs/The-configure_smrtanalysis.sh-script-fails.md
new file mode 100644
index 0000000..1d891b7
--- /dev/null
+++ b/docs/The-configure_smrtanalysis.sh-script-fails.md
@@ -0,0 +1,24 @@
+Make sure that all required libraries are installed.
+
+**Ubuntu 10.04 +**
+
+```
+apt-get install mysql-server libssl0.9.8 libgfortran3 liblapack3gf libxml-parser-perl
+```
+
+**CentOS 5.6 +**
+
+```
+yum install mysql-server perl-XML-Parser.x86_64 libgfortran openssl openssl098e
+```
+
+**CentOS 6 +**
+```
+yum install mysql-server perl-XML-Parser.x86_64 libgfortran compat-libgfortran-41 openssl openssl098e
+```
+
+Make sure the system user executing the script is either the `smrtanalysis` user, or is superuser.
+
+The user that SMRT Analysis runs under must be created, as well as a MySQL user with elevated privileges for creating the smrtportal schema.
+
+See also: [SMRT® Analysis Software Installation](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-v2.0.1).
\ No newline at end of file
diff --git a/docs/The-head-node-is-very-slow-when-more-than-20-jobs-are-running.md b/docs/The-head-node-is-very-slow-when-more-than-20-jobs-are-running.md
new file mode 100644
index 0000000..3cfaf21
--- /dev/null
+++ b/docs/The-head-node-is-very-slow-when-more-than-20-jobs-are-running.md
@@ -0,0 +1,5 @@
+The head node can become **extremely** slow when more than 20 jobs are running simultaneously.
+
+The current distributed system can potentially use resources and create processes on the head node, for example, if the ``qsw.py`` script is used. Too many simultaneous running jobs may impact performance on the head node.
+
+In practice, we have had no issues with 20 or more simultaneous jobs (resequencing with consensus). When running GATK, we did **not** see this issue with 80 or more simultaneous jobs.
\ No newline at end of file
diff --git a/docs/The-job-is-very-slow.md b/docs/The-job-is-very-slow.md
new file mode 100644
index 0000000..31f0dd2
--- /dev/null
+++ b/docs/The-job-is-very-slow.md
@@ -0,0 +1,94 @@
+### Step 1: Check resource requirements
+
+We recommend that SMRT Analysis be installed on a compute cluster with at least the following hardware:
+```
+1 head node:
+• Minimum 16 GB RAM. Larger references such as human may require 32 GB RAM.
+• Minimum 250 GB of disk space
+
+3 compute nodes:
+• 8 cores per node, with 2 GB RAM per core
+• Minimum 250 GB of disk space per node
+```
+
+#### You **do not** meet resource requirements, and are running on a single server.
+
+If SMRT Analysis is running on a single server, the software makes no attempt to load-balance or queue any jobs on the single server. All jobs are submitted and executed, which simultaneously slows down all other processes running on the server. You must advise your users to submit SMRT Portal jobs with restraint, preferably one at a time.
+
+#### You **do** meet resource requirements, and are running on a distributed computing environment.
+
+If SMRT Analysis is configured for distributed computing, but the jobs are still running slowly, you need to edit the template file for your job management system (JMS):
+```
+$SEYMOUR_HOME/analysis/etc/cluster/<JMS>/start.tmpl
+$SEYMOUR_HOME/analysis/etc/cluster/<JMS>/interactive.tmpl
+$SEYMOUR_HOME/analysis/etc/cluster/<JMS>/kill.tmpl
+```
+
+Step 3 provides more specific suggestions on what options to edit.
+
+### Step 2:  Make sure `TMP` is a local (not NFS) directory.
+Temporary files are written to the `TMP` directory specified in `$SEYMOUR_HOME/etc/smrtpipe.rc`. Unlike `SHARED_DIR`, which must be cross-mounted on all nodes to share files, `TMP` should be set to a **local directory** to reduce I/O overhead. Edit `smrtpipe.rc`, restart tomcat, and rerun the job.
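+
+A minimal sketch of the relevant `smrtpipe.rc` entries, assuming `/scratch` exists locally on every node and `shared_dir` is cross-mounted:
+```
+TMP=/scratch/
+SHARED_DIR=/opt/smrtanalysis/common/userdata/shared_dir/
+```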
+
+### Step 3:  Examine the .tmpl files.
+
+The default start.tmpl file for Sun Grid Engine (SGE) looks like this:
+```
+qsub -pe <your_parallel_environment> ${NPROC} -S /bin/bash -V -q <your new_queue> -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} ${EXTRAS} ${CMD}
+```
+
+It is highly configurable, and it is sometimes critical for performance to tailor the options to your hardware. Any option available to the qsub command can be added to both `start.tmpl` and `interactive.tmpl` files. You can also add additional scripts to the .tmpl file to prepare the job for submission. Any change you make to the `start.tmpl` file should also be repeated in the `interactive.tmpl` file, making sure to preserve the extra `-sync y` option in the `interactive.tmpl` file.
+ 
+#### Step 3a: Decrease NPROC
+
+The `NPROC` environment variable controls how many cores/processes/slots are reserved for each job. We use this option because SMRT Pipe has a number of multi-threaded algorithms. By default, this number is one less than the total number of cores on your head node, but it may be different for your compute nodes. Change the `NPROC` environment variable by editing it in `$SEYMOUR_HOME/analysis/etc/smrtpipe.rc`. This is an iterative process where you:
+
+1. Change NPROC to a smaller number so that it does not exceed the core count of the smallest compute node in your cluster.
+
+2. Restart tomcat.
+
+3. Execute `$SEYMOUR_HOME/common/jobs/<id_prefix>/<id>/job.sh` and check how long the job takes to complete.
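+
+As a sketch, the edit in step 1 might look like this in `smrtpipe.rc` (value hypothetical, e.g. for 8-core compute nodes, leaving one core free):
+```
+NPROC=7
+```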
+
+
+#### Step 3b option 1 for memory:  Add `mem_free` and `h_vmem` to the `-l` option in SGE:
+When submitting your job(s), if you do not specify any memory requirements, SGE chooses the cluster node(s) with the lowest CPU load **WITHOUT REGARD TO MEMORY AVAILABILITY**. For example, by adding the options `-l mem_free=4G,h_vmem=6G` to your qsub command, your job goes to a node with at least 4 GB of memory available at the time the job starts, and the job is automatically stopped if it exceeds 6 GB of memory usage at any time.
+
+Restart tomcat.  Double check the memory usage of your job by executing `qstat -j <job_number> | grep vmem` while it is running.
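+
+As a sketch, the corresponding `start.tmpl` line might become (memory values hypothetical):
+```
+qsub -pe <your_parallel_environment> ${NPROC} -l mem_free=4G,h_vmem=6G -S /bin/bash -V -q <your new_queue> -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} ${EXTRAS} ${CMD}
+```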
+
+#### Step 3b option 2 for memory:  Edit the `virtual_free` entry for each node if you have a heterogeneous cluster.
+
+1. Call up the complex attribute modification editor:  `qconf -mc`
+2. Edit the "slots" entry, so it looks like:
+`slots               s          INT`         
+3. Edit the "virtual_free" entry, so it looks like:
+`virtual_free        vf         MEMORY`
+4. For each host, type `qconf -me <hostname>` and either add or edit the existing `complex_values` entry so that it looks like:
+`complex_values        slots=8, virtual_free=16G`
+
+Restart tomcat. Double check the memory usage of your job by executing `qstat -j <job_number> | grep vmem` while it is running.
+
+#### Step 3c: Add the `-M` option in LSF:
+```
+bsub -q pacbio -g /pacbio/smrtanalysis -J ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} -M 33000000 -n 4 ${CMD}
+```
+
+#### Step 3d: Add the `-R` option in LSF:
+If you have a heterogeneous cluster, you can also use the `-R` option to specify compute nodes that meet certain resource requirements:
+```
+bsub -q pacbio -g /pacbio/smrtanalysis -J ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} -M 33000000 -R 'select[type==LINUX64 && mem>=32000 && tmp>=300000] rusage[mem=32000, tmp=250000] span[hosts=1]' -n 4 ${CMD}
+```
+
+#### Step 3e: Add a script to override options.
+You can add an arbitrary number of operations to the job submission by adding lines to the .tmpl files. In the following example, additional environment variables are defined in a profile script, instead of being managed by the parallel environment (`-pe`) option:
+
+```
+. /path/to/profile
+qsub  ${NPROC} -S /bin/bash -V -q <your new_queue> -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} ${EXTRAS} ${CMD}
+```
+
+### Step 4:  Troubleshoot the distributed computing environment
+
+1.  Check which jobs are stuck in the queue. For example, you can use `qstat` if your job management system is SGE. The first column of the output is the job ID, and you can find out which node is running that job by executing `qstat -j <job_id>`.
+
+2.  If `qstat -f` shows a node in `E` (error) status, disable the queue on the node with `qmod -d <queue name>` and re-enable it with `qmod -e <queue name>`.
+
+A good summary for troubleshooting Grid Engine issues was published by [bioteam in 2009](http://www.bioteam.net/wp-content/uploads/2009/09/07-SGE-6-Admin-Troubleshooting.pdf). This document describes troubleshooting at the Job level and at the Cluster level.
diff --git a/docs/Timing-problem-with-jobs.md b/docs/Timing-problem-with-jobs.md
new file mode 100644
index 0000000..8fa567c
--- /dev/null
+++ b/docs/Timing-problem-with-jobs.md
@@ -0,0 +1,5 @@
+Due to the ``-hold_jid ovl_asm`` parameter passed to qsub, if more than one Celera Assembler job is run in parallel from the same user account, the ``runCA.sge.out.*.sh`` scripts submitted by qsub from **all** jobs will hold until **all** ``ovl_asm`` tasks have completed.
+
+This is problematic, as multiple jobs spawned by SMRT Portal at the same time will **all** run with the same user name, such as ``smrtanalysis``. These jobs will **all** be waiting for the job with the longest ``ovl_asm`` tasks to complete before proceeding, unnecessarily prolonging analysis.
+
+The ``sgeName=`` parameter solves this issue for the default use case where no external ``*.spec`` file is specified. However, to run **multiple** CA jobs in parallel with externally specified ``*.spec`` files, you need to remember to add or modify the ``sgeName=`` parameter in each ``*.spec`` file so that it is different for each job running in parallel.
\ No newline at end of file
diff --git a/docs/Troubleshooting---configure_smrtanalysis.sh-Script-Fails.md b/docs/Troubleshooting---configure_smrtanalysis.sh-Script-Fails.md
new file mode 100644
index 0000000..08479d3
--- /dev/null
+++ b/docs/Troubleshooting---configure_smrtanalysis.sh-Script-Fails.md
@@ -0,0 +1,5 @@
+Make sure that all required libraries are installed.  See installation documentation for specifics for your OS.
+
+Make sure the system user executing the script is either the smrtanalysis user, or is superuser.
+
+The user that SMRT Analysis runs under must be created, as well as a MySQL user with elevated privileges for creating the smrtportal schema.
\ No newline at end of file
diff --git a/docs/Troubleshooting-Kodos-Secondary-Auto-Analysis.md b/docs/Troubleshooting-Kodos-Secondary-Auto-Analysis.md
new file mode 100644
index 0000000..7a4f818
--- /dev/null
+++ b/docs/Troubleshooting-Kodos-Secondary-Auto-Analysis.md
@@ -0,0 +1,179 @@
+##Introduction##
+
+This document describes a series of troubleshooting steps to ensure that secondary auto-analysis processing takes place.
+
+When you select a secondary analysis protocol in RS Remote, that protocol is automatically initiated upon completion of primary analysis. After secondary analysis is finished, results can be viewed in SMRT® Portal. The Kodos Secondary Auto-Analysis Daemon is controlled by the `kodosd` service.
+
+###Start the Kodos Service###
+Often, the first thing to do is simply start the `kodosd` service:
+```
+$SMRT_ROOT/admin/bin/kodosd start
+```
+If `kodosd` was **not** already running, the message returned:
+```
+ * starting Kodos Secondary Auto-Analysis Daemon
+```
+If `kodosd` was already running, the message returned:
+```
+ * Kodos Secondary Auto-Analysis Daemon is already running
+```
+
+
+###Stop the Kodos Service###
+
+To stop the `kodosd` service:
+```
+$SMRT_ROOT/admin/bin/kodosd stop
+```
+If `kodosd` was already running, the message returned:
+```
+ * stopped Kodos Secondary Auto-Analysis Daemon
+```
+If `kodosd` was **not** already running, the message returned:
+```
+ * Kodos Secondary Auto-Analysis Daemon is not running
+```
+
+###Restart the Kodos Service###
+```
+$SMRT_ROOT/admin/bin/kodosd restart
+```
+If `kodosd` was already running, the message returned:
+```
+ * Restarting Kodos Secondary Auto-Analysis Daemon kodosd
+ * stopped Kodos Secondary Auto-Analysis Daemon
+ * starting Kodos Secondary Auto-Analysis Daemon
+```
+If `kodosd` was **not** already running, the message returned:
+```
+ * Restarting Kodos Secondary Auto-Analysis Daemon kodosd
+ * Kodos Secondary Auto-Analysis Daemon is not running
+ * starting Kodos Secondary Auto-Analysis Daemon
+```
+
+
+###Check for Running `kodosd` autoDaemon Processes###
+After starting the Kodos autoDaemon,  check that `kodosd` continues to run on the server:
+
+1. SSH into the SMRT Analysis server.
+2. Check that the Kodos autoDaemon process is running:
+
+```
+sudo ps -ef | grep kodos
+```
+should return output showing two `kodosd` services, similar to:
+```
+smrtanalysis      23681     1  0 12:52 ?        00:00:00 jsvc.exec -Xms256m -Xmx256m -user fas -pidfile /tmp/kodosd.pid -home /net/usmp-data3-10g/ifs/data/vol53/fas/secondary/opt/smrtanalysis/current/redist/java
+...
+smrtanalysis      23682 23681  0 12:52 ?        00:00:07 jsvc.exec -Xms256m -Xmx256m -user fas -pidfile /tmp/kodosd.pid -home /net/usmp-data3-10g/ifs/data/vol53/fas/secondary/opt/smrtanalysis/current/redist/java
+...
+root     29597 15158  0 13:37 pts/23   00:00:00 grep kodosd
+```
+
+If there are more than two `kodosd` services running, or services running under different or multiple users, all processes should be terminated by executing
+```
+kill <PID>
+```
+for each process, then start `kodosd` with
+```
+$SMRT_ROOT/admin/bin/kodosd start
+```
+
+Confirm the `kodosd` processes are running by repeating Step 2.
+
+An additional confirmation is to check for the lock file and the timestamps of the autoDaemon logs:
+
+```
+$ ls -l $SMRT_ROOT/current/common/log/autoDaemon
+total 434
+-rw-r--r-- 1 smrtanalysis smrtanalysis 139475 2014-06-04 12:47 autoDaemon.0.log
+-rw-r--r-- 1 smrtanalysis smrtanalysis      0 2014-06-04 12:06 autoDaemon.0.log.lck
+```
+
+
+###Testing the URI Paths###
+1. In RS Remote, choose View -> Settings -> Data Sources.
+2. Check the In-Instrument, Windows and Unix Path URIs (Uniform Resource Identifiers) by clicking the respective Test buttons. All path URIs must pass the test. (For information on specifying the path URIs, see the document Installing and Setting Up RS Remote.)
+3. Click Apply.
+4. Exit and then restart RS Remote.
+
+
+###Testing the Web Server Address###
+1. In RS Remote, choose View -> Settings -> Secondary Analysis.
+2. Copy the path in the Web Server Address field, paste it into a web browser, and confirm that the settings are valid using the testing functions provided in the configuration UI.
+3. In RS Remote, click Apply.
+
+
+###Configuring Instrument Web Service URIs###
+
+Kodos must be configured to communicate with the PacBio RS II Blade Center.  Failure to do so may result in messages in `$SMRT_ROOT/current/common/log/autoDaemon/autoDaemon.0.log`, similar to:
+```
+Jun 09, 2014 2:46:15 PM com.pacbio.secondary.analysis.daemon.rest.DataProviderImpl getThenRead
+INFO: GET-ing from uri > http://100.10.0.1:8080/smrtportal/api/settings/instrument-uris
+Jun 09, 2014 2:46:15 PM com.pacbio.secondary.analysis.daemon.rest.DataProviderImpl getThenRead
+INFO: Returned <  [ "http://PRIMARY_HOST:8081" ]
+```
+To specify the Instrument Web Service URI:
+
+1. Log in to SMRT Portal as a user with the administrator role.
+2. Click *Admin* -> *Change Settings*.
+3. Under the section Instrument Web Service URIs, click *Add*, then enter the IP address or host name of the instrument and the port number, which is `8081`. URIs will typically be in the form `http://pap01-12345:8081`, where `12345` is the instrument number of the PacBio RS II.
+4. Click *Test* to confirm the communication is successful.
+5. If the entry for `http://PRIMARY_HOST:8081` exists, select the entry and click *Remove*.
+
+
+###Check the autoDaemon Logs###
+
+####Check the timestamps on the autoDaemon log files####
+```
+ls -l $SMRT_ROOT/current/common/log/autoDaemon
+total 434
+-rw-r--r-- 1 smrtanalysis smrtanalysis 139475 2014-06-04 12:47 autoDaemon.0.log
+-rw-r--r-- 1 smrtanalysis smrtanalysis      0 2014-06-04 12:06 autoDaemon.0.log.lck
+```
+* The timestamp for `autoDaemon.0.log.lck` should coincide with when `kodosd` was started.
+* The timestamp for `autoDaemon.0.log` should coincide with the end of the last work loop completed, and should be within the last 5 minutes for a running `kodosd`.
+
+####Look for error messages in the log####
+
+Many errors in `autoDaemon.0.log` can be found in and immediately following lines starting with `SEVERE`.
+
+An example:
+
+```
+SEVERE: Failed to get instrument output
+com.sun.jersey.api.client.ClientHandlerException: java.net.UnknownHostException: null
+        at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:128)
+        at com.sun.jersey.api.client.filter.HTTPBasicAuthFilter.handle(HTTPBasicAuthFilter.java:78)
+        at com.sun.jersey.api.client.Client.handle(Client.java:569)
+        at com.sun.jersey.api.client.WebResource.handle(WebResource.java:556)
+        at com.sun.jersey.api.client.WebResource.get(WebResource.java:179)
+        at com.pacbio.secondary.analysis.daemon.rest.DataProviderImpl.getThenRead(DataProviderImpl.java:75)
+        at com.pacbio.secondary.analysis.daemon.rest.DataProviderImpl.getInstrumentOutputJson(DataProviderImpl.java:53)
+        at com.pacbio.secondary.analysis.daemon.rest.DataProvider.getInstrumentOutput(DataProvider.java:86)
+        at com.pacbio.secondary.analysis.daemon.AutoJob.getInstrumentOutput(AutoJob.java:129)
+        at com.pacbio.secondary.analysis.daemon.AutoJob.execute(AutoJob.java:88)
+        at org.quartz.core.JobRunShell.run(JobRunShell.java:216)
+        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:549)
+Caused by: java.net.UnknownHostException: null
+        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:178)
+        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
+        at java.net.Socket.connect(Socket.java:579)
+        at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
+        at sun.net.www.http.HttpClient.openServer(HttpClient.java:388)
+        at sun.net.www.http.HttpClient.openServer(HttpClient.java:483)
+        at sun.net.www.http.HttpClient.<init>(HttpClient.java:213)
+        at sun.net.www.http.HttpClient.New(HttpClient.java:300)
+        at sun.net.www.http.HttpClient.New(HttpClient.java:316)
+        at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:992)
+        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:928)
+        at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:846)
+        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1296)
+        at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
+        at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:215)
+        at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:126)
+        ... 11 more
+
+Jun 04, 2014 12:05:55 PM com.pacbio.secondary.analysis.daemon.AutoJob execute
+INFO: No instrument outputs to submit...
+```
diff --git a/docs/Troubleshooting-the-SMRT-Analysis-Suite.md b/docs/Troubleshooting-the-SMRT-Analysis-Suite.md
new file mode 100644
index 0000000..76132e3
--- /dev/null
+++ b/docs/Troubleshooting-the-SMRT-Analysis-Suite.md
@@ -0,0 +1,77 @@
+This troubleshooting guide provides general strategies for troubleshooting SMRT Analysis issues and offers solutions to common problems.
+
+##Technical Support Tools##
+When getting support from Pacific Biosciences, you may be asked to install and run some technical support tools.  See the links below for instructions.
+* [[ Installing and upgrading the techsupport tools ]]
+* [[ Using the techsupport tools ]]
+
+##SMRT Analysis Installation##
+
+* Installation
+   * [[ The configure_smrtanalysis.sh script fails]]
+   * [[ Cannot create a mysql database ]]
+   * [[ Installation assumes local MySQL instance ]]
+   * [[ Environment variables are not set correctly ]]
+   * [[ Cannot create softlinks to services ]]
+* Upgrade
+   * [[ User does not have the correct permissions to upgrade ]]
+   * [[ Finding $SEYMOUR_HOME on an existing SMRT Analysis Installation ]]
+* SMRT Analysis Migration
+   * [[How to migrate SMRT Analysis to a different server]]
+* SMRT Analysis Uninstalls
+   * [[How to uninstall SMRT Analysis]]
+* Locating [SMRT Analysis Log Files](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/Log-File-Locations)
+
+##SMRT Portal##
+
+* Administration
+  * [[ Cannot See The SMRT Portal Page ]]
+  * [[ You can start SMRT Portal, but cannot log in ]]
+  * [[ Cannot register administrator for the first time due to hibernate.dialect error ]]
+  * [[ SMRT Portal Lost administrator password ]]
+  * [[ SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder" ]]
+* Creating/Running Jobs
+   * [[ SMRT Portal job fails ]]
+   * [[ Common SMRT Portal Errors ]]
+   * [[ RS HGAP Assembly protocol fails in SMRT Portal]]
+   * [[ smrtpipe.log Error: java.lang.OutOfMemoryError: Java heap space]]
+   * [[ SMRT Portal has dificulty connecting to the smrtportal mysql database ]]
+   * [[ SMRT Portal freezes]]
+   * [[ SMRT Portal job status does not update]]
+   * [[ SMRT Portal GMAP "No such file or directory" error]]
+   * [[ KeyError: "unable to open object (Symbol table: Can't open object)" ]]
+* Distributed Computing
+   * [[ The job is very slow ]]
+   * [[ SMRT Portal jobs are being submitted as root ]]
+   * [[ Job fails at exactly 12 hour ]]
+   * [[ When using the sync option in SGE, you may see errors]]
+   * [[ The head node is very slow when more than 20 jobs are running ]]
+   * [[ Head node may run out of resources ]]
+   * [[ qsub: command not found ]]
+* Import and Manage Data
+   * [[ Cannot import new SMRT Cells ]]
+   * [[ Reference upgrades fail]]
+   * [[ Troubleshooting Kodos Secondary Auto Analysis]]
+   * [[ Delete SMRT Cells from SMRT Portal]]
+* Known System Issues
+   * [[ SMRT View runs out of resources ]]
+
+
+## SMRT View
+* Common Problems
+  * [[ SMRT View does not launch ]]
+  * [[ SMRT View is slow ]]
+  * [[ SMRT View is downloaded from the server every time you access it ]]
+  * [[ SMRT View does not show reads in the details panel ]]
+  * [[ SMRT View crashes while browsing ]]
+  * [[ SMRT View security certificate warning message ]]
+* Alternate Configurations
+  * [[ Running SMRT View in a different tomcat instance ]]
+ 
+## Celera® Assembler
+  * [[ Stopping Celera Assembler jobs ]]
+  * [[ Job fails if a soft link resolves to 2 different paths ]]
+  * [[ Timing problem with jobs ]]
+  * [[ Celera Assembler deadlocks in distributed mode ]]
+  * [[ Customizing a spec file using a setting not exposed in the UI ]]
+  * [[ Default parameters are set conservatively ]]
\ No newline at end of file
diff --git a/docs/Troubleshooting_everything.md b/docs/Troubleshooting_everything.md
new file mode 100644
index 0000000..f3820d6
--- /dev/null
+++ b/docs/Troubleshooting_everything.md
@@ -0,0 +1,213 @@
+# SMRT Analysis Installs and Upgrades
+## SMRT Analysis Install
+### Issue: The configure_smrtanalysis.sh script fails
+Make sure that all required libraries are installed.
+
+**Ubuntu 10.04 +**
+
+```
+apt-get install mysql-server libssl0.9.8 libgfortran3 liblapack3gf libxml-parser-perl
+```
+
+**CentOS 5.6 +**
+
+```
+yum install mysql-server perl-XML-Parser.x86_64 libgfortran openssl openssl098e
+```
+
+**CentOS 6 +**
+```
+yum install mysql-server perl-XML-Parser.x86_64 libgfortran compat-libgfortran-41 openssl openssl098e
+```
+
+Make sure the system user executing the script is either the `smrtanalysis` user, or is superuser.
+
+The user which SMRT Analysis will be running under will need to be created, as well as a MySQL user with elevated privileges for creating the smrtportal schema.
+
+See also: [SMRT Analysis Software Installation](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-v2.0.1).
+
+### Issue: Environment variables are not set correctly 
+All environment variables used by SMRT Analysis are specified in `/opt/smrtanalysis/analysis/etc/smrtpipe.rc`.  A common error occurs when the **TMP** and **SHARED_DIR** variables are set to directories that do not exist on your file system.  To correct the error, create the directories in the local file system, and give the directories write permissions.
+
+```
+sudo mkdir /scratch/
+sudo chmod a+rwx /scratch/
+
+sudo mkdir /opt/smrtanalysis/common/userdata/shared_dir/
+sudo chmod a+rwx /opt/smrtanalysis/common/userdata/shared_dir/
+```
+
+Then assign those directories to the environment variables by editing smrtpipe.rc:
+```
+TMP=/scratch/
+SHARED_DIR=/opt/smrtanalysis/common/userdata/shared_dir/
+```
+
+### Issue: I cannot create softlinks to services
+## SMRT Analysis Upgrade
+### Issue:  I do not have the correct permissions to upgrade 
+SMRT Analysis upgrades are very sensitive to ownership and permission settings.  During upgrade, you need to make sure that you are the same user as the user who installed the previous version of SMRT Analysis.  You can check this by examining the ownership of the `$SEYMOUR_HOME` directory:
+
+```
+user1 at server$ ls -l /opt
+lrwxrwxr-x 1 smrtanalysis smrtanalysis 32 2012-12-17 08:02 smrtanalysis -> smrtanalysis-1.4.0
+drwxrwxr-x 8 smrtanalysis smrtanalysis 142 2012-05-17 21:38 smrtanalysis-1.3.1
+drwxrwxr-x 8 smrtanalysis smrtanalysis 142 2012-05-17 21:38 smrtanalysis-1.3.3
+drwxrwxr-x 8 smrtanalysis smrtanalysis 142 2012-05-17 21:38 smrtanalysis-1.4.0
+logout
+```
+```
+smrtanalysis at server$ /opt/smrtanalysis/etc/scripts/postinstall/upgrade_and_configure_smrtanalysis.sh
+```
+
+In this example, the `$SEYMOUR_HOME` directory is set to `/opt/smrtanalysis/` and this directory is actually a softlink to smrtanalysis-1.4.0, which is owned by the user **smrtanalysis** belonging to the group **smrtanalysis**. However, you are currently logged in as **user1**. To proceed, you must log out and log back in as the user **smrtanalysis**. This user should then run `upgrade_and_configure_smrtanalysis.sh`.
+
+If you lost the credentials for the **smrtanalysis** user, you need a root user (with sudo permissions) to change the ownership of `$SEYMOUR_HOME` to, say, **user1**, and then run `upgrade_and_configure_smrtanalysis.sh` as the new user.
+
+```
+user1 at server$ sudo chown -R user1:group1 /opt/smrtanalysis/
+user1 at server$ sudo chown -R user1:group1 /opt/smrtanalysis-1.4.0/
+user1 at server$ ls -l /opt
+lrwxrwxr-x 1 user1 group1 32 2012-12-17 08:02 smrtanalysis -> smrtanalysis-1.4.0
+drwxrwxr-x 8 smrtanalysis smrtanalysis 142 2012-05-17 21:38 smrtanalysis-1.3.1
+drwxrwxr-x 8 smrtanalysis smrtanalysis 142 2012-05-17 21:38 smrtanalysis-1.3.3
+drwxrwxr-x 8 user1 group1 142 2012-05-17 21:38 smrtanalysis-1.4.0
+```
+
+
+If you **DO NOT** have sudo and you **ARE NOT** the same user who installed the previous version, **you CANNOT upgrade smrtanalysis**. You **MUST** either log in as the same user, or find an administrator with root privileges to reset the ownership and permissions in the `$SEYMOUR_HOME` directory.
+### Issue:  Cannot access SMRT cells 
+### Issue:  Cannot access references
+
+***
+
+# SMRT Portal
+##Administration
+### Issue: I cannot see the SMRT Portal page 
+**Cause 1: The hostname and/or port is incorrect.** The hostname and port are designated when you install smrtanalysis. For example, if you want your SMRT Portal URI to be http://server1:8080/smrtportal/, the hostname should be set as **server1** and the port should be set as **8080**. To reset these variables, re-run `configure_smrtanalysis.sh` and enter **server1** and **8080** when prompted by the script.
+
+**Cause 2: Networking issues.** The client computer/laptop cannot see the server. SMRT Portal is a client-server application, which means that the client computer (your laptop) must be able to reach the server computer (server1) over the network. Some institutions require a VPN to log in to their network. Ask your network administrator what hostname to assign to SMRT Portal so that client computers can resolve it.
+
+**Cause 3: Tomcat is not running.** Tomcat is the web server that hosts SMRT Portal, and it must be running for SMRT Portal to function. When it is off, the ps command below returns only a single line. When it is on, the ps command returns an additional line detailing the path to the tomcatd process.
+
+```
+user at server1$ ps -ef | grep tomcat
+71063    15603 23660  0 16:43 pts/15   00:00:00 grep tomcat
+
+user at server1$ /opt/smrtanalysis/etc/scripts/tomcatd start
+
+user at server1$ ps -ef | grep tomcat
+71063    15603 23660  0 16:43 pts/15   00:00:00 grep tomcat
+71109    16203     1  0 Dec05 ?        00:55:17 /opt/smrtanalysis-1.3.3//redist/java/bin/java -Djava.util.logging.config.file=/opt/smrtanalysis-1.3.3//redist/tomcat/conf/logging.properties -d64 -server -Xmx8g -Djava.library.path=/opt/smrtanalysis-1.3.3//common/lib -Djava.security.auth.login.config=/opt/smrtanalysis-1.3.3//redist/tomcat/conf/kerb5.conf -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.endorsed.dirs=/opt/smrtanalysis-1.3.3//redist/tomcat/endorsed -cl [...]
+```
+
+### Issue: I cannot log into SMRT Portal
+
+##Creating/Running Jobs
+### Issue:  My SMRT Portal Job fails immediately
+When a SMRT Portal job fails, you can generally troubleshoot the errors by looking at the smrtpipe.log file. Sometimes a job fails immediately and the smrtpipe.log file is never created. When this happens, look for errors written to `$SEYMOUR_HOME/common/log/smrtportal/smrtportal.0.log`.
+
+### Issue:  The job is very slow
+We recommend that SMRT Analysis be installed on a compute cluster with the following hardware specs:
+```
+1 head node:
+• Minimum 16 GB RAM. Larger references such as human may require 32 GB RAM.
+• Minimum 250 GB of disk space
+
+3 compute nodes:
+• 8 cores per node, with 2 GB RAM per core
+• Minimum 250 GB of disk space per node
+```
+
+If SMRT Analysis is running on a single server, the software makes no attempt to load-balance or queue any jobs on the single server. All jobs are submitted and executed, which simultaneously slows down all other processes running on the server. You must advise your users to submit SMRT Portal jobs with restraint, preferably one at a time.
+
+If SMRT Analysis is configured for distributed computing, but the jobs are still running slowly, you may want to consider the following edits to the appropriate template file for your job management system (JMS).  These files are `$SEYMOUR_HOME/analysis/etc/cluster/<JMS>/start.tmpl` and `$SEYMOUR_HOME/analysis/etc/cluster/<JMS>/interactive.tmpl`.  
+
+If other, perhaps larger, jobs are being submitted to the same queue, change the designated queue (-q option) to an exclusive environment and monitor the resource usage. For example, the start.tmpl file for Sun Grid Engine (SGE) looks like this:
+```
+qsub -pe <your_parallel_environment> ${NPROC} -S /bin/bash -V -q <your new_queue> -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} ${EXTRAS} ${CMD}
+```
+
+You can also add options to limit the memory usage, for example, using the -M option for the bsub command in LSF:
+```
+bsub -q pacbio -g /pacbio/smrtanalysis -J ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} -M 33000000 -n 4 ${CMD}
+```
+
+If you have a heterogeneous cluster, you can also use the -R option to specify compute nodes that meet certain resource requirements:
+```
+bsub -q pacbio -g /pacbio/smrtanalysis -J ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} -M 33000000 -R 'select[type==LINUX64 && mem>=32000 && tmp>=300000] rusage[mem=32000, tmp=250000] span[hosts=1]' -n 4 ${CMD}
+```
+
+Finally, you can add an arbitrary number of operations to the job submission by adding lines to the .tmpl files. In the following example, additional environment variables are defined in a profile script, instead of being managed by the parallel environment (-pe) option:
+
+```
+. /path/to/profile
+qsub  ${NPROC} -S /bin/bash -V -q <your new_queue> -N ${JOB_ID} -o ${STDOUT_FILE} -e ${STDERR_FILE} ${EXTRAS} ${CMD}
+```
+
+
+## Import and Manage Data
+### Issue: I cannot import new SMRT Cells
+The assumed file structure needed to import SMRT Cells is a top-level directory that contains the `metadata.xml` file and an "Analysis_Results" directory that contains the `bas.h5` file.
+
+```
+top-level-directory
+    *.metadata.xml
+    Analysis_Results
+        *.bas.h5
+```
+
+Make sure the directory is in this file structure, and execute `ls -l` to make sure that the smrtanalysis user has read permissions on the files and execute permissions on the directories.
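+
+If permissions are the problem, one sketch of a fix is to grant read on files and execute (traversal) on directories recursively; `chmod`'s capital `X` adds execute only to directories and to files that are already executable (path hypothetical):
+```
+chmod -R a+rX /path/to/top-level-directory
+```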
+
+
+***
+
+
+#SMRT View
+### Issue: SMRT View does not launch 
+SMRT View is a client-server application.  Underlying problems can generally be traced to issues with the server, client (your laptop), or the network connecting the two.  
+
+First, check the client. Your laptop needs Java 6 to execute the .jnlp file. Check your Java version by running `java -version` at a terminal or command-line prompt. SMRT View has problems with Java 7 on Macs, but works with Java 6. If you are running Java 7, follow the instructions on [this Apple support page](http://support.apple.com/kb/HT5559) to revert Java Web Start back to 6.
+  
+Second, check the network. Open your .jnlp file in a text editor (e.g. Notepad) and make sure the hostname is defined correctly. The first line of your jnlp file should look like this:
+
+```
+<jnlp spec="6.0+" version="1.3.3" codebase="http://localhost:8080/smrtview/axis2-web/app/bin" > 
+```
+
+In the above example, the hostname is incorrectly set to localhost, which means that only the server itself can open the jnlp file. For the client to open the jnlp file and run SMRT View, the codebase must use an externally facing hostname or IP address. You can reset the hostname by rerunning `$SEYMOUR_HOME/etc/scripts/postinstall/configure_smrtanalysis.sh` and typing it in when prompted.
+
+Finally, to troubleshoot server-side issues, start by looking for errors in the `$SEYMOUR_HOME/common/log/smrtview/` directory.  
+
+Go to the SMRT View homepage at `http://<hostname>:8080/smrtview/` and click the Web Services Validation link to check that all web services are on and all library dependencies are installed.
+
+
+### Issue: SMRT View is slow
+
+
+***
+#Staging, TBD
+### Issue: Details of Job page delays about 7 seconds before a user can click/see anything.
+### Issue: reference upgrades:
+1. An old reference entry whose headers contain only ``refNNNNNN|`` patterns, e.g.:
+```
+>ref000001|ref000001
+or
+>ref000001
+```
+
+### Issue: (only happens when users do not follow the manual) jobs get submitted as root
+```
+smrtanalysis at server$ qstat -u \*
+job-ID  prior   name       user         state submit/start at     queue                          slots ja-task-ID
+-----------------------------------------------------------------------------------------------------------------        
+4298052 0.47166 S52657     root         qw    12/09/2012 22:12:57                                    1
+```
+You should go back to ``/etc/init.d/tomcatd`` and change ``sh $CATALINA_HOME/bin/startup.sh`` to ``su -c "sh $CATALINA_HOME/bin/startup.sh" smrtanalysis``. (Don't forget, as root, to run ``/etc/init.d/tomcatd stop``, ``qdel`` the stray jobs, and ``/etc/init.d/tomcatd start`` as well.)
\ No newline at end of file
diff --git a/docs/User-does-not-have-the-correct-permissions-to-upgrade.md b/docs/User-does-not-have-the-correct-permissions-to-upgrade.md
new file mode 100644
index 0000000..f7e9609
--- /dev/null
+++ b/docs/User-does-not-have-the-correct-permissions-to-upgrade.md
@@ -0,0 +1,30 @@
+SMRT Analysis upgrades are **very sensitive** to ownership and permission settings. During upgrade, you need to make sure that you are the **same** user as the user who installed the previous version of SMRT Analysis. You can check this by examining the ownership of the `$SEYMOUR_HOME` directory:
+
+```
+user1 at server$ ls -l /opt
+lrwxrwxr-x 1 smrtanalysis smrtanalysis 32 2012-12-17 08:02 smrtanalysis -> smrtanalysis-1.4.0
+drwxrwxr-x 8 smrtanalysis smrtanalysis 142 2012-05-17 21:38 smrtanalysis-1.3.1
+drwxrwxr-x 8 smrtanalysis smrtanalysis 142 2012-05-17 21:38 smrtanalysis-1.3.3
+drwxrwxr-x 8 smrtanalysis smrtanalysis 142 2012-05-17 21:38 smrtanalysis-1.4.0
+logout
+```
+```
+smrtanalysis at server$ /opt/smrtanalysis/etc/scripts/postinstall/upgrade_and_configure_smrtanalysis.sh
+```
+
+In this example, the `$SEYMOUR_HOME` directory is set to `/opt/smrtanalysis/` and this directory is actually a softlink to smrtanalysis-1.4.0, which is owned by the user **smrtanalysis** belonging to the group **smrtanalysis**. However, you are currently logged in as **user1**. To proceed, you **must** log out and log back in as the user **smrtanalysis**. This user should now run `upgrade_and_configure_smrtanalysis.sh`.
+
+
+If you have lost the credentials for the **smrtanalysis** user, you need a root user (or a user with sudo permissions) to change the ownership of `$SEYMOUR_HOME` to, say, **user1**, and then run `upgrade_and_configure_smrtanalysis.sh` as that user.
+
+```
+user1 at server$ sudo chown -R user1:group1 /opt/smrtanalysis/
+user1 at server$ sudo chown -R user1:group1 /opt/smrtanalysis-1.4.0/
+user1 at server$ ls -l /opt
+lrwxrwxr-x 1 user1 group1 32 2012-12-17 08:02 smrtanalysis -> smrtanalysis-1.4.0
+drwxrwxr-x 8 smrtanalysis smrtanalysis 142 2012-05-17 21:38 smrtanalysis-1.3.1
+drwxrwxr-x 8 smrtanalysis smrtanalysis 142 2012-05-17 21:38 smrtanalysis-1.3.3
+drwxrwxr-x 8 user1 group1 142 2012-05-17 21:38 smrtanalysis-1.4.0
+```
+
+If you **DO NOT** have sudo and you **ARE NOT** the same user who installed the previous version, **you CANNOT upgrade smrtanalysis**.  You **MUST** either log in as the same user, or find an administrator with root privileges to reset the ownership and permissions on the `$SEYMOUR_HOME` directory.
\ No newline at end of file
diff --git a/docs/Using-the-command-line.md b/docs/Using-the-command-line.md
new file mode 100644
index 0000000..788b69e
--- /dev/null
+++ b/docs/Using-the-command-line.md
@@ -0,0 +1,21 @@
+In a typical SMRT Analysis installation, SMRT Pipe is in your path after sourcing the ``setup.sh`` file. To do so, enter the following:
+```
+. /opt/smrtanalysis/etc/setup.sh
+```
+
+**Note**: Make sure to replace ``/opt/smrtanalysis`` with the path to your SMRT Analysis installation.
+
+To check that SMRT Pipe is available, enter the following:
+```
+smrtpipe.py --help
+```
+
+This displays a help message describing how to run smrtpipe.py and all of the available command-line options.
+
+You invoke SMRT Pipe with the following command:
+```
+smrtpipe.py [--help] [options] --params=settings.xml xml:inputFile
+```
+
+Logging messages are printed to stderr as well as to a log file (``log/smrtpipe.log``). It is standard practice to redirect these messages to a file in your shell, for example by appending
+``&> smrtpipe.err`` to the command line if running under bash.
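+
+For example (a sketch; ``settings.xml`` and ``input.xml`` stand in for your own parameter and input files):
+
+```
+smrtpipe.py --params=settings.xml xml:input.xml &> smrtpipe.err
+```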
diff --git a/docs/Using-the-techsupport-tools.md b/docs/Using-the-techsupport-tools.md
new file mode 100644
index 0000000..08ebc9f
--- /dev/null
+++ b/docs/Using-the-techsupport-tools.md
@@ -0,0 +1,20 @@
+The techsupport tools gather information about your installation or collect job data, and package it up as a single compressed file.  Normally, you will be asked to execute a specific command and send the output.  All of the tools are accessible through one script (relative to the root of the installation): `admin/bin/techsupport`.
+
+To see what options are available, run:
+
+```
+SMRT_ROOT=<your smrtanalysis installation root dir>
+$SMRT_ROOT/admin/bin/techsupport --help
+```
+
+##Specific Commands##
+
+The common commands are described below.  Unless otherwise noted, these can be run at any time as they are just collecting data and will not interfere with the running SMRT Analysis installation.
+
+###Show Tech Support###
+The most common command that you will be asked to run gathers configuration and log data as well as cluster environment data (if you are using one):
+
+```
+SMRT_ROOT=<your smrtanalysis installation root dir>
+$SMRT_ROOT/admin/bin/techsupport --action show_tech_support
+```
\ No newline at end of file
diff --git a/docs/Verify-the-installation.md b/docs/Verify-the-installation.md
new file mode 100644
index 0000000..6f6f821
--- /dev/null
+++ b/docs/Verify-the-installation.md
@@ -0,0 +1,23 @@
+Create a test job in SMRT Portal using canned installation data:
+
+Open your web browser and clear the browser cache:
+
+* **Google Chrome**: Choose **Tools > Clear browsing data**. Choose **the beginning of time** from the droplist, then check **Empty the cache** and click **Clear browsing data**.
+* **Internet Explorer**: Choose **Tools > Internet Options > General**, then under Browsing history, click **Delete**. Check **Temporary Internet files**, then click **Delete**.
+* **Firefox**: Choose **Tools > Options > Advanced**, then click the **Network** tab. In the Cached Web Content section, click **Clear Now**.
+
+1. Refresh the current page by pressing **F5**.
+2. Log into SMRT Portal by navigating to ``http://HOST:PORT/smrtportal``.
+3. Click **Design Job**.
+4. Click **Import and Manage**.
+5. Click **Import SMRT Cells**.
+6. Click **Add**.
+7. Enter ``/opt/smrtanalysis/common/test/primary``, then click **OK**.
+8. Select the new path and click **Scan**. You should get a dialog saying "One input was scanned." **Note**: If you are upgrading to v1.4.0, this cell will already have been imported into your system. In addition, the input was downsampled to speed the test and reduce the overall tarball size.
+9. Click **Design Job**.
+10. Click **Create New**.
+11. Enter a job name and comment.
+12. Select the protocol ``RS_Resequencing.1``.
+13. Under **SMRT Cells Available**, select a lambda cell and click the right-arrow button.
+14. Click **Save** on the bottom right, then click **Start**. The job should complete successfully.
+15. Click the **SMRT View** button. SMRT View should open with tracks displayed, and the reads displayed in the Details panel.
\ No newline at end of file
diff --git a/docs/What-computing-infrastructure-is-compatible-with-SMRT-Analysis?.md b/docs/What-computing-infrastructure-is-compatible-with-SMRT-Analysis?.md
new file mode 100644
index 0000000..b6488d0
--- /dev/null
+++ b/docs/What-computing-infrastructure-is-compatible-with-SMRT-Analysis?.md
@@ -0,0 +1,40 @@
+This article describes computing infrastructure considerations when installing SMRT Analysis. (See the article on [data storage considerations](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/What-data-storage-is-compatible-with-SMRT-Analysis%3F) for additional considerations.) SMRT Analysis can generally be deployed on four different computing infrastructure types.  We recommend and support deployment on a multi-node cluster computing environment, though it is possible, but unsupported, to deploy on the other infrastructure types described below.
+
+## 1. Multi-node clusters
+For production-level processing of multiple SMRT Cells and analyses per day, a multi-node cluster is necessary.  SMRT Analysis can be configured to use SGE, PBS, or LSF job-management systems. SGE is preferred and most extensively tested.  UGE is similar enough to SGE that it can be configured exactly the same way, though we do not do any testing on UGE.  If you are interested in large genome (> 200 Mb) assemblies, you **must** use SGE and provide a custom `.spec` file to configure the Celera® Assembler.
+
+**Example Applications:**
+ * All SMRT Analysis protocols for genomes up to 100 Mb including:
+    - RS_HGAP_Assembly - _De novo_ Assembly with PacBio data only
+    - RS_AHA_ - Hybrid Scaffolding and gap filling of an existing assembly using PacBio long reads
+    - RS_CeleraAssembler - Hybrid assembly using a short read fasta file with PacBio long reads.
+ * Multiple SMRT Analysis jobs running concurrently
+ * Experimental large genome assemblies >200Mb
+
+
+## 2. High-powered single-node computer
+It is possible to install SMRT Analysis on a single high-powered computer if it meets or exceeds the sum of the [minimum CPU and memory requirements for a multi-node cluster](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-v2.1#-minimum-hardware-requirements). There is a risk of multiple jobs failing at the same time when resources are exhausted, because jobs are not managed by any queuing system such as SGE, PBS, or LSF.  Therefore, we recommend running only a few jobs concurrently.
+
+**Example Applications:**
+  * All SMRT Analysis protocols for genomes up to 100 Mb including:
+    - RS_HGAP_Assembly - _De novo_ Assembly with PacBio data only
+    - RS_AHA_ - Hybrid Scaffolding and gap filling of an existing assembly using PacBio long reads
+    - RS_CeleraAssembler - Hybrid assembly using a short read fasta file with PacBio long reads.
+  * Only a few SMRT Analysis jobs running concurrently
+
+
+## 3. Commodity laptop or desktop computer
+It is possible, but **not** recommended, to install SMRT Analysis on a commodity laptop or desktop computer as these systems typically do not meet [minimum hardware requirements](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Software-Installation-v2.1#-minimum-hardware-requirements).  SMRT Analysis must be installed on Ubuntu or CentOS Linux operating systems and **cannot** be installed directly on Windows or Mac OS.  However, you may use VMware Player, VirtualBox, or a similar virtualization tool to run a supported Linux distribution in a virtual machine.
+
+**Example Applications:**
+  * Certain SMRT Analysis protocols for genomes up to 10 Mb including:
+    - RS_HGAP_Assembly - _De novo_ Assembly with PacBio data only (E. coli)
+  * Only one SMRT Analysis job running at a time
+
+## 4. Amazon Machine Instance
+If you do not have any computing resources available, consider running SMRT Analysis on the Amazon Elastic Compute Cloud (EC2) infrastructure.  We provide a publicly accessible and SMRT-Analysis-specific Amazon Machine Image (AMI) for every SMRT Analysis release.  Detailed install instructions are provided [here](https://github.com/PacificBiosciences/Bioinformatics-Training/wiki/%22Installing%22-SMRT-Portal-the-easy-way---Launching-A-SMRT-Portal-AMI).
+
+**Example Applications:**
+  * Certain SMRT Analysis protocols for genomes up to 10 Mb including:
+    - RS_HGAP_Assembly - _De novo_ Assembly with PacBio data only (E. coli)
+  * Only one SMRT Analysis job running at a time
\ No newline at end of file
diff --git a/docs/What-data-storage-is-compatible-with-SMRT-Analysis?.md b/docs/What-data-storage-is-compatible-with-SMRT-Analysis?.md
new file mode 100644
index 0000000..c7c37d7
--- /dev/null
+++ b/docs/What-data-storage-is-compatible-with-SMRT-Analysis?.md
@@ -0,0 +1,23 @@
+This article describes data storage considerations when installing SMRT Analysis, especially after purchasing a PacBio RS II instrument. See the article on [operating system and computational infrastructure considerations](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/What-computing-infrastructure-is-compatible-with-SMRT-Analysis%3F) for additional considerations.
+
+
+## Disk usage
+We recommend 10 TB of storage for all PacBio-related analyses.  Each SMRT Cell generates roughly 5-8 GB of data, and the PacBio RS II can sequence roughly 8 SMRT Cells per day.  This means that the disk space occupied by raw data alone can grow to ~7.3 TB if continuously sequencing for 6 months (8 SMRT Cells/day × ~182 days × ~5 GB ≈ 7.3 TB).  Each SMRT Portal job may also add ~1 GB to disk space, and one SMRT Cell can be analyzed multiple times.  Looking forward, we do anticipate increases in instrument throughput, which will further increase storage requirements.
+
+##File system considerations
+The **only** supported file system is NFS, and we have only tested SMRT Analysis on NFS.  There are several known bugs associated with alternative or distributed file systems such as GlusterFS.  One workaround for these problems is to install the software in a local directory first, and then manually move the smrtanalysis install directory to a mounted directory on the file system.
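+
+A sketch of that workaround (the paths and installer filename are illustrative, not prescriptive):
+
+```
+bash smrtanalysis-<version>.run --rootdir /local/smrtanalysis
+mv /local/smrtanalysis /path/to/mounted/smrtanalysis
+```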
+
+##Directory structure considerations
+By default, all PacBio data occupies only four directories on your file system:
+
+**1. Install directory** The installation script runs only once on the head-node.  The script places all executables and files under an installation directory that must be cross-mounted on all compute nodes.  We recommend that you install SMRT Analysis under `/opt`.
+
+**2. TMP directory** SMRT Analysis uses a temporary directory that is local to all nodes and not cross-mounted, which allows for fast I/O operations.  We recommend that you create a local directory on each compute node named `/tmp/smrtanalysis`.
+
+**3. Userdata directory** SMRT Analysis stores all analysis data under a userdata directory that expands as more jobs are executed.  We recommend that you create a cross-mounted directory at `/data/pacbio/smrtanalysis_userdata`.
+
+**4. PacBio raw data directory** SMRT Analysis can import raw data from the PacBio RS II from anywhere on the file system; however, only one directory can be specified in RS Remote when setting up a run on the PacBio RS II.  We recommend designating a cross-mounted directory at `/data/pacbio/rawdata`.
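+
+A minimal sketch of creating the recommended directories (the paths are the recommendations above; adjust to your site):
+
+```
+sudo mkdir -p /opt/smrtanalysis
+sudo mkdir -p /data/pacbio/smrtanalysis_userdata /data/pacbio/rawdata
+mkdir -p /tmp/smrtanalysis    # repeat locally on every compute node
+```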
+
+
+##Network Connectivity considerations
+See the network diagram provided in the "computer requirements" section of the PacBio RS II Site preparation document.  The PacBio RS II requires four IP addresses and data can be transferred to the SMRT Analysis Server via three methods.  Please open the port required for the data transfer method of your choice.   
\ No newline at end of file
diff --git a/docs/When-using-the-sync-option-in-SGE,-you-may-see-errors.md b/docs/When-using-the-sync-option-in-SGE,-you-may-see-errors.md
new file mode 100644
index 0000000..ddd587f
--- /dev/null
+++ b/docs/When-using-the-sync-option-in-SGE,-you-may-see-errors.md
@@ -0,0 +1,27 @@
+If you see one of the following errors:
+
+```
+[ERROR] 2013-03-06 15:27:40,134 [pbpy.smrtpipe.engine.SmrtPipeTasks run 655] > Unable to initialize environment because of error: cannot register event client. Only 99 event clients are allowed in the system
+```
+```
+[ERROR] 2013-05-02 10:22:07,119 [pbpy.smrtpipe.engine.SmrtPipeTasks run 655] > Unable to initialize environment because of error: range_list containes no elements
+```
+
+The errors occur when the number of dynamic event client processes exceeds the limit defined in your job scheduler configuration.  (Each ``qsub -sync y`` is a dynamic event client.)
+
+The solution is to increase the value assigned to ``MAX_DYN_EC`` (the maximum number of dynamic event clients) in ``qmaster_params``. The default is 99 on some systems, but more recent versions of SGE have increased this to 500.
+
+To get current ``MAX_DYN_EC`` settings:
+```
+$ qconf -sconf | grep MAX_DYN_EC
+qmaster_params               MAX_DYN_EC=500
+```
+(This value may not be set on some systems.)
+
+
+To set the ``MAX_DYN_EC`` value, for example, to 1000:
+```
+$ qconf -mconf qmaster_params MAX_DYN_EC=1000
+```
+
+If this does **not** fix the issue, use the ``qsw.py`` script instead.
\ No newline at end of file
diff --git a/docs/You-can-start-SMRT-Portal,-but-cannot-log-in.md b/docs/You-can-start-SMRT-Portal,-but-cannot-log-in.md
new file mode 100644
index 0000000..78ac074
--- /dev/null
+++ b/docs/You-can-start-SMRT-Portal,-but-cannot-log-in.md
@@ -0,0 +1,12 @@
+* If you are using an older version of Internet Explorer (or Internet Explorer 8 in compatibility mode), try another browser or disable compatibility settings. To do so:
+ 
+   * Choose **Tools > Compatibility View Settings**, then uncheck **Display intranet sites in Compatibility View** and **Display all websites in Compatibility View**.
+
+
+***
+
+* Check that the database is configured correctly and that `persistence.xml` points to the proper location. This is normally localhost, or the IP address if the mysql bind-address is not 127.0.0.1.
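+
+   A quick way to inspect the configured database location (a sketch; the exact path of ``persistence.xml`` within the installation varies by version):
+
+   ```
+   find $SEYMOUR_HOME -name persistence.xml | xargs grep -i jdbc
+   ```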
+
+***
+
+* Stop and restart the services.
\ No newline at end of file
diff --git a/docs/_Footer.md b/docs/_Footer.md
new file mode 100644
index 0000000..daca013
--- /dev/null
+++ b/docs/_Footer.md
@@ -0,0 +1,3 @@
+Visit the [PacBio Developer's Network Website](http://pacbiodevnet.com) for the most up-to-date links to downloads, documentation and more.
+
+[Terms of Use](http://pacbiodevnet.com/Terms_of_Use.html) | [Trademarks](http://pacb.com/terms-of-use/index.html#trademarks) | [Contact Us](mailto:devnet at pacificbiosciences.com)
\ No newline at end of file
diff --git a/docs/_Header.md b/docs/_Header.md
new file mode 100644
index 0000000..358da46
--- /dev/null
+++ b/docs/_Header.md
@@ -0,0 +1 @@
+==HEADER==
\ No newline at end of file
diff --git a/docs/_preview.md b/docs/_preview.md
new file mode 100644
index 0000000..0625711
--- /dev/null
+++ b/docs/_preview.md
@@ -0,0 +1,381 @@
+* [What's New?](#Whats_New)
+  * [Release Notes](#Release_Notes)
+* [Quick Start Guide](#Quick_Start_Guide)
+  * [Installation](#Quick_Install)
+  * [Upgrade](#Quick_Upgrade)
+* [Detailed Installation Guide](#Installation_Detail)
+  * [System Requirements](#SysReq)
+    * [Hardware](#HardReq)
+  * [Operating System](#OS)
+  * [Running SMRT® Analysis in the Cloud](#Cloud)
+  * [Software Requirement](#SoftReq)
+  * [Minimum Hardware Requirements](#HardReq)
+* [Installation and Upgrade Summary](#Summary)
+  * [Step 1: Decide on a user and an installation directory](#Bookmark_DecideInstallDir)
+  * [Step 2: Create and set the installation directory $SMRT_ROOT](#Bookmark_CreateInstallDir)
+* [Installation and Upgrade Detail](#Details)
+  * [Step 3 Option 1: Run the install script](#Bookmark_InstallDetail)
+  * [Step 3 Option 2: Run the upgrade script](#Bookmark_UpgradeDetail)
+  * [Step 4: Set up distributed computing](#Bookmark_DistributedDetail)
+  * [Step 5: Set up SMRT Portal](#Bookmark_SMRTPortalDetail)
+  * [Step 6: Verify install or upgrade](#Bookmark_VerifyDetail)
+* [Optional Configurations](#Optional)
+  * [Set up userdata directory](#Bookmark_UserdataDetail)
+* [Bundled with SMRT® Analysis](#Bundled)
+* [Changes from SMRT® Analysis v2.1.1](#Changes)
+
+
+#<a name="Whats_New"></a> What's New?
+
+Beginning with SMRT Analysis v2.1.0, a new directory structure is used. Instead of ``$SEYMOUR_HOME``, we now use ``$SMRT_ROOT``, and you will **not** need to specify it explicitly in any setup.sh files or elsewhere, such as in user ``.bash*`` files.  We still recommend that ``$SMRT_ROOT`` be set to `/opt/smrtanalysis/`, but the underlying folders will be as follows (arrows indicate softlinks):
+
+```
+/opt/smrtanalysis/
+              admin/
+                   bin/
+                   log/
+              current --> softlink to ../install/smrtanalysis-2.2.0
+              install/
+                 smrtanalysis-<other versions>/
+                 smrtanalysis-2.2.0/
+              userdata/  --> softlink to offline storage location
+              
+```
+
+##<a name="Release_Notes"></a> Release Notes
+See the [Release Notes](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Release-Notes-v2.2.0) for a detailed list of changes.
+
+
+#<a name="Quick_Start_Guide"></a> Quick Start Guide
+
+##<a name="Quick_Install"></a> Installation
+
+  ```
+  SMRT_ROOT=/opt/smrtanalysis
+  sudo mkdir $SMRT_ROOT
+  sudo chown smrtanalysis:smrtanalysis $SMRT_ROOT
+
+  bash smrtanalysis-2.2.0.Current_Ubuntu-8.04.run --rootdir $SMRT_ROOT
+  $SMRT_ROOT/admin/bin/smrtportald-initd start
+  $SMRT_ROOT/admin/bin/kodosd start
+  ```
+
+
+##<a name="Quick_Upgrade"></a> Upgrade
+
+  ```
+  SMRT_ROOT=/opt/smrtanalysis
+  bash $SMRT_ROOT/admin/bin/smrtupdater /path/to/smrtanalysis-2.2.0.xxxxxx.run
+  $SMRT_ROOT/admin/bin/smrtportald-initd start
+  $SMRT_ROOT/admin/bin/kodosd start
+  ```
+
+#<a name="SysReq"></a>System Requirements#
+
+##<a name="OS"></a>Operating System##
+
+* SMRT® Analysis is supported on:
+    * English-language **Ubuntu: versions 12.04, 10.04, 8.04** 
+    * English-language **RedHat/CentOS: versions 6.3, 5.6, 5.3**
+
+* SMRT Analysis **cannot** be installed on Mac OS or Windows.
+
+## <a name="Cloud"></a> Running SMRT® Analysis in the Cloud ##
+
+Users wishing to run SMRT Analysis in the cloud can use an Amazon Machine Image (AMI) with SMRT Analysis pre-installed. For details, click [here](https://github.com/PacificBiosciences/Bioinformatics-Training/wiki/%22Installing%22-SMRT-Portal-the-easy-way---Launching-A-SMRT-Portal-AMI).
+
+## <a name="SoftReq"></a>Software Prerequisites##
+
+* Bash
+* Linux Standard Base (LSB)
+
+These packages are typically installed by default on most systems. If necessary, use the following commands to ensure that they are installed.
+
+**CentOS:**
+
+  * ```sudo yum groupinstall "Development Tools"```
+  * ```sudo yum install redhat-lsb```
+
+**Ubuntu:**
+
+  * ```sudo apt-get install build-essential lsb-release```
+
+
+###Client Web Browser###
+We recommend using the Google Chrome® 21 web browser to run SMRT Portal for consistent functionality. We also support Apple’s Safari® and Internet Explorer® web browsers; however, some features may not be optimized on these browsers.
+
+###Client Java###
+To run SMRT View, we recommend using Java 7 for Windows (Java 7 64 bit for users with 64 bit OS), and Java 6 for the Mac OS.
+
+##<a name="HardReq"></a>Hardware Recommendations##
+
+### 1 head node:###
+* Minimum 8 cores, with 2 GB RAM per core.
+* Minimum 250 GB of disk space.
+
+### Compute nodes:###
+* Minimum 3 compute nodes. We recommend 5 nodes for high utilization focused on _de novo_ assemblies.
+* Minimum 8 cores per node, with 2 GB RAM per core. We recommend 16 cores per node with 4 GB RAM per core.
+* Minimum 250 GB of disk space per node.
+* To perform _de novo_ assembly of large genomes using the Celera® Assembler, **one** of the nodes will need to have considerably more memory. See the Celera® Assembler home page for recommendations: http://wgs-assembler.sourceforge.net/.
+
+**Notes:** 
+* It is possible, but **not** advisable, to install SMRT Analysis on a single-node machine (see the distributed computing section). You will likely be able to submit jobs one SMRT® Cell at a time, but the time to completion may be long as the software may not have sufficient resources to complete the job.  
+
+* The ``RS_ReadsOfInsert`` protocol can be **compute-intensive**. If you plan to run it on every SMRT® Cell, we recommend adding 3 additional 8-core compute nodes with at least 4 GB of RAM per core.
+
+### Data storage: ###
+* 10 TB (Actual storage depends on usage.)
+
+### Network File System Requirement ###
+Please refer to the **IT Site Prep** guide provided with your instrument purchase for more details.
+
+1. The **SMRT Analysis software directory** (we recommend `$SMRT_ROOT=/opt/smrtanalysis`) **must** have the same path and be **readable** by the smrtanalysis user across **all** compute nodes via **NFS**.  
+
+2. The **SMRT Cell input directory**  (we recommend `$SMRT_ROOT/pacbio_instrument_data/`) **must** have the same path and be **readable** by the smrtanalysis user across **all** compute nodes via **NFS**.  This directory contains data from the instrument and can either be a directory configured by RS Remote during instrument installation, or a directory you created when you received data from a core lab. 
+
+3. The **SMRT Analysis output directory** (we recommend `$SMRT_ROOT/userdata`) **must** have the same path and be **writable** by the smrtanalysis user across **all** compute nodes via **NFS**. This directory is usually soft-linked to a large storage volume.
+
+4. The **SMRT Analysis temporary directory** is used for fast I/O operations during runtime.  The software accesses this directory from `$SMRT_ROOT/tmpdir`; you can softlink this directory manually or with the install script.  It should be a local directory (**not** NFS-mounted), be writable by the `smrtanalysis` user, and exist as an independent directory on every compute node.
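+
+One way to spot-check these requirements (a sketch; ``node01`` is a placeholder compute node, and passwordless ssh is assumed):
+
+```
+# The path must resolve identically on the head node and every compute node
+ls -ld /opt/smrtanalysis
+ssh node01 ls -ld /opt/smrtanalysis
+```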
+
+
+# <a name="Summary"></a> Installation and Upgrade Summary #
+
+The following instructions apply to **fresh v2.2.0 installations** and **v2.1.1 to v2.2.0 upgrades only**.
+
+##Upgrading from SMRT Analysis v2.0.1 or earlier##
+SMRT Analysis does **not** support skip-level version upgrades. The recommended upgrade path is to incrementally upgrade to each version, that is:
+
+``1.4 -> 2.0 -> 2.0.1 -> 2.1.1 -> 2.2.0``
+
+Alternately, you may opt for a fresh installation of SMRT Analysis v2.2.0 and manually import old SMRT® Cells and jobs to preserve analysis history.
+
+See [[Official Documentation]] for upgrading from earlier versions of SMRT Analysis:
+* [[SMRT Analysis Software Installation v2.0.1]]
+* [[SMRT Analysis Software Installation v2.1]]
+
+<a name="Bookmark_DecideInstallDir"></a> 
+### Step 1. Decide on a user and an installation directory for the SMRT Analysis software suite.
+
+The SMRT Analysis install directory, `$SMRT_ROOT`, can be **any** directory as long as the smrtanalysis user has read, write, and execute permissions in that directory.  Historically we have referred to `$SMRT_ROOT` as `/opt/smrtanalysis`.  
+
+We recommend that a system administrator create a special user called `smrtanalysis`, who belongs to the `smrtanalysis` group. This user will own all SMRT Analysis files, daemon processes, and smrtpipe jobs.   
+
+
+<a name="Bookmark_CreateInstallDir"></a> 
+### Step 2. Create and set the installation directory $SMRT_ROOT.
+If the parent directory of `$SMRT_ROOT` is not writable by the SMRT Analysis user, the `$SMRT_ROOT` directory must be pre-created with read/write/execute permissions for the SMRT Analysis user.
+
+* **Option 1:** The SMRT Analysis user has sudo privileges.
+For example, if `$SMRT_ROOT` is `/opt/smrtanalysis`, `/opt` is only writable by root, and the SMRT Analysis user is `smrtanalysis` belonging to the group `smrtanalysis`.  
+
+  ```
+  SMRT_ROOT=/opt/smrtanalysis
+  sudo mkdir $SMRT_ROOT
+  sudo chown smrtanalysis:smrtanalysis $SMRT_ROOT
+  ```
+
+* **Option 2:** The SMRT Analysis user does **not** have sudo privileges.
+For example, if you do not have sudo privileges, you can install SMRT Analysis as yourself in your home directory.
+
+  ```
+  SMRT_ROOT=/home/<your_username>/smrtanalysis
+  mkdir $SMRT_ROOT
+  ```
+
+### Step 3. Run the installer or upgrade script and start services.  
+
+  * **Option 1**: If you are performing a **fresh** installation, run the installation script and start tomcat and kodos.  [See below for more details.](#Bookmark_InstallDetail)
+  ```
+  bash smrtanalysis-2.2.0.Current_Ubuntu-8.04.run --rootdir $SMRT_ROOT
+  $SMRT_ROOT/admin/bin/smrtportald-initd start
+  $SMRT_ROOT/admin/bin/kodosd start
+  ```
+  
+  If you accidentally canceled out of the install/upgrade prompt and want to rerun the script without extracting again, you can rerun using the `--no-extract` option:
+
+  `bash smrtanalysis-2.2.0.Current_Ubuntu-8.04.run --rootdir $SMRT_ROOT --no-extract`
+
+  If you are installing after a patch has been released for the software, you can install both the software and the patch in one command using the -p option:
+
+  ```
+  bash /path/to/smrtanalysis-2.2.0.131971.run -p /path/to/patch/smrtanalysis-2.2.0.131971-patch-0.2.run --rootdir $SMRT_ROOT
+  ```
+
+  * **Option 2**: If you are performing an **upgrade**, run the ``smrtupdater`` script from the old v2.1.1 smrtanalysis directory, passing the path to the new v2.2.0 installer as an argument. [See below for more details.](#Bookmark_UpgradeDetail)
+
+  ```
+  SMRT_ROOT=/opt/smrtanalysis
+  bash $SMRT_ROOT/admin/bin/smrtupdater /path/to/smrtanalysis-2.2.0.xxxxxx.run
+  $SMRT_ROOT/admin/bin/smrtportald-initd start
+  $SMRT_ROOT/admin/bin/kodosd start
+  ```
+
+  If you are upgrading after a patch has been released for the software, you can upgrade both the software and the patch in one command using the ``-p`` option:
+  ```
+  bash $SMRT_ROOT/admin/bin/smrtupdater -p /path/to/patch/smrtanalysis-2.2.0.131971-patch-0.2.run /path/to/smrtanalysis-2.2.0.131971.run
+  ```
+
+### Step 4. **New Installations only:** Set up distributed computing 
+
+Decide on a job management system (JMS). [See below for more details.](#Bookmark_DistributedDetail)
+
+### Step 5. **New Installations only**: Set up SMRT Portal
+
+Register the administrative user and set up the SMRT Portal GUI. [See below for more details.](#Bookmark_SMRTPortalDetail)
+
+### Step 6. Verify the installation. 
+
+Run a sample SMRT Portal job to verify functionality. [See below for more details.](#Bookmark_VerifyDetail)
+
+
+# <a name="Details"></a> Installation and Upgrade Details
+### <a name="Bookmark_InstallDetail"></a> Step 3, Option 1 Details: Run the Installation script and turn on services
+
+The installation script attempts to discover inputs when possible, and performs the following: 
+
+* Looks for valid hostnames (DNS) and IP Addresses. You must choose one from the list.   
+* Assumes that the user running the script is the designated smrtanalysis user.
+* Installs the Tomcat web server. You will be prompted for:
+  * The **port number** that the tomcat service will run under. (Default: ``8080``)
+  * The **port number** that the tomcat service will use to shutdown. (Default: ``8005``)
+* Creates the smrtportal database in mysql. You will be prompted for:
+  * The mysql administrative user name. (Default: ``root``)
+  * The mysql password. (Default:  no password)
+  * The mysql port number. (Default: ``3306``)
+* Attempts to configure the Job Management System (``SGE``, ``LSF``, ``PBS``, or ``NONE``)
+  * The ``$SGE_ROOT`` directory
+  * The ``$SGE_CELL`` directory name
+  * The ``$SGE_BINDIR`` directory that contains all the q-commands
+  * The queue name
+  * The parallel environment
+* Creates and configures special directories:
+  * The ``$TMP`` directory
+  * The ``$USERDATA`` directory 
+
+
+### <a name="Bookmark_UpgradeDetail"></a> Step 3, Option 2 Details: Run the Upgrade Script
+
+The upgrade script performs the following:
+* Checks that the same user is running the upgrade script
+* Checks for running services
+* Checks that the OS and hardware requirements are still met
+* Transfers computing configurations from a previous installation
+* Upgrades any references as necessary
+* Preserves SMRT Cells, jobs, and users from a previous installation by updating smrtportal database schema changes as necessary
+* Preserves special directories settings
+  * Updates the `$SMRT_ROOT/tmpdir` softlink 
+  * Updates the `$SMRT_ROOT/userdata` softlink
+* The upgrade script does **not** port over protocols that were defined in previous versions of SMRT Analysis. This is because protocol files can vary a great deal between versions due to rapid code development and change. Please **recreate** any custom protocols you may have.
+
+
+### <a name="Bookmark_DistributedDetail"></a> Step 4 Details: Set up Distributed Computing
+
+Pacific Biosciences has explicitly validated Sun **Grid Engine (SGE)**, and provides job submission templates for **LSF** and **PBS**. You only need to configure the software **once** during initial install. 
+
+#### Configuring Templates 
+
+Skip this section if you are using SGE.   Configuration files are automatically edited based on the questions you answered in the interactive prompts during installation.  
+
+If you are using a non-SGE job management system, you **must** create or edit the **Job Management Template files**, which provide a flexible format for specifying how SMRT Analysis communicates with the resident Job Management System (JMS). You **must** create or edit the following files:
+```
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/start.tmpl
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/interactive.tmpl
+/opt/smrtanalysis/analysis/etc/cluster/<JMS>/kill.tmpl
+```
+
+#### Specifying the PBS Job Management System
+
+PBS does **not** have a ``-sync`` option, and the ``interactive.tmpl`` file runs a script named ``qsw.py`` to simulate the functionality. You must edit **both** ``interactive.tmpl`` and ``start.tmpl``.
+
+1. Change the queue name to one that exists on your system. (This is the ``-q`` option.)
+2. Change the parallel environment to one that exists on your system. (This is the ``-pe`` option.)
+3. Make sure that ``interactive.tmpl`` calls the ``-PBS`` option.
+
+#### Specifying the LSF Job Management System
+
+The equivalent SGE `-sync` option in LSF is `-K` and this should be provided with the `bsub` command in the `interactive.tmpl` file.
+
+1. Change the queue name to one that exists on your system. (This is the `-q` option.)
+2. Change the parallel environment to one that exists on your system. (This is the `-pe` option.)
+3. Make sure that ``interactive.tmpl`` calls the `-K` option.
+
+
+#### Specifying other Job Management Systems
+
+1. Create a new directory `smrtanalysis/current/analysis/etc/cluster/NEW_JMS`.
+2. Edit `smrtanalysis/current/analysis/etc/smrtpipe.rc`, and change the `CLUSTER_MANAGER` variable to `NEW_JMS`.
+3. Once you have a new JMS directory specified, create and edit the `interactive.tmpl`, `start.tmpl`, and `kill.tmpl` files for your particular setup.
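+
+For example (a sketch; ``NEW_JMS`` is a placeholder, and starting from a copy of the provided PBS templates is an assumption, not a requirement):
+
+```
+cd smrtanalysis/current/analysis/etc/cluster
+mkdir NEW_JMS
+cp PBS/start.tmpl PBS/interactive.tmpl PBS/kill.tmpl NEW_JMS/
+```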
+
+### <a name="Bookmark_SMRTPortalDetail"></a> Step 5 Details: (New Installations Only) Set Up SMRT® Portal
+
+1. Use your web browser to start SMRT Portal: `http://hostname:port/smrtportal`
+2. Click **Register** at the top right.
+3. Create a user named ``administrator`` (all lowercase). This user is special, as it is the only user that does not require activation on creation.
+4. Enter the user name ``administrator``.
+5. Enter an email address. All administrative emails, such as new user registrations, will be sent to this address.
+6. Enter the password and confirm the password.
+7. Select **Click Here** to access **Change Settings**.
+8. To set up the mail server, enter the SMTP server information and click **Apply**. For email authentication, enter a user name and password. You can also enable Transport Layer Security.
+9. To enable automated submission from a PacBio® RS instrument, click **Add** under the Instrument Web
+Services URI field. Then, enter the following into the dialog box and click **OK**:
+   ```
+   http://INSTRUMENT_PAP01:8081
+   ```
+   * ``INSTRUMENT_PAP01`` is the IP address or name (pap01) of the instrument.
+   * ``8081`` is the port for the instrument web service.
+
+10. Select the new URI, then click **Test** to check if SMRT Portal can communicate with the instrument service.
+11. (Optional) You can delete the pre-existing instrument entry by clicking **Remove**.
+
+### <a name="Bookmark_VerifyDetail"></a> Step 6: Verify the installation
+
+Create a test job in SMRT Portal using the provided lambda sequence data. This is data from a single SMRT cell that has been down-sampled to reduce overall tarball size. If you are upgrading, this cell will already have been imported into your system, and you can skip to step 10 below.
+
+Open your web browser and clear the browser cache:
+
+* **Google Chrome**: Choose **Tools > Clear browsing data**. Choose **the beginning of time** from the droplist, then check **Empty the cache** and click **Clear browsing data**.
+* **Internet Explorer**: Choose **Tools > Internet Options > General**, then under Browsing history, click **Delete**. Check **Temporary Internet files**, then click **Delete**.
+* **Firefox**: Choose **Tools > Options > Advanced**, then click the **Network** tab. In the Cached Web Content section, click **Clear Now**.
+
+1. Refresh the current page by pressing **F5**.
+2. Log into SMRT Portal by navigating to ``http://HOST:PORT/smrtportal``.
+3. Click **Design Job**.
+4. Click **Import and Manage**.
+5. Click **Import SMRT Cells**.
+6. Click **Add**.
+7. Enter ``/opt/smrtanalysis/common/test/primary``, then click **OK**.
+8. Select the new path and click **Scan**. You should get a dialog saying "One input was scanned."
+9. Click **Design Job**.
+10. Click **Create New**.
+11. Enter a job name and comment.
+12. Select the protocol ``RS_Resequencing.1``.
+13. Under **SMRT Cells Available**, select a lambda cell and click the right-arrow button.
+14. Click **Save** on the bottom right, then click **Start**. The job should complete successfully.
+15. Click the **SMRT View** button. SMRT View should open with tracks displayed, and the reads displayed in the Details panel.
+
+## <a name="Optional"></a> Optional Configurations ##
+### Set up Userdata folders ###
+
+The userdata folder, `$SMRT_ROOT/userdata`, expands rapidly because it contains all jobs, references, and drop boxes.  We recommend softlinking this folder to an **external** directory with more storage: 
+
+
+```
+mv /opt/smrtanalysis/userdata /path/to/NFS/mounted/offline_storage
+ln -s /path/to/NFS/mounted/offline_storage /opt/smrtanalysis/userdata
+```
+
+## <a name="Bundled"></a> Bundled with SMRT® Analysis ##
+The following are bundled within the application and do **not** depend on what is already deployed on the system.
+* Java® 1.7
+* Python® 2.7
+* Tomcat™ 7.0.23
+
+## <a name="Changes"></a> Changes from SMRT® Analysis v2.1.1 ##
+See [SMRT Analysis Release Notes v2.2.0](https://github.com/PacificBiosciences/SMRT-Analysis/wiki/SMRT-Analysis-Release-Notes-v2.2.0) for changes and known issues. The latest version of this document resides on the Pacific Biosciences DevNet site; you can also link to it from the main SMRT Analysis web page.
+
+
+***
+For Research Use Only. Not for use in diagnostic procedures. © Copyright 2010 - 2014, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third party products. Please refer to the applicable Pacific Bios [...]
+**P/N 100-321-100**
\ No newline at end of file
diff --git a/docs/qsub:-command-not-found.md b/docs/qsub:-command-not-found.md
new file mode 100644
index 0000000..0ef14c9
--- /dev/null
+++ b/docs/qsub:-command-not-found.md
@@ -0,0 +1,57 @@
+SMRT Analysis assumes that SGE is configured for the `smrtanalysis` user such that the environment variables `SGE_ROOT` and `SGE_CELL` are declared and the q-commands are in the user's path.  When this is not the case, you may see the following error messages in the file `<job_id>/log/smrtpipe.log`:
+
+```
+/bin/sh: qsub: command not found
+/bin/sh: qstat: command not found
+/bin/sh: qconf: command not found
+```
+
+Or you may see the following error messages in the `<job_id>/log/P_Fetch/overviewRpt.log` file:
+
+```
+# Writing stdout and stderr from Popen:
+Unable to initialize environment because of error: Please set the environment variable SGE_ROOT.
+Exiting.
+
+```
+
+
+These messages indicate that the SMRT Analysis user does **not** have the appropriate environment defined for the job management scheduler installed on the cluster (in this case, SGE). More specifically, the user cannot find the `qsub`, `qstat`, or `qconf` commands in `$PATH`.
+
+To fix this problem, do the following:
+
+
+1.  Download the v2.1.1 patch:
+http://files.pacb.com/software/smrtanalysis/2.1.1/smrtanalysis-2.1.1-patch-0.1.run
+
+2.  Run the smrtupdater script and point it to the .run file:
+
+`bash /path/to/smrtanalysis/admin/bin/smrtupdater   smrtanalysis-2.1.1-patch-0.1.run`
+
+
+## OR
+
+1.  To fix this manually, find out where the q-commands are:
+```
+$ which qsub
+<path_to_qcommands>/qsub
+```
+
+2.  Create a jms.setup.sh script in `$SMRT_ROOT/current/analysis/etc/jms.setup.sh` and put the following lines in the script:
+```
+export SGE_ROOT=<root_name>
+export SGE_CELL=<cell_name>
+export PATH=$PATH:<path_to_qcommands>
+```
+Where `<root_name>` is usually `/usr/share/gridengine` on CentOS, or `/var/lib/gridengine` on Ubuntu, and `<cell_name>` is usually `default` if you installed SGE using yum or apt (recommended).  `<path_to_qcommands>` is the output of step 1.
+
+
+3.  Add jms.setup.sh to the main setup.sh file in `$SMRT_ROOT/current/etc/setup.sh`:
+```
+CONFIG_FILES=(
+  ${SEYMOUR_HOME}/analysis/etc/setup.sh
+  ${SEYMOUR_HOME}/common/etc/setup.sh
+  ${SEYMOUR_HOME}/analysis/etc/jms.setup.sh   # <-- add this line to the CONFIG_FILES definition
+)
+```
+
diff --git "a/docs/raise-CmpH5Error,-\"Unable-to-parse-SF-readId:-%s\"-%-readId.md" "b/docs/raise-CmpH5Error,-\"Unable-to-parse-SF-readId:-%s\"-%-readId.md"
new file mode 100644
index 0000000..cc627d7
--- /dev/null
+++ "b/docs/raise-CmpH5Error,-\"Unable-to-parse-SF-readId:-%s\"-%-readId.md"
@@ -0,0 +1,3 @@
+```
+raise CmpH5Error, "Unable to parse SF readId: %s" % readId
+```
\ No newline at end of file

-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/debian-med/smrtanalysis.git


