[med-svn] [r-cran-solrium] 04/06: New upstream version 0.4.0

Andreas Tille tille at debian.org
Mon Oct 2 07:07:41 UTC 2017


This is an automated email from the git hooks/post-receive script.

tille pushed a commit to branch master
in repository r-cran-solrium.

commit 1a3674ad7dd326bfa115081ce71ebb5707cf6ca6
Author: Andreas Tille <tille at debian.org>
Date:   Mon Oct 2 09:05:04 2017 +0200

    New upstream version 0.4.0
---
 DESCRIPTION                              |  24 +
 LICENSE                                  |   2 +
 MD5                                      | 175 +++++++
 NAMESPACE                                | 107 +++++
 NEWS.md                                  |  21 +
 R/add.R                                  |  98 ++++
 R/classes.r                              |  15 +
 R/collection_addreplica.R                |  60 +++
 R/collection_addreplicaprop.R            |  52 +++
 R/collection_addrole.R                   |  35 ++
 R/collection_balanceshardunique.R        |  46 ++
 R/collection_clusterprop.R               |  40 ++
 R/collection_clusterstatus.R             |  42 ++
 R/collection_create.R                    |  98 ++++
 R/collection_createalias.R               |  29 ++
 R/collection_createshard.R               |  30 ++
 R/collection_delete.R                    |  22 +
 R/collection_deletealias.R               |  25 +
 R/collection_deletereplica.R             |  55 +++
 R/collection_deletereplicaprop.R         |  51 +++
 R/collection_deleteshard.R               |  38 ++
 R/collection_exists.R                    |  30 ++
 R/collection_list.R                      |  21 +
 R/collection_migrate.R                   |  48 ++
 R/collection_overseerstatus.R            |  33 ++
 R/collection_rebalanceleaders.R          |  46 ++
 R/collection_reload.R                    |  22 +
 R/collection_removerole.R                |  30 ++
 R/collection_requeststatus.R             |  34 ++
 R/collection_splitshard.R                |  37 ++
 R/collections.R                          |  33 ++
 R/commit.R                               |  41 ++
 R/config_get.R                           |  80 ++++
 R/config_overlay.R                       |  30 ++
 R/config_params.R                        |  68 +++
 R/config_set.R                           |  44 ++
 R/connect.R                              | 164 +++++++
 R/core_create.R                          |  59 +++
 R/core_exists.R                          |  30 ++
 R/core_mergeindexes.R                    |  46 ++
 R/core_reload.R                          |  31 ++
 R/core_rename.R                          |  36 ++
 R/core_requeststatus.R                   |  25 +
 R/core_split.R                           | 120 +++++
 R/core_status.R                          |  39 ++
 R/core_swap.R                            |  54 +++
 R/core_unload.R                          |  44 ++
 R/delete.R                               |  59 +++
 R/optimize.R                             |  42 ++
 R/parsers.R                              | 686 ++++++++++++++++++++++++++++
 R/ping.R                                 |  53 +++
 R/schema.R                               |  53 +++
 R/solr_all.r                             |  77 ++++
 R/solr_facet.r                           | 126 +++++
 R/solr_get.R                             |  42 ++
 R/solr_group.r                           | 107 +++++
 R/solr_highlight.r                       |  76 ++++
 R/solr_mlt.r                             |  62 +++
 R/solr_search.r                          | 151 ++++++
 R/solr_stats.r                           |  67 +++
 R/solrium-package.R                      |  69 +++
 R/update_csv.R                           |  45 ++
 R/update_json.R                          |  50 ++
 R/update_xml.R                           |  50 ++
 R/zzz.r                                  | 239 ++++++++++
 README.md                                | 485 ++++++++++++++++++++
 build/vignette.rds                       | Bin 0 -> 291 bytes
 debian/README.test                       |   8 -
 debian/changelog                         |   5 -
 debian/compat                            |   1 -
 debian/control                           |  32 --
 debian/copyright                         |  54 ---
 debian/docs                              |   3 -
 debian/rules                             |   5 -
 debian/source/format                     |   1 -
 debian/tests/control                     |   5 -
 debian/tests/run-unit-test               |  17 -
 debian/watch                             |   2 -
 inst/doc/cores_collections.Rmd           | 119 +++++
 inst/doc/cores_collections.html          | 310 +++++++++++++
 inst/doc/document_management.Rmd         | 318 +++++++++++++
 inst/doc/document_management.html        | 469 +++++++++++++++++++
 inst/doc/local_setup.Rmd                 |  79 ++++
 inst/doc/local_setup.html                | 286 ++++++++++++
 inst/doc/search.Rmd                      | 600 ++++++++++++++++++++++++
 inst/doc/search.html                     | 759 +++++++++++++++++++++++++++++++
 inst/examples/add_delete.json            |  19 +
 inst/examples/add_delete.xml             |  19 +
 inst/examples/books.csv                  |  11 +
 inst/examples/books.json                 |  51 +++
 inst/examples/books.xml                  |  50 ++
 inst/examples/books2.json                |  51 +++
 inst/examples/books2_delete.json         |   6 +
 inst/examples/books2_delete.xml          |   6 +
 inst/examples/books_delete.json          |   6 +
 inst/examples/books_delete.xml           |   6 +
 inst/examples/schema.xml                 | 534 ++++++++++++++++++++++
 inst/examples/solrconfig.xml             | 583 ++++++++++++++++++++++++
 inst/examples/updatecommands_add.json    |  16 +
 inst/examples/updatecommands_add.xml     |  13 +
 inst/examples/updatecommands_delete.json |   3 +
 inst/examples/updatecommands_delete.xml  |   1 +
 man/add.Rd                               |  88 ++++
 man/collapse_pivot_names.Rd              |  24 +
 man/collectargs.Rd                       |  15 +
 man/collection_addreplica.Rd             |  66 +++
 man/collection_addreplicaprop.Rd         |  56 +++
 man/collection_addrole.Rd                |  38 ++
 man/collection_balanceshardunique.Rd     |  49 ++
 man/collection_clusterprop.Rd            |  42 ++
 man/collection_clusterstatus.Rd          |  36 ++
 man/collection_create.Rd                 | 106 +++++
 man/collection_createalias.Rd            |  31 ++
 man/collection_createshard.Rd            |  35 ++
 man/collection_delete.Rd                 |  26 ++
 man/collection_deletealias.Rd            |  29 ++
 man/collection_deletereplica.Rd          |  59 +++
 man/collection_deletereplicaprop.Rd      |  55 +++
 man/collection_deleteshard.Rd            |  41 ++
 man/collection_exists.Rd                 |  39 ++
 man/collection_list.Rd                   |  24 +
 man/collection_migrate.Rd                |  54 +++
 man/collection_overseerstatus.Rd         |  34 ++
 man/collection_rebalanceleaders.Rd       |  49 ++
 man/collection_reload.Rd                 |  26 ++
 man/collection_removerole.Rd             |  33 ++
 man/collection_requeststatus.Rd          |  36 ++
 man/collection_splitshard.Rd             |  44 ++
 man/collections.Rd                       |  41 ++
 man/commit.Rd                            |  47 ++
 man/config_get.Rd                        |  67 +++
 man/config_overlay.Rd                    |  38 ++
 man/config_params.Rd                     |  61 +++
 man/config_set.Rd                        |  52 +++
 man/core_create.Rd                       |  66 +++
 man/core_exists.Rd                       |  39 ++
 man/core_mergeindexes.Rd                 |  48 ++
 man/core_reload.Rd                       |  33 ++
 man/core_rename.Rd                       |  40 ++
 man/core_requeststatus.Rd                |  28 ++
 man/core_split.Rd                        |  84 ++++
 man/core_status.Rd                       |  43 ++
 man/core_swap.Rd                         |  57 +++
 man/core_unload.Rd                       |  48 ++
 man/delete.Rd                            |  65 +++
 man/is-sr.Rd                             |  25 +
 man/makemultiargs.Rd                     |  17 +
 man/optimize.Rd                          |  48 ++
 man/ping.Rd                              |  54 +++
 man/pivot_flatten_tabular.Rd             |  22 +
 man/schema.Rd                            |  59 +++
 man/solr_all.Rd                          | 141 ++++++
 man/solr_connect.Rd                      |  58 +++
 man/solr_facet.Rd                        | 366 +++++++++++++++
 man/solr_get.Rd                          |  52 +++
 man/solr_group.Rd                        | 166 +++++++
 man/solr_highlight.Rd                    | 221 +++++++++
 man/solr_mlt.Rd                          | 112 +++++
 man/solr_parse.Rd                        |  45 ++
 man/solr_search.Rd                       | 202 ++++++++
 man/solr_stats.Rd                        |  91 ++++
 man/solrium-package.Rd                   |  72 +++
 man/update_csv.Rd                        | 120 +++++
 man/update_json.Rd                       |  90 ++++
 man/update_xml.Rd                        |  89 ++++
 tests/cloud_mode/test-add.R              |  25 +
 tests/cloud_mode/test-collections.R      |  24 +
 tests/standard_mode/test-core_create.R   |  31 ++
 tests/test-all.R                         |   2 +
 tests/testthat/test-core_create.R        |  33 ++
 tests/testthat/test-errors.R             |  50 ++
 tests/testthat/test-ping.R               |  35 ++
 tests/testthat/test-schema.R             |  36 ++
 tests/testthat/test-solr_all.R           |  86 ++++
 tests/testthat/test-solr_connect.R       |  50 ++
 tests/testthat/test-solr_error.R         |  49 ++
 tests/testthat/test-solr_facet.r         |  69 +++
 tests/testthat/test-solr_group.r         |  39 ++
 tests/testthat/test-solr_highlight.r     |  25 +
 tests/testthat/test-solr_mlt.r           |  35 ++
 tests/testthat/test-solr_search.r        | 100 ++++
 tests/testthat/test-solr_settings.R      |  31 ++
 tests/testthat/test-solr_stats.r         | 110 +++++
 vignettes/cores_collections.Rmd          | 119 +++++
 vignettes/document_management.Rmd        | 318 +++++++++++++
 vignettes/local_setup.Rmd                |  79 ++++
 vignettes/search.Rmd                     | 600 ++++++++++++++++++++++++
 187 files changed, 15152 insertions(+), 133 deletions(-)

diff --git a/DESCRIPTION b/DESCRIPTION
new file mode 100644
index 0000000..7595704
--- /dev/null
+++ b/DESCRIPTION
@@ -0,0 +1,24 @@
+Package: solrium
+Title: General Purpose R Interface to 'Solr'
+Description: Provides a set of functions for querying and parsing data
+    from 'Solr' (<http://lucene.apache.org/solr>) 'endpoints' (local and 
+    remote), including search, 'faceting', 'highlighting', 'stats', and 
+    'more like this'. In addition, some functionality is included for 
+    creating, deleting, and updating documents in a 'Solr' 'database'.
+Version: 0.4.0
+Authors@R: person("Scott", "Chamberlain", role = c("aut", "cre"),
+    email = "myrmecocystus at gmail.com")
+License: MIT + file LICENSE
+URL: https://github.com/ropensci/solrium
+BugReports: http://www.github.com/ropensci/solrium/issues
+VignetteBuilder: knitr
+Imports: utils, dplyr (>= 0.5.0), plyr (>= 1.8.4), httr (>= 1.2.0),
+        xml2 (>= 1.0.0), jsonlite (>= 1.0), tibble (>= 1.2)
+Suggests: roxygen2 (>= 5.0.1), testthat, knitr, covr
+RoxygenNote: 5.0.1
+NeedsCompilation: no
+Packaged: 2016-10-05 20:41:34 UTC; sacmac
+Author: Scott Chamberlain [aut, cre]
+Maintainer: Scott Chamberlain <myrmecocystus at gmail.com>
+Repository: CRAN
+Date/Publication: 2016-10-06 00:52:32
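
A minimal sketch of the query workflow the Description field outlines, assuming a
local Solr instance at the default http://localhost:8983 and an illustrative
collection name ("gettingstarted"); both are assumptions, not part of the package:

    library("solrium")
    solr_connect()                                               # default local endpoint
    solr_search(name = "gettingstarted", q = "*:*", rows = 5)    # basic search
    solr_facet(name = "gettingstarted", q = "*:*", facet.field = "id")  # simple facet
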
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..d044f7e
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,2 @@
+YEAR: 2016
+COPYRIGHT HOLDER: Scott Chamberlain
diff --git a/MD5 b/MD5
new file mode 100644
index 0000000..96391d3
--- /dev/null
+++ b/MD5
@@ -0,0 +1,175 @@
+1e3407357ace4dffd780087af36a5c6a *DESCRIPTION
+769bdbb0572f2eefda48945aefb690fc *LICENSE
+4b4dd872f4ac3702ae8353a23ca4d7de *NAMESPACE
+16b8215614efd12268d070eb742fcaa4 *NEWS.md
+042ae6c92cb790d73ae73895585ca5df *R/add.R
+ed9b1328a6361812eb7f44a8dd71928e *R/classes.r
+5f5bc88650e588764e5141debd6c284e *R/collection_addreplica.R
+c14bc915d103fdf9cb03ded344636801 *R/collection_addreplicaprop.R
+8a4cc98f61507c44bb97c987d906c94b *R/collection_addrole.R
+e6f972fe99931651438f5c557be82650 *R/collection_balanceshardunique.R
+3bd66336fa6df09c446bb2fee1701189 *R/collection_clusterprop.R
+e68bf45e955ede48839742dd14168c4f *R/collection_clusterstatus.R
+d734674cd72eeac330c8a5f24b8270df *R/collection_create.R
+df0fc73b8f2a4632d99df219cd1c7b37 *R/collection_createalias.R
+1a1a44d204dd2f951ed41f81c7a8aa26 *R/collection_createshard.R
+34b6e2356b1ca290a2713f63bcb183cd *R/collection_delete.R
+8b4b8bf1500a0171540573297f5b07e4 *R/collection_deletealias.R
+2ffe5f55802ce4042fef3e42f15265e1 *R/collection_deletereplica.R
+04a0045ca626e752b1edb895e7f23eef *R/collection_deletereplicaprop.R
+61bbfca06860882c643c4aab54d0f9a6 *R/collection_deleteshard.R
+3f85403747851e8a57e4c9388feb729d *R/collection_exists.R
+9cf7f14e8ea90fcc766a6db3e7cbef9c *R/collection_list.R
+711e5820f74e4dbf32b51b1d7c3fd31c *R/collection_migrate.R
+0a9a7798bee29c2e02a98f685417d931 *R/collection_overseerstatus.R
+dcb8980942da90ce37b057728c8e7f00 *R/collection_rebalanceleaders.R
+b39f73c57f985efa81802ad53aaf79c6 *R/collection_reload.R
+91bc558e6e006dda14ec3518a928c302 *R/collection_removerole.R
+1f60721671157bf6b58120d2bce68561 *R/collection_requeststatus.R
+7cb83e55408c66aff7c63d5372271b92 *R/collection_splitshard.R
+87a4dfb2c17ca145eccccff511f97ad6 *R/collections.R
+f32d5e0c66238949fc85f75fc8ad8f4f *R/commit.R
+4a7331cf105712ad607cac87d2643dda *R/config_get.R
+59b698fe6d931839a7073ca29def07aa *R/config_overlay.R
+bff441cc3ecc45cae3705165e23c656b *R/config_params.R
+cecf41bf78cf8ade2ee413c7bded44be *R/config_set.R
+b120ccede9c7ccf32235a030c36e7774 *R/connect.R
+aa43dd790dc1cf5abab7089aa0745ef6 *R/core_create.R
+7eccfffac673bb7e133f10321ed3b8ce *R/core_exists.R
+5dfa47a191e16def0199d85ef3d20a53 *R/core_mergeindexes.R
+826e2180e7a88e0cf118a96fd3aadef7 *R/core_reload.R
+592592972b9bcfb63da73625863c73d2 *R/core_rename.R
+3882c8dc2b5b4948b5877f1e95e3264c *R/core_requeststatus.R
+0e6f9033e87ab7f916e40764f8b7378e *R/core_split.R
+a67847b5b53be428c7d90242d41113e4 *R/core_status.R
+d19e8f8c78a5e38d477845682e58f30f *R/core_swap.R
+d68d4430bbed63e103bc68176e65361f *R/core_unload.R
+61d914b7266ffa22ca25887b6a1b5968 *R/delete.R
+8983e299d349397b18629f99a21ae750 *R/optimize.R
+351dc1c8e530ad741325153869cc3131 *R/parsers.R
+209300e7b795ddd8cd5a2df42dcbedda *R/ping.R
+a3927f5509ec56b665f6c6eb12268c0e *R/schema.R
+b66ab779938427e1098b048724bb38b8 *R/solr_all.r
+571feb0c743e31d365e3bd9d7867bcc0 *R/solr_facet.r
+b5d70d74be22f43b32ba98cfc28ff58e *R/solr_get.R
+6369b7581dcb084806c0807d367262f2 *R/solr_group.r
+d4d431ced585c439b02f1948242a7b88 *R/solr_highlight.r
+669d2b52dc43ff033a8a38e8e3ec5b32 *R/solr_mlt.r
+4c12d8c43a9231dc2aef61f1d45d31f7 *R/solr_search.r
+f3ef00ab64fb6ac47fb658b186f30804 *R/solr_stats.r
+85d89423eb166ed3b91c6477bcecfc90 *R/solrium-package.R
+bc3d2fe13e45ad93a212b3e9200d3177 *R/update_csv.R
+28ef96c7d5cb72216d308dd1c8c2de46 *R/update_json.R
+5397084fc6f5abf6c98738bc863d8f57 *R/update_xml.R
+9095b8d2a21574b8955d77d4aa2f4d00 *R/zzz.r
+0e9a61c7a417d1f6092af4feb2ed9a63 *README.md
+4f696e68a3f28548dccebef0953fed29 *build/vignette.rds
+ae1097c4c79e8dfbf4f5982af5a2bb3f *inst/doc/cores_collections.Rmd
+0f33cd79c266c0543a4a8ec6dca17c91 *inst/doc/cores_collections.html
+24a71da1896d16ecbd9fc4d7476c91d3 *inst/doc/document_management.Rmd
+b077e3a569d0726ca65946c5513a000b *inst/doc/document_management.html
+17b2cf10a4ff9abc151600f8efad7b03 *inst/doc/local_setup.Rmd
+ae251089e247e82ea7c591dc878e7a6a *inst/doc/local_setup.html
+f4bc6338aebf8ed9212b6f8c5122a1d1 *inst/doc/search.Rmd
+253ad63f64961638a8d40bfdca2c143b *inst/doc/search.html
+cd1cc006debed6e2b4556001fb61b872 *inst/examples/add_delete.json
+ab2c69200e3d331b34d8b8d9158feab4 *inst/examples/add_delete.xml
+8dc63db5d80bc7f931c6a439080c3bbc *inst/examples/books.csv
+b2c72a069b9c55a21c7e4a512cb52c32 *inst/examples/books.json
+d19723170a92193a92840af3cfbb0c15 *inst/examples/books.xml
+ec0e387d7aee2c2f391d31882cc75eed *inst/examples/books2.json
+d56e9cd01b8e1a6c61cfcc077219cffa *inst/examples/books2_delete.json
+f437720237985879e5f2347694aac166 *inst/examples/books2_delete.xml
+c79fd4b2cbf3d6b44752c71a944335b0 *inst/examples/books_delete.json
+0cbb22664329aa7d8e125bff214f1a35 *inst/examples/books_delete.xml
+1c8662000097080ed54d2d9cdc4313c2 *inst/examples/schema.xml
+7344fdb8f567b318829a54322bafc504 *inst/examples/solrconfig.xml
+f8225c6c4a245757188e297d7259f5bb *inst/examples/updatecommands_add.json
+1d42c66dcbc92c2f8ac0c2a3effabcca *inst/examples/updatecommands_add.xml
+5eab27b9c1f8c6f873c7bb16dd7d24a7 *inst/examples/updatecommands_delete.json
+d268b60d7387fb5dc7c7b640de3e1ea1 *inst/examples/updatecommands_delete.xml
+b442acc0ef5259a14ffe466f4d9b68b4 *man/add.Rd
+09fac0ac81484533d27da59b4d28ae2b *man/collapse_pivot_names.Rd
+f928c15332cddd32a053acf65e2b6154 *man/collectargs.Rd
+50edf4f47dc16efcb0c9803d2ebbc9e5 *man/collection_addreplica.Rd
+3481633c2ae2d271c66c1cd2aa2571f8 *man/collection_addreplicaprop.Rd
+863220e5be3c44f16894a63de0c4bb1f *man/collection_addrole.Rd
+30aedaf0285d9758ed58966959494c6a *man/collection_balanceshardunique.Rd
+d6859c5ea8355dcece7f2b3c16ea0d46 *man/collection_clusterprop.Rd
+3aa70e87fa8d90cebc6c58a56403a576 *man/collection_clusterstatus.Rd
+afb5e5bfb08a6fcbedef9623a124e309 *man/collection_create.Rd
+e2b69db6c36c4037d144c9d4d5a9818c *man/collection_createalias.Rd
+b63107d7916f450176a4ee2eeb76d167 *man/collection_createshard.Rd
+9ea7005f31d7fc335cbf7d0d6ddb471a *man/collection_delete.Rd
+70cf52f10af4ec611c07578350abab5b *man/collection_deletealias.Rd
+1c0d9f2eafe233baad095011c20c2219 *man/collection_deletereplica.Rd
+ebe88b276884ce0ac516fcec5624bf60 *man/collection_deletereplicaprop.Rd
+bd2f73bbd90927d4303772f6566cb9e9 *man/collection_deleteshard.Rd
+80baa3bcc8707b26b4e527e4eccc7f26 *man/collection_exists.Rd
+306789b56163228adf1cbc08112a69dc *man/collection_list.Rd
+72d5aca86ccfa8c3e863f779fa24e69b *man/collection_migrate.Rd
+0762e205891253d7f0babfb94e67c99e *man/collection_overseerstatus.Rd
+b10821e8b317462765483f9ead272f86 *man/collection_rebalanceleaders.Rd
+d01abcd1db2401886eca4329689fd4b6 *man/collection_reload.Rd
+60586db27a665b9c1a733629debbef5a *man/collection_removerole.Rd
+ccadfcae48dbb7bf7dea0b8731c1c09b *man/collection_requeststatus.Rd
+a6d7e3b92741db99023850bb99ad6b8e *man/collection_splitshard.Rd
+c9870202f4f842af2ca41fcdbebedb26 *man/collections.Rd
+b7b539cc2a5d273e19d85937e81c1347 *man/commit.Rd
+eda4a72e94fa01b0d188cb78bd207d5a *man/config_get.Rd
+87feec17216fc41495abd8b558ebb421 *man/config_overlay.Rd
+905af41498a3c2d4720d44a59573038e *man/config_params.Rd
+9ce3c931155ab597bda364dfe212e82d *man/config_set.Rd
+a56c6cfa42b947076bf8d0528ee99ea9 *man/core_create.Rd
+80845e25fe010437ae631d7a249989bc *man/core_exists.Rd
+555aa1958aa10d9b6008b9c6478409e2 *man/core_mergeindexes.Rd
+62a41c43111d53c1e0f24571a3480d8e *man/core_reload.Rd
+f66ffce36ee693570233162582fcdc57 *man/core_rename.Rd
+f47a5bac5e63a03723662b10915fa8a9 *man/core_requeststatus.Rd
+fb0b38c91635d17377af96534cb81463 *man/core_split.Rd
+459a178c90678304f413db250f4fd972 *man/core_status.Rd
+23b44147bc10d678f3b1906fbf016b22 *man/core_swap.Rd
+ee993dffa018053e21064340a42e3d7a *man/core_unload.Rd
+2317e698215663f4d5c3e8b015de7ec5 *man/delete.Rd
+d05d5dbb1295cfa97cd282c5bd165c8a *man/is-sr.Rd
+5fdc32ecdc180365d23aebc29364722b *man/makemultiargs.Rd
+47dc0f9ce0aa48e5202eb59a87e157a0 *man/optimize.Rd
+08de32419aa64bb6cb8f397d66d49807 *man/ping.Rd
+6489a80c5ff1d662c06a9a6363a72d1e *man/pivot_flatten_tabular.Rd
+8b0b6e516777030b673f4d61e097dee3 *man/schema.Rd
+71dd82c1312f20153a0ae925af58fbd5 *man/solr_all.Rd
+b980a9159999acffd61014f07f555d8b *man/solr_connect.Rd
+dd431c67f9c9b4e82e91eee74fb99c7f *man/solr_facet.Rd
+6c3b041a87f552ad383fe1b47e0c9863 *man/solr_get.Rd
+c59c6bb03d8f728b54c04b32d8872bc5 *man/solr_group.Rd
+75b8e242a3fe3c8f6d059ee01db0cdfd *man/solr_highlight.Rd
+67d1e2223cef7255b701fc707a7a6e3f *man/solr_mlt.Rd
+4aa2ff06afacbf86d05eefe409758ecb *man/solr_parse.Rd
+cfce05916ff81f431ba0d5ce50ffb2e4 *man/solr_search.Rd
+008a2d7ffedc2c9865ee2a7a4f32c17a *man/solr_stats.Rd
+885ddddf54c7479a48f21b1c0346c075 *man/solrium-package.Rd
+c80d338cd022acbd23e377f013ee53f1 *man/update_csv.Rd
+76c2d2c6fc7ef2a5ea43c863db93c3d5 *man/update_json.Rd
+4b7fbdb25a7c60eb9785c405fdfdccfb *man/update_xml.Rd
+b4487f117183b6157cba9b75de6d078a *tests/cloud_mode/test-add.R
+a72186f5ba6d9b13fe5219c2e8024c2e *tests/cloud_mode/test-collections.R
+1baaceeffe758af5c1b0b01e073927e2 *tests/standard_mode/test-core_create.R
+d4549d7babf9d1437a58916e7778aafb *tests/test-all.R
+68fe948d65ab12bcf4358ccd67936bd8 *tests/testthat/test-core_create.R
+8d0a8f385e29f2f3823e75a13a507b19 *tests/testthat/test-errors.R
+0ae0bf3544431d4933adb7d36702f923 *tests/testthat/test-ping.R
+e5b3d2ca168afdffbd68ea2ccc6ecb7d *tests/testthat/test-schema.R
+3cc3c33ba45c3662a5bb19db53875731 *tests/testthat/test-solr_all.R
+a560f95a79ba74a3b8db747541df4e45 *tests/testthat/test-solr_connect.R
+026c5851382faf1967616cb897d5501f *tests/testthat/test-solr_error.R
+2ff1226459959411035392906e7522bf *tests/testthat/test-solr_facet.r
+7d0f2d7545878325d53e6e650d874218 *tests/testthat/test-solr_group.r
+237bd9e4c99c714269a152dfb6cb605b *tests/testthat/test-solr_highlight.r
+a6da0b4abbd193ccad3eee22de160729 *tests/testthat/test-solr_mlt.r
+9e3e5256bcd62c5adca8f3826e9464a2 *tests/testthat/test-solr_search.r
+1c9ca9c79b510d58e9e027e0de36f142 *tests/testthat/test-solr_settings.R
+c9e394804c05152a3526fa7996ebcce1 *tests/testthat/test-solr_stats.r
+ae1097c4c79e8dfbf4f5982af5a2bb3f *vignettes/cores_collections.Rmd
+24a71da1896d16ecbd9fc4d7476c91d3 *vignettes/document_management.Rmd
+17b2cf10a4ff9abc151600f8efad7b03 *vignettes/local_setup.Rmd
+f4bc6338aebf8ed9212b6f8c5122a1d1 *vignettes/search.Rmd
diff --git a/NAMESPACE b/NAMESPACE
new file mode 100644
index 0000000..01b0ff1
--- /dev/null
+++ b/NAMESPACE
@@ -0,0 +1,107 @@
+# Generated by roxygen2: do not edit by hand
+
+S3method(add,data.frame)
+S3method(add,list)
+S3method(print,solr_connection)
+S3method(solr_parse,default)
+S3method(solr_parse,ping)
+S3method(solr_parse,sr_all)
+S3method(solr_parse,sr_facet)
+S3method(solr_parse,sr_group)
+S3method(solr_parse,sr_high)
+S3method(solr_parse,sr_mlt)
+S3method(solr_parse,sr_search)
+S3method(solr_parse,sr_stats)
+S3method(solr_parse,update)
+export(add)
+export(collection_addreplica)
+export(collection_addreplicaprop)
+export(collection_addrole)
+export(collection_balanceshardunique)
+export(collection_clusterprop)
+export(collection_clusterstatus)
+export(collection_create)
+export(collection_createalias)
+export(collection_createshard)
+export(collection_delete)
+export(collection_deletealias)
+export(collection_deletereplica)
+export(collection_deletereplicaprop)
+export(collection_deleteshard)
+export(collection_exists)
+export(collection_list)
+export(collection_migrate)
+export(collection_overseerstatus)
+export(collection_rebalanceleaders)
+export(collection_reload)
+export(collection_removerole)
+export(collection_requeststatus)
+export(collection_splitshard)
+export(collections)
+export(commit)
+export(config_get)
+export(config_overlay)
+export(config_params)
+export(config_set)
+export(core_create)
+export(core_exists)
+export(core_mergeindexes)
+export(core_reload)
+export(core_rename)
+export(core_requeststatus)
+export(core_split)
+export(core_status)
+export(core_swap)
+export(core_unload)
+export(cores)
+export(delete_by_id)
+export(delete_by_query)
+export(is.sr_facet)
+export(is.sr_high)
+export(is.sr_search)
+export(optimize)
+export(ping)
+export(schema)
+export(solr_all)
+export(solr_connect)
+export(solr_facet)
+export(solr_get)
+export(solr_group)
+export(solr_highlight)
+export(solr_mlt)
+export(solr_parse)
+export(solr_search)
+export(solr_settings)
+export(solr_stats)
+export(update_csv)
+export(update_json)
+export(update_xml)
+importFrom(dplyr,bind_rows)
+importFrom(httr,GET)
+importFrom(httr,POST)
+importFrom(httr,content)
+importFrom(httr,content_type)
+importFrom(httr,content_type_json)
+importFrom(httr,content_type_xml)
+importFrom(httr,http_condition)
+importFrom(httr,http_status)
+importFrom(httr,stop_for_status)
+importFrom(httr,upload_file)
+importFrom(jsonlite,fromJSON)
+importFrom(plyr,rbind.fill)
+importFrom(tibble,add_column)
+importFrom(tibble,as_data_frame)
+importFrom(tibble,as_tibble)
+importFrom(tibble,data_frame)
+importFrom(utils,URLdecode)
+importFrom(utils,head)
+importFrom(utils,modifyList)
+importFrom(utils,read.table)
+importFrom(xml2,read_xml)
+importFrom(xml2,xml_attr)
+importFrom(xml2,xml_attrs)
+importFrom(xml2,xml_children)
+importFrom(xml2,xml_find_all)
+importFrom(xml2,xml_find_first)
+importFrom(xml2,xml_name)
+importFrom(xml2,xml_text)
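
The S3method() entries above register class-specific parsers behind one generic; a
self-contained illustration of that dispatch pattern (a demo only, not the package's
actual implementation):

    # demo generic plus two methods, mirroring how solr_parse() dispatches on class
    parse_demo <- function(x, ...) UseMethod("parse_demo")
    parse_demo.sr_search <- function(x, ...) "handled by the sr_search method"
    parse_demo.default <- function(x, ...) "no class-specific method; default used"

    res <- structure(list(), class = "sr_search")
    parse_demo(res)   # dispatches on the "sr_search" class attribute
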
diff --git a/NEWS.md b/NEWS.md
new file mode 100644
index 0000000..bc4cb20
--- /dev/null
+++ b/NEWS.md
@@ -0,0 +1,21 @@
+solrium 0.4.0
+=============
+
+### MINOR IMPROVEMENTS
+
+* Change `dplyr::rbind_all()` (deprecated) to `dplyr::bind_rows()` (#90)
+* Added additional examples of using pivot faceting to `solr_facet()` (#91)
+* Fix to `solr_group()` (#92)
+* Replaced dependency `XML` with `xml2` (#57)
+* Added examples and tests for a few more public Solr instances (#30)
+* Now using `tibble` to give back compact data.frames
+* namespace all base package calls
+* Many changes to internal parsers to use `xml2` instead of `XML`, and 
+improvements
+
+solrium 0.3.0
+=============
+
+### NEW FEATURES
+
+* released to CRAN
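
The rbind_all/bind_rows item in the 0.4.0 notes above swaps a deprecated dplyr call
for its replacement; a small generic dplyr sketch of that change (illustration only,
not package code):

    library("dplyr")
    df1 <- data.frame(id = 1, price = 100)
    df2 <- data.frame(id = 2, price = 500)
    # rbind_all(list(df1, df2))    # deprecated call previously used
    bind_rows(df1, df2)            # current replacement; same combined data.frame
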
diff --git a/R/add.R b/R/add.R
new file mode 100644
index 0000000..ccc2ba0
--- /dev/null
+++ b/R/add.R
@@ -0,0 +1,98 @@
+#' Add documents from R objects
+#' 
+#' @export
+#' @param x Documents, either as rows in a data.frame, or a list.
+#' @param name (character) A collection or core name. Required.
+#' @param commit (logical) If \code{TRUE}, documents are immediately searchable. 
+#' Default: \code{TRUE}
+#' @param commit_within (numeric) Milliseconds within which to commit the change;
+#' the document will be added within that time. Default: NULL
+#' @param overwrite (logical) Overwrite documents with matching keys. 
+#' Default: \code{TRUE}
+#' @param boost (numeric) Boost factor. Default: NULL
+#' @param wt (character) One of json (default) or xml. If json, uses 
+#' \code{\link[jsonlite]{fromJSON}} to parse. If xml, uses \code{\link[xml2]{read_xml}} to 
+#' parse
+#' @param raw (logical) If \code{TRUE}, returns raw data in format specified by 
+#' \code{wt} param
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' 
+#' @details Works for Collections as well as Cores (in SolrCloud and Standalone 
+#' modes, respectively)
+#' 
+#' @seealso \code{\link{update_json}}, \code{\link{update_xml}}, 
+#' \code{\link{update_csv}} for adding documents from files
+#' 
+#' @examples \dontrun{
+#' solr_connect()
+#' 
+#' # create the books collection
+#' if (!collection_exists("books")) {
+#'   collection_create(name = "books", numShards = 2)
+#' }
+#' 
+#' # Documents in a list
+#' ss <- list(list(id = 1, price = 100), list(id = 2, price = 500))
+#' add(ss, name = "books")
+#' 
+#' # Documents in a data.frame
+#' ## Simple example
+#' df <- data.frame(id = c(67, 68), price = c(1000, 500000000))
+#' add(x = df, "books")
+#' df <- data.frame(id = c(77, 78), price = c(1, 2.40))
+#' add(x = df, "books")
+#' 
+#' ## More complex example, get file from package examples
+#' # start Solr in Schemaless mode first: bin/solr start -e schemaless
+#' file <- system.file("examples", "books.csv", package = "solrium")
+#' x <- read.csv(file, stringsAsFactors = FALSE)
+#' class(x)
+#' head(x)
+#' if (!collection_exists("mybooks")) {
+#'   collection_create(name = "mybooks", numShards = 2)
+#' }
+#' add(x, "mybooks")
+#' 
+#' # Use modifiers
+#' add(x, "mybooks", commit_within = 5000)
+#' 
+#' # Get back XML instead of a list
+#' ss <- list(list(id = 1, price = 100), list(id = 2, price = 500))
+#' # parsed XML
+#' add(ss, name = "books", wt = "xml")
+#' # raw XML
+#' add(ss, name = "books", wt = "xml", raw = TRUE)
+#' }
+add <- function(x, name, commit = TRUE, commit_within = NULL, overwrite = TRUE,
+                boost = NULL, wt = 'json', raw = FALSE, ...) {
+  UseMethod("add")
+}
+
+#' @export
+add.list <- function(x, name, commit = TRUE, commit_within = NULL, 
+                     overwrite = TRUE, boost = NULL, wt = 'json', raw = FALSE, ...) {
+  
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(commit = asl(commit), commitWithin = commit_within, 
+                  overwrite = asl(overwrite), wt = wt))
+  if (!is.null(boost)) {
+    x <- lapply(x, function(z) modifyList(z, list(boost = boost)))
+  }
+  obj_proc(file.path(conn$url, sprintf('solr/%s/update/json/docs', name)), x, args, raw, conn$proxy, ...)
+}
+
+#' @export
+add.data.frame <- function(x, name, commit = TRUE, commit_within = NULL, 
+                           overwrite = TRUE, boost = NULL, wt = 'json', raw = FALSE, ...) {
+  
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(commit = asl(commit), commitWithin = commit_within, 
+                  overwrite = asl(overwrite), wt = wt))
+  if (!is.null(boost)) {
+    x$boost <- boost
+  }
+  x <- apply(x, 1, as.list)
+  obj_proc(file.path(conn$url, sprintf('solr/%s/update/json/docs', name)), x, args, raw, conn$proxy, ...)
+}
diff --git a/R/classes.r b/R/classes.r
new file mode 100644
index 0000000..ee8de59
--- /dev/null
+++ b/R/classes.r
@@ -0,0 +1,15 @@
+#' Test for sr_facet class
+#' @export
+#' @param x Input
+#' @rdname is-sr
+is.sr_facet <- function(x) inherits(x, "sr_facet")
+
+#' Test for sr_high class
+#' @export
+#' @rdname is-sr
+is.sr_high <- function(x) inherits(x, "sr_high")
+
+#' Test for sr_search class
+#' @export
+#' @rdname is-sr
+is.sr_search <- function(x) inherits(x, "sr_search")
\ No newline at end of file
diff --git a/R/collection_addreplica.R b/R/collection_addreplica.R
new file mode 100644
index 0000000..1a6cac1
--- /dev/null
+++ b/R/collection_addreplica.R
@@ -0,0 +1,60 @@
+#' @title Add a replica
+#'
+#' @description Add a replica to a shard in a collection. The node name can be
+#' specified if the replica is to be created in a specific node
+#'
+#' @export
+#' @param name (character) The name of the collection. Required
+#' @param shard (character) The name of the shard to which the replica is to be added.
+#' If \code{shard} is not given, then \code{route} must be.
+#' @param route (character) If the exact shard name is not known, users may pass
+#' the \code{route} value and the system would identify the name of the shard.
+#' Ignored if the \code{shard} param is also given
+#' @param node (character) The name of the node where the replica should be created
+#' @param instanceDir (character) The instanceDir for the core that will be created
+#' @param dataDir	(character)	The directory in which the core should be created
+#' @param async	(character) Request ID to track this action which will be processed
+#' asynchronously
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param callopts curl options passed on to \code{\link[httr]{GET}}
+#' @param ... You can pass in parameters like \code{property.name=value}	to set
+#' core property name to value. See the section Defining core.properties for details on
+#' supported properties and values.
+#' (https://cwiki.apache.org/confluence/display/solr/Defining+core.properties)
+#' @examples \dontrun{
+#' solr_connect()
+#'
+#' # create collection
+#' if (!collection_exists("foobar")) {
+#'   collection_create(name = "foobar", numShards = 2) # bin/solr create -c foobar
+#' }
+#'
+#' # status
+#' collection_clusterstatus()$cluster$collections$foobar
+#'
+#' # add replica
+#' if (!collection_exists("foobar")) {
+#'   collection_addreplica(name = "foobar", shard = "shard1")
+#' }
+#'
+#' # status again
+#' collection_clusterstatus()$cluster$collections$foobar
+#' collection_clusterstatus()$cluster$collections$foobar$shards
+#' collection_clusterstatus()$cluster$collections$foobar$shards$shard1
+#' }
+collection_addreplica <- function(name, shard = NULL, route = NULL, node = NULL,
+                              instanceDir = NULL, dataDir = NULL, async = NULL,
+                              raw = FALSE, callopts=list(), ...) {
+
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'ADDREPLICA', collection = name, shard = shard, route = route,
+                  node = node, instanceDir = instanceDir, dataDir = dataDir,
+                  async = async, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, callopts, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/collection_addreplicaprop.R b/R/collection_addreplicaprop.R
new file mode 100644
index 0000000..18e316b
--- /dev/null
+++ b/R/collection_addreplicaprop.R
@@ -0,0 +1,52 @@
+#' @title Add a replica property
+#'
+#' @description Assign an arbitrary property to a particular replica and give it
+#' the value specified. If the property already exists, it will be overwritten
+#' with the new value.
+#'
+#' @export
+#' @param name (character) Required. The name of the collection this replica belongs to.
+#' @param shard (character) Required. The name of the shard the replica belongs to.
+#' @param replica (character) Required. The replica, e.g. core_node1.
+#' @param property (character) Required. The property to add. Note: this will have the
+#' literal 'property.' prepended to distinguish it from system-maintained properties.
+#' So these two forms are equivalent: \code{property=special} and
+#' \code{property=property.special}
+#' @param property.value (character) Required. The value to assign to the property.
+#' @param shardUnique (logical) If \code{TRUE}, then setting this property on one
+#' replica will remove the property from all other replicas in that shard.
+#' Default: \code{FALSE}
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param callopts curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#'
+#' # create collection
+#' collection_create(name = "addrep", numShards = 2) # bin/solr create -c addrep
+#'
+#' # status
+#' collection_clusterstatus()$cluster$collections$addrep$shards
+#'
+#' # add the value world to the property hello
+#' collection_addreplicaprop(name = "addrep", shard = "shard1", replica = "core_node1",
+#'    property = "hello", property.value = "world")
+#'
+#' # check status
+#' collection_clusterstatus()$cluster$collections$addrep$shards
+#' collection_clusterstatus()$cluster$collections$addrep$shards$shard1$replicas$core_node1
+#' }
+collection_addreplicaprop <- function(name, shard, replica, property, property.value,
+                                      shardUnique = FALSE, raw = FALSE, callopts=list()) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'ADDREPLICAPROP', collection = name, shard = shard,
+                  replica = replica, property = property,
+                  property.value = property.value,
+                  shardUnique = asl(shardUnique), wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, callopts, conn$proxy)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/collection_addrole.R b/R/collection_addrole.R
new file mode 100644
index 0000000..2f21b6d
--- /dev/null
+++ b/R/collection_addrole.R
@@ -0,0 +1,35 @@
+#' @title Add a role to a node
+#'
+#' @description Assign a role to a given node in the cluster. The only supported role
+#' as of Solr 4.7 is 'overseer'. Use this API to dedicate a particular node as
+#' Overseer. Invoke it multiple times to add more nodes. This is useful in large
+#' clusters where an Overseer is likely to get overloaded. If available, one of the
+#' nodes assigned the 'overseer' role will become the overseer. The system will
+#' assign the role to another node if none of the designated nodes are up and
+#' running
+#'
+#' @export
+#' @param role (character) Required. The name of the role. The only supported role
+#' as of now is overseer (set as default).
+#' @param node (character) Required. The name of the node. It is possible to assign a
+#' role even before that node is started.
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#'
+#' # get list of nodes
+#' nodes <- collection_clusterstatus()$cluster$live_nodes
+#' collection_addrole(node = nodes[1])
+#' }
+collection_addrole <- function(role = "overseer", node, raw = FALSE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'ADDROLE', role = role, node = node, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/collection_balanceshardunique.R b/R/collection_balanceshardunique.R
new file mode 100644
index 0000000..da44e22
--- /dev/null
+++ b/R/collection_balanceshardunique.R
@@ -0,0 +1,46 @@
+#' @title Balance a property
+#'
+#' @description Ensures that a particular property is distributed evenly amongst the
+#' physical nodes that make up a collection. If the property already exists on a replica,
+#' every effort is made to leave it there. If the property is not on any replica on a
+#' shard, one is chosen and the property is added.
+#'
+#' @export
+#' @param name (character) Required. The name of the collection to balance the property in
+#' @param property (character) Required. The property to balance. The literal "property."
+#' is prepended to this property if not specified explicitly.
+#' @param onlyactivenodes (logical) Normally, the property is instantiated on active
+#' nodes only. If \code{FALSE}, then inactive nodes are also included for distribution.
+#' Default: \code{TRUE}
+#' @param shardUnique (logical) Something of a safety valve. There is one pre-defined
+#' property (preferredLeader) that defaults this value to \code{TRUE}. For all other
+#' properties that are balanced, this must be set to \code{TRUE} or an error message is
+#' returned
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#'
+#' # create collection
+#' collection_create(name = "mycollection") # bin/solr create -c mycollection
+#'
+#' # balance preferredLeader property
+#' collection_balanceshardunique("mycollection", property = "preferredLeader")
+#'
+#' # examine cluster status
+#' collection_clusterstatus()$cluster$collections$mycollection
+#' }
+collection_balanceshardunique <- function(name, property, onlyactivenodes = TRUE,
+                                          shardUnique = NULL, raw = FALSE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'BALANCESHARDUNIQUE', collection = name, property = property,
+                  onlyactivenodes = asl(onlyactivenodes), shardUnique = asl(shardUnique),
+                  wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/collection_clusterprop.R b/R/collection_clusterprop.R
new file mode 100644
index 0000000..c2753a1
--- /dev/null
+++ b/R/collection_clusterprop.R
@@ -0,0 +1,40 @@
+#' @title Add, edit, delete a cluster-wide property
+#'
+#' @description Important: whether add, edit, or delete is used is determined by
+#' the value passed to the \code{val} parameter. If the property name is
+#' new, it will be added. If the property name exists, and the value is different,
+#' it will be edited. If the property name exists, and the value is NULL or empty,
+#' the property is deleted (unset).
+#'
+#' @export
+#' @param name (character) Required. The name of the property. The two supported
+#' property names are urlScheme and autoAddReplicas. Other names are rejected
+#' with an error
+#' @param val (character) Required. The value of the property. If the value is
+#' empty or null, the property is unset.
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param callopts curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#'
+#' # add the value https to the property urlScheme
+#' collection_clusterprop(name = "urlScheme", val = "https")
+#'
+#' # status again
+#' collection_clusterstatus()$cluster$properties
+#'
+#' # delete the property urlScheme by setting val to NULL or a 0 length string
+#' collection_clusterprop(name = "urlScheme", val = "")
+#' }
+collection_clusterprop <- function(name, val, raw = FALSE, callopts=list()) {
+  conn <- solr_settings()
+  check_conn(conn)
+  val <- if (is.null(val)) "" else val
+  args <- sc(list(action = 'CLUSTERPROP', name = name, val = val, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, callopts, conn$proxy)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/collection_clusterstatus.R b/R/collection_clusterstatus.R
new file mode 100644
index 0000000..a5ce9da
--- /dev/null
+++ b/R/collection_clusterstatus.R
@@ -0,0 +1,42 @@
+#' @title Get cluster status
+#'
+#' @description Fetch the cluster status including collections, shards, replicas,
+#' configuration name as well as collection aliases and cluster properties.
+#'
+#' @export
+#' @param name (character) The collection name for which information is requested.
+#' If omitted, information on all collections in the cluster will be returned.
+#' @param shard (character) The shard(s) for which information is requested. Multiple
+#' shard names can be specified as a character vector.
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#' collection_clusterstatus()
+#' res <- collection_clusterstatus()
+#' res$responseHeader
+#' res$cluster
+#' res$cluster$collections
+#' res$cluster$collections$gettingstarted
+#' res$cluster$live_nodes
+#' }
+collection_clusterstatus <- function(name = NULL, shard = NULL, raw = FALSE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  shard <- check_shard(shard)
+  args <- sc(list(action = 'CLUSTERSTATUS', collection = name, shard = shard, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
+
+check_shard <- function(x) {
+  if (is.null(x)) {
+    x
+  } else {
+    paste0(x, collapse = ",")
+  }
+}
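
The internal check_shard() helper defined above collapses a vector of shard names
into the comma-separated string the Collections API expects:

    check_shard(NULL)                      # NULL is passed through unchanged
    check_shard(c("shard1", "shard2"))     # "shard1,shard2"
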
diff --git a/R/collection_create.R b/R/collection_create.R
new file mode 100644
index 0000000..73f9a92
--- /dev/null
+++ b/R/collection_create.R
@@ -0,0 +1,98 @@
+#' Add a collection
+#'
+#' @export
+#' @param name The name of the collection to be created. Required
+#' @param numShards (integer) The number of shards to be created as part of the
+#' collection. This is a required parameter when using the 'compositeId' router.
+#' @param maxShardsPerNode (integer) When creating collections, the shards and/or replicas
+#' are spread across all available (i.e., live) nodes, and two replicas of the same shard
+#' will never be on the same node. If a node is not live when the CREATE operation is called,
+#' it will not get any parts of the new collection, which could lead to too many replicas
+#' being created on a single live node. Defining maxShardsPerNode sets a limit on the number
+#' of replicas CREATE will spread to each node. If the entire collection can not be fit into
+#' the live nodes, no collection will be created at all. Default: 1
+#' @param createNodeSet (character) Allows defining the nodes to spread the new collection
+#' across. If not provided, the CREATE operation will create shard-replica spread across all
+#' live Solr nodes. The format is a comma-separated list of node_names, such as
+#' localhost:8983_solr, localhost:8984_solr, localhost:8985_solr. Default: \code{NULL}
+#' @param collection.configName (character) Defines the name of the configurations (which
+#' must already be stored in ZooKeeper) to use for this collection. If not provided, Solr
+#' will default to the collection name as the configuration name. Default: \code{NULL}
+#' @param replicationFactor (integer) The number of replicas to be created for each shard.
+#' Default: 1
+#' @param router.name (character) The router name that will be used. The router defines
+#' how documents will be distributed among the shards. The value can be either \code{implicit},
+#' which uses an internal default hash, or \code{compositeId}, which allows defining the specific
+#' shard to assign documents to. When using the 'implicit' router, the shards parameter is
+#' required. When using the 'compositeId' router, the numShards parameter is required.
+#' For more information, see also the section Document Routing. Default: \code{compositeId}
+#' @param shards (character) A comma separated list of shard names, e.g.,
+#' shard-x,shard-y,shard-z . This is a required parameter when using the 'implicit' router.
+#' @param createNodeSet.shuffle	(logical)	Controls whether or not the shard-replicas created
+#' for this collection will be assigned to the nodes specified by the createNodeSet in a
+#' sequential manner, or if the list of nodes should be shuffled prior to creating individual
+#' replicas.  A 'false' value makes the results of a collection creation predictable and
+#' gives more exact control over the location of the individual shard-replicas, but 'true'
+#' can be a better choice for ensuring replicas are distributed evenly across nodes. Ignored
+#' if createNodeSet is not also specified. Default: \code{TRUE}
+#' @param router.field (character) If this field is specified, the router will look at the
+#' value of the field in an input document to compute the hash and identify a shard instead of
+#' looking at the uniqueKey field. If the field specified is null in the document, the document
+#' will be rejected. Please note that RealTime Get or retrieval by id would also require the
+#' parameter _route_ (or shard.keys) to avoid a distributed search.
+#' @param autoAddReplicas	(logical)	When set to true, enables auto addition of replicas on
+#' shared file systems. See the section autoAddReplicas Settings for more details on settings
+#' and overrides. Default: \code{FALSE}
+#' @param async	(character) Request ID to track this action which will be processed
+#' asynchronously
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param callopts curl options passed on to \code{\link[httr]{GET}}
+#' @param ... You can pass in parameters like \code{property.name=value}	to set
+#' core property name to value. See the section Defining core.properties for details on
+#' supported properties and values.
+#' (https://cwiki.apache.org/confluence/display/solr/Defining+core.properties)
+#' @examples \dontrun{
+#' solr_connect()
+#' 
+#' if (!collection_exists("foobar")) {
+#'   collection_delete(name = "helloWorld")
+#'   collection_create(name = "helloWorld", numShards = 2)
+#' }
+#' if (!collection_exists("foobar")) {
+#'   collection_delete(name = "tablesChairs")
+#'   collection_create(name = "tablesChairs")
+#' }
+#' 
+#' # you may have to do this if you don't want to use 
+#' # bin/solr or use zookeeper directly
+#' path <- "~/solr-5.4.1/server/solr/newcore/conf"
+#' dir.create(path, recursive = TRUE)
+#' files <- list.files("~/solr-5.4.1/server/solr/configsets/data_driven_schema_configs/conf/",
+#' full.names = TRUE)
+#' invisible(file.copy(files, path, recursive = TRUE))
+#' collection_create(name = "newcore", collection.configName = "newcore")
+#' }
+collection_create <- function(name, numShards = 2, maxShardsPerNode = 1,
+                       createNodeSet = NULL, collection.configName = NULL,
+                       replicationFactor = 1, router.name = NULL, shards = NULL,
+                       createNodeSet.shuffle = TRUE, router.field = NULL,
+                       autoAddReplicas = FALSE, async = NULL,
+                       raw = FALSE, callopts=list(), ...) {
+
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'CREATE', name = name, numShards = numShards,
+                  replicationFactor = replicationFactor,
+                  maxShardsPerNode = maxShardsPerNode, createNodeSet = createNodeSet,
+                  collection.configName = collection.configName,
+                  router.name = router.name, shards = shards,
+                  createNodeSet.shuffle = asl(createNodeSet.shuffle),
+                  router.field = router.field, autoAddReplicas = asl(autoAddReplicas),
+                  async = async, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, callopts, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
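
A hedged sketch of the two router forms described in the parameter docs above; the
collection names are arbitrary and a running SolrCloud instance is assumed:

    # compositeId router: numShards is required
    collection_create(name = "hashedcoll", router.name = "compositeId", numShards = 2)
    # implicit router: an explicit list of shard names is required instead
    collection_create(name = "manualcoll", router.name = "implicit",
                      shards = "shard-x,shard-y,shard-z")
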
diff --git a/R/collection_createalias.R b/R/collection_createalias.R
new file mode 100644
index 0000000..44174c6
--- /dev/null
+++ b/R/collection_createalias.R
@@ -0,0 +1,29 @@
+#' @title Create an alias for a collection
+#'
+#' @description Create a new alias pointing to one or more collections. If an
+#' alias by the same name already exists, this action will replace the existing
+#' alias, effectively acting like an atomic "MOVE" command.
+#'
+#' @export
+#' @param alias (character) Required. The alias name to be created
+#' @param collections (character) Required. A character vector of collections to be aliased
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#' collection_create(name = "thingsstuff", numShards = 2)
+#' collection_createalias("tstuff", "thingsstuff")
+#' collection_clusterstatus()$cluster$collections$thingsstuff$aliases # new alias
+#' }
+collection_createalias <- function(alias, collections, raw = FALSE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  collections <- check_shard(collections)
+  args <- sc(list(action = 'CREATEALIAS', name = alias, collections = collections, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/collection_createshard.R b/R/collection_createshard.R
new file mode 100644
index 0000000..7e0c084
--- /dev/null
+++ b/R/collection_createshard.R
@@ -0,0 +1,30 @@
+#' Create a shard
+#'
+#' @export
+#' @param name (character) Required. The name of the collection in which the new
+#' shard will be created.
+#' @param shard (character) Required. The name of the shard to be created.
+#' @param createNodeSet (character) Allows defining the nodes to spread the new
+#' collection across. If not provided, the CREATE operation will create shard-replica
+#' spread across all live Solr nodes. The format is a comma-separated list of
+#' node_names, such as localhost:8983_solr, localhost:8984_solr, localhost:8985_solr.
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#' ## FIXME - doesn't work right now
+#' # collection_create(name = "trees")
+#' # collection_createshard(name = "trees", shard = "newshard")
+#' }
+collection_createshard <- function(name, shard, createNodeSet = NULL, raw = FALSE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'CREATESHARD', collection = name, shard = shard,
+                  createNodeSet = createNodeSet, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/collection_delete.R b/R/collection_delete.R
new file mode 100644
index 0000000..de773d3
--- /dev/null
+++ b/R/collection_delete.R
@@ -0,0 +1,22 @@
+#' Delete a collection
+#'
+#' @export
+#' @param name The name of the collection to be deleted. Required
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#' collection_create(name = "helloWorld")
+#' collection_delete(name = "helloWorld")
+#' }
+collection_delete <- function(name, raw = FALSE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'DELETE', name = name, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/collection_deletealias.R b/R/collection_deletealias.R
new file mode 100644
index 0000000..8d8f399
--- /dev/null
+++ b/R/collection_deletealias.R
@@ -0,0 +1,25 @@
+#' Delete a collection alias
+#'
+#' @export
+#' @param alias (character) Required. The alias name to be deleted
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#' collection_create(name = "thingsstuff", numShards = 2)
+#' collection_createalias("tstuff", "thingsstuff")
+#' collection_clusterstatus()$cluster$collections$thingsstuff$aliases # new alias
+#' collection_deletealias("tstuff")
+#' collection_clusterstatus()$cluster$collections$thingsstuff$aliases # gone
+#' }
+collection_deletealias <- function(alias, raw = FALSE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'DELETEALIAS', name = alias, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/collection_deletereplica.R b/R/collection_deletereplica.R
new file mode 100644
index 0000000..aedf253
--- /dev/null
+++ b/R/collection_deletereplica.R
@@ -0,0 +1,55 @@
+#' @title Delete a replica
+#'
+#' @description Delete a replica from a given collection and shard. If the
+#' corresponding core is up and running, the core is unloaded and the entry is
+#' removed from the clusterstate. If the node/core is down, the entry is taken
+#' off the clusterstate and if the core comes up later it is automatically
+#' unregistered.
+#'
+#' @export
+#' @param name (character) Required. The name of the collection.
+#' @param shard (character) Required. The name of the shard that includes the replica to
+#' be removed.
+#' @param replica (character) Required. The name of the replica to remove.
+#' @param onlyIfDown (logical) When \code{TRUE} will not take any action if the replica
+#' is active. Default: \code{FALSE}
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param callopts curl options passed on to \code{\link[httr]{GET}}
+#' @param ... You can pass in parameters like \code{property.name=value}	to set
+#' core property name to value. See the section Defining core.properties for details on
+#' supported properties and values.
+#' (https://cwiki.apache.org/confluence/display/solr/Defining+core.properties)
+#' @examples \dontrun{
+#' solr_connect()
+#'
+#' # create collection
+#' collection_create(name = "foobar2", numShards = 2) # bin/solr create -c foobar2
+#'
+#' # status
+#' collection_clusterstatus()$cluster$collections$foobar2$shards$shard1
+#'
+#' # add replica
+#' collection_addreplica(name = "foobar2", shard = "shard1")
+#'
+#' # delete replica
+#' ## get replica name
+#' nms <- names(collection_clusterstatus()$cluster$collections$foobar2$shards$shard1$replicas)
+#' collection_deletereplica(name = "foobar2", shard = "shard1", replica = nms[1])
+#'
+#' # status again
+#' collection_clusterstatus()$cluster$collections$foobar2$shards$shard1
+#' }
+collection_deletereplica <- function(name, shard = NULL, replica = NULL, onlyIfDown = FALSE,
+                                  raw = FALSE, callopts=list(), ...) {
+
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'DELETEREPLICA', collection = name, shard = shard, replica = replica,
+                  onlyIfDown = asl(onlyIfDown), wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, callopts, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/collection_deletereplicaprop.R b/R/collection_deletereplicaprop.R
new file mode 100644
index 0000000..315dd3d
--- /dev/null
+++ b/R/collection_deletereplicaprop.R
@@ -0,0 +1,51 @@
+#' @title Delete a replica property
+#'
+#' @description Deletes an arbitrary property from a particular replica.
+#'
+#' @export
+#' @param name (character) Required. The name of the collection this replica belongs to.
+#' @param shard (character) Required. The name of the shard the replica belongs to.
+#' @param replica (character) Required. The replica, e.g. core_node1.
+#' @param property (character) Required. The property to delete. Note: this will have the
+#' literal 'property.' prepended to distinguish it from system-maintained properties.
+#' So these two forms are equivalent: \code{property=special} and
+#' \code{property=property.special}
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param callopts curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#'
+#' # create collection
+#' collection_create(name = "deleterep", numShards = 2) # bin/solr create -c deleterep
+#'
+#' # status
+#' collection_clusterstatus()$cluster$collections$deleterep$shards
+#'
+#' # add the value bar to the property foo
+#' collection_addreplicaprop(name = "deleterep", shard = "shard1", replica = "core_node1",
+#'    property = "foo", property.value = "bar")
+#'
+#' # check status
+#' collection_clusterstatus()$cluster$collections$deleterep$shards
+#' collection_clusterstatus()$cluster$collections$deleterep$shards$shard1$replicas$core_node1
+#'
+#' # delete replica property
+#' collection_deletereplicaprop(name = "deleterep", shard = "shard1",
+#'    replica = "core_node1", property = "foo")
+#'
+#' # check status - foo should be gone
+#' collection_clusterstatus()$cluster$collections$deleterep$shards$shard1$replicas$core_node1
+#' }
+collection_deletereplicaprop <- function(name, shard, replica, property,
+                                         raw = FALSE, callopts=list()) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'DELETEREPLICAPROP', collection = name, shard = shard,
+                  replica = replica, property = property, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, callopts, conn$proxy)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/collection_deleteshard.R b/R/collection_deleteshard.R
new file mode 100644
index 0000000..78dd2e9
--- /dev/null
+++ b/R/collection_deleteshard.R
@@ -0,0 +1,38 @@
+#' @title Delete a shard
+#'
+#' @description Deleting a shard will unload all replicas of the shard and remove
+#' them from clusterstate.json. It will only remove shards that are inactive, or
+#' which have no range given for custom sharding.
+#'
+#' @export
+#' @param name (character) Required. The name of the collection that includes the shard
+#' to be deleted
+#' @param shard (character) Required. The name of the shard to be deleted
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#' # create collection
+#' # collection_create(name = "buffalo") # bin/solr create -c buffalo
+#'
+#' # find shard names
+#' names(collection_clusterstatus()$cluster$collections$buffalo$shards)
+#' # split a shard by name
+#' collection_splitshard(name = "buffalo", shard = "shard1")
+#' # now we have three shards
+#' names(collection_clusterstatus()$cluster$collections$buffalo$shards)
+#'
+#' # delete shard
+#' collection_deleteshard(name = "buffalo", shard = "shard1_1")
+#' }
+collection_deleteshard <- function(name, shard, raw = FALSE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'DELETESHARD', collection = name, shard = shard, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/collection_exists.R b/R/collection_exists.R
new file mode 100644
index 0000000..a89bced
--- /dev/null
+++ b/R/collection_exists.R
@@ -0,0 +1,30 @@
+#' Check if a collection exists
+#' 
+#' @export
+#' 
+#' @param name (character) Required. The name of the collection to check.
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @details Simply calls \code{\link{collection_list}} internally
+#' @return A single boolean, \code{TRUE} or \code{FALSE}
+#' @examples \dontrun{
+#' # start Solr in Cloud mode, e.g.: bin/solr -e cloud
+#' # you can create a new core like: bin/solr create -c corename
+#' # where <corename> is the name for your core - or create as below
+#' 
+#' # connect
+#' solr_connect()
+#' 
+#' # exists
+#' collection_exists("gettingstarted")
+#' 
+#' # doesn't exist
+#' collection_exists("hhhhhh")
+#' }
+collection_exists <- function(name, ...) {
+  tmp <- suppressMessages(collection_list(...))$collections
+  if (name %in% tmp) {
+    TRUE 
+  } else {
+    FALSE
+  }
+}
diff --git a/R/collection_list.R b/R/collection_list.R
new file mode 100644
index 0000000..b70c937
--- /dev/null
+++ b/R/collection_list.R
@@ -0,0 +1,21 @@
+#' List collections
+#'
+#' @export
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#' collection_list()
+#' collection_list()$collections
+#' }
+collection_list <- function(raw = FALSE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'LIST', wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/collection_migrate.R b/R/collection_migrate.R
new file mode 100644
index 0000000..6ee23c9
--- /dev/null
+++ b/R/collection_migrate.R
@@ -0,0 +1,48 @@
+#' Migrate documents to another collection
+#'
+#' @export
+#' @param name (character) Required. The name of the source collection from which
+#' documents will be split
+#' @param target.collection (character) Required. The name of the target collection
+#' to which documents will be migrated
+#' @param split.key (character) Required. The routing key prefix. For example, if
+#' uniqueKey is a!123, then you would use split.key=a!
+#' @param forward.timeout (integer) The timeout (seconds), until which write requests
+#' made to the source collection for the given \code{split.key} will be forwarded to the
+#' target shard. Default: 60
+#' @param async	(character) Request ID to track this action which will be processed
+#' asynchronously
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#'
+#' # create collection
+#' collection_create(name = "migrate_from") # bin/solr create -c migrate_from
+#'
+#' # create another collection
+#' collection_create(name = "migrate_to") # bin/solr create -c migrate_to
+#'
+#' # add some documents
+#' file <- system.file("examples", "books.csv", package = "solrium")
+#' x <- read.csv(file, stringsAsFactors = FALSE)
+#' add(x, "migrate_from")
+#'
+#' # migrate some documents from one collection to the other
+#' ## FIXME - not sure if this is actually working....
+#' collection_migrate("migrate_from", "migrate_to", split.key = "05535")
+#' }
+collection_migrate <- function(name, target.collection, split.key, forward.timeout = NULL,
+                               async = NULL, raw = FALSE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'MIGRATE', collection = name, target.collection = target.collection,
+                  split.key = split.key, forward.timeout = forward.timeout,
+                  async = async, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/collection_overseerstatus.R b/R/collection_overseerstatus.R
new file mode 100644
index 0000000..8d888f6
--- /dev/null
+++ b/R/collection_overseerstatus.R
@@ -0,0 +1,33 @@
+#' @title Get overseer status
+#'
+#' @description Returns the current status of the overseer, performance statistics
+#' of various overseer APIs, as well as the last 10 failures per operation type.
+#'
+#' @export
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#' collection_overseerstatus()
+#' res <- collection_overseerstatus()
+#' res$responseHeader
+#' res$leader
+#' res$overseer_queue_size
+#' res$overseer_work_queue_size
+#' res$overseer_operations
+#' res$collection_operations
+#' res$overseer_queue
+#' res$overseer_internal_queue
+#' res$collection_queue
+#' }
+collection_overseerstatus <- function(raw = FALSE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'OVERSEERSTATUS', wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/collection_rebalanceleaders.R b/R/collection_rebalanceleaders.R
new file mode 100644
index 0000000..a6cdc72
--- /dev/null
+++ b/R/collection_rebalanceleaders.R
@@ -0,0 +1,46 @@
+#' @title Rebalance leaders
+#'
+#' @description Reassign leaders in a collection according to the preferredLeader
+#' property across active nodes
+#'
+#' @export
+#' @param name (character) Required. The name of the collection to rebalance preferredLeaders on.
+#' @param maxAtOnce (integer) The maximum number of reassignments to have queued up at once.
+#' Values <= 0 use the default value Integer.MAX_VALUE. When this number is reached, the
+#' process waits for one or more leaders to be successfully assigned before adding more
+#' to the queue.
+#' @param maxWaitSeconds (integer) Timeout value when waiting for leaders to be reassigned.
+#' NOTE: if maxAtOnce is less than the number of reassignments that will take place,
+#' this is the maximum interval for any single wait for at least one reassignment.
+#' For example, if 10 reassignments are to take place and maxAtOnce is 1 and maxWaitSeconds
+#' is 60, the upper bound on the time that the command may wait is 10 minutes. Default: 60
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#'
+#' # create collection
+#' collection_create(name = "mycollection2") # bin/solr create -c mycollection2
+#'
+#' # balance preferredLeader property
+#' collection_balanceshardunique("mycollection2", property = "preferredLeader")
+#'
+#' # rebalance leaders across active nodes
+#' collection_rebalanceleaders("mycollection2")
+#'
+#' # examine cluster status
+#' collection_clusterstatus()$cluster$collections$mycollection2
+#' }
+collection_rebalanceleaders <- function(name, maxAtOnce = NULL, maxWaitSeconds = NULL,
+                                          raw = FALSE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'REBALANCELEADERS', collection = name, maxAtOnce = maxAtOnce,
+                  maxWaitSeconds = maxWaitSeconds, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/collection_reload.R b/R/collection_reload.R
new file mode 100644
index 0000000..7fe723d
--- /dev/null
+++ b/R/collection_reload.R
@@ -0,0 +1,22 @@
+#' Reload a collection
+#'
+#' @export
+#' @param name The name of the collection to reload. Required
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#' collection_create(name = "helloWorld")
+#' collection_reload(name = "helloWorld")
+#' }
+collection_reload <- function(name, raw = FALSE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'RELOAD', name = name, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/collection_removerole.R b/R/collection_removerole.R
new file mode 100644
index 0000000..f54ef2f
--- /dev/null
+++ b/R/collection_removerole.R
@@ -0,0 +1,30 @@
+#' @title Remove a role from a node
+#'
+#' @description Remove an assigned role. This API is used to undo the roles
+#' assigned using \code{\link{collection_addrole}}
+#'
+#' @export
+#' @param role (character) Required. The name of the role. The only supported role
+#' as of now is overseer (set as default).
+#' @param node (character) Required. The name of the node.
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#'
+#' # get list of nodes
+#' nodes <- collection_clusterstatus()$cluster$live_nodes
+#' collection_addrole(node = nodes[1])
+#' collection_removerole(node = nodes[1])
+#' }
+collection_removerole <- function(role = "overseer", node, raw = FALSE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'REMOVEROLE', role = role, node = node, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/collection_requeststatus.R b/R/collection_requeststatus.R
new file mode 100644
index 0000000..c6e646a
--- /dev/null
+++ b/R/collection_requeststatus.R
@@ -0,0 +1,34 @@
+#' @title Get request status
+#'
+#' @description Request the status of an already submitted Asynchronous Collection
+#' API call. This call is also used to clear up the stored statuses.
+#'
+#' @export
+#' @param requestid (character) Required. The user defined request-id for the request.
+#' This can be used to track the status of the submitted asynchronous task. \code{-1}
+#' is a special request id which is used to cleanup the stored states for all of the
+#' already completed/failed tasks.
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#'
+#' # invalid requestid
+#' collection_requeststatus(requestid = "xxx")
+#'
+#' # valid requestid (use the request id returned from an asynchronous call)
+#' res <- collection_requeststatus(requestid = "xxx")
+#' res$responseHeader
+#' res$xxx
+#' }
+collection_requeststatus <- function(requestid, raw = FALSE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'REQUESTSTATUS', requestid = requestid, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/collection_splitshard.R b/R/collection_splitshard.R
new file mode 100644
index 0000000..2354c4a
--- /dev/null
+++ b/R/collection_splitshard.R
@@ -0,0 +1,37 @@
+#' Split a shard
+#'
+#' @export
+#' @param name (character) Required. The name of the collection that includes the shard
+#' to be split
+#' @param shard (character) Required. The name of the shard to be split
+#' @param ranges (character) A comma-separated list of hash ranges in hexadecimal
+#' e.g. ranges=0-1f4,1f5-3e8,3e9-5dc
+#' @param split.key (character) The key to use for splitting the index
+#' @param async	(character) Request ID to track this action which will be processed
+#' asynchronously
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#' # create collection
+#' collection_create(name = "trees")
+#' # find shard names
+#' names(collection_clusterstatus()$cluster$collections$trees$shards)
+#' # split a shard by name
+#' collection_splitshard(name = "trees", shard = "shard1")
+#' # now we have three shards
+#' names(collection_clusterstatus()$cluster$collections$trees$shards)
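+#'
+#' # split by explicit hash ranges or by a routing key (sketches; values assumed)
+#' # collection_splitshard(name = "trees", shard = "shard2", ranges = "0-1f4,1f5-3e8")
+#' # collection_splitshard(name = "trees", shard = "shard2", split.key = "A!")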
+#' }
+collection_splitshard <- function(name, shard, ranges = NULL, split.key = NULL,
+                                  async = NULL, raw = FALSE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'SPLITSHARD', collection = name, shard = shard,
+                  ranges = do_ranges(ranges), split.key = split.key, async = async, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/collections'), args, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/collections.R b/R/collections.R
new file mode 100644
index 0000000..0d5734d
--- /dev/null
+++ b/R/collections.R
@@ -0,0 +1,33 @@
+#' List collections or cores
+#' 
+#' @name collections
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @details Calls \code{\link{collection_list}} or \code{\link{core_status}} internally, 
+#' and parses out names for you.
+#' @return A character vector
+#' @examples \dontrun{
+#' # connect
+#' solr_connect(verbose = FALSE)
+#' 
+#' # list collections
+#' collections()
+#' 
+#' # list cores
+#' cores()
+#' 
+#' # curl options
+#' library("httr")
+#' collections(config = verbose())
+#' }
+
+#' @export
+#' @rdname collections
+collections <- function(...) {
+  collection_list(...)$collections
+}
+
+#' @export
+#' @rdname collections
+cores <- function(...) {
+  names(core_status(...)$status)
+}
diff --git a/R/commit.R b/R/commit.R
new file mode 100644
index 0000000..e057203
--- /dev/null
+++ b/R/commit.R
@@ -0,0 +1,41 @@
+#' Commit
+#'
+#' @export
+#' @param name (character) A collection or core name. Required.
+#' @param expunge_deletes merge segments with deletes away. Default: \code{FALSE}
+#' @param wait_searcher block until a new searcher is opened and registered as the
+#' main query searcher, making the changes visible. Default: \code{TRUE}
+#' @param soft_commit  perform a soft commit - this will refresh the 'view' of the
+#' index in a more performant manner, but without "on-disk" guarantees.
+#' Default: \code{FALSE}
+#' @param wt (character) One of json (default) or xml. If json, uses
+#' \code{\link[jsonlite]{fromJSON}} to parse. If xml, uses \code{\link[xml2]{read_xml}} to
+#' parse
+#' @param raw (logical) If \code{TRUE}, returns raw data in format specified by
+#' \code{wt} param
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#'
+#' commit("gettingstarted")
+#' commit("gettingstarted", wait_searcher = FALSE)
+#'
+#' # get xml back
+#' commit("gettingstarted", wt = "xml")
+#' ## raw xml
+#' commit("gettingstarted", wt = "xml", raw = TRUE)
+#' }
+commit <- function(name, expunge_deletes = FALSE, wait_searcher = TRUE, soft_commit = FALSE,
+                   wt = 'json', raw = FALSE, ...) {
+
+  conn <- solr_settings()
+  check_conn(conn)
+  obj_proc(file.path(conn$url, sprintf('solr/%s/update', name)),
+           body = list(commit =
+                         list(expungeDeletes = asl(expunge_deletes),
+                              waitSearcher = asl(wait_searcher),
+                              softCommit = asl(soft_commit))),
+           args = list(wt = wt),
+           raw = raw,
+           conn$proxy, ...)
+}
diff --git a/R/config_get.R b/R/config_get.R
new file mode 100644
index 0000000..14037de
--- /dev/null
+++ b/R/config_get.R
@@ -0,0 +1,80 @@
+#' Get Solr configuration details
+#'
+#' @export
+#'
+#' @param name (character) Required. The name of the core.
+#' @param what (character) What you want to look at. One of solrconfig or
+#' schema. Default: solrconfig
+#' @param wt (character) One of json (default) or xml. Data type returned.
+#' If json, uses \code{\link[jsonlite]{fromJSON}} to parse. If xml, uses
+#' \code{\link[xml2]{read_xml}} to parse.
+#' @param raw (logical) If \code{TRUE}, returns raw data in format specified by
+#' \code{wt}
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @return A list, \code{xml_document}, or character
+#' @details Note that if \code{raw=TRUE}, \code{what} is ignored. That is,
+#' you get all the data when \code{raw=TRUE}.
+#' @examples \dontrun{
+#' # start Solr in Cloud mode, e.g.: bin/solr -e cloud
+#' # you can create a new core like: bin/solr create -c corename
+#' # where <corename> is the name for your core - or create as below
+#'
+#' # connect
+#' solr_connect()
+#'
+#' # all config settings
+#' config_get("gettingstarted")
+#'
+#' # just znodeVersion
+#' config_get("gettingstarted", "znodeVersion")
+#'
+#' # just luceneMatchVersion
+#' config_get("gettingstarted", "luceneMatchVersion")
+#'
+#' # just updateHandler
+#' config_get("gettingstarted", "updateHandler")
+#'
+#' # just requestHandler
+#' config_get("gettingstarted", "requestHandler")
+#'
+#' ## Get XML
+#' config_get("gettingstarted", wt = "xml")
+#' config_get("gettingstarted", "updateHandler", wt = "xml")
+#' config_get("gettingstarted", "requestHandler", wt = "xml")
+#'
+#' ## Raw data - what param ignored when raw=TRUE
+#' config_get("gettingstarted", raw = TRUE)
+#' }
+config_get <- function(name, what = NULL, wt = "json", raw = FALSE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(wt = wt))
+  res <- solr_GET(file.path(conn$url, sprintf('solr/%s/config', name)), args, conn$proxy, ...)
+  config_parse(res, what, wt, raw)
+}
+
+config_parse <- function(x, what = NULL, wt, raw) {
+  if (raw) {
+    return(x)
+  } else {
+    switch(
+      wt,
+      json = {
+        tt <- jsonlite::fromJSON(x)
+        if (is.null(what)) {
+          tt
+        } else {
+          tt$config[what]
+        }
+      },
+      xml = {
+        tt <- xml2::read_xml(x)
+        if (is.null(what)) {
+          tt
+        } else {
+          xml2::xml_find_all(tt, sprintf('//lst[@name="%s"]', what))
+        }
+      }
+    )
+  }
+}
diff --git a/R/config_overlay.R b/R/config_overlay.R
new file mode 100644
index 0000000..e7ef37a
--- /dev/null
+++ b/R/config_overlay.R
@@ -0,0 +1,30 @@
+#' Get Solr configuration overlay
+#' 
+#' @export
+#' 
+#' @param name (character) Required. The name of the core.
+#' @param omitHeader (logical) If \code{TRUE}, omit header. Default: \code{FALSE}
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @return A list with response from server
+#' @examples \dontrun{
+#' # start Solr in Cloud mode, e.g.: bin/solr -e cloud
+#' # you can create a new core like: bin/solr create -c corename
+#' # where <corename> is the name for your core - or create as below
+#' 
+#' # connect
+#' solr_connect()
+#' 
+#' # get config overlay
+#' config_overlay("gettingstarted")
+#' 
+#' # without header
+#' config_overlay("gettingstarted", omitHeader = TRUE)
+#' }
+config_overlay <- function(name, omitHeader = FALSE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  url <- file.path(conn$url, sprintf('solr/%s/config/overlay', name))
+  args <- sc(list(wt = "json", omitHeader = asl(omitHeader)))
+  res <- solr_GET(url, args, conn$proxy, ...)
+  jsonlite::fromJSON(res)
+}
diff --git a/R/config_params.R b/R/config_params.R
new file mode 100644
index 0000000..76ca0ac
--- /dev/null
+++ b/R/config_params.R
@@ -0,0 +1,68 @@
+#' Set Solr configuration params
+#' 
+#' @export
+#' 
+#' @param name (character) Required. The name of the core.
+#' @param param (character) Name of a parameter
+#' @param set (list) List of key:value pairs of what to set. Create or overwrite 
+#' a parameter set map. Default: NULL (nothing passed)
+#' @param unset (list) One or more character strings of keys to unset. Default: NULL 
+#' (nothing passed)
+#' @param update (list) List of key:value pairs of what to update. Updates a parameter 
+#' set map. This essentially overwrites the old parameter set, so all parameters must 
+#' be sent in each update request.
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @return A list with response from server
+#' @details The Request Parameters API allows creating parameter sets that can 
+#' override or take the place of parameters defined in solrconfig.xml. It is 
+#' really another endpoint of the Config API instead of a separate API, and 
+#' has distinct commands. It does not replace or modify any sections of 
+#' solrconfig.xml, but instead provides another approach to handling parameters 
+#' used in requests. It behaves in the same way as the Config API, by storing 
+#' parameters in another file that will be used at runtime. In this case, 
+#' the parameters are stored in a file named params.json. This file is kept in 
+#' ZooKeeper or in the conf directory of a standalone Solr instance.
+#' @examples \dontrun{
+#' # start Solr in standard or Cloud mode
+#' # connect
+#' solr_connect()
+#' 
+#' # set a parameter set
+#' myFacets <- list(myFacets = list(facet = TRUE, facet.limit = 5))
+#' config_params("gettingstarted", set = myFacets)
+#' 
+#' # check a parameter
+#' config_params("gettingstarted", param = "myFacets")
+#' 
+#' # see all params
+#' config_params("gettingstarted")
+#' }
+config_params <- function(name, param = NULL, set = NULL, 
+                          unset = NULL, update = NULL, ...) {
+  
+  conn <- solr_settings()
+  check_conn(conn)
+  if (all(vapply(list(set, unset, update), is.null, logical(1)))) {
+    if (is.null(param)) {
+      url <- file.path(conn$url, sprintf('solr/%s/config/params', name))
+    } else {
+      url <- file.path(conn$url, sprintf('solr/%s/config/params/%s', name, param))
+    }
+    res <- solr_GET(url, list(wt = "json"), conn$proxy, ...)
+  } else {
+    url <- file.path(conn$url, sprintf('solr/%s/config/params', name))
+    body <- sc(c(name_by(unbox_if(set, TRUE), "set"), 
+                 name_by(unbox_if(unset, TRUE), "unset"),
+                 name_by(unbox_if(update, TRUE), "update")))
+    res <- solr_POST_body(url, body, list(wt = "json"), conn$proxy, ...)
+  }
+  jsonlite::fromJSON(res)
+}
+
+name_by <- function(x, y) {
+  if (is.null(x)) {
+    NULL
+  } else {
+    stats::setNames(list(y = x), y)
+  }
+}
diff --git a/R/config_set.R b/R/config_set.R
new file mode 100644
index 0000000..1d009d9
--- /dev/null
+++ b/R/config_set.R
@@ -0,0 +1,44 @@
+#' Set Solr configuration details
+#' 
+#' @export
+#' 
+#' @param name (character) Required. The name of the core.
+#' @param set (list) List of key:value pairs of what to set. Default: NULL 
+#' (nothing passed)
+#' @param unset (list) One or more character strings of keys to unset. Default: NULL 
+#' (nothing passed)
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @return A list with response from server
+#' @examples \dontrun{
+#' # start Solr in Cloud mode, e.g.: bin/solr -e cloud
+#' # you can create a new core like: bin/solr create -c corename
+#' # where <corename> is the name for your core - or create as below
+#' 
+#' # connect
+#' solr_connect()
+#' 
+#' # set a property
+#' config_set("gettingstarted", set = list(query.filterCache.autowarmCount = 1000))
+#' 
+#' # unset a property
+#' config_set("gettingstarted", unset = "query.filterCache.size", config = verbose())
+#' 
+#' # unset another property
+#' config_set("gettingstarted", unset = "enableLazyFieldLoading")
+#' 
+#' # many properties
+#' config_set("gettingstarted", set = list(
+#'    query.filterCache.autowarmCount = 1000,
+#'    query.commitWithin.softCommit = 'false'
+#'  )
+#' )
+#' }
+config_set <- function(name, set = NULL, unset = NULL, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  url <- file.path(conn$url, sprintf('solr/%s/config', name))
+  body <- sc(list(`set-property` = unbox_if(set), 
+                  `unset-property` = unset))
+  res <- solr_POST_body(url, body, list(wt = "json"), conn$proxy, ...)
+  jsonlite::fromJSON(res)
+}
diff --git a/R/connect.R b/R/connect.R
new file mode 100644
index 0000000..7c60a6c
--- /dev/null
+++ b/R/connect.R
@@ -0,0 +1,164 @@
+#' @title Solr connection 
+#' 
+#' @description Set Solr options, including base URL, proxy, and errors
+#' 
+#' @export
+#' @param url Base URL for the Solr instance. For a local instance, this is likely
+#' \code{http://localhost:8983} (also the default), possibly with a different port if
+#' you configured one.
+#' @param proxy List of arguments for a proxy connection, including one or more of:
+#' url, port, username, password, and auth. See \code{\link[httr]{use_proxy}} for 
+#' help, which is used to construct the proxy connection.
+#' @param errors (character) One of simple or complete. simple gives the HTTP code and
+#' error message on an error, while complete additionally gives the stack trace,
+#' if available.
+#' @param verbose (logical) Whether to print help messages or not. E.g., if 
+#' \code{TRUE}, we print the URL on each request to a Solr server for your 
+#' reference. Default: \code{TRUE}
+#' @details This function sets environment variables that we use internally
+#' within functions in this package to determine the right thing to do given your
+#' inputs. 
+#' 
+#' In addition, \code{solr_connect} does a quick \code{GET} request to the URL you 
+#' provide to make sure the service is up.
+#' @examples \dontrun{
+#' # set solr settings
+#' solr_connect()
+#' 
+#' # set solr settings with a proxy
+#' prox <- list(url = "187.62.207.130", port = 3128)
+#' solr_connect(url = "http://localhost:8983", proxy = prox)
+#' 
+#' # get solr settings
+#' solr_settings()
+#' 
+#' # you can also check your settings via Sys.getenv()
+#' Sys.getenv("SOLR_URL")
+#' Sys.getenv("SOLR_ERRORS")
+#' }
+solr_connect <- function(url = "http://localhost:8983", proxy = NULL, 
+                         errors = "simple", verbose = TRUE) {
+  # checks
+  url <- checkurl(url)
+  errors <- match.arg(errors, c('simple', 'complete'))
+  check_proxy_args(proxy)
+  
+  # set
+  Sys.setenv("SOLR_URL" = url)
+  Sys.setenv("SOLR_ERRORS" = errors)
+  Sys.setenv("SOLR_VERBOSITY" = verbose)
+  options(solr_proxy = proxy)
+  
+  # ping server
+  res <- tryCatch(GET(Sys.getenv("SOLR_URL")), error = function(e) e)
+  if (inherits(res, "error")) {
+    stop(sprintf("\n  Failed to connect to %s\n  Remember to start Solr before connecting",
+                 url), call. = FALSE)
+  }
+  
+  structure(list(url = Sys.getenv("SOLR_URL"), 
+                 proxy = make_proxy(proxy), 
+                 errors = Sys.getenv("SOLR_ERRORS"), 
+                 verbose = Sys.getenv("SOLR_VERBOSITY")), 
+            class = "solr_connection")
+}
+
+#' @export
+#' @rdname solr_connect
+solr_settings <- function() {
+  url <- Sys.getenv("SOLR_URL")
+  err <- Sys.getenv("SOLR_ERRORS")
+  verbose <- Sys.getenv("SOLR_VERBOSITY")
+  proxy <- getOption("solr_proxy")
+  structure(list(url = url, proxy = make_proxy(proxy), errors = err, verbose = verbose), class = "solr_connection")
+}
+
+#' @export
+print.solr_connection <- function(x, ...) {
+  cat("<solr_connection>", sep = "\n")
+  cat(paste0("  url:    ", x$url), sep = "\n")
+  cat(paste0("  errors: ", x$errors), sep = "\n")
+  cat(paste0("  verbose: ", x$verbose), sep = "\n")
+  cat("  proxy:", sep = "\n")
+  if (!is.null(x$proxy)) {
+    cat(paste0("      url:     ", x$proxy$options$proxy), sep = "\n")
+    cat(paste0("      port:    ", x$proxy$options$proxyport))
+  }
+}
+
+# cat_proxy <- function(x) {
+#   if (is.null(x)) {
+#     ''
+#   } else {
+#     x$options$proxy
+#   }
+# }
+
+check_proxy_args <- function(x) {
+  if (!all(names(x) %in% c('url', 'port', 'username', 'password', 'auth'))) {
+    stop("Input to proxy can only contain: url, port, username, password, auth", 
+         call. = FALSE)
+  }
+}
+
+make_proxy <- function(args) {
+  if (is.null(args)) {
+    NULL
+  } else {
+    httr::use_proxy(url = args$url, port = args$port, 
+                    username = args$username, password = args$password, 
+                    auth = args$auth)
+  }
+}
+
+is_url <- function(x){
+  grepl("https?://", x, ignore.case = TRUE) || grepl("localhost:[0-9]{4}", x, ignore.case = TRUE)
+}
+
+checkurl <- function(x){
+  if (!is_url(x)) {
+    stop("That does not appear to be a url", call. = FALSE)
+  } else {
+    if (grepl("https?", x)) {
+      x
+    } else {
+      paste0("http://", x)
+    }
+  }
+}
+
+# ### R6 version
+# library("R6")
+# library("httr")
+# 
+# solr_connect <- function(url, proxy = NULL) {
+#   .solr_connection$new(url, proxy)
+# }
+# 
+# .solr_connection <-
+#   R6::R6Class("solr_connection",
+#     public = list(
+#       url = "http://localhost:8983",
+#       proxy = NULL,
+#       initialize = function(url, proxy) {
+#         if (!missing(url)) self$url <- url
+#         if (!missing(proxy)) self$proxy <- proxy
+#       },
+#       status = function(...) {
+#         httr::http_status(httr::HEAD(self$url, ...))$message
+#       }
+#     ),
+#     cloneable = FALSE
+# )
+# 
+# conn <- solr_connect("http://scottchamberlain.info/")
+# # conn <- solr_connect$new(url = "http://localhost:8983")
+# # conn <- solr_connect$new(url = 'http://api.plos.org/search')
+# # conn <- solr_connect$new(proxy = use_proxy("64.251.21.73", 8080))
+# conn
+# conn$url
+# conn$proxy
+# conn$status()
+# conn$status(config = verbose())
+# conn$ping()
diff --git a/R/core_create.R b/R/core_create.R
new file mode 100644
index 0000000..c068cad
--- /dev/null
+++ b/R/core_create.R
@@ -0,0 +1,59 @@
+#' Create a core
+#'
+#' @export
+#'
+#' @param name (character) The name of the core to be created. Required
+#' @param instanceDir (character) Path to instance directory
+#' @param config (character) Path to config file
+#' @param schema (character) Path to schema file
+#' @param dataDir (character) Name of the data directory relative to instanceDir.
+#' @param configSet (character) Name of the configset to use for this core. For more
+#' information, see https://cwiki.apache.org/confluence/display/solr/Config+Sets
+#' @param collection (character) The name of the collection to which this core belongs.
+#' The default is the name of the core. collection.<param>=<value> causes a property of
+#' <param>=<value> to be set if a new collection is being created. Use
+#' collection.configName=<configname> to point to the configuration for a new collection.
+#' @param shard (character) The shard id this core represents. Normally you want to be
+#' auto-assigned a shard id.
+#' @param async	(character) Request ID to track this action which will be
+#' processed asynchronously
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param callopts curl options passed on to \code{\link[httr]{GET}}
+#' @param ... You can pass in parameters like \code{property.name=value}	to set
+#' core property name to value. See the section Defining core.properties for details on
+#' supported properties and values.
+#' (https://cwiki.apache.org/confluence/display/solr/Defining+core.properties)
+#' @examples \dontrun{
+#' # start Solr with Schemaless mode via the schemaless eg: bin/solr start -e schemaless
+#' # you can create a new core like: bin/solr create -c corename
+#' # where <corename> is the name for your core - or create as below
+#'
+#' # connect
+#' solr_connect()
+#'
+#' # Create a core
+#' path <- "~/solr-5.4.1/server/solr/newcore/conf"
+#' dir.create(path, recursive = TRUE)
+#' files <- list.files("~/solr-5.4.1/server/solr/configsets/data_driven_schema_configs/conf/",
+#' full.names = TRUE)
+#' file.copy(files, path, recursive = TRUE)
+#' core_create(name = "newcore", instanceDir = "newcore", configSet = "basic_configs")
+#' }
+core_create <- function(name, instanceDir = NULL, config = NULL, schema = NULL, dataDir = NULL,
+                        configSet = NULL, collection = NULL, shard = NULL, async = NULL,
+                        raw = FALSE, callopts=list(), ...) {
+
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'CREATE', name = name, instanceDir = instanceDir,
+                  config = config, schema = schema, dataDir = dataDir,
+                  configSet = configSet, collection = collection, shard = shard,
+                  async = async, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/cores'), args, callopts, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
+
diff --git a/R/core_exists.R b/R/core_exists.R
new file mode 100644
index 0000000..b91f11b
--- /dev/null
+++ b/R/core_exists.R
@@ -0,0 +1,30 @@
+#' Check if a core exists
+#' 
+#' @export
+#' 
+#' @param name (character) Required. The name of the core to check.
+#' @param callopts curl options passed on to \code{\link[httr]{GET}}
+#' @details Simply calls \code{\link{core_status}} internally
+#' @return A single boolean, \code{TRUE} or \code{FALSE}
+#' @examples \dontrun{
+#' # start Solr with Schemaless mode via the schemaless eg: bin/solr start -e schemaless
+#' # you can create a new core like: bin/solr create -c corename
+#' # where <corename> is the name for your core - or create as below
+#' 
+#' # connect
+#' solr_connect()
+#' 
+#' # exists
+#' core_exists("gettingstarted")
+#' 
+#' # doesn't exist
+#' core_exists("hhhhhh")
+#' }
+core_exists <- function(name, callopts=list()) {
+  tmp <- suppressMessages(core_status(name = name, callopts = callopts))
+  if (length(tmp$status[[1]]) > 0) {
+    TRUE 
+  } else {
+    FALSE
+  }
+}
diff --git a/R/core_mergeindexes.R b/R/core_mergeindexes.R
new file mode 100644
index 0000000..0d082c6
--- /dev/null
+++ b/R/core_mergeindexes.R
@@ -0,0 +1,46 @@
+#' @title Merge indexes (cores)
+#'
+#' @description Merges one or more indexes to another index. The indexes must
+#' have completed commits, and should be locked against writes until the merge
+#' is complete or the resulting merged index may become corrupted. The target
+#' core index must already exist and have a compatible schema with the one or
+#' more indexes that will be merged to it.
+#'
+#' @export
+#'
+#' @param name The name of the target core/index. Required
+#' @param indexDir (character)	Multi-valued, directories that would be merged.
+#' @param srcCore	(character)	Multi-valued, source cores that would be merged.
+#' @param async	(character) Request ID to track this action which will be processed
+#' asynchronously
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param callopts curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' # start Solr with Schemaless mode via the schemaless eg: bin/solr start -e schemaless
+#'
+#' # connect
+#' solr_connect()
+#'
+#' ## FIXME: not tested yet
+#'
+#' # use indexDir parameter
+#' core_mergeindexes(name = "new_core_name", indexDir = c("/solr_home/core1/data/index",
+#'    "/solr_home/core2/data/index"))
+#'
+#' # use srcCore parameter
+#' core_mergeindexes(name = "new_core_name", srcCore = c('core1', 'core2'))
+#' }
+core_mergeindexes <- function(name, indexDir = NULL, srcCore = NULL, async = NULL,
+                        raw = FALSE, callopts = list()) {
+
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'MERGEINDEXES', core = name, indexDir = indexDir,
+                  srcCore = srcCore, async = async, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/cores'), args, callopts, conn$proxy)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/core_reload.R b/R/core_reload.R
new file mode 100644
index 0000000..0589756
--- /dev/null
+++ b/R/core_reload.R
@@ -0,0 +1,31 @@
+#' Reload a core
+#'
+#' @export
+#'
+#' @param name (character) The name of the core. Required
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param callopts curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' # start Solr with Schemaless mode via the schemaless eg: bin/solr start -e schemaless
+#' # you can create a new core like: bin/solr create -c corename
+#' # where <corename> is the name for your core - or create as below
+#'
+#' # connect
+#' solr_connect()
+#'
+#' # Status of particular cores
+#' core_reload("gettingstarted")
+#' core_status("gettingstarted")
+#' }
+core_reload <- function(name, raw = FALSE, callopts=list()) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'RELOAD', core = name, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/cores'), args, callopts, conn$proxy)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
+
diff --git a/R/core_rename.R b/R/core_rename.R
new file mode 100644
index 0000000..e35a669
--- /dev/null
+++ b/R/core_rename.R
@@ -0,0 +1,36 @@
+#' Rename a core
+#'
+#' @export
+#'
+#' @param name (character) The name of the core to be renamed. Required
+#' @param other (character) The new name of the core. Required.
+#' @param async	(character) Request ID to track this action which will be processed
+#' asynchronously
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param callopts curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' # start Solr with Schemaless mode via the schemaless eg: bin/solr start -e schemaless
+#' # you can create a new core like: bin/solr create -c corename
+#' # where <corename> is the name for your core - or create as below
+#'
+#' # connect
+#' solr_connect()
+#'
+#' # Status of particular cores
+#' core_create("testcore") # or create in the CLI: bin/solr create -c testcore
+#' core_rename("testcore", "newtestcore")
+#' core_status("testcore") # core missing
+#' core_status("newtestcore", FALSE) # not missing
+#' }
+core_rename <- function(name, other, async = NULL, raw = FALSE, callopts=list()) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'RENAME', core = name, other = other, async = async, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/cores'), args, callopts, conn$proxy)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
+
diff --git a/R/core_requeststatus.R b/R/core_requeststatus.R
new file mode 100644
index 0000000..d4a122c
--- /dev/null
+++ b/R/core_requeststatus.R
@@ -0,0 +1,25 @@
+#' Request status of asynchronous CoreAdmin API call
+#'
+#' @export
+#'
+#' @param requestid (character) The request ID of the asynchronous CoreAdmin API call to check. Required
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param callopts curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' # start Solr with Schemaless mode via the schemaless eg: bin/solr start -e schemaless
+#'
+#' # FIXME: not tested yet...
+#' # solr_connect()
+#' # core_requeststatus(requestid = 1)
+#' }
+core_requeststatus <- function(requestid, raw = FALSE, callopts = list()) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'REQUESTSTATUS', requestid = requestid, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/cores'), args, callopts, conn$proxy)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/core_split.R b/R/core_split.R
new file mode 100644
index 0000000..ae2a5ad
--- /dev/null
+++ b/R/core_split.R
@@ -0,0 +1,120 @@
+#' @title Split a core
+#'
+#' @description SPLIT splits an index into two or more indexes. The index being
+#' split can continue to handle requests. The split pieces can be placed into
+#' a specified directory on the server's filesystem or it can be merged into
+#' running Solr cores.
+#'
+#' @export
+#'
+#' @param name (character) The name of the core to be split. Required
+#' @param path (character) Two or more target directory paths in which a piece of the
+#' index will be written
+#' @param targetCore (character) Two or more target Solr cores to which a piece
+#' of the index will be merged
+#' @param ranges (character) A list of number ranges, or hash ranges in hexadecimal format.
+#' If numbers, they get converted to hexadecimal format before being passed to
+#' your Solr server.
+#' @param split.key (character) The key to be used for splitting the index
+#' @param async	(character) Request ID to track this action which will be processed
+#' asynchronously
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param callopts curl options passed on to \code{\link[httr]{GET}}
+#' @details The core index will be split into as many pieces as the number of \code{path}
+#' or \code{targetCore} parameters.
+#'
+#' Either \code{path} or \code{targetCore} parameter must be specified but not
+#' both. The \code{ranges} and \code{split.key} parameters are optional and only one of
+#' the two should be specified, if at all required.
+#' @examples \dontrun{
+#' # start Solr with Schemaless mode via the schemaless eg: bin/solr start -e schemaless
+#' # you can create a new core like: bin/solr create -c corename
+#' # where <corename> is the name for your core - or create as below
+#'
+#' # connect
+#' solr_connect()
+#'
+#' # Split a core
+#' ## First, create three cores
+#' # core_create("splitcoretest0") # or create in the CLI: bin/solr create -c splitcoretest0
+#' # core_create("splitcoretest1") # or create in the CLI: bin/solr create -c splitcoretest1
+#' # core_create("splitcoretest2") # or create in the CLI: bin/solr create -c splitcoretest2
+#'
+#' ## check status
+#' core_status("splitcoretest0", FALSE)
+#' core_status("splitcoretest1", FALSE)
+#' core_status("splitcoretest2", FALSE)
+#'
+#' ## split core using targetCore parameter
+#' core_split("splitcoretest0", targetCore = c("splitcoretest1", "splitcoretest2"))
+#'
+#' ## split core using split.key parameter
+#' ### Here all documents having the same route key as the split.key i.e. 'A!'
+#' ### will be split from the core index and written to the targetCore
+#' core_split("splitcoretest0", targetCore = "splitcoretest1", split.key = "A!")
+#'
+#' ## split core using ranges parameter
+#' ### Solr expects hash ranges in hexadecimal, but since we're in R,
+#' ### let's not make our lives any harder, so you can pass in numbers
+#' ### but you can still pass in hexadecimal if you want.
+#' rgs <- c('0-1f4', '1f5-3e8')
+#' core_split("splitcoretest0", targetCore = c("splitcoretest1", "splitcoretest2"), ranges = rgs)
+#' rgs <- list(c(0, 500), c(501, 1000))
+#' core_split("splitcoretest0", targetCore = c("splitcoretest1", "splitcoretest2"), ranges = rgs)
+#' }
+core_split <- function(name, path = NULL, targetCore = NULL, ranges = NULL, split.key = NULL,
+                       async = NULL, raw = FALSE, callopts=list()) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'SPLIT', core = name, ranges = do_ranges(ranges),
+                  split.key = split.key, async = async, wt = 'json'))
+  args <- c(args, make_args(path), make_args(targetCore))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/cores'), args, callopts, conn$proxy)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
+
+make_args <- function(x) {
+  if (!is.null(x)) {
+    as.list(stats::setNames(x, rep(deparse(substitute(x)), length(x))))
+  } else {
+    NULL
+  }
+}
+
+do_ranges <- function(x) {
+  if (is.null(x)) {
+    NULL
+  } else {
+    make_hex(x)
+  }
+}
+
+make_hex <- function(x) {
+  if (inherits(x, "list")) {
+    clzz <- sapply(x, class)
+    if (clzz[1] == "character") {
+      paste0(x, collapse = ",")
+    } else {
+      zz <- lapply(x, function(z) {
+        tmp <- try_as_hex(z)
+        paste0(tmp, collapse = "-")
+      })
+      paste0(zz, collapse = ",")
+    }
+  } else {
+    clzz <- sapply(x, class)
+    if (clzz[1] == "character") {
+      paste0(x, collapse = ",")
+    } else {
+      paste0(try_as_hex(x), collapse = ",")
+    }
+  }
+}
+
+try_as_hex <- function(x) {
+  tryCatch(as.hexmode(x), error = function(e) e)
+}
diff --git a/R/core_status.R b/R/core_status.R
new file mode 100644
index 0000000..22f15c8
--- /dev/null
+++ b/R/core_status.R
@@ -0,0 +1,39 @@
+#' Get core status
+#'
+#' @export
+#'
+#' @param name (character) The name of the core. If not given, all cores.
+#' @param indexInfo (logical) If \code{FALSE}, index information is not returned with the core status. Default: \code{TRUE}
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param callopts curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' # start Solr with Schemaless mode via the schemaless eg: bin/solr start -e schemaless
+#' # you can create a new core like: bin/solr create -c corename
+#' # where <corename> is the name for your core - or create as below
+#'
+#' # connect
+#' solr_connect()
+#'
+#' # Status of all cores
+#' core_status()
+#'
+#' # Status of particular cores
+#' core_status("gettingstarted")
+#'
+#' # Get index info or not
+#' ## Default: TRUE
+#' core_status("gettingstarted", indexInfo = TRUE)
+#' core_status("gettingstarted", indexInfo = FALSE)
+#' }
+core_status <- function(name = NULL, indexInfo = TRUE, raw = FALSE, callopts=list()) {
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'STATUS', core = name, indexInfo = asl(indexInfo), wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/cores'), args, callopts, conn$proxy)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
+
diff --git a/R/core_swap.R b/R/core_swap.R
new file mode 100644
index 0000000..b14f1e6
--- /dev/null
+++ b/R/core_swap.R
@@ -0,0 +1,54 @@
+#' @title Swap a core
+#'
+#' @description SWAP atomically swaps the names used to access two existing Solr cores.
+#' This can be used to swap new content into production. The prior core remains
+#' available and can be swapped back, if necessary. Each core will be known by
+#' the name of the other, after the swap
+#'
+#' @export
+#'
+#' @param name (character) The name of one of the cores to be swapped. Required
+#' @param other (character) The name of one of the cores to be swapped. Required.
+#' @param async	(character) Request ID to track this action which will be processed
+#' asynchronously
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param callopts curl options passed on to \code{\link[httr]{GET}}
+#' @details Do not use \code{core_swap} with a SolrCloud node. It is not supported and
+#' can result in the core being unusable. We'll try to stop you if you try.
+#' @examples \dontrun{
+#' # start Solr with Schemaless mode via the schemaless eg: bin/solr start -e schemaless
+#' # you can create a new core like: bin/solr create -c corename
+#' # where <corename> is the name for your core - or create as below
+#'
+#' # connect
+#' solr_connect()
+#'
+#' # Swap a core
+#' ## First, create two cores
+#' core_create("swapcoretest") # or create in the CLI: bin/solr create -c swapcoretest
+#' core_create("swapcoretest") # or create in the CLI: bin/solr create -c swapcoretest
+#'
+#' ## check status
+#' core_status("swapcoretest1", FALSE)
+#' core_status("swapcoretest2", FALSE)
+#'
+#' ## swap core
+#' core_swap("swapcoretest1", "swapcoretest2")
+#'
+#' ## check status again
+#' core_status("swapcoretest1", FALSE)
+#' core_status("swapcoretest2", FALSE)
+#' }
+core_swap <- function(name, other, async = NULL, raw = FALSE, callopts=list()) {
+  conn <- solr_settings()
+  check_conn(conn)
+  if (is_in_cloud_mode(conn)) stop("You are in SolrCloud mode, stopping", call. = FALSE)
+  args <- sc(list(action = 'SWAP', core = name, other = other, async = async, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/cores'), args, callopts, conn$proxy)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
+
diff --git a/R/core_unload.R b/R/core_unload.R
new file mode 100644
index 0000000..bbd5e54
--- /dev/null
+++ b/R/core_unload.R
@@ -0,0 +1,44 @@
+#' Unload (delete) a core
+#'
+#' @export
+#'
+#' @param name The name of the core to be unloaded. Required
+#' @param deleteIndex	(logical)	If \code{TRUE}, will remove the index when unloading
+#' the core. Default: \code{FALSE}
+#' @param deleteDataDir	(logical)	If \code{TRUE}, removes the data directory and all
+#' sub-directories. Default: \code{FALSE}
+#' @param deleteInstanceDir	(logical)	If \code{TRUE}, removes everything related to
+#' the core, including the index directory, configuration files and other related
+#' files. Default: \code{FALSE}
+#' @param async	(character) Request ID to track this action which will be processed
+#' asynchronously
+#' @param raw (logical) If \code{TRUE}, returns raw data
+#' @param callopts curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' # start Solr with Schemaless mode via the schemaless eg: bin/solr start -e schemaless
+#'
+#' # connect
+#' solr_connect()
+#'
+#' # Create a core
+#' core_create(name = "thingsstuff")
+#'
+#' # Unload a core
+#' core_unload(name = "fart")
+#' }
+core_unload <- function(name, deleteIndex = FALSE, deleteDataDir = FALSE,
+                        deleteInstanceDir = FALSE, async = NULL,
+                        raw = FALSE, callopts = list()) {
+
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(action = 'UNLOAD', core = name, deleteIndex = asl(deleteIndex),
+                  deleteDataDir = asl(deleteDataDir), deleteInstanceDir = asl(deleteInstanceDir),
+                  async = async, wt = 'json'))
+  res <- solr_GET(file.path(conn$url, 'solr/admin/cores'), args, callopts, conn$proxy)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
diff --git a/R/delete.R b/R/delete.R
new file mode 100644
index 0000000..cd32ed2
--- /dev/null
+++ b/R/delete.R
@@ -0,0 +1,59 @@
+#' Delete documents by ID or query
+#'
+#' @name delete
+#' @param ids Document IDs, one or more in a vector or list
+#' @param name (character) A collection or core name. Required.
+#' @param query Query to use to delete documents
+#' @param commit (logical) If \code{TRUE}, documents are immediately searchable.
+#' Default: \code{TRUE}
+#' @param commit_within (numeric) Milliseconds to commit the change, the document will be added
+#' within that time. Default: NULL
+#' @param overwrite (logical) Overwrite documents with matching keys. Default: \code{TRUE}
+#' @param boost (numeric) Boost factor. Default: NULL
+#' @param wt (character) One of json (default) or xml. If json, uses
+#' \code{\link[jsonlite]{fromJSON}} to parse. If xml, uses \code{\link[xml2]{read_xml}} to
+#' parse
+#' @param raw (logical) If \code{TRUE}, returns raw data in format specified by
+#' \code{wt} param
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @details We use json internally as data interchange format for this function.
+#' @examples \dontrun{
+#' solr_connect()
+#'
+#' # add some documents first
+#' ss <- list(list(id = 1, price = 100), list(id = 2, price = 500))
+#' add(ss, name = "gettingstarted")
+#'
+#' # Now, delete them
+#' # Delete by ID
+#' # delete_by_id(ids = 1)
+#' ## Many IDs
+#' # delete_by_id(ids = c(1, 2))
+#'
+#' # Delete by query
+#' # delete_by_query(query = "manu:bank")
+#' }
+
+#' @export
+#' @name delete
+delete_by_id <- function(ids, name, commit = TRUE, commit_within = NULL, overwrite = TRUE,
+                         boost = NULL, wt = 'json', raw = FALSE, ...) {
+
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(commit = asl(commit), wt = wt))
+  body <- list(delete = lapply(ids, function(z) list(id = z)))
+  obj_proc(file.path(conn$url, sprintf('solr/%s/update/json', name)), body, args, raw, conn$proxy, ...)
+}
+
+#' @export
+#' @name delete
+delete_by_query <- function(query, name, commit = TRUE, commit_within = NULL, overwrite = TRUE,
+                            boost = NULL, wt = 'json', raw = FALSE, ...) {
+
+  conn <- solr_settings()
+  check_conn(conn)
+  args <- sc(list(commit = asl(commit), wt = wt))
+  body <- list(delete = list(query = query))
+  obj_proc(file.path(conn$url, sprintf('solr/%s/update/json', name)), body, args, raw, conn$proxy, ...)
+}
diff --git a/R/optimize.R b/R/optimize.R
new file mode 100644
index 0000000..76aa74b
--- /dev/null
+++ b/R/optimize.R
@@ -0,0 +1,42 @@
+#' Optimize
+#'
+#' @export
+#' @param name (character) A collection or core name. Required.
+#' @param max_segments optimizes down to at most this number of segments. Default: 1
+#' @param wait_searcher block until a new searcher is opened and registered as the
+#' main query searcher, making the changes visible. Default: \code{TRUE}
+#' @param soft_commit  perform a soft commit - this will refresh the 'view' of the
+#' index in a more performant manner, but without "on-disk" guarantees.
+#' Default: \code{FALSE}
+#' @param wt (character) One of json (default) or xml. If json, uses
+#' \code{\link[jsonlite]{fromJSON}} to parse. If xml, uses \code{\link[xml2]{read_xml}} to
+#' parse
+#' @param raw (logical) If \code{TRUE}, returns raw data in format specified by
+#' \code{wt} param
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' solr_connect()
+#'
+#' optimize("gettingstarted")
+#' optimize("gettingstarted", max_segments = 2)
+#' optimize("gettingstarted", wait_searcher = FALSE)
+#'
+#' # get xml back
+#' optimize("gettingstarted", wt = "xml")
+#' ## raw xml
+#' optimize("gettingstarted", wt = "xml", raw = TRUE)
+#' }
+optimize <- function(name, max_segments = 1, wait_searcher = TRUE, soft_commit = FALSE,
+                     wt = 'json', raw = FALSE, ...) {
+
+  conn <- solr_settings()
+  check_conn(conn)
+  obj_proc(file.path(conn$url, sprintf('solr/%s/update', name)),
+           body = list(optimize =
+                         list(maxSegments = max_segments,
+                              waitSearcher = asl(wait_searcher),
+                              softCommit = asl(soft_commit))),
+           args = list(wt = wt),
+           raw = raw,
+           conn$proxy, ...)
+}
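
Likewise, a rough sketch of the optimize body built above. In the function the
logicals pass through asl() (defined elsewhere in the package) before
serialization, so the exact rendering of TRUE/FALSE may differ from this plain
jsonlite output:

  library(jsonlite)
  body <- list(optimize = list(maxSegments = 1,
                               waitSearcher = TRUE,
                               softCommit = FALSE))
  toJSON(body, auto_unbox = TRUE)
  #> {"optimize":{"maxSegments":1,"waitSearcher":true,"softCommit":false}}
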
diff --git a/R/parsers.R b/R/parsers.R
new file mode 100644
index 0000000..9181127
--- /dev/null
+++ b/R/parsers.R
@@ -0,0 +1,686 @@
+#' Parse raw data from solr_search, solr_facet, or solr_highlight.
+#'
+#' @param input Output from a solr_* function, e.g., \code{solr_facet}
+#' @param parsetype One of 'list' or 'df' (data.frame)
+#' @param concat Character to concatenate strings by, e.g., ',' (character). Used
+#' in solr_parse.sr_search only.
+#' @details This is the parser used internally in solr_facet, but if you output raw
+#' data from solr_facet using raw=TRUE, then you can use this function to parse that
+#' data (an sr_facet S3 object) after the fact to a list of data.frames for easier
+#' consumption. The data format type is detected from the attribute "wt" on the
+#' sr_facet object.
+#' @export
+solr_parse <- function(input, parsetype = NULL, concat) {
+  UseMethod("solr_parse")
+}
+
+#' @export
+solr_parse.default <- function(input, parsetype=NULL, concat=',') {
+  stop("no 'solr_parse' method for ", class(input), call. = FALSE)
+}
+
+#' @export
+solr_parse.ping <- function(input, parsetype=NULL, concat=',') {
+  wt <- attributes(input)$wt
+  parse_it(input, wt)
+}
+
+#' @export
+solr_parse.update <- function(input, parsetype=NULL, concat=',') {
+  wt <- attributes(input)$wt
+  switch(wt,
+         xml = xml2::read_xml(unclass(input)),
+         json = jsonlite::fromJSON(input, simplifyDataFrame = FALSE, simplifyMatrix = FALSE),
+         csv = jsonlite::fromJSON(input, simplifyDataFrame = FALSE, simplifyMatrix = FALSE)
+  )
+}
+
+#' @export
+solr_parse.sr_facet <- function(input, parsetype = NULL, concat = ',') {
+  if (inherits(unclass(input), "character")) input <- parse_ch(input, parsetype, concat)
+  wt <- attributes(input)$wt
+  
+  # Facet queries
+  if (wt == 'json') {
+    fqdat <- input$facet_counts$facet_queries
+    if (length(fqdat) == 0) {
+      fqout <- NULL
+    } else {
+      fqout <- data_frame(
+        term = names(fqdat),
+        value = do.call(c, fqdat)
+      )
+    }
+    row.names(fqout) <- NULL
+  } else {
+    nodes <- xml2::xml_find_all(input, '//lst[@name="facet_queries"]//int')
+    if (length(nodes) == 0) {
+      fqout <- NULL
+    } else {
+      fqout <- data_frame(
+        term = xml2::xml_attr(nodes, "name"),
+        value = xml2::xml_text(nodes)
+      )
+    }
+  }
+
+  # facet fields
+  if (wt == 'json') {
+    ffout <- lapply(input$facet_counts$facet_fields, function(x) {
+      stats::setNames(as_data_frame(do.call(rbind, lapply(seq(1, length(x), by = 2), function(y) {
+        x[c(y, y + 1)]
+      }))), c('term', 'value'))
+    })
+  } else {
+    nodes <- xml_find_all(input, '//lst[@name="facet_fields"]//lst')
+    ffout <- lapply(nodes, function(z) {
+      ch <- xml_children(z)
+      data_frame(term = vapply(ch, xml_attr, "", attr = "name"), value = vapply(ch, xml_text, ""))
+    })
+    names(ffout) <- xml_attr(nodes, "name")
+  }
+
+  # facet pivot
+  if (wt == 'json') {
+    fpout <- NULL
+    pivot_input <- jsonlite::fromJSON(jsonlite::toJSON(input))$facet_counts$facet_pivot[[1]]
+    if (length(pivot_input) != 0) {
+      fpout <- list()
+      pivots_left <- ('pivot' %in% names(pivot_input))
+      if (pivots_left) {
+        infinite_loop_check <- 1
+        while (pivots_left & infinite_loop_check < 100) {
+          stopifnot(is.data.frame(pivot_input))
+          flattened_result <- pivot_flatten_tabular(pivot_input)
+          fpout <- c(fpout, list(flattened_result$parent))
+          pivot_input <- flattened_result$flattened_pivot
+          pivots_left <- ('pivot' %in% names(pivot_input))
+          infinite_loop_check <- infinite_loop_check + 1
+        }
+        fpout <- c(fpout, list(flattened_result$flattened_pivot))
+      } else {
+        fpout <- c(fpout, list(pivot_input))
+      }
+      fpout <- lapply(fpout, collapse_pivot_names)
+      names(fpout) <- sapply(fpout, FUN = function(x) {
+        paste(head(names(x), -1), collapse = ",")
+      })
+    }
+  } else {
+    message('facet.pivot results are not supported with XML response types, use wt="json"')
+    fpout <- NULL
+  }
+
+  # Facet dates
+  if (wt == 'json') {
+    datesout <- NULL
+    if (length(input$facet_counts$facet_dates) != 0) {
+      datesout <- lapply(input$facet_counts$facet_dates, function(x) {
+        x <- x[!names(x) %in% c('gap','start','end')]
+        data_frame(date = names(x), value = do.call(c, x))
+      })
+    }
+  } else {
+    nodes <- xml_find_all(input, '//lst[@name="facet_dates"]')[[1]]
+    if (length(nodes) != 0) {
+      datesout <- stats::setNames(lapply(xml_children(nodes), function(z) {
+        z <- xml_find_all(z, 'int')
+        data_frame(
+          date = xml2::xml_attr(z, "name"),
+          value = xml2::xml_text(z)
+        )
+      }), xml_attr(xml_children(nodes), "name"))
+    }
+  }
+
+  # Facet ranges
+  rangesout <- NULL
+  if (wt == 'json') {
+    if (length(input$facet_counts$facet_ranges) != 0) {
+      rangesout <- lapply(input$facet_counts$facet_ranges, function(x){
+        x <- x[!names(x) %in% c('gap','start','end')]$counts
+        stats::setNames(as_data_frame(do.call(rbind, lapply(seq(1, length(x), by = 2), function(y){
+          x[c(y, y + 1)]
+        }))), c('term', 'value'))
+      })
+    }
+  } else {
+    nodes <- xml_find_all(input, '//lst[@name="facet_ranges"]//lst[not(@name="counts")]')
+    if (length(nodes) != 0) {
+      rangesout <- stats::setNames(lapply(nodes, function(z) {
+        z <- xml_children(xml_find_first(z, 'lst[@name="counts"]'))
+        data_frame(
+          term = xml2::xml_attr(z, "name"),
+          value = xml2::xml_text(z)
+        )
+      }), xml_attr(nodes, "name"))
+    }
+  }
+
+  # output
+  res <- list(facet_queries = replacelen0(fqout),
+              facet_fields = replacelen0(ffout),
+              facet_pivot = replacelen0(fpout),
+              facet_dates = replacelen0(datesout),
+              facet_ranges = replacelen0(rangesout))
+  res <- if (length(sc(res)) == 0) NULL else res
+  return( res )
+}
+
+#' @export
+#' @rdname solr_parse
+solr_parse.sr_high <- function(input, parsetype='list', concat=',') {
+  if (inherits(unclass(input), "character")) input <- parse_ch(input, parsetype, concat)
+  wt <- attributes(input)$wt
+  if (wt == 'json') {
+    if (parsetype == 'df') {
+      dat <- input$highlight
+      df <- dplyr::bind_rows(lapply(dat, as_data_frame))
+      if (NROW(df) == 0) {
+        highout <- tibble::data_frame()
+      } else {
+        highout <- tibble::add_column(df, names = names(dat), .before = TRUE)
+      }
+    } else {
+      highout <- input$highlight
+    }
+  } else {
+    highout <- xml_children(xml_find_all(input, '//lst[@name="highlighting"]'))
+    tmptmp <- lapply(highout, function(z) {
+      c(
+        names = xml_attr(z, "name"),
+        sapply(
+          xml_children(z),
+          function(w) as.list(stats::setNames(xml_text(w), xml_attr(w, "name"))))
+      )
+    })
+    if (parsetype == 'df') {
+      highout <- bind_rows(lapply(tmptmp, as_data_frame))
+    } else {
+      highout <- tmptmp
+    }
+  }
+
+  return( highout )
+}
+
+#' @export
+#' @rdname solr_parse
+solr_parse.sr_search <- function(input, parsetype = 'list', concat = ',') {
+  if (inherits(unclass(input), "character")) input <- parse_ch(input, parsetype, concat)
+  wt <- attributes(input)$wt
+  if (wt == 'json') {
+    if (parsetype == 'df') {
+      dat <- input$response$docs
+      dat2 <- lapply(dat, function(x) {
+        lapply(x, function(y) {
+          tmp <- if (length(y) > 1) {
+            paste(y, collapse = concat)
+          } else {
+            y
+          }
+          if (inherits(y, "list")) unlist(tmp) else tmp
+        })
+      })
+      datout <- bind_rows(lapply(dat2, as_data_frame))
+    } else {
+      datout <- input$response$docs
+    }
+    datout <- add_atts(datout, popp(input$response, "docs"))
+  } else if (wt == "xml") {
+    temp <- xml2::xml_find_all(input, '//doc')
+    tmptmp <- lapply(temp, function(x) {
+      sapply(xml2::xml_children(x), nmtxt)
+    })
+    if (parsetype == 'df') {
+      datout <- bind_rows(lapply(tmptmp, as_data_frame))
+    } else {
+      datout <- tmptmp
+    }
+    datout <- add_atts(datout, as.list(xml2::xml_attrs(xml2::xml_find_first(input, "result"))))
+  } else {
+    datout <- input
+  }
+
+  return( datout )
+}
+
+#' @export
+#' @rdname solr_parse
+solr_parse.sr_all <- function(input, parsetype = 'list', concat = ',') {
+  list(
+    search = solr_parse.sr_search(unclass(input), parsetype, concat),
+    facet = solr_parse.sr_facet(unclass(input), parsetype, concat),
+    high = solr_parse.sr_high(unclass(input), parsetype, concat),
+    mlt = solr_parse.sr_mlt(unclass(input), parsetype, concat),
+    group = solr_parse.sr_group(unclass(input), parsetype, concat),
+    stats = solr_parse.sr_stats(unclass(input), parsetype, concat)
+  )
+}
+
+#' @export
+#' @rdname solr_parse
+solr_parse.sr_mlt <- function(input, parsetype = 'list', concat = ',') {
+  if (inherits(unclass(input), "character")) input <- parse_ch(input, parsetype, concat)
+  wt <- attributes(input)$wt
+  if (wt == 'json') {
+    if (parsetype == 'df') {
+      res <- input$response
+      reslist <- lapply(res$docs, function(y) {
+        lapply(y, function(z) {
+          if (length(z) > 1) {
+            paste(z, collapse = concat)
+          } else {
+            z
+          }
+        })
+      })
+      resdat <- bind_rows(lapply(reslist, as_data_frame))
+
+      dat <- input$moreLikeThis
+      dat2 <- lapply(dat, function(x){
+        lapply(x$docs, function(y){
+          lapply(y, function(z){
+            if (length(z) > 1) {
+              paste(z, collapse = concat)
+            } else {
+              z
+            }
+          })
+        })
+      })
+
+      datmlt <- list()
+      for (i in seq_along(dat)) {
+        attsdf <- as_data_frame(popp(dat[[i]], "docs"))
+        df <- bind_rows(lapply(dat[[i]]$docs, function(y) {
+          as_data_frame(lapply(y, function(z) {
+            if (length(z) > 1) {
+              paste(z, collapse = concat)
+            } else {
+              z
+            }
+          }))
+        }))
+        if (NROW(df) == 0) {
+          df <- attsdf
+        } else {
+          df <- as_tibble(cbind(attsdf, df))
+        }
+        datmlt[[names(dat[i])]] <- df
+      }
+
+      datout <- list(docs = resdat, mlt = datmlt)
+    } else {
+      datout <- input$moreLikeThis
+    }
+  } else {
+    res <- xml_find_all(input, '//result[@name="response"]//doc')
+    resdat <- bind_rows(lapply(res, function(x){
+      tmp <- sapply(xml_children(x), nmtxt)
+      as_data_frame(tmp)
+    }))
+
+    temp <- xml_find_all(input, '//lst[@name="moreLikeThis"]')
+    tmptmp <- stats::setNames(lapply(xml_children(temp), function(z) {
+      lapply(xml_find_all(z, "doc"), function(w) {
+        sapply(xml_children(w), nmtxt)
+      })
+    }), xml_attr(xml_children(temp), "name"))
+    tmptmp <- Map(function(x, y) {
+      atts <- as.list(xml_attrs(y))
+      for (i in seq_along(atts)) {
+        attr(x, names(atts)[i]) <- atts[[i]]
+      }
+      x
+    },
+      tmptmp,
+      xml_children(temp)
+    )
+
+    if (parsetype == 'df') {
+      datmlt <- lapply(tmptmp, function(z) {
+        df <- bind_rows(lapply(z, as_data_frame))
+        atts <- attributes(z)
+        attsdf <- as_data_frame(atts)
+        if (NROW(df) == 0) {
+          attsdf
+        } else {
+          as_tibble(cbind(attsdf, df))
+        }
+      })
+      datout <- list(docs = resdat, mlt = datmlt)
+    } else {
+      datout <- list(docs = resdat, mlt = tmptmp)
+    }
+  }
+
+  return( datout )
+}
+
+#' @export
+#' @rdname solr_parse
+solr_parse.sr_stats <- function(input, parsetype = 'list', concat = ',') {
+  if (inherits(unclass(input), "character")) input <- parse_ch(input, parsetype, concat)
+  wt <- attributes(input)$wt
+  if (wt == 'json') {
+    if (parsetype == 'df') {
+      dat <- input$stats$stats_fields
+
+      dat2 <- lapply(dat, function(x){
+        data.frame(x[!names(x) %in% 'facets'])
+      })
+      dat_reg <- do.call(rbind, dat2)
+
+      # parse the facets
+      if (length(dat[[1]]$facets) == 0) {
+        dat_facet <- NULL
+      } else {
+        dat_facet <- lapply(dat, function(x){
+          facetted <- x[names(x) %in% 'facets'][[1]]
+          if (length(facetted) == 1) {
+            df <- bind_rows(
+              lapply(facetted[[1]], function(z) {
+                as_data_frame(
+                  lapply(z[!names(z) %in% 'facets'], function(w) {
+                    if (length(w) == 0) "" else w
+                  })
+                )
+              })
+            , .id = names(facetted))
+          } else {
+            df <- stats::setNames(lapply(seq.int(length(facetted)), function(n) {
+              bind_rows(lapply(facetted[[n]], function(b) {
+                as_data_frame(
+                  lapply(b[!names(b) %in% 'facets'], function(w) {
+                    if (length(w) == 0) "" else w
+                  })
+                )
+              }), .id = names(facetted)[n])
+            }), names(facetted))
+          }
+          return(df)
+        })
+      }
+
+      datout <- list(data = dat_reg, facet = dat_facet)
+
+    } else {
+      dat <- input$stats$stats_fields
+      # w/o facets
+      dat_reg <- lapply(dat, function(x){
+        x[!names(x) %in% 'facets']
+      })
+      # just facets
+      dat_facet <- lapply(dat, function(x){
+        facetted <- x[names(x) %in% 'facets'][[1]]
+        if (length(facetted) == 1) {
+          lapply(facetted[[1]], function(z) z[!names(z) %in% 'facets'])
+        } else {
+          df <- lapply(facetted, function(z){
+            lapply(z, function(zz) zz[!names(zz) %in% 'facets'])
+          })
+        }
+      })
+
+      datout <- list(data = dat_reg, facet = dat_facet)
+    }
+  } else {
+    temp <- xml_find_all(input, '//lst/lst[@name="stats_fields"]/lst')
+    if (parsetype == 'df') {
+      # w/o facets
+      dat_reg <- bind_rows(stats::setNames(lapply(temp, function(h){
+        as_data_frame(popp(sapply(xml_children(h), nmtxt), "facets"))
+      }), xml_attr(temp, "name")), .id = "stat")
+      # just facets
+      dat_facet <- stats::setNames(lapply(temp, function(e){
+        tt <- xml_find_first(e, 'lst[@name="facets"]')
+        stats::setNames(lapply(xml_children(tt), function(f){
+          bind_rows(stats::setNames(lapply(xml_children(f), function(g){
+            as_data_frame(popp(sapply(xml_children(g), nmtxt), "facets"))
+          }), xml_attr(xml_children(f), "name")), .id = xml_attr(f, "name"))
+        }), xml_attr(xml_children(tt), "name"))
+      }), xml_attr(temp, "name"))
+      datout <- list(data = dat_reg, facet = dat_facet)
+    } else {
+      # w/o facets
+      dat_reg <- stats::setNames(lapply(temp, function(h){
+        popp(sapply(xml_children(h), nmtxt), "facets")
+      }), xml_attr(temp, "name"))
+      # just facets
+      dat_facet <- stats::setNames(lapply(temp, function(e){
+        tt <- xml_find_first(e, 'lst[@name="facets"]')
+        stats::setNames(lapply(xml_children(tt), function(f){
+          stats::setNames(lapply(xml_children(f), function(g){
+            popp(sapply(xml_children(g), nmtxt), "facets")
+          }), xml_attr(xml_children(f), "name"))
+        }), xml_attr(xml_children(tt), "name"))
+      }), xml_attr(temp, "name"))
+      datout <- list(data = dat_reg, facet = dat_facet)
+    }
+  }
+  
+  datout <- if (length(Filter(length, datout)) == 0) NULL else datout
+  return( datout )
+}
+
+#' @export
+#' @rdname solr_parse
+solr_parse.sr_group <- function(input, parsetype = 'list', concat = ',') {
+  if (inherits(unclass(input), "character")) input <- parse_ch(input, parsetype, concat)
+  wt <- attributes(input)$wt
+
+  if (wt == 'json') {
+    if (parsetype == 'df') {
+      if ('response' %in% names(input)) {
+        datout <- cbind(data.frame(
+          numFound = input[[1]]$numFound,
+          start = input[[1]]$start),
+          do.call(rbind.fill, lapply(input[[1]]$docs,
+                                     data.frame,
+                                     stringsAsFactors = FALSE))
+        )
+      } else {
+        dat <- input$grouped
+        if (length(dat) == 1) {
+          if ('groups' %in% names(dat[[1]])) {
+            datout <- dat[[1]]$groups
+            datout <- do.call(rbind.fill, lapply(datout, function(x){
+              df <- data.frame(groupValue = ifelse(is.null(x$groupValue),"none",x$groupValue),
+                               numFound = x$doclist$numFound,
+                               start = x$doclist$start)
+              cbind(df, do.call(rbind.fill,
+                lapply(x$doclist$docs, function(z) {
+                  data.frame(lapply(z, function(zz) {
+                    if (length(zz) > 1) {
+                      paste(zz, collapse = concat)
+                    } else {
+                      zz
+                    }
+                  }), stringsAsFactors = FALSE)
+                })
+              ))
+            }))
+          } else {
+            datout <- cbind(data.frame(numFound = dat[[1]]$doclist$numFound,
+                                       start = dat[[1]]$doclist$start),
+                            do.call(rbind.fill, lapply(dat[[1]]$doclist$docs,
+                                                       data.frame,
+                                                       stringsAsFactors = FALSE)))
+          }
+        } else {
+          if ('groups' %in% names(dat[[1]])) {
+            datout <- lapply(dat, function(y) {
+              y <- y$groups
+              do.call(rbind.fill, lapply(y, function(x){
+                df <- data.frame(
+                  groupValue = ifelse(is.null(x$groupValue), "none", x$groupValue),
+                  numFound = x$doclist$numFound,
+                  start = x$doclist$start,
+                  stringsAsFactors = FALSE
+                )
+                cbind(df, do.call(rbind.fill, lapply(x$doclist$docs,
+                                                     data.frame,
+                                                     stringsAsFactors = FALSE)))
+              }))
+            })
+          } else {
+            datout <- do.call(rbind.fill, lapply(dat, function(x){
+              df <- data.frame(
+                numFound = x$doclist$numFound,
+                start = x$doclist$start,
+                stringsAsFactors = FALSE
+              )
+              cbind(df, do.call(rbind.fill, lapply(x$doclist$docs,
+                                                   data.frame,
+                                                   stringsAsFactors = FALSE)))
+            }))
+          }
+        }
+      }
+    } else {
+      datout <- input$grouped
+    }
+  } else {
+    temp <- xml_find_all(input, '//lst[@name="grouped"]/lst')
+    if (parsetype == 'df') {
+      datout <- stats::setNames(lapply(temp, function(e){
+        tt <- xml_find_first(e, 'arr[@name="groups"]')
+        bind_rows(stats::setNames(lapply(xml_children(tt), function(f){
+          docc <- xml_find_all(f, 'result[@name="doclist"]/doc')
+          df <- bind_rows(lapply(docc, function(g){
+            as_data_frame(sapply(xml_children(g), nmtxt))
+          }))
+          add_column(
+            df,
+            numFound = xml_attr(xml_find_first(f, "result"), "numFound"),
+            start = xml_attr(xml_find_first(f, "result"), "start"),
+            .before = TRUE
+          )
+        }), vapply(xml_children(tt), function(z) xml_text(xml_find_first(z, "str")) %||% "", "")),
+        .id = "group"
+        )
+      }), xml_attr(temp, "name"))
+    } else {
+      datout <- stats::setNames(lapply(temp, function(e){
+        tt <- xml_find_first(e, 'arr[@name="groups"]')
+        stats::setNames(lapply(xml_children(tt), function(f){
+          docc <- xml_find_all(f, 'result[@name="doclist"]/doc')
+          lst <- lapply(docc, function(g){
+            sapply(xml_children(g), nmtxt)
+          })
+          list(
+            docs = lst,
+            numFound = xml_attr(xml_find_first(f, "result"), "numFound"),
+            start = xml_attr(xml_find_first(f, "result"), "start")
+          )
+        }), vapply(xml_children(tt), function(z) xml_text(xml_find_first(z, "str")) %||% "", ""))
+      }), xml_attr(temp, "name"))
+    }
+  }
+
+  return( datout )
+}
+
+# helper fxns ---------------------
+nmtxt <- function(x) {
+  as.list(stats::setNames(xml2::xml_text(x), xml2::xml_attr(x, "name")))
+}
+
+add_atts <- function(x, atts = NULL) {
+  if (!is.null(atts)) {
+    for (i in seq_along(atts)) {
+      attr(x, names(atts)[i]) <- atts[[i]]
+    }
+    return(x)
+  } else {
+    return(x)
+  }
+}
+
+parse_it <- function(x, wt) {
+  switch(
+    wt,
+    xml = {
+      xml2::read_xml(unclass(x))
+    },
+    json = {
+      jsonlite::fromJSON(x, simplifyDataFrame = FALSE, simplifyMatrix = FALSE)
+    },
+    csv = {
+      tibble::as_data_frame(
+        read.table(text = x, sep = ",", stringsAsFactors = FALSE,
+                   header = TRUE, fill = TRUE, comment.char = "")
+      )
+    }
+  )
+}
+
+parse_ch <- function(x, parsetype, concat) {
+  parsed <- cont_parse(x, attr(x, "wt"))
+  structure(parsed, class = c(class(parsed), class(x)))
+}
+
+cont_parse <- function(x, wt) {
+  structure(parse_it(x, wt), wt = wt)
+}
+
+# facet.pivot helpers --------------
+#' Flatten facet.pivot responses
+#'
+#' Convert a nested hierarchy of facet.pivot elements
+#' to tabular data (rows and columns)
+#'
+#' @param df_w_pivot a \code{data.frame} with another
+#' \code{data.frame} nested inside representing a
+#' pivot response
+#' @return a \code{data.frame}
+#'
+#' @keywords internal
+pivot_flatten_tabular <- function(df_w_pivot){
+  # drop last column assumed to be named "pivot"
+  parent <- df_w_pivot[head(names(df_w_pivot),-1)]
+  pivot <- df_w_pivot$pivot
+  pp <- list()
+  for (i in 1:nrow(parent)) {
+    if ((!is.null(pivot[[i]])) && (nrow(pivot[[i]]) > 0)) {
+      # from parent drop last column assumed to be named "count" to not create duplicate columns of information
+      pp[[i]] <- data.frame(cbind(parent[i,], pivot[[i]], row.names = NULL))
+    }
+  }
+  flattened_pivot <- do.call('rbind', pp)
+  # return a tbl_df to flatten again if necessary
+  return(list(parent = parent, flattened_pivot = flattened_pivot))
+}
+
+#' Collapse Pivot Field and Value Columns
+#'
+#' Convert a table consisting of columns in sets of 3
+#' into 2 columns assuming that the first column of every set of 3
+#' (field) is duplicated throughout all rows and should be removed.
+#' This type of structure is usually returned by facet.pivot responses.
+#'
+#' @param data a \code{data.frame} whose columns come in sets of three
+#' (field, value, count), as produced by flattening a facet.pivot
+#' response
+#' @return a \code{data.frame}
+#'
+#' @keywords internal
+collapse_pivot_names <- function(data){
+
+  # shift field name to the column name to its right
+  for (i in seq(1, ncol(data) - 1, by = 3)) {
+    names(data)[i + 1] <- data[1, i]
+  }
+
+  # remove columns with duplicating information (anything named field)
+  data <- data[-c(seq(1, ncol(data) - 1, by = 3))]
+
+  # remove vestigial count columns
+  if (ncol(data) > 2) {
+    data <- data[-c(seq(0, ncol(data) - 1, by = 2))]
+  }
+
+  names(data)[length(data)] <- 'count'
+  return(data)
+}
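
The facet.pivot helpers above are easiest to follow on a toy input. A minimal
sketch, applying the two internal helpers directly (outside the while loop in
solr_parse.sr_facet) to a hand-built nested data.frame of the kind
jsonlite::fromJSON() produces for a journal,subject pivot; the journal and
subject values here are made up for illustration:

  parent <- data.frame(field = "journal",
                       value = c("plos one", "plos biology"),
                       count = c(4, 1), stringsAsFactors = FALSE)
  parent$pivot <- list(
    data.frame(field = "subject", value = "ecology", count = 3,
               stringsAsFactors = FALSE),
    data.frame(field = "subject", value = "cell biology", count = 1,
               stringsAsFactors = FALSE)
  )
  flat <- pivot_flatten_tabular(parent)
  flat$flattened_pivot
  #>     field        value count field.1      value.1 count.1
  #> 1 journal     plos one     4 subject      ecology       3
  #> 2 journal plos biology     1 subject cell biology       1
  collapse_pivot_names(flat$flattened_pivot)
  #>        journal      subject count
  #> 1     plos one      ecology     3
  #> 2 plos biology cell biology     1
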
diff --git a/R/ping.R b/R/ping.R
new file mode 100644
index 0000000..3304048
--- /dev/null
+++ b/R/ping.R
@@ -0,0 +1,53 @@
+#' Ping a Solr instance
+#'
+#' @export
+#' @param name (character) Name of a collection or core. Required.
+#' @param wt (character) One of json (default) or xml. If json, uses
+#' \code{\link[jsonlite]{fromJSON}} to parse. If xml, uses
+#' \code{\link[xml2]{read_xml}} to parse
+#' @param verbose If TRUE (default), the URL call used is printed to the console.
+#' @param raw (logical) If TRUE, returns raw data in format specified by
+#' \code{wt} param
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#'
+#' @return if \code{wt="xml"} an object of class \code{xml_document}, if
+#' \code{wt="json"} an object of class \code{list}
+#'
+#' @details You likely won't be able to run this function against many public
+#' Solr services, as they hopefully don't expose their admin interface to the
+#' public, but it works against local installations.
+#'
+#' @examples \dontrun{
+#' # start Solr, in your CLI, run: `bin/solr start -e cloud -noprompt`
+#' # after that, if you haven't run `bin/post -c gettingstarted docs/` yet,
+#' # do so
+#'
+#' # connect: by default we connect to localhost, port 8983
+#' solr_connect()
+#'
+#' # ping the gettingstarted index
+#' ping("gettingstarted")
+#' ping("gettingstarted", wt = "xml")
+#' ping("gettingstarted", verbose = FALSE)
+#' ping("gettingstarted", raw = TRUE)
+#'
+#' library("httr")
+#' ping("gettingstarted", wt="xml", config = verbose())
+#' }
+
+ping <- function(name, wt = 'json', verbose = TRUE, raw = FALSE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  res <- tryCatch(solr_GET(file.path(conn$url, sprintf('solr/%s/admin/ping', name)),
+           args = list(wt = wt), verbose = verbose, conn$proxy, ...), error = function(e) e)
+  if (inherits(res, "error")) {
+    return(list(status = "not found"))
+  } else {
+    out <- structure(res, class = "ping", wt = wt)
+    if (raw) {
+      return( out )
+    } else {
+      solr_parse(out)
+    }
+  }
+}
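
A small usage sketch building on the above: the parsed ping result can serve
as a health check. This assumes the stock Solr ping response, which carries a
status field equal to "OK" when the core is reachable (and, per the code
above, ping() returns list(status = "not found") when the request errors):

  # after solr_connect(), as in the examples above
  res <- ping("gettingstarted")
  if (identical(res$status, "OK")) {
    message("gettingstarted is up")
  } else {
    message("gettingstarted is not reachable")
  }
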
diff --git a/R/schema.R b/R/schema.R
new file mode 100644
index 0000000..2eca2b3
--- /dev/null
+++ b/R/schema.R
@@ -0,0 +1,53 @@
+#' Get the schema for a collection or core
+#' 
+#' @export
+#' @param name (character) Name of collection or core
+#' @param what (character) What to retrieve. By default, we retrieve the entire
+#' schema. Options include: fields, dynamicfields, fieldtypes, copyfields, name,
+#' version, uniquekey, similarity, "solrqueryparser/defaultoperator"
+#' @param raw (logical) If \code{TRUE}, returns raw data 
+#' @param verbose If TRUE (default), the URL call used is printed to the console.
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @examples \dontrun{
+#' # start Solr, in your CLI, run: `bin/solr start -e cloud -noprompt`
+#' # after that, if you haven't run `bin/post -c gettingstarted docs/` yet, do so
+#' 
+#' # connect: by default we connect to localhost, port 8983
+#' solr_connect()
+#' 
+#' # get the schema for the gettingstarted index
+#' schema(name = "gettingstarted")
+#' 
+#' # Get parts of the schema
+#' schema(name = "gettingstarted", "fields")
+#' schema(name = "gettingstarted", "dynamicfields")
+#' schema(name = "gettingstarted", "fieldtypes")
+#' schema(name = "gettingstarted", "copyfields")
+#' schema(name = "gettingstarted", "name")
+#' schema(name = "gettingstarted", "version")
+#' schema(name = "gettingstarted", "uniquekey")
+#' schema(name = "gettingstarted", "similarity")
+#' schema(name = "gettingstarted", "solrqueryparser/defaultoperator")
+#' 
+#' # get raw data
+#' schema(name = "gettingstarted", "similarity", raw = TRUE)
+#' schema(name = "gettingstarted", "uniquekey", raw = TRUE)
+#' 
+#' # start Solr in Schemaless mode: bin/solr start -e schemaless
+#' # schema("gettingstarted")
+#' 
+#' # start Solr in Standalone mode: bin/solr start
+#' # then add a core: bin/solr create -c helloWorld
+#' # schema("helloWorld")
+#' }
+schema <- function(name, what = '', raw = FALSE, verbose = TRUE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  res <- solr_GET(file.path(conn$url, sprintf('solr/%s/schema', name), what), 
+                  list(wt = "json"), verbose = verbose, conn$proxy, ...)
+  if (raw) {
+    return(res)
+  } else {
+    jsonlite::fromJSON(res)
+  }
+}
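
A follow-on sketch for pulling just the field names out of the parsed schema.
The nesting assumed here ($schema$fields) follows the stock Solr Schema API
JSON and may differ across Solr versions:

  # after solr_connect(), as in the examples above
  sch <- schema(name = "gettingstarted")
  # jsonlite::fromJSON() simplifies the fields array to a data.frame
  head(sch$schema$fields$name)
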
diff --git a/R/solr_all.r b/R/solr_all.r
new file mode 100644
index 0000000..4724a45
--- /dev/null
+++ b/R/solr_all.r
@@ -0,0 +1,77 @@
+#' @title All purpose search
+#'
+#' @description Includes documents, facets, groups, mlt, stats, and highlights.
+#'
+#' @template search
+#' @param wt (character) One of json (default) or xml. If json, uses
+#' \code{\link[jsonlite]{fromJSON}} to parse. If xml, uses \code{\link[xml2]{read_xml}}
+#' to parse. You can't use \code{csv} because this function returns nested result types.
+#' @return XML, JSON, a list, or data.frame
+#' @seealso \code{\link{solr_highlight}}, \code{\link{solr_facet}}
+#' @references See \url{http://wiki.apache.org/solr/#Search_and_Indexing} for
+#' more information.
+#' @export
+#' @examples \dontrun{
+#' # connect
+#' solr_connect('http://api.plos.org/search')
+#'
+#' solr_all(q='*:*', rows=2, fl='id')
+#'
+#' # facets
+#' solr_all(q='*:*', rows=2, fl='id', facet="true", facet.field="journal")
+#'
+#' # mlt
+#' solr_all(q='ecology', rows=2, fl='id', mlt='true', mlt.count=2, mlt.fl='abstract')
+#'
+#' # facets and mlt
+#' solr_all(q='ecology', rows=2, fl='id', facet="true", facet.field="journal",
+#' mlt='true', mlt.count=2, mlt.fl='abstract')
+#'
+#' # stats
+#' solr_all(q='ecology', rows=2, fl='id', stats='true', stats.field='counter_total_all')
+#'
+#' # facets, mlt, and stats
+#' solr_all(q='ecology', rows=2, fl='id', facet="true", facet.field="journal",
+#' mlt='true', mlt.count=2, mlt.fl='abstract', stats='true', stats.field='counter_total_all')
+#'
+#' # group
+#' solr_all(q='ecology', rows=2, fl='id', group='true',
+#'    group.field='journal', group.limit=3)
+#'
+#' # facets, mlt, stats, and groups
+#' solr_all(q='ecology', rows=2, fl='id', facet="true", facet.field="journal",
+#'    mlt='true', mlt.count=2, mlt.fl='abstract', stats='true', stats.field='counter_total_all',
+#'    group='true', group.field='journal', group.limit=3)
+#'
+#' # using wt = xml
+#' solr_all(q='*:*', rows=50, fl=c('id','score'), fq='doc_type:full', wt="xml", raw=TRUE)
+#' }
+
+solr_all <- function(name = NULL, q='*:*', sort=NULL, start=0, rows=NULL, pageDoc=NULL,
+  pageScore=NULL, fq=NULL, fl=NULL, defType=NULL, timeAllowed=NULL, qt=NULL,
+  wt='json', NOW=NULL, TZ=NULL, echoHandler=NULL, echoParams=NULL, key = NULL,
+  callopts=list(), raw=FALSE, parsetype='df', concat=',', ...) {
+
+  check_defunct(...)
+  conn <- solr_settings()
+  check_conn(conn)
+  check_wt(wt)
+  if (!is.null(fl)) fl <- paste0(fl, collapse = ",")
+  args <- sc(list(q = q, sort = sort, start = start, rows = rows, pageDoc = pageDoc,
+                       pageScore = pageScore, fl = fl, fq = fq, defType = defType,
+                       timeAllowed = timeAllowed, qt = qt, wt = wt, NOW = NOW, TZ = TZ,
+                       echoHandler = echoHandler, echoParams = echoParams))
+
+  # additional parameters
+  args <- c(args, list(...))
+
+  out <- structure(solr_GET(handle_url(conn, name), args, callopts, conn$proxy),
+                   class = "sr_all", wt = wt)
+  if (raw) {
+    return( out )
+  } else {
+    parsed <- cont_parse(out, wt)
+    parsed <- structure(parsed, class = c(class(parsed), "sr_all"))
+    solr_parse(parsed, parsetype, concat)
+  }
+}
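
Per solr_parse.sr_all() in R/parsers.R above, the parsed return value is a
named list with one slot per result type, so a call like the ones in the
examples can be picked apart as follows (sketch only, against the PLOS
endpoint used above):

  res <- solr_all(q = 'ecology', rows = 2, fl = 'id',
                  facet = "true", facet.field = "journal")
  names(res)
  #> [1] "search" "facet"  "high"   "mlt"    "group"  "stats"
  res$search  # matched documents
  res$facet   # parsed facet counts
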
diff --git a/R/solr_facet.r b/R/solr_facet.r
new file mode 100644
index 0000000..c8c199d
--- /dev/null
+++ b/R/solr_facet.r
@@ -0,0 +1,126 @@
+#' @title Faceted search
+#'
+#' @description Returns only facet items
+#'
+#' @template facet
+#' @return Raw json or xml, or a list of length 4 parsed elements (usually data.frame's).
+#' @seealso \code{\link{solr_search}}, \code{\link{solr_highlight}}, \code{\link{solr_parse}}
+#' @references See \url{http://wiki.apache.org/solr/SimpleFacetParameters} for
+#' more information on faceting.
+#' @export
+#' @examples \dontrun{
+#' # connect
+#' solr_connect('http://api.plos.org/search')
+#'
+#' # Facet on a single field
+#' solr_facet(q='*:*', facet.field='journal')
+#'
+#' # Facet on multiple fields
+#' solr_facet(q='alcohol', facet.field=c('journal','subject'))
+#'
+#' # Using mincount
+#' solr_facet(q='alcohol', facet.field='journal', facet.mincount='500')
+#'
+#' # Using facet.query to get counts
+#' solr_facet(q='*:*', facet.field='journal', facet.query=c('cell','bird'))
+#'
+#' # Using facet.pivot to simulate SQL group by counts
+#' solr_facet(q='alcohol', facet.pivot='journal,subject',
+#'              facet.pivot.mincount=10)
+#' ## two or more fields are required - you can pass in as a single character string
+#' solr_facet(facet.pivot = "journal,subject", facet.limit =  3)
+#' ## Or, pass in as a vector of length 2 or greater
+#' solr_facet(facet.pivot = c("journal", "subject"), facet.limit =  3)
+#'
+#' # Date faceting
+#' solr_facet(q='*:*', facet.date='publication_date',
+#' facet.date.start='NOW/DAY-5DAYS', facet.date.end='NOW', facet.date.gap='+1DAY')
+#' ## two variables
+#' solr_facet(q='*:*', facet.date=c('publication_date', 'timestamp'),
+#' facet.date.start='NOW/DAY-5DAYS', facet.date.end='NOW', facet.date.gap='+1DAY')
+#'
+#' # Range faceting
+#' solr_facet(q='*:*', facet.range='counter_total_all',
+#' facet.range.start=5, facet.range.end=1000, facet.range.gap=10)
+#'
+#' # Range faceting with > 1 field, same settings
+#' solr_facet(q='*:*', facet.range=c('counter_total_all','alm_twitterCount'),
+#' facet.range.start=5, facet.range.end=1000, facet.range.gap=10)
+#'
+#' # Range faceting with > 1 field, different settings
+#' solr_facet(q='*:*', facet.range=c('counter_total_all','alm_twitterCount'),
+#' f.counter_total_all.facet.range.start=5, f.counter_total_all.facet.range.end=1000,
+#' f.counter_total_all.facet.range.gap=10, f.alm_twitterCount.facet.range.start=5,
+#' f.alm_twitterCount.facet.range.end=1000, f.alm_twitterCount.facet.range.gap=10)
+#'
+#' # Get raw json or xml
+#' ## json
+#' solr_facet(q='*:*', facet.field='journal', raw=TRUE)
+#' ## xml
+#' solr_facet(q='*:*', facet.field='journal', raw=TRUE, wt='xml')
+#'
+#' # Get raw data back, and parse later, same as what goes on internally if
+#' # raw=FALSE (Default)
+#' out <- solr_facet(q='*:*', facet.field='journal', raw=TRUE)
+#' solr_parse(out)
+#' out <- solr_facet(q='*:*', facet.field='journal', raw=TRUE,
+#'    wt='xml')
+#' solr_parse(out)
+#'
+#' # Using the USGS BISON API (https://bison.usgs.gov/#solr)
+#' ## The occurrence endpoint
+#' solr_connect("https://bison.usgs.gov/solr/occurrences/select")
+#' solr_facet(q='*:*', facet.field='year')
+#' solr_facet(q='*:*', facet.field='computedStateFips')
+#'
+#' # using a proxy
+#' # prox <- list(url = "54.195.48.153", port = 8888)
+#' # solr_connect(url = 'http://api.plos.org/search', proxy = prox)
+#' # solr_facet(facet.field='journal', callopts=verbose())
+#' }
+
+solr_facet <- function(name = NULL, q="*:*", facet.query=NA, facet.field=NA,
+   facet.prefix = NA, facet.sort = NA, facet.limit = NA, facet.offset = NA,
+   facet.mincount = NA, facet.missing = NA, facet.method = NA, facet.enum.cache.minDf = NA,
+   facet.threads = NA, facet.date = NA, facet.date.start = NA, facet.date.end = NA,
+   facet.date.gap = NA, facet.date.hardend = NA, facet.date.other = NA,
+   facet.date.include = NA, facet.range = NA, facet.range.start = NA, facet.range.end = NA,
+   facet.range.gap = NA, facet.range.hardend = NA, facet.range.other = NA, facet.range.include = NA,
+   facet.pivot = NA, facet.pivot.mincount = NA, start=NA, rows=NA, key=NA, wt='json',
+   raw=FALSE, callopts=list(), ...) {
+
+  check_defunct(...)
+  conn <- solr_settings()
+  check_conn(conn)
+  check_wt(wt)
+  todonames <- c("q",  "facet.query",  "facet.field",
+     "facet.prefix", "facet.sort", "facet.limit", "facet.offset",
+     "facet.mincount", "facet.missing", "facet.method", "facet.enum.cache.minDf",
+     "facet.threads", "facet.date", "facet.date.start", "facet.date.end",
+     "facet.date.gap", "facet.date.hardend", "facet.date.other",
+     "facet.date.include", "facet.range", "facet.range.start", "facet.range.end",
+     "facet.range.gap", "facet.range.hardend", "facet.range.other",
+     "facet.range.include", "facet.pivot", "facet.pivot.mincount",
+     "start", "rows", "key", "wt")
+  args <- collectargs(todonames)
+  args$fl <- 'DOES_NOT_EXIST'
+  args$facet <- 'true'
+
+  # additional parameters
+  args <- c(args, list(...))
+  if (length(args[names(args) %in% "facet.pivot"]) > 1) {
+    xx <- paste0(unlist(unname(args[names(args) %in% "facet.pivot"])), collapse = ",")
+    args[names(args) %in% "facet.pivot"] <- NULL
+    args$facet.pivot <- xx
+  }
+
+  out <- structure(solr_GET(handle_url(conn, name), args, callopts, conn$proxy),
+                   class = "sr_facet", wt = wt)
+  if (raw) {
+    return( out )
+  } else {
+    parsed <- cont_parse(out, wt)
+    parsed <- structure(parsed, class = c(class(parsed), "sr_facet"))
+    solr_parse(parsed)
+  }
+}
diff --git a/R/solr_get.R b/R/solr_get.R
new file mode 100644
index 0000000..52d395a
--- /dev/null
+++ b/R/solr_get.R
@@ -0,0 +1,42 @@
+#' @title Real time get
+#'
+#' @description Get documents by id
+#'
+#' @export
+#' @param ids Document IDs, one or more in a vector or list
+#' @param name (character) A collection or core name. Required.
+#' @param fl Fields to return, can be a character vector like \code{c('id', 'title')},
+#' or a single character vector with one or more comma separated names, like
+#' \code{'id,title'}
+#' @param wt (character) One of json (default) or xml. Data type returned.
+#' If json, uses \code{\link[jsonlite]{fromJSON}} to parse. If xml, uses
+#' \code{\link[xml2]{read_xml}} to parse.
+#' @param raw (logical) If \code{TRUE}, returns raw data in format specified by
+#' \code{wt} param
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @details We use json internally as the data interchange format for this function.
+#' @examples \dontrun{
+#' solr_connect()
+#'
+#' # add some documents first
+#' ss <- list(list(id = 1, price = 100), list(id = 2, price = 500))
+#' add(ss, name = "gettingstarted")
+#'
+#' # Now, get documents by id
+#' solr_get(ids = 1, "gettingstarted")
+#' solr_get(ids = 2, "gettingstarted")
+#' solr_get(ids = c(1, 2), "gettingstarted")
+#' solr_get(ids = "1,2", "gettingstarted")
+#'
+#' # Get raw JSON
+#' solr_get(ids = 1, "gettingstarted", raw = TRUE, wt = "json")
+#' solr_get(ids = 1, "gettingstarted", raw = TRUE, wt = "xml")
+#' }
+solr_get <- function(ids, name, fl = NULL, wt = 'json', raw = FALSE, ...) {
+  conn <- solr_settings()
+  check_conn(conn)
+  if (!is.null(fl)) fl <- paste0(fl, collapse = ",")
+  args <- sc(list(ids = paste0(ids, collapse = ","), fl = fl, wt = wt))
+  res <- solr_GET(file.path(conn$url, sprintf('solr/%s/get', name)), args, conn$proxy, ...)
+  config_parse(res, wt = wt, raw = raw)
+}
diff --git a/R/solr_group.r b/R/solr_group.r
new file mode 100644
index 0000000..b6095d9
--- /dev/null
+++ b/R/solr_group.r
@@ -0,0 +1,107 @@
+#' @title Grouped search
+#'
+#' @description Returns only group items
+#'
+#' @template group
+#' @return XML, JSON, a list, or data.frame
+#' @seealso \code{\link{solr_highlight}}, \code{\link{solr_facet}}
+#' @references See \url{http://wiki.apache.org/solr/FieldCollapsing} for more
+#' information.
+#' @export
+#' @examples \dontrun{
+#' # connect
+#' solr_connect('http://api.plos.org/search')
+#'
+#' # Basic group query
+#' solr_group(q='ecology', group.field='journal', group.limit=3,
+#'   fl=c('id','score'))
+#' solr_group(q='ecology', group.field='journal', group.limit=3,
+#'   fl='article_type')
+#'
+#' # Different ways to sort (note the difference between sort and group.sort)
+#' # note that you can only sort on a field if you return that field
+#' solr_group(q='ecology', group.field='journal', group.limit=3,
+#'    fl=c('id','score'))
+#' solr_group(q='ecology', group.field='journal', group.limit=3,
+#'    fl=c('id','score','alm_twitterCount'), group.sort='alm_twitterCount desc')
+#' solr_group(q='ecology', group.field='journal', group.limit=3,
+#'    fl=c('id','score','alm_twitterCount'), sort='score asc',
+#'    group.sort='alm_twitterCount desc')
+#'
+#' # Two group.field values
+#' out <- solr_group(q='ecology', group.field=c('journal','article_type'),
+#'   group.limit=3,
+#'   fl='id', raw=TRUE)
+#' solr_parse(out)
+#' solr_parse(out, 'df')
+#'
+#' # Get two groups, one with alm_twitterCount of 0-10, and another group
+#' # with 10 to infinity
+#' solr_group(q='ecology', group.limit=3, fl=c('id','alm_twitterCount'),
+#'  group.query=c('alm_twitterCount:[0 TO 10]','alm_twitterCount:[10 TO *]'))
+#'
+#' # Use of group.format and group.main
+#' ## The raw data structures of these calls are slightly different, but
+#' ## the parsing inside the function outputs the same results. You can
+#' ## of course set raw=TRUE to get back what the data actually look like
+#' solr_group(q='ecology', group.field='journal', group.limit=3,
+#'   fl=c('id','score'), group.format='simple')
+#' solr_group(q='ecology', group.field='journal', group.limit=3,
+#'   fl=c('id','score'), group.format='grouped')
+#' solr_group(q='ecology', group.field='journal', group.limit=3,
+#'   fl=c('id','score'), group.format='grouped', group.main='true')
+#'
+#' # xml back
+#' solr_group(q='ecology', group.field='journal', group.limit=3,
+#'   fl=c('id','score'), wt = "xml")
+#' solr_group(q='ecology', group.field='journal', group.limit=3,
+#'   fl=c('id','score'), wt = "xml", parsetype = "list")
+#' res <- solr_group(q='ecology', group.field='journal', group.limit=3,
+#'   fl=c('id','score'), wt = "xml", raw = TRUE)
+#' library("xml2")
+#' xml2::read_xml(unclass(res))
+#'
+#' solr_group(q='ecology', group.field='journal', group.limit=3,
+#'   fl='article_type', wt = "xml")
+#' solr_group(q='ecology', group.field='journal', group.limit=3,
+#'   fl='article_type', wt = "xml", parsetype = "list")
+#'
+#' # examples with Dryad's Solr instance
+#' solr_connect("http://datadryad.org/solr/search/select")
+#' solr_group(q='ecology', group.field='journal', group.limit=3,
+#'   fl='article_type')
+#' }
+
+solr_group <- function(name = NULL, q='*:*', start=0, rows = NA, sort = NA, fq = NA, fl = NULL,
+  wt='json', key = NA, group.field = NA, group.limit = NA, group.offset = NA,
+  group.sort = NA, group.main = NA, group.ngroups = NA,
+  group.cache.percent = NA, group.query = NA, group.format = NA,
+  group.func = NA, callopts=list(), raw=FALSE, parsetype='df',
+  concat=',', ...) {
+
+  check_defunct(...)
+  conn <- solr_settings()
+  check_conn(conn)
+  check_wt(wt)
+  if (!is.null(fl)) fl <- paste0(fl, collapse = ",")
+  todonames <- c("group.query","group.field", 'q', 'start', 'rows', 'sort',
+    'fq', 'wt', 'group.limit', 'group.offset', 'group.sort', 'group.sort',
+    'group.format', 'group.func', 'group.main', 'group.ngroups',
+    'group.cache.percent', 'group.cache.percent', 'fl')
+  args <- collectargs(todonames)
+  args$group <- 'true'
+
+  # additional parameters
+  args <- c(args, list(...))
+
+  out <- structure(solr_GET(base = handle_url(conn, name), args, callopts, conn$proxy),
+                   class = "sr_group", wt = wt)
+
+  if (raw) {
+    return(out)
+  } else {
+    parsed <- cont_parse(out, wt)
+    parsed <- structure(parsed, class = c(class(parsed), "sr_group"))
+    solr_parse(parsed, parsetype)
+  }
+}
diff --git a/R/solr_highlight.r b/R/solr_highlight.r
new file mode 100644
index 0000000..967f28c
--- /dev/null
+++ b/R/solr_highlight.r
@@ -0,0 +1,76 @@
+#' @title Highlighting search
+#'
+#' @description Returns only highlight items
+#'
+#' @export
+#' @template high
+#' @return XML, JSON, a list, or data.frame
+#' @seealso \code{\link{solr_search}}, \code{\link{solr_facet}}
+#' @references See \url{http://wiki.apache.org/solr/HighlightingParameters} for
+#' more information on highlighting.
+#' @examples \dontrun{
+#' # connect
+#' solr_connect('http://api.plos.org/search')
+#'
+#' # highlight search
+#' solr_highlight(q='alcohol', hl.fl = 'abstract', rows=10)
+#' solr_highlight(q='alcohol', hl.fl = c('abstract','title'), rows=3)
+#'
+#' # Raw data back
+#' ## json
+#' solr_highlight(q='alcohol', hl.fl = 'abstract', rows=10,
+#'    raw=TRUE)
+#' ## xml
+#' solr_highlight(q='alcohol', hl.fl = 'abstract', rows=10,
+#'    raw=TRUE, wt='xml')
+#' ## parse after getting data back
+#' out <- solr_highlight(q='alcohol', hl.fl = c('abstract','title'), hl.fragsize=30,
+#'    rows=10, raw=TRUE, wt='xml')
+#' solr_parse(out, parsetype='df')
+#' }
+
+solr_highlight <- function(name = NULL, q, hl.fl = NULL, hl.snippets = NULL, hl.fragsize = NULL,
+     hl.q = NULL, hl.mergeContiguous = NULL, hl.requireFieldMatch = NULL,
+     hl.maxAnalyzedChars = NULL, hl.alternateField = NULL, hl.maxAlternateFieldLength = NULL,
+     hl.preserveMulti = NULL, hl.maxMultiValuedToExamine = NULL,
+     hl.maxMultiValuedToMatch = NULL, hl.formatter = NULL, hl.simple.pre = NULL,
+     hl.simple.post = NULL, hl.fragmenter = NULL, hl.fragListBuilder = NULL,
+     hl.fragmentsBuilder = NULL, hl.boundaryScanner = NULL, hl.bs.maxScan = NULL,
+     hl.bs.chars = NULL, hl.bs.type = NULL, hl.bs.language = NULL, hl.bs.country = NULL,
+     hl.useFastVectorHighlighter = NULL, hl.usePhraseHighlighter = NULL,
+     hl.highlightMultiTerm = NULL, hl.regex.slop = NULL, hl.regex.pattern = NULL,
+     hl.regex.maxAnalyzedChars = NULL, start = 0, rows = NULL,
+     wt='json', raw = FALSE, key = NULL, callopts=list(),
+     fl='DOES_NOT_EXIST', fq=NULL, parsetype='list') {
+
+  conn <- solr_settings()
+  check_conn(conn)
+  check_wt(wt)
+  if (!is.null(hl.fl)) names(hl.fl) <- rep("hl.fl", length(hl.fl))
+  args <- sc(list(wt=wt, q=q, start=start, rows=rows, hl='true',
+     hl.snippets=hl.snippets, hl.fragsize=hl.fragsize, fl=fl, fq=fq,
+     hl.mergeContiguous = hl.mergeContiguous, hl.requireFieldMatch = hl.requireFieldMatch,
+     hl.maxAnalyzedChars = hl.maxAnalyzedChars, hl.alternateField = hl.alternateField,
+     hl.maxAlternateFieldLength = hl.maxAlternateFieldLength, hl.preserveMulti = hl.preserveMulti,
+     hl.maxMultiValuedToExamine = hl.maxMultiValuedToExamine, hl.maxMultiValuedToMatch = hl.maxMultiValuedToMatch,
+     hl.formatter = hl.formatter, hl.simple.pre = hl.simple.pre, hl.simple.post = hl.simple.post,
+     hl.fragmenter = hl.fragmenter, hl.fragListBuilder = hl.fragListBuilder,
+     hl.fragmentsBuilder = hl.fragmentsBuilder, hl.boundaryScanner = hl.boundaryScanner,
+     hl.bs.maxScan = hl.bs.maxScan, hl.bs.chars = hl.bs.chars, hl.bs.type = hl.bs.type,
+     hl.bs.language = hl.bs.language, hl.bs.country = hl.bs.country,
+     hl.useFastVectorHighlighter = hl.useFastVectorHighlighter,
+     hl.usePhraseHighlighter = hl.usePhraseHighlighter, hl.highlightMultiTerm = hl.highlightMultiTerm,
+     hl.regex.slop = hl.regex.slop, hl.regex.pattern = hl.regex.pattern,
+     hl.regex.maxAnalyzedChars = hl.regex.maxAnalyzedChars))
+  args <- c(args, hl.fl)
+
+  out <- structure(solr_GET(handle_url(conn, name), args, callopts, conn$proxy),
+                   class = "sr_high", wt = wt)
+  if (raw) {
+    return(out)
+  } else {
+    parsed <- cont_parse(out, wt)
+    parsed <- structure(parsed, class = c(class(parsed), "sr_high"))
+    solr_parse(parsed, parsetype)
+  }
+}
diff --git a/R/solr_mlt.r b/R/solr_mlt.r
new file mode 100644
index 0000000..91b9218
--- /dev/null
+++ b/R/solr_mlt.r
@@ -0,0 +1,62 @@
+#' @title "more like this" search
+#'
+#' @description Returns only more like this items
+#'
+#' @export
+#' @template mlt
+#' @return XML, JSON, a list, or data.frame
+#' @references See \url{http://wiki.apache.org/solr/MoreLikeThis} for more
+#' information.
+#' @examples \dontrun{
+#' # connect
+#' solr_connect('http://api.plos.org/search')
+#'
+#' # more like this search
+#' solr_mlt(q='*:*', mlt.count=2, mlt.fl='abstract', fl='score',
+#'   fq="doc_type:full")
+#' solr_mlt(q='*:*', rows=2, mlt.fl='title', mlt.mindf=1, mlt.mintf=1,
+#'   fl='alm_twitterCount')
+#' solr_mlt(q='title:"ecology" AND body:"cell"', mlt.fl='title', mlt.mindf=1,
+#'   mlt.mintf=1, fl='counter_total_all', rows=5)
+#' solr_mlt(q='ecology', mlt.fl='abstract', fl='title', rows=5)
+#' solr_mlt(q='ecology', mlt.fl='abstract', fl=c('score','eissn'),
+#'   rows=5)
+#' solr_mlt(q='ecology', mlt.fl='abstract', fl=c('score','eissn'),
+#'   rows=5, wt = "xml")
+#'
+#' # get raw data, and parse later if needed
+#' out <- solr_mlt(q='ecology', mlt.fl='abstract', fl='title',
+#'  rows=2, raw=TRUE)
+#' library('jsonlite')
+#' solr_parse(out, "df")
+#' }
+
+solr_mlt <- function(name = NULL, q='*:*', fq = NULL, mlt.count=NULL, mlt.fl=NULL, mlt.mintf=NULL,
+  mlt.mindf=NULL, mlt.minwl=NULL, mlt.maxwl=NULL, mlt.maxqt=NULL, mlt.maxntp=NULL,
+  mlt.boost=NULL, mlt.qf=NULL, fl=NULL, wt='json', start=0, rows=NULL, key = NULL,
+  callopts=list(), raw=FALSE, parsetype='df', concat=',') {
+
+  conn <- solr_settings()
+  check_conn(conn)
+  check_wt(wt)
+  fl_str <- paste0(fl, collapse = ",")
+  if (any(grepl('id', fl))) {
+    fl2 <- fl_str
+  } else {
+    fl2 <- sprintf('id,%s',fl_str)
+  }
+  args <- sc(list(q = q, fq = fq, mlt = 'true', fl = fl2, mlt.count = mlt.count, mlt.fl = mlt.fl,
+    mlt.mintf = mlt.mintf, mlt.mindf = mlt.mindf, mlt.minwl = mlt.minwl,
+    mlt.maxwl = mlt.maxwl, mlt.maxqt = mlt.maxqt, mlt.maxntp = mlt.maxntp,
+    mlt.boost = mlt.boost, mlt.qf = mlt.qf, start = start, rows = rows, wt = wt))
+
+  out <- structure(solr_GET(handle_url(conn, name), args, callopts, conn$proxy),
+                   class = "sr_mlt", wt = wt)
+  if (raw) {
+    return( out )
+  } else {
+    parsed <- cont_parse(out, wt)
+    parsed <- structure(parsed, class = c(class(parsed), "sr_mlt"))
+    solr_parse(parsed, parsetype, concat)
+  }
+}
diff --git a/R/solr_search.r b/R/solr_search.r
new file mode 100644
index 0000000..b6b8a44
--- /dev/null
+++ b/R/solr_search.r
@@ -0,0 +1,151 @@
+#' @title Solr search
+#'
+#' @description Returns only matched documents, and doesn't return other items,
+#' including facets, groups, mlt, stats, and highlights.
+#'
+#' @template search
+#' @return XML, JSON, a list, or data.frame
+#' @param wt (character) One of json (default), xml, or csv. Data type returned.
+#' If json, uses \code{\link[jsonlite]{fromJSON}} to parse. If xml, uses
+#' \code{\link[xml2]{read_xml}} to parse. If csv, uses \code{\link{read.table}} to parse.
+#' \code{wt=csv} gives the fastest performance, at least in all the cases we have
+#' tested, so consider using it when retrieving many documents.
+#' @seealso \code{\link{solr_highlight}}, \code{\link{solr_facet}}
+#' @references See \url{http://wiki.apache.org/solr/#Search_and_Indexing} for more information.
+#' @note Solr v1.2 was the first version to support csv. See
+#' \url{https://issues.apache.org/jira/browse/SOLR-66}
+#' @export
+#' @examples \dontrun{
+#' # connect
+#' solr_connect('http://api.plos.org/search')
+#'
+#' # search
+#' solr_search(q='*:*', rows=2, fl='id')
+#'
+#' # Search for word ecology in title and cell in the body
+#' solr_search(q='title:"ecology" AND body:"cell"', fl='title', rows=5)
+#'
+#' # Search for word "cell" and not "body" in the title field
+#' solr_search(q='title:"cell" -title:"lines"', fl='title', rows=5)
+#'
+#' # Wildcards
+#' ## Search for word that starts with "cell" in the title field
+#' solr_search(q='title:"cell*"', fl='title', rows=5)
+#'
+#' # Proximity searching
+#' ## Search for words "sports" and "alcohol" within seven words of each other
+#' solr_search(q='everything:"sports alcohol"~7', fl='abstract', rows=3)
+#'
+#' # Range searches
+#' ## Search for articles with Twitter count between 5 and 50
+#' solr_search(q='*:*', fl=c('alm_twitterCount','id'), fq='alm_twitterCount:[5 TO 50]',
+#' rows=10)
+#'
+#' # Boosts
+#' ## Assign higher boost to title matches than to abstract matches (compare the two calls)
+#' solr_search(q='title:"cell" abstract:"science"', fl='title', rows=3)
+#' solr_search(q='title:"cell"^1.5 AND abstract:"science"', fl='title', rows=3)
+#'
+#' # FunctionQuery queries
+#' ## This kind of query allows you to use the actual values of fields to calculate
+#' ## relevancy scores for returned documents
+#'
+#' ## Here, we search on the product of counter_total_all and alm_twitterCount
+#' ## metrics for articles in PLOS Journals
+#' solr_search(q="{!func}product($v1,$v2)", v1 = 'sqrt(counter_total_all)',
+#'    v2 = 'log(alm_twitterCount)', rows=5, fl=c('id','title'), fq='doc_type:full')
+#'
+#' ## here, search on the product of counter_total_all and alm_twitterCount, using
+#' ## a new temporary field "_val_"
+#' solr_search(q='_val_:"product(counter_total_all,alm_twitterCount)"',
+#'    rows=5, fl=c('id','title'), fq='doc_type:full')
+#'
+#' ## papers with most citations
+#' solr_search(q='_val_:"max(counter_total_all)"',
+#'    rows=5, fl=c('id','counter_total_all'), fq='doc_type:full')
+#'
+#' ## papers with most tweets
+#' solr_search(q='_val_:"max(alm_twitterCount)"',
+#'    rows=5, fl=c('id','alm_twitterCount'), fq='doc_type:full')
+#'
+#' ## using wt = csv
+#' solr_search(q='*:*', rows=50, fl=c('id','score'), fq='doc_type:full', wt="csv")
+#' solr_search(q='*:*', rows=50, fl=c('id','score'), fq='doc_type:full')
+#'
+#' # using a proxy
+#' # prox <- list(url = "186.249.1.146", port = 80)
+#' # solr_connect(url = 'http://api.plos.org/search', proxy = prox)
+#' # solr_search(q='*:*', rows=2, fl='id', callopts=verbose())
+#' ## vs. w/o a proxy
+#' # solr_connect(url = 'http://api.plos.org/search')
+#' # solr_search(q='*:*', rows=2, fl='id', callopts=verbose())
+#'
+#' # Pass on curl options to modify request
+#' solr_connect(url = 'http://api.plos.org/search')
+#' ## verbose
+#' solr_search(q='*:*', rows=2, fl='id', callopts=verbose())
+#' ## progress
+#' res <- solr_search(q='*:*', rows=200, fl='id', callopts=progress())
+#' ## timeout
+#' # solr_search(q='*:*', rows=200, fl='id', callopts=timeout(0.01))
+#' ## combine curl options using the c() function
+#' opts <- c(verbose(), progress())
+#' res <- solr_search(q='*:*', rows=200, fl='id', callopts=opts)
+#'
+#' ## Searching Europeana
+#' ### They don't return the expected Solr output, so we can get raw data, then parse separately
+#' solr_connect('http://europeana.eu/api/v2/search.json')
+#' key <- getOption("eu_key")
+#' dat <- solr_search(query='*:*', rows=5, wskey = key, raw=TRUE)
+#' library('jsonlite')
+#' head( jsonlite::fromJSON(dat)$items )
+#'
+#' # Connect to a local Solr instance
+#' ## not run - replace with your local Solr URL and collection/core name
+#' # solr_connect("localhost:8889")
+#' # solr_search("gettingstarted")
+#' }
+
+solr_search <- function(name = NULL, q='*:*', sort=NULL, start=NULL, rows=NULL, pageDoc=NULL,
+  pageScore=NULL, fq=NULL, fl=NULL, defType=NULL, timeAllowed=NULL, qt=NULL,
+  wt='json', NOW=NULL, TZ=NULL, echoHandler=NULL, echoParams=NULL, key = NULL,
+  callopts=list(), raw=FALSE, parsetype='df', concat=',', ...) {
+
+  check_defunct(...)
+  conn <- solr_settings()
+  check_conn(conn)
+  check_wt(wt)
+  if (!is.null(fl)) fl <- paste0(fl, collapse = ",")
+  args <- sc(list(q = q, sort = sort, start = start, rows = rows, pageDoc = pageDoc,
+      pageScore = pageScore, fl = fl, defType = defType,
+      timeAllowed = timeAllowed, qt = qt, wt = wt, NOW = NOW, TZ = TZ,
+      echoHandler = echoHandler, echoParams = echoParams))
+
+  # args that can be repeated
+  todonames <- "fq"
+  args <- c(args, collectargs(todonames))
+
+  # additional parameters
+  args <- c(args, list(...))
+  if ('query' %in% names(args)) {
+    args <- args[!names(args) %in% "q"]
+  }
+
+  out <- structure(solr_GET(handle_url(conn, name), args, callopts, conn$proxy),
+                   class = "sr_search", wt = wt)
+  if (raw) {
+    return( out )
+  } else {
+    parsed <- cont_parse(out, wt)
+    parsed <- structure(parsed, class = c(class(parsed), "sr_search"))
+    solr_parse(parsed, parsetype, concat)
+  }
+}
+
+handle_url <- function(conn, name) {
+  if (is.null(name)) {
+    conn$url
+  } else {
+    file.path(conn$url, "solr", name, "select")
+  }
+}
diff --git a/R/solr_stats.r b/R/solr_stats.r
new file mode 100644
index 0000000..beda65d
--- /dev/null
+++ b/R/solr_stats.r
@@ -0,0 +1,67 @@
+#' @title Solr stats
+#'
+#' @description Returns only stat items
+#'
+#' @template stats
+#' @return XML, JSON, a list, or data.frame
+#' @seealso \code{\link{solr_highlight}}, \code{\link{solr_facet}},
+#' \code{\link{solr_search}}, \code{\link{solr_mlt}}
+#' @references See \url{http://wiki.apache.org/solr/StatsComponent} for
+#' more information on Solr stats.
+#' @export
+#' @examples \dontrun{
+#' # connect
+#' solr_connect('http://api.plos.org/search')
+#'
+#' # get stats
+#' solr_stats(q='science', stats.field='counter_total_all', raw=TRUE)
+#' solr_stats(q='title:"ecology" AND body:"cell"',
+#'    stats.field=c('counter_total_all','alm_twitterCount'))
+#' solr_stats(q='ecology', stats.field=c('counter_total_all','alm_twitterCount'),
+#'    stats.facet='journal')
+#' solr_stats(q='ecology', stats.field=c('counter_total_all','alm_twitterCount'),
+#'    stats.facet=c('journal','volume'))
+#'
+#' # Get raw data, then parse later if you feel like it
+#' ## json
+#' out <- solr_stats(q='ecology', stats.field=c('counter_total_all','alm_twitterCount'),
+#'    stats.facet=c('journal','volume'), raw=TRUE)
+#' library("jsonlite")
+#' jsonlite::fromJSON(out)
+#' solr_parse(out) # list
+#' solr_parse(out, 'df') # data.frame
+#'
+#' ## xml
+#' out <- solr_stats(q='ecology', stats.field=c('counter_total_all','alm_twitterCount'),
+#'    stats.facet=c('journal','volume'), raw=TRUE, wt="xml")
+#' library("xml2")
+#' xml2::read_xml(unclass(out))
+#' solr_parse(out) # list
+#' solr_parse(out, 'df') # data.frame
+#'
+#' # Get verbose http call information
+#' library("httr")
+#' solr_stats(q='ecology', stats.field='alm_twitterCount',
+#'    callopts=verbose())
+#' }
+
+solr_stats <- function(name = NULL, q='*:*', stats.field=NULL, stats.facet=NULL,
+  wt='json', start=0, rows=0, key = NULL, callopts=list(), raw=FALSE, parsetype='df') {
+
+  conn <- solr_settings()
+  check_conn(conn)
+  check_wt(wt)
+  todonames <- c("q", "stats.field", "stats.facet", "start", "rows", "key", "wt")
+  args <- collectargs(todonames)
+  args$stats <- 'true'
+
+  out <- structure(solr_GET(handle_url(conn, name), args, callopts, conn$proxy),
+                   class = "sr_stats", wt = wt)
+  if (raw) {
+    return( out )
+  } else {
+    parsed <- cont_parse(out, wt)
+    parsed <- structure(parsed, class = c(class(parsed), "sr_stats"))
+    solr_parse(out, parsetype)
+  }
+}
diff --git a/R/solrium-package.R b/R/solrium-package.R
new file mode 100644
index 0000000..6465207
--- /dev/null
+++ b/R/solrium-package.R
@@ -0,0 +1,69 @@
+#' General purpose R interface to Solr.
+#' 
+#' This package has support for all the search endpoints, as well as a suite
+#' of functions for managing a Solr database, including adding and deleting 
+#' documents. 
+#'
+#' @section Important search functions:
+#'
+#' \itemize{
+#'   \item \code{\link{solr_search}} - General search, only returns documents
+#'   \item \code{\link{solr_all}} - General search, including all non-documents
+#'   in addition to documents: facets, highlights, groups, mlt, stats.
+#'   \item \code{\link{solr_facet}} - Faceting only (w/o general search)
+#'   \item \code{\link{solr_highlight}} - Highlighting only (w/o general search)
+#'   \item \code{\link{solr_mlt}} - More like this (w/o general search)
+#'   \item \code{\link{solr_group}} - Group search (w/o general search)
+#'   \item \code{\link{solr_stats}} - Stats search (w/o general search)
+#' }
+#' 
+#' @section Important Solr management functions:
+#'
+#' \itemize{
+#'   \item \code{\link{update_json}} - Add or delete documents using json in a 
+#'   file
+#'   \item \code{\link{add}} - Add documents via an R list or data.frame
+#'   \item \code{\link{delete_by_id}} - Delete documents by ID
+#'   \item \code{\link{delete_by_query}} - Delete documents by query
+#' } 
+#'
+#' @section Vignettes:
+#'
+#' See the vignettes for help \code{browseVignettes(package = "solrium")}
+#'
+#' @section Performance:
+#'
+#' \code{v0.2} and above of this package will have \code{wt=csv} as the default.
+#' This should give a significant performance improvement over the previous
+#' default of \code{wt=json}, which pulled down JSON, parsed it to an R list,
+#' then to a data.frame. With \code{wt=csv}, we pull down CSV and read it
+#' directly into a data.frame.
+#'
+#' The http library we use, \pkg{httr}, sets the gzip compression header by
+#' default. As long as compression is enabled server side, you're good to go on
+#' compression, which should be a good performance boost. See
+#' \url{https://wiki.apache.org/solr/SolrPerformanceFactors#Query_Response_Compression}
+#' for notes on how to enable compression.
+#'
+#' There are other notes about Solr performance at
+#' \url{https://wiki.apache.org/solr/SolrPerformanceFactors} that apply
+#' server side/in your Solr config, but aren't things you can tune from
+#' this R client.
+#'
+#' Let us know if there are any further performance improvements we can make.
+#'
+#' @importFrom utils URLdecode head modifyList read.table
+#' @importFrom httr GET POST stop_for_status content content_type_json
+#' content_type_xml content_type upload_file http_condition http_status
+#' @importFrom xml2 read_xml xml_children xml_find_first xml_find_all
+#' xml_name xml_text xml_attr xml_attrs
+#' @importFrom jsonlite fromJSON
+#' @importFrom plyr rbind.fill
+#' @importFrom dplyr bind_rows
+#' @importFrom tibble data_frame as_data_frame as_tibble add_column
+#' @name solrium-package
+#' @aliases solrium
+#' @docType package
+#' @author Scott Chamberlain \email{myrmecocystus@@gmail.com}
+#' @keywords package
+NULL
diff --git a/R/update_csv.R b/R/update_csv.R
new file mode 100644
index 0000000..7f85218
--- /dev/null
+++ b/R/update_csv.R
@@ -0,0 +1,45 @@
+#' Update documents using CSV
+#'
+#' @export
+#' @family update
+#' @template csvcreate
+#' @param files Path to file to load into Solr
+#' @param name (character) Name of the core or collection
+#' @param wt (character) One of json (default) or xml. If json, uses
+#' \code{\link[jsonlite]{fromJSON}} to parse. If xml, uses \code{\link[xml2]{read_xml}} to parse
+#' @param raw (logical) If TRUE, returns raw data in format specified by wt param
+#' @param ... curl options passed on to \code{\link[httr]{GET}}
+#' @note Solr v1.2 was the first version to support CSV. See
+#' \url{https://issues.apache.org/jira/browse/SOLR-66}
+#' @examples \dontrun{
+#' # start Solr in Schemaless mode: bin/solr start -e schemaless
+#'
+#' # connect
+#' solr_connect()
+#'
+#' df <- data.frame(id=1:3, name=c('red', 'blue', 'green'))
+#' write.csv(df, file="df.csv", row.names=FALSE, quote = FALSE)
+#' update_csv("df.csv", "books")
+#'
+#' # give back xml
+#' update_csv("df.csv", "books", wt = "xml")
+#' ## raw xml
+#' update_csv("df.csv", "books", wt = "xml", raw = FALSE)
+#' }
+update_csv <- function(files, name, separator = ',', header = TRUE,
+                       fieldnames = NULL, skip = NULL, skipLines = 0, trim = FALSE,
+                       encapsulator = NULL, escape = NULL, keepEmpty = FALSE, literal = NULL,
+                       map = NULL, split = NULL, rowid = NULL, rowidOffset = NULL, overwrite = NULL,
+                       commit = NULL, wt = 'json', raw = FALSE, ...) {
+
+  conn <- solr_settings()
+  check_conn(conn)
+  stop_if_absent(name)
+  if (!is.null(fieldnames)) fieldnames <- paste0(fieldnames, collapse = ",")
+  args <- sc(list(separator = separator, header = header, fieldnames = fieldnames, skip = skip,
+                  skipLines = skipLines, trim = trim, encapsulator = encapsulator, escape = escape,
+                  keepEmpty = keepEmpty, literal = literal, map = map, split = split,
+                  rowid = rowid, rowidOffset = rowidOffset, overwrite = overwrite,
+                  commit = commit, wt = wt))
+  docreate(file.path(conn$url, sprintf('solr/%s/update/csv', name)), files, args, content = "csv", raw, ...)
+}
diff --git a/R/update_json.R b/R/update_json.R
new file mode 100644
index 0000000..0a3def8
--- /dev/null
+++ b/R/update_json.R
@@ -0,0 +1,50 @@
+#' Update documents using JSON
+#'
+#' @export
+#' @family update
+#' @template update
+#' @template commitcontrol
+#' @param files Path to file to load into Solr
+#' @examples \dontrun{
+#' # start Solr in Schemaless mode: bin/solr start -e schemaless
+#' 
+#' # connect
+#' solr_connect()
+#'
+#' # Add documents
+#' file <- system.file("examples", "books2.json", package = "solrium")
+#' cat(readLines(file), sep = "\n")
+#' update_json(file, "books")
+#'
+#' # Update commands - can include many varying commands
+#' ## Add file
+#' file <- system.file("examples", "updatecommands_add.json", package = "solrium")
+#' cat(readLines(file), sep = "\n")
+#' update_json(file, "books")
+#'
+#' ## Delete file
+#' file <- system.file("examples", "updatecommands_delete.json", package = "solrium")
+#' cat(readLines(file), sep = "\n")
+#' update_json(file, "books")
+#'
+#' # Add and delete in the same document
+#' ## Add a document first, that we can later delete
+#' ss <- list(list(id = 456, name = "cat"))
+#' add(ss, "books")
+#' ## Now add a new document, and delete the one we just made
+#' file <- system.file("examples", "add_delete.json", package = "solrium")
+#' cat(readLines(file), sep = "\n")
+#' update_json(file, "books")
+#' }
+update_json <- function(files, name, commit = TRUE, optimize = FALSE, max_segments = 1,
+                        expunge_deletes = FALSE, wait_searcher = TRUE, soft_commit = FALSE,
+                        prepare_commit = NULL, wt = 'json', raw = FALSE, ...) {
+
+  conn <- solr_settings()
+  check_conn(conn)
+  #stop_if_absent(name)
+  args <- sc(list(commit = asl(commit), optimize = asl(optimize), maxSegments = max_segments,
+                  expungeDeletes = asl(expunge_deletes), waitSearcher = asl(wait_searcher),
+                  softCommit = asl(soft_commit), prepareCommit = prepare_commit, wt = wt))
+  docreate(file.path(conn$url, sprintf('solr/%s/update/json/docs', name)), files, args, 'json', raw, ...)
+}
diff --git a/R/update_xml.R b/R/update_xml.R
new file mode 100644
index 0000000..7eb646f
--- /dev/null
+++ b/R/update_xml.R
@@ -0,0 +1,50 @@
+#' Update documents using XML
+#'
+#' @export
+#' @family update
+#' @template update
+#' @template commitcontrol
+#' @param files Path to file to load into Solr
+#' @examples \dontrun{
+#' # start Solr in Schemaless mode: bin/solr start -e schemaless
+#' 
+#' # connect
+#' solr_connect()
+#'
+#' # Add documents
+#' file <- system.file("examples", "books.xml", package = "solrium")
+#' cat(readLines(file), sep = "\n")
+#' update_xml(file, "books")
+#'
+#' # Update commands - can include many varying commands
+#' ## Add files
+#' file <- system.file("examples", "books2_delete.xml", package = "solrium")
+#' cat(readLines(file), sep = "\n")
+#' update_xml(file, "books")
+#'
+#' ## Delete files
+#' file <- system.file("examples", "updatecommands_delete.xml", package = "solrium")
+#' cat(readLines(file), sep = "\n")
+#' update_xml(file, "books")
+#'
+#' ## Add and delete in the same document
+#' ## Add a document first, that we can later delete
+#' ss <- list(list(id = 456, name = "cat"))
+#' add(ss, "books")
+#' ## Now add a new document, and delete the one we just made
+#' file <- system.file("examples", "add_delete.xml", package = "solrium")
+#' cat(readLines(file), sep = "\n")
+#' update_xml(file, "books")
+#' }
+update_xml <- function(files, name, commit = TRUE, optimize = FALSE, max_segments = 1,
+                       expunge_deletes = FALSE, wait_searcher = TRUE, soft_commit = FALSE,
+                       prepare_commit = NULL, wt = 'json', raw = FALSE, ...) {
+
+  conn <- solr_settings()
+  check_conn(conn)
+  stop_if_absent(name)
+  args <- sc(list(commit = asl(commit), optimize = asl(optimize), maxSegments = max_segments,
+                  expungeDeletes = asl(expunge_deletes), waitSearcher = asl(wait_searcher),
+                  softCommit = asl(soft_commit), prepareCommit = prepare_commit, wt = wt))
+  docreate(file.path(conn$url, sprintf('solr/%s/update', name)), files, args, content = 'xml', raw, ...)
+}
diff --git a/R/zzz.r b/R/zzz.r
new file mode 100644
index 0000000..aa5e719
--- /dev/null
+++ b/R/zzz.r
@@ -0,0 +1,239 @@
+#' Function to make multiple args of the same name from a
+#' single input with length > 1
+#' @param x Value
+makemultiargs <- function(x){
+  value <- get(x, envir = parent.frame(n = 2))
+  if ( length(value) == 0 ) { 
+    NULL 
+  } else {
+    if ( any(sapply(value, is.na)) ) { 
+      NULL 
+    } else {
+      if ( !is.character(value) ) { 
+        value <- as.character(value)
+      }
+      names(value) <- rep(x, length(value))
+      value
+    }
+  }
+}
+
+popp <- function(x, nms) {
+  x[!names(x) %in% nms]
+}
+
+#' Function to make a list of args, passing arg names through the multiargs function.
+#' @param x Value
+collectargs <- function(x){
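+  # e.g. with fq = c("a", "b") defined in the calling function, collectargs("fq")
+  # returns list(fq = "a", fq = "b"), which is sent as repeated fq query parameters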
+  outlist <- list()
+  for (i in seq_along(x)) {
+    outlist[[i]] <- makemultiargs(x[[i]])
+  }
+  as.list(unlist(sc(outlist)))
+}
+
+# GET helper fxn
+solr_GET <- function(base, args, callopts = NULL, ...){
+  tt <- GET(base, query = args, callopts, ...)
+  if (solr_settings()$verbose) message(URLdecode(tt$url))
+  if (tt$status_code > 201) {
+    solr_error(tt)
+  } else {
+    content(tt, as = "text", encoding = "UTF-8")
+  }
+}
+
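+# error handler: the SOLR_ERRORS setting ("simple" or "complete") controls
+# whether the server's stack trace is appended to the message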
+solr_error <- function(x) {
+  if (grepl("html", x$headers$`content-type`)) {
+    stop(http_status(x)$message, call. = FALSE)
+  } else { 
+    err <- jsonlite::fromJSON(content(x, "text", encoding = "UTF-8"))
+    erropt <- Sys.getenv("SOLR_ERRORS")
+    if (erropt == "simple" || erropt == "") {
+      stop(err$error$code, " - ", err$error$msg, call. = FALSE)
+    } else {
+      stop(err$error$code, " - ", err$error$msg, 
+           "\nAPI stack trace\n", 
+           pluck_trace(err$error$trace), call. = FALSE)
+    }
+  }
+}
+
+pluck_trace <- function(x) {
+  if (is.null(x)) {
+    " - no stack trace"
+  } else {
+    x
+  }
+}
+
+# POST helper fxn
+solr_POST <- function(base, body, args, content, ...) {
+  invisible(match.arg(args$wt, c("xml", "json", "csv")))
+  ctype <- get_ctype(content)
+  args <- lapply(args, function(x) if (is.logical(x)) tolower(x) else x)
+  tt <- POST(base, query = args, body = upload_file(path = body), ctype)
+  get_response(tt)
+}
+
+# POST helper fxn - just a body
+solr_POST_body <- function(base, body, args, ...) {
+  invisible(match.arg(args$wt, c("xml", "json")))
+  tt <- POST(base, query = args, body = body, 
+             content_type_json(), encode = "json", ...)
+  get_response(tt)
+}
+
+# POST helper fxn for R objects
+obj_POST <- function(base, body, args, ...) {
+  invisible(match.arg(args$wt, c("xml", "json", "csv")))
+  args <- lapply(args, function(x) if (is.logical(x)) tolower(x) else x)
+  body <- jsonlite::toJSON(body, auto_unbox = TRUE)
+  tt <- POST(base, query = args, body = body, content_type_json(), ...)
+  get_response(tt)
+}
+
+# check if core/collection exists, if not stop
+stop_if_absent <- function(x) {
+  tmp <- vapply(list(core_exists, collection_exists), function(z) {
+    tmp <- tryCatch(z(x), error = function(e) e)
+    if (inherits(tmp, "error")) FALSE else tmp
+  }, logical(1))
+  if (!any(tmp)) {
+    stop(x, " doesn't exist - create it first.\n See core_create() or collection_create()", 
+         call. = FALSE)
+  }
+}
+
+# helper for POSTing from R objects
+obj_proc <- function(url, body, args, raw, ...) {
+  out <- structure(obj_POST(url, body, args, ...), class = "update", wt = args$wt)
+  if (raw) {
+    out
+  } else {
+    solr_parse(out) 
+  }
+}
+
+get_ctype <- function(x) {
+  switch(x, 
+         xml = content_type_xml(),
+         json = content_type_json(),
+         csv = content_type("application/csv; charset=utf-8")
+  )
+}
+
+get_response <- function(x, as = "text") {
+  if (x$status_code > 201) {
+    err <- jsonlite::fromJSON(httr::content(x, "text", encoding = "UTF-8"))$error
+    stop(sprintf("%s: %s", err$code, err$msg), call. = FALSE)
+  } else {
+    content(x, as = as, encoding = "UTF-8")
+  }
+}
+
+# small function to replace elements of length 0 with NULL
+replacelen0 <- function(x) {
+  if (length(x) < 1) { 
+    NULL 
+  } else { 
+    x 
+  }
+}
+  
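+# drop NULL elements from a list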
+sc <- function(l) Filter(Negate(is.null), l)
+
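+# coerce logicals (or "true"/"false" strings) to the lowercase 'true'/'false' Solr expects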
+asl <- function(z) {
+  if (is.null(z)) {
+    NULL
+  } else {
+    if (is.logical(z) || tolower(z) == "true" || tolower(z) == "false") {
+      if (z) {
+        return('true')
+      } else {
+        return('false')
+      }
+    } else {
+      return(z)
+    }
+  }
+}
+
+docreate <- function(base, files, args, content, raw, ...) {
+  out <- structure(solr_POST(base, files, args, content, ...), class = "update", wt = args$wt)
+  if (raw) { 
+    return(out) 
+  } else { 
+    solr_parse(out) 
+  } 
+}
+
+objcreate <- function(base, dat, args, raw, ...) {
+  out <- structure(solr_POST(base, dat, args, "json", ...), class = "update", wt = args$wt)
+  if (raw) { 
+    return(out) 
+  } else { 
+    solr_parse(out) 
+  } 
+}
+
+check_conn <- function(x) {
+  if (!inherits(x, "solr_connection")) {
+    stop("Input to conn parameter must be an object of class solr_connection", 
+         call. = FALSE)
+  }
+  if (is.null(x)) {
+    stop("You must provide a connection object", 
+         call. = FALSE)
+  }
+}
+
+check_wt <- function(x) {
+  if (!x %in% c('json', 'xml', 'csv')) {
+    stop("wt must be one of: json, xml, csv", 
+         call. = FALSE)
+  }  
+}
+
+check_defunct <- function(...) {
+  calls <- names(sapply(match.call(), deparse))[-1]
+  calls_vec <- "verbose" %in% calls
+  if (any(calls_vec)) {
+    stop("The parameter verbose has been removed - see ?solr_connect", 
+         call. = FALSE)
+  }
+}
+
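+# detect SolrCloud mode by probing the collections admin endpoint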
+is_in_cloud_mode <- function(x) {
+  res <- GET(file.path(x$url, "solr/admin/collections"), 
+             query = list(wt = 'json'))
+  if (res$status_code > 201) return(FALSE)
+  msg <- jsonlite::fromJSON(content(res, "text", encoding = "UTF-8"))$error$msg
+  if (grepl("not running", msg)) {
+    FALSE
+  } else {
+    TRUE
+  }
+}
+
+json_parse <- function(x, raw) {
+  if (raw) {
+    x
+  } else {
+    jsonlite::fromJSON(x)
+  }
+}
+
+unbox_if <- function(x, recursive = FALSE) {
+  if (!is.null(x)) {
+    if (recursive) {
+      rapply(x, jsonlite::unbox, how = "list")
+    } else {
+      lapply(x, jsonlite::unbox)
+    }
+  } else {
+    NULL
+  }
+}
+
+`%||%` <- function(x, y) if (is.null(x) || is.na(x)) y else x
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..a479cbd
--- /dev/null
+++ b/README.md
@@ -0,0 +1,485 @@
+solrium
+=======
+
+
+
+[![Build Status](https://api.travis-ci.org/ropensci/solrium.png)](https://travis-ci.org/ropensci/solrium)
+[![codecov.io](https://codecov.io/github/ropensci/solrium/coverage.svg?branch=master)](https://codecov.io/github/ropensci/solrium?branch=master)
+[![rstudio mirror downloads](http://cranlogs.r-pkg.org/badges/solrium?color=2ED968)](https://github.com/metacran/cranlogs.app)
+
+**A general purpose R interface to [Solr](http://lucene.apache.org/solr/)**
+
+Development now follows Solr v5 and greater, which introduced many changes; many functions here may not work with Solr installations older than v5.
+
+Be aware that some functions will only work in certain Solr modes, e.g., `collection_create()` won't work when you are not in SolrCloud mode. You should, however, get an error message stating that.
+
+> Currently developing against Solr `v5.4.1`
+
+> Note that we recently changed the package name to `solrium`. A previous version of this package is on CRAN as `solr`, but the next version will be up as `solrium`.
+
+## Solr info
+
++ [Solr home page](http://lucene.apache.org/solr/)
++ [Highlighting help](http://wiki.apache.org/solr/HighlightingParameters)
++ [Faceting help](http://wiki.apache.org/solr/SimpleFacetParameters)
++ [Solr stats](http://wiki.apache.org/solr/StatsComponent)
++ ['More like this' searches](http://wiki.apache.org/solr/MoreLikeThis)
++ [Grouping/Field collapsing](http://wiki.apache.org/solr/FieldCollapsing)
++ [Install and Setup SOLR in OSX, including running Solr](http://risnandar.wordpress.com/2013/09/08/how-to-install-and-setup-apache-lucene-solr-in-osx/)
++ [Solr csv writer](http://wiki.apache.org/solr/CSVResponseWriter)
+
+## Install
+
+Stable version from CRAN
+
+
+```r
+install.packages("solrium")
+```
+
+Or development version from GitHub
+
+
+```r
+devtools::install_github("ropensci/solrium")
+```
+
+
+```r
+library("solrium")
+```
+
+## Setup
+
+Use `solr_connect()` to initialize your connection. These examples use a remote Solr server, but the same calls work against a local Solr server.
+
+
+```r
+invisible(solr_connect('http://api.plos.org/search'))
+```
+
+You can also set whether you want simple or detailed error messages (via `errors`), whether the URL used in each function call is printed (via `verbose`), and your proxy settings (via `proxy`) if needed. For example:
+
+
+```r
+solr_connect("localhost:8983", errors = "complete", verbose = FALSE)
+```
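+
+If you need a proxy, pass a list (a sketch; the address here is just a placeholder, taken from the `solr_search` docs):
+
+
+```r
+prox <- list(url = "186.249.1.146", port = 80)
+solr_connect("localhost:8983", proxy = prox)
+```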
+
+Then you can get your settings like
+
+
+```r
+solr_settings()
+#> <solr_connection>
+#>   url:    localhost:8983
+#>   errors: complete
+#>   verbose: FALSE
+#>   proxy:
+```
+
+## Search
+
+
+```r
+solr_search(q='*:*', rows=2, fl='id')
+#> Source: local data frame [2 x 1]
+#>
+#>                                                              id
+#>                                                           (chr)
+#> 1       10.1371/annotation/d090733e-1f34-43c5-a06a-255456946303
+#> 2 10.1371/annotation/d090733e-1f34-43c5-a06a-255456946303/title
+```
+
+### Search grouped data
+
+Most recent publication by journal
+
+
+```r
+solr_group(q='*:*', group.field='journal', rows=5, group.limit=1, group.sort='publication_date desc', fl='publication_date, score')
+#>                         groupValue numFound start     publication_date
+#> 1                         plos one  1233651     0 2016-02-05T00:00:00Z
+#> 2                   plos pathogens    42827     0 2016-02-05T00:00:00Z
+#> 3                     plos biology    28755     0 2016-02-04T00:00:00Z
+#> 4 plos neglected tropical diseases    33921     0 2016-02-05T00:00:00Z
+#> 5                    plos genetics    49295     0 2016-02-05T00:00:00Z
+#>   score
+#> 1     1
+#> 2     1
+#> 3     1
+#> 4     1
+#> 5     1
+```
+
+First publication by journal
+
+
+```r
+solr_group(q='*:*', group.field='journal', group.limit=1, group.sort='publication_date asc', fl='publication_date, score', fq="publication_date:[1900-01-01T00:00:00Z TO *]")
+#>                          groupValue numFound start     publication_date
+#> 1                          plos one  1233651     0 2006-12-20T00:00:00Z
+#> 2                    plos pathogens    42827     0 2005-07-22T00:00:00Z
+#> 3                      plos biology    28755     0 2003-08-18T00:00:00Z
+#> 4  plos neglected tropical diseases    33921     0 2007-08-30T00:00:00Z
+#> 5                     plos genetics    49295     0 2005-06-17T00:00:00Z
+#> 6                     plos medicine    19944     0 2004-09-07T00:00:00Z
+#> 7        plos computational biology    36383     0 2005-06-24T00:00:00Z
+#> 8                              none    57557     0 2005-08-23T00:00:00Z
+#> 9              plos clinical trials      521     0 2006-04-21T00:00:00Z
+#> 10                     plos medicin        9     0 2012-04-17T00:00:00Z
+#>    score
+#> 1      1
+#> 2      1
+#> 3      1
+#> 4      1
+#> 5      1
+#> 6      1
+#> 7      1
+#> 8      1
+#> 9      1
+#> 10     1
+```
+
+Search group query: last 3 publications of 2013.
+
+
+```r
+solr_group(q='*:*', group.query='publication_date:[2013-01-01T00:00:00Z TO 2013-12-31T00:00:00Z]', group.limit = 3, group.sort='publication_date desc', fl='publication_date')
+#>   numFound start     publication_date
+#> 1   307081     0 2013-12-31T00:00:00Z
+#> 2   307081     0 2013-12-31T00:00:00Z
+#> 3   307081     0 2013-12-31T00:00:00Z
+```
+
+Search group with format simple
+
+
+```r
+solr_group(q='*:*', group.field='journal', rows=5, group.limit=3, group.sort='publication_date desc', group.format='simple', fl='journal, publication_date')
+#>   numFound start        journal     publication_date
+#> 1  1508973     0       PLOS ONE 2016-02-05T00:00:00Z
+#> 2  1508973     0       PLOS ONE 2016-02-05T00:00:00Z
+#> 3  1508973     0       PLOS ONE 2016-02-05T00:00:00Z
+#> 4  1508973     0 PLOS Pathogens 2016-02-05T00:00:00Z
+#> 5  1508973     0 PLOS Pathogens 2016-02-05T00:00:00Z
+```
+
+### Facet
+
+
+```r
+solr_facet(q='*:*', facet.field='journal', facet.query='cell,bird')
+#> $facet_queries
+#>        term value
+#> 1 cell,bird    24
+#>
+#> $facet_fields
+#> $facet_fields$journal
+#>                                 X1      X2
+#> 1                         plos one 1233651
+#> 2                    plos genetics   49295
+#> 3                   plos pathogens   42827
+#> 4       plos computational biology   36383
+#> 5 plos neglected tropical diseases   33921
+#> 6                     plos biology   28755
+#> 7                    plos medicine   19944
+#> 8             plos clinical trials     521
+#> 9                     plos medicin       9
+#>
+#>
+#> $facet_pivot
+#> NULL
+#>
+#> $facet_dates
+#> NULL
+#>
+#> $facet_ranges
+#> NULL
+```
+
+### Highlight
+
+
+```r
+solr_highlight(q='alcohol', hl.fl = 'abstract', rows=2)
+#> $`10.1371/journal.pmed.0040151`
+#> $`10.1371/journal.pmed.0040151`$abstract
+#> [1] "Background: <em>Alcohol</em> consumption causes an estimated 4% of the global disease burden, prompting"
+#>
+#>
+#> $`10.1371/journal.pone.0027752`
+#> $`10.1371/journal.pone.0027752`$abstract
+#> [1] "Background: The negative influences of <em>alcohol</em> on TB management with regard to delays in seeking"
+```
+
+### Stats
+
+
+```r
+out <- solr_stats(q='ecology', stats.field=c('counter_total_all','alm_twitterCount'), stats.facet='journal')
+```
+
+
+```r
+out$data
+#>                   min    max count missing       sum sumOfSquares
+#> counter_total_all   0 366453 31467       0 140736717 3.127644e+12
+#> alm_twitterCount    0   1753 31467       0    166651 3.225792e+07
+#>                          mean     stddev
+#> counter_total_all 4472.517781 8910.30381
+#> alm_twitterCount     5.296056   31.57718
+```
+
+### More like this
+
+`solr_mlt` is a function to return documents similar to the ones matched by your query
+
+
+```r
+out <- solr_mlt(q='title:"ecology" AND body:"cell"', mlt.fl='title', mlt.mindf=1, mlt.mintf=1, fl='counter_total_all', rows=5)
+```
+
+
+```r
+out$docs
+#> Source: local data frame [5 x 2]
+#>
+#>                             id counter_total_all
+#>                          (chr)             (int)
+#> 1 10.1371/journal.pbio.1001805             17004
+#> 2 10.1371/journal.pbio.0020440             23871
+#> 3 10.1371/journal.pone.0087217              5904
+#> 4 10.1371/journal.pbio.1002191             12846
+#> 5 10.1371/journal.pone.0040117              4294
+```
+
+
+```r
+out$mlt
+#> $`10.1371/journal.pbio.1001805`
+#>                             id counter_total_all
+#> 1 10.1371/journal.pone.0082578              2192
+#> 2 10.1371/journal.pone.0098876              2434
+#> 3 10.1371/journal.pone.0102159              1166
+#> 4 10.1371/journal.pone.0076063              3217
+#> 5 10.1371/journal.pone.0087380              1883
+#>
+#> $`10.1371/journal.pbio.0020440`
+#>                             id counter_total_all
+#> 1 10.1371/journal.pone.0035964              5524
+#> 2 10.1371/journal.pone.0102679              3085
+#> 3 10.1371/journal.pone.0003259              2784
+#> 4 10.1371/journal.pone.0068814              7503
+#> 5 10.1371/journal.pone.0101568              2648
+#>
+#> $`10.1371/journal.pone.0087217`
+#>                             id counter_total_all
+#> 1 10.1371/journal.pone.0131665               403
+#> 2 10.1371/journal.pcbi.0020092             19563
+#> 3 10.1371/journal.pone.0133941               463
+#> 4 10.1371/journal.pone.0123774               990
+#> 5 10.1371/journal.pone.0140306               321
+#>
+#> $`10.1371/journal.pbio.1002191`
+#>                             id counter_total_all
+#> 1 10.1371/journal.pbio.1002232              1936
+#> 2 10.1371/journal.pone.0131700               972
+#> 3 10.1371/journal.pone.0070448              1607
+#> 4 10.1371/journal.pone.0144763               483
+#> 5 10.1371/journal.pone.0062824              2531
+#>
+#> $`10.1371/journal.pone.0040117`
+#>                             id counter_total_all
+#> 1 10.1371/journal.pone.0069352              2743
+#> 2 10.1371/journal.pone.0148280                 0
+#> 3 10.1371/journal.pone.0035502              4016
+#> 4 10.1371/journal.pone.0014065              5744
+#> 5 10.1371/journal.pone.0113280              1977
+```
+
+### Parsing
+
+`solr_parse` is a general purpose parser function with extension methods `solr_parse.sr_search`, `solr_parse.sr_facet`, and `solr_parse.sr_high`, for parsing `solr_search`, `solr_facet`, and `solr_highlight` function output, respectively. `solr_parse` is used internally within those three functions to do parsing. You can optionally get back raw `json` or `xml` from `solr_search`, `solr_facet`, and `solr_highlight` by setting the parameter `raw=TRUE`, and then parse afterwards with `solr_parse`.
+
+For example:
+
+
+```r
+(out <- solr_highlight(q='alcohol', hl.fl = 'abstract', rows=2, raw=TRUE))
+#> [1] "{\"response\":{\"numFound\":20268,\"start\":0,\"docs\":[{},{}]},\"highlighting\":{\"10.1371/journal.pmed.0040151\":{\"abstract\":[\"Background: <em>Alcohol</em> consumption causes an estimated 4% of the global disease burden, prompting\"]},\"10.1371/journal.pone.0027752\":{\"abstract\":[\"Background: The negative influences of <em>alcohol</em> on TB management with regard to delays in seeking\"]}}}\n"
+#> attr(,"class")
+#> [1] "sr_high"
+#> attr(,"wt")
+#> [1] "json"
+```
+
+Then parse
+
+
+```r
+solr_parse(out, 'df')
+#>                          names
+#> 1 10.1371/journal.pmed.0040151
+#> 2 10.1371/journal.pone.0027752
+#>                                                                                                    abstract
+#> 1   Background: <em>Alcohol</em> consumption causes an estimated 4% of the global disease burden, prompting
+#> 2 Background: The negative influences of <em>alcohol</em> on TB management with regard to delays in seeking
+```
+
+### Advanced: Function Queries
+
+Function queries allow you to query on actual numeric fields in the Solr database, and do addition, multiplication, etc. on one or many fields to sort results. For example, here we search on the product of counter_total_all and alm_twitterCount, using a new temporary field "_val_"
+
+
+```r
+solr_search(q='_val_:"product(counter_total_all,alm_twitterCount)"',
+  rows=5, fl='id,title', fq='doc_type:full')
+#> Source: local data frame [5 x 2]
+#>
+#>                             id
+#>                          (chr)
+#> 1 10.1371/journal.pmed.0020124
+#> 2 10.1371/journal.pone.0073791
+#> 3 10.1371/journal.pone.0115069
+#> 4 10.1371/journal.pone.0046362
+#> 5 10.1371/journal.pone.0069841
+#> Variables not shown: title (chr)
+```
+
+Here, we search for the papers with the most citations
+
+
+```r
+solr_search(q='_val_:"max(counter_total_all)"',
+    rows=5, fl='id,counter_total_all', fq='doc_type:full')
+#> Source: local data frame [5 x 2]
+#>
+#>                             id counter_total_all
+#>                          (chr)             (int)
+#> 1 10.1371/journal.pmed.0020124           1553063
+#> 2 10.1371/journal.pmed.0050045            378855
+#> 3 10.1371/journal.pcbi.0030102            374783
+#> 4 10.1371/journal.pone.0069841            366453
+#> 5 10.1371/journal.pone.0007595            362047
+```
+
+Or with the most tweets
+
+
+```r
+solr_search(q='_val_:"max(alm_twitterCount)"',
+    rows=5, fl='id,alm_twitterCount', fq='doc_type:full')
+#> Source: local data frame [5 x 2]
+#>
+#>                             id alm_twitterCount
+#>                          (chr)            (int)
+#> 1 10.1371/journal.pone.0061981             2383
+#> 2 10.1371/journal.pone.0115069             2338
+#> 3 10.1371/journal.pmed.0020124             2169
+#> 4 10.1371/journal.pbio.1001535             1753
+#> 5 10.1371/journal.pone.0073791             1624
+```
+
+### Using specific data sources
+
+__USGS BISON service__
+
+The occurrences service
+
+
+```r
+invisible(solr_connect("http://bison.usgs.ornl.gov/solrstaging/occurrences/select"))
+solr_search(q='*:*', fl=c('decimalLatitude','decimalLongitude','scientificName'), rows=2)
+#> Source: local data frame [2 x 3]
+#>
+#>   decimalLongitude decimalLatitude        scientificName
+#>              (dbl)           (dbl)                 (chr)
+#> 1         -98.2376         29.5502   Nyctanassa violacea
+#> 2         -98.2376         29.5502 Myiarchus cinerascens
+```
+
+The species names service
+
+
+```r
+invisible(solr_connect("http://bisonapi.usgs.ornl.gov/solr/scientificName/select"))
+solr_search(q='*:*', raw=TRUE)
+#> [1] "{\"responseHeader\":{\"status\":0,\"QTime\":12},\"response\":{\"numFound\":401329,\"start\":0,\"docs\":[{\"scientificName\":\"Catocala editha\",\"_version_\":1518645306257833984},{\"scientificName\":\"Dictyopteris polypodioides\",\"_version_\":1518645306259931136},{\"scientificName\":\"Lonicera iberica\",\"_version_\":1518645306259931137},{\"scientificName\":\"Pseudopomala brachyptera\",\"_version_\":1518645306259931138},{\"scientificName\":\"Lycopodium cernuum ingens\",\"_versio [...]
+#> attr(,"class")
+#> [1] "sr_search"
+#> attr(,"wt")
+#> [1] "json"
+```
+
+__PLOS Search API__
+
+Most of the examples above use the PLOS search API... :)
+
+## Solr server management
+
+This isn't as complete as the searching functions shown above, but we're getting there.
+
+### Cores
+
+Many functions, e.g.:
+
+* `core_create()`
+* `core_rename()`
+* `core_status()`
+* ...
+
+Create a core
+
+
+```r
+core_create(name = "foo_bar")
+```
+
+### Collections
+
+Many functions, e.g.:
+
+* `collection_create()`
+* `collection_list()`
+* `collection_addrole()`
+* ...
+
+Create a collection
+
+
+```r
+collection_create(name = "hello_world")
+```
+
+### Add documents
+
+Add documents: adding is supported from files (JSON, XML, or CSV format) and from R objects (`data.frame` and `list` types so far)
+
+
+```r
+df <- data.frame(id = c(67, 68), price = c(1000, 500000000))
+add(df, name = "books")
+```
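+
+Or add documents from an R list in the same way (a minimal sketch; assumes, as above, that a `books` core/collection exists):
+
+
+```r
+ss <- list(list(id = 1, price = 100), list(id = 2, price = 500))
+add(ss, name = "books")
+```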
+
+Delete documents, by id
+
+
+```r
+delete_by_id(ids = c(3, 4))
+```
+
+Or by query
+
+
+```r
+delete_by_query(query = "manu:bank")
+```
+
+## Meta
+
+* Please [report any issues or bugs](https://github.com/ropensci/solrium/issues)
+* License: MIT
+* Get citation information for `solrium` in R doing `citation(package = 'solrium')`
+* Please note that this project is released with a [Contributor Code of Conduct](CONDUCT.md). By participating in this project you agree to abide by its terms.
+
+[![ropensci_footer](http://ropensci.org/public_images/github_footer.png)](http://ropensci.org)
diff --git a/build/vignette.rds b/build/vignette.rds
new file mode 100644
index 0000000..8c713a4
Binary files /dev/null and b/build/vignette.rds differ
diff --git a/debian/README.test b/debian/README.test
deleted file mode 100644
index 90657cf..0000000
--- a/debian/README.test
+++ /dev/null
@@ -1,8 +0,0 @@
-Notes on how this package can be tested.
-────────────────────────────────────────
-
-This package can be tested by running the provided test:
-
-    sh run-unit-test
-
-in order to confirm its integrity.
diff --git a/debian/changelog b/debian/changelog
deleted file mode 100644
index 21c64ab..0000000
--- a/debian/changelog
+++ /dev/null
@@ -1,5 +0,0 @@
-r-cran-solrium (0.4.0-1) unstable; urgency=medium
-
-  * Initial release (closes: #849070)
-
- -- Andreas Tille <tille at debian.org>  Thu, 22 Dec 2016 12:42:50 +0100
diff --git a/debian/compat b/debian/compat
deleted file mode 100644
index f599e28..0000000
--- a/debian/compat
+++ /dev/null
@@ -1 +0,0 @@
-10
diff --git a/debian/control b/debian/control
deleted file mode 100644
index 7c8ac63..0000000
--- a/debian/control
+++ /dev/null
@@ -1,32 +0,0 @@
-Source: r-cran-solrium
-Maintainer: Debian Med Packaging Team <debian-med-packaging at lists.alioth.debian.org>
-Uploaders: Andreas Tille <tille at debian.org>
-Section: gnu-r
-Priority: optional
-Build-Depends: debhelper (>= 10),
-               dh-r,
-               r-base-dev,
-               r-cran-dplyr (>= 0.5.0),
-               r-cran-plyr (>= 1.8.4),
-               r-cran-httr (>= 1.2.0),
-               r-cran-xml2 (>= 1.0.0),
-               r-cran-jsonlite (>= 1.0),
-               r-cran-tibble
-Standards-Version: 3.9.8
-Vcs-Browser: https://anonscm.debian.org/viewvc/debian-med/trunk/packages/R/r-cran-solrium/trunk/
-Vcs-Svn: svn://anonscm.debian.org/debian-med/trunk/packages/R/r-cran-solrium/trunk/
-Homepage: https://cran.r-project.org/package=solrium
-
-Package: r-cran-solrium
-Architecture: all
-Depends: ${R:Depends},
-         ${shlibs:Depends},
-         ${misc:Depends}
-Recommends: ${R:Recommends}
-Suggests: ${R:Suggests}
-Description: general purpose R interface to 'Solr'
- This GNU R package provides a set of functions for querying and parsing
- data from 'Solr' (<http://lucene.apache.org/solr>) 'endpoints' (local
- and  remote), including search, 'faceting', 'highlighting', 'stats', and
- 'more like this'. In addition, some functionality is included for
- creating, deleting, and updating documents in a 'Solr' 'database'.
diff --git a/debian/copyright b/debian/copyright
deleted file mode 100644
index 1eb18ef..0000000
--- a/debian/copyright
+++ /dev/null
@@ -1,54 +0,0 @@
-Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
-Upstream-Name: solrium
-Upstream-Contact: Scott Chamberlain <myrmecocystus at gmail.com>
-Source: https://cran.r-project.org/package=solrium
-
-Files: *
-Copyright: 2014-2016 Scott Chamberlain
-License: MIT
-
-Files: inst/examples/schema.xml
-       inst/examples/solrconfig.xml
-Copyright: Apache Software Foundation
-License: Apache-2.0
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
- .
-     http://www.apache.org/licenses/LICENSE-2.0
- .
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
- .
- On Debian systems you can find the full text of the Apache 2.0 license
- at /usr/share/common-licenses/Apache-2.0
-
-Files: debian/*
-Copyright: 2016 Andreas Tille <tille at debian.org>
-License: MIT
-
-License: MIT
- Permission is hereby granted, free of charge, to any person obtaining a
- copy of this software and associated documentation files (the
- "Software"), to deal in the Software without restriction, including
- without limitation the rights to use, copy, modify, merge, publish,
- distribute, sublicense, and/or sell copies of the Software, and to
- permit persons to whom the Software is furnished to do so, subject to
- the following conditions:
- .
- The above copyright notice and this permission notice shall be included
- in all copies or substantial portions of the Software.
- .
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
- OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
- CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
- TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
- SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
diff --git a/debian/docs b/debian/docs
deleted file mode 100644
index 6466d39..0000000
--- a/debian/docs
+++ /dev/null
@@ -1,3 +0,0 @@
-debian/tests/run-unit-test
-debian/README.test
-tests
diff --git a/debian/rules b/debian/rules
deleted file mode 100755
index 529c38a..0000000
--- a/debian/rules
+++ /dev/null
@@ -1,5 +0,0 @@
-#!/usr/bin/make -f
-
-%:
-	dh $@ --buildsystem R
-
diff --git a/debian/source/format b/debian/source/format
deleted file mode 100644
index 163aaf8..0000000
--- a/debian/source/format
+++ /dev/null
@@ -1 +0,0 @@
-3.0 (quilt)
diff --git a/debian/tests/control b/debian/tests/control
deleted file mode 100644
index a62fb6e..0000000
--- a/debian/tests/control
+++ /dev/null
@@ -1,5 +0,0 @@
-Tests: run-unit-test
-Depends: @, r-cran-testthat,
-Restrictions: allow-stderr
-
-
diff --git a/debian/tests/run-unit-test b/debian/tests/run-unit-test
deleted file mode 100644
index a6bc9e3..0000000
--- a/debian/tests/run-unit-test
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/bin/sh -e
-
-pkgname=solrium
-debname=r-cran-solrium
-
-if [ "$ADTTMP" = "" ] ; then
-    ADTTMP=`mktemp -d /tmp/${debname}-test.XXXXXX`
-    trap "rm -rf $ADTTMP" 0 INT QUIT ABRT PIPE TERM
-fi
-cd $ADTTMP
-cp -a /usr/share/doc/$debname/tests/* $ADTTMP
-gunzip -r *
-for testfile in *.R; do
-    echo "BEGIN TEST $testfile"
-    LC_ALL=C R --no-save < $testfile
-done
-
diff --git a/debian/watch b/debian/watch
deleted file mode 100644
index 499c465..0000000
--- a/debian/watch
+++ /dev/null
@@ -1,2 +0,0 @@
-version=4
-https://cran.r-project.org/src/contrib/solrium_([-\d.]*)\.tar\.gz
diff --git a/inst/doc/cores_collections.Rmd b/inst/doc/cores_collections.Rmd
new file mode 100644
index 0000000..33d4f3b
--- /dev/null
+++ b/inst/doc/cores_collections.Rmd
@@ -0,0 +1,119 @@
+<!--
+%\VignetteEngine{knitr::knitr}
+%\VignetteIndexEntry{Cores/collections management}
+%\VignetteEncoding{UTF-8}
+-->
+
+
+
+Cores/collections management
+============================
+
+## Installation
+
+Stable version from CRAN
+
+
+```r
+install.packages("solrium")
+```
+
+Or the development version from GitHub
+
+
+```r
+install.packages("devtools")
+devtools::install_github("ropensci/solrium")
+```
+
+Load
+
+
+```r
+library("solrium")
+```
+
+Initialize connection
+
+
+```r
+solr_connect()
+```
+
+```
+#> <solr_connection>
+#>   url:    http://localhost:8983
+#>   errors: simple
+#>   verbose: TRUE
+#>   proxy:
+```
+
+## Cores 
+
+There are many operations you can do on cores, including:
+
+* `core_create()` - create a core
+* `core_exists()` - check if a core exists
+* `core_mergeindexes()` - merge indexes
+* `core_reload()` - reload a core
+* `core_rename()` - rename a core
+* `core_requeststatus()` - check request status
+* `core_split()` - split a core
+* `core_status()` - check core status
+* `core_swap()` - core swap
+* `core_unload()` - delete a core
+
+### Create a core
+
+
+```r
+core_create()
+```
+
+### Delete a core
+
+
+```r
+core_unload()
+```
+
+## Collections
+
+There are many operations you can do on collections, including:
+
+* `collection_addreplica()` 
+* `collection_addreplicaprop()` 
+* `collection_addrole()` 
+* `collection_balanceshardunique()` 
+* `collection_clusterprop()` 
+* `collection_clusterstatus()` 
+* `collection_create()` 
+* `collection_createalias()` 
+* `collection_createshard()` 
+* `collection_delete()` 
+* `collection_deletealias()` 
+* `collection_deletereplica()` 
+* `collection_deletereplicaprop()` 
+* `collection_deleteshard()` 
+* `collection_list()` 
+* `collection_migrate()` 
+* `collection_overseerstatus()` 
+* `collection_rebalanceleaders()` 
+* `collection_reload()` 
+* `collection_removerole()` 
+* `collection_requeststatus()` 
+* `collection_splitshard()` 
+
+### Create a collection
+
+
+```r
+collection_create()
+```
+
+### Delete a collection
+
+
+```r
+collection_delete()
+```
diff --git a/inst/doc/cores_collections.html b/inst/doc/cores_collections.html
new file mode 100644
index 0000000..daf3c55
--- /dev/null
+++ b/inst/doc/cores_collections.html
@@ -0,0 +1,310 @@
+<!DOCTYPE html>
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
+
+<title>Cores/collections management</title>
+
+<script type="text/javascript">
+window.onload = function() {
+  var imgs = document.getElementsByTagName('img'), i, img;
+  for (i = 0; i < imgs.length; i++) {
+    img = imgs[i];
+    // center an image if it is the only element of its parent
+    if (img.parentElement.childElementCount === 1)
+      img.parentElement.style.textAlign = 'center';
+  }
+};
+</script>
+
+<!-- Styles for R syntax highlighter -->
+<style type="text/css">
+   pre .operator,
+   pre .paren {
+     color: rgb(104, 118, 135)
+   }
+
+   pre .literal {
+     color: #990073
+   }
+
+   pre .number {
+     color: #099;
+   }
+
+   pre .comment {
+     color: #998;
+     font-style: italic
+   }
+
+   pre .keyword {
+     color: #900;
+     font-weight: bold
+   }
+
+   pre .identifier {
+     color: rgb(0, 0, 0);
+   }
+
+   pre .string {
+     color: #d14;
+   }
+</style>
+
+<!-- R syntax highlighter -->
+<script type="text/javascript">
+var hljs=new function(){function m(p){return p.replace(/&/gm,"&").replace(/</gm,"<")}function f(r,q,p){return RegExp(q,"m"+(r.cI?"i":"")+(p?"g":""))}function b(r){for(var p=0;p<r.childNodes.length;p++){var q=r.childNodes[p];if(q.nodeName=="CODE"){return q}if(!(q.nodeType==3&&q.nodeValue.match(/\s+/))){break}}}function h(t,s){var p="";for(var r=0;r<t.childNodes.length;r++){if(t.childNodes[r].nodeType==3){var q=t.childNodes[r].nodeValue;if(s){q=q.replace(/\n/g,"")}p+=q}else{if(t.chi [...]
+hljs.initHighlightingOnLoad();
+</script>
+
+
+
+<style type="text/css">
+body, td {
+   font-family: sans-serif;
+   background-color: white;
+   font-size: 13px;
+}
+
+body {
+  max-width: 800px;
+  margin: auto;
+  padding: 1em;
+  line-height: 20px;
+}
+
+tt, code, pre {
+   font-family: 'DejaVu Sans Mono', 'Droid Sans Mono', 'Lucida Console', Consolas, Monaco, monospace;
+}
+
+h1 {
+   font-size:2.2em;
+}
+
+h2 {
+   font-size:1.8em;
+}
+
+h3 {
+   font-size:1.4em;
+}
+
+h4 {
+   font-size:1.0em;
+}
+
+h5 {
+   font-size:0.9em;
+}
+
+h6 {
+   font-size:0.8em;
+}
+
+a:visited {
+   color: rgb(50%, 0%, 50%);
+}
+
+pre, img {
+  max-width: 100%;
+}
+pre {
+  overflow-x: auto;
+}
+pre code {
+   display: block; padding: 0.5em;
+}
+
+code {
+  font-size: 92%;
+  border: 1px solid #ccc;
+}
+
+code[class] {
+  background-color: #F8F8F8;
+}
+
+table, td, th {
+  border: none;
+}
+
+blockquote {
+   color:#666666;
+   margin:0;
+   padding-left: 1em;
+   border-left: 0.5em #EEE solid;
+}
+
+hr {
+   height: 0px;
+   border-bottom: none;
+   border-top-width: thin;
+   border-top-style: dotted;
+   border-top-color: #999999;
+}
+
+@media print {
+   * {
+      background: transparent !important;
+      color: black !important;
+      filter:none !important;
+      -ms-filter: none !important;
+   }
+
+   body {
+      font-size:12pt;
+      max-width:100%;
+   }
+
+   a, a:visited {
+      text-decoration: underline;
+   }
+
+   hr {
+      visibility: hidden;
+      page-break-before: always;
+   }
+
+   pre, blockquote {
+      padding-right: 1em;
+      page-break-inside: avoid;
+   }
+
+   tr, img {
+      page-break-inside: avoid;
+   }
+
+   img {
+      max-width: 100% !important;
+   }
+
+   @page :left {
+      margin: 15mm 20mm 15mm 10mm;
+   }
+
+   @page :right {
+      margin: 15mm 10mm 15mm 20mm;
+   }
+
+   p, h2, h3 {
+      orphans: 3; widows: 3;
+   }
+
+   h2, h3 {
+      page-break-after: avoid;
+   }
+}
+</style>
+
+
+
+</head>
+
+<body>
+<!--
+%\VignetteEngine{knitr::knitr}
+%\VignetteIndexEntry{Cores/collections management}
+%\VignetteEncoding{UTF-8}
+-->
+
+<h1>Cores/collections management</h1>
+
+<h2>Installation</h2>
+
+<p>Stable version from CRAN</p>
+
+<pre><code class="r">install.packages("solrium")
+</code></pre>
+
+<p>Or the development version from GitHub</p>
+
+<pre><code class="r">install.packages("devtools")
+devtools::install_github("ropensci/solrium")
+</code></pre>
+
+<p>Load</p>
+
+<pre><code class="r">library("solrium")
+</code></pre>
+
+<p>Initialize connection</p>
+
+<pre><code class="r">solr_connect()
+</code></pre>
+
+<pre><code>#> <solr_connection>
+#>   url:    http://localhost:8983
+#>   errors: simple
+#>   verbose: TRUE
+#>   proxy:
+</code></pre>
+
+<h2>Cores</h2>
+
+<p>There are many operations you can do on cores, including:</p>
+
+<ul>
+<li><code>core_create()</code> - create a core</li>
+<li><code>core_exists()</code> - check if a core exists</li>
+<li><code>core_mergeindexes()</code> - merge indexes</li>
+<li><code>core_reload()</code> - reload a core</li>
+<li><code>core_rename()</code> - rename a core</li>
+<li><code>core_requeststatus()</code> - check request status</li>
+<li><code>core_split()</code> - split a core</li>
+<li><code>core_status()</code> - check core status</li>
+<li><code>core_swap()</code> - core swap</li>
+<li><code>core_unload()</code> - delete a core</li>
+</ul>
+
+<h3>Create a core</h3>
+
+<pre><code class="r">core_create()
+</code></pre>
+
+<h3>Delete a core</h3>
+
+<pre><code class="r">core_unload()
+</code></pre>
+
+<h2>Collections</h2>
+
+<p>There are many operations you can do on collections, including:</p>
+
+<ul>
+<li><code>collection_addreplica()</code> </li>
+<li><code>collection_addreplicaprop()</code> </li>
+<li><code>collection_addrole()</code> </li>
+<li><code>collection_balanceshardunique()</code> </li>
+<li><code>collection_clusterprop()</code> </li>
+<li><code>collection_clusterstatus()</code> </li>
+<li><code>collection_create()</code> </li>
+<li><code>collection_createalias()</code> </li>
+<li><code>collection_createshard()</code> </li>
+<li><code>collection_delete()</code> </li>
+<li><code>collection_deletealias()</code> </li>
+<li><code>collection_deletereplica()</code> </li>
+<li><code>collection_deletereplicaprop()</code> </li>
+<li><code>collection_deleteshard()</code> </li>
+<li><code>collection_list()</code> </li>
+<li><code>collection_migrate()</code> </li>
+<li><code>collection_overseerstatus()</code> </li>
+<li><code>collection_rebalanceleaders()</code> </li>
+<li><code>collection_reload()</code> </li>
+<li><code>collection_removerole()</code> </li>
+<li><code>collection_requeststatus()</code> </li>
+<li><code>collection_splitshard()</code> </li>
+</ul>
+
+<h3>Create a collection</h3>
+
+<pre><code class="r">collection_create()
+</code></pre>
+
+<h3>Delete a collection</h3>
+
+<pre><code class="r">collection_delete()
+</code></pre>
+
+</body>
+
+</html>
diff --git a/inst/doc/document_management.Rmd b/inst/doc/document_management.Rmd
new file mode 100644
index 0000000..aca9daa
--- /dev/null
+++ b/inst/doc/document_management.Rmd
@@ -0,0 +1,318 @@
+<!--
+%\VignetteEngine{knitr::knitr}
+%\VignetteIndexEntry{Document management}
+%\VignetteEncoding{UTF-8}
+-->
+
+
+
+Document management
+===================
+
+## Installation
+
+Stable version from CRAN
+
+
+```r
+install.packages("solrium")
+```
+
+Or the development version from GitHub
+
+
+```r
+install.packages("devtools")
+devtools::install_github("ropensci/solrium")
+```
+
+Load
+
+
+```r
+library("solrium")
+```
+
+Initialize connection. By default, you connect to `http://localhost:8983`
+
+
+```r
+solr_connect()
+```
+
+```
+#> <solr_connection>
+#>   url:    http://localhost:8983
+#>   errors: simple
+#>   verbose: TRUE
+#>   proxy:
+```
+
+## Create documents from R objects
+
+For now, only lists and data.frames are supported.
+
+### data.frame
+
+
+```r
+df <- data.frame(id = c(67, 68), price = c(1000, 500000000))
+add(df, "books")
+```
+
+```
+#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 112
+```
+
+### list
+
+
+
+
+```r
+ss <- list(list(id = 1, price = 100), list(id = 2, price = 500))
+add(ss, "books")
+```
+
+```
+#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 16
+```
+
+## Delete documents
+
+### By id
+
+Add some documents first
+
+
+
+
+```r
+docs <- list(list(id = 1, price = 100, name = "brown"),
+             list(id = 2, price = 500, name = "blue"),
+             list(id = 3, price = 2000L, name = "pink"))
+add(docs, "gettingstarted")
+```
+
+```
+#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 18
+```
+
+And the documents are now in your Solr database
+
+
+```r
+tail(solr_search(name = "gettingstarted", "*:*", base = "http://localhost:8983/solr/select", rows = 100))
+```
+
+```
+#> Source: local data frame [3 x 4]
+#> 
+#>      id price  name    _version_
+#>   (chr) (int) (chr)        (dbl)
+#> 1     1   100 brown 1.525729e+18
+#> 2     2   500  blue 1.525729e+18
+#> 3     3  2000  pink 1.525729e+18
+```
+
+Now delete those documents just added
+
+
+```r
+delete_by_id(ids = c(1, 2, 3), "gettingstarted")
+```
+
+```
+#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 24
+```
+
+And now they are gone
+
+
+```r
+tail(solr_search("gettingstarted", "*:*", base = "http://localhost:8983/solr/select", rows = 100))
+```
+
+```
+#> Source: local data frame [0 x 0]
+```
+
+### By query
+
+Add some documents first
+
+
+```r
+add(docs, "gettingstarted")
+```
+
+```
+#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 19
+```
+
+And the documents are now in your Solr database
+
+
+```r
+tail(solr_search("gettingstarted", "*:*", base = "http://localhost:8983/solr/select", rows = 100))
+```
+
+```
+#> Source: local data frame [3 x 4]
+#> 
+#>      id price  name    _version_
+#>   (chr) (int) (chr)        (dbl)
+#> 1     1   100 brown 1.525729e+18
+#> 2     2   500  blue 1.525729e+18
+#> 3     3  2000  pink 1.525729e+18
+```
+
+Now delete those documents just added
+
+
+```r
+delete_by_query(query = "(name:blue OR name:pink)", "gettingstarted")
+```
+
+```
+#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 12
+```
+
+And now they are gone
+
+
+```r
+tail(solr_search("gettingstarted", "*:*", base = "http://localhost:8983/solr/select", rows = 100))
+```
+
+```
+#> Source: local data frame [1 x 4]
+#> 
+#>      id price  name    _version_
+#>   (chr) (int) (chr)        (dbl)
+#> 1     1   100 brown 1.525729e+18
+```
+
+## Update documents from files
+
+This approach is best if you have many different things you want to do at once, e.g., delete and add files and set any additional options. The functions are:
+
+* `update_xml()`
+* `update_json()`
+* `update_csv()`
+
+There are separate functions for each of the data types because they take slightly different parameters, and to make it clear that those are the three input options for data types.
+
+### JSON
+
+
+```r
+file <- system.file("examples", "books.json", package = "solrium")
+update_json(file, "books")
+```
+
+```
+#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 39
+```
+
+### Add and delete in the same file
+
+Add a document first, that we can later delete
+
+
+```r
+ss <- list(list(id = 456, name = "cat"))
+add(ss, "books")
+```
+
+```
+#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 19
+```
+
+Now add a new document, and delete the one we just made
+
+
+```r
+file <- system.file("examples", "add_delete.xml", package = "solrium")
+cat(readLines(file), sep = "\n")
+```
+
+```
+#> <update>
+#> 	<add>
+#> 	  <doc>
+#> 	    <field name="id">978-0641723445</field>
+#> 	    <field name="cat">book,hardcover</field>
+#> 	    <field name="name">The Lightning Thief</field>
+#> 	    <field name="author">Rick Riordan</field>
+#> 	    <field name="series_t">Percy Jackson and the Olympians</field>
+#> 	    <field name="sequence_i">1</field>
+#> 	    <field name="genre_s">fantasy</field>
+#> 	    <field name="inStock">TRUE</field>
+#> 	    <field name="price">12.5</field>
+#> 	    <field name="pages_i">384</field>
+#> 	  </doc>
+#> 	</add>
+#> 	<delete>
+#> 		<id>456</id>
+#> 	</delete>
+#> </update>
+```
+
+```r
+update_xml(file, "books")
+```
+
+```
+#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 23
+```
+
+### Notes
+
+Note that `update_xml()` and `update_json()` have exactly the same parameters, but simply use different data input formats. `update_csv()` is different in that you can't provide document or field level boosts or other modifications. In addition, `update_csv()` can accept not just CSV, but TSV and other separators (see the sketch below).
+
diff --git a/inst/doc/document_management.html b/inst/doc/document_management.html
new file mode 100644
index 0000000..eaef37f
--- /dev/null
+++ b/inst/doc/document_management.html
@@ -0,0 +1,469 @@
+<!DOCTYPE html>
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
+
+<title>Document management</title>
+
+<script type="text/javascript">
+window.onload = function() {
+  var imgs = document.getElementsByTagName('img'), i, img;
+  for (i = 0; i < imgs.length; i++) {
+    img = imgs[i];
+    // center an image if it is the only element of its parent
+    if (img.parentElement.childElementCount === 1)
+      img.parentElement.style.textAlign = 'center';
+  }
+};
+</script>
+
+<!-- Styles for R syntax highlighter -->
+<style type="text/css">
+   pre .operator,
+   pre .paren {
+     color: rgb(104, 118, 135)
+   }
+
+   pre .literal {
+     color: #990073
+   }
+
+   pre .number {
+     color: #099;
+   }
+
+   pre .comment {
+     color: #998;
+     font-style: italic
+   }
+
+   pre .keyword {
+     color: #900;
+     font-weight: bold
+   }
+
+   pre .identifier {
+     color: rgb(0, 0, 0);
+   }
+
+   pre .string {
+     color: #d14;
+   }
+</style>
+
+<!-- R syntax highlighter -->
+<script type="text/javascript">
+var hljs=new function(){function m(p){return p.replace(/&/gm,"&").replace(/</gm,"<")}function f(r,q,p){return RegExp(q,"m"+(r.cI?"i":"")+(p?"g":""))}function b(r){for(var p=0;p<r.childNodes.length;p++){var q=r.childNodes[p];if(q.nodeName=="CODE"){return q}if(!(q.nodeType==3&&q.nodeValue.match(/\s+/))){break}}}function h(t,s){var p="";for(var r=0;r<t.childNodes.length;r++){if(t.childNodes[r].nodeType==3){var q=t.childNodes[r].nodeValue;if(s){q=q.replace(/\n/g,"")}p+=q}else{if(t.chi [...]
+hljs.initHighlightingOnLoad();
+</script>
+
+
+
+<style type="text/css">
+body, td {
+   font-family: sans-serif;
+   background-color: white;
+   font-size: 13px;
+}
+
+body {
+  max-width: 800px;
+  margin: auto;
+  padding: 1em;
+  line-height: 20px;
+}
+
+tt, code, pre {
+   font-family: 'DejaVu Sans Mono', 'Droid Sans Mono', 'Lucida Console', Consolas, Monaco, monospace;
+}
+
+h1 {
+   font-size:2.2em;
+}
+
+h2 {
+   font-size:1.8em;
+}
+
+h3 {
+   font-size:1.4em;
+}
+
+h4 {
+   font-size:1.0em;
+}
+
+h5 {
+   font-size:0.9em;
+}
+
+h6 {
+   font-size:0.8em;
+}
+
+a:visited {
+   color: rgb(50%, 0%, 50%);
+}
+
+pre, img {
+  max-width: 100%;
+}
+pre {
+  overflow-x: auto;
+}
+pre code {
+   display: block; padding: 0.5em;
+}
+
+code {
+  font-size: 92%;
+  border: 1px solid #ccc;
+}
+
+code[class] {
+  background-color: #F8F8F8;
+}
+
+table, td, th {
+  border: none;
+}
+
+blockquote {
+   color:#666666;
+   margin:0;
+   padding-left: 1em;
+   border-left: 0.5em #EEE solid;
+}
+
+hr {
+   height: 0px;
+   border-bottom: none;
+   border-top-width: thin;
+   border-top-style: dotted;
+   border-top-color: #999999;
+}
+
+@media print {
+   * {
+      background: transparent !important;
+      color: black !important;
+      filter:none !important;
+      -ms-filter: none !important;
+   }
+
+   body {
+      font-size:12pt;
+      max-width:100%;
+   }
+
+   a, a:visited {
+      text-decoration: underline;
+   }
+
+   hr {
+      visibility: hidden;
+      page-break-before: always;
+   }
+
+   pre, blockquote {
+      padding-right: 1em;
+      page-break-inside: avoid;
+   }
+
+   tr, img {
+      page-break-inside: avoid;
+   }
+
+   img {
+      max-width: 100% !important;
+   }
+
+   @page :left {
+      margin: 15mm 20mm 15mm 10mm;
+   }
+
+   @page :right {
+      margin: 15mm 10mm 15mm 20mm;
+   }
+
+   p, h2, h3 {
+      orphans: 3; widows: 3;
+   }
+
+   h2, h3 {
+      page-break-after: avoid;
+   }
+}
+</style>
+
+
+
+</head>
+
+<body>
+<!--
+%\VignetteEngine{knitr::knitr}
+%\VignetteIndexEntry{Document management}
+%\VignetteEncoding{UTF-8}
+-->
+
+<h1>Document management</h1>
+
+<h2>Installation</h2>
+
+<p>Stable version from CRAN</p>
+
+<pre><code class="r">install.packages("solrium")
+</code></pre>
+
+<p>Or the development version from GitHub</p>
+
+<pre><code class="r">install.packages("devtools")
+devtools::install_github("ropensci/solrium")
+</code></pre>
+
+<p>Load</p>
+
+<pre><code class="r">library("solrium")
+</code></pre>
+
+<p>Initialize connection. By default, you connect to <code>http://localhost:8983</code></p>
+
+<pre><code class="r">solr_connect()
+</code></pre>
+
+<pre><code>#> <solr_connection>
+#>   url:    http://localhost:8983
+#>   errors: simple
+#>   verbose: TRUE
+#>   proxy:
+</code></pre>
+
+<h2>Create documents from R objects</h2>
+
+<p>For now, only lists and data.frames are supported.</p>
+
+<h3>data.frame</h3>
+
+<pre><code class="r">df <- data.frame(id = c(67, 68), price = c(1000, 500000000))
+add(df, "books")
+</code></pre>
+
+<pre><code>#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 112
+</code></pre>
+
+<h3>list</h3>
+
+<pre><code class="r">ss <- list(list(id = 1, price = 100), list(id = 2, price = 500))
+add(ss, "books")
+</code></pre>
+
+<pre><code>#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 16
+</code></pre>
+
+<h2>Delete documents</h2>
+
+<h3>By id</h3>
+
+<p>Add some documents first</p>
+
+<pre><code class="r">docs <- list(list(id = 1, price = 100, name = "brown"),
+             list(id = 2, price = 500, name = "blue"),
+             list(id = 3, price = 2000L, name = "pink"))
+add(docs, "gettingstarted")
+</code></pre>
+
+<pre><code>#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 18
+</code></pre>
+
+<p>And the documents are now in your Solr database</p>
+
+<pre><code class="r">tail(solr_search(name = "gettingstarted", "*:*", base = "http://localhost:8983/solr/select", rows = 100))
+</code></pre>
+
+<pre><code>#> Source: local data frame [3 x 4]
+#> 
+#>      id price  name    _version_
+#>   (chr) (int) (chr)        (dbl)
+#> 1     1   100 brown 1.525729e+18
+#> 2     2   500  blue 1.525729e+18
+#> 3     3  2000  pink 1.525729e+18
+</code></pre>
+
+<p>Now delete those documents just added</p>
+
+<pre><code class="r">delete_by_id(ids = c(1, 2, 3), "gettingstarted")
+</code></pre>
+
+<pre><code>#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 24
+</code></pre>
+
+<p>And now they are gone</p>
+
+<pre><code class="r">tail(solr_search("gettingstarted", "*:*", base = "http://localhost:8983/solr/select", rows = 100))
+</code></pre>
+
+<pre><code>#> Source: local data frame [0 x 0]
+</code></pre>
+
+<h3>By query</h3>
+
+<p>Add some documents first</p>
+
+<pre><code class="r">add(docs, "gettingstarted")
+</code></pre>
+
+<pre><code>#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 19
+</code></pre>
+
+<p>And the documents are now in your Solr database</p>
+
+<pre><code class="r">tail(solr_search("gettingstarted", "*:*", base = "http://localhost:8983/solr/select", rows = 100))
+</code></pre>
+
+<pre><code>#> Source: local data frame [3 x 4]
+#> 
+#>      id price  name    _version_
+#>   (chr) (int) (chr)        (dbl)
+#> 1     1   100 brown 1.525729e+18
+#> 2     2   500  blue 1.525729e+18
+#> 3     3  2000  pink 1.525729e+18
+</code></pre>
+
+<p>Now delete those documents just added</p>
+
+<pre><code class="r">delete_by_query(query = "(name:blue OR name:pink)", "gettingstarted")
+</code></pre>
+
+<pre><code>#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 12
+</code></pre>
+
+<p>And now they are gone</p>
+
+<pre><code class="r">tail(solr_search("gettingstarted", "*:*", base = "http://localhost:8983/solr/select", rows = 100))
+</code></pre>
+
+<pre><code>#> Source: local data frame [1 x 4]
+#> 
+#>      id price  name    _version_
+#>   (chr) (int) (chr)        (dbl)
+#> 1     1   100 brown 1.525729e+18
+</code></pre>
+
+<h2>Update documents from files</h2>
+
+<p>This approach is best if you have many different things you want to do at once, e.g., delete and add files and set any additional options. The functions are:</p>
+
+<ul>
+<li><code>update_xml()</code></li>
+<li><code>update_json()</code></li>
+<li><code>update_csv()</code></li>
+</ul>
+
+<p>There are separate functions for each data type because they take slightly different parameters, and to make it clear that these are the three supported input formats.</p>
+
+<h3>JSON</h3>
+
+<pre><code class="r">file <- system.file("examples", "books.json", package = "solrium")
+update_json(file, "books")
+</code></pre>
+
+<pre><code>#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 39
+</code></pre>
+
+<h3>Add and delete in the same file</h3>
+
+<p>Add a document first, which we can later delete</p>
+
+<pre><code class="r">ss <- list(list(id = 456, name = "cat"))
+add(ss, "books")
+</code></pre>
+
+<pre><code>#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 19
+</code></pre>
+
+<p>Now add a new document, and delete the one we just made</p>
+
+<pre><code class="r">file <- system.file("examples", "add_delete.xml", package = "solrium")
+cat(readLines(file), sep = "\n")
+</code></pre>
+
+<pre><code>#> <update>
+#>  <add>
+#>    <doc>
+#>      <field name="id">978-0641723445</field>
+#>      <field name="cat">book,hardcover</field>
+#>      <field name="name">The Lightning Thief</field>
+#>      <field name="author">Rick Riordan</field>
+#>      <field name="series_t">Percy Jackson and the Olympians</field>
+#>      <field name="sequence_i">1</field>
+#>      <field name="genre_s">fantasy</field>
+#>      <field name="inStock">TRUE</field>
+#>      <field name="price">12.5</field>
+#>      <field name="pages_i">384</field>
+#>    </doc>
+#>  </add>
+#>  <delete>
+#>      <id>456</id>
+#>  </delete>
+#> </update>
+</code></pre>
+
+<pre><code class="r">update_xml(file, "books")
+</code></pre>
+
+<pre><code>#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 23
+</code></pre>
+
+<h3>Notes</h3>
+
+<p>Note that <code>update_xml()</code> and <code>update_json()</code> take exactly the same parameters, but accept different data input formats. <code>update_csv()</code> is different in that you can't provide document- or field-level boosts or other modifications. In addition, <code>update_csv()</code> accepts not just CSV, but also TSV and files with other separators.</p>
+
+</body>
+
+</html>
diff --git a/inst/doc/local_setup.Rmd b/inst/doc/local_setup.Rmd
new file mode 100644
index 0000000..290ff07
--- /dev/null
+++ b/inst/doc/local_setup.Rmd
@@ -0,0 +1,79 @@
+<!--
+%\VignetteEngine{knitr::knitr}
+%\VignetteIndexEntry{Local Solr setup}
+%\VignetteEncoding{UTF-8}
+-->
+
+Local Solr setup 
+======
+
+### OSX
+
+__Based on http://lucene.apache.org/solr/quickstart.html__
+
+1. Download most recent version from an Apache mirror http://www.apache.org/dyn/closer.cgi/lucene/solr/5.4.1
+2. Unzip/untar the file. Move to your desired location. Now you have Solr `v.5.4.1`
+3. Go into the directory you just created: `cd solr-5.4.1`
+4. Launch Solr: `bin/solr start -e cloud -noprompt` - Sets up SolrCloud mode, rather
+than Standalone mode. As far as I can tell, SolrCloud mode seems more common.
+5. Once Step 4 completes, you can go to `http://localhost:8983/solr/` now, which is
+the admin interface for Solr.
+6. Load some documents: `bin/post -c gettingstarted docs/`
+7. Once Step 6 is complete (will take a few minutes), navigate in your browser to `http://localhost:8983/solr/gettingstarted/select?q=*:*&wt=json` and you should see a
+bunch of documents
+
+
+### Linux
+
+> You should be able to use the above instructions for OSX on a Linux machine.
+
+#### Linuxbrew
+
+[Linuxbrew](http://brew.sh/linuxbrew/) is a port of Mac OS homebrew to Linux. Operation is essentially the same as for homebrew. Follow the [installation instructions for linuxbrew](http://brew.sh/linuxbrew/#installation), and then the OSX instructions above should work without modification.
+
+### Windows
+
+You should be able to use the above instructions for OSX on a Windows machine, but with some slight differences. For example, the `bin/post` tool for OSX and Linux doesn't work on Windows, but see https://cwiki.apache.org/confluence/display/solr/Post+Tool#PostTool-Windows for an equivalent.
+
+### `solrium` usage
+
+And we can now use the `solrium` R package to query the Solr database to get raw JSON data:
+
+
+```r
+solr_connect('http://localhost:8983')
+solr_search("gettingstarted", q = '*:*', raw = TRUE, rows = 3)
+
+#> [1] "{\"responseHeader\":{\"status\":0,\"QTime\":8,\"params\":{\"q\":\"*:*\",\"rows\":\"3\",\"wt\":\"json\"}},\"response\":{\"numFound\":3577,\"start\":0,\"maxScore\":1.0,\"docs\":[{\"id\":\"/Users/sacmac/solr-5.2.1/docs/solr-core/org/apache/solr/highlight/class-use/SolrFragmenter.html\",\"stream_size\":[9016],\"date\":[\"2015-06-10T00:00:00Z\"],\"x_parsed_by\":[\"org.apache.tika.parser.DefaultParser\",\"org.apache.tika.parser.html.HtmlParser\"],\"stream_content_type\":[\"text/html\"] [...]
+#> attr(,"class")
+#> [1] "sr_search"
+#> attr(,"wt")
+#> [1] "json"
+```
+
+Or parsed data to a data.frame (just looking at a few columns for brevity):
+
+
+```r
+solr_search("gettingstarted", q = '*:*', fl = c('date', 'title'))
+
+#> Source: local data frame [10 x 2]
+#>
+#>                    date                                                                         title
+#> 1  2015-06-10T00:00:00Z   Uses of Interface org.apache.solr.highlight.SolrFragmenter (Solr 5.2.1 API)
+#> 2  2015-06-10T00:00:00Z Uses of Class org.apache.solr.highlight.SolrFragmentsBuilder (Solr 5.2.1 API)
+#> 3  2015-06-10T00:00:00Z                                                    CSVParser (Solr 5.2.1 API)
+#> 4  2015-06-10T00:00:00Z                                                     CSVUtils (Solr 5.2.1 API)
+#> 5  2015-06-10T00:00:00Z                                 org.apache.solr.internal.csv (Solr 5.2.1 API)
+#> 6  2015-06-10T00:00:00Z                 org.apache.solr.internal.csv Class Hierarchy (Solr 5.2.1 API)
+#> 7  2015-06-10T00:00:00Z       Uses of Class org.apache.solr.internal.csv.CSVStrategy (Solr 5.2.1 API)
+#> 8  2015-06-10T00:00:00Z          Uses of Class org.apache.solr.internal.csv.CSVUtils (Solr 5.2.1 API)
+#> 9  2015-06-10T00:00:00Z                                                    CSVConfig (Solr 5.2.1 API)
+#> 10 2015-06-10T00:00:00Z                                             CSVConfigGuesser (Solr 5.2.1 API)
+```
+
+See the other vignettes for more thorough examples:
+
+* `Document management`
+* `Cores/collections management`
+* `Solr Search`
diff --git a/inst/doc/local_setup.html b/inst/doc/local_setup.html
new file mode 100644
index 0000000..ea12978
--- /dev/null
+++ b/inst/doc/local_setup.html
@@ -0,0 +1,286 @@
+<!DOCTYPE html>
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
+
+<title>Local Solr setup </title>
+
+<script type="text/javascript">
+window.onload = function() {
+  var imgs = document.getElementsByTagName('img'), i, img;
+  for (i = 0; i < imgs.length; i++) {
+    img = imgs[i];
+    // center an image if it is the only element of its parent
+    if (img.parentElement.childElementCount === 1)
+      img.parentElement.style.textAlign = 'center';
+  }
+};
+</script>
+
+<!-- Styles for R syntax highlighter -->
+<style type="text/css">
+   pre .operator,
+   pre .paren {
+     color: rgb(104, 118, 135)
+   }
+
+   pre .literal {
+     color: #990073
+   }
+
+   pre .number {
+     color: #099;
+   }
+
+   pre .comment {
+     color: #998;
+     font-style: italic
+   }
+
+   pre .keyword {
+     color: #900;
+     font-weight: bold
+   }
+
+   pre .identifier {
+     color: rgb(0, 0, 0);
+   }
+
+   pre .string {
+     color: #d14;
+   }
+</style>
+
+<!-- R syntax highlighter -->
+<script type="text/javascript">
+var hljs=new function(){function m(p){return p.replace(/&/gm,"&").replace(/</gm,"<")}function f(r,q,p){return RegExp(q,"m"+(r.cI?"i":"")+(p?"g":""))}function b(r){for(var p=0;p<r.childNodes.length;p++){var q=r.childNodes[p];if(q.nodeName=="CODE"){return q}if(!(q.nodeType==3&&q.nodeValue.match(/\s+/))){break}}}function h(t,s){var p="";for(var r=0;r<t.childNodes.length;r++){if(t.childNodes[r].nodeType==3){var q=t.childNodes[r].nodeValue;if(s){q=q.replace(/\n/g,"")}p+=q}else{if(t.chi [...]
+hljs.initHighlightingOnLoad();
+</script>
+
+
+
+<style type="text/css">
+body, td {
+   font-family: sans-serif;
+   background-color: white;
+   font-size: 13px;
+}
+
+body {
+  max-width: 800px;
+  margin: auto;
+  padding: 1em;
+  line-height: 20px;
+}
+
+tt, code, pre {
+   font-family: 'DejaVu Sans Mono', 'Droid Sans Mono', 'Lucida Console', Consolas, Monaco, monospace;
+}
+
+h1 {
+   font-size:2.2em;
+}
+
+h2 {
+   font-size:1.8em;
+}
+
+h3 {
+   font-size:1.4em;
+}
+
+h4 {
+   font-size:1.0em;
+}
+
+h5 {
+   font-size:0.9em;
+}
+
+h6 {
+   font-size:0.8em;
+}
+
+a:visited {
+   color: rgb(50%, 0%, 50%);
+}
+
+pre, img {
+  max-width: 100%;
+}
+pre {
+  overflow-x: auto;
+}
+pre code {
+   display: block; padding: 0.5em;
+}
+
+code {
+  font-size: 92%;
+  border: 1px solid #ccc;
+}
+
+code[class] {
+  background-color: #F8F8F8;
+}
+
+table, td, th {
+  border: none;
+}
+
+blockquote {
+   color:#666666;
+   margin:0;
+   padding-left: 1em;
+   border-left: 0.5em #EEE solid;
+}
+
+hr {
+   height: 0px;
+   border-bottom: none;
+   border-top-width: thin;
+   border-top-style: dotted;
+   border-top-color: #999999;
+}
+
+@media print {
+   * {
+      background: transparent !important;
+      color: black !important;
+      filter:none !important;
+      -ms-filter: none !important;
+   }
+
+   body {
+      font-size:12pt;
+      max-width:100%;
+   }
+
+   a, a:visited {
+      text-decoration: underline;
+   }
+
+   hr {
+      visibility: hidden;
+      page-break-before: always;
+   }
+
+   pre, blockquote {
+      padding-right: 1em;
+      page-break-inside: avoid;
+   }
+
+   tr, img {
+      page-break-inside: avoid;
+   }
+
+   img {
+      max-width: 100% !important;
+   }
+
+   @page :left {
+      margin: 15mm 20mm 15mm 10mm;
+   }
+
+   @page :right {
+      margin: 15mm 10mm 15mm 20mm;
+   }
+
+   p, h2, h3 {
+      orphans: 3; widows: 3;
+   }
+
+   h2, h3 {
+      page-break-after: avoid;
+   }
+}
+</style>
+
+
+
+</head>
+
+<body>
+<!--
+%\VignetteEngine{knitr::knitr}
+%\VignetteIndexEntry{Local Solr setup}
+%\VignetteEncoding{UTF-8}
+-->
+
+<h1>Local Solr setup </h1>
+
+<h3>OSX</h3>
+
+<p><strong>Based on <a href="http://lucene.apache.org/solr/quickstart.html">http://lucene.apache.org/solr/quickstart.html</a></strong></p>
+
+<ol>
+<li>Download most recent version from an Apache mirror <a href="http://www.apache.org/dyn/closer.cgi/lucene/solr/5.4.1">http://www.apache.org/dyn/closer.cgi/lucene/solr/5.4.1</a></li>
+<li>Unzip/untar the file. Move to your desired location. Now you have Solr <code>v.5.4.1</code></li>
+<li>Go into the directory you just created: <code>cd solr-5.4.1</code></li>
+<li>Launch Solr: <code>bin/solr start -e cloud -noprompt</code> - Sets up SolrCloud mode, rather
+than Standalone mode. As far as I can tell, SolrCloud mode seems more common.</li>
+<li>Once Step 4 completes, you can go to <code>http://localhost:8983/solr/</code> now, which is
+the admin interface for Solr.</li>
+<li>Load some documents: <code>bin/post -c gettingstarted docs/</code></li>
+<li>Once Step 6 is complete (will take a few minutes), navigate in your browser to <code>http://localhost:8983/solr/gettingstarted/select?q=*:*&wt=json</code> and you should see a
+bunch of documents</li>
+</ol>
+
+<h3>Linux</h3>
+
+<blockquote>
+<p>You should be able to use the above instructions for OSX on a Linux machine.</p>
+</blockquote>
+
+<h4>Linuxbrew</h4>
+
+<p><a href="http://brew.sh/linuxbrew/">Linuxbrew</a> is a port of Mac OS homebrew to Linux. Operation is essentially the same as for homebrew. Follow the <a href="http://brew.sh/linuxbrew/#installation">installation instructions for linuxbrew</a>, and then the OSX instructions above should work without modification.</p>
+
+<h3>Windows</h3>
+
+<p>You should be able to use the above instructions for OSX on a Windows machine, but with some slight differences. For example, the <code>bin/post</code> tool for OSX and Linux doesn't work on Windows, but see <a href="https://cwiki.apache.org/confluence/display/solr/Post+Tool#PostTool-Windows">https://cwiki.apache.org/confluence/display/solr/Post+Tool#PostTool-Windows</a> for an equivalent.</p>
+
+<h3><code>solrium</code> usage</h3>
+
+<p>And we can now use the <code>solrium</code> R package to query the Solr database to get raw JSON data:</p>
+
+<pre><code class="r">solr_connect('http://localhost:8983')
+solr_search("gettingstarted", q = '*:*', raw = TRUE, rows = 3)
+
+#> [1] "{\"responseHeader\":{\"status\":0,\"QTime\":8,\"params\":{\"q\":\"*:*\",\"rows\":\"3\",\"wt\":\"json\"}},\"response\":{\"numFound\":3577,\"start\":0,\"maxScore\":1.0,\"docs\":[{\"id\":\"/Users/sacmac/solr-5.2.1/docs/solr-core/org/apache/solr/highlight/class-use/SolrFragmenter.html\",\"stream_size\&qu [...]
+#> attr(,"class")
+#> [1] "sr_search"
+#> attr(,"wt")
+#> [1] "json"
+</code></pre>
+
+<p>Or parsed data to a data.frame (just looking at a few columns for brevity):</p>
+
+<pre><code class="r">solr_search("gettingstarted", q = '*:*', fl = c('date', 'title'))
+
+#> Source: local data frame [10 x 2]
+#>
+#>                    date                                                                         title
+#> 1  2015-06-10T00:00:00Z   Uses of Interface org.apache.solr.highlight.SolrFragmenter (Solr 5.2.1 API)
+#> 2  2015-06-10T00:00:00Z Uses of Class org.apache.solr.highlight.SolrFragmentsBuilder (Solr 5.2.1 API)
+#> 3  2015-06-10T00:00:00Z                                                    CSVParser (Solr 5.2.1 API)
+#> 4  2015-06-10T00:00:00Z                                                     CSVUtils (Solr 5.2.1 API)
+#> 5  2015-06-10T00:00:00Z                                 org.apache.solr.internal.csv (Solr 5.2.1 API)
+#> 6  2015-06-10T00:00:00Z                 org.apache.solr.internal.csv Class Hierarchy (Solr 5.2.1 API)
+#> 7  2015-06-10T00:00:00Z       Uses of Class org.apache.solr.internal.csv.CSVStrategy (Solr 5.2.1 API)
+#> 8  2015-06-10T00:00:00Z          Uses of Class org.apache.solr.internal.csv.CSVUtils (Solr 5.2.1 API)
+#> 9  2015-06-10T00:00:00Z                                                    CSVConfig (Solr 5.2.1 API)
+#> 10 2015-06-10T00:00:00Z                                             CSVConfigGuesser (Solr 5.2.1 API)
+</code></pre>
+
+<p>See the other vignettes for more thorough examples:</p>
+
+<ul>
+<li><code>Document management</code></li>
+<li><code>Cores/collections management</code></li>
+<li><code>Solr Search</code></li>
+</ul>
+
+</body>
+
+</html>
diff --git a/inst/doc/search.Rmd b/inst/doc/search.Rmd
new file mode 100644
index 0000000..102204b
--- /dev/null
+++ b/inst/doc/search.Rmd
@@ -0,0 +1,600 @@
+<!--
+%\VignetteEngine{knitr::knitr}
+%\VignetteIndexEntry{Solr search}
+%\VignetteEncoding{UTF-8}
+-->
+
+
+
+Solr search
+===========
+
+**A general purpose R interface to [Apache Solr](http://lucene.apache.org/solr/)**
+
+## Solr info
+
++ [Solr home page](http://lucene.apache.org/solr/)
++ [Highlighting help](http://wiki.apache.org/solr/HighlightingParameters)
++ [Faceting help](http://wiki.apache.org/solr/SimpleFacetParameters)
++ [Install and Setup SOLR in OSX, including running Solr](http://risnandar.wordpress.com/2013/09/08/how-to-install-and-setup-apache-lucene-solr-in-osx/)
+
+## Installation
+
+Stable version from CRAN
+
+
+```r
+install.packages("solrium")
+```
+
+Or the development version from GitHub
+
+
+```r
+install.packages("devtools")
+devtools::install_github("ropensci/solrium")
+```
+
+Load
+
+
+```r
+library("solrium")
+```
+
+## Setup connection
+
+You can set up a connection to a remote Solr instance or to one on your local machine.
+
+
+```r
+solr_connect('http://api.plos.org/search')
+#> <solr_connection>
+#>   url:    http://api.plos.org/search
+#>   errors: simple
+#>   verbose: TRUE
+#>   proxy:
+```
+
+## Rundown
+
+`solr_search()` only returns the `docs` element of a Solr response body. If `docs` is
+all you need, then this function will do the job. If you need facet data only, or mlt
+data only, see the appropriate functions for each of those below. Another function,
+`solr_all()` has a similar interface in terms of parameters to `solr_search()`, but
+returns all parts of the response body (facets, mlt, groups, stats, etc.),
+as long as you request them.
+
+## Search docs
+
+`solr_search()` returns only docs. A basic search:
+
+
+```r
+solr_search(q = '*:*', rows = 2, fl = 'id')
+#> Source: local data frame [2 x 1]
+#> 
+#>                                        id
+#>                                     (chr)
+#> 1 10.1371/journal.pone.0142243/references
+#> 2       10.1371/journal.pone.0142243/body
+```
+
+__Search in specific fields with `:`__
+
+Search for the word "ecology" in the title and the word "cell" in the body
+
+
+```r
+solr_search(q = 'title:"ecology" AND body:"cell"', fl = 'title', rows = 5)
+#> Source: local data frame [5 x 1]
+#> 
+#>                                                       title
+#>                                                       (chr)
+#> 1                        The Ecology of Collective Behavior
+#> 2                                   Ecology's Big, Hot Idea
+#> 3     Spatial Ecology of Bacteria at the Microscale in Soil
+#> 4 Biofilm Formation As a Response to Ecological Competition
+#> 5    Ecology of Root Colonizing Massilia (Oxalobacteraceae)
+```
+
+__Wildcards__
+
+Search for words that start with "cell" in the title field
+
+
+```r
+solr_search(q = 'title:"cell*"', fl = 'title', rows = 5)
+#> Source: local data frame [5 x 1]
+#> 
+#>                                                                         title
+#>                                                                         (chr)
+#> 1                                Tumor Cell Recognition Efficiency by T Cells
+#> 2 Cancer Stem Cell-Like Side Population Cells in Clear Cell Renal Cell Carcin
+#> 3 Dcas Supports Cell Polarization and Cell-Cell Adhesion Complexes in Develop
+#> 4                  Cell-Cell Contact Preserves Cell Viability via Plakoglobin
+#> 5 MS4a4B, a CD20 Homologue in T Cells, Inhibits T Cell Propagation by Modulat
+```
+
+__Proximity search__
+
+Search for the words "stem" and "cell" within seven words of each other
+
+
+```r
+solr_search(q = 'everything:"stem cell"~7', fl = 'title', rows = 3)
+#> Source: local data frame [3 x 1]
+#> 
+#>                                                                         title
+#>                                                                         (chr)
+#> 1 Correction: Reduced Intensity Conditioning, Combined Transplantation of Hap
+#> 2                                            A Recipe for Self-Renewing Brain
+#> 3  Gene Expression Profile Created for Mouse Stem Cells and Developing Embryo
+```
+
+__Range searches__
+
+Search for articles with a Twitter count between 5 and 50
+
+
+```r
+solr_search(q = '*:*', fl = c('alm_twitterCount', 'id'), fq = 'alm_twitterCount:[5 TO 50]',
+rows = 10)
+#> Source: local data frame [10 x 2]
+#> 
+#>                                                     id alm_twitterCount
+#>                                                  (chr)            (int)
+#> 1            10.1371/journal.ppat.1005403/introduction                6
+#> 2  10.1371/journal.ppat.1005403/results_and_discussion                6
+#> 3   10.1371/journal.ppat.1005403/materials_and_methods                6
+#> 4  10.1371/journal.ppat.1005403/supporting_information                6
+#> 5                         10.1371/journal.ppat.1005401                6
+#> 6                   10.1371/journal.ppat.1005401/title                6
+#> 7                10.1371/journal.ppat.1005401/abstract                6
+#> 8              10.1371/journal.ppat.1005401/references                6
+#> 9                    10.1371/journal.ppat.1005401/body                6
+#> 10           10.1371/journal.ppat.1005401/introduction                6
+```
+
+__Boosts__
+
+Assign a higher boost to title matches than to abstract matches (compare the two calls)
+
+
+```r
+solr_search(q = 'title:"cell" abstract:"science"', fl = 'title', rows = 3)
+#> Source: local data frame [3 x 1]
+#> 
+#>                                                                         title
+#>                                                                         (chr)
+#> 1 I Want More and Better Cells! – An Outreach Project about Stem Cells and It
+#> 2                                   Centre of the Cell: Science Comes to Life
+#> 3 Globalization of Stem Cell Science: An Examination of Current and Past Coll
+```
+
+
+```r
+solr_search(q = 'title:"cell"^1.5 AND abstract:"science"', fl = 'title', rows = 3)
+#> Source: local data frame [3 x 1]
+#> 
+#>                                                                         title
+#>                                                                         (chr)
+#> 1                                   Centre of the Cell: Science Comes to Life
+#> 2 I Want More and Better Cells! – An Outreach Project about Stem Cells and It
+#> 3          Derivation of Hair-Inducing Cell from Human Pluripotent Stem Cells
+```
+
+## Search all
+
+`solr_all()` differs from `solr_search()` in that it allows specifying facets, mlt, groups,
+stats, etc, and returns all of those. It defaults to `parsetype = "list"` and `wt="json"`,
+whereas `solr_search()` defaults to `parsetype = "df"` and `wt="csv"`. `solr_all()` returns
+by default a list, whereas `solr_search()` by default returns a data.frame.
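+
+As a small illustration of those defaults (a sketch only, not evaluated here, and
+assuming both functions accept the `parsetype` argument described above), you can
+flip each one to the other's parse type:
+
+
+```r
+# Override the default parse types described above: ask solr_all() for a
+# data.frame and solr_search() for a list
+solr_all(q = '*:*', rows = 2, fl = 'id', parsetype = "df")
+solr_search(q = '*:*', rows = 2, fl = 'id', parsetype = "list")
+```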
+
+A basic search, just docs output
+
+
+```r
+solr_all(q = '*:*', rows = 2, fl = 'id')
+#> $response
+#> $response$numFound
+#> [1] 1502814
+#> 
+#> $response$start
+#> [1] 0
+#> 
+#> $response$docs
+#> $response$docs[[1]]
+#> $response$docs[[1]]$id
+#> [1] "10.1371/journal.pone.0142243/references"
+#> 
+#> 
+#> $response$docs[[2]]
+#> $response$docs[[2]]$id
+#> [1] "10.1371/journal.pone.0142243/body"
+```
+
+Get docs, mlt, and stats output
+
+
+```r
+solr_all(q = 'ecology', rows = 2, fl = 'id', mlt = 'true', mlt.count = 2, mlt.fl = 'abstract', stats = 'true', stats.field = 'counter_total_all')
+#> $response
+#> $response$numFound
+#> [1] 31467
+#> 
+#> $response$start
+#> [1] 0
+#> 
+#> $response$docs
+#> $response$docs[[1]]
+#> $response$docs[[1]]$id
+#> [1] "10.1371/journal.pone.0059813"
+#> 
+#> 
+#> $response$docs[[2]]
+#> $response$docs[[2]]$id
+#> [1] "10.1371/journal.pone.0001248"
+#> 
+#> 
+#> 
+#> 
+#> $moreLikeThis
+#> $moreLikeThis$`10.1371/journal.pone.0059813`
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$numFound
+#> [1] 152704
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$start
+#> [1] 0
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$docs
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$docs[[1]]
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$docs[[1]]$id
+#> [1] "10.1371/journal.pone.0111996"
+#> 
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$docs[[2]]
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$docs[[2]]$id
+#> [1] "10.1371/journal.pone.0143687"
+#> 
+#> 
+#> 
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0001248`
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$numFound
+#> [1] 159058
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$start
+#> [1] 0
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$docs
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$docs[[1]]
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$docs[[1]]$id
+#> [1] "10.1371/journal.pone.0001275"
+#> 
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$docs[[2]]
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$docs[[2]]$id
+#> [1] "10.1371/journal.pone.0024192"
+#> 
+#> 
+#> 
+#> 
+#> 
+#> $stats
+#> $stats$stats_fields
+#> $stats$stats_fields$counter_total_all
+#> $stats$stats_fields$counter_total_all$min
+#> [1] 16
+#> 
+#> $stats$stats_fields$counter_total_all$max
+#> [1] 367697
+#> 
+#> $stats$stats_fields$counter_total_all$count
+#> [1] 31467
+#> 
+#> $stats$stats_fields$counter_total_all$missing
+#> [1] 0
+#> 
+#> $stats$stats_fields$counter_total_all$sum
+#> [1] 141552408
+#> 
+#> $stats$stats_fields$counter_total_all$sumOfSquares
+#> [1] 3.162032e+12
+#> 
+#> $stats$stats_fields$counter_total_all$mean
+#> [1] 4498.44
+#> 
+#> $stats$stats_fields$counter_total_all$stddev
+#> [1] 8958.45
+#> 
+#> $stats$stats_fields$counter_total_all$facets
+#> named list()
+```
+
+
+## Facet
+
+
+```r
+solr_facet(q = '*:*', facet.field = 'journal', facet.query = c('cell', 'bird'))
+#> $facet_queries
+#>   term  value
+#> 1 cell 128657
+#> 2 bird  13063
+#> 
+#> $facet_fields
+#> $facet_fields$journal
+#>                                 X1      X2
+#> 1                         plos one 1233662
+#> 2                    plos genetics   49285
+#> 3                   plos pathogens   42817
+#> 4       plos computational biology   36373
+#> 5 plos neglected tropical diseases   33911
+#> 6                     plos biology   28745
+#> 7                    plos medicine   19934
+#> 8             plos clinical trials     521
+#> 9                     plos medicin       9
+#> 
+#> 
+#> $facet_pivot
+#> NULL
+#> 
+#> $facet_dates
+#> NULL
+#> 
+#> $facet_ranges
+#> NULL
+```
+
+## Highlight
+
+
+```r
+solr_highlight(q = 'alcohol', hl.fl = 'abstract', rows = 2)
+#> $`10.1371/journal.pmed.0040151`
+#> $`10.1371/journal.pmed.0040151`$abstract
+#> [1] "Background: <em>Alcohol</em> consumption causes an estimated 4% of the global disease burden, prompting"
+#> 
+#> 
+#> $`10.1371/journal.pone.0027752`
+#> $`10.1371/journal.pone.0027752`$abstract
+#> [1] "Background: The negative influences of <em>alcohol</em> on TB management with regard to delays in seeking"
+```
+
+## Stats
+
+
+```r
+out <- solr_stats(q = 'ecology', stats.field = c('counter_total_all', 'alm_twitterCount'), stats.facet = c('journal', 'volume'))
+```
+
+
+```r
+out$data
+#>                   min    max count missing       sum sumOfSquares
+#> counter_total_all  16 367697 31467       0 141552408 3.162032e+12
+#> alm_twitterCount    0   1756 31467       0    168586 3.267801e+07
+#>                          mean     stddev
+#> counter_total_all 4498.439889 8958.45030
+#> alm_twitterCount     5.357549   31.77757
+```
+
+
+```r
+out$facet
+#> $counter_total_all
+#> $counter_total_all$volume
+#>     min    max count missing      sum sumOfSquares      mean    stddev
+#> 1    20 166202   887       0  2645927  63864880371  2983.007  7948.200
+#> 2   495 103147   105       0  1017325  23587444387  9688.810 11490.287
+#> 3  1950  69628    69       0   704216  13763808310 10206.029  9834.333
+#> 4   742  13856     9       0    48373    375236903  5374.778  3795.438
+#> 5  1871 182622    81       0  1509647  87261688837 18637.617 27185.811
+#> 6  1667 117922   482       0  5836186 162503606896 12108.270 13817.754
+#> 7  1340 128083   741       0  7714963 188647618509 10411.556 12098.852
+#> 8   667 362410  1010       0  9692492 340237069126  9596.527 15653.040
+#> 9   103 113220  1539       0 12095764 218958657256  7859.496  8975.188
+#> 10   72 243873  2948       0 17699332 327210596846  6003.844  8658.717
+#> 11   51 184259  4825       0 24198104 382922818910  5015.151  7363.541
+#> 12   16 367697  6360       0 26374352 533183277470  4146.911  8163.790
+#> 13   42 287741  6620       0 21003701 612616254755  3172.765  9082.194
+#> 14  128 161520  5791       0 11012026 206899109466  1901.576  5667.209
+#>    volume
+#> 1      11
+#> 2      12
+#> 3      13
+#> 4      14
+#> 5       1
+#> 6       2
+#> 7       3
+#> 8       4
+#> 9       5
+#> 10      6
+#> 11      7
+#> 12      8
+#> 13      9
+#> 14     10
+#> 
+#> $counter_total_all$journal
+#>    min    max count missing      sum sumOfSquares      mean    stddev
+#> 1  667 117922   243       0  4074303 1.460258e+11 16766.679 17920.074
+#> 2  742 265561   884       0 14006081 5.507548e+11 15843.983 19298.065
+#> 3 8463  13797     2       0    22260 2.619796e+08 11130.000  3771.708
+#> 4   16 367697 25915       0 96069530 1.943903e+12  3707.101  7827.546
+#> 5  915  61956   595       0  4788553 6.579963e+10  8047.988  6774.558
+#> 6  548  76290   758       0  6326284 9.168443e+10  8346.021  7167.106
+#> 7  268 212048  1239       0  5876481 1.010080e+11  4742.923  7686.101
+#> 8  495 287741   580       0  4211717 1.411022e+11  7261.581 13815.867
+#>                            journal
+#> 1                    plos medicine
+#> 2                     plos biology
+#> 3             plos clinical trials
+#> 4                         plos one
+#> 5                   plos pathogens
+#> 6                    plos genetics
+#> 7 plos neglected tropical diseases
+#> 8       plos computational biology
+#> 
+#> 
+#> $alm_twitterCount
+#> $alm_twitterCount$volume
+#>    min  max count missing   sum sumOfSquares      mean     stddev volume
+#> 1    0 1756   887       0 12295      4040629 13.861330  66.092178     11
+#> 2    0 1045   105       0  6466      1885054 61.580952 119.569402     12
+#> 3    0  283    69       0  3478       509732 50.405797  70.128101     13
+#> 4    6  274     9       0   647       102391 71.888889  83.575482     14
+#> 5    0   42    81       0   176         4996  2.172840   7.594060      1
+#> 6    0   74   482       0   628        15812  1.302905   5.583197      2
+#> 7    0   48   741       0   652        11036  0.879892   3.760087      3
+#> 8    0  239  1010       0  1039        74993  1.028713   8.559485      4
+#> 9    0  126  1539       0  1901        90297  1.235218   7.562004      5
+#> 10   0  886  2948       0  4357      1245453  1.477951  20.504442      6
+#> 11   0  822  4825       0 19646      2037596  4.071710  20.144602      7
+#> 12   0 1503  6360       0 35938      6505618  5.650629  31.482092      8
+#> 13   0 1539  6620       0 49837     12847207  7.528248  43.408246      9
+#> 14   0  863  5791       0 31526      3307198  5.443965  23.271216     10
+#> 
+#> $alm_twitterCount$journal
+#>   min  max count missing    sum sumOfSquares      mean   stddev
+#> 1   0  777   243       0   4251      1028595 17.493827 62.79406
+#> 2   0 1756   884       0  16405      6088729 18.557692 80.93655
+#> 3   0    3     2       0      3            9  1.500000  2.12132
+#> 4   0 1539 25915       0 123409     23521391  4.762068 29.74883
+#> 5   0  122   595       0   4265       160581  7.168067 14.79428
+#> 6   0  178   758       0   4277       148277  5.642480 12.80605
+#> 7   0  886  1239       0   4972      1048908  4.012914 28.82956
+#> 8   0  285   580       0   4166       265578  7.182759 20.17431
+#>                            journal
+#> 1                    plos medicine
+#> 2                     plos biology
+#> 3             plos clinical trials
+#> 4                         plos one
+#> 5                   plos pathogens
+#> 6                    plos genetics
+#> 7 plos neglected tropical diseases
+#> 8       plos computational biology
+```
+
+## More like this
+
+`solr_mlt()` is a function to return documents similar to those matched by your query
+
+
+```r
+out <- solr_mlt(q = 'title:"ecology" AND body:"cell"', mlt.fl = 'title', mlt.mindf = 1, mlt.mintf = 1, fl = 'counter_total_all', rows = 5)
+out$docs
+#> Source: local data frame [5 x 2]
+#> 
+#>                             id counter_total_all
+#>                          (chr)             (int)
+#> 1 10.1371/journal.pbio.1001805             17081
+#> 2 10.1371/journal.pbio.0020440             23882
+#> 3 10.1371/journal.pone.0087217              5935
+#> 4 10.1371/journal.pbio.1002191             13036
+#> 5 10.1371/journal.pone.0040117              4316
+```
+
+
+```r
+out$mlt
+#> $`10.1371/journal.pbio.1001805`
+#>                             id counter_total_all
+#> 1 10.1371/journal.pone.0082578              2196
+#> 2 10.1371/journal.pone.0098876              2448
+#> 3 10.1371/journal.pone.0102159              1177
+#> 4 10.1371/journal.pcbi.1002652              3102
+#> 5 10.1371/journal.pcbi.1003408              6942
+#> 
+#> $`10.1371/journal.pbio.0020440`
+#>                             id counter_total_all
+#> 1 10.1371/journal.pone.0102679              3112
+#> 2 10.1371/journal.pone.0035964              5571
+#> 3 10.1371/journal.pone.0003259              2800
+#> 4 10.1371/journal.pntd.0003377              3392
+#> 5 10.1371/journal.pone.0068814              7522
+#> 
+#> $`10.1371/journal.pone.0087217`
+#>                             id counter_total_all
+#> 1 10.1371/journal.pone.0131665               409
+#> 2 10.1371/journal.pcbi.0020092             19604
+#> 3 10.1371/journal.pone.0133941               475
+#> 4 10.1371/journal.pone.0123774               997
+#> 5 10.1371/journal.pone.0140306               322
+#> 
+#> $`10.1371/journal.pbio.1002191`
+#>                             id counter_total_all
+#> 1 10.1371/journal.pbio.1002232              1950
+#> 2 10.1371/journal.pone.0131700               979
+#> 3 10.1371/journal.pone.0070448              1608
+#> 4 10.1371/journal.pone.0028737              7481
+#> 5 10.1371/journal.pone.0052330              5595
+#> 
+#> $`10.1371/journal.pone.0040117`
+#>                             id counter_total_all
+#> 1 10.1371/journal.pone.0069352              2763
+#> 2 10.1371/journal.pone.0148280               467
+#> 3 10.1371/journal.pone.0035502              4031
+#> 4 10.1371/journal.pone.0014065              5764
+#> 5 10.1371/journal.pone.0113280              1984
+```
+
+## Groups
+
+`solr_group()` is a function to return search results grouped by a given field
+
+
+```r
+solr_group(q = 'ecology', group.field = 'journal', group.limit = 1, fl = c('id', 'alm_twitterCount'))
+#>                         groupValue numFound start
+#> 1                         plos one    25915     0
+#> 2       plos computational biology      580     0
+#> 3                     plos biology      884     0
+#> 4                             none     1251     0
+#> 5                    plos medicine      243     0
+#> 6 plos neglected tropical diseases     1239     0
+#> 7                   plos pathogens      595     0
+#> 8                    plos genetics      758     0
+#> 9             plos clinical trials        2     0
+#>                             id alm_twitterCount
+#> 1 10.1371/journal.pone.0059813               56
+#> 2 10.1371/journal.pcbi.1003594               21
+#> 3 10.1371/journal.pbio.1002358               16
+#> 4 10.1371/journal.pone.0046671                2
+#> 5 10.1371/journal.pmed.1000303                0
+#> 6 10.1371/journal.pntd.0002577                2
+#> 7 10.1371/journal.ppat.1003372                2
+#> 8 10.1371/journal.pgen.1001197                0
+#> 9 10.1371/journal.pctr.0020010                0
+```
+
+## Parsing
+
+`solr_parse()` is a general purpose parser function with extension methods for parsing outputs from functions in `solrium`. `solr_parse()` is used internally within functions to do parsing after retrieving data from the server. You can optionally get back raw `json`, `xml`, or `csv` with `raw = TRUE`, and then parse afterwards with `solr_parse()`.
+
+For example:
+
+
+```r
+(out <- solr_highlight(q = 'alcohol', hl.fl = 'abstract', rows = 2, raw = TRUE))
+#> [1] "{\"response\":{\"numFound\":20268,\"start\":0,\"docs\":[{},{}]},\"highlighting\":{\"10.1371/journal.pmed.0040151\":{\"abstract\":[\"Background: <em>Alcohol</em> consumption causes an estimated 4% of the global disease burden, prompting\"]},\"10.1371/journal.pone.0027752\":{\"abstract\":[\"Background: The negative influences of <em>alcohol</em> on TB management with regard to delays in seeking\"]}}}\n"
+#> attr(,"class")
+#> [1] "sr_high"
+#> attr(,"wt")
+#> [1] "json"
+```
+
+Then parse
+
+
+```r
+solr_parse(out, 'df')
+#>                          names
+#> 1 10.1371/journal.pmed.0040151
+#> 2 10.1371/journal.pone.0027752
+#>                                                                                                    abstract
+#> 1   Background: <em>Alcohol</em> consumption causes an estimated 4% of the global disease burden, prompting
+#> 2 Background: The negative influences of <em>alcohol</em> on TB management with regard to delays in seeking
+```
+
+[Please report any issues or bugs](https://github.com/ropensci/solrium/issues).
diff --git a/inst/doc/search.html b/inst/doc/search.html
new file mode 100644
index 0000000..250eba1
--- /dev/null
+++ b/inst/doc/search.html
@@ -0,0 +1,759 @@
+<!DOCTYPE html>
+<html>
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
+
+<title>Solr search</title>
+
+<script type="text/javascript">
+window.onload = function() {
+  var imgs = document.getElementsByTagName('img'), i, img;
+  for (i = 0; i < imgs.length; i++) {
+    img = imgs[i];
+    // center an image if it is the only element of its parent
+    if (img.parentElement.childElementCount === 1)
+      img.parentElement.style.textAlign = 'center';
+  }
+};
+</script>
+
+<!-- Styles for R syntax highlighter -->
+<style type="text/css">
+   pre .operator,
+   pre .paren {
+     color: rgb(104, 118, 135)
+   }
+
+   pre .literal {
+     color: #990073
+   }
+
+   pre .number {
+     color: #099;
+   }
+
+   pre .comment {
+     color: #998;
+     font-style: italic
+   }
+
+   pre .keyword {
+     color: #900;
+     font-weight: bold
+   }
+
+   pre .identifier {
+     color: rgb(0, 0, 0);
+   }
+
+   pre .string {
+     color: #d14;
+   }
+</style>
+
+<!-- R syntax highlighter -->
+<script type="text/javascript">
+var hljs=new function(){function m(p){return p.replace(/&/gm,"&").replace(/</gm,"<")}function f(r,q,p){return RegExp(q,"m"+(r.cI?"i":"")+(p?"g":""))}function b(r){for(var p=0;p<r.childNodes.length;p++){var q=r.childNodes[p];if(q.nodeName=="CODE"){return q}if(!(q.nodeType==3&&q.nodeValue.match(/\s+/))){break}}}function h(t,s){var p="";for(var r=0;r<t.childNodes.length;r++){if(t.childNodes[r].nodeType==3){var q=t.childNodes[r].nodeValue;if(s){q=q.replace(/\n/g,"")}p+=q}else{if(t.chi [...]
+hljs.initHighlightingOnLoad();
+</script>
+
+
+
+<style type="text/css">
+body, td {
+   font-family: sans-serif;
+   background-color: white;
+   font-size: 13px;
+}
+
+body {
+  max-width: 800px;
+  margin: auto;
+  padding: 1em;
+  line-height: 20px;
+}
+
+tt, code, pre {
+   font-family: 'DejaVu Sans Mono', 'Droid Sans Mono', 'Lucida Console', Consolas, Monaco, monospace;
+}
+
+h1 {
+   font-size:2.2em;
+}
+
+h2 {
+   font-size:1.8em;
+}
+
+h3 {
+   font-size:1.4em;
+}
+
+h4 {
+   font-size:1.0em;
+}
+
+h5 {
+   font-size:0.9em;
+}
+
+h6 {
+   font-size:0.8em;
+}
+
+a:visited {
+   color: rgb(50%, 0%, 50%);
+}
+
+pre, img {
+  max-width: 100%;
+}
+pre {
+  overflow-x: auto;
+}
+pre code {
+   display: block; padding: 0.5em;
+}
+
+code {
+  font-size: 92%;
+  border: 1px solid #ccc;
+}
+
+code[class] {
+  background-color: #F8F8F8;
+}
+
+table, td, th {
+  border: none;
+}
+
+blockquote {
+   color:#666666;
+   margin:0;
+   padding-left: 1em;
+   border-left: 0.5em #EEE solid;
+}
+
+hr {
+   height: 0px;
+   border-bottom: none;
+   border-top-width: thin;
+   border-top-style: dotted;
+   border-top-color: #999999;
+}
+
+@media print {
+   * {
+      background: transparent !important;
+      color: black !important;
+      filter:none !important;
+      -ms-filter: none !important;
+   }
+
+   body {
+      font-size:12pt;
+      max-width:100%;
+   }
+
+   a, a:visited {
+      text-decoration: underline;
+   }
+
+   hr {
+      visibility: hidden;
+      page-break-before: always;
+   }
+
+   pre, blockquote {
+      padding-right: 1em;
+      page-break-inside: avoid;
+   }
+
+   tr, img {
+      page-break-inside: avoid;
+   }
+
+   img {
+      max-width: 100% !important;
+   }
+
+   @page :left {
+      margin: 15mm 20mm 15mm 10mm;
+   }
+
+   @page :right {
+      margin: 15mm 10mm 15mm 20mm;
+   }
+
+   p, h2, h3 {
+      orphans: 3; widows: 3;
+   }
+
+   h2, h3 {
+      page-break-after: avoid;
+   }
+}
+</style>
+
+
+
+</head>
+
+<body>
+<!--
+%\VignetteEngine{knitr::knitr}
+%\VignetteIndexEntry{Solr search}
+%\VignetteEncoding{UTF-8}
+-->
+
+<h1>Solr search</h1>
+
+<p><strong>A general purpose R interface to <a href="http://lucene.apache.org/solr/">Apache Solr</a></strong></p>
+
+<h2>Solr info</h2>
+
+<ul>
+<li><a href="http://lucene.apache.org/solr/">Solr home page</a></li>
+<li><a href="http://wiki.apache.org/solr/HighlightingParameters">Highlighting help</a></li>
+<li><a href="http://wiki.apache.org/solr/SimpleFacetParameters">Faceting help</a></li>
+<li><a href="http://risnandar.wordpress.com/2013/09/08/how-to-install-and-setup-apache-lucene-solr-in-osx/">Install and Setup SOLR in OSX, including running Solr</a></li>
+</ul>
+
+<h2>Installation</h2>
+
+<p>Stable version from CRAN</p>
+
+<pre><code class="r">install.packages("solrium")
+</code></pre>
+
+<p>Or the development version from GitHub</p>
+
+<pre><code class="r">install.packages("devtools")
+devtools::install_github("ropensci/solrium")
+</code></pre>
+
+<p>Load</p>
+
+<pre><code class="r">library("solrium")
+</code></pre>
+
+<h2>Setup connection</h2>
+
+<p>You can set up a connection to a remote Solr instance or to one on your local machine.</p>
+
+<pre><code class="r">solr_connect('http://api.plos.org/search')
+#> <solr_connection>
+#>   url:    http://api.plos.org/search
+#>   errors: simple
+#>   verbose: TRUE
+#>   proxy:
+</code></pre>
+
+<h2>Rundown</h2>
+
+<p><code>solr_search()</code> only returns the <code>docs</code> element of a Solr response body. If <code>docs</code> is
+all you need, then this function will do the job. If you need facet data only, or mlt
+data only, see the appropriate functions for each of those below. Another function,
+<code>solr_all()</code> has a similar interface in terms of parameters to <code>solr_search()</code>, but
+returns all parts of the response body (facets, mlt, groups, stats, etc.),
+as long as you request them.</p>
+
+<h2>Search docs</h2>
+
+<p><code>solr_search()</code> returns only docs. A basic search:</p>
+
+<pre><code class="r">solr_search(q = '*:*', rows = 2, fl = 'id')
+#> Source: local data frame [2 x 1]
+#> 
+#>                                        id
+#>                                     (chr)
+#> 1 10.1371/journal.pone.0142243/references
+#> 2       10.1371/journal.pone.0142243/body
+</code></pre>
+
+<p><strong>Search in specific fields with <code>:</code></strong></p>
+
+<p>Search for the word “ecology” in the title and the word “cell” in the body</p>
+
+<pre><code class="r">solr_search(q = 'title:"ecology" AND body:"cell"', fl = 'title', rows = 5)
+#> Source: local data frame [5 x 1]
+#> 
+#>                                                       title
+#>                                                       (chr)
+#> 1                        The Ecology of Collective Behavior
+#> 2                                   Ecology's Big, Hot Idea
+#> 3     Spatial Ecology of Bacteria at the Microscale in Soil
+#> 4 Biofilm Formation As a Response to Ecological Competition
+#> 5    Ecology of Root Colonizing Massilia (Oxalobacteraceae)
+</code></pre>
+
+<p><strong>Wildcards</strong></p>
+
+<p>Search for words that start with “cell” in the title field</p>
+
+<pre><code class="r">solr_search(q = 'title:"cell*"', fl = 'title', rows = 5)
+#> Source: local data frame [5 x 1]
+#> 
+#>                                                                         title
+#>                                                                         (chr)
+#> 1                                Tumor Cell Recognition Efficiency by T Cells
+#> 2 Cancer Stem Cell-Like Side Population Cells in Clear Cell Renal Cell Carcin
+#> 3 Dcas Supports Cell Polarization and Cell-Cell Adhesion Complexes in Develop
+#> 4                  Cell-Cell Contact Preserves Cell Viability via Plakoglobin
+#> 5 MS4a4B, a CD20 Homologue in T Cells, Inhibits T Cell Propagation by Modulat
+</code></pre>
+
+<p><strong>Proximity search</strong></p>
+
+<p>Search for the words “stem” and “cell” within seven words of each other</p>
+
+<pre><code class="r">solr_search(q = 'everything:"stem cell"~7', fl = 'title', rows = 3)
+#> Source: local data frame [3 x 1]
+#> 
+#>                                                                         title
+#>                                                                         (chr)
+#> 1 Correction: Reduced Intensity Conditioning, Combined Transplantation of Hap
+#> 2                                            A Recipe for Self-Renewing Brain
+#> 3  Gene Expression Profile Created for Mouse Stem Cells and Developing Embryo
+</code></pre>
+
+<p><strong>Range searches</strong></p>
+
+<p>Search for articles with a Twitter count between 5 and 50</p>
+
+<pre><code class="r">solr_search(q = '*:*', fl = c('alm_twitterCount', 'id'), fq = 'alm_twitterCount:[5 TO 50]',
+rows = 10)
+#> Source: local data frame [10 x 2]
+#> 
+#>                                                     id alm_twitterCount
+#>                                                  (chr)            (int)
+#> 1            10.1371/journal.ppat.1005403/introduction                6
+#> 2  10.1371/journal.ppat.1005403/results_and_discussion                6
+#> 3   10.1371/journal.ppat.1005403/materials_and_methods                6
+#> 4  10.1371/journal.ppat.1005403/supporting_information                6
+#> 5                         10.1371/journal.ppat.1005401                6
+#> 6                   10.1371/journal.ppat.1005401/title                6
+#> 7                10.1371/journal.ppat.1005401/abstract                6
+#> 8              10.1371/journal.ppat.1005401/references                6
+#> 9                    10.1371/journal.ppat.1005401/body                6
+#> 10           10.1371/journal.ppat.1005401/introduction                6
+</code></pre>
+
+<p><strong>Boosts</strong></p>
+
+<p>Assign a higher boost to title matches than to abstract matches (compare the two calls)</p>
+
+<pre><code class="r">solr_search(q = 'title:"cell" abstract:"science"', fl = 'title', rows = 3)
+#> Source: local data frame [3 x 1]
+#> 
+#>                                                                         title
+#>                                                                         (chr)
+#> 1 I Want More and Better Cells! – An Outreach Project about Stem Cells and It
+#> 2                                   Centre of the Cell: Science Comes to Life
+#> 3 Globalization of Stem Cell Science: An Examination of Current and Past Coll
+</code></pre>
+
+<pre><code class="r">solr_search(q = 'title:"cell"^1.5 AND abstract:"science"', fl = 'title', rows = 3)
+#> Source: local data frame [3 x 1]
+#> 
+#>                                                                         title
+#>                                                                         (chr)
+#> 1                                   Centre of the Cell: Science Comes to Life
+#> 2 I Want More and Better Cells! – An Outreach Project about Stem Cells and It
+#> 3          Derivation of Hair-Inducing Cell from Human Pluripotent Stem Cells
+</code></pre>
+
+<h2>Search all</h2>
+
+<p><code>solr_all()</code> differs from <code>solr_search()</code> in that it allows specifying facets, mlt, groups,
+stats, etc, and returns all of those. It defaults to <code>parsetype = "list"</code> and <code>wt="json"</code>,
+whereas <code>solr_search()</code> defaults to <code>parsetype = "df"</code> and <code>wt="csv"</code>. <code>solr_all()</code> returns
+by default a list, whereas <code>solr_search()</code> by default returns a data.frame.</p>
+
+<p>A basic search, just docs output</p>
+
+<pre><code class="r">solr_all(q = '*:*', rows = 2, fl = 'id')
+#> $response
+#> $response$numFound
+#> [1] 1502814
+#> 
+#> $response$start
+#> [1] 0
+#> 
+#> $response$docs
+#> $response$docs[[1]]
+#> $response$docs[[1]]$id
+#> [1] "10.1371/journal.pone.0142243/references"
+#> 
+#> 
+#> $response$docs[[2]]
+#> $response$docs[[2]]$id
+#> [1] "10.1371/journal.pone.0142243/body"
+</code></pre>
+
+<p>Get docs, mlt, and stats output</p>
+
+<pre><code class="r">solr_all(q = 'ecology', rows = 2, fl = 'id', mlt = 'true', mlt.count = 2, mlt.fl = 'abstract', stats = 'true', stats.field = 'counter_total_all')
+#> $response
+#> $response$numFound
+#> [1] 31467
+#> 
+#> $response$start
+#> [1] 0
+#> 
+#> $response$docs
+#> $response$docs[[1]]
+#> $response$docs[[1]]$id
+#> [1] "10.1371/journal.pone.0059813"
+#> 
+#> 
+#> $response$docs[[2]]
+#> $response$docs[[2]]$id
+#> [1] "10.1371/journal.pone.0001248"
+#> 
+#> 
+#> 
+#> 
+#> $moreLikeThis
+#> $moreLikeThis$`10.1371/journal.pone.0059813`
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$numFound
+#> [1] 152704
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$start
+#> [1] 0
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$docs
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$docs[[1]]
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$docs[[1]]$id
+#> [1] "10.1371/journal.pone.0111996"
+#> 
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$docs[[2]]
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$docs[[2]]$id
+#> [1] "10.1371/journal.pone.0143687"
+#> 
+#> 
+#> 
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0001248`
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$numFound
+#> [1] 159058
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$start
+#> [1] 0
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$docs
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$docs[[1]]
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$docs[[1]]$id
+#> [1] "10.1371/journal.pone.0001275"
+#> 
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$docs[[2]]
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$docs[[2]]$id
+#> [1] "10.1371/journal.pone.0024192"
+#> 
+#> 
+#> 
+#> 
+#> 
+#> $stats
+#> $stats$stats_fields
+#> $stats$stats_fields$counter_total_all
+#> $stats$stats_fields$counter_total_all$min
+#> [1] 16
+#> 
+#> $stats$stats_fields$counter_total_all$max
+#> [1] 367697
+#> 
+#> $stats$stats_fields$counter_total_all$count
+#> [1] 31467
+#> 
+#> $stats$stats_fields$counter_total_all$missing
+#> [1] 0
+#> 
+#> $stats$stats_fields$counter_total_all$sum
+#> [1] 141552408
+#> 
+#> $stats$stats_fields$counter_total_all$sumOfSquares
+#> [1] 3.162032e+12
+#> 
+#> $stats$stats_fields$counter_total_all$mean
+#> [1] 4498.44
+#> 
+#> $stats$stats_fields$counter_total_all$stddev
+#> [1] 8958.45
+#> 
+#> $stats$stats_fields$counter_total_all$facets
+#> named list()
+</code></pre>
+
+<h2>Facet</h2>
+
+<pre><code class="r">solr_facet(q = '*:*', facet.field = 'journal', facet.query = c('cell', 'bird'))
+#> $facet_queries
+#>   term  value
+#> 1 cell 128657
+#> 2 bird  13063
+#> 
+#> $facet_fields
+#> $facet_fields$journal
+#>                                 X1      X2
+#> 1                         plos one 1233662
+#> 2                    plos genetics   49285
+#> 3                   plos pathogens   42817
+#> 4       plos computational biology   36373
+#> 5 plos neglected tropical diseases   33911
+#> 6                     plos biology   28745
+#> 7                    plos medicine   19934
+#> 8             plos clinical trials     521
+#> 9                     plos medicin       9
+#> 
+#> 
+#> $facet_pivot
+#> NULL
+#> 
+#> $facet_dates
+#> NULL
+#> 
+#> $facet_ranges
+#> NULL
+</code></pre>
+
+<h2>Highlight</h2>
+
+<pre><code class="r">solr_highlight(q = 'alcohol', hl.fl = 'abstract', rows = 2)
+#> $`10.1371/journal.pmed.0040151`
+#> $`10.1371/journal.pmed.0040151`$abstract
+#> [1] "Background: <em>Alcohol</em> consumption causes an estimated 4% of the global disease burden, prompting"
+#> 
+#> 
+#> $`10.1371/journal.pone.0027752`
+#> $`10.1371/journal.pone.0027752`$abstract
+#> [1] "Background: The negative influences of <em>alcohol</em> on TB management with regard to delays in seeking"
+</code></pre>
+
+<h2>Stats</h2>
+
+<pre><code class="r">out <- solr_stats(q = 'ecology', stats.field = c('counter_total_all', 'alm_twitterCount'), stats.facet = c('journal', 'volume'))
+</code></pre>
+
+<pre><code class="r">out$data
+#>                   min    max count missing       sum sumOfSquares
+#> counter_total_all  16 367697 31467       0 141552408 3.162032e+12
+#> alm_twitterCount    0   1756 31467       0    168586 3.267801e+07
+#>                          mean     stddev
+#> counter_total_all 4498.439889 8958.45030
+#> alm_twitterCount     5.357549   31.77757
+</code></pre>
+
+<pre><code class="r">out$facet
+#> $counter_total_all
+#> $counter_total_all$volume
+#>     min    max count missing      sum sumOfSquares      mean    stddev
+#> 1    20 166202   887       0  2645927  63864880371  2983.007  7948.200
+#> 2   495 103147   105       0  1017325  23587444387  9688.810 11490.287
+#> 3  1950  69628    69       0   704216  13763808310 10206.029  9834.333
+#> 4   742  13856     9       0    48373    375236903  5374.778  3795.438
+#> 5  1871 182622    81       0  1509647  87261688837 18637.617 27185.811
+#> 6  1667 117922   482       0  5836186 162503606896 12108.270 13817.754
+#> 7  1340 128083   741       0  7714963 188647618509 10411.556 12098.852
+#> 8   667 362410  1010       0  9692492 340237069126  9596.527 15653.040
+#> 9   103 113220  1539       0 12095764 218958657256  7859.496  8975.188
+#> 10   72 243873  2948       0 17699332 327210596846  6003.844  8658.717
+#> 11   51 184259  4825       0 24198104 382922818910  5015.151  7363.541
+#> 12   16 367697  6360       0 26374352 533183277470  4146.911  8163.790
+#> 13   42 287741  6620       0 21003701 612616254755  3172.765  9082.194
+#> 14  128 161520  5791       0 11012026 206899109466  1901.576  5667.209
+#>    volume
+#> 1      11
+#> 2      12
+#> 3      13
+#> 4      14
+#> 5       1
+#> 6       2
+#> 7       3
+#> 8       4
+#> 9       5
+#> 10      6
+#> 11      7
+#> 12      8
+#> 13      9
+#> 14     10
+#> 
+#> $counter_total_all$journal
+#>    min    max count missing      sum sumOfSquares      mean    stddev
+#> 1  667 117922   243       0  4074303 1.460258e+11 16766.679 17920.074
+#> 2  742 265561   884       0 14006081 5.507548e+11 15843.983 19298.065
+#> 3 8463  13797     2       0    22260 2.619796e+08 11130.000  3771.708
+#> 4   16 367697 25915       0 96069530 1.943903e+12  3707.101  7827.546
+#> 5  915  61956   595       0  4788553 6.579963e+10  8047.988  6774.558
+#> 6  548  76290   758       0  6326284 9.168443e+10  8346.021  7167.106
+#> 7  268 212048  1239       0  5876481 1.010080e+11  4742.923  7686.101
+#> 8  495 287741   580       0  4211717 1.411022e+11  7261.581 13815.867
+#>                            journal
+#> 1                    plos medicine
+#> 2                     plos biology
+#> 3             plos clinical trials
+#> 4                         plos one
+#> 5                   plos pathogens
+#> 6                    plos genetics
+#> 7 plos neglected tropical diseases
+#> 8       plos computational biology
+#> 
+#> 
+#> $alm_twitterCount
+#> $alm_twitterCount$volume
+#>    min  max count missing   sum sumOfSquares      mean     stddev volume
+#> 1    0 1756   887       0 12295      4040629 13.861330  66.092178     11
+#> 2    0 1045   105       0  6466      1885054 61.580952 119.569402     12
+#> 3    0  283    69       0  3478       509732 50.405797  70.128101     13
+#> 4    6  274     9       0   647       102391 71.888889  83.575482     14
+#> 5    0   42    81       0   176         4996  2.172840   7.594060      1
+#> 6    0   74   482       0   628        15812  1.302905   5.583197      2
+#> 7    0   48   741       0   652        11036  0.879892   3.760087      3
+#> 8    0  239  1010       0  1039        74993  1.028713   8.559485      4
+#> 9    0  126  1539       0  1901        90297  1.235218   7.562004      5
+#> 10   0  886  2948       0  4357      1245453  1.477951  20.504442      6
+#> 11   0  822  4825       0 19646      2037596  4.071710  20.144602      7
+#> 12   0 1503  6360       0 35938      6505618  5.650629  31.482092      8
+#> 13   0 1539  6620       0 49837     12847207  7.528248  43.408246      9
+#> 14   0  863  5791       0 31526      3307198  5.443965  23.271216     10
+#> 
+#> $alm_twitterCount$journal
+#>   min  max count missing    sum sumOfSquares      mean   stddev
+#> 1   0  777   243       0   4251      1028595 17.493827 62.79406
+#> 2   0 1756   884       0  16405      6088729 18.557692 80.93655
+#> 3   0    3     2       0      3            9  1.500000  2.12132
+#> 4   0 1539 25915       0 123409     23521391  4.762068 29.74883
+#> 5   0  122   595       0   4265       160581  7.168067 14.79428
+#> 6   0  178   758       0   4277       148277  5.642480 12.80605
+#> 7   0  886  1239       0   4972      1048908  4.012914 28.82956
+#> 8   0  285   580       0   4166       265578  7.182759 20.17431
+#>                            journal
+#> 1                    plos medicine
+#> 2                     plos biology
+#> 3             plos clinical trials
+#> 4                         plos one
+#> 5                   plos pathogens
+#> 6                    plos genetics
+#> 7 plos neglected tropical diseases
+#> 8       plos computational biology
+</code></pre>
+
+<h2>More like this</h2>
+
+<p><code>solr_mlt()</code> is a function to return documents similar to those matching the query.</p>
+
+<pre><code class="r">out <- solr_mlt(q = 'title:"ecology" AND body:"cell"', mlt.fl = 'title', mlt.mindf = 1, mlt.mintf = 1, fl = 'counter_total_all', rows = 5)
+out$docs
+#> Source: local data frame [5 x 2]
+#> 
+#>                             id counter_total_all
+#>                          (chr)             (int)
+#> 1 10.1371/journal.pbio.1001805             17081
+#> 2 10.1371/journal.pbio.0020440             23882
+#> 3 10.1371/journal.pone.0087217              5935
+#> 4 10.1371/journal.pbio.1002191             13036
+#> 5 10.1371/journal.pone.0040117              4316
+</code></pre>
+
+<pre><code class="r">out$mlt
+#> $`10.1371/journal.pbio.1001805`
+#>                             id counter_total_all
+#> 1 10.1371/journal.pone.0082578              2196
+#> 2 10.1371/journal.pone.0098876              2448
+#> 3 10.1371/journal.pone.0102159              1177
+#> 4 10.1371/journal.pcbi.1002652              3102
+#> 5 10.1371/journal.pcbi.1003408              6942
+#> 
+#> $`10.1371/journal.pbio.0020440`
+#>                             id counter_total_all
+#> 1 10.1371/journal.pone.0102679              3112
+#> 2 10.1371/journal.pone.0035964              5571
+#> 3 10.1371/journal.pone.0003259              2800
+#> 4 10.1371/journal.pntd.0003377              3392
+#> 5 10.1371/journal.pone.0068814              7522
+#> 
+#> $`10.1371/journal.pone.0087217`
+#>                             id counter_total_all
+#> 1 10.1371/journal.pone.0131665               409
+#> 2 10.1371/journal.pcbi.0020092             19604
+#> 3 10.1371/journal.pone.0133941               475
+#> 4 10.1371/journal.pone.0123774               997
+#> 5 10.1371/journal.pone.0140306               322
+#> 
+#> $`10.1371/journal.pbio.1002191`
+#>                             id counter_total_all
+#> 1 10.1371/journal.pbio.1002232              1950
+#> 2 10.1371/journal.pone.0131700               979
+#> 3 10.1371/journal.pone.0070448              1608
+#> 4 10.1371/journal.pone.0028737              7481
+#> 5 10.1371/journal.pone.0052330              5595
+#> 
+#> $`10.1371/journal.pone.0040117`
+#>                             id counter_total_all
+#> 1 10.1371/journal.pone.0069352              2763
+#> 2 10.1371/journal.pone.0148280               467
+#> 3 10.1371/journal.pone.0035502              4031
+#> 4 10.1371/journal.pone.0014065              5764
+#> 5 10.1371/journal.pone.0113280              1984
+</code></pre>
+
+<h2>Groups</h2>
+
+<p><code>solr_group()</code> is a function to return search results grouped by a field.</p>
+
+<pre><code class="r">solr_group(q = 'ecology', group.field = 'journal', group.limit = 1, fl = c('id', 'alm_twitterCount'))
+#>                         groupValue numFound start
+#> 1                         plos one    25915     0
+#> 2       plos computational biology      580     0
+#> 3                     plos biology      884     0
+#> 4                             none     1251     0
+#> 5                    plos medicine      243     0
+#> 6 plos neglected tropical diseases     1239     0
+#> 7                   plos pathogens      595     0
+#> 8                    plos genetics      758     0
+#> 9             plos clinical trials        2     0
+#>                             id alm_twitterCount
+#> 1 10.1371/journal.pone.0059813               56
+#> 2 10.1371/journal.pcbi.1003594               21
+#> 3 10.1371/journal.pbio.1002358               16
+#> 4 10.1371/journal.pone.0046671                2
+#> 5 10.1371/journal.pmed.1000303                0
+#> 6 10.1371/journal.pntd.0002577                2
+#> 7 10.1371/journal.ppat.1003372                2
+#> 8 10.1371/journal.pgen.1001197                0
+#> 9 10.1371/journal.pctr.0020010                0
+</code></pre>
+
+<h2>Parsing</h2>
+
+<p><code>solr_parse()</code> is a general purpose parser function with extension methods for parsing outputs from functions in <code>solrium</code>. <code>solr_parse()</code> is used internally within functions to do parsing after retrieving data from the server. You can optionally get back raw <code>json</code>, <code>xml</code>, or <code>csv</code> with <code>raw = TRUE</code>, and then parse afterwards with <code>solr_parse()</code>.</p>
+
+<p>For example:</p>
+
+<pre><code class="r">(out <- solr_highlight(q = 'alcohol', hl.fl = 'abstract', rows = 2, raw = TRUE))
+#> [1] "{\"response\":{\"numFound\":20268,\"start\":0,\"docs\":[{},{}]},\"highlighting\":{\"10.1371/journal.pmed.0040151\":{\"abstract\":[\"Background: <em>Alcohol</em> consumption causes an estimated 4% of the global disease burden, prompting\"]},\"10.1371/journal.pone.0027752\":{\"abstract\":[\"Background: The negative influences of <em>alcohol</em> o [...]
+#> attr(,"class")
+#> [1] "sr_high"
+#> attr(,"wt")
+#> [1] "json"
+</code></pre>
+
+<p>Then parse</p>
+
+<pre><code class="r">solr_parse(out, 'df')
+#>                          names
+#> 1 10.1371/journal.pmed.0040151
+#> 2 10.1371/journal.pone.0027752
+#>                                                                                                    abstract
+#> 1   Background: <em>Alcohol</em> consumption causes an estimated 4% of the global disease burden, prompting
+#> 2 Background: The negative influences of <em>alcohol</em> on TB management with regard to delays in seeking
+</code></pre>
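+
+<p>The same raw object can also be parsed into a list rather than a data.frame (a small sketch, output omitted, assuming <code>'list'</code> is accepted as the parse type here, analogous to <code>parsetype</code> elsewhere):</p>
+
+<pre><code class="r"># parse the raw highlighting results into a list instead of a data.frame
+solr_parse(out, 'list')
+</code></pre>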
+
+<p><a href="https://github.com/ropensci/solrium/issues">Please report any issues or bugs</a>.</p>
+
+</body>
+
+</html>
diff --git a/inst/examples/add_delete.json b/inst/examples/add_delete.json
new file mode 100644
index 0000000..f81c64e
--- /dev/null
+++ b/inst/examples/add_delete.json
@@ -0,0 +1,19 @@
+{
+    "add": {
+        "doc": {
+            "id" : "978-0641723445",
+            "cat" : ["book","hardcover"],
+            "name" : "The Lightning Thief",
+            "author" : "Rick Riordan",
+            "series_t" : "Percy Jackson and the Olympians",
+            "sequence_i" : 1,
+            "genre_s" : "fantasy",
+            "inStock" : true,
+            "price" : 12.50,
+            "pages_i" : 384
+        }
+    },
+    "delete": {
+        "id" : "456"
+    }
+}
diff --git a/inst/examples/add_delete.xml b/inst/examples/add_delete.xml
new file mode 100644
index 0000000..d0b7ba0
--- /dev/null
+++ b/inst/examples/add_delete.xml
@@ -0,0 +1,19 @@
+<update>
+	<add>
+	  <doc>
+	    <field name="id">978-0641723445</field>
+	    <field name="cat">book,hardcover</field>
+	    <field name="name">The Lightning Thief</field>
+	    <field name="author">Rick Riordan</field>
+	    <field name="series_t">Percy Jackson and the Olympians</field>
+	    <field name="sequence_i">1</field>
+	    <field name="genre_s">fantasy</field>
+	    <field name="inStock">TRUE</field>
+	    <field name="price">12.5</field>
+	    <field name="pages_i">384</field>
+	  </doc>
+	</add>
+	<delete>
+		<id>456</id>
+	</delete>
+</update>
diff --git a/inst/examples/books.csv b/inst/examples/books.csv
new file mode 100644
index 0000000..8ccecbb
--- /dev/null
+++ b/inst/examples/books.csv
@@ -0,0 +1,11 @@
+id,cat,name,price,inStock,author,series_t,sequence_i,genre_s
+0553573403,book,A Game of Thrones,7.99,true,George R.R. Martin,"A Song of Ice and Fire",1,fantasy
+0553579908,book,A Clash of Kings,7.99,true,George R.R. Martin,"A Song of Ice and Fire",2,fantasy
+055357342X,book,A Storm of Swords,7.99,true,George R.R. Martin,"A Song of Ice and Fire",3,fantasy
+0553293354,book,Foundation,7.99,true,Isaac Asimov,Foundation Novels,1,scifi
+0812521390,book,The Black Company,6.99,false,Glen Cook,The Chronicles of The Black Company,1,fantasy
+0812550706,book,Ender's Game,6.99,true,Orson Scott Card,Ender,1,scifi
+0441385532,book,Jhereg,7.95,false,Steven Brust,Vlad Taltos,1,fantasy
+0380014300,book,Nine Princes In Amber,6.99,true,Roger Zelazny,the Chronicles of Amber,1,fantasy
+0805080481,book,The Book of Three,5.99,true,Lloyd Alexander,The Chronicles of Prydain,1,fantasy
+080508049X,book,The Black Cauldron,5.99,true,Lloyd Alexander,The Chronicles of Prydain,2,fantasy
diff --git a/inst/examples/books.json b/inst/examples/books.json
new file mode 100644
index 0000000..f82d510
--- /dev/null
+++ b/inst/examples/books.json
@@ -0,0 +1,51 @@
+[
+  {
+    "id" : "978-0641723445",
+    "cat" : ["book","hardcover"],
+    "name" : "The Lightning Thief",
+    "author" : "Rick Riordan",
+    "series_t" : "Percy Jackson and the Olympians",
+    "sequence_i" : 1,
+    "genre_s" : "fantasy",
+    "inStock" : true,
+    "price" : 12.50,
+    "pages_i" : 384
+  }
+,
+  {
+    "id" : "978-1423103349",
+    "cat" : ["book","paperback"],
+    "name" : "The Sea of Monsters",
+    "author" : "Rick Riordan",
+    "series_t" : "Percy Jackson and the Olympians",
+    "sequence_i" : 2,
+    "genre_s" : "fantasy",
+    "inStock" : true,
+    "price" : 6.49,
+    "pages_i" : 304
+  }
+,
+  {
+    "id" : "978-1857995879",
+    "cat" : ["book","paperback"],
+    "name" : "Sophie's World : The Greek Philosophers",
+    "author" : "Jostein Gaarder",
+    "sequence_i" : 1,
+    "genre_s" : "fantasy",
+    "inStock" : true,
+    "price" : 3.07,
+    "pages_i" : 64
+  }
+,
+  {
+    "id" : "978-1933988177",
+    "cat" : ["book","paperback"],
+    "name" : "Lucene in Action, Second Edition",
+    "author" : "Michael McCandless",
+    "sequence_i" : 1,
+    "genre_s" : "IT",
+    "inStock" : true,
+    "price" : 30.50,
+    "pages_i" : 475
+  }
+]
diff --git a/inst/examples/books.xml b/inst/examples/books.xml
new file mode 100644
index 0000000..568fa02
--- /dev/null
+++ b/inst/examples/books.xml
@@ -0,0 +1,50 @@
+<add>
+  <doc>
+    <field name="id">978-0641723445</field>
+    <field name="cat">book,hardcover</field>
+    <field name="name">The Lightning Thief</field>
+    <field name="author">Rick Riordan</field>
+    <field name="series_t">Percy Jackson and the Olympians</field>
+    <field name="sequence_i">1</field>
+    <field name="genre_s">fantasy</field>
+    <field name="inStock">TRUE</field>
+    <field name="price">12.5</field>
+    <field name="pages_i">384</field>
+  </doc>
+  <doc>
+    <field name="id">978-1423103349</field>
+    <field name="cat">book,paperback</field>
+    <field name="name">The Sea of Monsters</field>
+    <field name="author">Rick Riordan</field>
+    <field name="series_t">Percy Jackson and the Olympians</field>
+    <field name="sequence_i">2</field>
+    <field name="genre_s">fantasy</field>
+    <field name="inStock">TRUE</field>
+    <field name="price">6.5</field>
+    <field name="pages_i">304</field>
+  </doc>
+  <doc>
+    <field name="id">978-1857995879</field>
+    <field name="cat">book,paperback</field>
+    <field name="name">Sophies World : The Greek Philosophers</field>
+    <field name="author">Jostein Gaarder</field>
+    <field name="series_t">NA</field>
+    <field name="sequence_i">1</field>
+    <field name="genre_s">fantasy</field>
+    <field name="inStock">TRUE</field>
+    <field name="price">3.7</field>
+    <field name="pages_i">64</field>
+  </doc>
+  <doc>
+    <field name="id">978-1933988177</field>
+    <field name="cat">book,paperback</field>
+    <field name="name">Lucene in Action, Second Edition</field>
+    <field name="author">Michael McCandless</field>
+    <field name="series_t">NA</field>
+    <field name="sequence_i">1</field>
+    <field name="genre_s">IT</field>
+    <field name="inStock">TRUE</field>
+    <field name="price">30.5</field>
+    <field name="pages_i">475</field>
+  </doc>
+</add>
diff --git a/inst/examples/books2.json b/inst/examples/books2.json
new file mode 100644
index 0000000..b4513d2
--- /dev/null
+++ b/inst/examples/books2.json
@@ -0,0 +1,51 @@
+[
+  {
+    "id" : "343334534545",
+    "cat" : ["book","hardcover"],
+    "name" : "Bears, lions",
+    "author" : "Foo bar",
+    "series_t" : "Percy Jackson and the Olympians",
+    "sequence_i" : 1,
+    "genre_s" : "fantasy",
+    "inStock" : true,
+    "price" : 12.50,
+    "pages_i" : 384
+  }
+,
+  {
+    "id" : "29234928423434",
+    "cat" : ["book","paperback"],
+    "name" : "The Sea of Monsters",
+    "author" : "Rick Bick",
+    "series_t" : "Stuff and things",
+    "sequence_i" : 2,
+    "genre_s" : "fantasy",
+    "inStock" : true,
+    "price" : 3.49,
+    "pages_i" : 404
+  }
+,
+  {
+    "id" : "3345345345345",
+    "cat" : ["book","paperback"],
+    "name" : "Sophie's World : The Roman Philosophers",
+    "author" : "Jill Brown",
+    "sequence_i" : 1,
+    "genre_s" : "fantasy",
+    "inStock" : true,
+    "price" : 4.07,
+    "pages_i" : 64
+  }
+,
+  {
+    "id" : "2343454435",
+    "cat" : ["book","paperback"],
+    "name" : "Lucene in Action, Third Edition",
+    "author" : "Michael McCandless",
+    "sequence_i" : 1,
+    "genre_s" : "IT",
+    "inStock" : true,
+    "price" : 34.50,
+    "pages_i" : 375
+  }
+]
diff --git a/inst/examples/books2_delete.json b/inst/examples/books2_delete.json
new file mode 100644
index 0000000..bcc196c
--- /dev/null
+++ b/inst/examples/books2_delete.json
@@ -0,0 +1,6 @@
+{
+    "delete": {"id" : "343334534545"},
+    "delete": {"id" : "29234928423434"},
+    "delete": {"id" : "3345345345345"},
+    "delete": {"id" : "2343454435"}
+}
diff --git a/inst/examples/books2_delete.xml b/inst/examples/books2_delete.xml
new file mode 100644
index 0000000..35fb15f
--- /dev/null
+++ b/inst/examples/books2_delete.xml
@@ -0,0 +1,6 @@
+<delete>
+	<id>343334534545</id>
+	<id>29234928423434</id>
+	<id>3345345345345</id>
+	<id>2343454435</id>
+</delete>
diff --git a/inst/examples/books_delete.json b/inst/examples/books_delete.json
new file mode 100644
index 0000000..7baeb2d
--- /dev/null
+++ b/inst/examples/books_delete.json
@@ -0,0 +1,6 @@
+{
+    "delete": {"id" : "978-0641723445"},
+    "delete": {"id" : "978-1423103349"},
+    "delete": {"id" : "978-1857995879"},
+    "delete": {"id" : "978-1933988177"}
+}
diff --git a/inst/examples/books_delete.xml b/inst/examples/books_delete.xml
new file mode 100644
index 0000000..01413d3
--- /dev/null
+++ b/inst/examples/books_delete.xml
@@ -0,0 +1,6 @@
+<delete>
+	<id>978-0641723445</id>
+	<id>978-1423103349</id>
+	<id>978-1857995879</id>
+	<id>978-1933988177</id>
+</delete>
diff --git a/inst/examples/schema.xml b/inst/examples/schema.xml
new file mode 100644
index 0000000..596ecac
--- /dev/null
+++ b/inst/examples/schema.xml
@@ -0,0 +1,534 @@
+<?xml version="1.0" encoding="UTF-8" ?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!--
+ This is the Solr schema file. This file should be named "schema.xml" and
+ should be in the conf directory under the solr home
+ (i.e. ./solr/conf/schema.xml by default)
+ or located where the classloader for the Solr webapp can find it.
+
+ This example schema is the recommended starting point for users.
+ It should be kept correct and concise, usable out-of-the-box.
+
+ For more information on how to customize this file, please see
+ http://wiki.apache.org/solr/SchemaXml
+-->
+
+<schema name="example" version="1.5">
+  <!-- attribute "name" is the name of this schema and is only used for display purposes.
+       version="x.y" is Solr's version number for the schema syntax and
+       semantics.  It should not normally be changed by applications.
+
+       1.0: multiValued attribute did not exist, all fields are multiValued
+            by nature
+       1.1: multiValued attribute introduced, false by default
+       1.2: omitTermFreqAndPositions attribute introduced, true by default
+            except for text fields.
+       1.3: removed optional field compress feature
+       1.4: autoGeneratePhraseQueries attribute introduced to drive QueryParser
+            behavior when a single string produces multiple tokens.  Defaults
+            to off for version >= 1.4
+       1.5: omitNorms defaults to true for primitive field types
+            (int, float, boolean, string...)
+     -->
+
+
+   <!-- Valid attributes for fields:
+     name: mandatory - the name for the field
+     type: mandatory - the name of a field type from the
+       <types> fieldType section
+     indexed: true if this field should be indexed (searchable or sortable)
+     stored: true if this field should be retrievable
+     docValues: true if this field should have doc values. Doc values are
+       useful for faceting, grouping, sorting and function queries. Although not
+       required, doc values will make the index faster to load, more
+       NRT-friendly and more memory-efficient. They however come with some
+       limitations: they are currently only supported by StrField, UUIDField
+       and all Trie*Fields, and depending on the field type, they might
+       require the field to be single-valued, be required or have a default
+       value (check the documentation of the field type you're interested in
+       for more information)
+     multiValued: true if this field may contain multiple values per document
+     omitNorms: (expert) set to true to omit the norms associated with
+       this field (this disables length normalization and index-time
+       boosting for the field, and saves some memory).  Only full-text
+       fields or fields that need an index-time boost need norms.
+       Norms are omitted for primitive (non-analyzed) types by default.
+     termVectors: [false] set to true to store the term vector for a
+       given field.
+       When using MoreLikeThis, fields used for similarity should be
+       stored for best performance.
+     termPositions: Store position information with the term vector.
+       This will increase storage costs.
+     termOffsets: Store offset information with the term vector. This
+       will increase storage costs.
+     required: The field is required.  It will throw an error if the
+       value does not exist
+     default: a value that should be used if no value is specified
+       when adding a document.
+   -->
+
+   <!-- field names should consist of alphanumeric or underscore characters only and
+      not start with a digit.  This is not currently strictly enforced,
+      but other field names will not have first class support from all components
+      and back compatibility is not guaranteed.  Names with both leading and
+      trailing underscores (e.g. _version_) are reserved.
+   -->
+
+   <!-- If you remove this field, you must _also_ disable the update log in solrconfig.xml
+      or Solr won't start. _version_ and update log are required for SolrCloud
+   -->
+   <field name="_version_" type="long" indexed="true" stored="true"/>
+
+   <!-- points to the root document of a block of nested documents. Required for nested
+      document support, may be removed otherwise
+   -->
+   <field name="_root_" type="string" indexed="true" stored="false"/>
+
+   <!-- Only remove the "id" field if you have a very good reason to. While not strictly
+     required, it is highly recommended. A <uniqueKey> is present in almost all Solr
+     installations. See the <uniqueKey> declaration below where <uniqueKey> is set to "id".
+     Do NOT change the type and apply index-time analysis to the <uniqueKey> as it will likely
+     make routing in SolrCloud and document replacement in general fail. Limited _query_ time
+     analysis is possible as long as the indexing process is guaranteed to index the term
+     in a compatible way. Any analysis applied to the <uniqueKey> should _not_ produce multiple
+     tokens
+   -->
+   <field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
+
+   <!-- Dynamic field definitions allow using convention over configuration
+       for fields via the specification of patterns to match field names.
+       EXAMPLE:  name="*_i" will match any field ending in _i (like myid_i, z_i)
+       RESTRICTION: the glob-like pattern in the name attribute must have
+       a "*" only at the start or the end.  -->
+
+   <dynamicField name="*_i"  type="int"    indexed="true"  stored="true"/>
+   <dynamicField name="*_is" type="int"    indexed="true"  stored="true"  multiValued="true"/>
+   <dynamicField name="*_s"  type="string"  indexed="true"  stored="true" />
+   <dynamicField name="*_ss" type="string"  indexed="true"  stored="true" multiValued="true"/>
+   <dynamicField name="*_l"  type="long"   indexed="true"  stored="true"/>
+   <dynamicField name="*_ls" type="long"   indexed="true"  stored="true"  multiValued="true"/>
+   <dynamicField name="*_t"  type="text_general"    indexed="true"  stored="true"/>
+   <dynamicField name="*_txt" type="text_general"   indexed="true"  stored="true" multiValued="true"/>
+   <dynamicField name="*_en"  type="text_en"    indexed="true"  stored="true" multiValued="true"/>
+   <dynamicField name="*_b"  type="boolean" indexed="true" stored="true"/>
+   <dynamicField name="*_bs" type="boolean" indexed="true" stored="true"  multiValued="true"/>
+   <dynamicField name="*_f"  type="float"  indexed="true"  stored="true"/>
+   <dynamicField name="*_fs" type="float"  indexed="true"  stored="true"  multiValued="true"/>
+   <dynamicField name="*_d"  type="double" indexed="true"  stored="true"/>
+   <dynamicField name="*_ds" type="double" indexed="true"  stored="true"  multiValued="true"/>
+
+   <!-- Type used to index the lat and lon components for the "location" FieldType -->
+   <dynamicField name="*_coordinate"  type="tdouble" indexed="true"  stored="false" />
+
+   <dynamicField name="*_dt"  type="date"    indexed="true"  stored="true"/>
+   <dynamicField name="*_dts" type="date"    indexed="true"  stored="true" multiValued="true"/>
+   <dynamicField name="*_p"  type="location" indexed="true" stored="true"/>
+
+   <!-- some trie-coded dynamic fields for faster range queries -->
+   <dynamicField name="*_ti" type="tint"    indexed="true"  stored="true"/>
+   <dynamicField name="*_tl" type="tlong"   indexed="true"  stored="true"/>
+   <dynamicField name="*_tf" type="tfloat"  indexed="true"  stored="true"/>
+   <dynamicField name="*_td" type="tdouble" indexed="true"  stored="true"/>
+   <dynamicField name="*_tdt" type="tdate"  indexed="true"  stored="true"/>
+
+   <dynamicField name="*_c"   type="currency" indexed="true"  stored="true"/>
+
+   <dynamicField name="ignored_*" type="ignored" multiValued="true"/>
+   <dynamicField name="attr_*" type="text_general" indexed="true" stored="true" multiValued="true"/>
+
+   <dynamicField name="random_*" type="random" />
+
+   <!-- uncomment the following to ignore any fields that don't already match an existing
+        field name or dynamic field, rather than reporting them as an error.
+        alternately, change the type="ignored" to some other type e.g. "text" if you want
+        unknown fields indexed and/or stored by default -->
+   <!--dynamicField name="*" type="ignored" multiValued="true" /-->
+
+ <!-- Field to use to determine and enforce document uniqueness.
+      Unless this field is marked with required="false", it will be a required field
+   -->
+ <uniqueKey>id</uniqueKey>
+
+  <!-- copyField commands copy one field to another at the time a document
+        is added to the index.  It's used either to index the same field differently,
+        or to add multiple fields to the same field for easier/faster searching.  -->
+
+  <!--
+   <copyField source="title" dest="text"/>
+   <copyField source="body" dest="text"/>
+  -->
+
+    <!-- field type definitions. The "name" attribute is
+       just a label to be used by field definitions.  The "class"
+       attribute and any other attributes determine the real
+       behavior of the fieldType.
+         Class names starting with "solr" refer to java classes in a
+       standard package such as org.apache.solr.analysis
+    -->
+
+    <!-- The StrField type is not analyzed, but indexed/stored verbatim.
+       It supports doc values but in that case the field needs to be
+       single-valued and either required or have a default value.
+      -->
+    <fieldType name="string" class="solr.StrField" sortMissingLast="true" />
+
+    <!-- boolean type: "true" or "false" -->
+    <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true"/>
+
+    <!-- sortMissingLast and sortMissingFirst attributes are optional attributes that are
+         currently supported on types that are sorted internally as strings
+         and on numeric types.
+       This includes "string","boolean", and, as of 3.5 (and 4.x),
+       int, float, long, date, double, including the "Trie" variants.
+       - If sortMissingLast="true", then a sort on this field will cause documents
+         without the field to come after documents with the field,
+         regardless of the requested sort order (asc or desc).
+       - If sortMissingFirst="true", then a sort on this field will cause documents
+         without the field to come before documents with the field,
+         regardless of the requested sort order.
+       - If sortMissingLast="false" and sortMissingFirst="false" (the default),
+         then default lucene sorting will be used which places docs without the
+         field first in an ascending sort and last in a descending sort.
+    -->
+
+    <!--
+      Default numeric field types. For faster range queries, consider the tint/tfloat/tlong/tdouble types.
+
+      These fields support doc values, but they require the field to be
+      single-valued and either be required or have a default value.
+    -->
+    <fieldType name="int" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0"/>
+    <fieldType name="float" class="solr.TrieFloatField" precisionStep="0" positionIncrementGap="0"/>
+    <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
+    <fieldType name="double" class="solr.TrieDoubleField" precisionStep="0" positionIncrementGap="0"/>
+
+    <!--
+     Numeric field types that index each value at various levels of precision
+     to accelerate range queries when the number of values between the range
+     endpoints is large. See the javadoc for NumericRangeQuery for internal
+     implementation details.
+
+     Smaller precisionStep values (specified in bits) will lead to more tokens
+     indexed per value, slightly larger index size, and faster range queries.
+     A precisionStep of 0 disables indexing at different precision levels.
+    -->
+    <fieldType name="tint" class="solr.TrieIntField" precisionStep="8" positionIncrementGap="0"/>
+    <fieldType name="tfloat" class="solr.TrieFloatField" precisionStep="8" positionIncrementGap="0"/>
+    <fieldType name="tlong" class="solr.TrieLongField" precisionStep="8" positionIncrementGap="0"/>
+    <fieldType name="tdouble" class="solr.TrieDoubleField" precisionStep="8" positionIncrementGap="0"/>
+
+    <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and
+         is a more restricted form of the canonical representation of dateTime
+         http://www.w3.org/TR/xmlschema-2/#dateTime
+         The trailing "Z" designates UTC time and is mandatory.
+         Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z
+         All other components are mandatory.
+
+         Expressions can also be used to denote calculations that should be
+         performed relative to "NOW" to determine the value, ie...
+
+               NOW/HOUR
+                  ... Round to the start of the current hour
+               NOW-1DAY
+                  ... Exactly 1 day prior to now
+               NOW/DAY+6MONTHS+3DAYS
+                  ... 6 months and 3 days in the future from the start of
+                      the current day
+
+         Consult the TrieDateField javadocs for more information.
+
+         Note: For faster range queries, consider the tdate type
+      -->
+    <fieldType name="date" class="solr.TrieDateField" precisionStep="0" positionIncrementGap="0"/>
+
+    <!-- A Trie based date field for faster date range queries and date faceting. -->
+    <fieldType name="tdate" class="solr.TrieDateField" precisionStep="6" positionIncrementGap="0"/>
+
+
+    <!--Binary data type. The data should be sent/retrieved in as Base64 encoded Strings -->
+    <fieldType name="binary" class="solr.BinaryField"/>
+
+    <!-- The "RandomSortField" is not used to store or search any
+         data.  You can declare fields of this type in your schema
+         to generate pseudo-random orderings of your docs for sorting
+         or function purposes.  The ordering is generated based on the field
+         name and the version of the index. As long as the index version
+         remains unchanged, and the same field name is reused,
+         the ordering of the docs will be consistent.
+         If you want different pseudo-random orderings of documents,
+         for the same version of the index, use a dynamicField and
+         change the field name in the request.
+     -->
+    <fieldType name="random" class="solr.RandomSortField" indexed="true" />
+
+    <!-- solr.TextField allows the specification of custom text analyzers
+         specified as a tokenizer and a list of token filters. Different
+         analyzers may be specified for indexing and querying.
+
+         The optional positionIncrementGap puts space between multiple fields of
+         this type on the same document, with the purpose of preventing false phrase
+         matching across fields.
+
+         For more info on customizing your analyzer chain, please see
+         http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
+     -->
+
+    <!-- One can also specify an existing Analyzer class that has a
+         default constructor via the class attribute on the analyzer element.
+         Example:
+    <fieldType name="text_greek" class="solr.TextField">
+      <analyzer class="org.apache.lucene.analysis.el.GreekAnalyzer"/>
+    </fieldType>
+    -->
+
+    <!-- A text field that only splits on whitespace for exact matching of words -->
+    <fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
+      <analyzer>
+        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- A general text field that has reasonable, generic
+         cross-language defaults: it tokenizes with StandardTokenizer,
+   removes stop words from case-insensitive "stopwords.txt"
+   (empty by default), and down cases.  At query time only, it
+   also applies synonyms. -->
+    <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
+      <analyzer type="index">
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
+        <!-- in this example, we will only use synonyms at query time
+        <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
+        -->
+        <filter class="solr.LowerCaseFilterFactory"/>
+      </analyzer>
+      <analyzer type="query">
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
+        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- A text field with defaults appropriate for English: it
+         tokenizes with StandardTokenizer, removes English stop words
+         (lang/stopwords_en.txt), down cases, protects words from protwords.txt, and
+         finally applies Porter's stemming.  The query time analyzer
+         also applies synonyms from synonyms.txt. -->
+    <fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
+      <analyzer type="index">
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <!-- in this example, we will only use synonyms at query time
+        <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
+        -->
+        <!-- Case insensitive stop word removal.
+        -->
+        <filter class="solr.StopFilterFactory"
+                ignoreCase="true"
+                words="lang/stopwords_en.txt"
+                />
+        <filter class="solr.LowerCaseFilterFactory"/>
+  <filter class="solr.EnglishPossessiveFilterFactory"/>
+        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
+  <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
+        <filter class="solr.EnglishMinimalStemFilterFactory"/>
+  -->
+        <filter class="solr.PorterStemFilterFactory"/>
+      </analyzer>
+      <analyzer type="query">
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
+        <filter class="solr.StopFilterFactory"
+                ignoreCase="true"
+                words="lang/stopwords_en.txt"
+                />
+        <filter class="solr.LowerCaseFilterFactory"/>
+  <filter class="solr.EnglishPossessiveFilterFactory"/>
+        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
+  <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
+        <filter class="solr.EnglishMinimalStemFilterFactory"/>
+  -->
+        <filter class="solr.PorterStemFilterFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- A text field with defaults appropriate for English, plus
+   aggressive word-splitting and autophrase features enabled.
+   This field is just like text_en, except it adds
+   WordDelimiterFilter to enable splitting and matching of
+   words on case-change, alpha numeric boundaries, and
+   non-alphanumeric chars.  This means certain compound word
+   cases will work, for example query "wi fi" will match
+   document "WiFi" or "wi-fi".
+        -->
+    <fieldType name="text_en_splitting" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
+      <analyzer type="index">
+        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
+        <!-- in this example, we will only use synonyms at query time
+        <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
+        -->
+        <!-- Case insensitive stop word removal.
+        -->
+        <filter class="solr.StopFilterFactory"
+                ignoreCase="true"
+                words="lang/stopwords_en.txt"
+                />
+        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
+        <filter class="solr.PorterStemFilterFactory"/>
+      </analyzer>
+      <analyzer type="query">
+        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
+        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
+        <filter class="solr.StopFilterFactory"
+                ignoreCase="true"
+                words="lang/stopwords_en.txt"
+                />
+        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
+        <filter class="solr.PorterStemFilterFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- Less flexible matching, but less false matches.  Probably not ideal for product names,
+         but may be good for SKUs.  Can insert dashes in the wrong place and still match. -->
+    <fieldType name="text_en_splitting_tight" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
+      <analyzer>
+        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
+        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>
+        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
+        <filter class="solr.EnglishMinimalStemFilterFactory"/>
+        <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
+             possible with WordDelimiterFilter in conjunction with stemming. -->
+        <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- Just like text_general except it reverses the characters of
+   each token, to enable more efficient leading wildcard queries. -->
+    <fieldType name="text_general_rev" class="solr.TextField" positionIncrementGap="100">
+      <analyzer type="index">
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.ReversedWildcardFilterFactory" withOriginal="true"
+           maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>
+      </analyzer>
+      <analyzer type="query">
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
+        <filter class="solr.LowerCaseFilterFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- This is an example of using the KeywordTokenizer along
+         With various TokenFilterFactories to produce a sortable field
+         that does not include some properties of the source text
+      -->
+    <fieldType name="alphaOnlySort" class="solr.TextField" sortMissingLast="true" omitNorms="true">
+      <analyzer>
+        <!-- KeywordTokenizer does no actual tokenizing, so the entire
+             input string is preserved as a single token
+          -->
+        <tokenizer class="solr.KeywordTokenizerFactory"/>
+        <!-- The LowerCase TokenFilter does what you expect, which can be
+             useful when you want your sorting to be case insensitive
+          -->
+        <filter class="solr.LowerCaseFilterFactory" />
+        <!-- The TrimFilter removes any leading or trailing whitespace -->
+        <filter class="solr.TrimFilterFactory" />
+        <!-- The PatternReplaceFilter gives you the flexibility to use
+             Java Regular expression to replace any sequence of characters
+             matching a pattern with an arbitrary replacement string,
+             which may include back references to portions of the original
+             string matched by the pattern.
+
+             See the Java Regular Expression documentation for more
+             information on pattern and replacement string syntax.
+
+             http://docs.oracle.com/javase/7/docs/api/java/util/regex/package-summary.html
+          -->
+        <filter class="solr.PatternReplaceFilterFactory"
+                pattern="([^a-z])" replacement="" replace="all"
+        />
+      </analyzer>
+    </fieldType>
+
+    <!-- lowercases the entire field value, keeping it as a single token.  -->
+    <fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">
+      <analyzer>
+        <tokenizer class="solr.KeywordTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory" />
+      </analyzer>
+    </fieldType>
+
+    <!-- since fields of this type are by default not stored or indexed,
+         any data added to them will be ignored outright.  -->
+    <fieldType name="ignored" stored="false" indexed="false" multiValued="true" class="solr.StrField" />
+
+    <!-- This point type indexes the coordinates as separate fields (subFields)
+      If subFieldType is defined, it references a type, and a dynamic field
+      definition is created matching *___<typename>.  Alternately, if
+      subFieldSuffix is defined, that is used to create the subFields.
+      Example: if subFieldType="double", then the coordinates would be
+        indexed in fields myloc_0___double,myloc_1___double.
+      Example: if subFieldSuffix="_d" then the coordinates would be indexed
+        in fields myloc_0_d,myloc_1_d
+      The subFields are an implementation detail of the fieldType, and end
+      users normally should not need to know about them.
+     -->
+    <fieldType name="point" class="solr.PointType" dimension="2" subFieldSuffix="_d"/>
+
+    <!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. -->
+    <fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate"/>
+
+    <!-- An alternative geospatial field type new to Solr 4.  It supports multiValued and polygon shapes.
+      For more information about this and other Spatial fields new to Solr 4, see:
+      http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
+    -->
+    <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
+        geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers" />
+
+    <!-- Spatial rectangle (bounding box) field. It supports most spatial predicates, and has
+     special relevancy modes: score=overlapRatio|area|area2D (local-param to the query).  DocValues is recommended for
+     relevancy. -->
+    <fieldType name="bbox" class="solr.BBoxField"
+               geo="true" distanceUnits="kilometers" numberType="_bbox_coord" />
+    <fieldType name="_bbox_coord" class="solr.TrieDoubleField" precisionStep="8" docValues="true" stored="false"/>
+
+   <!-- Money/currency field type. See http://wiki.apache.org/solr/MoneyFieldType
+        Parameters:
+          defaultCurrency: Specifies the default currency if none specified. Defaults to "USD"
+          precisionStep:   Specifies the precisionStep for the TrieLong field used for the amount
+          providerClass:   Lets you plug in other exchange provider backend:
+                           solr.FileExchangeRateProvider is the default and takes one parameter:
+                             currencyConfig: name of an xml file holding exchange rates
+                           solr.OpenExchangeRatesOrgProvider uses rates from openexchangerates.org:
+                             ratesFileLocation: URL or path to rates JSON file (default latest.json on the web)
+                             refreshInterval: Number of minutes between each rates fetch (default: 1440, min: 60)
+   -->
+    <fieldType name="currency" class="solr.CurrencyField" precisionStep="8" defaultCurrency="USD" currencyConfig="currency.xml" />
+
+</schema>
diff --git a/inst/examples/solrconfig.xml b/inst/examples/solrconfig.xml
new file mode 100644
index 0000000..a964b9a
--- /dev/null
+++ b/inst/examples/solrconfig.xml
@@ -0,0 +1,583 @@
+<?xml version="1.0" encoding="UTF-8" ?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!--
+     For more details about configurations options that may appear in
+     this file, see http://wiki.apache.org/solr/SolrConfigXml.
+-->
+<config>
+  <!-- In all configuration below, a prefix of "solr." for class names
+       is an alias that causes solr to search appropriate packages,
+       including org.apache.solr.(search|update|request|core|analysis)
+
+       You may also specify a fully qualified Java classname if you
+       have your own custom plugins.
+    -->
+
+  <!-- Controls what version of Lucene various components of Solr
+       adhere to.  Generally, you want to use the latest version to
+       get all bug fixes and improvements. It is highly recommended
+       that you fully re-index after changing this setting as it can
+       affect both how text is indexed and queried.
+  -->
+  <luceneMatchVersion>5.2.1</luceneMatchVersion>
+
+  <!-- Data Directory
+
+       Used to specify an alternate directory to hold all index data
+       other than the default ./data under the Solr home.  If
+       replication is in use, this should match the replication
+       configuration.
+    -->
+  <dataDir>${solr.data.dir:}</dataDir>
+
+
+  <!-- The DirectoryFactory to use for indexes.
+
+       solr.StandardDirectoryFactory is filesystem
+       based and tries to pick the best implementation for the current
+       JVM and platform.  solr.NRTCachingDirectoryFactory, the default,
+       wraps solr.StandardDirectoryFactory and caches small files in memory
+       for better NRT performance.
+
+       One can force a particular implementation via solr.MMapDirectoryFactory,
+       solr.NIOFSDirectoryFactory, or solr.SimpleFSDirectoryFactory.
+
+       solr.RAMDirectoryFactory is memory based, not
+       persistent, and doesn't work with replication.
+    -->
+  <directoryFactory name="DirectoryFactory"
+                    class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}">
+  </directoryFactory>
+
+  <!-- The CodecFactory for defining the format of the inverted index.
+       The default implementation is SchemaCodecFactory, which is the official Lucene
+       index format, but hooks into the schema to provide per-field customization of
+       the postings lists and per-document values in the fieldType element
+       (postingsFormat/docValuesFormat). Note that most of the alternative implementations
+       are experimental, so if you choose to customize the index format, it's a good
+       idea to convert back to the official format e.g. via IndexWriter.addIndexes(IndexReader)
+       before upgrading to a newer version to avoid unnecessary reindexing.
+  -->
+  <codecFactory class="solr.SchemaCodecFactory"/>
+
+  <schemaFactory class="ClassicIndexSchemaFactory"/>
+
+  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+       Index Config - These settings control low-level behavior of indexing
+       Most example settings here show the default value, but are commented
+       out, to more easily see where customizations have been made.
+
+       Note: This replaces <indexDefaults> and <mainIndex> from older versions
+       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
+  <indexConfig>
+
+    <!-- LockFactory
+
+         This option specifies which Lucene LockFactory implementation
+         to use.
+
+         single = SingleInstanceLockFactory - suggested for a
+                  read-only index or when there is no possibility of
+                  another process trying to modify the index.
+         native = NativeFSLockFactory - uses OS native file locking.
+                  Do not use when multiple solr webapps in the same
+                  JVM are attempting to share a single index.
+         simple = SimpleFSLockFactory  - uses a plain file for locking
+
+         Defaults: 'native' is default for Solr3.6 and later, otherwise
+                   'simple' is the default
+
+         More details on the nuances of each LockFactory...
+         http://wiki.apache.org/lucene-java/AvailableLockFactories
+    -->
+    <lockType>${solr.lock.type:native}</lockType>
+
+    <!-- Lucene Infostream
+
+         To aid in advanced debugging, Lucene provides an "InfoStream"
+         of detailed information when indexing.
+
+         Setting the value to true will instruct the underlying Lucene
+         IndexWriter to write its info stream to solr's log. By default,
+         this is enabled here, and controlled through log4j.properties.
+      -->
+     <infoStream>true</infoStream>
+  </indexConfig>
+
+
+  <!-- JMX
+
+       This example enables JMX if and only if an existing MBeanServer
+       is found; use this if you want to configure JMX through JVM
+       parameters. Remove this to disable exposing Solr configuration
+       and statistics to JMX.
+
+       For more details see http://wiki.apache.org/solr/SolrJmx
+    -->
+  <jmx />
+  <!-- If you want to connect to a particular server, specify the
+       agentId
+    -->
+  <!-- <jmx agentId="myAgent" /> -->
+  <!-- If you want to start a new MBeanServer, specify the serviceUrl -->
+  <!-- <jmx serviceUrl="service:jmx:rmi:///jndi/rmi://localhost:9999/solr"/>
+    -->
+
+  <!-- The default high-performance update handler -->
+  <updateHandler class="solr.DirectUpdateHandler2">
+
+    <!-- Enables a transaction log, used for real-time get, durability, and
+         solr cloud replica recovery.  The log can grow as big as
+         uncommitted changes to the index, so use of a hard autoCommit
+         is recommended (see below).
+         "dir" - the target directory for transaction logs, defaults to the
+                solr data directory.
+         "numVersionBuckets" - sets the number of buckets used to keep
+                track of max version values when checking for re-ordered
+                updates; increase this value to reduce the cost of
+                synchronizing access to version buckets during high-volume
+                indexing; this requires 8 bytes (long) * numVersionBuckets
+                of heap space per Solr core.
+    -->
+    <updateLog>
+      <str name="dir">${solr.ulog.dir:}</str>
+      <int name="numVersionBuckets">${solr.ulog.numVersionBuckets:65536}</int>
+    </updateLog>
+
+    <!-- AutoCommit
+
+         Perform a hard commit automatically under certain conditions.
+         Instead of enabling autoCommit, consider using "commitWithin"
+         when adding documents.
+
+         http://wiki.apache.org/solr/UpdateXmlMessages
+
+         maxDocs - Maximum number of documents to add since the last
+                   commit before automatically triggering a new commit.
+
+         maxTime - Maximum amount of time in ms that is allowed to pass
+                   since a document was added before automatically
+                   triggering a new commit.
+         openSearcher - if false, the commit causes recent index changes
+           to be flushed to stable storage, but does not cause a new
+           searcher to be opened to make those changes visible.
+
+         If the updateLog is enabled, then it's highly recommended to
+         have some sort of hard autoCommit to limit the log size.
+      -->
+     <autoCommit>
+       <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
+       <openSearcher>false</openSearcher>
+     </autoCommit>
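+
+    <!-- Illustrative note (not part of the stock config): instead of relying
+         on autoCommit, a client can bound commit latency per update with the
+         commitWithin attribute on an update message, e.g.
+         <add commitWithin="5000"> ... </add>, which is what the R client's
+         commit_within argument is intended to map to.
+      -->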
+
+    <!-- softAutoCommit is like autoCommit except it causes a
+         'soft' commit which only ensures that changes are visible
+         but does not ensure that data is synced to disk.  This is
+         faster and more near-realtime friendly than a hard commit.
+      -->
+     <autoSoftCommit>
+       <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
+     </autoSoftCommit>
+
+  </updateHandler>
+
+  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+       Query section - these settings control query time things like caches
+       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
+  <query>
+    <!-- Max Boolean Clauses
+
+         Maximum number of clauses in each BooleanQuery; an exception
+         is thrown if exceeded.
+
+         ** WARNING **
+
+         This option actually modifies a global Lucene property that
+         will affect all SolrCores.  If multiple solrconfig.xml files
+         disagree on this property, the value at any given moment will
+         be based on the last SolrCore to be initialized.
+
+      -->
+    <maxBooleanClauses>1024</maxBooleanClauses>
+
+
+    <!-- Solr Internal Query Caches
+
+         There are two implementations of cache available for Solr,
+         LRUCache, based on a synchronized LinkedHashMap, and
+         FastLRUCache, based on a ConcurrentHashMap.
+
+         FastLRUCache has faster gets and slower puts in single
+         threaded operation and thus is generally faster than LRUCache
+         when the hit ratio of the cache is high (> 75%), and may be
+         faster under other scenarios on multi-cpu systems.
+    -->
+
+    <!-- Filter Cache
+
+         Cache used by SolrIndexSearcher for filters (DocSets),
+         unordered sets of *all* documents that match a query.  When a
+         new searcher is opened, its caches may be prepopulated or
+         "autowarmed" using data from caches in the old searcher.
+         autowarmCount is the number of items to prepopulate.  For
+         LRUCache, the autowarmed items will be the most recently
+         accessed items.
+
+         Parameters:
+           class - the SolrCache implementation
+               (LRUCache or FastLRUCache)
+           size - the maximum number of entries in the cache
+           initialSize - the initial capacity (number of entries) of
+               the cache.  (see java.util.HashMap)
+           autowarmCount - the number of entries to prepopulate from
+               an old cache.
+      -->
+    <filterCache class="solr.FastLRUCache"
+                 size="512"
+                 initialSize="512"
+                 autowarmCount="0"/>
+
+    <!-- Query Result Cache
+
+        Caches results of searches - ordered lists of document ids
+        (DocList) based on a query, a sort, and the range of documents requested.
+        Additional parameter supported by LRUCache:
+           maxRamMB - the maximum amount of RAM (in MB) that this cache is allowed
+                      to occupy
+     -->
+    <queryResultCache class="solr.LRUCache"
+                     size="512"
+                     initialSize="512"
+                     autowarmCount="0"/>
+
+    <!-- Document Cache
+
+         Caches Lucene Document objects (the stored fields for each
+         document).  Since Lucene internal document ids are transient,
+         this cache will not be autowarmed.
+      -->
+    <documentCache class="solr.LRUCache"
+                   size="512"
+                   initialSize="512"
+                   autowarmCount="0"/>
+
+    <!-- custom cache currently used by block join -->
+    <cache name="perSegFilter"
+      class="solr.search.LRUCache"
+      size="10"
+      initialSize="0"
+      autowarmCount="10"
+      regenerator="solr.NoOpRegenerator" />
+
+    <!-- Lazy Field Loading
+
+         If true, stored fields that are not requested will be loaded
+         lazily.  This can result in a significant speed improvement
+         if the usual case is to not load all stored fields,
+         especially if the skipped fields are large compressed text
+         fields.
+    -->
+    <enableLazyFieldLoading>true</enableLazyFieldLoading>
+
+   <!-- Result Window Size
+
+        An optimization for use with the queryResultCache.  When a search
+        is requested, a superset of the requested number of document ids
+        are collected.  For example, if a search for a particular query
+        requests matching documents 10 through 19, and queryResultWindowSize is 50,
+        then documents 0 through 49 will be collected and cached.  Any further
+        requests in that range can be satisfied via the cache.
+     -->
+   <queryResultWindowSize>20</queryResultWindowSize>
+
+   <!-- Maximum number of documents to cache for any entry in the
+        queryResultCache.
+     -->
+   <queryResultMaxDocsCached>200</queryResultMaxDocsCached>
+
+    <!-- Use Cold Searcher
+
+         If a search request comes in and there is no current
+         registered searcher, then immediately register the still
+         warming searcher and use it.  If "false" then all requests
+         will block until the first searcher is done warming.
+      -->
+    <useColdSearcher>false</useColdSearcher>
+
+    <!-- Max Warming Searchers
+
+         Maximum number of searchers that may be warming in the
+         background concurrently.  An error is returned if this limit
+         is exceeded.
+
+         Recommended values of 1-2 for read-only slaves, higher for
+         masters w/o cache warming.
+      -->
+    <maxWarmingSearchers>2</maxWarmingSearchers>
+
+  </query>
+
+
+  <!-- Request Dispatcher
+
+       This section contains instructions for how the SolrDispatchFilter
+       should behave when processing requests for this SolrCore.
+
+       handleSelect is a legacy option that affects the behavior of requests
+       such as /select?qt=XXX
+
+       handleSelect="true" will cause the SolrDispatchFilter to process
+       the request and dispatch the query to a handler specified by the
+       "qt" param, assuming "/select" isn't already registered.
+
+       handleSelect="false" will cause the SolrDispatchFilter to
+       ignore "/select" requests, resulting in a 404 unless a handler
+       is explicitly registered with the name "/select"
+
+       handleSelect="true" is not recommended for new users, but is the default
+       for backwards compatibility
+    -->
+  <requestDispatcher handleSelect="false" >
+    <!-- Request Parsing
+
+         These settings indicate how Solr Requests may be parsed, and
+         what restrictions may be placed on the ContentStreams from
+         those requests
+
+         enableRemoteStreaming - enables use of the stream.file
+         and stream.url parameters for specifying remote streams.
+
+         multipartUploadLimitInKB - specifies the max size (in KiB) of
+         Multipart File Uploads that Solr will allow in a Request.
+
+         formdataUploadLimitInKB - specifies the max size (in KiB) of
+         form data (application/x-www-form-urlencoded) sent via
+         POST. You can use POST to pass request parameters not
+         fitting into the URL.
+
+         addHttpRequestToContext - if set to true, it will instruct
+         the requestParsers to include the original HttpServletRequest
+         object in the context map of the SolrQueryRequest under the
+         key "httpRequest". It will not be used by any of the existing
+         Solr components, but may be useful when developing custom
+         plugins.
+
+         *** WARNING ***
+         The settings below authorize Solr to fetch remote files. You
+         should make sure your system has some authentication before
+         using enableRemoteStreaming="true".
+
+      -->
+    <requestParsers enableRemoteStreaming="true"
+                    multipartUploadLimitInKB="2048000"
+                    formdataUploadLimitInKB="2048"
+                    addHttpRequestToContext="false"/>
+
+    <!-- HTTP Caching
+
+         Set HTTP caching related parameters (for proxy caches and clients).
+
+         The options below instruct Solr not to output any HTTP Caching
+         related headers
+      -->
+    <httpCaching never304="true" />
+
+  </requestDispatcher>
+
+  <!-- Request Handlers
+
+       http://wiki.apache.org/solr/SolrRequestHandler
+
+       Incoming queries will be dispatched to a specific handler by name
+       based on the path specified in the request.
+
+       Legacy behavior: If the request path uses "/select" but no Request
+       Handler has that name, and if handleSelect="true" has been specified in
+       the requestDispatcher, then the Request Handler is dispatched based on
+       the qt parameter.  Handlers without a leading '/' are accessed
+       like so: http://host/app/[core/]select?qt=name  If no qt is
+       given, then the requestHandler that declares default="true" will be
+       used or the one named "standard".
+
+       If a Request Handler is declared with startup="lazy", then it will
+       not be initialized until the first request that uses it.
+
+    -->
+  <!-- SearchHandler
+
+       http://wiki.apache.org/solr/SearchHandler
+
+       For processing Search Queries, the primary Request Handler
+       provided with Solr is "SearchHandler". It delegates to a sequence
+       of SearchComponents (see below) and supports distributed
+       queries across multiple shards.
+    -->
+  <requestHandler name="/select" class="solr.SearchHandler">
+    <!-- default values for query parameters can be specified, these
+         will be overridden by parameters in the request
+      -->
+     <lst name="defaults">
+       <str name="echoParams">explicit</str>
+       <int name="rows">10</int>
+     </lst>
+
+    </requestHandler>
+
+  <!-- A request handler that returns indented JSON by default -->
+  <requestHandler name="/query" class="solr.SearchHandler">
+     <lst name="defaults">
+       <str name="echoParams">explicit</str>
+       <str name="wt">json</str>
+       <str name="indent">true</str>
+       <str name="df">text</str>
+     </lst>
+  </requestHandler>
+
+  <!--
+    The export request handler is used to export full sorted result sets.
+    Do not change these defaults.
+  -->
+  <requestHandler name="/export" class="solr.SearchHandler">
+    <lst name="invariants">
+      <str name="rq">{!xport}</str>
+      <str name="wt">xsort</str>
+      <str name="distrib">false</str>
+    </lst>
+
+    <arr name="components">
+      <str>query</str>
+    </arr>
+  </requestHandler>
+
+
+  <initParams path="/update/**,/query,/select,/tvrh,/elevate,/spell">
+    <lst name="defaults">
+      <str name="df">text</str>
+    </lst>
+  </initParams>
+
+  <!-- Field Analysis Request Handler
+
+       RequestHandler that provides much the same functionality as
+       analysis.jsp. Provides the ability to specify multiple field
+       types and field names in the same request and outputs
+       index-time and query-time analysis for each of them.
+
+       Request parameters are:
+       analysis.fieldname - field name whose analyzers are to be used
+
+       analysis.fieldtype - field type whose analyzers are to be used
+       analysis.fieldvalue - text for index-time analysis
+       q (or analysis.q) - text for query time analysis
+       analysis.showmatch (true|false) - When set to true and when
+           query analysis is performed, the produced tokens of the
+           field value analysis will be marked as "matched" for every
+           token that is produced by the query analysis
+   -->
+  <requestHandler name="/analysis/field"
+                  startup="lazy"
+                  class="solr.FieldAnalysisRequestHandler" />
+
+
+  <!-- Document Analysis Handler
+
+       http://wiki.apache.org/solr/AnalysisRequestHandler
+
+       An analysis handler that provides a breakdown of the analysis
+       process of provided documents. This handler expects a (single)
+       content stream with the following format:
+
+       <docs>
+         <doc>
+           <field name="id">1</field>
+           <field name="name">The Name</field>
+           <field name="text">The Text Value</field>
+         </doc>
+         <doc>...</doc>
+         <doc>...</doc>
+         ...
+       </docs>
+
+    Note: Each document must contain a field which serves as the
+    unique key. This key is used in the returned response to associate
+    an analysis breakdown to the analyzed document.
+
+    Like the FieldAnalysisRequestHandler, this handler also supports
+    query analysis by sending either an "analysis.query" or "q"
+    request parameter that holds the query text to be analyzed. It
+    also supports the "analysis.showmatch" parameter; when set to
+    true, all field tokens that match the query tokens will be marked
+    as a "match".
+  -->
+  <requestHandler name="/analysis/document"
+                  class="solr.DocumentAnalysisRequestHandler"
+                  startup="lazy" />
+
+  <!-- Echo the request contents back to the client -->
+  <requestHandler name="/debug/dump" class="solr.DumpRequestHandler" >
+    <lst name="defaults">
+     <str name="echoParams">explicit</str>
+     <str name="echoHandler">true</str>
+    </lst>
+  </requestHandler>
+
+
+
+  <!-- Search Components
+
+       Search components are registered to SolrCore and used by
+       instances of SearchHandler (which can access them by name)
+
+       By default, the following components are available:
+
+       <searchComponent name="query"     class="solr.QueryComponent" />
+       <searchComponent name="facet"     class="solr.FacetComponent" />
+       <searchComponent name="mlt"       class="solr.MoreLikeThisComponent" />
+       <searchComponent name="highlight" class="solr.HighlightComponent" />
+       <searchComponent name="stats"     class="solr.StatsComponent" />
+       <searchComponent name="debug"     class="solr.DebugComponent" />
+
+     -->
+
+  <!-- Terms Component
+
+       http://wiki.apache.org/solr/TermsComponent
+
+       A component to return terms and document frequency of those
+       terms
+    -->
+  <searchComponent name="terms" class="solr.TermsComponent"/>
+
+  <!-- A request handler for demonstrating the terms component -->
+  <requestHandler name="/terms" class="solr.SearchHandler" startup="lazy">
+     <lst name="defaults">
+      <bool name="terms">true</bool>
+      <bool name="distrib">false</bool>
+    </lst>
+    <arr name="components">
+      <str>terms</str>
+    </arr>
+  </requestHandler>
+
+  <!-- Legacy config for the admin interface -->
+  <admin>
+    <defaultQuery>*:*</defaultQuery>
+  </admin>
+
+</config>
diff --git a/inst/examples/updatecommands_add.json b/inst/examples/updatecommands_add.json
new file mode 100644
index 0000000..825930b
--- /dev/null
+++ b/inst/examples/updatecommands_add.json
@@ -0,0 +1,16 @@
+{
+  "add": {
+    "doc": {
+      "id" : "345",
+      "cat" : ["book","hardcover"],
+      "name" : "Cars and bikes",
+      "author" : "Hello world",
+      "series_t" : "A series of books",
+      "sequence_i" : 1,
+      "genre_s" : "science fiction",
+      "inStock" : true,
+      "price" : 12.75,
+      "pages_i" : 3
+    }
+  }
+}
diff --git a/inst/examples/updatecommands_add.xml b/inst/examples/updatecommands_add.xml
new file mode 100644
index 0000000..f041153
--- /dev/null
+++ b/inst/examples/updatecommands_add.xml
@@ -0,0 +1,13 @@
+<add>
+  <doc>
+    <field name="id">05991</field>
+    <field name="name">"Cars and bikes"</field>
+    <field name="author">"Hello world"</field>
+    <field name="series_t">"A series of books"</field>
+    <field name="sequence_i">1</field>
+    <field name="genre_s">"science fiction"</field>
+    <field name="inStock">true</field>
+    <field name="price">12.75</field>
+    <field name="pages_i">3</field>
+  </doc>
+</add>
diff --git a/inst/examples/updatecommands_delete.json b/inst/examples/updatecommands_delete.json
new file mode 100644
index 0000000..abee025
--- /dev/null
+++ b/inst/examples/updatecommands_delete.json
@@ -0,0 +1,3 @@
+{
+  "delete": { "id": "345" }
+}
diff --git a/inst/examples/updatecommands_delete.xml b/inst/examples/updatecommands_delete.xml
new file mode 100644
index 0000000..ea4421b
--- /dev/null
+++ b/inst/examples/updatecommands_delete.xml
@@ -0,0 +1 @@
+<delete><id>345</id></delete>
diff --git a/man/add.Rd b/man/add.Rd
new file mode 100644
index 0000000..acc0704
--- /dev/null
+++ b/man/add.Rd
@@ -0,0 +1,88 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/add.R
+\name{add}
+\alias{add}
+\title{Add documents from R objects}
+\usage{
+add(x, name, commit = TRUE, commit_within = NULL, overwrite = TRUE,
+  boost = NULL, wt = "json", raw = FALSE, ...)
+}
+\arguments{
+\item{x}{Documents, either as rows in a data.frame, or a list.}
+
+\item{name}{(character) A collection or core name. Required.}
+
+\item{commit}{(logical) If \code{TRUE}, documents are immediately searchable.
+Default: \code{TRUE}}
+
+\item{commit_within}{(numeric) Milliseconds within which to commit the change;
+the document will be added within that time. Default: NULL}
+
+\item{overwrite}{(logical) Overwrite documents with matching keys. 
+Default: \code{TRUE}}
+
+\item{boost}{(numeric) Boost factor. Default: NULL}
+
+\item{wt}{(character) One of json (default) or xml. If json, uses 
+\code{\link[jsonlite]{fromJSON}} to parse. If xml, uses \code{\link[xml2]{read_xml}} to 
+parse}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data in format specified by 
+\code{wt} param}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Add documents from R objects
+}
+\details{
+Works for Collections as well as Cores (in SolrCloud and Standalone 
+modes, respectively)
+}
+\examples{
+\dontrun{
+solr_connect()
+
+# create the books collection
+if (!collection_exists("books")) {
+  collection_create(name = "books", numShards = 2)
+}
+
+# Documents in a list
+ss <- list(list(id = 1, price = 100), list(id = 2, price = 500))
+add(ss, name = "books")
+
+# Documents in a data.frame
+## Simple example
+df <- data.frame(id = c(67, 68), price = c(1000, 500000000))
+add(x = df, "books")
+df <- data.frame(id = c(77, 78), price = c(1, 2.40))
+add(x = df, "books")
+
+## More complex example, get file from package examples
+# start Solr in Schemaless mode first: bin/solr start -e schemaless
+file <- system.file("examples", "books.csv", package = "solrium")
+x <- read.csv(file, stringsAsFactors = FALSE)
+class(x)
+head(x)
+if (!collection_exists("mybooks")) {
+  collection_create(name = "mybooks", numShards = 2)
+}
+add(x, "mybooks")
+
+# Use modifiers
+add(x, "mybooks", commit_within = 5000)
+
+# Get back XML instead of a list
+ss <- list(list(id = 1, price = 100), list(id = 2, price = 500))
+# parsed XML
+add(ss, name = "books", wt = "xml")
+# raw XML
+add(ss, name = "books", wt = "xml", raw = TRUE)
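+
+# A sketch of loading the example update command file shipped with the
+# package (see also update_json/update_xml in seealso); assumes update_json()
+# takes the file path via a 'files' argument and the collection via 'name'
+update_json(files = system.file("examples", "updatecommands_add.json",
+  package = "solrium"), name = "books")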
+}
+}
+\seealso{
+\code{\link{update_json}}, \code{\link{update_xml}}, 
+\code{\link{update_csv}} for adding documents from files
+}
+
diff --git a/man/collapse_pivot_names.Rd b/man/collapse_pivot_names.Rd
new file mode 100644
index 0000000..fa326d3
--- /dev/null
+++ b/man/collapse_pivot_names.Rd
@@ -0,0 +1,24 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/parsers.R
+\name{collapse_pivot_names}
+\alias{collapse_pivot_names}
+\title{Collapse Pivot Field and Value Columns}
+\usage{
+collapse_pivot_names(data)
+}
+\arguments{
+\item{data}{a \code{data.frame} with every 2 columns
+representing a field and value and the final column representing
+a count}
+}
+\value{
+a \code{data.frame}
+}
+\description{
+Convert a table consisting of columns in sets of 3
+into 2 columns assuming that the first column of every set of 3
+(field) is duplicated throughout all rows and should be removed.
+This type of structure is usually returned by facet.pivot responses.
+}
+\keyword{internal}
+
diff --git a/man/collectargs.Rd b/man/collectargs.Rd
new file mode 100644
index 0000000..1bd81b9
--- /dev/null
+++ b/man/collectargs.Rd
@@ -0,0 +1,15 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/zzz.r
+\name{collectargs}
+\alias{collectargs}
+\title{Function to make a list of args passing arg names through multiargs function.}
+\usage{
+collectargs(x)
+}
+\arguments{
+\item{x}{Value}
+}
+\description{
+Function to make a list of args passing arg names through multiargs function.
+}
+
diff --git a/man/collection_addreplica.Rd b/man/collection_addreplica.Rd
new file mode 100644
index 0000000..49c3a3d
--- /dev/null
+++ b/man/collection_addreplica.Rd
@@ -0,0 +1,66 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_addreplica.R
+\name{collection_addreplica}
+\alias{collection_addreplica}
+\title{Add a replica}
+\usage{
+collection_addreplica(name, shard = NULL, route = NULL, node = NULL,
+  instanceDir = NULL, dataDir = NULL, async = NULL, raw = FALSE,
+  callopts = list(), ...)
+}
+\arguments{
+\item{name}{(character) The name of the collection. Required}
+
+\item{shard}{(character) The name of the shard to which the replica is to be added.
+If \code{shard} is not given, then \code{route} must be.}
+
+\item{route}{(character) If the exact shard name is not known, users may pass
+the \code{route} value and the system would identify the name of the shard.
+Ignored if the \code{shard} param is also given}
+
+\item{node}{(character) The name of the node where the replica should be created}
+
+\item{instanceDir}{(character) The instanceDir for the core that will be created}
+
+\item{dataDir}{(character)    The directory in which the core should be created}
+
+\item{async}{(character) Request ID to track this action which will be processed
+asynchronously}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{callopts}{curl options passed on to \code{\link[httr]{GET}}}
+
+\item{...}{You can pass in parameters like \code{property.name=value}    to set
+core property name to value. See the section Defining core.properties for details on
+supported properties and values.
+(https://cwiki.apache.org/confluence/display/solr/Defining+core.properties)}
+}
+\description{
+Add a replica to a shard in a collection. The node name can be
+specified if the replica is to be created in a specific node
+}
+\examples{
+\dontrun{
+solr_connect()
+
+# create collection
+if (!collection_exists("foobar")) {
+  collection_create(name = "foobar", numShards = 2) # bin/solr create -c foobar
+}
+
+# status
+collection_clusterstatus()$cluster$collections$foobar
+
+# add replica
+if (!collection_exists("foobar")) {
+  collection_addreplica(name = "foobar", shard = "shard1")
+}
+
+# status again
+collection_clusterstatus()$cluster$collections$foobar
+collection_clusterstatus()$cluster$collections$foobar$shards
+collection_clusterstatus()$cluster$collections$foobar$shards$shard1
+}
+}
+
diff --git a/man/collection_addreplicaprop.Rd b/man/collection_addreplicaprop.Rd
new file mode 100644
index 0000000..a26a1c4
--- /dev/null
+++ b/man/collection_addreplicaprop.Rd
@@ -0,0 +1,56 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_addreplicaprop.R
+\name{collection_addreplicaprop}
+\alias{collection_addreplicaprop}
+\title{Add a replica property}
+\usage{
+collection_addreplicaprop(name, shard, replica, property, property.value,
+  shardUnique = FALSE, raw = FALSE, callopts = list())
+}
+\arguments{
+\item{name}{(character) Required. The name of the collection this replica belongs to.}
+
+\item{shard}{(character) Required. The name of the shard the replica belongs to.}
+
+\item{replica}{(character) Required. The replica, e.g. core_node1.}
+
+\item{property}{(character) Required. The property to add. Note: this will have the
+literal 'property.' prepended to distinguish it from system-maintained properties.
+So these two forms are equivalent: \code{property=special} and
+\code{property=property.special}}
+
+\item{property.value}{(character) Required. The value to assign to the property.}
+
+\item{shardUnique}{(logical) If \code{TRUE}, then setting this property in one
+replica will remove the property from all other replicas in that shard.
+Default: \code{FALSE}}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{callopts}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Assign an arbitrary property to a particular replica and give it
+the value specified. If the property already exists, it will be overwritten
+with the new value.
+}
+\examples{
+\dontrun{
+solr_connect()
+
+# create collection
+collection_create(name = "addrep", numShards = 2) # bin/solr create -c addrep
+
+# status
+collection_clusterstatus()$cluster$collections$addrep$shards
+
+# add the value world to the property hello
+collection_addreplicaprop(name = "addrep", shard = "shard1", replica = "core_node1",
+   property = "hello", property.value = "world")
+
+# check status
+collection_clusterstatus()$cluster$collections$addrep$shards
+collection_clusterstatus()$cluster$collections$addrep$shards$shard1$replicas$core_node1
+}
+}
+
diff --git a/man/collection_addrole.Rd b/man/collection_addrole.Rd
new file mode 100644
index 0000000..4f1d475
--- /dev/null
+++ b/man/collection_addrole.Rd
@@ -0,0 +1,38 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_addrole.R
+\name{collection_addrole}
+\alias{collection_addrole}
+\title{Add a role to a node}
+\usage{
+collection_addrole(role = "overseer", node, raw = FALSE, ...)
+}
+\arguments{
+\item{role}{(character) Required. The name of the role. The only supported role
+as of now is overseer (set as default).}
+
+\item{node}{(character) Required. The name of the node. It is possible to assign a
+role even before that node is started.}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Assign a role to a given node in the cluster. The only supported role
+as of 4.7 is 'overseer'. Use this API to dedicate a particular node as Overseer.
+Invoke it multiple times to add more nodes. This is useful in large clusters where
+an Overseer is likely to get overloaded. If available, one among the list of
+nodes which are assigned the 'overseer' role would become the overseer. The
+system would assign the role to any other node if none of the designated nodes
+are up and running.
+}
+\examples{
+\dontrun{
+solr_connect()
+
+# get list of nodes
+nodes <- collection_clusterstatus()$cluster$live_nodes
+collection_addrole(node = nodes[1])
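+
+# invoke again to designate additional overseer candidates (the description
+# above notes the API can be called multiple times); assumes > 1 live node
+if (length(nodes) > 1) collection_addrole(node = nodes[2])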
+}
+}
+
diff --git a/man/collection_balanceshardunique.Rd b/man/collection_balanceshardunique.Rd
new file mode 100644
index 0000000..b9d0fc1
--- /dev/null
+++ b/man/collection_balanceshardunique.Rd
@@ -0,0 +1,49 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_balanceshardunique.R
+\name{collection_balanceshardunique}
+\alias{collection_balanceshardunique}
+\title{Balance a property}
+\usage{
+collection_balanceshardunique(name, property, onlyactivenodes = TRUE,
+  shardUnique = NULL, raw = FALSE, ...)
+}
+\arguments{
+\item{name}{(character) Required. The name of the collection to balance the property in}
+
+\item{property}{(character) Required. The property to balance. The literal "property."
+is prepended to this property if not specified explicitly.}
+
+\item{onlyactivenodes}{(logical) Normally, the property is instantiated on active
+nodes only. If \code{FALSE}, then inactive nodes are also included for distribution.
+Default: \code{TRUE}}
+
+\item{shardUnique}{(logical) Something of a safety valve. There is one pre-defined
+property (preferredLeader) that defaults this value to \code{TRUE}. For all other
+properties that are balanced, this must be set to \code{TRUE} or an error message is
+returned}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Ensures that a particular property is distributed evenly amongst the
+physical nodes that make up a collection. If the property already exists on a replica,
+every effort is made to leave it there. If the property is not on any replica on a
+shard, one is chosen and the property is added.
+}
+\examples{
+\dontrun{
+solr_connect()
+
+# create collection
+collection_create(name = "mycollection") # bin/solr create -c mycollection
+
+# balance preferredLeader property
+collection_balanceshardunique("mycollection", property = "preferredLeader")
+
+# examine cluster status
+collection_clusterstatus()$cluster$collections$mycollection
+}
+}
+
diff --git a/man/collection_clusterprop.Rd b/man/collection_clusterprop.Rd
new file mode 100644
index 0000000..2cb2122
--- /dev/null
+++ b/man/collection_clusterprop.Rd
@@ -0,0 +1,42 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_clusterprop.R
+\name{collection_clusterprop}
+\alias{collection_clusterprop}
+\title{Add, edit, delete a cluster-wide property}
+\usage{
+collection_clusterprop(name, val, raw = FALSE, callopts = list())
+}
+\arguments{
+\item{name}{(character) Required. The name of the property. The two supported
+property names are urlScheme and autoAddReplicas. Other names are rejected
+with an error}
+
+\item{val}{(character) Required. The value of the property. If the value is
+empty or null, the property is unset.}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{callopts}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Important: whether add, edit, or delete is used is determined by
+the value passed to the \code{val} parameter. If the property name is
+new, it will be added. If the property name exists, and the value is different,
+it will be edited. If the property name exists, and the value is NULL or empty
+the property is deleted (unset).
+}
+\examples{
+\dontrun{
+solr_connect()
+
+# add the value https to the property urlScheme
+collection_clusterprop(name = "urlScheme", val = "https")
+
+# status again
+collection_clusterstatus()$cluster$properties
+
+# delete the property urlScheme by setting val to NULL or a 0 length string
+collection_clusterprop(name = "urlScheme", val = "")
+}
+}
+
diff --git a/man/collection_clusterstatus.Rd b/man/collection_clusterstatus.Rd
new file mode 100644
index 0000000..bb86d8a
--- /dev/null
+++ b/man/collection_clusterstatus.Rd
@@ -0,0 +1,36 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_clusterstatus.R
+\name{collection_clusterstatus}
+\alias{collection_clusterstatus}
+\title{Get cluster status}
+\usage{
+collection_clusterstatus(name = NULL, shard = NULL, raw = FALSE, ...)
+}
+\arguments{
+\item{name}{(character) The collection name for which information is requested.
+If omitted, information on all collections in the cluster will be returned.}
+
+\item{shard}{(character) The shard(s) for which information is requested. Multiple
+shard names can be specified as a character vector.}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Fetch the cluster status including collections, shards, replicas,
+configuration name as well as collection aliases and cluster properties.
+}
+\examples{
+\dontrun{
+solr_connect()
+collection_clusterstatus()
+res <- collection_clusterstatus()
+res$responseHeader
+res$cluster
+res$cluster$collections
+res$cluster$collections$gettingstarted
+res$cluster$live_nodes
+}
+}
+
diff --git a/man/collection_create.Rd b/man/collection_create.Rd
new file mode 100644
index 0000000..bd0c14c
--- /dev/null
+++ b/man/collection_create.Rd
@@ -0,0 +1,106 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_create.R
+\name{collection_create}
+\alias{collection_create}
+\title{Add a collection}
+\usage{
+collection_create(name, numShards = 2, maxShardsPerNode = 1,
+  createNodeSet = NULL, collection.configName = NULL,
+  replicationFactor = 1, router.name = NULL, shards = NULL,
+  createNodeSet.shuffle = TRUE, router.field = NULL,
+  autoAddReplicas = FALSE, async = NULL, raw = FALSE, callopts = list(),
+  ...)
+}
+\arguments{
+\item{name}{The name of the collection to be created. Required}
+
+\item{numShards}{(integer) The number of shards to be created as part of the
+collection. This is a required parameter when using the 'compositeId' router.}
+
+\item{maxShardsPerNode}{(integer) When creating collections, the shards and/or replicas
+are spread across all available (i.e., live) nodes, and two replicas of the same shard
+will never be on the same node. If a node is not live when the CREATE operation is called,
+it will not get any parts of the new collection, which could lead to too many replicas
+being created on a single live node. Defining maxShardsPerNode sets a limit on the number
+of replicas CREATE will spread to each node. If the entire collection can not be fit into
+the live nodes, no collection will be created at all. Default: 1}
+
+\item{createNodeSet}{(character) Allows defining the nodes to spread the new collection
+across. If not provided, the CREATE operation will create shard-replica spread across all
+live Solr nodes. The format is a comma-separated list of node_names, such as
+localhost:8983_solr, localhost:8984_solr, localhost:8985_solr. Default: \code{NULL}}
+
+\item{collection.configName}{(character) Defines the name of the configurations (which
+must already be stored in ZooKeeper) to use for this collection. If not provided, Solr
+will default to the collection name as the configuration name. Default: \code{NULL}}
+
+\item{replicationFactor}{(integer) The number of replicas to be created for each shard.
+Default: 1}
+
+\item{router.name}{(character) The router name that will be used. The router defines
+how documents will be distributed among the shards. The value can be either \code{implicit},
+which uses an internal default hash, or \code{compositeId}, which allows defining the specific
+shard to assign documents to. When using the 'implicit' router, the shards parameter is
+required. When using the 'compositeId' router, the numShards parameter is required.
+For more information, see also the section Document Routing. Default: \code{compositeId}}
+
+\item{shards}{(character) A comma separated list of shard names, e.g.,
+shard-x,shard-y,shard-z . This is a required parameter when using the 'implicit' router.}
+
+\item{createNodeSet.shuffle}{(logical)    Controls whether or not the shard-replicas created
+for this collection will be assigned to the nodes specified by the createNodeSet in a
+sequential manner, or if the list of nodes should be shuffled prior to creating individual
+replicas.  A 'false' value makes the results of a collection creation predictable and
+gives more exact control over the location of the individual shard-replicas, but 'true'
+can be a better choice for ensuring replicas are distributed evenly across nodes. Ignored
+if createNodeSet is not also specified. Default: \code{TRUE}}
+
+\item{router.field}{(character) If this field is specified, the router will look at the
+value of the field in an input document to compute the hash and identify a shard instead of
+looking at the uniqueKey field. If the field specified is null in the document, the document
+will be rejected. Please note that RealTime Get or retrieval by id would also require the
+parameter _route_ (or shard.keys) to avoid a distributed search.}
+
+\item{autoAddReplicas}{(logical)    When set to true, enables auto addition of replicas on
+shared file systems. See the section autoAddReplicas Settings for more details on settings
+and overrides. Default: \code{FALSE}}
+
+\item{async}{(character) Request ID to track this action which will be processed
+asynchronously}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{callopts}{curl options passed on to \code{\link[httr]{GET}}}
+
+\item{...}{You can pass in parameters like \code{property.name=value}    to set
+core property name to value. See the section Defining core.properties for details on
+supported properties and values.
+(https://cwiki.apache.org/confluence/display/solr/Defining+core.properties)}
+}
+\description{
+Add a collection
+}
+\examples{
+\dontrun{
+solr_connect()
+
+if (!collection_exists("foobar")) {
+  collection_delete(name = "helloWorld")
+  collection_create(name = "helloWorld", numShards = 2)
+}
+if (!collection_exists("foobar")) {
+  collection_delete(name = "tablesChairs")
+  collection_create(name = "tablesChairs")
+}
+
+# you may have to do this if you don't want to use 
+# bin/solr or use zookeeper directly
+path <- "~/solr-5.4.1/server/solr/newcore/conf"
+dir.create(path, recursive = TRUE)
+files <- list.files("~/solr-5.4.1/server/solr/configsets/data_driven_schema_configs/conf/",
+full.names = TRUE)
+invisible(file.copy(files, path, recursive = TRUE))
+collection_create(name = "newcore", collection.configName = "newcore")
+}
+}
+
diff --git a/man/collection_createalias.Rd b/man/collection_createalias.Rd
new file mode 100644
index 0000000..c586352
--- /dev/null
+++ b/man/collection_createalias.Rd
@@ -0,0 +1,31 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_createalias.R
+\name{collection_createalias}
+\alias{collection_createalias}
+\title{Create an alias for a collection}
+\usage{
+collection_createalias(alias, collections, raw = FALSE, ...)
+}
+\arguments{
+\item{alias}{(character) Required. The alias name to be created}
+
+\item{collections}{(character) Required. A character vector of collections to be aliased}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Create a new alias pointing to one or more collections. If an
+alias by the same name already exists, this action will replace the existing
+alias, effectively acting like an atomic "MOVE" command.
+}
+\examples{
+\dontrun{
+solr_connect()
+collection_create(name = "thingsstuff", numShards = 2)
+collection_createalias("tstuff", "thingsstuff")
+collection_clusterstatus()$cluster$collections$thingsstuff$aliases # new alias
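+
+# an alias can also point at more than one collection by passing a
+# character vector (assumes a second collection, e.g. "gettingstarted",
+# already exists)
+collection_createalias("tstuff2", c("thingsstuff", "gettingstarted"))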
+}
+}
+
diff --git a/man/collection_createshard.Rd b/man/collection_createshard.Rd
new file mode 100644
index 0000000..9ed74ac
--- /dev/null
+++ b/man/collection_createshard.Rd
@@ -0,0 +1,35 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_createshard.R
+\name{collection_createshard}
+\alias{collection_createshard}
+\title{Create a shard}
+\usage{
+collection_createshard(name, shard, createNodeSet = NULL, raw = FALSE, ...)
+}
+\arguments{
+\item{name}{(character) Required. The name of the collection in which the new
+shard will be created.}
+
+\item{shard}{(character) Required. The name of the shard to be created.}
+
+\item{createNodeSet}{(character) Allows defining the nodes to spread the new
+collection across. If not provided, the CREATE operation will create shard-replica
+spread across all live Solr nodes. The format is a comma-separated list of
+node_names, such as localhost:8983_solr, localhost:8984_solr, localhost:8985_solr.}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Create a shard
+}
+\examples{
+\dontrun{
+solr_connect()
+## FIXME - doesn't work right now
+# collection_create(name = "trees")
+# collection_createshard(name = "trees", shard = "newshard")
+}
+}
+
diff --git a/man/collection_delete.Rd b/man/collection_delete.Rd
new file mode 100644
index 0000000..9a1000f
--- /dev/null
+++ b/man/collection_delete.Rd
@@ -0,0 +1,26 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_delete.R
+\name{collection_delete}
+\alias{collection_delete}
+\title{Delete a collection}
+\usage{
+collection_delete(name, raw = FALSE, ...)
+}
+\arguments{
+\item{name}{The name of the collection to be deleted. Required}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Delete a collection
+}
+\examples{
+\dontrun{
+solr_connect()
+collection_create(name = "helloWorld")
+collection_delete(name = "helloWorld")
+}
+}
+
diff --git a/man/collection_deletealias.Rd b/man/collection_deletealias.Rd
new file mode 100644
index 0000000..1fbf925
--- /dev/null
+++ b/man/collection_deletealias.Rd
@@ -0,0 +1,29 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_deletealias.R
+\name{collection_deletealias}
+\alias{collection_deletealias}
+\title{Delete a collection alias}
+\usage{
+collection_deletealias(alias, raw = FALSE, ...)
+}
+\arguments{
+\item{alias}{(character) Required. The alias name to be deleted}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Delete a collection alias
+}
+\examples{
+\dontrun{
+solr_connect()
+collection_create(name = "thingsstuff", numShards = 2)
+collection_createalias("tstuff", "thingsstuff")
+collection_clusterstatus()$cluster$collections$thingsstuff$aliases # new alias
+collection_deletealias("tstuff")
+collection_clusterstatus()$cluster$collections$thingsstuff$aliases # gone
+}
+}
+
diff --git a/man/collection_deletereplica.Rd b/man/collection_deletereplica.Rd
new file mode 100644
index 0000000..03198ac
--- /dev/null
+++ b/man/collection_deletereplica.Rd
@@ -0,0 +1,59 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_deletereplica.R
+\name{collection_deletereplica}
+\alias{collection_deletereplica}
+\title{Delete a replica}
+\usage{
+collection_deletereplica(name, shard = NULL, replica = NULL,
+  onlyIfDown = FALSE, raw = FALSE, callopts = list(), ...)
+}
+\arguments{
+\item{name}{(character) Required. The name of the collection.}
+
+\item{shard}{(character) Required. The name of the shard that includes the replica to
+be removed.}
+
+\item{replica}{(character) Required. The name of the replica to remove.}
+
+\item{onlyIfDown}{(logical) When \code{TRUE} will not take any action if the replica
+is active. Default: \code{FALSE}}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{callopts}{curl options passed on to \code{\link[httr]{GET}}}
+
+\item{...}{You can pass in parameters like \code{property.name=value}    to set
+core property name to value. See the section Defining core.properties for details on
+supported properties and values.
+(https://cwiki.apache.org/confluence/display/solr/Defining+core.properties)}
+}
+\description{
+Delete a replica from a given collection and shard. If the
+corresponding core is up and running the core is unloaded and the entry is
+removed from the clusterstate. If the node/core is down, the entry is taken
+off the clusterstate and if the core comes up later it is automatically
+unregistered.
+}
+\examples{
+\dontrun{
+solr_connect()
+
+# create collection
+collection_create(name = "foobar2", numShards = 2) # bin/solr create -c foobar2
+
+# status
+collection_clusterstatus()$cluster$collections$foobar2$shards$shard1
+
+# add replica
+collection_addreplica(name = "foobar2", shard = "shard1")
+
+# delete replica
+## get replica name
+nms <- names(collection_clusterstatus()$cluster$collections$foobar2$shards$shard1$replicas)
+collection_deletereplica(name = "foobar2", shard = "shard1", replica = nms[1])
+
+# status again
+collection_clusterstatus()$cluster$collections$foobar2$shards$shard1
+}
+}
+
diff --git a/man/collection_deletereplicaprop.Rd b/man/collection_deletereplicaprop.Rd
new file mode 100644
index 0000000..d1a9666
--- /dev/null
+++ b/man/collection_deletereplicaprop.Rd
@@ -0,0 +1,55 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_deletereplicaprop.R
+\name{collection_deletereplicaprop}
+\alias{collection_deletereplicaprop}
+\title{Delete a replica property}
+\usage{
+collection_deletereplicaprop(name, shard, replica, property, raw = FALSE,
+  callopts = list())
+}
+\arguments{
+\item{name}{(character) Required. The name of the collection this replica belongs to.}
+
+\item{shard}{(character) Required. The name of the shard the replica belongs to.}
+
+\item{replica}{(character) Required. The replica, e.g. core_node1.}
+
+\item{property}{(character) Required. The property to delete. Note: this will have the
+literal 'property.' prepended to distinguish it from system-maintained properties.
+So these two forms are equivalent: \code{property=special} and
+\code{property=property.special}}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{callopts}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Deletes an arbitrary property from a particular replica.
+}
+\examples{
+\dontrun{
+solr_connect()
+
+# create collection
+collection_create(name = "deleterep", numShards = 2) # bin/solr create -c deleterep
+
+# status
+collection_clusterstatus()$cluster$collections$deleterep$shards
+
+# add the value bar to the property foo
+collection_addreplicaprop(name = "deleterep", shard = "shard1", replica = "core_node1",
+   property = "foo", property.value = "bar")
+
+# check status
+collection_clusterstatus()$cluster$collections$deleterep$shards
+collection_clusterstatus()$cluster$collections$deleterep$shards$shard1$replicas$core_node1
+
+# delete replica property
+collection_deletereplicaprop(name = "deleterep", shard = "shard1",
+   replica = "core_node1", property = "foo")
+
+# check status - foo should be gone
+collection_clusterstatus()$cluster$collections$deleterep$shards$shard1$replicas$core_node1
+}
+}
+
diff --git a/man/collection_deleteshard.Rd b/man/collection_deleteshard.Rd
new file mode 100644
index 0000000..e9fe3da
--- /dev/null
+++ b/man/collection_deleteshard.Rd
@@ -0,0 +1,41 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_deleteshard.R
+\name{collection_deleteshard}
+\alias{collection_deleteshard}
+\title{Delete a shard}
+\usage{
+collection_deleteshard(name, shard, raw = FALSE, ...)
+}
+\arguments{
+\item{name}{(character) Required. The name of the collection that includes the shard
+to be deleted}
+
+\item{shard}{(character) Required. The name of the shard to be deleted}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Deleting a shard will unload all replicas of the shard and remove
+them from clusterstate.json. It will only remove shards that are inactive, or
+which have no range given for custom sharding.
+}
+\examples{
+\dontrun{
+solr_connect()
+# create collection
+# collection_create(name = "buffalo") # bin/solr create -c buffalo
+
+# find shard names
+names(collection_clusterstatus()$cluster$collections$buffalo$shards)
+# split a shard by name
+collection_splitshard(name = "buffalo", shard = "shard1")
+# now we have three shards
+names(collection_clusterstatus()$cluster$collections$buffalo$shards)
+
+# delete shard
+collection_deleteshard(name = "buffalo", shard = "shard1_1")
+}
+}
+
diff --git a/man/collection_exists.Rd b/man/collection_exists.Rd
new file mode 100644
index 0000000..8bf9682
--- /dev/null
+++ b/man/collection_exists.Rd
@@ -0,0 +1,39 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_exists.R
+\name{collection_exists}
+\alias{collection_exists}
+\title{Check if a collection exists}
+\usage{
+collection_exists(name, ...)
+}
+\arguments{
+\item{name}{(character) The name of the collection to check}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\value{
+A single boolean, \code{TRUE} or \code{FALSE}
+}
+\description{
+Check if a collection exists
+}
+\details{
+Simply calls \code{\link{collection_list}} internally
+}
+\examples{
+\dontrun{
+# start Solr in Cloud mode via the schemaless example: bin/solr -e cloud
+# you can create a new core like: bin/solr create -c <corename>
+# where <corename> is the name for your core - or create as below
+
+# connect
+solr_connect()
+
+# exists
+collection_exists("gettingstarted")
+
+# doesn't exist
+collection_exists("hhhhhh")
+}
+}
+
diff --git a/man/collection_list.Rd b/man/collection_list.Rd
new file mode 100644
index 0000000..0a270f4
--- /dev/null
+++ b/man/collection_list.Rd
@@ -0,0 +1,24 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_list.R
+\name{collection_list}
+\alias{collection_list}
+\title{List collections}
+\usage{
+collection_list(raw = FALSE, ...)
+}
+\arguments{
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+List collections
+}
+\examples{
+\dontrun{
+solr_connect()
+collection_list()
+collection_list()$collections
+}
+}
+
diff --git a/man/collection_migrate.Rd b/man/collection_migrate.Rd
new file mode 100644
index 0000000..a13e3c3
--- /dev/null
+++ b/man/collection_migrate.Rd
@@ -0,0 +1,54 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_migrate.R
+\name{collection_migrate}
+\alias{collection_migrate}
+\title{Migrate documents to another collection}
+\usage{
+collection_migrate(name, target.collection, split.key, forward.timeout = NULL,
+  async = NULL, raw = FALSE, ...)
+}
+\arguments{
+\item{name}{(character) Required. The name of the source collection from which
+documents will be split}
+
+\item{target.collection}{(character) Required. The name of the target collection
+to which documents will be migrated}
+
+\item{split.key}{(character) Required. The routing key prefix. For example, if
+uniqueKey is a!123, then you would use split.key=a!}
+
+\item{forward.timeout}{(integer) The timeout (seconds), until which write requests
+made to the source collection for the given \code{split.key} will be forwarded to the
+target shard. Default: 60}
+
+\item{async}{(character) Request ID to track this action which will be processed
+asynchronously}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Migrate documents to another collection
+}
+\examples{
+\dontrun{
+solr_connect()
+
+# create collection
+collection_create(name = "migrate_from") # bin/solr create -c migrate_from
+
+# create another collection
+collection_create(name = "migrate_to") # bin/solr create -c migrate_to
+
+# add some documents
+file <- system.file("examples", "books.csv", package = "solr")
+x <- read.csv(file, stringsAsFactors = FALSE)
+add(x, "migrate_from")
+
+# migrate some documents from one collection to the other
+## FIXME - not sure if this is actually working....
+collection_migrate("migrate_from", "migrate_to", split.key = "05535")
+}
+}
+
diff --git a/man/collection_overseerstatus.Rd b/man/collection_overseerstatus.Rd
new file mode 100644
index 0000000..6aff27b
--- /dev/null
+++ b/man/collection_overseerstatus.Rd
@@ -0,0 +1,34 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_overseerstatus.R
+\name{collection_overseerstatus}
+\alias{collection_overseerstatus}
+\title{Get overseer status}
+\usage{
+collection_overseerstatus(raw = FALSE, ...)
+}
+\arguments{
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Returns the current status of the overseer, performance statistics
+of various overseer APIs as well as last 10 failures per operation type.
+}
+\examples{
+\dontrun{
+solr_connect()
+collection_overseerstatus()
+res <- collection_overseerstatus()
+res$responseHeader
+res$leader
+res$overseer_queue_size
+res$overseer_work_queue_size
+res$overseer_operations
+res$collection_operations
+res$overseer_queue
+res$overseer_internal_queue
+res$collection_queue
+}
+}
+
diff --git a/man/collection_rebalanceleaders.Rd b/man/collection_rebalanceleaders.Rd
new file mode 100644
index 0000000..7eefdfd
--- /dev/null
+++ b/man/collection_rebalanceleaders.Rd
@@ -0,0 +1,49 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_rebalanceleaders.R
+\name{collection_rebalanceleaders}
+\alias{collection_rebalanceleaders}
+\title{Rebalance leaders}
+\usage{
+collection_rebalanceleaders(name, maxAtOnce = NULL, maxWaitSeconds = NULL,
+  raw = FALSE, ...)
+}
+\arguments{
+\item{name}{(character) Required. The name of the collection to rebalance preferredLeaders on.}
+
+\item{maxAtOnce}{(integer) The maximum number of reassignments to have queue up at once.
+Values <= 0 use the default value Integer.MAX_VALUE. When this number is reached, the
+process waits for one or more leaders to be successfully assigned before adding more
+to the queue.}
+
+\item{maxWaitSeconds}{(integer) Timeout value when waiting for leaders to be reassigned.
+NOTE: if maxAtOnce is less than the number of reassignments that will take place,
+this is the maximum interval for any single wait for at least one reassignment.
+For example, if 10 reassignments are to take place and maxAtOnce is 1 and maxWaitSeconds
+is 60, the upper bound on the time that the command may wait is 10 minutes. Default: 60}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Reassign leaders in a collection according to the preferredLeader
+property across active nodes
+}
+\examples{
+\dontrun{
+solr_connect()
+
+# create collection
+collection_create(name = "mycollection2") # bin/solr create -c mycollection2
+
+# balance preferredLeader property
+collection_balanceshardunique("mycollection2", property = "preferredLeader")
+
+# balance preferredLeader property
+collection_rebalanceleaders("mycollection2")
+
+# examine cluster status
+collection_clusterstatus()$cluster$collections$mycollection2
+}
+}
+
diff --git a/man/collection_reload.Rd b/man/collection_reload.Rd
new file mode 100644
index 0000000..c624173
--- /dev/null
+++ b/man/collection_reload.Rd
@@ -0,0 +1,26 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_reload.R
+\name{collection_reload}
+\alias{collection_reload}
+\title{Reload a collection}
+\usage{
+collection_reload(name, raw = FALSE, ...)
+}
+\arguments{
+\item{name}{The name of the collection to reload. Required}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Reload a collection
+}
+\examples{
+\dontrun{
+solr_connect()
+collection_create(name = "helloWorld")
+collection_reload(name = "helloWorld")
+}
+}
+
diff --git a/man/collection_removerole.Rd b/man/collection_removerole.Rd
new file mode 100644
index 0000000..3917d21
--- /dev/null
+++ b/man/collection_removerole.Rd
@@ -0,0 +1,33 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_removerole.R
+\name{collection_removerole}
+\alias{collection_removerole}
+\title{Remove a role from a node}
+\usage{
+collection_removerole(role = "overseer", node, raw = FALSE, ...)
+}
+\arguments{
+\item{role}{(character) Required. The name of the role. The only supported role
+as of now is overseer (set as default).}
+
+\item{node}{(character) Required. The name of the node.}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Remove an assigned role. This API is used to undo the roles
+assigned using \code{\link{collection_addrole}}
+}
+\examples{
+\dontrun{
+solr_connect()
+
+# get list of nodes
+nodes <- collection_clusterstatus()$cluster$live_nodes
+collection_addrole(node = nodes[1])
+collection_removerole(node = nodes[1])
+}
+}
+
diff --git a/man/collection_requeststatus.Rd b/man/collection_requeststatus.Rd
new file mode 100644
index 0000000..94e46c6
--- /dev/null
+++ b/man/collection_requeststatus.Rd
@@ -0,0 +1,36 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_requeststatus.R
+\name{collection_requeststatus}
+\alias{collection_requeststatus}
+\title{Get request status}
+\usage{
+collection_requeststatus(requestid, raw = FALSE, ...)
+}
+\arguments{
+\item{requestid}{(character) Required. The user defined request-id for the request.
+This can be used to track the status of the submitted asynchronous task. \code{-1}
+is a special request id which is used to clean up the stored states for all of the
+already completed/failed tasks.}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Request the status of an already submitted Asynchronous Collection
+API call. This call is also used to clear up the stored statuses.
+}
+\examples{
+\dontrun{
+solr_connect()
+
+# invalid requestid
+collection_requeststatus(requestid = "xxx")
+
+# valid requestid
+res <- collection_requeststatus(requestid = "xxx")
+res$responseHeader
+res$xxx
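+
+# "-1" is the documented special request id that clears the stored states for
+# already completed/failed tasks
+collection_requeststatus(requestid = "-1")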
+}
+}
+
diff --git a/man/collection_splitshard.Rd b/man/collection_splitshard.Rd
new file mode 100644
index 0000000..e761487
--- /dev/null
+++ b/man/collection_splitshard.Rd
@@ -0,0 +1,44 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collection_splitshard.R
+\name{collection_splitshard}
+\alias{collection_splitshard}
+\title{Split a shard}
+\usage{
+collection_splitshard(name, shard, ranges = NULL, split.key = NULL,
+  async = NULL, raw = FALSE, ...)
+}
+\arguments{
+\item{name}{(character) Required. The name of the collection that includes the shard
+to be split}
+
+\item{shard}{(character) Required. The name of the shard to be split}
+
+\item{ranges}{(character) A comma-separated list of hash ranges in hexadecimal
+e.g. ranges=0-1f4,1f5-3e8,3e9-5dc}
+
+\item{split.key}{(character) The key to use for splitting the index}
+
+\item{async}{(character) Request ID to track this action which will be processed
+asynchronously}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Split a shard
+}
+\examples{
+\dontrun{
+solr_connect()
+# create collection
+collection_create(name = "trees")
+# find shard names
+names(collection_clusterstatus()$cluster$collections$trees$shards)
+# split a shard by name
+collection_splitshard(name = "trees", shard = "shard1")
+# now we have three shards
+names(collection_clusterstatus()$cluster$collections$trees$shards)
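+
+# illustrative (not run above): split by explicit hash ranges or by a routing key,
+# via the ranges and split.key parameters documented above
+# collection_splitshard(name = "trees", shard = "shard1", ranges = "0-1f4,1f5-3e8")
+# collection_splitshard(name = "trees", split.key = "a!")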
+}
+}
+
diff --git a/man/collections.Rd b/man/collections.Rd
new file mode 100644
index 0000000..fcc8135
--- /dev/null
+++ b/man/collections.Rd
@@ -0,0 +1,41 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/collections.R
+\name{collections}
+\alias{collections}
+\alias{cores}
+\title{List collections or cores}
+\usage{
+collections(...)
+
+cores(...)
+}
+\arguments{
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\value{
+A character vector
+}
+\description{
+List collections or cores
+}
+\details{
+Calls \code{\link{collection_list}} or \code{\link{core_status}} internally, 
+and parses out names for you.
+}
+\examples{
+\dontrun{
+# connect
+solr_connect(verbose = FALSE)
+
+# list collections
+collections()
+
+# list cores
+cores()
+
+# curl options
+library("httr")
+collections(config = verbose())
+}
+}
+
diff --git a/man/commit.Rd b/man/commit.Rd
new file mode 100644
index 0000000..f7f619d
--- /dev/null
+++ b/man/commit.Rd
@@ -0,0 +1,47 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/commit.R
+\name{commit}
+\alias{commit}
+\title{Commit}
+\usage{
+commit(name, expunge_deletes = FALSE, wait_searcher = TRUE,
+  soft_commit = FALSE, wt = "json", raw = FALSE, ...)
+}
+\arguments{
+\item{name}{(character) A collection or core name. Required.}
+
+\item{expunge_deletes}{merge segments with deletes away. Default: \code{FALSE}}
+
+\item{wait_searcher}{block until a new searcher is opened and registered as the
+main query searcher, making the changes visible. Default: \code{TRUE}}
+
+\item{soft_commit}{perform a soft commit - this will refresh the 'view' of the
+index in a more performant manner, but without "on-disk" guarantees.
+Default: \code{FALSE}}
+
+\item{wt}{(character) One of json (default) or xml. If json, uses
+\code{\link[jsonlite]{fromJSON}} to parse. If xml, uses \code{\link[xml2]{read_xml}} to
+parse}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data in format specified by
+\code{wt} param}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Commit
+}
+\examples{
+\dontrun{
+solr_connect()
+
+commit("gettingstarted")
+commit("gettingstarted", wait_searcher = FALSE)
+
+# get xml back
+commit("gettingstarted", wt = "xml")
+## raw xml
+commit("gettingstarted", wt = "xml", raw = TRUE)
+}
+}
+
diff --git a/man/config_get.Rd b/man/config_get.Rd
new file mode 100644
index 0000000..27ce7f6
--- /dev/null
+++ b/man/config_get.Rd
@@ -0,0 +1,67 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/config_get.R
+\name{config_get}
+\alias{config_get}
+\title{Get Solr configuration details}
+\usage{
+config_get(name, what = NULL, wt = "json", raw = FALSE, ...)
+}
+\arguments{
+\item{name}{(character) The name of the core. If not given, all cores.}
+
+\item{what}{(character) What you want to look at. One of solrconfig or
+schema. Default: solrconfig}
+
+\item{wt}{(character) One of json (default) or xml. Data type returned.
+If json, uses \code{\link[jsonlite]{fromJSON}} to parse. If xml, uses
+\code{\link[xml2]{read_xml}} to parse.}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data in format specified by
+\code{wt}}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\value{
+A list, \code{xml_document}, or character
+}
+\description{
+Get Solr configuration details
+}
+\details{
+Note that if \code{raw=TRUE}, \code{what} is ignored. That is,
+you get all the data when \code{raw=TRUE}.
+}
+\examples{
+\dontrun{
+# start Solr with Cloud mode via the schemaless eg: bin/solr -e cloud
+# you can create a new core like: bin/solr create -c corename
+# where <corename> is the name for your core - or create as below
+
+# connect
+solr_connect()
+
+# all config settings
+config_get("gettingstarted")
+
+# just znodeVersion
+config_get("gettingstarted", "znodeVersion")
+
+# just luceneMatchVersion
+config_get("gettingstarted", "luceneMatchVersion")
+
+# just updateHandler
+config_get("gettingstarted", "updateHandler")
+
+# just requestHandler
+config_get("gettingstarted", "requestHandler")
+
+## Get XML
+config_get("gettingstarted", wt = "xml")
+config_get("gettingstarted", "updateHandler", wt = "xml")
+config_get("gettingstarted", "requestHandler", wt = "xml")
+
+## Raw data - what param ignored when raw=TRUE
+config_get("gettingstarted", raw = TRUE)
+}
+}
+
diff --git a/man/config_overlay.Rd b/man/config_overlay.Rd
new file mode 100644
index 0000000..f98efc1
--- /dev/null
+++ b/man/config_overlay.Rd
@@ -0,0 +1,38 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/config_overlay.R
+\name{config_overlay}
+\alias{config_overlay}
+\title{Get Solr configuration overlay}
+\usage{
+config_overlay(name, omitHeader = FALSE, ...)
+}
+\arguments{
+\item{name}{(character) The name of the core. If not given, all cores.}
+
+\item{omitHeader}{(logical) If \code{TRUE}, omit header. Default: \code{FALSE}}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\value{
+A list with response from server
+}
+\description{
+Get Solr configuration overlay
+}
+\examples{
+\dontrun{
+# start Solr with Cloud mode via the schemaless eg: bin/solr -e cloud
+# you can create a new core like: bin/solr create -c corename
+# where <corename> is the name for your core - or create as below
+
+# connect
+solr_connect()
+
+# get config overlay
+config_overlay("gettingstarted")
+
+# without header
+config_overlay("gettingstarted", omitHeader = TRUE)
+}
+}
+
diff --git a/man/config_params.Rd b/man/config_params.Rd
new file mode 100644
index 0000000..b63e481
--- /dev/null
+++ b/man/config_params.Rd
@@ -0,0 +1,61 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/config_params.R
+\name{config_params}
+\alias{config_params}
+\title{Set Solr configuration params}
+\usage{
+config_params(name, param = NULL, set = NULL, unset = NULL,
+  update = NULL, ...)
+}
+\arguments{
+\item{name}{(character) The name of the core. If not given, all cores.}
+
+\item{param}{(character) Name of a parameter}
+
+\item{set}{(list) List of key:value pairs of what to set. Create or overwrite 
+a parameter set map. Default: NULL (nothing passed)}
+
+\item{unset}{(list) One or more character strings of keys to unset. Default: NULL 
+(nothing passed)}
+
+\item{update}{(list) List of key:value pairs of what to update. Updates a parameter 
+set map. This essentially overwrites the old parameter set, so all parameters must 
+be sent in each update request.}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\value{
+A list with response from server
+}
+\description{
+Set Solr configuration params
+}
+\details{
+The Request Parameters API allows creating parameter sets that can 
+override or take the place of parameters defined in solrconfig.xml. It is 
+really another endpoint of the Config API instead of a separate API, and 
+has distinct commands. It does not replace or modify any sections of 
+solrconfig.xml, but instead provides another approach to handling parameters 
+used in requests. It behaves in the same way as the Config API, by storing 
+parameters in another file that will be used at runtime. In this case, 
+the parameters are stored in a file named params.json. This file is kept in 
+ZooKeeper or in the conf directory of a standalone Solr instance.
+}
+\examples{
+\dontrun{
+# start Solr in standard or Cloud mode
+# connect
+solr_connect()
+
+# set a parameter set
+myFacets <- list(myFacets = list(facet = TRUE, facet.limit = 5))
+config_params("gettingstarted", set = myFacets)
+
+# check a parameter
+config_params("gettingstarted", param = "myFacets")
+
+# see all params
+config_params("gettingstarted")
+}
+}
+
diff --git a/man/config_set.Rd b/man/config_set.Rd
new file mode 100644
index 0000000..e22881d
--- /dev/null
+++ b/man/config_set.Rd
@@ -0,0 +1,52 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/config_set.R
+\name{config_set}
+\alias{config_set}
+\title{Set Solr configuration details}
+\usage{
+config_set(name, set = NULL, unset = NULL, ...)
+}
+\arguments{
+\item{name}{(character) The name of the core. If not given, all cores.}
+
+\item{set}{(list) List of key:value pairs of what to set. Default: NULL 
+(nothing passed)}
+
+\item{unset}{(list) One or more character strings of keys to unset. Default: NULL 
+(nothing passed)}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\value{
+A list with response from server
+}
+\description{
+Set Solr configuration details
+}
+\examples{
+\dontrun{
+# start Solr with Cloud mode via the schemaless eg: bin/solr -e cloud
+# you can create a new core like: bin/solr create -c corename
+# where <corename> is the name for your core - or create as below
+
+# connect
+solr_connect()
+
+# set a property
+config_set("gettingstarted", set = list(query.filterCache.autowarmCount = 1000))
+
+# unset a property (curl options via httr)
+library("httr")
+config_set("gettingstarted", unset = "query.filterCache.size", config = verbose())
+
+# both set a property and unset a property
+config_set("gettingstarted", set = list(query.filterCache.autowarmCount = 500),
+  unset = "enableLazyFieldLoading")
+
+# many properties
+config_set("gettingstarted", set = list(
+   query.filterCache.autowarmCount = 1000,
+   query.commitWithin.softCommit = 'false'
+ )
+)
+}
+}
+
diff --git a/man/core_create.Rd b/man/core_create.Rd
new file mode 100644
index 0000000..9b364d9
--- /dev/null
+++ b/man/core_create.Rd
@@ -0,0 +1,66 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/core_create.R
+\name{core_create}
+\alias{core_create}
+\title{Create a core}
+\usage{
+core_create(name, instanceDir = NULL, config = NULL, schema = NULL,
+  dataDir = NULL, configSet = NULL, collection = NULL, shard = NULL,
+  async = NULL, raw = FALSE, callopts = list(), ...)
+}
+\arguments{
+\item{name}{(character) The name of the core to be created. Required}
+
+\item{instanceDir}{(character) Path to instance directory}
+
+\item{config}{(character) Path to config file}
+
+\item{schema}{(character) Path to schema file}
+
+\item{dataDir}{(character) Name of the data directory relative to instanceDir.}
+
+\item{configSet}{(character) Name of the configset to use for this core. For more
+information, see https://cwiki.apache.org/confluence/display/solr/Config+Sets}
+
+\item{collection}{(character) The name of the collection to which this core belongs.
+The default is the name of the core. collection.<param>=<value> causes a property of
+<param>=<value> to be set if a new collection is being created. Use
+collection.configName=<configname> to point to the configuration for a new collection.}
+
+\item{shard}{(character) The shard id this core represents. Normally you want to be
+auto-assigned a shard id.}
+
+\item{async}{(character) Request ID to track this action which will be
+processed asynchronously}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{callopts}{curl options passed on to \code{\link[httr]{GET}}}
+
+\item{...}{You can pass in parameters like \code{property.name=value}    to set
+core property name to value. See the section Defining core.properties for details on
+supported properties and values.
+(https://cwiki.apache.org/confluence/display/solr/Defining+core.properties)}
+}
+\description{
+Create a core
+}
+\examples{
+\dontrun{
+# start Solr with Schemaless mode via the schemaless eg: bin/solr start -e schemaless
+# you can create a new core like: bin/solr create -c corename
+# where <corename> is the name for your core - or create as below
+
+# connect
+solr_connect()
+
+# Create a core
+path <- "~/solr-5.4.1/server/solr/newcore/conf"
+dir.create(path, recursive = TRUE)
+files <- list.files("~/solr-5.4.1/server/solr/configsets/data_driven_schema_configs/conf/",
+full.names = TRUE)
+file.copy(files, path, recursive = TRUE)
+core_create(name = "newcore", instanceDir = "newcore", configSet = "basic_configs")
+}
+}
+
diff --git a/man/core_exists.Rd b/man/core_exists.Rd
new file mode 100644
index 0000000..0157624
--- /dev/null
+++ b/man/core_exists.Rd
@@ -0,0 +1,39 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/core_exists.R
+\name{core_exists}
+\alias{core_exists}
+\title{Check if a core exists}
+\usage{
+core_exists(name, callopts = list())
+}
+\arguments{
+\item{name}{(character) The name of the core. If not given, all cores.}
+
+\item{callopts}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\value{
+A single boolean, \code{TRUE} or \code{FALSE}
+}
+\description{
+Check if a core exists
+}
+\details{
+Simply calls \code{\link{core_status}} internally
+}
+\examples{
+\dontrun{
+# start Solr with Schemaless mode via the schemaless eg: bin/solr start -e schemaless
+# you can create a new core like: bin/solr create -c corename
+# where <corename> is the name for your core - or create as below
+
+# connect
+solr_connect()
+
+# exists
+core_exists("gettingstarted")
+
+# doesn't exist
+core_exists("hhhhhh")
+}
+}
+
diff --git a/man/core_mergeindexes.Rd b/man/core_mergeindexes.Rd
new file mode 100644
index 0000000..ab7dd6d
--- /dev/null
+++ b/man/core_mergeindexes.Rd
@@ -0,0 +1,48 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/core_mergeindexes.R
+\name{core_mergeindexes}
+\alias{core_mergeindexes}
+\title{Merge indexes (cores)}
+\usage{
+core_mergeindexes(name, indexDir = NULL, srcCore = NULL, async = NULL,
+  raw = FALSE, callopts = list())
+}
+\arguments{
+\item{name}{The name of the target core/index. Required}
+
+\item{indexDir}{(character)    Multi-valued, directories that would be merged.}
+
+\item{srcCore}{(character)    Multi-valued, source cores that would be merged.}
+
+\item{async}{(character) Request ID to track this action which will be processed
+asynchronously}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{callopts}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Merges one or more indexes to another index. The indexes must
+have completed commits, and should be locked against writes until the merge
+is complete or the resulting merged index may become corrupted. The target
+core index must already exist and have a compatible schema with the one or
+more indexes that will be merged to it.
+}
+\examples{
+\dontrun{
+# start Solr with Schemaless mode via the schemaless eg: bin/solr start -e schemaless
+
+# connect
+solr_connect()
+
+## FIXME: not tested yet
+
+# use indexDir parameter
+core_mergeindexes(name = "new_core_name", indexDir = c("/solr_home/core1/data/index",
+   "/solr_home/core2/data/index"))
+
+# use srcCore parameter
+core_mergeindexes(name = "new_core_name", srcCore = c('core1', 'core2'))
+}
+}
+
diff --git a/man/core_reload.Rd b/man/core_reload.Rd
new file mode 100644
index 0000000..8f40784
--- /dev/null
+++ b/man/core_reload.Rd
@@ -0,0 +1,33 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/core_reload.R
+\name{core_reload}
+\alias{core_reload}
+\title{Reload a core}
+\usage{
+core_reload(name, raw = FALSE, callopts = list())
+}
+\arguments{
+\item{name}{(character) The name of the core. Required}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{callopts}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Reload a core
+}
+\examples{
+\dontrun{
+# start Solr with Schemaless mode via the schemaless eg: bin/solr start -e schemaless
+# you can create a new core like: bin/solr create -c corename
+# where <corename> is the name for your core - or create as below
+
+# connect
+solr_connect()
+
+# Reload a core, then check its status
+core_reload("gettingstarted")
+core_status("gettingstarted")
+}
+}
+
diff --git a/man/core_rename.Rd b/man/core_rename.Rd
new file mode 100644
index 0000000..ca6d036
--- /dev/null
+++ b/man/core_rename.Rd
@@ -0,0 +1,40 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/core_rename.R
+\name{core_rename}
+\alias{core_rename}
+\title{Rename a core}
+\usage{
+core_rename(name, other, async = NULL, raw = FALSE, callopts = list())
+}
+\arguments{
+\item{name}{(character) The name of the core to be renamed. Required}
+
+\item{other}{(character) The new name of the core. Required.}
+
+\item{async}{(character) Request ID to track this action which will be processed
+asynchronously}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{callopts}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Rename a core
+}
+\examples{
+\dontrun{
+# start Solr with Schemaless mode via the schemaless eg: bin/solr start -e schemaless
+# you can create a new core like: bin/solr create -c corename
+# where <corename> is the name for your core - or create as below
+
+# connect
+solr_connect()
+
+# Create a core, rename it, then check status
+core_create("testcore") # or create in the CLI: bin/solr create -c testcore
+core_rename("testcore", "newtestcore")
+core_status("testcore") # core missing
+core_status("newtestcore", FALSE) # not missing
+}
+}
+
diff --git a/man/core_requeststatus.Rd b/man/core_requeststatus.Rd
new file mode 100644
index 0000000..3f4a4ca
--- /dev/null
+++ b/man/core_requeststatus.Rd
@@ -0,0 +1,28 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/core_requeststatus.R
+\name{core_requeststatus}
+\alias{core_requeststatus}
+\title{Request status of asynchronous CoreAdmin API call}
+\usage{
+core_requeststatus(requestid, raw = FALSE, callopts = list())
+}
+\arguments{
+\item{requestid}{(character) Required. The request ID of the asynchronous CoreAdmin API call to check}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{callopts}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Request status of asynchronous CoreAdmin API call
+}
+\examples{
+\dontrun{
+# start Solr with Schemaless mode via the schemaless eg: bin/solr start -e schemaless
+
+# FIXME: not tested yet...
+# solr_connect()
+# core_requeststatus(requestid = 1)
+}
+}
+
diff --git a/man/core_split.Rd b/man/core_split.Rd
new file mode 100644
index 0000000..12ac1ae
--- /dev/null
+++ b/man/core_split.Rd
@@ -0,0 +1,84 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/core_split.R
+\name{core_split}
+\alias{core_split}
+\title{Split a core}
+\usage{
+core_split(name, path = NULL, targetCore = NULL, ranges = NULL,
+  split.key = NULL, async = NULL, raw = FALSE, callopts = list())
+}
+\arguments{
+\item{name}{(character) The name of the core to be split. Required}
+
+\item{path}{(character) Two or more target directory paths in which a piece of the
+index will be written}
+
+\item{targetCore}{(character) Two or more target Solr cores to which a piece
+of the index will be merged}
+
+\item{ranges}{(character) A list of number ranges, or hash ranges in hexadecimal format.
+If numbers, they get converted to hexadecimal format before being passed to
+your Solr server.}
+
+\item{split.key}{(character) The key to be used for splitting the index}
+
+\item{async}{(character) Request ID to track this action which will be processed
+asynchronously}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{callopts}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+SPLIT splits an index into two or more indexes. The index being
+split can continue to handle requests. The split pieces can be placed into
+a specified directory on the server's filesystem or it can be merged into
+running Solr cores.
+}
+\details{
+The core index will be split into as many pieces as the number of \code{path}
+or \code{targetCore} parameters.
+
+Either \code{path} or \code{targetCore} parameter must be specified but not
+both. The \code{ranges} and \code{split.key} parameters are optional and only one of
+the two should be specified, if at all required.
+}
+\examples{
+\dontrun{
+# start Solr with Schemaless mode via the schemaless eg: bin/solr start -e schemaless
+# you can create a new core like: bin/solr create -c corename
+# where <corename> is the name for your core - or create as below
+
+# connect
+solr_connect()
+
+# Split a core
+## First, create three cores
+# core_create("splitcoretest0") # or create in the CLI: bin/solr create -c splitcoretest0
+# core_create("splitcoretest1") # or create in the CLI: bin/solr create -c splitcoretest1
+# core_create("splitcoretest2") # or create in the CLI: bin/solr create -c splitcoretest2
+
+## check status
+core_status("splitcoretest0", FALSE)
+core_status("splitcoretest1", FALSE)
+core_status("splitcoretest2", FALSE)
+
+## split core using targetCore parameter
+core_split("splitcoretest0", targetCore = c("splitcoretest1", "splitcoretest2"))
+
+## split core using split.key parameter
+### Here all documents having the same route key as the split.key i.e. 'A!'
+### will be split from the core index and written to the targetCore
+core_split("splitcoretest0", targetCore = "splitcoretest1", split.key = "A!")
+
+## split core using ranges parameter
+### Solr expects hash ranges in hexadecimal, but since we're in R,
+### let's not make our lives any harder, so you can pass in numbers
+### but you can still pass in hexadecimal if you want.
+rgs <- c('0-1f4', '1f5-3e8')
+core_split("splitcoretest0", targetCore = c("splitcoretest1", "splitcoretest2"), ranges = rgs)
+rgs <- list(c(0, 500), c(501, 1000))
+core_split("splitcoretest0", targetCore = c("splitcoretest1", "splitcoretest2"), ranges = rgs)
+}
+}
+
diff --git a/man/core_status.Rd b/man/core_status.Rd
new file mode 100644
index 0000000..3a0aaef
--- /dev/null
+++ b/man/core_status.Rd
@@ -0,0 +1,43 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/core_status.R
+\name{core_status}
+\alias{core_status}
+\title{Get core status}
+\usage{
+core_status(name = NULL, indexInfo = TRUE, raw = FALSE,
+  callopts = list())
+}
+\arguments{
+\item{name}{(character) The name of the core. If not given, all cores.}
+
+\item{indexInfo}{(logical) If \code{FALSE}, index information is not returned with the core status. Default: \code{TRUE}}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{callopts}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Get core status
+}
+\examples{
+\dontrun{
+# start Solr with Schemaless mode via the schemaless eg: bin/solr start -e schemaless
+# you can create a new core like: bin/solr create -c corename
+# where <corename> is the name for your core - or create as below
+
+# connect
+solr_connect()
+
+# Status of all cores
+core_status()
+
+# Status of particular cores
+core_status("gettingstarted")
+
+# Get index info or not
+## Default: TRUE
+core_status("gettingstarted", indexInfo = TRUE)
+core_status("gettingstarted", indexInfo = FALSE)
+}
+}
+
diff --git a/man/core_swap.Rd b/man/core_swap.Rd
new file mode 100644
index 0000000..5fb0868
--- /dev/null
+++ b/man/core_swap.Rd
@@ -0,0 +1,57 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/core_swap.R
+\name{core_swap}
+\alias{core_swap}
+\title{Swap a core}
+\usage{
+core_swap(name, other, async = NULL, raw = FALSE, callopts = list())
+}
+\arguments{
+\item{name}{(character) The name of one of the cores to be swapped. Required}
+
+\item{other}{(character) The name of one of the cores to be swapped. Required.}
+
+\item{async}{(character) Request ID to track this action which will be processed
+asynchronously}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{callopts}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+SWAP atomically swaps the names used to access two existing Solr cores.
+This can be used to swap new content into production. The prior core remains
+available and can be swapped back, if necessary. Each core will be known by
+the name of the other, after the swap
+}
+\details{
+Do not use \code{core_swap} with a SolrCloud node. It is not supported and
+can result in the core being unusable. We'll try to stop you if you try.
+}
+\examples{
+\dontrun{
+# start Solr with Schemaless mode via the schemaless eg: bin/solr start -e schemaless
+# you can create a new core like: bin/solr create -c corename
+# where <corename> is the name for your core - or create as below
+
+# connect
+solr_connect()
+
+# Swap a core
+## First, create two cores
+core_create("swapcoretest") # or create in the CLI: bin/solr create -c swapcoretest
+core_create("swapcoretest") # or create in the CLI: bin/solr create -c swapcoretest
+
+## check status
+core_status("swapcoretest1", FALSE)
+core_status("swapcoretest2", FALSE)
+
+## swap core
+core_swap("swapcoretest1", "swapcoretest2")
+
+## check status again
+core_status("swapcoretest1", FALSE)
+core_status("swapcoretest2", FALSE)
+}
+}
+
diff --git a/man/core_unload.Rd b/man/core_unload.Rd
new file mode 100644
index 0000000..66b5a52
--- /dev/null
+++ b/man/core_unload.Rd
@@ -0,0 +1,48 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/core_unload.R
+\name{core_unload}
+\alias{core_unload}
+\title{Unload (delete) a core}
+\usage{
+core_unload(name, deleteIndex = FALSE, deleteDataDir = FALSE,
+  deleteInstanceDir = FALSE, async = NULL, raw = FALSE,
+  callopts = list())
+}
+\arguments{
+\item{name}{The name of one of the cores to be removed. Required}
+
+\item{deleteIndex}{(logical)    If \code{TRUE}, will remove the index when unloading
+the core. Default: \code{FALSE}}
+
+\item{deleteDataDir}{(logical)    If \code{TRUE}, removes the data directory and all
+sub-directories. Default: \code{FALSE}}
+
+\item{deleteInstanceDir}{(logical)    If \code{TRUE}, removes everything related to
+the core, including the index directory, configuration files and other related
+files. Default: \code{FALSE}}
+
+\item{async}{(character) Request ID to track this action which will be processed
+asynchronously}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{callopts}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Unload (delete) a core
+}
+\examples{
+\dontrun{
+# start Solr with Schemaless mode via the schemaless eg: bin/solr start -e schemaless
+
+# connect
+solr_connect()
+
+# Create a core
+core_create(name = "thingsstuff")
+
+# Unload a core
+core_unload(name = "fart")
+}
+}
+
diff --git a/man/delete.Rd b/man/delete.Rd
new file mode 100644
index 0000000..6932225
--- /dev/null
+++ b/man/delete.Rd
@@ -0,0 +1,65 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/delete.R
+\name{delete}
+\alias{delete}
+\alias{delete_by_id}
+\alias{delete_by_query}
+\title{Delete documents by ID or query}
+\usage{
+delete_by_id(ids, name, commit = TRUE, commit_within = NULL,
+  overwrite = TRUE, boost = NULL, wt = "json", raw = FALSE, ...)
+
+delete_by_query(query, name, commit = TRUE, commit_within = NULL,
+  overwrite = TRUE, boost = NULL, wt = "json", raw = FALSE, ...)
+}
+\arguments{
+\item{ids}{Document IDs, one or more in a vector or list}
+
+\item{name}{(character) A collection or core name. Required.}
+
+\item{commit}{(logical) If \code{TRUE}, documents are immediately searchable.
+Default: \code{TRUE}}
+
+\item{commit_within}{(numeric) Milliseconds to commit the change, the document will be added
+within that time. Default: NULL}
+
+\item{overwrite}{(logical) Overwrite documents with matching keys. Default: \code{TRUE}}
+
+\item{boost}{(numeric) Boost factor. Default: NULL}
+
+\item{wt}{(character) One of json (default) or xml. If json, uses
+\code{\link[jsonlite]{fromJSON}} to parse. If xml, uses \code{\link[xml2]{read_xml}} to
+parse}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data in format specified by
+\code{wt} param}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+
+\item{query}{Query to use to delete documents}
+}
+\description{
+Delete documents by ID or query
+}
+\details{
+We use json internally as data interchange format for this function.
+}
+\examples{
+\dontrun{
+solr_connect()
+
+# add some documents first
+ss <- list(list(id = 1, price = 100), list(id = 2, price = 500))
+add(ss, name = "gettingstarted")
+
+# Now, delete them
+# Delete by ID
+# delete_by_id(ids = 1)
+## Many IDs
+# delete_by_id(ids = c(1, 2))
+
+# Delete by query
+# delete_by_query(query = "manu:bank")
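+
+## Delay the commit (milliseconds) via commit_within, documented above (illustrative)
+# delete_by_id(ids = 1, name = "gettingstarted", commit = FALSE, commit_within = 5000)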
+}
+}
+
diff --git a/man/is-sr.Rd b/man/is-sr.Rd
new file mode 100644
index 0000000..bc7f5b1
--- /dev/null
+++ b/man/is-sr.Rd
@@ -0,0 +1,25 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/classes.r
+\name{is.sr_facet}
+\alias{is.sr_facet}
+\alias{is.sr_high}
+\alias{is.sr_search}
+\title{Test for sr_facet class}
+\usage{
+is.sr_facet(x)
+
+is.sr_high(x)
+
+is.sr_search(x)
+}
+\arguments{
+\item{x}{Input}
+}
+\description{
+Test for sr_facet class
+
+Test for sr_high class
+
+Test for sr_search class
+}
+
diff --git a/man/makemultiargs.Rd b/man/makemultiargs.Rd
new file mode 100644
index 0000000..4a96fc8
--- /dev/null
+++ b/man/makemultiargs.Rd
@@ -0,0 +1,17 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/zzz.r
+\name{makemultiargs}
+\alias{makemultiargs}
+\title{Function to make multiple args of the same name from a
+single input with length > 1}
+\usage{
+makemultiargs(x)
+}
+\arguments{
+\item{x}{Value}
+}
+\description{
+Function to make multiple args of the same name from a
+single input with length > 1
+}
+
diff --git a/man/optimize.Rd b/man/optimize.Rd
new file mode 100644
index 0000000..67f1ca0
--- /dev/null
+++ b/man/optimize.Rd
@@ -0,0 +1,48 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/optimize.R
+\name{optimize}
+\alias{optimize}
+\title{Optimize}
+\usage{
+optimize(name, max_segments = 1, wait_searcher = TRUE,
+  soft_commit = FALSE, wt = "json", raw = FALSE, ...)
+}
+\arguments{
+\item{name}{(character) A collection or core name. Required.}
+
+\item{max_segments}{optimizes down to at most this number of segments. Default: 1}
+
+\item{wait_searcher}{block until a new searcher is opened and registered as the
+main query searcher, making the changes visible. Default: \code{TRUE}}
+
+\item{soft_commit}{perform a soft commit - this will refresh the 'view' of the
+index in a more performant manner, but without "on-disk" guarantees.
+Default: \code{FALSE}}
+
+\item{wt}{(character) One of json (default) or xml. If json, uses
+\code{\link[jsonlite]{fromJSON}} to parse. If xml, uses \code{\link[xml2]{read_xml}} to
+parse}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data in format specified by
+\code{wt} param}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Optimize
+}
+\examples{
+\dontrun{
+solr_connect()
+
+optimize("gettingstarted")
+optimize("gettingstarted", max_segments = 2)
+optimize("gettingstarted", wait_searcher = FALSE)
+
+# get xml back
+optimize("gettingstarted", wt = "xml")
+## raw xml
+optimize("gettingstarted", wt = "xml", raw = TRUE)
+}
+}
+
diff --git a/man/ping.Rd b/man/ping.Rd
new file mode 100644
index 0000000..8dfdb1a
--- /dev/null
+++ b/man/ping.Rd
@@ -0,0 +1,54 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/ping.R
+\name{ping}
+\alias{ping}
+\title{Ping a Solr instance}
+\usage{
+ping(name, wt = "json", verbose = TRUE, raw = FALSE, ...)
+}
+\arguments{
+\item{name}{(character) Name of a collection or core. Required.}
+
+\item{wt}{(character) One of json (default) or xml. If json, uses
+\code{\link[jsonlite]{fromJSON}} to parse. If xml, uses
+\code{\link[xml2]{read_xml}} to parse}
+
+\item{verbose}{If TRUE (default), the URL used for the call is printed to the console.}
+
+\item{raw}{(logical) If TRUE, returns raw data in format specified by
+\code{wt} param}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\value{
+if \code{wt="xml"} an object of class \code{xml_document}, if
+\code{wt="json"} an object of class \code{list}
+}
+\description{
+Ping a Solr instance
+}
+\details{
+You likely won't be able to run this function against many public
+Solr services, as they hopefully don't expose their admin interface to the
+public, but it works against a local install.
+}
+\examples{
+\dontrun{
+# start Solr, in your CLI, run: `bin/solr start -e cloud -noprompt`
+# after that, if you haven't run `bin/post -c gettingstarted docs/` yet,
+# do so
+
+# connect: by default we connect to localhost, port 8983
+solr_connect()
+
+# ping the gettingstarted index
+ping("gettingstarted")
+ping("gettingstarted", wt = "xml")
+ping("gettingstarted", verbose = FALSE)
+ping("gettingstarted", raw = TRUE)
+
+library("httr")
+ping("gettingstarted", wt="xml", config = verbose())
+}
+}
+
diff --git a/man/pivot_flatten_tabular.Rd b/man/pivot_flatten_tabular.Rd
new file mode 100644
index 0000000..34a6a8d
--- /dev/null
+++ b/man/pivot_flatten_tabular.Rd
@@ -0,0 +1,22 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/parsers.R
+\name{pivot_flatten_tabular}
+\alias{pivot_flatten_tabular}
+\title{Flatten facet.pivot responses}
+\usage{
+pivot_flatten_tabular(df_w_pivot)
+}
+\arguments{
+\item{df_w_pivot}{a \code{data.frame} with another
+\code{data.frame} nested inside representing a
+pivot response}
+}
+\value{
+a \code{data.frame}
+}
+\description{
+Convert a nested hierarchy of facet.pivot elements
+to tabular data (rows and columns)
+}
+\keyword{internal}
+
diff --git a/man/schema.Rd b/man/schema.Rd
new file mode 100644
index 0000000..f1e7c77
--- /dev/null
+++ b/man/schema.Rd
@@ -0,0 +1,59 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/schema.R
+\name{schema}
+\alias{schema}
+\title{Get the schema for a collection or core}
+\usage{
+schema(name, what = "", raw = FALSE, verbose = TRUE, ...)
+}
+\arguments{
+\item{name}{(character) Name of collection or core}
+
+\item{what}{(character) What to retrieve. By default, we retrieve the entire
+schema. Options include: fields, dynamicfields, fieldtypes, copyfields, name,
+version, uniquekey, similarity, "solrqueryparser/defaultoperator"}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data}
+
+\item{verbose}{If TRUE (default), the URL used for the call is printed to the console.}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Get the schema for a collection or core
+}
+\examples{
+\dontrun{
+# start Solr, in your CLI, run: `bin/solr start -e cloud -noprompt`
+# after that, if you haven't run `bin/post -c gettingstarted docs/` yet, do so
+
+# connect: by default we connect to localhost, port 8983
+solr_connect()
+
+# get the schema for the gettingstarted index
+schema(name = "gettingstarted")
+
+# Get parts of the schema
+schema(name = "gettingstarted", "fields")
+schema(name = "gettingstarted", "dynamicfields")
+schema(name = "gettingstarted", "fieldtypes")
+schema(name = "gettingstarted", "copyfields")
+schema(name = "gettingstarted", "name")
+schema(name = "gettingstarted", "version")
+schema(name = "gettingstarted", "uniquekey")
+schema(name = "gettingstarted", "similarity")
+schema(name = "gettingstarted", "solrqueryparser/defaultoperator")
+
+# get raw data
+schema(name = "gettingstarted", "similarity", raw = TRUE)
+schema(name = "gettingstarted", "uniquekey", raw = TRUE)
+
+# start Solr in Schemaless mode: bin/solr start -e schemaless
+# schema("gettingstarted")
+
+# start Solr in Standalone mode: bin/solr start
+# then add a core: bin/solr create -c helloWorld
+# schema("helloWorld")
+}
+}
+
diff --git a/man/solr_all.Rd b/man/solr_all.Rd
new file mode 100644
index 0000000..149e57b
--- /dev/null
+++ b/man/solr_all.Rd
@@ -0,0 +1,141 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/solr_all.r
+\name{solr_all}
+\alias{solr_all}
+\title{All purpose search}
+\usage{
+solr_all(name = NULL, q = "*:*", sort = NULL, start = 0, rows = NULL,
+  pageDoc = NULL, pageScore = NULL, fq = NULL, fl = NULL,
+  defType = NULL, timeAllowed = NULL, qt = NULL, wt = "json",
+  NOW = NULL, TZ = NULL, echoHandler = NULL, echoParams = NULL,
+  key = NULL, callopts = list(), raw = FALSE, parsetype = "df",
+  concat = ",", ...)
+}
+\arguments{
+\item{name}{Name of a collection or core. Or leave as \code{NULL} if not needed.}
+
+\item{q}{Query terms, defaults to '*:*', or everything.}
+
+\item{sort}{Field to sort on. You can specify ascending (e.g., score asc) or 
+descending (e.g., score desc), sort by two fields (e.g., score desc, price asc), 
+or sort by a function (e.g., sum(x_f, y_f) desc, which sorts by the sum of 
+x_f and y_f in a descending order).}
+
+\item{start}{Record to start at, default to beginning.}
+
+\item{rows}{Number of records to return. Default: 10.}
+
+\item{pageDoc}{If you expect to be paging deeply into the results (say beyond page 10, 
+assuming rows=10) and you are sorting by score, you may wish to add the pageDoc 
+and pageScore parameters to your request. These two parameters tell Solr (and Lucene) 
+what the last result (Lucene internal docid and score) of the previous page was, 
+so that when scoring the query for the next set of pages, it can ignore any results 
+that occur higher than that item. To get the Lucene internal doc id, you will need 
+to add [docid] to the &fl list. 
+e.g., q=*:*&start=10&pageDoc=5&pageScore=1.345&fl=[docid],score}
+
+\item{pageScore}{See pageDoc notes.}
+
+\item{fq}{Filter query, this does not affect the search, only what gets returned. 
+This parameter can accept multiple items in a list or vector. You can't pass more than 
+one parameter of the same name, so we get around it by passing multiple queries 
+and we parse internally}
+
+\item{fl}{Fields to return, can be a character vector like \code{c('id', 'title')}, 
+or a single character vector with one or more comma separated names, like 
+\code{'id,title'}}
+
+\item{defType}{Specify the query parser to use with this request.}
+
+\item{timeAllowed}{The time allowed for a search to finish. This value only applies 
+to the search and not to requests in general. Time is in milliseconds. Values <= 0 
+mean no time restriction. Partial results may be returned (if there are any).}
+
+\item{qt}{Which query handler to use. Options: dismax, others?}
+
+\item{wt}{(character) One of json (default) or xml. If json, uses
+\code{\link[jsonlite]{fromJSON}} to parse. If xml, uses \code{\link[xml2]{read_xml}}
+to parse. You can't use \code{csv} here because this function returns several
+result types (docs, facets, groups, mlt, stats, highlights) that csv can't represent.}
+
+\item{NOW}{Set a fixed time for evaluating date-based expressions}
+
+\item{TZ}{Time zone, you can override the default.}
+
+\item{echoHandler}{If \code{TRUE}, Solr places the name of the handler used in the
+response to the client for debugging purposes}
+
+\item{echoParams}{The echoParams parameter tells Solr what kinds of Request 
+parameters should be included in the response for debugging purposes, legal values 
+include:
+\itemize{
+ \item none - don't include any request parameters for debugging
+ \item explicit - include the parameters explicitly specified by the client in the request
+ \item all - include all parameters involved in this request, either specified explicitly 
+ by the client, or implicit because of the request handler configuration.
+}}
+
+\item{key}{API key, if needed.}
+
+\item{callopts}{Call options passed on to httr::GET}
+
+\item{raw}{(logical) If TRUE, returns raw data in format specified by wt param}
+
+\item{parsetype}{(character) One of 'list' or 'df'}
+
+\item{concat}{(character) Character to concatenate elements of longer than length 1. 
+Note that this only works reliably when data format is json (wt='json'). The parsing
+is more complicated in XML format, but you can do that on your own.}
+
+\item{...}{Further args.}
+}
+\value{
+XML, JSON, a list, or data.frame
+}
+\description{
+Includes documents, facets, groups, mlt, stats, and highlights.
+}
+\examples{
+\dontrun{
+# connect
+solr_connect('http://api.plos.org/search')
+
+solr_all(q='*:*', rows=2, fl='id')
+
+# facets
+solr_all(q='*:*', rows=2, fl='id', facet="true", facet.field="journal")
+
+# mlt
+solr_all(q='ecology', rows=2, fl='id', mlt='true', mlt.count=2, mlt.fl='abstract')
+
+# facets and mlt
+solr_all(q='ecology', rows=2, fl='id', facet="true", facet.field="journal",
+mlt='true', mlt.count=2, mlt.fl='abstract')
+
+# stats
+solr_all(q='ecology', rows=2, fl='id', stats='true', stats.field='counter_total_all')
+
+# facets, mlt, and stats
+solr_all(q='ecology', rows=2, fl='id', facet="true", facet.field="journal",
+mlt='true', mlt.count=2, mlt.fl='abstract', stats='true', stats.field='counter_total_all')
+
+# group
+solr_all(q='ecology', rows=2, fl='id', group='true',
+   group.field='journal', group.limit=3)
+
+# facets, mlt, stats, and groups
+solr_all(q='ecology', rows=2, fl='id', facet="true", facet.field="journal",
+   mlt='true', mlt.count=2, mlt.fl='abstract', stats='true', stats.field='counter_total_all',
+   group='true', group.field='journal', group.limit=3)
+
+# using wt = xml
+solr_all(q='*:*', rows=50, fl=c('id','score'), fq='doc_type:full', wt="xml", raw=TRUE)
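+
+# fq accepts multiple filter queries in a list or vector (illustrative values)
+solr_all(q='ecology', rows=2, fl='id',
+  fq=list('doc_type:full', 'counter_total_all:[100 TO *]'))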
+}
+}
+\references{
+See \url{http://wiki.apache.org/solr/#Search_and_Indexing} for
+more information.
+}
+\seealso{
+\code{\link{solr_highlight}}, \code{\link{solr_facet}}
+}
+
diff --git a/man/solr_connect.Rd b/man/solr_connect.Rd
new file mode 100644
index 0000000..c30eb5b
--- /dev/null
+++ b/man/solr_connect.Rd
@@ -0,0 +1,58 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/connect.R
+\name{solr_connect}
+\alias{solr_connect}
+\alias{solr_settings}
+\title{Solr connection}
+\usage{
+solr_connect(url = "http://localhost:8983", proxy = NULL,
+  errors = "simple", verbose = TRUE)
+
+solr_settings()
+}
+\arguments{
+\item{url}{Base URL for Solr instance. For a local instance, this is likely going
+to be \code{http://localhost:8983} (also the default), or a different port if you
+set a different port.}
+
+\item{proxy}{List of arguments for a proxy connection, including one or more of:
+url, port, username, password, and auth. See \code{\link[httr]{use_proxy}} for 
+help, which is used to construct the proxy connection.}
+
+\item{errors}{(character) One of simple or complete. Simple gives http code and 
+error message on an error, while complete gives both http code and error message, 
+and stack trace, if available.}
+
+\item{verbose}{(logical) Whether to print help messages or not. E.g., if 
+\code{TRUE}, we print the URL on each request to a Solr server for your 
+reference. Default: \code{TRUE}}
+}
+\description{
+Set Solr options, including base URL, proxy, and errors
+}
+\details{
+This function sets environment variables that we use internally
+within functions in this package to determine the right thing to do given your
+inputs. 
+
+In addition, \code{solr_connect} does a quick \code{GET} request to the URL you 
+provide to make sure the service is up.
+}
+\examples{
+\dontrun{
+# set solr settings
+solr_connect()
+
+# set solr settings with a proxy
+prox <- list(url = "187.62.207.130", port = 3128)
+solr_connect(url = "http://localhost:8983", proxy = prox)
+
+# get solr settings
+solr_settings()
+
+# you can also check your settings via Sys.getenv()
+Sys.getenv("SOLR_URL")
+Sys.getenv("SOLR_ERRORS")
+}
+}
+
diff --git a/man/solr_facet.Rd b/man/solr_facet.Rd
new file mode 100644
index 0000000..72a87a3
--- /dev/null
+++ b/man/solr_facet.Rd
@@ -0,0 +1,366 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/solr_facet.r
+\name{solr_facet}
+\alias{solr_facet}
+\title{Faceted search}
+\usage{
+solr_facet(name = NULL, q = "*:*", facet.query = NA, facet.field = NA,
+  facet.prefix = NA, facet.sort = NA, facet.limit = NA,
+  facet.offset = NA, facet.mincount = NA, facet.missing = NA,
+  facet.method = NA, facet.enum.cache.minDf = NA, facet.threads = NA,
+  facet.date = NA, facet.date.start = NA, facet.date.end = NA,
+  facet.date.gap = NA, facet.date.hardend = NA, facet.date.other = NA,
+  facet.date.include = NA, facet.range = NA, facet.range.start = NA,
+  facet.range.end = NA, facet.range.gap = NA, facet.range.hardend = NA,
+  facet.range.other = NA, facet.range.include = NA, facet.pivot = NA,
+  facet.pivot.mincount = NA, start = NA, rows = NA, key = NA,
+  wt = "json", raw = FALSE, callopts = list(), ...)
+}
+\arguments{
+\item{name}{Name of a collection or core. Or leave as \code{NULL} if not needed.}
+
+\item{q}{Query terms. See examples.}
+
+\item{facet.query}{This param allows you to specify an arbitrary query in the
+Lucene default syntax to generate a facet count. By default, faceting returns
+a count of the unique terms for a "field", while facet.query allows you to
+determine counts for arbitrary terms or expressions. This parameter can be
+specified multiple times to indicate that multiple queries should be used as
+separate facet constraints. It can be particularly useful for numeric range
+based facets, or prefix based facets -- see example below (i.e. price:[* TO 500]
+and  price:[501 TO *]).}
+
+\item{facet.field}{This param allows you to specify a field which should be
+treated as a facet. It will iterate over each Term in the field and generate a
+facet count using that Term as the constraint. This parameter can be specified
+multiple times to indicate multiple facet fields. None of the other params in
+this section will have any effect without specifying at least one field name
+using this param.}
+
+\item{facet.prefix}{Limits the terms on which to facet to those starting with
+the given string prefix. Note that unlike fq, this does not change the search
+results -- it merely reduces the facet values returned to those beginning with
+the specified prefix. This parameter can be specified on a per field basis.}
+
+\item{facet.sort}{See Details.}
+
+\item{facet.limit}{This param indicates the maximum number of constraint counts
+that should be returned for the facet fields. A negative value means unlimited.
+Default: 100. Can be specified on a per field basis.}
+
+\item{facet.offset}{This param indicates an offset into the list of constraints
+to allow paging. Default: 0. This parameter can be specified on a per field basis.}
+
+\item{facet.mincount}{This param indicates the minimum counts for facet fields
+should be included in the response. Default: 0. This parameter can be specified
+on a per field basis.}
+
+\item{facet.missing}{Set to "true" this param indicates that in addition to the
+Term based constraints of a facet field, a count of all matching results which
+have no value for the field should be computed. Default: FALSE. This parameter
+can be specified on a per field basis.}
+
+\item{facet.method}{See Details.}
+
+\item{facet.enum.cache.minDf}{This param indicates the minimum document frequency
+(number of documents matching a term) for which the filterCache should be used
+when determining the constraint count for that term. This is only used when
+facet.method=enum method of faceting. A value greater than zero will decrease
+memory usage of the filterCache, but increase the query time. When faceting on
+a field with a very large number of terms, and you wish to decrease memory usage,
+try a low value of 25 to 50 first. Default: 0, causing the filterCache to be used
+for all terms in the field. This parameter can be specified on a per field basis.}
+
+\item{facet.threads}{This param will cause loading the underlying fields used in
+faceting to be executed in parallel with the number of threads specified. Specify
+as facet.threads=# where # is the maximum number of threads used. Omitting this
+parameter or specifying the thread count as 0 will not spawn any threads just as
+before. Specifying a negative number of threads will spin up to Integer.MAX_VALUE
+threads. Currently this is limited to the fields, range and query facets are not
+yet supported. In at least one case this has reduced warmup times from 20 seconds
+to under 5 seconds.}
+
+\item{facet.date}{Specify names of fields (of type DateField) which should be
+treated as date facets. Can be specified multiple times to indicate multiple
+date facet fields.}
+
+\item{facet.date.start}{The lower bound for the first date range for all Date
+Faceting on this field. This should be a single date expression which may use
+the DateMathParser syntax. Can be specified on a per field basis.}
+
+\item{facet.date.end}{The minimum upper bound for the last date range for all
+Date Faceting on this field (see facet.date.hardend for an explanation of why
+the actual end value may be greater). This should be a single date expression
+which may use the DateMathParser syntax. Can be specified on a per field basis.}
+
+\item{facet.date.gap}{The size of each date range expressed as an interval to
+be added to the lower bound using the DateMathParser syntax. Eg:
+facet.date.gap=+1DAY. Can be specified on a per field basis.}
+
+\item{facet.date.hardend}{A Boolean parameter instructing Solr what to do in the
+event that facet.date.gap does not divide evenly between facet.date.start and
+facet.date.end. If this is true, the last date range constraint will have an
+upper bound of facet.date.end; if false, the last date range will have the smallest
+possible upper bound greater than facet.date.end such that the range is exactly
+facet.date.gap wide. Default: FALSE. This parameter can be specified on a per
+field basis.}
+
+\item{facet.date.other}{See Details.}
+
+\item{facet.date.include}{See Details.}
+
+\item{facet.range}{Indicates what field to create range facets for. Example:
+facet.range=price&facet.range=age}
+
+\item{facet.range.start}{The lower bound of the ranges. Can be specified on a
+per field basis. Example: f.price.facet.range.start=0.0&f.age.facet.range.start=10}
+
+\item{facet.range.end}{The upper bound of the ranges. Can be specified on a per
+field basis. Example: f.price.facet.range.end=1000.0&f.age.facet.range.end=99}
+
+\item{facet.range.gap}{The size of each range expressed as a value to be added
+to the lower bound. For date fields, this should be expressed using the
+DateMathParser syntax. (ie: facet.range.gap=+1DAY). Can be specified
+on a per field basis. Example: f.price.facet.range.gap=100&f.age.facet.range.gap=10}
+
+\item{facet.range.hardend}{A Boolean parameter instructing Solr what to do in the
+event that facet.range.gap does not divide evenly between facet.range.start and
+facet.range.end. If this is true, the last range constraint will have an upper
+bound of facet.range.end; if false, the last range will have the smallest possible
+upper bound greater than facet.range.end such that the range is exactly
+facet.range.gap wide. Default: FALSE. This parameter can be specified on a
+per field basis.}
+
+\item{facet.range.other}{See Details.}
+
+\item{facet.range.include}{See Details.}
+
+\item{facet.pivot}{This param allows you to specify a single comma-separated string 
+of fields to allow you to facet within the results of the parent facet to return 
+counts in the format of SQL group by operation}
+
+\item{facet.pivot.mincount}{This param indicates the minimum counts for facet fields
+to be included in the response. Default: 0. This parameter should only be specified 
+once.}
+
+\item{start}{Record to start at, default to beginning.}
+
+\item{rows}{Number of records to return.}
+
+\item{key}{API key, if needed.}
+
+\item{wt}{(character) Data type returned, defaults to 'json'. One of json or xml. If json, 
+uses \code{\link[jsonlite]{fromJSON}} to parse. If xml, uses \code{\link[XML]{xmlParse}} to 
+parse. csv is only supported in \code{\link{solr_search}} and \code{\link{solr_all}}.}
+
+\item{raw}{(logical) If TRUE, raw json or xml returned. If FALSE (default),
+parsed data returned.}
+
+\item{callopts}{Call options passed on to httr::GET}
+
+\item{...}{Further args, usually per field arguments for faceting.}
+}
+\value{
+Raw json or xml, or a list of length 4 parsed elements (usually data.frame's).
+}
+\description{
+Returns only facet items
+}
+\details{
+A number of fields can be specified multiple times, in which case you can separate
+them by commas, like \code{facet.field='journal,subject'}. Those fields are:
+\itemize{
+ \item facet.field
+ \item facet.query
+ \item facet.date
+ \item facet.date.other
+ \item facet.date.include
+ \item facet.range
+ \item facet.range.other
+ \item facet.range.include
+ \item facet.pivot
+}
+
+\strong{Options for some parameters}:
+
+\strong{facet.sort}: This param determines the ordering of the facet field constraints.
+\itemize{
+  \item {count} Sort the constraints by count (highest count first)
+  \item {index} Return the constraints sorted in their index order (lexicographic
+  by indexed term). For terms in the ASCII range, this will be alphabetically sorted.
+}
+The default is count if facet.limit is greater than 0, index otherwise. This
+parameter can be specified on a per field basis.
+
+\strong{facet.method}:
+This parameter indicates what type of algorithm/method to use when faceting a field.
+\itemize{
+  \item {enum} Enumerates all terms in a field, calculating the set intersection of
+  documents that match the term with documents that match the query. This was the
+  default (and only) method for faceting multi-valued fields prior to Solr 1.4.
+  \item {fc} (Field Cache) The facet counts are calculated by iterating over documents
+  that match the query and summing the terms that appear in each document. This was
+  the default method for single valued fields prior to Solr 1.4.
+  \item {fcs} (Field Cache per Segment) works the same as fc except the underlying
+  cache data structure is built for each segment of the index individually
+}
+The default value is fc (except for BoolField which uses enum) since it tends to use
+less memory and is faster than the enumeration method when a field has many unique
+terms in the index. For indexes that are changing rapidly in NRT situations, fcs may
+be a better choice because it reduces the overhead of building the cache structures
+on the first request and/or warming queries when opening a new searcher -- but tends
+to be somewhat slower than fc for subsequent requests against the same searcher. This
+parameter can be specified on a per field basis.
+
+\strong{facet.date.other}: This param indicates that in addition to the counts for each date
+range constraint between facet.date.start and facet.date.end, counts should also be
+computed for...
+\itemize{
+  \item {before} All records with field values lower than the lower bound of the first
+  range
+  \item {after} All records with field values greater than the upper bound of the
+  last range
+  \item {between} All records with field values between the start and end bounds
+  of all ranges
+  \item {none} Compute none of this information
+  \item {all} Shortcut for before, between, and after
+}
+This parameter can be specified on a per field basis. In addition to the all option,
+this parameter can be specified multiple times to indicate multiple choices -- but
+none will override all other options.
+
+\strong{facet.date.include}: By default, the ranges used to compute date faceting between
+facet.date.start and facet.date.end are all inclusive of both endpoints, while
+the "before" and "after" ranges are not inclusive. This behavior can be modified
+by the facet.date.include param, which can be any combination of the following
+options...
+\itemize{
+  \item{lower} All gap based ranges include their lower bound
+  \item{upper} All gap based ranges include their upper bound
+  \item{edge} The first and last gap ranges include their edge bounds (ie: lower
+  for the first one, upper for the last one) even if the corresponding upper/lower
+  option is not specified
+  \item{outer} The "before" and "after" ranges will be inclusive of their bounds,
+  even if the first or last ranges already include those boundaries.
+  \item{all} Shorthand for lower, upper, edge, outer
+}
+This parameter can be specified on a per field basis. This parameter can be specified
+multiple times to indicate multiple choices.
+
+\strong{facet.range.other}: This param indicates that in addition to the counts for each range
+constraint between facet.range.start and facet.range.end, counts should also be
+computed for...
+\itemize{
+  \item{before} All records with field values lower than the lower bound of the first
+  range
+  \item{after} All records with field values greater than the upper bound of the
+  last range
+  \item{between} All records with field values between the start and end bounds
+  of all ranges
+  \item{none} Compute none of this information
+  \item{all} Shortcut for before, between, and after
+}
+This parameter can be specified on a per field basis. In addition to the all option,
+this parameter can be specified multiple times to indicate multiple choices -- but
+none will override all other options.
+
+\strong{facet.range.include}: By default, the ranges used to compute range faceting between
+facet.range.start and facet.range.end are inclusive of their lower bounds and
+exclusive of the upper bounds. The "before" range is exclusive and the "after"
+range is inclusive. This default, equivalent to lower below, will not result in
+double counting at the boundaries. This behavior can be modified by the
+facet.range.include param, which can be any combination of the following options...
+\itemize{
+  \item{lower} All gap based ranges include their lower bound
+  \item{upper} All gap based ranges include their upper bound
+  \item{edge} The first and last gap ranges include their edge bounds (ie: lower
+  for the first one, upper for the last one) even if the corresponding upper/lower
+  option is not specified
+  \item{outer} The "before" and "after" ranges will be inclusive of their bounds,
+  even if the first or last ranges already include those boundaries.
+  \item{all} Shorthand for lower, upper, edge, outer
+}
+Can be specified on a per field basis. Can be specified multiple times to indicate
+multiple choices. If you want to ensure you don't double-count, don't choose both
+lower & upper, don't choose outer, and don't choose all.
+}
+\examples{
+\dontrun{
+# connect
+solr_connect('http://api.plos.org/search')
+
+# Facet on a single field
+solr_facet(q='*:*', facet.field='journal')
+
+# Facet on multiple fields
+solr_facet(q='alcohol', facet.field=c('journal','subject'))
+
+# Using mincount
+solr_facet(q='alcohol', facet.field='journal', facet.mincount='500')
+
+# Using facet.query to get counts
+solr_facet(q='*:*', facet.field='journal', facet.query=c('cell','bird'))
+
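+# Sorting and limiting facet constraints (a sketch; facet.sort options
+# are described in Details)
+solr_facet(q='*:*', facet.field='journal', facet.sort='index', facet.limit=5)
+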
+# Using facet.pivot to simulate SQL group by counts
+solr_facet(q='alcohol', facet.pivot='journal,subject',
+             facet.pivot.mincount=10)
+## two or more fields are required - you can pass in as a single character string
+solr_facet(facet.pivot = "journal,subject", facet.limit =  3)
+## Or, pass in as a vector of length 2 or greater
+solr_facet(facet.pivot = c("journal", "subject"), facet.limit =  3)
+
+# Date faceting
+solr_facet(q='*:*', facet.date='publication_date',
+facet.date.start='NOW/DAY-5DAYS', facet.date.end='NOW', facet.date.gap='+1DAY')
+## two variables
+solr_facet(q='*:*', facet.date=c('publication_date', 'timestamp'),
+facet.date.start='NOW/DAY-5DAYS', facet.date.end='NOW', facet.date.gap='+1DAY')
+
+# Range faceting
+solr_facet(q='*:*', facet.range='counter_total_all',
+facet.range.start=5, facet.range.end=1000, facet.range.gap=10)
+
+# Range faceting with > 1 field, same settings
+solr_facet(q='*:*', facet.range=c('counter_total_all','alm_twitterCount'),
+facet.range.start=5, facet.range.end=1000, facet.range.gap=10)
+
+# Range faceting with > 1 field, different settings
+solr_facet(q='*:*', facet.range=c('counter_total_all','alm_twitterCount'),
+f.counter_total_all.facet.range.start=5, f.counter_total_all.facet.range.end=1000,
+f.counter_total_all.facet.range.gap=10, f.alm_twitterCount.facet.range.start=5,
+f.alm_twitterCount.facet.range.end=1000, f.alm_twitterCount.facet.range.gap=10)
+
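+# Range faceting with include/hardend options (a sketch; see Details for
+# what facet.range.include and facet.range.hardend control)
+solr_facet(q='*:*', facet.range='counter_total_all',
+facet.range.start=5, facet.range.end=1000, facet.range.gap=10,
+facet.range.include='lower', facet.range.hardend='true')
+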
+# Get raw json or xml
+## json
+solr_facet(q='*:*', facet.field='journal', raw=TRUE)
+## xml
+solr_facet(q='*:*', facet.field='journal', raw=TRUE, wt='xml')
+
+# Get raw data back, and parse later, same as what goes on internally if
+# raw=FALSE (Default)
+out <- solr_facet(q='*:*', facet.field='journal', raw=TRUE)
+solr_parse(out)
+out <- solr_facet(q='*:*', facet.field='journal', raw=TRUE,
+   wt='xml')
+solr_parse(out)
+
+# Using the USGS BISON API (https://bison.usgs.gov/#solr)
+## The occurrence endpoint
+solr_connect("https://bison.usgs.gov/solr/occurrences/select")
+solr_facet(q='*:*', facet.field='year')
+solr_facet(q='*:*', facet.field='computedStateFips')
+
+# using a proxy
+# prox <- list(url = "54.195.48.153", port = 8888)
+# solr_connect(url = 'http://api.plos.org/search', proxy = prox)
+# solr_facet(facet.field='journal', callopts=verbose())
+}
+}
+\references{
+See \url{http://wiki.apache.org/solr/SimpleFacetParameters} for
+more information on faceting.
+}
+\seealso{
+\code{\link{solr_search}}, \code{\link{solr_highlight}}, \code{\link{solr_parse}}
+}
+
diff --git a/man/solr_get.Rd b/man/solr_get.Rd
new file mode 100644
index 0000000..6696787
--- /dev/null
+++ b/man/solr_get.Rd
@@ -0,0 +1,52 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/solr_get.R
+\name{solr_get}
+\alias{solr_get}
+\title{Real time get}
+\usage{
+solr_get(ids, name, fl = NULL, wt = "json", raw = FALSE, ...)
+}
+\arguments{
+\item{ids}{Document IDs, one or more in a vector or list}
+
+\item{name}{(character) A collection or core name. Required.}
+
+\item{fl}{Fields to return, can be a character vector like \code{c('id', 'title')},
+or a single character vector with one or more comma separated names, like
+\code{'id,title'}}
+
+\item{wt}{(character) One of json (default) or xml. Data type returned.
+If json, uses \code{\link[jsonlite]{fromJSON}} to parse. If xml, uses
+\code{\link[xml2]{read_xml}} to parse.}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data in format specified by
+\code{wt} param}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Get documents by id
+}
+\details{
+We use json internally as the data interchange format for this function.
+}
+\examples{
+\dontrun{
+solr_connect()
+
+# add some documents first
+ss <- list(list(id = 1, price = 100), list(id = 2, price = 500))
+add(ss, name = "gettingstarted")
+
+# Now, get documents by id
+solr_get(ids = 1, "gettingstarted")
+solr_get(ids = 2, "gettingstarted")
+solr_get(ids = c(1, 2), "gettingstarted")
+solr_get(ids = "1,2", "gettingstarted")
+
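+# Return only certain fields (fl accepts a vector or a comma-separated string)
+solr_get(ids = c(1, 2), "gettingstarted", fl = 'id')
+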
+# Get raw JSON
+solr_get(ids = 1, "gettingstarted", raw = TRUE, wt = "json")
+solr_get(ids = 1, "gettingstarted", raw = TRUE, wt = "xml")
+}
+}
+
diff --git a/man/solr_group.Rd b/man/solr_group.Rd
new file mode 100644
index 0000000..c830a79
--- /dev/null
+++ b/man/solr_group.Rd
@@ -0,0 +1,166 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/solr_group.r
+\name{solr_group}
+\alias{solr_group}
+\title{Grouped search}
+\usage{
+solr_group(name = NULL, q = "*:*", start = 0, rows = NA, sort = NA,
+  fq = NA, fl = NULL, wt = "json", key = NA, group.field = NA,
+  group.limit = NA, group.offset = NA, group.sort = NA, group.main = NA,
+  group.ngroups = NA, group.cache.percent = NA, group.query = NA,
+  group.format = NA, group.func = NA, callopts = list(), raw = FALSE,
+  parsetype = "df", concat = ",", ...)
+}
+\arguments{
+\item{name}{Name of a collection or core. Or leave as \code{NULL} if not needed.}
+
+\item{q}{Query terms, defaults to '*:*', or everything.}
+
+\item{start}{[number] The offset into the list of groups.}
+
+\item{rows}{[number] The number of groups to return. Defaults to 10.}
+
+\item{sort}{How to sort the groups relative to each other. For example, 
+sort=popularity desc will cause the groups to be sorted according to the highest 
+popularity doc in each group. Defaults to "score desc".}
+
+\item{fq}{Filter query, this does not affect the search, only what gets returned}
+
+\item{fl}{Fields to return}
+
+\item{wt}{(character) Data type returned, defaults to 'json'. One of json or xml. If json, 
+uses \code{\link[jsonlite]{fromJSON}} to parse. If xml, uses \code{\link[XML]{xmlParse}} to 
+parse. csv is only supported in \code{\link{solr_search}} and \code{\link{solr_all}}.}
+
+\item{key}{API key, if needed.}
+
+\item{group.field}{[fieldname] Group based on the unique values of a field. The 
+field must currently be single-valued and must be either indexed, or be another 
+field type that has a value source and works in a function query - such as 
+ExternalFileField. Note: for Solr 3.x versions the field must be a string-like 
+field such as StrField or TextField, otherwise an http status 400 is returned.}
+
+\item{group.limit}{[number] The number of results (documents) to return for each 
+group. Defaults to 1.}
+
+\item{group.offset}{[number] The offset into the document list of each group.}
+
+\item{group.sort}{How to sort documents within a single group. Defaults 
+to the same value as the sort parameter.}
+
+\item{group.main}{(logical) If true, the result of the last field grouping command 
+is used as the main result list in the response, using group.format=simple}
+
+\item{group.ngroups}{(logical) If true, includes the number of groups that have 
+matched the query. Default is false. Solr 4.1 WARNING: If this parameter is set 
+to true in a sharded environment, all the documents that belong to the same group 
+have to be located in the same shard, otherwise the count will be incorrect. If you 
+are using SolrCloud, consider using "custom hashing"}
+
+\item{group.cache.percent}{[0-100] If > 0, enables the grouping cache. Grouping is 
+executed with two searches internally; this option caches the second search. A value of 
+0 disables the grouping cache. Default is 0. Tests have shown that this cache only improves search 
+time with boolean queries, wildcard queries and fuzzy queries. For simple queries like 
+a term query or a match all query this cache has a negative impact on performance}
+
+\item{group.query}{[query] Return a single group of documents that also match the 
+given query.}
+
+\item{group.format}{One of grouped or simple. If simple, the grouped documents are 
+presented in a single flat list. The start and rows parameters refer to numbers of 
+documents instead of numbers of groups.}
+
+\item{group.func}{[function query] Group based on the unique values of a function 
+query. This parameter is only supported on Solr 4.0 and later.}
+
+\item{callopts}{Call options passed on to httr::GET}
+
+\item{raw}{(logical) If TRUE, returns raw data in format specified by wt param}
+
+\item{parsetype}{(character) One of 'list' or 'df'}
+
+\item{concat}{(character) Character to concatenate elements longer than length 1. 
+Note that this only works reliably when data format is json (wt='json'). The parsing
+is more complicated in XML format, but you can do that on your own.}
+
+\item{...}{Further args.}
+}
+\value{
+XML, JSON, a list, or data.frame
+}
+\description{
+Returns only group items
+}
+\examples{
+\dontrun{
+# connect
+solr_connect('http://api.plos.org/search')
+
+# Basic group query
+solr_group(q='ecology', group.field='journal', group.limit=3,
+  fl=c('id','score'))
+solr_group(q='ecology', group.field='journal', group.limit=3,
+  fl='article_type')
+
+# Different ways to sort (notice the difference between sort and group.sort)
+# note that you can only sort on a field if you return that field
+solr_group(q='ecology', group.field='journal', group.limit=3,
+   fl=c('id','score'))
+solr_group(q='ecology', group.field='journal', group.limit=3,
+   fl=c('id','score','alm_twitterCount'), group.sort='alm_twitterCount desc')
+solr_group(q='ecology', group.field='journal', group.limit=3,
+   fl=c('id','score','alm_twitterCount'), sort='score asc',
+   group.sort='alm_twitterCount desc')
+
+# Two group.field values
+out <- solr_group(q='ecology', group.field=c('journal','article_type'),
+  group.limit=3,
+  fl='id', raw=TRUE)
+solr_parse(out)
+solr_parse(out, 'df')
+
+# Get two groups, one with alm_twitterCount of 0-10, and another group
+# with 10 to infinity
+solr_group(q='ecology', group.limit=3, fl=c('id','alm_twitterCount'),
+ group.query=c('alm_twitterCount:[0 TO 10]','alm_twitterCount:[10 TO *]'))
+
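+# Include the number of groups matching the query (group.ngroups; a sketch --
+# see the group.ngroups parameter notes about sharded setups)
+solr_group(q='ecology', group.field='journal', group.limit=3,
+  fl=c('id','score'), group.ngroups='true')
+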
+# Use of group.format and group.main
+## The raw data structures of these two calls are slightly different, but
+## the parsing inside the function outputs the same results. You can
+## of course set raw=TRUE to get back what the data actually look like
+solr_group(q='ecology', group.field='journal', group.limit=3,
+  fl=c('id','score'), group.format='simple')
+solr_group(q='ecology', group.field='journal', group.limit=3,
+  fl=c('id','score'), group.format='grouped')
+solr_group(q='ecology', group.field='journal', group.limit=3,
+  fl=c('id','score'), group.format='grouped', group.main='true')
+
+# xml back
+solr_group(q='ecology', group.field='journal', group.limit=3,
+  fl=c('id','score'), wt = "xml")
+solr_group(q='ecology', group.field='journal', group.limit=3,
+  fl=c('id','score'), wt = "xml", parsetype = "list")
+res <- solr_group(q='ecology', group.field='journal', group.limit=3,
+  fl=c('id','score'), wt = "xml", raw = TRUE)
+library("xml2")
+xml2::read_xml(unclass(res))
+
+solr_group(q='ecology', group.field='journal', group.limit=3,
+  fl='article_type', wt = "xml")
+solr_group(q='ecology', group.field='journal', group.limit=3,
+  fl='article_type', wt = "xml", parsetype = "list")
+
+# examples with Dryad's Solr instance
+solr_connect("http://datadryad.org/solr/search/select")
+solr_group(q='ecology', group.field='journal', group.limit=3,
+  fl='article_type')
+}
+}
+\references{
+See \url{http://wiki.apache.org/solr/FieldCollapsing} for more
+information.
+}
+\seealso{
+\code{\link{solr_highlight}}, \code{\link{solr_facet}}
+}
+
diff --git a/man/solr_highlight.Rd b/man/solr_highlight.Rd
new file mode 100644
index 0000000..df696f3
--- /dev/null
+++ b/man/solr_highlight.Rd
@@ -0,0 +1,221 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/solr_highlight.r
+\name{solr_highlight}
+\alias{solr_highlight}
+\title{Highlighting search}
+\usage{
+solr_highlight(name = NULL, q, hl.fl = NULL, hl.snippets = NULL,
+  hl.fragsize = NULL, hl.q = NULL, hl.mergeContiguous = NULL,
+  hl.requireFieldMatch = NULL, hl.maxAnalyzedChars = NULL,
+  hl.alternateField = NULL, hl.maxAlternateFieldLength = NULL,
+  hl.preserveMulti = NULL, hl.maxMultiValuedToExamine = NULL,
+  hl.maxMultiValuedToMatch = NULL, hl.formatter = NULL,
+  hl.simple.pre = NULL, hl.simple.post = NULL, hl.fragmenter = NULL,
+  hl.fragListBuilder = NULL, hl.fragmentsBuilder = NULL,
+  hl.boundaryScanner = NULL, hl.bs.maxScan = NULL, hl.bs.chars = NULL,
+  hl.bs.type = NULL, hl.bs.language = NULL, hl.bs.country = NULL,
+  hl.useFastVectorHighlighter = NULL, hl.usePhraseHighlighter = NULL,
+  hl.highlightMultiTerm = NULL, hl.regex.slop = NULL,
+  hl.regex.pattern = NULL, hl.regex.maxAnalyzedChars = NULL, start = 0,
+  rows = NULL, wt = "json", raw = FALSE, key = NULL,
+  callopts = list(), fl = "DOES_NOT_EXIST", fq = NULL,
+  parsetype = "list")
+}
+\arguments{
+\item{name}{Name of a collection or core. Or leave as \code{NULL} if not needed.}
+
+\item{q}{Query terms. See examples.}
+
+\item{hl.fl}{A comma-separated list of fields for which to generate highlighted snippets. 
+If left blank, the fields highlighted for the LuceneQParser are the defaultSearchField 
+(or the df param if used) and for the DisMax parser the qf fields are used. A '*' can 
+be used to match field globs, e.g. 'text_*' or even '*' to highlight on all fields where 
+highlighting is possible. When using '*', consider adding hl.requireFieldMatch=TRUE.}
+
+\item{hl.snippets}{Max no. of highlighted snippets to generate per field. Note: 
+it is possible for any number of snippets from zero to this value to be generated. 
+This parameter accepts per-field overrides. Default: 1.}
+
+\item{hl.fragsize}{The size, in characters, of the snippets (aka fragments) created by 
+the highlighter. In the original Highlighter, "0" indicates that the whole field value 
+should be used with no fragmenting. See 
+\url{http://wiki.apache.org/solr/HighlightingParameters} for more info.}
+
+\item{hl.q}{Set a query request to be highlighted. It overrides q parameter for 
+highlighting. Solr query syntax is acceptable for this parameter.}
+
+\item{hl.mergeContiguous}{Collapse contiguous fragments into a single fragment. "true" 
+indicates contiguous fragments will be collapsed into single fragment. This parameter 
+accepts per-field overrides. This parameter makes sense for the original Highlighter 
+only. Default: FALSE.}
+
+\item{hl.requireFieldMatch}{If TRUE, then a field will only be highlighted if the 
+query matched in this particular field (normally, terms are highlighted in all 
+requested fields regardless of which field matched the query). This only takes effect 
+if "hl.usePhraseHighlighter" is TRUE. Default: FALSE.}
+
+\item{hl.maxAnalyzedChars}{How many characters into a document to look for suitable 
+snippets. This parameter makes sense for the original Highlighter only. Default: 51200. 
+You can assign a large value to this parameter and use hl.fragsize=0 to return 
+highlighting in large fields that have size greater than 51200 characters.}
+
+\item{hl.alternateField}{If a snippet cannot be generated (due to no terms matching), 
+you can specify a field to use as the fallback. This parameter accepts per-field overrides.}
+
+\item{hl.maxAlternateFieldLength}{If hl.alternateField is specified, this parameter 
+specifies the maximum number of characters of the field to return. Any value less than or 
+equal to 0 means unlimited. Default: unlimited.}
+
+\item{hl.preserveMulti}{Preserve order of values in a multiValued list. Default: FALSE.}
+
+\item{hl.maxMultiValuedToExamine}{When highlighting a multiValued field, stop examining 
+the individual entries after looking at this many of them. Will potentially return 0 
+snippets if this limit is reached before any snippets are found. If maxMultiValuedToMatch 
+is also specified, whichever limit is hit first will terminate looking for more. 
+Default: Integer.MAX_VALUE}
+
+\item{hl.maxMultiValuedToMatch}{When highlighting a multiValued field, stop examining 
+the individual entries after this many matches are found. If 
+maxMultiValuedToExamine is also specified, whichever limit is hit first will terminate 
+looking for more. Default: Integer.MAX_VALUE}
+
+\item{hl.formatter}{Specify a formatter for the highlight output. Currently the only 
+legal value is "simple", which surrounds a highlighted term with a customizable pre- and 
+post text snippet. This parameter accepts per-field overrides. This parameter makes 
+sense for the original Highlighter only.}
+
+\item{hl.simple.pre}{The text which appears before a highlighted term when using 
+the simple formatter. This parameter accepts per-field overrides. The default value is 
+"<em>". This parameter makes sense for the original Highlighter only. Use 
+hl.tag.pre and hl.tag.post for FastVectorHighlighter (see example under hl.fragmentsBuilder)}
+
+\item{hl.simple.post}{The text which appears after a highlighted term when using 
+the simple formatter. This parameter accepts per-field overrides. The default value is 
+"</em>". This parameter makes sense for the original Highlighter only. Use 
+hl.tag.pre and hl.tag.post for FastVectorHighlighter (see example under hl.fragmentsBuilder)}
+
+\item{hl.fragmenter}{Specify a text snippet generator for highlighted text. The standard 
+fragmenter is gap (which is so called because it creates fixed-sized fragments with gaps 
+for multi-valued fields). Another option is regex, which tries to create fragments that 
+"look like" a certain regular expression. This parameter accepts per-field overrides. 
+Default: "gap"}
+
+\item{hl.fragListBuilder}{Specify the name of SolrFragListBuilder.  This parameter 
+makes sense for FastVectorHighlighter only. To create a fragSize=0 with the 
+FastVectorHighlighter, use the SingleFragListBuilder. This field supports per-field 
+overrides.}
+
+\item{hl.fragmentsBuilder}{Specify the name of SolrFragmentsBuilder. This parameter makes 
+sense for FastVectorHighlighter only.}
+
+\item{hl.boundaryScanner}{Configures how the boundaries of fragments are determined. By 
+default, boundaries will split at the character level, creating a fragment such as "uick 
+brown fox jumps over the la". Valid entries are breakIterator or simple, with breakIterator 
+being the most commonly used. This parameter makes sense for FastVectorHighlighter only.}
+
+\item{hl.bs.maxScan}{Specify the length of characters to be scanned by SimpleBoundaryScanner. 
+Default: 10.  This parameter makes sense for FastVectorHighlighter only.}
+
+\item{hl.bs.chars}{Specify the boundary characters, used by SimpleBoundaryScanner. 
+This parameter makes sense for FastVectorHighlighter only.}
+
+\item{hl.bs.type}{Specify one of CHARACTER, WORD, SENTENCE and LINE, used by 
+BreakIteratorBoundaryScanner. Default: WORD. This parameter makes sense for 
+FastVectorHighlighter only.}
+
+\item{hl.bs.language}{Specify the language for Locale that is used by 
+BreakIteratorBoundaryScanner. This parameter makes sense for FastVectorHighlighter only. 
+Valid entries take the form of ISO 639-1 strings.}
+
+\item{hl.bs.country}{Specify the country for Locale that is used by 
+BreakIteratorBoundaryScanner. This parameter makes sense for FastVectorHighlighter only. 
+Valid entries take the form of ISO 3166-1 alpha-2 strings.}
+
+\item{hl.useFastVectorHighlighter}{Use FastVectorHighlighter. FastVectorHighlighter 
+requires that the field has termVectors=on, termPositions=on and termOffsets=on. This 
+parameter accepts per-field overrides. Default: FALSE}
+
+\item{hl.usePhraseHighlighter}{Use SpanScorer to highlight phrase terms only when 
+they appear within the query phrase in the document. Default: TRUE.}
+
+\item{hl.highlightMultiTerm}{If the SpanScorer is also being used, enables highlighting 
+for range/wildcard/fuzzy/prefix queries. Default: FALSE. This parameter makes sense 
+for the original Highlighter only.}
+
+\item{hl.regex.slop}{Factor by which the regex fragmenter can stray from the ideal 
+fragment size (given by hl.fragsize) to accommodate the regular expression. For 
+instance, a slop of 0.2 with fragsize of 100 should yield fragments between 80 
+and 120 characters in length. It is usually good to provide a slightly smaller 
+fragsize when using the regex fragmenter. Default: .6. This parameter makes sense 
+for the original Highlighter only.}
+
+\item{hl.regex.pattern}{The regular expression for fragmenting. This could be 
+used to extract sentences (see example solrconfig.xml). This parameter makes sense 
+for the original Highlighter only.}
+
+\item{hl.regex.maxAnalyzedChars}{Only analyze this many characters from a field 
+when using the regex fragmenter (after which, the fragmenter produces fixed-sized 
+fragments). Applying a complicated regex to a huge field is expensive. 
+Default: 10000. This parameter makes sense for the original Highlighter only.}
+
+\item{start}{Record to start at, default to beginning.}
+
+\item{rows}{Number of records to return.}
+
+\item{wt}{(character) Data type returned, defaults to 'json'. One of json or xml. If json, 
+uses \code{\link[jsonlite]{fromJSON}} to parse. If xml, uses \code{\link[XML]{xmlParse}} to 
+parse. csv is only supported in \code{\link{solr_search}} and \code{\link{solr_all}}.}
+
+\item{raw}{(logical) If TRUE, raw json or xml returned. If FALSE (default),
+parsed data returned.}
+
+\item{key}{API key, if needed.}
+
+\item{callopts}{Call options passed on to httr::GET}
+
+\item{fl}{Fields to return}
+
+\item{fq}{Filter query, this does not affect the search, only what gets returned}
+
+\item{parsetype}{One of 'list' or 'df' (data.frame)}
+}
+\value{
+XML, JSON, a list, or data.frame
+}
+\description{
+Returns only highlight items
+}
+\details{
+The \code{verbose} parameter was dropped. See \code{\link{solr_connect}}, which
+can be used to set verbose status.
+}
+\examples{
+\dontrun{
+# connect
+solr_connect('http://api.plos.org/search')
+
+# highlight search
+solr_highlight(q='alcohol', hl.fl = 'abstract', rows=10)
+solr_highlight(q='alcohol', hl.fl = c('abstract','title'), rows=3)
+
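+# Customize the tags around highlighted terms (a sketch; hl.simple.pre and
+# hl.simple.post apply to the original Highlighter)
+solr_highlight(q='alcohol', hl.fl = 'abstract', hl.simple.pre = '<b>',
+   hl.simple.post = '</b>', rows=2)
+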
+# Raw data back
+## json
+solr_highlight(q='alcohol', hl.fl = 'abstract', rows=10,
+   raw=TRUE)
+## xml
+solr_highlight(q='alcohol', hl.fl = 'abstract', rows=10,
+   raw=TRUE, wt='xml')
+## parse after getting data back
+out <- solr_highlight(q='alcohol', hl.fl = c('abstract','title'), hl.fragsize=30,
+   rows=10, raw=TRUE, wt='xml')
+solr_parse(out, parsetype='df')
+}
+}
+\references{
+See \url{http://wiki.apache.org/solr/HighlightingParameters} for
+more information on highlighting.
+}
+\seealso{
+\code{\link{solr_search}}, \code{\link{solr_facet}}
+}
+
diff --git a/man/solr_mlt.Rd b/man/solr_mlt.Rd
new file mode 100644
index 0000000..9206857
--- /dev/null
+++ b/man/solr_mlt.Rd
@@ -0,0 +1,112 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/solr_mlt.r
+\name{solr_mlt}
+\alias{solr_mlt}
+\title{"more like this" search}
+\usage{
+solr_mlt(name = NULL, q = "*:*", fq = NULL, mlt.count = NULL,
+  mlt.fl = NULL, mlt.mintf = NULL, mlt.mindf = NULL, mlt.minwl = NULL,
+  mlt.maxwl = NULL, mlt.maxqt = NULL, mlt.maxntp = NULL,
+  mlt.boost = NULL, mlt.qf = NULL, fl = NULL, wt = "json", start = 0,
+  rows = NULL, key = NULL, callopts = list(), raw = FALSE,
+  parsetype = "df", concat = ",")
+}
+\arguments{
+\item{name}{Name of a collection or core. Or leave as \code{NULL} if not needed.}
+
+\item{q}{Query terms, defaults to '*:*', or everything.}
+
+\item{fq}{Filter query, this does not affect the search, only what gets returned}
+
+\item{mlt.count}{The number of similar documents to return for each result. Default is 5.}
+
+\item{mlt.fl}{The fields to use for similarity. NOTE: if possible these should have a stored 
+TermVector DEFAULT_FIELD_NAMES = new String[] {"contents"}}
+
+\item{mlt.mintf}{Minimum Term Frequency - the frequency below which terms will be ignored in 
+the source doc. DEFAULT_MIN_TERM_FREQ = 2}
+
+\item{mlt.mindf}{Minimum Document Frequency - words will be ignored if they do not 
+occur in at least this many docs. DEFAULT_MIN_DOC_FREQ = 5}
+
+\item{mlt.minwl}{minimum word length below which words will be ignored. 
+DEFAULT_MIN_WORD_LENGTH = 0}
+
+\item{mlt.maxwl}{maximum word length above which words will be ignored. 
+DEFAULT_MAX_WORD_LENGTH = 0}
+
+\item{mlt.maxqt}{maximum number of query terms that will be included in any generated query. 
+DEFAULT_MAX_QUERY_TERMS = 25}
+
+\item{mlt.maxntp}{maximum number of tokens to parse in each example doc field that is not stored 
+with TermVector support. DEFAULT_MAX_NUM_TOKENS_PARSED = 5000}
+
+\item{mlt.boost}{[true/false] set if the query will be boosted by the interesting term relevance. 
+DEFAULT_BOOST = false}
+
+\item{mlt.qf}{Query fields and their boosts using the same format as that used in 
+DisMaxQParserPlugin. These fields must also be specified in mlt.fl.}
+
+\item{fl}{Fields to return. We force 'id' to be returned so that there is a unique identifier 
+with each record.}
+
+\item{wt}{(character) Data type returned, defaults to 'json'. One of json or xml. If json, 
+uses \code{\link[jsonlite]{fromJSON}} to parse. If xml, uses \code{\link[XML]{xmlParse}} to 
+parse. csv is only supported in \code{\link{solr_search}} and \code{\link{solr_all}}.}
+
+\item{start}{Record to start at, default to beginning.}
+
+\item{rows}{Number of records to return. Defaults to 10.}
+
+\item{key}{API key, if needed.}
+
+\item{callopts}{Call options passed on to httr::GET}
+
+\item{raw}{(logical) If TRUE, returns raw data in format specified by wt param}
+
+\item{parsetype}{(character) One of 'list' or 'df'}
+
+\item{concat}{(character) Character to concatenate elements longer than length 1. 
+Note that this only works reliably when data format is json (wt='json'). The parsing
+is more complicated in XML format, but you can do that on your own.}
+}
+\value{
+XML, JSON, a list, or data.frame
+}
+\description{
+Returns only more like this items
+}
+\details{
+The \code{verbose} parameter was dropped. See \code{\link{solr_connect}}, which
+can be used to set verbose status.
+}
+\examples{
+\dontrun{
+# connect
+solr_connect('http://api.plos.org/search')
+
+# more like this search
+solr_mlt(q='*:*', mlt.count=2, mlt.fl='abstract', fl='score',
+  fq="doc_type:full")
+solr_mlt(q='*:*', rows=2, mlt.fl='title', mlt.mindf=1, mlt.mintf=1,
+  fl='alm_twitterCount')
+solr_mlt(q='title:"ecology" AND body:"cell"', mlt.fl='title', mlt.mindf=1,
+  mlt.mintf=1, fl='counter_total_all', rows=5)
+solr_mlt(q='ecology', mlt.fl='abstract', fl='title', rows=5)
+solr_mlt(q='ecology', mlt.fl='abstract', fl=c('score','eissn'),
+  rows=5)
+solr_mlt(q='ecology', mlt.fl='abstract', fl=c('score','eissn'),
+  rows=5, wt = "xml")
+
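+# Boost the query by interesting term relevance (mlt.boost; illustrative)
+solr_mlt(q='ecology', mlt.fl='abstract', mlt.boost='true', fl='title', rows=3)
+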
+# get raw data, and parse later if needed
+out <- solr_mlt(q='ecology', mlt.fl='abstract', fl='title',
+ rows=2, raw=TRUE)
+library('jsonlite')
+solr_parse(out, "df")
+}
+}
+\references{
+See \url{http://wiki.apache.org/solr/MoreLikeThis} for more
+information.
+}
+
diff --git a/man/solr_parse.Rd b/man/solr_parse.Rd
new file mode 100644
index 0000000..e5c81db
--- /dev/null
+++ b/man/solr_parse.Rd
@@ -0,0 +1,45 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/parsers.R
+\name{solr_parse}
+\alias{solr_parse}
+\alias{solr_parse.sr_all}
+\alias{solr_parse.sr_group}
+\alias{solr_parse.sr_high}
+\alias{solr_parse.sr_mlt}
+\alias{solr_parse.sr_search}
+\alias{solr_parse.sr_stats}
+\title{Parse raw data from solr_search, solr_facet, or solr_highlight.}
+\usage{
+solr_parse(input, parsetype = NULL, concat)
+
+\method{solr_parse}{sr_high}(input, parsetype = "list", concat = ",")
+
+\method{solr_parse}{sr_search}(input, parsetype = "list", concat = ",")
+
+\method{solr_parse}{sr_all}(input, parsetype = "list", concat = ",")
+
+\method{solr_parse}{sr_mlt}(input, parsetype = "list", concat = ",")
+
+\method{solr_parse}{sr_stats}(input, parsetype = "list", concat = ",")
+
+\method{solr_parse}{sr_group}(input, parsetype = "list", concat = ",")
+}
+\arguments{
+\item{input}{Output from solr_facet}
+
+\item{parsetype}{One of 'list' or 'df' (data.frame)}
+
+\item{concat}{Character to concatenate strings by, e.g., ',' (character). Used
+in solr_parse.sr_search only.}
+}
+\description{
+Parse raw data from solr_search, solr_facet, or solr_highlight.
+}
+\details{
+This is the parser used internally in solr_facet, but if you output raw
+data from solr_facet using raw=TRUE, then you can use this function to parse that
+data (a sr_facet S3 object) after the fact to a list of data.frame's for easier
+consumption. The data format type is detected from the attribute "wt" on the
+sr_facet object.
+}
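+\examples{
+\dontrun{
+# A minimal sketch of parsing raw output after the fact (assumes a live
+# connection to the PLOS Solr API, as in the solr_facet examples)
+solr_connect('http://api.plos.org/search')
+out <- solr_facet(q='*:*', facet.field='journal', raw=TRUE)
+solr_parse(out, parsetype='df')
+}
+}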
+
diff --git a/man/solr_search.Rd b/man/solr_search.Rd
new file mode 100644
index 0000000..cd6c210
--- /dev/null
+++ b/man/solr_search.Rd
@@ -0,0 +1,202 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/solr_search.r
+\name{solr_search}
+\alias{solr_search}
+\title{Solr search}
+\usage{
+solr_search(name = NULL, q = "*:*", sort = NULL, start = NULL,
+  rows = NULL, pageDoc = NULL, pageScore = NULL, fq = NULL, fl = NULL,
+  defType = NULL, timeAllowed = NULL, qt = NULL, wt = "json",
+  NOW = NULL, TZ = NULL, echoHandler = NULL, echoParams = NULL,
+  key = NULL, callopts = list(), raw = FALSE, parsetype = "df",
+  concat = ",", ...)
+}
+\arguments{
+\item{name}{Name of a collection or core. Or leave as \code{NULL} if not needed.}
+
+\item{q}{Query terms, defaults to '*:*', or everything.}
+
+\item{sort}{Field to sort on. You can specify ascending (e.g., score asc) or 
+descending (e.g., score desc), sort by two fields (e.g., score desc, price asc), 
+or sort by a function (e.g., sum(x_f, y_f) desc, which sorts by the sum of 
+x_f and y_f in a descending order).}
+
+\item{start}{Record to start at, default to beginning.}
+
+\item{rows}{Number of records to return. Default: 10.}
+
+\item{pageDoc}{If you expect to be paging deeply into the results (say beyond page 10, 
+assuming rows=10) and you are sorting by score, you may wish to add the pageDoc 
+and pageScore parameters to your request. These two parameters tell Solr (and Lucene) 
+what the last result (Lucene internal docid and score) of the previous page was, 
+so that when scoring the query for the next set of pages, it can ignore any results 
+that occur higher than that item. To get the Lucene internal doc id, you will need 
+to add [docid] to the &fl list. 
+e.g., q=*:*&start=10&pageDoc=5&pageScore=1.345&fl=[docid],score}
+
+\item{pageScore}{See pageDoc notes.}
+
+\item{fq}{Filter query, this does not affect the search, only what gets returned. 
+This parameter can accept multiple items in a list or vector. You can't pass more than 
+one parameter of the same name, so we get around it by passing multiple queries 
+and we parse internally}
+
+\item{fl}{Fields to return, can be a character vector like \code{c('id', 'title')}, 
+or a single character vector with one or more comma separated names, like 
+\code{'id,title'}}
+
+\item{defType}{Specify the query parser to use with this request.}
+
+\item{timeAllowed}{The time allowed for a search to finish. This value only applies 
+to the search and not to requests in general. Time is in milliseconds. Values <= 0 
+mean no time restriction. Partial results may be returned (if there are any).}
+
+\item{qt}{Which query handler to use (e.g., dismax).}
+
+\item{wt}{(character) One of json (default), xml, or csv. Data type returned.
+If json, uses \code{\link[jsonlite]{fromJSON}} to parse. If xml, uses
+\code{\link[xml2]{read_xml}} to parse. If csv, uses \code{\link{read.table}} to parse.
+\code{wt=csv} gives the fastest performance, at least in all the cases we have
+tested.}
+
+\item{NOW}{Set a fixed time for evaluating date-based expressions}
+
+\item{TZ}{Time zone, you can override the default.}
+
+\item{echoHandler}{If \code{TRUE}, Solr places the name of the handler used in the 
+response to the client for debugging purposes. Default:}
+
+\item{echoParams}{The echoParams parameter tells Solr what kinds of Request 
+parameters should be included in the response for debugging purposes, legal values 
+include:
+\itemize{
+ \item none - don't include any request parameters for debugging
+ \item explicit - include the parameters explicitly specified by the client in the request
+ \item all - include all parameters involved in this request, either specified explicitly 
+ by the client, or implicit because of the request handler configuration.
+}}
+
+\item{key}{API key, if needed.}
+
+\item{callopts}{Call options passed on to httr::GET}
+
+\item{raw}{(logical) If TRUE, returns raw data in format specified by wt param}
+
+\item{parsetype}{(character) One of 'list' or 'df'}
+
+\item{concat}{(character) Character to concatenate elements longer than length 1. 
+Note that this only works reliably when data format is json (wt='json'). The parsing
+is more complicated in XML format, but you can do that on your own.}
+
+\item{...}{Further args.}
+}
+\value{
+XML, JSON, a list, or data.frame
+}
+\description{
+Returns only matched documents, and doesn't return other items,
+including facets, groups, mlt, stats, and highlights.
+}
+\note{
+SOLR v1.2 was first version to support csv. See
+\url{https://issues.apache.org/jira/browse/SOLR-66}
+}
+\examples{
+\dontrun{
+# connect
+solr_connect('http://api.plos.org/search')
+
+# search
+solr_search(q='*:*', rows=2, fl='id')
+
+# Search for word ecology in title and cell in the body
+solr_search(q='title:"ecology" AND body:"cell"', fl='title', rows=5)
+
+# Search for the word "cell" but not the word "lines" in the title field
+solr_search(q='title:"cell" -title:"lines"', fl='title', rows=5)
+
+# Wildcards
+## Search for word that starts with "cell" in the title field
+solr_search(q='title:"cell*"', fl='title', rows=5)
+
+# Proximity searching
+## Search for the words "sports" and "alcohol" within seven words of each other
+solr_search(q='everything:"sports alcohol"~7', fl='abstract', rows=3)
+
+# Range searches
+## Search for articles with Twitter count between 5 and 50
+solr_search(q='*:*', fl=c('alm_twitterCount','id'), fq='alm_twitterCount:[5 TO 50]',
+rows=10)
+
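+# Sorting
+## sort by Twitter count, descending (a sketch; assumes the PLOS connection above)
+solr_search(q='*:*', fl=c('id','alm_twitterCount'), sort='alm_twitterCount desc', rows=5)
+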
+# Boosts
+## Assign higher boost to title matches than to abstract matches (compare the two calls)
+solr_search(q='title:"cell" abstract:"science"', fl='title', rows=3)
+solr_search(q='title:"cell"^1.5 AND abstract:"science"', fl='title', rows=3)
+
+# FunctionQuery queries
+## This kind of query allows you to use the actual values of fields to calculate
+## relevancy scores for returned documents
+
+## Here, we search on the product of counter_total_all and alm_twitterCount
+## metrics for articles in PLOS Journals
+solr_search(q="{!func}product($v1,$v2)", v1 = 'sqrt(counter_total_all)',
+   v2 = 'log(alm_twitterCount)', rows=5, fl=c('id','title'), fq='doc_type:full')
+
+## here, search on the product of counter_total_all and alm_twitterCount, using
+## a new temporary field "_val_"
+solr_search(q='_val_:"product(counter_total_all,alm_twitterCount)"',
+   rows=5, fl=c('id','title'), fq='doc_type:full')
+
+## papers with most citations
+solr_search(q='_val_:"max(counter_total_all)"',
+   rows=5, fl=c('id','counter_total_all'), fq='doc_type:full')
+
+## papers with most tweets
+solr_search(q='_val_:"max(alm_twitterCount)"',
+   rows=5, fl=c('id','alm_twitterCount'), fq='doc_type:full')
+
+## using wt = csv
+solr_search(q='*:*', rows=50, fl=c('id','score'), fq='doc_type:full', wt="csv")
+solr_search(q='*:*', rows=50, fl=c('id','score'), fq='doc_type:full')
+
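+# Multiple filter queries: fq accepts a list or vector (a sketch)
+solr_search(q='*:*', fq=c('doc_type:full', 'alm_twitterCount:[10 TO *]'),
+   fl=c('id','alm_twitterCount'), rows=5)
+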
+# using a proxy
+# prox <- list(url = "186.249.1.146", port = 80)
+# solr_connect(url = 'http://api.plos.org/search', proxy = prox)
+# solr_search(q='*:*', rows=2, fl='id', callopts=verbose())
+## vs. w/o a proxy
+# solr_connect(url = 'http://api.plos.org/search')
+# solr_search(q='*:*', rows=2, fl='id', callopts=verbose())
+
+# Pass on curl options to modify request
+solr_connect(url = 'http://api.plos.org/search')
+## verbose
+solr_search(q='*:*', rows=2, fl='id', callopts=verbose())
+## progress
+res <- solr_search(q='*:*', rows=200, fl='id', callopts=progress())
+## timeout
+# solr_search(q='*:*', rows=200, fl='id', callopts=timeout(0.01))
+## combine curl options using the c() function
+opts <- c(verbose(), progress())
+res <- solr_search(q='*:*', rows=200, fl='id', callopts=opts)
+
+## Searching Europeana
+### They don't return the expected Solr output, so we can get raw data, then parse separately
+solr_connect('http://europeana.eu/api/v2/search.json')
+key <- getOption("eu_key")
+dat <- solr_search(query='*:*', rows=5, wskey = key, raw=TRUE)
+library('jsonlite')
+head( jsonlite::fromJSON(dat)$items )
+
+# Connect to a local Solr instance
+## not run - replace with your local Solr URL and collection/core name
+# solr_connect("localhost:8889")
+# solr_search("gettingstarted")
+}
+}
+\references{
+See \url{http://wiki.apache.org/solr/#Search_and_Indexing} for more information.
+}
+\seealso{
+\code{\link{solr_highlight}}, \code{\link{solr_facet}}
+}
+
diff --git a/man/solr_stats.Rd b/man/solr_stats.Rd
new file mode 100644
index 0000000..818b518
--- /dev/null
+++ b/man/solr_stats.Rd
@@ -0,0 +1,91 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/solr_stats.r
+\name{solr_stats}
+\alias{solr_stats}
+\title{Solr stats}
+\usage{
+solr_stats(name = NULL, q = "*:*", stats.field = NULL,
+  stats.facet = NULL, wt = "json", start = 0, rows = 0, key = NULL,
+  callopts = list(), raw = FALSE, parsetype = "df")
+}
+\arguments{
+\item{name}{Name of a collection or core. Or leave as \code{NULL} if not needed.}
+
+\item{q}{Query terms, defaults to '*:*', or everything.}
+
+\item{stats.field}{The field or fields to calculate statistics on; can be a 
+character vector, e.g., \code{c('counter_total_all','alm_twitterCount')}.}
+
+\item{stats.facet}{Field(s) to facet the statistics on. Note: you can not facet 
+on multi-valued fields.}
+
+\item{wt}{(character) Data type returned, defaults to 'json'. One of json or xml. If json, 
+uses \code{\link[jsonlite]{fromJSON}} to parse. If xml, uses \code{\link[XML]{xmlParse}} to 
+parse. csv is only supported in \code{\link{solr_search}} and \code{\link{solr_all}}.}
+
+\item{start}{Record to start at, default to beginning.}
+
+\item{rows}{Number of records to return. Defaults to 0.}
+
+\item{key}{API key, if needed.}
+
+\item{callopts}{Call options passed on to httr::GET}
+
+\item{raw}{(logical) If TRUE, returns raw data in format specified by wt param}
+
+\item{parsetype}{(character) One of 'list' or 'df'}
+}
+\value{
+XML, JSON, a list, or data.frame
+}
+\description{
+Returns only stat items
+}
+\details{
+The \code{verbose} parameter was dropped. See \code{\link{solr_connect}}, which
+can be used to set verbose status.
+}
+\examples{
+\dontrun{
+# connect
+solr_connect('http://api.plos.org/search')
+
+# get stats
+solr_stats(q='science', stats.field='counter_total_all', raw=TRUE)
+solr_stats(q='title:"ecology" AND body:"cell"',
+   stats.field=c('counter_total_all','alm_twitterCount'))
+solr_stats(q='ecology', stats.field=c('counter_total_all','alm_twitterCount'),
+   stats.facet='journal')
+solr_stats(q='ecology', stats.field=c('counter_total_all','alm_twitterCount'),
+   stats.facet=c('journal','volume'))
+
+# Get raw data, then parse later if you feel like it
+## json
+out <- solr_stats(q='ecology', stats.field=c('counter_total_all','alm_twitterCount'),
+   stats.facet=c('journal','volume'), raw=TRUE)
+library("jsonlite")
+jsonlite::fromJSON(out)
+solr_parse(out) # list
+solr_parse(out, 'df') # data.frame
+
+## xml
+out <- solr_stats(q='ecology', stats.field=c('counter_total_all','alm_twitterCount'),
+   stats.facet=c('journal','volume'), raw=TRUE, wt="xml")
+library("xml2")
+xml2::read_xml(unclass(out))
+solr_parse(out) # list
+solr_parse(out, 'df') # data.frame
+
+# Get verbose http call information
+library("httr")
+solr_stats(q='ecology', stats.field='alm_twitterCount',
+   callopts=verbose())
+}
+}
+\references{
+See \url{http://wiki.apache.org/solr/StatsComponent} for
+more information on Solr stats.
+}
+\seealso{
+\code{\link{solr_highlight}}, \code{\link{solr_facet}},
+\code{\link{solr_search}}, \code{\link{solr_mlt}}
+}
+
diff --git a/man/solrium-package.Rd b/man/solrium-package.Rd
new file mode 100644
index 0000000..1c70739
--- /dev/null
+++ b/man/solrium-package.Rd
@@ -0,0 +1,72 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/solrium-package.R
+\docType{package}
+\name{solrium-package}
+\alias{solrium}
+\alias{solrium-package}
+\title{General purpose R interface to Solr.}
+\description{
+This package has support for all the search endpoints, as well as a suite
+of functions for managing a Solr database, including adding and deleting 
+documents.
+}
+\section{Important search functions}{
+
+
+\itemize{
+  \item \code{\link{solr_search}} - General search, only returns documents
+  \item \code{\link{solr_all}} - General search, including all non-documents
+  in addition to documents: facets, highlights, groups, mlt, stats.
+  \item \code{\link{solr_facet}} - Faceting only (w/o general search)
+  \item \code{\link{solr_highlight}} - Highlighting only (w/o general search)
+  \item \code{\link{solr_mlt}} - More like this (w/o general search)
+  \item \code{\link{solr_group}} - Group search (w/o general search)
+  \item \code{\link{solr_stats}} - Stats search (w/o general search)
+}
+}
+
+\section{Important Solr management functions}{
+
+
+\itemize{
+  \item \code{\link{update_json}} - Add or delete documents using json in a 
+  file
+  \item \code{\link{add}} - Add documents via an R list or data.frame
+  \item \code{\link{delete_by_id}} - Delete documents by ID
+  \item \code{\link{delete_by_query}} - Delete documents by query
+}
+}
+
+\section{Vignettes}{
+
+
+See the vignettes for help \code{browseVignettes(package = "solrium")}
+}
+
+\section{Performance}{
+
+
+\code{v0.2} and above of this package will have \code{wt=csv} as the default.
+This  should give significant performance improvement over the previous 
+default of \code{wt=json}, which pulled down json, parsed to an R list, 
+then to a data.frame. With \code{wt=csv}, we pull down csv, and read that 
+in directly to a data.frame.
+
+The http library we use, \pkg{httr}, sets gzip compression header by 
+default. As long as compression is used server side, you're good to go on 
+compression, which should be a good performance boost. See
+\url{https://wiki.apache.org/solr/SolrPerformanceFactors#Query_Response_Compression}
+for notes on how to enable compression.
+
+There are other notes about Solr performance at
+\url{https://wiki.apache.org/solr/SolrPerformanceFactors} that can be 
+used server side/in your Solr config, but aren't things to tune here in 
+this R client.
+
+Let us know if there's any further performance improvements we can make.
+}
+\author{
+Scott Chamberlain \email{myrmecocystus at gmail.com}
+}
+\keyword{package}
+
diff --git a/man/update_csv.Rd b/man/update_csv.Rd
new file mode 100644
index 0000000..00926cb
--- /dev/null
+++ b/man/update_csv.Rd
@@ -0,0 +1,120 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/update_csv.R
+\name{update_csv}
+\alias{update_csv}
+\title{Update documents using CSV}
+\usage{
+update_csv(files, name, separator = ",", header = TRUE, fieldnames = NULL,
+  skip = NULL, skipLines = 0, trim = FALSE, encapsulator = NULL,
+  escape = NULL, keepEmpty = FALSE, literal = NULL, map = NULL,
+  split = NULL, rowid = NULL, rowidOffset = NULL, overwrite = NULL,
+  commit = NULL, wt = "json", raw = FALSE, ...)
+}
+\arguments{
+\item{files}{Path to file to load into Solr}
+
+\item{name}{(character) Name of the core or collection}
+
+\item{separator}{Specifies the character to act as the field separator. Default: ','}
+
+\item{header}{TRUE if the first line of the CSV input contains field or column names. 
+Default: \code{TRUE}. If the fieldnames parameter is absent, these field names 
+will be used when adding documents to the index.}
+
+\item{fieldnames}{Specifies a comma separated list of field names to use when adding 
+documents to the Solr index. If the CSV input already has a header, the names 
+specified by this parameter will override them. Example: fieldnames=id,name,category}
+
+\item{skip}{A comma separated list of field names to skip in the input. An alternate 
+way to skip a field is to specify its name as a zero length string in fieldnames. 
+For example, \code{fieldnames=id,name,category&skip=name} skips the name field, 
+and is equivalent to \code{fieldnames=id,,category}}
+
+\item{skipLines}{Specifies the number of lines in the input stream to discard 
+before the CSV data starts (including the header, if present). Default: \code{0}}
+
+\item{trim}{If true, remove leading and trailing whitespace from values. CSV parsing 
+already ignores leading whitespace by default, but there may be trailing whitespace, 
+or there may be leading whitespace that is encapsulated by quotes and is thus not 
+removed. This may be specified globally, or on a per-field basis. 
+Default: \code{FALSE}}
+
+\item{encapsulator}{The character optionally used to surround values to preserve 
+characters such as the CSV separator or whitespace. This standard CSV format handles 
+the encapsulator itself appearing in an encapsulated value by doubling the 
+encapsulator.}
+
+\item{escape}{The character used for escaping CSV separators or other reserved 
+characters. If an escape is specified, the encapsulator is not used unless also 
+explicitly specified since most formats use either encapsulation or escaping, not both.}
+
+\item{keepEmpty}{Keep and index empty (zero length) field values. This may be specified 
+globally, or on a per-field basis. Default: \code{FALSE}}
+
+\item{literal}{Adds fixed field name/value to all documents. Example: Adds a "datasource" 
+field with value equal to "products" for every document indexed from the CSV 
+\code{literal.datasource=products}}
+
+\item{map}{Specifies a mapping between one value and another. The string on the LHS of 
+the colon will be replaced with the string on the RHS. This parameter can be specified 
+globally or on a per-field basis. Example: replaces "Absolutely" with "true" in every 
+field \code{map=Absolutely:true}. Example: removes any values of "RemoveMe" in the 
+field "foo" \code{f.foo.map=RemoveMe:&f.foo.keepEmpty=false }}
+
+\item{split}{If TRUE, the field value is split into multiple values by another 
+CSV parser. The CSV parsing rules such as separator and encapsulator may be specified 
+as field parameters. See \url{https://wiki.apache.org/solr/UpdateCSV#split} for examples.}
+
+\item{rowid}{If not null, add a new field to the document where the passed in parameter 
+name is the field name to be added and the current line/rowid is the value. This is 
+useful if your CSV doesn't have a unique id already in it and you want to use the line 
+number as one. Also useful if you simply want to index where exactly in the original 
+CSV file the row came from}
+
+\item{rowidOffset}{In conjunction with the rowid parameter, this integer value will be 
+added to the rowid before adding it to the field.}
+
+\item{overwrite}{If true (the default), check for and overwrite duplicate documents, 
+based on the uniqueKey field declared in the solr schema. If you know the documents you 
+are indexing do not contain any duplicates then you may see a considerable speed up 
+with &overwrite=false.}
+
+\item{commit}{Commit changes after all records in this request have been indexed. The 
+default is commit=false to avoid the potential performance impact of frequent commits.}
+
+\item{wt}{(character) One of json (default) or xml. If json, uses
+\code{\link[jsonlite]{fromJSON}} to parse. If xml, uses \code{\link[xml2]{read_xml}} to parse}
+
+\item{raw}{(logical) If TRUE, returns raw data in format specified by wt param}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Update documents using CSV
+}
+\note{
+SOLR v1.2 was first version to support csv. See
+\url{https://issues.apache.org/jira/browse/SOLR-66}
+}
+\examples{
+\dontrun{
+# start Solr in Schemaless mode: bin/solr start -e schemaless
+
+# connect
+solr_connect()
+
+df <- data.frame(id=1:3, name=c('red', 'blue', 'green'))
+write.csv(df, file="df.csv", row.names=FALSE, quote = FALSE)
+update_csv("df.csv", "books")
+
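+# skip the duplicate check when you know the rows are unique (illustrative)
+update_csv("df.csv", "books", overwrite = FALSE)
+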
+# give back xml
+update_csv("df.csv", "books", wt = "xml")
+## raw xml
+update_csv("df.csv", "books", wt = "xml", raw = FALSE)
+}
+}
+\seealso{
+Other update: \code{\link{update_json}},
+  \code{\link{update_xml}}
+}
+
diff --git a/man/update_json.Rd b/man/update_json.Rd
new file mode 100644
index 0000000..5b983e7
--- /dev/null
+++ b/man/update_json.Rd
@@ -0,0 +1,90 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/update_json.R
+\name{update_json}
+\alias{update_json}
+\title{Update documents using JSON}
+\usage{
+update_json(files, name, commit = TRUE, optimize = FALSE,
+  max_segments = 1, expunge_deletes = FALSE, wait_searcher = TRUE,
+  soft_commit = FALSE, prepare_commit = NULL, wt = "json", raw = FALSE,
+  ...)
+}
+\arguments{
+\item{files}{Path to file to load into Solr}
+
+\item{name}{(character) Name of the core or collection}
+
+\item{commit}{(logical) If \code{TRUE}, documents immediately searchable. 
+Default: \code{TRUE}}
+
+\item{optimize}{Should index optimization be performed before the method returns. 
+Default: \code{FALSE}}
+
+\item{max_segments}{optimizes down to at most this number of segments. Default: 1}
+
+\item{expunge_deletes}{merge segments with deletes away. Default: \code{FALSE}}
+
+\item{wait_searcher}{block until a new searcher is opened and registered as the 
+main query searcher, making the changes visible. Default: \code{TRUE}}
+
+\item{soft_commit}{perform a soft commit - this will refresh the 'view' of the 
+index in a more performant manner, but without "on-disk" guarantees. 
+Default: \code{FALSE}}
+
+\item{prepare_commit}{The prepareCommit command is an expert-level API that 
+calls Lucene's IndexWriter.prepareCommit(). Not passed by default}
+
+\item{wt}{(character) One of json (default) or xml. If json, uses 
+\code{\link[jsonlite]{fromJSON}} to parse. If xml, uses \code{\link[XML]{xmlParse}} to 
+parse}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data in format specified by 
+\code{wt} param}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Update documents using JSON
+}
+\details{
+You likely won't be able to run this function against most public Solr 
+services, but it should work against a local install.
+}
+\examples{
+\dontrun{
+# start Solr in Schemaless mode: bin/solr start -e schemaless
+
+# connect
+solr_connect()
+
+# Add documents
+file <- system.file("examples", "books2.json", package = "solrium")
+cat(readLines(file), sep = "\\n")
+update_json(file, "books")
+
+# Update commands - can include many varying commands
+## Add file
+file <- system.file("examples", "updatecommands_add.json", package = "solrium")
+cat(readLines(file), sep = "\\n")
+update_json(file, "books")
+
+## Delete file
+file <- system.file("examples", "updatecommands_delete.json", package = "solrium")
+cat(readLines(file), sep = "\\n")
+update_json(file, "books")
+
+# Add and delete in the same document
+## Add a document first, that we can later delete
+ss <- list(list(id = 456, name = "cat"))
+add(ss, "books")
+## Now add a new document, and delete the one we just made
+file <- system.file("examples", "add_delete.json", package = "solrium")
+cat(readLines(file), sep = "\\n")
+update_json(file, "books")
+}
+}
+\seealso{
+Other update: \code{\link{update_csv}},
+  \code{\link{update_xml}}
+}
+
diff --git a/man/update_xml.Rd b/man/update_xml.Rd
new file mode 100644
index 0000000..a59f55d
--- /dev/null
+++ b/man/update_xml.Rd
@@ -0,0 +1,89 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/update_xml.R
+\name{update_xml}
+\alias{update_xml}
+\title{Update documents using XML}
+\usage{
+update_xml(files, name, commit = TRUE, optimize = FALSE, max_segments = 1,
+  expunge_deletes = FALSE, wait_searcher = TRUE, soft_commit = FALSE,
+  prepare_commit = NULL, wt = "json", raw = FALSE, ...)
+}
+\arguments{
+\item{files}{Path to file to load into Solr}
+
+\item{name}{(character) Name of the core or collection}
+
+\item{commit}{(logical) If \code{TRUE}, documents are immediately searchable. 
+Default: \code{TRUE}}
+
+\item{optimize}{Should index optimization be performed before the method returns. 
+Default: \code{FALSE}}
+
+\item{max_segments}{optimizes down to at most this number of segments. Default: 1}
+
+\item{expunge_deletes}{merge away segments containing deleted documents. Default: \code{FALSE}}
+
+\item{wait_searcher}{block until a new searcher is opened and registered as the 
+main query searcher, making the changes visible. Default: \code{TRUE}}
+
+\item{soft_commit}{perform a soft commit - this will refresh the 'view' of the 
+index in a more performant manner, but without "on-disk" guarantees. 
+Default: \code{FALSE}}
+
+\item{prepare_commit}{The prepareCommit command is an expert-level API that 
+calls Lucene's IndexWriter.prepareCommit(). Not passed by default}
+
+\item{wt}{(character) One of json (default) or xml. If json, uses 
+\code{\link[jsonlite]{fromJSON}} to parse. If xml, uses \code{\link[XML]{xmlParse}} to 
+parse}
+
+\item{raw}{(logical) If \code{TRUE}, returns raw data in format specified by 
+\code{wt} param}
+
+\item{...}{curl options passed on to \code{\link[httr]{GET}}}
+}
+\description{
+Update documents using XML
+}
+\details{
+You likely won't be able to run this function against most public Solr 
+services, but it should work against a local install.
+}
+\examples{
+\dontrun{
+# start Solr in Schemaless mode: bin/solr start -e schemaless
+
+# connect
+solr_connect()
+
+# Add documents
+file <- system.file("examples", "books.xml", package = "solrium")
+cat(readLines(file), sep = "\\n")
+update_xml(file, "books")
+
+# Update commands - can include many varying commands
+## Add files
+file <- system.file("examples", "books2_delete.xml", package = "solrium")
+cat(readLines(file), sep = "\\n")
+update_xml(file, "books")
+
+## Delete files
+file <- system.file("examples", "updatecommands_delete.xml", package = "solrium")
+cat(readLines(file), sep = "\\n")
+update_xml(file, "books")
+
+## Add and delete in the same document
+## Add a document first, that we can later delete
+ss <- list(list(id = 456, name = "cat"))
+add(ss, "books")
+## Now add a new document, and delete the one we just made
+file <- system.file("examples", "add_delete.xml", package = "solrium")
+cat(readLines(file), sep = "\\n")
+update_xml(file, "books")
+}
+}
+\seealso{
+Other update: \code{\link{update_csv}},
+  \code{\link{update_json}}
+}
+
diff --git a/tests/cloud_mode/test-add.R b/tests/cloud_mode/test-add.R
new file mode 100644
index 0000000..2eebd04
--- /dev/null
+++ b/tests/cloud_mode/test-add.R
@@ -0,0 +1,25 @@
+context("add documents")
+
+# Using with Solr Cloud mode
+
+test_that("adding documents from a ", {
+  solr_connect()
+
+  # setup
+  pinged <- ping(name = "helloWorld", verbose = FALSE)$status
+  if (pinged != "OK") collection_create(name = "helloWorld", numShards = 2)
+
+  # list works
+  ss <- list(list(id = 1, price = 100), list(id = 2, price = 500))
+  list_out <- add(ss, "helloWorld")
+
+  expect_is(list_out, "list")
+  expect_equal(list_out$responseHeader$status, 0)
+
+  # data.frame works
+  df <- data.frame(id = c(67, 68), price = c(1000, 500000000))
+  df_out <- add(df, "helloWorld")
+
+  expect_is(df_out, "list")
+  expect_equal(df_out$responseHeader$status, 0)
+})
diff --git a/tests/cloud_mode/test-collections.R b/tests/cloud_mode/test-collections.R
new file mode 100644
index 0000000..a75d695
--- /dev/null
+++ b/tests/cloud_mode/test-collections.R
@@ -0,0 +1,24 @@
+context("collections management")
+
+# Using with Solr Cloud mode
+
+test_that("adding a collection works", {
+  solr_connect()
+  ss <- list(list(id = 1, price = 100), list(id = 2, price = 500))
+
+  # setup
+  pinged <- ping(name = "helloWorld", verbose = FALSE)$status
+  if (pinged != "OK") collection_delete(name = "helloWorld")
+
+  # add collection
+  list_out <- add(ss, "helloWorld")
+
+  expect_is(list_out, "list")
+  expect_equal(list_out$responseHeader$status, 0)
+})
+
+test_that("adding a collection fails well", {
+  solr_connect()
+
+  expect_error(collection_create(name = "helloWorld", verbose = FALSE), "collection already exists")
+})
diff --git a/tests/standard_mode/test-core_create.R b/tests/standard_mode/test-core_create.R
new file mode 100644
index 0000000..9da281b
--- /dev/null
+++ b/tests/standard_mode/test-core_create.R
@@ -0,0 +1,31 @@
+context("core_create")
+
+test_that("core_create works", {
+  solr_connect(verbose = FALSE)
+
+  core_name <- "slamcore"
+
+  # delete if exists
+  if (core_exists(core_name)) {
+    invisible(core_unload(core_name))
+  }
+
+  # write files in preparation
+  path <- sprintf("~/solr-5.4.1/server/solr/%s/conf", core_name)
+  dir.create(path, recursive = TRUE)
+  files <- list.files("~/solr-5.4.1/server/solr/configsets/data_driven_schema_configs/conf/", full.names = TRUE)
+  invisible(file.copy(files, path, recursive = TRUE))
+
+  # create the core
+  aa <- suppressMessages(core_create(name = core_name, instanceDir = core_name, configSet = "basic_configs"))
+
+  expect_is(aa, "list")
+  expect_is(aa$responseHeader, "list")
+
+  # it worked
+  expect_equal(aa$responseHeader$status, 0)
+
+  # correct name
+  expect_is(aa$core, "character")
+  expect_equal(aa$core, core_name)
+})
diff --git a/tests/test-all.R b/tests/test-all.R
new file mode 100644
index 0000000..ff53551
--- /dev/null
+++ b/tests/test-all.R
@@ -0,0 +1,2 @@
+library('testthat')
+test_check('solrium')
diff --git a/tests/testthat/test-core_create.R b/tests/testthat/test-core_create.R
new file mode 100644
index 0000000..5c61009
--- /dev/null
+++ b/tests/testthat/test-core_create.R
@@ -0,0 +1,33 @@
+context("core_create")
+
+test_that("core_create works", {
+  skip_on_cran()
+  
+  solr_connect(verbose = FALSE)
+  
+  core_name <- "slamcore"
+
+  # delete if exists
+  if (core_exists(core_name)) {
+    invisible(core_unload(core_name))
+  }
+  
+  # write files in preparation
+  path <- sprintf("~/solr-5.4.1/server/solr/%s/conf", core_name)
+  dir.create(path, recursive = TRUE, showWarnings = FALSE)
+  files <- list.files("~/solr-5.4.1/server/solr/configsets/data_driven_schema_configs/conf/", full.names = TRUE)
+  invisible(file.copy(files, path, recursive = TRUE))
+  
+  # create the core
+  aa <- suppressMessages(core_create(name = core_name, instanceDir = core_name, configSet = "basic_configs"))
+
+  expect_is(aa, "list")
+  expect_is(aa$responseHeader, "list")
+  
+  # it worked
+  expect_equal(aa$responseHeader$status, 0)
+  
+  # correct name
+  expect_is(aa$core, "character")
+  expect_equal(aa$core, core_name)
+})
diff --git a/tests/testthat/test-errors.R b/tests/testthat/test-errors.R
new file mode 100644
index 0000000..90130d3
--- /dev/null
+++ b/tests/testthat/test-errors.R
@@ -0,0 +1,50 @@
+# errors
+context("errors")
+
+test_that("setting errors level gives correct error classes", {
+  skip_on_cran()
+  
+  invisible(aa <- solr_connect('http://api.plos.org/search'))
+  invisible(bb <- solr_connect('http://api.plos.org/search', errors = "simple"))
+  invisible(cc <- solr_connect('http://api.plos.org/search', errors = "complete"))
+  
+  expect_is(aa, "solr_connection")
+  expect_is(bb, "solr_connection")
+  expect_is(cc, "solr_connection")
+  expect_is(aa$errors, "character")
+  expect_is(bb$errors, "character")
+  expect_is(cc$errors, "character")
+})
+
+test_that("setting errors level gives correct error values", {
+  skip_on_cran()
+  
+  invisible(aa <- solr_connect('http://api.plos.org/search'))
+  invisible(bb <- solr_connect('http://api.plos.org/search', errors = "simple"))
+  invisible(cc <- solr_connect('http://api.plos.org/search', errors = "complete"))
+  
+  expect_equal(aa$errors, "simple")
+  expect_equal(bb$errors, "simple")
+  expect_equal(cc$errors, "complete")
+})
+
+test_that("setting error levels gives correct effect - simple errors", {
+  skip_on_cran()
+  
+  invisible(solr_connect('http://api.plos.org/search', errors = "simple", verbose = FALSE))
+  
+  expect_error(solr_search(q = "*:*", rows = "asdf"), "500 - For input string")
+  expect_error(solr_search(q = "*:*", rows = "asdf"), "500 - For input string")
+})
+
+test_that("setting error levels gives correct effect - complete errors", {
+  skip_on_cran()
+  
+  invisible(solr_connect('http://api.plos.org/search', errors = "complete", verbose = FALSE))
+  
+  errmssg <- "500 - For input string: \"asdf\"\nAPI stack trace"
+  expect_error(solr_search(q = "*:*", rows = "asdf"), errmssg)
+  expect_error(solr_search(q = "*:*", start = "asdf"), errmssg)
+  expect_error(solr_search(q = "*:*", sort = "down"), 
+    "400 - Can't determine a Sort Order \\(asc or desc\\) in sort spec 'down'")
+})
diff --git a/tests/testthat/test-ping.R b/tests/testthat/test-ping.R
new file mode 100644
index 0000000..75ce00a
--- /dev/null
+++ b/tests/testthat/test-ping.R
@@ -0,0 +1,35 @@
+# ping
+context("ping")
+
+test_that("ping works against", {
+  skip_on_cran()
+
+  invisible(solr_connect(verbose = FALSE))
+
+  aa <- ping(name = "gettingstarted")
+
+  expect_is(aa, "list")
+  expect_is(aa$responseHeader, "list")
+  expect_equal(aa$responseHeader$status, 0)
+  expect_equal(aa$responseHeader$params$q, "{!lucene}*:*")
+})
+
+test_that("ping gives raw data correctly", {
+  skip_on_cran()
+  
+  solr_connect(verbose = FALSE)
+  
+  expect_is(ping("gettingstarted", raw = TRUE), "ping")
+  expect_is(ping("gettingstarted", raw = FALSE), "list")
+  expect_is(ping("gettingstarted", wt = "xml", raw = TRUE), "ping")
+  expect_is(ping("gettingstarted", wt = "xml", raw = FALSE), "xml_document")
+})
+
+test_that("ping fails well", {
+  skip_on_cran()
+
+  solr_connect(verbose = FALSE)
+
+  expect_equal(ping()$status, "not found")
+  expect_equal(ping("adfdafs")$status, "not found")
+})
diff --git a/tests/testthat/test-schema.R b/tests/testthat/test-schema.R
new file mode 100644
index 0000000..9e4b663
--- /dev/null
+++ b/tests/testthat/test-schema.R
@@ -0,0 +1,36 @@
+# schema
+context("schema")
+
+test_that("schema works against", {
+  skip_on_cran()
+
+  invisible(solr_connect(verbose = FALSE))
+
+  aa <- schema(name = "gettingstarted")
+  bb <- schema(name = "gettingstarted", "fields")
+  
+  expect_is(schema(name = "gettingstarted", "dynamicfields"), "list")
+  expect_is(schema(name = "gettingstarted", "fieldtypes"), "list")
+  expect_is(schema(name = "gettingstarted", "copyfields"), "list")
+  expect_is(schema(name = "gettingstarted", "name"), "list")
+  expect_is(schema(name = "gettingstarted", "version"), "list")
+  expect_is(schema(name = "gettingstarted", "uniquekey"), "list")
+  expect_is(schema(name = "gettingstarted", "similarity"), "list")
+
+  expect_is(aa, "list")
+  expect_is(aa$responseHeader, "list")
+  expect_is(aa$schema, "list")
+  expect_is(aa$schema$name, "character")
+  
+  expect_is(bb, "list")
+  expect_is(bb$fields, "data.frame")
+})
+
+test_that("schema fails well", {
+  skip_on_cran()
+  
+  invisible(solr_connect(verbose = FALSE))
+  
+  expect_error(schema(), "argument \"name\" is missing")
+  expect_error(schema(name = "gettingstarted", "stuff"), "Client error")
+})
diff --git a/tests/testthat/test-solr_all.R b/tests/testthat/test-solr_all.R
new file mode 100644
index 0000000..f29ec2d
--- /dev/null
+++ b/tests/testthat/test-solr_all.R
@@ -0,0 +1,86 @@
+context("solr_all")
+
+test_that("solr_all works", {
+  skip_on_cran()
+
+  solr_connect('http://api.plos.org/search', verbose = FALSE)
+
+  a <- solr_all(q='*:*', rows=2, fl='id')
+  b <- solr_all(q='title:"ecology" AND body:"cell"', fl='title', rows=5)
+
+  # correct dimensions
+  expect_equal(length(a), 6)
+  expect_equal(length(b), 6)
+
+  # correct classes
+  expect_is(a, "list")
+  expect_is(a$search, "tbl_df")
+  expect_is(b, "list")
+  expect_is(b$search, "tbl_df")
+  
+  # right slot names
+  expect_named(a, c('search','facet','high','mlt','group','stats'))
+  expect_named(b, c('search','facet','high','mlt','group','stats'))
+})
+
+test_that("solr_all fails well", {
+  skip_on_cran()
+
+  invisible(solr_connect('http://api.plos.org/search', verbose = FALSE))
+
+  expect_error(solr_all(q = "*:*", rows = "asdf"), "500 - For input string")
+  expect_error(solr_all(q = "*:*", sort = "down"),
+               "400 - Can't determine a Sort Order \\(asc or desc\\) in sort spec 'down'")
+  expect_error(solr_all(q='*:*', fl=c('alm_twitterCount','id'),
+                           fq='alm_notafield:[5 TO 50]', rows=10),
+               "undefined field")
+  expect_error(solr_all(q = "*:*", wt = "foobar"),
+               "wt must be one of: json, xml, csv")
+
+})
+
+test_that("solr_all works with HathiTrust", {
+  skip_on_cran()
+
+  url_hathi <- "http://chinkapin.pti.indiana.edu:9994/solr/meta/select"
+  invisible(solr_connect(url = url_hathi, verbose = FALSE))
+
+  a <- solr_all(q = '*:*', rows = 2, fl = 'id')
+  b <- solr_all(q = 'language:Spanish', rows = 5)
+
+  # correct dimensions
+  expect_equal(NROW(a$search), 2)
+  expect_equal(NROW(b$search), 5)
+
+  # correct classes
+  expect_is(a, "list")
+  expect_is(a$search, "data.frame")
+  expect_is(a$high, "data.frame")
+  expect_is(a$group, "data.frame")
+  expect_null(a$stats)
+  expect_null(a$facet)
+
+  expect_is(b, "list")
+  expect_is(b$search, "data.frame")
+  expect_is(b$high, "data.frame")
+  expect_is(b$group, "data.frame")
+  expect_null(b$stats)
+  expect_null(b$facet)
+
+  # names
+  expect_named(a$search, "id")
+})
+
+test_that("solr_all works with Datacite", {
+  skip_on_cran()
+
+  url_dc <- "http://search.datacite.org/api"
+  invisible(solr_connect(url = url_dc, verbose = FALSE))
+
+  a <- solr_all(q = '*:*', rows = 2)
+  b <- solr_all(q = 'publisher:Data', rows = 5)
+
+  # correct dimensions
+  expect_equal(NROW(a$search), 2)
+  expect_equal(NROW(b$search), 5)
+})
diff --git a/tests/testthat/test-solr_connect.R b/tests/testthat/test-solr_connect.R
new file mode 100644
index 0000000..a3f0d21
--- /dev/null
+++ b/tests/testthat/test-solr_connect.R
@@ -0,0 +1,50 @@
+# solr_connect
+context("solr_connect")
+
+test_that("solr_connect to remote Solr server works", {
+  skip_on_cran()
+  
+  invisible(aa <- solr_connect('http://api.plos.org/search'))
+  
+  expect_is(aa, "solr_connection")
+  expect_is(aa$url, "character")
+  expect_null(aa$proxy)
+  expect_is(aa$errors, "character")
+  expect_named(aa, c('url', 'proxy', 'errors', 'verbose'))
+})
+
+test_that("solr_connect to local Solr server works", {
+  skip_on_cran()
+  
+  invisible(bb <- solr_connect())
+  
+  expect_is(bb, "solr_connection")
+  expect_is(bb$url, "character")
+  expect_null(bb$proxy)
+  expect_is(bb$errors, "character")
+  expect_named(bb, c('url', 'proxy', 'errors', 'verbose'))
+})
+
+test_that("solr_connect works with a proxy", {
+  skip_on_cran()
+  
+  port = 3128
+  proxy <- list(url = "187.62.207.130", port = port)
+  invisible(cc <- solr_connect(proxy = proxy))
+  
+  expect_is(cc, "solr_connection")
+  expect_is(cc$url, "character")
+  expect_is(cc$proxy, "request")
+  expect_is(cc$proxy$options, "list")
+  expect_equal(cc$proxy$options$proxyport, port)
+  expect_is(cc$errors, "character")
+})
+
+test_that("solr_connect fails well", {
+  skip_on_cran()
+  
+  expect_error(solr_connect("foobar"), "That does not appear to be a url")
+  expect_error(solr_connect(errors = 'foo'), "should be one of")
+  expect_error(solr_connect(proxy = list(foo = "bar")), 
+               "Input to proxy can only contain")
+})
diff --git a/tests/testthat/test-solr_error.R b/tests/testthat/test-solr_error.R
new file mode 100644
index 0000000..73942b1
--- /dev/null
+++ b/tests/testthat/test-solr_error.R
@@ -0,0 +1,49 @@
+context("solr_error internal function")
+
+test_that("solr_error works when no errors", {
+  skip_on_cran()
+
+  invisible(solr_connect('http://api.plos.org/search', verbose = FALSE))
+  
+  aa <- solr_search(q = '*:*', rows = 2, fl = 'id')
+  expect_equal(solr_settings()$errors, "simple")
+  expect_is(aa, "data.frame")
+  expect_is(aa$id, "character")
+})
+
+
+test_that("solr_error works when there should be errors - simple errors", {
+  skip_on_cran()
+  
+  invisible(solr_connect('http://api.plos.org/search', verbose = FALSE))
+  
+  expect_equal(solr_settings()$errors, "simple")
+  expect_error(solr_search(q = '*:*', rows = 5, sort = "things"), 
+               "Can't determine a Sort Order")
+})
+
+test_that("solr_error works when there should be errors - complete errors", {
+  skip_on_cran()
+  
+  invisible(solr_connect('http://api.plos.org/search', 
+                         errors = "complete", 
+                         verbose = FALSE))
+  
+  expect_equal(solr_settings()$errors, "complete")
+  expect_error(solr_search(q = '*:*', rows = 5, sort = "things"), 
+               "Can't determine a Sort Order")
+  expect_error(solr_search(q = '*:*', rows = 5, sort = "things"), 
+               "no stack trace")
+})
+
+test_that("solr_error - test directly", {
+  skip_on_cran()
+  
+  invisible(solr_connect('http://api.plos.org/search', 
+                         errors = "complete", 
+                         verbose = FALSE))
+  
+  library("httr")
+  res <- GET("http://api.plos.org/search?wt=json&q=%22synthetic%20biology%22&rows=10&fl=id,title&sort=notasortoption")
+  expect_error(solrium:::solr_error(res), "Can't determine a Sort Order \\(asc or desc\\)")
+})
diff --git a/tests/testthat/test-solr_facet.r b/tests/testthat/test-solr_facet.r
new file mode 100644
index 0000000..e72afe2
--- /dev/null
+++ b/tests/testthat/test-solr_facet.r
@@ -0,0 +1,69 @@
+context("solr_facet")
+
+test_that("solr_facet works", {
+  skip_on_cran()
+
+  invisible(solr_connect('http://api.plos.org/search', verbose=FALSE))
+
+  a <- solr_facet(q='*:*', facet.field='journal')
+  b <- solr_facet(q='*:*', facet.date='publication_date', 
+                  facet.date.start='NOW/DAY-5DAYS', facet.date.end='NOW', 
+                  facet.date.gap='+1DAY')
+  c <- solr_facet(q='alcohol', facet.pivot='journal,subject', 
+                  facet.pivot.mincount=10)
+
+  # correct dimensions
+  expect_equal(length(a), 5)
+  expect_equal(length(a$facet_queries), 0)
+  expect_equal(NCOL(a$facet_fields$journal), 2)
+
+  expect_that(length(b), equals(5))
+  expect_that(length(b$facet_dates), equals(1))
+  expect_that(dim(b$facet_dates$publication_date), equals(c(6,2)))
+  
+  expect_equal(length(c), 5)
+  expect_equal(names(c$facet_pivot), c('journal', 'journal,subject'))
+  expect_equal(names(c$facet_pivot$journal), c('journal', 'count'))
+  expect_equal(names(c$facet_pivot$`journal,subject`), c('journal', 'subject', 'count'))
+  expect_true(min(unlist(c$facet_pivot$`journal,subject`$count)) >= 10)
+  
+  # correct classes
+  expect_is(a, "list")
+  expect_is(b, "list")
+  expect_is(c, "list")
+  expect_is(b$facet_dates, "list")
+  expect_is(b$facet_dates$publication_date, "data.frame")
+  expect_is(c$facet_pivot, "list")
+  expect_is(c$facet_pivot$journal, "data.frame")
+  expect_is(c$facet_pivot$`journal,subject`, "data.frame")
+})
+
+
+test_that("faceting works against HathiTrust", {
+  url_hathi <- "http://chinkapin.pti.indiana.edu:9994/solr/meta/select"
+  invisible(solr_connect(url = url_hathi, verbose = FALSE))
+  
+  # regular facet
+  a <- solr_facet(q = '*:*', facet.field = 'genre')
+  # pivot facet
+  c <- solr_facet(q = '*:*', facet.pivot = 'genre,publisher', 
+                  facet.pivot.mincount = 10)
+  
+  expect_equal(length(a), 5)
+  expect_equal(length(a$facet_queries), 0)
+  expect_equal(NCOL(a$facet_fields$genre), 2)
+  
+  expect_equal(length(c), 5)
+  expect_equal(names(c$facet_pivot), c('genre', 'genre,publisher'))
+  expect_named(c$facet_pivot$genre, c('genre', 'count'))
+  expect_named(c$facet_pivot$`genre,publisher`, c('genre', 'publisher', 'count'))
+  expect_true(min(unlist(c$facet_pivot$`genre,publisher`$count)) >= 10)
+  
+  # correct classes
+  expect_is(a, "list")
+  expect_is(c, "list")
+  expect_is(c$facet_pivot, "list")
+  expect_is(c$facet_pivot$genre, "data.frame")
+  expect_is(c$facet_pivot$`genre,publisher`, "data.frame")  
+})
+
diff --git a/tests/testthat/test-solr_group.r b/tests/testthat/test-solr_group.r
new file mode 100644
index 0000000..3657277
--- /dev/null
+++ b/tests/testthat/test-solr_group.r
@@ -0,0 +1,39 @@
+context("solr_group")
+
+test_that("solr_group works", {
+  skip_on_cran()
+
+  solr_connect('http://api.plos.org/search', verbose=FALSE)
+
+  a <- solr_group(q='ecology', group.field='journal', group.limit=3, fl=c('id','score'))
+  b <- solr_group(q='ecology', group.field='journal', group.limit=3,
+                  fl=c('id','score','alm_twitterCount'),
+                  group.sort='alm_twitterCount desc')
+  out <- solr_group(q='ecology', group.field=c('journal','article_type'), group.limit=3, fl='id',
+                    raw=TRUE)
+  c <- out
+  d <- solr_parse(out, 'df')
+  e <- solr_group(q='ecology', group.field='journal', group.limit=3, fl=c('id','score'),
+                  group.format='grouped', group.main='true')
+
+  suppressPackageStartupMessages(library('jsonlite', quietly = TRUE))
+  f <- jsonlite::fromJSON(out, FALSE)
+
+  # correct dimensions
+  expect_equal(NCOL(a), 5)
+  expect_equal(NCOL(b), 6)
+  expect_that(length(c), equals(1))
+  expect_that(length(d), equals(2))
+  expect_equal(NCOL(d$article_type), 4)
+  expect_equal(NCOL(e), 4)
+  expect_that(length(f), equals(1))
+  expect_that(length(f$grouped), equals(2))
+
+  #  correct classes
+  expect_is(a, "data.frame")
+  expect_is(b, "data.frame")
+  expect_is(c, "sr_group")
+  expect_is(d, "list")
+  expect_is(d$journal, "data.frame")
+  expect_is(e, "data.frame")
+})
diff --git a/tests/testthat/test-solr_highlight.r b/tests/testthat/test-solr_highlight.r
new file mode 100644
index 0000000..0c2a916
--- /dev/null
+++ b/tests/testthat/test-solr_highlight.r
@@ -0,0 +1,25 @@
+context("solr_highlight")
+
+test_that("solr_highlight works", {
+  skip_on_cran()
+
+  solr_connect('http://api.plos.org/search', verbose=FALSE)
+
+  a <- solr_highlight(q='alcohol', hl.fl = 'abstract', rows=10)
+  b <- solr_highlight(q='alcohol', hl.fl = c('abstract','title'), rows=3)
+
+  # correct dimensions
+  expect_that(length(a), equals(10))
+  expect_that(length(a[[1]]), equals(1))
+  expect_that(length(b), equals(3))
+  expect_that(length(b[[3]]), equals(2))
+
+  # correct classes
+  expect_is(a, "list")
+  expect_is(a[[1]]$abstract, "character")
+
+  expect_is(b, "list")
+  expect_is(b[[1]], "list")
+  expect_is(b[[1]]$abstract, "character")
+  expect_is(b[[1]]$title, "character")
+})
diff --git a/tests/testthat/test-solr_mlt.r b/tests/testthat/test-solr_mlt.r
new file mode 100644
index 0000000..a2c0d5e
--- /dev/null
+++ b/tests/testthat/test-solr_mlt.r
@@ -0,0 +1,35 @@
+context("solr_mlt")
+
+test_that("solr_mlt works", {
+  skip_on_cran()
+
+  solr_connect('http://api.plos.org/search', verbose=FALSE)
+
+  a <- solr_mlt(q='*:*', mlt.count=2, mlt.fl='abstract', fl='score', fq="doc_type:full")
+  c <- solr_mlt(q='ecology', mlt.fl='abstract', fl='title', rows=5)
+
+  out <- solr_mlt(q='ecology', mlt.fl='abstract', fl='title', rows=2, raw=TRUE, wt="xml")
+  library("xml2")
+  outxml <- read_xml(unclass(out))
+  outdf <- solr_parse(out, "df")
+
+  # correct dimensions
+  expect_equal(dim(a$docs), c(10,2))
+  expect_equal(dim(c$docs), c(5, 2))
+  expect_equal(length(c$mlt), 5)
+
+  expect_equal(length(outxml), 2)
+  expect_equal(dim(outdf$mlt[[1]]), c(5, 5))
+
+  # correct classes
+  expect_is(a, "list")
+  #   expect_is(b, "list")
+  expect_is(c, "list")
+  expect_is(a$docs, "data.frame")
+  #   expect_is(b$mlt, "data.frame")
+  expect_is(c$docs, "data.frame")
+
+  expect_is(outxml, "xml_document")
+  expect_is(outdf, "list")
+  expect_is(outdf$mlt[[1]], "data.frame")
+})
diff --git a/tests/testthat/test-solr_search.r b/tests/testthat/test-solr_search.r
new file mode 100644
index 0000000..96a567f
--- /dev/null
+++ b/tests/testthat/test-solr_search.r
@@ -0,0 +1,100 @@
+context("solr_search")
+
+test_that("solr_search works", {
+  skip_on_cran()
+
+  solr_connect('http://api.plos.org/search', verbose = FALSE)
+
+  a <- solr_search(q='*:*', rows=2, fl='id')
+  b <- solr_search(q='title:"ecology" AND body:"cell"', fl='title', rows=5)
+
+  # correct dimensions
+  expect_that(length(a), equals(1))
+  expect_that(length(b), equals(1))
+
+  # correct classes
+  expect_is(a, "data.frame")
+  expect_is(b, "data.frame")
+})
+
+test_that("solr_search fails well", {
+  skip_on_cran()
+  
+  invisible(solr_connect('http://api.plos.org/search', verbose = FALSE))
+  
+  expect_error(solr_search(q = "*:*", rows = "asdf"), "500 - For input string")
+  expect_error(solr_search(q = "*:*", sort = "down"), 
+               "400 - Can't determine a Sort Order \\(asc or desc\\) in sort spec 'down'")
+  expect_error(solr_search(q='*:*', fl=c('alm_twitterCount','id'), 
+                           fq='alm_notafield:[5 TO 50]', rows=10), 
+               "undefined field")
+  expect_error(solr_search(q = "*:*", wt = "foobar"), 
+               "wt must be one of: json, xml, csv")
+  
+})
+
+test_that("solr_search works with HathiTrust", {
+  skip_on_cran()
+  
+  url_hathi <- "http://chinkapin.pti.indiana.edu:9994/solr/meta/select"
+  invisible(solr_connect(url = url_hathi, verbose = FALSE))
+  
+  a <- solr_search(q = '*:*', rows = 2, fl = 'id')
+  b <- solr_search(q = 'language:Spanish', rows = 5)
+  
+  # correct dimensions
+  expect_equal(NROW(a), 2)
+  expect_equal(NROW(b), 5)
+  
+  # correct classes
+  expect_is(a, "data.frame")
+  expect_is(a, "tbl_df")
+  expect_is(b, "data.frame")
+  expect_is(b, "tbl_df")
+  
+  # names
+  expect_named(a, "id")
+})
+
+test_that("solr_search works with Datacite", {
+  skip_on_cran()
+  
+  url_dc <- "http://search.datacite.org/api"
+  invisible(solr_connect(url = url_dc, verbose = FALSE))
+  
+  a <- solr_search(q = '*:*', rows = 2)
+  b <- solr_search(q = 'publisher:Data', rows = 5)
+  
+  # correct dimensions
+  expect_equal(NROW(a), 2)
+  expect_equal(NROW(b), 5)
+  
+  # correct classes
+  expect_is(a, "data.frame")
+  expect_is(a, "tbl_df")
+  expect_is(b, "data.frame")
+  expect_is(b, "tbl_df")
+})
+
+test_that("solr_search works with Dryad", {
+  skip_on_cran()
+  
+  url_dryad <- "http://datadryad.org/solr/search/select"
+  invisible(solr_connect(url = url_dryad, verbose = FALSE))
+  
+  a <- solr_search(q = '*:*', rows = 2)
+  b <- solr_search(q = 'dc.title.en:ecology', rows = 5)
+  
+  # correct dimensions
+  expect_equal(NROW(a), 2)
+  expect_equal(NROW(b), 5)
+  
+  # correct classes
+  expect_is(a, "data.frame")
+  expect_is(a, "tbl_df")
+  expect_is(b, "data.frame")
+  expect_is(b, "tbl_df")
+  
+  # correct content
+  expect_true(all(grepl("ecolog", b$dc.title.en, ignore.case = TRUE)))
+})
diff --git a/tests/testthat/test-solr_settings.R b/tests/testthat/test-solr_settings.R
new file mode 100644
index 0000000..fdac54e
--- /dev/null
+++ b/tests/testthat/test-solr_settings.R
@@ -0,0 +1,31 @@
+# solr_settings
+context("solr_settings")
+
+test_that("solr_settings gives right classes", {
+  skip_on_cran()
+  
+  invisible(solr_connect('http://api.plos.org/search'))
+  aa <- solr_settings()
+  
+  expect_is(aa, "solr_connection")
+  expect_is(aa$url, "character")
+  expect_null(aa$proxy)
+  expect_is(aa$errors, "character")
+})
+
+
+test_that("solr_settings gives right values", {
+  skip_on_cran()
+  
+  invisible(solr_connect('http://api.plos.org/search'))
+  aa <- solr_settings()
+  
+  expect_equal(aa$errors, "simple")
+})
+
+
+test_that("solr_settings fails with a argument passed", {
+  skip_on_cran()
+  
+  expect_error(solr_settings(3), "unused argument")
+})
diff --git a/tests/testthat/test-solr_stats.r b/tests/testthat/test-solr_stats.r
new file mode 100644
index 0000000..90c03d6
--- /dev/null
+++ b/tests/testthat/test-solr_stats.r
@@ -0,0 +1,110 @@
+context("solr_stats")
+
+test_that("solr_stats works", {
+  skip_on_cran()
+
+  invisible(solr_connect('http://api.plos.org/search', verbose=FALSE))
+
+  a <- solr_stats(q='science', stats.field='counter_total_all', raw=TRUE)
+  b <- solr_stats(q='ecology', stats.field=c('counter_total_all','alm_twitterCount'), 
+                  stats.facet=c('journal','volume'))
+  c <- solr_stats(q='ecology', stats.field=c('counter_total_all','alm_twitterCount'), 
+                  stats.facet=c('journal','volume'), raw=TRUE)
+  d <- solr_parse(c) # list
+  e <- solr_parse(c, 'df') # data.frame
+
+  # correct dimensions
+  expect_equal(length(a), 1)
+  expect_equal(length(b), 2)
+  expect_equal(nrow(b$data), 2)
+  expect_equal(NCOL(b$facet$counter_total_all$journal), 9)
+  expect_equal(length(c), 1)
+  expect_equal(length(d), 2)
+  expect_equal(length(d$data$alm_twitterCount), 8)
+  expect_equal(length(e$facet$alm_twitterCount), 2)
+  expect_equal(NCOL(e$facet$alm_twitterCount$volume), 9)
+
+  # classes
+  expect_is(a, "sr_stats")
+  expect_is(b, "list")
+  expect_is(b$data, "data.frame")
+  expect_is(b$facet$counter_total_all$journal, "data.frame")
+  expect_is(c, "sr_stats")
+  expect_equal(attr(c, "wt"), "json")
+  expect_is(d, "list")
+  expect_is(e, "list")
+})
+
+test_that("solr_stats works using wt=xml", {
+  skip_on_cran()
+  
+  invisible(solr_connect('http://api.plos.org/search', verbose = FALSE))
+  
+  aa <- solr_stats(q='science', wt="xml", stats.field='counter_total_all', raw=TRUE)
+  bb <- solr_stats(q='science', wt="xml", stats.field='counter_total_all')
+  cc <- solr_stats(q='science', wt="xml", stats.field=c('counter_total_all','alm_twitterCount'), 
+                   stats.facet=c('journal','volume'))
+  
+  # correct dimensions
+  expect_equal(length(aa), 1)
+  expect_equal(length(bb), 2)
+  expect_equal(NROW(bb$data), 1)
+  expect_named(cc$facet[[1]], c("volume", "journal"))
+  expect_equal(length(cc), 2)
+  
+  # classes
+  expect_is(aa, "sr_stats")
+  expect_is(bb, "list")
+  expect_is(cc, "list")
+  expect_is(bb$data, "data.frame")
+  expect_is(cc$facet[[1]][[1]], "data.frame")
+  expect_equal(attr(aa, "wt"), "xml")
+})
+
+test_that("solr_stats works with HathiTrust", {
+  skip_on_cran()
+  
+  url_hathi <- "http://chinkapin.pti.indiana.edu:9994/solr/meta/select"
+  invisible(solr_connect(url = url_hathi, verbose = FALSE))
+  
+  a <- solr_stats(q='*:*', stats.field = 'htrc_wordCount', raw = TRUE)
+  b <- solr_stats(q = '*:*', stats.field = c('htrc_wordCount', 'htrc_pageCount'))
+  c <- solr_stats(q = '*:*', stats.field = 'htrc_charCount')
+  d <- solr_parse(a) # list
+  
+  # correct dimensions
+  expect_equal(length(a), 1)
+  expect_equal(length(b), 2)
+  expect_equal(nrow(b$data), 2)
+  expect_equal(length(c), 2)
+  expect_equal(length(d), 2)
+  expect_equal(length(d$data$htrc_wordCount), 8)
+  
+  # classes
+  expect_is(a, "sr_stats")
+  expect_is(b, "list")
+  expect_is(b$data, "data.frame")
+  expect_is(d, "list")
+})
+
+test_that("solr_stats works with Datacite", {
+  skip_on_cran()
+  
+  url_dc <- "http://search.datacite.org/api"
+  invisible(solr_connect(url = url_dc, verbose = FALSE))
+  
+  a <- solr_stats(q='*:*', stats.field='publicationYear', raw=TRUE)
+  b <- solr_stats(q='*:*', stats.field='publicationYear', stats.facet = "prefix")
+  
+  # correct dimensions
+  expect_equal(length(a), 1)
+  expect_equal(length(b), 2)
+  expect_equal(nrow(b$data), 1)
+  expect_equal(NCOL(b$facet$publicationYear), 5)
+  
+  # classes
+  expect_is(a, "sr_stats")
+  expect_is(b, "list")
+  expect_is(b$data, "data.frame")
+  expect_is(b$facet$publicationYear, "data.frame")
+})
diff --git a/vignettes/cores_collections.Rmd b/vignettes/cores_collections.Rmd
new file mode 100644
index 0000000..33d4f3b
--- /dev/null
+++ b/vignettes/cores_collections.Rmd
@@ -0,0 +1,119 @@
+<!--
+%\VignetteEngine{knitr::knitr}
+%\VignetteIndexEntry{Cores/collections management}
+%\VignetteEncoding{UTF-8}
+-->
+
+
+
+Cores/collections management
+============================
+
+## Installation
+
+Stable version from CRAN
+
+
+```r
+install.packages("solrium")
+```
+
+Or the development version from GitHub
+
+
+```r
+install.packages("devtools")
+devtools::install_github("ropensci/solrium")
+```
+
+Load
+
+
+```r
+library("solrium")
+```
+
+Initialize connection
+
+
+```r
+solr_connect()
+```
+
+```
+#> <solr_connection>
+#>   url:    http://localhost:8983
+#>   errors: simple
+#>   verbose: TRUE
+#>   proxy:
+```
+
+## Cores 
+
+There are many operations you can do on cores, including:
+
+* `core_create()` - create a core
+* `core_exists()` - check if a core exists
+* `core_mergeindexes()` - merge indexes
+* `core_reload()` - reload a core
+* `core_rename()` - rename a core
+* `core_requeststatus()` - check request status
+* `core_split()` - split a core
+* `core_status()` - check core status
+* `core_swap()` - core swap
+* `core_unload()` - delete a core
+
+### Create a core
+
+
+```r
+core_create()
+```
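+
+A minimal sketch with arguments (the core name and `configSet` are arbitrary here, mirroring the package tests):
+
+```r
+if (!core_exists("helloWorld")) {
+  core_create(name = "helloWorld", instanceDir = "helloWorld",
+              configSet = "basic_configs")
+}
+```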
+
+### Delete a core
+
+
+```r
+core_unload()
+```
+
+## Collections
+
+There are many operations you can do on collections, including:
+
+* `collection_addreplica()` 
+* `collection_addreplicaprop()` 
+* `collection_addrole()` 
+* `collection_balanceshardunique()` 
+* `collection_clusterprop()` 
+* `collection_clusterstatus()` 
+* `collection_create()` 
+* `collection_createalias()` 
+* `collection_createshard()` 
+* `collection_delete()` 
+* `collection_deletealias()` 
+* `collection_deletereplica()` 
+* `collection_deletereplicaprop()` 
+* `collection_deleteshard()` 
+* `collection_list()` 
+* `collection_migrate()` 
+* `collection_overseerstatus()` 
+* `collection_rebalanceleaders()` 
+* `collection_reload()` 
+* `collection_removerole()` 
+* `collection_requeststatus()` 
+* `collection_splitshard()` 
+
+### Create a collection
+
+
+```r
+collection_create()
+```
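+
+A minimal sketch with arguments (assuming SolrCloud mode; the collection name and shard count mirror the package tests):
+
+```r
+# per the package tests, ping()$status is "OK" when the collection responds
+pinged <- ping(name = "helloWorld", verbose = FALSE)$status
+if (pinged != "OK") {
+  collection_create(name = "helloWorld", numShards = 2)
+}
+```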
+
+### Delete a collection
+
+
+```r
+collection_delete()
+```
diff --git a/vignettes/document_management.Rmd b/vignettes/document_management.Rmd
new file mode 100644
index 0000000..aca9daa
--- /dev/null
+++ b/vignettes/document_management.Rmd
@@ -0,0 +1,318 @@
+<!--
+%\VignetteEngine{knitr::knitr}
+%\VignetteIndexEntry{Document management}
+%\VignetteEncoding{UTF-8}
+-->
+
+
+
+Document management
+===================
+
+## Installation
+
+Stable version from CRAN
+
+
+```r
+install.packages("solrium")
+```
+
+Or the development version from GitHub
+
+
+```r
+install.packages("devtools")
+devtools::install_github("ropensci/solrium")
+```
+
+Load
+
+
+```r
+library("solrium")
+```
+
+Initialize connection. By default, you connect to `http://localhost:8983`
+
+
+```r
+solr_connect()
+```
+
+```
+#> <solr_connection>
+#>   url:    http://localhost:8983
+#>   errors: simple
+#>   verbose: TRUE
+#>   proxy:
+```
+
+## Create documents from R objects
+
+For now, only lists and data.frames are supported.
+
+### data.frame
+
+
+```r
+df <- data.frame(id = c(67, 68), price = c(1000, 500000000))
+add(df, "books")
+```
+
+```
+#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 112
+```
+
+### list
+
+
+
+
+```r
+ss <- list(list(id = 1, price = 100), list(id = 2, price = 500))
+add(ss, "books")
+```
+
+```
+#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 16
+```
+
+## Delete documents
+
+### By id
+
+Add some documents first
+
+
+
+
+```r
+docs <- list(list(id = 1, price = 100, name = "brown"),
+             list(id = 2, price = 500, name = "blue"),
+             list(id = 3, price = 2000L, name = "pink"))
+add(docs, "gettingstarted")
+```
+
+```
+#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 18
+```
+
+And the documents are now in your Solr database
+
+
+```r
+tail(solr_search(name = "gettingstarted", "*:*", base = "http://localhost:8983/solr/select", rows = 100))
+```
+
+```
+#> Source: local data frame [3 x 4]
+#> 
+#>      id price  name    _version_
+#>   (chr) (int) (chr)        (dbl)
+#> 1     1   100 brown 1.525729e+18
+#> 2     2   500  blue 1.525729e+18
+#> 3     3  2000  pink 1.525729e+18
+```
+
+Now delete those documents just added
+
+
+```r
+delete_by_id(ids = c(1, 2, 3), "gettingstarted")
+```
+
+```
+#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 24
+```
+
+And now they are gone
+
+
+```r
+tail(solr_search("gettingstarted", "*:*", base = "http://localhost:8983/solr/select", rows = 100))
+```
+
+```
+#> Source: local data frame [0 x 0]
+```
+
+### By query
+
+Add some documents first
+
+
+```r
+add(docs, "gettingstarted")
+```
+
+```
+#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 19
+```
+
+And the documents are now in your Solr database
+
+
+```r
+tail(solr_search("gettingstarted", "*:*", base = "http://localhost:8983/solr/select", rows = 100))
+```
+
+```
+#> Source: local data frame [3 x 4]
+#> 
+#>      id price  name    _version_
+#>   (chr) (int) (chr)        (dbl)
+#> 1     1   100 brown 1.525729e+18
+#> 2     2   500  blue 1.525729e+18
+#> 3     3  2000  pink 1.525729e+18
+```
+
+Now delete those documents just added
+
+
+```r
+delete_by_query(query = "(name:blue OR name:pink)", "gettingstarted")
+```
+
+```
+#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 12
+```
+
+And now they are gone
+
+
+```r
+tail(solr_search("gettingstarted", "*:*", base = "http://localhost:8983/solr/select", rows = 100))
+```
+
+```
+#> Source: local data frame [1 x 4]
+#> 
+#>      id price  name    _version_
+#>   (chr) (int) (chr)        (dbl)
+#> 1     1   100 brown 1.525729e+18
+```
+
+## Update documents from files
+
+This approach is best if you have many different things you want to do at once, e.g., delete and add files and set any additional options. The functions are:
+
+* `update_xml()`
+* `update_json()`
+* `update_csv()`
+
+There are separate functions for each data type because they take slightly different parameters, and to make it clear that these are the three supported input formats.
+
+### JSON
+
+
+```r
+file <- system.file("examples", "books.json", package = "solrium")
+update_json(file, "books")
+```
+
+```
+#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 39
+```
+
+### Add and delete in the same file
+
+Add a document first, that we can later delete
+
+
+```r
+ss <- list(list(id = 456, name = "cat"))
+add(ss, "books")
+```
+
+```
+#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 19
+```
+
+Now add a new document, and delete the one we just made
+
+
+```r
+file <- system.file("examples", "add_delete.xml", package = "solrium")
+cat(readLines(file), sep = "\n")
+```
+
+```
+#> <update>
+#> 	<add>
+#> 	  <doc>
+#> 	    <field name="id">978-0641723445</field>
+#> 	    <field name="cat">book,hardcover</field>
+#> 	    <field name="name">The Lightning Thief</field>
+#> 	    <field name="author">Rick Riordan</field>
+#> 	    <field name="series_t">Percy Jackson and the Olympians</field>
+#> 	    <field name="sequence_i">1</field>
+#> 	    <field name="genre_s">fantasy</field>
+#> 	    <field name="inStock">TRUE</field>
+#> 	    <field name="price">12.5</field>
+#> 	    <field name="pages_i">384</field>
+#> 	  </doc>
+#> 	</add>
+#> 	<delete>
+#> 		<id>456</id>
+#> 	</delete>
+#> </update>
+```
+
+```r
+update_xml(file, "books")
+```
+
+```
+#> $responseHeader
+#> $responseHeader$status
+#> [1] 0
+#> 
+#> $responseHeader$QTime
+#> [1] 23
+```
+
+### Notes
+
+Note that `update_xml()` and `update_json()` have exactly the same parameters, but simply use different data input formats. `update_csv()` is different in that you can't provide document- or field-level boosts or other modifications. In addition, `update_csv()` can accept not just CSV, but also TSV and files using other separators.
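+
+For completeness, a minimal CSV sketch (a `books` collection is assumed to exist locally, as in the sections above; the columns are arbitrary):
+
+```r
+df <- data.frame(id = 100:102, name = c("brown", "blue", "pink"))
+write.csv(df, file = "df.csv", row.names = FALSE, quote = FALSE)
+update_csv("df.csv", "books")
+```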
+
diff --git a/vignettes/local_setup.Rmd b/vignettes/local_setup.Rmd
new file mode 100644
index 0000000..290ff07
--- /dev/null
+++ b/vignettes/local_setup.Rmd
@@ -0,0 +1,79 @@
+<!--
+%\VignetteEngine{knitr::knitr}
+%\VignetteIndexEntry{Local Solr setup}
+%\VignetteEncoding{UTF-8}
+-->
+
+Local Solr setup 
+======
+
+### OSX
+
+__Based on http://lucene.apache.org/solr/quickstart.html__
+
+1. Download most recent version from an Apache mirror http://www.apache.org/dyn/closer.cgi/lucene/solr/5.4.1
+2. Unzip/untar the file. Move to your desired location. Now you have Solr `v.5.4.1`
+3. Go into the directory you just created: `cd solr-5.4.1`
+4. Launch Solr: `bin/solr start -e cloud -noprompt` - Sets up SolrCloud mode, rather
+than Standalone mode. As far as I can tell, SolrCloud mode seems more common.
+5. Once Step 4 completes, you can go to `http://localhost:8983/solr/` now, which is
+the admin interface for Solr.
+6. Load some documents: `bin/post -c gettingstarted docs/`
+7. Once Step 6 is complete (it will take a few minutes), navigate in your browser to `http://localhost:8983/solr/gettingstarted/select?q=*:*&wt=json` and you should see a
+bunch of documents. A quick check from R is sketched just below.
+
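+A quick sanity check from R (a minimal sketch; it assumes the `gettingstarted` collection created by the cloud example in Step 4):
+
+```r
+library("solrium")
+solr_connect()
+# per the package tests, the $status element is "OK" when the collection responds
+ping(name = "gettingstarted")$status
+```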
+
+### Linux
+
+> You should be able to use the above instructions for OSX on a Linux machine.
+
+#### Linuxbrew
+
+[Linuxbrew](http://brew.sh/linuxbrew/) is a port of Mac OS Homebrew to Linux. Operation is essentially the same as for Homebrew. Follow the [installation instructions for Linuxbrew](http://brew.sh/linuxbrew/#installation), and then the OSX instructions above should work without modification.
+
+### Windows
+
+You should be able to use the above instructions for OSX on a Windows machine, but with some slight differences. For example, the `bin/post` tool for OSX and Linux doesn't work on Windows, but see https://cwiki.apache.org/confluence/display/solr/Post+Tool#PostTool-Windows for an equivalent.
+
+### `solrium` usage
+
+And we can now use the `solrium` R package to query the Solr database to get raw JSON data:
+
+
+```r
+solr_connect('http://localhost:8983')
+solr_search("gettingstarted", q = '*:*', raw = TRUE, rows = 3)
+
+#> [1] "{\"responseHeader\":{\"status\":0,\"QTime\":8,\"params\":{\"q\":\"*:*\",\"rows\":\"3\",\"wt\":\"json\"}},\"response\":{\"numFound\":3577,\"start\":0,\"maxScore\":1.0,\"docs\":[{\"id\":\"/Users/sacmac/solr-5.2.1/docs/solr-core/org/apache/solr/highlight/class-use/SolrFragmenter.html\",\"stream_size\":[9016],\"date\":[\"2015-06-10T00:00:00Z\"],\"x_parsed_by\":[\"org.apache.tika.parser.DefaultParser\",\"org.apache.tika.parser.html.HtmlParser\"],\"stream_content_type\":[\"text/html\"] [...]
+#> attr(,"class")
+#> [1] "sr_search"
+#> attr(,"wt")
+#> [1] "json"
+```
+
+Or parsed data to a data.frame (just looking at a few columns for brevity):
+
+
+```r
+solr_search("gettingstarted", q = '*:*', fl = c('date', 'title'))
+
+#> Source: local data frame [10 x 2]
+#>
+#>                    date                                                                         title
+#> 1  2015-06-10T00:00:00Z   Uses of Interface org.apache.solr.highlight.SolrFragmenter (Solr 5.2.1 API)
+#> 2  2015-06-10T00:00:00Z Uses of Class org.apache.solr.highlight.SolrFragmentsBuilder (Solr 5.2.1 API)
+#> 3  2015-06-10T00:00:00Z                                                    CSVParser (Solr 5.2.1 API)
+#> 4  2015-06-10T00:00:00Z                                                     CSVUtils (Solr 5.2.1 API)
+#> 5  2015-06-10T00:00:00Z                                 org.apache.solr.internal.csv (Solr 5.2.1 API)
+#> 6  2015-06-10T00:00:00Z                 org.apache.solr.internal.csv Class Hierarchy (Solr 5.2.1 API)
+#> 7  2015-06-10T00:00:00Z       Uses of Class org.apache.solr.internal.csv.CSVStrategy (Solr 5.2.1 API)
+#> 8  2015-06-10T00:00:00Z          Uses of Class org.apache.solr.internal.csv.CSVUtils (Solr 5.2.1 API)
+#> 9  2015-06-10T00:00:00Z                                                    CSVConfig (Solr 5.2.1 API)
+#> 10 2015-06-10T00:00:00Z                                             CSVConfigGuesser (Solr 5.2.1 API)
+```
+
+See the other vignettes for more thorough examples:
+
+* `Document management`
+* `Cores/collections management`
+* `Solr Search`
diff --git a/vignettes/search.Rmd b/vignettes/search.Rmd
new file mode 100644
index 0000000..102204b
--- /dev/null
+++ b/vignettes/search.Rmd
@@ -0,0 +1,600 @@
+<!--
+%\VignetteEngine{knitr::knitr}
+%\VignetteIndexEntry{Solr search}
+%\VignetteEncoding{UTF-8}
+-->
+
+
+
+Solr search
+===========
+
+**A general purpose R interface to [Apache Solr](http://lucene.apache.org/solr/)**
+
+## Solr info
+
++ [Solr home page](http://lucene.apache.org/solr/)
++ [Highlighting help](http://wiki.apache.org/solr/HighlightingParameters)
++ [Faceting help](http://wiki.apache.org/solr/SimpleFacetParameters)
++ [Install and Setup SOLR in OSX, including running Solr](http://risnandar.wordpress.com/2013/09/08/how-to-install-and-setup-apache-lucene-solr-in-osx/)
+
+## Installation
+
+Stable version from CRAN
+
+
+```r
+install.packages("solrium")
+```
+
+Or the development version from GitHub
+
+
+```r
+install.packages("devtools")
+devtools::install_github("ropensci/solrium")
+```
+
+Load
+
+
+```r
+library("solrium")
+```
+
+## Setup connection
+
+You can setup for a remote Solr instance or on your local machine.
+
+
+```r
+solr_connect('http://api.plos.org/search')
+#> <solr_connection>
+#>   url:    http://api.plos.org/search
+#>   errors: simple
+#>   verbose: TRUE
+#>   proxy:
+```
+
+## Rundown
+
+`solr_search()` only returns the `docs` element of a Solr response body. If `docs` is
+all you need, then this function will do the job. If you need facet data only, or mlt
+data only, see the appropriate functions for each of those below. Another function,
+`solr_all()`, has a similar parameter interface to `solr_search()`, but
+returns all parts of the response body (facets, mlt, groups, stats, etc.),
+as long as you request them.
+
+## Search docs
+
+`solr_search()` returns only docs. A basic search:
+
+
+```r
+solr_search(q = '*:*', rows = 2, fl = 'id')
+#> Source: local data frame [2 x 1]
+#> 
+#>                                        id
+#>                                     (chr)
+#> 1 10.1371/journal.pone.0142243/references
+#> 2       10.1371/journal.pone.0142243/body
+```
+
+__Search in specific fields with `:`__
+
+Search for the word "ecology" in the title and the word "cell" in the body
+
+
+```r
+solr_search(q = 'title:"ecology" AND body:"cell"', fl = 'title', rows = 5)
+#> Source: local data frame [5 x 1]
+#> 
+#>                                                       title
+#>                                                       (chr)
+#> 1                        The Ecology of Collective Behavior
+#> 2                                   Ecology's Big, Hot Idea
+#> 3     Spatial Ecology of Bacteria at the Microscale in Soil
+#> 4 Biofilm Formation As a Response to Ecological Competition
+#> 5    Ecology of Root Colonizing Massilia (Oxalobacteraceae)
+```
+
+__Wildcards__
+
+Search for words that start with "cell" in the title field
+
+
+```r
+solr_search(q = 'title:"cell*"', fl = 'title', rows = 5)
+#> Source: local data frame [5 x 1]
+#> 
+#>                                                                         title
+#>                                                                         (chr)
+#> 1                                Tumor Cell Recognition Efficiency by T Cells
+#> 2 Cancer Stem Cell-Like Side Population Cells in Clear Cell Renal Cell Carcin
+#> 3 Dcas Supports Cell Polarization and Cell-Cell Adhesion Complexes in Develop
+#> 4                  Cell-Cell Contact Preserves Cell Viability via Plakoglobin
+#> 5 MS4a4B, a CD20 Homologue in T Cells, Inhibits T Cell Propagation by Modulat
+```
+
+__Proximity search__
+
+Search for words "sports" and "alcohol" within four words of each other
+
+
+```r
+solr_search(q = 'everything:"stem cell"~7', fl = 'title', rows = 3)
+#> Source: local data frame [3 x 1]
+#> 
+#>                                                                         title
+#>                                                                         (chr)
+#> 1 Correction: Reduced Intensity Conditioning, Combined Transplantation of Hap
+#> 2                                            A Recipe for Self-Renewing Brain
+#> 3  Gene Expression Profile Created for Mouse Stem Cells and Developing Embryo
+```
+
+__Range searches__
+
+Search for articles with a Twitter count between 5 and 50
+
+
+```r
+solr_search(q = '*:*', fl = c('alm_twitterCount', 'id'), fq = 'alm_twitterCount:[5 TO 50]',
+rows = 10)
+#> Source: local data frame [10 x 2]
+#> 
+#>                                                     id alm_twitterCount
+#>                                                  (chr)            (int)
+#> 1            10.1371/journal.ppat.1005403/introduction                6
+#> 2  10.1371/journal.ppat.1005403/results_and_discussion                6
+#> 3   10.1371/journal.ppat.1005403/materials_and_methods                6
+#> 4  10.1371/journal.ppat.1005403/supporting_information                6
+#> 5                         10.1371/journal.ppat.1005401                6
+#> 6                   10.1371/journal.ppat.1005401/title                6
+#> 7                10.1371/journal.ppat.1005401/abstract                6
+#> 8              10.1371/journal.ppat.1005401/references                6
+#> 9                    10.1371/journal.ppat.1005401/body                6
+#> 10           10.1371/journal.ppat.1005401/introduction                6
+```
+
+__Boosts__
+
+Assign a higher boost to title matches than to abstract matches (compare the two calls)
+
+
+```r
+solr_search(q = 'title:"cell" abstract:"science"', fl = 'title', rows = 3)
+#> Source: local data frame [3 x 1]
+#> 
+#>                                                                         title
+#>                                                                         (chr)
+#> 1 I Want More and Better Cells! – An Outreach Project about Stem Cells and It
+#> 2                                   Centre of the Cell: Science Comes to Life
+#> 3 Globalization of Stem Cell Science: An Examination of Current and Past Coll
+```
+
+
+```r
+solr_search(q = 'title:"cell"^1.5 AND abstract:"science"', fl = 'title', rows = 3)
+#> Source: local data frame [3 x 1]
+#> 
+#>                                                                         title
+#>                                                                         (chr)
+#> 1                                   Centre of the Cell: Science Comes to Life
+#> 2 I Want More and Better Cells! – An Outreach Project about Stem Cells and It
+#> 3          Derivation of Hair-Inducing Cell from Human Pluripotent Stem Cells
+```
+
+## Search all
+
+`solr_all()` differs from `solr_search()` in that it allows specifying facets, mlt, groups,
+stats, etc, and returns all of those. It defaults to `parsetype = "list"` and `wt="json"`,
+whereas `solr_search()` defaults to `parsetype = "df"` and `wt="csv"`. `solr_all()` returns
+by default a list, whereas `solr_search()` by default returns a data.frame.
+
+A basic search, just docs output
+
+
+```r
+solr_all(q = '*:*', rows = 2, fl = 'id')
+#> $response
+#> $response$numFound
+#> [1] 1502814
+#> 
+#> $response$start
+#> [1] 0
+#> 
+#> $response$docs
+#> $response$docs[[1]]
+#> $response$docs[[1]]$id
+#> [1] "10.1371/journal.pone.0142243/references"
+#> 
+#> 
+#> $response$docs[[2]]
+#> $response$docs[[2]]$id
+#> [1] "10.1371/journal.pone.0142243/body"
+```
+
+Get docs, mlt, and stats output
+
+
+```r
+solr_all(q = 'ecology', rows = 2, fl = 'id', mlt = 'true', mlt.count = 2, mlt.fl = 'abstract', stats = 'true', stats.field = 'counter_total_all')
+#> $response
+#> $response$numFound
+#> [1] 31467
+#> 
+#> $response$start
+#> [1] 0
+#> 
+#> $response$docs
+#> $response$docs[[1]]
+#> $response$docs[[1]]$id
+#> [1] "10.1371/journal.pone.0059813"
+#> 
+#> 
+#> $response$docs[[2]]
+#> $response$docs[[2]]$id
+#> [1] "10.1371/journal.pone.0001248"
+#> 
+#> 
+#> 
+#> 
+#> $moreLikeThis
+#> $moreLikeThis$`10.1371/journal.pone.0059813`
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$numFound
+#> [1] 152704
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$start
+#> [1] 0
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$docs
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$docs[[1]]
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$docs[[1]]$id
+#> [1] "10.1371/journal.pone.0111996"
+#> 
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$docs[[2]]
+#> $moreLikeThis$`10.1371/journal.pone.0059813`$docs[[2]]$id
+#> [1] "10.1371/journal.pone.0143687"
+#> 
+#> 
+#> 
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0001248`
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$numFound
+#> [1] 159058
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$start
+#> [1] 0
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$docs
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$docs[[1]]
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$docs[[1]]$id
+#> [1] "10.1371/journal.pone.0001275"
+#> 
+#> 
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$docs[[2]]
+#> $moreLikeThis$`10.1371/journal.pone.0001248`$docs[[2]]$id
+#> [1] "10.1371/journal.pone.0024192"
+#> 
+#> 
+#> 
+#> 
+#> 
+#> $stats
+#> $stats$stats_fields
+#> $stats$stats_fields$counter_total_all
+#> $stats$stats_fields$counter_total_all$min
+#> [1] 16
+#> 
+#> $stats$stats_fields$counter_total_all$max
+#> [1] 367697
+#> 
+#> $stats$stats_fields$counter_total_all$count
+#> [1] 31467
+#> 
+#> $stats$stats_fields$counter_total_all$missing
+#> [1] 0
+#> 
+#> $stats$stats_fields$counter_total_all$sum
+#> [1] 141552408
+#> 
+#> $stats$stats_fields$counter_total_all$sumOfSquares
+#> [1] 3.162032e+12
+#> 
+#> $stats$stats_fields$counter_total_all$mean
+#> [1] 4498.44
+#> 
+#> $stats$stats_fields$counter_total_all$stddev
+#> [1] 8958.45
+#> 
+#> $stats$stats_fields$counter_total_all$facets
+#> named list()
+```
+
+
+## Facet
+
+
+```r
+solr_facet(q = '*:*', facet.field = 'journal', facet.query = c('cell', 'bird'))
+#> $facet_queries
+#>   term  value
+#> 1 cell 128657
+#> 2 bird  13063
+#> 
+#> $facet_fields
+#> $facet_fields$journal
+#>                                 X1      X2
+#> 1                         plos one 1233662
+#> 2                    plos genetics   49285
+#> 3                   plos pathogens   42817
+#> 4       plos computational biology   36373
+#> 5 plos neglected tropical diseases   33911
+#> 6                     plos biology   28745
+#> 7                    plos medicine   19934
+#> 8             plos clinical trials     521
+#> 9                     plos medicin       9
+#> 
+#> 
+#> $facet_pivot
+#> NULL
+#> 
+#> $facet_dates
+#> NULL
+#> 
+#> $facet_ranges
+#> NULL
+```
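+
+The `facet_ranges` slot above is `NULL` because no range facet was requested. A hedged
+sketch of a range facet over a numeric field, assuming `solr_facet()` passes the standard
+Solr `facet.range.*` parameters through (the parsed `facet_ranges` slot suggests it does):
+
+```r
+# range facet over tweet counts, in buckets of 10 from 0 to 100
+solr_facet(q = '*:*', facet.range = 'alm_twitterCount',
+  facet.range.start = 0, facet.range.end = 100, facet.range.gap = 10)
+```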
+
+## Highlight
+
+
+```r
+solr_highlight(q = 'alcohol', hl.fl = 'abstract', rows = 2)
+#> $`10.1371/journal.pmed.0040151`
+#> $`10.1371/journal.pmed.0040151`$abstract
+#> [1] "Background: <em>Alcohol</em> consumption causes an estimated 4% of the global disease burden, prompting"
+#> 
+#> 
+#> $`10.1371/journal.pone.0027752`
+#> $`10.1371/journal.pone.0027752`$abstract
+#> [1] "Background: The negative influences of <em>alcohol</em> on TB management with regard to delays in seeking"
+```
+
+## Stats
+
+
+```r
+out <- solr_stats(q = 'ecology', stats.field = c('counter_total_all', 'alm_twitterCount'), stats.facet = c('journal', 'volume'))
+```
+
+
+```r
+out$data
+#>                   min    max count missing       sum sumOfSquares
+#> counter_total_all  16 367697 31467       0 141552408 3.162032e+12
+#> alm_twitterCount    0   1756 31467       0    168586 3.267801e+07
+#>                          mean     stddev
+#> counter_total_all 4498.439889 8958.45030
+#> alm_twitterCount     5.357549   31.77757
+```
+
+
+```r
+out$facet
+#> $counter_total_all
+#> $counter_total_all$volume
+#>     min    max count missing      sum sumOfSquares      mean    stddev
+#> 1    20 166202   887       0  2645927  63864880371  2983.007  7948.200
+#> 2   495 103147   105       0  1017325  23587444387  9688.810 11490.287
+#> 3  1950  69628    69       0   704216  13763808310 10206.029  9834.333
+#> 4   742  13856     9       0    48373    375236903  5374.778  3795.438
+#> 5  1871 182622    81       0  1509647  87261688837 18637.617 27185.811
+#> 6  1667 117922   482       0  5836186 162503606896 12108.270 13817.754
+#> 7  1340 128083   741       0  7714963 188647618509 10411.556 12098.852
+#> 8   667 362410  1010       0  9692492 340237069126  9596.527 15653.040
+#> 9   103 113220  1539       0 12095764 218958657256  7859.496  8975.188
+#> 10   72 243873  2948       0 17699332 327210596846  6003.844  8658.717
+#> 11   51 184259  4825       0 24198104 382922818910  5015.151  7363.541
+#> 12   16 367697  6360       0 26374352 533183277470  4146.911  8163.790
+#> 13   42 287741  6620       0 21003701 612616254755  3172.765  9082.194
+#> 14  128 161520  5791       0 11012026 206899109466  1901.576  5667.209
+#>    volume
+#> 1      11
+#> 2      12
+#> 3      13
+#> 4      14
+#> 5       1
+#> 6       2
+#> 7       3
+#> 8       4
+#> 9       5
+#> 10      6
+#> 11      7
+#> 12      8
+#> 13      9
+#> 14     10
+#> 
+#> $counter_total_all$journal
+#>    min    max count missing      sum sumOfSquares      mean    stddev
+#> 1  667 117922   243       0  4074303 1.460258e+11 16766.679 17920.074
+#> 2  742 265561   884       0 14006081 5.507548e+11 15843.983 19298.065
+#> 3 8463  13797     2       0    22260 2.619796e+08 11130.000  3771.708
+#> 4   16 367697 25915       0 96069530 1.943903e+12  3707.101  7827.546
+#> 5  915  61956   595       0  4788553 6.579963e+10  8047.988  6774.558
+#> 6  548  76290   758       0  6326284 9.168443e+10  8346.021  7167.106
+#> 7  268 212048  1239       0  5876481 1.010080e+11  4742.923  7686.101
+#> 8  495 287741   580       0  4211717 1.411022e+11  7261.581 13815.867
+#>                            journal
+#> 1                    plos medicine
+#> 2                     plos biology
+#> 3             plos clinical trials
+#> 4                         plos one
+#> 5                   plos pathogens
+#> 6                    plos genetics
+#> 7 plos neglected tropical diseases
+#> 8       plos computational biology
+#> 
+#> 
+#> $alm_twitterCount
+#> $alm_twitterCount$volume
+#>    min  max count missing   sum sumOfSquares      mean     stddev volume
+#> 1    0 1756   887       0 12295      4040629 13.861330  66.092178     11
+#> 2    0 1045   105       0  6466      1885054 61.580952 119.569402     12
+#> 3    0  283    69       0  3478       509732 50.405797  70.128101     13
+#> 4    6  274     9       0   647       102391 71.888889  83.575482     14
+#> 5    0   42    81       0   176         4996  2.172840   7.594060      1
+#> 6    0   74   482       0   628        15812  1.302905   5.583197      2
+#> 7    0   48   741       0   652        11036  0.879892   3.760087      3
+#> 8    0  239  1010       0  1039        74993  1.028713   8.559485      4
+#> 9    0  126  1539       0  1901        90297  1.235218   7.562004      5
+#> 10   0  886  2948       0  4357      1245453  1.477951  20.504442      6
+#> 11   0  822  4825       0 19646      2037596  4.071710  20.144602      7
+#> 12   0 1503  6360       0 35938      6505618  5.650629  31.482092      8
+#> 13   0 1539  6620       0 49837     12847207  7.528248  43.408246      9
+#> 14   0  863  5791       0 31526      3307198  5.443965  23.271216     10
+#> 
+#> $alm_twitterCount$journal
+#>   min  max count missing    sum sumOfSquares      mean   stddev
+#> 1   0  777   243       0   4251      1028595 17.493827 62.79406
+#> 2   0 1756   884       0  16405      6088729 18.557692 80.93655
+#> 3   0    3     2       0      3            9  1.500000  2.12132
+#> 4   0 1539 25915       0 123409     23521391  4.762068 29.74883
+#> 5   0  122   595       0   4265       160581  7.168067 14.79428
+#> 6   0  178   758       0   4277       148277  5.642480 12.80605
+#> 7   0  886  1239       0   4972      1048908  4.012914 28.82956
+#> 8   0  285   580       0   4166       265578  7.182759 20.17431
+#>                            journal
+#> 1                    plos medicine
+#> 2                     plos biology
+#> 3             plos clinical trials
+#> 4                         plos one
+#> 5                   plos pathogens
+#> 6                    plos genetics
+#> 7 plos neglected tropical diseases
+#> 8       plos computational biology
+```
+
+## More like this
+
+`solr_mlt()` is a function to return documents similar to those matching your query ("more like this")
+
+
+```r
+out <- solr_mlt(q = 'title:"ecology" AND body:"cell"', mlt.fl = 'title', mlt.mindf = 1, mlt.mintf = 1, fl = 'counter_total_all', rows = 5)
+out$docs
+#> Source: local data frame [5 x 2]
+#> 
+#>                             id counter_total_all
+#>                          (chr)             (int)
+#> 1 10.1371/journal.pbio.1001805             17081
+#> 2 10.1371/journal.pbio.0020440             23882
+#> 3 10.1371/journal.pone.0087217              5935
+#> 4 10.1371/journal.pbio.1002191             13036
+#> 5 10.1371/journal.pone.0040117              4316
+```
+
+
+```r
+out$mlt
+#> $`10.1371/journal.pbio.1001805`
+#>                             id counter_total_all
+#> 1 10.1371/journal.pone.0082578              2196
+#> 2 10.1371/journal.pone.0098876              2448
+#> 3 10.1371/journal.pone.0102159              1177
+#> 4 10.1371/journal.pcbi.1002652              3102
+#> 5 10.1371/journal.pcbi.1003408              6942
+#> 
+#> $`10.1371/journal.pbio.0020440`
+#>                             id counter_total_all
+#> 1 10.1371/journal.pone.0102679              3112
+#> 2 10.1371/journal.pone.0035964              5571
+#> 3 10.1371/journal.pone.0003259              2800
+#> 4 10.1371/journal.pntd.0003377              3392
+#> 5 10.1371/journal.pone.0068814              7522
+#> 
+#> $`10.1371/journal.pone.0087217`
+#>                             id counter_total_all
+#> 1 10.1371/journal.pone.0131665               409
+#> 2 10.1371/journal.pcbi.0020092             19604
+#> 3 10.1371/journal.pone.0133941               475
+#> 4 10.1371/journal.pone.0123774               997
+#> 5 10.1371/journal.pone.0140306               322
+#> 
+#> $`10.1371/journal.pbio.1002191`
+#>                             id counter_total_all
+#> 1 10.1371/journal.pbio.1002232              1950
+#> 2 10.1371/journal.pone.0131700               979
+#> 3 10.1371/journal.pone.0070448              1608
+#> 4 10.1371/journal.pone.0028737              7481
+#> 5 10.1371/journal.pone.0052330              5595
+#> 
+#> $`10.1371/journal.pone.0040117`
+#>                             id counter_total_all
+#> 1 10.1371/journal.pone.0069352              2763
+#> 2 10.1371/journal.pone.0148280               467
+#> 3 10.1371/journal.pone.0035502              4031
+#> 4 10.1371/journal.pone.0014065              5764
+#> 5 10.1371/journal.pone.0113280              1984
+```
+
+## Groups
+
+`solr_group()` is a function to group search results by a field (Solr result grouping, also known as field collapsing)
+
+
+```r
+solr_group(q = 'ecology', group.field = 'journal', group.limit = 1, fl = c('id', 'alm_twitterCount'))
+#>                         groupValue numFound start
+#> 1                         plos one    25915     0
+#> 2       plos computational biology      580     0
+#> 3                     plos biology      884     0
+#> 4                             none     1251     0
+#> 5                    plos medicine      243     0
+#> 6 plos neglected tropical diseases     1239     0
+#> 7                   plos pathogens      595     0
+#> 8                    plos genetics      758     0
+#> 9             plos clinical trials        2     0
+#>                             id alm_twitterCount
+#> 1 10.1371/journal.pone.0059813               56
+#> 2 10.1371/journal.pcbi.1003594               21
+#> 3 10.1371/journal.pbio.1002358               16
+#> 4 10.1371/journal.pone.0046671                2
+#> 5 10.1371/journal.pmed.1000303                0
+#> 6 10.1371/journal.pntd.0002577                2
+#> 7 10.1371/journal.ppat.1003372                2
+#> 8 10.1371/journal.pgen.1001197                0
+#> 9 10.1371/journal.pctr.0020010                0
+```
+
+## Parsing
+
+`solr_parse()` is a general purpose parser function with extension methods for parsing outputs from functions in `solrium`. `solr_parse()` is used internally within functions to do parsing after retrieving data from the server. You can optionally get back raw `json`, `xml`, or `csv` by setting `raw = TRUE`, and then parse afterwards with `solr_parse()`.
+
+For example:
+
+
+```r
+(out <- solr_highlight(q = 'alcohol', hl.fl = 'abstract', rows = 2, raw = TRUE))
+#> [1] "{\"response\":{\"numFound\":20268,\"start\":0,\"docs\":[{},{}]},\"highlighting\":{\"10.1371/journal.pmed.0040151\":{\"abstract\":[\"Background: <em>Alcohol</em> consumption causes an estimated 4% of the global disease burden, prompting\"]},\"10.1371/journal.pone.0027752\":{\"abstract\":[\"Background: The negative influences of <em>alcohol</em> on TB management with regard to delays in seeking\"]}}}\n"
+#> attr(,"class")
+#> [1] "sr_high"
+#> attr(,"wt")
+#> [1] "json"
+```
+
+Then parse
+
+
+```r
+solr_parse(out, 'df')
+#>                          names
+#> 1 10.1371/journal.pmed.0040151
+#> 2 10.1371/journal.pone.0027752
+#>                                                                                                    abstract
+#> 1   Background: <em>Alcohol</em> consumption causes an estimated 4% of the global disease burden, prompting
+#> 2 Background: The negative influences of <em>alcohol</em> on TB management with regard to delays in seeking
+```
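+
+The same raw object can also be parsed into a nested list rather than a data.frame,
+assuming the highlight parser accepts `parsetype = "list"` like the other parsers do:
+
+```r
+# parse the raw JSON into a nested list instead of a data.frame
+solr_parse(out, 'list')
+```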
+
+[Please report any issues or bugs](https://github.com/ropensci/solrium/issues).

-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/debian-med/r-cran-solrium.git


