[med-svn] [r-cran-curl] 12/14: New upstream version 2.8.1

Andreas Tille tille at debian.org
Fri Oct 13 09:28:36 UTC 2017


This is an automated email from the git hooks/post-receive script.

tille pushed a commit to branch master
in repository r-cran-curl.

commit 4837e270e38eaca313ba1662d4b268ad816fdbfb
Author: Andreas Tille <tille@debian.org>
Date:   Fri Oct 13 11:25:29 2017 +0200

    New upstream version 2.8.1
---
 DESCRIPTION                        |  38 ++
 LICENSE                            |   2 +
 MD5                                |  83 ++++
 NAMESPACE                          |  67 +++
 NEWS                               | 161 +++++++
 R/curl.R                           |  78 ++++
 R/download.R                       |  36 ++
 R/echo.R                           |  99 +++++
 R/escape.R                         |  26 ++
 R/fetch.R                          | 109 +++++
 R/form.R                           |  48 +++
 R/handle.R                         | 195 +++++++++
 R/multi.R                          | 133 ++++++
 R/nslookup.R                       |  36 ++
 R/onload.R                         |  27 ++
 R/options.R                        |  37 ++
 R/parse_headers.R                  |  51 +++
 R/proxy.R                          |  48 +++
 R/utilities.R                      |  46 ++
 build/vignette.rds                 | Bin 0 -> 230 bytes
 cleanup                            |   2 +
 configure                          |  71 ++++
 configure.win                      |   0
 data/curl_symbols.rda              | Bin 0 -> 7777 bytes
 debian/README.source               |   9 -
 debian/README.test                 |   9 -
 debian/changelog                   |  39 --
 debian/compat                      |   1 -
 debian/control                     |  31 --
 debian/copyright                   |  32 --
 debian/docs                        |   3 -
 debian/rules                       |   8 -
 debian/source/format               |   1 -
 debian/tests/control               |   3 -
 debian/tests/run-unit-test         |  12 -
 debian/watch                       |   3 -
 inst/doc/intro.R                   | 159 +++++++
 inst/doc/intro.Rmd                 | 328 +++++++++++++++
 inst/doc/intro.html                | 643 ++++++++++++++++++++++++++++
 man/curl.Rd                        |  78 ++++
 man/curl_download.Rd               |  47 +++
 man/curl_echo.Rd                   |  32 ++
 man/curl_escape.Rd                 |  29 ++
 man/curl_fetch.Rd                  |  91 ++++
 man/curl_options.Rd                |  45 ++
 man/handle.Rd                      |  74 ++++
 man/handle_cookies.Rd              |  34 ++
 man/ie_proxy.Rd                    |  27 ++
 man/multi.Rd                       |  89 ++++
 man/multipart.Rd                   |  25 ++
 man/nslookup.Rd                    |  33 ++
 man/parse_date.Rd                  |  23 +
 man/parse_headers.Rd               |  36 ++
 src/Makevars.in                    |   7 +
 src/Makevars.win                   |  34 ++
 src/callbacks.c                    |  81 ++++
 src/callbacks.h                    |   9 +
 src/curl-common.h                  |  65 +++
 src/curl-symbols.h                 | 779 ++++++++++++++++++++++++++++++++++
 src/curl.c                         | 287 +++++++++++++
 src/download.c                     |  51 +++
 src/escape.c                       |  33 ++
 src/fetch.c                        |  91 ++++
 src/form.c                         |  46 ++
 src/getdate.c                      |  17 +
 src/handle.c                       | 388 +++++++++++++++++
 src/ieproxy.c                      | 177 ++++++++
 src/init.c                         |  18 +
 src/interrupt.c                    |  70 ++++
 src/multi.c                        | 247 +++++++++++
 src/nslookup.c                     | 100 +++++
 src/reflist.c                      |  56 +++
 src/split.c                        |  16 +
 src/utils.c                        | 132 ++++++
 src/version.c                      |  64 +++
 src/winhttp32.def.in               |  37 ++
 src/winhttp64.def.in               |  37 ++
 src/winidn.c                       |  70 ++++
 tests/testthat.R                   |   4 +
 tests/testthat/helper-version.R    |  36 ++
 tests/testthat/test-auth.R         |  30 ++
 tests/testthat/test-blockopen.R    |  75 ++++
 tests/testthat/test-certificates.R |  20 +
 tests/testthat/test-connection.R   |  37 ++
 tests/testthat/test-cookies.R      |  51 +++
 tests/testthat/test-escape.R       |  35 ++
 tests/testthat/test-gc.R           |  60 +++
 tests/testthat/test-handle.R       |  94 +++++
 tests/testthat/test-idn.R          |  29 ++
 tests/testthat/test-multi.R        | 104 +++++
 tests/testthat/test-nonblocking.R  |  64 +++
 tests/testthat/test-post.R         | 110 +++++
 tools/symbols-in-versions          | 832 +++++++++++++++++++++++++++++++++++++
 tools/symbols.R                    |  50 +++
 tools/winlibs.R                    |   8 +
 vignettes/intro.Rmd                | 328 +++++++++++++++
 96 files changed, 8065 insertions(+), 151 deletions(-)

diff --git a/DESCRIPTION b/DESCRIPTION
new file mode 100644
index 0000000..2ed9db1
--- /dev/null
+++ b/DESCRIPTION
@@ -0,0 +1,38 @@
+Package: curl
+Type: Package
+Title: A Modern and Flexible Web Client for R
+Version: 2.8.1
+Authors@R: c(
+    person("Jeroen", "Ooms", , "jeroen@berkeley.edu", role = c("cre", "aut")),
+    person("Hadley", "Wickham", , "hadley@rstudio.com", role = "ctb"),
+    person("RStudio", role = "cph")
+    )
+Description: The curl() and curl_download() functions provide highly
+    configurable drop-in replacements for base url() and download.file() with
+    better performance, support for encryption (https, ftps), gzip compression,
+    authentication, and other 'libcurl' goodies. The core of the package implements a
+    framework for performing fully customized requests where data can be processed
+    either in memory, on disk, or streamed via the callback or connection
+    interfaces. Some knowledge of 'libcurl' is recommended; for a more
+    user-friendly web client see the 'httr' package which builds on this
+    package with http-specific tools and logic.
+License: MIT + file LICENSE
+SystemRequirements: libcurl: libcurl-devel (rpm) or
+        libcurl4-openssl-dev (deb).
+URL: https://github.com/jeroen/curl#readme (devel)
+        https://curl.haxx.se/libcurl/ (upstream)
+BugReports: https://github.com/jeroen/curl/issues
+Suggests: testthat (>= 1.0.0), knitr, jsonlite, rmarkdown, magrittr,
+        httpuv, webutils
+VignetteBuilder: knitr
+Depends: R (>= 3.0.0)
+LazyData: true
+RoxygenNote: 6.0.1
+NeedsCompilation: yes
+Packaged: 2017-07-20 10:47:34 UTC; jeroen
+Author: Jeroen Ooms [cre, aut],
+  Hadley Wickham [ctb],
+  RStudio [cph]
+Maintainer: Jeroen Ooms <jeroen@berkeley.edu>
+Repository: CRAN
+Date/Publication: 2017-07-21 23:02:20 UTC
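The Description field above positions curl() and curl_download() as drop-in replacements for base url() and download.file(). A minimal sketch of that correspondence (httpbin.org is a public test service, so this requires network access; not part of the package itself):

```r
library(curl)

# Base R:  con <- url("https://httpbin.org/get")
con <- curl("https://httpbin.org/get")
print(readLines(con))
close(con)

# Base R:  download.file(url, destfile)
tmp <- tempfile()
curl_download("https://httpbin.org/get", tmp)
cat(readLines(tmp), sep = "\n")
```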
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..42973e6
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,2 @@
+YEAR: 2017
+COPYRIGHT HOLDER: Jeroen Ooms; RStudio
diff --git a/MD5 b/MD5
new file mode 100644
index 0000000..a773d62
--- /dev/null
+++ b/MD5
@@ -0,0 +1,83 @@
+777eb3d19c39cfec86dea77f3effd1f8 *DESCRIPTION
+3d90003e73983dbcc48aa8a9aba50c5a *LICENSE
+493180de7b982326d0812811234fa9d0 *NAMESPACE
+5b255990bb811f5206c7fae773055ee1 *NEWS
+a8d57b37295e51b5336a7d649f2655c9 *R/curl.R
+3c3e1de8ce698ab9d5a8d2ad5dcf6243 *R/download.R
+d5a884a9572387ca7ba6bc7d8af20b1a *R/echo.R
+c6c5182b1629ba6080707e80026cf3ca *R/escape.R
+987eb506e56631bf185d17d7965343e0 *R/fetch.R
+826b6c6f8a393b4377c7c6e61451c2a8 *R/form.R
+335e3fd0dbfa6feac1a15036bae9b5f7 *R/handle.R
+9bbad1a5258633609fd6cbfd817c9831 *R/multi.R
+0eb54202268bceedd53ad2129d447d7b *R/nslookup.R
+44c8b59af7ae3fdb01f1e4e60cc2350c *R/onload.R
+736948f114335f0d630583653865f99e *R/options.R
+341deaec71f85196697042ede275d232 *R/parse_headers.R
+4c1746ae3e525b40a8528a4592ebff62 *R/proxy.R
+9b6c44e7f51500ff176b02644584db91 *R/utilities.R
+7536b91db2d5eeee84e3023666f0716f *build/vignette.rds
+6071edd604dbeb75308cfbedc7790398 *cleanup
+9744594dcb86ab087bf10fa12839696c *configure
+d41d8cd98f00b204e9800998ecf8427e *configure.win
+c39c444068e373bfb2c8ed9ed4e3ac5f *data/curl_symbols.rda
+611b46a4e232014c7ae278b68d8feb03 *inst/doc/intro.R
+405dc07df5149c76d91ee28fdc948c5f *inst/doc/intro.Rmd
+94d3edf4227ac497cfa8352812c78f64 *inst/doc/intro.html
+d57f6d9e5c62e5a803cb6a0986ead273 *man/curl.Rd
+5d3ad4ce0121c05dd1c57a480cfa0a8c *man/curl_download.Rd
+cb407e5e439cc862c074502158d4303d *man/curl_echo.Rd
+a485ea24e0b0f48300fd5326dadfe926 *man/curl_escape.Rd
+78f33f00b206fac85719739d68ae1f52 *man/curl_fetch.Rd
+c052e459cb2cf67a0091670567dc8926 *man/curl_options.Rd
+a37a546454cd67c61686643e82ea61ed *man/handle.Rd
+4f516d10514eb7cb39ac8954a0794905 *man/handle_cookies.Rd
+9c70d4d244f200a1a86337131cc6a1a8 *man/ie_proxy.Rd
+c05fb6a2f7868427b2650cd3ea81c286 *man/multi.Rd
+cefd4119ea6f827b69a4d8e6c49358ea *man/multipart.Rd
+837636046e875dde74e265de8d9c1f6f *man/nslookup.Rd
+c3175e1371fa28d57c203d2af8fcd60b *man/parse_date.Rd
+d0a186c57e0a7306a4c892f5498d8f85 *man/parse_headers.Rd
+907343d8176df8e5611a85761a3e91f1 *src/Makevars.in
+9c4f36d80d26084373473ef924704b8d *src/Makevars.win
+6919a902921153ad6bcc2fc133653860 *src/callbacks.c
+6d391e7654d5f35cc336ab174454e0ae *src/callbacks.h
+8ac023718b3b49b910d4e4961dc1144c *src/curl-common.h
+ca60d50a5e73b486510be5d802fdea28 *src/curl-symbols.h
+351234356a2bff7a629a0b9628c35197 *src/curl.c
+53ef5e1c1fae06308fef2c35ab2c2357 *src/download.c
+6ced00f0ef2e58a9bbf1ad43367f9f31 *src/escape.c
+8efd3934c757642b7eb3cfcd6f3b874a *src/fetch.c
+7f6f09915f019020eec775a4dabdb7ac *src/form.c
+0c66ee6a65fa291ad390119953beb871 *src/getdate.c
+f4011970be658707d4c52924eebac93f *src/handle.c
+752bda61360b3cf594c9943d43cef8b5 *src/ieproxy.c
+2f2b792c8784fde145db53f87132d7e9 *src/init.c
+a8e71725b7f154879b6873c87782dad3 *src/interrupt.c
+9c6c1084dc63fe03aed20839032350f4 *src/multi.c
+ec2d29b9fbbaece65e54651626cd198b *src/nslookup.c
+91703e066a9e848a1d63eede9f9dd255 *src/reflist.c
+f5cd7f3315e332bd18a38bb494ad640e *src/split.c
+2cfc7d3f20b0ca8f856c34fd3add73d0 *src/utils.c
+365c211a7f307461ad46a74952b97d57 *src/version.c
+5c32d93f54a58c1382926a61792bfaed *src/winhttp32.def.in
+245dea4d9024e0381bc9f9bd3a8b6627 *src/winhttp64.def.in
+877b30a097df618207cfc6e6a34d06c3 *src/winidn.c
+2570c0e4f8e4f89f3484c8494c9c2c9a *tests/testthat.R
+0d6b531f198306999a2219c399277b02 *tests/testthat/helper-version.R
+bbbd697e81545d0a97fa262c18262155 *tests/testthat/test-auth.R
+92f1d068681f06347ab39b81050df8de *tests/testthat/test-blockopen.R
+c0a9a4cb45d94973ccf13f572d0ba80d *tests/testthat/test-certificates.R
+17857acb1bd8f16af0c43be20a5c9d51 *tests/testthat/test-connection.R
+f2bc95bcceef2dfeebb48bb59f502712 *tests/testthat/test-cookies.R
+c48bedfac6e4bf4220023269451ef0e6 *tests/testthat/test-escape.R
+db8323549c034fef1a207400b0fdfc72 *tests/testthat/test-gc.R
+dd85f72fc445554e74af3a8c78174447 *tests/testthat/test-handle.R
+df03468bf874c557a0d11bd4b779c518 *tests/testthat/test-idn.R
+3ce80dfaff40cee787657b51e700e77f *tests/testthat/test-multi.R
+c5282786b5363c42d50e8941145bd661 *tests/testthat/test-nonblocking.R
+24d68aa9da0801de878d899deb7355b3 *tests/testthat/test-post.R
+2019a062987f868e186ae4b144455727 *tools/symbols-in-versions
+1f5c1ab1572d962e4de4f67a2323a04a *tools/symbols.R
+23996666180a531897b4bcef45178229 *tools/winlibs.R
+405dc07df5149c76d91ee28fdc948c5f *vignettes/intro.Rmd
diff --git a/NAMESPACE b/NAMESPACE
new file mode 100644
index 0000000..176999a
--- /dev/null
+++ b/NAMESPACE
@@ -0,0 +1,67 @@
+# Generated by roxygen2: do not edit by hand
+
+S3method(print,curl_handle)
+S3method(print,curl_multi)
+S3method(print,form_data)
+S3method(print,form_file)
+export(curl)
+export(curl_download)
+export(curl_echo)
+export(curl_escape)
+export(curl_fetch_disk)
+export(curl_fetch_memory)
+export(curl_fetch_multi)
+export(curl_fetch_stream)
+export(curl_options)
+export(curl_unescape)
+export(curl_version)
+export(form_data)
+export(form_file)
+export(handle_cookies)
+export(handle_data)
+export(handle_reset)
+export(handle_setform)
+export(handle_setheaders)
+export(handle_setopt)
+export(has_internet)
+export(ie_get_proxy_for_url)
+export(ie_proxy_info)
+export(multi_add)
+export(multi_cancel)
+export(multi_list)
+export(multi_run)
+export(multi_set)
+export(new_handle)
+export(new_pool)
+export(nslookup)
+export(parse_date)
+export(parse_headers)
+export(parse_headers_list)
+useDynLib(curl,R_curl_connection)
+useDynLib(curl,R_curl_escape)
+useDynLib(curl,R_curl_fetch_disk)
+useDynLib(curl,R_curl_fetch_memory)
+useDynLib(curl,R_curl_getdate)
+useDynLib(curl,R_curl_version)
+useDynLib(curl,R_download_curl)
+useDynLib(curl,R_get_bundle)
+useDynLib(curl,R_get_handle_cookies)
+useDynLib(curl,R_get_handle_response)
+useDynLib(curl,R_get_proxy_for_url)
+useDynLib(curl,R_handle_reset)
+useDynLib(curl,R_handle_setform)
+useDynLib(curl,R_handle_setheaders)
+useDynLib(curl,R_handle_setopt)
+useDynLib(curl,R_multi_add)
+useDynLib(curl,R_multi_cancel)
+useDynLib(curl,R_multi_list)
+useDynLib(curl,R_multi_new)
+useDynLib(curl,R_multi_run)
+useDynLib(curl,R_multi_setopt)
+useDynLib(curl,R_new_handle)
+useDynLib(curl,R_nslookup)
+useDynLib(curl,R_proxy_info)
+useDynLib(curl,R_set_bundle)
+useDynLib(curl,R_split_string)
+useDynLib(curl,R_total_handles)
+useDynLib(curl,R_windows_build)
diff --git a/NEWS b/NEWS
new file mode 100644
index 0000000..624b5a8
--- /dev/null
+++ b/NEWS
@@ -0,0 +1,161 @@
+2.8.1
+ - Windows: switch back to OpenSSL instead of SecureChannel because Windows 2008 (CRAN) does not
+   support TLS 1.1 and TLS 1.2 which is required for many servers now.
+
+2.8 (unpublished)
+ - Windows: EXPERIMENTAL: on R 3.5+ curl now uses SecureChannel instead of OpenSSL for https.
+ - Windows: updated libcurl to v7.54.1 with native windows IDN. Dropped nghttp2 and rtmp support.
+ - Windows: nslookup() now uses IdnToAscii() for non-ascii domains
+ - Add IDN unit tests on supported platforms
+ - Error messages from libcurl include more detail when available (via CURLOPT_ERRORBUFFER)
+ - Set a default CURLOPT_READFUNCTION because libcurl's default can cause R to freeze
+ - Fix a bug for empty forms and/or empty form-fields (+ added unit tests for this)
+ - The 'multi_run()' function gains a parameter 'poll' to return immediately when a request completes.
+ - Disable 'Expect: 100-continue' for POST requests (deprecated in libcurl)
+ - Fix two rchk PROTECT warnings (thanks to Tomas Kalibera)
+
+2.7
+ - New function parse_headers_list() to parse response headers into named list
+ - nslookup() gains a parameter 'multi' to return multiple matches
+ - Automatically set 'POSTFIELDSIZE_LARGE' when setting 'POSTFIELDS' or 'COPYPOSTFIELDS' to raw vector
+ - Do not crash when passing invalid objects as handles
+ - Workaround for empty forms, i.e. calling handle_setform() with no arguments
+
+2.6
+ - nslookup() gains a parameter ipv4_only = TRUE (fixes unit test on Mavericks)
+
+2.5
+ - Add curl_echo() function for easy testing
+ - Add support for curlopt_xferinfofunction, used in curl_echo()
+ - Automatically set curlopt_noprogress = 0 when setting one of the progress functions
+ - Automatically use XFERINFOFUNCTION vs PROGRESSFUNCTION depending on libcurl version
+ - Default User-Agent is now: options("HTTPUserAgent")
+ - Requests will now abort if progress/xferinfo callback raises an error
+ - Open a connection with mode 'f' to skip stop_for_status() during open()
+
+2.4
+ - Windows: update libcurl to 7.53.1 with libssl 1.0.2k
+ - New form_data() function to POST form with string/raw values with custom content-type
+ - Fix busy waiting for curl_fetch_stream()
+ - Tweaks for open(con, blocking = FALSE)
+ - Switch memcpy() to memmove() to please valgrind
+ - Assert that curl() connection is only opened in read mode
+
+2.3
+ - All interruptible handles now use a global pool to share connections. Fixes #79.
+ - Enable interruptible interface by default, even in non-interactive mode.
+ - Update libcurl on Windows to 7.51.0
+ - Unit tests now try several httpbin mirrors in case one goes down
+ - Support open(con, blocking = FALSE) and isIncomplete() for curl() connections
+ - Switch curl_fetch_stream to non-blocking implementation
+
+2.2
+ - Fixed bug in multi that did not actually enable or disable multiplexing.
+ - Switch unit tests to HTTP/2 server to get HTTP/2 testing coverage
+ - Fix big endian build on GLIBC systems (thanks to Aurelien Jarno and Andreas Tille)
+
+2.1
+ - If libcurl >= 7.47 and was built --with-nghttp2, automatically enable HTTP/2
+   on HTTPS connections (matches behavior of 'curl' cmd util)
+ - Upgrade to libcurl 7.50.3 (--with-nghttp2) on Windows (Adds HTTP/2 support)
+ - Fix a unit test that would fail on fast servers
+
+2.0
+ - New multi interface for concurrent async requests!
+ - Updated vignette with simple multi examples
+ - Export handle_data() to get handle state
+
+1.2
+ - Fix for getaddrinfo GNU extension on some unix platforms
+
+1.1
+ - Fix ASAN warning in curl.c (reference after free)
+
+1.0
+ - Fix for FreeBSD
+ - Simplify handle refCount system
+ - Better handle locking to prevent using/modifying open handles
+ - Make unit tests always close connection to prevent 'unused connection' warnings
+ - Add support for interruptions in curl_download()
+
+0.9.7
+ - The non-blocking download method is now only used in interactive mode
+ - Use options(curl_interrupt = TRUE) to force nonblocking in non-interactive mode
+ - Updated libcurl on windows to 7.47.1. This should fix IPv6 problems.
+ - Update the curl_symbols table to 7.48
+
+0.9.6
+ - Use non-blocking code in curl_fetch_memory to support user interruptions.
+ - Configure script no longer assumes bash so it works on OpenBSD.
+ - Fix for Snow Leopard CRAN build server.
+ - Added has_internet() function.
+
+0.9.5
+ - Added nslookup() as cross-platform alternative to nsl()
+
+0.9.4
+ - Move the creation of the option table to ./R/options.R
+ - The curl_options() function gains an argument to filter by name
+ - Properly invoke winhttp.def file in Makevars.win (required for new toolchain)
+
+0.9.3
+ - Refactor configure script to use pkg-config
+ - Use the preprocessor to extract CURLOPT symbols during install
+ - Don't use setInternet2() in R > 3.2.2
+
+0.9.2
+ - Optimization for windows to make realloc in curl_fetch_memory faster
+ - Updated the curl_symbols table to 7.43
+ - Updated the static libraries on Windows:
+    * libcurl 7.43.0
+    * openssl 1.0.2d
+    * libssh2 1.6.0
+    * libiconv 1.14-5
+    * libidn 1.31-1
+ - New functions for Windows: ie_proxy_info and ie_get_proxy_for_url
+
+0.9.1
+ - Convert url argument to utf8 strings in escape/unescape
+ - Endian fix for BSD systems
+ - Add support for setting curlopt_xxx_large options
+
+0.9
+ - Fix for very old versions of libcurl (RHEL 5)
+ - Do not convert paths to UTF-8 (only URLs)
+ - Improve error message for unknown options
+
+0.8
+ - Fix for curl() character reader to invert byte-order on big endian architectures.
+
+0.7
+ - Rename the C function 'fetch' to 'fetchdata' because of Solaris conflict.
+ - Move warning about missing CA bundle on Windows to onAttach.
+
+0.6
+ - Validation of SSL certificates is now enabled by default if a bundle is available.
+ - Major rewrite to support configurable and reusable handles
+ - Added new_handle, handle_setopt, handle_setheaders, handle_setform, handle_reset, etc.
+ - Added curl_fetch interfaces for httr
+ - Add ie_proxy_settings to get system proxy configuration on windows
+
+0.5
+ - Check for CURLM_CALL_MULTI_PERFORM to support very old versions of libcurl
+
+0.4
+ - Fixed a memory bug that could cause R to crash
+ - Add curl_escape, curl_unescape
+ - Add curl_version and curl_options
+
+0.3
+ - Add curl_download function
+ - More efficient use of realloc
+ - Fix for older versions of libcurl (e.g. Snow Leopard)
+ - Add support for user interrupts while downloading (ESC or CTRL+C)
+ - Fixed bug that caused GC to corrupt connection object
+ - Refactoring and cleanup
+
+0.2
+  - add support for recycling connections
+
+0.1
+  - initial release
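The 2.0 entry above introduces the multi interface for concurrent async requests. A minimal sketch of scheduling two requests and running them concurrently, using the curl_fetch_multi() wrapper added in R/fetch.R later in this commit (requires network access; the `status_code` field follows handle_data()):

```r
library(curl)

data <- list()
cb_done <- function(res) data[[res$url]] <<- res$status_code
cb_fail <- function(msg) message("request failed: ", msg)

# Schedule requests; nothing is performed yet
curl_fetch_multi("https://httpbin.org/get", cb_done, cb_fail)
curl_fetch_multi("https://httpbin.org/status/418", cb_done, cb_fail)

# Perform all scheduled requests concurrently, blocking until done
multi_run()
str(data)
```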
diff --git a/R/curl.R b/R/curl.R
new file mode 100644
index 0000000..ee6f0bf
--- /dev/null
+++ b/R/curl.R
@@ -0,0 +1,78 @@
+#' Curl connection interface
+#'
+#' Drop-in replacement for base \code{\link{url}} that supports https, ftps,
+#' gzip, deflate, etc. Default behavior is identical to \code{\link{url}}, but
+#' request can be fully configured by passing a custom \code{\link{handle}}.
+#'
+#' As of version 2.3 curl connections support \code{open(con, blocking = FALSE)}.
+#' In this case \code{readBin} and \code{readLines} will return immediately with data
+#' that is available without waiting. For such non-blocking connections the caller
+#' needs to call \code{\link{isIncomplete}} to check if the download has completed
+#' yet.
+#'
+#' @useDynLib curl R_curl_connection
+#' @export
+#' @param url character string. See examples.
+#' @param open character string. How to open the connection if it should be opened
+#'   initially. Currently only "r" and "rb" are supported.
+#' @param handle a curl handle object
+#' @examples \dontrun{
+#' con <- curl("https://httpbin.org/get")
+#' readLines(con)
+#'
+#' # Auto-opened connections can be recycled
+#' open(con, "rb")
+#' bin <- readBin(con, raw(), 999)
+#' close(con)
+#' rawToChar(bin)
+#'
+#' # HTTP error
+#' curl("https://httpbin.org/status/418", "r")
+#'
+#' # Follow redirects
+#' readLines(curl("https://httpbin.org/redirect/3"))
+#'
+#' # Error after redirect
+#' curl("https://httpbin.org/redirect-to?url=http://httpbin.org/status/418", "r")
+#'
+#' # Auto decompress Accept-Encoding: gzip / deflate (rfc2616 #14.3)
+#' readLines(curl("http://httpbin.org/gzip"))
+#' readLines(curl("http://httpbin.org/deflate"))
+#'
+#' # Binary support
+#' buf <- readBin(curl("http://httpbin.org/bytes/98765", "rb"), raw(), 1e5)
+#' length(buf)
+#'
+#' # Read file from disk
+#' test <- paste0("file://", system.file("DESCRIPTION"))
+#' readLines(curl(test))
+#'
+#' # Other protocols
+#' read.csv(curl("ftp://cran.r-project.org/pub/R/CRAN_mirrors.csv"))
+#' readLines(curl("ftps://test.rebex.net:990/readme.txt"))
+#' readLines(curl("gopher://quux.org/1"))
+#'
+#' # Streaming data
+#' con <- curl("http://jeroen.github.io/data/diamonds.json", "r")
+#' while(length(x <- readLines(con, n = 5))){
+#'   print(x)
+#' }
+#'
+#' # Stream large dataset over https with gzip
+#' library(jsonlite)
+#' con <- gzcon(curl("https://jeroen.github.io/data/nycflights13.json.gz"))
+#' nycflights <- stream_in(con)
+#' }
+#'
+curl <- function(url = "http://httpbin.org/get", open = "", handle = new_handle()){
+  curl_connection(url, open, handle)
+}
+
+# 'partial' is currently only used for non-blocking connections to prevent
+# busy looping in curl_fetch_stream()
+curl_connection <- function(url, mode, handle, partial = FALSE){
+  con <- .Call(R_curl_connection, url, handle, partial)
+  if(!identical(mode, ""))
+    open(con, open = mode)
+  return(con)
+}
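The roxygen block above documents non-blocking connections via open(con, blocking = FALSE) and isIncomplete(). A sketch of the polling loop that description implies (httpbin's /drip endpoint is used here as a slow test server; requires network access):

```r
library(curl)

con <- curl("https://httpbin.org/drip?numbytes=50&duration=3")
open(con, "rb", blocking = FALSE)
while (isIncomplete(con)) {
  # readBin() returns immediately with whatever data has arrived so far
  buf <- readBin(con, raw(), 8192)
  if (length(buf))
    cat("got", length(buf), "bytes\n")
  Sys.sleep(0.2)
}
close(con)
```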
diff --git a/R/download.R b/R/download.R
new file mode 100644
index 0000000..e00f9a6
--- /dev/null
+++ b/R/download.R
@@ -0,0 +1,36 @@
+#' Download file to disk
+#'
+#' Libcurl implementation of \code{C_download} (the "internal" download method)
+#' with added support for https, ftps, gzip, etc. Default behavior is identical
+#' to \code{\link{download.file}}, but request can be fully configured by passing
+#' a custom \code{\link{handle}}.
+#'
+#' The main difference between \code{curl_download} and \code{curl_fetch_disk}
+#' is that \code{curl_download} checks the http status code before starting the
+#' download, and raises an error when status is non-successful. The behavior of
+#' \code{curl_fetch_disk} on the other hand is to proceed as normal and write
+#' the error page to disk in case of a non success response.
+#'
+#' @useDynLib curl R_download_curl
+#' @param url A character string naming the URL of a resource to be downloaded.
+#' @param destfile A character string with the name where the downloaded file
+#'   is saved. Tilde-expansion is performed.
+#' @param quiet If \code{TRUE}, suppress status messages (if any), and the
+#'   progress bar.
+#' @param mode A character string specifying the mode with which to write the file.
+#'   Useful values are \code{"w"}, \code{"wb"} (binary), \code{"a"} (append)
+#'   and \code{"ab"}.
+#' @param handle a curl handle object
+#' @return Path of downloaded file (invisibly).
+#' @export
+#' @examples \dontrun{ # Download a large file
+#' url <- "http://www2.census.gov/acs2011_5yr/pums/csv_pus.zip"
+#' tmp <- tempfile()
+#' curl_download(url, tmp)
+#' }
+curl_download <- function(url, destfile, quiet = TRUE, mode = "wb", handle = new_handle()){
+  destfile <- normalizePath(destfile, mustWork = FALSE)
+  nonblocking <- isTRUE(getOption("curl_interrupt", TRUE))
+  .Call(R_download_curl, url, destfile, quiet, mode, handle, nonblocking)
+  invisible(destfile)
+}
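The paragraph above contrasts curl_download(), which checks the HTTP status before writing and errors on failure, with curl_fetch_disk(), which writes the error page anyway. A sketch of that difference (requires network access; the exact error message text is libcurl/package dependent):

```r
library(curl)

tmp <- tempfile()

# curl_fetch_disk() proceeds and saves whatever the server returned
res <- curl_fetch_disk("https://httpbin.org/status/404", tmp)
cat("status:", res$status_code, "- file written anyway\n")

# curl_download() checks the status first and raises an error instead
out <- tryCatch(curl_download("https://httpbin.org/status/404", tmp),
                error = function(e) conditionMessage(e))
print(out)
```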
diff --git a/R/echo.R b/R/echo.R
new file mode 100644
index 0000000..db753fc
--- /dev/null
+++ b/R/echo.R
@@ -0,0 +1,99 @@
+#' Echo Service
+#'
+#' This function is only for testing purposes. It starts a local httpuv server to
+#' echo the request body and content type in the response.
+#'
+#' @export
+#' @param handle a curl handle object
+#' @param port the port number on which to run httpuv server
+#' @param progress show progress meter during http transfer
+#' @param file path or connection to write body. Default returns body as raw vector.
+#' @examples h <- handle_setform(new_handle(), foo = "blabla", bar = charToRaw("test"),
+#' myfile = form_file(system.file("DESCRIPTION"), "text/description"))
+#' formdata <- curl_echo(h)
+#'
+#' # Show the multipart body
+#' cat(rawToChar(formdata$body))
+#'
+#' # Parse multipart
+#' webutils::parse_http(formdata$body, formdata$content_type)
+curl_echo <- function(handle, port = 9359, progress = interactive(), file = NULL){
+  progress <- isTRUE(progress)
+  formdata <- NULL
+  if(!(is.null(file) || inherits(file, "connection") || is.character(file)))
+    stop("Argument 'file' must be a file path or connection object")
+  echo_handler <- function(env){
+    if(progress){
+      cat("\nRequest Complete!\n")
+      progress <<- FALSE
+    }
+
+    formdata <<- as.list(env)
+    http_method <- env[["REQUEST_METHOD"]]
+    content_type <- env[["CONTENT_TYPE"]]
+    type <- ifelse(length(content_type) && nchar(content_type), content_type, "empty")
+    formdata$body <<- if(tolower(http_method) %in% c("post", "put")){
+      if(!length(file)){
+        env[["rook.input"]]$read()
+      } else {
+        write_to_file(env[["rook.input"]]$read, file)
+      }
+    }
+    formdata[["rook.input"]] <<- NULL
+    formdata[["rook.errors"]] <<- NULL
+    names(formdata) <<- tolower(names(formdata))
+    list(
+      status = 200,
+      body = "",
+      headers = c("Content-Type" = "text/plain")
+    )
+  }
+
+  # Start httpuv
+  server_id <- httpuv::startServer("0.0.0.0", port, list(call = echo_handler))
+  on.exit(httpuv::stopServer(server_id), add = TRUE)
+
+  # httpuv 1.3.4 supports non-blocking service()
+  waittime <- ifelse(utils::packageVersion('httpuv') > "1.3.3", NA, 1)
+
+  # Post data from curl
+  handle_setopt(handle, connecttimeout = 2, xferinfofunction = function(down, up){
+    if(progress){
+      if(up[1] == 0 && down[1] == 0){
+        cat("\rConnecting...")
+      } else {
+        cat(sprintf("\rUpload: %s (%d%%)   ", format_size(up[2]), as.integer(100 * up[2] / up[1])))
+      }
+    }
+    # Need very low wait to prevent gridlocking!
+    httpuv::service(waittime)
+  }, noprogress = FALSE)
+  if(progress) cat("\n")
+  curl_fetch_memory(paste0("http://localhost:", port, "/"), handle = handle)
+  if(progress) cat("\n")
+  return(formdata)
+}
+
+write_to_file <- function(readfun, filename){
+  con <- if(inherits(filename, "connection")){
+    filename
+  } else {
+    base::file(filename)
+  }
+  if(!isOpen(con)){
+    open(con, "wb")
+    on.exit(close(con))
+  }
+  while(length(buf <- readfun(1e6))){
+    writeBin(buf, con)
+  }
+  return(filename)
+}
+
+format_size <- function(x){
+  if(x < 1024)
+    return(sprintf("%d b", x))
+  if(x < 1048576)
+    return(sprintf("%.2f Kb", x / 1024))
+  return(sprintf("%.2f Mb", x / 1048576))
+}
diff --git a/R/escape.R b/R/escape.R
new file mode 100644
index 0000000..0c41df0
--- /dev/null
+++ b/R/escape.R
@@ -0,0 +1,26 @@
+#' URL encoding
+#'
+#' Escape all special characters (i.e. everything except for a-z, A-Z, 0-9, '-',
+#' '.', '_' or '~') for use in URLs.
+#'
+#' @useDynLib curl R_curl_escape
+#' @export
+#' @param url A character vector (typically containing urls or parameters) to be
+#'   encoded/decoded
+#' @examples # Escape strings
+#' out <- curl_escape("foo = bar + 5")
+#' curl_unescape(out)
+#'
+#' # All non-ascii characters are encoded
+#' mu <- "\u00b5"
+#' curl_escape(mu)
+#' curl_unescape(curl_escape(mu))
+curl_escape <- function(url){
+  .Call(R_curl_escape, enc2utf8(as.character(url)), FALSE);
+}
+
+#' @rdname curl_escape
+#' @export
+curl_unescape <- function(url){
+  .Call(R_curl_escape, enc2utf8(as.character(url)), TRUE);
+}
diff --git a/R/fetch.R b/R/fetch.R
new file mode 100644
index 0000000..ea4c78b
--- /dev/null
+++ b/R/fetch.R
@@ -0,0 +1,109 @@
+#' Fetch the contents of a URL
+#'
+#' Low-level bindings to write data from a URL into memory, disk or a callback
+#' function. These are mainly intended for \code{httr}; most users will be better
+#' off using the \code{\link{curl}} or \code{\link{curl_download}} function, or the
+#' http-specific wrappers in the \code{httr} package.
+#'
+#' The curl_fetch functions automatically raise an error upon protocol problems
+#' (network, disk, ssl) but do not implement application logic. For example,
+#' you need to check the status code of http responses yourself and deal
+#' with it accordingly.
+#'
+#' Both \code{curl_fetch_memory} and \code{curl_fetch_disk} have a blocking and
+#' non-blocking C implementation. The latter is slightly slower but allows for
+#' interrupting the download prematurely (using e.g. CTRL+C or ESC). Interrupting
+#' is enabled when R runs in interactive mode or when
+#' \code{getOption("curl_interrupt") == TRUE}.
+#'
+#' The \code{curl_fetch_multi} function is the asynchronous equivalent of
+#' \code{curl_fetch_memory}. It wraps \code{multi_add} to schedule requests which
+#' are executed concurrently when calling \code{multi_run}. For each successful
+#' request the \code{done} callback is triggered with response data. For failed
+#' requests (when \code{curl_fetch_memory} would raise an error), the \code{fail}
+#' function is triggered with the error message.
+#'
+#' @param url A character string naming the URL of a resource to be downloaded.
+#' @param handle a curl handle object
+#' @export
+#' @rdname curl_fetch
+#' @useDynLib curl R_curl_fetch_memory
+#' @examples
+#' # Load in memory
+#' res <- curl_fetch_memory("http://httpbin.org/cookies/set?foo=123&bar=ftw")
+#' res$content
+#'
+#' # Save to disk
+#' res <- curl_fetch_disk("http://httpbin.org/stream/10", tempfile())
+#' res$content
+#' readLines(res$content)
+#'
+#' # Stream with callback
+#' res <- curl_fetch_stream("http://www.httpbin.org/drip?duration=5&numbytes=15&code=200", function(x){
+#'   cat(rawToChar(x))
+#' })
+#'
+#' # Async API
+#' data <- list()
+#' success <- function(res){
+#'   cat("Request done! Status:", res$status, "\n")
+#'   data <<- c(data, list(res))
+#' }
+#' failure <- function(msg){
+#'   cat("Oh noes! Request failed!", msg, "\n")
+#' }
+#' curl_fetch_multi("http://httpbin.org/get", success, failure)
+#' curl_fetch_multi("http://httpbin.org/status/418", success, failure)
+#' curl_fetch_multi("https://urldoesnotexist.xyz", success, failure)
+#' multi_run()
+#' str(data)
+curl_fetch_memory <- function(url, handle = new_handle()){
+  nonblocking <- isTRUE(getOption("curl_interrupt", TRUE))
+  output <- .Call(R_curl_fetch_memory, enc2utf8(url), handle, nonblocking)
+  res <- handle_data(handle)
+  res$content <- output
+  res
+}
+
+#' @export
+#' @param path Path to save results
+#' @rdname curl_fetch
+#' @useDynLib curl R_curl_fetch_disk
+curl_fetch_disk <- function(url, path, handle = new_handle()){
+  nonblocking <- isTRUE(getOption("curl_interrupt", TRUE))
+  path <- normalizePath(path, mustWork = FALSE)
+  output <- .Call(R_curl_fetch_disk, enc2utf8(url), handle, path, "wb", nonblocking)
+  res <- handle_data(handle)
+  res$content <- output
+  res
+}
+
+#' @export
+#' @param fun Callback function. Should have one argument, which will be
+#'   a raw vector.
+#' @rdname curl_fetch
+#' @useDynLib curl R_curl_connection
+curl_fetch_stream <- function(url, fun, handle = new_handle()){
+  # A blocking connection with partial = TRUE prevents busy-waiting
+  con <- curl_connection(url, mode = "", handle = handle, partial = TRUE)
+
+  # 'f' means: do not error for status code
+  open(con, "rbf")
+  on.exit(close(con))
+  while(isIncomplete(con)){
+    buf <- readBin(con, raw(), 32768L)
+    if(length(buf))
+      fun(buf)
+  }
+  handle_data(handle)
+}
+
+#' @export
+#' @rdname curl_fetch
+#' @inheritParams multi
+#' @useDynLib curl R_curl_connection
+curl_fetch_multi <- function(url, done = NULL, fail = NULL, pool = NULL, handle = new_handle()){
+  handle_setopt(handle, url = enc2utf8(url))
+  multi_add(handle = handle, done = done, fail = fail, pool = pool)
+  invisible(handle)
+}
diff --git a/R/form.R b/R/form.R
new file mode 100644
index 0000000..f5a5528
--- /dev/null
+++ b/R/form.R
@@ -0,0 +1,48 @@
+#' POST files or data
+#'
+#' Build multipart form data elements. The \code{form_file} function uploads a
+#' file. The \code{form_data} function allows for posting a string or raw vector
+#' with a custom content-type.
+#'
+#' @param path a string with a path to an existing file on disk
+#' @param type MIME content-type of the file.
+#' @export
+#' @name multipart
+#' @rdname multipart
+form_file <- function(path, type = NULL){
+  path <- normalizePath(path[1], mustWork = TRUE)
+  if(!is.null(type)){
+    stopifnot(is.character(type))
+  }
+  structure(list(path = path, type = type), class = "form_file")
+}
+
+#' @export
+#' @name multipart
+#' @rdname multipart
+#' @param value a character or raw vector to post
+form_data <- function(value, type = NULL){
+  if(is.character(value))
+    value <- charToRaw(paste(enc2utf8(value), collapse = "\n"))
+  if(!is.raw(value))
+    stop("Argument 'value' must be string or raw vector")
+  structure(list(value = value, type = type), class = "form_data")
+}
+
+#' @export
+print.form_file <- function(x, ...){
+  txt <- paste("Form file:", basename(x$path))
+  if(!is.null(x$type)){
+    txt <- sprintf("%s (type: %s)", txt, x$type)
+  }
+  cat(txt, "\n")
+}
+
+#' @export
+print.form_data <- function(x, ...){
+  txt <- paste(sprintf("Form data of length %d", length(x$value)))
+  if(!is.null(x$type)){
+    txt <- sprintf("%s (type: %s)", txt, x$type)
+  }
+  cat(txt, "\n")
+}
diff --git a/R/handle.R b/R/handle.R
new file mode 100644
index 0000000..57c6c86
--- /dev/null
+++ b/R/handle.R
@@ -0,0 +1,195 @@
+#' Create and configure a curl handle
+#'
+#' Handles are the work horses of libcurl. A handle is used to configure a
+#' request with custom options, headers and payload. Once the handle has been
+#' set up, it can be passed to any of the download functions such as \code{\link{curl}},
+#' \code{\link{curl_download}} or \code{\link{curl_fetch_memory}}. The handle will maintain
+#' state in between requests, including keep-alive connections, cookies and
+#' settings.
+#'
+#' Use \code{new_handle()} to create a new clean curl handle that can be
+#' configured with custom options and headers. Note that \code{handle_setopt}
+#' appends or overrides options in the handle, whereas \code{handle_setheaders}
+#' replaces the entire set of headers with the new ones. The \code{handle_reset}
+#' function resets only options/headers/forms in the handle. It does not affect
+#' active connections, cookies or response data from previous requests. The safest
+#' way to perform multiple independent requests is by using a separate handle for
+#' each request. There is very little performance overhead in creating handles.
+#'
+#' @family handles
+#' @param ... named options / headers to be set in the handle.
+#'   To send a file, see \code{\link{form_file}}. To list all allowed options,
+#'   see \code{\link{curl_options}}
+#' @return A handle object (external pointer to the underlying curl handle).
+#'   All functions modify the handle in place but also return the handle
+#'   so you can create a pipeline of operations.
+#' @export
+#' @name handle
+#' @useDynLib curl R_new_handle
+#' @rdname handle
+#' @examples
+#' h <- new_handle()
+#' handle_setopt(h, customrequest = "PUT")
+#' handle_setform(h, a = "1", b = "2")
+#' r <- curl_fetch_memory("http://httpbin.org/put", h)
+#' cat(rawToChar(r$content))
+#'
+#' # Or use the list form
+#' h <- new_handle()
+#' handle_setopt(h, .list = list(customrequest = "PUT"))
+#' handle_setform(h, .list = list(a = "1", b = "2"))
+#' r <- curl_fetch_memory("http://httpbin.org/put", h)
+#' cat(rawToChar(r$content))
+new_handle <- function(...){
+  h <- .Call(R_new_handle)
+  handle_setopt(h, ...)
+  h
+}
+
+#' @export
+#' @useDynLib curl R_handle_setopt
+#' @param handle Handle to modify
+#' @param .list A named list of options. This is useful if you've created
+#'   a list of options elsewhere, avoiding the use of \code{do.call()}.
+#' @rdname handle
+handle_setopt <- function(handle, ..., .list = list()){
+  stopifnot(inherits(handle, "curl_handle"))
+  values <- c(list(...), .list)
+  opt_names <- fix_options(tolower(names(values)))
+  keys <- as.integer(curl_options()[opt_names])
+  na_keys <- is.na(keys)
+  if(any(na_keys)){
+    bad_opts <- opt_names[na_keys]
+    stop("Unknown option", ifelse(length(bad_opts) > 1, "s: ", ": "),
+      paste(bad_opts, collapse=", "))
+  }
+  stopifnot(length(keys) == length(values))
+  .Call(R_handle_setopt, handle, keys, values)
+  invisible(handle)
+}
+
+#' @export
+#' @useDynLib curl R_handle_setheaders
+#' @rdname handle
+handle_setheaders <- function(handle, ..., .list = list()){
+  stopifnot(inherits(handle, "curl_handle"))
+  opts <- c(list(...), .list)
+  if(!all(vapply(opts, is.character, logical(1)))){
+    stop("All headers must be strings.")
+  }
+  opts$Expect = ""
+  names <- names(opts)
+  values <- as.character(unlist(opts))
+  vec <- paste0(names, ": ", values)
+  .Call(R_handle_setheaders, handle, vec)
+  invisible(handle)
+}
+
+#' @export
+#' @useDynLib curl R_handle_setform
+#' @rdname handle
+handle_setform <- function(handle, ..., .list = list()){
+  stopifnot(inherits(handle, "curl_handle"))
+  form <- c(list(...), .list)
+  for(i in seq_along(form)){
+    val <- form[[i]];
+    if(is.character(val)){
+      form[[i]] <- charToRaw(enc2utf8(val))
+    } else if(!is.raw(val) && !inherits(val, "form_file") && !inherits(val, "form_data")){
+      stop("Unsupported value type for form field '", names(form[i]), "'.")
+    }
+  }
+  .Call(R_handle_setform, handle, form)
+  invisible(handle)
+}
+
+#' @export
+#' @rdname handle
+#' @useDynLib curl R_handle_reset
+handle_reset <- function(handle){
+  stopifnot(inherits(handle, "curl_handle"))
+  .Call(R_handle_reset, handle)
+  invisible(handle)
+}
+
+#' Extract cookies from a handle
+#'
+#' The \code{handle_cookies} function returns a data frame with 7 columns as specified in the
+#' \href{http://www.cookiecentral.com/faq/#3.5}{netscape cookie file format}.
+#'
+#' @useDynLib curl R_get_handle_cookies
+#' @export
+#' @param handle a curl handle object
+#' @family handles
+#' @examples
+#' h <- new_handle()
+#' handle_cookies(h)
+#'
+#' # Server sets cookies
+#' req <- curl_fetch_memory("http://httpbin.org/cookies/set?foo=123&bar=ftw", handle = h)
+#' handle_cookies(h)
+#'
+#' # Server deletes cookies
+#' req <- curl_fetch_memory("http://httpbin.org/cookies/delete?foo", handle = h)
+#' handle_cookies(h)
+#'
+#' # Cookies will survive a reset!
+#' handle_reset(h)
+#' handle_cookies(h)
+handle_cookies <- function(handle){
+  stopifnot(inherits(handle, "curl_handle"))
+  cookies <- .Call(R_get_handle_cookies, handle)
+  df <- if(length(cookies)){
+    values <- lapply(strsplit(cookies, split="\t"), `[`, 1:7)
+    as.data.frame(do.call(rbind, values), stringsAsFactors = FALSE)
+  } else {
+    as.data.frame(matrix(ncol=7, nrow=0))
+  }
+  names(df) <- c("domain", "flag", "path", "secure", "expiration", "name", "value")
+  df$flag <- as.logical(df$flag)
+  df$secure <- as.logical(df$secure)
+  expires <- as.numeric(df$expiration)
+  expires[expires==0] <- Inf
+  class(expires) <- c("POSIXct", "POSIXt")
+  df$expiration <- expires
+  df
+
+}
+
+#' @export
+#' @rdname handle
+#' @useDynLib curl R_get_handle_response
+handle_data <- function(handle){
+  stopifnot(inherits(handle, "curl_handle"))
+  out <- .Call(R_get_handle_response, handle)
+  out$content = NULL
+  out
+}
+
+#' @export
+print.curl_handle <- function(x, ...){
+  stopifnot(inherits(x, "curl_handle"))
+  url <- handle_data(x)$url
+  if(!nchar(url)) url <- "empty"
+  cat(sprintf("<curl handle> (%s)\n", url))
+}
+
+# Only for testing memory leaks
+#' @useDynLib curl R_total_handles
+total_handles <- function(){
+  .Call(R_total_handles)
+}
+
+
+## Some hacks for backward compatibility
+fix_options <- function(opt_names){
+  # Recent libcurl should use xferinfo instead of progress
+  has_xferinfo <- length(curl_options("xferinfofunction"))
+  if(has_xferinfo){
+    opt_names[opt_names == "progressfunction"] <- "xferinfofunction"
+    return(opt_names)
+  } else {
+    opt_names[opt_names == "xferinfofunction"] <- "progressfunction"
+    return(opt_names)
+  }
+}
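`handle_cookies` above splits each libcurl cookie line on tabs into the seven Netscape-format columns and coerces the flag, secure and expiration fields. A rough Python sketch of that per-line transformation (illustrative only; the function name is hypothetical and the coercions follow the R code, not a full Netscape-format parser):

```python
import math

# The seven tab-separated columns of the Netscape cookie file format,
# as produced by libcurl's CURLINFO_COOKIELIST.
COOKIE_FIELDS = ("domain", "flag", "path", "secure", "expiration", "name", "value")

def parse_cookie_line(line):
    row = dict(zip(COOKIE_FIELDS, line.split("\t")[:7]))
    row["flag"] = row["flag"] == "TRUE"
    row["secure"] = row["secure"] == "TRUE"
    # As in handle_cookies: expiration 0 marks a session cookie, mapped to Inf
    epoch = float(row["expiration"])
    row["expiration"] = math.inf if epoch == 0 else epoch
    return row
```

For example, a session cookie line yields an infinite expiration, matching the `expires[expires==0] <- Inf` step in the R code.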
diff --git a/R/multi.R b/R/multi.R
new file mode 100644
index 0000000..f0b06f9
--- /dev/null
+++ b/R/multi.R
@@ -0,0 +1,133 @@
+#' Async Multi Download
+#'
+#' AJAX style concurrent requests, possibly using HTTP/2 multiplexing.
+#' Results are only available via callback functions. Advanced use only!
+#'
+#' Requests are created in the usual way using a curl \link{handle} and added
+#' to the scheduler with \link{multi_add}. This function returns immediately
+#' and does not perform the request yet. The user needs to call \link{multi_run}
+#' which performs all scheduled requests concurrently. It returns when all
+#' requests have completed, or in case of a \code{timeout} or \code{SIGINT} (e.g.
+#' if the user presses \code{ESC} or \code{CTRL+C} in the console). In case of
+#' the latter, simply call \link{multi_run} again to resume pending requests.
+#'
+#' When a request succeeds, the \code{done} callback gets triggered with
+#' the response data. The structure of this data is identical to that of \link{curl_fetch_memory}.
+#' When the request fails, the \code{fail} callback is triggered with an error
+#' message. Note that failure here means something went wrong in performing the
+#' request such as a connection failure, it does not check the http status code.
+#' Just like \link{curl_fetch_memory}, the user has to implement application logic.
+#'
+#' Raising an error within a callback function stops execution of that function
+#' but does not affect other requests.
+#'
+#' A single handle cannot be used for multiple simultaneous requests. However
+#' it is possible to add new requests to a pool while it is running, so you
+#' can re-use a handle within the callback of a request from that same handle.
+#' It is up to the user to make sure the same handle is not used in concurrent
+#' requests.
+#'
+#' The \link{multi_cancel} function can be used to cancel a pending request.
+#' It has no effect if the request was already completed or canceled.
+#'
+#' @name multi
+#' @rdname multi
+#' @useDynLib curl R_multi_add
+#' @param handle a curl \link{handle} with preconfigured \code{url} option.
+#' @param done callback function for completed request. Single argument with
+#' response data in same structure as \link{curl_fetch_memory}.
+#' @param fail callback function called on failed request. Argument contains
+#' error message.
+#' @param pool a multi handle created by \link{new_pool}. Default uses a global pool.
+#' @export
+#' @examples h1 <- new_handle(url = "https://eu.httpbin.org/delay/3")
+#' h2 <- new_handle(url = "https://eu.httpbin.org/post", postfields = "bla bla")
+#' h3 <- new_handle(url = "https://urldoesnotexist.xyz")
+#' multi_add(h1, done = print, fail = print)
+#' multi_add(h2, done = print, fail = print)
+#' multi_add(h3, done = print, fail = print)
+#' multi_run(timeout = 2)
+#' multi_run()
+multi_add <- function(handle, done = NULL, fail = NULL, pool = NULL){
+  if(is.null(pool))
+    pool <- multi_default()
+  stopifnot(inherits(handle, "curl_handle"))
+  stopifnot(inherits(pool, "curl_multi"))
+  stopifnot(is.null(done) || is.function(done))
+  stopifnot(is.null(fail) || is.function(fail))
+  .Call(R_multi_add, handle, done, fail, pool)
+}
+
+#' @param timeout max time in seconds to wait for results. Use \code{0} to poll for results without
+#' waiting at all.
+#' @param poll If \code{TRUE} then return immediately after any of the requests has completed.
+#' May also be an integer in which case it returns after n requests have completed.
+#' @export
+#' @useDynLib curl R_multi_run
+#' @rdname multi
+multi_run <- function(timeout = Inf, poll = FALSE, pool = NULL){
+  if(is.null(pool))
+    pool <- multi_default()
+  stopifnot(is.numeric(timeout))
+  stopifnot(inherits(pool, "curl_multi"))
+  .Call(R_multi_run, pool, timeout, as.integer(poll))
+}
+
+#' @param total_con max total concurrent connections.
+#' @param host_con max concurrent connections per host.
+#' @param multiplex enable HTTP/2 multiplexing if supported by host and client.
+#' @export
+#' @useDynLib curl R_multi_setopt
+#' @rdname multi
+multi_set <- function(total_con = 50, host_con = 6, multiplex = TRUE, pool = NULL){
+  if(is.null(pool))
+    pool <- multi_default()
+  stopifnot(inherits(pool, "curl_multi"))
+  stopifnot(is.numeric(total_con))
+  stopifnot(is.numeric(host_con))
+  stopifnot(is.logical(multiplex))
+  .Call(R_multi_setopt, pool, total_con, host_con, multiplex)
+}
+
+#' @export
+#' @useDynLib curl R_multi_list
+#' @rdname multi
+multi_list <- function(pool = NULL){
+  if(is.null(pool))
+    pool <- multi_default()
+  stopifnot(inherits(pool, "curl_multi"))
+  as.list(.Call(R_multi_list, pool))
+}
+
+#' @export
+#' @useDynLib curl R_multi_cancel
+#' @rdname multi
+multi_cancel <- function(handle){
+  stopifnot(inherits(handle, "curl_handle"))
+  .Call(R_multi_cancel, handle)
+}
+
+#' @export
+#' @useDynLib curl R_multi_new
+#' @rdname multi
+new_pool <- function(total_con = 100, host_con = 6, multiplex = TRUE){
+  pool <- .Call(R_multi_new)
+  multi_set(pool = pool, total_con = total_con, host_con = host_con, multiplex = multiplex)
+}
+
+multi_default <- local({
+  global_multi_handle <- NULL
+  function(){
+    if(is.null(global_multi_handle)){
+      global_multi_handle <<- new_pool()
+    }
+    stopifnot(inherits(global_multi_handle, "curl_multi"))
+    return(global_multi_handle)
+  }
+})
+
+#' @export
+print.curl_multi <- function(x, ...){
+  len <- length(multi_list(x))
+  cat(sprintf("<curl multi-pool> (%d pending requests)\n", len))
+}
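The done/fail contract described above (success data to one callback, transfer-error messages to the other, with errors never raised to the caller, and HTTP error codes still counting as success) can be imitated with a thread pool; a loose Python analogy only, not the libcurl event loop (`fetch_multi` and the six-worker cap are hypothetical stand-ins):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_multi(urls, fetch, done, fail):
    # Submit all requests, then dispatch each result to done() or, on a
    # failed transfer, its error message to fail(); exceptions are swallowed.
    with ThreadPoolExecutor(max_workers=6) as pool:   # cf. host_con = 6
        futures = [pool.submit(fetch, u) for u in urls]
        for fut in futures:
            try:
                done(fut.result())
            except Exception as err:
                fail(str(err))
```

As with `multi_run`, an HTTP error status would still reach `done`; only a failed transfer (e.g. a DNS failure raised by `fetch`) reaches `fail`.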
diff --git a/R/nslookup.R b/R/nslookup.R
new file mode 100644
index 0000000..64df1aa
--- /dev/null
+++ b/R/nslookup.R
@@ -0,0 +1,36 @@
+#' Lookup a hostname
+#'
+#' The \code{nslookup} function is similar to \code{nsl} but works on all platforms
+#' and can resolve ipv6 addresses if supported by the OS. Default behavior raises an
+#' error if lookup fails. The \code{has_internet} function tests the internet
+#' connection by resolving a well-known host.
+#'
+#' @export
+#' @param host a string with a hostname
+#' @param error raise an error for failed DNS lookup. Otherwise returns \code{NULL}.
+#' @param ipv4_only always return ipv4 address. Set to \code{FALSE} to allow for ipv6 as well.
+#' @param multiple returns multiple ip addresses if possible
+#' @rdname nslookup
+#' @useDynLib curl R_nslookup
+#' @examples # Should always work if we are online
+#' nslookup("www.r-project.org")
+#'
+#' # If your OS supports IPv6
+#' nslookup("ipv6.test-ipv6.com", ipv4_only = FALSE, error = FALSE)
+nslookup <- function(host, ipv4_only = FALSE, multiple = FALSE, error = TRUE){
+  stopifnot(is.character(host))
+  if(grepl("://", host, fixed = TRUE))
+    stop("This looks like a URL, not a hostname")
+  out <- .Call(R_nslookup, host[1], as.logical(ipv4_only))
+  if(isTRUE(error) && is.null(out))
+    stop("Unable to resolve host: ", host)
+  if(isTRUE(multiple))
+    return(unique(out))
+  utils::head(out, 1)
+}
+
+#' @export
+#' @rdname nslookup
+has_internet <- function(){
+  !is.null(nslookup("r-project.org", error = FALSE))
+}
diff --git a/R/onload.R b/R/onload.R
new file mode 100644
index 0000000..055bb10
--- /dev/null
+++ b/R/onload.R
@@ -0,0 +1,27 @@
+.onLoad <- function(libname, pkgname){
+  if (!grepl("mingw", R.Version()$platform))
+    return()
+
+  # Enable SSL on Windows if CA bundle is available (R >= 3.2.0)
+  bundle <- Sys.getenv("CURL_CA_BUNDLE",
+    file.path(R.home("etc"), "curl-ca-bundle.crt"))
+  if (bundle != "" && file.exists(bundle)) {
+    set_bundle(bundle)
+  }
+}
+
+.onAttach <- function(libname, pkgname){
+  if (grepl("mingw", R.Version()$platform) && !file.exists(get_bundle())){
+    warning("No CA bundle found. SSL validation disabled.", call. = FALSE)
+  }
+}
+
+#' @useDynLib curl R_set_bundle
+set_bundle <- function(path){
+  .Call(R_set_bundle, path)
+}
+
+#' @useDynLib curl R_get_bundle
+get_bundle <- function(){
+  .Call(R_get_bundle)
+}
diff --git a/R/options.R b/R/options.R
new file mode 100644
index 0000000..2d58879
--- /dev/null
+++ b/R/options.R
@@ -0,0 +1,37 @@
+#' List curl version and options.
+#'
+#' \code{curl_version()} shows the versions of libcurl, libssl and zlib and
+#' supported protocols. \code{curl_options()} lists all options available in
+#' the current version of libcurl. The dataset \code{curl_symbols} lists all
+#' symbols (including options) and provides more information about them,
+#' including when support was added to or removed from libcurl.
+#'
+#' @export
+#' @param filter string: only return options with string in name
+#' @examples # Available options
+#' curl_options()
+#'
+#' # List proxy options
+#' curl_options("proxy")
+#'
+#' # Symbol table
+#' head(curl_symbols)
+curl_options <- function(filter = ""){
+  m <- grep(filter, fixed = TRUE, names(option_table))
+  option_table[m]
+}
+
+option_table <- (function(){
+  env <- new.env()
+  if(file.exists("tools/option_table.txt")){
+    source("tools/option_table.txt", env)
+  } else if(file.exists("../tools/option_table.txt")){
+    source("../tools/option_table.txt", env)
+  } else {
+    stop("Failed to find 'tools/option_table.txt' from: ", getwd())
+  }
+
+  option_table <- unlist(as.list(env))
+  names(option_table) <- sub("^curlopt_", "", tolower(names(option_table)))
+  option_table[order(names(option_table))]
+})()
diff --git a/R/parse_headers.R b/R/parse_headers.R
new file mode 100644
index 0000000..166b971
--- /dev/null
+++ b/R/parse_headers.R
@@ -0,0 +1,51 @@
+#' Parse response headers
+#'
+#' Parse response header data as returned by curl_fetch, either as a set of strings
+#' or into a named list.
+#'
+#' The \code{parse_headers_list} function parses the headers into a normalized (lowercase
+#' field names, trimmed whitespace) named list.
+#'
+#' If a request has followed redirects, the data can contain multiple sets of headers.
+#' When \code{multiple = TRUE}, the function returns a list with the response headers
+#' for each request. By default it only returns the headers of the final request.
+#'
+#' @param txt raw or character vector with the header data
+#' @param multiple parse multiple sets of headers separated by a blank line. See details.
+#' @export
+#' @rdname parse_headers
+#' @examples req <- curl_fetch_memory("https://httpbin.org/redirect/3")
+#' parse_headers(req$headers)
+#' parse_headers(req$headers, multiple = TRUE)
+#'
+#' # Parse into named list
+#' parse_headers_list(req$headers)
+parse_headers <- function(txt, multiple = FALSE){
+  if(is.raw(txt)){
+    txt <- rawToChar(txt)
+  }
+  stopifnot(is.character(txt))
+  if(length(txt) > 1){
+    txt <- paste(txt, collapse = "\n")
+  }
+
+  # Allow for standard "\r\n" line breaks as well as bare "\n" or "\r" (non-compliant servers)
+  sets <- strsplit(txt, "\\r\\n\\r\\n|\\n\\n|\\r\\r")[[1]]
+  headers <- strsplit(sets, "\\r\\n|\\n|\\r")
+  if(multiple){
+    headers
+  } else {
+    headers[[length(headers)]]
+  }
+}
+
+#' @export
+#' @rdname parse_headers
+parse_headers_list <- function(txt){
+  headers <- grep(":", parse_headers(txt), fixed = TRUE, value = TRUE)
+  out <- lapply(headers, split_string, ":")
+  names <- tolower(vapply(out, `[[`, character(1), 1)) #names are case insensitive
+  values <- lapply(lapply(out, `[[`, 2), trimws)
+  names(values) <- names
+  values
+}
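The two-stage split above (a blank line between header sets, then per-line name/value pairs with lowercased names and trimmed values, keeping only the final set after redirects) translates directly; a small Python sketch of the same approach, assuming `"\r\n"` or `"\n"` line endings (note that a dict, unlike an R named list, keeps only the last occurrence of a repeated field):

```python
def parse_headers_list(txt):
    # Normalize line endings, split header sets on a blank line, and keep
    # the final set (the response after any redirects), as in the R version.
    sets = [s for s in txt.replace("\r\n", "\n").split("\n\n") if s.strip()]
    headers = {}
    for line in sets[-1].splitlines():
        name, sep, value = line.partition(":")
        if sep:  # skip the status line, which has no colon
            headers[name.lower().strip()] = value.strip()
    return headers
```

For a redirected exchange, only the headers of the final 200 response survive, mirroring the default `multiple = FALSE` behavior.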
diff --git a/R/proxy.R b/R/proxy.R
new file mode 100644
index 0000000..c3f4514
--- /dev/null
+++ b/R/proxy.R
@@ -0,0 +1,48 @@
+#' Internet Explorer proxy settings
+#'
+#' Lookup and mimic the system proxy settings on Windows as set by Internet
+#' Explorer. This can be used to configure curl to use the same proxy server.
+#'
+#' The \code{ie_proxy_info} function looks
+#' up your current proxy settings as configured in IE under "Internet Options"
+#' > "Tab: Connections" > "LAN Settings". The \code{ie_get_proxy_for_url} function
+#' determines if and which proxy should be used to connect to a particular
+#' URL. If your settings have an "automatic configuration script" this
+#' involves downloading and executing a PAC file, which can take a while.
+#'
+#' @useDynLib curl R_proxy_info
+#' @export
+#' @rdname ie_proxy
+#' @name ie_proxy
+ie_proxy_info <- function(){
+  .Call(R_proxy_info)
+}
+
+#' @useDynLib curl R_get_proxy_for_url
+#' @param target_url url with host for which to lookup the proxy server
+#' @export
+#' @rdname ie_proxy
+ie_get_proxy_for_url <- function(target_url = "http://www.google.com"){
+  stopifnot(is.character(target_url))
+  info <- ie_proxy_info()
+  if(length(info$Proxy)){
+    if(isTRUE(grepl("<local>", info$ProxyBypass, fixed = TRUE)) &&
+       isTRUE(grepl("(://)[^./]+/", paste0(target_url, "/")))){
+      return(NULL)
+    } else {
+      return(info$Proxy)
+    }
+  }
+  if(isTRUE(info$AutoDetect) || length(info$AutoConfigUrl)){
+    out <- .Call(R_get_proxy_for_url, target_url, info$AutoDetect, info$AutoConfigUrl)
+    if(isTRUE(out$HasProxy)){
+      return(out$Proxy)
+    }
+  }
+  return(NULL);
+}
+
+#' @useDynLib curl R_windows_build
+get_windows_build <- function(){
+  .Call(R_windows_build)
+}
diff --git a/R/utilities.R b/R/utilities.R
new file mode 100644
index 0000000..4f031c0
--- /dev/null
+++ b/R/utilities.R
@@ -0,0 +1,46 @@
+#' @useDynLib curl R_curl_version
+#' @export
+#' @rdname curl_options
+#' @examples
+#' # Curl/ssl version info
+#' curl_version()
+curl_version <- function(){
+  .Call(R_curl_version);
+}
+
+#' @rdname curl_options
+#' @format A data frame with columns:
+#' \describe{
+#' \item{name}{Symbol name}
+#' \item{introduced,deprecated,removed}{Versions of libcurl}
+#' \item{value}{Integer value of symbol}
+#' \item{type}{If an option, the type of value it needs}
+#' }
+"curl_symbols"
+
+#' Parse date/time
+#'
+#' Can be used to parse dates appearing in http response headers such
+#' as \code{Expires} or \code{Last-Modified}. Automatically recognizes
+#' most common formats. If the format is known, \code{\link{strptime}}
+#' might be easier.
+#'
+#' @param datestring a string consisting of a timestamp
+#' @useDynLib curl R_curl_getdate
+#' @export
+#' @examples
+#' # Parse dates in many formats
+#' parse_date("Sunday, 06-Nov-94 08:49:37 GMT")
+#' parse_date("06 Nov 1994 08:49:37")
+#' parse_date("20040911 +0200")
+parse_date <- function(datestring){
+  out <- .Call(R_curl_getdate, datestring);
+  class(out) <- c("POSIXct", "POSIXt")
+  out
+}
+
+
+#' @useDynLib curl R_split_string
+split_string <- function(x, split = ":"){
+  .Call(R_split_string, x, split)
+}
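`parse_date` above wraps libcurl's permissive `curl_getdate`. For comparison, Python's standard library handles the common RFC 2822 shape used in `Expires` / `Last-Modified` headers, though not every variant `curl_getdate` accepts:

```python
from email.utils import parsedate_to_datetime

# Parse an HTTP-style date into a timezone-aware datetime.
# Covers the RFC 2822 form only; curl_getdate accepts more formats.
stamp = parsedate_to_datetime("Sun, 06 Nov 1994 08:49:37 GMT")
```

For the older formats in the R examples (e.g. `"Sunday, 06-Nov-94 08:49:37 GMT"`), `curl_getdate` remains the more forgiving parser.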
diff --git a/build/vignette.rds b/build/vignette.rds
new file mode 100644
index 0000000..173d0c8
Binary files /dev/null and b/build/vignette.rds differ
diff --git a/cleanup b/cleanup
new file mode 100755
index 0000000..3c020d3
--- /dev/null
+++ b/cleanup
@@ -0,0 +1,2 @@
+#!/bin/sh
+rm -f src/Makevars
diff --git a/configure b/configure
new file mode 100755
index 0000000..77aee8f
--- /dev/null
+++ b/configure
@@ -0,0 +1,71 @@
+# Anticonf (tm) script by Jeroen Ooms (2015)
+# This script will query 'pkg-config' for the required cflags and ldflags.
+# If pkg-config is unavailable or does not find the library, try setting
+# INCLUDE_DIR and LIB_DIR manually via e.g:
+# R CMD INSTALL --configure-vars='INCLUDE_DIR=/.../include LIB_DIR=/.../lib'
+
+# Library settings
+PKG_CONFIG_NAME="libcurl"
+PKG_DEB_NAME="libcurl4-openssl-dev"
+PKG_RPM_NAME="libcurl-devel"
+PKG_CSW_NAME="libcurl_dev"
+PKG_TEST_HEADER="<curl/curl.h>"
+PKG_LIBS="-lcurl"
+PKG_CFLAGS=""
+
+# Use pkg-config if available
+pkg-config --version >/dev/null 2>&1
+if [ $? -eq 0 ]; then
+  PKGCONFIG_CFLAGS=`pkg-config --cflags ${PKG_CONFIG_NAME}`
+  case "$PKGCONFIG_CFLAGS" in
+    *CURL_STATICLIB*) PKGCONFIG_LIBS=`pkg-config --libs --static ${PKG_CONFIG_NAME}`;;
+    *) PKGCONFIG_LIBS=`pkg-config --libs ${PKG_CONFIG_NAME}`;;
+  esac
+fi
+
+# Note that cflags may be empty in case of success
+if [ "$INCLUDE_DIR" ] || [ "$LIB_DIR" ]; then
+  echo "Found INCLUDE_DIR and/or LIB_DIR!"
+  PKG_CFLAGS="-I$INCLUDE_DIR $PKG_CFLAGS"
+  PKG_LIBS="-L$LIB_DIR $PKG_LIBS"
+elif [ "$PKGCONFIG_CFLAGS" ] || [ "$PKGCONFIG_LIBS" ]; then
+  echo "Found pkg-config cflags and libs!"
+  PKG_CFLAGS=${PKGCONFIG_CFLAGS}
+  PKG_LIBS=${PKGCONFIG_LIBS}
+fi
+
+# Find compiler
+CC=`${R_HOME}/bin/R CMD config CC`
+CFLAGS=`${R_HOME}/bin/R CMD config CFLAGS`
+CPPFLAGS=`${R_HOME}/bin/R CMD config CPPFLAGS`
+
+# For debugging
+echo "Using PKG_CFLAGS=$PKG_CFLAGS"
+echo "Using PKG_LIBS=$PKG_LIBS"
+
+# Test configuration
+echo "#include $PKG_TEST_HEADER" | ${CC} ${CPPFLAGS} ${PKG_CFLAGS} ${CFLAGS} -E -xc - >/dev/null 2>&1 || R_CONFIG_ERROR=1;
+
+# Customize the error
+if [ $R_CONFIG_ERROR ]; then
+  echo "------------------------- ANTICONF ERROR ---------------------------"
+  echo "Configuration failed because $PKG_CONFIG_NAME was not found. Try installing:"
+  echo " * deb: $PKG_DEB_NAME (Debian, Ubuntu, etc)"
+  echo " * rpm: $PKG_RPM_NAME (Fedora, CentOS, RHEL)"
+  echo " * csw: $PKG_CSW_NAME (Solaris)"
+  echo "If $PKG_CONFIG_NAME is already installed, check that 'pkg-config' is in your"
+  echo "PATH and PKG_CONFIG_PATH contains a $PKG_CONFIG_NAME.pc file. If pkg-config"
+  echo "is unavailable you can set INCLUDE_DIR and LIB_DIR manually via:"
+  echo "R CMD INSTALL --configure-vars='INCLUDE_DIR=... LIB_DIR=...'"
+  echo "--------------------------------------------------------------------"
+  exit 1;
+fi
+
+# Write to Makevars
+sed -e "s|@cflags@|$PKG_CFLAGS|" -e "s|@libs@|$PKG_LIBS|" src/Makevars.in > src/Makevars
+
+# Extract curlopt symbols
+echo '#include <curl/curl.h>' | ${CC} ${CPPFLAGS} ${PKG_CFLAGS} ${CFLAGS} -E -xc - | grep "^[ \t]*CURLOPT_.*," | sed s/,// > tools/option_table.txt
+
+# Success
+exit 0
diff --git a/configure.win b/configure.win
new file mode 100644
index 0000000..e69de29
diff --git a/data/curl_symbols.rda b/data/curl_symbols.rda
new file mode 100644
index 0000000..7a92412
Binary files /dev/null and b/data/curl_symbols.rda differ
diff --git a/debian/README.source b/debian/README.source
deleted file mode 100644
index 1538fd4..0000000
--- a/debian/README.source
+++ /dev/null
@@ -1,9 +0,0 @@
-Explanation for binary files inside source package according to
-  http://lists.debian.org/debian-devel/2013/09/msg00332.html
-
-Files: data/curl_symbols.rda
-Documented: man/curl_options.Rd
-  data set listing all symbols (including options) provides more information about the symbols,
-  including when support was added/removed from libcurl
-
- -- Andreas Tille <tille at debian.org>  Thu, 14 Sep 2017 09:31:48 +0200
diff --git a/debian/README.test b/debian/README.test
deleted file mode 100644
index 8d70ca3..0000000
--- a/debian/README.test
+++ /dev/null
@@ -1,9 +0,0 @@
-Notes on how this package can be tested.
-────────────────────────────────────────
-
-This package can be tested by running the provided test:
-
-cd tests
-LC_ALL=C R --no-save < testthat.R
-
-in order to confirm its integrity.
diff --git a/debian/changelog b/debian/changelog
deleted file mode 100644
index 95bae9a..0000000
--- a/debian/changelog
+++ /dev/null
@@ -1,39 +0,0 @@
-r-cran-curl (2.8.1-1) unstable; urgency=medium
-
-  * New upstream version
-    Closes: #860018
-  * Standards-Version: 4.1.0 (no changes needed)
-  * Add README.source to document binary data file
-
- -- Andreas Tille <tille at debian.org>  Thu, 14 Sep 2017 09:31:48 +0200
-
-r-cran-curl (2.3-1) unstable; urgency=medium
-
-  * New upstream version
-  * debhelper 10
-  * d/watch: version=4
-
- -- Andreas Tille <tille at debian.org>  Wed, 30 Nov 2016 08:42:16 +0100
-
-r-cran-curl (2.2-1) unstable; urgency=medium
-
-  * New upstream version (applied patch from last release)
-  * Convert to dh-r
-
- -- Andreas Tille <tille at debian.org>  Mon, 31 Oct 2016 22:28:35 +0100
-
-r-cran-curl (2.1-1) unstable; urgency=medium
-
-  * New upstream version
-  * Fix FTBFS on big-endian architectures (thanks for the patch to Aurelien
-    Jarno <aurel32 at debian.org>)
-    Closes: #841210
-  * canonical homepage for cran
-
- -- Andreas Tille <tille at debian.org>  Wed, 19 Oct 2016 05:34:23 +0200
-
-r-cran-curl (0.9.6-1) unstable; urgency=low
-
-  * Initial release (Closes: #819001)
-
- -- Andreas Tille <tille at debian.org>  Tue, 22 Mar 2016 18:38:08 +0100
diff --git a/debian/compat b/debian/compat
deleted file mode 100644
index f599e28..0000000
--- a/debian/compat
+++ /dev/null
@@ -1 +0,0 @@
-10
diff --git a/debian/control b/debian/control
deleted file mode 100644
index eb33e60..0000000
--- a/debian/control
+++ /dev/null
@@ -1,31 +0,0 @@
-Source: r-cran-curl
-Maintainer: Debian Med Packaging Team <debian-med-packaging at lists.alioth.debian.org>
-Uploaders: Andreas Tille <tille at debian.org>
-Section: gnu-r
-Priority: optional
-Build-Depends: debhelper (>= 10),
-               dh-r,
-               r-base-dev,
-               libcurl4-openssl-dev
-Standards-Version: 4.1.0
-Vcs-Browser: https://anonscm.debian.org/viewvc/debian-med/trunk/packages/R/r-cran-curl/trunk/
-Vcs-Svn: svn://anonscm.debian.org/debian-med/trunk/packages/R/r-cran-curl/trunk/
-Homepage: https://cran.r-project.org/package=curl
-
-Package: r-cran-curl
-Architecture: any
-Depends: ${misc:Depends},
-         ${shlibs:Depends},
-         ${R:Depends}
-Recommends: ${R:Recommends}
-Suggests: ${R:Suggests}
-Description: GNU R modern and flexible web client for R
- The curl() and curl_download() functions provide highly configurable drop-
- in replacements for base url() and download.file() with better
- performance, support for encryption (https, ftps), gzip compression,
- authentication, and other libcurl goodies. The core of the package
- implements a framework for performing fully customized requests where
- data can be processed either in memory, on disk, or streaming via the
- callback or connection interfaces. Some knowledge of libcurl is
- recommended; for a more-user-friendly web client see the 'httr' package
- which builds on this package with http specific tools and logic.
diff --git a/debian/copyright b/debian/copyright
deleted file mode 100644
index 49220d4..0000000
--- a/debian/copyright
+++ /dev/null
@@ -1,32 +0,0 @@
-Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
-Upstream-Contact: Jeroen Ooms <jeroen.ooms at stat.ucla.edu>
-Source: https://cran.r-project.org/web/packages/curl/
-
-Files: *
-Copyright: 2013-2016 Jeroen Ooms <jeroen.ooms at stat.ucla.edu>,
-                     Hadley Wickham, RStudio
-License: MIT
-
-Files: debian/*
-Copyright: 2016 Andreas Tille <tille at debian.org>
-License: MIT
-
-License: MIT
- Permission is hereby granted, free of charge, to any person obtaining a
- copy of this software and associated documentation files (the
- "Software"), to deal in the Software without restriction, including
- without limitation the rights to use, copy, modify, merge, publish,
- distribute, sublicense, and/or sell copies of the Software, and to
- permit persons to whom the Software is furnished to do so, subject to
- the following conditions:
- .
- The above copyright notice and this permission notice shall be included
- in all copies or substantial portions of the Software.
- .
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
- OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
- CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
- TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
- SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
diff --git a/debian/docs b/debian/docs
deleted file mode 100644
index 960011c..0000000
--- a/debian/docs
+++ /dev/null
@@ -1,3 +0,0 @@
-tests
-debian/README.test
-debian/tests/run-unit-test
diff --git a/debian/rules b/debian/rules
deleted file mode 100755
index ae86733..0000000
--- a/debian/rules
+++ /dev/null
@@ -1,8 +0,0 @@
-#!/usr/bin/make -f
-
-%:
-	dh $@ --buildsystem R
-
-override_dh_install:
-	dh_install
-	find debian -name LICENSE -delete
diff --git a/debian/source/format b/debian/source/format
deleted file mode 100644
index 163aaf8..0000000
--- a/debian/source/format
+++ /dev/null
@@ -1 +0,0 @@
-3.0 (quilt)
diff --git a/debian/tests/control b/debian/tests/control
deleted file mode 100644
index e86af3e..0000000
--- a/debian/tests/control
+++ /dev/null
@@ -1,3 +0,0 @@
-Tests: run-unit-test
-Depends: @, r-cran-testthat, r-cran-jsonlite
-Restrictions: allow-stderr
diff --git a/debian/tests/run-unit-test b/debian/tests/run-unit-test
deleted file mode 100644
index 2fe9c31..0000000
--- a/debian/tests/run-unit-test
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/bin/sh -e
-
-oname=curl
-pkg=r-cran-`echo $oname | tr [A-Z] [a-z]`
-
-if [ "$ADTTMP" = "" ] ; then
-  ADTTMP=`mktemp -d /tmp/${pkg}-test.XXXXXX`
-fi
-cd $ADTTMP
-cp -a /usr/share/doc/${pkg}/tests/* $ADTTMP
-LC_ALL=C R --no-save < testthat.R
-rm -fr $ADTTMP/*
diff --git a/debian/watch b/debian/watch
deleted file mode 100644
index 4b5442d..0000000
--- a/debian/watch
+++ /dev/null
@@ -1,3 +0,0 @@
-version=4
-http://cran.r-project.org/src/contrib/curl_([-0-9\.]*).tar.gz
-
diff --git a/inst/doc/intro.R b/inst/doc/intro.R
new file mode 100644
index 0000000..b66f020
--- /dev/null
+++ b/inst/doc/intro.R
@@ -0,0 +1,159 @@
+## ---- echo = FALSE, message = FALSE-----------------------------------------------------------------------------------
+knitr::opts_chunk$set(comment = "")
+options(width = 120, max.print = 100)
+library(curl)
+
+## ---------------------------------------------------------------------------------------------------------------------
+req <- curl_fetch_memory("https://httpbin.org/get")
+str(req)
+parse_headers(req$headers)
+cat(rawToChar(req$content))
+
+## ---------------------------------------------------------------------------------------------------------------------
+tmp <- tempfile()
+curl_download("https://httpbin.org/get", tmp)
+cat(readLines(tmp), sep = "\n")
+
+## ---------------------------------------------------------------------------------------------------------------------
+con <- curl("https://httpbin.org/get")
+open(con)
+
+# Get 3 lines
+out <- readLines(con, n = 3)
+cat(out, sep = "\n")
+
+# Get 3 more lines
+out <- readLines(con, n = 3)
+cat(out, sep = "\n")
+
+# Get remaining lines
+out <- readLines(con)
+close(con)
+cat(out, sep = "\n")
+
+## ---------------------------------------------------------------------------------------------------------------------
+con <- curl("https://httpbin.org/drip?duration=1&numbytes=50")
+open(con, "rb", blocking = FALSE)
+while(isIncomplete(con)){
+  buf <- readBin(con, raw(), 1024)
+  if(length(buf)) 
+    cat("received: ", rawToChar(buf), "\n")
+}
+close(con)
+
+## ---------------------------------------------------------------------------------------------------------------------
+pool <- new_pool()
+cb <- function(req){cat("done:", req$url, ": HTTP:", req$status, "\n")}
+curl_fetch_multi('https://www.google.com', done = cb, pool = pool)
+curl_fetch_multi('https://cloud.r-project.org', done = cb, pool = pool)
+curl_fetch_multi('https://httpbin.org/blabla', done = cb, pool = pool)
+
+## ---------------------------------------------------------------------------------------------------------------------
+# This actually performs requests:
+out <- multi_run(pool = pool)
+print(out)
+
+## ---------------------------------------------------------------------------------------------------------------------
+# This is OK
+curl_download('https://cran.r-project.org/CRAN_mirrors.csv', 'mirrors.csv')
+mirrors <- read.csv('mirrors.csv')
+unlink('mirrors.csv')
+
+## ---- echo = FALSE, message = FALSE, warning=FALSE--------------------------------------------------------------------
+close(con)
+rm(con)
+
+## ---------------------------------------------------------------------------------------------------------------------
+req <- curl_fetch_memory('https://cran.r-project.org/CRAN_mirrors.csv')
+print(req$status_code)
+
+## ---------------------------------------------------------------------------------------------------------------------
+# Oops a typo!
+req <- curl_fetch_disk('https://cran.r-project.org/CRAN_mirrorZ.csv', 'mirrors.csv')
+print(req$status_code)
+
+# This is not the CSV file we were expecting!
+head(readLines('mirrors.csv'))
+unlink('mirrors.csv')
+
+## ---------------------------------------------------------------------------------------------------------------------
+h <- new_handle()
+handle_setopt(h, copypostfields = "moo=moomooo")
+handle_setheaders(h,
+  "Content-Type" = "text/moo",
+  "Cache-Control" = "no-cache",
+  "User-Agent" = "A cow"
+)
+
+## ---------------------------------------------------------------------------------------------------------------------
+req <- curl_fetch_memory("http://httpbin.org/post", handle = h)
+cat(rawToChar(req$content))
+
+## ---------------------------------------------------------------------------------------------------------------------
+con <- curl("http://httpbin.org/post", handle = h)
+cat(readLines(con), sep = "\n")
+
+## ---- echo = FALSE, message = FALSE, warning=FALSE--------------------------------------------------------------------
+close(con)
+
+## ---------------------------------------------------------------------------------------------------------------------
+tmp <- tempfile()
+curl_download("http://httpbin.org/post", destfile = tmp, handle = h)
+cat(readLines(tmp), sep = "\n")
+
+## ---------------------------------------------------------------------------------------------------------------------
+curl_fetch_multi("http://httpbin.org/post", handle = h, done = function(res){
+  cat("Request complete! Response content:\n")
+  cat(rawToChar(res$content))
+})
+
+# Perform the request
+out <- multi_run()
+
+## ---------------------------------------------------------------------------------------------------------------------
+# Start with a fresh handle
+h <- new_handle()
+
+# Ask server to set some cookies
+req <- curl_fetch_memory("http://httpbin.org/cookies/set?foo=123&bar=ftw", handle = h)
+req <- curl_fetch_memory("http://httpbin.org/cookies/set?baz=moooo", handle = h)
+handle_cookies(h)
+
+# Unset a cookie
+req <- curl_fetch_memory("http://httpbin.org/cookies/delete?foo", handle = h)
+handle_cookies(h)
+
+## ---------------------------------------------------------------------------------------------------------------------
+req1 <- curl_fetch_memory("https://httpbin.org/get")
+req2 <- curl_fetch_memory("http://www.r-project.org")
+
+## ---------------------------------------------------------------------------------------------------------------------
+req <- curl_fetch_memory("https://api.github.com/users/ropensci")
+req$times
+
+req2 <- curl_fetch_memory("https://api.github.com/users/rstudio")
+req2$times
+
+## ---------------------------------------------------------------------------------------------------------------------
+handle_reset(h)
+
+## ---------------------------------------------------------------------------------------------------------------------
+# Posting multipart
+h <- new_handle()
+handle_setform(h,
+  foo = "blabla",
+  bar = charToRaw("boeboe"),
+  iris = form_data(serialize(iris, NULL), "application/rda"),
+  description = form_file(system.file("DESCRIPTION")),
+  logo = form_file(file.path(Sys.getenv("R_DOC_DIR"), "html/logo.jpg"), "image/jpeg")
+)
+req <- curl_fetch_memory("http://httpbin.org/post", handle = h)
+
+## ---------------------------------------------------------------------------------------------------------------------
+library(magrittr)
+
+new_handle() %>%
+  handle_setopt(copypostfields = "moo=moomooo") %>%
+  handle_setheaders("Content-Type" = "text/moo", "Cache-Control" = "no-cache", "User-Agent" = "A cow") %>%
+  curl_fetch_memory(url = "http://httpbin.org/post") %$% content %>% rawToChar %>% cat
+
diff --git a/inst/doc/intro.Rmd b/inst/doc/intro.Rmd
new file mode 100644
index 0000000..b151523
--- /dev/null
+++ b/inst/doc/intro.Rmd
@@ -0,0 +1,328 @@
+---
+title: "The curl package: a modern R interface to libcurl"
+date: "`r Sys.Date()`"
+output:
+  html_document:
+    fig_caption: false
+    toc: true
+    toc_float:
+      collapsed: false
+      smooth_scroll: false
+    toc_depth: 3
+vignette: >
+  %\VignetteIndexEntry{The curl package: a modern R interface to libcurl}
+  %\VignetteEngine{knitr::rmarkdown}
+  %\VignetteEncoding{UTF-8}
+---
+
+
+```{r, echo = FALSE, message = FALSE}
+knitr::opts_chunk$set(comment = "")
+options(width = 120, max.print = 100)
+library(curl)
+```
+
+The curl package provides bindings to the [libcurl](http://curl.haxx.se/libcurl/) C library for R. The package supports retrieving data in-memory, downloading to disk, or streaming using the [R "connection" interface](https://stat.ethz.ch/R-manual/R-devel/library/base/html/connections.html). Some knowledge of curl is recommended to use this package. For a more user-friendly HTTP client, have a look at the  [httr](https://cran.r-project.org/package=httr/vignettes/quickstart.html) package  [...]
+
+## Request interfaces
+
+The curl package implements several interfaces to retrieve data from a URL:
+
+ - `curl_fetch_memory()`  saves response in memory
+ - `curl_download()` or `curl_fetch_disk()`  writes response to disk
+ - `curl()` or `curl_fetch_stream()` streams response data
+ - `curl_fetch_multi()` (Advanced) process responses via callback functions
+
+Each interface performs the same HTTP request; they differ only in how the response data is processed.
+
+### Getting in memory
+
+The `curl_fetch_memory` function is a blocking interface which waits for the request to complete and returns a list with all content (data, headers, status, timings) of the server response.
+
+
+```{r}
+req <- curl_fetch_memory("https://httpbin.org/get")
+str(req)
+parse_headers(req$headers)
+cat(rawToChar(req$content))
+```
+
+The `curl_fetch_memory` interface is the easiest and the most powerful for building API clients. However, it is not suitable for downloading very large files because the entire response is buffered in memory. If you are expecting 100 GB of data, you probably need one of the other interfaces.
+
+### Downloading to disk
+
+The second method is `curl_download`, which has been designed as a drop-in replacement for `download.file` in r-base. It writes the response straight to disk, which is useful for downloading (large) files.
+
+```{r}
+tmp <- tempfile()
+curl_download("https://httpbin.org/get", tmp)
+cat(readLines(tmp), sep = "\n")
+```
+
+### Streaming data
+
+The most flexible interface is the `curl` function, which has been designed as a drop-in replacement for base `url`. It will create a so-called connection object, which allows for incremental (asynchronous) reading of the response.
+
+```{r}
+con <- curl("https://httpbin.org/get")
+open(con)
+
+# Get 3 lines
+out <- readLines(con, n = 3)
+cat(out, sep = "\n")
+
+# Get 3 more lines
+out <- readLines(con, n = 3)
+cat(out, sep = "\n")
+
+# Get remaining lines
+out <- readLines(con)
+close(con)
+cat(out, sep = "\n")
+```
+
+The example shows how to use `readLines` on an opened connection to read `n` lines at a time. Similarly, `readBin` can be used to read `n` bytes at a time when stream-parsing binary data.
+
+#### Non blocking connections
+
+As of version 2.3 it is also possible to open connections in non-blocking mode. In this case `readBin` and `readLines` return immediately with whatever data is available, without waiting. For non-blocking connections we use `isIncomplete` to check whether the download has completed.
+
+```{r}
+con <- curl("https://httpbin.org/drip?duration=1&numbytes=50")
+open(con, "rb", blocking = FALSE)
+while(isIncomplete(con)){
+  buf <- readBin(con, raw(), 1024)
+  if(length(buf)) 
+    cat("received: ", rawToChar(buf), "\n")
+}
+close(con)
+```
+
+The `curl_fetch_stream` function provides a very simple wrapper around a non-blocking connection.
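+
+For example, a minimal sketch (assuming httpbin.org is reachable): the callback function is invoked with successive raw chunks of the response body as they arrive.
+
+```{r}
+# The anonymous callback receives each chunk of the body as a raw vector
+curl_fetch_stream("https://httpbin.org/drip?duration=1&numbytes=50", function(chunk){
+  cat("received", length(chunk), "bytes\n")
+})
+```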
+
+
+### Async requests
+
+As of `curl 2.0` the package provides an async interface which can perform multiple simultaneous requests concurrently. The `curl_fetch_multi` function adds a request to a pool and returns immediately; it does not actually perform the request.
+
+```{r}
+pool <- new_pool()
+cb <- function(req){cat("done:", req$url, ": HTTP:", req$status, "\n")}
+curl_fetch_multi('https://www.google.com', done = cb, pool = pool)
+curl_fetch_multi('https://cloud.r-project.org', done = cb, pool = pool)
+curl_fetch_multi('https://httpbin.org/blabla', done = cb, pool = pool)
+```
+
+When we call `multi_run()`, all scheduled requests are performed concurrently. The callback functions get triggered when each request completes.
+
+```{r}
+# This actually performs requests:
+out <- multi_run(pool = pool)
+print(out)
+```
+
+The system allows for running many concurrent non-blocking requests. However, it is quite complex and requires careful specification of handler functions.
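+
+For instance, besides `done`, each request can be given a `fail` callback which fires when the request cannot be completed at all (a sketch; the hostname is a made-up example, and the callback receives the error message as a string):
+
+```{r}
+pool <- new_pool()
+curl_fetch_multi('https://urldoesnotexist.xyz',
+  done = function(req){cat("done:", req$url, "\n")},
+  fail = function(msg){cat("failed:", msg, "\n")},
+  pool = pool)
+multi_run(pool = pool)
+```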
+
+## Exception handling
+
+An HTTP request can encounter two types of errors:
+
+ 1. Connection failure: network down, host not found, invalid SSL certificate, etc.
+ 2. HTTP non-success status: 401 (DENIED), 404 (NOT FOUND), 503 (SERVER PROBLEM), etc.
+
+The first type of error (connection failure) always raises an error in R, for every interface. However, if the request succeeds but the server returns a non-success HTTP status code, only `curl()` and `curl_download()` will raise an error. Let's dive a little deeper into this.
+
+### Error automatically
+
+The `curl` and `curl_download` functions are the safest to use because they automatically raise an error if the request completed but the server returned a non-success (400 or higher) HTTP status. This mimics the behavior of the base functions `url` and `download.file`. Therefore we can safely write code like this:
+
+```{r}
+# This is OK
+curl_download('https://cran.r-project.org/CRAN_mirrors.csv', 'mirrors.csv')
+mirrors <- read.csv('mirrors.csv')
+unlink('mirrors.csv')
+```
+
+If the HTTP request was unsuccessful, R will not continue:
+
+```{r, error=TRUE, purl = FALSE}
+# Oops! A typo in the URL!
+curl_download('https://cran.r-project.org/CRAN_mirrorZ.csv', 'mirrors.csv')
+con <- curl('https://cran.r-project.org/CRAN_mirrorZ.csv')
+open(con)
+```
+
+```{r, echo = FALSE, message = FALSE, warning=FALSE}
+close(con)
+rm(con)
+```
+
+
+### Check manually
+
+When using any of the `curl_fetch_*` functions it is important to realize that these do **not** raise an error if the request completed but returned a non-success status code. When using `curl_fetch_memory` or `curl_fetch_disk` you need to implement such application logic yourself and check whether the response was successful.
+
+```{r}
+req <- curl_fetch_memory('https://cran.r-project.org/CRAN_mirrors.csv')
+print(req$status_code)
+```
+
+The same holds for downloading to disk: if you do not check the status, you might have downloaded an error page!
+
+```{r}
+# Oops a typo!
+req <- curl_fetch_disk('https://cran.r-project.org/CRAN_mirrorZ.csv', 'mirrors.csv')
+print(req$status_code)
+
+# This is not the CSV file we were expecting!
+head(readLines('mirrors.csv'))
+unlink('mirrors.csv')
+```
+
+If you *do* want the `curl_fetch_*` functions to automatically raise an error, you should set the [`FAILONERROR`](https://curl.haxx.se/libcurl/c/CURLOPT_FAILONERROR.html) option to `TRUE` in the handle of the request.
+
+```{r, error=TRUE, purl = FALSE}
+h <- new_handle(failonerror = TRUE)
+curl_fetch_memory('https://cran.r-project.org/CRAN_mirrorZ.csv', handle = h)
+```
+
+## Customizing requests
+
+By default libcurl uses HTTP GET to issue a request to an HTTP URL. To send a customized request, we first need to create and configure a curl handle object that is passed to the specific download interface.
+
+### Configuring a handle
+
+Creating a new handle is done using `new_handle`. After creating a handle object, we can set libcurl options and HTTP request headers.
+
+```{r}
+h <- new_handle()
+handle_setopt(h, copypostfields = "moo=moomooo")
+handle_setheaders(h,
+  "Content-Type" = "text/moo",
+  "Cache-Control" = "no-cache",
+  "User-Agent" = "A cow"
+)
+```
+
+Use the `curl_options()` function to get a list of the options supported by your version of libcurl. The [libcurl documentation](http://curl.haxx.se/libcurl/c/curl_easy_setopt.html) explains what each option does. Option names are not case sensitive.
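+
+For example, `curl_options` optionally takes a string to filter the option names (a quick illustration):
+
+```{r}
+# List libcurl options with "timeout" in the name
+curl_options("timeout")
+```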
+
+After the handle has been configured, it can be used with any of the download interfaces to perform the request. For example `curl_fetch_memory` will store the output of the request in memory:
+
+```{r}
+req <- curl_fetch_memory("http://httpbin.org/post", handle = h)
+cat(rawToChar(req$content))
+```
+
+Alternatively we can use `curl()` to read the data via a connection interface:
+
+```{r}
+con <- curl("http://httpbin.org/post", handle = h)
+cat(readLines(con), sep = "\n")
+```
+
+```{r, echo = FALSE, message = FALSE, warning=FALSE}
+close(con)
+```
+
+Or we can use `curl_download` to write the response to disk:
+
+```{r}
+tmp <- tempfile()
+curl_download("http://httpbin.org/post", destfile = tmp, handle = h)
+cat(readLines(tmp), sep = "\n")
+```
+
+Or perform the same request with a multi pool:
+
+```{r}
+curl_fetch_multi("http://httpbin.org/post", handle = h, done = function(res){
+  cat("Request complete! Response content:\n")
+  cat(rawToChar(res$content))
+})
+
+# Perform the request
+out <- multi_run()
+```
+
+### Reading cookies
+
+Curl handles automatically keep track of cookies set by the server. At any given point we can use `handle_cookies` to see a list of current cookies in the handle.
+
+```{r}
+# Start with a fresh handle
+h <- new_handle()
+
+# Ask server to set some cookies
+req <- curl_fetch_memory("http://httpbin.org/cookies/set?foo=123&bar=ftw", handle = h)
+req <- curl_fetch_memory("http://httpbin.org/cookies/set?baz=moooo", handle = h)
+handle_cookies(h)
+
+# Unset a cookie
+req <- curl_fetch_memory("http://httpbin.org/cookies/delete?foo", handle = h)
+handle_cookies(h)
+```
+
+The `handle_cookies` function returns a data frame with 7 columns as specified in the [netscape cookie file format](http://www.cookiecentral.com/faq/#3.5).
+
+### On reusing handles
+
+In most cases you should not reuse a single handle object for more than one request. The only benefit of reusing a handle across requests is to keep track of cookies set by the server (as seen above). This could be needed if your server uses session cookies, but this is rare these days. Most APIs set state explicitly via HTTP headers or parameters, rather than implicitly via cookies.
+
+In recent versions of the curl package there is no performance benefit to reusing handles. The overhead of creating and configuring a new handle object is negligible. The safest way to issue multiple requests, either to a single server or to multiple servers, is to use a separate handle for each request (which is the default):
+
+```{r}
+req1 <- curl_fetch_memory("https://httpbin.org/get")
+req2 <- curl_fetch_memory("http://www.r-project.org")
+```
+
+In past versions of this package you needed to manually reuse a handle to take advantage of HTTP Keep-Alive. However, as of version 2.3 this is no longer the case: curl automatically maintains a global pool of open HTTP connections shared by all handles. When performing many requests to the same server, curl automatically reuses existing connections when possible, eliminating TCP/SSL handshaking overhead:
+
+```{r}
+req <- curl_fetch_memory("https://api.github.com/users/ropensci")
+req$times
+
+req2 <- curl_fetch_memory("https://api.github.com/users/rstudio")
+req2$times
+```
+
+If you really need to reuse a handle, note that curl does not clean up the handle after each request. All of the options and internal fields will linger around for all future requests until explicitly reset or overwritten. This can sometimes lead to unexpected behavior.
+
+```{r}
+handle_reset(h)
+```
+
+The `handle_reset` function resets all curl options and request headers to their default values. It will **not** erase cookies and it will still keep the connections alive. Therefore it is good practice to call `handle_reset` after performing a request if you want to reuse the handle for a subsequent request. Still, it is always safer to create a fresh handle when possible, rather than recycling old ones.
+
+### Posting forms
+
+The `handle_setform` function is used to perform a `multipart/form-data` HTTP POST request (a.k.a. posting a form). Values can be either strings, raw vectors (for binary data) or files.
+
+```{r}
+# Posting multipart
+h <- new_handle()
+handle_setform(h,
+  foo = "blabla",
+  bar = charToRaw("boeboe"),
+  iris = form_data(serialize(iris, NULL), "application/rda"),
+  description = form_file(system.file("DESCRIPTION")),
+  logo = form_file(file.path(Sys.getenv("R_DOC_DIR"), "html/logo.jpg"), "image/jpeg")
+)
+req <- curl_fetch_memory("http://httpbin.org/post", handle = h)
+```
+
+The `form_file` function is used to upload files with the form post. It has two arguments: a file path, and optionally a content-type value. If no content-type is set, curl will guess the content type of the file based on the file extension.
+
+The `form_data` function is similar but simply posts a string or raw value with a custom content-type.
+
+### Using pipes
+
+All of the `handle_xxx` functions return the handle object so that function calls can be chained using the popular pipe operators:
+
+```{r}
+library(magrittr)
+
+new_handle() %>%
+  handle_setopt(copypostfields = "moo=moomooo") %>%
+  handle_setheaders("Content-Type" = "text/moo", "Cache-Control" = "no-cache", "User-Agent" = "A cow") %>%
+  curl_fetch_memory(url = "http://httpbin.org/post") %$% content %>% rawToChar %>% cat
+```
diff --git a/inst/doc/intro.html b/inst/doc/intro.html
new file mode 100644
index 0000000..0ae8939
--- /dev/null
+++ b/inst/doc/intro.html
@@ -0,0 +1,643 @@
+<!DOCTYPE html>
+
+<html xmlns="http://www.w3.org/1999/xhtml">
+
+<head>
+
+<meta charset="utf-8" />
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<meta name="generator" content="pandoc" />
+
+
+
+<meta name="date" content="2017-07-20" />
+
+<title>The curl package: a modern R interface to libcurl</title>
+
+<script src="data:application/x-javascript;base64,LyohIGpRdWVyeSB2MS4xMS4zIHwgKGMpIDIwMDUsIDIwMTUgalF1ZXJ5IEZvdW5kYXRpb24sIEluYy4gfCBqcXVlcnkub3JnL2xpY2Vuc2UgKi8KIWZ1bmN0aW9uKGEsYil7Im9iamVjdCI9PXR5cGVvZiBtb2R1bGUmJiJvYmplY3QiPT10eXBlb2YgbW9kdWxlLmV4cG9ydHM/bW9kdWxlLmV4cG9ydHM9YS5kb2N1bWVudD9iKGEsITApOmZ1bmN0aW9uKGEpe2lmKCFhLmRvY3VtZW50KXRocm93IG5ldyBFcnJvcigialF1ZXJ5IHJlcXVpcmVzIGEgd2luZG93IHdpdGggYSBkb2N1bWVudCIpO3JldHVybiBiKGEpfTpiKGEpfSgidW5kZWZpbmVkIiE9dHlwZW9mIHdpbmRvdz93aW5kb3c6dG [...]
+<meta name="viewport" content="width=device-width, initial-scale=1" />
+<link href="data:text/css;charset=utf-8,html%7Bfont%2Dfamily%3Asans%2Dserif%3B%2Dwebkit%2Dtext%2Dsize%2Dadjust%3A100%25%3B%2Dms%2Dtext%2Dsize%2Dadjust%3A100%25%7Dbody%7Bmargin%3A0%7Darticle%2Caside%2Cdetails%2Cfigcaption%2Cfigure%2Cfooter%2Cheader%2Chgroup%2Cmain%2Cmenu%2Cnav%2Csection%2Csummary%7Bdisplay%3Ablock%7Daudio%2Ccanvas%2Cprogress%2Cvideo%7Bdisplay%3Ainline%2Dblock%3Bvertical%2Dalign%3Abaseline%7Daudio%3Anot%28%5Bcontrols%5D%29%7Bdisplay%3Anone%3Bheight%3A0%7D%5Bhidden%5D%2Ctem [...]
+<script src="data:application/x-javascript;base64,LyohCiAqIEJvb3RzdHJhcCB2My4zLjUgKGh0dHA6Ly9nZXRib290c3RyYXAuY29tKQogKiBDb3B5cmlnaHQgMjAxMS0yMDE1IFR3aXR0ZXIsIEluYy4KICogTGljZW5zZWQgdW5kZXIgdGhlIE1JVCBsaWNlbnNlCiAqLwppZigidW5kZWZpbmVkIj09dHlwZW9mIGpRdWVyeSl0aHJvdyBuZXcgRXJyb3IoIkJvb3RzdHJhcCdzIEphdmFTY3JpcHQgcmVxdWlyZXMgalF1ZXJ5Iik7K2Z1bmN0aW9uKGEpeyJ1c2Ugc3RyaWN0Ijt2YXIgYj1hLmZuLmpxdWVyeS5zcGxpdCgiICIpWzBdLnNwbGl0KCIuIik7aWYoYlswXTwyJiZiWzFdPDl8fDE9PWJbMF0mJjk9PWJbMV0mJmJbMl08MSl0aHJvdy [...]
+<script src="data:application/x-javascript;base64,LyoqCiogQHByZXNlcnZlIEhUTUw1IFNoaXYgMy43LjIgfCBAYWZhcmthcyBAamRhbHRvbiBAam9uX25lYWwgQHJlbSB8IE1JVC9HUEwyIExpY2Vuc2VkCiovCi8vIE9ubHkgcnVuIHRoaXMgY29kZSBpbiBJRSA4CmlmICghIXdpbmRvdy5uYXZpZ2F0b3IudXNlckFnZW50Lm1hdGNoKCJNU0lFIDgiKSkgewohZnVuY3Rpb24oYSxiKXtmdW5jdGlvbiBjKGEsYil7dmFyIGM9YS5jcmVhdGVFbGVtZW50KCJwIiksZD1hLmdldEVsZW1lbnRzQnlUYWdOYW1lKCJoZWFkIilbMF18fGEuZG9jdW1lbnRFbGVtZW50O3JldHVybiBjLmlubmVySFRNTD0ieDxzdHlsZT4iK2IrIjwvc3R5bGU+IixkLm [...]
+<script src="data:application/x-javascript;base64,LyohIFJlc3BvbmQuanMgdjEuNC4yOiBtaW4vbWF4LXdpZHRoIG1lZGlhIHF1ZXJ5IHBvbHlmaWxsICogQ29weXJpZ2h0IDIwMTMgU2NvdHQgSmVobAogKiBMaWNlbnNlZCB1bmRlciBodHRwczovL2dpdGh1Yi5jb20vc2NvdHRqZWhsL1Jlc3BvbmQvYmxvYi9tYXN0ZXIvTElDRU5TRS1NSVQKICogICovCgovLyBPbmx5IHJ1biB0aGlzIGNvZGUgaW4gSUUgOAppZiAoISF3aW5kb3cubmF2aWdhdG9yLnVzZXJBZ2VudC5tYXRjaCgiTVNJRSA4IikpIHsKIWZ1bmN0aW9uKGEpeyJ1c2Ugc3RyaWN0IjthLm1hdGNoTWVkaWE9YS5tYXRjaE1lZGlhfHxmdW5jdGlvbihhKXt2YXIgYixjPWEuZG [...]
+<script src="data:application/x-javascript;base64,LyohIGpRdWVyeSBVSSAtIHYxLjExLjQgLSAyMDE2LTAxLTA1CiogaHR0cDovL2pxdWVyeXVpLmNvbQoqIEluY2x1ZGVzOiBjb3JlLmpzLCB3aWRnZXQuanMsIG1vdXNlLmpzLCBwb3NpdGlvbi5qcywgZHJhZ2dhYmxlLmpzLCBkcm9wcGFibGUuanMsIHJlc2l6YWJsZS5qcywgc2VsZWN0YWJsZS5qcywgc29ydGFibGUuanMsIGFjY29yZGlvbi5qcywgYXV0b2NvbXBsZXRlLmpzLCBidXR0b24uanMsIGRpYWxvZy5qcywgbWVudS5qcywgcHJvZ3Jlc3NiYXIuanMsIHNlbGVjdG1lbnUuanMsIHNsaWRlci5qcywgc3Bpbm5lci5qcywgdGFicy5qcywgdG9vbHRpcC5qcywgZWZmZWN0LmpzLC [...]
+<link href="data:text/css;charset=utf-8,%0A%0A%2Etocify%20%7B%0Awidth%3A%2020%25%3B%0Amax%2Dheight%3A%2090%25%3B%0Aoverflow%3A%20auto%3B%0Amargin%2Dleft%3A%202%25%3B%0Aposition%3A%20fixed%3B%0Aborder%3A%201px%20solid%20%23ccc%3B%0Awebkit%2Dborder%2Dradius%3A%206px%3B%0Amoz%2Dborder%2Dradius%3A%206px%3B%0Aborder%2Dradius%3A%206px%3B%0A%7D%0A%0A%2Etocify%20ul%2C%20%2Etocify%20li%20%7B%0Alist%2Dstyle%3A%20none%3B%0Amargin%3A%200%3B%0Apadding%3A%200%3B%0Aborder%3A%20none%3B%0Aline%2Dheight%3 [...]
+<script src="data:application/x-javascript;base64,LyoganF1ZXJ5IFRvY2lmeSAtIHYxLjkuMSAtIDIwMTMtMTAtMjIKICogaHR0cDovL3d3dy5ncmVnZnJhbmtvLmNvbS9qcXVlcnkudG9jaWZ5LmpzLwogKiBDb3B5cmlnaHQgKGMpIDIwMTMgR3JlZyBGcmFua287IExpY2Vuc2VkIE1JVCAqLwoKLy8gSW1tZWRpYXRlbHktSW52b2tlZCBGdW5jdGlvbiBFeHByZXNzaW9uIChJSUZFKSBbQmVuIEFsbWFuIEJsb2cgUG9zdF0oaHR0cDovL2JlbmFsbWFuLmNvbS9uZXdzLzIwMTAvMTEvaW1tZWRpYXRlbHktaW52b2tlZC1mdW5jdGlvbi1leHByZXNzaW9uLykgdGhhdCBjYWxscyBhbm90aGVyIElJRkUgdGhhdCBjb250YWlucyBhbGwgb2YgdG [...]
+<script src="data:application/x-javascript;base64,CgovKioKICogalF1ZXJ5IFBsdWdpbjogU3RpY2t5IFRhYnMKICoKICogQGF1dGhvciBBaWRhbiBMaXN0ZXIgPGFpZGFuQHBocC5uZXQ+CiAqIGFkYXB0ZWQgYnkgUnViZW4gQXJzbGFuIHRvIGFjdGl2YXRlIHBhcmVudCB0YWJzIHRvbwogKiBodHRwOi8vd3d3LmFpZGFubGlzdGVyLmNvbS8yMDE0LzAzL3BlcnNpc3RpbmctdGhlLXRhYi1zdGF0ZS1pbi1ib290c3RyYXAvCiAqLwooZnVuY3Rpb24oJCkgewogICJ1c2Ugc3RyaWN0IjsKICAkLmZuLnJtYXJrZG93blN0aWNreVRhYnMgPSBmdW5jdGlvbigpIHsKICAgIHZhciBjb250ZXh0ID0gdGhpczsKICAgIC8vIFNob3cgdGhlIHRhYi [...]
+<link href="data:text/css;charset=utf-8,pre%20%2Eoperator%2C%0Apre%20%2Eparen%20%7B%0Acolor%3A%20rgb%28104%2C%20118%2C%20135%29%0A%7D%0Apre%20%2Eliteral%20%7B%0Acolor%3A%20%23990073%0A%7D%0Apre%20%2Enumber%20%7B%0Acolor%3A%20%23099%3B%0A%7D%0Apre%20%2Ecomment%20%7B%0Acolor%3A%20%23998%3B%0Afont%2Dstyle%3A%20italic%0A%7D%0Apre%20%2Ekeyword%20%7B%0Acolor%3A%20%23900%3B%0Afont%2Dweight%3A%20bold%0A%7D%0Apre%20%2Eidentifier%20%7B%0Acolor%3A%20rgb%280%2C%200%2C%200%29%3B%0A%7D%0Apre%20%2Estri [...]
+<script src="data:application/x-javascript;base64,dmFyIGhsanM9bmV3IGZ1bmN0aW9uKCl7ZnVuY3Rpb24gbShwKXtyZXR1cm4gcC5yZXBsYWNlKC8mL2dtLCImYW1wOyIpLnJlcGxhY2UoLzwvZ20sIiZsdDsiKX1mdW5jdGlvbiBmKHIscSxwKXtyZXR1cm4gUmVnRXhwKHEsIm0iKyhyLmNJPyJpIjoiIikrKHA/ImciOiIiKSl9ZnVuY3Rpb24gYihyKXtmb3IodmFyIHA9MDtwPHIuY2hpbGROb2Rlcy5sZW5ndGg7cCsrKXt2YXIgcT1yLmNoaWxkTm9kZXNbcF07aWYocS5ub2RlTmFtZT09IkNPREUiKXtyZXR1cm4gcX1pZighKHEubm9kZVR5cGU9PTMmJnEubm9kZVZhbHVlLm1hdGNoKC9ccysvKSkpe2JyZWFrfX19ZnVuY3Rpb24gaCh0LH [...]
+
+<style type="text/css">code{white-space: pre;}</style>
+<style type="text/css">
+  pre:not([class]) {
+    background-color: white;
+  }
+</style>
+<script type="text/javascript">
+if (window.hljs && document.readyState && document.readyState === "complete") {
+   window.setTimeout(function() {
+      hljs.initHighlighting();
+   }, 0);
+}
+</script>
+
+
+
+<style type="text/css">
+h1 {
+  font-size: 34px;
+}
+h1.title {
+  font-size: 38px;
+}
+h2 {
+  font-size: 30px;
+}
+h3 {
+  font-size: 24px;
+}
+h4 {
+  font-size: 18px;
+}
+h5 {
+  font-size: 16px;
+}
+h6 {
+  font-size: 12px;
+}
+.table th:not([align]) {
+  text-align: left;
+}
+</style>
+
+
+</head>
+
+<body>
+
+<style type="text/css">
+.main-container {
+  max-width: 940px;
+  margin-left: auto;
+  margin-right: auto;
+}
+code {
+  color: inherit;
+  background-color: rgba(0, 0, 0, 0.04);
+}
+img {
+  max-width:100%;
+  height: auto;
+}
+.tabbed-pane {
+  padding-top: 12px;
+}
+button.code-folding-btn:focus {
+  outline: none;
+}
+</style>
+
+
+
+<div class="container-fluid main-container">
+
+<!-- tabsets -->
+<script>
+$(document).ready(function () {
+  window.buildTabsets("TOC");
+});
+</script>
+
+<!-- code folding -->
+
+
+
+
+<script>
+$(document).ready(function ()  {
+
+    // move toc-ignore selectors from section div to header
+    $('div.section.toc-ignore')
+        .removeClass('toc-ignore')
+        .children('h1,h2,h3,h4,h5').addClass('toc-ignore');
+
+    // establish options
+    var options = {
+      selectors: "h1,h2,h3",
+      theme: "bootstrap3",
+      context: '.toc-content',
+      hashGenerator: function (text) {
+        return text.replace(/[.\\/?&!#<>]/g, '').replace(/\s/g, '_').toLowerCase();
+      },
+      ignoreSelector: ".toc-ignore",
+      scrollTo: 0
+    };
+    options.showAndHide = false;
+    options.smoothScroll = false;
+
+    // tocify
+    var toc = $("#TOC").tocify(options).data("toc-tocify");
+});
+</script>
+
+<style type="text/css">
+
+#TOC {
+  margin: 25px 0px 20px 0px;
+}
+@media (max-width: 768px) {
+#TOC {
+  position: relative;
+  width: 100%;
+}
+}
+
+
+.toc-content {
+  padding-left: 30px;
+  padding-right: 40px;
+}
+
+div.main-container {
+  max-width: 1200px;
+}
+
+div.tocify {
+  width: 20%;
+  max-width: 260px;
+  max-height: 85%;
+}
+
+@media (min-width: 768px) and (max-width: 991px) {
+  div.tocify {
+    width: 25%;
+  }
+}
+
+@media (max-width: 767px) {
+  div.tocify {
+    width: 100%;
+    max-width: none;
+  }
+}
+
+.tocify ul, .tocify li {
+  line-height: 20px;
+}
+
+.tocify-subheader .tocify-item {
+  font-size: 0.90em;
+  padding-left: 25px;
+  text-indent: 0;
+}
+
+.tocify .list-group-item {
+  border-radius: 0px;
+}
+
+.tocify-subheader {
+  display: inline;
+}
+.tocify-subheader .tocify-item {
+  font-size: 0.95em;
+}
+
+</style>
+
+<!-- setup 3col/9col grid for toc_float and main content  -->
+<div class="row-fluid">
+<div class="col-xs-12 col-sm-4 col-md-3">
+<div id="TOC" class="tocify">
+</div>
+</div>
+
+<div class="toc-content col-xs-12 col-sm-8 col-md-9">
+
+
+
+
+<div class="fluid-row" id="header">
+
+
+
+<h1 class="title toc-ignore">The curl package: a modern R interface to libcurl</h1>
+<h4 class="date"><em>2017-07-20</em></h4>
+
+</div>
+
+
+<p>The curl package provides bindings to the <a href="http://curl.haxx.se/libcurl/">libcurl</a> C library for R. The package supports retrieving data in-memory, downloading to disk, or streaming using the <a href="https://stat.ethz.ch/R-manual/R-devel/library/base/html/connections.html">R “connection” interface</a>. Some knowledge of curl is recommended to use this package. For a more user-friendly HTTP client, have a look at the <a href="https://cran.r-project.org/package=httr/vignettes [...]
+<div id="request-interfaces" class="section level2">
+<h2>Request interfaces</h2>
+<p>The curl package implements several interfaces to retrieve data from a URL:</p>
+<ul>
+<li><code>curl_fetch_memory()</code> saves response in memory</li>
+<li><code>curl_download()</code> or <code>curl_fetch_disk()</code> writes response to disk</li>
+<li><code>curl()</code> or <code>curl_fetch_stream()</code> streams response data</li>
+<li><code>curl_fetch_multi()</code> (Advanced) process responses via callback functions</li>
+</ul>
+<p>Each interface performs the same HTTP request; they differ only in how the response data is processed.</p>
+<div id="getting-in-memory" class="section level3">
+<h3>Getting in memory</h3>
+<p>The <code>curl_fetch_memory</code> function is a blocking interface which waits for the request to complete and returns a list with all content (data, headers, status, timings) of the server response.</p>
+<pre class="r"><code>req <- curl_fetch_memory("https://httpbin.org/get")
+str(req)</code></pre>
+<pre><code>List of 6
+ $ url        : chr "https://httpbin.org/get"
+ $ status_code: int 200
+ $ headers    : raw [1:302] 48 54 54 50 ...
+ $ modified   : POSIXct[1:1], format: NA
+ $ times      : Named num [1:6] 0 0.0228 0.1229 0.3557 2.822 ...
+  ..- attr(*, "names")= chr [1:6] "redirect" "namelookup" "connect" "pretransfer" ...
+ $ content    : raw [1:300] 7b 0a 20 20 ...</code></pre>
+<pre class="r"><code>parse_headers(req$headers)</code></pre>
+<pre><code> [1] "HTTP/1.1 200 OK"                        "Connection: keep-alive"                
+ [3] "Server: meinheld/0.6.1"                 "Date: Thu, 20 Jul 2017 10:47:02 GMT"   
+ [5] "Content-Type: application/json"         "Access-Control-Allow-Origin: *"        
+ [7] "Access-Control-Allow-Credentials: true" "X-Powered-By: Flask"                   
+ [9] "X-Processed-Time: 0.00144004821777"     "Content-Length: 300"                   
+[11] "Via: 1.1 vegur"                        </code></pre>
+<pre class="r"><code>cat(rawToChar(req$content))</code></pre>
+<pre><code>{
+  "args": {}, 
+  "headers": {
+    "Accept": "*/*", 
+    "Accept-Encoding": "gzip, deflate", 
+    "Connection": "close", 
+    "Host": "httpbin.org", 
+    "User-Agent": "R (3.4.1 x86_64-apple-darwin15.6.0 x86_64 darwin15.6.0)"
+  }, 
+  "origin": "80.101.61.181", 
+  "url": "https://httpbin.org/get"
+}</code></pre>
+<p>The <code>curl_fetch_memory</code> interface is the easiest interface and the most powerful one for building API clients. However, it is not suitable for downloading very large files because the entire response is buffered in memory. If you are expecting 100G of data, you probably need one of the other interfaces.</p>
+</div>
+<div id="downloading-to-disk" class="section level3">
+<h3>Downloading to disk</h3>
+<p>The second method is <code>curl_download</code>, which has been designed as a drop-in replacement for <code>download.file</code> in r-base. It writes the response straight to disk, which is useful for downloading (large) files.</p>
+<pre class="r"><code>tmp <- tempfile()
+curl_download("https://httpbin.org/get", tmp)
+cat(readLines(tmp), sep = "\n")</code></pre>
+<pre><code>{
+  "args": {}, 
+  "headers": {
+    "Accept": "*/*", 
+    "Accept-Encoding": "gzip, deflate", 
+    "Connection": "close", 
+    "Host": "httpbin.org", 
+    "User-Agent": "R (3.4.1 x86_64-apple-darwin15.6.0 x86_64 darwin15.6.0)"
+  }, 
+  "origin": "80.101.61.181", 
+  "url": "https://httpbin.org/get"
+}</code></pre>
+</div>
+<div id="streaming-data" class="section level3">
+<h3>Streaming data</h3>
+<p>The most flexible interface is the <code>curl</code> function, which has been designed as a drop-in replacement for base <code>url</code>. It will create a so-called connection object, which allows for incremental (asynchronous) reading of the response.</p>
+<pre class="r"><code>con <- curl("https://httpbin.org/get")
+open(con)
+
+# Get 3 lines
+out <- readLines(con, n = 3)
+cat(out, sep = "\n")</code></pre>
+<pre><code>{
+  "args": {}, 
+  "headers": {</code></pre>
+<pre class="r"><code># Get 3 more lines
+out <- readLines(con, n = 3)
+cat(out, sep = "\n")</code></pre>
+<pre><code>    "Accept": "*/*", 
+    "Accept-Encoding": "gzip, deflate", 
+    "Connection": "close", </code></pre>
+<pre class="r"><code># Get remaining lines
+out <- readLines(con)
+close(con)
+cat(out, sep = "\n")</code></pre>
+<pre><code>    "Host": "httpbin.org", 
+    "User-Agent": "R (3.4.1 x86_64-apple-darwin15.6.0 x86_64 darwin15.6.0)"
+  }, 
+  "origin": "80.101.61.181", 
+  "url": "https://httpbin.org/get"
+}</code></pre>
+<p>The example shows how to use <code>readLines</code> on an opened connection to read <code>n</code> lines at a time. Similarly <code>readBin</code> is used to read <code>n</code> bytes at a time for stream parsing binary data.</p>
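+<p>As an illustration (a sketch not in the original examples), the response can be consumed in fixed-size binary chunks with <code>readBin</code>:</p>
+<pre class="r"><code># Read a response in chunks of at most 100 bytes
+con <- curl("https://httpbin.org/bytes/1000")
+open(con, "rb")
+total <- 0
+while(length(buf <- readBin(con, raw(), 100))){
+  total <- total + length(buf)
+}
+close(con)
+total</code></pre>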
+<div id="non-blocking-connections" class="section level4">
+<h4>Non blocking connections</h4>
+<p>As of version 2.3 it is also possible to open connections in non-blocking mode. In this case <code>readBin</code> and <code>readLines</code> will return immediately with the data that is currently available, without waiting. For non-blocking connections we use <code>isIncomplete</code> to check whether the download has completed yet.</p>
+<pre class="r"><code>con <- curl("https://httpbin.org/drip?duration=1&numbytes=50")
+open(con, "rb", blocking = FALSE)
+while(isIncomplete(con)){
+  buf <- readBin(con, raw(), 1024)
+  if(length(buf)) 
+    cat("received: ", rawToChar(buf), "\n")
+}</code></pre>
+<pre><code>received:  ************************************************** </code></pre>
+<pre class="r"><code>close(con)</code></pre>
+<p>The <code>curl_fetch_stream</code> function provides a very simple wrapper around a non-blocking connection.</p>
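+<p>For example (an illustrative sketch), the drip endpoint above can also be consumed with a callback rather than an explicit read loop:</p>
+<pre class="r"><code># The callback is invoked with raw chunks as they arrive
+res <- curl_fetch_stream("https://httpbin.org/drip?duration=1&numbytes=50", function(buf){
+  cat("received:", length(buf), "bytes\n")
+})</code></pre>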
+</div>
+</div>
+<div id="async-requests" class="section level3">
+<h3>Async requests</h3>
+<p>As of <code>curl 2.0</code> the package provides an async interface which can perform multiple simultaneous requests concurrently. The <code>curl_fetch_multi</code> function adds a request to a pool and returns immediately; it does not actually perform the request.</p>
+<pre class="r"><code>pool <- new_pool()
+cb <- function(req){cat("done:", req$url, ": HTTP:", req$status, "\n")}
+curl_fetch_multi('https://www.google.com', done = cb, pool = pool)
+curl_fetch_multi('https://cloud.r-project.org', done = cb, pool = pool)
+curl_fetch_multi('https://httpbin.org/blabla', done = cb, pool = pool)</code></pre>
+<p>When we call <code>multi_run()</code>, all scheduled requests are performed concurrently. The callback functions get triggered when each request completes.</p>
+<pre class="r"><code># This actually performs requests:
+out <- multi_run(pool = pool)</code></pre>
+<pre><code>done: https://www.google.nl/?gfe_rd=cr&ei=O4pwWdjyC4vc8AfipLq4Cg : HTTP: 200 
+done: https://httpbin.org/blabla : HTTP: 404 
+done: https://cloud.r-project.org/ : HTTP: 200 </code></pre>
+<pre class="r"><code>print(out)</code></pre>
+<pre><code>$success
+[1] 3
+
+$error
+[1] 0
+
+$pending
+[1] 0</code></pre>
+<p>The system allows for running many concurrent non-blocking requests. However, it is quite complex and requires careful specification of handler functions.</p>
+</div>
+</div>
+<div id="exception-handling" class="section level2">
+<h2>Exception handling</h2>
+<p>An HTTP request can encounter two types of errors:</p>
+<ol style="list-style-type: decimal">
+<li>Connection failure: network down, host not found, invalid SSL certificate, etc</li>
+<li>HTTP non-success status: 401 (DENIED), 404 (NOT FOUND), 503 (SERVER PROBLEM), etc</li>
+</ol>
+<p>The first type of error (a connection failure) will always raise an error in R for each interface. However, if the request succeeds but the server returns a non-success HTTP status code, only <code>curl()</code> and <code>curl_download()</code> will raise an error. Let’s dive a little deeper into this.</p>
+<div id="error-automatically" class="section level3">
+<h3>Error automatically</h3>
+<p>The <code>curl</code> and <code>curl_download</code> functions are the safest to use because they automatically raise an error if the request was completed but the server returned a non-success (400 or higher) HTTP status. This mimics the behavior of the base functions <code>url</code> and <code>download.file</code>. Therefore we can safely write code like this:</p>
+<pre class="r"><code># This is OK
+curl_download('https://cran.r-project.org/CRAN_mirrors.csv', 'mirrors.csv')
+mirrors <- read.csv('mirrors.csv')
+unlink('mirrors.csv')</code></pre>
+<p>If the HTTP request was unsuccessful, R will not continue:</p>
+<pre class="r"><code># Oops! A typo in the URL!
+curl_download('https://cran.r-project.org/CRAN_mirrorZ.csv', 'mirrors.csv')</code></pre>
+<pre><code>Error in curl_download("https://cran.r-project.org/CRAN_mirrorZ.csv", : HTTP error 404.</code></pre>
+<pre class="r"><code>con <- curl('https://cran.r-project.org/CRAN_mirrorZ.csv')
+open(con)</code></pre>
+<pre><code>Error in open.connection(con): HTTP error 404.</code></pre>
+</div>
+<div id="check-manually" class="section level3">
+<h3>Check manually</h3>
+<p>When using any of the <code>curl_fetch_*</code> functions it is important to realize that these do <strong>not</strong> raise an error if the request was completed but the server returned a non-success HTTP status code. When using <code>curl_fetch_memory</code> or <code>curl_fetch_disk</code> you need to implement such application logic yourself and check whether the response was successful.</p>
+<pre class="r"><code>req <- curl_fetch_memory('https://cran.r-project.org/CRAN_mirrors.csv')
+print(req$status_code)</code></pre>
+<pre><code>[1] 200</code></pre>
+<p>The same holds for downloading to disk: if you do not check the status code, you might have downloaded an error page!</p>
+<pre class="r"><code># Oops a typo!
+req <- curl_fetch_disk('https://cran.r-project.org/CRAN_mirrorZ.csv', 'mirrors.csv')
+print(req$status_code)</code></pre>
+<pre><code>[1] 404</code></pre>
+<pre class="r"><code># This is not the CSV file we were expecting!
+head(readLines('mirrors.csv'))</code></pre>
+<pre><code>[1] "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"                               
+[2] "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\""               
+[3] "  \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">"                 
+[4] "<html xmlns=\"http://www.w3.org/1999/xhtml\" lang=\"en\" xml:lang=\"en\">"
+[5] "<head>"                                                                   
+[6] "<title>Object not found!</title>"                                         </code></pre>
+<pre class="r"><code>unlink('mirrors.csv')</code></pre>
+<p>If you <em>do</em> want the <code>curl_fetch_*</code> functions to automatically raise an error, you should set the <a href="https://curl.haxx.se/libcurl/c/CURLOPT_FAILONERROR.html"><code>FAILONERROR</code></a> option to <code>TRUE</code> in the handle of the request.</p>
+<pre class="r"><code>h <- new_handle(failonerror = TRUE)
+curl_fetch_memory('https://cran.r-project.org/CRAN_mirrorZ.csv', handle = h)</code></pre>
+<pre><code>Error in curl_fetch_memory("https://cran.r-project.org/CRAN_mirrorZ.csv", : The requested URL returned error: 404 Not Found</code></pre>
+</div>
+</div>
+<div id="customizing-requests" class="section level2">
+<h2>Customizing requests</h2>
+<p>By default libcurl uses HTTP GET to issue a request to an HTTP URL. To send a customized request, we first need to create and configure a curl handle object that is passed to the specific download interface.</p>
+<div id="configuring-a-handle" class="section level3">
+<h3>Configuring a handle</h3>
+<p>Creating a new handle is done using <code>new_handle</code>. After creating a handle object, we can set the libcurl options and http request headers.</p>
+<pre class="r"><code>h <- new_handle()
+handle_setopt(h, copypostfields = "moo=moomooo");
+handle_setheaders(h,
+  "Content-Type" = "text/moo",
+  "Cache-Control" = "no-cache",
+  "User-Agent" = "A cow"
+)</code></pre>
+<p>Use the <code>curl_options()</code> function to get a list of the options supported by your version of libcurl. The <a href="http://curl.haxx.se/libcurl/c/curl_easy_setopt.html">libcurl documentation</a> explains what each option does. Option names are not case sensitive.</p>
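+<p>For example (a quick illustrative sketch), we can list the supported options and look up a subset by name:</p>
+<pre class="r"><code>opts <- curl_options()
+length(opts)
+
+# Find timeout-related options
+opts[grep("timeout", names(opts))]</code></pre>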
+<p>After the handle has been configured, it can be used with any of the download interfaces to perform the request. For example <code>curl_fetch_memory</code> will store the output of the request in memory:</p>
+<pre class="r"><code>req <- curl_fetch_memory("http://httpbin.org/post", handle = h)
+cat(rawToChar(req$content))</code></pre>
+<pre><code>{
+  "args": {}, 
+  "data": "moo=moomooo", 
+  "files": {}, 
+  "form": {}, 
+  "headers": {
+    "Accept": "*/*", 
+    "Accept-Encoding": "gzip, deflate", 
+    "Cache-Control": "no-cache", 
+    "Connection": "close", 
+    "Content-Length": "11", 
+    "Content-Type": "text/moo", 
+    "Host": "httpbin.org", 
+    "User-Agent": "A cow"
+  }, 
+  "json": null, 
+  "origin": "80.101.61.181", 
+  "url": "http://httpbin.org/post"
+}</code></pre>
+<p>Alternatively we can use <code>curl()</code> to read the data via a connection interface:</p>
+<pre class="r"><code>con <- curl("http://httpbin.org/post", handle = h)
+cat(readLines(con), sep = "\n")</code></pre>
+<pre><code>{
+  "args": {}, 
+  "data": "moo=moomooo", 
+  "files": {}, 
+  "form": {}, 
+  "headers": {
+    "Accept": "*/*", 
+    "Accept-Encoding": "gzip, deflate", 
+    "Cache-Control": "no-cache", 
+    "Connection": "close", 
+    "Content-Length": "11", 
+    "Content-Type": "text/moo", 
+    "Host": "httpbin.org", 
+    "User-Agent": "A cow"
+  }, 
+  "json": null, 
+  "origin": "80.101.61.181", 
+  "url": "http://httpbin.org/post"
+}</code></pre>
+<p>Or we can use <code>curl_download</code> to write the response to disk:</p>
+<pre class="r"><code>tmp <- tempfile()
+curl_download("http://httpbin.org/post", destfile = tmp, handle = h)
+cat(readLines(tmp), sep = "\n")</code></pre>
+<pre><code>{
+  "args": {}, 
+  "data": "moo=moomooo", 
+  "files": {}, 
+  "form": {}, 
+  "headers": {
+    "Accept": "*/*", 
+    "Accept-Encoding": "gzip, deflate", 
+    "Cache-Control": "no-cache", 
+    "Connection": "close", 
+    "Content-Length": "11", 
+    "Content-Type": "text/moo", 
+    "Host": "httpbin.org", 
+    "User-Agent": "A cow"
+  }, 
+  "json": null, 
+  "origin": "80.101.61.181", 
+  "url": "http://httpbin.org/post"
+}</code></pre>
+<p>Or perform the same request with a multi pool:</p>
+<pre class="r"><code>curl_fetch_multi("http://httpbin.org/post", handle = h, done = function(res){
+  cat("Request complete! Response content:\n")
+  cat(rawToChar(res$content))
+})
+
+# Perform the request
+out <- multi_run()</code></pre>
+<pre><code>Request complete! Response content:
+{
+  "args": {}, 
+  "data": "moo=moomooo", 
+  "files": {}, 
+  "form": {}, 
+  "headers": {
+    "Accept": "*/*", 
+    "Accept-Encoding": "gzip, deflate", 
+    "Cache-Control": "no-cache", 
+    "Connection": "close", 
+    "Content-Length": "11", 
+    "Content-Type": "text/moo", 
+    "Host": "httpbin.org", 
+    "User-Agent": "A cow"
+  }, 
+  "json": null, 
+  "origin": "80.101.61.181", 
+  "url": "http://httpbin.org/post"
+}</code></pre>
+</div>
+<div id="reading-cookies" class="section level3">
+<h3>Reading cookies</h3>
+<p>Curl handles automatically keep track of cookies set by the server. At any given point we can use <code>handle_cookies</code> to see a list of current cookies in the handle.</p>
+<pre class="r"><code># Start with a fresh handle
+h <- new_handle()
+
+# Ask server to set some cookies
+req <- curl_fetch_memory("http://httpbin.org/cookies/set?foo=123&bar=ftw", handle = h)
+req <- curl_fetch_memory("http://httpbin.org/cookies/set?baz=moooo", handle = h)
+handle_cookies(h)</code></pre>
+<pre><code>       domain  flag path secure expiration name value
+1 httpbin.org FALSE    /  FALSE       <NA>  bar   ftw
+2 httpbin.org FALSE    /  FALSE       <NA>  foo   123
+3 httpbin.org FALSE    /  FALSE       <NA>  baz moooo</code></pre>
+<pre class="r"><code># Unset a cookie
+req <- curl_fetch_memory("http://httpbin.org/cookies/delete?foo", handle = h)
+handle_cookies(h)</code></pre>
+<pre><code>       domain  flag path secure          expiration name value
+1 httpbin.org FALSE    /  FALSE                <NA>  bar   ftw
+2 httpbin.org FALSE    /  FALSE 2017-07-20 12:47:30  foo  <NA>
+3 httpbin.org FALSE    /  FALSE                <NA>  baz moooo</code></pre>
+<p>The <code>handle_cookies</code> function returns a data frame with 7 columns as specified in the <a href="http://www.cookiecentral.com/faq/#3.5">netscape cookie file format</a>.</p>
+</div>
+<div id="on-reusing-handles" class="section level3">
+<h3>On reusing handles</h3>
+<p>In most cases you should not re-use a single handle object for more than one request. The only benefit of reusing a handle across requests is to keep track of cookies set by the server (as seen above). This can be needed if your server uses session cookies, but this is rare these days. Most APIs set state explicitly via http headers or parameters, rather than implicitly via cookies.</p>
+<p>In recent versions of the curl package there is no performance benefit to reusing handles. The overhead of creating and configuring a new handle object is negligible. The safest way to issue multiple requests, either to a single server or to multiple servers, is to use a separate handle for each request (which is the default):</p>
+<pre class="r"><code>req1 <- curl_fetch_memory("https://httpbin.org/get")
+req2 <- curl_fetch_memory("http://www.r-project.org")</code></pre>
+<p>In past versions of this package you needed to manually re-use a handle to take advantage of http Keep-Alive. However, as of version 2.3 this is no longer the case: curl automatically maintains a global pool of open http connections shared by all handles. When performing many requests to the same server, curl automatically reuses existing connections when possible, eliminating TCP/SSL handshaking overhead:</p>
+<pre class="r"><code>req <- curl_fetch_memory("https://api.github.com/users/ropensci")
+req$times</code></pre>
+<pre><code>     redirect    namelookup       connect   pretransfer starttransfer         total 
+     0.000000      0.011790      0.102725      0.302215      0.419966      0.420094 </code></pre>
+<pre class="r"><code>req2 <- curl_fetch_memory("https://api.github.com/users/rstudio")
+req2$times</code></pre>
+<pre><code>     redirect    namelookup       connect   pretransfer starttransfer         total 
+     0.000000      0.000033      0.000035      0.000094      0.115455      0.115580 </code></pre>
+<p>If you really need to re-use a handle, note that curl does not clean up the handle after each request. All of the options and internal fields will linger around for all future requests until explicitly reset or overwritten. This can sometimes lead to unexpected behavior.</p>
+<pre class="r"><code>handle_reset(h)</code></pre>
+<p>The <code>handle_reset</code> function will reset all curl options and request headers to their default values. It will <strong>not</strong> erase cookies and it will still keep the connections alive. Therefore it is good practice to call <code>handle_reset</code> after performing a request if you want to reuse the handle for a subsequent request. Still, it is always safer to create a fresh handle when possible, rather than recycling old ones.</p>
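+<p>A minimal reuse pattern might look like this (an illustrative sketch; as noted, a fresh handle is usually preferable):</p>
+<pre class="r"><code>h <- new_handle(useragent = "A cow")
+req1 <- curl_fetch_memory("https://httpbin.org/get", handle = h)
+
+# Wipe options and headers (cookies and connections are kept) before reusing
+handle_reset(h)
+req2 <- curl_fetch_memory("https://httpbin.org/get", handle = h)</code></pre>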
+</div>
+<div id="posting-forms" class="section level3">
+<h3>Posting forms</h3>
+<p>The <code>handle_setform</code> function is used to perform a <code>multipart/form-data</code> HTTP POST request (a.k.a. posting a form). Values can be either strings, raw vectors (for binary data) or files.</p>
+<pre class="r"><code># Posting multipart
+h <- new_handle()
+handle_setform(h,
+  foo = "blabla",
+  bar = charToRaw("boeboe"),
+  iris = form_data(serialize(iris, NULL), "application/rda"),
+  description = form_file(system.file("DESCRIPTION")),
+  logo = form_file(file.path(Sys.getenv("R_DOC_DIR"), "html/logo.jpg"), "image/jpeg")
+)
+req <- curl_fetch_memory("http://httpbin.org/post", handle = h)</code></pre>
+<p>The <code>form_file</code> function is used to upload files with the form post. It has two arguments: a file path, and optionally a content-type value. If no content-type is set, curl will guess the content type of the file based on the file extension.</p>
+<p>The <code>form_data</code> function is similar but simply posts a string or raw value with a custom content-type.</p>
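+<p>For instance (an illustrative sketch), <code>form_data</code> can post an in-memory JSON string with an explicit content-type:</p>
+<pre class="r"><code>h <- new_handle()
+handle_setform(h,
+  payload = form_data('{"foo": 123}', "application/json")
+)
+req <- curl_fetch_memory("http://httpbin.org/post", handle = h)</code></pre>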
+</div>
+<div id="using-pipes" class="section level3">
+<h3>Using pipes</h3>
+<p>All of the <code>handle_xxx</code> functions return the handle object so that function calls can be chained using the popular pipe operators:</p>
+<pre class="r"><code>library(magrittr)
+
+new_handle() %>%
+  handle_setopt(copypostfields = "moo=moomooo") %>%
+  handle_setheaders("Content-Type" = "text/moo", "Cache-Control" = "no-cache", "User-Agent" = "A cow") %>%
+  curl_fetch_memory(url = "http://httpbin.org/post") %$% content %>% rawToChar %>% cat</code></pre>
+<pre><code>{
+  "args": {}, 
+  "data": "moo=moomooo", 
+  "files": {}, 
+  "form": {}, 
+  "headers": {
+    "Accept": "*/*", 
+    "Accept-Encoding": "gzip, deflate", 
+    "Cache-Control": "no-cache", 
+    "Connection": "close", 
+    "Content-Length": "11", 
+    "Content-Type": "text/moo", 
+    "Host": "httpbin.org", 
+    "User-Agent": "A cow"
+  }, 
+  "json": null, 
+  "origin": "80.101.61.181", 
+  "url": "http://httpbin.org/post"
+}</code></pre>
+</div>
+</div>
+
+
+
+</div>
+</div>
+
+</div>
+
+<script>
+
+// add bootstrap table styles to pandoc tables
+function bootstrapStylePandocTables() {
+  $('tr.header').parent('thead').parent('table').addClass('table table-condensed');
+}
+$(document).ready(function () {
+  bootstrapStylePandocTables();
+});
+
+
+</script>
+
+<!-- dynamically load mathjax for compatibility with self-contained -->
+<script>
+  (function () {
+    var script = document.createElement("script");
+    script.type = "text/javascript";
+    script.src  = "https://mathjax.rstudio.com/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML";
+    document.getElementsByTagName("head")[0].appendChild(script);
+  })();
+</script>
+
+</body>
+</html>
diff --git a/man/curl.Rd b/man/curl.Rd
new file mode 100644
index 0000000..2b09788
--- /dev/null
+++ b/man/curl.Rd
@@ -0,0 +1,78 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/curl.R
+\name{curl}
+\alias{curl}
+\title{Curl connection interface}
+\usage{
+curl(url = "http://httpbin.org/get", open = "", handle = new_handle())
+}
+\arguments{
+\item{url}{character string. See examples.}
+
+\item{open}{character string. How to open the connection if it should be opened
+initially. Currently only "r" and "rb" are supported.}
+
+\item{handle}{a curl handle object}
+}
+\description{
+Drop-in replacement for base \code{\link{url}} that supports https, ftps,
+gzip, deflate, etc. Default behavior is identical to \code{\link{url}}, but
+the request can be fully configured by passing a custom \code{\link{handle}}.
+}
+\details{
+As of version 2.3 curl connections support \code{open(con, blocking = FALSE)}.
+In this case \code{readBin} and \code{readLines} will return immediately with data
+that is available without waiting. For such non-blocking connections the caller
+needs to call \code{\link{isIncomplete}} to check if the download has completed
+yet.
+}
+\examples{
+\dontrun{
+con <- curl("https://httpbin.org/get")
+readLines(con)
+
+# Auto-opened connections can be recycled
+open(con, "rb")
+bin <- readBin(con, raw(), 999)
+close(con)
+rawToChar(bin)
+
+# HTTP error
+curl("https://httpbin.org/status/418", "r")
+
+# Follow redirects
+readLines(curl("https://httpbin.org/redirect/3"))
+
+# Error after redirect
+curl("https://httpbin.org/redirect-to?url=http://httpbin.org/status/418", "r")
+
+# Auto decompress Accept-Encoding: gzip / deflate (rfc2616 #14.3)
+readLines(curl("http://httpbin.org/gzip"))
+readLines(curl("http://httpbin.org/deflate"))
+
+# Binary support
+buf <- readBin(curl("http://httpbin.org/bytes/98765", "rb"), raw(), 1e5)
+length(buf)
+
+# Read file from disk
+test <- paste0("file://", system.file("DESCRIPTION"))
+readLines(curl(test))
+
+# Other protocols
+read.csv(curl("ftp://cran.r-project.org/pub/R/CRAN_mirrors.csv"))
+readLines(curl("ftps://test.rebex.net:990/readme.txt"))
+readLines(curl("gopher://quux.org/1"))
+
+# Streaming data
+con <- curl("http://jeroen.github.io/data/diamonds.json", "r")
+while(length(x <- readLines(con, n = 5))){
+  print(x)
+}
+
+# Stream large dataset over https with gzip
+library(jsonlite)
+con <- gzcon(curl("https://jeroen.github.io/data/nycflights13.json.gz"))
+nycflights <- stream_in(con)
+}
+
+}
diff --git a/man/curl_download.Rd b/man/curl_download.Rd
new file mode 100644
index 0000000..8f1594f
--- /dev/null
+++ b/man/curl_download.Rd
@@ -0,0 +1,47 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/download.R
+\name{curl_download}
+\alias{curl_download}
+\title{Download file to disk}
+\usage{
+curl_download(url, destfile, quiet = TRUE, mode = "wb",
+  handle = new_handle())
+}
+\arguments{
+\item{url}{A character string naming the URL of a resource to be downloaded.}
+
+\item{destfile}{A character string with the name where the downloaded file
+is saved. Tilde-expansion is performed.}
+
+\item{quiet}{If \code{TRUE}, suppress status messages (if any), and the
+progress bar.}
+
+\item{mode}{A character string specifying the mode with which to write the file.
+Useful values are \code{"w"}, \code{"wb"} (binary), \code{"a"} (append)
+and \code{"ab"}.}
+
+\item{handle}{a curl handle object}
+}
+\value{
+Path of downloaded file (invisibly).
+}
+\description{
+Libcurl implementation of \code{C_download} (the "internal" download method)
+with added support for https, ftps, gzip, etc. Default behavior is identical
+to \code{\link{download.file}}, but the request can be fully configured by passing
+a custom \code{\link{handle}}.
+}
+\details{
+The main difference between \code{curl_download} and \code{curl_fetch_disk}
+is that \code{curl_download} checks the http status code before starting the
+download, and raises an error when status is non-successful. The behavior of
+\code{curl_fetch_disk} on the other hand is to proceed as normal and write
+the error page to disk in case of a non success response.
+}
+\examples{
+\dontrun{# Download a large file
+url <- "http://www2.census.gov/acs2011_5yr/pums/csv_pus.zip"
+tmp <- tempfile()
+curl_download(url, tmp)
+}
+}
diff --git a/man/curl_echo.Rd b/man/curl_echo.Rd
new file mode 100644
index 0000000..b13855c
--- /dev/null
+++ b/man/curl_echo.Rd
@@ -0,0 +1,32 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/echo.R
+\name{curl_echo}
+\alias{curl_echo}
+\title{Echo Service}
+\usage{
+curl_echo(handle, port = 9359, progress = interactive(), file = NULL)
+}
+\arguments{
+\item{handle}{a curl handle object}
+
+\item{port}{the port number on which to run httpuv server}
+
+\item{progress}{show progress meter during http transfer}
+
+\item{file}{path or connection to write body. Default returns body as raw vector.}
+}
+\description{
+This function is only for testing purposes. It starts a local httpuv server to
+echo the request body and content type in the response.
+}
+\examples{
+h <- handle_setform(new_handle(), foo = "blabla", bar = charToRaw("test"),
+myfile = form_file(system.file("DESCRIPTION"), "text/description"))
+formdata <- curl_echo(h)
+
+# Show the multipart body
+cat(rawToChar(formdata$body))
+
+# Parse multipart
+webutils::parse_http(formdata$body, formdata$content_type)
+}
diff --git a/man/curl_escape.Rd b/man/curl_escape.Rd
new file mode 100644
index 0000000..f523cca
--- /dev/null
+++ b/man/curl_escape.Rd
@@ -0,0 +1,29 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/escape.R
+\name{curl_escape}
+\alias{curl_escape}
+\alias{curl_unescape}
+\title{URL encoding}
+\usage{
+curl_escape(url)
+
+curl_unescape(url)
+}
+\arguments{
+\item{url}{A character vector (typically containing urls or parameters) to be
+encoded/decoded}
+}
+\description{
+Escape all special characters (i.e. everything except for a-z, A-Z, 0-9, '-',
+'.', '_' or '~') for use in URLs.
+}
+\examples{
+# Escape strings
+out <- curl_escape("foo = bar + 5")
+curl_unescape(out)
+
+# All non-ascii characters are encoded
+mu <- "\\u00b5"
+curl_escape(mu)
+curl_unescape(curl_escape(mu))
+}
diff --git a/man/curl_fetch.Rd b/man/curl_fetch.Rd
new file mode 100644
index 0000000..159c109
--- /dev/null
+++ b/man/curl_fetch.Rd
@@ -0,0 +1,91 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/fetch.R
+\name{curl_fetch_memory}
+\alias{curl_fetch_memory}
+\alias{curl_fetch_disk}
+\alias{curl_fetch_stream}
+\alias{curl_fetch_multi}
+\title{Fetch the contents of a URL}
+\usage{
+curl_fetch_memory(url, handle = new_handle())
+
+curl_fetch_disk(url, path, handle = new_handle())
+
+curl_fetch_stream(url, fun, handle = new_handle())
+
+curl_fetch_multi(url, done = NULL, fail = NULL, pool = NULL,
+  handle = new_handle())
+}
+\arguments{
+\item{url}{A character string naming the URL of a resource to be downloaded.}
+
+\item{handle}{a curl handle object}
+
+\item{path}{Path to save results}
+
+\item{fun}{Callback function. Should have one argument, which will be
+a raw vector.}
+
+\item{done}{callback function for completed request. Single argument with
+response data in same structure as \link{curl_fetch_memory}.}
+
+\item{fail}{callback function called on failed request. Argument contains
+error message.}
+
+\item{pool}{a multi handle created by \link{new_pool}. Default uses a global pool.}
+}
+\description{
+Low-level bindings to write data from a URL into memory, disk or a callback
+function. These are mainly intended for \code{httr}; most users will be better
+off using the \code{\link{curl}} or \code{\link{curl_download}} function, or the
+http specific wrappers in the \code{httr} package.
+}
+\details{
+The curl_fetch functions automatically raise an error upon protocol problems
+(network, disk, ssl) but do not implement application logic. For example,
+you need to check the status code of http responses yourself and deal with
+it accordingly.
+
+Both \code{curl_fetch_memory} and \code{curl_fetch_disk} have a blocking and
+non-blocking C implementation. The latter is slightly slower but allows for
+interrupting the download prematurely (using e.g. CTRL+C or ESC). Interrupting
+is enabled when R runs in interactive mode or when
+\code{getOption("curl_interrupt") == TRUE}.
+
+The \code{curl_fetch_multi} function is the asynchronous equivalent of
+\code{curl_fetch_memory}. It wraps \code{multi_add} to schedule requests which
+are executed concurrently when calling \code{multi_run}. For each successful
+request the \code{done} callback is triggered with response data. For failed
+requests (when \code{curl_fetch_memory} would raise an error), the \code{fail}
+function is triggered with the error message.
+}
+\examples{
+# Load in memory
+res <- curl_fetch_memory("http://httpbin.org/cookies/set?foo=123&bar=ftw")
+res$content
+
+# Save to disk
+res <- curl_fetch_disk("http://httpbin.org/stream/10", tempfile())
+res$content
+readLines(res$content)
+
+# Stream with callback
+res <- curl_fetch_stream("http://www.httpbin.org/drip?duration=5&numbytes=15&code=200", function(x){
+  cat(rawToChar(x))
+})
+
+# Async API
+data <- list()
+success <- function(res){
+  cat("Request done! Status:", res$status, "\\n")
+  data <<- c(data, list(res))
+}
+failure <- function(msg){
+  cat("Oh noes! Request failed!", msg, "\\n")
+}
+curl_fetch_multi("http://httpbin.org/get", success, failure)
+curl_fetch_multi("http://httpbin.org/status/418", success, failure)
+curl_fetch_multi("https://urldoesnotexist.xyz", success, failure)
+multi_run()
+str(data)
+}
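The details section above stresses that the `curl_fetch` functions raise errors only for protocol problems, leaving HTTP status checks to the caller. A minimal sketch of that application logic in Python, using a hypothetical response dict that mirrors the list returned by `curl_fetch_memory`:

```python
# Sketch of caller-side status checking; curl_fetch_memory itself never
# raises on a 4xx/5xx response, so the application must do this check.
# The dict layout here is an illustrative stand-in, not the real R object.

def stop_for_status(response):
    """Raise if the response carries an HTTP error status (4xx/5xx)."""
    status = response["status_code"]
    if status >= 400:
        raise RuntimeError(f"HTTP error {status} for {response['url']}")
    return response

ok = {"url": "http://httpbin.org/get", "status_code": 200, "content": b"{}"}
bad = {"url": "http://httpbin.org/status/418", "status_code": 418, "content": b""}

stop_for_status(ok)            # passes through unchanged
try:
    stop_for_status(bad)       # a teapot is still a completed request
except RuntimeError as err:
    print(err)
```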
diff --git a/man/curl_options.Rd b/man/curl_options.Rd
new file mode 100644
index 0000000..940f102
--- /dev/null
+++ b/man/curl_options.Rd
@@ -0,0 +1,45 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/options.R, R/utilities.R
+\docType{data}
+\name{curl_options}
+\alias{curl_options}
+\alias{curl_version}
+\alias{curl_symbols}
+\title{List curl version and options.}
+\format{A data frame with columns:
+\describe{
+\item{name}{Symbol name}
+\item{introduced,deprecated,removed}{Versions of libcurl}
+\item{value}{Integer value of symbol}
+\item{type}{If an option, the type of value it needs}
+}}
+\usage{
+curl_options(filter = "")
+
+curl_version()
+
+curl_symbols
+}
+\arguments{
+\item{filter}{string: only return options with string in name}
+}
+\description{
+\code{curl_version()} shows the versions of libcurl, libssl and zlib and
+supported protocols. \code{curl_options()} lists all options available in
+the current version of libcurl. The dataset \code{curl_symbols} lists all
+symbols (including options) and provides more information about them,
+including when support was added to or removed from libcurl.
+}
+\examples{
+# Available options
+curl_options()
+
+# List proxy options
+curl_options("proxy")
+
+# Symbol table
+head(curl_symbols)
+# Curl/ssl version info
+curl_version()
+}
+\keyword{datasets}
diff --git a/man/handle.Rd b/man/handle.Rd
new file mode 100644
index 0000000..be07ef6
--- /dev/null
+++ b/man/handle.Rd
@@ -0,0 +1,74 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/handle.R
+\name{handle}
+\alias{handle}
+\alias{new_handle}
+\alias{handle_setopt}
+\alias{handle_setheaders}
+\alias{handle_setform}
+\alias{handle_reset}
+\alias{handle_data}
+\title{Create and configure a curl handle}
+\usage{
+new_handle(...)
+
+handle_setopt(handle, ..., .list = list())
+
+handle_setheaders(handle, ..., .list = list())
+
+handle_setform(handle, ..., .list = list())
+
+handle_reset(handle)
+
+handle_data(handle)
+}
+\arguments{
+\item{...}{named options / headers to be set in the handle.
+To send a file, see \code{\link{form_file}}. To list all allowed options,
+see \code{\link{curl_options}}}
+
+\item{handle}{Handle to modify}
+
+\item{.list}{A named list of options. This is useful if you've created
+a list of options elsewhere, avoiding the use of \code{do.call()}.}
+}
+\value{
+A handle object (external pointer to the underlying curl handle).
+  All functions modify the handle in place but also return the handle
+  so you can create a pipeline of operations.
+}
+\description{
+Handles are the work horses of libcurl. A handle is used to configure a
+request with custom options, headers and payload. Once the handle has been
+set up, it can be passed to any of the download functions such as \code{\link{curl}},
+\code{\link{curl_download}} or \code{\link{curl_fetch_memory}}. The handle will maintain
+state in between requests, including keep-alive connections, cookies and
+settings.
+}
+\details{
+Use \code{new_handle()} to create a new clean curl handle that can be
+configured with custom options and headers. Note that \code{handle_setopt}
+appends or overrides options in the handle, whereas \code{handle_setheaders}
+replaces the entire set of headers with the new ones. The \code{handle_reset}
+function resets only options/headers/forms in the handle. It does not affect
+active connections, cookies or response data from previous requests. The safest
+way to perform multiple independent requests is by using a separate handle for
+each request. There is very little performance overhead in creating handles.
+}
+\examples{
+h <- new_handle()
+handle_setopt(h, customrequest = "PUT")
+handle_setform(h, a = "1", b = "2")
+r <- curl_fetch_memory("http://httpbin.org/put", h)
+cat(rawToChar(r$content))
+
+# Or use the list form
+h <- new_handle()
+handle_setopt(h, .list = list(customrequest = "PUT"))
+handle_setform(h, .list = list(a = "1", b = "2"))
+r <- curl_fetch_memory("http://httpbin.org/put", h)
+cat(rawToChar(r$content))
+}
+\seealso{
+Other handles: \code{\link{handle_cookies}}
+}
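The value section above notes that each `handle_*` function modifies the handle in place but also returns it, which is what enables a pipeline of operations. A hypothetical Python sketch of that design, including the documented asymmetry between `handle_setopt` (appends/overrides) and `handle_setheaders` (replaces the whole set):

```python
# Illustrative mutable-handle design: every setter mutates self and
# returns self, so calls can be chained. Not the real curl package API.

class Handle:
    def __init__(self):
        self.options, self.headers, self.form = {}, {}, {}

    def setopt(self, **opts):
        self.options.update(opts)      # appends or overrides options
        return self

    def setheaders(self, **headers):
        self.headers = dict(headers)   # replaces the entire header set
        return self

    def reset(self):
        """Reset options/headers/forms only, mirroring handle_reset."""
        self.options.clear(); self.headers.clear(); self.form.clear()
        return self

h = Handle().setopt(customrequest="PUT").setheaders(accept="application/json")
print(h.options)   # {'customrequest': 'PUT'}
```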
diff --git a/man/handle_cookies.Rd b/man/handle_cookies.Rd
new file mode 100644
index 0000000..c216fb4
--- /dev/null
+++ b/man/handle_cookies.Rd
@@ -0,0 +1,34 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/handle.R
+\name{handle_cookies}
+\alias{handle_cookies}
+\title{Extract cookies from a handle}
+\usage{
+handle_cookies(handle)
+}
+\arguments{
+\item{handle}{a curl handle object}
+}
+\description{
+The \code{handle_cookies} function returns a data frame with 7 columns as specified in the
+\href{http://www.cookiecentral.com/faq/#3.5}{netscape cookie file format}.
+}
+\examples{
+h <- new_handle()
+handle_cookies(h)
+
+# Server sets cookies
+req <- curl_fetch_memory("http://httpbin.org/cookies/set?foo=123&bar=ftw", handle = h)
+handle_cookies(h)
+
+# Server deletes cookies
+req <- curl_fetch_memory("http://httpbin.org/cookies/delete?foo", handle = h)
+handle_cookies(h)
+
+# Cookies will survive a reset!
+handle_reset(h)
+handle_cookies(h)
+}
+\seealso{
+Other handles: \code{\link{handle}}
+}
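The netscape cookie file format referenced above stores one cookie per line as 7 tab-separated fields, matching the 7 columns of the `handle_cookies` data frame. A small illustrative parser for a single line (field names chosen to mirror those columns):

```python
# Parse one line of a Netscape-format cookie file into the 7 fields:
# domain, flag (include subdomains), path, secure, expiration, name, value.

def parse_cookie_line(line):
    fields = line.rstrip("\n").split("\t")
    if len(fields) != 7:
        raise ValueError("expected 7 tab-separated fields")
    domain, flag, path, secure, expiration, name, value = fields
    return {
        "domain": domain,
        "flag": flag == "TRUE",          # TRUE: subdomains also match
        "path": path,
        "secure": secure == "TRUE",
        "expiration": int(expiration),   # unix timestamp; 0 = session cookie
        "name": name,
        "value": value,
    }

cookie = parse_cookie_line("httpbin.org\tFALSE\t/\tFALSE\t0\tfoo\t123")
print(cookie["name"], cookie["value"])   # foo 123
```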
diff --git a/man/ie_proxy.Rd b/man/ie_proxy.Rd
new file mode 100644
index 0000000..511e321
--- /dev/null
+++ b/man/ie_proxy.Rd
@@ -0,0 +1,27 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/proxy.R
+\name{ie_proxy}
+\alias{ie_proxy}
+\alias{ie_proxy_info}
+\alias{ie_get_proxy_for_url}
+\title{Internet Explorer proxy settings}
+\usage{
+ie_proxy_info()
+
+ie_get_proxy_for_url(target_url = "http://www.google.com")
+}
+\arguments{
+\item{target_url}{url with host for which to lookup the proxy server}
+}
+\description{
+Lookup and mimic the system proxy settings on Windows as set by Internet
+Explorer. This can be used to configure curl to use the same proxy server.
+}
+\details{
+The \code{ie_proxy_info} function looks
+up your current proxy settings as configured in IE under "Internet Options"
+> "Connections" > "LAN Settings". The \code{ie_get_proxy_for_url} function
+determines if and which proxy should be used to connect to a particular
+URL. If your settings have an "automatic configuration script" this
+involves downloading and executing a PAC file, which can take a while.
+}
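System proxy discovery has a rough cross-language analogue in the Python standard library: `urllib.request.getproxies()` also consults the platform settings (on Windows, the same Internet Options registry values that IE uses). A quick look at what it reports, offered only as a comparison point:

```python
# Query the system proxy configuration; returns a mapping like
# {'http': 'http://proxy:8080'} or an empty dict when no proxy is set.
import urllib.request

proxies = urllib.request.getproxies()
print(proxies)
```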
diff --git a/man/multi.Rd b/man/multi.Rd
new file mode 100644
index 0000000..5566058
--- /dev/null
+++ b/man/multi.Rd
@@ -0,0 +1,89 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/multi.R
+\name{multi}
+\alias{multi}
+\alias{multi_add}
+\alias{multi_run}
+\alias{multi_set}
+\alias{multi_list}
+\alias{multi_cancel}
+\alias{new_pool}
+\title{Async Multi Download}
+\usage{
+multi_add(handle, done = NULL, fail = NULL, pool = NULL)
+
+multi_run(timeout = Inf, poll = FALSE, pool = NULL)
+
+multi_set(total_con = 50, host_con = 6, multiplex = TRUE, pool = NULL)
+
+multi_list(pool = NULL)
+
+multi_cancel(handle)
+
+new_pool(total_con = 100, host_con = 6, multiplex = TRUE)
+}
+\arguments{
+\item{handle}{a curl \link{handle} with preconfigured \code{url} option.}
+
+\item{done}{callback function for completed request. Single argument with
+response data in same structure as \link{curl_fetch_memory}.}
+
+\item{fail}{callback function called on failed request. Argument contains
+error message.}
+
+\item{pool}{a multi handle created by \link{new_pool}. Default uses a global pool.}
+
+\item{timeout}{max time in seconds to wait for results. Use \code{0} to poll for results without
+waiting at all.}
+
+\item{poll}{If \code{TRUE} then return immediately after any of the requests has completed.
+May also be an integer in which case it returns after n requests have completed.}
+
+\item{total_con}{max total concurrent connections.}
+
+\item{host_con}{max concurrent connections per host.}
+
+\item{multiplex}{enable HTTP/2 multiplexing if supported by host and client.}
+}
+\description{
+AJAX style concurrent requests, possibly using HTTP/2 multiplexing.
+Results are only available via callback functions. Advanced use only!
+}
+\details{
+Requests are created in the usual way using a curl \link{handle} and added
+to the scheduler with \link{multi_add}. This function returns immediately
+and does not perform the request yet. The user needs to call \link{multi_run}
+which performs all scheduled requests concurrently. It returns when all
+requests have completed, or in case of a \code{timeout} or \code{SIGINT} (e.g.
+if the user presses \code{ESC} or \code{CTRL+C} in the console). In case of
+the latter, simply call \link{multi_run} again to resume pending requests.
+
+When a request succeeds, the \code{done} callback gets triggered with
+the response data. The structure of this data is identical to \link{curl_fetch_memory}.
+When the request fails, the \code{fail} callback is triggered with an error
+message. Note that failure here means something went wrong in performing the
+request, such as a connection failure; it does not check the http status code.
+Just like \link{curl_fetch_memory}, the user has to implement application logic.
+
+Raising an error within a callback function stops execution of that function
+but does not affect other requests.
+
+A single handle cannot be used for multiple simultaneous requests. However
+it is possible to add new requests to a pool while it is running, so you
+can re-use a handle within the callback of a request from that same handle.
+It is up to the user to make sure the same handle is not used in concurrent
+requests.
+
+The \link{multi_cancel} function can be used to cancel a pending request.
+It has no effect if the request was already completed or canceled.
+}
+\examples{
+h1 <- new_handle(url = "https://eu.httpbin.org/delay/3")
+h2 <- new_handle(url = "https://eu.httpbin.org/post", postfields = "bla bla")
+h3 <- new_handle(url = "https://urldoesnotexist.xyz")
+multi_add(h1, done = print, fail = print)
+multi_add(h2, done = print, fail = print)
+multi_add(h3, done = print, fail = print)
+multi_run(timeout = 2)
+multi_run()
+}
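The `multi_add`/`multi_run` callback contract described above can be sketched with a thread pool standing in for libcurl's event loop: each scheduled job fires either its `done` callback with a result or its `fail` callback with an error message. The jobs below are plain local functions, not real network requests, so this is purely an illustration of the dispatch pattern:

```python
# Each entry is a (job, done, fail) triple; jobs run concurrently and
# exactly one of the two callbacks fires per job, as with multi_run.
from concurrent.futures import ThreadPoolExecutor

def multi_run(jobs):
    with ThreadPoolExecutor(max_workers=6) as pool:   # cf. host_con = 6
        futures = [(pool.submit(job), done, fail) for job, done, fail in jobs]
        for future, done, fail in futures:
            try:
                done(future.result())
            except Exception as err:
                fail(str(err))

results, errors = [], []

def boom():
    raise ConnectionError("could not resolve host")

multi_run([
    (lambda: {"status": 200}, results.append, errors.append),
    (boom, results.append, errors.append),
])
print(results, errors)
```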
diff --git a/man/multipart.Rd b/man/multipart.Rd
new file mode 100644
index 0000000..f41a5f6
--- /dev/null
+++ b/man/multipart.Rd
@@ -0,0 +1,25 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/form.R
+\name{multipart}
+\alias{multipart}
+\alias{form_file}
+\alias{multipart}
+\alias{form_data}
+\title{POST files or data}
+\usage{
+form_file(path, type = NULL)
+
+form_data(value, type = NULL)
+}
+\arguments{
+\item{path}{a string with a path to an existing file on disk}
+
+\item{type}{MIME content-type of the file.}
+
+\item{value}{a character or raw vector to post}
+}
+\description{
+Build multipart form data elements. The \code{form_file} function uploads a
+file. The \code{form_data} function allows for posting a string or raw vector
+with a custom content-type.
+}
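Under the hood, `form_file` and `form_data` produce parts of a multipart/form-data request body. A hand-rolled sketch of such a body with a custom content-type per part (the boundary string is chosen arbitrarily here; a real client generates a unique one):

```python
# Build a multipart/form-data body from (name, value, content_type)
# triples; content_type may be None for a plain text field.

def multipart_body(fields, boundary="------------boundary1234"):
    lines = []
    for name, value, ctype in fields:
        lines.append(f"--{boundary}")
        lines.append(f'Content-Disposition: form-data; name="{name}"')
        if ctype:
            lines.append(f"Content-Type: {ctype}")
        lines.append("")        # blank line separates headers from content
        lines.append(value)
    lines.append(f"--{boundary}--")   # closing delimiter
    return "\r\n".join(lines)

body = multipart_body([("a", "1", None), ("b", "hello", "text/plain")])
print(body)
```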
diff --git a/man/nslookup.Rd b/man/nslookup.Rd
new file mode 100644
index 0000000..add6024
--- /dev/null
+++ b/man/nslookup.Rd
@@ -0,0 +1,33 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/nslookup.R
+\name{nslookup}
+\alias{nslookup}
+\alias{has_internet}
+\title{Lookup a hostname}
+\usage{
+nslookup(host, ipv4_only = FALSE, multiple = FALSE, error = TRUE)
+
+has_internet()
+}
+\arguments{
+\item{host}{a string with a hostname}
+
+\item{ipv4_only}{always return ipv4 address. Set to \code{FALSE} to allow for ipv6 as well.}
+
+\item{multiple}{returns multiple ip addresses if possible}
+
+\item{error}{raise an error for failed DNS lookup. Otherwise returns \code{NULL}.}
+}
+\description{
+The \code{nslookup} function is similar to \code{nsl} but works on all platforms
+and can resolve ipv6 addresses if supported by the OS. By default it raises an
+error if the lookup fails. The \code{has_internet} function tests the internet
+connection by resolving a random address.
+}
+\examples{
+# Should always work if we are online
+nslookup("www.r-project.org")
+
+# If your OS supports IPv6
+nslookup("ipv6.test-ipv6.com", ipv4_only = FALSE, error = FALSE)
+}
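The lookup that `nslookup` performs has a close analogue in the standard socket API of most languages. A Python sketch mimicking the `ipv4_only` and `multiple` arguments; resolving `localhost` is used because it is the only lookup that works without network access:

```python
# Resolve a hostname via getaddrinfo; restrict to IPv4 when requested
# and optionally return all addresses instead of the first one.
import socket

def nslookup(host, ipv4_only=False, multiple=False):
    family = socket.AF_INET if ipv4_only else socket.AF_UNSPEC
    infos = socket.getaddrinfo(host, None, family, socket.SOCK_STREAM)
    addresses = [info[4][0] for info in infos]
    return addresses if multiple else addresses[0]

print(nslookup("localhost", ipv4_only=True))   # 127.0.0.1 on most systems
```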
diff --git a/man/parse_date.Rd b/man/parse_date.Rd
new file mode 100644
index 0000000..00ec327
--- /dev/null
+++ b/man/parse_date.Rd
@@ -0,0 +1,23 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/utilities.R
+\name{parse_date}
+\alias{parse_date}
+\title{Parse date/time}
+\usage{
+parse_date(datestring)
+}
+\arguments{
+\item{datestring}{a string consisting of a timestamp}
+}
+\description{
+Can be used to parse dates appearing in http response headers such
+as \code{Expires} or \code{Last-Modified}. Automatically recognizes
+most common formats. If the format is known, \code{\link{strptime}}
+might be easier.
+}
+\examples{
+# Parse dates in many formats
+parse_date("Sunday, 06-Nov-94 08:49:37 GMT")
+parse_date("06 Nov 1994 08:49:37")
+parse_date("20040911 +0200")
+}
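The format-guessing behavior of `parse_date` can be reduced to trying a list of `strptime` layouts in order. The format list below is a small illustrative subset covering the examples above, not the full set libcurl recognizes:

```python
# Try a few common HTTP timestamp layouts until one parses.
from datetime import datetime

FORMATS = [
    "%A, %d-%b-%y %H:%M:%S %Z",   # Sunday, 06-Nov-94 08:49:37 GMT
    "%a, %d %b %Y %H:%M:%S %Z",   # Sun, 06 Nov 1994 08:49:37 GMT
    "%d %b %Y %H:%M:%S",          # 06 Nov 1994 08:49:37
]

def parse_date(datestring):
    for fmt in FORMATS:
        try:
            return datetime.strptime(datestring, fmt)
        except ValueError:
            continue
    raise ValueError(f"unrecognized date: {datestring!r}")

print(parse_date("Sunday, 06-Nov-94 08:49:37 GMT"))
```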
diff --git a/man/parse_headers.Rd b/man/parse_headers.Rd
new file mode 100644
index 0000000..c983954
--- /dev/null
+++ b/man/parse_headers.Rd
@@ -0,0 +1,36 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/parse_headers.R
+\name{parse_headers}
+\alias{parse_headers}
+\alias{parse_headers_list}
+\title{Parse response headers}
+\usage{
+parse_headers(txt, multiple = FALSE)
+
+parse_headers_list(txt)
+}
+\arguments{
+\item{txt}{raw or character vector with the header data}
+
+\item{multiple}{parse multiple sets of headers separated by a blank line. See details.}
+}
+\description{
+Parse response header data as returned by curl_fetch, either as a set of strings
+or into a named list.
+}
+\details{
+The \code{parse_headers_list} function parses the headers into a normalized (lowercase
+field names, trimmed whitespace) named list.
+
+If a request has followed redirects, the data can contain multiple sets of headers.
+When \code{multiple = TRUE}, the function returns a list with the response headers
+for each request. By default it only returns the headers of the final request.
+}
+\examples{
+req <- curl_fetch_memory("https://httpbin.org/redirect/3")
+parse_headers(req$headers)
+parse_headers(req$headers, multiple = TRUE)
+
+# Parse into named list
+parse_headers_list(req$headers)
+}
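The normalization `parse_headers_list` performs (lowercased field names, trimmed whitespace, and keeping only the final set of headers after redirects) can be sketched directly:

```python
# Split raw header data on blank lines (one block per redirect hop),
# keep the last block, and normalize it into a dict.

def parse_headers_list(txt):
    blocks = [b for b in txt.replace("\r\n", "\n").split("\n\n") if b.strip()]
    headers = {}
    for line in blocks[-1].splitlines():   # headers of the final request
        if ":" not in line:
            continue                       # skip the HTTP status line
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return headers

raw = ("HTTP/1.1 302 Found\nLocation: /get\n\n"
       "HTTP/1.1 200 OK\nContent-Type: application/json\n")
print(parse_headers_list(raw))   # {'content-type': 'application/json'}
```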
diff --git a/src/Makevars.in b/src/Makevars.in
new file mode 100644
index 0000000..14e59cc
--- /dev/null
+++ b/src/Makevars.in
@@ -0,0 +1,7 @@
+PKG_CPPFLAGS=@cflags@
+PKG_LIBS=@libs@
+
+all: clean
+
+clean:
+	rm -f $(SHLIB) $(OBJECTS)
diff --git a/src/Makevars.win b/src/Makevars.win
new file mode 100644
index 0000000..cedf965
--- /dev/null
+++ b/src/Makevars.win
@@ -0,0 +1,34 @@
+# Set a default
+LIBCURL_BUILD ?= openssl
+#LIBCURL_BUILD ?= winssl
+
+# Switches between OpenSSL and SecureChannel builds of libcurl stack
+ifeq "${LIBCURL_BUILD}" "openssl"
+CURL_LIBS = -lcurl -lssh2 -lz -lssl -lcrypto -lgdi32 -lws2_32 -lcrypt32 -lwldap32
+else
+CURL_LIBS = -lcurl -lz -lws2_32 -lcrypt32 -lwldap32
+endif
+
+PKG_LIBS= -L../windows/libcurl-7.54.1/lib-${LIBCURL_BUILD}${R_ARCH} \
+	-L. -lwinhttp $(CURL_LIBS)
+
+PKG_CPPFLAGS= \
+	-I../windows/libcurl-7.54.1/include -DCURL_STATICLIB
+
+all: info clean winlibs libwinhttp.dll.a
+
+clean:
+	rm -f $(SHLIB) $(OBJECTS) libwinhttp.dll.a winhttp.def
+
+info:
+	@echo "Building curl with '$(LIBCURL_BUILD)' crypto backend."
+
+winlibs: clean
+	"${R_HOME}/bin${R_ARCH_BIN}/Rscript.exe" --vanilla "../tools/winlibs.R"
+	echo '#include <curl/curl.h>' | $(CPP) $(PKG_CPPFLAGS) -std=gnu99 -xc - | grep "^[ \t]*CURLOPT_.*," | sed s/,// > ../tools/option_table.txt
+
+winhttp.def:
+	cp winhttp$(WIN).def.in winhttp.def
+
+.PHONY: all winlibs clean
+
diff --git a/src/callbacks.c b/src/callbacks.c
new file mode 100644
index 0000000..43ce950
--- /dev/null
+++ b/src/callbacks.c
@@ -0,0 +1,81 @@
+#include "curl-common.h"
+
+int R_curl_callback_progress(SEXP fun,
+                             double dltotal, double dlnow,
+                             double ultotal, double ulnow) {
+
+  SEXP down = PROTECT(allocVector(REALSXP, 2));
+  REAL(down)[0] = dltotal;
+  REAL(down)[1] = dlnow;
+
+  SEXP up = PROTECT(allocVector(REALSXP, 2));
+  REAL(up)[0] = ultotal;
+  REAL(up)[1] = ulnow;
+
+  SEXP call = PROTECT(LCONS(fun, LCONS(down, LCONS(up, R_NilValue))));
+  int ok;
+  SEXP res = PROTECT(R_tryEval(call, R_GlobalEnv, &ok));
+
+  if (ok != 0) {
+    UNPROTECT(4);
+    return CURL_READFUNC_ABORT;
+  }
+
+  if (TYPEOF(res) != LGLSXP || length(res) != 1) {
+    UNPROTECT(4);
+    Rf_warning("progress callback must return boolean");
+    return 0;
+  }
+
+  UNPROTECT(4);
+  return !asLogical(res);
+}
+
+size_t R_curl_callback_read(char *buffer, size_t size, size_t nitems, SEXP fun) {
+  SEXP nbytes = PROTECT(ScalarInteger(size * nitems));
+  SEXP call = PROTECT(LCONS(fun, LCONS(nbytes, R_NilValue)));
+
+  int ok;
+  SEXP res = PROTECT(R_tryEval(call, R_GlobalEnv, &ok));
+
+  if (ok != 0) {
+    UNPROTECT(3);
+    return CURL_READFUNC_ABORT;
+  }
+
+  if (TYPEOF(res) != RAWSXP) {
+    UNPROTECT(3);
+    Rf_warning("read callback must return a raw vector");
+    return CURL_READFUNC_ABORT;
+  }
+
+  size_t bytes_read = length(res);
+  memcpy(buffer, RAW(res), bytes_read);
+
+  UNPROTECT(3);
+  return bytes_read;
+}
+
+int R_curl_callback_debug(CURL *handle, curl_infotype type_, char *data,
+                          size_t size, SEXP fun) {
+
+  /* wrap type and msg into R types */
+  SEXP type = PROTECT(ScalarInteger(type_));
+  SEXP msg = PROTECT(allocVector(RAWSXP, size));
+  memcpy(RAW(msg), data, size);
+
+  /* call the R function */
+  SEXP call = PROTECT(LCONS(fun, LCONS(type, LCONS(msg, R_NilValue))));
+  R_tryEval(call, R_GlobalEnv, NULL);
+
+  UNPROTECT(3);
+  // Debug function must always return 0
+  return 0;
+}
+
+
+int R_curl_callback_xferinfo(SEXP fun,
+                             curl_off_t  dltotal, curl_off_t  dlnow,
+                             curl_off_t  ultotal, curl_off_t  ulnow) {
+  return R_curl_callback_progress(fun, dltotal, dlnow, ultotal, ulnow);
+}
diff --git a/src/callbacks.h b/src/callbacks.h
new file mode 100644
index 0000000..3ea7752
--- /dev/null
+++ b/src/callbacks.h
@@ -0,0 +1,9 @@
+int R_curl_callback_progress(SEXP fun, double dltotal, double dlnow,
+  double ultotal, double ulnow);
+size_t R_curl_callback_read(char *buffer, size_t size, size_t nitems, SEXP fun);
+int R_curl_callback_debug(CURL *handle, curl_infotype type_, char *data,
+                          size_t size, SEXP fun);
+
+int R_curl_callback_xferinfo(SEXP fun,
+                             curl_off_t  dltotal, curl_off_t  dlnow,
+                             curl_off_t  ultotal, curl_off_t  ulnow);
diff --git a/src/curl-common.h b/src/curl-common.h
new file mode 100644
index 0000000..7487611
--- /dev/null
+++ b/src/curl-common.h
@@ -0,0 +1,65 @@
+#include <Rinternals.h>
+#include <curl/curl.h>
+#include <curl/easy.h>
+#include <string.h>
+#include <stdlib.h>
+
+#if LIBCURL_VERSION_MAJOR > 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR >= 28)
+#define HAS_MULTI_WAIT 1
+#endif
+
+typedef struct {
+  unsigned char *buf;
+  size_t size;
+} memory;
+
+typedef struct {
+  SEXP multiptr;
+  SEXP handles;
+  CURLM *m;
+} multiref;
+
+typedef struct {
+  multiref *mref;
+  struct refnode *node;
+  memory content;
+  SEXP complete;
+  SEXP error;
+} async;
+
+typedef struct {
+  SEXP handleptr;
+  CURL *handle;
+  struct curl_httppost *form;
+  struct curl_slist *headers;
+  char errbuf[CURL_ERROR_SIZE];
+  memory resheaders;
+  async async;
+  int refCount;
+  int locked;
+} reference;
+
+CURL* get_handle(SEXP ptr);
+reference* get_ref(SEXP ptr);
+void assert_status(CURLcode res, reference *ref);
+void assert(CURLcode res);
+void massert(CURLMcode res);
+void stop_for_status(CURL *http_handle);
+SEXP slist_to_vec(struct curl_slist *slist);
+struct curl_slist* vec_to_slist(SEXP vec);
+struct curl_httppost* make_form(SEXP form);
+void set_form(reference *ref, struct curl_httppost* newform);
+void set_headers(reference *ref, struct curl_slist *newheaders);
+void reset_resheaders(reference *ref);
+void reset_errbuf(reference *ref);
+void clean_handle(reference *ref);
+size_t push_disk(void* contents, size_t sz, size_t nmemb, FILE *ctx);
+size_t append_buffer(void *contents, size_t sz, size_t nmemb, void *ctx);
+CURLcode curl_perform_with_interrupt(CURL *handle);
+int pending_interrupt();
+SEXP make_handle_response(reference *ref);
+
+/* reflist.c */
+SEXP reflist_init();
+SEXP reflist_add(SEXP x, SEXP target);
+SEXP reflist_remove(SEXP x, SEXP target);
diff --git a/src/curl-symbols.h b/src/curl-symbols.h
new file mode 100644
index 0000000..fad68d2
--- /dev/null
+++ b/src/curl-symbols.h
@@ -0,0 +1,779 @@
+#include <curl/curl.h>
+
+#define LIBCURL_HAS(x) \
+  (defined(x ## _FIRST) && (x ## _FIRST <= LIBCURL_VERSION_NUM) && \
+   (!defined(x ## _LAST) || ( x ## _LAST >= LIBCURL_VERSION_NUM)))
+
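The hex constants throughout this table encode libcurl versions as `0xXXYYZZ` (major, minor, patch), the same packing as `LIBCURL_VERSION_NUM`, which is what the `LIBCURL_HAS` comparison relies on. A quick check of that encoding:

```python
# Pack a (major, minor, patch) libcurl version into the 0xXXYYZZ form
# used by LIBCURL_VERSION_NUM and the *_FIRST/*_LAST defines below.

def version_num(major, minor, patch=0):
    return (major << 16) | (minor << 8) | patch

assert version_num(7, 10, 6) == 0x070A06   # e.g. CURLAUTH_ANY_FIRST
assert version_num(7, 38, 0) == 0x072600   # e.g. CURLAUTH_NEGOTIATE_FIRST
print(hex(version_num(7, 19, 3)))          # 0x71303
```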
+#define CURLAUTH_ANY_FIRST 0x070a06 /* Added in 7.10.6 */
+#define CURLAUTH_ANYSAFE_FIRST 0x070a06 /* Added in 7.10.6 */
+#define CURLAUTH_BASIC_FIRST 0x070a06 /* Added in 7.10.6 */
+#define CURLAUTH_DIGEST_FIRST 0x070a06 /* Added in 7.10.6 */
+#define CURLAUTH_DIGEST_IE_FIRST 0x071303 /* Added in 7.19.3 */
+#define CURLAUTH_GSSNEGOTIATE_FIRST 0x070a06 /* Added in 7.10.6 */
+#define CURLAUTH_NEGOTIATE_FIRST 0x072600 /* Added in 7.38.0 */
+#define CURLAUTH_NONE_FIRST 0x070a06 /* Added in 7.10.6 */
+#define CURLAUTH_NTLM_FIRST 0x070a06 /* Added in 7.10.6 */
+#define CURLAUTH_NTLM_WB_FIRST 0x071600 /* Added in 7.22.0 */
+#define CURLAUTH_ONLY_FIRST 0x071503 /* Added in 7.21.3 */
+#define CURLCLOSEPOLICY_CALLBACK_FIRST 0x070700 /* Added in 7.7 */
+#define CURLCLOSEPOLICY_LEAST_RECENTLY_USED_FIRST 0x070700 /* Added in 7.7 */
+#define CURLCLOSEPOLICY_LEAST_TRAFFIC_FIRST 0x070700 /* Added in 7.7 */
+#define CURLCLOSEPOLICY_NONE_FIRST 0x070700 /* Added in 7.7 */
+#define CURLCLOSEPOLICY_OLDEST_FIRST 0x070700 /* Added in 7.7 */
+#define CURLCLOSEPOLICY_SLOWEST_FIRST 0x070700 /* Added in 7.7 */
+#define CURLE_ABORTED_BY_CALLBACK_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_AGAIN_FIRST 0x071202 /* Added in 7.18.2 */
+#define CURLE_ALREADY_COMPLETE_FIRST 0x070702 /* Added in 7.7.2 */
+#define CURLE_BAD_CALLING_ORDER_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_BAD_CONTENT_ENCODING_FIRST 0x070a00 /* Added in 7.10 */
+#define CURLE_BAD_DOWNLOAD_RESUME_FIRST 0x070a00 /* Added in 7.10 */
+#define CURLE_BAD_FUNCTION_ARGUMENT_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_BAD_PASSWORD_ENTERED_FIRST 0x070402 /* Added in 7.4.2 */
+#define CURLE_CHUNK_FAILED_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLE_CONV_FAILED_FIRST 0x070f04 /* Added in 7.15.4 */
+#define CURLE_CONV_REQD_FIRST 0x070f04 /* Added in 7.15.4 */
+#define CURLE_COULDNT_CONNECT_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_COULDNT_RESOLVE_HOST_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_COULDNT_RESOLVE_PROXY_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FAILED_INIT_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FILESIZE_EXCEEDED_FIRST 0x070a08 /* Added in 7.10.8 */
+#define CURLE_FILE_COULDNT_READ_FILE_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FTP_ACCEPT_FAILED_FIRST 0x071800 /* Added in 7.24.0 */
+#define CURLE_FTP_ACCEPT_TIMEOUT_FIRST 0x071800 /* Added in 7.24.0 */
+#define CURLE_FTP_ACCESS_DENIED_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FTP_BAD_DOWNLOAD_RESUME_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FTP_BAD_FILE_LIST_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLE_FTP_CANT_GET_HOST_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FTP_CANT_RECONNECT_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FTP_COULDNT_GET_SIZE_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FTP_COULDNT_RETR_FILE_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FTP_COULDNT_SET_ASCII_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FTP_COULDNT_SET_BINARY_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FTP_COULDNT_SET_TYPE_FIRST 0x071100 /* Added in 7.17.0 */
+#define CURLE_FTP_COULDNT_STOR_FILE_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FTP_COULDNT_USE_REST_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FTP_PARTIAL_FILE_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FTP_PORT_FAILED_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FTP_PRET_FAILED_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLE_FTP_QUOTE_ERROR_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FTP_SSL_FAILED_FIRST 0x070b00 /* Added in 7.11.0 */
+#define CURLE_FTP_USER_PASSWORD_INCORRECT_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FTP_WEIRD_227_FORMAT_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FTP_WEIRD_PASS_REPLY_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FTP_WEIRD_PASV_REPLY_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FTP_WEIRD_SERVER_REPLY_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FTP_WEIRD_USER_REPLY_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FTP_WRITE_ERROR_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_FUNCTION_NOT_FOUND_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_GOT_NOTHING_FIRST 0x070901 /* Added in 7.9.1 */
+#define CURLE_HTTP2_FIRST 0x072600 /* Added in 7.38.0 */
+#define CURLE_HTTP_NOT_FOUND_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_HTTP_PORT_FAILED_FIRST 0x070300 /* Added in 7.3 */
+#define CURLE_HTTP_POST_ERROR_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_HTTP_RANGE_ERROR_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_HTTP_RETURNED_ERROR_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURLE_INTERFACE_FAILED_FIRST 0x070c00 /* Added in 7.12.0 */
+#define CURLE_LDAP_CANNOT_BIND_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_LDAP_INVALID_URL_FIRST 0x070a08 /* Added in 7.10.8 */
+#define CURLE_LDAP_SEARCH_FAILED_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_LIBRARY_NOT_FOUND_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_LOGIN_DENIED_FIRST 0x070d01 /* Added in 7.13.1 */
+#define CURLE_MALFORMAT_USER_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_NOT_BUILT_IN_FIRST 0x071505 /* Added in 7.21.5 */
+#define CURLE_NO_CONNECTION_AVAILABLE_FIRST 0x071e00 /* Added in 7.30.0 */
+#define CURLE_OK_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_OPERATION_TIMEDOUT_FIRST 0x070a02 /* Added in 7.10.2 */
+#define CURLE_OPERATION_TIMEOUTED_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_OUT_OF_MEMORY_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_PARTIAL_FILE_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_PEER_FAILED_VERIFICATION_FIRST 0x071101 /* Added in 7.17.1 */
+#define CURLE_QUOTE_ERROR_FIRST 0x071100 /* Added in 7.17.0 */
+#define CURLE_RANGE_ERROR_FIRST 0x071100 /* Added in 7.17.0 */
+#define CURLE_READ_ERROR_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_RECV_ERROR_FIRST 0x070a00 /* Added in 7.10 */
+#define CURLE_REMOTE_ACCESS_DENIED_FIRST 0x071100 /* Added in 7.17.0 */
+#define CURLE_REMOTE_DISK_FULL_FIRST 0x071100 /* Added in 7.17.0 */
+#define CURLE_REMOTE_FILE_EXISTS_FIRST 0x071100 /* Added in 7.17.0 */
+#define CURLE_REMOTE_FILE_NOT_FOUND_FIRST 0x071001 /* Added in 7.16.1 */
+#define CURLE_RTSP_CSEQ_ERROR_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLE_RTSP_SESSION_ERROR_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLE_SEND_ERROR_FIRST 0x070a00 /* Added in 7.10 */
+#define CURLE_SEND_FAIL_REWIND_FIRST 0x070c03 /* Added in 7.12.3 */
+#define CURLE_SHARE_IN_USE_FIRST 0x070906 /* Added in 7.9.6 */
+#define CURLE_SSH_FIRST 0x071001 /* Added in 7.16.1 */
+#define CURLE_SSL_CACERT_FIRST 0x070a00 /* Added in 7.10 */
+#define CURLE_SSL_CACERT_BADFILE_FIRST 0x071000 /* Added in 7.16.0 */
+#define CURLE_SSL_CERTPROBLEM_FIRST 0x070a00 /* Added in 7.10 */
+#define CURLE_SSL_CIPHER_FIRST 0x070a00 /* Added in 7.10 */
+#define CURLE_SSL_CONNECT_ERROR_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_SSL_CRL_BADFILE_FIRST 0x071300 /* Added in 7.19.0 */
+#define CURLE_SSL_ENGINE_INITFAILED_FIRST 0x070c03 /* Added in 7.12.3 */
+#define CURLE_SSL_ENGINE_NOTFOUND_FIRST 0x070903 /* Added in 7.9.3 */
+#define CURLE_SSL_ENGINE_SETFAILED_FIRST 0x070903 /* Added in 7.9.3 */
+#define CURLE_SSL_INVALIDCERTSTATUS_FIRST 0x072900 /* Added in 7.41.0 */
+#define CURLE_SSL_ISSUER_ERROR_FIRST 0x071300 /* Added in 7.19.0 */
+#define CURLE_SSL_PEER_CERTIFICATE_FIRST 0x070800 /* Added in 7.8 */
+#define CURLE_SSL_PINNEDPUBKEYNOTMATCH_FIRST 0x072700 /* Added in 7.39.0 */
+#define CURLE_SSL_SHUTDOWN_FAILED_FIRST 0x071001 /* Added in 7.16.1 */
+#define CURLE_TELNET_OPTION_SYNTAX_FIRST 0x070700 /* Added in 7.7 */
+#define CURLE_TFTP_DISKFULL_FIRST 0x070f00 /* Added in 7.15.0 */
+#define CURLE_TFTP_EXISTS_FIRST 0x070f00 /* Added in 7.15.0 */
+#define CURLE_TFTP_ILLEGAL_FIRST 0x070f00 /* Added in 7.15.0 */
+#define CURLE_TFTP_NOSUCHUSER_FIRST 0x070f00 /* Added in 7.15.0 */
+#define CURLE_TFTP_NOTFOUND_FIRST 0x070f00 /* Added in 7.15.0 */
+#define CURLE_TFTP_PERM_FIRST 0x070f00 /* Added in 7.15.0 */
+#define CURLE_TFTP_UNKNOWNID_FIRST 0x070f00 /* Added in 7.15.0 */
+#define CURLE_TOO_MANY_REDIRECTS_FIRST 0x070500 /* Added in 7.5 */
+#define CURLE_UNKNOWN_OPTION_FIRST 0x071505 /* Added in 7.21.5 */
+#define CURLE_UNKNOWN_TELNET_OPTION_FIRST 0x070700 /* Added in 7.7 */
+#define CURLE_UNSUPPORTED_PROTOCOL_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_UPLOAD_FAILED_FIRST 0x071003 /* Added in 7.16.3 */
+#define CURLE_URL_MALFORMAT_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_URL_MALFORMAT_USER_FIRST 0x070100 /* Added in 7.1 */
+#define CURLE_USE_SSL_FAILED_FIRST 0x071100 /* Added in 7.17.0 */
+#define CURLE_WRITE_ERROR_FIRST 0x070100 /* Added in 7.1 */
+#define CURLFILETYPE_DEVICE_BLOCK_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLFILETYPE_DEVICE_CHAR_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLFILETYPE_DIRECTORY_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLFILETYPE_DOOR_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLFILETYPE_FILE_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLFILETYPE_NAMEDPIPE_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLFILETYPE_SOCKET_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLFILETYPE_SYMLINK_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLFILETYPE_UNKNOWN_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLFINFOFLAG_KNOWN_FILENAME_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLFINFOFLAG_KNOWN_FILETYPE_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLFINFOFLAG_KNOWN_GID_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLFINFOFLAG_KNOWN_HLINKCOUNT_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLFINFOFLAG_KNOWN_PERM_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLFINFOFLAG_KNOWN_SIZE_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLFINFOFLAG_KNOWN_TIME_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLFINFOFLAG_KNOWN_UID_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLFORM_ARRAY_FIRST 0x070901 /* Added in 7.9.1 */
+#define CURLFORM_ARRAY_END_FIRST 0x070901 /* Added in 7.9.1 */
+#define CURLFORM_ARRAY_END_LAST 0x070906 /* Last featured in 7.9.6 */
+#define CURLFORM_ARRAY_START_FIRST 0x070901 /* Added in 7.9.1 */
+#define CURLFORM_ARRAY_START_LAST 0x070906 /* Last featured in 7.9.6 */
+#define CURLFORM_BUFFER_FIRST 0x070908 /* Added in 7.9.8 */
+#define CURLFORM_BUFFERLENGTH_FIRST 0x070908 /* Added in 7.9.8 */
+#define CURLFORM_BUFFERPTR_FIRST 0x070908 /* Added in 7.9.8 */
+#define CURLFORM_CONTENTHEADER_FIRST 0x070903 /* Added in 7.9.3 */
+#define CURLFORM_CONTENTSLENGTH_FIRST 0x070900 /* Added in 7.9 */
+#define CURLFORM_CONTENTTYPE_FIRST 0x070900 /* Added in 7.9 */
+#define CURLFORM_COPYCONTENTS_FIRST 0x070900 /* Added in 7.9 */
+#define CURLFORM_COPYNAME_FIRST 0x070900 /* Added in 7.9 */
+#define CURLFORM_END_FIRST 0x070900 /* Added in 7.9 */
+#define CURLFORM_FILE_FIRST 0x070900 /* Added in 7.9 */
+#define CURLFORM_FILECONTENT_FIRST 0x070901 /* Added in 7.9.1 */
+#define CURLFORM_FILENAME_FIRST 0x070906 /* Added in 7.9.6 */
+#define CURLFORM_NAMELENGTH_FIRST 0x070900 /* Added in 7.9 */
+#define CURLFORM_NOTHING_FIRST 0x070900 /* Added in 7.9 */
+#define CURLFORM_PTRCONTENTS_FIRST 0x070900 /* Added in 7.9 */
+#define CURLFORM_PTRNAME_FIRST 0x070900 /* Added in 7.9 */
+#define CURLFORM_STREAM_FIRST 0x071202 /* Added in 7.18.2 */
+#define CURLFTPAUTH_DEFAULT_FIRST 0x070c02 /* Added in 7.12.2 */
+#define CURLFTPAUTH_SSL_FIRST 0x070c02 /* Added in 7.12.2 */
+#define CURLFTPAUTH_TLS_FIRST 0x070c02 /* Added in 7.12.2 */
+#define CURLFTPMETHOD_DEFAULT_FIRST 0x070f03 /* Added in 7.15.3 */
+#define CURLFTPMETHOD_MULTICWD_FIRST 0x070f03 /* Added in 7.15.3 */
+#define CURLFTPMETHOD_NOCWD_FIRST 0x070f03 /* Added in 7.15.3 */
+#define CURLFTPMETHOD_SINGLECWD_FIRST 0x070f03 /* Added in 7.15.3 */
+#define CURLFTPSSL_ALL_FIRST 0x070b00 /* Added in 7.11.0 */
+#define CURLFTPSSL_CCC_ACTIVE_FIRST 0x071002 /* Added in 7.16.2 */
+#define CURLFTPSSL_CCC_NONE_FIRST 0x071002 /* Added in 7.16.2 */
+#define CURLFTPSSL_CCC_PASSIVE_FIRST 0x071001 /* Added in 7.16.1 */
+#define CURLFTPSSL_CONTROL_FIRST 0x070b00 /* Added in 7.11.0 */
+#define CURLFTPSSL_NONE_FIRST 0x070b00 /* Added in 7.11.0 */
+#define CURLFTPSSL_TRY_FIRST 0x070b00 /* Added in 7.11.0 */
+#define CURLFTP_CREATE_DIR_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLFTP_CREATE_DIR_NONE_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLFTP_CREATE_DIR_RETRY_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLGSSAPI_DELEGATION_FLAG_FIRST 0x071600 /* Added in 7.22.0 */
+#define CURLGSSAPI_DELEGATION_NONE_FIRST 0x071600 /* Added in 7.22.0 */
+#define CURLGSSAPI_DELEGATION_POLICY_FLAG_FIRST 0x071600 /* Added in 7.22.0 */
+#define CURLHEADER_SEPARATE_FIRST 0x072500 /* Added in 7.37.0 */
+#define CURLHEADER_UNIFIED_FIRST 0x072500 /* Added in 7.37.0 */
+#define CURLINFO_APPCONNECT_TIME_FIRST 0x071300 /* Added in 7.19.0 */
+#define CURLINFO_CERTINFO_FIRST 0x071301 /* Added in 7.19.1 */
+#define CURLINFO_CONDITION_UNMET_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLINFO_CONNECT_TIME_FIRST 0x070401 /* Added in 7.4.1 */
+#define CURLINFO_CONTENT_LENGTH_DOWNLOAD_FIRST 0x070601 /* Added in 7.6.1 */
+#define CURLINFO_CONTENT_LENGTH_UPLOAD_FIRST 0x070601 /* Added in 7.6.1 */
+#define CURLINFO_CONTENT_TYPE_FIRST 0x070904 /* Added in 7.9.4 */
+#define CURLINFO_COOKIELIST_FIRST 0x070e01 /* Added in 7.14.1 */
+#define CURLINFO_DATA_IN_FIRST 0x070906 /* Added in 7.9.6 */
+#define CURLINFO_DATA_OUT_FIRST 0x070906 /* Added in 7.9.6 */
+#define CURLINFO_DOUBLE_FIRST 0x070401 /* Added in 7.4.1 */
+#define CURLINFO_EFFECTIVE_URL_FIRST 0x070400 /* Added in 7.4 */
+#define CURLINFO_END_FIRST 0x070906 /* Added in 7.9.6 */
+#define CURLINFO_FILETIME_FIRST 0x070500 /* Added in 7.5 */
+#define CURLINFO_FTP_ENTRY_PATH_FIRST 0x070f04 /* Added in 7.15.4 */
+#define CURLINFO_HEADER_IN_FIRST 0x070906 /* Added in 7.9.6 */
+#define CURLINFO_HEADER_OUT_FIRST 0x070906 /* Added in 7.9.6 */
+#define CURLINFO_HEADER_SIZE_FIRST 0x070401 /* Added in 7.4.1 */
+#define CURLINFO_HTTPAUTH_AVAIL_FIRST 0x070a08 /* Added in 7.10.8 */
+#define CURLINFO_HTTP_CODE_FIRST 0x070401 /* Added in 7.4.1 */
+#define CURLINFO_HTTP_CONNECTCODE_FIRST 0x070a07 /* Added in 7.10.7 */
+#define CURLINFO_LASTONE_FIRST 0x070401 /* Added in 7.4.1 */
+#define CURLINFO_LASTSOCKET_FIRST 0x070f02 /* Added in 7.15.2 */
+#define CURLINFO_LOCAL_IP_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLINFO_LOCAL_PORT_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLINFO_LONG_FIRST 0x070401 /* Added in 7.4.1 */
+#define CURLINFO_MASK_FIRST 0x070401 /* Added in 7.4.1 */
+#define CURLINFO_NAMELOOKUP_TIME_FIRST 0x070401 /* Added in 7.4.1 */
+#define CURLINFO_NONE_FIRST 0x070401 /* Added in 7.4.1 */
+#define CURLINFO_NUM_CONNECTS_FIRST 0x070c03 /* Added in 7.12.3 */
+#define CURLINFO_OS_ERRNO_FIRST 0x070c02 /* Added in 7.12.2 */
+#define CURLINFO_PRETRANSFER_TIME_FIRST 0x070401 /* Added in 7.4.1 */
+#define CURLINFO_PRIMARY_IP_FIRST 0x071300 /* Added in 7.19.0 */
+#define CURLINFO_PRIMARY_PORT_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLINFO_PRIVATE_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURLINFO_PROXYAUTH_AVAIL_FIRST 0x070a08 /* Added in 7.10.8 */
+#define CURLINFO_REDIRECT_COUNT_FIRST 0x070907 /* Added in 7.9.7 */
+#define CURLINFO_REDIRECT_TIME_FIRST 0x070907 /* Added in 7.9.7 */
+#define CURLINFO_REDIRECT_URL_FIRST 0x071202 /* Added in 7.18.2 */
+#define CURLINFO_REQUEST_SIZE_FIRST 0x070401 /* Added in 7.4.1 */
+#define CURLINFO_RESPONSE_CODE_FIRST 0x070a08 /* Added in 7.10.8 */
+#define CURLINFO_RTSP_CLIENT_CSEQ_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLINFO_RTSP_CSEQ_RECV_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLINFO_RTSP_SERVER_CSEQ_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLINFO_RTSP_SESSION_ID_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLINFO_SIZE_DOWNLOAD_FIRST 0x070401 /* Added in 7.4.1 */
+#define CURLINFO_SIZE_UPLOAD_FIRST 0x070401 /* Added in 7.4.1 */
+#define CURLINFO_SLIST_FIRST 0x070c03 /* Added in 7.12.3 */
+#define CURLINFO_SPEED_DOWNLOAD_FIRST 0x070401 /* Added in 7.4.1 */
+#define CURLINFO_SPEED_UPLOAD_FIRST 0x070401 /* Added in 7.4.1 */
+#define CURLINFO_SSL_DATA_IN_FIRST 0x070c01 /* Added in 7.12.1 */
+#define CURLINFO_SSL_DATA_OUT_FIRST 0x070c01 /* Added in 7.12.1 */
+#define CURLINFO_SSL_ENGINES_FIRST 0x070c03 /* Added in 7.12.3 */
+#define CURLINFO_SSL_VERIFYRESULT_FIRST 0x070500 /* Added in 7.5 */
+#define CURLINFO_STARTTRANSFER_TIME_FIRST 0x070902 /* Added in 7.9.2 */
+#define CURLINFO_STRING_FIRST 0x070401 /* Added in 7.4.1 */
+#define CURLINFO_TEXT_FIRST 0x070906 /* Added in 7.9.6 */
+#define CURLINFO_TLS_SESSION_FIRST 0x072200 /* Added in 7.34.0 */
+#define CURLINFO_TOTAL_TIME_FIRST 0x070401 /* Added in 7.4.1 */
+#define CURLINFO_TYPEMASK_FIRST 0x070401 /* Added in 7.4.1 */
+#define CURLIOCMD_NOP_FIRST 0x070c03 /* Added in 7.12.3 */
+#define CURLIOCMD_RESTARTREAD_FIRST 0x070c03 /* Added in 7.12.3 */
+#define CURLIOE_FAILRESTART_FIRST 0x070c03 /* Added in 7.12.3 */
+#define CURLIOE_OK_FIRST 0x070c03 /* Added in 7.12.3 */
+#define CURLIOE_UNKNOWNCMD_FIRST 0x070c03 /* Added in 7.12.3 */
+#define CURLKHMATCH_MISMATCH_FIRST 0x071306 /* Added in 7.19.6 */
+#define CURLKHMATCH_MISSING_FIRST 0x071306 /* Added in 7.19.6 */
+#define CURLKHMATCH_OK_FIRST 0x071306 /* Added in 7.19.6 */
+#define CURLKHSTAT_DEFER_FIRST 0x071306 /* Added in 7.19.6 */
+#define CURLKHSTAT_FINE_FIRST 0x071306 /* Added in 7.19.6 */
+#define CURLKHSTAT_FINE_ADD_TO_FILE_FIRST 0x071306 /* Added in 7.19.6 */
+#define CURLKHSTAT_REJECT_FIRST 0x071306 /* Added in 7.19.6 */
+#define CURLKHTYPE_DSS_FIRST 0x071306 /* Added in 7.19.6 */
+#define CURLKHTYPE_RSA_FIRST 0x071306 /* Added in 7.19.6 */
+#define CURLKHTYPE_RSA1_FIRST 0x071306 /* Added in 7.19.6 */
+#define CURLKHTYPE_UNKNOWN_FIRST 0x071306 /* Added in 7.19.6 */
+#define CURLMOPT_CHUNK_LENGTH_PENALTY_SIZE_FIRST 0x071e00 /* Added in 7.30.0 */
+#define CURLMOPT_CONTENT_LENGTH_PENALTY_SIZE_FIRST 0x071e00 /* Added in 7.30.0 */
+#define CURLMOPT_MAXCONNECTS_FIRST 0x071003 /* Added in 7.16.3 */
+#define CURLMOPT_MAX_HOST_CONNECTIONS_FIRST 0x071e00 /* Added in 7.30.0 */
+#define CURLMOPT_MAX_PIPELINE_LENGTH_FIRST 0x071e00 /* Added in 7.30.0 */
+#define CURLMOPT_MAX_TOTAL_CONNECTIONS_FIRST 0x071e00 /* Added in 7.30.0 */
+#define CURLMOPT_PIPELINING_FIRST 0x071000 /* Added in 7.16.0 */
+#define CURLMOPT_PIPELINING_SERVER_BL_FIRST 0x071e00 /* Added in 7.30.0 */
+#define CURLMOPT_PIPELINING_SITE_BL_FIRST 0x071e00 /* Added in 7.30.0 */
+#define CURLMOPT_SOCKETDATA_FIRST 0x070f04 /* Added in 7.15.4 */
+#define CURLMOPT_SOCKETFUNCTION_FIRST 0x070f04 /* Added in 7.15.4 */
+#define CURLMOPT_TIMERDATA_FIRST 0x071000 /* Added in 7.16.0 */
+#define CURLMOPT_TIMERFUNCTION_FIRST 0x071000 /* Added in 7.16.0 */
+#define CURLMSG_DONE_FIRST 0x070906 /* Added in 7.9.6 */
+#define CURLMSG_NONE_FIRST 0x070906 /* Added in 7.9.6 */
+#define CURLM_ADDED_ALREADY_FIRST 0x072001 /* Added in 7.32.1 */
+#define CURLM_BAD_EASY_HANDLE_FIRST 0x070906 /* Added in 7.9.6 */
+#define CURLM_BAD_HANDLE_FIRST 0x070906 /* Added in 7.9.6 */
+#define CURLM_BAD_SOCKET_FIRST 0x070f04 /* Added in 7.15.4 */
+#define CURLM_CALL_MULTI_PERFORM_FIRST 0x070906 /* Added in 7.9.6 */
+#define CURLM_CALL_MULTI_SOCKET_FIRST 0x070f05 /* Added in 7.15.5 */
+#define CURLM_INTERNAL_ERROR_FIRST 0x070906 /* Added in 7.9.6 */
+#define CURLM_OK_FIRST 0x070906 /* Added in 7.9.6 */
+#define CURLM_OUT_OF_MEMORY_FIRST 0x070906 /* Added in 7.9.6 */
+#define CURLM_UNKNOWN_OPTION_FIRST 0x070f04 /* Added in 7.15.4 */
+#define CURLOPTTYPE_FUNCTIONPOINT_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPTTYPE_LONG_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPTTYPE_OBJECTPOINT_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPTTYPE_OFF_T_FIRST 0x070b00 /* Added in 7.11.0 */
+#define CURLOPT_ACCEPTTIMEOUT_MS_FIRST 0x071800 /* Added in 7.24.0 */
+#define CURLOPT_ACCEPT_ENCODING_FIRST 0x071506 /* Added in 7.21.6 */
+#define CURLOPT_ADDRESS_SCOPE_FIRST 0x071300 /* Added in 7.19.0 */
+#define CURLOPT_APPEND_FIRST 0x071100 /* Added in 7.17.0 */
+#define CURLOPT_AUTOREFERER_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_BUFFERSIZE_FIRST 0x070a00 /* Added in 7.10 */
+#define CURLOPT_CAINFO_FIRST 0x070402 /* Added in 7.4.2 */
+#define CURLOPT_CAPATH_FIRST 0x070908 /* Added in 7.9.8 */
+#define CURLOPT_CERTINFO_FIRST 0x071301 /* Added in 7.19.1 */
+#define CURLOPT_CHUNK_BGN_FUNCTION_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLOPT_CHUNK_DATA_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLOPT_CHUNK_END_FUNCTION_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLOPT_CLOSEFUNCTION_FIRST 0x070700 /* Added in 7.7 */
+#define CURLOPT_CLOSEFUNCTION_LAST 0x070f05 /* Last featured in 7.15.5 */
+#define CURLOPT_CLOSEPOLICY_FIRST 0x070700 /* Added in 7.7 */
+#define CURLOPT_CLOSESOCKETDATA_FIRST 0x071507 /* Added in 7.21.7 */
+#define CURLOPT_CLOSESOCKETFUNCTION_FIRST 0x071507 /* Added in 7.21.7 */
+#define CURLOPT_CONNECTTIMEOUT_FIRST 0x070700 /* Added in 7.7 */
+#define CURLOPT_CONNECTTIMEOUT_MS_FIRST 0x071002 /* Added in 7.16.2 */
+#define CURLOPT_CONNECT_ONLY_FIRST 0x070f02 /* Added in 7.15.2 */
+#define CURLOPT_CONV_FROM_NETWORK_FUNCTION_FIRST 0x070f04 /* Added in 7.15.4 */
+#define CURLOPT_CONV_FROM_UTF8_FUNCTION_FIRST 0x070f04 /* Added in 7.15.4 */
+#define CURLOPT_CONV_TO_NETWORK_FUNCTION_FIRST 0x070f04 /* Added in 7.15.4 */
+#define CURLOPT_COOKIE_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_COOKIEFILE_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_COOKIEJAR_FIRST 0x070900 /* Added in 7.9 */
+#define CURLOPT_COOKIELIST_FIRST 0x070e01 /* Added in 7.14.1 */
+#define CURLOPT_COOKIESESSION_FIRST 0x070907 /* Added in 7.9.7 */
+#define CURLOPT_COPYPOSTFIELDS_FIRST 0x071101 /* Added in 7.17.1 */
+#define CURLOPT_CRLF_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_CRLFILE_FIRST 0x071300 /* Added in 7.19.0 */
+#define CURLOPT_CUSTOMREQUEST_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_DEBUGDATA_FIRST 0x070906 /* Added in 7.9.6 */
+#define CURLOPT_DEBUGFUNCTION_FIRST 0x070906 /* Added in 7.9.6 */
+#define CURLOPT_DIRLISTONLY_FIRST 0x071100 /* Added in 7.17.0 */
+#define CURLOPT_DNS_CACHE_TIMEOUT_FIRST 0x070903 /* Added in 7.9.3 */
+#define CURLOPT_DNS_INTERFACE_FIRST 0x072100 /* Added in 7.33.0 */
+#define CURLOPT_DNS_LOCAL_IP4_FIRST 0x072100 /* Added in 7.33.0 */
+#define CURLOPT_DNS_LOCAL_IP6_FIRST 0x072100 /* Added in 7.33.0 */
+#define CURLOPT_DNS_SERVERS_FIRST 0x071800 /* Added in 7.24.0 */
+#define CURLOPT_DNS_USE_GLOBAL_CACHE_FIRST 0x070903 /* Added in 7.9.3 */
+#define CURLOPT_EGDSOCKET_FIRST 0x070700 /* Added in 7.7 */
+#define CURLOPT_ENCODING_FIRST 0x070a00 /* Added in 7.10 */
+#define CURLOPT_ERRORBUFFER_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_EXPECT_100_TIMEOUT_MS_FIRST 0x072400 /* Added in 7.36.0 */
+#define CURLOPT_FAILONERROR_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_FILE_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_FILETIME_FIRST 0x070500 /* Added in 7.5 */
+#define CURLOPT_FNMATCH_DATA_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLOPT_FNMATCH_FUNCTION_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLOPT_FOLLOWLOCATION_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_FORBID_REUSE_FIRST 0x070700 /* Added in 7.7 */
+#define CURLOPT_FRESH_CONNECT_FIRST 0x070700 /* Added in 7.7 */
+#define CURLOPT_FTPAPPEND_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_FTPASCII_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_FTPASCII_LAST 0x070f05 /* Last featured in 7.15.5 */
+#define CURLOPT_FTPLISTONLY_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_FTPPORT_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_FTPSSLAUTH_FIRST 0x070c02 /* Added in 7.12.2 */
+#define CURLOPT_FTP_ACCOUNT_FIRST 0x070d00 /* Added in 7.13.0 */
+#define CURLOPT_FTP_ALTERNATIVE_TO_USER_FIRST 0x070f05 /* Added in 7.15.5 */
+#define CURLOPT_FTP_CREATE_MISSING_DIRS_FIRST 0x070a07 /* Added in 7.10.7 */
+#define CURLOPT_FTP_FILEMETHOD_FIRST 0x070f01 /* Added in 7.15.1 */
+#define CURLOPT_FTP_RESPONSE_TIMEOUT_FIRST 0x070a08 /* Added in 7.10.8 */
+#define CURLOPT_FTP_SKIP_PASV_IP_FIRST 0x070f00 /* Added in 7.15.0 */
+#define CURLOPT_FTP_SSL_FIRST 0x070b00 /* Added in 7.11.0 */
+#define CURLOPT_FTP_SSL_CCC_FIRST 0x071001 /* Added in 7.16.1 */
+#define CURLOPT_FTP_USE_EPRT_FIRST 0x070a05 /* Added in 7.10.5 */
+#define CURLOPT_FTP_USE_EPSV_FIRST 0x070902 /* Added in 7.9.2 */
+#define CURLOPT_FTP_USE_PRET_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLOPT_GSSAPI_DELEGATION_FIRST 0x071600 /* Added in 7.22.0 */
+#define CURLOPT_HEADER_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_HEADERDATA_FIRST 0x070a00 /* Added in 7.10 */
+#define CURLOPT_HEADERFUNCTION_FIRST 0x070702 /* Added in 7.7.2 */
+#define CURLOPT_HEADEROPT_FIRST 0x072500 /* Added in 7.37.0 */
+#define CURLOPT_HTTP200ALIASES_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURLOPT_HTTPAUTH_FIRST 0x070a06 /* Added in 7.10.6 */
+#define CURLOPT_HTTPGET_FIRST 0x070801 /* Added in 7.8.1 */
+#define CURLOPT_HTTPHEADER_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_HTTPPOST_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_HTTPPROXYTUNNEL_FIRST 0x070300 /* Added in 7.3 */
+#define CURLOPT_HTTPREQUEST_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_HTTPREQUEST_LAST 0x070f05 /* Last featured in 7.15.5 */
+#define CURLOPT_HTTP_CONTENT_DECODING_FIRST 0x071002 /* Added in 7.16.2 */
+#define CURLOPT_HTTP_TRANSFER_DECODING_FIRST 0x071002 /* Added in 7.16.2 */
+#define CURLOPT_HTTP_VERSION_FIRST 0x070901 /* Added in 7.9.1 */
+#define CURLOPT_IGNORE_CONTENT_LENGTH_FIRST 0x070e01 /* Added in 7.14.1 */
+#define CURLOPT_INFILE_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_INFILESIZE_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_INFILESIZE_LARGE_FIRST 0x070b00 /* Added in 7.11.0 */
+#define CURLOPT_INTERFACE_FIRST 0x070300 /* Added in 7.3 */
+#define CURLOPT_INTERLEAVEDATA_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLOPT_INTERLEAVEFUNCTION_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLOPT_IOCTLDATA_FIRST 0x070c03 /* Added in 7.12.3 */
+#define CURLOPT_IOCTLFUNCTION_FIRST 0x070c03 /* Added in 7.12.3 */
+#define CURLOPT_IPRESOLVE_FIRST 0x070a08 /* Added in 7.10.8 */
+#define CURLOPT_ISSUERCERT_FIRST 0x071300 /* Added in 7.19.0 */
+#define CURLOPT_KEYPASSWD_FIRST 0x071100 /* Added in 7.17.0 */
+#define CURLOPT_KRB4LEVEL_FIRST 0x070300 /* Added in 7.3 */
+#define CURLOPT_KRBLEVEL_FIRST 0x071004 /* Added in 7.16.4 */
+#define CURLOPT_LOCALPORT_FIRST 0x070f02 /* Added in 7.15.2 */
+#define CURLOPT_LOCALPORTRANGE_FIRST 0x070f02 /* Added in 7.15.2 */
+#define CURLOPT_LOGIN_OPTIONS_FIRST 0x072200 /* Added in 7.34.0 */
+#define CURLOPT_LOW_SPEED_LIMIT_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_LOW_SPEED_TIME_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_MAIL_AUTH_FIRST 0x071900 /* Added in 7.25.0 */
+#define CURLOPT_MAIL_FROM_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLOPT_MAIL_RCPT_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLOPT_MAXCONNECTS_FIRST 0x070700 /* Added in 7.7 */
+#define CURLOPT_MAXFILESIZE_FIRST 0x070a08 /* Added in 7.10.8 */
+#define CURLOPT_MAXFILESIZE_LARGE_FIRST 0x070b00 /* Added in 7.11.0 */
+#define CURLOPT_MAXREDIRS_FIRST 0x070500 /* Added in 7.5 */
+#define CURLOPT_MAX_RECV_SPEED_LARGE_FIRST 0x070f05 /* Added in 7.15.5 */
+#define CURLOPT_MAX_SEND_SPEED_LARGE_FIRST 0x070f05 /* Added in 7.15.5 */
+#define CURLOPT_MUTE_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_MUTE_LAST 0x070f05 /* Last featured in 7.15.5 */
+#define CURLOPT_NETRC_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_NETRC_FILE_FIRST 0x070b00 /* Added in 7.11.0 */
+#define CURLOPT_NEW_DIRECTORY_PERMS_FIRST 0x071004 /* Added in 7.16.4 */
+#define CURLOPT_NEW_FILE_PERMS_FIRST 0x071004 /* Added in 7.16.4 */
+#define CURLOPT_NOBODY_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_NOPROGRESS_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_NOPROXY_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLOPT_NOSIGNAL_FIRST 0x070a00 /* Added in 7.10 */
+#define CURLOPT_NOTHING_FIRST 0x070101 /* Added in 7.1.1 */
+#define CURLOPT_NOTHING_LAST 0x070b00 /* Last featured in 7.11.0 */
+#define CURLOPT_OPENSOCKETDATA_FIRST 0x071101 /* Added in 7.17.1 */
+#define CURLOPT_OPENSOCKETFUNCTION_FIRST 0x071101 /* Added in 7.17.1 */
+#define CURLOPT_PASSWDDATA_FIRST 0x070402 /* Added in 7.4.2 */
+#define CURLOPT_PASSWDDATA_LAST 0x070f05 /* Last featured in 7.15.5 */
+#define CURLOPT_PASSWDFUNCTION_FIRST 0x070402 /* Added in 7.4.2 */
+#define CURLOPT_PASSWDFUNCTION_LAST 0x070f05 /* Last featured in 7.15.5 */
+#define CURLOPT_PASSWORD_FIRST 0x071301 /* Added in 7.19.1 */
+#define CURLOPT_PASV_HOST_FIRST 0x070c01 /* Added in 7.12.1 */
+#define CURLOPT_PASV_HOST_LAST 0x070f05 /* Last featured in 7.15.5 */
+#define CURLOPT_PINNEDPUBLICKEY_FIRST 0x072700 /* Added in 7.39.0 */
+#define CURLOPT_PORT_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_POST_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_POST301_FIRST 0x071101 /* Added in 7.17.1 */
+#define CURLOPT_POSTFIELDS_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_POSTFIELDSIZE_FIRST 0x070200 /* Added in 7.2 */
+#define CURLOPT_POSTFIELDSIZE_LARGE_FIRST 0x070b01 /* Added in 7.11.1 */
+#define CURLOPT_POSTQUOTE_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_POSTREDIR_FIRST 0x071301 /* Added in 7.19.1 */
+#define CURLOPT_PREQUOTE_FIRST 0x070905 /* Added in 7.9.5 */
+#define CURLOPT_PRIVATE_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURLOPT_PROGRESSDATA_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_PROGRESSFUNCTION_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_PROTOCOLS_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLOPT_PROXY_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_PROXYAUTH_FIRST 0x070a07 /* Added in 7.10.7 */
+#define CURLOPT_PROXYHEADER_FIRST 0x072500 /* Added in 7.37.0 */
+#define CURLOPT_PROXYPASSWORD_FIRST 0x071301 /* Added in 7.19.1 */
+#define CURLOPT_PROXYPORT_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_PROXYTYPE_FIRST 0x070a00 /* Added in 7.10 */
+#define CURLOPT_PROXYUSERNAME_FIRST 0x071301 /* Added in 7.19.1 */
+#define CURLOPT_PROXYUSERPWD_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_PROXY_TRANSFER_MODE_FIRST 0x071200 /* Added in 7.18.0 */
+#define CURLOPT_PUT_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_QUOTE_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_RANDOM_FILE_FIRST 0x070700 /* Added in 7.7 */
+#define CURLOPT_RANGE_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_READDATA_FIRST 0x070907 /* Added in 7.9.7 */
+#define CURLOPT_READFUNCTION_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_REDIR_PROTOCOLS_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLOPT_REFERER_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_RESOLVE_FIRST 0x071503 /* Added in 7.21.3 */
+#define CURLOPT_RESUME_FROM_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_RESUME_FROM_LARGE_FIRST 0x070b00 /* Added in 7.11.0 */
+#define CURLOPT_RTSPHEADER_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLOPT_RTSP_CLIENT_CSEQ_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLOPT_RTSP_REQUEST_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLOPT_RTSP_SERVER_CSEQ_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLOPT_RTSP_SESSION_ID_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLOPT_RTSP_STREAM_URI_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLOPT_RTSP_TRANSPORT_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLOPT_SASL_IR_FIRST 0x071f00 /* Added in 7.31.0 */
+#define CURLOPT_SEEKDATA_FIRST 0x071200 /* Added in 7.18.0 */
+#define CURLOPT_SEEKFUNCTION_FIRST 0x071200 /* Added in 7.18.0 */
+#define CURLOPT_SERVER_RESPONSE_TIMEOUT_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLOPT_SHARE_FIRST 0x070a00 /* Added in 7.10 */
+#define CURLOPT_SOCKOPTDATA_FIRST 0x071000 /* Added in 7.16.0 */
+#define CURLOPT_SOCKOPTFUNCTION_FIRST 0x071000 /* Added in 7.16.0 */
+#define CURLOPT_SOCKS5_GSSAPI_NEC_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLOPT_SOCKS5_GSSAPI_SERVICE_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLOPT_SOURCE_HOST_FIRST 0x070c01 /* Added in 7.12.1 */
+#define CURLOPT_SOURCE_HOST_LAST 0x070f05 /* Last featured in 7.15.5 */
+#define CURLOPT_SOURCE_PATH_FIRST 0x070c01 /* Added in 7.12.1 */
+#define CURLOPT_SOURCE_PATH_LAST 0x070f05 /* Last featured in 7.15.5 */
+#define CURLOPT_SOURCE_PORT_FIRST 0x070c01 /* Added in 7.12.1 */
+#define CURLOPT_SOURCE_PORT_LAST 0x070f05 /* Last featured in 7.15.5 */
+#define CURLOPT_SOURCE_POSTQUOTE_FIRST 0x070c01 /* Added in 7.12.1 */
+#define CURLOPT_SOURCE_POSTQUOTE_LAST 0x070f05 /* Last featured in 7.15.5 */
+#define CURLOPT_SOURCE_PREQUOTE_FIRST 0x070c01 /* Added in 7.12.1 */
+#define CURLOPT_SOURCE_PREQUOTE_LAST 0x070f05 /* Last featured in 7.15.5 */
+#define CURLOPT_SOURCE_QUOTE_FIRST 0x070d00 /* Added in 7.13.0 */
+#define CURLOPT_SOURCE_QUOTE_LAST 0x070f05 /* Last featured in 7.15.5 */
+#define CURLOPT_SOURCE_URL_FIRST 0x070d00 /* Added in 7.13.0 */
+#define CURLOPT_SOURCE_URL_LAST 0x070f05 /* Last featured in 7.15.5 */
+#define CURLOPT_SOURCE_USERPWD_FIRST 0x070c01 /* Added in 7.12.1 */
+#define CURLOPT_SOURCE_USERPWD_LAST 0x070f05 /* Last featured in 7.15.5 */
+#define CURLOPT_SSH_AUTH_TYPES_FIRST 0x071001 /* Added in 7.16.1 */
+#define CURLOPT_SSH_HOST_PUBLIC_KEY_MD5_FIRST 0x071101 /* Added in 7.17.1 */
+#define CURLOPT_SSH_KEYDATA_FIRST 0x071306 /* Added in 7.19.6 */
+#define CURLOPT_SSH_KEYFUNCTION_FIRST 0x071306 /* Added in 7.19.6 */
+#define CURLOPT_SSH_KNOWNHOSTS_FIRST 0x071306 /* Added in 7.19.6 */
+#define CURLOPT_SSH_PRIVATE_KEYFILE_FIRST 0x071001 /* Added in 7.16.1 */
+#define CURLOPT_SSH_PUBLIC_KEYFILE_FIRST 0x071001 /* Added in 7.16.1 */
+#define CURLOPT_SSLCERT_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_SSLCERTPASSWD_FIRST 0x070101 /* Added in 7.1.1 */
+#define CURLOPT_SSLCERTTYPE_FIRST 0x070903 /* Added in 7.9.3 */
+#define CURLOPT_SSLENGINE_FIRST 0x070903 /* Added in 7.9.3 */
+#define CURLOPT_SSLENGINE_DEFAULT_FIRST 0x070903 /* Added in 7.9.3 */
+#define CURLOPT_SSLKEY_FIRST 0x070903 /* Added in 7.9.3 */
+#define CURLOPT_SSLKEYPASSWD_FIRST 0x070903 /* Added in 7.9.3 */
+#define CURLOPT_SSLKEYTYPE_FIRST 0x070903 /* Added in 7.9.3 */
+#define CURLOPT_SSLVERSION_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_SSL_CIPHER_LIST_FIRST 0x070900 /* Added in 7.9 */
+#define CURLOPT_SSL_CTX_DATA_FIRST 0x070a06 /* Added in 7.10.6 */
+#define CURLOPT_SSL_CTX_FUNCTION_FIRST 0x070a06 /* Added in 7.10.6 */
+#define CURLOPT_SSL_ENABLE_ALPN_FIRST 0x072400 /* Added in 7.36.0 */
+#define CURLOPT_SSL_ENABLE_NPN_FIRST 0x072400 /* Added in 7.36.0 */
+#define CURLOPT_SSL_OPTIONS_FIRST 0x071900 /* Added in 7.25.0 */
+#define CURLOPT_SSL_SESSIONID_CACHE_FIRST 0x071000 /* Added in 7.16.0 */
+#define CURLOPT_SSL_VERIFYHOST_FIRST 0x070801 /* Added in 7.8.1 */
+#define CURLOPT_SSL_VERIFYPEER_FIRST 0x070402 /* Added in 7.4.2 */
+#define CURLOPT_SSL_VERIFYSTATUS_FIRST 0x072900 /* Added in 7.41.0 */
+#define CURLOPT_STDERR_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_TCP_KEEPALIVE_FIRST 0x071900 /* Added in 7.25.0 */
+#define CURLOPT_TCP_KEEPIDLE_FIRST 0x071900 /* Added in 7.25.0 */
+#define CURLOPT_TCP_KEEPINTVL_FIRST 0x071900 /* Added in 7.25.0 */
+#define CURLOPT_TCP_NODELAY_FIRST 0x070b02 /* Added in 7.11.2 */
+#define CURLOPT_TELNETOPTIONS_FIRST 0x070700 /* Added in 7.7 */
+#define CURLOPT_TFTP_BLKSIZE_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLOPT_TIMECONDITION_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_TIMEOUT_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_TIMEOUT_MS_FIRST 0x071002 /* Added in 7.16.2 */
+#define CURLOPT_TIMEVALUE_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_TLSAUTH_PASSWORD_FIRST 0x071504 /* Added in 7.21.4 */
+#define CURLOPT_TLSAUTH_TYPE_FIRST 0x071504 /* Added in 7.21.4 */
+#define CURLOPT_TLSAUTH_USERNAME_FIRST 0x071504 /* Added in 7.21.4 */
+#define CURLOPT_TRANSFERTEXT_FIRST 0x070101 /* Added in 7.1.1 */
+#define CURLOPT_TRANSFER_ENCODING_FIRST 0x071506 /* Added in 7.21.6 */
+#define CURLOPT_UNIX_SOCKET_PATH_FIRST 0x072800 /* Added in 7.40.0 */
+#define CURLOPT_UNRESTRICTED_AUTH_FIRST 0x070a04 /* Added in 7.10.4 */
+#define CURLOPT_UPLOAD_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_URL_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_USERAGENT_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_USERNAME_FIRST 0x071301 /* Added in 7.19.1 */
+#define CURLOPT_USERPWD_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_USE_SSL_FIRST 0x071100 /* Added in 7.17.0 */
+#define CURLOPT_VERBOSE_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_WILDCARDMATCH_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLOPT_WRITEDATA_FIRST 0x070907 /* Added in 7.9.7 */
+#define CURLOPT_WRITEFUNCTION_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_WRITEHEADER_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_WRITEINFO_FIRST 0x070100 /* Added in 7.1 */
+#define CURLOPT_XFERINFODATA_FIRST 0x072000 /* Added in 7.32.0 */
+#define CURLOPT_XFERINFOFUNCTION_FIRST 0x072000 /* Added in 7.32.0 */
+#define CURLOPT_XOAUTH2_BEARER_FIRST 0x072100 /* Added in 7.33.0 */
+#define CURLPAUSE_ALL_FIRST 0x071200 /* Added in 7.18.0 */
+#define CURLPAUSE_CONT_FIRST 0x071200 /* Added in 7.18.0 */
+#define CURLPAUSE_RECV_FIRST 0x071200 /* Added in 7.18.0 */
+#define CURLPAUSE_RECV_CONT_FIRST 0x071200 /* Added in 7.18.0 */
+#define CURLPAUSE_SEND_FIRST 0x071200 /* Added in 7.18.0 */
+#define CURLPAUSE_SEND_CONT_FIRST 0x071200 /* Added in 7.18.0 */
+#define CURLPROTO_ALL_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLPROTO_DICT_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLPROTO_FILE_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLPROTO_FTP_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLPROTO_FTPS_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLPROTO_GOPHER_FIRST 0x071502 /* Added in 7.21.2 */
+#define CURLPROTO_HTTP_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLPROTO_HTTPS_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLPROTO_IMAP_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLPROTO_IMAPS_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLPROTO_LDAP_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLPROTO_LDAPS_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLPROTO_POP3_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLPROTO_POP3S_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLPROTO_RTMP_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLPROTO_RTMPE_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLPROTO_RTMPS_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLPROTO_RTMPT_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLPROTO_RTMPTE_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLPROTO_RTMPTS_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURLPROTO_RTSP_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLPROTO_SCP_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLPROTO_SFTP_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLPROTO_SMB_FIRST 0x072800 /* Added in 7.40.0 */
+#define CURLPROTO_SMBS_FIRST 0x072800 /* Added in 7.40.0 */
+#define CURLPROTO_SMTP_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLPROTO_SMTPS_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURLPROTO_TELNET_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLPROTO_TFTP_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLPROXY_HTTP_FIRST 0x070a00 /* Added in 7.10 */
+#define CURLPROXY_HTTP_1_0_FIRST 0x071304 /* Added in 7.19.4 */
+#define CURLPROXY_SOCKS4_FIRST 0x070a00 /* Added in 7.10 */
+#define CURLPROXY_SOCKS4A_FIRST 0x071200 /* Added in 7.18.0 */
+#define CURLPROXY_SOCKS5_FIRST 0x070a00 /* Added in 7.10 */
+#define CURLPROXY_SOCKS5_HOSTNAME_FIRST 0x071200 /* Added in 7.18.0 */
+#define CURLSHE_BAD_OPTION_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURLSHE_INVALID_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURLSHE_IN_USE_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURLSHE_NOMEM_FIRST 0x070c00 /* Added in 7.12.0 */
+#define CURLSHE_NOT_BUILT_IN_FIRST 0x071700 /* Added in 7.23.0 */
+#define CURLSHE_OK_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURLSHOPT_LOCKFUNC_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURLSHOPT_NONE_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURLSHOPT_SHARE_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURLSHOPT_UNLOCKFUNC_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURLSHOPT_UNSHARE_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURLSHOPT_USERDATA_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURLSOCKTYPE_ACCEPT_FIRST 0x071c00 /* Added in 7.28.0 */
+#define CURLSOCKTYPE_IPCXN_FIRST 0x071000 /* Added in 7.16.0 */
+#define CURLSSH_AUTH_AGENT_FIRST 0x071c00 /* Added in 7.28.0 */
+#define CURLSSH_AUTH_ANY_FIRST 0x071001 /* Added in 7.16.1 */
+#define CURLSSH_AUTH_DEFAULT_FIRST 0x071001 /* Added in 7.16.1 */
+#define CURLSSH_AUTH_HOST_FIRST 0x071001 /* Added in 7.16.1 */
+#define CURLSSH_AUTH_KEYBOARD_FIRST 0x071001 /* Added in 7.16.1 */
+#define CURLSSH_AUTH_NONE_FIRST 0x071001 /* Added in 7.16.1 */
+#define CURLSSH_AUTH_PASSWORD_FIRST 0x071001 /* Added in 7.16.1 */
+#define CURLSSH_AUTH_PUBLICKEY_FIRST 0x071001 /* Added in 7.16.1 */
+#define CURLSSLBACKEND_AXTLS_FIRST 0x072600 /* Added in 7.38.0 */
+#define CURLSSLBACKEND_CYASSL_FIRST 0x072200 /* Added in 7.34.0 */
+#define CURLSSLBACKEND_DARWINSSL_FIRST 0x072200 /* Added in 7.34.0 */
+#define CURLSSLBACKEND_GNUTLS_FIRST 0x072200 /* Added in 7.34.0 */
+#define CURLSSLBACKEND_GSKIT_FIRST 0x072200 /* Added in 7.34.0 */
+#define CURLSSLBACKEND_NONE_FIRST 0x072200 /* Added in 7.34.0 */
+#define CURLSSLBACKEND_NSS_FIRST 0x072200 /* Added in 7.34.0 */
+#define CURLSSLBACKEND_OPENSSL_FIRST 0x072200 /* Added in 7.34.0 */
+#define CURLSSLBACKEND_POLARSSL_FIRST 0x072200 /* Added in 7.34.0 */
+#define CURLSSLBACKEND_QSOSSL_FIRST 0x072200 /* Added in 7.34.0 */
+#define CURLSSLBACKEND_QSOSSL_LAST 0x072601 /* Last featured in 7.38.1 */
+#define CURLSSLBACKEND_SCHANNEL_FIRST 0x072200 /* Added in 7.34.0 */
+#define CURLSSLOPT_ALLOW_BEAST_FIRST 0x071900 /* Added in 7.25.0 */
+#define CURLUSESSL_ALL_FIRST 0x071100 /* Added in 7.17.0 */
+#define CURLUSESSL_CONTROL_FIRST 0x071100 /* Added in 7.17.0 */
+#define CURLUSESSL_NONE_FIRST 0x071100 /* Added in 7.17.0 */
+#define CURLUSESSL_TRY_FIRST 0x071100 /* Added in 7.17.0 */
+#define CURLVERSION_FIRST_FIRST 0x070a00 /* Added in 7.10 */
+#define CURLVERSION_FOURTH_FIRST 0x071001 /* Added in 7.16.1 */
+#define CURLVERSION_NOW_FIRST 0x070a00 /* Added in 7.10 */
+#define CURLVERSION_SECOND_FIRST 0x070b01 /* Added in 7.11.1 */
+#define CURLVERSION_THIRD_FIRST 0x070c00 /* Added in 7.12.0 */
+#define CURL_CHUNK_BGN_FUNC_FAIL_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURL_CHUNK_BGN_FUNC_OK_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURL_CHUNK_BGN_FUNC_SKIP_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURL_CHUNK_END_FUNC_FAIL_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURL_CHUNK_END_FUNC_OK_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURL_CSELECT_ERR_FIRST 0x071003 /* Added in 7.16.3 */
+#define CURL_CSELECT_IN_FIRST 0x071003 /* Added in 7.16.3 */
+#define CURL_CSELECT_OUT_FIRST 0x071003 /* Added in 7.16.3 */
+#define CURL_EASY_NONE_FIRST 0x070e00 /* Added in 7.14.0 */
+#define CURL_EASY_NONE_LAST 0x070f04 /* Last featured in 7.15.4 */
+#define CURL_EASY_TIMEOUT_FIRST 0x070e00 /* Added in 7.14.0 */
+#define CURL_EASY_TIMEOUT_LAST 0x070f04 /* Last featured in 7.15.4 */
+#define CURL_ERROR_SIZE_FIRST 0x070100 /* Added in 7.1 */
+#define CURL_FNMATCHFUNC_FAIL_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURL_FNMATCHFUNC_MATCH_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURL_FNMATCHFUNC_NOMATCH_FIRST 0x071500 /* Added in 7.21.0 */
+#define CURL_FORMADD_DISABLED_FIRST 0x070c01 /* Added in 7.12.1 */
+#define CURL_FORMADD_ILLEGAL_ARRAY_FIRST 0x070908 /* Added in 7.9.8 */
+#define CURL_FORMADD_INCOMPLETE_FIRST 0x070908 /* Added in 7.9.8 */
+#define CURL_FORMADD_MEMORY_FIRST 0x070908 /* Added in 7.9.8 */
+#define CURL_FORMADD_NULL_FIRST 0x070908 /* Added in 7.9.8 */
+#define CURL_FORMADD_OK_FIRST 0x070908 /* Added in 7.9.8 */
+#define CURL_FORMADD_OPTION_TWICE_FIRST 0x070908 /* Added in 7.9.8 */
+#define CURL_FORMADD_UNKNOWN_OPTION_FIRST 0x070908 /* Added in 7.9.8 */
+#define CURL_GLOBAL_ACK_EINTR_FIRST 0x071e00 /* Added in 7.30.0 */
+#define CURL_GLOBAL_ALL_FIRST 0x070800 /* Added in 7.8 */
+#define CURL_GLOBAL_DEFAULT_FIRST 0x070800 /* Added in 7.8 */
+#define CURL_GLOBAL_NOTHING_FIRST 0x070800 /* Added in 7.8 */
+#define CURL_GLOBAL_SSL_FIRST 0x070800 /* Added in 7.8 */
+#define CURL_GLOBAL_WIN32_FIRST 0x070801 /* Added in 7.8.1 */
+#define CURL_HTTP_VERSION_1_0_FIRST 0x070901 /* Added in 7.9.1 */
+#define CURL_HTTP_VERSION_1_1_FIRST 0x070901 /* Added in 7.9.1 */
+#define CURL_HTTP_VERSION_2_0_FIRST 0x072100 /* Added in 7.33.0 */
+#define CURL_HTTP_VERSION_NONE_FIRST 0x070901 /* Added in 7.9.1 */
+#define CURL_IPRESOLVE_V4_FIRST 0x070a08 /* Added in 7.10.8 */
+#define CURL_IPRESOLVE_V6_FIRST 0x070a08 /* Added in 7.10.8 */
+#define CURL_IPRESOLVE_WHATEVER_FIRST 0x070a08 /* Added in 7.10.8 */
+#define CURL_LOCK_ACCESS_NONE_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURL_LOCK_ACCESS_SHARED_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURL_LOCK_ACCESS_SINGLE_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURL_LOCK_DATA_CONNECT_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURL_LOCK_DATA_COOKIE_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURL_LOCK_DATA_DNS_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURL_LOCK_DATA_NONE_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURL_LOCK_DATA_SHARE_FIRST 0x070a04 /* Added in 7.10.4 */
+#define CURL_LOCK_DATA_SSL_SESSION_FIRST 0x070a03 /* Added in 7.10.3 */
+#define CURL_LOCK_TYPE_CONNECT_FIRST 0x070a00 /* Added in 7.10 */
+#define CURL_LOCK_TYPE_CONNECT_LAST 0x070a02 /* Last featured in 7.10.2 */
+#define CURL_LOCK_TYPE_COOKIE_FIRST 0x070a00 /* Added in 7.10 */
+#define CURL_LOCK_TYPE_COOKIE_LAST 0x070a02 /* Last featured in 7.10.2 */
+#define CURL_LOCK_TYPE_DNS_FIRST 0x070a00 /* Added in 7.10 */
+#define CURL_LOCK_TYPE_DNS_LAST 0x070a02 /* Last featured in 7.10.2 */
+#define CURL_LOCK_TYPE_NONE_FIRST 0x070a00 /* Added in 7.10 */
+#define CURL_LOCK_TYPE_NONE_LAST 0x070a02 /* Last featured in 7.10.2 */
+#define CURL_LOCK_TYPE_SSL_SESSION_FIRST 0x070a00 /* Added in 7.10 */
+#define CURL_LOCK_TYPE_SSL_SESSION_LAST 0x070a02 /* Last featured in 7.10.2 */
+#define CURL_MAX_HTTP_HEADER_FIRST 0x071307 /* Added in 7.19.7 */
+#define CURL_MAX_WRITE_SIZE_FIRST 0x070907 /* Added in 7.9.7 */
+#define CURL_NETRC_IGNORED_FIRST 0x070908 /* Added in 7.9.8 */
+#define CURL_NETRC_OPTIONAL_FIRST 0x070908 /* Added in 7.9.8 */
+#define CURL_NETRC_REQUIRED_FIRST 0x070908 /* Added in 7.9.8 */
+#define CURL_POLL_IN_FIRST 0x070e00 /* Added in 7.14.0 */
+#define CURL_POLL_INOUT_FIRST 0x070e00 /* Added in 7.14.0 */
+#define CURL_POLL_NONE_FIRST 0x070e00 /* Added in 7.14.0 */
+#define CURL_POLL_OUT_FIRST 0x070e00 /* Added in 7.14.0 */
+#define CURL_POLL_REMOVE_FIRST 0x070e00 /* Added in 7.14.0 */
+#define CURL_PROGRESS_BAR_FIRST 0x070101 /* Added in 7.1.1 */
+#define CURL_PROGRESS_BAR_LAST 0x070401 /* Last featured in 7.4.1 */
+#define CURL_PROGRESS_STATS_FIRST 0x070101 /* Added in 7.1.1 */
+#define CURL_PROGRESS_STATS_LAST 0x070401 /* Last featured in 7.4.1 */
+#define CURL_READFUNC_ABORT_FIRST 0x070c01 /* Added in 7.12.1 */
+#define CURL_READFUNC_PAUSE_FIRST 0x071200 /* Added in 7.18.0 */
+#define CURL_REDIR_GET_ALL_FIRST 0x071301 /* Added in 7.19.1 */
+#define CURL_REDIR_POST_301_FIRST 0x071301 /* Added in 7.19.1 */
+#define CURL_REDIR_POST_302_FIRST 0x071301 /* Added in 7.19.1 */
+#define CURL_REDIR_POST_303_FIRST 0x071901 /* Added in 7.25.1 */
+#define CURL_REDIR_POST_ALL_FIRST 0x071301 /* Added in 7.19.1 */
+#define CURL_RTSPREQ_ANNOUNCE_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURL_RTSPREQ_DESCRIBE_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURL_RTSPREQ_GET_PARAMETER_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURL_RTSPREQ_NONE_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURL_RTSPREQ_OPTIONS_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURL_RTSPREQ_PAUSE_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURL_RTSPREQ_PLAY_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURL_RTSPREQ_RECEIVE_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURL_RTSPREQ_RECORD_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURL_RTSPREQ_SETUP_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURL_RTSPREQ_SET_PARAMETER_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURL_RTSPREQ_TEARDOWN_FIRST 0x071400 /* Added in 7.20.0 */
+#define CURL_SEEKFUNC_CANTSEEK_FIRST 0x071305 /* Added in 7.19.5 */
+#define CURL_SEEKFUNC_FAIL_FIRST 0x071305 /* Added in 7.19.5 */
+#define CURL_SEEKFUNC_OK_FIRST 0x071305 /* Added in 7.19.5 */
+#define CURL_SOCKET_BAD_FIRST 0x070e00 /* Added in 7.14.0 */
+#define CURL_SOCKET_TIMEOUT_FIRST 0x070e00 /* Added in 7.14.0 */
+#define CURL_SOCKOPT_ALREADY_CONNECTED_FIRST 0x071505 /* Added in 7.21.5 */
+#define CURL_SOCKOPT_ERROR_FIRST 0x071505 /* Added in 7.21.5 */
+#define CURL_SOCKOPT_OK_FIRST 0x071505 /* Added in 7.21.5 */
+#define CURL_SSLVERSION_DEFAULT_FIRST 0x070902 /* Added in 7.9.2 */
+#define CURL_SSLVERSION_SSLv2_FIRST 0x070902 /* Added in 7.9.2 */
+#define CURL_SSLVERSION_SSLv3_FIRST 0x070902 /* Added in 7.9.2 */
+#define CURL_SSLVERSION_TLSv1_FIRST 0x070902 /* Added in 7.9.2 */
+#define CURL_SSLVERSION_TLSv1_0_FIRST 0x072200 /* Added in 7.34.0 */
+#define CURL_SSLVERSION_TLSv1_1_FIRST 0x072200 /* Added in 7.34.0 */
+#define CURL_SSLVERSION_TLSv1_2_FIRST 0x072200 /* Added in 7.34.0 */
+#define CURL_TIMECOND_IFMODSINCE_FIRST 0x070907 /* Added in 7.9.7 */
+#define CURL_TIMECOND_IFUNMODSINCE_FIRST 0x070907 /* Added in 7.9.7 */
+#define CURL_TIMECOND_LASTMOD_FIRST 0x070907 /* Added in 7.9.7 */
+#define CURL_TIMECOND_NONE_FIRST 0x070907 /* Added in 7.9.7 */
+#define CURL_TLSAUTH_NONE_FIRST 0x071504 /* Added in 7.21.4 */
+#define CURL_TLSAUTH_SRP_FIRST 0x071504 /* Added in 7.21.4 */
+#define CURL_VERSION_ASYNCHDNS_FIRST 0x070a07 /* Added in 7.10.7 */
+#define CURL_VERSION_CONV_FIRST 0x070f04 /* Added in 7.15.4 */
+#define CURL_VERSION_CURLDEBUG_FIRST 0x071306 /* Added in 7.19.6 */
+#define CURL_VERSION_DEBUG_FIRST 0x070a06 /* Added in 7.10.6 */
+#define CURL_VERSION_GSSAPI_FIRST 0x072600 /* Added in 7.38.0 */
+#define CURL_VERSION_GSSNEGOTIATE_FIRST 0x070a06 /* Added in 7.10.6 */
+#define CURL_VERSION_HTTP2_FIRST 0x072100 /* Added in 7.33.0 */
+#define CURL_VERSION_IDN_FIRST 0x070c00 /* Added in 7.12.0 */
+#define CURL_VERSION_IPV6_FIRST 0x070a00 /* Added in 7.10 */
+#define CURL_VERSION_KERBEROS4_FIRST 0x070a00 /* Added in 7.10 */
+#define CURL_VERSION_KERBEROS5_FIRST 0x072800 /* Added in 7.40.0 */
+#define CURL_VERSION_LARGEFILE_FIRST 0x070b01 /* Added in 7.11.1 */
+#define CURL_VERSION_LIBZ_FIRST 0x070a00 /* Added in 7.10 */
+#define CURL_VERSION_NTLM_FIRST 0x070a06 /* Added in 7.10.6 */
+#define CURL_VERSION_NTLM_WB_FIRST 0x071600 /* Added in 7.22.0 */
+#define CURL_VERSION_SPNEGO_FIRST 0x070a08 /* Added in 7.10.8 */
+#define CURL_VERSION_SSL_FIRST 0x070a00 /* Added in 7.10 */
+#define CURL_VERSION_SSPI_FIRST 0x070d02 /* Added in 7.13.2 */
+#define CURL_VERSION_TLSAUTH_SRP_FIRST 0x071504 /* Added in 7.21.4 */
+#define CURL_VERSION_UNIX_SOCKETS_FIRST 0x072800 /* Added in 7.40.0 */
+#define CURL_WAIT_POLLIN_FIRST 0x071c00 /* Added in 7.28.0 */
+#define CURL_WAIT_POLLOUT_FIRST 0x071c00 /* Added in 7.28.0 */
+#define CURL_WAIT_POLLPRI_FIRST 0x071c00 /* Added in 7.28.0 */
+#define CURL_WRITEFUNC_PAUSE_FIRST 0x071200 /* Added in 7.18.0 */
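The `*_FIRST`/`*_LAST` hex values above use libcurl's `LIBCURL_VERSION_NUM` layout: one byte each for major, minor, and patch version. A minimal sketch of unpacking it (the helper name `decode_version` is ours, not part of libcurl):

```c
/* Unpack a 0xMMmmpp version number (libcurl's LIBCURL_VERSION_NUM
 * layout) into its components, e.g. 0x071003 -> 7.16.3. */
static void decode_version(unsigned num, int *major, int *minor, int *patch) {
  *major = (num >> 16) & 0xff;
  *minor = (num >> 8)  & 0xff;
  *patch =  num        & 0xff;
}
```

So `CURL_VERSION_UNIX_SOCKETS_FIRST 0x072800` decodes to 7.40.0, matching its comment.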
diff --git a/src/curl.c b/src/curl.c
new file mode 100644
index 0000000..aa08d94
--- /dev/null
+++ b/src/curl.c
@@ -0,0 +1,287 @@
+/* *
+ * Streaming interface to libcurl for R. (c) 2015 Jeroen Ooms.
+ * Source: https://github.com/jeroen/curl
+ * Comments and contributions are welcome!
+ * Helpful libcurl examples:
+ *  - http://curl.haxx.se/libcurl/c/getinmemory.html
+ *  - http://curl.haxx.se/libcurl/c/multi-single.html
+ * Sparse documentation about Rconnection API:
+ *  - https://github.com/wch/r-source/blob/trunk/src/include/R_ext/Connections.h
+ *  - http://biostatmatt.com/R/R-conn-ints/C-Structures.html
+ *
+ * Notes: the close() function in R actually calls con->destroy. The con->close
+ * function is only used when a connection is recycled after auto-open.
+ */
+#include "curl-common.h"
+#include <Rconfig.h>
+
+/* Define BSWAP_32 on Big Endian systems */
+#ifdef WORDS_BIGENDIAN
+#if (defined(__sun) && defined(__SVR4))
+#include <sys/byteorder.h>
+#elif (defined(__APPLE__) && defined(__ppc__) || defined(__ppc64__))
+#include <libkern/OSByteOrder.h>
+#define BSWAP_32 OSSwapInt32
+#elif (defined(__OpenBSD__))
+#define BSWAP_32(x) swap32(x)
+#elif (defined(__GLIBC__))
+#include <byteswap.h>
+#define BSWAP_32(x) bswap_32(x)
+#endif
+#endif
+
+/* the RConnection API is experimental and subject to change */
+#include <R_ext/Connections.h>
+#if ! defined(R_CONNECTIONS_VERSION) || R_CONNECTIONS_VERSION != 1
+#error "Unsupported connections API version"
+#endif
+
+#define min(a, b) (((a) < (b)) ? (a) : (b))
+#define R_EOF -1
+
+typedef struct {
+  char *url;
+  char *buf;
+  char *cur;
+  int has_data;
+  int has_more;
+  int used;
+  int partial;
+  size_t size;
+  size_t limit;
+  CURLM *manager;
+  CURL *handle;
+  reference *ref;
+} request;
+
+/* callback function to store received data */
+static size_t push(void *contents, size_t sz, size_t nmemb, void *ctx) {
+  /* avoids compiler warning on windows */
+  request* req = (request*) ctx;
+  req->has_data = 1;
+
+  /* move existing data to front of buffer (if any) */
+  memmove(req->buf, req->cur, req->size);
+
+  /* allocate more space if required */
+  size_t realsize = sz * nmemb;
+  size_t newsize = req->size + realsize;
+  if(newsize > req->limit) {
+    size_t newlimit = 2 * req->limit;
+    //Rprintf("Resizing buffer to %d.\n", newlimit);
+    void *newbuf = realloc(req->buf, newlimit);
+    if(!newbuf)
+      error("Failure in realloc. Out of memory?");
+    req->buf = newbuf;
+    req->limit = newlimit;
+  }
+
+  /* append new data */
+  memcpy(req->buf + req->size, contents, realsize);
+  req->size = newsize;
+  req->cur = req->buf;
+  return realsize;
+}
+
+static size_t pop(void *target, size_t max, request *req){
+  size_t copy_size = min(req->size, max);
+  memcpy(target, req->cur, copy_size);
+  req->cur += copy_size;
+  req->size -= copy_size;
+  //Rprintf("Requested %d bytes, popped %d bytes, new size %d bytes.\n", max, copy_size, req->size);
+  return copy_size;
+}
+
+void check_manager(CURLM *manager, reference *ref) {
+  for(int msg = 1; msg > 0;){
+    CURLMsg *out = curl_multi_info_read(manager, &msg);
+    if(out)
+      assert_status(out->data.result, ref);
+  }
+}
+
+//NOTE: renamed because the name 'fetch' caused crash/conflict on Solaris.
+void fetchdata(request *req) {
+  R_CheckUserInterrupt();
+  long timeout = 10*1000;
+  massert(curl_multi_timeout(req->manager, &timeout));
+  /* massert(curl_multi_perform(req->manager, &(req->has_more))); */
+
+  /* On libcurl < 7.20 we need to check for CURLM_CALL_MULTI_PERFORM, see docs */
+  CURLMcode res = CURLM_CALL_MULTI_PERFORM;
+  while(res == CURLM_CALL_MULTI_PERFORM){
+    res = curl_multi_perform(req->manager, &(req->has_more));
+  }
+  massert(res);
+  /* End */
+  check_manager(req->manager, req->ref);
+}
+
+/* Support for readBin() */
+static size_t rcurl_read(void *target, size_t sz, size_t ni, Rconnection con) {
+  request *req = (request*) con->private;
+  size_t req_size = sz * ni;
+
+  /* append data to the target buffer */
+  size_t total_size = pop(target, req_size, req);
+  while((req_size > total_size) && req->has_more) {
+    /* wait for activity, timeout or "nothing" */
+#ifdef HAS_MULTI_WAIT
+    int numfds;
+    if(con->blocking)
+      massert(curl_multi_wait(req->manager, NULL, 0, 1000, &numfds));
+#endif
+    fetchdata(req);
+    total_size += pop((char*)target + total_size, (req_size-total_size), req);
+
+    //return less than requested data for non-blocking connections, or curl_fetch_stream()
+    if(!con->blocking || req->partial)
+      break;
+  }
+  con->incomplete = req->has_more || req->size;
+  return total_size;
+}
+
+/* naive implementation of readLines */
+static int rcurl_fgetc(Rconnection con) {
+  int x = 0;
+#ifdef WORDS_BIGENDIAN
+  return rcurl_read(&x, 1, 1, con) ? BSWAP_32(x) : R_EOF;
+#else
+  return rcurl_read(&x, 1, 1, con) ? x : R_EOF;
+#endif
+}
+
+void cleanup(Rconnection con) {
+  //Rprintf("Destroying connection.\n");
+  request *req = (request*) con->private;
+  reference *ref = req->ref;
+
+  /* release the handle from the connection */
+  curl_multi_remove_handle(req->manager, req->handle);
+  ref->locked = 0;
+
+  /* delayed finalizer cleanup */
+  (ref->refCount)--;
+  clean_handle(ref);
+
+  /* clean up connection */
+  curl_multi_cleanup(req->manager);
+  free(req->buf);
+  free(req->url);
+  free(req);
+}
+
+/* reset to pre-opened state */
+void reset(Rconnection con) {
+  //Rprintf("Resetting connection object.\n");
+  request *req = (request*) con->private;
+  curl_multi_remove_handle(req->manager, req->handle);
+  req->ref->locked = 0;
+  con->isopen = FALSE;
+  con->text = TRUE;
+  con->incomplete = FALSE;
+  strcpy(con->mode, "r");
+}
+
+static Rboolean rcurl_open(Rconnection con) {
+  request *req = (request*) con->private;
+
+  //same message as base::url()
+  if (con->mode[0] != 'r' || strchr(con->mode, 'w'))
+    Rf_error("can only open URLs for reading");
+
+  if(req->ref->locked)
+    Rf_error("Handle is already in use elsewhere.");
+
+  /* init a multi stack with callback */
+  CURL *handle = req->handle;
+  assert(curl_easy_setopt(handle, CURLOPT_URL, req->url));
+  assert(curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, push));
+  assert(curl_easy_setopt(handle, CURLOPT_WRITEDATA, req));
+
+  /* add the handle to the pool and lock it */
+  massert(curl_multi_add_handle(req->manager, handle));
+  req->ref->locked = 1;
+
+  /* reset the state */
+  req->handle = handle;
+  req->cur = req->buf;
+  req->size = 0;
+  req->used = 1;
+  req->has_data = 0;
+  req->has_more = 1;
+
+  /* fully non-blocking has 's' in open mode */
+  int block_open = strchr(con->mode, 's') == NULL;
+  int force_open = strchr(con->mode, 'f') != NULL;
+
+  /* Wait for first data to arrive. Monitoring a change in status code does not
+     suffice in case of http redirects */
+  while(block_open && req->has_more && !req->has_data) {
+#ifdef HAS_MULTI_WAIT
+    int numfds;
+    massert(curl_multi_wait(req->manager, NULL, 0, 1000, &numfds));
+#endif
+    fetchdata(req);
+  }
+
+  /* check http status code */
+  /* Stream connections should be checked via handle_data() */
+  /* Non-blocking open connections get checked during read */
+  if(block_open && !force_open)
+    stop_for_status(handle);
+
+  /* set mode in case open() changed it */
+  con->text = strchr(con->mode, 'b') ? FALSE : TRUE;
+  con->isopen = TRUE;
+  con->incomplete = TRUE;
+  return TRUE;
+}
+
+SEXP R_curl_connection(SEXP url, SEXP ptr, SEXP partial) {
+  if(!isString(url))
+    error("Argument 'url' must be string.");
+
+  /* create the R connection object, mimicking base::url() */
+  Rconnection con;
+
+  /* R wants description in native encoding, but we use UTF-8 URL below */
+  SEXP rc = PROTECT(R_new_custom_connection(translateChar(STRING_ELT(url, 0)), "r", "curl", &con));
+
+  /* set up curl. These are the parts that are recyclable. */
+  request *req = malloc(sizeof(request));
+  req->handle = get_handle(ptr);
+  req->ref = get_ref(ptr);
+  req->limit = CURL_MAX_WRITE_SIZE;
+  req->buf = malloc(req->limit);
+  req->manager = curl_multi_init();
+  req->partial = asLogical(partial); //only for curl_fetch_stream()
+  req->used = 0;
+
+  /* allocate url string */
+  req->url = malloc(strlen(translateCharUTF8(asChar(url))) + 1);
+  strcpy(req->url, translateCharUTF8(asChar(url)));
+
+  /* set connection properties */
+  con->incomplete = FALSE;
+  con->private = req;
+  con->canseek = FALSE;
+  con->canwrite = FALSE;
+  con->isopen = FALSE;
+  con->blocking = TRUE;
+  con->text = TRUE;
+  con->UTF8out = TRUE;
+  con->open = rcurl_open;
+  con->close = reset;
+  con->destroy = cleanup;
+  con->read = rcurl_read;
+  con->fgetc = rcurl_fgetc;
+  con->fgetc_internal = rcurl_fgetc;
+
+  /* protect the handle */
+  (req->ref->refCount)++;
+
+  UNPROTECT(1);
+  return rc;
+}
diff --git a/src/download.c b/src/download.c
new file mode 100644
index 0000000..fb0471f
--- /dev/null
+++ b/src/download.c
@@ -0,0 +1,51 @@
+/* *
+ * Reimplementation of C_download (the "internal" method for download.file).
+ */
+#include "curl-common.h"
+
+SEXP R_download_curl(SEXP url, SEXP destfile, SEXP quiet, SEXP mode, SEXP ptr, SEXP nonblocking) {
+  if(!isString(url))
+    error("Argument 'url' must be string.");
+
+  if(!isString(destfile))
+    error("Argument 'destfile' must be string.");
+
+  if(!isLogical(quiet))
+    error("Argument 'quiet' must be TRUE/FALSE.");
+
+  if(!isString(mode))
+    error("Argument 'mode' must be string.");
+
+  /* get the handle */
+  CURL *handle = get_handle(ptr);
+  reset_errbuf(get_ref(ptr));
+
+  /* open file */
+  FILE *dest = fopen(CHAR(asChar(destfile)), CHAR(asChar(mode)));
+  if(!dest)
+    error("Failed to open file %s.", CHAR(asChar(destfile)));
+
+  /* set options */
+  curl_easy_setopt(handle, CURLOPT_URL, translateCharUTF8(asChar(url)));
+  curl_easy_setopt(handle, CURLOPT_NOPROGRESS, asLogical(quiet));
+  curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, push_disk);
+  curl_easy_setopt(handle, CURLOPT_WRITEDATA, dest);
+
+  /* perform blocking request */
+  CURLcode status = asLogical(nonblocking) ?
+    curl_perform_with_interrupt(handle) : curl_easy_perform(handle);
+
+  /* cleanup */
+  curl_easy_setopt(handle, CURLOPT_URL, NULL);
+  curl_easy_setopt(handle, CURLOPT_NOPROGRESS, 1);
+  curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, NULL);
+  curl_easy_setopt(handle, CURLOPT_WRITEDATA, NULL);
+  fclose(dest);
+
+  /* raise for curl errors */
+  assert_status(status, get_ref(ptr));
+
+  /* check for success */
+  stop_for_status(handle);
+  return ScalarInteger(0);
+}
diff --git a/src/escape.c b/src/escape.c
new file mode 100644
index 0000000..687bb44
--- /dev/null
+++ b/src/escape.c
@@ -0,0 +1,33 @@
+#include <curl/curl.h>
+#include <Rinternals.h>
+
+SEXP R_curl_escape(SEXP url, SEXP unescape_) {
+  if (TYPEOF(url) != STRSXP)
+    error("`url` must be a character vector.");
+
+  /* init curl */
+  CURL *curl = curl_easy_init();
+  if (!curl)
+    return(R_NilValue);
+
+  int unescape = asLogical(unescape_);
+  int n = Rf_length(url);
+  SEXP output = PROTECT(allocVector(STRSXP, n));
+
+  for (int i = 0; i < n; ++i) {
+    const char *in = CHAR(STRING_ELT(url, i));
+    char *out;
+    if (unescape) {
+      out = curl_easy_unescape(curl, in, 0, NULL);
+    } else {
+      out = curl_easy_escape(curl, in, 0);
+    }
+
+    SET_STRING_ELT(output, i, mkCharCE(out, CE_UTF8));
+    curl_free(out);
+  }
+
+  curl_easy_cleanup(curl);
+  UNPROTECT(1);
+  return output;
+}
diff --git a/src/fetch.c b/src/fetch.c
new file mode 100644
index 0000000..948d017
--- /dev/null
+++ b/src/fetch.c
@@ -0,0 +1,91 @@
+/* *
+ * Blocking easy interfaces to libcurl for R.
+ * Example: http://curl.haxx.se/libcurl/c/getinmemory.html
+ */
+
+#include "curl-common.h"
+
+SEXP R_curl_fetch_memory(SEXP url, SEXP ptr, SEXP nonblocking){
+  if (!isString(url) || length(url) != 1)
+    error("Argument 'url' must be string.");
+
+  /* get the handle */
+  CURL *handle = get_handle(ptr);
+
+  /* update the url */
+  curl_easy_setopt(handle, CURLOPT_URL, CHAR(STRING_ELT(url, 0)));
+
+  /* reset the response header buffer */
+  reset_resheaders(get_ref(ptr));
+  reset_errbuf(get_ref(ptr));
+
+  /* buffer body */
+  memory body = {NULL, 0};
+  curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, append_buffer);
+  curl_easy_setopt(handle, CURLOPT_WRITEDATA, &body);
+
+  /* perform blocking request */
+  CURLcode status = asLogical(nonblocking) ?
+    curl_perform_with_interrupt(handle) : curl_easy_perform(handle);
+
+  /* Reset for reuse */
+  curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, NULL);
+  curl_easy_setopt(handle, CURLOPT_WRITEDATA, NULL);
+
+  /* check for errors */
+  if (status != CURLE_OK) {
+    free(body.buf);
+    assert_status(status, get_ref(ptr));
+  }
+
+  /* create output */
+  SEXP out = PROTECT(allocVector(RAWSXP, body.size));
+
+  /* copy only if there is actual content */
+  if(body.size)
+    memcpy(RAW(out), body.buf, body.size);
+
+  /* cleanup and return */
+  UNPROTECT(1);
+  free(body.buf);
+  return out;
+}
+
+SEXP R_curl_fetch_disk(SEXP url, SEXP ptr, SEXP path, SEXP mode, SEXP nonblocking){
+  if (!isString(url) || length(url) != 1)
+    error("Argument 'url' must be string.");
+  if (!isString(path) || length(path) != 1)
+    error("`path` must be string.");
+
+  /* get the handle */
+  CURL *handle = get_handle(ptr);
+
+  /* update the url */
+  curl_easy_setopt(handle, CURLOPT_URL, CHAR(STRING_ELT(url, 0)));
+
+  /* reset the response header buffer */
+  reset_resheaders(get_ref(ptr));
+  reset_errbuf(get_ref(ptr));
+
+  /* open file */
+  FILE *dest = fopen(CHAR(asChar(path)), CHAR(asChar(mode)));
+  if(!dest)
+    error("Failed to open file %s.", CHAR(asChar(path)));
+  curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, push_disk);
+  curl_easy_setopt(handle, CURLOPT_WRITEDATA, dest);
+
+  /* perform blocking request */
+  CURLcode status = asLogical(nonblocking) ?
+    curl_perform_with_interrupt(handle): curl_easy_perform(handle);
+
+  /* cleanup */
+  curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, NULL);
+  curl_easy_setopt(handle, CURLOPT_WRITEDATA, NULL);
+  fclose(dest);
+
+  /* check for errors */
+  assert_status(status, get_ref(ptr));
+
+  /* return the file path */
+  return path;
+}
diff --git a/src/form.c b/src/form.c
new file mode 100644
index 0000000..0db1d8d
--- /dev/null
+++ b/src/form.c
@@ -0,0 +1,46 @@
+#include "curl-common.h"
+
+struct curl_httppost* make_form(SEXP form){
+  struct curl_httppost* post = NULL;
+  struct curl_httppost* last = NULL;
+  SEXP ln = PROTECT(getAttrib(form, R_NamesSymbol));
+  for(int i = 0; i < length(form); i++){
+    const char *name = translateCharUTF8(STRING_ELT(ln, i));
+    SEXP val = VECTOR_ELT(form, i);
+    if(TYPEOF(val) == RAWSXP){
+      long datalen = Rf_length(val);
+      if(datalen > 0){
+        unsigned char * data = RAW(val);
+        curl_formadd(&post, &last, CURLFORM_COPYNAME, name, CURLFORM_COPYCONTENTS, data, CURLFORM_CONTENTSLENGTH, datalen, CURLFORM_END);
+      } else {
+        //Note: if CURLFORM_CONTENTSLENGTH == 0 then libcurl assumes strlen()!
+        curl_formadd(&post, &last, CURLFORM_COPYNAME, name, CURLFORM_COPYCONTENTS, "", CURLFORM_END);
+      }
+    } else if(isVector(val) && Rf_length(val)){
+      if(isString(VECTOR_ELT(val, 0))){
+        //assume a form_file upload
+        const char * path = CHAR(asChar(VECTOR_ELT(val, 0)));
+        if(isString(VECTOR_ELT(val, 1))){
+          const char *content_type = CHAR(asChar(VECTOR_ELT(val, 1)));
+          curl_formadd(&post, &last, CURLFORM_COPYNAME, name, CURLFORM_FILE, path, CURLFORM_CONTENTTYPE, content_type, CURLFORM_END);
+        } else {
+          curl_formadd(&post, &last, CURLFORM_COPYNAME, name, CURLFORM_FILE, path, CURLFORM_END);
+        }
+      } else {
+        //assume a form_value upload
+        unsigned char * data = RAW(VECTOR_ELT(val, 0));
+        long datalen = Rf_length(VECTOR_ELT(val, 0));
+        if(isString(VECTOR_ELT(val, 1))){
+          const char * content_type = CHAR(asChar(VECTOR_ELT(val, 1)));
+          curl_formadd(&post, &last, CURLFORM_COPYNAME, name, CURLFORM_COPYCONTENTS, data, CURLFORM_CONTENTSLENGTH, datalen, CURLFORM_CONTENTTYPE, content_type, CURLFORM_END);
+        } else {
+          curl_formadd(&post, &last, CURLFORM_COPYNAME, name, CURLFORM_COPYCONTENTS, data, CURLFORM_CONTENTSLENGTH, datalen, CURLFORM_END);
+        }
+      }
+    } else {
+      error("form value %s not supported", name);
+    }
+  }
+  UNPROTECT(1);
+  return post;
+}
diff --git a/src/getdate.c b/src/getdate.c
new file mode 100644
index 0000000..40f5495
--- /dev/null
+++ b/src/getdate.c
@@ -0,0 +1,17 @@
+#include <curl/curl.h>
+#include <Rinternals.h>
+
+SEXP R_curl_getdate(SEXP datestring) {
+  if(!isString(datestring))
+    error("Argument 'datestring' must be string.");
+
+  int len = length(datestring);
+  SEXP out = PROTECT(allocVector(INTSXP, len));
+
+  for(int i = 0; i < len; i++){
+    time_t date = curl_getdate(CHAR(STRING_ELT(datestring, i)), NULL);
+    INTEGER(out)[i] = date < 0 ? NA_INTEGER : (int) date;
+  }
+  UNPROTECT(1);
+  return out;
+}
diff --git a/src/handle.c b/src/handle.c
new file mode 100644
index 0000000..b223e40
--- /dev/null
+++ b/src/handle.c
@@ -0,0 +1,388 @@
+#include "curl-common.h"
+#include "callbacks.h"
+
+#ifndef MAX_PATH
+#define MAX_PATH 1024
+#endif
+
+#if LIBCURL_VERSION_MAJOR > 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR >= 47)
+#define HAS_HTTP_VERSION_2TLS 1
+#endif
+
+#if LIBCURL_VERSION_MAJOR > 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR >= 32)
+#define HAS_XFERINFOFUNCTION 1
+#endif
+
+#if LIBCURL_VERSION_MAJOR > 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR >= 36)
+#define HAS_CURLOPT_EXPECT_100_TIMEOUT_MS 1
+#endif
+
+
+char CA_BUNDLE[MAX_PATH];
+static struct curl_slist * default_headers;
+
+SEXP R_set_bundle(SEXP path){
+  strcpy(CA_BUNDLE, CHAR(asChar(path)));
+  return mkString(CA_BUNDLE);
+}
+
+SEXP R_get_bundle(){
+  return mkString(CA_BUNDLE);
+}
+
+int total_handles = 0;
+
+void clean_handle(reference *ref){
+  if(ref->refCount == 0){
+    if(ref->headers)
+      curl_slist_free_all(ref->headers);
+    if(ref->form)
+      curl_formfree(ref->form);
+    if(ref->handle)
+      curl_easy_cleanup(ref->handle);
+    if(ref->resheaders.buf)
+      free(ref->resheaders.buf);
+    free(ref);
+    total_handles--;
+  }
+}
+
+void fin_handle(SEXP ptr){
+  reference *ref = (reference*) R_ExternalPtrAddr(ptr);
+
+  //this is kind of strange, but the multi finalizer needs the ptr value
+  //if it is still pending
+  ref->refCount--;
+  if(ref->refCount == 0)
+    R_ClearExternalPtr(ptr);
+
+  //free stuff
+  clean_handle(ref);
+}
+
+/* the default readfunc is fread, which can cause R to freeze */
+size_t dummy_read(char *buffer, size_t size, size_t nitems, void *instream){
+  return 0;
+}
+
+/* These are defaults that we always want to set */
+void set_handle_defaults(reference *ref){
+
+  /* the actual curl handle */
+  CURL *handle = ref->handle;
+  assert(curl_easy_setopt(handle, CURLOPT_PRIVATE, ref));
+
+  /* set the response header collector */
+  reset_resheaders(ref);
+  curl_easy_setopt(handle, CURLOPT_HEADERFUNCTION, append_buffer);
+  curl_easy_setopt(handle, CURLOPT_HEADERDATA, &(ref->resheaders));
+
+  #ifdef _WIN32
+  if(CA_BUNDLE != NULL && strlen(CA_BUNDLE)){
+    /* on windows a cert bundle is included with R version 3.2.0 */
+    curl_easy_setopt(handle, CURLOPT_CAINFO, CA_BUNDLE);
+  } else {
+    /* disable cert validation for older versions of R */
+    curl_easy_setopt(handle, CURLOPT_SSL_VERIFYHOST, 0L);
+    curl_easy_setopt(handle, CURLOPT_SSL_VERIFYPEER, 0L);
+  }
+  #endif
+
+  /* needed to support compressed responses */
+  assert(curl_easy_setopt(handle, CURLOPT_ENCODING, "gzip, deflate"));
+
+  /* follow redirect */
+  assert(curl_easy_setopt(handle, CURLOPT_FOLLOWLOCATION, 1L));
+  assert(curl_easy_setopt(handle, CURLOPT_MAXREDIRS, 10L));
+
+  /* a sensible timeout (10s) */
+  assert(curl_easy_setopt(handle, CURLOPT_CONNECTTIMEOUT, 10L));
+
+  /* needed to start the cookie engine */
+  assert(curl_easy_setopt(handle, CURLOPT_COOKIEFILE, ""));
+  assert(curl_easy_setopt(handle, CURLOPT_FILETIME, 1L));
+
+  /* set the default user agent */
+  SEXP agent = GetOption1(install("HTTPUserAgent"));
+  if(isString(agent) && Rf_length(agent)){
+    assert(curl_easy_setopt(handle, CURLOPT_USERAGENT, CHAR(STRING_ELT(agent, 0))));
+  } else {
+    assert(curl_easy_setopt(handle, CURLOPT_USERAGENT, "r/curl/jeroen"));
+  }
+
+  /* allow all authentication methods */
+  assert(curl_easy_setopt(handle, CURLOPT_HTTPAUTH, CURLAUTH_ANY));
+  assert(curl_easy_setopt(handle, CURLOPT_UNRESTRICTED_AUTH, 1L));
+
+  /* enables HTTP2 on HTTPS (match behavior of curl cmd util) */
+#if defined(CURL_VERSION_HTTP2) && defined(HAS_HTTP_VERSION_2TLS)
+  if(curl_version_info(CURLVERSION_NOW)->features & CURL_VERSION_HTTP2)
+    assert(curl_easy_setopt(handle, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2TLS));
+#endif
+
+  /* set an error buffer */
+  assert(curl_easy_setopt(handle, CURLOPT_ERRORBUFFER, ref->errbuf));
+
+  /* dummy readfunction because default can freeze R */
+  assert(curl_easy_setopt(handle, CURLOPT_READFUNCTION, dummy_read));
+
+  /* set default headers (disables the Expect: http 100)*/
+#ifdef HAS_CURLOPT_EXPECT_100_TIMEOUT_MS
+  assert(curl_easy_setopt(handle, CURLOPT_EXPECT_100_TIMEOUT_MS, 0L));
+#endif
+  assert(curl_easy_setopt(handle, CURLOPT_HTTPHEADER, default_headers));
+}
+
+SEXP R_new_handle(){
+  reference *ref = calloc(1, sizeof(reference));
+  ref->refCount = 1;
+  ref->handle = curl_easy_init();
+  total_handles++;
+  set_handle_defaults(ref);
+  SEXP ptr = PROTECT(R_MakeExternalPtr(ref, R_NilValue, R_NilValue));
+  R_RegisterCFinalizerEx(ptr, fin_handle, TRUE);
+  setAttrib(ptr, R_ClassSymbol, mkString("curl_handle"));
+  UNPROTECT(1);
+  ref->handleptr = ptr;
+  return ptr;
+}
+
+SEXP R_handle_reset(SEXP ptr){
+  //reset all fields
+  reference *ref = get_ref(ptr);
+  set_form(ref, NULL);
+  set_headers(ref, NULL);
+  reset_errbuf(ref);
+  curl_easy_reset(ref->handle);
+
+  //restore default settings
+  set_handle_defaults(ref);
+  return ScalarLogical(1);
+}
+
+int opt_is_linked_list(int key) {
+  // These four options need linked lists of various forms - determined
+  // from inspection of curl.h
+  return
+    key == 10023 || // CURLOPT_HTTPHEADER
+    key == 10024 || // CURLOPT_HTTPPOST
+    key == 10070 || // CURLOPT_TELNETOPTIONS
+    key == 10228;   // CURLOPT_PROXYHEADER
+}
+
+SEXP R_handle_setopt(SEXP ptr, SEXP keys, SEXP values){
+  CURL *handle = get_handle(ptr);
+  SEXP optnames = PROTECT(getAttrib(values, R_NamesSymbol));
+
+  if(!isInteger(keys))
+    error("`keys` must be an integer");
+
+  if(!isVector(values))
+    error("`values` must be a list");
+
+  for(int i = 0; i < length(keys); i++){
+    int key = INTEGER(keys)[i];
+    const char* optname = CHAR(STRING_ELT(optnames, i));
+    SEXP val = VECTOR_ELT(values, i);
+    if(val == R_NilValue){
+      assert(curl_easy_setopt(handle, key, NULL));
+#ifdef HAS_XFERINFOFUNCTION
+    } else if (key == CURLOPT_XFERINFOFUNCTION) {
+      if (TYPEOF(val) != CLOSXP)
+        error("Value for option %s (%d) must be a function.", optname, key);
+
+      assert(curl_easy_setopt(handle, CURLOPT_XFERINFOFUNCTION,
+                              (curl_progress_callback) R_curl_callback_xferinfo));
+      assert(curl_easy_setopt(handle, CURLOPT_XFERINFODATA, val));
+      assert(curl_easy_setopt(handle, CURLOPT_NOPROGRESS, 0L));
+#endif
+    } else if (key == CURLOPT_PROGRESSFUNCTION) {
+      if (TYPEOF(val) != CLOSXP)
+        error("Value for option %s (%d) must be a function.", optname, key);
+
+      assert(curl_easy_setopt(handle, CURLOPT_PROGRESSFUNCTION,
+        (curl_progress_callback) R_curl_callback_progress));
+      assert(curl_easy_setopt(handle, CURLOPT_PROGRESSDATA, val));
+      assert(curl_easy_setopt(handle, CURLOPT_NOPROGRESS, 0L));
+    } else if (key == CURLOPT_READFUNCTION) {
+      if (TYPEOF(val) != CLOSXP)
+        error("Value for option %s (%d) must be a function.", optname, key);
+
+      assert(curl_easy_setopt(handle, CURLOPT_READFUNCTION,
+        (curl_read_callback) R_curl_callback_read));
+      assert(curl_easy_setopt(handle, CURLOPT_READDATA, val));
+    } else if (key == CURLOPT_DEBUGFUNCTION) {
+      if (TYPEOF(val) != CLOSXP)
+        error("Value for option %s (%d) must be a function.", optname, key);
+
+      assert(curl_easy_setopt(handle, CURLOPT_DEBUGFUNCTION,
+        (curl_debug_callback) R_curl_callback_debug));
+      assert(curl_easy_setopt(handle, CURLOPT_DEBUGDATA, val));
+    } else if (key == CURLOPT_URL) {
+      /* always use utf-8 for urls */
+      const char * url_utf8 = translateCharUTF8(STRING_ELT(val, 0));
+      assert(curl_easy_setopt(handle, CURLOPT_URL, url_utf8));
+    } else if (opt_is_linked_list(key)) {
+      error("Option %s (%d) not supported.", optname, key);
+    } else if(key < 10000){
+      if(!isNumeric(val) || length(val) != 1) {
+        error("Value for option %s (%d) must be a number.", optname, key);
+      }
+      assert(curl_easy_setopt(handle, key, (long) asInteger(val)));
+    } else if(key < 20000){
+      switch (TYPEOF(val)) {
+      case RAWSXP:
+        if(key == CURLOPT_POSTFIELDS || key == CURLOPT_COPYPOSTFIELDS)
+          assert(curl_easy_setopt(handle, CURLOPT_POSTFIELDSIZE_LARGE, (curl_off_t) Rf_length(val)));
+        assert(curl_easy_setopt(handle, key, RAW(val)));
+        break;
+      case STRSXP:
+        if (length(val) != 1)
+          error("Value for option %s (%d) must be a length-1 string.", optname, key);
+        assert(curl_easy_setopt(handle, key, CHAR(STRING_ELT(val, 0))));
+        break;
+      default:
+        error("Value for option %s (%d) must be a string or raw vector.", optname, key);
+      }
+    } else if(key >= 30000 && key < 40000){
+      if(!isNumeric(val) || length(val) != 1) {
+        error("Value for option %s (%d) must be a number.", optname, key);
+      }
+      assert(curl_easy_setopt(handle, key, (curl_off_t) asReal(val)));
+    } else {
+      error("Option %s (%d) not supported.", optname, key);
+    }
+  }
+  UNPROTECT(1);
+  return ScalarLogical(1);
+}
+
+SEXP R_handle_setheaders(SEXP ptr, SEXP vec){
+  if(!isString(vec))
+    error("header vector must be a string.");
+  set_headers(get_ref(ptr), vec_to_slist(vec));
+  return ScalarLogical(1);
+}
+
+SEXP R_handle_setform(SEXP ptr, SEXP form){
+  if(!isVector(form))
+    error("Form must be a list.");
+  set_form(get_ref(ptr), make_form(form));
+  return ScalarLogical(1);
+}
+
+SEXP make_timevec(CURL *handle){
+  double time_redirect, time_lookup, time_connect, time_pre, time_start, time_total;
+  assert(curl_easy_getinfo(handle, CURLINFO_REDIRECT_TIME, &time_redirect));
+  assert(curl_easy_getinfo(handle, CURLINFO_NAMELOOKUP_TIME, &time_lookup));
+  assert(curl_easy_getinfo(handle, CURLINFO_CONNECT_TIME, &time_connect));
+  assert(curl_easy_getinfo(handle, CURLINFO_PRETRANSFER_TIME, &time_pre));
+  assert(curl_easy_getinfo(handle, CURLINFO_STARTTRANSFER_TIME, &time_start));
+  assert(curl_easy_getinfo(handle, CURLINFO_TOTAL_TIME, &time_total));
+
+  SEXP result = PROTECT(allocVector(REALSXP, 6));
+  REAL(result)[0] = time_redirect;
+  REAL(result)[1] = time_lookup;
+  REAL(result)[2] = time_connect;
+  REAL(result)[3] = time_pre;
+  REAL(result)[4] = time_start;
+  REAL(result)[5] = time_total;
+
+  SEXP names = PROTECT(allocVector(STRSXP, 6));
+  SET_STRING_ELT(names, 0, mkChar("redirect"));
+  SET_STRING_ELT(names, 1, mkChar("namelookup"));
+  SET_STRING_ELT(names, 2, mkChar("connect"));
+  SET_STRING_ELT(names, 3, mkChar("pretransfer"));
+  SET_STRING_ELT(names, 4, mkChar("starttransfer"));
+  SET_STRING_ELT(names, 5, mkChar("total"));
+  setAttrib(result, R_NamesSymbol, names);
+  UNPROTECT(2);
+  return result;
+}
+
+/* Extract current cookies (state) from handle */
+SEXP make_cookievec(CURL *handle){
+  /* linked list of strings */
+  struct curl_slist *cookies;
+  assert(curl_easy_getinfo(handle, CURLINFO_COOKIELIST, &cookies));
+  SEXP out = slist_to_vec(cookies);
+  curl_slist_free_all(cookies);
+  return out;
+}
+
+SEXP make_status(CURL *handle){
+  long res_status;
+  assert(curl_easy_getinfo(handle, CURLINFO_RESPONSE_CODE, &res_status));
+  return ScalarInteger(res_status);
+}
+
+SEXP make_url(CURL *handle){
+  char *res_url;
+  assert(curl_easy_getinfo(handle, CURLINFO_EFFECTIVE_URL, &res_url));
+  return ScalarString(mkCharCE(res_url, CE_UTF8));
+}
+
+SEXP make_filetime(CURL *handle){
+  long filetime;
+  assert(curl_easy_getinfo(handle, CURLINFO_FILETIME, &filetime));
+  if(filetime < 0){
+    filetime = NA_INTEGER;
+  }
+
+  SEXP classes = PROTECT(allocVector(STRSXP, 2));
+  SET_STRING_ELT(classes, 0, mkChar("POSIXct"));
+  SET_STRING_ELT(classes, 1, mkChar("POSIXt"));
+
+  SEXP out = PROTECT(ScalarInteger(filetime));
+  setAttrib(out, R_ClassSymbol, classes);
+  UNPROTECT(2);
+  return out;
+}
+
+SEXP make_rawvec(unsigned char *ptr, size_t size){
+  SEXP out = PROTECT(allocVector(RAWSXP, size));
+  if(size > 0)
+    memcpy(RAW(out), ptr, size);
+  UNPROTECT(1);
+  return out;
+}
+
+SEXP make_namesvec(){
+  SEXP names = PROTECT(allocVector(STRSXP, 6));
+  SET_STRING_ELT(names, 0, mkChar("url"));
+  SET_STRING_ELT(names, 1, mkChar("status_code"));
+  SET_STRING_ELT(names, 2, mkChar("headers"));
+  SET_STRING_ELT(names, 3, mkChar("modified"));
+  SET_STRING_ELT(names, 4, mkChar("times"));
+  SET_STRING_ELT(names, 5, mkChar("content"));
+  UNPROTECT(1);
+  return names;
+}
+
+SEXP R_get_handle_cookies(SEXP ptr){
+  return make_cookievec(get_handle(ptr));
+}
+
+SEXP make_handle_response(reference *ref){
+  CURL *handle = ref->handle;
+  SEXP res = PROTECT(allocVector(VECSXP, 6));
+  SET_VECTOR_ELT(res, 0, make_url(handle));
+  SET_VECTOR_ELT(res, 1, make_status(handle));
+  SET_VECTOR_ELT(res, 2, make_rawvec(ref->resheaders.buf, ref->resheaders.size));
+  SET_VECTOR_ELT(res, 3, make_filetime(handle));
+  SET_VECTOR_ELT(res, 4, make_timevec(handle));
+  SET_VECTOR_ELT(res, 5, R_NilValue);
+  setAttrib(res, R_NamesSymbol, make_namesvec());
+  UNPROTECT(1);
+  return res;
+}
+
+SEXP R_get_handle_response(SEXP ptr){
+  /* get the handle */
+  reference *ref = get_ref(ptr);
+  return make_handle_response(ref);
+}
+
+SEXP R_total_handles(){
+  return(ScalarInteger(total_handles));
+}
diff --git a/src/ieproxy.c b/src/ieproxy.c
new file mode 100644
index 0000000..3798cb6
--- /dev/null
+++ b/src/ieproxy.c
@@ -0,0 +1,177 @@
+#include <Rinternals.h>
+
+#ifdef _WIN32
+#include <Windows.h>
+#include <Winhttp.h>
+#include <stdlib.h>
+
+#define WINHTTP_AUTO_DETECT_TYPE_DHCP           0x00000001
+#define WINHTTP_AUTO_DETECT_TYPE_DNS_A          0x00000002
+#define WINHTTP_AUTOPROXY_AUTO_DETECT           0x00000001
+#define WINHTTP_AUTOPROXY_CONFIG_URL            0x00000002
+#define WINHTTP_AUTOPROXY_RUN_INPROCESS         0x00010000
+#define WINHTTP_AUTOPROXY_RUN_OUTPROCESS_ONLY   0x00020000
+
+SEXP proxy_namesvec(){
+  SEXP names = PROTECT(allocVector(STRSXP, 4));
+  SET_STRING_ELT(names, 0, mkChar("AutoDetect"));
+  SET_STRING_ELT(names, 1, mkChar("AutoConfigUrl"));
+  SET_STRING_ELT(names, 2, mkChar("Proxy"));
+  SET_STRING_ELT(names, 3, mkChar("ProxyBypass"));
+  UNPROTECT(1);
+  return names;
+}
+
+SEXP auto_namesvec(){
+  SEXP names = PROTECT(allocVector(STRSXP, 3));
+  SET_STRING_ELT(names, 0, mkChar("HasProxy"));
+  SET_STRING_ELT(names, 1, mkChar("Proxy"));
+  SET_STRING_ELT(names, 2, mkChar("ProxyBypass"));
+  UNPROTECT(1);
+  return names;
+}
+
+SEXP R_proxy_info(){
+  WINHTTP_CURRENT_USER_IE_PROXY_CONFIG MyProxyConfig;
+  if(!WinHttpGetIEProxyConfigForCurrentUser(&MyProxyConfig)){
+    return R_NilValue;
+  }
+  char buffer[500];
+  SEXP vec = PROTECT(allocVector(VECSXP, 4));
+  SET_VECTOR_ELT(vec, 0, ScalarLogical(MyProxyConfig.fAutoDetect));
+
+  if(MyProxyConfig.lpszAutoConfigUrl != NULL) {
+    wcstombs(buffer, MyProxyConfig.lpszAutoConfigUrl, 500);
+    SET_VECTOR_ELT(vec, 1, mkString(buffer));
+  }
+
+  if(MyProxyConfig.lpszProxy != NULL) {
+    wcstombs(buffer, MyProxyConfig.lpszProxy, 500);
+    SET_VECTOR_ELT(vec, 2, mkString(buffer));
+  }
+
+  if(MyProxyConfig.lpszProxyBypass != NULL) {
+    wcstombs(buffer, MyProxyConfig.lpszProxyBypass, 500);
+    SET_VECTOR_ELT(vec, 3, mkString(buffer));
+  }
+  setAttrib(vec, R_NamesSymbol, proxy_namesvec());
+  UNPROTECT(1);
+  return vec;
+}
+
+SEXP R_get_proxy_for_url(SEXP target_url, SEXP auto_detect, SEXP autoproxy_url){
+  // Convert char to windows strings
+  wchar_t *longurl = (wchar_t *) calloc(10000, sizeof(int));
+  mbstowcs(longurl, CHAR(STRING_ELT(target_url, 0)), LENGTH(STRING_ELT(target_url, 0)));
+
+  // Some settings
+  WINHTTP_AUTOPROXY_OPTIONS AutoProxyOptions;
+  WINHTTP_PROXY_INFO ProxyInfo;
+
+  // Clear memory
+  ZeroMemory( &AutoProxyOptions, sizeof(AutoProxyOptions) );
+  ZeroMemory( &ProxyInfo, sizeof(ProxyInfo) );
+
+  // Create the WinHTTP session.
+  HINTERNET hHttpSession = WinHttpOpen( L"WinHTTP AutoProxy Sample/1.0",
+      WINHTTP_ACCESS_TYPE_NO_PROXY, WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, 0);
+
+  // Exit if WinHttpOpen failed.
+  if( !hHttpSession )
+    error("Call to WinHttpOpen failed.");
+
+  // Auto-detection doesn't work very well
+  if(asLogical(auto_detect)){
+    AutoProxyOptions.dwFlags = WINHTTP_AUTOPROXY_AUTO_DETECT;
+    AutoProxyOptions.dwAutoDetectFlags = WINHTTP_AUTO_DETECT_TYPE_DHCP | WINHTTP_AUTO_DETECT_TYPE_DNS_A;
+  }
+
+  // Use manual URL instead
+  if(isString(autoproxy_url) && LENGTH(autoproxy_url)){
+    wchar_t *autourl = (wchar_t *) calloc(10000, sizeof(int));
+    mbstowcs(autourl, CHAR(STRING_ELT(autoproxy_url, 0)), LENGTH(STRING_ELT(autoproxy_url, 0)));
+    AutoProxyOptions.dwFlags = WINHTTP_AUTOPROXY_CONFIG_URL;
+    AutoProxyOptions.lpszAutoConfigUrl = autourl;
+  }
+
+  // Authenticate automatically if the PAC server challenges us.
+  AutoProxyOptions.fAutoLogonIfChallenged = TRUE;
+
+  // This downloads and runs the JavaScript to map the url to a proxy
+  if(!WinHttpGetProxyForUrl( hHttpSession, longurl, &AutoProxyOptions, &ProxyInfo)){
+    DWORD err = GetLastError();
+    switch(err){
+      case ERROR_WINHTTP_AUTO_PROXY_SERVICE_ERROR:
+        error("ERROR_WINHTTP_AUTO_PROXY_SERVICE_ERROR");
+      case ERROR_WINHTTP_BAD_AUTO_PROXY_SCRIPT:
+        error("ERROR_WINHTTP_BAD_AUTO_PROXY_SCRIPT");
+      case ERROR_WINHTTP_INCORRECT_HANDLE_TYPE:
+        error("ERROR_WINHTTP_INCORRECT_HANDLE_TYPE");
+      case ERROR_WINHTTP_INTERNAL_ERROR:
+        error("ERROR_WINHTTP_INTERNAL_ERROR");
+      case ERROR_WINHTTP_INVALID_URL:
+        error("ERROR_WINHTTP_INVALID_URL");
+      case ERROR_WINHTTP_LOGIN_FAILURE:
+        error("ERROR_WINHTTP_LOGIN_FAILURE");
+      case ERROR_WINHTTP_OPERATION_CANCELLED:
+        error("ERROR_WINHTTP_OPERATION_CANCELLED");
+      case ERROR_WINHTTP_UNABLE_TO_DOWNLOAD_SCRIPT:
+        error("ERROR_WINHTTP_UNABLE_TO_DOWNLOAD_SCRIPT");
+      case ERROR_WINHTTP_UNRECOGNIZED_SCHEME:
+        error("ERROR_WINHTTP_UNRECOGNIZED_SCHEME");
+      case ERROR_NOT_ENOUGH_MEMORY:
+        error("ERROR_NOT_ENOUGH_MEMORY");
+    }
+  }
+
+  //store output data
+  char buffer[500];
+  SEXP vec = PROTECT(allocVector(VECSXP, 3));
+  SET_VECTOR_ELT(vec, 0, ScalarLogical(
+      ProxyInfo.dwAccessType == WINHTTP_ACCESS_TYPE_NAMED_PROXY ||
+      ProxyInfo.dwAccessType == WINHTTP_ACCESS_TYPE_DEFAULT_PROXY));
+
+  if(ProxyInfo.lpszProxy != NULL) {
+    wcstombs(buffer, ProxyInfo.lpszProxy, 500);
+    SET_VECTOR_ELT(vec, 1, mkString(buffer));
+    GlobalFree((void*) ProxyInfo.lpszProxy);
+  }
+
+  if(ProxyInfo.lpszProxyBypass != NULL) {
+    wcstombs(buffer, ProxyInfo.lpszProxyBypass, 500);
+    SET_VECTOR_ELT(vec, 2, mkString(buffer));
+    GlobalFree((void*) ProxyInfo.lpszProxyBypass );
+  }
+
+  //clean up
+  WinHttpCloseHandle( hHttpSession );
+
+  //return
+  setAttrib(vec, R_NamesSymbol, auto_namesvec());
+  UNPROTECT(1);
+  return vec;
+}
+
+SEXP R_windows_build(){
+  DWORD dwBuild = 0;
+  DWORD dwVersion = GetVersion();
+  if (dwVersion < 0x80000000)
+    dwBuild = (DWORD)(HIWORD(dwVersion));
+  return ScalarInteger(dwBuild);
+}
+
+#else //_WIN32
+
+SEXP R_proxy_info(){
+  return R_NilValue;
+}
+
+SEXP R_get_proxy_for_url(SEXP target_url, SEXP auto_detect, SEXP autoproxy_url){
+  return R_NilValue;
+}
+
+SEXP R_windows_build(){
+  return R_NilValue;
+}
+
+#endif //_WIN32
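The Windows branches above narrow WinHTTP's wide strings with `wcstombs` into fixed 500-byte buffers. A minimal portable sketch of that conversion step (ASCII input; the helper name is ours):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Narrow a wide string into a fixed-size buffer, as R_proxy_info does with
 * the lpszProxy fields. Returns 1 on success, 0 on failure or truncation. */
static int narrow(const wchar_t *src, char *dst, size_t n) {
  size_t written = wcstombs(dst, src, n);
  if (written == (size_t) -1 || written >= n)
    return 0; /* invalid sequence, or no room for the terminator */
  return 1;
}
```

Checking `written >= n` matters because `wcstombs` does not null-terminate when the output exactly fills the buffer, a case the 500-byte buffers above silently assume never happens.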
diff --git a/src/init.c b/src/init.c
new file mode 100644
index 0000000..93b5a16
--- /dev/null
+++ b/src/init.c
@@ -0,0 +1,18 @@
+#include <R_ext/Rdynload.h>
+#include <curl/curl.h>
+
+CURLM *multi_handle = NULL;
+static struct curl_slist * default_headers = NULL;
+
+void R_init_curl(DllInfo *info) {
+  curl_global_init(CURL_GLOBAL_DEFAULT);
+  multi_handle = curl_multi_init();
+  default_headers = curl_slist_append(default_headers, "Expect:");
+  R_registerRoutines(info, NULL, NULL, NULL, NULL);
+  R_useDynamicSymbols(info, TRUE);
+}
+
+void R_unload_curl(DllInfo *info) {
+  curl_multi_cleanup(multi_handle);
+  curl_global_cleanup();
+}
diff --git a/src/interrupt.c b/src/interrupt.c
new file mode 100644
index 0000000..61a7733
--- /dev/null
+++ b/src/interrupt.c
@@ -0,0 +1,70 @@
+/* Non-blocking drop-in replacement for curl_easy_perform with support for
+ * R interruptions. Based on: https://curl.haxx.se/libcurl/c/multi-single.html
+ */
+
+#include <Rinternals.h>
+#include "curl-common.h"
+
+/* Check for interrupt without long jumping */
+void check_interrupt_fn(void *dummy) {
+  R_CheckUserInterrupt();
+}
+
+int pending_interrupt() {
+  return !(R_ToplevelExec(check_interrupt_fn, NULL));
+}
+
+/* created in init.c */
+CURLM * multi_handle;
+
+/* Don't call Rf_error() until we remove the handle from the multi handle! */
+CURLcode curl_perform_with_interrupt(CURL *handle){
+  /* start settings */
+  CURLcode status = CURLE_FAILED_INIT;
+  int still_running = 1;
+
+  if(CURLM_OK != curl_multi_add_handle(multi_handle, handle)){
+    curl_multi_cleanup(multi_handle);
+    return CURLE_FAILED_INIT;
+  }
+
+  /* non blocking downloading */
+  while(still_running) {
+    if(pending_interrupt()){
+      status = CURLE_ABORTED_BY_CALLBACK;
+      break;
+    }
+
+#ifdef HAS_MULTI_WAIT
+    /* wait for activity, timeout or "nothing" */
+    int numfds;
+    if(curl_multi_wait(multi_handle, NULL, 0, 1000, &numfds) != CURLM_OK)
+      break;
+#endif
+
+    /* Required by old versions of libcurl */
+    CURLMcode res = CURLM_CALL_MULTI_PERFORM;
+    while(res == CURLM_CALL_MULTI_PERFORM)
+      res = curl_multi_perform(multi_handle, &(still_running));
+
+    /* check for multi errors */
+    if(res != CURLM_OK)
+      break;
+  }
+
+  /* set status if handle has completed. This might be overkill */
+  if(!still_running){
+    int msgq = 0;
+    do {
+      CURLMsg *m = curl_multi_info_read(multi_handle, &msgq);
+      if(m && (m->msg == CURLMSG_DONE)){
+        status = m->data.result;
+        break;
+      }
+    } while (msgq > 0);
+  }
+
+  /* cleanup first */
+  curl_multi_remove_handle(multi_handle, handle);
+  return status;
+}
diff --git a/src/multi.c b/src/multi.c
new file mode 100644
index 0000000..65ee725
--- /dev/null
+++ b/src/multi.c
@@ -0,0 +1,247 @@
+#include "curl-common.h"
+#include <time.h>
+
+/* Notes:
+ *  - First check for unhandled messages in curl_multi_info_read() before curl_multi_perform()
+ *  - Use eval() to callback instead of R_tryEval() to propagate interrupt or error back to C
+ */
+
+#if LIBCURL_VERSION_MAJOR > 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR >= 30)
+#define HAS_CURLMOPT_MAX_TOTAL_CONNECTIONS 1
+#endif
+
+multiref *get_multiref(SEXP ptr){
+  if(TYPEOF(ptr) != EXTPTRSXP || !Rf_inherits(ptr, "curl_multi"))
+    Rf_error("pool ptr is not a curl_multi handle");
+  multiref *mref = (multiref*) R_ExternalPtrAddr(ptr);
+  if(!mref)
+    Rf_error("multiref pointer is dead");
+  return mref;
+}
+
+void multi_release(reference *ref){
+  /* Release the easy-handle */
+  CURL *handle = ref->handle;
+  CURLM *multi = ref->async.mref->m;
+  massert(curl_multi_remove_handle(multi, handle));
+  curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, NULL);
+  curl_easy_setopt(handle, CURLOPT_WRITEDATA, NULL);
+
+  /* Remove the curl handle from the handles list */
+  ref->async.mref->handles = reflist_remove(ref->async.mref->handles, ref->handleptr);
+  R_SetExternalPtrProtected(ref->async.mref->multiptr, ref->async.mref->handles);
+  R_SetExternalPtrProtected(ref->handleptr, R_NilValue);
+
+  /* Reset multi state struct */
+  if(ref->async.content.buf){
+    free(ref->async.content.buf);
+    ref->async.content.buf = NULL;
+    ref->async.content.size = 0;
+  }
+  ref->async.mref = NULL;
+  ref->async.content.buf = NULL;
+  ref->async.content.size = 0;
+  ref->async.complete = NULL;
+  ref->async.error = NULL;
+  ref->async.node = NULL;
+
+  /* Unlock handle (and cleanup if needed) */
+  ref->locked = 0;
+  ref->refCount--;
+  clean_handle(ref);
+}
+
+SEXP R_multi_cancel(SEXP handle_ptr){
+  reference *ref = get_ref(handle_ptr);
+  if(ref->async.mref)
+    multi_release(ref);
+  return handle_ptr;
+}
+
+SEXP R_multi_add(SEXP handle_ptr, SEXP cb_complete, SEXP cb_error, SEXP pool_ptr){
+  multiref *mref = get_multiref(pool_ptr);
+  CURLM *multi = mref->m;
+
+  reference *ref = get_ref(handle_ptr);
+  if(ref->locked)
+    Rf_error("Handle is locked. Probably in use in a connection or async request.");
+
+  /* placeholder body */
+  curl_easy_setopt(ref->handle, CURLOPT_WRITEFUNCTION, append_buffer);
+  curl_easy_setopt(ref->handle, CURLOPT_WRITEDATA, &(ref->async.content));
+
+  /* add to scheduler */
+  massert(curl_multi_add_handle(multi, ref->handle));
+
+  /* create node in ref */
+  ref->async.mref = mref;
+  mref->handles = reflist_add(mref->handles, handle_ptr);
+  R_SetExternalPtrProtected(pool_ptr, mref->handles);
+
+  /* set multi callbacks */
+  ref->async.complete = cb_complete;
+  ref->async.error = cb_error;
+  R_SetExternalPtrProtected(handle_ptr, Rf_list2(cb_error, cb_complete));
+
+  /* lock and protect handle */
+  ref->refCount++;
+  ref->locked = 1;
+  return handle_ptr;
+}
+
+SEXP R_multi_run(SEXP pool_ptr, SEXP timeout, SEXP max){
+  multiref *mref = get_multiref(pool_ptr);
+  CURLM *multi = mref->m;
+
+  int total_pending = -1;
+  int total_success = 0;
+  int total_fail = 0;
+  int result_max = asInteger(max);
+  double time_max = asReal(timeout);
+  time_t time_start = time(NULL);
+
+  double seconds_elapsed = 0;
+  while(1) {
+    /* check for completed requests */
+    int dirty = 0;
+    int msgq = 1;
+    while (msgq > 0) {
+      CURLMsg *m = curl_multi_info_read(multi, &msgq);
+      if(m && (m->msg == CURLMSG_DONE)){
+        dirty = 1;
+        reference *ref = NULL;
+        CURL *handle = m->easy_handle;
+        CURLcode status = m->data.result;
+        assert(curl_easy_getinfo(handle, CURLINFO_PRIVATE, (char**) &ref));
+
+        // prepare for callback
+        SEXP cb_complete = PROTECT(ref->async.complete);
+        SEXP cb_error = PROTECT(ref->async.error);
+        SEXP buf = PROTECT(allocVector(RAWSXP, ref->async.content.size));
+        if(ref->async.content.buf && ref->async.content.size)
+          memcpy(RAW(buf), ref->async.content.buf, ref->async.content.size);
+
+        //release handle for use by callbacks
+        multi_release(ref);
+
+        // callbacks should be wrapped in tryCatch on the R side so the loop can continue
+        if(status == CURLE_OK){
+          total_success++;
+          if(Rf_isFunction(cb_complete)){
+            int arglen = Rf_length(FORMALS(cb_complete));
+            SEXP out = PROTECT(make_handle_response(ref));
+            SET_VECTOR_ELT(out, 5, buf);
+            SEXP call = PROTECT(LCONS(cb_complete, arglen ? LCONS(out, R_NilValue) : R_NilValue));
+            //R_tryEval(call, R_GlobalEnv, &cbfail);
+            eval(call, R_GlobalEnv); //OK to error here
+            UNPROTECT(2);
+          }
+        } else {
+          total_fail++;
+          if(Rf_isFunction(cb_error)){
+            int arglen = Rf_length(FORMALS(cb_error));
+            SEXP buf = PROTECT(mkString(strlen(ref->errbuf) ? ref->errbuf : curl_easy_strerror(status)));
+            SEXP call = PROTECT(LCONS(cb_error, arglen ? LCONS(buf, R_NilValue) : R_NilValue));
+            //R_tryEval(call, R_GlobalEnv, &cbfail);
+            eval(call, R_GlobalEnv); //OK to error here
+            UNPROTECT(2);
+          }
+        }
+        UNPROTECT(3);
+      }
+      R_CheckUserInterrupt();
+    }
+
+    /* check for user interruptions */
+    //if(pending_interrupt())  break;
+    R_CheckUserInterrupt();
+
+    /* check for timeout or max result*/
+    if(result_max > 0 && total_success + total_fail >= result_max)
+      break;
+    if(time_max == 0 && total_pending != -1)
+      break;
+    if(time_max > 0){
+      seconds_elapsed = (double) (time(NULL) - time_start);
+      if(seconds_elapsed > time_max)
+        break;
+    }
+
+    /* check if we are done */
+    if(total_pending == 0 && !dirty)
+      break;
+
+#ifdef HAS_MULTI_WAIT
+    /* wait for activity, timeout or "nothing" */
+    int numfds;
+    double waitforit = fmin(time_max - seconds_elapsed, 1); //at most 1 sec to support interrupts
+    if(time_max > 0)
+      massert(curl_multi_wait(multi, NULL, 0, (int) (waitforit * 1000), &numfds));
+#endif
+
+    /* poll libcurl for new data - updates total_pending */
+    CURLMcode res = CURLM_CALL_MULTI_PERFORM;
+    while(res == CURLM_CALL_MULTI_PERFORM)
+      res = curl_multi_perform(multi, &(total_pending));
+    if(res != CURLM_OK)
+      break;
+  }
+
+  SEXP res = PROTECT(allocVector(VECSXP, 3));
+  SET_VECTOR_ELT(res, 0, ScalarInteger(total_success));
+  SET_VECTOR_ELT(res, 1, ScalarInteger(total_fail));
+  SET_VECTOR_ELT(res, 2, ScalarInteger(total_pending));
+
+  SEXP names = PROTECT(allocVector(STRSXP, 3));
+  SET_STRING_ELT(names, 0, mkChar("success"));
+  SET_STRING_ELT(names, 1, mkChar("error"));
+  SET_STRING_ELT(names, 2, mkChar("pending"));
+  setAttrib(res, R_NamesSymbol, names);
+  UNPROTECT(2);
+  return res;
+}
+
+void fin_multi(SEXP ptr){
+  multiref *mref = get_multiref(ptr);
+  SEXP handles = mref->handles;
+  while(handles != R_NilValue){
+    multi_release(get_ref(CAR(handles)));
+    handles = CDR(handles);
+  }
+  curl_multi_cleanup(mref->m);
+  free(mref);
+  R_ClearExternalPtr(ptr);
+}
+
+SEXP R_multi_new(){
+  multiref *ref = calloc(1, sizeof(multiref));
+  ref->m = curl_multi_init();
+  ref->handles = reflist_init();
+  SEXP ptr = PROTECT(R_MakeExternalPtr(ref, R_NilValue, ref->handles));
+  ref->multiptr = ptr;
+  R_RegisterCFinalizerEx(ptr, fin_multi, 1);
+  setAttrib(ptr, R_ClassSymbol, mkString("curl_multi"));
+  UNPROTECT(1);
+  return ptr;
+}
+
+SEXP R_multi_setopt(SEXP pool_ptr, SEXP total_con, SEXP host_con, SEXP multiplex){
+  multiref *mref = get_multiref(pool_ptr);
+  CURLM *multi = mref->m;
+
+  // NOTE: CURLPIPE_HTTP1 is unsafe for non idempotent requests
+  #ifdef CURLPIPE_MULTIPLEX
+    massert(curl_multi_setopt(multi, CURLMOPT_PIPELINING,
+                              asLogical(multiplex) ? CURLPIPE_MULTIPLEX : CURLPIPE_NOTHING));
+  #endif
+
+  #ifdef HAS_CURLMOPT_MAX_TOTAL_CONNECTIONS
+    massert(curl_multi_setopt(multi, CURLMOPT_MAX_TOTAL_CONNECTIONS, (long) asInteger(total_con)));
+    massert(curl_multi_setopt(multi, CURLMOPT_MAX_HOST_CONNECTIONS, (long) asInteger(host_con)));
+  #endif
+  return pool_ptr;
+}
+
+SEXP R_multi_list(SEXP pool_ptr){
+  return get_multiref(pool_ptr)->handles;
+}
diff --git a/src/nslookup.c b/src/nslookup.c
new file mode 100644
index 0000000..4420094
--- /dev/null
+++ b/src/nslookup.c
@@ -0,0 +1,100 @@
+//libcurl internal punycode converter
+#ifdef _WIN32
+int jeroen_win32_idn_to_ascii(const char *in, char **out);
+#endif
+
+//getaddrinfo is an extension (not C99)
+#if !defined(_WIN32) && !defined(__sun) && !defined(_POSIX_C_SOURCE)
+#define _POSIX_C_SOURCE 200112L
+#endif
+
+#include <Rinternals.h>
+#include <string.h>
+
+#ifdef _WIN32
+#include <winsock2.h>
+#include <ws2tcpip.h>
+const char *inet_ntop(int af, const void *src, char *dst, socklen_t size);
+#else
+#include <sys/socket.h>
+#include <netinet/in.h>
+#include <netdb.h>
+#include <arpa/inet.h>
+#endif
+
+SEXP R_nslookup(SEXP hostname, SEXP ipv4_only) {
+  /* Because gethostbyname() is deprecated */
+  struct addrinfo hints = {0};
+  if(asLogical(ipv4_only))
+    hints.ai_family = AF_INET; //only allow ipv4
+  struct addrinfo *addr;
+  const char * hoststr = CHAR(STRING_ELT(hostname, 0));
+#ifdef _WIN32
+  if(Rf_getCharCE(STRING_ELT(hostname, 0)) == CE_UTF8){
+    char * punycode;
+    if(jeroen_win32_idn_to_ascii(hoststr, &punycode))
+      hoststr = punycode;
+  }
+#endif
+  if(getaddrinfo(hoststr, NULL, &hints, &addr))
+    return R_NilValue;
+
+  // count number of hits
+  int len = 0;
+  struct addrinfo * cur = addr;
+  while(cur != NULL){
+    len++;
+    cur = cur->ai_next;
+  }
+
+  //allocate output
+  SEXP out = PROTECT(allocVector(STRSXP, len));
+
+  //extract the values
+  cur = addr;
+  for(int i = 0; i < len; i++) {
+    struct sockaddr *sa = cur->ai_addr;
+
+    /* IPv4 vs v6 */
+    char ip[INET6_ADDRSTRLEN];
+    if (sa->sa_family == AF_INET) {
+      struct sockaddr_in *sa_in = (struct sockaddr_in*) sa;
+      inet_ntop(AF_INET, &(sa_in->sin_addr), ip, INET_ADDRSTRLEN);
+    } else {
+      struct sockaddr_in6 *sa_in = (struct sockaddr_in6*) sa;
+      inet_ntop(AF_INET6, &(sa_in->sin6_addr), ip, INET6_ADDRSTRLEN);
+    }
+    SET_STRING_ELT(out, i, mkChar(ip));
+    cur = cur->ai_next;
+  }
+  UNPROTECT(1);
+  freeaddrinfo(addr);
+  return out;
+}
+
+/* Fallback implementation for inet_ntop in Win32 */
+
+#if defined(_WIN32) && !defined(_WIN64)
+const char *inet_ntop(int af, const void *src, char *dst, socklen_t size)
+{
+  struct sockaddr_storage ss;
+  unsigned long s = size;
+
+  ZeroMemory(&ss, sizeof(ss));
+  ss.ss_family = af;
+
+  switch(af) {
+  case AF_INET:
+    ((struct sockaddr_in *)&ss)->sin_addr = *(struct in_addr *)src;
+    break;
+  case AF_INET6:
+    ((struct sockaddr_in6 *)&ss)->sin6_addr = *(struct in6_addr *)src;
+    break;
+  default:
+    return NULL;
+  }
+  /* cannot directly use &size because of strict aliasing rules */
+  return (WSAAddressToString((struct sockaddr *)&ss, sizeof(ss), NULL, dst, &s) == 0)?
+  dst : NULL;
+}
+#endif
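`R_nslookup` renders each resolved `sockaddr` with `inet_ntop`. The parse/print round trip can be exercised on its own without touching the resolver (POSIX; the helper name is ours):

```c
#include <arpa/inet.h>
#include <assert.h>
#include <string.h>

/* Parse a dotted-quad IPv4 literal and print it back -- the same
 * formatting step R_nslookup applies to each getaddrinfo() result. */
static int roundtrip_ipv4(const char *dotted, char *out, socklen_t n) {
  struct in_addr addr;
  if (inet_pton(AF_INET, dotted, &addr) != 1)
    return 0; /* not a valid IPv4 literal */
  return inet_ntop(AF_INET, &addr, out, n) != NULL;
}
```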
diff --git a/src/reflist.c b/src/reflist.c
new file mode 100644
index 0000000..3085bff
--- /dev/null
+++ b/src/reflist.c
@@ -0,0 +1,56 @@
+#include <Rinternals.h>
+
+SEXP reflist_init(){
+  return R_NilValue;
+}
+
+//note: you MUST use the return value for this object
+SEXP reflist_add(SEXP x, SEXP target){
+  if(!Rf_isPairList(x))
+    Rf_error("Not a LISTSXP");
+  return(CONS(target, x));
+}
+
+SEXP reflist_has(SEXP x, SEXP target){
+  if(!Rf_isPairList(x))
+    Rf_error("Not a LISTSXP");
+  while(x != R_NilValue){
+    if(CAR(x) == target)
+      return(ScalarLogical(1));
+    x = CDR(x);
+  }
+  return(ScalarLogical(0));
+}
+
+SEXP reflist_remove(SEXP x, SEXP target){
+  if(!Rf_isPairList(x))
+    Rf_error("Not a LISTSXP");
+
+  //drop head
+  if(x != R_NilValue && CAR(x) == target)
+    return(CDR(x));
+  SEXP prev = x;
+  SEXP current = CDR(x);
+
+  //check inner nodes
+  while(current != R_NilValue){
+    if(CAR(current) == target){
+      SETCDR(prev, CDR(current));
+      return(x);
+    }
+    prev = current;
+    current = CDR(current);
+  }
+  Rf_error("Object not found in reflist!");
+}
+
+SEXP reflist_length(SEXP x) {
+  if(!Rf_isPairList(x))
+    Rf_error("Not a LISTSXP");
+  int i = 0;
+  while(x != R_NilValue){
+    i++;
+    x = CDR(x);
+  }
+  return ScalarInteger(i);
+}
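`reflist_remove` above is ordinary singly-linked-list surgery applied to an R pairlist: return the tail if the head matches, otherwise splice past the first matching inner node. The same logic in plain C, using a hypothetical node struct rather than SEXPs (and returning the list unchanged instead of erroring when the target is absent):

```c
#include <assert.h>
#include <stddef.h>

/* Plain-C analogue of reflist_remove. */
struct node { int value; struct node *next; };

static struct node *list_remove(struct node *head, int target) {
  if (head && head->value == target)
    return head->next;               /* drop the head, return the tail */
  for (struct node *prev = head; prev && prev->next; prev = prev->next) {
    if (prev->next->value == target) {
      prev->next = prev->next->next; /* unlink the inner node */
      return head;
    }
  }
  return head;                       /* not found: list unchanged */
}
```

As with `reflist_add`, the caller must use the return value, since removing the head changes which node the list starts at.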
diff --git a/src/split.c b/src/split.c
new file mode 100644
index 0000000..91fa21a
--- /dev/null
+++ b/src/split.c
@@ -0,0 +1,16 @@
+#include <Rinternals.h>
+#include <string.h>
+
+SEXP R_split_string(SEXP string, SEXP split){
+  const char * str = CHAR(STRING_ELT(string, 0));
+  cetype_t enc = Rf_getCharCE(STRING_ELT(string, 0));
+  const char * cut = CHAR(STRING_ELT(split, 0));
+  char * out = strstr(str, cut);
+  if(!out)
+    return string;
+  SEXP res = PROTECT(allocVector(STRSXP, 2));
+  SET_STRING_ELT(res, 0, mkCharLenCE(str, out - str, enc));
+  SET_STRING_ELT(res, 1, mkCharCE(out + strlen(cut), enc));
+  UNPROTECT(1);
+  return res;
+}
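`R_split_string` splits at the first occurrence of the separator and returns head and tail. A plain-C equivalent of that `strstr`-based split (the helper name is ours):

```c
#include <assert.h>
#include <string.h>

/* Split str at the first occurrence of sep: copy the head into the
 * caller's buffer and return a pointer to the tail, or NULL if sep
 * does not occur (R_split_string returns the input unchanged then). */
static const char *split_once(const char *str, const char *sep,
                              char *head, size_t n) {
  const char *hit = strstr(str, sep);
  if (!hit)
    return NULL;
  size_t len = (size_t)(hit - str);
  if (len >= n)
    len = n - 1;         /* truncate rather than overflow */
  memcpy(head, str, len);
  head[len] = '\0';
  return hit + strlen(sep);
}
```

This is the shape of splitting an HTTP header line into name and value on `": "`.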
diff --git a/src/utils.c b/src/utils.c
new file mode 100644
index 0000000..07f8a5e
--- /dev/null
+++ b/src/utils.c
@@ -0,0 +1,132 @@
+#include "curl-common.h"
+
+CURL* get_handle(SEXP ptr){
+  return get_ref(ptr)->handle;
+}
+
+reference* get_ref(SEXP ptr){
+  if(TYPEOF(ptr) != EXTPTRSXP || !Rf_inherits(ptr, "curl_handle"))
+    Rf_error("handle is not a curl_handle()");
+  if(!R_ExternalPtrAddr(ptr))
+    error("handle is dead");
+  reference *ref = (reference*) R_ExternalPtrAddr(ptr);
+  return ref;
+}
+
+void set_form(reference *ref, struct curl_httppost* newform){
+  if(ref->form)
+    curl_formfree(ref->form);
+  ref->form = newform;
+  if(newform){
+    assert(curl_easy_setopt(ref->handle, CURLOPT_HTTPPOST, ref->form));
+  } else {
+    //CURLOPT_HTTPPOST has a bug with empty forms; post an empty body instead:
+    assert(curl_easy_setopt(ref->handle, CURLOPT_POSTFIELDS, ""));
+  }
+}
+
+void set_headers(reference *ref, struct curl_slist *newheaders){
+  if(ref->headers)
+    curl_slist_free_all(ref->headers);
+  ref->headers = newheaders;
+  assert(curl_easy_setopt(ref->handle, CURLOPT_HTTPHEADER, ref->headers));
+}
+
+void reset_resheaders(reference *ref){
+  if(ref->resheaders.buf)
+    free(ref->resheaders.buf);
+  ref->resheaders.buf = NULL;
+  ref->resheaders.size = 0;
+}
+
+void reset_errbuf(reference *ref){
+  memset(ref->errbuf, 0, CURL_ERROR_SIZE);
+}
+
+void assert(CURLcode res){
+  if(res != CURLE_OK)
+    error("%s", curl_easy_strerror(res));
+}
+
+void assert_status(CURLcode res, reference *ref){
+  if(res == CURLE_OPERATION_TIMEDOUT)
+    Rf_error("%s: %s", curl_easy_strerror(res), ref->errbuf);
+  if(res != CURLE_OK)
+    Rf_error("%s", strlen(ref->errbuf) ? ref->errbuf : curl_easy_strerror(res));
+}
+
+void massert(CURLMcode res){
+  if(res != CURLM_OK)
+    error("%s", curl_multi_strerror(res));
+}
+
+void stop_for_status(CURL *http_handle){
+  long status = 0;
+  assert(curl_easy_getinfo(http_handle, CURLINFO_RESPONSE_CODE, &status));
+
+  /* check http status code. Not sure what this does for ftp. */
+  if(status >= 300)
+    error("HTTP error %ld.", status);
+}
+
+/* make sure to call curl_slist_free_all on this object */
+struct curl_slist* vec_to_slist(SEXP vec){
+  if(!isString(vec))
+    error("vec is not a character vector");
+  struct curl_slist *slist = NULL;
+  for(int i = 0; i < length(vec); i++){
+    slist = curl_slist_append(slist, CHAR(STRING_ELT(vec, i)));
+  }
+  return slist;
+}
+
+SEXP slist_to_vec(struct curl_slist *slist){
+  /* linked list of strings */
+  struct curl_slist *cursor = slist;
+
+  /* count slist */
+  int n = 0;
+  while (cursor) {
+    n++;
+    cursor = cursor->next;
+  }
+
+  SEXP out = PROTECT(allocVector(STRSXP, n));
+  cursor = slist;
+  for(int i = 0; i < n; i++){
+    SET_STRING_ELT(out, i, mkChar(cursor->data));
+    cursor = cursor->next;
+  }
+  UNPROTECT(1);
+  return out;
+}
+
+size_t push_disk(void* contents, size_t sz, size_t nmemb, FILE *ctx) {
+  //if (pending_interrupt())
+  //  return 0;
+  return fwrite(contents, sz, nmemb, ctx);
+}
+
+size_t append_buffer(void *contents, size_t sz, size_t nmemb, void *ctx) {
+  //if (pending_interrupt())
+  //  return 0;
+
+  /* avoids compiler warning on windows */
+  size_t realsize = sz * nmemb;
+  memory *mem = (memory*) ctx;
+
+  /* realloc is slow on windows, therefore increase buffer to nearest 2^n */
+  #ifdef _WIN32
+    mem->buf = realloc(mem->buf, exp2(ceil(log2(mem->size + realsize))));
+  #else
+    mem->buf = realloc(mem->buf, mem->size + realsize);
+  #endif
+
+  if (!mem->buf)
+    return 0;
+
+  /* append data and increment size */
+  memcpy(&(mem->buf[mem->size]), contents, realsize);
+  mem->size += realsize;
+  return realsize;
+}
diff --git a/src/version.c b/src/version.c
new file mode 100644
index 0000000..dc5ea1f
--- /dev/null
+++ b/src/version.c
@@ -0,0 +1,64 @@
+#include <curl/curl.h>
+#include <Rinternals.h>
+
+#define make_string(x) ((x) ? Rf_mkString(x) : ScalarString(NA_STRING))
+
+SEXP R_curl_version() {
+  /* retrieve info from curl */
+  const curl_version_info_data *data = curl_version_info(CURLVERSION_NOW);
+
+  /* put stuff in a list */
+  SEXP list = PROTECT(allocVector(VECSXP, 10));
+  SET_VECTOR_ELT(list, 0, make_string(data->version));
+  SET_VECTOR_ELT(list, 1, make_string(data->ssl_version));
+  SET_VECTOR_ELT(list, 2, make_string(data->libz_version));
+  SET_VECTOR_ELT(list, 3, make_string(data->libssh_version));
+  SET_VECTOR_ELT(list, 4, make_string(data->libidn));
+  SET_VECTOR_ELT(list, 5, make_string(data->host));
+
+  /* create vector of protocols */
+  int len = 0;
+  const char *const * temp = data->protocols;
+  while(*temp++) len++;
+  SEXP protocols = PROTECT(allocVector(STRSXP, len));
+  for (int i = 0; i < len; i++){
+    SET_STRING_ELT(protocols, i, mkChar(*(data->protocols + i)));
+  }
+  SET_VECTOR_ELT(list, 6, protocols);
+
+  /* add list names */
+  SEXP names = PROTECT(allocVector(STRSXP, 10));
+  SET_STRING_ELT(names, 0, mkChar("version"));
+  SET_STRING_ELT(names, 1, mkChar("ssl_version"));
+  SET_STRING_ELT(names, 2, mkChar("libz_version"));
+  SET_STRING_ELT(names, 3, mkChar("libssh_version"));
+  SET_STRING_ELT(names, 4, mkChar("libidn_version"));
+  SET_STRING_ELT(names, 5, mkChar("host"));
+  SET_STRING_ELT(names, 6, mkChar("protocols"));
+  SET_STRING_ELT(names, 7, mkChar("ipv6"));
+  SET_STRING_ELT(names, 8, mkChar("http2"));
+  SET_STRING_ELT(names, 9, mkChar("idn"));
+  setAttrib(list, R_NamesSymbol, names);
+
+  #ifdef CURL_VERSION_IPV6
+  SET_VECTOR_ELT(list, 7, ScalarLogical(data->features & CURL_VERSION_IPV6));
+  #else
+  SET_VECTOR_ELT(list, 7, ScalarLogical(0));
+  #endif
+
+  #ifdef CURL_VERSION_HTTP2
+  SET_VECTOR_ELT(list, 8, ScalarLogical(data->features & CURL_VERSION_HTTP2));
+  #else
+  SET_VECTOR_ELT(list, 8, ScalarLogical(0));
+  #endif
+
+  #ifdef CURL_VERSION_IDN
+    SET_VECTOR_ELT(list, 9, ScalarLogical(data->features & CURL_VERSION_IDN));
+  #else
+    SET_VECTOR_ELT(list, 9, ScalarLogical(0));
+  #endif
+
+  /* return */
+  UNPROTECT(3);
+  return list;
+}
diff --git a/src/winhttp32.def.in b/src/winhttp32.def.in
new file mode 100644
index 0000000..c33acd1
--- /dev/null
+++ b/src/winhttp32.def.in
@@ -0,0 +1,37 @@
+;
+; Definition file of WINHTTP.dll
+; Automatic generated by gendef
+; written by Kai Tietz 2008
+;
+LIBRARY "WINHTTP.dll"
+EXPORTS
+Private1 at 20
+SvchostPushServiceGlobals at 4
+WinHttpAddRequestHeaders at 16
+WinHttpAutoProxySvcMain at 8
+WinHttpCheckPlatform at 0
+WinHttpCloseHandle at 4
+WinHttpConnect at 16
+WinHttpCrackUrl at 16
+WinHttpCreateUrl at 16
+WinHttpDetectAutoProxyConfigUrl at 8
+WinHttpGetDefaultProxyConfiguration at 4
+WinHttpGetIEProxyConfigForCurrentUser at 4
+WinHttpGetProxyForUrl at 16
+WinHttpOpen at 20
+WinHttpOpenRequest at 28
+WinHttpQueryAuthSchemes at 16
+WinHttpQueryDataAvailable at 8
+WinHttpQueryHeaders at 24
+WinHttpQueryOption at 16
+WinHttpReadData at 16
+WinHttpReceiveResponse at 8
+WinHttpSendRequest at 28
+WinHttpSetCredentials at 24
+WinHttpSetDefaultProxyConfiguration at 4
+WinHttpSetOption at 16
+WinHttpSetStatusCallback at 16
+WinHttpSetTimeouts at 20
+WinHttpTimeFromSystemTime at 8
+WinHttpTimeToSystemTime at 8
+WinHttpWriteData at 16
diff --git a/src/winhttp64.def.in b/src/winhttp64.def.in
new file mode 100644
index 0000000..9ec0bac
--- /dev/null
+++ b/src/winhttp64.def.in
@@ -0,0 +1,37 @@
+;
+; Definition file of WINHTTP.dll
+; Automatic generated by gendef
+; written by Kai Tietz 2008
+;
+LIBRARY "WINHTTP.dll"
+EXPORTS
+Private1
+SvchostPushServiceGlobals
+WinHttpAddRequestHeaders
+WinHttpAutoProxySvcMain
+WinHttpCheckPlatform
+WinHttpCloseHandle
+WinHttpConnect
+WinHttpCrackUrl
+WinHttpCreateUrl
+WinHttpDetectAutoProxyConfigUrl
+WinHttpGetDefaultProxyConfiguration
+WinHttpGetIEProxyConfigForCurrentUser
+WinHttpGetProxyForUrl
+WinHttpOpen
+WinHttpOpenRequest
+WinHttpQueryAuthSchemes
+WinHttpQueryDataAvailable
+WinHttpQueryHeaders
+WinHttpQueryOption
+WinHttpReadData
+WinHttpReceiveResponse
+WinHttpSendRequest
+WinHttpSetCredentials
+WinHttpSetDefaultProxyConfiguration
+WinHttpSetOption
+WinHttpSetStatusCallback
+WinHttpSetTimeouts
+WinHttpTimeFromSystemTime
+WinHttpTimeToSystemTime
+WinHttpWriteData
diff --git a/src/winidn.c b/src/winidn.c
new file mode 100644
index 0000000..5a85d80
--- /dev/null
+++ b/src/winidn.c
@@ -0,0 +1,70 @@
+/* IdnToAscii() requires at least vista to build */
+#define _WIN32_WINNT 0x0600
+#define WINVER 0x0600
+#define IDN_MAX_LENGTH 255
+
+#ifdef _WIN32
+#include <Windows.h>
+
+wchar_t * jeroen_convert_UTF8_to_wchar(const char *str_utf8){
+  wchar_t *str_w = NULL;
+
+  if(str_utf8) {
+    int str_w_len = MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS,
+                                        str_utf8, -1, NULL, 0);
+    if(str_w_len > 0) {
+      str_w = malloc(str_w_len * sizeof(wchar_t));
+      if(str_w) {
+        if(MultiByteToWideChar(CP_UTF8, 0, str_utf8, -1, str_w,
+                               str_w_len) == 0) {
+          free(str_w);
+          return NULL;
+        }
+      }
+    }
+  }
+
+  return str_w;
+}
+
+char *jeroen_convert_wchar_to_UTF8(const wchar_t *str_w){
+  char *str_utf8 = NULL;
+
+  if(str_w) {
+    int str_utf8_len = WideCharToMultiByte(CP_UTF8, 0, str_w, -1, NULL,
+                                           0, NULL, NULL);
+    if(str_utf8_len > 0) {
+      str_utf8 = malloc(str_utf8_len * sizeof(wchar_t));
+      if(str_utf8) {
+        if(WideCharToMultiByte(CP_UTF8, 0, str_w, -1, str_utf8, str_utf8_len,
+                               NULL, FALSE) == 0) {
+          free(str_utf8);
+          return NULL;
+        }
+      }
+    }
+  }
+
+  return str_utf8;
+}
+
+int jeroen_win32_idn_to_ascii(const char *in, char **out){
+  int success = FALSE;
+  wchar_t *in_w = jeroen_convert_UTF8_to_wchar(in);
+  if(in_w) {
+    wchar_t punycode[IDN_MAX_LENGTH];
+    int chars = IdnToAscii(0, in_w, -1, punycode, IDN_MAX_LENGTH);
+    free(in_w);
+    if(chars) {
+      *out = jeroen_convert_wchar_to_UTF8(punycode);
+      if(*out)
+        success = TRUE;
+    }
+  }
+
+  return success;
+}
+
+#else
+void placeholder_to_avoid_stupid_warning(){}
+#endif
diff --git a/tests/testthat.R b/tests/testthat.R
new file mode 100644
index 0000000..86da1a1
--- /dev/null
+++ b/tests/testthat.R
@@ -0,0 +1,4 @@
+library(testthat)
+library(curl)
+
+test_check("curl")
diff --git a/tests/testthat/helper-version.R b/tests/testthat/helper-version.R
new file mode 100644
index 0000000..6438a8b
--- /dev/null
+++ b/tests/testthat/helper-version.R
@@ -0,0 +1,36 @@
+cat("This is libcurl version", curl_version()$version, "with", curl_version()$ssl_version, "\n")
+
+# Try to load test server
+find_test_server <- function(){
+  h <- curl::new_handle(timeout = 10, failonerror = TRUE)
+
+  # Try to download latest test-server list
+  servers <- tryCatch({
+    req <- curl_fetch_memory("http://jeroen.github.io/curl/servers", handle = h)
+    strsplit(rawToChar(req$content), "\n", fixed = TRUE)[[1]]
+  }, error = function(e){
+    message("Failed to download server list: ", e$message)
+    c("https://eu.httpbin.org", "https://httpbin.org", "http://httpbin.org")
+  })
+
+
+  # Try each test-server in the list
+  for(host in servers){
+    tryCatch({
+      url <- paste0(host, "/get")
+      req <- curl_fetch_memory(url, handle = h)
+      return(host)
+    }, error = function(e){
+      message(paste0("Not using ", host, ": ", e$message))
+    })
+  }
+
+  stop("All testing servers seem unavailable. No internet connection?")
+}
+
+testserver <- find_test_server()
+cat("Using test server:", testserver, "\n")
+
+httpbin <- function(path){
+  paste0(testserver, "/", sub("^/", "", path))
+}
diff --git a/tests/testthat/test-auth.R b/tests/testthat/test-auth.R
new file mode 100644
index 0000000..1fc0629
--- /dev/null
+++ b/tests/testthat/test-auth.R
@@ -0,0 +1,30 @@
+# Some of these unit tests fail if you reuse the handle. The cause is unknown; possibly cache related.
+context("Authentication")
+
+test_that("Permission denied", {
+  h <- new_handle()
+  expect_equal(curl_fetch_memory(httpbin("basic-auth/jerry/secret"), handle = h)$status, 401)
+  expect_equal(curl_fetch_memory(httpbin("hidden-basic-auth/jerry/secret"), handle = h)$status, 404)
+  expect_equal(curl_fetch_memory(httpbin("digest-auth/auth/jerry/secret"), handle = h)$status, 401)
+})
+
+test_that("Auth userpwd", {
+  h <- new_handle()
+  handle_setopt(h, userpwd = "jerry:secret")
+  expect_equal(curl_fetch_memory(httpbin("basic-auth/jerry/secret"), handle = h)$status, 200)
+  expect_equal(curl_fetch_memory(httpbin("hidden-basic-auth/jerry/secret"), handle = h)$status, 200)
+  expect_equal(curl_fetch_memory(httpbin("digest-auth/auth/jerry/secret"), handle = h)$status, 200)
+})
+
+test_that("Auth username and password", {
+  h <- new_handle()
+  handle_setopt(h, username = "jerry", password = "secret")
+  expect_equal(curl_fetch_memory(httpbin("basic-auth/jerry/secret"), handle = h)$status, 200)
+  expect_equal(curl_fetch_memory(httpbin("hidden-basic-auth/jerry/secret"), handle = h)$status, 200)
+  expect_equal(curl_fetch_memory(httpbin("digest-auth/auth/jerry/secret"), handle = h)$status, 200)
+})
+
+test_that("GC works", {
+  gc()
+  expect_equal(total_handles(), 0L)
+})
diff --git a/tests/testthat/test-blockopen.R b/tests/testthat/test-blockopen.R
new file mode 100644
index 0000000..1e1da77
--- /dev/null
+++ b/tests/testthat/test-blockopen.R
@@ -0,0 +1,75 @@
+context("Non-blocking opening connection")
+
+read_text <- function(x){
+  while (isIncomplete(x)) {
+    Sys.sleep(0.1)
+    txt <- readLines(x)
+    if(length(txt))
+      return(txt)
+  }
+}
+
+read_bin <- function(x){
+  while (isIncomplete(x)) {
+    Sys.sleep(0.1)
+    bin <- readBin(x, raw(), 100)
+    if(length(bin))
+      return(bin)
+  }
+}
+
+expect_immediate <- function(...){
+  expect_true(system.time(...)['elapsed'] < 0.5)
+}
+
+test_that("Non-blocking open does not block", {
+
+  # Get a regular string
+  con <- curl(httpbin("delay/1"))
+  expect_immediate(open(con, "rs", blocking = FALSE))
+  expect_immediate(readLines(con))
+  close(con)
+})
+
+test_that("Error handling for non-blocking open", {
+
+  # Get a regular string
+  con <- curl(httpbin("get"))
+  expect_immediate(open(con, "rs", blocking = FALSE))
+  expect_is(read_text(con), "character")
+  close(con)
+
+  # Test error during read text
+  h <- new_handle()
+  con <- curl(httpbin("status/418"), handle = h)
+  expect_immediate(open(con, "rs", blocking = FALSE))
+  expect_is(read_text(con), "character")
+  expect_equal(handle_data(h)$status_code, 418)
+  close(con)
+
+  # Test error during read binary
+  h <- new_handle()
+  con <- curl(httpbin("status/418"), handle = h)
+  expect_immediate(open(con, "rbs", blocking = FALSE))
+  expect_is(read_bin(con), "raw")
+  expect_equal(handle_data(h)$status_code, 418)
+  close(con)
+
+  # DNS error
+  con <- curl("http://this.is.invalid.co.za")
+  expect_immediate(open(con, "rs", blocking = FALSE))
+  expect_error(read_text(con), "resolve")
+  close(con)
+
+  # Non existing host
+  con <- curl("http://240.0.0.1")
+  expect_immediate(open(con, "rs", blocking = FALSE))
+  expect_error(read_text(con))
+  close(con)
+
+  # Invalid port
+  con <- curl("http://8.8.8.8:666")
+  expect_immediate(open(con, "rs", blocking = FALSE))
+  expect_error(read_text(con))
+  close(con)
+})
diff --git a/tests/testthat/test-certificates.R b/tests/testthat/test-certificates.R
new file mode 100644
index 0000000..7b2a92c
--- /dev/null
+++ b/tests/testthat/test-certificates.R
@@ -0,0 +1,20 @@
+context("Certificate validation")
+
+test_that("CloudFlare / LetsEncrypt certs", {
+  if(is.numeric(get_windows_build()))
+    skip_if_not(get_windows_build() >= 7600, "TLS 1.2 requires at least Windows 7 / Windows Server 2008 R2")
+  expect_equal(curl_fetch_memory('https://www.opencpu.org')$status_code, 200)
+  expect_equal(curl_fetch_memory('https://rud.is')$status_code, 200)
+})
+
+test_that("Invalid domain raises an error", {
+  ipaddr <- nslookup("www.google.com", ipv4_only = TRUE)
+  fake_url <- paste0("https://", ipaddr)
+  expect_error(curl_fetch_memory(fake_url), "certificate")
+  expect_is(curl_fetch_memory(fake_url, handle = new_handle(ssl_verifyhost = FALSE))$status, "integer")
+})
+
+test_that("GC works", {
+  gc()
+  expect_equal(total_handles(), 0L)
+})
diff --git a/tests/testthat/test-connection.R b/tests/testthat/test-connection.R
new file mode 100644
index 0000000..d52fc03
--- /dev/null
+++ b/tests/testthat/test-connection.R
@@ -0,0 +1,37 @@
+context("Connections")
+
+h <- new_handle()
+
+test_that("Compression and destroying connection", {
+  con <- curl(httpbin("deflate"), handle = h)
+  expect_equal(jsonlite::fromJSON(readLines(con))$deflate, TRUE)
+  expect_false(isOpen(con))
+  close(con) #destroy
+
+  expect_equal(jsonlite::fromJSON(rawToChar(curl_fetch_memory(httpbin("deflate"), handle = h)$content))$deflate, TRUE)
+
+  con <- curl(httpbin("gzip"), handle = h)
+  expect_equal(jsonlite::fromJSON(readLines(con))$gzipped, TRUE)
+  expect_false(isOpen(con))
+  close(con) #destroy
+
+  expect_equal(jsonlite::fromJSON(rawToChar(curl_fetch_memory(httpbin("gzip"), handle = h)$content))$gzipped, TRUE)
+})
+
+test_that("Connection interface", {
+  # note: jsonlite automatically destroys auto-opened connection
+  con <- curl(httpbin("get?test=blabla"), handle = h)
+  expect_equal(jsonlite::fromJSON(con)$args$test, "blabla")
+
+  # test error
+  con <- curl(httpbin("status/418"))
+  expect_error(readLines(con))
+  close(con) #destroy
+
+  # test not error
+  con <- curl(httpbin("status/418"), handle = h)
+  open(con, "rf")
+  expect_is(readLines(con), "character")
+  expect_equal(handle_data(h)$status_code, 418L)
+  close(con) #destroy
+})
diff --git a/tests/testthat/test-cookies.R b/tests/testthat/test-cookies.R
new file mode 100644
index 0000000..077afc7
--- /dev/null
+++ b/tests/testthat/test-cookies.R
@@ -0,0 +1,51 @@
+context("Cookies")
+
+h <- new_handle()
+
+test_that("No cookies", {
+  cookies <- handle_cookies(h);
+  expect_is(cookies, "data.frame")
+  expect_equal(nrow(cookies), 0)
+})
+
+test_that("Add some cookies", {
+  req <- curl_fetch_memory(httpbin("cookies/set?foo=123&bar=ftw"), handle = h)
+  cookies <- handle_cookies(h);
+  expect_is(cookies, "data.frame")
+  expect_equal(nrow(cookies), 2)
+  expect_equal(sort(cookies$name), c("bar", "foo"))
+  expect_equal(sort(cookies$value), c("123","ftw"))
+  expect_true(all(cookies$expiration == Inf))
+})
+
+test_that("Cookie with connection", {
+  con <- curl(httpbin("cookies"), handle = h)
+  expect_equal(jsonlite::fromJSON(con)$cookies$foo, "123")
+})
+
+test_that("Delete a cookie", {
+  req <- curl_fetch_memory(httpbin("cookies/delete?foo"), handle = h)
+  cookies <- handle_cookies(h)
+  foo <- subset(cookies, name == "foo")
+  bar <- subset(cookies, name == "bar")
+  expect_true(foo$expiration < Sys.time())
+  expect_true(bar$expiration > Sys.time())
+  expect_true(is.na(foo$value))
+  expect_equal(bar$value, "ftw")
+})
+
+test_that("Overwrite a cookie", {
+  req <- curl_fetch_memory(httpbin("cookies/set?foo=888&bar=999"), handle = h)
+  cookies <- handle_cookies(h)
+  foo <- subset(cookies, name == "foo")
+  bar <- subset(cookies, name == "bar")
+  expect_equal(foo$value, "888")
+  expect_equal(bar$value, "999")
+  expect_true(all(cookies$expiration == Inf))
+})
+
+rm(h)
+test_that("GC works", {
+  gc()
+  expect_equal(total_handles(), 0L)
+})
diff --git a/tests/testthat/test-escape.R b/tests/testthat/test-escape.R
new file mode 100644
index 0000000..baebfaf
--- /dev/null
+++ b/tests/testthat/test-escape.R
@@ -0,0 +1,35 @@
+context("URL escaping")
+
+test_that("basic encoding", {
+  expect_equal("a%2Fb%2Fc", curl_escape("a/b/c"))
+  expect_equal("a = b + c", curl_unescape("a%20%3D%20b%20%2B%20c"))
+})
+
+test_that("curl_{,un}escape handle NULL", {
+  escaped_null <- curl_escape(NULL)
+  expect_equal(0, length(escaped_null))
+  expect_equal("character", class(escaped_null))
+  unescaped_null <- curl_unescape(NULL)
+  expect_equal(0, length(unescaped_null))
+  expect_equal("character", class(unescaped_null))
+})
+
+test_that("curl_escape and curl_unescape are inverses", {
+  mu <- "\u00b5"
+  expect_equal(mu, curl_unescape(curl_escape(mu)))
+  escaped_mu <- curl_escape(mu)
+  expect_equal(escaped_mu, curl_escape(curl_unescape(escaped_mu)))
+})
+
+test_that("Test character encoding", {
+  strings <- c(
+    "Zürich",
+    "北京填鴨们",
+    "ผัดไทย",
+    "寿司",
+    rawToChar(as.raw(1:40)),
+    "?foo&bar=baz!bla\n"
+  )
+  strings <- enc2utf8(strings)
+  expect_equal(strings, curl_unescape(curl_escape(strings)))
+})
diff --git a/tests/testthat/test-gc.R b/tests/testthat/test-gc.R
new file mode 100644
index 0000000..5eb69f4
--- /dev/null
+++ b/tests/testthat/test-gc.R
@@ -0,0 +1,60 @@
+context("Garbage collection")
+
+h1 <- new_handle()
+test <- function(){
+  pool <- new_pool()
+  h2 <- new_handle()
+  cb <- function(...){}
+  curl_fetch_multi('http://jeroen.github.io/images/frink.png', pool = pool, done = cb, handle = h1)
+  curl_fetch_multi('http://jeroen.github.io/images/frink.png', pool = pool, done = cb, handle = h2)
+  return(pool)
+}
+
+test_that("Garbage collection works", {
+  # Should clean 0 handles
+  pool <- test()
+  expect_equal(total_handles(), 2L)
+  multi_run(pool = pool)
+  gc()
+  expect_equal(total_handles(), 1L)
+})
+
+rm(h1)
+
+test_that("Garbage collection works", {
+  gc()
+  expect_equal(total_handles(), 0L)
+})
+
+# Test circular GC problems
+test2 <- function(){
+  pool <- new_pool()
+  cb <- function(...){}
+  curl_fetch_multi('http://jeroen.github.io/images/frink.png', pool = pool, done = cb)
+  curl_fetch_multi('http://jeroen.github.io/images/frink.png', pool = pool, done = cb)
+}
+
+test_that("Clean up pending requests", {
+  test2()
+  gc()
+  expect_equal(total_handles(), 0L)
+})
+
+# Test3 circular GC problems
+test3 <- function(){
+  pool <- new_pool()
+  curl_fetch_multi('https://cran.r-project.org/src/contrib/stringi_1.1.1.tar.gz', pool = pool)
+  curl_fetch_multi('https://cran.r-project.org/src/contrib/stringi_1.1.1.tar.gz', pool = pool)
+  return(pool)
+}
+
+test_that("Clean up hanging requests", {
+  pool <- test3()
+  expect_equal(total_handles(), 2L)
+  multi_run(0, pool = pool)
+  rm(pool)
+  gc()
+  expect_equal(total_handles(), 0L)
+})
+
+
diff --git a/tests/testthat/test-handle.R b/tests/testthat/test-handle.R
new file mode 100644
index 0000000..7c0cb04
--- /dev/null
+++ b/tests/testthat/test-handle.R
@@ -0,0 +1,94 @@
+context("Reusable handle")
+
+h <- new_handle()
+
+test_that("Perform", {
+  expect_equal(curl_fetch_memory(httpbin("get"), handle = h)$status, 200)
+  expect_equal(curl_fetch_memory(httpbin("cookies"), handle = h)$status, 200)
+  expect_equal(curl_fetch_memory(httpbin("status/418"), handle = h)$status, 418)
+})
+
+test_that("Redirect", {
+  expect_equal(curl_fetch_memory(httpbin("redirect/6"), handle = h)$status, 200)
+  expect_equal(curl_fetch_memory(httpbin("relative-redirect/6"), handle = h)$status, 200)
+  expect_equal(curl_fetch_memory(httpbin("absolute-redirect/6"), handle = h)$status, 200)
+})
+
+test_that("Cookies", {
+  expect_equal(curl_fetch_memory(httpbin("cookies/set?foo=123&bar=456"), handle = h)$status, 200)
+  expect_equal(jsonlite::fromJSON(rawToChar(curl_fetch_memory(httpbin("cookies"), handle = h)$content))$cookies$bar, "456")
+  expect_equal(curl_fetch_memory(httpbin("cookies/delete?bar"), handle = h)$status, 200)
+  expect_equal(jsonlite::fromJSON(rawToChar(curl_fetch_memory(httpbin("cookies"), handle = h)$content))$cookies$bar, NULL)
+})
+
+test_that("Keep-Alive", {
+  # A connection to httpbin was already established in previous tests.
+  # Subsequent requests should reuse that connection.
+  # Capture the verbose curl output to look for the connection reuse message
+  h <- handle_setopt(h, verbose=TRUE,
+    debugfunction=function(type, msg) cat(readBin(msg, character())))
+  req <- capture.output(curl_fetch_memory(httpbin("get"), handle=h))
+  expect_true(any(grepl("existing connection", req)))
+  handle_setopt(h, verbose=FALSE)
+})
+
+test_that("Opening and closing a connection",{
+  # Create connection
+  con <- curl(httpbin("cookies"), handle = h)
+
+  # Handle is still usable
+  expect_equal(curl_fetch_memory(httpbin("get"), handle = h)$status, 200)
+
+  # Opening the connection locks the handle
+  open(con)
+
+  # Recent versions of libcurl will raise an error
+  #if(compareVersion(curl_version()$version, "7.37") > 0){
+  #  expect_error(curl_fetch_memory(httpbin("get"), handle = h))
+  #}
+
+  expect_equal(jsonlite::fromJSON(readLines(con))$cookies$foo, "123")
+
+  # After closing it is free again
+  close(con)
+  expect_equal(curl_fetch_memory(httpbin("get"), handle = h)$status, 200)
+
+  # Removing the connection also unlocks the handle
+  con <- curl(httpbin("cookies"), "rb", handle = h)
+
+  # Recent versions of libcurl will raise an error
+  #if(compareVersion(curl_version()$version, "7.37") > 0){
+  #  expect_error(curl_fetch_memory(httpbin("get"), handle = h))
+  #}
+  close(con)
+  rm(con)
+  expect_equal(curl_fetch_memory(httpbin("get"), handle = h)$status, 200)
+})
+
+test_that("Downloading to a file", {
+  tmp <- tempfile()
+  expect_error(curl_download(httpbin("status/418"), tmp, handle = h))
+
+  curl_download(httpbin("get?test=boeboe"), tmp, handle = h)
+  expect_equal(jsonlite::fromJSON(tmp)$args$test, "boeboe")
+
+  curl_download(httpbin("cookies"), tmp, handle = h)
+  expect_equal(jsonlite::fromJSON(tmp)$cookies$foo, "123")
+})
+
+test_that("handle_setopt validates options", {
+  h <- new_handle()
+  expect_identical(class(h), "curl_handle")
+  expect_error(handle_setopt(h, invalid.option="foo"),
+    "Unknown option: invalid.option")
+  expect_error(handle_setopt(h, badopt1="foo", badopt2="bar"),
+    "Unknown options: badopt1, badopt2")
+  expect_identical(class(handle_setopt(h, username="foo")),
+    "curl_handle") ## i.e. that's a valid option, so it succeeds
+})
+
+rm(h)
+test_that("GC works", {
+  gc()
+  expect_equal(total_handles(), 0L)
+})
diff --git a/tests/testthat/test-idn.R b/tests/testthat/test-idn.R
new file mode 100644
index 0000000..50942e2
--- /dev/null
+++ b/tests/testthat/test-idn.R
@@ -0,0 +1,29 @@
+context("IDN")
+
+test_that("IDN domain names",{
+  # OSX does not support IDN by default :(
+  skip_if_not(curl_version()$idn, "libcurl does not have libidn")
+
+  malmo <- "http://www.malm\u00F6.se"
+  expect_is(curl::curl_fetch_memory(enc2utf8(malmo))$status_code, "integer")
+  expect_is(curl::curl_fetch_memory(enc2native(malmo))$status_code, "integer")
+
+  con <- curl::curl(enc2utf8(malmo))
+  expect_is(readLines(con, warn = FALSE), "character")
+  close(con)
+
+  con <- curl::curl(enc2native(malmo))
+  expect_is(readLines(con, warn = FALSE), "character")
+  close(con)
+
+  kremlin <- "http://\u043F\u0440\u0435\u0437\u0438\u0434\u0435\u043D\u0442.\u0440\u0444"
+  expect_is(curl::curl_fetch_memory(kremlin)$status_code, "integer")
+
+  con <- curl::curl(kremlin)
+  expect_is(readLines(con, warn = FALSE), "character")
+  close(con)
+
+  # Something random that doesn't exist
+  wrong <- "http://\u043F\u0840\u0435\u0537\u0438\u0433\u0435\u043F\u0442.\u0440\u0444"
+  expect_error(curl::curl_fetch_memory(enc2utf8(wrong)), 'resolve')
+})
diff --git a/tests/testthat/test-multi.R b/tests/testthat/test-multi.R
new file mode 100644
index 0000000..37e7a4e
--- /dev/null
+++ b/tests/testthat/test-multi.R
@@ -0,0 +1,104 @@
+context("Multi handle")
+
+test_that("Max connections works", {
+  skip_if_not(curl_version()$version >= as.numeric_version("7.30"),
+    "libcurl does not support host_connections")
+  multi_set(host_con = 2, multiplex = FALSE)
+  for(i in 1:3){
+    multi_add(new_handle(url = httpbin("delay/2")))
+  }
+  out <- multi_run(timeout = 3.5)
+  expect_equal(out, list(success = 2, error = 0, pending = 1))
+  out <- multi_run(timeout = 2)
+  expect_equal(out, list(success = 1, error = 0, pending = 0))
+  out <- multi_run()
+  expect_equal(out, list(success = 0, error = 0, pending = 0))
+})
+
+test_that("Max connections reset", {
+  multi_set(host_con = 6, multiplex = TRUE)
+  for(i in 1:3){
+    multi_add(new_handle(url = httpbin("delay/2")))
+  }
+  out <- multi_run(timeout = 4)
+  expect_equal(out, list(success = 3, error = 0, pending = 0))
+})
+
+test_that("Timeout works", {
+  h1 <- new_handle(url = httpbin("delay/3"))
+  h2 <- new_handle(url = httpbin("post"), postfields = "bla bla")
+  h3 <- new_handle(url = "https://urldoesnotexist.xyz", connecttimeout = 1)
+  h4 <- new_handle(url = "http://localhost:14", connecttimeout = 1)
+  m <- new_pool()
+  multi_add(h1, pool = m)
+  multi_add(h2, pool = m)
+  multi_add(h3, pool = m)
+  multi_add(h4, pool = m)
+  rm(h1, h2, h3, h4)
+  gc()
+  out <- multi_run(timeout = 2, pool = m)
+  expect_equal(out, list(success = 1, error = 2, pending = 1))
+  out <- multi_run(timeout = 0, pool = m)
+  expect_equal(out, list(success = 0, error = 0, pending = 1))
+  out <- multi_run(pool = m)
+  expect_equal(out, list(success = 1, error = 0, pending = 0))
+})
+
+test_that("Callbacks work", {
+  total = 0;
+  h1 <- new_handle(url = httpbin("get"))
+  multi_add(h1, done = function(...){
+    total <<- total + 1
+    multi_add(h1, done = function(...){
+      total <<- total + 1
+    })
+  })
+  gc() # test that callback functions are protected
+  out <- multi_run()
+  expect_equal(out, list(success=2, error=0, pending=0))
+  expect_equal(total, 2)
+})
+
+test_that("Multi cancel works", {
+  expect_length(multi_list(), 0)
+  h1 <- new_handle(url = httpbin("get"))
+  multi_add(h1)
+  expect_length(multi_list(), 1)
+  expect_error(multi_add(h1), "locked")
+  expect_equal(multi_run(timeout = 0), list(success = 0, error = 0, pending = 1))
+  expect_length(multi_list(), 1)
+  expect_is(multi_cancel(h1), "curl_handle")
+  expect_length(multi_list(), 0)
+  expect_is(multi_add(h1), "curl_handle")
+  expect_length(multi_list(), 1)
+  expect_equal(multi_run(), list(success = 1, error = 0, pending = 0))
+  expect_length(multi_list(), 0)
+})
+
+test_that("Errors in Callbacks", {
+  pool <- new_pool()
+  cb <- function(req){
+    stop("testerror in callback!")
+  }
+  curl_fetch_multi(httpbin("get"), pool = pool, done = cb)
+  curl_fetch_multi(httpbin("status/404"), pool = pool, done = cb)
+  curl_fetch_multi("https://urldoesnotexist.xyz", pool = pool, fail = cb)
+  gc()
+  expect_equal(total_handles(), 3)
+  expect_error(multi_run(pool = pool), "testerror")
+  gc()
+  expect_equal(total_handles(), 2)
+  expect_error(multi_run(pool = pool), "testerror")
+  gc()
+  expect_equal(total_handles(), 1)
+  expect_error(multi_run(pool = pool), "testerror")
+  gc()
+  expect_equal(total_handles(), 0)
+  expect_equal(multi_run(pool = pool), list(success = 0, error = 0, pending = 0))
+})
+
+test_that("GC works", {
+  gc()
+  expect_equal(total_handles(), 0L)
+})
+
diff --git a/tests/testthat/test-nonblocking.R b/tests/testthat/test-nonblocking.R
new file mode 100644
index 0000000..9512ac1
--- /dev/null
+++ b/tests/testthat/test-nonblocking.R
@@ -0,0 +1,64 @@
+context("Nonblocking connection")
+
+test_that("Non blocking connections ", {
+  h <- new_handle()
+  con <- curl(httpbin("drip?duration=3&numbytes=50&code=200"), handle = h)
+  expect_equal(handle_data(h)$status_code, 0L)
+  open(con, "rb", blocking = FALSE)
+  expect_equal(handle_data(h)$status_code, 200L)
+  n <- 0
+  while(isIncomplete(con)){
+    Sys.sleep(0.01)
+    buf <- readBin(con, raw(), 5)
+    n <- n + length(buf)
+  }
+  expect_equal(n, 50L)
+  rm(h)
+  close(con)
+  gc()
+  expect_equal(total_handles(), 0L)
+})
+
+test_that("Non blocking readline", {
+  con <- curl(httpbin("stream/71"))
+  open(con, "r", blocking = FALSE)
+  n <- 0
+  while(isIncomplete(con)){
+    buf <- readLines(con, 5)
+    n <- n + length(buf)
+  }
+  expect_equal(n, 71L)
+  close(con)
+  gc()
+  expect_equal(total_handles(), 0L)
+})
+
+test_that("isIncomplete for blocking connections", {
+  con <- curl(httpbin("stream/71"))
+  expect_false(isIncomplete(con))
+  expect_equal(length(readLines(con)), 71L)
+  expect_false(isIncomplete(con))
+  open(con)
+  expect_true(isIncomplete(con))
+  n <- 0
+  while(isIncomplete(con)){
+    buf <- readLines(con, 5)
+    n <- n + length(buf)
+  }
+  expect_equal(n, 71L)
+  close(con)
+  gc()
+  expect_equal(total_handles(), 0L)
+})
+
+test_that("Small buffers", {
+  con <- curl(httpbin("get"))
+  expect_false(isIncomplete(con = con))
+  open(con)
+  on.exit(close(con), add = TRUE)
+  expect_true(isIncomplete(con = con))
+  readLines(con, 1)
+  expect_true(isIncomplete(con = con))
+  readLines(con)
+  expect_false(isIncomplete(con = con))
+})
diff --git a/tests/testthat/test-post.R b/tests/testthat/test-post.R
new file mode 100644
index 0000000..201261f
--- /dev/null
+++ b/tests/testthat/test-post.R
@@ -0,0 +1,110 @@
+context("Posting data")
+
+h <- new_handle()
+
+test_that("Post text data", {
+  handle_setopt(h, COPYPOSTFIELDS = "moo=moomooo");
+  handle_setheaders(h,
+    "Content-Type" = "text/moo",
+    "Cache-Control" = "no-cache",
+    "User-Agent" = "A cow"
+  )
+  req <- curl_fetch_memory(httpbin("post"), handle = h)
+  res <- jsonlite::fromJSON(rawToChar(req$content))
+
+  expect_equal(res$data, "moo=moomooo")
+  expect_equal(res$headers$`Content-Type`, "text/moo")
+  expect_equal(res$headers$`User-Agent`, "A cow")
+
+  # Using connection interface
+  input <- jsonlite::fromJSON(rawToChar(req$content))
+  con <- curl(httpbin("post"), handle = h)
+  output <- jsonlite::fromJSON(con)
+  expect_equal(input, output)
+
+  # Using download interface
+  tmp <- tempfile()
+  on.exit(unlink(tmp), add = TRUE)
+  curl_download(httpbin("post"), tmp, handle = h)
+  txt2 <- readLines(tmp)
+  expect_equal(rawToChar(req$content), paste0(txt2, "\n", collapse=""))
+})
+
+test_that("Change headers", {
+  # Defaults to application/x-www-form-urlencoded
+  handle_setheaders(h, "User-Agent" = "Not a cow")
+  req <- curl_fetch_memory(httpbin("post"), handle = h)
+  res <- jsonlite::fromJSON(rawToChar(req$content))
+  expect_equal(res$form$moo, "moomooo")
+  expect_equal(res$headers$`User-Agent`, "Not a cow")
+
+})
+
+test_that("Post JSON data", {
+  hx <- new_handle()
+  handle_setopt(hx, COPYPOSTFIELDS = jsonlite::toJSON(mtcars));
+  handle_setheaders(hx, "Content-Type" = "application/json")
+  req <- curl_fetch_memory(httpbin("post"), handle = hx)
+  expect_equal(req$status_code, 200)
+
+  # For debugging
+  if(req$status_code > 200)
+    stop(rawToChar(req$content))
+
+  # Note that httpbin reorders columns alphabetically
+  output <- jsonlite::fromJSON(rawToChar(req$content))
+  expect_is(output$json, "data.frame")
+  expect_equal(sort(names(output$json)), sort(names(mtcars)))
+})
+
+test_that("Multipart form post", {
+  # Don't reset options manually, curl should figure this out.
+  hx <- handle_setform(new_handle(),
+    foo = "blabla",
+    bar = charToRaw("boeboe"),
+    iris = form_data(serialize(iris, NULL), "data/rda"),
+    description = form_file(system.file("DESCRIPTION")),
+    logo = form_file(file.path(Sys.getenv("R_DOC_DIR"), "html/logo.jpg"), "image/jpeg")
+  )
+  req <- curl_fetch_memory(httpbin("post"), handle = hx)
+
+  expect_equal(req$status_code, 200)
+
+  # For debugging
+  if(req$status_code > 200)
+    stop(rawToChar(req$content))
+
+  res <- jsonlite::fromJSON(rawToChar(req$content))
+  expect_match(res$headers$`Content-Type`, "multipart")
+  expect_equal(sort(names(res$files)), c("description", "logo"))
+  expect_equal(sort(names(res$form)), c("bar", "foo", "iris"))
+})
+
+test_that("Empty values", {
+  hx <- handle_setform(new_handle())
+  req <- curl_fetch_memory(httpbin("post"), handle = hx)
+  expect_equal(req$status_code, 200)
+  res <- jsonlite::fromJSON(rawToChar(req$content))
+  expect_length(res$form, 0)
+  expect_equal(as.numeric(res$headers$`Content-Length`), 0)
+
+  hx <- handle_setform(new_handle(), x = "", y = raw(0))
+  req <- curl_fetch_memory(httpbin("post"), handle = hx)
+
+  expect_equal(req$status_code, 200)
+
+  # For debugging
+  if(req$status_code > 200)
+    stop(rawToChar(req$content))
+
+  res <- jsonlite::fromJSON(rawToChar(req$content))
+  expect_match(res$headers$`Content-Type`, "multipart")
+  expect_length(res$form, 2)
+  expect_equal(res$form$x, "")
+  expect_equal(res$form$y, "")
+})
+
+rm(h)
+test_that("GC works", {
+  gc()
+  expect_equal(total_handles(), 0L)
+})
+
diff --git a/tools/symbols-in-versions b/tools/symbols-in-versions
new file mode 100644
index 0000000..8834ada
--- /dev/null
+++ b/tools/symbols-in-versions
@@ -0,0 +1,832 @@
+                                  _   _ ____  _
+                              ___| | | |  _ \| |
+                             / __| | | | |_) | |
+                            | (__| |_| |  _ <| |___
+                             \___|\___/|_| \_\_____|
+
+ This document lists defines and other symbols present in libcurl, together
+ with exact information about the first libcurl version that provides the
+ symbol, the first version in which the symbol was marked as deprecated and
+ for a few symbols the last version that featured it. The names appear in
+ alphabetical order.
+
+ Name                           Introduced  Deprecated  Removed
+
+CURLAUTH_ANY                    7.10.6
+CURLAUTH_ANYSAFE                7.10.6
+CURLAUTH_BASIC                  7.10.6
+CURLAUTH_DIGEST                 7.10.6
+CURLAUTH_DIGEST_IE              7.19.3
+CURLAUTH_GSSNEGOTIATE           7.10.6       7.38.0
+CURLAUTH_NEGOTIATE              7.38.0
+CURLAUTH_NONE                   7.10.6
+CURLAUTH_NTLM                   7.10.6
+CURLAUTH_NTLM_WB                7.22.0
+CURLAUTH_ONLY                   7.21.3
+CURLCLOSEPOLICY_CALLBACK        7.7
+CURLCLOSEPOLICY_LEAST_RECENTLY_USED 7.7
+CURLCLOSEPOLICY_LEAST_TRAFFIC   7.7
+CURLCLOSEPOLICY_NONE            7.7
+CURLCLOSEPOLICY_OLDEST          7.7
+CURLCLOSEPOLICY_SLOWEST         7.7
+CURLE_ABORTED_BY_CALLBACK       7.1
+CURLE_AGAIN                     7.18.2
+CURLE_ALREADY_COMPLETE          7.7.2
+CURLE_BAD_CALLING_ORDER         7.1           7.17.0
+CURLE_BAD_CONTENT_ENCODING      7.10
+CURLE_BAD_DOWNLOAD_RESUME       7.10
+CURLE_BAD_FUNCTION_ARGUMENT     7.1
+CURLE_BAD_PASSWORD_ENTERED      7.4.2         7.17.0
+CURLE_CHUNK_FAILED              7.21.0
+CURLE_CONV_FAILED               7.15.4
+CURLE_CONV_REQD                 7.15.4
+CURLE_COULDNT_CONNECT           7.1
+CURLE_COULDNT_RESOLVE_HOST      7.1
+CURLE_COULDNT_RESOLVE_PROXY     7.1
+CURLE_FAILED_INIT               7.1
+CURLE_FILESIZE_EXCEEDED         7.10.8
+CURLE_FILE_COULDNT_READ_FILE    7.1
+CURLE_FTP_ACCEPT_FAILED         7.24.0
+CURLE_FTP_ACCEPT_TIMEOUT        7.24.0
+CURLE_FTP_ACCESS_DENIED         7.1
+CURLE_FTP_BAD_DOWNLOAD_RESUME   7.1           7.1
+CURLE_FTP_BAD_FILE_LIST         7.21.0
+CURLE_FTP_CANT_GET_HOST         7.1
+CURLE_FTP_CANT_RECONNECT        7.1           7.17.0
+CURLE_FTP_COULDNT_GET_SIZE      7.1           7.17.0
+CURLE_FTP_COULDNT_RETR_FILE     7.1
+CURLE_FTP_COULDNT_SET_ASCII     7.1           7.17.0
+CURLE_FTP_COULDNT_SET_BINARY    7.1           7.17.0
+CURLE_FTP_COULDNT_SET_TYPE      7.17.0
+CURLE_FTP_COULDNT_STOR_FILE     7.1
+CURLE_FTP_COULDNT_USE_REST      7.1
+CURLE_FTP_PARTIAL_FILE          7.1           7.1
+CURLE_FTP_PORT_FAILED           7.1
+CURLE_FTP_PRET_FAILED           7.20.0
+CURLE_FTP_QUOTE_ERROR           7.1           7.17.0
+CURLE_FTP_SSL_FAILED            7.11.0        7.17.0
+CURLE_FTP_USER_PASSWORD_INCORRECT 7.1         7.17.0
+CURLE_FTP_WEIRD_227_FORMAT      7.1
+CURLE_FTP_WEIRD_PASS_REPLY      7.1
+CURLE_FTP_WEIRD_PASV_REPLY      7.1
+CURLE_FTP_WEIRD_SERVER_REPLY    7.1
+CURLE_FTP_WEIRD_USER_REPLY      7.1           7.17.0
+CURLE_FTP_WRITE_ERROR           7.1           7.17.0
+CURLE_FUNCTION_NOT_FOUND        7.1
+CURLE_GOT_NOTHING               7.9.1
+CURLE_HTTP2                     7.38.0
+CURLE_HTTP2_STREAM              7.49.0
+CURLE_HTTP_NOT_FOUND            7.1
+CURLE_HTTP_PORT_FAILED          7.3           7.12.0
+CURLE_HTTP_POST_ERROR           7.1
+CURLE_HTTP_RANGE_ERROR          7.1           7.17.0
+CURLE_HTTP_RETURNED_ERROR       7.10.3
+CURLE_INTERFACE_FAILED          7.12.0
+CURLE_LDAP_CANNOT_BIND          7.1
+CURLE_LDAP_INVALID_URL          7.10.8
+CURLE_LDAP_SEARCH_FAILED        7.1
+CURLE_LIBRARY_NOT_FOUND         7.1           7.17.0
+CURLE_LOGIN_DENIED              7.13.1
+CURLE_MALFORMAT_USER            7.1           7.17.0
+CURLE_NOT_BUILT_IN              7.21.5
+CURLE_NO_CONNECTION_AVAILABLE   7.30.0
+CURLE_OK                        7.1
+CURLE_OPERATION_TIMEDOUT        7.10.2
+CURLE_OPERATION_TIMEOUTED       7.1           7.17.0
+CURLE_OUT_OF_MEMORY             7.1
+CURLE_PARTIAL_FILE              7.1
+CURLE_PEER_FAILED_VERIFICATION  7.17.1
+CURLE_QUOTE_ERROR               7.17.0
+CURLE_RANGE_ERROR               7.17.0
+CURLE_READ_ERROR                7.1
+CURLE_RECV_ERROR                7.10
+CURLE_REMOTE_ACCESS_DENIED      7.17.0
+CURLE_REMOTE_DISK_FULL          7.17.0
+CURLE_REMOTE_FILE_EXISTS        7.17.0
+CURLE_REMOTE_FILE_NOT_FOUND     7.16.1
+CURLE_RTSP_CSEQ_ERROR           7.20.0
+CURLE_RTSP_SESSION_ERROR        7.20.0
+CURLE_SEND_ERROR                7.10
+CURLE_SEND_FAIL_REWIND          7.12.3
+CURLE_SHARE_IN_USE              7.9.6         7.17.0
+CURLE_SSH                       7.16.1
+CURLE_SSL_CACERT                7.10
+CURLE_SSL_CACERT_BADFILE        7.16.0
+CURLE_SSL_CERTPROBLEM           7.10
+CURLE_SSL_CIPHER                7.10
+CURLE_SSL_CONNECT_ERROR         7.1
+CURLE_SSL_CRL_BADFILE           7.19.0
+CURLE_SSL_ENGINE_INITFAILED     7.12.3
+CURLE_SSL_ENGINE_NOTFOUND       7.9.3
+CURLE_SSL_ENGINE_SETFAILED      7.9.3
+CURLE_SSL_INVALIDCERTSTATUS     7.41.0
+CURLE_SSL_ISSUER_ERROR          7.19.0
+CURLE_SSL_PEER_CERTIFICATE      7.8           7.17.1
+CURLE_SSL_PINNEDPUBKEYNOTMATCH  7.39.0
+CURLE_SSL_SHUTDOWN_FAILED       7.16.1
+CURLE_TELNET_OPTION_SYNTAX      7.7
+CURLE_TFTP_DISKFULL             7.15.0        7.17.0
+CURLE_TFTP_EXISTS               7.15.0        7.17.0
+CURLE_TFTP_ILLEGAL              7.15.0
+CURLE_TFTP_NOSUCHUSER           7.15.0
+CURLE_TFTP_NOTFOUND             7.15.0
+CURLE_TFTP_PERM                 7.15.0
+CURLE_TFTP_UNKNOWNID            7.15.0
+CURLE_TOO_MANY_REDIRECTS        7.5
+CURLE_UNKNOWN_OPTION            7.21.5
+CURLE_UNKNOWN_TELNET_OPTION     7.7
+CURLE_UNSUPPORTED_PROTOCOL      7.1
+CURLE_UPLOAD_FAILED             7.16.3
+CURLE_URL_MALFORMAT             7.1
+CURLE_URL_MALFORMAT_USER        7.1           7.17.0
+CURLE_USE_SSL_FAILED            7.17.0
+CURLE_WEIRD_SERVER_REPLY        7.51.0
+CURLE_WRITE_ERROR               7.1
+CURLFILETYPE_DEVICE_BLOCK       7.21.0
+CURLFILETYPE_DEVICE_CHAR        7.21.0
+CURLFILETYPE_DIRECTORY          7.21.0
+CURLFILETYPE_DOOR               7.21.0
+CURLFILETYPE_FILE               7.21.0
+CURLFILETYPE_NAMEDPIPE          7.21.0
+CURLFILETYPE_SOCKET             7.21.0
+CURLFILETYPE_SYMLINK            7.21.0
+CURLFILETYPE_UNKNOWN            7.21.0
+CURLFINFOFLAG_KNOWN_FILENAME    7.21.0
+CURLFINFOFLAG_KNOWN_FILETYPE    7.21.0
+CURLFINFOFLAG_KNOWN_GID         7.21.0
+CURLFINFOFLAG_KNOWN_HLINKCOUNT  7.21.0
+CURLFINFOFLAG_KNOWN_PERM        7.21.0
+CURLFINFOFLAG_KNOWN_SIZE        7.21.0
+CURLFINFOFLAG_KNOWN_TIME        7.21.0
+CURLFINFOFLAG_KNOWN_UID         7.21.0
+CURLFORM_ARRAY                  7.9.1
+CURLFORM_ARRAY_END              7.9.1         7.9.5       7.9.6
+CURLFORM_ARRAY_START            7.9.1         7.9.5       7.9.6
+CURLFORM_BUFFER                 7.9.8
+CURLFORM_BUFFERLENGTH           7.9.8
+CURLFORM_BUFFERPTR              7.9.8
+CURLFORM_CONTENTHEADER          7.9.3
+CURLFORM_CONTENTLEN             7.46.0
+CURLFORM_CONTENTSLENGTH         7.9
+CURLFORM_CONTENTTYPE            7.9
+CURLFORM_COPYCONTENTS           7.9
+CURLFORM_COPYNAME               7.9
+CURLFORM_END                    7.9
+CURLFORM_FILE                   7.9
+CURLFORM_FILECONTENT            7.9.1
+CURLFORM_FILENAME               7.9.6
+CURLFORM_NAMELENGTH             7.9
+CURLFORM_NOTHING                7.9
+CURLFORM_PTRCONTENTS            7.9
+CURLFORM_PTRNAME                7.9
+CURLFORM_STREAM                 7.18.2
+CURLFTPAUTH_DEFAULT             7.12.2
+CURLFTPAUTH_SSL                 7.12.2
+CURLFTPAUTH_TLS                 7.12.2
+CURLFTPMETHOD_DEFAULT           7.15.3
+CURLFTPMETHOD_MULTICWD          7.15.3
+CURLFTPMETHOD_NOCWD             7.15.3
+CURLFTPMETHOD_SINGLECWD         7.15.3
+CURLFTPSSL_ALL                  7.11.0        7.17.0
+CURLFTPSSL_CCC_ACTIVE           7.16.2
+CURLFTPSSL_CCC_NONE             7.16.2
+CURLFTPSSL_CCC_PASSIVE          7.16.1
+CURLFTPSSL_CONTROL              7.11.0        7.17.0
+CURLFTPSSL_NONE                 7.11.0        7.17.0
+CURLFTPSSL_TRY                  7.11.0        7.17.0
+CURLFTP_CREATE_DIR              7.19.4
+CURLFTP_CREATE_DIR_NONE         7.19.4
+CURLFTP_CREATE_DIR_RETRY        7.19.4
+CURLGSSAPI_DELEGATION_FLAG      7.22.0
+CURLGSSAPI_DELEGATION_NONE      7.22.0
+CURLGSSAPI_DELEGATION_POLICY_FLAG 7.22.0
+CURLHEADER_SEPARATE             7.37.0
+CURLHEADER_UNIFIED              7.37.0
+CURLINFO_ACTIVESOCKET           7.45.0
+CURLINFO_APPCONNECT_TIME        7.19.0
+CURLINFO_CERTINFO               7.19.1
+CURLINFO_CONDITION_UNMET        7.19.4
+CURLINFO_CONNECT_TIME           7.4.1
+CURLINFO_CONTENT_LENGTH_DOWNLOAD 7.6.1
+CURLINFO_CONTENT_LENGTH_UPLOAD  7.6.1
+CURLINFO_CONTENT_TYPE           7.9.4
+CURLINFO_COOKIELIST             7.14.1
+CURLINFO_DATA_IN                7.9.6
+CURLINFO_DATA_OUT               7.9.6
+CURLINFO_DOUBLE                 7.4.1
+CURLINFO_EFFECTIVE_URL          7.4
+CURLINFO_END                    7.9.6
+CURLINFO_FILETIME               7.5
+CURLINFO_FTP_ENTRY_PATH         7.15.4
+CURLINFO_HEADER_IN              7.9.6
+CURLINFO_HEADER_OUT             7.9.6
+CURLINFO_HEADER_SIZE            7.4.1
+CURLINFO_HTTPAUTH_AVAIL         7.10.8
+CURLINFO_HTTP_CODE              7.4.1         7.10.8
+CURLINFO_HTTP_CONNECTCODE       7.10.7
+CURLINFO_HTTP_VERSION           7.50.0
+CURLINFO_LASTONE                7.4.1
+CURLINFO_LASTSOCKET             7.15.2
+CURLINFO_LOCAL_IP               7.21.0
+CURLINFO_LOCAL_PORT             7.21.0
+CURLINFO_LONG                   7.4.1
+CURLINFO_MASK                   7.4.1
+CURLINFO_NAMELOOKUP_TIME        7.4.1
+CURLINFO_NONE                   7.4.1
+CURLINFO_NUM_CONNECTS           7.12.3
+CURLINFO_OS_ERRNO               7.12.2
+CURLINFO_PRETRANSFER_TIME       7.4.1
+CURLINFO_PRIMARY_IP             7.19.0
+CURLINFO_PRIMARY_PORT           7.21.0
+CURLINFO_PRIVATE                7.10.3
+CURLINFO_PROTOCOL               7.52.0
+CURLINFO_PROXYAUTH_AVAIL        7.10.8
+CURLINFO_PROXY_SSL_VERIFYRESULT 7.52.0
+CURLINFO_REDIRECT_COUNT         7.9.7
+CURLINFO_REDIRECT_TIME          7.9.7
+CURLINFO_REDIRECT_URL           7.18.2
+CURLINFO_REQUEST_SIZE           7.4.1
+CURLINFO_RESPONSE_CODE          7.10.8
+CURLINFO_RTSP_CLIENT_CSEQ       7.20.0
+CURLINFO_RTSP_CSEQ_RECV         7.20.0
+CURLINFO_RTSP_SERVER_CSEQ       7.20.0
+CURLINFO_RTSP_SESSION_ID        7.20.0
+CURLINFO_SCHEME                 7.52.0
+CURLINFO_SIZE_DOWNLOAD          7.4.1
+CURLINFO_SIZE_UPLOAD            7.4.1
+CURLINFO_SLIST                  7.12.3
+CURLINFO_SOCKET                 7.45.0
+CURLINFO_SPEED_DOWNLOAD         7.4.1
+CURLINFO_SPEED_UPLOAD           7.4.1
+CURLINFO_SSL_DATA_IN            7.12.1
+CURLINFO_SSL_DATA_OUT           7.12.1
+CURLINFO_SSL_ENGINES            7.12.3
+CURLINFO_SSL_VERIFYRESULT       7.5
+CURLINFO_STARTTRANSFER_TIME     7.9.2
+CURLINFO_STRING                 7.4.1
+CURLINFO_TEXT                   7.9.6
+CURLINFO_TLS_SESSION            7.34.0        7.48.0
+CURLINFO_TLS_SSL_PTR            7.48.0
+CURLINFO_TOTAL_TIME             7.4.1
+CURLINFO_TYPEMASK               7.4.1
+CURLIOCMD_NOP                   7.12.3
+CURLIOCMD_RESTARTREAD           7.12.3
+CURLIOE_FAILRESTART             7.12.3
+CURLIOE_OK                      7.12.3
+CURLIOE_UNKNOWNCMD              7.12.3
+CURLKHMATCH_MISMATCH            7.19.6
+CURLKHMATCH_MISSING             7.19.6
+CURLKHMATCH_OK                  7.19.6
+CURLKHSTAT_DEFER                7.19.6
+CURLKHSTAT_FINE                 7.19.6
+CURLKHSTAT_FINE_ADD_TO_FILE     7.19.6
+CURLKHSTAT_REJECT               7.19.6
+CURLKHTYPE_DSS                  7.19.6
+CURLKHTYPE_RSA                  7.19.6
+CURLKHTYPE_RSA1                 7.19.6
+CURLKHTYPE_UNKNOWN              7.19.6
+CURLMOPT_CHUNK_LENGTH_PENALTY_SIZE 7.30.0
+CURLMOPT_CONTENT_LENGTH_PENALTY_SIZE 7.30.0
+CURLMOPT_MAXCONNECTS            7.16.3
+CURLMOPT_MAX_HOST_CONNECTIONS   7.30.0
+CURLMOPT_MAX_PIPELINE_LENGTH    7.30.0
+CURLMOPT_MAX_TOTAL_CONNECTIONS  7.30.0
+CURLMOPT_PIPELINING             7.16.0
+CURLMOPT_PIPELINING_SERVER_BL   7.30.0
+CURLMOPT_PIPELINING_SITE_BL     7.30.0
+CURLMOPT_PUSHDATA               7.44.0
+CURLMOPT_PUSHFUNCTION           7.44.0
+CURLMOPT_SOCKETDATA             7.15.4
+CURLMOPT_SOCKETFUNCTION         7.15.4
+CURLMOPT_TIMERDATA              7.16.0
+CURLMOPT_TIMERFUNCTION          7.16.0
+CURLMSG_DONE                    7.9.6
+CURLMSG_NONE                    7.9.6
+CURLM_ADDED_ALREADY             7.32.1
+CURLM_BAD_EASY_HANDLE           7.9.6
+CURLM_BAD_HANDLE                7.9.6
+CURLM_BAD_SOCKET                7.15.4
+CURLM_CALL_MULTI_PERFORM        7.9.6
+CURLM_CALL_MULTI_SOCKET         7.15.5
+CURLM_INTERNAL_ERROR            7.9.6
+CURLM_OK                        7.9.6
+CURLM_OUT_OF_MEMORY             7.9.6
+CURLM_UNKNOWN_OPTION            7.15.4
+CURLOPTTYPE_FUNCTIONPOINT       7.1
+CURLOPTTYPE_LONG                7.1
+CURLOPTTYPE_OBJECTPOINT         7.1
+CURLOPTTYPE_OFF_T               7.11.0
+CURLOPTTYPE_STRINGPOINT         7.46.0
+CURLOPT_ABSTRACT_UNIX_SOCKET    7.53.0
+CURLOPT_ACCEPTTIMEOUT_MS        7.24.0
+CURLOPT_ACCEPT_ENCODING         7.21.6
+CURLOPT_ADDRESS_SCOPE           7.19.0
+CURLOPT_APPEND                  7.17.0
+CURLOPT_AUTOREFERER             7.1
+CURLOPT_BUFFERSIZE              7.10
+CURLOPT_CAINFO                  7.4.2
+CURLOPT_CAPATH                  7.9.8
+CURLOPT_CERTINFO                7.19.1
+CURLOPT_CHUNK_BGN_FUNCTION      7.21.0
+CURLOPT_CHUNK_DATA              7.21.0
+CURLOPT_CHUNK_END_FUNCTION      7.21.0
+CURLOPT_CLOSEFUNCTION           7.7           7.11.1      7.15.5
+CURLOPT_CLOSEPOLICY             7.7           7.16.1
+CURLOPT_CLOSESOCKETDATA         7.21.7
+CURLOPT_CLOSESOCKETFUNCTION     7.21.7
+CURLOPT_CONNECTTIMEOUT          7.7
+CURLOPT_CONNECTTIMEOUT_MS       7.16.2
+CURLOPT_CONNECT_ONLY            7.15.2
+CURLOPT_CONNECT_TO              7.49.0
+CURLOPT_CONV_FROM_NETWORK_FUNCTION 7.15.4
+CURLOPT_CONV_FROM_UTF8_FUNCTION 7.15.4
+CURLOPT_CONV_TO_NETWORK_FUNCTION 7.15.4
+CURLOPT_COOKIE                  7.1
+CURLOPT_COOKIEFILE              7.1
+CURLOPT_COOKIEJAR               7.9
+CURLOPT_COOKIELIST              7.14.1
+CURLOPT_COOKIESESSION           7.9.7
+CURLOPT_COPYPOSTFIELDS          7.17.1
+CURLOPT_CRLF                    7.1
+CURLOPT_CRLFILE                 7.19.0
+CURLOPT_CUSTOMREQUEST           7.1
+CURLOPT_DEBUGDATA               7.9.6
+CURLOPT_DEBUGFUNCTION           7.9.6
+CURLOPT_DEFAULT_PROTOCOL        7.45.0
+CURLOPT_DIRLISTONLY             7.17.0
+CURLOPT_DNS_CACHE_TIMEOUT       7.9.3
+CURLOPT_DNS_INTERFACE           7.33.0
+CURLOPT_DNS_LOCAL_IP4           7.33.0
+CURLOPT_DNS_LOCAL_IP6           7.33.0
+CURLOPT_DNS_SERVERS             7.24.0
+CURLOPT_DNS_USE_GLOBAL_CACHE    7.9.3         7.11.1
+CURLOPT_EGDSOCKET               7.7
+CURLOPT_ENCODING                7.10
+CURLOPT_ERRORBUFFER             7.1
+CURLOPT_EXPECT_100_TIMEOUT_MS   7.36.0
+CURLOPT_FAILONERROR             7.1
+CURLOPT_FILE                    7.1           7.9.7
+CURLOPT_FILETIME                7.5
+CURLOPT_FNMATCH_DATA            7.21.0
+CURLOPT_FNMATCH_FUNCTION        7.21.0
+CURLOPT_FOLLOWLOCATION          7.1
+CURLOPT_FORBID_REUSE            7.7
+CURLOPT_FRESH_CONNECT           7.7
+CURLOPT_FTPAPPEND               7.1           7.16.4
+CURLOPT_FTPASCII                7.1           7.11.1      7.15.5
+CURLOPT_FTPLISTONLY             7.1           7.16.4
+CURLOPT_FTPPORT                 7.1
+CURLOPT_FTPSSLAUTH              7.12.2
+CURLOPT_FTP_ACCOUNT             7.13.0
+CURLOPT_FTP_ALTERNATIVE_TO_USER 7.15.5
+CURLOPT_FTP_CREATE_MISSING_DIRS 7.10.7
+CURLOPT_FTP_FILEMETHOD          7.15.1
+CURLOPT_FTP_RESPONSE_TIMEOUT    7.10.8
+CURLOPT_FTP_SKIP_PASV_IP        7.15.0
+CURLOPT_FTP_SSL                 7.11.0        7.16.4
+CURLOPT_FTP_SSL_CCC             7.16.1
+CURLOPT_FTP_USE_EPRT            7.10.5
+CURLOPT_FTP_USE_EPSV            7.9.2
+CURLOPT_FTP_USE_PRET            7.20.0
+CURLOPT_GSSAPI_DELEGATION       7.22.0
+CURLOPT_HEADER                  7.1
+CURLOPT_HEADERDATA              7.10
+CURLOPT_HEADERFUNCTION          7.7.2
+CURLOPT_HEADEROPT               7.37.0
+CURLOPT_HTTP200ALIASES          7.10.3
+CURLOPT_HTTPAUTH                7.10.6
+CURLOPT_HTTPGET                 7.8.1
+CURLOPT_HTTPHEADER              7.1
+CURLOPT_HTTPPOST                7.1
+CURLOPT_HTTPPROXYTUNNEL         7.3
+CURLOPT_HTTPREQUEST             7.1           -           7.15.5
+CURLOPT_HTTP_CONTENT_DECODING   7.16.2
+CURLOPT_HTTP_TRANSFER_DECODING  7.16.2
+CURLOPT_HTTP_VERSION            7.9.1
+CURLOPT_IGNORE_CONTENT_LENGTH   7.14.1
+CURLOPT_INFILE                  7.1           7.9.7
+CURLOPT_INFILESIZE              7.1
+CURLOPT_INFILESIZE_LARGE        7.11.0
+CURLOPT_INTERFACE               7.3
+CURLOPT_INTERLEAVEDATA          7.20.0
+CURLOPT_INTERLEAVEFUNCTION      7.20.0
+CURLOPT_IOCTLDATA               7.12.3
+CURLOPT_IOCTLFUNCTION           7.12.3
+CURLOPT_IPRESOLVE               7.10.8
+CURLOPT_ISSUERCERT              7.19.0
+CURLOPT_KEYPASSWD               7.17.0
+CURLOPT_KEEP_SENDING_ON_ERROR   7.51.0
+CURLOPT_KRB4LEVEL               7.3           7.17.0
+CURLOPT_KRBLEVEL                7.16.4
+CURLOPT_LOCALPORT               7.15.2
+CURLOPT_LOCALPORTRANGE          7.15.2
+CURLOPT_LOGIN_OPTIONS           7.34.0
+CURLOPT_LOW_SPEED_LIMIT         7.1
+CURLOPT_LOW_SPEED_TIME          7.1
+CURLOPT_MAIL_AUTH               7.25.0
+CURLOPT_MAIL_FROM               7.20.0
+CURLOPT_MAIL_RCPT               7.20.0
+CURLOPT_MAXCONNECTS             7.7
+CURLOPT_MAXFILESIZE             7.10.8
+CURLOPT_MAXFILESIZE_LARGE       7.11.0
+CURLOPT_MAXREDIRS               7.5
+CURLOPT_MAX_RECV_SPEED_LARGE    7.15.5
+CURLOPT_MAX_SEND_SPEED_LARGE    7.15.5
+CURLOPT_MUTE                    7.1           7.8         7.15.5
+CURLOPT_NETRC                   7.1
+CURLOPT_NETRC_FILE              7.11.0
+CURLOPT_NEW_DIRECTORY_PERMS     7.16.4
+CURLOPT_NEW_FILE_PERMS          7.16.4
+CURLOPT_NOBODY                  7.1
+CURLOPT_NOPROGRESS              7.1
+CURLOPT_NOPROXY                 7.19.4
+CURLOPT_NOSIGNAL                7.10
+CURLOPT_NOTHING                 7.1.1         7.11.1      7.11.0
+CURLOPT_OPENSOCKETDATA          7.17.1
+CURLOPT_OPENSOCKETFUNCTION      7.17.1
+CURLOPT_PASSWDDATA              7.4.2         7.11.1      7.15.5
+CURLOPT_PASSWDFUNCTION          7.4.2         7.11.1      7.15.5
+CURLOPT_PASSWORD                7.19.1
+CURLOPT_PASV_HOST               7.12.1        7.16.0      7.15.5
+CURLOPT_PATH_AS_IS              7.42.0
+CURLOPT_PINNEDPUBLICKEY         7.39.0
+CURLOPT_PIPEWAIT                7.43.0
+CURLOPT_PORT                    7.1
+CURLOPT_POST                    7.1
+CURLOPT_POST301                 7.17.1        7.19.1
+CURLOPT_POSTFIELDS              7.1
+CURLOPT_POSTFIELDSIZE           7.2
+CURLOPT_POSTFIELDSIZE_LARGE     7.11.1
+CURLOPT_POSTQUOTE               7.1
+CURLOPT_POSTREDIR               7.19.1
+CURLOPT_PREQUOTE                7.9.5
+CURLOPT_PRE_PROXY               7.52.0
+CURLOPT_PRIVATE                 7.10.3
+CURLOPT_PROGRESSDATA            7.1
+CURLOPT_PROGRESSFUNCTION        7.1           7.32.0
+CURLOPT_PROTOCOLS               7.19.4
+CURLOPT_PROXY                   7.1
+CURLOPT_PROXYAUTH               7.10.7
+CURLOPT_PROXYHEADER             7.37.0
+CURLOPT_PROXYPASSWORD           7.19.1
+CURLOPT_PROXYPORT               7.1
+CURLOPT_PROXYTYPE               7.10
+CURLOPT_PROXYUSERNAME           7.19.1
+CURLOPT_PROXYUSERPWD            7.1
+CURLOPT_PROXY_CAINFO            7.52.0
+CURLOPT_PROXY_CAPATH            7.52.0
+CURLOPT_PROXY_CRLFILE           7.52.0
+CURLOPT_PROXY_KEYPASSWD         7.52.0
+CURLOPT_PROXY_PINNEDPUBLICKEY   7.52.0
+CURLOPT_PROXY_SERVICE_NAME      7.43.0
+CURLOPT_PROXY_SSLCERT           7.52.0
+CURLOPT_PROXY_SSLCERTTYPE       7.52.0
+CURLOPT_PROXY_SSLKEY            7.52.0
+CURLOPT_PROXY_SSLKEYTYPE        7.52.0
+CURLOPT_PROXY_SSLVERSION        7.52.0
+CURLOPT_PROXY_SSL_CIPHER_LIST   7.52.0
+CURLOPT_PROXY_SSL_OPTIONS       7.52.0
+CURLOPT_PROXY_SSL_VERIFYHOST    7.52.0
+CURLOPT_PROXY_SSL_VERIFYPEER    7.52.0
+CURLOPT_PROXY_TLSAUTH_PASSWORD  7.52.0
+CURLOPT_PROXY_TLSAUTH_TYPE      7.52.0
+CURLOPT_PROXY_TLSAUTH_USERNAME  7.52.0
+CURLOPT_PROXY_TRANSFER_MODE     7.18.0
+CURLOPT_PUT                     7.1
+CURLOPT_QUOTE                   7.1
+CURLOPT_RANDOM_FILE             7.7
+CURLOPT_RANGE                   7.1
+CURLOPT_READDATA                7.9.7
+CURLOPT_READFUNCTION            7.1
+CURLOPT_REDIR_PROTOCOLS         7.19.4
+CURLOPT_REFERER                 7.1
+CURLOPT_RESOLVE                 7.21.3
+CURLOPT_RESUME_FROM             7.1
+CURLOPT_RESUME_FROM_LARGE       7.11.0
+CURLOPT_RTSPHEADER              7.20.0
+CURLOPT_RTSP_CLIENT_CSEQ        7.20.0
+CURLOPT_RTSP_REQUEST            7.20.0
+CURLOPT_RTSP_SERVER_CSEQ        7.20.0
+CURLOPT_RTSP_SESSION_ID         7.20.0
+CURLOPT_RTSP_STREAM_URI         7.20.0
+CURLOPT_RTSP_TRANSPORT          7.20.0
+CURLOPT_SASL_IR                 7.31.0
+CURLOPT_SEEKDATA                7.18.0
+CURLOPT_SEEKFUNCTION            7.18.0
+CURLOPT_SERVER_RESPONSE_TIMEOUT 7.20.0
+CURLOPT_SERVICE_NAME            7.43.0
+CURLOPT_SHARE                   7.10
+CURLOPT_SOCKOPTDATA             7.16.0
+CURLOPT_SOCKOPTFUNCTION         7.16.0
+CURLOPT_SOCKS5_GSSAPI_NEC       7.19.4
+CURLOPT_SOCKS5_GSSAPI_SERVICE   7.19.4        7.49.0
+CURLOPT_SOURCE_HOST             7.12.1        -           7.15.5
+CURLOPT_SOURCE_PATH             7.12.1        -           7.15.5
+CURLOPT_SOURCE_PORT             7.12.1        -           7.15.5
+CURLOPT_SOURCE_POSTQUOTE        7.12.1        -           7.15.5
+CURLOPT_SOURCE_PREQUOTE         7.12.1        -           7.15.5
+CURLOPT_SOURCE_QUOTE            7.13.0        -           7.15.5
+CURLOPT_SOURCE_URL              7.13.0        -           7.15.5
+CURLOPT_SOURCE_USERPWD          7.12.1        -           7.15.5
+CURLOPT_SSH_AUTH_TYPES          7.16.1
+CURLOPT_SSH_HOST_PUBLIC_KEY_MD5 7.17.1
+CURLOPT_SSH_KEYDATA             7.19.6
+CURLOPT_SSH_KEYFUNCTION         7.19.6
+CURLOPT_SSH_KNOWNHOSTS          7.19.6
+CURLOPT_SSH_PRIVATE_KEYFILE     7.16.1
+CURLOPT_SSH_PUBLIC_KEYFILE      7.16.1
+CURLOPT_SSLCERT                 7.1
+CURLOPT_SSLCERTPASSWD           7.1.1         7.17.0
+CURLOPT_SSLCERTTYPE             7.9.3
+CURLOPT_SSLENGINE               7.9.3
+CURLOPT_SSLENGINE_DEFAULT       7.9.3
+CURLOPT_SSLKEY                  7.9.3
+CURLOPT_SSLKEYPASSWD            7.9.3         7.17.0
+CURLOPT_SSLKEYTYPE              7.9.3
+CURLOPT_SSLVERSION              7.1
+CURLOPT_SSL_CIPHER_LIST         7.9
+CURLOPT_SSL_CTX_DATA            7.10.6
+CURLOPT_SSL_CTX_FUNCTION        7.10.6
+CURLOPT_SSL_ENABLE_ALPN         7.36.0
+CURLOPT_SSL_ENABLE_NPN          7.36.0
+CURLOPT_SSL_FALSESTART          7.42.0
+CURLOPT_SSL_OPTIONS             7.25.0
+CURLOPT_SSL_SESSIONID_CACHE     7.16.0
+CURLOPT_SSL_VERIFYHOST          7.8.1
+CURLOPT_SSL_VERIFYPEER          7.4.2
+CURLOPT_SSL_VERIFYSTATUS        7.41.0
+CURLOPT_STDERR                  7.1
+CURLOPT_STREAM_DEPENDS          7.46.0
+CURLOPT_STREAM_DEPENDS_E        7.46.0
+CURLOPT_STREAM_WEIGHT           7.46.0
+CURLOPT_TCP_KEEPALIVE           7.25.0
+CURLOPT_TCP_KEEPIDLE            7.25.0
+CURLOPT_TCP_KEEPINTVL           7.25.0
+CURLOPT_TCP_NODELAY             7.11.2
+CURLOPT_TCP_FASTOPEN            7.49.0
+CURLOPT_TELNETOPTIONS           7.7
+CURLOPT_TFTP_BLKSIZE            7.19.4
+CURLOPT_TFTP_NO_OPTIONS         7.48.0
+CURLOPT_TIMECONDITION           7.1
+CURLOPT_TIMEOUT                 7.1
+CURLOPT_TIMEOUT_MS              7.16.2
+CURLOPT_TIMEVALUE               7.1
+CURLOPT_TLSAUTH_PASSWORD        7.21.4
+CURLOPT_TLSAUTH_TYPE            7.21.4
+CURLOPT_TLSAUTH_USERNAME        7.21.4
+CURLOPT_TRANSFERTEXT            7.1.1
+CURLOPT_TRANSFER_ENCODING       7.21.6
+CURLOPT_UNIX_SOCKET_PATH        7.40.0
+CURLOPT_UNRESTRICTED_AUTH       7.10.4
+CURLOPT_UPLOAD                  7.1
+CURLOPT_URL                     7.1
+CURLOPT_USERAGENT               7.1
+CURLOPT_USERNAME                7.19.1
+CURLOPT_USERPWD                 7.1
+CURLOPT_USE_SSL                 7.17.0
+CURLOPT_VERBOSE                 7.1
+CURLOPT_WILDCARDMATCH           7.21.0
+CURLOPT_WRITEDATA               7.9.7
+CURLOPT_WRITEFUNCTION           7.1
+CURLOPT_WRITEHEADER             7.1
+CURLOPT_WRITEINFO               7.1
+CURLOPT_XFERINFODATA            7.32.0
+CURLOPT_XFERINFOFUNCTION        7.32.0
+CURLOPT_XOAUTH2_BEARER          7.33.0
+CURLPAUSE_ALL                   7.18.0
+CURLPAUSE_CONT                  7.18.0
+CURLPAUSE_RECV                  7.18.0
+CURLPAUSE_RECV_CONT             7.18.0
+CURLPAUSE_SEND                  7.18.0
+CURLPAUSE_SEND_CONT             7.18.0
+CURLPIPE_HTTP1                  7.43.0
+CURLPIPE_MULTIPLEX              7.43.0
+CURLPIPE_NOTHING                7.43.0
+CURLPROTO_ALL                   7.19.4
+CURLPROTO_DICT                  7.19.4
+CURLPROTO_FILE                  7.19.4
+CURLPROTO_FTP                   7.19.4
+CURLPROTO_FTPS                  7.19.4
+CURLPROTO_GOPHER                7.21.2
+CURLPROTO_HTTP                  7.19.4
+CURLPROTO_HTTPS                 7.19.4
+CURLPROTO_IMAP                  7.20.0
+CURLPROTO_IMAPS                 7.20.0
+CURLPROTO_LDAP                  7.19.4
+CURLPROTO_LDAPS                 7.19.4
+CURLPROTO_POP3                  7.20.0
+CURLPROTO_POP3S                 7.20.0
+CURLPROTO_RTMP                  7.21.0
+CURLPROTO_RTMPE                 7.21.0
+CURLPROTO_RTMPS                 7.21.0
+CURLPROTO_RTMPT                 7.21.0
+CURLPROTO_RTMPTE                7.21.0
+CURLPROTO_RTMPTS                7.21.0
+CURLPROTO_RTSP                  7.20.0
+CURLPROTO_SCP                   7.19.4
+CURLPROTO_SFTP                  7.19.4
+CURLPROTO_SMB                   7.40.0
+CURLPROTO_SMBS                  7.40.0
+CURLPROTO_SMTP                  7.20.0
+CURLPROTO_SMTPS                 7.20.0
+CURLPROTO_TELNET                7.19.4
+CURLPROTO_TFTP                  7.19.4
+CURLPROXY_HTTP                  7.10
+CURLPROXY_HTTP_1_0              7.19.4
+CURLPROXY_HTTPS                 7.52.0
+CURLPROXY_SOCKS4                7.10
+CURLPROXY_SOCKS4A               7.18.0
+CURLPROXY_SOCKS5                7.10
+CURLPROXY_SOCKS5_HOSTNAME       7.18.0
+CURLSHE_BAD_OPTION              7.10.3
+CURLSHE_INVALID                 7.10.3
+CURLSHE_IN_USE                  7.10.3
+CURLSHE_NOMEM                   7.12.0
+CURLSHE_NOT_BUILT_IN            7.23.0
+CURLSHE_OK                      7.10.3
+CURLSHOPT_LOCKFUNC              7.10.3
+CURLSHOPT_NONE                  7.10.3
+CURLSHOPT_SHARE                 7.10.3
+CURLSHOPT_UNLOCKFUNC            7.10.3
+CURLSHOPT_UNSHARE               7.10.3
+CURLSHOPT_USERDATA              7.10.3
+CURLSOCKTYPE_ACCEPT             7.28.0
+CURLSOCKTYPE_IPCXN              7.16.0
+CURLSSH_AUTH_AGENT              7.28.0
+CURLSSH_AUTH_ANY                7.16.1
+CURLSSH_AUTH_DEFAULT            7.16.1
+CURLSSH_AUTH_HOST               7.16.1
+CURLSSH_AUTH_KEYBOARD           7.16.1
+CURLSSH_AUTH_NONE               7.16.1
+CURLSSH_AUTH_PASSWORD           7.16.1
+CURLSSH_AUTH_PUBLICKEY          7.16.1
+CURLSSLBACKEND_AXTLS            7.38.0
+CURLSSLBACKEND_BORINGSSL        7.49.0
+CURLSSLBACKEND_CYASSL           7.34.0
+CURLSSLBACKEND_DARWINSSL        7.34.0
+CURLSSLBACKEND_GNUTLS           7.34.0
+CURLSSLBACKEND_GSKIT            7.34.0
+CURLSSLBACKEND_LIBRESSL         7.49.0
+CURLSSLBACKEND_MBEDTLS          7.46.0
+CURLSSLBACKEND_NONE             7.34.0
+CURLSSLBACKEND_NSS              7.34.0
+CURLSSLBACKEND_OPENSSL          7.34.0
+CURLSSLBACKEND_POLARSSL         7.34.0
+CURLSSLBACKEND_QSOSSL           7.34.0        -           7.38.1
+CURLSSLBACKEND_SCHANNEL         7.34.0
+CURLSSLBACKEND_WOLFSSL          7.49.0
+CURLSSLOPT_ALLOW_BEAST          7.25.0
+CURLSSLOPT_NO_REVOKE            7.44.0
+CURLUSESSL_ALL                  7.17.0
+CURLUSESSL_CONTROL              7.17.0
+CURLUSESSL_NONE                 7.17.0
+CURLUSESSL_TRY                  7.17.0
+CURLVERSION_FIRST               7.10
+CURLVERSION_FOURTH              7.16.1
+CURLVERSION_NOW                 7.10
+CURLVERSION_SECOND              7.11.1
+CURLVERSION_THIRD               7.12.0
+CURL_CHUNK_BGN_FUNC_FAIL        7.21.0
+CURL_CHUNK_BGN_FUNC_OK          7.21.0
+CURL_CHUNK_BGN_FUNC_SKIP        7.21.0
+CURL_CHUNK_END_FUNC_FAIL        7.21.0
+CURL_CHUNK_END_FUNC_OK          7.21.0
+CURL_CSELECT_ERR                7.16.3
+CURL_CSELECT_IN                 7.16.3
+CURL_CSELECT_OUT                7.16.3
+CURL_DID_MEMORY_FUNC_TYPEDEFS   7.49.0
+CURL_EASY_NONE                  7.14.0        -           7.15.4
+CURL_EASY_TIMEOUT               7.14.0        -           7.15.4
+CURL_ERROR_SIZE                 7.1
+CURL_FNMATCHFUNC_FAIL           7.21.0
+CURL_FNMATCHFUNC_MATCH          7.21.0
+CURL_FNMATCHFUNC_NOMATCH        7.21.0
+CURL_FORMADD_DISABLED           7.12.1
+CURL_FORMADD_ILLEGAL_ARRAY      7.9.8
+CURL_FORMADD_INCOMPLETE         7.9.8
+CURL_FORMADD_MEMORY             7.9.8
+CURL_FORMADD_NULL               7.9.8
+CURL_FORMADD_OK                 7.9.8
+CURL_FORMADD_OPTION_TWICE       7.9.8
+CURL_FORMADD_UNKNOWN_OPTION     7.9.8
+CURL_GLOBAL_ACK_EINTR           7.30.0
+CURL_GLOBAL_ALL                 7.8
+CURL_GLOBAL_DEFAULT             7.8
+CURL_GLOBAL_NOTHING             7.8
+CURL_GLOBAL_SSL                 7.8
+CURL_GLOBAL_WIN32               7.8.1
+CURL_HTTPPOST_BUFFER            7.46.0
+CURL_HTTPPOST_CALLBACK          7.46.0
+CURL_HTTPPOST_FILENAME          7.46.0
+CURL_HTTPPOST_LARGE             7.46.0
+CURL_HTTPPOST_PTRBUFFER         7.46.0
+CURL_HTTPPOST_PTRCONTENTS       7.46.0
+CURL_HTTPPOST_PTRNAME           7.46.0
+CURL_HTTPPOST_READFILE          7.46.0
+CURL_HTTP_VERSION_1_0           7.9.1
+CURL_HTTP_VERSION_1_1           7.9.1
+CURL_HTTP_VERSION_2             7.43.0
+CURL_HTTP_VERSION_2_0           7.33.0
+CURL_HTTP_VERSION_2TLS          7.47.0
+CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE 7.49.0
+CURL_HTTP_VERSION_NONE          7.9.1
+CURL_IPRESOLVE_V4               7.10.8
+CURL_IPRESOLVE_V6               7.10.8
+CURL_IPRESOLVE_WHATEVER         7.10.8
+CURL_LOCK_ACCESS_NONE           7.10.3
+CURL_LOCK_ACCESS_SHARED         7.10.3
+CURL_LOCK_ACCESS_SINGLE         7.10.3
+CURL_LOCK_DATA_CONNECT          7.10.3
+CURL_LOCK_DATA_COOKIE           7.10.3
+CURL_LOCK_DATA_DNS              7.10.3
+CURL_LOCK_DATA_NONE             7.10.3
+CURL_LOCK_DATA_SHARE            7.10.4
+CURL_LOCK_DATA_SSL_SESSION      7.10.3
+CURL_LOCK_TYPE_CONNECT          7.10          -           7.10.2
+CURL_LOCK_TYPE_COOKIE           7.10          -           7.10.2
+CURL_LOCK_TYPE_DNS              7.10          -           7.10.2
+CURL_LOCK_TYPE_NONE             7.10          -           7.10.2
+CURL_LOCK_TYPE_SSL_SESSION      7.10          -           7.10.2
+CURL_MAX_HTTP_HEADER            7.19.7
+CURL_MAX_READ_SIZE              7.53.0
+CURL_MAX_WRITE_SIZE             7.9.7
+CURL_NETRC_IGNORED              7.9.8
+CURL_NETRC_OPTIONAL             7.9.8
+CURL_NETRC_REQUIRED             7.9.8
+CURL_POLL_IN                    7.14.0
+CURL_POLL_INOUT                 7.14.0
+CURL_POLL_NONE                  7.14.0
+CURL_POLL_OUT                   7.14.0
+CURL_POLL_REMOVE                7.14.0
+CURL_PROGRESS_BAR               7.1.1         -           7.4.1
+CURL_PROGRESS_STATS             7.1.1         -           7.4.1
+CURL_PUSH_DENY                  7.44.0
+CURL_PUSH_OK                    7.44.0
+CURL_READFUNC_ABORT             7.12.1
+CURL_READFUNC_PAUSE             7.18.0
+CURL_REDIR_GET_ALL              7.19.1
+CURL_REDIR_POST_301             7.19.1
+CURL_REDIR_POST_302             7.19.1
+CURL_REDIR_POST_303             7.25.1
+CURL_REDIR_POST_ALL             7.19.1
+CURL_RTSPREQ_ANNOUNCE           7.20.0
+CURL_RTSPREQ_DESCRIBE           7.20.0
+CURL_RTSPREQ_GET_PARAMETER      7.20.0
+CURL_RTSPREQ_NONE               7.20.0
+CURL_RTSPREQ_OPTIONS            7.20.0
+CURL_RTSPREQ_PAUSE              7.20.0
+CURL_RTSPREQ_PLAY               7.20.0
+CURL_RTSPREQ_RECEIVE            7.20.0
+CURL_RTSPREQ_RECORD             7.20.0
+CURL_RTSPREQ_SETUP              7.20.0
+CURL_RTSPREQ_SET_PARAMETER      7.20.0
+CURL_RTSPREQ_TEARDOWN           7.20.0
+CURL_SEEKFUNC_CANTSEEK          7.19.5
+CURL_SEEKFUNC_FAIL              7.19.5
+CURL_SEEKFUNC_OK                7.19.5
+CURL_SOCKET_BAD                 7.14.0
+CURL_SOCKET_TIMEOUT             7.14.0
+CURL_SOCKOPT_ALREADY_CONNECTED  7.21.5
+CURL_SOCKOPT_ERROR              7.21.5
+CURL_SOCKOPT_OK                 7.21.5
+CURL_STRICTER                   7.50.2
+CURL_SSLVERSION_DEFAULT         7.9.2
+CURL_SSLVERSION_SSLv2           7.9.2
+CURL_SSLVERSION_SSLv3           7.9.2
+CURL_SSLVERSION_TLSv1           7.9.2
+CURL_SSLVERSION_TLSv1_0         7.34.0
+CURL_SSLVERSION_TLSv1_1         7.34.0
+CURL_SSLVERSION_TLSv1_2         7.34.0
+CURL_SSLVERSION_TLSv1_3         7.52.0
+CURL_TIMECOND_IFMODSINCE        7.9.7
+CURL_TIMECOND_IFUNMODSINCE      7.9.7
+CURL_TIMECOND_LASTMOD           7.9.7
+CURL_TIMECOND_NONE              7.9.7
+CURL_TLSAUTH_NONE               7.21.4
+CURL_TLSAUTH_SRP                7.21.4
+CURL_VERSION_ASYNCHDNS          7.10.7
+CURL_VERSION_CONV               7.15.4
+CURL_VERSION_CURLDEBUG          7.19.6
+CURL_VERSION_DEBUG              7.10.6
+CURL_VERSION_GSSAPI             7.38.0
+CURL_VERSION_GSSNEGOTIATE       7.10.6        7.38.0
+CURL_VERSION_HTTP2              7.33.0
+CURL_VERSION_HTTPS_PROXY        7.52.0
+CURL_VERSION_IDN                7.12.0
+CURL_VERSION_IPV6               7.10
+CURL_VERSION_KERBEROS4          7.10          7.33.0
+CURL_VERSION_KERBEROS5          7.40.0
+CURL_VERSION_LARGEFILE          7.11.1
+CURL_VERSION_LIBZ               7.10
+CURL_VERSION_NTLM               7.10.6
+CURL_VERSION_NTLM_WB            7.22.0
+CURL_VERSION_PSL                7.47.0
+CURL_VERSION_SPNEGO             7.10.8
+CURL_VERSION_SSL                7.10
+CURL_VERSION_SSPI               7.13.2
+CURL_VERSION_TLSAUTH_SRP        7.21.4
+CURL_VERSION_UNIX_SOCKETS       7.40.0
+CURL_WAIT_POLLIN                7.28.0
+CURL_WAIT_POLLOUT               7.28.0
+CURL_WAIT_POLLPRI               7.28.0
+CURL_WRITEFUNC_PAUSE            7.18.0
diff --git a/tools/symbols.R b/tools/symbols.R
new file mode 100644
index 0000000..a592cb2
--- /dev/null
+++ b/tools/symbols.R
@@ -0,0 +1,50 @@
+# Note: we can only lookup symbols that are available in the installed version of libcurl
+# Therefore you should only update the symbol table using the latest version of libcurl.
+# On Mac: 'brew install curl' will install to /usr/local/opt/curl
+
+blacklist <- c("CURL_DID_MEMORY_FUNC_TYPEDEFS", "CURL_STRICTER")
+
+# Function to read a symbol
+library(inline)
+getsymbol <- function(name){
+  if(name %in% blacklist) return(NA_integer_)
+  fun = cfunction(
+    cppargs="-I/usr/local/opt/curl/include",
+    includes = '#include <curl/curl.h>',
+    body = paste("return ScalarInteger((int)", name, ");")
+  )
+  val = fun()
+  rm(fun); gc();
+  cat("Found:", name, "=", val, "\n")
+  return(val)
+}
+
+# The symbols-in-versions file is included with libcurl
+txt <- scan("tools/symbols-in-versions", character(), sep = "\n", skip = 13)
+lines <- strsplit(txt, "[ ]+")
+symbols <- as.data.frame(t(vapply(lines, `[`, character(4), 1:4)), stringsAsFactors = FALSE)
+names(symbols) <- c("name", "introduced", "deprecated", "removed")
+
+# Get current version
+avail <- is.na(symbols$removed)
+
+# Lookup all symbol values from curl.h (takes a while)
+symbols$value <- NA_integer_;
+available_symbols <- symbols$name[avail]
+symbols$value[avail] <- vapply(available_symbols, getsymbol, integer(1))
+
+# Compute type for options
+type_name <- c("integer", "string", "function", "number")
+type <- cut(symbols$value, c(-Inf, 0, 10000, 20000, 30000, 40000, Inf),
+  labels = FALSE, right = FALSE)
+type[is.na(type)] <- 1
+
+symbols$type <- c("unknown", "integer", "string", "function", "number", "unknown")[type]
+
+option <- grepl("CURLOPT", symbols$name)
+symbols$type[!option] <- NA
+
+# Save as lazy data
+curl_symbols <- symbols[order(symbols$name), ]
+row.names(curl_symbols) = NULL
+devtools::use_data(curl_symbols, overwrite = TRUE)
diff --git a/tools/winlibs.R b/tools/winlibs.R
new file mode 100644
index 0000000..78d064d
--- /dev/null
+++ b/tools/winlibs.R
@@ -0,0 +1,8 @@
+# Build against static libraries from rwinlib
+if(!file.exists("../windows/libcurl-7.54.1/include/curl/curl.h")){
+  if(getRversion() < "3.3.0") setInternet2()
+  download.file("https://github.com/rwinlib/libcurl/archive/v7.54.1.zip", "lib.zip", quiet = TRUE)
+  dir.create("../windows", showWarnings = FALSE)
+  unzip("lib.zip", exdir = "../windows")
+  unlink("lib.zip")
+}
diff --git a/vignettes/intro.Rmd b/vignettes/intro.Rmd
new file mode 100644
index 0000000..b151523
--- /dev/null
+++ b/vignettes/intro.Rmd
@@ -0,0 +1,328 @@
+---
+title: "The curl package: a modern R interface to libcurl"
+date: "`r Sys.Date()`"
+output:
+  html_document:
+    fig_caption: false
+    toc: true
+    toc_float:
+      collapsed: false
+      smooth_scroll: false
+    toc_depth: 3
+vignette: >
+  %\VignetteIndexEntry{The curl package: a modern R interface to libcurl}
+  %\VignetteEngine{knitr::rmarkdown}
+  %\VignetteEncoding{UTF-8}
+---
+
+
+```{r, echo = FALSE, message = FALSE}
+knitr::opts_chunk$set(comment = "")
+options(width = 120, max.print = 100)
+library(curl)
+```
+
+The curl package provides bindings to the [libcurl](http://curl.haxx.se/libcurl/) C library for R. The package supports retrieving data in-memory, downloading to disk, or streaming using the [R "connection" interface](https://stat.ethz.ch/R-manual/R-devel/library/base/html/connections.html). Some knowledge of curl is recommended to use this package. For a more user-friendly HTTP client, have a look at the  [httr](https://cran.r-project.org/package=httr/vignettes/quickstart.html) package  [...]
+
+## Request interfaces
+
+The curl package implements several interfaces to retrieve data from a URL:
+
+ - `curl_fetch_memory()`  saves response in memory
+ - `curl_download()` or `curl_fetch_disk()`  writes response to disk
+ - `curl()` or `curl_fetch_stream()` streams response data
+ - `curl_fetch_multi()` (Advanced) processes responses via callback functions
+
+Each interface performs the same HTTP request; they differ only in how the response data is processed.
+
+### Getting in memory
+
+The `curl_fetch_memory` function is a blocking interface which waits for the request to complete and returns a list with all content (data, headers, status, timings) of the server response.
+
+
+```{r}
+req <- curl_fetch_memory("https://httpbin.org/get")
+str(req)
+parse_headers(req$headers)
+cat(rawToChar(req$content))
+```
+
+The `curl_fetch_memory` interface is the easiest to use and the most powerful for building API clients. However it is not suitable for downloading very large files because the entire response is buffered in memory. If you are expecting 100G of data, you probably need one of the other interfaces.
+
+### Downloading to disk
+
+The second method is `curl_download`, which has been designed as a drop-in replacement for `download.file` in r-base. It writes the response straight to disk, which is useful for downloading (large) files.
+
+```{r}
+tmp <- tempfile()
+curl_download("https://httpbin.org/get", tmp)
+cat(readLines(tmp), sep = "\n")
+```
+
+### Streaming data
+
+The most flexible interface is the `curl` function, which has been designed as a drop-in replacement for base `url`. It will create a so-called connection object, which allows for incremental (asynchronous) reading of the response.
+
+```{r}
+con <- curl("https://httpbin.org/get")
+open(con)
+
+# Get 3 lines
+out <- readLines(con, n = 3)
+cat(out, sep = "\n")
+
+# Get 3 more lines
+out <- readLines(con, n = 3)
+cat(out, sep = "\n")
+
+# Get remaining lines
+out <- readLines(con)
+close(con)
+cat(out, sep = "\n")
+```
+
+The example shows how to use `readLines` on an opened connection to read `n` lines at a time. Similarly, `readBin` is used to read `n` bytes at a time when stream-parsing binary data.
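+
+As a minimal sketch of chunk-wise binary reading (a hypothetical illustration reusing the httpbin endpoint from above, not part of the official examples), `readBin` returns a zero-length raw vector at end-of-stream, which terminates the loop:
+
+```{r}
+con <- curl("https://httpbin.org/get")
+open(con, "rb")
+# Read the response in chunks of up to 100 raw bytes until EOF
+while(length(buf <- readBin(con, raw(), 100))){
+  cat("chunk of", length(buf), "bytes\n")
+}
+close(con)
+```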
+
+#### Non-blocking connections
+
+As of version 2.3 it is also possible to open connections in non-blocking mode. In that case `readBin` and `readLines` return immediately with the data that is currently available, without waiting. For non-blocking connections we use `isIncomplete` to check whether the download has completed.
+
+```{r}
+con <- curl("https://httpbin.org/drip?duration=1&numbytes=50")
+open(con, "rb", blocking = FALSE)
+while(isIncomplete(con)){
+  buf <- readBin(con, raw(), 1024)
+  if(length(buf)) 
+    cat("received: ", rawToChar(buf), "\n")
+}
+close(con)
+```
+
+The `curl_fetch_stream` function provides a very simple wrapper around a non-blocking connection.
+
+
+### Async requests
+
+As of `curl 2.0` the package provides an async interface which can perform multiple simultaneous requests concurrently. The `curl_fetch_multi` function adds a request to a pool and returns immediately; it does not actually perform the request.
+
+```{r}
+pool <- new_pool()
+cb <- function(req){cat("done:", req$url, ": HTTP:", req$status, "\n")}
+curl_fetch_multi('https://www.google.com', done = cb, pool = pool)
+curl_fetch_multi('https://cloud.r-project.org', done = cb, pool = pool)
+curl_fetch_multi('https://httpbin.org/blabla', done = cb, pool = pool)
+```
+
+When we call `multi_run()`, all scheduled requests are performed concurrently. The callback functions get triggered when each request completes.
+
+```{r}
+# This actually performs requests:
+out <- multi_run(pool = pool)
+print(out)
+```
+
+The system allows for running many concurrent non-blocking requests. However it is quite complex and requires careful specification of handler functions.
+
+## Exception handling
+
+An HTTP request can encounter two types of errors:
+
+ 1. Connection failure: network down, host not found, invalid SSL certificate, etc
+ 2. HTTP non-success status: 401 (DENIED), 404 (NOT FOUND), 503 (SERVER PROBLEM), etc
+
+The first type of error (a connection failure) always raises an error in R, for every interface. However if the request succeeds but the server returns a non-success HTTP status code, only `curl()` and `curl_download()` raise an error. Let's dive a little deeper into this.
+
+### Error automatically
+
+The `curl` and `curl_download` functions are the safest to use because they automatically raise an error if the request completed but the server returned a non-success (400 or higher) HTTP status. This mimics the behavior of the base functions `url` and `download.file`. Therefore we can safely write code like this:
+
+```{r}
+# This is OK
+curl_download('https://cran.r-project.org/CRAN_mirrors.csv', 'mirrors.csv')
+mirrors <- read.csv('mirrors.csv')
+unlink('mirrors.csv')
+```
+
+If the HTTP request was unsuccessful, R will not continue:
+
+```{r, error=TRUE, purl = FALSE}
+# Oops! A typo in the URL!
+curl_download('https://cran.r-project.org/CRAN_mirrorZ.csv', 'mirrors.csv')
+con <- curl('https://cran.r-project.org/CRAN_mirrorZ.csv')
+open(con)
+```
+
+```{r, echo = FALSE, message = FALSE, warning=FALSE}
+close(con)
+rm(con)
+```
+
+
+### Check manually
+
+When using any of the `curl_fetch_*` functions it is important to realize that these do **not** raise an error if the request completed but returned a non-success status code. When using `curl_fetch_memory` or `curl_fetch_disk` you need to implement such application logic yourself and check whether the response was successful.
+
+```{r}
+req <- curl_fetch_memory('https://cran.r-project.org/CRAN_mirrors.csv')
+print(req$status_code)
+```
+
+The same holds when downloading to disk: if you do not check the status, you might have downloaded an error page!
+
+```{r}
+# Oops a typo!
+req <- curl_fetch_disk('https://cran.r-project.org/CRAN_mirrorZ.csv', 'mirrors.csv')
+print(req$status_code)
+
+# This is not the CSV file we were expecting!
+head(readLines('mirrors.csv'))
+unlink('mirrors.csv')
+```
+
+If you *do* want the `curl_fetch_*` functions to automatically raise an error, you should set the [`FAILONERROR`](https://curl.haxx.se/libcurl/c/CURLOPT_FAILONERROR.html) option to `TRUE` in the handle of the request.
+
+```{r, error=TRUE, purl = FALSE}
+h <- new_handle(failonerror = TRUE)
+curl_fetch_memory('https://cran.r-project.org/CRAN_mirrorZ.csv', handle = h)
+```
+
+## Customizing requests
+
+By default libcurl uses HTTP GET to issue a request to an HTTP URL. To send a customized request, we first need to create and configure a curl handle object which is then passed to the specific download interface.
+
+### Configuring a handle
+
+Creating a new handle is done using `new_handle`. After creating a handle object, we can set the libcurl options and http request headers.
+
+```{r}
+h <- new_handle()
+handle_setopt(h, copypostfields = "moo=moomooo")
+handle_setheaders(h,
+  "Content-Type" = "text/moo",
+  "Cache-Control" = "no-cache",
+  "User-Agent" = "A cow"
+)
+```
+
+Use the `curl_options()` function to get a list of the options supported by your version of libcurl. The [libcurl documentation](http://curl.haxx.se/libcurl/c/curl_easy_setopt.html) explains what each option does. Option names are not case sensitive.
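+
+For instance, a quick sketch of querying the options list (the exact output depends on your libcurl version; the `"timeout"` filter string here is just an illustration):
+
+```{r}
+# Named vector of all option constants supported by this libcurl
+opts <- curl_options()
+length(opts)
+
+# Filter options by (case insensitive) name, e.g. everything timeout-related
+curl_options("timeout")
+```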
+
+After the handle has been configured, it can be used with any of the download interfaces to perform the request. For example `curl_fetch_memory` will store the output of the request in memory:
+
+```{r}
+req <- curl_fetch_memory("http://httpbin.org/post", handle = h)
+cat(rawToChar(req$content))
+```
+
+Alternatively we can use `curl()` to read the data via a connection interface:
+
+```{r}
+con <- curl("http://httpbin.org/post", handle = h)
+cat(readLines(con), sep = "\n")
+```
+
+```{r, echo = FALSE, message = FALSE, warning=FALSE}
+close(con)
+```
+
+Or we can use `curl_download` to write the response to disk:
+
+```{r}
+tmp <- tempfile()
+curl_download("http://httpbin.org/post", destfile = tmp, handle = h)
+cat(readLines(tmp), sep = "\n")
+```
+
+Or perform the same request with a multi pool:
+
+```{r}
+curl_fetch_multi("http://httpbin.org/post", handle = h, done = function(res){
+  cat("Request complete! Response content:\n")
+  cat(rawToChar(res$content))
+})
+
+# Perform the request
+out <- multi_run()
+```
+
+### Reading cookies
+
+Curl handles automatically keep track of cookies set by the server. At any given point we can use `handle_cookies` to see a list of current cookies in the handle.
+
+```{r}
+# Start with a fresh handle
+h <- new_handle()
+
+# Ask server to set some cookies
+req <- curl_fetch_memory("http://httpbin.org/cookies/set?foo=123&bar=ftw", handle = h)
+req <- curl_fetch_memory("http://httpbin.org/cookies/set?baz=moooo", handle = h)
+handle_cookies(h)
+
+# Unset a cookie
+req <- curl_fetch_memory("http://httpbin.org/cookies/delete?foo", handle = h)
+handle_cookies(h)
+```
+
+The `handle_cookies` function returns a data frame with 7 columns as specified in the [Netscape cookie file format](http://www.cookiecentral.com/faq/#3.5).
+
+### On reusing handles
+
+In most cases you should not re-use a single handle object for more than one request. The only benefit of reusing a handle for multiple requests is to keep track of cookies set by the server (seen above). This could be needed if your server uses session cookies, but this is rare these days. Most APIs set state explicitly via http headers or parameters, rather than implicitly via cookies.
+
+In recent versions of the curl package there is no performance benefit to reusing handles. The overhead of creating and configuring a new handle object is negligible. The safest way to issue multiple requests, whether to a single server or to multiple servers, is to use a separate handle for each request (which is the default).
+
+```{r}
+req1 <- curl_fetch_memory("https://httpbin.org/get")
+req2 <- curl_fetch_memory("http://www.r-project.org")
+```
+
+In past versions of this package you needed to manually reuse a handle to take advantage of HTTP Keep-Alive. However, as of version 2.3 this is no longer the case: curl automatically maintains a global pool of open HTTP connections shared by all handles. When performing many requests to the same server, curl automatically reuses existing connections when possible, eliminating TCP/SSL handshaking overhead:
+
+```{r}
+req <- curl_fetch_memory("https://api.github.com/users/ropensci")
+req$times
+
+req2 <- curl_fetch_memory("https://api.github.com/users/rstudio")
+req2$times
+```
+
+If you really need to reuse a handle, do note that curl does not clean up the handle after each request. All of the options and internal fields will linger for all future requests until explicitly reset or overwritten. This can sometimes lead to unexpected behavior.
+
+```{r}
+handle_reset(h)
+```
+
+The `handle_reset` function resets all curl options and request headers to their default values. It will **not** erase cookies, and it will keep the connections alive. Therefore it is good practice to call `handle_reset` after performing a request if you want to reuse the handle for a subsequent request. Still, it is always safer to create a fresh handle when possible, rather than recycling old ones.
+
+### Posting forms
+
+The `handle_setform` function is used to perform a `multipart/form-data` HTTP POST request (a.k.a. posting a form). Values can be either strings, raw vectors (for binary data) or files.
+
+```{r}
+# Posting multipart
+h <- new_handle()
+handle_setform(h,
+  foo = "blabla",
+  bar = charToRaw("boeboe"),
+  iris = form_data(serialize(iris, NULL), "application/rda"),
+  description = form_file(system.file("DESCRIPTION")),
+  logo = form_file(file.path(Sys.getenv("R_DOC_DIR"), "html/logo.jpg"), "image/jpeg")
+)
+req <- curl_fetch_memory("http://httpbin.org/post", handle = h)
+```
+
+The `form_file` function is used to upload files with the form post. It has two arguments: a file path, and optionally a content-type value. If no content-type is set, curl will guess the content type of the file based on the file extension.
+
+The `form_data` function is similar but simply posts a string or raw value with a custom content-type.
+
+### Using pipes
+
+All of the `handle_xxx` functions return the handle object so that function calls can be chained using the popular pipe operators:
+
+```{r}
+library(magrittr)
+
+new_handle() %>%
+  handle_setopt(copypostfields = "moo=moomooo") %>%
+  handle_setheaders("Content-Type" = "text/moo", "Cache-Control" = "no-cache", "User-Agent" = "A cow") %>%
+  curl_fetch_memory(url = "http://httpbin.org/post") %$% content %>% rawToChar %>% cat
+```
