[med-svn] [r-cran-tidyr] 01/04: New upstream version 0.7.1

Andreas Tille tille at debian.org
Fri Oct 13 06:51:06 UTC 2017


This is an automated email from the git hooks/post-receive script.

tille pushed a commit to branch master
in repository r-cran-tidyr.

commit b6d112767a7a7bd47bc5edf2b41f4a561986a57e
Author: Andreas Tille <tille at debian.org>
Date:   Fri Oct 13 08:46:11 2017 +0200

    New upstream version 0.7.1
---
 DESCRIPTION                       |  14 +-
 MD5                               | 136 ++++++-----
 NAMESPACE                         |  87 ++++---
 NEWS.md                           | 182 +++++++++++++++
 R/RcppExports.R                   |   8 +-
 R/compat-lazyeval.R               |  90 ++++++++
 R/complete.R                      |  58 ++---
 R/data.R                          |  21 +-
 R/drop-na.R                       |  45 ++++
 R/drop_na.r                       |  53 -----
 R/expand.R                        | 142 ++++++------
 R/extract.R                       | 109 +++++----
 R/fill.R                          |  54 +++--
 R/gather.R                        | 209 ++++++++++-------
 R/id.R                            |   9 +-
 R/nest.R                          | 110 +++++----
 R/replace_na.R                    |  14 +-
 R/separate-rows.R                 |  60 +++--
 R/separate.R                      | 196 ++++++++--------
 R/seq.R                           |   4 +-
 R/spread.R                        | 167 +++++++-------
 R/tidyr.R                         |  37 +++
 R/unite.R                         |  78 +++----
 R/unnest.R                        | 162 +++++++------
 R/utils.R                         |  47 ++--
 README.md                         |   4 +-
 build/vignette.rds                | Bin 199 -> 199 bytes
 inst/doc/tidy-data.Rmd            |   6 +-
 inst/doc/tidy-data.html           | 466 ++++++++++++++++++--------------------
 man/complete.Rd                   |  39 ++--
 man/complete_.Rd                  |  21 --
 man/deprecated-se.Rd              | 164 ++++++++++++++
 man/drop_na.Rd                    |  46 +++-
 man/drop_na_.Rd                   |  19 --
 man/expand.Rd                     |  50 ++--
 man/expand_.Rd                    |  18 --
 man/extract.Rd                    |  16 +-
 man/extract_.Rd                   |  32 ---
 man/extract_numeric.Rd            |   2 +-
 man/figures/logo.png              | Bin 0 -> 16179 bytes
 man/fill.Rd                       |  25 +-
 man/fill_.Rd                      |  21 --
 man/full_seq.Rd                   |   1 -
 man/gather.Rd                     |  59 +++--
 man/gather_.Rd                    |  34 ---
 man/nest.Rd                       |  57 ++++-
 man/nest_.Rd                      |  20 --
 man/pipe.Rd                       |   1 -
 man/replace_na.Rd                 |   3 +-
 man/separate.Rd                   |  55 +++--
 man/separate_.Rd                  |  58 -----
 man/separate_rows.Rd              |  43 +++-
 man/separate_rows_.Rd             |  23 --
 man/smiths.Rd                     |   1 -
 man/spread.Rd                     |  18 +-
 man/spread_.Rd                    |  38 ----
 man/table1.Rd                     |   1 -
 man/tidyr-package.Rd              |  36 +++
 man/unite.Rd                      |  51 ++++-
 man/unite_.Rd                     |  24 --
 man/unnest.Rd                     |  29 ++-
 man/unnest_.Rd                    |  30 ---
 man/who.Rd                        |  30 +--
 src/RcppExports.cpp               |  21 +-
 tests/testthat/test-complete.R    |   8 +-
 tests/testthat/test-drop_na.R     |  46 ++--
 tests/testthat/test-expand.R      |  29 +--
 tests/testthat/test-extract.R     |   6 +-
 tests/testthat/test-fill.R        |  20 +-
 tests/testthat/test-gather.R      |  21 +-
 tests/testthat/test-id.R          |   1 -
 tests/testthat/test-nest.R        |  22 +-
 tests/testthat/test-replace_na.R  |   6 +-
 tests/testthat/test-separate.R    |  31 ++-
 tests/testthat/test-spread.R      |  41 ++--
 tests/testthat/test-underscored.R | 117 ++++++++++
 tests/testthat/test-unite.R       |  14 +-
 tests/testthat/test-unnest.R      |  65 ++++--
 vignettes/tidy-data.Rmd           |   6 +-
 79 files changed, 2256 insertions(+), 1831 deletions(-)

diff --git a/DESCRIPTION b/DESCRIPTION
index 7c0871f..ab6cffe 100644
--- a/DESCRIPTION
+++ b/DESCRIPTION
@@ -1,8 +1,9 @@
 Package: tidyr
 Title: Easily Tidy Data with 'spread()' and 'gather()' Functions
-Version: 0.6.1
+Version: 0.7.1
 Authors at R: c(
     person("Hadley", "Wickham", , "hadley at rstudio.com", c("aut", "cre")),
+    person("Lionel", "Henry", , "lionel at rstudio.com", "aut"),
     person("RStudio", role = "cph")
     )
 Description: An evolution of 'reshape2'. It's designed specifically for data
@@ -11,18 +12,19 @@ Description: An evolution of 'reshape2'. It's designed specifically for data
 Depends: R (>= 3.1.0)
 License: MIT + file LICENSE
 LazyData: true
-Imports: tibble, dplyr (>= 0.4), stringi, lazyeval, magrittr, Rcpp
+Imports: dplyr (>= 0.7.0), glue, magrittr, purrr, rlang, Rcpp, stringi,
+        tibble, tidyselect
 URL: http://tidyr.tidyverse.org, https://github.com/tidyverse/tidyr
 BugReports: https://github.com/tidyverse/tidyr/issues
 Suggests: knitr, testthat, covr, gapminder, rmarkdown
-Remotes: RcppCore/Rcpp
 VignetteBuilder: knitr
 LinkingTo: Rcpp
-RoxygenNote: 5.0.1
+RoxygenNote: 6.0.1
 NeedsCompilation: yes
-Packaged: 2017-01-09 23:43:16 UTC; hadley
+Packaged: 2017-08-24 14:15:41 UTC; lionel
 Author: Hadley Wickham [aut, cre],
+  Lionel Henry [aut],
   RStudio [cph]
 Maintainer: Hadley Wickham <hadley at rstudio.com>
 Repository: CRAN
-Date/Publication: 2017-01-10 10:17:53
+Date/Publication: 2017-09-01 15:15:47 UTC
diff --git a/MD5 b/MD5
index 499db45..4e03373 100644
--- a/MD5
+++ b/MD5
@@ -1,27 +1,29 @@
-93b72e5cb89d5911b02f435c5754eb4c *DESCRIPTION
+cde6f0d6d9862ea9c02d7a8f74c2a4ef *DESCRIPTION
 1734bf7b2a958fa874a85d6417f4a0e0 *LICENSE
-69415830da5f76ca03747640899ed454 *NAMESPACE
-bdbe89cf21c4c4bf54f2745051ba0cc5 *NEWS.md
-954a1ab6fbadfa36af9222a046664143 *R/RcppExports.R
-d81e363a14c674d1d0f7a784c1642e8f *R/complete.R
-ab3a1f6c7f05ac4e8f2c2da3b04f46b6 *R/data.R
-a0e42f67ec46d0ccd9fb8927cc9549c4 *R/drop_na.r
-6aa4f2e707e4297a5c6b4fe433909098 *R/expand.R
-70dbefd9caf92fb9be222807e2416711 *R/extract.R
-575ff59fce1df34f9b4205c4f9d1036b *R/fill.R
-a9dfcb7fe92bf522cdf08e0e17d208f4 *R/gather.R
-05d434d62e6eb6cb87535f45ab41b5a3 *R/id.R
-14c7eabbfeecc456b74e9d7664b7c6a5 *R/nest.R
-bfafd89382c2c46dfeb755b2ca83ae00 *R/replace_na.R
-cb6bfff8a7400aa2f7c7d45c76406a62 *R/separate-rows.R
-0679452bd8125ade900820ee0b7f8282 *R/separate.R
-1ec1ed4eeabc06fc5ecc9c8a30955ec6 *R/seq.R
-3575bb95e32986bb6d3f2c5d3fc3bf9d *R/spread.R
-01f8ccb33ba3d65f025113f5a6500eaf *R/unite.R
-3db366b7eac86f7c620aef3a96d79b75 *R/unnest.R
-66ed6d43aa268e7e273c1ccc3394b79c *R/utils.R
-9d8af982d09f1a02ca0da12407bcea19 *README.md
-b503501baf7904cddb1bfc0ecfae62c8 *build/vignette.rds
+3c2147163007eb93226799a3c1b504cb *NAMESPACE
+7207da2c9eccf02933f1c2943bc529c1 *NEWS.md
+d5af2bc872fd256dd82f2607ea2aff67 *R/RcppExports.R
+79ea586e36b0123e161a26e98bd99b64 *R/compat-lazyeval.R
+61e0ad373cfce05de2c489ef9b60ae19 *R/complete.R
+e772ef2ee60ba55dc6e3b13630c67597 *R/data.R
+8f0c5d36191fafe5131a26ce2dbea879 *R/drop-na.R
+879a422a977f98b329cd4fcd27fcdb60 *R/expand.R
+6f091ae6e4aeefc42913a0cfe9a8fa55 *R/extract.R
+a703420c10f3bc22ac51104adb5c6b97 *R/fill.R
+0cfc2fcac7717cd157a19c433daad751 *R/gather.R
+1782474ccca6bd3a688775757d31175b *R/id.R
+0c515b16500dad45447331dd855be8f1 *R/nest.R
+5763e52a7b14ad4c4ca09a6bd8d9437e *R/replace_na.R
+7c4594c6c21d88cbb1a5540ccc2d58b0 *R/separate-rows.R
+3fe96a5046df1ca5a1ca1e23eedd2a4e *R/separate.R
+d6746fae28232c2c13cb5cb61da547e8 *R/seq.R
+e03af1810aa7ec04c02bd3e2bfb846a9 *R/spread.R
+f6f27545149d75e3e0b18aefc7f0bcca *R/tidyr.R
+48603c047f87f084d9d780232b4e941e *R/unite.R
+0c47542be37eef2d0a7dc359bb5d8cc5 *R/unnest.R
+e7a276c42c0bc73d94807dcfd049cfef *R/utils.R
+5f0e633bf5cd9c5530214697ddc05449 *README.md
+a039b419751a933150c18ad4212c68f6 *build/vignette.rds
 4c2c3340bd4f8199e23710ccef6f1f63 *data/population.rdata
 9676be6a02d57f111e0fff2c0c33b6b6 *data/smiths.rda
 e6228478eb819186592e71cfe1bd76c1 *data/table1.rdata
@@ -38,62 +40,54 @@ f3284df0b78edfb5a2c9f5e44cd3bc65 *demo/dadmom.R
 6300912ba5650aa8515b2bc109f11073 *demo/so-17481212.R
 4c61156afe9636ec80849e49fa98a0a3 *demo/so-9684671.R
 c4383a3f9fca197d86b0ae4a22abc79a *inst/doc/tidy-data.R
-d7c5c9f934bcb3bdd4eebba3555407c7 *inst/doc/tidy-data.Rmd
-8041a8505a6886e5a465bcf3357c854e *inst/doc/tidy-data.html
-54a446affa0b0fea3b9902c112df182a *man/complete.Rd
-1307879ff06822f095e5490efaf2399e *man/complete_.Rd
-f0216f78e6454f4df08642ae9e57f2e0 *man/drop_na.Rd
-31ca7ab6e8ae8c0a068e4e7d5a2a0d8c *man/drop_na_.Rd
-d1d7bdb9ac2ba63ed23a0a0177cdc45f *man/expand.Rd
-d20e5c28ad74b37476851084775e021f *man/expand_.Rd
-be7fd4bee27222ba84be88104bab14a0 *man/extract.Rd
-e300585030ed12423f827acdb17fbd49 *man/extract_.Rd
-2fbf27d34dde004a4a9b7f58ae7f5757 *man/extract_numeric.Rd
-fde5eb42318b90ac0c5e31f4f976a2b6 *man/fill.Rd
-0977867e363ba2d59d29d9e51d7ae416 *man/fill_.Rd
-d21a3a50750ff72e1303ec08b9d528d2 *man/full_seq.Rd
-5a7aaf29140ed2ece9e73c1e4b116d3a *man/gather.Rd
-9176f5e6f83f259b57e338fe361a724e *man/gather_.Rd
-b81cf81cae739668e6c900626ba72e6c *man/nest.Rd
-1677d1bb599ed0116ce4f6b436d06dd8 *man/nest_.Rd
-4f421d92452ec80d7c2c173103308bde *man/pipe.Rd
-9d254d4503cb8b41243123a40b178884 *man/replace_na.Rd
-bcffb2eed13490861906477d1380f1fe *man/separate.Rd
-ea79a19ccbbe53dca689212538021713 *man/separate_.Rd
-4269ef68792f99eea4ac68021562c939 *man/separate_rows.Rd
-0052f36149cc2282322c55904184d86d *man/separate_rows_.Rd
-7bed55f805f5937d3498f2d4ce0b02a5 *man/smiths.Rd
-dca755ded7f1a273eaba4ae657603d83 *man/spread.Rd
-38b39720e1f0582d907a63832ffd74d6 *man/spread_.Rd
-265c69b9da802a6e2be9bc494c4cee80 *man/table1.Rd
-ef701d529989ef37dd7fed809aba6829 *man/unite.Rd
-bef0f131fd8b03f7bd1218520967f327 *man/unite_.Rd
-e0357c1e0f10d5b6a7d2c0ee2fe1f0ce *man/unnest.Rd
-efdc032592985292d3eb8f408a97e6a4 *man/unnest_.Rd
-d39b7c92ce82050c472bbc2b3b47bde8 *man/who.Rd
-8e8d4735cf966afd8abfdca4db114144 *src/RcppExports.cpp
+296da1cc768b3709970402d615c512b2 *inst/doc/tidy-data.Rmd
+6d6aaeefef2147b8af25fa9604f74aba *inst/doc/tidy-data.html
+89e7aa3629e3af61431582cb0b9882d1 *man/complete.Rd
+6a1ba38c59e9935977006c90db7f47c8 *man/deprecated-se.Rd
+6971dd7d01266c075b63eea069d09f1f *man/drop_na.Rd
+922e177c053cb58a09e03109370d0f14 *man/expand.Rd
+f1d7793b2d4be4dcb4966997fab23b22 *man/extract.Rd
+830ad8bb930106cbdfc9c5bfed88f16c *man/extract_numeric.Rd
+58683d719cadab671f0336bab86bbf27 *man/figures/logo.png
+93cd88dd9383c50c3920b61d56c127c4 *man/fill.Rd
+752d11d00b9c415bf2ae710f5eb0726b *man/full_seq.Rd
+b6a644fb52a8df85cfe348779a354135 *man/gather.Rd
+bc292160a146aee75f5084b5f599319a *man/nest.Rd
+0f020b37daf27c2fd4c78c574285ef1b *man/pipe.Rd
+5e919b97698936e647ed00edf71a8223 *man/replace_na.Rd
+86424e0f02cc53a8565967458cec79ba *man/separate.Rd
+cf9d82ac4f3b53b032350c96f416380b *man/separate_rows.Rd
+06b27539deeb757867fffeaccc3e6558 *man/smiths.Rd
+8a30ebf5b4f8f9ea6c7c7d935aaebea3 *man/spread.Rd
+d666306ea81966125708a72e36efc617 *man/table1.Rd
+9746e5d604242d68b24d123d2a5d96d4 *man/tidyr-package.Rd
+c88a0ea47ba7ba780275de638c2936ad *man/unite.Rd
+c8ba478dc1fb90bc2c84ea2e3871bb66 *man/unnest.Rd
+098fdd0edc34de56a2da62f5dd22373a *man/who.Rd
+aa56ef8384b525ea2846f3cdb59b92e5 *src/RcppExports.cpp
 81db5dd38227b4cab4713128f04f46c1 *src/fill.cpp
 e9fa31140b3e8191fc77a1c114d2ad5a *src/melt.cpp
 32534931093398158fef10463826e304 *src/simplifyPieces.cpp
 14fd04cc33329083bbe4c25bdd2f0531 *tests/testthat.R
-01de56d1ccecb5a5146f9376f8592909 *tests/testthat/test-complete.R
-aad28d483bb306f9d89b5a309003facf *tests/testthat/test-drop_na.R
-4abc8535438e7b6f9517db46444dd715 *tests/testthat/test-expand.R
-5156e43deafd544e8f607004caef56e9 *tests/testthat/test-extract.R
-dcb1ed9eda37a98a5697b9ec201647a7 *tests/testthat/test-fill.R
+0596c84dbd8e83646f1ee3e2a798d4f2 *tests/testthat/test-complete.R
+2e8a9bbcf302cf2b96d9b21baa9a76ee *tests/testthat/test-drop_na.R
+268cd27319e673da4e422e7e2a48a06b *tests/testthat/test-expand.R
+3d4f4ce4fa98d50fade3a2352bb63c33 *tests/testthat/test-extract.R
+52bdaf7932812e1bf7b5b34ae12fc7aa *tests/testthat/test-fill.R
 b0a7fb6ecf9db133274a91a5e329d6f1 *tests/testthat/test-full_seq.R
-7f660277ad9193ac08aec7c9f10cf07f *tests/testthat/test-gather.R
-8bca207ae4f6d821867657be9be62dd1 *tests/testthat/test-id.R
-d84be2784e0646621dea944f3a1a5871 *tests/testthat/test-nest.R
-786038996580d40bb690c40634662dd5 *tests/testthat/test-replace_na.R
-aec786c69cc7f69edcd5b1ffa8801b75 *tests/testthat/test-separate.R
-a5c5c16a4ca7d044e23b818d83ba056e *tests/testthat/test-spread.R
-ca186bbd753d3ca24efc7db55f2dade6 *tests/testthat/test-unite.R
-72f9dc338715d9e22ce1242e25edd395 *tests/testthat/test-unnest.R
+f1a2c9fe2acd33a44e7ce1522f2125a9 *tests/testthat/test-gather.R
+f3eab4757a75d067572f56a8cd2fa4df *tests/testthat/test-id.R
+27c5bb9b05002b9ed64efffcc076c788 *tests/testthat/test-nest.R
+93135c802368f5391e817cd05add0c1f *tests/testthat/test-replace_na.R
+0c42de930422f560478c509972ace9e9 *tests/testthat/test-separate.R
+a91b5b14318349c8490fb4c719b1d8cf *tests/testthat/test-spread.R
+733a68e17806af6e775a72ac31a7947c *tests/testthat/test-underscored.R
+e5481a1d49d145db4c477d47bf6b3392 *tests/testthat/test-unite.R
+70aa1570a7b1907fb1b33c0c7a25bb84 *tests/testthat/test-unnest.R
 54858865b5d09e66c0541c370836818a *vignettes/billboard.csv
 1f28c63a2a0c3419cb241d93be18f7ea *vignettes/pew.csv
 8874e836f5787180dad68e7fa8105072 *vignettes/preg.csv
 c1bd3e72fdd27a35421e84636cb127a0 *vignettes/preg2.csv
 6144ebd1068581258c02ed88fff198c3 *vignettes/tb.csv
-d7c5c9f934bcb3bdd4eebba3555407c7 *vignettes/tidy-data.Rmd
+296da1cc768b3709970402d615c512b2 *vignettes/tidy-data.Rmd
 f85f432d796495a2df1fedfcbd15ad7d *vignettes/weather.csv
diff --git a/NAMESPACE b/NAMESPACE
index f76992c..0762a95 100644
--- a/NAMESPACE
+++ b/NAMESPACE
@@ -1,44 +1,46 @@
 # Generated by roxygen2: do not edit by hand
 
+S3method(complete,data.frame)
 S3method(complete_,data.frame)
-S3method(complete_,grouped_df)
+S3method(drop_na,data.frame)
+S3method(drop_na,default)
 S3method(drop_na_,data.frame)
-S3method(drop_na_,grouped_df)
-S3method(drop_na_,tbl_df)
+S3method(expand,data.frame)
+S3method(expand,default)
+S3method(expand,grouped_df)
 S3method(expand_,data.frame)
-S3method(expand_,grouped_df)
-S3method(expand_,tbl_df)
+S3method(extract,data.frame)
+S3method(extract,default)
 S3method(extract_,data.frame)
-S3method(extract_,grouped_df)
-S3method(extract_,tbl_df)
+S3method(fill,data.frame)
+S3method(fill,default)
+S3method(fill,grouped_df)
 S3method(fill_,data.frame)
-S3method(fill_,grouped_df)
 S3method(full_seq,Date)
 S3method(full_seq,POSIXct)
 S3method(full_seq,numeric)
+S3method(gather,data.frame)
+S3method(gather,default)
 S3method(gather_,data.frame)
-S3method(gather_,grouped_df)
-S3method(gather_,tbl_df)
+S3method(nest,data.frame)
+S3method(nest,default)
 S3method(nest_,data.frame)
-S3method(nest_,grouped_df)
-S3method(nest_,tbl_df)
 S3method(replace_na,data.frame)
-S3method(replace_na,tbl_df)
+S3method(separate,data.frame)
+S3method(separate,default)
 S3method(separate_,data.frame)
-S3method(separate_,grouped_df)
-S3method(separate_,tbl_df)
+S3method(separate_rows,data.frame)
+S3method(separate_rows,default)
 S3method(separate_rows_,data.frame)
-S3method(separate_rows_,grouped_df)
-S3method(separate_rows_,tbl_df)
+S3method(spread,data.frame)
+S3method(spread,default)
 S3method(spread_,data.frame)
-S3method(spread_,grouped_df)
-S3method(spread_,tbl_df)
+S3method(unite,data.frame)
+S3method(unite,default)
 S3method(unite_,data.frame)
-S3method(unite_,grouped_df)
-S3method(unite_,tbl_df)
+S3method(unnest,data.frame)
+S3method(unnest,default)
 S3method(unnest_,data.frame)
-S3method(unnest_,grouped_df)
-S3method(unnest_,tbl_df)
 export("%>%")
 export(complete)
 export(complete_)
@@ -71,10 +73,41 @@ export(unite)
 export(unite_)
 export(unnest)
 export(unnest_)
+import(rlang)
 importFrom(Rcpp,sourceCpp)
+importFrom(glue,glue)
 importFrom(magrittr,"%>%")
-importFrom(stats,setNames)
-importFrom(tibble,as_data_frame)
-importFrom(tibble,data_frame)
+importFrom(purrr,accumulate)
+importFrom(purrr,accumulate_right)
+importFrom(purrr,discard)
+importFrom(purrr,every)
+importFrom(purrr,keep)
+importFrom(purrr,map)
+importFrom(purrr,map2)
+importFrom(purrr,map2_chr)
+importFrom(purrr,map2_dbl)
+importFrom(purrr,map2_df)
+importFrom(purrr,map2_int)
+importFrom(purrr,map2_lgl)
+importFrom(purrr,map_at)
+importFrom(purrr,map_call)
+importFrom(purrr,map_chr)
+importFrom(purrr,map_dbl)
+importFrom(purrr,map_df)
+importFrom(purrr,map_if)
+importFrom(purrr,map_int)
+importFrom(purrr,map_lgl)
+importFrom(purrr,pmap)
+importFrom(purrr,pmap_chr)
+importFrom(purrr,pmap_dbl)
+importFrom(purrr,pmap_df)
+importFrom(purrr,pmap_int)
+importFrom(purrr,pmap_lgl)
+importFrom(purrr,reduce)
+importFrom(purrr,reduce_right)
+importFrom(purrr,some)
+importFrom(purrr,transpose)
+importFrom(tibble,as_tibble)
+importFrom(tibble,tibble)
 importFrom(utils,type.convert)
-useDynLib(tidyr)
+useDynLib(tidyr, .registration = TRUE)
diff --git a/NEWS.md b/NEWS.md
index 14c030c..64e31a1 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -1,3 +1,184 @@
+
+# tidyr 0.7.1
+
+This is a hotfix release to account for some tidyselect changes in the
+unit tests.
+
+Note that the upcoming version of tidyselect backtracks on some of the
+changes announced for 0.7.0. The special evaluation semantics for
+selection have been changed back to the old behaviour because the new
+rules were causing too much trouble and confusion. From now on data
+expressions (symbols and calls to `:` and `c()`) can refer to both
+registered variables and to objects from the context.
+
+However the semantics for context expressions (any calls other than to
+`:` and `c()`) remain the same. Those expressions are evaluated in the
+context only and cannot refer to registered variables. If you're
+writing functions and refer to contextual objects, it is still a good
+idea to avoid data expressions by following the advice of the 0.7.0
+release notes.
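
[Editor's note: the relaxed lookup described above can be sketched as follows. This is an illustrative example, not part of the upstream diff; behaviour assumes tidyselect after the 0.7.1 backtrack.]

```r
library(tidyr)

df <- tibble::tibble(a = 1, b = 2, c = 3)
x <- 2

# `1:x` is a data expression; with no column named `x`, the relaxed
# rules fall back to the `x` defined in the calling context:
gather(df, key, value, 1:x)

# a context expression such as `seq_len(x)` (any call other than `:`
# or `c()`) is evaluated in the context only, so it is never ambiguous:
gather(df, key, value, seq_len(x))
```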
+
+
+# tidyr 0.7.0
+
+This release includes important changes to tidyr internals. Tidyr now
+supports the new tidy evaluation framework for quoting (NSE)
+functions. It also uses the new tidyselect package as selecting
+backend.
+
+
+## Breaking changes
+
+- If you see error messages about objects or functions not found, it
+  is likely because the selecting functions are now stricter in their
+  arguments. An example of a selecting function is `gather()` and its
+  `...` argument. This change makes the code more robust by
+  disallowing ambiguous scoping. Consider the following code:
+
+  ```
+  x <- 3
+  df <- tibble(w = 1, x = 2, y = 3)
+  gather(df, "variable", "value", 1:x)
+  ```
+
+  Does it select the first three columns (using the `x` defined in the
+  global environment), or does it select the first two columns (using
+  the column named `x`)?
+
+  To solve this ambiguity, we now make a strict distinction between
+  data and context expressions. A data expression is either a bare
+  name or an expression like `x:y` or `c(x, y)`. In a data expression,
+  you can only refer to columns from the data frame. Everything else
+  is a context expression in which you can only refer to objects that
+  you have defined with `<-`.
+
+  In practice this means that you can no longer refer to contextual
+  objects like this:
+
+  ```
+  mtcars %>% gather(var, value, 1:ncol(mtcars))
+
+  x <- 3
+  mtcars %>% gather(var, value, 1:x)
+  mtcars %>% gather(var, value, -(1:x))
+  ```
+
+  You now have to be explicit about where to find objects. To do so,
+  you can use the quasiquotation operator `!!` which will evaluate its
+  argument early and inline the result:
+
+  ```{r}
+  mtcars %>% gather(var, value, !! 1:ncol(mtcars))
+  mtcars %>% gather(var, value, !! 1:x)
+  mtcars %>% gather(var, value, !! -(1:x))
+  ```
+
+  An alternative is to turn your data expression into a context
+  expression by using `seq()` or `seq_len()` instead of `:`. See the
+  section on tidyselect for more information about these semantics.
+
+- Following the switch to tidy evaluation, you might see warnings
+  about the "variable context not set". This is most likely caused by
+  supplying helpers like `everything()` to underscored versions of
+  tidyr verbs. Helpers should always be evaluated lazily. To fix
+  this, just quote the helper with a formula: `drop_na(df,
+  ~everything())`.
+
+- The selecting functions are now stricter when you supply integer
+  positions. If you see an error along the lines of
+
+  ```
+  `-0.949999999999999`, `-0.940000000000001`, ... must resolve to
+  integer column positions, not a double vector
+  ```
+
+  please round the positions before supplying them to tidyr. Double
+  vectors are fine as long as they are rounded.
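
[Editor's note: a minimal sketch of the rounding advice above — an illustration, not part of the upstream diff; the exact error text may differ.]

```r
library(tidyr)

df <- tibble::tibble(a = 1, b = NA, c = 3)
pos <- c(0.95, 1.95)        # positions computed with floating point

# drop_na(df, !! pos)       # errors: doubles must resolve to
                            # integer column positions
drop_na(df, !! round(pos))  # rounded doubles are accepted
```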
+
+
+## Switch to tidy evaluation
+
+tidyr is now a tidy evaluation grammar. See the
+[programming vignette](http://dplyr.tidyverse.org/articles/programming.html)
+in dplyr for practical information about tidy evaluation.
+
+The tidyr port is a bit special. While the philosophy of tidy
+evaluation is that R code should refer to real objects (from the data
+frame or from the context), we had to make some exceptions to this
+rule for tidyr. The reason is that several functions accept bare
+symbols to specify the names of _new_ columns to create (`gather()`
+being a prime example). This is not tidy because the symbol do not
+represent any actual object. Our workaround is to capture these
+arguments using `rlang::quo_name()` (so they still support
+quasiquotation and you can unquote symbols or strings). This type of
+NSE is now discouraged in the tidyverse: symbols in R code should
+represent real objects.
+
+Following the switch to tidy eval the underscored variants are softly
+deprecated. However they will remain around for some time and without
+warning for backward compatibility.
+
+
+## Switch to the tidyselect backend
+
+The selecting backend of dplyr has been extracted in a standalone
+package tidyselect which tidyr now uses for selecting variables. It is
+used for selecting multiple variables (in `drop_na()`) as well as
+single variables (the `col` argument of `extract()` and `separate()`,
+and the `key` and `value` arguments of `spread()`). This implies the
+following changes:
+
+* The arguments for selecting a single variable now support all
+  features from `dplyr::pull()`. You can supply a name or a position,
+  including negative positions.
+
+* Multiple variables are now selected a bit differently. We now make a
+  strict distinction between data and context expressions. A data
+  expression is either a bare name or an expression like `x:y` or
+  `c(x, y)`. In a data expression, you can only refer to columns from
+  the data frame. Everything else is a context expression in which you
+  can only refer to objects that you have defined with `<-`.
+
+  You can still refer to contextual objects in a data expression by
+  being explicit. One way of being explicit is to unquote a variable
+  from the environment with the tidy eval operator `!!`:
+
+  ```r
+  x <- 2
+  drop_na(df, 2)     # Works fine
+  drop_na(df, x)     # Object 'x' not found
+  drop_na(df, !! x)  # Works as if you had supplied 2
+  ```
+
+  On the other hand, select helpers like `starts_with()` are context
+  expressions. It is therefore easy to refer to objects and they will
+  never be ambiguous with data columns:
+
+  ```{r}
+  x <- "d"
+  drop_na(df, starts_with(x))
+  ```
+
+  While these special rules are in contrast to most dplyr and tidyr
+  verbs (where both the data and the context are in scope) they make
+  sense for selecting functions and should provide more robust and
+  helpful semantics.
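
[Editor's note: the `dplyr::pull()`-style single-variable selection mentioned above can be illustrated as follows — an editor's sketch, not part of the upstream diff.]

```r
library(tidyr)

df <- tibble::tibble(id = "a_b", value = 1)

separate(df, id, into = c("left", "right"))   # by name
separate(df, 1,  into = c("left", "right"))   # by position
separate(df, -2, into = c("left", "right"))   # negative position,
                                              # counting from the end
```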
+
+
+# tidyr 0.6.3
+
+* Patch tests to be compatible with dev tibble
+
+
+# tidyr 0.6.2
+
+* Register C functions
+
+* Added package docs
+
+* Patch tests to be compatible with dev dplyr.
+
+
 # tidyr 0.6.1
 
 * Patch test to be compatible with dev tibble
@@ -5,6 +186,7 @@
 * Changed deprecation message of `extract_numeric()` to point to 
   `readr::parse_number()` rather than `readr::parse_numeric()`
 
+
 # tidyr 0.6.0
 
 ## API changes
diff --git a/R/RcppExports.R b/R/RcppExports.R
index 224fa46..0235bd8 100644
--- a/R/RcppExports.R
+++ b/R/RcppExports.R
@@ -2,18 +2,18 @@
 # Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393
 
 fillDown <- function(x) {
-    .Call('tidyr_fillDown', PACKAGE = 'tidyr', x)
+    .Call(`_tidyr_fillDown`, x)
 }
 
 fillUp <- function(x) {
-    .Call('tidyr_fillUp', PACKAGE = 'tidyr', x)
+    .Call(`_tidyr_fillUp`, x)
 }
 
 melt_dataframe <- function(data, id_ind, measure_ind, variable_name, value_name, attrTemplate, factorsAsStrings, valueAsFactor, variableAsFactor) {
-    .Call('tidyr_melt_dataframe', PACKAGE = 'tidyr', data, id_ind, measure_ind, variable_name, value_name, attrTemplate, factorsAsStrings, valueAsFactor, variableAsFactor)
+    .Call(`_tidyr_melt_dataframe`, data, id_ind, measure_ind, variable_name, value_name, attrTemplate, factorsAsStrings, valueAsFactor, variableAsFactor)
 }
 
 simplifyPieces <- function(pieces, p, fillLeft = TRUE) {
-    .Call('tidyr_simplifyPieces', PACKAGE = 'tidyr', pieces, p, fillLeft)
+    .Call(`_tidyr_simplifyPieces`, pieces, p, fillLeft)
 }
 
diff --git a/R/compat-lazyeval.R b/R/compat-lazyeval.R
new file mode 100644
index 0000000..7fb3b38
--- /dev/null
+++ b/R/compat-lazyeval.R
@@ -0,0 +1,90 @@
+# nocov - compat-lazyeval (last updated: rlang 0.0.0.9018)
+
+# This file serves as a reference for compatibility functions for lazyeval.
+# Please find the most recent version in rlang's repository.
+
+
+warn_underscored <- function() {
+  return(NULL)
+  warn(paste(
+    "The underscored versions are deprecated in favour of",
+    "tidy evaluation idioms. Please see the documentation",
+    "for `quo()` in rlang"
+  ))
+}
+warn_text_se <- function() {
+  return(NULL)
+  warn("Text parsing is deprecated, please supply an expression or formula")
+}
+
+compat_lazy <- function(lazy, env = caller_env(), warn = TRUE) {
+  if (warn) warn_underscored()
+
+  if (missing(lazy)) {
+    return(quo())
+  }
+
+  coerce_type(lazy, "a quosure",
+    formula = as_quosure(lazy, env),
+    symbol = ,
+    language = new_quosure(lazy, env),
+    string = ,
+    character = {
+      if (warn) warn_text_se()
+      parse_quosure(lazy[[1]], env)
+    },
+    logical = ,
+    integer = ,
+    double = {
+      if (length(lazy) > 1) {
+        warn("Truncating vector to length 1")
+        lazy <- lazy[[1]]
+      }
+      new_quosure(lazy, env)
+    },
+    list =
+      coerce_class(lazy, "a quosure",
+        lazy = new_quosure(lazy$expr, lazy$env)
+      )
+  )
+}
+
+compat_lazy_dots <- function(dots, env, ..., .named = FALSE) {
+  if (missing(dots)) {
+    dots <- list()
+  }
+  if (inherits(dots, c("lazy", "formula"))) {
+    dots <- list(dots)
+  } else {
+    dots <- unclass(dots)
+  }
+  dots <- c(dots, list(...))
+
+  warn <- TRUE
+  for (i in seq_along(dots)) {
+    dots[[i]] <- compat_lazy(dots[[i]], env, warn)
+    warn <- FALSE
+  }
+
+  named <- have_name(dots)
+  if (.named && any(!named)) {
+    nms <- map_chr(dots[!named], f_text)
+    names(dots)[!named] <- nms
+  }
+
+  names(dots) <- names2(dots)
+  dots
+}
+
+compat_as_lazy <- function(quo) {
+  structure(class = "lazy", list(
+    expr = f_rhs(quo),
+    env = f_env(quo)
+  ))
+}
+compat_as_lazy_dots <- function(...) {
+  structure(class = "lazy_dots", map(quos(...), compat_as_lazy))
+}
+
+
+# nocov end
diff --git a/R/complete.R b/R/complete.R
index 2617c94..0fe853d 100644
--- a/R/complete.R
+++ b/R/complete.R
@@ -1,25 +1,20 @@
-#' @importFrom stats setNames
-#' @importFrom utils type.convert
-NULL
-
 #' Complete a data frame with missing combinations of data.
 #'
 #' Turns implicit missing values into explicit missing values.
-#' This is a wrapper around \code{\link{expand}()},
-#' \code{\link[dplyr]{left_join}()} and \code{\link{replace_na}} that's
+#' This is a wrapper around [expand()],
+#' [dplyr::left_join()] and [replace_na()] that's
 #' useful for completing missing combinations of data.
 #'
-#' If you supply \code{fill}, these values will also replace existing
+#' If you supply `fill`, these values will also replace existing
 #' explicit missing values in the data set.
 #'
-#' @inheritParams complete_
 #' @inheritParams expand
-#' @seealso \code{\link{complete_}} for a version that uses regular evaluation
-#'   and is suitable for programming with.
+#' @param fill A named list that for each variable supplies a single value to
+#'   use instead of `NA` for missing combinations.
 #' @export
 #' @examples
-#' library(dplyr)
-#' df <- data_frame(
+#' library(dplyr, warn.conflicts = FALSE)
+#' df <- tibble(
 #'   group = c(1:2, 1),
 #'   item_id = c(1:2, 2),
 #'   item_name = c("a", "b", "b"),
@@ -31,37 +26,32 @@ NULL
 #' # You can also choose to fill in missing values
 #' df %>% complete(group, nesting(item_id, item_name), fill = list(value1 = 0))
 complete <- function(data, ..., fill = list()) {
-  dots <- lazyeval::lazy_dots(...)
-  if (length(dots) == 0) {
-    stop("Please supply variables to complete.", call. = FALSE)
+  if (is_empty(exprs(...))) {
+    abort("Please supply variables to complete")
   }
 
-  complete_(data, dots, fill = fill)
+  UseMethod("complete")
 }
-
-#' Standard-evaluation version of \code{complete}.
-#'
-#' This is a S3 generic.
-#' @param data A data frame
-#' @param cols Columns to expand
-#' @param fill A named list that for each variable supplies a single value to
-#'   use instead of \code{NA} for missing combinations.
-#' @export
-#' @keywords internal
-complete_ <- function(data, cols, fill = list(), ...) {
-  UseMethod("complete_")
+complete.default <- function(data, ..., fill = list()) {
+  complete_(data, .dots = compat_as_lazy_dots(...), fill = fill)
 }
-
 #' @export
-complete_.data.frame <- function(data, cols, fill = list(), ...) {
-  full <- expand_(data, cols)
+complete.data.frame <- function(data, ..., fill = list()) {
+  full <- expand(data, ...)
   full <- dplyr::left_join(full, data, by = names(full))
   full <- replace_na(full, replace = fill)
 
-  full
+  reconstruct_tibble(data, full)
 }
 
+#' @rdname deprecated-se
+#' @inheritParams complete
 #' @export
-complete_.grouped_df <- function(data, cols, fill = list(), ...) {
-  regroup(NextMethod(), data)
+complete_ <- function(data, cols, fill = list(), ...) {
+  UseMethod("complete_")
+}
+#' @export
+complete_.data.frame <- function(data, cols, fill = list(), ...) {
+  cols <- compat_lazy_dots(cols, caller_env())
+  complete(data, !!! cols, fill = fill)
 }
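[Editor's note: the `complete()`/`complete_()` hunk above illustrates the recurring pattern of this release — the tidy-eval verb becomes the real implementation, and the soft-deprecated underscored variant is reduced to a shim that converts lazyeval-style inputs and splices them back in with `!!!`. A generic sketch of that pattern follows; the verb names are hypothetical, and `compat_lazy_dots()` is the helper added in `R/compat-lazyeval.R` above.]

```r
my_verb <- function(data, ...) {
  # tidy-eval implementation: capture the dots as quosures
  dots <- rlang::quos(...)
  # ... work with `dots` ...
}

my_verb_ <- function(data, dots, ...) {
  # SE variant kept for backward compatibility: convert lazyeval-style
  # dots to quosures, then splice them into the NSE verb with `!!!`
  dots <- compat_lazy_dots(dots, rlang::caller_env())
  my_verb(data, !!! dots)
}
```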
diff --git a/R/data.R b/R/data.R
index 71079ec..8c575fd 100644
--- a/R/data.R
+++ b/R/data.R
@@ -6,19 +6,20 @@
 #' @format A dataset with the variables
 #' \describe{
 #'   \item{country}{Country name}
-#'   \item{iso2,iso2}{2 & 3 letter ISO country codes}
+#'   \item{iso2, iso3}{2 & 3 letter ISO country codes}
+#'   \item{year}{Year}
 #'   \item{new_sp_m014 - new_rel_f65}{Counts of new TB cases recorded by group.
 #'    Column names encode three variables that describe the group (see details).}
 #' }
 #' @details The data uses the original codes given by the World Health
 #'   Organization. The column names for columns five through 60 are made by
-#'   combining \code{new_} to a code for method of diagnosis (\code{rel} =
-#'   relapse, \code{sn} = negative pulmonary smear, \code{sp} = positive
-#'   pulmonary smear, \code{ep} = extrapulmonary) to a code for gender
-#'   (\code{f} = female, \code{m} = male) to a code for age group (\code{014} =
-#'   0-14 yrs of age, \code{1524} = 15-24 years of age, \code{2534} = 25 to
-#'   34 years of age, \code{3544} = 35 to 44 years of age, \code{4554} = 45 to
-#'   54 years of age, \code{5564} = 55 to 64 years of age, \code{65} = 65 years
+#'   combining `new_` to a code for method of diagnosis (`rel` =
+#'   relapse, `sn` = negative pulmonary smear, `sp` = positive
+#'   pulmonary smear, `ep` = extrapulmonary) to a code for gender
+#'   (`f` = female, `m` = male) to a code for age group (`014` =
+#'   0-14 yrs of age, `1524` = 15-24 years of age, `2534` = 25 to
+#'   34 years of age, `3544` = 35 to 44 years of age, `4554` = 45 to
+#'   54 years of age, `5564` = 55 to 64 years of age, `65` = 65 years
 #'   of age or older).
 #'
 #' @source \url{http://www.who.int/tb/country/data/download/en/}
@@ -31,8 +32,8 @@
 #'
 #' Data sets that demonstrate multiple ways to layout the same tabular data.
 #'
-#' \code{table1}, \code{table2}, \code{table3}, \code{table4a}, \code{table4b},
-#' and \code{table5} all display the number of TB cases documented by the World
+#' `table1`, `table2`, `table3`, `table4a`, `table4b`,
+#' and `table5` all display the number of TB cases documented by the World
 #' Health Organization in Afghanistan, Brazil, and China between 1999 and 2000.
 #' The data contains values associated with four variables (country, year,
 #' cases, and population), but each table organizes the values in a different
diff --git a/R/drop-na.R b/R/drop-na.R
new file mode 100644
index 0000000..06ce7c5
--- /dev/null
+++ b/R/drop-na.R
@@ -0,0 +1,45 @@
+#' Drop rows containing missing values
+#'
+#' @param data A data frame.
+#' @inheritSection gather Rules for selection
+#' @inheritParams gather
+#' @examples
+#' library(dplyr)
+#' df <- tibble(x = c(1, 2, NA), y = c("a", NA, "b"))
+#' df %>% drop_na()
+#' df %>% drop_na(x)
+#' @export
+drop_na <- function(data, ...) {
+  UseMethod("drop_na")
+}
+#' @export
+drop_na.default <- function(data, ...) {
+  drop_na_(data, vars = compat_as_lazy_dots(...))
+}
+#' @export
+drop_na.data.frame <- function(data, ...) {
+  vars <- unname(tidyselect::vars_select(colnames(data), ...))
+  if (!is_character(vars)) {
+    abort("`vars` is not a character vector.")
+  }
+
+  if (is_empty(vars)) {
+    f <- stats::complete.cases(data)
+  } else {
+    f <- stats::complete.cases(data[vars])
+  }
+  out <- data[f, ]
+
+  reconstruct_tibble(data, out)
+}
+
+
+#' @rdname deprecated-se
+#' @export
+drop_na_ <- function(data, vars) {
+  UseMethod("drop_na_")
+}
+#' @export
+drop_na_.data.frame <- function(data, vars) {
+  drop_na(data, !!! vars)
+}
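
The new `drop_na()` above selects columns with `tidyselect::vars_select()` and keeps complete cases. Based on the examples in the hunk (output comments reflect the documented behaviour):

```r
library(tidyr)

df <- tibble::tibble(x = c(1, 2, NA), y = c("a", NA, "b"))

drop_na(df)       # no selection: complete cases across all columns (row 1 only)
drop_na(df, x)    # only x must be non-NA: drops row 3, keeps the NA in y
drop_na(df, x:y)  # tidyselect range syntax; here equivalent to drop_na(df)
```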
diff --git a/R/drop_na.r b/R/drop_na.r
deleted file mode 100644
index 9735434..0000000
--- a/R/drop_na.r
+++ /dev/null
@@ -1,53 +0,0 @@
-#' Drop rows containing missing values
-#'
-#' @param data A data frame.
-#' @param ... Specification of variables to consider while dropping rows.
-#'   If empty, consider all variables. Use bare variable names. Select all
-#'    variables between x and z with \code{x:z}, exclude y with \code{-y}.
-#'    For more options, see the \link[dplyr]{select} documentation.
-#' @seealso \code{\link{drop_na_}} for a version that uses regular evaluation
-#'   and is suitable for programming with.
-#' @examples
-#' library(dplyr)
-#' df <- data_frame(x = c(1, 2, NA), y = c("a", NA, "b"))
-#' df %>% drop_na()
-#' df %>% drop_na(x)
-#' @export
-drop_na <- function(data, ...) {
-  relevant_cols <- unname(dplyr::select_vars(colnames(data), ...))
-  drop_na_(data, relevant_cols)
-}
-
-#' Standard-evaluation version of \code{drop_na}.
-#'
-#' This is a S3 generic.
-#'
-#' @param data A data frame.
-#' @param vars Character vector of variable names. If empty, all
-#'    variables are considered while dropping rows.
-#' @keywords internal
-#' @export
-drop_na_ <- function(data, vars) {
-  UseMethod("drop_na_")
-}
-
-#' @export
-drop_na_.data.frame <- function(data, vars) {
-  if (!is.character(vars)) stop("`vars` is not a character vector.", call. = FALSE)
-  if (length(vars) == 0) {
-    f = stats::complete.cases(data)
-  } else {
-    f <- stats::complete.cases(data[vars])
-  }
-  data[f, ]
-}
-
-#' @export
-drop_na_.tbl_df <- function(data, vars) {
-  as_data_frame(NextMethod())
-}
-
-#' @export
-drop_na_.grouped_df <- function(data, vars) {
-  regroup(NextMethod(), data)
-}
diff --git a/R/expand.R b/R/expand.R
index 4bb8c7b..4ef821b 100644
--- a/R/expand.R
+++ b/R/expand.R
@@ -1,39 +1,36 @@
 #' Expand data frame to include all combinations of values
 #'
-#' \code{expand()} is often useful in conjunction with \code{left_join} if
+#' `expand()` is often useful in conjunction with `left_join` if
 #' you want to convert implicit missing values to explicit missing values.
-#' Or you can use it in conjunction with \code{anti_join()} to figure
+#' Or you can use it in conjunction with `anti_join()` to figure
 #' out which combinations are missing.
 #'
-#' \code{crossing()} is similar to \code{\link{expand.grid}()}, this never
-#' converts strings to factors, returns a \code{tbl_df} without additional
-#' attributes, and first factors vary slowest. \code{nesting()} is the
-#' complement to \code{crossing()}: it only keeps combinations of all variables
+#' `crossing()` is similar to [expand.grid()], but it never
+#' converts strings to factors, returns a `tbl_df` without additional
+#' attributes, and first factors vary slowest. `nesting()` is the
+#' complement to `crossing()`: it only keeps combinations of all variables
 #' that appear in the data.
 #'
-#' @inheritParams expand_
+#' @param data A data frame.
 #' @param ... Specification of columns to expand.
 #'
 #'   To find all unique combinations of x, y and z, including those not
 #'   found in the data, supply each variable as a separate argument.
 #'   To find only the combinations that occur in the data, use nest:
-#'   \code{expand(df, nesting(x, y, z))}.
+#'   `expand(df, nesting(x, y, z))`.
 #'
 #'   You can combine the two forms. For example,
-#'   \code{expand(df, nesting(school_id, student_id), date)} would produce
+#'   `expand(df, nesting(school_id, student_id), date)` would produce
 #'   a row for every student for each date.
 #'
 #'   For factors, the full set of levels (not just those that appear in the
 #'   data) are used. For continuous variables, you may need to fill in values
 #'   that don't appear in the data: to do so use expressions like
-#'   \code{year = 2010:2020} or \code{year = \link{full_seq}(year)}.
+#'   `year = 2010:2020` or `year = full_seq(year, 1)`.
 #'
 #'   Length-zero (empty) elements are automatically dropped.
-#' @param x For \code{nesting_} and \code{crossing_} a list of variables.
-#' @seealso \code{\link{complete}} for a common application of \code{expand}:
+#' @seealso [complete()] for a common application of `expand`:
 #'   completing a data frame with missing combinations.
-#' @seealso \code{\link{expand_}} for a version that uses regular evaluation
-#'   and is suitable for programming with.
 #' @export
 #' @examples
 #' library(dplyr)
@@ -45,7 +42,7 @@
 #' expand(mtcars, nesting(vs, cyl))
 #'
 #' # Implicit missings ---------------------------------------------------------
-#' df <- data_frame(
+#' df <- tibble(
 #'   year   = c(2010, 2010, 2010, 2010, 2012, 2012, 2012),
 #'   qtr    = c(   1,    2,    3,    4,    1,    2,    3),
 #'   return = rnorm(7)
@@ -59,7 +56,7 @@
 #' # Each person was given one of two treatments, repeated three times
 #' # But some of the replications haven't happened yet, so we have
 #' # incomplete data:
-#' experiment <- data_frame(
+#' experiment <- tibble(
 #'   name = rep(c("Alex", "Robert", "Sam"), c(3, 2, 1)),
 #'   trt  = rep(c("a", "b", "a"), c(3, 2, 1)),
 #'   rep = c(1, 2, 3, 1, 2, 1),
@@ -81,104 +78,107 @@
 #' # Or use the complete() short-hand
 #' experiment %>% complete(nesting(name, trt), rep)
 expand <- function(data, ...) {
-  dots <- lazyeval::lazy_dots(...)
-  expand_(data, dots)
+  UseMethod("expand")
 }
-
-#' Expand (standard evaluation).
-#'
-#' This is a S3 generic.
-#'
-#' @param data A data frame
-#' @param expand_cols Character vector of column names to be expanded.
-#' @keywords internal
 #' @export
-expand_ <- function(data, dots, ...) {
-  UseMethod("expand_")
+expand.default <- function(data, ...) {
+  expand_(data, .dots = compat_as_lazy_dots(...))
 }
-
 #' @export
-expand_.data.frame <- function(data, dots, ...) {
-  dots <- lazyeval::as.lazy_dots(dots)
-  if (length(dots) == 0)
-    return(data.frame())
+expand.data.frame <- function(data, ...) {
+  dots <- quos(..., .named = TRUE)
+  if (is_empty(dots)) {
+    return(reconstruct_tibble(data, data.frame()))
+  }
 
-  dots <- lazyeval::auto_name(dots)
-  pieces <- lazyeval::lazy_eval(dots, data)
+  pieces <- map(dots, eval_tidy, data)
+  df <- crossing(!!! pieces)
 
-  crossing_(pieces)
+  reconstruct_tibble(data, df)
 }
-
 #' @export
-expand_.tbl_df <- function(data, dots, ...) {
-  as_data_frame(NextMethod())
+expand.grouped_df <- function(data, ...) {
+  dots <- quos(...)
+  dplyr::do(data, expand(., !!! dots))
 }
 
+#' @rdname deprecated-se
+#' @param expand_cols Character vector of column names to be expanded.
 #' @export
-expand_.grouped_df <- function(data, dots, ...) {
-  dplyr::do(data, expand_(., dots, ...))
+expand_ <- function(data, dots, ...) {
+  UseMethod("expand_")
+}
+#' @export
+expand_.data.frame <- function(data, dots, ...) {
+  dots <- compat_lazy_dots(dots, caller_env())
+  expand(data, !!! dots)
 }
 
 
 # Nesting & crossing ------------------------------------------------------
 
-#' @export
 #' @rdname expand
+#' @export
 crossing <- function(...) {
-  crossing_(tibble::lst(...))
-}
+  x <- tibble::lst(...)
+  stopifnot(is_list(x))
 
-#' @export
-#' @rdname expand
-crossing_ <- function(x) {
-  stopifnot(is.list(x))
   x <- drop_empty(x)
 
-  is_atomic <- vapply(x, is.atomic, logical(1))
-  is_df <- vapply(x, is.data.frame, logical(1))
+  is_atomic <- map_lgl(x, is_atomic)
+  is_df <- map_lgl(x, is.data.frame)
   if (any(!is_df & !is_atomic)) {
     bad <- names(x)[!is_df & !is_atomic]
-    stop(
-      "Each element must be either an atomic vector or a data frame\n.",
-      "Problems: ", paste(bad, collapse = ", "), ".\n",
-      call. = FALSE
-    )
+
+    problems <- paste(bad, collapse = ", ")
+    abort(glue(
+      "Each element must be either an atomic vector or a data frame.
+       Problems: {problems}."
+    ))
+
   }
 
   # turn each atomic vector into single column data frame
-  col_df <- lapply(x[is_atomic], function(x) data_frame(x = ulevels(x)))
-  col_df <- Map(setNames, col_df, names(x)[is_atomic])
+  col_df <- map(x[is_atomic], function(x) tibble(x = ulevels(x)))
+  col_df <- map2(col_df, names(x)[is_atomic], set_names)
   x[is_atomic] <- col_df
 
   Reduce(cross_df, x)
 }
-
 cross_df <- function(x, y) {
   x_idx <- rep(seq_len(nrow(x)), each = nrow(y))
   y_idx <- rep(seq_len(nrow(y)), nrow(x))
   dplyr::bind_cols(x[x_idx, , drop = FALSE], y[y_idx, , drop = FALSE])
 }
+drop_empty <- function(x) {
+  empty <- map_lgl(x, function(x) length(x) == 0)
+  x[!empty]
+}
 
-
-#' @export
 #' @rdname expand
-#' @importFrom tibble data_frame
+#' @export
 nesting <- function(...) {
-  nesting_(tibble::lst(...))
-}
+  x <- tibble::lst(...)
 
-#' @export
-#' @rdname expand
-nesting_ <- function(x) {
-  stopifnot(is.list(x))
+  stopifnot(is_list(x))
   x <- drop_empty(x)
 
-  df <- as_data_frame(x)
+  df <- as_tibble(x)
   df <- dplyr::distinct(df)
   df[do.call(order, df), , drop = FALSE]
 }
 
-drop_empty <- function(x) {
-  empty <- vapply(x, function(x) length(x) == 0, logical(1))
-  x[!empty]
+
+#' @rdname deprecated-se
+#' @param x For `nesting_` and `crossing_` a list of variables.
+#' @export
+crossing_ <- function(x) {
+  x <- compat_lazy_dots(x, caller_env())
+  crossing(!!! x)
+}
+#' @rdname deprecated-se
+#' @export
+nesting_ <- function(x) {
+  x <- compat_lazy_dots(x, caller_env())
+  nesting(!!! x)
 }
diff --git a/R/extract.R b/R/extract.R
index 9d6df7d..9dbfecb 100644
--- a/R/extract.R
+++ b/R/extract.R
@@ -1,14 +1,25 @@
 #' Extract one column into multiple columns.
 #'
-#' Given a regular expression with capturing groups, \code{extract()} turns
+#' Given a regular expression with capturing groups, `extract()` turns
 #' each group into a new column. If the groups don't match, or the input
 #' is NA, the output will be NA.
 #'
-#' @param col Bare column name.
+#' @inheritParams expand
+#' @param col Column name or position. This is passed to
+#'   [tidyselect::vars_pull()].
+#'
+#'   This argument is passed by expression and supports
+#'   [quasiquotation][rlang::quasiquotation] (you can unquote column
+#'   names or column positions).
+#' @param into Names of new variables to create as character vector.
+#' @param regex a regular expression used to extract the desired values.
+#' @param remove If `TRUE`, remove input column from output data frame.
+#' @param convert If `TRUE`, will run [type.convert()] with
+#'   `as.is = TRUE` on new columns. This is useful if the component
+#'   columns are integer, numeric or logical.
+#' @param ... Other arguments passed on to [regexec()] to control
+#'   how the regular expression is processed.
 #' @export
-#' @inheritParams extract_
-#' @seealso \code{\link{extract_}} for a version that uses regular evaluation
-#'   and is suitable for programming with.
 #' @examples
 #' library(dplyr)
 #' df <- data.frame(x = c(NA, "a-b", "a-d", "b-c", "d-e"))
@@ -17,68 +28,70 @@
 #'
 #' # If no match, NA:
 #' df %>% extract(x, c("A", "B"), "([a-d]+)-([a-d]+)")
-extract <- function(data, col, into, regex = "([[:alnum:]]+)", remove = TRUE,
-                     convert = FALSE, ...) {
-  col <- col_name(substitute(col))
-  extract_(data, col, into, regex = regex, remove = remove, convert = convert, ...)
+extract <- function(data, col, into, regex = "([[:alnum:]]+)",
+                    remove = TRUE, convert = FALSE, ...) {
+  UseMethod("extract")
 }
-
-#' Standard-evaluation version of \code{extract}.
-#'
-#' This is a S3 generic.
-#'
-#' @param data A data frame.
-#' @param col Name of column to split, as string.
-#' @param into Names of new variables to create as character vector.
-#' @param regex a regular expression used to extract the desired values.
-#' @param remove If \code{TRUE}, remove input column from output data frame.
-#' @param convert If \code{TRUE}, will run \code{\link{type.convert}} with
-#'   \code{as.is = TRUE} on new columns. This is useful if the component
-#'   columns are integer, numeric or logical.
-#' @param ... Other arguments passed on to \code{\link{regexec}} to control
-#'   how the regular expression is processed.
-#' @keywords internal
 #' @export
-extract_ <- function(data, col, into, regex = "([[:alnum:]]+)", remove = TRUE,
-                      convert = FALSE, ...) {
-  UseMethod("extract_")
+extract.default <- function(data, col, into, regex = "([[:alnum:]]+)",
+                            remove = TRUE, convert = FALSE, ...) {
+  extract_(data,
+    col = compat_as_lazy(enquo(col)),
+    into = into,
+    regex = regex,
+    remove = remove,
+    convert = convert,
+    ...
+  )
 }
-
 #' @export
-extract_.data.frame <- function(data, col, into, regex = "([[:alnum:]]+)",
-                                 remove = TRUE, convert = FALSE, ...) {
-
-  stopifnot(is.character(col), length(col) == 1)
-  stopifnot(is.character(regex))
+extract.data.frame <- function(data, col, into, regex = "([[:alnum:]]+)",
+                               remove = TRUE, convert = FALSE, ...) {
+  var <- tidyselect::vars_pull(names(data), !! enquo(col))
+  stopifnot(
+    is_string(regex),
+    is_character(into)
+  )
 
   # Extract matching groups
-  value <- as.character(data[[col]])
-
+  value <- as.character(data[[var]])
   matches <- stringi::stri_match_first_regex(value, regex)[, -1, drop = FALSE]
-  # Use as_data_frame post https://github.com/hadley/dplyr/issues/876
-  l <- lapply(seq_len(ncol(matches)), function(i) matches[, i])
+
+  # Use as_tibble post https://github.com/hadley/dplyr/issues/876
+  l <- map(seq_len(ncol(matches)), function(i) matches[, i])
   names(l) <- enc2utf8(into)
 
   if (convert) {
-    l[] <- lapply(l, type.convert, as.is = TRUE)
+    l[] <- map(l, type.convert, as.is = TRUE)
   }
 
   # Insert into existing data frame
-  data <- append_df(data, l, which(names(data) == col))
+  out <- append_df(data, l, match(var, dplyr::tbl_vars(data)))
   if (remove) {
-    data[[col]] <- NULL
+    out[[var]] <- NULL
   }
-  data
+
+  reconstruct_tibble(data, out, if (remove) var else chr())
 }
 
+
+#' @rdname deprecated-se
+#' @inheritParams extract
 #' @export
-extract_.tbl_df <- function(data, col, into, regex = "([[:alnum:]]+)",
-                             remove = TRUE, convert = FALSE, ...) {
-  as_data_frame(NextMethod())
+extract_ <- function(data, col, into, regex = "([[:alnum:]]+)", remove = TRUE,
+                      convert = FALSE, ...) {
+  UseMethod("extract_")
 }
-
 #' @export
-extract_.grouped_df <- function(data, col, into, regex = "([[:alnum:]]+)",
+extract_.data.frame <- function(data, col, into, regex = "([[:alnum:]]+)",
                                 remove = TRUE, convert = FALSE, ...) {
-  regroup(NextMethod(), data, if (remove) col)
+  col <- compat_lazy(col, caller_env())
+  extract(data,
+    col = !! col,
+    into = into,
+    regex = regex,
+    remove = remove,
+    convert = convert,
+    ...
+  )
 }
diff --git a/R/fill.R b/R/fill.R
index 269b090..bcbbd26 100644
--- a/R/fill.R
+++ b/R/fill.R
@@ -1,4 +1,3 @@
-#' @useDynLib tidyr
 #' @importFrom Rcpp sourceCpp
 NULL
 
@@ -8,42 +7,28 @@ NULL
 #' common output format where values are not repeated, they're recorded
 #' each time they change.
 #'
-#' Missing values are replaced in atomic vectors; \code{NULL}s are replaced
+#' Missing values are replaced in atomic vectors; `NULL`s are replaced
 #' in lists.
 #'
-#' @param ... Specification of columns to fill. Use bare variable names.
-#'   Select all variables between x and z with \code{x:z}, exclude y with
-#'   \code{-y}. For more options, see the \link[dplyr]{select} documentation.
-#' @export
-#' @inheritParams extract_
-#' @inheritParams fill_
-#' @seealso \code{\link{fill_}} for a version that uses regular evaluation
-#'   and is suitable for programming with.
+#' @inheritParams expand
+#' @inheritParams gather
+#' @param .direction Direction in which to fill missing values. Currently
+#'   either "down" (the default) or "up".
 #' @export
 #' @examples
 #' df <- data.frame(Month = 1:12, Year = c(2000, rep(NA, 11)))
 #' df %>% fill(Year)
 fill <- function(data, ..., .direction = c("down", "up")) {
-  fill_cols <- unname(dplyr::select_vars(colnames(data), ...))
-  fill_(data, fill_cols, .direction = .direction)
+  UseMethod("fill")
 }
-
-#' Standard-evaluation version of \code{fill}.
-#'
-#' This is a S3 generic.
-#'
-#' @param data A data frame.
-#' @param fill_cols Character vector of column names.
-#' @param .direction Direction in which to fill missing values. Currently
-#'   either "down" (the default) or "up".
-#' @keywords internal
 #' @export
-fill_ <- function(data, fill_cols, .direction = c("down", "up")) {
-  UseMethod("fill_")
+fill.default <- function(data, ..., .direction = c("down", "up")) {
+  fill_(data, fill_cols = compat_as_lazy_dots(...), .direction = .direction)
 }
-
 #' @export
-fill_.data.frame <- function(data, fill_cols, .direction = c("down", "up")) {
+fill.data.frame <- function(data, ..., .direction = c("down", "up")) {
+  fill_cols <- unname(tidyselect::vars_select(names(data), ...))
+
   .direction <- match.arg(.direction)
   fillVector <- switch(.direction, down = fillDown, up = fillUp)
 
@@ -53,8 +38,21 @@ fill_.data.frame <- function(data, fill_cols, .direction = c("down", "up")) {
 
   data
 }
+#' @export
+fill.grouped_df <- function(data, ..., .direction = c("down", "up")) {
+  dplyr::do(data, fill(., ..., .direction = .direction))
+}
+
 
+#' @rdname deprecated-se
+#' @inheritParams fill
+#' @param fill_cols Character vector of column names.
 #' @export
-fill_.grouped_df <- function(data, fill_cols, .direction = c("down", "up")) {
-  dplyr::do(data, fill_(., fill_cols = fill_cols, .direction = .direction))
+fill_ <- function(data, fill_cols, .direction = c("down", "up")) {
+  UseMethod("fill_")
+}
+#' @export
+fill_.data.frame <- function(data, fill_cols, .direction = c("down", "up")) {
+  vars <- syms(fill_cols)
+  fill(data, !!! vars, .direction = .direction)
 }
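
With the refactor above, `fill()` is a generic whose `grouped_df` method applies the fill within each group via `dplyr::do()`. A sketch using the documented example (the grouping column is invented):

```r
library(tidyr)
library(dplyr)

df <- data.frame(Month = 1:12, Year = c(2000, rep(NA, 11)))

fill(df, Year)                     # carry 2000 downward (the default)
fill(df, Year, .direction = "up")  # or fill upward instead

# Grouped data frames are filled within each group, so the second
# group's all-NA Year stays NA rather than borrowing from group 1
df %>%
  group_by(grp = rep(1:2, each = 6)) %>%
  fill(Year)
```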
diff --git a/R/gather.R b/R/gather.R
index 298d8cc..eac643f 100644
--- a/R/gather.R
+++ b/R/gather.R
@@ -1,22 +1,64 @@
 #' Gather columns into key-value pairs.
 #'
 #' Gather takes multiple columns and collapses them into key-value pairs,
-#' duplicating all other columns as needed. You use \code{gather()} when
+#' duplicating all other columns as needed. You use `gather()` when
 #' you notice that you have columns that are not variables.
 #'
-#' @param data A data frame.
-#' @param key,value Names of key and value columns to create in output.
-#' @param ... Specification of columns to gather. Use bare variable names.
-#'   Select all variables between x and z with \code{x:z}, exclude y with
-#'   \code{-y}. For more options, see the \link[dplyr]{select} documentation.
+#' @section Rules for selection:
+#'
+#' Arguments for selecting columns are passed to
+#' [tidyselect::vars_select()] and are treated specially. Unlike other
+#' verbs, selecting functions make a strict distinction between data
+#' expressions and context expressions.
+#'
+#' * A data expression is either a bare name like `x` or an expression
+#'   like `x:y` or `c(x, y)`. In a data expression, you can only refer
+#'   to columns from the data frame.
+#'
+#' * Everything else is a context expression in which you can only
+#'   refer to objects that you have defined with `<-`.
+#'
+#' For instance, `col1:col3` is a data expression that refers to data
+#' columns, while `seq(start, end)` is a context expression that
+#' refers to objects from the context.
+#'
+#' If you really need to refer to contextual objects from a data
+#' expression, you can unquote them with the tidy eval operator
+#' `!!`. This operator evaluates its argument in the context and
+#' inlines the result in the surrounding function call. For instance,
+#' `c(x, !! x)` selects the `x` column within the data frame and the
+#' column referred to by the object `x` defined in the context (which
+#' can contain either a column name as string or a column position).
+#'
+#' @inheritParams expand
+#' @param key,value Names of new key and value columns, as strings or
+#'   symbols.
+#'
+#'   This argument is passed by expression and supports
+#'   [quasiquotation][rlang::quasiquotation] (you can unquote strings
+#'   and symbols). The name is captured from the expression with
+#'   [rlang::quo_name()] (note that this kind of interface where
+#'   symbols do not represent actual objects is now discouraged in the
+#'   tidyverse; we support it here for backward compatibility).
+#' @param ... A selection of columns. If empty, all variables are
+#'   selected. You can supply bare variable names, select all
+#'   variables between x and z with `x:z`, exclude y with `-y`. For
+#'   more options, see the [dplyr::select()] documentation. See also
+#'   the section on selection rules below.
+#' @param na.rm If `TRUE`, will remove rows from output where the
+#'   value column is `NA`.
+#' @param convert If `TRUE` will automatically run
+#'   [type.convert()] on the key column. This is useful if the column
+#'   names are actually numeric, integer, or logical.
+#' @param factor_key If `FALSE`, the default, the key values will be
+#'   stored as a character vector. If `TRUE`, will be stored as a factor,
+#'   which preserves the original ordering of the columns.
 #' @inheritParams gather_
-#' @seealso \code{\link{gather_}} for a version that uses regular evaluation
-#'   and is suitable for programming with.
 #' @export
 #' @examples
 #' library(dplyr)
 #' # From http://stackoverflow.com/questions/1181060
-#' stocks <- data_frame(
+#' stocks <- tibble(
 #'   time = as.Date('2009-01-01') + 0:9,
 #'   X = rnorm(10, 0, 1),
 #'   Y = rnorm(10, 0, 2),
@@ -41,60 +83,44 @@
 #'   group_by(Species) %>%
 #'   slice(1)
 #' mini_iris %>% gather(key = flower_att, value = measurement, -Species)
-gather <- function(data, key, value, ..., na.rm = FALSE, convert = FALSE,
-                   factor_key = FALSE) {
-  key_col <- col_name(substitute(key), "key")
-  value_col <- col_name(substitute(value), "value")
-
-  if (n_dots(...) == 0) {
-    gather_cols <- setdiff(colnames(data), c(key_col, value_col))
-  } else {
-    gather_cols <- unname(dplyr::select_vars(colnames(data), ...))
-  }
-
-  gather_(data, key_col, value_col, gather_cols, na.rm = na.rm,
-    convert = convert, factor_key = factor_key)
+gather <- function(data, key = "key", value = "value", ...,
+                   na.rm = FALSE, convert = FALSE, factor_key = FALSE) {
+  UseMethod("gather")
 }
-
-n_dots <- function(...) nargs()
-
-#' Gather (standard-evaluation).
-#'
-#' This is a S3 generic.
-#'
-#' @param data A data frame
-#' @param key_col,value_col Strings giving names of key and value columns to
-#'   create.
-#' @param gather_cols Character vector giving column names to be gathered into
-#'   pair of key-value columns.
-#' @param na.rm If \code{TRUE}, will remove rows from output where the
-#'   value column in \code{NA}.
-#' @param convert If \code{TRUE} will automatically run
-#'   \code{\link{type.convert}} on the key column. This is useful if the column
-#'   names are actually numeric, integer, or logical.
-#' @param factor_key If \code{FALSE}, the default, the key values will be
-#'   stored as a character vector. If \code{TRUE}, will be stored as a factor,
-#'   which preserves the original ordering of the columns.
-#' @keywords internal
 #' @export
-gather_ <- function(data, key_col, value_col, gather_cols, na.rm = FALSE,
-                     convert = FALSE, factor_key = FALSE) {
-  UseMethod("gather_")
+gather.default <- function(data, key = "key", value = "value", ...,
+                           na.rm = FALSE, convert = FALSE,
+                           factor_key = FALSE) {
+  gather_(data,
+    key_col = compat_as_lazy(enquo(key)),
+    value_col = compat_as_lazy(enquo(value)),
+    ...,
+    na.rm = na.rm,
+    convert = convert,
+    factor_key = factor_key
+  )
 }
-
 #' @export
-gather_.data.frame <- function(data, key_col, value_col, gather_cols,
-                               na.rm = FALSE, convert = FALSE,
-                               factor_key = FALSE) {
-  ## Return if we're not doing any gathering
-  if (length(gather_cols) == 0) {
+gather.data.frame <- function(data, key = "key", value = "value", ...,
+                              na.rm = FALSE, convert = FALSE,
+                              factor_key = FALSE) {
+  key_var <- quo_name(enexpr(key))
+  value_var <- quo_name(enexpr(value))
+
+  quos <- quos(...)
+  if (is_empty(quos)) {
+    gather_vars <- setdiff(names(data), c(key_var, value_var))
+  } else {
+    gather_vars <- unname(tidyselect::vars_select(names(data), !!! quos))
+  }
+  if (is_empty(gather_vars)) {
     return(data)
   }
 
-  gather_idx <- match(gather_cols, names(data))
+  gather_idx <- match(gather_vars, names(data))
   if (anyNA(gather_idx)) {
-    missing_cols <- paste(gather_cols[is.na(gather_idx)], collapse = ", ")
-    stop("Unknown column names: ", missing_cols, call. = FALSE)
+    missing_cols <- paste(gather_vars[is.na(gather_idx)], collapse = ", ")
+    abort(glue("Unknown column names: ", missing_cols))
   }
   id_idx <- setdiff(seq_along(data), gather_idx)
 
@@ -102,39 +128,27 @@ gather_.data.frame <- function(data, key_col, value_col, gather_cols,
   args <- normalize_melt_arguments(data, gather_idx, factorsAsStrings = TRUE)
   valueAsFactor <- "factor" %in% class(args$attr_template)
 
-  df <- melt_dataframe(data,
+  out <- melt_dataframe(data,
     id_idx - 1L,
     gather_idx - 1L,
-    as.character(key_col),
-    as.character(value_col),
+    as.character(key_var),
+    as.character(value_var),
     args$attr_template,
     args$factorsAsStrings,
     as.logical(valueAsFactor),
     as.logical(factor_key)
   )
 
-  if (na.rm && anyNA(df)) {
-    missing <- is.na(df[[value_col]])
-    df <- df[!missing, ]
+  if (na.rm && anyNA(out)) {
+    missing <- is.na(out[[value_var]])
+    out <- out[!missing, ]
   }
 
   if (convert) {
-    df[[key_col]] <- type.convert(as.character(df[[key_col]]), as.is = TRUE)
+    out[[key_var]] <- type.convert(as.character(out[[key_var]]), as.is = TRUE)
   }
 
-  df
-}
-
-#' @export
-gather_.tbl_df <- function(data, key_col, value_col, gather_cols,
-                           na.rm = FALSE, convert = FALSE, factor_key = FALSE) {
-  as_data_frame(NextMethod())
-}
-
-#' @export
-gather_.grouped_df <- function(data, key_col, value_col, gather_cols,
-                               na.rm = FALSE, convert = FALSE, factor_key = FALSE) {
-  regroup(NextMethod(), data, gather_cols)
+  reconstruct_tibble(data, out, gather_vars)
 }
 
 # Functions from reshape2 -------------------------------------------------
@@ -142,7 +156,7 @@ gather_.grouped_df <- function(data, key_col, value_col, gather_cols,
 ## Get the attributes if common, NULL if not.
 normalize_melt_arguments <- function(data, measure.ind, factorsAsStrings) {
 
-  measure.attributes <- lapply(measure.ind, function(i) {
+  measure.attributes <- map(measure.ind, function(i) {
     attributes(data[[i]])
   })
 
@@ -152,22 +166,20 @@ normalize_melt_arguments <- function(data, measure.ind, factorsAsStrings) {
   if (measure.attrs.equal) {
     attr_template <- data[[measure.ind[1]]]
   } else {
-    warning("attributes are not identical across measure variables; ",
-      "they will be dropped", call. = FALSE)
+    warn(glue(
+      "attributes are not identical across measure variables;
+       they will be dropped"))
     attr_template <- NULL
   }
 
   if (!factorsAsStrings && !measure.attrs.equal) {
-    warning("cannot avoid coercion of factors when measure attributes not identical",
-      call. = FALSE)
+    warn("cannot avoid coercion of factors when measure attributes not identical")
     factorsAsStrings <- TRUE
   }
 
   ## If we are going to be coercing any factors to strings, we don't want to
   ## copy the attributes
-  any.factors <- any( sapply( measure.ind, function(i) {
-    is.factor(data[[i]])
-  }))
+  any.factors <- any(map_lgl(measure.ind, function(i) is.factor(data[[i]])))
 
   if (factorsAsStrings && any.factors) {
     attr_template <- NULL
@@ -186,3 +198,34 @@ all_identical <- function(xs) {
   }
   TRUE
 }
+
+
+#' @rdname deprecated-se
+#' @inheritParams gather
+#' @param key_col,value_col Strings giving names of key and value columns to
+#'   create.
+#' @param gather_cols Character vector giving column names to be gathered into
+#'   pair of key-value columns.
+#' @keywords internal
+#' @export
+gather_ <- function(data, key_col, value_col, gather_cols, na.rm = FALSE,
+                    convert = FALSE, factor_key = FALSE) {
+  UseMethod("gather_")
+}
+#' @export
+gather_.data.frame <- function(data, key_col, value_col, gather_cols,
+                               na.rm = FALSE, convert = FALSE,
+                               factor_key = FALSE) {
+  key_col <- compat_lazy(key_col, caller_env())
+  value_col <- compat_lazy(value_col, caller_env())
+  gather_cols <- syms(gather_cols)
+
+  gather(data,
+    key = !! key_col,
+    value = !! value_col,
+    !!! gather_cols,
+    na.rm = na.rm,
+    convert = convert,
+    factor_key = factor_key
+  )
+}
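
The selection rules documented in the hunk (data vs. context expressions, unquoting with `!!`) and the deprecated `gather_()` fallback can be sketched as follows; `stocks` is the invented frame from the roxygen example:

```r
library(tidyr)
library(dplyr)

stocks <- tibble(
  time = as.Date("2009-01-01") + 0:9,
  X = rnorm(10),
  Y = rnorm(10)
)

# Data expression: gather every column except `time`
stocks %>% gather(stock, price, -time)

# Context object: a character vector of names, unquoted with !!
to_gather <- c("X", "Y")
stocks %>% gather(stock, price, !! to_gather)

# Deprecated SE spelling; now forwards to gather() via compat_lazy()
stocks %>% gather_("stock", "price", c("X", "Y"))
```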
diff --git a/R/id.R b/R/id.R
index f28f77d..c3cada8 100644
--- a/R/id.R
+++ b/R/id.R
@@ -10,12 +10,11 @@ id <- function(.variables, drop = FALSE) {
   }
 
   # Calculate individual ids
-  ids <- rev(lapply(.variables, id_var, drop = drop))
+  ids <- rev(map(.variables, id_var, drop = drop))
   p <- length(ids)
 
   # Calculate dimensions
-  ndistinct <- vapply(ids, attr, "n", FUN.VALUE = numeric(1),
-    USE.NAMES = FALSE)
+  ndistinct <- map_dbl(ids, attr, "n")
   n <- prod(ndistinct)
   if (n > 2 ^ 31) {
     # Too big for integers, have to use strings, which will be much slower :(
@@ -39,7 +38,7 @@ id <- function(.variables, drop = FALSE) {
 }
 
 id_var <- function(x, drop = FALSE) {
-  if (!is.null(attr(x, "n")) && !drop) return(x)
+  if (!is_null(attr(x, "n")) && !drop) return(x)
 
   if (is.factor(x) && !drop) {
     id <- as.integer(addNA(x, ifany = TRUE))
@@ -47,7 +46,7 @@ id_var <- function(x, drop = FALSE) {
   } else if (length(x) == 0) {
     id <- integer()
     n <- 0L
-  } else if (is.list(x)) {
+  } else if (is_list(x)) {
     # Sorting lists isn't supported
     levels <- unique(x)
     id <- match(x, levels)
diff --git a/R/nest.R b/R/nest.R
index cda0fdc..2335a5d 100644
--- a/R/nest.R
+++ b/R/nest.R
@@ -1,22 +1,26 @@
 #' Nest repeated values in a list-variable.
 #'
 #' There are many possible ways one could choose to nest columns inside a
-#' data frame. \code{nest()} creates a list of data frames containing all
+#' data frame. `nest()` creates a list of data frames containing all
 #' the nested variables: this seems to be the most useful form in practice.
 #'
-#' @seealso \code{\link{unnest}} for the inverse operation.
-#' @seealso \code{\link{nest_}} for a version that uses regular evaluation
-#'   and is suitable for programming with.
-#' @param .key The name of the new column.
-#' @inheritParams nest_
-#' @param ... Specification of columns to nest. Use bare variable names.
-#'   Select all variables between x and z with \code{x:z}, exclude y with
-#'   \code{-y}. For more options, see the \link[dplyr]{select} documentation.
+#' @inheritSection gather Rules for selection
+#' @inheritParams gather
+#' @param data A data frame.
+#' @param .key The name of the new column, as a string or symbol.
+#'
+#'   This argument is passed by expression and supports
+#'   [quasiquotation][rlang::quasiquotation] (you can unquote strings
+#'   and symbols). The name is captured from the expression with
+#'   [rlang::quo_name()] (note that this kind of interface where
+#'   symbols do not represent actual objects is now discouraged in the
+#'   tidyverse; we support it here for backward compatibility).
+#' @seealso [unnest()] for the inverse operation.
 #' @export
 #' @examples
 #' library(dplyr)
-#' iris %>% nest(-Species)
-#' chickwts %>% nest(weight)
+#' as_tibble(iris) %>% nest(-Species)
+#' as_tibble(chickwts) %>% nest(weight)
 #'
 #' if (require("gapminder")) {
 #'   gapminder %>%
@@ -26,64 +30,58 @@
 #'   gapminder %>%
 #'     nest(-country, -continent)
 #' }
-nest <- function(data, ..., .key = data) {
-  key_col <- col_name(substitute(.key))
-  nest_cols <- unname(dplyr::select_vars(colnames(data), ...))
-  nest_(data, key_col, nest_cols)
+nest <- function(data, ..., .key = "data") {
+  UseMethod("nest")
 }
-
-#' Standard-evaluation version of \code{nest}.
-#'
-#' This is a S3 generic.
-#'
-#' @param data A data frame.
-#' @param key_col Name of the column that will contain the nested data frames.
-#' @param nest_cols Character vector of columns to nest.
-#' @keywords internal
 #' @export
-nest_ <- function(data, key_col, nest_cols = character()) {
-  UseMethod("nest_")
+nest.default <- function(data, ..., .key = "data") {
+  key_col <- compat_as_lazy(enquo(.key))
+  nest_cols <- compat_as_lazy_dots(...)
+  nest_(data, key_col = key_col, nest_cols = nest_cols)
 }
-
 #' @export
-nest_.data.frame <- function(data, key_col, nest_cols = character()) {
-  group_cols <- setdiff(names(data), nest_cols)
-  nest_impl(as_data_frame(data), key_col, group_cols, nest_cols)
-}
+nest.data.frame <- function(data, ..., .key = "data") {
+  key_var <- quo_name(enexpr(.key))
 
-#' @export
-nest_.tbl_df <- function(data, key_col, nest_cols = character()) {
-  as_data_frame(NextMethod())
-}
+  nest_vars <- unname(tidyselect::vars_select(names(data), ...))
+  if (is_empty(nest_vars)) {
+    nest_vars <- names(data)
+  }
 
-#' @export
-nest_.grouped_df <- function(data, key_col, nest_cols = character()) {
-  if (length(nest_cols) == 0) {
-    nest_cols <- names(data)
+  if (dplyr::is_grouped_df(data)) {
+    group_vars <- dplyr::group_vars(data)
+  } else {
+    group_vars <- setdiff(names(data), nest_vars)
   }
-  group_cols <- vapply(dplyr::groups(data), as.character, character(1))
-  nest_impl(data, key_col, group_cols, nest_cols)
-}
+  nest_vars <- setdiff(nest_vars, group_vars)
 
-#' @importFrom tibble data_frame
-nest_impl <- function(data, key_col, group_cols, nest_cols) {
   data <- dplyr::ungroup(data)
-
-  if (length(group_cols) == 0) {
-    df <- data_frame(list(data))
-    names(df) <- enc2utf8(key_col)
-
-    return(df)
+  if (is_empty(group_vars)) {
+    return(tibble(!! key_var := list(data)))
   }
 
-  nest_cols <- setdiff(nest_cols, group_cols)
+  out <- dplyr::select(data, !!! syms(group_vars))
+  out <- dplyr::distinct(out)
 
-  out <- dplyr::distinct_(dplyr::select_(data, .dots = group_cols))
-
-  idx <- dplyr::group_indices_(data, .dots = group_cols)
-  out[[key_col]] <- unname(split(data[nest_cols], idx))[unique(idx)]
+  idx <- dplyr::group_indices(data, !!! syms(group_vars))
+  out[[key_var]] <- unname(split(data[nest_vars], idx))[unique(idx)]
 
   out
 }
 
-globalVariables(".")
+
+#' @rdname deprecated-se
+#' @inheritParams nest
+#' @param key_col Name of the column that will contain the nested data frames.
+#' @param nest_cols Character vector of columns to nest.
+#' @keywords internal
+#' @export
+nest_ <- function(data, key_col, nest_cols = character()) {
+  UseMethod("nest_")
+}
+#' @export
+nest_.data.frame <- function(data, key_col, nest_cols = character()) {
+  key_col <- compat_lazy(key_col, caller_env())
+  nest_cols <- compat_lazy_dots(nest_cols, caller_env())
+  nest(data, .key = !! key_col, !!! nest_cols)
+}
diff --git a/R/replace_na.R b/R/replace_na.R
index a3cc1f3..72d15c2 100644
--- a/R/replace_na.R
+++ b/R/replace_na.R
@@ -1,30 +1,24 @@
 #' Replace missing values
 #'
 #' @param data A data frame.
-#' @param replace A named list given the value to replace \code{NA} with
+#' @param replace A named list giving the value to replace `NA` with
 #'   for each column.
 #' @param ... Additional arguments for methods. Currently unused.
 #' @examples
 #' library(dplyr)
-#' df <- data_frame(x = c(1, 2, NA), y = c("a", NA, "b"))
+#' df <- tibble(x = c(1, 2, NA), y = c("a", NA, "b"))
 #' df %>% replace_na(list(x = 0, y = "unknown"))
 #' @export
 replace_na <- function(data, replace = list(), ...) {
   UseMethod("replace_na")
 }
-
 #' @export
 replace_na.data.frame <- function(data, replace = list(), ...) {
-  stopifnot(is.list(replace))
+  stopifnot(is_list(replace))
 
   for (var in names(replace)) {
-    data[[var]][is.na(data[[var]])] <- replace[[var]]
+    data[[var]][are_na(data[[var]])] <- replace[[var]]
   }
 
   data
 }
-
-#' @export
-replace_na.tbl_df <- function(data, replace = list(), ...) {
-  as_data_frame(NextMethod())
-}
diff --git a/R/separate-rows.R b/R/separate-rows.R
index 8bbac4a..27f7068 100644
--- a/R/separate-rows.R
+++ b/R/separate-rows.R
@@ -3,11 +3,10 @@
 #' If a variable contains observations with multiple delimited values, this
 #' separates the values and places each one in its own row.
 #'
-#' @inheritParams separate_rows_
-#' @inheritParams separate_
-#' @param ... Specification of columns to separate. Use bare variable names.
-#'   Select all variables between x and z with \code{x:z}, exclude y with
-#'   \code{-y}. For more options, see the \link[dplyr]{select} documentation.
+#' @inheritSection gather Rules for selection
+#' @inheritParams gather
+#' @inheritParams separate
+#' @param sep Separator delimiting collapsed values.
 #' @export
 #' @examples
 #'
@@ -20,47 +19,40 @@
 #' separate_rows(df, y, z, convert = TRUE)
 separate_rows <- function(data, ..., sep = "[^[:alnum:].]+",
                           convert = FALSE) {
-  cols <- unname(dplyr::select_vars(names(data), ...))
-  separate_rows_(data, cols, sep, convert)
+  UseMethod("separate_rows")
 }
-
-#' Standard-evaluation version of \code{separate_rows}.
-#'
-#' This is a S3 generic.
-#'
-#' @param data A data frame.
-#' @param cols Name of columns that need to be separated.
-#' @param sep Separator delimiting collapsed values.
-#' @inheritParams separate_
 #' @export
-separate_rows_ <- function(data, cols, sep = "[^[:alnum:].]+",
-                           convert = FALSE) {
-  UseMethod("separate_rows_")
+separate_rows.default <- function(data, ..., sep = "[^[:alnum:].]+",
+                                  convert = FALSE) {
+  cols <- compat_as_lazy_dots(...)
+  separate_rows_(data, cols = cols, sep = sep)
 }
-
 #' @export
-separate_rows_.data.frame <- function(data, cols, sep = "[^[:alnum:].]+",
-                                      convert = FALSE) {
+separate_rows.data.frame <- function(data, ..., sep = "[^[:alnum:].]+",
+                                     convert = FALSE) {
+  orig <- data
+  vars <- unname(tidyselect::vars_select(names(data), ...))
 
-  data[cols] <- lapply(data[cols], stringi::stri_split_regex, sep)
-  data <- unnest_(data, cols)
+  data[vars] <- map(data[vars], stringi::stri_split_regex, sep)
+  data <- unnest(data, !!! syms(vars))
 
   if (convert) {
-    data[cols] <- lapply(data[cols], type.convert, as.is = TRUE)
+    data[vars] <- map(data[vars], type.convert, as.is = TRUE)
   }
 
-  data
+  reconstruct_tibble(orig, data, vars)
 }
 
+#' @rdname deprecated-se
+#' @inheritParams separate_rows
 #' @export
-separate_rows_.tbl_df <- function(data, cols, sep = "[^[:alnum:].]+",
-                                  convert = FALSE) {
-  as_data_frame(NextMethod())
+separate_rows_ <- function(data, cols, sep = "[^[:alnum:].]+",
+                           convert = FALSE) {
+  UseMethod("separate_rows_")
 }
-
 #' @export
-separate_rows_.grouped_df <- function(data, cols, sep = "[^[:alnum:].]+",
-                                  convert = FALSE) {
-
-  regroup(NextMethod(), data, cols)
+separate_rows_.data.frame <- function(data, cols, sep = "[^[:alnum:].]+",
+                                      convert = FALSE) {
+  cols <- syms(cols)
+  separate_rows(data, !!! cols, sep = sep, convert = convert)
 }
diff --git a/R/separate.R b/R/separate.R
index 48d1769..25d7943 100644
--- a/R/separate.R
+++ b/R/separate.R
@@ -1,13 +1,38 @@
 #' Separate one column into multiple columns.
 #'
#' Given either a regular expression or a vector of character positions,
-#' \code{separate()} turns a single character column into multiple columns.
+#' `separate()` turns a single character column into multiple columns.
 #'
-#' @param col Bare column name.
-#' @inheritParams separate_
-#' @seealso \code{\link{unite}()}, the complement.
-#' @seealso \code{\link{separate_}} for a version that uses regular evaluation
-#'   and is suitable for programming with.
+#' @inheritParams extract
+#' @param into Names of new variables to create, as a character vector.
+#' @param sep Separator between columns.
+#'
+#'   If character, is interpreted as a regular expression. The default
+#'   value is a regular expression that matches any sequence of
+#'   non-alphanumeric values.
+#'
+#'   If numeric, interpreted as positions to split at. Positive values start
+#'   at 1 at the far-left of the string; negative values start at -1 at the
+#'   far-right of the string. The length of `sep` should be one less than
+#'   the length of `into`.
+#' @param extra If `sep` is a character vector, this controls what
+#'   happens when there are too many pieces. There are three valid options:
+#'
+#'   * "warn" (the default): emit a warning and drop extra values.
+#'   * "drop": drop any extra values without a warning.
+#'   * "merge": only splits at most `length(into)` times
+#' @param fill If `sep` is a character vector, this controls what
+#'   happens when there are not enough pieces. There are three valid options:
+#'
+#'   * "warn" (the default): emit a warning and fill from the right
+#'   * "right": fill with missing values on the right
+#'   * "left": fill with missing values on the left
+#' @param remove If `TRUE`, remove input column from output data frame.
+#' @param convert If `TRUE`, will run [type.convert()] with
+#'   `as.is = TRUE` on new columns. This is useful if the component
+#'   columns are integer, numeric or logical.
+#' @param ... Defunct, will be removed in the next version of the package.
+#' @seealso [unite()], the complement.
 #' @export
 #' @examples
 #' library(dplyr)
@@ -28,128 +53,83 @@
 #' df %>% separate(x, c("key", "value"), ": ", extra = "merge")
 separate <- function(data, col, into, sep = "[^[:alnum:]]+", remove = TRUE,
                      convert = FALSE, extra = "warn", fill = "warn", ...) {
-  col <- col_name(substitute(col))
-  separate_(data, col, into, sep = sep, remove = remove, convert = convert,
-    extra = extra, fill = fill, ...)
+  UseMethod("separate")
 }
-
-#' Standard-evaluation version of \code{separate}.
-#'
-#' This is a S3 generic.
-#'
-#' @param data A data frame.
-#' @param col Name of column to split, as string.
-#' @param into Names of new variables to create as character vector.
-#' @param sep Separator between columns.
-#'
-#'   If character, is interpreted as a regular expression. The default
-#'   value is a regular expression that matches any sequence of
-#'   non-alphanumeric values.
-#'
-#'   If numeric, interpreted as positions to split at. Positive values start
-#'   at 1 at the far-left of the string; negative value start at -1 at the
-#'   far-right of the string. The length of \code{sep} should be one less than
-#'   \code{into}.
-#'
-#' @param extra If \code{sep} is a character vector, this controls what
-#'   happens when there are too many pieces. There are three valid options:
-#'
-#'   \itemize{
-#'    \item "warn" (the default): emit a warning and drop extra values.
-#'    \item "drop": drop any extra values without a warning.
-#'    \item "merge": only splits at most \code{length(into)} times
-#'   }
-#' @param fill If \code{sep} is a character vector, this controls what
-#'   happens when there are not enough pieces. There are three valid options:
-#'
-#'   \itemize{
-#'    \item "warn" (the default): emit a warning and fill from the right
-#'    \item "right": fill with missing values on the right
-#'    \item "left": fill with missing values on the left
-#'   }
-#' @param remove If \code{TRUE}, remove input column from output data frame.
-#' @param convert If \code{TRUE}, will run \code{\link{type.convert}} with
-#'   \code{as.is = TRUE} on new columns. This is useful if the component
-#'   columns are integer, numeric or logical.
-#' @param ... Defunct, will be removed in the next version of the package.
-#' @keywords internal
 #' @export
-separate_ <- function(data, col, into, sep = "[^[:alnum:]]+", remove = TRUE,
-                      convert = FALSE, extra = "warn", fill = "warn", ...) {
-  UseMethod("separate_")
+separate.default <- function(data, col, into, sep = "[^[:alnum:]]+",
+                             remove = TRUE, convert = FALSE,
+                             extra = "warn", fill = "warn", ...) {
+  col <- compat_as_lazy(enquo(col))
+  separate_(data,
+    col = col,
+    into = into,
+    sep = sep,
+    remove = remove,
+    convert = convert,
+    extra = extra,
+    fill = fill,
+    ...
+  )
 }
-
 #' @export
-separate_.data.frame <- function(data, col, into, sep = "[^[:alnum:]]+",
-                                 remove = TRUE, convert = FALSE,
-                                 extra = "warn", fill = "warn", ...) {
-  stopifnot(is.character(col), length(col) == 1)
-  value <- as.character(data[[col]])
+separate.data.frame <- function(data, col, into, sep = "[^[:alnum:]]+",
+                                remove = TRUE, convert = FALSE,
+                                extra = "warn", fill = "warn", ...) {
+  orig <- data
+
+  var <- tidyselect::vars_pull(names(data), !! enquo(col))
+  value <- as.character(data[[var]])
 
   if (length(list(...)) != 0) {
-    warning("Using ... for passing arguments to strsplit is defunct.")
+    warn("Using ... for passing arguments to `strsplit()` is defunct")
   }
 
   if (is.numeric(sep)) {
     l <- strsep(value, sep)
-  } else if (is.character(sep)) {
+  } else if (is_character(sep)) {
     l <- str_split_fixed(value, sep, length(into), extra = extra, fill = fill)
   } else {
-    stop("'sep' must be either numeric or character", .call = FALSE)
+    abort("`sep` must be either numeric or character")
   }
 
-  names(l) <- enc2utf8(into)
+  names(l) <- as_utf8_character(into)
   if (convert) {
-    l[] <- lapply(l, type.convert, as.is = TRUE)
+    l[] <- map(l, type.convert, as.is = TRUE)
   }
 
   # Insert into existing data frame
-  data <- append_df(data, l, which(names(data) == col))
-  if (remove)
-    data[[col]] <- NULL
-
-  data
-}
+  data <- append_df(data, l, match(var, dplyr::tbl_vars(data)))
+  if (remove) {
+    data[[var]] <- NULL
+  }
 
-#' @export
-separate_.tbl_df <- function(data, col, into, sep = "[^[:alnum:]]+",
-                             remove = TRUE, convert = FALSE,
-                             extra = "warn", fill = "warn", ...) {
-  as_data_frame(NextMethod())
+  reconstruct_tibble(orig, data, if (remove) var else NULL)
 }
 
-#' @export
-separate_.grouped_df <- function(data, col, into, sep = "[^[:alnum:]]+",
-                                 remove = TRUE, convert = FALSE,
-                                 extra = "warn", fill = "warn", ...) {
-  regroup(NextMethod(), data, if (remove) col)
-}
-
-
-
 strsep <- function(x, sep) {
   sep <- c(0, sep, -1)
 
   nchar <- stringi::stri_length(x)
-  pos <- lapply(sep, function(i) {
+  pos <- map(sep, function(i) {
     if (i >= 0) return(i)
     nchar + i + 1
   })
 
-  lapply(1:(length(pos) - 1), function(i) {
+  map(1:(length(pos) - 1), function(i) {
     stringi::stri_sub(x, pos[[i]] + 1, pos[[i + 1]])
   })
 }
-
 str_split_fixed <- function(value, sep, n, extra = "warn", fill = "warn") {
   if (extra == "error") {
-    warning("extra = 'error' is deprecated. Please use extra = 'warn'",
-      " instead", call. = FALSE)
+    warn(glue(
+      "`extra = \"error\"` is deprecated. \\
+       Please use `extra = \"warn\"` instead"
+    ))
     extra <- "warn"
   }
 
-  extra <- match.arg(extra, c("warn", "merge", "drop"))
-  fill <- match.arg(fill, c("warn", "left", "right"))
+  extra <- arg_match(extra, c("warn", "merge", "drop"))
+  fill <- arg_match(fill, c("warn", "left", "right"))
 
   n_max <- if (extra == "merge") n else -1L
   pieces <- stringi::stri_split_regex(value, sep, n_max)
@@ -158,14 +138,40 @@ str_split_fixed <- function(value, sep, n, extra = "warn", fill = "warn") {
 
   n_big <- length(simp$too_big)
   if (extra == "warn" && n_big > 0) {
-    warning("Too many values at ", n_big, " locations: ",
-      list_indices(simp$too_big), call. = FALSE)
+    idx <- list_indices(simp$too_big)
+    warn(glue("Too many values at {n_big} locations: {idx}"))
   }
+
   n_sml <- length(simp$too_sml)
   if (fill == "warn" && n_sml > 0) {
-    warning("Too few values at ", n_sml, " locations: ",
-      list_indices(simp$too_sml), call. = FALSE)
+    idx <- list_indices(simp$too_sml)
+    warn(glue("Too few values at {n_sml} locations: {idx}"))
   }
 
   simp$strings
 }
+
+
+#' @rdname deprecated-se
+#' @inheritParams separate
+#' @export
+separate_ <- function(data, col, into, sep = "[^[:alnum:]]+", remove = TRUE,
+                      convert = FALSE, extra = "warn", fill = "warn", ...) {
+  UseMethod("separate_")
+}
+#' @export
+separate_.data.frame <- function(data, col, into, sep = "[^[:alnum:]]+",
+                                 remove = TRUE, convert = FALSE,
+                                 extra = "warn", fill = "warn", ...) {
+  col <- sym(col)
+  separate(data,
+    col = !! col,
+    into = into,
+    sep = sep,
+    remove = remove,
+    convert = convert,
+    extra = extra,
+    fill = fill,
+    ...
+  )
+}
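As with the other compat wrappers, `separate_()` now just converts its string argument to a symbol and unquotes it into `separate()`. A minimal sketch of the same pattern in user code, assuming rlang is attached (untested against this exact build):

```r
library(tidyr)
library(rlang)

df <- data.frame(x = c("a-1", "b-2"), stringsAsFactors = FALSE)

# Old SE spelling:
separate_(df, col = "x", into = c("key", "num"), sep = "-")

# New spelling: a column name held in a variable is unquoted as a symbol:
var <- "x"
separate(df, col = !! sym(var), into = c("key", "num"), sep = "-")
```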
diff --git a/R/seq.R b/R/seq.R
index 889bbff..5aff205 100644
--- a/R/seq.R
+++ b/R/seq.R
@@ -1,8 +1,8 @@
 #' Create the full sequence of values in a vector.
 #'
 #' This is useful if you want to fill in missing values that should have
-#' been observed but weren't. For example, \code{full_seq(c(1, 2, 4, 6), 1)}
-#' will return \code{1:6}.
+#' been observed but weren't. For example, `full_seq(c(1, 2, 4, 6), 1)`
+#' will return `1:6`.
 #'
 #' @param x A numeric vector.
 #' @param period Gap between each observation. The existing data will be
diff --git a/R/spread.R b/R/spread.R
index fbc58ef..d129d90 100644
--- a/R/spread.R
+++ b/R/spread.R
@@ -1,12 +1,27 @@
 #' Spread a key-value pair across multiple columns.
 #'
-#' @param key The bare (unquoted) name of the column whose values will be used
-#'   as column headings.
-#' @param value The bare (unquoted) name of the column whose values will
-#'  populate the cells.
-#' @inheritParams spread_
-#' @seealso \code{\link{spread_}} for a version that uses regular evaluation
-#'   and is suitable for programming with.
+#' @param data A data frame.
+#' @param key,value Column names or positions. This is passed to
+#'   [tidyselect::vars_pull()].
+#'
+#'   These arguments are passed by expression and support
+#'   [quasiquotation][rlang::quasiquotation] (you can unquote column
+#'   names or column positions).
+#' @param fill If set, missing values will be replaced with this value. Note
+#'   that there are two types of missingness in the input: explicit missing
+#'   values (i.e. `NA`), and implicit missings, rows that simply aren't
+#'   present. Both types of missing value will be replaced by `fill`.
+#' @param convert If `TRUE`, [type.convert()] with `as.is = TRUE` will
+#'   be run on each of the new columns. This is useful if the value
+#'   column was a mix of variables that was coerced to a string. If the class of
+#'   the value column was factor or date, note that this will not be true of the new
+#'   columns that are produced, which are coerced to character before type
+#'   conversion.
+#' @param drop If `FALSE`, will keep factor levels that don't appear in the
+#'   data, filling in missing combinations with `fill`.
+#' @param sep If `NULL`, the column names will be taken from the values of
+#'   `key` variable. If non-`NULL`, the column names will be given
+#'   by "<key_name><sep><key_value>".
 #' @export
 #' @examples
 #' library(dplyr)
@@ -30,60 +45,35 @@
 #'                  value = c(5.1, "setosa", 1, 7.0, "versicolor", 2))
 #' df %>% spread(var, value) %>% str
 #' df %>% spread(var, value, convert = TRUE) %>% str
-spread <- function(data, key, value, fill = NA, convert = FALSE, drop = TRUE,
-                   sep = NULL) {
-  key_col <- col_name(substitute(key))
-  value_col <- col_name(substitute(value))
-
-  spread_(data, key_col, value_col, fill = fill, convert = convert, drop = drop,
-    sep = sep)
+spread <- function(data, key, value, fill = NA, convert = FALSE,
+                   drop = TRUE, sep = NULL) {
+  UseMethod("spread")
 }
-
-#' Standard-evaluation version of \code{spread}.
-#'
-#' This is a S3 generic.
-#'
-#' @param data A data frame.
-#' @param key_col,value_col Strings giving names of key and value cols.
-#' @param fill If set, missing values will be replaced with this value. Note
-#'   that there are two types of missingness in the input: explicit missing
-#'   values (i.e. \code{NA}), and implicit missings, rows that simply aren't
-#'   present. Both types of missing value will be replaced by \code{fill}.
-#' @param convert If \code{TRUE}, \code{\link{type.convert}} with \code{asis =
-#'   TRUE} will be run on each of the new columns. This is useful if the value
-#'   column was a mix of variables that was coerced to a string. If the class of
-#'   the value column was factor or date, note that will not be true of the new
-#'   columns that are produced, which are coerced to character before type
-#'   conversion.
-#' @param drop If \code{FALSE}, will keep factor levels that don't appear in the
-#'   data, filling in missing combinations with \code{fill}.
-#' @param sep If \code{NULL}, the column names will be taken from the values of
-#'   \code{key} variable. If non-\code{NULL}, the column names will be given
-#'   by "<key_name><sep><key_value>".
-#' @keywords internal
 #' @export
-spread_ <- function(data, key_col, value_col, fill = NA, convert = FALSE,
-                    drop = TRUE, sep = NULL) {
-  if (!(key_col %in% names(data))) {
-    stop("Key column '", key_col, "' does not exist in input.", call. = FALSE)
-  }
-  if (!(value_col %in% names(data))) {
-    stop("Value column '", value_col, "' does not exist in input.", call. = FALSE)
-  }
-
-  UseMethod("spread_")
+spread.default <- function(data, key, value, fill = NA, convert = FALSE,
+                           drop = TRUE, sep = NULL) {
+  key <- compat_as_lazy(enquo(key))
+  value <- compat_as_lazy(enquo(value))
+  spread_(data,
+    key_col = key,
+    value_col = value,
+    fill = fill,
+    convert = convert,
+    drop = drop,
+    sep = sep
+  )
 }
-
 #' @export
-#' @importFrom tibble as_data_frame
-spread_.data.frame <- function(data, key_col, value_col, fill = NA,
-                               convert = FALSE, drop = TRUE, sep = NULL) {
+spread.data.frame <- function(data, key, value, fill = NA, convert = FALSE,
+                              drop = TRUE, sep = NULL) {
+  key_var <- tidyselect::vars_pull(names(data), !! enquo(key))
+  value_var <- tidyselect::vars_pull(names(data), !! enquo(value))
 
-  col <- data[key_col]
+  col <- data[key_var]
   col_id <- id(col, drop = drop)
   col_labels <- split_labels(col, col_id, drop = drop)
 
-  rows <- data[setdiff(names(data), c(key_col, value_col))]
+  rows <- data[setdiff(names(data), c(key_var, value_var))]
   if (length(rows) == 0) {
     # Special case when there's only one row
     row_id <- structure(1L, n = 1L)
@@ -99,13 +89,11 @@ spread_.data.frame <- function(data, key_col, value_col, fill = NA,
   # Check that each output value occurs in unique location
   if (anyDuplicated(overall)) {
     groups <- split(seq_along(overall), overall)
-    groups <- groups[vapply(groups, length, integer(1)) > 1]
-
-    str <- vapply(groups, function(x) paste0("(", paste0(x, collapse = ", "), ")"),
-      character(1))
+    groups <- groups[map_int(groups, length) > 1]
 
-    stop("Duplicate identifiers for rows ", paste(str, collapse = ", "),
-      call. = FALSE)
+    str <- map_chr(groups, function(x) paste0("(", paste0(x, collapse = ", "), ")"))
+    rows <- paste(str, collapse = ", ")
+    abort(glue("Duplicate identifiers for rows {rows}"))
   }
 
   # Add in missing values, if necessary
@@ -115,54 +103,41 @@ spread_.data.frame <- function(data, key_col, value_col, fill = NA,
     overall <- order(overall)
   }
 
-  value <- data[[value_col]]
+  value <- data[[value_var]]
   ordered <- value[overall]
   if (!is.na(fill)) {
     ordered[is.na(ordered)] <- fill
   }
 
-  if (convert && !is.character(ordered)) {
+  if (convert && !is_character(ordered)) {
     ordered <- as.character(ordered)
   }
   dim(ordered) <- c(attr(row_id, "n"), attr(col_id, "n"))
   colnames(ordered) <- enc2utf8(col_names(col_labels, sep = sep))
 
-  ordered <- as_data_frame_matrix(ordered)
+  ordered <- as_tibble_matrix(ordered)
 
   if (convert) {
-    ordered[] <- lapply(ordered, type.convert, as.is = TRUE)
+    ordered[] <- map(ordered, type.convert, as.is = TRUE)
   }
 
-  append_df(row_labels, ordered)
+  out <- append_df(row_labels, ordered)
+  reconstruct_tibble(data, out, c(key_var, value_var))
 }
 
 col_names <- function(x, sep = NULL) {
   names <- as.character(x[[1]])
 
-  if (is.null(sep)) {
-    ifelse(is.na(names), "<NA>", names)
+  if (is_null(sep)) {
+    ifelse(are_na(names), "<NA>", names)
   } else {
     paste(names(x)[[1]], names, sep = sep)
   }
 }
-
-as_data_frame_matrix <- function(x) {
+as_tibble_matrix <- function(x) {
   # getS3method() only available in R >= 3.3
-  get("as_data_frame.matrix", asNamespace("tibble"), mode = "function")(x)
-}
-
-#' @export
-spread_.tbl_df <- function(data, key_col, value_col, fill = NA,
-                           convert = FALSE, drop = TRUE, sep = NULL) {
-  as_data_frame(NextMethod())
-}
-
-#' @export
-spread_.grouped_df <- function(data, key_col, value_col, fill = NA,
-                               convert = FALSE, drop = TRUE, sep = NULL) {
-  regroup(NextMethod(), data, c(key_col, value_col))
+  get("as_tibble.matrix", asNamespace("tibble"), mode = "function")(x)
 }
-
 split_labels <- function(df, id, drop = TRUE) {
   if (length(df) == 0) {
     return(df)
@@ -172,11 +147,10 @@ split_labels <- function(df, id, drop = TRUE) {
     representative <- match(sort(unique(id)), id)
     df[representative, , drop = FALSE]
   } else {
-    unique_values <- lapply(df, ulevels)
+    unique_values <- map(df, ulevels)
     rev(expand.grid(rev(unique_values), stringsAsFactors = FALSE))
   }
 }
-
 ulevels <- function(x) {
   if (is.factor(x)) {
     x <- addNA(x, ifany = TRUE)
@@ -186,3 +160,28 @@ ulevels <- function(x) {
     sort(unique(x))
   }
 }
+
+
+#' @rdname deprecated-se
+#' @inheritParams spread
+#' @param key_col,value_col Strings giving names of key and value cols.
+#' @export
+spread_ <- function(data, key_col, value_col, fill = NA, convert = FALSE,
+                    drop = TRUE, sep = NULL) {
+  UseMethod("spread_")
+}
+#' @export
+spread_.data.frame <- function(data, key_col, value_col, fill = NA,
+                               convert = FALSE, drop = TRUE, sep = NULL) {
+  key_col <- compat_lazy(key_col, caller_env())
+  value_col <- compat_lazy(value_col, caller_env())
+
+  spread(data,
+    key = !! key_col,
+    value = !! value_col,
+    fill = fill,
+    convert = convert,
+    drop = drop,
+    sep = sep
+  )
+}
diff --git a/R/tidyr.R b/R/tidyr.R
new file mode 100644
index 0000000..d35a1db
--- /dev/null
+++ b/R/tidyr.R
@@ -0,0 +1,37 @@
+#' @keywords internal
+#' @import rlang
+#' @importFrom glue glue
+#' @importFrom purrr accumulate accumulate_right discard every keep
+#'   map map2 map2_chr map2_dbl map2_df map2_int map2_lgl map_at
+#'   map_call map_chr map_dbl map_df map_if map_int map_lgl pmap
+#'   pmap_chr pmap_dbl pmap_df pmap_int pmap_lgl reduce reduce_right
+#'   some transpose
+#' @importFrom tibble tibble as_tibble
+#' @importFrom utils type.convert
+#' @useDynLib tidyr, .registration = TRUE
+"_PACKAGE"
+
+globalVariables(".")
+
+
+#' Deprecated SE versions of main verbs
+#'
+#' tidyr used to offer twin versions of each verb suffixed with an
+#' underscore. These versions had standard evaluation (SE) semantics:
+#' rather than taking arguments by expression, as NSE verbs do, they took
+#' arguments by value. Their purpose was to make it possible to
+#' program with tidyr. However, tidyr now uses tidy evaluation
+#' semantics. NSE verbs still capture their arguments, but you can now
+#' unquote parts of these arguments. This offers full programmability
+#' with NSE verbs. Thus, the underscored versions are now superfluous.
+#'
+#' Unquoting triggers immediate evaluation of its operand and inlines
+#' the result within the captured expression. This result can be a
+#' value or an expression to be evaluated later with the rest of the
+#' argument. See `vignette("programming", "dplyr")` for more information.
+#'
+#' @param data A data frame
+#' @param vars,cols,col Name of columns.
+#' @name deprecated-se
+#' @keywords internal
+NULL
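The `deprecated-se` page above describes the migration in prose; a minimal sketch of what it means in practice, assuming this tidyr version with rlang's `!!` operator available (illustrative only):

```r
library(tidyr)

df <- data.frame(g = c("x", "x", "y"), v = 1:3)

# The underscored verb is no longer needed for programming;
# unquote a string computed at runtime instead:
key_name <- "cases"
nest(df, v, .key = !! key_name)

# ...which replaces the old SE spelling:
nest_(df, key_col = "cases", nest_cols = "v")
```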
diff --git a/R/unite.R b/R/unite.R
index 2ed2172..e40b0ce 100644
--- a/R/unite.R
+++ b/R/unite.R
@@ -2,14 +2,20 @@
 #'
 #' Convenience function to paste together multiple columns into one.
 #'
-#' @inheritParams unite_
-#' @param col (Bare) name of column to add
-#' @param ... Specification of columns to unite. Use bare variable names.
-#'   Select all variables between x and z with \code{x:z}, exclude y with
-#'   \code{-y}. For more options, see the \link[dplyr]{select} documentation.
-#' @seealso \code{\link{separate}()}, the complement.
-#' @seealso \code{\link{unite_}} for a version that uses regular evaluation
-#'   and is suitable for programming with.
+#' @inheritSection gather Rules for selection
+#' @inheritParams gather
+#' @param data A data frame.
+#' @param col The name of the new column, as a string or symbol.
+#'
+#'   This argument is passed by expression and supports
+#'   [quasiquotation][rlang::quasiquotation] (you can unquote strings
+#'   and symbols). The name is captured from the expression with
+#'   [rlang::quo_name()] (note that this kind of interface where
+#'   symbols do not represent actual objects is now discouraged in the
+#'   tidyverse; we support it here for backward compatibility).
+#' @param sep Separator to use between values.
+#' @param remove If `TRUE`, remove input columns from output data frame.
+#' @seealso [separate()], the complement.
 #' @export
 #' @examples
 #' library(dplyr)
@@ -20,48 +26,42 @@
 #'   unite(vs_am, vs, am) %>%
 #'   separate(vs_am, c("vs", "am"))
 unite <- function(data, col, ..., sep = "_", remove = TRUE) {
-  col <- col_name(substitute(col))
-  from <- dplyr::select_vars(colnames(data), ...)
-
-  unite_(data, col, from, sep = sep, remove = remove)
+  UseMethod("unite")
 }
-
-#' Standard-evaluation version of \code{unite}
-#'
-#' This is a S3 generic.
-#'
-#' @keywords internal
-#' @param data A data frame.
-#' @param col Name of new column as string.
-#' @param from Names of existing columns as character vector
-#' @param sep Separator to use between values.
-#' @param remove If \code{TRUE}, remove input columns from output data frame.
 #' @export
-unite_ <- function(data, col, from, sep = "_", remove = TRUE) {
-  UseMethod("unite_")
+unite.default <- function(data, col, ..., sep = "_", remove = TRUE) {
+  col <- compat_as_lazy(enquo(col))
+  from <- compat_as_lazy_dots(...)
+  unite_(data, col, from, sep = sep, remove = remove)
 }
-
 #' @export
-unite_.data.frame <- function(data, col, from, sep = "_", remove = TRUE) {
-  united <- do.call("paste", c(data[from], list(sep = sep)))
+unite.data.frame <- function(data, col, ..., sep = "_", remove = TRUE) {
+  var <- quo_name(enquo(col))
+  from_vars <- tidyselect::vars_select(colnames(data), ...)
 
-  first_col <- which(names(data) %in% from)[1]
-
-  data2 <- data
+  out <- data
   if (remove) {
-    data2 <- data2[setdiff(names(data2), from)]
+    out <- out[setdiff(names(out), from_vars)]
   }
 
-  append_col(data2, united, col, after = first_col - 1)
+  first_pos <- which(names(data) %in% from_vars)[1]
+  united <- invoke(paste, c(data[from_vars], list(sep = sep)))
+
+  out <- append_col(out, united, var, after = first_pos - 1)
+  reconstruct_tibble(data, out, if (remove) from_vars)
 }
 
+
+#' @rdname deprecated-se
+#' @inheritParams unite
+#' @param from Names of existing columns as character vector
 #' @export
-unite_.tbl_df <- function(data, col, from, sep = "_", remove = TRUE) {
-  as_data_frame(NextMethod())
+unite_ <- function(data, col, from, sep = "_", remove = TRUE) {
+  UseMethod("unite_")
 }
-
 #' @export
-unite_.grouped_df <- function(data, col, from, sep = "_", remove = TRUE) {
-  regroup(NextMethod(), data, if (remove) from)
+unite_.data.frame <- function(data, col, from, sep = "_", remove = TRUE) {
+  col <- compat_lazy(col, caller_env())
+  from <- syms(from)
+  unite(data, !! col, !!! from, sep = sep, remove = remove)
 }
-
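As a quick illustration of the rewritten method (a sketch with made-up data, not part of the patch), `unite()` pastes the selected columns together with `sep` and, because `remove = TRUE` by default, drops the input columns:

```r
library(tidyr)

df <- tibble::tibble(year = 2017, month = 10, day = 13)

# yields a one-column tibble whose `date` value is "2017-10-13"
unite(df, date, year, month, day, sep = "-")
```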
diff --git a/R/unnest.R b/R/unnest.R
index 0137ef2..803782d 100644
--- a/R/unnest.R
+++ b/R/unnest.R
@@ -1,19 +1,28 @@
 #' Unnest a list column.
 #'
 #' If you have a list-column, this makes each element of the list its own
-#' row. List-columns can either be atomic vectors or data frames. Each
-#' row must have the same number of entries.
+#' row. List-columns can either be atomic vectors or data frames.
 #'
-#' @inheritParams unnest_
+#' If you unnest multiple columns, parallel entries must have the same length
+#' or number of rows (if a data frame).
+#'
+#' @inheritParams expand
 #' @param ... Specification of columns to unnest. Use bare variable names or
 #'   functions of variables. If omitted, defaults to all list-cols.
-#' @seealso \code{\link{nest}} for the inverse operation.
-#' @seealso \code{\link{unnest_}} for a version that uses regular evaluation
-#'   and is suitable for programming with.
+#' @param .drop Should additional list columns be dropped? By default,
+#'   `unnest` will drop them if unnesting the specified columns requires
+#'   the rows to be duplicated.
+#' @param .id Data frame identifier: if supplied, will create a new column
+#'   with name `.id`, giving a unique identifier. This is most useful if
+#'   the list column is named.
+#' @param .sep If non-`NULL`, the names of unnested data frame columns
+#'   will combine the name of the original list-col with the names from
+#'   nested data frame, separated by `.sep`.
+#' @seealso [nest()] for the inverse operation.
 #' @export
 #' @examples
 #' library(dplyr)
-#' df <- data_frame(
+#' df <- tibble(
 #'   x = 1:3,
 #'   y = c("a", "d,e,f", "g,h")
 #' )
@@ -26,17 +35,17 @@
 #'   unnest(y = strsplit(y, ","))
 #'
 #' # It also works if you have a column that contains other data frames!
-#' df <- data_frame(
+#' df <- tibble(
 #'   x = 1:2,
 #'   y = list(
-#'    data_frame(z = 1),
-#'    data_frame(z = 3:4)
+#'    tibble(z = 1),
+#'    tibble(z = 3:4)
 #'  )
 #' )
 #' df %>% unnest(y)
 #'
 #' # You can also unnest multiple columns simultaneously
-#' df <- data_frame(
+#' df <- tibble(
 #'  a = list(c("a", "b"), "c"),
 #'  b = list(1:2, 3),
 #'  c = c(11, 22)
@@ -51,71 +60,55 @@
 #' df %>% nest(y) %>% unnest()
 #'
 #' # If you have a named list-column, you may want to supply .id
-#' df <- data_frame(
+#' df <- tibble(
 #'   x = 1:2,
 #'   y = list(a = 1, b = 3:4)
 #' )
 #' unnest(df, .id = "name")
 unnest <- function(data, ..., .drop = NA, .id = NULL, .sep = NULL) {
-  dots <- lazyeval::lazy_dots(...)
-  if (length(dots) == 0) {
-    list_cols <- names(data)[vapply(data, is.list, logical(1))]
-    list_col_names <- lapply(list_cols, as.name)
-    dots <- lazyeval::as.lazy_dots(list_col_names, env = parent.frame())
-  }
-
-  unnest_(data, dots, .drop = .drop, .id = .id, .sep = .sep)
+  UseMethod("unnest")
 }
-
-#' Standard-evaluation version of \code{unnest}.
-#'
-#' This is a S3 generic.
-#'
-#' @param data A data frame.
-#' @param unnest_cols Name of columns that needs to be unnested.
-#' @param .drop Should additional list columns be dropped? By default,
-#'   \code{unnest} will drop them if unnesting the specified columns requires
-#'   the rows to be duplicated.
-#' @param .id Data frame idenfier - if supplied, will create a new column
-#'   with name \code{.id}, giving a unique identifer. This is most useful if
-#'   the list column is named.
-#' @param .sep If non-\code{NULL}, the names of unnested data frame columns
-#'   will combine the name of the original list-col with the names from
-#'   nested data frame, separated by \code{.sep}.
-#' @keywords internal
 #' @export
-unnest_ <- function(data, unnest_cols, .drop = NA, .id = NULL, .sep = NULL) {
-  UseMethod("unnest_")
+unnest.default <- function(data, ..., .drop = NA, .id = NULL, .sep = NULL) {
+  unnest_cols <- compat_as_lazy_dots(...)
+  unnest_(data, unnest_cols = unnest_cols, .drop = .drop, .id = .id, .sep = .sep)
 }
-
 #' @export
-unnest_.data.frame <- function(data, unnest_cols, .drop = NA, .id = NULL,
-                               .sep = NULL) {
-  nested <- dplyr::transmute_(data, .dots = unnest_cols)
-  n <- lapply(nested, function(x) vapply(x, NROW, numeric(1)))
+unnest.data.frame <- function(data, ..., .drop = NA, .id = NULL,
+                              .sep = NULL) {
+  quos <- quos(...)
+  if (is_empty(quos)) {
+    list_cols <- names(data)[map_lgl(data, is_list)]
+    quos <- syms(list_cols)
+  }
+
+  nested <- dplyr::transmute(dplyr::ungroup(data), !!! quos)
+  n <- map(nested, function(x) map_int(x, NROW))
   if (length(unique(n)) != 1) {
-    stop("All nested columns must have the same number of elements.",
-      call. = FALSE)
+    abort("All nested columns must have the same number of elements.")
   }
 
-  types <- vapply(nested, list_col_type, character(1))
+  types <- map_chr(nested, list_col_type)
   nest_types <- split.default(nested, types)
   if (length(nest_types$mixed) > 0) {
     probs <- paste(names(nest_types$mixed), collapse = ",")
-    stop("Each column must either be a list of vectors or a list of ",
-      "data frames [", probs , "]", call. = FALSE)
+    abort(glue(
+      "Each column must either be a list of vectors or a list of ",
+      "data frames [{probs}]"
+    ))
   }
 
-  unnested_atomic <- mapply(enframe, nest_types$atomic, names(nest_types$atomic),
-    MoreArgs = list(.id = .id), SIMPLIFY = FALSE)
-  if (length(unnested_atomic) > 0)
+  unnested_atomic <- imap(nest_types$atomic %||% list(), enframe, .id = .id)
+  if (length(unnested_atomic) > 0) {
     unnested_atomic <- dplyr::bind_cols(unnested_atomic)
+  }
 
-  unnested_dataframe <- lapply(nest_types$dataframe, dplyr::bind_rows, .id = .id)
-  if (!is.null(.sep)) {
-    unnested_dataframe <- Map(function(name, df) {
-      setNames(df, paste(name, names(df), sep = .sep))
-    }, names(unnested_dataframe), unnested_dataframe)
+  unnested_dataframe <- map(nest_types$dataframe %||% list(), dplyr::bind_rows, .id = .id)
+  if (!is_null(.sep)) {
+    unnested_dataframe <- imap(unnested_dataframe,
+      function(df, name) {
+        set_names(df, paste(name, names(df), sep = .sep))
+      })
   }
   if (length(unnested_dataframe) > 0)
     unnested_dataframe <- dplyr::bind_cols(unnested_dataframe)
@@ -123,25 +116,29 @@ unnest_.data.frame <- function(data, unnest_cols, .drop = NA, .id = NULL,
   # Keep list columns by default, only if the rows aren't expanded
   if (identical(.drop, NA)) {
     n_in <- nrow(data)
-    n_out <- nrow(unnested_atomic %||% unnested_dataframe)
+    if (length(unnested_atomic)) {
+      n_out <- nrow(unnested_atomic)
+    } else {
+      n_out <- nrow(unnested_dataframe)
+    }
     .drop <- n_out != n_in
   }
   if (.drop) {
-    is_atomic <- vapply(data, is.atomic, logical(1))
-    group_cols <- names(data)[is_atomic]
+    is_atomic <- map_lgl(data, is_atomic)
+    group_vars <- names(data)[is_atomic]
   } else {
-    group_cols <- names(data)
+    group_vars <- names(data)
   }
-  group_cols <- setdiff(group_cols, names(nested))
-
-  rest <- data[rep(1:nrow(data), n[[1]]), group_cols, drop = FALSE]
+  group_vars <- setdiff(group_vars, names(nested))
 
-  dplyr::bind_cols(compact(list(rest, unnested_atomic, unnested_dataframe)))
+  rest <- data[rep(1:nrow(data), n[[1]]), group_vars, drop = FALSE]
+  out <- dplyr::bind_cols(rest, unnested_atomic, unnested_dataframe)
+  reconstruct_tibble(data, out)
 }
 
 list_col_type <- function(x) {
-  is_data_frame <- vapply(x, is.data.frame, logical(1))
-  is_atomic <- vapply(x, is.atomic, logical(1))
+  is_data_frame <- map_lgl(x, is.data.frame)
+  is_atomic <- map_lgl(x, is_atomic)
 
   if (all(is_data_frame)) {
     "dataframe"
@@ -151,38 +148,35 @@ list_col_type <- function(x) {
     "mixed"
   }
 }
-
 enframe <- function(x, col_name, .id = NULL) {
-  out <- data_frame(dplyr::combine(x))
+  out <- tibble(dplyr::combine(x))
   names(out) <- col_name
 
-  if (!is.null(.id)) {
+  if (!is_null(.id)) {
     out[[.id]] <- id_col(x)
   }
   out
 }
-
 id_col <- function(x) {
-  stopifnot(is.list(x))
+  stopifnot(is_list(x))
 
-  ids <- if (is.null(names(x))) seq_along(x) else names(x)
-  lengths <- vapply(x, length, integer(1))
+  ids <- if (is_null(names(x))) seq_along(x) else names(x)
+  lengths <- map_int(x, length)
 
   ids[rep(seq_along(ids), lengths)]
 }
 
+
+#' @rdname deprecated-se
+#' @inheritParams unnest
+#' @param unnest_cols Names of columns that need to be unnested.
 #' @export
-unnest_.tbl_df <- function(data, unnest_cols, .drop = NA, .id = NULL,
-                           .sep = NULL) {
-  as_data_frame(NextMethod())
+unnest_ <- function(data, unnest_cols, .drop = NA, .id = NULL, .sep = NULL) {
+  UseMethod("unnest_")
 }
-
 #' @export
-unnest_.grouped_df <- function(data, unnest_cols, .drop = NA, .id = NULL,
+unnest_.data.frame <- function(data, unnest_cols, .drop = NA, .id = NULL,
                                .sep = NULL) {
-  out <- unnest_(
-    dplyr::ungroup(data), unnest_cols,
-    .drop = .drop, .id = .id, .sep = .sep
-  )
-  regroup(out, data)
+  unnest_cols <- compat_lazy_dots(unnest_cols, caller_env())
+  unnest(data, !!! unnest_cols, .drop = .drop, .id = .id, .sep = .sep)
 }
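The `.id` argument documented above can be seen in action with the named list-column example from the help page (reproduced here as a standalone sketch):

```r
library(tidyr)

df <- tibble::tibble(
  x = 1:2,
  y = list(a = 1, b = 3:4)
)

# .id = "name" records which list element each unnested row came from
unnest(df, .id = "name")
```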
diff --git a/R/utils.R b/R/utils.R
index 95ebab3..9c0d17d 100644
--- a/R/utils.R
+++ b/R/utils.R
@@ -1,10 +1,12 @@
-col_name <- function(x, default = stop("Please supply column name", call. = FALSE)) {
-  if (is.character(x)) return(x)
-  if (identical(x, quote(expr = ))) return(default)
-  if (is.name(x)) return(as.character(x))
-  if (is.null(x)) return(x)
 
-  stop("Invalid column specification", call. = FALSE)
+col_name <- function(x, default = abort("Please supply column name")) {
+  if (identical(x, quote(expr = ))) return(default)
+  switch_type(x,
+    NULL = NULL,
+    string = x,
+    symbol = as_string(x),
+    abort("Invalid column specification")
+  )
 }
 
 append_df <- function(x, values, after = length(x)) {
@@ -17,16 +19,17 @@ append_df <- function(x, values, after = length(x)) {
 
 append_col <- function(x, col, name, after = length(x)) {
   name <- enc2utf8(name)
-  append_df(x, setNames(list(col), name), after = after)
+  append_df(x, set_names(list(col), name), after = after)
 }
 
-compact <- function(x) x[vapply(x, length, integer(1)) > 0]
+compact <- function(x) x[map_int(x, length) > 0]
 
 #' Extract numeric component of variable.
 #'
-#' DEPRECATED: please use \code{readr::parse_number()} instead.
+#' DEPRECATED: please use `readr::parse_number()` instead.
 #'
 #' @param x A character vector (or a factor).
+#' @keywords internal
 #' @export
 extract_numeric <- function(x) {
   message("extract_numeric() is deprecated: please use readr::parse_number() instead")
@@ -52,21 +55,25 @@ list_indices <- function(x, max = 20) {
   paste(x, collapse = ", ")
 }
 
-`%||%` <- function(x, y) if (length(x) == 0) y else x
-
-regroup <- function(x, y, except = NULL) {
-  groups <- dplyr::groups(y)
+regroup <- function(output, input, except = NULL) {
+  groups <- dplyr::group_vars(input)
   if (!is.null(except)) {
-    groups <- setdiff(groups, lapply(except, as.name))
+    groups <- setdiff(groups, except)
   }
 
-  dplyr::grouped_df(x, groups)
+  dplyr::grouped_df(output, groups)
+}
+reconstruct_tibble <- function(input, output, ungrouped_vars = chr()) {
+  if (inherits(input, "grouped_df")) {
+    regroup(output, input, ungrouped_vars)
+  } else if (inherits(input, "tbl_df")) {
+    as_tibble(output)
+  } else {
+    output
+  }
 }
 
-# Allows tests to work with either dplyr 0.4 (which ignores value of
-# everything), and 0.5 which exports it as a proper function
-everything <- function(...) dplyr::everything(...)
 
-is_numeric <- function(x) {
-  typeof(x) %in% c("integer", "double")
+imap <- function(.x, .f, ...) {
+  map2(.x, names(.x) %||% character(0), .f, ...)
 }
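The new `imap()` helper added to utils.R maps over elements and their names in parallel; a standalone sketch (assuming rlang-style `%||%` semantics, which treat only `NULL` as missing):

```r
library(purrr)

`%||%` <- function(x, y) if (is.null(x)) y else x

imap <- function(.x, .f, ...) {
  # pair each element with its name (empty names if the input is unnamed)
  map2(.x, names(.x) %||% character(0), .f, ...)
}

imap(list(a = 1, b = 2), function(x, nm) paste0(nm, "=", x))
```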
diff --git a/README.md b/README.md
index cd73b0d..180713d 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
 
 <!-- README.md is generated from README.Rmd. Please edit that file -->
-tidyr <img src="logo.png" align="right" />
-==========================================
+tidyr <img src="man/figures/logo.png" align="right" />
+======================================================
 
 [![Build Status](https://travis-ci.org/tidyverse/tidyr.svg?branch=master)](https://travis-ci.org/tidyverse/tidyr) [![codecov.io](http://codecov.io/github/tidyverse/tidyr/coverage.svg?branch=master)](http://codecov.io/github/tidyverse/tidyr?branch=master) [![CRAN\_Status\_Badge](http://www.r-pkg.org/badges/version/tidyr)](https://cran.r-project.org/package=tidyr)
 
diff --git a/build/vignette.rds b/build/vignette.rds
index 9b40832..460b1d5 100644
Binary files a/build/vignette.rds and b/build/vignette.rds differ
diff --git a/inst/doc/tidy-data.Rmd b/inst/doc/tidy-data.Rmd
index a4e2b0a..012e6f8 100644
--- a/inst/doc/tidy-data.Rmd
+++ b/inst/doc/tidy-data.Rmd
@@ -1,7 +1,5 @@
 ---
 title: "Tidy data"
-author: "Hadley Wickham"
-date: "`r Sys.Date()`"
 output: rmarkdown::html_vignette
 vignette: >
   %\VignetteIndexEntry{Tidy data}
@@ -90,7 +88,7 @@ Tidy data is a standard way of mapping the meaning of a dataset to its structure
 
 3.  Each type of observational unit forms a table.
 
-This is Codd's 3rd normal form, but with the constraints framed in statistical language, and the focus put on a single dataset rather than the many connected datasets common in relational databases. **Messy data** is any other other arrangement of the data.
+This is Codd's 3rd normal form, but with the constraints framed in statistical language, and the focus put on a single dataset rather than the many connected datasets common in relational databases. **Messy data** is any other arrangement of the data.
 
 Tidy data makes it easy for an analyst or a computer to extract needed variables because it provides a standard way of structuring a dataset. Compare the different versions of the pregnancy data: in the messy version you need to use different strategies to extract different variables. This slows analysis and invites errors. If you consider how many data analysis operations involve all of the values in a variable (every aggregation function), you can see how important it is to extract the [...]
 
@@ -134,7 +132,7 @@ pew %>%
 
 This form is tidy because each column represents a variable and each row represents an observation, in this case a demographic unit corresponding to a combination of `religion` and `income`.
 
-This format is also used to record regularly spaced observations over time. For example, the Billboard dataset shown below records the date a song first entered the billboard top 100. It has variables for `artist`, `track`, `date.entered`, `rank` and `week`. The rank in each week after it enters the top 100 is recorded in 75 columns, `wk1` to `wk75`. This form of storage is not tidy, but it is useful for data entry. It reduces duplication since otherwise each song in each week would need [...]
+This format is also used to record regularly spaced observations over time. For example, the Billboard dataset shown below records the date a song first entered the billboard top 100. It has variables for `artist`, `track`, `date.entered`, `rank` and `week`. The rank in each week after it enters the top 100 is recorded in 75 columns, `wk1` to `wk75`. This form of storage is not tidy, but it is useful for data entry. It reduces duplication since otherwise each song in each week would need [...]
 
 ```{r}
 billboard <- tbl_df(read.csv("billboard.csv", stringsAsFactors = FALSE))
diff --git a/inst/doc/tidy-data.html b/inst/doc/tidy-data.html
index a154352..3b1a75a 100644
--- a/inst/doc/tidy-data.html
+++ b/inst/doc/tidy-data.html
@@ -4,15 +4,13 @@
 
 <head>
 
-<meta charset="utf-8">
+<meta charset="utf-8" />
 <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
 <meta name="generator" content="pandoc" />
 
 <meta name="viewport" content="width=device-width, initial-scale=1">
 
-<meta name="author" content="Hadley Wickham" />
 
-<meta name="date" content="2017-01-09" />
 
 <title>Tidy data</title>
 
@@ -20,46 +18,28 @@
 
 <style type="text/css">code{white-space: pre;}</style>
 <style type="text/css">
-div.sourceCode { overflow-x: auto; }
 table.sourceCode, tr.sourceCode, td.lineNumbers, td.sourceCode {
   margin: 0; padding: 0; vertical-align: baseline; border: none; }
 table.sourceCode { width: 100%; line-height: 100%; }
 td.lineNumbers { text-align: right; padding-right: 4px; padding-left: 4px; color: #aaaaaa; border-right: 1px solid #aaaaaa; }
 td.sourceCode { padding-left: 5px; }
-code > span.kw { color: #007020; font-weight: bold; } /* Keyword */
-code > span.dt { color: #902000; } /* DataType */
-code > span.dv { color: #40a070; } /* DecVal */
-code > span.bn { color: #40a070; } /* BaseN */
-code > span.fl { color: #40a070; } /* Float */
-code > span.ch { color: #4070a0; } /* Char */
-code > span.st { color: #4070a0; } /* String */
-code > span.co { color: #60a0b0; font-style: italic; } /* Comment */
-code > span.ot { color: #007020; } /* Other */
-code > span.al { color: #ff0000; font-weight: bold; } /* Alert */
-code > span.fu { color: #06287e; } /* Function */
-code > span.er { color: #ff0000; font-weight: bold; } /* Error */
-code > span.wa { color: #60a0b0; font-weight: bold; font-style: italic; } /* Warning */
-code > span.cn { color: #880000; } /* Constant */
-code > span.sc { color: #4070a0; } /* SpecialChar */
-code > span.vs { color: #4070a0; } /* VerbatimString */
-code > span.ss { color: #bb6688; } /* SpecialString */
-code > span.im { } /* Import */
-code > span.va { color: #19177c; } /* Variable */
-code > span.cf { color: #007020; font-weight: bold; } /* ControlFlow */
-code > span.op { color: #666666; } /* Operator */
-code > span.bu { } /* BuiltIn */
-code > span.ex { } /* Extension */
-code > span.pp { color: #bc7a00; } /* Preprocessor */
-code > span.at { color: #7d9029; } /* Attribute */
-code > span.do { color: #ba2121; font-style: italic; } /* Documentation */
-code > span.an { color: #60a0b0; font-weight: bold; font-style: italic; } /* Annotation */
-code > span.cv { color: #60a0b0; font-weight: bold; font-style: italic; } /* CommentVar */
-code > span.in { color: #60a0b0; font-weight: bold; font-style: italic; } /* Information */
+code > span.kw { color: #007020; font-weight: bold; }
+code > span.dt { color: #902000; }
+code > span.dv { color: #40a070; }
+code > span.bn { color: #40a070; }
+code > span.fl { color: #40a070; }
+code > span.ch { color: #4070a0; }
+code > span.st { color: #4070a0; }
+code > span.co { color: #60a0b0; font-style: italic; }
+code > span.ot { color: #007020; }
+code > span.al { color: #ff0000; font-weight: bold; }
+code > span.fu { color: #06287e; }
+code > span.er { color: #ff0000; font-weight: bold; }
 </style>
 
 
 
-<link href="data:text/css;charset=utf-8,body%20%7B%0Abackground%2Dcolor%3A%20%23fff%3B%0Amargin%3A%201em%20auto%3B%0Amax%2Dwidth%3A%20700px%3B%0Aoverflow%3A%20visible%3B%0Apadding%2Dleft%3A%202em%3B%0Apadding%2Dright%3A%202em%3B%0Afont%2Dfamily%3A%20%22Open%20Sans%22%2C%20%22Helvetica%20Neue%22%2C%20Helvetica%2C%20Arial%2C%20sans%2Dserif%3B%0Afont%2Dsize%3A%2014px%3B%0Aline%2Dheight%3A%201%2E35%3B%0A%7D%0A%23header%20%7B%0Atext%2Dalign%3A%20center%3B%0A%7D%0A%23TOC%20%7B%0Aclear%3A%20bot [...]
+<link href="data:text/css,body%20%7B%0A%20%20background%2Dcolor%3A%20%23fff%3B%0A%20%20margin%3A%201em%20auto%3B%0A%20%20max%2Dwidth%3A%20700px%3B%0A%20%20overflow%3A%20visible%3B%0A%20%20padding%2Dleft%3A%202em%3B%0A%20%20padding%2Dright%3A%202em%3B%0A%20%20font%2Dfamily%3A%20%22Open%20Sans%22%2C%20%22Helvetica%20Neue%22%2C%20Helvetica%2C%20Arial%2C%20sans%2Dserif%3B%0A%20%20font%2Dsize%3A%2014px%3B%0A%20%20line%2Dheight%3A%201%2E35%3B%0A%7D%0A%0A%23header%20%7B%0A%20%20text%2Dalign%3A% [...]
 
 </head>
 
@@ -69,8 +49,6 @@ code > span.in { color: #60a0b0; font-weight: bold; font-style: italic; } /* Inf
 
 
 <h1 class="title toc-ignore">Tidy data</h1>
-<h4 class="author"><em>Hadley Wickham</em></h4>
-<h4 class="date"><em>2017-01-09</em></h4>
 
 
 
@@ -89,24 +67,24 @@ code > span.in { color: #60a0b0; font-weight: bold; font-style: italic; } /* Inf
 <div id="data-structure" class="section level2">
 <h2>Data structure</h2>
 <p>Most statistical datasets are data frames made up of <strong>rows</strong> and <strong>columns</strong>. The columns are almost always labeled and the rows are sometimes labeled. The following code provides some data about an imaginary experiment in a format commonly seen in the wild. The table has two columns and three rows, and both rows and columns are labeled.</p>
-<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">preg <-<span class="st"> </span><span class="kw">read.csv</span>(<span class="st">"preg.csv"</span>, <span class="dt">stringsAsFactors =</span> <span class="ot">FALSE</span>)
+<pre class="sourceCode r"><code class="sourceCode r">preg <-<span class="st"> </span><span class="kw">read.csv</span>(<span class="st">"preg.csv"</span>, <span class="dt">stringsAsFactors =</span> <span class="ot">FALSE</span>)
 preg
 <span class="co">#>           name treatmenta treatmentb</span>
 <span class="co">#> 1   John Smith         NA         18</span>
 <span class="co">#> 2     Jane Doe          4          1</span>
-<span class="co">#> 3 Mary Johnson          6          7</span></code></pre></div>
+<span class="co">#> 3 Mary Johnson          6          7</span></code></pre>
 <p>There are many ways to structure the same underlying data. The following table shows the same data as above, but the rows and columns have been transposed.</p>
-<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">read.csv</span>(<span class="st">"preg2.csv"</span>, <span class="dt">stringsAsFactors =</span> <span class="ot">FALSE</span>)
+<pre class="sourceCode r"><code class="sourceCode r"><span class="kw">read.csv</span>(<span class="st">"preg2.csv"</span>, <span class="dt">stringsAsFactors =</span> <span class="ot">FALSE</span>)
 <span class="co">#>   treatment John.Smith Jane.Doe Mary.Johnson</span>
 <span class="co">#> 1         a         NA        4            6</span>
-<span class="co">#> 2         b         18        1            7</span></code></pre></div>
+<span class="co">#> 2         b         18        1            7</span></code></pre>
 <p>The data is the same, but the layout is different. Our vocabulary of rows and columns is simply not rich enough to describe why the two tables represent the same data. In addition to appearance, we need a way to describe the underlying semantics, or meaning, of the values displayed in the table.</p>
 </div>
 <div id="data-semantics" class="section level2">
 <h2>Data semantics</h2>
 <p>A dataset is a collection of <strong>values</strong>, usually either numbers (if quantitative) or strings (if qualitative). Values are organised in two ways. Every value belongs to a <strong>variable</strong> and an <strong>observation</strong>. A variable contains all values that measure the same underlying attribute (like height, temperature, duration) across units. An observation contains all values measured on the same unit (like a person, or a day, or a race) across attributes.</p>
 <p>A tidy version of the pregnancy data looks like this: (you’ll learn how the functions work a little later)</p>
-<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">library</span>(tidyr)
+<pre class="sourceCode r"><code class="sourceCode r"><span class="kw">library</span>(tidyr)
 <span class="kw">library</span>(dplyr)
 preg2 <-<span class="st"> </span>preg %>%<span class="st"> </span>
 <span class="st">  </span><span class="kw">gather</span>(treatment, n, treatmenta:treatmentb) %>%
@@ -119,7 +97,7 @@ preg2
 <span class="co">#> 3   John Smith         a NA</span>
 <span class="co">#> 4   John Smith         b 18</span>
 <span class="co">#> 5 Mary Johnson         a  6</span>
-<span class="co">#> 6 Mary Johnson         b  7</span></code></pre></div>
+<span class="co">#> 6 Mary Johnson         b  7</span></code></pre>
 <p>This makes the values, variables and observations more clear. The dataset contains 18 values representing three variables and six observations. The variables are:</p>
 <ol style="list-style-type: decimal">
 <li><p><code>name</code>, with three possible values (John, Mary, and Jane).</p></li>
@@ -139,7 +117,7 @@ preg2
 <li><p>Each observation forms a row.</p></li>
 <li><p>Each type of observational unit forms a table.</p></li>
 </ol>
-<p>This is Codd’s 3rd normal form, but with the constraints framed in statistical language, and the focus put on a single dataset rather than the many connected datasets common in relational databases. <strong>Messy data</strong> is any other other arrangement of the data.</p>
+<p>This is Codd’s 3rd normal form, but with the constraints framed in statistical language, and the focus put on a single dataset rather than the many connected datasets common in relational databases. <strong>Messy data</strong> is any other arrangement of the data.</p>
 <p>Tidy data makes it easy for an analyst or a computer to extract needed variables because it provides a standard way of structuring a dataset. Compare the different versions of the pregnancy data: in the messy version you need to use different strategies to extract different variables. This slows analysis and invites errors. If you consider how many data analysis operations involve all of the values in a variable (every aggregation function), you can see how important it is to extract  [...]
 <p>While the order of variables and observations does not affect analysis, a good ordering makes it easier to scan the raw values. One way of organising variables is by their role in the analysis: are values fixed by the design of the data collection, or are they measured during the course of the experiment? Fixed variables describe the experimental design and are known in advance. Computer scientists often call fixed variables dimensions, and statisticians usually denote them with subsc [...]
 </div>
@@ -159,58 +137,58 @@ preg2
 <h2>Column headers are values, not variable names</h2>
 <p>A common type of messy dataset is tabular data designed for presentation, where variables form both the rows and columns, and column headers are values, not variable names. While I would call this arrangement messy, in some cases it can be extremely useful. It provides efficient storage for completely crossed designs, and it can lead to extremely efficient computation if desired operations can be expressed as matrix operations.</p>
 <p>The following code shows a subset of a typical dataset of this form. This dataset explores the relationship between income and religion in the US. It comes from a report<a href="#fn1" class="footnoteRef" id="fnref1"><sup>1</sup></a> produced by the Pew Research Center, an American think-tank that collects data on attitudes to topics ranging from religion to the internet, and produces many reports that contain datasets in this format.</p>
-<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">pew <-<span class="st"> </span><span class="kw">tbl_df</span>(<span class="kw">read.csv</span>(<span class="st">"pew.csv"</span>, <span class="dt">stringsAsFactors =</span> <span class="ot">FALSE</span>, <span class="dt">check.names =</span> <span class="ot">FALSE</span>))
+<pre class="sourceCode r"><code class="sourceCode r">pew <-<span class="st"> </span><span class="kw">tbl_df</span>(<span class="kw">read.csv</span>(<span class="st">"pew.csv"</span>, <span class="dt">stringsAsFactors =</span> <span class="ot">FALSE</span>, <span class="dt">check.names =</span> <span class="ot">FALSE</span>))
 pew
-<span class="co">#> # A tibble: 18 × 11</span>
+<span class="co">#> # A tibble: 18 x 11</span>
 <span class="co">#>                   religion `<$10k` `$10-20k` `$20-30k` `$30-40k` `$40-50k`</span>
 <span class="co">#>                      <chr>   <int>     <int>     <int>     <int>     <int></span>
-<span class="co">#> 1                 Agnostic      27        34        60        81        76</span>
-<span class="co">#> 2                  Atheist      12        27        37        52        35</span>
-<span class="co">#> 3                 Buddhist      27        21        30        34        33</span>
-<span class="co">#> 4                 Catholic     418       617       732       670       638</span>
-<span class="co">#> 5       Don’t know/refused      15        14        15        11        10</span>
-<span class="co">#> 6         Evangelical Prot     575       869      1064       982       881</span>
-<span class="co">#> 7                    Hindu       1         9         7         9        11</span>
-<span class="co">#> 8  Historically Black Prot     228       244       236       238       197</span>
-<span class="co">#> 9        Jehovah's Witness      20        27        24        24        21</span>
+<span class="co">#>  1                Agnostic      27        34        60        81        76</span>
+<span class="co">#>  2                 Atheist      12        27        37        52        35</span>
+<span class="co">#>  3                Buddhist      27        21        30        34        33</span>
+<span class="co">#>  4                Catholic     418       617       732       670       638</span>
+<span class="co">#>  5      Don’t know/refused      15        14        15        11        10</span>
+<span class="co">#>  6        Evangelical Prot     575       869      1064       982       881</span>
+<span class="co">#>  7                   Hindu       1         9         7         9        11</span>
+<span class="co">#>  8 Historically Black Prot     228       244       236       238       197</span>
+<span class="co">#>  9       Jehovah's Witness      20        27        24        24        21</span>
 <span class="co">#> 10                  Jewish      19        19        25        25        30</span>
 <span class="co">#> # ... with 8 more rows, and 5 more variables: `$50-75k` <int>,</span>
 <span class="co">#> #   `$75-100k` <int>, `$100-150k` <int>, `>150k` <int>, `Don't</span>
-<span class="co">#> #   know/refused` <int></span></code></pre></div>
+<span class="co">#> #   know/refused` <int></span></code></pre>
 <p>This dataset has three variables, <code>religion</code>, <code>income</code> and <code>frequency</code>. To tidy it, we need to <strong>gather</strong> the non-variable columns into a two-column key-value pair. This action is often described as making a wide dataset long (or tall), but I’ll avoid those terms because they’re imprecise.</p>
 <p>When gathering variables, we need to provide the names of the new key-value columns to create. The first argument is the name of the key column, which is the name of the variable defined by the values of the column headings. In this case, it’s <code>income</code>. The second argument is the name of the value column, <code>frequency</code>. The third argument defines the columns to gather; here, every column except <code>religion</code>.</p>
-<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">pew %>%
+<pre class="sourceCode r"><code class="sourceCode r">pew %>%
 <span class="st">  </span><span class="kw">gather</span>(income, frequency, -religion)
-<span class="co">#> # A tibble: 180 × 3</span>
+<span class="co">#> # A tibble: 180 x 3</span>
 <span class="co">#>                   religion income frequency</span>
 <span class="co">#>                      <chr>  <chr>     <int></span>
-<span class="co">#> 1                 Agnostic  <$10k        27</span>
-<span class="co">#> 2                  Atheist  <$10k        12</span>
-<span class="co">#> 3                 Buddhist  <$10k        27</span>
-<span class="co">#> 4                 Catholic  <$10k       418</span>
-<span class="co">#> 5       Don’t know/refused  <$10k        15</span>
-<span class="co">#> 6         Evangelical Prot  <$10k       575</span>
-<span class="co">#> 7                    Hindu  <$10k         1</span>
-<span class="co">#> 8  Historically Black Prot  <$10k       228</span>
-<span class="co">#> 9        Jehovah's Witness  <$10k        20</span>
+<span class="co">#>  1                Agnostic  <$10k        27</span>
+<span class="co">#>  2                 Atheist  <$10k        12</span>
+<span class="co">#>  3                Buddhist  <$10k        27</span>
+<span class="co">#>  4                Catholic  <$10k       418</span>
+<span class="co">#>  5      Don’t know/refused  <$10k        15</span>
+<span class="co">#>  6        Evangelical Prot  <$10k       575</span>
+<span class="co">#>  7                   Hindu  <$10k         1</span>
+<span class="co">#>  8 Historically Black Prot  <$10k       228</span>
+<span class="co">#>  9       Jehovah's Witness  <$10k        20</span>
 <span class="co">#> 10                  Jewish  <$10k        19</span>
-<span class="co">#> # ... with 170 more rows</span></code></pre></div>
+<span class="co">#> # ... with 170 more rows</span></code></pre>
 <p>This form is tidy because each column represents a variable and each row represents an observation, in this case a demographic unit corresponding to a combination of <code>religion</code> and <code>income</code>.</p>
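A minimal, self-contained sketch of what <code>gather()</code> is doing here (toy data, not the Pew table): the column headings become values of the new <code>income</code> key column, and the cell counts become the <code>frequency</code> value column.

```r
library(tidyr)
library(dplyr)

# Hypothetical two-religion, two-bracket table in the same messy shape:
toy <- tibble(
  religion = c("Agnostic", "Atheist"),
  `<$10k`   = c(27, 12),
  `$10-20k` = c(34, 27)
)

# Gather every column except religion into income/frequency pairs,
# turning the 2 x 3 table into a tidy 4 x 3 table:
toy %>% gather(income, frequency, -religion)
```

Note that the backticks around <code>`<$10k`</code> are needed because the column names are not syntactic R names.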
-<p>This format is also used to record regularly spaced observations over time. For example, the Billboard dataset shown below records the date a song first entered the billboard top 100. It has variables for <code>artist</code>, <code>track</code>, <code>date.entered</code>, <code>rank</code> and <code>week</code>. The rank in each week after it enters the top 100 is recorded in 75 columns, <code>wk1</code> to <code>wk75</code>. This form of storage is not tidy, but it is useful for data [...]
-<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">billboard <-<span class="st"> </span><span class="kw">tbl_df</span>(<span class="kw">read.csv</span>(<span class="st">"billboard.csv"</span>, <span class="dt">stringsAsFactors =</span> <span class="ot">FALSE</span>))
+<p>This format is also used to record regularly spaced observations over time. For example, the Billboard dataset shown below records the date a song first entered the billboard top 100. It has variables for <code>artist</code>, <code>track</code>, <code>date.entered</code>, <code>rank</code> and <code>week</code>. The rank in each week after it enters the top 100 is recorded in 75 columns, <code>wk1</code> to <code>wk75</code>. This form of storage is not tidy, but it is useful for data [...]
+<pre class="sourceCode r"><code class="sourceCode r">billboard <-<span class="st"> </span><span class="kw">tbl_df</span>(<span class="kw">read.csv</span>(<span class="st">"billboard.csv"</span>, <span class="dt">stringsAsFactors =</span> <span class="ot">FALSE</span>))
 billboard
-<span class="co">#> # A tibble: 317 × 81</span>
+<span class="co">#> # A tibble: 317 x 81</span>
 <span class="co">#>     year         artist                   track  time date.entered   wk1</span>
 <span class="co">#>    <int>          <chr>                   <chr> <chr>        <chr> <int></span>
-<span class="co">#> 1   2000          2 Pac Baby Don't Cry (Keep...  4:22   2000-02-26    87</span>
-<span class="co">#> 2   2000        2Ge+her The Hardest Part Of ...  3:15   2000-09-02    91</span>
-<span class="co">#> 3   2000   3 Doors Down              Kryptonite  3:53   2000-04-08    81</span>
-<span class="co">#> 4   2000   3 Doors Down                   Loser  4:24   2000-10-21    76</span>
-<span class="co">#> 5   2000       504 Boyz           Wobble Wobble  3:35   2000-04-15    57</span>
-<span class="co">#> 6   2000           98^0 Give Me Just One Nig...  3:24   2000-08-19    51</span>
-<span class="co">#> 7   2000        A*Teens           Dancing Queen  3:44   2000-07-08    97</span>
-<span class="co">#> 8   2000        Aaliyah           I Don't Wanna  4:15   2000-01-29    84</span>
-<span class="co">#> 9   2000        Aaliyah               Try Again  4:03   2000-03-18    59</span>
+<span class="co">#>  1  2000          2 Pac Baby Don't Cry (Keep...  4:22   2000-02-26    87</span>
+<span class="co">#>  2  2000        2Ge+her The Hardest Part Of ...  3:15   2000-09-02    91</span>
+<span class="co">#>  3  2000   3 Doors Down              Kryptonite  3:53   2000-04-08    81</span>
+<span class="co">#>  4  2000   3 Doors Down                   Loser  4:24   2000-10-21    76</span>
+<span class="co">#>  5  2000       504 Boyz           Wobble Wobble  3:35   2000-04-15    57</span>
+<span class="co">#>  6  2000           98^0 Give Me Just One Nig...  3:24   2000-08-19    51</span>
+<span class="co">#>  7  2000        A*Teens           Dancing Queen  3:44   2000-07-08    97</span>
+<span class="co">#>  8  2000        Aaliyah           I Don't Wanna  4:15   2000-01-29    84</span>
+<span class="co">#>  9  2000        Aaliyah               Try Again  4:03   2000-03-18    59</span>
 <span class="co">#> 10  2000 Adams, Yolanda           Open My Heart  5:30   2000-08-26    76</span>
 <span class="co">#> # ... with 307 more rows, and 75 more variables: wk2 <int>, wk3 <int>,</span>
 <span class="co">#> #   wk4 <int>, wk5 <int>, wk6 <int>, wk7 <int>, wk8 <int>, wk9 <int>,</span>
@@ -227,220 +205,220 @@ billboard
 <span class="co">#> #   wk60 <int>, wk61 <int>, wk62 <int>, wk63 <int>, wk64 <int>,</span>
 <span class="co">#> #   wk65 <int>, wk66 <lgl>, wk67 <lgl>, wk68 <lgl>, wk69 <lgl>,</span>
 <span class="co">#> #   wk70 <lgl>, wk71 <lgl>, wk72 <lgl>, wk73 <lgl>, wk74 <lgl>,</span>
-<span class="co">#> #   wk75 <lgl>, wk76 <lgl></span></code></pre></div>
+<span class="co">#> #   wk75 <lgl>, wk76 <lgl></span></code></pre>
 <p>To tidy this dataset, we first gather together all the <code>wk</code> columns. The column names give the <code>week</code> and the values are the <code>rank</code>s:</p>
-<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">billboard2 <-<span class="st"> </span>billboard %>%<span class="st"> </span>
+<pre class="sourceCode r"><code class="sourceCode r">billboard2 <-<span class="st"> </span>billboard %>%<span class="st"> </span>
 <span class="st">  </span><span class="kw">gather</span>(week, rank, wk1:wk76, <span class="dt">na.rm =</span> <span class="ot">TRUE</span>)
 billboard2
-<span class="co">#> # A tibble: 5,307 × 7</span>
+<span class="co">#> # A tibble: 5,307 x 7</span>
 <span class="co">#>     year         artist                   track  time date.entered  week</span>
-<span class="co">#> *  <int>          <chr>                   <chr> <chr>        <chr> <chr></span>
-<span class="co">#> 1   2000          2 Pac Baby Don't Cry (Keep...  4:22   2000-02-26   wk1</span>
-<span class="co">#> 2   2000        2Ge+her The Hardest Part Of ...  3:15   2000-09-02   wk1</span>
-<span class="co">#> 3   2000   3 Doors Down              Kryptonite  3:53   2000-04-08   wk1</span>
-<span class="co">#> 4   2000   3 Doors Down                   Loser  4:24   2000-10-21   wk1</span>
-<span class="co">#> 5   2000       504 Boyz           Wobble Wobble  3:35   2000-04-15   wk1</span>
-<span class="co">#> 6   2000           98^0 Give Me Just One Nig...  3:24   2000-08-19   wk1</span>
-<span class="co">#> 7   2000        A*Teens           Dancing Queen  3:44   2000-07-08   wk1</span>
-<span class="co">#> 8   2000        Aaliyah           I Don't Wanna  4:15   2000-01-29   wk1</span>
-<span class="co">#> 9   2000        Aaliyah               Try Again  4:03   2000-03-18   wk1</span>
+<span class="co">#>  * <int>          <chr>                   <chr> <chr>        <chr> <chr></span>
+<span class="co">#>  1  2000          2 Pac Baby Don't Cry (Keep...  4:22   2000-02-26   wk1</span>
+<span class="co">#>  2  2000        2Ge+her The Hardest Part Of ...  3:15   2000-09-02   wk1</span>
+<span class="co">#>  3  2000   3 Doors Down              Kryptonite  3:53   2000-04-08   wk1</span>
+<span class="co">#>  4  2000   3 Doors Down                   Loser  4:24   2000-10-21   wk1</span>
+<span class="co">#>  5  2000       504 Boyz           Wobble Wobble  3:35   2000-04-15   wk1</span>
+<span class="co">#>  6  2000           98^0 Give Me Just One Nig...  3:24   2000-08-19   wk1</span>
+<span class="co">#>  7  2000        A*Teens           Dancing Queen  3:44   2000-07-08   wk1</span>
+<span class="co">#>  8  2000        Aaliyah           I Don't Wanna  4:15   2000-01-29   wk1</span>
+<span class="co">#>  9  2000        Aaliyah               Try Again  4:03   2000-03-18   wk1</span>
 <span class="co">#> 10  2000 Adams, Yolanda           Open My Heart  5:30   2000-08-26   wk1</span>
-<span class="co">#> # ... with 5,297 more rows, and 1 more variables: rank <int></span></code></pre></div>
+<span class="co">#> # ... with 5,297 more rows, and 1 more variables: rank <int></span></code></pre>
 <p>Here we use <code>na.rm</code> to drop any missing values from the gathered columns. In this data, missing values represent weeks when the song wasn’t in the charts, so they can be safely dropped.</p>
 <p>In this case it’s also nice to do a little cleaning, converting the week variable to a number, and figuring out the date corresponding to each week on the charts:</p>
-<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">billboard3 <-<span class="st"> </span>billboard2 %>%
+<pre class="sourceCode r"><code class="sourceCode r">billboard3 <-<span class="st"> </span>billboard2 %>%
 <span class="st">  </span><span class="kw">mutate</span>(
     <span class="dt">week =</span> <span class="kw">extract_numeric</span>(week),
     <span class="dt">date =</span> <span class="kw">as.Date</span>(date.entered) +<span class="st"> </span><span class="dv">7</span> *<span class="st"> </span>(week -<span class="st"> </span><span class="dv">1</span>)) %>%
 <span class="st">  </span><span class="kw">select</span>(-date.entered)
 <span class="co">#> extract_numeric() is deprecated: please use readr::parse_number() instead</span>
 billboard3
-<span class="co">#> # A tibble: 5,307 × 7</span>
+<span class="co">#> # A tibble: 5,307 x 7</span>
 <span class="co">#>     year         artist                   track  time  week  rank</span>
 <span class="co">#>    <int>          <chr>                   <chr> <chr> <dbl> <int></span>
-<span class="co">#> 1   2000          2 Pac Baby Don't Cry (Keep...  4:22     1    87</span>
-<span class="co">#> 2   2000        2Ge+her The Hardest Part Of ...  3:15     1    91</span>
-<span class="co">#> 3   2000   3 Doors Down              Kryptonite  3:53     1    81</span>
-<span class="co">#> 4   2000   3 Doors Down                   Loser  4:24     1    76</span>
-<span class="co">#> 5   2000       504 Boyz           Wobble Wobble  3:35     1    57</span>
-<span class="co">#> 6   2000           98^0 Give Me Just One Nig...  3:24     1    51</span>
-<span class="co">#> 7   2000        A*Teens           Dancing Queen  3:44     1    97</span>
-<span class="co">#> 8   2000        Aaliyah           I Don't Wanna  4:15     1    84</span>
-<span class="co">#> 9   2000        Aaliyah               Try Again  4:03     1    59</span>
+<span class="co">#>  1  2000          2 Pac Baby Don't Cry (Keep...  4:22     1    87</span>
+<span class="co">#>  2  2000        2Ge+her The Hardest Part Of ...  3:15     1    91</span>
+<span class="co">#>  3  2000   3 Doors Down              Kryptonite  3:53     1    81</span>
+<span class="co">#>  4  2000   3 Doors Down                   Loser  4:24     1    76</span>
+<span class="co">#>  5  2000       504 Boyz           Wobble Wobble  3:35     1    57</span>
+<span class="co">#>  6  2000           98^0 Give Me Just One Nig...  3:24     1    51</span>
+<span class="co">#>  7  2000        A*Teens           Dancing Queen  3:44     1    97</span>
+<span class="co">#>  8  2000        Aaliyah           I Don't Wanna  4:15     1    84</span>
+<span class="co">#>  9  2000        Aaliyah               Try Again  4:03     1    59</span>
 <span class="co">#> 10  2000 Adams, Yolanda           Open My Heart  5:30     1    76</span>
-<span class="co">#> # ... with 5,297 more rows, and 1 more variables: date <date></span></code></pre></div>
+<span class="co">#> # ... with 5,297 more rows, and 1 more variables: date <date></span></code></pre>
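Since <code>extract_numeric()</code> is deprecated, the same week-to-number step can be sketched with <code>readr::parse_number()</code>, which strips the non-numeric prefix (this assumes the readr package is installed; the inputs below are illustrative):

```r
library(readr)

# parse_number() drops the leading "wk" and returns the numeric part:
parse_number(c("wk1", "wk10", "wk75"))
#> [1]  1 10 75

# The chart date for a given week is then simple date arithmetic:
as.Date("2000-02-26") + 7 * (parse_number("wk3") - 1)
#> [1] "2000-03-11"
```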
 <p>Finally, it’s always a good idea to sort the data. We could do it by artist, track and week:</p>
-<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">billboard3 %>%<span class="st"> </span><span class="kw">arrange</span>(artist, track, week)
-<span class="co">#> # A tibble: 5,307 × 7</span>
+<pre class="sourceCode r"><code class="sourceCode r">billboard3 %>%<span class="st"> </span><span class="kw">arrange</span>(artist, track, week)
+<span class="co">#> # A tibble: 5,307 x 7</span>
 <span class="co">#>     year  artist                   track  time  week  rank       date</span>
 <span class="co">#>    <int>   <chr>                   <chr> <chr> <dbl> <int>     <date></span>
-<span class="co">#> 1   2000   2 Pac Baby Don't Cry (Keep...  4:22     1    87 2000-02-26</span>
-<span class="co">#> 2   2000   2 Pac Baby Don't Cry (Keep...  4:22     2    82 2000-03-04</span>
-<span class="co">#> 3   2000   2 Pac Baby Don't Cry (Keep...  4:22     3    72 2000-03-11</span>
-<span class="co">#> 4   2000   2 Pac Baby Don't Cry (Keep...  4:22     4    77 2000-03-18</span>
-<span class="co">#> 5   2000   2 Pac Baby Don't Cry (Keep...  4:22     5    87 2000-03-25</span>
-<span class="co">#> 6   2000   2 Pac Baby Don't Cry (Keep...  4:22     6    94 2000-04-01</span>
-<span class="co">#> 7   2000   2 Pac Baby Don't Cry (Keep...  4:22     7    99 2000-04-08</span>
-<span class="co">#> 8   2000 2Ge+her The Hardest Part Of ...  3:15     1    91 2000-09-02</span>
-<span class="co">#> 9   2000 2Ge+her The Hardest Part Of ...  3:15     2    87 2000-09-09</span>
+<span class="co">#>  1  2000   2 Pac Baby Don't Cry (Keep...  4:22     1    87 2000-02-26</span>
+<span class="co">#>  2  2000   2 Pac Baby Don't Cry (Keep...  4:22     2    82 2000-03-04</span>
+<span class="co">#>  3  2000   2 Pac Baby Don't Cry (Keep...  4:22     3    72 2000-03-11</span>
+<span class="co">#>  4  2000   2 Pac Baby Don't Cry (Keep...  4:22     4    77 2000-03-18</span>
+<span class="co">#>  5  2000   2 Pac Baby Don't Cry (Keep...  4:22     5    87 2000-03-25</span>
+<span class="co">#>  6  2000   2 Pac Baby Don't Cry (Keep...  4:22     6    94 2000-04-01</span>
+<span class="co">#>  7  2000   2 Pac Baby Don't Cry (Keep...  4:22     7    99 2000-04-08</span>
+<span class="co">#>  8  2000 2Ge+her The Hardest Part Of ...  3:15     1    91 2000-09-02</span>
+<span class="co">#>  9  2000 2Ge+her The Hardest Part Of ...  3:15     2    87 2000-09-09</span>
 <span class="co">#> 10  2000 2Ge+her The Hardest Part Of ...  3:15     3    92 2000-09-16</span>
-<span class="co">#> # ... with 5,297 more rows</span></code></pre></div>
+<span class="co">#> # ... with 5,297 more rows</span></code></pre>
 <p>Or by date and rank:</p>
-<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">billboard3 %>%<span class="st"> </span><span class="kw">arrange</span>(date, rank)
-<span class="co">#> # A tibble: 5,307 × 7</span>
+<pre class="sourceCode r"><code class="sourceCode r">billboard3 %>%<span class="st"> </span><span class="kw">arrange</span>(date, rank)
+<span class="co">#> # A tibble: 5,307 x 7</span>
 <span class="co">#>     year   artist  track  time  week  rank       date</span>
 <span class="co">#>    <int>    <chr>  <chr> <chr> <dbl> <int>     <date></span>
-<span class="co">#> 1   2000 Lonestar Amazed  4:25     1    81 1999-06-05</span>
-<span class="co">#> 2   2000 Lonestar Amazed  4:25     2    54 1999-06-12</span>
-<span class="co">#> 3   2000 Lonestar Amazed  4:25     3    44 1999-06-19</span>
-<span class="co">#> 4   2000 Lonestar Amazed  4:25     4    39 1999-06-26</span>
-<span class="co">#> 5   2000 Lonestar Amazed  4:25     5    38 1999-07-03</span>
-<span class="co">#> 6   2000 Lonestar Amazed  4:25     6    33 1999-07-10</span>
-<span class="co">#> 7   2000 Lonestar Amazed  4:25     7    29 1999-07-17</span>
-<span class="co">#> 8   2000    Amber Sexual  4:38     1    99 1999-07-17</span>
-<span class="co">#> 9   2000 Lonestar Amazed  4:25     8    29 1999-07-24</span>
+<span class="co">#>  1  2000 Lonestar Amazed  4:25     1    81 1999-06-05</span>
+<span class="co">#>  2  2000 Lonestar Amazed  4:25     2    54 1999-06-12</span>
+<span class="co">#>  3  2000 Lonestar Amazed  4:25     3    44 1999-06-19</span>
+<span class="co">#>  4  2000 Lonestar Amazed  4:25     4    39 1999-06-26</span>
+<span class="co">#>  5  2000 Lonestar Amazed  4:25     5    38 1999-07-03</span>
+<span class="co">#>  6  2000 Lonestar Amazed  4:25     6    33 1999-07-10</span>
+<span class="co">#>  7  2000 Lonestar Amazed  4:25     7    29 1999-07-17</span>
+<span class="co">#>  8  2000    Amber Sexual  4:38     1    99 1999-07-17</span>
+<span class="co">#>  9  2000 Lonestar Amazed  4:25     8    29 1999-07-24</span>
 <span class="co">#> 10  2000    Amber Sexual  4:38     2    99 1999-07-24</span>
-<span class="co">#> # ... with 5,297 more rows</span></code></pre></div>
+<span class="co">#> # ... with 5,297 more rows</span></code></pre>
 </div>
 <div id="multiple-variables-stored-in-one-column" class="section level2">
 <h2>Multiple variables stored in one column</h2>
 <p>After gathering columns, the key column is sometimes a combination of multiple underlying variable names. This happens in the <code>tb</code> (tuberculosis) dataset, shown below. This dataset comes from the World Health Organisation, and records the counts of confirmed tuberculosis cases by <code>country</code>, <code>year</code>, and demographic group. The demographic groups are broken down by <code>sex</code> (m, f) and <code>age</code> (0-14, 15-24, 25-34, 35-44, 45-54, 55-64, unkn [...]
-<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">tb <-<span class="st"> </span><span class="kw">tbl_df</span>(<span class="kw">read.csv</span>(<span class="st">"tb.csv"</span>, <span class="dt">stringsAsFactors =</span> <span class="ot">FALSE</span>))
+<pre class="sourceCode r"><code class="sourceCode r">tb <-<span class="st"> </span><span class="kw">tbl_df</span>(<span class="kw">read.csv</span>(<span class="st">"tb.csv"</span>, <span class="dt">stringsAsFactors =</span> <span class="ot">FALSE</span>))
 tb
-<span class="co">#> # A tibble: 5,769 × 22</span>
+<span class="co">#> # A tibble: 5,769 x 22</span>
 <span class="co">#>     iso2  year   m04  m514  m014 m1524 m2534 m3544 m4554 m5564   m65    mu</span>
 <span class="co">#>    <chr> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int></span>
-<span class="co">#> 1     AD  1989    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA</span>
-<span class="co">#> 2     AD  1990    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA</span>
-<span class="co">#> 3     AD  1991    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA</span>
-<span class="co">#> 4     AD  1992    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA</span>
-<span class="co">#> 5     AD  1993    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA</span>
-<span class="co">#> 6     AD  1994    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA</span>
-<span class="co">#> 7     AD  1996    NA    NA     0     0     0     4     1     0     0    NA</span>
-<span class="co">#> 8     AD  1997    NA    NA     0     0     1     2     2     1     6    NA</span>
-<span class="co">#> 9     AD  1998    NA    NA     0     0     0     1     0     0     0    NA</span>
+<span class="co">#>  1    AD  1989    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA</span>
+<span class="co">#>  2    AD  1990    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA</span>
+<span class="co">#>  3    AD  1991    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA</span>
+<span class="co">#>  4    AD  1992    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA</span>
+<span class="co">#>  5    AD  1993    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA</span>
+<span class="co">#>  6    AD  1994    NA    NA    NA    NA    NA    NA    NA    NA    NA    NA</span>
+<span class="co">#>  7    AD  1996    NA    NA     0     0     0     4     1     0     0    NA</span>
+<span class="co">#>  8    AD  1997    NA    NA     0     0     1     2     2     1     6    NA</span>
+<span class="co">#>  9    AD  1998    NA    NA     0     0     0     1     0     0     0    NA</span>
 <span class="co">#> 10    AD  1999    NA    NA     0     0     0     1     1     0     0    NA</span>
 <span class="co">#> # ... with 5,759 more rows, and 10 more variables: f04 <int>, f514 <int>,</span>
 <span class="co">#> #   f014 <int>, f1524 <int>, f2534 <int>, f3544 <int>, f4554 <int>,</span>
-<span class="co">#> #   f5564 <int>, f65 <int>, fu <int></span></code></pre></div>
+<span class="co">#> #   f5564 <int>, f65 <int>, fu <int></span></code></pre>
 <p>First we gather up the non-variable columns:</p>
-<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">tb2 <-<span class="st"> </span>tb %>%<span class="st"> </span>
+<pre class="sourceCode r"><code class="sourceCode r">tb2 <-<span class="st"> </span>tb %>%<span class="st"> </span>
 <span class="st">  </span><span class="kw">gather</span>(demo, n, -iso2, -year, <span class="dt">na.rm =</span> <span class="ot">TRUE</span>)
 tb2
-<span class="co">#> # A tibble: 35,750 × 4</span>
+<span class="co">#> # A tibble: 35,750 x 4</span>
 <span class="co">#>     iso2  year  demo     n</span>
-<span class="co">#> *  <chr> <int> <chr> <int></span>
-<span class="co">#> 1     AD  2005   m04     0</span>
-<span class="co">#> 2     AD  2006   m04     0</span>
-<span class="co">#> 3     AD  2008   m04     0</span>
-<span class="co">#> 4     AE  2006   m04     0</span>
-<span class="co">#> 5     AE  2007   m04     0</span>
-<span class="co">#> 6     AE  2008   m04     0</span>
-<span class="co">#> 7     AG  2007   m04     0</span>
-<span class="co">#> 8     AL  2005   m04     0</span>
-<span class="co">#> 9     AL  2006   m04     1</span>
+<span class="co">#>  * <chr> <int> <chr> <int></span>
+<span class="co">#>  1    AD  2005   m04     0</span>
+<span class="co">#>  2    AD  2006   m04     0</span>
+<span class="co">#>  3    AD  2008   m04     0</span>
+<span class="co">#>  4    AE  2006   m04     0</span>
+<span class="co">#>  5    AE  2007   m04     0</span>
+<span class="co">#>  6    AE  2008   m04     0</span>
+<span class="co">#>  7    AG  2007   m04     0</span>
+<span class="co">#>  8    AL  2005   m04     0</span>
+<span class="co">#>  9    AL  2006   m04     1</span>
 <span class="co">#> 10    AL  2007   m04     0</span>
-<span class="co">#> # ... with 35,740 more rows</span></code></pre></div>
+<span class="co">#> # ... with 35,740 more rows</span></code></pre>
 <p>Column headers in this format are often separated by a non-alphanumeric character (e.g. <code>.</code>, <code>-</code>, <code>_</code>, <code>:</code>), or have a fixed width format, like in this dataset. <code>separate()</code> makes it easy to split a compound variable into individual variables. You can either pass it a regular expression to split on (the default is to split on non-alphanumeric characters), or a vector of character positions. In this case we want to split after the fi [...]
-<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">tb3 <-<span class="st"> </span>tb2 %>%<span class="st"> </span>
+<pre class="sourceCode r"><code class="sourceCode r">tb3 <-<span class="st"> </span>tb2 %>%<span class="st"> </span>
 <span class="st">  </span><span class="kw">separate</span>(demo, <span class="kw">c</span>(<span class="st">"sex"</span>, <span class="st">"age"</span>), <span class="dv">1</span>)
 tb3
-<span class="co">#> # A tibble: 35,750 × 5</span>
+<span class="co">#> # A tibble: 35,750 x 5</span>
 <span class="co">#>     iso2  year   sex   age     n</span>
-<span class="co">#> *  <chr> <int> <chr> <chr> <int></span>
-<span class="co">#> 1     AD  2005     m    04     0</span>
-<span class="co">#> 2     AD  2006     m    04     0</span>
-<span class="co">#> 3     AD  2008     m    04     0</span>
-<span class="co">#> 4     AE  2006     m    04     0</span>
-<span class="co">#> 5     AE  2007     m    04     0</span>
-<span class="co">#> 6     AE  2008     m    04     0</span>
-<span class="co">#> 7     AG  2007     m    04     0</span>
-<span class="co">#> 8     AL  2005     m    04     0</span>
-<span class="co">#> 9     AL  2006     m    04     1</span>
+<span class="co">#>  * <chr> <int> <chr> <chr> <int></span>
+<span class="co">#>  1    AD  2005     m    04     0</span>
+<span class="co">#>  2    AD  2006     m    04     0</span>
+<span class="co">#>  3    AD  2008     m    04     0</span>
+<span class="co">#>  4    AE  2006     m    04     0</span>
+<span class="co">#>  5    AE  2007     m    04     0</span>
+<span class="co">#>  6    AE  2008     m    04     0</span>
+<span class="co">#>  7    AG  2007     m    04     0</span>
+<span class="co">#>  8    AL  2005     m    04     0</span>
+<span class="co">#>  9    AL  2006     m    04     1</span>
 <span class="co">#> 10    AL  2007     m    04     0</span>
-<span class="co">#> # ... with 35,740 more rows</span></code></pre></div>
+<span class="co">#> # ... with 35,740 more rows</span></code></pre>
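The two ways of specifying <code>sep</code> can be sketched side by side on toy inputs (hypothetical data, shown only to illustrate the argument):

```r
library(tidyr)
library(dplyr)

# Numeric sep: split at a character position. sep = 1 splits after the
# first character, separating the sex code from the age range:
tibble(demo = c("m04", "f1524")) %>%
  separate(demo, c("sex", "age"), sep = 1)

# Regular-expression sep: split wherever the pattern matches, here on
# an underscore between a key and a value:
tibble(x = c("a_1", "b_2")) %>%
  separate(x, c("key", "val"), sep = "_")
```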
 <p>Storing the values in this form resolves a problem in the original data. We want to compare rates, not counts, which means we need to know the population. In the original format, there is no easy way to add a population variable. It has to be stored in a separate table, which makes it hard to correctly match populations to counts. In tidy form, adding variables for population and rate is easy because they’re just additional columns.</p>
 </div>
 <div id="variables-are-stored-in-both-rows-and-columns" class="section level2">
 <h2>Variables are stored in both rows and columns</h2>
 <p>The most complicated form of messy data occurs when variables are stored in both rows and columns. The code below loads daily weather data from the Global Historical Climatology Network for one weather station (MX17004) in Mexico for five months in 2010.</p>
-<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">weather <-<span class="st"> </span><span class="kw">tbl_df</span>(<span class="kw">read.csv</span>(<span class="st">"weather.csv"</span>, <span class="dt">stringsAsFactors =</span> <span class="ot">FALSE</span>))
+<pre class="sourceCode r"><code class="sourceCode r">weather <-<span class="st"> </span><span class="kw">tbl_df</span>(<span class="kw">read.csv</span>(<span class="st">"weather.csv"</span>, <span class="dt">stringsAsFactors =</span> <span class="ot">FALSE</span>))
 weather
-<span class="co">#> # A tibble: 22 × 35</span>
+<span class="co">#> # A tibble: 22 x 35</span>
 <span class="co">#>         id  year month element    d1    d2    d3    d4    d5    d6    d7</span>
 <span class="co">#>      <chr> <int> <int>   <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl></span>
-<span class="co">#> 1  MX17004  2010     1    tmax    NA    NA    NA    NA    NA    NA    NA</span>
-<span class="co">#> 2  MX17004  2010     1    tmin    NA    NA    NA    NA    NA    NA    NA</span>
-<span class="co">#> 3  MX17004  2010     2    tmax    NA  27.3  24.1    NA    NA    NA    NA</span>
-<span class="co">#> 4  MX17004  2010     2    tmin    NA  14.4  14.4    NA    NA    NA    NA</span>
-<span class="co">#> 5  MX17004  2010     3    tmax    NA    NA    NA    NA  32.1    NA    NA</span>
-<span class="co">#> 6  MX17004  2010     3    tmin    NA    NA    NA    NA  14.2    NA    NA</span>
-<span class="co">#> 7  MX17004  2010     4    tmax    NA    NA    NA    NA    NA    NA    NA</span>
-<span class="co">#> 8  MX17004  2010     4    tmin    NA    NA    NA    NA    NA    NA    NA</span>
-<span class="co">#> 9  MX17004  2010     5    tmax    NA    NA    NA    NA    NA    NA    NA</span>
+<span class="co">#>  1 MX17004  2010     1    tmax    NA    NA    NA    NA    NA    NA    NA</span>
+<span class="co">#>  2 MX17004  2010     1    tmin    NA    NA    NA    NA    NA    NA    NA</span>
+<span class="co">#>  3 MX17004  2010     2    tmax    NA  27.3  24.1    NA    NA    NA    NA</span>
+<span class="co">#>  4 MX17004  2010     2    tmin    NA  14.4  14.4    NA    NA    NA    NA</span>
+<span class="co">#>  5 MX17004  2010     3    tmax    NA    NA    NA    NA  32.1    NA    NA</span>
+<span class="co">#>  6 MX17004  2010     3    tmin    NA    NA    NA    NA  14.2    NA    NA</span>
+<span class="co">#>  7 MX17004  2010     4    tmax    NA    NA    NA    NA    NA    NA    NA</span>
+<span class="co">#>  8 MX17004  2010     4    tmin    NA    NA    NA    NA    NA    NA    NA</span>
+<span class="co">#>  9 MX17004  2010     5    tmax    NA    NA    NA    NA    NA    NA    NA</span>
 <span class="co">#> 10 MX17004  2010     5    tmin    NA    NA    NA    NA    NA    NA    NA</span>
 <span class="co">#> # ... with 12 more rows, and 24 more variables: d8 <dbl>, d9 <lgl>,</span>
 <span class="co">#> #   d10 <dbl>, d11 <dbl>, d12 <lgl>, d13 <dbl>, d14 <dbl>, d15 <dbl>,</span>
 <span class="co">#> #   d16 <dbl>, d17 <dbl>, d18 <lgl>, d19 <lgl>, d20 <lgl>, d21 <lgl>,</span>
 <span class="co">#> #   d22 <lgl>, d23 <dbl>, d24 <lgl>, d25 <dbl>, d26 <dbl>, d27 <dbl>,</span>
-<span class="co">#> #   d28 <dbl>, d29 <dbl>, d30 <dbl>, d31 <dbl></span></code></pre></div>
+<span class="co">#> #   d28 <dbl>, d29 <dbl>, d30 <dbl>, d31 <dbl></span></code></pre>
 <p>It has variables in individual columns (<code>id</code>, <code>year</code>, <code>month</code>), spread across columns (<code>day</code>, d1-d31), and spread across rows (<code>tmin</code> and <code>tmax</code>, the minimum and maximum temperature). Months with fewer than 31 days have structural missing values for the last day(s) of the month.</p>
 <p>To tidy this dataset we first gather the day columns:</p>
-<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">weather2 <-<span class="st"> </span>weather %>%
+<pre class="sourceCode r"><code class="sourceCode r">weather2 <-<span class="st"> </span>weather %>%
 <span class="st">  </span><span class="kw">gather</span>(day, value, d1:d31, <span class="dt">na.rm =</span> <span class="ot">TRUE</span>)
 weather2
-<span class="co">#> # A tibble: 66 × 6</span>
+<span class="co">#> # A tibble: 66 x 6</span>
 <span class="co">#>         id  year month element   day value</span>
-<span class="co">#> *    <chr> <int> <int>   <chr> <chr> <dbl></span>
-<span class="co">#> 1  MX17004  2010    12    tmax    d1  29.9</span>
-<span class="co">#> 2  MX17004  2010    12    tmin    d1  13.8</span>
-<span class="co">#> 3  MX17004  2010     2    tmax    d2  27.3</span>
-<span class="co">#> 4  MX17004  2010     2    tmin    d2  14.4</span>
-<span class="co">#> 5  MX17004  2010    11    tmax    d2  31.3</span>
-<span class="co">#> 6  MX17004  2010    11    tmin    d2  16.3</span>
-<span class="co">#> 7  MX17004  2010     2    tmax    d3  24.1</span>
-<span class="co">#> 8  MX17004  2010     2    tmin    d3  14.4</span>
-<span class="co">#> 9  MX17004  2010     7    tmax    d3  28.6</span>
+<span class="co">#>  *   <chr> <int> <int>   <chr> <chr> <dbl></span>
+<span class="co">#>  1 MX17004  2010    12    tmax    d1  29.9</span>
+<span class="co">#>  2 MX17004  2010    12    tmin    d1  13.8</span>
+<span class="co">#>  3 MX17004  2010     2    tmax    d2  27.3</span>
+<span class="co">#>  4 MX17004  2010     2    tmin    d2  14.4</span>
+<span class="co">#>  5 MX17004  2010    11    tmax    d2  31.3</span>
+<span class="co">#>  6 MX17004  2010    11    tmin    d2  16.3</span>
+<span class="co">#>  7 MX17004  2010     2    tmax    d3  24.1</span>
+<span class="co">#>  8 MX17004  2010     2    tmin    d3  14.4</span>
+<span class="co">#>  9 MX17004  2010     7    tmax    d3  28.6</span>
 <span class="co">#> 10 MX17004  2010     7    tmin    d3  17.5</span>
-<span class="co">#> # ... with 56 more rows</span></code></pre></div>
+<span class="co">#> # ... with 56 more rows</span></code></pre>
 <p>For presentation, I’ve dropped the missing values, making them implicit rather than explicit. This is ok because we know how many days are in each month and can easily reconstruct the explicit missing values.</p>
 <p>We’ll also do a little cleaning:</p>
-<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">weather3 <-<span class="st"> </span>weather2 %>%<span class="st"> </span>
+<pre class="sourceCode r"><code class="sourceCode r">weather3 <-<span class="st"> </span>weather2 %>%<span class="st"> </span>
 <span class="st">  </span><span class="kw">mutate</span>(<span class="dt">day =</span> <span class="kw">extract_numeric</span>(day)) %>%
 <span class="st">  </span><span class="kw">select</span>(id, year, month, day, element, value) %>%
 <span class="st">  </span><span class="kw">arrange</span>(id, year, month, day)
 <span class="co">#> extract_numeric() is deprecated: please use readr::parse_number() instead</span>
 weather3
-<span class="co">#> # A tibble: 66 × 6</span>
+<span class="co">#> # A tibble: 66 x 6</span>
 <span class="co">#>         id  year month   day element value</span>
 <span class="co">#>      <chr> <int> <int> <dbl>   <chr> <dbl></span>
-<span class="co">#> 1  MX17004  2010     1    30    tmax  27.8</span>
-<span class="co">#> 2  MX17004  2010     1    30    tmin  14.5</span>
-<span class="co">#> 3  MX17004  2010     2     2    tmax  27.3</span>
-<span class="co">#> 4  MX17004  2010     2     2    tmin  14.4</span>
-<span class="co">#> 5  MX17004  2010     2     3    tmax  24.1</span>
-<span class="co">#> 6  MX17004  2010     2     3    tmin  14.4</span>
-<span class="co">#> 7  MX17004  2010     2    11    tmax  29.7</span>
-<span class="co">#> 8  MX17004  2010     2    11    tmin  13.4</span>
-<span class="co">#> 9  MX17004  2010     2    23    tmax  29.9</span>
+<span class="co">#>  1 MX17004  2010     1    30    tmax  27.8</span>
+<span class="co">#>  2 MX17004  2010     1    30    tmin  14.5</span>
+<span class="co">#>  3 MX17004  2010     2     2    tmax  27.3</span>
+<span class="co">#>  4 MX17004  2010     2     2    tmin  14.4</span>
+<span class="co">#>  5 MX17004  2010     2     3    tmax  24.1</span>
+<span class="co">#>  6 MX17004  2010     2     3    tmin  14.4</span>
+<span class="co">#>  7 MX17004  2010     2    11    tmax  29.7</span>
+<span class="co">#>  8 MX17004  2010     2    11    tmin  13.4</span>
+<span class="co">#>  9 MX17004  2010     2    23    tmax  29.9</span>
 <span class="co">#> 10 MX17004  2010     2    23    tmin  10.7</span>
-<span class="co">#> # ... with 56 more rows</span></code></pre></div>
+<span class="co">#> # ... with 56 more rows</span></code></pre>
 <p>This dataset is mostly tidy, but the <code>element</code> column is not a variable; it stores the names of variables. (Not shown in this example are the other meteorological variables <code>prcp</code> (precipitation) and <code>snow</code> (snowfall)). Fixing this requires the spread operation. This performs the inverse of gathering by spreading the <code>element</code> and <code>value</code> columns back out into the columns:</p>
-<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">weather3 %>%<span class="st"> </span><span class="kw">spread</span>(element, value)
-<span class="co">#> # A tibble: 33 × 6</span>
+<pre class="sourceCode r"><code class="sourceCode r">weather3 %>%<span class="st"> </span><span class="kw">spread</span>(element, value)
+<span class="co">#> # A tibble: 33 x 6</span>
 <span class="co">#>         id  year month   day  tmax  tmin</span>
-<span class="co">#> *    <chr> <int> <int> <dbl> <dbl> <dbl></span>
-<span class="co">#> 1  MX17004  2010     1    30  27.8  14.5</span>
-<span class="co">#> 2  MX17004  2010     2     2  27.3  14.4</span>
-<span class="co">#> 3  MX17004  2010     2     3  24.1  14.4</span>
-<span class="co">#> 4  MX17004  2010     2    11  29.7  13.4</span>
-<span class="co">#> 5  MX17004  2010     2    23  29.9  10.7</span>
-<span class="co">#> 6  MX17004  2010     3     5  32.1  14.2</span>
-<span class="co">#> 7  MX17004  2010     3    10  34.5  16.8</span>
-<span class="co">#> 8  MX17004  2010     3    16  31.1  17.6</span>
-<span class="co">#> 9  MX17004  2010     4    27  36.3  16.7</span>
+<span class="co">#>  *   <chr> <int> <int> <dbl> <dbl> <dbl></span>
+<span class="co">#>  1 MX17004  2010     1    30  27.8  14.5</span>
+<span class="co">#>  2 MX17004  2010     2     2  27.3  14.4</span>
+<span class="co">#>  3 MX17004  2010     2     3  24.1  14.4</span>
+<span class="co">#>  4 MX17004  2010     2    11  29.7  13.4</span>
+<span class="co">#>  5 MX17004  2010     2    23  29.9  10.7</span>
+<span class="co">#>  6 MX17004  2010     3     5  32.1  14.2</span>
+<span class="co">#>  7 MX17004  2010     3    10  34.5  16.8</span>
+<span class="co">#>  8 MX17004  2010     3    16  31.1  17.6</span>
+<span class="co">#>  9 MX17004  2010     4    27  36.3  16.7</span>
 <span class="co">#> 10 MX17004  2010     5    27  33.2  18.2</span>
-<span class="co">#> # ... with 23 more rows</span></code></pre></div>
+<span class="co">#> # ... with 23 more rows</span></code></pre>
 <p>This form is tidy: there’s one variable in each column, and each row represents one day.</p>
 </div>
 <div id="multiple-types" class="section level2">
@@ -448,45 +426,45 @@ weather3
 <p>Datasets often involve values collected at multiple levels, on different types of observational units. During tidying, each type of observational unit should be stored in its own table. This is closely related to the idea of database normalisation, where each fact is expressed in only one place. It’s important because otherwise inconsistencies can arise.</p>
 <p>The billboard dataset actually contains observations on two types of observational units: the song and its rank in each week. This manifests itself through the duplication of facts about the song: <code>artist</code>, <code>year</code> and <code>time</code> are repeated many times.</p>
 <p>This dataset needs to be broken down into two pieces: a song dataset which stores <code>artist</code>, <code>song name</code> and <code>time</code>, and a ranking dataset which gives the <code>rank</code> of the <code>song</code> in each <code>week</code>. We first extract a <code>song</code> dataset:</p>
-<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">song <-<span class="st"> </span>billboard3 %>%<span class="st"> </span>
+<pre class="sourceCode r"><code class="sourceCode r">song <-<span class="st"> </span>billboard3 %>%<span class="st"> </span>
 <span class="st">  </span><span class="kw">select</span>(artist, track, year, time) %>%
 <span class="st">  </span><span class="kw">unique</span>() %>%
 <span class="st">  </span><span class="kw">mutate</span>(<span class="dt">song_id =</span> <span class="kw">row_number</span>())
 song
-<span class="co">#> # A tibble: 317 × 5</span>
+<span class="co">#> # A tibble: 317 x 5</span>
 <span class="co">#>            artist                   track  year  time song_id</span>
 <span class="co">#>             <chr>                   <chr> <int> <chr>   <int></span>
-<span class="co">#> 1           2 Pac Baby Don't Cry (Keep...  2000  4:22       1</span>
-<span class="co">#> 2         2Ge+her The Hardest Part Of ...  2000  3:15       2</span>
-<span class="co">#> 3    3 Doors Down              Kryptonite  2000  3:53       3</span>
-<span class="co">#> 4    3 Doors Down                   Loser  2000  4:24       4</span>
-<span class="co">#> 5        504 Boyz           Wobble Wobble  2000  3:35       5</span>
-<span class="co">#> 6            98^0 Give Me Just One Nig...  2000  3:24       6</span>
-<span class="co">#> 7         A*Teens           Dancing Queen  2000  3:44       7</span>
-<span class="co">#> 8         Aaliyah           I Don't Wanna  2000  4:15       8</span>
-<span class="co">#> 9         Aaliyah               Try Again  2000  4:03       9</span>
+<span class="co">#>  1          2 Pac Baby Don't Cry (Keep...  2000  4:22       1</span>
+<span class="co">#>  2        2Ge+her The Hardest Part Of ...  2000  3:15       2</span>
+<span class="co">#>  3   3 Doors Down              Kryptonite  2000  3:53       3</span>
+<span class="co">#>  4   3 Doors Down                   Loser  2000  4:24       4</span>
+<span class="co">#>  5       504 Boyz           Wobble Wobble  2000  3:35       5</span>
+<span class="co">#>  6           98^0 Give Me Just One Nig...  2000  3:24       6</span>
+<span class="co">#>  7        A*Teens           Dancing Queen  2000  3:44       7</span>
+<span class="co">#>  8        Aaliyah           I Don't Wanna  2000  4:15       8</span>
+<span class="co">#>  9        Aaliyah               Try Again  2000  4:03       9</span>
 <span class="co">#> 10 Adams, Yolanda           Open My Heart  2000  5:30      10</span>
-<span class="co">#> # ... with 307 more rows</span></code></pre></div>
+<span class="co">#> # ... with 307 more rows</span></code></pre>
 <p>Then use that to make a <code>rank</code> dataset by replacing repeated song facts with a pointer to song details (a unique song id):</p>
-<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">rank <-<span class="st"> </span>billboard3 %>%
+<pre class="sourceCode r"><code class="sourceCode r">rank <-<span class="st"> </span>billboard3 %>%
 <span class="st">  </span><span class="kw">left_join</span>(song, <span class="kw">c</span>(<span class="st">"artist"</span>, <span class="st">"track"</span>, <span class="st">"year"</span>, <span class="st">"time"</span>)) %>%
 <span class="st">  </span><span class="kw">select</span>(song_id, date, week, rank) %>%
 <span class="st">  </span><span class="kw">arrange</span>(song_id, date)
 rank
-<span class="co">#> # A tibble: 5,307 × 4</span>
+<span class="co">#> # A tibble: 5,307 x 4</span>
 <span class="co">#>    song_id       date  week  rank</span>
 <span class="co">#>      <int>     <date> <dbl> <int></span>
-<span class="co">#> 1        1 2000-02-26     1    87</span>
-<span class="co">#> 2        1 2000-03-04     2    82</span>
-<span class="co">#> 3        1 2000-03-11     3    72</span>
-<span class="co">#> 4        1 2000-03-18     4    77</span>
-<span class="co">#> 5        1 2000-03-25     5    87</span>
-<span class="co">#> 6        1 2000-04-01     6    94</span>
-<span class="co">#> 7        1 2000-04-08     7    99</span>
-<span class="co">#> 8        2 2000-09-02     1    91</span>
-<span class="co">#> 9        2 2000-09-09     2    87</span>
+<span class="co">#>  1       1 2000-02-26     1    87</span>
+<span class="co">#>  2       1 2000-03-04     2    82</span>
+<span class="co">#>  3       1 2000-03-11     3    72</span>
+<span class="co">#>  4       1 2000-03-18     4    77</span>
+<span class="co">#>  5       1 2000-03-25     5    87</span>
+<span class="co">#>  6       1 2000-04-01     6    94</span>
+<span class="co">#>  7       1 2000-04-08     7    99</span>
+<span class="co">#>  8       2 2000-09-02     1    91</span>
+<span class="co">#>  9       2 2000-09-09     2    87</span>
 <span class="co">#> 10       2 2000-09-16     3    92</span>
-<span class="co">#> # ... with 5,297 more rows</span></code></pre></div>
+<span class="co">#> # ... with 5,297 more rows</span></code></pre>
 <p>You could also imagine a <code>week</code> dataset which would record background information about the week, maybe the total number of songs sold or similar “demographic” information.</p>
 <p>Normalisation is useful for tidying and eliminating inconsistencies. However, there are few data analysis tools that work directly with relational data, so analysis usually also requires denormalisation, or merging the datasets back into one table.</p>
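 <p>As a minimal sketch (assuming the <code>song</code> and <code>rank</code> tables created above), denormalising is just a join on the shared <code>song_id</code> key:</p>
 <pre class="sourceCode r"><code class="sourceCode r"># Merge the two normalised tables back into one for analysis
rank %>%
  left_join(song, <span class="st">"song_id"</span>)</code></pre>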
 </div>
@@ -499,10 +477,10 @@ rank
 <li><p>Combine all tables into a single table.</p></li>
 </ol>
 <p>Plyr makes this straightforward in R. The following code generates a vector of file names in a directory (<code>data/</code>) that match a regular expression (ending in <code>.csv</code>). Next we name each element of the vector with the name of the file; we do this because <code>ldply()</code> preserves the names in the following step, ensuring that each row in the final data frame is labeled with its source. Finally, <code>ldply()</code> loops over each path, reading in the csv file and combining the [...]
-<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">library</span>(plyr)
+<pre class="sourceCode r"><code class="sourceCode r"><span class="kw">library</span>(plyr)
 paths <-<span class="st"> </span><span class="kw">dir</span>(<span class="st">"data"</span>, <span class="dt">pattern =</span> <span class="st">"</span><span class="ch">\\</span><span class="st">.csv$"</span>, <span class="dt">full.names =</span> <span class="ot">TRUE</span>)
 <span class="kw">names</span>(paths) <-<span class="st"> </span><span class="kw">basename</span>(paths)
-<span class="kw">ldply</span>(paths, read.csv, <span class="dt">stringsAsFactors =</span> <span class="ot">FALSE</span>)</code></pre></div>
+<span class="kw">ldply</span>(paths, read.csv, <span class="dt">stringsAsFactors =</span> <span class="ot">FALSE</span>)</code></pre>
 <p>Once you have a single table, you can perform additional tidying as needed. An example of this type of cleaning can be found at <a href="https://github.com/hadley/data-baby-names" class="uri">https://github.com/hadley/data-baby-names</a> which takes 129 yearly baby name tables provided by the US Social Security Administration and combines them into a single file.</p>
 <p>A more complicated situation occurs when the dataset structure changes over time. For example, the datasets may contain different variables, the same variables with different names, different file formats, or different conventions for missing values. This may require you to tidy each file individually (or, if you’re lucky, in small groups) and then combine them once tidied. An example of this type of tidying is illustrated in <a href="https://github.com/hadley/data-fuel-economy" cl [...]
 </div>
@@ -521,7 +499,7 @@ paths <-<span class="st"> </span><span class="kw">dir</span>(<span class="st"
   (function () {
     var script = document.createElement("script");
     script.type = "text/javascript";
-    script.src  = "https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML";
+    script.src  = "https://mathjax.rstudio.com/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML";
     document.getElementsByTagName("head")[0].appendChild(script);
   })();
 </script>
diff --git a/man/complete.Rd b/man/complete.Rd
index 19cedf1..044468d 100644
--- a/man/complete.Rd
+++ b/man/complete.Rd
@@ -7,33 +7,33 @@
 complete(data, ..., fill = list())
 }
 \arguments{
-\item{data}{A data frame}
+\item{data}{A data frame.}
 
 \item{...}{Specification of columns to expand.
 
-  To find all unique combinations of x, y and z, including those not
-  found in the data, supply each variable as a separate argument.
-  To find only the combinations that occur in the data, use nest:
-  \code{expand(df, nesting(x, y, z))}.
+To find all unique combinations of x, y and z, including those not
+found in the data, supply each variable as a separate argument.
+To find only the combinations that occur in the data, use nest:
+\code{expand(df, nesting(x, y, z))}.
 
-  You can combine the two forms. For example,
-  \code{expand(df, nesting(school_id, student_id), date)} would produce
-  a row for every student for each date.
+You can combine the two forms. For example,
+\code{expand(df, nesting(school_id, student_id), date)} would produce
+a row for every student for each date.
 
-  For factors, the full set of levels (not just those that appear in the
-  data) are used. For continuous variables, you may need to fill in values
-  that don't appear in the data: to do so use expressions like
-  \code{year = 2010:2020} or \code{year = \link{full_seq}(year)}.
+For factors, the full set of levels (not just those that appear in the
+data) are used. For continuous variables, you may need to fill in values
+that don't appear in the data: to do so use expressions like
+\code{year = 2010:2020} or \code{year = \link{full_seq}(year)}.
 
-  Length-zero (empty) elements are automatically dropped.}
+Length-zero (empty) elements are automatically dropped.}
 
 \item{fill}{A named list that for each variable supplies a single value to
 use instead of \code{NA} for missing combinations.}
 }
 \description{
 Turns implicit missing values into explicit missing values.
-This is a wrapper around \code{\link{expand}()},
-\code{\link[dplyr]{left_join}()} and \code{\link{replace_na}} that's
+This is a wrapper around \code{\link[=expand]{expand()}},
+\code{\link[dplyr:left_join]{dplyr::left_join()}} and \code{\link[=replace_na]{replace_na()}} that's
 useful for completing missing combinations of data.
 }
 \details{
@@ -41,8 +41,8 @@ If you supply \code{fill}, these values will also replace existing
 explicit missing values in the data set.
 }
 \examples{
-library(dplyr)
-df <- data_frame(
+library(dplyr, warn.conflicts = FALSE)
+df <- tibble(
   group = c(1:2, 1),
   item_id = c(1:2, 2),
   item_name = c("a", "b", "b"),
@@ -54,8 +54,3 @@ df \%>\% complete(group, nesting(item_id, item_name))
 # You can also choose to fill in missing values
 df \%>\% complete(group, nesting(item_id, item_name), fill = list(value1 = 0))
 }
-\seealso{
-\code{\link{complete_}} for a version that uses regular evaluation
-  and is suitable for programming with.
-}
-
diff --git a/man/complete_.Rd b/man/complete_.Rd
deleted file mode 100644
index 1e4c222..0000000
--- a/man/complete_.Rd
+++ /dev/null
@@ -1,21 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/complete.R
-\name{complete_}
-\alias{complete_}
-\title{Standard-evaluation version of \code{complete}.}
-\usage{
-complete_(data, cols, fill = list(), ...)
-}
-\arguments{
-\item{data}{A data frame}
-
-\item{cols}{Columns to expand}
-
-\item{fill}{A named list that for each variable supplies a single value to
-use instead of \code{NA} for missing combinations.}
-}
-\description{
-This is a S3 generic.
-}
-\keyword{internal}
-
diff --git a/man/deprecated-se.Rd b/man/deprecated-se.Rd
new file mode 100644
index 0000000..9392195
--- /dev/null
+++ b/man/deprecated-se.Rd
@@ -0,0 +1,164 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/complete.R, R/drop-na.R, R/expand.R,
+%   R/extract.R, R/fill.R, R/gather.R, R/nest.R, R/separate-rows.R,
+%   R/separate.R, R/spread.R, R/tidyr.R, R/unite.R, R/unnest.R
+\name{complete_}
+\alias{complete_}
+\alias{drop_na_}
+\alias{expand_}
+\alias{crossing_}
+\alias{nesting_}
+\alias{extract_}
+\alias{fill_}
+\alias{gather_}
+\alias{nest_}
+\alias{separate_rows_}
+\alias{separate_}
+\alias{spread_}
+\alias{deprecated-se}
+\alias{unite_}
+\alias{unnest_}
+\title{Deprecated SE versions of main verbs}
+\usage{
+complete_(data, cols, fill = list(), ...)
+
+drop_na_(data, vars)
+
+expand_(data, dots, ...)
+
+crossing_(x)
+
+nesting_(x)
+
+extract_(data, col, into, regex = "([[:alnum:]]+)", remove = TRUE,
+  convert = FALSE, ...)
+
+fill_(data, fill_cols, .direction = c("down", "up"))
+
+gather_(data, key_col, value_col, gather_cols, na.rm = FALSE,
+  convert = FALSE, factor_key = FALSE)
+
+nest_(data, key_col, nest_cols = character())
+
+separate_rows_(data, cols, sep = "[^[:alnum:].]+", convert = FALSE)
+
+separate_(data, col, into, sep = "[^[:alnum:]]+", remove = TRUE,
+  convert = FALSE, extra = "warn", fill = "warn", ...)
+
+spread_(data, key_col, value_col, fill = NA, convert = FALSE, drop = TRUE,
+  sep = NULL)
+
+unite_(data, col, from, sep = "_", remove = TRUE)
+
+unnest_(data, unnest_cols, .drop = NA, .id = NULL, .sep = NULL)
+}
+\arguments{
+\item{data}{A data frame}
+
+\item{fill}{A named list that for each variable supplies a single value to
+use instead of \code{NA} for missing combinations.}
+
+\item{...}{Specification of columns to expand.
+
+To find all unique combinations of x, y and z, including those not
+found in the data, supply each variable as a separate argument.
+To find only the combinations that occur in the data, use nest:
+\code{expand(df, nesting(x, y, z))}.
+
+You can combine the two forms. For example,
+\code{expand(df, nesting(school_id, student_id), date)} would produce
+a row for every student for each date.
+
+For factors, the full set of levels (not just those that appear in the
+data) are used. For continuous variables, you may need to fill in values
+that don't appear in the data: to do so use expressions like
+\code{year = 2010:2020} or \code{year = \link{full_seq}(year)}.
+
+Length-zero (empty) elements are automatically dropped.}
+
+\item{vars, cols, col}{Names of columns.}
+
+\item{x}{For \code{nesting_} and \code{crossing_} a list of variables.}
+
+\item{into}{Names of new variables to create as character vector.}
+
+\item{regex}{A regular expression used to extract the desired values.}
+
+\item{remove}{If \code{TRUE}, remove input column from output data frame.}
+
+\item{convert}{If \code{TRUE}, will run \code{\link[=type.convert]{type.convert()}} with
+\code{as.is = TRUE} on new columns. This is useful if the component
+columns are integer, numeric or logical.}
+
+\item{fill_cols}{Character vector of column names.}
+
+\item{.direction}{Direction in which to fill missing values. Currently
+either "down" (the default) or "up".}
+
+\item{key_col, value_col}{Strings giving names of key and value columns to
+create.}
+
+\item{gather_cols}{Character vector giving column names to be gathered into
+pair of key-value columns.}
+
+\item{na.rm}{If \code{TRUE}, will remove rows from output where the
+value column is \code{NA}.}
+
+\item{factor_key}{If \code{FALSE}, the default, the key values will be
+stored as a character vector. If \code{TRUE}, will be stored as a factor,
+which preserves the original ordering of the columns.}
+
+\item{nest_cols}{Character vector of columns to nest.}
+
+\item{sep}{Separator delimiting collapsed values.}
+
+\item{extra}{If \code{sep} is a character vector, this controls what
+happens when there are too many pieces. There are three valid options:
+\itemize{
+\item "warn" (the default): emit a warning and drop extra values.
+\item "drop": drop any extra values without a warning.
+\item "merge": only splits at most \code{length(into)} times
+}}
+
+\item{drop}{If \code{FALSE}, will keep factor levels that don't appear in the
+data, filling in missing combinations with \code{fill}.}
+
+\item{from}{Names of existing columns as character vector.}
+
+\item{unnest_cols}{Names of columns that need to be unnested.}
+
+\item{.drop}{Should additional list columns be dropped? By default,
+\code{unnest} will drop them if unnesting the specified columns requires
+the rows to be duplicated.}
+
+\item{.id}{Data frame identifier - if supplied, will create a new column
+with name \code{.id}, giving a unique identifier. This is most useful if
+the list column is named.}
+
+\item{.sep}{If non-\code{NULL}, the names of unnested data frame columns
+will combine the name of the original list-col with the names from
+nested data frame, separated by \code{.sep}.}
+
+\item{expand_cols}{Character vector of column names to be expanded.}
+
+\item{key_col}{Name of the column that will contain the nested data frames.}
+
+\item{key_col, value_col}{Strings giving names of key and value cols.}
+}
+\description{
+tidyr used to offer twin versions of each verb suffixed with an
+underscore. These versions had standard evaluation (SE) semantics:
+rather than taking arguments by code, like NSE verbs, they took
+arguments by value. Their purpose was to make it possible to
+program with tidyr. However, tidyr now uses tidy evaluation
+semantics. NSE verbs still capture their arguments, but you can now
+unquote parts of these arguments. This offers full programmability
+with NSE verbs. Thus, the underscored versions are now superfluous.
+}
+\details{
+Unquoting triggers immediate evaluation of its operand and inlines
+the result within the captured expression. This result can be a
+value or an expression to be evaluated later with the rest of the
+argument. See \code{vignette("programming", "dplyr")} for more information.
+}
+\keyword{internal}
diff --git a/man/drop_na.Rd b/man/drop_na.Rd
index 393f595..3ac6034 100644
--- a/man/drop_na.Rd
+++ b/man/drop_na.Rd
@@ -1,5 +1,5 @@
 % Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/drop_na.r
+% Please edit documentation in R/drop-na.R
 \name{drop_na}
 \alias{drop_na}
 \title{Drop rows containing missing values}
@@ -9,22 +9,46 @@ drop_na(data, ...)
 \arguments{
 \item{data}{A data frame.}
 
-\item{...}{Specification of variables to consider while dropping rows.
-If empty, consider all variables. Use bare variable names. Select all
- variables between x and z with \code{x:z}, exclude y with \code{-y}.
- For more options, see the \link[dplyr]{select} documentation.}
+\item{...}{A selection of columns. If empty, all variables are
+selected. You can supply bare variable names, select all
+variables between x and z with \code{x:z}, exclude y with \code{-y}. For
+more options, see the \code{\link[dplyr:select]{dplyr::select()}} documentation. See also
+the section on selection rules below.}
 }
 \description{
 Drop rows containing missing values
 }
+\section{Rules for selection}{
+
+
+Arguments for selecting columns are passed to
+\code{\link[tidyselect:vars_select]{tidyselect::vars_select()}} and are treated specially. Unlike other
+verbs, selecting functions make a strict distinction between data
+expressions and context expressions.
+\itemize{
+\item A data expression is either a bare name like \code{x} or an expression
+like \code{x:y} or \code{c(x, y)}. In a data expression, you can only refer
+to columns from the data frame.
+\item Everything else is a context expression in which you can only
+refer to objects that you have defined with \code{<-}.
+}
+
+For instance, \code{col1:col3} is a data expression that refers to data
+columns, while \code{seq(start, end)} is a context expression that
+refers to objects from the contexts.
+
+If you really need to refer to contextual objects from a data
+expression, you can unquote them with the tidy eval operator
+\code{!!}. This operator evaluates its argument in the context and
+inlines the result in the surrounding function call. For instance,
+\code{c(x, !! x)} selects the \code{x} column within the data frame and the
+column referred to by the object \code{x} defined in the context (which
+can contain either a column name as string or a column position).
+}
+
 \examples{
 library(dplyr)
-df <- data_frame(x = c(1, 2, NA), y = c("a", NA, "b"))
+df <- tibble(x = c(1, 2, NA), y = c("a", NA, "b"))
 df \%>\% drop_na()
 df \%>\% drop_na(x)
 }
-\seealso{
-\code{\link{drop_na_}} for a version that uses regular evaluation
-  and is suitable for programming with.
-}
-
diff --git a/man/drop_na_.Rd b/man/drop_na_.Rd
deleted file mode 100644
index 47bcde3..0000000
--- a/man/drop_na_.Rd
+++ /dev/null
@@ -1,19 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/drop_na.r
-\name{drop_na_}
-\alias{drop_na_}
-\title{Standard-evaluation version of \code{drop_na}.}
-\usage{
-drop_na_(data, vars)
-}
-\arguments{
-\item{data}{A data frame.}
-
-\item{vars}{Character vector of variable names. If empty, all
-variables are considered while dropping rows.}
-}
-\description{
-This is a S3 generic.
-}
-\keyword{internal}
-
diff --git a/man/expand.Rd b/man/expand.Rd
index e5416f9..403498f 100644
--- a/man/expand.Rd
+++ b/man/expand.Rd
@@ -1,45 +1,37 @@
 % Generated by roxygen2: do not edit by hand
 % Please edit documentation in R/expand.R
 \name{expand}
-\alias{crossing}
-\alias{crossing_}
 \alias{expand}
+\alias{crossing}
 \alias{nesting}
-\alias{nesting_}
 \title{Expand data frame to include all combinations of values}
 \usage{
 expand(data, ...)
 
 crossing(...)
 
-crossing_(x)
-
 nesting(...)
-
-nesting_(x)
 }
 \arguments{
-\item{data}{A data frame}
+\item{data}{A data frame.}
 
 \item{...}{Specification of columns to expand.
 
-  To find all unique combinations of x, y and z, including those not
-  found in the data, supply each variable as a separate argument.
-  To find only the combinations that occur in the data, use nest:
-  \code{expand(df, nesting(x, y, z))}.
+To find all unique combinations of x, y and z, including those not
+found in the data, supply each variable as a separate argument.
+To find only the combinations that occur in the data, use nest:
+\code{expand(df, nesting(x, y, z))}.
 
-  You can combine the two forms. For example,
-  \code{expand(df, nesting(school_id, student_id), date)} would produce
-  a row for every student for each date.
+You can combine the two forms. For example,
+\code{expand(df, nesting(school_id, student_id), date)} would produce
+a row for every student for each date.
 
-  For factors, the full set of levels (not just those that appear in the
-  data) are used. For continuous variables, you may need to fill in values
-  that don't appear in the data: to do so use expressions like
-  \code{year = 2010:2020} or \code{year = \link{full_seq}(year)}.
+For factors, the full set of levels (not just those that appear in the
+data) are used. For continuous variables, you may need to fill in values
+that don't appear in the data: to do so use expressions like
+\code{year = 2010:2020} or \code{year = \link{full_seq}(year)}.
 
-  Length-zero (empty) elements are automatically dropped.}
-
-\item{x}{For \code{nesting_} and \code{crossing_} a list of variables.}
+Length-zero (empty) elements are automatically dropped.}
 }
 \description{
 \code{expand()} is often useful in conjunction with \code{left_join} if
@@ -48,7 +40,7 @@ Or you can use it in conjunction with \code{anti_join()} to figure
 out which combinations are missing.
 }
 \details{
-\code{crossing()} is similar to \code{\link{expand.grid}()}, this never
+\code{crossing()} is similar to \code{\link[=expand.grid]{expand.grid()}}, but never
 converts strings to factors, returns a \code{tbl_df} without additional
 attributes, and first factors vary slowest. \code{nesting()} is the
 complement to \code{crossing()}: it only keeps combinations of all variables
@@ -64,7 +56,7 @@ expand(mtcars, vs, cyl)
 expand(mtcars, nesting(vs, cyl))
 
 # Implicit missings ---------------------------------------------------------
-df <- data_frame(
+df <- tibble(
   year   = c(2010, 2010, 2010, 2010, 2012, 2012, 2012),
   qtr    = c(   1,    2,    3,    4,    1,    2,    3),
   return = rnorm(7)
@@ -78,7 +70,7 @@ df \%>\% complete(year = full_seq(year, 1), qtr)
 # Each person was given one of two treatments, repeated three times
 # But some of the replications haven't happened yet, so we have
 # incomplete data:
-experiment <- data_frame(
+experiment <- tibble(
   name = rep(c("Alex", "Robert", "Sam"), c(3, 2, 1)),
   trt  = rep(c("a", "b", "a"), c(3, 2, 1)),
   rep = c(1, 2, 3, 1, 2, 1),
@@ -101,10 +93,6 @@ experiment \%>\% right_join(all)
 experiment \%>\% complete(nesting(name, trt), rep)
 }
 \seealso{
-\code{\link{complete}} for a common application of \code{expand}:
-  completing a data frame with missing combinations.
-
-\code{\link{expand_}} for a version that uses regular evaluation
-  and is suitable for programming with.
+\code{\link[=complete]{complete()}} for a common application of \code{expand}:
+completing a data frame with missing combinations.
 }
-
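The `expand()`/`crossing()`/`nesting()` behaviour documented in this file can be sketched in a few lines of R (illustrative only, not part of the patch; assumes tidyr >= 0.7 and dplyr):

```r
library(dplyr)
library(tidyr)

df <- tibble(
  year = c(2010, 2010, 2012),
  qtr  = c(1, 2, 1)
)

df %>% expand(year, qtr)           # all combinations, observed or not
df %>% expand(nesting(year, qtr))  # only combinations present in the data

# crossing() is a tbl_df-returning analogue of expand.grid():
crossing(x = c("a", "b"), y = 1:2)
```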
diff --git a/man/expand_.Rd b/man/expand_.Rd
deleted file mode 100644
index dc2e29c..0000000
--- a/man/expand_.Rd
+++ /dev/null
@@ -1,18 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/expand.R
-\name{expand_}
-\alias{expand_}
-\title{Expand (standard evaluation).}
-\usage{
-expand_(data, dots, ...)
-}
-\arguments{
-\item{data}{A data frame}
-
-\item{expand_cols}{Character vector of column names to be expanded.}
-}
-\description{
-This is a S3 generic.
-}
-\keyword{internal}
-
diff --git a/man/extract.Rd b/man/extract.Rd
index c6be6b5..6959f3f 100644
--- a/man/extract.Rd
+++ b/man/extract.Rd
@@ -10,7 +10,12 @@ extract(data, col, into, regex = "([[:alnum:]]+)", remove = TRUE,
 \arguments{
 \item{data}{A data frame.}
 
-\item{col}{Bare column name.}
+\item{col}{Column name or position. This is passed to
+\code{\link[tidyselect:vars_pull]{tidyselect::vars_pull()}}.
+
+This argument is passed by expression and supports
+\link[rlang:quasiquotation]{quasiquotation} (you can unquote column
+names or column positions).}
 
 \item{into}{Names of new variables to create as character vector.}
 
@@ -18,11 +23,11 @@ extract(data, col, into, regex = "([[:alnum:]]+)", remove = TRUE,
 
 \item{remove}{If \code{TRUE}, remove input column from output data frame.}
 
-\item{convert}{If \code{TRUE}, will run \code{\link{type.convert}} with
+\item{convert}{If \code{TRUE}, will run \code{\link[=type.convert]{type.convert()}} with
 \code{as.is = TRUE} on new columns. This is useful if the component
 columns are integer, numeric or logical.}
 
-\item{...}{Other arguments passed on to \code{\link{regexec}} to control
+\item{...}{Other arguments passed on to \code{\link[=regexec]{regexec()}} to control
 how the regular expression is processed.}
 }
 \description{
@@ -39,8 +44,3 @@ df \%>\% extract(x, c("A", "B"), "([[:alnum:]]+)-([[:alnum:]]+)")
 # If no match, NA:
 df \%>\% extract(x, c("A", "B"), "([a-d]+)-([a-d]+)")
 }
-\seealso{
-\code{\link{extract_}} for a version that uses regular evaluation
-  and is suitable for programming with.
-}
-
diff --git a/man/extract_.Rd b/man/extract_.Rd
deleted file mode 100644
index 94f1622..0000000
--- a/man/extract_.Rd
+++ /dev/null
@@ -1,32 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/extract.R
-\name{extract_}
-\alias{extract_}
-\title{Standard-evaluation version of \code{extract}.}
-\usage{
-extract_(data, col, into, regex = "([[:alnum:]]+)", remove = TRUE,
-  convert = FALSE, ...)
-}
-\arguments{
-\item{data}{A data frame.}
-
-\item{col}{Name of column to split, as string.}
-
-\item{into}{Names of new variables to create as character vector.}
-
-\item{regex}{a regular expression used to extract the desired values.}
-
-\item{remove}{If \code{TRUE}, remove input column from output data frame.}
-
-\item{convert}{If \code{TRUE}, will run \code{\link{type.convert}} with
-\code{as.is = TRUE} on new columns. This is useful if the component
-columns are integer, numeric or logical.}
-
-\item{...}{Other arguments passed on to \code{\link{regexec}} to control
-how the regular expression is processed.}
-}
-\description{
-This is a S3 generic.
-}
-\keyword{internal}
-
diff --git a/man/extract_numeric.Rd b/man/extract_numeric.Rd
index 49bfdea..c49d9b4 100644
--- a/man/extract_numeric.Rd
+++ b/man/extract_numeric.Rd
@@ -12,4 +12,4 @@ extract_numeric(x)
 \description{
 DEPRECATED: please use \code{readr::parse_number()} instead.
 }
-
+\keyword{internal}
diff --git a/man/figures/logo.png b/man/figures/logo.png
new file mode 100644
index 0000000..0d738bd
Binary files /dev/null and b/man/figures/logo.png differ
diff --git a/man/fill.Rd b/man/fill.Rd
index 0fc4b00..288b146 100644
--- a/man/fill.Rd
+++ b/man/fill.Rd
@@ -9,9 +9,23 @@ fill(data, ..., .direction = c("down", "up"))
 \arguments{
 \item{data}{A data frame.}
 
-\item{...}{Specification of columns to fill. Use bare variable names.
-Select all variables between x and z with \code{x:z}, exclude y with
-\code{-y}. For more options, see the \link[dplyr]{select} documentation.}
+\item{...}{Specification of columns to expand.
+
+To find all unique combinations of x, y and z, including those not
+found in the data, supply each variable as a separate argument.
+To find only the combinations that occur in the data, use nest:
+\code{expand(df, nesting(x, y, z))}.
+
+You can combine the two forms. For example,
+\code{expand(df, nesting(school_id, student_id), date)} would produce
+a row for every student for each date.
+
+For factors, the full set of levels (not just those that appear in the
+data) are used. For continuous variables, you may need to fill in values
+that don't appear in the data: to do so use expressions like
+\code{year = 2010:2020} or \code{year = \link{full_seq}(year)}.
+
+Length-zero (empty) elements are automatically dropped.}
 
 \item{.direction}{Direction in which to fill missing values. Currently
 either "down" (the default) or "up".}
@@ -29,8 +43,3 @@ in list.
 df <- data.frame(Month = 1:12, Year = c(2000, rep(NA, 11)))
 df \%>\% fill(Year)
 }
-\seealso{
-\code{\link{fill_}} for a version that uses regular evaluation
-  and is suitable for programming with.
-}
-
diff --git a/man/fill_.Rd b/man/fill_.Rd
deleted file mode 100644
index 6692eea..0000000
--- a/man/fill_.Rd
+++ /dev/null
@@ -1,21 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/fill.R
-\name{fill_}
-\alias{fill_}
-\title{Standard-evaluation version of \code{fill}.}
-\usage{
-fill_(data, fill_cols, .direction = c("down", "up"))
-}
-\arguments{
-\item{data}{A data frame.}
-
-\item{fill_cols}{Character vector of column names.}
-
-\item{.direction}{Direction in which to fill missing values. Currently
-either "down" (the default) or "up".}
-}
-\description{
-This is a S3 generic.
-}
-\keyword{internal}
-
diff --git a/man/full_seq.Rd b/man/full_seq.Rd
index 779e720..a464599 100644
--- a/man/full_seq.Rd
+++ b/man/full_seq.Rd
@@ -22,4 +22,3 @@ will return \code{1:6}.
 \examples{
 full_seq(c(1, 2, 4, 5, 10), 1)
 }
-
diff --git a/man/gather.Rd b/man/gather.Rd
index 9dce0bf..1b6ff38 100644
--- a/man/gather.Rd
+++ b/man/gather.Rd
@@ -4,23 +4,33 @@
 \alias{gather}
 \title{Gather columns into key-value pairs.}
 \usage{
-gather(data, key, value, ..., na.rm = FALSE, convert = FALSE,
-  factor_key = FALSE)
+gather(data, key = "key", value = "value", ..., na.rm = FALSE,
+  convert = FALSE, factor_key = FALSE)
 }
 \arguments{
 \item{data}{A data frame.}
 
-\item{key, value}{Names of key and value columns to create in output.}
+\item{key, value}{Names of new key and value columns, as strings or
+symbols.
 
-\item{...}{Specification of columns to gather. Use bare variable names.
-Select all variables between x and z with \code{x:z}, exclude y with
-\code{-y}. For more options, see the \link[dplyr]{select} documentation.}
+This argument is passed by expression and supports
+\link[rlang:quasiquotation]{quasiquotation} (you can unquote strings
+and symbols). The name is captured from the expression with
+\code{\link[rlang:quo_name]{rlang::quo_name()}} (note that this kind of interface where
+symbols do not represent actual objects is now discouraged in the
+tidyverse; we support it here for backward compatibility).}
+
+\item{...}{A selection of columns. If empty, all variables are
+selected. You can supply bare variable names, select all
+variables between x and z with \code{x:z}, exclude y with \code{-y}. For
+more options, see the \code{\link[dplyr:select]{dplyr::select()}} documentation. See also
+the section on selection rules below.}
 
 \item{na.rm}{If \code{TRUE}, will remove rows from output where the
 value column is \code{NA}.}
 
 \item{convert}{If \code{TRUE} will automatically run
-\code{\link{type.convert}} on the key column. This is useful if the column
+\code{\link[=type.convert]{type.convert()}} on the key column. This is useful if the column
 names are actually numeric, integer, or logical.}
 
 \item{factor_key}{If \code{FALSE}, the default, the key values will be
@@ -32,10 +42,38 @@ Gather takes multiple columns and collapses into key-value pairs,
 duplicating all other columns as needed. You use \code{gather()} when
 you notice that you have columns that are not variables.
 }
+\section{Rules for selection}{
+
+
+Arguments for selecting columns are passed to
+\code{\link[tidyselect:vars_select]{tidyselect::vars_select()}} and are treated specially. Unlike other
+verbs, selecting functions make a strict distinction between data
+expressions and context expressions.
+\itemize{
+\item A data expression is either a bare name like \code{x} or an expression
+like \code{x:y} or \code{c(x, y)}. In a data expression, you can only refer
+to columns from the data frame.
+\item Everything else is a context expression in which you can only
+refer to objects that you have defined with \code{<-}.
+}
+
+For instance, \code{col1:col3} is a data expression that refers to data
+columns, while \code{seq(start, end)} is a context expression that
+refers to objects from the context.
+
+If you really need to refer to contextual objects from a data
+expression, you can unquote them with the tidy eval operator
+\code{!!}. This operator evaluates its argument in the context and
+inlines the result in the surrounding function call. For instance,
+\code{c(x, !! x)} selects the \code{x} column within the data frame and the
+column referred to by the object \code{x} defined in the context (which
+can contain either a column name as a string or a column position).
+}
+
 \examples{
 library(dplyr)
 # From http://stackoverflow.com/questions/1181060
-stocks <- data_frame(
+stocks <- tibble(
   time = as.Date('2009-01-01') + 0:9,
   X = rnorm(10, 0, 1),
   Y = rnorm(10, 0, 2),
@@ -61,8 +99,3 @@ mini_iris <-
   slice(1)
 mini_iris \%>\% gather(key = flower_att, value = measurement, -Species)
 }
-\seealso{
-\code{\link{gather_}} for a version that uses regular evaluation
-  and is suitable for programming with.
-}
-
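The quasiquotation interface this hunk documents for `gather()` can be sketched in R (an illustrative snippet, not part of the patch; assumes tidyr >= 0.7 and dplyr):

```r
library(dplyr)
library(tidyr)

stocks <- tibble(
  time = as.Date("2009-01-01") + 0:2,
  X = rnorm(3),
  Y = rnorm(3)
)

stocks %>% gather(stock, price, -time)    # key/value as bare symbols
stocks %>% gather("stock", "price", X:Y)  # ... or as strings

# Unquoting a contextual object into the selection with !!:
cols <- c("X", "Y")
stocks %>% gather(stock, price, !! cols)
```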
diff --git a/man/gather_.Rd b/man/gather_.Rd
deleted file mode 100644
index 8045a34..0000000
--- a/man/gather_.Rd
+++ /dev/null
@@ -1,34 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/gather.R
-\name{gather_}
-\alias{gather_}
-\title{Gather (standard-evaluation).}
-\usage{
-gather_(data, key_col, value_col, gather_cols, na.rm = FALSE,
-  convert = FALSE, factor_key = FALSE)
-}
-\arguments{
-\item{data}{A data frame}
-
-\item{key_col, value_col}{Strings giving names of key and value columns to
-create.}
-
-\item{gather_cols}{Character vector giving column names to be gathered into
-pair of key-value columns.}
-
-\item{na.rm}{If \code{TRUE}, will remove rows from output where the
-value column in \code{NA}.}
-
-\item{convert}{If \code{TRUE} will automatically run
-\code{\link{type.convert}} on the key column. This is useful if the column
-names are actually numeric, integer, or logical.}
-
-\item{factor_key}{If \code{FALSE}, the default, the key values will be
-stored as a character vector. If \code{TRUE}, will be stored as a factor,
-which preserves the original ordering of the columns.}
-}
-\description{
-This is a S3 generic.
-}
-\keyword{internal}
-
diff --git a/man/nest.Rd b/man/nest.Rd
index 10d5437..d14ac22 100644
--- a/man/nest.Rd
+++ b/man/nest.Rd
@@ -4,26 +4,63 @@
 \alias{nest}
 \title{Nest repeated values in a list-variable.}
 \usage{
-nest(data, ..., .key = data)
+nest(data, ..., .key = "data")
 }
 \arguments{
 \item{data}{A data frame.}
 
-\item{...}{Specification of columns to nest. Use bare variable names.
-Select all variables between x and z with \code{x:z}, exclude y with
-\code{-y}. For more options, see the \link[dplyr]{select} documentation.}
+\item{...}{A selection of columns. If empty, all variables are
+selected. You can supply bare variable names, select all
+variables between x and z with \code{x:z}, exclude y with \code{-y}. For
+more options, see the \code{\link[dplyr:select]{dplyr::select()}} documentation. See also
+the section on selection rules below.}
 
-\item{.key}{The name of the new column.}
+\item{.key}{The name of the new column, as a string or symbol.
+
+This argument is passed by expression and supports
+\link[rlang:quasiquotation]{quasiquotation} (you can unquote strings
+and symbols). The name is captured from the expression with
+\code{\link[rlang:quo_name]{rlang::quo_name()}} (note that this kind of interface where
+symbols do not represent actual objects is now discouraged in the
+tidyverse; we support it here for backward compatibility).}
 }
 \description{
 There are many possible ways one could choose to nest columns inside a
 data frame. \code{nest()} creates a list of data frames containing all
 the nested variables: this seems to be the most useful form in practice.
 }
+\section{Rules for selection}{
+
+
+Arguments for selecting columns are passed to
+\code{\link[tidyselect:vars_select]{tidyselect::vars_select()}} and are treated specially. Unlike other
+verbs, selecting functions make a strict distinction between data
+expressions and context expressions.
+\itemize{
+\item A data expression is either a bare name like \code{x} or an expression
+like \code{x:y} or \code{c(x, y)}. In a data expression, you can only refer
+to columns from the data frame.
+\item Everything else is a context expression in which you can only
+refer to objects that you have defined with \code{<-}.
+}
+
+For instance, \code{col1:col3} is a data expression that refers to data
+columns, while \code{seq(start, end)} is a context expression that
+refers to objects from the context.
+
+If you really need to refer to contextual objects from a data
+expression, you can unquote them with the tidy eval operator
+\code{!!}. This operator evaluates its argument in the context and
+inlines the result in the surrounding function call. For instance,
+\code{c(x, !! x)} selects the \code{x} column within the data frame and the
+column referred to by the object \code{x} defined in the context (which
+can contain either a column name as a string or a column position).
+}
+
 \examples{
 library(dplyr)
-iris \%>\% nest(-Species)
-chickwts \%>\% nest(weight)
+as_tibble(iris) \%>\% nest(-Species)
+as_tibble(chickwts) \%>\% nest(weight)
 
 if (require("gapminder")) {
   gapminder \%>\%
@@ -35,9 +72,5 @@ if (require("gapminder")) {
 }
 }
 \seealso{
-\code{\link{unnest}} for the inverse operation.
-
-\code{\link{nest_}} for a version that uses regular evaluation
-  and is suitable for programming with.
+\code{\link[=unnest]{unnest()}} for the inverse operation.
 }
-
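The new `.key` interface for `nest()` accepts either a string or a symbol, as this hunk documents; a minimal R sketch (illustrative, not part of the patch; the column name `measurements` is arbitrary):

```r
library(dplyr)
library(tidyr)

# .key may be given as a string or as a bare symbol:
as_tibble(iris) %>% nest(-Species, .key = "measurements")
as_tibble(iris) %>% nest(-Species, .key = measurements)
```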
diff --git a/man/nest_.Rd b/man/nest_.Rd
deleted file mode 100644
index 7b26547..0000000
--- a/man/nest_.Rd
+++ /dev/null
@@ -1,20 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/nest.R
-\name{nest_}
-\alias{nest_}
-\title{Standard-evaluation version of \code{nest}.}
-\usage{
-nest_(data, key_col, nest_cols = character())
-}
-\arguments{
-\item{data}{A data frame.}
-
-\item{key_col}{Name of the column that will contain the nested data frames.}
-
-\item{nest_cols}{Character vector of columns to nest.}
-}
-\description{
-This is a S3 generic.
-}
-\keyword{internal}
-
diff --git a/man/pipe.Rd b/man/pipe.Rd
index 2c738a6..51295fd 100644
--- a/man/pipe.Rd
+++ b/man/pipe.Rd
@@ -10,4 +10,3 @@ lhs \%>\% rhs
 See \code{\link[magrittr]{\%>\%}} for more details.
 }
 \keyword{internal}
-
diff --git a/man/replace_na.Rd b/man/replace_na.Rd
index 5a46471..42967f9 100644
--- a/man/replace_na.Rd
+++ b/man/replace_na.Rd
@@ -19,7 +19,6 @@ Replace missing values
 }
 \examples{
 library(dplyr)
-df <- data_frame(x = c(1, 2, NA), y = c("a", NA, "b"))
+df <- tibble(x = c(1, 2, NA), y = c("a", NA, "b"))
 df \%>\% replace_na(list(x = 0, y = "unknown"))
 }
-
diff --git a/man/separate.Rd b/man/separate.Rd
index 7c04b75..b1bcbaf 100644
--- a/man/separate.Rd
+++ b/man/separate.Rd
@@ -10,44 +10,47 @@ separate(data, col, into, sep = "[^[:alnum:]]+", remove = TRUE,
 \arguments{
 \item{data}{A data frame.}
 
-\item{col}{Bare column name.}
+\item{col}{Column name or position. This is passed to
+\code{\link[tidyselect:vars_pull]{tidyselect::vars_pull()}}.
+
+This argument is passed by expression and supports
+\link[rlang:quasiquotation]{quasiquotation} (you can unquote column
+names or column positions).}
 
 \item{into}{Names of new variables to create as character vector.}
 
 \item{sep}{Separator between columns.
 
-  If character, is interpreted as a regular expression. The default
-  value is a regular expression that matches any sequence of
-  non-alphanumeric values.
+If character, is interpreted as a regular expression. The default
+value is a regular expression that matches any sequence of
+non-alphanumeric values.
 
-  If numeric, interpreted as positions to split at. Positive values start
-  at 1 at the far-left of the string; negative value start at -1 at the
-  far-right of the string. The length of \code{sep} should be one less than
-  \code{into}.}
+If numeric, interpreted as positions to split at. Positive values start
+at 1 at the far-left of the string; negative values start at -1 at the
+far-right of the string. The length of \code{sep} should be one less than
+\code{into}.}
 
 \item{remove}{If \code{TRUE}, remove input column from output data frame.}
 
-\item{convert}{If \code{TRUE}, will run \code{\link{type.convert}} with
+\item{convert}{If \code{TRUE}, will run \code{\link[=type.convert]{type.convert()}} with
 \code{as.is = TRUE} on new columns. This is useful if the component
 columns are integer, numeric or logical.}
 
 \item{extra}{If \code{sep} is a character vector, this controls what
-  happens when there are too many pieces. There are three valid options:
-
-  \itemize{
-   \item "warn" (the default): emit a warning and drop extra values.
-   \item "drop": drop any extra values without a warning.
-   \item "merge": only splits at most \code{length(into)} times
-  }}
+happens when there are too many pieces. There are three valid options:
+\itemize{
+\item "warn" (the default): emit a warning and drop extra values.
+\item "drop": drop any extra values without a warning.
+\item "merge": splits at most \code{length(into)} times.
+}}
 
 \item{fill}{If \code{sep} is a character vector, this controls what
-  happens when there are not enough pieces. There are three valid options:
-
-  \itemize{
-   \item "warn" (the default): emit a warning and fill from the right
-   \item "right": fill with missing values on the right
-   \item "left": fill with missing values on the left
-  }}
+happens when there are not enough pieces. There are three valid options:
+\itemize{
+\item "warn" (the default): emit a warning and fill from the right
+\item "right": fill with missing values on the right
+\item "left": fill with missing values on the left
+}}
 
 \item{...}{Defunct, will be removed in the next version of the package.}
 }
@@ -74,9 +77,5 @@ df <- data.frame(x = c("x: 123", "y: error: 7"))
 df \%>\% separate(x, c("key", "value"), ": ", extra = "merge")
 }
 \seealso{
-\code{\link{unite}()}, the complement.
-
-\code{\link{separate_}} for a version that uses regular evaluation
-  and is suitable for programming with.
+\code{\link[=unite]{unite()}}, the complement.
 }
-
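The `extra` and `fill` options described in this file can be sketched in R (illustrative snippet, not part of the patch; assumes tidyr >= 0.7 and dplyr):

```r
library(dplyr)
library(tidyr)

df <- tibble(x = c("a-1", "b-2", "c"))

# fill = "right" pads short splits with NA on the right:
df %>% separate(x, c("key", "value"), sep = "-", fill = "right")

# extra = "merge" keeps surplus pieces in the last column:
tibble(x = "a-1-extra") %>%
  separate(x, c("key", "value"), sep = "-", extra = "merge")
```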
diff --git a/man/separate_.Rd b/man/separate_.Rd
deleted file mode 100644
index 4057d11..0000000
--- a/man/separate_.Rd
+++ /dev/null
@@ -1,58 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/separate.R
-\name{separate_}
-\alias{separate_}
-\title{Standard-evaluation version of \code{separate}.}
-\usage{
-separate_(data, col, into, sep = "[^[:alnum:]]+", remove = TRUE,
-  convert = FALSE, extra = "warn", fill = "warn", ...)
-}
-\arguments{
-\item{data}{A data frame.}
-
-\item{col}{Name of column to split, as string.}
-
-\item{into}{Names of new variables to create as character vector.}
-
-\item{sep}{Separator between columns.
-
-  If character, is interpreted as a regular expression. The default
-  value is a regular expression that matches any sequence of
-  non-alphanumeric values.
-
-  If numeric, interpreted as positions to split at. Positive values start
-  at 1 at the far-left of the string; negative value start at -1 at the
-  far-right of the string. The length of \code{sep} should be one less than
-  \code{into}.}
-
-\item{remove}{If \code{TRUE}, remove input column from output data frame.}
-
-\item{convert}{If \code{TRUE}, will run \code{\link{type.convert}} with
-\code{as.is = TRUE} on new columns. This is useful if the component
-columns are integer, numeric or logical.}
-
-\item{extra}{If \code{sep} is a character vector, this controls what
-  happens when there are too many pieces. There are three valid options:
-
-  \itemize{
-   \item "warn" (the default): emit a warning and drop extra values.
-   \item "drop": drop any extra values without a warning.
-   \item "merge": only splits at most \code{length(into)} times
-  }}
-
-\item{fill}{If \code{sep} is a character vector, this controls what
-  happens when there are not enough pieces. There are three valid options:
-
-  \itemize{
-   \item "warn" (the default): emit a warning and fill from the right
-   \item "right": fill with missing values on the right
-   \item "left": fill with missing values on the left
-  }}
-
-\item{...}{Defunct, will be removed in the next version of the package.}
-}
-\description{
-This is a S3 generic.
-}
-\keyword{internal}
-
diff --git a/man/separate_rows.Rd b/man/separate_rows.Rd
index c7df1c1..1e554ca 100644
--- a/man/separate_rows.Rd
+++ b/man/separate_rows.Rd
@@ -9,20 +9,50 @@ separate_rows(data, ..., sep = "[^[:alnum:].]+", convert = FALSE)
 \arguments{
 \item{data}{A data frame.}
 
-\item{...}{Specification of columns to separate. Use bare variable names.
-Select all variables between x and z with \code{x:z}, exclude y with
-\code{-y}. For more options, see the \link[dplyr]{select} documentation.}
+\item{...}{A selection of columns. If empty, all variables are
+selected. You can supply bare variable names, select all
+variables between x and z with \code{x:z}, exclude y with \code{-y}. For
+more options, see the \code{\link[dplyr:select]{dplyr::select()}} documentation. See also
+the section on selection rules below.}
 
 \item{sep}{Separator delimiting collapsed values.}
 
-\item{convert}{If \code{TRUE}, will run \code{\link{type.convert}} with
-\code{as.is = TRUE} on new columns. This is useful if the component
-columns are integer, numeric or logical.}
+\item{convert}{If \code{TRUE}, will run \code{\link[=type.convert]{type.convert()}} with
+\code{as.is = TRUE} on new columns. This is useful if the component
+columns are integer, numeric or logical.}
 }
 \description{
 If a variable contains observations with multiple delimited values, this
 separates the values and places each one in its own row.
 }
+\section{Rules for selection}{
+
+
+Arguments for selecting columns are passed to
+\code{\link[tidyselect:vars_select]{tidyselect::vars_select()}} and are treated specially. Unlike other
+verbs, selecting functions make a strict distinction between data
+expressions and context expressions.
+\itemize{
+\item A data expression is either a bare name like \code{x} or an expression
+like \code{x:y} or \code{c(x, y)}. In a data expression, you can only refer
+to columns from the data frame.
+\item Everything else is a context expression in which you can only
+refer to objects that you have defined with \code{<-}.
+}
+
+For instance, \code{col1:col3} is a data expression that refers to data
+columns, while \code{seq(start, end)} is a context expression that
+refers to objects from the context.
+
+If you really need to refer to contextual objects from a data
+expression, you can unquote them with the tidy eval operator
+\code{!!}. This operator evaluates its argument in the context and
+inlines the result in the surrounding function call. For instance,
+\code{c(x, !! x)} selects the \code{x} column within the data frame and the
+column referred to by the object \code{x} defined in the context (which
+can contain either a column name as a string or a column position).
+}
+
 \examples{
 
 df <- data.frame(
@@ -33,4 +63,3 @@ df <- data.frame(
 )
 separate_rows(df, y, z, convert = TRUE)
 }
-
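The row-expansion behaviour of `separate_rows()` described above amounts to one output row per delimited value; a minimal R sketch (illustrative, not part of the patch):

```r
library(tidyr)

df <- data.frame(
  x = 1:2,
  y = c("a,b,c", "d"),
  stringsAsFactors = FALSE
)

# Each comma-separated value of y gets its own row;
# x is duplicated as needed:
separate_rows(df, y, sep = ",")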
diff --git a/man/separate_rows_.Rd b/man/separate_rows_.Rd
deleted file mode 100644
index d16949b..0000000
--- a/man/separate_rows_.Rd
+++ /dev/null
@@ -1,23 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/separate-rows.R
-\name{separate_rows_}
-\alias{separate_rows_}
-\title{Standard-evaluation version of \code{separate_rows}.}
-\usage{
-separate_rows_(data, cols, sep = "[^[:alnum:].]+", convert = FALSE)
-}
-\arguments{
-\item{data}{A data frame.}
-
-\item{cols}{Name of columns that need to be separated.}
-
-\item{sep}{Separator delimiting collapsed values.}
-
-\item{convert}{If \code{TRUE}, will run \code{\link{type.convert}} with
-\code{as.is = TRUE} on new columns. This is useful if the component
-columns are integer, numeric or logical.}
-}
-\description{
-This is a S3 generic.
-}
-
diff --git a/man/smiths.Rd b/man/smiths.Rd
index 73211cc..34d24da 100644
--- a/man/smiths.Rd
+++ b/man/smiths.Rd
@@ -12,4 +12,3 @@ smiths
 A small demo dataset describing John and Mary Smith.
 }
 \keyword{datasets}
-
diff --git a/man/spread.Rd b/man/spread.Rd
index 9c146aa..a4700ac 100644
--- a/man/spread.Rd
+++ b/man/spread.Rd
@@ -10,19 +10,20 @@ spread(data, key, value, fill = NA, convert = FALSE, drop = TRUE,
 \arguments{
 \item{data}{A data frame.}
 
-\item{key}{The bare (unquoted) name of the column whose values will be used
-as column headings.}
+\item{key, value}{Column names or positions. This is passed to
+\code{\link[tidyselect:vars_pull]{tidyselect::vars_pull()}}.
 
-\item{value}{The bare (unquoted) name of the column whose values will
-populate the cells.}
+These arguments are passed by expression and support
+\link[rlang:quasiquotation]{quasiquotation} (you can unquote column
+names or column positions).}
 
 \item{fill}{If set, missing values will be replaced with this value. Note
 that there are two types of missingness in the input: explicit missing
 values (i.e. \code{NA}), and implicit missings, rows that simply aren't
 present. Both types of missing value will be replaced by \code{fill}.}
 
-\item{convert}{If \code{TRUE}, \code{\link{type.convert}} with \code{asis =
-TRUE} will be run on each of the new columns. This is useful if the value
+\item{convert}{If \code{TRUE}, \code{\link[=type.convert]{type.convert()}} with \code{as.is =
+  TRUE} will be run on each of the new columns. This is useful if the value
 column was a mix of variables that was coerced to a string. If the class of
 the value column was factor or date, note that will not be true of the new
 columns that are produced, which are coerced to character before type
@@ -61,8 +62,3 @@ df <- data.frame(row = rep(c(1, 51), each = 3),
 df \%>\% spread(var, value) \%>\% str
 df \%>\% spread(var, value, convert = TRUE) \%>\% str
 }
-\seealso{
-\code{\link{spread_}} for a version that uses regular evaluation
-  and is suitable for programming with.
-}
-
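The new `key`/`value` interface for `spread()` (passed to `tidyselect::vars_pull()`, so names or positions both work) can be sketched in R (illustrative, not part of the patch; assumes tidyr >= 0.7 and dplyr):

```r
library(dplyr)
library(tidyr)

df <- tibble(
  row   = rep(1:2, each = 2),
  var   = rep(c("a", "b"), 2),
  value = 1:4
)

df %>% spread(var, value)  # key/value as bare names
df %>% spread(2, 3)        # ... or as column positions
```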
diff --git a/man/spread_.Rd b/man/spread_.Rd
deleted file mode 100644
index 92422af..0000000
--- a/man/spread_.Rd
+++ /dev/null
@@ -1,38 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/spread.R
-\name{spread_}
-\alias{spread_}
-\title{Standard-evaluation version of \code{spread}.}
-\usage{
-spread_(data, key_col, value_col, fill = NA, convert = FALSE, drop = TRUE,
-  sep = NULL)
-}
-\arguments{
-\item{data}{A data frame.}
-
-\item{key_col, value_col}{Strings giving names of key and value cols.}
-
-\item{fill}{If set, missing values will be replaced with this value. Note
-that there are two types of missingness in the input: explicit missing
-values (i.e. \code{NA}), and implicit missings, rows that simply aren't
-present. Both types of missing value will be replaced by \code{fill}.}
-
-\item{convert}{If \code{TRUE}, \code{\link{type.convert}} with \code{asis =
-TRUE} will be run on each of the new columns. This is useful if the value
-column was a mix of variables that was coerced to a string. If the class of
-the value column was factor or date, note that will not be true of the new
-columns that are produced, which are coerced to character before type
-conversion.}
-
-\item{drop}{If \code{FALSE}, will keep factor levels that don't appear in the
-data, filling in missing combinations with \code{fill}.}
-
-\item{sep}{If \code{NULL}, the column names will be taken from the values of
-\code{key} variable. If non-\code{NULL}, the column names will be given
-by "<key_name><sep><key_value>".}
-}
-\description{
-This is a S3 generic.
-}
-\keyword{internal}
-
diff --git a/man/table1.Rd b/man/table1.Rd
index f1df4c1..2c8236a 100644
--- a/man/table1.Rd
+++ b/man/table1.Rd
@@ -41,4 +41,3 @@ The data is a subset of the data contained in the World Health
 Organization Global Tuberculosis Report
 }
 \keyword{datasets}
-
diff --git a/man/tidyr-package.Rd b/man/tidyr-package.Rd
new file mode 100644
index 0000000..95d47e8
--- /dev/null
+++ b/man/tidyr-package.Rd
@@ -0,0 +1,36 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/tidyr.R
+\docType{package}
+\name{tidyr-package}
+\alias{tidyr}
+\alias{tidyr-package}
+\title{tidyr: Easily Tidy Data with 'spread()' and 'gather()' Functions}
+\description{
+An evolution of 'reshape2'. It's designed specifically for data
+tidying (not general reshaping or aggregating) and works well with
+'dplyr' data pipelines.
+}
+\seealso{
+Useful links:
+\itemize{
+  \item \url{http://tidyr.tidyverse.org}
+  \item \url{https://github.com/tidyverse/tidyr}
+  \item Report bugs at \url{https://github.com/tidyverse/tidyr/issues}
+}
+
+}
+\author{
+\strong{Maintainer}: Hadley Wickham \email{hadley@rstudio.com}
+
+Authors:
+\itemize{
+  \item Lionel Henry \email{lionel@rstudio.com}
+}
+
+Other contributors:
+\itemize{
+  \item RStudio [copyright holder]
+}
+
+}
+\keyword{internal}
diff --git a/man/unite.Rd b/man/unite.Rd
index d219862..fe74734 100644
--- a/man/unite.Rd
+++ b/man/unite.Rd
@@ -9,11 +9,20 @@ unite(data, col, ..., sep = "_", remove = TRUE)
 \arguments{
 \item{data}{A data frame.}
 
-\item{col}{(Bare) name of column to add}
+\item{col}{The name of the new column, as a string or symbol.
 
-\item{...}{Specification of columns to unite. Use bare variable names.
-Select all variables between x and z with \code{x:z}, exclude y with
-\code{-y}. For more options, see the \link[dplyr]{select} documentation.}
+This argument is passed by expression and supports
+\link[rlang:quasiquotation]{quasiquotation} (you can unquote strings
+and symbols). The name is captured from the expression with
+\code{\link[rlang:quo_name]{rlang::quo_name()}} (note that this kind of interface where
+symbols do not represent actual objects is now discouraged in the
+tidyverse; we support it here for backward compatibility).}
+
+\item{...}{A selection of columns. If empty, all variables are
+selected. You can supply bare variable names, select all
+variables between x and z with \code{x:z}, exclude y with \code{-y}. For
+more options, see the \code{\link[dplyr:select]{dplyr::select()}} documentation. See also
+the section on selection rules below.}
 
 \item{sep}{Separator to use between values.}
 
@@ -22,6 +31,34 @@ Select all variables between x and z with \code{x:z}, exclude y with
 \description{
 Convenience function to paste together multiple columns into one.
 }
+\section{Rules for selection}{
+
+
+Arguments for selecting columns are passed to
+\code{\link[tidyselect:vars_select]{tidyselect::vars_select()}} and are treated specially. Unlike other
+verbs, selecting functions make a strict distinction between data
+expressions and context expressions.
+\itemize{
+\item A data expression is either a bare name like \code{x} or an expression
+like \code{x:y} or \code{c(x, y)}. In a data expression, you can only refer
+to columns from the data frame.
+\item Everything else is a context expression in which you can only
+refer to objects that you have defined with \code{<-}.
+}
+
+For instance, \code{col1:col3} is a data expression that refers to data
+columns, while \code{seq(start, end)} is a context expression that
+refers to objects from the context.
+
+If you really need to refer to contextual objects from a data
+expression, you can unquote them with the tidy eval operator
+\code{!!}. This operator evaluates its argument in the context and
+inlines the result in the surrounding function call. For instance,
+\code{c(x, !! x)} selects the \code{x} column within the data frame and the
+column referred to by the object \code{x} defined in the context (which
+can contain either a column name as string or a column position).
+}
+
 \examples{
 library(dplyr)
 unite_(mtcars, "vs_am", c("vs","am"))
@@ -32,9 +69,5 @@ mtcars \%>\%
   separate(vs_am, c("vs", "am"))
 }
 \seealso{
-\code{\link{separate}()}, the complement.
-
-\code{\link{unite_}} for a version that uses regular evaluation
-  and is suitable for programming with.
+\code{\link[=separate]{separate()}}, the complement.
 }
-
diff --git a/man/unite_.Rd b/man/unite_.Rd
deleted file mode 100644
index b0f32bd..0000000
--- a/man/unite_.Rd
+++ /dev/null
@@ -1,24 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/unite.R
-\name{unite_}
-\alias{unite_}
-\title{Standard-evaluation version of \code{unite}}
-\usage{
-unite_(data, col, from, sep = "_", remove = TRUE)
-}
-\arguments{
-\item{data}{A data frame.}
-
-\item{col}{Name of new column as string.}
-
-\item{from}{Names of existing columns as character vector}
-
-\item{sep}{Separator to use between values.}
-
-\item{remove}{If \code{TRUE}, remove input columns from output data frame.}
-}
-\description{
-This is a S3 generic.
-}
-\keyword{internal}
-
diff --git a/man/unnest.Rd b/man/unnest.Rd
index d0d1438..5e73004 100644
--- a/man/unnest.Rd
+++ b/man/unnest.Rd
@@ -16,8 +16,8 @@ functions of variables. If omitted, defaults to all list-cols.}
 \code{unnest} will drop them if unnesting the specified columns requires
 the rows to be duplicated.}
 
-\item{.id}{Data frame idenfier - if supplied, will create a new column
-with name \code{.id}, giving a unique identifer. This is most useful if
+\item{.id}{Data frame identifier - if supplied, will create a new column
+with name \code{.id}, giving a unique identifier. This is most useful if
 the list column is named.}
 
 \item{.sep}{If non-\code{NULL}, the names of unnested data frame columns
@@ -26,12 +26,15 @@ nested data frame, separated by \code{.sep}.}
 }
 \description{
 If you have a list-column, this makes each element of the list its own
-row. List-columns can either be atomic vectors or data frames. Each
-row must have the same number of entries.
+row. List-columns can either be atomic vectors or data frames.
+}
+\details{
+If you unnest multiple columns, parallel entries must have the same length
+or number of rows (if a data frame).
 }
 \examples{
 library(dplyr)
-df <- data_frame(
+df <- tibble(
   x = 1:3,
   y = c("a", "d,e,f", "g,h")
 )
@@ -44,17 +47,17 @@ df \%>\%
   unnest(y = strsplit(y, ","))
 
 # It also works if you have a column that contains other data frames!
-df <- data_frame(
+df <- tibble(
   x = 1:2,
   y = list(
-   data_frame(z = 1),
-   data_frame(z = 3:4)
+   tibble(z = 1),
+   tibble(z = 3:4)
  )
 )
 df \%>\% unnest(y)
 
 # You can also unnest multiple columns simultaneously
-df <- data_frame(
+df <- tibble(
  a = list(c("a", "b"), "c"),
  b = list(1:2, 3),
  c = c(11, 22)
@@ -69,16 +72,12 @@ df \%>\% nest(y)
 df \%>\% nest(y) \%>\% unnest()
 
 # If you have a named list-column, you may want to supply .id
-df <- data_frame(
+df <- tibble(
   x = 1:2,
   y = list(a = 1, b = 3:4)
 )
 unnest(df, .id = "name")
 }
 \seealso{
-\code{\link{nest}} for the inverse operation.
-
-\code{\link{unnest_}} for a version that uses regular evaluation
-  and is suitable for programming with.
+\code{\link[=nest]{nest()}} for the inverse operation.
 }
-
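The `.sep` argument documented in unnest.Rd above prefixes unnested data frame columns with the name of the original list-column. A minimal sketch, assuming tidyr >= 0.7:

```r
library(tidyr)
library(tibble)

df <- tibble(
  x = 1:2,
  meta = list(
    tibble(z = 1),
    tibble(z = 3:4)
  )
)

# Without .sep the nested column keeps its own name, z; with
# .sep = "_" the list-col name is prepended, giving meta_z
unnest(df, meta, .sep = "_")
```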
diff --git a/man/unnest_.Rd b/man/unnest_.Rd
deleted file mode 100644
index 31df283..0000000
--- a/man/unnest_.Rd
+++ /dev/null
@@ -1,30 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/unnest.R
-\name{unnest_}
-\alias{unnest_}
-\title{Standard-evaluation version of \code{unnest}.}
-\usage{
-unnest_(data, unnest_cols, .drop = NA, .id = NULL, .sep = NULL)
-}
-\arguments{
-\item{data}{A data frame.}
-
-\item{unnest_cols}{Name of columns that needs to be unnested.}
-
-\item{.drop}{Should additional list columns be dropped? By default,
-\code{unnest} will drop them if unnesting the specified columns requires
-the rows to be duplicated.}
-
-\item{.id}{Data frame idenfier - if supplied, will create a new column
-with name \code{.id}, giving a unique identifer. This is most useful if
-the list column is named.}
-
-\item{.sep}{If non-\code{NULL}, the names of unnested data frame columns
-will combine the name of the original list-col with the names from
-nested data frame, separated by \code{.sep}.}
-}
-\description{
-This is a S3 generic.
-}
-\keyword{internal}
-
diff --git a/man/who.Rd b/man/who.Rd
index ef13f82..d90e841 100644
--- a/man/who.Rd
+++ b/man/who.Rd
@@ -2,15 +2,16 @@
 % Please edit documentation in R/data.R
 \docType{data}
 \name{who}
-\alias{population}
 \alias{who}
+\alias{population}
 \title{World Health Organization TB data}
 \format{A dataset with the variables
 \describe{
-  \item{country}{Country name}
-  \item{iso2,iso2}{2 & 3 letter ISO country codes}
-  \item{new_sp_m014 - new_rel_f65}{Counts of new TB cases recorded by group.
-   Column names encode three variables that describe the group (see details).}
+\item{country}{Country name}
+\item{iso2, iso3}{2 & 3 letter ISO country codes}
+\item{year}{Year}
+\item{new_sp_m014 - new_rel_f65}{Counts of new TB cases recorded by group.
+Column names encode three variables that describe the group (see details).}
 }}
 \source{
 \url{http://www.who.int/tb/country/data/download/en/}
@@ -26,15 +27,14 @@ Report, and accompanying global populations.
 }
 \details{
 The data uses the original codes given by the World Health
-  Organization. The column names for columns five through 60 are made by
-  combining \code{new_} to a code for method of diagnosis (\code{rel} =
-  relapse, \code{sn} = negative pulmonary smear, \code{sp} = positive
-  pulmonary smear, \code{ep} = extrapulmonary) to a code for gender
-  (\code{f} = female, \code{m} = male) to a code for age group (\code{014} =
-  0-14 yrs of age, \code{1524} = 15-24 years of age, \code{2534} = 25 to
-  34 years of age, \code{3544} = 35 to 44 years of age, \code{4554} = 45 to
-  54 years of age, \code{5564} = 55 to 64 years of age, \code{65} = 65 years
-  of age or older).
+Organization. The column names for columns five through 60 are made by
+combining \code{new_} to a code for method of diagnosis (\code{rel} =
+relapse, \code{sn} = negative pulmonary smear, \code{sp} = positive
+pulmonary smear, \code{ep} = extrapulmonary) to a code for gender
+(\code{f} = female, \code{m} = male) to a code for age group (\code{014} =
+0-14 yrs of age, \code{1524} = 15-24 years of age, \code{2534} = 25 to
+34 years of age, \code{3544} = 35 to 44 years of age, \code{4554} = 45 to
+54 years of age, \code{5564} = 55 to 64 years of age, \code{65} = 65 years
+of age or older).
 }
 \keyword{datasets}
-
diff --git a/src/RcppExports.cpp b/src/RcppExports.cpp
index 9d3560a..1cdb3b1 100644
--- a/src/RcppExports.cpp
+++ b/src/RcppExports.cpp
@@ -7,7 +7,7 @@ using namespace Rcpp;
 
 // fillDown
 SEXP fillDown(SEXP x);
-RcppExport SEXP tidyr_fillDown(SEXP xSEXP) {
+RcppExport SEXP _tidyr_fillDown(SEXP xSEXP) {
 BEGIN_RCPP
     Rcpp::RObject rcpp_result_gen;
     Rcpp::RNGScope rcpp_rngScope_gen;
@@ -18,7 +18,7 @@ END_RCPP
 }
 // fillUp
 SEXP fillUp(SEXP x);
-RcppExport SEXP tidyr_fillUp(SEXP xSEXP) {
+RcppExport SEXP _tidyr_fillUp(SEXP xSEXP) {
 BEGIN_RCPP
     Rcpp::RObject rcpp_result_gen;
     Rcpp::RNGScope rcpp_rngScope_gen;
@@ -29,7 +29,7 @@ END_RCPP
 }
 // melt_dataframe
 List melt_dataframe(const DataFrame& data, const IntegerVector& id_ind, const IntegerVector& measure_ind, String variable_name, String value_name, SEXP attrTemplate, bool factorsAsStrings, bool valueAsFactor, bool variableAsFactor);
-RcppExport SEXP tidyr_melt_dataframe(SEXP dataSEXP, SEXP id_indSEXP, SEXP measure_indSEXP, SEXP variable_nameSEXP, SEXP value_nameSEXP, SEXP attrTemplateSEXP, SEXP factorsAsStringsSEXP, SEXP valueAsFactorSEXP, SEXP variableAsFactorSEXP) {
+RcppExport SEXP _tidyr_melt_dataframe(SEXP dataSEXP, SEXP id_indSEXP, SEXP measure_indSEXP, SEXP variable_nameSEXP, SEXP value_nameSEXP, SEXP attrTemplateSEXP, SEXP factorsAsStringsSEXP, SEXP valueAsFactorSEXP, SEXP variableAsFactorSEXP) {
 BEGIN_RCPP
     Rcpp::RObject rcpp_result_gen;
     Rcpp::RNGScope rcpp_rngScope_gen;
@@ -48,7 +48,7 @@ END_RCPP
 }
 // simplifyPieces
 List simplifyPieces(ListOf<CharacterVector> pieces, int p, bool fillLeft);
-RcppExport SEXP tidyr_simplifyPieces(SEXP piecesSEXP, SEXP pSEXP, SEXP fillLeftSEXP) {
+RcppExport SEXP _tidyr_simplifyPieces(SEXP piecesSEXP, SEXP pSEXP, SEXP fillLeftSEXP) {
 BEGIN_RCPP
     Rcpp::RObject rcpp_result_gen;
     Rcpp::RNGScope rcpp_rngScope_gen;
@@ -59,3 +59,16 @@ BEGIN_RCPP
     return rcpp_result_gen;
 END_RCPP
 }
+
+static const R_CallMethodDef CallEntries[] = {
+    {"_tidyr_fillDown", (DL_FUNC) &_tidyr_fillDown, 1},
+    {"_tidyr_fillUp", (DL_FUNC) &_tidyr_fillUp, 1},
+    {"_tidyr_melt_dataframe", (DL_FUNC) &_tidyr_melt_dataframe, 9},
+    {"_tidyr_simplifyPieces", (DL_FUNC) &_tidyr_simplifyPieces, 3},
+    {NULL, NULL, 0}
+};
+
+RcppExport void R_init_tidyr(DllInfo *dll) {
+    R_registerRoutines(dll, NULL, CallEntries, NULL, NULL);
+    R_useDynamicSymbols(dll, FALSE);
+}
diff --git a/tests/testthat/test-complete.R b/tests/testthat/test-complete.R
index ad5f34d..4d09306 100644
--- a/tests/testthat/test-complete.R
+++ b/tests/testthat/test-complete.R
@@ -1,19 +1,15 @@
 context("complete")
 
 test_that("basic invocation works", {
-  df <- data_frame(x = 1:2, y = 1:2, z = 3:4)
+  df <- tibble(x = 1:2, y = 1:2, z = 3:4)
   out <- complete(df, x, y)
-
   expect_equal(nrow(out), 4)
   expect_equal(out$z, c(3, NA, NA, 4))
-
 })
 
 test_that("preserves grouping", {
-  df <- data_frame(x = 1:2, y = 1:2, z = 3:4) %>%
-    dplyr::group_by(x)
+  df <- tibble(x = 1:2, y = 1:2, z = 3:4) %>% dplyr::group_by(x)
   out <- complete(df, x, y)
-
   expect_s3_class(out, "grouped_df")
   expect_equal(dplyr::groups(out), dplyr::groups(df))
 })
diff --git a/tests/testthat/test-drop_na.R b/tests/testthat/test-drop_na.R
index 72ce217..970a6da 100644
--- a/tests/testthat/test-drop_na.R
+++ b/tests/testthat/test-drop_na.R
@@ -1,54 +1,42 @@
 context("drop_na")
 
 test_that("empty call drops every row", {
-  df <- data_frame(x = c(1, 2, NA), y = c("a", NA, "b"))
-  exp <- data_frame(x = c(1), y = c("a"))
+  df <- tibble(x = c(1, 2, NA), y = c("a", NA, "b"))
+  exp <- tibble(x = c(1), y = c("a"))
   res <- tidyr::drop_na(df)
   expect_equal(res, exp)
 })
 
 test_that("specifying (a) variables considers only that variable(s)", {
-  df <- data_frame(x = c(1, 2, NA), y = c("a", NA, "b"))
-  exp <- data_frame(x = c(1, 2), y = c("a", NA))
+  df <- tibble(x = c(1, 2, NA), y = c("a", NA, "b"))
+  exp <- tibble(x = c(1, 2), y = c("a", NA))
   res <- tidyr::drop_na(df, x)
   expect_equal(res, exp)
-  exp <- data_frame(x = c(1), y = c("a"))
+  exp <- tibble(x = c(1), y = c("a"))
   res <- tidyr::drop_na(df, x:y)
   expect_equal(res, exp)
 })
 
 test_that("groups are preserved", {
-  df <- data_frame(g = c("A", "A", "B"), x = c(1, 2, NA), y = c("a", NA, "b"))
-  exp <- data_frame(g = c("A", "B"), x = c(1, NA), y = c("a", "b"))
+  df <- tibble(g = c("A", "A", "B"), x = c(1, 2, NA), y = c("a", NA, "b"))
+  exp <- tibble(g = c("A", "B"), x = c(1, NA), y = c("a", "b"))
 
-  gdf <- dplyr::group_by_(df, "g")
-  gexp <- dplyr::group_by_(exp, "g")
+  gdf <- dplyr::group_by(df, "g")
+  gexp <- dplyr::group_by(exp, "g")
 
-  res <- tidyr::drop_na_(gdf, "y")
+  res <- tidyr::drop_na(gdf, y)
   expect_equal(res, gexp)
   expect_equal(dplyr::groups(res), dplyr::groups(gexp))
 })
 
-test_that("empty call drops every row (NSE version)", {
-  df <- data_frame(x = c(1, 2, NA), y = c("a", NA, "b"))
-  exp <- data_frame(x = c(1), y = c("a"))
-  res <- tidyr::drop_na_(df, character())
-  expect_equal(res, exp)
-})
-
-test_that("specifying (a) variable(s) considers only that variable(s) (NSE version)", {
-  df <- data_frame(x = c(1, 2, NA), y = c("a", NA, "b"))
-  exp <- data_frame(x = c(1, 2), y = c("a", NA))
-  res <- tidyr::drop_na_(df, "x")
-  expect_equal(res, exp)
-  exp <- data_frame(x = c(1), y = c("a"))
-  res <- tidyr::drop_na_(df, c("x", "y"))
-  expect_equal(res, exp)
+test_that("empty call drops every row", {
+  df <- tibble(x = c(1, 2, NA), y = c("a", NA, "b"))
+  res <- tidyr::drop_na(df)
+  expect_identical(res, tibble(x = 1, y = "a"))
 })
 
 test_that("errors are raised", {
-  df <- data_frame(x = c(1, 2, NA), y = c("a", NA, "b"))
-  expect_error(tidyr::drop_na_(df, NULL), "not a character vector")
-  expect_error(tidyr::drop_na_(df, 1), "not a character vector")
-  expect_error(tidyr::drop_na_(df, "z"), "Unknown column")
+  df <- tibble(x = c(1, 2, NA), y = c("a", NA, "b"))
+  expect_error(tidyr::drop_na(df, !! list()))
+  expect_error(tidyr::drop_na(df, "z"))
 })
diff --git a/tests/testthat/test-expand.R b/tests/testthat/test-expand.R
index a6fdbb8..2ab906b 100644
--- a/tests/testthat/test-expand.R
+++ b/tests/testthat/test-expand.R
@@ -3,7 +3,6 @@ context("expand")
 test_that("expand completes all values", {
   df <- data.frame(x = 1:2, y = 1:2)
   out <- expand(df, x, y)
-
   expect_equal(nrow(out), 4)
 })
 
@@ -13,26 +12,14 @@ test_that("multiple variables in one arg doesn't expand", {
   expect_equal(nrow(out), 2)
 })
 
-test_that("expand_ accepts character vectors", {
-  df <- data.frame(x = 1:2, y = 1:2)
-
-  expect_equal(names(expand_(df, c("x", "y"))), c("x", "y"))
-})
-
 test_that("nesting doesn't expand values", {
   df <- data.frame(x = 1:2, y = 1:2)
   expect_equal(expand(df, nesting(x, y)), df)
 })
 
-test_that("expand_ accepts list of formulas", {
-  df <- data.frame(x = 1:2, y = 1:2)
-  expect_equal(names(expand_(df, c(~ x, ~y))), c("x", "y"))
-})
-
 test_that("expand works with non-standard col names", {
-  df <- data_frame(` x ` = 1:2, `/y` = 1:2)
+  df <- tibble(` x ` = 1:2, `/y` = 1:2)
   out <- expand(df, ` x `, `/y`)
-
   expect_equal(nrow(out), 4)
 })
 
 test_that("expand accepts expressions", {
 })
 
 test_that("expand respects groups", {
-  df <- data_frame(
+  df <- tibble(
     a = c(1L, 1L, 2L),
     b = c(1L, 2L, 1L),
     c = c(2L, 1L, 1L)
@@ -50,21 +37,23 @@ test_that("expand respects groups", {
   out <- df %>% dplyr::group_by(a) %>% expand(b, c) %>% nest()
 
   expect_equal(out$data[[1]], crossing(b = 1:2, c = 1:2))
-  expect_equal(out$data[[2]], data_frame(b = 1L, c = 1L))
+  expect_equal(out$data[[2]], tibble(b = 1L, c = 1L))
 })
 
 test_that("preserves ordered factors", {
-  df <- data_frame(a = ordered("a"))
+  df <- tibble(a = ordered("a"))
   out <- expand(df, a)
-
   expect_equal(out$a, ordered("a"))
 })
 
-
 test_that("zero length inputs are automatically dropped", {
   tb <- tibble::tibble(x = 1:5)
-
   expect_equal(expand(tb, x, y = numeric()), tb)
   expect_equal(nesting(x = tb$x, y = numeric()), tb)
   expect_equal(crossing(x = tb$x, y = numeric()), tb)
 })
+
+test_that("expand() reconstructs input when dots is empty", {
+  expect_is(expand(mtcars), "data.frame")
+  expect_is(expand(as_tibble(mtcars)), "tbl_df")
+})
diff --git a/tests/testthat/test-extract.R b/tests/testthat/test-extract.R
index ad63350..3c2ea64 100644
--- a/tests/testthat/test-extract.R
+++ b/tests/testthat/test-extract.R
@@ -20,17 +20,15 @@ test_that("match failures give NAs", {
 })
 
 test_that("extract keeps characters as character", {
-  df <- data_frame(x = "X-1")
-
+  df <- tibble(x = "X-1")
   out <- extract(df, x, c("x", "y"), "(.)-(.)", convert = TRUE)
   expect_equal(out$x, "X")
   expect_equal(out$y, 1L)
 })
 
 test_that("groups are preserved", {
-  df <- data_frame(g = 1, x = "X1") %>% dplyr::group_by(g)
+  df <- tibble(g = 1, x = "X1") %>% dplyr::group_by(g)
   rs <- df %>% extract(x, c("x", "y"), "(.)(.)")
-
   expect_equal(class(df), class(rs))
   expect_equal(dplyr::groups(df), dplyr::groups(rs))
 })
diff --git a/tests/testthat/test-fill.R b/tests/testthat/test-fill.R
index e7ba1e3..95f5042 100644
--- a/tests/testthat/test-fill.R
+++ b/tests/testthat/test-fill.R
@@ -1,7 +1,7 @@
 context("fill")
 
 test_that("all missings left unchanged", {
-  df <- data_frame(
+  df <- tibble(
     lgl = c(NA, NA),
     int = c(NA_integer_, NA),
     dbl = c(NA_real_, NA),
@@ -16,21 +16,19 @@ test_that("all missings left unchanged", {
 })
 
 test_that("missings filled down from last non-missing", {
-  df <- data_frame(x = c(1, NA, NA))
-
+  df <- tibble(x = c(1, NA, NA))
   out <- fill(df, x)
   expect_equal(out$x, c(1, 1, 1))
 })
 
 test_that("missings filled up from last non-missing", {
-  df <- data_frame(x = c(NA, NA, 1))
-
+  df <- tibble(x = c(NA, NA, 1))
   out <- fill(df, x, .direction = "up")
   expect_equal(out$x, c(1, 1, 1))
 })
 
 test_that("missings filled down for each atomic vector", {
-  df <- data_frame(
+  df <- tibble(
     lgl = c(T, NA),
     int = c(1L, NA),
     dbl = c(1, NA),
@@ -39,7 +37,7 @@ test_that("missings filled down for each atomic vector", {
 
   )
 
-  out <- fill(df, everything())
+  out <- fill(df, tidyselect::everything())
   expect_equal(out$lgl, c(TRUE, TRUE))
   expect_equal(out$int, c(1L, 1L))
   expect_equal(out$dbl, c(1, 1))
@@ -48,7 +46,7 @@ test_that("missings filled down for each atomic vector", {
 })
 
 test_that("missings filled up for each vector", {
-  df <- data_frame(
+  df <- tibble(
     lgl = c(NA, T),
     int = c(NA, 1L),
     dbl = c(NA, 1),
@@ -56,7 +54,7 @@ test_that("missings filled up for each vector", {
     lst = list(NULL, 1:5)
   )
 
-  out <- fill(df, everything(), .direction = "up")
+  out <- fill(df, tidyselect::everything(), .direction = "up")
   expect_equal(out$lgl, c(TRUE, TRUE))
   expect_equal(out$int, c(1L, 1L))
   expect_equal(out$dbl, c(1, 1))
@@ -65,7 +63,7 @@ test_that("missings filled up for each vector", {
 })
 
 test_that("fill preserves attributes", {
-  df <- data_frame(x = factor(c(NA, "a", NA)))
+  df <- tibble(x = factor(c(NA, "a", NA)))
 
   out_d <- fill(df, x)
   out_u <- fill(df, x, .direction = "up")
@@ -75,7 +73,7 @@ test_that("fill preserves attributes", {
 })
 
 test_that("fill respects grouping", {
-  df <- data_frame(x = c(1, 1, 2), y = c(1, NA, NA))
+  df <- tibble(x = c(1, 1, 2), y = c(1, NA, NA))
   out <- df %>% dplyr::group_by(x) %>% fill(y)
   expect_equal(out$y, c(1, 1, NA))
 })
diff --git a/tests/testthat/test-gather.R b/tests/testthat/test-gather.R
index e6e4759..238ec8c 100644
--- a/tests/testthat/test-gather.R
+++ b/tests/testthat/test-gather.R
@@ -46,23 +46,22 @@ test_that("key preserves column ordering when factor_key = TRUE", {
 
 test_that("preserve class of input", {
   dat <- data.frame(x = 1:2)
-  dat %>% as_data_frame %>% gather %>% expect_is("tbl_df")
+  dat %>% as_tibble %>% gather %>% expect_is("tbl_df")
 })
 
-test_that("additional controls which columns to gather", {
-  data <- data_frame(a = 1, b1 = 1, b2 = 2, b3 = 3)
+test_that("additional inputs control which columns to gather", {
+  data <- tibble(a = 1, b1 = 1, b2 = 2, b3 = 3)
   out <- gather(data, key, val, b1:b3)
-
   expect_equal(names(out), c("a", "key", "val"))
   expect_equal(out$val, 1:3)
 })
 
 test_that("group_vars are kept where possible", {
-  df <- data_frame(x = 1, y = 1, z = 1)
+  df <- tibble(x = 1, y = 1, z = 1)
 
   # Can't keep
   out <- df %>% dplyr::group_by(x) %>% gather(key, val, x:z)
-  expect_equal(out, data_frame(key = c("x", "y", "z"), val = 1))
+  expect_equal(out, tibble(key = c("x", "y", "z"), val = 1))
 
   # Can keep
   out <- df %>% dplyr::group_by(x) %>% gather(key, val, y:z)
@@ -117,22 +116,18 @@ test_that("varying attributes are dropped with a warning", {
 test_that("gather preserves OBJECT bit on e.g. POSIXct", {
   df <- data.frame(now = Sys.time())
   out <- gather(df, k, v)
-
-  object_bit_set <- function(x) {
-    grepl("\\[OBJ", capture.output(.Internal(inspect(x)))[1])
-  }
-  expect_true(object_bit_set(out$v))
+  expect_true(is.object(out$v))
 })
 
 test_that("can handle list-columns", {
-  df <- data_frame(x = 1:2, y = list("a", TRUE))
+  df <- tibble(x = 1:2, y = list("a", TRUE))
   out <- gather(df, k, v, -y)
 
   expect_identical(out$y, df$y)
 })
 
 test_that("can gather list-columns", {
-  df <- data_frame(x = 1:2, y = list(1, 2), z = list(3, 4))
+  df <- tibble(x = 1:2, y = list(1, 2), z = list(3, 4))
   out <- gather(df, k, v, y:z)
   expect_equal(out$v, list(1, 2, 3, 4))
 })
diff --git a/tests/testthat/test-id.R b/tests/testthat/test-id.R
index 373e6a0..814aca8 100644
--- a/tests/testthat/test-id.R
+++ b/tests/testthat/test-id.R
@@ -2,7 +2,6 @@ context("id")
 
 test_that("drop preserves count of factor levels", {
   x <- factor(, levels = c("a", "b"))
-
   expect_equal(id_var(x), structure(integer(), n = 2))
   expect_equal(id(data.frame(x)), structure(integer(), n = 2))
 })
diff --git a/tests/testthat/test-nest.R b/tests/testthat/test-nest.R
index c47886c..d6f30b0 100644
--- a/tests/testthat/test-nest.R
+++ b/tests/testthat/test-nest.R
@@ -1,44 +1,38 @@
 context("nest")
 
 test_that("nest turns grouped values into one list-df", {
-  df <- data_frame(x = c(1, 1, 1), y = 1:3)
+  df <- tibble(x = c(1, 1, 1), y = 1:3)
   out <- nest(df, y)
   expect_equal(out$x, 1)
   expect_equal(out$data, list(data.frame(y = 1:3)))
 })
 
 test_that("can control output column name", {
-  df <- data_frame(x = c(1, 1, 1), y = 1:3)
+  df <- tibble(x = c(1, 1, 1), y = 1:3)
   out <- nest(df, y, .key = y)
-
   expect_equal(names(out), c("x", "y"))
 })
 
 test_that("nest doesn't include grouping vars in nested data", {
-  out <- data_frame(x = c(1, 1, 1), y = 1:3) %>%
-    dplyr::group_by(x) %>%
-    nest()
-
+  df <- tibble(x = c(1, 1, 1), y = 1:3)
+  out <- df %>% dplyr::group_by(x) %>% nest()
   expect_equal(out$data[[1]], data.frame(y = 1:3))
 })
 
 test_that("can restrict variables in grouped nest", {
-  df <- data_frame(x = 1, y = 2, z = 3) %>% dplyr::group_by(x)
-
+  df <- tibble(x = 1, y = 2, z = 3) %>% dplyr::group_by(x)
   out <- df %>% nest(y)
   expect_equal(names(out$data[[1]]), "y")
 })
 
 test_that("puts data into the correct row", {
-  df <- data_frame(x = 1:3, y = c("B", "A", "A"))
-  out <- nest(df, x) %>%
-    dplyr::filter(y == "B")
-
+  df <- tibble(x = 1:3, y = c("B", "A", "A"))
+  out <- nest(df, x) %>% dplyr::filter(y == "B")
   expect_equal(out$data[[1]]$x, 1)
 })
 
 test_that("nesting everything yields a simple data frame", {
-  df <- data_frame(x = 1:3, y = c("B", "A", "A"))
+  df <- tibble(x = 1:3, y = c("B", "A", "A"))
   out <- nest(df, x, y)
   expect_equal(out$data, list(df))
 })
diff --git a/tests/testthat/test-replace_na.R b/tests/testthat/test-replace_na.R
index f7e1143..ae82fe7 100644
--- a/tests/testthat/test-replace_na.R
+++ b/tests/testthat/test-replace_na.R
@@ -1,15 +1,13 @@
 context("replace_na")
 
 test_that("empty call does nothing", {
-  df <- data_frame(x = c(1, NA))
+  df <- tibble(x = c(1, NA))
   out <- replace_na(df)
-
   expect_equal(out, df)
 })
 
 test_that("missing values are replaced", {
-  df <- data_frame(x = c(1, NA))
+  df <- tibble(x = c(1, NA))
   out <- replace_na(df, list(x = 0))
-
   expect_equal(out$x, c(1, 0))
 })
diff --git a/tests/testthat/test-separate.R b/tests/testthat/test-separate.R
index 9ef6225..72929ad 100644
--- a/tests/testthat/test-separate.R
+++ b/tests/testthat/test-separate.R
@@ -1,22 +1,20 @@
 context("Separate")
 
 test_that("missing values in input are missing in output", {
-  df <- data_frame(x = c(NA, "a b"))
+  df <- tibble(x = c(NA, "a b"))
   out <- separate(df, x, c("x", "y"))
   expect_equal(out$x, c(NA, "a"))
   expect_equal(out$y, c(NA, "b"))
 })
 
 test_that("integer values specify position between characters", {
-  df <- data_frame(x = c(NA, "ab", "cd"))
-
+  df <- tibble(x = c(NA, "ab", "cd"))
   out <- separate(df, x, c("x", "y"), 1)
   expect_equal(out$x, c(NA, "a", "c"))
 })
 
 test_that("convert produces integers etc", {
-  df <- data_frame(x = "1-1.5-FALSE")
-
+  df <- tibble(x = "1-1.5-FALSE")
   out <- separate(df, x, c("x", "y", "z"), "-", convert = TRUE)
   expect_equal(out$x, 1L)
   expect_equal(out$y, 1.5)
@@ -24,15 +22,14 @@ test_that("convert produces integers etc", {
 })
 
 test_that("convert keeps characters as character", {
-  df <- data_frame(x = "X-1")
-
+  df <- tibble(x = "X-1")
   out <- separate(df, x, c("x", "y"), "-", convert = TRUE)
   expect_equal(out$x, "X")
   expect_equal(out$y, 1L)
 })
 
 test_that("too many pieces dealt with as requested", {
-  df <- data_frame(x = c("a b", "a b c"))
+  df <- tibble(x = c("a b", "a b c"))
 
   expect_warning(separate(df, x, c("x", "y")), "Too many")
 
@@ -46,7 +43,7 @@ test_that("too many pieces dealt with as requested", {
 })
 
 test_that("too few pieces dealt with as requested", {
-  df <- data_frame(x = c("a b", "a b c"))
+  df <- tibble(x = c("a b", "a b c"))
 
   expect_warning(separate(df, x, c("x", "y", "z")), "Too few")
 
@@ -60,17 +57,15 @@ test_that("too few pieces dealt with as requested", {
 })
 
 test_that("preserves grouping", {
-  df <- data_frame(g = 1, x = "a:b") %>% dplyr::group_by(g)
+  df <- tibble(g = 1, x = "a:b") %>% dplyr::group_by(g)
   rs <- df %>% separate(x, c("a", "b"))
-
   expect_equal(class(df), class(rs))
   expect_equal(dplyr::groups(df), dplyr::groups(rs))
 })
 
 test_that("drops grouping when needed", {
-  df <- data_frame(x = "a:b") %>% dplyr::group_by(x)
+  df <- tibble(x = "a:b") %>% dplyr::group_by(x)
   rs <- df %>% separate(x, c("a", "b"))
-
   expect_equal(rs$a, "a")
   expect_equal(dplyr::groups(rs), NULL)
 })
@@ -79,17 +74,17 @@ test_that("drops grouping when needed", {
 context("Separate rows")
 
 test_that("can handle collapsed rows", {
-  df <- data_frame(x = 1:3, y = c("a", "d,e,f", "g,h"))
+  df <- tibble(x = 1:3, y = c("a", "d,e,f", "g,h"))
   expect_equal(separate_rows(df, y)$y, unlist(strsplit(df$y, "\\,")))
 })
 
 test_that("default pattern does not split decimals in nested strings", {
-  df <- dplyr::data_frame(x = 1:3, y = c("1", "1.0,1.1", "2.1"))
+  df <- dplyr::tibble(x = 1:3, y = c("1", "1.0,1.1", "2.1"))
   expect_equal(separate_rows(df, y)$y, unlist(strsplit(df$y, ",")))
 })
 
 test_that("preserves grouping", {
-  df <- data_frame(g = 1, x = "a:b") %>% dplyr::group_by(g)
+  df <- tibble(g = 1, x = "a:b") %>% dplyr::group_by(g)
   rs <- df %>% separate_rows(x)
 
   expect_equal(class(df), class(rs))
@@ -97,7 +92,7 @@ test_that("preserves grouping", {
 })
 
 test_that("drops grouping when needed", {
-  df <- data_frame(x = 1, y = "a:b") %>% dplyr::group_by(x, y)
+  df <- tibble(x = 1, y = "a:b") %>% dplyr::group_by(x, y)
 
   out <- df %>% separate_rows(y)
   expect_equal(out$y, c("a", "b"))
@@ -108,7 +103,7 @@ test_that("drops grouping when needed", {
 })
 
 test_that("convert produces integers etc", {
-  df <- data_frame(x = "1,2,3", y = "T,F,T", z = "a,b,c")
+  df <- tibble(x = "1,2,3", y = "T,F,T", z = "a,b,c")
 
   out <- separate_rows(df, x, y, z, convert = TRUE)
   expect_equal(class(out$x), "integer")
diff --git a/tests/testthat/test-spread.R b/tests/testthat/test-spread.R
index 64b3af3..3f32b2a 100644
--- a/tests/testthat/test-spread.R
+++ b/tests/testthat/test-spread.R
@@ -2,31 +2,31 @@ context("Spread")
 library(dplyr, warn.conflicts = FALSE)
 
 test_that("order doesn't matter", {
+  df1 <- data.frame(x = c("a", "b"), y = 1:2)
+  df2 <- data.frame(x = c("b", "a"), y = 2:1)
+  one <- spread(df1, x, y)
+  two <- spread(df2, x, y) %>% select(a, b) %>% arrange(a, b)
+  expect_identical(one, two)
 
-  one <- data.frame(x = c("a", "b"), y = 1:2) %>% spread(x, y)
-  two <- data.frame(x = c("b", "a"), y = 2:1) %>% spread(x, y) %>%
-    select(a, b) %>% arrange(a, b)
-  expect_equal(one, two)
-
-  one <- data.frame(z = c("b", "a"), x = c("a", "b"), y = 1:2) %>%
-    spread(x, y) %>% arrange(z)
-  two <- data.frame(z = c("a", "b"), x = c("b", "a"), y = 2:1) %>%
-    spread(x, y)
-  expect_equal(one, two)
+  df1 <- data.frame(z = c("b", "a"), x = c("a", "b"), y = 1:2)
+  df2 <- data.frame(z = c("a", "b"), x = c("b", "a"), y = 2:1)
+  one <- spread(df1, x, y) %>% arrange(z)
+  two <- spread(df2, x, y)
+  expect_identical(one, two)
 })
 
 test_that("convert turns strings into integers", {
-  df <- data_frame(key = "a", value = "1")
+  df <- tibble(key = "a", value = "1")
   out <- spread(df, key, value, convert = TRUE)
-
   expect_is(out$a, "integer")
 })
 
 test_that("duplicate values for one key is an error", {
   df <- data.frame(x = c("a", "b", "b"), y = c(1, 2, 2), z = c(1, 2, 2))
-
-  expect_error(df %>% spread(x, y), "Duplicate identifiers for rows (2, 3)",
-    fixed = TRUE)
+  expect_error(spread(df, x, y),
+    "Duplicate identifiers for rows (2, 3)",
+    fixed = TRUE
+  )
 })
 
 test_that("factors are spread into columns (#35)", {
@@ -41,7 +41,6 @@ test_that("factors are spread into columns (#35)", {
   expect_true(all(vapply(out, is.factor, logical(1))))
   expect_identical(levels(out$a), levels(data$z))
   expect_identical(levels(out$b), levels(data$z))
-
 })
 
 test_that("drop = FALSE keeps missing combinations (#25)", {
@@ -76,7 +75,7 @@ test_that("preserve class of input", {
     y = c("c", "d", "c", "d"),
     z = c("w", "x", "y", "z")
   )
-  dat %>% as_data_frame %>% spread(x, z) %>% expect_is("tbl_df")
+  dat %>% as_tibble() %>% spread(x, z) %>% expect_is("tbl_df")
 })
 
 test_that("dates are spread into columns (#62)", {
@@ -146,7 +145,7 @@ test_that("complex values are preserved  (#134)", {
 })
 
 test_that("can spread with nested columns", {
-  df <- tibble::data_frame(x = c("a", "a"), y = 1:2, z = list(1:2, 3:5))
+  df <- tibble::tibble(x = c("a", "a"), y = 1:2, z = list(1:2, 3:5))
   out <- spread(df, x, y)
 
   expect_equal(out$a, 1:2)
@@ -154,7 +153,7 @@ test_that("can spread with nested columns", {
 })
 
 test_that("spread gives one column when no existing non-spread vars", {
-  df <- data_frame(
+  df <- tibble(
     key = c("a", "b", "c"),
     value = c(1, 2, 3)
   )
@@ -170,12 +169,12 @@ test_that("grouping vars are kept where possible", {
   # Can't keep
   df <- data.frame(key = c("a", "b"), value = 1:2)
   out <- df %>% group_by(key) %>% spread(key, value)
-  expect_equal(out, data_frame(a = 1L, b = 2L))
+  expect_equal(out, tibble(a = 1L, b = 2L))
 })
 
 
 test_that("col names never contains NA", {
-  df <- data_frame(x = c(1, NA), y = 1:2)
+  df <- tibble(x = c(1, NA), y = 1:2)
   df %>%
     spread(x, y) %>%
     expect_named(c("1", "<NA>"))
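[Editorial sketch, not part of the patch: the `test-spread.R` conversions above all exercise the same core behaviour. A minimal illustration, assuming tidyr >= 0.7 and tibble are attached:]

```r
library(tidyr)
library(tibble)

# Each distinct value of the key column `x` becomes its own column,
# filled with the matching entries of the value column `y`.
df <- tibble(x = c("a", "b"), y = 1:2)
spread(df, key = x, value = y)
# -> one row with columns a = 1 and b = 2

# Duplicate key/row combinations cannot be disambiguated and error out,
# which is what the expect_error() test above asserts:
df2 <- tibble(x = c("a", "b", "b"), y = c(1, 2, 2), z = c(1, 2, 2))
# spread(df2, x, y)  # Error: Duplicate identifiers for rows (2, 3)
```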
diff --git a/tests/testthat/test-underscored.R b/tests/testthat/test-underscored.R
new file mode 100644
index 0000000..b29729e
--- /dev/null
+++ b/tests/testthat/test-underscored.R
@@ -0,0 +1,117 @@
+context("Deprecated SE variants")
+
+test_that("complete_()", {
+  df <- tibble(x = 1:2, y = 1:2, z = 3:4)
+  out <- complete_(df, list("x", ~y))
+  expect_identical(nrow(out), 4L)
+  expect_identical(out$z, c(3L, NA, NA, 4L))
+})
+
+test_that("drop_na_() ", {
+  # Specifying (a) variable(s) considers only that variable(s)
+  df <- tibble(x = c(1, 2, NA), y = c("a", NA, "b"))
+  exp <- tibble(x = c(1, 2), y = c("a", NA))
+  res <- tidyr::drop_na_(df, "x")
+  expect_identical(res, exp)
+
+  exp <- tibble(x = c(1), y = c("a"))
+  res <- tidyr::drop_na(df, c("x", "y"))
+  expect_identical(res, exp)
+
+  # Empty call drops every row
+  df <- tibble(x = c(1, 2, NA), y = c("a", NA, "b"))
+  res <- tidyr::drop_na_(df, character())
+  expect_identical(res, tibble(x = 1, y = "a"))
+})
+
+test_that("drop_na_() works with non-syntactic names", {
+  df <- tibble(`non-syntactic` = 1)
+  expect_identical(drop_na_(df, "non-syntactic"), drop_na(df, `non-syntactic`))
+})
+
+test_that("expand_()", {
+  df <- data.frame(x = 1:2, y = 1:2)
+  out <- expand_(df, list("x", ~y))
+  expect_identical(names(out), c("x", "y"))
+  expect_identical(nrow(out), 4L)
+})
+
+test_that("extract_()", {
+  df <- data.frame(x = c("a.b", "a.d", "b.c"))
+  out <- df %>% extract_("x", "A")
+  expect_identical(out$A, c("a", "a", "b"))
+})
+
+test_that("fill_()", {
+  df <- tibble(x = c(1, NA, NA))
+  out <- fill_(df, "x")
+  expect_identical(out$x, c(1, 1, 1))
+})
+
+test_that("fill_() works with non-syntactic names", {
+  df <- tibble(`non-syntactic` = 1)
+  expect_identical(fill_(df, "non-syntactic"), fill(df, `non-syntactic`))
+})
+
+test_that("gather_()", {
+  df <- data.frame(x = 1:5, y = 6:10)
+  out <- gather_(df, "key", "val", c("x", "y"))
+  expect_identical(nrow(out), 10L)
+  expect_identical(names(out), c("key", "val"))
+})
+
+test_that("gather_() works with non-syntactic names", {
+  df <- tibble(`non-syntactic` = 1)
+  expect_identical(
+    gather(df, key, val, `non-syntactic`),
+    gather_(df, "key", "val", "non-syntactic")
+  )
+})
+
+test_that("nest_()", {
+  df <- tibble(x = c(1, 1, 1), y = 1:3)
+  expect_identical(nest_(df, "y", "y"), nest(df, y, .key = y))
+})
+
+test_that("separate_()", {
+  df <- tibble(x = c(NA, "a b"))
+  out <- separate_(df, "x", c("x", "y"))
+  expect_identical(out$x, c(NA, "a"))
+  expect_identical(out$y, c(NA, "b"))
+})
+
+test_that("separate() works with non-syntactic names", {
+  df <- tibble(`non-syntactic` = "1,2")
+  into <- c("non", "syntactic")
+  expect_identical(separate_(df, "non-syntactic", into), separate(df, `non-syntactic`, into))
+})
+
+test_that("separate_rows() works with non-syntactic names", {
+  df <- tibble(`non-syntactic` = 1)
+  expect_identical(separate_rows_(df, "non-syntactic"), separate_rows(df, `non-syntactic`))
+})
+
+test_that("spread_()", {
+  df1 <- data.frame(x = c("a", "b"), y = 1:2)
+  df2 <- data.frame(x = c("b", "a"), y = 2:1)
+  one <- spread_(df1, "x", ~y)
+  two <- spread_(df2, "x", ~y) %>% select(a, b) %>% arrange(a, b)
+  expect_identical(one, two)
+})
+
+test_that("unite_()", {
+  df <- tibble(x = "a", y = "b")
+  out <- unite_(df, "z", c("x", "y"))
+  expect_named(out, "z")
+  expect_identical(out$z, "a_b")
+})
+
+test_that("unite_() works with non-syntactic names", {
+  df <- tibble(x = 1, `non-syntactic` = 1)
+  expect_identical(unite_(df, "x", "non-syntactic"), unite(df, x, `non-syntactic`))
+})
+
+test_that("unnest_()", {
+  df <- tibble(x = list(1, 2:3, 4:10))
+  expect_identical(unnest_(df)$x, dbl(1:10))
+})
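[Editorial sketch, not part of the patch: the new `test-underscored.R` file above covers tidyr's deprecated standard-evaluation (SE) verbs. Each underscored verb takes column names as strings or formulas instead of bare names; a minimal sketch of the equivalence, assuming tidyr 0.7.x:]

```r
library(tidyr)
df <- tibble::tibble(x = 1:5, y = 6:10)

# Tidy-eval interface: bare column names.
out1 <- gather(df, key, val, x, y)

# Deprecated SE interface: column names passed as strings.
out2 <- gather_(df, key_col = "key", value_col = "val",
                gather_cols = c("x", "y"))

nrow(out2)   # 10 (two gathered columns x five rows)
names(out2)  # "key" "val"
```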
diff --git a/tests/testthat/test-unite.R b/tests/testthat/test-unite.R
index 4121d58..d5b3814 100644
--- a/tests/testthat/test-unite.R
+++ b/tests/testthat/test-unite.R
@@ -1,36 +1,30 @@
 context("unite")
 
 test_that("unite pastes columns together & removes old col", {
-  df <- data_frame(x = "a", y = "b")
+  df <- tibble(x = "a", y = "b")
   out <- unite(df, z, x:y)
-
   expect_equal(names(out), "z")
   expect_equal(out$z, "a_b")
 })
 
 test_that("unite does not remove new col in case of name clash", {
-  df <- data_frame(x = "a", y = "b")
+  df <- tibble(x = "a", y = "b")
   out <- unite(df, x, x:y)
-
   expect_equal(names(out), "x")
   expect_equal(out$x, "a_b")
 })
 
 test_that("unite preserves grouping", {
-  df <- data_frame(g = 1, x = "a") %>% dplyr::group_by(g)
+  df <- tibble(g = 1, x = "a") %>% dplyr::group_by(g)
   rs <- df %>% unite(x, x)
-
-
   expect_equal(df, rs)
   expect_equal(class(df), class(rs))
   expect_equal(dplyr::groups(df), dplyr::groups(rs))
 })
 
-
 test_that("drops grouping when needed", {
-  df <- data_frame(g = 1, x = "a") %>% dplyr::group_by(g)
+  df <- tibble(g = 1, x = "a") %>% dplyr::group_by(g)
   rs <- df %>% unite(gx, g, x)
-
   expect_equal(rs$gx, "1_a")
   expect_equal(dplyr::groups(rs), NULL)
 })
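[Editorial sketch, not part of the patch: for context on the `unite()` tests above, the verb's behaviour in brief, assuming tidyr >= 0.7:]

```r
library(tidyr)
df <- tibble::tibble(g = 1, x = "a")

# Pastes the listed columns into one new column, dropping the originals.
unite(df, gx, g, x)                  # gx = "1_a"
unite(df, gx, g, x, sep = "-")       # custom separator: gx = "1-a"
unite(df, gx, g, x, remove = FALSE)  # keeps g and x alongside gx
```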
diff --git a/tests/testthat/test-unnest.R b/tests/testthat/test-unnest.R
index 998fd4c..e21414a 100644
--- a/tests/testthat/test-unnest.R
+++ b/tests/testthat/test-unnest.R
@@ -1,44 +1,44 @@
 context("unnest")
 
 test_that("unnesting combines atomic vectors", {
-  df <- data_frame(x = list(1, 2:3, 4:10))
+  df <- tibble(x = list(1, 2:3, 4:10))
   expect_equal(unnest(df)$x, 1:10)
 })
 
-test_that("vector unnest preseves names", {
-  df <- data_frame(x = list(1, 2:3), y = list("a", c("b", "c")))
+test_that("vector unnest preserves names", {
+  df <- tibble(x = list(1, 2:3), y = list("a", c("b", "c")))
   out <- unnest(df)
   expect_named(out, c("x", "y"))
 })
 
 test_that("unnesting row binds data frames", {
-  df <- data_frame(x = list(
-    data_frame(x = 1:5),
-    data_frame(x = 6:10)
+  df <- tibble(x = list(
+    tibble(x = 1:5),
+    tibble(x = 6:10)
   ))
   expect_equal(unnest(df)$x, 1:10)
 })
 
 test_that("elements must all be of same type", {
-  df <- data_frame(x = list(1, "a"))
-  expect_error(unnest(df), "(incompatible type)|(numeric to character)")
+  df <- tibble(x = list(1, "a"))
+  expect_error(unnest(df), "(incompatible type)|(numeric to character)|(character to numeric)")
 })
 
 test_that("can't combine vectors and data frames", {
-  df <- data_frame(x = list(1, data_frame(1)))
+  df <- tibble(x = list(1, tibble(1)))
   expect_error(unnest(df), "a list of vectors or a list of data frames")
 })
 
 test_that("multiple columns must be same length", {
-  df <- data_frame(x = list(1), y = list(1:2))
+  df <- tibble(x = list(1), y = list(1:2))
   expect_error(unnest(df), "same number of elements")
 
-  df <- data_frame(x = list(1), y = list(data_frame(x = 1:2)))
+  df <- tibble(x = list(1), y = list(tibble(x = 1:2)))
   expect_error(unnest(df), "same number of elements")
 })
 
 test_that("nested is split as a list (#84)", {
-  df <- data_frame(x = 1:3, y = list(1,2:3,4), z = list(5,6:7,8))
+  df <- tibble(x = 1:3, y = list(1,2:3,4), z = list(5,6:7,8))
   expect_warning(out <- unnest(df, y, z), NA)
   expect_equal(out$x, c(1, 2, 2, 3))
   expect_equal(out$y, unlist(df$y))
@@ -46,38 +46,57 @@ test_that("nested is split as a list (#84)", {
 })
 
 test_that("unnest has mutate semantics", {
-  df <- data_frame(x = 1:3, y = list(1,2:3,4))
-  out <- df %>% unnest(z = lapply(y, `+`, 1))
+  df <- tibble(x = 1:3, y = list(1,2:3,4))
+  out <- df %>% unnest(z = map(y, `+`, 1))
 
   expect_equal(out$z, 2:5)
 })
 
 test_that(".id creates vector of names for vector unnest", {
-  df <- data_frame(x = 1:2, y = list(a = 1, b = 1:2))
+  df <- tibble(x = 1:2, y = list(a = 1, b = 1:2))
+  out <- unnest(df, .id = "name")
+
+  expect_equal(out$name, c("a", "b", "b"))
+})
+
+test_that(".id creates vector of names for grouped vector unnest", {
+  df <- data_frame(x = 1:2, y = list(a = 1, b = 1:2)) %>%
+    dplyr::group_by(x)
   out <- unnest(df, .id = "name")
 
   expect_equal(out$name, c("a", "b", "b"))
 })
 
 test_that(".id creates vector of names for data frame unnest", {
+  df <- tibble(x = 1:2, y = list(
+    a = tibble(y = 1),
+    b = tibble(y = 1:2)
+  ))
+  out <- unnest(df, .id = "name")
+
+  expect_equal(out$name, c("a", "b", "b"))
+})
+
+test_that(".id creates vector of names for grouped data frame unnest", {
   df <- data_frame(x = 1:2, y = list(
     a = data_frame(y = 1),
     b = data_frame(y = 1:2)
-  ))
+  )) %>%
+    dplyr::group_by(x)
   out <- unnest(df, .id = "name")
 
   expect_equal(out$name, c("a", "b", "b"))
 })
 
 test_that("can use non-syntactic names", {
-  out <- data_frame("foo bar" = list(1:2, 3)) %>% unnest()
+  out <- tibble("foo bar" = list(1:2, 3)) %>% unnest()
 
   expect_named(out, "foo bar")
 })
 
 test_that("sep combines column names", {
-  ldf <- list(data_frame(x = 1))
-  data_frame(x = ldf, y = ldf) %>%
+  ldf <- list(tibble(x = 1))
+  tibble(x = ldf, y = ldf) %>%
     unnest(.sep = "_") %>%
     expect_named(c("x_x", "y_x"))
 })
@@ -85,21 +104,21 @@ test_that("sep combines column names", {
 # Drop --------------------------------------------------------------------
 
 test_that("unnest drops list cols if expanding", {
-  df <- data_frame(x = 1:2, y = list(3, 4), z = list(5, 6:7))
+  df <- tibble(x = 1:2, y = list(3, 4), z = list(5, 6:7))
   out <- df %>% unnest(z)
 
   expect_equal(names(out), c("x", "z"))
 })
 
 test_that("unnest keeps list cols if not expanding", {
-  df <- data_frame(x = 1:2, y = list(3, 4), z = list(5, 6:7))
+  df <- tibble(x = 1:2, y = list(3, 4), z = list(5, 6:7))
   out <- df %>% unnest(y)
 
   expect_equal(names(out), c("x", "z", "y"))
 })
 
 test_that("unnest respects .drop_lists", {
-  df <- data_frame(x = 1:2, y = list(3, 4), z = list(5, 6:7))
+  df <- tibble(x = 1:2, y = list(3, 4), z = list(5, 6:7))
 
   expect_equal(df %>% unnest(y, .drop = TRUE) %>% names(), c("x", "y"))
   expect_equal(df %>% unnest(z, .drop = FALSE) %>% names(), c("x", "y", "z"))
@@ -107,7 +126,7 @@ test_that("unnest respects .drop_lists", {
 })
 
 test_that("grouping is preserved", {
-  df <- data_frame(g = 1, x = list(1:3)) %>% dplyr::group_by(g)
+  df <- tibble(g = 1, x = list(1:3)) %>% dplyr::group_by(g)
   rs <- df %>% unnest(x)
 
   expect_equal(rs$x, 1:3)
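[Editorial sketch, not part of the patch: the `test-unnest.R` conversions above center on one behaviour, illustrated minimally, assuming tidyr >= 0.7:]

```r
library(tidyr)
df <- tibble::tibble(x = 1:2, y = list(1, 2:3))

# Each element of the list-column is expanded to its own row;
# the other columns are duplicated to match.
out <- unnest(df, y)
out$x  # 1 2 2
out$y  # 1 2 3
```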
diff --git a/vignettes/tidy-data.Rmd b/vignettes/tidy-data.Rmd
index a4e2b0a..012e6f8 100644
--- a/vignettes/tidy-data.Rmd
+++ b/vignettes/tidy-data.Rmd
@@ -1,7 +1,5 @@
 ---
 title: "Tidy data"
-author: "Hadley Wickham"
-date: "`r Sys.Date()`"
 output: rmarkdown::html_vignette
 vignette: >
   %\VignetteIndexEntry{Tidy data}
@@ -90,7 +88,7 @@ Tidy data is a standard way of mapping the meaning of a dataset to its structure
 
 3.  Each type of observational unit forms a table.
 
-This is Codd's 3rd normal form, but with the constraints framed in statistical language, and the focus put on a single dataset rather than the many connected datasets common in relational databases. **Messy data** is any other other arrangement of the data.
+This is Codd's 3rd normal form, but with the constraints framed in statistical language, and the focus put on a single dataset rather than the many connected datasets common in relational databases. **Messy data** is any other arrangement of the data.
 
 Tidy data makes it easy for an analyst or a computer to extract needed variables because it provides a standard way of structuring a dataset. Compare the different versions of the pregnancy data: in the messy version you need to use different strategies to extract different variables. This slows analysis and invites errors. If you consider how many data analysis operations involve all of the values in a variable (every aggregation function), you can see how important it is to extract the [...]
 
@@ -134,7 +132,7 @@ pew %>%
 
 This form is tidy because each column represents a variable and each row represents an observation, in this case a demographic unit corresponding to a combination of `religion` and `income`.
 
-This format is also used to record regularly spaced observations over time. For example, the Billboard dataset shown below records the date a song first entered the billboard top 100. It has variables for `artist`, `track`, `date.entered`, `rank` and `week`. The rank in each week after it enters the top 100 is recorded in 75 columns, `wk1` to `wk75`. This form of storage is not tidy, but it is useful for data entry. It reduces duplication since otherwise each song in each week would need [...]
+This format is also used to record regularly spaced observations over time. For example, the Billboard dataset shown below records the date a song first entered the billboard top 100. It has variables for `artist`, `track`, `date.entered`, `rank` and `week`. The rank in each week after it enters the top 100 is recorded in 75 columns, `wk1` to `wk75`. This form of storage is not tidy, but it is useful for data entry. It reduces duplication since otherwise each song in each week would need [...]
 
 ```{r}
 billboard <- tbl_df(read.csv("billboard.csv", stringsAsFactors = FALSE))

-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/debian-med/r-cran-tidyr.git



More information about the debian-med-commit mailing list