
tradestatistics package #274

Closed
11 of 25 tasks
pachadotdev opened this issue Jan 3, 2019 · 79 comments

@pachadotdev

pachadotdev commented Jan 3, 2019

Submitting Author: Pachá (@pachamaltese)
Repository: https://github.com/tradestatistics/tradestatistics
Version submitted: 0.1
Editor: @maelle
Reviewer 1: TBD
Reviewer 2: TBD
Archive: TBD
Version accepted: TBD


  • Paste the full DESCRIPTION file inside a code block below:
Package: tradestatistics
Type: Package
Title: Open Trade Statistics API Wrapper and Utility Program
Version: 0.1
Authors@R: c(
  person("Mauricio", "Vargas S.", role = c("aut", "cre", "cph"), email = "mvargas@dcc.uchile.cl"),
  person("Joshua", "Kunst", role = c("ctb")),
  person("Amanda", "Dobbyn", role = c("ctb")),
  person("Jorge", "Cimentada", role = c("ctb")),
  person(family = "UN Comtrade", role = c("dtc"))
  )
URL: https://CRAN.R-project.org/package=tradestatistics
BugReports: https://github.com/tradestatistics/ts-r-package/issues
Description: Access Open Trade Statistics API from R to download international trade data.
License: GPL-3
LazyData: TRUE
Depends:
  R (>= 3.2)
Imports:
  rlang,
  magrittr,
  dplyr,
  stringr,
  crul,
  purrr,
  jsonlite
RoxygenNote: 6.1.1
Suggests: 
  knitr,
  rmarkdown,
  testthat,
  covr,
  styler,
  pkgdown,
  vcr
VignetteBuilder: knitr

Scope

  • Please indicate which category or categories from our package fit policies this package falls under: (Please check an appropriate box below. If you are unsure, we suggest you make a pre-submission inquiry.):

    • data retrieval
    • data extraction
    • database access
    • data munging
    • data deposition
    • reproducibility
    • geospatial data
    • text analysis
  • Explain how and why the package falls under these categories (briefly, 1-2 sentences):

The package connects to an API and performs the calls for the end user, simplifying access to imports/exports data and metrics such as percentage increase/decrease.

  • Who is the target audience and what are scientific applications of this package?

Non-expert users who work with international trade data. It can also serve intermediate/advanced users, who can benefit from the speed and short syntax that this package provides.

No, this is made for a new data project

  • If you made a pre-submission enquiry, please paste the link to the corresponding issue, forum post, or other discussion, or @tag the editor you contacted.

No.

Technical checks

Confirm each of the following by checking the box. This package:

Publication options

JOSS Options
  • The package has an obvious research application according to JOSS's definition.
    • The package contains a paper.md matching JOSS's requirements with a high-level description in the package root or in inst/.
    • The package is deposited in a long-term repository with the DOI:
    • (Do not submit your package separately to JOSS)
MEE Options
  • The package is novel and will be of interest to the broad readership of the journal.
  • The manuscript describing the package is no longer than 3000 words.
  • You intend to archive the code for the package in a long-term repository which meets the requirements of the journal (see MEE's Policy on Publishing Code)
  • (Scope: Do consider MEE's Aims and Scope for your manuscript. We make no guarantee that your manuscript will be within MEE scope.)
  • (Although not required, we strongly recommend having a full manuscript prepared when you submit here.)
  • (Please do not submit your package separately to Methods in Ecology and Evolution)

Code of conduct

@maelle

maelle commented Jan 4, 2019

👋 @pachamaltese! Thanks for your submission! Can you confirm the package and underlying data source are mature enough for review?

@pachadotdev

@maelle Hi!! Yes, I incorporated all the feedback from the previous package and extended it to obtain data from different tables, since I built my own tailored API

@pachadotdev

@maelle Hi! Sorry, I totally forgot to mention that the rOpenSci badge I added should get another number, since 217 is for the older package

@maelle

maelle commented Jan 5, 2019

Editor checks:

  • Fit: The package meets criteria for fit and overlap
  • Automated tests: Package has a testing suite and is tested via Travis-CI or another CI service.
  • License: The package has a CRAN or OSI accepted license
  • Repository: The repository link resolves correctly
  • Archive (JOSS only, may be post-review): The repository DOI resolves correctly
  • Version (JOSS only, may be post-review): Does the release version given match the GitHub release (v1.0.0)?

Editor comments

Thanks for your submission! I'm now looking for reviewers. Below are a few comments/questions.

In the README please write GPL3 licence to be more specific.

Is it necessary for the repo to be a fork? Or can you at least add an issue tracker to it?

Could you fix the example in get_countrycode() docs or at least add another working example?

Also note that when transferring it to the ropensci GitHub organization, it'll need to have the same name as the package.

Please update the review badge, the number in it should be the issue number, 274.

Why is there a CRAN link in DESCRIPTION given the package has not been released on CRAN? By the way please do not submit the package to CRAN before the end of the review process.

There's no example after the sentence "Here’s a basic example to obtain bilateral trade between Chile and Argentina in the year 2000 under different trade classifications:" in the README.

Here's the output from goodpractice, for you and the reviewers to have in mind.

 ✖ avoid long code lines, it is bad for readability.
    Also, many people prefer editor windows that are about 80
    characters wide. Try make your lines shorter than 80 characters

    R/get_countrycode.R:60:1
    R/get_countrycode.R:67:1
    R/get_data.R:44:1
    R/get_data.R:47:1
    R/get_data.R:97:1
    ... and 15 more lines

  ✖ fix this R CMD check NOTE: Note: found 4 marked UTF-8
    strings
  ✖ fix this R CMD check WARNING: LaTeX errors when
    creating PDF version. This typically indicates Rd problems.
  ✖ fix this R CMD check ERROR: Re-running with no
    redirection of stdout/stderr. Hmm ... looks like a package You
    may want to clean up by 'rm -rf
    /tmp/RtmpYe7MiN/Rd2pdf33297bad420f'

Typos

cointain      basic-usage.Rmd:45
mesured       basic-usage.Rmd:114
perfoms       get_data.Rd:39

Reviewers: @emilyriederer @mpadge
Due date: 2019-01-28

@pachadotdev

pachadotdev commented Jan 5, 2019

Thanks a lot @maelle

Here's my checklist

  • In the README please write GPL3 licence to be more specific. added as plain text
  • Is it necessary for the repo to be a fork? Or can you at least add an issue tracker to it? I moved it to a new repo
  • Could you fix the example in get_countrycode() docs or at least add another working example? added more examples
  • Also note that when transferring it to the ropensci GitHub organization, it'll need to have the same name as the package. new repo is tradestatistics/tradestatistics
  • Please update the review badge, the number in it should be the issue number, 274. done
  • Why is there a CRAN link in DESCRIPTION given the package has not been released on CRAN? switched to package's pkgdown site
  • By the way please do not submit the package to CRAN before the end of the review process. absolutely
  • There's no example after the sentence "Here’s a basic example to obtain bilateral trade between Chile and Argentina in the year 2000 under different trade classifications:" in the README. re-added
  • long lines fixed
  • typos fixed
  • UTF-8 warnings

the only thing I cannot reproduce are the UTF-8 warnings

I created a new repo https://github.com/tradestatistics/tradestatistics

Thanks a lot !!!!

All the best!

@pachadotdev

pachadotdev commented Jan 6, 2019

Hi @maelle
I tested on another laptop and I still cannot reproduce the UTF-8 warnings
Here's my output

> devtools::check()
Updating tradestatistics documentation
Writing NAMESPACE
Loading tradestatistics
Writing NAMESPACE
── Building ────────────────────────────────────────────────────────────────────────────── tradestatistics ──
Setting env vars:
● CFLAGS    : -Wall -pedantic -fdiagnostics-color=always
● CXXFLAGS  : -Wall -pedantic -fdiagnostics-color=always
● CXX11FLAGS: -Wall -pedantic -fdiagnostics-color=always
─────────────────────────────────────────────────────────────────────────────────────────────────────────────
✔  checking for file ‘/home/pacha/github/tradestatistics/DESCRIPTION’ ...
─  preparing ‘tradestatistics’:
✔  checking DESCRIPTION meta-information ...
─  installing the package to build vignettes
✔  creating vignettes (16.6s)
─  checking for LF line-endings in source and make files and shell scripts
─  checking for empty or unneeded directories
   Removed empty directory ‘tradestatistics/data-raw’
   Removed empty directory ‘tradestatistics/docs’
─  looking to see if a ‘data/datalist’ file should be added
─  building ‘tradestatistics_0.1.tar.gz’
   
── Checking ────────────────────────────────────────────────────────────────────────────── tradestatistics ──
Setting env vars:
● _R_CHECK_CRAN_INCOMING_USE_ASPELL_: TRUE
● _R_CHECK_CRAN_INCOMING_REMOTE_    : FALSE
● _R_CHECK_CRAN_INCOMING_           : FALSE
● _R_CHECK_FORCE_SUGGESTS_          : FALSE
── R CMD check ──────────────────────────────────────────────────────────────────────────────────────────────
─  using log directory ‘/tmp/RtmpoNTgn8/tradestatistics.Rcheck’
─  using R version 3.5.2 (2018-12-20)
─  using platform: x86_64-pc-linux-gnu (64-bit)
─  using session charset: UTF-8
─  using options ‘--no-manual --as-cran’
✔  checking for file ‘tradestatistics/DESCRIPTION’
─  checking extension type ... Package
─  this is package ‘tradestatistics’ version ‘0.1’
─  package encoding: UTF-8
✔  checking package namespace information ...
✔  checking package dependencies (1.3s)
✔  checking if this is a source package
✔  checking if there is a namespace
✔  checking for executable files ...
✔  checking for hidden files and directories
✔  checking for portable file names
✔  checking for sufficient/correct file permissions ...
✔  checking serialization versions
✔  checking whether package ‘tradestatistics’ can be installed (1.4s)
✔  checking installed package size ...
✔  checking package directory ...
✔  checking ‘build’ directory
✔  checking DESCRIPTION meta-information ...
✔  checking top-level files
✔  checking for left-over files
✔  checking index information ...
✔  checking package subdirectories ...
✔  checking R files for non-ASCII characters ...
✔  checking R files for syntax errors ...
✔  checking whether the package can be loaded (394ms)
✔  checking whether the package can be loaded with stated dependencies (358ms)
✔  checking whether the package can be unloaded cleanly (370ms)
✔  checking whether the namespace can be loaded with stated dependencies (361ms)
✔  checking whether the namespace can be unloaded cleanly (388ms)
✔  checking loading without being on the library search path (437ms)
✔  checking dependencies in R code (395ms)
✔  checking S3 generic/method consistency (870ms)
✔  checking replacement functions (385ms)
✔  checking foreign function calls (394ms)
✔  checking R code for possible problems (2.3s)
✔  checking Rd files ...
✔  checking Rd metadata ...
✔  checking Rd line widths ...
✔  checking Rd cross-references ...
✔  checking for missing documentation entries (407ms)
✔  checking for code/documentation mismatches (1.2s)
✔  checking Rd \usage sections (956ms)
✔  checking Rd contents ...
✔  checking for unstated dependencies in examples ...
✔  checking contents of ‘data’ directory
✔  checking data for non-ASCII characters ...
✔  checking data for ASCII and uncompressed saves ...
✔  checking installed files from ‘inst/doc’ ...
✔  checking files in ‘vignettes’ ...
✔  checking examples (976ms)
✔  checking for unstated dependencies in ‘tests’ ...
─  checking tests ...
✔  Running ‘testthat.R’
✔  checking for unstated dependencies in vignettes ...
✔  checking package vignettes in ‘inst/doc’ ...
✔  checking re-building of vignette outputs (30.6s)
   
   
── R CMD check results ───────────────────────────────────────────────────────────── tradestatistics 0.1 ────
Duration: 48.7s

0 errors ✔ | 0 warnings ✔ | 0 notes ✔

@maelle

maelle commented Jan 7, 2019

@pachamaltese I've just re-run R CMD Check but not via devtools::check() and the warning disappeared 😮 so it was most probably a false positive, sorry about that.

@maelle

maelle commented Jan 7, 2019

@emilyriederer @mpadge thanks for accepting to review this package! 😺 Your reviews are due on 2019-01-28. As a reminder here are our reviewer guide and review template.

@emilyriederer

emilyriederer commented Jan 13, 2019

Package Review

  • As the reviewer I confirm that there are no conflicts of interest for me to review this work (If you are unsure whether you are in conflict, please speak to your editor before starting your review).

Documentation

The package includes all the following forms of documentation:

  • A statement of need clearly stating problems the software is designed to solve and its target audience in README
  • Installation instructions: for the development version of package and any non-standard dependencies in README
  • Vignette(s) demonstrating major functionality that runs successfully locally
  • Function Documentation: for all exported functions in R help
  • Examples for all exported functions in R Help that run successfully locally
  • Community guidelines including contribution guidelines in the README or CONTRIBUTING, and DESCRIPTION with URL, BugReports and Maintainer (which may be autogenerated via Authors@R).

Functionality

  • Installation: Installation succeeds as documented.
  • Functionality: Any functional claims of the software been confirmed. Checked yes but please document timeframe of available data!
  • Performance: Any performance claims of the software been confirmed.
  • Automated tests: Unit tests cover essential functions of the package and a reasonable range of inputs and conditions. All tests pass on the local machine.
  • Packaging guidelines: The package conforms to the rOpenSci packaging guidelines

Final approval (post-review)

  • The author has responded to my review and made changes to my satisfaction. I recommend approving this package.

Estimated hours spent reviewing: 6 hours


Review Comments

I enjoyed reviewing the tradestatistics package. It is very useful and stays well-focused on its stated goal. The clearly stated motivation made the package very easy to learn, since everything included is necessary and aptly named for its purpose, and the documentation is comprehensive.

Below I've highlighted some observations that might make this package even better. Many of the items are small (and some arguably immaterial), but I chose to log everything I encountered in the review in case it would be useful.

Documentation

Overall, the documentation of the package is clear. I had no major problems understanding the goal it is meant to achieve, the scope of features it offers, and successfully writing and running the functions. However, here are a few opportunities I found to make the documentation even more accessible:

Package Metadata

  • The statement of need is very clear. It doesn't explicitly state "its target audience", per se, but I assume that is pretty clearly anyone who might wish to access such trade data. Perhaps an example could be given, but I don't think this is necessary.
  • The DESCRIPTION file has an outdated BugReports link. It links to (https://github.com/tradestatistics/ts-r-package/issues) instead of (https://github.com/tradestatistics/tradestatistics/issues).
  • Since get_data() restricts data to be from 1962 - 2016, you should consider putting this in the README and/or Description so users don't spend time learning the package only to find it can't provide the data they need. Users would probably also be interested to know if/when this date range might expand or if it is static.

Package Documentation

  • I absolutely love the diagram you have in the README / pkgdown home. I haven't seen many packages show what they do with diagrams, so this is a very nice touch. I can imagine it being particularly useful to those who have not worked with APIs a lot before because it helps to explain what parts have been abstracted.
  • The package is missing top-level documentation requested in Section 1.6 of rOpenSci's packaging guide. You can make this with usethis::use_package_doc().

Vignettes

  • "How to use this package" vignette
    • refers to package as oec not tradestatistics
    • it might be worth mentioning that no steps are needed for authentication. So many API packages start out with instructions for setting up an access token that I expected something similar and at first worried I had missed something when I didn't see it
    • In the get_countrycode() example, if the phrases "Single match with no replacement" and "Single match with replacement" are important for end-users to understand the matching logic, this might benefit from more explanation.
    • The get_countrycode() example shows that this function might return either a vector or a tibble. However, this might benefit from explicitly calling out in words since this could potentially throw users off (e.g. if using these functions with type-safe map or apply functions)
    • In the sections on get_countrycode(), it might be useful to motivate these functions with the fact that the API is queried via these codes so locally-run functions help map English-language names of countries to query arguments. Adding motivation for get_productcodes() could also be useful. Since these don't seem to be query parameters, I don't have a clear understanding of when I as a user would need them.
    • In the get_data() examples, when you show queries with different arguments to the table parameter, consider pointing out that that is what you are changing in the query so people do not have to compare queries. (For example, instead of "If we want Chile-Argentine bilateral trade at aggregated level in 1980:", something like "By setting the table parameter to 'yrp', we can obtain the same data aggregated to the year-reporter-partner level.") On that note, before showing these examples, you might consider even proactively discussing the different tables available and the naming conventions used to distinguish them.
  • "Creating the datasets" vignette: It's great to have this documented, but it might be better off as an R script in /data-raw/ (as Hadley suggests in R Packages) than as a vignette. Your package makes it unnecessary for users to obtain data this way, so the intent of the vignette could confuse some readers and make them wonder if this is code they need to run.

Function Documentation

  • In the get_data() function, the years parameter correctly implies that one or more years can be passed. This is confirmed by the vignette. However, this is not totally obvious in the function documentation. The singular "value" in "Numeric value greater or equal to 1962 and lower of equal to 2016" might imply only a single year could be passed. Perhaps saying "value(s)" or showing a second example in this function's documentation could clarify.
  • In get_data() the documentation explains that the defaults for reporter, partner, and years is all NULL, which can be seen in the function signature. However, it does not explain how NULL values are treated. Users might make different assumptions if "NULL" means all, none, or something else. This could be particularly confusing when the default is not an acceptable value for that parameter (e.g. years = NULL results in an error). (More broadly, I'm not sure if defaults really make sense in this case when there isn't a logical default value. I added a long note on this to the UI section.)
  • The example given for the tables dataset mistakenly calls the products dataset instead
  • In the products documentation, it might be nice to add some sort of link with information about HS07. This is described in more detail in the vignette, but readers who are looking at the documentation first might appreciate the opportunity to get additional context.
  • Typo: In get_productcodes(): "Obtain a valid product codes" either remove "a" or change "codes" to "code"
  • In both get_productcodes() and get_countrycodes(), it might be worth clarifying what "searches" means. Especially with products, users might misinterpret search to include fuzzy matching or synonyms when the function is actually looking for exact-match substrings. This might inform how they use the function; for example, it would generally be smarter to only pass in a single word versus a phrase, a singular word than a plural, etc.
  • From what are the Rd files for the datasets being generated? I don't see code for those in the R folder.

Functionality / Implementation

External Dependencies

Section 1.11 of rOpenSci's packaging guides reinforces the good practice of trying to limit external dependencies where possible. This is a good practice not only for the health and longevity of your package but also for marketing purposes; sometimes people working on servers or behind proxies will have limited ability to install packages so fewer dependencies may enable more users. I think there are some opportunities to reduce dependencies in this package.

I truly don't mean to pick on tidyverse packages here! I also really enjoy writing (and reading) code this way. However, for package internals, I try to lean more heavily on dependency-free base R alternatives if I am only using relatively small numbers of less-differentiated functions from these packages. I realize this is also a personal style issue so I hope I'm not overstepping! Just throwing out alternatives for consideration.

  • The package imports stringr just for four functions. This can be eliminated entirely by using the following base functions instead:

    • str_detect -> grepl
    • str_to_lower -> tolower (or not needed with grepl's ignore.case argument)
    • str_length -> nchar
    • str_sub -> declared as import for get_data() but I didn't see it used anywhere
  • dplyr is also imported for only a few functions, most of which aren't incredibly specialized (e.g. select, filter). I believe you could eliminate dependence on dplyr (plus rlang and magrittr, which are imported to support it) with the following swaps:

    • left_join -> merge(x = df1, y = df2, by = {{join keys here}}, all.x = TRUE)
    • select, everything -> data[c('name1', 'name2', 'name3')] (since all variable names are predictable/predetermined, you can just type them out instead of using everything)
    • bind_rows -> rbind (don't need auto column-matching of bind_rows since the data to be bound is quite predictable)
    • rename -> colnames(data)[colnames(data)=="old_name"] <- "new_name"
    • as_tibble -> There are different ways to think about this one. First, consider if the benefits of outputting a tibble over a base R dataframe buys you anything, second see Section 1.11 which discusses adding the tbl_df class to your dataframe as an alternative. Finally, if you continue to use purrr (see the next large bullet), it would be better to import as_tibble from tibble than dplyr since purrr depends on tibble regardless.
    • filter, matches, mutate -> declared as import for get_data() but I didn't see it used anywhere
  • Similarly, only two functions are being used from purrr, mostly for iteration or casting

    • map_df -> the same result as data <- as_tibble(map_df(seq_along(years), read_from_api)) can be achieved with do.call(rbind, sapply(years, read_from_api, simplify = FALSE)) (except for the data frame/tibble issue described above)
    • as_vector -> I think need for this goes away if you consider implementing get_countrycodes() similar to the get_productcodes() example before
    • The counterpoint to this is if you want "safe failure". For example, if the API has data for some of the specified years but not others, some of purrr's side-effect adverbs could help you provide as much of the requested data as possible to users while preserving the error messages when failure occurred.
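To illustrate the map_df() swap, here's a minimal base-R sketch; read_from_api below is a stand-in for the package's internal reader, not its actual implementation:

```r
# Stand-in for the package's internal API reader: one data frame per year
read_from_api <- function(year) {
  data.frame(year = year, trade_value_usd = year * 10)
}

years <- c(2000, 2001, 2002)

# Equivalent of map_df(seq_along(years), read_from_api), but read_from_api
# receives the year itself rather than an index into the vector
result <- do.call(rbind, sapply(years, read_from_api, simplify = FALSE))
```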

As an example, here's an alternative implementation of get_productcodes() using only base functions:

Initial

get_productcode <- function(productname = NULL) {
  # get the products dataset, create the type_product column, 
  # bind them all together and do the search
  tradestatistics::products %>%
    mutate(
      type_product = productname
    ) %>% 
    filter(
      str_detect(
        str_to_lower(!!sym("product_fullname_english")), productname
      )
    )
}

Final

get_productcode <- function(productname) {
  
  products_match <- grepl(productname, 
                          products$product_fullname_english, 
                          ignore.case = TRUE)
  products[products_match,]
  
}

The code for get_countrycodes() could be quite similar.
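The dplyr swaps above can be sketched the same way with toy data; the data frames and column names here are invented purely for illustration:

```r
# Toy data standing in for API output and the package's countries dataset
trade <- data.frame(country_iso = c("chl", "arg"),
                    export_val = c(100, 200))
countries <- data.frame(country_iso = c("chl", "arg"),
                        country_name = c("Chile", "Argentina"))

# left_join() -> merge() with all.x = TRUE
joined <- merge(trade, countries, by = "country_iso", all.x = TRUE)

# rename() -> assignment to colnames()
colnames(joined)[colnames(joined) == "export_val"] <- "export_value_usd"

# select() -> indexing by a character vector of known column names
joined <- joined[c("country_iso", "country_name", "export_value_usd")]

# bind_rows() -> rbind() when the columns are predictable
stacked <- rbind(joined, joined)
```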

Readability / Modularity

  • This is so trivial I almost hate to mention, but in get_countrycodes() it might make for cleaner reading to move the if() statement on line 28 to be part of the switch() starting on line 32.
  • get_data() would be easier to read and test if it was broken into small internal sub-functions. If this function is throwing errors for a user, it could be challenging to know where the problem lies. I love how the comments clearly break down the different distinct steps here, and these might benefit from modularization:
    • Validating inputted years given the table requested
    • Validating passed partner and/or reporter against internal country dataset
    • Validating passed table against internal table dataset
    • Define read_from_api function
    • Apply read_from_api function across requested years
    • Join output with tables embedded in package and rearrange output to return
  • Modularizing some of these steps could help make the get_data() code significantly more succinct by cutting out some arbitrary uniqueness. I notice that everything from lines 168 - 292 is variations on the same theme with only slightly different conditions by dataset. (Different variable names in select() and sometimes a join.) If these conditions became arguments for a single helper function, it could help follow the DRY principle (don't repeat yourself) and make the function easier to maintain.
  • Regardless of how much modularization you do in get_data(), I would at least recommend extracting the read_from_api function outside the get_data() function definition (and tagging it as an internal function). This would make the code more readable.
  • For read_from_api, consider letting the t parameter represent the year instead of an index. This seems like an unnecessary complication which forces read_from_api to "know about" the full vector when it is only working with a single year on any one call.
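As a sketch of what one extracted validator might look like (the function name, bounds, and message are illustrative, not the package's actual internals):

```r
# Illustrative internal helper: validates requested years in one place,
# so get_data() stays short and the check is easy to unit test on its own
check_years <- function(years, min_year = 1962, max_year = 2016) {
  if (!is.numeric(years) || any(years < min_year | years > max_year)) {
    stop(sprintf("This package only supports years %d-%d", min_year, max_year),
         call. = FALSE)
  }
  invisible(years)
}
```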

Technical Implementation Details

  • get_countrycode() uses print() to communicate with users. I think this is somewhat redundant because what's being printed is also being visibly returned. I would consider deleting this altogether, but if you want to keep messaging to users, it's better to do user communication through message() than print() in packages, as the former is easier for users to suppress.
  • In get_data() lines 71-73, it's not necessary to load in and reassign the datasets in your package. They should already be in the namespace.
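For example, user-facing output sent via message() goes to stderr and can be silenced by callers, unlike print(); the lookup function below is hypothetical:

```r
# Hypothetical lookup: message() is suppressible, print() is not
find_country <- function(name, codes) {
  out <- codes[grepl(name, codes$country_name, ignore.case = TRUE), ]
  message(nrow(out), " match(es) found")
  out
}

codes <- data.frame(country_name = c("Chile", "China"),
                    country_iso = c("chl", "chn"))

# Callers who want silence can simply wrap the call:
res <- suppressMessages(find_country("chile", codes))
```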

Testing

  • Right now, tests seem to include only examples where the right thing happens. I would also love to see tests that the function fails in the expected way when it cannot succeed. For example, get_productcodes() tests only a successful search for "fruit". But what about testing something like "abc123"? If someone changes the implementation of these functions later, they will be much more focused on "is it doing the right thing when it does anything at all", but documenting expected failure behavior through tests ensures that this too will be checked for stability over time.
  • The same would apply to the components of get_data() if it's modularized. Right now, especially with API mocking, not much of this function is being tested: there are no behavior guarantees for valid/invalid years, valid/invalid reporters/partners, valid/invalid tables, etc.
  • One of the biggest reasons API unit tests are hard to write is API authorization and rate limiting. These don't seem to be a problem with this API, so it might be worth adding some tests to run locally that actually do make API calls. You can use testthat skip functions (like skip_on_cran, skip_on_travis, etc.) to avoid running these everywhere.
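A failure-path test plus a locally-run live-API test might look like this; the function names follow the package, but the exact arguments to get_data() are illustrative:

```r
library(testthat)

test_that("get_productcode() handles a query with no matches", {
  # Documents expected behavior on nonsense input, not just the happy path;
  # whether this should error or return zero rows is a design decision
  expect_error(get_productcode("abc123"))
})

test_that("a real API call returns data when run locally", {
  skip_on_cran()    # never hit the network on CRAN
  skip_on_travis()  # CI already runs the mocked (vcr) tests
  expect_is(get_data("chl", "arg", 2000), "data.frame")
})
```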

API Design / UI Improvements

Functions, Parameters, and Return Values

  • A lot of function parameters set NULL as the default value when they would actually require a valid value to run (e.g. a year, a product name, etc.) Does this provide an advantage over not setting a default?
  • Did you consider a more informative name for get_data()? Googling "r package get_data" reveals how many packages contain a function with the same name. A more specific name could help avoid potential namespace problems. (This is similar to a recommendation in Section 1.3 of rOpenSci's packaging guide)
  • I find it a little confusing that the return type of get_countrycode() varies by the number of elements. I think it would be nice if this function and get_productcode() had similar behavior, and personally I prefer the design of get_productcode(). I can imagine someone using these functions with type-safe apply or map functions, and the varying output types could become a headache. I would prefer that tibbles be returned consistently.
  • What is the envisioned use case for get_productcode() since this information can't be passed into get_data()? From this package and the API documentation, I gather there is no support for filtering on commodity codes. Is that true? I bet a lot of users would be interested in filtering down to a specific industry.
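One alternative to NULL defaults, sketched with an illustrative function: leave required arguments without a default and fail fast with missing():

```r
# Illustrative: `years` has no default, so omitting it triggers an immediate,
# clearly-worded error rather than a confusing NULL-related failure later on
fetch_trade <- function(years) {
  if (missing(years)) {
    stop("`years` is required and must be between 1962 and 2016", call. = FALSE)
  }
  years
}
```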

Error Messages

  • I really appreciate how courteous the error messages are, but sometimes I think more blunt messages could be more helpful. I think this is especially true when thinking about non-native English speakers using this package. A few examples from get_data():
    • When years are out of bounds, the error begins "Please verify that you are requesting...", but explicitly stating something like "This package only supports years 1962-2016" frontloads the most crucial information
    • The error for an incorrect table name suggests "Please check the spelling or explore the tables table..." Since there are 10 potential tables, all with succinct names, listing these in the error message might provide more rapid user support without them needing to run additional commands
    • When the API fails, the error message says "It wasn't possible to obtain data. Provided this function tests your internet connection there was a serve problem. Please try again later." It doesn't really matter to users how this package has concluded there is a server problem, and a non-native speaker (or someone in a hurry) might see "test internet connection" and completely misinterpret. I might cut this part for something like "Data unavailable due to server problem. Please try again later."
  • Be sure all error messages are accurate. For example, in the first example above, the server being down might not be the only reason the request failed. If the API endpoints or query parameters change, other errors could be generated. It might be more useful to give the user back the API response code to help them investigate further.
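A minimal sketch of what a blunter, status-aware failure message could look like. Here `explain_failure()` is a hypothetical helper, not part of the package, and the message wording is an assumption:

```r
# Sketch: surface the HTTP status code in the failure message
# (explain_failure and its wording are illustrative, not the package's API)
explain_failure <- function(status_code) {
  if (status_code != 200) {
    stop(
      sprintf("Data unavailable (HTTP status %s). Please try again later.", status_code),
      call. = FALSE
    )
  }
  invisible(TRUE)
}

explain_failure(503)
```

This keeps the message short for non-native speakers while still giving the user something concrete (the status code) to investigate with.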

@pachadotdev
Author

@emilyriederer Thanks a lot for this huge review!! I have a lot of homework now

@emilyriederer

@pachamaltese Thanks for the package! It was very fun to play with. I hope it's helpful, and sorry for the length! All of the items are really small, and some I debated if they were worth mentioning. Personally, my struggle is always with not enough feedback, so I thought I'd share all my thoughts and you could do with them what you will 😄

@maelle
Member

maelle commented Jan 14, 2019

Many thanks for your thorough review @emilyriederer! 😸

@mpadge
Member

mpadge commented Jan 16, 2019

Package Review

Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide

  • As the reviewer I confirm that there are no conflicts of interest for me to review this work (If you are unsure whether you are in conflict, please speak to your editor before starting your review).

Documentation

The package includes all the following forms of documentation:

  • A statement of need clearly stating problems the software is designed to solve and its target audience in README

  • Installation instructions: for the development version of package and any non-standard dependencies in README

  • Vignette(s) demonstrating major functionality that runs successfully locally

  • Function Documentation: for all exported functions in R help

  • Examples for all exported functions in R Help that run successfully locally

  • Community guidelines including contribution guidelines in the README or CONTRIBUTING, and DESCRIPTION with URL, BugReports and Maintainer (which may be autogenerated via Authors@R).

  • There are currently no contribution guidelines, nor a "CONTRIBUTING" file. As noted by @emilyriederer, the "BugReports" field in the "DESCRIPTION" needs updating.

For packages co-submitting to JOSS

The package contains a paper.md matching JOSS's requirements with:

  • A short summary describing the high-level functionality of the software
  • Authors: A list of authors with their affiliations
  • A statement of need clearly stating problems the software is designed to solve and its target audience.
  • References: with DOIs for all those that have one (e.g. papers, datasets, software).

Functionality

  • Installation: Installation succeeds as documented.
  • Functionality: Any functional claims of the software been confirmed.
  • Performance: Any performance claims of the software been confirmed.
  • Automated tests: Unit tests cover essential functions of the package and a reasonable range of inputs and conditions. All tests pass on the local machine.
  • Packaging guidelines: The package conforms to the rOpenSci packaging guidelines

Final approval (post-review)

  • The author has responded to my review and made changes to my satisfaction. I recommend approving this package.

Estimated hours spent reviewing: ~~4.5~~ 6 (including time spent on second review below)


Review Comments

Much of the current review does little more than reiterate the phenomenally detailed review of @emilyriederer, to whom I am also indebted for saving me much of the work, and for giving me a very helpful overview of the package!

One of my motivations in reviewing this package was to gain hands-on insight into how @pachamaltese had overcome the server-side problems that plagued the previous review process. The solution is amazingly impressive, and the documentation provided on the tradestatistics.io website goes miles beyond what this package alone might suggest. That is not a problem at all, but some of my suggestions below, and many of the concerns of @emilyriederer, could be very easily allayed by providing links in the documentation to relevant pages of the fantastic tradestatistics.github.io/documentation.

(An immediate example is the central table of data nomenclature and availability which is so clear and simple that it could be helpfully reproduced in the documentation for the R package.)

One other general comment, again and as always as already indicated by @emilyriederer, concerns the names of data sets and functions, which it seems fair to suggest are too general, and would be better with, for example, package-specific prefixes (see below). In this regard, I note that the whole tradestatistics.io endeavour seems already set up to be -- or at least to deserve to become -- a mini-empire and go-to place for this stuff. And yet the Observatory of Economic Complexity is not the only source of such data, with arguably an equally rich source being the OECD. I for one could definitely see a case for tradestatistics.io offering a unified interface for accessing such data in general. Please note that I am no way suggesting that ought to be incorporated within the package at the current point in time, and certainly not as any requirement of this review process! I merely mention it as worth thinking about in terms of current nomenclature, because it offers a number of possibilities including,

  1. Prefix data and functions with, for example, oec_get_data(), and so allow future extension to, for example, oecd_get_data().
  2. Provide a general prefix and general functions whose sources are specified by parameters, e.g. tradestats_get_data(..., source = "oecd") (or whatever prefix might be desired).

General style

To continue on the above point, the names of functions and internal data sets are arguably somewhat overly generic, as is much of the function documentation. @pachamaltese would be advised to keep in mind that this package may be loaded in a workspace with many other packages, and ?countries simply tells me "Provides details about countries". This should be contextualised to be helpful - "Provides details about countries included in the Observatory of Economic Complexity data retrieved by tradestatistics ..." or similar. Analogous arguments apply to most documentation. These are trivial but important fixes.

Function names suffer a similar lack of specificity, as already emphasised by @emilyriederer. For example, rdocumentation.org suggests 109 functions named get_data() in various CRAN packages. Examples of package-specific prefixes are sf with its uniform st_* functions, or piggyback with pb_*. Note, however, that ts_ is used for the tsbox package, so it is likely best to avoid that.

README

While I concur with the sentiments of @emilyriederer that the diagram at the outset of the README is uniquely cool, I note that most of the text is a pasted GPL-3 license which is bundled with R and so strictly not needed there. Github repos are a separate issue, and Best Practice currently states,

The project MUST post the license(s) of its results in a standard location in their source repository ... E.g., as a top-level file named LICENSE or COPYING. License filenames MAY be followed by an extension such as ".txt" or ".md". Note that this criterion is only a requirement on the source repository. You do NOT need to include the license file when generating something from the source code (such as an executable, package, or container). For example, when generating an R package for the Comprehensive R Archive Network (CRAN), follow standard CRAN practice: if the license is a standard license, use the standard short license specification (to avoid installing yet another copy of the text) and list the LICENSE file in an exclusion file such as .Rbuildignore.

Best Practice is nevertheless an optional system, and it would suffice simply to delete the GPL-3 text from README in order to free it up to contain the useful package-related stuff everybody will expect.

get_countrycode()

While noting @emilyriederer's comment regarding default NULL values, I actually think these are generally much safer than having no default values and constantly checking if (!missing (x)), because they allow moving straight on to simple type checks for the parameters. I nevertheless note that the following is not very informative:

get_countrycode()
#> Error in if (countryname == "all") { : argument is of length zero

In her words, the package "does not explain how NULL values are treated." Moreover, the following call actually returns an object:

get_countrycode(1)
#> There is more than one match for your search. Please try again
#>        using one of these codes:
#> # A tibble: 0 x 6
#> # … with 6 variables: country_iso <chr>, country_name_english <chr>,
#> # country_fullname_english <chr>, continent_id <int>,
#> #   continent <chr>, eu28_member <int>

This is because str_to_lower returns countryname = "1" and the subsequent str_detect call then matches that onto all countries which have changed names, because these have years of change listed like,

tradestatistics::countries$country_fullname_english [233]
#> [1] "Belgium (Belgium-Luxembourg Customs Union until 1999)"

(See comment below for why the result shown on screen is then actually empty.) There are lots of possibilities for handling such cases, and I'm sure @pachamaltese is capable of implementing any sensible option, but I mention this as an incentive to implement a bit more thorough pre-checking of parameter types and classes. (At least ensure the argument is stripped to alpha characters only and error on anything else such as numeric.)
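One hedged sketch of such pre-checking, using a hypothetical helper that is not part of the package (the exact rules, e.g. which characters to allow, are assumptions):

```r
# Sketch: strict pre-checking of countryname before any matching is done
# (check_countryname is a hypothetical helper, not the package's API)
check_countryname <- function(countryname) {
  if (!is.character(countryname) || length(countryname) != 1) {
    stop("countryname must be a single character string, e.g. \"chile\"")
  }
  # allow only letters, spaces and | for alternatives; reject digits etc.
  if (grepl("[^a-zA-Z| ]", countryname)) {
    stop("countryname may only contain letters (and | to separate alternatives)")
  }
  tolower(countryname)
}

check_countryname("Chile")
#> [1] "chile"
```

With this in place, get_countrycode(1) and get_countrycode("1") would fail fast with an informative message instead of silently matching the year annotations in country_fullname_english.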

Another example:

get_countrycode("par")
#> There is more than one match for your search. Please try again
#>        using one of these codes:
#> # A tibble: 1 x 6
#>   country_iso country_name_english country_fullname_english continent_id continent eu28_member
#>   <chr>       <chr>                <chr>                           <int> <chr>           <int>
#> 1 pry         Paraguay             Paraguay                            5 Americas            0

That message is misleading because there is actually only one match, contrary to the message. As with the above mismatches of "1" onto years of name changes, this is also a mismatch of "par" onto

  country_iso country_name_english country_fullname_english                                            continent_id continent eu28_member
  <chr>       <chr>                <chr>                                                                      <int> <chr>           <int>
1 pry         Paraguay             Paraguay                                                                       5 Americas            0
2 all         All Countries        All Countries (this is an alias to show several reporters/partner…           NA NA                 NA

... and "par" matches onto the protracted and irrelevant "several reporters/partners ..." detail for "All Countries". The "All Countries" entry is then filtered from the screen dump because the filter acts on country_name_english and not country_fullname_english, and that is certainly an inconsistency that ought to be rectified!

Also note that there will likely be workflows desiring multiple matches. This currently works, which is good:

get_countrycode("par|arg|vene")

but still issues the message, which is undesirable if I actually want all of those. That will be important to keep in mind when converting from tidyverse to base R string and index matching. An important starting point may be to presume that any values entered for countryname are likely to be the first letters, and so you can at least grep on string beginnings, which would then remove both the match of "par" to "...several reporters/partners" and "1" to the guff about name changes. But you'll likely know better than me there, and maybe other alternatives such as partial or fuzzy matching might be preferable?
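A sketch of the string-beginning idea, using a made-up country vector rather than the package's data:

```r
# Sketch: anchor each alternative to the string beginning so that
# "par" cannot match mid-string text like "reporters/partners"
# (the countries vector below is made up for illustration)
countries <- c(
  "paraguay",
  "argentina",
  "venezuela",
  "all countries (this is an alias to show several reporters/partners)"
)
pattern  <- "par|arg|vene"
anchored <- paste0("^(", pattern, ")")
grep(anchored, countries, value = TRUE)
#> [1] "paraguay"  "argentina" "venezuela"
```

The anchored pattern keeps the multi-match workflow working while dropping the spurious mid-string hits.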

And because we're here, I repeat the concerns of @emilyriederer, via her example for get_productcode(), that the current code for this step uses the potentially unnecessary tidyverse functions:

countrycode <- tradestatistics::countries %>%
  dplyr::filter(
    stringr::str_detect(
      stringr::str_to_lower(!!rlang::sym("country_fullname_english")), countryname
    )
  ) %>%
  dplyr::select(!!rlang::sym("country_iso")) %>%
  purrr::as_vector()

while a base R equivalent in this case is actually much clearer and simpler:

index <- grep (countryname,
               tolower (tradestatistics::countries$country_fullname_english))
tradestatistics::countries$country_iso [index]

Finally, I suspect issuing a message rather than an error on length(countrycode) != 1 is an intentional design feature to prevent the latter from breaking long tidy-type operations, but would perhaps encourage @pachamaltese to consider having errors as these can easily be caught in a non-breaking way (for example with the collateral package), yet provide additional important information in such workflows. Happy to leave to authorial judgement call, but I would otherwise suggest that an inability to return a meaningful result ought to be an error.

get_productcode

I note some potential confusion in the overlap between the products data set, the return object of get_productcode(), and that of get_countrycode(). The latter does what it says and returns a single code, while get_productcode() returns a filtered version of products with the search string appended as an extra column. This function thus doesn't really do what it says; rather, it returns the full product table matching the search item. I'd suggest here that get_products() would be a more appropriate name. Maybe @pachamaltese might then see a use case for a get_productcode() function that returned just the "commodity_code" column, although some clarification of nomenclature would then be in order as "products" are actually referred to as "commodities". This confusion between "products" and "commodities" pervades the entire package somewhat, and is likely something worth clarifying wherever possible.

I'd also see a use for string matching on group_name as well as product_fullname_english. Maybe get_product(productname = NULL, productgroup = NULL)?

I also don't see any specific need to append the column with the search term to the return result, although I grant that may just be a reflection of my ignorance here. Nevertheless, the presence of "group_name" seems to provide an "official" classification of "type_product", and so renders this redundant.

get_data

My main concern here is precisely that of @emilyriederer, and worth quoting:

What is the envisioned use case for get_productcode() since this information
can't be passed into get_data()? From this package and the API documentation,
I gather there is no support for filtering on commodity codes. Is that true? I
bet a lot of users would be interested in filtering down to a specific
industry.

My even stronger assertion would be that there would be very few users whose workflow would not be get_data() %>% filter(). It would be trivial to automate that by implementing the two extra arguments mentioned above,

get_data (..., product_fullname_english = NULL, group_name = NULL)

but then if that's to be done, the logical and full extension may as well be implemented,

get_data (..., product_fullname_english = NULL, commodity_code = NULL,
          group_code = NULL, group_name = NULL)

(Because you'll likely be using base R rather than tidyverse for the filtering, the parameter names need not necessarily be identical with those in the table, but diverging from that may cause confusion.)
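For illustration, the group filtering could be sketched in base R like this, with a made-up data frame standing in for the API result (column names follow the table nomenclature, but the data is invented):

```r
# Sketch: keep only rows whose commodity_code starts with a group code,
# in base R (the trade data frame is made up for illustration)
trade <- data.frame(
  commodity_code   = c("0101", "9503", "9504"),
  export_value_usd = c(100, 200, 300),
  stringsAsFactors = FALSE
)

group_code <- "95"
trade[substr(trade$commodity_code, 1, 2) == group_code, ]
```

This is the client-side analogue of the filtering discussed above; the same comparison could equally live server-side.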

To again paraphrase @emilyriederer, in the absence of such filtering ability, it's hard to argue the value of get_productcode() (-> get_products()?), because it can only be used through the user writing their own code to extract the desired entries, or likely even just as a visual lookup table, in which case it is no different from simply looking at the website.

Miscellaneous

DESCRIPTION has Suggests: covr, but this should be removed - it is directly loaded in the .travis.yml, and not related to the package itself.

Checklist

To aid @pachamaltese, this condenses (I think) most of the above points into a simpler checklist. Items with (O) are those I am happy to consider optional and leave the ultimate decision to @pachamaltese.

  • Contributing guidelines; fix DESCR (including removing covr)
  • Make function documentation more specific
  • Make names of all functions and data sets more package-specific
  • Remove License text from README (and possibly place elsewhere), and provide a bit more package-related useful stuff.
  • Fix fail cases and misleading errors for get_countrycode()
  • (O) Convert dependency-heavy "tidy" code to base R
  • get_productcode() -> get_product(productname = NULL, productgroup = NULL) or something along those lines.
  • get_data() -> get_data(..., product_fullname_english = NULL, commodity_code = NULL, group_code = NULL, group_name = NULL) or something along those lines
  • Improve string matching in get_countrycode()
  • (O) Convert messages on get_countrycode() failure to errors?

Thanks to @maelle and @pachamaltese for the opportunity to review this package, which has been a great pleasure! It is obviously the result of a phenomenal amount of work culminating in tradestatistics.io, of which this R package represents a small component. Congrats @pachamaltese on devising and implementing such an important and consummately well-executed project!

@maelle
Member

maelle commented Jan 16, 2019

Many thanks for your review @mpadge! 🚀

@pachamaltese, now both reviews are in! 🎉

@pachadotdev
Author

Thanks a lot @maelle , @emilyriederer and @mpadge
Provided tradestatistics.io was basically an annoyed version of me trying to create a stable and open server to download trade data, I shall start doing some changes to the API so that data filtering by commodity is made in a more efficient way and without downloading data that then you will discard.

Does it work? What do you think about adding a commodity group filtering option to the API and then completing that nice to-do list from above?

@pachadotdev
Author

Hi @mpadge

Once again, thanks a lot for the useful list.
To achieve some parts of the list without passing unwanted data I had to edit the API during lunch time.
Check this: https://api.tradestatistics.io/yrpc?y=2000&r=chl&p=arg&g=95
g is a new parameter that can be any HS group code (i.e. 01, 02, ..., 95, ... 99)
In the example above, the backend writes a query like ... WHERE LEFT(commodity_code, 2) = 95

Another new filtering option in the API: any table with a commodity column now supports the same kind of filter (the g example also applies to any table with a commodity field), for example: https://api.tradestatistics.io/yc?y=1962&c=0101
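For reference, the query URL with the new g parameter can be assembled in base R as below (this only builds the string, no network call is made):

```r
# Sketch: assemble the query URL with the new g parameter in base R
base_url <- "https://api.tradestatistics.io/yrpc"
params   <- list(y = 2000, r = "chl", p = "arg", g = "95")
query    <- paste(names(params), unlist(params), sep = "=", collapse = "&")
paste0(base_url, "?", query)
#> [1] "https://api.tradestatistics.io/yrpc?y=2000&r=chl&p=arg&g=95"
```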

With these changes I can start improving the package

@mpadge
Member

mpadge commented Jan 16, 2019

That's fantastic, @pachamaltese, and obviously a way more efficient solution than post-filtering. This review is becoming more and more interesting, with your ability to make server-side changes and then update the R code. And all that "during lunch time" 🍽️

@pachadotdev
Author

pachadotdev commented Jan 16, 2019

  • Contributing guidelines; fix DESCR (including removing covr)
  • Make function documentation more specific
  • Make names of all functions and data sets more package-specific
  • Remove License text from README (and possibly place elsewhere), and provide a bit more package-related useful stuff.
  • Fix fail cases and misleading errors for get_countrycode()
  • (O) Convert dependency-heavy "tidy" code to base R
  • get_productcode() -> get_product(productname = NULL, productgroup = NULL) or something along those lines.
  • get_data() -> get_data(..., product_fullname_english = NULL, commodity_code = NULL, group_code = NULL, group_name = NULL) or something along those lines
  • Improve string matching in get_countrycode()
  • (O) Convert messages on get_countrycode() failure to errors?
  • remove print in country_codes()

@emilyriederer

@mpadge Really enjoyed reading your review! I really enjoyed hearing how you thought through it. Very interesting to see where we were aligned or to hear your perspective on the few places we differed. In particular, I do see your point on the NULL function defaults, so I retract my comment there. 😄

@pachamaltese I think when I wrote my review I somehow missed the full context: you seem to completely own the API? That's awesome! Very psyched you were able to add filtering by commodity, as this will probably majorly simplify things for users.

@pachadotdev
Author

pachadotdev commented Mar 27, 2019 via email

@maelle
Member

maelle commented Mar 27, 2019

Ok! @emilyriederer @mpadge, if you're happy with the changes implemented, can you check the box "The author has responded to my review and made changes to my satisfaction. I recommend approving this package." in your reviews?

Thanks to all of you for a very active and productive discussion in this thread!

@pachadotdev
Author

@emilyriederer @mpadge
Hi! I just added the option of including shortened product names and communities. This passes all checks and it can be somewhat useful for grouping/visualization.

@emilyriederer

Box is checked on my end! Thanks, @pachamaltese, for such a cool API and package! You make me want a reason to need trade data so I can play with this even more 😄

@pachadotdev
Author

Thanks a lot @emilyriederer !! Your feedback gave me tons of ideas I'd never have figured out myself. Now I'm working on a "GUI" (a shiny dashboard) for the package. I shall release a beta later today.

@pachadotdev
Author

@maelle @emilyriederer @mpadge can you help me share this dashboard, which is basically a GUI for the package? https://shiny.tradestatistics.io/
the main website is https://tradestatistics.io/

@mpadge
Member

mpadge commented Apr 2, 2019

Completed above and copied here for completeness:

Final approval (post-review)

  • The author has responded to my review and made changes to my satisfaction. I recommend approving this package.

I've sent a very minor pull request of the only outstanding issue. Thanks so much @pachamaltese for the amazing work behind the scenes with your own server, and for the seamless package interface. I'll definitely be using this package!

@pachadotdev
Author

@mpadge thanks a lot!! I passed that point somewhere in the middle of the thread. I just rebuilt both vignettes and pkgdown site with your changes to the ots_create_tidy_data Rd section

@maelle
Member

maelle commented Apr 3, 2019

Approved! Thanks @pachamaltese for submitting and @emilyriederer @mpadge for your reviews! 😸

To-dos:

  • Transfer the repo to rOpenSci's "ropensci" GitHub organization under "Settings" in your repo. I have invited you to a team that should allow you to do so. You'll be made admin again once you do.
  • Add the rOpenSci footer to the bottom of your README
    " [![ropensci_footer](https://ropensci.org/public_images/ropensci_footer.png)](https://ropensci.org)"
  • Fix any links in badges for CI and coverage to point to the ropensci URL. We no longer transfer Appveyor projects to ropensci Appveyor account so after transfer of your repo to rOpenSci's "ropensci" GitHub organization the badge should be [![AppVeyor Build Status](https://ci.appveyor.com/api/projects/status/github/ropensci/pkgname?branch=master&svg=true)](https://ci.appveyor.com/project/individualaccount/pkgname).
  • We're starting to roll out software metadata files to all ropensci packages via the Codemeta initiative, see https://github.com/ropensci/codemetar/#codemetar for how to include it in your package, after installing the package - should be easy as running codemetar::write_codemeta() in the root of your package.

Should you want to acknowledge your reviewers in your package DESCRIPTION, you can do so by making them "rev"-type contributors in the Authors@R field (with their consent). More info on this here.

Welcome aboard! We'd also love a blog post about your package, either a short-form intro to it (https://ropensci.org/tech-notes/) or long-form post with more narrative about its development. (https://ropensci.org/blog/). If you are interested, @stefaniebutland will be in touch about content and timing.

We've started putting together a gitbook with our best practice and tips; this chapter starts the 3rd section, which is about guidance for after onboarding. Please tell us what could be improved; the corresponding repo is here.

@pachadotdev
Author

pachadotdev commented Apr 3, 2019

Dear @maelle

I've completed these steps:

  • Transfer the repo to rOpenSci's "ropensci" GitHub organization under "Settings" in your repo. I have invited you to a team that should allow you to do so. You'll be made admin again once you do.
  • Add the rOpenSci footer to the bottom of your README " ropensci_footer"
  • Fix any links in badges for CI and coverage to point to the ropensci URL. We no longer transfer Appveyor projects to the ropensci Appveyor account, so after the transfer of your repo to rOpenSci's "ropensci" GitHub organization the badge should be the AppVeyor Build Status badge.
  • We're starting to roll out software metadata files to all ropensci packages via the Codemeta initiative, see https://github.com/ropensci/codemetar/#codemetar for how to include it in your package, after installing the package - should be easy as running codemetar::write_codemeta() in the root of your package.

Important questions before moving the repository:

  • I also updated all links in DESCRIPTION, Readme and vignettes that point to tradestatistics.github.io/* to ropensci.github.io/*. Is that ok?
  • Besides that, I kept Joshua, Amanda and Jorge as ctb since they gave tons of feedback on previous iterations. Shall I keep them?
  • UN Comtrade is credited as dtc.
  • Shall I add Emily and Mark as ctb?

@maelle
Member

maelle commented Apr 3, 2019

I don't see your repo under the ropensci org, are you sure you've transferred it?

@maelle
Member

maelle commented Apr 3, 2019

Yes to keeping previous contributors over time, and regarding reviewers, you should ask them 😉

@pachadotdev
Author

@emilyriederer @mpadge are you ok with being added as ctb for all the feedback?

@pachadotdev
Author

pachadotdev commented Apr 3, 2019

@maelle not yet, I am cleaning up the ctb entries and pushing again before moving the repository

I don't see your repo under the ropensci org, are you sure you've transferred it?

@pachadotdev
Author

@maelle ready, repository is transferred

@pachadotdev
Author

@maelle The only thing I cannot change is the repo description in github:
(screenshot of the repo header)
It should say https://ropensci.github.io/tradestatistics but I cannot edit that header

@pachadotdev
Author

I also can't change "R package to access our API" to "R package to access Open Trade Statistics API" since the repo has moved

@maelle
Member

maelle commented Apr 3, 2019

@pachamaltese now you're admin again 😁

@maelle
Member

maelle commented Apr 3, 2019

Speaking of pkgdown feel free to use the brand-new rOpenSci style https://ropensci.github.io/rotemplate/ (feedback on the package and its guidance very welcome).

@emilyriederer

Wow, thank you @pachamaltese ! I don't really feel that I deserve to be considered a ctb when all of the hard work was yours. Opinions are cheap 🙂 But very honored to be included and really appreciate it!

@maelle
Member

maelle commented Apr 4, 2019

By the way @pachamaltese we recommend the role "rev" for reviewers since it's more specific than "ctb". Folks can also be both, of course.

To easily add folks to DESCRIPTION I recommend e.g. desc::desc_add_author_gh("emilyriederer", role = "rev").

@maelle
Member

maelle commented Apr 4, 2019

By the way @pachamaltese we recommend the role "rev" for reviewers since it's more specific than "ctb". Folks can also be both, of course.

To easily add folks to DESCRIPTION I recommend e.g. desc::desc_add_author_gh("emilyriederer", role = "rev", comment = "reviewed the package for rOpenSci, see https://github.com/ropensci/onboarding/issues/274") (I'm planning on adding a better helper to rodev that'd add the comment automatically).

@pachadotdev
Author

thanks @maelle !!
I was trying to add


      person(given = "Mark",
             family = "Padgham",
             role = "rev",
             comment = c("reviewed the package for rOpenSci, see https://github.com/ropensci/onboarding/issues/274",
                         ORCID = "0000-0003-2172-5265")),

but comment is limited to a single string, probably I can create a PR to ease that

@maelle
Member

maelle commented Apr 4, 2019

Something like

desc::desc_add_author(given = "Mark",
                      family = "Padgham",
                      role = "rev",
                      comment = "blabla", orcid = "orcidid")

works although the comment section looks ugly.

@mpadge
Member

mpadge commented Apr 4, 2019

Thanks @pachamaltese for adding my name - much appreciated! You've done an amazing job with the package. I'm looking forward to reading about it in the rOpenSci blog.

@pachadotdev
Author

Hi @stefaniebutland ! Of course I'd like to write a long-form post. My email is mvargas@dcc.uchile.cl and I'm also in ROpenSci slack to be contacted.

Welcome aboard! We'd also love a blog post about your package, either a short-form intro to it (https://ropensci.org/tech-notes/) or long-form post with more narrative about its development. (https://ropensci.org/blog/). If you are interested, @stefaniebutland will be in touch about content and timing.

@maelle maelle closed this as completed Apr 5, 2019
@stefaniebutland
Member

Great to hear! I'll follow up by email
