
DOC: Accessing files from an S3 bucket. (#23639)
myles authored and jreback committed Nov 14, 2018
1 parent 886b040 commit 6f8c6e1
Showing 1 changed file with 8 additions and 1 deletion.
9 changes: 8 additions & 1 deletion doc/source/io.rst
@@ -1580,12 +1580,19 @@ You can pass in a URL to a CSV file:

    df = pd.read_csv('https://download.bls.gov/pub/time.series/cu/cu.item',
                     sep='\t')

-S3 URLs are handled as well:
+S3 URLs are handled as well but require installing the `S3Fs
+<https://pypi.org/project/s3fs/>`_ library:

 .. code-block:: python

    df = pd.read_csv('s3://pandas-test/tips.csv')

+If your S3 bucket requires credentials, you will need to set them as environment
+variables or in the ``~/.aws/credentials`` config file; refer to the `S3Fs
+documentation on credentials
+<https://s3fs.readthedocs.io/en/latest/#credentials>`_.

 Writing out Data
 ''''''''''''''''
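As a quick illustration of the workflow the added text documents, here is a minimal sketch. It assumes the ``s3fs`` package is installed and that the bucket is readable with whatever credentials are currently active:

    import pandas as pd

    # pandas hands s3:// URLs to the s3fs library under the hood, so this
    # only works if s3fs is importable (e.g. after ``pip install s3fs``).
    df = pd.read_csv('s3://pandas-test/tips.csv')
    print(df.head())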
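Similarly, a sketch of the two credential options the added paragraph mentions: environment variables or the ``~/.aws/credentials`` file. The variable names and file layout are standard AWS conventions, and the bucket name below is a placeholder, not something from this commit:

    import os
    import pandas as pd

    # Option 1: the standard AWS environment variables, which s3fs picks up.
    os.environ['AWS_ACCESS_KEY_ID'] = '<your-access-key-id>'          # placeholder
    os.environ['AWS_SECRET_ACCESS_KEY'] = '<your-secret-access-key>'  # placeholder

    df = pd.read_csv('s3://my-private-bucket/tips.csv')  # hypothetical bucket name

    # Option 2: put the same keys in the ~/.aws/credentials config file instead:
    #
    #   [default]
    #   aws_access_key_id = <your-access-key-id>
    #   aws_secret_access_key = <your-secret-access-key>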
