
BUG: Empty object column writes to parquet as INT32 instead of BINARY L:STRING #37083

Open
2 of 3 tasks
raginjason opened this issue Oct 12, 2020 · 5 comments
Labels
Bug IO Parquet parquet, feather

Comments

@raginjason

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • (optional) I have confirmed this bug exists on the master branch of pandas.


Code Sample, a copy-pastable example

import pandas as pd

pd.DataFrame({"nullable_col": ["some_val"]}, dtype="object").to_parquet("non-null-obj.parquet")

pd.DataFrame({"nullable_col": []}, dtype="object").to_parquet("null-obj.parquet")

Once that runs, parquet-tools illustrates the issue. For the non-empty frame I get the expected datatype of OPTIONAL BINARY L:STRING, as in:

$ parquet-tools meta non-null-obj.parquet | grep nullable_col

extra:        pandas = {"index_columns": [{"kind": "range", "name": null, "start": 0, "stop": 1, "step": 1}], "column_indexes": [{"name": null, "field_name": null, "pandas_type": "unicode", "numpy_type": "object", "metadata": {"encoding": "UTF-8"}}], "columns": [{"name": "nullable_col", "field_name": "nullable_col", "pandas_type": "unicode", "numpy_type": "object", "metadata": null}], "creator": {"library": "pyarrow", "version": "1.0.1"}, "pandas_version": "1.1.3"} 
nullable_col: OPTIONAL BINARY L:STRING R:0 D:1
nullable_col:  BINARY SNAPPY DO:4 FPO:32 SZ:80/76/0.95 VC:1 ENC:PLAIN,PLAIN_DICTIONARY,RLE ST:[min: some_val, max: some_val, num_nulls: 0]

But the empty frame instead gets a datatype of OPTIONAL INT32, as in:

$ parquet-tools meta null-obj.parquet | grep nullable_col

extra:        pandas = {"index_columns": [{"kind": "range", "name": null, "start": 0, "stop": 0, "step": 1}], "column_indexes": [{"name": null, "field_name": null, "pandas_type": "unicode", "numpy_type": "object", "metadata": {"encoding": "UTF-8"}}], "columns": [{"name": "nullable_col", "field_name": "nullable_col", "pandas_type": "empty", "numpy_type": "object", "metadata": null}], "creator": {"library": "pyarrow", "version": "1.0.1"}, "pandas_version": "1.1.3"} 
nullable_col: OPTIONAL INT32 R:0 D:1
nullable_col:  INT32 SNAPPY DO:4 FPO:0 SZ:15/14/0.93 VC:0 ENC:PLAIN,RLE,PLAIN_DICTIONARY ST:[no stats for this column]

Problem description

Writing out a column of pandas dtype object with no values produces a parquet type of INT32, when I would expect BINARY L:STRING or similar. I have a daily process that writes a set of records to parquet; on days when an object column has no values, its parquet datatype changes to INT32, which breaks my process because the schema has changed relative to previous days.

Expected Output

Output of pd.show_versions()

INSTALLED VERSIONS

commit : db08276
python : 3.7.2.final.0
python-bits : 64
OS : Darwin
OS-release : 19.6.0
Version : Darwin Kernel Version 19.6.0: Thu Jun 18 20:49:00 PDT 2020; root:xnu-6153.141.1~1/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : None
LOCALE : en_US.UTF-8

pandas : 1.1.3
numpy : 1.19.2
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.3
setuptools : 50.3.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 1.0.1
pytables : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None

@raginjason raginjason added Bug Needs Triage Issue that has not been reviewed by a pandas team member labels Oct 12, 2020
@jorisvandenbossche jorisvandenbossche added the IO Parquet parquet, feather label Oct 13, 2020
@raginjason
Author

For other people running into a similar issue: I was able to work around this by mapping the object-typed columns in question to StringDtype(). Columns of this type appear to map to BINARY L:STRING in Parquet regardless of their contents.

This still seems like a bug to me, though. I can understand that there may be a need to default types, but I don't see how INT32 is a reasonable default for the catch-all pandas type of object.
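A minimal sketch of the workaround described above, assuming the column is meant to hold strings:

```python
import pandas as pd

# Cast the object column to pandas' dedicated string dtype before writing;
# empty or not, pyarrow then maps it to BINARY (String) in parquet.
df = pd.DataFrame({"nullable_col": []}, dtype="object")
df["nullable_col"] = df["nullable_col"].astype("string")  # StringDtype

assert df["nullable_col"].dtype == pd.StringDtype()
# df.to_parquet("null-obj.parquet")  # now writes BINARY L:STRING
```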

@jelther

jelther commented Nov 10, 2020

This is happening in our processes as well.
We have some Decimal values that we map to object columns, but when we try to read those files on Spark 3.0 it breaks our pipelines.

@jbrockmendel jbrockmendel removed the Needs Triage Issue that has not been reviewed by a pandas team member label Jun 6, 2021
@BenjMaq

BenjMaq commented Jun 9, 2021

Hey @raginjason, I'm facing the same issue. Using StringDtype, it writes correctly as BINARY; however, my NULLs are being written as 'None' (i.e. the string representation), not as real NULL values. Did you face this as well? Wondering how you dealt with that. Thanks a lot!
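One plausible cause of the 'None' strings (an assumption, not confirmed in this thread): casting with .astype(str) stringifies missing values, while .astype("string") keeps them as pd.NA, which parquet writers can store as real NULLs:

```python
import pandas as pd

df = pd.DataFrame({"col": ["a", None]}, dtype="object")

stringified = df["col"].astype(str)    # None becomes the literal string "None"
nullable = df["col"].astype("string")  # None becomes pd.NA (a real missing value)

assert stringified.iloc[1] == "None"
assert bool(nullable.isna().iloc[1])
```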

@MaStFAU

MaStFAU commented Mar 23, 2023

I can confirm this is still happening using pandas 1.4.2

@reyesjx7

reyesjx7 commented Aug 6, 2024

Confirming this is still happening in Pandas 2.2.2
