
BUG: idxmin and idxmax fail for groupby of decimal columns #40685

Closed
2 of 3 tasks
vyasr opened this issue Mar 29, 2021 · 3 comments · Fixed by #54109
Labels
Bug · Dtype Conversions (Unexpected or buggy dtype conversions) · Groupby · Nuisance Columns (Identifying/Dropping nuisance columns in reductions, groupby.add, DataFrame.apply)

Comments

vyasr (Contributor) commented Mar 29, 2021

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • (optional) I have confirmed this bug exists on the master branch of pandas.


Code Sample, a copy-pastable example

import pandas as pd
from decimal import Decimal
# from pandas.tests.extension.decimal import DecimalArray

df = pd.DataFrame({
    'idx': [0, 1],
    'x': [Decimal('8.68'), Decimal('42.23')],
    'y': [Decimal('7.11'), Decimal('79.61')],
    # 'x': DecimalArray([Decimal('8.68'), Decimal('42.23')]),
    # 'y': DecimalArray([Decimal('7.11'), Decimal('79.61')]),
})

print(df.groupby("idx", sort=True).agg('idxmin'))
# Empty DataFrame
# Columns: []
# Index: [0, 1]

Problem description

When a column contains Decimal objects, the idxmin and idxmax aggregations return empty output.

Expected Output

The expected output is

     x  y
idx
0    0  0
1    1  1

If I uncomment the commented lines in the above example (i.e. place the Decimal objects inside a DecimalArray), it works, but AFAICT that isn't really part of the intended public pandas API. Moreover, the issue is not exclusively due to singleton groups: I can also reproduce it using the following DataFrame:

df = pd.DataFrame({
    'idx': [0, 1, 0, 1],
    'x': [Decimal('8.68'), Decimal('42.23'), Decimal('8.69'), Decimal('42.24')],
    'y': [Decimal('7.11'), Decimal('79.61'), Decimal('7.12'), Decimal('79.62')],
})

I don't think this is related to #39098 because this only occurs for idxmin or idxmax, not aggregations like sum. That is the only obviously related issue I could find.

Output of pd.show_versions()

I've tested on two separate systems.

Docker Ubuntu container (running on a host Ubuntu machine)

INSTALLED VERSIONS

commit : f2c8480
python : 3.7.10.final.0
python-bits : 64
OS : Linux
OS-release : 4.15.0-76-generic
Version : #86-Ubuntu SMP Fri Jan 17 17:24:28 UTC 2020
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : None
LOCALE : en_US.UTF-8

pandas : 1.2.3
numpy : 1.20.1
pytz : 2021.1
dateutil : 2.8.1
pip : 21.0.1
setuptools : 52.0.0.post20210125
Cython : 0.29.22
pytest : 6.2.2
hypothesis : 6.3.4
sphinx : 3.5.1
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.3
IPython : 7.21.0
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : 0.8.7
fastparquet : None
gcsfs : None
matplotlib : 3.3.4
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 1.0.1
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : 0.53.0

conda Python on a Mac

INSTALLED VERSIONS

commit : f2c8480
python : 3.9.2.final.0
python-bits : 64
OS : Darwin
OS-release : 19.6.0
Version : Darwin Kernel Version 19.6.0: Tue Jan 12 22:13:05 PST 2021; root:xnu-6153.141.16~1/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8

pandas : 1.2.3
numpy : 1.20.1
pytz : 2021.1
dateutil : 2.8.1
pip : 21.0.1
setuptools : 49.6.0.post20210108
Cython : 0.29.22
pytest : 6.2.2
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.3
IPython : 7.21.0
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None

@vyasr vyasr added Bug Needs Triage Issue that has not been reviewed by a pandas team member labels Mar 29, 2021
jreback (Contributor) commented Mar 30, 2021

Decimal is not a first-class type.

If you want to use DecimalArray then that should work; otherwise these are just like regular objects, meaning almost no support, and you must use .apply.

You can use dtype='decimal' on construction.

There isn't any automatic inference based on Decimal (currently).
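As a sketch of the .apply route described above (assuming object-dtype Decimal columns; the helper name `idxmin_object` is mine, not a pandas API), something like this avoids the empty result:

```python
import pandas as pd
from decimal import Decimal

df = pd.DataFrame({
    'idx': [0, 1, 0, 1],
    'x': [Decimal('8.68'), Decimal('42.23'), Decimal('8.69'), Decimal('42.24')],
})

def idxmin_object(s: pd.Series):
    # Compare the Decimal values with Python's own ordering and
    # return the index label of the smallest one.
    return min(s.items(), key=lambda kv: kv[1])[0]

result = df.groupby('idx')['x'].apply(idxmin_object)
print(result)
```

This sidesteps the groupby machinery's dtype handling entirely, at the cost of a per-group Python-level pass.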

@rhshadrach rhshadrach added Dtype Conversions Unexpected or buggy dtype conversions Groupby labels Mar 30, 2021
vyasr (Contributor, Author) commented Mar 30, 2021

That makes sense to me; I expect severe performance degradation and limited support when working with any pandas DataFrame/Series containing arbitrary 'object' dtype elements. However, in such cases pandas should just fall back to (something like) calling the comparators of the individual elements, correct? The min/max groupby aggregations work fine, while the idxmin/idxmax aggregations don't. Naively, that suggests something is off in the treatment of these aggregators specifically for object arrays.

I would expect unsupported operations on object arrays (if that's what these are) either to omit the invalid columns from the output entirely, rather than returning empty columns, or to raise an exception. I believe those are the usual behaviors; for instance, calling df.std() on a DataFrame composed entirely of string columns returns an empty Series because std is not a valid string operation.
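The point about element-wise comparators can be illustrated without pandas at all: Decimal defines a total ordering, so both a Python-level min and an "argmin" over object values are well defined. This is a minimal stdlib sketch of that claim, not pandas internals:

```python
from decimal import Decimal

vals = [Decimal('8.68'), Decimal('42.23'), Decimal('8.69')]

# min works because Decimal implements rich comparisons (__lt__, __gt__, ...)
assert min(vals) == Decimal('8.68')

# an argmin is equally well defined: compare positions by the value they hold
argmin = min(range(len(vals)), key=vals.__getitem__)
assert argmin == 0
```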

@jbrockmendel jbrockmendel added the Nuisance Columns Identifying/Dropping nuisance columns in reductions, groupby.add, DataFrame.apply label Apr 3, 2021
@lithomas1 lithomas1 removed the Needs Triage Issue that has not been reviewed by a pandas team member label Apr 3, 2021
jbrockmendel (Member) commented

This now raises instead of silently dropping the column, because nuisance columns are no longer dropped. The request then boils down to supporting argmax for object dtypes; xref #18021.
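For what it's worth, NumPy's argmin/argmax already fall back to Python-level comparisons for object arrays, so the gap tracked in #18021 is on the pandas side. A quick sketch (mine, not from the thread):

```python
import numpy as np
from decimal import Decimal

arr = np.array([Decimal('8.68'), Decimal('42.23')], dtype=object)

# object-dtype argmax/argmin compare elements with their own rich comparisons
largest = arr.argmax()   # position of the largest Decimal
smallest = arr.argmin()  # position of the smallest Decimal
print(largest, smallest)
```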
