
Categorical label reorder and export #963

Merged: 19 commits, Jul 20, 2021
Conversation

jperez999 (Contributor)

This PR tackles two issues:
- Export count to file
- Order categorical labels in decreasing frequency order
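As a rough illustration of the second item, a frequency-ordered encoding can be sketched with pandas (hypothetical helper name, not the NVTabular implementation):

```python
import pandas as pd

def frequency_encode(series: pd.Series):
    # Order uniques by decreasing frequency so the most common
    # category receives the smallest integer code.
    uniques = series.value_counts().index
    mapping = {cat: code for code, cat in enumerate(uniques)}
    return series.map(mapping), uniques

codes, uniques = frequency_encode(pd.Series(["a", "b", "b", "c", "b", "a"]))
# "b" (3 occurrences) -> 0, "a" (2) -> 1, "c" (1) -> 2
```

The same uniques list, written out in this order, is what gets exported alongside the counts.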

@benfred (Member) left a comment:

I'm not convinced the 'gb.sort_values' in mid_level_groupby is sufficient here - I believe we call sort_values again on the column name in _write_uniques, which will undo this. Can you add a unit test to make sure the output is still sorted by frequency?

Also, we should probably only do the sorting by frequency when the search_sorted parameter isn't set; otherwise we won't return correct results in that case (or, alternatively, we could remove the search_sorted option).
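The search_sorted concern can be shown in isolation: numpy's searchsorted assumes its first argument is sorted by value, so looking up codes against frequency-ordered uniques gives wrong answers (illustrative only, not NVTabular code):

```python
import numpy as np

values = np.array(["a", "b", "b", "c"])

# Lexicographically sorted uniques: searchsorted recovers the right codes.
sorted_uniques = np.array(["a", "b", "c"])
ok = np.searchsorted(sorted_uniques, values)   # correct: [0, 1, 1, 2]

# Frequency-ordered uniques ("b" is most common): the sorted-array
# precondition is violated, so the results no longer identify the labels.
freq_uniques = np.array(["b", "a", "c"])
bad = np.searchsorted(freq_uniques, values)    # not the intended [1, 0, 0, 2]
```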

nvtabular/ops/categorify.py (outdated review thread)
@nvidia-merlin-bot (Contributor)

CI Results
GitHub pull request #963 of commit b0d0d3c5db71df2be3f9cc21842c4e368d0e107a, no merge conflicts.
Running as SYSTEM
Setting status of b0d0d3c5db71df2be3f9cc21842c4e368d0e107a to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2842/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/963/*:refs/remotes/origin/pr/963/* # timeout=10
 > git rev-parse b0d0d3c5db71df2be3f9cc21842c4e368d0e107a^{commit} # timeout=10
Checking out Revision b0d0d3c5db71df2be3f9cc21842c4e368d0e107a (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f b0d0d3c5db71df2be3f9cc21842c4e368d0e107a # timeout=10
Commit message: "fix the tests to reflect frequency based encoding"
 > git rev-list --no-walk 297f5ed8b0a8d91bc9fcee5257fd4f1b90e17953 # timeout=10
First time build. Skipping changelog.
[nvtabular_tests] $ /bin/bash /tmp/jenkins5025580695617442127.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Installing collected packages: nvtabular
  Attempting uninstall: nvtabular
    Found existing installation: nvtabular 0.5.3+45.g9f25d42
    Can't uninstall 'nvtabular'. No files were found to uninstall.
  Running setup.py develop for nvtabular
Successfully installed nvtabular-0.5.3+53.gb0d0d3c
Running black --check
All done! ✨ 🍰 ✨
107 files would be left unchanged.
Running flake8
Running isort
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/images
  warn(f"Likely recursive symlink detected to {resolved_path}")
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/examples/scaling-criteo/imgs
  warn(f"Likely recursive symlink detected to {resolved_path}")
Skipped 1 files
Running bandit
Running pylint
************* Module bench.datasets.tools.train_hugectr
bench/datasets/tools/train_hugectr.py:28:13: I1101: Module 'hugectr' has no 'solver_parser_helper' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)
bench/datasets/tools/train_hugectr.py:41:16: I1101: Module 'hugectr' has no 'optimizer' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)

Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)

Running flake8-nb
Building docs
make: Entering directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
2021-07-18 14:10:46.892547: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-18 14:10:48.193442: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2021-07-18 14:10:48.194555: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:07:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-07-18 14:10:48.195621: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 1 with properties:
pciBusID: 0000:08:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-07-18 14:10:48.195651: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-18 14:10:48.195700: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11
2021-07-18 14:10:48.195732: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11
2021-07-18 14:10:48.195764: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10
2021-07-18 14:10:48.195798: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10
2021-07-18 14:10:48.195843: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11
2021-07-18 14:10:48.195875: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11
2021-07-18 14:10:48.195912: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8
2021-07-18 14:10:48.199951: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0, 1
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.6) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
make: Leaving directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /var/jenkins_home/workspace/nvtabular_tests/nvtabular, configfile: pyproject.toml
plugins: cov-2.12.1, forked-1.3.0, xdist-2.3.0
collected 1108 items

tests/unit/test_column_group.py .F [ 0%]
tests/unit/test_column_similarity.py ........................ [ 2%]
tests/unit/test_cpu_workflow.py ...... [ 2%]
tests/unit/test_dask_nvt.py ............................................ [ 6%]
..................................................................... [ 13%]
tests/unit/test_dataloader_backend.py . [ 13%]
tests/unit/test_io.py .................................................. [ 17%]
....................................................................ssss [ 24%]
ssss.................................................. [ 29%]
tests/unit/test_notebooks.py ...... [ 29%]
tests/unit/test_ops.py ................................................. [ 34%]
........................................................................ [ 40%]
.................................................FFFFFFFFFF............. [ 47%]
........................................................................ [ 53%]
........................................................................ [ 60%]
........................................................................ [ 66%]
. [ 66%]
tests/unit/test_s3.py . [ 66%]
tests/unit/test_tf_dataloader.py ....................................... [ 70%]
.................................s [ 73%]
tests/unit/test_tf_layers.py ........................................... [ 77%]
................................... [ 80%]
tests/unit/test_tools.py ...................... [ 82%]
tests/unit/test_torch_dataloader.py ..................Build timed out (after 40 minutes). Marking the build as failed.
Terminated
Build was aborted
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://github.com/gitapi/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins3779715644031127369.sh

@nvidia-merlin-bot (Contributor)

CI Results
GitHub pull request #963 of commit 94c06b96dc74d216f52717010412295c0f154b6c, no merge conflicts.
Running as SYSTEM
Setting status of 94c06b96dc74d216f52717010412295c0f154b6c to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2846/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/963/*:refs/remotes/origin/pr/963/* # timeout=10
 > git rev-parse 94c06b96dc74d216f52717010412295c0f154b6c^{commit} # timeout=10
Checking out Revision 94c06b96dc74d216f52717010412295c0f154b6c (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 94c06b96dc74d216f52717010412295c0f154b6c # timeout=10
Commit message: "more tests changes now failing less than 10 tests"
 > git rev-list --no-walk c5a82bef60fe1d123f389440f5c06ba383c03b9a # timeout=10
[nvtabular_tests] $ /bin/bash /tmp/jenkins4207779997629765862.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Installing collected packages: nvtabular
  Running setup.py develop for nvtabular
Successfully installed nvtabular-0.5.3+54.g94c06b9
Running black --check
All done! ✨ 🍰 ✨
107 files would be left unchanged.
Running flake8
Running isort
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/images
  warn(f"Likely recursive symlink detected to {resolved_path}")
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/examples/scaling-criteo/imgs
  warn(f"Likely recursive symlink detected to {resolved_path}")
Skipped 1 files
Running bandit
Running pylint
************* Module bench.datasets.tools.train_hugectr
bench/datasets/tools/train_hugectr.py:28:13: I1101: Module 'hugectr' has no 'solver_parser_helper' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)
bench/datasets/tools/train_hugectr.py:41:16: I1101: Module 'hugectr' has no 'optimizer' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)

Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)

Running flake8-nb
Building docs
make: Entering directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
2021-07-19 05:06:25.328047: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-19 05:06:26.614851: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2021-07-19 05:06:26.616028: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:07:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-07-19 05:06:26.617143: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 1 with properties:
pciBusID: 0000:08:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-07-19 05:06:26.617174: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-19 05:06:26.617227: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11
2021-07-19 05:06:26.617265: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11
2021-07-19 05:06:26.617302: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10
2021-07-19 05:06:26.617337: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10
2021-07-19 05:06:26.617387: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11
2021-07-19 05:06:26.617423: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11
2021-07-19 05:06:26.617463: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8
2021-07-19 05:06:26.621787: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0, 1
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.6) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
make: Leaving directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /var/jenkins_home/workspace/nvtabular_tests/nvtabular, configfile: pyproject.toml
plugins: cov-2.12.1, forked-1.3.0, xdist-2.3.0
collected 1108 items

tests/unit/test_column_group.py .F [ 0%]
tests/unit/test_column_similarity.py ........................ [ 2%]
tests/unit/test_cpu_workflow.py ...... [ 2%]
tests/unit/test_dask_nvt.py ............................................ [ 6%]
..................................................................... [ 13%]
tests/unit/test_dataloader_backend.py . [ 13%]
tests/unit/test_io.py .................................................. [ 17%]
....................................................................ssss [ 24%]
ssss.................................................. [ 29%]
tests/unit/test_notebooks.py ...... [ 29%]
tests/unit/test_ops.py ................................................. [ 34%]
........................................................................ [ 40%]
...................................................F...F.FF............. [ 47%]
........................................................................ [ 53%]
........................................................................ [ 60%]
........................................................................ [ 66%]
. [ 66%]
tests/unit/test_s3.py . [ 66%]
tests/unit/test_tf_dataloader.py ....................................... [ 70%]
.................................s [ 73%]
tests/unit/test_tf_layers.py ........................................... [ 77%]
................................... [ 80%]
tests/unit/test_tools.py ...................... [ 82%]
tests/unit/test_torch_dataloader.py .................................... [ 85%]
.............................................. [ 89%]
tests/unit/test_triton_inference.py ssss.................. [ 91%]
tests/unit/test_workflow.py ..........................F................. [ 95%]
...........................F..................F. [100%]

=================================== FAILURES ===================================
___________________________ test_nested_column_group ___________________________

def test_nested_column_group():
    df = cudf.DataFrame(
        {
            "geo": ["US>CA", "US>NY", "CA>BC", "CA>ON"],
            "user": ["User_A", "User_A", "User_A", "User_B"],
        }
    )

    country = (
        ColumnGroup(["geo"]) >> (lambda col: col.str.slice(0, 2)) >> Rename(postfix="_country")
    )

    # make sure we can do a 'combo' categorify (cross based) of country+user
    # as well as categorifying the country and user columns on their own
    cats = [country + "user"] + country + "user" >> Categorify(encode_type="combo")

    workflow = Workflow(cats)
  df_out = workflow.fit_transform(Dataset(df)).to_ddf().compute(scheduler="synchronous")

tests/unit/test_column_group.py:45:


nvtabular/workflow.py:185: in fit_transform
self.fit(dataset)
nvtabular/workflow.py:152: in fit
results = dask.compute(stats, scheduler="synchronous")[0]
/usr/local/lib/python3.8/dist-packages/dask/base.py:567: in compute
results = schedule(dsk, keys, **kwargs)
/usr/local/lib/python3.8/dist-packages/dask/local.py:560: in get_sync
return get_async(
/usr/local/lib/python3.8/dist-packages/dask/local.py:503: in get_async
for key, res_info, failed in queue_get(queue).result():
/usr/lib/python3.8/concurrent/futures/_base.py:437: in result
return self.__get_result()
/usr/lib/python3.8/concurrent/futures/_base.py:389: in __get_result
raise self._exception
/usr/local/lib/python3.8/dist-packages/dask/local.py:545: in submit
fut.set_result(fn(*args, **kwargs))
/usr/local/lib/python3.8/dist-packages/dask/local.py:237: in batch_execute_tasks
return [execute_task(*a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:237: in <listcomp>
return [execute_task(*a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:228: in execute_task
result = pack_exception(e, dumps)
/usr/local/lib/python3.8/dist-packages/dask/local.py:223: in execute_task
result = _execute_task(task, data)
/usr/local/lib/python3.8/dist-packages/dask/core.py:121: in _execute_task
return func(*(_execute_task(a, cache) for a in args))
/usr/lib/python3.8/contextlib.py:75: in inner
return func(*args, **kwds)
nvtabular/ops/categorify.py:827: in _write_uniques
df = df.sort_values(name_count, ascending=False, ignore_index=True)
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:4021: in sort_values
self[by].argsort(ascending=ascending, na_position=na_position),
/usr/lib/python3.8/contextlib.py:75: in inner
return func(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:723: in __getitem__
return self._get_columns_by_label(arg, downcast=True)
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1449: in _get_columns_by_label
new_data = super()._get_columns_by_label(labels, downcast)
/usr/local/lib/python3.8/dist-packages/cudf/core/frame.py:630: in _get_columns_by_label
return self._data.select_by_label(labels)
/usr/local/lib/python3.8/dist-packages/cudf/core/column_accessor.py:346: in select_by_label
return self._select_by_label_grouped(key)


self = ColumnAccessor(multiindex=False, level_names=[None])
geo_country: object
user: object
geo_country_user_count: int64
key = 'geo_country_count'

def _select_by_label_grouped(self, key: Any) -> ColumnAccessor:
  result = self._grouped_data[key]

E KeyError: 'geo_country_count'

/usr/local/lib/python3.8/dist-packages/cudf/core/column_accessor.py:408: KeyError
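The 'combo' failures above share one pattern: the groupby produces a count column named from the joined column names (e.g. 'Author_Engaging User_count'), while _write_uniques looks up a single-column name ('Author_count'). A minimal sketch of the naming rule, using a hypothetical helper rather than the actual categorify.py code:

```python
def count_column_name(columns):
    # Hypothetical helper: for a multi-column ("combo") group the groupby
    # emits one count column keyed on the joined column names, so looking
    # up "<first_col>_count" raises the KeyError seen in the traces.
    if isinstance(columns, (list, tuple)):
        return "_".join(columns) + "_count"
    return f"{columns}_count"
```

For example, count_column_name(["Author", "Engaging User"]) yields 'Author_Engaging User_count', matching the column shown in the ColumnAccessor repr above.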
________________ test_categorify_multi[False-combo-cat_names0] _________________

tmpdir = local('/tmp/pytest-of-jenkins/pytest-4/test_categorify_multi_False_co0')
cat_names = [['Author', 'Engaging User']], kind = 'combo', cpu = False

@pytest.mark.parametrize("cat_names", [[["Author", "Engaging User"]], ["Author", "Engaging User"]])
@pytest.mark.parametrize("kind", ["joint", "combo"])
@pytest.mark.parametrize("cpu", [False, True])
def test_categorify_multi(tmpdir, cat_names, kind, cpu):
    df = pd.DataFrame(
        {
            "Author": ["User_A", "User_E", "User_B", "User_C"],
            "Engaging User": ["User_B", "User_B", "User_A", "User_D"],
            "Post": [1, 2, 3, 4],
        }
    )

    label_name = ["Post"]

    cats = cat_names >> ops.Categorify(out_path=str(tmpdir), encode_type=kind)

    workflow = nvt.Workflow(cats + label_name)

    df_out = (
      workflow.fit_transform(nvt.Dataset(df, cpu=cpu)).to_ddf().compute(scheduler="synchronous")
    )

tests/unit/test_ops.py:500:


nvtabular/workflow.py:185: in fit_transform
self.fit(dataset)
nvtabular/workflow.py:152: in fit
results = dask.compute(stats, scheduler="synchronous")[0]
/usr/local/lib/python3.8/dist-packages/dask/base.py:567: in compute
results = schedule(dsk, keys, **kwargs)
/usr/local/lib/python3.8/dist-packages/dask/local.py:560: in get_sync
return get_async(
/usr/local/lib/python3.8/dist-packages/dask/local.py:503: in get_async
for key, res_info, failed in queue_get(queue).result():
/usr/lib/python3.8/concurrent/futures/_base.py:437: in result
return self.__get_result()
/usr/lib/python3.8/concurrent/futures/_base.py:389: in __get_result
raise self._exception
/usr/local/lib/python3.8/dist-packages/dask/local.py:545: in submit
fut.set_result(fn(*args, **kwargs))
/usr/local/lib/python3.8/dist-packages/dask/local.py:237: in batch_execute_tasks
return [execute_task(*a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:237: in <listcomp>
return [execute_task(*a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:228: in execute_task
result = pack_exception(e, dumps)
/usr/local/lib/python3.8/dist-packages/dask/local.py:223: in execute_task
result = _execute_task(task, data)
/usr/local/lib/python3.8/dist-packages/dask/core.py:121: in _execute_task
return func(*(_execute_task(a, cache) for a in args))
/usr/lib/python3.8/contextlib.py:75: in inner
return func(*args, **kwds)
nvtabular/ops/categorify.py:827: in _write_uniques
df = df.sort_values(name_count, ascending=False, ignore_index=True)
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:4021: in sort_values
self[by].argsort(ascending=ascending, na_position=na_position),
/usr/lib/python3.8/contextlib.py:75: in inner
return func(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:723: in __getitem__
return self._get_columns_by_label(arg, downcast=True)
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1449: in _get_columns_by_label
new_data = super()._get_columns_by_label(labels, downcast)
/usr/local/lib/python3.8/dist-packages/cudf/core/frame.py:630: in _get_columns_by_label
return self._data.select_by_label(labels)
/usr/local/lib/python3.8/dist-packages/cudf/core/column_accessor.py:346: in select_by_label
return self._select_by_label_grouped(key)


self = ColumnAccessor(multiindex=False, level_names=[None])
Author: object
Engaging User: object
Author_Engaging User_count: int64
key = 'Author_count'

def _select_by_label_grouped(self, key: Any) -> ColumnAccessor:
  result = self._grouped_data[key]

E KeyError: 'Author_count'

/usr/local/lib/python3.8/dist-packages/cudf/core/column_accessor.py:408: KeyError
_________________ test_categorify_multi[True-combo-cat_names0] _________________

tmpdir = local('/tmp/pytest-of-jenkins/pytest-4/test_categorify_multi_True_com0')
cat_names = [['Author', 'Engaging User']], kind = 'combo', cpu = True

@pytest.mark.parametrize("cat_names", [[["Author", "Engaging User"]], ["Author", "Engaging User"]])
@pytest.mark.parametrize("kind", ["joint", "combo"])
@pytest.mark.parametrize("cpu", [False, True])
def test_categorify_multi(tmpdir, cat_names, kind, cpu):
    df = pd.DataFrame(
        {
            "Author": ["User_A", "User_E", "User_B", "User_C"],
            "Engaging User": ["User_B", "User_B", "User_A", "User_D"],
            "Post": [1, 2, 3, 4],
        }
    )

    label_name = ["Post"]

    cats = cat_names >> ops.Categorify(out_path=str(tmpdir), encode_type=kind)

    workflow = nvt.Workflow(cats + label_name)

    df_out = (
      workflow.fit_transform(nvt.Dataset(df, cpu=cpu)).to_ddf().compute(scheduler="synchronous")
    )

tests/unit/test_ops.py:500:


nvtabular/workflow.py:185: in fit_transform
self.fit(dataset)
nvtabular/workflow.py:152: in fit
results = dask.compute(stats, scheduler="synchronous")[0]
/usr/local/lib/python3.8/dist-packages/dask/base.py:567: in compute
results = schedule(dsk, keys, **kwargs)
/usr/local/lib/python3.8/dist-packages/dask/local.py:560: in get_sync
return get_async(
/usr/local/lib/python3.8/dist-packages/dask/local.py:503: in get_async
for key, res_info, failed in queue_get(queue).result():
/usr/lib/python3.8/concurrent/futures/_base.py:437: in result
return self.__get_result()
/usr/lib/python3.8/concurrent/futures/_base.py:389: in __get_result
raise self._exception
/usr/local/lib/python3.8/dist-packages/dask/local.py:545: in submit
fut.set_result(fn(*args, **kwargs))
/usr/local/lib/python3.8/dist-packages/dask/local.py:237: in batch_execute_tasks
return [execute_task(*a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:237: in <listcomp>
return [execute_task(*a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:228: in execute_task
result = pack_exception(e, dumps)
/usr/local/lib/python3.8/dist-packages/dask/local.py:223: in execute_task
result = _execute_task(task, data)
/usr/local/lib/python3.8/dist-packages/dask/core.py:121: in _execute_task
return func(*(_execute_task(a, cache) for a in args))
/usr/lib/python3.8/contextlib.py:75: in inner
return func(*args, **kwds)
nvtabular/ops/categorify.py:827: in _write_uniques
df = df.sort_values(name_count, ascending=False, ignore_index=True)
/usr/local/lib/python3.8/dist-packages/pandas/core/frame.py:5298: in sort_values
k = self._get_label_or_level_values(by, axis=axis)


self = Author Engaging User Author_Engaging User_count
0 User_A User_B 1
0 User_B ... 1
1 User_C User_D 1
0 User_E User_B 1
key = 'Author_count', axis = 0

def _get_label_or_level_values(self, key: str, axis: int = 0) -> np.ndarray:
    """
    Return a 1-D array of values associated with `key`, a label or level
    from the given `axis`.

    Retrieval logic:
      - (axis=0): Return column values if `key` matches a column label.
        Otherwise return index level values if `key` matches an index
        level.
      - (axis=1): Return row values if `key` matches an index label.
        Otherwise return column level values if 'key' matches a column
        level

    Parameters
    ----------
    key: str
        Label or level name.
    axis: int, default 0
        Axis that levels are associated with (0 for index, 1 for columns)

    Returns
    -------
    values: np.ndarray

    Raises
    ------
    KeyError
        if `key` matches neither a label nor a level
    ValueError
        if `key` matches multiple labels
    FutureWarning
        if `key` is ambiguous. This will become an ambiguity error in a
        future version
    """
    axis = self._get_axis_number(axis)
    other_axes = [ax for ax in range(self._AXIS_LEN) if ax != axis]

    if self._is_label_reference(key, axis=axis):
        self._check_label_or_level_ambiguity(key, axis=axis)
        values = self.xs(key, axis=other_axes[0])._values
    elif self._is_level_reference(key, axis=axis):
        values = self.axes[axis].get_level_values(key)._values
    else:
      raise KeyError(key)

E KeyError: 'Author_count'

/usr/local/lib/python3.8/dist-packages/pandas/core/generic.py:1563: KeyError
______________________ test_categorify_multi_combo[False] ______________________

tmpdir = local('/tmp/pytest-of-jenkins/pytest-4/test_categorify_multi_combo_Fa0')
cpu = False

@pytest.mark.parametrize("cpu", [False, True])
def test_categorify_multi_combo(tmpdir, cpu):
    cat_names = [["Author", "Engaging User"], ["Author"], "Engaging User"]
    kind = "combo"
    df = pd.DataFrame(
        {
            "Author": ["User_A", "User_E", "User_B", "User_C"],
            "Engaging User": ["User_B", "User_B", "User_A", "User_D"],
            "Post": [1, 2, 3, 4],
        }
    )

    label_name = ["Post"]
    cats = cat_names >> ops.Categorify(out_path=str(tmpdir), encode_type=kind)
    workflow = nvt.Workflow(cats + label_name)
    df_out = (
      workflow.fit_transform(nvt.Dataset(df, cpu=cpu)).to_ddf().compute(scheduler="synchronous")
    )

tests/unit/test_ops.py:556:


nvtabular/workflow.py:185: in fit_transform
self.fit(dataset)
nvtabular/workflow.py:152: in fit
results = dask.compute(stats, scheduler="synchronous")[0]
/usr/local/lib/python3.8/dist-packages/dask/base.py:567: in compute
results = schedule(dsk, keys, **kwargs)
/usr/local/lib/python3.8/dist-packages/dask/local.py:560: in get_sync
return get_async(
/usr/local/lib/python3.8/dist-packages/dask/local.py:503: in get_async
for key, res_info, failed in queue_get(queue).result():
/usr/lib/python3.8/concurrent/futures/_base.py:437: in result
return self.__get_result()
/usr/lib/python3.8/concurrent/futures/_base.py:389: in __get_result
raise self._exception
/usr/local/lib/python3.8/dist-packages/dask/local.py:545: in submit
fut.set_result(fn(*args, **kwargs))
/usr/local/lib/python3.8/dist-packages/dask/local.py:237: in batch_execute_tasks
return [execute_task(*a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:237: in <listcomp>
return [execute_task(*a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:228: in execute_task
result = pack_exception(e, dumps)
/usr/local/lib/python3.8/dist-packages/dask/local.py:223: in execute_task
result = _execute_task(task, data)
/usr/local/lib/python3.8/dist-packages/dask/core.py:121: in _execute_task
return func(*(_execute_task(a, cache) for a in args))
/usr/lib/python3.8/contextlib.py:75: in inner
return func(*args, **kwds)
nvtabular/ops/categorify.py:827: in _write_uniques
df = df.sort_values(name_count, ascending=False, ignore_index=True)
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:4021: in sort_values
self[by].argsort(ascending=ascending, na_position=na_position),
/usr/lib/python3.8/contextlib.py:75: in inner
return func(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:723: in __getitem__
return self._get_columns_by_label(arg, downcast=True)
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1449: in _get_columns_by_label
new_data = super()._get_columns_by_label(labels, downcast)
/usr/local/lib/python3.8/dist-packages/cudf/core/frame.py:630: in _get_columns_by_label
return self._data.select_by_label(labels)
/usr/local/lib/python3.8/dist-packages/cudf/core/column_accessor.py:346: in select_by_label
return self._select_by_label_grouped(key)


self = ColumnAccessor(multiindex=False, level_names=[None])
Author: object
Engaging User: object
Author_Engaging User_count: int64
key = 'Author_count'

def _select_by_label_grouped(self, key: Any) -> ColumnAccessor:
  result = self._grouped_data[key]

E KeyError: 'Author_count'

/usr/local/lib/python3.8/dist-packages/cudf/core/column_accessor.py:408: KeyError
______________________ test_categorify_multi_combo[True] _______________________

tmpdir = local('/tmp/pytest-of-jenkins/pytest-4/test_categorify_multi_combo_Tr0')
cpu = True

@pytest.mark.parametrize("cpu", [False, True])
def test_categorify_multi_combo(tmpdir, cpu):
    cat_names = [["Author", "Engaging User"], ["Author"], "Engaging User"]
    kind = "combo"
    df = pd.DataFrame(
        {
            "Author": ["User_A", "User_E", "User_B", "User_C"],
            "Engaging User": ["User_B", "User_B", "User_A", "User_D"],
            "Post": [1, 2, 3, 4],
        }
    )

    label_name = ["Post"]
    cats = cat_names >> ops.Categorify(out_path=str(tmpdir), encode_type=kind)
    workflow = nvt.Workflow(cats + label_name)
    df_out = (
      workflow.fit_transform(nvt.Dataset(df, cpu=cpu)).to_ddf().compute(scheduler="synchronous")
    )

tests/unit/test_ops.py:556:


nvtabular/workflow.py:185: in fit_transform
self.fit(dataset)
nvtabular/workflow.py:152: in fit
results = dask.compute(stats, scheduler="synchronous")[0]
/usr/local/lib/python3.8/dist-packages/dask/base.py:567: in compute
results = schedule(dsk, keys, **kwargs)
/usr/local/lib/python3.8/dist-packages/dask/local.py:560: in get_sync
return get_async(
/usr/local/lib/python3.8/dist-packages/dask/local.py:503: in get_async
for key, res_info, failed in queue_get(queue).result():
/usr/lib/python3.8/concurrent/futures/_base.py:437: in result
return self.__get_result()
/usr/lib/python3.8/concurrent/futures/_base.py:389: in __get_result
raise self._exception
/usr/local/lib/python3.8/dist-packages/dask/local.py:545: in submit
fut.set_result(fn(*args, **kwargs))
/usr/local/lib/python3.8/dist-packages/dask/local.py:237: in batch_execute_tasks
return [execute_task(*a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:237: in
return [execute_task(a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:228: in execute_task
result = pack_exception(e, dumps)
/usr/local/lib/python3.8/dist-packages/dask/local.py:223: in execute_task
result = _execute_task(task, data)
/usr/local/lib/python3.8/dist-packages/dask/core.py:121: in _execute_task
return func(*(_execute_task(a, cache) for a in args))
/usr/lib/python3.8/contextlib.py:75: in inner
return func(*args, **kwds)
nvtabular/ops/categorify.py:827: in _write_uniques
df = df.sort_values(name_count, ascending=False, ignore_index=True)
/usr/local/lib/python3.8/dist-packages/pandas/core/frame.py:5298: in sort_values
k = self._get_label_or_level_values(by, axis=axis)


self = Author Engaging User Author_Engaging User_count
0 User_A User_B 1
0 User_B ... 1
1 User_C User_D 1
0 User_E User_B 1
key = 'Author_count', axis = 0

def _get_label_or_level_values(self, key: str, axis: int = 0) -> np.ndarray:
    """
    Return a 1-D array of values associated with `key`, a label or level
    from the given `axis`.

    Retrieval logic:
      - (axis=0): Return column values if `key` matches a column label.
        Otherwise return index level values if `key` matches an index
        level.
      - (axis=1): Return row values if `key` matches an index label.
        Otherwise return column level values if 'key' matches a column
        level

    Parameters
    ----------
    key: str
        Label or level name.
    axis: int, default 0
        Axis that levels are associated with (0 for index, 1 for columns)

    Returns
    -------
    values: np.ndarray

    Raises
    ------
    KeyError
        if `key` matches neither a label nor a level
    ValueError
        if `key` matches multiple labels
    FutureWarning
        if `key` is ambiguous. This will become an ambiguity error in a
        future version
    """
    axis = self._get_axis_number(axis)
    other_axes = [ax for ax in range(self._AXIS_LEN) if ax != axis]

    if self._is_label_reference(key, axis=axis):
        self._check_label_or_level_ambiguity(key, axis=axis)
        values = self.xs(key, axis=other_axes[0])._values
    elif self._is_level_reference(key, axis=axis):
        values = self.axes[axis].get_level_values(key)._values
    else:
      raise KeyError(key)

E KeyError: 'Author_count'

/usr/local/lib/python3.8/dist-packages/pandas/core/generic.py:1563: KeyError
________________________________ test_spec_set _________________________________

tmpdir = local('/tmp/pytest-of-jenkins/pytest-4/test_spec_set0')
client = <Client: 'tcp://127.0.0.1:33801' processes=2 threads=16, memory=125.83 GiB>

def test_spec_set(tmpdir, client):
    gdf_test = cudf.DataFrame(
        {
            "ad_id": [1, 2, 2, 6, 6, 8, 3, 3],
            "source_id": [2, 4, 4, 7, 5, 2, 5, 2],
            "platform": [1, 2, np.nan, 2, 1, 3, 3, 1],
            "cont": [1, 2, np.nan, 2, 1, 3, 3, 1],
            "clicked": [1, 0, 1, 0, 0, 1, 1, 0],
        }
    )

    cats = ColumnGroup(["ad_id", "source_id", "platform"])
    cat_features = cats >> ops.Categorify
    cont_features = ColumnGroup(["cont"]) >> ops.FillMissing >> ops.Normalize
    te_features = cats >> ops.TargetEncoding("clicked", kfold=5, fold_seed=42, p_smooth=20)

    p = Workflow(cat_features + cont_features + te_features, client=client)
  p.fit_transform(nvt.Dataset(gdf_test)).to_ddf().compute()

tests/unit/test_workflow.py:127:


nvtabular/workflow.py:185: in fit_transform
self.fit(dataset)
nvtabular/workflow.py:150: in fit
results = [r.result() for r in self.client.compute(stats)]
nvtabular/workflow.py:150: in <listcomp>
results = [r.result() for r in self.client.compute(stats)]
/usr/local/lib/python3.8/dist-packages/distributed/client.py:220: in result
raise exc.with_traceback(tb)
/usr/lib/python3.8/contextlib.py:75: in inner
return func(*args, **kwds)
nvtabular/ops/categorify.py:835: in _write_uniques
df_0 = df[0]
/usr/lib/python3.8/contextlib.py:75: in inner
return func(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:723: in __getitem__
return self._get_columns_by_label(arg, downcast=True)
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1449: in _get_columns_by_label
new_data = super()._get_columns_by_label(labels, downcast)
/usr/local/lib/python3.8/dist-packages/cudf/core/frame.py:630: in _get_columns_by_label
return self._data.select_by_label(labels)
/usr/local/lib/python3.8/dist-packages/cudf/core/column_accessor.py:346: in select_by_label
return self._select_by_label_grouped(key)


def _select_by_label_grouped(self, key: Any) -> ColumnAccessor:
  result = self._grouped_data[key]
E KeyError: 0

/usr/local/lib/python3.8/dist-packages/cudf/core/column_accessor.py:408: KeyError
----------------------------- Captured stderr call -----------------------------
distributed.worker - WARNING - Compute Failed
Function: write_uniques
args: ([pyarrow.Table
platform: int64
platform_count: int64, pyarrow.Table
platform: int64
platform_count: int64, pyarrow.Table
platform: int64
platform_count: int64, pyarrow.Table
platform: int64
platform_count: int64, pyarrow.Table
platform: int64
platform_count: int64, pyarrow.Table
platform: int64
platform_count: int64, pyarrow.Table
platform: int64
platform_count: int64, pyarrow.Table
platform: int64
platform_count: int64], './/categories', ['platform'], FitOptions(col_groups=['ad_id', 'source_id', 'platform'], agg_cols=[], agg_list=['count'], out_path='./', freq_limit=0, tree_width={'ad_id': 8, 'source_id': 8, 'platform': 8}, on_host=True, stat_name='categories', concat_groups=True, name_sep='_', max_size=0, num_buckets=None))
kwargs: {}
Exception: KeyError(0)

_______________________________ test_chaining_2 ________________________________

def test_chaining_2():
    gdf = cudf.DataFrame(
        {
            "A": [1, 2, 2, 9, 6, np.nan, 3],
            "B": [2, np.nan, 4, 7, 7, 2, 5],
            "C": ["a", "b", "c", np.nan, np.nan, "g", "k"],
        }
    )

    cat_names = ["C"]
    cont_names = ["A", "B"]
    label_name = []

    all_features = (
        cat_names + cont_names
        >> ops.LambdaOp(f=lambda col: col.isnull())
        >> ops.Rename(postfix="_isnull")
    )
    cat_features = cat_names >> ops.Categorify()

    workflow = Workflow(all_features + cat_features + label_name)

    dataset = nvt.Dataset(gdf, engine="parquet")
  workflow.fit(dataset)

tests/unit/test_workflow.py:398:


nvtabular/workflow.py:152: in fit
results = dask.compute(stats, scheduler="synchronous")[0]
/usr/local/lib/python3.8/dist-packages/dask/base.py:567: in compute
results = schedule(dsk, keys, **kwargs)
/usr/local/lib/python3.8/dist-packages/dask/local.py:560: in get_sync
return get_async(
/usr/local/lib/python3.8/dist-packages/dask/local.py:503: in get_async
for key, res_info, failed in queue_get(queue).result():
/usr/lib/python3.8/concurrent/futures/_base.py:437: in result
return self.__get_result()
/usr/lib/python3.8/concurrent/futures/_base.py:389: in __get_result
raise self._exception
/usr/local/lib/python3.8/dist-packages/dask/local.py:545: in submit
fut.set_result(fn(*args, **kwargs))
/usr/local/lib/python3.8/dist-packages/dask/local.py:237: in batch_execute_tasks
return [execute_task(*a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:237: in <listcomp>
return [execute_task(*a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:228: in execute_task
result = pack_exception(e, dumps)
/usr/local/lib/python3.8/dist-packages/dask/local.py:223: in execute_task
result = _execute_task(task, data)
/usr/local/lib/python3.8/dist-packages/dask/core.py:121: in _execute_task
return func(*(_execute_task(a, cache) for a in args))
/usr/lib/python3.8/contextlib.py:75: in inner
return func(*args, **kwds)
nvtabular/ops/categorify.py:835: in _write_uniques
df_0 = df[0]
/usr/lib/python3.8/contextlib.py:75: in inner
return func(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:723: in __getitem__
return self._get_columns_by_label(arg, downcast=True)
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1449: in _get_columns_by_label
new_data = super()._get_columns_by_label(labels, downcast)
/usr/local/lib/python3.8/dist-packages/cudf/core/frame.py:630: in _get_columns_by_label
return self._data.select_by_label(labels)
/usr/local/lib/python3.8/dist-packages/cudf/core/column_accessor.py:346: in select_by_label
return self._select_by_label_grouped(key)


self = ColumnAccessor(multiindex=False, level_names=[None])
C: object
C_count: int64
key = 0

def _select_by_label_grouped(self, key: Any) -> ColumnAccessor:
  result = self._grouped_data[key]

E KeyError: 0

/usr/local/lib/python3.8/dist-packages/cudf/core/column_accessor.py:408: KeyError
______________________ test_workflow_input_output_dtypes _______________________

def test_workflow_input_output_dtypes():
    df = cudf.DataFrame({"genre": ["drama", "comedy"], "user": ["a", "b"], "unneeded": [1, 2]})
    features = [["genre", "user"], "genre"] >> ops.Categorify(encode_type="combo")
    workflow = Workflow(features)
  workflow.fit(Dataset(df))

tests/unit/test_workflow.py:594:


nvtabular/workflow.py:152: in fit
results = dask.compute(stats, scheduler="synchronous")[0]
/usr/local/lib/python3.8/dist-packages/dask/base.py:567: in compute
results = schedule(dsk, keys, **kwargs)
/usr/local/lib/python3.8/dist-packages/dask/local.py:560: in get_sync
return get_async(
/usr/local/lib/python3.8/dist-packages/dask/local.py:503: in get_async
for key, res_info, failed in queue_get(queue).result():
/usr/lib/python3.8/concurrent/futures/_base.py:437: in result
return self.__get_result()
/usr/lib/python3.8/concurrent/futures/_base.py:389: in __get_result
raise self._exception
/usr/local/lib/python3.8/dist-packages/dask/local.py:545: in submit
fut.set_result(fn(*args, **kwargs))
/usr/local/lib/python3.8/dist-packages/dask/local.py:237: in batch_execute_tasks
return [execute_task(*a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:237: in <listcomp>
return [execute_task(*a) for a in it]
/usr/local/lib/python3.8/dist-packages/dask/local.py:228: in execute_task
result = pack_exception(e, dumps)
/usr/local/lib/python3.8/dist-packages/dask/local.py:223: in execute_task
result = _execute_task(task, data)
/usr/local/lib/python3.8/dist-packages/dask/core.py:121: in _execute_task
return func(*(_execute_task(a, cache) for a in args))
/usr/lib/python3.8/contextlib.py:75: in inner
return func(*args, **kwds)
nvtabular/ops/categorify.py:827: in _write_uniques
df = df.sort_values(name_count, ascending=False, ignore_index=True)
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:4021: in sort_values
self[by].argsort(ascending=ascending, na_position=na_position),
/usr/lib/python3.8/contextlib.py:75: in inner
return func(*args, **kwds)
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:723: in __getitem__
return self._get_columns_by_label(arg, downcast=True)
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1449: in _get_columns_by_label
new_data = super()._get_columns_by_label(labels, downcast)
/usr/local/lib/python3.8/dist-packages/cudf/core/frame.py:630: in _get_columns_by_label
return self._data.select_by_label(labels)
/usr/local/lib/python3.8/dist-packages/cudf/core/column_accessor.py:346: in select_by_label
return self._select_by_label_grouped(key)


self = ColumnAccessor(multiindex=False, level_names=[None])
genre: object
user: object
genre_user_count: int64
key = 'genre_count'

def _select_by_label_grouped(self, key: Any) -> ColumnAccessor:
  result = self._grouped_data[key]

E KeyError: 'genre_count'

/usr/local/lib/python3.8/dist-packages/cudf/core/column_accessor.py:408: KeyError
=============================== warnings summary ===============================
tests/unit/test_ops.py::test_fill_missing[True-True-parquet]
tests/unit/test_ops.py::test_fill_missing[True-False-parquet]
tests/unit/test_ops.py::test_filter[parquet-0.1-True]
/usr/local/lib/python3.8/dist-packages/pandas/core/indexing.py:670: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
iloc._setitem_with_indexer(indexer, value)

tests/unit/test_ops.py::test_join_external[True-True-left-host-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-left-device-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-inner-host-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-inner-device-pandas-parquet]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/ops/join_external.py:164: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
_ext.drop_duplicates(ignore_index=True, inplace=True)

tests/unit/test_ops.py::test_filter[parquet-0.1-True]
tests/unit/test_ops.py::test_filter[parquet-0.1-False]
tests/unit/test_ops.py::test_groupby_op[id-False]
tests/unit/test_ops.py::test_groupby_op[id-True]
/usr/local/lib/python3.8/dist-packages/dask/dataframe/core.py:6610: UserWarning: Insufficient elements for head. 1 elements requested, only 0 elements available. Try passing larger npartitions to head.
warnings.warn(msg.format(n, len(r)))

-- Docs: https://docs.pytest.org/en/stable/warnings.html

---------- coverage: platform linux, python 3.8.10-final-0 -----------
Name Stmts Miss Branch BrPart Cover Missing

examples/multi-gpu-movielens/torch_trainer.py 65 0 6 1 99% 32->36
nvtabular/__init__.py 12 0 0 0 100%
nvtabular/column_group.py 157 19 82 6 86% 54, 59, 87, 128, 152-165, 214, 301
nvtabular/dispatch.py 230 37 112 19 82% 33-35, 40-42, 48-58, 62-63, 86, 94, 107, 112->114, 125, 148-151, 190, 206, 213, 244->249, 247, 250, 253->257, 290, 301-304, 347, 351, 392, 416, 418, 425
nvtabular/framework_utils/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/tensorflow/__init__.py 1 0 0 0 100%
nvtabular/framework_utils/tensorflow/feature_column_utils.py 146 137 96 0 4% 28-32, 69-303
nvtabular/framework_utils/tensorflow/layers/__init__.py 4 0 0 0 100%
nvtabular/framework_utils/tensorflow/layers/embedding.py 153 12 85 6 91% 60, 68->49, 122, 179, 231-239, 335->343, 357->360, 363-364, 367
nvtabular/framework_utils/tensorflow/layers/interaction.py 47 25 20 1 43% 49, 74-103, 106-110, 113
nvtabular/framework_utils/tensorflow/layers/outer_product.py 30 24 10 0 15% 37-38, 41-60, 71-84, 87
nvtabular/framework_utils/torch/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/torch/layers/__init__.py 2 0 0 0 100%
nvtabular/framework_utils/torch/layers/embeddings.py 30 1 12 1 95% 47
nvtabular/framework_utils/torch/models.py 45 0 28 0 100%
nvtabular/framework_utils/torch/utils.py 75 4 30 2 94% 64, 118-120
nvtabular/inference/__init__.py 0 0 0 0 100%
nvtabular/inference/triton/__init__.py 279 158 120 15 43% 118-168, 213-274, 305, 307, 331-343, 347-363, 367-370, 374, 396-412, 416-420, 506-528, 532-599, 608->611, 611->607, 640-650, 654-655, 659, 669, 675, 677, 679, 681, 683, 685, 687, 690, 694-700
nvtabular/inference/triton/benchmarking_tools.py 52 52 10 0 0% 2-103
nvtabular/inference/triton/data_conversions.py 87 3 58 4 95% 32-33, 84
nvtabular/inference/triton/model.py 140 140 66 0 0% 27-266
nvtabular/inference/triton/model_config_pb2.py 299 0 2 0 100%
nvtabular/io/__init__.py 4 0 0 0 100%
nvtabular/io/avro.py 88 88 30 0 0% 16-189
nvtabular/io/csv.py 57 6 20 5 86% 22-23, 99, 103->107, 108, 110, 124
nvtabular/io/dask.py 179 7 68 11 93% 110, 113, 149, 224, 384->382, 412->415, 423, 427->429, 429->425, 434, 436
nvtabular/io/dataframe_engine.py 61 5 28 6 88% 19-20, 50, 69, 88->92, 92->97, 94->97, 97->116, 125
nvtabular/io/dataset.py 277 33 122 23 84% 238, 240, 253, 262, 280-294, 397->466, 402-405, 410->420, 415-416, 427->425, 441->445, 456, 516->520, 563, 688-689, 693->695, 695->704, 705, 712-713, 719, 725, 820-821, 937-942, 948, 998
nvtabular/io/dataset_engine.py 23 1 0 0 96% 45
nvtabular/io/hugectr.py 45 2 24 2 91% 34, 74->97, 101
nvtabular/io/parquet.py 492 23 156 13 94% 33-34, 88-89, 92-100, 124->126, 213-215, 338-343, 381-386, 502->509, 570->575, 576-577, 697, 701, 705, 743, 760, 764, 771->773, 891->896, 901->911, 938
nvtabular/io/shuffle.py 31 6 16 5 77% 42, 44-45, 49, 59, 63
nvtabular/io/writer.py 173 13 66 5 92% 24-25, 51, 79, 125, 128, 207, 216, 219, 262, 283-285
nvtabular/io/writer_factory.py 18 2 8 2 85% 35, 60
nvtabular/loader/__init__.py 0 0 0 0 100%
nvtabular/loader/backend.py 327 12 138 9 95% 142-143, 233->235, 245-249, 295-296, 335->339, 410, 414-415, 445, 550, 558
nvtabular/loader/tensorflow.py 155 22 50 7 85% 57, 65-68, 78, 88, 296, 332, 347-349, 378-380, 390-398, 401-404
nvtabular/loader/tf_utils.py 55 10 20 5 80% 29->32, 32->34, 39->41, 43, 50-51, 58-60, 66-70
nvtabular/loader/torch.py 81 13 16 2 78% 25-27, 30-36, 111, 149-150
nvtabular/ops/__init__.py 21 0 0 0 100%
nvtabular/ops/bucketize.py 32 10 18 3 62% 52-54, 58, 61-64, 83-86
nvtabular/ops/categorify.py 539 68 304 44 85% 240, 256, 260, 268, 276, 278, 300, 319-320, 347, 389, 409-410, 428-431, 504->506, 627, 663, 692->695, 696-698, 705-706, 719-721, 722->690, 738, 746, 748, 755->exit, 778, 781->784, 792, 817, 822, 836-838, 840->842, 844-847, 858, 862, 864, 876-879, 957, 959, 988->1011, 994->1011, 1012-1017, 1054, 1072->1077, 1076, 1086->1083, 1091->1083, 1099, 1107-1117
nvtabular/ops/clip.py 18 2 6 3 79% 43, 51->53, 54
nvtabular/ops/column_similarity.py 103 24 36 5 72% 19-20, 76->exit, 106, 178-179, 188-190, 198-214, 231->234, 235, 245
nvtabular/ops/data_stats.py 56 2 22 3 94% 91->93, 95, 97->87, 102
nvtabular/ops/difference_lag.py 25 0 8 1 97% 66->68
nvtabular/ops/dropna.py 8 0 0 0 100%
nvtabular/ops/fill.py 57 2 20 1 96% 92, 118
nvtabular/ops/filter.py 20 1 6 1 92% 49
nvtabular/ops/groupby.py 92 4 56 6 92% 71, 80, 82, 92->94, 104->109, 180
nvtabular/ops/hash_bucket.py 29 2 18 2 87% 69, 99
nvtabular/ops/hashed_cross.py 28 3 13 4 83% 50, 63, 77->exit, 78
nvtabular/ops/join_external.py 83 4 36 5 92% 108, 110, 152, 169->171, 205
nvtabular/ops/join_groupby.py 84 5 30 2 94% 106, 109->118, 194-195, 198-199
nvtabular/ops/lambdaop.py 39 6 18 6 79% 59, 63, 77, 89, 94, 103
nvtabular/ops/list_slice.py 63 24 26 1 56% 21-22, 52-53, 100-114, 122-133
nvtabular/ops/logop.py 8 0 0 0 100%
nvtabular/ops/moments.py 65 0 20 0 100%
nvtabular/ops/normalize.py 70 8 14 2 86% 60->59, 67, 75-76, 109-110, 132-133, 137
nvtabular/ops/operator.py 29 3 2 1 87% 25, 104, 109
nvtabular/ops/rename.py 23 3 14 3 84% 45, 66-68
nvtabular/ops/stat_operator.py 8 0 0 0 100%
nvtabular/ops/target_encoding.py 146 11 64 6 90% 147, 167->171, 174->183, 228-229, 232-233, 242-248, 311->314, 339->342
nvtabular/tools/__init__.py 0 0 0 0 100%
nvtabular/tools/data_gen.py 236 1 62 1 99% 323
nvtabular/tools/dataset_inspector.py 49 7 18 1 79% 31-38
nvtabular/tools/inspector_script.py 46 46 0 0 0% 17-168
nvtabular/utils.py 94 43 44 8 49% 30-31, 35-36, 49, 60-61, 63-65, 68, 71, 77, 83, 89-125, 144, 148->152
nvtabular/worker.py 82 5 38 7 90% 24-25, 82->99, 91, 92->99, 99->102, 108, 110, 111->113
nvtabular/workflow.py 156 11 73 4 93% 28-29, 45, 131, 145-147, 251, 280-281, 369

TOTAL 6229 1150 2465 270 79%
Coverage XML written to file coverage.xml

Required test coverage of 70% reached. Total coverage: 79.09%
=========================== short test summary info ============================
FAILED tests/unit/test_column_group.py::test_nested_column_group - KeyError: ...
FAILED tests/unit/test_ops.py::test_categorify_multi[False-combo-cat_names0]
FAILED tests/unit/test_ops.py::test_categorify_multi[True-combo-cat_names0]
FAILED tests/unit/test_ops.py::test_categorify_multi_combo[False] - KeyError:...
FAILED tests/unit/test_ops.py::test_categorify_multi_combo[True] - KeyError: ...
FAILED tests/unit/test_workflow.py::test_spec_set - KeyError: 0
FAILED tests/unit/test_workflow.py::test_chaining_2 - KeyError: 0
FAILED tests/unit/test_workflow.py::test_workflow_input_output_dtypes - KeyEr...
===== 8 failed, 1087 passed, 13 skipped, 11 warnings in 773.52s (0:12:53) ======
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://github.com/gitapi/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins1953486745438683564.sh
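The eight failures above reduce to two column-lookup mistakes in `_write_uniques`. A minimal pandas sketch of both (the column names, `name_sep`, and helper variables below are illustrative, not copied from the nvtabular source): for a multi-column group the count column is named after the *joined* group, so sorting by a single column's `<name>_count` raises `KeyError`; and `df[0]` looks up a column *labeled* `0` rather than the first column, giving `KeyError: 0`.

```python
import pandas as pd

# Pattern 1: the aggregated count column for a multi-column group is named
# after the joined group, so sorting by a single column's "<name>_count"
# raises KeyError('Author_count'), matching the tracebacks above.
df = pd.DataFrame(
    {
        "Author": ["User_A", "User_B"],
        "Engaging User": ["User_B", "User_A"],
        "Author_Engaging User_count": [2, 1],
    }
)
try:
    df.sort_values("Author_count", ascending=False)
except KeyError as exc:
    missing = exc.args[0]  # the column that does not exist

# Deriving the count column name from the full group avoids the mismatch:
name_sep = "_"
group = ["Author", "Engaging User"]
name_count = name_sep.join(group) + name_sep + "count"
sorted_df = df.sort_values(name_count, ascending=False, ignore_index=True)

# Pattern 2: df[0] selects a column *labeled* 0, not the first column,
# hence "KeyError: 0" when no such label exists; positional access is iloc.
first_col = df.iloc[:, 0]
```

In the PR this corresponds to building the count column name from the whole column group rather than from its first column before calling `sort_values`.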

@nvidia-merlin-bot
Copy link
Contributor

Click to view CI Results
GitHub pull request #963 of commit b083d3eb873f30f86446474ee4401a15b0b9a737, no merge conflicts.
Running as SYSTEM
Setting status of b083d3eb873f30f86446474ee4401a15b0b9a737 to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2850/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/963/*:refs/remotes/origin/pr/963/* # timeout=10
 > git rev-parse b083d3eb873f30f86446474ee4401a15b0b9a737^{commit} # timeout=10
Checking out Revision b083d3eb873f30f86446474ee4401a15b0b9a737 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f b083d3eb873f30f86446474ee4401a15b0b9a737 # timeout=10
Commit message: "less intrusive"
 > git rev-list --no-walk 7be249b589d32023e936deb08b8efc0c52f9f167 # timeout=10
First time build. Skipping changelog.
[nvtabular_tests] $ /bin/bash /tmp/jenkins1440278784803876451.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Installing collected packages: nvtabular
  Running setup.py develop for nvtabular
Successfully installed nvtabular-0.5.3+55.gb083d3e
Running black --check
would reformat /var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/ops/categorify.py
Oh no! 💥 💔 💥
1 file would be reformatted, 106 files would be left unchanged.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script  : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://github.com/gitapi/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log" 
[nvtabular_tests] $ /bin/bash /tmp/jenkins7829722308311206161.sh

@nvidia-merlin-bot
Copy link
Contributor

Click to view CI Results
GitHub pull request #963 of commit 66b3fb94342e9d418e19547fc21b9373214e43fd, no merge conflicts.
Running as SYSTEM
Setting status of 66b3fb94342e9d418e19547fc21b9373214e43fd to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2852/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/963/*:refs/remotes/origin/pr/963/* # timeout=10
 > git rev-parse 66b3fb94342e9d418e19547fc21b9373214e43fd^{commit} # timeout=10
Checking out Revision 66b3fb94342e9d418e19547fc21b9373214e43fd (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 66b3fb94342e9d418e19547fc21b9373214e43fd # timeout=10
Commit message: "fix formatting"
 > git rev-list --no-walk 297f5ed8b0a8d91bc9fcee5257fd4f1b90e17953 # timeout=10
First time build. Skipping changelog.
[nvtabular_tests] $ /bin/bash /tmp/jenkins7594573811924116328.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Installing collected packages: nvtabular
  Running setup.py develop for nvtabular
Successfully installed nvtabular-0.5.3+56.g66b3fb9
Running black --check
All done! ✨ 🍰 ✨
107 files would be left unchanged.
Running flake8
Running isort
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/images
  warn(f"Likely recursive symlink detected to {resolved_path}")
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/examples/scaling-criteo/imgs
  warn(f"Likely recursive symlink detected to {resolved_path}")
Skipped 1 files
Running bandit
Running pylint
************* Module bench.datasets.tools.train_hugectr
bench/datasets/tools/train_hugectr.py:28:13: I1101: Module 'hugectr' has no 'solver_parser_helper' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)
bench/datasets/tools/train_hugectr.py:41:16: I1101: Module 'hugectr' has no 'optimizer' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)

Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)

Running flake8-nb
Building docs
make: Entering directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
2021-07-19 16:11:52.379380: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-19 16:11:53.592780: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2021-07-19 16:11:53.593876: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:07:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-07-19 16:11:53.594886: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 1 with properties:
pciBusID: 0000:08:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-07-19 16:11:53.594916: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-19 16:11:53.594963: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11
2021-07-19 16:11:53.594995: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11
2021-07-19 16:11:53.595028: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10
2021-07-19 16:11:53.595058: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10
2021-07-19 16:11:53.595103: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11
2021-07-19 16:11:53.595134: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11
2021-07-19 16:11:53.595169: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8
2021-07-19 16:11:53.599155: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0, 1
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.6) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
make: Leaving directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /var/jenkins_home/workspace/nvtabular_tests/nvtabular, configfile: pyproject.toml
plugins: cov-2.12.1, forked-1.3.0, xdist-2.3.0
collected 1108 items

tests/unit/test_column_group.py .F [ 0%]
tests/unit/test_column_similarity.py ........................ [ 2%]
tests/unit/test_cpu_workflow.py ...... [ 2%]
tests/unit/test_dask_nvt.py ............................................ [ 6%]
..................................................................... [ 13%]
tests/unit/test_dataloader_backend.py . [ 13%]
tests/unit/test_io.py .................................................. [ 17%]
....................................................................ssss [ 24%]
ssss.................................................. [ 29%]
tests/unit/test_notebooks.py ...... [ 29%]
tests/unit/test_ops.py ................................................. [ 34%]
........................................................................ [ 40%]
...................................................F...F.FF............. [ 47%]
........................................................................ [ 53%]
........................................................................ [ 60%]
........................................................................ [ 66%]
. [ 66%]
tests/unit/test_s3.py . [ 66%]
tests/unit/test_tf_dataloader.py ....................................... [ 70%]
.................................s [ 73%]
tests/unit/test_tf_layers.py ........................................... [ 77%]
................................... [ 80%]
tests/unit/test_tools.py ...................... [ 82%]
tests/unit/test_torch_dataloader.py ..........................Build timed out (after 40 minutes). Marking the build as failed.
Terminated
Build was aborted
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://github.com/gitapi/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins9032428767760245363.sh

@nvidia-merlin-bot
Contributor

Click to view CI Results
GitHub pull request #963 of commit 9d97eb786fe7e1923ab7c05a2cd1c2d598e7e3e2, no merge conflicts.
Running as SYSTEM
Setting status of 9d97eb786fe7e1923ab7c05a2cd1c2d598e7e3e2 to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2878/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/963/*:refs/remotes/origin/pr/963/* # timeout=10
 > git rev-parse 9d97eb786fe7e1923ab7c05a2cd1c2d598e7e3e2^{commit} # timeout=10
Checking out Revision 9d97eb786fe7e1923ab7c05a2cd1c2d598e7e3e2 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 9d97eb786fe7e1923ab7c05a2cd1c2d598e7e3e2 # timeout=10
Commit message: "all test green except horovod"
 > git rev-list --no-walk 7eb9cf76b29e65c87e13bc80b1f3d491ec3d4636 # timeout=10
First time build. Skipping changelog.
[nvtabular_tests] $ /bin/bash /tmp/jenkins2670297771196282196.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Installing collected packages: nvtabular
  Running setup.py develop for nvtabular
Successfully installed nvtabular-0.5.3+57.g9d97eb7
Running black --check
All done! ✨ 🍰 ✨
107 files would be left unchanged.
Running flake8
Running isort
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/images
  warn(f"Likely recursive symlink detected to {resolved_path}")
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/examples/scaling-criteo/imgs
  warn(f"Likely recursive symlink detected to {resolved_path}")
Skipped 1 files
Running bandit
Running pylint
************* Module bench.datasets.tools.train_hugectr
bench/datasets/tools/train_hugectr.py:28:13: I1101: Module 'hugectr' has no 'solver_parser_helper' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)
bench/datasets/tools/train_hugectr.py:41:16: I1101: Module 'hugectr' has no 'optimizer' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)

Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)

Running flake8-nb
Building docs
make: Entering directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
2021-07-20 01:07:52.569489: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-20 01:07:53.778411: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2021-07-20 01:07:53.779479: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:07:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-07-20 01:07:53.780468: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 1 with properties:
pciBusID: 0000:08:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-07-20 01:07:53.780501: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-20 01:07:53.780552: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11
2021-07-20 01:07:53.780590: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11
2021-07-20 01:07:53.780625: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10
2021-07-20 01:07:53.780660: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10
2021-07-20 01:07:53.780708: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11
2021-07-20 01:07:53.780743: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11
2021-07-20 01:07:53.780783: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8
2021-07-20 01:07:53.784938: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0, 1
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.6) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
make: Leaving directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /var/jenkins_home/workspace/nvtabular_tests/nvtabular, configfile: pyproject.toml
plugins: cov-2.12.1, forked-1.3.0, xdist-2.3.0
collected 1108 items

tests/unit/test_column_group.py .. [ 0%]
tests/unit/test_column_similarity.py ........................ [ 2%]
tests/unit/test_cpu_workflow.py ...... [ 2%]
tests/unit/test_dask_nvt.py ............................................ [ 6%]
..................................................................... [ 13%]
tests/unit/test_dataloader_backend.py . [ 13%]
tests/unit/test_io.py .................................................. [ 17%]
....................................................................ssss [ 24%]
ssss.................................................. [ 29%]
tests/unit/test_notebooks.py ...... [ 29%]
tests/unit/test_ops.py ................................................. [ 34%]
........................................................................ [ 40%]
...................................................F...F.FF............. [ 47%]
........................................................................ [ 53%]
........................................................................ [ 60%]
........................................................................ [ 66%]
. [ 66%]
tests/unit/test_s3.py . [ 66%]
tests/unit/test_tf_dataloader.py ....................................... [ 70%]
.................................s [ 73%]
tests/unit/test_tf_layers.py ........................................... [ 77%]
................................... [ 80%]
tests/unit/test_tools.py ...................... [ 82%]
tests/unit/test_torch_dataloader.py .................................... [ 85%]
.............................................. [ 89%]
tests/unit/test_triton_inference.py ssss.................. [ 91%]
tests/unit/test_workflow.py ............................................ [ 95%]
................................................ [100%]

=================================== FAILURES ===================================
________________ test_categorify_multi[False-combo-cat_names0] _________________

tmpdir = local('/tmp/pytest-of-jenkins/pytest-14/test_categorify_multi_False_co0')
cat_names = [['Author', 'Engaging User']], kind = 'combo', cpu = False

@pytest.mark.parametrize("cat_names", [[["Author", "Engaging User"]], ["Author", "Engaging User"]])
@pytest.mark.parametrize("kind", ["joint", "combo"])
@pytest.mark.parametrize("cpu", [False, True])
def test_categorify_multi(tmpdir, cat_names, kind, cpu):
    df = pd.DataFrame(
        {
            "Author": ["User_A", "User_E", "User_B", "User_C"],
            "Engaging User": ["User_B", "User_B", "User_A", "User_D"],
            "Post": [1, 2, 3, 4],
        }
    )

    label_name = ["Post"]

    cats = cat_names >> ops.Categorify(out_path=str(tmpdir), encode_type=kind)

    workflow = nvt.Workflow(cats + label_name)

    df_out = (
        workflow.fit_transform(nvt.Dataset(df, cpu=cpu)).to_ddf().compute(scheduler="synchronous")
    )

    if len(cat_names) == 1:
        if kind == "joint":
            # Columns are encoded jointly
            compare_authors = (
                df_out["Author"].to_list() if cpu else df_out["Author"].to_arrow().to_pylist()
            )
            compare_engaging = (
                df_out["Engaging User"].to_list()
                if cpu
                else df_out["Engaging User"].to_arrow().to_pylist()
            )
            # again userB has highest frequency given lowest encoding
            assert compare_authors == [2, 5, 1, 3]
            assert compare_engaging == [1, 1, 2, 4]
        else:
            # Column combinations are encoded
            compare_engaging = (
                df_out["Author_Engaging User"].to_list()
                if cpu
                else df_out["Author_Engaging User"].to_arrow().to_pylist()
            )
          assert compare_engaging == [2, 4, 1, 3]

E assert [1, 4, 2, 3] == [2, 4, 1, 3]
E At index 0 diff: 1 != 2
E Use -v to get the full diff

tests/unit/test_ops.py:524: AssertionError
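
For reference, the decreasing-frequency ordering that these tests exercise can be illustrated with a minimal pandas sketch. This is not NVTabular's `Categorify` implementation: the helper name `frequency_encode`, the choice to reserve code 0, and the alphabetical tie-break are assumptions made for illustration only.

```python
import pandas as pd

def frequency_encode(series: pd.Series) -> pd.Series:
    # Count occurrences, then order categories by decreasing frequency,
    # breaking ties alphabetically so the encoding is deterministic.
    counts = series.value_counts()
    order = sorted(counts.index, key=lambda v: (-counts[v], v))
    # Start real categories at 1, leaving 0 free for nulls/out-of-vocab
    # (an illustrative convention, not necessarily NVTabular's).
    mapping = {val: i + 1 for i, val in enumerate(order)}
    return series.map(mapping)

engaging = pd.Series(["User_B", "User_B", "User_A", "User_D"])
print(frequency_encode(engaging).tolist())  # -> [1, 1, 2, 3]
```

Under this scheme the most frequent value (`User_B`) receives the lowest non-reserved code, which is the property the review asked a unit test to lock in.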
_________________ test_categorify_multi[True-combo-cat_names0] _________________

tmpdir = local('/tmp/pytest-of-jenkins/pytest-14/test_categorify_multi_True_com0')
cat_names = [['Author', 'Engaging User']], kind = 'combo', cpu = True

@pytest.mark.parametrize("cat_names", [[["Author", "Engaging User"]], ["Author", "Engaging User"]])
@pytest.mark.parametrize("kind", ["joint", "combo"])
@pytest.mark.parametrize("cpu", [False, True])
def test_categorify_multi(tmpdir, cat_names, kind, cpu):
    df = pd.DataFrame(
        {
            "Author": ["User_A", "User_E", "User_B", "User_C"],
            "Engaging User": ["User_B", "User_B", "User_A", "User_D"],
            "Post": [1, 2, 3, 4],
        }
    )

    label_name = ["Post"]

    cats = cat_names >> ops.Categorify(out_path=str(tmpdir), encode_type=kind)

    workflow = nvt.Workflow(cats + label_name)

    df_out = (
        workflow.fit_transform(nvt.Dataset(df, cpu=cpu)).to_ddf().compute(scheduler="synchronous")
    )

    if len(cat_names) == 1:
        if kind == "joint":
            # Columns are encoded jointly
            compare_authors = (
                df_out["Author"].to_list() if cpu else df_out["Author"].to_arrow().to_pylist()
            )
            compare_engaging = (
                df_out["Engaging User"].to_list()
                if cpu
                else df_out["Engaging User"].to_arrow().to_pylist()
            )
            # again userB has highest frequency given lowest encoding
            assert compare_authors == [2, 5, 1, 3]
            assert compare_engaging == [1, 1, 2, 4]
        else:
            # Column combinations are encoded
            compare_engaging = (
                df_out["Author_Engaging User"].to_list()
                if cpu
                else df_out["Author_Engaging User"].to_arrow().to_pylist()
            )
          assert compare_engaging == [2, 4, 1, 3]

E assert [1, 4, 2, 3] == [2, 4, 1, 3]
E At index 0 diff: 1 != 2
E Use -v to get the full diff

tests/unit/test_ops.py:524: AssertionError
______________________ test_categorify_multi_combo[False] ______________________

tmpdir = local('/tmp/pytest-of-jenkins/pytest-14/test_categorify_multi_combo_Fa0')
cpu = False

@pytest.mark.parametrize("cpu", [False, True])
def test_categorify_multi_combo(tmpdir, cpu):
    cat_names = [["Author", "Engaging User"], ["Author"], "Engaging User"]
    kind = "combo"
    df = pd.DataFrame(
        {
            "Author": ["User_A", "User_E", "User_B", "User_C"],
            "Engaging User": ["User_B", "User_B", "User_A", "User_D"],
            "Post": [1, 2, 3, 4],
        }
    )

    label_name = ["Post"]
    cats = cat_names >> ops.Categorify(out_path=str(tmpdir), encode_type=kind)
    workflow = nvt.Workflow(cats + label_name)
    df_out = (
        workflow.fit_transform(nvt.Dataset(df, cpu=cpu)).to_ddf().compute(scheduler="synchronous")
    )

    # Column combinations are encoded
    compare_a = df_out["Author"].to_list() if cpu else df_out["Author"].to_arrow().to_pylist()
    compare_e = (
        df_out["Engaging User"].to_list() if cpu else df_out["Engaging User"].to_arrow().to_pylist()
    )
    compare_ae = (
        df_out["Author_Engaging User"].to_list()
        if cpu
        else df_out["Author_Engaging User"].to_arrow().to_pylist()
    )
    assert compare_a == [1, 4, 2, 3]
  assert compare_e == [2, 2, 1, 3]

E assert [1, 1, 2, 3] == [2, 2, 1, 3]
E At index 0 diff: 1 != 2
E Use -v to get the full diff

tests/unit/test_ops.py:570: AssertionError
______________________ test_categorify_multi_combo[True] _______________________

tmpdir = local('/tmp/pytest-of-jenkins/pytest-14/test_categorify_multi_combo_Tr0')
cpu = True

@pytest.mark.parametrize("cpu", [False, True])
def test_categorify_multi_combo(tmpdir, cpu):
    cat_names = [["Author", "Engaging User"], ["Author"], "Engaging User"]
    kind = "combo"
    df = pd.DataFrame(
        {
            "Author": ["User_A", "User_E", "User_B", "User_C"],
            "Engaging User": ["User_B", "User_B", "User_A", "User_D"],
            "Post": [1, 2, 3, 4],
        }
    )

    label_name = ["Post"]
    cats = cat_names >> ops.Categorify(out_path=str(tmpdir), encode_type=kind)
    workflow = nvt.Workflow(cats + label_name)
    df_out = (
        workflow.fit_transform(nvt.Dataset(df, cpu=cpu)).to_ddf().compute(scheduler="synchronous")
    )

    # Column combinations are encoded
    compare_a = df_out["Author"].to_list() if cpu else df_out["Author"].to_arrow().to_pylist()
    compare_e = (
        df_out["Engaging User"].to_list() if cpu else df_out["Engaging User"].to_arrow().to_pylist()
    )
    compare_ae = (
        df_out["Author_Engaging User"].to_list()
        if cpu
        else df_out["Author_Engaging User"].to_arrow().to_pylist()
    )
    assert compare_a == [1, 4, 2, 3]
  assert compare_e == [2, 2, 1, 3]

E assert [1, 1, 2, 3] == [2, 2, 1, 3]
E At index 0 diff: 1 != 2
E Use -v to get the full diff

tests/unit/test_ops.py:570: AssertionError
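
The "combo" failures above concern encoding a column combination as a single category. A hedged sketch of that idea (the helper name `combo_encode` and the lexicographic tie-break are illustrative assumptions, not NVTabular's API) is:

```python
from collections import Counter

import pandas as pd

def combo_encode(df: pd.DataFrame, cols) -> list:
    # Treat the tuple of values across `cols` as one category and
    # frequency-encode the tuples together.
    pairs = list(zip(*(df[c] for c in cols)))
    counts = Counter(pairs)
    # Decreasing frequency, lexicographic tie-break; code 0 left free.
    order = sorted(counts, key=lambda p: (-counts[p], p))
    mapping = {pair: i + 1 for i, pair in enumerate(order)}
    return [mapping[p] for p in pairs]

df = pd.DataFrame(
    {
        "Author": ["User_A", "User_E", "User_B", "User_C"],
        "Engaging User": ["User_B", "User_B", "User_A", "User_D"],
    }
)
print(combo_encode(df, ["Author", "Engaging User"]))  # -> [1, 4, 2, 3]
```

With all four pairs occurring once, the tie-break alone determines the codes, which is why small expected-value lists in these tests are sensitive to the exact ordering convention.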
=============================== warnings summary ===============================
tests/unit/test_ops.py::test_fill_missing[True-True-parquet]
tests/unit/test_ops.py::test_fill_missing[True-False-parquet]
tests/unit/test_ops.py::test_filter[parquet-0.1-True]
/usr/local/lib/python3.8/dist-packages/pandas/core/indexing.py:670: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
iloc._setitem_with_indexer(indexer, value)

tests/unit/test_ops.py::test_join_external[True-True-left-host-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-left-device-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-inner-host-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-inner-device-pandas-parquet]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/ops/join_external.py:164: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
_ext.drop_duplicates(ignore_index=True, inplace=True)

tests/unit/test_ops.py::test_filter[parquet-0.1-True]
tests/unit/test_ops.py::test_filter[parquet-0.1-False]
tests/unit/test_ops.py::test_groupby_op[id-False]
tests/unit/test_ops.py::test_groupby_op[id-True]
/usr/local/lib/python3.8/dist-packages/dask/dataframe/core.py:6610: UserWarning: Insufficient elements for head. 1 elements requested, only 0 elements available. Try passing larger npartitions to head.
warnings.warn(msg.format(n, len(r)))

-- Docs: https://docs.pytest.org/en/stable/warnings.html

---------- coverage: platform linux, python 3.8.10-final-0 -----------
Name Stmts Miss Branch BrPart Cover Missing

examples/multi-gpu-movielens/torch_trainer.py 65 0 6 1 99% 32->36
nvtabular/__init__.py 12 0 0 0 100%
nvtabular/column_group.py 157 18 82 5 87% 54, 87, 128, 152-165, 214, 301
nvtabular/dispatch.py 230 37 112 19 82% 33-35, 40-42, 48-58, 62-63, 86, 94, 107, 112->114, 125, 148-151, 190, 206, 213, 244->249, 247, 250, 253->257, 290, 301-304, 347, 351, 392, 416, 418, 425
nvtabular/framework_utils/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/tensorflow/__init__.py 1 0 0 0 100%
nvtabular/framework_utils/tensorflow/feature_column_utils.py 146 137 96 0 4% 28-32, 69-303
nvtabular/framework_utils/tensorflow/layers/__init__.py 4 0 0 0 100%
nvtabular/framework_utils/tensorflow/layers/embedding.py 153 12 85 6 91% 60, 68->49, 122, 179, 231-239, 335->343, 357->360, 363-364, 367
nvtabular/framework_utils/tensorflow/layers/interaction.py 47 25 20 1 43% 49, 74-103, 106-110, 113
nvtabular/framework_utils/tensorflow/layers/outer_product.py 30 24 10 0 15% 37-38, 41-60, 71-84, 87
nvtabular/framework_utils/torch/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/torch/layers/__init__.py 2 0 0 0 100%
nvtabular/framework_utils/torch/layers/embeddings.py 30 1 12 1 95% 47
nvtabular/framework_utils/torch/models.py 45 0 28 0 100%
nvtabular/framework_utils/torch/utils.py 75 4 30 2 94% 64, 118-120
nvtabular/inference/__init__.py 0 0 0 0 100%
nvtabular/inference/triton/__init__.py 279 158 120 15 43% 118-168, 213-274, 305, 307, 331-343, 347-363, 367-370, 374, 396-412, 416-420, 506-528, 532-599, 608->611, 611->607, 640-650, 654-655, 659, 669, 675, 677, 679, 681, 683, 685, 687, 690, 694-700
nvtabular/inference/triton/benchmarking_tools.py 52 52 10 0 0% 2-103
nvtabular/inference/triton/data_conversions.py 87 3 58 4 95% 32-33, 84
nvtabular/inference/triton/model.py 140 140 66 0 0% 27-266
nvtabular/inference/triton/model_config_pb2.py 299 0 2 0 100%
nvtabular/io/__init__.py 4 0 0 0 100%
nvtabular/io/avro.py 88 88 30 0 0% 16-189
nvtabular/io/csv.py 57 6 20 5 86% 22-23, 99, 103->107, 108, 110, 124
nvtabular/io/dask.py 179 7 68 11 93% 110, 113, 149, 224, 384->382, 412->415, 423, 427->429, 429->425, 434, 436
nvtabular/io/dataframe_engine.py 61 5 28 6 88% 19-20, 50, 69, 88->92, 92->97, 94->97, 97->116, 125
nvtabular/io/dataset.py 277 33 122 23 84% 238, 240, 253, 262, 280-294, 397->466, 402-405, 410->420, 415-416, 427->425, 441->445, 456, 516->520, 563, 688-689, 693->695, 695->704, 705, 712-713, 719, 725, 820-821, 937-942, 948, 998
nvtabular/io/dataset_engine.py 23 1 0 0 96% 45
nvtabular/io/hugectr.py 45 2 24 2 91% 34, 74->97, 101
nvtabular/io/parquet.py 492 23 156 13 94% 33-34, 88-89, 92-100, 124->126, 213-215, 338-343, 381-386, 502->509, 570->575, 576-577, 697, 701, 705, 743, 760, 764, 771->773, 891->896, 901->911, 938
nvtabular/io/shuffle.py 31 6 16 5 77% 42, 44-45, 49, 59, 63
nvtabular/io/writer.py 173 13 66 5 92% 24-25, 51, 79, 125, 128, 207, 216, 219, 262, 283-285
nvtabular/io/writer_factory.py 18 2 8 2 85% 35, 60
nvtabular/loader/__init__.py 0 0 0 0 100%
nvtabular/loader/backend.py 327 12 138 9 95% 142-143, 233->235, 245-249, 295-296, 335->339, 410, 414-415, 445, 550, 558
nvtabular/loader/tensorflow.py 155 22 50 7 85% 57, 65-68, 78, 88, 296, 332, 347-349, 378-380, 390-398, 401-404
nvtabular/loader/tf_utils.py 55 10 20 5 80% 29->32, 32->34, 39->41, 43, 50-51, 58-60, 66-70
nvtabular/loader/torch.py 81 13 16 2 78% 25-27, 30-36, 111, 149-150
nvtabular/ops/__init__.py 21 0 0 0 100%
nvtabular/ops/bucketize.py 32 10 18 3 62% 52-54, 58, 61-64, 83-86
nvtabular/ops/categorify.py 542 64 310 43 86% 240, 256, 260, 268, 276, 278, 300, 319-320, 349, 411-412, 430-433, 506->508, 629, 665, 694->697, 698-700, 707-708, 721-723, 724->692, 740, 748, 750, 757->exit, 780, 783->786, 794, 820, 825, 839->843, 850-853, 864, 868, 870, 882-885, 963, 965, 994->1017, 1000->1017, 1018-1023, 1060, 1078->1083, 1082, 1092->1089, 1097->1089, 1105, 1113-1123
nvtabular/ops/clip.py 18 2 6 3 79% 43, 51->53, 54
nvtabular/ops/column_similarity.py 103 24 36 5 72% 19-20, 76->exit, 106, 178-179, 188-190, 198-214, 231->234, 235, 245
nvtabular/ops/data_stats.py 56 2 22 3 94% 91->93, 95, 97->87, 102
nvtabular/ops/difference_lag.py 25 0 8 1 97% 66->68
nvtabular/ops/dropna.py 8 0 0 0 100%
nvtabular/ops/fill.py 57 2 20 1 96% 92, 118
nvtabular/ops/filter.py 20 1 6 1 92% 49
nvtabular/ops/groupby.py 92 4 56 6 92% 71, 80, 82, 92->94, 104->109, 180
nvtabular/ops/hash_bucket.py 29 2 18 2 87% 69, 99
nvtabular/ops/hashed_cross.py 28 3 13 4 83% 50, 63, 77->exit, 78
nvtabular/ops/join_external.py 83 4 36 5 92% 108, 110, 152, 169->171, 205
nvtabular/ops/join_groupby.py 84 5 30 2 94% 106, 109->118, 194-195, 198-199
nvtabular/ops/lambdaop.py 39 6 18 6 79% 59, 63, 77, 89, 94, 103
nvtabular/ops/list_slice.py 63 24 26 1 56% 21-22, 52-53, 100-114, 122-133
nvtabular/ops/logop.py 8 0 0 0 100%
nvtabular/ops/moments.py 65 0 20 0 100%
nvtabular/ops/normalize.py 70 8 14 2 86% 60->59, 67, 75-76, 109-110, 132-133, 137
nvtabular/ops/operator.py 29 3 2 1 87% 25, 104, 109
nvtabular/ops/rename.py 23 3 14 3 84% 45, 66-68
nvtabular/ops/stat_operator.py 8 0 0 0 100%
nvtabular/ops/target_encoding.py 146 11 64 5 90% 147, 167->171, 174->183, 228-229, 232-233, 242-248, 339->342
nvtabular/tools/__init__.py 0 0 0 0 100%
nvtabular/tools/data_gen.py 236 1 62 1 99% 323
nvtabular/tools/dataset_inspector.py 49 7 18 1 79% 31-38
nvtabular/tools/inspector_script.py 46 46 0 0 0% 17-168
nvtabular/utils.py 94 43 44 8 49% 30-31, 35-36, 49, 60-61, 63-65, 68, 71, 77, 83, 89-125, 144, 148->152
nvtabular/worker.py 82 5 38 7 90% 24-25, 82->99, 91, 92->99, 99->102, 108, 110, 111->113
nvtabular/workflow.py 156 11 73 4 93% 28-29, 45, 131, 145-147, 251, 280-281, 369

TOTAL 6232 1145 2471 267 79%
Coverage XML written to file coverage.xml

Required test coverage of 70% reached. Total coverage: 79.20%
=========================== short test summary info ============================
FAILED tests/unit/test_ops.py::test_categorify_multi[False-combo-cat_names0]
FAILED tests/unit/test_ops.py::test_categorify_multi[True-combo-cat_names0]
FAILED tests/unit/test_ops.py::test_categorify_multi_combo[False] - assert [1...
FAILED tests/unit/test_ops.py::test_categorify_multi_combo[True] - assert [1,...
===== 4 failed, 1091 passed, 13 skipped, 11 warnings in 773.10s (0:12:53) ======
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://github.com/gitapi/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins1174774010249921475.sh

@nvidia-merlin-bot
Contributor

Click to view CI Results
GitHub pull request #963 of commit e9a07452ad2412d4d2c2162ab458d065bae12a16, no merge conflicts.
Running as SYSTEM
Setting status of e9a07452ad2412d4d2c2162ab458d065bae12a16 to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2881/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/963/*:refs/remotes/origin/pr/963/* # timeout=10
 > git rev-parse e9a07452ad2412d4d2c2162ab458d065bae12a16^{commit} # timeout=10
Checking out Revision e9a07452ad2412d4d2c2162ab458d065bae12a16 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e9a07452ad2412d4d2c2162ab458d065bae12a16 # timeout=10
Commit message: "all greens now"
 > git rev-list --no-walk 7eb9cf76b29e65c87e13bc80b1f3d491ec3d4636 # timeout=10
First time build. Skipping changelog.
[nvtabular_tests] $ /bin/bash /tmp/jenkins17266277668639396.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Installing collected packages: nvtabular
  Running setup.py develop for nvtabular
Successfully installed nvtabular-0.5.3+58.ge9a0745
Running black --check
All done! ✨ 🍰 ✨
107 files would be left unchanged.
Running flake8
Running isort
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/images
  warn(f"Likely recursive symlink detected to {resolved_path}")
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/examples/scaling-criteo/imgs
  warn(f"Likely recursive symlink detected to {resolved_path}")
Skipped 1 files
Running bandit
Running pylint
************* Module bench.datasets.tools.train_hugectr
bench/datasets/tools/train_hugectr.py:28:13: I1101: Module 'hugectr' has no 'solver_parser_helper' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)
bench/datasets/tools/train_hugectr.py:41:16: I1101: Module 'hugectr' has no 'optimizer' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)

Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)

Running flake8-nb
Building docs
make: Entering directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
2021-07-20 02:36:59.816801: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-20 02:37:01.015606: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2021-07-20 02:37:01.016689: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:07:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-07-20 02:37:01.017689: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 1 with properties:
pciBusID: 0000:08:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-07-20 02:37:01.017719: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-20 02:37:01.017766: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11
2021-07-20 02:37:01.017798: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11
2021-07-20 02:37:01.017831: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10
2021-07-20 02:37:01.017862: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10
2021-07-20 02:37:01.017908: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11
2021-07-20 02:37:01.017948: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11
2021-07-20 02:37:01.017987: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8
2021-07-20 02:37:01.022384: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0, 1
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.6) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
make: Leaving directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /var/jenkins_home/workspace/nvtabular_tests/nvtabular, configfile: pyproject.toml
plugins: cov-2.12.1, forked-1.3.0, xdist-2.3.0
collected 1108 items

tests/unit/test_column_group.py .. [ 0%]
tests/unit/test_column_similarity.py ........................ [ 2%]
tests/unit/test_cpu_workflow.py ...... [ 2%]
tests/unit/test_dask_nvt.py ............................................ [ 6%]
..................................................................... [ 13%]
tests/unit/test_dataloader_backend.py . [ 13%]
tests/unit/test_io.py .................................................. [ 17%]
....................................................................ssss [ 24%]
ssss.................................................. [ 29%]
tests/unit/test_notebooks.py ...... [ 29%]
tests/unit/test_ops.py ................................................. [ 34%]
........................................................................ [ 40%]
........................................................................ [ 47%]
........................................................................ [ 53%]
........................................................................ [ 60%]
........................................................................ [ 66%]
. [ 66%]
tests/unit/test_s3.py . [ 66%]
tests/unit/test_tf_dataloader.py ....................................... [ 70%]
.................................s [ 73%]
tests/unit/test_tf_layers.py ........................................... [ 77%]
................................... [ 80%]
tests/unit/test_tools.py ...................... [ 82%]
tests/unit/test_torch_dataloader.py .................................... [ 85%]
.............................................. [ 89%]
tests/unit/test_triton_inference.py ssss.................. [ 91%]
tests/unit/test_workflow.py ............................................ [ 95%]
................................................ [100%]

=============================== warnings summary ===============================
tests/unit/test_ops.py::test_fill_missing[True-True-parquet]
tests/unit/test_ops.py::test_fill_missing[True-False-parquet]
tests/unit/test_ops.py::test_filter[parquet-0.1-True]
/usr/local/lib/python3.8/dist-packages/pandas/core/indexing.py:670: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
iloc._setitem_with_indexer(indexer, value)

tests/unit/test_ops.py::test_join_external[True-True-left-host-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-left-device-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-inner-host-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-inner-device-pandas-parquet]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/ops/join_external.py:164: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
_ext.drop_duplicates(ignore_index=True, inplace=True)

tests/unit/test_ops.py::test_filter[parquet-0.1-True]
tests/unit/test_ops.py::test_filter[parquet-0.1-False]
tests/unit/test_ops.py::test_groupby_op[id-False]
tests/unit/test_ops.py::test_groupby_op[id-True]
/usr/local/lib/python3.8/dist-packages/dask/dataframe/core.py:6610: UserWarning: Insufficient elements for head. 1 elements requested, only 0 elements available. Try passing larger npartitions to head.
warnings.warn(msg.format(n, len(r)))

-- Docs: https://docs.pytest.org/en/stable/warnings.html

---------- coverage: platform linux, python 3.8.10-final-0 -----------
Name Stmts Miss Branch BrPart Cover Missing

examples/multi-gpu-movielens/torch_trainer.py 65 0 6 1 99% 32->36
nvtabular/__init__.py 12 0 0 0 100%
nvtabular/column_group.py 157 18 82 5 87% 54, 87, 128, 152-165, 214, 301
nvtabular/dispatch.py 230 37 112 19 82% 33-35, 40-42, 48-58, 62-63, 86, 94, 107, 112->114, 125, 148-151, 190, 206, 213, 244->249, 247, 250, 253->257, 290, 301-304, 347, 351, 392, 416, 418, 425
nvtabular/framework_utils/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/tensorflow/__init__.py 1 0 0 0 100%
nvtabular/framework_utils/tensorflow/feature_column_utils.py 146 137 96 0 4% 28-32, 69-303
nvtabular/framework_utils/tensorflow/layers/__init__.py 4 0 0 0 100%
nvtabular/framework_utils/tensorflow/layers/embedding.py 153 12 85 6 91% 60, 68->49, 122, 179, 231-239, 335->343, 357->360, 363-364, 367
nvtabular/framework_utils/tensorflow/layers/interaction.py 47 25 20 1 43% 49, 74-103, 106-110, 113
nvtabular/framework_utils/tensorflow/layers/outer_product.py 30 24 10 0 15% 37-38, 41-60, 71-84, 87
nvtabular/framework_utils/torch/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/torch/layers/__init__.py 2 0 0 0 100%
nvtabular/framework_utils/torch/layers/embeddings.py 30 1 12 1 95% 47
nvtabular/framework_utils/torch/models.py 45 0 28 0 100%
nvtabular/framework_utils/torch/utils.py 75 4 30 2 94% 64, 118-120
nvtabular/inference/__init__.py 0 0 0 0 100%
nvtabular/inference/triton/__init__.py 279 158 120 15 43% 118-168, 213-274, 305, 307, 331-343, 347-363, 367-370, 374, 396-412, 416-420, 506-528, 532-599, 608->611, 611->607, 640-650, 654-655, 659, 669, 675, 677, 679, 681, 683, 685, 687, 690, 694-700
nvtabular/inference/triton/benchmarking_tools.py 52 52 10 0 0% 2-103
nvtabular/inference/triton/data_conversions.py 87 3 58 4 95% 32-33, 84
nvtabular/inference/triton/model.py 140 140 66 0 0% 27-266
nvtabular/inference/triton/model_config_pb2.py 299 0 2 0 100%
nvtabular/io/__init__.py 4 0 0 0 100%
nvtabular/io/avro.py 88 88 30 0 0% 16-189
nvtabular/io/csv.py 57 6 20 5 86% 22-23, 99, 103->107, 108, 110, 124
nvtabular/io/dask.py 179 7 68 11 93% 110, 113, 149, 224, 384->382, 412->415, 423, 427->429, 429->425, 434, 436
nvtabular/io/dataframe_engine.py 61 5 28 6 88% 19-20, 50, 69, 88->92, 92->97, 94->97, 97->116, 125
nvtabular/io/dataset.py 277 33 122 23 84% 238, 240, 253, 262, 280-294, 397->466, 402-405, 410->420, 415-416, 427->425, 441->445, 456, 516->520, 563, 688-689, 693->695, 695->704, 705, 712-713, 719, 725, 820-821, 937-942, 948, 998
nvtabular/io/dataset_engine.py 23 1 0 0 96% 45
nvtabular/io/hugectr.py 45 2 24 2 91% 34, 74->97, 101
nvtabular/io/parquet.py 492 23 156 13 94% 33-34, 88-89, 92-100, 124->126, 213-215, 338-343, 381-386, 502->509, 570->575, 576-577, 697, 701, 705, 743, 760, 764, 771->773, 891->896, 901->911, 938
nvtabular/io/shuffle.py 31 6 16 5 77% 42, 44-45, 49, 59, 63
nvtabular/io/writer.py 173 13 66 5 92% 24-25, 51, 79, 125, 128, 207, 216, 219, 262, 283-285
nvtabular/io/writer_factory.py 18 2 8 2 85% 35, 60
nvtabular/loader/__init__.py 0 0 0 0 100%
nvtabular/loader/backend.py 327 12 138 9 95% 142-143, 233->235, 245-249, 295-296, 335->339, 410, 414-415, 445, 550, 558
nvtabular/loader/tensorflow.py 155 22 50 7 85% 57, 65-68, 78, 88, 296, 332, 347-349, 378-380, 390-398, 401-404
nvtabular/loader/tf_utils.py 55 10 20 5 80% 29->32, 32->34, 39->41, 43, 50-51, 58-60, 66-70
nvtabular/loader/torch.py 81 13 16 2 78% 25-27, 30-36, 111, 149-150
nvtabular/ops/__init__.py 21 0 0 0 100%
nvtabular/ops/bucketize.py 32 10 18 3 62% 52-54, 58, 61-64, 83-86
nvtabular/ops/categorify.py 542 64 310 43 86% 240, 256, 260, 268, 276, 278, 300, 319-320, 349, 411-412, 430-433, 506->508, 629, 665, 694->697, 698-700, 707-708, 721-723, 724->692, 740, 748, 750, 757->exit, 780, 783->786, 794, 820, 825, 839->843, 850-853, 864, 868, 870, 882-885, 963, 965, 994->1017, 1000->1017, 1018-1023, 1060, 1078->1083, 1082, 1092->1089, 1097->1089, 1105, 1113-1123
nvtabular/ops/clip.py 18 2 6 3 79% 43, 51->53, 54
nvtabular/ops/column_similarity.py 103 24 36 5 72% 19-20, 76->exit, 106, 178-179, 188-190, 198-214, 231->234, 235, 245
nvtabular/ops/data_stats.py 56 2 22 3 94% 91->93, 95, 97->87, 102
nvtabular/ops/difference_lag.py 25 0 8 1 97% 66->68
nvtabular/ops/dropna.py 8 0 0 0 100%
nvtabular/ops/fill.py 57 2 20 1 96% 92, 118
nvtabular/ops/filter.py 20 1 6 1 92% 49
nvtabular/ops/groupby.py 92 4 56 6 92% 71, 80, 82, 92->94, 104->109, 180
nvtabular/ops/hash_bucket.py 29 2 18 2 87% 69, 99
nvtabular/ops/hashed_cross.py 28 3 13 4 83% 50, 63, 77->exit, 78
nvtabular/ops/join_external.py 83 4 36 5 92% 108, 110, 152, 169->171, 205
nvtabular/ops/join_groupby.py 84 5 30 2 94% 106, 109->118, 194-195, 198-199
nvtabular/ops/lambdaop.py 39 6 18 6 79% 59, 63, 77, 89, 94, 103
nvtabular/ops/list_slice.py 63 24 26 1 56% 21-22, 52-53, 100-114, 122-133
nvtabular/ops/logop.py 8 0 0 0 100%
nvtabular/ops/moments.py 65 0 20 0 100%
nvtabular/ops/normalize.py 70 8 14 2 86% 60->59, 67, 75-76, 109-110, 132-133, 137
nvtabular/ops/operator.py 29 3 2 1 87% 25, 104, 109
nvtabular/ops/rename.py 23 3 14 3 84% 45, 66-68
nvtabular/ops/stat_operator.py 8 0 0 0 100%
nvtabular/ops/target_encoding.py 146 11 64 5 90% 147, 167->171, 174->183, 228-229, 232-233, 242-248, 339->342
nvtabular/tools/__init__.py 0 0 0 0 100%
nvtabular/tools/data_gen.py 236 1 62 1 99% 323
nvtabular/tools/dataset_inspector.py 49 7 18 1 79% 31-38
nvtabular/tools/inspector_script.py 46 46 0 0 0% 17-168
nvtabular/utils.py 94 43 44 8 49% 30-31, 35-36, 49, 60-61, 63-65, 68, 71, 77, 83, 89-125, 144, 148->152
nvtabular/worker.py 82 5 38 7 90% 24-25, 82->99, 91, 92->99, 99->102, 108, 110, 111->113
nvtabular/workflow.py 156 11 73 4 93% 28-29, 45, 131, 145-147, 251, 280-281, 369

TOTAL 6232 1145 2471 267 79%
Coverage XML written to file coverage.xml

Required test coverage of 70% reached. Total coverage: 79.20%
========== 1095 passed, 13 skipped, 11 warnings in 775.79s (0:12:55) ===========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://github.com/gitapi/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins3179034071444254022.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #963 of commit 253be1d05a5941145487237ffeeef23aecddd97c, no merge conflicts.
Running as SYSTEM
Setting status of 253be1d05a5941145487237ffeeef23aecddd97c to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2882/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/963/*:refs/remotes/origin/pr/963/* # timeout=10
 > git rev-parse 253be1d05a5941145487237ffeeef23aecddd97c^{commit} # timeout=10
Checking out Revision 253be1d05a5941145487237ffeeef23aecddd97c (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 253be1d05a5941145487237ffeeef23aecddd97c # timeout=10
Commit message: "Merge branch 'main' into order-cats"
 > git rev-list --no-walk e9a07452ad2412d4d2c2162ab458d065bae12a16 # timeout=10
[nvtabular_tests] $ /bin/bash /tmp/jenkins5320129388683237498.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: pip in /var/jenkins_home/.local/lib/python3.8/site-packages (21.1.3)
Requirement already satisfied: setuptools in /var/jenkins_home/.local/lib/python3.8/site-packages (57.4.0)
Requirement already satisfied: wheel in /usr/local/lib/python3.8/dist-packages (0.36.2)
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Requirement already satisfied: pandas<1.3.0dev0,>=1.0 in /usr/local/lib/python3.8/dist-packages (from nvtabular==0.5.3+63.g253be1d) (1.1.5)
Requirement already satisfied: versioneer in /var/jenkins_home/.local/lib/python3.8/site-packages (from nvtabular==0.5.3+63.g253be1d) (0.20)
Requirement already satisfied: numba>=0.53.1 in /usr/local/lib/python3.8/dist-packages (from nvtabular==0.5.3+63.g253be1d) (0.53.1)
Requirement already satisfied: pyarrow in /usr/local/lib/python3.8/dist-packages (from nvtabular==0.5.3+63.g253be1d) (1.0.1)
Requirement already satisfied: PyYAML>=5.3 in /usr/local/lib/python3.8/dist-packages (from nvtabular==0.5.3+63.g253be1d) (5.4.1)
Requirement already satisfied: tdqm in /var/jenkins_home/.local/lib/python3.8/site-packages (from nvtabular==0.5.3+63.g253be1d) (0.0.1)
Requirement already satisfied: distributed==2021.4.1 in /var/jenkins_home/.local/lib/python3.8/site-packages (from nvtabular==0.5.3+63.g253be1d) (2021.4.1)
Requirement already satisfied: dask==2021.4.1 in /usr/local/lib/python3.8/dist-packages (from nvtabular==0.5.3+63.g253be1d) (2021.4.1)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask==2021.4.1->nvtabular==0.5.3+63.g253be1d) (1.2.0)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask==2021.4.1->nvtabular==0.5.3+63.g253be1d) (0.11.1)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask==2021.4.1->nvtabular==0.5.3+63.g253be1d) (1.6.0)
Requirement already satisfied: fsspec>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from dask==2021.4.1->nvtabular==0.5.3+63.g253be1d) (2021.6.1)
Requirement already satisfied: setuptools in /var/jenkins_home/.local/lib/python3.8/site-packages (from distributed==2021.4.1->nvtabular==0.5.3+63.g253be1d) (57.4.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+63.g253be1d) (5.8.0)
Requirement already satisfied: click>=6.6 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+63.g253be1d) (8.0.1)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+63.g253be1d) (1.0.2)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+63.g253be1d) (1.7.0)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+63.g253be1d) (6.1)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+63.g253be1d) (2.4.0)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+63.g253be1d) (2.0.0)
Requirement already satisfied: numpy>=1.15 in /usr/local/lib/python3.8/dist-packages (from numba>=0.53.1->nvtabular==0.5.3+63.g253be1d) (1.20.2)
Requirement already satisfied: llvmlite<0.37,>=0.36.0rc1 in /usr/local/lib/python3.8/dist-packages (from numba>=0.53.1->nvtabular==0.5.3+63.g253be1d) (0.36.0)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas<1.3.0dev0,>=1.0->nvtabular==0.5.3+63.g253be1d) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.8/dist-packages (from pandas<1.3.0dev0,>=1.0->nvtabular==0.5.3+63.g253be1d) (2021.1)
Requirement already satisfied: locket in /usr/local/lib/python3.8/dist-packages (from partd>=0.3.10->dask==2021.4.1->nvtabular==0.5.3+63.g253be1d) (0.2.1)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.8/dist-packages (from python-dateutil>=2.7.3->pandas<1.3.0dev0,>=1.0->nvtabular==0.5.3+63.g253be1d) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed==2021.4.1->nvtabular==0.5.3+63.g253be1d) (1.0.1)
Requirement already satisfied: tqdm in /usr/local/lib/python3.8/dist-packages (from tdqm->nvtabular==0.5.3+63.g253be1d) (4.61.2)
Installing collected packages: nvtabular
  Running setup.py develop for nvtabular
Successfully installed nvtabular
Running black --check
All done! ✨ 🍰 ✨
108 files would be left unchanged.
Running flake8
Running isort
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/images
  warn(f"Likely recursive symlink detected to {resolved_path}")
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/examples/scaling-criteo/imgs
  warn(f"Likely recursive symlink detected to {resolved_path}")
Skipped 1 files
Running bandit
Running pylint
************* Module bench.datasets.tools.train_hugectr
bench/datasets/tools/train_hugectr.py:28:13: I1101: Module 'hugectr' has no 'solver_parser_helper' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)
bench/datasets/tools/train_hugectr.py:41:16: I1101: Module 'hugectr' has no 'optimizer' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)

Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)

Running flake8-nb
Building docs
make: Entering directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
2021-07-20 02:52:57.724257: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-20 02:52:58.936365: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2021-07-20 02:52:58.937464: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:07:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-07-20 02:52:58.938498: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 1 with properties:
pciBusID: 0000:08:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-07-20 02:52:58.938529: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-20 02:52:58.938579: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11
2021-07-20 02:52:58.938615: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11
2021-07-20 02:52:58.938649: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10
2021-07-20 02:52:58.938682: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10
2021-07-20 02:52:58.938729: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11
2021-07-20 02:52:58.938762: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11
2021-07-20 02:52:58.938800: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8
2021-07-20 02:52:58.942753: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0, 1
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.6) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
make: Leaving directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /var/jenkins_home/workspace/nvtabular_tests/nvtabular, configfile: pyproject.toml
plugins: cov-2.12.1, forked-1.3.0, xdist-2.3.0
collected 1109 items

tests/unit/test_column_group.py .. [ 0%]
tests/unit/test_column_similarity.py ........................ [ 2%]
tests/unit/test_cpu_workflow.py ...... [ 2%]
tests/unit/test_dask_nvt.py ............................................ [ 6%]
..................................................................... [ 13%]
tests/unit/test_dataloader_backend.py . [ 13%]
tests/unit/test_io.py ...............................................Terminated
Build was aborted
Aborted by admin
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://github.com/gitapi/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins5163626286810907067.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #963 of commit 04f42e04b4040cc6fe412f6d7cac7d8da5d1db22, no merge conflicts.
Running as SYSTEM
Setting status of 04f42e04b4040cc6fe412f6d7cac7d8da5d1db22 to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2883/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/963/*:refs/remotes/origin/pr/963/* # timeout=10
 > git rev-parse 04f42e04b4040cc6fe412f6d7cac7d8da5d1db22^{commit} # timeout=10
Checking out Revision 04f42e04b4040cc6fe412f6d7cac7d8da5d1db22 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 04f42e04b4040cc6fe412f6d7cac7d8da5d1db22 # timeout=10
Commit message: "Merge branch 'order-cats' of https://github.com/jperez999/NVTabular into order-cats"
 > git rev-list --no-walk 253be1d05a5941145487237ffeeef23aecddd97c # timeout=10
[nvtabular_tests] $ /bin/bash /tmp/jenkins8699563649216223805.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: pip in /var/jenkins_home/.local/lib/python3.8/site-packages (21.1.3)
Requirement already satisfied: setuptools in /var/jenkins_home/.local/lib/python3.8/site-packages (57.4.0)
Requirement already satisfied: wheel in /usr/local/lib/python3.8/dist-packages (0.36.2)
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Requirement already satisfied: pandas<1.3.0dev0,>=1.0 in /usr/local/lib/python3.8/dist-packages (from nvtabular==0.5.3+66.g04f42e0) (1.1.5)
Requirement already satisfied: PyYAML>=5.3 in /usr/local/lib/python3.8/dist-packages (from nvtabular==0.5.3+66.g04f42e0) (5.4.1)
Requirement already satisfied: pyarrow in /usr/local/lib/python3.8/dist-packages (from nvtabular==0.5.3+66.g04f42e0) (1.0.1)
Requirement already satisfied: tdqm in /var/jenkins_home/.local/lib/python3.8/site-packages (from nvtabular==0.5.3+66.g04f42e0) (0.0.1)
Requirement already satisfied: distributed==2021.4.1 in /var/jenkins_home/.local/lib/python3.8/site-packages (from nvtabular==0.5.3+66.g04f42e0) (2021.4.1)
Requirement already satisfied: numba>=0.53.1 in /usr/local/lib/python3.8/dist-packages (from nvtabular==0.5.3+66.g04f42e0) (0.53.1)
Requirement already satisfied: versioneer in /var/jenkins_home/.local/lib/python3.8/site-packages (from nvtabular==0.5.3+66.g04f42e0) (0.20)
Requirement already satisfied: dask==2021.4.1 in /usr/local/lib/python3.8/dist-packages (from nvtabular==0.5.3+66.g04f42e0) (2021.4.1)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask==2021.4.1->nvtabular==0.5.3+66.g04f42e0) (1.2.0)
Requirement already satisfied: fsspec>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from dask==2021.4.1->nvtabular==0.5.3+66.g04f42e0) (2021.6.1)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask==2021.4.1->nvtabular==0.5.3+66.g04f42e0) (1.6.0)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask==2021.4.1->nvtabular==0.5.3+66.g04f42e0) (0.11.1)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+66.g04f42e0) (1.0.2)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+66.g04f42e0) (5.8.0)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+66.g04f42e0) (2.0.0)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+66.g04f42e0) (2.4.0)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+66.g04f42e0) (6.1)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+66.g04f42e0) (1.7.0)
Requirement already satisfied: setuptools in /var/jenkins_home/.local/lib/python3.8/site-packages (from distributed==2021.4.1->nvtabular==0.5.3+66.g04f42e0) (57.4.0)
Requirement already satisfied: click>=6.6 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+66.g04f42e0) (8.0.1)
Requirement already satisfied: numpy>=1.15 in /usr/local/lib/python3.8/dist-packages (from numba>=0.53.1->nvtabular==0.5.3+66.g04f42e0) (1.20.2)
Requirement already satisfied: llvmlite<0.37,>=0.36.0rc1 in /usr/local/lib/python3.8/dist-packages (from numba>=0.53.1->nvtabular==0.5.3+66.g04f42e0) (0.36.0)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas<1.3.0dev0,>=1.0->nvtabular==0.5.3+66.g04f42e0) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.8/dist-packages (from pandas<1.3.0dev0,>=1.0->nvtabular==0.5.3+66.g04f42e0) (2021.1)
Requirement already satisfied: locket in /usr/local/lib/python3.8/dist-packages (from partd>=0.3.10->dask==2021.4.1->nvtabular==0.5.3+66.g04f42e0) (0.2.1)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.8/dist-packages (from python-dateutil>=2.7.3->pandas<1.3.0dev0,>=1.0->nvtabular==0.5.3+66.g04f42e0) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed==2021.4.1->nvtabular==0.5.3+66.g04f42e0) (1.0.1)
Requirement already satisfied: tqdm in /usr/local/lib/python3.8/dist-packages (from tdqm->nvtabular==0.5.3+66.g04f42e0) (4.61.2)
Installing collected packages: nvtabular
  Running setup.py develop for nvtabular
Successfully installed nvtabular
Running black --check
All done! ✨ 🍰 ✨
108 files would be left unchanged.
Running flake8
Running isort
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/images
  warn(f"Likely recursive symlink detected to {resolved_path}")
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/examples/scaling-criteo/imgs
  warn(f"Likely recursive symlink detected to {resolved_path}")
Skipped 1 files
Running bandit
Running pylint
************* Module bench.datasets.tools.train_hugectr
bench/datasets/tools/train_hugectr.py:28:13: I1101: Module 'hugectr' has no 'solver_parser_helper' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)
bench/datasets/tools/train_hugectr.py:41:16: I1101: Module 'hugectr' has no 'optimizer' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)

Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)

Running flake8-nb
Building docs
make: Entering directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
2021-07-20 02:57:39.524342: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-20 02:57:40.722662: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2021-07-20 02:57:40.723890: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:07:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-07-20 02:57:40.725035: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 1 with properties:
pciBusID: 0000:08:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-07-20 02:57:40.725067: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-20 02:57:40.725120: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11
2021-07-20 02:57:40.725158: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11
2021-07-20 02:57:40.725197: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10
2021-07-20 02:57:40.725232: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10
2021-07-20 02:57:40.725284: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11
2021-07-20 02:57:40.725319: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11
2021-07-20 02:57:40.725362: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8
2021-07-20 02:57:40.729810: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0, 1
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.6) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
make: Leaving directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /var/jenkins_home/workspace/nvtabular_tests/nvtabular, configfile: pyproject.toml
plugins: cov-2.12.1, forked-1.3.0, xdist-2.3.0
collected 1109 items

tests/unit/test_column_group.py .. [ 0%]
tests/unit/test_column_similarity.py ........................ [ 2%]
tests/unit/test_cpu_workflow.py ...... [ 2%]
tests/unit/test_dask_nvt.py ............................................ [ 6%]
..................................................................... [ 13%]
tests/unit/test_dataloader_backend.py . [ 13%]
tests/unit/test_io.py .................................................. [ 17%]
....................................................................ssss [ 24%]
ssss.................................................. [ 29%]
tests/unit/test_notebooks.py ...... [ 29%]
tests/unit/test_ops.py ................................................. [ 33%]
........................................................................ [ 40%]
........................................................................ [ 46%]
........................................................................ [ 53%]
........................................................................ [ 59%]
........................................................................ [ 66%]
. [ 66%]
tests/unit/test_s3.py .. [ 66%]
tests/unit/test_tf_dataloader.py ....................................... [ 70%]
.................................s [ 73%]
tests/unit/test_tf_layers.py ........................................... [ 77%]
................................... [ 80%]
tests/unit/test_tools.py ...................... [ 82%]
tests/unit/test_torch_dataloader.py .................................... [ 85%]
.............................................. [ 89%]
tests/unit/test_triton_inference.py ssss.................. [ 91%]
tests/unit/test_workflow.py ............................................ [ 95%]
................................................ [100%]

=============================== warnings summary ===============================
tests/unit/test_ops.py::test_fill_missing[True-True-parquet]
tests/unit/test_ops.py::test_fill_missing[True-False-parquet]
tests/unit/test_ops.py::test_filter[parquet-0.1-True]
/usr/local/lib/python3.8/dist-packages/pandas/core/indexing.py:670: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
iloc._setitem_with_indexer(indexer, value)

tests/unit/test_ops.py::test_join_external[True-True-left-host-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-left-device-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-inner-host-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-inner-device-pandas-parquet]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/ops/join_external.py:171: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
_ext.drop_duplicates(ignore_index=True, inplace=True)

tests/unit/test_ops.py::test_filter[parquet-0.1-True]
tests/unit/test_ops.py::test_filter[parquet-0.1-False]
tests/unit/test_ops.py::test_groupby_op[id-False]
tests/unit/test_ops.py::test_groupby_op[id-True]
/usr/local/lib/python3.8/dist-packages/dask/dataframe/core.py:6610: UserWarning: Insufficient elements for head. 1 elements requested, only 0 elements available. Try passing larger npartitions to head.
warnings.warn(msg.format(n, len(r)))

-- Docs: https://docs.pytest.org/en/stable/warnings.html

---------- coverage: platform linux, python 3.8.10-final-0 -----------
Name Stmts Miss Branch BrPart Cover Missing

examples/multi-gpu-movielens/torch_trainer.py 65 0 6 1 99% 32->36
nvtabular/__init__.py 12 0 0 0 100%
nvtabular/column_group.py 157 18 82 5 87% 54, 87, 128, 152-165, 214, 301
nvtabular/dispatch.py 232 38 112 19 82% 33-35, 40-42, 48-58, 62-63, 83, 90, 98, 111, 116->118, 129, 152-155, 194, 210, 217, 248->253, 251, 254, 257->261, 294, 305-308, 351, 355, 396, 420, 422, 429
nvtabular/framework_utils/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/tensorflow/__init__.py 1 0 0 0 100%
nvtabular/framework_utils/tensorflow/feature_column_utils.py 146 137 96 0 4% 28-32, 69-303
nvtabular/framework_utils/tensorflow/layers/__init__.py 4 0 0 0 100%
nvtabular/framework_utils/tensorflow/layers/embedding.py 153 12 85 6 91% 60, 68->49, 122, 179, 231-239, 335->343, 357->360, 363-364, 367
nvtabular/framework_utils/tensorflow/layers/interaction.py 47 25 20 1 43% 49, 74-103, 106-110, 113
nvtabular/framework_utils/tensorflow/layers/outer_product.py 30 24 10 0 15% 37-38, 41-60, 71-84, 87
nvtabular/framework_utils/torch/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/torch/layers/__init__.py 2 0 0 0 100%
nvtabular/framework_utils/torch/layers/embeddings.py 30 1 12 1 95% 47
nvtabular/framework_utils/torch/models.py 45 0 28 0 100%
nvtabular/framework_utils/torch/utils.py 75 4 30 2 94% 64, 118-120
nvtabular/inference/__init__.py 0 0 0 0 100%
nvtabular/inference/triton/__init__.py 279 157 120 14 43% 118-168, 213-274, 305, 307, 331-343, 347-363, 367-370, 374, 396-412, 416-420, 506-528, 532-599, 608->611, 611->607, 640-650, 654-655, 659, 669, 675, 677, 679, 681, 683, 685, 690, 694-700
nvtabular/inference/triton/benchmarking_tools.py 52 52 10 0 0% 2-103
nvtabular/inference/triton/data_conversions.py 87 3 58 4 95% 32-33, 84
nvtabular/inference/triton/model.py 140 140 66 0 0% 27-266
nvtabular/inference/triton/model_config_pb2.py 299 0 2 0 100%
nvtabular/io/__init__.py 4 0 0 0 100%
nvtabular/io/avro.py 88 88 30 0 0% 16-189
nvtabular/io/csv.py 57 6 20 5 86% 22-23, 99, 103->107, 108, 110, 124
nvtabular/io/dask.py 179 7 68 11 93% 110, 113, 149, 224, 384->382, 412->415, 423, 427->429, 429->425, 434, 436
nvtabular/io/dataframe_engine.py 61 5 28 6 88% 19-20, 50, 69, 88->92, 92->97, 94->97, 97->116, 125
nvtabular/io/dataset.py 283 35 124 23 84% 43-44, 245, 247, 260, 269, 287-301, 404->473, 409-412, 417->427, 422-423, 434->432, 448->452, 463, 523->527, 570, 695-696, 700->702, 702->711, 712, 719-720, 726, 732, 827-828, 944-949, 955, 1005
nvtabular/io/dataset_engine.py 23 1 0 0 96% 45
nvtabular/io/hugectr.py 45 2 24 2 91% 34, 74->97, 101
nvtabular/io/parquet.py 492 21 156 12 95% 33-34, 92-100, 124->126, 213-215, 338-343, 381-386, 502->509, 570->575, 576-577, 697, 701, 705, 743, 760, 764, 771->773, 891->896, 901->911, 938
nvtabular/io/shuffle.py 31 6 16 5 77% 42, 44-45, 49, 59, 63
nvtabular/io/writer.py 173 13 66 5 92% 24-25, 51, 79, 125, 128, 207, 216, 219, 262, 283-285
nvtabular/io/writer_factory.py 18 2 8 2 85% 35, 60
nvtabular/loader/__init__.py 0 0 0 0 100%
nvtabular/loader/backend.py 327 12 138 9 95% 142-143, 233->235, 245-249, 295-296, 335->339, 410, 414-415, 445, 550, 558
nvtabular/loader/tensorflow.py 155 22 50 7 85% 57, 65-68, 78, 88, 296, 332, 347-349, 378-380, 390-398, 401-404
nvtabular/loader/tf_utils.py 55 10 20 5 80% 29->32, 32->34, 39->41, 43, 50-51, 58-60, 66-70
nvtabular/loader/torch.py 81 13 16 2 78% 25-27, 30-36, 111, 149-150
nvtabular/ops/__init__.py 21 0 0 0 100%
nvtabular/ops/bucketize.py 32 10 18 3 62% 52-54, 58, 61-64, 83-86
nvtabular/ops/categorify.py 542 64 310 43 86% 240, 256, 260, 268, 276, 278, 300, 319-320, 347, 409-410, 428-431, 504->506, 627, 663, 692->695, 696-698, 705-706, 719-721, 722->690, 738, 746, 748, 755->exit, 778, 781->784, 792, 817, 822, 836->840, 847-850, 861, 865, 867, 879-882, 960, 962, 991->1014, 997->1014, 1015-1020, 1057, 1075->1080, 1079, 1089->1086, 1094->1086, 1102, 1110-1120
nvtabular/ops/clip.py 18 2 6 3 79% 43, 51->53, 54
nvtabular/ops/column_similarity.py 103 24 36 5 72% 19-20, 76->exit, 106, 178-179, 188-190, 198-214, 231->234, 235, 245
nvtabular/ops/data_stats.py 56 2 22 3 94% 91->93, 95, 97->87, 102
nvtabular/ops/difference_lag.py 25 0 8 1 97% 66->68
nvtabular/ops/dropna.py 8 0 0 0 100%
nvtabular/ops/fill.py 57 2 20 1 96% 92, 118
nvtabular/ops/filter.py 20 1 6 1 92% 49
nvtabular/ops/groupby.py 92 4 56 6 92% 71, 80, 82, 92->94, 104->109, 180
nvtabular/ops/hash_bucket.py 29 2 18 2 87% 69, 99
nvtabular/ops/hashed_cross.py 28 3 13 4 83% 50, 63, 77->exit, 78
nvtabular/ops/join_external.py 89 7 38 6 90% 20-21, 113, 115, 117, 159, 176->178, 212
nvtabular/ops/join_groupby.py 84 5 30 2 94% 106, 109->118, 194-195, 198-199
nvtabular/ops/lambdaop.py 39 6 18 6 79% 59, 63, 77, 89, 94, 103
nvtabular/ops/list_slice.py 63 24 26 1 56% 21-22, 52-53, 100-114, 122-133
nvtabular/ops/logop.py 8 0 0 0 100%
nvtabular/ops/moments.py 65 0 20 0 100%
nvtabular/ops/normalize.py 70 8 14 2 86% 60->59, 67, 75-76, 109-110, 132-133, 137
nvtabular/ops/operator.py 29 3 2 1 87% 25, 104, 109
nvtabular/ops/rename.py 23 3 14 3 84% 45, 66-68
nvtabular/ops/stat_operator.py 8 0 0 0 100%
nvtabular/ops/target_encoding.py 146 11 64 5 90% 147, 167->171, 174->183, 228-229, 232-233, 242-248, 339->342
nvtabular/tools/__init__.py 0 0 0 0 100%
nvtabular/tools/data_gen.py 236 1 62 1 99% 323
nvtabular/tools/dataset_inspector.py 49 7 18 1 79% 31-38
nvtabular/tools/inspector_script.py 46 46 0 0 0% 17-168
nvtabular/utils.py 94 43 44 8 49% 30-31, 35-36, 49, 60-61, 63-65, 68, 71, 77, 83, 89-125, 144, 148->152
nvtabular/worker.py 82 5 38 7 90% 24-25, 82->99, 91, 92->99, 99->102, 108, 110, 111->113
nvtabular/workflow.py 156 11 73 4 93% 28-29, 45, 131, 145-147, 251, 280-281, 369

TOTAL 6246 1148 2475 266 79%
Coverage XML written to file coverage.xml

Required test coverage of 70% reached. Total coverage: 79.22%
========== 1096 passed, 13 skipped, 11 warnings in 809.12s (0:13:29) ===========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://github.com/gitapi/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins1280722383710920091.sh

@nvidia-merlin-bot
Copy link
Contributor

Click to view CI Results
GitHub pull request #963 of commit e0e1591e8591c3e6a9739b62515078b909115043, no merge conflicts.
Running as SYSTEM
Setting status of e0e1591e8591c3e6a9739b62515078b909115043 to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2885/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/963/*:refs/remotes/origin/pr/963/* # timeout=10
 > git rev-parse e0e1591e8591c3e6a9739b62515078b909115043^{commit} # timeout=10
Checking out Revision e0e1591e8591c3e6a9739b62515078b909115043 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e0e1591e8591c3e6a9739b62515078b909115043 # timeout=10
Commit message: "Merge branch 'main' into order-cats"
 > git rev-list --no-walk feff50e7ba0870f9fd51c12f7e4957dffdda0b1f # timeout=10
[nvtabular_tests] $ /bin/bash /tmp/jenkins6310523039384702058.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: pip in /var/jenkins_home/.local/lib/python3.8/site-packages (21.1.3)
Requirement already satisfied: setuptools in /var/jenkins_home/.local/lib/python3.8/site-packages (57.4.0)
Requirement already satisfied: wheel in /usr/local/lib/python3.8/dist-packages (0.36.2)
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Requirement already satisfied: pyarrow in /usr/local/lib/python3.8/dist-packages (from nvtabular==0.5.3+68.ge0e1591) (1.0.1)
Requirement already satisfied: pandas<1.3.0dev0,>=1.0 in /usr/local/lib/python3.8/dist-packages (from nvtabular==0.5.3+68.ge0e1591) (1.1.5)
Requirement already satisfied: tdqm in /var/jenkins_home/.local/lib/python3.8/site-packages (from nvtabular==0.5.3+68.ge0e1591) (0.0.1)
Requirement already satisfied: distributed==2021.4.1 in /var/jenkins_home/.local/lib/python3.8/site-packages (from nvtabular==0.5.3+68.ge0e1591) (2021.4.1)
Requirement already satisfied: versioneer in /var/jenkins_home/.local/lib/python3.8/site-packages (from nvtabular==0.5.3+68.ge0e1591) (0.20)
Requirement already satisfied: PyYAML>=5.3 in /usr/local/lib/python3.8/dist-packages (from nvtabular==0.5.3+68.ge0e1591) (5.4.1)
Requirement already satisfied: dask==2021.4.1 in /usr/local/lib/python3.8/dist-packages (from nvtabular==0.5.3+68.ge0e1591) (2021.4.1)
Requirement already satisfied: numba>=0.53.1 in /usr/local/lib/python3.8/dist-packages (from nvtabular==0.5.3+68.ge0e1591) (0.53.1)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask==2021.4.1->nvtabular==0.5.3+68.ge0e1591) (1.2.0)
Requirement already satisfied: fsspec>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from dask==2021.4.1->nvtabular==0.5.3+68.ge0e1591) (2021.6.1)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask==2021.4.1->nvtabular==0.5.3+68.ge0e1591) (1.6.0)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask==2021.4.1->nvtabular==0.5.3+68.ge0e1591) (0.11.1)
Requirement already satisfied: setuptools in /var/jenkins_home/.local/lib/python3.8/site-packages (from distributed==2021.4.1->nvtabular==0.5.3+68.ge0e1591) (57.4.0)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+68.ge0e1591) (6.1)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+68.ge0e1591) (2.0.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+68.ge0e1591) (1.0.2)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+68.ge0e1591) (1.7.0)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+68.ge0e1591) (2.4.0)
Requirement already satisfied: click>=6.6 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+68.ge0e1591) (8.0.1)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+68.ge0e1591) (5.8.0)
Requirement already satisfied: numpy>=1.15 in /usr/local/lib/python3.8/dist-packages (from numba>=0.53.1->nvtabular==0.5.3+68.ge0e1591) (1.20.2)
Requirement already satisfied: llvmlite<0.37,>=0.36.0rc1 in /usr/local/lib/python3.8/dist-packages (from numba>=0.53.1->nvtabular==0.5.3+68.ge0e1591) (0.36.0)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas<1.3.0dev0,>=1.0->nvtabular==0.5.3+68.ge0e1591) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.8/dist-packages (from pandas<1.3.0dev0,>=1.0->nvtabular==0.5.3+68.ge0e1591) (2021.1)
Requirement already satisfied: locket in /usr/local/lib/python3.8/dist-packages (from partd>=0.3.10->dask==2021.4.1->nvtabular==0.5.3+68.ge0e1591) (0.2.1)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.8/dist-packages (from python-dateutil>=2.7.3->pandas<1.3.0dev0,>=1.0->nvtabular==0.5.3+68.ge0e1591) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed==2021.4.1->nvtabular==0.5.3+68.ge0e1591) (1.0.1)
Requirement already satisfied: tqdm in /usr/local/lib/python3.8/dist-packages (from tdqm->nvtabular==0.5.3+68.ge0e1591) (4.61.2)
Installing collected packages: nvtabular
  Running setup.py develop for nvtabular
Successfully installed nvtabular
Running black --check
All done! ✨ 🍰 ✨
108 files would be left unchanged.
Running flake8
Running isort
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/images
  warn(f"Likely recursive symlink detected to {resolved_path}")
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/examples/scaling-criteo/imgs
  warn(f"Likely recursive symlink detected to {resolved_path}")
Skipped 1 files
Running bandit
Running pylint
************* Module bench.datasets.tools.train_hugectr
bench/datasets/tools/train_hugectr.py:28:13: I1101: Module 'hugectr' has no 'solver_parser_helper' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)
bench/datasets/tools/train_hugectr.py:41:16: I1101: Module 'hugectr' has no 'optimizer' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)

Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)

Running flake8-nb
Building docs
make: Entering directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
2021-07-20 03:29:56.701080: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-20 03:29:57.915300: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2021-07-20 03:29:57.916465: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:07:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-07-20 03:29:57.917551: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 1 with properties:
pciBusID: 0000:08:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-07-20 03:29:57.917582: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-20 03:29:57.917636: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11
2021-07-20 03:29:57.917672: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11
2021-07-20 03:29:57.917707: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10
2021-07-20 03:29:57.917742: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10
2021-07-20 03:29:57.917790: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11
2021-07-20 03:29:57.917824: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11
2021-07-20 03:29:57.917865: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8
2021-07-20 03:29:57.922032: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0, 1
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.6) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
make: Leaving directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /var/jenkins_home/workspace/nvtabular_tests/nvtabular, configfile: pyproject.toml
plugins: cov-2.12.1, forked-1.3.0, xdist-2.3.0
collected 1110 items

tests/unit/test_column_group.py .. [ 0%]
tests/unit/test_column_similarity.py ........................ [ 2%]
tests/unit/test_cpu_workflow.py ...... [ 2%]
tests/unit/test_dask_nvt.py ............................................ [ 6%]
..................................................................... [ 13%]
tests/unit/test_dataloader_backend.py . [ 13%]
tests/unit/test_io.py .................................................. [ 17%]
....................................................................ssss [ 24%]
ssss.................................................. [ 29%]
tests/unit/test_notebooks.py ...... [ 29%]
tests/unit/test_ops.py ................................................. [ 33%]
........................................................................ [ 40%]
........................................................................ [ 46%]
........................................................................ [ 53%]
........................................................................ [ 59%]
........................................................................ [ 66%]
. [ 66%]
tests/unit/test_s3.py .. [ 66%]
tests/unit/test_tf_dataloader.py ....................................... [ 70%]
.................................s [ 73%]
tests/unit/test_tf_layers.py ........................................... [ 77%]
................................... [ 80%]
tests/unit/test_tools.py ...................... [ 82%]
tests/unit/test_torch_dataloader.py .................................... [ 85%]
.............................................. [ 89%]
tests/unit/test_triton_inference.py sssss.................. [ 91%]
tests/unit/test_workflow.py ............................................ [ 95%]
................................................ [100%]

=============================== warnings summary ===============================
tests/unit/test_ops.py::test_fill_missing[True-True-parquet]
tests/unit/test_ops.py::test_fill_missing[True-False-parquet]
tests/unit/test_ops.py::test_filter[parquet-0.1-True]
/usr/local/lib/python3.8/dist-packages/pandas/core/indexing.py:670: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
iloc._setitem_with_indexer(indexer, value)

tests/unit/test_ops.py::test_join_external[True-True-left-host-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-left-device-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-inner-host-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-inner-device-pandas-parquet]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/ops/join_external.py:171: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
_ext.drop_duplicates(ignore_index=True, inplace=True)

tests/unit/test_ops.py::test_filter[parquet-0.1-True]
tests/unit/test_ops.py::test_filter[parquet-0.1-False]
tests/unit/test_ops.py::test_groupby_op[id-False]
tests/unit/test_ops.py::test_groupby_op[id-True]
/usr/local/lib/python3.8/dist-packages/dask/dataframe/core.py:6610: UserWarning: Insufficient elements for head. 1 elements requested, only 0 elements available. Try passing larger npartitions to head.
warnings.warn(msg.format(n, len(r)))

-- Docs: https://docs.pytest.org/en/stable/warnings.html

---------- coverage: platform linux, python 3.8.10-final-0 -----------
Name Stmts Miss Branch BrPart Cover Missing

examples/multi-gpu-movielens/torch_trainer.py 65 0 6 1 99% 32->36
nvtabular/__init__.py 12 0 0 0 100%
nvtabular/column_group.py 157 18 82 5 87% 54, 87, 128, 152-165, 214, 301
nvtabular/dispatch.py 232 38 112 19 82% 33-35, 40-42, 48-58, 62-63, 83, 90, 98, 111, 116->118, 129, 152-155, 194, 210, 217, 248->253, 251, 254, 257->261, 294, 305-308, 351, 355, 396, 420, 422, 429
nvtabular/framework_utils/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/tensorflow/__init__.py 1 0 0 0 100%
nvtabular/framework_utils/tensorflow/feature_column_utils.py 146 137 96 0 4% 28-32, 69-303
nvtabular/framework_utils/tensorflow/layers/__init__.py 4 0 0 0 100%
nvtabular/framework_utils/tensorflow/layers/embedding.py 153 12 85 6 91% 60, 68->49, 122, 179, 231-239, 335->343, 357->360, 363-364, 367
nvtabular/framework_utils/tensorflow/layers/interaction.py 47 25 20 1 43% 49, 74-103, 106-110, 113
nvtabular/framework_utils/tensorflow/layers/outer_product.py 30 24 10 0 15% 37-38, 41-60, 71-84, 87
nvtabular/framework_utils/torch/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/torch/layers/__init__.py 2 0 0 0 100%
nvtabular/framework_utils/torch/layers/embeddings.py 30 1 12 1 95% 47
nvtabular/framework_utils/torch/models.py 45 0 28 0 100%
nvtabular/framework_utils/torch/utils.py 75 4 30 2 94% 64, 118-120
nvtabular/inference/__init__.py 0 0 0 0 100%
nvtabular/inference/triton/__init__.py 279 157 120 14 43% 118-168, 213-274, 305, 307, 331-343, 347-363, 367-370, 374, 396-412, 416-420, 506-528, 532-599, 608->611, 611->607, 640-650, 654-655, 659, 669, 675, 677, 679, 681, 683, 685, 690, 694-700
nvtabular/inference/triton/benchmarking_tools.py 52 52 10 0 0% 2-103
nvtabular/inference/triton/data_conversions.py 87 3 58 4 95% 32-33, 84
nvtabular/inference/triton/model.py 140 140 66 0 0% 27-266
nvtabular/inference/triton/model_config_pb2.py 299 0 2 0 100%
nvtabular/io/__init__.py 4 0 0 0 100%
nvtabular/io/avro.py 88 88 30 0 0% 16-189
nvtabular/io/csv.py 57 6 20 5 86% 22-23, 99, 103->107, 108, 110, 124
nvtabular/io/dask.py 179 7 68 11 93% 110, 113, 149, 224, 384->382, 412->415, 423, 427->429, 429->425, 434, 436
nvtabular/io/dataframe_engine.py 61 5 28 6 88% 19-20, 50, 69, 88->92, 92->97, 94->97, 97->116, 125
nvtabular/io/dataset.py 283 35 124 23 84% 43-44, 245, 247, 260, 269, 287-301, 404->473, 409-412, 417->427, 422-423, 434->432, 448->452, 463, 523->527, 570, 695-696, 700->702, 702->711, 712, 719-720, 726, 732, 827-828, 944-949, 955, 1005
nvtabular/io/dataset_engine.py 23 1 0 0 96% 45
nvtabular/io/hugectr.py 45 2 24 2 91% 34, 74->97, 101
nvtabular/io/parquet.py 492 21 156 12 95% 33-34, 92-100, 124->126, 213-215, 338-343, 381-386, 502->509, 570->575, 576-577, 697, 701, 705, 743, 760, 764, 771->773, 891->896, 901->911, 938
nvtabular/io/shuffle.py 31 6 16 5 77% 42, 44-45, 49, 59, 63
nvtabular/io/writer.py 173 13 66 5 92% 24-25, 51, 79, 125, 128, 207, 216, 219, 262, 283-285
nvtabular/io/writer_factory.py 18 2 8 2 85% 35, 60
nvtabular/loader/__init__.py 0 0 0 0 100%
nvtabular/loader/backend.py 327 12 138 9 95% 142-143, 233->235, 245-249, 295-296, 335->339, 410, 414-415, 445, 550, 558
nvtabular/loader/tensorflow.py 155 22 50 7 85% 57, 65-68, 78, 88, 296, 332, 347-349, 378-380, 390-398, 401-404
nvtabular/loader/tf_utils.py 55 10 20 5 80% 29->32, 32->34, 39->41, 43, 50-51, 58-60, 66-70
nvtabular/loader/torch.py 81 13 16 2 78% 25-27, 30-36, 111, 149-150
nvtabular/ops/__init__.py 21 0 0 0 100%
nvtabular/ops/bucketize.py 32 10 18 3 62% 52-54, 58, 61-64, 83-86
nvtabular/ops/categorify.py 542 64 310 43 86% 240, 256, 260, 268, 276, 278, 300, 319-320, 347, 409-410, 428-431, 504->506, 627, 663, 692->695, 696-698, 705-706, 719-721, 722->690, 738, 746, 748, 755->exit, 778, 781->784, 792, 817, 822, 836->840, 847-850, 861, 865, 867, 879-882, 960, 962, 991->1014, 997->1014, 1015-1020, 1057, 1075->1080, 1079, 1089->1086, 1094->1086, 1102, 1110-1120
nvtabular/ops/clip.py 18 2 6 3 79% 43, 51->53, 54
nvtabular/ops/column_similarity.py 103 24 36 5 72% 19-20, 76->exit, 106, 178-179, 188-190, 198-214, 231->234, 235, 245
nvtabular/ops/data_stats.py 56 2 22 3 94% 91->93, 95, 97->87, 102
nvtabular/ops/difference_lag.py 25 0 8 1 97% 66->68
nvtabular/ops/dropna.py 8 0 0 0 100%
nvtabular/ops/fill.py 57 2 20 1 96% 92, 118
nvtabular/ops/filter.py 20 1 6 1 92% 49
nvtabular/ops/groupby.py 92 4 56 6 92% 71, 80, 82, 92->94, 104->109, 180
nvtabular/ops/hash_bucket.py 29 2 18 2 87% 69, 99
nvtabular/ops/hashed_cross.py 28 3 13 4 83% 50, 63, 77->exit, 78
nvtabular/ops/join_external.py 89 7 38 6 90% 20-21, 113, 115, 117, 159, 176->178, 212
nvtabular/ops/join_groupby.py 84 5 30 2 94% 106, 109->118, 194-195, 198-199
nvtabular/ops/lambdaop.py 39 6 18 6 79% 59, 63, 77, 89, 94, 103
nvtabular/ops/list_slice.py 63 24 26 1 56% 21-22, 52-53, 100-114, 122-133
nvtabular/ops/logop.py 8 0 0 0 100%
nvtabular/ops/moments.py 65 0 20 0 100%
nvtabular/ops/normalize.py 70 8 14 2 86% 60->59, 67, 75-76, 109-110, 132-133, 137
nvtabular/ops/operator.py 29 3 2 1 87% 25, 104, 109
nvtabular/ops/rename.py 23 3 14 3 84% 45, 66-68
nvtabular/ops/stat_operator.py 8 0 0 0 100%
nvtabular/ops/target_encoding.py 146 11 64 5 90% 147, 167->171, 174->183, 228-229, 232-233, 242-248, 339->342
nvtabular/tools/__init__.py 0 0 0 0 100%
nvtabular/tools/data_gen.py 236 1 62 1 99% 323
nvtabular/tools/dataset_inspector.py 49 7 18 1 79% 31-38
nvtabular/tools/inspector_script.py 46 46 0 0 0% 17-168
nvtabular/utils.py 94 43 44 8 49% 30-31, 35-36, 49, 60-61, 63-65, 68, 71, 77, 83, 89-125, 144, 148->152
nvtabular/worker.py 82 5 38 7 90% 24-25, 82->99, 91, 92->99, 99->102, 108, 110, 111->113
nvtabular/workflow.py 156 11 73 4 93% 28-29, 45, 131, 145-147, 251, 280-281, 369

TOTAL 6246 1148 2475 266 79%
Coverage XML written to file coverage.xml

Required test coverage of 70% reached. Total coverage: 79.22%
========== 1096 passed, 14 skipped, 11 warnings in 821.90s (0:13:41) ===========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://github.com/gitapi/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins2814255541065923186.sh

@nvidia-merlin-bot
Contributor

Click to view CI Results
GitHub pull request #963 of commit 5f6e5fcbe72c53b5b96f4341e09a88f6a0526a20, no merge conflicts.
Running as SYSTEM
Setting status of 5f6e5fcbe72c53b5b96f4341e09a88f6a0526a20 to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2898/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/963/*:refs/remotes/origin/pr/963/* # timeout=10
 > git rev-parse 5f6e5fcbe72c53b5b96f4341e09a88f6a0526a20^{commit} # timeout=10
Checking out Revision 5f6e5fcbe72c53b5b96f4341e09a88f6a0526a20 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 5f6e5fcbe72c53b5b96f4341e09a88f6a0526a20 # timeout=10
Commit message: "Merge branch 'main' into order-cats"
 > git rev-list --no-walk 701eabcd27d252d613311f43f0d1eaaa142625d3 # timeout=10
First time build. Skipping changelog.
[nvtabular_tests] $ /bin/bash /tmp/jenkins4112581149363508753.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: pip in /var/jenkins_home/.local/lib/python3.8/site-packages (21.1.3)
Requirement already satisfied: setuptools in /var/jenkins_home/.local/lib/python3.8/site-packages (57.4.0)
Requirement already satisfied: wheel in /usr/local/lib/python3.8/dist-packages (0.36.2)
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Requirement already satisfied: PyYAML>=5.3 in /usr/local/lib/python3.8/dist-packages (from nvtabular==0.5.3+70.g5f6e5fc) (5.4.1)
Requirement already satisfied: pandas<1.3.0dev0,>=1.0 in /usr/local/lib/python3.8/dist-packages (from nvtabular==0.5.3+70.g5f6e5fc) (1.1.5)
Requirement already satisfied: pyarrow in /usr/local/lib/python3.8/dist-packages (from nvtabular==0.5.3+70.g5f6e5fc) (1.0.1)
Requirement already satisfied: versioneer in /var/jenkins_home/.local/lib/python3.8/site-packages (from nvtabular==0.5.3+70.g5f6e5fc) (0.20)
Requirement already satisfied: distributed==2021.4.1 in /var/jenkins_home/.local/lib/python3.8/site-packages (from nvtabular==0.5.3+70.g5f6e5fc) (2021.4.1)
Requirement already satisfied: dask==2021.4.1 in /usr/local/lib/python3.8/dist-packages (from nvtabular==0.5.3+70.g5f6e5fc) (2021.4.1)
Requirement already satisfied: tdqm in /var/jenkins_home/.local/lib/python3.8/site-packages (from nvtabular==0.5.3+70.g5f6e5fc) (0.0.1)
Requirement already satisfied: numba>=0.53.1 in /usr/local/lib/python3.8/dist-packages (from nvtabular==0.5.3+70.g5f6e5fc) (0.53.1)
Requirement already satisfied: cloudpickle>=1.1.1 in /usr/local/lib/python3.8/dist-packages (from dask==2021.4.1->nvtabular==0.5.3+70.g5f6e5fc) (1.6.0)
Requirement already satisfied: fsspec>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from dask==2021.4.1->nvtabular==0.5.3+70.g5f6e5fc) (2021.6.1)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.8/dist-packages (from dask==2021.4.1->nvtabular==0.5.3+70.g5f6e5fc) (1.2.0)
Requirement already satisfied: toolz>=0.8.2 in /usr/local/lib/python3.8/dist-packages (from dask==2021.4.1->nvtabular==0.5.3+70.g5f6e5fc) (0.11.1)
Requirement already satisfied: click>=6.6 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+70.g5f6e5fc) (8.0.1)
Requirement already satisfied: zict>=0.1.3 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+70.g5f6e5fc) (2.0.0)
Requirement already satisfied: tornado>=6.0.3 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+70.g5f6e5fc) (6.1)
Requirement already satisfied: sortedcontainers!=2.0.0,!=2.0.1 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+70.g5f6e5fc) (2.4.0)
Requirement already satisfied: setuptools in /var/jenkins_home/.local/lib/python3.8/site-packages (from distributed==2021.4.1->nvtabular==0.5.3+70.g5f6e5fc) (57.4.0)
Requirement already satisfied: tblib>=1.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+70.g5f6e5fc) (1.7.0)
Requirement already satisfied: psutil>=5.0 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+70.g5f6e5fc) (5.8.0)
Requirement already satisfied: msgpack>=0.6.0 in /usr/local/lib/python3.8/dist-packages (from distributed==2021.4.1->nvtabular==0.5.3+70.g5f6e5fc) (1.0.2)
Requirement already satisfied: llvmlite<0.37,>=0.36.0rc1 in /usr/local/lib/python3.8/dist-packages (from numba>=0.53.1->nvtabular==0.5.3+70.g5f6e5fc) (0.36.0)
Requirement already satisfied: numpy>=1.15 in /usr/local/lib/python3.8/dist-packages (from numba>=0.53.1->nvtabular==0.5.3+70.g5f6e5fc) (1.20.2)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.8/dist-packages (from pandas<1.3.0dev0,>=1.0->nvtabular==0.5.3+70.g5f6e5fc) (2021.1)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.8/dist-packages (from pandas<1.3.0dev0,>=1.0->nvtabular==0.5.3+70.g5f6e5fc) (2.8.1)
Requirement already satisfied: locket in /usr/local/lib/python3.8/dist-packages (from partd>=0.3.10->dask==2021.4.1->nvtabular==0.5.3+70.g5f6e5fc) (0.2.1)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.8/dist-packages (from python-dateutil>=2.7.3->pandas<1.3.0dev0,>=1.0->nvtabular==0.5.3+70.g5f6e5fc) (1.15.0)
Requirement already satisfied: heapdict in /usr/local/lib/python3.8/dist-packages (from zict>=0.1.3->distributed==2021.4.1->nvtabular==0.5.3+70.g5f6e5fc) (1.0.1)
Requirement already satisfied: tqdm in /usr/local/lib/python3.8/dist-packages (from tdqm->nvtabular==0.5.3+70.g5f6e5fc) (4.61.2)
Installing collected packages: nvtabular
  Running setup.py develop for nvtabular
Successfully installed nvtabular
Running black --check
All done! ✨ 🍰 ✨
108 files would be left unchanged.
Running flake8
Running isort
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/images
  warn(f"Likely recursive symlink detected to {resolved_path}")
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/examples/scaling-criteo/imgs
  warn(f"Likely recursive symlink detected to {resolved_path}")
Skipped 1 files
Running bandit
Running pylint
************* Module nvtabular.ops.categorify
nvtabular/ops/categorify.py:431:15: I1101: Module 'nvtabular_cpp' has no 'inference' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)
************* Module nvtabular.ops.fill
nvtabular/ops/fill.py:66:15: I1101: Module 'nvtabular_cpp' has no 'inference' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)
************* Module bench.datasets.tools.train_hugectr
bench/datasets/tools/train_hugectr.py:28:13: I1101: Module 'hugectr' has no 'solver_parser_helper' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)
bench/datasets/tools/train_hugectr.py:41:16: I1101: Module 'hugectr' has no 'optimizer' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)

Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)

Running flake8-nb
Building docs
make: Entering directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
2021-07-20 16:09:29.431845: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-20 16:09:30.630357: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2021-07-20 16:09:30.631398: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:07:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-07-20 16:09:30.632391: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 1 with properties:
pciBusID: 0000:08:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-07-20 16:09:30.632421: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-20 16:09:30.632470: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11
2021-07-20 16:09:30.632504: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11
2021-07-20 16:09:30.632539: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10
2021-07-20 16:09:30.632572: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10
2021-07-20 16:09:30.632618: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11
2021-07-20 16:09:30.632651: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11
2021-07-20 16:09:30.632689: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8
2021-07-20 16:09:30.636707: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0, 1
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.6) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
make: Leaving directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /var/jenkins_home/workspace/nvtabular_tests/nvtabular, configfile: pyproject.toml
plugins: cov-2.12.1, forked-1.3.0, xdist-2.3.0
collected 1081 items / 2 skipped / 1079 selected

tests/unit/test_column_group.py .. [ 0%]
tests/unit/test_column_similarity.py ........................ [ 2%]
tests/unit/test_cpu_workflow.py ...... [ 2%]
tests/unit/test_dask_nvt.py ............................................ [ 7%]
..................................................................... [ 13%]
tests/unit/test_dataloader_backend.py . [ 13%]
tests/unit/test_io.py .................................................. [ 18%]
....................................................................ssss [ 24%]
ssss.................................................. [ 29%]
tests/unit/test_ops.py ................................................. [ 34%]
........................................................................ [ 40%]
........................................................................ [ 47%]
........................................................................ [ 54%]
........................................................................ [ 60%]
........................................................................ [ 67%]
. [ 67%]
tests/unit/test_s3.py .. [ 67%]
tests/unit/test_tf_dataloader.py ....................................... [ 71%]
.................................s [ 74%]
tests/unit/test_tf_layers.py ........................................... [ 78%]
................................... [ 81%]
tests/unit/test_tools.py ...................... [ 83%]
tests/unit/test_torch_dataloader.py .................................... [ 87%]
.............................................. [ 91%]
tests/unit/test_workflow.py ............................................ [ 95%]
................................................ [100%]

=============================== warnings summary ===============================
tests/unit/test_ops.py::test_fill_missing[True-True-parquet]
tests/unit/test_ops.py::test_fill_missing[True-False-parquet]
tests/unit/test_ops.py::test_filter[parquet-0.1-True]
/usr/local/lib/python3.8/dist-packages/pandas/core/indexing.py:670: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
iloc._setitem_with_indexer(indexer, value)

tests/unit/test_ops.py::test_join_external[True-True-left-host-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-left-device-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-inner-host-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-inner-device-pandas-parquet]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/ops/join_external.py:171: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
_ext.drop_duplicates(ignore_index=True, inplace=True)

tests/unit/test_ops.py::test_filter[parquet-0.1-True]
tests/unit/test_ops.py::test_filter[parquet-0.1-False]
tests/unit/test_ops.py::test_groupby_op[id-False]
tests/unit/test_ops.py::test_groupby_op[id-True]
/usr/local/lib/python3.8/dist-packages/dask/dataframe/core.py:6610: UserWarning: Insufficient elements for head. 1 elements requested, only 0 elements available. Try passing larger npartitions to head.
warnings.warn(msg.format(n, len(r)))

-- Docs: https://docs.pytest.org/en/stable/warnings.html

---------- coverage: platform linux, python 3.8.10-final-0 -----------
Name Stmts Miss Branch BrPart Cover Missing

examples/multi-gpu-movielens/torch_trainer.py 65 0 6 1 99% 32->36
nvtabular/__init__.py 12 0 0 0 100%
nvtabular/column_group.py 157 42 82 4 72% 54, 87, 128, 152-165, 207-214, 218-221, 225, 240-258, 301
nvtabular/dispatch.py 232 38 112 20 82% 33-35, 40-42, 48-58, 62-63, 83, 90, 98, 111, 116->118, 121->123, 129, 152-155, 194, 210, 217, 248->253, 251, 254, 257->261, 294, 305-308, 351, 355, 396, 420, 422, 429
nvtabular/framework_utils/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/tensorflow/__init__.py 1 0 0 0 100%
nvtabular/framework_utils/tensorflow/feature_column_utils.py 146 137 96 0 4% 28-32, 69-303
nvtabular/framework_utils/tensorflow/layers/__init__.py 4 0 0 0 100%
nvtabular/framework_utils/tensorflow/layers/embedding.py 153 13 85 6 90% 60, 68->49, 122, 179, 231-239, 242, 335->343, 357->360, 363-364, 367
nvtabular/framework_utils/tensorflow/layers/interaction.py 47 25 20 1 43% 49, 74-103, 106-110, 113
nvtabular/framework_utils/tensorflow/layers/outer_product.py 30 24 10 0 15% 37-38, 41-60, 71-84, 87
nvtabular/framework_utils/torch/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/torch/layers/__init__.py 2 0 0 0 100%
nvtabular/framework_utils/torch/layers/embeddings.py 30 1 12 1 95% 47
nvtabular/framework_utils/torch/models.py 45 1 28 1 97% 108
nvtabular/framework_utils/torch/utils.py 75 13 30 3 79% 22, 25-33, 64, 118-120, 132->115
nvtabular/inference/__init__.py 0 0 0 0 100%
nvtabular/inference/triton/__init__.py 279 269 120 0 3% 30-700
nvtabular/inference/triton/benchmarking_tools.py 52 52 10 0 0% 2-103
nvtabular/inference/triton/data_conversions.py 87 87 58 0 0% 27-150
nvtabular/inference/triton/model.py 140 140 66 0 0% 27-267
nvtabular/io/__init__.py 4 0 0 0 100%
nvtabular/io/avro.py 88 88 30 0 0% 16-189
nvtabular/io/csv.py 57 6 20 5 86% 22-23, 99, 103->107, 108, 110, 124
nvtabular/io/dask.py 179 7 68 11 93% 110, 113, 149, 224, 384->382, 412->415, 423, 427->429, 429->425, 434, 436
nvtabular/io/dataframe_engine.py 61 5 28 6 88% 19-20, 50, 69, 88->92, 92->97, 94->97, 97->116, 125
nvtabular/io/dataset.py 283 35 124 23 84% 43-44, 245, 247, 260, 269, 287-301, 404->473, 409-412, 417->427, 422-423, 434->432, 448->452, 463, 523->527, 570, 695-696, 700->702, 702->711, 712, 719-720, 726, 732, 827-828, 944-949, 955, 1005
nvtabular/io/dataset_engine.py 23 1 0 0 96% 45
nvtabular/io/hugectr.py 45 2 24 2 91% 34, 74->97, 101
nvtabular/io/parquet.py 492 21 156 12 95% 33-34, 92-100, 124->126, 213-215, 338-343, 381-386, 502->509, 570->575, 576-577, 697, 701, 705, 743, 760, 764, 771->773, 891->896, 901->911, 938
nvtabular/io/shuffle.py 31 9 16 4 64% 42-49, 59, 63
nvtabular/io/writer.py 173 13 66 5 92% 24-25, 51, 79, 125, 128, 207, 216, 219, 262, 283-285
nvtabular/io/writer_factory.py 18 2 8 2 85% 35, 60
nvtabular/loader/__init__.py 0 0 0 0 100%
nvtabular/loader/backend.py 327 13 138 11 95% 98, 102->94, 142-143, 233->235, 245-249, 295-296, 335->339, 410, 414-415, 445, 550, 558
nvtabular/loader/tensorflow.py 155 23 50 8 84% 57, 65-68, 78, 82, 88, 296, 332, 347-349, 378-380, 390-398, 401-404
nvtabular/loader/tf_utils.py 55 27 20 5 44% 29->32, 32->34, 39->41, 43, 50-51, 58-60, 66-70, 85-90, 100-113
nvtabular/loader/torch.py 81 15 16 2 76% 25-27, 30-36, 111, 149-150, 190, 193
nvtabular/ops/__init__.py 21 0 0 0 100%
nvtabular/ops/bucketize.py 32 10 18 3 62% 52-54, 58, 61-64, 83-86
nvtabular/ops/categorify.py 543 66 310 45 85% 239, 255, 259, 267, 275, 277, 299, 318-319, 346, 408-409, 426-431, 504->506, 627, 663, 692->695, 696-698, 705-706, 719-721, 722->690, 738, 746, 748, 755->exit, 778, 781->784, 792, 817-819, 822, 824->826, 836->840, 847-850, 861, 865, 867, 879-882, 960, 962, 991->1014, 997->1014, 1015-1020, 1057, 1075->1080, 1079, 1089->1086, 1094->1086, 1102, 1110-1120
nvtabular/ops/clip.py 18 2 6 3 79% 43, 51->53, 54
nvtabular/ops/column_similarity.py 103 24 36 5 72% 19-20, 76->exit, 106, 178-179, 188-190, 198-214, 231->234, 235, 245
nvtabular/ops/data_stats.py 56 2 22 3 94% 91->93, 95, 97->87, 102
nvtabular/ops/difference_lag.py 25 0 8 1 97% 66->68
nvtabular/ops/dropna.py 8 0 0 0 100%
nvtabular/ops/fill.py 63 6 22 1 89% 62-66, 101, 127
nvtabular/ops/filter.py 20 1 6 1 92% 49
nvtabular/ops/groupby.py 92 4 56 6 92% 71, 80, 82, 92->94, 104->109, 180
nvtabular/ops/hash_bucket.py 29 2 18 2 87% 69, 99
nvtabular/ops/hashed_cross.py 28 3 13 4 83% 50, 63, 77->exit, 78
nvtabular/ops/join_external.py 89 8 38 8 87% 20-21, 113, 115, 117, 159, 163->167, 176->178, 203, 212
nvtabular/ops/join_groupby.py 84 5 30 2 94% 106, 109->118, 194-195, 198-199
nvtabular/ops/lambdaop.py 39 13 18 3 58% 59, 63, 77, 88-103
nvtabular/ops/list_slice.py 63 24 26 1 56% 21-22, 52-53, 100-114, 122-133
nvtabular/ops/logop.py 8 0 0 0 100%
nvtabular/ops/moments.py 65 0 20 0 100%
nvtabular/ops/normalize.py 70 8 14 2 86% 60->59, 67, 75-76, 109-110, 132-133, 137
nvtabular/ops/operator.py 29 4 2 1 84% 25, 99, 104, 109
nvtabular/ops/rename.py 23 3 14 3 84% 45, 66-68
nvtabular/ops/stat_operator.py 8 0 0 0 100%
nvtabular/ops/target_encoding.py 146 11 64 5 90% 147, 167->171, 174->183, 228-229, 232-233, 242-248, 339->342
nvtabular/tools/__init__.py 0 0 0 0 100%
nvtabular/tools/data_gen.py 236 1 62 1 99% 323
nvtabular/tools/dataset_inspector.py 49 7 18 1 79% 31-38
nvtabular/tools/inspector_script.py 46 46 0 0 0% 17-168
nvtabular/utils.py 94 44 44 9 47% 30-31, 35-36, 45, 49, 60-61, 63-65, 68, 71, 77, 83, 89-125, 144, 148->152
nvtabular/worker.py 82 5 38 7 90% 24-25, 82->99, 91, 92->99, 99->102, 108, 110, 111->113
nvtabular/workflow.py 156 11 73 4 93% 28-29, 45, 131, 145-147, 251, 280-281, 369

TOTAL 5954 1419 2475 254 73%
Coverage XML written to file coverage.xml

Required test coverage of 70% reached. Total coverage: 73.46%
========== 1072 passed, 11 skipped, 11 warnings in 633.45s (0:10:33) ===========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://github.com/gitapi/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins3649292403159189518.sh

@nvidia-merlin-bot
Contributor

Click to view CI Results
GitHub pull request #963 of commit a98808e4bde263d7859cd86170b155706b14a3ec, no merge conflicts.
Running as SYSTEM
Setting status of a98808e4bde263d7859cd86170b155706b14a3ec to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2906/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/963/*:refs/remotes/origin/pr/963/* # timeout=10
 > git rev-parse a98808e4bde263d7859cd86170b155706b14a3ec^{commit} # timeout=10
Checking out Revision a98808e4bde263d7859cd86170b155706b14a3ec (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a98808e4bde263d7859cd86170b155706b14a3ec # timeout=10
Commit message: "Merge branch 'main' into order-cats"
 > git rev-list --no-walk b0502960b32281579f58373db3f230b65218baed # timeout=10
First time build. Skipping changelog.
[nvtabular_tests] $ /bin/bash /tmp/jenkins5624261384373078579.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: pip in /var/jenkins_home/.local/lib/python3.8/site-packages (21.1.3)
Requirement already satisfied: setuptools in /var/jenkins_home/.local/lib/python3.8/site-packages (57.4.0)
Requirement already satisfied: wheel in /usr/local/lib/python3.8/dist-packages (0.36.2)
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'error'
  ERROR: Command errored out with exit status 1:
   command: /usr/bin/python /var/jenkins_home/.local/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py get_requires_for_build_wheel /tmp/tmpjx7d9qp8
       cwd: /var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Complete output (16 lines):
  Traceback (most recent call last):
    File "/var/jenkins_home/.local/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 280, in <module>
      main()
    File "/var/jenkins_home/.local/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 263, in main
      json_out['return_val'] = hook(**hook_input['kwargs'])
    File "/var/jenkins_home/.local/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 114, in get_requires_for_build_wheel
      return hook(config_settings)
    File "/usr/local/lib/python3.8/dist-packages/setuptools/build_meta.py", line 154, in get_requires_for_build_wheel
      return self._get_build_requires(
    File "/usr/local/lib/python3.8/dist-packages/setuptools/build_meta.py", line 135, in _get_build_requires
      self.run_setup()
    File "/usr/local/lib/python3.8/dist-packages/setuptools/build_meta.py", line 150, in run_setup
      exec(compile(code, __file__, 'exec'), locals())
    File "setup.py", line 66, in <module>
      define_macros=[("VERSION_INFO", versioneer.get_version())],
  AttributeError: module 'versioneer' has no attribute 'get_version'
  ----------------------------------------
WARNING: Discarding file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular. Command errored out with exit status 1: /usr/bin/python /var/jenkins_home/.local/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py get_requires_for_build_wheel /tmp/tmpjx7d9qp8 Check the logs for full command output.
ERROR: Command errored out with exit status 1: /usr/bin/python /var/jenkins_home/.local/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py get_requires_for_build_wheel /tmp/tmpjx7d9qp8 Check the logs for full command output.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script  : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://github.com/gitapi/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log" 
[nvtabular_tests] $ /bin/bash /tmp/jenkins5761204806237204622.sh

@jperez999
Contributor Author

rerun tests

@nvidia-merlin-bot
Contributor

Click to view CI Results
GitHub pull request #963 of commit a98808e4bde263d7859cd86170b155706b14a3ec, no merge conflicts.
Running as SYSTEM
Setting status of a98808e4bde263d7859cd86170b155706b14a3ec to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2907/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/963/*:refs/remotes/origin/pr/963/* # timeout=10
 > git rev-parse a98808e4bde263d7859cd86170b155706b14a3ec^{commit} # timeout=10
Checking out Revision a98808e4bde263d7859cd86170b155706b14a3ec (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a98808e4bde263d7859cd86170b155706b14a3ec # timeout=10
Commit message: "Merge branch 'main' into order-cats"
 > git rev-list --no-walk a98808e4bde263d7859cd86170b155706b14a3ec # timeout=10
[nvtabular_tests] $ /bin/bash /tmp/jenkins6336367458918972351.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: pip in /var/jenkins_home/.local/lib/python3.8/site-packages (21.1.3)
Requirement already satisfied: setuptools in /var/jenkins_home/.local/lib/python3.8/site-packages (57.4.0)
Requirement already satisfied: wheel in /usr/local/lib/python3.8/dist-packages (0.36.2)
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Obtaining file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'error'
  ERROR: Command errored out with exit status 1:
   command: /usr/bin/python /var/jenkins_home/.local/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py get_requires_for_build_wheel /tmp/tmpz70jskad
       cwd: /var/jenkins_home/workspace/nvtabular_tests/nvtabular
  Complete output (16 lines):
  Traceback (most recent call last):
    File "/var/jenkins_home/.local/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 280, in <module>
      main()
    File "/var/jenkins_home/.local/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 263, in main
      json_out['return_val'] = hook(**hook_input['kwargs'])
    File "/var/jenkins_home/.local/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 114, in get_requires_for_build_wheel
      return hook(config_settings)
    File "/usr/local/lib/python3.8/dist-packages/setuptools/build_meta.py", line 154, in get_requires_for_build_wheel
      return self._get_build_requires(
    File "/usr/local/lib/python3.8/dist-packages/setuptools/build_meta.py", line 135, in _get_build_requires
      self.run_setup()
    File "/usr/local/lib/python3.8/dist-packages/setuptools/build_meta.py", line 150, in run_setup
      exec(compile(code, __file__, 'exec'), locals())
    File "setup.py", line 66, in <module>
      define_macros=[("VERSION_INFO", versioneer.get_version())],
  AttributeError: module 'versioneer' has no attribute 'get_version'
  ----------------------------------------
WARNING: Discarding file:///var/jenkins_home/workspace/nvtabular_tests/nvtabular. Command errored out with exit status 1: /usr/bin/python /var/jenkins_home/.local/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py get_requires_for_build_wheel /tmp/tmpz70jskad Check the logs for full command output.
ERROR: Command errored out with exit status 1: /usr/bin/python /var/jenkins_home/.local/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py get_requires_for_build_wheel /tmp/tmpz70jskad Check the logs for full command output.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script  : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://github.com/gitapi/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log" 
[nvtabular_tests] $ /bin/bash /tmp/jenkins5992414491453232965.sh

@nvidia-merlin-bot
Contributor

Click to view CI Results
GitHub pull request #963 of commit ae3b61d950d1f0501db0792e26c7c10a9e1c4295, no merge conflicts.
Running as SYSTEM
Setting status of ae3b61d950d1f0501db0792e26c7c10a9e1c4295 to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2908/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/963/*:refs/remotes/origin/pr/963/* # timeout=10
 > git rev-parse ae3b61d950d1f0501db0792e26c7c10a9e1c4295^{commit} # timeout=10
Checking out Revision ae3b61d950d1f0501db0792e26c7c10a9e1c4295 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f ae3b61d950d1f0501db0792e26c7c10a9e1c4295 # timeout=10
Commit message: "Tweak install command to try and get around versioneer issues"
 > git rev-list --no-walk a98808e4bde263d7859cd86170b155706b14a3ec # timeout=10
[nvtabular_tests] $ /bin/bash /tmp/jenkins4097170062895813573.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: pip in /var/jenkins_home/.local/lib/python3.8/site-packages (21.1.3)
Requirement already satisfied: setuptools in /var/jenkins_home/.local/lib/python3.8/site-packages (57.4.0)
Requirement already satisfied: wheel in /usr/local/lib/python3.8/dist-packages (0.36.2)
Traceback (most recent call last):
  File "setup.py", line 22, in <module>
    from pybind11.setup_helpers import Pybind11Extension
ModuleNotFoundError: No module named 'pybind11'
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script  : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://github.com/gitapi/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log" 
[nvtabular_tests] $ /bin/bash /tmp/jenkins306600425635926534.sh

@nvidia-merlin-bot
Contributor

Click to view CI Results
GitHub pull request #963 of commit e66d13968b5566bc5a6ce338297f7ab1bb1daffb, no merge conflicts.
Running as SYSTEM
Setting status of e66d13968b5566bc5a6ce338297f7ab1bb1daffb to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2909/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/963/*:refs/remotes/origin/pr/963/* # timeout=10
 > git rev-parse e66d13968b5566bc5a6ce338297f7ab1bb1daffb^{commit} # timeout=10
Checking out Revision e66d13968b5566bc5a6ce338297f7ab1bb1daffb (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e66d13968b5566bc5a6ce338297f7ab1bb1daffb # timeout=10
Commit message: "."
 > git rev-list --no-walk ae3b61d950d1f0501db0792e26c7c10a9e1c4295 # timeout=10
[nvtabular_tests] $ /bin/bash /tmp/jenkins5358835905507977328.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: pip in /var/jenkins_home/.local/lib/python3.8/site-packages (21.1.3)
Requirement already satisfied: setuptools in /var/jenkins_home/.local/lib/python3.8/site-packages (57.4.0)
Requirement already satisfied: wheel in /usr/local/lib/python3.8/dist-packages (0.36.2)
Collecting pybind11
  Downloading pybind11-2.7.0-py2.py3-none-any.whl (199 kB)
Installing collected packages: pybind11
  WARNING: The script pybind11-config is installed in '/var/jenkins_home/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed pybind11-2.7.0
running develop
running egg_info
creating nvtabular.egg-info
writing nvtabular.egg-info/PKG-INFO
writing dependency_links to nvtabular.egg-info/dependency_links.txt
writing requirements to nvtabular.egg-info/requires.txt
writing top-level names to nvtabular.egg-info/top_level.txt
writing manifest file 'nvtabular.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
adding license file 'LICENSE'
writing manifest file 'nvtabular.egg-info/SOURCES.txt'
running build_ext
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.8 -c flagcheck.cpp -o flagcheck.o -std=c++17
building 'nvtabular_cpp' extension
creating build
creating build/temp.linux-x86_64-3.8
creating build/temp.linux-x86_64-3.8/cpp
creating build/temp.linux-x86_64-3.8/cpp/nvtabular
creating build/temp.linux-x86_64-3.8/cpp/nvtabular/inference
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DVERSION_INFO=0.5.3+74.ge66d139 -I./cpp/ -I/var/jenkins_home/.local/lib/python3.8/site-packages/pybind11/include -I/usr/include/python3.8 -c cpp/nvtabular/__init__.cc -o build/temp.linux-x86_64-3.8/cpp/nvtabular/__init__.o -std=c++17 -fvisibility=hidden -g0
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DVERSION_INFO=0.5.3+74.ge66d139 -I./cpp/ -I/var/jenkins_home/.local/lib/python3.8/site-packages/pybind11/include -I/usr/include/python3.8 -c cpp/nvtabular/inference/__init__.cc -o build/temp.linux-x86_64-3.8/cpp/nvtabular/inference/__init__.o -std=c++17 -fvisibility=hidden -g0
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DVERSION_INFO=0.5.3+74.ge66d139 -I./cpp/ -I/var/jenkins_home/.local/lib/python3.8/site-packages/pybind11/include -I/usr/include/python3.8 -c cpp/nvtabular/inference/categorify.cc -o build/temp.linux-x86_64-3.8/cpp/nvtabular/inference/categorify.o -std=c++17 -fvisibility=hidden -g0
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DVERSION_INFO=0.5.3+74.ge66d139 -I./cpp/ -I/var/jenkins_home/.local/lib/python3.8/site-packages/pybind11/include -I/usr/include/python3.8 -c cpp/nvtabular/inference/fill.cc -o build/temp.linux-x86_64-3.8/cpp/nvtabular/inference/fill.o -std=c++17 -fvisibility=hidden -g0
creating build/lib.linux-x86_64-3.8
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fwrapv -O2 -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.8/cpp/nvtabular/__init__.o build/temp.linux-x86_64-3.8/cpp/nvtabular/inference/__init__.o build/temp.linux-x86_64-3.8/cpp/nvtabular/inference/categorify.o build/temp.linux-x86_64-3.8/cpp/nvtabular/inference/fill.o -o build/lib.linux-x86_64-3.8/nvtabular_cpp.cpython-38-x86_64-linux-gnu.so
copying build/lib.linux-x86_64-3.8/nvtabular_cpp.cpython-38-x86_64-linux-gnu.so -> 
Creating /usr/local/lib/python3.8/dist-packages/nvtabular.egg-link (link to .)
Removing nvtabular 0.5.3+55.g25e975d from easy-install.pth file
Adding nvtabular 0.5.3+74.ge66d139 to easy-install.pth file

Installed /var/jenkins_home/workspace/nvtabular_tests/nvtabular
Processing dependencies for nvtabular==0.5.3+74.ge66d139
Searching for versioneer==0.20
Best match: versioneer 0.20
Adding versioneer 0.20 to easy-install.pth file
Installing versioneer script to /usr/local/bin
error: [Errno 13] Permission denied: '/usr/local/bin/versioneer'
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://github.com/gitapi/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins3169783443374247679.sh

@nvidia-merlin-bot
Contributor

Click to view CI Results
GitHub pull request #963 of commit 91d961bf42042db261d4021809a7a5e3b35c2d46, no merge conflicts.
Running as SYSTEM
Setting status of 91d961bf42042db261d4021809a7a5e3b35c2d46 to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2910/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/963/*:refs/remotes/origin/pr/963/* # timeout=10
 > git rev-parse 91d961bf42042db261d4021809a7a5e3b35c2d46^{commit} # timeout=10
Checking out Revision 91d961bf42042db261d4021809a7a5e3b35c2d46 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 91d961bf42042db261d4021809a7a5e3b35c2d46 # timeout=10
Commit message: "one more time"
 > git rev-list --no-walk e66d13968b5566bc5a6ce338297f7ab1bb1daffb # timeout=10
[nvtabular_tests] $ /bin/bash /tmp/jenkins4991577846052829387.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: pip in /var/jenkins_home/.local/lib/python3.8/site-packages (21.1.3)
Requirement already satisfied: setuptools in /var/jenkins_home/.local/lib/python3.8/site-packages (57.4.0)
Requirement already satisfied: wheel in /usr/local/lib/python3.8/dist-packages (0.36.2)
Requirement already satisfied: pybind11 in /var/jenkins_home/.local/lib/python3.8/site-packages (2.7.0)
running develop
running egg_info
creating nvtabular.egg-info
writing nvtabular.egg-info/PKG-INFO
writing dependency_links to nvtabular.egg-info/dependency_links.txt
writing requirements to nvtabular.egg-info/requires.txt
writing top-level names to nvtabular.egg-info/top_level.txt
writing manifest file 'nvtabular.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
adding license file 'LICENSE'
writing manifest file 'nvtabular.egg-info/SOURCES.txt'
running build_ext
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.8 -c flagcheck.cpp -o flagcheck.o -std=c++17
building 'nvtabular_cpp' extension
creating build
creating build/temp.linux-x86_64-3.8
creating build/temp.linux-x86_64-3.8/cpp
creating build/temp.linux-x86_64-3.8/cpp/nvtabular
creating build/temp.linux-x86_64-3.8/cpp/nvtabular/inference
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DVERSION_INFO=0.5.3+75.g91d961b -I./cpp/ -I/var/jenkins_home/.local/lib/python3.8/site-packages/pybind11/include -I/usr/include/python3.8 -c cpp/nvtabular/__init__.cc -o build/temp.linux-x86_64-3.8/cpp/nvtabular/__init__.o -std=c++17 -fvisibility=hidden -g0
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DVERSION_INFO=0.5.3+75.g91d961b -I./cpp/ -I/var/jenkins_home/.local/lib/python3.8/site-packages/pybind11/include -I/usr/include/python3.8 -c cpp/nvtabular/inference/__init__.cc -o build/temp.linux-x86_64-3.8/cpp/nvtabular/inference/__init__.o -std=c++17 -fvisibility=hidden -g0
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DVERSION_INFO=0.5.3+75.g91d961b -I./cpp/ -I/var/jenkins_home/.local/lib/python3.8/site-packages/pybind11/include -I/usr/include/python3.8 -c cpp/nvtabular/inference/categorify.cc -o build/temp.linux-x86_64-3.8/cpp/nvtabular/inference/categorify.o -std=c++17 -fvisibility=hidden -g0
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DVERSION_INFO=0.5.3+75.g91d961b -I./cpp/ -I/var/jenkins_home/.local/lib/python3.8/site-packages/pybind11/include -I/usr/include/python3.8 -c cpp/nvtabular/inference/fill.cc -o build/temp.linux-x86_64-3.8/cpp/nvtabular/inference/fill.o -std=c++17 -fvisibility=hidden -g0
creating build/lib.linux-x86_64-3.8
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fwrapv -O2 -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.8/cpp/nvtabular/__init__.o build/temp.linux-x86_64-3.8/cpp/nvtabular/inference/__init__.o build/temp.linux-x86_64-3.8/cpp/nvtabular/inference/categorify.o build/temp.linux-x86_64-3.8/cpp/nvtabular/inference/fill.o -o build/lib.linux-x86_64-3.8/nvtabular_cpp.cpython-38-x86_64-linux-gnu.so
copying build/lib.linux-x86_64-3.8/nvtabular_cpp.cpython-38-x86_64-linux-gnu.so -> 
Creating /usr/local/lib/python3.8/dist-packages/nvtabular.egg-link (link to .)
nvtabular 0.5.3+75.g91d961b is already the active version in easy-install.pth

Installed /var/jenkins_home/workspace/nvtabular_tests/nvtabular
Processing dependencies for nvtabular==0.5.3+75.g91d961b
Searching for pyarrow==1.0.1
Best match: pyarrow 1.0.1
Adding pyarrow 1.0.1 to easy-install.pth file
Installing plasma_store script to /usr/local/bin
error: [Errno 13] Permission denied: '/usr/local/bin/plasma_store'
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://github.com/gitapi/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins5687010052545170453.sh

@nvidia-merlin-bot
Contributor

Click to view CI Results
GitHub pull request #963 of commit ec55187c5312f9c9fe9955c29b8c424ea4524766, no merge conflicts.
Running as SYSTEM
Setting status of ec55187c5312f9c9fe9955c29b8c424ea4524766 to PENDING with url http://10.20.13.93:8080/job/nvtabular_tests/2911/ and message: 'Pending'
Using context: Jenkins Unit Test Run
Building in workspace /var/jenkins_home/workspace/nvtabular_tests
using credential nvidia-merlin-bot
Cloning the remote Git repository
Cloning repository https://github.com/NVIDIA/NVTabular.git
 > git init /var/jenkins_home/workspace/nvtabular_tests/nvtabular # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/NVIDIA/NVTabular.git # timeout=10
Fetching upstream changes from https://github.com/NVIDIA/NVTabular.git
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA/NVTabular.git +refs/pull/963/*:refs/remotes/origin/pr/963/* # timeout=10
 > git rev-parse ec55187c5312f9c9fe9955c29b8c424ea4524766^{commit} # timeout=10
Checking out Revision ec55187c5312f9c9fe9955c29b8c424ea4524766 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f ec55187c5312f9c9fe9955c29b8c424ea4524766 # timeout=10
Commit message: "--user"
 > git rev-list --no-walk 91d961bf42042db261d4021809a7a5e3b35c2d46 # timeout=10
[nvtabular_tests] $ /bin/bash /tmp/jenkins4719228519001689765.sh
Installing NVTabular
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: pip in /var/jenkins_home/.local/lib/python3.8/site-packages (21.1.3)
Requirement already satisfied: setuptools in /var/jenkins_home/.local/lib/python3.8/site-packages (57.4.0)
Requirement already satisfied: wheel in /usr/local/lib/python3.8/dist-packages (0.36.2)
Requirement already satisfied: pybind11 in /var/jenkins_home/.local/lib/python3.8/site-packages (2.7.0)
running develop
running egg_info
creating nvtabular.egg-info
writing nvtabular.egg-info/PKG-INFO
writing dependency_links to nvtabular.egg-info/dependency_links.txt
writing requirements to nvtabular.egg-info/requires.txt
writing top-level names to nvtabular.egg-info/top_level.txt
writing manifest file 'nvtabular.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
adding license file 'LICENSE'
writing manifest file 'nvtabular.egg-info/SOURCES.txt'
running build_ext
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.8 -c flagcheck.cpp -o flagcheck.o -std=c++17
building 'nvtabular_cpp' extension
creating build
creating build/temp.linux-x86_64-3.8
creating build/temp.linux-x86_64-3.8/cpp
creating build/temp.linux-x86_64-3.8/cpp/nvtabular
creating build/temp.linux-x86_64-3.8/cpp/nvtabular/inference
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DVERSION_INFO=0.5.3+76.gec55187 -I./cpp/ -I/var/jenkins_home/.local/lib/python3.8/site-packages/pybind11/include -I/usr/include/python3.8 -c cpp/nvtabular/__init__.cc -o build/temp.linux-x86_64-3.8/cpp/nvtabular/__init__.o -std=c++17 -fvisibility=hidden -g0
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DVERSION_INFO=0.5.3+76.gec55187 -I./cpp/ -I/var/jenkins_home/.local/lib/python3.8/site-packages/pybind11/include -I/usr/include/python3.8 -c cpp/nvtabular/inference/__init__.cc -o build/temp.linux-x86_64-3.8/cpp/nvtabular/inference/__init__.o -std=c++17 -fvisibility=hidden -g0
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DVERSION_INFO=0.5.3+76.gec55187 -I./cpp/ -I/var/jenkins_home/.local/lib/python3.8/site-packages/pybind11/include -I/usr/include/python3.8 -c cpp/nvtabular/inference/categorify.cc -o build/temp.linux-x86_64-3.8/cpp/nvtabular/inference/categorify.o -std=c++17 -fvisibility=hidden -g0
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DVERSION_INFO=0.5.3+76.gec55187 -I./cpp/ -I/var/jenkins_home/.local/lib/python3.8/site-packages/pybind11/include -I/usr/include/python3.8 -c cpp/nvtabular/inference/fill.cc -o build/temp.linux-x86_64-3.8/cpp/nvtabular/inference/fill.o -std=c++17 -fvisibility=hidden -g0
creating build/lib.linux-x86_64-3.8
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fwrapv -O2 -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.8/cpp/nvtabular/__init__.o build/temp.linux-x86_64-3.8/cpp/nvtabular/inference/__init__.o build/temp.linux-x86_64-3.8/cpp/nvtabular/inference/categorify.o build/temp.linux-x86_64-3.8/cpp/nvtabular/inference/fill.o -o build/lib.linux-x86_64-3.8/nvtabular_cpp.cpython-38-x86_64-linux-gnu.so
copying build/lib.linux-x86_64-3.8/nvtabular_cpp.cpython-38-x86_64-linux-gnu.so -> 
Creating /var/jenkins_home/.local/lib/python3.8/site-packages/nvtabular.egg-link (link to .)
Adding nvtabular 0.5.3+76.gec55187 to easy-install.pth file

Installed /var/jenkins_home/workspace/nvtabular_tests/nvtabular
Processing dependencies for nvtabular==0.5.3+76.gec55187
Searching for pyarrow==1.0.1
Best match: pyarrow 1.0.1
Adding pyarrow 1.0.1 to easy-install.pth file
Installing plasma_store script to /var/jenkins_home/.local/bin

Using /usr/local/lib/python3.8/dist-packages
Searching for tdqm==0.0.1
Best match: tdqm 0.0.1
Adding tdqm 0.0.1 to easy-install.pth file

Using /var/jenkins_home/.local/lib/python3.8/site-packages
Searching for numba==0.53.1
Best match: numba 0.53.1
Adding numba 0.53.1 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Searching for pandas==1.1.5
Best match: pandas 1.1.5
Adding pandas 1.1.5 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Searching for distributed==2021.4.1
Best match: distributed 2021.4.1
Adding distributed 2021.4.1 to easy-install.pth file
Installing dask-ssh script to /var/jenkins_home/.local/bin
Installing dask-scheduler script to /var/jenkins_home/.local/bin
Installing dask-worker script to /var/jenkins_home/.local/bin

Using /var/jenkins_home/.local/lib/python3.8/site-packages
Searching for dask==2021.4.1
Best match: dask 2021.4.1
Adding dask 2021.4.1 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Searching for PyYAML==5.4.1
Best match: PyYAML 5.4.1
Adding PyYAML 5.4.1 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Searching for numpy==1.20.2
Best match: numpy 1.20.2
Adding numpy 1.20.2 to easy-install.pth file
Installing f2py script to /var/jenkins_home/.local/bin
Installing f2py3 script to /var/jenkins_home/.local/bin
Installing f2py3.8 script to /var/jenkins_home/.local/bin

Using /usr/local/lib/python3.8/dist-packages
Searching for tqdm==4.61.2
Best match: tqdm 4.61.2
Adding tqdm 4.61.2 to easy-install.pth file
Installing tqdm script to /var/jenkins_home/.local/bin

Using /usr/local/lib/python3.8/dist-packages
Searching for llvmlite==0.36.0
Best match: llvmlite 0.36.0
Adding llvmlite 0.36.0 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Searching for setuptools==57.4.0
Best match: setuptools 57.4.0
Adding setuptools 57.4.0 to easy-install.pth file

Using /var/jenkins_home/.local/lib/python3.8/site-packages
Searching for pytz==2021.1
Best match: pytz 2021.1
Adding pytz 2021.1 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Searching for python-dateutil==2.8.2
Best match: python-dateutil 2.8.2
Adding python-dateutil 2.8.2 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Searching for cloudpickle==1.6.0
Best match: cloudpickle 1.6.0
Adding cloudpickle 1.6.0 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Searching for psutil==5.8.0
Best match: psutil 5.8.0
Adding psutil 5.8.0 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Searching for tblib==1.7.0
Best match: tblib 1.7.0
Adding tblib 1.7.0 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Searching for tornado==6.1
Best match: tornado 6.1
Adding tornado 6.1 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Searching for sortedcontainers==2.4.0
Best match: sortedcontainers 2.4.0
Adding sortedcontainers 2.4.0 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Searching for msgpack==1.0.2
Best match: msgpack 1.0.2
Adding msgpack 1.0.2 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Searching for zict==2.0.0
Best match: zict 2.0.0
Adding zict 2.0.0 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Searching for click==8.0.1
Best match: click 8.0.1
Adding click 8.0.1 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Searching for toolz==0.11.1
Best match: toolz 0.11.1
Adding toolz 0.11.1 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Searching for partd==1.2.0
Best match: partd 1.2.0
Adding partd 1.2.0 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Searching for fsspec==2021.7.0
Best match: fsspec 2021.7.0
Adding fsspec 2021.7.0 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Searching for six==1.15.0
Best match: six 1.15.0
Adding six 1.15.0 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Searching for HeapDict==1.0.1
Best match: HeapDict 1.0.1
Adding HeapDict 1.0.1 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Searching for locket==0.2.1
Best match: locket 0.2.1
Adding locket 0.2.1 to easy-install.pth file

Using /usr/local/lib/python3.8/dist-packages
Finished processing dependencies for nvtabular==0.5.3+76.gec55187
Running black --check
All done! ✨ 🍰 ✨
108 files would be left unchanged.
Running flake8
Running isort
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/images
warn(f"Likely recursive symlink detected to {resolved_path}")
/usr/local/lib/python3.8/dist-packages/isort/main.py:141: UserWarning: Likely recursive symlink detected to /var/jenkins_home/workspace/nvtabular_tests/nvtabular/examples/scaling-criteo/imgs
warn(f"Likely recursive symlink detected to {resolved_path}")
Skipped 1 files
Running bandit
Running pylint
************* Module nvtabular.ops.categorify
nvtabular/ops/categorify.py:431:15: I1101: Module 'nvtabular_cpp' has no 'inference' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)
************* Module nvtabular.ops.fill
nvtabular/ops/fill.py:66:15: I1101: Module 'nvtabular_cpp' has no 'inference' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)
************* Module bench.datasets.tools.train_hugectr
bench/datasets/tools/train_hugectr.py:28:13: I1101: Module 'hugectr' has no 'solver_parser_helper' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)
bench/datasets/tools/train_hugectr.py:41:16: I1101: Module 'hugectr' has no 'optimizer' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects. (c-extension-no-member)


Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)

Running flake8-nb
Building docs
make: Entering directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
2021-07-20 19:54:44.810029: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-20 19:54:47.314375: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2021-07-20 19:54:47.315789: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:07:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-07-20 19:54:47.317126: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 1 with properties:
pciBusID: 0000:08:00.0 name: Tesla P100-DGXS-16GB computeCapability: 6.0
coreClock: 1.4805GHz coreCount: 56 deviceMemorySize: 15.90GiB deviceMemoryBandwidth: 681.88GiB/s
2021-07-20 19:54:47.317309: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-20 19:54:47.317426: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11
2021-07-20 19:54:47.317506: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11
2021-07-20 19:54:47.317581: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10
2021-07-20 19:54:47.317682: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10
2021-07-20 19:54:47.317787: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11
2021-07-20 19:54:47.317863: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11
2021-07-20 19:54:47.317963: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8
2021-07-20 19:54:47.322913: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0, 1
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.6) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/lib/python3.8/dist-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
make: Leaving directory '/var/jenkins_home/workspace/nvtabular_tests/nvtabular/docs'
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /var/jenkins_home/workspace/nvtabular_tests/nvtabular, configfile: pyproject.toml
plugins: cov-2.12.1, forked-1.3.0, xdist-2.3.0
collected 1093 items / 2 skipped / 1091 selected

tests/unit/test_column_group.py .. [ 0%]
tests/unit/test_column_similarity.py ........................ [ 2%]
tests/unit/test_cpu_workflow.py ...... [ 2%]
tests/unit/test_dask_nvt.py ............................................ [ 6%]
..................................................................... [ 13%]
tests/unit/test_dataloader_backend.py . [ 13%]
tests/unit/test_io.py .................................................. [ 17%]
........................................................................ [ 24%]
........ssssssss.................................................. [ 30%]
tests/unit/test_ops.py ................................................. [ 35%]
........................................................................ [ 41%]
........................................................................ [ 48%]
........................................................................ [ 54%]
........................................................................ [ 61%]
........................................................................ [ 67%]
. [ 68%]
tests/unit/test_s3.py .. [ 68%]
tests/unit/test_tf_dataloader.py ....................................... [ 71%]
.................................s [ 74%]
tests/unit/test_tf_layers.py ........................................... [ 78%]
................................... [ 82%]
tests/unit/test_tools.py ...................... [ 84%]
tests/unit/test_torch_dataloader.py .................................... [ 87%]
.............................................. [ 91%]
tests/unit/test_workflow.py ............................................ [ 95%]
................................................ [100%]

=============================== warnings summary ===============================
tests/unit/test_ops.py::test_fill_missing[True-True-parquet]
tests/unit/test_ops.py::test_fill_missing[True-False-parquet]
tests/unit/test_ops.py::test_filter[parquet-0.1-True]
/usr/local/lib/python3.8/dist-packages/pandas/core/indexing.py:670: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
iloc._setitem_with_indexer(indexer, value)

tests/unit/test_ops.py::test_join_external[True-True-left-host-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-left-device-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-inner-host-pandas-parquet]
tests/unit/test_ops.py::test_join_external[True-True-inner-device-pandas-parquet]
/var/jenkins_home/workspace/nvtabular_tests/nvtabular/nvtabular/ops/join_external.py:171: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
_ext.drop_duplicates(ignore_index=True, inplace=True)

tests/unit/test_ops.py::test_filter[parquet-0.1-True]
tests/unit/test_ops.py::test_filter[parquet-0.1-False]
tests/unit/test_ops.py::test_groupby_op[id-False]
tests/unit/test_ops.py::test_groupby_op[id-True]
/usr/local/lib/python3.8/dist-packages/dask/dataframe/core.py:6610: UserWarning: Insufficient elements for head. 1 elements requested, only 0 elements available. Try passing larger npartitions to head.
warnings.warn(msg.format(n, len(r)))

-- Docs: https://docs.pytest.org/en/stable/warnings.html

---------- coverage: platform linux, python 3.8.10-final-0 -----------
Name Stmts Miss Branch BrPart Cover Missing

examples/multi-gpu-movielens/torch_trainer.py 65 0 6 1 99% 32->36
nvtabular/__init__.py 12 0 0 0 100%
nvtabular/column_group.py 157 42 82 4 72% 54, 87, 128, 152-165, 207-214, 218-221, 225, 240-258, 301
nvtabular/dispatch.py 232 38 112 20 82% 33-35, 40-42, 48-58, 62-63, 83, 90, 98, 111, 116->118, 121->123, 129, 152-155, 194, 210, 217, 248->253, 251, 254, 257->261, 294, 305-308, 351, 355, 396, 420, 422, 429
nvtabular/framework_utils/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/tensorflow/__init__.py 1 0 0 0 100%
nvtabular/framework_utils/tensorflow/feature_column_utils.py 146 137 96 0 4% 28-32, 69-303
nvtabular/framework_utils/tensorflow/layers/__init__.py 4 0 0 0 100%
nvtabular/framework_utils/tensorflow/layers/embedding.py 153 13 85 6 90% 60, 68->49, 122, 179, 231-239, 242, 335->343, 357->360, 363-364, 367
nvtabular/framework_utils/tensorflow/layers/interaction.py 47 25 20 1 43% 49, 74-103, 106-110, 113
nvtabular/framework_utils/tensorflow/layers/outer_product.py 30 24 10 0 15% 37-38, 41-60, 71-84, 87
nvtabular/framework_utils/torch/__init__.py 0 0 0 0 100%
nvtabular/framework_utils/torch/layers/__init__.py 2 0 0 0 100%
nvtabular/framework_utils/torch/layers/embeddings.py 30 1 12 1 95% 47
nvtabular/framework_utils/torch/models.py 45 1 28 1 97% 108
nvtabular/framework_utils/torch/utils.py 75 13 30 3 79% 22, 25-33, 64, 118-120, 132->115
nvtabular/inference/__init__.py 0 0 0 0 100%
nvtabular/inference/triton/__init__.py 279 269 120 0 3% 30-700
nvtabular/inference/triton/benchmarking_tools.py 52 52 10 0 0% 2-103
nvtabular/inference/triton/data_conversions.py 87 87 58 0 0% 27-150
nvtabular/inference/triton/model.py 140 140 66 0 0% 27-267
nvtabular/io/__init__.py 4 0 0 0 100%
nvtabular/io/avro.py 88 88 30 0 0% 16-189
nvtabular/io/csv.py 57 6 20 5 86% 22-23, 99, 103->107, 108, 110, 124
nvtabular/io/dask.py 180 7 70 11 93% 110, 113, 149, 225, 385->383, 413->416, 424, 428->430, 430->426, 435, 437
nvtabular/io/dataframe_engine.py 61 5 28 6 88% 19-20, 50, 69, 88->92, 92->97, 94->97, 97->116, 125
nvtabular/io/dataset.py 289 35 126 21 86% 43-44, 245, 247, 260, 269, 287-301, 404->473, 409-412, 417->427, 422-423, 434->432, 448->452, 463, 523->527, 570, 710-711, 738, 745-746, 752, 758, 853-854, 970-975, 981, 1031
nvtabular/io/dataset_engine.py 23 1 0 0 96% 45
nvtabular/io/hugectr.py 45 2 24 2 91% 34, 74->97, 101
nvtabular/io/parquet.py 492 21 156 12 95% 33-34, 92-100, 124->126, 213-215, 338-343, 381-386, 502->509, 570->575, 576-577, 697, 701, 705, 743, 760, 764, 771->773, 891->896, 901->911, 938
nvtabular/io/shuffle.py 31 6 16 5 77% 42, 44-45, 49, 59, 63
nvtabular/io/writer.py 173 13 66 5 92% 24-25, 51, 79, 125, 128, 207, 216, 219, 262, 283-285
nvtabular/io/writer_factory.py 18 2 8 2 85% 35, 60
nvtabular/loader/__init__.py 0 0 0 0 100%
nvtabular/loader/backend.py 327 13 138 11 95% 98, 102->94, 142-143, 233->235, 245-249, 295-296, 335->339, 410, 414-415, 445, 550, 558
nvtabular/loader/tensorflow.py 155 23 50 8 84% 57, 65-68, 78, 82, 88, 296, 332, 347-349, 378-380, 390-398, 401-404
nvtabular/loader/tf_utils.py 55 27 20 5 44% 29->32, 32->34, 39->41, 43, 50-51, 58-60, 66-70, 85-90, 100-113
nvtabular/loader/torch.py 81 15 16 2 76% 25-27, 30-36, 111, 149-150, 190, 193
nvtabular/ops/__init__.py 21 0 0 0 100%
nvtabular/ops/bucketize.py 32 10 18 3 62% 52-54, 58, 61-64, 83-86
nvtabular/ops/categorify.py 543 66 310 45 85% 239, 255, 259, 267, 275, 277, 299, 318-319, 346, 408-409, 426-431, 504->506, 627, 663, 692->695, 696-698, 705-706, 719-721, 722->690, 738, 746, 748, 755->exit, 778, 781->784, 792, 817-819, 822, 824->826, 836->840, 847-850, 861, 865, 867, 879-882, 960, 962, 991->1014, 997->1014, 1015-1020, 1057, 1075->1080, 1079, 1089->1086, 1094->1086, 1102, 1110-1120
nvtabular/ops/clip.py 18 2 6 3 79% 43, 51->53, 54
nvtabular/ops/column_similarity.py 103 24 36 5 72% 19-20, 76->exit, 106, 178-179, 188-190, 198-214, 231->234, 235, 245
nvtabular/ops/data_stats.py 56 2 22 3 94% 91->93, 95, 97->87, 102
nvtabular/ops/difference_lag.py 25 0 8 1 97% 66->68
nvtabular/ops/dropna.py 8 0 0 0 100%
nvtabular/ops/fill.py 63 6 22 1 89% 62-66, 101, 127
nvtabular/ops/filter.py 20 1 6 1 92% 49
nvtabular/ops/groupby.py 92 4 56 6 92% 71, 80, 82, 92->94, 104->109, 180
nvtabular/ops/hash_bucket.py 29 2 18 2 87% 69, 99
nvtabular/ops/hashed_cross.py 28 3 13 4 83% 50, 63, 77->exit, 78
nvtabular/ops/join_external.py 89 8 38 8 87% 20-21, 113, 115, 117, 159, 163->167, 176->178, 203, 212
nvtabular/ops/join_groupby.py 84 5 30 2 94% 106, 109->118, 194-195, 198-199
nvtabular/ops/lambdaop.py 39 13 18 3 58% 59, 63, 77, 88-103
nvtabular/ops/list_slice.py 63 24 26 1 56% 21-22, 52-53, 100-114, 122-133
nvtabular/ops/logop.py 8 0 0 0 100%
nvtabular/ops/moments.py 65 0 20 0 100%
nvtabular/ops/normalize.py 70 8 14 2 86% 60->59, 67, 75-76, 109-110, 132-133, 137
nvtabular/ops/operator.py 29 4 2 1 84% 25, 99, 104, 109
nvtabular/ops/rename.py 23 3 14 3 84% 45, 66-68
nvtabular/ops/stat_operator.py 8 0 0 0 100%
nvtabular/ops/target_encoding.py 146 11 64 5 90% 147, 167->171, 174->183, 228-229, 232-233, 242-248, 339->342
nvtabular/tools/__init__.py 0 0 0 0 100%
nvtabular/tools/data_gen.py 236 1 62 1 99% 323
nvtabular/tools/dataset_inspector.py 49 7 18 1 79% 31-38
nvtabular/tools/inspector_script.py 46 46 0 0 0% 17-168
nvtabular/utils.py 94 44 44 9 47% 30-31, 35-36, 45, 49, 60-61, 63-65, 68, 71, 77, 83, 89-125, 144, 148->152
nvtabular/worker.py 82 5 38 7 90% 24-25, 82->99, 91, 92->99, 99->102, 108, 110, 111->113
nvtabular/workflow.py 156 11 73 4 93% 28-29, 45, 131, 145-147, 251, 280-281, 369

TOTAL 5961 1416 2479 253 74%
Coverage XML written to file coverage.xml

Required test coverage of 70% reached. Total coverage: 73.61%
========== 1084 passed, 11 skipped, 11 warnings in 1300.73s (0:21:40) ==========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://github.com/gitapi/repos/NVIDIA/NVTabular/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[nvtabular_tests] $ /bin/bash /tmp/jenkins455558134596227915.sh

@benfred benfred merged commit 0c80385 into NVIDIA-Merlin:main Jul 20, 2021
mikemckiernan pushed a commit that referenced this pull request Nov 24, 2022