
Databricks integrations #2823

Merged — 75 commits merged into develop from feature/databricks-integrations on Jul 16, 2024

Conversation

safoinme (Contributor) commented Jul 4, 2024

Describe changes

I implemented/fixed _ to achieve _.

Pre-requisites

Please ensure you have done the following:

  • I have read the CONTRIBUTING.md document.
  • If my change requires a change to docs, I have updated the documentation accordingly.
  • I have added tests to cover my changes.
  • I have based my new branch on develop and the open PR is targeting develop. If your branch wasn't based on develop, read the Contribution guide on rebasing your branch to develop.
  • If my changes require changes to the dashboard, these changes are communicated/requested.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Other (add details above)

Summary by CodeRabbit

  • New Features

    • Introduced Databricks Orchestrator for running pipelines on Databricks.
    • Added comprehensive documentation for using Databricks with ZenML.
    • Launched new demo projects with configuration files, Makefiles, and setup scripts for end-to-end ML workflows.
    • Introduced alert notifications, data quality monitoring, ETL processes, hyperparameter tuning, and model promotion steps.
    • Added functionalities for model training, deployment, and inference with Databricks integration.
  • Documentation

    • Provided detailed guides and README for new demo projects.
    • Included licensing information for new files and modules.

gitguardian bot commented Jul 4, 2024

✅ There are no secrets present in this pull request anymore.

If these secrets were true positives and are still valid, we highly recommend revoking them.
Once a secret has been leaked into a git repository, you should consider it compromised, even if it was deleted immediately.
Find more information about risks here.


🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.

coderabbitai bot commented Jul 4, 2024

Important

Review skipped

Auto reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

The latest updates introduce Databricks Orchestrator integration into the ZenML framework, enabling distributed computing capabilities. This includes comprehensive configurations, model deployment, data preprocessing, and inference features. New files and modifications cover orchestrator settings, deployment processes, data pipelines, and utility functions to support scalable and efficient ML workflows on Databricks.

Changes

| File/Directory | Summary of Changes |
| --- | --- |
| docs/book/component-guide/orchestrators/databricks.md | Introduction of the Databricks Orchestrator and usage guidance. |
| examples/demo/.copier-answers.yml, examples/demo/configs/..., examples/demo/steps/..., examples/demo/utils/... | New configuration settings related to project setup and pipeline steps. |
| examples/demo/LICENSE | Introduction of the Apache Software License 2.0. |
| examples/demo/Makefile | New setup commands for dependencies and component registration. |
| examples/demo/README.md | Comprehensive ML project description and usage guidance. |
| examples/demo/requirements.txt | Addition of the zenml[server] package. |
| examples/demo/run.py | CLI script for running ZenML end-to-end projects. |
| src/zenml/__init__.py | Addition of a new entrypoint export. |
| src/zenml/entrypoints/entrypoint.py | Custom source root setup for DatabricksEntrypointConfiguration. |
| src/zenml/integrations/__init__.py | Import of DatabricksIntegration. |
| src/zenml/integrations/constants.py | Addition of the DATABRICKS constant. |
| src/zenml/integrations/databricks/... | Comprehensive integration of Databricks, including orchestrator, model deployer, services, and flavors. |

Poem

In the land of code, where data flows,
A spark of change in pipelines grows.
Databricks now joins the ZenML dance,
Bringing power to workflows, a grand expanse.
With setups new, and models bright,
To scalable heights, we take our flight.
Shine on, dear data, in this computing light!


@github-actions bot added the internal (To filter out internal PRs and issues) and enhancement (New feature or request) labels — Jul 4, 2024
github-actions bot commented Jul 4, 2024

LLM Finetuning template updates in examples/llm_finetuning have been pushed.

obj = source_utils.load(source)
logger.info("Loading step from source: %s", source)

if prefix := os.environ.get("ZENML_DATABRICKS_SOURCE_PREFIX"):
Contributor:

This is definitely not code that should be in our base step implementation.

Contributor Author:

Yeah, I guess this is fixed now by adding the installed wheel path to sys.path.
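The fix mentioned here can be sketched roughly as follows — the helper name and the Databricks path are illustrative assumptions, not ZenML's actual code:

```python
import sys

def add_wheel_to_sys_path(wheel_install_dir: str) -> None:
    """Make modules from the uploaded wheel importable by module path
    (illustrative sketch; ZenML's actual path handling may differ)."""
    if wheel_install_dir not in sys.path:
        sys.path.insert(0, wheel_install_dir)

# Example install location on a Databricks cluster (assumed, not verified):
add_wheel_to_sys_path("/databricks/python3/lib/python3.10/site-packages")
```

With the wheel's install directory on `sys.path`, source loading by module path works without patching the base step implementation.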

src/zenml/steps/base_step.py (outdated; resolved)
)
env_arg = ",".join(env_vars)

arguments.extend(["--env", env_arg])
Contributor:

Is there no way to pass environment variables to a Databricks job? This seems very ugly, and potentially runs into issues with an overly long argument string.
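A minimal sketch of the pattern under discussion — serializing environment variables into a single `--env` CLI argument and decoding them on the Databricks side. Variable and flag names are illustrative; ZenML's actual code may differ:

```python
# Pack env vars into one CLI argument ...
env_vars = {"ZENML_LOGGING_VERBOSITY": "INFO", "SOME_FLAG": "1"}
env_arg = ",".join(f"{key}={value}" for key, value in env_vars.items())
arguments = ["--env", env_arg]

# ... and unpack them again inside the Databricks job.
parsed = dict(pair.split("=", 1) for pair in env_arg.split(","))
assert parsed == env_vars
# Caveat (the reviewer's point): this breaks if a value contains a comma,
# and a long env_arg can exceed argument-length limits.
```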

@@ -39,12 +40,23 @@ def main() -> None:
# is not wrapped in a function or an `if __name__== "__main__":` check)
constants.SHOULD_PREVENT_PIPELINE_EXECUTION = True

source_utils.set_custom_source_root(source_root="custom_source_root")
Contributor:

Why is this needed?

Contributor Author:

Because we are running from a notebook (Databricks apparently runs the code in a notebook environment), ZenML detects that there is no `zenml init`, so setting a custom source root is a solution that worked.

src/zenml/entrypoints/entrypoint.py (outdated; resolved)

Images automagically compressed by Calibre's image-actions

Compression reduced images by 49.2%, saving 1,009.30 KB.

Filename | Before | After | Improvement
docs/book/.gitbook/assets/DatabricksRunUI.png | 1.39 MB | 708.67 KB | -50.3%
docs/book/.gitbook/assets/DatabricksUI.png | 626.83 KB | 335.48 KB | -46.5%

246 images did not require optimisation.

Update required: Update image-actions configuration to the latest version before 1/1/21. See README for instructions.

htahir1 (Contributor) left a comment:

So I'm just going to review the docs, not the code.

I synced this to GitBook here. There are a few obvious mistakes:

  • there is no mention in the ToC, so the pages don't appear
  • there is an example committed for some reason that shouldn't be there?

Apart from that, see the comments. Great job overall, this is very exciting!

docs/book/component-guide/model-deployers/databricks.md (outdated; resolved)

# Databricks Orchestrator

[Databricks](https://www.databricks.com/) is a unified data analytics platform that combines the best of data warehouses and data lakes to offer an integrated solution for big data processing and machine learning. It provides a collaborative environment for data scientists, data engineers, and business analysts to work together on data projects. Databricks is built on top of Apache Spark, offering optimized performance and scalability for big data workloads.
Contributor:

I think Spark is nowadays just one component of Databricks, and since the orchestrator has nothing to do with Spark, there's no need to mention it.


The Databricks orchestrator in ZenML leverages the concept of wheel packages. When you run a pipeline with the Databricks orchestrator, ZenML creates a Python wheel package from your project. This wheel package contains all the necessary code and dependencies for your pipeline.

Once the wheel package is created, ZenML uploads it to Databricks and uses the Databricks SDK to create a job definition. This job definition includes information about the pipeline steps and ensures that each step is executed only after its upstream steps have successfully completed.
Contributor:

A diagram here would be nice, also for marketing; maybe the flow of what happens.

Contributor Author:

Good point, will create one.
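The flow described in the docs above (wheel upload, then a job whose tasks mirror the step DAG) can be sketched with plain dictionaries. The field names below follow the Databricks Jobs API; the exact payload ZenML builds via the SDK may differ, and the entry point name is a hypothetical stand-in:

```python
def build_job_spec(pipeline_name: str, steps: dict) -> dict:
    """Build a Databricks-style job spec. `steps` maps each step name to
    the list of upstream step names it depends on (sketch, not ZenML code)."""
    tasks = []
    for step_name, upstream in steps.items():
        tasks.append({
            "task_key": step_name,
            # Ensures the step runs only after its upstream steps succeed.
            "depends_on": [{"task_key": up} for up in upstream],
            "python_wheel_task": {
                "package_name": pipeline_name,   # the uploaded wheel
                "entry_point": "entrypoint",     # hypothetical entry point
                "parameters": ["--step_name", step_name],
            },
        })
    return {"name": f"zenml-{pipeline_name}", "tasks": tasks}

spec = build_job_spec(
    "demo_pipeline",
    {"load_data": [], "train": ["load_data"], "promote": ["train"]},
)
```

Each pipeline step becomes one task; Databricks then schedules tasks according to the `depends_on` edges, which is how upstream ordering is enforced.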



The Databricks job is also configured with the necessary cluster settings to run. This includes specifying the version of Spark to use, the number of workers, the node type, and other configuration options.
Contributor:

Why Spark?

Contributor Author:

It's how Databricks forces you to do it: all their compute options are Spark-based, and you can't really select otherwise.
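To illustrate the cluster settings mentioned in the docs, here is a typical configuration using Databricks Jobs API `new_cluster` field names; the exact keys exposed by ZenML's orchestrator settings may differ, and the node type is just an example value:

```python
# Illustrative cluster settings; note that a Spark runtime version must be
# given even though the orchestrator itself doesn't use Spark directly.
cluster_settings = {
    "spark_version": "13.3.x-scala2.12",  # Databricks runtime (Spark-based)
    "node_type_id": "Standard_DS3_v2",    # example Azure node type
    "num_workers": 2,
    "spark_env_vars": {"ZENML_LOGGING_VERBOSITY": "INFO"},
}
```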

docs/book/component-guide/orchestrators/databricks.md (outdated; resolved)

#### Enabling CUDA for GPU-backed hardware

Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/training-with-gpus/training-with-gpus.md) to ensure that it works. This requires some extra settings customization and is essential for enabling CUDA so that the GPU can deliver its full acceleration.
Contributor:

I'd like a mention of how to specify a GPU via the settings.
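One hedged way to do that — again using Databricks Jobs API field names rather than ZenML's exact settings keys — is to request a GPU ML runtime and a GPU node type:

```python
# Assumed example values: a GPU-enabled Databricks ML runtime string and
# an AWS GPU instance type; check the Databricks docs for valid values
# in your workspace and cloud.
gpu_cluster_settings = {
    "spark_version": "13.3.x-gpu-ml-scala2.12",  # GPU ML runtime
    "node_type_id": "g4dn.xlarge",               # AWS GPU instance
    "num_workers": 1,
}
```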

Comment on lines 48 to 53
if isinstance(
args.entrypoint_config_source, str
) and args.entrypoint_config_source.endswith(
"DatabricksEntrypointConfiguration"
):
source_utils.set_custom_source_root(source_root=os.getcwd())
Contributor:

Definitely, this is something that should be in the DatabricksEntrypointConfiguration
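The suggested refactor might look roughly like this simplified sketch. Class, method, and attribute names are stand-ins for ZenML's actual entrypoint classes, not its real API:

```python
import os

class BaseEntrypointConfiguration:
    """Stand-in for ZenML's generic entrypoint configuration."""
    def run(self) -> None:
        pass  # would deserialize and execute the configured step

class DatabricksEntrypointConfiguration(BaseEntrypointConfiguration):
    """The Databricks-specific config owns the source-root workaround, so
    the generic entrypoint no longer needs to string-match the config
    source against "DatabricksEntrypointConfiguration"."""
    def run(self) -> None:
        # Databricks runs the wheel from a notebook environment with no
        # `zenml init`, so set a custom source root before executing.
        self.source_root = os.getcwd()  # stand-in for set_custom_source_root
        super().run()
```

This keeps Databricks-specific behavior out of the shared `entrypoint.py`.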

Images automagically compressed by Calibre's image-actions

Compression reduced images by 40%, saving 110.07 KB.

Filename | Before | After | Improvement
docs/book/.gitbook/assets/DatabricksPermessions.png | 275.18 KB | 165.11 KB | -40.0%

265 images did not require optimisation.



Images automagically compressed by Calibre's image-actions

Compression reduced images by 40%, saving 87.55 KB.

Filename | Before | After | Improvement
docs/book/.gitbook/assets/Databricks_How_It_works.png | 218.80 KB | 131.25 KB | -40.0%

266 images did not require optimisation.


@safoinme changed the title from "Databricks Orchestrator integration" to "Databricks integrations" on Jul 15, 2024
@safoinme safoinme merged commit ef66cc0 into develop Jul 16, 2024
53 of 75 checks passed
@safoinme safoinme deleted the feature/databricks-integrations branch July 16, 2024 10:36
Labels: enhancement (New feature or request), internal (To filter out internal PRs and issues), run-slow-ci
5 participants