
Support Optimized Write #1198

Open · wants to merge 2 commits into base: master
Conversation

@sezruby (Contributor) commented Jun 13, 2022

Description

Support Optimized Write, as described in https://docs.databricks.com/delta/optimizations/auto-optimize.html#how-optimized-writes-work

Fixes #1158

If Optimized Write is enabled, inject OptimizeWriteExchangeExec on top of the write plan and remove any ShuffleExchangeExec or coalesce operation at the top of the plan, to avoid an unnecessary shuffle / stage.

In OptimizeWriteExchangeExec,

  1. Repartition data
    • RoundRobinPartitioning for non-partitioned data, HashPartitioning for partitioned data.
    • Use spark.sql.shuffle.partitions for the number of partitions. We can introduce a new config like spark.sql.adaptive.coalescePartitions.initialPartitionNum if needed.
  2. Rebalance partitions for the write (see the sketch after this list)
    • Step 1 - merge small partitions (CoalescedPartitionSpec)
    • Step 2 - split large partitions (PartialReducerPartitionSpec)
    • Target size config: spark.databricks.delta.optimizeWrite.binSize (default: 128MB)
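
To make the two rebalancing steps more concrete, here is a minimal, simplified sketch of the bin-packing idea (not the exact PR code). It assumes the per-reducer shuffle sizes and the target binSize are already known, merges adjacent small partitions, and marks oversized partitions for splitting; names such as PartitionBin and planBins are purely illustrative.

// Illustrative sketch only: groups reducer partitions into roughly binSize-sized bins.
// The real implementation expresses this via CoalescedPartitionSpec / PartialReducerPartitionSpec.
object RebalanceSketch {
  final case class PartitionBin(reducerIds: Seq[Int], bytes: Long, needsSplit: Boolean)

  def planBins(partitionSizes: Array[Long], binSize: Long): Seq[PartitionBin] = {
    val bins = scala.collection.mutable.ArrayBuffer.empty[PartitionBin]
    var currentIds = Vector.empty[Int]
    var currentBytes = 0L

    def flush(): Unit = if (currentIds.nonEmpty) {
      bins += PartitionBin(currentIds, currentBytes, needsSplit = false)
      currentIds = Vector.empty
      currentBytes = 0L
    }

    partitionSizes.zipWithIndex.foreach { case (size, reducerId) =>
      if (size >= binSize) {
        // Step 2: a large partition becomes its own bin; the real code splits it
        // further along map-output boundaries.
        flush()
        bins += PartitionBin(Vector(reducerId), size, needsSplit = true)
      } else {
        // Step 1: keep merging small partitions until the bin would exceed binSize.
        if (currentBytes + size > binSize) flush()
        currentIds :+= reducerId
        currentBytes += size
      }
    }
    flush()
    bins.toSeq
  }

  def main(args: Array[String]): Unit = {
    // Example: sizes in MB with a 128 MB bin size -> [10+20], [300, needs split], [40+50].
    val sizes = Array(10L, 20L, 300L, 40L, 50L).map(_ * 1024 * 1024)
    planBins(sizes, 128L * 1024 * 1024).foreach(println)
  }
}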

How to enable

Ref: https://docs.databricks.com/delta/optimizations/auto-optimize.html#enable-auto-optimize
We can enable Optimized Write using a Spark session config or a table property (an example follows below).

  1. Spark session config
    • spark.databricks.delta.optimizeWrite.enabled = true
    • Applies to write operations on all Delta tables.
  2. Table property
    • delta.autoOptimize.optimizeWrite = true

The Spark session config takes precedence over the table property.
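
For illustration, enabling it from a Spark session might look like the following; the config keys are the ones listed above, while the table name events is just an example.

// Option 1: Spark session config - applies to writes on all Delta tables in this session.
spark.conf.set("spark.databricks.delta.optimizeWrite.enabled", "true")

// Option 2: table property - applies only to the given table (the table name is illustrative).
spark.sql("ALTER TABLE events SET TBLPROPERTIES ('delta.autoOptimize.optimizeWrite' = 'true')")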

How was this patch tested?

Unit tests (+ more tests will be added)

Does this PR introduce any user-facing changes?

Yes, it adds support for Optimized Write.

@vkorukanti (Collaborator)

Thank you @sezruby for creating this PR! This is a very useful feature. We are currently busy with the next release of Delta Lake. Will be reviewing the PR after the release.

@scottsand-db (Collaborator)

Can you please fix the conflicts?

Signed-off-by: Eunjin Song <sezruby@gmail.com>
@tdas (Contributor) left a comment


Hello @sezruby, my apologies for not being able to review this for so long. I took a first pass, and I think my biggest feedback is that I don't understand the overall optimization algorithm and its parameters. Typically for a feature like this we write design docs (see the Optimize Zorder design doc) where we discuss the design choices and come to an agreement. Could you write a short doc explaining the algorithm? I want to understand the behavior and edge cases of this optimization.

})
}

private[sql] def removeTopRepartition(plan: SparkPlan): SparkPlan = {
Contributor

Can you explain each of the cases with comments? These are pretty complicated to reason about.

Contributor Author

Thanks for the review! I'll add some comments & classdoc and try to improve documentation in #1158

Contributor Author

@tdas I added an example for Optimized Write on partitioned data.
#1158 (comment)
Could you have a look and let me know if there is something unclear?

For non-partitioned data I use RoundRobinPartitioning here, but it could be inefficient in some cases because it distributes all rows across all partitions, which is unnecessary. I think we could improve it later.
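
As a rough sketch of that choice (the helper name and its placement are assumptions, not the PR's exact code), the shuffle partitioning could be picked like this:

import org.apache.spark.sql.catalyst.expressions.Attribute
import org.apache.spark.sql.catalyst.plans.physical.{HashPartitioning, Partitioning, RoundRobinPartitioning}

// Hypothetical helper: choose the shuffle partitioning for Optimized Write.
def outputPartitioning(partitionColumns: Seq[Attribute], numPartitions: Int): Partitioning =
  if (partitionColumns.isEmpty) {
    RoundRobinPartitioning(numPartitions)              // non-partitioned table
  } else {
    HashPartitioning(partitionColumns, numPartitions)  // partitioned table
  }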

private[sql] def removeTopRepartition(plan: SparkPlan): SparkPlan = {
  plan match {
    case p @ AdaptiveSparkPlanExec(inputPlan: ShuffleExchangeExec, _, _, _, _)
        if !inputPlan.shuffleOrigin.equals(ENSURE_REQUIREMENTS) =>
Contributor

I am trying to understand this and map it to the Spark code. What you are trying to do is remove a shuffle if it wasn't added automatically by the planner to ensure requirements. Doesn't that mean that if a user asked for repartitioning in a certain way with an explicit programmatic API (e.g., DataFrame.repartition), we will be ignoring that completely?

Contributor Author

Yes. For Optimized Write we add repartition(partitionColumns) (+ rebalancing) at the top of the plan, so an unnecessary repartition(n) or coalesce(n) can be removed.

See #1158 - Things to do - 3 for details.
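
As a concrete, purely illustrative example of what gets dropped: the path and the repartition(10) below are made up, but a user-supplied top-level repartition like this would be removed and replaced by the OptimizeWriteExchangeExec repartition + rebalance.

// The explicit repartition(10) is a top-level user shuffle; with Optimized Write
// enabled it would be removed, since OptimizeWriteExchangeExec repartitions anyway.
df.repartition(10)
  .write
  .format("delta")
  .mode("append")
  .save("/tmp/delta/events")  // path is illustrative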

import scala.concurrent.Future
import scala.concurrent.duration.Duration

case class OptimizeWriteExchangeExec(
Contributor

Class docs are needed with a full explanation of how the optimization occurs, what the algorithm is like, and what the parameters are. It's really hard to understand the overall algorithm from the code without an overview here.

@sezruby sezruby force-pushed the optimizewrite branch 2 times, most recently from 13927b5 to a2cb8b6 on October 18, 2022 00:25
Signed-off-by: Eunjin Song <sezruby@gmail.com>
@Kimahriman (Contributor) left a comment

Would be great to get some momentum going on this again. It seems like it makes sense after understanding how Spark does rebalancing. My question would be whether it works for streaming writes; if so, it might be good to add a test for that.

      } else {
        mapStartIndices(i + 1)
      }
      val dataSize = startMapIndex.until(endMapIndex).map(mapPartitionSizes(_)).sum
Contributor

There was an update on the Spark side to make this more performant: apache/spark@9e1d00c

Is the main reason not to just use the Spark versions directly the configurable mergedPartitionFactor, or is it to avoid relying on those internal helper functions?

Contributor Author

Yes, it's both: to make it configurable and to avoid relying on Spark internal utils, which can change frequently. But I'm also okay with using the Spark one. I'm just waiting for the Databricks team to get back on this (and on auto compaction too).

Contributor Author

https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/optimize-write-for-apache-spark

FYI, the code has been used in prod for at least 6 months with no major issues so far.
We might need to increase the binSize config / Parquet compression ratio for larger files (files average 60~80 MB for now).

Contributor

Hmm, I tried building and using this myself, and I don't seem to be getting my large partitions split. I'm going to add some more logging to try to see why / what's happening.

Contributor Author

Yes, that is a case this approach (Spark's rebalance logic) cannot handle, because the unit of rebalancing is determined by the source partition layout. If the data is skewed or there are only a few partitions, it cannot be rebalanced properly.
Please refer to the figure in #1158 (comment).

If the source dataframe consists entirely of the same key=1 in a single 10 GB partition, it cannot be split.
e.g. df.repartition(1, col("key")).select(col("key")).write.format("delta").save("path")

I removed the redundant repartition execution plan on top of the child plan, but it cannot cover all the cases.

Contributor

The smallest unit of splitting is the single task map output for a single reducer ID, right? That wasn't what I was seeing: all of my map tasks had shuffle writes < 1 GB, but I had some reducer tasks reading > 10 GB of shuffle data. After digging through how map output sizes work a little, I'm going to try again and see if this is a weird side effect of HighlyCompressedMapStatus for large numbers of reduce partitions (> 2000 by default). My only thought is some odd effect of a lot of small blocks being "averaged out" when computing map output sizes per reducer.

Contributor

That appeared to be it. I dropped my shuffle partitions to 1k and it behaved as I would expect. I'm not sure how common my case would be with particular types of data skew, but maybe it would be good to log a warning when the shuffle partition count exceeds the threshold for using HighlyCompressedMapStatus: it can limit the ability to properly split skewed partitions, because the average map output size is used for individual per-reducer map outputs instead of the real value.
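
As a rough illustration of that suggested warning (not part of the PR; the helper name is hypothetical), one could compare the shuffle partition count with Spark's HighlyCompressedMapStatus threshold, which is governed by the spark.shuffle.minNumPartitionsToHighlyCompress setting (default 2000):

import org.apache.spark.sql.SparkSession
import org.slf4j.LoggerFactory

// Hypothetical helper, not part of the PR: warn when per-reducer sizes may be averaged out.
object OptimizeWriteWarnings {
  private val log = LoggerFactory.getLogger(getClass)

  def warnIfHighlyCompressedMapStatus(spark: SparkSession, numShufflePartitions: Int): Unit = {
    // Spark switches to HighlyCompressedMapStatus at/above this threshold (default 2000).
    val threshold = spark.sparkContext.getConf
      .getInt("spark.shuffle.minNumPartitionsToHighlyCompress", 2000)
    if (numShufflePartitions >= threshold) {
      log.warn(s"Optimized Write is using $numShufflePartitions shuffle partitions (>= $threshold); " +
        "per-reducer map output sizes may be averaged, which can limit splitting of skewed partitions.")
    }
  }
}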

@Kimahriman (Contributor)

Gentle ping on this again, just started using this in our production environment and would be great not to have to maintain my own Delta fork 😅

@sezruby (Contributor Author) commented Apr 10, 2023

Please let me know if someone is ready to review. I'll rebase the PR then.

@isunli commented May 16, 2023

Any update on this PR?

shenavaa added a commit to shenavaa/delta that referenced this pull request Jul 21, 2023
@cb-sukumarnataraj

Any update on this PR?

@sezruby (Contributor Author) commented Sep 6, 2023

Hi @tdas @scottsand-db, any update? Is there no plan to deliver this feature to OSS Delta?

@adityakumar84

I am using your PR for Optimized Write in my EMR streaming use cases.

I want Optimized Write to create Parquet files of around 128 MB, but they are topping out at around 64 MB.

The configuration I am using is as below:
"spark.sql.catalog.spark_catalog": "org.apache.spark.sql.delta.catalog.DeltaCatalog",
"spark.databricks.delta.autoCompact.enabled": "true",
"spark.databricks.delta.optimizeWrite.enabled": "true",
"spark.databricks.delta.autoCompact.minNumFiles": "500",
"spark.sql.adaptive.enabled": "true",
"spark.sql.adaptive.coalescePartitions.enabled": "true",
"spark.sql.adaptive.advisoryPartitionSizeInBytes": "128m",
"spark.sql.adaptive.coalescePartitions.minPartitionNum": "1",
"spark.sql.adaptive.coalescePartitions.initialPartitionNum": "200",
"spark.databricks.delta.optimizeWrite.binSize": "134217728

2023-09-22 08:22:54 61.9 MiB part-00000-eb8c145c-7ac4-420c-8b85-c6fd58861e40.c000.snappy.parquet
2023-09-22 08:24:55 43.7 MiB part-00001-4fbd8215-5129-48aa-b858-d2ab9d2ca7a6.c000.snappy.parquet
2023-09-22 08:24:58 54.8 MiB part-00000-de4d1a37-543c-4c80-a2dd-9cbc06af7b3c.c000.snappy.parquet
2023-09-22 08:27:06 52.7 MiB part-00000-c6cf2c15-8dc3-4346-a03c-eb83171fdf46.c000.snappy.parquet
2023-09-22 08:27:06 59.3 MiB part-00001-a5f3db2e-e352-45b0-aa1b-bbf1570ebb6f.c000.snappy.parquet
2023-09-22 08:28:56 69.1 MiB part-00000-697cca77-e7ea-4267-9f7b-3fd406678b31.c000.snappy.parquet
2023-09-22 08:31:03 52.8 MiB part-00000-3dfbf27e-53f3-4f35-8955-7675d0ec86bf.c000.snappy.parquet
2023-09-22 08:31:05 60.5 MiB part-00001-6b7bc1d7-9882-4ba0-a01b-619a0a424ae4.c000.snappy.parquet
2023-09-22 08:32:58 32.5 MiB part-00001-ea4d882a-f182-48d4-a3e8-bc23260e617e.c000.snappy.parquet
2023-09-22 08:33:03 54.1 MiB part-00000-4c53bf10-eede-45c3-b178-f9d4f2895956.c000.snappy.parquet
2023-09-22 08:35:00 45.7 MiB part-00001-138bdaab-8d77-4faf-af49-489c4cfad4f6.c000.snappy.parquet
2023-09-22 08:35:01 51.7 MiB part-00000-62a2bdfa-9dea-41ff-9f8c-4e3ebcb12d04.c000.snappy.parquet
2023-09-22 11:42:10 52.0 MiB part-00000-64bccc7b-2664-4e67-b478-413a5a7554db.c000.snappy.parquet
2023-09-22 11:42:12 57.9 MiB part-00001-b79e83d9-c65e-438b-a555-aa2028320ea5.c000.snappy.parquet
2023-09-22 11:45:20 56.4 MiB part-00001-1ef411fb-160c-4cba-a5e7-cdee540295a5.c000.snappy.parquet
2023-09-22 11:45:20 57.0 MiB part-00004-fa51fd84-c4d3-4980-9a88-5566283d3234.c000.snappy.parquet
2023-09-22 11:45:20 57.5 MiB part-00005-80e03284-a8a6-4937-857c-1c434527c956.c000.snappy.parquet
2023-09-22 11:45:20 57.8 MiB part-00000-a347b170-d15f-4328-b402-ccdc099e94da.c000.snappy.parquet
2023-09-22 11:45:20 57.9 MiB part-00006-d566e257-f0cf-4cf9-b968-cad449784226.c000.snappy.parquet
2023-09-22 11:45:20 57.9 MiB part-00009-18a54bcf-3d53-4766-b485-62332266c974.c000.snappy.parquet
2023-09-22 11:45:20 58.5 MiB part-00003-8cf66203-9a32-41b5-a617-a3a9e9b2e465.c000.snappy.parquet
2023-09-22 11:45:21 38.7 MiB part-00010-1b8e65ae-b366-46fa-818e-d8ac95fd5904.c000.snappy.parquet
2023-09-22 11:45:22 57.1 MiB part-00007-05dc33ff-5125-435f-b0f1-f4215fba9d5b.c000.snappy.parquet
2023-09-22 11:45:22 58.9 MiB part-00008-5c99002a-1d7b-4db5-8c14-8e3f966773ec.c000.snappy.parquet
2023-09-22 11:45:23 57.6 MiB part-00002-6844ca06-b437-486f-b15b-ebf93fc0e740.c000.snappy.parquet

Any suggestions?

@sezruby (Contributor Author) commented Sep 26, 2023

@adityakumar84 you can try increasing the binSize config; it is applied to the in-memory (row format) size of the data.
We cannot control the resulting Parquet file size precisely, as it depends on the data's properties and the Parquet compression ratio.
To make it more accurate, we would need to collect Parquet compression ratio history from previous write jobs, etc.
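
For example (the numbers here are illustrative and assume a roughly 2x Parquet compression ratio), setting the bin size to about 256 MB of in-memory data could yield files closer to 128 MB on disk:

// binSize applies to the in-memory (row format) size, so set it above the desired
// on-disk Parquet size to compensate for compression (values are illustrative).
spark.conf.set("spark.databricks.delta.optimizeWrite.binSize", (256L * 1024 * 1024).toString)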
