
[SPARK-49991][SQL] Make HadoopMapReduceCommitProtocol respect 'mapreduce.output.basename' to generate file names#48494

Closed
yaooqinn wants to merge 2 commits into apache:master from yaooqinn:SPARK-49991

Conversation


@yaooqinn (Member) commented Oct 16, 2024

What changes were proposed in this pull request?

In 'HadoopMapReduceCommitProtocol', task output file names are generated up front rather than by calling org.apache.hadoop.mapreduce.lib.output.FileOutputFormat#getDefaultWorkFile, which uses 'mapreduce.output.basename' as the prefix of output files.
This pull request modifies the HadoopMapReduceCommitProtocol.getFilename method to also look up this configuration instead of using the hardcoded prefix 'part'.
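The effect of the change can be illustrated with a small sketch. This is not the actual Scala implementation of `HadoopMapReduceCommitProtocol.getFilename`; the helper name `get_filename` is invented, and the format string only approximates Spark's usual `part-<split>-<jobId><ext>` layout.

```python
# Illustrative sketch (not Spark's actual Scala code): how honoring
# 'mapreduce.output.basename' changes the generated file name prefix.
# The name layout below is an approximation of the typical
# part-<split>-<jobId><ext> pattern; get_filename is a made-up helper.

def get_filename(hadoop_conf: dict, split: int, job_id: str, ext: str) -> str:
    # Before this PR the prefix was the hardcoded literal "part";
    # after it, the Hadoop configuration is consulted first.
    basename = hadoop_conf.get("mapreduce.output.basename", "part")
    return f"{basename}-{split:05d}-{job_id}{ext}"

# Default behaviour: hardcoded "part" prefix.
print(get_filename({}, 0, "c000", ".snappy.parquet"))
# With the config set, the prefix changes.
print(get_filename({"mapreduce.output.basename": "apachespark"},
                   0, "c000", ".snappy.parquet"))
```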

Why are the changes needed?

Letting users customize output file names is a useful feature: they can use it to distinguish files added by different engines, on different days, and so on. It also aligns the usage scenario with other SQL-on-Hadoop engines for better Hadoop compatibility.

Does this PR introduce any user-facing change?

Yes. The Hadoop configuration 'mapreduce.output.basename' can now be used to customize the names of file data source output files.
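As a usage sketch (configuration only; `my_job.py` is a hypothetical application, and exact propagation of the key depends on the deployment), the Hadoop setting can be passed through Spark's `spark.hadoop.` configuration prefix:

```shell
# Hedged example: set the Hadoop config via the 'spark.hadoop.' prefix at
# submission time; output files then start with 'apachespark-' instead of
# 'part-'. 'my_job.py' is a placeholder for your application.
spark-submit \
  --conf spark.hadoop.mapreduce.output.basename=apachespark \
  my_job.py
```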

How was this patch tested?

New tests.

Was this patch authored or co-authored using generative AI tooling?

No.

@yaooqinn (Member, Author) commented:

cc @cloud-fan @dongjoon-hyun, thanks

```scala
withTempPath { dir =>
  withSQLConf("mapreduce.output.basename" -> "apachespark") {
    spark.range(1).coalesce(1).write.parquet(dir.getCanonicalPath)
    val df = spark.read.parquet(dir.getCanonicalPath)
    assert(df.inputFiles.head.contains("apachespark"))
  }
}
```
A Member commented:
Can we double-check that there isn't anything specific to part- files elsewhere in our codebase?

e.g.,

```
core/src/main/scala/org/apache/spark/rdd/ReliableCheckpointRDD.scala:      .filter(_.getName.startsWith("part-"))
core/src/main/scala/org/apache/spark/rdd/ReliableCheckpointRDD.scala:      .sortBy(_.getName.stripPrefix("part-").toInt)
core/src/main/scala/org/apache/spark/rdd/ReliableCheckpointRDD.scala:    "part-%05d".format(partitionIndex)
```

@yaooqinn (Member, Author) commented:

For the checkpoint code you mentioned, it looks safe to me: the same hardcoded 'part-' prefix is used both when writing the checkpoint files and when reading them back, so the round trip is self-consistent.

In addition to those mentioned, there is another instance in SparkHadoopWriter where it is hardcoded. Since this affects the underlying RDD APIs, I plan to leave it unchanged in this PR.
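The "round trip" argument can be sketched outside Spark: the checkpoint code writes names with a hardcoded "part-%05d" pattern and reads them back by filtering and stripping the same literal prefix, so it never consults 'mapreduce.output.basename'. A minimal Python model of that invariant (the function names are invented for illustration):

```python
# Toy model of the ReliableCheckpointRDD naming round trip: the same
# hardcoded "part-" literal is used for writing and reading, so the
# 'mapreduce.output.basename' change cannot affect it.

def checkpoint_file_name(partition_index: int) -> str:
    # Mirrors the Scala side's "part-%05d".format(partitionIndex)
    return "part-%05d" % partition_index

def list_checkpoint_partitions(file_names):
    # Mirrors .filter(_.getName.startsWith("part-"))
    #        .sortBy(_.getName.stripPrefix("part-").toInt)
    parts = [n for n in file_names if n.startswith("part-")]
    return sorted(parts, key=lambda n: int(n[len("part-"):]))

written = [checkpoint_file_name(i) for i in (2, 0, 1)]
# Unrelated files (e.g. marker files) are ignored by the prefix filter.
recovered = list_checkpoint_partitions(written + ["_SUCCESS"])
print(recovered)
```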


A Contributor commented:

Very late LGTM :)

@dongjoon-hyun (Member) left a comment:


+1, LGTM for Apache Spark 4.0.0. Thank you, @yaooqinn and all.

@dongjoon-hyun (Member) commented:
Thank you, @yaooqinn , @cloud-fan , @HyukjinKwon .
Merged to master for Apache Spark 4.0.0 in February 2025.

@yaooqinn yaooqinn deleted the SPARK-49991 branch October 18, 2024 02:57
@yaooqinn (Member, Author) commented:

Thank you all, @cloud-fan @HyukjinKwon @dongjoon-hyun

baibaichen added a commit to baibaichen/gluten that referenced this pull request Dec 11, 2025
baibaichen added a commit to apache/gluten that referenced this pull request Dec 17, 2025
…Spark 4.0 (#11281)

* Replace direct exception throwing with `GlutenFileFormatWriter.throwWriteError` for task failure handling.

* Respect 'mapreduce.output.basename' configuration for file name generation, according to apache/spark#48494

* Refactor imports and variable initializations for improved clarity and consistency

* Remove exclusions

* Assert on the cause message

* Enhance error handling in commit and abort tasks to provide better diagnostics

* Fix minor syntax inconsistency

---------

Co-authored-by: Chang chen <chenchang@apache.com>