apache/spark-py

Sponsored OSS

By The Apache Software Foundation

Updated almost 3 years ago


apache/spark-py repository overview

Apache Spark

Apache Spark™ is a multi-language engine for executing data engineering, data science, and machine learning on single-node machines or clusters. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, pandas API on Spark for pandas workloads, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for stream processing.

https://spark.apache.org/

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project web page. This README file only contains basic setup instructions.

Interactive Python Shell

The easiest way to start using PySpark is through the Python shell:

docker run -it apache/spark-py /opt/spark/bin/pyspark

Then run the following command, which should return 1,000,000,000:

>>> spark.range(1000 * 1000 * 1000).count()
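The same image can also run non-interactive PySpark applications with spark-submit. A minimal sketch, assuming the example scripts ship under /opt/spark/examples as in the official images:

```shell
# Run the bundled Pi-estimation example in local mode with 2 threads.
# The example path is an assumption based on the official image layout.
docker run -it apache/spark-py \
  /opt/spark/bin/spark-submit \
  --master "local[2]" \
  /opt/spark/examples/src/main/python/pi.py 10
```

The driver prints an approximation of Pi to stdout when the job finishes.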

Running Spark on Kubernetes

https://spark.apache.org/docs/latest/running-on-kubernetes.html
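For cluster deployments, the documentation above describes pointing spark-submit at a Kubernetes API server and naming this image as the container image. A hedged sketch, where the API-server host, port, and instance count are placeholders you would adjust for your cluster:

```shell
# Submit a PySpark job to Kubernetes in cluster mode.
# <k8s-apiserver-host>:<port> is a placeholder for your cluster's API server.
/opt/spark/bin/spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<port> \
  --deploy-mode cluster \
  --name spark-pi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=apache/spark-py \
  local:///opt/spark/examples/src/main/python/pi.py
```

The `local://` scheme tells Spark the application file is already inside the container image rather than on the submitting machine.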

To run Spark with Scala/Java only

Use the images on https://hub.docker.com/r/apache/spark

To run Spark with R

Use the images on https://hub.docker.com/r/apache/spark-r

Tag summary

Content type: Image
Digest: sha256:bec1fed78…
Size: 525.1 MB
Last updated: almost 3 years ago

docker pull apache/spark-py

Pulls last week: 2,540