Apache Spark™ is a multi-language engine for executing data engineering, data science, and machine learning on single-node machines or clusters. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, pandas API on Spark for pandas workloads, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for stream processing.
You can find the latest Spark documentation, including a programming guide, on the project web page. This README file only contains basic setup instructions.
The easiest way to start using PySpark is through the Python shell:
docker run -it apache/spark-py /opt/spark/bin/pyspark
Then run the following command, which should return 1,000,000,000:
>>> spark.range(1000 * 1000 * 1000).count()
See https://spark.apache.org/docs/latest/running-on-kubernetes.html for running Spark on Kubernetes.
Use the images on https://hub.docker.com/r/apache/spark
Use the images on https://hub.docker.com/r/apache/spark-r
docker pull apache/spark-py