This is the recording of my talk "PySpark - Data Processing in Python on top of Apache Spark" that I gave at EuroPython 2015 in Bilbao:
Apache Spark is a computational engine for large-scale data processing. It is responsible for scheduling, distributing, and monitoring applications that consist of many computational tasks running across many worker machines on a computing cluster.
This talk gives an overview of PySpark with a focus on Resilient Distributed Datasets and the DataFrame API. While Spark Core itself is written in Scala and runs on the JVM, PySpark exposes the Spark programming model to Python. It defines an API for Resilient Distributed Datasets (RDDs). RDDs are a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner. RDDs are immutable, partitioned collections of objects. Transformations construct a new RDD from a previous one; actions compute a result based on an RDD. Multiple computation steps are expressed as a directed acyclic graph (DAG). The DAG execution model is a generalization of the Hadoop MapReduce computation model.
The Spark DataFrame API was introduced in Spark 1.3. DataFrames evolve Spark’s RDD model and are inspired by Pandas and R data frames. The API provides simplified operators for filtering, aggregating, and projecting over large datasets. The DataFrame API supports different data sources such as JSON, Parquet files, Hive tables, and JDBC database connections.