
  • Spark SQL: Relational Data Processing in Spark

    Michael Armbrust, Reynold S. Xin, Cheng Lian, Yin Huai, Davies Liu, Joseph K. Bradley, Xiangrui Meng, Tomer Kaftan, Michael J. Franklin, Ali Ghodsi, Matei Zaharia

    Databricks Inc. MIT CSAIL AMPLab, UC Berkeley

    ABSTRACT
    Spark SQL is a new module in Apache Spark that integrates relational processing with Spark's functional programming API. Built on our experience with Shark, Spark SQL lets Spark programmers leverage the benefits of relational processing (e.g., declarative queries and optimized storage), and lets SQL users call complex analytics libraries in Spark (e.g., machine learning). Compared to previous systems, Spark SQL makes two main additions. First, it offers much tighter integration between relational and procedural processing, through a declarative DataFrame API that integrates with procedural Spark code. Second, it includes a highly extensible optimizer, Catalyst, built using features of the Scala programming language, that makes it easy to add composable rules, control code generation, and define extension points. Using Catalyst, we have built a variety of features (e.g., schema inference for JSON, machine learning types, and query federation to external databases) tailored for the complex needs of modern data analysis. We see Spark SQL as an evolution of both SQL-on-Spark and of Spark itself, offering richer APIs and optimizations while keeping the benefits of the Spark programming model.

    Categories and Subject Descriptors
    H.2 [Database Management]: Systems

    Keywords
    Databases; Data Warehouse; Machine Learning; Spark; Hadoop

    1 Introduction
    Big data applications require a mix of processing techniques, data sources and storage formats. The earliest systems designed for these workloads, such as MapReduce, gave users a powerful, but low-level, procedural programming interface. Programming such systems was onerous and required manual optimization by the user to achieve high performance. As a result, multiple new systems sought to provide a more productive user experience by offering relational interfaces to big data. Systems like Pig, Hive, Dremel and Shark [29, 36, 25, 38] all take advantage of declarative queries to provide richer automatic optimizations.

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
    SIGMOD'15, May 31-June 4, 2015, Melbourne, Victoria, Australia.
    Copyright is held by the owner/author(s). Publication rights licensed to ACM.
    ACM 978-1-4503-2758-9/15/05 ...$15.00.
    http://dx.doi.org/10.1145/2723372.2742797.

    While the popularity of relational systems shows that users often prefer writing declarative queries, the relational approach is insufficient for many big data applications. First, users want to perform ETL to and from various data sources that might be semi- or unstructured, requiring custom code. Second, users want to perform advanced analytics, such as machine learning and graph processing, that are challenging to express in relational systems. In practice, we have observed that most data pipelines would ideally be expressed with a combination of both relational queries and complex procedural algorithms. Unfortunately, these two classes of systems, relational and procedural, have until now remained largely disjoint, forcing users to choose one paradigm or the other.

    This paper describes our effort to combine both models in Spark SQL, a major new component in Apache Spark [39]. Spark SQL builds on our earlier SQL-on-Spark effort, called Shark. Rather than forcing users to pick between a relational or a procedural API, however, Spark SQL lets users seamlessly intermix the two.

    Spark SQL bridges the gap between the two models through two contributions. First, Spark SQL provides a DataFrame API that can perform relational operations on both external data sources and Spark's built-in distributed collections. This API is similar to the widely used data frame concept in R [32], but evaluates operations lazily so that it can perform relational optimizations. Second, to support the wide range of data sources and algorithms in big data, Spark SQL introduces a novel extensible optimizer called Catalyst. Catalyst makes it easy to add data sources, optimization rules, and data types for domains such as machine learning.
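
    To make the lazy-evaluation point concrete, here is a minimal Scala sketch of the DataFrame API (assuming a Spark 1.3-style HiveContext built from an existing SparkContext sc, and a registered table called users; the table and column names are illustrative, not from this paper):

    import org.apache.spark.sql.hive.HiveContext

    val ctx = new HiveContext(sc)
    val users = ctx.table("users")              // DataFrame backed by the "users" table
    val young = users.where(users("age") < 21)  // only builds a logical plan; nothing runs yet
    println(young.count())                      // the action triggers Catalyst optimization and execution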

    The DataFrame API offers rich relational/procedural integration within Spark programs. DataFrames are collections of structured records that can be manipulated using Spark's procedural API, or using new relational APIs that allow richer optimizations. They can be created directly from Spark's built-in distributed collections of Java/Python objects, enabling relational processing in existing Spark programs. Other Spark components, such as the machine learning library, take and produce DataFrames as well. DataFrames are more convenient and more efficient than Spark's procedural API in many common situations. For example, they make it easy to compute multiple aggregates in one pass using a SQL statement, something that is difficult to express in traditional functional APIs. They also automatically store data in a columnar format that is significantly more compact than Java/Python objects. Finally, unlike existing data frame APIs in R and Python, DataFrame operations in Spark SQL go through a relational optimizer, Catalyst.
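
    As a sketch of the two points above, creating a DataFrame from an RDD of Scala objects and then computing several aggregates in one pass might look like the following (again assuming a Spark 1.3-style API with an existing SparkContext sc; the Visit case class, table name and column names are hypothetical):

    import org.apache.spark.sql.hive.HiveContext

    case class Visit(page: String, duration: Int)

    val ctx = new HiveContext(sc)
    import ctx.implicits._

    // A DataFrame created directly from a distributed collection of Scala objects.
    val visits = sc.parallelize(Seq(Visit("home", 10), Visit("docs", 42))).toDF()
    visits.registerTempTable("visits")

    // Several aggregates computed in a single pass over the data.
    val stats = ctx.sql(
      "SELECT page, COUNT(*) AS hits, AVG(duration) AS avg_duration FROM visits GROUP BY page")
    stats.show()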

    To support a wide variety of data sources and analytics workloads in Spark SQL, we designed an extensible query optimizer called Catalyst. Catalyst uses features of the Scala programming language, such as pattern-matching, to express composable rules in a Turing-complete language. It offers a general framework for transforming trees, which we use to perform analysis, planning, and runtime code generation. Through this framework, Catalyst can also be extended with new data sources, including semi-structured data such as JSON and smart data stores to which one can push filters (e.g., HBase); with user-defined functions; and with user-defined types for domains such as machine learning. Functional languages are known to be well-suited for building compilers [37], so it is perhaps no surprise that they made it easy to build an extensible optimizer. We indeed have found Catalyst effective in enabling us to quickly add capabilities to Spark SQL, and since its release we have seen external contributors easily add them as well.
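
    To give a flavor of the rule style this framework enables, here is a small self-contained Scala sketch of a constant-folding rewrite written with pattern matching. It uses a toy expression tree, not Catalyst's actual classes:

    // Toy expression tree standing in for Catalyst's expression nodes.
    sealed trait Expr
    case class Literal(value: Int) extends Expr
    case class Attribute(name: String) extends Expr
    case class Add(left: Expr, right: Expr) extends Expr

    object ConstantFolding {
      // Rewrite the tree bottom-up, simplifying additions wherever a pattern applies.
      def apply(e: Expr): Expr = e match {
        case Add(l, r) =>
          (apply(l), apply(r)) match {
            case (Literal(a), Literal(b)) => Literal(a + b) // fold two constants
            case (folded, Literal(0))     => folded         // x + 0  =>  x
            case (Literal(0), folded)     => folded         // 0 + x  =>  x
            case (fl, fr)                 => Add(fl, fr)
          }
        case other => other
      }
    }

    object Demo extends App {
      // x + (1 + 2) simplifies to Add(Attribute("x"), Literal(3)).
      println(ConstantFolding(Add(Attribute("x"), Add(Literal(1), Literal(2)))))
    }

    Real Catalyst rules operate on its tree node classes through a transform method that applies such partial functions recursively; the sketch above only mirrors the pattern-matching style.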

    Spark SQL was released in May 2014, and is now one of the most actively developed components in Spark. As of this writing, Apache Spark is the most active open source project for big data processing, with over 400 contributors in the past year. Spark SQL has already been deployed in very large scale environments. For example, a large Internet company uses Spark SQL to build data pipelines and run queries on an 8000-node cluster with over 100 PB of data. Each individual query regularly operates on tens of terabytes. In addition, many users adopt Spark SQL not just for SQL queries, but in programs that combine it with procedural processing. For example, 2/3 of customers of Databricks Cloud, a hosted service running Spark, use Spark SQL within other programming languages. Performance-wise, we find that Spark SQL is competitive with SQL-only systems on Hadoop for relational queries. It is also up to 10× faster and more memory-efficient than naive Spark code in computations expressible in SQL.

    More generally, we see Spark SQL as an important evolution of the core Spark API. While Spark's original functional programming API was quite general, it offered only limited opportunities for automatic optimization. Spark SQL simultaneously makes Spark accessible to more users and improves optimizations for existing ones. Within Spark, the community is now incorporating Spark SQL into more APIs: DataFrames are the standard data representation in a new ML pipeline API for machine learning, and we hope to expand this to other components, such as GraphX and streaming.

    We start this paper with a background on Spark and the goals of Spark SQL (§2). We then describe the DataFrame API (§3), the Catalyst optimizer (§4), and advanced features we have built on Catalyst (§5). We evaluate Spark SQL in §6. We describe external research built on Catalyst in §7. Finally, §8 covers related work.

    2 Background and Goals
    2.1 Spark Overview

    Apache Spark is a general-purpose cluster computing engine with APIs in Scala, Java and Python and libraries for streaming, graph processing and machine learning [6]. Released in 2010, it is to our knowledge one of the most widely-used systems with a language-integrated API similar to DryadLINQ [20], and the most active open source project for big data processing. Spark had over 400 contributors in 2014, and is packaged by multiple vendors.

    Spark offers a functional programming API similar to other recent systems [20, 11], where users manipulate distributed collections called Resilient Distributed Datasets (RDDs) [39]. Each RDD is a collection of Java or Python objects partitioned across a cluster. RDDs can be manipulated through operations like map, filter, and reduce, which take functions in the programming language and ship them to nodes on the cluster. For example, the Scala code below counts the lines containing "ERROR" in a text file:

    // `spark` is a SparkContext; textFile and filter build RDDs lazily.
    val lines = spark.textFile("hdfs://...")
    val errors = lines.filter(s => s.contains("ERROR"))
    println(errors.count())  // count() is an action that triggers the computation

    This code creates an RDD of strings called lines by reading an HDFS file, then transforms it using filter to obtain another RDD, errors.