Advanced Apache Spark Meetup Data Sources API Cassandra Spark Connector Spark 1.5.1 Zeppelin 0.6.0


  • IBM | spark.tc

    Advanced Apache Spark Meetup
    Spark SQL + DataFrames + Catalyst + Data Sources API

    Chris Fregly, Principal Data Solutions Engineer
    IBM Spark Technology Center

    Oct 6, 2015

    Power of data. Simplicity of design. Speed of innovation.

  • Meetup Housekeeping

  • IBM | spark.tc

    Announcements

    Steve Beier, Boss Man!

    IBM Spark Technology Center

  • IBM | spark.tc

    CAP Theorem Adapted to Hiring

    Parochial

    Collaborative

    Awesome

    Spelling Bee Champion

    First Chair Chess Club

    Math-lete, 1st Place

  • IBM | spark.tc

    Who am I?

    Streaming Data Engineer, Netflix Open Source Committer

    Data Solutions Engineer, Apache Contributor

    Principal Data Solutions Engineer, IBM Spark Technology Center

  • IBM | spark.tc

    Last Meetup (Spark Wins 100 TB Daytona GraySort)

    On-disk only, in-memory caching disabled

    sortbenchmark.org/ApacheSpark2014.pdf

  • IBM | spark.tc

    Upcoming Advanced Apache Spark Meetups

    Project Tungsten: Data Structs/Algos for CPU/Memory Optimization
    Nov 12th, 2015

    Text-based Advanced Analytics and Machine Learning
    Jan 14th, 2016

    ElasticSearch-Spark Connector w/ Costin Leau (Elastic.co) & Me
    Feb 16th, 2016

    Spark Internals Deep Dive
    Mar 24th, 2016

    Spark SQL Catalyst Optimizer Deep Dive
    Apr 21st, 2016

  • IBM | spark.tc

    Meetup Metrics

    Total Spark Experts: 1100+

    Donations: $0
    "Your money is no good here."
    (Lloyd from The Shining)

  • IBM | spark.tc

    Meetup Updates

    Talking with other Spark Meetup Groups:
    potential mergers and/or hostile takeovers!

    New sponsors!

    Connected with the organizer of the Bangalore Spark Meetup,
    Madhukara Phatak

  • IBM | spark.tc

    Constructive Criticism from Previous Attendees

    "Chris, you're like a fat version of an already-fat Erlich from
    Silicon Valley - except not funny."

    "Chris, your voice is so annoying that it keeps waking me up from
    sleep induced by your boring content."

  • IBM | spark.tc

    Recent Events

    Cassandra Summit 2015:
    Real-time Advanced Analytics w/ Spark & Cassandra

    Strata NYC 2015:
    Practical Data Science w/ Spark: Recommender Systems

    Available on Slideshare: http://slideshare.net/cfregly

  • IBM | spark.tc

    Freg-a-palooza Upcoming World Tour

    London Spark Meetup (Oct 12th)
    Scotland Data Science Meetup (Oct 13th)
    Dublin Spark Meetup (Oct 15th)
    Barcelona Spark Meetup (Oct 20th)
    Madrid Spark Meetup (Oct 22nd)
    Paris Spark Summit (Oct 26th)
    Amsterdam Spark Summit (Oct 27th - Oct 29th)
    Delft Dutch Data Science Meetup (Oct 29th)
    Brussels Spark Meetup (Oct 30th)
    Zurich Big Data Developers Meetup (Nov 2nd)

    High probability I'll end up in jail - or married?!

  • Spark SQL + DataFrames

    Catalyst + Data Sources API

  • IBM | spark.tc

    Topics of this Talk

    DataFrames

    Catalyst Optimizer and Query Plans

    Data Sources API

    Creating and Contributing a Custom Data Source

    Partitions, Pruning, Pushdowns

    Native + Third-Party Data Source Impls

    Spark SQL Performance Tuning

  • IBM | spark.tc

    DataFrames

    Inspired by R and pandas DataFrames

    Cross-language support: SQL, Python, Scala, Java, R

    Levels the performance of Python, Scala, Java, and R:
    generates JVM bytecode instead of serializing/pickling objects to Python

    A DataFrame is a container for a logical plan:
    transformations are lazy and represented as a tree,
    and the Catalyst Optimizer creates the physical plan

    DataFrame.rdd returns the underlying RDD if needed

    Custom UDFs via registerFunction(); new, experimental UDAF support

    Use DataFrames instead of RDDs!
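    The registerFunction() named above is the Python-side API; a minimal
    Scala sketch of the same ideas (Spark 1.5), assuming a sqlContext and a
    gendersDF DataFrame registered as the table genders:

    ```scala
    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.Row

    // Register a custom UDF (Scala counterpart of Python's registerFunction):
    sqlContext.udf.register("normalizeGender", (g: String) => g.toUpperCase)
    sqlContext.sql("SELECT id, normalizeGender(gender) FROM genders")

    // Drop back to the underlying RDD[Row] only when you really need it:
    val rowRDD: RDD[Row] = gendersDF.rdd
    ```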

  • IBM | spark.tc

    Catalyst Optimizer

    Converts the logical plan to a physical plan by manipulating and
    optimizing the DataFrame transformation tree:

    Subquery elimination - use aliases to collapse subqueries
    Constant folding - replace expressions with constants
    Filter simplification - remove unnecessary filters
    Predicate/filter pushdowns - avoid unnecessary data loads
    Projection collapsing - avoid unnecessary projections

    Hooks for custom rules: a rule is a Scala case class/object that
    implements o.a.s.sql.catalyst.rules.Rule and can be applied at any
    plan stage:

    val newPlan = MyFilterRule(analyzedPlan)
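    As a sketch of such a rule (Spark 1.5 Catalyst API), this hypothetical
    MyFilterRule removes Filter nodes whose condition is the literal true,
    which do no filtering at all:

    ```scala
    import org.apache.spark.sql.catalyst.expressions.Literal
    import org.apache.spark.sql.catalyst.plans.logical.{Filter, LogicalPlan}
    import org.apache.spark.sql.catalyst.rules.Rule

    // A Filter whose condition is the constant `true` keeps every row,
    // so it can be replaced by its child plan.
    object MyFilterRule extends Rule[LogicalPlan] {
      def apply(plan: LogicalPlan): LogicalPlan = plan transform {
        case Filter(Literal(true, _), child) => child
      }
    }

    // Applied directly to an analyzed plan, as on the slide:
    // val newPlan = MyFilterRule(analyzedPlan)
    ```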

  • IBM | spark.tc

    Plan Debugging

    gendersCsvDF.select($"id", $"gender")
      .filter("gender != 'F'")
      .filter("gender != 'M'")
      .explain(true)

    Requires explain(true) to print every plan stage; the stages are also
    available directly:

    DataFrame.queryExecution.logical
    DataFrame.queryExecution.analyzed
    DataFrame.queryExecution.optimizedPlan
    DataFrame.queryExecution.executedPlan
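    A sketch of inspecting those stages one at a time, assuming the
    gendersCsvDF DataFrame from the slide and a sqlContext in scope:

    ```scala
    import sqlContext.implicits._

    val df = gendersCsvDF.select($"id", $"gender")
      .filter("gender != 'F'")
      .filter("gender != 'M'")

    println(df.queryExecution.logical)       // unresolved logical plan
    println(df.queryExecution.analyzed)      // resolved logical plan
    println(df.queryExecution.optimizedPlan) // after Catalyst optimization rules
    println(df.queryExecution.executedPlan)  // physical plan chosen by the planner
    ```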

  • IBM | spark.tc

    Plan Visualization & Join/Aggregation Metrics (new in Spark 1.5)

    [Spark UI screenshot; callouts:]
    Shows the effectiveness of each filter and where cost-based
    optimization is applied, plus peak memory for joins and aggregations.
    Uses an optimized, CPU-cache-aware binary format that minimizes GC
    and improves join performance (Project Tungsten).

  • IBM | spark.tc

    Data Sources API

    Relations (o.a.s.sql.sources.interfaces.scala)
    BaseRelation (abstract class): provides the schema of the data
    TableScan (trait): read all data from the source, construct Rows
    PrunedFilteredScan (trait): read with column pruning & predicate pushdowns
    InsertableRelation (trait): insert or overwrite data based on the SaveMode enum
    RelationProvider (trait): handles user options, creates the BaseRelation

    Execution (o.a.s.sql.execution.commands.scala)
    RunnableCommand (trait)
    ExplainCommand (case class)
    CacheTableCommand (case class)

    Filters (o.a.s.sql.sources.filters.scala)
    Filter (abstract class for all filter pushdowns for a data source)
    EqualTo (case class)
    GreaterThan (case class)
    StringStartsWith (case class)
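    Putting those interfaces together, a minimal hypothetical source needs
    only a RelationProvider plus a BaseRelation with TableScan (Spark 1.5
    interfaces; RangeRelation and the "n" option are invented for
    illustration):

    ```scala
    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.{Row, SQLContext}
    import org.apache.spark.sql.sources.{BaseRelation, RelationProvider, TableScan}
    import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

    // RelationProvider: receives the user's OPTIONS map, returns the relation.
    class DefaultSource extends RelationProvider {
      override def createRelation(sqlContext: SQLContext,
                                  parameters: Map[String, String]): BaseRelation =
        new RangeRelation(sqlContext, parameters.getOrElse("n", "10").toInt)
    }

    // TableScan: no pruning or pushdowns; always returns every row.
    class RangeRelation(val sqlContext: SQLContext, n: Int)
        extends BaseRelation with TableScan {
      override def schema: StructType =
        StructType(StructField("id", IntegerType, nullable = false) :: Nil)
      override def buildScan(): RDD[Row] =
        sqlContext.sparkContext.parallelize(0 until n).map(Row(_))
    }
    ```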

  • IBM | spark.tc

    Creating a Custom Data Source

    Study existing native and third-party data source impls:

    Native: JDBC (o.a.s.sql.execution.datasources.jdbc)
    class JDBCRelation extends BaseRelation
      with PrunedFilteredScan with InsertableRelation

    Third-Party: Cassandra (o.a.s.sql.cassandra)
    class CassandraSourceRelation extends BaseRelation
      with PrunedFilteredScan with InsertableRelation

  • IBM | spark.tc

    Contributing a Custom Data Source

    spark-packages.org:
    managed by Databricks;
    contains links to externally-managed GitHub projects,
    ratings and comments,
    and the Spark version requirements of each package

    Examples:
    https://github.com/databricks/spark-csv
    https://github.com/databricks/spark-avro
    https://github.com/databricks/spark-redshift

  • Partitions, Pruning, Pushdowns

  • IBM | spark.tc

    Demo Dataset (from previous Spark After Dark talks)

    RATINGS
    ========
    UserID,ProfileID,Rating (1-10)

    GENDERS
    ========
    UserID,Gender (M,F,U)

    Anonymous data

  • IBM | spark.tc

    Partitions

    Partition based on data usage patterns:

    /genders.parquet/gender=M/
    /genders.parquet/gender=F/

  • IBM | spark.tc

    Pruning

    Partition Pruning:
    filter out entire partitions of rows on partitioned data
    SELECT id, gender FROM genders WHERE gender = 'U'

    Column Pruning:
    filter out entire columns for all rows if not required;
    extremely useful for columnar storage formats (Parquet, ORC)
    SELECT id, gender FROM genders
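    The same two prunings in DataFrame form - a sketch assuming the
    partitioned genders.parquet layout from the Partitions slide and a
    sqlContext in scope:

    ```scala
    import sqlContext.implicits._

    val genders = sqlContext.read
      .parquet("file:/root/pipeline/datasets/dating/genders.parquet")

    // Partition pruning: only the gender=U directory is scanned
    genders.filter($"gender" === "U").select($"id", $"gender")

    // Column pruning: Parquet reads only the id and gender column chunks
    genders.select($"id", $"gender")
    ```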

  • IBM | spark.tc

    Pushdowns

    Predicate (aka Filter) Pushdowns:
    a predicate returns {true, false} for a given function/condition;
    filters rows as deep into the data source as possible

    The data source must implement PrunedFilteredScan
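    A minimal sketch of PrunedFilteredScan over an in-memory (id, gender)
    dataset (Spark 1.5 interfaces; GendersRelation is invented for
    illustration):

    ```scala
    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.{Row, SQLContext}
    import org.apache.spark.sql.sources.{BaseRelation, EqualTo, Filter,
      GreaterThan, PrunedFilteredScan}
    import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}

    class GendersRelation(val sqlContext: SQLContext, data: Seq[(Long, String)])
        extends BaseRelation with PrunedFilteredScan {

      override def schema: StructType = StructType(
        StructField("id", LongType) :: StructField("gender", StringType) :: Nil)

      override def buildScan(requiredColumns: Array[String],
                             filters: Array[Filter]): RDD[Row] = {
        // Predicate pushdown: evaluate the filters we understand at the source.
        // Spark re-applies any filters we skip, so partial handling is safe.
        val keep: ((Long, String)) => Boolean = { case (id, gender) =>
          filters.forall {
            case EqualTo("gender", v)       => gender == v
            case GreaterThan("id", v: Long) => id > v
            case _                          => true // unhandled: Spark filters later
          }
        }
        // Column pruning: emit only the columns Spark asked for, in order.
        val project: ((Long, String)) => Row = { case (id, gender) =>
          Row.fromSeq(requiredColumns.map {
            case "id"     => id
            case "gender" => gender
          })
        }
        sqlContext.sparkContext.parallelize(data.filter(keep).map(project))
      }
    }
    ```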

  • Native Spark SQL Data Sources

  • IBM | spark.tc

    Spark SQL Native Data Sources - Source Code

  • IBM | spark.tc

    JSON Data Source

    DataFrame:
    val ratingsDF = sqlContext.read.format("json")
      .load("file:/root/pipeline/datasets/dating/ratings.json.bz2")

    -- or, via the convenience method --
    val ratingsDF = sqlContext.read
      .json("file:/root/pipeline/datasets/dating/ratings.json.bz2")

    SQL:
    CREATE TABLE genders USING json
    OPTIONS (path "file:/root/pipeline/datasets/dating/genders.json.bz2")

  • IBM | spark.tc

    JDBC Data Source

    Add the JDBC driver jar to the Spark JVM system classpath:

    $ export SPARK_CLASSPATH=<path-to-jdbc-driver.jar>

    DataFrame:
    val jdbcConfig = Map(
      "driver"  -> "org.postgresql.Driver",
      "url"     -> "jdbc:postgresql://hostname:port/database",
      "dbtable" -> "schema.tablename")
    sqlContext.read.format("jdbc").options(jdbcConfig).load()

    SQL:
    CREATE TABLE genders USING jdbc
    OPTIONS (url "...", dbtable "...", driver "...")

  • IBM | spark.tc

    Parquet Data Source

    Configuration:
    spark.sql.parquet.filterPushdown=true
    spark.sql.parquet.mergeSchema=true
    spark.sql.parquet.cacheMetadata=true
    spark.sql.parquet.compression.codec=[uncompressed,snappy,gzip,lzo]

    DataFrames:
    val gendersDF = sqlContext.read.format("parquet")
      .load("file:/root/pipeline/datasets/dating/genders.parquet")
    gendersDF.write.format("parquet").partitionBy("gender")
      .save("file:/root/pipeline/datasets/dating/genders.parquet")

    SQL:
    CREATE TABLE genders USING parquet
    OPTIONS (path "file:/root/pipeline/datasets/dating/genders.parquet")

  • IBM | spark.tc

    ORC Data Source

    Configuration:
    spark.sql.orc.filterPushdown=true

    DataFrames:
    val gendersDF = sqlContext.read.format("orc")
      .load("file:/root/pipeline/datasets/dating/genders")
    gendersDF.write.format("orc").partitionBy("gender")
      .save("file:/root/pipeline/datasets/dating/genders")

    SQL:
    CREATE TABLE genders USING orc
    OPTIONS (path "file:/root/pipeline/datasets/dating/genders")