Breakthrough OLAP performance with Cassandra and Spark


Breakthrough OLAP Performance with Cassandra and Spark

Evan Chan

August 2015

Who am I?

Distinguished Engineer, Tuplejump

@evanfchan | http://velvia.github.io

User of and contributor to Spark since 0.9, Cassandra since 0.6

Co-creator and maintainer of Spark Job Server

About Tuplejump

Tuplejump is a big data technology leader providing solutions for rapid insights from data.

Calliope - the first Spark-Cassandra integration
Stargate - an open source Lucene indexer for Cassandra
SnackFS - open source HDFS for Cassandra

Didn't I attend the same talk last year?

Similar title, but mostly new material
Will reveal new open source projects! :)

Problem Space

Need analytical database / queries on structured big data
Something SQL-like, very flexible and fast
Pre-aggregation too limiting
Fast data / constant updates
Ideally, want my queries to run over fresh data too

Example: Video Analytics

Typical collection and analysis of consumer events
3 billion new events every day
Video publishers want updated stats, the sooner the better
Pre-aggregation only enables simple dashboard UIs
What if one wants to offer more advanced analysis, or a generic data query API?

E.g., top countries filtered by device type, OS, browser

Requirements

Scalable - rules out PostgreSQL, etc.
Easy to update and ingest new data
Not traditional OLAP cubes - that's not what I'm talking about
Very fast for analytical queries - OLAP, not OLTP
Extremely flexible queries
Preferably open source

Parquet

Widely used, lots of support (Spark, Impala, etc.)
Problem: Parquet is read-optimized, not easy to use for writes
Cannot support idempotent writes
Optimized for writing very large chunks, not small updates
Not suitable for time series, IoT, etc.
Often needs multiple passes of jobs for compaction of small files, deduplication, etc.

 

People really want a database-like abstraction, not a file format!

Turns out this has been solved before!

Even Facebook uses Vertica.

MPP Databases

Easy writes plus fast queries, with constant transfers
Automatic query optimization by storing intermediate query projections
C-Store: Stonebraker, et al. paper (Brown Univ.)

What's wrong with MPP Databases?

Closed source
$$$
Usually don't scale horizontally that well (or cost is prohibitive)

Cassandra

Horizontally scalable
Very flexible data modelling (lists, sets, custom data types)
Easy to operate
Perfect for ingestion of real-time / machine data
Best-of-breed storage technology, huge community
BUT: simple queries only
OLTP-oriented

Apache Spark

Horizontally scalable, in-memory queries
Functional Scala transforms - map, filter, groupBy, sort, etc.
SQL, machine learning, streaming, graph, R, and many more plugins, all on ONE platform - feed your SQL results to a logistic regression, easy! (see the sketch below)
Huge number of connectors with every single storage technology
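As a concrete illustration of feeding SQL results into MLlib - the "events" table, its columns, and the feature choice below are hypothetical, but the APIs are standard Spark 1.x:

import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

// Hypothetical "events" table with a numeric label and two numeric features
val rows = sqlContext.sql("SELECT label, feature1, feature2 FROM events")

// Turn each SQL Row into an MLlib LabeledPoint - no export step, same platform
val trainingData = rows.map { row =>
  LabeledPoint(row.getDouble(0), Vectors.dense(row.getDouble(1), row.getDouble(2)))
}

// Train a logistic regression directly on the query results
val model = new LogisticRegressionWithLBFGS().run(trainingData)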

Spark provides the missing fast, deep analytics piece of Cassandra!

Spark and Cassandra OLAP Architectures

Separate Storage and Query Layers

Combine best-of-breed storage and query platforms
Take full advantage of the evolution of each
Storage handles replication for availability
Query layer can replicate data for scaling read concurrency - independently!

Spark as Cassandra's Cache

Spark SQL

Appeared with Spark 1.0
In-memory columnar store
Parquet, JSON, Cassandra connector, Avro, many more
SQL as well as DataFrames (Pandas-style) API
Indexing integrated into data sources (e.g. C* secondary indexes)
Write custom functions in Scala... take that, Hive UDFs!! (see the sketch below)
Integrates well with MLBase, Scala/Java/Python
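For instance, registering a plain Scala function as a SQL UDF is a one-liner - a minimal sketch; the function name and the string column are illustrative:

// Register an ordinary Scala closure as a SQL function (Spark 1.x SQLContext API)
sqlContext.udf.register("strLen", (s: String) => s.length)

// Use it in SQL like any built-in function; "actor1name" is a hypothetical string column
sqlContext.sql("SELECT strLen(actor1name) FROM gdelt LIMIT 10").show()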

Connecting Spark to Cassandra

Datastax's Spark Cassandra Connector
Tuplejump's Calliope

 

Get started in one line with spark-shell!

bin/spark-shell \
  --packages com.datastax.spark:spark-cassandra-connector_2.10:1.4.0-M3 \
  --conf spark.cassandra.connection.host=127.0.0.1

Caching a SQL Table from Cassandra

DataFrames support in Cassandra Connector 1.4.0 (and 1.3.0):

val sqlContext = new org.apache.spark.sql.SQLContext(sc)

val df = sqlContext.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("table" -> "gdelt", "keyspace" -> "test"))
  .load()
df.registerTempTable("gdelt")
sqlContext.cacheTable("gdelt")
sqlContext.sql("SELECT count(monthyear) FROM gdelt").show()

 

Spark does no caching by default - you will always be reading from C*!

How Spark SQL's Table Caching Works

Spark Cached Tables can be Really Fast

GDELT dataset, 4 million rows, 60 columns, localhost

Method     Time (secs)
Uncached   317
Cached     0.38

Almost a 1000x speedup!

On an 8-node EC2 c3.XL cluster with 117 million rows, common queries run in 1-2 seconds against the cached dataset.

Tuning Connector Partitioning

spark.cassandra.input.split.size

Guideline: one split per partition, one partition per CPU core
Much more parallelism won't speed up the job much, but will starve other C* requests
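A minimal sketch of setting the split size when building the context; the value shown is arbitrary and should be tuned to your partition sizes and core count:

import org.apache.spark.{SparkConf, SparkContext}

// Configure the Cassandra connector's input split size up front;
// 10000 is only an illustrative value, not a recommendation
val conf = new SparkConf()
  .set("spark.cassandra.connection.host", "127.0.0.1")
  .set("spark.cassandra.input.split.size", "10000")
val sc = new SparkContext(conf)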

Lesson #1: Take Advantage of Spark Caching!

Problems with Cached Tables

Still have to read the data from Cassandra first, which is slow
Amount of RAM: your entire data + extra for conversion to the cached table
Cached tables only live in Spark executors - by default
  tied to a single context - not HA
  once any executor dies, must re-read data from C*
Caching takes time: convert from RDD[Row] to compressed columnar format
Cannot easily combine new RDD[Row] with cached tables (and keep speed)

Problems with Cached Tables

If you don't have enough RAM, Spark can cache your tables partly to disk. This is still way, way faster than scanning an entire C* table. However, cached tables are still tied to a single Spark context/application.

Also: rdd.cache() is NOT the same as SQLContext's cacheTable!
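To make the distinction concrete, a minimal sketch using the DataFrame from the earlier example:

// Caches the RDD of Row objects on the JVM heap - row-oriented, no columnar compression
df.rdd.cache()

// Builds Spark SQL's compressed, in-memory columnar representation - this is the fast path
df.registerTempTable("gdelt")
sqlContext.cacheTable("gdelt")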

What about C* Secondary Indexing?

The Spark-Cassandra Connector and Calliope can both reduce I/O by using Cassandra secondary indices. Does this work with caching?

No, not really, because only the filtered rows would be cached. Subsequent queries against this limited cached table would not give you the expected results.

Tachyon Off-Heap Caching

Intro to Tachyon

Tachyon: an in-memory cache for HDFS and other binary data sources
Keeps data off-heap, so multiple Spark applications/executors can share data
Solves the HA problem for data

Wait, wait, wait!

What am I caching, exactly? Tachyon is designed for caching files or binary blobs.

A serialized form of CassandraRow/CassandraRDD?
Raw output from the Cassandra driver?

What you really want is this:

Cassandra SSTable -> Tachyon (as row cache) -> CQL -> Spark

Bad programmers worry about the code. Good programmers worry about data structures. - Linus Torvalds

 

Are we really thinking holistically about data modelling, caching, and how they affect the entire system architecture?

Efficient Columnar Storage in Cassandra

Wait, I thought Cassandra was columnar?

How Cassandra stores your CQL Tables

Suppose you had this CQL table (the table name "employees" is added here for valid CQL):

CREATE TABLE employees (
  department text,
  empId text,
  first text,
  last text,
  age int,
  PRIMARY KEY (department, empId)
);

How Cassandra stores your CQL Tables

PartitionKey   01:first   01:last    01:age   02:first   02:last     02:age
Sales          Bob        Jones      34       Susan      O'Connor    40
Engineering    Dilbert    P          ?        Dogbert    Dog         1

Each row is stored contiguously. All columns in row 2 come after row 1.

To analyze only age, C* still has to read every field.

Cassandra is really a row-based, OLTP-oriented datastore.

Unless you know how to use it otherwise :)

The traditional row-based data storage approach is dead. - Michael Stonebraker

Columnar Storage (Memory)

Name column
  row 0 -> 0
  row 1 -> 1

Dictionary: {0: "Barak", 1: "Hillary"}

Age column
  row 0 -> 46
  row 1 -> 66
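A toy sketch of what dictionary encoding looks like in code - illustrative only, not the actual storage format:

// Toy dictionary encoding: store each distinct string once,
// keep only small integer codes in the column vector
val names = Seq("Barak", "Hillary", "Barak", "Barak")

val dictionary: Map[String, Int] = names.distinct.zipWithIndex.toMap   // Map("Barak" -> 0, "Hillary" -> 1)
val encoded: Seq[Int]            = names.map(dictionary)               // Seq(0, 1, 0, 0)
val reverse: Map[Int, String]    = dictionary.map(_.swap)

// Reading back a value touches only the Name column and the tiny dictionary
val thirdName = reverse(encoded(2))                                    // "Barak"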

Columnar Storage (Cassandra)

Review: each physical row in Cassandra (e.g. a "partition key") stores its columns together on disk.

Schema CF

Rowkey   Type
Name     StringDict
Age      Int

Data CF

Rowkey   0    1
Name     0    1
Age      46   66

Columnar Format solves I/O

Compression
  Dictionary compression - HUGE savings for low-cardinality string columns
  RLE, other techniques (see the RLE sketch below)
Reduce I/O
  Only columns needed for the query are loaded from disk
Batch multiple rows in one cell for efficiency (avoid clustering key overhead)
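Similarly, a minimal sketch of run-length encoding (RLE), which pays off when a column has long runs of repeated values (sorted or low-cardinality data):

// Toy RLE: collapse runs of repeated values into (value, runLength) pairs
def rle[A](values: Seq[A]): List[(A, Int)] =
  values.foldLeft(List.empty[(A, Int)]) {
    case ((v, n) :: rest, x) if v == x => (v, n + 1) :: rest
    case (acc, x)                      => (x, 1) :: acc
  }.reverse

rle(Seq("US", "US", "US", "FR", "FR"))   // List(("US", 3), ("FR", 2))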

Columnar Format solves Caching

Use the same format on disk, in cache, in memory scan
Caching works a lot better when the cached object is the same!!
No data format dissonance means bringing in new bits of data and combining them with existing cached data is seamless

So, why isn't everybody doing this?

No columnar storage format designed to work with NoSQL stores
Efficient conversion to/from columnar format is a hard problem
Most infrastructure is still row-oriented
  Spark SQL/DataFrames based on RDD[Row]
  Spark Catalyst is a row-oriented query parser

All hard work leads to profit, but mere talk leads to poverty. - Proverbs 14:23

Columnar Storage Performance Study 

http://github.com/velvia/cassandra-gdelt

GDELT Dataset

Global Database of Events, Language, and Tone
1979 to now
60 columns, 250 million+ rows, 250 GB+
Let's compare Cassandra I/O only, no caching or Spark

The scenarios

1. Narrow table - CQL table with one row per partition key
2. Wide table - wide rows with 10,000 logical rows per partition key
3. Columnar layout - 1000 rows per columnar chunk, wide rows, with dictionary compression

First 4 million rows, localhost, SSD, C* 2.0.9, LZ4 compression. Compaction performed before read benchmarks.

Query and ingest times

Scenario       Ingest     Read all columns   Read one column
Narrow table   1927 sec   505 sec            504 sec
Wide table     3897 sec   365 sec            351 sec
Columnar       93 sec     8.6 sec            0.23 sec

On reads, using a columnar format is up to 2190x faster, while ingestion is 20-40x faster.

Of course, real-life perf gains will depend heavily on the query, table width, etc.

Disk space usage

Scenario       Disk used
Narrow table   2.7 GB
Wide table     1.6 GB
Columnar       0.34 GB

The disk space usage helps explain some of the numbers.

Towards Extreme Query Performance

The filo project is a binary data vector library designed for extreme read performance with minimal deserialization costs.

http://github.com/velvia/filo

Designed for NoSQL, not a file format
Random or linear access
On or off heap
Missing value support
Scala only, but cross-platform support possible
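To illustrate the "minimal deserialization" idea - this is not the filo API, just the underlying trick: when values live as raw bytes in an off-heap buffer, reading element i is simple offset arithmetic rather than object deserialization:

import java.nio.{ByteBuffer, ByteOrder}

// Off-heap buffer holding 1000 raw little-endian ints - no per-value Java objects
val buf = ByteBuffer.allocateDirect(4 * 1000).order(ByteOrder.LITTLE_ENDIAN)
(0 until 1000).foreach(i => buf.putInt(4 * i, i * 2))

// "Deserialization" is just an offset read - random access, on or off heap
def readInt(i: Int): Int = buf.getInt(4 * i)

val total = (0 until 1000).map(readInt).sum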

What is the ceiling?

This Scala loop can read integers from a binary Filo blob at a rate of 2 billion integers per second - single threaded:

def sumAllInts(): Int = {
  var total = 0
  for { i <- 0 until numValues optimized } {
    total += sc(i)
  }
  total
}

Vectorization of Spark Queries

The Tungsten project: process many elements from the same column at once, keep data in L1/L2 cache.

Coming in Spark 1.4 through 1.6

Hot Column Caching in Tachyon

Has a "table" feature, originally designed for Shark
Keep hot columnar chunks in shared off-heap memory for fast access

Introducing FiloDB 

http://github.com/velvia/FiloDB

What's in the name?

Rich sweet layers of distributed, versioned database goodness

Distributed

Apache Cassandra. Scale out with no SPOF. Cross-datacenter replication. Proven storage and database technology.

Versioned

Incrementally add a column or a few rows as a new version. Easily control what versions to query. Roll back changes inexpensively.

Stream out new versions as continuous queries :)

Columnar

Parquet-style storage layout
Retrieve select columns and minimize I/O for OLAP queries
Add a new column without having to copy the whole table
Vectorization and lazy/zero serialization for extreme efficiency

100% Reactive

Built completely on the Typesafe Platform:

Scala 2.10 and SBT
Spark (including a custom data source)
Akka Actors for rational scale-out concurrency
Futures for I/O
Phantom Cassandra client for reactive, type-safe C* I/O
Typesafe Config

Spark SQL Queries!

SELECT first, last, age FROM customers
WHERE _version > 3 AND age < 40
LIMIT 100

Read from and write to Spark DataFrames (see the sketch below)
Append/merge to a FiloDB table from Spark Streaming
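A sketch of what that could look like from Spark, assuming FiloDB exposes a DataFrame data source named "filodb.spark" with a "dataset" option - those names, and the newRows DataFrame, are assumptions for illustration, not confirmed API:

import org.apache.spark.sql.SaveMode

// Load a FiloDB dataset as a DataFrame - format/option names are assumed
val customers = sqlContext.read
  .format("filodb.spark")           // assumed data source name
  .option("dataset", "customers")   // assumed option key
  .load()
customers.registerTempTable("customers")

sqlContext.sql(
  "SELECT first, last, age FROM customers WHERE _version > 3 AND age < 40 LIMIT 100").show()

// Appending a DataFrame of new rows (newRows is hypothetical) back into the same dataset
newRows.write
  .format("filodb.spark")
  .option("dataset", "customers")
  .mode(SaveMode.Append)
  .save()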

FiloDB vs Parquet

Comparable read performance - with lots of space to improve
  Assuming co-located Spark and Cassandra
  On localhost, both subsecond for simple queries (GDELT 1979-1984)
  FiloDB has more room to grow - due to hot column caching and much less deserialization overhead
Lower memory requirement due to much smaller block sizes
Much better fit for IoT / machine / time-series applications
Limited support for types
  array / set / map support not there, but will be added later

Where FiloDB Fits In

Use regular C* denormalized tables for OLTP and single-key lookups
Use FiloDB for the remaining ad-hoc or more complex analytical queries
Simplify your analytics infrastructure!
  No need to export to Hadoop/Parquet/data warehouse. Use Spark and C* for both OLAP and OLTP!
Perform ad-hoc OLAP analysis of your time-series, IoT data

Simplify your Lambda Architecture...

(https://www.mapr.com/developercentral/lambda-architecture)

With Spark, Cassandra, and FiloDB

Ma, where did all the components go?

You mean I don't have to deal with Hadoop?
Use Cassandra as a front end to store IoT data first

Exactly-Once Ingestion from Kafka

New rows appended via Kafka (see the sketch below)
Writes are idempotent - no need to dedup!
Converted to columnar chunks on ingest and stored in C*
Only the necessary columnar chunks are read into Spark for minimal I/O
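A rough sketch of such a pipeline with Spark Streaming's Kafka direct stream; the event schema, parser, topic name, and the "filodb.spark" sink are all assumptions for illustration:

import kafka.serializer.StringDecoder
import org.apache.spark.sql.SaveMode
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

case class Event(monthyear: Int, country: String)       // illustrative schema
def parseEvent(msg: String): Event = {                   // hypothetical parser
  val parts = msg.split(',')
  Event(parts(0).toInt, parts(1))
}

val ssc = new StreamingContext(sc, Seconds(10))
val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")

// The direct stream reads deterministic offset ranges, so a replayed batch produces
// the same rows - combined with idempotent writes, reprocessing creates no duplicates
val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, Set("events"))

stream.foreachRDD { rdd =>
  val df = sqlContext.createDataFrame(rdd.map { case (_, msg) => parseEvent(msg) })
  df.write.format("filodb.spark")                        // assumed FiloDB data source name
    .option("dataset", "events")
    .mode(SaveMode.Append)
    .save()
}

ssc.start()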

You can help!

Send me your use cases for OLAP on Cassandra and Spark
  Especially IoT and Geospatial
Email if you want to contribute

Thanks...

...to the entire OSS community, but in particular:

Lee Mighdoll, Nest/Google
Rohit Rai and Satya B., Tuplejump
My colleagues at Socrata

 

If you want to go fast, go alone. If you want to go far, go together. -- African proverb

DEMO TIME

GDELT: Regular C* Tables vs FiloDB

Extra Slides

When in doubt, use brute force. - Ken Thompson

Automatic Columnar Conversion using Custom Indexes

Write to Cassandra as you normally do
A custom indexer takes changes, merges and compacts them into columnar chunks behind the scenes

Implementing Lambda is Hard

Use a real-time pipeline backed by a KV store for new updates
Lots of moving parts
  Key-value store, real-time sys, batch, etc.
Need to run similar code in two places
Still need to deal with ingesting data to Parquet/HDFS
Need to reconcile queries against two different places