Data Preparation with Trifacta for Amazon SageMaker

A step-by-step practical guide

By Bertrand Cariou, Senior Director of Partner Marketing, Trifacta

DATA PREPARATION FOR AMAZON SAGEMAKER

While writing a recent blog post about forecasting with SageMaker's unique modeling algorithms such as DeepAR, as well as more traditional ones such as Autoregressive Integrated Moving Average (ARIMA) and Exponential Smoothing (ES), it struck me that all of these algorithms expect well-structured, clean data to deliver the most accurate predictions.

However, forecast modeling depends on numerous datasets, such as inventory data, promotions, past orders, products, or even weather data and product ratings. These originate from internal systems but, more often than not, from various parties (retailers, distributors, brokers, manufacturers, CPG companies, public data, social media, etc.), each with its own proprietary formats, standards, and a very personal perspective on what data quality means.

To structure, clean, and combine these disparate data sets into a consistent matrix for modeling, most data scientists have to spend a massive amount of their time preparing the data.

Data Preparation for Machine Learning

Data scientists should focus on finding and tuning the best models instead of spending too much time on the janitorial work of cleaning the data. This is why data preparation is seeing increasing adoption in the AI/ML world. This guide walks through the typical work involved.

Trifacta is a recognized leader in data preparation, an AWS ML Competency partner, and is available on the AWS Marketplace. Data preparation in Trifacta is a visual approach to structuring, cleaning, combining, and enriching disparate data at scale (leveraging the elastic power of AWS), and it is considerably faster than traditional approaches such as hand coding. Let's take a closer look at preparing data for SageMaker using Trifacta.

The 4 Steps to Prepare Data for SageMaker

Data Preparation involves 4 major steps to transform raw data into a refined asset ready for modeling in Amazon SageMaker.

Step 1: Structuring the Data
Data from internal systems will mostly live in files, applications, and databases. When it has to be combined with retailer or distributor data, however, it may arrive in various shapes (e.g., hierarchical files, JSON, XML), often as report exports, and of course Excel. Any data like this must first be put into a readable form, standardized into columns and rows, so it can be assessed. Trifacta automatically recognizes data types and organizes the data into a familiar grid interface.

Step 2: Assessing Data Accuracy
Once the data has been reformatted, it becomes easier to assess its overall quality. Are fields well formed? Are there mismatched values, like an invalid zip code? Are units of measure inconsistent? You might even see value distribution anomalies, a form of unexpected data value (such as percentages over 100 for fractions, or a weight far above the average). Identifying these data flaws and reviewing for accuracy is critical for effective data modeling, and it's at the core of Trifacta's architecture. Every time you open a dataset or derive a new value from existing data, Trifacta automatically and dynamically profiles your data, assesses its accuracy, and then displays a health-check bar with information for each column.

You can experience these steps yourself by downloading the reference files used in this guide:
- Retail Datasets Before Data Preparation
- Retail Datasets Prepared for SageMaker
- XGBoost Jupyter Notebook for SageMaker
You can follow along using the free Trifacta Wrangler edition or Trifacta Wrangler on the AWS Marketplace.


Step 3: Addressing Data Inconsistency
Following the assessment and discovery of the data, other problems can be addressed: starting with simple missing and mismatched attributes, as well as anomalies and outliers, and then data formats, standardization, and enrichments to make the data consistent for the model. Sometimes cross-reference and lookup tables or conversion formulas are useful, depending on an organization's needs. Lastly, duplicate and inconsistent records are also addressed.

Step 4: Building A Consolidated View
Once the data is clean, Trifacta provides easy ways to join datasets, pivot and unpivot data, aggregate values, and calculate key indicators and predictors. Because modeling implies sampling out of very large datasets, Trifacta offers native sampling techniques to make sure data flaws are comprehensively identified. If all goes well, the data preparation recipes can run at scale to provide structured data to SageMaker (via S3 files). If there's a problem, the process iterates through the various steps to tune the data until it is finally ready.

While these 4 steps are described separately, they are not linear; they are iterative and interrelated by nature. As the data is structured, it is assessed, then cleaned to generate a new dataset that is assessed again, and so on.

Better Product Forecasting With Trifacta and SageMaker
To demonstrate data preparation for SageMaker, let's take the scenario of a retailer implementing a predictive model to identify the type of goods and quantities to deliver to its stores based on extreme weather conditions. The predictive model will be trained on historical data such as past transactions, store locations, and weather data points. This particular company maintains its list of stores in an Excel spreadsheet (no surprise here!), handles order management in a business application backed by a data warehouse, and subscribes to a weather service that delivers JSON files to S3 with numerous past and future weather data points.

Let’s follow the 4 steps presented above.


Step 1: Structuring the Data

The weather data is provided in daily batches of large JSON files with historical weather information for the past day and forecasts for the next few days and weeks. The files are stored in an S3 bucket.

With Trifacta, we can browse the S3 buckets and select the weather file we want to wrangle, or a whole folder at once if the files share the same structure.
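Outside of Trifacta, the same browsing step can be reproduced with the AWS SDK. Here is a minimal sketch with boto3, where the bucket name and weather/ prefix are assumptions made for this example:

import boto3

s3 = boto3.client('s3')
# List the daily weather exports under an assumed bucket and prefix
response = s3.list_objects_v2(Bucket='my-retail-data', Prefix='weather/')
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])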

We will also connect to the tables containing our store transaction data, a copy of which is stored in Amazon Redshift.


Finally, we import the Excel file that contains the list of stores across the US.

Let’s start wrangling now so we can supply comprehensive, clean data to SageMaker and predict our product forecast based on weather!

As you can see in this preview, the weather file is structured as JSON, which is not very user friendly, especially since it has multiple levels of nested structures.

As soon as we open the file to wrangle it, Trifacta automatically recognizes the JSON format and starts to unnest it into a columnar format (notice the recipe on the right with a few transformation steps; we will come back to it later). Trifacta also detects the column types and runs a series of statistics to inform the user about value distributions and an overall data quality score. In this case everything looks green; the data seems to be in good shape.


We now select the 3 bars that represent temperature, CloudCover, and PrecipType in the weather_summary JSON column, and Trifacta suggests unnesting the JSON structure to create 3 columns from it. Trifacta always shows a preview (in yellow) of the data so one can validate that the transformation step is actually what we expect. We can continue in the same way to completely flatten the JSON file so these features are available for our model.
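For readers who want to reproduce this flattening outside of Trifacta, a minimal pandas sketch could look like the following; the file name and the exact field names inside weather_summary are assumptions based on this example:

import json
import pandas as pd

# Assumed daily weather export: a JSON array of records,
# each holding a nested weather_summary object
with open('weather_2018-06-01.json') as f:
    records = json.load(f)

# Flatten nested objects into columns such as weather_summary.temperature
weather = pd.json_normalize(records)
weather = weather.rename(columns=lambda c: c.replace('weather_summary.', ''))
print(weather[['temperature', 'cloudCover', 'precipType']].head())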


Steps 2 and 3: Assessing Data Accuracy and Addressing Data Inconsistency

While the JSON file structure was decent, we may have some surprises when dealing with Excel files or supplier and distributor data.

Opening the ACME_STORE.xlsx file, we can see that some extra structuring, standardization, and cleansing might be needed.

For example, we can see that there are some invalid values for the Staff column as well as missing values. This column is important because it gives us an idea of the size of the store, so it could be a predictor for our SageMaker model. Better to have it clean.


By clicking the red square in the quality bar, we can preview the invalid values and see a suggestion to delete them. In this case, we accept the suggestion (although ideally we would reach out to the owners of the Excel file and ask them to correct it). Either way, invalid values will not do our model any good, so let’s remove this noise by accepting the suggestion and also deleting the missing values.
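For reference, an equivalent cleanup outside of Trifacta could be sketched in pandas; the Staff column and file name come from the example above, while the coercion rule is an assumption:

import pandas as pd

stores = pd.read_excel('ACME_STORE.xlsx')

# Coerce non-numeric Staff entries to NaN, then drop rows where Staff is missing or invalid
stores['Staff'] = pd.to_numeric(stores['Staff'], errors='coerce')
stores = stores.dropna(subset=['Staff'])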

Next, we need to standardize the store hours into discrete values. By brushing over the ‘,‘ characters in the Store_hours column, we get a suggestion to split the column into 3, which is much simpler than writing a regular expression.


Trifacta supports regular expressions, but it also provides a pattern language to make this easier, as in this other suggestion to split on a dayofweek-abbrev pattern.
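A rough pandas equivalent of that split, assuming the Store_hours values are comma-delimited and using hypothetical output column names, might be:

import pandas as pd

stores = pd.read_excel('ACME_STORE.xlsx')

# Split the comma-delimited Store_hours value into 3 discrete columns
parts = stores['Store_hours'].str.split(',', n=2, expand=True)
stores[['Store_hours_1', 'Store_hours_2', 'Store_hours_3']] = parts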

Proceeding in the same way, we can create a reusable and readable recipe of steps that will structure and clean the data to feed SageMaker. Here is an example of a recipe for this Excel file that could be edited and enriched to reach the required level of quality.

Each individual dataset has to be assessed, structured, and standardized to provide as many features as needed. Trifacta provides hundreds of functions to manipulate the data, such as pivoting, unpivoting, aggregating, calculating, windowing, and date pattern standardization, which can be used visually or scripted in the natural Wrangler language.


Step 4: Building A Consolidated View

Training an ML model requires creating a large flattened file with all the attributes and features in single columns; hence the necessity of combining these datasets. Trifacta provides a join wizard, lookups, and union functions to combine the data into a consistent, consolidated view.

Just as it suggests transformations, Trifacta will suggest the best columns on which to match when combining datasets. In this case, the suggestion for joining the weather file and the Excel store file is to map them on the latitude and longitude columns.
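As a point of comparison, the same join could be sketched in pandas; the file names and the exact latitude/longitude column names are assumptions based on this example:

import pandas as pd

weather = pd.read_csv('weather_flattened.csv')   # assumed output of the Step 1 flattening
stores = pd.read_excel('ACME_STORE.xlsx')

# Join weather observations to stores on latitude/longitude, keeping every store row
combined = stores.merge(weather, on=['latitude', 'longitude'], how='left')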

Proceeding the same way, we can combine the Redshift data to define a master file with all the attributes. Trifacta provides a logical flow view that showcases how the data is combined.


Before supplying the data to SageMaker, we want to validate that there are no data inconsistencies remaining in the final dataset. Trifacta runs a full profile on the data and highlights possible errors. In this case, we have invalid values in the QTY column. We can revisit the recipe and figure out how we can eliminate this issue.

Generating the Data for SageMaker
Now that the combined dataset is standardized and consistent, a few final steps remain to structure the data for SageMaker training. The overall goal is to create 3 different files (train.csv, validation.csv, and test.csv) with a binary target and numerically encoded categorical values, ready to feed into SageMaker’s built-in XGBoost algorithm.

Columns filled with categorical data must be converted into a series of new columns, one per possible value, containing 1 when that value is present and 0 when it is not (i.e., one-hot encoding).


For example, the column class_temp, which contains up to 5 possible values, must be converted from this structure:

Into this one:


Trifacta provides the one-hot function that does just this translation:
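Outside of Trifacta, the same translation could be sketched with pandas get_dummies; class_temp is the column from this example, and the file name is an assumption:

import pandas as pd

df = pd.read_csv('prepared_data.csv')   # assumed name for the consolidated dataset

# Expand class_temp into one 0/1 indicator column per distinct value
df = pd.get_dummies(df, columns=['class_temp'], prefix='class_temp')
print(df.filter(like='class_temp_').head())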

The next step is to randomly shuffle the data set and create 3 distinct files to train, validate, and test the model.

This is easily achieved with these steps:

First, create a new column with a random value


Then, by sorting the dataset on this new column, we shuffle the whole dataset.

Next, we need to create the 3 files for SageMaker, which will contain, respectively, 70% of the rows to train the model, 20% to validate it, and 10% to test it.

To do so, we will assign a row number to each row and then assign each row to a group (train, validation, or test) based on the respective percentages.

To create the row number, we will use the ROWNUMBER() function like this:


Which will produce this result:

Now we can assign each row to a particular bin: train, validation, or test.

We can use the case function like this:


This will produce a new column assigning each row to a particular bin.

We can now easily split the files by filtering on the DataSet_Model_Usage value.
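For readers following along in Python rather than in a Trifacta recipe, here is a minimal sketch of the same shuffle, row-number, binning, and split logic; the DataSet_Model_Usage column comes from the example above, while the prepared_data.csv file name is an assumption:

import numpy as np
import pandas as pd

df = pd.read_csv('prepared_data.csv')   # assumed name for the consolidated dataset

# Shuffle by sorting on a random column, mirroring the recipe above
df['rand'] = np.random.rand(len(df))
df = df.sort_values('rand').drop(columns='rand').reset_index(drop=True)

# Assign each row to train (70%), validation (20%), or test (10%) based on its row number
n = len(df)
row_number = np.arange(1, n + 1)
df['DataSet_Model_Usage'] = np.where(row_number <= 0.7 * n, 'train',
                                     np.where(row_number <= 0.9 * n, 'validation', 'test'))

# Split the files by filtering on the DataSet_Model_Usage value;
# SageMaker's built-in XGBoost expects headerless CSV
for name in ['train', 'validation', 'test']:
    subset = df[df['DataSet_Model_Usage'] == name].drop(columns='DataSet_Model_Usage')
    subset.to_csv(name + '.csv', index=False, header=False)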


Clicking the Run Job button takes the recipe and pushes it to Amazon EMR to process the data at scale, producing CSV, which is the format used by SageMaker’s XGBoost algorithm. For smaller datasets and experimentation, we could also skip EMR and use the Trifacta client to do the processing.

Data preparation at scale is part of the iterative process needed to get data fit for machine learning modeling. Doing it with Trifacta reduces the time it takes (often by more than 70%), so data scientists can focus on the modeling part of the project, which is where the business outcome becomes visible. However, if the data preparation is inconsistent, so are the business predictions.

Now we have the data ready for SageMaker’s built-in algorithm, XGBoost. We use the algorithm to predict whether QTY is 10 or more (1) or less than 10 (0) for the given features. We have three .csv files, namely train.csv, validation.csv, and test.csv. We can make use of our SageMaker XGBoost script, which is modified from two Amazon SageMaker sample scripts: Targeting Direct Marketing with Amazon SageMaker XGBoost and Predicting Product Success When Review Data Is Available.
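As a hedged illustration of that target definition, the binary label could be derived as follows; the QTY column comes from the dataset above, while the file name and column ordering are assumptions:

import pandas as pd

df = pd.read_csv('prepared_data.csv')   # assumed name for the consolidated dataset

# 1 when QTY is 10 or more, 0 otherwise
df['label'] = (df['QTY'] >= 10).astype(int)

# SageMaker's built-in XGBoost expects the label in the first column
df = df[['label'] + [c for c in df.columns if c not in ('label', 'QTY')]]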

The three CSV files and the sample notebook (Trifacta-Blog-Retail-Transaction-Xgboost.ipynb) are placed in the same directory in the SageMaker Jupyter notebook environment.


Once the data and notebook are ready, there are only three steps to start training.

(1) Specify the S3 bucket and prefix that you want to use for training and model artifacts. Copy the two CSV files, train.csv and validation.csv, to S3 as input for SageMaker’s managed training.

bucket = '<your s3 bucket>'   # replace with your own bucket name
prefix = 'sagemaker/DEMO-XGBOOST-RETAIL-TRANSACTIONS-QTY-csv'

import sagemaker
import boto3

role = sagemaker.get_execution_role()

boto3.Session().resource('s3').Bucket(bucket).Object(prefix + '/train/train.csv').upload_file('train.csv')
boto3.Session().resource('s3').Bucket(bucket).Object(prefix + '/validation/validation.csv').upload_file('validation.csv')

(2) Specify the algorithm, the locations of train.csv and validation.csv, the training instances, and the hyperparameters.

from sagemaker.amazon.amazon_estimator import get_image_uri

container = get_image_uri(boto3.Session().region_name, 'xgboost')

s3_input_train = sagemaker.s3_input(s3_data='s3://{}/{}/train'.format(bucket, prefix), content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data='s3://{}/{}/validation/'.format(bucket, prefix), content_type='csv')

sess = sagemaker.Session()
xgb = sagemaker.estimator.Estimator(container, role,
                                    train_instance_count=1,
                                    train_instance_type='ml.m4.xlarge',
                                    output_path='s3://{}/{}/output'.format(bucket, prefix),
                                    sagemaker_session=sess)

xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', num_round=100)


(3) Start training by calling fit().

xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})

First we copy train.csv and validation.csv to S3 (i.e. s3://<- your s3 bucket name ->/sagemaker/DEMO-XGBOOST-RETAIL-TRANSACTIONS-QTY-csv/train and s3://<- your s3 bucket name ->/sagemaker/DEMO-XGBOOST-RETAIL-TRANSACTIONS-QTY-csv/validation, respectively). Second, we specify the algorithm. In this case, we use XGBoost. We also need to declare the S3 location where the train.csv and validation.csv are stored.

For this sample we choose a single ml.m4.xlarge as the training instance, although we could use multiple training instances too. The S3 location for the output model artifact is s3://<- your s3 bucket name ->/sagemaker/DEMO-XGBOOST-RETAIL-TRANSACTIONS-QTY-csv/output. We set hyperparameters for binary:logistic. Finally we start training by calling .fit().

After the training is over, the model is deployed to an inference endpoint on the cloud by executing the following command.

xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
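To sanity-check the endpoint, a minimal scoring sketch against the held-out test.csv might look like the following, assuming the v1 SageMaker Python SDK used above and a headerless test.csv with the label in the first column:

import numpy as np
import pandas as pd
from sagemaker.predictor import csv_serializer

# Send CSV-serialized rows to the endpoint
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer

test_data = pd.read_csv('test.csv', header=None)
features = test_data.iloc[:, 1:].values   # drop the label in the first column

# The endpoint returns comma-separated probabilities as bytes
raw = xgb_predictor.predict(features).decode('utf-8')
scores = np.array([float(p) for p in raw.split(',') if p])
print((scores > 0.5).astype(int)[:10])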

Conclusion
In this guide we demonstrated an end-to-end process to prepare data using Trifacta, and then train and host a model using Amazon SageMaker. Although we used a popular SageMaker built-in algorithm, XGBoost, the process would be very similar for other training methods on SageMaker, whether other built-in algorithms, deep learning frameworks such as TensorFlow, MXNet, or PyTorch, or your own custom algorithms.

A free trial of Trifacta Wrangler is available here or on the AWS Marketplace here. To learn more about the SageMaker example used in this guide, please take a look at the Amazon SageMaker examples.


Bertrand Cariou is Senior Director of Partner Marketing at Trifacta, with over 25 years of experience in the computing and data management industry, having held various roles in consulting, sales consulting, product and solution marketing, and partner marketing. In his current role, Bertrand defines and executes the go-to-market strategy with cloud providers, independent software vendors, system integrators, and consulting firms. To inform and operationalize his partner marketing strategy, Bertrand wrangles data himself using Trifacta.

ABOUT TRIFACTA

Trifacta is the industry pioneer and established leader of the global market for data preparation technology. The company draws on decades of academic research to make the process of preparing data faster and more intuitive. More than 50,000 Data Wranglers in 10,000 companies worldwide use Trifacta solutions across cloud, hybrid and on-premises environments. Leading organizations such as Deutsche Börse, Google, Kaiser Permanente, New York Life and PepsiCo count on Trifacta to accelerate time-to-insight and discover opportunities that drive success. Learn more at trifacta.com.

©2018 Trifacta

For Additional Questions, Contact Trifacta
www.trifacta.com | [email protected]

Experience the Power of Data Wrangling Today
https://www.trifacta.com/start-wrangling
