Decision Trees in Forecasting Applications
ISQS 7342 Dr. Zhangxi Lin
By: Tej Pulapa
DT in Forecasting
• Targeted Marketing – know beforehand what an online customer wants to see or hear about.
• Credit Approval – forecast which applicants are likely to pay you back.
• Medical Diagnosis – the model can tell you whether you are genetically at risk of cancer.
• Searching for High Info Gains – given something to predict, it is easy to ask the computer which attribute has the highest information gain for it.
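The last bullet can be made concrete. Below is a minimal, self-contained sketch of asking the computer which attribute has the highest information gain for a target; the toy marketing dataset and all attribute names are invented for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(records, attr, target):
    """Information gain of splitting on `attr` with respect to `target`."""
    labels = [r[target] for r in records]
    base = entropy(labels)
    n = len(records)
    # Weighted entropy of the subsets produced by each value of attr
    remainder = 0.0
    for value in set(r[attr] for r in records):
        subset = [r[target] for r in records if r[attr] == value]
        remainder += len(subset) / n * entropy(subset)
    return base - remainder

# Hypothetical toy data: does the customer respond to a campaign?
data = [
    {"channel": "email", "age": "young", "responded": "yes"},
    {"channel": "email", "age": "old",   "responded": "yes"},
    {"channel": "post",  "age": "young", "responded": "no"},
    {"channel": "post",  "age": "old",   "responded": "no"},
]

# Rank candidate attributes by information gain for the target "responded"
gains = {a: info_gain(data, a, "responded") for a in ("channel", "age")}
best = max(gains, key=gains.get)
```

In this toy data, `channel` perfectly separates responders from non-responders, so it yields the maximum gain of 1 bit, while `age` carries no information at all.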
DT in Classification Models
Training Data → Classification Algorithm → Classifier (Model) → Unseen Data
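The flow above (training data feeds a classification algorithm, which produces a classifier that is applied to unseen data) can be sketched with a deliberately tiny "algorithm": a one-attribute decision stump. All data and attribute names here are hypothetical.

```python
from collections import Counter, defaultdict

def train_stump(records, attr, target):
    """A minimal 'classification algorithm': learn the majority target
    class for each value of a single attribute (a decision stump)."""
    by_value = defaultdict(list)
    for r in records:
        by_value[r[attr]].append(r[target])
    # Fallback class for attribute values never seen in training
    default = Counter(r[target] for r in records).most_common(1)[0][0]
    model = {v: Counter(ls).most_common(1)[0][0] for v, ls in by_value.items()}
    return lambda r: model.get(r.get(attr), default)

# Training data produces the classifier (model)...
training = [
    {"income": "high", "approved": "yes"},
    {"income": "high", "approved": "yes"},
    {"income": "low",  "approved": "no"},
]
classifier = train_stump(training, "income", "approved")

# ...which is then applied to an unseen record.
unseen = {"income": "low"}
prediction = classifier(unseen)
```

A real decision tree algorithm would split recursively on the highest-gain attribute at each node; the stump keeps the example short while preserving the train-then-apply shape.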
For the model to predict accurately, the training data set must be representative of the unseen data.
So is it just about the available data?
Training Set Error
For each record, follow the decision tree to see what it would predict.
For how many records does the decision tree's prediction disagree with the true value in the database?
This quantity is called the training set error.
The smaller the better.
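A sketch of computing the training set error exactly as defined above, with the "decision tree" stood in by a plain function and the records invented for illustration:

```python
def training_set_error(records, classifier, target):
    """Fraction of records where the classifier's prediction
    disagrees with the true value stored in the database."""
    wrong = sum(1 for r in records if classifier(r) != r[target])
    return wrong / len(records)

# Hypothetical tree encoded as a plain function for illustration
tree = lambda r: "yes" if r["income"] == "high" else "no"

records = [
    {"income": "high", "approved": "yes"},
    {"income": "low",  "approved": "no"},
    {"income": "low",  "approved": "yes"},  # the tree gets this one wrong
]
err = training_set_error(records, tree, "approved")  # 1 wrong out of 3
```

Note that a low training set error alone says nothing about unseen data, which is why the representativeness point above matters.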
DT can be used to analyze cross-sectional and time series data
Reworking of the data may be needed in certain situations – for example, in direct marketing there is a need to derive customer measures for recency, frequency, and monetary value (RFM) from transactional data on purchase interactions.
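One way such reworking might look in code, assuming a simple transaction log of (customer_id, purchase_date, amount) tuples; the log contents and field layout are hypothetical.

```python
from datetime import date
from collections import defaultdict

def derive_rfm(transactions, as_of):
    """Rework raw purchase transactions into per-customer
    recency/frequency/monetary (RFM) measures."""
    by_customer = defaultdict(list)
    for cust, d, amount in transactions:
        by_customer[cust].append((d, amount))
    rfm = {}
    for cust, rows in by_customer.items():
        last = max(d for d, _ in rows)
        rfm[cust] = {
            "recency_days": (as_of - last).days,   # days since last purchase
            "frequency": len(rows),                # number of purchases
            "monetary": sum(a for _, a in rows),   # total spend
        }
    return rfm

# Hypothetical transaction log
log = [
    ("c1", date(2024, 1, 10), 40.0),
    ("c1", date(2024, 3, 1), 25.0),
    ("c2", date(2024, 2, 15), 100.0),
]
rfm = derive_rfm(log, as_of=date(2024, 3, 31))
```

The derived per-customer RFM table is cross-sectional, which is what makes the originally transactional (time series) data usable as decision tree input.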
To obtain valuable marketing rules, the decision tree induction technique is used to analyze purchase-transaction histories, customer profiles, and product information.
The extracted marketing rules are stored in a marketing-rule base and used for real-time personalized-advertisement selection when customers visit the Internet store.
The process of recommendation rule extraction consists of four steps:
(1) target variable generation, (2) data partitioning, (3) decision tree construction, and (4) recommendation rule selection.
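The four steps can be sketched end to end in miniature. This is an illustration of the shape of the process, not the actual method's implementation: the tree is simplified to a one-level stump, and every customer, product, and attribute below is invented.

```python
from collections import defaultdict

def generate_target(customers, purchases, product):
    """Step 1: target variable = did the customer buy `product`?"""
    buyers = {c for c, p in purchases if p == product}
    return [(cust, attrs, cust in buyers) for cust, attrs in customers.items()]

def partition(labelled):
    """Step 2: split labelled customers into training and validation halves."""
    return labelled[::2], labelled[1::2]

def build_stump_rules(train, attr):
    """Step 3 (tree simplified to a one-level stump): for each value of
    `attr`, record the fraction of training customers who bought."""
    stats = defaultdict(lambda: [0, 0])          # value -> [buyers, total]
    for _, attrs, bought in train:
        s = stats[attrs[attr]]
        s[0] += int(bought)
        s[1] += 1
    return {v: b / t for v, (b, t) in stats.items()}

def select_rules(rules, valid, attr, min_conf=0.6):
    """Step 4: keep only rules whose confidence holds up on validation data."""
    kept = {}
    for value, train_conf in rules.items():
        hits = [bought for _, attrs, bought in valid if attrs[attr] == value]
        if hits and train_conf >= min_conf and sum(hits) / len(hits) >= min_conf:
            kept[value] = train_conf
    return kept

# Hypothetical customer profiles and purchase interactions
customers = {
    "c1": {"segment": "student"},
    "c2": {"segment": "student"},
    "c3": {"segment": "retired"},
    "c4": {"segment": "retired"},
}
purchases = [("c1", "laptop"), ("c2", "laptop")]

labelled = generate_target(customers, purchases, "laptop")
train, valid = partition(labelled)
rules = build_stump_rules(train, "segment")
selected = select_rules(rules, valid, "segment")
```

The surviving rules (here, "recommend laptops to students") would go into the marketing-rule base for real-time advertisement selection.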