ValuePick: Towards a Value-Oriented Dual-Goal Recommender System
Leman Akoglu, Christos Faloutsos
OEDM, in conjunction with ICDM 2010, Sydney, Australia
Recommender Systems
Traditional recommender systems try to achieve high user satisfaction
Dual-goal Recommender Systems
Dual-goal recommender systems try to achieve (1) high user satisfaction as well as (2) high-“value” vendor gain
Trade-off: user satisfaction vs. vendor profit
Dual-goal Recommender Systems
[Figure: given a query vertex, the remaining vertices (v253, v162, v261, v327, ...) are ranked by proximity; each ranked vertex is annotated with its network “value”.]
Dual-goal Recommender Systems
[Figure: the proximity-ranked list again, with each vertex’s network “value” shown alongside.]
Trade-off: user satisfaction vs. network connectivity
Dual-goal Recommender Systems
Main concerns (user vs. vendor):
We cannot make the highest-“value” recommendations
Recommendations should still reflect users’ likes relatively well
ValuePick: Main idea
Carefully perturb (change the order of) the proximity-ranked list of recommendations
Controlled by a tolerance ζ for each user
ValuePick Optimization Framework
Maximize the total expected gain (assuming proximity ~ acceptance probability):

$$\max_{x} \;\sum_i \mathrm{value}(i)\,\mathrm{prox}(i)\,x_i$$

subject to the chosen set staying within a tolerance $\zeta \in [0,1]$ of $\bar{p}$, the average proximity score of the original top-$k$:

$$\frac{1}{k}\sum_i \mathrm{prox}(i)\,x_i \;\ge\; (1-\zeta)\,\bar{p}, \qquad \sum_i x_i = k, \qquad x_i \in \{0,1\}$$
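As a quick illustration with assumed numbers (not from the talk): with $k = 5$, original top-$k$ average proximity $\bar{p} = 0.20$, and tolerance $\zeta = 0.01$, the constraint requires

$$\frac{1}{5}\sum_i \mathrm{prox}(i)\,x_i \;\ge\; (1 - 0.01)\times 0.20 = 0.198,$$

so the chosen set may sacrifice at most 1% of the original average proximity in exchange for higher gain.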
ValuePick ~ 0-1 Knapsack
The optimization resembles the 0-1 knapsack problem: each item i has a value and a weight, and the total weight is bounded by a maximum W allowed
We use CPLEX to solve our integer programming optimization problem
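The talk reports solving the program with CPLEX; as a rough, solver-agnostic sketch of the same 0-1 program (PuLP/CBC here, and the function and variable names are ours, not the authors’), the selection step could look like:

```python
# Illustrative sketch of the ValuePick 0-1 program (the authors solve it
# with CPLEX; PuLP/CBC is used here, and the function name is ours).
import pulp

def valuepick(prox, value, k, zeta):
    """Pick k candidates maximizing total expected gain, keeping the
    average proximity within a factor (1 - zeta) of the original top-k."""
    n = len(prox)
    # average proximity of the original (unperturbed) top-k list
    p_bar = sum(sorted(prox, reverse=True)[:k]) / k

    prob = pulp.LpProblem("ValuePick", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n)]

    # objective: total expected gain = sum_i value_i * prox_i * x_i
    prob += pulp.lpSum(value[i] * prox[i] * x[i] for i in range(n))
    # exactly k recommendations
    prob += pulp.lpSum(x) == k
    # tolerance constraint: avg proximity of chosen set >= (1 - zeta) * p_bar
    prob += pulp.lpSum(prox[i] * x[i] for i in range(n)) >= (1 - zeta) * p_bar * k

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [i for i in range(n) if x[i].value() > 0.5]
```

With ζ = 0 this reduces to plain proximity ranking; with ζ = 1 it maximizes gain with no regard for the original ordering, matching the No Gain and Max Gain baselines in the experiments.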
Pros and Cons of ValuePick
Cons: in marketing, it is hard to predict the effect of an intervention in the marketing scheme, i.e., it is not clear how users will respond to ‘adjustments’
Pros: the tolerance ζ can flexibly (and even dynamically) control the ‘level-of-adjustment’; also, users rate the same item differently at different times, i.e., users have natural variability in their decisions
Experimental Setup I
Two real networks:
Netscience – collaboration network
DBLP – co-authorship network
Four recommendation schemes:
1) No Gain Optimization (ζ = 0)
2) ValuePick (ζ = 0.01, ζ = 0.02)
3) Max Gain Optimization (ζ = 1)
4) Random
“value” is centrality
Experimental Setup II
Simulation steps: given a recommendation scheme, at each step:
For each node u, make a set of k recommendations to node u using the scheme; node u links to each recommended node v with probability proximity(u, v)
Re-compute proximity and centrality scores
We use k = 5 recommendations and 30 simulation steps; see the sketch below
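A minimal sketch of this simulation loop, assuming networkx with personalized PageRank as the proximity measure and a pluggable recommendation function (all names here are ours, not the authors’ code):

```python
# Minimal sketch of the simulation protocol; networkx, PPR-as-proximity,
# and every function name here are assumptions, not the authors' code.
import random
import networkx as nx

def ppr(G, u, alpha=0.85):
    # personalized PageRank from u serves as proximity(u, .)
    return nx.pagerank(G, alpha=alpha, personalization={u: 1.0})

def simulate(G, recommend, k=5, steps=30):
    for _ in range(steps):
        for u in list(G.nodes()):
            prox = ppr(G, u)
            for v in recommend(G, u, prox, k):     # k recommendations for u
                if v != u and random.random() < prox[v]:
                    G.add_edge(u, v)               # u accepts v w.p. ~ proximity
        # proximity (ppr above) and a centrality score for the "value",
        # e.g. nx.closeness_centrality(G), are re-computed each step
```

Here recommend(G, u, prox, k) would implement one of the four schemes above, e.g., plain top-k by proximity for the ζ = 0 baseline.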
Comparison of schemes
ValuePick provides a balance between user satisfaction (high E) and vendor gain (small diameter).
Recommend by heuristic
Simple perturbation heuristics do not balance user satisfaction and vendor gain properly.
Computational complexity
Making ValuePick recommendations to a given node involves:
1 - finding PPR scores: O(#edges)
2 - solving the ValuePick optimization with CPLEX: ~1/10 sec. to solve among the top 1K nodes
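For intuition on the O(#edges) claim, here is a textbook power iteration for personalized PageRank in which each sweep touches every edge once (our own illustrative code, not the authors’ implementation):

```python
# Textbook power iteration for personalized PageRank; each sweep costs
# O(#edges). Illustrative only -- not the authors' implementation.
def ppr_scores(adj, u, alpha=0.85, iters=50):
    # adj: dict mapping each node to its list of neighbors
    scores = {v: 0.0 for v in adj}
    scores[u] = 1.0
    for _ in range(iters):
        nxt = {v: 0.0 for v in adj}
        nxt[u] = 1.0 - alpha                     # restart mass returns to u
        for v, nbrs in adj.items():
            if nbrs:
                share = alpha * scores[v] / len(nbrs)
                for w in nbrs:                   # O(deg(v)) -> O(#edges) total
                    nxt[w] += share
            else:
                nxt[u] += alpha * scores[v]      # dangling mass restarts at u
        scores = nxt
    return scores
```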
Conclusions
Problem formulation: incorporate the “value” of recommendations into the system
Design of ValuePick:
parsimonious: single parameter ζ
flexible: adjust ζ for each user dynamically
general: use any “value” metric
Performance study: experiments show a proper trade of user acceptance in exchange for higher gain; CPLEX yields fast solutions