[RakutenTechConf2013] [C-2_1] Viki - Technology evolution from idea to acquisition
Viki: Technology evolution from idea to acquisition
Global TV, Powered By Fans
• TV, Movies & Music videos
• Subtitles created by avid fans for free in 160+ languages
• 1bn+ video views / year
• 400mm+ words translated by fans
• 23mm+ monthly active users
• 12mm+ mobile installs
• 17,000+ hours of global prime-time content from 50+ countries
History
• Founded in Palo Alto, CA; out of beta as a company in Dec 2010
• Offices in SF, Singapore, Seoul, Tokyo
• Investors: Greylock, Andreessen Horowitz, Neoteny (Joi Ito), BBC, SK Planet …
Awards
• World Economic Forum Tech Pioneer ‘14
• WSJ Asia Most Innovative Companies ‘12
• TechCrunch Best International Start-Up ‘10
The Beginning
• Founded in 2008 by Razmig Hovaghimian, Changseong Ho and Jiwon Moon
• Initially named ViiKii
• Self funded
• First engineering team in Korea
Viki 1.0 technology - 2008
• Flash developers who built the subbing tools also built the website
• PHP + MySQL
• Business logic in stored procedures
• Very heavy feature set, e.g. nearly every object supported threaded conversations, and many were loaded on each page
• No caching
Inflection Point - 2010
• Rapid user adoption. Big hits like Playful Kiss
• Website was slow and buggy. Every new feature made it worse
• Peak hours access had to be limited to users who had made a donation to Viki
Viki 2.0 - 2010 to 2011
• Viki moves base to Singapore and raises Series A of $4.3 million in Dec 2010
• Hires Pivotal Labs to solve scale problems and to train the new full-time engineers being hired
• Website rewritten in Ruby on Rails and Postgres
• Caching using Varnish and Memcache
• Use Heroku as a PaaS
• Built iOS and Android apps
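The caching bullet above is the heart of the 2.0 fix. A minimal sketch of the cache-aside pattern that a Memcache layer enables (the in-process dict here is only a stand-in for Memcache, and all names are illustrative, not Viki's actual code):

```python
# Cache-aside sketch: check the cache first, fall back to the database
# on a miss, then populate the cache for subsequent reads.
cache = {}          # stand-in for Memcache
db_reads = []       # records which keys actually reached the "database"

def fetch_from_db(key):
    """Pretend database read; expensive in a real system."""
    db_reads.append(key)
    return f"row-for-{key}"

def get(key):
    if key in cache:            # cache hit: skip the database entirely
        return cache[key]
    value = fetch_from_db(key)  # cache miss: read through to the database
    cache[key] = value          # populate so the next read is a hit
    return value

first = get("video:42")   # miss: hits the database
second = get("video:42")  # hit: served from cache, no database read
```

Varnish plays the same role one layer up, caching whole HTTP responses in front of the Rails app.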
Inflection point - 2012
• Explosive adoption of mobile apps
• Many partner apps and integrations
• Millions more users all over the world.
• Many requests took > 150ms
• Not enough separation of concerns
• Single point of failure
Viki 3.0 - 2012 to now
• Public API (http://dev.viki.com/v4/api/)
• Multiple points of presence
• High performance (most API calls < 25ms)
• Read Optimized
• Eventually consistent architecture
Eventually consistent
• Single central data store (source of truth)
• Writes to a specific POP are propagated to other POPs through a central queue.
• Typical writes propagate within seconds
[Diagram: writes from the Internal API enter a central Queue; a Worker applies them to the central DB and propagates them to each of the POPs, which serve the Public API]
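The write path above can be sketched as follows. This is a toy illustration of the eventual-consistency model, not Viki's actual code; `CentralStore`, `Pop`, and the in-process deque standing in for the central queue are all assumed names:

```python
from collections import deque

class Pop:
    """A point of presence holding a read-only replica of the data."""
    def __init__(self, name):
        self.name = name
        self.replica = {}

class CentralStore:
    """Single source of truth; writes fan out to POPs via a queue."""
    def __init__(self, pops):
        self.data = {}
        self.queue = deque()
        self.pops = pops

    def write(self, key, value):
        self.data[key] = value           # source of truth updated first
        self.queue.append((key, value))  # propagation is asynchronous

    def run_worker(self):
        # In production this worker runs continuously; per the slide,
        # writes typically reach every POP within seconds.
        while self.queue:
            key, value = self.queue.popleft()
            for pop in self.pops:
                pop.replica[key] = value

pops = [Pop("us-east"), Pop("us-west"), Pop("eu"), Pop("sg")]
store = CentralStore(pops)
store.write("video:1", {"title": "Playful Kiss"})
stale = pops[0].replica.get("video:1")  # None: POPs lag briefly behind
store.run_worker()
fresh = pops[0].replica["video:1"]["title"]
```

The gap between `stale` and `fresh` is exactly the "eventually" in eventually consistent: reads from a POP can briefly trail the source of truth.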
Points of Presence
• Multiple POPs increase fault tolerance
• Latency based DNS routing (Route53) so clients access closest healthy POP
• Currently have 4 POPs - two in the US, one in Europe and one in Singapore
[Diagram: within a POP, Nginx sits in front of Hyperion, the API proxy/caching layer, backed by a Redis cache]
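The routing behaviour described above can be illustrated with a small sketch: Route53's latency-based routing effectively answers each DNS query with the healthy POP that has the lowest measured latency to the client. The POP names and latency figures below are made up for illustration:

```python
# Each entry models Route53's view of one POP: a latency measurement
# from the client's region plus the result of its health check.
POPS = {
    "us-east": {"latency_ms": 180, "healthy": True},
    "us-west": {"latency_ms": 150, "healthy": True},
    "eu":      {"latency_ms": 90,  "healthy": False},  # failed health check
    "sg":      {"latency_ms": 120, "healthy": True},
}

def resolve(pops):
    """Return the lowest-latency POP among those passing health checks."""
    healthy = {name: p for name, p in pops.items() if p["healthy"]}
    return min(healthy, key=lambda name: healthy[name]["latency_ms"])

chosen = resolve(POPS)  # "eu" is closest but unhealthy, so it is skipped
```

This is also how the fault tolerance in the first bullet falls out: a POP that fails its health check simply stops receiving traffic.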
High performance
• Network Time - API requests served by nearest POP
• Generation Time - Data model tuned for performance with extensive use of precomputed in-memory data structures. Most calls returned in < 25ms
• Render Time - Rich API reduces client-side operations
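The "precomputed in-memory data structures" idea in the Generation Time bullet can be sketched as building an index once at startup so the request path is a single dictionary lookup rather than a database query. The record shape and index below are illustrative assumptions, not Viki's actual schema:

```python
# Toy catalog; in reality this would be loaded from the central store.
VIDEOS = [
    {"id": 1, "country": "KR", "title": "Playful Kiss"},
    {"id": 2, "country": "JP", "title": "Hana Yori Dango"},
    {"id": 3, "country": "KR", "title": "Secret Garden"},
]

# Precompute once at startup: group videos by country so serving a
# request is an O(1) dict lookup instead of a scan or a database query.
BY_COUNTRY = {}
for v in VIDEOS:
    BY_COUNTRY.setdefault(v["country"], []).append(v)

def videos_for_country(code):
    """Request-time handler: pure lookup into the precomputed index."""
    return BY_COUNTRY.get(code, [])

titles = [v["title"] for v in videos_for_country("KR")]
```

The trade-off is the one the architecture already accepts: the index is only as fresh as the last propagation from the central store, which is fine in an eventually consistent, read-optimized system.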
Takeaways
• It is normal for your architecture/code to run its course and be replaced
• Need buy in from management to make revolutionary rather than evolutionary changes
• No Technology Religion
• Be humble and keep learning
Where are we headed
• More social
• Personalized
• Huge growth in content (library and on-air)
• 100 Million Users
• Support for more devices and partners
• Exploiting synergies and leveraging other departments: ID, Superpoints, Search, etc.
Viki 4.0 technical challenges
• Data partitioning/sharding
• Search
• Recommendations
• Content Management
• Analytics and Insights
• Monitoring and Troubleshooting
• We need your help :-)
• Rohit Dewan - [email protected]
Questions?