The Evolution of a Scrappy Startup to a Successful Web Service

  • 1.

    The Evolution of a Scrappy Startup to a Successful Web Service
    Poornima Vijayashanker – Software Engineer
    November 7, 2008

  • 2.

    What is Mint.com?
    My bio: came up with the name and was the second employee
    Getting Mint.com off the ground
    Used open source tools: Eclipse, MySQL, Apache Tomcat
    No prior experience at a startup or in web services
    Mint.com today

  • 3.

    Creating a prototype
    Prototype: a rudimentary working model of a product or information system, built for demonstration purposes or as part of the development process.

  • 4.

    What doesn’t belong in a prototype?
    Don’t waste time spec’ing out a complete feature set.
    “Everyone by now presumably knows about the danger of premature optimization. I think we should be just as worried about premature design - designing too early what a program should do.” – Paul Graham
    Mint.com’s mission statement: “Do more with your money.”
    Aggregate checking, savings, and credit card accounts
    Show balances
    Auto-categorize transactions

  • 5.

    What does belong in a prototype?
    Started small by focusing on features that differentiated our product.
    Focus on solving critical user problems; engineering problems arise from their solutions. e.g. financial data is a sensitive matter, so we needed a good security model: handling concurrency amongst our 100+ users and making sure we were encrypting stored data (see the sketch below).
    A bare-bones implementation of algorithms is sufficient. Don’t get tied down to any one particular solution; refine in subsequent releases based on product specifications.
    Simple unit test framework, no system tests; the focus was to get the product out there and have real users test it with real data!
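The deck doesn’t show how stored data was encrypted, so here is a minimal, hypothetical Java sketch of encrypting a sensitive field with AES before persisting it. The FieldEncryptor class, the sample string, and the key handling are illustrative assumptions, not Mint.com’s actual implementation; a production system would also need real key management and an authenticated cipher mode.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical helper: encrypts a sensitive field before it is written to the db.
// Key storage and rotation are deliberately out of scope for this sketch.
public class FieldEncryptor {

    private final SecretKey key;

    public FieldEncryptor(SecretKey key) {
        this.key = key;
    }

    public String encrypt(String plaintext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(ciphertext);
    }

    public String decrypt(String encoded) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.DECRYPT_MODE, key);
        byte[] plaintext = cipher.doFinal(Base64.getDecoder().decode(encoded));
        return new String(plaintext, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        FieldEncryptor enc = new FieldEncryptor(key);
        String stored = enc.encrypt("account 1234-5678, balance 1000.00");
        System.out.println(enc.decrypt(stored));
    }
}
```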

  • 6.

    What did our application server look like?
    Single server.
    Web layer and analysis engine, wired together using RMI: UI -> Business Logic -> Data processing engines (see the sketch below).
    Single database that consisted of fewer than 50 tables.
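A minimal sketch of what wiring the web layer to the analysis engine over RMI might look like. The AnalysisEngine interface, its method, and the registry name are hypothetical; the deck only says the two layers were wired together using RMI.

```java
import java.rmi.Naming;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.server.UnicastRemoteObject;
import java.util.Arrays;
import java.util.List;

// Hypothetical remote interface exposed by the analysis engine.
interface AnalysisEngine extends Remote {
    List<String> categorizeTransactions(long userId) throws RemoteException;
}

// Server side: the analysis engine exports itself and binds into the RMI registry.
class AnalysisEngineImpl extends UnicastRemoteObject implements AnalysisEngine {
    protected AnalysisEngineImpl() throws RemoteException { super(); }

    public List<String> categorizeTransactions(long userId) throws RemoteException {
        // Placeholder for the real categorization logic.
        return Arrays.asList("Groceries", "Rent", "Travel");
    }

    public static void main(String[] args) throws Exception {
        LocateRegistry.createRegistry(1099);
        Naming.rebind("AnalysisEngine", new AnalysisEngineImpl());
    }
}

// Client side: the web layer looks the engine up by name and calls it remotely.
public class WebLayerClient {
    public static void main(String[] args) throws Exception {
        AnalysisEngine engine = (AnalysisEngine) Naming.lookup("rmi://localhost/AnalysisEngine");
        System.out.println(engine.categorizeTransactions(42L));
    }
}
```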

  • 7.

    What did our code base look like?
    A few key choices during prototyping molded our software’s architecture and affected the longevity of our code base, e.g. the choice of messaging system: JMS, RMI, or none.
    Refactored code into logical modules to avoid spaghetti code, a task that would have been much harder to do later on, especially with a growing team and code base.

  • 8.

    Web Application to Web Service
    Prototype -> Product can be likened to Web Application -> Web Service.
    A web application is a point tool used to complete simple tasks.
    A web service is more than features.
    If the goal is to create a product with a growing user base, you have to broaden your thinking from features to logistics.

  • 9.

    Prime mover of software’s architecture: user growth rate.
    To solve the business problem of becoming profitable, you have to grow the user base.
    More users -> growing pains, which need to be addressed in a timely manner in order to continue to grow and keep existing users happy.

  • 10.

    Computer Architecture 101
    Latency: a reasonable response time
    Throughput: satisfying many requests without compromising latency
    Quality: accurate data and reliable service
    Address points of failure

  • 11.

    How do you achieve each?
    Latency: affected by the amount of data that needs to be retrieved in order to satisfy incoming requests. e.g. we computed data and persisted it to the db; as users increased, reading from and writing to the db took forever. We switched to loading user data into a cache upon login and then computing (see the sketch below).
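A minimal sketch of the load-into-a-cache-on-login idea, assuming a simple in-memory map keyed by user id. UserDataCache, UserData, and the database-read placeholder are hypothetical names for illustration; the deck does not describe the actual cache.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical session cache: load a user's data once at login,
// then compute against the in-memory copy instead of hitting the database.
public class UserDataCache {

    // Hypothetical value type holding a user's accounts and transactions.
    public static class UserData {
        final String snapshot;
        UserData(String snapshot) { this.snapshot = snapshot; }
    }

    private final Map<Long, UserData> cache = new ConcurrentHashMap<>();

    // Called on login: one bulk read from the db, then everything is in memory.
    public UserData loadOnLogin(long userId) {
        return cache.computeIfAbsent(userId, id -> new UserData(readFromDatabase(id)));
    }

    // Called on logout or after inactivity so the cache doesn't grow unbounded.
    public void evict(long userId) {
        cache.remove(userId);
    }

    private String readFromDatabase(long userId) {
        // Placeholder for the real (expensive) database read.
        return "accounts+transactions for user " + userId;
    }
}
```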

  • 12.

    Throughput: handle multiple requests/processes.
    Separate code into tiers (UI, web, business/service logic, DAO, db), with a parallel data processing/retrieval tier (see the sketch below).
    Separate the data engine from the web engine:
    One server for handling data processing
    One server to process user requests and serve user pages
    Tune each server based on its needs (web vs. analysis)
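A small sketch of what the tier separation might look like in code, using hypothetical TransactionDao and TransactionService names: the service tier depends only on a DAO interface, so the storage tier can change (caching, sharding, a different database) without touching business logic.

```java
import java.util.Arrays;
import java.util.List;

// Data-access tier: the only layer that knows about the database.
interface TransactionDao {
    List<String> findByUser(long userId);
}

class JdbcTransactionDao implements TransactionDao {
    public List<String> findByUser(long userId) {
        // Placeholder for a real JDBC query against the transactions table.
        return Arrays.asList("coffee 3.50", "rent 1200.00");
    }
}

// Business/service tier: works against the interface, not the storage details.
class TransactionService {
    private final TransactionDao dao;
    TransactionService(TransactionDao dao) { this.dao = dao; }

    public int countTransactions(long userId) {
        return dao.findByUser(userId).size();
    }
}

// Stands in for the web tier calling down into the service tier.
public class TierDemo {
    public static void main(String[] args) {
        TransactionService service = new TransactionService(new JdbcTransactionDao());
        System.out.println(service.countTransactions(42L));
    }
}
```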

  • 13.

    Quality: a low bug count, which is directly proportional to the number of incoming customer service requests.
    Testing (unit, load, and system); see the sketch below.
    Beware of the vocal minority: measure the number of users impacted by a bug.
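A minimal JUnit 4 example of the kind of simple unit test the deck describes, assuming JUnit is on the classpath. The toy Categorizer class is a stand-in for illustration, not Mint.com’s actual categorization code.

```java
import org.junit.Assert;
import org.junit.Test;

// Minimal JUnit 4 test in the spirit of the deck's "simple unit test framework".
public class CategorizerTest {

    // Toy categorizer: maps a merchant string to a category.
    static class Categorizer {
        String categorize(String merchant) {
            return merchant.toUpperCase().contains("SAFEWAY") ? "Groceries" : "Uncategorized";
        }
    }

    @Test
    public void groceriesMerchantIsCategorizedAsGroceries() {
        Assert.assertEquals("Groceries", new Categorizer().categorize("SAFEWAY #1234"));
    }

    @Test
    public void unknownMerchantFallsBackToUncategorized() {
        Assert.assertEquals("Uncategorized", new Categorizer().categorize("XYZ 999"));
    }
}
```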

  • 14.

    Optimization: improving the performance of code at runtime in order to satisfy latency and throughput requirements.
    “…premature optimization is the root of all evil.” – Donald Knuth

  • 15.

    How to measure
    Created internal tools to measure the performance of our code base, which helps figure out where to optimize (see the sketch below).
    The product will continue to evolve in approximately 6-month cycles.
    Don’t waste time optimizing everything, or optimizing before you see demand for a feature.
    Remember it’s a startup; resources are scarce and time is critical.
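The internal tools themselves are not shown in the deck; below is a hypothetical sketch of a lightweight measurement utility in that spirit: time named operations, keep running totals, and report averages so hot spots are visible before anyone optimizes. PerfTimer and its method names are assumptions for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

// Hypothetical lightweight performance measurement tool: wrap an operation,
// accumulate its timings under a label, and print averages on demand.
public class PerfTimer {

    private static final Map<String, AtomicLong> totalNanos = new ConcurrentHashMap<>();
    private static final Map<String, AtomicLong> callCounts = new ConcurrentHashMap<>();

    public static <T> T time(String label, Supplier<T> work) {
        long start = System.nanoTime();
        try {
            return work.get();
        } finally {
            long elapsed = System.nanoTime() - start;
            totalNanos.computeIfAbsent(label, k -> new AtomicLong()).addAndGet(elapsed);
            callCounts.computeIfAbsent(label, k -> new AtomicLong()).incrementAndGet();
        }
    }

    public static void report() {
        callCounts.forEach((label, count) -> {
            long avgMicros = totalNanos.get(label).get() / count.get() / 1_000;
            System.out.println(label + ": " + count + " calls, avg " + avgMicros + " us");
        });
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) {
            time("categorize", () -> "Groceries");   // stand-in for real work
        }
        report();
    }
}
```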

  • 16.

    What to measure
    How quickly a data set is going to grow, when designing tables, foreign key associations, and data retrieval, and the frequency with which data is accessed.
    Sharding: break up a large database into smaller pieces that contain redundant information, or have a parent db map data to separate dbs.
    Our implementation was based on a user id lookup (see the sketch below).
    Easy to shard because we were dealing with financial data restricted to a single person, so each user and their data was limited to a single shard, unlike services such as Twitter or Facebook that have lots of interactions amongst users.
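A hypothetical sketch of routing by user id to one of several shard databases. The JDBC URLs and the modulo scheme are illustrative assumptions; the deck only says the implementation was based on a user id lookup and that each user’s data lives on a single shard.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical shard router: pick the database for a user from their id alone.
public class ShardRouter {

    private final List<String> shardJdbcUrls;

    public ShardRouter(List<String> shardJdbcUrls) {
        this.shardJdbcUrls = shardJdbcUrls;
    }

    // All of a user's data lives on exactly one shard, so a single lookup
    // on the user id is enough to pick the right database.
    public String shardFor(long userId) {
        int index = (int) (userId % shardJdbcUrls.size());
        return shardJdbcUrls.get(index);
    }

    public static void main(String[] args) {
        ShardRouter router = new ShardRouter(Arrays.asList(
                "jdbc:mysql://db0/mint", "jdbc:mysql://db1/mint"));
        System.out.println(router.shardFor(42L));   // -> jdbc:mysql://db0/mint
        System.out.println(router.shardFor(43L));   // -> jdbc:mysql://db1/mint
    }
}
```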

  • 17.

    Review decisions
    We didn’t cache user data from the start because synchronizing data across nodes was difficult and we had no mechanism for centralized locking; once this was put in place, we switched to loading data on demand.
    We didn’t shard databases from the start because of the overhead and more pressing issues.
    In the future we might show only the most recent data instead of all data.

  • 18.

    Summary
    Prototype with limited features
    Addressing CS 101 basics: latency, throughput, and quality
    Making architectural decisions based on the time frame
    Measure and optimize that which is critical
    Check it out: www.mint.com
    My blog: www.femgineer.com