The Feynman Algorithm:
1. Write down the problem.
2. Think real hard.
3. Write down the solution.
The Feynman algorithm was facetiously suggested by Murray Gell-Mann, a colleague of Feynman, in a New York Times interview. (source)
I have spent a fair amount of time over the last few years playing with SaaS metrics, particularly cohort analyses and measures like trailing twelve month (TTM) retention, customer acquisition cost (CAC) and lifetime value (LTV). Those three metrics are a fundamental part of any product offering and are really the pulse of a SaaS business. But what represents a decent rate of retention, 90%? (hint: NO)
As anyone who has had the misfortune to end up in a conversation with me on the subject will have noticed, I may have developed a few opinions on the best way to measure these metrics (more on that in upcoming posts), and also on quite how much difference a couple of percent makes. It is this latter point that these charts aim to provide some insight into. The first chart shows the difference in the expected lifetime value of a customer paying $50 a month for levels of retention between 91% and 100%. It's pretty clear that the biggest effect occurs past the 95% mark, with the gradient of the LTV curve increasing sharply as it heads towards 100% – that part of the curve is where you want to be!
These next two charts just rehash that data. The first shows the LTV normalised to the 91% value, to make the relative amounts easier to see, and the second shows the expected subscriber months.
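The numbers behind those charts can be sketched with a simple geometric model: with constant monthly retention r, the expected number of subscriber months is 1 / (1 − r), so LTV is just the monthly price divided by the churn rate. This is a minimal sketch of that calculation (the function names are my own, and the model assumes flat retention with no expansion revenue):

```python
# Geometric model of LTV vs monthly retention, as plotted above.
# Assumes constant retention r each month, so expected subscriber
# months = 1 / (1 - r) and LTV = monthly_price / (1 - r).

MONTHLY_PRICE = 50.0  # the $50/month customer from the chart


def expected_months(retention):
    """Expected subscriber months under constant monthly retention."""
    return 1.0 / (1.0 - retention)


def lifetime_value(retention, price=MONTHLY_PRICE):
    """Expected lifetime value at a given monthly retention rate."""
    return price * expected_months(retention)


if __name__ == "__main__":
    for pct in range(91, 100):
        r = pct / 100.0
        print(f"{pct}% retention: {expected_months(r):5.1f} months, "
              f"LTV ${lifetime_value(r):,.0f}")
```

Running it makes the shape of the curve obvious: 91% retention gives roughly 11 subscriber months, 95% gives 20, and 99% gives 100 – which is exactly why the gradient steepens so sharply past the 95% mark.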
I’m quite a visual person, I like to plot things – I think it helps give context to data, not just in terms of nicely labelled axes but also in understanding the effect of changing any of the variables being plotted. One of the charts I like to keep in my head (when coding mostly, not that much use on a night out…) is the relationship between the response time of a server and the number of pages it can serve.
Now I know a lot of people seem to consider any type of performance consideration a premature optimization; I happen to consider them fundamental design decisions (and a feature) – experience has also taught me it's easier to do things right when you have time on your hands than when the site is down! Anyway, staying on the right side of that 200ms response is not always easy, but it's a good thing to aim for, give or take 100ms. It's pretty clear that to the left of that point things get better very fast, and to the right – well, too far in that direction and your site falls over and your customers leave…
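One way to keep that response-time/throughput relationship in your head is Little's Law: concurrency = throughput × response time, so for a fixed number of workers, throughput is capped at workers / response time. Here's a minimal sketch under that assumption (the worker count and timings are illustrative, not from the post):

```python
# Back-of-the-envelope view of the response-time/throughput trade-off,
# via Little's Law: concurrency = throughput * response_time.
# For a server with a fixed pool of workers, the throughput ceiling is
# workers / mean_response_time. WORKERS is an assumed figure.

WORKERS = 32  # requests the server can process concurrently


def max_pages_per_second(response_time_s, workers=WORKERS):
    """Upper bound on throughput for a given mean response time."""
    return workers / response_time_s


if __name__ == "__main__":
    for ms in (100, 200, 300, 500, 1000):
        rate = max_pages_per_second(ms / 1000.0)
        print(f"{ms:4d} ms -> {rate:7.1f} pages/s")
```

The asymmetry the chart shows drops straight out of the division: halving response time from 200ms to 100ms doubles the ceiling, while drifting from 200ms to 500ms cuts it by more than half.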
A while ago I was asked over on Sprouter's Answers about a simple founders agreement. I meant to post it up here a while ago, but, well, I didn't! So anyway, here is a simple agreement you can use when starting something, to outline what happens if someone wants out before incorporation – it happens… This is based on the 3-year vesting period that is more popular in Europe, but obviously you can change that to whatever suits.
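To make the vesting mechanics concrete, here's a minimal sketch of how a 3-year monthly vesting schedule plays out if a founder leaves early. The one-year cliff is my own assumption for illustration (the agreement itself may differ), and all names are hypothetical:

```python
# Illustrative sketch of 3-year (36-month) monthly vesting, as in the
# agreement described above. The 12-month cliff is an assumption added
# for illustration; adjust both numbers to match the actual agreement.

from datetime import date

VESTING_MONTHS = 36  # the 3-year period mentioned in the post


def months_between(start, end):
    """Whole calendar months elapsed between two dates."""
    return (end.year - start.year) * 12 + (end.month - start.month)


def vested_fraction(start, leave_date, cliff_months=12):
    """Fraction of a founder's equity vested if they leave on leave_date."""
    elapsed = months_between(start, leave_date)
    if elapsed < cliff_months:
        return 0.0  # leaving before the cliff forfeits everything
    return min(elapsed, VESTING_MONTHS) / VESTING_MONTHS


if __name__ == "__main__":
    start = date(2020, 1, 1)
    for leave in (date(2020, 7, 1), date(2021, 7, 1), date(2023, 6, 1)):
        print(f"leave {leave}: {vested_fraction(start, leave):.0%} vested")
```

So a founder who walks after 18 months keeps half their stake, and after 36 months the schedule is fully vested – which is exactly the kind of clean outcome the agreement is there to guarantee.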
I had an interesting conversation with Hayes Davis of Appozite about influence calculations and their importance. I have to admit I hadn't really spent much time thinking about the big picture of the real-time web and how influence fits into it; I found Klout and the like interesting, but mostly in terms of adding context.
However, during the course of our conversation last night it occurred to me that an influence metric could prove to be a very fundamental part of being able to extract any useful information from the deluge of data the web is now producing.