RACONTEUR

Tuesday, May 15, 2012

Is Check-in A Dead Business?

 

I have been using Foursquare for about two years now.  I was intrigued by the idea of broadcasting my location and the value checking in might offer me and my network of friends.

What I have found is that the very network of folks I wanted to engage with is leaving Foursquare.  I don’t mean that they are forgetting to check in; they are deliberately removing the app from their mobiles.

 

Why?

 

I have heard folks (myself included) say the following:

  • It’s too much work to keep your status current
  • It’s stressful - especially after I leave a place and have forgotten to check in 
  • I don’t get any meaningful value from the exercise 
  • I am not “discovering” anything new 
  • The offers do not reliably work across venues (e.g. the staff in many cases knows nothing about the “free parking” offer)

 

Foursquare is not the only service suffering from this abandonment, however. 

 

Yelp is an echo chamber.  

 

Viggle is a dead zone.

 

Perhaps this speaks more to the limits of oversimplified gamification.  It seems the desire to earn a digital prize for performing a task has waned - at least within my network. 

 

Earning points is no different.

 

Recently, Foursquare announced that it has passed the 20 million user mark.  I have to wonder if that is an all-time registration count or monthly active users (MAU).

Given the behavior that I am seeing in my dwindling network, these kinds of services need to introduce real value and do it quickly.  Creating a new feature that allows people to search for nearby businesses on their mobile phones (or worse yet, on the web) is not exactly groundbreaking stuff.  Citysearch pioneered this kind of service back in the ’90s. 

I am curious to hear from users of [your_flavor_of_check-in service] to see if they are experiencing this same phenomenon.

 

Editor’s Note: I did verify in Real Life that my network of friends is still intact.  It is.  Whew.

 

Wednesday, April 25, 2012

What To Start Up?

 

As an entrepreneur deciding how to spend a few intense years of my life building a new company, I’m faced with a vexing decision:

Follow the trends of companies that are getting easily funded and have seemingly quick exits

OR    

Enter a new category and try to solve some hard problems

It seems that trend spotting is the path the broader investment / acquisition community is rewarding these days - just look at Viddy’s announcement today of a Series A round that values the company at some $300MM.  This is a company that describes itself by saying: “Yeah, we’re like these other guys that just got bought for a billion BUT we do video.” 

Here is an eye opening statement from Michael Carney:

 

The company’s $1.5 million seed round closed around April 2011 and, according to my sources, was priced at $16 million. That’s a 20-times multiple in just twelve months for a company that has not monetized to any significant degree.

 

Amazing stuff.

 

I recently attended a Fast Pitch Competition here in Los Angeles given by the Tech Coast Angels.  Out of 170+ company ideas, eleven were chosen to present their concepts to a panel of savvy investors and entrepreneurs.  The panel then voted, on a scale of 1-10, on which company they thought was most fundable.

The winner? 

 

A dating site. 

 

Humph.

 

Let me say that I’m very happy the investment community is making an effort in SoCal.  We have plenty of talent here and it should have local access to mentors and capital. 

I was just underwhelmed that, out of 170+ applicants, the best, most innovative idea was a small pivot in an already tapped market.

“Don’t hate the player, hate the game” is perhaps the mantra of the day. 

 

 

Thursday, January 26, 2012

Machine Learning And The Startup

Orthogonal Contexts

I’ve been involved with two things these past few years that, in my mind, have very similar execution success characteristics: creating a startup company (SC) and building a recommender system based on supervised machine learning (ML).

Looking at how to implement each from scratch reveals interesting parallels.

The Cold Start

ML

The challenge when first implementing a recommender system (recsys) in a new environment is that the ML engine is not very smart about which items to recommend.  This makes sense: without rich historical data about how an item has performed (e.g. purchases, shares, ratings), there can be no concrete statistic on an item’s success probability.  Historical data can be analyzed and used to help train the engine, but that data’s usefulness quickly fades as new information is gathered from the live system.

This lack of information means that, out of the gate, you make educated guesses and build models based on the best information available.  It is well understood that these models will change, features will be dropped or added, and a next-generation model should be in the works as soon as possible. 
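
To make the cold start concrete, here is a minimal sketch (in Python) of one common workaround, with purely illustrative numbers: blend each item’s observed success rate with a global prior, so a brand-new item gets a sensible default score instead of a meaningless 0% or 100%.

```python
# Minimal cold-start sketch (illustrative, not from any particular
# system): blend an item's observed success rate with a global prior.

def smoothed_score(successes, trials, prior_rate=0.05, prior_weight=20):
    """With zero trials the score equals the prior; as trials
    accumulate, the observed rate dominates."""
    return (successes + prior_rate * prior_weight) / (trials + prior_weight)

print(smoothed_score(0, 0))        # brand-new item scores the prior: 0.05
print(smoothed_score(1, 1))        # one lucky sale scores ~0.10, not 1.0
print(smoothed_score(300, 1000))   # well-observed item: ~0.30
```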

SC

The same issue typically faces a new startup.  Not much is really known about the best plan for success.  Which products will win users over or land clients with eager, open checkbooks?  Forecasting revenues for a brand-new venture is nearly an exercise in mental masturbation.  New products envisioned by the ambitious team have no proof points.

Much like the ML implementation, it is important to understand that the initial products and business models are just a jumping-off point.  Not betting it all on an unproven theory will save time, energy and money.

Learning And Adapting

ML

The ML engine learns to provide more accurate recommendations over time by getting feedback from the ecosystem.  The engine suggests a variety of items, based on statistical analysis, that it believes will match a user’s interest at a particular time and in a specific context.  The results of this endeavor are collected as log data and returned to the ML system.  This data set represents the successes and failures of the previous set of predictions.  As time passes, the ML engine consumes the latest available data sets and (if properly designed) its future predictions become more accurate. 
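
Here is a minimal sketch of that loop, assuming a made-up (item_id, clicked) log format: the engine’s recommendations generate log events, and folding those events back into per-item statistics sharpens the next round of rankings.

```python
# One turn of the feedback loop: predict -> log -> learn -> predict again.
# The log format and smoothing constants are illustrative assumptions.
from collections import defaultdict

stats = defaultdict(lambda: {"shown": 0, "clicked": 0})

def consume_log(events):
    """Fold a batch of (item_id, clicked) log events into running stats."""
    for item_id, clicked in events:
        stats[item_id]["shown"] += 1
        stats[item_id]["clicked"] += int(clicked)

def rank(candidates, top_n=3):
    """Rank candidate items by a lightly smoothed click-through rate."""
    def score(i):
        s = stats[i]
        return (s["clicked"] + 1.0) / (s["shown"] + 20.0)
    return sorted(candidates, key=score, reverse=True)[:top_n]

consume_log([("A", True), ("A", False), ("B", False), ("C", True)])
print(rank(["A", "B", "C"]))  # ['C', 'A', 'B']
```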

Likewise, the model through which these data sets pass may need to be updated to reflect new learning.  Testing a new model against the previous model to reveal its efficacy (A/B testing) is an absolute prerequisite.  Only after rigorous testing will you know whether the new direction provides better quality or whether you should stick with the status quo.
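
For the simplest case - comparing one conversion rate against another - the decision can be as mechanical as a two-proportion z-test.  A minimal sketch, with made-up conversion counts and the conventional 95% threshold:

```python
# A/B decision sketch: is model B's conversion rate significantly
# better than model A's? Counts below are invented for illustration.
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)           # pooled rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error
    return (p_b - p_a) / se

z = two_proportion_z(conv_a=200, n_a=5000, conv_b=260, n_b=5000)
if z > 1.96:  # ~95% confidence, one-sided
    print(f"z={z:.2f}: ship the new model")
else:
    print(f"z={z:.2f}: stick with the status quo")
```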

SC

Building a company on a hunch or in a complete vacuum is a recipe for failure.  As one of my favorite startup bloggers, Steve Blank, said in no uncertain terms, “Get out of the building.”  It is imperative to canvass the market you are trying to address to see if the widget you want to produce has value. 

Getting feedback about the direction you are taking the venture, or about the features you want to add to the product, will be worth more than any blue-sky session at a whiteboard.  Being agile - testing and iterating quickly over product ideas and features as data comes in from the field - will pay dividends and should help steer you toward a successful path.  

Measuring 

ML

It is important to the success of the ML project to establish KPIs before coding begins.  These metrics should be meaningful, not academic or esoteric.  Typically, the users’ reaction to the recommendations is the best metric; e.g. is your revenue per user or per page view going up?

Based on clear, measurable success criteria, the model can be refined and tuned to perform at its best.
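
As a minimal sketch of the kind of KPI suggested above - revenue per user, compared period over period - with made-up event records:

```python
# KPI sketch: revenue per user for one period. Event records here are
# invented for illustration; real logs would come from your own system.

def revenue_per_user(events):
    """Total revenue divided by distinct users for one period."""
    users = {e["user"] for e in events}
    revenue = sum(e["revenue"] for e in events)
    return revenue / len(users) if users else 0.0

before = [{"user": "u1", "revenue": 12.0}, {"user": "u2", "revenue": 0.0}]
after = [{"user": "u1", "revenue": 15.0}, {"user": "u2", "revenue": 4.0},
         {"user": "u3", "revenue": 2.0}]

print(f"revenue/user: {revenue_per_user(before):.2f} -> "
      f"{revenue_per_user(after):.2f}")  # 6.00 -> 7.00
```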

SC

Peter Drucker said it best: “What gets measured gets managed.”  Without measurable success metrics for your fledgling business, how will you know if you are succeeding?  You should create KPIs that have actionable meaning for your business, and they should cover more than just your product or service.  Internal KPIs for marketing, IT, business development, etc. can be captured and presented in a company dashboard.  Clear goals and metrics align the people getting the work done and provide the impetus for acceleration or course correction.

Rinse And Repeat

For both ML and SC, relevance comes from maintaining a contemporary view of the ecosystem each serves.  Hence: 

Learn->Adapt->Measure->Learn->Adapt->Measure…