Category Archives: platform

Mobilising the web of feeds

I wrote this piece for the Guardian’s Media Network on the role that RSS could play now that the social platforms are becoming more difficult to work with. GeoRSS, in particular, has a lot of potential given the mobile device explosion. I’m not necessarily suggesting that RSS is the answer, but it is something that a lot of people already understand, and it could help unify the discussion around sharing geotagged information feeds.


Powered by Guardian.co.uk. This article titled “Mobilising the web of feeds” was written by Matt McAlister for theguardian.com on Monday 10th September 2012, 16.43 UTC.

While the news that Twitter will no longer support RSS was not really surprising, it was a bit annoying. It served as yet another reminder that the Twitter-as-open-message-utility idea that many early adopters of the service loved was in fact going away.

There are already several projects intending to disrupt Twitter, mostly focused on the idea of a distributed, federated messaging standard and/or platform. But we already have such a service: an open standard adopted by millions of sources; a federated network of all kinds of interesting, useful and entertaining data feeds published in real-time. It’s called RSS.

There was a time when nearly every website was RSS-enabled, and a cacophony of Silicon Valley startups fought to own pieces of this new landscape, hoping to find dotcom gold. But RSS didn’t lead to gold, and most people stopped doing anything with it.

Nobody found an effective advertising or service model (except, ironically, Dick Costolo, CEO of Twitter, who sold Feedburner to Google). The end-user market for RSS reading never took off. Media organisations didn’t fully buy into it, and the standard took a backseat to more robust technologies.

Twitter is still very open in many ways and encourages technology partners to use the Twitter API. That model gives the company much more control over who is able to use tweets outside of the Twitter owned apps, and it’s a more obvious commercial strategy that many have been asking Twitter to work on for a long time now.

But I think we’ve all made a mistake in the media world by turning our backs on RSS. It’s understandable why it happened. But hopefully those who rejected RSS in the past will see the signals demonstrating that an open feed network is a sensible thing to embrace today.

Let’s zoom out for context first. Looking at the macro trends in the internet’s evolution, we can see one or two clear winners as more information and more people appeared on the network in waves over the last 15 years.

Following the initial explosion of new domains, Yahoo! solved the need to surface only the websites that mattered through browsing. When the Yahoo! directory became saturated, Google surfaced the pages that mattered within those websites through search. When Google became saturated, Facebook and Twitter surfaced the things that mattered on the pages within those websites through connections between people.

Now that the social filter is saturated, what will be used next to surface things that matter out of all the noise? The answer is location. It is well understood technically. The software-hardware-service stack is done. The user experience is great. We’re already there, right?

No – most media organisations still haven’t caught up yet. There’s a ton of information not yet optimised for this new view of the world and much more yet to be created. This is just the beginning.

Do we want a single platform to be created that catalyses the location filter of the internet and mediates who sees what and when? Or do we want to secure forever a neutral environment where all can participate openly and equally?

If the first option happens, as historically has been the case, then I hope that position is taken by a force that exists because of, and remains reliant on, the second option.

What can a media company do to help make that happen? The answer is to mobilise your feeds. As a publisher, being part of the wider network used to mean having a website on a domain that Yahoo! could categorise. Then it meant having webpages on that website optimised for search terms people were using to find things via Google. And more recently it has meant providing sharing hooks that can spread things from those pages on that site from person to person.

Being part of the wider network today suddenly means all of those things above, and, additionally, being location-enabled for location-aware services.

It doesn’t just mean offering a location-specific version of your brand, though that is certainly an important thing to do as well. The major dotcoms use this strategy increasingly across their portfolios, and I’m surprised more publishers don’t do this.

More importantly, though, and this is where it matters in the long run, it means offering location-enabled feeds that everyone can use in order to be relevant in all mobile clients, applications and utilities.
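Location-enabling a feed can be as small a step as adding a GeoRSS-Simple element to each item. Here is a minimal sketch using Python’s standard library; the item title, URL and coordinates are invented for illustration:

```python
import xml.etree.ElementTree as ET

GEORSS_NS = "http://www.georss.org/georss"
ET.register_namespace("georss", GEORSS_NS)

def geo_item(title, link, lat, lon):
    """Build one RSS <item> carrying a GeoRSS-Simple point."""
    item = ET.Element("item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "link").text = link
    # GeoRSS-Simple is latitude then longitude, space-separated
    ET.SubElement(item, "{%s}point" % GEORSS_NS).text = "%f %f" % (lat, lon)
    return item

item = geo_item("Market report", "http://example.com/market", 51.5074, -0.1278)
print(ET.tostring(item, encoding="unicode"))
```

That single `georss:point` element is enough for a location-aware client to place the story on a map.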

Entrepreneurs are all over this space already. Pure-play location-based apps can be interesting, but many feel very shallow without useful information. The iTunes store is full of travel apps, reference apps, news, sports, utilities and so on that are location-aware, but they are missing some of the depth that you can get on blogs and larger publishers’ sites. They need your feeds.

Some folks have been experimenting in some very interesting ways that demonstrate what is possible with location-enabled feeds. Several mobile services, such as Flipboard, Pulse and now Prismatic, have really nice and very popular mobile reading apps that pull RSS feeds, and they are well placed to turn those into location-based news services.

Perhaps a more instructive example of the potential is hypARlocal, the augmented reality app from Talk About Local. It pulls location-aware content from GeoRSS feeds published by hyperlocal bloggers around the UK and from the citizen journalism platform n0tice.com.
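Pulling location-aware content out of a GeoRSS feed takes very little code. A sketch using Python’s standard library; the sample feed is invented for illustration:

```python
import xml.etree.ElementTree as ET

NS = {"georss": "http://www.georss.org/georss"}

SAMPLE = """<rss version="2.0" xmlns:georss="http://www.georss.org/georss">
  <channel><item>
    <title>Village fete this Saturday</title>
    <georss:point>52.2053 0.1218</georss:point>
  </item></channel></rss>"""

def located_items(feed_xml):
    """Yield (title, lat, lon) for every item carrying a georss:point."""
    root = ET.fromstring(feed_xml)
    for item in root.iter("item"):
        point = item.find("georss:point", NS)
        if point is not None:
            lat, lon = map(float, point.text.split())
            yield item.findtext("title"), lat, lon

print(list(located_items(SAMPLE)))  # → [('Village fete this Saturday', 52.2053, 0.1218)]
```

Any app with a map view can do something useful with tuples like that, which is the whole appeal of the format.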

But it’s not just the entrepreneurs that want your location-enabled feeds. Google Now for Android notifies you of local weather and sports scores along with bus times and other local data, and Google Glass will be dependent on quality location-specific data as well.

Of course, the innovations come with new revenue models that could get big for media organisations. They include direct, advertising and syndication models, to name a few, but have a look at some of the startups in the rather dense ‘location’ category on Crunchbase to find commercial innovations too.

Again, this isn’t a new space. Not only has the location stack been well formed, but there are also a number of bloggers who have been evangelising location feeds for years. They already use WordPress, which automatically pumps out RSS. And many of them also geotag their posts today using one of the many useful WordPress mapping plugins.

It would take very little to reinvigorate a movement around open location-based feeds. I wouldn’t be surprised to see Google prioritising geotagged posts in search results, for example. That would probably make Google’s search on mobile devices much more compelling, anyhow.

Many publishers and app developers, large and small, have complained that the social platforms are breaking their promises and closing down access, becoming enemies of the open internet and being difficult to work with. The federated messaging network is being killed off, they say. Maybe it’s just now being born.

Media organisations need to look again at RSS, open APIs, geotagging, open licensing, and better ways of collaborating. You may have abandoned it in the past, but RSS would have you back in a heartbeat. And if RSS is insufficient then any location-aware API standard could be the meeting place where we rebuild the open internet together.

It won’t solve all your problems, but it could certainly solve a few, including new revenue streams. And it’s conceivable that critical mass around open location-based feeds would make the internet a stronger force for us all, protected from nascent platforms whose future selves may not share the vision that got them off the ground in the first place.

To get more articles like this sent direct to your inbox, sign up for free membership to the Guardian Media Network. This content is brought to you by Guardian Professional.

guardian.co.uk © Guardian News & Media Limited 2010

Published via the Guardian News Feed plugin for WordPress.

Calling your web site a ‘property’ deprives it of something bigger

The BBC aired another history of London documentary the other night, a sort of people’s perspective on how the character of the city has changed over time, obviously inspired by Danny Boyle’s Opening Ceremony at the Olympics.

Some of the sequences were interesting to me particularly as a foreigner – the gentrification of Islington, the anarchist squatters in Camden, the urbanization of the Docklands, etc.  – a running theme of haves vs have-nots.

It’s one of a collection of things inspiring me recently: a book called ‘The Return of the Public’ by Dan Hind, a sort of extension of the Dewey v Lippmann debates; what’s going on with n0tice, such as Sarah Hartley’s adaptation of it called Protest Near You and the dispatch-o-rama hack; and, of course, the Olympics.

I’m becoming reinvigorated and more bullish on where collective action can take us.

At a more macro level these things remind me of the need to challenge the many human constructs and institutions that are reflections of the natural desire to claim things and own them.

Why is it so difficult to embrace a more ‘share and share alike’ attitude?  This is as true for children and their toys as it is for governments and their policies.

The bigger concern for me, of course, is the future of the Internet and how media and journalism thrive and evolve there.

Despite attempts by its founders to shape the Internet so it can’t be owned and controlled, there are many who have tried to change that both intentionally and unwittingly, occasionally with considerable success.

How does this happen?

We’re all complicit.  We buy a domain. We then own it and build a web site on it. That “property” then becomes a thing we use to make money.  We fight to get people there and sell them things when they arrive.  It’s the Internet-as-retailer or Internet-as-distributor view of the world.

That’s how business on the Internet works…or is it?

While many have made that model work for them, it’s my belief that the property model is never going to be as important or meaningful or possibly as lucrative as the platform or service model over time. More specifically, I’m talking about generative media networks.

Here are a few different ways of visualizing this shift in perspective:

Even if it works commercially, the property model is always going to be in conflict with the Internet-as-public-utility view of the world.

Much like Britain’s privately owned public spaces issue, many worry that the Internet-as-public-utility will be ruined or, worse, taken from us over time by commercial and government interests.

Playing a zero-sum game like that turns everyone and everything into a threat. Companies can be very effective at fighting and defending their interests even if the people within those companies mean well.

I’m an optimist in this regard.  There may be a pendulum that swings between “own” and “share”, and there are always going to be fights to secure public spaces.  But you can’t put the Internet genie back in the bottle.  And even if you could it would appear somewhere else in another form just as quickly…in some ways it already has.

The smart money, in my mind, is where many interests are joined up regardless of their individual goals, embracing the existence of each other in order to benefit from each other’s successes.

The answer is about cooperation, co-dependency, mutualisation, openness, etc.

We think about this a lot at the Guardian. I recently wrote about how it applies to the recent Twitter issues here. And this presentation by Chris Thorpe below from back in 2009 on how to apply it to the news business is wonderful:

Of course, Alan Rusbridger’s description of a mutualised newspaper in this video is still one of the strongest visions I’ve heard for a collaborative approach to media.

The possibility of collective action at such an incredible scale is what makes the Internet so great.  If we can focus on making collective activities more fruitful for everyone then our problems will become less about haves and have-nots and more about ensuring that everyone participates.

That won’t be an easy thing to tackle, but it would be a great problem to have.

Dispatchorama: a distributed approach to covering a distributed news event

We’ve had a sort of Hack Week at the Guardian, or “Discovery Week”. So I took the opportunity to mess around with the n0tice API to test out some ideas about distributed reporting.

This is what it became (best if opened in a mobile web browser):

http://dispatchorama.com/



It’s a little web app that looks at your location and then helps you to quickly get to the scene of whatever nearby news events are happening right now.

The content is primarily coming from n0tice at the moment, but I’ve added some tweets with location data. I’ve looked at some geoRSS feeds, but I haven’t tackled that, yet. It should also include only things from the last 24 hours. Adding more feeds and tuning the timing will help it feel more ‘live’.
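The filtering itself is simple: keep only items posted in the last 24 hours within a few kilometres of the reader. The real app is browser JavaScript, but the logic can be sketched in Python with illustrative field names:

```python
import math
from datetime import datetime, timedelta

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def nearby_recent(items, here, radius_km=5, now=None):
    """Keep items posted in the last 24 hours within radius_km of `here`."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(hours=24)
    return [i for i in items
            if i["published"] >= cutoff
            and haversine_km(here[0], here[1], i["lat"], i["lon"]) <= radius_km]
```

Tightening `radius_km` and the time window is what would make the app feel ‘live’ rather than archival.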

The concept here is another way of thinking about the binding of the digital and physical worlds. Being able to understand the signals coming out of networked media is increasingly important. By using the context that travels with bits of information to inform your physical reality, you can be quicker to respond, more insightful about what’s going on and more proactive in your participation.

I’m applying that idea to distributed news events here, things that might be happening in many places at once or a news event that is moving around.

In many ways, this little experiment is a response to the amazing effort of the Guardian’s Paul Lewis and several other brave reporters covering last year’s UK riots.

There were two surprises in doing this:

  1. Twitter’s location-based tweets are really all over the place and not helpful. You have to narrow your source list to known Twitter accounts to get anything good, but that kind of defeats the purpose.
  2. I haven’t done a ton of research yet, but there seems to be a real lack of useful GeoRSS feeds out there. What happened? Did the failure of RSS readers kill the GeoRSS movement? What a shame. That needs to change.

The app uses the n0tice API, jQuery Mobile, Google’s location APIs and a few snippets picked off Stack Overflow. It’s on GitHub here:
https://github.com/mattmcalister/dispatchorama/

Positioning real-time web platforms

Like many people, I’ve been thinking more and more about the live nature of the web recently.

The startup world has gone mad for it. And though I think Microsoft’s Chief Software Architect Ray Ozzie played down the depth of Microsoft’s commitment to it in his recent interview with Steve Gillmor, it’s apparent that it’s at the very least a top-of-mind subject for the people at the highest levels of the biggest companies in the Internet world. As it should be.

The live web started to feel more tangible in shape and clearer for me to see because of Google Wave. Two of the Guardian developers here, Lisa van Gelder and Martyn Inglis, recently shared the results of a DevLab they did on Wave.

My brain has been spinning on the idea ever since.

(A DevLab is an internal research project where an individual or team pull out of the development cycle for a week and study an idea or a technology. There’s a grant associated with the study. They then share their findings with the entire team, and they share the grant with the individual who writes the most insightful peer review of the research.)

Many before me have noted the ambition and tremendous scale of the Wave effort. But I also find it fascinating how Google is approaching the development of the platform as a service.

The tendency when designing a platform is to create the rules and restrictions that prevent worst-case scenario behavior from ruining everything for you and your key partners. You release capability gradually as you understand its impact.

You then have to manage the constant demand from customers to release more and more capability.

Google turned this upside down and enabled a wide breadth of capability with no apologies for the unknowns. Developers won’t complain about lack of functionality. Instead, the breadth will probably have the opposite effect, inviting developers to tell Google how to close down the risks so their work won’t get damaged by the lawlessness of the ecosystem.

That’s a very exciting proposition, as if new land has been found where gold might be discovered.

But on the other hand, is it also a bit lazy or even irresponsible to put the task of creating the rules of the world that your service defines on the customers of your service? And do those partners then get a false sense of security because of that, as if they could influence the evolution of the platform in their favor when really it’s all about Google?

Google takes no responsibility for the bad things that may happen in the world they’ve created, yet they have retained full authority on their own for decisions about the service.

They’ve mitigated much of their risk by releasing the code as “open source” and allowing Wave to run in your own hosted environment as you choose. It’s a good PR move, but it may not have the effect they want it to have if they aren’t also sharing the way contributions to the code are managed and sharing in the governance.

They list the principles for the project on the site:

  • Wave is an open network: anyone should be able to become a wave provider and interoperate with the public network
  • Wave is a distributed network model: traffic is routed peer-to-peer, not through a central server
  • Make rapid progress, together: a shared commitment to contribute to the evolution and timely deployment of protocol improvements
  • Community contributions are fundamental: everyone is invited to participate in the public development process
  • Decisions are made in public: all protocol specification discussions are recorded in a public archive

Those are definitions, not principles. Interestingly, there’s no commitment to opening decision-making itself, only sharing the results of decisions. Contrast that with Apache Foundation projects which have different layers of engagement and specific responsibilities for the different roles in a project. For example,

“a Project Management Committee member is a developer or a committer that was elected due to merit for the evolution of the project and demonstration of commitment. They have write access to the code repository, an apache.org mail address, the right to vote for the community-related decisions and the right to propose an active user for committership.”

That model may be too open for Google, but it would help a lot to have a team of self-interested supporters when things go wrong, particularly as there are so many security risks with Wave. If they are still the sole sponsor of the platform when the first damage appears then they will have to take responsibility for the problem. They can only use the “we don’t control the apps, only the platform” excuse for so long before it starts to look like a cop out.

Maybe they should’ve chosen a market they thought would run with it and offer it in preview exclusively for key partners in that market until Google understood how to position it. With a team of launch partners they would have seemed less autocratic and more trustworthy.

Shared ownership of the launch might also have resulted in a better first use-case app than the Wave client they invented for the platform. The Google Wave client may take a long time to catch on, if ever.

As Ray Ozzie noted,

“When you create something that people don’t know what it is, when they can’t describe it exactly, and you have to teach them, it’s hard…all of the systems, as long as I’ve been working in this area, the picture that I’ve always had in my mind is kind of three overlapping circles of technology, social dynamics, and organizational dynamics. And any two of those is relatively straightforward and understandable.”

I might even argue that perhaps Google actually made a very bad decision to offer a client at all. This was likely the result of failing to have a home for OpenSocial when it launched. Plus, it’s never a good idea to launch a platform without a principal customer app that can drive the initial requirements.

In my opinion, open conference-style IM and email or live collaborative editing within docs is just not groundbreaking enough as an end-user offering.

But the live web is only fractionally about the client app.

The live web that matters, in my mind, harnesses real-time message interplay via multiple open networks between people and machines.

There’s not one app that runs on top of it. I can imagine there could be millions of client apps.

The Wave idea, whether its most potent incarnation is Wave itself or some combination of a Twitter/RabbitMQ mesh or an open XML P2P server or some other new approach to sharing data, is going to blow open the Internet for people once again.

I remember trying very hard to convince people that RSS was going to change the Internet and how publishing works several years ago. But the killer RSS app never happened.

It’s obvious why it feels like RSS didn’t take off. RSS is fabric. Most people won’t get that, nor should they have to.

In hindsight, I think I overvalued RSS but undervalued the importance of the idea…lubricating the path for data to get wherever it is needed.

I suspect Wave will suffer from many of the same issues.

Wave is fabric, too.

When people and things create data on a network that machines can do stuff with, the world gets really interesting. It gets particularly interesting when those machines unlock connections between people.

And while the race is on to come up with the next Twitter-like service, I just hope that the frantic Silicon Valley Internet platform architects don’t forget that it’s about people in the end.

One of the things many technology innovators forget to do is talk to people. More developers should ask people about their day and watch them work. You may be able to break through by solving real problems that real people have.

That’s a much better place to start than by inventing strategic points of leverage in order to challenge your real and perceived competitors.

GPS device + data feeds + social = awesome service

One of the most interesting market directions in recent months in my mind is the way the concept of a location service is evolving. People are using location as a vector to bring information that matters directly to them. A great example of this is Dash.net.

Dash is a GPS device that leverages the activity of its user base and the wider local data pools on the Internet to create a more personalized driving experience. Ricky Montalvo and I interviewed them for the latest Developer Spotlight on YDN Theater.

Of particular note are the ways that Dash pulls in external data sources from places like Yahoo! Pipes. Any GeoRSS feed can be used to identify relevant locations near you or near where you’re going, directly from the device. They give the example of using a Surfline.com feed built with Pipes to identify surfing hot spots at any given moment. You can drive to Santa Cruz and then decide which beach to hit once you get there.

There are other neat ways to use the collaborative user data, such as the traffic feedback loop that lets you choose the fastest route to a destination in real time. And the integration with the Yahoo! Local and Upcoming APIs makes for great discoveries while you’re out and about.

You can also see an early demo of their product which they showed at Web 2.0 Summit in the fall:

The way they’ve opened up a hardware device to take advantage of both the information on the Internet and the behaviors of its customers is really innovative, not to mention very useful, too. I think Dash is going to be one to watch.

How to launch an online platform (part II)

The MySpace guys won the latest launch party battle. About 200 people met at the new MySpace house last night in San Francisco to see what the company was going to do to compete with Facebook on the developer front.

They had a fully catered event including an open bar with some good whiskey. The schwag bag included the Flip digital video camera (wow!). There were a small handful of very basic demos on the floor from the usual suspects (Slide, iLike, Flixster, etc.). And the presentation was short and sweet so we could get back to socializing.

Nicely executed.

The party wasn’t without flaw, mind you.

First, the date. Why throw a launch party on the same day as the biggest political event in our time, Super Tuesday? The headlines were on everything but the MySpace launch. The right people knew what was going on, but the impact was severely muted. I was somewhat eager to leave to find out what was happening out there in the real world.

Second, the presentation. You have to appreciate them keeping it super short. Once the drinks start flowing, it gets very hard to keep people quiet for more than a few minutes. But I think most everyone there was actually very interested in hearing something meaty, or a future vision. Bullets on a PowerPoint rarely impress.

Neither of those things really mattered, in the end. The party served its purpose.

It also occurred to me afterward that it would have been a shame if the co-founders and executive team weren’t there. But they were very much in this and made themselves accessible to chat. This isn’t a sideshow move for MySpace. It matters to them.

Contrast this with the standard formula followed by the Bebo guys, and you can see why MySpace does so well in social networking. They embody it as a company.

Now, whether or not they can raise the bar on app quality or improve on distribution for apps is yet to be seen. By giving developers a month to get their submissions in before the end-user roll-out, they are resetting the playing field. That’s great. But I’m not sure whether the MySpace user experience will encourage the sharing of apps as fluidly as Facebook’s does. I don’t use it enough to know, to be honest.

As far as the platform itself goes, I’m curious about the impact the REST API will have. I’ve wondered how the social networks would make themselves more relevant in the context of the world outside the domain.

Will the REST API be used more by services that want to expose more data within MySpace or by services that want to leverage the MySpace data in their own environments outside myspace.com? I suspect the latter will matter more over time but that won’t mean anything until people adopt the apps.

Overall, good show. This should help bring back some of the MySpace cool that was lost the last year or so.

The Internet’s secret sauce: surfacing coincidence

What is it that makes my favorite online services so compelling? I’m talking about the whole family of services that includes Dopplr, Wesabe, Twitter, Flickr, and del.icio.us among others.

I find it interesting that people don’t generally refer to any of these as “web sites”. They are “services”.

I was fortunate enough to spend some time with Dopplr’s Matt Biddulph and Matt Jones last week while in London where they described the architecture of what they’ve built in terms of connected data keys. The job of Dopplr, Mr. Jones said, was to “surface coincidence”.

I think that term slipped out accidentally, but I love it. What does it mean to “surface coincidence”?

It starts by enabling people to manufacture the circumstances by which coincidence becomes at least meaningful if not actually useful. Or, as Jon Udell put it years ago now when comparing Internet data signals to cellular biology:

“It looks like serendipity, and in a way it is, but it’s manufactured serendipity.”

All these services allow me to manage fragments of my life without requiring burdensome tasks. They all let me take my data wherever I want. They all enhance my data by connecting it to more data. They all make my data relevant in the context of a larger community.

When my life fragments are managed by an intelligent service, then that service can make observations about my data on my behalf.

Dopplr can show me when a distant friend will be near and vice versa. Twitter can show me what my friends are doing right now. Wesabe can show me what others have learned about saving money at the places where I spend my money. Among many other things Flickr can show me how to look differently at the things I see when I take photos. And del.icio.us can show me things that my friends are reading every day.
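The Dopplr case can be sketched as a simple join over connected data keys: same city, overlapping date ranges. A toy illustration in Python, with all names and trips invented:

```python
from datetime import date

def coincidences(my_trips, friends_trips):
    """Match my trips against friends' trips: same city, overlapping dates."""
    hits = []
    for mine in my_trips:
        for friend, trips in friends_trips.items():
            for theirs in trips:
                same_city = mine["city"] == theirs["city"]
                overlaps = mine["start"] <= theirs["end"] and theirs["start"] <= mine["end"]
                if same_city and overlaps:
                    hits.append((friend, mine["city"]))
    return hits

my_trips = [{"city": "London", "start": date(2012, 9, 10), "end": date(2012, 9, 14)}]
friends = {
    "tom": [{"city": "London", "start": date(2012, 9, 12), "end": date(2012, 9, 16)}],
    "anna": [{"city": "Paris", "start": date(2012, 9, 12), "end": date(2012, 9, 13)}],
}
print(coincidences(my_trips, friends))  # → [('tom', 'London')]
```

The interesting part is never the join itself; it’s that the service holds enough of everyone’s fragments to run it on your behalf.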

There are many, many behaviors, both implicit and explicit, that could be managed using this formula (or what is starting to look like a successful formula, anyhow). Someone could capture, manage and enhance the things that I find funny, the things I hate, the things at home I’m trying to get rid of, the things I accomplished at work today, the political issues I support, and so on.

But just collecting, managing and enhancing my life fragments isn’t enough. And I think what Matt Jones said is a really important part of how you make data come to life.

You can make information accessible and even fun. You can make the vast pool feel manageable and usable. You can make people feel connected.

And when you can create meaning in people’s lives, you create deep loyalty. That loyalty can be the foundation of larger businesses powered by advertising or subscriptions or affiliate networks or whatever.

The result of surfacing coincidence is a meaningful action. And those actions are where business value is created.

Wikipedia defines coincidence as follows:

“Coincidence is the noteworthy alignment of two or more events or circumstances without obvious causal connection.”

This is, of course, similar and related to the definition of serendipity:

“Serendipity is the effect by which one accidentally discovers something fortunate, especially while looking for something else entirely.”

You might say that this is a criterion against which any new online service should be measured. Though it’s probably so core to getting things right that every other consideration in building a new online service needs to support it.

It’s probably THE criterion.

How to launch an online platform

I attended the Bebo developer platform announcement this morning in San Francisco. The announcement seemed to go down very well based on immediate response, though only time will tell if the expected impact is achieved.

Bebo schwag
It’s clear that a formula for launching this kind of stuff exists, and I think Bebo did a great job of giving it their own flavor. The overall format Bebo used was standard:

  • Invite people to a nice place and give them some free stuff
  • Give a presentation including a video showing customer testimonials
  • Let the founder or product owner or thought leader present the product
  • Parade the partners on stage
  • Provide demos for people to peruse after the presentation
  • Keep it short

But the nuances in the formula are what make an online platform launch successful.

  1. Create an invite-only experience: This is true of restaurants, art galleries, clubs and just about any socially driven service. Make a select few feel important by treating them differently, and they will then be your advocates. Bebo invited press and partners to a small-ish room at the Metreon to give their presentation. Those people then felt responsible for spreading the news.
  2. Make it newsworthy: I wouldn’t say that the Bebo platform was a secret, by any means, but the features that make it worth talking about were kept secret until the event. In particular, the crowd seemed very pleased to hear that Bebo decided to emulate Facebook’s success by making their platform fully compatible with Facebook’s.
  3. Follow standards: Developers are not generally interested in proprietary environments unless there is a substantial gain to be made by leveraging that environment. Platforms on the Internet should default to known and proven standards, and when they do deviate, there should be compelling reason to do so. Bebo indicated that there might be features in the future that are Bebo-specific such as micropayments, and I suspect the developer community would be happy to customize their apps for Bebo when those features are ready.
  4. Prime the pump with partners: An ecosystem is not an ecosystem if it doesn’t have partners. So, don’t launch a service for partners with no partners already committed. But more than that, partners are the proof points the wider market uses to validate that what you offer is in fact real. Give them the stage. Make them successful, so others want to follow suit. I wasn’t all that impressed with the NBC Universal app showcased at the Bebo event, but the Gaia Online and Flixster apps were solid. And the 20 or so partners demoing in the back of the room after the presentations were great evangelists for the platform. They were proud to be there and happy to sing Bebo’s praises.
  5. Be real: I’m always a sucker for a self-deprecating joker, but Bebo founder Michael Birch backed up the laughs with substance. He admitted that they intend to follow Facebook and do whatever they do, which is a totally viable strategy in this space at this point in time. Of course, he gave himself a great defense should they get pounded by the press, but his approach was very refreshing in a market increasingly crowded with ambition and arrogance.

Again, the response by developers and then the subsequent uptake by users will be the real indicators of success. But Bebo gave themselves as good a start as any by getting the launch off on the right foot.

The business of network effects

The Internet platform business has some unique challenges. It’s very tempting to adopt known models to make sense of it, like the PC business, for example, and think of the Internet platform like an operating system.

The similarities are hard to deny, and who wouldn’t want to control the operating system of the Internet?

In 2005, Jason Kottke proposed a vision for the “WebOS” where users could control their experience with tools that leveraged a combination of local storage and a local server, networked services and rich clients.

“Applications developed for this hypothetical platform have some powerful advantages. Because they run in a Web browser, these applications are cross platform, just like Web apps such as Gmail, Basecamp, and Salesforce.com. You don’t need to be on a specific machine with a specific OS…you just need a browser + local Web server to access your favorite data and apps.”

Prior to that post, Nick Carr offered a view on the role of the browser that surely resonated with the OS perspective for the Internet:

“Forget the traditional user interface. The looming battle in the information technology business is over control of the utility interface…Control over the utility interface will provide an IT vendor with the kind of power that Microsoft has long held through its control of the PC user interface.”

He also responded later to Kottke’s vision saying that the reliance on local web and storage services on a user’s PC may be unnecessary:

“Your personal desktop, residing entirely on a distant server, will be easily accessible from any device wherever you go. Personal computing will have broken free of the personal computer.”

But the client layer is merely a piece of a much larger puzzle, in my opinion.

Dare Obasanjo more recently broke down the different ideas of what “Cloud OS” might mean:

“I think it is a good idea for people to have a clear idea of what they are talking about when they throw around terms like “cloud OS” or “cloud platform” so we don’t end up with another useless term like SOA which means a different thing to each person who talks about it. Below are the three main ideas people often identify as a “Web OS”, “cloud OS” or “cloud platform” and examples of companies executing on that vision.”

He defines them as follows:

  1. WIMP Desktop Environment Implemented as a Rich Internet Application (The YouOS Strategy)
  2. Platform for Building Web-based Applications (The Amazon Strategy)
  3. Web-based Applications and APIs for Integrating with Them (The Google Strategy)

The OS metaphor has lots of powerful implications for business models, as we’ve seen on the PC. The operating system in a PC controls all the connections from the application user experience through the filesystem down through the computer hardware itself out to the interaction with peripheral services. Being the omniscient hub makes the operating system a very effective taxman for every service in the stack. And from there, the revenue streams become very easy to enable and enforce.

But the OS metaphor implies a command-and-control dynamic that doesn’t really work in a global network controlled only by protocols.

Internet software and media businesses don’t have an equivalent choke point. There’s no single processor or function or service that controls the Internet experience. There’s no one technology or one company that owns distribution.

There are lots of stacks that do have choke points on the Internet. And there are choke points that have tremendous value and leverage. Some are built purely and intentionally on top of a distribution point such as the iPod on iTunes, for example.

But no single distribution center touches all the points in any stack. The Internet business is fundamentally made of data vectors, not operational stacks.

Jeremy Zawodny shed light on this concept for me using building construction analogies.

He noted that my building contractor doesn’t exclusively buy Makita or DeWalt or Ryobi tools, though some tools make more sense in bundles. He buys the tool that is best for the job and what he needs.

My contractor doesn’t employ plumbers, roofers and electricians himself. Rather he maintains a network of favorite providers who will serve different needs on different jobs.

He provides value to me as an experienced distribution and aggregation point, but I am not exclusively tied to using him for everything I want to do with my house, either.

Similarly, the Internet market is a network of services. The trick to understanding what the business model looks like is figuring out how to open and connect services in ways that add value to the business.

In a prescient viewpoint from 2002 about the Internet platform business, Tim O’Reilly explained why a company that has a large and valuable data store should open it up to the wider network:

“If they don’t ride the horse in the direction it’s going, it will run away from them. The companies that “grasp the nettle firmly” (as my English mother likes to say) will reap the benefits of greater control over their future than those who simply wait for events to overtake them.

There are a number of ways for a company to get benefits out of providing data to remote programmers:

Revenue. The brute force approach imposes costs both on the company whose data is being spidered and on the company doing the spidering. A simple API that makes the operation faster and more efficient is worth money. What’s more, it opens up whole new markets. Amazon-powered library catalogs anyone?

Branding. A company that provides data to remote programmers can request branding as a condition of the service.

Platform lock in. As Microsoft has demonstrated time and time again, a platform strategy beats an application strategy every time. Once you become part of the platform that other applications rely on, you are a key part of the computing infrastructure, and very difficult to dislodge. The companies that knowingly take their data assets and make them indispensable to developers will cement their role as a key part of the computing infrastructure.

Goodwill. Especially in the fast-moving high-tech industry, the “coolness” factor can make a huge difference both in attracting customers and in attracting the best staff.”

That doesn’t clearly translate into traditional business models necessarily, but if you look at key business breakthroughs in the past, the picture today becomes more clear.

  1. The first breakthrough business model was based around page views. The domain created an Apple-like controlled container. Exposure to eyeballs was sold by the thousands per domain. All the software and content was owned and operated by the domain owner, except the user’s browser. All you needed was to get and keep eyeballs on your domain.
  2. The second breakthrough business model emerged out of innovations in distribution. By building a powerful distribution center and direct connections with the user experience, advertising could be sold both where people began their online experiences and at the various independent domain stacks where they landed. Inventory begets spending begets redistribution begets inventory…it started to look a lot like network effects as it matured.
  3. The third breakthrough business model seems to be a riff on its predecessors and looks less and less like an operating system. The next breakthrough is network effects.

Network effects happen when the value of the entire network increases with each node added to the network. The telephone is the classic example, where every telephone becomes more valuable with each new phone in the network.

This is in contrast to TVs, which don’t care or even notice when more TVs plug in.
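The telephone intuition is often formalised as Metcalfe’s law: the number of possible connections grows roughly with the square of the number of nodes. A quick sketch of how sharply that value curve climbs:

```python
def potential_connections(n):
    """A network of n phones supports n*(n-1)/2 distinct pairwise
    conversations -- the Metcalfe's-law intuition behind network effects."""
    return n * (n - 1) // 2

# Each new node adds value for every existing node:
for n in (2, 10, 100, 1000):
    print(n, potential_connections(n))  # 1, 45, 4950, 499500
```

A network of TVs, by contrast, stays flat: the thousandth set is worth no more than the second.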

Recommendation engines are the ultimate network effect lubricator. The more people shop at Amazon, the better their recommendation engine gets…which, in turn, helps people buy more stuff at Amazon.
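That flywheel can be sketched with a toy co-occurrence recommender — a minimal, hypothetical illustration (not Amazon’s actual algorithm) of why each additional shopper makes the recommendations better:

```python
from collections import Counter

def recommend(baskets, item, top_n=3):
    """Recommend items that most often co-occur with `item` across
    shopping baskets; every new basket sharpens the counts."""
    co_counts = Counter()
    for basket in baskets:
        if item in basket:
            co_counts.update(i for i in basket if i != item)
    return [i for i, _ in co_counts.most_common(top_n)]

baskets = [
    {"drill", "screws", "sawhorse"},
    {"drill", "screws"},
    {"drill", "paint"},
    {"paint", "brushes"},
]
print(recommend(baskets, "drill"))  # "screws" ranks first (2 co-occurrences)
```

More baskets means more co-occurrence signal, which means better picks, which brings more shoppers: the network effect in four lines of counting.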

Network effects are built around unique and useful nodes with transparent and highly accessible connection points. Social networks are a good example because they use a person’s profile as a node and a person’s email address as a connection point.

Network effects can be built around other things like keyword-tagged URLs (del.icio.us), shared photos (flickr), songs played (last.fm), news items about locations (outside.in).

The contribution of each data point wherever that may happen makes the aggregate pool more valuable. And as long as there are obvious and open ways for those data points to talk to each other and other systems, then network effects are enabled.

Launching successful network effect businesses is no easy task. The value a participant can extract from the network must be higher than the cost of adding a node to the network. The network’s purpose and its output must be indispensable to the node creators.

Massively distributed network effects require some unique characteristics to form. Value not only has to build with each new node, but the value of each node needs to increase as it gets leveraged in other ways in the network.

For example, my email address has become an enabler around the Internet. Every site that requires a login is going to capture my email address. And as I build a relationship with those sites, my email address becomes increasingly important to me. Not only is having an email address adding value to the entire network of email addresses, but the value of my email address increases for me with each service that is able to leverage my investment in my email address.

Then the core services built around my email address start to increase in value, too.

For example, when I turned on my iPhone and discovered that my Yahoo! Address Book was automatically cooked right in without any manual importing, I suddenly realized that my Yahoo! Address Book has been a constant in my life ever since I got my first Yahoo! email address back in the ’90s. I haven’t kept it current, but it has followed me from job to job in a way that Outlook has never been able to do.

My Yahoo! Address Book is becoming more and more valuable to me. And my iPhone is more compelling because of my investment in my email address and my address book.

Now, if the network was an operating system, there would be taxes to pay. Apple would have to pay a tax for accessing my address book, and I would have to pay a tax to keep my address book at Yahoo!. Nobody wins in that scenario.

User data needs to be open and accessible in meaningful ways, and revenue needs to be built as a result of the effects of having open data rather than as a margin-based cost-control business.

But Dare Obasanjo insightfully exposes the flaw in reducing openness around identity to individual control alone:

“One of the bitter truths about ‘Web 2.0’ is that your data isn’t all that interesting, our data on the other hand is very interesting…A lot of ‘Web 2.0’ websites provide value to their users via wisdom-of-the-crowds approaches such as tagging or recommendations which are simply not possible with a single user’s data set or with a small set of users.”

Clearly, one of the most successful revenue-driving opportunities in the networked economy is advertising. It makes sense that it would be since so many of the most powerful network effects are built on people’s profiles and their relationships with other people. No wonder advertisers can’t spend enough money online to reach their targets.

It will be interesting to see how some of the clever startups leveraging network effects such as Wesabe think about advertising.

Wesabe have built network effects around people’s spending behavior. As you track your finances and pull in your personal banking data, Wesabe makes loose connections between your transactions and other people who have made similar transactions. Each new person and each new transaction creates more value in the aggregate pool. You then discover other people who have advice about spending in ways that are highly relevant to you.

I’ve been a fan of Netflix for a long time now, but when Wesabe showed me that lots of Netflix customers were switching to Blockbuster, I had to investigate and before long decided to switch, too. Wesabe knew to advise me based on my purchasing behavior which is a much stronger indicator of my interests than my reading behavior.

Advertisers should be drooling at the prospects of reaching people on Wesabe. No doubt Netflix should encourage their loyal subscribers to use Wesabe, too.

The many explicit clues about my interests I leave around the Internet — my listening behavior at last.fm, my information needs I express in del.icio.us, my address book relationships, my purchasing behavior in Wesabe — are all incredibly fruitful data points that advertisers want access to.

And with managed distribution, a powerful ad platform could form around these explicit behaviors that can be loosely connected everywhere I go.

Netflix could automatically find me while I’m reading a movie review on a friend’s blog or even at The New York Times and offer me a discount to re-subscribe. I’m sure they would love to pay lots of money for an ad that was so precisely targeted.

That blogger and The New York Times would be happy to share revenue back with the ad platform provider who enabled such precise targeting that resulted in higher payouts overall.

And I might actually come back to Netflix if I saw that ad. Who knows, I might even start paying more attention to ads if they started to find me rather than interrupt me.

This is why the Internet looks less and less like an operating system to me. Network effects look different to me in the way people participate in them and extract value from them, the way data and technologies connect to them, and the way markets and revenue streams build off of them.

Operating systems are about command-and-control distribution points, whereas network effects are about joining vectors to create leverage.

I know little about the mathematical nuances of chaos theory, but it offers some relevant philosophical approaches to understanding what network effects are about. Wikipedia addresses how chaos theory affects organizational development:

“Most of the focus on chaos theory is primarily rooted in the underlying patterns found in an otherwise chaotic environment, more specifically, concepts such as self-organization, bifurcation and self-similarity…

Self-organization, as opposed to natural or social selection, is a dynamic change within the organization where system changes are made by recalculating, re-inventing and modifying its structure in order to adapt, survive, grow and develop. Self-organization is the result of re-invention and creative adaptation due to the introduction of, or being in a constant state of, perturbed equilibrium.”

Yes, my PC is often in a state of ‘perturbed equilibrium’ but not because it wants to be.

Why Outside.in may have the local solution

The recent blog frenzy over hyperlocal media inspired me to have a look at Outside.in again.


It’s not just the high profile backers and the intense competitive set that make Outside.in worth a second look. There’s something very compelling in the way they are connecting data that seems like it matters.

My initial thought when it launched was that this idea had been done before too many times already. Topix.net appeared to be a dominant player in the local news space, not to mention similar but different kinds of local efforts at startups like Yelp and amongst all the big dotcoms.

And even from their strong position, Topix’s location-based news media aggregation model was kind of, I don’t know, uninteresting. I’m not impressed with local media coverage these days, in general, so why would an aggregator of mediocre coverage be any more interesting than what I discover through my RSS reader?

But I think Outside.in starts to give some insight into how local media could be done right…how it could be more interesting and, more importantly, useful.

The light triggered for me when I read Jon Udell’s post on “the data finds the data”. He explains how data can be a vector through which otherwise unrelated people meet each other, a theme that continues to resonate for me.

Media brands have traditionally been good at connecting the masses to each other and to marketers. But the expectation of how directly people feel connected to other individuals by the media they share has changed.

Whereas the brand once provided a vector for connections, data has become the vehicle for people to meet people now. Zip code, for example, enables people to find people. So does marital status, date and time, school, music taste, work history. There are tons of data points that enable direct human-to-human discovery and interaction in ways that media brands could only accomplish in abstract ways in the past.

URLs can enable connections, too. Jon goes on to explain:

“On June 17 I bookmarked this item from Mike Caulfield… On June 19 I noticed that Jim Groom had responded to Mike’s post. Ten days later I noticed that Mike had become Jim’s new favorite blogger.

I don’t know whether Jim subscribes to my bookmark feed or not, but if he does, that would be the likely vector for this nice bit of manufactured serendipity. I’d been wanting to introduce Mike at KSC to Jim (and his innovative team) at UMW. It would be delightful to have accomplished that introduction by simply publishing a bookmark.”

Now, Outside.in allows me to post URLs much like one would do in Newsvine or Digg or any number of other collaborative citizen media services. But Outside.in leverages the zip code data point as the topical vector rather than a set of predetermined one-size-fits-all categories. It then allows miscellaneous tagging to be the subservient navigational pivot.

Suddenly, I feel like I can have a real impact on the site if I submit something. If there’s anything near a critical mass of people in the 94107 zip code on Outside.in then it’s likely my neighbors will be influenced by my posts.

Fred Wilson of Union Square Ventures explains:

“They’ve built a platform that placebloggers can submit their content to. Their platform “tags” that content with a geocode — an address, zip code, or city — and that renders a new page for every location that has tagged content. If you visit outside.in/10010, you’ll find out what’s going on in the neighborhood around Union Square Ventures. If you visit outside.in/back_bay, you’ll see what’s going on in Boston’s Back Bay neighborhood.”
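The grouping mechanism Fred describes can be sketched in a few lines — a hedged illustration where the field names and the 5-digit-tag heuristic are my assumptions, not Outside.in’s implementation:

```python
def pages_by_location(posts):
    """Group geotagged posts into per-location pages, the way a URL
    like outside.in/10010 renders everything tagged with that zip."""
    pages = {}
    for post in posts:
        for tag in post["tags"]:
            if tag.isdigit() and len(tag) == 5:  # treat 5-digit tags as zip codes
                pages.setdefault(tag, []).append(post["title"])
    return pages

posts = [
    {"title": "Stoop sale on De Haro St", "tags": ["94107", "sale"]},
    {"title": "Events near Union Square", "tags": ["10010"]},
]
print(pages_by_location(posts)["94107"])  # ['Stoop sale on De Haro St']
```

The location page exists only because tagged content exists; no editor decides which neighborhoods get covered.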

Again, the local online media model isn’t new. In fact, it’s old. CitySearch in the US and UpMyStreet in the UK proved years ago that a market does in fact exist in local media somewhere, somehow, but the market always feels fragile and susceptible to ghost town syndrome.

Umair Haque explains why local is so hard:

“Why doesn’t Craigslist choose small towns? Because there isn’t enough liquidity in the market. Let me put that another way. In cities, there are enough buyers and sellers to make markets work – whether of used stuff, new stuff, events, etc, etc.

In smaller towns, there just isn’t enough supply or demand.”

If they commit to building essentially micro media brands based exclusively on location, I suspect Outside.in will run itself into the ground spending money to establish critical mass in every neighborhood around the world.

Now that they have a nice micro media approach that seems to work they may need to start thinking about macro media. In order to reach the deep dark corners of the physical grid, they should connect people in larger contexts, too. Here’s an example of what I mean…

I’m remodeling the Potrero Hill shack we call a house right now. It’s all I talk about outside of work, actually. And I need to understand things like how to design a kitchen, ways to work through building permits, and who can supply materials and services locally for this job.

There must be kitchen design experts around the world I can learn from. Equally, I’m sure there is a guy around the corner from me who can give me some tips on local services. Will Architectural Digest or Home & Garden connect me to these different people? No. Will The San Francisco Chronicle connect us? No.

Craigslist won’t even connect us, because that site is so much about the transaction.

I need help both from people who can connect on my interest vector in addition to the more local geographic vector. Without fluid connections on both vectors, I’m no better off than I was with my handy RSS reader and my favorite search engine.

Looking at how they’ve decided to structure their data, it seems Outside.in could pull this off and connect my global affinities with my local activities pretty easily.

This post is way too long already (sorry), but it’s worth pointing out some of the other interesting things they’re doing if you care to read on.

Outside.in is also building automatic semantic links with the contributors’ own blogs. By including my zip code in a blog post, Outside.in automatically drinks up that post and adds it into the pool. They even re-tag my post with the correct geodata and offer GeoRSS feeds back out to the world.

Here are the instructions:

“Any piece of content that is tagged with a zip code will be assigned to the corresponding area within outside.in’s system. You can include the zip code as either a tag or a category, depending on your blogging platform.”

I love this.
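The tagging convention is simple enough that an aggregator could sketch the assignment step in a few lines — a hypothetical illustration, not Outside.in’s actual code:

```python
import re

ZIP_RE = re.compile(r"^\d{5}$")

def zip_codes(categories):
    """Pull US zip codes out of a post's tags or categories so the
    post can be assigned to the corresponding area. A real system
    would also validate against a list of known zips."""
    return [c for c in categories if ZIP_RE.match(c)]

print(zip_codes(["remodeling", "94107", "kitchen"]))  # ['94107']
```

Because the zip rides along in the feed the blogger already publishes, the contributor does nothing new and the aggregator gets clean geodata for free.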

30Boxes does something similar where I can tell it to collect my Upcoming data, and it automatically imports events as I tag them in Upcoming.

They are also recognizing local contributors and shining light on them with prominent links. I can see who the key bloggers are in my area and perhaps even get a sense of which ones matter, not just who posts the most. I’m guessing they will apply the “people who like this contributor also like this contributor” type of logic to personalize the experience for visitors at some point.

Now what gets me really excited is to think about the ad model that could happen in this environment of machine-driven semantic relationships.

If they can identify relevant blog posts from local contributors, then I’m sure they could identify local coupons from good sources of coupon feeds.

Let’s say I’m the national Ace Hardware marketing guy, and I publish a feed of coupons. I might be able to empower all my local Ace franchises and affiliates to publish their own coupons for their own areas and get highly relevant distribution on Outside.in. Or I could also run a national coupon feed with zip code tags cooked into each item.

To Umair’s point, that kind of marketing will only pay off in major metros where the markets are stronger.

To help address the inventory problem, Outside.in could then offer to sell ad inventory on their contributors’ web sites. As an Outside.in contributor, I would happily run Center Hardware coupons, my local Ace affiliate, on my blog posts that talk about my remodeling project if someone gave them to me in some automated way.

If they do something like this then they will be able to serve both the major metros and the smaller hot spots that you can never predict will grow. Plus, the incentives for the individuals in the smaller communities start feeding the wider ecosystem that lives on the Outside.in platform.

Outside.in would be pushing leverage out to the edge both in terms of participation as they already do and in terms of revenue generation, a fantastic combination of forces that few media companies have figured out, yet.

I realize there are lots of ‘what ifs’ in this assessment. The company has a lot of work to do before they break through, and none of it is easy. The good news for them is that they have something pretty solid that works today despite a crowded market.

Regardless, knowing Fred Wilson, Esther Dyson, John Seely Brown and Steven Berlin Johnson are behind it, among others, no doubt they are going to be one to watch.