An open community news platform: n0tice.com

For the last several weeks I’ve been working on a new project: a SoLoMo initiative, as John Doerr or Mary Meeker would call it.

Noticeboard photo by Jer*ry

It’s a mobile publishing platform that resembles a community notice board.  It’s called n0tice*:

http://n0tice.com.

After seeing Google’s “News near you” service announced on Friday I thought it was a good time to jump into the conversation and share what I’m up to.  Clearly, there are a lot of people chasing the same or similar issues.

First, here’s some background.  Then I’ll detail what it does, how it works, and what I hope it will become.

What is n0tice?

It began as a simple hack day project over a year ago.  I was initially just curious about how location worked on the phone.  At first I thought that was going to be beyond me, and then Simon Willison enlightened me to the location capabilities inherent in modern web browsers. There are many solutions published out there.

It took about half a second to go from working out how to identify a user’s location to realizing that this feature could be handy for citizen reporters.

Around the same time there was a really interesting little game called noticin.gs going around which was built by Tom Taylor and Tom Armitage, two incredibly talented UK developers.  The game rewarded people for being good at spotting interesting things in the world and capturing a photo of them.

Ushahidi was tackling emergency response reporting. And, of course, Foursquare was hitting its stride then, too.

These things were all capturing my imagination, and so I thought I would try something similar in the context of sharing news, events and listings in your community.

Photo by Roo Reynolds

However, I was quite busy with the Guardian’s Open Platform, as the team was moving everything out of beta, introducing some big new services and infusing it into the way we operate.  I learned a lot doing that which has informed n0tice, too, but it was another 12 months before I could turn my attention back to this project.  It doesn’t feel any less relevant today than it did then. It’s just a much more crowded market now.

What does n0tice do?

The service operates in two modes – reading and posting.

When you go to n0tice.com it will first detect whether or not you’re coming from a mobile device.  It was designed for the iPhone first, but the desktop version is making it possible to integrate a lot of useful features, too.

(Lesson:  jQuery Mobile is amazing. It makes your mobile projects better faster. I wish I had used it from day one.)

It will then ask your permission to read your location.  If you agree, it grabs your latitude and longitude, and it shows you what has been published to n0tice within a close radius.

(Lesson: It uses Google Maps and their geocoder to get the location out of the browser, but then it uses Yahoo!’s geo services to do some of the other lookups since I wanted to work with different types of location objects.  This combination is clunky and probably a bad idea, but those tools are very robust.)

You can then zoom out or zoom in to see broader or more precise coverage.
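The nearby-posts lookup can be sketched as a simple great-circle filter. This is a minimal illustration, not the actual n0tice implementation, which assumes posts are stored with a latitude and longitude; widening the radius is the “zoom out” behavior described above.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    r = 3959  # Earth's mean radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def posts_near(posts, lat, lon, radius_miles):
    """Return the posts within radius_miles of (lat, lon)."""
    return [p for p in posts
            if haversine_miles(lat, lon, p["lat"], p["lon"]) <= radius_miles]
```

A real deployment would push this filter into the database (e.g. a bounding-box query) rather than scanning every post, but the principle is the same.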

Since it knows where you are already, it’s easy to post something you’ve seen near you, too.  You can actually post without being logged in, but there are some social incentives to encourage logged in behavior.

Echoing Foursquare’s ‘Mayor’ mechanic, n0tice has an ‘Editor’ badge.

The first person to post in a particular city becomes the Editor of that city.  The Editor can then be ousted if someone completes more actions in the same city or region.

It was definitely a challenge working out how to make sensible game mechanics work, but it was even harder finding the right mix of neighborhood, city, country, lat/long coordinates so that the idea of an ‘Editor’ was consistent from place to place.

London and New York, for example, are much more complicated given the importance of the neighborhoods yet poorly defined boundaries for them.
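The core of the Editor mechanic described above can be sketched in a few lines. This is a hypothetical simplification (the live site also has to reconcile neighborhoods, cities and regions, which is the hard part): the first person to act in a city takes the badge, and anyone who accumulates more actions there ousts them.

```python
def update_editor(editors, action_counts, city, user):
    """Record an action by `user` in `city` and re-award the Editor badge.

    The first person to act in a city becomes its Editor; anyone who
    accumulates strictly more actions there takes the badge over.
    """
    counts = action_counts.setdefault(city, {})
    counts[user] = counts.get(user, 0) + 1
    current = editors.get(city)
    if current is None or counts[user] > counts.get(current, 0):
        editors[city] = user
    return editors[city]
```

Using a strict “more actions” test means an incumbent keeps the badge on ties, which is one sensible answer to the ousting question; the real mechanics will need more nuance than this.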

(Lesson: Login is handled via Facebook. Their platform has improved a lot in the last 12 months and feels much more ‘give-and-take’ than just ‘take’ as it used to. Now, I’m not convinced that the activities in a person’s local community are going to join up naturally via the Facebook paradigm, so it needs to be used more as a quickstart for a new service like this one.)

The ‘Editor’ mechanics are going to need a lot more work.  But what I like about the ‘Editor’ concept is that we can now start to endow more rights and privileges upon each Editor when an area matures.

Perhaps Editors are the only ones who can delete posts. Perhaps they can promote important posts. Maybe they can even delegate authority to other participants or groups.

Of course, quality is always an issue with open communities. Having learned a few things about crowdsourcing at the Guardian, I’ve put some simple triggers in place that should make it easier to surface quality should the platform scale to a larger audience.

For example, rather than comments, n0tice accepts ‘Evidence’.

You can add a link to a story, post a photo, embed a video or even a Storify feed to improve the post.

Also, the ratings aren’t merely positive/negative.  They ask if something matters, if people will care, and if it’s accurate. That type of engagement may be expecting too much of the community, but I’m hopeful it will work.

Of course, all of this additional interactivity is only available in the desktop version, as the mobile version is intended to serve just two very specific use cases:

  1. getting a snapshot of what’s happening near you now
  2. posting something you’ve seen quickly and easily

How will n0tice make money?

Since the service is a community notice board, it makes sense to use an advertising model that people already understand in that context: classifieds.

Anyone can list something on n0tice for free that they are trying to sell.  Then they can buy featured promotional positions based on how large the area is in which they want their item to appear and for how long they want it to be seen there.

(Lesson: Integrating PayPal for payments took no time at all. Their APIs and documentation feel a little dated in some ways, but just as Facebook is fantastic as a quickstart tool for identity, PayPal is a brilliant quickstart for payments.)

Promotion on n0tice costs $1 per 1 mile radius per day. That’s in US dollars.

While we’re still getting the word out and growing the community, $1 will buy you a featured spot that lasts until more people come along and start buying up availability.
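The pricing rule above ($1 per 1 mile radius per day) is simple enough to sketch directly. Rounding rules, minimums and currency handling are all assumptions here, not details of the live service:

```python
def promotion_price_usd(radius_miles, days, rate_per_mile_day=1):
    """Price a featured listing: $1 per mile of radius per day (assumed flat rate)."""
    if radius_miles <= 0 or days <= 0:
        raise ValueError("radius and duration must be positive")
    return radius_miles * days * rate_per_mile_day
```

So a 5-mile radius for 3 days would come to $15 under this scheme.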

But there’s a lot we can do with this framework.

For example, I think it would make sense that a ‘Publisher’ role could be defined much like the ‘Editor’ for a region.

Perhaps a ‘Publisher’ could earn a percentage of every sale in a region.  The ‘Publisher’ could either earn that privilege or license it from us.

I’m also hopeful that we can make some standard affiliate services possible for people who want to use the ad platform in other apps and web sites across the Internet.  That will only really work if the platform is open.

How will it work for developers and partners?

The platform is open in every way.

There are both read and write APIs for it.  The mobile and desktop versions are both using those APIs, in fact.

The read API can be used without a key at the moment, and the write API is not very complicated to use.

So, for example, here are the 10 most recent news reports with the ‘crime’ tag in machine-readable form:

http://n0tice.com/api/readapi-reports.php?output=xml&tags=crime&count=10
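A request like the one above can be composed programmatically. This is a small sketch using only the parameters visible in that example URL (`output`, `tags`, `count`); any other parameters the API may support are not assumed here:

```python
from urllib.parse import urlencode

READ_API = "http://n0tice.com/api/readapi-reports.php"

def build_reports_url(tags, count=10, output="xml"):
    """Compose a read-API query like the example above."""
    return READ_API + "?" + urlencode({"output": output, "tags": tags, "count": count})
```

Fetch the resulting URL with any HTTP client and parse the XML (or whatever the `output` parameter requests) on the other end.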

The client code for the mobile version is posted on Github with an open license (we haven’t committed to which license, yet), though it is a few versions behind what is running on the live site.  That will change at some point.

And the content published on n0tice is all Creative Commons Attribution-Share Alike so people can use it elsewhere commercially.

The idea in this approach to openness is that the value is in the network itself, the connections between things, the reputation people develop, the impact they have in their communities.

The data and the software are enablers that create and sustain the value.  So the more widely used the data and software become the more valuable the network is for all the participants.

How scalable is the platform?

The user experience can scale globally given it is based on knowing latitude and longitude, something treated equally everywhere in the world.  There are limitations with the lat/long model, but we have a lot of headroom before hitting those problems.

The architecture is pretty simple at the moment, really.  There’s not much to speak of in terms of directed graphs and that kind of thing, yet.  So the software, regardless of how badly written it is, which it most definitely is, could be rewritten rather quickly.  I suspect that’s inevitable, actually.

The software environment is a standard LAMP stack hosted on Dreamhost which should be good enough for now.  I’ve started hooking in things like Amazon’s CloudFront, but it’s not yet on EC2.  That seems like a must at some point, too.

The APIs should also help with performance if we make them more cacheable.

The biggest performance/scalability problem I foresee will happen when the gaming mechanics start to matter more and the location and social graphs get bigger.  It will certainly creak when lots of people are spending time doing things to build their reputation and acquire badges and socialize with other users.

If we do it right, we will learn from projects like WordPress and turn the platform into something that many people care about and contribute to.  It would surely fail if we took the view that we can be the only source of creative ideas for this platform.

To be honest, though, I’m more worried about the dumb things like choking on curly quotes in users’ posts and accidentally losing users’ badges than I’m worried about scaling.

It also seems likely that the security model for n0tice is currently worse than the performance and scalability model. The platform is going to need some help from real professionals on that front, for sure.

What’s the philosophy driving it?

There’s most definitely an ideology fueling n0tice, but it would be an overstatement to say that the vision is leading what we’re doing at the moment.

In its current state, I’m just trying to see if we can create a new kind of mobile publishing environment that appeals to lots of people.

There’s enough meat to it already, though, that the features are very easy to line up against the mission of being an open community notice board.

Local UK community champion Will Perrin said it felt like a “floating cloud of data that follows you around without having to cleave to distribution or boundary.”

I really like that idea.

Taking a wider view, the larger strategic context that frames projects like this one and things like the Open Platform is about being Open and Connected.  Recently, I’ve written about Generative Media Platforms and spoken about Collaborative Media.  Those ideas are all informing the decisions behind n0tice.

What does the future look like for n0tice?

The Guardian Media Group exists to deliver financial security for Guardian News and Media.

My hope is that we can move n0tice from being a hack to becoming a new GMG business that supports the Guardian more broadly.

The support n0tice provides should come in two forms: 1) new approaches to open and collaborative journalism and 2) new revenue streams.

It’s also very useful to have living projects that demonstrate the most extreme examples of ‘Open and Connected’ models.  We need to be exploring things outside our core business that may point to the future in addition to moving our core efforts where we want to go.

We spend a lot of time thinking about openness and collaboration and the live web at the Guardian.  If n0tice does nothing more than illustrate what the future might look like then it will be very helpful indeed.

However, the more I work on this the more I think it’s less a demo of the future and more a product of the present.

Like most of the innovations in social media, the hard work isn’t the technology or even the business model.

The most challenging aspect of any social media or SoLoMo platform is making it matter to real people who are going to make it come alive.

If that’s also true for n0tice, then the hard part is just about to begin.

 


* The hack was originally called ‘News Signals’.  But after trying and failing to convince a few people that this was a good idea, including both technical people and potential users, such as my wife, I realized the name really mattered.

I’ve spent a lot of time thinking about generative media platforms, and the name needed to reflect that goal, something that spoke to the community’s behaviors through the network. It was supposed to be about people, not machines.

Now, of course, it’s hard to find a short domain name these days, but digits and dots and subdomains can make things more interesting and fun anyhow. Luckily, n0tice.com was available…that’s a zero for an ‘o’.

Behind the scenes of the Open Platform’s evolution

When I came to the Guardian two years ago, I brought with me some crazy California talk about open strategies and APIs and platforms. Little did I know the Guardian already understood openness. It’s part of its DNA. It just needed new ways of working in an open world.

Last week, The Guardian’s approach to openness and mutualisation took a giant step forward when we brought the Open Platform out of Beta.

It’s a whole new business model with a new technology infrastructure that is already accelerating our ambitions.

I’ll explain how we got to this point, but let me clarify what we just announced:

  • We’ve implemented a tiered access model that I think is a first in this space. We have a simple way to work with anyone who wants to work with us, from hobbyist to large-scale service provider and everything in between.
  • We’ve created a new type of ad network with 24/7 Real Media’s Open AdStream, one where the ads travel with the content that we make available for partners to reuse.
  • That ad network is going to benefit from another first which is Omniture analytics code that travels with the content, as well.
  • License terms that encourage people to add value are rare. Using many open-license principles, we developed T&Cs that will fuel new business, not stop it.
  • Hosted in the cloud on Amazon EC2, the service scales massively. There are no limits to the number of customers we can serve.
  • The API uses the open source search platform Solr which makes it incredibly fast, robust, and easy for us to iterate quickly.
  • We introduced a new service for building apps on our network called MicroApps. Partners can create pages and fully functional applications on guardian.co.uk.

We’re using all the tools in the Open Platform for many of our own products, including the Guardian iPad app, several digital products and more and more news apps that require quick turn-around times and high performance levels.

There’s lots of documentation on the Open Platform web site explaining all this and more, but I figured I could use this space to give a picture of what’s been happening behind the scenes to get to this point.

It’s worth noting that this is far from the full picture of all the amazing stuff that has been happening at the Guardian the past 12 months. These are the things that I’ve had the pleasure of being close to.

Beginning with Beta

First, we launched in Beta last year. We wanted to build some excitement around it via the people who would use it first. So, we unveiled it at a launch event in our building to some of the smartest and most influential London developers and tech press.

We were resolute in our strategy, but when you release something with unknown outcomes and a clear path to chaos, people get uneasy. So, we created hurdles just large enough to keep it from exploding, but gave those who used it a berth wide enough to take it to its extreme and demonstrate its value.

It worked. Developers got it right away and praised us for it. They immediately started building things using it (see the app gallery). All good signs.

Socializing the message

We ran a Guardian Hack Day and started hosting and sponsoring developer events, including BarCamp, Rewired State, FOWA, dConstruct, djugl, Music Hack Day, ScaleCamp, etc.

Next, we knew the message had to reach their bosses soon, and their bosses’ bosses. So, we aimed right for the top.

Industry events can be useful ways to build relationships, but Internet events have largely lacked meaning. People who care about how the Internet is changing the world, and who are also actively making that change happen, were the types of people we needed to build a long-term dialog with.

So, we came up with a new kind of event: Activate Summit.

The quality of the speakers and attendees at Activate was incredible. Because of those people the event has now turned into something much more amazing than what we initially conceived.

Nick Bostrom’s darkly humorous analysis of the likelihood of human extinction as a result of technology haunts me frequently still, but the event also celebrates some brilliant ways technology is making life better. I think we successfully tapped into some kind of shared consciousness about why people invest their careers into the Internet movement…it’s about making a difference.

Developers, developers, developers!

Gordon Brown was wise in his decision to put Tim Berners-Lee and Nigel Shadbolt on the task of opening government data. But they knew enough to know that they didn’t know how to engage developers. Where did they turn for help? The Guardian!

We couldn’t have been more excited to help them get data.gov.uk out the door successfully. It was core to what we’re about. As Free Our Data champion Charles Arthur joked on the way to the launch presentation, “nice of them to throw a party for me.”

We gave them a platform to launch data.gov.uk in the form of developer outreach, advice, support, event logistics, a nice building, etc., but, again, the people involved made the whole effort much more impressive than any contribution we made to it.

Tom Taylor’s Postcode Paper, for example, was just brilliant on so many levels. The message for why open government data could not have been clearer.

Election data

Then when the UK election started to pick up some momentum, we opened up the Guardian’s deep politics database and gave it a free-to-use API. We knew we couldn’t possibly cover every angle of the election and hoped that others could use the Guardian’s resources to engage voters. We couldn’t have asked for a better example of that than Voter Power.

A range of revenue models

All along there were some interesting things happening more behind the scenes, too.

The commercial team was experimenting with some new types of deals. Our ad network business grew substantially, and we added a Food Ad Network and a Diversity Network to our already successful Green Ad network.

It was clear that there was also room for a new type of ad network, a broader content-targeted ad network. And better yet, if we could learn about what happens with content out across the web then we might have the beginnings of a very intelligent targeting engine, too.

24/7 Real Media’s Open Ad Stream and Omniture were ready to help us make this happen. So, we embedded ads and analytics code with article content in the Content API. We’ve launched with some house ads to test it out, but we’re very excited by the possibilities when the network grows.

The Guardian’s commercial teams, including Matt Gilbert, Steve Wing, Dan Hedley and Torsten de Reise, also worked out a range of different partnerships with several Beta customers including syndication, rev share on paid apps, and rev share on advertising. We’re scaling those models and working out some new ones, as well.

It became obvious to everyone that we were on to something with a ton of potential.


Rewriting the API for scale

Similarly, the technology team was busily rebuilding the Content API the moment we realized how big it needed to be.

In addition to supporting commercial partners, we wanted to use it for our own development. The new API had to scale massively, it had to be fast, it had to be reliable, it had to be easy to use, and it had to be cheap. We used the open source search platform Solr hosted on Amazon’s EC2. API service management was handled by Mashery.

The project has hit the desk of nearly every member of the development team at one point or another. Here are some of the key contributions. Mat Wall architected it. Graham Tackley made Mat’s ideas actually work. Graham and Stephen Wells led the development, while Francis Rhys-Jones and Daithi O’Crualaoich wrote most of the functions and features for it. Martyn Inglis and Grant Klopper handled the ad integration. The wonderful API Explorer was written by Francis, Thibault Sacreste and Ken Lim. Matthew O’Brien wrote the Politics API. The MicroApps framework included all these people plus basically the entire team.

Stephen Dunn and Graham Tackley provided more detail in a presentation to the open source community in Prague at Lucid Imagination’s Solr/Lucene EuroCon event.

The application platform we call MicroApps

Perhaps even more groundbreaking than all this is the MicroApp framework. A newspaper web site that can run 3rd party apps? Yes!

MicroApps makes the relationship between the Guardian and the Internet feel like a two-way, read-write, permeable membrane rather than a broadcast tower. It’s a very tangible technology answer to the openness vision.

You can learn more by reading 2 excellent blog posts about MicroApps. Dan Catt explains how he used MicroApps for Zeitgeist. Since most of the MicroApps that exist today are hosted on Google AppEngine, the Google Code team published Chris Thorpe’s insights about what we’re doing with MicroApps on their blog.

The MicroApps idea was born out of a requirement to release smaller chunks of more independent functionality without affecting the core platform, hence the name “MicroApps”. Like many technology breakthroughs, the thing it was intended to do becomes only a small part of the new world it opens up.

Bringing it all together

At the same time our lead software architect Mat Wall was formulating the MicroApp framework, the strategy for openness was forming our positioning and our approach to platforms:

…to weave the Guardian into the fabric of the Internet; to become ‘of’ the Web, not just ‘on’ the Web

The Content API is a great way to Open Out and make the Guardian meaningful in multiple environments. But we also knew that we had to find a way to Open In, or to allow relevant and interesting things going on across the Internet to be integrated sensibly within guardian.co.uk.

Similarly, the commercial team was looking to experiment with several media partners who are all thinking about engagement in new ways. What better way to engage 36M users than to offer fully functional apps directly on our domain?

The strategy, technology and business joined up perfectly. A tiered business model was born.

The model

Simon Willison was championing a lightweight keyless access level from the day we launched the Beta API. We tested keyless access with the Politics API, and we liked it a lot. So, that became the first access tier: Keyless.

We offered full content with embedded ads and analytics code in the next access level. We knew getting API keys was a pain. So, we approved keys automatically on signup. That defined the second tier: Approved.

Lastly, we combined unfettered access to all the content in our platform with the MicroApp framework for building apps on the Guardian network. We made this deep integration level available exclusively for people who will find ways to make money with us. That’s the 3rd tier: Bespoke. It’s essentially the same as working in the building with our dev team.
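The three tiers above amount to a simple capability matrix. The sketch below is purely illustrative; the feature names are hypothetical labels for the capabilities described in this section, not anything from the real Mashery configuration:

```python
# Illustrative capability matrix for the three access tiers described above.
TIERS = {
    "keyless":  {"requires_key": False, "full_content": False, "microapps": False},
    "approved": {"requires_key": True,  "full_content": True,  "microapps": False},
    "bespoke":  {"requires_key": True,  "full_content": True,  "microapps": True},
}

def allowed(tier, feature):
    """Check whether a given access tier grants a feature."""
    return TIERS[tier][feature]
```

The virtue of a matrix like this is that each tier is strictly a superset of the one below it, which keeps the upgrade path obvious for partners.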

We weren’t precisely clear on how we’d join these things up when we conceived the model. Not surprisingly, as we’ve seen over and over with this whole effort, our partners are the ones who are turning the ideas into reality. Mashery was already working on API access levels, and suddenly the last of our problems went away.

The tiers gave some tangible structure to our partner strategy. The model felt like it just started to create itself.

Now we have lots of big triangle diagrams (see below) and grids and magic quadrants and things that we can put into presentation slides that help us understand and communicate how the ecosystem works.

Officially opening for business

Given the important commercial positioning now, we decided that the launch event had to focus first and foremost on our media partners. We invited media agencies and clients into our offices. Adam Freeman and Mike Bracken opened the presentation. Matt Gilbert then delivered the announcement and gave David Fisher a chance to walk through a deep dive case study on the Enjoy England campaign.

There was one very interesting twist on the usual launch event idea which was a ‘Developer Challenge’. Several members of the development team spent the next 24 hours answering briefs given to us by the media partners at the event. It was run very much like a typical hack day, but the hacks were inspired by the ideas our partners are thinking about. Developer advocate Michael Brunton-Spall wrote up the results if you want to see what people built.

Here is the presentation we gave at the launch event:


(Had we chosen a day to launch other than the same day that Google threw a press release party I think you’d already know all this.)

Do the right thing

Of all the things that make this initiative as successful as it is, the thing that strikes me most is how engaged and supportive the executive team is. Alan Rusbridger, Carolyn McCall, Tim Brooks, Derek Gannon, Emily Bell, Mike and Adam, to name a few, are enthusiastic sponsors because this is the right thing to do.

They created a healthy environment for this project to exist and let everyone work out what it meant and how to do it together.

Alan articulated what we’re trying to do in the Cudlipp lecture earlier this year. Among other things, Alan’s framework is an understanding that our abilities as a major media brand and those of the people formerly known as the audience are stronger when unified than they are when applied separately.

Most importantly, we can afford to venture into open models like this one because we are owned by the Scott Trust, not an individual or shareholders. The organization wants us to support journalism and a free press.

“The Trust was created in 1936 to safeguard the journalistic freedom and liberal values of the Guardian. Its core purpose is to preserve the financial and editorial independence of the Guardian in perpetuity, while its subsidiary aims are to champion its principles and to promote freedom of the press in the UK and abroad.”

The Open Platform launch was a big day for me and my colleagues. It was a big day for the future of the Guardian. I hope people also see that it was a major milestone toward a brighter future for journalism itself.

Building communities from Twitter posts

I spent a little time over the last couple of weeks playing around with some Twitter data. I was noticing how several people, myself included, were sharing the funny things their kids say.

So then I wondered whether there was a way to capture, prioritize and then syndicate the best Twitter posts into a ‘kiddie quote of the day’ or something like that.

My experiment only sort of works, but there are some lessons here that may be useful for community builders out there. Here’s what I did:

  1. Get the quotes: I ran some searches through Twitter Search and collected the RSS feeds from those results to create the pool of content to use for the project. In this case, I used ‘daughter said‘ and ‘son said‘. I put those feeds into Yahoo! Pipes and filtered out any posts with swear words. Then I had a basic RSS feed of quotes to work with.
  2. Prioritize the quotes: I’m not sure the best way to prioritize a collection of sources and content, but the group voting method may do what you want. Jon Udell has another approach for capturing trusted sources using Del.icio.us. For voting, there’s an open source Digg clone called Pligg. I set it up on a domain at Dreamhost (I called it KidTwits…Dreamhost has a one-click Pligg installer that works great) and then pumped the RSS feed I just made into it. In no time I had a view into all the Twitter posts which were wrapped in all the typical social media features I needed (voting, comments, RSS, bookmarking, etc.).
  3. Resyndicate the quotes to Twitter: While you might be able to draw people into the web site, it made more sense in this case to be present where interested people might be socializing already. First, I created a Twitter account called KidTwits. Then I took a feed from the web site and sent it through an auto-post utility called twitterfeed. Now the KidTwits Twitter account gets updated when new posts bubble up to the home page of kidtwits.com.
  4. Link everywhere possible: When building the feed into Pligg I made sure that the Twitter ID of each post was captured. This then made it possible to “retweet” with their IDs intact. Thus, the source of the quote would then see the KidTwits posts in their Twitter replies. It works really well. People were showing up at the web site and replying to me on Twitter the same day I began the project.

    Again, I used Yahoo! Pipes to clean up and format the feed back out to Twitter to include the ‘RT’ and @userid prefix to each entry. I played around a bit before arriving at this format.

    I also included a Creative Commons copyright on all the pages of the web site to make sure the rights ownership issues were clear.

    Lastly, I added a search criterion to my feed collector that looks for references to KidTwits. This means people can post directly to the web site by adding @kidtwits or #kt to their posts. There was already a New Zealand Twitter community forming that used ‘kt’ to join their posts (short for kiwitweets), but they gave it up. I then had to filter out references to the kidtwits Twitter posts to avoid an infinite loop.

  5. Improve post quality: Now, here’s where things have been failing for me. I can’t think of better search terms to capture the pool of quotes I want, but there are so many extraneous Twitter posts using those words that it seems like I’m getting between 5% and 10% accuracy. Not bad, but certainly not good enough. The good news is that it’s pretty easy to kill the posts you don’t want through the Pligg interfaces. I just don’t have the time or desire to maintain that.
  6. Optimize the site: I then did a bunch of the little things that wrapped up the project. I added Google Analytics tracking, created a simple logo and favicon, customized the Twitter background, and configured Pligg to import the Twitter Search pipe automatically.
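Two of the steps above (the swear-word filter in step 1 and the retweet formatting in steps 3 and 4) can be sketched in a few lines. This is an illustrative reconstruction, not the actual Pipes/twitterfeed configuration, and it also exposes the truncation trade-off noted in the list of dislikes further down:

```python
def clean_feed(entries, banned_words):
    """Drop feed entries containing banned words, like the Pipes filter in step 1."""
    return [e for e in entries
            if not any(w in e["text"].lower() for w in banned_words)]

def format_retweet(author, text, limit=140):
    """Prefix 'RT @author ' and cut at Twitter's 140-character limit.

    The prefix eats into the limit, so the end of a long post (often
    the punchline) gets chopped off.
    """
    return "RT @{} {}".format(author, text)[:limit]
```

Running quotes through `format_retweet` makes it obvious why prepending “RT @userid” costs you the tail of the original post.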

There are several things I like and a few I dislike about this little project.

  • I really like the fluidity of Twitter’s platform. It’s amazingly easy to capture and resyndicate Twitter posts.
  • I love the effects of the @reply mechanism. I can essentially notify anyone who gets their Twitter post listed on the home page of kidtwits.com without lifting a finger. And they get credit automatically for their post.
  • I already knew this, but Yahoo! Pipes is just brilliant. I can’t imagine I would have even considered this project without it.
  • Pligg is pretty good, too. It does everything I want it to do.
  • I would love to hand over the management of the voting and quality checks to someone else. Voting naturally invites gaming. At the end of the day, however, the quality control and community management function is what makes a community service interesting to people. You can’t automate everything.
  • I’m actually not a fan of voting approaches to prioritizing content. It will ultimately result in dumbing down the quality. That’s less of an issue for highly niched topics like this, though.
  • The rights issues are a little weird. This wouldn’t be a problem for a community whose purpose is naturally noncommercial. But I’m not sure the Twitterverse would respond well to aggregators that make money off their posts without their knowledge or consent. (To be clear, KidTwits is not and never will be a commercial project…it’s just a fun experiment.)
  • Auto-retweeting feels a bit wrong. I wouldn’t be surprised if the KidTwits account gets banned. But I have explicitly included the source and clearly labeled each Twitter post with ‘RT’ to be clear about what I’m doing. I’m not driving traffic to my own account or the web site, and I’m not intentionally misrepresenting anything.
  • By adding “RT @userid” I’ve killed the first 10 or so characters of the post that I’m retweeting. This means the punchline is often dropped which kills the meaning of the retweeted post.
  • Some conversational Twitter posts get through which include @replies to another user. When the KidTwits retweet of that post goes out it’s very confusing.
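The prefix-truncation problem in those last bullets is easy to see in code. Here’s a hypothetical sketch using Twitter’s 140-character limit: the naive approach that drops the punchline, next to one possible workaround that trims from the front instead:

```python
LIMIT = 140  # Twitter's historical character limit

def naive_retweet(user: str, text: str) -> str:
    """Prefix 'RT @user: ' and chop at the limit -- this is what
    drops the punchline at the end of a long post."""
    return (f"RT @{user}: {text}")[:LIMIT]

def tail_preserving_retweet(user: str, text: str) -> str:
    """Alternative: keep the end of the post and trim the front,
    since the punchline usually lives in the final words."""
    prefix = f"RT @{user}: "
    room = LIMIT - len(prefix)
    body = text if len(text) <= room else "…" + text[-(room - 1):]
    return prefix + body

post = "x" * 135 + "funny"
print(naive_retweet("parentof3", post)[-5:])            # 'xxxxx' -- punchline gone
print(tail_preserving_retweet("parentof3", post)[-5:])  # 'funny'
```

Neither is lossless; either way, roughly the length of the “RT @userid: ” prefix gets sacrificed somewhere.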

The potential here, among other things, is in creating cohesive topical communities around what people are saying on Twitter. You can easily imagine thousands of communities forming in similar ways around highly focused interest areas.

In this method the community doesn’t necessarily have the typical collective or person-to-person dynamics to it, but the core Twitter account can act as a facilitator of connections. It can actually create some of the authority dynamics people have been wanting to see. It becomes a broker of contextually relevant connections.

In a very similar way the web site serves as a service broker or activity driver. It’s a functional tool for filtering and fine-tuning the community experience at the edge. The web site is not a destination but more of a dashboard or a control panel for the network.

The experiment feels very unfinished to me still. There’s much more that can be done to create better activity brokering dynamics across the network through the combination of a Twitter account and a web site, I’m sure.

Creating leverage at the data layer

There’s a reason that the world fully embraced HTTP but not Gopher or Telnet or even FTP. That’s because the power of the Internet is best expressed through the concept of a network, lots of interlinked pieces that make up something bigger rather than tunnels and holes that end in a destination.

The World Wide Web captured people’s imaginations, and then everything changed.

I was reminded of this while reading a recent interview with Tim Berners-Lee (via TechCrunch). He talked a bit about the power of linking data:

“Web 2.0 is a stovepipe system. It’s a set of stovepipes where each site has got its data and it’s not sharing it. What people are sometimes calling a Web 3.0 vision where you’ve got lots of different data out there on the Web and you’ve got lots of different applications, but they’re independent. A given application can use different data. An application can run on a desktop or in my browser, it’s my agent. It can access all the data, which I can use and everything’s much more seamless and much more powerful because you get this integration. The same application has access to data from all over the place…

Data is different from documents. When you write a document, if you write a blog, you write a poem, it is the power of the spoken word. And even if the website adds a lot of decoration, the really important thing is the spoken words. And it is one brain to another through these words.”

Data is what matters. It’s a point of interest in a larger context. It’s a vector and a launchpad to other paths. It’s the vehicle for leverage for a business on the Internet.

What’s the business strategy at the data layer?

I have mixed views on where the value is on social networks and the apps therein, but they are all showing where the opportunity is for services that have actually useful data. Social networks are a good user interface for distributed data, much like web browsers became a good interface for distributed documents.

But it’s not the data consumption experience that drives value, in my mind.

Value on the Internet is being created in the way data is shared and linked to more data. That value comes as a result of the simplicity and ease of access, in the completeness and timeliness, and by the readability of that data.

It’s not about posting data to a domain and figuring out how to get people there to consume it. It’s about being the best data source or the best data aggregator no matter how people make use of it in the end.

Where’s the money?

Like most Internet service models, there’s always the practice of giving away the good stuff for free and then upselling paid services or piggybacking revenue-generating services on the distribution of the free stuff. Chris Anderson’s Wired article on the future of business presents the case well:

“The most common of the economies built around free is the three-party system. Here a third party pays to participate in a market created by a free exchange between the first two parties…what the Web represents is the extension of the media business model to industries of all sorts. This is not simply the notion that advertising will pay for everything. There are dozens of ways that media companies make money around free content, from selling information about consumers to brand licensing, “value-added” subscriptions, and direct ecommerce. Now an entire ecosystem of Web companies is growing up around the same set of models.”

Yet these markets and technologies are still in very early stages. There’s lots of room for someone to create an open advertising marketplace for information, a marketplace where access to data can be obtained in exchange for ad inventory, for example.

Data providers and aggregators have a huge opportunity in this world if they can become authoritative or essential for some type of useful information. With that leverage they could have the social networks, behavioral data services and ad networks all competing to piggyback on their data out across the Internet to all the sites using or contributing to that data.

Regardless of the specific revenue method, the businesses that become a dependency in the Web of data of the future will also find untethered growth opportunities. The cost of that type of business is one of scale, a much more interesting place to be than one that must fight for attention.

I’ve never really liked the “walled garden” metaphor and its negative implications. I much prefer to think in terms of designing for growth.

Frank Lloyd Wright designed buildings that were engaged with the environments in which they lived. Similarly, the best services on the World Wide Web are those that contribute to the whole rather than compete with it, ones that leverage the strengths in the network rather than operate in isolation. Their existence makes the Web better as a whole.


Targeting ads at the edge, literally

Esther Dyson wrote about a really interesting area of the advertising market in an article for The Wall Street Journal.

She’s talking about user behavior data arbiters, companies that capture what users are doing on the Internet through ISPs and sell that data to advertisers.

These companies put tracking software between the ISP and a user’s HTTP requests. They then build dynamic and anonymous profiles for each user. NebuAd, Project Rialto, Phorm, Frontporch and Adzilla are among several companies competing for space on ISPs’ servers. And there’s no shortage of ad networks who will make use of that data to improve performance.

Esther gives an example:

“Take user number 12345, who was searching for cars yesterday, and show him a Porsche ad. It doesn’t matter if he’s on Yahoo! or MySpace today — he’s the same number as yesterday. As an advertiser, would you prefer to reach someone reading a car review featured on Yahoo! or someone who visited two car-dealer sites yesterday?”

Behavioral and demographic targeting is going to become increasingly important this year as marketers shift budgets away from blanket branding campaigns toward direct response marketing. Over the next few years advertisers plan to spend more on behavioral, search, geographic, and demographic targeting, in that order, according to Forrester. AdWeek has been following this trend:

“According to the Forrester Research report, marketer moves into areas like word of mouth, blogging and social networking will withstand tightened budgets. In contrast, marketers are likely to decrease spending in traditional media and even online vehicles geared to building brand awareness.”

We tried behavioral targeting campaigns back at InfoWorld.com with mild success using Tacoda. The main problem was traffic volume. Though performance was better than broad content-targeted campaigns, the target segments were too small to sell in meaningful ways. The idea of an open exchange for auctioning inventory might have helped, but at the time we had to sell what we called “laser targeting” in packages that started to look more like machine gun fire.

This “edge targeting” market, for lack of a better term, is very compelling. It captures data from a user’s entire online experience rather than just one web site. When you know what a person is doing right now you can make much more intelligent assumptions about their intent and, therefore, the kinds of things they might be more interested in seeing.

It’s important to emphasize that edge targeting doesn’t need to know anything personally identifiable about a person. ISPs legally can’t watch what known individuals are doing online, and they can’t share anything they know about a person with an advertiser. AdWeek discusses the issue of advertising data optimization in a report titled “The New Gold Standard”:

“As it stands now, consumers don’t have much control over their information. Direct marketing firms routinely buy and sell personal data offline, and online, ad networks, search engines and advertisers collect reams of information such as purchasing behavior and Web usage. Google, for instance, keeps consumers’ search histories for up to two years, not allowing them the option of erasing it.

Legalities, however, preclude ad networks from collecting personally identifiable information such as names and addresses. Ad networks also allow users to opt out of being tracked.”

Though a person is only identified as a number in edge targeting, that number is showing very specific intent. That intent, if profiled properly, is significantly more accurate than a single search query at a search engine.

I suspect this is going to be a very important space to watch in the coming years.

Ad networks vs ad exchanges

I spent yesterday at the Right Media Open event in Half Moon Bay at the Ritz Carlton Hotel.


Right Media assembled an impressive list of executives and innovators including John Battelle of Federated Media, David Rosenblatt of DoubleClick, Scott Howe of Microsoft, entrepreneur Steve Jenkins, Jonathan Shapiro of MediaWhiz, Ellen Siminoff of Efficient Frontiers, and Yahoo!’s own Bill Wise and the Right Media team including Pat McCarthy to name a few.

It was an intimate gathering of maybe 120 people.

Much of the dialog at the event revolved around ad exchange market dynamics and how ad networks differ from exchanges. DoubleClick’s Rosenblatt described the two as analogous to stock exchanges and hedge funds…there are a few large exchanges where everyone can participate and then there are many specialized networks that serve a particular market or customer segment. That seemed to resonate with people.

The day opened with a very candid dialog between Jerry Yang and IAB President Randall Rothenberg where Jerry talked about his approach to refocusing the company and his experiences at Yahoo! to date.

Battelle’s panel later in the afternoon was very engaging, as well. The respective leaders of the ad technology divisions at Yahoo! (Mike Walrath of Right Media), Microsoft (Scott Howe of Drivepm and Atlas) and Google (David Rosenblatt of DoubleClick) shared the stage and took questions from John who, as usual, didn’t hold back.

The panelists seemed to have similar approaches to the exchange market, though it seems clear that Right Media has a more mature approach, ironically due in large part to the company’s youth. Microsoft was touting its technology “arsenal”. And DoubleClick wasn’t afraid to admit that they were still testing the waters.

I also learned about an interesting market of middlemen that I didn’t know existed. For example, I spoke with a guy from a company called exeLate that serves as a user behavior data provider between a publisher and an exchange.

There were also ad services providers like Text Link Ads and publishers like Jim Mansfield’s PhoneZoo all discussing the tricky aspects of managing the mixture of inventory, rates and yield, relationships with ad networks, and the advantages of using exchanges.

I’ve been mostly out of touch with the ad technology world for too long.

Our advanced advertising technology experiments at InfoWorld such as behavioral targeting with Tacoda, O & O contextual targeting services like CheckM8, our own RSS advertising, lead generation and rich media experiences were under development about 3 years ago now.

This event was a great way to reacquaint myself with what’s going on out in the market starting at the top from the strategic business perspective. I knew ad exchanges were going to be hot when I learned about Right Media a year ago, but I’m even more bullish on the concept now.

The business of network effects

The Internet platform business has some unique challenges. It’s very tempting to adopt known models to make sense of it, like the PC business, for example, and think of the Internet platform like an operating system.

The similarities are hard to deny, and who wouldn’t want to control the operating system of the Internet?

In 2005, Jason Kottke proposed a vision for the “WebOS” where users could control their experience with tools that leveraged a combination of local storage and a local server, networked services and rich clients.

“Applications developed for this hypothetical platform have some powerful advantages. Because they run in a Web browser, these applications are cross platform, just like Web apps such as Gmail, Basecamp, and Salesforce.com. You don’t need to be on a specific machine with a specific OS…you just need a browser + local Web server to access your favorite data and apps.”

Prior to that post, Nick Carr offered a view on the role of the browser that surely resonated with the OS perspective for the Internet:

“Forget the traditional user interface. The looming battle in the information technology business is over control of the utility interface…Control over the utility interface will provide an IT vendor with the kind of power that Microsoft has long held through its control of the PC user interface.”

He also responded later to Kottke’s vision saying that the reliance on local web and storage services on a user’s PC may be unnecessary:

“Your personal desktop, residing entirely on a distant server, will be easily accessible from any device wherever you go. Personal computing will have broken free of the personal computer.”

But the client layer is merely a piece of the much larger puzzle, in my opinion.

Dare Obasanjo more recently broke down the different ideas of what “Cloud OS” might mean:

“I think it is a good idea for people to have a clear idea of what they are talking about when they throw around terms like “cloud OS” or “cloud platform” so we don’t end up with another useless term like SOA which means a different thing to each person who talks about it. Below are the three main ideas people often identify as a “Web OS”, “cloud OS” or “cloud platform” and examples of companies executing on that vision.”

He defines them as follows:

  1. WIMP Desktop Environment Implemented as a Rich Internet Application (The YouOS Strategy)
  2. Platform for Building Web-based Applications (The Amazon Strategy)
  3. Web-based Applications and APIs for Integrating with Them (The Google Strategy)

The OS metaphor has lots of powerful implications for business models, as we’ve seen on the PC. The operating system in a PC controls all the connections from the application user experience through the filesystem down through the computer hardware itself out to the interaction with peripheral services. Being the omniscient hub makes the operating system a very effective taxman for every service in the stack. And from there, the revenue streams become very easy to enable and enforce.

But the OS metaphor implies a command-and-control dynamic that doesn’t really work in a global network controlled only by protocols.

Internet software and media businesses don’t have an equivalent choke point. There’s no single processor or function or service that controls the Internet experience. There’s no one technology or one company that owns distribution.

There are lots of stacks that do have choke points on the Internet. And there are choke points that have tremendous value and leverage. Some are built purely and intentionally on top of a distribution point such as the iPod on iTunes, for example.

But no single distribution center touches all the points in any stack. The Internet business is fundamentally made of data vectors, not operational stacks.

Jeremy Zawodny shed light on this concept for me using building construction analogies.

He noted that my building contractor doesn’t exclusively buy Makita or DeWalt or Ryobi tools, though some tools make more sense in bundles. He buys whichever tool is best for the job at hand.

My contractor doesn’t employ plumbers, roofers and electricians himself. Rather he maintains a network of favorite providers who will serve different needs on different jobs.

He provides value to me as an experienced distribution and aggregation point, but I am not exclusively tied to using him for everything I want to do with my house, either.

Similarly, the Internet market is a network of services. The trick to understanding what the business model looks like is figuring out how to open and connect services in ways that add value to the business.

In a prescient viewpoint from 2002 about the Internet platform business, Tim O’Reilly explained why a company that has a large and valuable data store should open it up to the wider network:

“If they don’t ride the horse in the direction it’s going, it will run away from them. The companies that “grasp the nettle firmly” (as my English mother likes to say) will reap the benefits of greater control over their future than those who simply wait for events to overtake them.

There are a number of ways for a company to get benefits out of providing data to remote programmers:

Revenue. The brute force approach imposes costs both on the company whose data is being spidered and on the company doing the spidering. A simple API that makes the operation faster and more efficient is worth money. What’s more, it opens up whole new markets. Amazon-powered library catalogs anyone?

Branding. A company that provides data to remote programmers can request branding as a condition of the service.

Platform lock in. As Microsoft has demonstrated time and time again, a platform strategy beats an application strategy every time. Once you become part of the platform that other applications rely on, you are a key part of the computing infrastructure, and very difficult to dislodge. The companies that knowingly take their data assets and make them indispensable to developers will cement their role as a key part of the computing infrastructure.

Goodwill. Especially in the fast-moving high-tech industry, the “coolness” factor can make a huge difference both in attracting customers and in attracting the best staff.”

That doesn’t necessarily translate cleanly into traditional business models, but if you look at key business breakthroughs in the past, the picture today becomes more clear.

  1. The first breakthrough business model was based around page views. The domain created an Apple-like controlled container. Exposure to eyeballs was sold by the thousands per domain. All the software and content was owned and operated by the domain owner, except the user’s browser. All you needed was to get and keep eyeballs on your domain.
  2. The second breakthrough business model emerged out of innovations in distribution. By building a powerful distribution center and direct connections with the user experience, advertising could be sold both where people began their online experiences and at the various independent domain stacks where they landed. Inventory begets spending begets redistribution begets inventory…it started to look a lot like network effects as it matured.
  3. The third breakthrough business model seems to be a riff on its predecessors and looks less and less like an operating system. The next breakthrough is network effects.

Network effects happen when the value of the entire network increases with each node added to the network. The telephone is the classic example, where every telephone becomes more valuable with each new phone in the network.

This is in contrast to TVs which don’t care or even notice if more TVs plug in.
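The telephone example is often formalized as Metcalfe’s law: the number of possible connections grows roughly with the square of the number of nodes. A quick illustration:

```python
def potential_connections(n: int) -> int:
    """Number of distinct pairs in an n-node network: n * (n - 1) / 2."""
    return n * (n - 1) // 2

# Each new node adds more potential connections than the last one did.
for n in (2, 10, 100, 1000):
    print(n, potential_connections(n))
# 2 1
# 10 45
# 100 4950
# 1000 499500
```

A network of TVs, by contrast, stays flat: adding a set creates no new connections at all.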

Recommendation engines are the ultimate network effect lubricator. The more people shop at Amazon, the better their recommendation engine gets…which, in turn, helps people buy more stuff at Amazon.

Network effects are built around unique and useful nodes with transparent and highly accessible connection points. Social networks are a good example because they use a person’s profile as a node and a person’s email address as a connection point.

Network effects can be built around other things like keyword-tagged URLs (del.icio.us), shared photos (flickr), songs played (last.fm), news items about locations (outside.in).

The contribution of each data point wherever that may happen makes the aggregate pool more valuable. And as long as there are obvious and open ways for those data points to talk to each other and other systems, then network effects are enabled.

Launching successful network effect businesses is no easy task. The value a participant can extract from the network must be higher than the cost of adding a node to the network. The network’s purpose and its output must be indispensable to the node creators.

Massively distributed network effects require some unique characteristics to form. Value not only has to build with each new node, but the value of each node needs to increase as it gets leveraged in other ways in the network.

For example, my email address has become an enabler around the Internet. Every site that requires a login is going to capture my email address. And as I build a relationship with those sites, my email address becomes increasingly important to me. Not only is having an email address adding value to the entire network of email addresses, but the value of my email address increases for me with each service that is able to leverage my investment in my email address.

Then the core services built around my email address start to increase in value, too.

For example, when I turned on my iPhone and discovered that my Yahoo! Address Book was automatically baked right in without any manual importing, I suddenly realized that my Yahoo! Address Book has been a constant in my life ever since I got my first Yahoo! email address back in the ’90s. I haven’t kept it current, but it has followed me from job to job in a way that Outlook has never been able to do.

My Yahoo! Address Book is becoming more and more valuable to me. And my iPhone is more compelling because of my investment in my email address and my address book.

Now, if the network was an operating system, there would be taxes to pay. Apple would have to pay a tax for accessing my address book, and I would have to pay a tax to keep my address book at Yahoo!. Nobody wins in that scenario.

User data needs to be open and accessible in meaningful ways, and revenue needs to be built as a result of the effects of having open data rather than as a margin-based cost-control business.

But Dare Obasanjo insightfully exposes the flaw in reducing openness around identity to individual control alone:

“One of the bitter truths about “Web 2.0” is that your data isn’t all that interesting, our data on the other hand is very interesting…A lot of “Web 2.0” websites provide value to their users via wisdom of the crowds approaches such as tagging or recommendations which are simply not possible with a single user’s data set or with a small set of users.”

Clearly, one of the most successful revenue-driving opportunities in the networked economy is advertising. It makes sense that it would be since so many of the most powerful network effects are built on people’s profiles and their relationships with other people. No wonder advertisers can’t spend enough money online to reach their targets.

It will be interesting to see how some of the clever startups leveraging network effects such as Wesabe think about advertising.

Wesabe has built network effects around people’s spending behavior. As you track your finances and pull in your personal banking data, Wesabe makes loose connections between your transactions and other people who have made similar transactions. Each new person and each new transaction creates more value in the aggregate pool. You then discover other people who have advice about spending in ways that are highly relevant to you.
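One way to picture those loose connections is as overlap between users’ merchant sets. This sketch uses Jaccard similarity as the matching measure, which is my assumption for illustration and not Wesabe’s actual method; the users and merchants are made up:

```python
def jaccard(a: set, b: set) -> float:
    """Overlap between two users' merchant sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

merchants = {
    "me":    {"Netflix", "Whole Foods", "Shell"},
    "user7": {"Netflix", "Whole Foods", "Blockbuster"},
    "user9": {"Delta", "Hertz"},
}

# Rank other users by how similar their spending looks to mine;
# advice from the top of this list is the most relevant.
mine = merchants["me"]
peers = sorted(
    (u for u in merchants if u != "me"),
    key=lambda u: jaccard(mine, merchants[u]),
    reverse=True,
)
print(peers[0])  # user7
```

With enough users, the same idea surfaces aggregate patterns too, like “lots of Netflix customers are switching to Blockbuster.”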

I’ve been a fan of Netflix for a long time now, but when Wesabe showed me that lots of Netflix customers were switching to Blockbuster, I had to investigate and before long decided to switch, too. Wesabe knew to advise me based on my purchasing behavior which is a much stronger indicator of my interests than my reading behavior.

Advertisers should be drooling at the prospects of reaching people on Wesabe. No doubt Netflix should encourage their loyal subscribers to use Wesabe, too.

The many explicit clues about my interests I leave around the Internet — my listening behavior at last.fm, my information needs I express in del.icio.us, my address book relationships, my purchasing behavior in Wesabe — are all incredibly fruitful data points that advertisers want access to.

And with managed distribution, a powerful ad platform could form around these explicit behaviors that can be loosely connected everywhere I go.

Netflix could automatically find me while I’m reading a movie review on a friend’s blog or even at The New York Times and offer me a discount to re-subscribe. I’m sure they would love to pay lots of money for an ad that was so precisely targeted.

That blogger and The New York Times would be happy to share revenue back to the ad platform provider who enabled such precise targeting that resulted in higher payouts overall.

And I might actually come back to Netflix if I saw that ad. Who knows, I might even start paying more attention to ads if they started to find me rather than interrupt me.

This is why the Internet looks less and less like an operating system to me. Network effects look different to me in the way people participate in them and extract value from them, the way data and technologies connect to them, and the way markets and revenue streams build off of them.

Operating systems are about command-and-control distribution points, whereas network effects are about joining vectors to create leverage.

I know little about the mathematical nuances of chaos theory, but it offers some relevant philosophical approaches to understanding what network effects are about. Wikipedia addresses how chaos theory affects organizational development:

“Most of the focus on chaos theory is primarily rooted in the underlying patterns found in an otherwise chaotic environment, more specifically, concepts such as self-organization, bifurcation and self-similarity…

Self-organization, as opposed to natural or social selection, is a dynamic change within the organization where system changes are made by recalculating, re-inventing and modifying its structure in order to adapt, survive, grow and develop. Self-organization is the result of re-invention and creative adaptation due to the introduction of, or being in a constant state of, perturbed equilibrium.”

Yes, my PC is often in a state of ‘perturbed equilibrium’ but not because it wants to be.

Why Outside.in may have the local solution

The recent blog frenzy over hyperlocal media inspired me to have a look at Outside.in again.


It’s not just the high profile backers and the intense competitive set that make Outside.in worth a second look. There’s something very compelling in the way they are connecting data that seems like it matters.

My initial thought when it launched was that this idea had been done before too many times already. Topix.net appeared to be a dominant player in the local news space, not to mention similar but different kinds of local efforts at startups like Yelp and amongst all the big dotcoms.

And even from their strong position, Topix’s location-based news media aggregation model was kind of, I don’t know, uninteresting. I’m not impressed with local media coverage these days, in general, so why would an aggregator of mediocre coverage be any more interesting than what I discover through my RSS reader?

But I think Outside.in starts to give some insight into how local media could be done right…how it could be more interesting and, more importantly, useful.

The light went on for me when I read Jon Udell’s post on “the data finds the data”. He explains how data can be a vector through which otherwise unrelated people meet each other, a theme that continues to resonate for me.

Media brands have traditionally been good at connecting the masses to each other and to marketers. But the expectation of how directly people feel connected to other individuals by the media they share has changed.

Whereas the brand once provided a vector for connections, data has become the vehicle for people to meet people now. Zip code, for example, enables people to find people. So does marital status, date and time, school, music taste, work history. There are tons of data points that enable direct human-to-human discovery and interaction in ways that media brands could only accomplish in abstract ways in the past.

URLs can enable connections, too. Jon goes on to explain:

“On June 17 I bookmarked this item from Mike Caulfield… On June 19 I noticed that Jim Groom had responded to Mike’s post. Ten days later I noticed that Mike had become Jim’s new favorite blogger.

I don’t know whether Jim subscribes to my bookmark feed or not, but if he does, that would be the likely vector for this nice bit of manufactured serendipity. I’d been wanting to introduce Mike at KSC to Jim (and his innovative team) at UMW. It would be delightful to have accomplished that introduction by simply publishing a bookmark.”

Now, Outside.in allows me to post URLs much like one would do in Newsvine, Digg or any number of other collaborative citizen media services. But Outside.in leverages the zip code data point as the topical vector rather than a set of predetermined one-size-fits-all categories. It then allows miscellaneous tagging to be the subservient navigational pivot.

Suddenly, I feel like I can have a real impact on the site if I submit something. If there’s anything near a critical mass of people in the 94107 zip code on Outside.in then it’s likely my neighbors will be influenced by my posts.

Fred Wilson of Union Square Ventures explains:

“They’ve built a platform that placebloggers can submit their content to. Their platform “tags” that content with a geocode — an address, zip code, or city — and that renders a new page for every location that has tagged content. If you visit outside.in/10010, you’ll find out what’s going on in the neigborhood around Union Square Ventures. If you visit outside.in/back_bay, you’ll see what’s going on in Boston’s Back Bay neighborhood.”

Again, the local online media model isn’t new. In fact, it’s old. CitySearch in the US and UpMyStreet in the UK proved years ago that a market does in fact exist in local media somewhere, somehow, but the market always feels fragile and susceptible to ghost town syndrome.

Umair Haque explains why local is so hard:

“Why doesn’t Craigslist choose small towns? Because there isn’t enough liquidity in the market. Let me put that another way. In cities, there are enough buyers and sellers to make markets work – whether of used stuff, new stuff, events, etc, etc.

In smaller towns, there just isn’t enough supply or demand.”

If they commit to building what are essentially micro media brands based exclusively on location, I suspect Outside.in will run itself into the ground spending money to establish critical mass in every neighborhood around the world.

Now that they have a nice micro media approach that seems to work they may need to start thinking about macro media. In order to reach the deep dark corners of the physical grid, they should connect people in larger contexts, too. Here’s an example of what I mean…

I’m remodeling the Potrero Hill shack we call a house right now. It’s all I talk about outside of work, actually. And I need to understand things like how to design a kitchen, ways to work through building permits, and who can supply materials and services locally for this job.

There must be kitchen design experts around the world I can learn from. Equally, I’m sure there is a guy around the corner from me who can give me some tips on local services. Will Architectural Digest or Home & Garden connect me to these different people? No. Will The San Francisco Chronicle connect us? No.

Craigslist won’t even connect us, because that site is so much about the transaction.

I need help both from people who can connect on my interest vector in addition to the more local geographic vector. Without fluid connections on both vectors, I’m no better off than I was with my handy RSS reader and my favorite search engine.

Looking at how they’ve decided to structure their data, it seems Outside.in could pull this off and connect my global affinities with my local activities pretty easily.

This post is way too long already (sorry), but it’s worth pointing out some of the other interesting things they’re doing if you care to read on.

Outside.in is also building automatic semantic links with the contributors’ own blogs. By including my zip code in a blog post, Outside.in automatically drinks up that post and adds it into the pool. They even re-tag my post with the correct geodata and offer GeoRSS feeds back out to the world.

Here are the instructions:

“Any piece of content that is tagged with a zip code will be assigned to the corresponding area within outside.in’s system. You can include the zip code as either a tag or a category, depending on your blogging platform.”
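Outside.in’s actual ingestion pipeline isn’t public, but the mechanism described above is easy to picture. Here’s a rough sketch, assuming feed entries arrive as simple title-plus-tags records once the blog’s RSS or Atom feed has been parsed (the function and field names are hypothetical):

```python
import re

# A 5-digit US zip code used as a tag or category, per the instructions above.
ZIP_RE = re.compile(r"^\d{5}$")

def assign_posts_to_areas(posts):
    """Group blog posts by any zip-code tag or category they carry.

    `posts` is a stand-in for entries parsed from a contributor's feed:
    a list of dicts with 'title' and 'tags' keys.
    """
    areas = {}
    for post in posts:
        for tag in post.get("tags", []):
            if ZIP_RE.match(tag):
                # The post gets assigned to the corresponding area.
                areas.setdefault(tag, []).append(post["title"])
    return areas

posts = [
    {"title": "Remodeling the kitchen", "tags": ["94107", "remodeling"]},
    {"title": "Back Bay farmers market", "tags": ["02116", "events"]},
    {"title": "No location here", "tags": ["musings"]},
]

print(assign_posts_to_areas(posts))
# {'94107': ['Remodeling the kitchen'], '02116': ['Back Bay farmers market']}
```

A real system would of course need to handle more than bare 5-digit codes, since Outside.in also accepts addresses, city names and neighborhood slugs like back_bay, and it re-publishes the results as GeoRSS rather than a plain dictionary.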

I love this.

30Boxes does something similar where I can tell it to collect my Upcoming data, and it automatically imports events as I tag them in Upcoming.

They are also recognizing local contributors and shining light on them with prominent links. I can see who the key bloggers are in my area and perhaps even get a sense of which ones matter, not just who posts the most. I’m guessing they will apply “people who like this contributor also like this contributor” logic to personalize the experience for visitors at some point.

Now what gets me really excited is to think about the ad model that could happen in this environment of machine-driven semantic relationships.

If they can identify relevant blog posts from local contributors, then I’m sure they could identify local coupons from good sources of coupon feeds.

Let’s say I’m the national Ace Hardware marketing guy, and I publish a feed of coupons. I might be able to empower all my local Ace franchises and affiliates to publish their own coupons for their own areas and get highly relevant distribution on Outside.in. Or I could also run a national coupon feed with zip code tags cooked into each item.
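To make that concrete, here’s a minimal, hypothetical sketch of the national-feed-with-zip-tags idea: each coupon item in the feed carries the zip codes it applies to, and the platform filters the feed per locality (all names and data here are invented for illustration):

```python
def coupons_for_zip(feed_items, zip_code):
    """Return coupon offers from a national feed that apply to one zip code.

    `feed_items` is a stand-in for parsed items from a hypothetical
    national coupon feed where each item is tagged with zip codes.
    """
    return [item["offer"] for item in feed_items
            if zip_code in item.get("zips", [])]

feed = [
    {"offer": "10% off paint", "zips": ["94107", "94110"]},
    {"offer": "Free key cutting", "zips": ["10010"]},
]

print(coupons_for_zip(feed, "94107"))
# ['10% off paint']
```

The appeal of this shape is that the same feed serves both cases in the paragraph above: local franchises publish their own zip-tagged items, or the national marketer cooks the zip tags into a single feed, and distribution falls out of the tagging either way.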

To Umair’s point, that kind of marketing will only pay off in major metros where the markets are stronger.

To help address the inventory problem, Outside.in could then offer to sell ad inventory on their contributors’ web sites. As an Outside.in contributor, I would happily run coupons from Center Hardware, my local Ace affiliate, on my blog posts about my remodeling project if someone gave them to me in some automated way.

If they do something like this then they will be able to serve both the major metros and the smaller hot spots that you can never predict will grow. Plus, the incentives for the individuals in the smaller communities start feeding the wider ecosystem that lives on the Outside.in platform.

Outside.in would be pushing leverage out to the edge both in terms of participation as they already do and in terms of revenue generation, a fantastic combination of forces that few media companies have figured out, yet.

I realize there are lots of ‘what ifs’ in this assessment. The company has a lot of work to do before they break through, and none of it is easy. The good news for them is that they have something pretty solid that works today despite a crowded market.

Regardless, knowing Fred Wilson, Esther Dyson, John Seely Brown and Steven Berlin Johnson are behind it, among others, no doubt they are going to be one to watch.

Thinking about media as a platform

Back in my InfoWorld days (2004-ish?) I somehow woke up to the idea that media could be a platform.1 Whereas my professional media experience prior to that was all about creating user experiences that resulted in better page views and conversions, something changed in the way I perceived how online media was supposed to work.

I didn’t have language to use for it at the time (still working on it, actually), but I knew it wasn’t inspired by the “openness” and “walled garden” metaphors so much. Neither concept reflected the opportunity for me. Once I saw the opportunity, though, the shift happening in online media seemed much much bigger.

In a presentation at the Bioneers conference back in August 2000 (below), architect William McDonough talked about designing systems that leverage nature’s strengths for mutually beneficial growth rather than for conservation or merely sustainability.

He tells us to design with positive results in mind instead of merely using less-bad materials.

Similarly, the implications around the “openness” and “walled garden” concepts get clouded by the tactical impressions those words draw for someone who has unique assets in the media business.

It’s not about stopping bad behavior or even embracing good behavior. It’s about investing in an architecture that promotes growth for an entire ecosystem. If you do it right, you will watch network effects take hold naturally. And then everyone wins.

When you look around the Internet media landscape today you see a lot of successful companies that either consciously or subconsciously understand how to make media work as a platform. MySpace created a fantastic expression platform, though perhaps unwittingly. Wikipedia evolved quickly into a massive research platform. Flickr and del.icio.us, of course, get the network effects inherent in sharing information…photos and links, respectively. Washingtonpost.com and BBC Backstage are moving toward national political information platforms. Last.fm is a very successful music listening platform, if not one of the most interesting platforms of them all.

All of these share a common approach. At a simple level, the brand gets stronger the further their data and services reach outside of their domain and into the wider market.

But the most successful media platforms are the ones that give their users the power to impact the experience for themselves and to improve the total experience for everyone as they use it.

My commitment to flickr, del.icio.us and last.fm gets deeper and deeper the more I’m able to apply them in my online lifestyle wherever that may be. We have a tangible relationship. And I have a role in the wider community, even if only a small part, and that community has a role in my experience, too.

The lesson is that it’s not about the destination — it’s about the relationship. Or, if you like the Cluetrain language, it’s about the conversation, though somehow “relationship” seems more meaningful than “conversation” to me. Ask any salesperson whether they’d prefer to have a relationship or a conversation with a potential customer.

Ok, so user engagement can extend outside a domain. Where’s the opportunity in that?

Very few media platforms know how to leverage their relationships to connect buyers and sellers and vice versa. They typically just post banner ads or text links on their sites and hope people click on them. Creating a fluid and active marketplace that can grow is about more than relevant advertising links.

Amazon created an incredibly powerful marketplace platform, but they are essentially just a pure play in this space. They are about buying and selling first and foremost. Relationships on their platforms are transactional.

Media knows how to be more than that.

eBay and Craigslist get closer to colliding the buying/selling marketplace with deeper media experiences. People build relationships in micromarkets, but again it’s all about a handshake and then good riddance on eBay and Craigslist.

Again, media knows how to be more than that.

The big opportunity in my mind is in applying the transactional platform concept within a relationship-building environment.

A more tangible example, please…?


Washingtonpost.com is an interesting case, as they have been more aggressive than most traditional media companies in terms of “openness”. They have data feeds for all of their content. And they have an amazing resource in the U.S. Congress Votes Database, a feed of legislative voting records sliced in several different ways. For example, you can watch what legislation Nancy Pelosi votes on and how she votes.

Unfortunately, everything Washingtonpost.com offers is read-only. You can pull information from Washingtonpost.com, but you can’t contribute to it. You can’t serve the wider Washingtonpost.com community with your additions or edits. You can’t engage with other Washingtonpost.com community members in meaningful ways.

Washingtonpost.com thinks of their relationship with you in a one-to-many way. They are one, and you are one of many.

Instead, they should think of themselves as the government data platform. Every citizen in the US should be able to feed data about their local government into the system, and the wider community should be able to help edit and clean community-contributed data (or UGC for you bizdev folks).

For example, I recently spent some time investigating crime data and how that gets shared or not shared in various local communities. Local citizens could provide a very powerful resource if they were empowered to report crime in meaningful ways on the Internet.

Washingtonpost.com is as well suited as anyone to provide that platform.

Now, imagine the opportunity for Washingtonpost.com if people around the US were reporting, editing and analyzing local crime data from Washingtonpost’s platform. They would become a critical source of national information and news across the country. Washingtonpost.com would be well poised to be the primary source of any type of government-related information.

The money would soon follow.

As a result of becoming essential in the ecosystem of local and national citizen data, they would expand their advertising possibilities exponentially. They could create an ad platform (or partner with one) that is tuned particularly for their ecosystem. Then any number of services could start forming around the combination of their data platform and their ad platform.


You can imagine legal services, security, counseling and financing services wanting to reach directly into my local Potrero Hill crimewatch community. The marketplace would probably be very fluid where people are recommending services and providers are helping the community as a whole as a way to build relationships.

Washingtonpost could sit behind all these services, powering the data and taking a cut of all the advertising.

Again, it’s not just about being “open” or taking down the “walled garden”.

The “openness” and “walled garden” concepts, which often turn into accusations, feel more like objectives than strategic directions. If “openness” were the goal, then offering everything as RSS would be the game.

No, RSS is just step one. The media platform game is much more than that.

It’s about both being a part of the larger Internet ecosystem and understanding how to grow and design a future that benefits lots of different constituents. You can be a source in someone else’s platform, a vehicle within a wider networked platform and a hub at the center of your own ecosystem all at the same time.

I would never claim this stuff is easy, as I certainly failed to make that happen while at InfoWorld. The first place to start, in my opinion, is to stop worrying about “openness” and “walled gardens”. Those are scary ideas that don’t necessarily inspire people to build or participate in growing ecosystems.

Instead, it’s important to understand “network effects” and “platforms”. Once you understand how media can be a platform, the world of opportunity will hopefully start to look a lot bigger, as big as the Internet itself, if not even bigger than that.

It’s at that point that you may wonder why you would pursue anything else.

1 It shouldn’t be surprising that my thinking changed while surrounded by thinkers like Jon Udell, Steve Gillmor, and Steve Fox, to name a few, who all waved the web services flag and sang the software-as-a-service song before many of the leading IT efforts at some of the most innovative companies could put those words into coherent sentences. Those concepts can apply to lots of markets, media among them.

Ziff Davis sells its mission statement

Paul Conley rants on Ziff Davis for their latest breach of journalistic ethics. They’ve gone so far as to sell the very text of their editorial mission statement to advertisers. Check out this screenshot:

“If you want to see the single most ridiculous, most offensive, most disgusting and dimwitted thing in the entire history of B2B publishing, then take a look at the Editorial Mission statement of Baseline magazine — the Editorial Mission statement, for god’s sake!!! — where ads have been inserted in the copy.”

Rex Hammock agrees and clarifies his distaste for the intelliTXT model:

“I’m not even opposed to having clearly marked advertising or sponsored content that is interspersed with editorial content. The practice that Paul (and I) oppose is the hidden nature of hyperlinked-text advertising…This is a slippery slope.”

It’s just unbelievable what people will do to their future to get an extra dollar today.