Eternal transitions

Market transitions in digital media can be absolutely fascinating both as a bystander and as a participant.

I’ve been on both the front foot and the back foot as part of media businesses trying to lead, fast-follow, steer away from or kill different technologies and services that come and go.

Transitioning seems to be part of being in this business, something you’re always doing, not the end game.

There are a few patterns I’ve observed, but I’m hoping some clever business-model historian will do some research and really nail down how it works.

There are job training issues that people face. Remember all those Lotus Notes specialists? Organizational issues. How about when the tech teams finally let editors control the web site home page? Leadership issues. It wasn’t until about 2003 that media CEOs started talking openly about the day when their Internet businesses would overtake their traditional businesses. Technology strategies. Investing in template-driven web sites was once a major decision. Etc.

The mobile wave we’re in right now shares all the same issues for people who are in the business of running web sites. Re-educate your developers or hire new ones? Should mobile be a separate business; should it be added to the queue for the tech team; should the editors or product managers manage it? Is mobile-first a subset of digital-first or does it mean something more profound than that? Responsive, native, both? What’s the mobile pureplay for what you already do?

Media organizations have become so diverse over the last several years that they can easily get caught thinking they can do all of the above – a hedge-your-bets strategy of investing lightly in every aspect of the transition. While that strategy has drawbacks, it is definitely better than the hide-and-hope or cut-til-you-profit strategy.

The most interesting part of this story is the anomalies, those moments of clarity that signal to the rest of us what is happening.

For example, everyone who dismissed the Newton and then the PalmPilot as failures missed the point. These were anomalies in a world dominated by Microsoft and the PC. Their makers understood computing was going to be in people’s pockets, and they were right to execute on that idea.

What they failed to get right was the timing of the market transition, and timing is everything.

(Harry McCracken’s retrospective review of the Newton is a fun read.)

So, when is an anomaly worth noticing? And when is the right time to execute on the new model?

Google has been a great example of both in many ways. They cracked one of the key business models native to the Internet at just the right time…they weren’t first or even best, but they got it right when it mattered. Android is another example.

But social has been one challenge after another. They ignored the anomaly that was Twitter and Facebook (and Orkut!) and then executed poorly over and over again.

They don’t want to be behind the curve ever again and are deploying some market tests around the ubiquitous, wearable network – Google Glass.

But if they are leading on this vision of the new way, how do they know, and, importantly, how do other established businesses know that this is the moment the market shifts?

I’m not convinced we’ve achieved enough critical mass around the mobile transition to see the ubiquitous network as a serious place to operate.

The pureplay revenue models in the mobile space are incomplete. The levels of investment being made in mobile products and services are still growing fast. The biggest catalysts of commercial opportunity are not yet powerful enough to warrant total reinterpretations of legacy models.

The mobile market is in its high-growth stage. That needs to play out before the next wave will get broader support.

The ubiquitous network is coming, but the fast train to success, a.k.a. ‘mobile’, has arrived and left the station, and everyone’s on board.

Is Google Glass this era’s Newton? Too early? Dorky rather than geeky-cool? Feature-rich yet brilliant at nothing in particular? Too big?

They’ve done a brilliant job transitioning to the mobile era. And you have to give them props for trying to lead on the next big market shift.

Even if Google gets it wrong with Google Glass (see Wired’s commentary) they are becoming very good at being in transition. If that is the lesson they learned from their shortcomings in ‘social’ then they may have actually gained more by doing poorly than if they had succeeded.

Orchestrating streams of data from across the Internet

The liveblog was a revelation for us at the Guardian. The sports desk had been doing them for years, experimenting with different styles, methods and tone. Then, about three years ago, the news desk started using them liberally to great effect.

I think it was Matt Wells who suggested that perhaps the liveblog was *the* network-native format for news. I think that’s nearly right…though it’s less the ‘format’ of a liveblog than the activity powering the page that demonstrates where news editing in a networked world is going.

It’s about orchestrating the streams of data flowing across the Internet into a compelling use in one form or another. One way to render that data is the liveblog. Another is a map with placemarks. Another is an RSS feed. A stream of tweets. Storify. Etc.

I’m not talking about Big Data for news. There is certainly a very hairy challenge in big data investigations and intelligent data visualizations to give meaning to complex statistics and databases. But this is different.

I’m talking about telling stories by playing DJ to the beat of human observation pumping across the network.

We’re working on one such experiment with a location-tagging tool we call FeedWax. It creates location-aware streams of data for you by looking across various media sources including Twitter, Instagram, YouTube, Google News, Daylife, etc.

The idea with FeedWax is to unify various types of data through shared contexts, beginning with location. These sources may only have a keyword to join them up or perhaps nothing at all, but when you add location they may begin sharing important meaning and relevance. The context of space and time is natural connective tissue, particularly when the words people use to describe something may vary.
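To make that concrete, here’s a hypothetical sketch of the join (the data shapes are invented for illustration, and FeedWax’s internals surely differ): two items from different services, with no keywords in common, can be treated as related purely because they sit close together in space and time.

// Hypothetical: items from any feed normalized to { source, lat, lon, time }
function kmBetween(a, b) {
  var toRad = function (deg) { return deg * Math.PI / 180; };
  var R = 6371; // Earth radius in km (haversine formula)
  var dLat = toRad(b.lat - a.lat);
  var dLon = toRad(b.lon - a.lon);
  var h = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Two observations are related if captured within 2 km and 1 hour of each other.
function related(a, b) {
  var hoursApart = Math.abs(a.time - b.time) / 36e5;
  return kmBetween(a, b) < 2 && hoursApart < 1;
}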

We’ve been conducting experiments in orchestrated stream-based and map-based storytelling on n0tice for a while now. When you start crafting the inputs with tools like FeedWax you have what feels like a more frictionless mechanism for steering the flood of data that comes across Twitter, Instagram, Flickr, etc. into something interesting.

For example, when the space shuttle Endeavour flew its last flight and subsequently labored through the streets of LA there was no shortage of coverage from on-the-ground citizen reporters. I’d bet not one of them considered themselves a citizen reporter. They were just trying to get a photo of this awesome sight and share it, perhaps getting some acknowledgement in the process.

You can see the stream of images and tweets here: http://n0tice.com/search?q=endeavor+OR+endeavour. And you can see them all plotted on a map here: http://goo.gl/maps/osh8T.

Interestingly, the location of the photos gives you a very clear picture of the flight path. This is crowdmapping without requiring that anyone do anything they wouldn’t already do. It’s orchestrating streams that already exist.

This behavior isn’t exclusive to on-the-ground reporting. I’ve got a list of similar types of activities in a blog post here which includes task-based reporting like the search for computer scientist Jim Gray, the use of Ushahidi during the Haiti earthquake, the Guardian’s MPs Expenses project, etc. It’s also interesting to see how people like Jon Udell approach this problem with other data streams out there such as event and venue calendars.

Sometimes people refer to the art of code and code-as-art. What I see in my mind when I hear people say that is a giant global canvas in the form of a connected network, rivers of different colored paints in the form of data streams, and a range of paint brushes and paint strokes in the form of software and hardware.

The savvy editors in today’s world are learning from and working with these artists, using their tools and techniques to tease out the right mix of streams to tell stories that people care about. There’s no lack of material or tools to work with. Becoming network-native sometimes just means looking at the world through a different lens.

Calling your web site a ‘property’ deprives it of something bigger

The BBC offered another history-of-London documentary the other night, a sort of people’s perspective on how the character of the city has changed over time, obviously inspired by Danny Boyle’s Opening Ceremony at the Olympics.

Some of the sequences were interesting to me particularly as a foreigner – the gentrification of Islington, the anarchist squatters in Camden, the urbanization of the Docklands, etc.  – a running theme of haves vs have-nots.

It’s one of a collection of things inspiring me recently, including a book called ‘The Return of the Public’ by Dan Hind, a sort of extension to the Dewey v Lippmann debates; what’s going on with n0tice, such as Sarah Hartley’s adaptation for it called Protest Near You and the dispatch-o-rama hack; and, of course, the Olympics.

I’m becoming reinvigorated and more bullish on where collective action can take us.

At a more macro level these things remind me of the need to challenge the many human constructs and institutions that are reflections of the natural desire to claim things and own them.

Why is it so difficult to embrace a more ‘share and share alike’ attitude?  This is as true for children and their toys as it is for governments and their policies.

The bigger concern for me, of course, is the future of the Internet and how media and journalism thrive and evolve there.

Despite attempts by its founders to shape the Internet so it can’t be owned and controlled, there are many who have tried to change that both intentionally and unwittingly, occasionally with considerable success.

How does this happen?

We’re all complicit.  We buy a domain. We then own it and build a web site on it. That “property” then becomes a thing we use to make money.  We fight to get people there and sell them things when they arrive.  It’s the Internet-as-retailer or Internet-as-distributor view of the world.

That’s how business on the Internet works…or is it?

While many have made that model work for them, it’s my belief that the property model is never going to be as important or meaningful or possibly as lucrative as the platform or service model over time. More specifically, I’m talking about generative media networks.

There are a few different ways of visualizing this shift in perspective.

Even if it works commercially, the property model is always going to be in conflict with the Internet-as-public-utility view of the world.

Much like Britain’s privately owned public spaces issue, many worry that the Internet-as-public-utility will be ruined or, worse, taken from us over time by commercial and government interests.

Playing a zero sum game like that turns everyone and everything into a threat.  Companies can be very effective at fighting and defending their interests even if the people within those companies mean well.

I’m an optimist in this regard.  There may be a pendulum that swings between “own” and “share”, and there are always going to be fights to secure public spaces.  But you can’t put the Internet genie back in the bottle.  And even if you could it would appear somewhere else in another form just as quickly…in some ways it already has.

The smart money, in my mind, is where many interests are joined up regardless of their individual goals, embracing the existence of each other in order to benefit from each other’s successes.

The answer is about cooperation, co-dependency, mutualisation, openness, etc.

We think about this a lot at the Guardian. I recently wrote about how it applies to the recent Twitter issues here. And Chris Thorpe’s presentation from back in 2009 on how to apply it to the news business is wonderful.

Of course, Alan Rusbridger’s description of a mutualised newspaper in this video is still one of the strongest visions I’ve heard for a collaborative approach to media.

The possibility of collective action at such an incredible scale is what makes the Internet so great.  If we can focus on making collective activities more fruitful for everyone then our problems will become less about haves and have-nots and more about ensuring that everyone participates.

That won’t be an easy thing to tackle, but it would be a great problem to have.

The importance of co-dependency on the Internet

The thing that still blows my mind about the Internet is that through all the dramatic changes over the last 2 or 3 decades it remains a mostly open public commons that everyone can use.

There are many land ownership battles happening all around it. But it has so far withstood challenges to its shape, its size, its governance and its role in all aspects of our lives.

Will that always be the case? Or are too many special interests with unbalanced power breaking down the core principles that made it our space, a shared space owned by nobody and available to everybody?

It used to be that corporate interests were aligned with public interests on the Internet.

Many early pioneers thrived because they recognized how to create and earn value across the network, in the connection between things, the edges (read more on Graph Theory). They succeeded by developing services that offered unfettered and easy access to something useful, often embracing some sort of sharing model for partners powered by revenue models built on network effects.

These innovators were dependent on open and public participation and a highly distributed network adhering to common standards such as HTTP and HTML.

The public and private interests were aligned. Everyone’s a winner!

However, hypertext was designed for connecting text, not for connecting people. The standards designed for communication such as SMTP and later XMPP were fine, but no standard for connecting people to other people achieved global adoption.

Even backing from the web’s founder for a standard called FOAF, and a Bill of Rights for Social Networking Users endorsed by Arrington and Scoble, failed to lift the movement for standards in social connections.

Without a standard the entrepreneurs got busy coming up with solutions, and, eventually, Facebook and Twitter were born. Like their predecessors, they recognized how to create and earn value in the connections between things, in this case, a network of people, a privately owned social graph.

But the social graph that they created is not an open, owner-less, public commons.

Facebook is interested in Facebook’s existence and has no interest in FOAF or XMPP. Industry standards are to be ignored at Facebook (and Apple, Microsoft and sometimes Google, too) unless they help to acquire customers or to generate PR.

Governance is held privately in these spaces. They consult with their customers, but they write their own privacy policies and are held accountable to nobody but public opinion and antiquated legal frameworks around the world.

Now, it’s a very appealing idea to think about an open and public alternative to the dominant social networks. Many have tried, including StatusNet and Diaspora. And more challengers are on the way.

But there are harder problems to solve here that I think matter more than that temporary band-aid.

  • We need to know why the industry failed to adopt a global standard for social networking. Will we go through this again as a market forms inevitably around a digital network of real-world things? Will it repeat again when we learn how to connect raw data, too?
  • Can the benefits (and therefore the incentives) of ensuring contributions are owned and controlled by a platform’s contributors in perpetuity be made more commercially viable and legally sensible? Are there ways to support those benefits globally?
  • In what ways can those who are adversely affected by centralized control of social activity hold those forces to account? Where is the balance of power coming from, and how can it be amplified to be an equal force?

Much like the framers of the U.S. Constitution, the very clever folks who codified open access to this public space we call the Internet did so in ways that effectively future-proofed it. They structured it to reward individual contributions that also benefit the wider community.

Instead of pen and paper, the Internet’s founders used technology standards to accomplish this goal.

While it seems very doable to build a successful and lasting digital business that embraces these ideals, the temptation of money and power, the unrelenting competitive pressures, and weak incentives to collaborate will steer many good intentions in bad directions over time.

I don’t think Facebook was ever motivated to support an open public commons, so it’s no surprise when they threaten such a thing. But Twitter somehow seemed different.

At the moment, it seems from outside Twitter HQ that the company is being crushed by the weight of all these things and making some bad choices over small issues that will have big impact over time.

It’s not what Facebook and Twitter do that really matters, though.

The greater concern, in my opinion, is that the types of people who originally created this global network for the rest of us to enjoy aren’t being heard or, worse, not getting involved at all anymore.

I’m not suggesting we need more powerful bureaucratic standards bodies to slow things down. Arguably, these increasingly powerful platforms are already crushing competition and squeezing out great innovations. We don’t need more of that. Stifling innovation always has disastrous effects.

What I’m suggesting is that more people will benefit in more ways over a longer period of time if someone can solve this broader organizational problem – the need to codify open access, shared governance, and other future-proofing tactics for public spaces in whatever shape they take.

Social networking is threatening the open public network today. Something else will threaten it tomorrow.

We need to work out what can be done today to fuel and secure an ongoing and healthy co-dependency between our public spaces and commercial interests.

An open community news platform: n0tice.com

For the last several weeks I’ve been working on a new project, a SoLoMo initiative, as John Doerr or Mary Meeker would call it.

(Noticeboard photo by Jer*ry)

It’s a mobile publishing platform that resembles a community notice board.  It’s called n0tice*:

http://n0tice.com.

After seeing Google’s “News near you” service announced on Friday I thought it was a good time to jump into the conversation and share what I’m up to.  Clearly, there are a lot of people chasing the same or similar issues.

First, here’s some background.  Then I’ll detail what it does, how it works, and what I hope it will become.

What is n0tice?

It began as a simple hack day project over a year ago.  I was initially just curious about how location worked on the phone.  At first I thought that was going to be beyond me, and then Simon Willison enlightened me to the location capabilities inherent in modern web browsers. There are many solutions published out there.
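Here’s a minimal sketch of one, using the geolocation API that’s standard in modern browsers (a generic illustration, not the exact code n0tice uses):

// Ask the browser for the user's position; the user will be prompted for permission.
if (navigator.geolocation) {
  navigator.geolocation.getCurrentPosition(
    function (position) {
      var lat = position.coords.latitude;
      var lon = position.coords.longitude;
      // e.g. fetch and render nearby posts for these coordinates
      console.log('Near ' + lat + ', ' + lon);
    },
    function (error) {
      console.log('Could not read location: ' + error.message);
    }
  );
}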

It took half a second to go from working out how to identify a user’s location to realizing that this feature could be handy for citizen reporters.

Around the same time there was a really interesting little game called noticin.gs going around which was built by Tom Taylor and Tom Armitage, two incredibly talented UK developers.  The game rewarded people for being good at spotting interesting things in the world and capturing a photo of them.

Ushahidi was tackling emergency response reporting. And, of course, Foursquare was hitting its stride then, too.

These things were all capturing my imagination, and so I thought I would try something similar in the context of sharing news, events and listings in your community.

(Photo by Roo Reynolds)

However, I was quite busy with the Guardian’s Open Platform, as the team was moving everything out of beta, introducing some big new services and infusing it into the way we operate.  I learned a lot doing that which has informed n0tice, too, but it was another 12 months before I could turn my attention back to this project.  It doesn’t feel any less relevant today than it did then. It’s just a much more crowded market now.

What does n0tice do?

The service operates in two modes – reading and posting.

When you go to n0tice.com it will first detect whether or not you’re coming from a mobile device.  It was designed for the iPhone first, but the desktop version is making it possible to integrate a lot of useful features, too.

(Lesson: jQuery Mobile is amazing. It makes your mobile projects better, faster. I wish I had used it from day one.)

It will then ask your permission to read your location.  If you agree, it grabs your latitude and longitude, and it shows you what has been published to n0tice within a close radius.

(Lesson: It uses Google Maps and their geocoder to get the location out of the browser, but then it uses Yahoo!’s geo services to do some of the other lookups since I wanted to work with different types of location objects.  This combination is clunky and probably a bad idea, but those tools are very robust.)

You can then zoom out or zoom in to see broader or more precise coverage.

Since it knows where you are already, it’s easy to post something you’ve seen near you, too.  You can actually post without being logged in, but there are some social incentives to encourage logged in behavior.

In an analogy to Foursquare’s ‘Mayor’, n0tice has the ‘Editor’ badge.

The first person to post in a particular city becomes the Editor of that city.  The Editor can then be ousted if someone completes more actions in the same city or region.
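Reduced to a toy sketch (my restatement of the rule above, not n0tice’s actual code), the mechanic is simply ‘most actions in the area holds the badge’:

// Toy version of the Editor rule: whoever has completed the most actions
// in a city holds the badge; the first poster starts with it by default.
function currentEditor(actionCountsByUser) {
  var editor = null, best = 0;
  for (var user in actionCountsByUser) {
    if (actionCountsByUser[user] > best) {
      best = actionCountsByUser[user];
      editor = user;
    }
  }
  return editor;
}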

It was definitely a challenge working out how to make sensible game mechanics work, but it was even harder finding the right mix of neighborhood, city, country, lat/long coordinates so that the idea of an ‘Editor’ was consistent from place to place.

London and New York, for example, are much more complicated given the importance of their neighborhoods and the poorly defined boundaries between them.

(Lesson: Login is handled via Facebook. Their platform has improved a lot in the last 12 months and feels much more ‘give-and-take’ than just ‘take’ as it used to. Now, I’m not convinced that the activities in a person’s local community are going to join up naturally via the Facebook paradigm, so it needs to be used more as a quickstart for a new service like this one.)

The ‘Editor’ mechanics are going to need a lot more work.  But what I like about the ‘Editor’ concept is that we can now start to endow more rights and privileges upon each Editor when an area matures.

Perhaps Editors are the only ones who can delete posts. Perhaps they can promote important posts. Maybe they can even delegate authority to other participants or groups.

Of course, quality is always an issue with open communities. Having learned a few things about crowdsourcing at the Guardian, I’ve put some simple triggers in place that should make it easier to surface quality should the platform scale to a larger audience.

For example, rather than comments, n0tice accepts ‘Evidence’.

You can add a link to a story, post a photo, embed a video or even a Storify feed to improve the post.

Also, the ratings aren’t merely positive/negative.  They ask if something matters, if people will care, and if it’s accurate. That type of engagement may be expecting too much of the community, but I’m hopeful it will work.

Of course, all this additional interactivity is only available on the desktop version, as the mobile version is intended to serve just two very specific use cases:

  1. getting a snapshot of what’s happening near you now
  2. posting something you’ve seen quickly and easily

How will n0tice make money?

Since the service is a community notice board, it makes sense to use an advertising model that people already understand in that context: classifieds.

Anyone can list something on n0tice for free that they are trying to sell.  Then they can buy featured promotional positions based on how large the area is in which they want their item to appear and for how long they want it to be seen there.

(Lesson: Integrating PayPal for payments took no time at all. Their APIs and documentation feel a little dated in some ways, but just as Facebook is fantastic as a quickstart tool for identity, PayPal is a brilliant quickstart for payments.)

Promotion on n0tice costs $1 per 1 mile radius per day. That’s in US dollars.
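Assuming the price scales linearly, as ‘$1 per 1 mile radius per day’ suggests, the arithmetic is simple: a 5-mile radius for a week would cost $35.

// Stated pricing, assumed linear: $1 per mile of radius per day of promotion.
function promotionCostUSD(radiusMiles, days) {
  return radiusMiles * days;
}
// promotionCostUSD(5, 7) => 35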

While we’re still getting the word out and growing the community, $1 will buy you a featured spot that lasts until more people come along and start buying up availability.

But there’s a lot we can do with this framework.

For example, I think it would make sense that a ‘Publisher’ role could be defined much like the ‘Editor’ for a region.

Perhaps a ‘Publisher’ could earn a percentage of every sale in a region.  The ‘Publisher’ could either earn that privilege or license it from us.

I’m also hopeful that we can make some standard affiliate services possible for people who want to use the ad platform in other apps and web sites across the Internet.  That will only really work if the platform is open.

How will it work for developers and partners?

The platform is open in every way.

There are both read and write APIs for it.  The mobile and desktop versions are both using those APIs, in fact.

The read API can be used without a key at the moment, and the write API is not very complicated to use.

So, for example, here are the 10 most recent news reports with the ‘crime’ tag in machine-readable form:

http://n0tice.com/api/readapi-reports.php?output=xml&tags=crime&count=10
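And here’s a rough sketch of consuming that feed in the browser (the URL and output=xml parameter are real, per above; the XML element names are my guess, so inspect the actual response first):

// Fetch the 10 most recent 'crime' reports and log them.
// NB: 'report' as the element name is an assumption, not documented fact.
var url = 'http://n0tice.com/api/readapi-reports.php?output=xml&tags=crime&count=10';
fetch(url)
  .then(function (response) { return response.text(); })
  .then(function (xml) {
    var doc = new DOMParser().parseFromString(xml, 'application/xml');
    var reports = doc.getElementsByTagName('report');
    for (var i = 0; i < reports.length; i++) {
      console.log(reports[i].textContent);
    }
  });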

The client code for the mobile version is posted on Github with an open license (we haven’t committed to which license, yet), though it is a few versions behind what is running on the live site.  That will change at some point.

And the content published on n0tice is all Creative Commons Attribution-Share Alike so people can use it elsewhere commercially.

The idea in this approach to openness is that the value is in the network itself, the connections between things, the reputation people develop, the impact they have in their communities.

The data and the software are enablers that create and sustain the value.  So the more widely used the data and software become the more valuable the network is for all the participants.

How scalable is the platform?

The user experience can scale globally given it is based on knowing latitude and longitude, something treated equally everywhere in the world.  There are limitations with the lat/long model, but we have a lot of headroom before hitting those problems.
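One of those limitations is worth spelling out. The cheap way to answer ‘what’s near me’ is a bounding box on the raw coordinates, a shortcut that degrades near the poles and the antimeridian (a generic sketch, not a description of n0tice’s actual queries):

// Rough bounding box around a point for a cheap radius query.
// One degree of latitude is ~111 km; longitude degrees shrink with latitude,
// which is why this breaks down near the poles and the date line.
function boundingBox(lat, lon, radiusKm) {
  var latDelta = radiusKm / 111;
  var lonDelta = radiusKm / (111 * Math.cos(lat * Math.PI / 180));
  return {
    minLat: lat - latDelta, maxLat: lat + latDelta,
    minLon: lon - lonDelta, maxLon: lon + lonDelta
  };
}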

The architecture is pretty simple at the moment, really.  There’s not much to speak of in terms of directed graphs and that kind of thing, yet.  So the software, regardless of how badly written it is, which it most definitely is, could be rewritten rather quickly.  I suspect that’s inevitable, actually.

The software environment is a standard LAMP stack hosted on Dreamhost which should be good enough for now.  I’ve started hooking in things like Amazon’s CloudFront, but it’s not yet on EC2.  That seems like a must at some point, too.

The APIs should also help with performance if we make them more cacheable.

The biggest performance/scalability problem I foresee will happen when the gaming mechanics start to matter more and the location and social graphs get bigger.  It will certainly creak when lots of people are spending time doing things to build their reputation and acquire badges and socialize with other users.

If we do it right, we will learn from projects like WordPress and turn the platform into something that many people care about and contribute to.  It would surely fail if we took the view that we can be the only source of creative ideas for this platform.

To be honest, though, I’m more worried about the dumb things like choking on curly quotes in users’ posts and accidentally losing users’ badges than I’m worried about scaling.

It also seems likely that the security model for n0tice is currently worse than the performance and scalability model. The platform is going to need some help from real professionals on that front, for sure.

What’s the philosophy driving it?

There’s most definitely an ideology fueling n0tice, but it would be an overstatement to say that the vision is leading what we’re doing at the moment.

In its current state, I’m just trying to see if we can create a new kind of mobile publishing environment that appeals to lots of people.

There’s enough meat to it already, though, that the features are very easy to line up against the mission of being an open community notice board.

Local UK community champion Will Perrin said it felt like a “floating cloud of data that follows you around without having to cleave to distribution or boundary.”

I really like that idea.

Taking a wider view, the larger strategic context that frames projects like this one and things like the Open Platform is about being Open and Connected.  Recently, I’ve written about Generative Media Platforms and spoken about Collaborative Media.  Those ideas are all informing the decisions behind n0tice.

What does the future look like for n0tice?

The Guardian Media Group exists to deliver financial security for Guardian News and Media.

My hope is that we can move n0tice from being a hack to becoming a new GMG business that supports the Guardian more broadly.

The support n0tice provides should come in two forms: 1) new approaches to open and collaborative journalism and 2) new revenue streams.

It’s also very useful to have living projects that demonstrate the most extreme examples of ‘Open and Connected’ models.  We need to be exploring things outside our core business that may point to the future in addition to moving our core efforts where we want to go.

We spend a lot of time thinking about openness and collaboration and the live web at the Guardian.  If n0tice does nothing more than illustrate what the future might look like then it will be very helpful indeed.

However, the more I work on this the more I think it’s less a demo of the future and more a product of the present.

Like most of the innovations in social media, the hard work isn’t the technology or even the business model.

The most challenging aspect of any social media or SoLoMo platform is making it matter to real people who are going to make it come alive.

If that’s also true for n0tice, then the hard part is just about to begin.

 


* The hack was originally called ‘News Signals’.  But after trying and failing to convince a few people that this was a good idea, including both technical people and potential users, such as my wife, I realized the name really mattered.

I’ve spent a lot of time thinking about generative media platforms, and the name needed to reflect that goal, something that spoke to the community’s behaviors through the network. It was supposed to be about people, not machines.

Now, of course, it’s hard to find a short domain name these days, but digits and dots and subdomains can make things more interesting and fun anyhow. Luckily, n0tice.com was available…that’s a zero for an ‘o’.

Behind the scenes of the Open Platform’s evolution

When I came to the Guardian two years ago, I brought with me some crazy California talk about open strategies and APIs and platforms. Little did I know the Guardian already understood openness. It’s part of its DNA. It just needed new ways of working in an open world.

Last week, The Guardian’s approach to openness and mutualisation took a giant step forward when we brought the Open Platform out of Beta.

It’s a whole new business model with a new technology infrastructure that is already accelerating our ambitions.

I’ll explain how we got to this point, but let me clarify what we just announced:

  • We’ve implemented a tiered access model that I think is a first in this space. We have a simple way to work with anyone who wants to work with us, from hobbyist to large-scale service provider and everything in between.
  • We’ve created a new type of ad network with 24/7 Real Media’s Open AdStream, one where the ads travel with the content that we make available for partners to reuse.
  • That ad network is going to benefit from another first which is Omniture analytics code that travels with the content, as well.
  • License terms that encourage people to add value are rare. Using many of the open license principles, we developed T&Cs that will fuel new business, not stop it.
  • Hosted in the cloud on Amazon EC2, the service scales massively. There are no limits to the number of customers we can serve.
  • The API uses the open source search platform Solr which makes it incredibly fast, robust, and easy for us to iterate quickly.
  • We introduced a new service for building apps on our network called MicroApps. Partners can create pages and fully functional applications on guardian.co.uk.

We’re using all the tools in the Open Platform for many of our own products, including the Guardian iPad app, several digital products and more and more news apps that require quick turn-around times and high performance levels.

There’s lots of documentation on the Open Platform web site explaining all this and more, but I figured I could use this space to give a picture of what’s been happening behind the scenes to get to this point.

It’s worth noting that this is far from the full picture of all the amazing stuff that has been happening at the Guardian the past 12 months. These are the things that I’ve had the pleasure of being close to.

Beginning with Beta

First, we launched in Beta last year. We wanted to build some excitement around it via the people who would use it first. So, we unveiled it at a launch event in our building to some of the smartest and most influential London developers and tech press.

We were resolute in our strategy, but when you release something with unknown outcomes and a clear path to chaos people get uneasy. So, we created hurdles just large enough to keep it from exploding, but gave those who used it a berth wide enough to take it to its extreme and to demonstrate its value.

It worked. Developers got it right away and praised us for it. They immediately started building things using it (see the app gallery). All good signs.

Socializing the message

We ran a Guardian Hack Day and started hosting and sponsoring developer events, including BarCamp, Rewired State, FOWA, dConstruct, djugl, Music Hack Day, ScaleCamp, etc.

Next, we knew the message had to reach their bosses soon, and their bosses’ bosses. So, we aimed right for the top.

Industry events can be useful ways to build relationships, but Internet events have been really lacking in meaning. People who care about how the Internet is changing the world and who are also actively making that change happen were the types of people we needed to build a long-term dialog with.

So, we came up with a new kind of event: Activate Summit.

The quality of the speakers and attendees at Activate was incredible. Because of those people the event has now turned into something much more amazing than what we initially conceived.

Nick Bostrom’s darkly humorous analysis of the likelihood of human extinction as a result of technology haunts me frequently still, but the event also celebrates some brilliant ways technology is making life better. I think we successfully tapped into some kind of shared consciousness about why people invest their careers into the Internet movement…it’s about making a difference.

Developers, developers, developers!

Gordon Brown was wise in his decision to put Tim Berners-Lee and Nigel Shadbolt on the task of opening government data. But they knew enough to know that they didn’t know how to engage developers. Where did they turn for help? The Guardian!

We couldn’t have been more excited to help them get data.gov.uk out the door successfully. It was core to what we’re about. As Free Our Data champion Charles Arthur joked on the way to the launch presentation, “nice of them to throw a party for me.”

We gave them a platform to launch data.gov.uk in the form of developer outreach, advice, support, event logistics, a nice building, etc., but, again, the people involved made the whole effort much more impressive than any contribution we made to it.

Tom Taylor’s Postcode Paper, for example, was just brilliant on so many levels. The message for why open government data could not have been clearer.

Election data

Then when the UK election started to pick up some momentum, we opened up the Guardian’s deep politics database and gave it a free-to-use API. We knew we couldn’t possibly cover every angle of the election and hoped that others could use the Guardian’s resources to engage voters. We couldn’t have asked for a better example of that than Voter Power.

A range of revenue models

All along there were some interesting things happening more behind the scenes, too.

The commercial team was experimenting with some new types of deals. Our ad network business grew substantially, and we added a Food Ad Network and a Diversity Network to our already successful Green Ad network.

It was clear that there was also room for a new type of ad network, a broader content-targeted ad network. And better yet, if we could learn about what happens with content out across the web then we might have the beginnings of a very intelligent targeting engine, too.

24/7 Real Media’s Open AdStream and Omniture were ready to help us make this happen. So, we embedded ads and analytics code with article content in the Content API. We’ve launched with some house ads to test it out, but we’re very excited by the possibilities when the network grows.

The Guardian’s commercial teams, including Matt Gilbert, Steve Wing, Dan Hedley and Torsten de Reise, also worked out a range of different partnerships with several Beta customers including syndication, rev share on paid apps, and rev share on advertising. We’re scaling those models and working out some new ones, as well.

It became obvious to everyone that we were on to something with a ton of potential.


Rewriting the API for scale

Similarly, the technology team was busily rebuilding the Content API the moment we realized how big it needed to be.

In addition to supporting commercial partners, we wanted to use it for our own development. The new API had to scale massively, it had to be fast, it had to be reliable, it had to be easy to use, and it had to be cheap. We used the open source search platform Solr hosted on Amazon’s EC2. API service management was handled by Mashery.

The project has hit the desk of nearly every member of the development team at one point or another. Here are some of the key contributions. Mat Wall architected it. Graham Tackley made Mat’s ideas actually work. Graham and Stephen Wells led the development, while Francis Rhys-Jones and Daithi O’Crualaoich wrote most of the functions and features for it. Martyn Inglis and Grant Klopper handled the ad integration. The wonderful API Explorer was written by Francis, Thibault Sacreste and Ken Lim. Matthew O’Brien wrote the Politics API. The MicroApps framework included all these people plus basically the entire team.

Stephen Dunn and Graham Tackley provided more detail in a presentation to the open source community in Prague at Lucid Imagination’s Solr/Lucene EuroCon event.

The application platform we call MicroApps

Perhaps even more groundbreaking than all this is the MicroApp framework. A newspaper web site that can run 3rd party apps? Yes!

MicroApps makes the relationship between the Guardian and the Internet feel like a two-way, read-write, permeable membrane rather than a broadcast tower. It’s a very tangible technology answer to the openness vision.

You can learn more by reading 2 excellent blog posts about MicroApps. Dan Catt explains how he used MicroApps for Zeitgeist. Since most of the MicroApps that exist today are hosted on Google AppEngine, the Google Code team published Chris Thorpe’s insights about what we’re doing with MicroApps on their blog.

The MicroApps idea was born out of a requirement to release smaller chunks of more independent functionality without affecting the core platform…hence the name “MicroApps”. Like many technology breakthroughs, the thing it was intended to do becomes only a small part of the new world it opens up.

Bringing it all together

At the same time our lead software architect Mat Wall was formulating the MicroApp framework, the strategy for openness was forming our positioning and our approach to platforms:

…to weave the Guardian into the fabric of the Internet; to become ‘of’ the Web, not just ‘on’ the Web

The Content API is a great way to Open Out and make the Guardian meaningful in multiple environments. But we also knew that we had to find a way to Open In, or to allow relevant and interesting things going on across the Internet to be integrated sensibly within guardian.co.uk.

Similarly, the commercial team was looking to experiment with several media partners who are all thinking about engagement in new ways. What better way to engage 36M users than to offer fully functional apps directly on our domain?

The strategy, technology and business joined up perfectly. A tiered business model was born.

The model

Simon Willison was championing a lightweight keyless access level from the day we launched the Beta API. We tested keyless access with the Politics API, and we liked it a lot. So, that became the first access tier: Keyless.

We offered full content with embedded ads and analytics code in the next access level. We knew getting API keys was a pain. So, we approved keys automatically on signup. That defined the second tier: Approved.

Lastly, we combined unfettered access to all the content in our platform with the MicroApp framework for building apps on the Guardian network. We made this deep integration level available exclusively for people who will find ways to make money with us. That’s the 3rd tier: Bespoke. It’s essentially the same as working in the building with our dev team.

We weren’t precisely clear on how we’d join these things up when we conceived the model. Not surprisingly, as we’ve seen over and over with this whole effort, our partners are the ones who are turning the ideas into reality. Mashery was already working on API access levels, and suddenly the last of our problems went away.

The tiers gave some tangible structure to our partner strategy. The model felt like it just started to create itself.

Now we have lots of big triangle diagrams and grids and magic quadrants and things that we can put into presentation slides that help us understand and communicate how the ecosystem works.

Officially opening for business

Given the important commercial positioning now, we decided that the launch event had to focus first and foremost on our media partners. We invited media agencies and clients into our offices. Adam Freeman and Mike Bracken opened the presentation. Matt Gilbert then delivered the announcement and gave David Fisher a chance to walk through a deep dive case study on the Enjoy England campaign.

There was one very interesting twist on the usual launch event idea which was a ‘Developer Challenge’. Several members of the development team spent the next 24 hours answering briefs given to us by the media partners at the event. It was run very much like a typical hack day, but the hacks were inspired by the ideas our partners are thinking about. Developer advocate Michael Brunton-Spall wrote up the results if you want to see what people built.

The presentation we gave at the launch event is also available online.


(Had we chosen a day to launch other than the same day that Google threw a press release party I think you’d already know all this.)

Do the right thing

Of all the things that make this initiative as successful as it is, the thing that strikes me most is how engaged and supportive the executive team is. Alan Rusbridger, Carolyn McCall, Tim Brooks, Derek Gannon, Emily Bell, Mike and Adam, to name a few, are enthusiastic sponsors because this is the right thing to do.

They created a healthy environment for this project to exist and let everyone work out what it meant and how to do it together.

Alan articulated what we’re trying to do in the Cudlipp lecture earlier this year. Among other things, Alan’s framework is an understanding that our abilities as a major media brand and those of the people formerly known as the audience are stronger when unified than they are when applied separately.

Most importantly, we can afford to venture into open models like this one because we are owned by the Scott Trust, not an individual or shareholders. The organization wants us to support journalism and a free press.

“The Trust was created in 1936 to safeguard the journalistic freedom and liberal values of the Guardian. Its core purpose is to preserve the financial and editorial independence of the Guardian in perpetuity, while its subsidiary aims are to champion its principles and to promote freedom of the press in the UK and abroad.”

The Open Platform launch was a big day for me and my colleagues. It was a big day for the future of the Guardian. I hope people also see that it was a major milestone toward a brighter future for journalism itself.

Using fantasy football to drive network effects

Network effects accelerate when services are accessible wherever the user is engaged. That leap has been made in many different contexts in online media from advertising (AdSense) to participation and personal publishing (Flickr and Twitter).

More mainstream publishers got close to this when they began publishing RSS feeds, but the effects of the RSS reading experience don’t come back to the publisher and add value to the wider network like they should.

A click back to the article on the source domain does not improve that article for everyone else who reads it, for example.

It may seem difficult to create network effects around content except in the form of things like reader comments and social bookmarking. But now there are some new ways to create network effects in the publishing business.

Most publishers have found some kind of social tool that makes sense as part of what they offer. It may be a forum, a friends network, or in some cases a game or contest. All those things can capture activity and engage the participants from anywhere on the Internet.

We recently launched a new fantasy football application at The Guardian (when I say ‘football’ I mean ‘soccer’), and we immediately began thinking about where else people might enjoy playing the game. The developers and product manager cranked out a very rudimentary iGoogle Gadget version of the app so that you can stay on top of what’s happening in the game directly from your browser start page.

The gadget is not yet fully functional, but when we start reflecting your activity in the game back to you through the gadget then network effects will be possible. I haven’t been a huge fan of most of the social apps out there, but I can definitely see myself using this one a lot.

In many ways, it also makes me a more frequent user of Google than I already was, but that’s a topic for another post.

At this point in the evolution of the Internet, the online product launch checklist probably dictates that a portable version of a service is a minimum requirement, must-have feature. In that model, the domain can serve as a rules engine, storage and a transaction hub, but the activity of an application needs only a lightweight container and an end-user who’s happy with the experience wherever it may exist.

Creating leverage at the data layer

There’s a reason that the world fully embraced HTTP but not Gopher or Telnet or even FTP. That’s because the power of the Internet is best expressed through the concept of a network, lots of interlinked pieces that make up something bigger rather than tunnels and holes that end in a destination.

The World Wide Web captured people’s imaginations, and then everything changed.

I was reminded of this while reading a recent interview with Tim Berners-Lee (via TechCrunch). He talked a bit about the power of linking data:

“Web 2.0 is a stovepipe system. It’s a set of stovepipes where each site has got its data and it’s not sharing it. What people are sometimes calling a Web 3.0 vision where you’ve got lots of different data out there on the Web and you’ve got lots of different applications, but they’re independent. A given application can use different data. An application can run on a desktop or in my browser, it’s my agent. It can access all the data, which I can use and everything’s much more seamless and much more powerful because you get this integration. The same application has access to data from all over the place…

Data is different from documents. When you write a document, if you write a blog, you write a poem, it is the power of the spoken word. And even if the website adds a lot of decoration, the really important thing is the spoken words. And it is one brain to another through these words.”

Data is what matters. It’s a point of interest in a larger context. It’s a vector and a launchpad to other paths. It’s the vehicle for leverage for a business on the Internet.

What’s the business strategy at the data layer?

I have mixed views on where the value is on social networks and the apps therein, but they are all showing where the opportunity is for services that have actually useful data. Social networks are a good user interface for distributed data, much like web browsers became a good interface for distributed documents.

But it’s not the data consumption experience that drives value, in my mind.

Value on the Internet is being created in the way data is shared and linked to more data. That value comes from the simplicity and ease of access, from the completeness and timeliness, and from the readability of that data.

It’s not about posting data to a domain and figuring out how to get people there to consume it. It’s about being the best data source or the best data aggregator no matter how people make use of it in the end.

Where’s the money?

Like most Internet service models, there’s always the practice of giving away the good stuff for free and then upselling paid services or piggybacking revenue-generating services on the distribution of the free stuff. Chris Anderson’s Wired article on the future of business presents the case well:

“The most common of the economies built around free is the three-party system. Here a third party pays to participate in a market created by a free exchange between the first two parties…what the Web represents is the extension of the media business model to industries of all sorts. This is not simply the notion that advertising will pay for everything. There are dozens of ways that media companies make money around free content, from selling information about consumers to brand licensing, “value-added” subscriptions, and direct ecommerce. Now an entire ecosystem of Web companies is growing up around the same set of models.”

Yet these markets and technologies are still in very early stages. There’s lots of room for someone to create an open advertising marketplace for information, a marketplace where access to data can be obtained in exchange for ad inventory, for example.

Data providers and aggregators have a huge opportunity in this world if they can become authoritative or essential for some type of useful information. With that leverage they could have the social networks, behavioral data services and ad networks all competing to piggyback on their data out across the Internet to all the sites using or contributing to that data.

Regardless of the specific revenue method, the businesses that become a dependency in the Web of data of the future will also find untethered growth opportunities. The cost of that type of business is one of scale, a much more interesting place to be than one that must fight for attention.

I’ve never really liked the “walled garden” metaphor and its negative implications. I much prefer to think in terms of designing for growth.

Frank Lloyd Wright designed buildings that were engaged with the environments in which they lived. Similarly, the best services on the World Wide Web are those that contribute to the whole rather than compete with it, ones that leverage the strengths in the network rather than operate in isolation. Their existence makes the Web better as a whole.


A handy music playlist tool

I’ve been looking for a way to share playlists on my blog and elsewhere online for a long time. It’s been surprisingly hard to find a really convenient way to do it.

DRM and industry lockdown have been a big part of that, but there have also been too few technical ways to point to music files that are already publicly available. There are tons of legal MP3s on the Internet that reside at readable URLs today.

Lucas Gonze and his team at Yahoo! solved this problem. They launched a source-agnostic embeddable media player. You can read more about it on YDN.

It’s fantastically simple. All you do is paste this reference to Yahoo!’s media player JavaScript code anywhere on your web page (I added it at the bottom of my blog templates):

<script type="text/javascript" src="http://mediaplayer.yahoo.com/js"></script>

Then you just add an HTML link somewhere on your web page to any MP3 file you want to see in your playlist.

That’s it. You’re already done. The link you just made will now include a small play button in front of it, and a mini media player will appear in the browser.

Here’s a short playlist I quickly put together to show how it works. The 4th track here is particularly relevant to my life:

Cut Chemist – The Garden
Young Einstein (Ugly Duckling) – Handcuts Soul Mix
They Might Be Giants- Birdhouse in Your Soul
LCD Soundsystem – Losing My Edge

The code for that playlist looks like this:

<a href="http://download.wbr.com/cutchemist/TheGarden.mp3"> Cut Chemist – The Garden </a>
<a href="http://www.uglyduckling.us/music/HandCutsSoulMix.mp3"> Young Einstein (Ugly Duckling) – Handcuts Soul Mix </a>
<a href="http://midwesternhousewives.com/mix/The%20Might%20Be%20Giants-%20Birdhouse%20in%20Your%20Soul.mp3"> They Might Be Giants- Birdhouse in Your Soul </a>
<a href="http://www.personal.psu.edu/users/s/m/smk291/muchies/LCD%20Soundsystem%20-%20Losing%20My%20Edge.mp3"> LCD Soundsystem – Losing My Edge </a>

They’ve included some other nice things in the code that give you some flexibility. You can create a shareable playlist file, and you can add cover art, for example.
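The shareable playlist format is presumably XSPF, the open format Lucas championed (see his quote below). A minimal playlist file looks something like this (a generic XSPF example, not output copied from the player):

<?xml version="1.0" encoding="UTF-8"?>
<playlist version="1" xmlns="http://xspf.org/ns/0/">
  <trackList>
    <track>
      <title>Cut Chemist - The Garden</title>
      <location>http://download.wbr.com/cutchemist/TheGarden.mp3</location>
    </track>
    <track>
      <title>LCD Soundsystem - Losing My Edge</title>
      <location>http://www.personal.psu.edu/users/s/m/smk291/muchies/LCD%20Soundsystem%20-%20Losing%20My%20Edge.mp3</location>
    </track>
  </trackList>
</playlist>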

What I like most, probably, is the architecture of the solution. Anyone who already links to MP3 files can just add the music player JavaScript code to their page templates, and it will just work immediately. You don’t have to force fit a heavily branded HTML badge into your web page. And since the links are all standard HTML hrefs, the content of the playlist is search engine friendly.

It’s the first time I’ve seen a media player so closely aligned with the way the Internet works.

Lucas posts about the need to unlock how media files are referenced. He wants to take the complexity out of distribution and reduce the concept of music sharing and discoverability to the Internet’s roots with URLs as identifiers:

“Almost all online music businesses right now are in the distribution business, even if they see other functions like discovery or social connection as their main value, because they have no way to connect their discovery or social connection features with a reliable provisioning service from a third party. But provisioning is a commodity service which doesn’t give anybody an edge. They don’t want to import playlists from third parties because *that’s* where they are adding value.

Exporting playlists for others to provision, though, is a different story, and it makes much more sense from a business perspective. Let somebody else deal with provisioning. This is what it would mean for somebody like Launchcast or Pandora to publish XSPF with portable song identifiers that could be resolved by companies that specialize in provisioning.”

It seems Lucas is thinking about how to get music flowing around the Internet with the same efficiency that text has enjoyed. Very smart.

Building markets out of data

I’m intrigued by the various ways people view ‘value’. There seem to be 2 camps: 1) people who view the world in terms of competition for finite resources and 2) people who see ways to create new forms of value and to grow the entire pie.

Umair Haque talks about choices companies make that push them into one of those 2 camps. He often argues that the market needs more builders than winners. He clarifies his position in his post The Economics of Evil:

“When you’re evil, your ability to co-create value implodes: because you make moves which are focused on shifting costs and extracting value, rather than creating it. …when you’re evil, the only game you want to – or can play – is domination.”

I really like the idea that the future of the media business is in the way we build value for all constituencies rather than the way we extract value from various parts of a system. It’s not about how you secure marketshare, control distribution, mitigate risk or reduce costs. It’s about how you enable the creation of value for all.

He goes on to explain how media companies often make the mistake of focusing on data ownership:

“Data isn’t the value. In fact, data’s a commodity…What is valuable are the things that create data: markets, networks, and communities.

Google isn’t revolutionizing media because it “owns the data”. Rather, it’s because Google uses markets and networks to massively amplify the flow of data relative to competitors.”

I would add that it’s not just the creation of valuable data that matters but also in the way people interface with existing data. Scott Karp’s excellent post on the guidelines for transforming media companies shares a similar view:

“The most successful media companies will be those that learn how to build networks and harness network effects. This requires a mindset that completely contradicts traditional media business practices. Remember, Google doesn’t own the web. It doesn’t control the web. Google harnesses the power of the web by analyzing how websites link to each other.”