Orchestrating streams of data from across the Internet

The liveblog was a revelation for us at the Guardian. The sports desk had been doing them for years, experimenting with different styles, methods and tone. And then, about 3 years ago, the news desk started using them liberally to great effect.

I think it was Matt Wells who suggested that perhaps the liveblog was *the* network-native format for news. I think that’s nearly right…though it’s less the ‘format’ of a liveblog than the activity powering the page that demonstrates where news editing in a networked world is going.

It’s about orchestrating the streams of data flowing across the Internet into a compelling use in one form or another. One way to render that data is the liveblog. Another is a map with placemarks. Another is an RSS feed. A stream of tweets. Storify. Etc.

I’m not talking about Big Data for news. There is certainly a very hairy challenge in big data investigations and intelligent data visualizations to give meaning to complex statistics and databases. But this is different.

I’m talking about telling stories by playing DJ to the beat of human observation pumping across the network.

We’re working on one such experiment with a location-tagging tool we call FeedWax. It creates location-aware streams of data for you by looking across various media sources, including Twitter, Instagram, YouTube, Google News and Daylife.

The idea with FeedWax is to unify various types of data through shared contexts, beginning with location. These sources may only have a keyword to join them up or perhaps nothing at all, but when you add location they may begin sharing important meaning and relevance. The context of space and time is natural connective tissue, particularly when the words people use to describe something may vary.
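FeedWax isn’t something you can pick apart from the outside, so here is only a minimal sketch of the core idea in Python, with an item shape and function names invented for illustration: two items that share no keywords can still be joined up when their coordinates put them in the same place.

    from dataclasses import dataclass
    from math import asin, cos, radians, sin, sqrt

    @dataclass
    class Item:
        source: str  # e.g. "twitter", "instagram", "youtube"
        text: str
        lat: float
        lon: float

    def distance_km(a: Item, b: Item) -> float:
        """Great-circle distance between two items (haversine formula)."""
        dlat = radians(b.lat - a.lat)
        dlon = radians(b.lon - a.lon)
        h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
        return 2 * 6371 * asin(sqrt(h))  # Earth radius ~6371 km

    def related_by_location(anchor: Item, items: list[Item], radius_km: float = 1.0) -> list[Item]:
        """Items from any source near the anchor, whether or not they share a keyword."""
        return [i for i in items if i is not anchor and distance_km(anchor, i) <= radius_km]

Everything else, such as keyword joins, time windows and the per-source plumbing, layers on top of a simple proximity test like this.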

We’ve been conducting experiments in orchestrated stream-based and map-based storytelling on n0tice for a while now. When you start crafting the inputs with tools like FeedWax you have what feels like a more frictionless mechanism for steering the flood of data that comes across Twitter, Instagram, Flickr, etc. into something interesting.

For example, when the space shuttle Endeavour flew its last flight and subsequently labored through the streets of LA there was no shortage of coverage from on-the-ground citizen reporters. I’d bet not one of them considered themselves a citizen reporter. They were just trying to get a photo of this awesome sight and share it, perhaps getting some acknowledgement in the process.

You can see the stream of images and tweets here: http://n0tice.com/search?q=endeavor+OR+endeavour. And you can see them all plotted on a map here: http://goo.gl/maps/osh8T.

Interestingly, the location of the photos gives you a very clear picture of the flight path. This is crowdmapping without requiring that anyone do anything they wouldn’t already do. It’s orchestrating streams that already exist.

This behavior isn’t exclusive to on-the-ground reporting. I’ve got a list of similar types of activities in a blog post here which includes task-based reporting like the search for computer scientist Jim Gray, the use of Ushahidi during the Haiti earthquake, the Guardian’s MPs Expenses project, etc. It’s also interesting to see how people like Jon Udell approach this problem with other data streams out there such as event and venue calendars.

Sometimes people refer to the art of code and code-as-art. What I see in my mind when I hear people say that is a giant global canvas in the form of a connected network, rivers of different colored paints in the form of data streams, and a range of paint brushes and paint strokes in the form of software and hardware.

The savvy editors in today’s world are learning from and working with these artists, using their tools and techniques to tease out the right mix of streams to tell stories that people care about. There’s no lack of material or tools to work with. Becoming network-native sometimes just means looking at the world through a different lens.

Information physicality

The recent advances in human-to-computer interaction should be scrambling your brain if you’re paying attention at all. From gesture interfaces (both 2D *and* 3D) to location-aware social media and the rapid adoption of connected devices, our relationship to computing and the increasingly ubiquitous network is changing dramatically.

Whereas I grew up in an era where we had to work relatively hard to get a computer to behave the way we wanted, kids today will grow up expecting computers to respond to them instead.

What is this trend going to mean to journalism and publishers? Getting closer to the leaders will help uncover some answers.

The gaming consoles have been working on this stuff for years already, but now Google, Amazon, Sony and even the telcos all have relevant projects starting to ship now.  

Google, for example, just unveiled a new project called Field Trip to add to its portfolio of location-responsive media that also includes Google Now and Google Glass.

The app is populated using data from “dozens” of content partners, according to Google. Songkick (show information), Eater (restaurants), Flavorpill (events of all kinds), and Thrillist (hot cafes and shops) are there to tell you where to go and what to eat. Architizer (public art, interesting buildings), Remodelista (designy boutiques), and Inhabitat (a designy blog) are there for the nerdier stuff. You can turn any of these services on or off, or ask to see more or less of the items from each partner.

Also served to you are Google Offers, which show up as coupons and deals for nearby businesses, and restaurant reviews from Zagat, Google’s crown jewel in this space.

- Google’s New Hyper-Local City Guide Is a Real Trip, Wired

What kind of publisher is well-suited for a world where technology responds?

What does it mean for information to adjust to the way we move our hands, the way we slide our fingers across a glass surface, where our eyes are focused, and which direction we’re facing?

What does it mean for information to alter based on our location, places we’ve been and places we’re going?

How do you make information more physical?

The answers have yet to be invented, but there are some obvious ways to re-factor current assets and processes in order to get invited to the party.

  • Atomize everything. Separate independent elements and link them intelligently. Well-structured information and consistent workflow help a lot with this.
  • Add a concept of time and space to media. Location can be a point on the planet, a place or a geopolitical boundary. Time can be a moment or a period. Then look at adding more context.
  • Standardize around formats that software developers like to work with. Offer APIs that can accept data as well as release data. (A sketch of what such an atomized, location- and time-aware item might look like follows below.)
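To make that list concrete, here is a hedged sketch of a single atomized item, with every field name invented for illustration: an independent element that carries its own point in space and moment in time, links to its siblings by id, and serializes into a format developers like to work with.

    import json
    from dataclasses import asdict, dataclass
    from datetime import datetime, timezone

    @dataclass
    class Atom:
        """One independent element of a story, linked rather than embedded."""
        id: str
        headline: str
        body: str
        lat: float          # a point on the planet...
        lon: float
        place: str          # ...or a named place or geopolitical boundary
        observed_at: str    # a moment (ISO 8601); a period would add an end time
        related: list[str]  # ids of other atoms this one links to

    atom = Atom(
        id="endeavour-0042",
        headline="Shuttle crossing Crenshaw Blvd",
        body="Endeavour inching through the intersection right now.",
        lat=33.989, lon=-118.335, place="Los Angeles, CA",
        observed_at=datetime(2012, 10, 13, 18, 30, tzinfo=timezone.utc).isoformat(),
        related=["endeavour-0041"],
    )

    print(json.dumps(asdict(atom), indent=2))

The details matter less than the shape: each element stands alone, carries its context with it, and can be recombined by anyone’s software.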

It’s about adjusting, being malleable and responding. Information, how it’s collected, where it goes and how it’s experienced, needs to adjust according to the way the user is looking at it and touching it. It needs to sync with the point in space and time where the person’s attention is focused.

More simply, make everything you do as software-friendly as you possibly can. And then go partner with people whose brains and financial incentives are inextricably linked to the new hardware and software.

This presentation may communicate some of these ideas more effectively than a blog post:




Rethinking news for a world of ubiquitous connectivity

I gave a presentation on the implications of ubiquitous connectivity for journalism at the Rethinking Small Media event held at the University of London yesterday. The slides are here:

I realized by the time I finished talking that the point I really wanted to make was more about how important it is that we move the dialog away from an us v them view of small and big media. Fracturing a community that is mostly full of people trying to do good in the world is not helpful, even if the definition and method of doing good varies.

The more important issue is about protecting the open public space we call the Internet.

As the network begins adopting more and more nodes, more streams of nonhuman data, new connected devices, etc., we must work harder to ensure that the interests that make all these things possible are aligned with the principles that made the Internet such valuable infrastructure for people across the globe.

But, in the meantime, there are certainly some tangible things people from both small and big media can do to point in the right direction.

The list includes atomizing information, adding more context such as time and space, linking it, making it developer-friendly, and sharing it openly with partners, among other things.

Mobilising the web of feeds

I wrote this piece for the Guardian’s Media Network on the role that RSS could play now that the social platforms are becoming more difficult to work with. GeoRSS, in particular, has a lot of potential given the mobile device explosion. I’m not suggesting necessarily that RSS is the answer, but it is something that a lot of people already understand and could help unify the discussion around sharing geotagged information feeds.


Powered by Guardian.co.uk. This article titled “Mobilising the web of feeds” was written by Matt McAlister, for theguardian.com on Monday 10th September 2012 16.43 UTC

While the news that Twitter will no longer support RSS was not really surprising, it was a bit annoying. It served as yet another reminder that the Twitter-as-open-message-utility idea that many early adopters of the service loved was in fact going away.

There are already several projects intending to disrupt Twitter, mostly focused on the idea of a distributed, federated messaging standard and/or platform. But we already have such a service: an open standard adopted by millions of sources; a federated network of all kinds of interesting, useful and entertaining data feeds published in real-time. It’s called RSS.

There was a time when nearly every website was RSS-enabled, and a cacophony of Silicon Valley startups fought to own pieces of this new landscape, hoping to find dotcom gold. But RSS didn’t lead to gold, and most people stopped doing anything with it.

Nobody found an effective advertising or service model (except, ironically, Dick Costolo, CEO of Twitter, who sold Feedburner to Google). The end-user market for RSS reading never took off. Media organisations didn’t fully buy into it, and the standard took a backseat to more robust technologies.

Twitter is still very open in many ways and encourages technology partners to use the Twitter API. That model gives the company much more control over who is able to use tweets outside of the Twitter owned apps, and it’s a more obvious commercial strategy that many have been asking Twitter to work on for a long time now.

But I think we’ve all made a mistake in the media world by turning our backs on RSS. It’s understandable why it happened. But hopefully those who rejected RSS in the past will see the signals demonstrating that an open feed network is a sensible thing to embrace today.

Let’s zoom out for context first. Looking at the macro trends in the internet’s evolution, we can see one or two clear winners as more information and more people appeared on the network in waves over the last 15 years.

Following the initial explosion of new domains, Yahoo! solved the need to surface only the websites that mattered through browsing. The Yahoo! directory became saturated, so Google then surfaced the pages that mattered within those websites through search. Google became saturated, so Facebook and Twitter surfaced the things that mattered on the pages within those websites through connections with people.

Now that the social filter is saturated, what will be used next to surface things that matter out of all the noise? The answer is location. It is well understood technically. The software-hardware-service stack is done. The user experience is great. We’re already there, right?

No – most media organisations still haven’t caught up yet. There’s a ton of information not yet optimised for this new view of the world and much more yet to be created. This is just the beginning.

Do we want a single platform to be created that catalyses the location filter of the internet and mediates who sees what and when? Or do we want to secure forever a neutral environment where all can participate openly and equally?

If the first option happens, as historically has been the case, then I hope that position is taken by a force that exists because of, and remains reliant on, the second option.

What can a media company do to help make that happen? The answer is to mobilise your feeds. As a publisher, being part of the wider network used to mean having a website on a domain that Yahoo! could categorise. Then it meant having webpages on that website optimised for search terms people were using to find things via Google. And more recently it has meant providing sharing hooks that can spread things from those pages on that site from person to person.

Being part of the wider network today suddenly means all of those things above, and, additionally, being location-enabled for location-aware services.

It doesn’t just mean offering a location-specific version of your brand, though that is certainly an important thing to do as well. The major dotcoms use this strategy increasingly across their portfolios, and I’m surprised more publishers don’t do this.

More importantly, though, and this is where it matters in the long run, it means offering location-enabled feeds that everyone can use in order to be relevant in all mobile clients, applications and utilities.

Entrepreneurs are all over this space already. Pure-play location-based apps can be interesting, but many feel very shallow without useful information. The iTunes store is full of travel apps, reference apps, news, sports, utilities and so on that are location-aware, but they are missing some of the depth that you can get on blogs and larger publishers’ sites. They need your feeds.

Some folks have been experimenting in some very interesting ways that demonstrate what is possible with location-enabled feeds. Several mobile services, such as Flipboard, Pulse and now Prismatic, offer really nice and very popular reading apps that pull RSS feeds, and they are well placed to turn those into location-based news services.

Perhaps a more instructive example of the potential is the augmented reality app hypARlocal at Talk About Local. They are getting location-aware content out of geoRSS feeds published by hyperlocal bloggers around the UK and the citizen journalism platform n0tice.com.

But it’s not just the entrepreneurs that want your location-enabled feeds. Google Now for Android notifies you of local weather and sports scores along with bus times and other local data, and Google Glass will be dependent on quality location-specific data as well.

Of course, the innovations come with new revenue models that could get big for media organisations. They include direct, advertising and syndication models, to name a few, but have a look at some of the startups in the rather dense ‘location’ category on Crunchbase to find commercial innovations too.

Again, this isn’t a new space. Not only has the location stack been well formed, but there are also a number of bloggers who have been evangelising location feeds for years. They already use WordPress, which automatically pumps out RSS. And many of them also geotag their posts today using one of the many useful WordPress mapping plugins.
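As a sketch of how little work is involved on the consuming side, this bit of Python pulls the geotag out of each item in a WordPress-style feed. The feed URL is made up, but the GeoRSS namespace is the real one, and only the standard library is needed:

    import urllib.request
    import xml.etree.ElementTree as ET

    GEORSS = "{http://www.georss.org/georss}"
    FEED_URL = "https://example-hyperlocal-blog.co.uk/feed/"  # hypothetical feed

    with urllib.request.urlopen(FEED_URL) as response:
        root = ET.parse(response).getroot()

    for item in root.iter("item"):
        point = item.find(GEORSS + "point")  # e.g. "53.801 -1.548"
        if point is not None and point.text:
            lat, lon = map(float, point.text.split())
            title = item.findtext("title", default="(untitled)")
            print(f"{title}: {lat:.3f}, {lon:.3f}")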

It would take very little to reinvigorate a movement around open location-based feeds. I wouldn’t be surprised to see Google prioritising geotagged posts in search results, for example. That would probably make Google’s search on mobile devices much more compelling, anyhow.

Many publishers and app developers, large and small, have complained that the social platforms are breaking their promises and closing down access, becoming enemies of the open internet and being difficult to work with. The federated messaging network is being killed off, they say. Maybe it’s just now being born.

Media organisations need to look again at RSS, open APIs, geotagging, open licensing, and better ways of collaborating. You may have abandoned it in the past, but RSS would have you back in a heartbeat. And if RSS is insufficient then any location-aware API standard could be the meeting place where we rebuild the open internet together.

It won’t solve all your problems, but it could certainly solve a few, including new revenue streams. And it’s conceivable that critical mass around open location-based feeds would mean that the internet becomes a stronger force for us all, protected from nascent platforms whose future selves may not share the same vision that got them off the ground in the first place.


guardian.co.uk © Guardian News & Media Limited 2010

Published via the Guardian News Feed plugin for WordPress.

Calling your web site a ‘property’ deprives it of something bigger

The BBC offered another history-of-London documentary the other night, a sort of people’s perspective on how the character of the city has changed over time, obviously inspired by Danny Boyle’s Opening Ceremony at the Olympics.

Some of the sequences were particularly interesting to me as a foreigner: the gentrification of Islington, the anarchist squatters in Camden, the urbanization of the Docklands and so on, with a running theme of haves vs have-nots.

It’s one of a collection of things inspiring me recently, including a book called ‘The Return of the Public’ by Dan Hind (a sort of extension to the Dewey v Lippmann debates), what’s going on with n0tice, such as Sarah Hartley’s adaptation of it called Protest Near You and the dispatch-o-rama hack, and, of course, the Olympics.

I’m becoming reinvigorated and more bullish on where collective action can take us.

At a more macro level these things remind me of the need to challenge the many human constructs and institutions that are reflections of the natural desire to claim things and own them.

Why is it so difficult to embrace a more ‘share and share alike’ attitude?  This is as true for children and their toys as it is for governments and their policies.

The bigger concern for me, of course, is the future of the Internet and how media and journalism thrive and evolve there.

Despite attempts by its founders to shape the Internet so it can’t be owned and controlled, there are many who have tried to change that both intentionally and unwittingly, occasionally with considerable success.

How does this happen?

We’re all complicit.  We buy a domain. We then own it and build a web site on it. That “property” then becomes a thing we use to make money.  We fight to get people there and sell them things when they arrive.  It’s the Internet-as-retailer or Internet-as-distributor view of the world.

That’s how business on the Internet works…or is it?

While many have made that model work for them, it’s my belief that the property model is never going to be as important or meaningful or possibly as lucrative as the platform or service model over time. More specifically, I’m talking about generative media networks.

Here are a few different ways of visualizing this shift in perspective (more):

Even if it works commercially, the property model is always going to be in conflict with the Internet-as-public-utility view of the world.

Much like Britain’s privately owned public spaces issue, many worry that the Internet-as-public-utility will be ruined or, worse, taken from us over time by commercial and government interests.

Playing a zero sum game like that turns everyone and everything into a threat.  Companies can be very effective at fighting and defending their interests even if the people within those companies mean well.

I’m an optimist in this regard.  There may be a pendulum that swings between “own” and “share”, and there are always going to be fights to secure public spaces.  But you can’t put the Internet genie back in the bottle.  And even if you could it would appear somewhere else in another form just as quickly…in some ways it already has.

The smart money, in my mind, is where many interests are joined up regardless of their individual goals, embracing the existence of each other in order to benefit from each other’s successes.

The answer is about cooperation, co-dependency, mutualisation, openness, etc.

We think about this a lot at the Guardian. I recently wrote about how it applies to the recent Twitter issues here. And this presentation by Chris Thorpe below from back in 2009 on how to apply it to the news business is wonderful:

Of course, Alan Rusbridger’s description of a mutualised newspaper in this video is still one of the strongest visions I’ve heard for a collaborative approach to media.

The possibility of collective action at such an incredible scale is what makes the Internet so great.  If we can focus on making collective activities more fruitful for everyone then our problems will become less about haves and have-nots and more about ensuring that everyone participates.

That won’t be an easy thing to tackle, but it would be a great problem to have.

The importance of co-dependency on the Internet

The thing that still blows my mind about the Internet is that through all the dramatic changes over the last 2 or 3 decades it remains a mostly open public commons that everyone can use.

There are many land ownership battles happening all around it. But it has so far withstood challenges to its shape, its size, its governance and its role in all aspects of our lives.

Will that always be the case? Or are too many special interests with unbalanced power breaking down the core principles that made it our space, a shared space owned by nobody and available to everybody?

It used to be that corporate interests were aligned with public interests on the Internet.

Many early pioneers thrived because they recognized how to create and earn value across the network, in the connection between things, the edges (read more on Graph Theory). They succeeded by developing services that offered unfettered and easy access to something useful, often embracing some sort of sharing model for partners powered by revenue models built on network effects.

These innovators were dependent on open and public participation and a highly distributed network adhering to common standards such as HTTP and HTML.

The public and private interests were aligned. Everyone’s a winner!

However, hypertext was designed for connecting text, not for connecting people. The standards designed for communication such as SMTP and later XMPP were fine, but no standard for connecting people to other people achieved global adoption.

Even support from the web’s inventor for a standard called FOAF, and a Bill of Rights for Social Networking Users backed by Arrington and Scoble, failed to lift the movement for standards in social connections.

Without a standard the entrepreneurs got busy coming up with solutions, and, eventually, Facebook and Twitter were born. Like their predecessors, they recognized how to create and earn value in the connections between things, in this case, a network of people, a privately owned social graph.

But the social graph that they created is not an open, owner-less, public commons.

Facebook is interested in Facebook’s existence and has no interest in FOAF or XMPP. Industry standards are to be ignored at Facebook (and Apple, Microsoft and sometimes Google, too) unless they help to acquire customers or to generate PR.

Governance is held privately in these spaces. They consult with their customers, but they write their own privacy policies and are held accountable to nobody but public opinion and antiquated legal frameworks around the world.

Now, it’s a very appealing idea to think about an open and public alternative to the dominant social networks. Many have tried, including StatusNet and Diaspora. And more challengers are on the way.

But there are harder problems to solve here that I think matter more than that temporary band-aid.

  • We need to know why the industry failed to adopt a global standard for social networking. Will we go through this again as a market forms inevitably around a digital network of real-world things? Will it repeat again when we learn how to connect raw data, too?
  • Can the benefits (and therefore the incentives) of ensuring that contributions are owned and controlled by a platform’s contributors in perpetuity be made more commercially viable and legally sensible? Are there ways to support those benefits globally?
  • In what ways can those who are adversely affected by centralized control of social activity hold those forces to account? Where is the balance of power coming from, and how can it be amplified to be an equal force?

Much like the framers of the U.S. Constitution, some very clever folks codified open access to this public space we call the Internet in ways that effectively future-proofed it. They structured it to reward individual contributions that also benefit the wider community.

Instead of pen and paper, the Internet’s founders used technology standards to accomplish this goal.

While it seems very doable to build a successful and lasting digital business that embraces these ideals, the temptation of money and power, the unrelenting competitive pressures, and weak incentives to collaborate will steer many good intentions in bad directions over time.

I don’t think Facebook was ever motivated to support an open public commons, so it’s no surprise when they threaten such a thing. But Twitter somehow seemed different.

At the moment, it seems from outside Twitter HQ that the company is being crushed by the weight of all these things and making some bad choices over small issues that will have big impact over time.

It’s not what Facebook and Twitter do that really matters, though.

The greater concern, in my opinion, is that the types of people who originally created this global network for the rest of us to enjoy aren’t being heard or, worse, not getting involved at all anymore.

I’m not suggesting we need more powerful bureaucratic standards bodies to slow things down. Arguably, these increasingly powerful platforms are already crushing competition and squeezing out great innovations. We don’t need more of that. Stifling innovation always has disastrous effects.

What I’m suggesting is that more people will benefit in more ways over a longer period of time if someone can solve this broader organizational problem – the need to codify open access, shared governance, and other future-proofing tactics for public spaces in whatever shape they take.

Social networking is threatening the open public network today. Something else will threaten it tomorrow.

We need to work out what can be done today to fuel and secure an ongoing and healthy co-dependency between our public spaces and commercial interests.

Dispatchorama: a distributed approach to covering a distributed news event

We’ve had a sort of Hack Week at the Guardian, or “Discovery Week”. So, I took the opportunity to mess around with the n0tice API to test out some ideas about distributed reporting.

This is what it became (best if opened in a mobile web browser):

http://dispatchorama.com/



It’s a little web app that looks at your location and then helps you to quickly get to the scene of whatever nearby news events are happening right now.

The content is primarily coming from n0tice at the moment, but I’ve added some tweets with location data. I’ve looked at some geoRSS feeds, but I haven’t tackled those yet. It should also include only things from the last 24 hours. Adding more feeds and tuning the timing will help it feel more ‘live’.
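The actual code is on GitHub (linked below), but the two filters just described boil down to something like this sketch, with an invented Report shape standing in for whatever each feed returns:

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone
    from math import asin, cos, radians, sin, sqrt

    @dataclass
    class Report:
        title: str
        lat: float
        lon: float
        posted_at: datetime

    def km_between(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
        """Haversine distance in kilometres."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        h = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 6371 * asin(sqrt(h))

    def nearby_now(reports: list[Report], here_lat: float, here_lon: float) -> list[Report]:
        """Keep the last 24 hours, then order by proximity to the user."""
        cutoff = datetime.now(timezone.utc) - timedelta(hours=24)
        fresh = [r for r in reports if r.posted_at >= cutoff]
        return sorted(fresh, key=lambda r: km_between(here_lat, here_lon, r.lat, r.lon))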

The concept here is another way of thinking about the binding of the digital and physical worlds. Being able to understand the signals coming out of networked media is increasingly important. By using the context that travels with bits of information to inform your physical reality, you can respond more quickly, understand more about what’s going on, and participate more proactively.

I’m applying that idea to distributed news events here, things that might be happening in many places at once or a news event that is moving around.

In many ways, this little experiment is a response to the amazing effort of the Guardian’s Paul Lewis and several other brave reporters covering last year’s UK riots.

There were 2 surprises in doing this:

  1. The Twitter location-based tweets are really all over the place and not helpful. You really have to narrow your source list to known Twitter accounts to get anything good, but that kind of defeats the purpose.
  2. I haven’t done a ton of research yet, but there seems to be a real lack of useful geoRSS feeds out there. What happened? Did the failure of RSS readers kill the geoRSS movement? What a shame. That needs to change.

The app uses the n0tice API, jQuery Mobile, Google’s location APIs and a few snippets picked off Stack Overflow. It’s on GitHub here:
https://github.com/mattmcalister/dispatchorama/

No backlog

The backlog has lots of benefits in the software development process, but is it necessary?  Could there be more to gain by just trusting your crew will do the right thing at the right time?

It occurred to me while tracking GitHub commits on a project that I didn’t need to maintain a backlog or a burn-down chart or any of those kinds of things anymore.

Everyone was watching each other’s commits, commenting on issues, and chatting when they needed to.  I could see what everyone had done the day before.

They were all in sync enough to collaborate on the output when it helped to collaborate, or work independently when that was more effective.

What could I add? Remind them of all the things they haven’t done? That’s uninspiring for everyone involved.

How does everyone know what to work on next?

The devs know what’s important, and they know how to do their job efficiently…let them work it out. If they don’t know what’s important it will become obvious in their commits. Then you can just steer when the emphasis is wrong rather than mapping hours to tasks.

They may in fact want to maintain a list that works like a backlog.  But maybe that should be a personal productivity choice, not something that’s imposed by someone else.

What about all those things you want to do that aren’t getting done?

I’ve never had a feature that really mattered to me just fade from memory. In fact, having no backlog forces a sharper focus.

How do you know when things will get done?

Your devs will tell you, and they will be accurate if it’s a personal agreement between you rather than a number on a spreadsheet. If you have a deadline that really matters, then just be clear about that. That becomes the framework within which to operate, a feature of the code.

What if the developer doesn’t understand the requirements?

Well, do you actually really need to spell out requirements? Aren’t your developers tasked with solving the need? Let them pitch a solution to a problem, agree on it, and then let them run with it.

Of course, I don’t know how well this approach would work in a team larger than maybe 8 people, or a large-scale project with multiple parallel streams to it.  And maybe the chaos does more harm than good over time.

Clearly, I’m exaggerating for effect a little here, but I wonder a lot about how far you could go with this approach and where it really breaks down.

I think a lot of folks want things like backlogs because one can establish a meaningful agreement and reduce tension between people who organize stuff and people who create stuff.  But it’s often used to protect one side from the faults of the other rather than join them up to create a stronger whole.

But all projects and teams are different.  And it can be very tricky working out what should be done, by whom and when.

I think what I’m suggesting is that rather than making decisions around time and resources, where success is measured by how effectively activity maps to a plan, maybe the better way to lead a software project is to adjust decision making according to the appropriate abstraction level for the project. That way you can value quality and creativity over precision of delivery.

For example, the resources required to build, say, a global transaction platform vs a web page are very different.  And your teams will not allow you to rank them together.  You have to zoom in or out to understand the impact of those two projects, and that will then affect how you allocate resources to make them each happen.

Once that discussion has been had and everyone has agreed on what they are supposed to be working on, make sure they have enough caffeinated beverages and get out of the way.

Keep an eye on their commits each day.  And drop the backlog.

It’s incredibly liberating.

Capturing the essence of a region through photography

We’ve already seen some fantastic submissions just hours after publishing this on the Guardian via The Northerner blog.  Anyone can contribute here: http://northernlandscapes.n0tice.com 


Powered by Guardian.co.uk. This article titled “Do you have an image that captures the essence of The North?” was written by Sarah Hartley, for guardian.co.uk on Monday 9th July 2012 10.00 UTC

Being such a diverse and, frankly, huge geographical area, the north of England is difficult to sum up.

There are plenty of well-known monuments, such as the Angel of the North, wonderful landmark buildings, such as Manchester’s mini houses of parliament in the shape of the town hall, and the industrial heritage of Teesside’s Transporter Bridge. But what’s the image that says The North to you?

We are inviting you to share your photos on the theme of Landscapes of the North during July. We’ll feature some of them here on The Northerner blog during the month and two expert photographers from the region will help us find a shortlist of images.

We’ll then ask Northerner readers to vote on those before selecting a final image to represent the north which will also become the backdrop for our official Northerner noticeboard.

One of the judges helping to find that final selection will be Graeme Rowatt, an award-winning commercial photographer based in the North East who specialises in quality, bespoke commercial, editorial, corporate and advertising photography.

The other, Jon Eland, is well-known through Exposure Leeds, of which he is founding director. He describes himself as a photographic image-maker, digital evangelist and all-round good egg, and offers this tip to those looking to take a winning image:

“As with all photography – a pretty picture can only count for so much – interest, excitement and a strong story to tell will always be a priority for me. It has to be much more than face value.”

To take part in the challenge, you’ll need to have the basic details of where in the north the image is located, a suitable headline/title, and a brief description of what the picture is about. Submit it to us using the instructions below. Please note that by entering into this, you are agreeing to have your picture shown on this blog and on the noticeboard but the copyright for the image remains with you. Maximum size of 2MB. JPG, GIF, PNG. Entries close Friday 27 July.

To submit your picture:

- if this is your first visit you’ll need to sign up to n0tice.com. You can do this via your existing Facebook or Twitter accounts or by creating a user name and entering your email address.
- once logged in, go to http://northernlandscapes.n0tice.com and click on ‘post a new report’
- you will be presented with a simple form asking for the information mentioned above.

guardian.co.uk © Guardian News & Media Limited 2010

Published via the Guardian News Feed plugin for WordPress.

The effects of openness matter more than the degrees of openness

Platform strategy, or more specifically API strategy, is a very effective starting point from which to debate the many flavors and degrees of ‘open’ that play out on the Internet.

For me, the open API debate is all about catering to the means of production.

Developers want data to be hosted by machines at some URL that they don’t have to worry about. When they are building things, they like the data output from those sources to be structured in clean formats and easy to obtain in different ways.

Give them good materials to build with and maintain low overheads.  They will build better things as a result.  Your costs go down.  Your output and your ceiling of opportunity go up.  It’s that simple really.
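As an illustration of what good materials feel like from the developer’s side, one URL in and clean JSON back. This sketch uses the Guardian’s own Content API; the shared “test” key and the response shape are as its public documentation describes them today, so treat the details as indicative:

    import json
    import urllib.parse
    import urllib.request

    params = urllib.parse.urlencode({"q": "open platform", "api-key": "test"})
    url = "https://content.guardianapis.com/search?" + params

    with urllib.request.urlopen(url) as response:
        data = json.load(response)

    for result in data["response"]["results"]:
        print(result["webTitle"], "->", result["webUrl"])

That is the whole integration. When the overheads are that low, developers build things with your content that you would never have prioritized yourself.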

Of course, there are certainly many nuances.

When Mathew Ingram of GigaOm recently posed the challenge that Twitter and the NYT face a similar business model issue around openness, he was right to point out the difference between the NYT’s and the Guardian’s approaches to APIs.

The New York Times has experimented with open APIs, which give outside developers access to its data for use in third-party services or features…But the traditional media player that has taken this idea the furthest is The Guardian newspaper in Britain — which launched an “open platform” project in 2010, offering all of its data to outside developers through an API. Doing this has been a core part of Editor-in-Chief Alan Rusbridger’s concept of “open journalism.”

It’s useful to have an example of where an open API creates value. The Guardian Facebook app is a good one, both in terms of innovation with partners and in terms of real commercial value.

The concept for the app had already been explored months before Facebook announced seamless sharing. Michael Brunton-Spall, Lisa van Gelder and Graham Tackley built a clever app they called Social Guardian at a Hack Day.

When FB then gave us the opportunity to build something for their launch, we obviously took it.  The app was built by a 3rd party in record time, and it subsequently took off like a rocket.

As we all know, Facebook adjusted their algorithm and tempered the explosive growth, but it should be considered a success by any measure.  It was built quickly and executed well.  It cost us very little. Users adopted it very quickly.  It generated huge buzz for our brand and introduced the Guardian to a whole new audience we weren’t reaching.

It also drove dramatic traffic levels back to the Guardian web site which we then turned into advertising revenue for the business.

Low cost. High adoption rate. Innovative. Revenue generating.  What else could you ask for?

It’s a solid example of the generative media strategy I was trying to articulate a while back.

Martin Belam posted a detailed case study of the app here and here.

However, while we’ve pushed the envelope on openness and commercial leverage for APIs in the newspaper world, there are pure-play API businesses like NewsCred that have expressed the open API strategy for content in an even more complete form.

They are a content API warehouse. As a developer, if you are working on a digital product that could use some high quality articles or video from brand name media sources then you would be wise to browse the NewsCred catalog.

But NewsCred doesn’t allow just anyone to drop a feed of content directly into their platform. They want to curate relationships with their sources and their API customers…they make money being in the middle.

What’s the trajectory on the sliding scale of open APIs?

There was an interesting marketplace forming several years ago around businesses similar to those we’re seeing today, but it never completely catalyzed. It might be instructive to look at that space with fresh eyes.

The blog, RSS feed and personal start page triple play was a perfect storm of networked information innovation in 2004 or so. Several companies, including Twitter CEO Dick Costolo’s Feedburner, did very well executing an open platform strategy and exiting at the right moment.

Today the new blog includes context in addition to words and pictures. RSS feeds evolved into APIs. And personal start pages learned to listen to our behaviors.

The killer open strategy now would be one that can unify those forces into a self-reinforcing amplifier.

Arguably, Facebook already did that, but they’ve applied a portal-like layer to the idea, creating a destination instead of an ecosystem. They are also using personal connections as the glue that brings out the best in these 3 things.

That is only one approach to this space.  Another approach is to do one of those things really really well.

Twitter, Tumblr and WordPress are doing great on the creation side, but they need to keep an eye on open participation platforms that marry context with content. Mass market API activity is nascent but bound to explode again given how important APIs are for developers. Flipboard and some newcomers are reinventing the old idea of automated aggregation through better packaging and smarter recommendation algorithms.

Enter the business model question.

One thing I’ve learned to appreciate since joining the Guardian 4 years ago now is the value of the long game.  The long game forces you to think about what value you create for your customers more than what value you take from your customers.

Of course, going long should never be mistaken for being slow. Marathon runners can still do a sub 5 minute mile.

As I recently said about the WordPress strategy of generosity, the value you create in the market will then come back in the form of stronger ties and meaningful relationships with partners who can help you make money.

The open debate often gets ruined at this point in the argument by those who only think of success in terms of quarterly P&Ls. That’s fine and totally understandable. That matters, too…massively. But it’s not everything. And it’s as big of a mistake to focus only on P&L as it is to focus only on the long term.

I once got some brilliant advice from my former boss at The Industry Standard Europe, Neil Thackray, when I was struggling to work out what my next move was going to be after that business failed.

He said, “What are you going to tell the grandkids you did during the war?”

It’s a great way of looking at this problem.

The battle we’re all fighting in the news business is how to make the P&L work.  We will win that battle with hard work, creativity, and perseverance.

But the war we’re all fighting in the news business is about securing the long term viability of journalism or a journalism-like force in the world that can hold power to account and amplify the voices of people that need to be heard.

Profit is one force that can secure that future.  But profit is not the goal itself.  Nor is the success of one media brand at the expense of another.

I’m also of the opinion that Twitter has made a long term mistake by prioritizing advertising on their client experiences over the value of their partner ecosystem.  But it’s easy to have that opinion from outside their board room, and perhaps advertising will make them a stronger force for good than they would have been as a pure platform service.

Similarly, NYT is using their APIs to improve innovation within the business. Effectively, the Guardian is doing the same except that it views the success of its business through the eyes of its partners in addition to itself.

Is that ‘more open’, as Mathew asks?

Who cares?

Is the NYT form of an open API helping them secure a future for the effects of journalism in the world?

If the answer to that question is ‘yes’, then the degree of openness compared to others is totally irrelevant.