The need for better research

I find Susan Greenfield’s perspective on the future of the human brain a bit alarmist (see below), but it’s clear that we know very little about the impact of the increasingly intertwined connections forming as a result of networks.

I suspect that any physical brain damage or enhancement we may be experiencing will matter less, for good or ill, than what people will do to each other because of networked behaviors.

This article, titled “Oxford scientist calls for research on technology ‘mind change'”, was written by Ian Sample, science correspondent, for The Guardian on Tuesday 14th September 2010 19.07 UTC.

Lady Greenfield reignited the debate over modern technology and its impact on the brain today by claiming the issue could pose the greatest threat to humanity after climate change.

The Oxford University researcher called on the government and private companies to join forces and thoroughly investigate the effects that computer games, the internet and social networking sites such as Twitter may have on the brain.

Lady Greenfield has coined the term "mind change" to describe differences that arise in the brain as a result of spending long periods of time on a computer. Many scientists believe it is too early to know whether these changes are a cause for concern.

"We need to recognise this is an issue rather than sweeping it under the carpet," Greenfield said. "We should acknowledge that it is bringing an unprecedented change in our lives and we have to work out whether it is for good or bad."

Everything we do causes changes in the brain and the things we do a lot are most likely to cause long term changes. What is unclear is how modern technology influences the brain and the consequences this has.

"For me, this is almost as important as climate change," said Greenfield. "Whilst of course it doesn’t threaten the existence of the planet like climate change, I think the quality of our existence is threatened and the kind of people we might be in the future."

Lady Greenfield was talking at the British Science Festival in Birmingham before a speech at the Tory party conference next month. She said possible benefits of modern technology included higher IQ and faster processing of information, but using internet search engines to find facts may affect people’s ability to learn. Computer games in which characters get multiple lives might even foster recklessness, she said.

"We have got to be very careful about what price we are paying, that the things that are being lost don’t outweigh the things gained," Greenfield said. "Every single parent I have spoken to so far is concerned. I have yet to find a parent who says ‘I am really pleased that my kid is spending so much time in front of the computer’."

Sarah-Jayne Blakemore, a cognitive neuroscientist at University College London and co-author of the book The Learning Brain, agreed that more research was needed to know whether technology was causing significant changes in the brain. "We know nothing at all about how the developing brain is being influenced by video games or social networking and so on.

"We can only really know how seriously to take this issue once the research starts to produce data. So far, most of the research on how video games affect the brain has been done with adult participants and, perhaps surprisingly, has mostly shown positive effects of gaming on many cognitive abilities," she said.

Maryanne Wolf, a cognitive neuroscientist at Tufts University in Massachusetts and author of Proust and the Squid, said that brain circuits honed by reading books and thinking about their content could be lost as people spend more time on computers.

"It takes time to think deeply about information and we are becoming accustomed to moving on to the next distraction. I worry that the circuits that give us deep reading abilities will atrophy in adults and not be properly formed in the young," she said.

© Guardian News & Media Limited 2010

Published via the Guardian News Feed plugin for WordPress.

Enhanced by Zemanta

Humanizing the network

Several years ago now I was working with a team at Yahoo! who were building a platform that automatically surfaced relevant content for visitors. The engineers were writing algorithms intended to make the Internet more meaningful to people.

The team eventually worked its way into the Yahoo! home page experience, which can now reorder content based on what the platform thinks you will most likely click on.

The new Google Instant feature works in a similar way: it responds to your behaviors and adjusts as you give Google clues about your intent.

In both cases, the computers help you work less to get what you want faster.
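Neither company publishes the details of these systems, but the underlying idea can be sketched as a simple click-through-rate ranker with a little exploration mixed in. Everything below (the story names, the numbers, the epsilon-greedy twist) is invented for illustration:

```python
import random

def reorder_stories(stories, clicks, views, epsilon=0.1, rng=random):
    """Order stories by observed click-through rate, occasionally
    promoting a random story so that new content still gets seen."""
    def ctr(story):
        v = views.get(story, 0)
        return clicks.get(story, 0) / v if v else 0.0

    ranked = sorted(stories, key=ctr, reverse=True)
    if ranked and rng.random() < epsilon:
        # Explore: move a randomly chosen story into the top slot.
        ranked.insert(0, ranked.pop(rng.randrange(len(ranked))))
    return ranked

# A visitor who clicks sports headlines sees sports float up the page.
stories = ["politics", "sports", "finance"]
clicks = {"sports": 30, "politics": 5, "finance": 2}
views = {"sports": 100, "politics": 100, "finance": 100}
```

With epsilon set to 0 the ordering is purely exploitative; real systems blend many more signals than raw clicks.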

These kinds of innovations are a response to the growth rate of information on the Internet. The big dotcoms need to preserve their position in the market.  And it will work to a degree, I’m sure, as the Yahoo!, Google and Facebook infrastructures get so sophisticated that they can’t be beaten at what they do.

But the network is changing shape, and I wonder if these companies are failing to actually evolve with people and the changes happening in society as a result of the presence of this new infrastructure in our world.

How did we arrive here?

When the world of possibility gets lost in the volume of options, people get frustrated. And when people get frustrated, technology breakthroughs happen.

This chart shows how certain technological breakthroughs have made the Internet more manageable for people.

Technology breakthroughs as network complexity increases

In the very early days of the web it was easy enough to find what you wanted just by hopping around from document to document.  But it didn’t take long for the world of documents to explode.  And when it exploded people became overwhelmed.  When they became overwhelmed opportunity opened up to help people get started on their Internet journeys.

The directory took shape, and Yahoo! became the center of the web.

But the directory got full.  There was no way to track everything as more and more people arrived and more and more organizations and individuals put more and more documents on the web.  The volume of information went through another explosion, and people got frustrated again.

The search game followed, and Google became the center of all things web.

Now, of course, the search engine got full, too. In 2008, Google indexed over 1 trillion documents.  The number of searches required to find what you wanted escalated and irritated people.  And as the Internet population topped 1 billion people the amount of activity happening online made findability a madhouse again.

People coopted other people to make the Internet feel more human again.  The social filter changed the balance of power in our relationship with shared knowledge, and information began to find us.

But this will change, too.  Not only will more people join the party, but more networks will get connected and dump large piles of information onto the Internet, not just simple documents.  And when real world devices flood the network the connected experience will become overwhelming yet again.  Your friends won’t be good enough at helping you to manage your experience.

The new infrastructure

Tom Coates’ recent dConstruct presentation addresses some of this.  He says lots of interesting things that anyone making a career developing for the Internet really should hear:

“The increase in transport infrastructure has completely transformed the way we make things and even what we can make….today the history of any object around you is a long and intricate chain of exchange, manufacture, component building, transport.  Every object around you implicates the entire planet.  Why isn’t the web like that?”

The networked world is still in its infancy, but the many ways we can interleave aspects of our lives with the people and things around us is absolutely incredible.  The power people have in their hands is enormous when they know they have it, overwhelming when it discovers them.

With interesting raw data, useful APIs, connected devices and social triggers all around us, the materials for brokering and assembling intelligent tools are getting cheaper and easier to offer people. We can devise algorithms that learn, and that automagically make the nodes on the network more accessible, relevant and valuable to each of us when and where we care about them.

But machines are not great at creating meaning, differentiating emotive responses, interpreting our motivations and contextualizing historical, personal and social references.

Wag the dog

The problem then surfaces when we assume that machines should be doing the hard work of making judgments about things.  When machines make lots of small decisions on our behalf we become tempted to let them make the big decisions for us, too.

Amar Bhidé wrote an essay for Harvard Business Review titled “The Judgment Deficit”. He explains how the financial crisis resulted from our willingness to forgo important moral positions because we let machines interpret things humans should have been looking at.

“A new form of centralized control has taken root—one that is the work not of old-fashioned autocrats, committees, or rule books but of statistical models and algorithms. These mechanistic decision-making technologies have value under certain circumstances, but when misused or overused they can be every bit as dysfunctional as a Muscovite politburo. Consider what has just happened in the financial sector: A host of lending officers used to make boots-on-the-ground, case-by-case examinations of borrowers’ creditworthiness. Unfortunately, those individuals were replaced by a small number of very similar statistical models created by financial wizards and disseminated by Wall Street firms, rating agencies, and government-sponsored mortgage lenders. This centralization and robotization of credit flourished as banks were freed from many regulatory limits on their activities and regulators embraced top-down, mechanistic capital requirements. The result was an epic financial crisis and the near-collapse of the global economy. Finance suffered from a judgment deficit, and all of us are paying the price.”
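Bhidé’s point is easy to make concrete. A minimal sketch of the kind of mechanistic scorecard he describes might look like this; the fields, weights and threshold are all invented, and a real credit model would be far more elaborate:

```python
def credit_score(applicant, weights, threshold=0.5):
    """A toy linear scorecard: the same formula applied to every
    applicant, with no room for case-by-case judgment."""
    score = sum(weights[k] * applicant.get(k, 0.0) for k in weights)
    return score, score >= threshold

# Hypothetical weights a central model might disseminate to every lender.
weights = {"income_ratio": 0.6, "years_employed": 0.05, "prior_defaults": -0.4}

applicant = {"income_ratio": 0.8, "years_employed": 4, "prior_defaults": 0}
score, approved = credit_score(applicant, weights)  # roughly 0.68, approved
```

The danger Bhidé describes is not the formula itself but its monoculture: when every lender runs the same weights, every lender makes the same mistake at once.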

The financial meltdown could be interpreted as an autopilot problem, laziness as a result of efficiency.  But perhaps even more important than economic catastrophe is the potential for cultural divides to sharpen and even turn violent when people fail to cooperate in healthy ways.

The existence of the network and the machines that make it do wonderful things does not change the fact that we are human, that we can be greedy, cruel, selfish, etc.  The network will only amplify the best and worst about humanity.

Ethan Zuckerman woke me up to the important nuances of forming a global society via the Internet in his Activate Summit presentation this summer.  The fact that we can communicate with individuals in far away countries, conduct transactions via phone, make goods in one place and ship them from another individually or en masse anywhere in the world does not mean that we understand the people we’re interacting with.

Fish oil, not snake oil

Enabling people to communicate with each other around the world and to publish on the world’s stage is great. It’s hugely important. So is enabling and incentivizing organizations and institutions to release the raw data that drives what they do. And the many approaches to stitching these things together via infrastructure are a must-have for future generations, as Mr. Coates said.

But we must also learn how to step back and assess the value of the network, the connections happening within and across it.  We must learn how to evaluate and articulate what changes need to happen to improve it and our relationship with it.

Without humanizing the decisions that are made as a result of the work computers are doing on our behalf we will be creating crutches for our brains and validating Nick Carr’s fears.  Rather than help people take back control of their lives, the technologies will reinforce current power structures and enable new ones that hurt us.

Meet the new boss, same as the old boss

When the next wave of activity on the network renders the social filter unmanageable, the technology on the network should respond by empowering individuals and groups to better themselves and improve global society, rather than by finding more ways for us to do less.

There’s no need to create a new boss that looks just like the old boss.  We can do better than that.

Publishing network models

There are several ways to think about what publishing networks can be. The models range from a more traditional portfolio of owned and operated properties to loosely connected collections of self-publishing individuals.

Gawker Media is a great example of the O&O portfolio model. Nick Denton recently shared some impressive growth figures across the Gawker family of sites. In total, the network overtook the big US newspaper sites USA Today, Washington Post and LA Times. He’s looking to swim with some bigger fish:

“The newspapers are now the least of our competition. The inflated expectations of investors and executives may one day explode the Huffington Post. And Yahoo and AOL are in long-term decline. But they are all increasingly in our business.”

Gawker owns the content they publish and pays their staff and contributors for their work. The network of sites share a publishing platform but exist independently and serve separate but similar audiences: Jalopnik, Jezebel, Gizmodo, Gawker, Lifehacker and Kotaku.

This model of targeted media properties is what made Pat McGovern so successful with his privately-owned IDG, a global portfolio of 300+ computer-focused magazines and web sites with over $3B in revenue…yes, that’s $3 BILLION!

The Huffington Post has been incredibly successful using the flip-side of Gawker’s model, a decentralized contributor network in addition to a small staff who all post to one media property. Cenk Uygur, host of The Young Turks, mirrors posts from his own site to the Huffington Post web site. He can use the Huffington Post to build his reputation more broadly, and Huffington Post gets some decent coverage to offer their visitors at no cost. He’s had similar success building his brand on YouTube.

Huffington Post has been technically very experimental and innovative, and contributors seem very happy with the results they are getting by posting to the site despite some controversy about not being paid for the value of their work.

Of course, Demand Media is the new Internet publishing poster boy. There’s lots of interesting coverage about what they are up to. Personally, I’m most intrigued by the B2B play they call ‘Content Channels’. They are getting distribution through other sites such as the San Francisco Chronicle and USA Today, where they provide Home Guides and Travel Tips, respectively.

The business model is just a very simple advertising revenue sharing agreement. And production costs are kept to a minimum by paying very very low fees for content.

I find WordPress fascinating in this context, too.

It doesn’t make much sense to think of WordPress as a cohesive network, except that there’s a single platform hosting each individual blog. The individuals are not connected to each other in any meaningful way, but WordPress has the ability to instrument activities across the network.

A great example of this is when they partnered with content recommendations startup Zemanta to help WordPress bloggers find links and images to add to their posts.  There are now thousands of WordPress plugins (including our News Feed plugin) that individual bloggers deploy on their own hosted versions of the platform.

It’s not exactly a self-fueling ecosystem, as there are no revenue models connecting it all together.  But it may be the absence of financial shenanigans that makes the whole WordPress ecosystem so compelling to the millions of participants.

Then there’s Glam Media, which combines its portfolio of owned and operated properties with a collection of totally independent publishers to create a network. In fact, it’s the independents who actually make up the lion’s share of the traffic that they sell.

Glam looks more like an ad network than a more substantive source of interest, but they have done some very clever things. For example, they are developing an application network, a way for advertisers to get meaningful reach for brand experiences that are better than traditional banner ad units. If it works it will create value for everyone in the Glam ecosystem:

“As a publisher, you make money on every ad impression that appears as part of a Glam App. This includes apps embedded on your site and on pop-up pages generated by an application on your site. Glam App ad revenue is split through a three-way rev share between the publisher, app developer, and Glam.”
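Glam doesn’t publish the split percentages, so the fractions below are placeholders, but the mechanics of a three-way rev share reduce to a few lines:

```python
def split_revenue(ad_revenue, shares):
    """Divide ad revenue among the parties to a rev-share agreement.
    `shares` maps each party to its fraction; fractions must sum to 1."""
    if abs(sum(shares.values()) - 1.0) > 1e-9:
        raise ValueError("share fractions must sum to 1")
    return {party: round(ad_revenue * frac, 2) for party, frac in shares.items()}

# Illustrative fractions only; the real Glam percentages are not public.
payout = split_revenue(1000.00, {"publisher": 0.5, "developer": 0.3, "glam": 0.2})
```

The interesting design question isn’t the arithmetic but who sets the fractions, which is exactly where the power in an ecosystem like this sits.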

There’s something very powerful about enabling rich experiences to exist in a distributed way. That was the vision many people shared for the widgetization of the web, and for a hypersyndication future for media that, in my mind, still needs to happen.

At the Guardian we’re balancing a few of these different concepts, as you can see from the Science Blog Network announcement below and our fast-growing Green Network.  The whole Open Platform initiative enables this kind of idea, among other things.

There are a few different aspects of the Science Blog Network announcement that are interesting, not least of which is the fact that I was able to post the article directly on my blog here in full using the Guardian’s WordPress plugin.

As Megan Garber of Nieman Journalism Lab said about it…

“The blog setup reframes the relationship between the expert and the outlet — with the Guardian itself, in this case, going from “gatekeeper” to “host.””

The trick that few of the publishing networks have really worked out, in my mind, is how to surface quality. That’s a much easier problem to solve when your operation is purely owned and operated. But O&O rarely scales as successfully as an open network.

Amar Bhidé wrote a wonderful essay for the September issue of Harvard Business Review about the need for human judgment in a world of complex relationships.

“The right level of control is an elusive and moving target: Economic dynamism is best maintained by minimizing centralized control, but the very dynamism that individual initiative unleashes tends to increase the degree of control needed. And how to centralize—whether through case-by-case judgment, a rule book, or a computer model—is as difficult a question as how much.”

At the end of the day it comes down to purpose.

If the intent of the network is purely to generate revenue, then it will be susceptible to all kinds of misaligned interests amongst participants in the network.

If on the other hand the network is able to create value that the participants actually care about (which should include commercial value in addition to many other measures of value), then the network will have a long growth path and may actually fuel itself over time if it is managed well.

The strategic choices behind the controlled versus open models are very different, and there’s no reason both can’t exist together.  What will ultimately matter most is whether or not end-users find value in the nodes of the network.

This article, titled “Guardian science blogs: We aim to entertain, enrage and inform”, was written by Alok Jha for The Guardian on Tuesday 31st August 2010 12.00 UTC.

It’s nearly the end of summer holidays, and there are plans afoot in the blogosphere.

You would not know it from general media coverage but, on the web, science is alive with remarkable debate. According to the Pew Research Centre, science accounts for 10% of all stories on blogs but only 1% of the stories in mainstream media coverage. (The Pew Research Centre’s Project for Excellence in Journalism looked at a year’s news coverage starting from January 2009.)

On the web, thousands of scientists, journalists, hobbyists and numerous other interested folk write about and create lively discussions around palaeontology, astronomy, viruses and other bugs, chemistry, pharmaceuticals, evolutionary biology, extraterrestrial life or bad science. For regular swimmers in this fast-flowing river of words, it can be a rewarding (and sometimes maddening) experience. For the uninitiated, it can be overwhelming.

The Guardian’s science blogs network is an attempt to bring some of the expertise and these discussions to our readers. Our four bloggers will bring you their untrammelled thoughts on the latest in evolution and ecology, politics and campaigns, skepticism (with a dollop of righteous anger) and particle physics (I’ll let them make their own introductions).

Our fifth blog will hopefully become a window onto just some of the discussions going on elsewhere. It will also host the Guardian’s first ever science blog festival – a celebration of the best writing on the web. Every day, a new blogger will take the reins and we hope it will give you a glimpse of the gems out there. If you’re a newbie, we hope the blog festival will give you dozens of new places to start reading about science. And if you’re a seasoned blog follower, we hope you’ll find something entertaining or enraging.

We start tomorrow with the supremely thoughtful Mo Costandi of Neurophilosophy. You can also look forward to posts from Ed Yong, Brian Switek, Jenny Rohn, Deborah Blum, Dorothy Bishop and Vaughan Bell among many others.

In his Hugh Cudlipp lecture in January, Guardian editor Alan Rusbridger discussed the changing relationship between writers (amateur and professional) and readers.

We are edging away from the binary sterility of the debate between mainstream media and new forms which were supposed to replace us. We feel as if we are edging towards a new world in which we bring important things to the table – editing; reporting; areas of expertise; access; a title, or brand, that people trust; ethical professional standards and an extremely large community of readers. The members of that community could not hope to aspire to anything like that audience or reach on their own; they bring us a rich diversity, specialist expertise and on the ground reporting that we couldn’t possibly hope to achieve without including them in what we do.

There is a mutualised interest here. We are reaching towards the idea of a mutualised news organisation.

We’re starting our own path towards mutualisation with some baby steps. We will probably make lots of mistakes (and we know you’ll point them out). Where we end up will depend as much on you as it does on us.

© Guardian News & Media Limited 2010

How do you visualise the future of datajournalism?

This article, titled “How do you visualise the future of datajournalism?”, was written by Simon Rogers for The Guardian on Friday 3rd September 2010 08.00 UTC.

Following the big rules of datajournalism can take you to some strange places. What are the big three?

1) Everything is data
2) If you can get the data, then you can visualise it
3) Just because it is data doesn’t mean it isn’t subjective
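Rule 2 is cheap to demonstrate even without a graphics library. As a minimal sketch, here is a text bar chart of the Pew figures on science coverage quoted earlier in this post (10% of stories on blogs versus 1% in mainstream coverage):

```python
def bar_chart(data, width=40):
    """Render a mapping of label -> percentage as a text bar chart."""
    peak = max(data.values())
    lines = []
    for label, value in data.items():
        bar = "#" * round(width * value / peak)
        lines.append(f"{label:>18} | {bar} {value}%")
    return "\n".join(lines)

# Pew Research Centre figures: share of stories that are about science.
print(bar_chart({"blogs": 10, "mainstream media": 1}))
```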

These were put to the test at the European Journalism Centre’s recent data conference in Amsterdam. Graphic artist Anna Lena Schiller took each talk at the event and visualised it while it was going on. You can see mine here and Tony Hirst’s here. The full set is above.

On her blog, Anna Lena explains:

The day was divided into four parts, with two to five ten minute talks in each session. When you browse through the pictures you’ll see the headlines, participants and topics of the talks.

Anna Lena has also been working on the big meet up in Berlin this week at which my colleague Martin Belam spoke – so soon we’ll get to see what she made of that event.

World government data

• Search the world’s government datasets

• More environment data
• Get the A-Z of data
• More at the Datastore directory

• Follow us on Twitter

© Guardian News & Media Limited 2010

Captivating Arcade Fire video shows what HTML5 can do

This is a wonderful interactive…a must-see:

This article, titled “Captivating Arcade Fire video shows what HTML5 can do”, was written by Jemima Kiss for The Guardian on Wednesday 1st September 2010 14.04 UTC.

It keeps crashing on me, but I’ve had enough of a blast to be inspired – it’s the heavenly Arcade Fire video built in collaboration with Google and director Chris Milk.

The Wilderness Downtown combines Arcade Fire’s We Used To Wait with some beautiful animation and footage – courtesy of Street View – of your childhood home – made all the more poignant for me because it was bulldozed a few years ago.

Thomas Gayno from Google’s Creative Labs described it on the Chrome Blog: "It features a mash-up of Google Maps and Google Street View with HTML5 canvas, HTML5 audio and video, an interactive drawing tool, and choreographed windows that dance around the screen. These modern web technologies have helped us craft an experience that is personalised and unique for each viewer, as you virtually run through the streets where you grew up."

The Chrome Experiments blog explains each technique, including the flock of birds that responds to the music and mouse movements (created with the HTML5 Canvas 3D engine), film clips played in windows at custom sizes thanks to HTML5, and various colour correction, drawing and animation techniques.

I’ve watched thousands of videos thanks to the curse of the viral video chart and nothing has come close to this for originality, imagination and for that inspired piece of personalised storytelling.

There’s plenty more inspiration on the Chrome Experiments blog; Bomomo is pretty slick, and Canopy is hypnotic.

© Guardian News & Media Limited 2010

Read this! Gmail now prioritises your inbox

This article, titled “Read this! Gmail now prioritises your inbox”, was written by Jemima Kiss for The Guardian on Tuesday 31st August 2010 09.54 UTC.

Gmail’s latest feature is arguably the biggest innovation since the service launched in April 2004.

‘Priority inbox’ learns from your email usage patterns and begins to prioritise messages that it thinks you’ll be most likely to read. Your inbox is divided into three sections: important and unread, starred and everything else.

The classification should improve, because you can mark messages with ‘less important’ or ‘more important’, and Gmail will learn to reclassify accordingly. It’s like the inverse of junk mail filtering.

Software engineer Doug Aberdeen on the official Gmail blog described this as “a new way of taking on information overload”.

“Gmail uses a variety of signals to predict which messages are important, including the people you email most (if you email Bob a lot, a message from Bob is probably important) and which messages you open and reply to (these are likely more important than the ones you skip over).”
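Google hasn’t published the model, but the signals Aberdeen lists suggest a weighted score along these lines. The weights, addresses and cutoffs below are all made up:

```python
def priority_score(message, sent_counts, replied_to, opened):
    """Score a message using Gmail-style signals: how often you email
    the sender, and whether you tend to reply to and open their mail."""
    sender = message["from"]
    score = min(sent_counts.get(sender, 0), 20) * 0.05  # frequent correspondent
    if sender in replied_to:
        score += 1.0   # you reply to this sender
    if sender in opened:
        score += 0.5   # you at least open their messages
    return score

inbox = [{"from": "newsletter@example.com"}, {"from": "bob@example.com"}]
sent_counts = {"bob@example.com": 12}
replied_to = {"bob@example.com"}
opened = {"bob@example.com"}

# Bob, whom you email and reply to often, outranks the newsletter.
inbox.sort(key=lambda m: priority_score(m, sent_counts, replied_to, opened),
           reverse=True)
```

Marking a message ‘more important’ or ‘less important’ would nudge weights like these, which is why the classification improves with use.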

Priority inbox is slowly rolling out across Gmail services. It hasn’t appeared in my personal account yet, but it should within the next few days, along with the accounts of Google Apps users (if their administrator has opted to ‘Enable pre-release features’).

Drag and drop, launched in April, helped a little. Filters help, for those who can be bothered to set them up. But priority inbox could make a significant difference, and if Wave wasn’t quite the right format for centralising and streamlining messages, then this is a more usable step in that direction.

© Guardian News & Media Limited 2010
