Geoff Mulgan’s blog: “Here I suggest three complementary ways of thinking about the future which provide partial protection against the pitfalls.
The shape of the future
First, create your own composite future by engaging with the trends. There are many methods available for mapping the future – from Foresight to scenarios to the Delphi method.
Behind all of them are implicit views about the shapes of change. Indeed, any quantitative exploration of the future uses a common language of patterns, which summarises the fact that some things will go up, some go down, some change suddenly and some not at all.
All of us have implicit or explicit assumptions about these. But it’s rare to interrogate them systematically and test whether our assumptions about what fits in which category are right.
Let’s start with the J-shaped curves. Many of the long-term trends around physical phenomena look J-curved: rising carbon emissions, water usage and energy consumption have been exponential in shape over the centuries. As we know, physical constraints mean that these simply can’t go on – the J curves have to become S-shaped sooner or later, or else crash. That is the ecological challenge of the 21st century.
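A minimal sketch of that J-to-S dynamic, with made-up growth and capacity parameters (illustrative only, not numbers from the post): an exponential curve and a logistic curve start out together, then diverge as the constraint bites.

```python
import math

# Illustrative parameters (not real data): 3% growth per time step,
# a carrying capacity K, and a starting level x0.
r, K, x0 = 0.03, 100.0, 1.0

def j_curve(t):
    """Unconstrained exponential growth: keeps accelerating (J-shaped)."""
    return x0 * math.exp(r * t)

def s_curve(t):
    """Logistic growth: tracks the J curve early, then flattens near K (S-shaped)."""
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

for t in (0, 50, 100, 200, 300):
    print(f"t={t:3d}  J={j_curve(t):10.1f}  S={s_curve(t):6.1f}")
# Early on the two curves are nearly identical; eventually the J curve
# explodes while the S curve saturates at the carrying capacity K.
```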
New revolutions
But there are other J curves, particularly the ones associated with digital technology. Moore’s Law and Metcalfe’s Law describe the dramatically expanding processing power of chips, and the growing connectedness of the world. Some hope that the sheer pace of technological progress will somehow solve the ecological challenges. That hope has more to do with culture than evidence. But these J curves are much faster than the physical ones – any factor that doubles every 18 months achieves stupendous rates of change over decades.
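The arithmetic behind that claim is easy to check: anything that doubles every 18 months grows by a factor of 2^(t/1.5) after t years. A quick sketch (the time horizons are arbitrary):

```python
# Growth factor for a quantity that doubles every 18 months (1.5 years):
# roughly a hundredfold per decade, about a million-fold over 30 years.
for years in (10, 20, 30):
    factor = 2 ** (years / 1.5)
    print(f"after {years} years: x{factor:,.0f}")
```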
That’s why we can be pretty confident that digital technologies will continue to throw up new revolutions – whether around the Internet of Things, the quantified self, machine learning, robots, mass surveillance or new kinds of social movement. But what form these will take is much harder to predict, and most digital prediction has been unreliable – we have YouTube but not the interactive TV many predicted (when did you last vote on how a drama should end?); relatively simple SMS and Twitter spread much more than ISDN or fibre to the home. And plausible ideas like the long tail theory turned out to be largely wrong.
If the J curves are dramatic but unusual, much more of the world is shaped by straight-line trends – like ageing, or the rising price of disease that some predict will take healthcare costs up towards 40 or 50% of GDP by late in the century, or incremental advances in fuel efficiency, or the likely relative growth of the Chinese economy.
Also important are the flat straight lines – the things that probably won’t change in the next decade or two: the continued existence of nation states not unlike those of the 19th century? Air travel making use of fifty-year-old technologies?
Great imponderables
If the Js are the most challenging trends, the most interesting ones are the ‘U’s – the examples of trends bending: like crime, which went up for a century and then started going down; or world population, which has been rising but could start to fall in the later part of this century; or divorce rates, which seem to have plateaued; or Chinese labour supply, which is forecast to turn down in the 2020s.
No one knows if the apparently remorseless upward trends of obesity and depression will turn downwards. No one knows if the next generation in the West will be poorer than their parents. And no one knows if democratic politics will reinvent itself and restore trust. In every case, much depends on what we do. None of these trends is a fact of nature or an act of God.
That’s one reason why it’s good to immerse yourself in these trends and interrogate what shape they really are. Out of that interrogation we can build a rough mental model and generate our own hypotheses – ones not based on the latest fashion or bestseller but hopefully on a sense of what the data shows and in particular what’s happening to the deltas – the current rates of change of different phenomena.”
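Mulgan’s advice to watch the deltas can be made concrete: the first differences of a time series hint at which shape you are looking at. Below is a rough heuristic sketch with invented series and arbitrary thresholds (my illustration, not a method from the post):

```python
def describe_shape(series):
    """Crude shape heuristic from deltas (first differences).
    The 1.5x and 10% thresholds are arbitrary illustrative choices."""
    deltas = [b - a for a, b in zip(series, series[1:])]
    if all(d > 0 for d in deltas) and deltas[-1] > deltas[0] * 1.5:
        return "accelerating upward (J-ish)"
    if all(abs(d - deltas[0]) <= 0.1 * abs(deltas[0] or 1) for d in deltas):
        return "steady (straight line)"
    if deltas[0] > 0 and deltas[-1] < 0:
        return "bending over (a U or an S past its turning point)"
    return "unclear - look closer"

print(describe_shape([1, 2, 4, 8, 16]))      # accelerating upward (J-ish)
print(describe_shape([10, 12, 14, 16, 18]))  # steady (straight line)
print(describe_shape([5, 8, 10, 9, 7]))      # bending over
```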
Open Access
Reports by the UK’s House of Commons, Business, Innovation and Skills Committee: “Open access refers to the immediate, online availability of peer reviewed research articles, free at the point of access (i.e. without subscription charges or paywalls). Open access relates to scholarly articles and related outputs. Open data (which is a separate area of Government policy and outside the scope of this inquiry) refers to the availability of the underlying research data itself. At the heart of the open access movement is the principle that publicly funded research should be publicly accessible. Open access expanded rapidly in the late twentieth century with the growth of the internet and digitisation (the transcription of data into a digital form), as it became possible to disseminate research findings more widely, quickly and cheaply.
Whilst there is widespread agreement that the transition to open access is essential in order to improve access to knowledge, there is a lack of consensus about the best route to achieve it. To achieve open access at scale in the UK, there will need to be a shift away from the dominant subscription-based business model. Inevitably, this will involve a transitional period and considerable change within the scholarly publishing market.
For the UK to transition to open access, an effective, functioning and competitive market in scholarly communications will be vital. The evidence we saw over the course of this inquiry shows that this is currently far from the case, with journal subscription prices rising at rates that are unsustainable for UK universities and other subscribers. There is a significant risk that the Government’s current open access policy will inadvertently encourage and prolong the dysfunctional elements of the scholarly publishing market, which are a major barrier to access.”
See Volume I and Volume II.
Nudge Nation: A New Way to Prod Students Into and Through College
By giving students information-driven suggestions that lead to smarter actions, technology nudges are intended to tackle a range of problems surrounding the process by which students begin college and make their way to graduation.
New approaches are certainly needed….
There are many reasons for low rates of persistence and graduation, including financial problems, the difficulty of juggling non-academic responsibilities such as work and family, and, for some first-generation students, culture shock. But academic engagement and success are major contributors. That’s why colleges are using behavioral nudges, drawing on data analytics and behavioral psychology, to focus on problems that occur along the academic pipeline:
• Poor student organization around the logistics of going to college
• Unwise course selections that increase the risk of failure and extend time to degree
• Inadequate information about academic progress and the need for academic help
• Unfocused support systems that identify struggling students but don’t directly engage with them
• Difficulty tapping into counseling services
These new ventures, whether originating within colleges or created by outside entrepreneurs, are doing things with data that just couldn’t be done in the past—creating giant databases of student course records, for example, to find patterns of success and failure that result when certain kinds of students take certain kinds of courses.”
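A hedged sketch of the kind of pattern-finding described above: given a table of course records, group by course and student segment and compare pass rates; cells with unusually low rates would feed a nudge. The column names and data are invented for illustration, not taken from any real student database.

```python
from collections import defaultdict

# Hypothetical course records: (student_segment, course, passed).
records = [
    ("first_gen",  "calculus_1",  False),
    ("first_gen",  "calculus_1",  False),
    ("first_gen",  "intro_stats", True),
    ("continuing", "calculus_1",  True),
    ("continuing", "calculus_1",  True),
    ("continuing", "intro_stats", True),
]

# Aggregate (passed, taken) counts per (segment, course) cell.
totals = defaultdict(lambda: [0, 0])
for segment, course, passed in records:
    cell = totals[(segment, course)]
    cell[0] += passed   # True counts as 1
    cell[1] += 1

# A low pass rate in a cell might trigger a nudge, e.g. suggesting a
# different course sequence or an early offer of tutoring.
for (segment, course), (passed, taken) in sorted(totals.items()):
    print(f"{segment:10s} {course:12s} pass rate {passed/taken:4.0%} (n={taken})")
```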
Patients Take Control of Their Health Care Online
MIT Technology Review: “Patients are collaborating for better health — and, just maybe, radically reduced health-care costs….Not long ago, Sean Ahrens managed flare-ups of his Crohn’s disease—abdominal pain, vomiting, diarrhea—by calling his doctor and waiting a month for an appointment, only to face an inconclusive array of possible prescriptions. Today, he can call on 4,210 fellow patients in 66 countries who collaborate online to learn which treatments—drugs, diets, acupuncture, meditation, even do-it-yourself infusions of intestinal parasites—bring the most relief.
The online community Ahrens created and launched two years ago, Crohnology.com, is one of the most closely watched experiments in digital health. It lets patients with Crohn’s, colitis, and other inflammatory bowel conditions track symptoms, trade information on different diets and remedies, and generally care for themselves.
The site is at the vanguard of the growing “e-patient” movement that is letting patients take control over their health decisions—and behavior—in ways that could fundamentally change the economics of health care. Investors are particularly interested in the role “peer-to-peer” social networks could play in the $3 trillion U.S. health-care market.
“Patients sharing data about how they feel, the type of treatments they’re using, and how well they’re working is a new behavior,” says Malay Gandhi, chief strategy officer of Rock Health, a San Francisco incubator for health-care startups that invested in Crohnology.com. “If you can get consumers to engage in their health for 15 to 30 minutes a day, there’s the largest opportunity in digital health care.”
Experts say when patients learn from each other, they tend to get fewer tests, make fewer doctors’ visits, and also demand better treatment. “It can lead to better quality, which in many cases will be way more affordable,” says Bob Kocher, an oncologist and former adviser to the Obama administration on health policy.”
Frontiers in Massive Data Analysis
New report from the National Academy of Sciences: “Data mining of massive data sets is transforming the way we think about crisis response, marketing, entertainment, cybersecurity and national intelligence. Collections of documents, images, videos, and networks are being thought of not merely as bit strings to be stored, indexed, and retrieved, but as potential sources of discovery and knowledge, requiring sophisticated analysis techniques that go far beyond classical indexing and keyword counting, aiming to find relational and semantic interpretations of the phenomena underlying the data.
Frontiers in Massive Data Analysis examines the frontier of analyzing massive amounts of data, whether in a static database or streaming through a system. Data at that scale–terabytes and petabytes–is increasingly common in science (e.g., particle physics, remote sensing, genomics), Internet commerce, business analytics, national security, communications, and elsewhere. The tools that work to infer knowledge from data at smaller scales do not necessarily work, or work well, at such massive scale. New tools, skills, and approaches are necessary, and this report identifies many of them, plus promising research directions to explore. Frontiers in Massive Data Analysis discusses pitfalls in trying to infer knowledge from massive data, and it characterizes seven major classes of computation that are common in the analysis of massive data. Overall, this report illustrates the cross-disciplinary knowledge–from computer science, statistics, machine learning, and application disciplines–that must be brought to bear to make useful inferences from massive data.”
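One toy illustration of the report’s point that small-scale tools break at massive scale: computing an exact statistic can require holding all the data, which is impossible for a petabyte stream, whereas a single-pass method such as reservoir sampling keeps a fixed-size uniform sample in O(k) memory no matter how long the stream runs. The sketch below is an editorial aside, not an example from the report.

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of k items from a stream of unknown
    length, using only O(k) memory (the classic Algorithm R)."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)          # fill the reservoir first
        else:
            j = rng.randint(0, i)        # keep item with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

# A stand-in for a stream too large to hold in memory.
stream = (x * x % 9973 for x in range(1_000_000))
print(reservoir_sample(stream, k=5))
```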
Understanding the impact of releasing and re-using open government data
New Report by the European Public Sector Information Platform: “While open data portals and data re-use tools and applications have proliferated at tremendous speed over the last decade, research and understanding about the impact of opening up public sector information and open government data (OGD hereinafter) has lagged behind.
To date, there have been some research efforts to structure the concept of the impact of OGD, suggesting various theories of change, methodologies for measuring them or, in some cases, concrete calculations of the financial benefits that opening government data brings to the table. For instance, the European Commission conducted a study on the pricing of public sector information, which attempted to evaluate the direct and indirect economic impact of opening public data and identified key indicators to monitor the effects of open data portals. Also, the Open Data Research Network issued a background report in April 2012 suggesting a general framework of key indicators to measure the impact of open data initiatives at both the provision and re-use stages.
Building on the research efforts to date, this report will reflect upon the main types of impact OGD may have and will also present key measurement frameworks to observe the change OGD initiatives may bring about.”
Open data for accountable governance: Is data literacy the key to citizen engagement?
Camilla Monckton at UNDP’s Voices of Eurasia blog: “How can technology connect citizens with governments, and how can we foster, harness, and sustain the citizen engagement that is so essential to anti-corruption efforts?
UNDP has worked on a number of projects that use technology to make it easier for citizens to report corruption to authorities:
- Serbia’s SMS corruption reporting in the health sector
- Montenegro’s ‘Be Responsible’ app
- Kosovo’s online corruption reporting site kallxo.com
These projects are showing some promising results, and provide insights into how a more participatory, interactive government could develop.
At the heart of the projects is the ability to use citizen-generated data to identify and report problems for governments to address….
Wanted: Citizen experts
As Kenneth Cukier, The Economist’s Data Editor, has discussed, data literacy will become the new computer literacy. Big data is still nascent and it is impossible to predict exactly how it will affect society as a whole. What we do know is that it is here to stay and data literacy will be integral to our lives.
It is essential that we understand how to interact with big data and the possibilities it holds.
Data literacy needs to be integrated into the education system. Educating non-experts to analyze data is critical to enabling broad participation in this new data age.
As technology advances, key government functions become automated, and government data sharing increases, newer ways for citizens to engage will multiply.
Technology changes rapidly, but the human mind and societal habits cannot. After years of closed government and bureaucratic inefficiency, adaptation of a new approach to governance will take time and education.
We need to bring up a generation that sees being involved in government decisions as normal, and that views participatory government as a right, not an ‘innovative’ service extended by governments.
What now?
In the meantime, while data literacy lies in the hands of a few, we must continue to connect those who have the technological skills with citizen experts seeking to change their communities for the better – as has been done at many a Social Innovation Camp recently (in Montenegro, Ukraine and Armenia at Mardamej and Mardamej Reloaded, and across the region at Hurilab).
The social innovation camp and hackathon models are an increasingly debated topic (covered by Susannah Vila, David Eaves, Alex Howard and Clay Johnson).
On the whole, evaluations are leading to newer models that focus on greater integration of mentorship to increase sustainability – which I readily support. However, I do have one comment:
Social innovation camps are often criticized for a lack of sustainability – a claim based on the limited number of apps that go beyond the prototype phase. I find a certain sense of irony in this, for isn’t this what innovation is about: opening oneself up to the risk of failure in the hope of striking something great?
In the words of Vinod Khosla:
“No failure means no risk, which means nothing new.”
As more data is released, the opportunity for new apps and new ways for citizen interaction will multiply and, who knows, someone might come along and transform government just as TripAdvisor transformed the travel industry.”
Coase’s theories predicted Internet’s impact on how business is done
Don Tapscott in The Globe and Mail: “Renowned economist Ronald Coase died last week at the age of 102. Among his many achievements, Mr. Coase was awarded the 1991 Nobel Prize in Economics, largely for his inspiring 1937 paper The Nature of the Firm. The Nobel committee applauded the academic for his “discovery and clarification of the significance of transaction costs … for the institutional structure and functioning of the economy.”
Mr. Coase’s enduring legacy may well be that 60 years later, his paper and theories help us understand the Internet’s impact on business, the economy and all our institutions… Mr. Coase wondered why there was no market within the firm. Why is it unprofitable to have each worker, each step in the production process, become an independent buyer and seller? Why doesn’t the draftsperson auction their services to the engineer? Why is it that the engineer does not sell designs to the highest bidder? The answer, Mr. Coase argued, is that the marketplace itself creates friction that prevents this from happening.
Mr. Coase argued that this friction gave rise to transaction costs – or to put it more broadly, collaboration or relationship costs. There are three types of these relationship costs. First are search costs, such as the hunt for appropriate suppliers. Second are contractual costs, including price and contract negotiations. Third are the co-ordination costs of meshing the different products and processes.
The upshot is that most vertically integrated corporations found it cheaper and simpler to perform most functions in-house, rather than incurring the cost, hassle and risk of constant transactions with outside partners….This is no longer the case. Many behemoths have lost market share to more supple competitors. Digital technologies slash transaction and collaboration costs. Smart companies are making their boundaries porous, using the Internet to harness knowledge, resources and capabilities outside the company. Everywhere, leading firms set a context for innovation and then invite their customers, partners and other third parties to co-create their products and services.
Today’s economic engines are Internet-based clusters of businesses. While each company retains its identity, companies function together, creating more wealth than they could ever hope to create individually. Where corporations were once gigantic, new business ecosystems tend toward the amorphous.
Procter & Gamble now gets 60 per cent of its innovation from outside corporate walls. Boeing has built a massive ecosystem to design and manufacture jumbo jets. China’s motorcycle industry, which consists of dozens of companies collaborating with no single company pulling the strings, now comprises 40 per cent of global motorcycle production.
Looked at one way, Amazon.com is a website with many employees that ships books. Looked at another way, however, Amazon is a vast ecosystem that includes authors, publishers, customers who write reviews for the site, delivery companies like UPS, and tens of thousands of affiliates that market products and arrange fulfilment through the Amazon network. Hundreds of thousands of people are involved in Amazon’s viral marketing network.
This is leading to the biggest change to the corporation in a century and altering how we orchestrate capability to innovate, create goods and services and engage with the world. From now on, the ecosystem itself, not the corporation per se, should serve as the point of departure for every business strategist seeking to understand the new economy – and for every manager, entrepreneur and investor seeking to prosper in it.
Nor does the Internet tonic apply only to corporations. The Web is dropping transaction costs everywhere – enabling networked approaches to almost every institution in society, from government, media, science and health care to our energy grid, transportation systems and institutions for global problem solving.
Governments can change from being vertically integrated, industrial-age bureaucracies to become networks. By releasing their treasures of raw data, governments can now become platforms upon which companies, NGOs, academics, foundations, individuals and other government agencies can collaborate to create public value…”
Can The "GitHub For Science" Convince Researchers To Open-Source Their Data?
Interview at Co.Labs: “Science has a problem: Researchers don’t share their data. A new startup wants to change that by melding GitHub and Google Docs…Nathan Jenkins is a condensed matter physicist and programmer who has worked at CERN, the European Organization for Nuclear Research. He recently left his post-doc program at New York University to cofound Authorea, a platform that helps scientists draft, collaborate on, share, and publish academic articles. We talked with him about the idea behind Authorea, the open science movement, and the future of scientific publishing.”
Public Open Data: The Good, the Bad, the Future
Camille Crittenden at IDEALAB: “Some of the most powerful tools combine official public data with social media or other citizen input, such as the recent partnership between Yelp and the public health departments in New York and San Francisco for restaurant hygiene inspection ratings. In other contexts, such tools can help uncover and ultimately reduce corruption by making it easier to “follow the money.”
Despite the opportunities offered by “free data,” this trend also raises new challenges and concerns, among them, personal privacy and security. While attention has been devoted to the unsettling power of big data analysis and “predictive analytics” for corporate marketing, similar questions could be asked about the value of public data. Does it contribute to community cohesion that I can find out with a single query how much my neighbors paid for their house or (if employed by public agencies) their salaries? Indeed, some studies suggest that greater transparency leads not to greater trust in government but to resignation and apathy.
Exposing certain law enforcement data also increases the possibility of vigilantism. California law requires the registration and publication of the home addresses of known sex offenders, for instance. Or consider the controversy and online threats that erupted when, shortly after the Newtown tragedy, a newspaper in New York posted an interactive map of gun permit owners in nearby counties.
…Policymakers and officials must still mind the “big data gap.” So what does the future hold for open data? Publishing data is only one part of the information ecosystem. To be useful, tools must be developed for cleaning, sorting, analyzing and visualizing it as well. …
For-profit companies and non-profit watchdog organizations will continue to emerge and expand, building on the foundation of this data flood. Public-private partnerships such as those between San Francisco and Appallicious or Granicus, startups created by Code for America’s Incubator, and non-partisan organizations like the Sunlight Foundation and MapLight rely on public data repositories for their innovative applications and analysis.
Making public data more accessible is an important goal and offers enormous potential to increase civic engagement. To make the most effective and equitable use of this resource for the public good, cities and other government entities should invest in the personnel and equipment — hardware and software — to make it universally accessible. At the same time, Chief Data Officers (or equivalent roles) should also be alert to the often hidden challenges of equity, inclusion, privacy, and security.”