Paper by R. Grimm, C. Fox, S. Baines, and K. Albertson in Innovation: The European Journal of Social Science Research: “Social innovation discourses see in social challenges opportunities to make societies more sustainable and cohesive through inclusive practices, coproduction and pro-active grassroots initiatives. In this paper we are concerned first that the concept has been stretched in so many directions that it is at breaking point. We illustrate this by documenting the varied uses of social innovation in different academic and policy discourses. Second, we assume that, if social innovation is to be a useful concept for policy-makers, then it must tell us something about what adjustments are needed to develop an effective political economy that is social innovation ready. Finally, we argue that what is needed is more theoretical and empirical work to help social innovation to develop into an effective policy tool.”
NEW: The Open Governance Knowledge Base
In its continued efforts to organize and disseminate learnings in the field of technology-enabled governance innovation, The Governance Lab is introducing a collaborative, wiki-style repository of information and research at the nexus of technology, governance and citizenship. For now we’re calling it the Open Governance Knowledge Base, and it goes live today.
Our goal in creating this collaborative platform is to provide a single source of research and insights related to the broad, interdisciplinary field of open governance for the benefit of: 1) decision-makers in governing institutions seeking information and inspiration to guide their efforts to increase openness; 2) academics seeking to enrich and expand their scholarly pursuits in this field; 3) technology practitioners seeking insights and examples of familiar tools being used to solve public problems; and 4) average citizens simply seeking interesting information on a complex, evolving topic area.
While you can already find some pre-populated information and research on the platform, we need your help! The field of open governance is too vast, complex and interdisciplinary to meaningfully document without broad collaboration.
Here’s how you can help to ensure this shared resource is as useful and engaging as possible:
- What should we call the platform? We want your title suggestions. Leave your ideas in the comments or tweet them to us @TheGovLab.
- And more importantly: Share your knowledge and research. Take a look at what we’ve posted, create an account, refer to this MediaWiki formatting guide as needed and start editing!
Crisis response needs to be a science, not an art
Jimmy Whitworth in the Financial Times: “…It is an imperative to offer shelter, nutrition, sanitation and medical care to those suddenly bereft of it. Without aid, humanitarian crises would cause still greater suffering. Yet admiration for the agencies that deliver relief should not blind us to the need to ensure that it is well delivered. Humanitarian responses must be founded on good evidence.
The evidence base, unfortunately, is weak. We know that storms, earthquakes and conflicts have devastating consequences for health and wellbeing, and that not responding is not an option, but we know surprisingly little about how best to go about it. Not only is evidence-based practice rare in humanitarian relief operations, it is often impossible.
Questions about how best to deliver clean water or adequate shelter, and even about which health needs should be prioritised as the most pressing, have often been barely researched. Indeed, the evidence gap is so great that the Humanitarian Practice Network has highlighted a “dire lack of credible data to help us understand just how much populations in crisis suffer, and to what extent relief operations are able to relieve that suffering”. No wonder aid responses are often characterised as messy.
Good practice often rests on past practice rather than research. The Bible of humanitarian relief is a document called the Sphere handbook, an important initiative to set minimum standards for provision of health, nutrition, sanitation and shelter. Yet analysis of the 2004 handbook has revealed that just 13 per cent of its 346 standards were supported by good evidence of relevance to health. The handbook, for example, recommended that refugee camps should prioritise measles vaccination – a worthwhile goal, but not one that should clearly be favoured over control of other infectious diseases.
Also under-researched is the question of how best to provide types of relief that everybody agrees meet essential needs. Access to clean water is a clear priority for almost all populations in crisis but little is understood about how this is most efficiently delivered. Is it best to ship bottled water to stricken areas? Are tankers of clean water more effective? Or can water purification tablets do the job? The summer floods in northern India made it clear that there is little good evidence one way or another.
Adequate shelter, too, is a human essential in all but the most benign environments but, once again, the evidence base about how best to provide it is limited. There is a school of thought that building transitional shelter from locally available materials is better in the long run than housing people under tents, tarpaulins and plastic, which, if accurate, would have far-reaching consequences for standard practice. But too little research has been done…
Researchers also face significant challenges in building a better evidence base. They can struggle to secure access to disaster zones when getting relief in is the priority. The timescales involved in applying for funding and ethical approval, too, make it difficult for them to move quickly enough to set up a study in the critical post-disaster period.
It is to address this that Enhancing Learning and Research for Humanitarian Assistance, with the support of the Wellcome Trust and the UK Department for International Development, recently launched an £8m research programme that investigates these issues.”
White House Unveils Big Data Projects, Round Two
Information Week: “The White House Office of Science and Technology Policy (OSTP) and Networking and Information Technology R&D program (NITRD) on Tuesday introduced a slew of new big-data collaboration projects aimed at stimulating private-sector interest in federal data. The initiatives, announced at the White House-sponsored “Data to Knowledge to Action” event, are targeted at fields as varied as medical research, geointelligence, economics, and linguistics.
The new projects are a continuation of the Obama Administration’s Big Data Initiative, announced in March 2012, when the first round of big-data projects was presented.
Thomas Kalil, OSTP’s deputy director for technology and innovation, said that “dozens of new partnerships — more than 90 organizations” are pursuing these new collaborative projects, including many of the best-known American technology, pharmaceutical, and research companies.
Among the initiatives, Amazon Web Services (AWS) and NASA have set up the NASA Earth eXchange, or NEX, a collaborative network to provide space-based data about our planet to researchers in Earth science. AWS will host much of NASA’s Earth-observation data as an AWS Public Data Set, making it possible, for instance, to crowdsource research projects.
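Because the archive is exposed as an AWS Public Data Set, it can in principle be read without AWS credentials. The short Python sketch below shows anonymous access with boto3; the bucket name and key prefix are illustrative assumptions, not a documented layout of the NEX archive.

```python
# Minimal sketch: listing objects in a public S3 data set anonymously.
# The bucket and prefix below are assumptions for illustration only.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Public data sets permit unsigned (credential-free) requests.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# List a handful of objects under a hypothetical Earth-science prefix.
response = s3.list_objects_v2(Bucket="nasanex", Prefix="NEX-DCP30/", MaxKeys=5)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```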
An estimated 4.4 million jobs are being created between now and 2015 to support big-data projects. Employers, educational institutions, and government agencies are working to build the educational infrastructure to provide students with the skills they need to fill those jobs.
To help train new workers, IBM, for instance, has created a new assessment tool that gives university students feedback on their readiness for number-crunching careers in both the public and private sector. Eight universities that have a big data and analytics curriculum — Fordham, George Washington, Illinois Institute of Technology, University of Massachusetts-Boston, Northwestern, Ohio State, Southern Methodist, and the University of Virginia — will receive the assessment tool.
OSTP is organizing an initiative to create a “weather service” for pandemics, Kalil said, a way to use big data to identify and predict pandemics as early as possible in order to plan and prepare for — and hopefully mitigate — their effects.
The National Institutes of Health (NIH), meanwhile, is undertaking its “Big Data to Knowledge” (BD2K) initiative to develop a range of standards, tools, software, and other approaches to make use of massive amounts of data being generated by the health and medical research community….”
See also:
November 12, 2013 – Fact Sheet: Progress by Federal Agencies: Data to Knowledge to Action
November 12, 2013 – Fact Sheet: New Announcements: Data to Knowledge to Action
November 12, 2013 – Press Release: Data to Knowledge to Action Event
Candy Crush-style game helps scientists fight tree disease
Springwise: “Crowdsourcing science research isn’t a new thing — we’ve already seen Cancer Research UK enable anyone to help out by identifying cells through its ClicktoCure site. Now the Sainsbury Laboratory has turned genome research into a game called Fraxinus, which could help find a cure for the Chalara ash dieback disease.
Developed as a Facebook app, the game presents players with a number of colored, diamond-shaped blocks that represent the nucleotides that make up the DNA of ash trees. In each round, they have to try to match a particular string of nucleotides as best they can. Users with the nearest match get to ‘claim’ that pattern, but it can be stolen by others with a better sequence. Each sequence gives scientists insight into which gene variants may confer resistance to the disease and gives them a better shot at replenishing ash woodland.
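As a rough illustration of the mechanic described above (the excerpt does not publish the game’s actual scoring), a match between a player’s arrangement and a target read could be scored by counting agreeing positions:

```python
# Illustrative sketch of positional sequence matching, as in a
# Fraxinus-style game. This is a guess at the mechanic, not the
# game's actual code.

def match_score(player: str, target: str) -> int:
    """Number of positions where the player's nucleotides agree with the target."""
    return sum(p == t for p, t in zip(player, target))

target = "ACGTTGCA"
attempts = ["ACGTAGCA", "ACTTTGCA", "GCGTTGCA"]
best = max(attempts, key=lambda s: match_score(s, target))
# The closest match would 'claim' the pattern until a better one appears.
print(best, match_score(best, target))
```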
According to the creators, Fraxinus has proved an addictive hit with young players, who are helping a good cause while playing. Are there other ways to gamify crowdsourced science research? Website: www.tsl.ac.uk”
What future do you want? Commission invites votes on what Europe could look like in 2050 to help steer future policy and research planning
European Commission – MEMO: “Vice-President Neelie Kroes, responsible for the Digital Agenda, is inviting people to join a voting and ranking process on 11 visions of what the world could look like in 20-40 years. The Commission is seeking views on living and learning, leisure and working in Europe in 2050, to steer long-term policy or research planning.
The visions have been gathered over the past year through the Futurium, an online debate platform that allows policymakers not only to consult citizens but to collaborate and “co-create” with them, and at events throughout Europe. Thousands of thinkers – from high school students to the Erasmus Students Network, from entrepreneurs and internet pioneers to philosophers and university professors – have engaged in a collective inquiry: a means of crowd-sourcing what our future world could look like.
Eleven over-arching themes have been drawn together from more than 200 ideas for the future. From today, everyone is invited to join the debate and offer their ratings and rankings of the various ideas. The results of the feedback will help the European Commission make better decisions about how to fund projects and ideas that both shape the future and get Europe ready for that future….
The Futurium is a foresight project run by DG CONNECT, based on an open source approach. It develops visions of society, technologies, attitudes and trends in 2040-2050 and uses these, for example, as potential blueprints for future policy choices or EU research and innovation funding priorities.
It is an online platform developed to capture emerging trends and enable interested citizens to co-create compelling visions of the futures that matter to them.
This crowd-sourcing approach provides useful insights on:
- vision: where people want to go, and how desirable and likely the visions posted on the platform are;
- policy ideas: what should ideally be done to realise the futures; the possible impacts and plausibility of policy ideas;
- evidence: scientific and other evidence to support the visions and policy ideas.
….
Connecting policy making to people: in an increasingly connected society, online outreach and engagement is an essential response to the growing demand for participation, helping to capture new ideas and to broaden the legitimacy of the policy making process (IP/10/1296). The Futurium is an early prototype of a more general policy-making model described in the paper “The Futurium—a Foresight Platform for Evidence-Based and Participatory Policymaking”.
The Futurium was developed to lay the groundwork for future policy proposals which could be considered by the European Parliament and the European Commission under their new mandates as of 2014. But the Futurium’s open, flexible architecture makes it easily adaptable to any policy-making context where thinking ahead, stakeholder participation and scientific evidence are needed.”
Concerns about opening up data, and responses which have proved effective
Google doc by Christopher Gutteridge, University of Southampton and Alexander Dutton, University of Oxford: “This document is inspired by the open data excuses bingo card. Someone asked which responses have proved effective. This document is a work in progress based on our experience. Carly Strasser has also written at the Data Pub blog about these issues from an Open Science and research data perspective. You may also be interested in How to make a business case for open data, published by the ODI.
We’ll get spam…
Terrorists might use the data…
People will contact us to ask about stuff…
People will misinterpret the data…
It’s too big…
It’s not very interesting…
We might want to use it in a research paper…
There’s no API to that system…
We’re worried about the Data Protection Act…
We’re not sure that we own it…
I don’t mind making it open, but I worry someone else might object…
It’s too complicated…
Our data is embarrassingly bad…
It’s not a priority and we’re busy…
Our lawyers want to make a custom license…
It changes too quickly…
There’s already a project in progress which sounds similar…
Some of what you asked for is confidential…
I don’t own the data, so can’t give you permission…
We don’t have that data…
That data is already published via (external organisation X)….
We can’t provide that dataset because one part is not possible…
What if something breaks and the open version becomes out of date?…
We can’t see the benefit…
What if we want to sell access to this data…?
If we publish this data, people might sue us…
We want people to come direct to us so we know why they want the data…
Scientific Humanities
The course provides concepts and methods to:
- learn the basics of the field called “science and technology studies”, a vast corpus of literature developed over the last forty years to give a realistic description of knowledge production
- handle the flood of different opinions about contentious issues and order the various positions by using the tools now available through digital media
- comment on those different pieces of news in a more articulate way through a specifically designed blog.
Course Format: the course is organized in 8 sequences. It displays multimedia content (images, video, original documents).
Bruno Latour was trained as a philosopher and an anthropologist. From 1982 to 2006 he was a professor at the CSI (Ecole des mines) in Paris. He is now a professor at Sciences Po, where he created the medialab in 2009. He became famous for his social studies of science and technology, and developed, with others, the widely known “Actor Network Theory”.
http://www.bruno-latour.fr/ ”
Selected Readings on Linked Data and the Semantic Web
The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of linked data and the semantic web was originally published in 2013.
Linked Data and the Semantic Web movement are seeking to make our growing body of digital knowledge and information more interconnected, searchable, machine-readable and useful. Championed by Sir Tim Berners-Lee and the W3C, the approach is defined by Christian Bizer, Tom Heath and Berners-Lee as “data published to the Web in such a way that it is machine-readable, its meaning is explicitly defined, it is linked to other external data sets, and can in turn be linked to from external data sets.” In other words, Linked Data and the Semantic Web seek to do for data what the Web did for documents. Additionally, the evolving capability of linking together different forms of data is fueling the potentially transformative rise of social machines – “processes in which the people do the creative work and the machine does the administration.”
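To make that definition concrete, here is a minimal Python sketch using the rdflib library: a single machine-readable statement whose subject, predicate and object are all URIs, so the data links out to an external data set. The resource names are illustrative, not drawn from any of the readings below.

```python
# Minimal Linked Data sketch with rdflib: one triple whose subject,
# predicate and object are dereferenceable URIs. Resource names are
# illustrative only.
from rdflib import Graph, URIRef, Namespace

FOAF = Namespace("http://xmlns.com/foaf/0.1/")
g = Graph()

# "The dataset at example.org was made by the person DBpedia describes
# as Tim Berners-Lee": explicit meaning, linked to an external data set.
g.add((
    URIRef("http://example.org/dataset/1"),
    FOAF.maker,
    URIRef("http://dbpedia.org/resource/Tim_Berners-Lee"),
))

print(g.serialize(format="turtle"))
```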
Selected Reading List (in alphabetical order)
- Harith Alani, David Dupplaw, John Sheridan, Kieron O’Hara, John Darlington, Nigel Shadbolt and Carol Tullo — Unlocking the Potential of Public Sector Information with Semantic Web Technology — a paper discussing the potential of using Semantic Web technology to increase the value of public sector information already in existence.
- Tim Berners-Lee, James Hendler and Ora Lassila — The Semantic Web — an introduction to the concept of the Semantic Web and its transformative potential.
- Christian Bizer, Tom Heath and Tim Berners-Lee — Linked Data – The Story So Far — a paper exploring the challenges, potential and successes of Linked Data almost a decade after its introduction.
- Li Ding, Dominic Difranzo, Sarah Magidson, Deborah L. Mcguinness and Jim Hendler — Data-Gov Wiki: Towards Linked Government Data — a look at the role of Semantic Web technologies in converting, enhancing and using linked government data.
- Evangelos Kalampokis, Michael Hausenblas and Konstantinos Tarabanis — Combining Social and Government Open Data for Participatory Decision-Making — a paper that proposes a data architecture for participatory decision-making based on linking subjective social data and objective government data.
- Kaiser Rady — Publishing the Public Sector Legal Information in the Era of the Semantic Web — an argument in favor of publishing public sector legal information as Linked Data.
- Nigel Shadbolt, Kieron O’Hara, Tim Berners-Lee, Nicholas Gibbins, Hugh Glaser, Wendy Hall, and m.c. schraefel — Linked Open Government Data: Lessons from Data.gov.uk — a paper discussing the opportunities and challenges related to integrating Open Government Data onto the Linked Data Web.
- Michael Vitale, Anni Rowland-Campbell, Valentina Cardo and Peter Thompson — The Implications of Government as a “Social Machine” for Making and Implementing Market-based Policy — a report discussing the evolving role of government as a social machine and its potential to reimagine the relationship between citizens and government.
Annotated Selected Reading List (in alphabetical order)
Alani, Harith, David Dupplaw, John Sheridan, Kieron O’Hara, John Darlington, Nigel Shadbolt, and Carol Tullo. “Unlocking the Potential of Public Sector Information with Semantic Web Technology,” 2007. http://bit.ly/17fMbCt.
- This paper explores the potential of using Semantic Web technology to increase the value of public sector information already in existence.
- The authors note that “[g]overnments often hold very rich data and whilst much of this information is published and available for re-use by others, it is often trapped by poor data structures, locked up in legacy data formats or in fragmented databases. One of the great benefits that Semantic Web (SW) technology offers is facilitating the large scale integration and sharing of distributed data sources.”
- They also argue that Linked Data and the Semantic Web are growing in use and visibility in other sectors, but government has been slower to adapt: “The adoption of Semantic Web technology to allow for more efficient use of data in order to add value is becoming more common where efficiency and value-added are important parameters, for example in business and science. However, in the field of government there are other parameters to be taken into account (e.g. confidentiality), and the cost-benefit analysis is more complex.” In spite of that complexity, the authors’ work “was intended to show that SW technology could be valuable in the governmental context.”
Berners-Lee, Tim, James Hendler, and Ora Lassila. “The Semantic Web.” Scientific American 284, no. 5 (2001): 28–37. http://bit.ly/Hhp9AZ.
- In this article, Sir Tim Berners-Lee, James Hendler and Ora Lassila introduce the Semantic Web, “a new form of Web content that is meaningful to computers [and] will unleash a revolution of new possibilities.”
- The authors argue that the evolution of linked data and the Semantic Web “lets anyone express new concepts that they invent with minimal effort. Its unifying logical language will enable these concepts to be progressively linked into a universal Web. This structure will open up the knowledge and workings of humankind to meaningful analysis by software agents, providing a new class of tools by which we can live, work and learn together.”
Bizer, Christian, Tom Heath, and Tim Berners-Lee. “Linked Data – The Story So Far.” International Journal on Semantic Web and Information Systems (IJSWIS) 5, no. 3 (2009): 1–22. http://bit.ly/HedpPO.
- In this paper, the authors take stock of Linked Data’s challenges, potential and successes close to a decade after its introduction. They build their argument for increasingly linked data by referring to the incredible value creation of the Web: “Despite the inarguable benefits the Web provides, until recently the same principles that enabled the Web of documents to flourish have not been applied to data.”
- The authors expect that “Linked Data will enable a significant evolutionary step in leading the Web to its full potential” if a number of research challenges can be adequately addressed, both technical, like interaction paradigms and data fusion; and non-technical, like licensing, quality and privacy.
Ding, Li, Dominic Difranzo, Sarah Magidson, Deborah L. Mcguinness, and Jim Hendler. Data-Gov Wiki: Towards Linked Government Data, n.d. http://bit.ly/1h3ATHz.
- In this paper, the authors “investigate the role of Semantic Web technologies in converting, enhancing and using linked government data” in the context of Data-gov Wiki, a project that attempts to integrate datasets found at Data.gov into the Linking Open Data (LOD) cloud.
- The paper features discussion and “practical strategies” based on four key issue areas: Making Government Data Linkable, Linking Government Data, Supporting the Use of Linked Government Data and Preserving Knowledge Provenance.
Kalampokis, Evangelos, Michael Hausenblas, and Konstantinos Tarabanis. “Combining Social and Government Open Data for Participatory Decision-Making.” In Electronic Participation, edited by Efthimios Tambouris, Ann Macintosh, and Hans de Bruijn, 36–47. Lecture Notes in Computer Science 6847. Springer Berlin Heidelberg, 2011. http://bit.ly/17hsj4a.
- This paper presents a proposed data architecture for “supporting participatory decision-making based on the integration and analysis of social and government data.” The authors believe that their approach will “(i) allow decision makers to understand and predict public opinion and reaction about specific decisions; and (ii) enable citizens to inadvertently contribute in decision-making.”
- The proposed approach, “based on the use of the linked data paradigm,” draws on subjective social data and objective government data in two phases: Data Collection and Filtering and Data Analysis. “The aim of the former phase is to narrow social data based on criteria such as the topic of the decision and the target group that is affected by the decision. The aim of the latter phase is to predict public opinion and reactions using independent variables related to both subjective social and objective government data.”
Rady, Kaiser. Publishing the Public Sector Legal Information in the Era of the Semantic Web. SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, 2012. http://bit.ly/17fMiOp.
- Following an EU directive calling for the release of public sector information by member states, this study examines the “uniqueness” of creating and publishing primary legal source documents on the web and highlights “the most recent technological strategy used to structure, link and publish data online (the Semantic Web).”
- Rady argues for public sector legal information to be published as “open-linked-data in line with the new approach for the web.” He believes that if data is created and published in this form, “the data will be more independent from devices and applications and could be considered as a component of [a] big information system. That because, it will be well-structured, classified and has the ability to be used and utilized in various combinations to satisfy specific user requirements.”
Shadbolt, Nigel, Kieron O’Hara, Tim Berners-Lee, Nicholas Gibbins, Hugh Glaser, Wendy Hall, and m.c. schraefel. “Linked Open Government Data: Lessons from Data.gov.uk.” IEEE Intelligent Systems 27, no. 3 (May 2012): 16–24. http://bit.ly/1cgdH6R.
- In this paper, the authors view Open Government Data (OGD) as an “opportunity and a challenge for the LDW [Linked Data Web]. The opportunity is to grow by linking with PSI [Public Sector Information] – real-world, useful information with good provenance. The challenge is to manage the sudden influx of heterogeneous data, often with minimal semantics and structure, tailored to highly specific task contexts.”
- As the linking of OGD continues, the authors argue that, “Releasing OGD is not solely a technical problem, although it presents technical challenges. OGD is not a rigid government IT specification, but it demands productive dialogue between data providers, users, and developers. We should expect a ‘perpetual beta,’ in which best practice, technical development, innovative use of data, and citizen-centric politics combine to drive data-release programs.”
- Despite challenges, the authors believe that, “Integrating OGD onto the LDW will vastly increase the scope and richness of the LDW. A reciprocal benefit is that the LDW will provide additional resources and context to enrich OGD. Here, we see the network effect in action, with resources mutually adding value to one another.”
Vitale, Michael, Anni Rowland-Campbell, Valentina Cardo, and Peter Thompson. “The Implications of Government as a ‘Social Machine’ for Making and Implementing Market-based Policy.” Intersticia, September 2013. http://bit.ly/HhMzqD.
- This report from the Australia and New Zealand School of Government (ANZSOG) explores the concept of government as a social machine. The authors draw on the definition of a social machine proposed by Sir Nigel Shadbolt et al. – a system where “human and computational intelligence coalesce in order to achieve a given purpose” – to describe a “new approach to the relationship between citizens and government, facilitated by technological systems which are increasingly becoming intuitive, intelligent and ‘social.'”
- The authors argue that, beyond providing more and varied data to government, the evolving concept of government as a social machine has the potential to alter power dynamics, address the growing lack of trust in public institutions and facilitate greater public involvement in policy-making.
Big Data
Special Report on Big Data by Volta – A newsletter on Science, Technology and Society in Europe: “Locating crime spots, or the next outbreak of a contagious disease, Big Data promises benefits for society as well as business. But more means messier. Do policy-makers know how to use this scale of data-driven decision-making in an effective way for their citizens and ensure their privacy?
90% of the world’s data have been created in the last two years. Every minute, more than 100 million new emails are created, 72 hours of new video are uploaded to YouTube and Google processes more than 2 million searches. Nowadays, almost everyone walks around with a small computer in their pocket, uses the internet on a daily basis and shares photos and information with their friends, family and networks. The digital exhaust we leave behind every day contributes to an enormous amount of data produced, and at the same time leaves electronic traces that contain a great deal of personal information….
Until recently, traditional technology and analysis techniques have not been able to handle this quantity and type of data. But recent technological developments have enabled us to collect, store and process data in new ways. There seem to be no limitations, either to the volume of data or to the technology for storing and analyzing it. Big Data can map a driver’s sitting position to identify a car thief, it can use Google searches to predict outbreaks of the H1N1 flu virus, it can data-mine Twitter to predict the price of rice or use mobile phone top-ups to describe unemployment in Asia.
The word ‘data’ means ‘given’ in Latin. It commonly refers to a description of something that can be recorded and analyzed. While there is no clear definition of the concept of ‘Big Data’, it usually refers to the processing of huge amounts and new types of data that have not been possible with traditional tools.
The notion of Big Data is kind of misleading, argues Robindra Prabhu, a project manager at the Norwegian Board of Technology. “The new development is not necessarily that there are so much more data. It’s rather that data is available to us in a new way. The digitalization of society gives us access to both ‘traditional’, structured data – like the content of a database or register – and unstructured data, for example the content in a text, pictures and videos. Information designed to be read by humans is now also readable by machines. And this development makes a whole new world of data gathering and analysis available. Big Data is exciting not just because of the amount and variety of data out there, but that we can process data about so much more than before.”