Patrick Meier in the Guardian: “Unmanned aerial vehicles (UAVs) capture images faster, cheaper, and at a far higher resolution than satellite imagery. And as John DeRiggi speculates in “Drones for Development?” these attributes will likely lead to a host of applications in development work. In the humanitarian field that future is already upon us — so we need to take a rights-based approach to advance the discussion, improve coordination of UAV flights, and promote regulation that will ensure safety while supporting innovation.
It was the unprecedentedly widespread use of civilian UAVs following Typhoon Haiyan in the Philippines that opened my eyes to UAV use in post-disaster settings. I was in Manila to support the United Nations’ digital humanitarian efforts and came across new UAV projects every other day.
One team was flying rotary-wing UAVs to search for survivors among vast fields of debris that were otherwise inaccessible. Another flew fixed-wing UAVs around Tacloban to assess damage and produce high-quality digital maps. Months later, UAVs are still being used to support recovery and preparedness efforts. One group is working with local mayors to identify which communities are being overlooked in the reconstruction.
Humanitarian UAVs are hardly new. As far back as 2007, the World Food Program teamed up with the University of Torino to build humanitarian UAVs. But today UAVs are much cheaper, safer, and easier to fly. This means more people own personal UAVs. What distinguishes these small UAVs from traditional remote-control airplanes or helicopters is that they are intelligent. Most can be programmed to fly and land autonomously at designated locations. Newer UAVs also have on-board flight-stabilization features that automatically adapt to changing winds, automated collision avoidance systems, and standard fail-safe mechanisms.
While I was surprised by the surge in UAV projects in the Philippines, I was troubled that none of these teams were aware of each other and that most were apparently not sharing their imagery with local communities. What happens when even more UAV teams show up following future disasters? Will they be accompanied by droves of drone journalists and “disaster tourists” equipped with personal UAVs? Will we see thousands of aerial disaster pictures and videos uploaded to social media rather than in the hands of local communities? What are the privacy implications? And what about empowering local communities to deploy their own UAVs?
There were many questions but few answers. So I launched the humanitarian UAV network (UAViators) to bridge the worlds of humanitarian professionals and UAV experts to address these questions. Our first priority was to draft a code of conduct for the use of UAVs in humanitarian settings to hold ourselves accountable while educating new UAV pilots before serious mistakes are made…”
Measuring Governance: What’s the point?
Alan Hudson at Global Integrity: “Over the last 10-15 years, the fact that governance – the institutional arrangements and relationships that shape how effectively things get done – plays a central role in shaping countries’ development trajectories has become widely acknowledged (see for instance the World Bank’s World Development Report of 2011). This acknowledgement has developed hand-in-hand with determined efforts to measure various aspects of governance.
This emphasis on governance and the efforts made to measure its patterns and understand its dynamics is very welcome. There’s no doubt that governance matters and measuring “governance” and its various dimensions can play a useful role in drawing attention to problems and opportunities, in monitoring compliance with standards, in evaluating efforts to support reform, and in informing decisions about what reforms to implement and how.
But in my experience, discussions about governance and its measurement sometimes gloss over a number of key questions (for a similar argument see the early sections of Matt Andrews’ piece on “Governance indicators can make sense”). These include questions about: what is being measured – “governance” is a multi-faceted and rather woolly concept (see Francis Fukuyama’s 2013 piece on “What is Governance?” and various responses); who is going to use the data that is generated; how that data might have an impact; and what results are being sought.
I’ve noticed this most recently in discussions about the inclusion of “governance” in the post-2015 development framework of goals, targets and indicators. From what I’ve seen, the understandable enthusiasm for ensuring that governance gains a place in the post-2015 framework can lead to discussions that: skate over the fact that the evidence that particular forms of governance – often labelled as “Good Governance” – lead to better development outcomes is patchy; fail to effectively grapple with the fact that a cookie-cutter approach to governance is unlikely to work across diverse contexts; pay little attention to the ways in which the data generated might actually be used to make a difference; and give scant consideration to the needs of those who might use the data, particularly citizens and citizens’ groups.
In my view, a failure to address these issues risks inadvertently weakening the case for paying attention to, and measuring, aspects of governance. As the Overseas Development Institute’s excellent report on “Governance targets and indicators for post-2015” put it, in diplomatic language: “including something as a target or indicator does not automatically lead to its improvement and the prize is not just to find governance targets and indicators that can be ‘measured’. Rather, it may be important to reflect on the pathways through which set targets and indicators are thought to lead to better outcomes and on the incentives that might be generated by different measurement approaches.” (See my working document on “Fiscal Governance and Post-2015” for additional thoughts on the inclusion of governance in the post-2015 framework, including notes toward a theory of change).
More broadly, beyond the confines of the post-2015 debate, the risk – and arguably, in many cases, the reality – is that by paying insufficient attention to some key issues, we end up with a lot of data on various aspects of “governance”, but that that data doesn’t get used as much as it might, isn’t very useful for informing context-specific efforts to improve governance, and has limited impact.
To remedy this situation, I’d suggest that any effort to measure aspects of “governance” or to improve the availability, quality, use and impact of governance data (as the Governance Data Alliance is doing – with a Working Group on Problem Statements and Theories of Change) should answer up-front a series of simple questions:
- Outcomes: What outcome(s) are you interested in? Are you interested in improving governance for its own sake, because you regard a particular type of governance as intrinsically valuable, and/or because you think, for instance, that improving governance will help to improve service delivery and accelerate progress against poverty? (See Nathaniel Heller’s post on “outputs versus outcomes in open government”)
- Theory: If your interest is not solely based on the intrinsic value you attach to “governance”, which aspects of “governance” do you think matter in terms of the outcomes – e.g. service delivery and/or reduced poverty – that you’re interested in? What’s the theory of change that links governance to development outcomes? Without such a theory, it’s difficult to decide what to measure!
- Data: In what ways do you think that data about the aspects of governance that you think are important – for intrinsic or extrinsic reasons – will be used to help to drive progress towards the type of governance that you value? To what use might the data be put, by whom, to do what? Or, from the perspective of data-users, what information do they need to take action to improve governance?
Organizations that are involved in generating governance data no doubt spend time considering these questions. But nonetheless, I think there would be value in making that thinking – and information about whether and how the data gets used, and with what effect – explicit….”
Lessons in Mass Collaboration
Elizabeth Walker, Ryan Siegel, Todd Khozein, Nick Skytland, Ali Llewellyn, Thea Aldrich, and Michael Brennan in the Stanford Social Innovation Review: “significant advances in technology in the last two decades have opened possibilities to engage the masses in ways impossible to imagine centuries ago. Beyond coordination, today’s technological capability permits organizations to leverage and focus public interest, talent, and energy through mass collaborative engagement to better understand and solve today’s challenges. And given the rising public awareness of a variety of social, economic, and environmental problems, organizations have seized the opportunity to leverage and lead mass collaborations in the form of hackathons.
Hackathons emerged in the mid-2000s as a popular approach to leverage the expertise of large numbers of individuals to address social issues, often through the creation of online technological solutions. Having led hundreds of mass collaboration initiatives for organizations around the world in diverse cultural contexts, we at SecondMuse offer the following lessons as a starting point for others interested in engaging the masses, as well as challenges others may face.
What Mass Collaboration Looks Like
An early example of a mass collaborative endeavor was Random Hacks of Kindness (RHoK), which formed in 2009. RHoK was initially developed in collaboration with Google, Microsoft, Yahoo!, NASA, the World Bank, and later, HP as a volunteer mobilization effort; it aimed to build technology that would enable communities to respond better to crises such as natural disasters. In 2012, nearly 1,000 participants attended 30 events around the world to address 176 well-defined problems.
In 2013, NASA and SecondMuse led the International Space Apps Challenge, which engaged six US federal agencies, 400 partner institutions, and 9,000 global citizens through a variety of local and global team configurations; it aimed to address 58 different challenges to improve life on Earth and in space. In Athens, Greece, for example, in direct response to the challenge of creating a space-deployable greenhouse, a team developed a modular spinach greenhouse designed to survive the harsh Martian climate. Two months later, 11,000 citizens across 95 events participated in the National Day of Civic Hacking in 83 different US cities, ultimately contributing about 150,000 person-hours and addressing 31 federal and several state and local challenges over a single weekend. One result was Keep Austin Fed from Austin, Texas, which leveraged local data to coordinate food donations for those in need.
Strong interest on the part of institutions and an enthusiastic international community has paved the way for follow-up events in 2014.
Benefits of Mass Collaboration
The benefits of this approach to problem-solving are many, including:
- Incentivizing the use of government data. As institutions push to make data available to the public, mass collaboration can increase the usefulness of that data by creating products from it, as well as inform and streamline future data collection processes.
- Increasing transparency. Engaging citizens in the process of addressing public concerns educates them about the work that institutions do and advances efforts to meet public expectations of transparency.
- Increasing outcome ownership. When people engage in a collaborative process of problem solving, they naturally have a greater stake in the outcome. Put simply, the more people who participate in the process, the greater the sense of community ownership. Also, when spearheading new policies or initiatives, the support of a knowledgeable community can be important to long-term success.
- Increasing awareness. Engaging the populace in addressing challenges of public concern increases awareness of issues and helps develop an active citizenry. As a result, improved public perception and license to operate bolster governmental and non-governmental efforts to address challenges.
- Saving money. By providing data and structures to the public, and allowing them to build and iterate on plans and prototypes, mass collaboration gives agencies a chance to harness the power of open innovation with minimal time and funds.
- Harnessing cognitive surplus. The advent of online tools allowing for distributed collaboration enables citizens to use their free time incrementally toward collective endeavors that benefit local communities and the nation.
Challenges of Mass Collaboration
Although the benefits can be significant, agencies planning to lead mass collaborations should be aware of several challenges:
- Investing time and effort. A mass collaboration is most effective when it is not a one-time event. The up-front investment in building a collaboration of supporting partner organizations, creating a robust framework for action, developing the necessary tools and defining the challenges, and investing in implementation and scaling of the most promising results all require substantial time to secure long-term commitment and strong relationships.
- Forging an institution-community relationship. Throughout the course of most engagements, the power dynamic between the organization providing the frameworks and challenges and the groupings of individuals responding to the call to action can shift dramatically as the community incorporates the endeavor into their collective identity. Everyone involved should embrace this as they lay the foundation for self-sustaining mass collaboration communities. Once participants develop a firmly entrenched collective identity and sense of ownership, the convening organization can fully tap into its collective genius, as they can work together based on trust and shared vision. Without community ownership, organizers need to allot more time, energy, and resources to keep their initiative moving forward, and to battle against volunteer fatigue, diminished productivity, and substandard output.
- Focusing follow-up. Turning a massive infusion of creative ideas, concepts, and prototypes into concrete solutions requires a process of focused follow-up. Identifying and nurturing the most promising seeds to fruition requires time, discrete skills, insight, and—depending on the solutions you scale—support from a variety of external organizations.
- Understanding ROI. Any resource-intensive endeavor where only a few of numerous resulting products ever see the light of day demands deep consideration of what constitutes a reasonable return on investment. For mass collaborations, this means having an initial understanding of the potential tangible and intangible outcomes, and making a frank assessment of whether those outcomes meet the needs of the collaborators.
Technological developments in the last century have enabled relationships between individuals and institutions to blossom into a rich and complex tapestry…”
CollaborativeScience.org: Sustaining Ecological Communities Through Citizen Science and Online Collaboration
David Mellor at CommonsLab: “In any endeavor, there can be a tradeoff between intimacy and impact. The same is true for science in general and citizen science in particular. Large projects with thousands of collaborators can have incredible impact and robust, global implications. On the other hand, locally based projects can foster close-knit ties that encourage collaboration and learning, but face an uphill battle when it comes to creating rigorous and broadly relevant investigations. Online collaboration has the potential to harness the strengths of both of these strategies if a space can be created that allows for the easy sharing of complex ideas and conservation strategies.
CollaborativeScience.org was created by researchers from five different universities to train Master Naturalists in ecology, scientific modeling and adaptive management, and then give these capable volunteers a space to put their training to work and create conservation plans in collaboration with researchers and land managers.
We are focusing on scientific modeling throughout this process because environmental managers and ecologists have been trained to intuitively create explanations based on a very large number of related observations. As new data are collected, these explanations are revised and are put to use in generating new, testable hypotheses. The modeling tools that we are providing to our volunteers allow them to formalize this scientific reasoning by adding information, sources and connections, then making predictions based on possible changes to the system. We integrate their projects into the well-established citizen science tools at CitSci.org and guide them through the creation of an adaptive management plan, a proven conservation project framework…”
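The modeling workflow Mellor describes (adding factors, sourced connections between them, and then predicting the consequences of possible changes) can be illustrated with a small sketch. The code below is not the CollaborativeScience.org or CitSci.org software; the class, the ecological factor names, and the signed-influence representation are assumptions made purely for illustration.

```python
# A minimal sketch of a conceptual ecological model: factors connected by
# signed influences, with a simple qualitative prediction when one factor
# is changed. All factor names and links here are illustrative assumptions.

from collections import defaultdict

class ConceptualModel:
    def __init__(self):
        # edges[cause] -> list of (effect, sign), where sign is +1 or -1
        self.edges = defaultdict(list)

    def add_link(self, cause, effect, sign):
        """Record that 'cause' increases (+1) or decreases (-1) 'effect'."""
        self.edges[cause].append((effect, sign))

    def predict(self, changed_factor, direction, max_depth=3):
        """Propagate an increase (+1) or decrease (-1) through the model and
        return the predicted qualitative direction of downstream factors."""
        predictions = {}
        frontier = [(changed_factor, direction, 0)]
        while frontier:
            factor, current_sign, depth = frontier.pop()
            if depth >= max_depth:
                continue
            for target, link_sign in self.edges[factor]:
                effect = current_sign * link_sign
                if target not in predictions:
                    predictions[target] = effect
                    frontier.append((target, effect, depth + 1))
        return predictions

# Example: a volunteer's model of an invasive plant in a prairie remnant.
model = ConceptualModel()
model.add_link("invasive cover", "native forb cover", -1)
model.add_link("native forb cover", "pollinator visits", +1)
model.add_link("prescribed burns", "invasive cover", -1)

# Predicted consequences of adding prescribed burns to the management plan.
print(model.predict("prescribed burns", +1))
# {'invasive cover': -1, 'native forb cover': 1, 'pollinator visits': 1}
```

In an adaptive-management loop, the volunteers would compare such predictions against the monitoring data they collect and revise the links accordingly.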
Choice, Rules and Collective Action
New book on “The Ostroms on the Study of Institutions and Governance”: “This volume brings a set of key works by Elinor Ostrom, co-recipient of the Nobel Prize in Economic Sciences, together with those of Vincent Ostrom, one of the originators of Public Choice political economy. The two scholars introduce and expound their approaches and analytical perspectives on the study of institutions and governance.
The book puts together works representing the main analytical and conceptual vehicles articulated by the Ostroms to create the Bloomington School of public choice and institutional theory. Their endeavours sought to ‘re-establish the priority of theory over data collection and analysis’, and to better integrate theory and practice.
These efforts are illustrated via selected texts, organised around three themes: the political economy and public choice roots of their work in creating a distinct branch of political economy; the evolutionary nature of their work that led them to go beyond mainstream public choice, thereby enriching the public choice tradition itself; and, finally, the foundational and epistemological dimensions and implications of their work.”
How open data can help shape the way we analyse electoral behaviour
Harvey Lewis (Deloitte), Ulrich Atz, Gianfranco Cecconi, Tom Heath (ODI) in The Guardian: “Even after the local council elections in England and Northern Ireland on 22 May, which coincided with polling for the European Parliament, the next 12 months remain a busy time for the democratic process in the UK.
In September, the people of Scotland make their choice in a referendum on the future of the Union. Finally, the first fixed-term parliament in Westminster comes to an end with a general election in all areas of Great Britain and Northern Ireland in May 2015.
To ensure that as many people as possible are eligible and able to vote, the government is launching an ambitious programme of Individual Electoral Registration (IER) this summer. This will mean that the traditional, paper-based approach to household registration will shift to a tailored and largely digital process more in keeping with the data-driven demands of the twenty-first century.
Under IER, citizens will need to provide ‘identifying information’, such as date of birth or national insurance number, when applying to register.
Ballots: stuck in the past?
However, despite the government’s attempts through IER to improve the veracity of information captured prior to ballots being posted, little has changed in terms of the vision for capturing, distributing and analysing digital data from election day itself.
Indeed, paper is still the chosen medium for data collection.
Digitising elections is fraught with difficulty, though. In the US, for example, the introduction of new voting machines created much controversy even though they are capable of providing ‘near-perfect’ ballot data.
The UK’s democratic process is not completely blind, though. Numerous opinion surveys are conducted both before and after polling, including the long-running British Election Study, to understand the shifting attitudes of a representative cross-section of the electorate.
But if the government does not retain digital information on the number of people who vote in sufficient geographic detail, then how can it learn what is necessary to reverse the long-running decline in turnout?
The effects of lack of data
To add to the debate around democratic engagement, a joint research team, with data scientists from Deloitte and the Open Data Institute (ODI), have been attempting to understand what makes voters tick.
Our research has been hampered by a significant lack of relevant data describing voter behaviour at electoral ward level, as well as difficulties in matching what little data is available to other open data sources, such as demographic data from the 2011 Census.
Even though individual ballot papers are collected and verified for counting the number of votes per candidate – the primary aim of elections, after all – the only recent elections for which aggregate turnout statistics have been published at ward level are the 2012 local council elections in England and Wales. In these elections, approximately 3,000 wards from a total of over 8,000 voted.
Data published by the Electoral Commission for the 2013 local council elections in England and Wales purports to be at ward level but is, in fact, for ‘county electoral divisions’, as explained by the Office for National Statistics.
Moreover, important factors related to the accessibility of polling stations – such as the distance from main population centres – could not be assessed because the location of polling stations remains the responsibility of individual local authorities – and only eight of these have so far published their data as open data.
Given these fundamental limitations, drawing any robust conclusions is difficult. Nevertheless, our research shows the potential for forecasting electoral turnout with relatively few census variables, the most significant of which are age and the size of the electorate in each ward.
…
What role can open data play?
The limited results described above provide a tantalising glimpse into a possible future scenario: where open data provides a deeper and more granular understanding of electoral behaviour.
On the back of more sophisticated analyses, policies for improving democratic engagement – particularly among young people – have the potential to become focused and evidence-driven.
And, although the data captured on election day will always remain primarily for the use of electing the public’s preferred candidate, an important secondary consideration is aggregating and publishing data that can be used more widely.
This may have been prohibitively expensive or too complex in the past but as storage and processing costs continue to fall, and the appetite for such knowledge grows, there is a compelling business case.
The benefits of this future scenario potentially include:
- tailoring awareness and marketing campaigns to wards and other segments of the electorate most likely to respond positively and subsequently turn out to vote
- increasing the efficiency with which European, general and local elections are held in the UK
- improving transparency around the electoral process and stimulating increased democratic engagement
- enhancing links to the Government’s other significant data collection activities, including the Census.
Achieving these benefits requires commitment to electoral data being collected and published in a systematic fashion at least at ward level. This would link work currently undertaken by the Electoral Commission, the ONS, Plymouth University’s Election Centre, the British Election Study and the more than 400 local authorities across the UK.”
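The forecasting result mentioned above (turnout predicted from a handful of census variables, chiefly age and the size of the electorate in each ward) lends itself to a minimal sketch. The ward figures below are invented for illustration, and the model is a plain least-squares fit, not the Deloitte/ODI team's actual analysis.

```python
# A minimal sketch of ward-level turnout forecasting from two census-style
# variables. The data are hypothetical; the real analysis used 2012 local
# election turnout matched to 2011 Census data.

import numpy as np

# Columns: mean age of ward residents, ward electorate size
X = np.array([
    [38.2, 5200],
    [44.7, 6900],
    [51.3, 4300],
    [35.9, 8100],
    [47.5, 5600],
])
# Observed turnout (fraction of the electorate voting) per ward
y = np.array([0.28, 0.36, 0.41, 0.25, 0.38])

# Fit turnout ~ b0 + b1*age + b2*electorate by ordinary least squares
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
b0, b1, b2 = coeffs
print(f"intercept={b0:.3f}, age coeff={b1:.4f}, electorate coeff={b2:.6f}")

# Forecast turnout for a hypothetical ward (age 42, electorate 6,000)
new_ward = np.array([1.0, 42.0, 6000])
print(f"predicted turnout: {new_ward @ coeffs:.2%}")
```

With more wards and more published variables, the same approach extends naturally to richer models, which is precisely why systematic ward-level publication matters.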
How to treat government like an open source project
Ben Balter in OpenSource.com: “Open government is great. At least, it was a few election cycles ago. FOIA requests, open data, seeing how your government works—it’s arguably brought to light a lot of not-so-great practices, and in many cases, has spurred citizen-centric innovation not otherwise imagined before the information’s release.
It used to be that sharing information was really, really hard. Open government wasn’t even a possibility a few hundred years ago. Throughout the history of communication tools—be it the printing press, fax machine, or floppy disks—new tools have generally done three things: lowered the cost to transmit information, increased who that information could be made available to, and increased how quickly that information could be distributed. But printing presses and fax machines have two limitations: they are one-way and asynchronous. They let you more easily request, and eventually see, how the sausage was made, but they don’t let you actually take part in the sausage-making. You may be able to see what’s wrong, but you don’t have the chance to make it better. By the time you find out, it’s already too late.
As technology allows us to communicate with greater frequency and greater fidelity, we have the chance to make our government not only transparent, but truly collaborative.
…
So, how do we encourage policy makers and bureaucrats to move from open government to collaborative government, to learn open source’s lessons about openness and collaboration at scale?
For one, we geeks can help to create a culture of transparency and openness within government by driving up the demand side of the equation. Be vocal, demand data, expect to see process, and once released, help build lightweight apps. Show potential change agents in government that their efforts will be rewarded.
Second, it’s a matter of tooling. We’ve got great tools out there—things like Git that can track who made what change when, and open standards like CSV or JSON that don’t require proprietary software—but by and large they’re a foreign concept in government, at least among those empowered to make change. Command line interfaces with black backgrounds and green text can be intimidating to government bureaucrats used to desktop publishing tools. Make it easier for government to do the right thing and choose open standards over proprietary tooling.
Last, be a good open source ambassador. Help your home city or state get involved with open source. Encourage them to take their first step (be it consuming open source, publishing, or collaborating with the public), and teach them what it means to do things in the open. And when they do push code outside the firewall, above all, be supportive. We’re in this together.
As technology makes it easier to work together, geeks can help make our government not just open, but in fact collaborative. Government is the world’s largest and longest running open source project (bugs, trolls, and all). It’s time we start treating it like one.”
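Balter's point about open standards can be made concrete with a small sketch: the same records written as both CSV and JSON using only the Python standard library, so no proprietary software is needed to read, diff, or build on them. The dataset and field names below are invented for illustration.

```python
# A minimal sketch of "open standards over proprietary tooling": one small
# dataset published in two open, tool-agnostic formats. Records are invented.

import csv
import json

records = [
    {"permit_id": "2014-0117", "type": "sidewalk cafe", "status": "approved"},
    {"permit_id": "2014-0118", "type": "street fair",   "status": "pending"},
]

# CSV: easy to diff line-by-line, so changes show up cleanly in a Git history
with open("permits.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["permit_id", "type", "status"])
    writer.writeheader()
    writer.writerows(records)

# JSON: convenient for lightweight civic apps to consume directly
with open("permits.json", "w") as f:
    json.dump(records, f, indent=2)
```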
Open government: getting beyond impenetrable online data
Jed Miller in The Guardian: “Mathematician Blaise Pascal famously closed a long letter by apologising that he hadn’t had time to make it shorter. Unfortunately, his pithy point about “download time” is regularly attributed to Mark Twain and Henry David Thoreau, probably because the public loves writers more than it loves statisticians. Scientists may make things provable, but writers make them memorable.
The World Bank confronted a similar reality of data journalism earlier this month when it revealed that, of the 1,600 bank reports posted online from 2008 to 2012, 32% had never been downloaded at all and another 40% were downloaded under 100 times each.
Taken together, these cobwebbed documents represent millions of dollars in World Bank funds and hundreds of thousands of person-hours, spent by professionals who themselves represent millions of dollars in university degrees. It’s difficult to see the return on investment in producing expert research and organising it into searchable web libraries when almost three quarters of the output goes largely unseen.
The World Bank works at a scale unheard of by most organisations, but expert groups everywhere face the same challenges. Too much knowledge gets trapped in multi-page pdf files that are slow to download (especially in low-bandwidth areas), costly to print, and unavailable for computer analysis until someone manually or automatically extracts the raw data.
Even those who brave the progress bar find too often that urgent, incisive findings about poverty, health, discrimination, conflict or social change are presented in prose written by and for high-level experts, rendering it impenetrable to almost everyone else. Information isn’t just trapped in pdfs; it’s trapped in PhDs.
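The “trapped in pdfs” problem is, in part, a data-extraction problem. Below is a minimal sketch of the automatic-extraction step, assuming the pdfplumber library and a hypothetical report.pdf whose first page contains a table; real reports vary widely and often need manual clean-up afterwards.

```python
# A minimal sketch, assuming pdfplumber is installed, of pulling tabular data
# out of a report PDF so it can be analysed as ordinary rows.
# "report.pdf" and its layout are hypothetical.

import csv
import pdfplumber

with pdfplumber.open("report.pdf") as pdf:
    first_page = pdf.pages[0]
    tables = first_page.extract_tables()  # list of tables, each a list of rows

if tables:
    with open("report_table.csv", "w", newline="") as f:
        csv.writer(f).writerows(tables[0])
    print(f"extracted {len(tables[0])} rows from the first table")
else:
    print("no tables detected on the first page")
```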
Governments and NGOs are beginning to realise that digital strategy means more than posting a document online, but what will it take for these groups to change not just their tools, but their thinking? It won’t be enough to partner with WhatsApp or hire GrumpyCat.
I asked strategists from the development, communications and social media fields to offer simple, “Tweetable” suggestions for how the policy community can become better communicators.
For nonprofits and governments that still publish 100-page pdfs on their websites and do not optimise the content to share in other channels such as social: it is a huge waste of time and ineffective. Stop it now.
– Beth Kanter, author and speaker. Beth’s Blog: How Nonprofits Can Use Social Media
Treat text as #opendata so infomediaries can mash it up and make it more accessible (see, for example federalregister.gov) and don’t just post and blast: distribute information in a targeted way to those most likely to be interested.
– Beth Noveck, director at the Governance Lab and former director at White House Open Government Initiative
Don’t be boring. Sounds easy, actually quite hard, super-important.
– Eli Pariser, CEO of Upworthy
Surprise me. Uncover the key finding that inspired you, rather than trying to tell it all at once and show me how the world could change because of it.
– Jay Golden, co-founder of Wakingstar Storyworks
For the Bank or anyone who is generating policy information they actually want people to use, they must actually write it for the user, not for themselves. As Steve Jobs said, ‘Simple can be harder than complex’.
– Kristen Grimm, founder and president at Spitfire Strategies
The way to reach the widest audience is to think beyond content format and focus on content strategy.
– Laura Silber, director of public affairs at Open Society Foundations
Open the door to policy work with short, accessible pieces – a blog post, a video take, infographics – that deliver the ‘so what’ succinctly.
– Robert McMahon, editor at Council on Foreign Relations
Policy information is more usable if it’s linked to corresponding actions one can take, or if it helps stir debate. Also, whichever way you slice it, there will always be a narrow market for raw policy reports … that’s why explainer sites, listicles and talking heads exist.
– Ory Okolloh, director of investments at Omidyar Network and former public policy and government relations manager at Google Africa
Ms Okolloh, who helped found the citizen reporting platform Ushahidi, also offered a simple reminder about policy reports: “‘Never gets downloaded’ doesn’t mean ‘never gets read’.” Just as we shouldn’t mistake posting for dissemination, we shouldn’t confuse popularity with influence….”
Citizen participation and technology
ICTlogy: “The recent, rapid rise in the use of digital technology is changing relationships between citizens, organizations and public institutions, and expanding political participation. But while technology has the potential to amplify citizens’ voices, it must be accompanied by clear political goals and other factors to increase their clout.
Those are among the conclusions of a new NDI study, “Citizen Participation and Technology,” that examines the role digital technologies – such as social media, interactive websites and SMS systems – play in increasing citizen participation and fostering accountability in government. The study was driven by the recognition that better insights are needed into the relationship between new technologies, citizen participation programs and the outcomes they aim to achieve.
Using case studies from countries such as Burma, Mexico and Uganda, the study explores whether the use of technology in citizen participation programs amplifies citizen voices and increases government responsiveness and accountability, and whether the use of digital technology increases the political clout of citizens.
The research shows that while more people are using technology—such as social media for mobile organizing, interactive websites and text messaging systems that enable direct communication between constituents and elected officials, and the crowdsourcing of election day experiences—the type and quality of their political participation, and therefore its impact on democratization, varies. It also suggests that, in order to leverage technology’s potential, there is a need to focus on non-technological areas such as political organizing, leadership skills and political analysis.
For example, the “2% and More Women in Politics” coalition led by Mexico’s National Institute for Women (INMUJERES) used a social media campaign and an online petition to call successfully for reforms that would allocate two percent of political party funding for women’s leadership training. Technology helped the activists reach a wider audience, but women from the different political parties who made up the coalition might not have come together without NDI’s role as a neutral convener.
The study, which was conducted with support from the National Endowment for Democracy, provides an overview of NDI’s approach to citizen participation, and examines how the integration of technologies affects its programs in order to inform the work of NDI, other democracy assistance practitioners, donors, and civic groups.
Key findings:
- Technology can be used to readily create spaces and opportunities for citizens to express their voices, but making these voices politically stronger and the spaces more meaningful is a harder challenge that is political and not technological in nature.
- Technology that was used to purposefully connect citizens’ groups and amplify their voices had more political impact.
- There is a scarcity of data on specific demographic groups’ use of, and barriers to, technology for political participation. Programs seeking to close the digital divide as an instrument of narrowing the political divide should be informed by more research into barriers to access to both politics and technology.
- There is a blurring of the meaning between the technologies of open government data and the politics of open government that clouds program strategies and implementation.
- Attempts to simply crowdsource public inputs will not result in users self-organizing into politically influential groups, since citizens lack the opportunities to develop leadership, unity, and commitment around a shared vision necessary for meaningful collective action.
- Political will and the technical capacity to engage citizens in policy making, or to provide accurate data on government performance, are lacking in many emerging democracies. Technology may have changed institutions’ ability to respond to citizen demands, but its mere presence has not fundamentally changed actual government responsiveness.”
Linking Social, Open, and Enterprise Data
Paper by T Omitola, J Davies, A Duke, H Glaser, and N Shadbolt in WIMS ’14 (Proceedings of the 4th International Conference on Web Intelligence, Mining and Semantics): “The new world of big data, of the LOD cloud, of the app economy, and of social media means that organisations no longer own, much less control, all the data they need to make the best informed business decisions. In this paper, we describe how we built a system using Linked Data principles to bring in data from Web 2.0 sites (LinkedIn, Salesforce), and other external business sites such as OpenCorporates, linking these together with pertinent internal British Telecommunications enterprise data into that enterprise data space. We describe the challenges faced during the implementation, which include sourcing the datasets, finding the appropriate “join points” from the individual datasets, as well as developing the client application used for data publication. We describe our solutions to these challenges and discuss the design decisions made. We conclude by drawing some general principles from this work.”
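The “join point” idea in the abstract can be illustrated with a small sketch, assuming the Python rdflib library: an internal customer record and an external OpenCorporates URI are linked through a shared company registration number, expressed as an owl:sameAs triple. The namespaces, identifiers, and values below are illustrative, not the authors' actual BT data.

```python
# A minimal sketch (not the authors' system) of the Linked Data pattern the
# paper describes: an internal enterprise record joined to an external source
# through a shared "join point". URIs and values are illustrative.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDF, RDFS

ENT = Namespace("http://example.org/enterprise/")   # assumed internal namespace

g = Graph()
g.bind("ent", ENT)
g.bind("owl", OWL)

# Internal customer record, as it might appear in the enterprise data space
customer = ENT["customer/42"]
g.add((customer, RDF.type, ENT.Customer))
g.add((customer, RDFS.label, Literal("Acme Widgets Ltd")))
g.add((customer, ENT.companyNumber, Literal("01234567")))

# Join point: the company registration number also identifies a resource in
# OpenCorporates, so the two descriptions can be linked and queried together.
opencorporates = URIRef("https://opencorporates.com/companies/gb/01234567")
g.add((customer, OWL.sameAs, opencorporates))

print(g.serialize(format="turtle"))
```

Once such links exist, a single SPARQL query can traverse from internal records to the externally published facts about the same company, which is the essence of the enterprise data space the authors describe.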