NextGov: “When government technology leaders first described a public repository for government data sets more than five years ago, the vision wasn’t totally clear.
“I just didn’t understand what they were talking about,” said Marion Royal of the General Services Administration, describing his first introduction to the project. “I was thinking, ‘this is not going to work for a number of reasons.’”
A few minutes later, he was the project’s program director. He caught on to and helped clarify that vision, and since then he has worked with a small team to shepherd online and aggregate more than 100,000 data sets compiled and hosted by agencies across federal, state and local governments.
Many Americans still don’t know what Data.gov is, but chances are good they’ve benefited from the site, perhaps from information such as climate or consumer complaint data. Maybe they downloaded the Red Cross’ Hurricane App after Superstorm Sandy or researched their new neighborhood through a real estate website that drew from government information.
Hundreds of companies pull data they find on the site, which has seen 4.5 million unique visitors from 195 countries, according to GSA. Data.gov has proven a key part of President Obama’s open data policies, which aim to make government more efficient and open as well as to stimulate economic activity by providing private companies, organizations and individuals machine-readable ingredients for new apps, digital tools and programs.”
Open Data at Core of New Governance Paradigm
GovExec: “Rarely are federal agencies compared favorably with Facebook, Instagram, or other modern models of innovation, but there is every reason to believe they can harness innovation to improve mission effectiveness. After all, Aneesh Chopra, former U.S. Chief Technology Officer, reminded the Excellence in Government 2014 audience that government has a long history of innovation. From nuclear fusion to the Internet, the federal government has been at the forefront of technological development.
According to Chopra, the key to fueling innovation and economic prosperity today is open data. But to make the most of open data, government needs to adapt its culture. Chopra outlined three essential elements of doing so:
- Involve external experts – integrating outside ideas is second to none as a source of innovation.
- Leverage the experience of those on the front lines – federal employees who directly execute their agency’s mission often have the best sense of what does and does not work, and what can be done to improve effectiveness.
- Look to the public as a value multiplier – just as Facebook provides a platform for tens of thousands of developers to provide greater value, federal agencies can provide the raw material for many more to generate better citizen services.
In addition to these three broad elements, Chopra offered four specific levers government can use to help enact this paradigm shift:
- Democratize government data – opening government data to the public facilitates innovation. For example, the National Oceanic and Atmospheric Administration helps generate a $5 billion industry by placing almost no intellectual property constraints on its weather data.
- Collaborate on technical standards – government can act as a convener of industry members to standardize technological development, and thereby increase the value of data shared.
- Issue challenges and prizes – incentivizing the public to get involved and participate in efforts to create value from government data enhances the government’s ability to serve the public.
- Launch government startups – programs like the Presidential Innovation Fellows initiative help challenge rigid bureaucratic structures and spread a culture of innovation.
Federal leaders will need a strong political platform to sustain this shift. Fortunately, this blueprint is also bipartisan, says Chopra. Political leaders on both sides of the aisle are already getting behind the movement to bring innovation to the core of government.”
Three projects meet the European Job Challenge and receive the Social Innovation Prize
EU Press Release: “Social innovation can be a tool to create new or better jobs while addressing pressing challenges faced by Europe. Today, European Commissioner Michel Barnier awarded three European Social Innovation Prizes to ground-breaking ideas for creating new types of work and addressing social needs. The winning projects aim to help disadvantaged women by employing them to create affordable, limited-edition fashion collections; to create jobs in urban farming; and to convert abandoned social housing into learning spaces and entrepreneurship labs.
After the success of the first edition in 2013, the European Commission launched a second round of the Social Innovation Competition in memory of Diogo Vasconcelos. Its main goal was to invite Europeans to propose new solutions to the Job Challenge. The Commission received 1,254 ideas, of which three were awarded a prize of €30,000 each.
Commissioner Michel Barnier said: “We believe that the winning projects can take advantage of unmet social needs and create sustainable jobs. I want these projects to be scaled up and replicated and inspire more social innovations in Europe. We need to tap into this potential to bring innovative solutions to the needs of our citizens and create new types of work.”
More information on the Competition page
More jobs for Europe – three outstanding ideas
The following new and exceptional ideas are the winners of the second edition of the European Social Innovation Competition:
- ‘From waste to wow! QUID project’ (Italy): the fashion business demands perfection, and slightly damaged textiles cannot be used for top brands. The project intends to recycle this first-quality waste into limited collections and thereby provide jobs to disadvantaged women. This is about creating highly marketable products and social value through recycling.
- ‘Urban Farm Lease’ (Belgium): urban agriculture could provide 6,000 direct jobs in Brussels, plus an additional 1,500 indirect jobs (distribution, waste management, training or events). The project aims to provide training, connections and consultancy so that unemployed people can take advantage of the large surfaces available for agriculture in the city (e.g. 908 hectares of land or 394 hectares of suitable flat roofs).
- ‘Voidstarter’ (Ireland): all major cities in Europe have “voids”, units of social housing that sit empty because city councils have insufficient budgets to make them into viable homes. At the same time, these cities face pressure on social housing provision and homelessness. Voidstarter will provide unemployed people with learning opportunities alongside skilled tradespersons in refurbishing the voids.”
The Collective Intelligence Handbook: an open experiment
Michael Bernstein: “Is there really a wisdom of the crowd? How do we get at it and understand it, utilize it, empower it?
You probably have some ideas about this. I certainly do. But I represent just one perspective. What would an economist say? A biologist? A cognitive or social psychologist? An artificial intelligence or human-computer interaction researcher? A communications scholar?
For the last two years, Tom Malone (MIT Sloan) and I (Stanford CS) have worked to bring together all these perspectives into one book. We are nearing completion, and the Collective Intelligence Handbook will be published by the MIT Press later this year. I’m still relatively dumbfounded by the rockstar lineup we have managed to convince to join up.
It’s live.
Today we went live with the authors’ current drafts of the chapters. All the current preprints are here: http://cci.mit.edu/CIchapterlinks.html
And now is when you come in.
But we’re not done. We’d love for you — the crowd — to help us make this book better. We envisioned this as an open process, and we’re excited that all the chapters are now at a point where we’re ready for critique, feedback, and your contributions.
There are two ways you can help:
- Read the current drafts and leave comments inline in the Google Docs to help us make them better.
- Drop suggestions in the separate recommended reading list for each chapter. We (the editors) will be using that material to help us write an introduction to each chapter.
We have one month. The authors’ final chapters are due to us in mid-June. So off we go!”
Here’s what’s in the book:
Chapter 1. Introduction
Thomas W. Malone (MIT) and Michael S. Bernstein (Stanford University)
What is collective intelligence, anyway?
Chapter 2. Human-Computer Interaction and Collective Intelligence
Jeffrey P. Bigham (Carnegie Mellon University), Michael S. Bernstein (Stanford University), and Eytan Adar (University of Michigan)
How computation can help gather groups of people to tackle tough problems together.
Chapter 3. Artificial Intelligence and Collective Intelligence
Daniel S. Weld (University of Washington), Mausam (IIT Delhi), Christopher H. Lin (University of Washington), and Jonathan Bragg (University of Washington)
Mixing machine intelligence with human intelligence could enable a synthesized intelligent actor that brings together the best of both worlds.
Chapter 4. Collective Behavior in Animals: An Ecological Perspective
Deborah M. Gordon (Stanford University)
How do groups of animals work together in distributed ways to solve difficult problems?
Chapter 5. The Wisdom of Crowds vs. the Madness of Mobs
Andrew W. Lo (MIT)
Economics has studied a collectively intelligent forum — the market — for a long time. But are we as smart as we think we are?
Chapter 6. Collective Intelligence in Teams and Organizations
Anita Williams Woolley (Carnegie Mellon University), Ishani Aggarwal (Georgia Tech), Thomas W. Malone (MIT)
How do the interactions between groups of people impact how intelligently that group acts?
Chapter 7. Cognition and Collective Intelligence
Mark Steyvers (University of California, Irvine), Brent Miller (University of California, Irvine)
Understanding the conditions under which people are smart individually can help us predict when they might be smart collectively.
Chapter 8. Peer Production: A Modality of Collective Intelligence
Yochai Benkler (Harvard University), Aaron Shaw (Northwestern University), Benjamin Mako Hill (University of Washington)
What have collective efforts such as Wikipedia taught us about how large groups come together to create knowledge and creative artifacts?
Public service workers will have to become Jacks and Jills of all trades
Catherine Needham in the Guardian: “When Kent county council was looking to save money a couple of years ago, it hit upon the idea of merging the roles of library manager and registrar. Library managers were expected to register births and deaths on top of their existing duties, and registrars took on roles in libraries. One former library manager chose to leave the service as a result. It wasn’t, he said, what he signed up for: “I don’t associate the skills in running a library with those of a registrar. I don’t have the emotional skill to do it.”
Since the council was looking to cut staff numbers, it was probably not too troubled by his departure. But this does raise questions about how to support staff who are being asked to work well beyond their professional boundaries.
In our 21st Century Public Servant project at the University of Birmingham, we have found that this trend is evident across public services. We interviewed local government managers who said staff needed to think differently about their skills. As one put it: “We need to use people’s latent talent – if you are a librarian, for example, a key skill will be working with people from the local community. It’s about a different background mindset: ‘I am not just here to do a specific job, but to help the people of this town.'”
The skills of this generic public service worker include interpersonal skills (facilitation, empathy, political skills), analysing skills (sorting evidence, making judgements, offering critique and being creative), organisation (particularly for group work and collaboration) and communication skills (such as using social media and multimedia resources).
The growing interest in genericism seems to have two main drivers. The first, of course, is austerity. Cost cutting on an unprecedented scale in local authorities requires those staff that survive the waves of redundancies to be willing to take on new roles and work in multi-purpose settings. The second is the drive for whole-person approaches in which proper engagement with the public might require staff to cross traditional sector boundaries.
It is good that public service workers are being granted greater flexibility. But there are two main limitations to this move to greater genericism. The first is that multi-tasking in an era of cost cutting can look a lot like deprofessionalisation. Within social work, for example, concerns have been expressed about the downgrading of social work posts (by appointing brokers in their place, say) and the resulting loss of professional skills and knowledge.
A second limitation is that skills training continues to be sectoral, failing to catch up with the move to genericism….”
New Research Suggests Collaborative Approaches Produce Better Plans
JPER: “In a previous blog post (see, http://goo.gl/pAjyWE), we discussed how many of the most influential articles in the Journal of Planning Education and Research (and in peer publications, like JAPA) over the last two decades have focused on communicative or collaborative planning. Proponents of these approaches, most notably Judith Innes, Patsy Healey, Larry Susskind, and John Forester, developed the idea that the collaborative and communicative structures that planners use impact the quality, legitimacy, and equity of planning outcomes. In practice, communicative theory has led to participatory initiatives, such as those observed in New Orleans (post-Katrina, http://goo.gl/A5J5wk), Chattanooga (to revitalize its downtown and riverfront, http://goo.gl/zlQfKB), and in many other smaller efforts to foment wider involvement in decision making. Collaboration has also impacted regional governance structures, leading to more consensus based forms of decision making, notably CALFED (SF Bay estuary governance, http://goo.gl/EcXx9Q) and transportation planning with Metropolitan Planning Organizations (MPOs)….
Most studies testing the implementation of collaborative planning have been case studies. Previous work by authors such as Innes and Booher has provided valuable qualitative data about collaboration in planning, but few studies have attempted to empirically test the hypothesis that consensus building and participatory practices lead to better planning outcomes.
Robert Deyle (Florida State) and Ryan Wiedenman (Atkins Global) build on previous case study research by surveying officials involved in developing long-range transportation plans in 88 U.S. MPOs about the process and outcomes of those plans. The study tests the hypothesis that collaborative processes provide better outcomes and enhanced long-term relationships in situations where “many stakeholders with different needs” have “shared interests in common resources or challenges” and where “no actor can meet his/her interests without the cooperation of many others” (Innes and Booher 2010, 7; Innes and Gruber 2005, 1985–2186). Current theory posits that consensus-based collaboration requires 1) the presence of all relevant interests, 2) mutual interdependence for goal achievement, and 3) honest and authentic dialog between participants (Innes and Booher 2010, 35–36; Deyle and Wiedenman, 2014).
Figure 2: Deyle and Wiedenman (2014)
By surveying planning authorities, the authors found that most of the conditions (see Figure 2, above) posited in the collaborative planning literature had statistically significant impacts on planning outcomes. These included perceptions of plan quality and participant satisfaction with the plan, as well as intangible outcomes that benefit both the participants and their ongoing collaboration efforts. However, having a planning process in which all or most decisions were made by consensus did not improve outcomes. ….
Deyle, Robert E., and Ryan E. Wiedenman. “Collaborative Planning by Metropolitan Planning Organizations: A Test of Causal Theory.” Journal of Planning Education and Research (2014): 0739456X14527621.
To access this article FREE until May 31 click the following links: Online, http://goo.gl/GU9inf, PDF, http://goo.gl/jehAf1.”
Civic Crowdfunding: Participatory Communities, Entrepreneurs and the Political Economy of Place
Rodrigo Davies: “Today I’m capping two years of studying the emergence of civic crowdfunding by submitting my master’s thesis to the MIT archives…You can read Civic Crowdfunding: Participatory Communities, Entrepreneurs and the Political Economy of Place in its entirety (173 pages) now,…
Crowdfunding is everywhere. People are using it to fund watches, comic books, even famous film directors are doing it. In what is now a $6 billion industry globally, I think the most interesting, disruptive and exciting work that’s happening is in donation-based crowdfunding. That’s worth, very roughly, $1.2 billion a year worldwide. Within that subset, I’ve been looking at civic projects, people who are producing shared goods for a community or broader public. These projects build on histories of community fundraising and resource pooling that long predate the Internet; what’s changed is that we’ve created a scalable, portable platform model to carry out these existing practices.
So how is civic crowdfunding doing? When I started this project very few people were using that term. No one had done any aggregated data collection and published it. So I decided to take on that task. I collected data on 1224 projects between 2010 and March 2014, which raised $10.74 million in just over three years. I focused on seven platforms: Catarse (Brazil), Citizinvestor (US), Goteo (Spain), IOBY (US), Kickstarter (US), Neighbor.ly (US) and Spacehive (UK). I didn’t collect everything. …
Here are four things I found out about civic crowdfunding.
- Civic crowdfunding is small-scale but relatively successful, and it has big ambitions. Currently the average civic crowdfunding project is small in scale: $6,357 is the median amount raised. But these civic projects seem to be doing pretty well. Projects tagged ‘civic’ on Kickstarter, for instance, succeed 81% of the time. If Civic were a separate category, it would be Kickstarter’s most successful category. Meanwhile, most platform owners and some incumbent institutions see civic crowdfunding as a new mechanism for public-private partnerships capable of realizing large-scale projects. In a small minority of cases, such as the three edge-case projects I explored in Chapter 3 of my thesis, civic crowdfunding has begun to fulfill some of those ambitions. For the center of gravity to shift further in the direction of these potential outcomes, though, existing institutions, including government, large non-profits and the for-profit sector, will need to engage more comprehensively with the process.
- Civic crowdfunding started as a hobby for green space projects by local non-profits, but larger organizations are getting involved. Almost a third of campaigners are using civic crowdfunding platforms for park- and garden-related projects (29%). Event-based projects and education and training are also popular. Sports and mobility projects are pretty uncommon. The frequency of garden and park projects is partly because these projects are not capital intensive, and they’re uncontroversial. That’s also changing. Organizations from governments to corporations and large foundations are exploring ways to support crowdfunding for a much wider range of community-facing activities. Their modes of engagement include publicizing campaigns, match-funding campaigns on an ad-hoc basis, running their own campaigns and even building new platforms from the ground up.
- Civic crowdfunding is concentrated in cities (especially those where platforms are based). The genre is too new to have spread very effectively, it seems. Five states account for 80% of the projects, and this is partly a function of where the platforms are located. New York and California are the top two, followed by Illinois and Oregon. We know there’s a strong trend towards big cities. It’s hard work for communities to use crowdfunding to get projects off the ground, especially when it’s an unfamiliar process. The platforms have played a critical role in building participants’ understanding of crowdfunding and supporting them through the process.
- Civic crowdfunding has the same highly unequal distributional tendencies as other crowd markets. When we look at the size distribution of projects, the first thing we notice is something close to a Pareto distribution, or Long Tail. Most projects are small-scale, but a small number of high-value projects have taken a large share of the total revenue raised by civic crowdfunding (the sketch below illustrates this kind of concentration). We shouldn’t be surprised by this. On Kickstarter most successful projects raise between $5,000 and $10,000, and 47% of the civic projects I studied are in the same bracket. The problem is that we tend to remember the outliers, such as Veronica Mars and Spike Lee, because they show what’s possible. But they are still the outliers.
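For a sense of what this kind of concentration looks like numerically, here is a minimal sketch using synthetic, Pareto-distributed project totals; the figures it prints are illustrative only and are not drawn from the thesis dataset.

```python
# Minimal sketch with synthetic data (not the thesis dataset): how revenue
# concentrates when project sizes follow a long-tailed (Pareto-like) distribution.
import random

random.seed(0)
# Hypothetical amounts raised: many small projects, a few very large ones.
amounts = sorted(random.paretovariate(1.2) * 2000 for _ in range(1000))

median = amounts[len(amounts) // 2]
total = sum(amounts)
top_5pct_share = sum(amounts[int(0.95 * len(amounts)):]) / total
midsize_share = sum(1 for a in amounts if 5000 <= a <= 10000) / len(amounts)

print(f"median raise: ${median:,.0f}")
print(f"revenue share of the top 5% of projects: {top_5pct_share:.0%}")
print(f"fraction of projects raising $5k-$10k: {midsize_share:.0%}")
```

With a heavy-tailed distribution like this, a handful of large projects dominate the total even though the typical project is small, mirroring the pattern described above.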
Now, here are two things we don’t know.
- Will civic crowdfunding deter public investment or encourage it?
- Will civic crowdfunding widen wealth gaps?”
The merits of participatory budgeting
One particularly promising innovation is participatory budgeting, or PB: a process that directly empowers citizens to make spending decisions on a defined public budget. PB was first attempted in Porto Alegre, Brazil, in 1989. Its success led the World Bank to call PB a “best practice” in democratic innovation. Since then, PB has expanded to over 1,500 cities worldwide, including several in the U.S. Starting in 2009 in Chicago’s 49th Ward with a budget of just $1 million, PB in the United States has expanded to a $27 million-a-year experiment. Municipal leaders from Vallejo, California, to New York City have turned over a portion of their discretionary funds to neighborhood residents. Boston recently launched the first youth-driven PB. Nearly half of New York’s City Council members are slated to participate this fall, after newly elected Mayor Bill de Blasio made it a cornerstone of his campaign. Chicago Mayor Rahm Emanuel created a new manager of participatory budgeting who will help coordinate Council districts that want to participate. The White House recently included federally supported participatory budgeting as part of its international Open Government Partnership commitments.
Wants and needs
Chicago has been a particularly insightful petri dish to study PB in the U.S., mainly because the city is an unlikely candidate for democratic innovations. For decades its Democratic machine retained a strong and continuous hold over city government. The Daley family held the mayoralty for a combined 12 terms. While discretionary funds (known as “menu money”) are allocated equally — but not equitably, given different needs — to all 50 wards, the process of spending this money is at the discretion of locally elected aldermen. From 1972 to 2009, 30 Chicago aldermen were indicted and convicted of federal crimes ranging from income tax evasion to extortion, embezzlement and conspiracy. Clearly, Chicago has not always been a model of good governance.
Against this backdrop, PB has continued to expand in Chicago. This year three districts participated. The Fifth Ward, home to the University of Chicago, decided not to continue the full process this year; instead, it had four groups of residents each allocate $250,000. The alderwoman noted that this preserved the transparency and engagement aspects of PB with fewer process resources; only about 100 people had come out to vote.
Different versions of PB aim to lower the current barriers to civic engagement. I have seen PB bring out people who have never before engaged in politics. Many longtime civic participants cite PB as the single most meaningful civic engagement of their lives, far above, say, jury duty. Suddenly, citizens are empowered with real decision-making authority and leave with new relationships with their peers, community and elected officials.
However, PB is not a stand-alone endeavor. It must be part of a larger effort to improve governance. This must include greater transparency in public decision making and empowering citizens to hold their elected officials more accountable. The process provides an enormous education that can be translated into civic activity beyond PB. Ideally, after engaging in PB, a citizen will be better equipped to volunteer in the community, vote or push for policy reform. What other infrastructure, both online and off, is needed to support citizens who want to further engage in more collaborative governance? …
Conceptualizing Open Data ecosystems: A timeline analysis of Open Data development in the UK
New paper by Tom Heath et al: “In this paper, we conceptualize Open Data ecosystems by analysing the major stakeholders in the UK. The conceptualization is based on a review of popular Open Data definitions and business ecosystem theories, which we applied to empirical data using a timeline analysis. Our work is informed by a combination of discourse analysis and in-depth interviews, undertaken during the summer of 2013. Drawing on the UK as a best practice example, we identify a set of structural business ecosystem properties: circular flow of resources, sustainability, demand that encourages supply, and dependence developing between suppliers, intermediaries, and users. However, significant gaps and shortcomings are found to remain. Most prominently, demand is not yet fully encouraging supply and actors have yet to experience fully mutual interdependence.”
The Emerging Science of Superspreaders (And How to Tell If You're One Of Them)
Emerging Technology From the arXiv: “Who are the most influential spreaders of information on a network? That’s a question that marketers, bloggers, news services and even governments would like answered. Not least because the answer could provide ways to promote products quickly, to boost the popularity of political parties above their rivals and to seed the rapid spread of news and opinions.
So it’s not surprising that network theorists have spent some time thinking about how best to identify these people and to check how the information they receive might spread around a network. Indeed, they’ve found a number of measures that spot so-called superspreaders, people who spread information, ideas or even disease more efficiently than anybody else.
But there’s a problem. Social networks are so complex that network scientists have never been able to test their ideas in the real world—it has always been too difficult to reconstruct the exact structure of Twitter or Facebook networks, for example. Instead, they’ve created models that mimic real networks in certain ways and tested their ideas on these instead.
But there is growing evidence that information does not spread through real networks in the same way as it does through these idealised ones. People tend to pass on information only when they are interested in a topic and when they are active, factors that are hard to take into account in a purely topological model of a network.
So the question of how to find the superspreaders remains open. That looks set to change thanks to the work of Sen Pei at Beihang University in Beijing and a few pals who have performed the first study of superspreaders on real networks.
These guys have studied the way information flows around various networks, ranging from the LiveJournal blogging network to the scientific publishing network of the American Physical Society, as well as subsets of the Twitter and Facebook networks. And they’ve discovered the key indicator that identifies superspreaders in these networks.
In the past, network scientists have developed a number of mathematical tests to measure the influence that individuals have on the spread of information through a network. For example, one measure is simply the number of connections a person has to other people in the network, a property known as their degree. The thinking is that the most highly connected people are the best at spreading information.
Another measure uses the famous PageRank algorithm that Google developed for ranking webpages. This works by ranking somebody more highly if they are connected to other highly ranked people.
Then there is ‘betweenness centrality’, a measure of how many of the shortest paths across a network pass through a specific individual. The idea is that these people are more able to inject information into the network.
And finally there is a property of nodes in a network known as their k-core. This is determined by iteratively pruning the peripheries of a network to see what is left. The k-core is the step at which that node or person is pruned from the network. Obviously, the most highly connected survive this process the longest and have the highest k-core score.
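To make these four measures concrete, here is a minimal sketch using the NetworkX library on a small synthetic graph; the graph, its size and its edge probability are arbitrary placeholders rather than anything drawn from the study.

```python
# Minimal sketch (not the paper's code): the four influence measures discussed
# above, computed with NetworkX on a hypothetical follower graph.
import networkx as nx

# Hypothetical directed "who-follows-whom" network; an edge u -> v means u follows v.
G = nx.gnp_random_graph(200, 0.05, seed=42, directed=True)

degree = dict(G.degree())                   # raw number of connections
pagerank = nx.pagerank(G)                   # higher if linked to by highly ranked nodes
betweenness = nx.betweenness_centrality(G)  # share of shortest paths passing through a node
kcore = nx.core_number(G.to_undirected())   # pruning step at which a node drops out

# Compare who each measure flags as a potential "superspreader".
for name, scores in [("degree", degree), ("PageRank", pagerank),
                     ("betweenness", betweenness), ("k-core", kcore)]:
    top = sorted(scores, key=scores.get, reverse=True)[:5]
    print(f"top 5 by {name}: {top}")
```

The rankings produced by these measures generally overlap only partially, which is why testing them against real diffusion data, as Sen and co do, matters.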
The question that Sen and co set out to answer was which of these measures best picked out superspreaders of information in real networks.
They began with LiveJournal, a network of blogs in which individuals maintain lists of friends that represent social ties to other LiveJournal users. This network allows people to repost information from other blogs and to include a reference that links back to the original post. That allowed Sen and co to recreate not only the network of social links between LiveJournal users but also the way in which information spread between them.
Sen and co collected all of the blog posts from February 2010 to November 2011, a total of more than 56 million posts. Of these, some 600,000 contain links to other posts published by LiveJournal users.
The data reveals two important properties of information diffusion. First, only some 250,000 users are actively involved in spreading information. That’s a small fraction of the total.
More significantly, they found that information did not always diffuse across the social network: it could spread between two LiveJournal users even though they had no social connection.
That’s probably because they find this information outside of the LiveJournal ecosystem, perhaps through web searches or via other networks. “Only 31.93% of the spreading posts can be attributed to the observable social links,” they say.
That’s in stark contrast to the assumptions behind many social network models. These simulate the way information flows by assuming that it travels directly through the network from one person to another, like a disease spread by physical contact.
The work of Sen and co suggests that influences outside the network are crucial too. In practice, information often spreads via several seemingly independent sources within the network at the same time. This has important implications for the way superspreaders can be spotted.
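As a rough illustration of how a figure like the 31.93% above can be computed, here is a minimal sketch on entirely hypothetical follow and repost data; the real number comes from the paper’s LiveJournal dataset, not from anything resembling this toy example.

```python
# Minimal sketch with made-up data: what fraction of reposts can be attributed
# to an observable social link, i.e. the reposter follows the original author?
follows = {                      # hypothetical social graph: user -> set of users they follow
    "alice": {"bob"},
    "bob": {"carol"},
    "carol": set(),
    "dave": {"alice"},
}
reposts = [                      # hypothetical spreading events: (reposter, original_author)
    ("alice", "bob"),            # attributable: alice follows bob
    ("bob", "carol"),            # attributable: bob follows carol
    ("dave", "carol"),           # not attributable: dave does not follow carol
]

attributable = sum(1 for reposter, author in reposts
                   if author in follows.get(reposter, set()))
print(f"{attributable / len(reposts):.1%} of reposts follow observable social links")
```

Everything that falls outside that fraction has to be explained by channels the social graph does not capture, which is the paper’s point.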
Sen and co say that a person’s degree, the number of other people he or she is connected to, is not as good a predictor of information diffusion as theorists have thought. “We find that the degree of the user is not a reliable predictor of influence in all circumstances,” they say.
What’s more, the Pagerank algorithm is often ineffective in this kind of network as well. “Contrary to common belief, although PageRank is effective in ranking web pages, there are many situations where it fails to locate superspreaders of information in reality,” they say….
Ref: arxiv.org/abs/1405.1790 : Searching For Superspreaders Of Information In Real-World Social Media”