Crowdsourcing the Charlottesville Investigation


Internet sleuths got to work, and by Monday morning they were naming names and calling for arrests.

The name of the helmeted man went viral after New York Daily News columnist Shaun King posted a series of photos on Twitter and Facebook that more clearly showed his face and connected him to photos from a Facebook account. “Neck moles gave it away,” King wrote in his posts, which were shared more than 77,000 times. But the name of the red-bearded assailant was less clear: some on Twitter claimed it was a Texas man who goes by a Nordic alias online. Others were sure it was a Michigan man who, according to Facebook, attended high school with other white nationalist demonstrators depicted in photos from Charlottesville.

After being contacted for comment by The Marshall Project, the Michigan man removed his Facebook page from public view.

Such speculation, especially when it is not conclusive, has created new challenges for law enforcement. There is the obvious risk of false identification. In 2013, internet users wrongly identified university student Sunil Tripathi as a suspect in the Boston Marathon bombing, prompting the internet forum Reddit to issue an apology for fostering “online witch hunts.” Already, an Arkansas professor was misidentified as a torch-bearing protester, though not a criminal suspect, at the Charlottesville rallies.

Beyond the cost to misidentified suspects, the crowdsourced identification of criminal suspects is both a benefit and burden to investigators.

“If someone says: ‘hey, I have a picture of someone assaulting another person, and committing a hate crime,’ that’s great,” said Sgt. Sean Whitcomb, the spokesman for the Seattle Police Department, which used social media to help identify the pilot of a drone that crashed into a 2015 Pride Parade. (The man was convicted in January.) “But saying, ‘I am pretty sure that this person is so and so’. Well, ‘pretty sure’ is not going to cut it.”

Still, credible information can help police establish probable cause, which means they can ask a judge to sign off on a search warrant, an arrest warrant, or both….(More)”.

Gaming for Infrastructure


Nilmini Rubin & Jennifer Hara at the Stanford Social Innovation Review: “…the American Society of Civil Engineers (ASCE) estimates that the United States needs $4.56 trillion to keep its deteriorating infrastructure current but only has funding to cover less than half of necessary infrastructure spending—leaving the country at least $2.0 trillion short through the next decade. Globally, the picture is bleak as well: the World Economic Forum estimates that the infrastructure gap is $1 trillion each year.

What can be done? Some argue that public-private partnerships (PPPs or P3s) are the answer. We agree that they can play an important role—if done well. In a PPP, a private party provides a public asset or service for a government entity, bears significant risk, and is paid on performance. The upside for governments and their citizens is that the private sector can be incentivized to deliver projects on time, within budget, and with reduced construction risk. The private sector can benefit by earning a steady stream of income from a long-term investment from a secure client. From the Grand Parkway Project in Texas to the Queen Alia International Airport in Jordan, PPPs have succeeded domestically and internationally.

The problem is that PPPs can be very hard to design and implement. And since they can involve commitments of millions or even billions of dollars, a PPP failure can be awful. For example, the Berlin Airport is a PPP that is six years behind schedule, and its cost overruns total roughly $3.8 billion to date.

In our experience, it can be useful for would-be partners to practice engaging in a PPP before they dive into a live project. At our organization, Tetra Tech’s Institute for Public-Private Partnerships, for example, we use an online and multiplayer game—the P3 Game—to help make PPPs work.

The game is played with 12 to 16 people who are divided into two teams: a Consortium and a Contracting Authority. In each of four rounds, players mimic the activities they would engage in during the course of a real PPP, and as in real life, they are confronted with unexpected events: The Consortium fails to comply with a routine road inspection; how should the Contracting Authority team respond? The cost of materials skyrockets; how should the Consortium team manage when it has a fixed-price contract?

Players from government ministries, legislatures, construction companies, financial institutions, and other entities get to swap roles and experience a PPP from different vantage points. They think through challenges and solve problems together—practicing, failing, learning, and growing—within the confines of the game and with no real-world cost.

More than 1,000 people have participated to date, including representatives of the US Army Corps of Engineers, the World Bank, and Johns Hopkins University, using a variety of scenarios. PPP team members who work on part of the Schiphol-Amsterdam-Almere Project, a $5.6-billion road project in the Netherlands, played the game using their actual contract document….(More)”.

Can AI tools replace feds?


Derek B. Johnson at FCW: “The Heritage Foundation…is calling for increased reliance on automation and the potential creation of a “contractor cloud” offering streamlined access to private sector labor as part of its broader strategy for reorganizing the federal government.

Seeking to take advantage of a united Republican government and a president who has vowed to reform the civil service, the foundation drafted a pair of reports this year attempting to identify strategies for consolidating, merging or eliminating various federal agencies, programs and functions. Among those strategies is a proposal for the Office of Management and Budget to issue a report “examining existing government tasks performed by generously-paid government employees that could be automated.”

Citing research on the potential impacts of automation on the United Kingdom’s civil service, the foundation’s authors estimated that similar efforts across the U.S. government could yield $23.9 billion in reduced personnel costs and a reduction in the size of the federal workforce by 288,000….

The Heritage report also called on the federal government to consider a “contracting cloud.” The idea would essentially be for a government version of TaskRabbit, where agencies could select from a pool of pre-approved individual contractors from the private sector who could be brought in for specialized or seasonal work without going through established contracts. Greszler said the idea came from speaking with subcontractors who complained about having to kick over a certain percentage of their payments to prime contractors even as they did all the work.

Right now the foundation is only calling for the government to examine the potential of the issue and how it would interact with existing or similar vehicles for contracting services like the GSA schedule. Greszler emphasized that any pool of workers would need to be properly vetted to ensure they met federal standards and practices.

“There has to be guidelines or some type of checks, so you’re not having people come off the street and getting access to secure government data,” she said….(More)”.

Open & Shut


Harsha Devulapalli: “Welcome to Open & Shut — a new blog dedicated to exploring the opportunities and challenges of working with open data in closed societies around the world. Although we’ll be exploring questions relevant to open data practitioners worldwide, we’re particularly interested in seeing how civil society groups and actors in the Global South are using open data to push for greater government transparency, and tackle daunting social and economic challenges facing their societies….Throughout this series we’ll be profiling and interviewing organisations working with open data worldwide, and providing do-it-yourself data tutorials that will be useful for beginners as well as data experts. …

What do we mean by the terms ‘open data’ and ‘closed societies’?

It’s important to be clear about what we’re dealing with here. So let’s establish some key terms. When we talk about ‘open data’, we mean data that anyone can access, use and share freely. And when we say ‘closed societies’, we’re referring to states or regions in which the political and social environment is actively hostile to notions of openness and public scrutiny, and which hold principles of freedom of information in low esteem. In closed societies, data is either not published at all by the government, or else is published in inaccessible formats, is incomplete, is hard to find, or is simply not digitised at all.

Iran is one such state that we would characterise as a ‘closed society’. At Small Media, we’ve had to confront the challenges of poor data practice, secrecy, and government opaqueness while undertaking work to support freedom of information and freedom of expression in the country. Based on these experiences, we’ve been working to build Iran Open Data — a civil society-led open data portal for Iran, in an effort to make Iranian government data more accessible and easier for researchers, journalists, and civil society actors to work with.

Iran Open Data — an open data portal for Iran, created by Small Media

…Open & Shut will shine a light on the exciting new ways that different groups are using data to question dominant narratives, transform public opinion, and bring about tangible change in closed societies. At the same time, it’ll demonstrate the challenges faced by open data advocates in opening up this valuable data. We intend to get the community talking about the need to build cross-border alliances in order to empower the open data movement, and to exchange knowledge and best practices despite the different needs and circumstances we all face….(More)”.

Where’s the ‘Civic’ in CivicTech?


Blog by Pius Enywaru: “The ideology of community participation and development is a crucial topic for any nation or community seeking to attain sustainable development. Here in Uganda, oftentimes when the opportunity arises for public participation, either in local planning or in holding local politicians to account, the ‘don’t care’ attitude reigns….

What works?

Some of these tools include Ask Your Government Uganda, a platform built to help members of the public get the information they want from 106 public agencies in Uganda. U-Report, developed by UNICEF, provides an SMS-based social monitoring tool designed to address issues affecting the youth of Uganda. Mentioned in a previous blog post, Parliament Watch brings the proceedings of the Parliament of Uganda to the citizens. The organization leverages technology to share live updates on social media and provides in-depth analysis to create a better understanding of the business of Parliament. Other tools used include citizen scorecards, public media campaigns and public petitions. Just recently, we have had a few calls to action to get people to sign petitions, with somewhat lackluster results.

What doesn’t work?

Although the use of these tools has grown dramatically, there is still a lack of awareness and, consequently, of community participation. In order to understand the interventions which the Government of Uganda believes are necessary for sustainable urban development, it is important to examine the realities pertaining to urban areas and their planning processes. There are many challenges in deploying ICT-based community participation tools, such as limited funding and support for such initiatives, low literacy levels, low technical literacy, a large digital divide, low rates of seeking input from communities in developing these tools, lack of adequate government involvement, and resistance to or distrust of change by both government and citizens. Furthermore, in many of these initiatives, a large marketing or sensitization push is needed to let citizens know that these services exist for their benefit.

There are great minds who have brilliant ideas to try and bring literally everyone on board through civic engagement. When you have a look at their ideas, you will agree that indeed they might make a reputable service and bring about remarkable change in different communities. However, the biggest question has always been: “How do these ideas get executed and adopted by the communities that they target?” These ideas suffer a major setback: a lack of inclusivity that undermines community participation. This still remains a puzzle for most folks that have these ideas….(More)”.

We need a safe space for policy failure


Catherine Althaus & David Threlfall in The Mandarin: “Who remembers Google Schemer, the Apple Pippin, or Microsoft Zune? No one — and yet such no-go ideas didn’t hold back these prominent companies. In IT, such high-profile failures are simply steps on the path to future success. When a start-up or major corporate puts a product onto the market they identify the kinks in their invention immediately, design a fix, and release a new version. If the whole idea falls flat — and who ever listened to music on a Zune instead of an iPod? — the next big thing is just around the corner. Learning from failure is celebrated as a key feature of innovation.

But in the world of public policy, this approach is only now creeping into our collective consciousness. We tread ever so lightly.

Drug policy, childcare reform, or information technology initiatives are areas where innovation could provide policy improvements, but who is going to be a first-mover innovator in these areas without fearing potential retribution should anything go wrong?…

Public servants don’t have the luxury of ‘making a new version’ without fear of blame or retribution. Critically, their process often lacks the ability to test assumptions before delivery….

The most persuasive or entertaining narrative often trumps the painstaking work — and potential missteps — required to build an evidence base to support political and policy decisions. American academics Elizabeth Shanahan, Mark McBeth and Paul Hathaway make a remarkable claim regarding the power of narrative in the policy world: “Research in the field of psychology shows that narratives have a stronger ability to persuade individuals and influence their beliefs than scientific evidence does.” If narrative and stories overtake what we normally accept as evidence, then surely we ought to be taking more notice of what the narratives are, which we choose and how we use them…

Failing the right way

Essential policy spheres such as health, education and social services should benefit from innovative thinking and theory testing. What is necessary in these areas is even more robust attention to carefully calibrated and well-thought-through experimentation. Rewards need to outweigh risks, and risks need to be properly managed. This has always been the case in clinical trials in medicine. Incredible breakthroughs in medical practice made throughout the 20th century speak to the success of this model. Why should policymaking suffer from a timid inertia given the potential for similar success?

An innovative approach, focused on learning while failing right, will certainly require a shift in thinking. Every new initiative will need to be designed in a holistic way, to not just solve an issue but learn from every stage of the design and delivery process. Evaluation doesn’t follow implementation but instead becomes part of the entire cycle. A small-scale, iterative approach can then lead to bigger successes down the track….(More)”.

Artificial Intelligence for Citizen Services and Government


Paper by Hila Mehr: “From online services like Netflix and Facebook, to chatbots on our phones and in our homes like Siri and Alexa, we are beginning to interact with artificial intelligence (AI) on a near daily basis. AI is the programming or training of a computer to do tasks typically reserved for human intelligence, whether it is recommending which movie to watch next or answering technical questions. Soon, AI will permeate the ways we interact with our government, too. From small cities in the US to countries like Japan, government agencies are looking to AI to improve citizen services.

While the potential future use cases of AI in government remain bounded by government resources and the limits of both human creativity and trust in government, the most obvious and immediately beneficial opportunities are those where AI can reduce administrative burdens, help resolve resource allocation problems, and take on significantly complex tasks. Many AI case studies in citizen services today fall into five categories: answering questions, filling out and searching documents, routing requests, translation, and drafting documents. These applications could make government work more efficient while freeing up time for employees to build better relationships with citizens. With citizen satisfaction with digital government offerings leaving much to be desired, AI may be one way to bridge the gap while improving citizen engagement and service delivery.

Despite the clear opportunities, AI will not solve systemic problems in government, and could potentially exacerbate issues around service delivery, privacy, and ethics if not implemented thoughtfully and strategically. Agencies interested in implementing AI can learn from previous government transformation efforts, as well as private-sector implementation of AI. Government offices should consider these six strategies for applying AI to their work: make AI a part of a goals-based, citizen-centric program; get citizen input; build upon existing resources; be data-prepared and tread carefully with privacy; mitigate ethical risks and avoid AI decision making; and, augment employees, do not replace them.

This paper explores the various types of AI applications, and current and future uses of AI in government delivery of citizen services, with a focus on citizen inquiries and information. It also offers strategies for governments as they consider implementing AI….(More)”

Journal tries crowdsourcing peer reviews, sees excellent results


Chris Lee at ArsTechnica: “Peer review is supposed to act as a sanity check on science. A few learned scientists take a look at your work, and if it withstands their objective and entirely neutral scrutiny, a journal will happily publish your work. As those links indicate, however, there are some issues with peer review as it is currently practiced. Recently, Benjamin List, a researcher and journal editor in Germany, and his graduate assistant, Denis Höfler, have come up with a genius idea for improving matters: something called selected crowd-sourced peer review….

My central point: peer review is burdensome and sometimes barely functional. So how do we improve it? The main way is to experiment with different approaches to the reviewing process, which many journals have tried, albeit with limited success. Post-publication peer review, when scientists look over papers after they’ve been published, is also an option but depends on community engagement.

But if your paper is uninteresting, no one will comment on it after it is published. Pre-publication peer review is the only moment where we can be certain that someone will read the paper.

So, List (an editor for Synlett) and Höfler recruited 100 referees. For their trial, a forum-style commenting system was set up that allowed referees to comment anonymously both on submitted papers and on each other’s comments. To provide a comparison, the papers that went through this process also went through the traditional peer review process. The authors and editors compared comments and (subjectively) evaluated the pros and cons. The 100-person crowd of researchers was deemed the more effective of the two.

The editors found that it took a bit more time to read and collate all the comments into a reviewers’ report, but the overall process was still faster, which the authors loved. Typically, it took the crowd just a few days to complete their review, which compares very nicely to the usual four to six weeks of the traditional route (I’ve had papers languish for six months in peer review). And, perhaps most important, the responses were more substantive and useful compared to the typical two-to-four-person review.

So far, List has not published the trial results formally. Despite that, Synlett is moving to the new system for all its papers.

Why does crowdsourcing work?

Here we get back to something more editorial. I’d suggest that there is a physical analog to traditional peer review, called noise. Noise is not just a constant background that must be overcome. Noise is also generated by the very process that creates a signal. The difference is how the amplitude of noise grows compared to the amplitude of signal. For very low-amplitude signals, all you measure is noise, while for very high-intensity signals, the noise is vanishingly small compared to the signal, even though it’s huge compared to the noise of the low-amplitude signal.

Our esteemed peers, I would argue, are somewhat random in their response, but weighted toward objectivity. Using this inappropriate physics model, a review conducted by four reviewers can be expected (on average) to contain two responses that are, basically, noise. By contrast, a review by 100 reviewers may only have 10 responses that are noise: the noise grows roughly as the square root of the number of reviewers, while the signal grows linearly with their number. Overall, a substantial improvement. So, adding the responses of a large number of peers together should produce a better picture of a scientific paper’s strengths and weaknesses, as the toy sketch below illustrates.
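To make that back-of-the-envelope argument concrete, here is a minimal simulation sketch. It is not from the Synlett trial or the Ars Technica piece; it simply assumes each reviewer’s response is the paper’s true quality plus independent Gaussian noise, with all parameter values invented for illustration.

```python
import random
import statistics

# Toy model: a reviewer's response = true signal + independent noise.
# Averaging N responses keeps the signal while shrinking the noise roughly
# as 1/sqrt(N) -- the scaling behind "2 noisy responses out of 4" versus
# "10 out of 100" in the analogy above.
TRUE_QUALITY = 5.0   # hypothetical "true merit" of the paper (arbitrary units)
NOISE_SIGMA = 2.0    # spread of an individual reviewer's error (arbitrary)
TRIALS = 10_000      # repetitions used to estimate the spread of the average

def residual_noise(n_reviewers: int) -> float:
    """Standard deviation of the crowd-averaged score across many trials."""
    averages = []
    for _ in range(TRIALS):
        scores = [random.gauss(TRUE_QUALITY, NOISE_SIGMA)
                  for _ in range(n_reviewers)]
        averages.append(sum(scores) / n_reviewers)
    return statistics.stdev(averages)

for n in (4, 100):
    print(f"{n:3d} reviewers: residual noise ~ {residual_noise(n):.2f} "
          f"(theory: {NOISE_SIGMA / n ** 0.5:.2f})")
```

Under these made-up numbers, four reviewers leave about half of an individual reviewer’s noise in the averaged verdict, while a hundred leave about a tenth, matching the 2-in-4 versus 10-in-100 intuition above.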

Didn’t I just say that reviewers are overloaded? Doesn’t it seem that this will make the problem worse?

Well, no, as it turns out. When this approach was tested (with consent) on papers submitted to Synlett, it was discovered that review times went way down—from weeks to days. And authors reported getting more useful comments from their reviewers….(More)”.

Is it too late to build a better world?


Keith Burnett at Campaign for Social Science: “The greatest challenge we face is to use our intellects to guide our actions in making the world a better place for us and our fellow human beings.

This is no easy task and its history is littered with false dawns and doctrines. You would have to be blind to the lessons of the past to fail to appreciate the awful impact that delusional ideas have had on mankind. Some of the worst are those meant to save us.

There are some who take this as a warning against intervention at all, who say it can never be done and shouldn’t even be attempted. That the forces of nature blow winds in society that we can never tame. That we are bound to suffer like a small ship in a stormy sea.

They might be right, but it would be the utmost dereliction of academia to give up on this quest. And in any case, I don’t believe it is true. These forces may be there, but there is much we can do, a lot of it deeply practical, to make the journey more comfortable and to ensure we end up in the right port.

Of course, there are those who believe we academics simply don’t care. That scholarship is happiest at a distance from messy, contradictory humanity and prefers to remain in its detached world of conferences and publications. That we are content to analyse rather than heal.

Well I can tell you that my social sciences colleagues at Sheffield are not content in an ivory tower and they never have been. They feel the challenges of our world as keenly as any. And they know if we ever needed understanding, and a vision of what society could be, we need it now.

I am confident they are not alone and, as a scientist all my life, it has become apparent to me that, to translate insights into change, we must frequently overcome barriers of perception and culture, of politics and prejudice. Our great challenges are not only technical but matters of education and economics. Our barriers those of opportunity, power and purpose.

If we want solutions to reach those who desperately need them, we must understand how to take the word and make it flesh. Ideas alone are not enough; they come to life through people. They need money, armies of changed opinion.

If we don’t do this work, the risk is truly terrible – that the armies and the power, the public opinion and votes, will be led by ignorance and profit. As the ancient Greeks knew, a demos could only function when citizens could grasp the consequences of their choices.

Perhaps we had forgotten; thought ‘it can’t happen here’? If so, this year has been a stark reminder of why we dare not be complacent. For who would deny the great political lessons we are almost choking on as we see Brexit evolve from fringe populist movement to a force that is shaking us to pieces? Who will have failed to understand, in the frustrations of Trump, the value of a constitution designed to protect citizens against the ravages of a tyrant?

Why do the social sciences matter? Just look around us. Who would deny the need for new ways to organise our industry and our economy as real incomes fade? Who would deny that we need a society which is able to sensibly regulate against the depredations of the unscrupulous landlord?

Who would deny the need to understand how to better educate and train our youth?

We are engaged in a battle for society, and the fronts are many and difficult. Can we hope to build a society that will look after the stranger in its midst? Is social justice a chimera?

Is there anything to be done?

To this we answer, yes. But we must do more than study; we must find the gears which will ensure what we discover can be absorbed by a society that needs to act with understanding…(More)”

The Politics of Evidence: From evidence-based policy to the good governance of evidence


(Open Access) Book by Justin Parkhurst: “There has been an enormous increase in interest in the use of evidence for public policymaking, but the vast majority of work on the subject has failed to engage with the political nature of decision making and how this influences the ways in which evidence will be used (or misused) within political arenas. This book provides new insights into the nature of political bias with regards to evidence and critically considers what an ‘improved’ use of evidence would look like from a policymaking perspective.

Part I describes the great potential for evidence to help achieve social goals, as well as the challenges raised by the political nature of policymaking. It explores the concern of evidence advocates that political interests drive the misuse or manipulation of evidence, as well as counter-concerns of critical policy scholars about how appeals to ‘evidence-based policy’ can depoliticise political debates. Both concerns reflect forms of bias – the first representing technical bias, whereby evidence use violates principles of scientific best practice, and the second representing issue bias in how appeals to evidence can shift political debates to particular questions or marginalise policy-relevant social concerns.

Part II then draws on the fields of policy studies and cognitive psychology to understand the origins and mechanisms of both forms of bias in relation to political interests and values. It illustrates how such biases are not only common, but can be much more predictable once we recognise their origins and manifestations in policy arenas.

Finally, Part III discusses ways to move forward for those seeking to improve the use of evidence in public policymaking. It explores what constitutes ‘good evidence for policy’, as well as the ‘good use of evidence’ within policy processes, and considers how to build evidence-advisory institutions that embed key principles of both scientific good practice and democratic representation. Taken as a whole, the approach promoted is termed the ‘good governance of evidence’ – a concept that represents the use of rigorous, systematic and technically valid pieces of evidence within decision-making processes that are representative of, and accountable to, populations served…(More)”