A Framework for Strengthening Data Ecosystems to Serve Humanitarian Purposes


Paper by Marc van den Homberg et al: “The incidence of natural disasters worldwide is increasing. As a result, a growing number of people are in need of humanitarian support, for which limited resources are available. This requires an effective and efficient prioritization of the most vulnerable people in the preparedness phase, and the most affected people in the response phase of humanitarian action. Data-driven models have the potential to support this prioritization process. However, the application of these models in a country requires a certain level of data preparedness.
To achieve this level of data preparedness on a large scale, we need to know how to facilitate, stimulate and coordinate data-sharing between humanitarian actors. We use a data ecosystem perspective to develop success criteria for establishing a “humanitarian data ecosystem”. We first present the development of a general framework with data ecosystem governance success criteria based on a systematic literature review. Subsequently, the applicability of this framework in the humanitarian sector is assessed through a case study on the “Community Risk Assessment and Prioritization toolbox” developed by the Netherlands Red Cross. The empirical evidence led to the adaptation of the framework to the specific criteria that need to be addressed when aiming to establish a successful humanitarian data ecosystem….(More)”.

Data sharing in PLOS ONE: An analysis of Data Availability Statements


Lisa M. Federer et al at PLOS ONE: “A number of publishers and funders, including PLOS, have recently adopted policies requiring researchers to share the data underlying their results and publications. Such policies help increase the reproducibility of the published literature, as well as make a larger body of data available for reuse and re-analysis. In this study, we evaluate the extent to which authors have complied with this policy by analyzing Data Availability Statements from 47,593 papers published in PLOS ONE between March 2014 (when the policy went into effect) and May 2016. Our analysis shows that compliance with the policy has increased, with a significant decline over time in papers that did not include a Data Availability Statement. However, only about 20% of statements indicate that data are deposited in a repository, which the PLOS policy states is the preferred method. More commonly, authors state that their data are in the paper itself or in the supplemental information, though it is unclear whether these data meet the level of sharing required in the PLOS policy. These findings suggest that additional review of Data Availability Statements or more stringent policies may be needed to increase data sharing….(More)”.
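The kind of large-scale categorization described in the abstract can be approximated with a simple keyword matcher over statement text. The sketch below is purely illustrative: the category names and keywords here are invented for the example and are not the study's actual coding scheme.

```python
# Illustrative sketch only: classify Data Availability Statements into
# coarse categories by keyword matching. The categories and keyword lists
# are hypothetical, not the coding scheme used in the Federer et al. study.
from collections import Counter

def classify_statement(statement):
    """Assign a Data Availability Statement to a coarse category."""
    text = statement.lower()
    # Repository deposit is the method the PLOS policy prefers.
    if any(k in text for k in ("repository", "dryad", "figshare", "genbank")):
        return "repository"
    if any(k in text for k in ("supporting information", "supplemental")):
        return "in_supplement"
    if "within the paper" in text or "in the manuscript" in text:
        return "in_paper"
    if "upon request" in text:
        return "on_request"
    return "other"

# A few sample statements, counted by category.
statements = [
    "All data are available from the Dryad repository.",
    "All relevant data are within the paper and its Supporting Information files.",
    "Data are available upon request from the authors.",
]
counts = Counter(classify_statement(s) for s in statements)
print(counts)
```

In practice such a first pass would be followed by manual review, since statements are free text and a single statement can mention several locations.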

Optimal Scope for Free Flow of Non-Personal Data in Europe


Paper by Simon Forge for the European Parliament Think Tank: “Data is not static in a personal/non-personal classification – with modern analytic methods, certain non-personal data can help to generate personal data – so the distinction may become blurred. Thus, de-anonymisation techniques with advances in artificial intelligence (AI) and manipulation of large datasets will become a major issue. In some new applications, such as smart cities and connected cars, the enormous volumes of data gathered may be used for personal information as well as for non-personal functions, so such data may cross over from the technical and non-personal into the personal domain. A debate is taking place on whether current EU restrictions on confidentiality of personal private information should be relaxed so as to include personal information in free and open data flows. However, it is unlikely that a loosening of such rules will be positive for the growth of open data. Public distrust of open data flows may be exacerbated because of fears of potential commercial misuse of such data, as well as of leakages, cyberattacks, and so on. The proposed recommendations are: to promote the use of open data licences to build trust and openness; to promote sharing of private enterprises’ data within vertical sectors and across sectors, increasing the volume of open data through incentive programmes; to support testing for contamination of open data mixed with personal data, ensuring open data is scrubbed clean and so reinforcing public confidence; and to ensure anti-competitive behaviour does not compromise the open data initiative….(More)”.

Open Social Innovation: Why and How Seekers Use Crowdsourcing for Societal Benefits


Paper by Krithika Randhawa, Ralf Wilden and Joel West: “Despite the increased research attention on crowdsourcing, we know little about why and how seeker organizations use this open innovation mechanism. Furthermore, previous studies have focused on profit-seeking firms, despite the use of open innovation practices by public sector organizations to achieve societal benefits. In this study, we investigate the organizational and project-level choices of governments (seekers) that crowdsource from citizens (solvers) to drive open social innovation, and thus develop new ways to address societal problems, a process referred to as “citizensourcing”.

Using a dataset of 18 local government seekers that use the same intermediary to conduct more than 2,000 crowdsourcing projects, we develop a model of seeker crowdsourcing implementation that links a previously-unstudied variance in seeker intent and engagement strategies, at the organizational level, to differences in project team motivation and capabilities, in turn leading to varying online engagement behaviors and ultimately project outcomes. Comparing and contrasting governmental with the more familiar corporate context, we further find that the non-pecuniary orientation of both seekers and solvers means that the motives of government crowdsourcing differ fundamentally from corporate crowdsourcing, but that the process more closely resembles a corporate-sponsored community rather than government-sponsored contests. More broadly, we offer insights on how seeker organizational factors and choices shape project-level implementation and success of crowdsourcing efforts, as well as suggest implications for open innovation activities of other smaller, geographically-bound organizations….(More)”.

Privacy and Freedom of Expression In the Age of Artificial Intelligence


Joint Paper by Privacy International and ARTICLE 19: “Artificial Intelligence (AI) is part of our daily lives. This technology shapes how people access information, interact with devices, share personal information, and even understand foreign languages. It also transforms how individuals and groups can be tracked and identified, and dramatically alters what kinds of information can be gleaned about people from their data. AI has the potential to revolutionise societies in positive ways. However, as with any scientific or technological advancement, there is a real risk that the use of new tools by states or corporations will have a negative impact on human rights. While AI impacts a plethora of rights, ARTICLE 19 and Privacy International are particularly concerned about the impact it will have on the right to privacy and the right to freedom of expression and information. This scoping paper focuses on applications of ‘artificial narrow intelligence’: in particular, machine learning and its implications for human rights.

The aim of the paper is fourfold:

1. Present key technical definitions to clarify the debate;

2. Examine key ways in which AI impacts the right to freedom of expression and the right to privacy and outline key challenges;

3. Review the current landscape of AI governance, including various existing legal, technical, and corporate frameworks and industry-led AI initiatives that are relevant to freedom of expression and privacy; and

4. Provide initial suggestions for rights-based solutions which can be pursued by civil society organisations and other stakeholders in AI advocacy activities….(More)”.

Accountability in modern government: what are the issues?


Discussion Paper by Benoit Guerin, Julian McCrae and Marcus Shepheard: “…Accountability lies at the heart of democratic government. It enables people to know how the Government is doing and how to gain redress when things go wrong. It ensures ministers and civil servants are acting in the interests of the people they serve.

Accountability is a part of good governance and it can increase the trustworthiness and legitimacy of the state in the eyes of the public. Every day, 5.4 million public sector workers deliver services ranging from health care to schools to national defence. A host of bodies hold them to account – whether the National Audit Office undertaking around 60 value-for-money inquiries a year, Ofsted inspecting more than 5,000 schools per year, or the main Government ombudsman services dealing with nearly 80,000 complaints from the public in 2016/17 alone. More than 21,000 elected officials, ranging from MPs to local councillors, scrutinise these services on behalf of citizens.

When that accountability works properly, it helps the UK’s government to be among the best in the world. For example, public spending is authorised by Parliament and routinely stays within the limits set. The accountability that surrounds this – provided through oversight by the Treasury, audit by the National Audit Office and scrutiny by the Public Accounts Committee – is strong and dates back to the 19th century. However, in areas where that accountability is weak, the risk of failure – whether financial mismanagement, the collapse of services or chronic underperformance – increases. …

There are three factors underpinning the weak accountability that is perpetuating failure. They are: fundamental gaps in accountability in Whitehall; a failure of accountability beyond Whitehall to keep pace with an increasingly complex public sector landscape; and a pervading culture of blame….

This paper suggests potential options for strengthening accountability, based on our analysis. These involve changes to structures, increased transparency and moves to improve the culture. These options are meant to elicit discussion rather than to set the Institute for Government’s position at this stage….(More)”

What Is Human-Centric Design?


Zack Quaintance at GovTech: “…Government services, like all services, have historically used some form of design to deploy user-facing components. The design portion of this equation is nothing new. What Olesund says is new, however, is the human-centric component.

“In the past, government services were often designed from the perspective and need of the government institution, not necessarily with the needs or desires of residents or constituents in mind,” said Olesund. “This might lead, for example, to an accumulation of steps and requirements for residents, or utilization of outdated technology because the government institution is locked into a contract.”

Basically, government has never set out to design its services to be clunky or hard to use. These qualities have, however, grown out of the legally complex frameworks that governments must adhere to, which can result in services that prioritize the needs of the institution rather than those of the people using them.

Change, however, is underway. Human-centric design is one of the main priorities of the U.S. Digital Service (USDS) and 18F, a pair of organizations created under the Obama administration with missions that largely involve making government services more accessible to the citizenry through efficient use of tech.

Although the needs of state and municipal governments are more localized, the gov tech work done at the federal level by the USDS and 18F has at times served as a benchmark or guidepost for smaller government agencies.

“They both redesign services to make them digital and user-friendly,” Olesund said. “But they also do a lot of work creating frameworks and best practices for other government agencies to adopt in order to achieve some of the broader systemic change.”

One of the most tangible examples of human-centered design at the state or local level can be found at Michigan’s Department of Health and Human Services, which recently worked with the Detroit-based design studio Civilla to reduce its paper services application from 40 pages, 18,000-some words and 1,000 questions, down to 18 pages, 3,904 words and 213 questions. Currently, Civilla is working with the nonprofit civic tech group Code for America to help bring the same massive level of human-centered design progress to the state’s digital services.

Other work is underway in San Francisco’s City Hall and within the state of California. A number of cities also have iTeams funded through Bloomberg Philanthropies, and their missions are to innovate in ways that solve ongoing municipal problems, a mission that often requires use of human-centric design….(More)”.

How Artificial Intelligence Could Increase the Risk of Nuclear War


RAND Corporation: “The fear that computers, by mistake or malice, might lead humanity to the brink of nuclear annihilation has haunted imaginations since the earliest days of the Cold War.

The danger might soon be more science than fiction. Stunning advances in AI have created machines that can learn and think, provoking a new arms race among the world’s major nuclear powers. It’s not the killer robots of Hollywood blockbusters that we need to worry about; it’s how computers might challenge the basic rules of nuclear deterrence and lead humans into making devastating decisions.

That’s the premise behind a new paper from RAND Corporation, How Might Artificial Intelligence Affect the Risk of Nuclear War? It’s part of a special project within RAND, known as Security 2040, to look over the horizon and anticipate coming threats.

“This isn’t just a movie scenario,” said Andrew Lohn, an engineer at RAND who coauthored the paper and whose experience with AI includes using it to route drones, identify whale calls, and predict the outcomes of NBA games. “Things that are relatively simple can raise tensions and lead us to some dangerous places if we are not careful.”…(More)”.

How artificial intelligence is transforming the world


Report by Darrell West and John Allen at Brookings: “Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States were asked about AI in 2017, only 17 percent said they were familiar with it. A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values.

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity….(More)”.

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion

A survey of incentive engineering for crowdsourcing


Conor Muldoon, Michael J. O’Grady and Gregory M. P. O’Hare in the Knowledge Engineering Review: “With the growth of the Internet, crowdsourcing has become a popular way to perform intelligence tasks that hitherto would be either performed internally within an organization or not undertaken due to prohibitive costs and the lack of an appropriate communications infrastructure.

In crowdsourcing systems, whereby multiple agents are not under the direct control of a system designer, it cannot be assumed that agents will act in a manner that is consistent with the objectives of the system designer or principal agent. In situations whereby agents’ goals are to maximize their return in crowdsourcing systems that offer financial or other rewards, strategies will be adopted by agents to game the system if appropriate mitigating measures are not put in place.

The motivational and incentivization research space is quite large; it incorporates diverse techniques from a variety of different disciplines including behavioural economics, incentive theory, and game theory. This paper specifically focusses on game theoretic approaches to the problem in the crowdsourcing domain and places it in the context of the wider research landscape. It provides a survey of incentive engineering techniques that enable the creation of apt incentive structures in a range of different scenarios….(More)”.
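One classic game-theoretic device of the kind such surveys cover is the output-agreement mechanism: a worker is paid only when their answer matches that of a randomly paired peer, so truthful answers become a natural focal equilibrium and low-effort random guessing goes unrewarded. The sketch below is a generic illustration of that idea, not a mechanism proposed in the survey itself; the function name and payment rule are invented here.

```python
# Minimal sketch of an output-agreement incentive mechanism (generic
# illustration, not taken from the survey): each worker is paid a fixed
# reward iff their answer matches that of one randomly chosen peer.
import random

def output_agreement_pay(answers, reward=1.0, rng=None):
    """Return {worker: payment} under an output-agreement rule.

    `answers` maps worker name -> submitted answer. Each worker is
    compared against a single uniformly random peer.
    """
    rng = rng or random.Random()
    workers = list(answers)
    pay = {}
    for w in workers:
        peer = rng.choice([p for p in workers if p != w])
        pay[w] = reward if answers[w] == answers[peer] else 0.0
    return pay

# Two workers agree on "cat"; the third dissents and is paired with an
# agreeing majority member, so the dissenter earns nothing.
answers = {"alice": "cat", "bob": "cat", "carol": "dog"}
print(output_agreement_pay(answers, rng=random.Random(0)))
```

Mechanisms like this are vulnerable to collusion on an uninformative answer, which is why the literature the survey covers also studies peer-prediction and gold-standard variants.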