This tech tells cities when floods are coming–and what they will destroy


Ben Paynter at FastCompany: “Several years ago, one of the eventual founders of One Concern nearly died in a tragic flood. Today, the company specializes in using artificial intelligence to predict how natural disasters are unfolding in real time on a city-block-level basis, in order to help disaster responders save as many lives as possible….

To fix that, One Concern debuted Flood Concern in late 2018. It creates map-based visualizations of where water surges may hit hardest, up to five days ahead of an impending storm. For cities, that includes not just time-lapse breakdowns of how the water will rise, how fast it could move, and what direction it will be flowing, but also what structures will get swamped or washed away, and how differing mitigation efforts – from levee building to dam releases – will impact each scenario. It’s the winner of Fast Company’s 2019 World Changing Ideas Awards in the AI and Data category.

[Image: One Concern]

So far, Flood Concern has been retroactively tested against events like Hurricane Harvey to show that it could have predicted what areas would be most impacted well ahead of the storm. The company, which was founded in Silicon Valley in 2015, started with one of that region’s pressing threats: earthquakes. It’s since earned contracts with cities like San Francisco, Los Angeles, and Cupertino, as well as private insurance companies….

One Concern’s first offering, dubbed Seismic Concern, takes existing information from satellite images and building permits to figure out what kind of ground structures are built on, and what might happen if they started shaking. If a big one hits, the program can extrapolate from the epicenter to suggest the likeliest places for destruction, and then adjust as more data from things like 911 calls and social media gets factored in….(More)”.


Platform Surveillance


Editorial by David Murakami Wood and Torin Monahan of Special Issue of Surveillance and Society: “This editorial introduces this special responsive issue on “platform surveillance.” We develop the term platform surveillance to account for the manifold and often insidious ways that digital platforms fundamentally transform social practices and relations, recasting them as surveillant exchanges whose coordination must be technologically mediated and therefore made exploitable as data. In the process, digital platforms become dominant social structures in their own right, subordinating other institutions, conjuring or sedimenting social divisions and inequalities, and setting the terms upon which individuals, organizations, and governments interact.

Emergent forms of platform capitalism portend new governmentalities, as they gradually draw existing institutions into alignment or harmonization with the logics of platform surveillance while also engendering subjectivities (e.g., the gig-economy worker) that support those logics. Because surveillance is essential to the operations of digital platforms, and because it structures the forms of governance and capital that emerge, the field of surveillance studies is uniquely positioned to investigate and theorize these phenomena….(More)”.

Responsible Data Governance of Neuroscience Big Data


Paper by B. Tyr Fothergill et al: “Current discussions of the ethical aspects of big data are shaped by concerns regarding the social consequences of both the widespread adoption of machine learning and the ways in which biases in data can be replicated and perpetuated. We instead focus here on the ethical issues arising from the use of big data in international neuroscience collaborations.

Neuroscience innovation relies upon neuroinformatics, large-scale data collection and analysis enabled by novel and emergent technologies. Each step of this work involves aspects of ethics, ranging from concerns for adherence to informed consent or animal protection principles and issues of data re-use at the stage of data collection, to data protection and privacy during data processing and analysis, and issues of attribution and intellectual property at the data-sharing and publication stages.

Significant dilemmas and challenges with far-reaching implications are also inherent, including reconciling the ethical imperative for openness and validation with data protection compliance, and considering future innovation trajectories or the potential for misuse of research results. Furthermore, these issues are subject to local interpretations within different ethical cultures applying diverse legal systems emphasising different aspects. Neuroscience big data require a concerted approach to research across boundaries, wherein ethical aspects are integrated within a transparent, dialogical data governance process. We address this by developing the concept of ‘responsible data governance’, applying the principles of Responsible Research and Innovation (RRI) to the challenges presented by governance of neuroscience big data in the Human Brain Project (HBP)….(More)”.

Responsible data sharing in international health research: a systematic review of principles and norms


Paper by Shona Kalkman, Menno Mostert, Christoph Gerlinger, Johannes J. M. van Delden and Ghislaine J. M. W. van Thiel: “Large-scale linkage of international clinical datasets could lead to unique insights into disease aetiology and facilitate treatment evaluation and drug development. Hereto, multi-stakeholder consortia are currently designing several disease-specific translational research platforms to enable international health data sharing. Despite the recent adoption of the EU General Data Protection Regulation (GDPR), the procedures for how to govern responsible data sharing in such projects are not at all spelled out yet. In search of a first, basic outline of an ethical governance framework, we set out to explore relevant ethical principles and norms…

We observed an abundance of principles and norms with considerable convergence at the aggregate level of four overarching themes: societal benefits and value; distribution of risks, benefits and burdens; respect for individuals and groups; and public trust and engagement. However, at the level of principles and norms we identified substantial variation in the phrasing and level of detail, the number and content of norms considered necessary to protect a principle, and the contextual approaches in which principles and norms are used....

While providing some helpful leads for further work on a coherent governance framework for data sharing, the current collection of principles and norms prompts important questions about how to streamline terminology regarding de-identification and how to harmonise the identified principles and norms into a coherent governance framework that promotes data sharing while securing public trust….(More)”.

Opening Internet Monopolies to Competition with Data Sharing Mandates


Policy Brief by Claudia Biancotti (PIIE) and Paolo Ciocca (Consob): “Over the past few years, it has become apparent that a small number of technology companies have assembled detailed datasets on the characteristics, preferences, and behavior of billions of individuals. This concentration of data is at the root of a worrying power imbalance between dominant internet firms and the rest of society, reflecting negatively on collective security, consumer rights, and competition. Introducing data sharing mandates, or requirements for market leaders to share user data with other firms and academia, would have a positive effect on competition. As data are a key input for artificial intelligence (AI), more widely available information would help spread the benefits of AI through the economy. On the other hand, data sharing could worsen existing risks to consumer privacy and collective security. Policymakers intending to implement a data sharing mandate should carefully evaluate this tradeoff….(More)”.

The Politics of Referendum Use in European Democracies


Book by Saskia Hollander: “This book demonstrates that the generally assumed dichotomy between referendums and representative democracy does not do justice to the great diversity of referendum types and of how referendums are used in European democracies. Although in all referendums citizens vote directly on issues rather than letting their political representatives do this for them, some referendums are more direct than others.

Rather than reflecting the direct power of the People, most referendums in EU countries are held by, and serve the interests of, the political elites, most notably the executive. The book shows that these interests rarely match the justifications given in the public debate. Instead of being driven by the need to compensate for the deficiency of political parties, decision-makers use referendums primarily to protect the position of their party. In unravelling the strategic role played by national referendums in decision-making, this book makes an unconventional contribution to the debate on the impact of referendums on democracy….(More)”.

Does increased ‘participation’ equal a new-found enthusiasm for democracy?


Blog by Stephen King and Paige Nicol: “With a few months under our belts, 2019 looks unlikely to be the year of a great global turnaround for democracy. The decade of democratic ‘recession’ that Larry Diamond declared in 2015 has dragged on and deepened, and may now be teetering on the edge of becoming a full-blown depression. 

The start of each calendar year is marked by the release of annual indices, rankings, and reports on how democracy is faring around the world. 2018 reports from Freedom House and the Economist Intelligence Unit (EIU) highlighted precipitous declines in civil liberties in long-standing democracies as well as authoritarian states. Some groups, including migrants, women, ethnic and other minorities, opposition politicians, and journalists have been particularly affected by these setbacks. According to the Committee to Protect Journalists, the number of journalists murdered nearly doubled last year, while the number imprisoned remained above 250 for the third consecutive year. 

Yet, the EIU also found a considerable increase in political participation worldwide. Levels of participation (including voting, protesting, and running for elected office, among other dimensions) increased substantially enough last year to offset falling scores in the other four categories of the index. Based on the methodology used, the rise in political participation was significant enough to prevent a decline in the global overall score for democracy for the first time in three years.

Though this development could give cause for optimism, we believe it could also raise new concerns. 

In Zimbabwe, Sudan, and Venezuela we see people who, through desperation and frustration, have taken to the streets – a form of participation which has been met with brutal crackdowns. Time has yet to tell what the ultimate outcome of these protests will be, but it is clear that governments with autocratic tendencies have more – and cheaper – tools to monitor, direct, control, and suppress participation than ever before. 

Elsewhere, we see a danger of people becoming dislocated from, and disenchanted with, democracy, as their representatives fail to take meaningful action on the issues that matter to them. In the UK Parliament, as Brexit discussions have become increasingly polarised and fractured along party political and ideological lines, Foreign Secretary Jeremy Hunt warned that there was a threat of social unrest if Parliament was seen to be frustrating the ‘will of the people.’ 

While we see enhanced participation as crucial to just and fair societies, it alone will not be the silver bullet that saves democracy. Whether this trend becomes a cause for hope or concern will depend on three factors: who is participating, what form does participation take, and how is participation received by those with power?…(More)”.

Advancing Computational Biology and Bioinformatics Research Through Open Innovation Competitions


HBR Working Paper by Andrea Blasco et al: “Open data science and algorithm development competitions offer a unique avenue for rapid discovery of better computational strategies. We highlight three examples in computational biology and bioinformatics research where the use of competitions has yielded significant performance gains over established algorithms. These include algorithms for antibody clustering, imputing gene expression data, and querying the Connectivity Map (CMap). Performance gains are evaluated quantitatively using realistic, albeit sanitized, data sets. The solutions produced through these competitions are then examined with respect to their utility and the prospects for implementation in the field. We present the decision process and competition design considerations that led to these successful outcomes as a model for researchers who want to use competitions and non-domain crowds as collaborators to further their research….(More)”.

Data Can Help Students Maximize Return on Their College Investment


Blog by Jennifer Latson for Arnold Ventures: “When you buy a car, you want to know it will get you where you’re going. Before you invest in a certain model, you check its record. How does it do in crash tests? Does it have a history of breaking down? Are other owners glad they bought it?

Students choosing between college programs can’t do the same kind of homework. Much of the detailed data we demand when we buy a car isn’t available for postsecondary education — data such as how many students find jobs in the fields they studied, what they earn, how much debt they accumulate, and how quickly they repay it — yet choosing a college is a much more important financial decision.

The most promising solution to filling in the gaps, according to data advocates, is the College Transparency Act, which would create a secure, comprehensive national data network with information on college costs, graduation rates, and student career paths — and make this data publicly available. The bill, which will be discussed in Congress this year, has broad support from both Republicans and Democrats in the House and the Senate in part because it includes precautions to protect privacy and secure student data….

The data needed to answer questions about student success already exists but is scattered among various agencies and institutions: the Department of Education for data on student loan repayment; the Treasury Department for earnings information; and schools themselves for graduation rates.

“We can’t connect the dots to find out how these programs are serving certain students, and that’s because the Department of Education isn’t allowed to connect all the information these places have already collected,” says Amy Laitinen, director for higher education at New America, a think tank collaborating with IHEP to promote educational transparency.

And until recently, publicly available federal postsecondary data only included full-time students who’d never enrolled in a college program before, ignoring the more than half of the higher ed population made up of students who attend school part time or who transfer from one institution to another….(More)”.

Trustworthy Privacy Indicators: Grades, Labels, Certifications and Dashboards


Paper by Joel R. Reidenberg et al: “Despite numerous groups’ efforts to score, grade, label, and rate the privacy of websites, apps, and network-connected devices, these attempts at privacy indicators have, thus far, not been widely adopted. Privacy policies, however, remain long, complex, and impractical for consumers. Communicating in some short-hand form, synthesized privacy content is now crucial to empower internet users and provide them more meaningful notice, as well as nudge consumers and data processors toward more meaningful privacy. Indeed, on the basis of these needs, the National Institute of Standards and Technology and the Federal Trade Commission in the United States, as well as lawmakers and policymakers in the European Union, have advocated for the development of privacy indicator systems.

Efforts to develop privacy grades, scores, labels, icons, certifications, seals, and dashboards have wrestled with various deficiencies and obstacles to the wide-scale deployment of meaningful and trustworthy privacy indicators. This paper seeks to identify and explain these deficiencies and obstacles that have hampered past and current attempts. With these lessons, the article then offers criteria that will need to be established in law and policy for trustworthy indicators to be successfully deployed and adopted through technological tools. The lack of standardization prevents user-recognizability and dependability in the online marketplace, diminishes the ability to create automated tools for privacy, and reduces incentives for consumers and industry to invest in privacy indicators. Flawed methods for selecting and weighting privacy evaluation criteria, along with difficulties interpreting language that is often ambiguous and vague, jeopardize success and reliability when baked into an indicator of privacy protectiveness or invasiveness. Likewise, indicators fall short when those organizations rating or certifying the privacy practices are not objective, trustworthy, and sustainable.

Nonetheless, trustworthy privacy rating systems that are meaningful, accurate, and adoptable can be developed to assure effective and enduring empowerment of consumers. This paper proposes a framework using examples from prior and current attempts to create privacy indicator systems in order to provide a valuable resource for present-day, real world policymaking….(More)”.