The Internet of Us


Book by Michael P. Lynch: “We used to say “seeing is believing”; now, googling is believing. With 24/7 access to nearly all of the world’s information at our fingertips, we no longer trek to the library or the encyclopedia shelf in search of answers. We just open our browsers, type in a few keywords and wait for the information to come to us. Now firmly established as a pioneering work of modern philosophy, The Internet of Us has helped revolutionize our understanding of what it means to be human in the digital age. Indeed, demonstrating that knowledge based on reason plays an essential role in society and that there is more to “knowing” than just acquiring information, leading philosopher Michael P. Lynch shows how our digital way of life makes us value some ways of processing information over others, and thus risks distorting the greatest traits of mankind. The result, which charts a path from Plato’s cave to Google Glass, is a necessary guide on how to navigate the philosophical quagmire that is the “Internet of Things.”…(More)”.

How to Regulate Artificial Intelligence


Oren Etzioni in the New York Times: “…we should regulate the tangible impact of A.I. systems (for example, the safety of autonomous vehicles) rather than trying to define and rein in the amorphous and rapidly developing field of A.I.

I propose three rules for artificial intelligence systems that are inspired by, yet develop further, the “three laws of robotics” that the writer Isaac Asimov introduced in 1942: A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings, except when such orders would conflict with the previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws.

These three laws are elegant but ambiguous: What, exactly, constitutes harm when it comes to A.I.? I suggest a more concrete basis for avoiding A.I. harm, based on three rules of my own.

First, an A.I. system must be subject to the full gamut of laws that apply to its human operator. This rule would cover private, corporate and government systems. We don’t want A.I. to engage in cyberbullying, stock manipulation or terrorist threats; we don’t want the F.B.I. to release A.I. systems that entrap people into committing crimes. We don’t want autonomous vehicles that drive through red lights, or worse, A.I. weapons that violate international treaties.

Our common law should be amended so that we can’t claim that our A.I. system did something that we couldn’t understand or anticipate. Simply put, “My A.I. did it” should not excuse illegal behavior.

My second rule is that an A.I. system must clearly disclose that it is not human. As we have seen in the case of bots — computer programs that can engage in increasingly sophisticated dialogue with real people — society needs assurances that A.I. systems are clearly labeled as such. In 2016, a bot known as Jill Watson, which served as a teaching assistant for an online course at Georgia Tech, fooled students into thinking it was human. A more serious example is the widespread use of pro-Trump political bots on social media in the days leading up to the 2016 elections, according to researchers at Oxford….

My third rule is that an A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information…(More)”

Data-Driven Policy Making: The Policy Lab Approach


Paper by Anne Fleur van Veenstra and Bas Kotterink: “Societal challenges such as migration, poverty, and climate change can be considered ‘wicked problems’ for which no optimal solution exists. To address such problems, public administrations increasingly aim for data-driven policy making. Data-driven policy making aims to make optimal use of sensor data and to collaborate with citizens to co-create policy. However, few public administrations have realized this so far. Therefore, in this paper an approach for data-driven policy making is developed that can be used in the setting of a Policy Lab. A Policy Lab is an experimental environment in which stakeholders collaborate to develop and test policy. Based on the literature, we first identify innovations in data-driven policy making. Subsequently, we map these innovations to the stages of the policy cycle. We found that most innovations are concerned with using new data sources in traditional statistics and that methodologies capturing the benefits of data-driven policy making are still under development. Further research should focus on policy experimentation while developing new methodologies for data-driven policy making at the same time….(More)”.

Dictionaries and crowdsourcing, wikis and user-generated content


Living Reference Work Entry by Michael Rundell: “It is tempting to dismiss crowdsourcing as a largely trivial recent development which has nothing useful to contribute to serious lexicography. This temptation should be resisted. When applied to dictionary-making, the broad term “crowdsourcing” in fact describes a range of distinct methods for creating or gathering linguistic data. A provisional typology is proposed, distinguishing three approaches which are often lumped under the heading “crowdsourcing.” These are: user-generated content (UGC), the wiki model, and what is referred to here as “crowd-sourcing proper.” Each approach is explained, and examples are given of their applications in linguistic and lexicographic projects. The main argument of this chapter is that each of these methods – if properly understood and carefully managed – has significant potential for lexicography. The strengths and weaknesses of each model are identified, and suggestions are made for exploiting them in order to facilitate or enhance different operations within the process of developing descriptions of language. Crowdsourcing – in its various forms – should be seen as an opportunity rather than as a threat or diversion….(More)”.

The Role of Evidence in Politics: Motivated Reasoning and Persuasion among Politicians


Martin Baekgaard et al. in the British Journal of Political Science: “Does evidence help politicians make informed decisions even if it is at odds with their prior beliefs? And does providing more evidence increase the likelihood that politicians will be enlightened by the information? Based on the literature on motivated political reasoning and the theory about affective tipping points, this article hypothesizes that politicians tend to reject evidence that contradicts their prior attitudes, but that increasing the amount of evidence will reduce the impact of prior attitudes and strengthen their ability to interpret the information correctly. These hypotheses are examined using randomized survey experiments with responses from 954 Danish politicians, and results from this sample are compared to responses from similar survey experiments with Danish citizens. The experimental findings strongly support the hypothesis that politicians are biased by prior attitudes when interpreting information. However, in contrast to expectations, the findings show that the impact of prior attitudes increases when more evidence is provided….(More)”.

Crowdsourcing website is helping volunteers save lives in hurricane-hit Houston


By Monday morning, Marchetti, a 27-year-old developer sitting in his leaky office, had slapped together an online mapping tool to track stranded residents. A day later, nearly 5,000 people had registered to be rescued, and 2,700 of them were safe.

If there’s a silver lining to Harvey, it’s the flood of civilian volunteers such as Marchetti who have joined the rescue effort. It became pretty clear shortly after the storm started pounding Houston that the city would need their help. The heavy rains quickly outstripped authorities’ ability to respond. People watched water levels rise around them while they waited on hold to get connected to a 911 dispatcher. Desperate local officials asked owners of high-water vehicles and boats to help collect their fellow citizens trapped on second stories and roofs.

In the past, disaster volunteers have relied on social media and Zello, an app that turns your phone into a walkie-talkie, to organize. … Harvey’s magnitude, both in terms of damage and the number of people anxious to pitch in, also overwhelmed those grassroots organizing methods, says Marchetti, who spent the first days after the storm hit monitoring Facebook and Zello to figure out what was needed where.

“The channels were just getting overloaded with people asking ‘Where do I go?’” he says. “We’ve tried to cut down on the level of noise.”

The idea behind his project, Houstonharveyrescue.com, is simple. The map lets people in need register their location. They are asked to include details—for example, if they’re sick or have small children—and their cell phone numbers.

The army of rescuers, who can also register on the site, can then easily spot the neediest cases. A team of 100 phone dispatchers follows up with those wanting to be rescued, and can send mass text messages with important information. An algorithm weeds out any repeats.
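The article does not explain how the repeat-weeding algorithm actually works. As a minimal sketch, one could assume repeats are flagged when a registered phone number matches or the reported coordinates nearly coincide; the `RescueRequest` fields and distance threshold below are illustrative assumptions, not details of Houstonharveyrescue.com.

```python
# Hypothetical sketch of de-duplicating rescue registrations.
# Field names and matching rules are assumptions, not the site's actual logic.
from dataclasses import dataclass
from typing import List


@dataclass
class RescueRequest:
    phone: str    # cell number used by dispatchers to follow up
    lat: float    # reported location
    lon: float
    details: str  # e.g. "two small children, one person sick"


def is_duplicate(a: RescueRequest, b: RescueRequest) -> bool:
    """Treat two requests as repeats if the phone numbers match or the
    reported locations are within roughly 30 meters of each other."""
    if a.phone and a.phone == b.phone:
        return True
    # Crude proximity check: ~0.0003 degrees is on the order of 30 m.
    return abs(a.lat - b.lat) < 0.0003 and abs(a.lon - b.lon) < 0.0003


def dedupe(requests: List[RescueRequest]) -> List[RescueRequest]:
    """Keep only the first registration for each apparent household."""
    unique: List[RescueRequest] = []
    for req in requests:
        if not any(is_duplicate(req, kept) for kept in unique):
            unique.append(req)
    return unique
```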

It might be one of the first open-sourced rescue missions in the US, and could be a valuable blueprint for future disaster volunteers. (For a similar civilian-led effort outside the US, look at Tijuana’s Strategic Committee for Humanitarian Aid, a Facebook group that sprouted last year when the Mexican border city was overwhelmed by a wave of Haitian immigrants.)…(More)”.

Design Thinking Approach to Ethical (Responsible) Technological Innovation


Chapter by Ganesh Nathan in Responsible Research and Innovation: From Concepts to Practices, eds. R. Gianni, J. Pearson and B. Reber (Routledge): “There is growing interest in, and importance attached to, responsible research and innovation (RRI) among academic scholars and policy makers, especially in relation to emerging technologies such as nanotechnology. It is also to be noted that, although the design thinking approach has been around since the 1960s, there is renewed interest in this approach to innovation, with an increasing number of related publications over the last couple of decades. Furthermore, it is currently being introduced in a number of schools and community projects. However, there is a gap in bridging the design thinking approach to RRI, and this chapter attempts to address that need.

This chapter aims to show that the design thinking approach is potentially conducive to ethical (responsible) technological innovation, especially within emerging and converging technologies, due to its emphasis on human-centered design and other core attributes such as empathy – although it poses many challenges to implement….(More)”.

The Case for Sharing All of America’s Data on Mosquitoes


Ed Yong in the Atlantic: “The U.S. is sitting on one of the largest data sets on any animal group, but most of it is inaccessible and restricted to local agencies….For decades, agencies around the United States have been collecting data on mosquitoes. Biologists set traps, dissect captured insects, and identify which species they belong to. They’ve done this for millions of mosquitoes, creating an unprecedented trove of information—easily one of the biggest long-term attempts to monitor any group of animals, if not the very biggest.

The problem, according to Micaela Elvira Martinez from Princeton University and Samuel Rund from the University of Notre Dame, is that this treasure trove of data isn’t all in the same place, and only a small fraction of it is public. The rest is inaccessible, hoarded by local mosquito-control agencies around the country.

Currently, these agencies can use their data to check if their attempts to curtail mosquito populations are working. Are they doing enough to remove stagnant water, for example? Do they need to spray pesticides? But if they shared their findings, Martinez and Rund say that scientists could do much more. They could better understand the ecology of these insects, predict the spread of mosquito-borne diseases like dengue fever or Zika, coordinate control efforts across states and counties, and quickly spot the arrival of new invasive species.

That’s why Martinez and Rund are now calling for the creation of a national database of mosquito records that anyone can access. “There’s a huge amount of taxpayer investment and human effort that goes into setting traps, checking them weekly, dissecting all those mosquitoes under a microscope, and tabulating the data,” says Martinez. “It would be a big bang for our buck to collate all that data and make it available.”

Martinez is a disease modeler—someone who uses real-world data to build simulations that reveal how infections rise, spread, and fall. She typically works with childhood diseases like measles and polio, where researchers are almost spoiled for data. Physicians are legally bound to report any cases, and the Centers for Disease Control and Prevention (CDC) compiles and publishes this information as a weekly report.

The same applies to cases of mosquito-borne diseases like dengue and Zika, but not to populations of the insects themselves. So, during last year’s Zika epidemic, when Martinez wanted to study the Aedes aegypti mosquito that spreads the disease, she had a tough time. “I was really surprised that I couldn’t find data on Aedes aegypti numbers,” she says. Her colleagues explained that scientists use climate variables like temperature and humidity to predict where mosquitoes are going to be abundant. That seemed ludicrous to her, especially since organizations collect information on the actual insects. It’s just that no one ever gathers those figures together….

Together with Rund and a team of undergraduate students, she found that there are more than 1,000 separate agencies in the United States that collect mosquito data—at least one in every county or jurisdiction. Only 152 agencies make their data publicly available in some way. The team collated everything they could find since 2009, and ended up with information about more than 15 million mosquitoes. Imagine what they’d have if all the datasets were open, especially since some go back decades.

A few mosquito-related databases do exist, but none are quite right. ArboNET, which is managed by the CDC and state health departments, mainly stores data about mosquito-borne diseases, and whatever information it has on the insects themselves isn’t precise enough in either time or space to be useful for modeling. MosquitoNET, which was developed by the CDC, does track mosquitoes, but “it’s a completely closed system, and hardly anyone has access to it,” says Rund. The Smithsonian Institution’s VectorMap is better in that it’s accessible, “but it lacks any real-time data from the continental United States,” says Rund. “When I checked a few months ago, it had just one record of Aedes aegypti since 2013.”…
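To make the point concrete, here is a minimal sketch of what a shared, modeling-friendly trap record might look like. The field names are assumptions drawn from the article's description (species, counts, trap location, and collection date), not the actual schema of ArboNET, MosquitoNET, or VectorMap.

```python
# Illustrative sketch of a common record format for mosquito trap counts;
# the fields are assumptions, not an existing agency or CDC schema.
import csv
from dataclasses import dataclass, asdict
from datetime import date
from typing import List


@dataclass
class TrapRecord:
    agency: str            # collecting agency or jurisdiction
    trap_id: str
    collection_date: date  # precise date, not a monthly or yearly rollup
    lat: float             # trap coordinates, not just a county name
    lon: float
    species: str           # e.g. "Aedes aegypti"
    count: int             # mosquitoes of that species in the trap


def write_records(path: str, records: List[TrapRecord]) -> None:
    """Dump records to a flat CSV that any disease modeler could load."""
    fields = list(TrapRecord.__dataclass_fields__)
    with open(path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=fields)
        writer.writeheader()
        for rec in records:
            writer.writerow(asdict(rec))
```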

Some scientists who work on mosquito control apparently disagree, and negative reviews have stopped Martinez and Rund from publishing their ideas in prominent academic journals. (For now, they’ve uploaded a paper describing their vision to the preprint repository bioRxiv.) “Some control boards say: What if people want to sue us because we’re showing that they have mosquito vectors near their homes, or if their house prices go down?” says Martinez. “And one mosquito-control scientist told me that no one should be able to work with mosquito data unless they’ve gone out and trapped mosquitoes themselves.”…

“Data should be made available without having to justify exactly what’s going to be done with it,” Martinez says. “We should put it out there for scientists to start unlocking it. I think there are a ton of biologists who will come up with cool things to do.”…(More)”.

Debating big data: A literature review on realizing value from big data


Wendy Arianne Günther et al. in The Journal of Strategic Information Systems: “Big data has been considered to be a breakthrough technological development over recent years. Notwithstanding, we have as yet limited understanding of how organizations translate its potential into actual social and economic value. We conduct an in-depth systematic review of IS literature on the topic and identify six debates central to how organizations realize value from big data, at different levels of analysis. Based on this review, we identify two socio-technical features of big data that influence value realization: portability and interconnectivity. We argue that, in practice, organizations need to continuously realign work practices, organizational models, and stakeholder interests in order to reap the benefits from big data. We synthesize the findings by means of an integrated model….(More)”.

Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessments in Sentencing


Priscilla Guo, Danielle Kehl, and Sam Kessler at Responsive Communities (Harvard): “In the summer of 2016, some unusual headlines began appearing in news outlets across the United States. “Secret Algorithms That Predict Future Criminals Get a Thumbs Up From the Wisconsin Supreme Court,” read one. Another declared: “There’s software used across the country to predict future criminals. And it’s biased against blacks.” These news stories (and others like them) drew attention to a previously obscure but fast-growing area in the field of criminal justice: the use of risk assessment software, powered by sophisticated and sometimes proprietary algorithms, to predict whether individual criminals are likely candidates for recidivism. In recent years, these programs have spread like wildfire throughout the American judicial system. They are now being used in a broad capacity, in areas ranging from pre-trial risk assessment to sentencing and probation hearings.

This paper focuses on the latest—and perhaps most concerning—use of these risk assessment tools: their incorporation into the criminal sentencing process, a development which raises fundamental legal and ethical questions about fairness, accountability, and transparency. The goal is to provide an overview of these issues and offer a set of key considerations and questions for further research that can help local policymakers who are currently implementing or considering implementing similar systems.

We start by putting this trend in context: the history of actuarial risk in the American legal system and the evolution of algorithmic risk assessments as the latest incarnation of a much broader trend. We go on to discuss how these tools are used in sentencing specifically and how that differs from other contexts like pre-trial risk assessment. We then delve into the legal and policy questions raised by the use of risk assessment software in sentencing decisions, including the potential for constitutional challenges under the Due Process and Equal Protection clauses of the Fourteenth Amendment. Finally, we summarize the challenges that these systems create for law and policymakers in the United States, and outline a series of possible best practices to ensure that these systems are deployed in a manner that promotes fairness, transparency, and accountability in the criminal justice system….(More)”.