Akemi Takeoka Chatfield and Christopher G. Reddick at Government Information Quarterly: “While citizens previously took a back seat to government, citizen coproduction of disaster risk communications through social media networks is emerging. We draw on information-processing, citizen coproduction, and networked governance theories to examine the governance and impact of networked interactions in the following question: When government’s capacity in information-processing and communication is overwhelmed by unfolding disasters, how do government and citizens coproduce disaster risk communications? During Hurricane Sandy, we collected 132,922 #sandy tweets to analyze their structure and networked interactions using social network analysis. We then conducted a case study of the government’s social media policy governance networks. Networked citizen interactions – their agility in voluntarily retweeting the government’s #sandy tweets and tweeting their own messages – magnified the agility and reach of the government’s #sandy disaster communications. Our case study indicates the criticality of social media policy governance networks in empowering the lead agencies and citizens to coproduce disaster communication public services….(More)”.
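The reach-amplification measurement the study describes can be approximated with a small sketch. The handles and retweet pairs below are invented for illustration; the study analyzed 132,922 real #sandy tweets with full social network analysis:

```python
from collections import Counter

# Hypothetical sample of (retweeter, original_author) pairs from #sandy tweets
retweets = [
    ("alice", "fema"), ("bob", "fema"), ("carol", "fema"),
    ("dave", "nycmayor"), ("alice", "nycmayor"), ("bob", "alice"),
]

# Count how many distinct retweet events each account received --
# a rough proxy for the "reach" that citizen retweeting adds
reach = Counter(author for _, author in retweets)

# Rank accounts by amplification; lead agencies should surface at the top
top = reach.most_common()
print(top[0])  # ("fema", 3)
```

A full analysis would also examine network structure (who bridges which communities), but even raw retweet counts show how citizen relays multiply an agency's audience.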
Using big data to predict suicide risk among Canadian youth
SAS Insights: “Suicide is the second leading cause of death among youth in Canada, according to Statistics Canada, accounting for one-fifth of deaths of people under the age of 25 in 2011. The Canadian Mental Health Association states that among 15- to 24-year-olds the figure is an even more frightening 24 percent – the third highest in the industrialized world. Yet despite these disturbing statistics, the signals that an individual plans self-injury or suicide are hard to isolate….
Team members …collected 2.3 million tweets and used text mining software to identify 1.1 million of them as likely to have been authored by 13- to 17-year-olds in Canada, by building a machine learning model to predict age based on the open-source PAN author profiling dataset. Their analysis made use of natural language processing, predictive modelling, text mining, and data visualization….
However, there were challenges. Ages are not revealed on Twitter, so the team had to figure out how to tease out the data for 13- to 17-year-olds in Canada. “We had a text data set, and we created a model to identify if people were in that age group based on how they talked in their tweets,” Soehl said. “From there, we picked some specific buzzwords and created topics around them, and our software mined those tweets to collect the people.”
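A purely illustrative sketch of the kind of language-based age filtering Soehl describes (the marker words and threshold are assumptions invented here; the real team trained a classifier on the PAN author profiling dataset rather than using a hand-picked word list):

```python
import re

# Toy stand-in for a trained age model: flag tweets containing
# tokens a real classifier might learn to associate with teens
TEEN_MARKERS = {"homework", "exam", "mom", "school", "grade"}

def likely_teen(tweet: str, threshold: int = 1) -> bool:
    """Very rough proxy: count teen-associated tokens in the tweet."""
    tokens = set(re.findall(r"[a-z']+", tweet.lower()))
    return len(tokens & TEEN_MARKERS) >= threshold

tweets = [
    "so much homework before the exam tomorrow",
    "quarterly earnings call went long again",
]
flagged = [t for t in tweets if likely_teen(t)]
print(len(flagged))  # 1
```

A production pipeline would replace the word list with model weights learned from labeled data, but the filtering step (score each tweet, keep likely matches) has the same shape.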
Another issue was the restrictions Twitter places on pulling data, though Soehl believes that once this analysis becomes an established solution, Twitter may work with researchers to expedite the process. “Now that we’ve shown it’s possible, there are a lot of places we can go with it,” said Soehl. “Once you know your path and figure out what’s going to be valuable, things come together quickly.”
The team looked at the percentage of people in the group who were talking about depression or suicide, and what they were talking about. Horne said that when SAS’ work went in front of a Canadian audience working in health care, they said that it definitely filled a gap in their data — and that was the validation he’d been looking for. The team also won $10,000 for creating the best answer to this question (the team donated the award money to two mental health charities: Mind Your Mind and Rise Asset Development).
What’s next?
That doesn’t mean the work is done, said Jos Polfliet. “We’re just scraping the surface of what can be done with the information.” Another way to use the results is to look at patterns and trends….(More)”
Humanizing technology
Kaliya Young at Open Democracy: “Can we use the internet to enhance deep human connection and support the emergence of thriving communities in which everyone’s needs are met and people’s lives are filled with joy and meaning?….
Our work on ‘technical’ technologies won’t generate broad human gains unless we invest an equal amount of time, energy and resources in the development of social and emotional technologies that drive how our whole society is organized and how we work together. I think we are actually on the cusp of having the tools, understanding and infrastructure to make that happen, without all our ideas and organizing being intermediated by giant corporations. But what does that mean in practice?
I think two things are absolutely vital.
First of all, how do we connect all the people and all the groups that want to align their goals in pursuit of social justice, deep democracy, and the development of new economies that share wealth and protect the environment? How are people supported to protect their own autonomy while also working with multiple other groups in processes of joint work and collective action?
One key element of the answer to that question is to generate a digital identity that is not under the control of a corporation, an organization or a government.
I have been co-leading the community surrounding the Internet Identity Workshop for the last 12 years. After many explorations of the techno-possibility landscape, we have finally made some breakthroughs that will lay the foundations of a real internet-scale infrastructure to support what are called ‘user-centric’ or ‘self-sovereign’ identities.
This infrastructure consists of a network with two different types of nodes—people and organizations—with each individual being able to join lots of different groups. But regardless of how many groups they join, people will need a digital identity that is not owned by Twitter, Amazon, Apple, Google or Facebook. That’s the only way they will be able to control their own autonomous interactions on the internet. If open standards are not created for this critical piece of infrastructure then we will end up in a future where giant corporations control all of our identities. In many ways we are in this future now.
This is where something called ‘Shared Ledger Technology’ or SLT comes in—more commonly known as ‘blockchain’ or ‘distributed ledger technology.’ SLT represents a huge innovation in terms of databases that can be read by anyone and which are highly resistant to tampering—meaning that data cannot be erased or changed once entered. At the moment there’s a lot of work going on to design the encryption key management that’s necessary to support the creation and operation of these unique private channels of connection and communication between individuals and organizations. The Sovrin Foundation has built an SLT specifically for digital identity key management, and has donated the code required to the Hyperledger Foundation under ‘Project Indy.’…
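The tamper-evidence property that makes SLTs attractive for identity can be illustrated with a minimal hash-chained ledger. This is a toy sketch (the DID-style identifiers are invented); production systems such as Sovrin add distributed consensus and real key management on top:

```python
import hashlib
import json

def _digest(prev_hash: str, record: dict) -> str:
    # Each entry's hash covers both its record and the previous hash
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class MiniLedger:
    """Toy append-only ledger: altering any past record breaks every later hash."""

    def __init__(self):
        self.entries = []  # list of (record, hash)

    def append(self, record: dict) -> None:
        prev = self.entries[-1][1] if self.entries else "genesis"
        self.entries.append((record, _digest(prev, record)))

    def verify(self) -> bool:
        prev = "genesis"
        for record, h in self.entries:
            if _digest(prev, record) != h:
                return False
            prev = h
        return True

ledger = MiniLedger()
ledger.append({"id": "did:example:alice", "key": "pubkey-1"})
ledger.append({"id": "did:example:alice", "key": "pubkey-2"})
print(ledger.verify())  # True
# Tampering with an old record invalidates the chain
ledger.entries[0] = ({"id": "did:example:mallory", "key": "pubkey-1"}, ledger.entries[0][1])
print(ledger.verify())  # False
```

The point for identity is that a published key, once recorded, cannot be silently rewritten by any single party — which is what lets individuals rather than platforms anchor their identifiers.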
To put it simply, technical technologies are easier to turn in the direction of democracy and social justice if they are developed and applied with social and emotional intelligence. Combining all three together is the key to using technology for liberating ends….(More)”.
Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation
Paper by Jack Balkin: “We have now moved from the early days of the Internet to the Algorithmic Society. The Algorithmic Society features the use of algorithms, artificial intelligence agents, and Big Data to govern populations. It also features digital infrastructure companies, large multi-national social media platforms, and search engines that sit between traditional nation states and ordinary individuals, and serve as special-purpose governors of speech.
The Algorithmic Society presents two central problems for freedom of expression. First, Big Data allows new forms of manipulation and control, which private companies will attempt to legitimate and insulate from regulation by invoking free speech principles. Here First Amendment arguments will likely be employed to forestall digital privacy guarantees and prevent consumer protection regulation. Second, privately owned digital infrastructure companies and online platforms govern speech much as nation states once did. Here the First Amendment, as normally construed, is simply inadequate to protect the practical ability to speak.
The first part of the essay describes how to regulate online businesses that employ Big Data and algorithmic decision making consistent with free speech principles. Some of these businesses are “information fiduciaries” toward their end-users; they must exercise duties of good faith and non-manipulation. Other businesses who are not information fiduciaries have a duty not to engage in “algorithmic nuisance”: they may not externalize the costs of their analysis and use of Big Data onto innocent third parties.
The second part of the essay turns to the emerging pluralist model of online speech regulation. This pluralist model contrasts with the traditional dyadic model in which nation states regulated the speech of their citizens.
In the pluralist model, territorial governments continue to regulate speech directly. But they also attempt to coerce or co-opt the owners of digital infrastructure to regulate the speech of others. This is “new school” speech regulation….(More)”.
Chatbot helps asylum seekers prepare for their interviews
Springwise: “MarHub is a new chatbot developed by students at the University of California-Berkeley’s Haas School of Business to help asylum seekers through the complicated process of applying to become an official refugee – which can take up to 18 months – and to avoid using smugglers.
Finding the right information for the asylum process isn’t easy, and although most asylum seekers are in possession of a smartphone, a lot of the information is either missing or out of date. MarHub is designed to help with that, as it will walk the user through what they can expect and also how to present their case. MarHub is also expandable, so that new information or regulations can be quickly added to make it a hub of useful information.
The concept of MarHub was born in late 2016, in response to the Hult Prize social enterprise challenge, which was focusing on refugees for 2017. The development team quickly realized that there was a gap in the market which they felt they could fill. MarHub will initially be made available through Facebook, and then later on WhatsApp and text messaging….(More)”.
MIT map offers real-time, crowd-sourced flood reporting during Hurricane Irma
MIT News: “As Hurricane Irma bears down on the U.S., the MIT Urban Risk Lab has launched a free, open-source platform that will help residents and government officials track flooding in Broward County, Florida. The platform, RiskMap.us, is being piloted to enable both residents and emergency managers to obtain better information on flooding conditions in near-real time.
Residents affected by flooding can add information to the publicly available map via popular social media channels. Using Twitter, Facebook, and Telegram, users submit reports by sending a direct message to the Risk Map chatbot. The chatbot replies to users with a one-time link through which they can upload information including location, flood depth, a photo, and description.
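The one-time-link flow the chatbot uses could be sketched roughly as follows. The URL, token length, and expiry window are assumptions for illustration, not details from the article:

```python
import secrets
import time

# In-memory store of one-time report tokens; a real deployment would persist these
_tokens: dict[str, float] = {}
TTL_SECONDS = 3600  # assumed expiry window

def issue_report_link(base_url: str = "https://riskmap.example/report") -> str:
    """Issue a single-use upload link, as the Risk Map chatbot might."""
    token = secrets.token_urlsafe(16)
    _tokens[token] = time.time() + TTL_SECONDS
    return f"{base_url}/{token}"

def redeem(token: str) -> bool:
    """Accept the report only if the token is unused and unexpired."""
    expiry = _tokens.pop(token, None)  # pop makes the token single-use
    return expiry is not None and time.time() < expiry

link = issue_report_link()
token = link.rsplit("/", 1)[1]
print(redeem(token), redeem(token))  # True False (second use is rejected)
```

Single-use links keep the map's submission endpoint closed to spam while still letting anyone report through the social channels they already use.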
Residents and government officials can view the map to see recent flood reports to understand changing flood conditions across the county. Tomas Holderness, a research scientist in the MIT Department of Architecture, led the design of the system. “This project shows the importance that citizen data has to play in emergencies,” he says. “By connecting residents and emergency managers via social messaging, our map helps keep people informed and improve response times.”…
The Urban Risk Lab also piloted the system in Indonesia — where the project is called PetaBencana.id, or “Map Disaster” — during a large flood event on Feb. 20, 2017.
During the flooding, over 300,000 users visited the public website in 24 hours, and the map was integrated into the Uber application to help drivers avoid flood waters. The project in Indonesia is supported by a grant from USAID and is working in collaboration with the Indonesian Federal Emergency Management Agency, the Pacific Disaster Centre, and the Humanitarian Open Street Map Team.
The Urban Risk Lab team is also working in India on RiskMap.in….(More)”.
Feeding the Machine: Policing, Crime Data, & Algorithms
Elizabeth E. Joh at William & Mary Bill of Rights J. (2017 Forthcoming): “Discussions of predictive algorithms used by the police tend to assume the police are merely end users of big data. Accordingly, police departments are consumers and clients of big data — not much different than users of Spotify, Netflix, Amazon, or Facebook. Yet this assumption about big data policing contains a flaw. Police are not simply end users of big data. They generate the information that big data programs rely upon. This essay explains why predictive policing programs can’t be fully understood without an acknowledgment of the role police have in creating its inputs. Their choices, priorities, and even omissions become the inputs algorithms use to forecast crime. The filtered nature of crime data matters because these programs promise cutting edge results, but may deliver analyses with hidden limitations….(More)”.
The Use of Big Data Analytics by the IRS: Efficient Solutions or the End of Privacy as We Know It?
Kimberly A. Houser and Debra Sanders in the Vanderbilt Journal of Entertainment and Technology Law: “This Article examines the privacy issues resulting from the IRS’s big data analytics program as well as the potential violations of federal law. Although historically, the IRS chose tax returns to audit based on internal mathematical mistakes or mismatches with third party reports (such as W-2s), the IRS is now engaging in data mining of public and commercial data pools (including social media) and creating highly detailed profiles of taxpayers upon which to run data analytics. This Article argues that current IRS practices, mostly unknown to the general public, are violating fair information practices. This lack of transparency and accountability not only violates federal law regarding the government’s data collection activities and use of predictive algorithms, but may also result in discrimination. While the potential efficiencies that big data analytics provides may appear to be a panacea for the IRS’s budget woes, unchecked, these activities are a significant threat to privacy. Other concerns regarding the IRS’s entrée into big data are raised including the potential for political targeting, data breaches, and the misuse of such information. This Article intends to bring attention to these privacy concerns and contribute to the academic and policy discussions about the risks presented by the IRS’s data collection, mining and analytics activities….(More)”.
How to Regulate Artificial Intelligence
Oren Etzioni in the New York Times: “…we should regulate the tangible impact of A.I. systems (for example, the safety of autonomous vehicles) rather than trying to define and rein in the amorphous and rapidly developing field of A.I.
I propose three rules for artificial intelligence systems that are inspired by, yet develop further, the “three laws of robotics” that the writer Isaac Asimov introduced in 1942: A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings, except when such orders would conflict with the previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws.
These three laws are elegant but ambiguous: What, exactly, constitutes harm when it comes to A.I.? I suggest a more concrete basis for avoiding A.I. harm, based on three rules of my own.
First, an A.I. system must be subject to the full gamut of laws that apply to its human operator. This rule would cover private, corporate and government systems. We don’t want A.I. to engage in cyberbullying, stock manipulation or terrorist threats; we don’t want the F.B.I. to release A.I. systems that entrap people into committing crimes. We don’t want autonomous vehicles that drive through red lights, or worse, A.I. weapons that violate international treaties.
Our common law should be amended so that we can’t claim that our A.I. system did something that we couldn’t understand or anticipate. Simply put, “My A.I. did it” should not excuse illegal behavior.
My second rule is that an A.I. system must clearly disclose that it is not human. As we have seen in the case of bots — computer programs that can engage in increasingly sophisticated dialogue with real people — society needs assurances that A.I. systems are clearly labeled as such. In 2016, a bot known as Jill Watson, which served as a teaching assistant for an online course at Georgia Tech, fooled students into thinking it was human. A more serious example is the widespread use of pro-Trump political bots on social media in the days leading up to the 2016 elections, according to researchers at Oxford….
My third rule is that an A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information…(More)”
Crowdsourcing website is helping volunteers save lives in hurricane-hit Houston
By Monday morning, the 27-year-old developer Marchetti, sitting in his leaky office, had slapped together an online mapping tool to track stranded residents. A day later, nearly 5,000 people had registered to be rescued, and 2,700 of them were safe.
If there’s a silver lining to Harvey, it’s the flood of civilian volunteers such as Marchetti who have joined the rescue effort. It became pretty clear shortly after the storm started pounding Houston that the city would need their help. The heavy rains quickly outstripped authorities’ ability to respond. People watched water levels rise around them while they waited on hold to get connected to a 911 dispatcher. Desperate local officials asked owners of high-water vehicles and boats to help collect their fellow citizens trapped on second-stories and roofs.
In the past, disaster volunteers have relied on social media and Zello, an app that turns your phone into a walkie-talkie, to organize. … Harvey’s magnitude, both in terms of damage and the number of people anxious to pitch in, also overwhelmed those grassroots organizing methods, says Marchetti, who spent the first days after the storm hit monitoring Facebook and Zello to figure out what was needed where.
“The channels were just getting overloaded with people asking ‘Where do I go?’” he says. “We’ve tried to cut down on the level of noise.”
The idea behind his project, Houstonharveyrescue.com, is simple. The map lets people in need register their location. They are asked to include details—for example, if they’re sick or have small children—and their cell phone numbers.
The army of rescuers, who can also register on the site, can then easily spot the neediest cases. A team of 100 phone dispatchers follows up with those wanting to be rescued, and can send mass text messages with important information. An algorithm weeds out any repeats.
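The article doesn't say how the duplicate-weeding algorithm works; one plausible minimal approach (an assumption, not the site's actual code) is to key requests on the registered phone number and keep only the latest report:

```python
def dedupe_requests(requests):
    """Keep one rescue request per phone number, preferring the latest report.

    `requests` is a list of dicts with 'phone', 'timestamp', and free-form details.
    """
    latest = {}
    for req in requests:
        key = req["phone"]
        if key not in latest or req["timestamp"] > latest[key]["timestamp"]:
            latest[key] = req
    return list(latest.values())

# Invented sample reports: the same household registering twice as water rises
reports = [
    {"phone": "555-0100", "timestamp": 1, "note": "2 adults on roof"},
    {"phone": "555-0100", "timestamp": 3, "note": "2 adults, water rising"},
    {"phone": "555-0199", "timestamp": 2, "note": "elderly couple, 2nd floor"},
]
unique = dedupe_requests(reports)
print(len(unique))  # 2
```

Keeping the most recent report per contact means dispatchers see the latest known conditions rather than stale entries, and no boat is sent twice to the same address.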
It might be one of the first open-sourced rescue missions in the US, and could be a valuable blueprint for future disaster volunteers. (For a similar civilian-led effort outside the US, look at Tijuana’s Strategic Committee for Humanitarian Aid, a Facebook group that sprouted last year when the Mexican border city was overwhelmed by a wave of Haitian immigrants.)…(More)”.