Crowdsourced reports could save lives when the next earthquake hits


Charlotte Jee at MIT Technology Review: “When it comes to earthquakes, every minute counts. Knowing that one has hit—and where—can make the difference between staying inside a building and getting crushed, and running out and staying alive. This kind of timely information can also be vital to first responders.

However, the speed of early warning systems varies from country to country. In Japan and California, huge networks of sensors and seismic stations can alert citizens to an earthquake. But these networks are expensive to install and maintain. Earthquake-prone countries such as Mexico and Indonesia don’t have such an advanced or widespread system.

A cheap, effective way to help close this gap between countries might be to crowdsource earthquake reports and combine them with traditional detection data from seismic monitoring stations. The approach was described in a paper in Science Advances today.

The crowdsourced reports come from three sources: people submitting information using LastQuake, an app created by the Euro-Mediterranean Seismological Centre; tweets that refer to earthquake-related keywords; and the time and IP address data associated with visits to the EMSC website.

When this method was applied retrospectively to earthquakes that occurred in 2016 and 2017, the crowdsourced detections on their own were 85% accurate. Combining the technique with traditional seismic data raised accuracy to 97%. The crowdsourced system was faster, too. Around 50% of the earthquake locations were found in less than two minutes, a whole minute faster than with data provided only by a traditional seismic network.
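
To make the mechanics concrete, here is a minimal sketch of the fusion idea, assuming a naive burst detector over pooled crowdsourced timestamps (app reports, keyword tweets, website visits) and a recent seismic pick as confirmation. The window, threshold, and confirmation logic are illustrative assumptions, not the algorithm from the Science Advances paper.

```python
# Hedged sketch: flag a candidate quake when crowdsourced signals spike,
# then treat a nearby-in-time seismic detection as confirmation.
from datetime import datetime, timedelta

def crowd_spike(report_times: list[datetime], window_s: int = 60, threshold: int = 30) -> bool:
    """True if any sliding window of window_s seconds contains a burst of
    pooled crowdsourced timestamps (app reports, keyword tweets, site visits)."""
    times = sorted(report_times)
    for i, start in enumerate(times):
        in_window = sum(1 for t in times[i:] if t - start <= timedelta(seconds=window_s))
        if in_window >= threshold:
            return True
    return False

def detect(report_times: list[datetime], seismic_picks: list[datetime], now: datetime) -> str:
    """Crowd-only detection is fast but noisier; a seismic pick in the last
    few minutes upgrades a candidate to a confirmed detection."""
    if not crowd_spike(report_times):
        return "no event"
    confirmed = any(now - p <= timedelta(minutes=3) for p in seismic_picks)
    return "confirmed (crowd + seismic)" if confirmed else "candidate (crowd only)"
```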

When EMSC has identified a suspected earthquake, it sends out alerts via its LastQuake app asking users nearby for more information: images, videos, descriptions of the level of tremors, and so on. This can help assess the level of damage for first responders….(More)”.

Nudging the dead: How behavioural psychology inspired Nova Scotia’s organ donation scheme


Joseph Brean at National Post: “Nova Scotia’s decision to presume people’s consent to donating their organs after death is not just a North American first. It is also the latest example of how deeply behavioural psychology has changed policy debates.

That is a rare achievement for science. Governments used to appeal to people’s sense of reason, religion, civic duty, or fear of consequences. Today, when they want to change how their citizens behave, they use psychological tricks to hack their minds.

Nudge politics, as it came to be known, has been an intellectual hit among wonks and technocrats ever since Daniel Kahneman won the Nobel Prize in 2002 for destroying the belief that people make decisions based on good information and reasonable expectations. Not so, he showed. Not even close. Human decision-making is an organic process, all but immune to reason, but strangely susceptible to simple environmental cues, just waiting to be exploited by a clever policymaker….

Organ donation is a natural fit. Nova Scotia’s experiment aims to solve a policy problem by getting people to do what they always tend to do about government requests — nothing.

The cleverness is evident in the N.S. government’s own words, which play on the meaning of “opportunity”: “Every Nova Scotian will have the opportunity to be an organ and tissue donor unless they opt out.” The policy applies to kidneys, pancreas, heart, liver, lungs, small bowel, cornea, sclera, skin, bones, tendons and heart valves.

It is so clever it aims to make progress as people ignore it. The default position is a positive for the policy. It assumes poor pickup. You can opt out of organ donation if you want. Nova Scotia is simply taking the informed gamble that you probably won’t. That is the goal, and it will make for a revealing case study.

Organ donation is an important question, and chronically low donation rates can reasonably be called a crisis. But most people make their personal choice “thoughtlessly,” as Kahneman wrote in the 2011 book Thinking, Fast and Slow.

He referred to European statistics which showed vast differences in organ donation rates between neighbouring and culturally similar countries, such as Sweden and Denmark, or Germany and Austria. The key difference, he noted, was what he called “framing effects,” or how the question was asked….(More)”.

Fearful of fake news blitz, U.S. Census enlists help of tech giants


Nick Brown at Reuters: “The U.S. Census Bureau has asked tech giants Google, Facebook and Twitter to help it fend off “fake news” campaigns it fears could disrupt the upcoming 2020 count, according to Census officials and multiple sources briefed on the matter.

The push, the details of which have not been previously reported, follows warnings from data and cybersecurity experts dating back to 2016 that right-wing groups and foreign actors may borrow the “fake news” playbook from the last presidential election to dissuade immigrants from participating in the decennial count, the officials and sources told Reuters.

The sources, who asked not to be named, said evidence included increasing chatter on platforms like “4chan” by domestic and foreign networks keen to undermine the survey. The census, they said, is a powerful target because it shapes U.S. election districts and the allocation of more than $800 billion a year in federal spending.

Ron Jarmin, the Deputy Director of the Census Bureau, confirmed the bureau was anticipating disinformation campaigns, and was enlisting the help of big tech companies to fend off the threat.

“We expect that (the census) will be a target for those sorts of efforts in 2020,” he said.

Census Bureau officials have held multiple meetings with tech companies since 2017 to discuss ways they could help, including as recently as last week, Jarmin said.

So far, the bureau has gotten initial commitments from Alphabet Inc’s Google, Twitter Inc and Facebook Inc to help quash disinformation campaigns online, according to documents summarizing some of those meetings reviewed by Reuters.

But neither Census nor the companies have said how advanced any of the efforts are….(More)”.

How the NYPD is using machine learning to spot crime patterns


Colin Wood at StateScoop: “Civilian analysts and officers within the New York City Police Department are using a unique computational tool to spot patterns in crime data, and it is helping them close cases.

A collection of machine-learning models, which the department calls Patternizr, was first deployed in December 2016, but the department only revealed the system last month when its developers published a research paper in the INFORMS Journal on Applied Analytics. Drawing on 10 years of historical data about burglary, robbery and grand larceny, the tool is the first of its kind to be used by law enforcement, the developers wrote.

The NYPD hired 100 civilian analysts in 2017 to use Patternizr. It’s also available to all officers through the department’s Domain Awareness System, a citywide network of sensors, databases, devices, software and other technical infrastructure. Researchers told StateScoop the tool has generated leads on several cases that traditionally would have stretched officers’ memories and traditional evidence-gathering abilities.

Connecting similar crimes into patterns is a crucial part of gathering evidence and eventually closing in on an arrest, said Evan Levine, the NYPD’s assistant commissioner of data analytics and one of Patternizr’s developers. Taken independently, each crime in a string of crimes may not yield enough evidence to identify a perpetrator, but the work of finding patterns is slow and each officer only has a limited amount of working knowledge surrounding an incident, he said.

“The goal here is to alleviate all that kind of busywork you might have to do to find hits on a pattern,” said Alex Chohlas-Wood, a Patternizr researcher and deputy director of the Computational Policy Lab at Stanford University.

The knowledge of individual officers is limited in scope by dint of the NYPD’s organizational structure. The department divides New York into 77 precincts, and a person who commits crimes across precincts, which often have arbitrary boundaries, is often more difficult to catch because individual beat officers are typically focused on a single neighborhood.

There’s also a lot of data to sift through. In 2016 alone, about 13,000 burglaries, 15,000 robberies and 44,000 grand larcenies were reported across the five boroughs.

Levine said that last month, police used Patternizr to spot a pattern of three knife-point robberies around a Bronx subway station. It would have taken police much longer to connect those crimes manually, Levine said.

The software works by an analyst feeding it a “seed” case, which is then compared against a database of hundreds of thousands of crime records that Patternizr has already processed. The tool generates a “similarity score” and returns a rank-ordered list and a map. Analysts can read a few details of each complaint before examining the seed complaint and similar complaints in a detailed side-by-side view or filtering results….(More)”.
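
As a rough illustration of that workflow, the sketch below ranks historical complaints against a seed case with a toy similarity score. The features and weights here are invented for illustration; the NYPD’s actual models, described in the INFORMS paper, are trained on far richer features.

```python
# Hedged sketch of a Patternizr-style similarity search (illustrative only).
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Complaint:
    crime_type: str   # e.g. "robbery"
    lat: float
    lon: float
    weapon: str       # e.g. "knife"
    premise: str      # e.g. "subway station"

def km_between(a: Complaint, b: Complaint) -> float:
    """Haversine distance in kilometres between two complaints."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def similarity(seed: Complaint, other: Complaint) -> float:
    """Toy similarity score in [0, 1]: nearby crimes of the same type,
    weapon and premise score highest. The weights are arbitrary."""
    if seed.crime_type != other.crime_type:
        return 0.0
    score = max(0.0, 1.0 - km_between(seed, other) / 5.0) * 0.5  # decay over 5 km
    score += 0.25 * (seed.weapon == other.weapon)
    score += 0.25 * (seed.premise == other.premise)
    return score

def rank_candidates(seed: Complaint, records: list[Complaint], top_k: int = 10) -> list[Complaint]:
    """Return the top_k most similar historical complaints, best first."""
    return sorted(records, key=lambda r: similarity(seed, r), reverse=True)[:top_k]
```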

What you don’t know about your health data will make you sick


Jeanette Beebe at Fast Company: “Every time you shuffle through a line at the pharmacy, every time you try to get comfortable in those awkward doctor’s office chairs, every time you scroll through the web while you’re put on hold with a question about your medical bill, take a second to think about the person ahead of you and behind you.

Chances are, at least one of you is being monitored by a third party like data analytics giant Optum, which is owned by UnitedHealth Group, Inc. Since 1993, it’s captured medical data—lab results, diagnoses, prescriptions, and more—from 150 million Americans. That’s almost half of the U.S. population.

“They’re the ones that are tapping the data. They’re in there. I can’t remove them from my own health insurance contracts. So I’m stuck. It’s just part of the system,” says Joel Winston, an attorney who specializes in privacy and data protection law.

Healthcare providers can legally sell their data to a now-dizzyingly vast spread of companies, which can use it to make decisions, from designing new drugs to setting your insurance rates to developing highly targeted advertising.

It’s written in the fine print: You don’t own your medical records. Well, except if you live in New Hampshire. It’s the only state that mandates its residents own their medical data. In 21 states, the law explicitly says that healthcare providers own these records, not patients. In the rest of the country, it’s up in the air.

Every time you visit a doctor or a pharmacy, your record grows. The details can be colorful: Using sources like Milliman’s IntelliScript and ExamOne’s ScriptCheck, a fuller picture of you emerges. Your interactions with the health care system, your medical payments, your prescription drug purchase history. And the market for the data is surging.

Its buyers and sharers—pharma giants, insurers, credit reporting agencies, and other data-hungry companies or “fourth parties” (like Facebook)—say that these massive health data sets can improve healthcare delivery and fuel advances in so-called “precision medicine.”

Still, this glut of health data has raised alarms among privacy advocates, who say many consumers are in the dark about how much of their health-related info is being gathered and mined….

Gardner argued that traditional health data systems—electronic health records and electronic medical records—are less than ideal, given the “rigidity of the vendors and the products” and the way our data is owned and secured. Don’t count on them being around much longer, she said, “beyond the next few years.”

The future, Gardner suggested, is a system that runs on blockchain, which she defined for the committee as “basically a secure, visible, irrefutable ledger of transactions and ownership.” Still, a recent analysis of over 150 white papers revealed most healthcare blockchain projects “fall somewhere between half-baked and overly optimistic.”
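
For readers unfamiliar with the term, the core of that definition can be shown in a few lines: each block commits to the hash of the previous one, so any later tampering with history is immediately visible. This is a minimal sketch of the hash-chain idea only, with invented record fields; real health-record proposals layer consensus, permissions, and encryption on top.

```python
# Hedged sketch of the hash-chained ledger idea behind "blockchain".
import hashlib, json, time

def make_block(records: list[dict], prev_hash: str) -> dict:
    """Bundle records with the previous block's hash, so editing any past
    block changes every subsequent hash and breaks the chain visibly."""
    block = {"time": time.time(), "records": records, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block([{"event": "ledger created"}], prev_hash="0" * 64)
entry = make_block([{"patient": "p123", "event": "lab result added"}], genesis["hash"])
```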

As larger companies like IBM sign on, the technology may be edging closer to reality. Last year, Proof Work outlined a HIPAA-compliant system that manages patients’ medical histories over time, from acute care in the hospital to preventative checkups. The goal is to give these records to patients on their phones, and to create a “democratized ecosystem” to solve interoperability between patients, healthcare providers, insurance companies, and researchers. Similar proposals from blockchain-focused startups like Health Bank and Humanity.co would help patients store and share their health information securely—and sell it to researchers, too….(More)”.

Technology and political will can create better governance


Darshana Narayanan at The Economist: “Current forms of democracy exclude most people from political decision-making. We elect representatives and participate in the occasional referendum, but we mainly remain on the outside. The result is that a handful of people in power dictate what ought to be collective decisions. What we have now is hardly a democracy, or at least, not a democracy that we should settle for.

To design a truer form of democracy—that is, fair representation and an outcome determined by a plurality—we might draw some lessons from the collective behaviour of other social animals: schools of fish, for example. Schooling fish self-organise for the benefit of the group and are rarely in a fracas. Individuals in the group may not be associated and yet they reach consensus. A study in 2011 led by Iain Couzin found that “uninformed” fish—in that case, ones that had not been trained to have a preference to move towards a particular target—can dilute the influence of a powerful minority group which did have such preferences. 

Of course fish are not the same as humans. But that study does suggest a way of thinking about decision-making. Instead of limiting influence to experts and strongly motivated interest groups, we should actively work to broaden participation to ensure that we include people lacking strong preferences or prior knowledge of an issue. In other words, we need to go against the ingrained thinking that non-experts should be excluded from decision-making. Inclusivity might just improve our chances of reaching a real, democratic consensus.
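
A deliberately crude toy model can illustrate the dilution effect (this is not Couzin et al.’s spatial model of moving animals, and all the numbers are invented): the minority’s power comes from the intensity of its preference, while uninformed agents respond to headcount, handing the outcome back to the numerical majority.

```python
# Toy illustration of uninformed agents diluting an intense minority.
def outcome(n_minority: int, n_majority: int, n_uninformed: int,
            minority_strength: float = 3.0) -> str:
    pull_a = n_minority * minority_strength  # small group, strong preference
    pull_b = n_majority * 1.0                # larger group, weak preference
    # uninformed agents side with whichever option more individuals hold
    if n_minority > n_majority:
        pull_a += n_uninformed
    else:
        pull_b += n_uninformed
    return "minority option wins" if pull_a > pull_b else "majority option wins"

print(outcome(5, 6, 0))    # minority option wins: intensity dominates
print(outcome(5, 6, 10))   # majority option wins: uninformed agents dilute it
```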

How can our political institutions facilitate this? In my work over the past several years I have tried to apply findings from behavioural science into institutions and into code to create better systems of governance. In the course of my work, I have found some promising experiments taking place around the world that harness new digital tools. They point the way to how democracy can be practiced in the 21st century….(More)”.

Catch Me Once, Catch Me 218 Times


Josh Kaplan at Slate: “…It was 2010, and the San Diego County Sheriff’s Department had recently rolled out a database called GraffitiTracker—software also used by police departments in Denver and Los Angeles County—and over the previous year, they had accumulated a massive set of images that included a couple hundred photos with his moniker. Painting over all Kyle’s handiwork, prosecutors claimed, had cost the county almost $100,000, and that sort of damage came with life-changing consequences. Ultimately, he made a plea deal: one year of incarceration, five years of probation, and more than $87,000 in restitution.

Criticism of police technology often gets mired in the complexities of the algorithms involved—the obscurity of machine learning, the feedback loops, the potentials for racial bias and error. But GraffitiTracker can tell us a lot about data-driven policing in part because the concept is so simple. Whenever a public works crew goes to clean up graffiti, before they paint over it, they take a photo and put it in the county database. Since taggers tend to paint the same moniker over and over, now whenever someone is caught for vandalism, police can search the database for their pseudonym and get evidence of all the graffiti they’ve ever done.
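
The underlying data model really is that simple. The sketch below shows one plausible shape for it, assuming a SQLite table and invented column names; GraffitiTracker’s actual schema is not public.

```python
# Hedged sketch of a GraffitiTracker-style moniker database (illustrative).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE graffiti (
    id INTEGER PRIMARY KEY,
    moniker TEXT,        -- tag painted at the scene
    photo_path TEXT,     -- photo taken before the crew paints over it
    cleanup_cost REAL,   -- what removal cost the county
    logged_on TEXT)""")

def log_incident(moniker: str, photo_path: str, cost: float, date: str) -> None:
    conn.execute(
        "INSERT INTO graffiti (moniker, photo_path, cleanup_cost, logged_on) VALUES (?,?,?,?)",
        (moniker, photo_path, cost, date))

def incidents_for(moniker: str):
    """Everything ever logged under a tag, plus total restitution exposure."""
    rows = conn.execute(
        "SELECT photo_path, cleanup_cost, logged_on FROM graffiti WHERE moniker=?",
        (moniker,)).fetchall()
    total = sum(cost for _, cost, _ in rows)
    return rows, total
```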

In San Diego County, this has radically changed the way that graffiti is prosecuted and has pumped up the punishment for taggers—many of whom are minors—to levels otherwise unthinkable. The results have been lucrative. In 2011, the first year San Diego started using GraffitiTracker countywide (a few San Diego jurisdictions already had it in place), the amount of restitution received for graffiti jumped from about $170,000 to almost $800,000. Roughly $300,000 of that came from juvenile cases. For the jurisdictions that weren’t already using GraffitiTracker, the jump was even more stark: The annual total went from $45,000 to nearly $400,000. In these cities, the average restitution ordered in adult cases went from $1,281 to $5,620, and at the same time, the number of cases resulting in restitution tripled. (San Diego has said it makes prosecuting vandalism easier.)

Almost a decade later, San Diego County and other jurisdictions are still using GraffitiTracker, yet it’s received very little media attention, despite the startling consequences for vandalism prosecution. But its implications extend far beyond tagging. GraffitiTracker presaged a deeper problem with law enforcement’s ability to use technology to connect people to crimes that, as Deputy District Attorney Melissa Ocampo put it to me, “they thought they got away with.”…(More)”.

The Bad Pupil


CCCBLab: “In recent years we have been witnessing a constant trickle of news on artificial intelligence, machine learning and computer vision. We are told that machines learn, see, create… and all this builds up a discourse based on novelty, on a possible future and on a series of worries and hopes. It is sometimes difficult, in this landscape, to figure out which are real developments and which are fantasies or warnings. And, undoubtedly, the fog that surrounds these tools forms part of the power that we grant them, both in the present and on credit, and of the negative and positive concerns that they arouse in us. Many of these discourses may fall into the field of false debates or, at least, of the return of old debates. Thus, in the classical artistic field, associated with the discourse on creation and authorship, there is discussion regarding the status to be awarded to images created with these tools. (Yet wasn’t the argument against photography in art that it was an image created automatically and without human participation? And wasn’t that also an argument in favour of taking it and using it to put an end to a certain idea of art?)

Metaphors are essential in the discourse on all digital tools and the power that they have. Are expressions such as “intelligence”, “vision”, “learning”, “neural” and the entire range of similar words the most adequate for defining these types of tools? Probably not, above all if their metaphorical nature is sidestepped. We would not understand them in the same way if we called them tools of probabilistic classification or if instead of saying that an artificial intelligence “has painted” a Rembrandt, we said that it has produced a statistical reproduction of his style (something which is still surprising, and to be celebrated, of course). These names construct an entity for these tools that endows them with a supposed autonomy and independence upon which their future authority is based.

Because that is what it’s about in many discourses: constructing a characterisation that legitimises an objective or non-human capacity in data analysis….

We now find ourselves at what is, probably, the point of the first cultural reception of these tools. From their development in research fields, and the applications already derived from them, they are moving into public discourse. It is in this situation and context, where we do not fully know the breadth and characteristics of these technologies (meaning fears are more abstract and diffuse and, thus, more present and powerful), that it is especially important to understand what we are talking about, to appropriate the tools and to intervene in the discourses. Before their possibilities are restricted and solidified until they seem indisputable, it is necessary to experiment with them and reflect on them, taking advantage of the fact that we can still easily perceive them as in creation, malleable and open.

In our projects The Bad Pupil. Critical pedagogy for artificial intelligences and Latent Spaces. Machinic Imaginations we have tried to approach these tools and their imaginary. In the statement of intentions of the former, we expressed our desire, in the face of the regulatory context and the metaphor of machine learning, to defend the bad pupil as one who escapes the norm. And also how, faced with an artificial intelligence that seeks to replicate the human on inhuman scales, it is necessary to defend and construct a non-mimetic one that produces unexpected relations and images.

Fragment of De zeven werken van barmhartigheid, Meester van Alkmaar, 1504 (Rijksmuseum, Amsterdam), analysed with YOLO9000 | The Bad Pupil – Estampa

Both projects are also attempts to appropriate these tools, which means, first of all, escaping industrial barriers and their standards. In this field, in which mass data are an asset within reach of big companies, employing quantitatively poor datasets and non-industrial computing capacities is not just a need but a demand….(More)”.

Privacy’s not dead. It’s just not evenly distributed


Alex Pasternack in Fast Company: “In the face of all the data abuse, many of us have, quite reasonably, thrown up our hands. But privacy didn’t die. It’s just been beaten up, sold, obscured, diffused unevenly across society. What privacy is and why it matters increasingly depends upon who you are, your age, your income, gender, ethnicity, where you’re from, and where you live. To borrow William Gibson’s famous quote about the future and its unevenness and inequalities, privacy is alive—it’s just not evenly distributed. And while we don’t all care about it the same way—we’re even divided on what exactly privacy is—its harms are still real. Even when our own privacy isn’t violated, privacy violations can still hurt us.

Privacy is personal, from the creepy feeling that our phones are literally listening to the endless parade of data breaches that test our ability to care anymore. It’s the unsettling feeling of giving “consent” without knowing what that means, “agreeing” to contracts we didn’t read with companies we don’t really trust. (Forget about understanding all the details; researchers have shown that most privacy policies surpass the reading level of the average person.)

It’s the data about us that’s harvested, bought, sold, and traded by an obscure army of data brokers without our knowledge, feeding marketers, landlords, employers, immigration officials, insurance companies, debt collectors, as well as stalkers and who knows who else. It’s the body camera or the sports arena or the social network capturing your face for who knows what kind of analysis. Don’t think of personal data as just “data.” As it gets more detailed and more correlated, increasingly, our data is us.

And “privacy” isn’t just privacy. It’s also tied up with security, freedom, social justice, free speech, and free thought. Privacy harms aren’t only personal, but societal. It’s not just the multibillion-dollar industry that aims to nab you and nudge you, but the multibillion-dollar spyware industry that helps governments nab dissidents and send them to prison or worse. It’s the supposedly fair and transparent algorithms that aren’t, turning our personal data into risk scores that can help perpetuate race, class, and gender divides, often without our knowing it.

Privacy is about dark ads bought with dark money and the micro-targeting of voters by overseas propagandists or by political campaigns at home. That kind of influence isn’t just the promise of a shadowy Cambridge Analytica or state-run misinformation campaigns, but also the premise of modern-day digital ad campaigns. (Note that Facebook’s research division later hired one of the researchers behind the Cambridge app.) And as the micro-targeting gets more micro, the tech giants that deal in ads are only getting more macro….(More)”

(This story is part of The Privacy Divide, a series that explores the fault lines and disparities–economic, cultural, philosophical–that have developed around digital privacy and its impact on society.)

How data collected from mobile phones can help electricity planning


Article by Eduardo Alejandro Martínez Ceseña, Joseph Mutale, Mathaios Panteli, and Pierluigi Mancarella in The Conversation: “Access to reliable and affordable electricity brings many benefits. It supports the growth of small businesses, allows students to study at night and protects health by offering an alternative cooking fuel to coal or wood.

Great efforts have been made to increase electrification in Africa, but rates remain low. In sub-Saharan Africa, only 42% of urban areas have access to electricity, and just 22% of rural areas.

This is mainly because there isn’t enough sustained investment in electricity infrastructure, because many systems can’t reliably meet energy demand, or because the price of electricity is too high.

Innovation is often seen as the way forward. For instance, cheaper and cleaner technologies, like solar storage systems deployed through mini grids, can offer a more affordable and reliable option. But, on their own, these solutions aren’t enough.

To design the best systems, planners must know where on- or off-grid systems should be placed, how big they need to be and what type of energy should be used for the most effective impact.

The problem is that reliable data – like village size and energy demand – needed for rural energy planning are scarce or non-existent. Some can be estimated from records of human activities – like farming or access to schools and hospitals – which can indicate energy needs. But many developing countries have to rely on human activity data from incomplete and poorly maintained national censuses. This leads to inefficient planning.

In our research we found that data from mobile phones offer a solution. They provide a new source of information about what people are doing and where they’re located.

In sub-Saharan Africa, more people have mobile phones than have access to electricity, as people are willing to commute to get a signal or to charge their phones.

This means that there’s an abundance of data – that’s constantly updated and available even in areas that haven’t been electrified – that could be used to optimise electrification planning….

We were able to use mobile data to develop a countrywide electrification strategy for Senegal. Although Senegal has one of the highest electricity-access rates in sub-Saharan Africa, just 38% of people in rural areas have access.

By using mobile data we were able to identify the approximate size of rural villages and their access to education and health facilities. This information was then used to size and cost different electrification options and select the most economic one for each zone – whether villages should be connected to the grid, or whether off-grid systems – like solar battery systems – were a better option.
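
The selection step can be sketched as a least-cost comparison per village, with household counts proxied from mobile activity. All cost figures and formulas below are invented placeholders, not the study’s actual cost models, which are detailed in the paper.

```python
# Hedged sketch: choose the cheapest electrification option per village.
from dataclasses import dataclass

@dataclass
class Village:
    name: str
    est_households: int   # proxied from counts of mobile-phone users
    km_to_grid: float     # distance to the nearest existing feeder

def cost_grid_extension(v: Village) -> float:
    # assumed: per-km line cost plus per-connection cost (placeholder values)
    return 25_000 * v.km_to_grid + 500 * v.est_households

def cost_solar_minigrid(v: Village) -> float:
    return 40_000 + 800 * v.est_households   # fixed plant + per-household cost

def cost_standalone_solar(v: Village) -> float:
    return 1_200 * v.est_households          # per-household kits, no network

def cheapest_option(v: Village) -> tuple[str, float]:
    options = {
        "grid extension": cost_grid_extension(v),
        "solar mini-grid": cost_solar_minigrid(v),
        "standalone solar": cost_standalone_solar(v),
    }
    name = min(options, key=options.get)
    return name, options[name]

print(cheapest_option(Village("A", est_households=120, km_to_grid=3.0)))   # near the grid
print(cheapest_option(Village("B", est_households=40, km_to_grid=25.0)))   # remote, small
```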

To collect the data, we randomly selected mobile phone data from 450,000 users of Senegal’s main telecoms provider, Sonatel, to understand exactly how information from mobile phones could be used. This includes the location of users and the characteristics of the places where they live….(More)”