How Social Media Came To The Rescue After Kerala’s Floods


Kamala Thiagarajan at NPR: Devastating rainfall followed by treacherous landslides has killed 210 people since August 8 and displaced over a million in the southern Indian state of Kerala. India’s National Disaster Response Force launched its biggest-ever rescue operation in the state, evacuating over 10,000 people. The Indian army and navy were deployed as well.

But they had some unexpected assistance.

Thousands of Indian citizens used mobile phone technology and social media platforms to mobilize relief efforts….

In many other cases, it was ordinary folk who harnessed social media and their own resources to play a role in relief and rescue efforts.

As the scope of the disaster became clear, the state government of Kerala reached out to software engineers from around the world. They joined hands with the state-government-run Information Technology Cell, coming together on Slack, a communications platform, to create the website www.keralarescue.in.

The website allowed volunteers who were helping with disaster relief in Kerala’s many flood-affected districts to share the needs of stranded people so that authorities could act.

Johann Binny Kuruvilla, a travel blogger, was one of many volunteers. He put in 14-hour shifts at the District Emergency Operations Center in Ernakulam, Kochi.

The first thing he did, he says, was to harness the power of WhatsApp, a critical platform for dispensing information in India. He joined five key WhatsApp groups with hundreds of members who were coordinating rescue and relief efforts. He sent them his number and mentioned that he would be in a position to communicate with a network of police, army and navy personnel. Soon he was receiving an average of 300 distress calls a day from people marooned at home and faced with medical emergencies.

No one trained volunteers like Kuruvilla. “We improvised and devised our own systems to store data,” he says. He documented the information he received on Excel spreadsheets before passing them on to authorities.

He was also the contact point for INSPIRE, a fraternity of mechanical engineering students at a government-run engineering college at Barton Hill in Kerala. The students told him they had made nearly 300 power banks for charging phones, using four 1.5-volt batteries and cables, and, he says, “asked us if we could help them airdrop it to those stranded in flood-affected areas.” A power bank could boost a mobile phone’s charge by 20 percent in minutes, which could be critical for people without access to electricity. Authorities agreed to distribute the power banks, wrapping them in bubble wrap and airdropping them to areas where people were marooned.

Some people took to social media to create awareness of the aftereffects of the flooding.

Anand Appukuttan, 38, is a communications designer. Working as a consultant, he currently lives in Chennai, 500 miles by road from Kerala, and designs infographics, mobile apps and software for tech companies. Appukuttan was born and brought up in Kottayam, a city in southwest Kerala. When he heard of the devastation caused by the floods, he longed to help. A group of experts on disaster management reached out to him over Facebook on August 18, asking if he would share his time and expertise in creating flyers for awareness; he immediately agreed….(More)”.

World War Web


Special issue of Foreign Affairs: “The last few decades have witnessed the growth of an American-sponsored Internet open to all. But that was then; conditions have changed.

History is filled with supposed lost utopias, and there is no greater cliché than to see one’s own era as a lamentable decline from a previous golden age. Sometimes, however, clichés are right. And as we explored the Internet’s future for this issue’s lead package, it became clear this was one of those times. Contemplating where we have come from digitally and where we are heading, it’s hard not to feel increasingly wistful and nostalgic.

The last few decades have witnessed the growth of an American-sponsored Internet open to all, and that has helped tie the world together, bringing wide-ranging benefits to billions. But that was then; conditions have changed.

Other great powers are contesting U.S. digital leadership, pushing their own national priorities. Security threats appear and evolve constantly. Platforms that were supposed to expand and enrich the marketplace of ideas have been hijacked by trolls and bots and flooded with disinformation. And real power is increasingly concentrated in the hands of a few private tech giants, whose self-interested choices have dramatic consequences for the entire world around them.

Whatever emerges from this melee, it will be different from, and in many ways worse than, what we have now.

Adam Segal paints the big picture well. “The Internet has long been an American project,” he writes. “Yet today, the United States has ceded leadership in cyberspace to China.” What will happen if Beijing continues its online ascent? “The Internet will be less global and less open. A major part of it will run Chinese applications over Chinese-made hardware. And Beijing will reap the economic, diplomatic, national security, and intelligence benefits that once flowed to Washington.”

Nandan Nilekani, a co-founder of Infosys, outlines India’s unique approach to these issues, which is based on treating “digital infrastructure as a public good and data as something that citizens deserve access to.” Helen Dixon, Ireland’s data protection commissioner, presents a European perspective, arguing that giving individuals control over their own data—as the General Data Protection Regulation, the EU’s historic new regulatory effort, aims to do—is essential to restoring the Internet’s promise. And Karen Kornbluh, a veteran U.S. policymaker, describes how the United States dropped the digital ball and what it could do to pick it up again.

Finally, Michèle Flournoy and Michael Sulmeyer explain the new realities of cyberwarfare, and Viktor Mayer-Schönberger and Thomas Ramge consider the problems caused by Big Tech’s hoarding of data and what can be done to address them.

A generation from now, people across the globe will no doubt revel in the benefits the Internet has brought. But the more thoughtful among them will also lament the eclipse of the founders’ idealistic vision and dream of a world connected the way it could—and should—have been….(More)”.

Countries Can Learn from France’s Plan for Public Interest Data and AI


Nick Wallace at the Center for Data Innovation: “French President Emmanuel Macron recently endorsed a national AI strategy that includes plans for the French state to make public and private sector datasets available for reuse by others in applications of artificial intelligence (AI) that serve the public interest, such as for healthcare or environmental protection. Although this strategy fails to set out how the French government should promote widespread use of AI throughout the economy, it will nevertheless give a boost to AI in some areas, particularly public services. Furthermore, the plan for promoting the wider reuse of datasets, particularly in areas where the government already calls most of the shots, is a practical idea that other countries should consider as they develop their own comprehensive AI strategies.

The French strategy, drafted by mathematician and Member of Parliament Cédric Villani, calls for legislation to mandate repurposing both public and private sector data, including personal data, to enable public-interest uses of AI by government or others, depending on the sensitivity of the data. For example, public health services could use data generated by Internet of Things (IoT) devices to help doctors better treat and diagnose patients. Researchers could use data captured by motorway CCTV to train driverless cars. Energy distributors could manage peaks and troughs in demand using data from smart meters.

Repurposed data held by private companies could be made publicly available, shared with other companies, or processed securely by the public sector, depending on the extent to which sharing the data presents privacy risks or undermines competition. The report suggests that the government would not require companies to share data publicly when doing so would impact legitimate business interests, nor would it require that any personal data be made public. Instead, Dr. Villani argues that, if wider data sharing would do unreasonable damage to a company’s commercial interests, it may be appropriate to only give public authorities access to the data. But where the stakes are lower, companies could be required to share the data more widely, to maximize reuse. Villani rightly argues that it is virtually impossible to come up with generalizable rules for how data should be shared that would work across all sectors. Instead, he argues for a sector-specific approach to determining how and when data should be shared.

After making the case for state-mandated repurposing of data, the report goes on to highlight four key sectors as priorities: health, transport, the environment, and defense. Since these all have clear implications for the public interest, France can create national laws authorizing extensive repurposing of personal data without violating the General Data Protection Regulation (GDPR), which allows national laws that permit the repurposing of personal data where it serves the public interest. The French strategy is the first clear effort by an EU member state to proactively use this clause in aid of national efforts to bolster AI….(More)”.

China’s Aggressive Surveillance Technology Will Spread Beyond Its Borders


Already there are reports that Zimbabwe, for example, is turning to Chinese firms to implement nationwide facial-recognition and surveillance programs, wrapped into China’s infrastructure investments and a larger set of security agreements as well, including for policing online communication. The acquisition of black African faces will help China’s tech sector improve its overall data set.

Malaysia, too, announced new partnerships this spring with China to equip police with wearable facial-recognition cameras. There are quiet reports of Arab Gulf countries turning to China not just for the drone technologies America has denied but also for the authoritarian suite of surveillance, recognition, and data tools perfected in China’s provinces. In a recent article on Egypt’s military-led efforts to build a new capital city beyond Cairo’s chaos and revolutionary squares, a retired general acting as project spokesman declared, “a smart city means a safe city, with cameras and sensors everywhere. There will be a command center to control the entire city.” Who is financing construction? China.

While many governments are making attempts to secure this information, there have been several alarming stories of data leaks. Moreover, these national identifiers create an unprecedented opportunity for state surveillance at scale. What about collecting biometric information in nondemocratic regimes? In 2016, the personal details of nearly 50 million people in Turkey were leaked….

China and other determined authoritarian states may prove undeterrable in their zeal to adopt repressive technologies. A more realistic goal, as Georgetown University scholar Nicholas Wright has argued, is to sway countries on the fence by pointing out the reputational costs of repression and supporting those who are advocating for civil liberties in this domain within their own countries. Democracy promoters (which we hope will one day again include the White House) will also want to recognize the coming changes to the authoritarian public sphere. They can start now in helping vulnerable populations and civil society to gain greater technological literacy to advocate for their rights in new domains. It is not too early for governments and civil society groups alike to study what technological and tactical countermeasures exist to circumvent and disrupt new authoritarian tools.

Seven years ago, techno-optimists expressed hope that a wave of new digital tools for social networking and self-expression could help young people in the Middle East and elsewhere to find their voices. Today, a new wave of Chinese-led technological advances threatens to blossom into what we consider an “Arab spring in reverse”—in which the next digital wave shifts the pendulum back, enabling state domination and repression at a staggering scale and algorithmic effectiveness.

Americans are absolutely right to be urgently focused on countering Russian weaponized hacking and leaking as its primary beneficiary sits in the Oval Office. But we also need to be more proactive in countering the tools of algorithmic authoritarianism that will shape the worldwide future of individual freedom….(More)”.

A roadmap for restoring trust in Big Data


Mark Lawler et al. in the Lancet: “The fallout from the Cambridge Analytica–Facebook scandal marks a significant inflection point in the public’s trust concerning Big Data. The health-science community must use this crisis in confidence to redouble its commitment to talk openly and transparently about benefits and risks and to act decisively to deliver robust effective governance frameworks, under which personal health data can be responsibly used. Activities such as the Innovative Medicines Initiative’s Big Data for Better Outcomes emphasise how a more granular data-driven understanding of human diseases including cancer could underpin innovative therapeutic intervention.
Health Data Research UK is developing national research expertise and infrastructure to maximise the value of health data science for the National Health Service and ultimately British citizens.
Comprehensive data analytics are crucial to national programmes such as the US Cancer Moonshot, the UK’s 100 000 Genomes project, and other national genomics programmes. Cancer Core Europe, a research partnership between seven leading European oncology centres, has personal data sharing at its core. The Global Alliance for Genomics and Health recently highlighted the need for a global cancer knowledge network to drive evidence-based solutions for a disease that kills more than 8.7 million citizens annually worldwide. These activities risk being fatally undermined by the recent data-harvesting controversy.
We need to restore the public’s trust in data science and emphasise its positive contribution in addressing global health and societal challenges. An opportunity to affirm the value of data science in Europe was afforded by Digital Day 2018, which took place on April 10, 2018, in Brussels, and where European Health Ministers signed a declaration of support to link existing or future genomic databanks across the EU, through the Million European Genomes Alliance.
So how do we address evolving challenges in analysis, sharing, and storage of information, ensure transparency and confidentiality, and restore public trust? We must articulate a clear Social Contract, where citizens (as data donors) are at the heart of decision-making. We need to demonstrate integrity, honesty, and transparency as to what happens to data and what level of control people can, or cannot, expect. We must embed ethical rigour in all our data-driven processes. The Framework for Responsible Sharing of Genomic and Health Related Data represents a practical global approach, promoting effective and ethical sharing and use of research or patient data, while safeguarding individual privacy through secure and accountable data transfer…(More)”.

Denialism: what drives people to reject the truth


Keith Kahn-Harris at The Guardian: “…Denialism is an expansion, an intensification, of denial. At root, denial and denialism are simply a subset of the many ways humans have developed to use language to deceive others and themselves. Denial can be as simple as refusing to accept that someone else is speaking truthfully. Denial can be as unfathomable as the multiple ways we avoid acknowledging our weaknesses and secret desires.

Denialism is more than just another manifestation of the humdrum intricacies of our deceptions and self-deceptions. It represents the transformation of the everyday practice of denial into a whole new way of seeing the world and – most important – a collective accomplishment. Denial is furtive and routine; denialism is combative and extraordinary. Denial hides from the truth, denialism builds a new and better truth.

In recent years, the term has been used to describe a number of fields of “scholarship”, whose scholars engage in audacious projects to hold back, against seemingly insurmountable odds, the findings of an avalanche of research. They argue that the Holocaust (and other genocides) never happened, that anthropogenic (human-caused) climate change is a myth, that Aids either does not exist or is unrelated to HIV, that evolution is a scientific impossibility, and that all manner of other scientific and historical orthodoxies must be rejected.

In some ways, denialism is a terrible term. No one calls themselves a “denialist”, and no one signs up to all forms of denialism. In fact, denialism is founded on the assertion that it is not denialism. In the wake of Freud (or at least the vulgarisation of Freud), no one wants to be accused of being “in denial”, and labelling people denialists seems to compound the insult by implying that they have taken the private sickness of denial and turned it into public dogma.

But denial and denialism are closely linked; what humans do on a large scale is rooted in what we do on a small scale. While everyday denial can be harmful, it is also just a mundane way for humans to respond to the incredibly difficult challenge of living in a social world in which people lie, make mistakes and have desires that cannot be openly acknowledged. Denialism is rooted in human tendencies that are neither freakish nor pathological.

All that said, there is no doubt that denialism is dangerous. In some cases, we can point to concrete examples of denialism causing actual harm. In South Africa, President Thabo Mbeki, in office between 1999 and 2008, was influenced by Aids denialists such as Peter Duesberg, who deny the link between HIV and Aids (or even HIV’s existence) and cast doubt on the effectiveness of anti-retroviral drugs. Mbeki’s reluctance to implement national treatment programmes using anti-retrovirals has been estimated to have cost the lives of 330,000 people. On a smaller scale, in early 2017 the Somali-American community in Minnesota was struck by a childhood measles outbreak, after proponents of the discredited theory that the MMR vaccine causes autism persuaded parents not to vaccinate their children….(More)”.

Americans Want to Share Their Medical Data. So Why Can’t They?


Eleni Manis at RealClearHealth: “Americans are willing to share personal data — even sensitive medical data — to advance the common good. A recent Stanford University study found that 93 percent of medical trial participants in the United States are willing to share their medical data with university scientists and 82 percent are willing to share with scientists at for-profit companies. In contrast, less than a third are concerned that their data might be stolen or used for marketing purposes.

However, the majority of regulations surrounding medical data focus on individuals’ ability to restrict the use of their medical data, with scant attention paid to supporting the ability to share personal data for the common good. Policymakers can begin to right this imbalance by establishing a national medical data donor registry that lets individuals contribute their medical data to support research after their deaths. Doing so would help medical researchers pursue cures and improve health care outcomes for all Americans.

Increased medical data sharing facilitates advances in medical science in three key ways. First, de-identified participant-level data can be used to understand the results of trials, enabling researchers to better explicate the relationship between treatments and outcomes. Second, researchers can use shared data to verify studies and identify cases of data fraud and research misconduct in the medical community. For example, one researcher recently discovered a prolific Japanese anesthesiologist had falsified data for almost two decades. Third, shared data can be combined and supplemented to support new studies and discoveries.

Despite these benefits, researchers, research funders, and regulators have struggled to establish a norm for sharing clinical research data. In some cases, regulatory obstacles are to blame. HIPAA — the federal law regulating medical data — blocks some sharing on grounds of patient privacy, while federal and state regulations governing data sharing are inconsistent. Researchers themselves have a proprietary interest in data they produce, while academic researchers seeking to maximize publications may guard data jealously.

Though funding bodies are aware of this tension, they are unable to resolve it on their own. The National Institutes of Health, for example, requires a data sharing plan for big-ticket funding but recognizes that proprietary interests may make sharing impossible….(More)”.

#TrendingLaws: How can Machine Learning and Network Analysis help us identify the “influencers” of Constitutions?


Unicef: “New research by scientists from UNICEF’s Office of Innovation — published today in the journal Nature Human Behaviour — applies methods from network science and machine learning to constitutional law. UNICEF Innovation Data Scientists Alex Rutherford and Manuel Garcia-Herranz collaborated with computer scientists and political scientists at MIT, George Washington University, and UC Merced to apply data analysis to the world’s constitutions over the last 300 years. This work sheds new light on how to better understand why countries’ laws change and incorporate social rights…

Data science techniques allow us to use methods like network science and machine learning to uncover patterns and insights that are hard for humans to see. Just as we can map influential users on Twitter — and patterns of relations between places to predict how diseases will spread — we can identify which countries have influenced each other in the past and what are the relations between legal provisions.
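The influence-network idea described above can be sketched in a few lines of Python. This is a toy illustration only, not the method from the Nature Human Behaviour paper: the adoption records are invented, and the influence score (weighted out-degree) is a deliberately crude stand-in for the paper's actual analysis.

```python
from collections import defaultdict

# Hypothetical adoption records: (earlier, later, provision) means the
# later constitution adopted a provision first codified in the earlier one.
adoptions = [
    ("Mexico 1917", "Germany 1919", "labour rights"),
    ("Mexico 1917", "Spain 1931", "labour rights"),
    ("Germany 1919", "Spain 1931", "social welfare"),
    ("Spain 1931", "Cuba 1940", "social welfare"),
    ("Mexico 1917", "Cuba 1940", "labour rights"),
]

# Directed influence graph: edge A -> B weighted by the number of
# provisions B appears to have taken from A.
graph = defaultdict(lambda: defaultdict(int))
for earlier, later, _provision in adoptions:
    graph[earlier][later] += 1

# A crude influence score: total weighted out-degree per constitution.
influence = {
    country: sum(neighbours.values()) for country, neighbours in graph.items()
}
ranking = sorted(influence.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)
```

On this made-up data, Mexico's 1917 constitution ranks as most influential, matching its historical role as the first to codify labour rights. Real analyses would weight edges by textual similarity and timing rather than hand-labelled adoptions.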

Why The Science of Constitutions?

One way UNICEF fulfills its mission is through advocacy with national governments — to enshrine rights for minorities, notably children, formally in law. Perhaps the most renowned example of this is the Convention on the Rights of the Child (CRC).

Constitutions, such as Mexico’s 1917 constitution — the first to limit the employment of children — are critical to formalizing rights for vulnerable populations. National constitutions describe the role of a country’s institutions, its character in the eyes of the world, as well as the rights of its citizens.

From a scientific standpoint, the work is an important first step in showing that network analysis and machine learning techniques can be used to better understand the dynamics of caring for and protecting the rights of children — critical to the work we do in a complex and interconnected world. It shows the significant, and positive policy implications of using data science to uphold children’s rights.

What the Research Shows:

Through this research, we uncovered:

  • A network of relationships between countries and their constitutions.
  • A natural progression of laws — where fundamental rights are a necessary precursor to more specific rights for minorities.
  • The effect of key historical events in changing legal norms….(More)”.

The economic value of data: discussion paper


HM Treasury (UK): “Technological change has radically increased both the volume of data in the economy, and our ability to process it. This change presents an opportunity to transform our economy and society for the better.

Data-driven innovation holds the keys to addressing some of the most significant challenges confronting modern Britain, whether that is tackling congestion and improving air quality in our cities, developing ground-breaking diagnosis systems to support our NHS, or making our businesses more productive.

The UK’s strengths in cutting-edge research and the intangible economy make it well-placed to be a world leader, and estimates suggest that data-driven technologies will contribute over £60 billion per year to the UK economy by 2020.

Recent events have raised public questions and concerns about the way that data, and particularly personal data, can be collected, processed, and shared with third party organisations.

These are concerns that this government takes seriously. The Data Protection Act 2018 updates the UK’s world-leading data protection framework to make it fit for the future, giving individuals strong new rights over how their data is used. Alongside maintaining a secure, trusted data environment, the government has an important role to play in laying the foundations for a flourishing data-driven economy.

This means pursuing policies that improve the flow of data through our economy, and ensure that those companies who want to innovate have appropriate access to high-quality and well-maintained data.

This discussion paper describes the economic opportunity presented by data-driven innovation, and highlights some of the key challenges that government will need to address, such as: providing clarity around ownership and control of data; maintaining a strong, trusted data protection framework; making effective use of public sector data; driving interoperability and standards; and enabling safe, legal and appropriate data sharing.

Over the last few years, the government has taken significant steps to strengthen the UK’s position as a world leader in data-driven innovation, including by agreeing the Artificial Intelligence Sector Deal, establishing the Geospatial Commission, and making substantial investments in digital skills. The government will build on those strong foundations over the coming months, including by commissioning an Expert Panel on Competition in Digital Markets. This Expert Panel will support the government’s wider review of competition law by considering how competition policy can better enable innovation and support consumers in the digital economy.

There are still big questions to be answered. This document marks the beginning of a wider set of conversations that government will be holding over the coming year, as we develop a new National Data Strategy….(More)”.

This surprising, everyday tool might hold the key to changing human behavior


Annabelle Timsit at Quartz: “To be a person in the modern world is to worry about your relationship with your phone. According to critics, smartphones are making us ill-mannered and sore-necked, dragging parents’ attention away from their kids, and destroying an entire generation.

But phones don’t have to be bad. With 4.68 billion people forecast to become mobile phone users by 2019, nonprofits and social science researchers are exploring new ways to turn our love of screens into a force for good. One increasingly popular option: Using texting to help change human behavior.

Texting: A unique tool

The short message service (SMS) was invented in the late 1980s, and the first text message was sent in 1992. (Engineer Neil Papworth sent “Merry Christmas” to then-Vodafone director Richard Jarvis.) In the decades since, texting has emerged as the preferred communication method for many, and in particular younger generations. While that kind of habit-forming can be problematic—47% of US smartphone users say they “couldn’t live without” the device—our attachment to our phones also makes text-based programs a good way to encourage people to make better choices.

“Texting, because it’s anchored in mobile phones, has the ability to be with you all the time, and that gives us an enormous flexibility on precision,” says Todd Rose, director of the Mind, Brain, & Education Program at the Harvard Graduate School of Education. “When people lead busy lives, they need timely, targeted, actionable information.”

And who is busier than a parent? Text-based programs can help current or would-be moms and dads with everything from medication pickup to childhood development. Text4Baby, for example, messages pregnant women and young moms with health information and reminders about upcoming doctor visits. Vroom, an app for building babies’ brains, sends parents research-based prompts to help them build positive relationships with their children (for example, by suggesting they ask toddlers to describe how they’re feeling based on the weather). Muse, an AI-powered app, uses machine learning and big data to try to help parents raise creative, motivated, emotionally intelligent kids. As Jenny Anderson writes in Quartz: “There is ample evidence that we can modify parents’ behavior through technological nudges.”

Research suggests text-based programs may also be helpful in supporting young children’s academic and cognitive development. …Texts aren’t just being used to help out parents. Non-governmental organizations (NGOs) have also used them to encourage civic participation in kids and young adults. Open Progress, for example, has an all-volunteer community called “text troop” that messages young adults across the US, reminding them to register to vote and helping them find their polling location.

Text-based programs are also useful in the field of nutrition, where private companies and public-health organizations have embraced them as a way to give advice on healthy eating and weight loss. The National Cancer Institute runs a text-based program called SmokefreeTXT that sends US adults between three and five messages per day for up to eight weeks, to help them quit smoking.

Texting programs can be a good way to nudge people toward improving their mental health, too. Crisis Text Line, for example, was the first national 24/7 crisis-intervention hotline to conduct counseling conversations entirely over text…(More).