
Stefaan Verhulst

Kamala Thiagarajan at NPR: Devastating rainfall followed by treacherous landslides has killed 210 people since August 8 and displaced over a million in the southern Indian state of Kerala. India’s National Disaster Response Force launched its biggest-ever rescue operation in the state, evacuating over 10,000 people. The Indian army and navy were deployed as well.

But they had some unexpected assistance.

Thousands of Indian citizens used mobile phone technology and social media platforms to mobilize relief efforts….

In many other cases, it was ordinary folk who harnessed social media and their own resources to play a role in relief and rescue efforts.

As the scope of the disaster became clear, the state government of Kerala reached out to software engineers from around the world. They joined hands with the state-government-run Information Technology Cell, coming together on Slack, a communications platform, to create the website www.keralarescue.in

The website allowed volunteers who were helping with disaster relief in Kerala’s many flood-affected districts to share the needs of stranded people so that authorities could act.

Johann Binny Kuruvilla, a travel blogger, was one of many volunteers. He put in 14-hour shifts at the District Emergency Operations Center in Ernakulam, Kochi.

The first thing he did, he says, was to harness the power of WhatsApp, a critical platform for dispensing information in India. He joined five key WhatsApp groups with hundreds of members who were coordinating rescue and relief efforts. He sent them his number and mentioned that he would be in a position to communicate with a network of police, army and navy personnel. Soon he was receiving an average of 300 distress calls a day from people marooned at home and faced with medical emergencies.

Volunteers like Kuruvilla had no formal training. “We improvised and devised our own systems to store data,” he says. He documented the information he received in Excel spreadsheets before passing them on to authorities.

He was also the contact point for INSPIRE, a fraternity of mechanical engineering students at a government-run engineering college at Barton Hill in Kerala. The students told him they had made nearly 300 power banks for charging phones, using four 1.5 volt batteries and cables, and, he says, “asked us if we could help them airdrop it to those stranded in flood-affected areas.” A power bank could boost a mobile phone’s charge by 20 percent in minutes, which could be critical for people without access to electricity. Authorities agreed to distribute the power banks, wrapping them in bubble wrap and airdropping them to areas where people were marooned.

Some people took to social media to create awareness of the aftereffects of the flooding.

Anand Appukuttan, 38, is a communications designer. Working as a consultant, he currently lives in Chennai, 500 miles by road from Kerala, and designs infographics, mobile apps and software for tech companies. Appukuttan was born and brought up in Kottayam, a city in southwest Kerala. When he heard of the devastation caused by the floods, he longed to help. A group of experts on disaster management reached out to him over Facebook on August 18, asking if he would share his time and expertise in creating flyers for awareness; he immediately agreed….(More)”.

How Social Media Came To The Rescue After Kerala’s Floods

Rochelle Gurstein in The Baffler: “WHAT DO WE LOSE WHEN WE LOSE OUR PRIVACY? This question has become increasingly difficult to answer, living as we do in a society that offers boundless opportunities for men and women to expose themselves (in all dimensions of that word) as never before, to commit what are essentially self-invasions of privacy. Although this is a new phenomenon, it has become as ubiquitous as it is quotidian, and for that reason, it is perhaps one of the most telling signs of our time. To get a sense of the sheer range of unconscious exhibitionism, we need only think of the popularity of reality TV shows, addiction-recovery memoirs, and cancer diaries. Then there are the banal but even more conspicuous varieties, like soaring, all-glass luxury apartment buildings and hotels in which inhabitants display themselves in all phases of their private lives to the casual glance of thousands of city walkers below. Or the incessant sound of people talking loudly—sometimes gossiping, sometimes crying—on their cell phones, broadcasting to total strangers the intimate details of their lives.

And, of course, there are now unprecedented opportunities for violating one’s own privacy, furnished by the technology of the internet. The results are everywhere, from selfies and Instagrammed trivia to the almost automatic, everyday activity of Facebook users registering their personal “likes” and preferences. (As we recently learned, this online pastime is nowhere near as private as we had been led to believe; more than fifty million users’ idly generated “data” was “harvested” by Cambridge Analytica to make “personality profiles” that were then used to target voters with advertisements from Donald Trump’s presidential campaign.)

Beyond these branded and aggressively marketed forums for self-invasions of privacy there are all the giddy, salacious forms that circulate in graphic images and words online—the sort that led not so long ago to the downfall of Anthony Weiner. The mania for attention of any kind is so pervasive—and the invasion of privacy so nonchalant—that many of us no longer notice, let alone mind, what in the past would have been experienced as insolent violations of privacy….(More)”.

Self-Invasion And The Invaded Self

Guest Editorial to Special Issue of IEEE Internet of Things Journal: “As we become increasingly reliant on intelligent, interconnected devices in every aspect of our lives, critical trust, security, and privacy concerns are raised as well.

First, the sensing data provided by individual participants is not always reliable. It may be noisy or even faked for various reasons, such as poor sensor quality, lack of sensor calibration, background noise, context effects, mobility, an incomplete view of observations, or malicious attacks. Crowdsourcing applications should be able to evaluate the trustworthiness of collected data in order to filter out the noisy and fake data that may disturb or intrude upon a crowdsourcing system. Second, providing data (e.g., photographs taken with personal mobile devices) or using IoT applications may compromise data providers’ personal data privacy (e.g., location, trajectory, and activity privacy) and identity privacy. It therefore becomes essential to assess the trustworthiness of the data while preserving the data providers’ privacy. Third, data analytics and mining in crowdsourcing may disclose the privacy of data providers or related entities to unauthorized parties, which lowers participants’ willingness to contribute to the crowdsourcing system, affects system acceptance, and greatly impedes its further development. Fourth, the identities of data providers could be forged by malicious attackers to intrude upon the whole crowdsourcing system. In this context, trust, security, and privacy have begun to attract special attention as prerequisites for high quality of service at each step of crowdsourcing: data collection, transmission, selection, processing, analysis and mining, and utilization.
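The first concern, evaluating the trustworthiness of collected data, is often approached by treating reports that deviate sharply from the consensus of other participants as suspect. The sketch below is our own illustration (not a method from the issue) of one such baseline: filtering crowdsensed readings by a robust z-score built on the median absolute deviation, so a few faked values cannot skew the estimate.

```python
import statistics

def filter_unreliable(readings, threshold=3.5):
    """Return the readings whose robust z-score is within the threshold.

    Uses the median absolute deviation (MAD) instead of the standard
    deviation, so a handful of faked or noisy reports do not distort
    the notion of "typical" against which each report is judged.
    """
    med = statistics.median(readings)
    mad = statistics.median(abs(x - med) for x in readings)
    if mad == 0:  # all readings (nearly) identical: nothing to filter
        return list(readings)
    # 0.6745 rescales the MAD to be comparable to a standard deviation
    return [x for x in readings if abs(0.6745 * (x - med) / mad) <= threshold]

# Eight honest temperature reports plus two faked outliers
reports = [21.0, 21.4, 20.8, 21.1, 21.3, 20.9, 21.2, 21.0, 55.0, -10.0]
trusted = filter_unreliable(reports)  # the two outliers are dropped
```

Real systems layer further signals on top of this, such as per-provider reputation, but the consensus-versus-outlier comparison is the common core of correlation-based data quality analysis.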

Trust, security, and privacy in crowdsourcing have received increasing attention. Many methods have been proposed to protect privacy during data collection and processing. For example, data perturbation can be adopted to hide the real data values during collection. When preprocessing the collected data, data anonymization (e.g., k-anonymization) and fusion can be applied to break the links between the data and their sources/providers. At the application layer, anonymity is used to mask the real identities of data sources/providers. To enable privacy-preserving data mining, secure multiparty computation (SMC) and homomorphic encryption offer options for protecting raw data when multiple parties jointly run a data mining algorithm: through cryptographic techniques, no party learns anything other than its own input and the expected results. For data truth discovery, applicable solutions include correlation-based data quality analysis and trust evaluation of data sources. But current solutions remain imperfect, incomplete, and inefficient….(More)”.
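To make the data-perturbation idea concrete, here is a minimal sketch of our own (the function name and parameters are illustrative, not from the issue) in the style of local differential privacy: each participant adds zero-centered Laplace noise to a reading before upload, so the collector never sees the raw value, yet aggregates over many reports remain usable.

```python
import random

def laplace_perturb(value, epsilon=1.0, sensitivity=1.0):
    """Perturb a numeric reading with Laplace noise before it leaves the device.

    Smaller epsilon means stronger privacy and noisier individual reports;
    because the noise is zero-centered, averaging many perturbed reports
    still approximates the population mean.
    """
    scale = sensitivity / epsilon
    # The difference of two i.i.d. exponential variates is Laplace-distributed
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return value + noise

# Each participant uploads only a noisy reading; the aggregate stays usable
random.seed(42)
true_value = 50.0
reports = [laplace_perturb(true_value) for _ in range(10_000)]
estimate = sum(reports) / len(reports)  # close to 50.0
```

The same trade-off the editorial notes for anonymization applies here: each individual report is unreliable by design, so utility is recovered only in aggregate.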

Trust, Security, and Privacy in Crowdsourcing

Paper by Caterina Marchionni and Samuli Reijula: “It has recently been argued that successful evidence-based policy should rely on two kinds of evidence: statistical and mechanistic. The former is held to be evidence that a policy brings about the desired outcome, and the latter concerns how it does so. Although we agree with the spirit of this proposal, we argue that the underlying conception of mechanistic evidence, as evidence different in kind from correlational, difference-making or statistical evidence, does not correctly capture the role that information about mechanisms should play in evidence-based policy. We offer an alternative account of mechanistic evidence as information concerning the causal pathway connecting the policy intervention to its outcome. Not only can this be analyzed as evidence of difference-making, but it is also to be found at any level and is obtainable by a broad range of methods, both experimental and observational. Using behavioral policy as an illustration, we draw the implications of this revised understanding of mechanistic evidence for debates concerning policy extrapolation, evidence hierarchies, and evidence integration…(More)”.

What is mechanistic evidence, and why do we need it for evidence-based policy?

Book by Longbing Cao: “This book explores answers to the fundamental questions driving the research, innovation and practices of the latest revolution in scientific, technological and economic development: How does data science transform existing science, technology, industry, economy, profession and education? How does one remain competitive in the data science field? What is responsible for shaping the mindset and skillset of data scientists?

Data Science Thinking paints a comprehensive picture of data science as a new scientific paradigm from the scientific evolution perspective, as data science thinking from the scientific-thinking perspective, as a trans-disciplinary science from the disciplinary perspective, and as a new profession and economy from the business perspective.

The topics cover an extremely wide spectrum of essential and relevant aspects of data science, spanning its evolution, concepts, thinking, challenges, discipline, and foundation, all the way to industrialization, profession, education, and the vast array of opportunities that data science offers. The book’s three parts each detail layers of these different aspects….(More)”.

Data Science Thinking: The Next Scientific, Technological and Economic Revolution

MIT Technology Review: “Our newest issue is live today, in which we dive into the many ways that technology is changing politics.

A major shift: In 2013 we emblazoned our cover with the words, “Big Data Will Save Politics.” When we chose that headline, Barack Obama had just won reelection with the help of a crack team of data scientists. The Arab Spring had already cooled into an Arab Winter, but the social-media platforms that had powered the uprisings were still basking in the afterglow. As our editor in chief Gideon Lichfield writes, today, with Cambridge Analytica, fake news, election hacking, and the shrill cacophony that dominates social media, technology feels as likely to destroy politics as to save it.

The political impact: From striking data visualizations that take a close look at the famed “filter bubble” effect that’s blamed for political polarization to an examination of how big data is disrupting the cozy world of political lobbying, we’re analyzing how emerging technologies are shaping the political landscape, eroding trust, and, possibly, becoming a part of the solution….(More)”.

Technology is threatening our democracy. How do we save it?

Anol Bhattacherjee and Utkarsh Shrivastava in Government Information Quarterly: “Investigations of white collar crimes such as corruption are often hindered by the lack of information or physical evidence. Information and communication technologies (ICT), by virtue of their ability to monitor, track, record, analyze, and share vast amounts of information, may help countries identify and prosecute criminals, and deter future corruption. While prior studies have demonstrated that ICT is an important tool in reducing corruption at the country level, they provide little explanation as to how ICT influences corruption and when it works best.

We explore these gaps in the literature through a hypothetico-deductive approach, drawing on general deterrence theory to postulate a series of main and moderating effects relating ICT use to corruption, and then testing those effects via secondary data analysis. Our analysis suggests that ICT use influences corruption by increasing the certainty and celerity of punishment for corruption. Moreover, ICT laws moderate the effect of ICT use on corruption, suggesting that ICT investments may have limited effect on corruption unless complemented with appropriate ICT laws. Implications of our findings for research and practice are discussed….(More)”.

The effects of ICT use and ICT Laws on corruption: A general deterrence theory perspective

Michael Sanders et al in Behavioral Public Policy: “The use of behavioural sciences in government has expanded and matured in the last decade. Since the Behavioural Insights Team (BIT) has been part of this movement, we sketch out the history of the team and the current state of behavioural public policy, recognising that other works have already told this story in detail. We then set out two clusters of issues that have emerged from our work at BIT. The first cluster concerns current challenges facing behavioural public policy: the long-term effects of interventions; repeated exposure effects; problems with proxy measures; spillovers and general equilibrium effects and unintended consequences; cultural variation; ‘reverse impact’; and the replication crisis. The second cluster concerns opportunities: influencing the behaviour of government itself; scaling interventions; social diffusion; nudging organisations; and dealing with thorny problems. We conclude that the field will need to address these challenges and take these opportunities in order to realise the full potential of behavioural public policy….(More)”.

Behavioural science and policy: where are we now and where are we going?

Frank Pasquale at Real Life Magazine: “Algorithms increasingly govern our social world, transforming data into scores or rankings that decide who gets credit, jobs, dates, policing, and much more. The field of “algorithmic accountability” has arisen to highlight the problems with such methods of classifying people, and it has great promise: Cutting-edge work in critical algorithm studies applies social theory to current events; law and policy experts seem to publish new articles daily on how artificial intelligence shapes our lives; and a growing community of researchers has developed a field known as “Fairness, Accountability, and Transparency in Machine Learning.”

The social scientists, attorneys, and computer scientists promoting algorithmic accountability aspire to advance knowledge and promote justice. But what should such “accountability” more specifically consist of? Who will define it? At a two-day, interdisciplinary roundtable on AI ethics I recently attended, such questions featured prominently, and humanists, policy experts, and lawyers engaged in a free-wheeling discussion about topics ranging from robot arms races to computationally planned economies. But at the end of the event, an emissary from a group funded by Elon Musk and Peter Thiel among others pronounced our work useless. “You have no common methodology,” he informed us (apparently unaware that that’s the point of an interdisciplinary meeting). “We have a great deal of money to fund real research on AI ethics and policy”— which he thought of as dry, economistic modeling of competition and cooperation via technology — “but this is not the right group.” He then gratuitously lashed out at academics in attendance as “rent seekers,” largely because we had the temerity to advance distinctive disciplinary perspectives rather than fall in line with his research agenda.

Most corporate contacts and philanthrocapitalists are more polite, but their sense of what is realistic and what is utopian, what is worth studying and what is mere ideology, is strongly shaping algorithmic accountability research in both social science and computer science. This influence in the realm of ideas has powerful effects beyond it. Energy that could be put into better public transit systems is instead diverted to perfect the coding of self-driving cars. Anti-surveillance activism transmogrifies into proposals to improve facial recognition systems to better recognize all faces. To help payday-loan seekers, developers might design data-segmentation protocols to show them what personal information they should reveal to get a lower interest rate. But the idea that such self-monitoring and data curation can be a trap, disciplining the user in ever finer-grained ways, remains less explored. Trying to make these games fairer, the research elides the possibility of rejecting them altogether….(More)”.

Odd Numbers: Algorithms alone can’t meaningfully hold other algorithms accountable

Special issue of Foreign Affairs: “History is filled with supposed lost utopias, and there is no greater cliché than to see one’s own era as a lamentable decline from a previous golden age. Sometimes, however, clichés are right. And as we explored the Internet’s future for this issue’s lead package, it became clear this was one of those times. Contemplating where we have come from digitally and where we are heading, it’s hard not to feel increasingly wistful and nostalgic.

The last few decades have witnessed the growth of an American-sponsored Internet open to all, and that has helped tie the world together, bringing wide-ranging benefits to billions. But that was then; conditions have changed.

Other great powers are contesting U.S. digital leadership, pushing their own national priorities. Security threats appear and evolve constantly. Platforms that were supposed to expand and enrich the marketplace of ideas have been hijacked by trolls and bots and flooded with disinformation. And real power is increasingly concentrated in the hands of a few private tech giants, whose self-interested choices have dramatic consequences for the entire world around them.

Whatever emerges from this melee, it will be different from, and in many ways worse than, what we have now.

Adam Segal paints the big picture well. “The Internet has long been an American project,” he writes. “Yet today, the United States has ceded leadership in cyberspace to China.” What will happen if Beijing continues its online ascent? “The Internet will be less global and less open. A major part of it will run Chinese applications over Chinese-made hardware. And Beijing will reap the economic, diplomatic, national security, and intelligence benefits that once flowed to Washington.”

Nandan Nilekani, a co-founder of Infosys, outlines India’s unique approach to these issues, which is based on treating “digital infrastructure as a public good and data as something that citizens deserve access to.” Helen Dixon, Ireland’s data protection commissioner, presents a European perspective, arguing that giving individuals control over their own data—as the General Data Protection Regulation, the EU’s historic new regulatory effort, aims to do—is essential to restoring the Internet’s promise. And Karen Kornbluh, a veteran U.S. policymaker, describes how the United States dropped the digital ball and what it could do to pick it up again.

Finally, Michèle Flournoy and Michael Sulmeyer explain the new realities of cyberwarfare, and Viktor Mayer-Schönberger and Thomas Ramge consider the problems caused by Big Tech’s hoarding of data and what can be done to address it.

A generation from now, people across the globe will no doubt revel in the benefits the Internet has brought. But the more thoughtful among them will also lament the eclipse of the founders’ idealistic vision and dream of a world connected the way it could—and should— have been….(More)”.

World War Web
