How to Use Social Media to Better Engage People Affected by Crises


Guide by the International Red Cross Federation: “Together with ICRC, and with the support of OCHA, we have published a brief guide on how to use social media to better engage people affected by crisis. The guide is geared towards staff in humanitarian organisations who are responsible for official social media channels.

In the past few years, the role of social media and digital technologies in times of disasters and crises has grown exponentially. During disasters like the 2015 Nepal earthquake, for instance, Facebook and Twitter were crucial components of the humanitarian response, allowing mostly local, but also international, actors involved in relief efforts to disseminate lifesaving messages. However, the use of social media by humanitarian organizations to engage and communicate with (not about) affected people is, to date, still vastly untapped, and remains largely under-researched and under-documented in terms of practical guidance (both thematic and technical), good practices and lessons learned.

This brief guide, trying to address this gap, provides advice on how to use social media effectively to engage with, and be accountable to, affected people through practical tips and case studies from within the Movement and the wider sector…(Guide)”.

Using Facebook data as a real-time census


Phys.org: “Determining how many people live in Seattle, perhaps of a certain age, perhaps from a specific country, is the sort of question that finds its answer in the census, a massive data dump for places across the country.

But just how fresh is that data? After all, the census is updated once a decade, and the U.S. Census Bureau’s smaller but more detailed American Community Survey, annually. There’s also a delay between when data are collected and when they are published. (The release of data for 2016 started gradually in September 2017.)

Enter Facebook, which, with some caveats, can serve as an even more current source of data, especially about migrants. That’s the conclusion of a study led by Emilio Zagheni, associate professor of sociology at the University of Washington, published Oct. 11 in Population and Development Review. The study is believed to be the first to demonstrate how present-day migration statistics can be obtained by compiling the same data that advertisers use to target their audience on Facebook, and by combining that source with information from the Census Bureau.
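In outline, the approach is a calibration: compare Facebook's ad-audience estimates with groups the census already measures, derive a correction factor for Facebook's over- or under-representation, and apply that factor to fresh Facebook counts. The sketch below is a minimal illustration of that idea only; the country names and counts are hypothetical, not figures from the study.

```python
# Minimal calibration sketch (hypothetical numbers, not the study's data).
fb_expats = {"Mexico": 120_000, "India": 45_000}   # Facebook ad-audience estimates
acs_expats = {"Mexico": 150_000, "India": 50_000}  # official ACS baseline counts

# Average over- or under-representation of Facebook relative to the census
bias = sum(fb_expats[c] / acs_expats[c] for c in fb_expats) / len(fb_expats)

def census_adjusted(fb_count: float) -> float:
    """Scale a fresh Facebook audience count by the calibration factor."""
    return fb_count / bias

# Adjust a new, up-to-the-minute Facebook count toward a census-consistent estimate.
estimate = census_adjusted(60_000)
```

A production version would calibrate per group and per region rather than with a single pooled factor, but the structure, a census baseline plus a Facebook-derived correction, stays the same.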

Migration indicates a variety of political and economic trends and is a major driver of population change, Zagheni said. As researchers further explore the increasing number of databases produced for advertisers, Zagheni argues, social scientists could leverage Facebook, LinkedIn and Twitter more often to glean information on geography, mobility, behavior and employment. And while there are some limits to the data – each platform is a self-selected, self-reporting segment of the population – the number of migrants according to Facebook could supplement the official numbers logged by the U.S. Census Bureau, Zagheni said….(Full Paper)”.

Tech’s fight for the upper hand on open data


Rana Foroohar at the Financial Times: “One thing that’s becoming very clear to me as I report on the digital economy is that a rethink of the legal framework in which business has been conducted for many decades is going to be required. Many of the key laws that govern digital commerce (which, increasingly, is most commerce) were crafted in the 1980s or 1990s, when the internet was an entirely different place. Consider, for example, the US Computer Fraud and Abuse Act.

This 1986 law made it a federal crime to engage in “unauthorised access” to a computer connected to the internet. It was designed to prevent hackers from breaking into government or corporate systems. …While few hackers seem to have been deterred by it, the law is being used in turf battles between companies looking to monetise the most valuable commodity on the planet — your personal data. Case in point: LinkedIn vs HiQ, which may well become a groundbreaker in Silicon Valley.

LinkedIn is the dominant professional networking platform, a Facebook for corporate types. HiQ is a “data-scraping” company, one that accesses publicly available data from LinkedIn profiles and then mixes it up in its own quantitative black box to create two products — Keeper, which tells employers which of their employees are at greatest risk of being recruited away, and Skill Mapper, which provides a summary of the skills possessed by individual workers. LinkedIn allowed HiQ to do this for five years, before developing a very similar product to Skill Mapper, at which point LinkedIn sent the company a “cease and desist” letter, and threatened to invoke the CFAA if HiQ did not stop tapping its user data.

…Meanwhile, a case that might have been significant mainly to digital insiders is being given a huge publicity boost by Harvard professor Laurence Tribe, the country’s pre-eminent constitutional law scholar. He has joined the HiQ defence team because, as he told me, he believes the case is “tremendously important”, not only in terms of setting competitive rules for the digital economy, but in the realm of free speech. According to Prof Tribe, if you accept that the internet is the new town square, and “data is a central type of capital”, then it must be freely available to everyone — and LinkedIn, as a private company, cannot suddenly decide that publicly accessible, Google-searchable data is their private property….(More)”.

How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem


Paper by Amanda Levendowski: “As the use of artificial intelligence (AI) continues to spread, we have seen an increase in examples of AI systems reflecting or exacerbating societal bias, from racist facial recognition to sexist natural language processing. These biases threaten to overshadow AI’s technological gains and potential benefits. While legal and computer science scholars have analyzed many sources of bias, including the unexamined assumptions of its often-homogenous creators, flawed algorithms, and incomplete datasets, the role of the law itself has been largely ignored. Yet just as code and culture play significant roles in how AI agents learn about and act in the world, so too do the laws that govern them. This Article is the first to examine perhaps the most powerful law impacting AI bias: copyright.

Artificial intelligence often learns to “think” by reading, viewing, and listening to copies of human works. This Article first explores the problem of bias through the lens of copyright doctrine, looking at how the law’s exclusion of access to certain copyrighted source materials may create or promote biased AI systems. Copyright law limits bias mitigation techniques, such as testing AI through reverse engineering, algorithmic accountability processes, and competing to convert customers. The rules of copyright law also privilege access to certain works over others, encouraging AI creators to use easily available, legally low-risk sources of data for teaching AI, even when those data are demonstrably biased. Second, it examines how a different part of copyright law — the fair use doctrine — has traditionally been used to address similar concerns in other technological fields, and asks whether it is equally capable of addressing them in the field of AI bias. The Article ultimately concludes that it is, in large part because the normative values embedded within traditional fair use ultimately align with the goals of mitigating AI bias and, quite literally, creating fairer AI systems….(More)”.

Can Blockchain Bring Voting Online?


Ben Miller at Government Technology: “Hash chains are not a new concept in cryptography. They are, essentially, a long chain of data connected by values called hashes that prove the connection of each part to the next. By stringing all these pieces together and representing them in small values, then, one can represent a large amount of information without doing much. Josh Benaloh, a senior cryptographer for Microsoft Research and director of the International Association for Cryptologic Research, gives the rough analogy of taking a picture of a person, then taking another picture of that person holding the first picture, and so on. Loss of resolution aside, each picture would contain all the images from the previous pictures.
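Benaloh's picture-of-a-picture analogy maps directly onto code: each link's hash is computed over the new data plus the previous link's hash, so the final value commits to the entire history. A minimal sketch using SHA-256 (the record names are illustrative):

```python
import hashlib

def chain_hash(prev_hash: str, data: str) -> str:
    """Hash the new data together with the previous link's hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

# Build a small chain: the last hash commits to every record before it.
links = []
h = ""  # no previous hash exists for the first link
for record in ["record-1", "record-2", "record-3"]:
    h = chain_hash(h, record)
    links.append((record, h))

# Changing an early record changes every hash after it, exposing the tampering.
tampered_first = chain_hash("", "record-X")
assert tampered_first != links[0][1]
```

This is why a single small value (the latest hash) can stand in for the whole chain: anyone holding the records can recompute the hashes and detect any alteration.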

It’s only recently that people have found a way to extend the idea to commonplace applications. That happened with the advent of bitcoin, a digital “cryptocurrency” that has attained real-world value and become a popular exchange medium for ransomware attacks. The bitcoin community operates using a specific type of hash chain called a blockchain. It works by asking a group of users to solve complex problems as a sort of proof that bitcoin transactions took place, in exchange for a reward.
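The "complex problems" miners solve can be illustrated with a toy proof-of-work: search for a nonce that makes the block's hash begin with a run of zeros. This is a simplification of bitcoin's actual scheme (which double-hashes block headers against a difficulty target), but the asymmetry it shows is the real point: the answer is expensive to find and cheap to verify.

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 2) -> int:
    """Find a nonce whose hash with the data starts with `difficulty` zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

block = "alice pays bob 1 coin"
nonce = proof_of_work(block)
# Anyone can check the work with a single hash, no search required.
digest = hashlib.sha256(f"{block}{nonce}".encode()).hexdigest()
assert digest.startswith("00")
```

Raising `difficulty` by one multiplies the expected search cost by 16 (one more hex digit must be zero), which is how real networks tune how hard the "problem" is.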

“Academics who have been looking at this for years, when they saw bitcoin, they said, ‘This can’t work, this has too many problems,’” Benaloh said. “It surprised everybody that this seems to work and to hold.”

But the blockchain concept is by no means limited to money. It’s simply a public ledger, a bulletin board meant to ensure accuracy based on the fact that everyone can see it — and what’s been done to it — at all times. It could be used to keep property records, or to provide an audit trail for how a product got from factory to buyer.

Or perhaps it could be used to prove the veracity and accuracy of digital votes in an election.

It is a potential solution to the problem of cybersecurity in online elections because the foundation of blockchain is the audit trail: If anybody tampered with votes, it would be easy to see and prove.

And in fact, blockchain elections have already been run in the U.S. — just not in the big leagues. Voatz, a Massachusetts-based startup that has struck up a partnership with one of the few companies in the country that actually builds voting systems, has used a blockchain paradigm to run elections for colleges, school boards, unions and other nonprofit and quasi-governmental groups. Perhaps its most high-profile endeavor was authenticating delegate badges at the 2016 Massachusetts Democratic Convention….

Rivest and Benaloh both talk about another online voting solution with much more enthusiasm. And much in the spirit of academia, the technology’s name is pragmatic rather than sleek and buzzworthy: end-to-end verifiable Internet voting (E2E-VIV).

It’s not too far off from blockchain in spirit, but it relies on a centralized approach instead of a decentralized one. Votes are sent from remote electronic devices to the election authority, most likely the secretary of state for the state the person is voting in, and posted online in an encrypted format. The person voting can use her decryption key to check that her vote was recorded accurately.
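Deployed E2E-VIV designs rely on public-key and homomorphic encryption with cryptographic proofs of correct tallying, which are beyond a short sketch. But the core "check that your vote was recorded" step can be illustrated with a hash commitment, where only the voter holds the key. This is a toy illustration of the verification idea, not a real voting protocol:

```python
import hashlib
import secrets

def commit(vote: str) -> tuple[str, str]:
    """The voter keeps the key; the commitment goes on the public bulletin board."""
    key = secrets.token_hex(16)
    commitment = hashlib.sha256((key + vote).encode()).hexdigest()
    return key, commitment

def verify(key: str, vote: str, posted: str) -> bool:
    """Recompute the commitment and compare it with what the authority posted."""
    return hashlib.sha256((key + vote).encode()).hexdigest() == posted

key, posted = commit("candidate-A")            # election authority publishes `posted`
assert verify(key, "candidate-A", posted)      # voter confirms her vote was recorded
assert not verify(key, "candidate-B", posted)  # and that it was not altered
```

Because the posted value is a hash, observers learn nothing about the vote itself, yet the voter can detect any alteration of her recorded ballot.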

But there are no validating peers, no chain of blocks stretching back to the first vote….(More)”.

On the cultural ideology of Big Data


Nathan Jurgenson in The New Inquiry: “Modernity has long been obsessed with, perhaps even defined by, its epistemic insecurity, its grasping toward big truths that ultimately disappoint as our world grows only less knowable. New knowledge and new ways of understanding simultaneously produce new forms of nonknowledge, new uncertainties and mysteries. The scientific method, based in deduction and falsifiability, is better at proliferating questions than it is at answering them. For instance, Einstein’s theories about the curvature of space and motion at the quantum level provide new knowledge and generate new unknowns that previously could not be pondered.

Since every theory destabilizes as much as it solidifies in our view of the world, the collective frenzy to generate knowledge creates at the same time a mounting sense of futility, a tension looking for catharsis — a moment in which we could feel, if only for an instant, that we know something for sure. In contemporary culture, Big Data promises this relief.

As the name suggests, Big Data is about size. Many proponents of Big Data claim that massive databases can reveal a whole new set of truths because of the unprecedented quantity of information they contain. But the big in Big Data is also used to denote a qualitative difference — that aggregating a certain amount of information makes data pass over into Big Data, a “revolution in knowledge,” to use a phrase thrown around by startups and mass-market social-science books. Operating beyond normal science’s simple accumulation of more information, Big Data is touted as a different sort of knowledge altogether, an Enlightenment for social life reckoned at the scale of masses.

As with the similarly inferential sciences like evolutionary psychology and pop-neuroscience, Big Data can be used to give any chosen hypothesis a veneer of science and the unearned authority of numbers. The data is big enough to entertain any story. Big Data has thus spawned an entire industry (“predictive analytics”) as well as reams of academic, corporate, and governmental research; it has also sparked the rise of “data journalism” like that of FiveThirtyEight, Vox, and the other multiplying explainer sites. It has shifted the center of gravity in these fields not merely because of its grand epistemological claims but also because it’s well-financed. Twitter, for example, recently announced that it is putting $10 million into a “social machines” Big Data laboratory.

The rationalist fantasy that enough data can be collected with the “right” methodology to provide an objective and disinterested picture of reality is an old and familiar one: positivism. This is the understanding that the social world can be known and explained from a value-neutral, transcendent view from nowhere in particular. The term comes from Positive Philosophy (1830-1842), by Auguste Comte, who also coined the term sociology in this image. As Western sociology began to congeal as a discipline (departments, paid jobs, journals, conferences), Emile Durkheim, another of the field’s founders, believed it could function as a “social physics” capable of outlining “social facts” akin to the measurable facts that could be recorded about the physical properties of objects. It’s an arrogant view, in retrospect — one that aims for a grand, general theory that can explain social life, a view that became increasingly rooted as sociology became focused on empirical data collection.

A century later, that unwieldy aspiration has been largely abandoned by sociologists in favor of reorienting the discipline toward recognizing complexities rather than pursuing universal explanations for human sociality. But the advent of Big Data has resurrected the fantasy of a social physics, promising a new data-driven technique for ratifying social facts with sheer algorithmic processing power…(More)”

Policy Analytics, Modelling, and Informatics


Book edited by J. Ramon Gil-Garcia, Theresa A. Pardo and Luis F. Luna-Reyes: “This book provides a comprehensive approach to the study of policy analytics, modelling and informatics. It includes theories and concepts for understanding tools and techniques used by governments seeking to improve decision making through the use of technology, data, modelling, and other analytics, and provides relevant case studies and practical recommendations. Governments around the world face policy issues that require strategies and solutions using new technologies, new access to data and new analytical tools and techniques such as computer simulation, geographic information systems, and social network analysis for the successful implementation of public policy and government programs. Chapters include cases, concepts, methodologies, theories, experiences, and practical recommendations on data analytics and modelling for public policy and practice, and addresses a diversity of data tools, applied to different policy stages in several contexts, and levels and branches of government. This book will be of interest of researchers, students, and practitioners in e-government, public policy, public administration, policy analytics and policy informatics….(More)”.

Open mapping from the ground up: learning from Map Kibera


Report by Erica Hagen for Making All Voices Count: “In Nairobi in 2009, 13 young residents of the informal settlement of Kibera mapped their community using OpenStreetMap, an online mapping platform. This was the start of Map Kibera, and eight years of ongoing work to date on digital mapping, citizen media and open data. In this paper, Erica Hagen – one of the initiators of Map Kibera – reflects on the trajectory of this work. Through research interviews with Map Kibera staff, participants and clients, and users of the data and maps the project has produced, she digs into what it means for citizens to map their communities, and examines the impact of open local information on members of the community. The paper begins by situating the research and Map Kibera in selected literature on transparency, accountability and mapping. It then presents three case studies of mapping in Kibera – in the education, security and water sectors – discussing evidence about the effects not only on project participants, but also on governmental and non-governmental actors in each of the three sectors. It concludes that open, community-based data collection can lead to greater trust, which is sorely lacking in marginalised places. In large-scale data gathering, it is often unclear to those involved why the data is needed or what will be done with it. But the experience of Map Kibera shows that by starting from the ground up and sharing open data widely, it is possible to achieve strong sector-wide ramifications beyond the scope of the initial project, including increased resources and targeting by government and NGOs. While debates continue over the best way to truly engage citizens in the ‘data revolution’ and tracking the Sustainable Development Goals, the research here shows that engaging people fully in the information value chain can be the missing link between data as a measurement tool, and information having an impact on social development….(More)”.

Nobody reads privacy policies – here’s how to fix that


 at the Conversation: “…The key to turning privacy notices into something useful for consumers is to rethink their purpose. A company’s policy might show compliance with the regulations the firm is bound to follow, but remains impenetrable to a regular reader.

The starting point for developing consumer-friendly privacy notices is to make them relevant to the user’s activity, understandable and actionable. As part of the Usable Privacy Policy Project, my colleagues and I developed a way to make privacy notices more effective.

The first principle is to break up the documents into smaller chunks and deliver them at times that are appropriate for users. Right now, a single multi-page policy might have many sections and paragraphs, each relevant to different services and activities. Yet people who are just casually browsing a website need only a little bit of information about how the site handles their IP addresses, if what they look at is shared with advertisers and if they can opt out of interest-based ads. Those people don’t need to know about many other things listed in all-encompassing policies, like the rules associated with subscribing to the site’s email newsletter, nor how the site handles personal or financial information belonging to people who make purchases or donations on the site.

When a person does decide to sign up for email updates or pay for a service through the site, then an additional short privacy notice could tell her what else she needs to know. These shorter documents should also offer users meaningful choices about what they want a company to do – or not do – with their data. For instance, a new subscriber might be allowed to choose whether the company can share his email address or other contact information with outside marketing companies by clicking a check box.

Understanding users’ expectations

Notices can be made even simpler if they focus particularly on unexpected or surprising types of data collection or sharing. For instance, in another study, we learned that most people know their fitness tracker counts steps – so they didn’t really need a privacy notice to tell them that. But they did not expect their data to be collected, aggregated and shared with third parties. Customers should be asked for permission to do this, and allowed to restrict sharing or opt out entirely.

Most importantly, companies should test new privacy notices with users, to ensure final versions are understandable and not misleading, and that offered choices are meaningful….(More)”

Blockchain Could Help Us Reclaim Control of Our Personal Data


Michael Mainelli at Harvard Business Review: “…numerous smaller countries, such as Singapore, are exploring national identity systems that span government and the private sector. One of the more successful stories of governments instituting an identity system is Estonia, with its ID-kaarts. Reacting to cyber-attacks against the nation, the Estonian government decided that it needed to become more digital, and even more secure. They decided to use a distributed ledger to build their system, rather than a traditional central database. Distributed ledgers are used in situations where multiple parties need to share authoritative information with each other without a central third party, such as for data-logging clinical assessments or storing data from commercial deals. These are multi-organization databases with a super audit trail. As a result, the Estonian system provides its citizens with an all-digital government experience, significantly reduced bureaucracy, and significantly high citizen satisfaction with their government dealings.

Cryptocurrencies such as Bitcoin have increased the awareness of distributed ledgers with their use of a particular type of ledger — blockchain — to hold the details of coin accounts among millions of users. Cryptocurrencies have certainly had their own problems with their wallets and exchanges — even ID-kaarts are not without their technical problems — but the distributed ledger technology holds firm for Estonia and for cryptocurrencies. These technologies have been working in hostile environments now for nearly a decade.

The problem with a central database like the ones used to house social security numbers, or credit reports, is that once it’s compromised, a thief has the ability to copy all of the information stored there. Hence the huge numbers of people that can be affected — more than 140 million people in the Equifax breach, and more than 50 million at Home Depot — though perhaps Yahoo takes the cake with more than three billion alleged customer accounts hacked.  Of course, if you can find a distributed ledger online, you can copy it, too. However, a distributed ledger, while available to everyone, may be unreadable if its contents are encrypted. Bitcoin’s blockchain is readable to all, though you can encrypt things in comments. Most distributed ledgers outside cryptocurrencies are encrypted in whole or in part. The effect is that while you can have a copy of the database, you can’t actually read it.
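The point that a copied-but-encrypted ledger is useless to a thief can be shown with a toy symmetric cipher: XOR the entry with a keystream derived from a secret key. This is an illustration only; a real system would use an authenticated cipher such as AES-GCM rather than this hand-rolled construction.

```python
import hashlib

def keystream_xor(key: str, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256-derived keystream (illustration only)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(f"{key}:{counter}".encode()).digest())
        counter += 1
    return bytes(d ^ k for d, k in zip(data, stream))

# The encrypted entry can sit on a public, widely copied ledger...
entry = keystream_xor("holder-secret-key", b"blood type: O negative")
# ...but only the key holder can turn it back into the plaintext.
assert keystream_xor("holder-secret-key", entry) == b"blood type: O negative"
assert keystream_xor("guessed-key", entry) != b"blood type: O negative"
```

Because XOR with the same keystream is its own inverse, the one function both encrypts and decrypts; anyone with the ciphertext but the wrong key recovers only noise.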

This characteristic of encrypted distributed ledgers has big implications for identity systems.  You can keep certified copies of identity documents, biometric test results, health data, or academic and training certificates online, available at all times, yet safe unless you give away your key. At a whole system level, the database is very secure. Each single ledger entry among billions would need to be found and then individually “cracked” at great expense in time and computing, making the database as a whole very safe.

Distributed ledgers seem ideal for private distributed identity systems, and many organizations are working to provide such systems to help people manage the huge amount of paperwork modern society requires to open accounts, validate yourself, or make payments.  Taken a small step further, these systems can help you keep relevant health or qualification records at your fingertips.  Using “smart” ledgers, you can forward your documentation to people who need to see it, while keeping control of access, including whether another party can forward the information. You can even revoke someone’s access to the information in the future….(More)”.