Web design plays a role in how much we reveal online


European Commission: “A JRC study, “Nudges to Privacy Behaviour: Exploring an Alternative Approach to Privacy Notices”, used behavioural sciences to look at how individuals react to different types of privacy notices. Specifically, the authors analysed users’ reactions to modified choice architecture (i.e. the environment in which decisions take place) of web interfaces.

Two types of privacy behaviour were measured: passive disclosure, when people unwittingly disclose personal information, and direct disclosure, when people make an active choice to reveal personal information. After testing different designs with over 3,000 users from the UK, Italy, Germany and Poland, the results show that web interface design affects decisions on disclosing personal information. The study also explored differences related to country of origin, gender, education level and age.

A depiction of a person’s face on the website led people to reveal more personal information. This design choice, along with the visualisation of the user’s IP address or browsing history, also had an impact on people’s awareness of a privacy notice. If confirmed, these features are particularly relevant for habitual and instinctive online behaviour.

With regard to education, users who had attended (though not necessarily graduated from) college felt significantly less observed or monitored and more comfortable answering questions than those who never went to college. This result challenges the assumption that the better educated are more aware of information-tracking practices. Further investigation, perhaps of a qualitative nature, could help dig deeper into this issue. On the other hand, people with a lower level of education were more likely to reveal personal information unwittingly. This appeared to be because non-college attendees were simply less aware that some online behaviour reveals personal information about them.

Strong differences between countries were observed, indicating a relationship between culture and information disclosure. Although participants in Italy revealed the most personal information through passive disclosure, in direct disclosure they revealed less than participants in other countries. Approximately 75% of participants in Italy chose to answer positively to at least one stigmatised question, compared to 81% in Poland, 83% in Germany and 92% in the UK.

Approximately 73% of women answered ‘never’ to the questions asking whether they had ever engaged in socially stigmatised behaviour, compared to 27% of males. This large difference could be due to the nature of the questions (e.g. about alcohol consumption, which might be more acceptable for males). It could also suggest women feel under greater social scrutiny or are simply more cautious when disclosing personal information.

These results could offer valuable insights to inform European policy decisions, even though the study targeted a sample of users in four countries in an experimental setting. Major web service providers are likely to have extensive amounts of data on how slight changes to their services’ privacy controls affect users’ privacy behaviour. The authors of the study suggest that collaboration between web providers and policy-makers can lead to recommendations for web interface design that allow for conscientious disclosure of privacy information….(More)”

The Quantified Community and Neighborhood Labs: A Framework for Computational Urban Planning and Civic Technology Innovation


Constantine E. Kontokosta: “This paper presents the conceptual framework and justification for a “Quantified Community” (QC) and a networked experimental environment of neighborhood labs. The QC is a fully instrumented urban neighborhood that uses an integrated, expandable, and participatory sensor network to support the measurement, integration, and analysis of neighborhood conditions, social interactions and behavior, and sustainability metrics to support public decision-making. Through a diverse range of sensor and automation technologies — combined with existing data generated through administrative records, surveys, social media, and mobile sensors — information on human, physical, and environmental elements can be processed in real time to better understand the interaction and effects of the built environment on human well-being and outcomes. The goal is to create an “informatics overlay” that can be incorporated into future urban development and planning that supports the benchmarking and evaluation of neighborhood conditions, provides a test-bed for measuring the impact of new technologies and policies, and responds to the changing needs and preferences of the local community….(More)”
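
To make the notion of an “informatics overlay” more concrete, here is a minimal, purely illustrative sketch of how heterogeneous neighborhood sensor readings might be fused into simple block-level indicators. The data model, field names, and metrics are assumptions for illustration, not part of the QC framework described in the paper.

```python
# Purely illustrative sketch: a minimal, hypothetical data model for fusing
# heterogeneous neighborhood sensor readings into simple block-level indicators,
# in the spirit of the "informatics overlay" described above. The field names
# and metrics are assumptions, not part of the QC framework itself.
from dataclasses import dataclass
from collections import defaultdict
from statistics import mean


@dataclass
class Reading:
    sensor_id: str
    metric: str      # e.g. "noise_db", "pm25", "pedestrian_count"
    value: float
    block_id: str    # spatial unit the reading is attributed to


def block_indicators(readings):
    """Aggregate raw sensor readings into per-block averages for each metric."""
    buckets = defaultdict(list)
    for r in readings:
        buckets[(r.block_id, r.metric)].append(r.value)
    return {key: mean(values) for key, values in buckets.items()}


sample = [
    Reading("s-01", "noise_db", 62.0, "block-14"),
    Reading("s-02", "noise_db", 58.5, "block-14"),
    Reading("s-03", "pm25", 11.2, "block-14"),
]
print(block_indicators(sample))
# {('block-14', 'noise_db'): 60.25, ('block-14', 'pm25'): 11.2}
```

In practice such indicators would be joined with administrative and survey data and benchmarked over time; the sketch only shows the basic aggregation step.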

Nudge 2.0


Philipp Hacker: “This essay is both a review of the excellent book “Nudge and the Law. A European Perspective”, edited by Alberto Alemanno and Anne-Lise Sibony, and an assessment of the major themes and challenges that the behavioural analysis of law will and should face in the immediate future.

The book makes important and novel contributions on a range of topics, at both a theoretical and a substantive level. Regarding theoretical issues, four themes stand out: First, it highlights the differences between the EU and the US nudging environments. Second, it questions the reliance on expertise in rulemaking. Third, it unveils behavioural trade-offs that have too long gone unnoticed in behavioural law and economics. And fourth, it discusses the requirement of transparency of nudges and the related concept of autonomy. Furthermore, the different authors discuss the impact of behavioural regulation on a number of substantive fields of law: health and lifestyle regulation, privacy law, and the disclosure paradigm in private law.

This paper aims to take some of the book’s insights one step further in order to point to crucial challenges – and opportunities – for the future of the behavioural analysis of law. In recent years, the movement has gained tremendously in breadth and depth. It is now time to make it scientifically even more rigorous, e.g. by openly embracing empirical uncertainty and by moving beyond the neo-classical/behavioural dichotomy. Simultaneously, the field ought to discursively readjust its normative compass. Finally, and perhaps most strikingly, the power of big data holds the promise of taking behavioural interventions to an entirely new level. If these challenges can be overcome, this paper argues, the intersection between law and behavioural sciences will remain one of the most fruitful approaches to legal analysis in Europe and beyond….(More)”

As a Start to NYC Prison Reform, Jail Data Will Be Made Public


Brentin Mock at CityLab: “…In New York City, 40 percent of the jailed population are there because they couldn’t afford bail—most of them for nonviolent drug crimes. The city spends $42 million on average annually incarcerating non-felony defendants….

Wednesday, NYC Mayor Bill de Blasio signed into law legislation aimed at helping correct these bail problems, providing inmates a bill of rights for when they’re detained and addressing other problems that lead to overstuffing city jails with poor people of color.

The omnibus package of criminal justice reform bills will require the city to produce a better accounting of how many people are in city jails, what their average incarceration time is while waiting for trial, the average bail amounts imposed on defendants, and a whole host of other data points on incarceration. Under the new legislation, the city will have to release reports quarterly and semi-annually to the public—much of it from data now sheltered within the city’s Department of Corrections.
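
To give a sense of what such reporting entails, the short sketch below shows the kind of aggregation the quarterly reports imply: case counts, average pre-trial detention time, and average bail, computed over hypothetical case records. The schema is invented for illustration and is not the Department of Corrections data model.

```python
# Hypothetical sketch of the kind of aggregation the quarterly reports imply.
# The record format and field names are invented for illustration and are not
# the Department of Corrections schema.
from statistics import mean

cases = [
    {"days_detained_pretrial": 14, "bail_amount": 1500.0},
    {"days_detained_pretrial": 3, "bail_amount": 500.0},
    {"days_detained_pretrial": 45, "bail_amount": 2500.0},
]


def quarterly_summary(records):
    """Summarise pre-trial detention and bail figures for one reporting period."""
    return {
        "cases_reported": len(records),
        "avg_pretrial_days": mean(r["days_detained_pretrial"] for r in records),
        "avg_bail_amount": mean(r["bail_amount"] for r in records),
    }


# Prints the case count, the average pre-trial detention time (about 20.7 days
# for this toy sample) and the average bail amount (1500.0).
print(quarterly_summary(cases))
```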

“This is bringing sunshine to information that is already being looked at internally, but is better off being public data,” New York City council member Helen Rosenthal tells CityLab. “We can better understand what policies we need to change if we have the data to understand what’s going on in the system.”…

The city passed a package of transparency bills last month that focused on Rikers, but the legislation passed Wednesday will focus on the city’s courts and jails system as a whole….(More)”

The importance of human innovation in A.I. ethics


John C. Havens at Mashable: “….While welcoming the feedback that sensors, data and Artificial Intelligence provide, we’re at a critical inflection point. Demarcating the parameters between assistance and automation has never been more central to human well-being. But today, beauty is in the AI of the beholder. Desensitized to the value of personal data, we hemorrhage precious insights regarding our identity that define the moral nuances necessary to navigate algorithmic modernity.

If no values-based standards exist for Artificial Intelligence, then the biases of its manufacturers will define our universal code of human ethics. But this should not be their cross to bear alone. It’s time to stop vilifying the AI community and start defining in concert with their creations what the good life means surrounding our consciousness and code.

The intention of the ethics

“Begin as you mean to go forward.” Michael Stewart is founder, chairman & CEO of Lucid, an Artificial Intelligence company based in Austin that recently announced the formation of the industry’s first Ethics Advisory Panel (EAP). While Google claimed creation of a similar board when acquiring AI firm DeepMind in January 2014, no public realization of its efforts currently exists (as confirmed by a PR rep from Google for this piece). Lucid’s Panel, by comparison, has already begun functioning as a separate organization from the analytics side of the business and provides oversight for the company and its customers. “Our efforts,” Stewart says, “are guided by the principle that our ethics group is obsessed with making sure the impact of our technology is good.”

Kay Firth-Butterfield is chief officer of the EAP, and is charged with being on the vanguard of the ethical issues affecting the AI industry and society as a whole. Internally, the EAP provides the hub of ethical behavior for the company. Someone from Firth-Butterfield’s office even sits on all core product development teams. “Externally,” she notes, “we plan to apply Cyc intelligence (shorthand for ‘encyclopedia,’ Lucid’s AI causal reasoning platform) for research to demonstrate the benefits of AI and to advise Lucid’s leadership on key decisions, such as the recent signing of the LAWS letter and the end use of customer applications.”

Ensuring the impact of AI technology is positive doesn’t happen by default. But as Lucid is demonstrating, ethics doesn’t have to stymie innovation by dwelling solely in the realm of risk mitigation. Ethical processes aligning with a company’s core values can provide more deeply relevant products and increased public trust. Transparently including your customer’s values in these processes puts the person back into personalization….(Mashable)”

Understanding democracy as a product of citizen performances reduces the need for a defined ‘people’


Liron Lavi at Democratic Audit: “Dēmokratía, literally ‘the rule of the people’, is the basis for democracy as a political regime. However, ‘the people’ is a heterogeneous, open, and dynamic entity. So, how can we think about democracy without the people as a coherent entity, yet as the source of democracy? I employ a performative theorisation of democracy in order to answer this question. Democracy, I suggest, is an effect produced by repetitive performative acts and ‘the people’ is produced as the source of democratic sovereignty.

A quick search on ‘democratic performance’ will usually yield results (and concerns) regarding voter competence, government accountability, liberal values, and legitimacy. However, from the perspective of performative theory, the term gains a rather different meaning (as has been discussed at length by Judith Butler). It suggests that democracy is not a pre-given structure but rather needs to be constructed repeatedly. Thus, for a democracy to be recognised and maintained as such, it needs to be performed by citizens, institutions, office-holders, the media, etc. Acts made by these players – voting, demonstrating, decision- and law-making, etc. – give form to the abstract concept of democracy, thus producing it as their (imagined) source. There is, therefore, no finite set of actions that can determine once and for all that a social structure is indeed a democracy, for the regime is not a stable and pre-given structure, but rather produced and imagined through a multitude of acts and procedures.

Elections, for example, are a democratic performance insofar as they are perceived as an effective tool for expressing the public’s preferences and choosing its representatives and desired policies. Polling stations are therefore the site in which democracy is constituted insofar as all eligible members (can) participate in the act of voting, and therefore are constructed as the source of sovereignty. By this, elections produce democracy as their effect, as their source, and hold together the political imagination of democracy. And they do this periodically, thus opening up options for new variations (and failures) in the democratic effect they produce. Elections are therefore not only an opportunity to replace representatives and incumbents, but also an opportunity to perform democracy, shape it, alter it, and load it with various meanings….(More)”

Five principles for applying data science for social good


Jake Porway at O’Reilly: “….Every week, a data or technology company declares that it wants to “do good” and there are countless workshops hosted by major foundations musing on what “big data can do for society.” Add to that a growing number of data-for-good programs, from Data Science for Social Good’s fantastic summer program to Bayes Impact’s data science fellowships to DrivenData’s data-science-for-good competitions, and you can see how quickly this idea of “data for good” is growing.

Yes, it’s an exciting time to be exploring the ways new datasets, new techniques, and new scientists could be deployed to “make the world a better place.” We’ve already seen deep learning applied to ocean health, satellite imagery used to estimate poverty levels, and cellphone data used to elucidate Nairobi’s hidden public transportation routes. And yet, for all this excitement about the potential of this “data for good” movement, we are still desperately far from creating lasting impact. Many efforts will not only fall short of lasting impact — they will make no change at all….

So how can these well-intentioned efforts reach their full potential for real impact? Embracing the following five principles can drastically accelerate a world in which we truly use data to serve humanity.

1. “Statistics” is so much more than “percentages”

We must convey what constitutes data, what it can be used for, and why it’s valuable.

There was a packed house for the March 2015 release of the No Ceilings Full Participation Report. Hillary Clinton, Melinda Gates, and Chelsea Clinton stood on stage and lauded the report, the culmination of a year-long effort to aggregate and analyze new and existing global data, as the biggest, most comprehensive data collection effort about women and gender ever attempted. One of the most trumpeted parts of the effort was the release of the data in an open and easily accessible way.

I ran home and excitedly pulled up the data from the No Ceilings GitHub, giddy to use it for our DataKind projects. As I downloaded each file, my heart sank. The 6MB size of the entire global dataset told me what I would find inside before I even opened the first file. Like a familiar ache, the first row of the spreadsheet said it all: “USA, 2009, 84.4%.”

What I’d encountered was a common situation when it comes to data in the social sector: the prevalence of inert, aggregate data. ….

2. Finding problems can be harder than finding solutions

We must scale the process of problem discovery through deeper collaboration between the problem holders, the data holders, and the skills holders.

In the immortal words of Henry Ford, “If I’d asked people what they wanted, they would have said a faster horse.” Right now, the field of data science is in a similar position. Framing data solutions for organizations that don’t realize how much is now possible can be a frustrating search for faster horses. If data cleaning is 80% of the hard work in data science, then problem discovery makes up nearly the remaining 20% when doing data science for good.

The plague here is one of education. …

3. Communication is more important than technology

We must foster environments in which people can speak openly, honestly, and without judgment. We must be constantly curious about each other.

At the conclusion of one of our recent DataKind events, one of our partner nonprofit organizations lined up to hear the results from their volunteer team of data scientists. Everyone was all smiles — the nonprofit leaders had loved the project experience, the data scientists were excited about their results. The presentations began. “We used Amazon Redshift to store the data, which allowed us to quickly build a multinomial regression. The p-value of 0.002 shows …” Eyes glazed over. The nonprofit leaders furrowed their brows in telegraphed concentration. The jargon was standing in the way of understanding the true utility of the project’s findings. It was clear that, like so many other well-intentioned efforts, the project was at risk of gathering dust on a shelf if the team of volunteers couldn’t help the organization understand what they had learned and how it could be integrated into the organization’s ongoing work…..

4. We need diverse viewpoints

To tackle sector-wide challenges, we need a range of voices involved.

One of the most challenging aspects to making change at the sector level is the range of diverse viewpoints necessary to understand a problem in its entirety. In the business world, profit, revenue, or output can be valid metrics of success. Rarely, if ever, are metrics for social change so cleanly defined….

Challenging this paradigm requires diverse, or “collective impact,” approaches to problem solving. The idea has been around for a while (h/t Chris Diehl), but has not yet been widely implemented due to the challenges in successful collective impact. Moreover, while there are many diverse collectives committed to social change, few have the voice of expert data scientists involved. DataKind is piloting a collective impact model called DataKind Labs, which seeks to bring together diverse problem holders, data holders, and data science experts to co-create solutions that can be applied across an entire sector-wide challenge. We just launched our first project with Microsoft to increase traffic safety and are hopeful that this effort will demonstrate how vital a role data science can play in a collective impact approach.

5. We must design for people

Data is not truth, and tech is not an answer in and of itself. Without designing for the humans on the other end, our work is in vain.

So many of the data projects making headlines — a new app for finding public services, a new probabilistic model for predicting weather patterns for subsistence farmers, a visualization of government spending — are great and interesting accomplishments, but don’t seem to have an end user in mind. The current approach appears to be “get the tech geeks to hack on this problem, and we’ll have cool new solutions!” I’ve opined that, though there are many benefits to hackathons, you can’t just hack your way to social change….(More)”

Data-Driven Innovation: Big Data for Growth and Well-Being


“A new OECD report on data-driven innovation finds that countries could be getting much more out of data analytics in terms of economic and social gains if governments did more to encourage investment in “Big Data” and promote data sharing and reuse.

The migration of economic and social activities to the Internet and the advent of the Internet of Things – along with dramatically lower costs of data collection, storage and processing and rising computing power – mean that data analytics is increasingly driving innovation and is potentially an important new source of growth.

The report suggests that countries act to seize these benefits, by training more and better data scientists, reducing barriers to cross-border data flows, and encouraging investment in business processes to incorporate data analytics.

Few companies outside of the ICT sector are changing internal procedures to take advantage of data. For example, data gathered by companies’ marketing departments is not always used by other departments to drive decisions and innovation. And in particular, small and medium-sized companies face barriers to the adoption of data-related technologies such as cloud computing, partly because they have difficulty implementing organisational change due to limited resources, including the shortage of skilled personnel.

At the same time, governments will need to anticipate and address the disruptive effects of big data on the economy and overall well-being, as issues as broad as privacy, jobs, intellectual property rights, competition and taxation will be impacted. Read the Policy Brief

TABLE OF CONTENTS
Preface
Foreword
Executive summary
The phenomenon of data-driven innovation
Mapping the global data ecosystem and its points of control
How data now drive innovation
Drawing value from data as an infrastructure
Building trust for data-driven innovation
Skills and employment in a data-driven economy
Promoting data-driven scientific research
The evolution of health care in a data-rich environment
Cities as hubs for data-driven innovation
Governments leading by example with public sector data

 

Anxieties of Democracy


Debate at the Boston Review opened by Ira Katznelson: “…..Across the range of established democracies, we see skepticism bordering on cynicism about whether parliamentary governments can successfully address pressing domestic and global challenges. These doubts about representative democracy speak to both its fairness and its ability to make good policy.

Since the late eighteenth century, liberal constitutional regimes have recurrently collided with forms of autocratic rule—including fascism and communism—that claim moral superiority and greater efficacy. Today, there is no formal autocratic alternative competing with democracy for public allegiance. Instead, two other concerns characterize current debates. First, there is a sense that constitutional democratic forms, procedures, and practices are softening in the face of allegedly more authentic and more efficacious types of political participation—those that take place outside representative institutions and seem closer to the people. There is also widespread anxiety that national borders no longer define a zone of security, a place more or less safe from violent threats and insulated from rules and conditions established by transnational institutions and seemingly inexorable global processes.

These are recent anxieties. One rarely heard them voiced in liberal democracies when, in 1989, Francis Fukuyama designated the triumph of free regimes and free markets “the end of history.” Fukuyama described “the universalization of Western liberal democracy as the final form of human government,” a “victory of liberalism” in “the realm of ideas and consciousness,” even if “as yet incomplete in the real or material world.” Tellingly, the disruption of this seemingly irresistible trend has recently prompted him to ruminate on the brittleness of democratic institutions across the globe.

Perhaps today’s representative democracies—the ones that do not appear to be candidates for collapse or supersession—are merely confronting ephemeral worries. But the challenge seems starker: a profound crisis of moral legitimacy, practical capacity, and institutional sustainability….(More)”


Health Data Governance: Privacy, Monitoring and Research


OECD Publishing: “All countries are investing in health data; however, there are significant cross-country differences in data availability and use. Some countries stand out for their innovative practices enabling privacy-protective, respectful data use, while others are falling behind with insufficient data and restrictions that limit access to and use of data, even by government itself. Countries that develop a data governance framework that enables privacy-protective data use will not only have the information needed to promote quality, efficiency and performance in their health systems, they will also become a more attractive centre for medical research. After examining the current situation in OECD countries, a multi-disciplinary advisory panel of experts identified eight key data governance mechanisms to maximise benefits to patients and to societies from the collection, linkage and analysis of health data and, at the same time, to minimise risks to the privacy of patients and to the security of health data. These mechanisms include coordinated development of high-value, privacy-protective health information systems; legislation that permits privacy-protective data use; open and transparent public communication; accreditation or certification of health data processors; transparent and fair project approval processes; data de-identification and data security practices that meet legal requirements and public expectations without compromising data utility; and a process to continually assess and renew the data governance framework as new data and new risks emerge…”
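
Of the mechanisms listed, data de-identification is the most directly technical. The minimal sketch below shows one common building block, keyed pseudonymisation of direct identifiers, as an assumed illustration rather than the OECD’s prescribed approach; real health-data de-identification also has to address quasi-identifiers, key management and re-identification risk.

```python
# Illustrative sketch only: keyed pseudonymisation of a direct identifier so that
# records can still be linked across datasets without exposing patient identity.
# This is an assumption about what one de-identification step might look like,
# not the OECD's prescribed method.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-managed-key"  # hypothetical key; never hard-code in practice


def pseudonymize(patient_id: str) -> str:
    """Derive a stable pseudonym from a patient identifier using keyed hashing (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()


record = {"patient_id": "PAT-1234567", "diagnosis_code": "E11", "admission_year": 2014}
deidentified = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(deidentified)  # same record, with the direct identifier replaced by a pseudonym
```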
