Crowdsourcing Change: A Novel Vantage Point for Investigating Online Petitioning Platforms


Presentation by Shipi Dhanorkar and Mary Beth Rosson: “The internet connects people who are spatially and temporally separated. One result is new modes of reaching out to, organizing, and mobilizing people, including online activism. Internet platforms can be used to mobilize people around specific concerns, short-circuiting structures such as organizational hierarchies or elected officials. These online processes allow consumers and concerned citizens to voice their opinions, often to businesses, other times to civic groups or other authorities. Not surprisingly, this opportunity has encouraged a steady rise in specialized platforms dedicated to online petitioning, e.g., Change.org, Care2 Petitions, and MoveOn.org.

These platforms are open to everyone; any individual or group that is affected by a problem or disappointed with the status quo can raise awareness for or against corporate or government policies. Such platforms can empower ordinary citizens to bring about social change by leveraging support from the masses. In this sense, the platforms allow citizens to “crowdsource change”. In this paper, we offer a comparative analysis of the affordances of four online petitioning platforms, and use this analysis to propose ideas for design enhancements to online petitioning platforms….(More)”.

So Many Nudges, So Little Time: Can Cost-effectiveness Tell Us When It Is Worthwhile to Try to Change Provider Behavior?


Paper by David Atkins: “Interest in behavioral economics has grown steadily within health care. Policy makers, payers, and providers now recognize that the decisions of patients and of their doctors frequently deviate from the strictly “rational” choices that classical economic theory would predict. For example, patients rarely adhere to the medication regimens or health behaviors that would optimize their health outcomes, and clinicians often make decisions that conflict with evidence-based recommendations or even the practices they profess to endorse. The groundbreaking work of psychologist Daniel Kahneman and his collaborator Amos Tversky drew attention to this field, and interest accelerated with Kahneman’s 2002 Nobel Prize in economics and his popular 2011 book “Thinking, Fast and Slow,” which reached a much broader audience.

Behavioral economics examines cognitive, psychological, and cultural factors that may influence how we make decisions, resulting in behavior that behavioral economist Dan Ariely has termed “predictably irrational.” Principles from behavioral economics have been applied to health care, including the role of heuristics (rules of thumb), the importance of framing, and the effects of specific cognitive biases (for example, overconfidence and status quo bias).

These principles have been incorporated into interventions that seek to change health-related behaviors. These include nudges, in which systems are redesigned to make the preferred choice the default choice (for example, making generic versions the default in electronic prescribing); incentive programs that reward patients for taking their medications on schedule or getting preventive interventions such as immunizations; and interventions aimed at how clinicians respond to information or make decisions….(More)”.
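The generic-by-default prescribing nudge described above reduces to a small piece of choice-architecture logic: the preferred option is dispensed unless the prescriber actively opts out. A minimal sketch, with hypothetical type and field names (not any real e-prescribing API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prescription:
    drug: str
    generic_available: bool
    brand_override: Optional[str] = None  # set only if the prescriber actively opts out

def dispensed_form(rx: Prescription) -> str:
    # The nudge: the generic is the default; choosing the brand-name drug
    # requires an active opt-out rather than an active opt-in.
    if rx.brand_override:
        return rx.brand_override
    return f"generic {rx.drug}" if rx.generic_available else rx.drug

print(dispensed_form(Prescription("atorvastatin", True)))             # generic atorvastatin
print(dispensed_form(Prescription("atorvastatin", True, "Lipitor")))  # Lipitor
```

The point of the design is that both choices remain available; only the effort required to reach each one changes.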

A Skeptical View of Information Fiduciaries


Paper by Lina Khan and David Pozen: “The concept of “information fiduciaries” has surged to the forefront of debates on online platform regulation. Developed by Professor Jack Balkin, the concept is meant to rebalance the relationship between ordinary individuals and the digital companies that accumulate, analyze, and sell their personal data for profit. Just as the law imposes special duties of care, confidentiality, and loyalty on doctors, lawyers, and accountants vis-à-vis their patients and clients, Balkin argues, so too should it impose special duties on corporations such as Facebook, Google, and Twitter vis-à-vis their end users. Over the past several years, this argument has garnered remarkably broad support and essentially zero critical pushback.

This Essay seeks to disrupt the emerging consensus by identifying a number of lurking tensions and ambiguities in the theory of information fiduciaries, as well as a number of reasons to doubt the theory’s capacity to resolve them satisfactorily. Although we agree with Balkin that the harms stemming from dominant online platforms call for legal intervention, we question whether the concept of information fiduciaries is an adequate or apt response to the problems of information insecurity that he stresses, much less to more fundamental problems associated with outsized market share and business models built on pervasive surveillance. We also call attention to the potential costs of adopting an information-fiduciary framework—a framework that, we fear, invites an enervating complacency toward online platforms’ structural power and a premature abandonment of more robust visions of public regulation….(More)”.

The Technology Trap: Capital, Labor, and Power in the Age of Automation


Book by Carl Benedikt Frey: “From the Industrial Revolution to the age of artificial intelligence, The Technology Trap takes a sweeping look at the history of technological progress and how it has radically shifted the distribution of economic and political power among society’s members. As Carl Benedikt Frey shows, the Industrial Revolution created unprecedented wealth and prosperity over the long run, but the immediate consequences of mechanization were devastating for large swaths of the population. Middle-income jobs withered, wages stagnated, the labor share of income fell, profits surged, and economic inequality skyrocketed. These trends, Frey documents, broadly mirror those in our current age of automation, which began with the Computer Revolution.

Just as the Industrial Revolution eventually brought about extraordinary benefits for society, artificial intelligence systems have the potential to do the same. But Frey argues that this depends on how the short term is managed. In the nineteenth century, workers violently expressed their concerns over machines taking their jobs. The Luddite uprisings joined a long wave of machinery riots that swept across Europe and China. Today’s despairing middle class has not resorted to physical force, but its frustration has led to rising populism and the increasing fragmentation of society. As middle-class jobs continue to come under pressure, there’s no assurance that positive attitudes toward technology will persist.

The Industrial Revolution was a defining moment in history, but few grasped its enormous consequences at the time. The Technology Trap demonstrates that, in the midst of another technological revolution, the lessons of the past can help us face the present more effectively….(More)”.

How AI Can Cure the Big Idea Famine


Saahil Jayraj Dama at JoDS: “Today too many people are still deprived of basic amenities such as medicine, while current patent laws continue to convolute and impede innovation. But if allowed, AI can provide an opportunity to redefine this paradigm and be the catalyst for change—if….

Which brings us to the most befitting answer: no one owns the intellectual property rights to AI-generated creations, and these creations fall into the public domain. This may seem unpalatable at first, especially since intellectual property laws have played such a fundamental role in our society so far. We have been conditioned to the point where it seems almost unimaginable that some creations should directly enter the public domain upon their birth.

But, doctrinally, this is the only proposition that remains consistent with extant intellectual property laws. Works created by AI have no rightful owner because the application of mind to generate the creation, along with the actual generation of the creation, would be done entirely by the AI system. Human involvement is ancillary and is limited to creating an environment within which such a creation can take form.

This can be better understood through a hypothetical example: if an AI system were to invent a groundbreaking pharmaceutical ingredient that completely cures balding, the system would likely begin by understanding the problem and the state of the prior art. It would undertake research on the causes of balding, existing cures, problems with existing cures, and whether its proposed cure would have any harmful side effects. It would also possibly combine research and knowledge across various domains, which could range from Ayurveda to modern-day biochemistry, before developing its invention.

The developer can lay as much claim to this invention as the team behind AlphaGo can to the victory over Lee Sedol at Go. The user is even further detached from the exercise of ingenuity: she would be the person who first thought, “We should build a Go-playing AI system,” and then directed the AI system to learn Go by watching certain videos and playing against itself. Despite the intervention of all these entities, the fact remains that the victory belongs only to AlphaGo itself.

Doctrinal issues aside, this solution ties in with what people need from intellectual property laws: more openness and accessibility. The demands for improved access to medicines and knowledge, the fights against cultural monopolies, and the brazen violations of unjust intellectual property laws are all symptomatic of growing public discontent with strong intellectual property laws. Through AI, we can design legal systems that address these concerns and reform the heavy-handed approach that has been adopted toward intellectual property rights so far.

Tying the Threads Together

For the above to materialize, governments and legislators need to accept that our present intellectual property system is broken and inconsistent with what people want. Too many people are being deprived of basic amenities such as medicines, patent trolls and patent thickets are slowing innovation, educational material is still outside the reach of most people, and culture is not spreading as widely as it should. AI can provide an opportunity for us to redefine this paradigm—it can lead to a society that draws and benefits from an enriched public domain.

However, this approach invites skepticism because it contemplates an almost complete overhaul of the system. One could argue that if open access for AI-generated creations became the norm, innovation and creativity would suffer because people would no longer have the incentive to create. People might even refuse to use their AI systems and instead stick to producing inventions and creative works by themselves. This would be detrimental to scientific and cultural progress and would also slow the adoption of AI systems in society.

Yet, judging by the pace at which these systems have progressed so far and what they can currently do, it is easy to imagine a reality where humans developing inventions and producing creative works almost becomes an afterthought. If a machine can access all the world’s publicly available knowledge and information to develop an invention, or study a user’s likes and dislikes while producing a new musical composition, it is easy to see how humans would, eventually, be pushed out of the loop. AI-generated creations are, thus, inevitable.

The incentive theory will have to be reimagined, too. Constant innovation coupled with market forces will change the system from “incentive-to-create” to “incentive-to-create-well.” While every book, movie, song, and invention is treated on par under the law, only the best inventions and creative works will thrive under the new model. If a particular developer’s AI system can write incredible dialogue for a comedy film or invent the most efficient car engines, the market will want more of these AI systems. Thus, the incentive will not be eliminated; it will simply take a different form.

It is true that writing about such grand schemes is far easier than actually implementing them. But for any idea to succeed, it must start with a discussion such as this one. Admittedly, we are still a moonshot away from any country granting formal recognition to open access as the basis of its intellectual property laws. And even if a country were to do so, it would face numerous hurdles, such as feasibility testing and international and domestic pressure. Despite these issues, facilitating better access through AI systems remains an objective worth achieving for any society that takes pride in being democratic and equal….(More)”.

Civic Tech for Civic Engagement


Blog Post by Jason Farra: “When it came to gathering input for their new Environmental Master Plan, the Town of Okotoks, AB, decided to try something different. Rather than using more traditional methods of consulting residents, they turned to a Canadian civic tech company called Ethelo.

Ethelo’s online software “enables groups to evaluate scenarios, apply constraints, prioritize options and come up with decisions that will get broad support from the group,” says John Richardson, the company’s CEO and founder.

Okotoks gathered over 350 responses, with residents able to compare and evaluate different solutions for a variety of environmental issues, including what kind of transportation and renewable energy options they wanted to see in their town.

One of the options presented to Okotoks residents in the online engagement site for the town’s Environmental Master Plan.

“Ethelo offered a different opportunity in terms of allowing a conversation to happen online,” Marni Hutchison, Communications Specialist with the Town of Okotoks, said in a case study of the project. “We can see the general consensus as it’s forming and participants have more opportunities to see different perspectives.”

John sees this as part of a broader shift in how governments and other organizations are approaching stakeholder engagement, particularly with groups like IAP2 working to improve engagement practices by training practitioners.

Rather than simply consulting, then informing residents about decisions, civic tech startups like Ethelo allow governments to involve residents more actively in the actual decision-making process….(More)”.

What Would More Democratic A.I. Look Like?


Blog post by Andrew Burgess: “Something curious is happening in Finland. Though much of the global debate around artificial intelligence (A.I.) has become concerned with unaccountable, proprietary systems that could control our lives, the Finnish government has instead decided to embrace the opportunity by rolling out a nationwide educational campaign.

The campaign was conceived in 2017, shortly after Finland’s A.I. strategy was announced; the government wants to rebuild the country’s economy around the high-end opportunities of artificial intelligence, and has launched a national program to train 1 percent of the population — that’s 55,000 people — in the basics of A.I. “We’ll never have so much money that we will be the leader of artificial intelligence,” said economic minister Mika Lintilä at the launch. “But how we use it — that’s something different.”

Artificial intelligence can have many positive applications: it can be trained to identify cancerous cells in biopsy screenings, predict weather patterns that help farmers increase their crop yields, and improve traffic efficiency.

But some believe that A.I. expertise is currently too concentrated in the hands of just a few companies with opaque business models, meaning resources are being diverted away from projects that could be more socially, rather than commercially, beneficial. Finland’s approach of making A.I. accessible and understandable to its citizens is part of a broader movement of people who want to democratize the technology, putting utility and opportunity ahead of profit.

This shift toward “democratic A.I.” has three main principles: that all of society will be impacted by A.I., and its creators therefore have a responsibility to build open, fair, and explainable A.I. services; that A.I. should be used for social benefit and not just for private profit; and that because A.I. learns from vast quantities of data, the citizens who create that data — about their shopping habits, health records, or transport needs — have a right to a say in, and an understanding of, how it is used.

A growing movement across industry and academia believes that A.I. needs to be treated like any other “public awareness” program — just like the scheme rolled out in Finland….(More)”.

How Effective Is Nudging? A Quantitative Review on the Effect Sizes and Limits of Empirical Nudging Studies


Paper by Dennis Hummel and Alexander Maedche: “Changes in the choice architecture, so-called nudges, have been employed in a variety of contexts to alter people’s behavior. Although nudging has gained widespread popularity, the effect sizes of its influences vary considerably across studies. In addition, nudges have proven ineffective, or have even backfired, in selected studies, which raises the question of whether, and under which conditions, nudges are effective. We therefore conduct a quantitative review of nudging covering 100 primary publications with 317 effect sizes from different research areas. We derive four key results: (1) a morphological box on nudging based on eight dimensions, (2) an assessment of the effectiveness of different nudging interventions, (3) a categorization of the relative importance of the application context and the nudge category, and (4) a comparison of nudging and digital nudging. In doing so, we shed light on the (in)effectiveness of nudging and show how the findings of the past can be used for future research. Practitioners, especially government officials, can use the results to review and adjust their policymaking….(More)”.
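The effect sizes such a review aggregates are commonly standardized mean differences (Cohen's d), which can then be weighted by study size to compare interventions. A minimal sketch of that arithmetic, using entirely hypothetical study numbers rather than figures from the paper:

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference between a treatment and a control group."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def pooled_effect(effects, ns):
    """Sample-size-weighted mean effect across studies (fixed-effect style)."""
    return sum(d * n for d, n in zip(effects, ns)) / sum(ns)

# Two hypothetical nudge studies with very different observed effects:
d1 = cohens_d(0.62, 0.50, 0.30, 0.30, 120, 120)  # default-option nudge, d ≈ 0.4
d2 = cohens_d(0.55, 0.53, 0.25, 0.25, 200, 200)  # reminder nudge, near zero
print(round(pooled_effect([d1, d2], [240, 400]), 3))  # ≈ 0.2
```

Even this toy example shows why a single headline number can mislead: the pooled estimate averages over nudge categories whose individual effects differ by a factor of five, which is exactly the kind of heterogeneity the review's eight-dimension categorization is meant to expose.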

Our data, our society, our health: a vision for inclusive and transparent health data science in the UK and Beyond


Paper by Elizabeth Ford et al in Learning Health Systems: “The last six years have seen sustained investment in health data science in the UK and beyond, which should result in a data science community that is inclusive of all stakeholders, working together to use data to benefit society through the improvement of public health and wellbeing.

However, opportunities made possible through the innovative use of data are still not being fully realised, resulting in research inefficiencies and avoidable health harms. In this paper we identify the most important barriers to achieving higher productivity in health data science. We then draw on previous research, domain expertise, and theory, to outline how to go about overcoming these barriers, applying our core values of inclusivity and transparency.

We believe a step-change can be achieved through meaningful stakeholder involvement at every stage of research planning, design and execution; team-based data science; as well as harnessing novel and secure data technologies. Applying these values to health data science will safeguard a social license for health data research, and ensure transparent and secure data usage for public benefit….(More)”.

Transparency, Fairness, Data Protection, Neutrality: Data Management Challenges in the Face of New Regulation


Paper by Serge Abiteboul and Julia Stoyanovich: “The data revolution continues to transform every sector of science, industry, and government. Due to the incredible impact of data-driven technology on society, we are becoming increasingly aware of the imperative to use data and algorithms responsibly — in accordance with laws and ethical norms. In this article we discuss three recent regulatory frameworks: the European Union’s General Data Protection Regulation (GDPR), the New York City Automated Decision Systems (ADS) Law, and the Net Neutrality principle, which aim to protect the rights of individuals who are impacted by data collection and analysis. These frameworks are prominent examples of a global trend: governments are starting to recognize the need to regulate data-driven algorithmic technology.


Our goal in this paper is to bring these regulatory frameworks to the attention of the data management community and to underscore the technical challenges they raise, which we, as a community, are well-equipped to address. The main takeaway of this article is that legal and ethical norms cannot be incorporated into data-driven systems as an afterthought. Rather, we must think in terms of responsibility by design, viewing it as a systems requirement….(More)”