If China valued free speech, there would be no coronavirus crisis


Verna Yu in The Guardian: “…Despite the flourishing of social media, information is more tightly controlled in China than ever. In 2013, an internal Communist party edict known as Document No 9 ordered cadres to tackle seven supposedly subversive influences on society. These included western-inspired notions of press freedom, “universal values” of human rights, civil rights and civic participation. Even within the Communist party, cadres are threatened with disciplinary action for expressing opinions that differ from the leadership.

Compared with 17 years ago, Chinese citizens enjoy even fewer rights of speech and expression. On 30 December, 34-year-old doctor Li Wenliang posted a note in his medical school alumni social media group stating that seven workers from a local live-animal market had been diagnosed with an illness similar to Sars and were quarantined in his hospital; a few days later, he was summoned by police. He was made to sign a humiliating statement saying he understood that if he “stayed stubborn and failed to repent and continued illegal activities, (he) will be disciplined by the law”….

Unless Chinese citizens’ freedom of speech and other basic rights are respected, such crises will happen again. In a more globalised world, their magnitude may become even greater – the death toll from the coronavirus outbreak is already comparable to the total Sars death toll.

Human rights in China may appear to have little to do with the rest of the world, but as this crisis has shown, disaster can strike when China thwarts the freedoms of its citizens. Surely it is time the international community took this issue more seriously….(More)”.

Re-thinking Public Innovation, Beyond Innovation in Government


Jocelyne Bourgon at Dubai Policy Review: “The situation faced by public servants and public sector leaders today may not be more challenging in absolute terms than in previous generations, but it is certainly different. The problems societies face today stem from a world characterised by increasing complexity, hyper-connectivity and a high level of uncertainty. In this context, the public sector’s role in developing innovative solutions is critical. Despite the need for public innovation, public servants (when asked to discuss the challenges they face in New Synthesis labs and workshops) tend to present a narrow perspective, rarely going beyond the boundary of their respective units. While recent public sector reforms have encouraged a drive for efficiency and productivity, they have also generated a narrow and sometimes distorted view of the scale of the role of government in society.

Ideas and principles matter. The way one thinks has a direct impact on the solutions that will be found and the results that will be achieved. Innovation in government has received much attention over the years. For the most part, the focus has been introspective, giving special attention to the modernisation of public sector systems and practices as well as the service delivery functions of government. Because these conversations focus on innovation in government, they may have missed the most important contributions of government to public innovation….

I define public innovation as “innovative solutions serving a public purpose that require the use of public means”. What distinguishes public innovation from social innovation is the intimate link to government actions and the use of instruments of the State. From this perspective, far from being risk averse, the State is the ultimate risk taker in society. Government takes risks on a scale that no other sector or agent in society could take on and intervenes in areas where the forces of the market or the capacity of civil society would be unable to go. This broader perspective reveals some of the distinctive characteristics of public innovation….(More)”

Astroturfing Is Bad But It's Not the Whole Problem


Beth Noveck at NextGov: “In November 2019, Securities and Exchange Commission Chairman Jay Clayton boasted that draft regulations requiring proxy advisors to run their recommendations past the companies they are evaluating before giving that advice to their clients received dozens of letters of support from ordinary Americans. But the letters he cited turned out to be fakes, sent by corporate advocacy groups and signed with the names of people who never saw the comments or who do not exist at all.

When interest groups manufacture the appearance that comments come from the “ordinary public,” it’s known as astroturfing. The practice is the subject of today’s House Committee on Financial Services Subcommittee on Oversight and Investigations hearing, entitled “Fake It till They Make It: How Bad Actors Use Astroturfing to Manipulate Regulators, Disenfranchise Consumers, and Subvert the Rulemaking Process.” 

Of course, commissioners who cherry-pick from among the public comments, looking for information that proves them right, should be called out, and it is tempting to use the occasion to embarrass those who do, especially when they are from the other party. But focusing on astroturfing distracts attention from the more salient and urgent problem: the failure to obtain the best possible evidence by creating effective opportunities for public participation in federal rulemaking. 

Thousands of federal regulations are enacted every year that touch every aspect of our lives, and under the 1946 Administrative Procedure Act, the public has a right to participate.

Participation in rulemaking advances both the legitimacy and the quality of regulations by enabling agencies—and the congressional committees that oversee them—to obtain information from a wider audience of stakeholders, interest groups, businesses, nonprofits, academics and interested individuals. Participation also provides a check on the rulemaking process, helping to ensure public scrutiny.

But the shift over the last two decades to a digital process, where people submit comments via regulations.gov, has made commenting easier, yet it has also inadvertently opened the floodgates to voluminous, duplicative and, yes, even “fake” comments, making it harder for agencies to extract the information needed to inform the rulemaking process.

Although many agencies receive only a handful of comments, some receive voluminous responses, thanks to this ease of digital commenting. In 2017, when the Federal Communications Commission sought to repeal an earlier Obama-era rule requiring internet service providers to observe net neutrality, the agency received 22 million comments in response. 

There is a remedy. Tools have evolved to make quick work of large data stores….(More)”. See also https://congress.crowd.law/
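The column doesn’t name specific tools, but one building block common to comment-analysis systems is near-duplicate detection. Below is a minimal, hypothetical sketch (the sample comments and threshold are invented for illustration) that flags likely form-letter duplicates by comparing word-shingle sets with Jaccard similarity; production tools would add clustering, topic modelling, and metadata checks, and would replace the pairwise loop with MinHash/LSH to cope with millions of comments.

```python
# A minimal sketch of flagging near-duplicate public comments.
# All data and thresholds here are hypothetical.
import re

def shingles(text, k=3):
    """Set of k-word shingles from a lightly normalized comment."""
    words = re.sub(r"[^\w\s]", "", text.lower()).split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_duplicates(comments, threshold=0.8):
    """Return index pairs of comments whose similarity exceeds threshold.
    O(n^2) pairwise scan; at regulations.gov scale you'd use MinHash/LSH."""
    sigs = [shingles(c) for c in comments]
    return [(i, j)
            for i in range(len(sigs))
            for j in range(i + 1, len(sigs))
            if jaccard(sigs[i], sigs[j]) >= threshold]

# Two form-letter variants and one original comment (invented examples).
comments = [
    "I strongly support the proposed rule on proxy advisors.",
    "I strongly support the proposed rule on proxy advisors!",
    "This rule would hurt small investors and should be withdrawn.",
]
print(flag_duplicates(comments))  # -> [(0, 1)]
```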

Why It’s So Hard for Users to Control Their Data


Bhaskar Chakravorti at the Harvard Business Review: “A recent IBM study found that 81% of consumers say they have become more concerned about how their data is used online. But most users continue to hand over their data online and tick consent boxes impatiently, giving rise to a “privacy paradox,” where users’ concerns aren’t reflected in their behaviors. It’s a daunting challenge for regulators and companies alike to navigate the future of data governance.

In my view, we’re missing a system that defines and grants users “digital agency” — the ability to own the rights to their personal data, manage access to this data and, potentially, be compensated fairly for such access. This would make data similar to other forms of personal property: a home, a bank account or even a mobile phone number. But before we can imagine such a state, we need to examine three central questions: Why don’t users care enough to take actions that match their concerns? What are the possible solutions? Why is this so difficult?

Why don’t users’ actions match their concerns?

To start, data is intangible. We don’t actively hand it over; it accrues as a byproduct of our online activity, which makes it easy to ignore or forget about. A lot of data harvesting is invisible to consumers — they see only the results, in marketing offers, free services, customized feeds, tailored ads, and beyond.

Second, even if users wanted to negotiate more data agency, they have little leverage. Normally, in well-functioning markets, customers can choose from a range of competing providers. But this is not the case if the service is a widely used digital platform. For many, leaving a platform like Facebook would come at a high cost in time and effort, with no equivalent alternative offering connections to the same people. Plus, many people use their Facebook logins on numerous apps and services. On top of that, Facebook has bought up many of its natural alternatives, like Instagram. It is equally hard to switch away from other major platforms, like Google or Amazon, without a lot of personal effort.

Third, while a majority of American users believe more regulation is needed, they are not as enthusiastic about broad regulatory solutions being imposed. Instead, they would prefer to have better data management tools at their disposal. However, managing one’s own data would be complex – and that would deter users from embracing such an option….(More)”.

Change of heart: how algorithms could revolutionise organ donations


Tej Kohli at TheNewEconomy: “Artificial intelligence (AI) and biotechnology are both on an exponential growth trajectory, with the potential to improve how we experience our lives and even to extend life itself. But few have considered how these two frontier technologies could be brought together symbiotically to tackle global health and environmental challenges…

For example, combination technologies could tackle a global health issue such as organ donation. According to the World Health Organisation, around 100,800 solid organ transplants were performed worldwide each year as of 2008. Yet, in the US, there are nearly 113,000 people waiting for a life-saving organ transplant, while thousands of good organs are discarded each year. For years, those in need of a kidney transplant had limited options: they either had to find a willing and biologically viable living donor, or wait for a viable deceased donor to show up in their local hospital.

But with enough patients and willing donors, big data and AI make it possible to facilitate far more matches than this one-to-one system allows, through a system of paired kidney donation. A patient can now enlist a willing donor who is not a biological match and still receive a kidney, because AI can match donors to recipients across a massive array of patient-donor relationships. In fact, a single person who steps forward to donate a kidney – to a loved one or even to a stranger – can set off a domino effect that saves dozens of lives by resolving the missing link in a long chain of pairings….
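To make the “domino effect” concrete, here is a minimal, hypothetical sketch of a greedy donation chain. It reduces compatibility to ABO blood type alone, a deliberate simplification; real kidney-exchange programs also score tissue type, antibodies, and logistics, and use integer programming to find optimal sets of cycles and chains rather than a single greedy pass.

```python
# A minimal sketch of a paired-donation chain started by one altruistic donor.
# Compatibility is reduced to ABO blood type; everything here is illustrative.
from collections import namedtuple

Pair = namedtuple("Pair", ["patient_blood", "donor_blood"])

# Simplified ABO rules: which donor types each patient can accept.
CAN_RECEIVE_FROM = {
    "O":  {"O"},
    "A":  {"O", "A"},
    "B":  {"O", "B"},
    "AB": {"O", "A", "B", "AB"},
}

def compatible(donor_blood, patient_blood):
    """True if a donor of this ABO type can give to this patient."""
    return donor_blood in CAN_RECEIVE_FROM[patient_blood]

def greedy_chain(altruistic_donor_blood, pairs):
    """Build one donation chain from an altruistic donor.
    Each matched pair's own incompatible donor then donates onward,
    and the chain ends when no remaining pair can be served."""
    chain, current_donor = [], altruistic_donor_blood
    remaining = list(range(len(pairs)))
    while True:
        nxt = next((i for i in remaining
                    if compatible(current_donor, pairs[i].patient_blood)), None)
        if nxt is None:
            return chain
        chain.append(nxt)
        remaining.remove(nxt)
        current_donor = pairs[nxt].donor_blood  # this pair's donor pays it forward

# Hypothetical pool: (patient blood type, their willing-but-incompatible donor).
pool = [Pair("A", "B"), Pair("B", "A"), Pair("O", "A"), Pair("AB", "O")]
print(greedy_chain("O", pool))  # -> [0, 1, 3, 2]: one donor, four transplants
```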

The moral and ethical implications of today’s frontier technologies are far-reaching. Fundamental questions have not been adequately addressed. How will algorithms weigh the needs of poor and wealthy patients? Should a donor organ be sent to a distant patient – potentially one in a different country – with a low rejection risk or to a nearby patient whose rejection risk is only slightly higher?

These are important questions, but I believe we should get combination technologies up and working, and then decide on the appropriate controls. The matching power of AI means that eight lives could be saved by just one deceased organ donor; innovations in biotechnology could ensure that organs are never wasted. The faster these technologies advance, the more lives we can save…(More)”.

Do you trust your fellow citizens more than your leaders?


Domhnall O’Sullivan at swissinfo.ch: “Voting up to four times a year, as the Swiss do, is a nice democratic right, but it also means keeping up with a lot of topics.

Usually this means following the media, talking to family and friends, watching what political parties and campaigners are saying, and wading through information sent out by authorities before vote day.

Last week, ahead of the national ballot on February 9, some 21,000 voters in the town of Sion got something new in the post: an information sheet, drafted by a group of 20 randomly selected locals, giving a citizen’s take on what’s at stake.

The document, written by the citizen panel over two weekends last November, is the first output of ‘demoscan’: a project aiming to spur participation in a country where turnout rates are low and electoral issues sometimes complex.

On the front side, the issue (a proposed increase in the building of social housing) is presented in eight key points, listed in order of perceived importance; on the back, there are three arguments for and three arguments against the proposal.

At first reading, it is not clear how much more digestible this information is than what is sent out by federal authorities, aside from the fact that, unlike the government’s package, it makes no recommendation on how to vote. (Official materials include the position of parliament and government on each issue.)

Demoscan project leader Nenad Stojanović says, however, that the main added value is that the document presents a “filtering” and “prioritising” of information – ultimately giving an overview of the most pertinent points as seen through the eyes of 20 “normal” citizens.

He also reckons that the process was as important as the output.

By selecting the participants randomly and representatively, the project included social groups not normally involved in the political debate, he says. Four days of research and deliberation were like a “democracy school”, teaching them about the functioning of previously distant institutions….(More)”.
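The article doesn’t detail how demoscan drew its 20 panellists, but a common civic-lottery technique is stratified random sampling, sketched below with hypothetical strata and an invented registry.

```python
# A minimal sketch of drawing a small representative panel by stratified
# random sampling. Strata, registry, and sizes are all hypothetical; real
# civic lotteries often stratify on age, gender, education, and district.
import random
from collections import defaultdict

def draw_panel(registry, panel_size, strata_key):
    """Sample panel_size people, giving each stratum seats in proportion
    to its share of the registry (rounding means the result can be off
    by a seat or two; real designs correct for this)."""
    groups = defaultdict(list)
    for person in registry:
        groups[strata_key(person)].append(person)

    panel, total = [], len(registry)
    for members in groups.values():
        seats = round(panel_size * len(members) / total)
        panel.extend(random.sample(members, min(seats, len(members))))
    return panel[:panel_size]

# Invented registry of 21,000 voters: (name, age band, gender).
registry = [(f"voter{i}",
             random.choice(["18-34", "35-54", "55+"]),
             random.choice(["f", "m"]))
            for i in range(21_000)]

panel = draw_panel(registry, 20, strata_key=lambda p: (p[1], p[2]))
print(len(panel), panel[:3])
```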

Imagining the Next Decade of Behavioral Science


Evan Nesterak at the Behavioral Scientist: “If you had asked Richard Thaler in 2010 what he thought would become of the then very new field of behavioral science over the next decade, he would have been wrong, at least for the most part. Could he have predicted the expansion of behavioral economics research? Probably. The Nobel Prize? Maybe. The nearly 300 and counting behavioral teams in governments, businesses, and other organizations around the world? Not a chance. 

When we asked him a year and a half ago to sum up the 10 years since the publication of Nudge, he replied, “Am I too old to just say OMG? … [Cass Sunstein and I] would never have anticipated one “nudge unit” much less 200….Every once in a while, one of us will send the other an email that amounts to just ‘wow.’”

As we closed last year (and the last decade), we put out a call to help us imagine the next decade of behavioral science. We asked you to share your hopes and fears, predictions and warnings, open questions and big ideas. 

We received over 120 submissions from behavioral scientists around the world. We picked the most thought-provoking submissions and curated them below.

We’ve organized the responses into three sections. The first section, Promises and Pitfalls, houses the responses about the field as a whole—its identity, purpose, and values. In that section, you’ll find authors challenging the field to be bolder. You’ll also find ideas to unite the field, which in its growth has felt for some like the “Wild West.” Ethical concerns are also top of mind. “Behavioral science has confronted ethical dilemmas before … but never before has the essence of the field been so squarely in the wheelhouse of corporate interests,” writes Phillip Goff.

In the second section, we’ve placed the ideas about specific domains. This includes “Technology: Nightmare or New Norm,” where Tania Ramos considers the possibility of a behaviorally optimized tech dystopia. In “The Future of Work,” Laszlo Bock imagines that well-timed, intelligent nudges will foster healthier company cultures, and Jon Jachimowicz emphasizes the importance of passion in an economy increasingly dominated by A.I. In “Climate Change: Targeting Individuals and Systems,” behavioral scientists grapple with how the field can pull its weight in this existential fight. You’ll also find sections on building better governments, health care at the digital frontier and final mile, and the next steps for education. 

The third and final section gets the most specific of all. Here you’ll find commentary on the opportunities (and obligations) for research and application. For instance, George Loewenstein suggests we pay more attention to attention—an increasingly scarce resource. Others, on the application side, ponder how behavioral science will influence the design of our neighborhoods and wonder what it will take to bring behavioral science into the courtroom. The section closes with ideas on the future of intervention design and ways we can continue to master our methods….(More)”.

Artificial Morality


Essay by Bruce Sterling: “This is an essay about lists of moral principles for the creators of Artificial Intelligence. I collect these lists, and I have to confess that I find them funny.

Nobody but AI mavens would ever tiptoe up to the notion of creating godlike cyber-entities that are much smarter than people. I hasten to assure you — I take that weird threat seriously. If we could wipe out the planet with nuclear physics back in the late 1940s, there must be plenty of other, novel ways to get that done.

What I find comical is a programmer’s approach to morality — the urge to carefully type out some moral code before raising unholy hell. Many professions other than programming have stern ethical standards: lawyers and doctors, for instance. Are lawyers and doctors evil? It depends. If a government is politically corrupt, then a nation’s lawyers don’t escape that moral stain. If a health system is slaughtering the population with mis-prescribed painkillers, then doctors can’t look good, either.

So if AI goes south, for whatever reason, programmers are just bound to look culpable and sinister. Careful lists of moral principles will not avert that moral judgment, no matter how many earnest efforts they make to avoid bias, to build and test for safety, to provide feedback for user accountability, to design for privacy, to consider the colossal abuse potential, and to eschew any direct involvement in AI weapons, AI spyware, and AI violations of human rights.

I’m not upset by moral judgments in well-intentioned manifestos, but it is an odd act of otherworldly hubris. Imagine if car engineers claimed they could build cars fit for all genders and races, test cars for safety-first, mirror-plate the car windows and the license plates for privacy, and make sure that military tanks and halftracks and James Bond’s Aston-Martin spy-mobile were never built at all. Who would put up with that presumptuous behavior? Not a soul, not even programmers.

In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads — that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed.

I’m not a cynic about morality per se, but everybody’s got some. The military, the spies, the secret police and organized crime, they all have their moral codes. Military AI flourishes worldwide, and so does cyberwar AI, and AI police-state repression….(More)”.

Incentive Competitions and the Challenge of Space Exploration


Article by Matthew S. Williams: “Bill Joy, the famed computer engineer who co-founded Sun Microsystems in 1982, once said, “No matter who you are, most of the smartest people work for someone else.” This has come to be known as “Joy’s Law” and is one of the inspirations for concepts such as “crowdsourcing”.

Increasingly, government agencies, research institutions, and private companies are looking to the power of the crowd to find solutions to problems. Challenges are created and prizes offered – that, in basic terms, is an “incentive competition.”

The basic idea of an incentive competition is pretty straightforward. When confronted with a particularly daunting problem, you appeal to the general public to provide possible solutions and offer a reward for the best one. Sounds simple, doesn’t it?

But in fact, this concept flies in the face of conventional problem-solving, which is for companies to recruit people with knowledge and expertise and solve all problems in-house. This kind of thinking underlies most of our government and business models, but has some significant limitations….

Another benefit of crowdsourcing is the way it takes advantage of the exponential growth in human population over the past few centuries. Between 1650 and 1800, the global population doubled, reaching about 1 billion. It took another 127 years (until 1927) before it doubled again to reach 2 billion.

However, it took only forty-seven years for the population to double again and reach 4 billion (1974), and another twenty-five for it to reach 6 billion (1999). As of 2020, the global population has reached 7.8 billion, and the growth trend is expected to continue for some time.
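As a back-of-envelope check on these intervals (a constant-growth model, so only an approximation, since real growth rates varied), the doubling-time formula links each interval to an implied annual growth rate:

```latex
% Constant exponential growth: P(t) = P_0 e^{rt}, doubling time t_d = ln(2)/r.
\[
r = \frac{\ln 2}{t_d}
  \approx \frac{0.693}{150\ \text{yr}} \approx 0.46\%\ \text{per year (1650--1800)},
\qquad
r \approx \frac{0.693}{47\ \text{yr}} \approx 1.5\%\ \text{per year (1927--1974)}
\]
```

In other words, the shrinking doubling times mean the growth rate itself roughly tripled between those two periods.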

This growth has paralleled another trend, the rapid development of new ideas in science and technology. Between 1650 and 2020, humanity has experienced multiple technological revolutions, in what is a comparatively very short space of time….(More)”.

Shining light into the dark spaces of chat apps


Sharon Moshavi at Columbia Journalism Review: “News has migrated from print to the web to social platforms to mobile. Now, at the dawn of a new decade, it is heading to a place that presents a whole new set of challenges: the private, hidden spaces of instant messaging apps.  

WhatsApp, Facebook Messenger, Telegram, and their ilk are platforms that journalists cannot ignore — even in the US, where chat-app usage is low. “I believe a privacy-focused communications platform will become even more important than today’s open platforms,” Mark Zuckerberg, Facebook’s CEO, wrote in March 2019. By 2022, three billion people will be using them on a regular basis, according to Statista

But fewer journalists worldwide are using these platforms to disseminate news than they were two years ago, as ICFJ discovered in its 2019 “State of Technology in Global Newsrooms” survey. That’s a particularly dangerous trend during an election year, because messaging apps are potential minefields of misinformation. 

American journalists should take stock of recent elections in India and Brazil, ahead of which misinformation flooded WhatsApp. ICFJ’s “TruthBuzz” projects found coordinated and widespread disinformation efforts using text, videos, and photos on that platform.  

This is particularly troubling given that more people now use these apps as a primary source of information. In Brazil, one in four internet users consults WhatsApp weekly as a news source. A recent report from New York University’s Center for Business and Human Rights warned that WhatsApp “could become a troubling source of false content in the US, as it has been during elections in Brazil and India.” It’s imperative that news media figure out how to map the contours of these opaque, unruly spaces, and deliver fact-based news to those who congregate there….(More)”.