The Challenge for Business and Society: From Risk to Reward

Book by Stanley Litow that seeks to provide “A roadmap to improve corporate social responsibility”:  “The 2016 U.S. Presidential Campaign focused a good deal of attention on the role of corporations in society, from both sides of the aisle. In the lead-up to the election, big companies were accused of profiteering, plundering the environment, and ignoring (even exacerbating) societal ills ranging from illiteracy and discrimination to obesity and opioid addiction. Income inequality was laid squarely at the feet of U.S. companies. The Trump administration then moved swiftly to scrap fiscal, social, and environmental rules that purportedly hobble business, to redirect or shut down cabinet offices historically protecting the public good, and to roll back clean power, consumer protection, living wage, and healthy eating initiatives, and even basic public funding for public schools. To many eyes, and through the lens of history, this may usher in a new era of cowboy capitalism with big companies, unfettered by regulation and encouraged by the presidential bully pulpit, free to go about the business of making money—no matter the consequences to consumers and the commonwealth. While this may please some companies in the short term, the long-term consequences might result in just the opposite.

And while the new administration promises to reduce “foreign aid” and the social safety net, Stanley S. Litow believes big companies will be motivated to step up their efforts to create jobs, reduce poverty, improve education and health, and address climate change—both domestically and around the world. For some leaders in the private sector this is not a matter of public relations or charity. It is integral to their corporate strategy—creating new markets, reducing risks, attracting and retaining top talent, and generating growth and opportunity. Through case studies (many of which the author spearheaded at IBM), The Challenge for Business and Society provides clear guidance for companies to build their own corporate sustainability and social responsibility plans, positively affecting their bottom lines and producing real returns on their investments….(More)”.

Is ‘Innovocracy’ Hurting the Public Sector?

Katherine Barrett & Richard Greene at Governing: “Tom Shack has an unusual background for his job. Most state comptrollers have worked exclusively in finance and accounting. Before becoming comptroller of Massachusetts in 2015, Shack went to law school, taught college courses in entrepreneurship, and became an assistant district attorney.

When Shack took over the comptroller’s office, which has administrative and audit oversight over every state government agency, he quickly recognized an issue that he’s been trying to address ever since.

“One of the things I noticed right away, when working for government, is that there’s complete risk-aversion,” he says. “You have to be able to manage risk and avoid it when necessary.”

That tendency, he says, holds back innovation and progress. He and his staff have even made up a word for the phenomenon: “innovocracy, which in our vernacular means that though the government has come up with great innovative ideas, it then imposes a bureaucratic framework, thus crushing any entrepreneurial spirit associated with the idea.”

Take technology. Shack is frustrated by the typical approach to investing in new government IT….(More)”.

Conversations Gone Awry: Detecting Early Signs of Conversational Failure

Paper by Justine Zhang et al: “One of the main challenges online social systems face is the prevalence of antisocial behavior, such as harassment and personal attacks. In this work, we introduce the task of predicting from the very start of a conversation whether it will get out of hand. As opposed to detecting undesirable behavior after the fact, this task aims to enable early, actionable prediction at a time when the conversation might still be salvaged.
To this end, we develop a framework for capturing pragmatic devices—such as politeness strategies and rhetorical prompts—used to start a conversation, and analyze their relation to its future trajectory. Applying this framework in a controlled setting, we demonstrate the feasibility of detecting early warning signs of antisocial behavior in online discussions…. (More)”.
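The paper's actual model is not reproduced here, but the general approach it describes, extracting politeness cues from a conversation's opening comment and scoring them, can be sketched in a few lines of Python. The cue lists, weights, and scoring function below are illustrative assumptions for exposition, not the features or coefficients from the paper:

```python
import math
import re

# Illustrative politeness and risk cues, loosely inspired by work on
# politeness strategies. The word lists and weights are assumptions,
# not the authors' actual feature set.
POLITE_CUES = {
    "gratitude": re.compile(r"\b(thanks|thank you|appreciate)\b", re.I),
    "greeting": re.compile(r"\b(hi|hello|hey)\b", re.I),
    "please": re.compile(r"\bplease\b", re.I),
    "hedge": re.compile(r"\b(perhaps|maybe|seems|I think)\b", re.I),
}
RISK_CUES = {
    "direct_question": re.compile(r"\bwhy (do|did|would) you\b", re.I),
    "second_person_start": re.compile(r"^\s*you\b", re.I),
}

def derailment_risk(opening_comment: str) -> float:
    """Toy score in [0, 1]; higher means more likely to get out of hand."""
    polite = sum(bool(p.search(opening_comment)) for p in POLITE_CUES.values())
    risky = sum(bool(p.search(opening_comment)) for p in RISK_CUES.values())
    # Logistic squash of (risk minus politeness); coefficients are arbitrary.
    return 1 / (1 + math.exp(-(1.5 * risky - 1.0 * polite)))

civil = "Hi, thanks for the edit! Perhaps we could discuss the sourcing?"
tense = "Why did you revert my change? You clearly didn't read it."
print(derailment_risk(civil), derailment_risk(tense))
```

In a real system the hand-picked cues and weights would be replaced by features learned from labeled conversations, but the pipeline shape, cue extraction at the start of a conversation followed by a calibrated score, is the same.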

Tech Platforms and the Knowledge Problem

Frank Pasquale at American Affairs: “Friedrich von Hayek, the preeminent theorist of laissez-faire, called the “knowledge problem” an insuperable barrier to central planning. Knowledge about the price of supplies and labor, and consumers’ ability and willingness to pay, is so scattered and protean that even the wisest authorities cannot access all of it. No person knows everything about how goods and services in an economy should be priced. No central decision-maker can grasp the idiosyncratic preferences, values, and purchasing power of millions of individuals. That kind of knowledge, Hayek said, is distributed.

In an era of artificial intelligence and mass surveillance, however, the possibility of central planning has reemerged—this time in the form of massive firms. Having logged and analyzed billions of transactions, Amazon knows intimate details about all its customers and suppliers. It can carefully calibrate screen displays to herd buyers toward certain products or shopping practices, or to copy sellers with its own, cheaper, in-house offerings. Mark Zuckerberg aspires to omniscience of consumer desires, by profiling nearly everyone on Facebook, Instagram, and WhatsApp, and then leveraging that data trove to track users across the web and into the real world (via mobile usage and device fingerprinting). You don’t even have to use any of those apps to end up in Facebook/Instagram/WhatsApp files—profiles can be assigned to you. Google’s “database of intentions” is legendary, and antitrust authorities around the world have looked with increasing alarm at its ability to squeeze out rivals from search results once it gains an interest in their lines of business. Google knows not merely what consumers are searching for, but also what other businesses are searching, buying, emailing, planning—a truly unparalleled matching of data-processing capacity to raw communication flows.

Nor is this logic limited to the online context. Concentration is paying dividends for the largest banks (widely assumed to be too big to fail), and major health insurers (now squeezing and expanding the medical supply chain like an accordion). Like the digital giants, these finance and insurance firms not only act as middlemen, taking a cut of transactions, but also aspire to capitalize on the knowledge they have gained from monitoring customers and providers in order to supplant them and directly provide services and investment. If it succeeds, the CVS-Aetna merger betokens intense corporate consolidations that will see more vertical integration of insurers, providers, and a baroque series of middlemen (from pharmaceutical benefit managers to group purchasing organizations) into gargantuan health providers. A CVS doctor may eventually refer a patient to a CVS hospital for a CVS surgery, to be followed up by home health care workers employed by CVS who bring CVS pharmaceuticals—all covered by a CVS/Aetna insurance plan, which might penalize the patient for using any providers outside the CVS network. While such a panoptic firm may sound dystopian, it is a logical outgrowth of health services researchers’ enthusiasm for “integrated delivery systems,” which are supposed to provide “care coordination” and “wraparound services” more efficiently than America’s current, fragmented health care system.

The rise of powerful intermediaries like search engines and insurers may seem like the next logical step in the development of capitalism. But a growing chorus of critics questions the size and scope of leading firms in these fields. The Institute for Local Self-Reliance highlights Amazon’s manipulation of both law and contracts to accumulate unfair advantages. International antitrust authorities have taken Google down a peg, questioning the company’s aggressive use of its search engine and Android operating system to promote its own services (and demote rivals). They also question why Google and Facebook have for years been acquiring companies at a pace of more than two per month. Consumer advocates complain about manipulative advertising. Finance scholars lambaste megabanks for taking advantage of the implicit subsidies that too-big-to-fail status confers….(More)”.

CrowdLaw Manifesto

At the Rockefeller Foundation Bellagio Center this spring, participants met to discuss CrowdLaw: how to use technology to improve the quality and effectiveness of law and policymaking through greater public engagement. We drafted and signed 12 principles to promote the use of CrowdLaw by local legislatures and national parliaments, calling for legislatures, technologists and the public to participate in creating more open and participatory lawmaking practices. We invite you to sign the Manifesto using the form below.

Draft dated May 29, 2018

  1. To improve public trust in democratic institutions, we must improve how we govern in the 21st century.
  2. CrowdLaw is any law, policy-making or public decision-making that offers a meaningful opportunity for the public to participate in one or multiple stages of decision-making, including but not limited to the processes of problem identification, solution identification, proposal drafting, ratification, implementation or evaluation.
  3. CrowdLaw draws on innovative processes and technologies and encompasses diverse forms of engagement among elected representatives, public officials, and those they represent.
  4. When designed well, CrowdLaw may help governing institutions obtain more relevant facts and knowledge as well as more diverse perspectives, opinions and ideas to inform governing at each stage and may help the public exercise political will.
  5. When designed well, CrowdLaw may help democratic institutions build trust and help the public play a more active role in their communities, strengthening both active citizenship and democratic culture.
  6. When designed well, CrowdLaw may enable engagement that is thoughtful, inclusive, informed but also efficient, manageable and sustainable.
  7. Therefore, governing institutions at every level should experiment and iterate with CrowdLaw initiatives, creating formal processes for diverse members of society to participate, in order to improve the legitimacy of decision-making, strengthen public trust and produce better outcomes.
  8. Governing institutions at every level should encourage research and learning about CrowdLaw and its impact on individuals, on institutions and on society.
  9. The public also has a responsibility to improve our democracy by demanding and creating opportunities to engage and then actively contributing expertise, experience, data and opinions.
  10. Technologists should work collaboratively across disciplines to develop, evaluate and iterate varied, ethical and secure CrowdLaw platforms and tools, keeping in mind that different participation mechanisms will achieve different goals.
  11. Governing institutions at every level should encourage collaboration across organizations and sectors to test what works and share good practices.
  12. Governing institutions at every level should create the legal and regulatory frameworks necessary to promote CrowdLaw and better forms of public engagement and usher in a new era of more open, participatory and effective governing.

The CrowdLaw Manifesto has been signed by the following individuals and organizations:


  • Victoria Alsina, Senior Fellow at The GovLab and Faculty Associate at Harvard Kennedy School, Harvard University
  • Marta Poblet Balcell, Associate Professor, RMIT University
  • Robert Bjarnason — President & Co-founder, Citizens Foundation; Better Reykjavik
  • Pablo Collada — Former Executive Director, Fundación Ciudadano Inteligente
  • Mukelani Dimba — Co-chair, Open Government Partnership
  • Hélène Landemore, Associate Professor of Political Science, Yale University
  • Shu-Yang Lin, re:architect & co-founder,
  • José Luis Martí, Vice-Rector for Innovation and Professor of Legal Philosophy, Pompeu Fabra University
  • Jessica Musila — Executive Director, Mzalendo
  • Sabine Romon — Chief Smart City Officer — General Secretariat, Paris City Council
  • Cristiano Ferri Faría — Director, Hacker Lab, Brazilian House of Representatives
  • Nicola Forster — President and Founder, Swiss Forum on Foreign Policy
  • Raffaele Lillo — Chief Data Officer, Digital Transformation Team, Government of Italy
  • Tarik Nesh-Nash — CEO & Co-founder, GovRight; Ashoka Fellow
  • Beth Simone Noveck, Director, The GovLab and Professor at New York University Tandon School of Engineering
  • Ehud Shapiro, Professor of Computer Science and Biology, Weizmann Institute of Science


  • Citizens Foundation, Iceland
  • Fundación Ciudadano Inteligente, Chile
  • International School for Transparency, South Africa
  • Mzalendo, Kenya
  • Smart Cities, Paris City Council, Paris, France
  • Hacker Lab, Brazilian House of Representatives, Brazil
  • Swiss Forum on Foreign Policy, Switzerland
  • Digital Transformation Team, Government of Italy, Italy
  • The Governance Lab, New York, United States
  • GovRight, Morocco
  • ICT4Dev, Morocco

Randomistas: How Radical Researchers Are Changing Our World

Book by Andrew Leigh: “Experiments have consistently been used in the hard sciences, but in recent decades social scientists have adopted the practice. Randomized trials have been used to design policies to increase educational attainment, lower crime rates, elevate employment rates, and improve living standards among the poor.

This book tells the stories of radical researchers who have used experiments to overturn conventional wisdom. From finding the cure for scurvy to discovering what policies really improve literacy rates, Leigh shows how randomistas have shaped life as we know it. Written in a “Gladwell-esque” style, this book provides a fascinating account of key randomized control trial studies from across the globe and the challenges that randomistas have faced in getting their studies accepted and their findings implemented. In telling these stories, Leigh draws out key lessons learned and shows the most effective way to conduct these trials….(More)”.
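For readers unfamiliar with the mechanics behind the stories, the core logic of a randomized trial, random assignment followed by a difference-in-means comparison, is simple enough to simulate. The program, baseline scores, and the +5-point true effect below are invented purely for illustration:

```python
import random
import statistics

random.seed(42)

# Simulate a randomized trial of a hypothetical literacy program.
# Baseline scores ~ N(60, 10) and a true effect of +5 points are
# made-up numbers, chosen only to show the estimation logic.
TRUE_EFFECT = 5.0
participants = [random.gauss(60, 10) for _ in range(2000)]

treatment, control = [], []
for baseline in participants:
    # Random assignment is what makes the comparison causal: the two
    # groups differ only by chance at baseline, so any systematic
    # difference in outcomes can be attributed to the program.
    if random.random() < 0.5:
        treatment.append(baseline + TRUE_EFFECT)
    else:
        control.append(baseline)

estimate = statistics.mean(treatment) - statistics.mean(control)
print(f"Estimated effect: {estimate:.1f} points (true effect: {TRUE_EFFECT})")
```

With 2,000 simulated participants the difference in means lands close to the true effect; real trials add inference (standard errors, pre-registration, attrition checks), but the comparison itself is this simple.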

How the Math Men Overthrew the Mad Men

Article in the New Yorker: “Once, Mad Men ruled advertising. They’ve now been eclipsed by Math Men—the engineers and data scientists whose province is machines, algorithms, pureed data, and artificial intelligence. Yet Math Men are beleaguered, as Mark Zuckerberg demonstrated when he humbled himself before Congress, in April. Math Men’s adoration of data—coupled with their truculence and an arrogant conviction that their “science” is nearly flawless—has aroused government anger, much as Microsoft did two decades ago.

The power of Math Men is awesome. Google and Facebook each has a market value exceeding the combined value of the six largest advertising and marketing holding companies. Together, they claim six out of every ten dollars spent on digital advertising, and nine out of ten new digital ad dollars. They have become more dominant in what is estimated to be an up to two-trillion-dollar annual global advertising and marketing business. Facebook alone generates more ad dollars than all of America’s newspapers, and Google has twice the ad revenues of Facebook.

In the advertising world, Big Data is the Holy Grail, because it enables marketers to target messages to individuals rather than general groups, creating what’s called addressable advertising. And only the digital giants possess state-of-the-art Big Data. “The game is no longer about sending you a mail order catalogue or even about targeting online advertising,” Shoshana Zuboff, a professor of business administration at the Harvard Business School, wrote in 2016. “The game is selling access to the real-time flow of your daily life—your reality—in order to directly influence and modify your behavior for profit.” Success at this “game” flows to those with the “ability to predict the future—specifically the future of behavior,” Zuboff writes. She dubs this “surveillance capitalism.”

However, to thrash just Facebook and Google is to miss the larger truth: everyone in advertising strives to eliminate risk by perfecting targeting data. Protecting privacy is not foremost among the concerns of marketers; protecting and expanding their business is. The business model adopted by ad agencies and their clients parallels Facebook and Google’s. Each aims to massage data to better identify potential customers. Each aims to influence consumer behavior. To appreciate how alike their aims are, sit in an agency or client marketing meeting and you will hear wails about Facebook and Google’s “walled garden,” their unwillingness to share data on their users. When Facebook or Google counter that they must protect “the privacy” of their users, advertisers cry foul: You’re using the data to target ads we paid for—why won’t you share it, so that we can use it in other ad campaigns?…(More)”

AI trust and AI fears: A media debate that could divide society

Article by Vyacheslav Polonski: “Unless you live under a rock, you probably have been inundated with recent news on machine learning and artificial intelligence (AI). With all the recent breakthroughs, it almost seems like AI can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Of course, many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.

If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place….

Many people are also simply not familiar with many instances of AI actually working, because it often happens in the background. Instead, they are acutely aware of instances where AI goes terribly wrong….

These unfortunate examples have received a disproportionate amount of media attention, emphasising the message that humans cannot always rely on technology. In the end, it all goes back to the simple truth that machine learning is not foolproof, in part because the humans who design it aren’t….

Fortunately we already have some ideas about how to improve trust in AI — there’s light at the end of the tunnel.

  1. Experience: One solution may be to provide more hands-on experiences with automation apps and other AI applications in everyday situations (like this robot that can get you a beer from the fridge). Thus, instead of presenting Sony’s new robot dog Aibo as an exclusive product for the upper-class, we’d recommend making these kinds of innovations more accessible to the masses. Simply having previous experience with AI can significantly improve people’s attitudes towards the technology, as we found in our experimental study. And this is especially important for the general public that may not have a very sophisticated understanding of the technology. Similar evidence also suggests that the more you use other technologies such as the Internet, the more you trust them.
  2. Insight: Another solution may be to open the “black-box” of machine learning algorithms and be slightly more transparent about how they work. Companies such as Google, Airbnb, and Twitter already release transparency reports on a regular basis. These reports provide information about government requests and surveillance disclosures. A similar practice for AI systems could help people have a better understanding of how algorithmic decisions are made. Therefore, providing people with a top-level understanding of machine learning systems could go a long way towards alleviating algorithmic aversion.
  3. Control: Lastly, creating more of a collaborative decision-making process will help build trust and allow the AI to learn from human experience. In our work at Avantgarde Analytics, we have also found that involving people more in the AI decision-making process could improve trust and transparency. In a similar vein, a group of researchers at the University of Pennsylvania recently found that giving people control over algorithms can help create more trust in AI predictions. Volunteers in their study who were given the freedom to slightly modify an algorithm felt more satisfied with it, more likely to believe it was superior and more likely to use it in the future.

These guidelines (experience, insight and control) could help make AI systems more transparent and comprehensible to the individuals affected by their decisions….(More)”.
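The "control" idea in particular, letting users nudge an algorithm's output within bounds as in the University of Pennsylvania study, reduces to a simple clamp. The function below is a minimal sketch; the ±10% adjustment band is an assumption chosen for illustration, not a figure from the study:

```python
def adjustable_prediction(model_prediction: float,
                          user_adjustment: float,
                          max_adjustment_pct: float = 0.10) -> float:
    """Let a user shift a model's prediction, but only within a bounded
    band (here +/-10% of the prediction, an illustrative choice).
    Bounding the adjustment preserves most of the model's accuracy
    while still giving the user a sense of control."""
    bound = abs(model_prediction) * max_adjustment_pct
    clamped = max(-bound, min(bound, user_adjustment))
    return model_prediction + clamped

# A hypothetical model forecasts 200 units of demand; the user wants to
# lower it by 50, but only a 20-unit reduction (10%) is allowed through.
print(adjustable_prediction(200.0, -50.0))  # → 180.0
```

The design choice worth noting is the bound itself: unconstrained overrides let users discard the model entirely, while a modest band was enough, in the study's telling, to raise satisfaction and continued use.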

On Dimensions of Citizenship

Introduction by Niall Atkinson, Ann Lui, and Mimi Zeiger to a Special Exhibit and dedicated set of Essays: “We begin by defining citizenship as a cluster of rights, responsibilities, and attachments, and by positing their link to the built environment. Of course architectural examples of this affiliation—formal articulations of inclusion and exclusion—can seem limited and rote. The US-Mexico border wall (“The Wall,” to use common parlance) dominates the cultural imagination. As an architecture of estrangement, especially when expressed as monolithic prototypes staked in the San Diego-Tijuana landscape, the border wall privileges the rhetorical security of nationhood above all other definitions of citizenship—over the individuals, ecologies, economies, and communities in the region. And yet, as political theorist Wendy Brown points out, The Wall, like its many counterparts globally, is inherently fraught as both a physical infrastructure and a nationalist myth, ultimately racked by its own contradictions and paradoxes.

Calling border walls across the world “an ad hoc global landscape of flows and barriers,” Brown writes of the paradoxes that riddle any effort to distinguish the nation as a singular, cohesive form: “[O]ne irony of late modern walling is that a structure taken to mark and enforce an inside/outside distinction—a boundary between ‘us’ and ‘them’ and between friend and enemy—appears precisely the opposite when grasped as part of a complex of eroding lines between the police and the military, subject and patria, vigilante and state, law and lawlessness.”1 While 2018 is a moment when ideologies are most vociferously cast in binary rhetoric, the lived experience of citizenship today is rhizomic, overlapping, and distributed. A person may belong and feel rights and responsibilities to a neighborhood, a voting district, remain a part of an immigrant diaspora even after moving away from their home country, or find affiliation on an online platform. In 2017, Blizzard Entertainment, the maker of World of Warcraft, reported a user community of 46 million people across their international server network. Thus, today it is increasingly possible to simultaneously occupy multiple spaces of citizenship independent from the delineation of a formal boundary.

Conflict often makes visible emergent spaces of citizenship, as highlighted by recent acts both legislative and grassroots. Gendered bathrooms act as renewed sites of civil rights debate. Airports illustrate the thresholds of national control enacted by the recent Muslim bans. Such clashes uncover old scar tissue, violent histories and geographies of spaces. The advance of the Keystone XL pipeline across South Dakota, for example, brought the fight for indigenous sovereignty to the fore.

If citizenship itself designates a kind of border and the networks that traverse and ultimately elude such borders, then what kind of architecture might Dimensions of Citizenship offer in lieu of The Wall? What designed object, building, or space might speak to the heart of what and how it means to belong today? The participants in the United States Pavilion offer several of the clear and vital alternatives deemed so necessary by Samuel R. Delany: The Cobblestone. The Space Station. The Watershed.

Dimensions of Citizenship argues that citizenship is indissociable from the built environment, which is exactly why that relationship can be the source for generating or supporting new forms of belonging. These new forms may be more mutable and ephemeral, but no less meaningful and even, perhaps, ultimately more equitable. Through commissioned projects, and through film, video artworks, and responsive texts, Dimensions of Citizenship exhibits the ways that architects, landscape architects, designers, artists, and writers explore the changing form of citizenship: the different dimensions it can assume (legal, social, emotional) and the different dimensions (both actual and virtual) in which citizenship takes place. The works are valuably enigmatic, wide-ranging, even elusive in their interpretations, which is what contemporary conditions seem to demand. More often than not, the spaces of citizenship under investigation here are marked by histories of inequality and the violence imposed on people, non-human actors, ecologies. Our exhibition features spaces and individuals that aim to manifest the democratic ideals of inclusion against the grain of broader systems: new forms of “sharing economy” platforms, the legacies of the Underground Railroad, tenuous cross-national alliances at the border region, or the seemingly Sisyphean task of buttressing coastline topologies against the rising tides….(More)”.

Inclusive Innovation in Biohacker Spaces: The Role of Systems and Networks

Paper by Jeremy de Beer and Vipal Jain in Technology Innovation Management Review: “The biohacking movement is changing who can innovate in biotechnology. Driven by principles of inclusivity and open science, the biohacking movement encourages sharing and transparency of data, ideas, and resources. As a result, innovation is now happening outside of traditional research labs, in unconventional spaces – do-it-yourself (DIY) biology labs known as “biohacker spaces”. Labelled like “maker spaces” (which contain the fabrication, metal/woodworking, additive manufacturing/3D printing, digitization, and related tools that “makers” use to tinker with hardware and software), biohacker spaces are attracting a growing number of entrepreneurs, students, scientists, and members of the public.

A biohacker space is a space where people with an interest in biotechnology gather to tinker with biological materials. These spaces, such as Genspace in New York, Biotown in Ottawa, and La Paillasse in Paris, exist outside of traditional academic and research labs with the aim of democratizing and advancing science by providing shared access to tools and resources (Scheifele & Burkett, 2016).

Biohacker spaces hold great potential for promoting innovation. Numerous innovative projects have emerged from these spaces. For example, biohackers have developed cheaper tools and equipment (Crook, 2011; see also Bancroft, 2016). They are also working to develop low-cost medicines for conditions such as diabetes (Ossolo, 2015). There is a general, often unspoken assumption that the openness of biohacker spaces facilitates greater participation in biotechnology research, and therefore, more inclusive innovation. In this article, we explore that assumption using the inclusive innovation framework developed by Schillo and Robinson (2017).

Inclusive innovation requires that opportunities for participation are broadly available to all and that the benefits of innovation are broadly shared by all (CSLS, 2016). In Schillo and Robinson’s framework, there are four dimensions along which innovation may be inclusive:

  1. The people involved in innovation (who)
  2. The type of innovation activities (what)
  3. The range of outcomes to be captured (why)
  4. The governance mechanism of innovation (how)…(More)”.