Why Business Schools Need to Teach Experimentation


Elizabeth R. Tenney, Elaine Costa, and Ruchi M. Watson at Harvard Business Review: “…The value of experiments in nonscientific organizations is quite high. Instead of calling in managers to solve every puzzle or dispute large and small (Should we make the background yellow or blue? Should we improve basic functionality or add new features? Are staff properly supported and incentivized to provide rapid responses?), teams can run experiments and measure outcomes of interest and, armed with new data, decide for themselves, or at least put forward a proposal grounded in relevant information. The data also provide tangible deliverables to show to stakeholders to demonstrate progress and accountability.

Experiments spur innovation. They can provide proof of concept and a degree of confidence in new ideas before taking bigger risks and scaling up. When done well, with data collected and interpreted objectively, experiments can also provide a corrective for faulty intuition, inaccurate assumptions, or overconfidence. The scientific method (which powers experiments) is the gold standard of tools to combat bias and answer questions objectively.

But as more and more companies are embracing a culture of experimentation, they face a major challenge: talent. Experiments are difficult to do well. Some challenges include special statistical knowledge, clear problem definition, and interpretation of the results. And it’s not enough to have the skillset. Experiments should ideally be done iteratively, building on prior knowledge and working toward deeper understanding of the question at hand. There are also the issues of managers’ preparedness to override their intuition when data disagree with it, and their ability to navigate hierarchy and bureaucracy to implement changes based on the experiments’ outcomes.

Some companies seem to be hiring small armies of PhDs to meet these competency challenges. (Amazon, for example, employs more than 100 PhD economists.) This isn’t surprising, given that PhDs receive years of training — and that the shrinking tenure-track market in academia has created a glut of PhDs. Other companies are developing employees in-house, training them in narrow, industry-specific methodologies. For example, General Mills recently hired for their innovator incubator group, called g-works, advertising for employees who are “using entrepreneurial skills and an experimental mindset” in what they called a “test and learn environment, with rapid experimentation to validate or invalidate assumptions.” Other companies — including Fidelity, LinkedIn, and Aetna — have hired consultants to conduct experiments, among them Irrational Labs, cofounded by Duke University’s Dan Ariely and the behavioral economist Kristen Berman….(More)”.

Retail Analytics: The Quest for Actionable Insights from Big Data on Consumer Behavior and Operational Execution


Paper by Robert P. Rooderkerk, Nicole DeHoratius, and Andres Musalem: “We document the development of academic research on retail analytics and compare it with current practice. We provide a definition of retail analytics, describe its evolution, and conduct bibliometric analyses on 123 retail analytics articles published in top operations journals in the 2000-2020 period. Our work distinguishes nine different analytical decision areas as well as types of analytics used. To document the current state of retail practice, we interviewed global practitioners, asking them to highlight their transitions from basic to advanced analytics. We conclude with a roadmap advising future research on retail analytics, including a discussion of how to better facilitate the adoption of academic work in practice….(More)”.

Selected Readings on the Use of Artificial Intelligence in the Public Sector


By Kateryna Gazaryan and Uma Kalkar

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works focuses on algorithms and artificial intelligence in the public sector.

As artificial intelligence becomes more developed, governments have turned to it to improve the speed and quality of public sector service delivery, among other objectives. Below, we provide a selection of recent literature that examines how the public sector has adopted AI to serve constituents and solve public problems. While the use of AI in government can cut costs and administrative work, these technologies are often early in development and difficult for organizations to understand and control, with potentially harmful effects as a result. As such, these selected readings explore not only the use of artificial intelligence in governance but also its benefits and consequences.

Readings are listed in alphabetical order.

Berryhill, Jamie, Kévin Kok Heang, Rob Clogher, and Keegan McBride. “Hello, World: Artificial Intelligence and Its Use in the Public Sector.” OECD Working Papers on Public Governance no. 36 (2019). https://doi.org/10.1787/726fd39d-en.

This working paper emphasizes the importance of defining AI for the public sector and outlining use cases of AI within governments. It provides a map of 50 countries that have implemented or set in motion the development of AI strategies and highlights where and how these initiatives are cross-cutting, innovative, and dynamic. Additionally, the piece provides policy recommendations governments should consider when exploring public AI strategies to adopt holistic and humanistic approaches.

Kuziemski, Maciej, and Gianluca Misuraca. “AI Governance in the Public Sector: Three Tales from the Frontiers of Automated Decision-Making in Democratic Settings.” Telecommunications Policy 44, no. 6 (2020): 101976. 

Kuziemski and Misuraca explore how the use of artificial intelligence in the public sector can exacerbate existing power imbalances between the public and the government. They consider the European Union’s artificial intelligence “governance and regulatory frameworks” and compare these policies with those of Canada, Finland, and Poland. Drawing on previous scholarship, the authors outline the goals, drivers, barriers, and risks of incorporating artificial intelligence into public services and assess existing regulations against these factors. Ultimately, they find that the “current AI policy debate is heavily skewed towards voluntary standards and self-governance” while minimizing the influence of power dynamics between governments and constituents. 

Misuraca, Gianluca, and Colin van Noordt. “AI Watch, Artificial Intelligence in Public Services: Overview of the Use and Impact of AI in Public Services in the EU.” EUR 30255, Publications Office of the European Union (2020).

This study provides “evidence-based scientific support” for the European Commission as it navigates AI regulation via an overview of ways in which European Union member-states use AI to enhance their public sector operations. While AI has the potential to positively disrupt existing policies and functionalities, this report finds gaps in how AI gets applied by governments. It suggests the need for further research centered on the humanistic, ethical, and social ramifications of AI use and for a rigorous risk assessment from a “public-value perspective” when implementing AI technologies. Additionally, efforts must be made to empower all European countries to adopt responsible and coherent AI policies and techniques.

Saldanha, Douglas Morgan Fullin, and Marcela Barbosa da Silva. “Transparency and Accountability of Government Algorithms: The Case of the Brazilian Electronic Voting System.” Cadernos EBAPE.BR 18 (2020): 697–712.

Saldanha and da Silva note that the open data and open government movements have increased citizen demand for algorithmic transparency. Governments increasingly use algorithms to speed up processes and reduce costs, but the black-box nature of these systems and their lack of explainability allow implicit and explicit bias and discrimination to enter their calculations. The authors conduct a qualitative study of the “practices and characteristics of the transparency and accountability” of the Brazilian e-voting system across seven dimensions: consciousness; access and reparations; accountability; explanation; data origin, privacy, and justice; auditing; and validation, precision, and tests. They find that the Brazilian e-voting system fulfills the need to inform citizens about the benefits and consequences of data collection and algorithm use, but falls severely short on demonstrating accountability and opening algorithmic processes to citizen oversight. They put forth policy recommendations to increase the e-voting system’s accountability to Brazilians and to strengthen auditing and oversight processes to reduce the current distrust in the system.

Sharma, Gagan Deep, Anshita Yadav, and Ritika Chopra. “Artificial intelligence and effective governance: A review, critique and research agenda.” Sustainable Futures 2 (2020): 100004.

This paper conducts a systematic review of the literature on how AI is used across different areas of government: healthcare; information, communication, and technology; environment; transportation; policy making; and the economy. Across the 74 papers surveyed, the authors find a gap in research on selecting and implementing AI technologies, as well as on their monitoring and evaluation. They call for future research to assess the impact of AI pre- and post-adoption in governance, along with the risks and challenges associated with the technology.

Tallerås, Kim, Terje Colbjørnsen, Knut Oterholm, and Håkon Larsen. “Cultural Policies, Social Missions, Algorithms and Discretion: What Should Public Service Institutions Recommend?” Lecture Notes in Computer Science (2020).

Tallerås et al. examine how the use of algorithms by public services, such as public broadcasters and libraries, influences broader society and culture. For instance, to modernize its offerings, Norway’s broadcasting corporation (NRK) has adopted online platforms similar to popular private streaming services. However, NRK’s filtering process has faced “exposure diversity” problems that narrow recommendations to already popular entertainment and push Norway’s cultural offerings toward uniformity. As a public institution, NRK is required to “fulfill […] some cultural policy goals,” raising the question of how public media services can remain relevant in the era of algorithms fed by “individualized digital culture.” Efforts are currently underway to employ recommendation systems that balance cultural diversity with personalized content relevance, engaging individuals while upholding the socio-cultural mission of public media.

Vogl, Thomas, Cathrine Seidelin, Bharath Ganesh, and Jonathan Bright. “Smart Technology and the Emergence of Algorithmic Bureaucracy: Artificial Intelligence in UK Local Authorities.” Public Administration Review 80, no. 6 (2020): 946–961.

Local governments are using “smart technologies” to create more efficient and effective public service delivery. These tools are twofold: not only do they help the public interact with local authorities, they also streamline the tasks of government officials. To better understand the digitization of local government, the authors conducted surveys, desk research, and in-depth interviews with stakeholders from local British governments to understand reasoning, processes, and experiences within a changing government framework. Vogl et al. found an increase in “algorithmic bureaucracy” at the local level to reduce administrative tasks for government employees, generate feedback loops, and use data to enhance services. While the shift toward digital local government demonstrates initiatives to utilize emerging technology for public good, further research is required to determine which demographics are not involved in the design and implementation of smart technology services and how to identify and include these audiences.

Wirtz, Bernd W., Jan C. Weyerer, and Carolin Geyer. “Artificial intelligence and the public sector—Applications and challenges.” International Journal of Public Administration 42, no. 7 (2019): 596-615.

The authors provide an extensive review of the existing literature on AI uses and challenges in the public sector to identify the gaps in current applications. The developing nature of AI in public service has led to differing definitions of what constitutes AI and of the risks and benefits it poses to the public. The authors also note the lack of focus on the downfalls of AI in governance, with studies tending to concentrate on the positive aspects of the technology. From this qualitative analysis, the researchers highlight ten AI applications: knowledge management, process automation, virtual agents, predictive analytics and data visualization, identity analytics, autonomous systems, recommendation systems, digital assistants, speech analytics, and threat intelligence. They also note four challenge dimensions: technology implementation, laws and regulation, ethics, and society. From these applications and risks, Wirtz et al. provide a “checklist for public managers” to make informed decisions on how to integrate AI into their operations.

Wirtz, Bernd W., Jan C. Weyerer, and Benjamin J. Sturm. “The dark sides of artificial intelligence: An integrated AI governance framework for public administration.” International Journal of Public Administration 43, no. 9 (2020): 818-829.

As AI is increasingly popularized and picked up by governments, Wirtz et al. highlight the lack of research on the challenges and risks—specifically, privacy and security—associated with implementing AI systems in the public sector. After assessing existing literature and uncovering gaps in the main governance frameworks, the authors outline the three areas of challenges of public AI: law and regulations, society, and ethics. Last, they propose an “integrated AI governance framework” that takes into account the risks of AI for a more holistic “big picture” approach to AI in the public sector.

Zuiderwijk, Anneke, Yu-Che Chen, and Fadi Salem. “Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda.” Government Information Quarterly (2021): 101577.

Following a literature review on the risks and possibilities of AI in the public sector, Zuiderwijk, Chen, and Salem design a research agenda centered around the “implications of the use of AI for public governance.” The authors provide eight process recommendations, including: avoiding superficial buzzwords in research; conducting domain- and locality-specific research on AI in governance; shifting from qualitative analysis to diverse research methods; applying private sector “practice-driven research” to public sector study; furthering quantitative research on AI use by governments; creating “explanatory research designs”; sharing data for broader study; and adopting multidisciplinary reference theories. Further, they note the need for scholarship to delve into best practices, risk management, stakeholder communication, multisector use, and impact assessments of AI in the public sector to help decision-makers make informed decisions on the introduction, implementation, and oversight of AI in the public sector.

Platform as a Rule Maker: Evidence from Airbnb’s Cancellation Policies


Paper by Jian Jia, Ginger Zhe Jin & Liad Wagman: “Digital platforms are not only match-making intermediaries but also establish internal rules that govern all users in their ecosystems. To better understand the governing role of platforms, we study two Airbnb pro-guest rules that pertain to guest and host cancellations, using data on Airbnb and VRBO listings in 10 US cities. We demonstrate that such pro-guest rules can drive demand and supply to and from the platform, as a function of the local platform competition between Airbnb and VRBO. Our results suggest that platform competition sometimes dampens a platform-wide pro-guest rule and sometimes reinforces it, often with heterogeneous effects on different hosts. This implies that platform competition does not necessarily mitigate a platform’s incentive to treat the two sides asymmetrically, and any public policy in platform competition must consider its implications for all sides….(More)”.

Privacy Tech’s Third Generation


“A Review of the Emerging Privacy Tech Sector” by Privacy Tech Alliance and Future of Privacy Forum: “As we enter the third phase of development of the privacy tech market, purchasers are demanding more integrated solutions, product offerings are more comprehensive, and startup valuations are higher than ever, according to a new report from the Future of Privacy Forum and Privacy Tech Alliance. These factors are leading to companies providing a wider range of services, acting as risk management platforms, and focusing on support of business outcomes.

According to the report, “Privacy Tech’s Third Generation: A Review of the Emerging Privacy Tech Sector,” regulations are often the biggest driver for buyers’ initial privacy tech purchases. Organizations also are deploying tools to mitigate potential harms from the use of data. However, buyers serving global markets increasingly need privacy tech that offers data availability and control and supports its utility, in addition to regulatory compliance. 

The report finds the COVID-19 pandemic has accelerated global marketplace adoption of privacy tech as dependence on digital technologies grows. Privacy is becoming a competitive differentiator in some sectors, and TechCrunch reports that 200+ privacy startups have together raised more than $3.5 billion over hundreds of individual rounds of funding….(More)”.

Social-Tech Entrepreneurs: Building Blocks of a New Social Economy


Article by Mario Calderini, Veronica Chiodo, Francesco Gerli & Giulio Pasi: “Is it possible to create a sustainable, human-centric, resilient economy that achieves diverse objectives—including growth, inclusion, and equity? Could industry provide prosperity beyond jobs and economic growth, by adopting societal well-being as a compass to inform the production of goods and services?

The policy brief “Industry 5.0,” recently released by the European Commission, suggests the answer is yes. It makes the case for conceiving of economic growth as a means to inclusive prosperity. It is also an invitation to rethink the role of industry in society and to reprioritize policy targets and tools.

The following reflection, based on insights gathered from empirical research, is a first attempt to elaborate on how we might achieve this rethinking, and aims to contribute to the social economy debate in Europe and beyond.

A New Entrepreneurial Genre

A new entrepreneurial genre forged by the values of social entrepreneurship and fueled by technological opportunities is emerging, and it is well-poised to mend the economic and social wounds inflicted by both COVID-19 and the unexpected consequences of the early knowledge economy—an economy built around ideas and intellectual capital, and driven by diffused creativity, technology, and innovation.

We believe this genre, which we call social-tech entrepreneurship, is important to inaugurating a new generation of place-based, innovation-driven development policies inspired by a more inclusive idea of growth—though under the condition that industrial and innovation policies include it in their frame of reference.

This is partly because social innovation has undergone a complex transformation in recent years. It has seen a hybridization of social and commercial objectives and, as a direct consequence, new forms of management that support organizational missions that blend the two. Today, a more recent trend, reinforced by the pandemic, might push this transformation further: the idea that technologies—particularly those commoditized in the digital and software domains—offer a unique opportunity to solve societal challenges at scale.

Social-tech entrepreneurship differs from the work of high-tech companies in that, as researchers Geoffrey Desa and Suresh Kotha explain, it specifically aims to “develop and deploy technology-driven solutions to address social needs.” A social-tech entrepreneur also leverages technology not just to make parts of their operations more efficient, but to prompt a disruptive change in the way a specific social problem is addressed—and in a way that safeguards economic sustainability. In other words, they attempt to satisfy a social need through technological innovation in a financially sustainable manner. …(More)”.

Dark patterns, the tricks websites use to make you say yes, explained


Article by Sara Morrison: “If you’re an Instagram user, you may have recently seen a pop-up asking if you want the service to “use your app and website activity” to “provide a better ads experience.” At the bottom there are two boxes: In a slightly darker shade of black than the pop-up background, you can choose to “Make ads less personalized.” A bright blue box urges users to “Make ads more personalized.”

This is an example of a dark pattern: design that manipulates or heavily influences users to make certain choices. Instagram uses terms like “activity” and “personalized” instead of “tracking” and “targeting,” so the user may not realize what they’re actually giving the app permission to do. Most people don’t want Instagram and its parent company, Facebook, to know everything they do and everywhere they go. But a “better experience” sounds like a good thing, so Instagram makes the option it wants users to select more prominent and attractive than the one it hopes they’ll avoid.

There’s now a growing movement to ban dark patterns, and that may well lead to consumer protection laws and action as the Biden administration’s technology policies and initiatives take shape. California is currently tackling dark patterns in its evolving privacy laws, and Washington state’s latest privacy bill includes a provision about dark patterns.

“When you look at the way dark patterns are employed across digital engagement, generally, [the internet allows them to be] substantially exacerbated and made less visible to consumers,” Rebecca Kelly Slaughter, acting chair of the Federal Trade Commission (FTC), told Recode. “Understanding the effect of that is really important to us as we craft our strategy for the digital economy.”

Dark patterns have for years been tricking internet users into giving up their data, money, and time. But if some advocates and regulators get their way, they may not be able to do that for much longer…(More)”.

Mine!: How the Hidden Rules of Ownership Control Our Lives


Book by Michael Heller and James Salzman: “A hidden set of rules governs who owns what–explaining everything from whether you can recline your airplane seat to why HBO lets you borrow a password illegally–and in this lively and entertaining guide, two acclaimed law professors reveal how things become “mine.”

“Mine” is one of the first words babies learn. By the time we grow up, the idea of ownership seems natural, whether buying a cup of coffee or a house. But who controls the space behind your airplane seat: you, reclining, or the squished laptop user behind you? Why is plagiarism wrong, but it’s okay to knock off a recipe or a dress design? And after a snowstorm, why does a chair in the street hold your parking space in Chicago, but in New York you lose the space and the chair?

Mine! explains these puzzles and many more. Surprisingly, there are just six simple stories that everyone uses to claim everything. Owners choose the story that steers us to do what they want. But we can always pick a different story. This is true not just for airplane seats, but also for battles over digital privacy, climate change, and wealth inequality. As Michael Heller and James Salzman show–in the spirited style of Freakonomics, Nudge, and Predictably Irrational–ownership is always up for grabs.

With stories that are eye-opening, mind-bending, and sometimes infuriating, Mine! reveals the rules of ownership that secretly control our lives….(More)”.

Collaboration technology has been invaluable during the pandemic


TechRepublic: “The pandemic forced the enterprise to quickly pivot from familiar business practices and develop ways to successfully function while keeping employees safe. A new report from Zoom, The Impact of Video Communications During COVID-19, was released Thursday.

“Video communications were suddenly our lifeline to society, enabling us to continue work and school in a digital environment,” said Brendan Ittelson, chief technology officer of Zoom, on the company’s blog. “Any baby steps toward digital transformation suddenly had to become leaps and bounds, with people reimagining their entire day-to-day practically overnight.”

Zoom commissioned the Boston Consulting Group (BCG) to conduct a survey and economic analysis evaluating the economic impact of remote work and video communications solutions during the pandemic. The analysis focused on which industries pivoted business processes using video conferencing, achieving business continuity and even growth during a time of significant economic turmoil.

Key findings

  • In the U.S., the ability to work remotely saved 2.28 million jobs. Up to three times as many employees worked remotely, with a nearly threefold increase in the use of video conferencing solutions.
  • Among the businesses surveyed, total time spent on video conferencing solutions increased to as much as five times pre-pandemic levels.
  • BCG’s COVID-19 employee sentiment survey from 2020 showed that 70% of managers are more open to flexible remote working models than they were before the pandemic.
  • Hybrid working models will be the norm soon. The businesses surveyed expect more than a third of employees to work remotely beyond the pandemic.
  • The U.K. saved 550,000 jobs because of remote capabilities; Germany saved 372,000 jobs and France saved 250,000….(More)”.

The New Tech Tools in Data Sharing


Essay by Massimo Russo and Tian Feng: “…Cloud providers are integrating data-sharing capabilities into their product suites and investing in R&D that addresses new features such as data directories, trusted execution environments, and homomorphic encryption. They are also partnering with industry-specific ecosystem orchestrators to provide joint solutions.

Cloud providers are moving beyond infrastructure to enable broader data sharing. In 2018, for example, Microsoft teamed up with Oracle and SAP to kick off its Open Data Initiative, which focuses on interoperability among the three large platforms. Microsoft has also begun an Open Data Campaign to close the data divide and help smaller organizations get access to data needed for innovation in artificial intelligence (AI). Amazon Web Services (AWS) has begun a number of projects designed to promote open data, including the AWS Data Exchange and the Open Data Sponsorship Program. In addition to these large providers, specialty technology companies and startups are likewise investing in solutions that further data sharing.

Technology solutions today generally fall into three categories: mitigating risks, enhancing value, and reducing friction. The following is a noncomprehensive list of solutions in each category.

1. Mitigating the Risks of Data Sharing

Potential financial, competitive, and brand risks associated with data disclosure inhibit data sharing. To address these risks, data platforms are embedding solutions to control use, limit data access, encrypt data, and create substitute or synthetic data. (See slide 2 in the slideshow.)

Data Breaches. Here are some of the technological solutions designed to prevent data breaches and unauthorized access to sensitive or private data:

  • Data modification techniques alter individual data elements or full data sets while maintaining data integrity. They provide increasing levels of protection, but at a cost: loss of granularity in the underlying data. De-identification and masking strip personally identifiable information and use encryption, allowing most of the data’s value to be preserved. More complex encryption can increase security, but it also removes resolution of information from the data set.
  • Secure data storage and transfer can help ensure that data stays safe both at rest and in transit. Cloud solutions such as Microsoft Azure and AWS have invested in significant platform security and interoperability.
  • Distributed ledger technologies, such as blockchain, permit data to be stored and shared in a decentralized manner that makes it very difficult to tamper with. IOTA, for example, is a distributed ledger platform for IoT applications supported by industry players such as Bosch and Software AG.
  • Secure computation enables analysis without revealing details of the underlying data. This can be done at a software level, with techniques such as secure multiparty computation (MPC) that allow potentially untrusting parties to jointly compute a function without revealing their private inputs. For example, with MPC, two parties can calculate the intersection of their respective encrypted data sets while revealing only information about the intersection. Google, for one, is embedding MPC in its open-source Private Join and Compute tools.
  • Trusted execution environments (TEEs) are hardware modules separate from the operating system that allow for secure data processing within an encrypted private area on the chip. Startup Decentriq is partnering with Intel and Microsoft to explore confidential computing by means of TEEs. There is a significant opportunity for IoT equipment providers to integrate TEEs into their products….(More)”
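The secure-computation bullet above can be made concrete with a toy sketch of Diffie-Hellman-style private set intersection, the flavor of protocol behind tools like Private Join and Compute. This is a minimal illustration, not a faithful rendering of any production protocol: the modulus, the email addresses, and the function names are all illustrative assumptions, and the parameters are nowhere near cryptographically sound. The point is only to show how commutative blinding lets two parties compare sets without exposing non-matching members.

```python
import hashlib
import secrets

# Toy parameter: a large prime modulus (2**127 - 1 is a Mersenne prime).
# Real deployments use carefully chosen elliptic-curve groups instead.
P = 2**127 - 1

def hash_to_group(item: str) -> int:
    """Map an item deterministically into the multiplicative group mod P."""
    digest = hashlib.sha256(item.encode()).hexdigest()
    return int(digest, 16) % P

def blind(items, secret):
    """Raise each hashed item to a private exponent: H(x)^secret mod P."""
    return {pow(hash_to_group(x), secret, P) for x in items}

def reblind(blinded_values, secret):
    """Apply a second private exponent to already-blinded values."""
    return {pow(v, secret, P) for v in blinded_values}

# Each party picks a private exponent it never shares.
a = secrets.randbelow(P - 3) + 2
b = secrets.randbelow(P - 3) + 2

alice_set = {"alice@x.com", "bob@x.com", "carol@x.com"}
bob_set = {"bob@x.com", "carol@x.com", "dave@x.com"}

# Each side blinds its own data, exchanges it, and the peer re-blinds.
# Because exponentiation commutes, H(x)^(a*b) == H(x)^(b*a), so doubly
# blinded values match exactly when the underlying items match.
alice_double = reblind(blind(alice_set, a), b)
bob_double = reblind(blind(bob_set, b), a)

# Only the matches are learned; non-matching items stay hidden.
intersection_size = len(alice_double & bob_double)
print(intersection_size)  # expected: 2 (bob@ and carol@ overlap)
```

The design choice worth noticing is that neither party ever sees the other's raw or singly-hashed items; each only sees values blinded by an exponent it does not know, which is what confines the information leak to the intersection itself.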