8 Strategies for Chief Data Officers to Create — and Demonstrate — Value


Article by Thomas H. Davenport, Richard Y. Wang, and Priyanka Tiwari: “The chief data officer (CDO) role was only established in 2002, but it has grown enormously since then. In one recent survey of large companies, 83% reported having a CDO. This isn’t surprising: Data and approaches to understanding it (analytics and AI) are incredibly important in contemporary organizations. What is eyebrow-raising, however, is that the CDO job is terribly ill-defined. Sixty-two percent of CDOs surveyed in the research we describe below reported that the CDO role is poorly understood, and incumbents of the job have often met with diffuse expectations and short tenures. There is a clear need for CDOs to focus on adding visible value to their organizations.

Part of the problem is that traditional data management approaches are unlikely to provide visible value in themselves. Many nontechnical executives don’t really understand the CDO’s work and struggle to recognize when it’s being done well. CDOs are often asked to focus on preventing data problems (defense-oriented initiatives) and on data management projects such as improving data architectures, data governance, and data quality. But data will never be perfect, meaning executives will always be somewhat frustrated with their organization’s data situation. And while improvements in data management are difficult to recognize or measure, major problems such as hacks, breaches, lost or inaccessible data, and poor quality are highly visible.

So how can CDOs demonstrate that they’re creating value?…(More)”

The Market Power of Technology: Understanding the Second Gilded Age


Book by Mordecai Kurz: “Since the 1980s, the United States has regressed to a level of economic inequality not seen since the Gilded Age in the late nineteenth century. At the same time, technological innovation has transformed society, and a core priority of public policy has been promoting innovation. What is the relationship between economic inequality and technological change?

Mordecai Kurz develops a comprehensive integrated theory of the dynamics of market power and income inequality. He shows that technological innovations are not simply sources of growth and progress: they sow the seeds of market power. In a free market economy with intellectual property rights, firms’ control over technology enables them to expand, attain monopoly power, and earn exorbitant profits. Competition among innovators does not eliminate market power because technological competition is different from standard competition; it results in only one or two winners. Kurz provides a pioneering analysis grounded in quantifying technological market power and its effects on inequality, innovation, and economic growth. He outlines what causes market power to rise and fall and details its macroeconomic and distributional consequences.

Kurz demonstrates that technological market power tends to rise, increasing inequality of income and wealth. Unchecked inequality threatens the foundations of democracy: public policy is the only counterbalancing force that can restrain corporate power, attain more egalitarian distribution of wealth, and make democracy compatible with capitalism. Presenting a new paradigm for understanding today’s vast inequalities, this book offers detailed proposals to redress them by restricting corporate mergers and acquisitions, reforming patent law, improving the balance of power in the labor market, increasing taxation, promoting upward mobility, and stabilizing the middle class…(More)”.

Digital Oil


Book by Eric Monteiro: “Digitalization sits at the forefront of public and academic conversation today, calling into question how we work and how we know. In Digital Oil, Eric Monteiro uses the Norwegian offshore oil and gas industry as a lens to investigate the effects of digitalization on embodied labor and, in doing so, shows how our use of new digital technology transforms work and knowing.

For years, roughnecks have performed the dangerous and unwieldy work of extracting the oil that lies three miles below the seabed along the Norwegian continental shelf. Today, the Norwegian oil industry is largely digital, operated by sensors and driven by data. Digital representations of physical processes inform work practices and decision-making with remotely operated, unmanned deep-sea facilities. Drawing on two decades of in-depth interviews, observations, news clips, and studies of this industry, Monteiro dismantles the divide between the virtual and the physical in Digital Oil.

What is gained or lost when objects and processes become algorithmic phenomena with the digital inferred from the physical? How can data-driven work practices and operational decision-making approximate qualitative interpretation, professional judgement, and evaluation? How are emergent digital platforms and infrastructures, as machineries of knowing, enabling digitalization? In answering these questions Monteiro offers a novel analysis of digitalization as an effort to press the limits of quantification of the qualitative…(More)”.

Data Cartels: The Companies That Control and Monopolize Our Information


Book by Sarah Lamdan: “In our digital world, data is power. Information hoarding businesses reign supreme, using intimidation, aggression, and force to maintain influence and control. Sarah Lamdan brings us into the unregulated underworld of these “data cartels”, demonstrating how the entities mining, commodifying, and selling our data and informational resources perpetuate social inequalities and threaten the democratic sharing of knowledge.

Just a few companies dominate most of our critical informational resources. Often self-identifying as “data analytics” or “business solutions” operations, they supply the digital lifeblood that flows through the circulatory system of the internet. With their control over data, they can prevent the free flow of information, masterfully exploiting outdated information and privacy laws and curating online information in a way that amplifies digital racism and targets marginalized communities. They can also distribute private information to predatory entities. Alarmingly, everything they’re doing is perfectly legal.

In this book, Lamdan contends that privatization and tech exceptionalism have prevented us from creating effective legal regulation. This in turn has allowed oversized information oligopolies to coalesce. In addition to specific legal and market-based solutions, Lamdan calls for treating information like a public good and creating digital infrastructure that supports our democratic ideals….(More)”.

The Exploited Labor Behind Artificial Intelligence


Essay by Adrienne Williams, Milagros Miceli, and Timnit Gebru: “The public’s understanding of artificial intelligence (AI) is largely shaped by pop culture — by blockbuster movies like “The Terminator” and their doomsday scenarios of machines going rogue and destroying humanity. This kind of AI narrative is also what grabs the attention of news outlets: a Google engineer claiming that its chatbot was sentient was among the most discussed AI-related news in recent months, even reaching Stephen Colbert’s millions of viewers. But the idea of superintelligent machines with their own agency and decision-making power is not only far from reality — it distracts us from the real risks to human lives surrounding the development and deployment of AI systems. While the public is distracted by the specter of nonexistent sentient machines, an army of precarized workers stands behind the supposed accomplishments of artificial intelligence systems today.

Many of these systems are developed by multinational corporations located in Silicon Valley, which have been consolidating power at a scale that, journalist Gideon Lewis-Kraus notes, is likely unprecedented in human history. They are striving to create autonomous systems that can one day perform all of the tasks that people can do and more, without the required salaries, benefits or other costs associated with employing humans. While this utopia of corporate executives is far from reality, the march to attempt its realization has created a global underclass, performing what anthropologist Mary L. Gray and computational social scientist Siddharth Suri call ghost work: the downplayed human labor driving “AI”.

Tech companies that have branded themselves “AI first” depend on heavily surveilled gig workers like data labelers, delivery drivers and content moderators. Startups are even hiring people to impersonate AI systems like chatbots, under pressure from venture capitalists to incorporate so-called AI into their products. In fact, London-based venture capital firm MMC Ventures surveyed 2,830 AI startups in the EU and found that 40% of them didn’t use AI in a meaningful way…(More)”.

Leveraging Data for the Public Good


Article by Christopher Pissarides, Fadi Farra and Amira Bensebaa: “…Yet data are simply too important to be entrusted to either governments or large corporations that treat them as their private property. Instead, governments should collaborate with companies on joint-governance frameworks that recognize both the opportunities and the risks of big data.

Businesses – which are best positioned to understand big data’s true value – must move beyond short-sighted efforts to prevent regulation. Instead, they need to initiate a dialogue with policymakers on how to design viable solutions that can leverage the currency of our era to benefit the public good. Doing so would help them regain public trust.

Governments, for their part, must avoid top-down regulatory strategies. To win the support they need from businesses, they need to create incentives for data sharing and privacy protection and help develop new analytical tools through advanced modeling. Governments should also rethink and renew deeply-rooted frameworks inherited from the industrial era, such as those for taxation and social welfare.

In the digital age, governments should recognize the centrality of data to policymaking and develop tools to reward businesses that contribute to the public good by sharing it. True, governments require taxes to raise revenues, but they must recognize that a better understanding of individuals enables more efficient policies. By recognizing companies’ ability to save public money and create social value, governments could encourage companies to share data as a matter of social responsibility…(More)”.

Cross-border Data Flows: Taking Stock of Key Policies and Initiatives


OECD Report: “As data become an important resource for the global economy, it is important to strengthen trust to facilitate data sharing domestically and across borders. Significant momentum for related policies in the G7 and G20 has gone hand in hand with a wide range of – often complementary – national and international initiatives and the development of technological and organisational measures. Advancing a common understanding and dialogue among G7 countries and beyond is crucial to support coordinated and coherent progress in policy and regulatory approaches that leverage the full potential of data for global economic and social prosperity. This report takes stock of key policies and initiatives on cross-border data flows to inform and support G7 countries’ engagement on this policy agenda…(More)”.

Simple Writing Pays Off (Literally)


Article by Bill Birchard: “When SEC Chairman Arthur Levitt championed “plain English” writing in the 1990s, he argued that simpler financial disclosures would help investors make more informed decisions. Since then, we’ve also learned that it can help companies make more money. 

Researchers have confirmed that if you write simply and directly in disclosures like 10-Ks, you can attract more investors, cut the cost of debt and equity, and even save money and time on audits.

A landmark experiment by Kristina Rennekamp, an accounting professor at Cornell, documented some of the consequences of poor corporate writing. Working with readers of corporate press releases, she showed that companies stand to lose readers owing to lousy “processing fluency” of their documents. “Processing fluency” is a measure of readability used by psychologists and neuroscientists.

Rennekamp asked people in an experiment to evaluate two versions of financial press releases. One was the actual release, from a soft drink company. The other was an edit using simple language advocated by the SEC’s Plain English Handbook. The handbook, essentially a guide to better fluency, contains principles that now serve as a standard by which researchers measure readability. 

Published under Levitt, the handbook clarified the requirements of Rule 421, which, starting in 1998, required all prospectuses (and in 2008 all mutual fund summary prospectuses) to adhere to the handbook’s principles. Among them: Use short sentences. Stick to active voice. Seek concrete words. Shun boilerplate. Minimize jargon. And avoid multiple negatives. 

Rennekamp’s experiment, using the so-called Fog Index, a measure of readability based on handbook standards, provided evidence that companies would do better at hooking readers if they simply made their writing easier to read. “Processing fluency from a more readable disclosure,” she wrote in 2012 after measuring the greater trust readers put in well-written releases, “acts as a heuristic cue and increases investors’ beliefs that they can rely on the information in the disclosure…(More)”.
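The Fog Index Rennekamp relied on combines average sentence length with the share of “complex” (three-or-more-syllable) words. A minimal sketch of the calculation in Python, using a rough vowel-group heuristic for syllable counting rather than the exact tooling used in the study:

```python
import re

def fog_index(text):
    """Estimate the Gunning Fog readability index of a passage.

    Fog = 0.4 * (average words per sentence + percentage of complex words),
    where a "complex" word has three or more syllables. Syllables are
    approximated here by counting groups of consecutive vowels.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0

    def syllables(word):
        # Approximate: each run of vowels counts as one syllable.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    complex_words = [w for w in words if syllables(w) >= 3]
    avg_sentence_len = len(words) / len(sentences)
    pct_complex = 100.0 * len(complex_words) / len(words)
    return 0.4 * (avg_sentence_len + pct_complex)

print(round(fog_index("The cat sat on the mat. It was warm."), 1))  # prints 1.8
```

A higher score means a passage demands more schooling to follow; plain-English editing of the kind the SEC handbook prescribes (shorter sentences, fewer polysyllabic words) drives the score down.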

The EU wants to put companies on the hook for harmful AI


Article by Melissa Heikkilä: “The EU is creating new rules to make it easier to sue AI companies for harm. A bill unveiled this week, which is likely to become law in a couple of years, is part of Europe’s push to prevent AI developers from releasing dangerous systems. And while tech companies complain it could have a chilling effect on innovation, consumer activists say it doesn’t go far enough. 

Powerful AI technologies are increasingly shaping our lives, relationships, and societies, and their harms are well documented. Social media algorithms boost misinformation, facial recognition systems are often highly discriminatory, and predictive AI systems that are used to approve or reject loans can be less accurate for minorities.  

The new bill, called the AI Liability Directive, will add teeth to the EU’s AI Act, which is set to become EU law around the same time. The AI Act would require extra checks for “high risk” uses of AI that have the most potential to harm people, including systems for policing, recruitment, or health care. 

The new liability bill would give people and companies the right to sue for damages after being harmed by an AI system. The goal is to hold developers, producers, and users of the technologies accountable, and require them to explain how their AI systems were built and trained. Tech companies that fail to follow the rules risk EU-wide class actions.

For example, job seekers who can prove that an AI system for screening résumés discriminated against them can ask a court to force the AI company to grant them access to information about the system so they can identify those responsible and find out what went wrong. Armed with this information, they can sue. 

The proposal still needs to snake its way through the EU’s legislative process, which will take a couple of years at least. It will be amended by members of the European Parliament and EU governments and will likely face intense lobbying from tech companies, which claim that such rules could have a “chilling” effect on innovation…(More)”.

Designing a Data Sharing Tool Kit


Paper by Ilka Jussen, Julia Christina Schweihoff, Maleen Stachon and Frederik Möller: “Sharing data is essential to the success of modern data-driven business models: it plays a crucial role in helping companies create new and better services and optimize existing processes. While interest in data sharing is growing, companies face an array of challenges preventing them from fully exploiting data sharing opportunities. Mitigating these risks and weighing them against their potential is a creative, interdisciplinary task in each company. The paper starts precisely at this point and proposes a Tool Kit with three Visual Inquiry Tools (VITs) for jointly identifying data sharing potential. We do this using a design-oriented research approach and contribute to research and practice by providing three VITs that help different stakeholders or companies in an ecosystem to visualize and design their data-sharing activities…(More)”.