8 Strategies for Chief Data Officers to Create — and Demonstrate — Value


Article by Thomas H. Davenport, Richard Y. Wang, and Priyanka Tiwari: “The chief data officer (CDO) role was only established in 2002, but it has grown enormously since then. In one recent survey of large companies, 83% reported having a CDO. This isn’t surprising: Data and approaches to understanding it (analytics and AI) are incredibly important in contemporary organizations. What is eyebrow-raising, however, is that the CDO job is terribly ill-defined. Sixty-two percent of CDOs surveyed in the research we describe below reported that the CDO role is poorly understood, and incumbents of the job have often met with diffuse expectations and short tenures. There is a clear need for CDOs to focus on adding visible value to their organizations.

Part of the problem is that traditional data management approaches are unlikely to provide visible value in themselves. Many nontechnical executives don’t really understand the CDO’s work and struggle to recognize when it’s being done well. CDOs are often asked to focus on preventing data problems (defense-oriented initiatives) and on data management projects such as improving data architectures, data governance, and data quality. But data will never be perfect, meaning executives will always be somewhat frustrated with their organization’s data situation. And while improvements in data management may be difficult to recognize or measure, major problems such as hacks, breaches, lost or inaccessible data, or poor quality are immediately visible.

So how can CDOs demonstrate that they’re creating value?…(More)”

Data Free Disney


Essay by Janet Vertesy: “…Once upon a time, you could just go to Disneyland. You could get tickets at the gates, stand in line for rides, buy food and tchotchkes, even pick up copies of your favorite Disney movies at a local store. It wasn’t even that long ago. The last time I visited, in 2010, the company didn’t record what I ate for dinner or detect that I went on Pirates of the Caribbean five times. It was none of their business.

But sometime in the last few years, tracking and tracing became their business. Like many corporations out there, Walt Disney Studios spent the last decade transforming into a data company.

The theme parks alone are a data scientist’s dream. Just imagine: 50,000 visitors a day, most equipped with cell phones and a specialized app. Millions of location traces, along with ride statistics, lineup times, and food-order preferences. Thousands and thousands of credit card swipes, each populating a database with names and addresses, each one linking purchases across the park grounds. A QR-code scavenger hunt that records the path people took through Star Wars: Galaxy’s Edge. Hotel keycards with entrance times, purchases, snack orders, and more. Millions of photos snapped on rides and security cameras throughout the park, feeding facial-recognition systems. Tickets with names, birthdates, and portraits attached. At Florida’s Disney World, MagicBands—bracelets using RFID (radio-frequency identification) technology—around visitors’ wrists gather all that information plus fingerprints in one place, while sensors ambiently detect their every move. What couldn’t you do with all that data?…(More)”.
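
The data linkage Vertesi describes is mundane to build. Below is a minimal, purely hypothetical Python sketch: two invented event logs, location pings and card swipes, keyed to the same visitor ID and merged into a single movement-and-spending profile. Every table, field, and value here is made up for illustration.

```python
# Hypothetical sketch: joining separate event streams on a shared
# visitor ID. All names and values are invented for illustration.
import pandas as pd

locations = pd.DataFrame({
    "visitor_id": [17, 17, 42],
    "timestamp": pd.to_datetime(
        ["2010-07-04 11:02", "2010-07-04 11:40", "2010-07-04 11:05"]),
    "zone": ["Adventureland", "Pirates of the Caribbean", "Main Street"],
})

purchases = pd.DataFrame({
    "visitor_id": [17, 42],
    "timestamp": pd.to_datetime(["2010-07-04 12:10", "2010-07-04 11:15"]),
    "item": ["churro", "mouse ears"],
    "amount": [6.50, 24.99],
})

# merge_asof pairs each purchase with the visitor's most recent
# location ping, so two innocuous logs become one profile of where
# each visitor was and what they bought.
profile = pd.merge_asof(
    purchases.sort_values("timestamp"),
    locations.sort_values("timestamp"),
    on="timestamp",
    by="visitor_id",
)
print(profile)
```

The point is not the few lines of pandas; it is that once every stream carries the same key, profiles assemble themselves.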

The Power of the Stora Rör Swimming Association and Other Local Institutions


Article by Erik Angner: “On a late-summer afternoon of 1938, two eleven-year-old girls waded into the water in Stora Rör harbor on the Baltic island of Öland. They were awaiting their mother, who was returning by ferry from a hospital visit on the mainland. Unbeknownst to the girls, the harbor had been recently dredged. Where there used to be shallow sands, the water was now cold, dark, and deep. The girls couldn’t swim. They drowned mere feet from safety—in full view of a powerless little sister on the beach.

The community was shaken. It resolved that no such tragedy should ever happen again. To make sure every child would learn to swim, the community decided to offer swimming lessons to anyone interested. The Stora Rör Swimming Association, founded that same year, is still going strong. It’s enrolled thousands of children, adolescents, and adults. My grandmother, a physical-education teacher by training, was one of its first instructors. My father, myself, and my children all learned how to swim there.

It’s impossible to know if the association has saved lives. It may well have. The community has been spared, although kids play in and fall into the water all the time. Nationwide, drowning is the leading cause of death for Swedish kids between one and six years of age.

We do know that the association has had many other beneficial effects. It has offered healthy, active outdoor summer activities for generations of kids. The activities of the association remain open to all. Fees are nominal. Children come from families of farmers and refugees, artists and writers, university professors and CEOs of major corporations, locals and tourists…

In economic terms, the Stora Rör Swimming Association is an institution. It’s a set of rules, or “prescriptions,” that humans use to structure all sorts of repeated interactions. These rules can be formalized in a governing document. The constitution of the association says that you have to pay dues if you want to remain a member in good standing, for example. But the rules that define the institution don’t need to be written down. They don’t even need to be formulated in words. “Attend the charity auction and bid on things if you can afford it.” “Volunteer to serve on the board when it’s your turn.” “Treat swimming teachers with respect.” These are all unwritten rules. They may never have been formulated quite like this before. Still, they’re widely—if not universally—followed. And, from an economic perspective, these rules taken together define what sort of thing the Swimming Association is.

Economist Elinor Ostrom studied institutions throughout her career. She wanted to know what institutions do, how and why they work, how they appear and evolve over time, how we can build and improve them, and, finally, how to share that knowledge with the rest of us. She believed in the power of economics to “bring out the best in humans.” The way to do it, she thought, was to help them build community—developing the rich network of relationships that form the fabric of a society…(More)”.

Open-source intelligence is piercing the fog of war in Ukraine


The Economist: “…The rise of open-source intelligence, OSINT to insiders, has transformed the way that people receive news. In the run-up to war, commercial satellite imagery and video footage of Russian convoys on TikTok, a social-media site, allowed journalists and researchers to corroborate Western claims that Russia was preparing an invasion. OSINT even predicted its onset. Jeffrey Lewis of the Middlebury Institute in California used Google Maps’ road-traffic reports to identify a tell-tale jam on the Russian side of the border at 3:15am on February 24th. “Someone’s on the move”, he tweeted. Less than three hours later Vladimir Putin launched his war.
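
Lewis’s inference is, at bottom, simple anomaly detection: a road that is normally empty at 3 a.m. suddenly is not. Here is a toy sketch of that reasoning on synthetic numbers; real OSINT work draws on live traffic feeds that this example does not attempt to model.

```python
# Toy anomaly check on a synthetic 3 a.m. congestion index (0-100).
# Real analyses use live traffic feeds; these numbers are invented.
import statistics

baseline = [2, 0, 1, 3, 2, 1, 0, 2]  # typical quiet-night readings
observed = 78                         # the night in question

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
z_score = (observed - mean) / stdev

# flag readings far outside the normal nighttime range
if z_score > 3:
    print(f"Congestion is {z_score:.1f} standard deviations above "
          "normal for 3 a.m. -- someone's on the move.")
```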

Satellite imagery still plays a role in tracking the war. During the Kherson offensive, synthetic-aperture radar (SAR) satellites, which can see at night and through clouds, showed Russia building pontoon bridges over the Dnieper river before its retreat from Kherson, boats appearing and disappearing as troops escaped east and, later, Russia’s army building new defensive positions along the M14 highway on the river’s left bank. And when Ukrainian drones struck two air bases deep inside Russia on December 5th, high-resolution satellite images showed the extent of the damage…(More)”.

Big Data and the Law of War


Essay by Paul Stephan: “Big data looms large in today’s world. Much of the tech sector regards the building up of large sets of searchable data as part (sometimes the greater part) of its business model. Surveillance-oriented states, of which China is the foremost example, use big data to guide and bolster monitoring of their own people as well as potential foreign threats. Many other states are not far behind in the surveillance arms race, notwithstanding the attempts of the European Union to put its metaphorical finger in the dike. Finally, ChatGPT has revived popular interest in artificial intelligence (AI) as a cultural, economic, and social phenomenon; AI uses big data to optimize the training and algorithm design on which it depends.

If big data is growing in significance, might it join territory, people, and property as objects of international conflict, including armed conflict? So far it has not been front and center in Russia’s invasion of Ukraine, the war that currently consumes much of our attention. But future conflicts could certainly feature attacks on big data. China and Taiwan, for example, both have sophisticated technological infrastructures that encompass big data and AI capabilities. The risk that they might find themselves at war in the near future is larger than anyone would like. What, then, might the law of war have to say about big data? More generally, if existing law does not meet our needs, how might new international law address the issue?

In a recent essay, part of an edited volume on “The Future Law of Armed Conflict,” I argue that big data is a resource and therefore a potential target in an armed conflict. I address two issues: Under the law governing the legality of war (jus ad bellum), what kinds of attacks on big data might justify an armed response, touching off a bilateral (or multilateral) armed conflict (a war)? And within an existing armed conflict, what are the rules (jus in bello, also known as international humanitarian law, or IHL) governing such attacks?

The distinction is meaningful. If cyber operations rise to the level of an armed attack, then the targeted state has, according to Article 51 of the U.N. Charter, an “inherent right” to respond with armed force. Moreover, the target need not confine its response to a symmetrical cyber operation. Once attacked, a state may use all forms of armed force in response, albeit subject to the restrictions imposed by IHL. If the state regards, say, a takedown of its financial system as an armed attack, it may respond with missiles…(More)”.

Ready, set, share: Researchers brace for new data-sharing rules


Jocelyn Kaiser and Jeffrey Brainard in Science: “…By 2025, new U.S. requirements for data sharing will extend beyond biomedical research to encompass researchers across all scientific disciplines who receive federal research funding. Some funders in the European Union and China have also enacted data-sharing requirements. The new U.S. moves are feeding hopes that a worldwide movement toward increased sharing is in the offing. Supporters think it could speed the pace and reliability of science.

Some scientists may only need to make a few adjustments to comply with the policies. That’s because data sharing is already common in fields such as protein crystallography and astronomy. But in other fields the task could be weighty, because sharing is often an afterthought. For example, a study involving 7,750 medical research papers found that just 9% of those published from 2015 to 2020 promised to make their data publicly available, and authors of just 3% actually shared, says lead author Daniel Hamilton of the University of Melbourne, who described the finding at the International Congress on Peer Review and Scientific Publication in September 2022. Even when authors promise to share their data, they often fail to follow through. A study published in PLOS ONE in 2020 found that of 21,000 journal articles that included data-sharing plans, fewer than 21% provided links to the repository storing the data.

Journals and funders, too, have a mixed record when it comes to supporting data sharing. Research presented at the September 2022 peer-review congress found only about half of the 110 largest public, corporate, and philanthropic funders of health research around the world recommend or require grantees to share data…

“Health research is the field where the ethical obligation to share data is the highest,” says Aidan Tan, a clinician-researcher at the University of Sydney who led the study. “People volunteer in clinical trials and put themselves at risk to advance medical research and ultimately improve human health.”

Across many fields of science, researchers’ support for sharing data has increased during the past decade, surveys show. But given the potential cost and complexity, many are apprehensive about the new policy from the U.S. National Institutes of Health (NIH) and the other requirements to follow. “How we get there is pretty messy right now,” says Parker Antin, a developmental biologist and associate vice president for research at the University of Arizona. “I’m really not sure whether the total return will justify the cost. But I don’t know of any other way to find out than trying to do it.”

Science offers this guide as researchers prepare to plunge in….(More)”.

How can health data be used for public benefit? 3 uses that people agree on


Article by Alison Paprica et al.: “Health data can include information about health-care services, health status and behaviours, medications and genetic data, in addition to demographic information like age, education and neighbourhood.

These facts and statistics are valuable because they offer insights and information about population health and well-being. However, they can also be sensitive, and there are legitimate public concerns about how these data are used, and by whom. The term “social licence” describes uses of health data that have public support.

Studies performed in Canada, the United Kingdom and internationally have all found public support and social licence for uses of health data that produce public benefits.

However, this support is conditional. Public concerns related to privacy, commercial motives, equity and fairness must be addressed.

Our team of health policy researchers set out to build upon prior studies with actionable advice from a group of 20 experienced public and patient advisers. Studies have shown that the use, sharing and reuse of health data is a complex topic. So we recruited people who already had some knowledge of potential uses of health data through their roles advising research institutions, hospitals, community organizations and governments.

We asked these experienced advisers to exchange views about uses of health data that they supported or opposed. We also gathered participants’ views about requirements for social licence, such as privacy, security and transparency.

Consensus views: After hours of facilitated discussion and weeks of reflection, all 20 participants agreed on some applications and uses of health data that are within social licence, and some that are not.

Participants agreed it is within social licence for health data to be used by:

  • health-care practitioners — to directly improve the health-care decisions and services provided to a patient.
  • governments, health-care facilities and health-system administrators — to understand and improve health care and the health-care system.
  • university-based researchers — to understand the drivers of disease and well-being.

Participants agreed that it is not within social licence for:

  • an individual or organization to sell (or re-sell) another person’s identified health data.
  • health data to be used for a purpose that has no patient, public or societal benefit.

Points of disagreement: Among other topics, the participants discussed uses of health data about systemically marginalized populations and companies using health data. Though some participants saw benefits from both practices, there was not consensus support for either.

For example, participants were concerned that vulnerable populations could be exploited, and that companies would put profit ahead of public benefits. Participants also worried that if harms were done by companies or to marginalized populations, they could not be “undone.” Several participants expressed skepticism about whether risks could be managed, even if additional safeguards are in place.

The participants also had different views about what constitutes an essential requirement for social licence. This included discussions about benefits, governance, patient consent and involvement, equity, privacy and transparency.

Collectively, they generated a list of 85 essential requirements, but 38 of those requirements were only seen as essential by one person. There were also cases where some participants actively opposed a requirement that another participant thought was essential…(More)”

Social media is too important to be so opaque with its data


Article by Alex González Ormerod: “Over 50 people were killed by the police during demonstrations in Peru. Brazil is reeling from a coup attempt in its capital city. The residents of Culiacán, a city in northern Mexico, still cower in their houses after the army swooped in to arrest a cartel kingpin. Countries across Latin America have kicked off the year with turmoil. 

It is almost a truism to say that the common factor in these events has been the role of social media. Far-right radicals in Brazil were seen to be openly organizing and spreading fake news about electoral fraud on Twitter. Peruvians used TikTok to bear witness to police brutality, preserving it for posterity.

Dealing with the aftermath of the crises, Sinaloans in Culiacán shared crucial information about where roadblocks continued to burn and warned about shootouts in certain neighborhoods. Brazilians opened up Instagram and other social channels to compile photos and other evidence that might help the police bring the Brasília rioters to justice.

These events could be said to have happened online as much as they did offline, yet we know next to nothing about the inner workings of the platforms they occurred on.

People covering these platforms face a common refrain: After reaching out for basic social media data, they will often get a reply saying, “Unfortunately we do not have the information you need at this time.” (This particular quote came from Alberto de Golin, a PR agency representative for TikTok Mexico)…(More)”

How Smart Are the Robots Getting?


Cade Metz at The New York Times: “…These are not systems that anyone can properly evaluate with the Turing test — or any other simple method. Their end goal is not conversation.

Researchers at Google and DeepMind, which is owned by Google’s parent company, are developing tests meant to evaluate chatbots and systems like DALL-E, to judge what they do well, where they lack reason and common sense, and more. One test shows videos to artificial intelligence systems and asks them to explain what has happened. After watching someone tinker with an electric shaver, for instance, the A.I. must explain why the shaver did not turn on.

These tests feel like academic exercises — much like the Turing test. We need something that is more practical, that can really tell us what these systems do well and what they cannot, how they will replace human labor in the near term and how they will not.

We could also use a change in attitude. “We need a paradigm shift — where we no longer judge intelligence by comparing machines to human behavior,” said Oren Etzioni, professor emeritus at the University of Washington and founding chief executive of the Allen Institute for AI, a prominent lab in Seattle….

At the same time, there are many ways these bots are superior to you and me. They do not get tired. They do not let emotion cloud what they are trying to do. They can instantly draw on far larger amounts of information. And they can generate text, images and other media at speeds and volumes we humans never could.

Their skills will also improve considerably in the coming years.

Researchers can rapidly hone these systems by feeding them more and more data. The most advanced systems, like ChatGPT, require months of training, but over those months, they can develop skills they did not exhibit in the past.

“We have found a set of techniques that scale effortlessly,” said Raia Hadsell, senior director of research and robotics at DeepMind. “We have a simple, powerful approach that continues to get better and better.”

The exponential improvement we have seen in these chatbots over the past few years will not last forever. The gains may soon level out. But even then, multimodal systems will continue to improve — and master increasingly complex skills involving images, sounds and computer code. And computer scientists will combine these bots with systems that can do things they cannot. ChatGPT failed Turing’s chess test. But we knew in 1997 that a computer could beat the best humans at chess. Plug ChatGPT into a chess program, and the hole is filled.
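
That last step is easy to picture in code. The sketch below, which assumes the open-source python-chess library and a local Stockfish binary on the PATH, delegates move selection to the engine and leaves only the commentary to a chatbot; the narrate() function is a hypothetical stand-in for any language-model API.

```python
# Sketch: a chess engine supplies the moves, a chatbot supplies the
# words. Assumes python-chess is installed and a Stockfish binary
# is available on the PATH.
import chess
import chess.engine

def narrate(move: chess.Move, board: chess.Board) -> str:
    # hypothetical placeholder for a language-model call that turns
    # the engine's move into conversational prose
    return f"I'll play {board.san(move)}."

board = chess.Board()
engine = chess.engine.SimpleEngine.popen_uci("stockfish")
try:
    # the engine plays both sides here, purely to show the division
    # of labor: moves from the engine, narration from the chatbot
    while not board.is_game_over():
        result = engine.play(board, chess.engine.Limit(time=0.1))
        print(narrate(result.move, board))
        board.push(result.move)
finally:
    engine.quit()

print(board.result())
```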

In the months and years to come, these bots will help you find information on the internet. They will explain concepts in ways you can understand. If you like, they will even write your tweets, blog posts and term papers.

They will tabulate your monthly expenses in your spreadsheets. They will visit real estate websites and find houses in your price range. They will produce online avatars that look and sound like humans. They will make mini-movies, complete with music and dialogue…

Certainly, these bots will change the world. But the onus is on you to be wary of what these systems say and do, to edit what they give you, to approach everything you see online with skepticism. Researchers know how to give these systems a wide range of skills, but they do not yet know how to give them reason or common sense or a sense of truth.

That still lies with you…(More)”.

Why Europe must embrace participatory policymaking


Article by Alberto Alemanno, Claire Davenport, and Laura Batalla: “Today, Europe faces many threats – from economic uncertainty and war on its eastern borders to the rise of illiberal democracies and popular reactionary politicians.

As Europe recovers from the pandemic and grapples with economic and social unrest, it is at an inflection point; it can either create new spaces to build trust and a sense of shared purpose between citizens and governments, or it can continue to let its democratic institutions erode and distrust grow. 

The scale of such problems requires novel problem-solving and new perspectives, including those from civil society and citizens. Increased opportunities for citizens to engage with policymakers can lend legitimacy and accountability to traditionally ‘opaque’ policymaking processes. The future of the bloc hinges on its ability to not only sustain democratic institutions but to do so with buy-in from constituents.

Yet policymaking in the EU is often understood as a technocratic process that the public finds difficult, if not impossible, to navigate. The Spring 2022 Eurobarometer found that just 53% of respondents believed their voice counts in the EU. The issue is compounded by a lack of political literacy coupled with a dearth of channels for participation or co-creation. 

In parallel, there is a strong desire from citizens to make their voices heard. A January 2022 Special Eurobarometer on the Future of Europe found that 90% of respondents agreed that EU citizens’ voices should be taken more into account during decision-making. The Russian war in Ukraine has strengthened public support for the EU as a whole. According to the Spring 2022 Eurobarometer, 65% of Europeans view EU membership as a good thing. 

This is not to say that the EU has no existing models for citizen engagement. The European Citizens’ Initiative – a mechanism for petitioning the Commission to propose new laws – is one example of existing infrastructure. There is also an opportunity to build on the success of the Conference on the Future of Europe, a gathering held this past spring that gave citizens the opportunity to contribute policy recommendations and justifications alongside traditional EU policymakers…(More)”