Safeguarding Public Values in Cooperation with Big Tech Companies: The Case of the Austrian Contact Tracing App Stopp Corona


Paper by Valerie Eveline: “In April 2020, at the beginning of the COVID-19 pandemic, the Austrian Red Cross announced a cooperation with Google and Apple, building on their Exposure Notification Framework, to develop the so-called Stopp Corona app – a contact tracing app that would support health personnel in monitoring the spread of the virus and preventing new infections (European Commission, 2020a). The involvement of Google and Apple in combating a public health emergency fueled controversy over whether profit-driven private interests were being addressed at the expense of public values. Concerns have been raised about the dominant position of US-based big tech companies in political decisions concerning public values. This research investigates how public values are safeguarded in cooperation with big tech companies in the case of the Austrian contact tracing app Stopp Corona. Contact tracing apps exemplify a broader trend identified in the literature: the shifting power dynamics among big tech companies, governments, and civil society in relation to public values. The theoretical foundation of this research is formed by prevailing concepts from Media and Communication Studies (MCS) and Science and Technology Studies (STS) about such power dynamics, including the expansion of digital platforms and infrastructures, the political economy of big tech companies, dependencies, and the governance of digital platforms and infrastructures.

The cooperative responsibility framework guides the empirical investigation in four main steps. The first steps identify the key public values at stake and the main stakeholders. Subsequently, public deliberations on advancing public values are analyzed, along with the translation of those values based on the outcomes of the deliberations…(More)”.

A comprehensive study of technological change


Article by Scott Murray: The societal impacts of technological change can be seen in many domains, from messenger RNA vaccines and automation to drones and climate change. The pace of that technological change can affect its impact, and how quickly a technology improves in performance can be an indicator of its future importance. For decision-makers like investors, entrepreneurs, and policymakers, predicting which technologies are fast improving (and which are overhyped) can mean the difference between success and failure.

New research from MIT aims to assist in the prediction of technology performance improvement using U.S. patents as a dataset. The study describes 97 percent of the U.S. patent system as a set of 1,757 discrete technology domains, and quantitatively assesses each domain for its improvement potential.

“The rate of improvement can only be empirically estimated when substantial performance measurements are made over long time periods,” says Anuraag Singh SM ’20, lead author of the paper. “In some large technological fields, including software and clinical medicine, such measures have rarely, if ever, been made.”

A previous MIT study provided empirical measures for 30 technological domains, but the patent sets identified for those technologies cover less than 15 percent of the patents in the U.S. patent system. The major purpose of this new study is to provide predictions of the performance improvement rates for the thousands of domains not yet covered by empirical measurement. To accomplish this, the researchers developed a method using a new probability-based algorithm, machine learning, natural language processing, and patent network analytics….(More)”.
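To make the prediction step concrete, here is a minimal illustrative sketch in Python of the general idea: fit a simple model on domains whose improvement rates have been measured empirically, using patent-derived features, and then apply it to domains that lack such measurements. The feature names, numbers, and the plain linear regression are assumptions for illustration only; the study's actual pipeline (probability-based algorithm, NLP, and patent network analytics) is considerably more sophisticated.

```python
# Illustrative sketch only; features and values are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: domains with empirically measured improvement rates,
# described by patent features such as a citation-network centrality score and
# mean forward citations per patent.
X_measured = np.array([
    [0.82, 14.1],   # a fast-improving domain
    [0.35,  4.2],   # a slower-improving domain
    [0.60,  9.8],
])
y_measured = np.log([0.40, 0.05, 0.20])  # log of yearly improvement rates

model = LinearRegression().fit(X_measured, y_measured)

# Predict improvement rates for domains that lack empirical performance data.
X_unmeasured = np.array([[0.71, 11.0], [0.28, 3.5]])
predicted_rates = np.exp(model.predict(X_unmeasured))
print(predicted_rates)
```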

Financial data unbound: The value of open data for individuals and institutions


Paper by McKinsey Global Institute: “As countries around the world look to ensure rapid recovery once the COVID-19 crisis abates, improved financial services are emerging as a key element to boost growth, raise economic efficiency, and lift productivity. Robust digital financial infrastructure proved its worth during the crisis, helping governments cushion people and businesses from the economic shock of the pandemic. The next frontier is to create an open-data ecosystem for finance.

Already, technological, regulatory, and competitive forces are moving markets toward easier and safer financial data sharing. Open-data initiatives are springing up globally, including the United Kingdom’s Open Banking Implementation Entity, the European Union’s second payment services directive, Australia’s new consumer protection laws, Brazil’s drafting of open data guidelines, and Nigeria’s new Open Technology Foundation (Open Banking Nigeria). In the United States, the Consumer Financial Protection Bureau aims to facilitate a consumer-authorized data-sharing market, while the Financial Data Exchange consortium attempts to promote common, interoperable standards for secure access to financial data. Yet, even as many countries put in place stronger digital financial infrastructure and data-sharing mechanisms, COVID-19 has exposed limitations and gaps in their reach, a theme we explored in earlier research.

This discussion paper from the McKinsey Global Institute (download full text in 36-page PDF) looks at the potential value that could be created—and the key issues that will need to be addressed—by the adoption of open data for finance. We focus on four regions: the European Union, India, the United Kingdom, and the United States.

By open data, we mean the ability to share financial data through a digital ecosystem in a manner that requires limited effort or manipulation. Advantages include more accurate credit risk evaluation and risk-based pricing, improved workforce allocation, better product delivery and customer service, and stronger fraud protection.

Our analysis suggests that the boost to the economy from broad adoption of open-data ecosystems could range from about 1 to 1.5 percent of GDP in 2030 in the European Union, the United Kingdom, and the United States, to as much as 4 to 5 percent in India. All market participants benefit, be they institutions or consumers—either individuals or micro-, small-, and medium-sized enterprises (MSMEs)—albeit to varying degrees….(More)”.

The Predictive Power of Patents


Paper by Sabrina Safrin: “This article explains that domestic patenting activity may foreshadow a country’s level of regulation of path-breaking technologies. The article considers whether different governments will act with a light or a heavy regulatory hand when encountering a new disruptive technology. The article hypothesizes that part of the answer to this important regulatory, economic, and geopolitical question may lie in an unexpected place: the world’s patent offices. Countries with early and significant patent activity in an emerging technology are more likely to view themselves as having a stake in the technology and therefore will be less inclined to subject the technology to extensive health, safety, and environmental regulation that would constrain it. The article introduces the term “patent footprint” to describe a country’s degree of patenting activity in a new technology, and posits that a country’s patent footprint may provide an early clue to its willingness or reluctance to strenuously regulate the new technology. Moreover, a lack of geographic diversity in patent footprints may help predict whether an emerging technology will face extensive international regulation. Patent footprints provide a useful tool to policymakers, businesses, investors, and NGOs considering the health, safety, and environmental regulation of a disruptive technology. The predictive power of patent footprints adds to the literature on the broader function of patents in society…(More)”.

Why You Should Care About Your Right to Repair Gadgets


Brian X. Chen at The New York Times: “When your car has problems, your instinct is probably to take it to a mechanic. But when something goes wrong with your smartphone — say a shattered screen or a depleted battery — you may wonder: “Is it time to buy a new one?”

That’s because even as our consumer electronics have become as vital as our cars, the idea of tech repair still hasn’t been sown into our collective consciousness. Studies have shown that when tech products begin to fail, most people are inclined to buy new things rather than fix their old ones.

“Repair is inconvenient and difficult, so people don’t seek it,” said Nathan Proctor, a director for the U.S. Public Interest Research Group, a consumer advocacy organization, who is working on legislation to make tech repair more accessible. “Because people don’t expect to repair things, they replace things when by far the most logical thing to do is to repair it.”

It doesn’t have to be this way. More of us could maintain our tech products, as we do with cars, if it were more practical to do so. If we all had more access to the parts, instructions and tools to revive products, repairs would become simpler and less expensive.

This premise is at the heart of the “right to repair” act, a proposed piece of legislation that activists and tech companies have fought over for nearly a decade. Recently, right-to-repair supporters scored two major wins. In May, the Federal Trade Commission published a report explaining how tech companies were harming competition by restricting repairs. And last Friday, President Biden issued an executive order that included a directive for the F.T.C. to place limits on how tech manufacturers could restrict repairs.

The F.T.C. is set to meet next week to discuss new policies about electronics repair. Here’s what you need to know about the fight over your right to fix gadgets…(More)”.

Concern trolls and power grabs: Inside Big Tech’s angry, geeky, often petty war for your privacy


Article by Issie Lapowsky: “Inside the World Wide Web Consortium, where the world’s top engineers battle over the future of your data….

The W3C’s members do it all by consensus in public GitHub forums and open Zoom meetings with meticulously documented meeting minutes, creating a rare archive on the internet of conversations between some of the world’s most secretive companies as they collaborate on new rules for the web in plain sight.

But lately, that spirit of collaboration has been under intense strain as the W3C has become a key battleground in the war over web privacy. Over the last year, far from the notice of the average consumer or lawmaker, the people who actually make the web run have converged on this niche community of engineers to wrangle over what privacy really means, how the web can be more private in practice and how much power tech giants should have to unilaterally enact this change.

On one side are engineers who build browsers at Apple, Google, Mozilla, Brave and Microsoft. These companies are frequent competitors that have come to embrace web privacy on drastically different timelines. But they’ve all heard the call of both global regulators and their own users, and are turning to the W3C to develop new privacy-protective standards to replace the tracking techniques businesses have long relied on.

On the other side are companies that use cross-site tracking for things like website optimization and advertising, and are fighting for their industry’s very survival. That includes small firms like Rosewell’s, but also giants of the industry, like Facebook.

Rosewell has become one of this side’s most committed foot soldiers since he joined the W3C last April. Where Facebook’s developers can only offer cautious edits to Apple and Google’s privacy proposals, knowing full well that every exchange within the W3C is part of the public record, Rosewell is decidedly less constrained. On any given day, you can find him in groups dedicated to privacy or web advertising, diving into conversations about new standards browsers are considering.

Rather than asking technical questions about how to make browsers’ privacy specifications work better, he often asks philosophical ones, like whether anyone really wants their browser making certain privacy decisions for them at all. He’s filled the W3C’s forums with concerns about its underlying procedures, sometimes a dozen at a time, and has called upon the W3C’s leadership to more clearly articulate the values for which the organization stands….(More)”.

How can governments boost citizen-led projects?


Justin Tan at GovInsider: “The visual treat of woks tossing fried carrot cake, the dull thuds of a chopper expertly dicing up a chicken, the fragrant lime aroma of grilled sambal stingray. The sensory playgrounds of Singapore’s hawker centres are close to many citizens’ homes and hearts, and have even recently won global recognition from UNESCO.

However, the pandemic has left many hawkers facing slow business. While restaurants and fast food chains have quickly caught on to food delivery services, many elderly hawkers were left behind in the digital race.

28-year-old Singaporean M Thirukkumaran developed an online community map called “Help Our Hawkers”, which provides information on digitally-disadvantaged hawkers near users’ locations, such as opening hours and stall details. GovInsider caught up with him to learn how it was built and how governments can support fellow civic hackers…

Besides creating space for civic innovation, governments can step in to give particularly promising projects a boost with their resources and influence, Thiru says.

Most community-led projects need to rely on cloud services such as AWS, which can be expensive for a small team to bear, he explains. Government subsidies or grants may help to ease the cost for digital infrastructure.

In Thiru’s case, the map needed to be rolled out quickly to be useful. He chose to build his tool with Google Maps to speed up the process, as many users are already familiar with it.
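As a hypothetical illustration of the kind of lookup such a community map performs (nothing here is taken from the actual “Help Our Hawkers” code), a few lines of Python can return stalls within walking distance of a user, using a standard haversine distance; the stall data and function names are invented for the sketch.

```python
# Hypothetical sketch: find stalls near a user's location.
from math import radians, sin, cos, asin, sqrt

STALLS = [
    {"name": "Fried Carrot Cake", "lat": 1.2850, "lng": 103.8270, "hours": "07:00-14:00"},
    {"name": "Sambal Stingray",   "lat": 1.3100, "lng": 103.8560, "hours": "17:00-22:00"},
]

def haversine_km(lat1, lng1, lat2, lng2):
    # Great-circle distance between two points, in kilometres.
    dlat, dlng = radians(lat2 - lat1), radians(lng2 - lng1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlng / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def stalls_near(lat, lng, radius_km=2.0):
    # Keep only stalls within the given radius of the user.
    return [s for s in STALLS if haversine_km(lat, lng, s["lat"], s["lng"]) <= radius_km]

print(stalls_near(1.2900, 103.8300))
```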

Another way governments can help is by using their wide reach to give these community-led projects more visibility, Thiru suggests. Community projects commonly face a “cold start” dilemma: the tool needs data to be useful, but citizens hesitate to spend time on a tool that is not yet useful.

Thiru jump-started his tool by contributing a few stalls on his own. With more publicity through government campaigns, the process could be sped up considerably, he shares…(More)”.

Is there a role for consent in privacy?


Article by Robert Gellman: “After decades, we still talk about the role of notice and choice in privacy. Yet there seems to be broad recognition that notice and choice do nothing for the privacy of consumers. Some American businesses cling to notice and choice because they hate all the alternatives. Some legislators draft laws with elements of notice and choice, either because it’s easier to draft a law that way, because they don’t know any better or because they carry water for business.

For present purposes, I will talk about notice and choice generically as consent. Consent is a broader concept than choice, but the difference doesn’t matter for the point I want to make. How you frame consent is complex. There are many alternatives and many approaches. It’s not just a matter of opt-in or opt-out. While I’m discarding issues, I also want to acknowledge and set aside the eight basic Fair Information Practices (FIPs). There is no notice and choice principle in FIPs, and FIPs are not specifically important here.

Until recently, my view was that consent in almost any form is pretty much death for consumer privacy. No matter how you structure it, websites and others will find a way to wheedle consent from consumers. Those who want to exploit consumer data will cajole, pressure, threaten, mystify, obscure, entice or otherwise coax consumers to agree.

Suddenly, I’m not as sure of my conclusion about consent. What changed my mind? There is a new data point from Apple’s App Tracking Transparency framework. Apple requires mobile application developers to obtain opt-in consent before serving targeted advertising via Apple’s Identifier for Advertisers. Early reports suggest consumers are saying “NO” in overwhelming numbers — overwhelming as in more than 90%.

It isn’t this strong consumer reaction that makes me think consent might possibly have a place. I want to highlight a different aspect of the Apple framework….(More)”.

ASEAN Data Management Framework


ASEAN Framework: “Due to the growing interactions between data, connected things and people, trust in data has become the pre-condition for fully realising the gains of digital transformation. SMEs are treading a fine line between balancing digital initiatives and concurrently managing data protection and customer privacy safeguards to ensure that these do not impede innovation. Therefore, there is a motivation to focus on digital data governance as it is critical to boost economic integration and technology adoption across all sectors in the ten ASEAN Member States (AMS).
To ensure that their data is appropriately managed and protected, organisations need to know what levels of technical, procedural and physical controls they need to put in place. The categorisation of datasets helps organisations manage their data assets and put in place the right level of controls. This applies to both data at rest and data in transit. The establishment of an ASEAN Data Management Framework will promote sound data governance practices by helping organisations to discover the datasets they have, assign them to the appropriate categories, manage the data, and protect it accordingly, all while continuing to comply with relevant regulations. Improved governance and protection will instil trust in data sharing both between organisations and between countries, which will then promote the growth of trade and the flow of data among AMS and their partners in the digital economy….(More)”
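As a rough sketch of how dataset categorisation can drive the level of controls, the Python snippet below maps categories to required safeguards. The category names and control settings are hypothetical stand-ins for illustration, not the framework's actual labels.

```python
# Hypothetical categories and controls, for illustration only.
from dataclasses import dataclass

CONTROLS_BY_CATEGORY = {
    "open":         {"encryption_at_rest": False, "encryption_in_transit": False, "access": "public"},
    "internal":     {"encryption_at_rest": False, "encryption_in_transit": True,  "access": "staff"},
    "confidential": {"encryption_at_rest": True,  "encryption_in_transit": True,  "access": "need-to-know"},
}

@dataclass
class Dataset:
    name: str
    category: str  # one of the keys in CONTROLS_BY_CATEGORY

    def required_controls(self) -> dict:
        # Look up the minimum controls implied by the dataset's category.
        return CONTROLS_BY_CATEGORY[self.category]

customer_records = Dataset(name="customer_records", category="confidential")
print(customer_records.required_controls())
```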

Why Business Schools Need to Teach Experimentation


Elizabeth R. Tenney, Elaine Costa, and Ruchi M. Watson at Harvard Business Review: “…The value of experiments in nonscientific organizations is quite high. Instead of calling in managers to solve every puzzle or dispute large and small (Should we make the background yellow or blue? Should we improve basic functionality or add new features? Are staff properly supported and incentivized to provide rapid responses?), teams can run experiments and measure outcomes of interest and, armed with new data, decide for themselves, or at least put forward a proposal grounded in relevant information. The data also provide tangible deliverables to show to stakeholders to demonstrate progress and accountability.

Experiments spur innovation. They can provide proof of concept and a degree of confidence in new ideas before taking bigger risks and scaling up. When done well, with data collected and interpreted objectively, experiments can also provide a corrective for faulty intuition, inaccurate assumptions, or overconfidence. The scientific method (which powers experiments) is the gold standard of tools to combat bias and answer questions objectively.

But as more and more companies are embracing a culture of experimentation, they face a major challenge: talent. Experiments are difficult to do well. The challenges include the need for specialized statistical knowledge, clear problem definition, and careful interpretation of the results. And it’s not enough to have the skill set. Experiments should ideally be done iteratively, building on prior knowledge and working toward a deeper understanding of the question at hand. There are also the issues of managers’ preparedness to override their intuition when data disagree with it, and their ability to navigate hierarchy and bureaucracy to implement changes based on the experiments’ outcomes.
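For a sense of the statistical knowledge involved, here is a minimal sketch, with made-up numbers, of the analysis a simple yellow-versus-blue background experiment might require: a two-proportion z-test on conversion counts. The use of the statsmodels library is an illustrative assumption, not a method named in the article.

```python
# Illustrative A/B test analysis with invented counts.
from statsmodels.stats.proportion import proportions_ztest

conversions = [230, 198]    # successes in variant A (yellow) and variant B (blue)
visitors    = [2000, 2000]  # sample sizes per variant

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")

if p_value < 0.05:
    print("Difference is statistically significant; consider shipping the better variant.")
else:
    print("No significant difference detected; gather more data or iterate on the design.")
```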

Some companies seem to be hiring small armies of PhDs to meet these competency challenges. (Amazon, for example, employs more than 100 PhD economists.) This isn’t surprising, given that PhDs receive years of training — and that the shrinking tenure-track market in academia has created a glut of PhDs. Other companies are developing employees in-house, training them in narrow, industry-specific methodologies. For example, General Mills recently hired for their innovator incubator group, called g-works, advertising for employees who are “using entrepreneurial skills and an experimental mindset” in what they called a “test and learn environment, with rapid experimentation to validate or invalidate assumptions.” Other companies — including Fidelity, LinkedIn, and Aetna — have hired consultants to conduct experiments, among them Irrational Labs, cofounded by Duke University’s Dan Ariely and the behavioral economist Kristen Berman….(More)”.