Article by Sara Morrison: “If you’re an Instagram user, you may have recently seen a pop-up asking if you want the service to “use your app and website activity” to “provide a better ads experience.” At the bottom there are two boxes: In a slightly darker shade of black than the pop-up background, you can choose to “Make ads less personalized.” A bright blue box urges users to “Make ads more personalized.”
This is an example of a dark pattern: design that manipulates or heavily influences users to make certain choices. Instagram uses terms like “activity” and “personalized” instead of “tracking” and “targeting,” so the user may not realize what they’re actually giving the app permission to do. Most people don’t want Instagram and its parent company, Facebook, to know everything they do and everywhere they go. But a “better experience” sounds like a good thing, so Instagram makes the option it wants users to select more prominent and attractive than the one it hopes they’ll avoid.
There’s now a growing movement to ban dark patterns, and that may well lead to consumer protection laws and action as the Biden administration’s technology policies and initiatives take shape. California is currently tackling dark patterns in its evolving privacy laws, and Washington state’s latest privacy bill includes a provision about dark patterns.
“When you look at the way dark patterns are employed across digital engagement, generally, [the internet allows them to be] substantially exacerbated and made less visible to consumers,” Rebecca Kelly Slaughter, acting chair of the Federal Trade Commission (FTC), told Recode. “Understanding the effect of that is really important to us as we craft our strategy for the digital economy.”
Dark patterns have for years been tricking internet users into giving up their data, money, and time. But if some advocates and regulators get their way, they may not be able to do that for much longer…(More)”.
Book by Michael Heller and James Salzman: “A hidden set of rules governs who owns what–explaining everything from whether you can recline your airplane seat to why HBO lets you borrow a password illegally–and in this lively and entertaining guide, two acclaimed law professors reveal how things become “mine.”
“Mine” is one of the first words babies learn. By the time we grow up, the idea of ownership seems natural, whether buying a cup of coffee or a house. But who controls the space behind your airplane seat: you reclining or the squished laptop user behind? Why is plagiarism wrong, but it’s okay to knock off a recipe or a dress design? And after a snowstorm, why does a chair in the street hold your parking space in Chicago, but in New York you lose the space and the chair?
Mine! explains these puzzles and many more. Surprisingly, there are just six simple stories that everyone uses to claim everything. Owners choose the story that steers us to do what they want. But we can always pick a different story. This is true not just for airplane seats, but also for battles over digital privacy, climate change, and wealth inequality. As Michael Heller and James Salzman show–in the spirited style of Freakonomics, Nudge, and Predictably Irrational–ownership is always up for grabs.
With stories that are eye-opening, mind-bending, and sometimes infuriating, Mine! reveals the rules of ownership that secretly control our lives….(More)”.
TechRepublic: “The pandemic forced the enterprise to quickly pivot from familiar business practices and develop ways to successfully function while keeping employees safe. A new report from Zoom, The Impact of Video Communications During COVID-19, was released Thursday.
“Video communications were suddenly our lifeline to society, enabling us to continue work and school in a digital environment,” said Brendan Ittelson, chief technology officer of Zoom, on the company’s blog. “Any baby steps toward digital transformation suddenly had to become leaps and bounds, with people reimagining their entire day-to-day practically overnight.”
Zoom commissioned the Boston Consulting Group (BCG) to conduct a survey and economic analysis evaluating the economic impact of remote work and video communications solutions during the pandemic, with a focus on which industries pivoted business processes using video conferencing, achieving business continuity and even growth during a time of significant economic turmoil.
- In the U.S., the ability to work remotely saved 2.28 million jobs, as up to three times as many employees worked remotely and use of video conferencing solutions nearly tripled.
- Among the businesses surveyed, total time spent on video conferencing solutions increased to as much as five times pre-pandemic levels.
- BCG’s COVID-19 employee sentiment survey from 2020 showed that 70% of managers are more open to flexible remote working models than they were before the pandemic.
- Hybrid working models will be the norm soon. The businesses surveyed expect more than a third of employees to work remotely beyond the pandemic.
- The U.K. saved 550,000 jobs because of remote capabilities; Germany saved 372,000 jobs and France saved 250,000….(More)”.
Essay by Massimo Russo and Tian Feng: “…Cloud providers are integrating data-sharing capabilities into their product suites and investing in R&D that addresses new features such as data directories, trusted execution environments, and homomorphic encryption. They are also partnering with industry-specific ecosystem orchestrators to provide joint solutions.
Cloud providers are moving beyond infrastructure to enable broader data sharing. In 2018, for example, Microsoft teamed up with Oracle and SAP to kick off its Open Data Initiative, which focuses on interoperability among the three large platforms. Microsoft has also begun an Open Data Campaign to close the data divide and help smaller organizations get access to data needed for innovation in artificial intelligence (AI). Amazon Web Services (AWS) has begun a number of projects designed to promote open data, including the AWS Data Exchange and the Open Data Sponsorship Program. In addition to these large providers, specialty technology companies and startups are likewise investing in solutions that further data sharing.
Technology solutions today generally fall into three categories: mitigating risks, enhancing value, and reducing friction. The following is a noncomprehensive list of solutions in each category.
1. Mitigating the Risks of Data Sharing
Potential financial, competitive, and brand risks associated with data disclosure inhibit data sharing. To address these risks, data platforms are embedding solutions to control use, limit data access, encrypt data, and create substitute or synthetic data. (See slide 2 in the slideshow.)
Data Breaches. Here are some of the technological solutions designed to prevent data breaches and unauthorized access to sensitive or private data:
- Data modification techniques alter individual data elements or full data sets while maintaining data integrity. They provide increasing levels of protection but at a cost: loss of granularity of the underlying data. De-identification and masking strip personal identifier information and use encryption, allowing most of the data value to be preserved. More complex encryption can increase security, but it also reduces the resolution of information in the data set.
- Secure data storage and transfer can help ensure that data stays safe both at rest and in transit. Cloud solutions such as Microsoft Azure and AWS have invested in significant platform security and interoperability.
- Distributed ledger technologies, such as blockchain, permit data to be stored and shared in a decentralized manner that makes it very difficult to tamper with. IOTA, for example, is a distributed ledger platform for IoT applications supported by industry players such as Bosch and Software AG.
- Secure computation enables analysis without revealing details of the underlying data. This can be done at a software level, with techniques such as secure multiparty computation (MPC) that allow potentially untrusting parties to jointly compute a function without revealing their private inputs. For example, with MPC, two parties can calculate the intersection of their respective encrypted data sets while revealing only information about the intersection. Google, for one, is embedding MPC in its open-source Private Join and Compute tools.
- Trusted execution environments (TEEs) are hardware modules separate from the operating system that allow for secure data processing within an encrypted private area on the chip. Startup Decentriq is partnering with Intel and Microsoft to explore confidential computing by means of TEEs. There is a significant opportunity for IoT equipment providers to integrate TEEs into their products….(More)”
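The de-identification and masking techniques described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not any platform’s actual implementation (the field names and key handling are assumptions): direct identifiers are replaced with keyed pseudonyms so records can still be joined and counted, while the raw values stay unrecoverable without the key.

```python
import hmac
import hashlib

# Hypothetical key; in practice this would live in a secrets vault and be rotated.
SECRET_KEY = b"example-pseudonymization-key"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable keyed pseudonym (HMAC-SHA256).
    The same input always maps to the same token, so joins and aggregate counts
    still work, but the original value cannot be recovered without the key."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def de_identify(record: dict) -> dict:
    """Strip or mask personal identifiers while keeping analytic fields intact."""
    masked = dict(record)
    for field in ("name", "email"):   # direct identifiers: replace with pseudonyms
        if field in masked:
            masked[field] = pseudonymize(masked[field])
    masked.pop("ssn", None)           # highly sensitive field: drop entirely
    return masked

record = {"name": "Ada Lovelace", "email": "ada@example.com",
          "ssn": "123-45-6789", "purchase_total": 42.50}
print(de_identify(record))
```

This illustrates the trade-off the excerpt names: the masked record keeps its analytic value (totals, stable join keys) while shedding granularity, since the original identifiers are no longer present.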
Book by Michael Kende: “The upside of the Internet is free Wi-Fi at Starbucks, FaceTime over long distances, and nearly unlimited data for downloading or streaming. The downside is that our data goes to companies that use it to make money, our financial information is exposed to hackers, and the market power of technology companies continues to increase. In The Flip Side of Free, Michael Kende shows that free Internet comes at a price. We’re beginning to realize this. Our all-purpose techno-caveat is “I love my smart speaker,” but is it really tracking everything I do? Listening to everything I say?
Kende explains the unique economics of the Internet and the paradoxes that result. The most valuable companies in the world are now Internet companies, built on data often exchanged for free content and services. Many users know the impact of this trade-off on privacy but continue to use the services anyway. Moreover, although the Internet lowers barriers for companies to enter markets, it is hard to compete with the largest providers. We complain about companies having too much data, but developing countries without widespread Internet usage may suffer from the reverse: not enough data collection for the development of advanced services—which leads to a worsening data divide between developed and developing countries.
What’s the future of free? Data is the price of free service, and the new currency of the Internet age. There’s nothing necessarily wrong with free, Kende says, as long as we anticipate and try to mitigate what’s on the flip side…(More)”.
Bhaskar Chakravorti, Ajay Bhalla, and Ravi Shankar Chaturvedi at Harvard Business Review: “As economies around the world digitalize rapidly in response to the pandemic, one component that can sometimes get left behind is user trust. What does it take to build out a digital ecosystem that users will feel comfortable actually using? To answer this question, the authors explored four components of digital trust: the security of an economy’s digital environment; the quality of the digital user experience; the extent to which users report trust in their digital environment; and the extent to which users actually use the digital tools available to them. They then used almost 200 indicators to rank 42 global economies on their performance in each of these four metrics, finding a number of interesting trends around how different economies have developed mechanisms for engendering trust, as well as how different types of trust do — or don’t — correspond to other digital development metrics…(More)”.
Editorial Board of the Financial Times: “…Most of the biggest tech companies, which have been at the forefront of the AI revolution, are well aware of the risks of deploying flawed systems at scale. Tech companies publicly acknowledge the need for societal acceptance if their systems are to be trusted. Although historically allergic to government intervention, some industry bosses are even calling for stricter regulation in areas such as privacy and facial recognition technology.
A parallel is often drawn between two conferences held in Asilomar, California, in 1975 and 2017. At the first, a group of biologists, lawyers and doctors created a set of ethical guidelines around research into recombinant DNA. This opened an era of responsible and fruitful biomedical research that has helped us deal with the Covid-19 pandemic today. Inspired by the example, a group of AI experts repeated the exercise 42 years later and came up with an impressive set of guidelines for the beneficial use of the technology.
Translating such high principles into everyday practice is hard, especially when so much money is at stake. But three rules should always apply. First, teams that develop AI systems must be as diverse as possible to reduce the risk of bias. Second, complex AI systems should never be deployed in any field unless they offer a demonstrable improvement on what already exists. Third, algorithms that companies and governments deploy in sensitive areas such as healthcare, education, policing, justice and workplace monitoring should be subject to audit and comprehension by outside experts.
The US Congress has been considering an Algorithmic Accountability Act, which would compel companies to assess the probable real-world impact of automated decision-making systems. There is even a case for creating the algorithmic equivalent of the US Food and Drug Administration to preapprove the use of AI in sensitive areas. Criminal liability for those who deploy irresponsible AI systems might also help concentrate minds.
The AI industry has talked a good game about AI ethics. But if some of the most sophisticated companies in this field cannot even convince their own employees of their good intentions, they will struggle to convince anyone else. That could result in a fierce public backlash against companies using AI. Worse, it may yet impede the real benefits of using AI for societal good in areas such as healthcare. The tech sector has to restore credibility for all our sakes….(More)”
Book by Aileen Nielsen: “Fairness is becoming a paramount consideration for data scientists. Mounting evidence indicates that the widespread deployment of machine learning and AI in business and government is reproducing the same biases we’re trying to fight in the real world. But what does fairness mean when it comes to code? This practical book covers basic concerns related to data security and privacy to help data and AI professionals use code that’s fair and free of bias.
Many realistic best practices are emerging at all steps along the data pipeline today, from data selection and preprocessing to closed model audits. Author Aileen Nielsen guides you through technical, legal, and ethical aspects of making code fair and secure, while highlighting up-to-date academic research and ongoing legal developments related to fairness and algorithms.
- Identify potential bias and discrimination in data science models
- Use preventive measures to minimize bias when developing data modeling pipelines
- Understand what data pipeline components implicate security and privacy concerns
- Write data processing and modeling code that implements best practices for fairness
- Recognize the complex interrelationships between fairness, privacy, and data security created by the use of machine learning models
- Apply normative and legal concepts relevant to evaluating the fairness of machine learning models…(More)”.
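The first item on this list, identifying potential bias, often starts with a simple disparity metric. The sketch below is a hedged illustration rather than anything from the book: it computes the demographic parity difference, the gap in positive-prediction rates across groups, for a hypothetical set of binary model outputs.

```python
def selection_rate(preds, groups, group):
    """Fraction of positive (1) predictions within one group."""
    hits = [p for p, g in zip(preds, groups) if g == group]
    return sum(hits) / len(hits)

def demographic_parity_difference(preds, groups):
    """Largest gap in positive-prediction rates across groups; 0.0 means parity."""
    rates = [selection_rate(preds, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Hypothetical binary decisions and group labels.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # group a: 3/4, group b: 1/4 → 0.5
```

A gap this large would flag the model for the kind of closer audit the book describes; demographic parity is only one of several competing fairness definitions, and which one applies depends on the normative and legal context.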
Paper by Jens Prufer and Inge Graef: “To prevent market tipping, which inhibits innovation, there is an urgent need to mandate sharing of user information in data-driven markets. Existing legal mechanisms to impose data sharing under EU competition law and data portability under the GDPR are not sufficient to tackle this problem. Mandated data sharing requires the design of a governance structure that combines elements of economically efficient centralization with legally necessary decentralization. We identify three feasible options. One is to centralize investigations and enforcement in a European Data Sharing Agency (EDSA), while decision-making power lies with National Competition Authorities in a Board of Supervisors. The second option is to set up a Data Sharing Cooperation Network coordinated through a European Data Sharing Board, with the National Competition Authority best placed to run the investigation also adjudicating and enforcing the mandatory data-sharing decision across the EU. A third option is to mix both governance structures and to task national authorities with investigation and adjudication and the EU-level EDSA with enforcement of data sharing….(More)”
Article by Jillian S. Ambroz: “A federal agency is gearing up to make wide-ranging policy changes on consumers’ access to their financial data.
The Consumer Financial Protection Bureau (CFPB) is looking to implement the portion of the 2010 Dodd-Frank Wall Street Reform and Consumer Protection Act pertaining to a consumer’s right to his or her own financial data, detailed in Section 1033.
The agency has been laying the groundwork on this move for years, from requesting information in 2016 from financial institutions to hosting a symposium earlier this year on the problems of screen scraping, a risky but common method of collecting consumer data.
Now the agency, which was established by the Dodd-Frank Act, is asking for comments on this critical and controversial topic ahead of the proposed rulemaking. Unlike other regulations that affect single industries, this could be all-encompassing because the consumer data rule touches almost every market the agency covers, according to the story in American Banker.
The Trump administration all but ‘systematically neutered’ the agency.
With the rulemaking, the agency seeks to clarify its compliance expectations and help establish market practices to ensure consumers have access to their financial data. The agency sees an opportunity here to help shape this evolving area of financial technology, or fintech, recognizing both the opportunities and the risks to consumers as more fintechs become enmeshed with their data and day-to-day lives.
Its goal is “to better effectuate consumer access to financial records,” as stated in the regulatory filing….(More)”.