International Data Flows and Privacy: The Conflict and its Resolution


World Bank Policy Research Working Paper by Aaditya Mattoo and Joshua P. Meltzer: “The free flow of data across borders underpins today’s globalized economy. But the flow of personal data outside the jurisdiction of national regulators also raises concerns about the protection of privacy. Addressing these legitimate concerns without undermining international integration is a challenge. This paper describes and assesses three types of responses to this challenge: unilateral development of national or regional regulation, such as the European Union’s Data Protection Directive and forthcoming General Data Protection Regulation; international negotiation of trade disciplines, most recently in the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP); and international cooperation involving regulators, most significantly in the EU-U.S. Privacy Shield Agreement.

The paper argues that unilateral restrictions on data flows are costly and can hurt exports, especially of data-processing and other data-based services; international trade rules that limit only the importers’ freedom to regulate cannot address the challenge posed by privacy; and regulatory cooperation that aims at harmonization and mutual recognition is not likely to succeed, given the desirable divergence in national privacy regulation. The way forward is to design trade rules (as the CPTPP seeks to do) that reflect the bargain central to successful international cooperation (as in the EU-U.S. Privacy Shield): regulators in data destination countries would assume legal obligations to protect the privacy of foreign citizens in return for obligations on data source countries not to restrict the flow of data. Existing multilateral rules can help ensure that any such arrangements do not discriminate against and are open to participation by other countries….(More)”.

Privacy by Design: Building a Privacy Policy People Actually Want to Read


Richard Mabey at the Artificial Lawyer: “…when it came to updating our privacy policy ahead of GDPR it was important to us from the get-go that our privacy policy was not simply a compliance exercise. Legal documents should not be written by lawyers for lawyers; they should be useful, engaging and designed for the end user. But it seemed that we weren’t the only ones to think this. When we read the regulations, it turned out the EU agreed.

Article 12 mandates that privacy notices be “concise, transparent, intelligible and easily accessible”. Legal design is not just a nice-to-have in the context of privacy; it’s actually a regulatory imperative. With this mandate, the team at Juro set out with a simple aim: design a privacy policy that people would actually want to read.

Here’s how we did it.

Step 1: framing the problem

When it comes to privacy notices, the requirements of GDPR are heavy and the consequences of non-compliance enormous (potentially 4% of annual turnover). We knew therefore that there would be an inherent tension between making the policy engaging and readable, and at the same time robust and legally watertight.

Lawyers know that when it comes to legal drafting, it’s much harder to be concise than wordy. Specifically, it’s much harder to be concise and preserve legal meaning than it is to be wordy. But the fact remains: privacy notices are suffered as downside-risk protections or compliance items, rather than embraced as important customer communications at key touchpoints. So how to marry the two?

We decided that the obvious route of striking out words and translating legalese was not enough. We wanted cakeism: how can we have an exceptionally robust privacy policy, preserve legal nuance and actually make it readable?

Step 2: changing the design process

The usual flow of creating a privacy policy is pretty basic: (1) management asks legal to produce privacy policy, (2) legal sends Word version of privacy policy back to management (back and forth ensues), (3) management checks Word doc and sends it on to engineering for implementation, (4) privacy policy goes live…

Rather than the standard process, we decided to start with the end user and work backwards, running a design sprint on our privacy notice with multiple iterations, rapid prototyping and user testing.

Similarly, this was not going to be a process just for lawyers. We put together a multi-disciplinary team co-led by me and legal information designer Stefania Passera, with input from our legal counsel Adam, Tom (our content editor), Alice (our marketing manager) and Anton (our front-end developer).

Step 3: choosing design patterns...(More)”.

Open data privacy and security policy issues and its influence on embracing the Internet of Things


Radhika Garg in First Monday: “Information and communication technologies (ICT) are changing the way people interact with each other. Today, every physical device can have the capability to connect to the Internet (digital presence) to send and receive data. Internet-connected cameras, home automation systems, and connected cars are all examples of the interconnected Internet of Things (IoT). IoT can bring benefits to users in terms of monitoring and intelligent capabilities; however, these devices collect, transmit, and store vast amounts of personal and individual data, have the potential to share it, encroach on private spaces, and can be vulnerable to security breaches. The ecosystem of IoT comprises not only users, various sensors, and devices but also other stakeholders of IoT such as data collectors, processors, regulators, and policy-makers. Even though the number of commercially available IoT devices is rising steeply, the uptake of these devices has been slow, and abandonment rapid. This paper explains how stakeholders (including users) and technologies form an assemblage in which these stakeholders are cumulatively responsible for making IoT an essential element of day-to-day living and connectivity. To this end, this paper examines open issues in data privacy and security policies (from the perspectives of the European Union and North America), and their effects on stakeholders in the ecosystem. This paper concludes by explaining how these open issues, if unresolved, can lead to another wave of digital division and discrimination in the use of IoT….(More)”.

Everyone can now patrol this city’s streets for crime. ACLU says that’s a bad idea


NJ.com: “All eyes are on the city of Newark, literally. The city recently revealed its new “Citizen Virtual Patrol” program, which places 60 cameras around the city’s intersections, putting the city’s streets, and those who venture out on them, on display seven days a week, 24 hours a day.

That isn’t startling, as cameras have been up in the city for the past dozen years, says Anthony Ambrose, the city’s public safety director.

What is new, and not found in other cities, is that police officers won’t be the only ones trolling for criminals. Now, anyone who’s willing to submit their email address and download an app to their home computer or phone can watch those cameras.

Citizens can then alert police when they see suspicious activity and remain anonymous. “Right now, in this era of society, it’s impossible to be outside without being recorded,” said Newark Mayor Ras Baraka. “We need to be able to use that technology to allow the police to do their job more efficiently and more cost-effectively.”

Those extra eyes, however, come at a cost. The cameras could also provide stalkers with their victims’ whereabouts, capture intimate scenes, and even reveal when residents leave their homes vacant as they head out on vacation.

The American Civil Liberties Union of New Jersey is asking Newark to end the program, saying it’s a violation of privacy and the Fourth Amendment.

“Newark is crowdsourcing its responsibility to the public instead of engaging in policing,” said ACLU-NJ Executive Director Amol Sinha.

“There’s a fundamental difference between a civilian using their phone to record a certain area and the government having cameras where people have a reasonable expectation of privacy,” Sinha said….

The city also plans to launch a campaign informing residents about the cameras.

“It’s about transparency,” Ambrose said. “We’re not saying we put cameras out there and you don’t know where they are at, we’re telling you.” …(More)”.

Privacy and Freedom of Expression In the Age of Artificial Intelligence


Joint Paper by Privacy International and ARTICLE 19: “Artificial Intelligence (AI) is part of our daily lives. This technology shapes how people access information, interact with devices, share personal information, and even understand foreign languages. It also transforms how individuals and groups can be tracked and identified, and dramatically alters what kinds of information can be gleaned about people from their data. AI has the potential to revolutionise societies in positive ways. However, as with any scientific or technological advancement, there is a real risk that the use of new tools by states or corporations will have a negative impact on human rights. While AI impacts a plethora of rights, ARTICLE 19 and Privacy International are particularly concerned about the impact it will have on the right to privacy and the right to freedom of expression and information. This scoping paper focuses on applications of ‘artificial narrow intelligence’: in particular, machine learning and its implications for human rights.

The aim of the paper is fourfold:

1. Present key technical definitions to clarify the debate;

2. Examine key ways in which AI impacts the right to freedom of expression and the right to privacy and outline key challenges;

3. Review the current landscape of AI governance, including various existing legal, technical, and corporate frameworks and industry-led AI initiatives that are relevant to freedom of expression and privacy; and

4. Provide initial suggestions for rights-based solutions which can be pursued by civil society organisations and other stakeholders in AI advocacy activities….(More)”.

China asserts firm grip on research data


ScienceMag: “In a move few scientists anticipated, the Chinese government has decreed that all scientific data generated in China must be submitted to government-sanctioned data centers before appearing in publications. At the same time, the regulations, posted last week, call for open access and data sharing.

The possibly conflicting directives puzzle researchers, who note that the yet-to-be-established data centers will have latitude in interpreting the rules. Scientists in China can still share results with overseas collaborators, says Xie Xuemei, who specializes in innovation economics at Shanghai University. Xie also believes that the new requirements to register data with authorities before submitting papers to journals will not affect most research areas. Gaining approval could mean publishing delays, Xie says, but “it will not have a serious impact on scientific research.”

The new rules, issued by the powerful State Council, apply to all groups and individuals generating research data in China. The creation of a national data center will apparently fall to the science ministry, though other ministries and local governments are expected to create their own centers as well. Exempted from the call for open access and sharing are data involving state and business secrets, national security, “public interest,” and individual privacy… (More)”

Privacy’s Blueprint: The Battle to Control the Design of New Technologies


Book by Woodrow Hartzog: “Every day, Internet users interact with technologies designed to undermine their privacy. Social media apps, surveillance technologies, and the Internet of Things are all built in ways that make it hard to guard personal information. And the law says this is okay because it is up to users to protect themselves—even when the odds are deliberately stacked against them.

In Privacy’s Blueprint, Woodrow Hartzog pushes back against this state of affairs, arguing that the law should require software and hardware makers to respect privacy in the design of their products. Current legal doctrine treats technology as though it were value-neutral: only the user decides whether it functions for good or ill. But this is not so. As Hartzog explains, popular digital tools are designed to expose people and manipulate users into disclosing personal information.

Against the often self-serving optimism of Silicon Valley and the inertia of tech evangelism, Hartzog contends that privacy gains will come from better rules for products, not users. The current model of regulating use fosters exploitation. Privacy’s Blueprint aims to correct this by developing the theoretical underpinnings of a new kind of privacy law responsive to the way people actually perceive and use digital technologies. The law can demand encryption. It can prohibit malicious interfaces that deceive users and leave them vulnerable. It can require safeguards against abuses of biometric surveillance. It can, in short, make the technology itself worthy of our trust….(More)”.

Blockchain To Solve Bahamas’ ‘Major Workforce Waste’


Tribune 242: “The Government’s first-ever use of blockchain technology will tackle what was yesterday branded “an enormous waste of human capital”.

The Inter-American Development Bank (IDB), unveiling a $200,000 ‘technical co-operation’ project, revealed that the Minnis administration plans to deploy the technology as a way to determine the success of an apprenticeship programme targeted at 1,350 Bahamians aged between 16 and 40 who are either unemployed or school leavers.

Documents obtained by Tribune Business reveal that the Government is also looking to blockchain to combat the widespread problem of lost/missing student records and certifications, which the IDB described as a major constraint to developing a skilled, productive Bahamian workforce.

“Currently, the certification process in the Bahamas lacks technological advances,” the IDB report said. “Today, student records management is a lengthy and cumbersome process. Students do not own their own records of achievement, depending on issuing institutions to verify their achievements throughout their lives.

“This results not only in a verification process that can last weeks or months, and involves hours of human labour and (fallible) judgment, but also creates inefficiencies in placing new students and processing transfer equivalencies.

“In extreme cases, when the issuing institution goes out of business, loses their records or is destroyed due to natural disasters, students have no way of verifying their achievements and must often start from nothing. This results in an enormous waste of human capital.”

The IDB report said the Bahamas was now “in a singular position to highlight the value of blockchain-based digital records for both students and institutions”, with the technology seen as a mechanism for Bahamians to possess and share records of their educational achievements. Blockchain technology allows information to be recorded, shared and updated by a particular community, with each member maintaining their own copy of data that has to be verified collectively.

Anything that can be described in digital form, such as contracts, transactions and assets, could thus be suitable for blockchain solutions. And Blockcerts, the open standard for creating, issuing and verifying blockchain-based certificates, ensures they are tamper-proof. “Not only does the Blockcerts standard (open standard for digital documents anchored to the blockchain) allow Bahamian institutions to prevent records fraud, safeguarding and building confidence in their brands, but it allows them to leapfrog the digitisation process, skipping many of the interoperability issues associated with legacy digital formats (i.e. PDF, XML),” the IDB report said.

“Blockcerts provides students with autonomy, privacy, security and greater access all over the world, while allowing the Bahamian government to consolidate and streamline its credentialing operations in a way that produces real return on investment over a period. Primary use cases include: Student diplomas, professional certifications, awards, transcripts, enrollment verification, employment verification, verifications of qualifications, credit equivalencies and more.”…(More)”.
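The tamper-proofing described above rests on a simple mechanism: a cryptographic hash of each credential is anchored to a blockchain, so any later alteration of the holder’s copy can be detected by re-hashing it and comparing against the anchor. The Python sketch below illustrates that mechanism only; it is not the Blockcerts standard itself, and the certificate fields, function names, and in-memory “anchor” are hypothetical stand-ins for a real on-chain transaction.

```python
import hashlib
import json

def certificate_hash(cert: dict) -> str:
    """Hash a canonical JSON serialisation of the certificate."""
    canonical = json.dumps(cert, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Illustrative credential; real Blockcerts documents follow a richer JSON schema.
diploma = {
    "recipient": "Jane Doe",
    "credential": "BSc Computer Science",
    "issuer": "Example Institution",
    "issued_on": "2018-06-01",
}

# At issuance, the institution would record this digest in a blockchain
# transaction; here a local variable stands in for the on-chain anchor.
anchored_digest = certificate_hash(diploma)

def verify(cert: dict, anchored: str) -> bool:
    """Re-hash the holder's copy and compare with the anchored digest;
    any edit to the record produces a different hash."""
    return certificate_hash(cert) == anchored

print(verify(diploma, anchored_digest))  # True: record intact
print(verify(dict(diploma, credential="PhD"), anchored_digest))  # False: tampered
```

Because verification needs only the document and the public anchor, a credential can be checked in seconds without contacting the issuing institution, which is the property the IDB argues would survive even if an institution closes or loses its records.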

How artificial intelligence is transforming the world


Report by Darrell West and John Allen at Brookings: “Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States were asked about AI in 2017, only 17 percent said they were familiar with it. A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values.

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity….(More)

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion

Leveraging the Power of Bots for Civil Society


Allison Fine & Beth Kanter at the Stanford Social Innovation Review: “Our work in technology has always centered around making sure that people are empowered, healthy, and feel heard in the networks within which they live and work. The arrival of the bots changes this equation. It’s not enough to make sure that people are heard; we now have to make sure that technology adds value to human interactions, rather than replacing them or steering social good in the wrong direction. If technology creates value in a human-centered way, then we will have more time to be people-centric.

So before the bots become involved with almost every facet of our lives, it is incumbent upon those of us in the nonprofit and social-change sectors to start a discussion on how we both hold on to and lead with our humanity, as opposed to allowing the bots to lead. We are unprepared for this moment, and it does not feel like an understatement to say that the future of humanity relies on our ability to make sure we’re in charge of the bots, not the other way around.

To Bot or Not to Bot?

History shows us that bots can be used in positive ways. Early-adopter nonprofits have used bots to automate civic engagement, such as helping citizens register to vote, contact their elected officials, and elevate marginalized voices and issues. And nonprofits are beginning to use online conversational interfaces like Alexa for social good engagement. For example, the Audubon Society has released an Alexa skill to teach bird calls.

And for over a decade, Invisible People founder Mark Horvath has been providing “virtual case management” to homeless people who reach out to him through social media. Horvath says homeless agencies can use chat bots programmed to deliver basic information to people in need, and thus help them connect with services. This reduces the workload for case managers while making data entry more efficient. He explains that it works like an airline reservation: the homeless person completes the “paperwork” for services by interacting with a bot and then later shows their ID at the agency. Bots can greatly reduce the need for a homeless person to wait long hours to get needed services. Certainly this is a much more compassionate use of bots than robot security guards who harass homeless people sleeping in front of a business.
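As a concrete (and deliberately simplified) illustration of the intake pattern Horvath describes, the sketch below shows a rule-based bot that walks a person through the basic “paperwork” questions. The prompts and field names are hypothetical; a production bot would hand the record off to an agency’s case-management system rather than print it to the console.

```python
# A minimal, hypothetical sketch of a rule-based intake bot: it collects
# the basic details up front so a case manager doesn't have to key them
# in later. All prompts and field names are illustrative.

INTAKE_QUESTIONS = [
    ("name", "What name should we use for you?"),
    ("location", "What city or neighborhood are you in right now?"),
    ("need", "What do you need most today (shelter, food, help with ID)?"),
]

def run_intake() -> dict:
    """Ask each question in turn and return the completed 'paperwork'."""
    record = {}
    for field, prompt in INTAKE_QUESTIONS:
        record[field] = input(prompt + " ").strip()
    return record

if __name__ == "__main__":
    intake = run_intake()
    # In a real deployment this record would flow into the agency's
    # case-management system; the person then shows ID on arrival.
    print("Thank you. A case manager will review your request:", intake)
```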

But there are also examples where a bot’s usefulness seems limited. A UK-based social service charity, Mencap, which provides support and services to children with learning disabilities and their parents, has a chatbot on its website as part of a public education effort called #HereIAm. The campaign is intended to help people understand more about what it’s like having a learning disability, through the experience of a “learning disabled” chatbot named Aeren. However, this bot can only answer questions, not ask them, and it doesn’t become smarter through human interaction. Is this the best way for people to understand the nature of being learning disabled? Is it making the difficulties feel more or less real for the inquirers? It is clear Mencap thinks the interaction is valuable, as they reported a 3 percent increase in awareness of their charity….

The following discussion questions are the start of conversations we need to have within our organizations and as a sector on the ethical use of bots for social good:

  • What parts of our work will benefit from greater efficiency without reducing the humanness of our efforts? (“Humanness” meaning the power and opportunity for people to learn from and help one another.)
  • Do we have a privacy policy for the use and sharing of data collected through automation? Does the policy emphasize protecting the data of end users? Is the policy easily accessible by the public?
  • Do we make it clear to the people using the bot when they are interacting with a bot?
  • Do we regularly include clients, customers, and end users as advisors when developing programs and services that use bots for delivery?
  • Should bots designed for service delivery also have fundraising capabilities? If so, can we ensure that our donors are not emotionally coerced into giving more than they want to?
  • In order to truly understand our clients’ needs, motivations, and desires, have we designed our bots’ conversational interactions with empathy and compassion, or involved social workers in the design process?
  • Have we planned for weekly checks of the data generated by the bots to ensure that we are staying true to our values and original intentions, as AI helps them learn?….(More)”.