Privacy by Design: Building a Privacy Policy People Actually Want to Read


Richard Mabey at the Artificial Lawyer: “…when it came to updating our privacy policy ahead of GDPR it was important to us from the get-go that our privacy policy was not simply a compliance exercise. Legal documents should not be written by lawyers for lawyers; they should be useful, engaging and designed for the end user. But it seemed that we weren’t the only ones to think this. When we read the regulations, it turned out the EU agreed.

Article 12 mandates that privacy notices be “concise, transparent, intelligible and easily accessible”. Legal design is not just a nice-to-have in the context of privacy; it’s actually a regulatory imperative. With this mandate, the team at Juro set out with a simple aim: design a privacy policy that people would actually want to read.

Here’s how we did it.

Step 1: framing the problem

When it comes to privacy notices, the requirements of GDPR are heavy and the consequences of non-compliance enormous (potentially 4% of annual turnover). We knew therefore that there would be an inherent tension between making the policy engaging and readable, and at the same time robust and legally watertight.

Lawyers know that when it comes to legal drafting, it’s much harder to be concise than wordy; specifically, it’s much harder to be concise and preserve legal meaning than it is to be wordy. But the fact remains: privacy notices are suffered as downside-risk protections or compliance items, rather than embraced as important customer communications at key touchpoints. So how to marry the two?

We decided that the obvious route of striking out words and translating legalese was not enough. We wanted cakeism: how can we have an exceptionally robust privacy policy, preserve legal nuance and actually make it readable?

Step 2: changing the design process

The usual flow of creating a privacy policy is pretty basic: (1) management asks legal to produce privacy policy, (2) legal sends Word version of privacy policy back to management (back and forth ensues), (3) management checks Word doc and sends it on to engineering for implementation, (4) privacy policy goes live…

Rather than follow the standard process, we decided to start with the end user and work backwards, running a design sprint on our privacy notice with multiple iterations, rapid prototyping and user testing.

Similarly, this was not going to be a process just for lawyers. We put together a multi-disciplinary team co-led by me and legal information designer Stefania Passera, with input from our legal counsel Adam, Tom (our content editor), Alice (our marketing manager) and Anton (our front-end developer).

Step 3: choosing design patterns...(More).

Open data privacy and security policy issues and its influence on embracing the Internet of Things


Radhika Garg in First Monday: “Information and communication technologies (ICT) are changing the way people interact with each other. Today, every physical device can have the capability to connect to the Internet (digital presence) to send and receive data. Internet-connected cameras, home automation systems, and connected cars are all examples of the interconnected Internet of Things (IoT). IoT can bring benefits to users in terms of monitoring and intelligent capabilities; however, these devices collect, transmit, store, and have the potential to share vast amounts of personal and individual data that encroach on private spaces and can be vulnerable to security breaches. The ecosystem of IoT comprises not only users, various sensors, and devices but also other stakeholders of IoT such as data collectors, processors, regulators, and policy-makers. Even though the number of commercially available IoT devices is on a steep rise, the uptake of these devices has been slow, and abandonment rapid. This paper explains how stakeholders (including users) and technologies form an assemblage in which these stakeholders are cumulatively responsible for making IoT an essential element of day-to-day living and connectivity. To this end, this paper examines open issues in data privacy and security policies (from the perspectives of the European Union and North America), and their effects on stakeholders in the ecosystem. This paper concludes by explaining how these open issues, if unresolved, can lead to another wave of digital division and discrimination in the use of IoT….(More)”.

Everyone can now patrol this city’s streets for crime. ACLU says that’s a bad idea


NJ.com: “All eyes are on the city of Newark, literally.  The city recently revealed its new “Citizen Virtual Patrol” program, which places 60 cameras around the city’s intersections, putting the city’s streets, and those who venture out on them, on display seven days a week, 24 hours a day.

That isn’t startling, as cameras have been up in the city for the past dozen years, says Anthony Ambrose, the city’s public safety director.

What is new, and not found in other cities, is that police officers won’t be the only ones trolling for criminals. Now, anyone who’s willing to submit their email address and download an app onto their home computer or phone can watch those cameras.

Citizens can then alert police when they see suspicious activity and remain anonymous.  “Right now, in this era of society, it’s impossible to be outside without being recorded,” said Newark Mayor Ras Baraka. “We need to be able to use that technology to allow the police to do their job more efficiently and more cost effective.”

Those extra eyes, however, come at a cost. The cameras could also provide stalkers with their victims’ whereabouts, capture intimate scenes, and even reveal when residents leave their homes vacant as they head out on vacation.

The American Civil Liberties Union of New Jersey is asking Newark to end the program, saying it’s a violation of privacy and the Fourth Amendment.

“Newark is crowdsourcing its responsibility to the public instead of engaging in policing,” said ACLU-NJ Executive Director Amol Sinha.

“There’s a fundamental difference between a civilian using their phone to record a certain area and the government having cameras where people have a reasonable expectation of privacy,” Sinha said….

The city also plans to launch a campaign informing residents about the cameras.

“It’s about transparency,” Ambrose said. “We’re not saying we put cameras out there and you don’t know where they are at, we’re telling you.” …(More)”.

Privacy and Freedom of Expression in the Age of Artificial Intelligence


Joint Paper by Privacy International and ARTICLE 19: “Artificial Intelligence (AI) is part of our daily lives. This technology shapes how people access information, interact with devices, share personal information, and even understand foreign languages. It also transforms how individuals and groups can be tracked and identified, and dramatically alters what kinds of information can be gleaned about people from their data. AI has the potential to revolutionise societies in positive ways. However, as with any scientific or technological advancement, there is a real risk that the use of new tools by states or corporations will have a negative impact on human rights. While AI impacts a plethora of rights, ARTICLE 19 and Privacy International are particularly concerned about the impact it will have on the right to privacy and the right to freedom of expression and information. This scoping paper focuses on applications of ‘artificial narrow intelligence’: in particular, machine learning and its implications for human rights.

The aim of the paper is fourfold:

1. Present key technical definitions to clarify the debate;

2. Examine key ways in which AI impacts the right to freedom of expression and the right to privacy and outline key challenges;

3. Review the current landscape of AI governance, including various existing legal, technical, and corporate frameworks and industry-led AI initiatives that are relevant to freedom of expression and privacy; and

4. Provide initial suggestions for rights-based solutions which can be pursued by civil society organisations and other stakeholders in AI advocacy activities….(More)”.

China asserts firm grip on research data


ScienceMag: “In a move few scientists anticipated, the Chinese government has decreed that all scientific data generated in China must be submitted to government-sanctioned data centers before appearing in publications. At the same time, the regulations, posted last week, call for open access and data sharing.

The possibly conflicting directives puzzle researchers, who note that the yet-to-be-established data centers will have latitude in interpreting the rules. Scientists in China can still share results with overseas collaborators, says Xie Xuemei, who specializes in innovation economics at Shanghai University. Xie also believes that the new requirements to register data with authorities before submitting papers to journals will not affect most research areas. Gaining approval could mean publishing delays, Xie says, but “it will not have a serious impact on scientific research.”

The new rules, issued by the powerful State Council, apply to all groups and individuals generating research data in China. The creation of a national data center will apparently fall to the science ministry, though other ministries and local governments are expected to create their own centers as well. Exempted from the call for open access and sharing are data involving state and business secrets, national security, “public interest,” and individual privacy… (More)”

Privacy’s Blueprint: The Battle to Control the Design of New Technologies


Book by Woodrow Hartzog: “Every day, Internet users interact with technologies designed to undermine their privacy. Social media apps, surveillance technologies, and the Internet of Things are all built in ways that make it hard to guard personal information. And the law says this is okay because it is up to users to protect themselves—even when the odds are deliberately stacked against them.

In Privacy’s Blueprint, Woodrow Hartzog pushes back against this state of affairs, arguing that the law should require software and hardware makers to respect privacy in the design of their products. Current legal doctrine treats technology as though it were value-neutral: only the user decides whether it functions for good or ill. But this is not so. As Hartzog explains, popular digital tools are designed to expose people and manipulate users into disclosing personal information.

Against the often self-serving optimism of Silicon Valley and the inertia of tech evangelism, Hartzog contends that privacy gains will come from better rules for products, not users. The current model of regulating use fosters exploitation. Privacy’s Blueprint aims to correct this by developing the theoretical underpinnings of a new kind of privacy law responsive to the way people actually perceive and use digital technologies. The law can demand encryption. It can prohibit malicious interfaces that deceive users and leave them vulnerable. It can require safeguards against abuses of biometric surveillance. It can, in short, make the technology itself worthy of our trust….(More)”.

Blockchain To Solve Bahamas’ ‘Major Workforce Waste’


Tribune 242: “The Government’s first-ever use of blockchain technology will tackle what was yesterday branded “an enormous waste of human capital”.

The Inter-American Development Bank (IDB), unveiling a $200,000 ‘technical co-operation’ project, revealed that the Minnis administration plans to deploy the technology as a way to determine the success of an apprenticeship programme targeted at 1,350 Bahamians aged 16 to 40 who are either unemployed or school leavers.

Documents obtained by Tribune Business reveal that the Government is also looking to blockchain to combat the widespread problem of lost/missing student records and certifications, which the IDB described as a major constraint to developing a skilled, productive Bahamian workforce.

“Currently, the certification process in the Bahamas lacks technological advances,” the IDB report said. “Today, student records management is a lengthy and cumbersome process. Students do not own their own records of achievement, depending on issuing institutions to verify their achievements throughout their lives.

“This results not only in a verification process that can last weeks or months, and involves hours of human labour and (fallible) judgment, but also creates inefficiencies in placing new students and processing transfer equivalencies.

“In extreme cases, when the issuing institution goes out of business, loses their records or is destroyed due to natural disasters, students have no way of verifying their achievements and must often start from nothing. This results in an enormous waste of human capital.”

The IDB report said the Bahamas was now “in a singular position to highlight the value of blockchain-based digital records for both students and institutions”, with the technology seen as a mechanism for Bahamians to possess and share records of their educational achievements. Blockchain technology allows information to be recorded, shared and updated by a particular community, with each member maintaining their own copy of data that has to be verified collectively.

Anything that can be described in digital form, such as contracts, transactions and assets, could thus be suitable for blockchain solutions. And Blockcerts, the open standard for creating, issuing and verifying blockchain-based certificates, ensures they are tamper-proof. “Not only does the Blockcerts standard (open standard for digital documents anchored to the blockchain) allow Bahamian institutions to prevent records fraud, safeguarding and building confidence in their brands, but it allows them to leapfrog the digitisation process, skipping many of the interoperability issues associated with legacy digital formats (i.e. PDF, XML),” the IDB report said.

“Blockcerts provides students with autonomy, privacy, security and greater access all over the world, while allowing the Bahamian government to consolidate and streamline its credentialing operations in a way that produces real return on investment over a period. Primary use cases include: Student diplomas, professional certifications, awards, transcripts, enrollment verification, employment verification, verifications of qualifications, credit equivalencies and more.”…(More)”.
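The tamper-proofing described above rests on a simple mechanism: the issuer publishes a cryptographic hash of a credential to a blockchain, and anyone can later verify the document by recomputing that hash. The Python sketch below illustrates only this core idea under simplified assumptions; it is not the Blockcerts implementation (which canonicalises JSON-LD documents and anchors batches of hashes via Merkle trees), and the certificate fields are invented for illustration.

```python
import hashlib
import json


def certificate_digest(cert: dict) -> str:
    """Hash a canonical JSON serialization of the certificate."""
    canonical = json.dumps(cert, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


def verify(cert: dict, anchored_digest: str) -> bool:
    """A certificate checks out only if its digest matches the digest
    the issuer anchored on-chain; editing any field changes the hash."""
    return certificate_digest(cert) == anchored_digest


# Issuance (sketch): the institution computes the digest and records it
# in a blockchain transaction; the anchoring step itself is omitted here.
diploma = {
    "recipient": "Jane Doe",  # illustrative fields only
    "credential": "BSc Computer Science",
    "issuer": "Example Institution",
    "issued_on": "2018-06-01",
}
anchored = certificate_digest(diploma)

# Verification: the holder can prove the record's integrity to anyone,
# without contacting the issuer.
assert verify(diploma, anchored)

# A tampered copy fails verification.
forged = dict(diploma, credential="PhD Computer Science")
assert not verify(forged, anchored)
```

This is why a student whose school later loses its records, or closes entirely, can still prove their achievements: the proof depends on the on-chain digest, not on the issuing institution remaining available.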

How artificial intelligence is transforming the world


Report by Darrell West and John Allen at Brookings: “Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States in 2017 were asked about AI, only 17 percent said they were familiar with it. A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values.

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity….(More)

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion

Leveraging the Power of Bots for Civil Society


Allison Fine & Beth Kanter  at the Stanford Social Innovation Review: “Our work in technology has always centered around making sure that people are empowered, healthy, and feel heard in the networks within which they live and work. The arrival of the bots changes this equation. It’s not enough to make sure that people are heard; we now have to make sure that technology adds value to human interactions, rather than replacing them or steering social good in the wrong direction. If technology creates value in a human-centered way, then we will have more time to be people-centric.

So before the bots become involved with almost every facet of our lives, it is incumbent upon those of us in the nonprofit and social-change sectors to start a discussion on how we both hold on to and lead with our humanity, as opposed to allowing the bots to lead. We are unprepared for this moment, and it does not feel like an understatement to say that the future of humanity relies on our ability to make sure we’re in charge of the bots, not the other way around.

To Bot or Not to Bot?

History shows us that bots can be used in positive ways. Early adopter nonprofits have used bots to automate civic engagement, such as helping citizens register to vote, contact their elected officials, and elevate marginalized voices and issues. And nonprofits are beginning to use online conversational interfaces like Alexa for social good engagement. For example, the Audubon Society has released an Alexa skill to teach bird calls.

And for over a decade, Invisible People founder Mark Horvath has been providing “virtual case management” to homeless people who reach out to him through social media. Horvath says homeless agencies can use chatbots programmed to deliver basic information to people in need, and thus help them connect with services. This reduces the workload for case managers while making data entry more efficient. He explains that it works like an airline reservation: the homeless person completes the “paperwork” for services by interacting with a bot and then later shows their ID at the agency. Bots can greatly reduce the need for a homeless person to wait long hours to get needed services. Certainly this is a much more compassionate use of bots than robot security guards who harass homeless people sleeping in front of a business.
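As a concrete illustration of that reservation-style flow, here is a minimal intake-bot sketch, assuming a simple scripted question-and-answer channel; the questions, field names, and console input are hypothetical stand-ins, not a description of Horvath's actual system.

```python
# Hypothetical intake-bot sketch: walks a person through the service
# "paperwork" one question at a time and hands the structured record
# to a case manager, who then only needs to check ID at the agency.
INTAKE_QUESTIONS = [
    ("name", "What name should we use for you?"),
    ("need", "What do you need help with (shelter, food, medical)?"),
    ("location", "What neighborhood are you in right now?"),
]


def run_intake(ask) -> dict:
    """Collect one answer per field; `ask` stands in for whatever chat
    channel the bot runs on (SMS, Messenger, a web widget)."""
    record = {}
    for field, question in INTAKE_QUESTIONS:
        record[field] = ask(question).strip()
    return record


if __name__ == "__main__":
    # Console stand-in for a real chat channel.
    completed = run_intake(input)
    print("Pre-filled intake record for the agency:", completed)
```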

But there are also examples where a bot’s usefulness seems limited. A UK-based social service charity, Mencap, which provides support and services to children with learning disabilities and their parents, has a chatbot on its website as part of a public education effort called #HereIAm. The campaign is intended to help people understand more about what it’s like having a learning disability, through the experience of a “learning disabled” chatbot named Aeren. However, this bot can only answer questions, not ask them, and it doesn’t become smarter through human interaction. Is this the best way for people to understand the nature of being learning disabled? Is it making the difficulties feel more or less real for the inquirers? It is clear Mencap thinks the interaction is valuable, as they reported a 3 percent increase in awareness of their charity….

The following discussion questions are the start of conversations we need to have within our organizations and as a sector on the ethical use of bots for social good:

  • What parts of our work will benefit from greater efficiency without reducing the humanness of our efforts? (“Humanness” meaning the power and opportunity for people to learn from and help one another.)
  • Do we have a privacy policy for the use and sharing of data collected through automation? Does the policy emphasize protecting the data of end users? Is the policy easily accessible by the public?
  • Do we make it clear to the people using the bot when they are interacting with a bot?
  • Do we regularly include clients, customers, and end users as advisors when developing programs and services that use bots for delivery?
  • Should bots designed for service delivery also have fundraising capabilities? If so, can we ensure that our donors are not emotionally coerced into giving more than they want to?
  • In order to truly understand our clients’ needs, motivations, and desires, have we designed our bots’ conversational interactions with empathy and compassion, or involved social workers in the design process?
  • Have we planned for weekly checks of the data generated by the bots to ensure that we are staying true to our values and original intentions, as AI helps them learn?….(More)”.

UK can lead the way on ethical AI, says Lords Committee


Lords Select Committee: “The UK is in a strong position to be a world leader in the development of artificial intelligence (AI). This position, coupled with the wider adoption of AI, could deliver a major boost to the economy for years to come. The best way to do this is to put ethics at the centre of AI’s development and use, concludes a report by the House of Lords Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able?, published today….

One of the recommendations of the report is for a cross-sector AI Code to be established, which can be adopted nationally, and internationally. The Committee’s suggested five principles for such a code are:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

Other conclusions from the report include:

  • Many jobs will be enhanced by AI, many will disappear, and many new, as-yet-unknown jobs will be created. Significant Government investment in skills and training will be necessary to mitigate the negative effects of AI. Retraining will become a lifelong necessity.
  • Individuals need to be able to have greater personal control over their data, and the way in which it is used. The ways in which data is gathered and accessed need to change, so that everyone can have fair and reasonable access to data, while citizens and consumers can protect their privacy and personal agency. This means using established concepts, such as open data, ethics advisory boards and data protection legislation, and developing new frameworks and mechanisms, such as data portability and data trusts.
  • The monopolisation of data by big technology companies must be avoided, and greater competition is required. The Government, with the Competition and Markets Authority, must review the use of data by large technology companies operating in the UK.
  • The prejudices of the past must not be unwittingly built into automated systems. The Government should incentivise the development of new approaches to the auditing of datasets used in AI, and encourage greater diversity in the training and recruitment of AI specialists.
  • Transparency in AI is needed. The industry, through the AI Council, should establish a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions.
  • At earlier stages of education, children need to be adequately prepared for working with, and using, AI. The ethical design and use of AI should become an integral part of the curriculum.
  • The Government should be bold and use targeted procurement to provide a boost to AI development and deployment. It could encourage the development of solutions to public policy challenges through speculative investment. There have been impressive advances in AI for healthcare, which the NHS should capitalise on.
  • It is not currently clear whether existing liability law will be sufficient when AI systems malfunction or cause harm to users, and clarity in this area is needed. The Committee recommend that the Law Commission investigate this issue.
  • The Government needs to draw up a national policy framework, in lockstep with the Industrial Strategy, to ensure the coordination and successful delivery of AI policy in the UK….(More)”.