Radhika Garg in First Monday: “Information and communication technologies (ICT) are changing the way people interact with each other. Today, every physical device can have the capability to connect to the Internet (digital presence) to send and receive data. Internet-connected cameras, home automation systems, and connected cars are all examples of the interconnected Internet of Things (IoT). IoT can bring benefits to users in terms of monitoring and intelligent capabilities; however, these devices collect, transmit, store, and can potentially share vast amounts of personal data, encroaching on private spaces and remaining vulnerable to security breaches. The ecosystem of IoT comprises not only users, various sensors, and devices but also other stakeholders such as data collectors, processors, regulators, and policy-makers. Even though the number of commercially available IoT devices is on a steep rise, the uptake of these devices has been slow, and abandonment rapid. This paper explains how stakeholders (including users) and technologies form an assemblage in which these stakeholders are cumulatively responsible for making IoT an essential element of day-to-day living and connectivity. To this end, this paper examines open issues in data privacy and security policies (from the perspectives of the European Union and North America) and their effects on stakeholders in the ecosystem. The paper concludes by explaining how these open issues, if unresolved, can lead to another wave of digital division and discrimination in the use of IoT….(More)”.
Everyone can now patrol this city’s streets for crime. ACLU says that’s a bad idea
NJ.com: “All eyes are on the city of Newark, literally. The city recently revealed its new “Citizen Virtual Patrol” program, which places 60 cameras around the city’s intersections, putting the city’s streets, and those who venture out on them, on display seven days a week, 24 hours a day.
That isn’t startling, as cameras have been up in the city for the past dozen years, says Anthony Ambrose, the city’s public safety director.
What is new, and not found in other cities, is that police officers won’t be the only ones trolling for criminals. Now, anyone willing to submit their email address and download an app onto their home computer or phone can watch those cameras.
Citizens can then alert police when they see suspicious activity and remain anonymous. “Right now, in this era of society, it’s impossible to be outside without being recorded,” said Newark Mayor Ras Baraka. “We need to be able to use that technology to allow the police to do their job more efficiently and more cost-effectively.”
Those extra eyes, however, come at a cost. The cameras could also provide stalkers with their victims’ whereabouts, show intimate scenes, and even reveal when residents leave their homes vacant as they head out on vacation.
The American Civil Liberties Union of New Jersey is asking Newark to end the program, saying it’s a violation of privacy and the Fourth Amendment.
“Newark is crowdsourcing its responsibility to the public instead of engaging in policing,” said ACLU-NJ Executive Director Amol Sinha.
“There’s a fundamental difference between a civilian using their phone to record a certain area and the government having cameras where people have a reasonable expectation of privacy,” Sinha said….
The city also plans to launch a campaign informing residents about the cameras.
“It’s about transparency,” Ambrose said. “We’re not saying we put cameras out there and you don’t know where they are at, we’re telling you.” …(More)”.
Privacy and Freedom of Expression In the Age of Artificial Intelligence
Joint Paper by Privacy International and ARTICLE 19: “Artificial Intelligence (AI) is part of our daily lives. This technology shapes how people access information, interact with devices, share personal information, and even understand foreign languages. It also transforms how individuals and groups can be tracked and identified, and dramatically alters what kinds of information can be gleaned about people from their data. AI has the potential to revolutionise societies in positive ways. However, as with any scientific or technological advancement, there is a real risk that the use of new tools by states or corporations will have a negative impact on human rights. While AI impacts a plethora of rights, ARTICLE 19 and Privacy International are particularly concerned about the impact it will have on the right to privacy and the right to freedom of expression and information. This scoping paper focuses on applications of ‘artificial narrow intelligence’: in particular, machine learning and its implications for human rights.
The aim of the paper is fourfold:
1. Present key technical definitions to clarify the debate;
2. Examine key ways in which AI impacts the right to freedom of expression and the right to privacy and outline key challenges;
3. Review the current landscape of AI governance, including various existing legal, technical, and corporate frameworks and industry-led AI initiatives that are relevant to freedom of expression and privacy; and
4. Provide initial suggestions for rights-based solutions which can be pursued by civil society organisations and other stakeholders in AI advocacy activities….(More)”.
China asserts firm grip on research data
ScienceMag: “In a move few scientists anticipated, the Chinese government has decreed that all scientific data generated in China must be submitted to government-sanctioned data centers before appearing in publications. At the same time, the regulations, posted last week, call for open access and data sharing.
The possibly conflicting directives puzzle researchers, who note that the yet-to-be-established data centers will have latitude in interpreting the rules. Scientists in China can still share results with overseas collaborators, says Xie Xuemei, who specializes in innovation economics at Shanghai University. Xie also believes that the new requirements to register data with authorities before submitting papers to journals will not affect most research areas. Gaining approval could mean publishing delays, Xie says, but “it will not have a serious impact on scientific research.”
The new rules, issued by the powerful State Council, apply to all groups and individuals generating research data in China. The creation of a national data center will apparently fall to the science ministry, though other ministries and local governments are expected to create their own centers as well. Exempted from the call for open access and sharing are data involving state and business secrets, national security, “public interest,” and individual privacy… (More)”
Privacy’s Blueprint: The Battle to Control the Design of New Technologies
Book by Woodrow Hartzog: “Every day, Internet users interact with technologies designed to undermine their privacy. Social media apps, surveillance technologies, and the Internet of Things are all built in ways that make it hard to guard personal information. And the law says this is okay because it is up to users to protect themselves—even when the odds are deliberately stacked against them.
In Privacy’s Blueprint, Woodrow Hartzog pushes back against this state of affairs, arguing that the law should require software and hardware makers to respect privacy in the design of their products. Current legal doctrine treats technology as though it were value-neutral: only the user decides whether it functions for good or ill. But this is not so. As Hartzog explains, popular digital tools are designed to expose people and manipulate users into disclosing personal information.
Against the often self-serving optimism of Silicon Valley and the inertia of tech evangelism, Hartzog contends that privacy gains will come from better rules for products, not users. The current model of regulating use fosters exploitation. Privacy’s Blueprint aims to correct this by developing the theoretical underpinnings of a new kind of privacy law responsive to the way people actually perceive and use digital technologies. The law can demand encryption. It can prohibit malicious interfaces that deceive users and leave them vulnerable. It can require safeguards against abuses of biometric surveillance. It can, in short, make the technology itself worthy of our trust….(More)”.
Blockchain To Solve Bahamas’ ‘Major Workforce Waste’
Tribune 242: “The Government’s first-ever use of blockchain technology will tackle what was yesterday branded “an enormous waste of human capital”.
The Inter-American Development Bank (IDB), unveiling a $200,000 ‘technical co-operation’ project, revealed that the Minnis administration plans to deploy the technology as a way to determine the success of an apprenticeship programme targeted at 1,350 Bahamians aged between 16 and 40 who are either unemployed or school leavers.
Documents obtained by Tribune Business reveal that the Government is also looking to blockchain to combat the widespread problem of lost/missing student records and certifications, which the IDB described as a major constraint to developing a skilled, productive Bahamian workforce.
“Currently, the certification process in the Bahamas lacks technological advances,” the IDB report said. “Today, student records management is a lengthy and cumbersome process. Students do not own their own records of achievement, depending on issuing institutions to verify their achievements throughout their lives.
“This results not only in a verification process that can last weeks or months, and involves hours of human labour and (fallible) judgment, but also creates inefficiencies in placing new students and processing transfer equivalencies.
“In extreme cases, when the issuing institution goes out of business, loses their records or is destroyed due to natural disasters, students have no way of verifying their achievements and must often start from nothing. This results in an enormous waste of human capital.”
The IDB report said the Bahamas was now “in a singular position to highlight the value of blockchain-based digital records for both students and institutions”, with the technology seen as a mechanism for Bahamians to possess and share records of their educational achievements. Blockchain technology allows information to be recorded, shared and updated by a particular community, with each member maintaining their own copy of data that has to be verified collectively.
Anything that can be described in digital form, such as contracts, transactions and assets, could thus be suitable for blockchain solutions. And Blockcerts, the open standard for creating, issuing and verifying blockchain-based certificates, ensures they are tamper-proof. “Not only does the Blockcerts standard (an open standard for digital documents anchored to the blockchain) allow Bahamian institutions to prevent records fraud, safeguarding and building confidence in their brands, but it allows them to leapfrog the digitisation process, skipping many of the interoperability issues associated with legacy digital formats (i.e. PDF, XML),” the IDB report said.
“Blockcerts provides students with autonomy, privacy, security and greater access all over the world, while allowing the Bahamian government to consolidate and streamline its credentialing operations in a way that produces real return on investment over a period. Primary use cases include: Student diplomas, professional certifications, awards, transcripts, enrollment verification, employment verification, verifications of qualifications, credit equivalencies and more.”…(More)”.
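The tamper-evidence described above comes from anchoring a cryptographic hash of each record to a blockchain, so that anyone holding the record can later recompute the hash and compare it to the anchored value. The following is a minimal sketch of that core check, not the actual Blockcerts implementation; the certificate fields and the on-chain anchor are hypothetical stand-ins:

```python
# Minimal sketch of the tamper-evidence idea behind blockchain-anchored records:
# a certificate is hashed, the hash is anchored to a blockchain, and any
# verifier can later recompute the hash and compare it to the anchored value.
# The "anchor" dict below is a hypothetical stand-in for a real blockchain
# transaction lookup; only the hashing logic reflects the general technique.
import hashlib
import json

def digest(certificate: dict) -> str:
    """Canonicalize the certificate JSON and return its SHA-256 hex digest."""
    canonical = json.dumps(certificate, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Issuance: the institution computes the digest and anchors it on-chain.
certificate = {
    "recipient": "Jane Doe",
    "credential": "National Apprenticeship Certificate",   # hypothetical
    "issuer": "Example Training Institute",                 # hypothetical
    "issued_on": "2018-05-01",
}
anchor = {"tx_id": "0xabc...", "anchored_digest": digest(certificate)}  # stand-in

# Verification: recompute the digest and compare it to the anchored value.
def verify(cert: dict, anchored_digest: str) -> bool:
    return digest(cert) == anchored_digest

print(verify(certificate, anchor["anchored_digest"]))   # True
tampered = dict(certificate, recipient="John Smith")    # altered record
print(verify(tampered, anchor["anchored_digest"]))      # False: tampering detected
```

The actual Blockcerts standard layers issuer signatures and Merkle proofs on top of this, so many records can be anchored in a single transaction, but the tamper-detection principle is the same hash comparison.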
How artificial intelligence is transforming the world
Report by Darrell West and John Allen at Brookings: “Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States in 2017 were asked about AI, only 17 percent said they were familiar with it. A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.
Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.
In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values.
In order to maximize AI benefits, we recommend nine steps for going forward:
- Encourage greater data access for researchers without compromising users’ personal privacy,
- invest more government funding in unclassified AI research,
- promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
- create a federal AI advisory committee to make policy recommendations,
- engage with state and local officials so they enact effective policies,
- regulate broad AI principles rather than specific algorithms,
- take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
- maintain mechanisms for human oversight and control, and
- penalize malicious AI behavior and promote cybersecurity….(More)
Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion
Leveraging the Power of Bots for Civil Society
Allison Fine & Beth Kanter at the Stanford Social Innovation Review: “Our work in technology has always centered around making sure that people are empowered, healthy, and feel heard in the networks within which they live and work. The arrival of the bots changes this equation. It’s not enough to make sure that people are heard; we now have to make sure that technology adds value to human interactions, rather than replacing them or steering social good in the wrong direction. If technology creates value in a human-centered way, then we will have more time to be people-centric.
So before the bots become involved with almost every facet of our lives, it is incumbent upon those of us in the nonprofit and social-change sectors to start a discussion on how we both hold on to and lead with our humanity, as opposed to allowing the bots to lead. We are unprepared for this moment, and it does not feel like an overstatement to say that the future of humanity relies on our ability to make sure we’re in charge of the bots, not the other way around.
To Bot or Not to Bot?
History shows us that bots can be used in positive ways. Early adopter nonprofits have used bots to automate civic engagement, such as helping citizens register to vote, contact their elected officials, and elevate marginalized voices and issues. And nonprofits are beginning to use online conversational interfaces like Alexa for social good engagement. For example, the Audubon Society has released an Alexa skill to teach bird calls.
And for over a decade, Invisible People founder Mark Horvath has been providing “virtual case management” to homeless people who reach out to him through social media. Horvath says homeless agencies can use chat bots programmed to deliver basic information to people in need, and thus help them connect with services. This reduces the workload for case managers while making data entry more efficient. He explains that it works like an airline reservation: The homeless person completes the “paperwork” for services by interacting with a bot and then later shows their ID at the agency. Bots can greatly reduce the need for a homeless person to wait long hours to get needed services. Certainly this is a much more compassionate use of bots than robot security guards who harass homeless people sleeping in front of a business.
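To make the mechanic concrete, a bot of this kind can be as simple as a keyword-matched FAQ plus a scripted intake form. The sketch below is a minimal illustration, not any agency’s actual system; the services, prompts, and field names are all hypothetical:

```python
# Minimal sketch of the kind of intake bot described above: it answers basic
# service questions from a fixed FAQ and walks a user through simple
# "paperwork" questions, so a case manager only has to verify the answers
# later. All services, prompts, and field names here are hypothetical.
FAQ = {
    "shelter": "The downtown shelter opens at 6pm; bring a photo ID if you have one.",
    "meals": "Free meals are served daily at noon at the community center.",
    "id": "You can request a replacement ID at the county office, Mon-Fri.",
}

INTAKE_FIELDS = ["name", "current location", "immediate need"]

def answer(question: str) -> str:
    """Return the first FAQ entry whose keyword appears in the question."""
    q = question.lower()
    for keyword, reply in FAQ.items():
        if keyword in q:
            return reply
    return "I don't know that one yet; a case manager will follow up."

def run_intake() -> dict:
    """Collect basic intake answers, like filling in paperwork ahead of a visit."""
    record = {}
    for field in INTAKE_FIELDS:
        record[field] = input(f"What is your {field}? ")
    return record

if __name__ == "__main__":
    print(answer("Where can I get meals today?"))
    print(run_intake())  # the completed "paperwork" a case manager can verify
```

A production bot would sit behind a messaging platform and hand off to a human case manager whenever the FAQ misses, which is the balance the authors argue for: efficiency gains without removing the human from the loop.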
But there are also examples where a bot’s usefulness seems limited. A UK-based social service charity, Mencap, which provides support and services to children with learning disabilities and their parents, has a chatbot on its website as part of a public education effort called #HereIAm. The campaign is intended to help people understand more about what it’s like having a learning disability, through the experience of a “learning disabled” chatbot named Aeren. However, this bot can only answer questions, not ask them, and it doesn’t become smarter through human interaction. Is this the best way for people to understand the nature of being learning disabled? Is it making the difficulties feel more or less real for the inquirers? It is clear Mencap thinks the interaction is valuable, as they reported a 3 percent increase in awareness of their charity….
The following discussion questions are the start of conversations we need to have within our organizations and as a sector on the ethical use of bots for social good:
- What parts of our work will benefit from greater efficiency without reducing the humanness of our efforts? (“Humanness” meaning the power and opportunity for people to learn from and help one another.)
- Do we have a privacy policy for the use and sharing of data collected through automation? Does the policy emphasize protecting the data of end users? Is the policy easily accessible by the public?
- Do we make it clear to the people using the bot when they are interacting with a bot?
- Do we regularly include clients, customers, and end users as advisors when developing programs and services that use bots for delivery?
- Should bots designed for service delivery also have fundraising capabilities? If so, can we ensure that our donors are not emotionally coerced into giving more than they want to?
- In order to truly understand our clients’ needs, motivations, and desires, have we designed our bots’ conversational interactions with empathy and compassion, or involved social workers in the design process?
- Have we planned for weekly checks of the data generated by the bots to ensure that we are staying true to our values and original intentions, as AI helps them learn?….(More)”.
UK can lead the way on ethical AI, says Lords Committee
Lords Select Committee: “The UK is in a strong position to be a world leader in the development of artificial intelligence (AI). This position, coupled with the wider adoption of AI, could deliver a major boost to the economy for years to come. The best way to do this is to put ethics at the centre of AI’s development and use concludes a report by the House of Lords Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able?, published today….
One of the recommendations of the report is for a cross-sector AI Code to be established, which can be adopted nationally, and internationally. The Committee’s suggested five principles for such a code are:
- Artificial intelligence should be developed for the common good and benefit of humanity.
- Artificial intelligence should operate on principles of intelligibility and fairness.
- Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
- All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
- The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
Other conclusions from the report include:
- Many jobs will be enhanced by AI, many will disappear, and many new, as-yet-unknown jobs will be created. Significant Government investment in skills and training will be necessary to mitigate the negative effects of AI. Retraining will become a lifelong necessity.
- Individuals need to have greater personal control over their data, and the way in which it is used. The ways in which data is gathered and accessed need to change, so that everyone can have fair and reasonable access to data, while citizens and consumers can protect their privacy and personal agency. This means using established concepts, such as open data, ethics advisory boards and data protection legislation, and developing new frameworks and mechanisms, such as data portability and data trusts.
- The monopolisation of data by big technology companies must be avoided, and greater competition is required. The Government, with the Competition and Markets Authority, must review the use of data by large technology companies operating in the UK.
- The prejudices of the past must not be unwittingly built into automated systems. The Government should incentivise the development of new approaches to the auditing of datasets used in AI, and also encourage greater diversity in the training and recruitment of AI specialists.
- Transparency in AI is needed. The industry, through the AI Council, should establish a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions.
- At earlier stages of education, children need to be adequately prepared for working with, and using, AI. The ethical design and use of AI should become an integral part of the curriculum.
- The Government should be bold and use targeted procurement to provide a boost to AI development and deployment. It could encourage the development of solutions to public policy challenges through speculative investment. There have been impressive advances in AI for healthcare, which the NHS should capitalise on.
- It is not currently clear whether existing liability law will be sufficient when AI systems malfunction or cause harm to users, and clarity in this area is needed. The Committee recommend that the Law Commission investigate this issue.
- The Government needs to draw up a national policy framework, in lockstep with the Industrial Strategy, to ensure the coordination and successful delivery of AI policy in the UK….(More)”.
- Report: AI in the UK: ready, willing and able? (HTML)
- Report: AI in the UK: ready, willing and able? (PDF)
- Written evidence volume: AI in the UK: ready, willing and able? (PDF, 11.46 MB)
- Oral evidence volume: AI in the UK: ready, willing and able? (PDF, 2.18 MB)
- Select Committee on Artificial Intelligence
Data rights are civic rights: a participatory framework for GDPR in the US?
Elena Souris and Hollie Russon Gilman at Vox: “…While online rights are coming into question, it’s worth considering how those will overlap with offline rights and civic engagement.
The two may initially seem completely separate, but democracy itself depends on information and communication, and a balance of privacy (secret ballot) and transparency. As communication moves almost entirely to networked online technology platforms, the governance questions surrounding data and privacy have far-reaching civic and political implications for how people interact with all aspects of their lives, from commerce and government services to their friends, families, and communities. That is why we need a conversation about data protections, empowering users with their own information, and transparency — ultimately, data rights are now civic rights…
What could a golden mean in the US look like? Is it possible to take principles of the GDPR and apply a more community-based, citizen-centric approach across states and localities in the United States? Could a US version of the GDPR be designed in a way that includes public participation? Perhaps there could be an ongoing participatory role? Most of all, the questions underpinning data regulation need to serve as an impetus for an honest conversation about equity across digital access, digital literacy, and now digital privacy.
Across the country, we’re already seeing successful experiments with a more citizen-inclusive democracy, with localities and cities rising as engines of American re-innovation and laboratories of participatory democracy. Thanks to our federalist system, states are already paving the way for greater electoral reform, from public financing of campaigns to experiments with structures such as ranked-choice voting.
In these local federalist experiments, civic participation is slowly becoming a crucial tool. Innovations from participatory budgeting to interactive policy co-production sessions are giving people in communities a direct say in public policies. For example, the Rural Climate Dialogues in Minnesota empower rural residents to impact policy on long-term climate mitigation. Bowling Green, Kentucky, recently used the online deliberation platform Polis to identify common policy areas for consensus building. Scholars have been writing about various potential participatory models for our digital lives as well, including civic trusts.
Can we take these principles and begin a serious conversation for how to translate the best privacy practices, tools, and methods to ensure that people’s valuable online and offline resources — including their trust, attention span, and vital information — are also protected and honored? Since the people are a primary stakeholder in the conversation about civic data and data privacy, they should have a seat at the table.
Including citizens and residents in these conversations could have a big policy impact. First, working toward a participatory governance framework for civic data would enable people to understand the value of their data in the open market. Second, it would provide greater transparency to the value of networks — an individual’s social graph, a valuable asset, which, until now, people are generating in aggregate without anything in return. Third, it could amplify concerns of more vulnerable data users, including elderly or tech-illiterate citizens — and even refugees and international migrants, as Andrew Young and Stefaan Verhulst recently argued in the Stanford Social Innovation Review.
There are already templates and road maps for responsible data, but talking to those users themselves with a participatory governance approach could make them even more effective. Finally, citizens can help answer tough questions about what we value and when and how we need to make ethical choices with data.
Because data-collecting organizations will have to comply abroad soon, the GDPR is a good opportunity for the American social sector to consider data rights as civic rights and incorporate a participatory process to meet this challenge. Instead of simply assuming regulatory agencies will pave the way, a more participatory data framework could foster an ongoing process of civic empowerment and make the outcome more effective. It’s too soon to know the precise forms or mechanisms new data regulation should take. Instead of a rigid, predetermined format, the process needs to be community-driven by design — ensuring traditionally marginalized communities are front and center in this conversation, not only the elites who already hold the microphone.
It won’t be easy. Building a participatory governance structure for civic data will require empathy, compromise, and potentially challenging the preconceived relationship between people, institutions, and their information. The interplay between our online and offline selves is a continuous process of learning by trial and error. But if we simply replicate the top-down structures of the past, we can’t evolve toward a truly empowered digital democratic future. Instead, let’s use the GDPR as an opening in the United States for advancing the principles of a more transparent and participatory democracy….(More)”.