Allison Fine & Beth Kanter at the Stanford Social Innovation Review: “Our work in technology has always centered around making sure that people are empowered, healthy, and feel heard in the networks within which they live and work. The arrival of the bots changes this equation. It’s not enough to make sure that people are heard; we now have to make sure that technology adds value to human interactions, rather than replacing them or steering social good in the wrong direction. If technology creates value in a human-centered way, then we will have more time to be people-centric.
So before the bots become involved with almost every facet of our lives, it is incumbent upon those of us in the nonprofit and social-change sectors to start a discussion on how we both hold on to and lead with our humanity, as opposed to allowing the bots to lead. We are unprepared for this moment, and it does not feel like an overstatement to say that the future of humanity relies on our ability to make sure we’re in charge of the bots, not the other way around.
To Bot or Not to Bot?
History shows us that bots can be used in positive ways. Early-adopter nonprofits have used bots to automate civic engagement, such as helping citizens register to vote and contact their elected officials, and to elevate marginalized voices and issues. And nonprofits are beginning to use online conversational interfaces like Alexa for social-good engagement. For example, the Audubon Society has released an Alexa skill to teach bird calls.
And for over a decade, Invisible People founder Mark Horvath has been providing “virtual case management” to homeless people who reach out to him through social media. Horvath says homeless agencies can use chatbots programmed to deliver basic information to people in need, and thus help them connect with services. This reduces the workload for case managers while making data entry more efficient. He explains that it works like an airline reservation: the homeless person completes the “paperwork” for services by interacting with a bot and then later shows their ID at the agency. Bots can greatly reduce the long hours a homeless person must wait to get needed services. Certainly this is a much more compassionate use of bots than robot security guards that harass homeless people sleeping in front of a business.
But there are also examples where a bot’s usefulness seems limited. A UK-based social service charity, Mencap, which provides support and services to children with learning disabilities and their parents, has a chatbot on its website as part of a public education effort called #HereIAm. The campaign is intended to help people understand more about what it’s like having a learning disability, through the experience of a “learning disabled” chatbot named Aeren. However, this bot can only answer questions, not ask them, and it doesn’t become smarter through human interaction. Is this the best way for people to understand the nature of being learning disabled? Is it making the difficulties feel more or less real for the inquirers? It is clear Mencap thinks the interaction is valuable, as they reported a 3 percent increase in awareness of their charity….
The following discussion questions are the start of conversations we need to have within our organizations and as a sector on the ethical use of bots for social good:
- What parts of our work will benefit from greater efficiency without reducing the humanness of our efforts? (“Humanness” meaning the power and opportunity for people to learn from and help one another.)
- Do we have a privacy policy for the use and sharing of data collected through automation? Does the policy emphasize protecting the data of end users? Is the policy easily accessible by the public?
- Do we make it clear to the people using the bot when they are interacting with a bot?
- Do we regularly include clients, customers, and end users as advisors when developing programs and services that use bots for delivery?
- Should bots designed for service delivery also have fundraising capabilities? If so, can we ensure that our donors are not emotionally coerced into giving more than they want to?
- In order to truly understand our clients’ needs, motivations, and desires, have we designed our bots’ conversational interactions with empathy and compassion, and involved social workers in the design process?
- Have we planned for weekly checks of the data generated by the bots to ensure that we are staying true to our values and original intentions, as AI helps them learn?…(More)”.