Jeremy Berg at Science: “Transparency is critical when it comes to decision-making that broadly affects the public, particularly when it comes to policies purported to be grounded in scientific evidence. The scientific community has been increasingly focused on improving the transparency of research through initiatives that represent good-faith efforts to enhance the robustness of scientific findings and to increase access to and utility of data that underlie research. Yet, concerns about transparency associated with scientific results continue to emerge in political discussions. Most recently in the United States, a new proposal by the Environmental Protection Agency (EPA) would eliminate the use of publications in its policy discussions for which all underlying data are not publicly available. Here, a push for transparency appears actually to be a mechanism for suppressing important scientific evidence in policy-making, thereby threatening the public’s well-being.
Hetan Shah at Nature: “Data science brings enormous potential for good — for example, to improve the delivery of public services, and even to track and fight modern slavery. No wonder researchers around the world — including members of my own organization, the Royal Statistical Society in London — have had their heads in their hands over headlines about how Facebook and the data-analytics company Cambridge Analytica might have handled personal data. We know that trustworthiness underpins public support for data innovation, and we have just seen what happens when that trust is lost….But how else might we ensure the use of data for the public good rather than for purely private gain?
Here are two proposals towards this goal.
First, governments should pass legislation to allow national statistical offices to gain anonymized access to large private-sector data sets under openly specified conditions. This provision was part of the United Kingdom’s Digital Economy Act last year and will improve the ability of the UK Office for National Statistics to assess the economy and society for the public interest.
My second proposal is inspired by the legacy of John Sulston, who died earlier this month. Sulston was known for his success in advocating for the Human Genome Project to be openly accessible to the science community, while a competitor sought to sequence the genome first and keep data proprietary.
Like Sulston, we should look for ways of making data available for the common interest. Intellectual-property rights expire after a fixed time period: what if, similarly, technology companies were allowed to use the data that they gather only for a limited period, say, five years? The data could then revert to a national charitable corporation that could provide access to certified researchers, who would both be held to account and be subject to scrutiny that ensures the data are used for the common good.
Technology companies would move from being data owners to becoming data stewards…(More)” (see also http://datacollaboratives.org/).
Allison Fine & Beth Kanter at the Stanford Social Innovation Review: “Our work in technology has always centered around making sure that people are empowered, healthy, and feel heard in the networks within which they live and work. The arrival of the bots changes this equation. It’s not enough to make sure that people are heard; we now have to make sure that technology adds value to human interactions, rather than replacing them or steering social good in the wrong direction. If technology creates value in a human-centered way, then we will have more time to be people-centric.
So before the bots become involved with almost every facet of our lives, it is incumbent upon those of us in the nonprofit and social-change sectors to start a discussion on how we both hold on to and lead with our humanity, as opposed to allowing the bots to lead. We are unprepared for this moment, and it does not feel like an understatement to say that the future of humanity relies on our ability to make sure we’re in charge of the bots, not the other way around.
To Bot or Not to Bot?
History shows us that bots can be used in positive ways. Early adopter nonprofits have used bots to automate civic engagement, such as helping citizens register to vote, contact their elected officials, and elevate marginalized voices and issues. And nonprofits are beginning to use online conversational interfaces like Alexa for social good engagement. For example, the Audubon Society has released an Alexa skill to teach bird calls.
And for over a decade, Invisible People founder Mark Horvath has been providing “virtual case management” to homeless people who reach out to him through social media. Horvath says homeless agencies can use chatbots programmed to deliver basic information to people in need, and thus help them connect with services. This reduces the workload for case managers while making data entry more efficient. He explains that it works like an airline reservation: the homeless person completes the “paperwork” for services by interacting with a bot and then later shows their ID at the agency. Bots can greatly reduce the need for a homeless person to wait long hours to get needed services. Certainly this is a much more compassionate use of bots than robot security guards that harass homeless people sleeping in front of a business.
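The intake flow Horvath describes can be pictured as a scripted question-and-answer exchange whose answers become the client's pre-completed record. The sketch below is a minimal illustration of that idea only; the field names, prompts, and status label are hypothetical, not drawn from any real case-management system.

```python
# Minimal scripted intake bot: it walks a client through the same fields a
# case manager would collect, so the in-person agency visit can start with
# the "paperwork" already done. All field names and prompts are invented.

INTAKE_QUESTIONS = [
    ("name", "What name should we use for you?"),
    ("needs", "What do you need help with today (shelter, food, medical)?"),
    ("location", "What neighborhood are you in right now?"),
]

def run_intake(answers):
    """Pair each scripted question with the client's reply into a record."""
    record = {}
    for (field, _prompt), reply in zip(INTAKE_QUESTIONS, answers):
        record[field] = reply
    # The client later shows ID at the agency to confirm the record.
    record["status"] = "ready_for_agency_verification"
    return record

record = run_intake(["Alex", "shelter", "Downtown"])
print(record["status"])  # ready_for_agency_verification
```

A production bot would sit behind a messaging platform and hand the record to a case-management database, but the core pattern, scripted prompts feeding a structured record, is the same.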
But there are also examples where a bot’s usefulness seems limited. A UK-based social service charity, Mencap, which provides support and services to children with learning disabilities and their parents, has a chatbot on its website as part of a public education effort called #HereIAm. The campaign is intended to help people understand more about what it’s like having a learning disability, through the experience of a “learning disabled” chatbot named Aeren. However, this bot can only answer questions, not ask them, and it doesn’t become smarter through human interaction. Is this the best way for people to understand the nature of being learning disabled? Is it making the difficulties feel more or less real for the inquirers? It is clear Mencap thinks the interaction is valuable, as they reported a 3 percent increase in awareness of their charity….
The following discussion questions are the start of conversations we need to have within our organizations and as a sector on the ethical use of bots for social good:
- What parts of our work will benefit from greater efficiency without reducing the humanness of our efforts? (“Humanness” meaning the power and opportunity for people to learn from and help one another.)
- Do we have a privacy policy for the use and sharing of data collected through automation? Does the policy emphasize protecting the data of end users? Is the policy easily accessible by the public?
- Do we make it clear to the people using the bot when they are interacting with a bot?
- Do we regularly include clients, customers, and end users as advisors when developing programs and services that use bots for delivery?
- Should bots designed for service delivery also have fundraising capabilities? If so, can we ensure that our donors are not emotionally coerced into giving more than they want to?
- In order to truly understand our clients’ needs, motivations, and desires, have we designed our bots’ conversational interactions with empathy and compassion, or involved social workers in the design process?
- Have we planned for weekly checks of the data generated by the bots to ensure that we are staying true to our values and original intentions, as AI helps them learn?….(More)”.
Adrian Smith at The Guardian: “…The Smart City is an alluring prospect for many city leaders. Even if you haven’t heard of it, you may have already joined in by looking up bus movements on your phone, accessing Council services online or learning about air contamination levels. By inserting sensors across city infrastructures and creating new data sources – including citizens via their mobile devices – Smart City managers can apply Big Data analysis to monitor and anticipate urban phenomena in new ways, and, so the argument goes, efficiently manage urban activity for the benefit of ‘smart citizens’.
Barcelona has been a pioneering Smart City. The Council’s business partners have been installing sensors and opening data platforms for years. Not everyone is comfortable with this technocratic turn. After Ada Colau was elected Mayor on a mandate of democratising the city and putting citizens centre-stage, digital policy has sought to go ‘beyond the Smart City’. Chief Technology Officer Francesca Bria is opening digital platforms to greater citizen participation and oversight. Worried that the city’s knowledge was being ceded to tech vendors, the Council now promotes technological sovereignty.
On the surface, the noise project in Plaça del Sol is an example of such sovereignty. It even features in Council presentations. Look more deeply, however, and it becomes apparent that neighbourhood activists are really appropriating new technologies into the old-fashioned politics of community development….
What made Plaça del Sol stand out can be traced to a group of technology activists who got in touch with residents early in 2017. The activists were seeking participants in their project called Making Sense, which sought to resurrect a struggling ‘Smart Citizen Kit’ for environmental monitoring. The idea was to provide residents with the tools to measure noise levels, compare them with officially permissible levels, and reduce noise in the square. More than 40 neighbours signed up and installed 25 sensors on balconies and inside apartments.
The neighbours had what project coordinator Mara Balestrini from Ideas for Change calls ‘a matter of concern’. The earlier Smart Citizen Kit had begun as a technological solution looking for a problem: a crowd-funded gadget for measuring pollution, whose data users could upload to a web-platform for comparison with information from other users. Early adopters found the technology trickier to install than developers had presumed. Even successful users stopped monitoring because there was little community purpose. A new approach was needed. Noise in Plaça del Sol provided a problem for this technology fix….
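The comparison at the heart of the residents' project, sensor readings against a permissible level, is simple to express. The sketch below assumes hourly decibel readings and uses a placeholder limit of 55 dB; the article does not state Barcelona's official thresholds or the kit's data format.

```python
# Compare balcony sensor readings against a permissible noise limit and
# report how often the square exceeds it. The 55 dB limit and the readings
# are illustrative placeholders, not official Barcelona figures.

def exceedance_rate(readings_db, limit_db):
    """Fraction of readings above the permissible level."""
    over = [r for r in readings_db if r > limit_db]
    return len(over) / len(readings_db)

hourly_readings = [48, 52, 61, 67, 70, 64, 58, 50]  # invented dB values
rate = exceedance_rate(hourly_readings, limit_db=55)
print(f"{rate:.0%} of hours over the limit")
```

What gave the numbers force in Plaça del Sol was not the arithmetic but the shared concern behind it, which is precisely the article's point.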
Anthropologist Clifford Geertz argued many years ago that situations can only be made meaningful through ‘thick description’. Applied to the Smart City, this means data cannot really be explained and used without understanding the contexts in which it arises and gets used. Data can only mobilise people and change things when it becomes thick with social meaning….(More)”
Danny Crichton at TechCrunch: “While the vagaries of the cryptocurrency markets are keeping crypto traders glued to their CoinDesk graphs, the real potential of blockchain is its capability to solve real human challenges in a decentralized, private, and secure way. Government officials have increasingly investigated how blockchain might solve critical problems, but now one city intends to move forward with an actual implementation.
The city of Austin is piloting a new blockchain platform to improve identity services for its homeless population, as part of a competitive grant awarded by the Mayor’s Challenge program sponsored by Bloomberg Philanthropies. Austin was one of 35 cities to be awarded pilot grants, and the top city from that group will ultimately be awarded $5 million….
The city wanted to improve the ability of its patchwork of government and private homeless service providers to offer integrated and comprehensive aid. There are a number of separate challenges here: verifying the identity of a person seeking help, knowing what care that individual has previously received, and empowering the individual to “own” their own records, and ultimately, their destiny.
The goal of the city’s blockchain pilot program is to consolidate the identity and vital records of each homeless person in a safe and confidential way while providing a means for service providers to access that information. Adler explained that “there are all kinds of confidentiality issues that arise when you try to do that, so the thought was that blockchain would allow us to bridge that need.”
By using blockchain, the hope is that the city could replace paper records, which are hard to manage, with electronic encrypted records that would be more reliable and secure. In addition, the blockchain platform could create a decentralized authentication mechanism to verify a particular person’s identity. For instance, a homeless services worker operating in the field could potentially use their mobile device to verify a person live, without having to bring someone back to an office for processing.
More importantly, vital records on the blockchain could build over time, so different providers would know what services a person had used previously. Majid provided the example of health care, where it is crucially important to know the history of an individual. The idea is that, when a homeless person walks into a clinic, the blockchain would provide the entire patient history of that individual to the provider. “Here was your medical records from your last clinic visits, and we can build off the care that you were given last time,” he said. Austin is partnering with the Dell Medical School at the University of Texas to work out how best to implement the blockchain for medical professionals….(More)”.
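One core property the Austin pilot is after, records that build over time and cannot be silently altered, can be sketched with a hash chain, where each entry stores the hash of the previous one. This is a toy illustration of that single idea, assuming nothing about the city's actual platform, which the article does not describe; real systems would add encryption, access control, and distributed consensus.

```python
import hashlib
import json

# Tamper-evident, append-only record chain: each entry stores the hash of
# the previous entry, so altering any past record invalidates every later
# hash. A minimal sketch only; Austin's actual design is not public here.

GENESIS_HASH = "0" * 64

def _entry_hash(payload, prev_hash):
    body = json.dumps({"payload": payload, "prev_hash": prev_hash},
                      sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def add_record(chain, payload):
    """Append a service record, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS_HASH
    chain.append({"payload": payload, "prev_hash": prev_hash,
                  "hash": _entry_hash(payload, prev_hash)})
    return chain

def verify(chain):
    """Recompute every link; any edit to a past record breaks the chain."""
    for i, entry in enumerate(chain):
        prev_hash = chain[i - 1]["hash"] if i else GENESIS_HASH
        if entry["prev_hash"] != prev_hash:
            return False
        if entry["hash"] != _entry_hash(entry["payload"], prev_hash):
            return False
    return True

chain = []
add_record(chain, {"visit": "clinic A", "service": "checkup"})
add_record(chain, {"visit": "clinic B", "service": "follow-up"})
print(verify(chain))  # True
chain[0]["payload"]["service"] = "altered"
print(verify(chain))  # False
```

The verification step is what would let a field worker trust a record shown on a mobile device without phoning a central office.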
Springwise: “Urban Rivers is a Chicago-based charity focused on cleaning up the city’s rivers and re-wilding bankside habitats. One of their most visible pieces of work is a floating habitat installed in the middle of the river that runs through the city. An immediate problem that arose after installation was the accumulation of trash. At first, the company sent someone out on a kayak every other day to clean the habitat. Yet in less than a day, huge amounts of garbage would again be choking the space. The company’s solution was to create a Trash Task Force. The outcome of the Task Force’s work is the TrashBot, a remote-controlled garbage-collecting robot. The TrashBot allows gamers all over the world to do their bit in cleaning up Chicago’s river.
Anyone interested in playing the cleaning game can sign up via the Urban Rivers website. Future development of the bot will likely focus on wildlife monitoring. Similarly, the end goal of the game will be that no one wants to play because there is no more garbage for collection.
From crowdsourced ocean data gathered by the fins of surfers’ boards to a solar-powered autonomous drone that gathers waste from harbor waters, the health of the world’s waterways is being improved in a number of ways. The surfboard fins use sensors to monitor sea salinity, acidity levels and wave motion. Those are all important coastal ecosystem factors that could be affected by climate change. The water drones are intelligent and use on-board cameras and sensors to learn about their environment and avoid other craft as they collect garbage from rivers, canals and harbors….(More)”.
F. Javier Miranda, Antonio Chamorro and Sergio Rubio in Electronic Government: “Social networks have increased the ways in which public administrations can actively interact with the public. However, these new means of communication are not always used efficiently to create an open and two-way relationship. The purpose of this study is to analyse the presence on and use of the social network Facebook by the large councils in the USA, UK and Spain. This research adapts the Facebook assessment index (FAI) to the field of local authorities. The index assesses three dimensions: popularity, content and interactivity. The results show that there is no relationship between the population of the municipality and the degree of use of Facebook by the council, but there are notable differences depending on the country. By creating this ranking, we are helping those responsible for this management to carry out benchmarking activities in order to improve their communication strategy on the social networks….(More)”.
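A ranking built from the FAI's three dimensions could be assembled as a weighted composite. The excerpt does not reproduce the study's actual formula, so the equal weights, the 0-100 scores, and the council names below are all invented for illustration.

```python
# Composite ranking across the FAI's three dimensions (popularity, content,
# interactivity). Weights, scores, and council names are illustrative; the
# study's real formula is not given in the excerpt.

def fai_score(popularity, content, interactivity,
              weights=(1 / 3, 1 / 3, 1 / 3)):
    """Weighted average of three dimension scores, each on a 0-100 scale."""
    dims = (popularity, content, interactivity)
    return sum(w * d for w, d in zip(weights, dims))

councils = {
    "Council A": fai_score(80, 60, 40),  # popular page, little interaction
    "Council B": fai_score(50, 70, 90),  # smaller but highly interactive
}
ranking = sorted(councils, key=councils.get, reverse=True)
print(ranking)  # ['Council B', 'Council A']
```

The benchmarking value the authors mention comes from comparing a council's per-dimension scores against peers, not just the composite.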
Under the new policy, studies that do not fully meet transparency criteria would be excluded from use in EPA policy development. This proposal follows unsuccessful attempts to enact the Honest and Open New EPA Science Treatment (HONEST) Act and its predecessor, the Secret Science Reform Act. These approaches undervalue many scientific publications and limit the impact of valuable information in developing policies in the areas that the EPA regulates….In developing effective policies, earnest evaluations of facts and fair-minded assessments of the associated uncertainties are foundational. Policy discussions require an assessment of the likelihood that a particular observation is true and examinations of the short- and long-term consequences of potential actions or inactions, including a wide range of different sorts of costs. Those with training in making these judgments with access to as much relevant information as possible are crucial for this process. Of course, policy development requires considerations other than those related to science. Such discussions should follow clear assessment after access to all of the available evidence. The scientific enterprise should stand up against efforts that distort initiatives aimed to improve scientific practice, just to pursue other agendas…(More)”.
Report by Tracey Lauriault, Rachel Bloom, Carly Livingstone and Jean-Noé Landry: “This executive summary consolidates findings from a smart city environmental scan (E-Scan) and five case studies of smart city initiatives in Canada. The E-Scan entailed compiling and reviewing documents and definitions produced by smart city vendors, think tanks, associations, consulting firms, standards organizations, conferences, civil society organizations, including critical academic literature, government reports, marketing material, specifications and requirements documents. This research was motivated by a desire to identify international shapers of smart cities and to better understand what differentiates a smart city from an Open Smart City….(More)”.
Chao Yu in the International Journal of Crowd Science: “A group can have more power and greater wisdom than the sum of its individual members. Scholars have long observed this and call it collective intelligence. It emerges from communication, collaboration, competition, brainstorming and the like. Collective intelligence appears in many fields such as public decision-making, voting, social networks and crowdsourcing.
Crowd science mainly focuses on the basic principles and laws of the intelligent activities of groups under the new interconnection model. It explores how to give full play to the intelligence of individual agents and groups, and how to tap their potential to solve problems that are difficult for a single agent.
In this paper, we present a literature review of collective intelligence from a crowd science perspective. We focus on researchers’ related work, especially the circumstances under which a group can show its wisdom, how to measure it, how to optimize it, and its present and future applications in the digital world. That is exactly what crowd science pays close attention to….(More)”.
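The measurement question the review raises has a classic demonstration: aggregate many noisy individual estimates and the aggregate often beats most of the individuals. The guesses below are invented for illustration.

```python
import statistics

# Classic "wisdom of crowds" demonstration: the median of many noisy
# individual estimates is often closer to the truth than almost every
# individual estimate. All numbers here are invented.

true_value = 100
guesses = [62, 85, 90, 104, 110, 123, 140, 97, 76, 118]

crowd_estimate = statistics.median(guesses)
crowd_error = abs(crowd_estimate - true_value)
individual_errors = [abs(g - true_value) for g in guesses]

# How many individuals does the crowd's estimate beat?
beaten = sum(e > crowd_error for e in individual_errors)
print(crowd_estimate, crowd_error)      # 100.5 0.5
print(f"crowd beats {beaten} of {len(guesses)} individuals")  # 10 of 10
```

The harder questions the review surveys, when this effect holds, how to optimize it, and how to measure group wisdom beyond simple aggregation, are exactly where simple medians stop being enough.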
Mitchell Waldrop at Science: “…The point of such models is to avoid describing human affairs from the top down with fixed equations, as is traditionally done in such fields as economics and epidemiology. Instead, outcomes such as a financial crash or the spread of a disease emerge from the bottom up, through the interactions of many individuals, leading to a real-world richness and spontaneity that is otherwise hard to simulate.
That kind of detail is exactly what emergency managers need, says Christopher Barrett, a computer scientist who directs the Biocomplexity Institute at Virginia Polytechnic Institute and State University (Virginia Tech) in Blacksburg, which developed the NPS1 model for the government. The NPS1 model can warn managers, for example, that a power failure at point X might well lead to a surprise traffic jam at point Y. If they decide to deploy mobile cell towers in the early hours of the crisis to restore communications, NPS1 can tell them whether more civilians will take to the roads, or fewer. “Agent-based models are how you get all these pieces sorted out and look at the interactions,” Barrett says.
The downside is that models like NPS1 tend to be big—each of the model’s initial runs kept a 500-microprocessor computing cluster busy for a day and a half—forcing the agents to be relatively simple-minded. “There’s a fundamental trade-off between the complexity of individual agents and the size of the simulation,” says Jonathan Pfautz, who funds agent-based modeling of social behavior as a program manager at the Defense Advanced Research Projects Agency in Arlington, Virginia.
But computers keep getting bigger and more powerful, as do the data sets used to populate and calibrate the models. In fields as diverse as economics, transportation, public health, and urban planning, more and more decision-makers are taking agent-based models seriously. “They’re the most flexible and detailed models out there,” says Ira Longini, who models epidemics at the University of Florida in Gainesville, “which makes them by far the most effective in understanding and directing policy.”
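The bottom-up logic the article describes can be shown in miniature: agents with simple local rules, contact and recover, produce an epidemic curve that no single equation was written to dictate. This toy model is nothing like the scale or fidelity of NPS1; all parameters are invented.

```python
import random

# Toy agent-based contagion model: the epidemic curve emerges from many
# individual contacts rather than from a top-down equation. A deliberately
# tiny illustration; real models like NPS1 run on large clusters.

def simulate(n_agents=200, n_infected=5, p_transmit=0.05,
             contacts_per_step=4, recovery_steps=5, steps=40, seed=42):
    rng = random.Random(seed)
    # Per-agent state: 'S' susceptible, 'I' infected, 'R' recovered.
    state = ["I"] * n_infected + ["S"] * (n_agents - n_infected)
    timer = [recovery_steps if s == "I" else 0 for s in state]
    curve = []
    for _ in range(steps):
        infected = [i for i, s in enumerate(state) if s == "I"]
        for i in infected:
            for _ in range(contacts_per_step):  # random mixing contacts
                j = rng.randrange(n_agents)
                if state[j] == "S" and rng.random() < p_transmit:
                    state[j], timer[j] = "I", recovery_steps
        for i in infected:  # recover after a fixed infectious period
            timer[i] -= 1
            if timer[i] == 0:
                state[i] = "R"
        curve.append(state.count("I"))
    return curve

curve = simulate()
print(max(curve), curve[-1])  # peak infections, infections at the end
```

The trade-off Pfautz describes is visible even here: give each agent a schedule, a household, and a road network instead of random mixing and the run time explodes, which is why NPS1 needed a 500-microprocessor cluster.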
The roots of agent-based modeling go back at least to the 1940s, when computer pioneers such as Alan Turing experimented with locally interacting bits of software to model complex behavior in physics and biology. But the current wave of development didn’t get underway until the mid-1990s….(More)”.