Twitter’s misinformation problem is much bigger than Trump. The crowd may help solve it.


Elizabeth Dwoskin at the Washington Post: “A pilot program called Birdwatch lets selected users write corrections and fact checks on potentially misleading tweets…

The presidential election is over, but the fight against misinformation continues.

The latest volley in that effort comes from Twitter, which on Monday announced Birdwatch, a pilot project that uses crowdsourcing techniques to combat falsehoods and misleading statements on its service.

The pilot, which is open only to about 1,000 select users who can apply to be contributors, will allow people to write notes with corrections and accurate information directly onto misleading tweets — a method that has the potential to get quality information to people more quickly than traditional fact-checking. Fact checks that other contributors rate as high quality may be bumped up or rewarded with greater visibility.

Birdwatch represents Twitter’s most experimental response to one of the biggest lessons that social media companies drew from the historic events of 2020: that their existing efforts to combat misinformation — including labeling, fact-checking and sometimes removing content — were not enough to prevent falsehoods about a stolen election or the coronavirus from reaching and influencing broad swaths of the population. Researchers who studied enforcement actions by social media companies last year found that fact checks and labels are usually implemented too late, after a post or a tweet has gone viral.

The Birdwatch project — which for the duration of the pilot will function as a separate website — is novel in that it attempts to build new mechanisms into Twitter’s product that foreground fact-checking by its community of 187 million daily users worldwide. Rather than having to comb through replies to a tweet to sort out what’s true or false — or having Twitter employees append a label providing additional context to a tweet — users will be able to click on a separate notes folder attached to a tweet, where they can see the consensus-driven responses from the community. Twitter will have a team reviewing winning responses to prevent manipulation, though a major question is whether any part of the process will be automated and therefore more easily gamed….(More)”
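
Twitter has not published the details of Birdwatch’s ranking algorithm, but the consensus mechanism described above can be sketched in miniature. In the hypothetical Python below (every name and threshold is invented for illustration), a note surfaces only once enough contributors have rated it and a large share of them found it helpful:

```python
from dataclasses import dataclass

@dataclass
class Note:
    """A contributor-written fact-checking note attached to a tweet."""
    text: str
    helpful_ratings: int = 0
    total_ratings: int = 0

    @property
    def helpfulness(self) -> float:
        # Share of contributor ratings that marked this note helpful.
        return self.helpful_ratings / self.total_ratings if self.total_ratings else 0.0

def rank_notes(notes, min_ratings=5, min_helpfulness=0.8):
    """Surface only notes with enough ratings and broad agreement,
    ordered from most to least helpful (thresholds are invented)."""
    eligible = [n for n in notes if n.total_ratings >= min_ratings
                and n.helpfulness >= min_helpfulness]
    return sorted(eligible, key=lambda n: n.helpfulness, reverse=True)

notes = [
    Note("Claim contradicted by official election results.", 14, 15),
    Note("Source is satire; see the site's own disclaimer.", 4, 9),
]
for note in rank_notes(notes):
    print(f"{note.helpfulness:.0%}  {note.text}")
```

The open question flagged in the article, whether such scoring gets automated and thereby gamed, is precisely a question of how thresholds like these are chosen and policed.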

A recommendation and risk classification system for connecting rough sleepers to essential outreach services


Paper by Harrison Wilde et al: “Rough sleeping is a chronic experience faced by some of the most disadvantaged people in modern society. This paper describes work carried out in partnership with Homeless Link (HL), a UK-based charity, in developing a data-driven approach to better connect people sleeping rough on the streets with outreach service providers. HL’s platform has grown exponentially in recent years, leading to thousands of alerts per day during extreme weather events; this overwhelms the volunteer-based system they currently rely upon for the processing of alerts. In order to solve this problem, we propose a human-centered machine learning system to augment the volunteers’ efforts by prioritizing alerts based on the likelihood of making a successful connection with a rough sleeper. This addresses capacity and resource limitations whilst allowing HL to quickly, effectively, and equitably process all of the alerts that they receive. Initial evaluation using historical data shows that our approach increases the rate at which rough sleepers are found following a referral by at least 15% based on labeled data, implying a greater overall increase when the alerts with unknown outcomes are considered, and suggesting the benefit of a longer trial to assess the models in practice. The discussion and modeling process are conducted with careful consideration of ethics, transparency, and explainability due to the sensitive nature of the data involved and the vulnerability of the people that are affected….(More)”.
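
The paper’s actual models and features are more elaborate, but the core idea (score incoming alerts by the predicted probability of a successful connection, so volunteers process the most promising referrals first) can be sketched as follows. All features, labels, and data here are synthetic and purely illustrative; scikit-learn is assumed available:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical historical alerts: [location_precision, hours_since_sighting,
# same_area_alert_density], labeled 1 if outreach found the person.
X_hist = rng.random((500, 3))
y_hist = (X_hist[:, 0] - 0.5 * X_hist[:, 1] + rng.normal(0, 0.3, 500) > 0).astype(int)

# Train a simple classifier on alerts with known outcomes.
model = LogisticRegression().fit(X_hist, y_hist)

# Score today's unprocessed alerts and rank them by predicted success
# probability, so the most promising referrals are handled first.
X_new = rng.random((10, 3))
scores = model.predict_proba(X_new)[:, 1]
priority_order = np.argsort(scores)[::-1]
for rank, idx in enumerate(priority_order, 1):
    print(f"{rank:2d}. alert {idx} - estimated connection probability {scores[idx]:.2f}")
```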

Citizen acceptance of mass surveillance? Identity, intelligence, and biodata concerns


Paper by Westerlund, Mika; Isabelle, Diane A; Leminen, Seppo: “News media and human rights organizations are warning about the rise of the surveillance state that builds on distrust and mass surveillance of its citizens. Further, the global pandemic has fostered public-private collaboration such as the launch of contact tracing apps to tackle COVID-19. Thus, such apps also contribute to the diffusion of technologies that can collect and analyse large amounts of sensitive data and the growth of the surveillance society. This study examines the impacts of citizens’ concerns about digital identity, the government’s intelligence activities, and the security of growing amounts of biodata on their trust in and acceptance of the government’s use of personal data. Our analysis of survey data from 1,486 Canadians suggests that those concerns have direct effects on people’s acceptance of the government’s use of personal data, but not necessarily on their trust in the government to be respectful of privacy. Authorities should be more transparent about the collection and uses of data….(More)”
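
The paper’s own analysis is likely a structural model; as a loose illustration of testing such direct effects, a regression of acceptance on the three concern scales over simulated data might look like the sketch below (nothing here reflects the actual survey items, scales, or coefficients; statsmodels is assumed available):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1486  # matches the survey's sample size

# Hypothetical standardized concern scales (the paper's items differ).
identity_concern = rng.normal(size=n)
intelligence_concern = rng.normal(size=n)
biodata_concern = rng.normal(size=n)

# Simulated acceptance outcome with invented effect sizes.
acceptance = (-0.3 * identity_concern - 0.2 * biodata_concern
              + rng.normal(size=n))

X = sm.add_constant(np.column_stack(
    [identity_concern, intelligence_concern, biodata_concern]))
result = sm.OLS(acceptance, X).fit()
print(result.summary(xname=["const", "identity", "intelligence", "biodata"]))
```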

Re-use of smart city data: The need to acquire a social license through data assemblies


Written Testimony by Stefaan G. Verhulst before the New York City Council Committee on Technology: “…In crises such as these, calls for the city to harness technology and data to help policy-makers find solutions grow louder and stronger. Many have spoken about accelerating already ongoing work to turn New York into “a smart city” — using digital technology to connect, protect, and improve the lives of its residents. Some of this proposed work could involve the use of sensors to collect data on how people live and work across New York City. Other work could involve expanding the city’s relationships with private organizations through data collaboratives. Data collaboratives, which are central to our work at the GovLab, are a new form of collaboration that extends beyond the conventional public-private partnership model, in which participants from different sectors exchange their data to create public value. The city already operates one such data collaborative in the form of the NYC Recovery Data Partnership, a partnership that allows New York-based private and civic organizations to provide their data to analysts at city agencies to inform the COVID-19 pandemic response. I have the privilege of serving as an advisor to that initiative.

Data collaboration takes place widely through a variety of institutional, contractual and technical structures and instruments. Borrowing language and inspiration from the open data movement, the emerging data collaborative movement has proven its value and potential for positive impact. Data reuse has the potential to improve disease treatment, identify better ways to source supplies, monitor adherence to non-pharmaceutical restrictions, and provide a range of other public benefits. Whether it is informing decision-making or shaping the development of new tools and techniques, it is clear that data has tremendous potential to mitigate the worst effects of this pandemic.

However, as promising and attractive as reusing data might seem, it is important to keep in mind that there also exist widespread concerns and challenges….

My colleagues and I at The GovLab believe the Data Assembly methodology offers the city a new way forward on the issues under discussion today, as they relate to smart cities. In our view, oversight cannot be just a reactive process of responding to complaints; it must be a proactive one, inviting city residents, data holders, and advocacy groups to the table to determine what is and is not acceptable. Amid rapidly changing circumstances, the city needs ways to collect and synthesize actionable and diverse public input to identify concerns, expectations, and opportunities. We encourage the city to explore assembling mini-publics of its own or, failing that, to commission legitimate partners to lead such efforts.

New York faces many challenges in 2021, but I do not doubt the capacity of its people to overcome them. Through people-led innovation and processes, the city can ensure that data re-use conducted as part of the smart city is seen as legitimate, and is more effective and targeted. It can also help ensure that work across the city is more open, collaborative, and legitimate…(More)”.

5 Domains of Government That Are Ripe for Transformation


Article by William Eggers: “…in a Deloitte report entitled Creating the Government of the Future, my colleagues and I identified five principal domains of government activity that are ripe for technological transformation:…

Service delivery: In Estonia, taxpayers can file taxes online simply by approving forms auto-populated with their income data. This ease represents the future of service delivery: focused on the user, and automated for no-touch government that serves people without them having to fill out long forms. (Think of hospital data recording a birth triggering a birth certificate, a Social Security card and health-care record for the child, and a family allowance payment to qualifying parents.)

Services will increasingly be tailored to such anticipated life events. Ideally, a single-login omnichannel experience provides access to tasks ranging from collecting unemployment benefits to registering to run for office. With once-only government, citizens and businesses need only provide their data once; it is then shared across departments with appropriate privacy protections.
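
No specific Estonian system is described here; as a toy sketch of the life-event pattern (one registered event fanning out to every service a resident is entitled to, so data is provided once), consider the following, where every handler name is hypothetical:

```python
from collections import defaultdict

# Hypothetical event bus: registering one life event triggers every
# service subscribed to it, so residents never re-enter the same data.
subscribers = defaultdict(list)

def on_event(event_type):
    def register(handler):
        subscribers[event_type].append(handler)
        return handler
    return register

@on_event("birth")
def issue_birth_certificate(record):
    print(f"Birth certificate issued for {record['child']}")

@on_event("birth")
def open_health_record(record):
    print(f"Health-care record opened for {record['child']}")

@on_event("birth")
def start_family_allowance(record):
    if record["parents_qualify"]:
        print(f"Family allowance payment scheduled for {record['parents']}")

def publish(event_type, record):
    for handler in subscribers[event_type]:
        handler(record)

# A single hospital submission fans out to all downstream services.
publish("birth", {"child": "Example Child", "parents": "Example Parents",
                  "parents_qualify": True})
```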

Operations: Government operations should take a cue from the private sector, where technologies like data analytics and cognitive automation converge to create serious efficiencies. Operations from HR to procurement can combine in an integrated center office, creating insights from shared, analyzed data about what to expect and how to improve. “As-a-service” acquisition allows contractors to provide basic infrastructure, such as cloud services, leading to faster scaling. To transform operations, strike teams of specialists and subject-matter experts meet in digital factories, using agile processes without traditional bureaucracies.

Policy- and decision-making: Evidence-based policymaking can identify what approaches produce the best results. With artificial-intelligence-based scenario analysis, machine learning can test the relationship between factors in systemic problems. Potentially, understanding these relationships could allow policy to be self-correcting. Likewise, increasingly sophisticated statistical models will allow government by simulation — a cheap way to A/B test systems like traffic management, disaster response and city planning. Meanwhile, mass-communication tools enable crowdsourced and distributed policymaking, in which ordinary citizens contribute their expertise.
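
As a toy illustration of “government by simulation” (the model and all numbers below are invented, not drawn from any real deployment), two candidate traffic-signal policies can be A/B tested in code before any field trial:

```python
import random

def simulate_intersection(green_seconds, hours=24, seed=42):
    """Crude single-intersection model: cars arrive at random each minute,
    and a fixed share clears per cycle depending on green time.
    Returns the average queue length (all rates are invented)."""
    rng = random.Random(seed)
    queue, total_queue, steps = 0, 0, hours * 60
    for _ in range(steps):                  # one-minute time steps
        queue += rng.randint(5, 15)         # arrivals per minute
        cleared = int(green_seconds * 0.4)  # hypothetical clearance rate
        queue = max(0, queue - cleared)
        total_queue += queue
    return total_queue / steps

# A/B test two candidate policies in simulation before a field trial.
for policy, green in [("A: 25s green", 25), ("B: 35s green", 35)]:
    print(f"Policy {policy}: average queue {simulate_intersection(green):.1f} cars")
```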

Regulation and enforcement: The future of this governmental domain is tied to the predictive abilities of AI and analytics. In a form of risk-based regulation, for example, AI can identify factors likely to contribute to a food-borne illness outbreak, helping food inspectors focus their energies on the restaurants most likely to violate the rules. Modeling systems to identify beneficial behaviors can enable positive enforcement strategies, which reward a business’s focus on the big picture and going beyond the bare minimum. Lastly, countries like New Zealand have experimented with legislation written as software code, whose bureaucratic effects could be simulated ahead of time.
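
New Zealand’s “rules as code” experiments are not reproduced here; as a hypothetical flavor of the idea (the rule below is invented, not actual legislation), a statute expressed as an executable function can be run against a synthetic population to preview its reach before enactment:

```python
def eligible_for_benefit(age, weekly_income, dependents):
    """Hypothetical eligibility rule written as code, so its fiscal and
    distributional effects can be simulated before the law is passed."""
    return age >= 18 and (weekly_income < 400 or dependents > 0)

# Simulate the rule over a synthetic population to preview its effects.
population = [
    {"age": 34, "weekly_income": 350, "dependents": 2},
    {"age": 17, "weekly_income": 120, "dependents": 0},
    {"age": 52, "weekly_income": 610, "dependents": 1},
]
covered = sum(eligible_for_benefit(**p) for p in population)
print(f"{covered} of {len(population)} residents would qualify")
```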

Talent/workforce: Flexibility will be the hallmark of the future public workforce. NASA and other agencies are trying a talent marketplace model, in which some workers have the ability to move from project to project, even between agencies, based on their documented skills. Talent won’t go to waste in this just-in-time civil service. Such a talent marketplace would cover an open talent spectrum, from freelancers to career employees….(More)”.

From Journalistic Ethics To Fact-Checking Practices: Defining The Standards Of Content Governance In The Fight Against Disinformation


Paper by Paolo Cavaliere: “This article claims that the practices undertaken by digital platforms to counter disinformation, under the EU Action Plan against Disinformation and the Code of Practice, mark a shift in the governance of news media content. While professional journalism standards have long been used, both within and outside the industry, to assess the accuracy of news content and adjudicate on media conduct, the platforms are now resorting to different fact-checking routines to moderate and curate their content.
The article will demonstrate how fact-checking organisations have different working methods than news operators and ultimately understand and assess ‘accuracy’ in different ways. As a result, this new and enhanced role for platforms and fact-checkers as curators of content affects how content is distributed to the audience and, thus, media freedom. Depending on how fact-checking standards and working routines consolidate in the near future, however, this trend offers a real opportunity to improve the quality of news and the right to receive information…(More)”.

Sustainable Rescue: data sharing to combat human trafficking


Interview with Paul Fockens of Sustainable Rescue: “Human trafficking still takes place on a large scale, and still too often under the radar, which does not make it easy for organisations that want to combat it. Sharing data between various sorts of organisations, including the government, the police, but also banks, plays a crucial role in mapping the networks of criminals involved in human trafficking, including their victims. Data sharing contributes to tackling this criminal business not only reactively, but also proactively….Sustainable Rescue tries to make largely invisible human trafficking visible. Bundling data, and therefore knowledge, is crucial in this. Paul: “It’s about combining the routes criminals (and their victims) take from A to B, the financial transactions they make, the websites they visit, the hotels where they check in, et cetera. All those signs of human trafficking can be found in the data of various types of organisations: the police, municipalities, the Public Prosecution Service, charities such as the Salvation Army, but also banks and insurance institutions. The problem is that you need to collect all the pieces of the puzzle to get clear insights. As long as this relevant data is not combined through data sharing, it is very difficult to obtain those insights. In nine out of ten cases, these authorities are not willing and/or allowed to share their data, mainly because of its privacy sensitivity. However, in order to eliminate human trafficking, that data will have to be bundled. Only then can analyses be made of the patterns of a human trafficking network.”…(More)”.
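
The interview does not detail Sustainable Rescue’s actual tooling. As a schematic sketch of the bundling idea, the example below joins weak signals from three separate sources on a shared identifier, so that only the combination surfaces a pattern worth review. Every record and field name is fabricated for illustration, and in practice each dataset stays with its owner under privacy governance:

```python
import pandas as pd

# Invented example records from three separate organisations.
police_sightings = pd.DataFrame(
    {"entity_id": ["x1", "x2"], "route": ["A->B", "C->D"]})
bank_flags = pd.DataFrame(
    {"entity_id": ["x1", "x3"], "transaction_pattern": ["rapid cash-outs", "normal"]})
hotel_checkins = pd.DataFrame(
    {"entity_id": ["x1", "x2"], "checkins_last_30d": [9, 1]})

# Each piece alone is weak evidence; joined on a shared identifier,
# co-occurring signals across sources can flag a pattern for review.
combined = (police_sightings
            .merge(bank_flags, on="entity_id", how="outer")
            .merge(hotel_checkins, on="entity_id", how="outer"))
combined["signal_count"] = combined.notna().sum(axis=1) - 1  # exclude the id
print(combined.sort_values("signal_count", ascending=False))
```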

Applying behavioural science to the annual electoral canvass in England: Evidence from a large-scale randomised controlled trial


Paper by Martin Sweeney, Peter John, Michael Sanders, Hazel Wright and Lucy Makinson: “Local authorities in Great Britain are required to ensure that their electoral registers are as accurate and complete as possible. To this end, Household Enquiry Forms (HEFs) are mailed to all properties annually to collect updated details from residents, and any eligible unregistered residents will subsequently be invited to register to vote. Unfortunately, HEF nonresponse is pervasive and costly. Using insights from behavioural science, we modified letters and envelopes posted to households as part of the annual canvass, and evaluated their effects using a randomised controlled trial across two local authorities in England (N=226,528 properties). We find that modified materials – particularly redesigned envelopes – significantly increase initial response rates and savings. However, we find no effects on voter registration. While certain behavioural interventions can improve the efficiency of the annual canvass, other approaches or interventions may be needed to increase voter registration rates and update voter information….(More)”.
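
The paper’s analysis is richer than this, but the headline comparison, whether a redesigned envelope raised the initial response rate relative to control, reduces to a two-proportion test. The sketch below uses invented counts rather than the trial’s data (statsmodels assumed available):

```python
from statsmodels.stats.proportion import proportions_ztest

# Invented counts standing in for the trial's two arms:
# forms returned vs. households mailed, per arm.
responses = [41_200, 38_900]     # redesigned envelope, control
households = [113_264, 113_264]  # roughly half the trial in each arm

stat, p_value = proportions_ztest(responses, households)
rates = [r / n for r, n in zip(responses, households)]
print(f"Response rates: {rates[0]:.1%} vs {rates[1]:.1%}, p = {p_value:.4f}")
```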

The Rule of Technology – How Technology Is Used to Disturb Basic Labor Law Protections


Paper by Tammy Katsabian: “Much has been written on technology and the law. Leading scholars are occupied with the power dynamics between capital, technology, and the law, along with their implications for society and human rights. Alongside that, various labor law scholars focus on the implications of smart technology for employees’ rights throughout the recruitment and employment periods, and on workers’ status and rights in the growing phenomenon of platform-based work. This article aims to contribute to the current scholarship by zooming out and observing, from a bird’s-eye view, how certain actors use technology to manipulate and challenge basic legal categories in labor today. This is done by referring to legal, sociological, and internet scholarship on the matter.

The main argument elaborated throughout this article is that digital technology is used to blur and distort many of the basic labor law protections. Because of this, legal categories and rights in the labor field seem to be outdated and need to be adjusted to this new reality.
By providing four detailed examples, the article unpacks how employers, giant high-tech companies, and society use various forms of technology to constantly disturb legal categories in the labor field regarding time, sphere, and relations. In this way, the article demonstrates how social media sites, information communication technologies, and artificial intelligence are used to blur the traditional concepts of privacy, working time and place, the employment contract, and community. This increased blurriness and fragility in labor have created many new difficulties that require new ways of thinking about regulation. Therefore, the article argues that both law and technology have to be modified to cope with the new challenges. Following this, the article proposes three possible ways in which to start considering the regulation of labor in the digital reality: (1) embrace flexibility as part of the legal order and use it as an interpretive tool and not just as an obstacle, (2) broaden the current legal protection and add a procedural layer to the legal rights at stake, and (3) use technology as part of the solution to the dilemmas that technology itself has emphasized. By doing so, this article seeks to enable more accurate thinking on law and regulation in the digital reality, particularly in the labor field, as well as in other fields and contexts….(More)”.

The Nudge Puzzle: Matching Nudge Interventions to Cybersecurity Decisions


Paper by Verena Zimmermann and Karen Renaud: “Nudging is a promising approach, in terms of influencing people to make advisable choices in a range of domains, including cybersecurity. However, the processes underlying the concept and the nudge’s effectiveness in different contexts, and in the long term, are still poorly understood. Our research thus first reviewed the nudge concept and differentiated it from other interventions before applying it to the cybersecurity area. We then carried out an empirical study to assess the effectiveness of three different nudge-related interventions on four types of cybersecurity-specific decisions. Our study demonstrated that the combination of a simple nudge and information provision, termed a “hybrid nudge,” was at least as effective as the simple nudge on its own in encouraging secure choices, and in some decision contexts even more effective. This indicates that including information when deploying a nudge, thereby increasing the intervention’s transparency, does not necessarily diminish its effectiveness.
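
The authors’ experimental design and statistics are not reproduced here. As an illustrative sketch, comparing the share of secure choices under a hybrid versus a simple nudge is a standard contingency-table test; the counts below are invented (scipy assumed available):

```python
from scipy.stats import chi2_contingency

# Invented counts: secure vs. insecure choices under each intervention.
#           secure  insecure
table = [[148, 52],    # hybrid nudge (nudge + information)
         [131, 69]]    # simple nudge alone

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"Secure-choice rates: hybrid {148/200:.0%}, simple {131/200:.0%}")
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
```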

A follow-up study explored the educational and long-term impact of our tested nudge interventions to encourage secure choices. The results indicate that the impact of the initial nudges, of all kinds, did not endure. We conclude by discussing our findings and their implications for research and practice….(More)”.