Social media firms 'should hand over data amid suicide risk'


Denis Campbell at the Guardian: “Social media firms such as Facebook and Instagram should be forced to hand over data about who their users are and why they use the sites to reduce suicide among children and young people, psychiatrists have said.

The call from the Royal College of Psychiatrists comes as ministers finalise plans to crack down on harms caused by people viewing unsavoury material and messages online.

The college, which represents the UK’s 18,000 psychiatrists, wants the government to make social media platforms hand over the data to academics so that they can study what sort of content users are viewing.

“We will never understand the risks and benefits of social media use unless the likes of Twitter, Facebook and Instagram share their data with researchers,” said Dr Bernadka Dubicka, chair of the college’s child and adolescent mental health faculty. “Their research will help shine a light on how young people are interacting with social media, not just how much time they spend online.”

Data passed to academics would show the type of material viewed and how long users were spending on such platforms but would be anonymous, the college said.

The government plans to set up a new online safety regulator and the college says it should be given the power to compel firms to hand over data. It is also calling for the forthcoming 2% “turnover tax” on social media companies’ income to be extended so that it covers their international turnover, not just turnover from the UK.

“Self-regulation is not working. It is time for government to step up and take decisive action to hold social media companies to account for escalating harmful content to vulnerable children and young people,” said Dubicka.

The college’s demands come amid growing concern that young people are being harmed by material that, for example, encourages self-harm, suicide and eating disorders. They are included in a new position statement on technology use and the mental health of children and young people.

NHS England challenged firms to hand over the sort of information that the college is suggesting. Claire Murdoch, its national director for mental health, said that action was needed “to rein in potentially misleading or harmful online content and behaviours”.

She said: “If these tech giants really want to be a force for good, put a premium on users’ wellbeing and take their responsibilities seriously, then they should do all they can to help researchers better understand how they operate and the risks posed. Until then, they cannot confidently say whether the good outweighs the bad.”

The demands have also been backed by Ian Russell, who has become a campaigner against social media harm since his 14-year-old daughter Molly killed herself in November 2017….(More)”.

Global problems need social science


Hetan Shah at Nature: “Without human insights, data and the hard sciences will not meet the challenges of the next decade…

I worry about the fact that the call prioritized science and technology over the humanities and social sciences. Governments must make sure they also tap into that expertise, or they will fail to tackle the challenges of this decade.

For example, we cannot improve global health if we take only a narrow medical view. Epidemics are social as well as biological phenomena. Anthropologists such as Melissa Leach at the University of Sussex in Brighton, UK, played an important part in curbing the West African Ebola epidemic with proposals to replace risky burial rituals with safer ones, rather than trying to eliminate such rituals altogether.

Treatments for mental health have made insufficient progress. Advances will depend, in part, on a better understanding of how social context influences whether treatment succeeds. Similar arguments apply to the problem of antimicrobial resistance and antibiotic overuse.

Environmental issues are not just technical challenges that can be solved with a new invention. To tackle climate change we will need insight from psychology and sociology. Scientific and technological innovations are necessary, but enabling them to make an impact requires an understanding of how people adapt and change their behaviour. That will probably require new narratives — the purview of rhetoric, literature, philosophy and even theology.

Poverty and inequality call even more obviously for expertise beyond science and maths. The UK Economic and Social Research Council has recognized that poor productivity in the country is a big problem, and is investing up to £32.4 million (US$42 million) in a new Productivity Institute in an effort to understand the causes and potential remedies.

Policy that touches on national and geographical identity also needs scholarly input. What explains the rise of ‘Englishness’? How do we live together in a community of diverse races and religions? How is migration understood and experienced? These intangibles have real-world consequences, as demonstrated by the Brexit vote and ongoing discussions about whether the United Kingdom has a future as a united kingdom. It will take the work of historians, social psychologists and political scientists to help shed light on these questions. I could go on: fighting against misinformation; devising ethical frameworks for artificial intelligence. These are issues that cannot be tackled with better science alone….(More)”.

Human-centred policy? Blending ‘big data’ and ‘thick data’ in national policy


Policy Lab (UK): “….Compared with quantitative data, ethnography creates different forms of data – what anthropologists call ‘thick data’. Complex social problems benefit from insights beyond linear, standardised evidence and this is where thick data shows its worth. In Policy Lab we have generated ethnographic films and analysis to sit alongside quantitative data, helping policy-makers to build a rich picture of current circumstances. 

On the other hand, much has been written about big data – data generated through digital interactions – whether in traditional ledgers and spreadsheets or through emerging uses of artificial intelligence and the internet of things. The ever-growing zettabytes of data can reveal a lot, providing a (sometimes real-time) digital trail that captures and aggregates our individual choices, preferences, behaviours and actions.

Much hyped, this quantitative data has great potential to inform future policy, but it must be handled ethically and requires careful preparation and analysis to avoid biases and false assumptions creeping in. Three issues we have seen in our projects (illustrated in the brief sketch after this list) relate to:

  • partial data, for example not having data on people who are not digitally active, biasing the sample
  • the time-consuming challenge of cleaning up data, in a political context where time is often of the essence
  • the lack of data interoperability, where different localities/organisations capture different metrics
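
To make the first two of these issues concrete, here is a minimal sketch in Python using pandas. All column names and figures are hypothetical; it shows only a simple coverage-bias check against an external benchmark and the harmonisation of metrics that two localities capture differently.

```python
# A minimal sketch (hypothetical data throughout) of two issues above:
# flagging coverage bias in digitally generated data, and harmonising
# metrics that two localities capture differently.
import pandas as pd

# Locality A logs users per week; Locality B logs users per month.
locality_a = pd.DataFrame({"week": [1, 2, 3, 4], "users": [120, 150, 130, 140]})
locality_b = pd.DataFrame({"month": [1], "monthly_users": [510]})

# Harmonise to a common monthly metric before combining.
combined = pd.DataFrame({
    "locality": ["A", "B"],
    "users_per_month": [locality_a["users"].sum(),
                        locality_b["monthly_users"].sum()],
})
print(combined)

# Partial-data check: compare the digitally active share implicit in the
# sample with an external benchmark (e.g. a household survey).
share_sample = 1.0       # app-generated data only ever sees people online
share_population = 0.88  # hypothetical survey benchmark
if share_sample - share_population > 0.05:
    print("Warning: sample over-represents digitally active people")
```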

Through a number of Policy Lab projects we have used big data to see the big picture before then using thick data to zoom in to the detail of people’s lived experience. Whereas big data can give us cumulative evidence at a macro, often systemic, level, thick data provides insights at an individual or group level. We have found the blending of ‘big data’ and ‘thick data’ to be the sweet spot.

[Diagram: Policy Lab’s model for combining big data and thick data (2020)]

Policy Lab’s work develops data and insights into ideas for potential policy intervention which we can start to test as prototypes with real people. These operate at the ‘meso’ level (in the middle of the diagram above), informed by both the thick data from individual experiences and the big data at a population or national level. We have written a lot about prototyping for policy and are continuing to explore how you prototype a policy compared to, say, a digital service….(More)”.

Predictive Policing Theory


Paper by Andrew Guthrie Ferguson: “Predictive policing is changing law enforcement. New place-based predictive analytic technologies allow police to predict where and when a crime might occur. Data-driven insights have been operationalized into concrete decisions about police priorities and resource allocation. In the last few years, place-based predictive policing has spread quickly across the nation, offering police administrators the ability to identify higher crime locations, to restructure patrol routes, and to develop crime suppression strategies based on the new data.

This chapter suggests that the debate about technology is better thought about as a choice of policing theory. In other words, when purchasing a particular predictive technology, police should be doing more than simply choosing the most sophisticated predictive model; instead they must first make a decision about the type of policing response that makes sense in their community. Foundational questions about whether we want police officers to be agents of social control, civic problem-solvers, or community partners lie at the heart of any choice of which predictive technology might work best for any given jurisdiction.

This chapter then examines predictive policing technology as a choice about policing theory and how the purchase of a particular predictive tool becomes – intentionally or unintentionally – a statement about police role. Interestingly, these strategic choices map on to existing policing theories. Three of the traditional policing philosophies – hot spot policing, problem-oriented policing, and community-based policing – have loose parallels with new place-based predictive policing technologies like PredPol, Risk Terrain Modeling (RTM), and HunchLab. This chapter discusses these leading predictive policing technologies as illustrative examples of how police can choose between prioritizing additional police presence, targeting environmental vulnerabilities, and/or establishing a community problem-solving approach as a different means of achieving crime reduction….(More)”.
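
The place-based scoring idea behind these tools can be illustrated with a deliberately simplified sketch in Python. This is not how PredPol, RTM, or HunchLab work internally; it only shows the general logic of ranking grid cells by recency-weighted counts of past incidents, with the incident data and half-life parameter invented for illustration.

```python
# A deliberately simplified sketch of place-based prediction: rank grid
# cells by recency-weighted counts of past incidents. NOT the method of
# any named vendor; data and parameters are invented.
from collections import defaultdict
import math

# (x_cell, y_cell, days_ago) for past incidents on a city grid.
incidents = [(3, 4, 1), (3, 4, 10), (3, 5, 2), (7, 1, 30), (7, 1, 3)]

HALF_LIFE_DAYS = 14  # assumption: an incident's weight halves every 14 days

scores = defaultdict(float)
for x, y, days_ago in incidents:
    scores[(x, y)] += math.exp(-math.log(2) * days_ago / HALF_LIFE_DAYS)

# The top-scoring cells would become candidate patrol priorities.
for cell, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(cell, round(score, 2))
```

Even in this toy version, the choices of decay rate, grid size, and input data embody exactly the policing-theory decisions the chapter describes: what counts as risk, and what response the score is meant to trigger.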

The promise and perils of big gender data


Essay by Bapu Vaitla, Stefaan Verhulst, Linus Bengtsson, Marta C. González, Rebecca Furst-Nichols & Emily Courey Pryor in Special Issue on Big Data of Nature Medicine: “Women and girls are legally and socially marginalized in many countries. As a result, policymakers neglect key gendered issues such as informal labor markets, domestic violence, and mental health [1]. The scientific community can help push such topics onto policy agendas, but science itself is riven by inequality: women are underrepresented in academia, and gendered research is rarely a priority of funding agencies.

However, the critical importance of better gender data for societal well-being is clear. Mental health is a particularly striking example. Estimates from the Global Burden of Disease database suggest that depressive and anxiety disorders are the second leading cause of morbidity among females between 10 and 63 years of age [2]. But little is known about the risk factors that contribute to mental illness among specific groups of women and girls, the challenges of seeking care for depression and anxiety, or the long-term consequences of undiagnosed and untreated illness. A lack of data similarly impedes policy action on domestic and intimate-partner violence, early marriage, and sexual harassment, among many other topics.

‘Big data’ can help fill that gap. The massive amounts of information passively generated by electronic devices represent a rich portrait of human life, capturing where people go, the decisions they make, and how they respond to changes in their socio-economic environment. For example, mobile-phone data allow better understanding of health-seeking behavior as well as the dynamics of infectious-disease transmission [3]. Social-media platforms generate the world’s largest database of thoughts and emotions—information that, if leveraged responsibly, can be used to infer gendered patterns of mental health [4]. Remote sensors, especially satellites, can be used in conjunction with traditional data sources to increase the spatial and temporal granularity of data on women’s economic activity and health status [5].

But the risk of gendered algorithmic bias is a serious obstacle to the responsible use of big data. Data are not value free; they reproduce the conscious and unconscious attitudes held by researchers, programmers, and institutions. Consider, for example, the training datasets on which the interpretation of big data depends. Training datasets establish the association between two or more directly observed phenomena of interest—for example, the mental health of a platform user (typically collected through a diagnostic survey) and the semantic content of the user’s social-media posts. These associations are then used to develop algorithms that interpret big data streams. In the example here, the (directly unobserved) mental health of a large population of social-media users would be inferred from their observed posts….(More)”.
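
The training-dataset workflow described in that final paragraph can be sketched in a few lines of Python: learn an association between survey-derived labels and post text, then infer outcomes for unlabelled posts. This is a toy illustration only; the posts, labels, and model choice (scikit-learn's TF-IDF plus logistic regression) are assumptions, not the methods of any study cited in the essay.

```python
# Toy illustration of the training-dataset logic described above: learn
# an association between (hypothetical) survey-derived labels and post
# text, then infer outcomes for unlabelled posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

posts = [
    "feeling hopeless again",
    "great day with friends",
    "can't sleep, everything feels heavy",
    "excited about the new job",
]
labels = [1, 0, 1, 0]  # 1 = screened positive on a hypothetical survey

vectoriser = TfidfVectorizer()
X = vectoriser.fit_transform(posts)
model = LogisticRegression().fit(X, labels)

# Inference on the unobserved population: posts without survey labels.
new_posts = ["nothing feels worth it", "lovely walk this morning"]
print(model.predict_proba(vectoriser.transform(new_posts))[:, 1])
```

Any bias in how the training labels were gathered propagates directly into the inferences, which is precisely the gendered algorithmic bias the authors warn about.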

Tech groups cannot be allowed to hide from scrutiny


Marietje Schaake at the Financial Times: “Technology companies have governments over a barrel. Whether they are maximising traffic flow efficiency, matching pupils with their school preferences, or trying to anticipate drought based on satellite and soil data, most governments rely heavily on critical infrastructure and artificial intelligence developed by the private sector. This growing dependence has profound implications for democracy.

An unprecedented information asymmetry is growing between companies and governments. We can see this in the long-running investigation into interference in the 2016 US presidential elections. Companies build voter registries, voting machines and tallying tools, while social media companies sell precisely targeted advertisements using information gleaned by linking data on friends, interests, location, shopping and search.

This has big privacy and competition implications, yet oversight is minimal. Governments, researchers and citizens risk being blindsided by the machine room that powers our lives and vital aspects of our democracies. Governments and companies have fundamentally different incentives on transparency and accountability.

While openness is the default and secrecy the exception for democratic governments, companies resist providing transparency about their algorithms and business models. Many of them actively prevent accountability, citing rules that protect trade secrets.

We must revisit these protections when they shield companies from oversight. There is a place for protecting proprietary information from commercial competitors, but the scope and context need to be clarified and balanced when they have an impact on democracy and the rule of law.

Regulators must act to ensure that those designing and running algorithmic processes do not abuse trade secret protections. Tech groups also use the EU’s General Data Protection Regulation to deny access to company information. Although the regulation was enacted to protect citizens against the mishandling of personal data, it is now being wielded cynically to deny scientists access to data sets for research. The European Data Protection Supervisor has intervened, but problems could recur.

To mitigate concerns about the power of AI, provider companies routinely promise that the applications will be understandable, explainable, accountable, reliable, contestable, fair and — don’t forget — ethical.

Yet there is no way to test these subjective notions without access to the underlying data and information. Without clear benchmarks and information to match, proper scrutiny of the way vital data is processed and used will be impossible….(More)”.

How digital sleuths unravelled the mystery of Iran’s plane crash


Chris Stokel-Walker at Wired: “The video shows a faint glow in the distance, zig-zagging like a piece of paper caught in an updraft, slowly meandering towards the horizon. Then there’s a bright flash and the trees in the foreground are thrown into shadow as Ukraine International Airlines flight PS752 hits the ground early on the morning of January 8, killing all 176 people on board.

At first, it seemed like an accident – engine failure was fingered as the cause – until the first video surfaced, showing the plane seemingly on fire as it weaved to the ground. United States officials started to investigate, and a more complicated picture emerged. It appeared that the plane had been hit by a missile – a suspicion corroborated by a second video that appears to show the moment the missile ploughs into the Boeing 737-800. While military and intelligence officials at governments around the world were conducting their inquiries in secret, a team of investigators were using open-source intelligence (OSINT) techniques to piece together the puzzle of flight PS752.

It’s not unusual nowadays for OSINT to lead the way in decoding key news events. When Sergei Skripal was poisoned, Bellingcat, an open-source intelligence website, tracked and identified his would-be killers as they traipsed across London and Salisbury. They delved into military records to blow the cover of agents sent to kill. And in the days after the Ukraine Airlines plane crashed into the ground outside Tehran, Bellingcat and The New York Times have blown a hole in the supposition that the crash was caused by engine failure. The pressure – and the weight of public evidence – compelled Iranian officials to admit overnight on January 10 that the country had shot down the plane “in error”.

So how do they do it? “You can think of OSINT as a puzzle. To get the complete picture, you need to find the missing pieces and put everything together,” says Loránd Bodó, an OSINT analyst at Tech versus Terrorism, a campaign group. The team at Bellingcat and other open-source investigators pore over publicly available material. Thanks to our propensity to reach for our cameraphones at the sight of any newsworthy incident, video and photos are often available, posted to social media in the immediate aftermath of events. (The person who shot and uploaded the second video in this incident, of the missile appearing to hit the Boeing plane, was a perfect example: they grabbed their phone after they heard “some sort of shot fired”.) “Open source investigations essentially involve the collection, preservation, verification, and analysis of evidence that is available in the public domain to build a picture of what happened,” says Yvonne McDermott Rees, a lecturer at Swansea University….(More)”.
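
One small, standard step in that verification workflow is inspecting a photo's embedded EXIF metadata for capture time, device, and GPS coordinates. Here is a minimal sketch using the Pillow library; the file name is hypothetical, and since many platforms strip EXIF on upload, missing metadata is common and proves nothing by itself.

```python
# Inspect a photo's embedded EXIF metadata, a routine first step in
# open-source verification. File name is hypothetical; absence of EXIF
# (e.g. stripped on upload) is common and is not itself evidence.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

image = Image.open("uploaded_photo.jpg")  # hypothetical file
exif = image.getexif()

for tag_id, value in exif.items():
    print(TAGS.get(tag_id, tag_id), value)  # e.g. DateTime, Make, Model

# GPS data, if present, lives in a nested IFD under tag 0x8825 (GPSInfo).
gps = exif.get_ifd(0x8825)
for tag_id, value in gps.items():
    print(GPSTAGS.get(tag_id, tag_id), value)
```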

Innovation labs and co-production in public problem solving


Paper by Michael McGann, Tamas Wells & Emma Blomkamp: “Governments are increasingly establishing innovation labs to enhance public problem solving. Despite the speed at which these new units are being established, they have only recently begun to receive attention from public management scholars. This study assesses the extent to which labs are enhancing strategic policy capacity through pursuing more collaborative and citizen-centred approaches to policy design. Drawing on original case study research of five labs in Australia and New Zealand, it examines the structure of labs’ relationships to government partners, and the extent and nature of their activities in promoting citizen participation in public problem solving….(More)”.

Machine Learning, Big Data and the Regulation of Consumer Credit Markets: The Case of Algorithmic Credit Scoring


Paper by Nikita Aggarwal et al: “Recent advances in machine learning (ML) and Big Data techniques have facilitated the development of more sophisticated, automated consumer credit scoring models — a trend referred to as ‘algorithmic credit scoring’ in recognition of the increasing reliance on computer (particularly ML) algorithms for credit scoring. This chapter, which forms part of the 2018 collection of short essays ‘Autonomous Systems and the Law’, examines the rise of algorithmic credit scoring, and considers its implications for the regulation of consumer creditworthiness assessment and consumer credit markets more broadly.

The chapter argues that algorithmic credit scoring, and the Big Data and ML technologies underlying it, offer both benefits and risks for consumer credit markets. On the one hand, it could increase allocative efficiency and distributional fairness in these markets, by widening access to, and lowering the cost of, credit, particularly for ‘thin-file’ and ‘no-file’ consumers. On the other hand, algorithmic credit scoring could undermine distributional fairness and efficiency, by perpetuating discrimination in lending against certain groups and by enabling the more effective exploitation of borrowers.
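
The discrimination risk can be made concrete with a toy audit: whatever model produced the lending decisions, comparing approval rates across borrower groups gives a first-pass "disparate impact" check. The decisions and group labels below are invented, and the 0.8 threshold is the US "four-fifths rule" from employment-discrimination practice, used here purely for illustration rather than as UK/EU credit law.

```python
# Toy 'disparate impact' check: compare approval rates across groups for
# any credit-scoring model's decisions. All data invented; the 0.8
# threshold is the US 'four-fifths rule', used only for illustration.
import numpy as np

approved = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # model decisions
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"approval rate A: {rate_a:.2f}, B: {rate_b:.2f}, ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact: scrutinise the model's inputs")
```

A low ratio does not prove unlawful discrimination, but it flags where supervisors or firms applying product oversight and governance requirements should look more closely.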

The chapter considers how consumer financial regulation should respond to these risks, focusing on the UK/EU regulatory framework. As a general matter, it argues that the broadly principles- and conduct-based approach of UK consumer credit regulation provides the flexibility necessary for regulators and market participants to respond dynamically to these risks. However, this approach could be enhanced through the introduction of more robust product oversight and governance requirements for firms in relation to their use of ML systems and processes. Supervisory authorities could also themselves make greater use of ML and Big Data techniques in order to strengthen the supervision of consumer credit firms.

Finally, the chapter notes that cross-sectoral data protection regulation, recently updated in the EU under the GDPR, offers an important avenue to mitigate risks to consumers arising from the use of their personal data. However, further guidance is needed on the application and scope of this regime in the consumer financial context….(More)”.

The future is intelligent: Harnessing the potential of artificial intelligence in Africa


Youssef Travaly and Kevin Muvunyi at Brookings: “…AI in particular presents countless avenues for both the public and private sectors to optimize solutions to the most crucial problems facing the continent today, especially for struggling industries. For example, in health care, AI solutions can help scarce personnel and facilities do more with less by speeding initial processing, triage, diagnosis, and post-care follow up. Furthermore, AI-based pharmacogenomics applications, which focus on the likely response of an individual to therapeutic drugs based on certain genetic markers, can be used to tailor treatments. Considering the genetic diversity found on the African continent, it is highly likely that the application of these technologies in Africa will result in considerable advancement in medical treatment on a global level.

In agriculture, Abdoulaye Baniré Diallo, co-founder and chief scientific officer of the AI startup My Intelligent Machines, is working with advanced algorithms and machine learning methods to leverage genomic precision in livestock production models. With genomic precision, it is possible to build intelligent breeding programs that minimize the ecological footprint, address changing consumer demands, and contribute to the well-being of people and animals alike through the selection of good genetic characteristics at an early stage of the livestock production process. These are just a few examples that illustrate the transformative potential of AI technology in Africa.

However, a number of structural challenges undermine rapid adoption and implementation of AI on the continent. Inadequate basic and digital infrastructure seriously erodes efforts to activate AI-powered solutions as it reduces crucial connectivity. (For more on strategies to improve Africa’s digital infrastructure, see the viewpoint on page 67 of the full report). A lack of flexible and dynamic regulatory systems also frustrates the growth of a digital ecosystem that favors AI technology, especially as tech leaders want to scale across borders. Furthermore, lack of relevant technical skills, particularly for young people, is a growing threat. This skills gap means that those who would have otherwise been at the forefront of building AI are left out, preventing the continent from harnessing the full potential of transformative technologies and industries.

Similarly, the lack of adequate investments in research and development is an important obstacle. Africa must develop innovative financial instruments and public-private partnerships to fund human capital development, including a focus on industrial research and innovation hubs that bridge the gap between higher education institutions and the private sector to ensure the transition of AI products from lab to market….(More)”.