Enhancing teacher deployment in Sierra Leone: Using spatial analysis to address disparity


Blog by Paul Atherton and Alasdair Mackintosh: “Sierra Leone has made significant progress towards educational targets in recent years, but is still struggling to ensure equitable access to quality teachers for all its learners. The government is exploring innovative solutions to tackle this problem. In support of this, Fab Inc. has brought their expertise in data science and education systems, merging the two to use spatial analysis to unpack and explore this challenge….

Figure 1: Pupil-teacher ratio for primary education by district (left); and within Kailahun district, Sierra Leone, by chiefdom (right), 2020.


Source: Mackintosh, A., A. Ramirez, P. Atherton, V. Collis, M. Mason-Sesay, & C. Bart-Williams. 2019. Education Workforce Spatial Analysis in Sierra Leone. Research and Policy Paper. Education Workforce Initiative. The Education Commission.

…Spatial analysis, also referred to as geospatial analysis, is a set of techniques to explain patterns and behaviours in terms of geography and locations. It uses geographical features, such as distances, travel times and school neighbourhoods, to identify relationships and patterns.
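To make those geographical features concrete, here is a minimal Python sketch (not from the report) of the haversine formula, which gives the great-circle distance between two coordinate pairs; analyses like this one typically start from such distances before layering on road networks and travel times. The coordinates below are invented.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

# Hypothetical coordinates: a school and a nearby settlement in Kailahun district
print(round(haversine_km(8.28, -10.57, 8.35, -10.62), 1), "km")
```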

Our team, using its expertise in both data science and education systems, examined issues linked to remoteness to produce a clearer picture of Sierra Leone’s teacher shortage. To see how the current education workforce was distributed across the country, and how well it served local populations, we drew on geo-processed population data from the Grid-3 initiative and the Government of Sierra Leone’s Education Data Hub. The project benefited from close collaboration with the Ministry and Teaching Service Commission (TSC).

Our analysis focused on teacher development, training and the deployment of new teachers across regions, drawing on exam data. Surveys of teacher training colleges (TTCs) were conducted to assess how many future teachers will need to be trained to make up for shortages. Gender and subject speciality were analysed to better address local imbalances. The team developed a matching algorithm for teacher deployment, to illustrate how schools’ needs, including aspects of qualifications and subject specialisms, can be matched to teachers’ preferences, including aspects of language and family connections, to improve allocation of both current and future teachers….(More)”
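The excerpt does not spell out how the matching algorithm works. Purely as an illustration of the general technique, here is a minimal greedy matcher in Python that scores hypothetical teacher-school pairs on subject need, language and home district, then fills the highest-need schools first; all field names, weights and data are invented for this sketch and are not Fab Inc.'s actual method.

```python
# Illustrative only: a greedy teacher-school matcher. Field names, weights
# and scoring are assumptions for this sketch, not the team's real algorithm.

def score(teacher, school):
    """Higher = better fit: matching subject need, language, home district."""
    s = 0
    if teacher["subject"] in school["subject_needs"]:
        s += 3
    if teacher["language"] == school["language"]:
        s += 2
    if teacher["home_district"] == school["district"]:
        s += 1  # crude proxy for family connections and retention
    return s

def assign(teachers, schools):
    """Fill vacancies at the highest-need schools first, best-scoring teacher each time."""
    assignments = {}
    pool = list(teachers)
    for school in sorted(schools, key=lambda s: s["pupil_teacher_ratio"], reverse=True):
        if not pool:
            break
        best = max(pool, key=lambda t: score(t, school))
        assignments[best["name"]] = school["name"]
        pool.remove(best)
    return assignments

teachers = [
    {"name": "T1", "subject": "maths", "language": "Mende", "home_district": "Kailahun"},
    {"name": "T2", "subject": "English", "language": "Krio", "home_district": "Bo"},
]
schools = [
    {"name": "S1", "district": "Kailahun", "language": "Mende",
     "subject_needs": {"maths"}, "pupil_teacher_ratio": 76},
    {"name": "S2", "district": "Bo", "language": "Krio",
     "subject_needs": {"English"}, "pupil_teacher_ratio": 44},
]
print(assign(teachers, schools))  # {'T1': 'S1', 'T2': 'S2'}
```

A production system would use a proper assignment method over all pairs rather than this single greedy pass, but the sketch shows how schools' needs and teachers' preferences can be combined into one score.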

Are we all social scientists now? The rise of citizen social science raises more questions about social science than it answers


Blog by Alexandra Albert: “…In many instances people outside of the academy can, and do, do social research, even when they do not consider what they are doing to be social research, since that is perceived to be the preserve of ‘experts’. What is it about social science that makes it a skilful and expert activity, and how or why is it practised in a way that makes it difficult to do? Citizen social science (CSS) produces tensions between the ideals of including social actors in the generation of information about the everyday, and the notion that many participants do not necessarily feel entitled, or empowered, to participate in the analysis of this information, or in the interpretation of what it means. For example, in the case of the Empty Houses project, set up to explore some of the issues discussed here in more detail, some participants suggested they did not feel comfortable reporting on empty houses because they found them hard to identify and assumed that some prior knowledge or ‘expertise’ was required. CSS is the perfect place to interrogate these tensions, since it challenges the closed nature of social science.

Second, CSS blurs the roles between researchers and researched, creating new responsibilities for participants and researchers alike. A notable distinction between expert and non-expert in social science research is the critique of the approach and the interpretation or analysis of the data. However, the way that traditional social science is done, with critical analysis being the preserve of the trained expert, means that many participants do not feel that it is their role to do the analysis. Does the professionalisation of observational techniques constitute a distinct category of sociological data, such that people need to be trained in formal sociological ways of collecting and analysing data? This is a challenge for research design and execution in CSS, and for the potentially new perspectives that participating in CSS can engender.

Third, in addressing social worlds, CSS questions whether such observations are just a regular part of people’s everyday lives, or whether they entail a more active form of practice in observing everyday life. In this sense, what does it really mean to participate? Is there a distinction between ‘active’ and ‘passive’ observation? Arguably, participating in a project is never just about this – it is a conscious choice, and therefore, in some respects, a burden of some sort. This further raises the issue of how to appropriately compensate participants for their time and energy, potentially as co-researchers in a project and co-authors on papers.

Finally, while CSS can rearrange the power dynamics of citizenship, research and knowing, narratives of ‘duty’ to take part, and to ‘do your bit’, necessarily place a greater burden on the individual and raise questions about the supposed emancipatory potential of participatory methods such as CSS….(More)”

Why We Should End the Data Economy


Essay by Carissa Véliz: “…The data economy undermines equality and fairness. You and your neighbor are no longer treated as equal citizens. You aren’t given an equal opportunity because you are treated differently on the basis of your data. The ads and content you have access to, the prices you pay for the same services, and even how long you wait when you call customer service depend on your data.

We are much better at collecting personal data than we are at keeping it safe. But personal data is a serious threat, and we shouldn’t be collecting it in the first place if we are incapable of keeping it safe. Using smartphone location data acquired from a data broker, reporters from The New York Times were able to track military officials with security clearances, powerful lawyers and their guests, and even the president of the United States (through the phone of someone believed to be a Secret Service agent).

Our current data economy is based on collecting as much personal data as possible, storing it indefinitely, and selling it to the highest bidder. Having so much sensitive data circulating freely is reckless. By designing our economy around surveillance, we are building a dangerous structure for social control that is at odds with freedom. In the surveillance society we are constructing, there is no such thing as under the radar. It shouldn’t be up to us to constantly opt out of data collection. The default matters, and the default should be no data collection…(More)”.

Is there a role for consent in privacy?


Article by Robert Gellman: “After decades, we still talk about the role of notice and choice in privacy. Yet there seems to be broad recognition that notice and choice do nothing for the privacy of consumers. Some American businesses cling to notice and choice because they hate all the alternatives. Some legislators draft laws with elements of notice and choice, either because it’s easier to draft a law that way, because they don’t know any better or because they carry water for business.

For present purposes, I will talk about notice and choice generically as consent. Consent is a broader concept than choice, but the difference doesn’t matter for the point I want to make. How you frame consent is complex. There are many alternatives and many approaches. It’s not just a matter of opt-in or opt-out. While I’m discarding issues, I also want to acknowledge and set aside the eight basic Fair Information Practices (FIPs). There is no notice and choice principle in FIPs, and FIPs are not specifically important here.

Until recently, my view was that consent in almost any form is pretty much death for consumer privacy. No matter how you structure it, websites and others will find a way to wheedle consent from consumers. Those who want to exploit consumer data will cajole, pressure, threaten, mystify, obscure, entice or otherwise coax consumers to agree.

Suddenly, I’m not as sure of my conclusion about consent. What changed my mind? There is a new data point from Apple’s App Tracking Transparency framework. Apple requires mobile application developers to obtain opt-in consent before serving targeted advertising via Apple’s Identifier for Advertisers. Early reports suggest consumers are saying “NO” in overwhelming numbers — overwhelming as in more than 90%.

It isn’t this strong consumer reaction that makes me think consent might possibly have a place. I want to highlight a different aspect of the Apple framework….(More)”.

Engaging with the public about algorithmic transparency in the public sector


Blog by the Centre for Data Ethics and Innovation (UK): “To move forward the recommendation made in our review into bias in algorithmic decision-making, we have been working with the Central Digital and Data Office (CDDO) and BritainThinks to scope what a transparency obligation could look like in practice and, in particular, which transparency measures would be most effective at increasing public understanding about the use of algorithms in the public sector.

Due to the low levels of awareness about the use of algorithms in the public sector (CDEI polling in July 2020 found that 38% of the public were not aware that algorithmic systems were used to support decisions using personal data), we opted for a deliberative public engagement approach. This involved spending time gradually building up participants’ understanding and knowledge about algorithm use in the public sector, discussing their expectations for transparency, and co-designing solutions together.

For this project, we worked with a diverse range of 36 members of the UK public, spending over five hours engaging with them over a three-week period. We focused on three particular use cases chosen to test a range of emotive responses: policing, parking and recruitment.

The final stage was an in-depth co-design session, where participants worked collaboratively to review and iterate prototypes in order to develop a practical approach to transparency that reflected their expectations and needs for greater openness in the public sector use of algorithms. 

What did we find? 

Our research confirmed that there was fairly low awareness and understanding of the use of algorithms in the public sector. Algorithmic transparency in the public sector was not a front-of-mind topic for most participants.

However, once participants were introduced to specific examples of potential public sector algorithms, they felt strongly that transparency information should be made available to the public, both citizens and experts. This included desires for: a description of the algorithm, why an algorithm was being used, contact details for more information, the data used, human oversight, potential risks and technicalities of the algorithm…(More)”.
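The categories participants asked for map naturally onto a structured record that could be published for each algorithm. The Python sketch below is a hypothetical illustration of such a record, not the actual standard that emerged from the CDDO's work; every value is invented.

```python
# Hypothetical transparency record covering the fields participants asked for.
# Purely illustrative; not the real UK standard that followed this research.
transparency_record = {
    "name": "Parking permit triage",
    "description": "Ranks permit applications for manual review.",
    "why_used": "Reduce average processing time for residents.",
    "contact": "algorithm-enquiries@example.gov.uk",
    "data_used": ["application form fields", "payment history"],
    "human_oversight": "All rejections are reviewed by a case officer.",
    "potential_risks": ["unequal wait times across neighbourhoods"],
    "technical_details": {"model": "gradient-boosted trees", "retrained": "quarterly"},
}
```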

To regulate AI, try playing in a sandbox


Article by Dan McCarthy: “For an increasing number of regulators, researchers, and tech developers, the word “sandbox” is just as likely to evoke rulemaking and compliance as it is to conjure images of children digging, playing, and building. Which is kinda the point.

That’s thanks to the rise of regulatory sandboxes, which allow organizations to develop and test new technologies in a low-stakes, monitored environment before rolling them out to the general public. 

Supporters, from both the regulatory and the business sides, say sandboxes can strike the right balance of reining in potentially harmful technologies without kneecapping technological progress. They can also help regulators build technological competency and clarify how they’ll enforce laws that apply to tech. And while regulatory sandboxes originated in financial services, there’s growing interest in using them to police artificial intelligence—an urgent task as AI is expanding its reach while remaining largely unregulated. 

For all of its promise, experts told us, the approach should be viewed not as a silver bullet for AI regulation, but as a potential step in the right direction. 

Rashida Richardson, an AI researcher and visiting scholar at Rutgers Law School, is generally critical of AI regulatory sandboxes, but still said “it’s worth testing out ideas like this, because there is not going to be any universal model to AI regulation, and to figure out the right configuration of policy, you need to see theoretical ideas in practice.” 

But waiting for the theoretical to become concrete will take time. For example, in April, the European Union proposed AI regulation that would establish regulatory sandboxes to help the EU achieve its aim of responsible AI innovation, mentioning the word “sandbox” 38 times, compared to related terms like “impact assessment” (13 mentions) and “audit” (four). But it will likely take years for the EU’s proposal to become law. 

In the US, some well-known AI experts are working on an AI sandbox prototype, but regulators are not yet in the picture. However, the world’s first and (so far) only AI-specific regulatory sandbox did roll out in Norway this March, as a way to help companies comply with AI-specific provisions of the EU’s General Data Protection Regulation (GDPR). The project provides an early window into how the approach can work in practice.

“It’s a place for mutual learning—if you can learn earlier in the [product development] process, that is not only good for your compliance risk, but it’s really great for building a great product,” according to Erlend Andreas Gjære, CEO and cofounder of Secure Practice, an information security (“infosec”) startup that is one of four participants in Norway’s new AI regulatory sandbox….(More)”

How Does Artificial Intelligence Work?


BuiltIn: “Less than a decade after breaking the Nazi encryption machine Enigma and helping the Allied Forces win World War II, mathematician Alan Turing changed history a second time with a simple question: “Can machines think?” 

Turing’s paper “Computing Machinery and Intelligence” (1950), and its subsequent Turing Test, established the fundamental goal and vision of artificial intelligence.   

At its core, AI is the branch of computer science that aims to answer Turing’s question in the affirmative. It is the endeavor to replicate or simulate human intelligence in machines.

The expansive goal of artificial intelligence has given rise to many questions and debates, so much so that no single definition of the field is universally accepted.  

The major limitation in defining AI as simply “building machines that are intelligent” is that it doesn’t actually explain what artificial intelligence is. What makes a machine intelligent?

In their groundbreaking textbook Artificial Intelligence: A Modern Approach, authors Stuart Russell and Peter Norvig approach the question by unifying their work around the theme of intelligent agents in machines. With this in mind, AI is “the study of agents that receive percepts from the environment and perform actions.” (Russell and Norvig viii)

Norvig and Russell go on to explore four different approaches that have historically defined the field of AI: 

  1. Thinking humanly
  2. Thinking rationally
  3. Acting humanly 
  4. Acting rationally

The first two ideas concern thought processes and reasoning, while the others deal with behavior. Norvig and Russell focus particularly on rational agents that act to achieve the best outcome, noting “all the skills needed for the Turing Test also allow an agent to act rationally.” (Russell and Norvig 4).
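That agent definition translates naturally into code. Below is a minimal sketch of a percept-action agent, using an invented thermostat example that is not from the textbook: the agent maps each percept it receives from the environment to the action expected to produce the best outcome.

```python
# Minimal "rational agent" sketch in the Russell & Norvig sense: the agent
# receives percepts from its environment and performs actions. The thermostat
# world is an invented example, not taken from the book.

class ThermostatAgent:
    def __init__(self, target=20.0):
        self.target = target

    def act(self, percept: float) -> str:
        """Map a percept (room temperature) to the best available action."""
        if percept < self.target - 1:
            return "heat"
        if percept > self.target + 1:
            return "cool"
        return "idle"

agent = ThermostatAgent()
for temperature in (17.5, 20.3, 23.0):  # percepts from the environment
    print(temperature, "->", agent.act(temperature))
# 17.5 -> heat, 20.3 -> idle, 23.0 -> cool
```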

Patrick Winston, the Ford professor of artificial intelligence and computer science at MIT, defines AI as “algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together.”…(More)”.

Citizens ‘on mute’ in digital public service delivery


Blog by Sarah Giest at Data and Policy: “Various countries are digitalizing their welfare systems in the larger context of austerity considerations and fraud-detection goals, but these changes are increasingly under scrutiny. In short, digitalization of the welfare system means that, with the help of mathematical models, data and/or the combination of different administrative datasets, algorithms issue a decision on, for example, an application for social benefits (Dencik and Kaun 2020).
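As a purely illustrative sketch of the mechanism described here, the toy Python function below combines two hypothetical administrative records to flag a benefits application; every field, threshold and rule is invented and does not reflect any real system's logic.

```python
# Invented illustration of how linked administrative datasets can drive an
# automated benefits decision; every field and rule here is hypothetical.
def decide(application, tax_record, housing_record):
    flags = []
    if application["declared_income"] < tax_record["reported_income"] * 0.9:
        flags.append("income mismatch")
    if application["address"] != housing_record["registered_address"]:
        flags.append("address mismatch")
    # The opacity problem: the applicant typically never sees these flags.
    return ("manual fraud review" if flags else "approve", flags)

print(decide({"declared_income": 14000, "address": "A"},
             {"reported_income": 17000}, {"registered_address": "B"}))
# ('manual fraud review', ['income mismatch', 'address mismatch'])
```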

Several examples exist where such systems have led to unfair treatment of welfare recipients. In Europe, the Dutch SyRI system has been banned by a court, due to human rights violations in the profiling of welfare recipients, and the UK has found errors in automated processes that led to financial hardship among citizens. In the United States and Canada, automated systems led to false underpayment or denial of benefits. A recent UN report (2019) even warns that countries are ‘stumbling zombie-like into a digital welfare dystopia’. Further, studies raise alarm that this process of digitalization not only creates excessive information asymmetry between government and citizens, but also disadvantages certain groups more than others.

A closer look at the Dutch Childcare Allowance case highlights this. In this example, low-income parents were regarded as fraudsters by the Tax Authorities if they had incorrectly filled out any documents. An automated, algorithm-based procedure then also singled out dual-nationality families. The victims lost their allowance without being given any reasons. Even worse, benefits already received were reclaimed. This led to individual hardship, in which financial troubles and being categorized as a fraudster by government set off a chain of events for citizens, from unpaid healthcare insurance and the inability to visit a doctor to job loss, potential home loss and mental health concerns (Volkskrant 2020)….(More)”.

Citizen science allows people to ‘really know’ their communities


UGAResearch: “Local populations understand their communities best. They’re familiar both with points of pride and with areas that could be improved. But determining the nature of those improvements from best practices, as well as achieving community consensus on implementation, can present a different set of challenges.

Jerry Shannon, associate professor of geography in the Franklin College of Arts & Sciences, worked with a team of researchers to introduce a citizen science approach in 11 communities across Georgia, from Rockmart to Monroe to Millen. This work combines local knowledge with emerging digital technologies to bolster community-driven efforts in rural Georgia. His research was detailed in a paper, “‘Really Knowing’ the Community: Citizen Science, VGI and Community Housing Assessments,” published in December in the Journal of Planning Education and Research.

Shannon worked with the Georgia Initiative for Community Housing, managed out of the College of Family and Consumer Sciences (FACS), to create tools for communities to evaluate and launch plans to address their housing needs and revitalization. This citizen science effort resulted in a more diverse and inclusive body of data that incorporated local perspectives.

“Through this project, we hope to further support and extend these community-driven efforts to assure affordable, quality housing,” said Shannon. “Rural communities don’t have the resources internally to do this work themselves. We provide training and tools to these communities.”

As part of their participation in the GICH program, each Georgia community assembled a housing team consisting of elected officials, members of community organizations and housing professionals such as real estate agents. The team recruited volunteers from student groups and religious organizations to conduct so-called “windshield surveys,” where participants work from their vehicle or walk the neighborhoods….(More)”

Process Mapping: a Tool with Many Uses


Essay by Jessica Brandt: “Traditionally, process maps are used when one is working on improving a process, but a good process map can serve many purposes. What is a process map used for, and why is this a tool worth learning about? A process map uses a flowchart to illustrate the flow and people of a process, as well as its inputs, actions, and outputs, in a clear and detailed way. A good process map will reflect the work that is actually done within a given process, not what the intended or imagined workflow might entail. This means that in order to build a good process map you should be talking to and learning from the folks who use the process every day, not just the people who oversee it. Because I see the value of having a good process map and the many ways you can utilize one to make your work more efficient, I want to share some of the different ways you can use this versatile tool….(More)”.
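Although a process map is normally drawn rather than coded, its core elements translate directly into a simple data structure. The Python sketch below captures an invented document-approval process as steps with owners, inputs, outputs and flow; it is an assumption-laden illustration, not anything from the essay.

```python
# A process map's core elements as data: each step records who does it,
# what goes in, what comes out, and where the flow goes next.
# The document-approval process below is an invented example.
process_map = {
    "draft":   {"owner": "author",   "inputs": ["request"],   "outputs": ["draft doc"], "next": ["review"]},
    "review":  {"owner": "editor",   "inputs": ["draft doc"], "outputs": ["comments"],  "next": ["revise", "publish"]},
    "revise":  {"owner": "author",   "inputs": ["comments"],  "outputs": ["draft doc"], "next": ["review"]},
    "publish": {"owner": "web team", "inputs": ["draft doc"], "outputs": ["live page"], "next": []},
}

for name, step in process_map.items():
    print(f"{name}: {step['owner']} turns {step['inputs']} into {step['outputs']} -> {step['next']}")
```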