Policy Brief by Muriel Poisson: “As part of its research project on ‘Open government (OG) in education: Learning from experience’, the UNESCO International Institute for Educational Planning (IIEP) has prepared five thematic briefs illustrating various forms of OG as applied to the education field: open government, open budgeting, open contracting, open policy-making and crowd-sourcing, and social auditing. This brief deals specifically with open policy-making and crowd-sourcing….(More)”.
Abel Wajnerman Paz at Rest of the World: “Neurotechnology” is an umbrella term for any technology that can read and transcribe mental states by decoding and modulating neural activity. This includes technologies like closed-loop deep brain stimulation, which can both detect neural activity related to people’s moods and suppress undesirable symptoms, like depression, through electrical stimulation.
Despite their evident usefulness in education, entertainment, work, and the military, neurotechnologies are largely unregulated. Now, as Chile redrafts its constitution — disassociating it from the Pinochet surveillance regime — legislators are using the opportunity to address the need for closer protection of people’s rights from the unknown threats posed by neurotechnology.
Although the technology is new, the challenge isn’t. Decades ago, similar international legislation was passed following the development of genetic technologies that made possible the collection and application of genetic data and the manipulation of the human genome. These included the Universal Declaration on the Human Genome and Human Rights in 1997 and the International Declaration on Human Genetic Data in 2003. The difference is that, this time, Chile is a leading light in the drafting of neuro-rights legislation.
In Chile, two bills — a constitutional reform bill, which is awaiting approval by the Chamber of Deputies, and a bill on neuro-protection — will establish neuro-rights for Chileans. These include the rights to personal identity, free will, mental privacy, equal access to cognitive enhancement technologies, and protection against algorithmic bias….(More)”.
Manuel León Urrutia at The Conversation: “I find it tempting to celebrate the public’s expanding access to data and familiarity with terms like “flattening the curve”. After all, a better informed society is a successful society, and the provision of data-driven information to the public seems to contribute to the notion that together we can beat COVID.
But increased data visibility shouldn’t necessarily be interpreted as increased data literacy. For example, at the start of the pandemic it was found that the portrayal of COVID deaths on logarithmic graphs confused the public. Logarithmic graphs accommodate exponentially growing data by using a scale that increases by a factor of ten at each step on the y (vertical) axis, which visually flattens steep curves. This led some people to radically underestimate the dramatic rise in COVID cases.
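To make the confusion concrete, here is a minimal sketch, using invented case counts rather than real COVID data, of why the same doubling pattern looks dramatic on a linear axis but tame on a logarithmic one:

```python
import math

# Hypothetical case counts doubling every interval (illustrative only)
cases = [100, 200, 400, 800, 1600, 3200]

# On a linear axis, successive increases explode:
linear_steps = [b - a for a, b in zip(cases, cases[1:])]

# On a logarithmic (base-10) axis, the same doubling appears as equal,
# modest vertical steps, which is why exponential growth can look tame:
log_steps = [round(math.log10(b) - math.log10(a), 3) for a, b in zip(cases, cases[1:])]

print(linear_steps)  # [100, 200, 400, 800, 1600]
print(log_steps)     # every step is 0.301, i.e. log10 of 2
```

A reader scanning only the log chart sees a gentle, straight climb, while the underlying counts are multiplying fivefold over the same window.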
The vast amount of data we now have available doesn’t even guarantee consensus. In fact, instead of solving the problem, this data deluge can contribute to the polarisation of public discourse. One study recently found that COVID sceptics use orthodox data presentation techniques to spread their controversial views, revealing how more data doesn’t necessarily result in better understanding. Though data is supposed to be objective and empirical, it has assumed a political, subjective hue during the pandemic….
This is where educators come in. The pandemic has only strengthened the case presented by academics for data literacy to be included in the curriculum at all educational levels, including primary. This could help citizens navigate our data-driven world, protecting them from harmful misinformation and journalistic malpractice.
Data literacy does in fact already feature in many higher education roadmaps in the UK, though I’d argue it’s a skill the entire population should be equipped with from an early age. Misconceptions about vaccine efficacy and the severity of the coronavirus are often based on poorly presented, false or misinterpreted data. The “fake news” these misconceptions generate would spread less ferociously in a world of data-literate citizens.
Blog by Paul Atherton and Alasdair Mackintosh: “Sierra Leone has made significant progress towards educational targets in recent years, but is still struggling to ensure equitable access to quality teachers for all its learners. The government is exploring innovative solutions to tackle this problem. In support of this, Fab Inc. has brought their expertise in data science and education systems, merging the two to use spatial analysis to unpack and explore this challenge….
Figure 1: Pupil-teacher ratio for primary education by district (left); and within Kailahun district, Sierra Leone, by chiefdom (right), 2020.
…Spatial analysis, also referred to as geospatial analysis, is a set of techniques to explain patterns and behaviours in terms of geography and locations. It uses geographical features, such as distances, travel times and school neighbourhoods, to identify relationships and patterns.
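One basic building block of this kind of analysis is computing distances between points on the Earth’s surface, for example between a household and its nearest school. The sketch below implements the standard haversine (great-circle) formula; the coordinates are approximate and purely illustrative, not drawn from the Sierra Leone datasets described here:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Roughly Freetown to Kailahun town (approximate coordinates)
print(round(haversine_km(8.484, -13.234, 8.277, -10.573), 1))
```

Real geospatial work layers travel times, road networks and catchment areas on top of raw distances, but straight-line distance is usually the first pass.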
Our team, using its expertise in both data science and education systems, examined issues linked to remoteness to produce a clearer picture of Sierra Leone’s teacher shortage. To see how the current education workforce was distributed across the country, and how well it served local populations, we drew on geo-processed population data from the Grid-3 initiative and the Government of Sierra Leone’s Education Data Hub. The project benefited from close collaboration with the Ministry and Teaching Service Commission (TSC).
Our analysis focused on teacher development, training and the deployment of new teachers across regions, drawing on exam data. Surveys of teacher training colleges (TTCs) were conducted to assess how many future teachers will need to be trained to make up for shortages. Gender and subject speciality were analysed to better address local imbalances. The team developed a matching algorithm for teacher deployment, to illustrate how schools’ needs, including aspects of qualifications and subject specialisms, can be matched to teachers’ preferences, including aspects of language and family connections, to improve allocation of both current and future teachers….(More)”
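The excerpt does not publish Fab Inc.’s actual algorithm, but two-sided preference matching of this kind is often illustrated with deferred acceptance (Gale-Shapley). The sketch below is a hypothetical miniature: the teacher names, school names and preference orders are all invented, and school “preferences” stand in for needs-based rankings such as subject specialism fit:

```python
def stable_match(teacher_prefs, school_prefs):
    """Teacher-proposing deferred acceptance; returns {teacher: school}."""
    free = list(teacher_prefs)                 # teachers not yet placed
    next_choice = {t: 0 for t in teacher_prefs}
    held = {}                                  # school -> teacher tentatively held
    # Precompute each school's ranking of teachers for O(1) comparisons
    rank = {s: {t: i for i, t in enumerate(p)} for s, p in school_prefs.items()}
    while free:
        t = free.pop()
        s = teacher_prefs[t][next_choice[t]]   # propose to next-preferred school
        next_choice[t] += 1
        if s not in held:
            held[s] = t
        elif rank[s][t] < rank[s][held[s]]:    # school prefers the new proposer
            free.append(held[s])
            held[s] = t
        else:
            free.append(t)                     # proposal rejected, try next school
    return {t: s for s, t in held.items()}

teacher_prefs = {"Aminata": ["School A", "School B"], "Brima": ["School A", "School B"]}
school_prefs = {"School A": ["Brima", "Aminata"], "School B": ["Aminata", "Brima"]}
print(stable_match(teacher_prefs, school_prefs))
```

The result is stable in the sense that no teacher and school would both rather be matched with each other than with their assigned partner, which is the property a deployment tool of this kind is typically after.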
Free-to-download book by Mine Cetinkaya-Rundel and Johanna Hardin: “…a re-imagining of a previous title, Introduction to Statistics with Randomization and Simulation. The new book puts a heavy emphasis on exploratory data analysis (specifically exploring multivariate relationships using visualization, summarization, and descriptive models) and provides a thorough discussion of simulation-based inference using randomization and bootstrapping, followed by a presentation of the related approaches based on the Central Limit Theorem. Other highlights include:
Web native book. The online book is available in HTML, which offers easy navigation and searchability in the browser. The book is built with the bookdown package and the source code to reproduce the book can be found on GitHub. Along with the bookdown site, this book is also available as a PDF and in paperback. Read the book online here.
Tutorials. While the main text of the book is agnostic to statistical software and computing language, each part features 4-8 interactive R tutorials (for a total of 32 tutorials) that walk you through the implementation of the part content in R with the tidyverse for data wrangling and visualisation and the tidyverse-friendly infer package for inference. The self-paced and interactive R tutorials were developed using the learnr R package, and only an internet browser is needed to complete them. Browse the tutorials here.
Labs. Each part also features 1-2 R based labs. The labs consist of data analysis case studies and they also make heavy use of the tidyverse and infer packages. View the labs here.
Datasets. Datasets used in the book are marked with a link to where you can find the raw data. The majority of these point to the openintro package. You can install the openintro package from CRAN or get the development version on GitHub. Find out more about the package here….(More)”.
Blog by Olivier Thévenon at the OECD: “Childhood is a critical period in which individuals develop many of the skills and abilities needed to thrive later in life. Promoting child well-being is not only an important end in itself, but is also essential for safeguarding the prosperity and sustainability of future generations. As the COVID-19 pandemic exacerbates existing challenges—and introduces new ones—for children’s material, physical, socio-emotional and cognitive development, improving child well-being should be a focal point of the recovery agenda.
To design effective child well-being policies, policy-makers need comprehensive and timely data that capture what is going on in children’s lives. Our new report, Measuring What Matters for Child Well-being and Policies, aims to move the child data agenda forward by laying the groundwork for better statistical infrastructures that will ultimately inform policy development. We identify key data gaps and outline a new aspirational measurement framework, pinpointing the aspects of children’s lives that should be assessed to monitor their well-being….(More)”.
Working Paper by the Federal Reserve Bank of Chicago: “Local governments spend over 12 billion dollars annually funding the operation of 15,000 public libraries in the United States. This funding supports widespread library use: more than 50% of Americans visit public libraries each year. But despite extensive public investment in libraries, surprisingly little research quantifies the effects of public libraries on communities and children. We use data on the near-universe of U.S. public libraries to study the effects of capital spending shocks on library resources, patron usage, student achievement, and local housing prices. We use a dynamic difference-in-differences approach to show that library capital investment increases children’s attendance at library events by 18%, children’s checkouts of items by 21%, and total library visits by 21%. Increases in library use translate into improved children’s test scores in nearby school districts: a $1,000 or greater per-student capital investment in local public libraries increases reading test scores by 0.02 standard deviations and has no effects on math test scores. Housing prices do not change after a sharp increase in public library capital investment, suggesting that residents internalize the increased cost and improved quality of their public libraries….(More)”.
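The paper’s design is a dynamic (event-study) version of difference-in-differences. The textbook two-group, two-period special case can be computed in a few lines; the outcome means below are made up for illustration and are not taken from the paper:

```python
# Mean outcomes before and after a capital investment "shock" (invented numbers)
treated_pre, treated_post = 0.50, 0.54   # libraries that received the shock
control_pre, control_post = 0.50, 0.52   # comparison libraries

# Difference-in-differences: the treated group's change, net of the change
# the control group experienced over the same period
did = (treated_post - treated_pre) - (control_post - control_pre)
print(round(did, 2))  # 0.02
```

The dynamic version estimates one such contrast for each year relative to the investment, which lets researchers check that treated and control libraries were on parallel trends beforehand.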
GovTech article: “While New York is not the first state to propose data privacy legislation, it is the first to propose a data privacy bill that would implement a tax on big tech companies that benefit from the sale of New Yorkers’ consumer data.
Known as the Data Economy Labor Compensation and Accountability Act, the bill looks to enact a 2 percent tax on annual receipts earned off New York residents’ data. This tax and other rules and regulations aimed at safeguarding citizens’ data will be enforced by a newly created Office of Consumer Data Protection outlined in the bill.
The office would require all data controllers and processors to register annually in order to meet state compliance requirements. Failure to do so, the bill states, would result in fines.
As for the tax, all funds will be put toward improving education and closing the digital divide.
“The revenue from the tax will be put towards digital literacy, workforce redevelopment, STEAM education (science, technology, engineering, arts and mathematics), K-12 education, workforce reskilling and retraining,” said Sen. Andrew Gounardes, D-22.
As for why the bill is being proposed now, Gounardes said, “Every day, big tech companies like Amazon, Apple, Facebook and Google capitalize on the unpaid labor of billions of people to create their products and services through targeted advertising and artificial intelligence.”…(More)”
Article by Ben Castleman: “I like to think of it as my Mark Zuckerberg moment: I was a graduate student and it was a sweltering summer evening in Cambridge. Text messages were slated to go out to recent high school graduates in Massachusetts and Texas. Knowing that thousands of phones would soon start chirping and vibrating with information about college, I refreshed my screen every 30 seconds, waiting to see engagement statistics on how students would respond. Within a few minutes there were dozens of new responses from students wanting to connect with an advisor to discuss their college plans.
We’re approaching the tenth anniversary of that first text-based advising campaign to reduce summer melt—when students have been accepted to and plan to attend college upon graduating high school, but do not start college in the fall. Texting by businesses is now so ubiquitous that it is hard to remember how innovative the channel once was; back in the early 2010s, text was used primarily for social and conversational communication. Maybe the occasional doctor’s office or airline would send a text reminder, but SMS was not broadly used as a channel by schools or colleges.
Those novel text nudges appeared successful. Results from a randomized controlled trial (RCT) that I conducted with Lindsay Page showed that students who received the texts reminding them of pre-enrollment tasks and connecting them with advisors enrolled in college at higher rates. We had the opportunity to replicate our summer melt work two summers later in additional cities and with engagement from the White House Social and Behavioral Sciences team and found similar impacts.
This evidence emerged as the Obama administration made higher ed policy a greater focus in the second term, with a particular emphasis on expanding college opportunity for underrepresented students. Similar text campaigns expanded rapidly and broadly—most notably former First Lady Michelle Obama’s Up Next campaign—in part because they check numerous boxes for policymakers and funders: Texts are inexpensive to send; text campaigns are relatively easy to implement; and there was evidence of their effectiveness at expanding college access….(More)”.
Paper by Archita Misra (PARIS21): “The COVID-19 crisis presents a monumental opportunity to engender a widespread data culture in our societies. Since early 2020, the emergence of popular data sites like Worldometer have promoted interest and attention in data-driven tracking of the pandemic. “R values”, “flattening the curve” and “exponential increase” have seeped into everyday lexicon. Social media and news outlets have filled the public consciousness with trends, rankings and graphs throughout multiple waves of COVID-19.
Yet, the crisis also reveals a critical lack of data literacy amongst citizens in many parts of the world. The lack of a data literate culture predates the pandemic. The supply of statistics and information has significantly outpaced the ability of lay citizens to make informed choices about their lives in the digital data age.
Today’s fragmented datafied information landscape is also susceptible to the pitfalls of misinformation, post-truth politics and societal polarisation – all of which demand a critical thinking lens towards data. There is an urgent need to develop data literacy at the level of citizens, organisations and society – such that all actors are empowered to navigate the complexity of modern data ecosystems.
The paper identifies three key take-aways. It is crucial to
- forge a common language around data literacy
- adopt a demand-driven and participatory approach to data literacy
- move from ad-hoc programming towards sustained policy, investment and impact…(More)”.