When Do Informational Interventions Work? Experimental Evidence from New York City High School Choice


Paper by Sarah Cohodes, Sean Corcoran, Jennifer Jennings & Carolyn Sattin-Bajaj: “This paper reports the results of a large, school-level randomized controlled trial evaluating a set of three informational interventions for young people choosing high schools in 473 middle schools, serving over 115,000 8th graders. The interventions differed in their level of customization to the student and their mode of delivery (paper or online); all treated schools received identical materials to scaffold the decision-making process. Every intervention reduced the likelihood of application to and enrollment in schools with graduation rates below the city median (75 percent). An important channel is their effect on reducing non-optimal first-choice application strategies. Providing a simplified, middle-school-specific list of schools with relatively high graduation rates had the largest impacts, causing students to enroll in high schools with 1.5-percentage-point higher graduation rates. Providing the same information online, however, did not alter students’ choices or enrollment; this appears to be due to low utilization. Online interventions with individual customization, including a recommendation tool and search engine, induced students to enroll in high schools with 1-percentage-point higher graduation rates, but with more variance in impact. Together, these results show that successful informational interventions must generate engagement with the material, and this is possible through multiple channels…(More)”.
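
For readers curious how effect sizes like these are typically estimated, the sketch below shows a minimal intent-to-treat regression for a school-level randomized trial, with standard errors clustered at the unit of randomization. It is an illustration only, not the authors’ code; the file and column names are hypothetical.

```python
# Minimal sketch of an intent-to-treat estimate for a school-level RCT.
# Assumptions: one row per student, a 'treatment_arm' label inherited from the
# student's middle school, and a binary outcome for enrolling in a school with
# a below-median graduation rate. All names here are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("students.csv")  # hypothetical student-level file

model = smf.ols(
    "enrolled_below_median_grad_rate ~ C(treatment_arm, Treatment(reference='control'))",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["middle_school_id"]})  # cluster by school

print(model.summary())  # arm coefficients are the intent-to-treat effects
```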

Sharing Student Data Across Public Sectors: Importance of Community Engagement to Support Responsible and Equitable Use


Report by CDT: “Data and technology play a critical role in today’s education institutions, with 85 percent of K-12 teachers anticipating that online learning and the use of education technology at their school will play a larger role in the future than they did before the pandemic. The growth in data-driven decision-making has helped fuel the increasing prevalence of data sharing practices between K-12 education agencies and adjacent public sectors like social services. Yet the sharing of personal data can pose risks as well as benefits, and many communities have historically experienced harm as a result of irresponsible data sharing practices. For example, if the underlying data itself is biased, sharing that information exacerbates those inequities and increases the likelihood that potential harms fall disproportionately on certain communities. As a result, it is critical that agencies participating in data sharing initiatives take steps to ensure that the benefits are available to all and that no groups of students experience disproportionate harm.

A core component of sharing data responsibly is proactive, robust community engagement with the group of people whose data is being shared, as well as their surrounding community. This population has the greatest stake in the success or failure of a given data sharing initiative; as such, public agencies have a practical incentive, and a moral obligation, to engage them regarding decisions being made about their data…

This paper presents guidance on how practitioners can conduct effective community engagement around the sharing of student data between K-12 education agencies and adjacent public sectors. We explore the importance of community engagement around data sharing initiatives, and highlight four dimensions of effective community engagement:

  • Plan: Establish Goals, Processes, and Roles
  • Enable: Build Collective Capacity
  • Resource: Dedicate Appropriate People, Time, and Money
  • Implement: Carry Out Vision Effectively and Monitor Implementation…(More)”.

Conceptualizing AI literacy: An exploratory review


Paper by Davy Tsz Kit Ng, Jac Ka Lok Leung, Samuel K.W. Chu, and Maggie Qiao Shen: “Artificial Intelligence (AI) has spread across industries (e.g., business, science, art, education) to enhance user experience, improve work efficiency, and create many future job opportunities. However, public understanding of AI technologies and how to define AI literacy is under-explored, and this gap poses challenges for how the next generation will learn about AI. On this note, an exploratory review was conducted to conceptualize the newly emerging concept of “AI literacy”, in search of a sound theoretical foundation to define, teach and evaluate AI literacy. Grounded in a review of 30 existing peer-reviewed articles, this review proposed four aspects (i.e., know and understand, use, evaluate, and ethical issues) for fostering AI literacy based on the adaptation of classic literacies. This study sheds light on the consolidated definition, teaching, and ethical concerns of AI literacy, establishing the groundwork for future research such as competency development and assessment criteria for AI literacy….(More)”.

How behavioral science could get people back into public libraries


Article by Talib Visram: “In October, New York City’s three public library systems announced they would permanently drop fines on late book returns. Comprising the Brooklyn, Queens, and New York public libraries, the city’s system is the largest in the country to remove fines. It’s a reversal of a long-held policy intended to ensure shelves stayed stacked, but an outdated one that many major cities, including Chicago, San Francisco, and Dallas, had already scrapped without any discernible downsides. Though the fee system was a source of revenue (Brooklyn Public Library, or BPL, racked up $1.9 million in late fees in 2013 alone), it also created a barrier to library access that disproportionately touched the low-income communities that most need the resources.

That’s just one thing Brooklyn’s library system has done to try to make its services more equitable. In 2017, well before the move to eliminate fines, BPL embarked on its own partnership with Nudge, a behavioral science lab at the University of West Virginia, to find ways to reduce barriers to access and increase engagement with the book collections. In the first-of-its-kind collaboration, the two tested behavioral science interventions via three separate pilots, all of which led to the library’s long-term implementation of successful techniques. Those involved in the project say the steps can be translated to other library systems, though it takes a serious investment of time and resources….(More)”.

Developing indicators to support the implementation of education policies


OECD Report: “Across OECD countries, the increasing demand for evidence-based policy making has further led governments to design policies together with clear, measurable objectives, and to define relevant indicators to monitor their achievement. This paper discusses the importance of such indicators in supporting the implementation of education policies.

Building on the OECD education policy implementation framework, the paper reviews the role of indicators along each of the dimensions of the framework, namely smart policy design, inclusive stakeholder engagement, and conducive environment. It draws some lessons to improve the contribution of indicators to the implementation of education policies, while taking into account some of their perennial challenges pertaining to the unintended effects of accountability. This paper aims to provide insights to policy makers and various education stakeholders, to initiate a discussion on the use and misuse of indicators in education, and to guide future actions towards a better contribution of indicators to education policy implementation…..(More)”.

What Do Teachers Know About Student Privacy? Not Enough, Researchers Say


Nadia Tamez-Robledo at EdTech: “What should teachers be expected to know about student data privacy and ethics?

Considering how much of their jobs now revolves around student data, it’s a simple enough question, and one that researcher Ellen B. Mandinach and a colleague were tasked with answering. More specifically, they wanted to know what state guidelines had to say on the matter. Was that information included in codes of education ethics? Or perhaps in curriculum requirements for teacher training programs?

“The answer is, ‘Not really,’” says Mandinach, a senior research scientist at the nonprofit WestEd. “Very few state standards have anything about protecting privacy, or even much about data,” she says, aside from policies touching on FERPA or disposing of data properly.

While it seems to Mandinach that institutions have historically played hot potato over who is responsible for teaching educators about data privacy, the pandemic and its supercharged push to digital learning have brought new awareness to the issue.

The application of data ethics has real consequences for students, says Mandinach, like an Atlanta sixth grader who was accused of “Zoombombing” based on his computer’s IP address or the Dartmouth students who were exonerated from cheating accusations.

“There are many examples coming up as we’re in this uncharted territory, particularly as we’re virtual,” Mandinach says. “Our goal is to provide resources and awareness building to the education community and professional organization…so [these tools] can be broadly used to help better prepare educators, both current and future.”

This week, Mandinach and her partners at the Future of Privacy Forum released two training resources for K-12 teachers: the Student Privacy Primer and a guide to working through data ethics scenarios. The curriculum is based on their report examining how much data privacy and ethics preparation teachers receive while in college….(More)”.

Enrollment algorithms are contributing to the crises of higher education


Paper by Alex Engler: “Hundreds of higher education institutions are procuring algorithms that strategically allocate scholarships to convince more students to enroll. In doing so, these enrollment management algorithms help colleges vary the cost of attendance according to students’ willingness to pay, a crucial aspect of competition in the higher education market. This paper elaborates on the specific two-stage process by which these algorithms first predict how likely prospective students are to enroll and, second, help decide how to disburse scholarships to convince more of those prospective students to attend the college. These algorithms are valuable to colleges for institutional planning and financial stability, as well as for reaching their preferred financial, demographic, and scholastic outcomes for the incoming student body.

Unfortunately, the widespread use of enrollment management algorithms may also be hurting students, especially due to their narrow focus on enrollment. The prevailing evidence suggests that these algorithms generally reduce the amount of scholarship funding offered to students. Further, algorithms excel at identifying a student’s exact willingness to pay, meaning they may drive enrollment while also reducing students’ chances to persist and graduate. The use of this two-step process also opens many subtle channels for algorithmic discrimination to perpetuate unfair financial aid practices. Higher education is already suffering from low graduation rates, high student debt, and stagnant inequality for racial minorities—crises that enrollment algorithms may be making worse.

This paper offers a range of recommendations to ameliorate the risks of enrollment management algorithms in higher education. Categorically, colleges should not use a student’s predicted likelihood of enrolling in either the admissions process or in awarding need-based aid; these determinations should be made based only on the applicant’s merit and financial circumstances, respectively. When colleges do use algorithms to distribute scholarships, they should proceed cautiously and document their data, processes, and goals. Colleges should also examine how scholarship changes affect students’ likelihood to graduate, or whether they may deepen inequities between student populations. In addition, colleges should ensure an active role for humans in these processes, such as exclusively using people to evaluate application quality and hiring internal data scientists who can challenge algorithmic specifications. State policymakers should consider the expanding role of these algorithms too, and should try to create more transparency about their use in public institutions. More broadly, policymakers should consider enrollment management algorithms as a concerning symptom of pre-existing trends towards higher tuition, more debt, and reduced accessibility in higher education….(More)”.
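
To make the two-stage process described in the paper concrete, here is a minimal sketch under assumed data, feature names, and a deliberately crude allocation rule; it illustrates the general idea only, not any vendor’s actual system.

```python
# Stage 1: predict each admitted student's probability of enrolling, using a
# model trained on past admission cycles. Stage 2: spend a fixed scholarship
# budget on the admits whose decisions the aid is assumed to move the most.
# All files, features, and parameters below are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

FEATURES = ["gpa", "test_score", "distance_miles", "family_income"]

history = pd.read_csv("past_cycles.csv")        # past admits, with 0/1 'enrolled'
admits = pd.read_csv("admitted_students.csv")   # this year's admitted students

# Stage 1: enrollment probability model
stage1 = LogisticRegression(max_iter=1000).fit(history[FEATURES], history["enrolled"])
admits["p_enroll"] = stage1.predict_proba(admits[FEATURES])[:, 1]

# Stage 2: a crude allocation rule. Target "marginal" admits (predicted
# probability closest to 0.5) on the assumption that aid moves them most.
# Real systems estimate price sensitivity directly and optimize over it.
AWARD = 5_000
BUDGET = 250_000
n_awards = BUDGET // AWARD

admits["margin"] = (admits["p_enroll"] - 0.5).abs()
awardees = admits.nsmallest(n_awards, "margin")
admits["scholarship"] = 0
admits.loc[awardees.index, "scholarship"] = AWARD
```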

Artificial intelligence master’s programs


An analysis “of curricula building blocks” by JRC-European Commission: “This report identifies building blocks of master’s programs on Artificial Intelligence (AI), on the basis of the existing programs available in the European Union. These building blocks provide a first analysis that requires acceptance and sharing by the AI community. The proposal analyses, first, the knowledge contents and, second, the educational competences declared as learning outcomes of 45 post-graduate academic master’s programs related to AI from universities in 13 European countries (Belgium, Denmark, Finland, France, Germany, Italy, Ireland, Netherlands, Portugal, Spain, and Sweden in the EU; plus Switzerland and the United Kingdom).

As a closely related and relevant part of Informatics and Computer Science, major AI-related curricula on data science have also been taken into consideration for the analysis. The definition of a specific AI curriculum, distinct from data science curricula, is motivated by the need for a deeper understanding of the topics and skills of the former, which build up the foundations of strong AI, as opposed to narrow AI, the general focus of the latter. The body of knowledge with the proposed building blocks for AI consists of a number of knowledge areas, which are classified as Essential, Core, General and Applied.

First, the AI Essentials cover topics and competences from foundational disciplines that are fundamental to AI. Second, topics and competences showing a close interrelationship and specific to AI are classified into a set of AI Core domain-specific areas, plus one AI General area for non-domain-specific knowledge. Third, AI Applied areas are built on top of topics and competences required to develop AI applications and services, under a more philosophical and ethical perspective. All the knowledge areas are refined into knowledge units and topics for the analysis. As a result of studying the core AI knowledge topics across the sample of master’s programs, machine learning is observed to prevail, followed in order by: computer vision; human-computer interaction; knowledge representation and reasoning; natural language processing; planning, search and optimisation; and robotics and intelligent automation. A significant number of the master’s programs analysed are heavily focused on machine learning topics, despite being initially classified in another domain. It is noteworthy that machine learning topics, along with selected topics on knowledge representation, show a high degree of commonality between AI and data science programs. Finally, the competence-based analysis of the sample programs’ learning outcomes, based on Bloom’s cognitive levels, shows that the understanding and creating cognitive levels are dominant.

By contrast, analysing and evaluating are the scarcest cognitive levels. Another relevant outcome is that master’s programs on AI under the disciplinary lens of engineering studies show a notable scarcity of competences related to informatics or computing, which are fundamental to AI….(More)”.
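
As a rough illustration of the tallying behind prevalence figures like these, the toy sketch below counts topic and Bloom-level frequencies across coded programs; the data and coding scheme are invented for illustration and are not the JRC’s dataset.

```python
# Toy sketch: once each program has been coded with its core AI topics and the
# Bloom cognitive levels implied by its learning outcomes, prevalence is a
# simple frequency count. The data below is synthetic.
from collections import Counter

programs = [
    {"topics": ["machine learning", "computer vision"], "bloom": ["understand", "create"]},
    {"topics": ["machine learning", "robotics"], "bloom": ["understand", "apply"]},
    {"topics": ["knowledge representation", "machine learning"], "bloom": ["create"]},
]

topic_counts = Counter(t for p in programs for t in p["topics"])
bloom_counts = Counter(b for p in programs for b in p["bloom"])

print(topic_counts.most_common())  # e.g. machine learning prevails
print(bloom_counts.most_common())
```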

Alliance formed to create new professional standards for data science


Press Release: “A new alliance has been formed to create industry-wide professional standards for data science. ‘The Alliance for Data Science Professionals’ is defining the standards needed to ensure an ethical and well-governed approach so the public, organisations and governments can have confidence in how their data is being used. 

While the skills of data scientists are increasingly in demand, there is currently no professional framework for those working in the field. These new industry-wide standards, which will be finalised by the autumn, look to address current issues, such as data breaches, the misuse of data in modelling and bias in artificial intelligence. They can give people confidence that their data is being used ethically, stored safely and analysed robustly. 

The Alliance members, who initially convened in July 2020, are the Royal Statistical Society; BCS, The Chartered Institute for IT; the Operational Research Society; the Institute of Mathematics and its Applications; the Alan Turing Institute; and the National Physical Laboratory (NPL). They are supported by the Royal Academy of Engineering and the Royal Society.

Since convening, the Alliance has worked with volunteers and stakeholders to develop draft standards for individuals, standards for universities seeking accreditation of their courses and a certification process that will enable both individuals and education providers to gain recognition based on skills and knowledge within data science.  

Governed by a memorandum of understanding, the Alliance is committed to:  

  • Defining the standards of professional competence and behaviour expected of people who work with data which impacts life and livelihoods. These include data scientists, data engineers, data analysts and data stewards.  
  • Using an open-source process to maintain and update the standards. 
  • Delivering these standards as data science certifications offered by the Alliance members to their professional members, with processes to hold certified members accountable for their professional status in this area. 
  • Using these standards as criteria for Alliance members to accredit data science degrees, and data science modules of associated degrees, as contributing to certification. 
  • Creating a single searchable public register of certified data science professionals….(More)”.

From open policy-making to crowd-sourcing: illustrative forms of open government in education


Policy Brief by Muriel Poisson: “As part of its research project on ‘Open government (OG) in education: Learning from experience’, the UNESCO International Institute for Educational Planning (IIEP) has prepared five thematic briefs illustrating various forms of OG as applied to the education field: open government, open budgeting, open contracting, open policy-making and crowd-sourcing, and social auditing. This brief deals specifically with open policy-making and crowd-sourcing….(More)”.