Paper by Davy Tsz Kit Ng, Jac Ka Lok Leung, Samuel K. W. Chu, and Maggie Qiao Shen: “Artificial Intelligence (AI) has spread across industries (e.g., business, science, art, education) to enhance user experience, improve work efficiency, and create many future job opportunities. However, public understanding of AI technologies, and how to define AI literacy, is under-explored. This gap poses upcoming challenges for our next generation as they learn about AI. On this note, an exploratory review was conducted to conceptualize the newly emerging concept of “AI literacy”, in search of a sound theoretical foundation to define, teach, and evaluate it. Grounded in a review of 30 existing peer-reviewed articles, this study proposed four aspects (i.e., know and understand, use, evaluate, and ethical issues) for fostering AI literacy based on the adaptation of classic literacies. This study sheds light on a consolidated definition of AI literacy, how to teach it, and its ethical concerns, establishing the groundwork for future research such as competency development and assessment criteria on AI literacy….(More)”.
Article by Talib Visram: “In October, New York City’s three public library systems announced they would permanently drop fines on late book returns. Comprising the Brooklyn, Queens, and New York public libraries, the city’s system is the largest in the country to remove fines. It’s a reversal of a long-held policy intended to ensure shelves stayed stocked, but an outdated one that many major cities, including Chicago, San Francisco, and Dallas, had already scrapped without any discernible downsides. Though a source of revenue—in 2013, for instance, Brooklyn Public Library (BPL) racked up $1.9 million in late fees—the fee system also created a barrier to library access that disproportionately affected the low-income communities that most need the resources.
That’s just one thing Brooklyn’s library system has done to try to make its services more equitable. In 2017, well before the move to eliminate fines, BPL on its own embarked on a partnership with Nudge, a behavioral science lab at the University of West Virginia, to find ways to reduce barriers to access and increase engagement with the book collections. In the first-of-its-kind collaboration, the two tested behavioral science interventions via three separate pilots, all of which led to the library’s long-term implementation of successful techniques. Those involved in the project say the steps can be translated to other library systems, though it takes serious investment of time and resources….(More)”.
OECD Report: “Across OECD countries, the increasing demand for evidence-based policy making has further led governments to design policies jointly with clear measurable objectives, and to define relevant indicators to monitor their achievement. This paper discusses the importance of such indicators in supporting the implementation of education policies.
Building on the OECD education policy implementation framework, the paper reviews the role of indicators along each of the dimensions of the framework, namely smart policy design, inclusive stakeholder engagement, and conducive environment. It draws some lessons to improve the contribution of indicators to the implementation of education policies, while taking into account some of their perennial challenges pertaining to the unintended effects of accountability. This paper aims to provide insights to policy makers and various education stakeholders, to initiate a discussion on the use and misuse of indicators in education, and to guide future actions towards making indicators contribute more effectively to education policy implementation….(More)”.
Nadia Tamez-Robledo at EdTech: “What should teachers be expected to know about student data privacy and ethics?
Considering so much of their jobs now revolve around student data, it’s a simple enough question—and one that researcher Ellen B. Mandinach and a colleague were tasked with answering. More specifically, they wanted to know what state guidelines had to say on the matter. Was that information included in codes of education ethics? Or perhaps in curriculum requirements for teacher training programs?
“The answer is, ‘Not really,’” says Mandinach, a senior research scientist at the nonprofit WestEd. “Very few state standards have anything about protecting privacy, or even much about data,” she says, aside from policies touching on FERPA or disposing of data properly.
While it seems to Mandinach that institutions have historically played hot potato over who is responsible for teaching educators about data privacy, the pandemic and its supercharged push to digital learning have brought new awareness to the issue.
The application of data ethics has real consequences for students, says Mandinach, pointing to an Atlanta sixth grader who was accused of “Zoombombing” based on his computer’s IP address, and to the Dartmouth students who were exonerated of cheating accusations.
“There are many examples coming up as we’re in this uncharted territory, particularly as we’re virtual,” Mandinach says. “Our goal is to provide resources and awareness building to the education community and professional organization…so [these tools] can be broadly used to help better prepare educators, both current and future.”
This week, Mandinach and her partners at the Future of Privacy Forum released two training resources for K-12 teachers: the Student Privacy Primer and a guide to working through data ethics scenarios. The curriculum is based on their report examining how much data privacy and ethics preparation teachers receive while in college….(More)”.
Paper by Alex Engler: “Hundreds of higher education institutions are procuring algorithms that strategically allocate scholarships to convince more students to enroll. In doing so, these enrollment management algorithms help colleges tailor the cost of attendance to students’ willingness to pay, a crucial aspect of competition in the higher education market. This paper elaborates on the specific two-stage process by which these algorithms first predict how likely prospective students are to enroll, and second, help decide how to disburse scholarships to convince more of those prospective students to attend the college. These algorithms are valuable to colleges for institutional planning and financial stability, as well as for reaching their preferred financial, demographic, and scholastic outcomes for the incoming student body.
Unfortunately, the widespread use of enrollment management algorithms may also be hurting students, especially due to their narrow focus on enrollment. The prevailing evidence suggests that these algorithms generally reduce the amount of scholarship funding offered to students. Further, algorithms excel at identifying a student’s exact willingness to pay, meaning they may drive enrollment while also reducing students’ chances to persist and graduate. The use of this two-step process also opens many subtle channels for algorithmic discrimination to perpetuate unfair financial aid practices. Higher education is already suffering from low graduation rates, high student debt, and stagnant inequality for racial minorities—crises that enrollment algorithms may be making worse.
This paper offers a range of recommendations to ameliorate the risks of enrollment management algorithms in higher education. Categorically, colleges should not use predicted likelihood to enroll in either the admissions process or in awarding need-based aid—these determinations should only be made based on the applicant’s merit and financial circumstances, respectively. When colleges do use algorithms to distribute scholarships, they should proceed cautiously and document their data, processes, and goals. They should also examine how scholarship changes affect students’ likelihood to graduate, or whether they may deepen inequities between student populations, and should ensure an active role for humans in these processes, such as exclusively using people to evaluate application quality and hiring internal data scientists who can challenge algorithmic specifications. State policymakers should consider the expanding role of these algorithms too, and should try to create more transparency about their use in public institutions. More broadly, policymakers should consider enrollment management algorithms as a concerning symptom of pre-existing trends towards higher tuition, more debt, and reduced accessibility in higher education….(More)”.
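The two-stage process the paper describes can be illustrated with a deliberately simplified sketch. Everything here is invented for illustration: the scoring rule, the sensitivity numbers, and the greedy allocation strategy are all hypothetical stand-ins, not any vendor’s actual model, and real enrollment management systems are proprietary and far more complex.

```python
# Toy sketch of a two-stage enrollment management pipeline (all values invented).

def predict_enroll_prob(student, aid):
    """Stage 1 (toy): predicted enrollment probability rises with aid offered,
    at a per-student rate standing in for 'willingness to pay'."""
    return min(1.0, student["base_prob"] + student["aid_sensitivity"] * aid)

def allocate_aid(students, budget, step=1000):
    """Stage 2 (toy): greedily hand out $1,000 increments wherever the
    predicted enrollment gain per dollar is largest."""
    aid = {s["id"]: 0 for s in students}
    while budget >= step:
        def gain(s):
            return (predict_enroll_prob(s, aid[s["id"]] + step)
                    - predict_enroll_prob(s, aid[s["id"]]))
        best = max(students, key=gain)
        if gain(best) <= 0:
            break
        aid[best["id"]] += step
        budget -= step
    return aid

students = [
    {"id": "A", "base_prob": 0.30, "aid_sensitivity": 0.00002},
    {"id": "B", "base_prob": 0.55, "aid_sensitivity": 0.00001},
]
print(allocate_aid(students, budget=5000))  # {'A': 5000, 'B': 0}
```

In this toy run the entire budget flows to the more price-sensitive applicant, which is exactly the dynamic the paper flags: aid follows predicted responsiveness rather than need or merit.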
An analysis “of curricula building blocks” by JRC-European Commission: “This report identifies building blocks of master’s programs on Artificial Intelligence (AI), on the basis of the existing programs available in the European Union. These building blocks provide a first analysis that requires acceptance and sharing by the AI community. The proposal analyses, first, the knowledge contents and, second, the educational competences declared as learning outcomes of 45 postgraduate academic master’s programs related to AI from universities in 13 European countries (Belgium, Denmark, Finland, France, Germany, Italy, Ireland, Netherlands, Portugal, Spain, and Sweden in the EU; plus Switzerland and the United Kingdom).
As a closely related and relevant part of Informatics and Computer Science, major AI-related curricula on data science have also been taken into consideration for the analysis. Defining a specific AI curriculum alongside data science curricula is motivated by the need for a deeper understanding of the topics and skills that build the foundations of strong AI, as opposed to the narrow AI that is the general focus of data science. The body of knowledge with the proposed building blocks for AI consists of a number of knowledge areas, which are classified as Essential, Core, General and Applied.
First, the AI Essentials cover topics and competences from foundational disciplines that are fundamental to AI. Second, topics and competences specific to AI and showing close interrelationships are classified into a set of AI Core domain-specific areas, plus one AI General area for non-domain-specific knowledge. Third, AI Applied areas are built on top of the topics and competences required to develop AI applications and services, viewed from a more philosophical and ethical perspective. All the knowledge areas are refined into knowledge units and topics for the analysis. The study of core AI knowledge topics across the sample of master’s programs shows that machine learning prevails, followed in order by: computer vision; human-computer interaction; knowledge representation and reasoning; natural language processing; planning, search and optimisation; and robotics and intelligent automation. A large number of the programs analysed focus heavily on machine learning topics, despite being initially classified in another domain. It is noteworthy that machine learning topics, along with selected topics on knowledge representation, show a high degree of commonality between AI and data science programs. Finally, the competence-based analysis of the sample programs’ learning outcomes, based on Bloom’s cognitive levels, shows that the understanding and creating levels are dominant.
By contrast, analysing and evaluating are the scarcest cognitive levels. Another relevant finding is that master’s programs on AI offered through the disciplinary lens of engineering studies show a notable scarcity of competences related to informatics or computing, which are fundamental to AI….(More)”.
Press Release: “A new alliance has been formed to create industry-wide professional standards for data science. ‘The Alliance for Data Science Professionals’ is defining the standards needed to ensure an ethical and well-governed approach so the public, organisations and governments can have confidence in how their data is being used.
While the skills of data scientists are increasingly in demand, there is currently no professional framework for those working in the field. These new industry-wide standards, which will be finalised by the autumn, look to address current issues, such as data breaches, the misuse of data in modelling and bias in artificial intelligence. They can give people confidence that their data is being used ethically, stored safely and analysed robustly.
The Alliance members, who initially convened in July 2020, are the Royal Statistical Society, BCS, The Chartered Institute for IT, the Operational Research Society, the Institute of Mathematics and its Applications, the Alan Turing Institute and the National Physical Laboratory (NPL). They are supported by the Royal Academy of Engineering and the Royal Society.
Since convening, the Alliance has worked with volunteers and stakeholders to develop draft standards for individuals, standards for universities seeking accreditation of their courses and a certification process that will enable both individuals and education providers to gain recognition based on skills and knowledge within data science.
Governed by a memorandum of understanding, the Alliance is committed to:
- Defining the standards of professional competence and behaviour expected of people who work with data which impacts life and livelihoods. These include data scientists, data engineers, data analysts and data stewards.
- Using an open-source process to maintain and update the standards.
- Delivering these standards as data science certifications offered by the Alliance members to their professional members, with processes to hold certified members accountable for their professional status in this area.
- Using these standards as criteria for Alliance members to accredit data science degrees, and data science modules of associated degrees, as contributing to certification.
- Creating a single searchable public register of certified data science professionals….(More)”.
Policy Brief by Muriel Poisson: “As part of its research project on ‘Open government (OG) in education: Learning from experience’, the UNESCO International Institute for Educational Planning (IIEP) has prepared five thematic briefs illustrating various forms of OG as applied to the education field: open government, open budgeting, open contracting, open policy-making and crowd-sourcing, and social auditing. This brief deals specifically with open policy-making and crowd-sourcing….(More)”.
Abel Wajnerman Paz at Rest of the World: “Neurotechnology” is an umbrella term for any technology that can read and transcribe mental states by decoding and modulating neural activity. This includes technologies like closed-loop deep brain stimulation that can both detect neural activity related to people’s moods and suppress undesirable symptoms, like depression, through electrical stimulation.
Despite their evident usefulness in education, entertainment, work, and the military, neurotechnologies are largely unregulated. Now, as Chile redrafts its constitution — disassociating it from the Pinochet surveillance regime — legislators are using the opportunity to address the need for closer protection of people’s rights from the unknown threats posed by neurotechnology.
Although the technology is new, the challenge isn’t. Decades ago, similar international legislation was passed following the development of genetic technologies that made possible the collection and application of genetic data and the manipulation of the human genome. These included the Universal Declaration on the Human Genome and Human Rights in 1997 and the International Declaration on Human Genetic Data in 2003. The difference is that, this time, Chile is a leading light in the drafting of neuro-rights legislation.
In Chile, two bills — a constitutional reform bill, which is awaiting approval by the Chamber of Deputies, and a bill on neuro-protection — will establish neuro-rights for Chileans. These include the rights to personal identity, free will, mental privacy, equal access to cognitive enhancement technologies, and protection against algorithmic bias….(More)”.
Manuel León Urrutia at The Conversation: “I find it tempting to celebrate the public’s expanding access to data and familiarity with terms like “flattening the curve”. After all, a better informed society is a successful society, and the provision of data-driven information to the public seems to contribute to the notion that together we can beat COVID.
But increased data visibility shouldn’t be mistaken for increased data literacy. For example, at the start of the pandemic it was found that the portrayal of COVID deaths in logarithmic graphs confused the public. Logarithmic graphs accommodate exponentially growing data by using a scale in which each step on the y (vertical) axis represents a tenfold increase. This compression led some people to radically underestimate the dramatic rise in COVID cases.
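The effect described above is easy to show numerically. The sketch below uses invented case counts that double every three days; the figures are illustrative only, not real COVID data:

```python
import math

# Invented case counts doubling every 3 days (illustrative only, not real data)
cases = [100 * 2 ** (day / 3) for day in range(0, 31, 3)]

# On a linear axis, the gaps between successive points explode:
linear_steps = [round(b - a) for a, b in zip(cases, cases[1:])]

# On a log10 axis, each doubling is the same vertical step (log10(2) ≈ 0.301),
# so exponential growth plots as a straight line and can look deceptively tame:
log_steps = [round(math.log10(b) - math.log10(a), 3)
             for a, b in zip(cases, cases[1:])]

print(linear_steps)  # steadily widening gaps: 100, 200, 400, ..., 51200
print(log_steps)     # a constant 0.301 per step
```

The same exponential series that races off the top of a linear chart advances by identical, modest-looking increments on a log scale, which is exactly why an unprepared reader can underestimate the growth.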
The vast amount of data we now have available doesn’t even guarantee consensus. In fact, instead of solving the problem, this data deluge can contribute to the polarisation of public discourse. One study recently found that COVID sceptics use orthodox data presentation techniques to spread their controversial views, revealing how more data doesn’t necessarily result in better understanding. Though data is supposed to be objective and empirical, it has assumed a political, subjective hue during the pandemic….
This is where educators come in. The pandemic has only strengthened the case presented by academics for data literacy to be included in the curriculum at all educational levels, including primary. This could help citizens navigate our data-driven world, protecting them from harmful misinformation and journalistic malpractice.
Data literacy does in fact already feature in many higher education roadmaps in the UK, though I’d argue it’s a skill the entire population should be equipped with from an early age. Misconceptions about vaccine efficacy and the severity of the coronavirus are often based on poorly presented, false or misinterpreted data. The “fake news” these misconceptions generate would spread less ferociously in a world of data literate citizens.