Can open data increase younger generations’ trust in democratic institutions? A study in the European Union


Paper by Nicolás Gonzálvez-Gallego and Laura Nieto-Torrejón: “Scholars and policy makers are paying increasing attention to how young people are involved in politics and their confidence in the current democratic system. In the context of a trust crisis across the European Union, this paper examines whether open government data, a promising governance strategy, may help to boost Millennials’ and Generation Z’s trust in public institutions and satisfaction with public outcomes. First, results from our preliminary analysis challenge some popular beliefs by revealing that younger generations tend to trust their institutions notably more than other European citizens do. In addition, our findings show that open government data is a trust enabler for Millennials and Generation Z, not only through a direct link between the two, but also through the mediating role of citizens’ satisfaction. Accordingly, public officials are encouraged to expand the implementation of open data strategies as a way to strengthen younger generations’ attachment to democratic institutions….(More)”.

New York Temporarily Bans Facial Recognition Technology in Schools


Hunton’s Privacy Blog: “On December 22, 2020, New York Governor Andrew Cuomo signed into law legislation that temporarily bans the use or purchase of facial recognition and other biometric identifying technology in public and private schools until at least July 1, 2022. The legislation also directs the New York Commissioner of Education (the “Commissioner”) to conduct a study on whether this technology is appropriate for use in schools.

In his press statement, Governor Cuomo indicated that the legislation comes after concerns were raised about potential risks to students, including issues surrounding misidentification by the technology as well as safety, security and privacy concerns. “This legislation requires state education policymakers to take a step back, consult with experts and address privacy issues before determining whether any kind of biometric identifying technology can be brought into New York’s schools. The safety and security of our children is vital to every parent, and whether to use this technology is not a decision to be made lightly,” the Governor explained.

Key elements of the legislation include:

  • Defining “facial recognition” as “any tool using an automated or semi-automated process that assists in uniquely identifying or verifying a person by comparing and analyzing patterns based on the person’s face,” and “biometric identifying technology” as “any tool using an automated or semi-automated process that assists in verifying a person’s identity based on a person’s biometric information”;
  • Prohibiting the purchase and use of facial recognition and other biometric identifying technology in all public and private elementary and secondary schools until July 1, 2022, or until the Commissioner authorizes the purchase and use of such technology, whichever occurs later; and
  • Directing the Commissioner, in consultation with New York’s Office of Information Technology, Division of Criminal Justice Services, Education Department’s Chief Privacy Officer and other stakeholders, to conduct a study and make recommendations as to the circumstances in which facial recognition and other biometric identifying technology is appropriate for use in schools and what restrictions and guidelines should be enacted to protect privacy, civil rights and civil liberties interests….(More)”.

Building Trust for Inter-Organizational Data Sharing: The Case of the MLDE


Paper by Heather McKay, Sara Haviland, and Suzanne Michael: “There is increasing interest in sharing data that was once siloed in separate agencies, both across agencies and even between states. Driving this is a need to better understand how people experience education and work, and their pathways through each. A data-sharing approach offers many possible advantages, allowing states to leverage pre-existing data systems to conduct increasingly sophisticated and complete analyses. However, information sharing across state organizations presents a series of complex challenges, one of which is the central role trust plays in building successful data-sharing systems. Trust building between organizations is therefore crucial to ensuring project success.

This brief examines the process of building trust within the context of the development and implementation of the Multistate Longitudinal Data Exchange (MLDE). The brief is based on research and evaluation activities conducted by Rutgers’ Education & Employment Research Center (EERC) over the past five years, which included 40 interviews with state leaders and the Western Interstate Commission for Higher Education (WICHE) staff, observations of user group meetings, surveys, and MLDE document analysis. It is one in a series of MLDE briefs developed by EERC….(More)”.

Four Principles to Make Data Tools Work Better for Kids and Families


Blog by the Annie E. Casey Foundation: “Advanced data analytics are deeply embedded in the operations of public and private institutions and shape the opportunities available to youth and families. Whether these tools benefit or harm communities depends on their design, use and oversight, according to a report from the Annie E. Casey Foundation.

Four Principles to Make Advanced Data Analytics Work for Children and Families examines the growing field of advanced data analytics and offers guidance to steer the use of big data in social programs and policy….

The Foundation report identifies four principles — complete with examples and recommendations — to help steer the growing field of data science in the right direction.

Four Principles for Data Tools

  1. Expand opportunity for children and families. Most established uses of advanced analytics in education, social services and criminal justice focus on problems facing youth and families. Promising uses of advanced analytics go beyond mitigating harm and help to identify so-called “odds beaters” and new opportunities for youth.
    • Example: The Children’s Data Network at the University of Southern California is helping the state’s departments of education and social services explore why some students succeed despite negative experiences and what protective factors merit more investment.
    • Recommendation: Government and its philanthropic partners need to test whether novel data science applications can create new insights and when it is best to apply them.
       
  2. Provide transparency and evidence. Advanced analytical tools must earn and maintain a social license to operate. The public has a right to know what decisions these tools are informing or automating, how they have been independently validated, and who is accountable for answering and addressing concerns about how they work.
    • Recommendations: Local and state task forces can be excellent laboratories for testing how to engage youth and communities in discussions about advanced analytics applications and the policy frameworks needed to regulate their use. In addition, public and private funders should avoid supporting private algorithms whose design and performance are shielded by trade secrecy claims. Instead, they should fund and promote efforts to develop, evaluate and adapt transparent and effective models.
       
  3. Empower communities. The field of advanced data analytics often treats children and families as clients, patients and consumers. Put to better use, these same tools can help elucidate and reform the systems acting upon children and families. For this shift to occur, institutions must focus analyses and risk assessments on structural barriers to opportunity rather than individual profiles.
    • Recommendation: In debates about the use of data science, greater investment is needed to amplify the voices of youth and their communities.
       
  4. Promote equitable outcomes. Useful advanced analytics tools should promote more equitable outcomes for historically disadvantaged groups. New investments in advanced analytics are only worthwhile if they aim to correct the well-documented bias embedded in existing models.
    • Recommendations: Advanced analytical tools should only be introduced when they reduce the opportunity deficit for disadvantaged groups — a move that will take organizing and advocacy to establish and new policy development to institutionalize. Philanthropy and government also have roles to play in helping communities test and improve tools and examples that already exist….(More)”.

Right/Wrong: How Technology Transforms Our Ethics


Book by Juan Enriquez: “Most people have a strong sense of right and wrong, and they aren’t shy about expressing their opinions. But when we take a polarizing stand on something we regard as an eternal truth, we often forget that ethics evolve over time. Many swings of the right-versus-wrong pendulum are driven by advances in technology. Our great-grandparents might be shocked by in vitro fertilization; our great-grandchildren might be shocked by the messiness of pregnancy, childbirth, and unedited genes. In Right/Wrong, Juan Enriquez reflects on what happens to our ethics as technology makes the once unimaginable a commonplace occurrence.

Evolving technology changes ethics. Enriquez points out that, contrary to common wisdom, technology often enables more ethical behaviors. Technology challenges old beliefs and upends institutions that do not grow and change. With wit and compassion, Enriquez takes on a series of technology-influenced ethical dilemmas, from sexual liberation to climate change to the “immortality” of mistakes on social media. (“Facebook, Twitter, Instagram, and Google are electronic tattoos.”) He cautions us to judge those who “should have known better,” given today’s vantage point, with less fury and more compassion. We need a quality often absent in today’s charged debates: humility. Judge those in the past as we hope to be judged in the future….(More)”.

Ethical issues of crowdsourcing in education


Paper by Katerina Zdravkova: “Crowdsourcing has become a fruitful solution for many activities, harnessing the joint power of the masses. Although not formally recognised as an educational model, the first steps towards embracing crowdsourcing as a form of formal learning and teaching have recently emerged. Before taking a dramatic step forward, it should be assessed whether the approach is feasible, sustainable and socially responsible.

One notable initiative is enetCollect, which intends to lay the groundwork for responsible research and innovation and to actively implement crowdsourcing for language learning by all citizens, regardless of their diverse social, educational, and linguistic backgrounds.

To achieve these goals, a sound framework embracing the relevant ethical and legal considerations should be established. The framework is intended for all current and prospective creators of crowd-oriented educational systems. It addresses the ethical issues affecting three groups of stakeholders: collaborative content creators, prospective users, and the institutions intending to implement the approach for educational purposes. The proposed framework offers a practical solution intended to overcome the identified barriers, which might otherwise compromise the approach’s main educational goals. If carefully designed and implemented, crowdsourcing might become a very helpful and, at the same time, very reliable educational model….(More)”.

The Pandemic Is No Excuse to Surveil Students


Zeynep Tufekci in the Atlantic: “In Michigan, a small liberal-arts college is requiring students to install an app called Aura, which tracks their location in real time, before they come to campus. Oakland University, also in Michigan, announced a mandatory wearable that would track symptoms, but, facing a student-led petition, then said it would be optional. The University of Missouri, too, has an app that tracks when students enter and exit classrooms. This practice is spreading: In an attempt to open during the pandemic, many universities and colleges around the country are forcing students to download location-tracking apps, sometimes as a condition of enrollment. Many of these apps function via Bluetooth sensors or Wi-Fi networks. When students enter a classroom, their phone informs a sensor that’s been installed in the room, or the app checks the Wi-Fi networks nearby to determine the phone’s location.
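How such a lookup works is easy to sketch. The following minimal, hypothetical illustration uses assumed access-point identifiers, map contents, and function names; it is not the code of any actual campus app:

```python
# Hypothetical campus map from Wi-Fi access-point BSSID to room.
# A real deployment would be far larger and would likely weigh signal
# strength rather than mere visibility of a network.
AP_TO_ROOM = {
    "aa:bb:cc:00:00:01": "Science Hall 101",
    "aa:bb:cc:00:00:02": "Science Hall 102",
}

def locate(visible_bssids):
    """Return the first mapped room whose access point the phone can see."""
    for bssid in visible_bssids:
        if bssid in AP_TO_ROOM:
            return AP_TO_ROOM[bssid]
    return None  # phone is not near any mapped classroom

# A phone that sees these networks would be logged as present in Science Hall 101.
print(locate(["ff:ee:dd:12:34:56", "aa:bb:cc:00:00:01"]))
```

The same lookup works with Bluetooth beacon identifiers in place of BSSIDs, which is why either sensor type raises the same tracking concerns.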

As a university professor, I’ve seen surveillance like this before. Many of these apps replicate the tracking system sometimes installed on the phones of student athletes, for whom it is often mandatory. That system tells us a lot about what we can expect with these apps.

There is a widespread charade in the United States that university athletes, especially those who play high-profile sports such as football and basketball, are just students who happen to be playing sports as amateurs “in their free time.” The reality is that these college athletes in high-level sports, who are aggressively recruited by schools, bring prestige and financial resources to universities, under a regime that requires them to train like professional athletes despite their lack of salary. However, making the most of one’s college education and training at that level are virtually incompatible, simply because the day is 24 hours long and the body, even that of a young, healthy athlete, can only take so much when training so hard. Worse, many of these athletes are minority students, specifically Black men, who were underserved during their whole K–12 education and faced the same challenge then as they do now: Train hard in hopes of a scholarship and try to study with what little time is left, often despite being enrolled in schools with mediocre resources. Many of them arrive at college with an athletic scholarship but not enough academic preparation compared with their peers who went to better schools and could also concentrate on schooling….(More)”

The Road Back to College Is Paved with Barriers, but Behavioral Science Can Help Smooth the Way


Blog by Katherine Flaschen and Ben Castleman: “In order to create the most effective solutions, policymakers and educators need to answer a fundamental question: Why do so many of these students, many of whom have already made substantial progress toward their degree, fail to return to college and graduate? …

With a better understanding of the barriers preventing people who intend to finish their degree from following through, policymakers and colleges can create solutions that meaningfully meet students’ needs and help them re-enroll. As states across the country face rising unemployment rates, it’s critical to design and test interventions that address these behavioral barriers and help thousands of citizens who are out of work due to the COVID-19 crisis consider their options for going back to school.

For example, colleges could provide monetary incentives to students for taking actions related to re-enrollment that overcome these barriers, such as speaking with an advisor, reviewing upcoming recommended courses and developing a course plan, and making an active choice about when to return to college. In addition, students with some college but no degree (SCND) could be paired with current students who serve as peer mentors, both to provide support with the re-enrollment process and to hold them accountable for degree completion (especially if faced with difficult remaining classes). Community colleges could also encourage major employers of the SCND population in high-demand fields, like health care, to provide options for employees to finish their degree while working (e.g., via tuition reimbursement programs), translate degree attainment into concrete career returns, and identify representatives within the company, such as recent graduates, to promote re-enrollment and make it a more salient opportunity….(More)”.

An algorithm shouldn’t decide a student’s future


Hye Jung Han at Politico: “…Education systems across Europe struggled this year with how to determine students’ all-important final grades. But one system, the International Baccalaureate (“IB”) — a high school program that is highly regarded by European universities, and offered by both public and private schools in 152 countries — did something unusual.

Having canceled final exams, which make up the majority of an IB student’s grade, the Geneva-based foundation of the same name hastily built an algorithm that used a student’s coursework scores, teachers’ predicted grades and the school’s historical IB results to guess what students might have scored had they taken their exams in a hypothetical, pandemic-free year. The algorithm’s output became the student’s final grade.
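The article does not detail the model itself, but a minimal, hypothetical sketch shows how combining those three inputs might work; the weights, formula, and function name below are assumptions for illustration only, not the IB’s actual method:

```python
def predict_final_grade(coursework_score, teacher_predicted_grade,
                        school_historical_grades):
    """Guess a final IB grade (1-7 scale) from coursework, the teacher's
    prediction, and the school's past final grades (assumed same scale)."""
    # The school's historical average acts as a prior on the student.
    school_prior = sum(school_historical_grades) / len(school_historical_grades)

    # Hypothetical weighting of the three inputs.
    estimate = (0.4 * coursework_score
                + 0.3 * teacher_predicted_grade
                + 0.3 * school_prior)

    # Clamp and round to the 1-7 IB grade scale.
    return max(1, min(7, round(estimate)))
```

Even this toy version exposes the structural problem: a strong student at a school with historically weak results is pulled down by the school prior, regardless of her own record.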

The results were catastrophic. Soon after the grades were released, serious mismatches emerged between the grades expected from a student’s prior performance and those awarded by the algorithm. Because IB students’ university admissions are contingent upon their final grades, the unexpectedly poor grades generated for some resulted in scholarships and admissions offers being revoked…

The IB had alternatives. It could instead have used students’ actual academic performance and graded on a generous curve. It could have incorporated practice test grades, third-party moderation to minimize grading bias and teachers’ broad evaluations of student progress.

It could have engaged with universities on flexibly factoring final grades into this year’s admissions decisions, as universities contemplate opening their now-virtual classes to more students to replace lost revenue.

It increasingly seems like the greatest potential of the power promised by predictive data lies in the realm of misuse.

For this year’s graduating class, who have already responded with grace and resilience in their final year of school, the automating away of their capacity and potential is an unfair and unwanted preview of the world they are graduating into….(More)”.

Blame the politicians, not the technology, for A-level fiasco


The Editorial Board at the Financial Times: “The soundtrack of school students marching through Britain’s streets shouting “f*** the algorithm” captured the sense of outrage surrounding the botched awarding of A-level exam grades this year. But the students’ anger towards a disembodied computer algorithm is misplaced. This was a human failure. The algorithm used to “moderate” teacher-assessed grades had no agency and delivered exactly what it was designed to do.

It is politicians and educational officials who are responsible for the government’s latest fiasco and should be the target of students’ criticism….

Sensibly designed, computer algorithms could have been used to moderate teacher assessments in a constructive way. Using past school performance data, they could have highlighted anomalies in the distribution of predicted grades between and within schools. That could have led to a dialogue between Ofqual, the exam regulator, and anomalous schools to come up with more realistic assessments….
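A minimal sketch of that kind of constructive use, with assumed data shapes and an arbitrary threshold, might flag anomalous schools for human review rather than rewrite any student’s grade:

```python
from statistics import mean, stdev

def flag_anomalous_schools(schools, z_threshold=2.0):
    """Return IDs of schools whose teacher-predicted grades drift unusually
    far from their own historical results, relative to all schools.

    `schools` maps a school ID to a dict with lists of this year's
    `predicted` grades and past years' `historical` final grades.
    """
    # Each school's drift: mean predicted grade minus its historical mean.
    drifts = {sid: mean(s["predicted"]) - mean(s["historical"])
              for sid, s in schools.items()}

    mu, sigma = mean(drifts.values()), stdev(drifts.values())

    # Outliers trigger dialogue with the school, not automatic regrading.
    return [sid for sid, d in drifts.items()
            if sigma > 0 and abs(d - mu) / sigma > z_threshold]
```

The crucial design choice is what happens next: the output is a list of schools to talk to, not a correction applied over students’ heads.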

There are broader lessons to be drawn from the government’s algo fiasco about the dangers of automated decision-making systems. The inappropriate use of such systems to assess immigration status, policing policies and prison sentencing decisions is a live danger. In the private sector, incomplete and partial data sets can also significantly disadvantage under-represented groups when it comes to hiring decisions and performance measures.

Given the severe erosion of public trust in the government’s use of technology, it might now be advisable to subject all automated decision-making systems to critical scrutiny by independent experts. The Royal Statistical Society and The Alan Turing Institute certainly have the expertise to give a Kitemark of approval or flag concerns.

As ever, technology in itself is neither good nor bad. But it is certainly not neutral. The more we deploy automated decision-making systems, the smarter we must become in considering how best to use them and in scrutinising their outcomes. We often talk about a deficit of trust in our societies. But we should also be aware of the dangers of over-trusting technology. That may be a good essay subject for next year’s philosophy A-level….(More)”.