Data Must Speak: Positive Deviance Research


Report by UNICEF: “Despite the global learning crisis, even in the most difficult contexts, there are some “positive deviant” schools that outperform others in terms of learning, gender equality, and retention. Since 2019, in line with UNICEF’s Foundational Literacy and Numeracy Programme, Data Must Speak (DMS) research identifies these positive deviant schools, explores which behaviours and practices make them outperform others, and investigates how these could be implemented in lower-performing schools in similar contexts. DMS research uses a sequential, participatory, mixed-methods approach to improve uptake, replicability, and sustainability. The research is being undertaken in 14 countries across Africa, Asia, and Latin America…(More)”.

The 5 Stages of Data Must Speak Research

Can AI mediate conflict better than humans?


Article by Virginia Pietromarchi: “Diplomats whizzing around the globe. Hush-hush meetings, often never made public. For centuries, the art of conflict mediation has relied on nuanced human skills: from skills as simple as making eye contact and listening carefully to detecting shifts in emotions and subtle signals from opponents.

Now, a growing set of entrepreneurs and experts are pitching a dramatic new set of tools into the world of dispute resolution – relying increasingly on artificial intelligence (AI).

“Groundbreaking technological advancements are revolutionising the frontier of peace and mediation,” said Sama al-Hamdani, programme director of Hala System, a private company using AI and data analysis to gather unencrypted intelligence in conflict zones, among other war-related tasks.

“We are witnessing an era where AI transforms mediators into powerhouses of efficiency and insight,” al-Hamdani said.

The researcher is one of thousands of speakers participating in the Web Summit in Doha, Qatar, where digital conflict mediation is on the agenda. The four-day summit started on February 26 and concludes on Thursday, February 29.

Already, say experts, digital solutions have proven effective in complex diplomacy. At the peak of the COVID-19 restrictions, mediators were not able to travel for in-person meetings with their interlocutors.

The solution? Use remote communication software Skype to facilitate negotiations, as then-United States envoy Zalmay Khalilzad did for the Qatar-brokered talks between the US and the Taliban in 2020.

For generations, power brokers would gather behind closed doors to make decisions affecting people far and wide. Digital technologies can now make the process relatively more inclusive.

This is what Stephanie Williams, special representative of the United Nations’ chief in Libya, did in 2021 when she used a hybrid model integrating personal and digital interactions as she led mediation efforts to establish a roadmap towards elections. That strategy helped her speak to people living in areas deemed too dangerous to travel to. The UN estimates that Williams managed to reach one million Libyans.

However, practitioners are now growing interested in the use of technology beyond online consultations…(More)”

What Happens to Your Sensitive Data When a Data Broker Goes Bankrupt?


Article by Jon Keegan: “In 2021, a company specializing in collecting and selling location data called Near bragged that it was “The World’s Largest Dataset of People’s Behavior in the Real-World,” with data representing “1.6B people across 44 countries.” Last year the company went public with a valuation of $1 billion (via a SPAC). Seven months later it filed for bankruptcy and has agreed to sell the company.

But for the “1.6B people” that Near said its data represents, the important question is: What happens to Near’s mountain of location data? Any company could gain access to it by purchasing the company’s assets.

The prospect of this data, including Near’s collection of data from sensitive locations such as abortion clinics, being sold off in bankruptcy has raised alarms in Congress. Last week, Sen. Ron Wyden wrote to the Federal Trade Commission (FTC), urging the agency to “protect consumers and investors from the outrageous conduct” of Near, citing his office’s investigation into the India-based company.

Wyden’s letter also urged the FTC “to intervene in Near’s bankruptcy proceedings to ensure that all location and device data held by Near about Americans is promptly destroyed and is not sold off, including to another data broker.” The FTC took such an action in 2010 to block the use of 11 years’ worth of subscriber personal data during the bankruptcy proceedings of XY Magazine, which was aimed at young gay men. The agency requested that the data be destroyed to prevent its misuse.

Wyden’s investigation was spurred by a May 2023 Wall Street Journal report that Near had licensed location data to the anti-abortion group Veritas Society so it could target ads to visitors of Planned Parenthood clinics and attempt to dissuade women from seeking abortions. Wyden’s investigation revealed that the group’s geofencing campaign focused on 600 Planned Parenthood clinics in 48 states. The Journal also revealed that Near had been selling its location data to the Department of Defense and intelligence agencies...(More)”.

Governing the use of big data and digital twin technology for sustainable tourism


Report by Eko Rahmadian: “The tourism industry is increasingly utilizing big data to gain valuable insights and enhance decision-making processes. The advantages of big data, such as real-time information, robust data processing capabilities, and improved stakeholder decision-making, make it a promising tool for analyzing various aspects of tourism, including sustainability. Moreover, integrating big data with prominent technologies like machine learning, artificial intelligence (AI), and the Internet of Things (IoT) has the potential to revolutionize smart and sustainable tourism.

Despite the potential benefits, the use of big data for sustainable tourism remains limited, and its implementation poses challenges related to governance, data privacy, ethics, stakeholder communication, and regulatory compliance. Addressing these challenges is crucial to ensure the responsible and sustainable use of these technologies. Therefore, strategies must be developed to navigate these issues through a proper governing system.

To bridge the existing gap, this dissertation focuses on the current research on big data for sustainable tourism and strategies for governing its use and implementation in conjunction with emerging technologies. Specifically, this PhD dissertation centers on mobile positioning data (MPD) as a case due to its unique benefits, challenges, and complexity. Also, this project introduces three frameworks, namely: 1) a conceptual framework for digital twins (DT) for smart and sustainable tourism, 2) a documentation framework for architectural decisions (DFAD) to ensure the successful implementation of the DT technology as a governance mechanism, and 3) a big data governance framework for official statistics (BDGF). This dissertation not only presents these frameworks and their benefits but also investigates the issues and challenges related to big data governance while empirically validating the applicability of the proposed frameworks…(More)”.

How will AI shape our future cities?


Article by Ying Zhang: “For city planners, a bird’s-eye view of a map showing buildings and streets is no longer enough. They need to simulate changes to bus routes or traffic light timings before implementation to know how they might affect the population. Now, they can do so with digital twins – often referred to as “mirror worlds” – which allow them to simulate scenarios more safely and cost-effectively through a three-dimensional virtual replica.

Cities such as New York, Shanghai and Helsinki are already using digital twins. In 2022, the city of Zurich launched its own version. Anyone can use it to measure the height of buildings, determine the shadows they cast and take a look into the future to see how Switzerland’s largest city might develop. Traffic congestion, a housing shortage and higher energy demands are becoming pressing issues in Switzerland, where 74% of the population already lives in urban areas.

But updating and managing digital twins will become more complex as population densities and the levels of detail increase, according to architect and urban designer Aurel von Richthofen of the consultancy Arup.

The world’s current urban planning models are like “individual silos” where “data cannot be shared, which makes urban planning not as efficient as we expect it to be”, said von Richthofen at a recent event hosted by the Swiss innovation network Swissnex. …

The underlying data is key to whether a digital twin city is effective. But getting access to quality data from different organisations is extremely difficult. Sensors, drones and mobile devices may collect data in real time. But the data tend to be organised around different knowledge domains – such as land use, building control, transport or ecology – each with its own data collection culture and physical models…(More)”
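The scenario-testing workflow the article describes, trying a signal-timing change in the virtual replica before touching the street, can be illustrated with a toy model. Everything below is an illustrative assumption rather than anything from the article: the timings, the arrival rates, and the use of the deterministic term of Webster's classic fixed-time signal delay formula stand in for a real city's far richer twin.

```python
# Toy "digital twin" experiment: estimate average vehicle delay at one
# intersection under two candidate traffic-light timings, so the worse
# option is rejected virtually instead of on the street.

def average_delay(green_seconds: float, cycle_seconds: float,
                  arrivals_per_second: float,
                  departures_per_second: float) -> float:
    """Average delay per vehicle (seconds) under a fixed-time signal,
    using the deterministic (uniform-arrivals) term of Webster's formula."""
    green_ratio = green_seconds / cycle_seconds
    saturation = arrivals_per_second / (departures_per_second * green_ratio)
    if saturation >= 1.0:
        raise ValueError("demand exceeds capacity; queue grows without bound")
    return (cycle_seconds * (1 - green_ratio) ** 2) / (2 * (1 - green_ratio * saturation))

# Compare two hypothetical timings in the model, not in the real city.
current = average_delay(green_seconds=30, cycle_seconds=90,
                        arrivals_per_second=0.2, departures_per_second=1.0)
proposed = average_delay(green_seconds=40, cycle_seconds=90,
                         arrivals_per_second=0.2, departures_per_second=1.0)
print(f"current timing:  {current:.1f} s average delay")
print(f"proposed timing: {proposed:.1f} s average delay")
```

A real twin replaces this closed-form stub with calibrated simulation, but the decision loop, change a parameter, re-run, compare outcomes, is the same.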

AI as a Public Good: Ensuring Democratic Control of AI in the Information Space


Report by the Forum on Information and Democracy: “…The report outlines key recommendations to governments, the industry and relevant stakeholders, notably:

  • Foster the creation of a tailored certification system for AI companies inspired by the success of the Fair Trade certification system.
  • Establish standards governing content authenticity and provenance, including for author authentication.
  • Implement a comprehensive legal framework that clearly defines the rights of individuals, including the right to be informed, to receive an explanation, to challenge a machine-generated outcome, and to non-discrimination.
  • Provide users with an easy, user-friendly way to choose alternative recommender systems that do not optimize for engagement but instead rank content in support of positive individual and societal outcomes, such as reliable information, bridging content or diversity of information.
  • Set up a participatory process to determine the rules and criteria guiding dataset provenance and curation, human labeling for AI training, alignment, and red-teaming to build inclusive, non-discriminatory and transparent AI systems…(More)”.
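The alternative-recommender recommendation above could, in its simplest form, look like a re-ranker that scores items by source reliability and penalizes topic repetition instead of maximizing predicted engagement. This is a hypothetical sketch, not the report's specification: the `Item` fields, the reliability signal, and the `diversity_weight` are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    topic: str
    engagement: float   # predicted clicks: deliberately NOT used for ranking
    reliability: float  # assumed source-trust score in [0, 1]

def rerank(items: list[Item], diversity_weight: float = 0.3) -> list[Item]:
    """Greedy re-ranking: repeatedly pick the most reliable remaining item,
    penalizing topics already shown to keep the feed diverse."""
    ranked: list[Item] = []
    seen_topics: set[str] = set()
    pool = list(items)
    while pool:
        def score(it: Item) -> float:
            penalty = diversity_weight if it.topic in seen_topics else 0.0
            return it.reliability - penalty
        best = max(pool, key=score)
        pool.remove(best)
        seen_topics.add(best.topic)
        ranked.append(best)
    return ranked

feed = rerank([
    Item("Viral rumor", "politics", engagement=0.9, reliability=0.2),
    Item("Fact-checked report", "politics", engagement=0.4, reliability=0.9),
    Item("Local weather", "weather", engagement=0.3, reliability=0.8),
])
print([it.title for it in feed])
# The high-engagement, low-reliability item no longer leads the feed.
```

Production systems would learn these signals rather than hard-code them; the point is only that the objective being optimized is a design choice.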

The AI project pushing local languages to replace French in Mali’s schools


Article by Annie Risemberg and Damilare Dosunmu: “For the past six months, Alou Dembele, a 27-year-old engineer and teacher, has spent his afternoons reading storybooks with children in the courtyard of a community school in Mali’s capital city, Bamako. The books are written in Bambara — Mali’s most widely spoken language — and include colorful pictures and stories based on local culture. Dembele has over 100 Bambara books to pick from — an unimaginable educational resource just a year ago.

From 1960 to 2023, French was Mali’s official language. But in June last year, the military government dropped it in favor of 13 local languages, creating a desperate need for new educational materials.

Artificial intelligence came to the rescue: RobotsMali, a government-backed initiative, used tools like ChatGPT, Google Translate, and the free-to-use image-maker Playground to create a pool of 107 books in Bambara in less than a year. Volunteer teachers, like Dembele, distribute them through after-school classes. Within a year, the books have reached over 300 elementary school kids, according to RobotsMali’s co-founder, Michael Leventhal. They are not only helping bridge the gap created after French was dropped but could also be effective in helping children learn better, experts told Rest of World…(More)”.

Mirror, Mirror, on the Wall, Who’s the Fairest of Them All?


Paper by Alice Xiang: “Debates in AI ethics often hinge on comparisons between AI and humans: which is more beneficial, which is more harmful, which is more biased, the human or the machine? These questions, however, are a red herring. They ignore what is most interesting and important about AI ethics: AI is a mirror. If a person standing in front of a mirror asked you, “Who is more beautiful, me or the person in the mirror?” the question would seem ridiculous. Sure, depending on the angle, lighting, and personal preferences of the beholder, the person or their reflection might appear more beautiful, but the question is moot. AI reflects patterns in our society, just and unjust, and the worldviews of its human creators, fair or biased. The question then is not which is fairer, the human or the machine, but what can we learn from this reflection of our society and how can we make AI fairer? This essay discusses the challenges to developing fairer AI, and how they stem from this reflective property…(More)”.

AI doomsayers funded by billionaires ramp up lobbying


Article by Brendan Bordelon: “Two nonprofits funded by tech billionaires are now directly lobbying Washington to protect humanity against the alleged extinction risk posed by artificial intelligence — an escalation critics see as a well-funded smokescreen to head off regulation and competition.

The similarly named Center for AI Policy and Center for AI Safety both registered their first lobbyists in late 2023, raising the profile of a sprawling influence battle that’s so far been fought largely through think tanks and congressional fellowships.

Each nonprofit spent close to $100,000 on lobbying in the last three months of the year. The groups draw money from organizations with close ties to the AI industry like Open Philanthropy, financed by Facebook co-founder Dustin Moskovitz, and Lightspeed Grants, backed by Skype co-founder Jaan Tallinn.

Their message includes policies like CAIP’s call for legislation that would hold AI developers liable for “severe harms,” require permits to develop “high-risk” systems and empower regulators to “pause AI projects if they identify a clear emergency.”

“[The] risks of AI remain neglected — and are in danger of being outpaced by the rapid rate of AI development,” Nathan Calvin, senior policy counsel at the CAIS Action Fund, said in an email.

Detractors see the whole enterprise as a diversion. By focusing on apocalyptic scenarios, critics claim, these well-funded groups are raising barriers to entry for smaller AI firms and shifting attention away from more immediate and concrete problems with the technology, such as its potential to eliminate jobs or perpetuate discrimination.

Until late last year, organizations working to focus Washington on AI’s existential threat tended to operate under the radar. Instead of direct lobbying, groups like Open Philanthropy funded AI staffers in Congress and poured money into key think tanks. The RAND Corporation, an influential think tank that played a key role in drafting President Joe Biden’s October executive order on AI, received more than $15 million from Open Philanthropy last year…(More)”.

Rethinking Privacy in the AI Era: Policy Provocations for a Data-Centric World


Paper by Jennifer King, Caroline Meinhardt: “In this paper, we present a series of arguments and predictions about how existing and future privacy and data protection regulation will impact the development and deployment of AI systems.

➜ Data is the foundation of all AI systems. Going forward, AI development will continue to increase developers’ hunger for training data, fueling an even greater race for data acquisition than we have already seen in past decades.

➜ Largely unrestrained data collection poses unique risks to privacy that extend beyond the individual level—they aggregate to pose societal-level harms that cannot be addressed through the exercise of individual data rights alone.

➜ While existing and proposed privacy legislation, grounded in the globally accepted Fair Information Practices (FIPs), implicitly regulates AI development, it is not sufficient to address the data acquisition race or the resulting individual and systemic privacy harms.

➜ Even legislation that contains explicit provisions on algorithmic decision-making and other forms of AI does not provide the data governance measures needed to meaningfully regulate the data used in AI systems.

➜ We present three suggestions for how to mitigate the risks to data privacy posed by the development and adoption of AI:

1. Denormalize data collection by default by shifting from opt-out to opt-in data collection. Data collectors must facilitate true data minimization through “privacy by default” strategies and adopt technical standards and infrastructure for meaningful consent mechanisms.

2. Focus on the AI data supply chain to improve privacy and data protection. Ensuring dataset transparency and accountability across the entire life cycle must be a focus of any regulatory system that addresses data privacy.

3. Flip the script on the creation and management of personal data. Policymakers should support the development of new governance mechanisms and technical infrastructure (e.g., data intermediaries and data permissioning infrastructure) to support and automate the exercise of individual data rights and preferences…(More)”.
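The first suggestion, collection that is off unless a person explicitly opts in, reduces to a simple invariant: the absence of a consent record means "no". A minimal sketch of that invariant, assuming a hypothetical `ConsentLedger` API rather than any real consent-management framework:

```python
# "Privacy by default": every (user, purpose) pair starts opted OUT,
# and data may be collected only after an explicit, per-purpose opt-in.

class ConsentLedger:
    def __init__(self) -> None:
        self._granted: set[tuple[str, str]] = set()  # (user_id, purpose)

    def opt_in(self, user_id: str, purpose: str) -> None:
        self._granted.add((user_id, purpose))

    def allows(self, user_id: str, purpose: str) -> bool:
        # Opt-in default: no record means collection is NOT permitted.
        return (user_id, purpose) in self._granted

ledger = ConsentLedger()
print(ledger.allows("u1", "analytics"))   # False: opted out by default
ledger.opt_in("u1", "analytics")
print(ledger.allows("u1", "analytics"))   # True only after explicit opt-in
print(ledger.allows("u1", "ads"))         # False: consent is per purpose
```

Note the contrast with an opt-out design, where the same `allows` check would return True until the user acted; here the burden sits with the collector, which is the paper's point.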