What are location services and how do they work?


Article by Douglas Crawford: “Location services refer to a combination of technologies used in devices like smartphones and computers that use data from your device’s GPS, WiFi, mobile (cellular networks), and sometimes even Bluetooth connections to determine and track your geographic location.

This information can be accessed by your operating system (OS) and the apps installed on your device. In many cases, this allows them to perform their purpose correctly or otherwise deliver useful content and features. 

For example, navigation/map, weather, ridesharing (such as Uber or Lyft), and health and fitness tracking apps require location services to perform their functions, while dating, travel, and social media apps can offer additional functionality with access to your device’s location services (such as being able to locate a Tinder match or see recommendations for nearby restaurants).

There’s no doubt location services (and the apps that use them) can be useful. However, the technology can also be (and is) abused by apps to track your movements. The apps then usually sell this information to advertising and analytics companies that combine it with other data to create a profile of you, which they can then use to sell ads.

Unfortunately, this behavior is not limited to “rogue” apps. Apps usually regarded as legitimate, including almost all Google apps, Facebook, Instagram, and others, routinely send detailed and highly sensitive location details back to their developers by default. And it’s not just apps — operating systems themselves, such as Google’s Android and Microsoft Windows, also closely track your movements using location services.

This makes weighing the undeniable usefulness of location services against the need to maintain a basic level of privacy a tricky balancing act. However, because location services are so easy to abuse, all operating systems include built-in safeguards that give you some control over their use.

In this article, we’ll look at how location services work…(More)”.
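
To make the WiFi side of this concrete: positioning systems often look up nearby access points in a database of known locations and average their coordinates, weighting stronger signals more heavily. The sketch below illustrates that weighted-centroid idea in Python; the coordinates, signal values, and weighting scheme are illustrative assumptions, not how any particular OS implements it.

```python
# Illustrative weighted-centroid WiFi positioning. All AP coordinates and RSSI
# values are hypothetical; real location services fuse GPS, cell, WiFi, and
# Bluetooth data with far more sophisticated models.

def estimate_position(access_points):
    """access_points: (lat, lon, rssi_dbm) tuples for APs with known locations."""
    # RSSI is negative dBm; convert to a positive linear weight so that
    # stronger (less negative) signals pull the estimate harder.
    weights = [10 ** (rssi / 10) for _, _, rssi in access_points]
    total = sum(weights)
    lat = sum(ap[0] * w for ap, w in zip(access_points, weights)) / total
    lon = sum(ap[1] * w for ap, w in zip(access_points, weights)) / total
    return lat, lon

# Hypothetical scan result: three access points found in a positioning database.
scan = [
    (51.5007, -0.1246, -45),  # strong signal, so it dominates the estimate
    (51.5010, -0.1260, -70),
    (51.4995, -0.1235, -80),
]
print(estimate_position(scan))  # lands very close to the first AP
```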

The not-so-silent type: Vulnerabilities across keyboard apps reveal keystrokes to network eavesdroppers


Report by Jeffrey Knockel, Mona Wang, and Zoë Reichert: “Typing logographic languages such as Chinese is more difficult than typing alphabetic languages, where each letter can be represented by one key. There is no way to fit the tens of thousands of Chinese characters that exist onto a single keyboard. Despite this obvious challenge, technologies have been developed that make typing in Chinese possible. To enable the input of Chinese characters, a writer will generally use a keyboard app with an “Input Method Editor” (IME). IMEs offer a variety of approaches to inputting Chinese characters, including via handwriting, voice, and optical character recognition (OCR). One popular phonetic input method is Zhuyin, and shape- or stroke-based input methods such as Cangjie or Wubi are commonly used as well. However, the most popular way of typing in Chinese, used by nearly 76% of mainland Chinese keyboard users, is the pinyin method, which is based on the pinyin romanization of Chinese characters.

All of the keyboard apps we analyze in this report fall into the category of input method editors (IMEs) that offer pinyin input. These keyboard apps are particularly interesting because they have grown to accommodate the challenge of allowing users to type Chinese characters quickly and easily. While many keyboard apps operate locally, solely within a user’s device, IME-based keyboard apps often have cloud features which enhance their functionality. Because of the complexities of predicting which characters a user may want to type next, especially in logographic languages like Chinese, IMEs often offer “cloud-based” prediction services which reach out over the network. Enabling “cloud-based” features in these apps means that longer strings of syllables that users type will be transmitted to servers elsewhere. As many have previously pointed out, “cloud-based” keyboards and input methods can function as vectors for surveillance and essentially behave as keyloggers. While the content of what users type is traveling from their device to the cloud, it is additionally vulnerable to network attackers if not properly secured. This report is not about how operators of cloud-based IMEs read users’ keystrokes, which is a phenomenon that has already been extensively studied and documented. This report is primarily concerned with the issue of protecting this sensitive data from network eavesdroppers…(More)”.
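
The exposure the report describes is easy to see in miniature: a cloud-assisted IME ships each partial syllable string off-device to fetch candidate characters. The Python sketch below uses a hypothetical endpoint and payload format (not any vendor’s real protocol) to show why such requests amount to a keystroke stream, and why the transport has to be properly encrypted.

```python
# Minimal sketch of a cloud pinyin prediction request. The endpoint and JSON
# shape are hypothetical. The key observation: the full string typed so far is
# sent on (nearly) every keystroke, so anything short of correctly implemented
# transport encryption lets a network eavesdropper reconstruct the input.

import json
import urllib.request

CLOUD_ENDPOINT = "https://ime.example.com/predict"  # hypothetical server

def cloud_predict(pinyin_so_far: str) -> list:
    """Send the syllables typed so far; return candidate Chinese characters."""
    payload = json.dumps({"input": pinyin_so_far}).encode("utf-8")
    req = urllib.request.Request(
        CLOUD_ENDPOINT,  # an http:// URL here would expose keystrokes in cleartext
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return json.load(resp)["candidates"]

# As a user types "nihao", the IME might call, in sequence:
#   cloud_predict("ni"), cloud_predict("nih"), ..., cloud_predict("nihao")
# which is exactly the keylogger-shaped traffic pattern the report examines.
```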

The citizen’s panel on AI issues its report


Belgian presidency of the European Union: “Randomly select 60 citizens from all four corners of Belgium. Give them an exciting topic to explore. Add a few local players. Season with participation experts. Bake for three weekends at the Egmont Palace conference centre. And you’ll end up with the rich and ambitious views of citizens on the future of artificial intelligence (AI) in the European Union.

This is the recipe that has been in progress since February 2024, led by the Belgian presidency of the European Union, with the ambition of involving citizens in this strategic field and enriching the debate on AI, which has been particularly lively in recent months as part of the drafting of the AI Act recently adopted by the European Parliament.

And the initiative really cut the mustard, as the 60 citizens worked enthusiastically, overcoming their apprehensions about a subject as complex as AI. In a spirit of collective intelligence, they dove right into the subject, listening to speakers from academia, government, civil society and the private sector, and sharing their experiences and knowledge. Some of them were just discovering AI, while others were already using it. They turned this diversity into a strength, enabling them to write a report on citizens’ views that reflects the various aspirations of the Belgian population.

At the end of the three weekends, the citizens almost unanimously adopted a precise and ambitious report containing nine key messages focusing on the need for a responsible, ambitious and beneficial approach to AI, ensuring that it serves the interests of all and leaves no one behind…(More)”

Quantum Policy


A Primer by Jane Bambauer: “Quantum technologies have received billions in private and public investments and have caused at least some ambient angst about how they will disrupt an already fast-moving economy and uncertain social order. Some consulting firms are already offering “quantum readiness” services, even though the potential applications for quantum computing, networking, and sensing technologies are still somewhat speculative, in part because the impact of these technologies may be mysterious and profound. Law and policy experts have begun to offer advice about how the development of quantum technologies should be regulated through ethical norms or laws. This report builds on the available work by providing a brief summary of the applications that seem potentially viable to researchers and companies and cataloging the effects—both positive and negative—that these applications may have on industry, consumers, and society at large.

As the report will show, quantum technologies (like many information technologies that have come before) will produce benefits and risks and will inevitably require developers and regulators to make trade-offs between several legitimate but conflicting goals. Some of these policy decisions can be made in advance, but some will have to be reactive in nature, as unexpected risks and benefits will emerge…(More)”.

A New National Purpose: Harnessing Data for Health


Report by the Tony Blair Institute: “We are at a pivotal moment where the convergence of large health and biomedical data sets, artificial intelligence and advances in biotechnology is set to revolutionise health care, drive economic growth and improve the lives of citizens. And the UK has strengths in all three areas. The immense potential of the UK’s health-data assets, from the NHS to biobanks and genomics initiatives, can unlock new diagnostics and treatments, deliver better and more personalised care, prevent disease and ultimately help people live longer, healthier lives.

However, realising this potential is not without its challenges. The complex and fragmented nature of the current health-data landscape, coupled with legitimate concerns around privacy and public trust, has made for slow progress. The UK has had a tendency to provide short-term funding across multiple initiatives, which has led to an array of individual projects – many of which have struggled to achieve long-term sustainability and deliver tangible benefits to patients.

To overcome these challenges, it will be necessary to be bold and imaginative. We must look for ways to leverage the unique strengths of the NHS, such as its nationwide reach and cradle-to-grave data coverage, to create a health-data ecosystem that is much more than the sum of its many parts. This will require us to think differently about how we collect, manage and utilise health data, and to create new partnerships and models of collaboration that break down traditional silos and barriers. It will mean treating data as a key health resource and managing it accordingly.

One model to do this is the proposed sovereign National Data Trust (NDT) – an endeavour to streamline access to and curation of the UK’s valuable health-data assets…(More)”.

Access to Public Information, Open Data, and Personal Data Protection: How do they dialogue with each other?


Report by Open Data Charter and Civic Compass: “In this study, we examine data protection policies in the European Union and Latin America alongside initiatives concerning open government data and access to public information. To do so, we analyse the regulatory landscape, international rankings, and commitments relating to each right in four countries from each region. Additionally, we explore how these frameworks interact with one another, considering their respective stances while delving into existing tensions and exploring possibilities for achieving a balanced approach…(More)”.

AI-enabled Peacekeeping Tech for the Digital Age


Springwise: “There are countless organisations and government agencies working to resolve conflicts around the globe, but they often lack the tools to know if they are making the right decisions. Project Didi is developing those technological tools – helping peacemakers plan appropriately and understand the impact of their actions in real time.

Project Didi Co-founder and CCO Gabe Freund explained to Springwise that the project uses machine learning, big data, and AI to analyse conflicts and “establish a new standard for best practice when it comes to decision-making in the world of peacebuilding.”

In essence, the company is attempting to analyse the many factors that are involved in conflict in order to identify a ‘ripe moment’ when both parties will be willing to negotiate for peace. The tools can track the impact and effect of all actors across a conflict. This allows them to identify and create connections between organisations and people who are doing similar work, amplifying their effects…(More)” See also: Project Didi (Kluz Prize)

When Online Content Disappears


Pew Research: “The internet is an unimaginably vast repository of modern life, with hundreds of billions of indexed webpages. But even as users across the world rely on the web to access books, images, news articles and other resources, this content sometimes disappears from view…

  • A quarter of all webpages that existed at one point between 2013 and 2023 are no longer accessible, as of October 2023. In most cases, this is because an individual page was deleted or removed on an otherwise functional website.
  • For older content, this trend is even starker. Some 38% of webpages that existed in 2013 are not available today, compared with 8% of pages that existed in 2023.

This “digital decay” occurs in many different online spaces. We examined the links that appear on government and news websites, as well as in the “References” section of Wikipedia pages as of spring 2023. This analysis found that:

  • 23% of news webpages contain at least one broken link, as do 21% of webpages from government sites. News sites with high traffic and those with less traffic are about equally likely to contain broken links. Local-level government webpages (those belonging to city governments) are especially likely to have broken links.
  • 54% of Wikipedia pages contain at least one link in their “References” section that points to a page that no longer exists...(More)”.
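
For readers who want intuition for how such figures are produced, here is a deliberately simplified link checker in Python: fetch a page, extract its outbound links, and test whether each still resolves. Pew’s actual methodology (snapshot crawls, redirect handling, soft-404 detection) is more involved; this sketch only shows the basic mechanics.

```python
# Simplified broken-link check using only the standard library. Real studies
# must also handle redirects, rate limits, and "soft 404" pages that return
# 200 but no longer carry the original content.

from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

class LinkExtractor(HTMLParser):
    """Collect absolute http(s) links from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href and href.startswith(("http://", "https://")):
                self.links.append(href)

def is_broken(url: str) -> bool:
    """Treat 4xx/5xx responses and network failures as broken."""
    try:
        req = Request(url, method="HEAD", headers={"User-Agent": "link-check/0.1"})
        with urlopen(req, timeout=5):
            return False
    except (HTTPError, URLError, TimeoutError):
        return True

def broken_links(page_url: str) -> list:
    with urlopen(page_url, timeout=5) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = LinkExtractor()
    parser.feed(html)
    return [link for link in parser.links if is_broken(link)]

# Example: broken_links("https://example.com") returns the dead outbound links.
```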

Defining AI incidents and related terms


OECD Report: “As AI use grows, so do its benefits and risks. These risks can lead to actual harms (“AI incidents”) or potential dangers (“AI hazards”). Clear definitions are essential for managing and preventing these risks. This report proposes definitions for AI incidents and related terms. These definitions aim to foster international interoperability while providing flexibility for jurisdictions to determine the scope of AI incidents and hazards they wish to address…(More)”.
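
The incident/hazard distinction at the heart of the report can be made concrete in code. The sketch below is our illustration, not the OECD’s schema: the field names and the harm-based rule are assumptions chosen to mirror the definitions of actual versus potential harm.

```python
# Hedged illustration of the report's core distinction: an event involving an
# AI system is a "hazard" while harm is only potential, and an "incident" once
# actual harm has occurred. This schema is hypothetical, not the OECD's.

from dataclasses import dataclass, field
from enum import Enum

class EventType(Enum):
    HAZARD = "AI hazard"      # potential harm identified, none realised yet
    INCIDENT = "AI incident"  # actual harm has occurred

@dataclass
class AIEvent:
    system: str                  # which AI system was involved
    description: str             # what happened
    realised_harms: list = field(default_factory=list)  # documented actual harms

    @property
    def event_type(self) -> EventType:
        return EventType.INCIDENT if self.realised_harms else EventType.HAZARD

# A near-miss stays a hazard until harm is actually documented.
event = AIEvent("triage-model-v2", "misrouted urgent cases during a pilot")
assert event.event_type is EventType.HAZARD
event.realised_harms.append("delayed treatment for affected patients")
assert event.event_type is EventType.INCIDENT
```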

Building a trauma-informed algorithmic assessment toolkit


Report by Suvradip Maitra, Lyndal Sleep, Suzanna Fay, Paul Henman: “Artificial intelligence (AI) and automated processes provide considerable promise to enhance human wellbeing by fully automating or co-producing services with human service providers. Concurrently, if not well considered, automation also provides ways to generate harms at scale and speed. To address this challenge, much discussion to date has focused on principles of ethical AI and accountable algorithms, with a groundswell of early work seeking to translate these into practical frameworks and processes to ensure such principles are enacted. AI risk assessment frameworks to detect and evaluate possible harms are one dominant approach, as is a growing body of AI audit frameworks, with concomitant emerging governmental and organisational regulatory settings and associated professionals.

The research outlined in this report took a different approach. Building on work on trauma-informed practice in social services, researchers identified key principles and a practical framework that frame AI design, development and deployment as a reflective, constructive exercise, resulting in algorithmically supported services that are cognisant and inclusive of the diversity of human experience, particularly that of people who have experienced trauma. This study resulted in a practical, co-designed, piloted Trauma Informed Algorithmic Assessment Toolkit.

This Toolkit has been designed to assist organisations in their use of automation in service delivery at any stage of their automation journey: ideation; design; development; piloting; deployment or evaluation. While of particular use for social service organisations working with people who may have experienced past trauma, the tool will be beneficial for any organisation wanting to ensure safe, responsible and ethical use of automation and AI…(More)”.
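
As a rough picture of how a stage-keyed toolkit like this might be wired into an automation workflow, consider the sketch below. The six stages are taken from the report; every reflective question is a hypothetical placeholder, not the Toolkit’s actual content.

```python
# Hypothetical stage-keyed assessment scaffold. The stage names follow the
# report's automation journey; the questions are invented placeholders and
# not drawn from the Trauma Informed Algorithmic Assessment Toolkit itself.

CHECKLIST = {
    "ideation":    ["Could this automation re-traumatise the people it serves?"],
    "design":      ["Can users opt out or reach a human without penalty?"],
    "development": ["Does the training data encode past institutional harm?"],
    "piloting":    ["Are people with lived experience reviewing pilot outcomes?"],
    "deployment":  ["Is there a clear, safe path to contest automated decisions?"],
    "evaluation":  ["Are observed harms tracked and fed back into redesign?"],
}

def questions_for(stage: str) -> list:
    """Return the reflective questions to work through at a given stage."""
    if stage not in CHECKLIST:
        raise ValueError(f"unknown stage: {stage!r}")
    return CHECKLIST[stage]

for question in questions_for("design"):
    print(question)
```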