The not-so-silent type: Vulnerabilities across keyboard apps reveal keystrokes to network eavesdroppers


Report by Jeffrey Knockel, Mona Wang, and Zoë Reichert: “Typing logographic languages such as Chinese is more difficult than typing alphabetic languages, where each letter can be represented by one key. There is no way to fit the tens of thousands of Chinese characters that exist onto a single keyboard. Despite this obvious challenge, technologies have developed which make typing in Chinese possible. To enable the input of Chinese characters, a writer will generally use a keyboard app with an “Input Method Editor” (IME). IMEs offer a variety of approaches to inputting Chinese characters, including via handwriting, voice, and optical character recognition (OCR). One popular phonetic input method is Zhuyin, and shape- or stroke-based input methods such as Cangjie or Wubi are commonly used as well. However, the most popular way of typing in Chinese, used by nearly 76% of mainland Chinese keyboard users, is the pinyin method, which is based on the pinyin romanization of Chinese characters.
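The core of a pinyin IME, as described above, is converting a typed romanization into candidate Chinese characters. A minimal sketch of that lookup, assuming a toy hand-built candidate table (real IMEs rank candidates with statistical language models over far larger dictionaries):

```python
# Toy pinyin-to-character candidate lookup. The table below is purely
# illustrative; it is not drawn from any of the keyboard apps in the report.
CANDIDATES = {
    "ni": ["你", "尼", "泥"],
    "hao": ["好", "号", "毫"],
    "nihao": ["你好"],
}

def lookup(pinyin: str) -> list[str]:
    """Return candidate characters for a typed pinyin string."""
    return CANDIDATES.get(pinyin, [])

print(lookup("nihao"))  # → ['你好']
```

Longer inputs are where local tables run out: predicting characters for a whole typed phrase is what pushes IMEs toward the cloud-based prediction services discussed next.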

All of the keyboard apps we analyze in this report fall into the category of input method editors (IMEs) that offer pinyin input. These keyboard apps are particularly interesting because they have grown to accommodate the challenge of allowing users to type Chinese characters quickly and easily. While many keyboard apps operate locally, solely within a user’s device, IME-based keyboard apps often have cloud features which enhance their functionality. Because of the complexities of predicting which characters a user may want to type next, especially in logographic languages like Chinese, IMEs often offer “cloud-based” prediction services which reach out over the network. Enabling “cloud-based” features in these apps means that longer strings of syllables that users type will be transmitted to servers elsewhere. As many have previously pointed out, “cloud-based” keyboards and input methods can function as vectors for surveillance and essentially behave as keyloggers. As the content of what users type travels from their device to the cloud, it is additionally vulnerable to network attackers if it is not properly secured. This report is not about how operators of cloud-based IMEs read users’ keystrokes, which is a phenomenon that has already been extensively studied and documented. This report is primarily concerned with the issue of protecting this sensitive data from network eavesdroppers…(More)”.
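To see why an unsecured cloud-prediction request is equivalent to a keylogger for anyone on the network path, consider this sketch of a hypothetical cloud-IME query sent in cleartext (the endpoint and parameter names are invented for illustration; the report itself examines apps whose home-grown encryption failed, which has the same practical effect):

```python
from urllib.parse import urlencode, parse_qs, urlsplit

# Hypothetical cloud-IME prediction request: the app sends the user's typed
# pinyin to a server. Both the hostname and the "q" parameter are invented.
typed = "woxiangmaiyiliangche"  # pinyin for "I want to buy a car"
request_url = "http://ime.example.com/predict?" + urlencode({"q": typed})

# A passive eavesdropper on the network path sees the full URL in cleartext
# and recovers the keystrokes without any cryptanalysis at all.
observed_query = urlsplit(request_url).query
recovered = parse_qs(observed_query)["q"][0]
print(recovered)  # → woxiangmaiyiliangche
```

Sending the same request over TLS (an `https://` URL) would hide the query string from a passive observer, which is why the report's concern is apps that either skipped transport encryption or substituted flawed custom schemes for it.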

A New National Purpose: Harnessing Data for Health


Report by the Tony Blair Institute: “We are at a pivotal moment where the convergence of large health and biomedical data sets, artificial intelligence and advances in biotechnology is set to revolutionise health care, drive economic growth and improve the lives of citizens. And the UK has strengths in all three areas. The immense potential of the UK’s health-data assets, from the NHS to biobanks and genomics initiatives, can unlock new diagnostics and treatments, deliver better and more personalised care, prevent disease and ultimately help people live longer, healthier lives.

However, realising this potential is not without its challenges. The complex and fragmented nature of the current health-data landscape, coupled with legitimate concerns around privacy and public trust, has made for slow progress. The UK has had a tendency to provide short-term funding across multiple initiatives, which has led to an array of individual projects – many of which have struggled to achieve long-term sustainability and deliver tangible benefits to patients.

To overcome these challenges, it will be necessary to be bold and imaginative. We must look for ways to leverage the unique strengths of the NHS, such as its nationwide reach and cradle-to-grave data coverage, to create a health-data ecosystem that is much more than the sum of its many parts. This will require us to think differently about how we collect, manage and utilise health data, and to create new partnerships and models of collaboration that break down traditional silos and barriers. It will mean treating data as a key health resource and managing it accordingly.

One model to do this is the proposed sovereign National Data Trust (NDT) – an endeavour to streamline access to and curation of the UK’s valuable health-data assets…(More)”.

AI-enabled Peacekeeping Tech for the Digital Age


Springwise: “There are countless organisations and government agencies working to resolve conflicts around the globe, but they often lack the tools to know if they are making the right decisions. Project Didi is developing those technological tools – helping peacemakers plan appropriately and understand the impact of their actions in real time.

Project Didi Co-founder and CCO Gabe Freund explained to Springwise that the project uses machine learning, big data, and AI to analyse conflicts and “establish a new standard for best practice when it comes to decision-making in the world of peacebuilding.”

In essence, the company is attempting to analyse the many factors that are involved in conflict in order to identify a ‘ripe moment’ when both parties will be willing to negotiate for peace. The tools can track the impact and effect of all actors across a conflict. This allows them to identify and create connections between organisations and people who are doing similar work, amplifying their effects…(More)”. See also: Project Didi (Kluz Prize).

On the Meaning of Community Consent in a Biorepository Context


Article by Astha Kapoor, Samuel Moore, and Megan Doerr: “Biorepositories, vital for medical research, collect and store human biological samples and associated data for future use. However, our reliance solely on the individual consent of data contributors for biorepository data governance is becoming inadequate. Big data analysis focuses on large-scale behaviors and patterns, shifting focus from singular data points to identifying data “journeys” relevant to a collective. The individual becomes a small part of the analysis, with the harms and benefits emanating from the data occurring at an aggregated level.

Community refers to a particular qualitative aspect of a group of people that is not well captured by quantitative measures in biorepositories. This is not an excuse to dodge the question of how to account for communities in a biorepository context; rather, it shows that a framework is needed for defining different types of community that may be approached from a biorepository perspective. 

Engaging with communities in biorepository governance presents several challenges. Moving away from a purely individualized understanding of governance towards a more collectivizing approach necessitates an appreciation of the messiness of group identity, its ephemerality, and the conflicts entailed therein. So while community implies a certain degree of homogeneity (i.e., that all members of a community share something in common), it is important to understand that people can simultaneously consider themselves a member of a community while disagreeing with many of its members, the values the community holds, or the positions for which it advocates. The complex nature of community participation therefore requires proper treatment for it to be useful in a biorepository governance context…(More)”.

Multiple Streams and Policy Ambiguity


Book by Rob A. DeLeo, Reimut Zohlnhöfer and Nikolaos Zahariadis: “The last decade has seen a proliferation of research bolstering the theoretical and methodological rigor of the Multiple Streams Framework (MSF), one of the most prolific theories of agenda-setting and policy change. This Element sets out to address some of the most prominent criticisms of the theory, including the lack of empirical research and the inconsistent operationalization of key concepts, by developing the first comprehensive guide for conducting MSF research. It begins by introducing the MSF, including key theoretical constructs and hypotheses. It then presents the most important theoretical extensions of the framework and articulates a series of best practices for operationalizing, measuring, and analyzing MSF concepts. It closes by exploring existing gaps in MSF research and articulating fruitful areas of future research…(More)”.

How Open-Source Software Empowers Nonprofits And The Global Communities They Serve


Article by Steve Francis: “One particular area where this challenge is evident is climate. Thousands of nonprofits strive to address the effects of a changing climate and its impact on communities worldwide. Headlines often go to big organizations doing high-profile work (planting trees, for instance) in well-known places. Money goes to large-scale commercial agriculture or new technologies — because that’s where profits are most easily made. But thousands of other communities of small farmers that aren’t as visible or profitable need help too. These communities come together to tackle a number of interrelated problems: climate, soil health and productivity, biodiversity and human health and welfare. They envision a more sustainable future.

The reality is that software is crafted to meet market needs, but these communities don’t represent a profitable market. Every major industry has its own software applications and a network of consultants to tune that software for optimal performance. A farm cooperative in less developed parts of the world seeking to maximize value for sustainably harvested produce faces very different challenges than do any of these business users. Often they need to collect and manipulate data in the field, on whatever mobile device they have, with little or no connectivity. Modern software systems are rarely designed to operate in such an environment; they assume the latest devices and continuous connectivity…(More)”.

Building a trauma-informed algorithmic assessment toolkit


Report by Suvradip Maitra, Lyndal Sleep, Suzanna Fay and Paul Henman: “Artificial intelligence (AI) and automated processes hold considerable promise to enhance human wellbeing by fully automating or co-producing services with human service providers. Concurrently, if not well considered, automation also provides ways to generate harms at scale and speed. To address this challenge, much discussion to date has focused on principles of ethical AI and accountable algorithms, with a groundswell of early work seeking to translate these into practical frameworks and processes to ensure such principles are enacted. AI risk assessment frameworks to detect and evaluate possible harms are one dominant approach, as is a growing body of AI audit frameworks, with concomitant emerging governmental and organisational regulatory settings and associated professionals.

The research outlined in this report took a different approach. Building on work in social services on trauma-informed practice, researchers identified key principles and a practical framework that framed AI design, development and deployment as a reflective, constructive exercise, resulting in algorithmically supported services that are cognisant and inclusive of the diversity of human experience, and particularly of those who have experienced trauma. This study resulted in a practical, co-designed, piloted Trauma Informed Algorithmic Assessment Toolkit.

This Toolkit has been designed to assist organisations in their use of automation in service delivery at any stage of their automation journey: ideation; design; development; piloting; deployment or evaluation. While of particular use for social service organisations working with people who may have experienced past trauma, the tool will be beneficial for any organisation wanting to ensure safe, responsible and ethical use of automation and AI…(More)”.

Applying Social and Behavioral Science to Federal Policies and Programs to Deliver Better Outcomes


The White House: “Human behavior is a key component of every major national and global challenge. Social and behavioral science examines if, when, and how people’s actions and interactions influence decisions and outcomes. Understanding human behavior through social and behavioral science is vitally important for creating federal policies and programs that open opportunities for everyone.

Today, the Biden-Harris Administration shares the Blueprint for the Use of Social and Behavioral Science to Advance Evidence-Based Policymaking. This blueprint recommends actions for agencies across the federal government to effectively leverage social and behavioral science in improving policymaking to deliver better outcomes and opportunities for people all across America. These recommendations include specific actions for agencies, such as considering social and behavioral insights early in policy or program development. The blueprint also lays out broader opportunities for agencies, such as ensuring agencies have a sufficient number of staff with social and behavioral science expertise.  

The blueprint includes nearly a hundred examples of how social and behavioral science is already used to make real progress on our highest priorities, including promoting safe, equitable, and engaged communities; protecting the environment and promoting climate innovation; advancing economic prosperity and the future of the workforce; enhancing the health outcomes of all Americans; rebuilding our infrastructure and building for tomorrow; and promoting national defense and international security. Social and behavioral science informs the conceptualization, development, implementation, dissemination, and evaluation of interventions, programs, and policies. Policymakers and social scientists can examine data about how government services reach people or measure the effectiveness of a program in assisting a particular community. Using this information, we can understand why programs sometimes fall short in delivering their intended benefits or why other programs are highly successful in delivering benefits. These approaches also help us design better policies and scale proven successful interventions to benefit the entire country…(More)”.

Empowered Mini-Publics: A Shortcut or Democratically Legitimate?


Paper by Shao Ming Lee: “Contemporary mini-publics involve randomly selected citizens deliberating and eventually tackling thorny issues. Yet, the usage of mini-publics in creating public policy has come under criticism, of which a more persuasive strand is elucidated by eminent philosopher Cristina Lafont, who argues that mini-publics with binding decision-making powers (or ‘empowered mini-publics’) are an undemocratic ‘shortcut’ and deliberative democrats thus cannot use empowered mini-publics for shaping public policies. This paper aims to serve as a nuanced defense of empowered mini-publics against Lafont’s claims. I argue against her claims by explicating how participants of an empowered mini-public remain ordinary, accountable, and therefore connected to the broader public in a democratically legitimate manner. I further critique Lafont’s own proposals for non-empowered mini-publics and judicial review as failing to satisfy her own criteria for democratic legitimacy in a self-defeating manner and relying on a double standard. In doing so, I show how empowered mini-publics are not only democratic but can thus serve to expand democratic deliberation—a goal Lafont shares but relegates to non-empowered mini-publics…(More)”.

AI for social good: Improving lives and protecting the planet


McKinsey Report: “…Challenges in scaling AI for social-good initiatives are persistent and tough. Seventy-two percent of the respondents to our expert survey observed that most efforts to deploy AI for social good to date have focused on research and innovation rather than adoption and scaling. Fifty-five percent of grants for AI research and deployment across the SDGs are $250,000 or smaller, which is consistent with a focus on targeted research or smaller-scale deployment, rather than large-scale expansion. Aside from funding, the biggest barriers to scaling AI continue to be data availability, accessibility, and quality; AI talent availability and accessibility; organizational receptiveness; and change management. More on these topics can be found in the full report.

While overcoming these challenges, organizations should also be aware of strategies to address the range of risks, including inaccurate outputs, biases embedded in the underlying training data, the potential for large-scale misinformation, and malicious influence on politics and personal well-being. As we have noted in multiple recent articles, AI tools and techniques can be misused, even if the tools were originally designed for social good. Experts identified the top risks as impaired fairness, malicious use, and privacy and security concerns, followed by explainability (Exhibit 2). Respondents from not-for-profits expressed relatively more concern about misinformation, talent issues such as job displacement, and effects of AI on economic stability compared with their counterparts at for-profits, who were more often concerned with IP infringement…(More)”