Participatory mapping as a social digital tool


Blog by María de los Ángeles Briones: “…we will use 14 different examples from different continents and contexts to explore the goals and methods used for participatory mapping as a social digital tool. Despite looking very different and coming from a range of cultural backgrounds, these case studies share a number of similarities.

Although the examples have different goals, we have identified four main focus areas: activism, conviviality, networking and urban planning. More localised mapping projects often had a focus on activism. We also see that maps are not isolated tools; they complement other communication tools and platforms.

The internet has transformed communications and networks across the globe – allowing for interconnectivity and scalability of information among and between different groups of society. This allows voices, regardless of their location, to be amplified and listened to by many other voices achieving collective goals. This has great potential in a global world where it is evident that top-down initiatives are not enough to handle many of the social needs that local people experience. However, though the internet makes sharing and collaborating between people easier, offline maps are still valuable, as shown in some of our examples.

What the different maps we explored have in common is that they are social digital tools. They are social because they are related to projects that seek to solve social needs; and they are digital because they are based on digital platforms that permit them to stay alive and be spread, shared and used. These characteristics also refer to their function and design.

A tool can be defined as a device or implement, especially one held in the hand, used to carry out a particular function. So when we speak of a tool there are four things involved: an actor, an object, a function and a purpose. Just as a hammer is a tool that a carpenter (actor) uses to hammer nails (function) and thus build something (purpose), we understand that social tools are used by one or more people to take actions whose final objective is to meet a social need…(More)”.

Big data for everyone


Article by Henrietta Howells: “Raw neuroimaging data require further processing before they can be used for scientific or clinical research. Traditionally, this could be accomplished with a single powerful computer. However, much greater computing power is required to analyze the large open-access cohorts that are increasingly being released to the community. And processing pipelines are inconsistently scripted, which can hinder reproducibility efforts. This creates a barrier for labs lacking access to sufficient resources or technological support, potentially excluding them from neuroimaging research. A paper by Hayashi and colleagues in Nature Methods offers a solution. They present https://brainlife.io, a freely available, web-based platform for secure neuroimaging data access, processing, visualization and analysis. It leverages ‘opportunistic computing’, which pools processing power from commercial and academic clouds, making it accessible to scientists worldwide. This is a step towards lowering the barriers for entry into big data neuroimaging research…(More)”.

“Data Commons”: Under Threat by or the Solution for a Generative AI Era? Rethinking Data Access and Re-use


Article by Stefaan G. Verhulst, Hannah Chafetz and Andrew Zahuranec: “One of the great paradoxes of our datafied era is that we live amid both unprecedented abundance and scarcity. Even as data grows more central to our ability to promote the public good, so too does it remain deeply — and perhaps increasingly — inaccessible and privately controlled. In response, there have been growing calls for “data commons” — pools of data that would be (self-)managed by distinctive communities or entities operating in the public’s interest. These pools could then be made accessible and reused for the common good.

Data commons are typically the results of collaborative and participatory approaches to data governance [1]. They offer an alternative to the growing tendency toward privatized data silos or extractive re-use of open data sets, instead emphasizing the communal and shared value of data — for example, by making data resources accessible in an ethical and sustainable way for purposes in alignment with community values or interests such as scientific research, social good initiatives, environmental monitoring, public health, and other domains.

Data commons can today be considered (the missing) critical infrastructure for leveraging data to advance societal wellbeing. When designed responsibly, they offer potential solutions for a variety of wicked problems, from climate change to pandemics and economic and social inequities. However, the rapid ascent of generative artificial intelligence (AI) technologies is changing the rules of the game, leading both to new opportunities as well as significant challenges for these communal data repositories.

On the one hand, generative AI has the potential to unlock new insights from data for a broader audience (through conversational interfaces such as chats), fostering innovation, and streamlining decision-making to serve the public interest. Generative AI also stands out in the realm of data governance due to its ability to reuse data at a massive scale, which has been a persistent challenge in many open data initiatives. On the other hand, generative AI raises uncomfortable questions related to equitable access, sustainability, and the ethical re-use of shared data resources. Further, without the right guardrails, funding models and enabling governance frameworks, data commons risk becoming data graveyards — vast repositories of unused, and largely unusable, data.

Ten part framework to rethink Data Commons

In what follows, we lay out some of the challenges and opportunities posed by generative AI for data commons. We then turn to a ten-part framework to set the stage for a broader exploration on how to reimagine and reinvigorate data commons for the generative AI era. This framework establishes a landscape for further investigation; our goal is not so much to define what an updated data commons would look like but to lay out pathways that would lead to a more meaningful assessment of the design requirements for resilient data commons in the age of generative AI…(More)”

Establish Data Collaboratives To Foster Meaningful Public Involvement


Article by Gwen Ottinger: “Federal agencies are striving to expand the role of the public, including members of marginalized communities, in developing regulatory policy. At the same time, agencies are considering how to mobilize data of increasing size and complexity to ensure that policies are equitable and evidence-based. However, community engagement has rarely been extended to the process of examining and interpreting data. This is a missed opportunity: community members can offer critical context to quantitative data, ground-truth data analyses, and suggest ways of looking at data that could inform policy responses to pressing problems in their lives. Realizing this opportunity requires a structure for public participation in which community members can expect both support from agency staff in accessing and understanding data and genuine openness to new perspectives on quantitative analysis. 

To deepen community involvement in developing evidence-based policy, federal agencies should form Data Collaboratives in which staff and members of the public engage in mutual learning about available datasets and their affordances for clarifying policy problems…(More)”.

‘Positive deviance’ and the power of outliers


Bloomberg Cities Network: “Groundbreaking solutions in cities are often the result of visionary mayoral leadership. But sometimes certain communities achieve significantly better outcomes than their similarly resourced neighbors—and the underlying reasons may not be immediately obvious to local leaders. Ravi Gurumurthy, CEO of the global innovation foundation Nesta, believes that this variation in quality of life at a hyper-local level is something worth paying a lot more attention to. 

“The fastest way for us to improve people’s lives will be to mine that variation and really understand what is going on,” he says.    

This concept, known as “positive deviance,” describes individuals or communities that achieve remarkable success or exhibit highly effective behaviors despite facing the same constraints as their peers. With a long history of use in international development, positive deviance is now gaining traction among city leaders as a source of solutions to stubborn urban challenges.  

Here’s a closer look at what it’s about, and how it’s already being used to uplift promising approaches in cities. 

What is positive deviance? 

Positive deviance first gained widespread attention because of a remarkable success story in 1990s Vietnam. Much of the country was suffering from a malnutrition crisis, and efforts to design and implement new solutions were coming up short. But aid workers landed on a breakthrough by paying closer attention to children who already appeared larger and healthier than their peers.  

It turned out these children were being fed different diets—leaning more heavily on shrimp and crab, for example, which were widely accessible but less often fed to young people. These children also were being fed more frequently, in smaller meals, throughout the day—an intervention that, again, did not require parents to have more resources so much as to differently use what was universally available.  

When these practices—feeding kids shellfish and making meals smaller and more frequent—were replicated, malnutrition plummeted…(More)”
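The screening step behind positive deviance can be sketched in a few lines: among peers with comparable resources, flag the ones whose outcomes sit well above the group. The communities, scores and threshold below are purely illustrative, not drawn from the Vietnam study or any real dataset.

```python
# A minimal sketch of positive-deviance screening: among communities with
# similar resources, flag those whose outcomes far exceed their peers'.
# All names, values and the 1.5-sigma threshold are hypothetical.
from statistics import mean, stdev

communities = {
    # name: (resources_per_capita, outcome_score) -- illustrative values
    "A": (100, 52), "B": (98, 55), "C": (102, 71),
    "D": (101, 50), "E": (99, 54), "F": (100, 53),
}

outcomes = [o for _, o in communities.values()]
mu, sigma = mean(outcomes), stdev(outcomes)

# Positive deviants: outcomes more than 1.5 standard deviations above the
# peer mean, despite comparable resources.
deviants = [name for name, (_, o) in communities.items() if o > mu + 1.5 * sigma]
print(deviants)  # → ['C']
```

The interesting work starts after the flagging, of course: visiting community "C" to learn which replicable practices explain the gap.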

How do you accidentally run for President of Iceland?


Blog by Anna Andersen: “Content design can have real consequences — for democracy, even…

To run for President of Iceland, you need to be an Icelandic citizen, at least 35 years old, and have 1,500 endorsements.

For the first time in Icelandic history, this endorsement process is digital. Instead of collecting all their signatures on paper the old-fashioned way, candidates can now send people to https://island.is/forsetaframbod to submit their endorsement.

This change has, also for the first time in Icelandic history, given the nation a clear window into who is trying to run — and it’s a remarkably large number. To date, 82 people are collecting endorsements, including a comedian, a model, the world’s first double-arm transplant recipient, and my aunt Helga.

Many of these people are seriously vying for president (yep, my aunt Helga), some of them have undoubtedly signed up as a joke (nope, not the comedian), and at least 11 of them accidentally registered and had no idea that they were collecting endorsements for their candidacy.

“I’m definitely not about to run for president, this was just an accident,” one person told a reporter after having a good laugh about it.

“That’s hilarious!” another person said, thanking the reporter for letting them know that they were in the running.

As a content designer, I was intrigued. How could so many people accidentally start a campaign for President of Iceland?

It turns out, the answer largely has to do with content design.

Presidential hopefuls were sending people a link to a page where they could be endorsed, but instead of endorsing the candidate, some people accidentally registered to be a candidate…(More)”.

The Open Data Maturity Ranking is shoddy – it badly needs to be re-thought


Article by Olesya Grabova: “Digitalising government is essential for Europe’s future innovation and economic growth, and one of the keys to achieving this is open data – information that public entities gather, create, or fund, and that is accessible to all to use freely.

This includes everything from public budget details to transport schedules. Open data’s benefits are vast — it fuels research, boosts innovation, and can even save lives in wartime through the creation of chatbots with information about bomb shelter locations. It’s estimated that its economic value will reach a total of EUR 194 billion for EU countries and the UK by 2030.

This is why correctly measuring European countries’ progress in open data is so important. And that’s why the European Commission developed the Open Data Maturity (ODM) ranking, which annually measures open data quality, policies, online portals, and impact across 35 European countries.
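Rankings like this typically collapse several dimension scores into one weighted composite. The sketch below is a generic illustration of that aggregation, using the four dimension names from the article; the weights and country scores are invented, not real ODM figures.

```python
# A generic sketch of composite-score ranking. Dimension names follow the
# article; weights and per-country scores are illustrative assumptions.

def composite(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of dimension scores."""
    return sum(scores[d] * weights[d] for d in weights)

weights = {"policy": 0.25, "portal": 0.25, "quality": 0.25, "impact": 0.25}
countries = {
    "Country X": {"policy": 90, "portal": 85, "quality": 60, "impact": 95},
    "Country Y": {"policy": 70, "portal": 75, "quality": 80, "impact": 72},
}

ranked = sorted(countries, key=lambda c: composite(countries[c], weights), reverse=True)
print(ranked)  # → ['Country X', 'Country Y']
```

The design choice to highlight: high scores on some dimensions can mask weak ones, so a country's composite rank can rise sharply without corresponding real-world progress.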

Alas, however, it doesn’t work as well as it should and this needs to be addressed.

A closer look at the report’s overall approach reveals the ranking hardly reflects countries’ real progress when it comes to open data. This flawed system, rather than guiding countries towards genuine improvement, risks misrepresenting their actual progress and misleads citizens about their country’s advancements, which further stalls opportunities for innovation.

Take Slovakia. It’s apparently the biggest climber, leaping from 29th to 10th place in just over a year. One would expect that the country has made significant progress in making public sector information available and stimulating its reuse – one of the ODM assessment’s key elements.

A deeper examination reveals that this isn’t the case. Looking at the ODM’s methodology highlights where it falls short… and how it can be fixed…(More)”.

The economic research policymakers actually need


Blog by Jed Kolko: “…The structure of academia just isn’t set up to produce the kind of research many policymakers need. Instead, top academic journal editors and tenure committees reward research that pushes the boundaries of the discipline and makes new theoretical or empirical contributions. And most academic papers presume familiarity with the relevant academic literature, making it difficult for anyone outside of academia to make the best possible use of them.

The most useful research often came instead from regional Federal Reserve banks, non-partisan think-tanks, the corporate sector, and from academics who had the support, freedom, or job security to prioritize policy relevance. It generally fell into three categories:

  1. New measures of the economy
  2. Broad literature reviews
  3. Analyses that directly quantify or simulate policy decisions.

If you’re an economic researcher and you want to do work that is actually helpful for policymakers — and increases economists’ influence in government — aim for one of those three buckets.

The pandemic and its aftermath brought an urgent need for data at higher frequency, with greater geographic and sectoral detail, and about ways the economy suddenly changed. Some of the most useful research contributions during that period were new data and measures of the economy: they were valuable as ingredients rather than as recipes or finished meals…(More)”.

Millions of gamers advance biomedical research


Article by McGill: “…4.5 million gamers around the world have advanced medical science by helping to reconstruct microbial evolutionary histories using a minigame included inside the critically and commercially successful video game, Borderlands 3. Their playing has led to a significantly refined estimate of the relationships of microbes in the human gut. The results of this collaboration will both substantially advance our knowledge of the microbiome and improve on the AI programs that will be used to carry out this work in future.

By playing Borderlands Science, a mini-game within the looter-shooter video game Borderlands 3, these players have helped trace the evolutionary relationships of more than a million different kinds of bacteria that live in the human gut, some of which play a crucial role in our health. This information represents an exponential increase in what we have discovered about the microbiome up till now. By aligning rows of tiles which represent the genetic building blocks of different microbes, humans have been able to take on tasks that even the best existing computer algorithms have been unable to solve yet…(More) (and More)”.
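The tile-matching task the players perform is, in computational terms, sequence alignment: arranging sequences, with gaps, so that matching building blocks line up. The sketch below is a textbook Needleman-Wunsch global alignment score, not the game's actual engine or scoring scheme.

```python
# A minimal sketch of the kind of task Borderlands Science gamifies:
# aligning two short DNA-like sequences so matching "tiles" line up.
# Textbook Needleman-Wunsch dynamic programming; scores are the classic
# match=+1, mismatch=-1, gap=-1 example, not the game's real parameters.

def align_score(a: str, b: str, match=1, mismatch=-1, gap=-1) -> int:
    """Return the best global alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):          # aligning a prefix against nothing
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, cols):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # best of: align a[i-1] with b[j-1], or insert a gap in either sequence
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[-1][-1]

print(align_score("GATTACA", "GCATGCU"))  # → 0
```

Humans excel at spotting good alignments visually; the project's point is that millions of such human judgments can both refine the alignments themselves and train better algorithms.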

Evidence Ecosystems and the Challenge of Humanising and Normalising Evidence


Article by Geoff Mulgan: “It is reasonable to assume that the work of governments, businesses and civil society goes better if the people making decisions are well-informed, using reliable facts and strong evidence rather than only hunch and anecdote.  The term ‘evidence ecosystem’ is a useful shorthand for the results of systematic attempts to make this easier, enabling decision makers, particularly in governments, to access the best available evidence, in easily digestible forms and when it’s needed.  

…This sounds simple.  But these ecosystems are as varied as ecosystems in nature.  How they work depends on many factors, including how political or technical the issues are; the presence or absence of confident, well-organised professions; the availability of good quality evidence; whether there is a political culture that values research; and much more.

In particular, the paper argues that the next generation of evidence ecosystems need a sharper understanding of how the supply of evidence meets demand, and the human dimension of evidence.  That means cultivating lasting relationships rather than relying too much on a linear flow of evidence from researchers to decision-makers; it means using conversation as much as prose reports to ensure evidence is understood and acted on; and it means making use of stories as well as dry analysis.  It depends, in other words, on recognising that the users of evidence are humans.

In terms of prescription the paper emphasises:

  • Sustainability/normalisation: the best approaches are embedded, part of the daily life of decision-making rather than depending on one-off projects and programmes.  This applies both to evidence and to data.  Yet embeddedness is the exception rather than the rule.
  • Multiplicity: multiple types of knowledge, and logics, are relevant to decisions, which is why people and institutions that understand these different logics are so vital.  
  • Credibility and relationships: the intermediaries who connect the supply and demand of knowledge need to be credible, with both depth of knowledge and an ability to interpret it for diverse audiences, and they need to be able to create and maintain relationships, which will usually be either place or topic based, and will take time to develop, with the communication of evidence often done best in conversation.
  • Stories: influencing decision-makers depends on indirect as well as direct communication, since the media in all their forms play a crucial role in validating evidence and evidence travels best with stories, vignettes and anecdotes.

In short, while evidence is founded on rigorous analysis, good data and robust methods, it also needs to be humanised – embedded in relationships, brought alive in conversations and vivid, human stories – and normalised, becoming part of everyday work…(More)”.