Report by Craig Matasick: “…innovative new set of citizen engagement practices—collectively known as deliberative democracy—offers important lessons that, when applied to media development, can help improve media assistance efforts and strengthen independent media environments around the world. At a time when disinformation runs rampant, it is more important than ever to strengthen public demand for credible information, reduce political polarization, and prevent media capture. Deliberative democracy approaches can help tackle these issues by expanding the number and diversity of voices that participate in policymaking, thereby fostering greater collective action and enhancing public support for media reform efforts.
Through a series of five illustrative case studies, the report demonstrates how deliberative democracy practices can be employed in both media development and democracy assistance efforts, particularly in the Global South. Such initiatives produce recommendations that take into account a plurality of voices while building trust between citizens and decision-makers by demonstrating to participants that their issues will be heard and addressed. Ultimately, this process can enable media development funders and practitioners to identify priorities and design locally relevant projects that have a higher likelihood of long-term impact.
– Deliberative democracy approaches, which are characterized by representative participation and moderated deliberation, provide a framework to generate demand-driven media development interventions while at the same time building greater public support for media reform efforts.
– Deliberative democracy initiatives foster collaboration across different segments of society, building trust in democratic institutions, combatting polarization, and avoiding elite capture.
– When employed by news organizations, deliberative approaches provide a better understanding of the issues their audiences care most about and uncover new problems affecting citizens that might not otherwise have come to light….(More)”.
Jos Berens at Bloomberg New Economy Forum: “…Despite these and other examples, data sharing between the private sector and humanitarian agencies is still limited. Out of 281 contributing organizations on HDX, only a handful come from the private sector.
So why don’t we see more use of private sector data in humanitarian response? One obvious set of challenges concerns privacy, data protection and ethics. Companies and their customers are often wary of data being used in ways not related to the original purpose of data collection. Such concerns are understandable, especially given the potential legal and reputational consequences of personal data breaches and leaks.
Figuring out how to use this type of sensitive data in an already volatile setting seems problematic, and it is — negotiations between public and private partners in the middle of a crisis often get hung up on a lack of mutual understanding. Data sharing partnerships negotiated during emergencies often fail to mature beyond the design phase. This dynamic creates a loop of inaction due to a lack of urgency in between crises, followed by slow and halfway efforts when action is needed most.
To ensure that private sector data is accessible in an emergency, humanitarian organizations and private sector companies need to work together to build partnerships before a crisis. They can do this by taking the following actions:
Invest in relationships and build trust. Both humanitarian organizations and private sector organizations should designate focal points who can quickly identify potentially useful data during a humanitarian emergency. A data stewards network, which identifies and connects data responsibility leaders across organizations, as proposed by the NYU GovLab, is a great example of what such relationships could look like. Efforts to build trust with the general public regarding private sector data use for humanitarian response should also be strengthened, primarily through transparency about the means and purpose of such collaborations. This is particularly important in the context of COVID-19, as noted in the UN Comprehensive Response to COVID-19 and the World Economic Forum’s ‘Great Reset’ initiative…(More)”.
Paper by Ciara Greene and Gillian Murphy: “Previous research has argued that fake news may have grave consequences for health behaviour, but surprisingly, no empirical data have been provided to support this assumption. This issue takes on new urgency in the context of the coronavirus pandemic. In this large preregistered study (N = 3746) we investigated the effect of exposure to fabricated news stories about COVID-19 on related behavioural intentions. We observed small but measurable effects on some related behavioural intentions but not others – for example, participants who read a story about problems with a forthcoming contact-tracing app reported reduced willingness to download the app. We found no effects of providing a general warning about the dangers of online misinformation on response to the fake stories, regardless of the framing of the warning in positive or negative terms. We conclude with a call for more empirical research on the real-world consequences of fake news….(More)”
Matthew Hutson at IEEE Spectrum: “…Researchers say they’ve learned a lot of lessons modeling this pandemic, lessons that will carry over to the next.
The first set of lessons is all about data. Garbage in, garbage out, they say. Jarad Niemi, an associate professor of statistics at Iowa State University who helps run the forecast hub used by the CDC, says it’s not clear what we should be predicting. Infections, deaths, and hospitalization numbers each have problems, which affect their usefulness not only as inputs for the model but also as outputs. It’s hard to know the true number of infections when not everyone is tested. Deaths are easier to count, but they lag weeks behind infections. Hospitalization numbers have immense practical importance for planning, but not all hospitals release those figures. How useful is it to predict those numbers if you never have the true numbers for comparison? What we need, he said, is systematized random testing of the population, to provide clear statistics of both the number of people currently infected and the number of people who have antibodies against the virus, indicating recovery. Prakash, of Georgia Tech, says governments should collect and release data quickly in centralized locations. He also advocates for central repositories of policy decisions, so modelers can quickly see which areas are implementing which distancing measures.
Researchers also talked about the need for a diversity of models. At the most basic level, averaging an ensemble of forecasts improves reliability. More important, each type of model has its own uses—and pitfalls. An SEIR model is a relatively simple tool for making long-term forecasts, but the devil is in the details of its parameters: How do you set those to match real-world conditions now and into the future? Get them wrong and the model can head off into fantasyland. Data-driven models can make accurate short-term forecasts, and machine learning may be good for predicting complicated factors. But will the inscrutable computations of, for instance, a neural network remain reliable when conditions change? Agent-based models look ideal for simulating possible interventions to guide policy, but they’re a lot of work to build and tricky to calibrate.
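The SEIR model discussed above can be made concrete in a few lines of code. The sketch below uses simple forward-Euler integration of the standard susceptible–exposed–infectious–recovered equations; all parameter values are illustrative assumptions for demonstration only, not estimates fitted to COVID-19 or any real outbreak:

```python
def seir(beta, sigma, gamma, s0, e0, i0, r0, days, dt=0.1):
    """Simulate SEIR dynamics; returns one (S, E, I, R) tuple of
    population fractions per day.

    beta  - transmission rate (contacts x infection probability)
    sigma - rate of leaving the exposed (latent) state, 1/latency period
    gamma - recovery rate, 1/infectious period
    """
    s, e, i, r = s0, e0, i0, r0
    out = []
    steps_per_day = round(1 / dt)
    for _ in range(days):
        out.append((s, e, i, r))
        for _ in range(steps_per_day):
            new_exposed = beta * s * i * dt     # flow S -> E
            new_infectious = sigma * e * dt     # flow E -> I
            new_recovered = gamma * i * dt      # flow I -> R
            s -= new_exposed
            e += new_exposed - new_infectious
            i += new_infectious - new_recovered
            r += new_recovered
    return out

# Illustrative run: R0 = beta/gamma ~ 2.5, 5-day latency, 7-day
# infectious period, 0.1% of the population initially exposed.
trajectory = seir(beta=0.357, sigma=0.2, gamma=0.143,
                  s0=0.999, e0=0.001, i0=0.0, r0=0.0, days=180)
peak_day = max(range(180), key=lambda d: trajectory[d][2])
```

Even this toy version illustrates the pitfall the researchers describe: nudging beta or gamma by a few percent can move the projected peak by weeks, which is why parameter choices dominate the quality of long-term SEIR forecasts.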
Finally, researchers emphasize the need for agility. Niemi of Iowa State says software packages have made it easier to build models quickly, and the code-sharing site GitHub lets people share and compare their models. COVID-19 is giving modelers a chance to try out all their newest tools, says Meyers, of the University of Texas. “The pace of innovation, the pace of development, is unlike ever before,” she says. “There are new statistical methods, new kinds of data, new model structures.”…(More)”.
Book edited by Martin Paul Eve and Jonathan Gray: “The Open Access Movement proposes to remove price and permission barriers for accessing peer-reviewed research work—to use the power of the internet to duplicate material at an infinitesimal cost-per-copy. In this volume, contributors show that open access does not exist in a technological or policy vacuum; there are complex social, political, cultural, philosophical, and economic implications for opening research through digital technologies. The contributors examine open access from the perspectives of colonial legacies, knowledge frameworks, publics and politics, archives and digital preservation, infrastructures and platforms, and global communities.
The contributors consider such topics as the perpetuation of colonial-era inequalities in research production and promulgation; the historical evolution of peer review; the problematic histories and discriminatory politics that shape our choices of what materials to preserve; the idea of scholarship as data; and resistance to the commercialization of platforms. Case studies report on such initiatives as the Making and Knowing Project, which created an openly accessible critical digital edition of a sixteenth-century French manuscript, the role of formats in Bruno Latour’s An Inquiry into Modes of Existence, and the Scientific Electronic Library Online (SciELO), a network of more than 1,200 journals from sixteen countries. Taken together, the contributions represent a substantive critical engagement with the politics, practices, infrastructures, and imaginaries of open access, suggesting alternative trajectories, values, and possible futures…(More)”.
Book edited by John R. Vacca: “… the most complete guide for integrating next generation smart city technologies into the very foundation of urban areas worldwide, showing how to make urban areas more efficient, more sustainable, and safer. Smart cities are complex systems of systems that encompass all aspects of modern urban life. A key component of their success is creating an ecosystem of smart infrastructures that can work together to enable dynamic, real-time interactions between urban subsystems such as transportation, energy, healthcare, housing, food, entertainment, work, social interactions, and governance. Solving Urban Infrastructure Problems Using Smart City Technologies is a complete reference for building a holistic, system-level perspective on smart and sustainable cities, leveraging big data analytics and strategies for planning, zoning, and public policy. It offers in-depth coverage and practical solutions for how smart cities can utilize residents’ intellectual and social capital, advance environmental sustainability, and increase personalization, mobility, and quality of life….(More)”
Special issue by ZDNet exploring “how new technologies like AI, cloud, drones, and 5G are helping government agencies, public organizations, and private companies respond to the events of today and tomorrow…(More)”.
Press Release: “Important questions are being raised about whether blockchain technologies can contribute to solving governance challenges in the mining, oil and gas sectors. This report seeks to begin addressing such questions, with particular reference to current blockchain applications and transparency efforts in the extractive sector.
It summarizes analysis by The Governance Lab (GovLab) at the New York University Tandon School of Engineering and the Natural Resource Governance Institute (NRGI). The study focused in particular on three activity areas: licensing and contracting, corporate registers and beneficial ownership, and commodity trading and supply chains.
Key messages:
– Blockchain technology could potentially reduce transparency challenges and information asymmetries in certain parts of the extractives value chain. However, stakeholders considering blockchain technologies need a more nuanced understanding of problem definition, value proposition and blockchain attributes to ensure that such interventions could positively impact extractive sector governance.
– The blockchain field currently lacks design principles, governance best practices, and open data standards that could ensure that the technology helps advance transparency and good governance in the extractive sector. Our analysis offers an initial set of design principles that could act as a starting point for a more targeted approach to the use of blockchain in improving extractives governance.
– Most blockchain projects are preliminary concepts or pilots, with little demonstration of how to effectively scale up successful experiments, especially in countries with limited resources.
– Meaningful impact evaluations or peer-reviewed publications that assess impact, including on the implications of blockchain’s emissions footprint, are still lacking. More broadly, a shared research agenda around blockchain could help address questions that are particularly ripe for future research.
– Transition to a blockchain-enabled system is likely to be smoother and faster in cases when digital records are already available than when a government or company attempts to move from an analog system to one leveraging blockchain.
– Companies or governments using blockchain are more likely to implement it successfully when they have a firm grasp of the technology, its strengths, its weaknesses, and how it fits into the broader governance landscape. But these actors are often overly reliant on and empowering of blockchain technology vendors and startups, which can lead to “lock-in”, whereby the market gets stuck with an approach even though market participants may be better off with an alternative.
– The role played by intermediaries like financial institutions or registrars can determine the success or failure of blockchain applications….(More)”.
Book by Maureen Webb: “Hackers have a bad reputation, as shady deployers of bots and destroyers of infrastructure. In Coding Democracy, Maureen Webb offers another view. Hackers, she argues, can be vital disruptors. Hacking is becoming a practice, an ethos, and a metaphor for a new wave of activism in which ordinary citizens are inventing new forms of distributed, decentralized democracy for a digital era. Confronted with concentrations of power, mass surveillance, and authoritarianism enabled by new technology, the hacking movement is trying to “build out” democracy into cyberspace.
Webb travels to Berlin, where she visits the Chaos Communication Camp, a flagship event in the hacker world; to Silicon Valley, where she reports on the Apple-FBI case, the significance of Russian troll farms, and the hacking of tractor software by desperate farmers; to Barcelona, to meet the hacker group XNet, which has helped bring nearly 100 prominent Spanish bankers and politicians to justice for their role in the 2008 financial crisis; and to Harvard and MIT, to investigate the institutionalization of hacking. Webb describes an amazing array of hacker experiments that could dramatically change the current political economy. These ambitious hacks aim to displace such tech monoliths as Facebook and Amazon; enable worker cooperatives to kill platforms like Uber; give people control over their data; automate trust; and provide citizens a real say in governance, along with capacity to reach consensus. Coding Democracy is not just another optimistic declaration of technological utopianism; instead, it provides the tools for an urgently needed upgrade of democracy in the digital era….(More)”.
Report by Philippe Lorenz: “Shaping international norms around the ethics of Artificial Intelligence (AI) is perceived as a new responsibility by foreign policy makers. This responsibility is accompanied by a desire to play an active role in the most important international fora. Given limited time and budget, foreign ministries need to set priorities for their involvement in the governance of AI. First and foremost, this requires an understanding of the entire AI governance landscape and the actors involved. The intention of this paper is to take a step back and familiarize foreign policy makers with the internal structures of the individual AI governance initiatives and the relationships between the involved actors. A basic understanding of the landscape also makes it easier to classify thematic developments and emerging actors, their agendas, and strategies.
This paper provides foreign policy practitioners with a mapping that can serve as a compass to navigate the complex web of stakeholders that shape the international debate on AI ethics. It plots political fora that serve as a platform for actors to agree upon ethical principles and pursue binding regulation. The mapping supplements the political purview with key actors who create technical standards on the ethics of AI. Furthermore, it describes the dynamic relationships between actors from these two domains. International governance addresses AI ethics through two different dimensions: political fora and Standards Developing Organizations (SDOs). Although it may be tempting to only engage on the diplomatic stage, this would be insufficient to help shape AI policy. Foreign policy makers must tend to both dimensions. While both governance worlds share the same topics and themes (in this case, AI ethics), they differ in their stakeholders, goals, outputs, and reach.
Key political and economic organizations such as the United Nations (UN), the Organisation for Economic Co-operation and Development (OECD), and the European Commission (EC) address ethical concerns raised by AI technologies. But so do SDOs such as the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the IEEE Standards Association (IEEE SA). Although actors from the latter category are typically concerned with the development of standards that address terminology, ontology, and technical benchmarks that facilitate product interoperability and market access, they, too, address AI ethics.
But these discussions on AI ethics will be useless if they do not inform the development of concrete policies for how to govern the technology. At international political fora, on the one hand, states shape the outputs that are often limited to non-binding, soft AI principles. SDOs, on the other hand, tend to the private sector. They are characterized by consensus-based decision-making processes that facilitate the adoption of industry standards. These fora are generally not accessible to (foreign) policy makers, either because they exclusively cater to the private sector and bar policy makers from joining, or because active participation requires in-depth technical expertise as well as industry knowledge that may surpass diplomats’ skill sets. Nonetheless, as prominent standard setting bodies such as ISO, IEC, and IEEE SA pursue industry standards in AI ethics, foreign policy makers need to take notice, as this will likely have consequences for their negotiations at international political fora.
The precondition for active engagement is to gain an overview of the AI Governance environment. Foreign policy practitioners need to understand the landscape of stakeholders, identify key actors, and start to strategically engage with questions relevant to AI governance. This is necessary to determine whether a given initiative on AI ethics is aligned with one’s own foreign policy goals and, therefore, worth engaging with. It is also helpful to assess industry dynamics that might affect geo-economic deliberations. Lastly, all of this is vital information to report back to government headquarters to inform policy making, as AI policy is a matter of domestic and foreign policy….(More)”.