Delivery-Driven Policy: Policy designed for the digital age


Report by Code for America: “Policymaking is in a quiet crisis. Too often, government policies do not live up to their intent due to a key disconnect between policymakers and government delivery.

How might the shift to a digital world affect government’s ability to implement policy?

Practicing delivery-driven policymaking means bringing user-centered, iterative, and data-driven practices to bear from the start and throughout. It means getting deep into the weeds of implementation in ways that the policy world has traditionally avoided, iterating both on policy and delivery.

By tightly coupling policy and delivery, governments can use data about how people actually experience government services to narrow the implementation gap and help policies get the outcome they intend….(More)”

Kenya passes data protection law crucial for tech investments


George Obulutsa and Duncan Miriri at Reuters: “Kenyan President Uhuru Kenyatta on Friday approved a data protection law which complies with European Union legal standards as it looks to bolster investment in its information technology sector.

The East African nation has attracted foreign firms with innovations such as Safaricom’s M-Pesa mobile money services, but the lack of safeguards in handling personal data has held it back from its full potential, officials say.

“Kenya has joined the global community in terms of data protection standards,” Joe Mucheru, minister for information, technology and communication, told Reuters.

The new law sets out restrictions on how personally identifiable data obtained by firms and government entities can be handled, stored and shared, the government said.

Mucheru said the law complies with the EU’s General Data Protection Regulation, which came into effect in May 2018, and that an independent office will investigate data infringements….

A lack of data protection legislation has also hampered the government’s efforts to digitize identity records for citizens.

The registration, which the government said would boost its provision of services, suffered a setback this year when the exercise was challenged in court.

“The lack of a data privacy law has been an enormous lacuna in Kenya’s digital rights landscape,” said Nanjala Nyabola, author of a book on information technology and democracy in Kenya….(More)”.

Voting could be the problem with democracy


Bernd Reiter at The Conversation: “Around the globe, citizens of many democracies are worried that their governments are not doing what the people want.

When voters pick representatives to engage in democracy, they hope they are picking people who will understand and respond to constituents’ needs. U.S. representatives have, on average, more than 700,000 constituents each, making this task more and more elusive, even with the best of intentions. Less than 40% of Americans are satisfied with their federal government.

Across Europe, South America, the Middle East and China, social movements have demanded better government – but gotten few real and lasting results, even in those places where governments were forced out.

In my work as a comparative political scientist working on democracy, citizenship and race, I’ve been researching democratic innovations in the past and present. In my new book, “The Crisis of Liberal Democracy and the Path Ahead: Alternatives to Political Representation and Capitalism,” I explore the idea that the problem might actually be democratic elections themselves.

My research shows that another approach – randomly selecting citizens to take turns governing – offers the promise of reinvigorating struggling democracies. That could make them more responsive to citizen needs and preferences, and less vulnerable to outside manipulation….

For local affairs, citizens can participate directly in local decisions. In Vermont, the first Tuesday of March is Town Meeting Day, a public holiday during which residents gather at town halls to debate and discuss any issue they wish.

In some Swiss cantons, townspeople meet once a year, in what are called Landsgemeinden, to elect public officials and discuss the budget.

For more than 30 years, communities around the world have involved average citizens in decisions about how to spend public money in a process called “participatory budgeting,” which involves public meetings and the participation of neighborhood associations. As many as 7,000 towns and cities allocate at least some of their money this way.

The Governance Lab, based at New York University, has taken crowd-sourcing to cities seeking creative solutions to some of their most pressing problems in a process best called “crowd-problem solving.” Rather than leaving problems to a handful of bureaucrats and experts, all the inhabitants of a community can participate in brainstorming ideas and selecting workable possibilities.

Digital technology makes it easier for larger groups of people to inform themselves about, and participate in, potential solutions to public problems. In the Polish harbor city of Gdansk, for instance, citizens were able to help choose ways to reduce the harm caused by flooding….(More)”.

Are Randomized Poverty-Alleviation Experiments Ethical?


Peter Singer et al at Project Syndicate: “Last month, the Nobel Memorial Prize in Economic Sciences was awarded to three pioneers in using randomized controlled trials (RCTs) to fight poverty in low-income countries: Abhijit Banerjee, Esther Duflo, and Michael Kremer. In RCTs, researchers randomly choose a group of people to receive an intervention, and a control group of people who do not, and then compare the outcomes. Medical researchers use this method to test new drugs or surgical techniques, and anti-poverty researchers use it alongside other methods to discover which policies or interventions are most effective. Thanks to the work of Banerjee, Duflo, Kremer, and others, RCTs have become a powerful tool in the fight against poverty.
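The core logic of an RCT is simple enough to sketch. The simulation below is a minimal, hypothetical illustration, not drawn from the laureates’ actual studies: the outcomes and the transfer’s effect size are invented, and the point is only that random assignment lets a plain difference in group means estimate the intervention’s effect.

```python
import random
import statistics

random.seed(0)  # reproducible illustration


def run_rct(baseline_outcomes, true_effect, n_treated):
    """Randomly assign units to treatment or control, then compare mean outcomes."""
    sample = baseline_outcomes[:]
    random.shuffle(sample)                                   # the randomization step
    treated = [x + true_effect for x in sample[:n_treated]]  # group receiving the intervention
    control = sample[n_treated:]                             # control group, no intervention
    return statistics.mean(treated) - statistics.mean(control)


# Hypothetical outcomes (say, weekly income); a cash transfer raises them by 10 on average.
baseline = [random.gauss(100, 15) for _ in range(1000)]
estimate = run_rct(baseline, true_effect=10, n_treated=500)
print(round(estimate, 1))  # close to the true effect of 10, up to sampling noise
```

Because assignment is random, any systematic difference between the two groups can be attributed to the intervention rather than to pre-existing differences between participants.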

But the use of RCTs does raise ethical questions, because they require randomly choosing who receives a new drug or aid program, and those in the control group often receive no intervention or one that may be inferior. One could object to this on principle, following Kant’s claim that it is always wrong to use human beings as a means to an end; critics have argued that RCTs “sacrifice the well-being of study participants in order to ‘learn.’”

Rejecting all RCTs on this basis, however, would also rule out the clinical trials on which modern medicine relies to develop new treatments. In RCTs, participants in both the control and treatment groups are told what the study is about, sign up voluntarily, and can drop out at any time. To prevent people from choosing to participate in such trials would be excessively paternalistic, and a violation of their personal freedom.

A less extreme version of the criticism argues that while medical RCTs are conducted only if there are genuine doubts about a treatment’s merits, many development RCTs test interventions, such as cash transfers, that are clearly better than nothing. In that case, why not simply provide the treatment?

This criticism neglects two considerations. First, it is not always obvious what is better, even in seemingly stark cases like this one. For example, before RCT evidence to the contrary, it was feared that cash transfers would lead to conflict and alcoholism.

Second, in many development settings, there are not enough resources to help everyone, creating a natural control group….

A third version of the ethical objection is that participants may actually be harmed by RCTs. For example, cash transfers might cause price inflation and make non-recipients poorer, or make non-recipients envious and unhappy. These effects might even affect people who never consented to be part of a study.

This is perhaps the most serious criticism, but it, too, does not make RCTs unethical in general….(More)”.

Finland’s model in utilising forest data


Report by Matti Valonen et al: “The aim of this study is to depict the Finnish Forest Centre’s Metsään.fi website’s background, objectives and implementation and to assess its needs for development and future prospects. The Metsään.fi service, included in the Metsään.fi website, is a free e-service for forest owners and corporate actors (companies, associations and service providers) in the forest sector, whose aim is to support active decision-making among forest owners by offering forest resource data and maps on forest properties, by making contact with the authorities easier through online services, and by acting as a platform for offering forest services, among other things.

In addition to the Metsään.fi-service, the website includes open forest data services that offer the users national forest resource data that is not linked with personal information.

Private forests are in a key position as raw material sources for traditional and new forest-based bioeconomy. In addition to wood material, the forests produce non-timber forest products (for example berries and mushrooms), opportunities for recreation and other ecosystem services.

Private forests cover roughly 60 percent of forest land but supply about 80 percent of the domestic wood used by the forest industry. In 2017 the value of forest industry production was 21 billion euros, a fifth of Finland’s entire industrial production value. Forest industry exports in 2017 were worth about 12 billion euros, a fifth of the entire export of goods. The forest sector is therefore important for Finland’s national economy…(More)”.

Big Data, Algorithms and Health Data


Paper by Julia M. Puaschunder: “The most recent decade featured a data revolution in the healthcare sector in screening, monitoring and coordination of aid. Big data analytics have revolutionized the medical profession. The health sector relies on Artificial Intelligence (AI) and robotics as never before. The opportunities of unprecedented access to healthcare, rational precision and human resemblance but also targeted aid in decentralized aid grids are obvious innovations that will lead to most sophisticated neutral healthcare in the future. Yet big data driven medical care also bears risks of privacy infringements and ethical concerns of social stratification and discrimination. Today’s genetic human screening, constant big data information amalgamation as well as social credit scores pegged to access to healthcare also create the most pressing legal and ethical challenges of our time.

The call for developing a legal, policy and ethical framework for using AI, big data, robotics and algorithms in healthcare has therefore reached unprecedented momentum. Compatibility glitches in AI-human interaction appear problematic, as does a natural AI preponderance that outperforms humans. Only if the benefits of AI are reaped within a master-slave-like legal frame can the risks associated with these novel, superior technologies be curbed. Liability control but also big data privacy protection appear important to secure the rights of vulnerable patient populations. Big data mapping and social credit scoring must be met with clear anti-discrimination and anti-social stratification ethics. Lastly, the value of genuine human care must be stressed and precious humanness conserved in the artificial age, alongside coupling the benefits of AI, robotics and big data with global common goals of sustainability and inclusive growth.

The report aims at helping a broad spectrum of stakeholders understand the impact of AI, big data, algorithms and health data based on information about key opportunities and risks but also future market challenges and policy developments for orchestrating the concerted pursuit of improving healthcare excellence. Statesmen and diplomats are invited to consider three trends in the wake of the AI (r)evolution:

Artificial Intelligence recently gained citizenship, with robots becoming citizens: attributing quasi-human rights to AI raises ethical questions of a stratified citizenship. Robots and algorithms may be citizens only for their protection and for upholding social norms towards human-like creatures; for economic and liability purposes they should be considered slave-like, without gaining civil privileges such as voting, property rights and holding public office.

Big data and computational power imply unprecedented opportunities for crowd understanding, trends prediction and healthcare control. Risks include data breaches, privacy infringements, stigmatization and discrimination. Big data protection should be enacted through technological advancement, self-determined privacy attention fostered by e-education, and discrimination alleviation by releasing only targeted information and regulating individual data mining capacities.

The European Union should consider establishing, by law and economic incentives, a fifth trade freedom of data in order to bundle AI and big data gains at large scale. Europe holds the unique potential of offering data supremacy in state-controlled universal healthcare big data wealth that is less fragmented than the US health landscape and more Western-focused than Asian healthcare. Europe could therefore lead the world on big data derived healthcare insights but should also step up to imbuing humane societal imperatives on these most cutting-edge innovations of our time….(More)”.

We are finally getting better at predicting organized conflict


Tate Ryan-Mosley at MIT Technology Review: “People have been trying to predict conflict for hundreds, if not thousands, of years. But it’s hard, largely because scientists can’t agree on its nature or how it arises. The critical factor could be something as apparently innocuous as a booming population or a bad year for crops. Other times a spark ignites a powder keg, as with the assassination of Archduke Franz Ferdinand of Austria in the run-up to World War I.

Political scientists and mathematicians have come up with a slew of different methods for forecasting the next outbreak of violence—but no single model properly captures how conflict behaves. A study published in 2011 by the Peace Research Institute Oslo used a single model to run global conflict forecasts from 2010 to 2050. It estimated a less than 0.05% chance of violence in Syria. Humanitarian organizations, which could have been better prepared had the predictions been more accurate, were caught flat-footed by the outbreak of Syria’s civil war in March 2011. It has since displaced some 13 million people.

Bundling individual models to maximize their strengths and weed out weaknesses has resulted in big improvements. The first public ensemble model, the Early Warning Project, launched in 2013 to forecast new instances of mass killing. Run by researchers at the US Holocaust Museum and Dartmouth College, it claims 80% accuracy in its predictions.
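The ensemble idea can be sketched in a few lines. The model names and risk numbers below are invented for illustration and are not taken from the Early Warning Project: each component model forecasts a conflict-risk probability for the same region, and the ensemble combines them, here by a simple unweighted average.

```python
import statistics


def ensemble_forecast(probabilities):
    """Combine per-model risk forecasts by simple (unweighted) averaging."""
    return statistics.mean(probabilities)


# Hypothetical risk estimates for one region from three separate models.
model_outputs = {
    "economic_model": 0.30,    # e.g. driven by crop yields and prices
    "event_model": 0.55,       # e.g. driven by reported protest events
    "structural_model": 0.20,  # e.g. driven by regime type and conflict history
}
combined = ensemble_forecast(list(model_outputs.values()))
print(round(combined, 2))  # → 0.35
```

Real ensembles typically weight each model by its historical accuracy rather than averaging uniformly, which is part of how bundling weeds out individual weaknesses.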

Improvements in data gathering, translation, and machine learning have further advanced the field. A newer model called ViEWS, built by researchers at Uppsala University, provides a huge boost in granularity. Focusing on conflict in Africa, it offers monthly predictive readouts on multiple regions within a given state. Its threshold for violence is a single death.

Some researchers say there are private—and in some cases, classified—predictive models that are likely far better than anything public. Worries that making predictions public could undermine diplomacy or change the outcome of world events are not unfounded. But that is precisely the point. Public models are good enough to help direct aid to where it is needed and alert those most vulnerable to seek safety. Properly used, they could change things for the better, and save lives in the process….(More)”.

Citizen Engagement in Energy Efficiency Retrofit of Public Housing Buildings: A Lisbon Case Study


Paper by Catarina Rolim and Ricardo Gomes: “In Portugal, there are about 120 thousand social housing units, and a large share of them are in need of some kind of rehabilitation. Alongside the technical challenge associated with implementing retrofit measures, there is the challenge of involving citizens in adopting more energy-conscious behaviors. Within the Sharing Cities project and, specifically, the retrofit of social housing, engagement activities with the tenants are being promoted, along with participation from city representatives, decision makers, and other stakeholders. This paper will present a methodology outlined to evaluate the impact of retrofit measures that considers the citizen as a crucial retrofit stakeholder. The approach ranges from technical analysis and data monitoring to activities such as educational and training sessions, interviews, surveys, workshops, public events, and focus groups. These will be conducted during the different stages of project implementation: the definition process, during deployment, and beyond deployment of solutions….(More)”.

Artificial intelligence: From expert-only to everywhere


Deloitte: “…AI consists of multiple technologies. At its foundation are machine learning and its more complex offspring, deep-learning neural networks. These technologies animate AI applications such as computer vision, natural language processing, and the ability to harness huge troves of data to make accurate predictions and to unearth hidden insights (see sidebar, “The parlance of AI technologies”). The recent excitement around AI stems from advances in machine learning and deep-learning neural networks—and the myriad ways these technologies can help companies improve their operations, develop new offerings, and provide better customer service at a lower cost.

The trouble with AI, however, is that to date, many companies have lacked the expertise and resources to take full advantage of it. Machine learning and deep learning typically require teams of AI experts, access to large data sets, and specialized infrastructure and processing power. Companies that can bring these assets to bear then need to find the right use cases for applying AI, create customized solutions, and scale them throughout the company. All of this requires a level of investment and sophistication that takes time to develop, and is out of reach for many….

These tech giants are using AI to create billion-dollar services and to transform their operations. To develop their AI services, they’re following a familiar playbook: (1) find a solution to an internal challenge or opportunity; (2) perfect the solution at scale within the company; and (3) launch a service that quickly attracts mass adoption. Hence, we see Amazon, Google, Microsoft, and China’s BATs launching AI development platforms and stand-alone applications to the wider market based on their own experience using them.

Joining them are big enterprise software companies that are integrating AI capabilities into cloud-based enterprise software and bringing them to the mass market. Salesforce, for instance, integrated its AI-enabled business intelligence tool, Einstein, into its CRM software in September 2016; the company claims to deliver 1 billion predictions per day to users. SAP integrated AI into its cloud-based ERP system, S/4HANA, to support specific business processes such as sales, finance, procurement, and the supply chain. S/4HANA has around 8,000 enterprise users, and SAP is driving its adoption by announcing that the company will not support legacy SAP ERP systems past 2025.

A host of startups is also sprinting into this market with cloud-based development tools and applications. These startups include at least six AI “unicorns,” two of which are based in China. Some of these companies target a specific industry or use case. For example, Crowdstrike, a US-based AI unicorn, focuses on cybersecurity, while Benevolent.ai uses AI to improve drug discovery.

The upshot is that these innovators are making it easier for more companies to benefit from AI technology even if they lack top technical talent, access to huge data sets, and their own massive computing power. Through the cloud, they can access services that address these shortfalls—without having to make big upfront investments. In short, the cloud is democratizing access to AI by giving companies the ability to use it now….(More)”.

New Directions in Public Opinion


Book edited by Adam J. Berinsky: “The 2016 elections called into question the accuracy of public opinion polling while tapping into new streams of public opinion more widely. The third edition of this well-established text addresses these questions and adds new perspectives to its authoritative line-up. The hallmark of this book is making cutting-edge research accessible and understandable to students and general readers. Here we see a variety of disciplinary approaches to public opinion reflected including psychology, economics, sociology, and biology in addition to political science. An emphasis on race, gender, and new media puts the elections of 2016 into context and prepares students to look ahead to 2020 and beyond.

New to the third edition:

• Includes 2016 election results and their implications for public opinion polling going forward.

• Three new chapters have been added on racializing politics, worldview politics, and the modern information environment….(More)”.