Finland’s model in utilising forest data


Report by Matti Valonen et al.: “The aim of this study is to depict the Finnish Forest Centre’s Metsään.fi-website’s background, objectives and implementation and to assess its needs for development and future prospects. The Metsään.fi-service included in the Metsään.fi-website is a free e-service for forest owners and corporate actors (companies, associations and service providers) in the forest sector, whose aim is to support active decision-making among forest owners by offering forest resource data and maps on forest properties, by making contact with the authorities easier through online services, and by acting as a platform for offering forest services, among other things.

In addition to the Metsään.fi-service, the website includes open forest data services that offer the users national forest resource data that is not linked with personal information.

Private forests are in a key position as raw material sources for the traditional and new forest-based bioeconomy. In addition to wood material, the forests produce non-timber forest products (for example berries and mushrooms), opportunities for recreation and other ecosystem services.

Private forests cover roughly 60 percent of forest land but supply about 80 percent of the domestic wood used by the forest industry. In 2017 the value of forest industry production was 21 billion euros, a fifth of the entire industrial production value in Finland. Forest industry exports in 2017 were worth about 12 billion euros, covering a fifth of the entire export of goods. Therefore, the forest sector is important for Finland’s national economy…(More)”.

Big Data, Algorithms and Health Data


Paper by Julia M. Puaschunder: “The most recent decade featured a data revolution in the healthcare sector in screening, monitoring and coordination of aid. Big data analytics have revolutionized the medical profession. The health sector relies on Artificial Intelligence (AI) and robotics as never before. The opportunities of unprecedented access to healthcare, rational precision and human resemblance, but also targeted aid in decentralized aid grids, are obvious innovations that will lead to the most sophisticated neutral healthcare in the future. Yet big data driven medical care also bears risks of privacy infringements and ethical concerns of social stratification and discrimination. Today’s genetic human screening, constant big data information amalgamation as well as social credit scores pegged to access to healthcare also create the most pressing legal and ethical challenges of our time.

The call for developing a legal, policy and ethical framework for using AI, big data, robotics and algorithms in healthcare has therefore reached unprecedented momentum. Compatibility glitches in the AI-human interaction appear problematic, as does a natural AI preponderance outperforming humans. Only if the benefits of AI are reaped within a master-slave-like legal frame can the risks associated with these novel, superior technologies be curbed. Liability control but also big data privacy protection appear important to secure the rights of vulnerable patient populations. Big data mapping and social credit scoring must be met with clear anti-discrimination and anti-social-stratification ethics. Lastly, the value of genuine human care must be stressed and precious humanness conserved in the artificial age, alongside coupling the benefits of AI, robotics and big data with global common goals of sustainability and inclusive growth.

The report aims at helping a broad spectrum of stakeholders understand the impact of AI, big data, algorithms and health data based on information about key opportunities and risks, but also future market challenges and policy developments, for orchestrating the concerted pursuit of improving healthcare excellence. Statespeople and diplomats are invited to consider three trends in the wake of the AI (r)evolution:

Artificial Intelligence recently gained citizenship as robots become citizens: with attributing quasi-human rights to AI, ethical questions arise of a stratified citizenship. Robots and algorithms may be citizens only for their own protection and for upholding social norms towards human-like creatures; for economic and liability purposes they should be considered slave-like, without gaining civil privileges such as voting, property rights and holding public office.

Big data and computational power imply unprecedented opportunities for crowd understanding, trend prediction and healthcare control. Risks include data breaches, privacy infringements, stigmatization and discrimination. Big data protection should be enacted through technological advancement, self-determined privacy attention fostered by e-education, as well as discrimination alleviation by releasing only targeted information and by regulating individual data mining capacities.

The European Union should consider establishing a fifth trade freedom of data, by law and economic incentives, in order to bundle AI and big data gains at large scale. Europe holds the unique potential of offering data supremacy in state-controlled universal healthcare big data wealth that is less fragmented than the US health landscape and more Western-focused than Asian healthcare. Europe could therefore lead the world on big data derived healthcare insights but should also step up to imbuing these most cutting-edge innovations of our time with humane societal imperatives….(More)”.

The Rising Threat of Digital Nationalism


Essay by Akash Kapur in the Wall Street Journal: “Fifty years ago this week, at 10:30 on a warm night at the University of California, Los Angeles, the first email was sent. It was a decidedly local affair. A man sat in front of a teleprinter connected to an early precursor of the internet known as Arpanet and transmitted the message “login” to a colleague in Palo Alto. The system crashed; all that arrived at the Stanford Research Institute, some 350 miles away, was a truncated “lo.”

The network has moved on dramatically from those parochial—and stuttering—origins. Now more than 200 billion emails flow around the world every day. The internet has come to represent the very embodiment of globalization—a postnational public sphere, a virtual world impervious and even hostile to the control of sovereign governments (those “weary giants of flesh and steel,” as the cyberlibertarian activist John Perry Barlow famously put it in his Declaration of the Independence of Cyberspace in 1996).

But things have been changing recently. Nicholas Negroponte, a co-founder of the MIT Media Lab, once said that national law had no place in cyberlaw. That view seems increasingly anachronistic. Across the world, nation-states have been responding to a series of crises on the internet (some real, some overstated) by asserting their authority and claiming various forms of digital sovereignty. A network that once seemed to effortlessly defy regulation is being relentlessly, and often ruthlessly, domesticated.

From firewalls to shutdowns to new data-localization laws, a specter of digital nationalism now hangs over the network. This “territorialization of the internet,” as Scott Malcomson, a technology consultant and author, calls it, is fundamentally changing its character—and perhaps even threatening its continued existence as a unified global infrastructure.

The phenomenon of digital nationalism isn’t entirely new, of course. Authoritarian governments have long sought to rein in the internet. China has been the pioneer. Its Great Firewall, which restricts what people can read and do online, has served as a model for promoting what the country calls “digital sovereignty.” China’s efforts have had a powerful demonstration effect, showing other autocrats that the internet can be effectively controlled. China has also proved that powerful tech multinationals will exchange their stated principles for market access and that limiting online globalization can spur the growth of a vibrant domestic tech industry.

Several countries have built—or are contemplating—domestic networks modeled on the Chinese example. To control contact with the outside world and suppress dissident content, Iran has set up a so-called “halal net,” North Korea has its Kwangmyong network, and earlier this year, Vladimir Putin signed a “sovereign internet bill” that would likewise set up a self-sufficient Runet. The bill also includes a “kill switch” to shut off the global network to Russian users. This is an increasingly common practice. According to the New York Times, at least a quarter of the world’s countries have temporarily shut down the internet over the past four years….(More)”

We are finally getting better at predicting organized conflict


Tate Ryan-Mosley at MIT Technology Review: “People have been trying to predict conflict for hundreds, if not thousands, of years. But it’s hard, largely because scientists can’t agree on its nature or how it arises. The critical factor could be something as apparently innocuous as a booming population or a bad year for crops. Other times a spark ignites a powder keg, as with the assassination of Archduke Franz Ferdinand of Austria in the run-up to World War I.

Political scientists and mathematicians have come up with a slew of different methods for forecasting the next outbreak of violence—but no single model properly captures how conflict behaves. A study published in 2011 by the Peace Research Institute Oslo used a single model to run global conflict forecasts from 2010 to 2050. It estimated a less than .05% chance of violence in Syria. Humanitarian organizations, which could have been better prepared had the predictions been more accurate, were caught flat-footed by the outbreak of Syria’s civil war in March 2011. It has since displaced some 13 million people.

Bundling individual models to maximize their strengths and weed out weaknesses has resulted in big improvements. The first public ensemble model, the Early Warning Project, launched in 2013 to forecast new instances of mass killing. Run by researchers at the US Holocaust Memorial Museum and Dartmouth College, it claims 80% accuracy in its predictions.
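
As a minimal illustration of the ensemble idea described above, the sketch below averages the risk scores of several component models, each weighted by a measure of past accuracy. This is not the Early Warning Project's actual method; the model names, weights and scores are hypothetical.

```python
# Illustrative sketch only: combine per-model conflict-risk scores with a
# weighted average. Model names, weights and scores are hypothetical, not the
# Early Warning Project's actual pipeline.

def ensemble_risk(predictions: dict, weights: dict) -> float:
    """Weighted average of per-model risk scores (each between 0 and 1)."""
    total_weight = sum(weights[name] for name in predictions)
    return sum(predictions[name] * weights[name] for name in predictions) / total_weight

# Hypothetical component models scoring the risk of mass violence in one country.
predictions = {"structural_model": 0.12, "event_data_model": 0.30, "expert_survey": 0.22}
weights = {"structural_model": 1.0, "event_data_model": 1.5, "expert_survey": 0.8}  # e.g. from past accuracy

print(f"Ensemble risk estimate: {ensemble_risk(predictions, weights):.2f}")
```

Weighting and averaging across models with different assumptions is what lets an ensemble smooth out the blind spots of any single model.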

Improvements in data gathering, translation, and machine learning have further advanced the field. A newer model called ViEWS, built by researchers at Uppsala University, provides a huge boost in granularity. Focusing on conflict in Africa, it offers monthly predictive readouts on multiple regions within a given state. Its threshold for violence is a single death.

Some researchers say there are private—and in some cases, classified—predictive models that are likely far better than anything public. Worries that making predictions public could undermine diplomacy or change the outcome of world events are not unfounded. But that is precisely the point. Public models are good enough to help direct aid to where it is needed and alert those most vulnerable to seek safety. Properly used, they could change things for the better, and save lives in the process….(More)”.

Citizen Engagement in Energy Efficiency Retrofit of Public Housing Buildings: A Lisbon Case Study


Paper by Catarina Rolim and Ricardo Gomes: “In Portugal, there are about 120 thousand social housing units, and a large share of them are in need of some kind of rehabilitation. Alongside the technical challenge associated with implementing retrofit measures, there is the challenge of involving citizens in adopting more energy-conscious behaviors. Within the Sharing Cities project and, specifically, in the case of social housing retrofit, engagement activities with the tenants are being promoted, along with participation from city representatives, decision makers, stakeholders and others. This paper will present a methodology outlined to evaluate the impact of retrofit measures considering the citizen as a crucial retrofit stakeholder. The approach ranges from technical analysis and data monitoring to activities such as educational and training sessions, interviews, surveys, workshops, public events and focus groups. These will be conducted during the different stages of project implementation: the definition process, during deployment and beyond deployment of solutions….(More)”.

Artificial intelligence: From expert-only to everywhere


Deloitte: “…AI consists of multiple technologies. At its foundation are machine learning and its more complex offspring, deep-learning neural networks. These technologies animate AI applications such as computer vision, natural language processing, and the ability to harness huge troves of data to make accurate predictions and to unearth hidden insights (see sidebar, “The parlance of AI technologies”). The recent excitement around AI stems from advances in machine learning and deep-learning neural networks—and the myriad ways these technologies can help companies improve their operations, develop new offerings, and provide better customer service at a lower cost.

The trouble with AI, however, is that to date, many companies have lacked the expertise and resources to take full advantage of it. Machine learning and deep learning typically require teams of AI experts, access to large data sets, and specialized infrastructure and processing power. Companies that can bring these assets to bear then need to find the right use cases for applying AI, create customized solutions, and scale them throughout the company. All of this requires a level of investment and sophistication that takes time to develop, and is out of reach for many….

These tech giants are using AI to create billion-dollar services and to transform their operations. To develop their AI services, they’re following a familiar playbook: (1) find a solution to an internal challenge or opportunity; (2) perfect the solution at scale within the company; and (3) launch a service that quickly attracts mass adoption. Hence, we see Amazon, Google, Microsoft, and China’s BATs launching AI development platforms and stand-alone applications to the wider market based on their own experience using them.

Joining them are big enterprise software companies that are integrating AI capabilities into cloud-based enterprise software and bringing them to the mass market. Salesforce, for instance, integrated its AI-enabled business intelligence tool, Einstein, into its CRM software in September 2016; the company claims to deliver 1 billion predictions per day to users. SAP integrated AI into its cloud-based ERP system, S/4HANA, to support specific business processes such as sales, finance, procurement, and the supply chain. S/4HANA has around 8,000 enterprise users, and SAP is driving its adoption by announcing that the company will not support legacy SAP ERP systems past 2025.

A host of startups is also sprinting into this market with cloud-based development tools and applications. These startups include at least six AI “unicorns,” two of which are based in China. Some of these companies target a specific industry or use case. For example, Crowdstrike, a US-based AI unicorn, focuses on cybersecurity, while Benevolent.ai uses AI to improve drug discovery.

The upshot is that these innovators are making it easier for more companies to benefit from AI technology even if they lack top technical talent, access to huge data sets, and their own massive computing power. Through the cloud, they can access services that address these shortfalls—without having to make big upfront investments. In short, the cloud is democratizing access to AI by giving companies the ability to use it now….(More)”.

New Directions in Public Opinion


Book edited by Adam J. Berinsky: “The 2016 elections called into question the accuracy of public opinion polling while tapping into new streams of public opinion more widely. The third edition of this well-established text addresses these questions and adds new perspectives to its authoritative line-up. The hallmark of this book is making cutting-edge research accessible and understandable to students and general readers. Here we see a variety of disciplinary approaches to public opinion reflected, including psychology, economics, sociology, and biology, in addition to political science. An emphasis on race, gender, and new media puts the elections of 2016 into context and prepares students to look ahead to 2020 and beyond.

New to the third edition:

• Includes 2016 election results and their implications for public opinion polling going forward.

• Three new chapters have been added on racializing politics, worldview politics, and the modern information environment….(More)”.

OMB rethinks ‘protected’ or ‘open’ data binary with upcoming Evidence Act guidance


Jory Heckman at Federal News Network: “The Foundations for Evidence-Based Policymaking Act has ordered agencies to share their datasets internally and with other government partners — unless, of course, doing so would break the law.

Nearly a year after President Donald Trump signed the bill into law, agencies still have only a murky idea of what data they can share, and with whom. But soon, they’ll have more nuanced options for ranking the sensitivity of their datasets before sharing them with others.

Chief Statistician Nancy Potok said the Office of Management and Budget will soon release proposed guidelines for agencies to provide “tiered” access to their data, based on the sensitivity of that information….
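
A purely hypothetical sketch of how such tiered access might be expressed; the guidance has not yet been released, so the tier names and access rule below are assumptions:

```python
# Hypothetical sketch of tiered dataset access; tier names and rules are
# assumptions for illustration, not OMB's actual guidance.
from enum import IntEnum

class SensitivityTier(IntEnum):
    OPEN = 0        # publishable as open data
    RESTRICTED = 1  # shareable with partner agencies under agreement
    PROTECTED = 2   # research access only, in a secure environment
    CLASSIFIED = 3  # not shareable outside the originating agency

def can_access(requester_clearance: SensitivityTier, dataset_tier: SensitivityTier) -> bool:
    """A requester may see a dataset only if cleared at or above its tier."""
    return requester_clearance >= dataset_tier

# Example: an analyst at a partner agency cleared for RESTRICTED data.
print(can_access(SensitivityTier.RESTRICTED, SensitivityTier.OPEN))       # True
print(can_access(SensitivityTier.RESTRICTED, SensitivityTier.PROTECTED))  # False
```

In practice, the tiers and the criteria for clearing a requester would come from the forthcoming OMB guidance and from each agency's governing statutes.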

OMB, as part of its Evidence Act rollout, will also rethink how agencies ensure protected access to data for research. Potok said agency officials expect to pilot a single application governmentwide for people seeking access to sensitive data not available to the public.

The pilot resembles plans for a National Secure Data Service envisioned by the Commission on Evidence-Based Policymaking, an advisory group whose recommendations laid the groundwork for the Evidence Act.

“As a state-of-the-art resource for improving government’s capacity to use the data it already collects, the National Secure Data Service will be able to temporarily link existing data and provide secure access to those data for exclusively statistical purposes in connection with approved projects,” the commission wrote in its 2017 final report.

In an effort to strike a balance between access and privacy, Potok said OMB has also asked agencies to provide a list of the statutes that prohibit them from sharing data amongst themselves….(More)”.

Governing Missions in the European Union


Report by Mariana Mazzucato: “This report, Governing Missions, looks at the ‘how’: how to implement and govern a mission-oriented process so that it unleashes the full creativity and ambition potential of R&I policy-making; and how it crowds-in investments from across Europe in the process. The focus is on 3 key questions:

  • How to engage citizens in co-designing, co-creating, co-implementing and co-assessing missions?
  • What are the public sector capabilities and instruments needed to foster a dynamic innovation ecosystem, including the ability of civil servants to welcome experimentation and help governments work outside silos?
  • How can mission-oriented finance and funding leverage and crowd-in other forms of finance, galvanising innovation across actors (public, private and third sector), different manufacturing and service sectors, and across national and transnational levels?…(More)”.

Geolocation Data for Pattern of Life Analysis in Lower-Income Countries


Report by Eduardo Laguna-Muggenburg, Shreyan Sen and Eric Lewandowski: “Urbanization processes in the developing world are often associated with the creation of informal settlements. These areas frequently have few or no public services, exacerbating inequality even in the context of substantial economic growth.

In the past, the high costs of gathering data through traditional surveying methods made it challenging to study how these under-served areas evolve through time and in relation to the metropolitan area to which they belong. However, the advent of mobile phones, and smartphones in particular, presents an opportunity to generate new insights on these old questions.

In June 2019, Orbital Insight and the United Nations Development Programme (UNDP) Arab States Human Development Report team launched a collaborative pilot program assessing the feasibility of using geolocation data to understand patterns of life among the urban poor in Cairo, Egypt.

The objectives of this collaboration were to assess the feasibility of using geolocation data (and conditionally pursue preliminary analysis) to create near-real-time population density maps, to understand where residents of informal settlements tend to work during the day, and to classify universities by the percentage of students living in informal settlements.
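
As a minimal sketch of the kind of density mapping described above, geolocation pings can be binned into a spatial grid and counted per cell. This is illustrative only, not the report's methodology; the grid resolution and sample coordinates are assumptions.

```python
# Illustrative sketch only: bin geolocation pings into a coarse lat/lon grid to
# approximate population density. Grid resolution and sample coordinates are
# hypothetical, not taken from the report.
from collections import Counter

CELL_SIZE_DEG = 0.01  # roughly 1 km in latitude near Cairo

def grid_cell(lat, lon):
    """Map a coordinate to the index of the grid cell containing it."""
    return (int(lat // CELL_SIZE_DEG), int(lon // CELL_SIZE_DEG))

def density_map(pings):
    """Count pings per grid cell as a crude proxy for population density."""
    return Counter(grid_cell(lat, lon) for lat, lon in pings)

# Hypothetical device pings (latitude, longitude) around central Cairo.
pings = [(30.0444, 31.2357), (30.0450, 31.2360), (30.0560, 31.2400)]
print(density_map(pings).most_common(2))
```

Comparing such grids for daytime versus nighttime pings is one simple way to see where residents tend to work relative to where they live.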

The report is organized as follows. In Section 2 we describe the data and its limitations. In Section 3 we briefly explain the methodological background. Section 4 summarizes the insights derived from the data for the Egyptian context. Section 5 concludes….(More)”.