Stefaan Verhulst
Urs Gasser: “Lawmakers and regulators need to look at AI not as a homogenous technology, but as a set of techniques and methods that will be deployed in specific and increasingly diversified applications. There is currently no generally agreed-upon definition of AI. What is important to understand from a technical perspective is that AI is not a single, homogenous technology, but a rich set of subdisciplines, methods, and tools that bring together areas such as speech recognition, computer vision, machine translation, reasoning, attention and memory, robotics and control, etc. ….
Given the breadth and scope of application, AI-based technologies are expected to trigger a myriad of legal and regulatory issues not only at the intersections of data and algorithms, but also of infrastructures and humans. …
When considering (or anticipating) possible responses by the law vis-à-vis AI innovation, it might be helpful to differentiate between application-specific and cross-cutting legal and regulatory issues. …
Information asymmetries and high degrees of uncertainty pose particular difficulties for the design of appropriate legal and regulatory responses to AI innovations — and require learning systems. AI-based applications — which are typically perceived as “black boxes” — affect a significant number of people, yet relatively few people develop and understand AI-based technologies. …. Approaches such as regulation 2.0, which relies on dynamic, real-time, and data-driven accountability models, might provide interesting starting points.
The responses to a variety of legal and regulatory issues across different areas of distributed applications will likely result in a complex set of sector-specific norms, which are likely to vary across jurisdictions….
Law and regulation may constrain behavior yet also act as enablers and levelers — and are powerful tools as we aim for the development of AI for social good. …
Law is one important approach to the governance of AI-based technologies. But lawmakers and regulators have to consider the full potential of available instruments in the governance toolbox. ….
In a world of advanced AI technologies and new governance approaches towards them, the law, the rule of law, and human rights remain critical bodies of norms. …
As AI applies to the legal system itself, however, the rule of law might have to be re-imagined and the law re-coded in the longer run….(More).
Springwise: “In an era where the term ‘fake news’ has become commonplace, news app Read Across the Aisle by US-based BeeLine Reader is designed to help users break out of the ‘filter bubble’ of media sources they are inclined to read by offering articles from opposing angles. The app, which is Kickstarter funded, hopes to combat political polarization by allowing readers to see the partisan bias of the news sources they are accessing. It tracks the user’s own political news bias over time, and finds reliable news sources from both the left and the right to offer readers a well-rounded spectrum of approaches.
Research has found that Internet users, particularly in the realm of news and social media, tend to surround themselves with those who share similar opinions, meaning other information can be missed or dismissed as false. App users are informed when their reading habits skew too far to one side of the political spectrum, and are consequently prompted to read articles written by outlets from the opposing side.
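As a rough illustration of the mechanism described above (the app’s actual scoring method is not detailed in this excerpt), such a bias tracker can be sketched as a running average of per-source lean scores that triggers a prompt once the average drifts past a threshold; the source names and scores below are invented:

```python
# Hypothetical sketch of the bias-tracking idea; not the app's real algorithm.
# Lean scores run from -1.0 (left) to +1.0 (right); all values are invented.
SOURCE_LEAN = {
    "example-left.com": -0.8,
    "example-center.com": 0.0,
    "example-right.com": 0.7,
}

class BiasTracker:
    def __init__(self, skew_threshold=0.4):
        self.skew_threshold = skew_threshold
        self.history = []  # lean scores of the articles the user has read

    def record_read(self, source):
        self.history.append(SOURCE_LEAN.get(source, 0.0))

    def average_lean(self):
        return sum(self.history) / len(self.history) if self.history else 0.0

    def suggestion(self):
        lean = self.average_lean()
        if lean > self.skew_threshold:
            return "Your reading skews right; here is a story from a left-leaning source."
        if lean < -self.skew_threshold:
            return "Your reading skews left; here is a story from a right-leaning source."
        return None  # reading is balanced; no prompt needed

tracker = BiasTracker()
for url in ["example-right.com", "example-right.com", "example-center.com"]:
    tracker.record_read(url)
print(tracker.suggestion())  # average lean is about 0.47, so the right-skew prompt fires
```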
As once-dominant print newspapers have given way to online news consumption, technology to support the industry has flourished. Recent innovations covered by Springwise include a blockchain transparency tool applied to newsfeeds to help identify trustworthy news sources, and a news website that encourages readers to empathise with opposing views….(More)”.
About: “With all the exciting A.I. stuff happening, there are lots of people eager to start tinkering with machine learning technology. A.I. Experiments is a showcase for simple experiments that let anyone play with this technology in hands-on ways, through pictures, drawings, language, music, and more.
Submit your own
We want to make it easier for any coder – whether you have a machine learning background or not – to create your own experiments. This site includes open-source code and resources to help you get started. If you make something you’d like to share, we’d love to see it and possibly add it to the showcase….(More)”
Patrick Sisson at Curbed: “…The Assembly Civic Engagement Survey, a new report released yesterday by the Center for Active Design, seeks to understand the impact of the design of public spaces and buildings on public life, and eventually create a toolbox for planners and politicians to make decisions that can help improve civic pride. There’s perhaps an obvious connection between what one might consider a better-designed neighborhood and public perception of government and community, but how to design that neighborhood to directly improve public engagement—especially during an era of low voter engagement and partisan divide—is an important, and unanswered, question….
One of the most striking findings was around park signage. Respondents were shown a series of three signs, ranging from a traditional display of park rules and prohibitions to a more proactive, engaging pictograph that tells parkgoers it’s okay to give high-fives. The survey found the simple switch to more eye-catching, positive, and entertaining signage improved neighborhood pride by 11 percent and boosted the feeling that “the city cares for people in this park” by 9 percent. Similar improvements were found in surveys looking at signage on community centers.
According to Frank, the biggest revelation from the research is how a minimum of effort can make a large impact. On one hand, she says, it doesn’t take a genius to realize that transforming a formerly graffiti-covered vacant lot into a community garden can impact community trust and cohesion.
What sticks out from the study’s findings is how little is really necessary to shift attitudes and improve people’s trust in their neighborhoods and attitudes toward city government and police. Litter turned out to be a huge issue: High levels of trash eroded community pride by 10 percent, trust in police by 5 percent, and trust in local government by 4 percent. When presented with a series of seven things they could improve about their city, including crime, traffic, and noise, 23 percent of respondents chose litter.
In short, disorder erodes civic trust. The small things matter, especially when cities are formulating budgets and streetscaping plans and looking at the most effective ways of investing in community improvements….
Giving cities direction as well as data
Beyond connecting the dots, Frank wants to give planners a rationale for their actions. Telling designers that placing planters in the middle of a street can beautify a neighborhood is one thing; showing that this kind of beautification increases walkability, brings more shoppers to a commercial strip, and ultimately leads to higher sales and tax revenue spurs action and innovation.
Frank gives the example of redesigning the streetscape in front of a police station. The idea of placing planters and benches may seem like a poor use of limited funds, until data and research reveal it’s a cost-effective way to encourage interactions between cops and the community and to help change the image of the department….(More)”
Cardiff University News: “An analysis of data taken from the London riots in 2011 showed that computer systems could automatically scan through Twitter and detect serious incidents, such as shops being broken into and cars being set alight, before they were reported to the Metropolitan Police Service.
The computer system could also discern information about where the riots were rumoured to take place and where groups of youths were gathering. The new research, published in the peer-reviewed journal ACM Transactions on Internet Technology, showed that on average the computer systems could pick up on disruptive events several minutes before officials did, and in some cases over an hour earlier.
“Antagonistic narratives and cyber hate”
The researchers believe that their work could enable police officers to better manage and prepare for both large and small scale disruptive events.
Co-author of the study Dr Pete Burnap, from Cardiff University’s School of Computer Science and Informatics, said: “We have previously used machine-learning and natural language processing on Twitter data to better understand online deviance, such as the spread of antagonistic narratives and cyber hate…”
“We will never replace traditional policing resource on the ground but we have demonstrated that this research could augment existing intelligence gathering and draw on new technologies to support more established policing methods.”
Scientists are continually looking to the swathes of data produced from Twitter, Facebook and YouTube to help them to detect events in real-time.
Estimates put social media membership at approximately 2.5 billion non-unique users, and the data produced by these users have been used to predict elections, movie revenues and even the epicentre of earthquakes.
In their study the research team analysed 1.6m tweets relating to the 2011 riots in England, which began as an isolated incident in Tottenham on August 6 but quickly spread across London and to other cities in England, giving rise to looting, destruction of property and levels of violence not seen in England for more than 30 years.
Machine-learning algorithms
The researchers used a series of machine-learning algorithms to analyse each of the tweets from the dataset, taking into account a number of key features such as the time they were posted, the location where they were posted and the content of the tweet itself.
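The study’s exact models are not specified in this excerpt, but the general approach it describes (score tweet text with a supervised classifier, then group flagged tweets by time and place to surface candidate events) can be sketched roughly as follows; the training examples, tweets, and library choice (scikit-learn) are assumptions made purely for illustration:

```python
# Minimal sketch, not the study's actual pipeline: a text classifier flags
# tweets that read like reports of disruption, and flagged tweets are then
# grouped by time window and coarse location to surface candidate events.
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled examples (invented); a real system would need many thousands.
train_texts = [
    "shop windows smashed on the high street",
    "cars set alight near the station",
    "lovely sunny afternoon in the park",
    "great game of football tonight",
]
train_labels = [1, 1, 0, 0]  # 1 = disruptive, 0 = benign

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

# Each incoming tweet carries text plus the time and place it was posted.
tweets = [
    {"text": "looters breaking into the shop on the corner", "hour": 21, "area": "Tottenham"},
    {"text": "youths gathering outside the retail park", "hour": 21, "area": "Enfield"},
    {"text": "quiet night in, watching tv", "hour": 21, "area": "Camden"},
]
flags = clf.predict([t["text"] for t in tweets])

# Count flagged tweets per (hour, area) bucket; a spike in any bucket is a
# candidate disruptive event worth passing to a human analyst.
candidate_events = Counter(
    (t["hour"], t["area"]) for t, flag in zip(tweets, flags) if flag == 1
)
print(candidate_events.most_common())
```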
Results showed that the machine-learning algorithms were quicker than police sources in all but two of the disruptive events reported…(More)”.
Proceedings of a National Academies Workshop: “The Government-University-Industry Research Roundtable held a meeting on February 28 and March 1, 2017, to explore trends in public opinion of science, examine potential sources of mistrust, and consider ways that cross-sector collaboration between government, universities, and industry may improve public trust in science and scientific institutions in the future. The keynote address on February 28 was given by Shawn Otto, co-founder and producer of the U.S. Presidential Science Debates and author of The War on Science.
“There seems to be an erosion of the standing and understanding of science and engineering among the public,” Otto said. “People seem much more inclined to reject facts and evidence today than in the recent past. Why could that be?” Otto began exploring that question after the candidates in the 2008 presidential election declined an invitation to debate science-driven policy issues and instead chose to debate faith and values.
“Wherever the people are well-informed, they can be trusted with their own government,” wrote Thomas Jefferson. Now, some 240 years later, science is so complex that it is difficult even for scientists and engineers to understand the science outside of their particular fields. Otto argued,
“The question is, are people still well-enough informed to be trusted with their own government? Of the 535 members of Congress, only 11—less than 2 percent—have a professional background in science or engineering. By contrast, 218—41 percent—are lawyers. And lawyers approach a problem in a fundamentally different way than a scientist or engineer. An attorney will research both sides of a question, but only so that he or she can argue against the position that they do not support. A scientist will approach the question differently, not starting with a foregone conclusion and arguing towards it, but examining both sides of the evidence and trying to make a fair assessment.”
According to Otto, anti-science positions are now acceptable in public discourse, in Congress, state legislatures and city councils, in popular culture, and in presidential politics. Discounting factually incorrect statements does not necessarily reshape public opinion in the way some expect it to. What is driving this change? “Science is never partisan, but science is always political,” said Otto. “Science takes nothing on faith; it says, ‘show me the evidence and I’ll judge for myself.’ But the discoveries that science makes either confirm or challenge somebody’s cherished beliefs or vested economic or ideological interests. Science creates knowledge—knowledge is power, and that power is political.”…(More)”.
Report by Atlantic Council and Thomson Reuters: “We are living in a world awash in data. Accelerated interconnectivity, driven by the proliferation of internet-connected devices, has led to an explosion of data—big data. A race is now underway to develop new technologies and implement innovative methods that can handle the volume, variety, velocity, and veracity of big data and apply it smartly to provide decisive advantage and help solve major challenges facing companies and governments.
For policy makers in government, big data and associated technologies, like machine learning and artificial intelligence, have the potential to drastically improve their decision-making capabilities. How governments use big data may be a key factor in improved economic performance and national security. This publication looks at how big data can maximize the efficiency and effectiveness of government and business, while minimizing modern risks. Five authors explore big data across three cross-cutting issues: security, finance, and law.
Chapter 1, “The Conflict Between Protecting Privacy and Securing Nations,” Els de Busser
Chapter 2, “Big Data: Exposing the Risks from Within,” Erica Briscoe
Chapter 3, “Big Data: The Latest Tool in Fighting Crime,” Benjamin Dean, Fellow
Chapter 4, “Big Data: Tackling Illicit Financial Flows,” Tatiana Tropina
Chapter 5, “Big Data: Mitigating Financial Crime Risk,” Miren Aparicio….Read the Publication (PDF)“
Bruno Sánchez-Andrade Nuño at WEForum: “How are we going to close the $2.5 trillion/year finance gap to achieve the Sustainable Development Goals (SDGs)? Whose money? What business model? How to scale it that much? If you read the recent development economics scholarly literature, or follow Jim Kim’s new financing approach at the World Bank, you might hear about the benefits of “blended finance” or “triple bottom lines.” I want to tell you instead about a real case that makes a dent. I want to tell you about Sonam.
Sonam is a 60-year-old farmer in rural Bhutan. His children left for the capital, Thimphu, like many are doing nowadays. Four years ago, he decided to plant 2 acres of hazelnuts on an unused rocky piece of his land. Hazelnut saplings, training, and regular supervision all come from “Mountain Hazelnuts”, Bhutan’s only 100% foreign-invested company. The company funds the cost of the trees and helps him manage his orchard. In return, when the nuts come, he will sell his harvest to them above the guaranteed floor price, which will double his income, at a time when he will be too old to work in his rice field.
You could find similar impact stories for the roughly 10,000 farmers participating in this operation across the country. Farmers are carefully selected to ensure productivity and to maximize social and environmental benefits, for example by prioritizing vulnerable households or reducing land erosion.
But Sonam also gets a visit from Kinzang every month. This is Kinzang’s first job; otherwise, he would have moved to the city in hopes of finding a low-paying job, or, more likely, joined the many unemployed youth from the countryside. Kinzang carefully records data on his smartphone, talks to Sonam, and digitally transmits the data back to the company HQ. There, if a problem is recorded with irrigation or pests, or there is any data anomaly, a team of experts (locally trained agronomists) will visit his orchard to figure out a solution.
The whole system of support, monitoring, and optimization lives on a carefully crafted data platform that feeds information to and from the farmers, the monitors, the agronomist experts, and local government authorities. It ensures that all 10 million trees are healthy and productive, minimizes extra costs, and tests and tracks the effectiveness of new treatments….
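A minimal sketch of that monitoring loop, assuming invented field names and thresholds (the company’s actual platform is not described in technical detail here), might flag anomalous monthly readings for an agronomist visit like this:

```python
# Hypothetical illustration of the feedback loop described above: field monitors
# submit monthly readings, simple range checks flag anomalies, and flagged
# orchards are queued for an agronomist visit. Metrics and thresholds are invented.
EXPECTED_RANGES = {
    "soil_moisture_pct": (20, 60),
    "pest_sightings": (0, 2),
    "healthy_tree_pct": (90, 100),
}

def flag_anomalies(reading):
    """Return the metrics in a monitor's reading that fall outside expected ranges."""
    problems = []
    for metric, (low, high) in EXPECTED_RANGES.items():
        value = reading.get(metric)
        if value is None or not (low <= value <= high):
            problems.append(metric)
    return problems

readings = [
    {"orchard_id": "BT-0412", "soil_moisture_pct": 14, "pest_sightings": 0, "healthy_tree_pct": 96},
    {"orchard_id": "BT-0877", "soil_moisture_pct": 35, "pest_sightings": 1, "healthy_tree_pct": 98},
]

# Build the visit queue: any orchard with at least one out-of-range metric.
visit_queue = []
for reading in readings:
    problems = flag_anomalies(reading)
    if problems:
        visit_queue.append((reading["orchard_id"], problems))

print(visit_queue)  # [('BT-0412', ['soil_moisture_pct'])]
```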
This is also a story that demonstrates why “Data is the new oil” is not the right approach. If data is the new oil, you extract value from the data without much regard to feeding value back to the source of the data. In this system, however, “Data is the new soil.” Data creates a higher ground in which value flows back and forth. It lifts the source of the data, the farmers, into new income generation; it enables optimized operations; and it also helps the whole country: much of the data (such as the road quality data collected by the monitors) is made open for the benefit of the Bhutanese people, without contradiction or friction with the business model….(More)”.
Preface and Roadmap by Andrew Reamer and Julia Lane: “Throughout the United States, there is broadly emerging support to significantly enhance the nation’s capacity for evidence-based policymaking. This support is shared across the public and private sectors and all levels of geography. In recent years, efforts to enable evidence-based analysis have been authorized by the U.S. Congress and funded by state and local governments and philanthropic foundations.
The potential exists for substantial change. There has been dramatic growth in technological capabilities to organize, link, and analyze massive volumes of data from multiple, disparate sources. A major resource is administrative data, which offer both advantages and challenges in comparison to data gathered through the surveys that have been the basis for much policymaking to date. To date, however, capability-building efforts have been largely “artisanal” in nature. As a result, the ecosystem of evidence-based policymaking capacity-building efforts is thin and weakly connected.
Each attempt to add a node to the system faces multiple barriers that require substantial time, effort, and luck to address. Those barriers are systemic. Too much attention is paid to the interests of researchers, rather than to the engagement of data producers. Individual projects serve focused needs and operate at a relative distance from one another. A need thus exists for researchers, policymakers, and funding agencies to move from these artisanal efforts to new, generalized solutions that will catalyze the creation of a robust, large-scale data infrastructure for evidence-based policymaking.
This infrastructure will have to be a “complex, adaptive ecosystem” that expands, regenerates, and replicates as needed while allowing customization and local control. To create a path for achieving this goal, the U.S. Partnership on Mobility from Poverty commissioned 12 papers and then hosted a day-long gathering (January 23, 2017) of over 60 experts to discuss findings and implications for action. Funded by the Gates Foundation, the papers and workshop panels were organized around three topics: privacy and confidentiality, data providers, and comprehensive strategies.
This issue of the Annals showcases those 12 papers, which jointly propose solutions for catalyzing the development of a data infrastructure for evidence-based policymaking.
This preface:
- places current evidence-based policymaking efforts in historical context
- briefly describes the nature of multiple current efforts,
- provides a conceptual framework for catalyzing the growth of any large institutional ecosystem,
- identifies the major dimensions of the data infrastructure ecosystem,
- describes key barriers to the expansion of that ecosystem, and
- suggests a roadmap for catalyzing that expansion….(More)
(All 12 papers can be accessed here).
Jérôme Denis and Samuel Goëta in Social Studies of Science: “Drawing on a two-year ethnographic study within several French administrations involved in open data programs, this article aims to investigate the conditions of the release of government data – the rawness of which open data policies require. This article describes two sets of phenomena. First, far from being taken for granted, open data emerge in administrations through a progressive process that entails uncertain collective inquiries and extraction work. Second, the opening process draws on a series of transformations, as data are modified to satisfy an important criterion of open data policies: the need for both human and technical intelligibility. There are organizational consequences of these two points, which can notably lead to the visibilization or the invisibilization of data labour. Finally, the article invites us to reconsider the apparent contradiction between the process of data release and the existence of raw data. Echoing the vocabulary of one of the interviewees, the multiple operations can be seen as a ‘rawification’ process by which open government data are carefully generated. Such a notion notably helps to build a relational model of what counts as data and what counts as work….(More)”.