Public Entrepreneurship and Policy Engineering


Essay by Beth Noveck at Communications of the ACM: “Science and technology have progressed exponentially, making it possible for humans to live longer, healthier, more creative lives. The explosion of Internet and mobile phone technologies has increased trade, literacy, and mobility. At the same time, life expectancy for the poor has not increased and is declining.

As science fiction writer William Gibson famously quipped, the future is here, but unevenly distributed. With urgent problems from inequality to climate change, we must train more passionate and innovative people—what I call public entrepreneurs—to learn how to leverage new technology to tackle public problems. Public problems are those compelling and important challenges where neither the problem is well understood nor the solution agreed upon, yet we must devise and implement approaches, often from different disciplines, in an effort to improve people’s lives….(More)”.

The most innovative political projects in Europe 2019


The Innovation in Politics Institute: “Since 2017, the Innovation in Politics Awards have been honouring successfully implemented political initiatives – regardless of party affiliation, political level or region. The aim is to strengthen, further develop and inspire democratic politics…

The winning projects by category are:

COOPERATIVE COUNCIL GRONINGEN: Trust is crucial in life – and in politics. The open citizens’ council in Groningen builds trust between citizens and politicians. When they sit shoulder to shoulder in the local council and decide together, a joint sense of responsibility quickly develops. The citizens are chosen at random in order to motivate a variety of people to participate. An evaluation by the University of Groningen showed increased trust on all sides, more active voting behaviour and a stronger community. …

SMART CITY BAD HERSFELD: The “Smart City Bad Hersfeld” project links public administration, citizens and businesses in the city to improve living and working conditions. With 30,000 inhabitants, it is the smallest city in Germany to have developed such a programme. A digital parking guidance system optimises the use of parking space and helps drivers find a spot more quickly. Municipal charging stations for electric cars promote environmentally friendly transport. “Smartboxes” on main roads collect data on traffic noise and waste materials for effective environmental management. Free Internet in the city centre encourages everyone to use these services…(More)”

Collective Intelligence: A Taxonomy and Survey


Paper by Feijuan He et al: “Collective intelligence (CI) refers to the intelligence that emerges at the macro level of a collective and transcends that of its individual members. CI remains a popular research topic, studied by researchers in different areas such as sociology, economics, biology, and artificial intelligence. In this survey, we summarize work on CI across these fields. First, based on whether individuals interact with one another and whether the aggregation process involves a feedback mechanism, we establish a CI taxonomy comprising three paradigms: isolation, collaboration and feedback. We then conduct a statistical analysis of the literature to explain the differences among the three paradigms and their development in recent years. Second, we elaborate the types of CI under each paradigm and discuss the generation mechanism or theoretical basis of each type. Third, we describe several CI-related applications from 2019, which can be appropriately categorized by our proposed taxonomy. Finally, we summarize future research directions for CI under each paradigm. We hope that this survey helps researchers understand the current state of CI and clarifies directions for future research….(More)”
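
To make the paper’s three paradigms a little more concrete, here is a small editorial sketch in Python (not code from the paper; all names and numbers are illustrative assumptions). A crowd estimates an unknown quantity, and the only thing that changes across the three runs is whether individuals interact before aggregation and whether the running aggregate is fed back to them.

```python
import random
import statistics

# Toy illustration of the survey's three aggregation paradigms (an editorial
# sketch, not the paper's formalism). A crowd estimates an unknown quantity;
# the paradigms differ in whether individuals interact and whether the
# aggregate is fed back to them.

random.seed(0)
TRUE_VALUE = 100.0
N = 300

guesses = [random.gauss(TRUE_VALUE, 25.0) for _ in range(N)]

def report(label, estimates):
    print(f"{label:13s} aggregate={statistics.mean(estimates):6.1f} "
          f"spread={statistics.stdev(estimates):5.1f}")

# 1. Isolation: no interaction; the collective estimate is a simple average
#    of fully independent guesses (classic "wisdom of crowds").
report("isolation", guesses)

# 2. Collaboration: individuals exchange views in small groups and adopt the
#    group consensus before the global aggregation.
groups = [guesses[i:i + 10] for i in range(0, N, 10)]
collaborated = [statistics.mean(g) for g in groups for _ in g]
report("collaboration", collaborated)

# 3. Feedback: the running aggregate is broadcast back and each individual
#    revises halfway toward it before the final average is taken.
broadcast = statistics.mean(guesses)
revised = [0.5 * g + 0.5 * broadcast for g in guesses]
report("feedback", revised)
```

In this toy setup the collective estimate stays close to the true value in all three cases; what changes is the diversity of individual opinions, which shrinks once interaction and feedback enter the process. That is one simple way to see why the presence of interaction and of a feedback mechanism serve as the axes of the taxonomy.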

Regulating Artificial Intelligence


Book by Thomas Wischmeyer and Timo Rademacher: “This book assesses the normative and practical challenges for artificial intelligence (AI) regulation, offers comprehensive information on the laws that currently shape or restrict the design or use of AI, and develops policy recommendations for those areas in which regulation is most urgently needed. By gathering contributions from scholars who are experts in their respective fields of legal research, it demonstrates that AI regulation is not a specialized sub-discipline, but affects the entire legal system and thus concerns all lawyers. 

Machine learning-based technology, which lies at the heart of what is commonly referred to as AI, is increasingly being employed to make policy and business decisions with broad social impacts, and therefore runs the risk of causing wide-scale damage. At the same time, AI technology is becoming more and more complex and difficult to understand, making it harder to determine whether or not it is being used in accordance with the law. In light of this situation, even tech enthusiasts are calling for stricter regulation of AI. Legislators, too, are stepping in and have begun to pass AI laws, including the restrictions on solely automated decision-making in Article 22 of the General Data Protection Regulation, the New York City AI transparency bill, and the 2017 amendments to the German Cartel Act and German Administrative Procedure Act. While the belief that something needs to be done is widely shared, there is far less clarity about what exactly can or should be done, or what effective regulation might look like. 

The book is divided into two major parts, the first of which focuses on features common to most AI systems, and explores how they relate to the legal framework for data-driven technologies, which already exists in the form of (national and supra-national) constitutional law, EU data protection and competition law, and anti-discrimination law. In the second part, the book examines in detail a number of relevant sectors in which AI is increasingly shaping decision-making processes, ranging from the notorious social media and the legal, financial and healthcare industries, to fields like law enforcement and tax law, in which we can observe how regulation by AI is becoming a reality….(More)”.

The Golden Age of Social Science


Essay by Anastasia Buyalskaya, Marcos Gallo and Colin Camerer: “In this short essay we argue that social science is entering a golden age, marked by explosive growth in new data and analytic methods, interdisciplinarity, and a recognition that both of those ingredients are necessary to solve hard problems. Two examples, behavioral economics and social networks, are given to illustrate these themes. Numerous other specific study examples are then given. We also address the challenges that accompany the three positive trends, which include informatics, career incentives, and the search for unifying frameworks…(More)”.

Is There a Crisis of Truth?


Essay by Steven Shapin: “…It seems irresponsible or perverse to reject the idea that there is a Crisis of Truth. No time now for judicious reflection; what’s needed is a full-frontal attack on the Truth Deniers. But it’s good to be sure about the identity of the problem before setting out to solve it. Conceiving the problem as a Crisis of Truth, or even as a Crisis of Scientific Authority, is not, I think, the best starting point. There’s no reason for complacency, but there is reason to reassess which bits of our culture are in a critical state and, once they are securely identified, what therapies are in order.

Start with the idea of Truth. What could be more important, especially if the word is used — as it often is in academic writing — as a placeholder for Reality? But there’s a sort of luminous glow around the notion of Truth that prejudges and pre-processes the attitudes proper to entertain about it. The Truth goes marching on. God is Truth. The Truth shall set you free. Who, except the mad and the malevolent, could possibly be against Truth? It was, after all, Pontius Pilate who asked, “What is Truth?” — and then went off to wash his hands.

So here’s an only apparently pedantic hint about how to construe Truth and also about why our current problem might not be described as a Crisis of Truth. In modern common usage, Truth is a notably uncommon term. The natural home of Truth is not in the workaday vernacular but in weekend, even language-gone-on-holiday, scenes. The notion of Truth tends to crop up when statements about “what’s the case” are put under pressure, questioned, or picked out for celebration. Statements about “the case” can then become instances of the Truth, surrounded by an epistemic halo. Truth is invoked when we swear to tell it — “the whole Truth and nothing but” — in legal settings or in the filling-out of official forms when we’re cautioned against departing from it; or in those sorts of school and bureaucratic exams where we’re made to choose between True and False. Truth is brought into play when it’s suspected that something of importance has been willfully obscured — as when Al Gore famously responded to disbelief in climate change by insisting on “an inconvenient truth” or when we demand to be told the Truth about the safety of GMOs.

Truth-talk appears in such special-purpose forums as valedictory statements where scientists say that their calling is a Search for Truth. And it’s worth considering the difference between saying that and saying they’re working to sequence a breast cancer gene or to predict when a specific Indonesian volcano is most likely to erupt. Truth stands to Matters-That-Are-the-Case roughly as incantations, proverbs, and aphorisms stand to ordinary speech. Truth attaches more to some formal intellectual practices than to others — to philosophy, religion, art, and, of course, science, even though in science there is apparent specificity. Compare those sciences that seem good fits with the notion of a Search for Truth to those that seem less good fits: theoretical physics versus seismology, academic brain science versus research on the best flavoring for a soft drink. And, of course, Truth echoes around philosophy classrooms and journals, where theories of what it is are advanced, defended, and endlessly disputed. Philosophers collectively know that Truth is very important, but they don’t collectively know what it is.

I’ve said that Truth figures in worries about the problems of knowledge we’re said to be afflicted with, where saying that we have a Crisis of Truth both intensifies the problem and gives it a moral charge. In May 2019, Angela Merkel gave the commencement speech at Harvard. Prettily noting the significance of Harvard’s motto, Veritas, the German Chancellor described the conditions for academic inquiry, which, she said, requires that “we do not describe lies as truth and truth as lies,” nor that “we accept abuses [Missstände] as normal.” The Harvard audience stood and cheered: they understood the coded political reference to Trump and evidently agreed that the opposite of Truth was a lie — not just a statement that didn’t match reality but an intentional deception. You can, however, think of Truth’s opposite as nonsense, error, or bullshit, but calling it a lie was to position Truth in a moral field. Merkel was not giving Harvard a lesson in philosophy but a lesson in global civic virtue….(More)”.

The Crowd and the Cosmos: Adventures in the Zooniverse


Book by Chris Lintott: “The world of science has been transformed. Where once astronomers sat at the controls of giant telescopes in remote locations, praying for clear skies, now they have no need to budge from their desks, as data arrives in their inbox. And what they receive is overwhelming; projects now being built provide more data in a few nights than in the whole of humanity’s history of observing the Universe. It’s not just astronomy either – dealing with this deluge of data is the major challenge for scientists at CERN, and for biologists who use automated cameras to spy on animals in their natural habitats. Artificial intelligence is one part of the solution – but will it spell the end of human involvement in scientific discovery?

No, argues Chris Lintott. We humans still have unique capabilities to bring to bear – our curiosity, our capacity for wonder, and, most importantly, our capacity for surprise. It seems that humans and computers working together do better than computers can on their own. But with so much scientific data, you need a lot of scientists – a crowd, in fact. Lintott found such a crowd in the Zooniverse, the web-based project that allows hundreds of thousands of enthusiastic volunteers to contribute to science.

In this book, Lintott describes the exciting discoveries that people all over the world have made, from galaxies to pulsars, exoplanets to moons, and from penguin behavior to old ships’ logs. This approach builds on a long history of so-called “citizen science,” given new power by fast internet and distributed data. Discovery is no longer the remit only of scientists in specialist labs or academics in ivory towers. It’s something we can all take part in. As Lintott shows, it’s a wonderful way to engage with science, yielding new insights daily. You, too, can help explore the Universe in your lunch hour…(More)”.
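
As a rough illustration of how contributions from such a crowd can become a scientific data product, here is a minimal consensus-voting sketch in Python (an editorial example with made-up volunteer names, subjects and labels; the Zooniverse’s actual aggregation pipelines are considerably more sophisticated).

```python
from collections import Counter, defaultdict

# Hypothetical volunteer classifications: (volunteer, subject, label).
# In a real crowdsourced project each image ("subject") is shown to many
# independent volunteers.
classifications = [
    ("alice", "galaxy-001", "spiral"),
    ("bob",   "galaxy-001", "spiral"),
    ("carol", "galaxy-001", "elliptical"),
    ("dana",  "galaxy-001", "spiral"),
    ("alice", "galaxy-002", "elliptical"),
    ("erik",  "galaxy-002", "elliptical"),
    ("bob",   "galaxy-002", "merger"),
]

# Tally votes per subject and take the most common label as the consensus,
# keeping the agreement level so that uncertain subjects can be flagged
# for further review.
votes = defaultdict(Counter)
for volunteer, subject, label in classifications:
    votes[subject][label] += 1

for subject, counter in sorted(votes.items()):
    label, count = counter.most_common(1)[0]
    total = sum(counter.values())
    print(f"{subject}: {label} (agreement {count / total:.0%}, n={total})")
```

Weighting volunteers by their track record, or routing low-agreement subjects to experts or to machine classifiers, are natural refinements of the same idea; and it is precisely in the low-agreement, surprising cases that the human capacity for wonder Lintott celebrates tends to pay off.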

The Downside of Tech Hype


Jeffrey Funk at Scientific American: “Science and technology have been the largest drivers of economic growth for more than 100 years. But this contribution seems to be declining. Growth in labor productivity has slowed, corporate revenue growth per research dollar has fallen, the value of Nobel Prize–winning research has declined, and the number of researchers needed to develop new molecular entities (e.g., drugs), to achieve the same percentage improvements in crop yields, or to sustain the growth in the number of transistors on a microprocessor chip (commonly known as Moore’s Law) has risen. More recently, the percentage of start-ups that are profitable at the time of their initial public stock offering has dropped to lows not seen since the dot-com bubble, and start-ups such as Uber, Lyft and WeWork have accumulated losses far larger than those of any earlier start-up, including Amazon.

Although the reasons for these changes are complex and unclear, one thing is certain: excessive hype about new technologies makes it harder for scientists, engineers and policy makers to objectively analyze and understand these changes, or to make good decisions about new technologies.

One driver of hype is the professional incentives of venture capitalists, entrepreneurs, consultants and universities. Venture capitalists have convinced decision makers that venture capital funding and start-ups are the new measures of their success. Professional and business service consultants hype technology for both incumbents and start-ups to make potential clients believe that new technologies make existing strategies, business models and worker skills obsolete every few years.

Universities are themselves a major source of hype. Their public relations offices often exaggerate the results of research papers, commonly implying that commercialization is close at hand, even though the researchers know it will take many years if not decades. Science and engineering courses often imply an easy path to commercialization, while misleading and inaccurate forecasts from Technology Review and Scientific American make it easier for business schools and entrepreneurship programs to claim that opportunities are everywhere and that incumbent firms are regularly being disrupted. With a growth in entrepreneurship programs from about 16 in 1970 to more than 2,000 in 2014, many young people now believe that being an entrepreneur is the cool thing to be, regardless of whether they have a good idea.

Hype from these types of experts is exacerbated by the growth of social media, the falling cost of website creation, blogging, posting of slides and videos and the growing number of technology news, investor and consulting websites….(More)”.
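
One way to read the opening claim, that the same percentage improvements now require ever more researchers, is as a statement about research productivity: output growth per researcher. The short Python sketch below shows the arithmetic with entirely made-up numbers (they are assumptions for illustration, not figures from the article).

```python
# Back-of-the-envelope arithmetic for declining research productivity
# (an editorial sketch; the numbers below are assumptions, not data from
# the article). If a metric keeps improving at the same annual rate while
# the research workforce behind it grows, output growth per researcher falls.

annual_improvement = 0.35        # assumed steady yearly gain in some metric
researchers_earlier = 1_000      # assumed effective researchers, earlier era
researchers_today = 18_000       # assumed effective researchers, today

productivity_earlier = annual_improvement / researchers_earlier
productivity_today = annual_improvement / researchers_today

decline = productivity_earlier / productivity_today
print(f"research productivity has fallen by a factor of {decline:.0f}")
```

The particular numbers do not matter; the article’s concern is that hype makes this kind of sober accounting harder to carry out and harder to hear.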

Responsible Artificial Intelligence


Book by Virginia Dignum: “In this book, the author examines the ethical implications of Artificial Intelligence systems as they integrate and replace traditional social structures in new sociocognitive-technological environments. She discusses issues related to the integrity of researchers, technologists, and manufacturers as they design, construct, use, and manage artificially intelligent systems; formalisms for reasoning about moral decisions as part of the behavior of artificial autonomous systems such as agents and robots; and design methodologies for social agents based on societal, moral, and legal values. 


Throughout the book the author discusses related work, conscious of both classical, philosophical treatments of ethical issues and the implications in modern, algorithmic systems, and she combines regular references and footnotes with suggestions for further reading. This short overview is suitable for undergraduate students, in both technical and non-technical courses, and for interested and concerned researchers, practitioners, and citizens….(More)”.

Human Rights in the Age of Platforms


Book by Rikke Frank Jørgensen: “Today such companies as Apple, Facebook, Google, Microsoft, and Twitter play an increasingly important role in how users form and express opinions, encounter information, debate, disagree, mobilize, and maintain their privacy. What are the human rights implications of an online domain managed by privately owned platforms? According to the Guiding Principles on Business and Human Rights, adopted by the UN Human Rights Council in 2011, businesses have a responsibility to respect human rights and to carry out human rights due diligence. But this goal is dependent on the willingness of states to encode such norms into business regulations and of companies to comply. In this volume, contributors from across law and internet and media studies examine the state of human rights in today’s platform society.

The contributors consider the “datafication” of society, including the economic model of data extraction and the conceptualization of privacy. They examine online advertising, content moderation, corporate storytelling around human rights, and other platform practices. Finally, they discuss the relationship between human rights law and private actors, addressing such issues as private companies’ human rights responsibilities and content regulation…(More)”.