Stefaan Verhulst
Internews Report: “Data has the potential to help communities understand their biggest challenges – why people become sick or well, why development initiatives succeed or fail, how government actions align with citizens’ priorities. However, most people do not have the skills or inclination to engage with data directly. That’s where data journalists and the open data community come in.
This report explains the role of data journalists and open data, and lays out the key considerations that can help predict the success or failure of new data journalism initiatives….
Max Opray in The Guardian: “The first two user tutorials are pretty stock standard but, from there, things escalate dramatically. After mastering How to Sign Up and How to Recover Your Password, users are apparently ready to advance to lesson number three: How to Create a Democracy.
As it turns out, on DemocracyOS, this is a relatively straightforward matter – not overthrowing the previous regime nor exterminating the last traces of the royal lineage in order to pave the way for a new world order. Instead Argentinian developers Democracia en Red have made it a simple matter of clicking a button to form a group and thrash out the policies voters wish to see enacted.
It is one of a range of digital platforms for direct democracy created by developers and activists to redefine the relationship between citizens and their governments, with the powers that be in Latin American city councils through to European anti-austerity parties making the upgrade to democracy 2.0.
Reshaping how government works is a difficult enough pitch by itself but, beyond that, there’s another challenge facing developers – the online trolls are ready and waiting.
Britain alone this year offered up two examples of what impact trolls could have on online direct democracy. There was the case of “Boaty McBoatface” famously winning a Natural Environment Research Council poll to determine the name of a multimillion-pound Arctic research vessel, and then there was the more serious case of trolls adding the signatures of thousands of residents of countries such as the Cayman Islands and Vatican City to a formal petition calling for a second Brexit referendum, in order to have the entire document disregarded as an online prank.
In the US presidential election even the politicians are getting in on it, with a pro-Hillary Clinton super PAC (political action committee) hiring an army of online commenters to defend the candidate in arguments on social media, while the Republican contender, Donald Trump, is himself engaging in textbook trolling behaviour – whether that’s urging the hacking of Clinton’s emails, revealing the phone number of a Republican rival during the primaries, or unleashing a constant stream of controversial statements as a means of derailing conversations, attracting attention, and humiliating his targets.
So what does this mean for digital platforms for direct democracy? By merging the world of the internet with that of politics, will we all end up governed by some fusion of trolls and Trumps promising to build Wally McWallfaces on our borders? And will the technologies of the fourth industrial revolution also usher in a revolution in how democracy functions?…(More)”
John Davis at the Hill: “…What became immediately clear to me was that — although not impossible to overcome — the lack of consistency and shared best practices across all federal agencies in accepting and reviewing public comments was a serious impediment. The promise of Natural Language Processing and cognitive computing to make the public comment process light years faster and more transparent becomes that much more difficult without a consensus among federal agencies on what type of data is collected – and how.
“There is a whole bunch of work we have to do around getting government to be more customer friendly and making it at least as easy to file your taxes as it is to order a pizza or buy an airline ticket,” President Obama recently said in an interview with WIRED. “Whether it’s encouraging people to vote or dislodging Big Data so that people can use it more easily, or getting their forms processed online more simply — there’s a huge amount of work to drag the federal government and state governments and local governments into the 21st century.”
…expanding the discussion around Artificial Intelligence and regulatory processes to include how the technology should be leveraged to ensure fairness and responsiveness in the very basic processes of rulemaking – in particular public notices and comments. These technologies could also enable us to consider not just public comments formally submitted to an agency, but the entire universe of statements made through social media posts, blogs, chat boards — and conceivably every other electronic channel of public communication.
Obviously, an anonymous comment on the Internet should not carry the same credibility as a formally submitted, personally signed statement, just as sworn testimony in court holds far greater weight than a grapevine rumor. But so much public discussion today occurs on Facebook pages, in Tweets, on news website comment sections, etc. Anonymous speech enjoys explicit protection under the Constitution, based on a justified expectation that certain sincere statements of sentiment might result in unfair retribution from the government.
Should we simply ignore the valuable insights about actual public sentiment on specific issues made possible through the power of Artificial Intelligence, which can ascertain meaning from an otherwise unfathomable ocean of relevant public conversations? With certain qualifications, I believe Artificial Intelligence, or AI, should absolutely be employed in the critical effort to gain insights from public comments – signed or anonymous.
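The provenance-sensitive weighting the author argues for – counting a signed formal submission more heavily than an anonymous post – can be sketched in a few lines. This is a purely illustrative toy (the weight values, the `Comment` class, and the pre-computed sentiment scores are all hypothetical, not any agency's actual method):

```python
from dataclasses import dataclass

# Illustrative provenance weights: a formally signed submission counts more
# than an anonymous comment (the specific values are hypothetical).
PROVENANCE_WEIGHT = {
    "formal_submission": 1.0,
    "signed_social_post": 0.6,
    "anonymous_comment": 0.3,
}

@dataclass
class Comment:
    text: str
    provenance: str   # one of the keys above
    sentiment: float  # -1.0 (opposed) .. +1.0 (supportive), from some NLP model

def weighted_sentiment(comments):
    """Aggregate public sentiment, discounting less credible channels."""
    total_weight = sum(PROVENANCE_WEIGHT[c.provenance] for c in comments)
    if total_weight == 0:
        return 0.0
    return sum(PROVENANCE_WEIGHT[c.provenance] * c.sentiment
               for c in comments) / total_weight

comments = [
    Comment("Strongly support the proposed rule.", "formal_submission", 0.9),
    Comment("This rule is terrible.", "anonymous_comment", -0.8),
    Comment("Cautiously in favour.", "signed_social_post", 0.4),
]
print(round(weighted_sentiment(comments), 3))  # → 0.474
```

In a real rulemaking pipeline the sentiment scores would come from a trained language model and the weights would need public justification, but the structure – aggregate everything, weight by credibility rather than excluding anonymous speech outright – mirrors the court-testimony analogy in the passage above.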
“In the criminal justice system, some of the biggest concerns with Big Data are the lack of data and the lack of quality data,” the NSTC report authors state. “AI needs good data. If the data is incomplete or biased, AI can exacerbate problems of bias.” As a former federal criminal prosecutor and defense attorney, I am well familiar with the absolute necessity to weigh the relative value of various forms of evidence – or in this case, data…(More)“
Paper by Anne Lundgren in “Environment and Planning C: Government and Policy”: “In the networked information and knowledge-based economy and society, the notions of ‘open’ and ‘openness’ are used in a variety of contexts: open source, open access, open economy, open government, open innovation – just to name a few. This paper aims at discussing openness and developing a taxonomy that may be used to analyse the concept of openness. Are there different qualities of openness? How are these qualities interrelated? What analytical tools may be used to understand openness? In this paper four qualities of openness recurrent in literature and debate are explored: accessibility, transparency, participation and sharing. To further analyse openness, new institutional theory as interpreted by Williamson (2000) is used, encompassing four different institutional levels: cultural embeddedness, institutional environment, governance structure and resource allocations. At what institutional levels is openness supported and/or constrained? Accessibility as a quality of openness seems to have a particularly strong relation to the other qualities of openness, whereas the notions of sharing and collaborative economics seem to be the most complex and contested quality of openness in the knowledge-based economy. This research contributes to academia, policy and governance, as handling the challenges of openness vs. closure in different contexts – territorial, institutional and/or organizational – demands not only a better understanding of the concept, but also tools for analysis….(More)”
Book edited by Eleanor Jupp, Jessica Pykett, and Fiona M. Smith: “What is the political allure, value and currency of emotions within contemporary cultures of governance? What does it mean to govern more humanely? Since the emergence of an emotional turn in human geography over the last decade, the notion that our emotions matter in understanding an array of social practices, spatial formations and aspects of everyday life is no longer seen as controversial. This book brings recent developments in emotional geography into dialogue with social policy concerns and contemporary issues of governance. It sets the intellectual scene for research into the geographical dimensions of the emotionalized states of the citizen, policy maker and public service worker, and highlights new research on the emotional forms of governance which now characterise public life.
An international range of empirical field studies is used to examine issues of regulation, modification, governance and potential manipulation of emotional affects, professional and personal identities and political technologies. Contributors provide analysis of the role of emotional entanglements in policy strategy, policy implementation, service delivery, citizenship and participation, as well as considering the emotional nature of the research process itself. It will be of interest to researchers and students within social policy, human geography, politics and related disciplines….(More)”
Book by Bart Custers: “Given the popularity of drones and the fact that they are easy and cheap to buy, it is generally expected that the ubiquity of drones will significantly increase within the next few years. This raises questions as to what is technologically feasible (now and in the future), what is acceptable from an ethical point of view and what is allowed from a legal point of view. Drone technology is to some extent already available and to some extent still in development. The aim and scope of this book is to map the opportunities and threats associated with the use of drones and to discuss the ethical and legal issues of the use of drones.
This book provides an overview of current drone technologies and applications and of what to expect in the next few years. The question of how to regulate the use of drones in the future is addressed, by considering conditions and contents of future drone legislation and by analyzing issues surrounding privacy and safeguards that can be taken. As such, this book is valuable to scholars in several disciplines, such as law, ethics, sociology, politics and public administration, as well as to practitioners and others who may be confronted with the use of drones in their work, such as professionals working in the military, law enforcement, disaster management and infrastructure management. Individuals and businesses with a specific interest in drone use may also find in the nineteen contributions contained in this volume unexpected perspectives on this new field of research and innovation….(More)”
in The Conversation: “Health professionals have a duty to improve the accuracy of medical entries in Wikipedia, according to a letter published today in Lancet Global Health, because it’s the first port of call for people all over the world seeking medical information.
In our correspondence, a group of international colleagues and I call on medical journals to do more to help experts make Wikipedia more accurate, and for the medical community to make improving its content a top priority.
Use around the world
Ranked the fifth most-visited website in the world, Wikipedia is one of the most-read sources of medical information by the general public. It’s also frequently the first port of call for doctors, medical students, lawmakers, and educators.
Access is provided free of charge on mobile phones in many countries, under the Wikipedia Zero scheme. In developing nations, this has helped the site become the main source of information on medical topics. During the 2014 Ebola outbreak, for instance, page views of the Ebola virus disease article peaked at more than 2.5 million per day.
Earlier this year, the site launched the free Medical Wikipedia Offline app in seven languages. The Android app has had nearly 100,000 downloads in its first few months of release. It’s particularly useful in low and middle-income countries, where internet access is typically slow and expensive.
All this makes Wikipedia’s accuracy vital because every medical entry on the collaborative online encyclopedia has the potential for immediate real-world health consequences.
A question of priorities
Given its model of allowing anyone to edit entries, Wikipedia is already surprisingly accurate, famously rivalling Encyclopedia Britannica. But even as the online encyclopedia matures, the accuracy of its medical content remains inconsistent.
The platform has historically struggled to attract expert contributions from researchers. Improving Wikipedia entries tends to be low on the list of priorities for doctors and other health professionals…
accurate information on medication affects what doctors prescribe, what patients request, and what students learn…
Several scholarly journals have been exploring academic peer review of Wikipedia entries, and more look set to join them soon. Examples of joint publishing include the Wikipedia articles for Dengue fever and the cerebellum, which have been reviewed and published by the medical journals Open Medicine and the WikiJournal of Medicine respectively.
PLOS Computational Biology similarly joint-publishes review articles in its journal and in Wikipedia for maximum impact. And the journal RNA Biology requires researchers describing a new RNA family to also write a Wikipedia entry for it.
Embedding the new approach
Progress has been slow, but several independent ventures show how the attitudes of major players in the biomedical ecosystem are beginning to shift further, and take Wikipedia more seriously.
Cochrane, which creates medical guidelines after reviewing research data, now finds Wikipedian partners for its Review Groups to help disseminate their information through Wikipedia.
Medical schools are also getting involved in improving Wikipedia entries. Medical students at University of California, San Francisco, can gain course credit for supervised editing of Wikipedia articles in need of attention….(More)”
Book by Shin’ichi Konomi and George Roussos: “In recent years, ubiquitous computing has become increasingly integrated into the lives of people in modern society. As these technologies become more pervasive, new opportunities open for making citizens’ environments more comfortable, convenient, and efficient.
Enriching Urban Spaces with Ambient Computing, the Internet of Things, and Smart City Design is a pivotal reference source for the latest scholarly material on the interaction between people and computing systems in contemporary society, showcasing how ubiquitous computing influences and shapes urban environments. Highlighting the impacts of these emerging technologies from an interdisciplinary perspective, this book is ideally designed for professionals, researchers, academicians, and practitioners interested in the influential state of pervasive computing within urban contexts….(Table of Contents and List of Contributors)”.
Essay by Stefaan G. Verhulst and Danny Lämmerhirt: “…To realize its potential there is a need for more evidence on the full life cycle of open data – within and across settings and sectors….
In particular, three substantive areas were identified that could benefit from interdisciplinary and comparative research:
Demand and use: First, many expressed a need to become smarter about the demand and use side of open data. Much of the focus, given the nascent nature of many initiatives around the world, has been on the supply side of open data. Yet to be more responsive and sustainable, more insight needs to be gained into the demand and/or user needs.
Conversations repeatedly emphasized that we should differentiate between open data demand and use. Open data demand and use can be analyzed from multiple directions: 1) top-down, starting from a data provider, to intermediaries, to the end users and/or audiences; or 2) bottom-up, studying the data demands articulated by individuals (for instance, through FOIA requests), and how these demands can be taken up by intermediaries and open data providers to change what is being provided as open data.
Research should scrutinize each stage (provision, intermediation, use and demand) on its own, but also examine the interactions between stages (for instance, how may open data demand inform data supply, and how does data supply influence intermediation and use?)….
Informing data supply and infrastructure: Second, we heard on numerous occasions a call upon researchers and domain experts to help in identifying “key data” and inform the government data infrastructure needed to provide them. Principle 1 of the International Open Data Charter states that governments should provide key data “open by default”, yet the question remains how to identify “key” data (e.g., would that mean data relevant to society at large?).
Which governments (and other public institutions) should be expected to provide key data and which information do we need to better understand government’s role in providing key data? How can we evaluate progress around publishing these data coherently if countries organize the capture, collection, and publication of this data differently?…
Impact: In addition to those two focus areas – covering the supply and demand side – there was also a call to become more sophisticated about impact. Too often impact gets confused with outputs, or even activities. Given the embryonic and iterative nature of many open data efforts, signals of impact are limited and often preliminary. In addition, different types of impact (such as enhancing transparency versus generating innovation and economic growth) require different indicators and methods. At the same time, to allow for regular evaluations of what works and why, there is a need for common assessment methods that can generate comparative and directional insights….
Research Networking: Several researchers identified a need for better exchange and collaboration among the research community. This would make it possible to tackle the research questions and challenges listed above, as well as to identify gaps in existing knowledge, to develop common research methods and frameworks, and to learn from each other. Key questions posed included: how to nurture and facilitate networking among researchers and (topical) experts from different disciplines, focusing on different issues or using different methods? How are different sub-networks related to or disconnected from each other (for instance, how connected are the data4development, freedom of information, or civic tech research communities)? In addition, an interesting discussion emerged around how researchers can also network more with those who are part of the respective universe of analysis – potentially generating some kind of participatory research design….(More)”
Adam Mann in Nature: “It was a great way to mix science with gambling, says Anna Dreber. The year was 2012, and an international group of psychologists had just launched the ‘Reproducibility Project’ — an effort to repeat dozens of psychology experiments to see which held up [1]. “So we thought it would be fantastic to bet on the outcome,” says Dreber, who leads a team of behavioural economists at the Stockholm School of Economics.
In particular, her team wanted to see whether scientists could make good use of prediction markets: mini Wall Streets in which participants buy and sell ‘shares’ in a future event at a price that reflects their collective wisdom about the chance of the event happening. As a control, Dreber and her colleagues first asked a group of psychologists to estimate the odds of replication for each study on the project’s list. Then the researchers set up a prediction market for each study, and gave the same psychologists US$100 apiece to invest.
When the Reproducibility Project revealed last year that it had been able to replicate fewer than half of the studies examined [2], Dreber found that her experts hadn’t done much better than chance with their individual predictions. But working collectively through the markets, they had correctly guessed the outcome 71% of the time [3].
Experiments such as this are a testament to the power of prediction markets to turn individuals’ guesses into forecasts of sometimes startling accuracy. That uncanny ability ensures that during every US presidential election, voters avidly follow the standings for their favoured candidates on exchanges such as Betfair and the Iowa Electronic Markets (IEM). But prediction markets are increasingly being used to make forecasts of all kinds, on everything from the outcomes of sporting events to the results of business decisions. Advocates maintain that they allow people to aggregate information without the biases that plague traditional forecasting methods, such as polls or expert analysis….
Prediction markets have also had some high-profile misfires, however — such as giving the odds of a Brexit ‘stay’ vote as 85% on the day of the referendum, 23 June. (UK citizens in fact narrowly voted to leave the European Union.) And prediction markets lagged well behind conventional polls in predicting that Donald Trump would become the 2016 Republican nominee for US president.
Such examples have inspired academics to probe prediction markets. Why do they work as well as they do? What are their limits, and why do their predictions sometimes fail?…(More)”