Paper by Henry Farrell and Abraham L. Newman: “…Domestically, policy-makers and scholars argued that information openness, like economic openness, would go hand-in-glove with political liberalization and the spread of democratic values. This was perhaps, in part, an accident of timing: the Internet – which seemed to many to be inherently resistant to censorship – burgeoned shortly after the collapse of Communism in the Soviet Union and Eastern Europe. Politicians celebrated the dawn of a new era of open communication, while scholars began to argue that the spread of the Internet would lead to the spread of democracy (Diamond 2010; Shirky 2008).
A second wave of literature suggested that Internet-based social media had played a crucial role in spreading freedom in the Arab Spring (Howard 2010; Hussain and Howard 2013). There were some skeptics who highlighted the vexed relationship between open networks and the closed national politics of autocracies (Goldsmith and Wu 2006), or who pointed out that the Internet was nowhere near as censorship-resistant as early optimists had supposed (Deibert et al. 2008). Even these pessimists seemed to believe that the Internet could bolster liberalism in healthy democracies, although it would by no means necessarily prevail over tyranny.
The international liberal order for information, however, finds itself increasingly on shaky ground. Non-democratic regimes ranging from China to Saudi Arabia have created domestic technological infrastructures that undermine, and provide an alternative to, the core principles of the regime (Boas 2006; Deibert 2008).
The European Union, while still generally supportive of open communication and free speech, has grown skeptical of the regime’s focus on unfettered economic access and has used privacy and anti-trust policy to challenge its most neo-liberal elements (Newman 2008). Non-state actors like Wikileaks have relied on information openness as a channel of disruption and perhaps manipulation.
More troubling are the arguments of a new literature – that open information flows are less a harbinger of democracy than a vector of attack…
How can IR scholars make sense of this Janus-faced quality of information? In this brief memo, we argue that much of the existing work on information technology and information flows suffers from two key deficiencies.
First – there has been an unhelpful separation between two important debates about information flows and liberalism. One – primarily focused on the international level – concerned global governance of information networks, examining how states (especially the US) arrived at and justified their policy stances, and how power dynamics shaped the battles between liberal and illiberal states over what the relevant governance arrangements should be (Klein 2002; Singh 2008; Mueller 2009). …
This leads to the second problem – that research has failed to appreciate the dynamics of contestation over time…(More)”
George Atalla at Ernst and Young: “Analysis of the success or failure of government digital transformation projects tends to focus on the technology that has been introduced. Seldom discussed is the role played by organizational culture and by a government’s willingness to embrace new approaches and working practices. And yet factors such as an ability to transcend bureaucratic working styles and collaborate with external partners are just as vital to success as deploying the right IT…
The study, Inside the Black Box: Journey Mapping Digital Innovation in Government, used a range of qualitative research tools, including rich pictures, journey maps…
The aim of the study was to look inside the “black box” of digital transformation to find out what really goes on within the teams responsible for delivery. In every case, the implementation journey involved ups and downs, advances and setbacks, but there were always valuable lessons to learn. We have extracted the six key insights for governments, outlined below, to provide guidance for government and public sector leaders who are embarking on their own innovation journey…(More)”.
Book edited by Carolin Kaltofen, Madeline Carr and Michele Acuto: “This book examines the role of technology in the core voices of International Relations theory and how this has shaped the contemporary thinking of ‘IR’ across some of the discipline’s major texts. Through an interview format between different generations of IR scholars, the conversations in the book analyse the relationship between technology and concepts like power, security and global order. They explore to what extent ideas about the role and implications of technology help us understand the way IR has been framed and the way world politics is conceived of today. This innovative text will appeal to scholars in Politics and International Relations as well as STS, Human Geography and Anthropology…(More)”.
Book edited by Aboul Ella Hassanien, Mohamed Elhoseny, Syed Hassan Ahmed and Amit Kumar Singh: “This book offers an essential guide to IoT Security, Smart Cities, IoT Applications, etc. In addition, it presents a structured introduction to the subject of destination marketing and an exhaustive review…
Written in plain and straightforward language, the book offers a self-contained resource for readers with no prior background in the field. Primarily intended for students in Information Security and IoT applications (including smart cities systems and data heterogeneity), it will also greatly benefit academic researchers, IT professionals, policymakers and legislators. It is well suited as a reference book for both undergraduate and graduate courses on information security approaches, the Internet of Things, and real-world intelligent applications…(More)”.
Paper by Abhishek Nagaraj: “The public sector provides many types of information, such as geographic and census maps, that firms use when making decisions. However, the economic implications of such information infrastructure remain unexamined.
This study estimates the impact of information from Landsat, a NASA satellite mapping program, on the discovery of new deposits by large and small firms in the gold exploration industry. Using a simple theoretical framework, I argue that public sector information guides firms on the viability of risky projects and increases the likelihood of project success.
This effect is especially relevant for smaller firms, who face higher project costs and are particularly deterred from engaging in risky projects. I test the predictions of this framework by exploiting idiosyncratic timing variation in Landsat coverage across regions. Landsat maps nearly doubled the rate of significant gold discoveries after a region was mapped and increased the market share of smaller, junior firms from about 10% to 25%.
Public information infrastructure, including mapping efforts, seems to be an important, yet overlooked, driver of private-sector productivity and small business performance…(More)”
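The identification strategy Nagaraj describes, comparing a region's discovery rate before and after it happens to be mapped, is essentially a staggered difference-in-differences design. Below is a minimal sketch of that estimator on simulated data; the variable names, the toy data-generating process, and the two-way fixed-effects specification are illustrative assumptions, not the paper's actual model.

```python
# Illustrative sketch of a staggered difference-in-differences design, in the
# spirit of the Landsat study. Data, variable names, and specification are
# hypothetical; the paper's actual estimator may differ.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Toy panel: 50 regions observed over 20 years; each region gets Landsat
# coverage in some idiosyncratic year (mapped_year).
regions = range(50)
years = range(1972, 1992)
mapped_year = {r: rng.integers(1975, 1990) for r in regions}

rows = []
for r in regions:
    base = rng.poisson(2)                 # region-specific discovery rate
    for y in years:
        post = int(y >= mapped_year[r])   # 1 once the region is mapped
        # Assumed "true" effect: mapping roughly doubles discoveries.
        discoveries = rng.poisson(base * (2.0 if post else 1.0) + 0.1)
        rows.append({"region": r, "year": y, "post_mapped": post,
                     "discoveries": discoveries})
panel = pd.DataFrame(rows)

# Two-way fixed effects: region and year dummies absorb time-invariant
# regional geology and industry-wide shocks; post_mapped picks up the
# within-region change after coverage arrives.
model = smf.ols("discoveries ~ post_mapped + C(region) + C(year)", data=panel)
result = model.fit(cov_type="cluster", cov_kwds={"groups": panel["region"]})
print(result.params["post_mapped"], result.bse["post_mapped"])
```

On this simulated panel the coefficient on post_mapped recovers the assumed doubling in levels; the sketch is meant only to show the structure of the before/after comparison across regions with different mapping dates.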
Paper by Jennifer Shkabatur: “Data platform companies (such as Facebook, Google, or Twitter) amass and process immense amounts of data that is generated by their users. These companies primarily use the data to advance their commercial interests, but there is growing public dismay regarding the adverse and discriminatory impacts of their algorithms on society at large. The regulation of data platform companies and their algorithms has been hotly debated in the literature, but current approaches often neglect the value of data collection, defy the logic of algorithmic decision-making, and exceed the platform companies’ operational capacities.
This Article suggests a different approach — an open, collaborative, and incentives-based stance toward data platforms that takes full advantage of the tremendous societal value of user-generated data. It contends that this data shall be recognized as a “global commons,” and access to it shall be made available to a wide range of independent stakeholders — research institutions, journalists, public authorities, and international organizations. These external actors would be able to utilize the data to address a variety of public challenges, as well as observe from within the operation and impacts of the platforms’ algorithms.
After making the theoretical case for the “global commons of data,” the Article explores the practical implementation of this model. First, it argues that a data commons regime should operate through a spectrum of data sharing and usage modalities that would protect the commercial interests of data platforms and the privacy of data users. Second, it discusses regulatory measures and incentives that can solicit the collaboration of platform companies with the commons model. Lastly, it explores the challenges embedded in this approach….(More)”.
Samantha Cole at Motherboard: “Wikipedia, the internet’s encyclopedia, is run entirely by volunteers—people who spend large swaths of their personal time making sure the information that hundreds of millions of people access every day stays accurate and up-to-date. In fact, 77 percent of Wikipedia articles are written by just one percent of Wikipedia editors. As such, tensions tend to get a little high, because these editors are often highly invested. They’ve been arguing about corn for nearly a decade, for example, and there’s a long-running edit war about the meaning of neuroticism.
When editors disagree about an edit to be made on a Wikipedia article, they start by discussing it on the article’s Talk page. When that doesn’t result in a decision, they can open a Request for Comment (RfC). From there, any editor can choose a side or discuss the merits of whatever edit is up for discussion, and—in theory—come to an agreement. Or at least, some kind of decision about how to make the edit.
But a new study by MIT researchers found that as many as one-third of RfC disputes go unresolved, often abandoned out of frustration or exhaustion. The most common sticking points were chalked up to inexperience, inattention from experienced editors, and just plain petty bickering….
But they didn’t just critique how Wikipedians argue: The researchers developed a tool called Wikum that they say will help resolve more discussions, and make it easier for editors to stay involved when arguments get gnarly. The tool uses the data they found and analyzed in this research to summarize threads and predict when they’re at risk of going stale….(More)”.
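The article doesn't say how Wikum's risk prediction works. Purely to illustrate the kind of model such a tool might use, here is a toy classifier that scores a thread's risk of going stale from simple features (comment count, distinct participants, days since the last reply); the features, data, and logistic model are all assumptions.

```python
# Toy illustration of predicting whether a discussion thread will go stale.
# Wikum's actual features and model are not described in the article; the
# features, data, and classifier below are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500

# Hypothetical per-thread features: number of comments, distinct
# participants, and days since the last reply.
X = np.column_stack([
    rng.poisson(30, n),        # comment count
    rng.poisson(6, n),         # distinct participants
    rng.exponential(10, n),    # days since last reply
])

# Hypothetical labels: long-quiet, low-participation threads tend to stall.
went_stale = (X[:, 2] > 14) & (X[:, 1] < 5)
y = went_stale.astype(int)

clf = LogisticRegression().fit(X, y)

# Score a new thread: 45 comments, 3 participants, quiet for 20 days.
risk = clf.predict_proba([[45, 3, 20]])[0, 1]
print(f"estimated staleness risk: {risk:.2f}")
```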
Steve Lohr at The New York Times: “The mechanics of elections that attract the most attention are casting and counting, snafus with voting machines and ballots, and allegations of hacking and fraud. But Jeff Jonas, a prominent data scientist, is focused on something else: the integrity, updating and expansion of voter rolls.
“As I dove into the subject, it grew on me, the complexity and relevance of the problem,” he said.
As a result, Mr. Jonas has played a geeky, behind-the-scenes role in encouraging turnout for the midterm elections on Tuesday.
For the last four years, Mr. Jonas has used his software for a multistate project known as the Electronic Registration Information Center (ERIC) that identifies eligible voters and cleans up voter rolls. Since its founding in 2012, the nonprofit center has identified 26 million people who are eligible but unregistered to vote, as well as 10 million registered voters who have moved, appear on more than one list or have died.
“I have no doubt that more people are voting as a result of ERIC,” said John Lindback, a former senior election administrator in Oregon and Alaska who was the center’s first executive director.
Voter rolls, like nearly every aspect of elections, are a politically charged issue. ERIC, brought together by the Pew Charitable Trusts, is meant to play it down the middle. It was started largely with professional election administrators, from both red and blue states.
But the election officials recognized that their headaches often boiled down to a data-handling challenge. Then Mr. Jonas added his technology, which has been developed and refined for decades. It is artificial intelligence software fine-tuned for spotting and resolving identities, whether people or things….(More)”.
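Identity resolution of the kind Jonas's software performs, deciding when two records refer to the same person despite typos, moves, or name variants, can be illustrated with a toy matcher. The fields, similarity measure, and threshold below are assumptions for illustration only; they are not how Jonas's system or ERIC actually works.

```python
# Toy entity-resolution sketch: match records across two voter rolls.
# Jonas's production system is far more sophisticated; the fields, similarity
# measure, and threshold here are illustrative assumptions only.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Cheap string similarity in [0, 1], tolerant of typos."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def same_person(rec_a: dict, rec_b: dict, threshold: float = 0.85) -> bool:
    """Resolve two roll entries to one identity.

    Require an exact birth-date match, then fuzzy-match the name so that
    'Jon Smith' still links to 'John Smith' after a move between states.
    """
    if rec_a["dob"] != rec_b["dob"]:
        return False
    name_a = f'{rec_a["first"]} {rec_a["last"]}'
    name_b = f'{rec_b["first"]} {rec_b["last"]}'
    return similarity(name_a, name_b) >= threshold

oregon = {"first": "Jon", "last": "Smith", "dob": "1960-04-02", "state": "OR"}
alaska = {"first": "John", "last": "Smith", "dob": "1960-04-02", "state": "AK"}

if same_person(oregon, alaska):
    print("likely duplicate registration: flag for review")
```

A production system would block on many more fields, weigh each piece of evidence probabilistically, and resolve transitive matches across dozens of state sources.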
The UK is not the only country falling short, says the Open Data Barometer, which monitors the status of government data across the world. Among the 30 leading governments — those that have championed the open data movement and have made progress over five years — “less than a quarter of the data with the biggest potential for social and economic impact” is truly open. This goal of transparency, it seems, has not proved sufficient for “creating value” — the movement’s latest focus. In 2015, nearly a decade after advocates first discussed the principles of open government data, 62 countries adopted the six Open Data Charter principles — which called for data to be open by default, usable and comparable….
The use of open data has already borne fruit for some countries. In 2015, Japan’s ministry of land, infrastructure and transport set up an open data site aimed at disabled and elderly people. The 7,000 data points published are downloadable and the service can be used to generate a map that shows which passenger terminals on train, bus and ferry networks provide barrier-free access.
In the US, The Climate Corporation, a digital agriculture company, combined 30 years of weather data and 60 years of crop yield data to help farmers increase their productivity. And in the UK, subscription service Land Insight merges different sources of land data to help individuals and developers compare property information, forecast selling prices, contact land owners and track planning applications…
Open Data 500, an international network of organisations that studies the use and impact of open data, reveals that private companies in South Korea are using government agency data, with technology, advertising and business services among the biggest users. It shows, for example, that Archidraw, a four-year-old Seoul-based company that provides 3D visualisation tools for interior design and property remodelling, has used mapping data from the Ministry of Land, Infrastructure and Transport…(More)”.
Paper by Bill Howe et al: “Data too sensitive to be “open” for analysis and re-purposing typically remains “closed” as proprietary information. This dichotomy undermines efforts to make algorithmic systems more fair, transparent, and accountable. Access to proprietary data in particular is needed by government agencies to enforce policy, researchers to evaluate methods, and the public to hold agencies accountable; all of these needs must be met while preserving individual privacy and firm competitiveness. In this paper, we describe an integrated legal-technical approach provided by a third-party public-private data trust designed to balance these competing interests.
Basic membership allows firms and agencies to enable low-risk access to data for compliance reporting and core methods research, while modular data sharing agreements support a wide array of projects and use cases. Unless specifically stated otherwise in an agreement, all data access is initially provided to end users through customized synthetic datasets that offer a) strong privacy guarantees, b) removal of signals that could expose competitive advantage for the data providers, and c) removal of biases that could reinforce discriminatory policies, all while maintaining empirically good fidelity to the original data. We find that the liberal use of synthetic data, in conjunction with strong legal protections over raw data, strikes a tunable balance between transparency, proprietorship, privacy, and research objectives; and that the legal-technical framework we describe can form the basis for organizational data trusts in a variety of contexts….(More)”.
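The synthetic-data layer the authors describe can be sketched in miniature: fit a simple generative model to the raw records and release samples instead of the records themselves. The marginal-sampling sketch below, with Laplace noise added for a rough differential-privacy flavor, is an assumed illustration; the paper's actual generators preserve far more cross-column structure and carry the formal guarantees this toy lacks.

```python
# Minimal sketch of releasing a synthetic dataset instead of raw records.
# This illustrative version fits noisy per-column marginals and samples
# independently from them; the paper's actual methods preserve more structure
# and give formal guarantees, so treat this as a toy, not their algorithm.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

def synthesize(raw: pd.DataFrame, n_out: int, epsilon: float = 1.0) -> pd.DataFrame:
    """Sample a synthetic table from Laplace-noised per-column marginals."""
    out = {}
    for col in raw.columns:
        counts = raw[col].value_counts()
        # Laplace noise on the counts: a rough nod to differential privacy.
        noisy = counts.to_numpy(dtype=float) + rng.laplace(0, 1.0 / epsilon, len(counts))
        probs = np.clip(noisy, 0, None)
        probs = probs / probs.sum()
        out[col] = rng.choice(counts.index.to_numpy(), size=n_out, p=probs)
    return pd.DataFrame(out)

# Hypothetical sensitive table: trip records a ride-hailing firm might share.
raw = pd.DataFrame({
    "pickup_zone": rng.choice(["A", "B", "C"], 1000, p=[0.5, 0.3, 0.2]),
    "hour": rng.integers(0, 24, 1000),
})

synthetic = synthesize(raw, n_out=1000)
print(synthetic["pickup_zone"].value_counts(normalize=True))
```

Because end users only ever touch the sampled table, the raw records can stay behind the trust's legal protections while compliance reporting and methods research proceed on data that is empirically close to the original.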