Stefaan Verhulst
Payal Arora at the International Journal of Communication: “To date, little attention has been given to the impact of big data in the Global South, about 60% of whose residents are below the poverty line. Big data manifests in novel and unprecedented ways in these neglected contexts. For instance, India has created biometric national identities for its 1.2 billion people, linking them to welfare schemes, while social entrepreneurial initiatives like the Ushahidi project have leveraged crowdsourcing to provide real-time crisis maps for humanitarian relief.
While these projects are indeed inspirational, this article argues that in the context of the Global South there is a bias in the framing of big data as an instrument of empowerment. Here, the poor, or the “bottom of the pyramid” populace, are cast as the new consumer base and as agents of social change rather than passive beneficiaries. This neoliberal outlook of big data facilitating inclusive capitalism for the common good sidelines critical perspectives urgently needed if we are to channel big data as a positive social force in emerging economies. This article proposes to assess these new technological developments through the lens of databased democracies, databased identities, and databased geographies to make evident normative assumptions and perspectives in this under-examined context….(More)”.
Openness and transparency are becoming hallmarks of responsible data practice in science and governance. Concerns about data falsification, erroneous analysis, and misleading presentation of research results have recently strengthened the call for new procedures that ensure public accountability for data-driven decisions. Though we generally count ourselves in favor of increased transparency in data practice, this Commentary highlights a caveat. We suggest that legislative efforts that invoke the language of data transparency can sometimes function as “Trojan Horses” through which other political goals are pursued. Framing these maneuvers in the language of transparency can be strategic, because approaches that emphasize open access to data carry tremendous appeal, particularly in current political and technological contexts. We illustrate our argument through two examples of pro-transparency policy efforts, one historical and one current: industry-backed “sound science” initiatives in the 1990s, and contemporary legislative efforts to open environmental data to public inspection. Rules that exist mainly to impede science-based policy processes weaponize the concept of data transparency. The discussion illustrates that, much as Big Data itself requires critical assessment, the processes and principles that attend it—like transparency—also carry political valence, and, as such, warrant careful analysis….(More)”
10 Lessons: “…The GovLab and its network of 25 world-class coaches and over 100 mentors helped 446 participants in more than a dozen US cities and thirty foreign countries to take a public interest technology project from idea to implementation. In the process, we’ve learned a lot about the need for new ways of training the next generation of leaders and problem solvers.
Our aim has been to aid public entrepreneurs — passionate and innovative people who wish to take advantage of new technology to do good in the world. That’s why we measure success not by the number of participants in a class, but by the projects participants create and the impact those projects have on communities….
Lesson 1: There is growing, and unmet, demand for training a new kind of public servant: the public entrepreneur…
Lesson 2: Tap the distributed supply of talent and expertise to accelerate learning…
Lesson 3: Create new methods for training public entrepreneurs to solve problems…
Lesson 4: Develop tools to help public interest innovators “cross the chasm” from idea to implementation…
Lesson 5: Teach collaboration and partnering for change…
Lesson 6: In order to be successful, public entrepreneurs must be able to define the problem — a skill widely lacking…
Lesson 7: Connecting innovators and alumni with one another generates a lasting public infrastructure that can help solve problems more effectively…
Lesson 8: Pedagogical priorities include making problem solving more data-driven and evidence-based….
Lesson 9: The demand and supply are global — which requires a global mindset and platform in order to learn what has worked elsewhere and why…
Lesson 10: Collaboration and coordination among anchor organizations is key to meeting the demand and coordinating the supply….(More)
Rebecca Lipman at Economist Intelligence Unit Perspectives on “One city tweets to stay dry: From drones to old-fashioned phone calls, data come from many unlikely sources. In a disaster, such as a flood or earthquake, responders will take whatever information they can get to visualise the crisis and best direct their resources. Increasingly, cities prone to natural disasters are learning to better aid their citizens by empowering their local agencies and responders with sophisticated tools to cut through the large volume and velocity of disaster-related data and synthesise actionable information.
Consider the plight of the metro area of Jakarta, Indonesia, home to some 28m people, 13 rivers and 1,100 km of canals. With 40% of the city below sea level (and sinking), and regularly subject to extreme weather events including torrential downpours in monsoon season, Jakarta’s residents face far-too-frequent, life-threatening floods. Despite the unpredictability of flooding conditions, citizens have long taken a passive approach that depended on government entities to manage the response. But the information Jakarta’s responders had on the flooding conditions was patchy at best. So in the last few years, the government began to turn to the local population for help. It helped.
Today, Jakarta’s municipal government is relying on the web-based PetaJakarta.org project and a handful of other crowdsourcing mobile apps such as Qlue and CROP to collect data and respond to floods and other disasters. Through these programmes, crowdsourced, time-sensitive data derived from citizens’ social-media inputs have made it possible for city agencies to more precisely map the locations of rising floods and help the residents at risk. In January 2015, for example, the web-based Peta Jakarta received 5,209 reports on floods via tweets with detailed text and photos. Anytime there’s a flood, Peta Jakarta’s data from the tweets are mapped and updated every minute, and often cross-checked by Jakarta Disaster Management Agency (BPBD) officials through calls with community leaders to assess the information and guide responders.
But in any city Twitter is only one piece of a very large puzzle. …
Even with such life-and-death examples, government agencies remain deeply protective of data because of issues of security, data ownership and citizen privacy. They are also concerned about liability issues if incorrect data lead to an activity that has unsuccessful outcomes. These concerns encumber the combination of crowdsourced data with operational systems of record, and impede the fast progress needed in disaster situations….Download the case study here.”
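To make the tweet-to-map pipeline described above concrete, here is a minimal Python sketch of how crowdsourced flood reports might be binned into map-grid cells for responders. The Report structure, the keyword filter, and the grid size are illustrative assumptions, not PetaJakarta's actual implementation.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Report:
    lat: float   # latitude of the geotagged tweet
    lon: float   # longitude
    text: str    # tweet text, e.g. "Banjir 50cm di Kemang"

def grid_cell(lat: float, lon: float, cell_deg: float = 0.01) -> tuple:
    """Snap a coordinate to a roughly 1 km grid cell (0.01 degrees)."""
    return (round(lat / cell_deg) * cell_deg, round(lon / cell_deg) * cell_deg)

def flood_map(reports: list) -> Counter:
    """Count keyword-matching flood reports per grid cell."""
    keywords = ("banjir", "flood")   # 'banjir' is Indonesian for flood
    return Counter(
        grid_cell(r.lat, r.lon)
        for r in reports
        if any(k in r.text.lower() for k in keywords)
    )

reports = [
    Report(-6.2608, 106.8137, "Banjir 50cm di Kemang"),
    Report(-6.2611, 106.8140, "flood rising near Kemang Raya"),
    Report(-6.1754, 106.8272, "cuaca cerah"),  # "clear weather": filtered out
]
print(flood_map(reports))   # Counter({(-6.26, 106.81): 2})
```

Counting reports per cell, rather than plotting raw points, is what lets officials cross-check a cluster with community leaders, as BPBD does, before directing responders to it.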
Book by Ian Hargreaves and John Hartley on “How social media and DIY culture contribute to democracy, communities and the creative economy”: “The Creative Citizen Unbound introduces the concept of ‘creative citizenship’ to explore the potential of civic-minded creative individuals in the era of social media and in the context of an expanding creative economy. Drawing on the findings of a 30-month study of communities supported by the UK research funding councils, multidisciplinary contributors examine the value and nature of creative citizenship, not only in terms of its contribution to civic life and social capital but also to more contested notions of value, both economic and cultural. This original book will be beneficial to researchers and students across a range of disciplines including media and communication, political science, economics, planning and economic geography, and the creative and performing arts….(More)”
Nathan Collins at Pacific Standard: “…there are contests like the DARPA Robotics Challenge, which gives prizes for solving particularly difficult problems, like how to prevent an autonomous vehicle from crashing.
But who wins such contests, and how? One might think it’s the science insiders, since they have the knowledge and background to solve difficult scientific problems. It’s hard to imagine, for example, a political scientist solving a major problem in theoretical physics. At the same time, insiders can become inflexible, having been so ensconced in a particular way of thinking that they can’t see outside of the box, let alone think outside it.
Unfortunately, most of what we know about insiders, outsiders, and scientific success is anecdotal. (Hedy Lamarr, the late actress and co-inventor of a key wireless technology, is a prominent anecdote, but still just an anecdote.) To remedy that, Oguz Ali Acar and Jan van den Ende decided to conduct a proper study. For data, they looked to InnoCentive, an online platform that “crowdsource[s] innovative solutions from the world’s smartest people, who compete to provide ideas and solutions to important business, social, policy, scientific, and technical challenges,” according to its website.
Acar and van den Ende surveyed 230 InnoCentive contest participants, who reported how much expertise they had related to the last problem they’d solved, along with how much experience they had solving similar problems in the past, regardless of whether it was related to their professional expertise. The researchers also asked how many different scientific fields problem solvers had looked to for ideas, and how much effort they’d put into their solutions. For each of the solvers, the researchers then looked at all the contests that person won and computed their odds of winning—a measure of creativity, they argue, since contests are judged in part on the solutions’ creativity.
That data revealed an intuitive, though not entirely simple pattern. Insiders (think Richard Feynman in physics) were more likely to win a contest when they cast a wide net for ideas, while outsiders (like Lamarr) performed best when they focused on one scientific or technological domain. In other words, outsiders—who may bring a useful new perspective to bear—should bone up on the problem they’re trying to solve, while insiders, who’ve already done their homework, benefit from thinking outside the box.
Still, there’s something both groups can’t do without: hard work. “[I]f insiders … spend significant amounts of time seeking out knowledge from a wide variety of other fields, they are more likely to be creative in that domain,” Acar and van den Ende write, and if outsiders work hard, they “can turn their lack of knowledge in a domain into an advantage.”….(More)”
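The interaction Acar and van den Ende describe is easier to see in miniature. Below is a hypothetical sketch with invented data (not the authors' dataset or analysis): cross-tabulate solvers by expertise and by breadth of idea search, then compare win rates in each cell.

```python
from statistics import mean

solvers = [
    # (expertise, idea-search breadth, won the contest?)
    ("insider",  "broad",  True),  ("insider",  "broad",  True),
    ("insider",  "narrow", True),  ("insider",  "narrow", False),
    ("outsider", "broad",  False), ("outsider", "broad",  False),
    ("outsider", "narrow", True),  ("outsider", "narrow", True),
]

for expertise in ("insider", "outsider"):
    for search in ("broad", "narrow"):
        wins = [won for e, s, won in solvers if e == expertise and s == search]
        print(f"{expertise:8} x {search:6}: win rate {mean(wins):.2f}")
```

The crossover in the printed table, insiders winning more when they search broadly and outsiders winning more when they stay focused, is the pattern the study reports.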
Paper by Natascha Just & Michael Latzer in Media, Culture & Society (forthcoming): “This paper explores governance by algorithms in information societies. Theoretically, it builds on (co-)evolutionary innovation studies in order to adequately grasp the interplay of technological and societal change, and combines these with institutional approaches to incorporate governance by technology or rather software as institutions. Methodologically, it draws from an empirical survey of Internet-based services that rely on automated algorithmic selection, a functional typology derived from it, and an analysis of associated potential social risks. It shows how algorithmic selection has become a growing source of social order, of a shared social reality in information societies. It argues that – similar to the construction of realities by traditional mass media – automated algorithmic selection applications shape daily lives and realities, affect the perception of the world, and influence behavior. However, the co-evolutionary perspective on algorithms as institutions, ideologies, intermediaries and actors highlights differences that are to be found first in the growing personalization of constructed realities, and second in the constellation of involved actors. Altogether, compared to reality construction by traditional mass media, algorithmic reality construction tends to increase individualization, commercialization, inequalities and deterritorialization, and to decrease transparency, controllability and predictability…(Full Paper)”
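As a toy illustration of what “automated algorithmic selection” means in practice (not drawn from the paper; all item topics and profile weights are invented), the sketch below ranks one shared pool of items differently for each user profile. That divergence is the personalization of constructed realities the authors describe.

```python
items = {
    "flood warning": {"news": 0.9, "sports": 0.1},
    "match report":  {"news": 0.2, "sports": 0.9},
}
profiles = {
    "user_A": {"news": 1.0, "sports": 0.0},  # news-heavy reading history
    "user_B": {"news": 0.1, "sports": 1.0},  # sports-heavy reading history
}

def rank(profile: dict) -> list:
    """Order all items by how well their topics match the user's profile."""
    def score(topics: dict) -> float:
        return sum(w * profile.get(t, 0.0) for t, w in topics.items())
    return sorted(items, key=lambda name: score(items[name]), reverse=True)

for user, profile in profiles.items():
    # Each user is served a differently ordered "reality":
    # user_A ['flood warning', 'match report']
    # user_B ['match report', 'flood warning']
    print(user, rank(profile))
```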
MIT Press: “Peter Suber has been a leading advocate for open access since 2001 and has worked full time on issues of open access since 2003. As a professor of philosophy during the early days of the internet, he realized its power and potential as a medium for scholarship. As he writes now, “it was like an asteroid crash, fundamentally changing the environment, challenging dinosaurs to adapt, and challenging all of us to figure out whether we were dinosaurs.” When Suber began putting his writings and course materials online for anyone to use for any purpose, he soon experienced the benefits of that wider exposure. In 2001, he started a newsletter—the Free Online Scholarship Newsletter, which later became the SPARC Open Access Newsletter—in which he explored the implications of open access for research and scholarship. This book offers a selection of some of Suber’s most significant and influential writings on open access from 2002 to 2010.
In these texts, Suber makes the case for open access to research; answers common questions, objections, and misunderstandings; analyzes policy issues; and documents the growth and evolution of open access during its most critical early decade. (Free Download)”
LIMN issue edited by Boris Jardine and Christopher Kelty: “Vast accumulations saturate our world: phone calls and emails stored by security agencies; every preference of every individual collected by advertisers; ID numbers, and maybe an iris scan, for every Indian; hundreds of thousands of whole genome sequences; seed banks of all existing plants, and of course, books… all of them. Just what is the purpose of these optimistically total archives, and how are they changing us?
This issue of Limn asks authors and artists to consider how these accumulations govern us, where this obsession with totality came from and how we might think differently about big data and algorithms, by thinking carefully through the figure of the archive.
Contributors: Miriam Austin, Jenny Bangham, Reuben Binns, Balázs Bodó, Geoffrey C. Bowker, Finn Brunton, Lawrence Cohen, Stephen Collier, Vadig De Croehling, Lukas Engelmann, Nicholas HA Evans, Fabienne Hess, Anna Hughes, Boris Jardine, Emily Jones, Judith Kaplan, Whitney Laemmli, Andrew Lakoff, Rebecca Lemov, Branwyn Poleykett, Mary Murrell, Ben Outhwaite, Julien Prévieux, and Jenny Reardon….(More)”
