Stefaan Verhulst
Molly Jackman & Lauri Kanerva at Wash. & Lee L. Rev. Online: “Increasingly, companies are conducting research so that they can make informed decisions about what products to build and what features to change. These data-driven insights enable companies to make responsible decisions that will improve people’s experiences with their products. Importantly, companies must also be responsible in how they conduct research. Existing ethical guidelines for research do not always robustly address the considerations that industry researchers face. For this reason, companies should develop principles and practices around research that are appropriate to the environments in which they operate, taking into account the values set out in law and ethics. This paper describes the research review process designed and implemented at Facebook, including the training employees receive and the steps involved in evaluating proposed research. We emphasize that there is no one-size-fits-all model of research review that can be applied across companies, and that processes should be designed to fit the contexts in which the research is taking place. However, we hope that general principles can be extracted from Facebook’s process that will inform other companies as they develop frameworks for research review that serve their needs….(More)”.
Lauren Woodman of NetHope: “Data: Everyone’s talking about it, everyone wants more of it….
Still, I’d posit that we’re too obsessed with data. Not just us in the humanitarian space, of course, but everyone. How many likes did that Facebook post get? How many airline miles did I fly last year? How many hours of sleep did I get last week?…
The problem is that data by itself isn’t that helpful: information is.
We need to develop a new obsession with making sure that data is actionable, that it is relevant in the context in which we work, and that we are using the data as effectively as we are collecting it.
In my talk at ICT4D, I referenced the example of 7-Eleven in Japan. In the 1970s, 7-Eleven in Japan became independent from its parent, Southland Corporation. The CEO had to build a viable business in a tough economy. Every month, each store manager would receive reams of data, but that data wasn’t effective until the CEO stripped out the noise and provided just four critical data points with the greatest relevance to the local purchasing decisions that each store was empowered to make on its own.
Those points – what sold the day before, what sold the same day a year ago, what sold the last time the weather was the same, and what other stores sold the day before – were transformative. Within a year, 7-Eleven had turned a corner, and for 30 years, remained the most profitable retailer in Japan. It wasn’t about the Big Data; it was figuring out what data was relevant, actionable and empowered local managers to make nimble decisions.
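The four data points above are concrete enough to sketch in code. Here is a minimal, hypothetical illustration; the data layout, function, and field names are invented for this sketch, not taken from the article:

```python
from datetime import date, timedelta

def decision_snapshot(sales, weather, store, today, peer_stores):
    """Reduce a store's raw data to the four points the excerpt describes.

    sales:   dict mapping (store, date) -> units sold
    weather: dict mapping date -> a weather label such as "rain"
    The data layout and names here are invented for illustration.
    """
    yesterday = today - timedelta(days=1)
    year_ago = today.replace(year=today.year - 1)
    target = weather.get(today)

    # Most recent prior day whose weather matched today's.
    same_weather_day = next(
        (d for d in sorted(weather, reverse=True)
         if d < today and weather[d] == target),
        None,
    )

    return {
        "sold_yesterday": sales.get((store, yesterday)),
        "sold_same_day_last_year": sales.get((store, year_ago)),
        "sold_last_same_weather": sales.get((store, same_weather_day)),
        "peers_sold_yesterday": {s: sales.get((s, yesterday))
                                 for s in peer_stores},
    }
```

The point of the sketch is the reduction itself: reams of sales records in, four locally actionable numbers out.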
For our sector to get there, we need to do the front-end work that transforms our data into information that we can use. That, after all, is where the magic happens.
A few examples provide more clarity as to why this is so critical.
We know that adaptive decision-making requires access to real-time data. By knowing what is happening in real time, or near-real time, we can adjust our approaches and interventions to be most impactful. But to do so, our data has to be accessible to those who are empowered to make decisions. To achieve that, we have to make investments in training, infrastructure, and capacity-building at the organizational level. But in the nonprofit sector, such investments are rarely supported by donors and exceed the limited unrestricted funding available to most organizations. As a result, the sector has, so far, been able to take only limited steps towards effective data usage, hampering our ability to transform the massive amounts of data we have into useful information.
Another big question about data, particularly in the humanitarian space, is whether it should be open, closed or somewhere in between. Privacy is certainly paramount, and for some types of data the need for close protection is very clear. For many other types of data, however, the rules are far less clear. Every country has its own rules about how data can and cannot be used or shared, and more work is needed to provide clarity and predictability so that appropriate data-sharing can evolve.
And perhaps more importantly, we need to think about not just the data, but the use cases. Most of us would agree, for example, that sharing information during a crisis situation can be hugely beneficial to the people and the communities we serve – but in a world where rules are unclear, that ambiguity limits what we can do with the data we have. Here again, the context in which data will be used is critically important.
Finally, all of us in the sector have to realize that the journey to transforming data into information is one we’re on together. We have to be willing to give and take. Having data is great; sharing information is better. Sometimes, we have to co-create that basis to ensure we all benefit….(More)”
For many, citizen science is exciting because of the possibility for more diverse, equitable partnerships in scientific research with outcomes considered meaningful and useful by all, including public participants. This was the focus of a symposium we organized at the 2015 conference of the Citizen Science Association. Here we synthesize points made by symposium participants and our own reflections.
Professional science has a participation problem that is part of a larger equity problem in society. Inequity in science has negative consequences including a failure to address the needs and goals arising from diverse human and social experiences, for example, lack of attention to issues such as environmental contamination that disproportionately impact under-represented populations, and a failure to recognize the pervasive effects of structural racism. Inequity also encourages mistrust of science and scientists. A perception that science is practiced for the sole benefit of dominant social groups is reinforced when investigations of urgent community concerns such as hydraulic fracturing are questioned as being biased endeavors.
Defined broadly, citizen science can challenge and change this inequity and mistrust, but only if it reflects the diversity of publics, and if it doesn’t reinforce existing inequities in science and society. Key will be the way that science is portrayed: Acknowledging the presence of bias in all scientific research and the tools available for minimizing this, and demonstrating the utility of science for local problem solving and policy change. Symposium participants called for reflexive research, mutual learning, and other methods for supporting more equitable engagement in practice and in the activities of the Citizen Science Association…(More)”.
IBM Center for the Business of Government: “In this report, Professor Greenberg examines a dozen cities across the United States that have award-winning reputations for using innovation and technology to improve the services they provide to their residents. She explores a variety of success factors associated with effective service delivery at the local level, including:
- The policies, platforms, and applications that cities use for different purposes, such as public engagement, streamlining the issuance of permits, and emergency response
- How cities can successfully partner with third parties, such as nonprofits, foundations, universities, and private businesses to improve service delivery using technology
- The types of business cases that can be presented to mayors and city councils to support various changes proposed by innovators in city government
Professor Greenberg identifies a series of trends that drive cities to undertake innovations, such as the increased use of mobile devices by residents. Based on cities’ responses to these trends, she offers a set of findings and specific actions that city officials can act upon to create innovation agendas for their communities. Her report also presents case studies for each of the dozen cities in her review. These cases provide a real-world context, which will allow interested leaders in other cities to see how their own communities might approach similar innovation initiatives.
This report builds on two other IBM Center reports: A Guide for Making Innovation Offices Work, by Rachel Burstein and Alissa Black, and The Persistence of Innovation in Government: A Guide for Public Servants, by Sandford Borins, which examines the use of awards to stimulate innovation in government.
We hope that government leaders who are interested in innovations using technology to improve services will benefit from the governance models and tools described in this report, as they consider how best to leverage innovation and technology initiatives to serve residents more effectively and efficiently….(More)”
Paper by Paulina Guerrero, Maja Steen Møller, Anton Stahl Olafsson, and Bernhard Snizek on “The Potential of Social Media Volunteered Geographic Information for Urban Green Infrastructure Planning and Governance”: “With the prevalence of smartphones, new ways of engaging citizens and stakeholders in urban planning and governance are emerging. The technologies in smartphones allow citizens to act as sensors of their environment, producing and sharing rich spatial data useful for new types of collaborative governance set-ups. Data derived from Volunteered Geographic Information (VGI) can support accessible, transparent, democratic, inclusive, and locally-based governance situations of interest to planners, citizens, politicians, and scientists. However, there are still uncertainties about how to actually conduct this in practice. This study explores how social media VGI can be used to document spatial tendencies regarding citizens’ uses and perceptions of urban nature with relevance for urban green space governance. Via the hashtag #sharingcph, created by the City of Copenhagen in 2014, VGI data consisting of geo-referenced images were collected from Instagram, categorised according to their content and analysed according to their spatial distribution patterns. The results show specific spatial distributions of the images and main hotspots. Many possibilities and much potential of using VGI for generating, sharing, visualising and communicating knowledge about citizens’ spatial uses and preferences exist, but as a tool to support scientific and democratic interaction, VGI data is challenged by practical, technical and ethical concerns. More research is needed in order to better understand the usefulness and application of this rich data source to governance….(More)”
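As a rough illustration of the kind of analysis the paper describes, geo-referenced posts can be binned into a coarse latitude/longitude grid and ranked to surface spatial hotspots. This sketch is a deliberate simplification; the grid size, tuple layout, and function names are invented, not the authors’ actual method:

```python
from collections import Counter

def hotspots(points, cell=0.005, top=3):
    """Bin geotagged posts into a coarse lat/lon grid and rank cells.

    points: iterable of (lat, lon, category) tuples, e.g. parsed from
    geo-referenced Instagram posts. Grid size and layout are invented
    simplifications, not the paper's actual method.
    """
    counts = Counter()
    for lat, lon, category in points:
        # Integer cell indices: all points within one grid square collide.
        cell_key = (round(lat / cell), round(lon / cell), category)
        counts[cell_key] += 1
    return counts.most_common(top)
```

Even this toy version reproduces the paper’s basic move: turning a pile of individual geotagged images into a ranked list of places where a given category of urban-nature use concentrates.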
Pandula Gamage in Public Money & Management: “This article examines the opportunities presented by effectively harnessing big data in the public sector context. The article is exploratory and reviews both academic- and practitioner-oriented literature related to big data developments. The findings suggest that big data will have an impact on the future role of public sector organizations in functional areas. However, the author also reveals that there are challenges to be addressed by governments in adopting big data applications. To realize the benefits of big data, policy-makers need to: invest in research; create incentives for private and public sector entities to share data; and set up programmes to develop appropriate skills….(More)”
BreakDengue: “Dengue fever outbreaks are increasing in both frequency and magnitude. Not only that, the number of countries that could potentially be affected by the disease is growing all the time.
This growth has led to renewed efforts to address the disease, and a pioneering Malaysian researcher was recently recognized for his efforts to harness the power of big data and artificial intelligence to accurately predict dengue outbreaks.
Dr. Dhesi Baha Raja received the Pistoia Alliance Life Science Award at King’s College London in April of this year, for developing a disease prediction platform that employs technology and data to give people prior warning of when disease outbreaks occur. The medical doctor and epidemiologist has spent years working to develop AIME (Artificial Intelligence in Medical Epidemiology)…
It relies on a complex algorithm which analyses a wide range of data collected by local governments and satellite image recognition systems. Over 20 variables, such as weather, wind speed, wind direction, thunderstorms, solar radiation and rainfall schedule, are included and analyzed. Population models and geographical terrain are also included. The ultimate result of this intersection between epidemiology, public health and technology is a map which clearly illustrates the probability and location of the next dengue outbreak.
The ground-breaking platform can predict dengue fever outbreaks up to two or three months in advance, with an accuracy approaching 88.7 per cent and within a 400m radius. Dr. Dhesi has just returned from Rio de Janeiro, where the platform was employed in a bid to fight dengue in advance of this summer’s Olympics. In Brazil, its perceived accuracy was around 84 per cent, whereas in Malaysia it was over 88 per cent – giving it an average accuracy of 86.37 per cent.
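AIME’s actual algorithm is not public, but the shape of such a model can be sketched: many local environmental variables in, one outbreak probability per location out. In this toy logistic score every feature name, weight, and the bias term is invented purely for illustration:

```python
import math

def dengue_risk(features, weights=None):
    """Toy logistic risk score: environmental variables in, probability out.

    Feature names, weights, and the bias are invented for illustration;
    AIME's real model is far richer and not public.
    """
    if weights is None:
        weights = {"rainfall_mm": 0.02, "mean_temp_c": 0.05,
                   "humidity_pct": 0.03, "past_cases": 0.1}
    # Weighted sum of whichever features are present, then a sigmoid squash.
    z = -4.0 + sum(w * features.get(k, 0.0) for k, w in weights.items())
    return 1 / (1 + math.exp(-z))  # probability in (0, 1)
```

Scoring every grid cell of a city with a function like this, then colouring cells by the returned probability, is one way to arrive at the kind of risk map the excerpt describes.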
The web-based application has been tested in two states within Malaysia, Kuala Lumpur and Selangor, and the first ever mobile app is due to be deployed across Malaysia soon. Once its capability is adequately tested there, it will be rolled out globally. Dr. Dhesi’s team are working closely with mobile digital service provider Webe on this.
Making the app free to download will ensure the service becomes accessible to all, Dr. Dhesi explains.
“With the web-based application, this could only be used by public health officials and agencies. We recognized the need for us to democratize this health service to the community, and the only way to do this is to provide the community with the mobile app.”
This will also enable the gathering of even greater knowledge on the possibility of dengue outbreaks in high-risk areas, as well as monitoring the changing risks as people move to different areas, he adds….(More)”
Motherboard: “In May, Manu Sporny became the 10,000th “e-Resident” of Estonia. Sporny, the founder and CEO of a digital payments and identity company located in the United States, has never set foot in Estonia. However, he heard about the country’s e-Residency program and decided it would be an obvious choice for his company’s European headquarters.
People like Sporny are why Estonia launched a digital residency program in December 2014. The program allows anyone in the world to apply for a digital identity, which will let them: establish and run a location-independent business online, get easier access to EU markets, open a bank account and conduct e-banking, use international payment service providers, declare taxes, and sign all relevant documents and contracts remotely…..
One of the most essential components of a functioning digital society is a secure digital identity. The state and the private sector need to know who is accessing these online services. Likewise, users need to feel secure that their identity is protected.
Estonia found the solution to this problem. In 2002, we started issuing residents a mandatory ID-card with a chip that empowers them to categorically identify themselves and verify legal transactions and documents through a digital signature. A digital signature has been legally equivalent to a handwritten one throughout the European Union—not just in Estonia—since 1999.
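For readers unfamiliar with how a chip-card signature works in principle, here is a deliberately toy “textbook RSA” sketch of the sign/verify idea. The tiny fixed key pair is invented for illustration only; real Estonian ID cards use standardised cryptography with hardware-protected keys, and nothing here reflects their actual implementation:

```python
import hashlib

# Toy "textbook RSA" sketch of the sign/verify idea behind a chip-card
# digital signature. The tiny fixed key pair is for illustration only;
# real ID cards use standardised crypto with hardware-protected keys.
P, Q = 61, 53
N = P * Q          # public modulus (3233)
E = 17             # public exponent
D = 2753           # private exponent: E * D == 1 (mod (P-1)*(Q-1))

def digest(msg: bytes) -> int:
    """Hash the message and reduce it into the RSA modulus."""
    return int(hashlib.sha256(msg).hexdigest(), 16) % N

def sign(msg: bytes) -> int:
    """Done on the card, which never reveals the private exponent."""
    return pow(digest(msg), D, N)

def verify(msg: bytes, sig: int) -> bool:
    """Anyone holding the public key (N, E) can check the signature."""
    return pow(sig, E, N) == digest(msg)
```

The legal equivalence mentioned above rests on exactly this asymmetry: only the card holder can produce the signature, but any counterparty, court, or registry can verify it.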
With this new digital identity system, the state could serve not only areas with a low population, but also the entire Estonian diaspora. Estonians anywhere in the world could maintain a connection to their homeland via e-services, contribute to the legislative process, and even participate in elections. Once the government realized that it could scale this service worldwide, it seemed logical to offer its e-services to those without physical residency in Estonia. This meant the Estonian country suddenly had value as a service in addition to a place to live.
What does “Country as a Service” mean?
With the rise of a global internet, we’ve seen more skilled workers and businesspeople offering their services across nations, regardless of their physical location. A survey by Intuit estimates that such independent workers will make up 40 percent of the workforce in the US alone by 2020.
These entrepreneurs and skilled artisans are ultimately looking for the simplest way to create and maintain a legal, global identity as an outlet for their global offerings.
They look to other countries, not because they are looking for a tax haven, but because they have been prevented from incorporating and maintaining a business, due to barriers from their own government.
The most important thing for these entrepreneurs is that the creation and upkeep of the company is easy and hassle-free. It is also important that, despite being incorporated in a different nation, they remain honest taxpayers within their country of physical residence.
This is exactly what Estonia offers—a location-independent, hassle-free and fully-digital economic and financial environment where entrepreneurs can run their own company globally….
When an e-Resident establishes a company, it means that the company will likely start using the services offered by other Estonian companies (like creating a bank account, partnering with a payment service provider, seeking assistance from accountants, auditors and lawyers). As more clients are created for Estonian companies, their growth potential increases, along with the growth potential of the Estonian economy.
Eventually, there will be more residents outside borders than inside them
If states fail to redesign and simplify the machinery of bureaucracy and make it location-independent, there will be an opportunity for countries that can offer such services across borders.
Estonia has learned that it’s incredibly important in a small state to serve primarily small and micro businesses. In order to sustain a nation on this, we must automate and digitize processes to scale. Estonia’s model, for instance, is location-independent, making it simple to scale successfully. We hope to acquire at least 10 million digital residents (e-Residents) in a way that is mutually beneficial to the nation-states where these people are tax residents….(More)”
Glyn Moody at ArsTechnica: “In 1836, Anthony Panizzi, who later became principal librarian of the British Museum, gave evidence before a parliamentary select committee. At that time, he was only first assistant librarian, but even then he had an ambitious vision for what would one day became the British Library. He told the committee:
I want a poor student to have the same means of indulging his learned curiosity, of following his rational pursuits, of consulting the same authorities, of fathoming the most intricate inquiry as the richest man in the kingdom, as far as books go, and I contend that the government is bound to give him the most liberal and unlimited assistance in this respect.
He went some way to achieving that goal of providing general access to human knowledge. In 1856, after 20 years of labour as Keeper of Printed Books, he had helped boost the British Museum’s collection to over half a million books, making it the largest library in the world at the time. But there was a serious problem: to enjoy the benefits of those volumes, visitors needed to go to the British Museum in London.
Imagine, for a moment, if it were possible to provide access not just to those books, but to all knowledge for everyone, everywhere—the ultimate realisation of Panizzi’s dream. In fact, we don’t have to imagine: it is possible today, thanks to the combined technologies of digital texts and the Internet. The former means that we can make as many copies of a work as we want, for vanishingly small cost; the latter provides a way to provide those copies to anyone with an Internet connection. The global rise of low-cost smartphones means that group will soon include even the poorest members of society in every country.
That is to say, we have the technical means to share all knowledge, and yet we are nowhere near providing everyone with the ability to indulge their learned curiosity as Panizzi wanted it.
What’s stopping us? That’s the central question that the “open access” movement has been asking, and trying to answer, for the last two decades. Although tremendous progress has been made, with more knowledge freely available now than ever before, there are signs that open access is at a critical point in its development, which could determine whether it will ever succeed in realising Panizzi’s plan.
Table of Contents
- The arcana of academic publishing
- What about us?
- In the beginning was arXiv
- Scholarly skywriting
- Opening up the Americas
- Public Library of Science
- Open access is born
- CERN’s SCOAP³
- PLoS ONE
- Gold open access
- Hybrid problems
- Green open access
- The empire strikes back
- Diamond open access
- From Aaron Swartz…
- …to Sci-Hub”
Code and the City explores the extent and depth of the ways in which software mediates how people work, consume, communicate, travel and play. The reach of these systems is set to become even more pervasive through efforts to create smart cities: cities that employ ICTs to underpin and drive their economy and governance. Yet, despite the roll-out of software-enabled systems across all aspects of city life, the relationship between code and the city has barely been explored from a critical social science perspective. This collection of essays seeks to fill that gap, and offers an interdisciplinary examination of the relationship between software and contemporary urbanism.
This book will be of interest to those researching or studying smart cities and urban infrastructure….(More)”.