Tanzania’s government is casting itself as the nation’s sole custodian of data


Abdi Latif Dahir at Quartz: “Tanzania’s government wants to have exclusive control over who collects and shares data about the country.

In a bill tabled in parliament this week, the government aims to criminalize the collection, analysis, and dissemination of any data without first obtaining authorization from the country’s chief statistician. The key amendments to the Statistics Act also prohibit researchers from publicly releasing any data “which is intended to invalidate, distort, or discredit official statistics.” Anyone who acts to the contrary could face a fine of not less than 10 million shillings ($4,400), a jail term of three years, or both.

Officials have said the amendments are being passed as a measure to promote peace and security and to stop the publication of fake information. Critics, however, argue the laws will curtail both the collection of crucial data and the ability to fact-check and hold official sources accountable. Opposition members in parliament also said the law could target institutions and scholars releasing data that isn’t in favor of the government….

The move to ban independent data collection could be damaging, given how much quality information could help in national development. African nations increasingly lack evidence-based research that could inform how they formulate national policies. And many times in Tanzania, independent actors fill this gap, providing data on flood-prone areas to avoid disasters, or documenting citizens’ needs—something that isn’t captured in official government statistics….(More)”.

To Secure Knowledge: Social Science Partnerships for the Common Good


Social Science Research Council: “For decades, the social sciences have generated knowledge vital to guiding public policy, informing business, and understanding and improving the human condition. But today, the social sciences face serious threats. From dwindling federal funding to public mistrust in institutions to widespread skepticism about data, the infrastructure supporting the social sciences is shifting in ways that threaten to undercut research and knowledge production.

How can we secure social knowledge for future generations?

This question has guided the Social Science Research Council’s Task Force. Following eighteen months of consultation with key players as well as internal deliberation, we have identified both long-term developments and present threats that have created challenges for the social sciences but also opened up unique opportunities. And we have generated recommendations to address these issues.

Our core finding focuses on the urgent need for new partnerships and collaborations among several key players: the federal government, academic institutions, donor organizations, and the private sector. Several decades ago, these institutions had clear zones of responsibility in producing social knowledge, with the federal government providing the largest share of funding for basic research. Today, private companies represent an increasingly large share not just of research and funding, but also of the production of data that informs the social sciences, from smartphone usage to social media patterns.

In addition, today’s social scientists face unprecedented demands for accountability, speedy publication, and generation of novel results. These pressures have emerged from the fragmented institutional foundation that undergirds research. That foundation needs a redesign in order for the social sciences to continue helping our communities address problems ranging from income inequality to education reform.

To build a better future, we identify five areas of action: Funding, Data, Ethics, Research Quality, and Research Training. In each area, our recommendations range from enlarging corporate-academic pilot programs to improving social science training in digital literacy.

A consistent theme is that none of the measures, if taken unilaterally, can generate optimal outcomes. Instead, we have issued a call to forge a new research compact to harness the potential of the social sciences for improving human lives. That compact depends on partnerships, and we urge the key players in the construction of social science knowledge—including universities, government, foundations, and corporations—to act swiftly. With the right realignments, the security of social knowledge lies within our reach….(More)”

Don’t forget people in the use of big data for development


Joshua Blumenstock at Nature: “Today, 95% of the global population has mobile-phone coverage, and the number of people who own a phone is rising fast (see ‘Dialling up’). Phones generate troves of personal data on billions of people, including those who live on a few dollars a day. So aid organizations, researchers and private companies are looking at ways in which this ‘data revolution’ could transform international development.

Some businesses are starting to make their data and tools available to those trying to solve humanitarian problems. The Earth-imaging company Planet in San Francisco, California, for example, makes its high-resolution satellite pictures freely available after natural disasters so that researchers and aid organizations can coordinate relief efforts. Meanwhile, organizations such as the World Bank and the United Nations are recruiting teams of data scientists to apply their skills in statistics and machine learning to challenges in international development.

But in the rush to find technological solutions to complex global problems there’s a danger of researchers and others being distracted by the technology and losing track of the key hardships and constraints that are unique to each local context. Designing data-enabled applications that work in the real world will require a slower approach that pays much more attention to the people behind the numbers…(More)”.

Resource Guide to Data Governance and Security


National Neighborhood Indicators Partnership (NNIP): “Any organization that collects, analyzes, or disseminates data should establish formal systems to manage data responsibly, protect confidentiality, and document data files and procedures. In doing so, organizations will build a reputation for integrity and facilitate appropriate interpretation and data sharing, factors that contribute to an organization’s long-term sustainability.

To help groups improve their data policies and practices, this guide assembles lessons from the experiences of partners in the National Neighborhood Indicators Partnership network and similar organizations. The guide presents advice and annotated resources for the three parts of a data governance program: protecting privacy and human subjects, ensuring data security, and managing the data life cycle. While also applicable to non-sensitive data, the guide is geared toward managing confidential data, such as data used in integrated data systems or Pay-for-Success programs….(More)”.

Ethics and Data Science


(Open) Ebook by Mike Loukides, Hilary Mason, and DJ Patil: “As the impact of data science on society continues to grow, there is an increased need to discuss how data is appropriately used and how to address misuse. Yet, ethical principles for working with data have been available for decades. The real issue today is how to put those principles into action. With this report, authors Mike Loukides, Hilary Mason, and DJ Patil examine practical ways for making ethical data standards part of your work every day.

To help you consider all of the possible ramifications of your work on data projects, this report includes:

  • A sample checklist that you can adapt for your own procedures
  • Five framing guidelines (the Five C’s) for building data products: consent, clarity, consistency, control, and consequences
  • Suggestions for building ethics into your data-driven culture

Now is the time to invest in a deliberate practice of data ethics, for better products, better teams, and better outcomes….(More)”.

Is the Government More Entrepreneurial Than You Think?


 Freakonomics Radio (Podcast): We all know the standard story: our economy would be more dynamic if only the government would get out of the way. The economist Mariana Mazzucato says we’ve got that story backward. She argues that the government, by funding so much early-stage research, is hugely responsible for big successes in tech, pharma, energy, and more. But the government also does a terrible job in claiming credit — and, more important, getting a return on its investment….

Quote:

MAZZUCATO: “…And I’ve been thinking about this especially around the big data and the kind of new questions around privacy with Facebook, etc. Instead of having a situation where all the data basically gets captured, which is citizens’ data, by companies which then, in some way, we have to pay into in terms of accessing these great new services — whether they’re free or not, we’re still indirectly paying. We should have the data in some sort of public repository because it’s citizens’ data. The technology itself was funded by the citizens. What would Uber be without GPS, publicly financed? What would Google be without the Internet, publicly financed? So, the tech was financed from the state, the citizens; it’s their data. Why not completely reverse the current relationship and have that data in a public repository which companies actually have to pay into to get access to it under certain strict conditions which could be set by an independent advisory council?… (More)”

Pick your poison: How a crowdsourcing app helped identify and reduce food poisoning


Alex Papas at LATimes: “At some point in life, almost everyone will have experienced the debilitating effects of a foodborne illness. Whether from an under-cooked chicken kebab, an E. coli-infested salad or some toxic fish, a good day can quickly become a loathsome frenzy of vomiting and diarrhoea caused by poorly prepared or poorly kept food.

Since 2009, the website iwaspoisoned.com has allowed food-poisoning victims to help others avoid such an ordeal by crowdsourcing reports of foodborne illness on one easy-to-use, consumer-led platform.

Whereas previously a consumer struck down by food poisoning may have been limited to complaining to the offending food outlet, IWasPoisoned allows users to submit detailed reports of food-poisoning incidents – including symptoms, location and space to describe the exact effects and duration of the incident. The information is then transferred in real time to public health organisations and food industry groups, who use the data to flag potentially dangerous foodborne illness before a serious outbreak occurs.
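As a rough illustration of the kind of report and real-time flagging the article describes, here is a minimal sketch; the field names and the outbreak threshold are hypothetical, not iwaspoisoned.com’s actual schema or rules:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class PoisoningReport:
    # Illustrative fields only - not iwaspoisoned.com's actual schema.
    outlet: str            # name/location of the suspected food outlet
    symptoms: list[str]    # e.g. ["vomiting", "diarrhoea"]
    onset_hours: float     # hours between eating and first symptoms
    duration_hours: float  # how long the illness lasted
    description: str       # free-text account of the incident

def flag_outlets(reports: list[PoisoningReport], threshold: int = 3) -> list[str]:
    """Flag outlets named in several independent reports - the sort of
    early-warning signal health organisations could watch in real time."""
    counts = Counter(r.outlet for r in reports)
    return [outlet for outlet, n in counts.items() if n >= threshold]
```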

In the United States alone, where food safety standards are among the highest in the world, there are still 48 million cases of food poisoning per year. Of those cases, 128,000 result in hospitalisation and 3,000 in death, according to data from the U.S. Food and Drug Administration.

Back in 2008, the site’s founder, Patrick Quade, himself fell victim to food poisoning after eating a BLT from a New York deli, which caused him to be violently ill. Concerned by the lack of options for reporting such incidents, he set up the novel crowdsourcing platform, which also aims to improve transparency in the food monitoring industry.

The emergence of IWasPoisoned is part of the wider trend of consumers taking revenge against companies via digital platforms, which spans various industries. In the case of IWasPoisoned, reports of foodborne illness have seriously tarnished the reputations of several major food retailers….(More)”.

Technology Run Amok: Crisis Management in the Digital Age


Book by Ian I. Mitroff: “The recent data controversy with Facebook highlights that the tech industry as a whole was utterly unprepared for the backlash it faced as a result of its business model of selling user data to third parties. Despite the predominant role that technology plays in all of our lives, the controversy also revealed that many tech companies are reactive, rather than proactive, in addressing crises.

This book examines society’s failure to manage technology and its resulting negative consequences. Mitroff argues that the “technological mindset” is responsible for society’s unbridled obsession with technology and unless confronted, will cause one tech crisis after another. This trans-disciplinary text, edgy in its approach, will appeal to academics, students, and practitioners through its discussion of the modern technological crisis…(More)”.

How Smart Should a City Be? Toronto Is Finding Out


Laura Bliss at CityLab: “A data-driven “neighborhood of the future” masterminded by a Google corporate sibling, the Quayside project could be a milestone in digital-age city-building. But after a year of scandal in Silicon Valley, questions about privacy and security remain…

Quayside was billed as “the world’s first neighborhood built from the internet up,” according to Sidewalk Labs’ vision plan, which won the RFP to develop this waterfront parcel. The startup’s pitch married “digital infrastructure” with a utopian promise: to make life easier, cheaper, and happier for Torontonians.

Everything from pedestrian traffic and energy use to the fill-height of a public trash bin and the occupancy of an apartment building could be counted, geo-tagged, and put to use by a wifi-connected “digital layer” undergirding the neighborhood’s physical elements. It would sense movement, gather data, and send information back to a centralized map of the neighborhood. “With heightened ability to measure the neighborhood comes better ways to manage it,” stated the winning document. “Sidewalk expects Quayside to become the most measurable community in the world.”
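In outline, that “digital layer” amounts to geo-tagged sensor readings folded into one shared view. A minimal sketch of the pattern, with all names and fields hypothetical rather than drawn from Sidewalk Labs’ actual design:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class SensorReading:
    # Hypothetical fields - Sidewalk Labs' real schema is not public here.
    sensor_id: str   # e.g. "bin-42" or "crosswalk-cam-7"
    metric: str      # "pedestrian_count", "energy_use", "bin_fill_level", ...
    lat: float       # geo-tag: latitude
    lon: float       # geo-tag: longitude
    value: float     # the measurement itself

def centralized_map(readings: list[SensorReading]) -> dict:
    """Aggregate raw readings into one neighbourhood-wide view, keyed by
    place and metric - the 'centralized map' the vision plan describes."""
    grid = defaultdict(list)
    for r in readings:
        grid[(round(r.lat, 4), round(r.lon, 4), r.metric)].append(r.value)
    return dict(grid)
```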

That somewhat Orwellian vision of city management had privacy advocates and academics concerned from the start. Bianca Wylie, the co-founder of the technology advocacy group Tech Reset Canada, has been perhaps the most outspoken of the project’s local critics. For the last year, she’s spoken up at public fora, written pointed op-eds and Medium posts, and warned city officials of what she sees as the “Trojan horse” of smart city marketing: private companies that stride into town promising better urban governance, but are really there to sell software and monetize citizen data.

“Smart cities are largely an invention of the private sector—an effort to create a market within government,” Wylie wrote in Canada’s Globe and Mail newspaper in December 2017. “The business opportunities are clear. The risks inherent to residents, less so.” A month later, at a Toronto City Council meeting, Wylie gave a deputation asking officials to “ensure that the data and data infrastructure of this project are the property of the city of Toronto and its residents.”

In this case, the unwary Trojans would be Waterfront Toronto, the nonprofit corporation appointed by three levels of Canadian government to own, manage, and build on the Port Lands, 800 largely undeveloped acres between downtown and Lake Ontario. When Waterfront Toronto gave Sidewalk Labs a green light for Quayside in October, the startup committed $50 million to a one-year consultation, which was recently extended by several months. The plan is to submit a final “Master Innovation and Development Plan” by the end of this year.

But there has been no guarantee about who would own the data at the core of its proposal—much of which would ostensibly be gathered in public space. Also unresolved is the question of whether this data could be sold. With little transparency about what that means from the company or its partner, some Torontonians are wondering what Waterfront Toronto—and by extension, the public—is giving away….(More)”.

Decentralisation: the next big step for the world wide web


Zoë Corbyn at The Observer: “The decentralised web, or DWeb, could be a chance to take control of our data back from the big tech firms. So how does it work and when will it be here?...

What is the decentralised web?
It is supposed to be like the web you know but without relying on centralised operators. In the early days of the world wide web, which came into existence in 1989, you connected directly with your friends through desktop computers that talked to each other. But from the early 2000s, with the advent of Web 2.0, we began to communicate with each other and share information through centralised services provided by big companies such as Google, Facebook, Microsoft and Amazon. It is now on Facebook’s platform, in its so-called “walled garden”, that you talk to your friends. “Our laptops have become just screens. They cannot do anything useful without the cloud,” says Muneeb Ali, co-founder of Blockstack, a platform for building decentralised apps. The DWeb is about re-decentralising things – so we aren’t reliant on these intermediaries to connect us. Instead users keep control of their data and connect and interact and exchange messages directly with others in their network.

Why do we need an alternative? 
With the current web, all that user data concentrated in the hands of a few creates the risk that our data will be hacked. It also makes it easier for governments to conduct surveillance and impose censorship. And if any of these centralised entities shuts down, your data and connections are lost. Then there are privacy concerns stemming from the business models of many of the companies, which use the private information we provide freely to target us with ads. “The services are kind of creepy in how much they know about you,” says Brewster Kahle, the founder of the Internet Archive. The DWeb, say proponents, is about giving people a choice: the same services, but decentralised and not creepy. It promises control and privacy, and things can’t all of a sudden disappear because someone decides they should. On the DWeb, it would be harder for the Chinese government to block a site it didn’t like, because the information can come from other places.

How does the DWeb work differently?
There are two big differences in how the DWeb works compared to the world wide web, explains Matt Zumwalt, the programme manager at Protocol Labs, which builds systems and tools for the DWeb. First, there is this peer-to-peer connectivity, where your computer not only requests services but provides them. Second, how information is stored and retrieved is different. Currently we use http and https links to identify information on the web. Those links point to content by its location, telling our computers to find and retrieve things from those locations using the http protocol. By contrast, DWeb protocols use links that identify information based on its content – what it is rather than where it is. This content-addressed approach makes it possible for websites and files to be stored and passed around in many ways from computer to computer rather than always relying on a single server as the one conduit for exchanging information. “[In the traditional web] we are pointing to this location and pretending [the information] exists in only one place,” says Zumwalt. “And from this comes this whole monopolisation that has followed… because whoever controls the location controls access to the information.”…(More)”.
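To make Zumwalt’s distinction concrete, here is a minimal sketch of the two ideas together: peers that provide as well as request, and links derived from content rather than location. It is a toy illustration under those assumptions, not IPFS or any specific DWeb protocol:

```python
import hashlib

class Peer:
    """Toy peer in a content-addressed network: each peer both stores
    blocks and serves them to others (it provides as well as requests)."""

    def __init__(self):
        self.blocks: dict[str, bytes] = {}

    def put(self, data: bytes) -> str:
        # The link is derived from the content itself (SHA-256 here),
        # not from a location such as an http URL on one server.
        address = hashlib.sha256(data).hexdigest()
        self.blocks[address] = data
        return address

def fetch(address: str, peers: list[Peer]) -> bytes:
    # Any peer holding the bytes may answer; re-hashing lets the requester
    # verify the result no matter which peer supplied it.
    for peer in peers:
        data = peer.blocks.get(address)
        if data is not None and hashlib.sha256(data).hexdigest() == address:
            return data
    raise KeyError(f"no peer has content {address}")

alice, bob = Peer(), Peer()
addr = alice.put(b"Hello, DWeb")   # publish: the address depends only on the bytes
print(fetch(addr, [bob, alice]))   # retrieve from whichever peer has it, verified
```

Because the address is the hash of the bytes, it no longer matters which machine serves them; that is what removes the single server as the one conduit, and with it the location-based control Zumwalt describes.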