Springwise: “There’s more traffic on today’s city streets than there ever has been, and managing it all can prove to be a headache for local authorities and transport bodies. In the past, we’ve seen the City of Calgary in Canada detect drivers’ Bluetooth signals to develop a map of traffic congestion. Now the EAR-IT project in Santander, Spain, is using acoustic sensors to measure the sounds of city streets and determine real-time activity on the ground.
Launched as part of the autonomous community’s SmartSantander initiative, the experimental scheme placed hundreds of acoustic processing units around the region. These pick up the sounds being made in any given area and, when processed through an audio recognition engine, can provide data about what’s going on on the street. Smaller ‘motes’ were also developed to provide more accurate location information about each sound.
Created by members of Portugal’s UNINOVA institute and IT consultants EGlobalMark, the system was able to use city noises to detect things such as traffic congestion, parking availability and the location of emergency vehicles based on their sirens. It could then automatically trigger smart signs to display up-to-date information, for example.
The team particularly focused on a junction near the city hospital that’s a hotspot for motor accidents. Rather than force ambulance drivers to risk passing through a red light and into lateral traffic, the sensors were able to detect when and where an emergency vehicle was coming through and automatically change the lights in their favor.
The system could also be used to pick up ‘sonic events’ such as gunshots or explosions and detect their location. The researchers have also trialled an indoor version that can sense if an elderly resident has fallen over or to turn lights off when the room becomes silent.”
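The article does not describe the EAR-IT recognition engine itself, but the pipeline it sketches (a sound is classified, then mapped to a city-side action such as giving an ambulance a green-light corridor) can be illustrated with a minimal sketch. Everything below is hypothetical: the event labels, confidence threshold and actions are invented for illustration and are not the project's actual interfaces.

```python
# Minimal sketch of an acoustic-event pipeline like the one described above:
# a recognised sound is mapped to an action. All names and thresholds are
# hypothetical, not drawn from the EAR-IT project itself.

from dataclasses import dataclass

@dataclass
class AcousticEvent:
    label: str         # e.g. "siren", "gunshot", "traffic"
    confidence: float  # classifier score in [0, 1]
    location: str      # identifier of the 'mote' that localised the sound

def handle_event(event: AcousticEvent) -> str:
    """Map a recognised sound to a city-side action (illustrative only)."""
    if event.confidence < 0.8:
        return "ignore: low confidence"
    if event.label == "siren":
        return f"request green-light corridor near {event.location}"
    if event.label in {"gunshot", "explosion"}:
        return f"alert emergency services at {event.location}"
    if event.label == "traffic":
        return f"update congestion map for {event.location}"
    return "no action"

print(handle_event(AcousticEvent("siren", 0.93, "mote-17")))
```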
Hashtag Standards For Emergencies
Key Findings of a New Report by the UN Office for the Coordination of Humanitarian Affairs: “
- The public is using Twitter for real-time information exchange and for expressing emotional support during a variety of crises, such as wildfires, earthquakes, floods, hurricanes, political protests, mass shootings, and communicable-disease tracking. By encouraging proactive standardization of hashtags, emergency responders may be able to reduce a big-data challenge and better leverage crowdsourced information for operational planning and response.
- Twitter is the primary social media platform discussed in this Think Brief. However, the use of hashtags has spread to other social media platforms, including Sina Weibo, Facebook, Google+ and Diaspora. As a result, the ideas behind hashtag standardization may have a much larger sphere of influence than just this one platform.
- Three hashtag standards are encouraged and discussed: early standardization of the disaster name (e.g., #Fay), how to report non-emergency needs (e.g., #PublicRep) and requesting emergency assistance (e.g., #911US).
- As well as standardizing hashtags, emergency response agencies should encourage the public to enable Global Positioning System (GPS) when tweeting during an emergency. This will provide highly detailed information to facilitate response.
- Non-governmental groups, national agencies and international organizations should discuss the potential added value of monitoring social media during emergencies. These groups need to agree who is establishing the standards for a given country or event, which agency disseminates these prescriptive messages, and who is collecting and validating the incoming crowdsourced reports.
- Additional efforts should be pursued regarding how to best link crowdsourced information into emergency response operations and logistics. If this information will be collected, the teams should be ready to act on it in a timely manner.”
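The report's core recommendation (standardised hashtags plus GPS-enabled tweets) lends itself to a simple triage step on the responder side. The sketch below is illustrative only: the hashtags come from the report's examples, but the data layout, field names and sample tweets are assumptions, and real ingestion via the Twitter API is out of scope.

```python
# Illustrative sketch: filter an already-collected batch of tweets for the
# standardised emergency hashtags the report proposes (#Fay, #PublicRep, #911US)
# and keep only those with GPS coordinates attached. Tweets are plain dicts here.

EMERGENCY_TAGS = {"#911us"}          # requests for emergency assistance
NON_EMERGENCY_TAGS = {"#publicrep"}  # non-emergency needs reports

def triage(tweets):
    urgent, routine = [], []
    for t in tweets:
        if not t.get("coordinates"):
            continue  # the report stresses GPS-enabled tweets for actionable detail
        tags = {tag.lower() for tag in t.get("hashtags", [])}
        if tags & EMERGENCY_TAGS:
            urgent.append(t)
        elif tags & NON_EMERGENCY_TAGS:
            routine.append(t)
    return urgent, routine

# Hypothetical sample data for demonstration only.
sample = [
    {"text": "Trapped on roof, need help", "hashtags": ["#911US", "#Fay"],
     "coordinates": (27.77, -82.64)},
    {"text": "Shelter open on Main St", "hashtags": ["#PublicRep", "#Fay"],
     "coordinates": (27.80, -82.60)},
]
urgent, routine = triage(sample)
print(len(urgent), "urgent,", len(routine), "routine")
```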
Politics, Policy and Privatisation in the Everyday Experience of Big Data in the NHS
Qualitative research methods, including discourse analysis, ethnography of software and key informant interviews, were used. Actor-network theories, as developed by Science and Technology Studies (STS) researchers, were used to inform the research questions, data gathering and analysis. The chapter focuses on the aftermath of legislation to change the organisation of the NHS.
The chapter shows the benefits of qualitative research into specific manifestations of information technology. It explains how apparently ‘objective’ and ‘neutral’ quantitative data gathering and analysis is mediated by complex software practices. It considers the political power of claims that data is neutral.
The chapter provides insight into a specific case of healthcare data. It makes explicit the role of politics and the State in digitisation and shows how STS approaches can be used to understand political and technological practice.”
Gov.uk quietly disrupts the problem of online identity login
The Guardian: “A new “verified identity” scheme for gov.uk is making it simpler to apply for a new driving licence or passport, or to file a tax return online, allowing users to register securely using one login that connects and securely stores their personal data.
After nearly a year of closed testing with a few thousand Britons, the “Gov.UK Verify” scheme quietly opened to general users on 14 October, expanding across more services. It could have as many as half a million users within a year.
The most popular services are expected to be one for tax credit renewals, and CAP farm information – both expected to have around 100,000 users by April next year, and on their own making up nearly half of the total use.
The team behind the system claim this is a world first. Those countries that have developed advanced government services online, such as Estonia, rely on state identity cards – which the UK has rejected.
“This is a federated model of identity, not a centralised one,” said Janet Hughes, head of policy and engagement at the Government Digital Service’s identity assurance program, which developed and tested the system.
How it works
The Verify system has taken three years to develop, and involves checking a user’s identity against details from a range of sources, including credit reference agencies, utility bills, driving licences and mobile provider bills.
But it does not retain those pieces of information, and the credit checking companies do not know what service is being used. Only a mobile or landline number is kept in order to send verification codes for subsequent logins.
When people subsequently log in, they would have to provide a user ID and password, and verify their identity by entering a code sent to the stored phone number.
To enrol in the system, users have to be over 19, living in the UK, and have been resident for over 12 months. A faked passport would not be sufficient: “they would need a very full false ID, and have to not appear on any list of fraudulent identities,” one source at the GDS told the Guardian.
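The returning-user flow the article describes (only a phone number retained, a one-time code sent to it, then password plus code checked on login) is a familiar second-factor pattern. Here is a minimal sketch of that pattern under stated assumptions; it is not the Government Digital Service's actual implementation, and the function names and storage are invented for illustration.

```python
# Rough sketch of the login pattern described above: after enrolment only a
# phone number is kept, and a returning user is confirmed with a password plus
# a one-time code sent to that number. Purely illustrative.

import hashlib
import hmac
import secrets

USERS = {}    # user_id -> {"pw_hash": ..., "phone": ...}; a real system uses a database
PENDING = {}  # user_id -> one-time code awaiting confirmation

def enrol(user_id: str, password: str, phone: str) -> None:
    # A real system would use a salted key-derivation function, not bare SHA-256.
    USERS[user_id] = {
        "pw_hash": hashlib.sha256(password.encode()).hexdigest(),
        "phone": phone,  # the only contact detail retained, per the article
    }

def start_login(user_id: str, password: str) -> bool:
    user = USERS.get(user_id)
    given = hashlib.sha256(password.encode()).hexdigest()
    if not user or not hmac.compare_digest(user["pw_hash"], given):
        return False
    code = f"{secrets.randbelow(1_000_000):06d}"
    PENDING[user_id] = code
    print(f"(would send code {code} to {user['phone']})")
    return True

def confirm_login(user_id: str, code: str) -> bool:
    expected = PENDING.pop(user_id, None)
    return expected is not None and hmac.compare_digest(expected, code)
```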
Banks now following gov.uk’s lead
Government developers are confident that it presents a higher barrier to authentication than any other digital service – so that fraudulent transactions will be minimised. That has interested banks, which are understood to be expressing interest in using the same service to verify customer identities through an arms-length verification system.
The government system would not pass on people’s data, but would instead verify that someone is who they claim to be, much like Twitter and Facebook verify users’ identity to log in to third party sites, yet don’t share their users’ data.
The US, Canada and New Zealand have also expressed interest in following the UK’s lead with the system, which requires users to provide separate pieces of verified information about themselves from different sources.
The system then cross-references that verified information with credit reference agencies and other sources, which can include a mobile phone provider, passport, bank account, utility bill or driving licence.
The level of confidence in an individual’s identity is split into four levels. The lowest is for the creation of simple accounts to receive reports or updates: “we don’t need to know who it is, only that it’s the same person returning,” said Hughes.
Level 2 requires that “on the balance of probability” someone is who they say they are – which is the level to which Verify will be able to identify people. Hughes says that this will cover the majority of services.
Level 3 requires identity “beyond reasonable doubt” – perhaps including the first application for a passport – and Level 4 would require biometric information to confirm individual identity.
Seattle Launches Sweeping, Ethics-Based Privacy Overhaul
Privacy Advisor: “The City of Seattle this week launched a citywide privacy initiative aimed at providing greater transparency into the city’s data collection and use practices.
To that end, the city has convened a group of stakeholders, the Privacy Advisory Committee, comprising various government departments, to look at the ways the city is using data collected from practices as common as utility bill payments and renewing pet licenses or during the administration of emergency services like police and fire. By this summer, the committee will deliver to the City Council suggested principles and a “privacy statement” to provide direction on privacy practices citywide.
In addition, the city has partnered with the University of Washington, where Jan Whittington, assistant professor of urban design and planning and associate director at the Center for Information Assurance and Cybersecurity, has been given a $50,000 grant to look at open data, privacy and digital equity and how municipal data collection could harm consumers.
Responsible for all things privacy in this progressive city is Michael Mattmiller, who was hired to the position of chief technology officer (CTO) for the City of Seattle in June. Before his current gig, he worked as a senior strategist in enterprise cloud privacy for Microsoft. He said it’s an exciting time to be at the helm of the office because there’s momentum, there’s talent and there’s intention.
“We’re at this really interesting time where we have a City Council that strongly cares about privacy … We have a new police chief who wants to be very good on privacy … We also have a mayor who is focused on the city being an innovative leader in the way we interact with the public,” he said.
In fact, some City Council members have taken it upon themselves to meet with various groups and coalitions. “We have a really good, solid environment we think we can leverage to do something meaningful,” Mattmiller said….
Armbruster said the end goal is to create policies that will hold weight over time.
“I think when looking at privacy principles, from an ethical foundation, the idea is to create something that will last while technology dances around us,” she said, adding the principles should answer the question, “What do we stand for as a city and how do we want to move forward? So any technology that falls into our laps, we can evaluate and tailor or perhaps take a pass on as it falls under our ethical framework.”
The bottom line, Mattmiller said, is making a decision that says something about Seattle and where it stands.
“How do we craft a privacy policy that establishes who we want to be as a city and how we want to operate?” Mattmiller asked.”
The Creepy New Wave of the Internet
Review by Sue Halpern in the New York Review of Books of:
The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism
Enchanted Objects: Design, Human Desire, and the Internet of Things
Age of Context: Mobile, Sensors, Data and the Future of Privacy
More Awesome Than Money: Four Boys and Their Heroic Quest to Save Your Privacy from Facebook
…So here comes the Internet’s Third Wave. In its wake jobs will disappear, work will morph, and a lot of money will be made by the companies, consultants, and investment banks that saw it coming. Privacy will disappear, too, and our intimate spaces will become advertising platforms—last December Google sent a letter to the SEC explaining how it might run ads on home appliances—and we may be too busy trying to get our toaster to communicate with our bathroom scale to notice. Technology, which allows us to augment and extend our native capabilities, tends to evolve haphazardly, and the future that is imagined for it—good or bad—is almost always historical, which is to say, naive.”
A World That Counts: Mobilising a Data Revolution for Sustainable Development
Executive Summary of the Report by the UN Secretary-General’s Independent Expert Advisory Group on a Data Revolution for Sustainable Development (IEAG): “Data are the lifeblood of decision-making and the raw material for accountability. Without high-quality data providing the right information on the right things at the right time, designing, monitoring and evaluating effective policies becomes almost impossible.
New technologies are leading to an exponential increase in the volume and types of data available, creating unprecedented possibilities for informing and transforming society and protecting the environment. Governments, companies, researchers and citizen groups are in a ferment of experimentation, innovation and adaptation to the new world of data, a world in which data are bigger, faster and more detailed than ever before. This is the data revolution.
Some are already living in this new world. But too many people, organisations and governments are excluded because of lack of resources, knowledge, capacity or opportunity. There are huge and growing inequalities in access to data and information and in the ability to use it.
Data needs improving. Despite considerable progress in recent years, whole groups of people are not being counted and important aspects of people’s lives and environmental conditions are still not measured. For people, this can lead to the denial of basic rights, and for the planet, to continued environmental degradation. Too often, existing data remain unused because they are released too late or not at all, not well-documented and harmonized, or not available at the level of detail needed for decision-making.
As the world embarks on an ambitious project to meet new Sustainable Development Goals (SDGs), there is an urgent need to mobilise the data revolution for all people and the whole planet in order to monitor progress, hold governments accountable and foster sustainable development. More diverse, integrated, timely and trustworthy information can lead to better decision-making and real-time citizen feedback. This in turn enables individuals, public and private institutions, and companies to make choices that are good for them and for the world they live in.
This report sets out the main opportunities and risks presented by the data revolution for sustainable development. Seizing these opportunities and mitigating these risks requires active choices, especially by governments and international institutions. Without immediate action, gaps between developed and developing countries, between information-rich and information-poor people, and between the private and public sectors will widen, and risks of harm and abuses of human rights will grow.
An urgent call for action: Key recommendations
The strong leadership of the United Nations (UN) is vital for the success of this process. The Independent Expert Advisory Group (IEAG), established in August 2014, offers the UN Secretary-General several key recommendations for actions to be taken in the near future, summarised below:
- Develop a global consensus on principles and standards: The disparate worlds of public, private and civil society data and statistics providers need to be urgently brought together to build trust and confidence among data users. We propose that the UN establish a process whereby key stakeholders create a “Global Consensus on Data”, to adopt principles concerning legal, technical, privacy, geospatial and statistical standards which, among other things, will facilitate openness and information exchange and promote and protect human rights.
- Share technology and innovations for the common good: To create mechanisms through which technology and innovation can be shared and used for the common good, we propose to create a global “Network of Data Innovation Networks”, to bring together the organisations and experts in the field. This would: contribute to the adoption of best practices for improving the monitoring of SDGs, identify areas where common data-related infrastructures could address capacity problems and improve efficiency, encourage collaborations, identify critical research gaps and create incentives to innovate.
- New resources for capacity development: Improving data is a development agenda in its own right, and can improve the targeting of existing resources and spur new economic opportunities. Existing gaps can only be overcome through new investments and the strengthening of capacities. A new funding stream to support the data revolution for sustainable development should be endorsed at the “Third International Conference on Financing for Development”, in Addis Ababa in July 2015. An assessment will be needed of the scale of investments, capacity development and technology transfer that is required, especially for low income countries; and proposals developed for mechanisms to leverage the creativity and resources of the private sector. Funding will also be needed to implement an education program aimed at improving people’s, infomediaries’ and public servants’ capacity and data literacy to break down barriers between people and data.
- Leadership for coordination and mobilisation: A UN-led “Global Partnership for Sustainable Development Data” is proposed, to mobilise and coordinate the actions and institutions required to make the data revolution serve sustainable development, promoting several initiatives, such as:
- A “World Forum on Sustainable Development Data” to bring together the whole data ecosystem to share ideas and experiences for data improvements, innovation, advocacy and technology transfer. The first Forum should take place at the end of 2015, once the SDGs are agreed;
- A “Global Users Forum for Data for SDGs”, to ensure feedback loops between data producers and users, help the international community to set priorities and assess results;
- Brokering key global public-private partnerships for data sharing.
- Exploit some quick wins on SDG data: Establishing a “SDGs data lab” to support the development of a first wave of SDG indicators, developing an SDG analysis and visualisation platform using the most advanced tools and features for exploring data, and building a dashboard from diverse data sources on “the state of the world”.
Never again should it be possible to say “we didn’t know”. No one should be invisible. This is the world we want – a world that counts.”
OpenUp Corporate Data while Protecting Privacy
Much of the data generated by these devices is today controlled by corporations. These companies are in effect “owners” of terabytes of data and metadata. Companies use this data to aggregate, analyze, and track individual preferences, provide more targeted consumer experiences, and add value to the corporate bottom line.
At the same time, even as we witness a rapid “datafication” of the global economy, access to data is emerging as an increasingly critical issue, essential to addressing many of our most important social, economic, and political challenges. While the rise of the Open Data movement has opened up over a million datasets around the world, much of this openness is limited to government (and, to a lesser extent, scientific) data. Access to corporate data remains extremely limited. This is a lost opportunity. If corporate data—in the form of Web clicks, tweets, online purchases, sensor data, call data records, etc.—were made available in a de-identified and aggregated manner, researchers, public interest organizations, and third parties would gain greater insights on patterns and trends that could help inform better policies and lead to greater public good (including combatting Ebola).
Corporate data sharing holds tremendous promise. But its potential—and limitations—are also poorly understood. In what follows, we share early findings of our efforts to map this emerging open data frontier, along with a set of reflections on how to safeguard privacy and other citizen and consumer rights while sharing. Understanding the practice of shared corporate data—and assessing the associated risks—is an essential step in increasing access to socially valuable data held by businesses today. This is a challenge certainly worth exploring during the forthcoming OpenUp conference!
Understanding and classifying current corporate data sharing practices
Corporate data sharing remains very much a fledgling field. There has been little rigorous analysis of different ways or impacts of sharing. Nonetheless, our initial mapping of the landscape suggests there have been six main categories of activity—i.e., ways of sharing—to date:…
Assessing risks of corporate data sharing
Although shared corporate data offers several benefits for researchers, public interest organizations, and other companies, there do exist risks, especially regarding personally identifiable information (PII). When aggregated, PII can serve to help understand trends and broad demographic patterns. But if PII is inadequately scrubbed and aggregated data is linked to specific individuals, this can lead to identity theft, discrimination, profiling, and other violations of individual freedom. It can also lead to significant legal ramifications for corporate data providers….”
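The "de-identified and aggregated" release discussed above can be made concrete with a toy example: strip direct identifiers, publish only group counts, and suppress any group too small to hide individuals (a k-anonymity-style threshold). The threshold, field names and sample records below are invented for illustration and do not describe any specific company's practice.

```python
# Toy illustration of de-identified, aggregated release: drop names, publish
# counts per group, and suppress groups smaller than a minimum size.

from collections import Counter

K = 5  # minimum group size allowed in the released aggregate (hypothetical)

def aggregate(records, group_field):
    """Return counts per group, suppressing groups with fewer than K records."""
    counts = Counter(r[group_field] for r in records)
    return {group: n for group, n in counts.items() if n >= K}

# Hypothetical raw records containing PII.
raw = [{"name": "A. Person", "postcode_area": "EH1"} for _ in range(7)] + \
      [{"name": "B. Person", "postcode_area": "EH9"} for _ in range(2)]

print(aggregate(raw, "postcode_area"))  # {'EH1': 7}; EH9 suppressed (only 2 records)
```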
Could digital badges clarify the roles of co-authors?
AAAS Science Magazine: “Ever look at a research paper and wonder how the half-dozen or more authors contributed to the work? After all, it’s usually only the first or last author who gets all the media attention or the scientific credit when people are considered for jobs, grants, awards, and more. Some journals try to address this issue with the “authors’ contributions” sections within a paper, but a collection of science, publishing, and software groups is now developing a more modern solution—digital “badges,” assigned on publication of a paper online, that detail what each author did for the work and that the authors can link to their profiles elsewhere on the Web.
Those organizations include publishers BioMed Central and the Public Library of Science; The Wellcome Trust research charity; software development groups Mozilla Science Lab (a group of researchers, developers, librarians, and publishers) and Digital Science (a software and technology firm); and ORCID, an effort to assign researchers digital identifiers. The collaboration presented its progress on the project at the Mozilla Festival in London that ended last week. (Mozilla is the open software community behind the Firefox browser and other programs.)
The infrastructure of the badges is still being established, with early prototypes scheduled to launch early next year, according to Amye Kenall, the journal development manager of open data initiatives and journals at BioMed Central. She envisions the badge process in the following way: Once an article is published, the publisher would alert software maintained by Mozilla to automatically set up an online form, where authors fill out roles using a detailed contributor taxonomy. After the authors have completed this, the badges would then appear next to their names on the journal article, and double-clicking on a badge would lead to the ORCID site for that particular author, where the author’s badges, integrated with their publishing record, live….
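To make the workflow above concrete, one can imagine the kind of machine-readable record a badge system might attach to a published paper: each author carries role badges linked to an ORCID iD. The structure, field names and role labels below are a sketch only; the actual taxonomy and badge format being developed by Mozilla, BioMed Central and their partners may differ.

```python
# Sketch (assumed structure) of per-author role badges attached to a paper,
# each linked to an ORCID iD. Identifiers below are illustrative examples.

from dataclasses import dataclass, field

@dataclass
class AuthorBadges:
    name: str
    orcid: str                                   # e.g. "0000-0002-1825-0097"
    roles: list = field(default_factory=list)    # e.g. "software", "data curation"

paper = {
    "doi": "10.1234/example.5678",  # hypothetical DOI
    "authors": [
        AuthorBadges("J. Researcher", "0000-0002-1825-0097",
                     ["conceptualization", "writing"]),
        AuthorBadges("K. Analyst", "0000-0001-5109-3700",
                     ["software", "data curation"]),
    ],
}

for a in paper["authors"]:
    print(f"{a.name} ({a.orcid}): {', '.join(a.roles)}")
```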
The parties behind the digital badge effort are “looking to change behavior” of scientists in the competitive dog-eat-dog world of academia by acknowledging contributions, says Kaitlin Thaney, director of Mozilla Science Lab. Amy Brand, vice president of academic and research relations and VP of North America at Digital Science, says that the collaboration believes that the badges should be optional, to accommodate old-fashioned or less tech-savvy authors. She says that the digital credentials may improve lab culture, countering situations where junior scientists are caught up in lab politics and the “star,” who didn’t do much of the actual research apart from obtaining the funding, gets to be the first author of the paper and receive the most credit. “All of this calls out for more transparency,” Brand says….”