Hundreds of Bounty Hunters Had Access to AT&T, T-Mobile, and Sprint Customer Location Data for Years


Joseph Cox at Motherboard: “In January, Motherboard revealed that AT&T, T-Mobile, and Sprint were selling their customers’ real-time location data, which trickled down through a complex network of companies until eventually ending up in the hands of at least one bounty hunter. Motherboard was also able to purchase the real-time location of a T-Mobile phone on the black market from a bounty hunter source for $300. In response, telecom companies said that this abuse was a fringe case.

In reality, it was far from an isolated incident.

Around 250 bounty hunters and related businesses had access to AT&T, T-Mobile, and Sprint customer location data, with one bail bond firm using the phone location service more than 18,000 times, and others using it thousands or tens of thousands of times, according to internal documents obtained by Motherboard from a company called CerCareOne, a now-defunct location data seller that operated until 2017. The documents list not only the companies that had access to the data, but specific phone numbers that were pinged by those companies.

In some cases, the data sold is more sensitive than that offered by the service used by Motherboard last month, which estimated a location based on the cell phone towers that a phone connected to. CerCareOne sold cell phone tower data, but also sold highly sensitive and accurate GPS data to bounty hunters, an unprecedented move that meant users could locate someone accurately enough to see where they are inside a building. The company operated in near-total secrecy for over five years by making its customers agree to “keep the existence of CerCareOne.com confidential,” according to a terms of use document obtained by Motherboard.

Some of these bounty hunters then resold location data to those unauthorized to handle it, according to two independent sources familiar with CerCareOne’s operations.

The news shows how widely available Americans’ sensitive location data was to bounty hunters. This ease of access dramatically increased the risk of abuse….(More)”.

Artificial Intelligence and National Security


Report by Congressional Research Service: “Artificial intelligence (AI) is a rapidly growing field of technology with potentially significant implications for national security. As such, the U.S. Department of Defense (DOD) and other nations are developing AI applications for a range of military functions. AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semi-autonomous and autonomous vehicles.

Already, AI has been incorporated into military operations in Iraq and Syria. Congressional action has the potential to shape the technology’s development further, with budgetary and legislative decisions influencing the growth of military applications as well as the pace of their adoption.

AI technologies present unique challenges for military integration, particularly because the bulk of AI development is happening in the commercial sector. Although AI is not unique in this regard, the defense acquisition process may need to be adapted for acquiring emerging technologies like AI.

In addition, many commercial AI applications must undergo significant modification prior to being functional for the military. A number of cultural issues also challenge AI acquisition, as some commercial AI companies are averse to partnering with DOD due to ethical concerns, and even within the department, there can be resistance to incorporating AI technology into existing weapons systems and processes.

Potential international rivals in the AI market are creating pressure for the United States to compete for innovative military AI applications. China is a leading competitor in this regard, releasing a plan in 2017 to capture the global lead in AI development by 2030. Currently, China is primarily focused on using AI to make faster and more well-informed decisions, as well as on developing a variety of autonomous military vehicles. Russia is also active in military AI development, with a primary focus on robotics. Although AI has the potential to impart a number of advantages in the military context, it may also introduce distinct challenges.

AI technology could, for example, facilitate autonomous operations, lead to more informed military decisionmaking, and increase the speed and scale of military action. However, it may also be unpredictable or vulnerable to unique forms of manipulation. As a result of these factors, analysts hold a broad range of opinions on how influential AI will be in future combat operations.

While a small number of analysts believe that the technology will have minimal impact, most believe that AI will have at least an evolutionary—if not revolutionary—effect….(More)”.

Smart Contracts and Their Identity Crisis


Paper by Alvaro Gonzalez Rivas, Mariya Tsyganova and Eliza Mik: “Many expect Smart Contracts (SCs) to disrupt the way contracts are made, implying that SCs have the potential to affect all commercial relationships. SCs are automation tools; therefore, proponents claim that SCs can reduce transaction costs through disintermediation and risk reduction.

This is an over-simplification of the role of relationships, contract law, and risk. We believe there is a gap in the understanding of the capabilities of SCs. With that in mind, we seek to define an amorphous term and clarify the capabilities of SCs, intending to facilitate future SC research. We have examined the legal, technical, and IS views from both academic and practitioner perspectives. We conclude that SCs have taken many forms, becoming a suitcase word for any sort of code stored on a blockchain, including the embodiment of contractual terms, and that the immutable nature of SCs is a barrier to their adoption in uncertain and multi-contextual environments….(More)”.

Using Personal Informatics Data in Collaboration among People with Different Expertise


Dissertation by Chia-Fang Chung: “Many people collect and analyze data about themselves to improve their health and wellbeing. With the prevalence of smartphones and wearable sensors, people are able to collect detailed and complex data about their everyday behaviors, such as diet, exercise, and sleep. This everyday behavioral data can support individual health goals, help manage health conditions, and complement traditional medical examinations conducted in clinical visits. However, people often need support to interpret this self-tracked data. For example, many people share their data with health experts, hoping to use this data to support more personalized diagnosis and recommendations as well as to receive emotional support. However, when attempting to use this data in collaborations, people and their health experts often struggle to make sense of the data. My dissertation examines how to support collaborations between individuals and health experts using personal informatics data.

My research builds an empirical understanding of individual and collaboration goals around using personal informatics data, current practices of using this data to support collaboration, and challenges and expectations for integrating the use of this data into clinical workflows. These understandings help designers and researchers advance the design of personal informatics systems as well as the theoretical understandings of patient-provider collaboration.

Based on my formative work, I propose design and theoretical considerations regarding interactions between individuals and health experts mediated by personal informatics data. System designers and personal informatics researchers need to consider collaborations that occur throughout the personal tracking process. Patient-provider collaboration might influence individual decisions to track and to review, and systems supporting this collaboration need to consider individual and collaborative goals as well as support communication around these goals. Designers and researchers should also attend to individual privacy needs when personal informatics data is shared and used across different healthcare contexts. With these design guidelines in mind, I design and develop Foodprint, a photo-based food diary and visualization system. I also conduct field evaluations to understand the use of lightweight data collection and integration to support collaboration around personal informatics data. Findings from these field deployments indicate that photo-based visualizations allow both participants and health experts to easily understand eating patterns relevant to individual health goals. Participants and health experts can then focus on individual health goals and questions, exchange knowledge to support individualized diagnoses and recommendations, and develop actionable and feasible plans to accommodate individual routines….(More)”.
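
To make the idea of photo-based review concrete, here is a minimal Python sketch of how a food diary entry might be represented and bucketed by time of day so that eating patterns are easy to scan. The data model, field names, and meal-period cutoffs are assumptions for illustration, not Foodprint’s actual implementation.

```python
# A hypothetical sketch of a photo-based food diary entry and a simple
# time-of-day grouping; not Foodprint's actual data model or logic.
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime


@dataclass
class FoodEntry:
    photo_path: str        # path or URL of the meal photo
    taken_at: datetime     # when the photo was captured
    note: str = ""         # optional free-text annotation by the tracker


def group_by_meal_period(entries):
    """Bucket entries into rough meal periods (cutoffs are illustrative)."""
    buckets = defaultdict(list)
    for entry in entries:
        hour = entry.taken_at.hour
        if hour < 11:
            period = "morning"
        elif hour < 16:
            period = "midday"
        else:
            period = "evening"
        buckets[period].append(entry)
    return buckets


# Example: a summary a health expert could scan before a clinical visit.
diary = [
    FoodEntry("photos/oatmeal.jpg", datetime(2019, 1, 14, 8, 30)),
    FoodEntry("photos/ramen.jpg", datetime(2019, 1, 14, 21, 15), "late dinner"),
]
for period, items in group_by_meal_period(diary).items():
    print(period, [e.photo_path for e in items])
```

Grouping by time of day rather than by nutrient content keeps the review lightweight, in the spirit of the lightweight data collection described in the dissertation.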

What Makes a City Street Smart?


New York City Taxi and Limousine Commission (TLC): “Cities aren’t born smart. They become smart by understanding what is happening on their streets. Measurement is key to management, and amid the incomparable expansion of for-hire transportation service in New York City, measuring street activity is more important than ever. Between 2015 (when app companies first began reporting data) and June 2018, trips by app services increased more than 300%, now totaling over 20 million trips each month. That’s more cars, more drivers, and more mobility.

We know the true scope of this transformation today only because of the New York City Taxi and Limousine Commission’s (TLC) pioneering regulatory actions. Unlike most cities in the country, app services cannot operate in NYC unless they give the City detailed information about every trip. This is mandated by TLC rules and is not contingent on companies voluntarily “sharing” only a self-selected portion of the large amount of data they collect. Major trends in the taxi and for-hire vehicle industry are highlighted in TLC’s 2018 Factbook.

What Transportation Data Does TLC Collect?

Notably, Uber, Lyft, and their competitors today must give the TLC granular data about each and every trip and request for service. TLC does not receive passenger information; we require only the data necessary to understand traffic patterns, working conditions, vehicle efficiency, service availability, and other important information.

One of the most important aspects of the data TLC collects is that they are stripped of identifying information and made available to the public. Through the City’s Open Data portal, TLC’s trip data help businesses distinguish new business opportunities from saturated markets, encourage competition, and help investors follow trends in both new app transportation and the traditional car service and hail taxi markets. As app companies contemplate going public, their investors have surely already bookmarked TLC’s Open Data site.
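
As a rough illustration of how this open data can be used, the Python sketch below pulls a sample of trip records from the City’s Open Data portal and counts trips per month. The dataset ID and the pickup-time column name are placeholders that would need to be replaced for the specific TLC trip-record dataset of interest; this is an assumed workflow, not TLC’s own tooling.

```python
# Illustrative only: pull a sample of TLC trip records from NYC Open Data
# and count trips per month. Replace DATASET_ID and PICKUP_COL with the
# values for the specific trip-record dataset you want to analyze.
import pandas as pd

DATASET_ID = "xxxx-xxxx"          # placeholder Socrata dataset identifier
PICKUP_COL = "pickup_datetime"    # assumed name of the pickup timestamp field

url = (
    f"https://data.cityofnewyork.us/resource/{DATASET_ID}.csv"
    "?$limit=50000"               # SoQL parameter: cap the sample size
)

trips = pd.read_csv(url, parse_dates=[PICKUP_COL])

# Monthly trip counts make the growth trend described above easy to chart.
monthly_trips = (
    trips.set_index(PICKUP_COL)
         .resample("M")
         .size()
         .rename("trips")
)
print(monthly_trips.tail(12))
```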

Using Data to Improve Mobility

With this information, NYC now knows that people are getting around the boroughs using app services and shared rides with greater frequency. These are the same NYC neighborhoods that traditionally were not served by yellow cabs and often have less robust public transportation options. We also know these services provide an increasing number of trips in congested areas like Manhattan and the inner rings of Brooklyn and Queens, where public transportation options are relatively plentiful….(More)”.

The Big (data) Bang: Opportunities and Challenges for Compiling SDG Indicators


Steve MacFeely at Global Policy: “Official statisticians around the world are faced with the herculean task of populating the Sustainable Development Goals global indicator framework. As traditional data sources appear to be insufficient, statisticians are naturally considering whether big data can contribute anything useful. While the statistical possibilities appear to be theoretically endless, in practice big data also present some enormous challenges and potential pitfalls: legal, ethical, technical, and reputational. This paper examines the opportunities and challenges presented by big data for compiling indicators to support Agenda 2030….(More)”.

Open Data Politics: A Case Study on Estonia and Kazakhstan


Book by Maxat Kassen: “… offers a cross-national comparison of open data policies in Estonia and Kazakhstan. By analyzing a broad range of open data-driven projects and startups in both countries, it reveals the potential that open data phenomena hold with regard to promoting public sector innovations. The book addresses various political and socioeconomic contexts in these two transitional societies, and reviews the strategies and tactics adopted by policymakers and stakeholders to identify drivers of and obstacles to the implementation of open data innovations. Given its scope, the book will appeal to scholars, policymakers, e-government practitioners and open data entrepreneurs interested in implementing and evaluating open data-driven public sector projects….(More)”

Facebook could be forced to share data on effects to the young


Nicola Davis at The Guardian: “Social media companies such as Facebook and Twitter could be required by law to share data with researchers to help examine potential harms to young people’s health and identify who may be at risk.

Surveys and studies have previously suggested a link between the use of devices and networking sites and an increase in problems among teenagers and younger children, ranging from poor sleep to bullying, mental health issues and grooming.

However, high quality research in the area is scarce: among the conundrums that need to be looked at are matters of cause and effect, the size of any impacts, and the importance of the content of material accessed online.

According to a report by the Commons science and technology committee on the effects of social media and screen time among young people, companies should be compelled to protect users, and legislation was needed to enable access to data for high-quality studies to be carried out.

The committee noted that the government had failed to commission such research and had instead relied on requesting reviews of existing studies. This was despite a 2017 green paper that set out a consultation process on a UK internet safety strategy.

“We understand [social media companies’] eagerness to protect the privacy of users but sharing data with bona fide researchers is the only way society can truly start to understand the impact, both positive and negative, that social media is having on the modern world,” said Norman Lamb, the Liberal Democrat MP who chairs the committee. “During our inquiry, we heard that social media companies had openly refused to share data with researchers who are keen to examine patterns of use and their effects. This is not good enough.”

Prof Andrew Przybylski, the director of research at the Oxford Internet Institute, said the issue of good quality research was vital, adding that many people’s perception of the effect of social media is largely rooted in hype.

“Social media companies must participate in open, robust, and transparent science with independent scientists,” he said. “Their data, which we give them, is both their most valuable resource and it is the only means by which we can effectively study how these platforms affect users.”…(More)”

Toward an Open Data Demand Assessment and Segmentation Methodology


Stefaan Verhulst and Andrew Young at IADB: “Across the world, significant time and resources are being invested in making government data accessible to all with the broad goal of improving people’s lives. Evidence of open data’s impact – on improving governance, empowering citizens, creating economic opportunity, and solving public problems – is emerging and is largely encouraging. Yet much of the potential value of open data remains untapped, in part because we often do not understand who is using open data or, more importantly, who is not using open data but could benefit from the insights it may generate. By identifying, prioritizing, segmenting, and engaging with the actual and future demand for open data in a systemic and systematic way, practitioners can ensure that open data is more targeted. Understanding and meeting the demand for open data can increase overall impact and return on investment of public funds.

The GovLab, in partnership with the Inter-American Development Bank, and with the support of the French Development Agency, developed the Open Data Demand Assessment and Segmentation Methodology to provide open data policymakers and practitioners with an approach for identifying, segmenting, and engaging with demand. This process specifically seeks to empower data champions within public agencies who want to increase their data’s ability to improve people’s lives….(More)”.
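
A very simple sketch of what segmenting demand can look like in practice is shown below; the segment labels, survey fields, and thresholds are invented for illustration and are not the categories defined by the methodology itself.

```python
# Illustrative sketch of segmenting prospective open data users from a
# hypothetical intake survey; labels and thresholds are not drawn from
# the GovLab/IADB methodology.
import pandas as pd

survey = pd.DataFrame({
    "respondent": ["A", "B", "C", "D", "E"],
    "uses_per_month": [0, 1, 8, 25, 0],
    "technical_skill": ["low", "low", "medium", "high", "none"],
})

def assign_segment(row):
    if row["uses_per_month"] == 0:
        return "potential demand"   # could benefit but is not yet using the data
    if row["technical_skill"] in ("medium", "high"):
        return "power re-user"      # builds analyses or products on the data
    return "casual consumer"        # occasional look-ups; needs simpler interfaces

survey["segment"] = survey.apply(assign_segment, axis=1)
print(survey[["respondent", "segment"]])
```

Segments like these can then be prioritized and engaged differently, which is the point of targeting open data efforts toward actual and future demand.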

Privacy concerns collide with the public interest in data


Gillian Tett in the Financial Times: “Late last year Statistics Canada — the agency that collects government figures — launched an innovation: it asked the country’s banks to supply “individual-level financial transactions data” for 500,000 customers to allow it to track economic trends. The agency argued this was designed to gather better figures for the public interest. However, it tipped the banks into a legal quandary. Under Canadian law (as in most western countries) companies are required to help StatsCan by supplying operating information. But data privacy laws in Canada also say that individual bank records are confidential. When the StatsCan request leaked out, it sparked an outcry — forcing the agency to freeze its plans. “It’s a mess,” a senior Canadian banker says, adding that the laws “seem contradictory”.

Corporate boards around the world should take note. In the past year, executive angst has exploded about the legal and reputational risks created when private customer data leak out, either by accident or in a cyber hack. Last year’s Facebook scandals have been a hot debating topic among chief executives at this week’s World Economic Forum in Davos, as has the EU’s General Data Protection Regulation. However, there is another important side to this Big Data debate: must companies provide private digital data to public bodies for statistical and policy purposes? Or to put it another way, it is time to widen the debate beyond emotive privacy issues to include the public interest and policy needs. The issue has received little public debate thus far, except in Canada. But it is becoming increasingly important.

Companies are sitting on a treasure trove of digital data that offers valuable real-time signals about economic activity. This information could be even more significant than existing statistics, because existing measures struggle to capture how the economy is changing. Take Canada. StatsCan has hitherto tracked household consumption by following retail sales statistics, supplemented by telephone surveys. But consumers are becoming less willing to answer their phones, which undermines the accuracy of surveys, and consumption of digital services cannot be easily tracked. ...

But the biggest data collections sit inside private companies. Big groups know this, and some are trying to respond. Google has created its own measures to track inflation, which it makes publicly available. JPMorgan and other banks crunch customer data and publish reports about general economic and financial trends. Some tech groups are even starting to volunteer data to government bodies. LinkedIn has offered to provide anonymised data on education and employment to municipal and city bodies in America and beyond, to help them track local trends; the group says this is in the public interest for policy purposes, as “it offers a different perspective” than official data sources.

But it is one thing for LinkedIn to offer anonymised data when customers have signed consent forms permitting the transfer of data; it is quite another for banks (or other companies) who have operated with strict privacy rules. If nothing else, the StatsCan saga shows there urgently needs to be more public debate, and more clarity, around these rules. Consumer privacy issues matter (a lot). But as corporate data mountains grow, we will need to ask whether we want to live in a world where Amazon and Google — and Mastercard and JPMorgan — know more about economic trends than central banks or finance ministries. Personally, I would say “no”. But sooner or later politicians will need to decide on their priorities in this brave new Big Data world; the issue cannot be simply left to the half-hidden statisticians….(More)”.
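
The kind of aggregate, real-time signal discussed above can be illustrated with a toy example: reducing transaction-level records to monthly category totals, so that no individual-level data needs to leave the institution. The data, categories, and amounts below are invented purely for illustration.

```python
# Toy illustration: reduce transaction-level records to monthly category
# totals before anything is shared. All figures below are invented.
import pandas as pd

transactions = pd.DataFrame({
    "date": pd.to_datetime(
        ["2018-11-03", "2018-11-21", "2018-12-05", "2018-12-24"]
    ),
    "category": ["groceries", "digital services", "groceries", "digital services"],
    "amount": [82.10, 11.99, 95.40, 12.99],
})

# Only category totals per month survive this step; no account identifiers.
monthly_index = (
    transactions
    .groupby([transactions["date"].dt.to_period("M"), "category"])["amount"]
    .sum()
    .unstack("category")
)
print(monthly_index)
```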