One State Wants To Let You Carry Your Driver’s License On Your Phone


From Singularity Hub: “There’s now technology to replace almost everything in your wallet. Your cash, credit cards, and loyalty programs are all on their way to becoming obsolete. Money can now be sent via app, text, e-mail — it can even be sent via Snapchat. But you can’t leave your wallet at home just yet. That’s because there is one item that remains largely unchanged: your driver’s license.

If the Iowa Department of Transportation has its way, that may no longer be the case. According to an article in the Des Moines Register, the agency is in the early stages of developing mobile software for just this purpose. The app would store a resident’s personal information (whatever is already on the physical license) and also include a scannable bar code. The plan is for the app to use a two-step verification process with some type of biometric check or PIN code. At this time, specific implementation details are still being worked out.
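The article leaves the verification design open, but a two-step unlock of the kind described would typically combine something the holder knows (a PIN) with something they are (a biometric match) before revealing the license. The sketch below is a hypothetical illustration, not Iowa's actual design; all names and parameters are assumptions.

```python
import hashlib
import hmac
import secrets

def hash_pin(pin: str, salt: bytes) -> bytes:
    """Derive a PIN verifier; the raw PIN is never stored."""
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)

def unlock_license(stored_verifier: bytes, salt: bytes,
                   entered_pin: str, biometric_ok: bool) -> bool:
    """Two-step check: the biometric must match AND the PIN must be correct."""
    pin_ok = hmac.compare_digest(stored_verifier, hash_pin(entered_pin, salt))
    return biometric_ok and pin_ok

# Enrollment: store only a salt and a derived verifier.
salt = secrets.token_bytes(16)
verifier = hash_pin("4821", salt)

# Unlock attempt: the device's biometric API reports a match (assumed here).
if unlock_license(verifier, salt, "4821", biometric_ok=True):
    print("License unlocked; display the scannable bar code.")
```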

The governments of the United Kingdom and the United Arab Emirates have both previously announced plans to experiment with the concept. It’s becoming increasingly common to see mobile versions of other documents: over 30 states now allow motorists to show electronic proof of insurance. It only follows that the driver’s license would be next. But the considerations around that document are different — it is perhaps the most regulated and important document that a person carries….(More)”

R U There?


From the New Yorker, on a new counselling service that harnesses the power of the text message: “…a person can contact Crisis Text Line without even looking at her phone. The number—741741—traces a simple, muscle-memory-friendly path down the left column of the keypad. Anyone who texts in receives an automatic response welcoming her to the service. Another provides a link to the organization’s privacy policy and explains that she can text “STOP” to end a conversation at any time. Meanwhile, the incoming message appears on the screen of Crisis Text Line’s proprietary computer system. The interface looks remarkably like a Facebook feed—pale background, blue banner at the top, pop-up messages in the lower right corner—a design that is intended to feel familiar and frictionless. The system, which receives an average of fifteen thousand texts a day, highlights messages containing words that might indicate imminent danger, such as “suicide,” “kill,” and “hopeless.”

Within five minutes, one of the counsellors on duty will write back. (Up to fifty people, most of them in their late twenties, are available at any given time, depending upon demand, and they can work wherever there’s an Internet connection.) An introductory message from a counsellor includes a casual greeting and a question about why the texter is writing in….(More)”
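The keyword highlighting the article describes is straightforward to sketch. A minimal, hypothetical version of that kind of triage follows; the word list and queue are illustrative assumptions, not Crisis Text Line's actual system.

```python
# Minimal sketch of keyword triage: texts containing high-risk words
# are flagged so counsellors see them first. The word list is illustrative.
HIGH_RISK_WORDS = {"suicide", "kill", "hopeless"}

def is_high_risk(text: str) -> bool:
    """Return True if the message contains any high-risk word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not HIGH_RISK_WORDS.isdisjoint(words)

incoming = [
    "i failed my exam today",
    "everything feels hopeless",
]
# Flagged messages sort to the front of the queue.
prioritized = sorted(incoming, key=is_high_risk, reverse=True)
print(prioritized)  # ['everything feels hopeless', 'i failed my exam today']
```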

Big Data and Discriminatory Pricing


White House: “In response to the big data and privacy report’s finding that these technologies and tools can enable new forms of discrimination, the White House Council of Economic Advisers conducted a study examining whether and how companies may use big data technologies to offer different prices to different consumers — a practice known as “discriminatory pricing.” The CEA found that many companies already use big data for targeted marketing, and others are experimenting in a limited way with personalized pricing, but this practice is not yet widespread. While the economic literature contends that discriminatory pricing will often, though not always, be welfare-enhancing for businesses and consumers, the CEA concludes that policymakers should be vigilant against the potential for discriminatory outcomes, particularly in cases where prices are not transparent and could give rise to fraud or scams….To read the Council of Economic Advisers report on discriminatory pricing, click here.
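The practice the CEA examined, using data about a shopper to set the price that shopper sees, can be illustrated with a toy segment-based quote function. The segments and multipliers below are invented for the example; no real pricing scheme is being reproduced.

```python
# Toy illustration of personalized pricing. All numbers are invented.
BASE_PRICE = 100.00

SEGMENT_MULTIPLIER = {
    "price_sensitive":   0.90,  # inferred bargain hunter: discounted quote
    "typical":           1.00,
    "price_insensitive": 1.15,  # inferred low sensitivity: marked-up quote
}

def quote(segment: str) -> float:
    """Price quoted to a shopper, given the segment inferred from their data."""
    return round(BASE_PRICE * SEGMENT_MULTIPLIER.get(segment, 1.00), 2)

for segment in SEGMENT_MULTIPLIER:
    print(f"{segment}: ${quote(segment):.2f}")
```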

The Precision Medicine Initiative: Data-Driven Treatments as Unique as Your Own Body


White House Press Release: “…the Precision Medicine Initiative will pioneer a new model of patient-powered research that promises to accelerate biomedical discoveries and provide clinicians with new tools, knowledge, and therapies to select which treatments will work best for which patients.

Most medical treatments have been designed for the “average patient.” As a result of this “one-size-fits-all” approach, treatments can be very successful for some patients but not for others. This is changing with the emergence of precision medicine, an innovative approach to disease prevention and treatment that takes into account individual differences in people’s genes, environments, and lifestyles. Precision medicine gives clinicians tools to better understand the complex mechanisms underlying a patient’s health, disease, or condition, and to better predict which treatments will be most effective….

Objectives of the Precision Medicine Initiative:

  • More and better treatments for cancer: NCI will accelerate the design and testing of effective, tailored treatments for cancer by expanding genetically based clinical cancer trials, exploring fundamental aspects of cancer biology, and establishing a national “cancer knowledge network” that will generate and share new knowledge to fuel scientific discovery and guide treatment decisions.
  • Creation of a voluntary national research cohort: NIH, in collaboration with other agencies and stakeholders, will launch a national, patient-powered research cohort of one million or more Americans who volunteer to participate in research.  Participants will be involved in the design of the Initiative and will have the opportunity to contribute diverse sources of data—including medical records; profiles of the patient’s genes, metabolites (chemical makeup), and microorganisms in and on the body; environmental and lifestyle data; patient-generated information; and personal device and sensor data.  Privacy will be rigorously protected.  This ambitious project will leverage existing research and clinical networks and build on innovative research models that enable patients to be active participants and partners.  The cohort will be broadly accessible to qualified researchers and will have the potential to inspire scientists from multiple disciplines to join the effort and apply their creative thinking to generate new insights. The ONC will develop interoperability standards and requirements to ensure secure data exchange with patients’ consent, to empower patients and clinicians and advance individual, community, and population health.
  • Commitment to protecting privacy: To ensure from the start that this Initiative adheres to rigorous privacy protections, the White House will launch a multi-stakeholder process with HHS and other Federal agencies to solicit input from patient groups, bioethicists, privacy and civil liberties advocates, technologists, and other experts in order to identify and address any legal and technical issues related to the privacy and security of data in the context of precision medicine.
  • Regulatory modernization: The Initiative will include reviewing the current regulatory landscape to determine whether changes are needed to support the development of this new research and care model, including its critical privacy and participant protection framework.  As part of this effort, the FDA will develop a new approach for evaluating Next Generation Sequencing technologies — tests that rapidly sequence large segments of a person’s DNA, or even their entire genome. The new approach will facilitate the generation of knowledge about which genetic changes are important to patient care and foster innovation in genetic sequencing technology, while ensuring that the tests are accurate and reliable.
  • Public-private partnerships: The Obama Administration will forge strong partnerships with existing research cohorts, patient groups, and the private sector to develop the infrastructure that will be needed to expand cancer genomics, and to launch a voluntary million-person cohort.  The Administration will call on academic medical centers, researchers, foundations, privacy experts, medical ethicists, and medical product innovators to lay the foundation for this effort, including developing new approaches to patient participation and empowerment.  The Administration will carefully consider and develop an approach to precision medicine, including appropriate regulatory frameworks, that ensures consumers have access to their own health data – and to the applications and services that can safely and accurately analyze it – so that in addition to treating disease, we can empower individuals and families to invest in and manage their health.”

(More).

With a Few Bits of Data, Researchers Identify ‘Anonymous’ People


From the New York Times: “Even when real names and other personal information are stripped from big data sets, it is often possible to use just a few pieces of the information to identify a specific person, according to a study to be published Friday in the journal Science.

In the study, titled “Unique in the Shopping Mall: On the Reidentifiability of Credit Card Metadata,” a group of data scientists analyzed credit card transactions made by 1.1 million people in 10,000 stores over a three-month period. The data set contained details including the date of each transaction, amount charged and name of the store.

Although the information had been “anonymized” by removing personal details like names and account numbers, the uniqueness of people’s behavior made it easy to single them out.

In fact, knowing just four random pieces of information was enough to reidentify 90 percent of the shoppers as unique individuals and to uncover their records, researchers calculated. And that uniqueness of behavior — or “unicity,” as the researchers termed it — combined with publicly available information, like Instagram or Twitter posts, could make it possible to reidentify people’s records by name.
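The paper's "unicity" measure can be approximated with a simple experiment: sample a few points (say, shop-and-day pairs) from one person's records and count how many people in the data set match all of them. Here is a hedged sketch on synthetic data standing in for the credit card records; the record format is an assumption for illustration.

```python
import random

def unicity(records: dict, k: int = 4, trials: int = 1000) -> float:
    """Fraction of sampled people uniquely identified by k random points.

    records maps person -> set of (shop, day) points.
    """
    eligible = [p for p, pts in records.items() if len(pts) >= k]
    unique = 0
    for _ in range(trials):
        person = random.choice(eligible)
        known = set(random.sample(sorted(records[person]), k))
        matches = [p for p, pts in records.items() if known <= pts]
        if matches == [person]:
            unique += 1
    return unique / trials

# Tiny synthetic stand-in; the study covered 1.1 million people, 10,000 shops.
records = {
    p: {(random.randrange(10_000), random.randrange(90)) for _ in range(30)}
    for p in range(1_000)
}
print(f"estimated unicity at k=4: {unicity(records):.2f}")
```

With sparse, high-dimensional traces like these, a handful of points is usually enough to single one person out, which is the paper's central finding.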

“The message is that we ought to rethink and reformulate the way we think about data protection,” said Yves-Alexandre de Montjoye, a graduate student in computational privacy at the M.I.T. Media Lab who was the lead author of the study. “The old model of anonymity doesn’t seem to be the right model when we are talking about large-scale metadata.”

The analysis of large data sets containing details on people’s behavior holds great potential to improve public health, city planning and education.

But the study calls into question the standard methods many companies, hospitals and government agencies currently use to anonymize their records. It may also give ammunition to some technologists and privacy advocates who have challenged the consumer-tracking processes used by advertising software and analytics companies to tailor ads to so-called anonymous users online….(More).”

The Architecture of Privacy


Book: “Technology’s influence on privacy has become a matter of everyday concern for millions of people, from software architects designing new products to political leaders and consumer groups. This book explores the issue from the perspective of technology itself: how privacy-protective features can become a core part of product functionality, rather than added on late in the development process.
The Architecture of Privacy will not only help empower software engineers, but also show policymakers, academics, and advocates that, through an arsenal of technical tools, engineers can form the building blocks of nuanced policies that maximize privacy protection and utility—a menu of what to demand in new technology.
Topics include:

  • How technology and privacy policy interact and influence one another
  • Privacy concerns about government and corporate data collection practices
  • Approaches to federated systems as a component of privacy-protecting architecture
  • Alternative approaches to compartmentalized access to data
  • Methods to limit the amount of data revealed in searches, sidestepping all-or-nothing choices
  • Techniques for data purging and responsible data retention
  • Keeping and analyzing audit logs as part of a program of comprehensive system oversight (a minimal sketch of such a log follows this list)
  • … (More)
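To make the last item concrete: audit logging built into the data-access layer, rather than bolted on, can be as simple as a decorator that records every read. The sketch below is a generic illustration of the book's theme, not an excerpt from it; all names are assumptions.

```python
import json
import time
from functools import wraps

AUDIT_LOG = "access_audit.jsonl"  # append-only log, reviewed by oversight

def audited(resource: str):
    """Decorator: record who accessed which record, when, and for what purpose."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user: str, record_id: str, purpose: str):
            entry = {
                "ts": time.time(),
                "user": user,
                "resource": resource,
                "record_id": record_id,
                "purpose": purpose,
            }
            with open(AUDIT_LOG, "a") as f:
                f.write(json.dumps(entry) + "\n")
            return fn(user, record_id, purpose)
        return wrapper
    return decorator

@audited("customer_db")
def read_record(user, record_id, purpose):
    return {"record_id": record_id}  # stand-in for a real lookup

read_record("analyst7", "rec-123", purpose="quarterly review")
```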

The Cathedral of Computation


From the Atlantic: “We’re not living in an algorithmic culture so much as a computational theocracy. Algorithms are everywhere, supposedly. We are living in an “algorithmic culture,” to use the author and communication scholar Ted Striphas’s name for it. Google’s search algorithms determine how we access information. Facebook’s News Feed algorithms determine how we socialize. Netflix’s and Amazon’s collaborative filtering algorithms choose products and media for us. You hear it everywhere. “Google announced a change to its algorithm,” a journalist reports. “We live in a world run by algorithms,” a TED talk exhorts. “Algorithms rule the world,” a news report threatens. Another upgrades rule to dominion: “The 10 Algorithms that Dominate Our World.”…
It’s part of a larger trend. The scientific revolution was meant to challenge tradition and faith, particularly a faith in religious superstition. But today, Enlightenment ideas like reason and science are beginning to flip into their opposites. Science and technology have become so pervasive and distorted, they have turned into a new type of theology.
The worship of the algorithm is hardly the only example of the theological reversal of the Enlightenment—for another sign, just look at the surfeit of nonfiction books promising insights into “The Science of…” anything, from laughter to marijuana. But algorithms hold a special station in the new technological temple because computers have become our favorite idols….
Once you adopt skepticism toward the algorithmic- and the data-divine, you can no longer construe any computational system as merely algorithmic. Think about Google Maps, for example. It’s not just mapping software running via computer—it also involves geographical information systems, geolocation satellites and transponders, human-driven automobiles, roof-mounted panoramic optical recording systems, international recording and privacy law, physical- and data-network routing systems, and web/mobile presentational apparatuses. That’s not algorithmic culture—it’s just, well, culture….(More).”

Would You Share Private Data for the Good of City Planning?


Henry Grabar at NextCity: “The proliferation of granular data on automobile movement, drawn from smartphones, cab companies, sensors and cameras, is sharpening our sense of how cars travel through cities. Panglossian seers believe the end of traffic jams is nigh.
This information will change cities beyond their roads. Real-time traffic data may lead to reworked intersections and new turning lanes, but understanding cars is in some ways a stand-in for understanding people. There’s traffic as traffic and traffic as proxy, notes Brett Goldstein, an urban science fellow at the University of Chicago who served as that city’s first data officer from 2011 to 2013. “We’d be really naive, in thinking about how we make cities better,” he says, “to only consider traffic for what it is.”
Even a small subset of a city’s car data goes a long way. Consider the raft of discrete findings that have emerged from the records of New York City taxis.
Researchers at the Massachusetts Institute of Technology, led by Paolo Santi, showed that cab-sharing could reduce taxi mileage by 40 percent. Their counterparts at NYU, led by Claudio Silva, mapped activity around hubs like train stations and airports and during hurricanes.
“You start to build actual models of how people move, and where they move,” observes Silva, the head of disciplines at NYU’s Center for Urban Science and Progress (CUSP). “The uses of this data for non-traffic engineering are really substantial.”…
Many of these ideas are hypothetical, for the moment, because so-called “granular” data is so hard to come by. That’s one reason the release of New York’s taxi cab data spurred so many studies — it’s an oasis of information in a desert of undisclosed records. Corporate entreaties, like Uber’s pending data offering to Boston, don’t always meet researchers’ standards. “It’s going to be a lot of superficial data, and it’s not clear how usable it’ll be at this point,” explains Sarah Kaufman, the digital manager at NYU’s Rudin Center for Transportation….
Yet Americans seem much more alarmed by the collection of location data than by other privacy breaches.
How can data utopians convince the hoi polloi to share their comings and goings? One thought: Make them secure. Mike Flowers, the founder of New York City’s Office of Data Analytics and a fellow at NYU’s CUSP, told me it might be time to consider establishing a quasi-governmental body that people would trust to make their personal data anonymous before they are channeled into government projects. (New York City’s Taxi and Limousine Commission did not do a very good job at this, which led to Gawker publishing a dozen celebrity cab rides.)
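The TLC episode is instructive about what "anonymous" requires: the released trip data replaced medallion numbers with unsalted MD5 hashes, and because the space of valid medallions is tiny, every hash could be reversed by exhaustive search. A sketch of that attack, using a simplified ID format for brevity:

```python
import hashlib
from itertools import product
from string import ascii_uppercase, digits

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# Simplified ID space: one letter plus three digits, only 26,000 values.
# Real medallion formats were similarly small, which is the core flaw.
table = {}
for letter, a, b, c in product(ascii_uppercase, digits, digits, digits):
    plain = f"{letter}{a}{b}{c}"
    table[md5_hex(plain)] = plain

leaked_hash = md5_hex("G742")   # what the "anonymized" file would contain
print(table[leaked_hash])       # recovers the original ID: G742
```

Hashing a low-entropy identifier is not anonymization; a salted keyed hash, or better, random per-release pseudonyms, would have closed this hole.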
Another idea is to frame open data as a beneficial trade-off. “When people provide information, they want to realize the benefit of the information,” Goldstein says.
Users tell the routing company Waze where they are and get a smoother commute in return. Progressive Insurance offers drivers a “Snapshot” tracker. If it likes the way you drive, the company will lower your rates. It’s not hard to imagine that, in the long run, drivers will be penalized for refusing such a device…. (More).”

Big Data Now


From O’Reilly Radar: “In the four years we’ve been producing Big Data Now, our wrap-up of important developments in the big data field, we’ve seen tools and applications mature, multiply, and coalesce into new categories. This year’s free wrap-up of Radar coverage is organized around eight themes:

  • Cognitive augmentation: As data processing and data analytics become more accessible, jobs that can be automated will go away. But to be clear, there are still many tasks where the combination of humans and machines produces superior results.
  • Intelligence matters: Artificial intelligence is now playing a bigger and bigger role in everyone’s lives, from sorting our email to rerouting our morning commutes, from detecting fraud in financial markets to predicting dangerous chemical spills. The computing power and algorithmic building blocks to put AI to work have never been more accessible.
  • The convergence of cheap sensors, fast networks, and distributed computation: The amount of quantified data available is increasing exponentially — and aside from tools for centrally handling huge volumes of time-series data as it arrives, devices and software are getting smarter about placing their own data accurately in context, extrapolating without needing to ‘check in’ constantly.
  • Reproducing, managing, and maintaining data pipelines: The coordination of processes and personnel within organizations to gather, store, analyze, and make use of data.
  • The evolving, maturing marketplace of big data components: Open-source components like Spark, Kafka, Cassandra, and ElasticSearch are reducing the need for companies to build in-house proprietary systems. On the other hand, vendors are developing industry-specific suites and applications optimized for the unique needs and data sources in a field.
  • The value of applying techniques from design and social science: While data science knows human behavior in the aggregate, design works in the particular, where A/B testing won’t apply — you only get one shot to communicate your proposal to a CEO, for example. Similarly, social science enables extrapolation from sparse data. Both sets of tools enable you to ask the right questions, and scope your problems and solutions realistically.
  • The importance of building a data culture: An organization that is comfortable with gathering data, curious about its significance, and willing to act on its results will perform demonstrably better than one that doesn’t. These priorities must be shared throughout the business.
  • The perils of big data: From poor analysis (driven by false correlation or lack of domain expertise) to intrusiveness (privacy invasion, price profiling, self-fulfilling predictions), big data has negative potential.

Download our free snapshot of big data in 2014, and follow the story this year on Radar.”

Social Sensing and Crowdsourcing: the future of connected sensors


Conference Paper by C. Geijer, M. Larsson, M. Stigelid: “Social sensing is becoming an alternative to static sensors. It is a way to crowdsource data collection where sensors can be placed on frequently used objects, such as mobile phones or cars, to gather important information. The increasing availability of technology, such as cheap sensors added to cell phones, creates an opportunity to build bigger sensor networks capable of collecting larger quantities of more complex data. The purpose of this paper is to highlight problems in the field, as well as their solutions. The focus lies on the use of physical sensors and not on the use of social media to collect data. Research papers were reviewed based on implemented or proposed implementations of social sensing. The discovered problems are contrasted with possible solutions, and used to reflect upon the future of the field. We found issues such as privacy, noise and trustworthiness to be problems when using a distributed network of sensors. Furthermore, we discovered models for determining the accuracy as well as truthfulness of gathered data that can effectively combat these problems. The topic of privacy remains an open-ended problem, since it is based upon ethical considerations that may differ from person to person, but there exist methods for addressing this as well. The reviewed research suggests that social sensing will become more and more useful in the future….(More).”
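One family of the "models for determining the accuracy as well as truthfulness of gathered data" that the authors review is iterative truth discovery: estimate each claim's truth from reliability-weighted votes, then re-estimate each source's reliability from how often it agrees with the consensus. The following is a hedged sketch of that loop with invented sensor data, not code from the paper.

```python
def truth_discovery(reports: dict, iterations: int = 20):
    """reports maps source -> {claim: observed_value}, possibly conflicting."""
    sources = list(reports)
    claims = {c for obs in reports.values() for c in obs}
    reliability = {s: 0.8 for s in sources}  # uninformative starting weight

    for _ in range(iterations):
        # Step 1: pick each claim's value by reliability-weighted vote.
        truth = {}
        for c in claims:
            votes = {}
            for s in sources:
                if c in reports[s]:
                    v = reports[s][c]
                    votes[v] = votes.get(v, 0.0) + reliability[s]
            truth[c] = max(votes, key=votes.get)
        # Step 2: a source's reliability is its (smoothed) agreement rate.
        for s in sources:
            agree = sum(truth[c] == v for c, v in reports[s].items())
            reliability[s] = (agree + 1) / (len(reports[s]) + 2)
    return truth, reliability

reports = {
    "sensor_a": {"road_42_congested": True,  "air_quality_ok": True},
    "sensor_b": {"road_42_congested": True,  "air_quality_ok": True},
    "sensor_c": {"road_42_congested": False, "air_quality_ok": True},  # noisy
}
truth, reliability = truth_discovery(reports)
print(truth)        # consensus values per claim
print(reliability)  # sensor_c ends up with the lowest weight
```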