This is what happens when you give social networking to doctors


In PandoDaily: “Dr. Gregory Kurio will never forget the time he was called to the ER because an epileptic girl was brought in suffering a cardiac arrest of sorts (HIPAA mandates that he not give out the specific details of the situation). In the briefing, he learned the name of her cardiac physician, whom he happened to know through the industry. He subsequently called the other doctor and asked him to send over any available information on the patient — latest meds, EKGs, recent checkups, etc.

The scene in the ER was, as expected, one of chaos, with trainees and respiratory nurses running around grabbing machinery and meds. Crucial seconds were ticking past, and Dr. Kurio quickly realized the fax machine was not the best approach for receiving the records he needed. ER fax machines are often on the opposite side of the emergency room, take a while to print lengthy records, frequently run out of paper, and aren’t always reliable – not exactly the sort of technology you want when a patient’s life hangs in the balance.

Email wasn’t an option either, because HIPAA mandates that sensitive patient files are only sent through secure channels. With precious little time to waste, Dr. Kurio decided to take a chance on a new technology service he had just signed up for — Doximity.

Doximity is a LinkedIn for Doctors of sorts. It has, as one feature, a secure e-fax system that turns faxes into digital messages and sends them to a user’s mobile device. Dr. Kurio gave the other physician his e-fax number, and a little bit of techno-magic happened.

….

With a third of the nation’s doctors on the platform, today Doximity announced a $54 million Series C from DFJ, T. Rowe Price Associates, Morgan Stanley, and existing investors. The funding news isn’t particularly important, in and of itself, aside from the fact that the company is attracting the attention of private market investors very early in its growth trajectory. But it’s a good opportunity to take a look at Doximity’s business model, how it mirrors the upwards growth of other vertical professional social networks (say that five times fast), and the way it’s transforming our healthcare providers’ jobs.

Doximity works, in many ways, just like LinkedIn. Doctors have profiles with pictures and their resume, and recruiters pay the company to message medical professionals. “If you think it’s hard to find a Ruby developer in San Francisco, try to find an emergency room physician in Indiana,” Doximity CEO Jeff Tangney says. One recruiter’s pain is a smart entrepreneur’s pleasure — a simple, straightforward monetization strategy.

But unlike LinkedIn, Doximity can dive much deeper on meeting doctors’ needs through specialized features like the e-fax system. It’s part of the reason Konstantin Guericke, one of LinkedIn’s “forgotten” co-founders, was attracted to the company and decided to join the board as an advisor. “In some ways, it’s a lot like LinkedIn,” Guericke says, when asked why he decided to help out. “But for me it’s the pleasure of focusing on a more narrow audience and making more of an impact on their life.”

In another such high-impact, specialized feature, doctors can access Doximity’s Google Alerts-like system for academic articles. They can sign up to receive notifications when stories are published about their obscure specialties. That means time-strapped physicians gain a more efficient way to stay up to date on all the latest research and information in their field. You can imagine that might impact the quality of the care they provide.

Lastly, Doximity offers a secure messaging system, allowing doctors to email one another regarding a mutual patient. Such communication is a thorny issue for doctors given HIPAA-related privacy requirements. There are limited ways to legally update, say, a primary care physician when a specialist learns one of their patients has colon cancer. It turns into a big game of phone tag to relay what should be relatively straightforward information. Furthermore, leaving voicemails and sending faxes can result in details getting lost in what isn’t a searchable system.

The platform is free for doctors, and doctors have quickly joined in droves. Doximity co-founder and CEO Jeff Tangney estimates that last year the platform had added 15 to 16 percent of US doctors. But this year, the company claims it’s “on track to have half of US physicians as members by this summer.” That’s a fairly impressive growth rate and market penetration.

With great market penetration comes great power. And dollars. Although the company is only monetizing through recruitment at the moment, the real money to be made with this service is through targeted advertising. Think about how much big pharma and medtech companies would be willing to cough up to communicate at scale with the doctors who make purchasing decisions. Plus, this is an easy way for them to target industry thought leaders or professionals with certain specialties.

Doximity’s founders’ and investors’ eyes might be seeing dollar signs, but they haven’t rolled anything out yet on the advertising front. They’re wary and want to do so in a way that adds value to all parties while avoiding pissing off medical professionals. When they finally pull the trigger, however, it has the potential to be a gold rush.

Doximity isn’t the only company to have discovered there’s big money to be made in vertical professional social networks. As Pando has written, there’s a big trend in this regard. Spiceworks, the social network for IT professionals which claims to have a third of the world’s IT professionals on the site, just raised $57 million in a round led by none other than Goldman Sachs. Why does the firm have such faith in a free social network for IT pros — seemingly the most mundane and unprofitable of endeavors? Well, just as with doctors and pharma corps, IT companies are willing to shell out big to market their wares directly to those IT pros.

Although the monetization strategies differ from business to business, ResearchGate is building a similar community with a social network of scientists around the world, Edmodo is doing it with educators, GitHub with developers, and GrabCAD with mechanical engineers. I’ve argued that such vertical professional social networks are a threat to LinkedIn, stealing business out from under it in large industry swaths. LinkedIn co-founder Konstantin Guericke disagrees.

“I don’t think it’s stealing revenue from them. Would it make sense for LinkedIn to add a profile subset about what insurance someone takes? That would just be clutter,” Guericke says. “It’s more going after an opportunity LinkedIn isn’t well positioned to capitalize on. They could do everything Doximity does, but they’d have to give up something else.”

All businesses come with their own challenges, and Doximity will certainly face its share of them as it scales. It has overcome the initial hurdle of achieving the network effects that come with penetrating a large segment of the market. Next will come monetizing sensitively and continuing to protect users’ — and patients’ — privacy.

There are plenty of data minefields in a sector as closely regulated as healthcare, as fellow medical startup Practice Fusion recently found out. Doximity has to make sure its system for onboarding and verifying new doctors is airtight. The company has already encountered some instances of individuals trying to pose as medical professionals to get access to another person’s records — specifically, a former lover trying to chase down an ex-spouse’s STI tests. One blowup in which the company approves someone it shouldn’t, or hackers break into the system, and doctors could lose trust in the safety of the technology….”

Twitter Can Now Predict Crime, and This Raises Serious Questions


Motherboard: “Police departments in New York City may soon be using geo-tagged tweets to predict crime. It sounds like a far-fetched sci-fi scenario a la Minority Report, but when I contacted Dr. Matthew Gerber, the University of Virginia researcher behind the technology, he explained that the system is far more mathematical than metaphysical.
The system Gerber has devised is an amalgam of both old and new techniques. Currently, many police departments target hot spots for criminal activity based on actual occurrences of crime. This approach, called kernel density estimation (KDE), involves pairing a historical crime record with a geographic location and using a probability function to calculate the likelihood of future crimes occurring in that area. While KDE is a serviceable approach to anticipating crime, it pales in comparison to the dynamism of Twitter’s real-time data stream, according to Dr. Gerber’s research paper “Predicting Crime Using Twitter and Kernel Density Estimation”.
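For readers who want a concrete sense of what the KDE step looks like, here is a minimal sketch in Python. It illustrates the general technique only, not Dr. Gerber’s actual implementation; the crime coordinates and grid bounds are made up.

```python
# Minimal illustration of kernel density estimation (KDE) for crime hot spots.
# This is a sketch, not Dr. Gerber's implementation; the coordinates are invented.
import numpy as np
from scipy.stats import gaussian_kde

# Historical crime locations (longitude, latitude) -- hypothetical data.
past_crimes = np.array([
    [-87.62, 41.88], [-87.63, 41.89], [-87.61, 41.87],
    [-87.65, 41.90], [-87.62, 41.88], [-87.64, 41.89],
]).T  # gaussian_kde expects shape (n_dims, n_points)

kde = gaussian_kde(past_crimes)

# Evaluate the estimated crime density on a grid of candidate locations.
lons = np.linspace(-87.66, -87.60, 50)
lats = np.linspace(41.86, 41.91, 50)
grid = np.array(np.meshgrid(lons, lats)).reshape(2, -1)
density = kde(grid).reshape(50, 50)

# The highest-density cells are the predicted "hot spots" to patrol.
hot_idx = np.unravel_index(density.argmax(), density.shape)
print("Hottest cell:", lons[hot_idx[1]], lats[hot_idx[0]])
```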
Dr. Gerber’s approach is similar to KDE, but deals in the ethereal realm of data and language, not paperwork. The system involves mapping the Twitter environment, much like how police currently map the physical environment with KDE. The big difference is that Gerber is looking at what people are talking about in real time, as well as what they do after the fact, and seeing how well the two match up. The algorithms look for certain language that is likely to indicate the imminent occurrence of a crime in the area, Gerber says. “We might observe people talking about going out, getting drunk, going to bars, sporting events, and so on—we know that these sort of events correlate with crime, and that’s what the models are picking up on.”
Once this data is collected, the GPS tags in tweets allow Gerber and his team to pin them to a virtual map and outline hot spots for potential crime. Of course, not everyone who tweets about hitting the club later is going to commit a crime. Gerber tests the accuracy of his approach by comparing Twitter-based KDE predictions with traditional KDE predictions based on police data alone. The big question is, does it work? For Gerber, the answer is a firm “sometimes.” “It helps for some, and it hurts for others,” he says.
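The Twitter-augmented step can be sketched in the same spirit: treat each map cell’s historical KDE density, plus counts of crime-correlated tweet topics in that cell, as inputs to a simple classifier. The feature names, synthetic data, and choice of logistic regression below are illustrative assumptions, not the exact model from the paper.

```python
# Sketch of the Twitter-augmented idea: each map cell gets the historical KDE
# density plus counts of crime-correlated tweet topics ("going out", sports,
# etc.), and a classifier predicts whether a crime occurs in that cell.
# Feature names and data are hypothetical; this is not the paper's exact model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_cells = 500

kde_density = rng.random(n_cells)            # from the KDE step above
topic_going_out = rng.poisson(2, n_cells)    # geo-tagged tweets per cell
topic_sports = rng.poisson(1, n_cells)

X = np.column_stack([kde_density, topic_going_out, topic_sports])
# Synthetic labels: crime more likely where density and "going out" chatter are high.
y = (kde_density + 0.2 * topic_going_out + rng.normal(0, 0.3, n_cells) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)
print("Predicted crime probability for a new cell:",
      model.predict_proba([[0.8, 4, 1]])[0, 1])
```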
According to the study’s results, Twitter-based KDE analysis yielded improvements in predictive accuracy over traditional KDE for stalking, criminal damage, and gambling. Arson, kidnapping, and intimidation, on the other hand, showed a decrease in accuracy from traditional KDE analysis. It’s not clear why these crimes are harder to predict using Twitter, but the study notes that the issue may lie with the kind of language used on Twitter, which is characterized by shorthand and informal phrasing that can be difficult for algorithms to parse.
This kind of approach to high-tech crime prevention brings up the familiar debate over privacy and the use of users’ data for purposes they didn’t explicitly agree to. The case becomes especially sensitive when data will be used by police to track down criminals. On this point, though he acknowledges post-Snowden societal skepticism regarding data harvesting for state purposes, Gerber is indifferent. “People sign up to have their tweets GPS tagged. It’s an opt-in thing, and if you don’t do it, your tweets won’t be collected in this way,” he says. “Twitter is a public service, and I think people are pretty aware of that.”…

How Can the Department of Education Increase Innovation, Transparency and Access to Data?


David Soo at the Department of Education: “Despite the growing amount of information about higher education, many students and families still need access to clear, helpful resources to make informed decisions about going to – and paying for – college.  President Obama has called for innovation in college access, including by making sure all students have easy-to-understand information.
Now, the U.S. Department of Education needs your input on specific ways that we can increase innovation, transparency, and access to data.  In particular, we are interested in how APIs (application programming interfaces) could make our data and processes more open and efficient.
APIs are sets of software instructions and standards that allow machine-to-machine communication.  APIs could allow developers from inside and outside government to build apps, widgets, websites, and other tools based on government information and services to let consumers access government-owned data and participate in government-run processes from more places on the Web, even beyond .gov websites. Well-designed government APIs help make data and processes freely available for use within agencies, between agencies, in the private sector, or by citizens, including students and families.
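As a purely illustrative sketch of what such machine-to-machine access enables, the snippet below queries a hypothetical college-data endpoint. The URL, parameters, and field names are invented for illustration and are not an actual Department of Education API.

```python
# Hypothetical example of what a machine-readable education-data API could
# enable; the endpoint and fields below are invented for illustration only.
import requests

# An app or widget could query a (hypothetical) college-data endpoint...
response = requests.get(
    "https://api.example.gov/colleges",
    params={"state": "VA", "fields": "name,net_price,graduation_rate"},
    timeout=10,
)
response.raise_for_status()

# ...and present the JSON results to students in a friendlier form.
for college in response.json().get("results", []):
    print(college["name"], college.get("net_price"), college.get("graduation_rate"))
```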
So, today, we are asking you – student advocates, designers, developers, and others – to share your ideas on how APIs could spark innovation and enable processes that can serve students better. We need you to weigh in on a Request for Information (RFI) – a formal way the government asks for feedback – on how the Department could use APIs to increase access to higher education data or financial aid programs. There may be ways that Department forms – like the Free Application for Federal Student Aid (FAFSA) – or information-gathering processes could be made easier for students by incorporating the use of APIs. We invite the best and most creative thinking on specific ways that Department of Education APIs could be used to improve outcomes for students.
To weigh in, you can email [email protected] by June 2, or send your input via other addresses as detailed in the online notice.
The Department wants to make sure to do this right. It must ensure the security and privacy of the data it collects or maintains, especially when the information of students and families is involved.  Openness only works if privacy and security issues are fully considered and addressed.  We encourage the field to provide comments that identify concerns and offer suggestions on ways to ensure privacy, safeguard student information, and maintain access to federal resources at no cost to the student.
Through this request, we hope to gather ideas on how APIs could be used to fuel greater innovation and, ultimately, affordability in higher education.  For further information, see the Federal Register notice.”

Historic release of data delivers unprecedented transparency on the medical services physicians provide and how much they are paid


Jonathan Blum, Principal Deputy Administrator, Centers for Medicare & Medicaid Services: “Today the Centers for Medicare & Medicaid Services (CMS) took a major step forward in making Medicare data more transparent and accessible, while maintaining the privacy of beneficiaries, by announcing the release of new data on medical services and procedures furnished to Medicare fee-for-service beneficiaries by physicians and other healthcare professionals (http://www.cms.gov/newsroom/newsroom-center.html). For too long, the only information on physicians readily available to consumers was physician name, address and phone number. This data will, for the first time, provide a better picture of how physicians practice in the Medicare program.
This new data set includes over nine million rows of data on more than 880,000 physicians and other healthcare professionals in all 50 states, DC and Puerto Rico providing care to Medicare beneficiaries in 2012. The data set presents key information on the provision of services by physicians and how much they are paid for those services, and is organized by provider (National Provider Identifier or NPI), type of service (Healthcare Common Procedure Coding System, or HCPCS) code, and whether the service was performed in a facility or office setting. This public data set includes the number of services, average submitted charges, average allowed amount, average Medicare payment, and a count of unique beneficiaries treated. CMS takes beneficiary privacy very seriously and we will protect patient-identifiable information by redacting any data in cases where it includes fewer than 11 beneficiaries.
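To give a rough sense of how such a provider-level file could be put to work, here is a minimal sketch that totals Medicare payments by provider. The file name and column names are simplified assumptions for illustration, not the exact CMS headers.

```python
# Sketch of summarizing a provider-level utilization file like the one CMS
# describes. The file name and column names are assumptions, not CMS's exact headers.
import pandas as pd

cols = ["npi", "hcpcs_code", "place_of_service",
        "service_count", "average_medicare_payment"]
df = pd.read_csv("medicare_physician_utilization_2012.csv", usecols=cols)

# Approximate total Medicare payments per provider (NPI):
df["total_payment"] = df["service_count"] * df["average_medicare_payment"]
top_providers = (df.groupby("npi")["total_payment"]
                   .sum()
                   .sort_values(ascending=False)
                   .head(10))
print(top_providers)
```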
Previously, CMS could not release this information due to a permanent injunction issued by a court in 1979. However, in May 2013, the court vacated this injunction, causing a series of events that has led CMS to be able to make this information available for the first time.
Data to Fuel Research and Innovation
In addition to the public data release, CMS is making slight modifications to the process to request CMS data for research purposes. This will allow researchers to conduct important research at the physician level. As with the public release of information described above, CMS will continue to prohibit the release of patient-identifiable information. For more information about CMS’s disclosures to researchers, please contact the Research Data Assistance Center (ResDAC) at http://www.resdac.org/.
Unprecedented Data Access
This data release follows other CMS efforts to make more data available to the public. Since 2010, the agency has released an unprecedented amount of aggregated data in machine-readable form, with much of it available at http://www.healthdata.gov. These data range from previously unpublished statistics on Medicare spending, utilization, and quality at the state, hospital referral region, and county level, to detailed information on the quality performance of hospitals, nursing homes, and other providers.
In May 2013, CMS released information on the average charges for the 100 most common inpatient services at more than 3,000 hospitals nationwide http://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Medicare-Provider-Charge-Data/Inpatient.html.
In June 2013, CMS released average charges for 30 selected outpatient procedures http://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Medicare-Provider-Charge-Data/Outpatient.html.
We will continue to work toward harnessing the power of data to promote quality and value, and improve the health of our seniors and persons with disabilities.”

Medicare to Publish Trove of Data on Doctors


Louise Radnofsky in the Wall Street Journal: “The Obama administration said it would publish as early as next week data on what Medicare paid individual doctors in 2012, aiming to boost transparency and help root out fraud.
The move, which faced fierce resistance from doctors’ groups, would end a decadeslong block on making the information public.
Federal officials said they planned to release reimbursement information on April 9 or soon after that would show billing data for 880,000 health-care providers treating patients in the government-run insurance program for elderly and disabled people. It will include how many times the providers carried out a particular service or procedure, whether they carried it out in a medical facility or an office setting, the average amount they charged Medicare for it, the average amount they were paid for it, and the total number of people they treated.
The data set would show the names and addresses of the providers in connection with their reimbursement information, officials at the Centers for Medicare and Medicaid Services said. The agency hasn’t previously released such data.
Physicians’ organizations had sought to prevent the release of the data, citing concerns about physician privacy. But a federal judge last year lifted a long-standing injunction placed on the publication of the information by a federal court in Florida, in response to a challenge from Dow Jones & Co., The Wall Street Journal’s parent company.
Jonathan Blum, principal deputy administrator at CMS, informed the American Medical Association and Florida Medical Association in letters dated Wednesday that the agency would move to publish the data soon.
Ardis Dee Hoven, president of the American Medical Association, said the group remained concerned that CMS was taking a “broad approach” that could result in “unwarranted bias against physicians that can destroy careers.” Dr. Hoven said the AMA wanted doctors to be able to review and correct their information before the data set was published. The Florida Medical Association couldn’t immediately be reached.
Mr. Blum said that for privacy reasons, data related to subsets of fewer than 11 Medicare patients would be redacted.
In the letters, Mr. Blum said the agency believed that news organizations seeking the information—which include the Journal—would be able to use it to shed light on problems in the Medicare program. He also specifically cited earlier reporting by the Journal that had drawn on similar data.
“The Department concluded that the data to be released would assist the public’s understanding of Medicare fraud, waste, and abuse, as well as shed light on payments to physicians for services furnished to Medicare beneficiaries,” Mr. Blum wrote. “As an example, using similar payment information, The Wall Street Journal was able to identify and report on a number of instances of Medicare fraud, waste, and abuse, using Medicare payment data in its Secrets of the System series,” Mr. Blum wrote. That series was a finalist for a Pulitzer Prize in 2011.”

Politics and the Internet


Edited book by William H. Dutton (Routledge, 2014, 1,888 pages): “It is commonplace to observe that the Internet—and the dizzying technologies and applications which it continues to spawn—has revolutionized human communications. But, while the medium’s impact has apparently been immense, the nature of its political implications remains highly contested. To give but a few examples, the impact of networked individuals and institutions has prompted serious scholarly debates in political science and related disciplines on: the evolution of ‘e-government’ and ‘e-politics’ (especially after recent US presidential campaigns); electronic voting and other citizen participation; activism; privacy and surveillance; and the regulation and governance of cyberspace.
As research in and around politics and the Internet flourishes as never before, this new four-volume collection from Routledge’s acclaimed Critical Concepts in Political Science series meets the need for an authoritative reference work to make sense of a rapidly growing—and ever more complex—corpus of literature. Edited by William H. Dutton, Director of the Oxford Internet Institute (OII), the collection gathers foundational and canonical work, together with innovative and cutting-edge applications and interventions.
With a full index and comprehensive bibliographies, together with a new introduction by the editor, which places the collected material in its historical and intellectual context, Politics and the Internet is an essential work of reference. The collection will be particularly useful as a database allowing scattered and often fugitive material to be easily located. It will also be welcomed as a crucial tool permitting rapid access to less familiar—and sometimes overlooked—texts. For researchers, students, practitioners, and policy-makers, it is a vital one-stop research and pedagogic resource.”

Smart cities are here today — and getting smarter


Computer World: “Smart cities aren’t a science fiction, far-off-in-the-future concept. They’re here today, with municipal governments already using technologies that include wireless networks, big data/analytics, mobile applications, Web portals, social media, sensors/tracking products and other tools.
These smart city efforts have lofty goals: Enhancing the quality of life for citizens, improving government processes and reducing energy consumption, among others. Indeed, cities are already seeing some tangible benefits.
But creating a smart city comes with daunting challenges, including the need to provide effective data security and privacy, and to ensure that myriad departments work in harmony.

The global urban population is expected to grow approximately 1.5% per year between 2025 and 2030, mostly in developing countries, according to the World Health Organization.

What makes a city smart? As with any buzz term, the definition varies. But in general, it refers to using information and communications technologies to deliver sustainable economic development and a higher quality of life, while engaging citizens and effectively managing natural resources.
Making cities smarter will become increasingly important. For the first time ever, the majority of the world’s population resides in a city, and this proportion continues to grow, according to the World Health Organization, the coordinating authority for health within the United Nations.
A hundred years ago, two out of every 10 people lived in an urban area, the organization says. As recently as 1990, less than 40% of the global population lived in a city — but by 2010 more than half of all people lived in an urban area. By 2050, the proportion of city dwellers is expected to rise to 70%.
As many city populations continue to grow, here’s what five U.S. cities are doing to help manage it all:

Scottsdale, Ariz.

The city of Scottsdale, Ariz., has several initiatives underway.
One is MyScottsdale, a mobile application the city deployed in the summer of 2013 that allows citizens to report cracked sidewalks, broken street lights and traffic lights, road and sewer issues, graffiti and other problems in the community….”

Visualizing Health IT: A holistic overview


Andy Oram in O’Reilly Data: “There is no dearth of health reformers offering their visions for patient engagement, information exchange, better public health, and disruptive change to health industries. But they often accept too freely the promise of technology, without grasping how difficult the technical implementations of their reforms would be. Furthermore, no document I have found pulls together the various trends in technology and explores their interrelationships.
I have tried to fill this gap with a recently released report: The Information Technology Fix for Health: Barriers and Pathways to the Use of Information Technology for Better Health Care. This posting describes some of the issues it covers.
Take a basic example: fitness devices. Lots of health reformers would love to see these pulled into treatment plans to help people overcome hypertension and other serious conditions. It’s hard to understand the factors that make doctors reluctant to do so–blind conservatism is not the problem, but actual technical barriers are. To become part of treatment plans, the accuracy of devices would have to be validated, they would need to produce data in formats and units that are universally recognized, and electronic records would have to undergo major upgrades to store and process the data.
Another example is patient engagement, which doctors and hospitals are furiously pursuing. Not only are patients becoming choosier and rating their institutions publicly in Yelp-like fashion, but clinicians have come to realize that engaged patients are more likely to participate in developing effective treatment plans, not to mention following through on them.
Engaging patients to improve their own outcomes directly affects the institutions’ bottom lines as insurers and the government move from paying for each procedure to pay-per-value (a fixed sum for handling a group of patients that share a health condition). But what data do we need to make pay-per-value fair and accurate? How do we get that data from one place to another, and–much more difficult–out of one ungainly proprietary format and possibly into others? The answer emerging among activists to these questions is: leave the data under the control of the patients, and let them share it as they find appropriate.
Collaboration may be touted even more than patient engagement as the way to better health. And who wouldn’t want his cardiologist to be consulting with his oncologist, nutritionist, and physical therapist? It doesn’t happen as much as it should, and while picking up the phone may be critical sometimes to making the right decisions, electronic media can also be of crucial value. Once again, we have to overcome technical barriers.
The Information Technology Fix for Health report divides these issues into four umbrella categories:

  • Devices, sensors, and patient monitoring
  • Using data: records, public data sets, and research
  • Coordinated care: teams and telehealth
  • Patient empowerment

Underlying all these as a kind of vast subterranean network of interconnected roots are electronic health records (EHRs). These must function well in order for devices to send output to the interested observers, researchers to collect data, and teams to coordinate care. The article delves into the messy and often ugly area of formats and information exchange, along with issues of privacy. I extol once again the virtue of patient control over records and suggest how we could overcome all barriers to make that happen.”

The GovLab Index: Privacy and Security


Please find below the latest installment in The GovLab Index series, inspired by the Harper’s Index. “The GovLab Index: Privacy and Security examines the attitudes and concerns of American citizens regarding online privacy. Previous installments include Designing for Behavior Change, The Networked Public, Measuring Impact with Evidence, Open Data, The Data Universe, Participation and Civic Engagement, and Trust in Institutions.
Globally

  • Percentage of people who feel the Internet is eroding their personal privacy: 56%
  • Internet users who feel comfortable sharing personal data with an app: 37%
  • Number of users who consider it important to know when an app is gathering information about them: 70%
  • How many people in the online world use privacy tools to disguise their identity or location: 28%, or 415 million people
  • Country with the highest penetration of general anonymity tools among Internet users: Indonesia, where 42% of users surveyed use proxy servers
  • Percentage of China’s online population that disguises their online location to bypass governmental filters: 34%

In the United States
Over the Years

  • In 1996, percentage of the American public who were categorized as having “high privacy concerns”: 25%
    • Those with “Medium privacy concerns”: 59%
    • Those who were unconcerned with privacy: 16%
  • In 1998, number of computer users concerned about threats to personal privacy: 87%
  • In 2001, those who reported “medium to high” privacy concerns: 88%
  • Individuals who are unconcerned about privacy: 18% in 1990, down to 10% in 2004
  • How many online American adults are more concerned about their privacy in 2014 than they were a year ago, indicating rising privacy concerns: 64%
  • Number of respondents in 2012 who believe they have control over their personal information: 35%, downward trend for 7 years
  • How many respondents in 2012 continue to perceive privacy and the protection of their personal information as very important or important to the overall trust equation: 78%, upward trend for seven years
  • How many consumers in 2013 trust that their bank is committed to ensuring the privacy of their personal information is protected: 35%, down from 48% in 2004

Privacy Concerns and Beliefs

  • How many Internet users worry about their privacy online: 92%
    • Those who report that their level of concern has increased from 2013 to 2014: 7 in 10
    • How many are at least sometimes worried when shopping online: 93%, up from 89% in 2012
    • Those who have some concerns when banking online: 90%, up from 86% in 2012
  • Number of Internet users who are worried about the amount of personal information about them online: 50%, up from 33% in 2009
    • Those who report that their photograph is available online: 66%
      • Their birthdate: 50%
      • Home address: 30%
      • Cell number: 24%
      • A video: 21%
      • Political affiliation: 20%
  • Consumers who are concerned about companies tracking their activities: 58%
    • Those who are concerned about the government tracking their activities: 38%
  • How many users surveyed felt that the National Security Agency (NSA) overstepped its bounds in light of recent NSA revelations: 44%
  • Respondents who are comfortable with advertisers using their web browsing history to tailor advertisements as long as it is not tied to any other personally identifiable information: 36%, up from 29% in 2012
  • Percentage of voters who do not want political campaigns to tailor their advertisements based on their interests: 86%
  • Percentage of respondents who do not want news tailored to their interests: 56%
  • Percentage of users who are worried that their information will be stolen by hackers: 75%
    • Those who are worried about companies tracking their browsing history for targeted advertising: 54%
  • How many consumers say they do not trust businesses with their personal information online: 54%
  • Top 3 most trusted companies for privacy identified by consumers from across 25 different industries in 2012: American Express, Hewlett Packard and Amazon
    • Most trusted industries for privacy: Healthcare, Consumer Products and Banking
    • Least trusted industries for privacy: Internet and Social Media, Non-Profits and Toys
  • Respondents who admit to sharing their personal information with companies they did not trust in 2012 for reasons such as convenience when making a purchase: 63%
  • Percentage of users who say they prefer free online services supported by targeted ads: 61%
    • Those who prefer paid online services without targeted ads: 33%
  • How many Internet users believe that it is not possible to be completely anonymous online: 59%
    • Those who believe complete online anonymity is still possible: 37%
    • Those who say people should have the ability to use the Internet anonymously: 59%
  • Percentage of Internet users who believe that current laws are not good enough in protecting people’s privacy online: 68%
    • Those who believe current laws provide reasonable protection: 24%

FULL LIST at http://thegovlab.org/the-govlab-index-privacy-and-trust/

Artists Show How Anyone Can Fight the Man with Open Data


MotherBoard: “The UK’s Open Data Institute usually looks, as you’d probably expect, like an office full of people staring at screens. But visit at the moment and you might see a potato gun among the desks or a bunch of drone photos on the wall—all in the name of encouraging public discussion around and engagement with open data.
The ODI was set up by World Wide Web inventor Tim Berners-Lee and interdisciplinary researcher Nigel Shadbolt in London to push for an open data culture, and from Monday it will be hosting the second Data as Culture exhibition, which presents a more artistic take on questions surrounding the practicalities of open data. In doing so, it shows quite how the general public can (and probably really should) use data to inform their own lives and to engage with political issues.
All of the exhibits are based on freely available data, which is made a lot more animated and accessible than numbers in a spreadsheet. “I made the decision straight away to move away from anything screen-based,” curator Shiri Shalmy told me as she gave me a tour, winding through office workers tapping away on keyboards. “Everything had to be physical.”…
James Bridle’s work on drone warfare touches a similar theme, though in this case the data are not hidden: his images of military UAVs come from Google Maps. “They’re there for anybody to look at, they’re kind of secret but available,” said Shalmy, who added that with the data out there, we can’t pretend we don’t know what’s going on. “They can do things in secret as long as we pretend it’s a secret.”
We’ve looked at Bridle’s work before, from his Dronestagram photos to his chalk outlines of drones, and he’s been commissioned to do something new for the Data as Culture show: Shalmy has asked him to compare the open data on military drones against that of London’s financial centre. He’ll present what he digs up in summer.

From the series ‘Watching the Watchers.’ Image: James Bridle/ODI

Using this kind of government data—from local council expenses to military movements—shows quite how much information is available and how it can be used to hold politicians to account. In essence, anyone can do surveillance to some level. While activists including Berners-Lee push for more data to be made accessible, it’s only useful if we actually bother to engage with it, and work like Bridle’s poses the uneasy suggestion that sometimes it’s more comfortable to remain ignorant.
And in addition to reading data, we can collect it. Rather than delving into government files, a knitted banner by artist Sam Meech uses publicly generated data to make a political point. The banner bears the phrase “8 hour labour,” a reference to the eight-hour workday movement that sprang up in Britain’s Industrial Revolution. The idea was that people would have eight hours work, eight hours rest, and eight hours recreation.

A detail from Sam Meech’s Punchcard Economy. Image: Sam Meech/ODI

But the black-and-white pattern in the banner is made up of much less regular working hours: those logged by self-employed creatives, who can take part by entering their own timesheet data via virtual punchcards. Shalmy pointed out her own schedule in a week when she was setting up the exhibition: a 70-hour block woven into the knit. It’s an example of how individuals can use data to make a political point—the work is reminiscent of trade union banners and seems particularly relevant at a time when controversial zero hours contracts are on the rise.
Also garnering data from the public, artist collective Thickear are asking people to fill in data forms on their arrival, which they’ll file on an old-fashioned spike. I took one of the forms, only to be confronted with nonsensical bureaucratic-type boxes. “The data itself is not informative in any way,” said Shalmy. It’s more about the idea of who we trust to give our data to. How often do we accept privacy policies without giving ourselves the chance to even blink at the small print?…”