A Theory of Creepy: Technology, Privacy and Shifting Social Norms


Omer Tene and Jules Polonetsky in Yale Journal of Law & Technology:  “The rapid evolution of digital technologies has hurled to the forefront of public and legal discourse dense social and ethical dilemmas that we have hardly begun to map and understand. In the near past, general community norms helped guide a clear sense of ethical boundaries with respect to privacy. One does not peek into the window of a house even if it is left open. One does not hire a private detective to investigate a casual date or the social life of a prospective employee. Yet with technological innovation rapidly driving new models for business and inviting new types of personal socialization, we often have nothing more than a fleeting intuition as to what is right or wrong. Our intuition may suggest that it is responsible to investigate the driving record of the nanny who drives our child to school, since such tools are now readily available. But is it also acceptable to seek out the records of other parents in our child’s car pool or of a date who picks us up by car? Alas, intuitions and perceptions of “creepiness” are highly subjective and difficult to generalize as social norms are being strained by new technologies and capabilities. And businesses that seek to create revenue opportunities by leveraging newly available data sources face huge challenges trying to operationalize such subjective notions into coherent business and policy strategies.
This article presents a set of social and legal considerations to help individuals, engineers, businesses and policymakers navigate a world of new technologies and evolving social norms. These considerations revolve around concepts that we have explored in prior work, including enhanced transparency; accessibility to information in usable format; and the elusive principle of context.”

The US Constitution version 2.0


Luis Ibanez: “After ‘version 1.0’ of the US Constitution was released to the public on Sept 17, 1787, there was remaining discontent among several states regarding the powers assigned to the new Federal government and a lack of protections for fundamental individual freedoms and civil rights.

To fix this bug, the First United States Congress voted on twelve Constitutional Amendments in September of 1789. Two of them failed to gain enough support, and the remaining ten, collectively known as The Bill of Rights, were included in ‘version 2.0’ of the US Constitution, released in 1791.
This refactoring process was open source-minded on multiple levels.
First, the voice of the people (the community) was heard when expressing concern about defects (bugs) in the Constitution. In this case, the bugs related to the lack of sufficient protection for individual civil rights. There was no presumption of perfection or completeness in the US Constitution, and there was a will to improve it and make it better through an open political process.
Second, changes were proposed, discussed, and finally implemented. The discussion of these amendments is equivalent to the code reviews that a typical open source software project will go through when adopting substantial changes. Note: the amendments were adopted without having to “fork” the project (the country), though later the country was deeply divided, resulting in the American Civil War in 1861…
As code and law, and community and society, come closer together, taking a fresh look at the history that led us here sheds a bright light on how we can continue to work together, and how open source principles can continue to change the world for the better.”

OECD's Revised Guidelines on Privacy


OECD: “Over many decades the OECD has played an important role in promoting respect for privacy as a fundamental value and a condition for the free flow of personal data across borders. The cornerstone of OECD work on privacy is its newly revised Guidelines on the Protection of Privacy and Transborder Flows of Personal Data (2013).
Another key component of work in this area aims to improve cross-border co-operation among privacy law enforcement authorities.  This work produced an OECD Recommendation on Cross-border Co-operation in the Enforcement of Laws Protecting Privacy in 2007 and inspired the formation of the Global Privacy Enforcement Network, to which the OECD provides support.
Other projects have examined privacy notices, considered privacy in the context of horizontal issues such as radio frequency identification (RFID) and digital identity management, and looked at metrics to inform policy making in these areas. The important role of privacy is also addressed in the OECD Recommendation on Principles for Internet Policy Making (2011) and the Seoul Ministerial Declaration on the Future of the Internet Economy (2008).
Current work is examining privacy-related issues raised by large-scale data use and analytics. It is part of a broader project on data-driven innovation and growth, which has already produced a preliminary report identifying key issues.”

OpenPrism


Thomas Levine: “There are loads of open data portals. There’s even a portal about data portals. And each of these portals has loads of datasets.
OpenPrism is my most recent attempt at understanding what is going on in all of these portals. Read on if you want to see why I made it, or just go to the site and start playing with it.

People don’t know much about open data

Nobody seems to know what is in the data portals. Many people know about datasets that are relevant to their work, municipality, &c., but nobody seems to know about the availability of data on broader topics, and nobody seems to have a good way of finding out what is available.
If someone does know any of this, he probably works for an open data portal. Still, he probably doesn’t know much about what is going on in other portals.

Naive search method

One difficulty in discovering open data is the search paradigm.
Open data portals approach searching data as if data were normal prose; your search terms are some keywords, a category, &c., and your results are dataset titles and descriptions.
There are other approaches. For example, AppGen searches for datasets with the same variables as each other, and the results are automatically generated app prototypes.

Siloed open data portals

Another issue is that people tend to use data from only one portal; they use their local government’s portals or their organizations’ portals.
Let me give you a couple examples of why this should maybe be different. Perhaps I’m considering making an app to help people find parking, and I want to see what parking lot data are available before I put much work into the app. Or maybe I want to find all of the data about sewer overflows so that I can expand my initiative to reduce water pollution.
OpenPrism is one small attempt at making it easier to search. Rather than going to all of the different portals and making a separate search for each portal, you type your search in one search bar, and you get results from a bunch of different Socrata, CKAN and Junar portals.”
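To make the federated-search idea concrete, here is a minimal sketch in Python of a single query fanned out across several CKAN portals using CKAN’s standard package_search action. This is an illustration of the approach rather than OpenPrism’s actual code; the portal URLs, result limits, and field choices are assumptions for the example, and Socrata and Junar portals would need their own API calls.

"""
Minimal sketch of federated dataset search across CKAN open data portals,
in the spirit of OpenPrism. Not OpenPrism's implementation; portal URLs
below are illustrative and may change.
"""
import requests

# Base URLs of CKAN portals to query; any CKAN instance exposing the
# standard action API could be added to this list.
PORTALS = [
    "https://catalog.data.gov",   # data.gov runs CKAN
    "https://demo.ckan.org",      # CKAN's public demo instance
]

def search_portal(base_url, query, rows=5):
    """Run a keyword search against one CKAN portal's package_search action."""
    resp = requests.get(
        f"{base_url}/api/3/action/package_search",
        params={"q": query, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    body = resp.json()
    # CKAN wraps results as {"success": ..., "result": {"count": ..., "results": [...]}}
    return body["result"]["results"]

def federated_search(query):
    """Query every portal and pool the results, tagging each hit with its source."""
    hits = []
    for portal in PORTALS:
        try:
            for dataset in search_portal(portal, query):
                hits.append({
                    "portal": portal,
                    "title": dataset.get("title"),
                    "notes": (dataset.get("notes") or "")[:120],
                })
        except requests.RequestException as err:
            print(f"Skipping {portal}: {err}")
    return hits

if __name__ == "__main__":
    for hit in federated_search("sewer overflow"):
        print(f'{hit["portal"]}: {hit["title"]}')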

Nudge Nation: A New Way to Prod Students Into and Through College


Ben Wildavsky at Education Sector: “Thanks in part to Thaler and Sunstein’s work, the power of nudges has become well-established—including on many college campuses, where students around the country are beginning the fall semester. While online education and software-driven pedagogy on college campuses have received a good deal of attention, a less visible set of technology-driven initiatives also has gained a foothold: behavioral nudges designed to keep students on track to succeed. Just as e-commerce entrepreneurs have drawn on massive troves of consumer data to create algorithms for firms such as Netflix and Amazon, which unbundle the traditional storefront consumer experience through customized, online delivery, architects of campus technology nudges also rely on data analytics or data mining to improve the student experience.

By giving students information-driven suggestions that lead to smarter actions, technology nudges are intended to tackle a range of problems surrounding the process by which students begin college and make their way to graduation.
New approaches are certainly needed….
There are many reasons for low rates of persistence and graduation, including financial problems, the difficulty of juggling non-academic responsibilities such as work and family, and, for some first-generation students, culture shock. But academic engagement and success are major contributors. That’s why colleges are using behavioral nudges, drawing on data analytics and behavioral psychology, to focus on problems that occur along the academic pipeline:
• Poor student organization around the logistics of going to college
• Unwise course selections that increase the risk of failure and extend time to degree
• Inadequate information about academic progress and the need for academic help
• Unfocused support systems that identify struggling students but don’t directly engage with them
• Difficulty tapping into counseling services
These new ventures, whether originating within colleges or created by outside entrepreneurs, are doing things with data that just couldn’t be done in the past—creating giant databases of student course records, for example, to find patterns of success and failure that result when certain kinds of students take certain kinds of courses.”
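As a rough illustration of the kind of pattern-finding described above, the sketch below groups a toy table of course records by student background and course to compute pass rates; the column names, categories, and sample data are invented for the example and do not come from any actual campus system.

"""
Illustrative sketch of course-record analysis: grouping historical records
by student background and course to surface patterns of success and failure.
All fields and data here are assumptions made for the example.
"""
import pandas as pd

# Toy stand-in for a large table of historical course records.
records = pd.DataFrame({
    "student_type": ["first_gen", "first_gen", "continuing", "continuing",
                     "first_gen", "continuing"],
    "course":       ["MATH101", "MATH101", "MATH101", "CHEM101",
                     "CHEM101", "CHEM101"],
    "passed":       [0, 1, 1, 1, 0, 1],
})

# Pass rate for each (student_type, course) pair; at scale, the same
# aggregation flags course/student combinations with unusually low
# success rates, which could then trigger advising nudges.
pass_rates = (
    records.groupby(["student_type", "course"])["passed"]
           .agg(["mean", "count"])
           .rename(columns={"mean": "pass_rate", "count": "n_records"})
)
print(pass_rates)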

The Tech Intellectuals


New Essay by Henry Farrell in Democracy: “A quarter of a century ago, Russell Jacoby lamented the demise of the public intellectual. The cause of death was an improvement in material conditions. Public intellectuals—Dwight Macdonald, I.F. Stone, and their like—once had little choice but to be independent. They had difficulty getting permanent well-paying jobs. However, as universities began to expand, they offered new opportunities to erstwhile unemployables. The academy demanded a high price. Intellectuals had to turn away from the public and toward the practiced obscurities of academic research and prose. In Jacoby’s description, these intellectuals “no longer need[ed] or want[ed] a larger public…. Campuses [were] their homes; colleagues their audience; monographs and specialized journals their media.”
Over the last decade, conditions have changed again. New possibilities are opening up for public intellectuals. Internet-fueled media such as blogs have made it much easier for aspiring intellectuals to publish their opinions. They have fostered the creation of new intellectual outlets (Jacobin, The New Inquiry, The Los Angeles Review of Books), and helped revitalize some old ones too (The Baffler, Dissent). Finally, and not least, they have provided the meat for a new set of arguments about how communications technology is reshaping society.
These debates have created opportunities for an emergent breed of professional argument-crafters: technology intellectuals. Like their predecessors of the 1950s and ’60s, they often make a living without having to work for a university. Indeed, the professoriate is being left behind. Traditional academic disciplines (except for law, which has a magpie-like fascination with new and shiny things) have had a hard time keeping up. New technologies, to traditionalists, are suspect: They are difficult to pin down within traditional academic boundaries, and they look a little too fashionable to senior academics, who are often nervous that their fields might somehow become publicly relevant.
Many of these new public intellectuals are more or less self-made. Others are scholars (often with uncomfortable relationships with the academy, such as Clay Shirky, an unorthodox professor who is skeptical that the traditional university model can survive). Others still are entrepreneurs, like technology and media writer and podcaster Jeff Jarvis, working the angles between public argument and emerging business models….
Different incentives would lead to different debates. In a better world, technology intellectuals might think more seriously about the relationship between technological change and economic inequality. Many technology intellectuals think of the culture of Silicon Valley as inherently egalitarian, yet economist James Galbraith argues that income inequality in the United States “has been driven by capital gains and stock options, mostly in the tech sector.”
They might think more seriously about how technology is changing politics. Current debates are still dominated by pointless arguments between enthusiasts who believe the Internet is a model for a radically better democracy, and skeptics who claim it is the dictator’s best friend.
Finally, they might pay more attention to the burgeoning relationship between technology companies and the U.S. government. Technology intellectuals like to think that a powerful technology sector can enhance personal freedom and constrain the excesses of government. Instead, we are now seeing how a powerful technology sector may enable government excesses. Without big semi-monopolies like Facebook, Google, and Microsoft to hoover up personal information, surveillance would be far more difficult for the U.S. government.
Debating these issues would require a more diverse group of technology intellectuals. The current crop are not diverse in some immediately obvious ways—there are few women, few nonwhites, and few non-English speakers who have ascended to the peak of attention. Yet there is also far less intellectual diversity than there ought to be. The core assumptions of public debates over technology get less attention than they need and deserve.”

Frontiers in Massive Data Analysis


New report from the National Academy of Sciences: “Data mining of massive data sets is transforming the way we think about crisis response, marketing, entertainment, cybersecurity and national intelligence. Collections of documents, images, videos, and networks are being thought of not merely as bit strings to be stored, indexed, and retrieved, but as potential sources of discovery and knowledge, requiring sophisticated analysis techniques that go far beyond classical indexing and keyword counting, aiming to find relational and semantic interpretations of the phenomena underlying the data.
Frontiers in Massive Data Analysis examines the frontier of analyzing massive amounts of data, whether in a static database or streaming through a system. Data at that scale (terabytes and petabytes) is increasingly common in science (e.g., particle physics, remote sensing, genomics), Internet commerce, business analytics, national security, communications, and elsewhere. The tools that work to infer knowledge from data at smaller scales do not necessarily work, or work well, at such massive scale. New tools, skills, and approaches are necessary, and this report identifies many of them, plus promising research directions to explore. Frontiers in Massive Data Analysis discusses pitfalls in trying to infer knowledge from massive data, and it characterizes seven major classes of computation that are common in the analysis of massive data. Overall, this report illustrates the cross-disciplinary knowledge (from computer science, statistics, machine learning, and application disciplines) that must be brought to bear to make useful inferences from massive data.”

How to make a city great


New video and report by McKinsey: “What makes a great city? It is a pressing question because by 2030, 5 billion people—60 percent of the world’s population—will live in cities, compared with 3.6 billion today, turbocharging the world’s economic growth. Leaders in developing nations must cope with urbanization on an unprecedented scale, while those in developed ones wrestle with aging infrastructures and stretched budgets. All are fighting to secure or maintain the competitiveness of their cities and the livelihoods of the people who live in them. And all are aware of the environmental legacy they will leave if they fail to find more sustainable, resource-efficient ways of managing these cities.

To understand the core processes and benchmarks that can transform cities into superior places to live and work, McKinsey developed and analyzed a comprehensive database of urban economic, social, and environmental performance indicators. The research included interviewing 30 mayors and other leaders in city governments on four continents and synthesizing the findings from more than 80 case studies that sought to understand what city leaders did to improve processes and services from urban planning to financial management and social housing.
The result is How to make a city great (PDF–2.1MB), a new report arguing that leaders who make important strides in improving their cities do three things really well:

  • They achieve smart growth.
  • They do more with less. Great cities secure all revenues due, explore investment partnerships, embrace technology, make organizational changes that eliminate overlapping roles, and manage expenses. Successful city leaders have also learned that, if designed and executed well, private–public partnerships can be an essential element of smart growth, delivering lower-cost, higher-quality infrastructure and services.
  • They win support for change. Change is not easy, and its momentum can even attract opposition. Successful city leaders build a high-performing team of civil servants, create a working environment where all employees are accountable for their actions, and take every opportunity to forge a stakeholder consensus with the local population and business community. They take steps to recruit and retain top talent, emphasize collaboration, and train civil servants in the use of technology.”

Open Government in a Digital Age


An essay by Jonathan Reichental (Chief Information Officer of the City of Palo Alto) and Sheila Tucker (Assistant to the City Manager) that summarizes work done at the City that resulted in the Thomas H. Muehlenbeck Award for Excellence in Local Government: “Government is in a period of extraordinary change. Demographics are shifting. Fiscal constraints continue to challenge service delivery. Communities are becoming more disconnected from one another and their governments, and participation in civic affairs is rapidly declining. Adding to the complexities, technology is rapidly changing the way cities provide services, and conduct outreach and civic engagement. Citizens increasingly expect to engage with their government in much the same way they pay bills online or find directions using their smartphone, where communication is interactive and instantaneous. The role of government, of course, is more complicated than simply improving transactions.
To help navigate these challenges, the City of Palo Alto has focused its effort on new ways of thinking and acting by leveraging our demographic base, wealth of intellectual talent and entrepreneurial spirit to engage our community in innovative problem solving. The City’s historic advantages in innovative leadership create a compelling context to push the possibilities of technology to solve civic challenges.
This case study examines how Palo Alto is positioning itself to maximize the use of technology to build a leading Digital City, make local government more inclusive and transparent, and engage a broader base of its community in civic affairs….”

Political Scientists Acknowledge Need to Make Stronger Case for Their Field


Beth McMurtrie in The Chronicle of Higher Education: “Back in March, Congress limited federal support for political-science research by the National Science Foundation to projects that promote national security or American economic interests. That decision was a victory for Sen. Tom Coburn, a Republican from Oklahoma who has long aimed to eliminate all NSF grants for political science, arguing that unlike the hard sciences it rarely produces concrete benefits to society.
Congress’s action has led to soul searching within the discipline about how effective academics have been in conveying the value of their work to the public. It has also revived a longstanding debate among political scientists about the shift toward more statistically sophisticated, mathematically esoteric research, and its usefulness outside of academe. Those discussions were out front at the annual conference of the American Political Science Association, held here last week.
Rogers M. Smith, a political-science professor at the University of Pennsylvania, was one of 13 members of a panel that discussed the controversy over NSF money for political-science studies. He put the problem bluntly: “We need to make a better case for ourselves.”
Few on the panel, in fact, seemed to think that political science had done a good job on that front. The association has created a task force—led by Arthur Lupia, a political-science professor at the University of Michigan at Ann Arbor—to improve public perceptions of political science’s value. He said his colleagues could learn from organizations like the American Association for the Advancement of Science, which holds special sessions for the news media at its annual conference to explain the work of its members to the public.”