What a Million Syllabuses Can Teach Us


College course syllabuses are curious documents. They represent the best efforts by faculty and instructors to distill human knowledge on a given subject into 14-week chunks. They structure the main activity of colleges and universities. And then, for the most part, they disappear….

Until now. Over the past two years, we and our partners at the Open Syllabus Project (based at the American Assembly at Columbia) have collected more than a million syllabuses from university websites. We have also begun to extract some of their key components — their metadata — starting with their dates, their schools, their fields of study and the texts that they assign.

This past week, we made available online a beta version of our Syllabus Explorer, which allows this database to be searched. Our hope and expectation is that this tool will enable people to learn new things about teaching, publishing and intellectual history.

At present, the Syllabus Explorer is mostly a tool for counting how often texts are assigned over the past decade. There is something for everyone here. The traditional Western canon dominates the top 100, with Plato’s “Republic” at No. 2, “The Communist Manifesto” at No. 3, and “Frankenstein” at No. 5, followed by Aristotle’s “Ethics,” Hobbes’s “Leviathan,” Machiavelli’s “The Prince,” “Oedipus” and “Hamlet.”….

Top articles? Garrett Hardin’s “The Tragedy of the Commons” and Francis Fukuyama’s “The End of History.” And so on. Altogether, the Syllabus Explorer tracks about 933,000 works. Nearly half of these are assigned only once.

Such data has many uses. For academics, for example, it offers a window onto something they generally know very little about: how widely their work is read.

It also allows us to introduce a new publication metric based on the frequency with which works are taught, which we call the “teaching score.” The score is derived from the ranking order of the text, not the raw number of citations, such that a book or article that is used in four or five classes gets a score of 1, while “The Republic,” which is assigned 3,500 times, gets a score of 100….
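A rank-based score of this kind is easy to sketch. The function name, the linear interpolation, and the sample counts below are assumptions; the article specifies only the endpoints (a work assigned in four or five classes scores 1; "The Republic", assigned 3,500 times, scores 100) and that the score is derived from rank order rather than raw counts:

```python
# Hypothetical sketch of a rank-order "teaching score": least-assigned
# work scores 1, most-assigned scores 100, with a linear scale between.
# Ties and the exact scaling used by the Syllabus Explorer are unknown.

def teaching_scores(assignment_counts):
    """Map raw assignment counts to a 1-100 score by rank order."""
    titles = sorted(assignment_counts, key=assignment_counts.get)
    n = len(titles)
    scores = {}
    for rank, title in enumerate(titles, start=1):
        # Rank 1 (least assigned) maps to 1; rank n (most assigned) to 100.
        scores[title] = max(1, round(100 * (rank - 1) / max(n - 1, 1)))
    return scores

counts = {"The Republic": 3500, "Leviathan": 1900, "A Rare Monograph": 4}
print(teaching_scores(counts))
# → {'A Rare Monograph': 1, 'Leviathan': 50, 'The Republic': 100}
```

The key design point is that a middling book's score depends only on how many works it outranks, so a handful of extremely popular texts cannot compress everyone else toward zero.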

Because of a complex mix of privacy and copyright issues concerning syllabuses, the Open Syllabus Project publishes only metadata, not the underlying documents or any personally identifying material (even though these documents can be viewed on university websites). But we think that it is important for schools to move toward a more open approach to curriculums. As universities face growing pressure to justify their teaching and research missions, we doubt that curricular obscurity is helpful.

We think that the Syllabus Explorer demonstrates how more open strategies can support teaching, diversify evaluation practices and offer new perspectives on publishing, scholarship and intellectual traditions. But as with any newly published work, that judgment now passes out of our hands and into yours…(More)”

Distributed ledger technology: beyond block chain


UK Government Office for Science: “In a major report on distributed ledgers published today (19 January 2016), the Government Chief Scientist, Sir Mark Walport, sets out how this technology could transform the delivery of public services and boost productivity.

A distributed ledger is a database that can securely record financial, physical or electronic assets for sharing across a network through entirely transparent updates of information.

Its first incarnation was ‘Blockchain’ in 2008, which underpinned digital cash systems such as Bitcoin. The technology has now evolved into a variety of models that can be applied to different business problems and dramatically improve the sharing of information.
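The hash-chaining idea at the core of blockchain-style ledgers can be shown in a few lines. This is a toy sketch of the data structure only, with illustrative field names and no network, consensus protocol, or digital signatures, which a real distributed ledger would require:

```python
# Toy hash-chained ledger: each block commits to its predecessor's hash,
# so tampering with any historical entry invalidates every later block.
import hashlib
import json

def _block_hash(data, prev_hash):
    # Canonical JSON serialization so the hash is deterministic.
    payload = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_block(chain, data):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev_hash,
                  "hash": _block_hash(data, prev_hash)})
    return chain

def verify(chain):
    """Recompute every hash and check each block points at its predecessor."""
    prev_hash = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev_hash:
            return False
        if block["hash"] != _block_hash(block["data"], block["prev_hash"]):
            return False
        prev_hash = block["hash"]
    return True

ledger = []
add_block(ledger, {"asset": "diamond-123", "owner": "Alice"})
add_block(ledger, {"asset": "diamond-123", "owner": "Bob"})
print(verify(ledger))                     # True
ledger[0]["data"]["owner"] = "Mallory"    # tamper with history
print(verify(ledger))                     # False
```

This tamper-evidence is what the report means by "entirely transparent updates": any participant holding a copy of the chain can detect a rewritten record.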

Distributed ledger technology could provide government with new tools to reduce fraud, error and the cost of paper intensive processes. It also has the potential to provide new ways of assuring ownership and provenance for goods and intellectual property.

Distributed ledgers are already being used in the diamond markets and in the disbursing of international aid payments.

Sir Mark Walport said:

Distributed ledger technology has the potential to transform the delivery of public and private services. It has the potential to redefine the relationship between government and the citizen in terms of data sharing, transparency and trust and make a leading contribution to the government’s digital transformation plan.

Any new technology creates challenges, but with the right mix of leadership, collaboration and sound governance, distributed ledgers could yield significant benefits for the UK.

The report makes a number of recommendations which focus on ministerial leadership, research, standards and the need for proof of concept trials.

They include:

  • government should provide ministerial leadership to ensure that it provides the vision, leadership and the platform for distributed ledger technology within government; this group should consider governance, privacy, security and standards
  • government should establish trials of distributed ledgers in order to assess the technology’s usability within the public sector
  • government could support the creation of distributed ledger demonstrators for local government that will bring together all the elements necessary to test the technology and its application.
  • the UK research community should invest in the research required to ensure that distributed ledgers are scalable, secure and provide proof of correctness of their contents….View the report ‘Distributed ledger technology: beyond block chain’.”

Big Data in U.S. Agriculture


Megan Stubbs at the Congressional Research Service: “Recent media and industry reports have employed the term big data as a key to the future of increased food production and sustainable agriculture. A recent hearing on the private elements of big data in agriculture suggests that Congress too is interested in potential opportunities and challenges big data may hold. While there appears to be great interest, the subject of big data is complex and often misunderstood, especially within the context of agriculture.

There is no commonly accepted definition of the term big data. It is often used to describe a modern trend in which the combination of technology and advanced analytics creates a new way of processing information that is more useful and timely. In other words, big data is just as much about new methods for processing data as about the data themselves. It is dynamic, and when analyzed can provide a useful tool in a decisionmaking process. Most see big data in agriculture at the end use point, where farmers use precision tools to potentially create positive results like increased yields, reduced inputs, or greater sustainability. While this is certainly the more intriguing part of the discussion, it is but one aspect and does not necessarily represent a complete picture.

Both private and public big data play a key role in the use of technology and analytics that drive a producer’s evidence-based decisions. Public-level big data represent records collected, maintained, and analyzed through publicly funded sources, specifically by federal agencies (e.g., farm program participant records and weather data). Private big data represent records generated at the production level and originate with the farmer or rancher (e.g., yield, soil analysis, irrigation levels, livestock movement, and grazing rates). While discussed separately in this report, public and private big data are typically combined to create a more complete picture of an agricultural operation and therefore better decisionmaking tools.

Big data may significantly affect many aspects of the agricultural industry, although the full extent and nature of its eventual impacts remain uncertain. Many observers predict that the growth of big data will bring positive benefits through enhanced production, resource efficiency, and improved adaptation to climate change. While lauded for its potentially revolutionary applications, big data is not without issues. From a policy perspective, issues related to big data involve nearly every stage of its existence, including its collection (how it is captured), management (how it is stored and managed), and use (how it is analyzed and used). It is still unclear how big data will progress within agriculture due to technical and policy challenges, such as privacy and security, for producers and policymakers. As Congress follows the issue a number of questions may arise, including a principal one—what is the federal role?…(More)”

Managing Secrecy


Clare Birchall in the International Journal of Communication: “As many anthropologists and sociologists have long argued, understanding the meaning and place of secrets is central to an adequate representation of society. This article extends previous accounts of secrecy in social, governmental, and organizational settings to configure secrecy as one form of visibility management among others. Doing so helps to remove the secret from a post-Enlightenment value system that deems secrets bad and openness good. Once secrecy itself is seen as a neutral phenomenon, we can focus on the politicality or ethics of any particular distribution of the visible, sayable, and knowable. Alongside understanding the work secrecy performs in contemporary society, this article argues that we can also seek inspiration from the secret as a methodological tool and political tactic. Moving beyond the claim to privacy, a claim that has lost bite in this era of state and consumer dataveillance, a “right to opacity”—the right to not be transparent, legible, seen—might open up an experience of subjectivity and responsibility beyond the circumscribed demands of the current politicotechnological management of visibilities….(More)”

Privacy, security and data protection in smart cities: a critical EU law perspective


CREATe Working Paper by Lilian Edwards: ““Smart cities” are a buzzword of the moment. Although legal interest is growing, most academic responses, at least in the EU, are still from the technological, urban studies, environmental and sociological sectors rather than the legal one, and have primarily laid emphasis on the social, urban, policing and environmental benefits of smart cities, rather than their challenges, often in a rather uncritical fashion. However, a growing backlash from the privacy and surveillance sectors warns of the potential threat to personal privacy posed by smart cities. A key issue is the lack of opportunity in an ambient or smart city environment for the giving of meaningful consent to processing of personal data; other crucial issues include the degree to which smart cities collect private data from inevitable public interactions, the “privatisation” of ownership of both infrastructure and data, the repurposing of “big data” drawn from the IoT in smart cities and the storage of that data in the Cloud.

This paper, drawing on the author’s engagement with smart city development in Glasgow as well as the results of an international conference in the area curated by the author, argues that smart cities combine the three greatest current threats to personal privacy, with which regulation has so far failed to deal effectively: the Internet of Things (IoT) or “ubiquitous computing”; “Big Data”; and the Cloud. While these three phenomena have been examined extensively in much privacy literature (particularly the last two), both in the US and the EU, the combination is under-explored. Furthermore, US legal literature and solutions (if any) are not simply transferable to the EU because of the US’s lack of an omnibus data protection (DP) law. I will discuss how and if EU DP law controls possible threats to personal privacy from smart cities and suggest further research on two possible solutions: first, a mandatory holistic privacy impact assessment (PIA) exercise for smart cities; and second, code solutions for flagging the need for, and consequences of, giving consent to collection of data in ambient environments….(More)

Daedalus Issue on “The Internet”


Press release: “Thirty years ago, the Internet was a network that primarily delivered email among academic and government employees. Today, it is rapidly evolving into a control system for our physical environment through the Internet of Things, as mobile and wearable technology integrates the Internet more tightly into our everyday lives.

How will the future Internet be shaped by the design choices that we are making today? Could the Internet evolve into a fundamentally different platform than the one to which we have grown accustomed? As an alternative to big data, what would it mean to make ubiquitously collected data safely available to individuals as small data? How could we attain both security and privacy in the face of trends that seem to offer neither? And what role do public institutions, such as libraries, have in an environment that becomes more privatized by the day?

These are some of the questions addressed in the Winter 2016 issue of Daedalus on “The Internet.”  As guest editors David D. Clark (Senior Research Scientist at the MIT Computer Science and Artificial Intelligence Laboratory) and Yochai Benkler (Berkman Professor of Entrepreneurial Legal Studies at Harvard Law School and Faculty Co-Director of the Berkman Center for Internet and Society at Harvard University) have observed, the Internet “has become increasingly privately owned, commercial, productive, creative, and dangerous.”

Some of the themes explored in the issue include:

  • The conflicts that emerge among governments, corporate stakeholders, and Internet users through choices that are made in the design of the Internet
  • The challenges—including those of privacy and security—that materialize in the evolution from fixed terminals to ubiquitous computing
  • The role of public institutions in shaping the Internet’s privately owned open spaces
  • The ownership and security of data used for automatic control of connected devices, and
  • Consumer demand for “free” services—developed and supported through the sale of user data to advertisers….

Essays in the Winter 2016 issue of Daedalus include:

  • The Contingent Internet by David D. Clark (MIT)
  • Degrees of Freedom, Dimensions of Power by Yochai Benkler (Harvard Law School)
  • Edge Networks and Devices for the Internet of Things by Peter T. Kirstein (University College London)
  • Reassembling Our Digital Selves by Deborah Estrin (Cornell Tech and Weill Cornell Medical College) and Ari Juels (Cornell Tech)
  • Choices: Privacy and Surveillance in a Once and Future Internet by Susan Landau (Worcester Polytechnic Institute)
  • As Pirates Become CEOs: The Closing of the Open Internet by Zeynep Tufekci (University of North Carolina at Chapel Hill)
  • Design Choices for Libraries in the Digital-Plus Era by John Palfrey (Phillips Academy)…(More)

See also: Introduction

Swipe right to fix the world: can Tinder-like tech match solutions to problems?


Beth Noveck in The Guardian: “Increasingly, these technologies of expertise are making it possible for individuals to make their lived experience searchable. The New York police department, for example, maintains a database of employee skills. As the social service agency of last resort, the department needs to be able to pinpoint quickly who within the organization has the know-how to wrangle a runaway beehive in Brooklyn or sing the national anthem in Queens in Chinese.

In public institutions, especially, it is all too common for individual know-how to be masked by vague titles like “manager” and “director”. Using software to give organizations insights into the aptitudes of their employees has the potential to improve effectiveness and efficiency for the public good.

Already an accelerating practice in the private sector, where managers want granular evidence of hard skills not readily apparent from transcripts, this year the World Bank created its own expert network, called SkillFinder, to index the talents of its 27,000 employees, consultants and alumni. With the launch of SkillFinder, the bank is just beginning to explore how to use the tool to better organize its human capital to achieve its mission of eradicating poverty.

Giving people outside as well as inside institutions opportunities to share their knowledge could save time, financial resources and even lives. Take the example of PulsePoint, a smartphone app created by the fire department of San Ramon, California. Now used by 1,400 communities across the United States, PulsePoint matches those with a specific skill, namely CPR training, to nearby emergencies, with dramatic results.

By tapping into a feed of 911 calls, PulsePoint sends a text message, “CPR Needed!”, to registered members of the public – off-duty doctors, nurses, police and trained amateurs – near the victim. Effective bystander CPR, administered immediately, can potentially double or triple the victim’s chance of survival. By augmenting traditional government first response, PulsePoint’s matching has already helped over 7,000 victims.
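The matching step described above amounts to a radius query around the incident. The data model, coordinates, and 400-meter radius in this sketch are illustrative assumptions, not PulsePoint's actual implementation:

```python
# Sketch of proximity matching: alert registered CPR-trained responders
# within a fixed radius of a cardiac incident. Haversine distance is the
# standard great-circle formula; everything else here is illustrative.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def responders_to_alert(incident, responders, radius_km=0.4):
    """Return responders close enough to reach the victim before EMS."""
    lat, lon = incident
    return [name for name, (rlat, rlon) in responders.items()
            if haversine_km(lat, lon, rlat, rlon) <= radius_km]

responders = {
    "off-duty nurse": (37.780, -121.978),
    "trained amateur": (37.781, -121.979),
    "too far away": (37.900, -122.100),
}
print(responders_to_alert((37.780, -121.978), responders))
```

A production system would of course work from live location feeds and dispatch data rather than a static dictionary; the point is only that the core matching logic is a simple distance filter over registered skills.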

Employers can accelerate this process by going beyond merely asking employees for HR information and instead beginning to catalog systematically the unique skills of the individuals within their organization. Many employers are already turning to new technology to match employees (and would-be employees) with the right skills to available jobs. How easily they could develop and share databases with public information about who has what experience, while at the same time protecting the privacy of personal information….(More)”

Privacy by design in big data


An overview of privacy enhancing technologies in the era of big data analytics by the European Union Agency for Network and Information Security (ENISA) : “The extensive collection and further processing of personal information in the context of big data analytics has given rise to serious privacy concerns, especially relating to wide scale electronic surveillance, profiling, and disclosure of private data. In order to allow for all the benefits of analytics without invading individuals’ private sphere, it is of utmost importance to draw the limits of big data processing and integrate the appropriate data protection safeguards in the core of the analytics value chain. ENISA, with the current report, aims at supporting this approach, taking the position that, with respect to the underlying legal obligations, the challenges of technology (for big data) should be addressed by the opportunities of technology (for privacy). To this end, in the present study we first explain the need to shift the discussion from “big data versus privacy” to “big data with privacy”, adopting the privacy and data protection principles as an essential value of big data, not only for the benefit of the individuals, but also for the very prosperity of big data analytics. In this respect, the concept of privacy by design is key in identifying the privacy requirements early at the big data analytics value chain and in subsequently implementing the necessary technical and organizational measures. Therefore, after an analysis of the proposed privacy by design strategies in the different phases of the big data value chain, we provide an overview of specific identified privacy enhancing technologies that we find of special interest for the current and future big data landscape. 
In particular, we discuss anonymization, the “traditional” analytics technique, the emerging area of encrypted search and privacy preserving computations, granular access control mechanisms, policy enforcement and accountability, as well as data provenance issues. Moreover, new transparency and access tools in big data are explored, together with techniques for user empowerment and control. Following the aforementioned work, one immediate conclusion that can be derived is that achieving “big data with privacy” is not an easy task and a lot of research and implementation is still needed. Yet, we find that this task can be possible, as long as all the involved stakeholders take the necessary steps to integrate privacy and data protection safeguards in the heart of big data, by design and by default. To this end, ENISA makes the following recommendations:

  • Privacy by design applied …
  • Decentralised versus centralised data analytics …
  • Support and automation of policy enforcement
  • Transparency and control….
  • User awareness and promotion of PETs …
  • A coherent approach towards privacy and big data ….(More)”
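One of the anonymization criteria commonly surveyed under this heading is k-anonymity: a released table is k-anonymous if every combination of quasi-identifiers (such as an age band and a postcode prefix) is shared by at least k records, so no individual can be singled out by those attributes alone. A minimal checker, with illustrative data:

```python
# Minimal k-anonymity check: count each quasi-identifier combination and
# require every combination to appear at least k times. The field names
# and sample records are illustrative assumptions.
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every quasi-identifier combination occurs at least k times."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in combos.values())

table = [
    {"age_band": "30-39", "postcode": "G1", "diagnosis": "flu"},
    {"age_band": "30-39", "postcode": "G1", "diagnosis": "asthma"},
    {"age_band": "40-49", "postcode": "G2", "diagnosis": "flu"},
]
# The lone 40-49/G2 record makes the table re-identifiable at k=2.
print(is_k_anonymous(table, ["age_band", "postcode"], k=2))  # False
```

Checks like this illustrate the report's broader point: privacy safeguards must be applied at the point where data is prepared for release, not retrofitted after analytics have run.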

Big and Open Linked Data (BOLD) in government: A challenge to transparency and privacy?


Marijn Janssen and Jeroen van den Hoven in Government Information Quarterly: “Big and Open Linked Data (BOLD) results in new opportunities and has the potential to transform government and its interactions with the public. BOLD provides the opportunity to analyze the behavior of individuals, increase control, and reduce privacy. At the same time, BOLD can be used to create an open and transparent government. Transparency and privacy are considered important societal and democratic values that are needed to inform citizens and let them participate in democratic processes. Practices in these areas are changing with the rise of BOLD. Although intuitively appealing, the concepts of transparency and privacy have many interpretations and are difficult to conceptualize, which often makes them hard to implement. Transparency and privacy should be conceptualized as complex, non-dichotomous constructs interrelated with other factors. Only by conceptualizing these values in this way can the nature and impact of BOLD on privacy and transparency be understood, and their levels be balanced with security, safety, openness and other socially desirable values….(More)”
Privacy in Public Spaces: What Expectations of Privacy Do We Have in Social Media Intelligence?


Paper by Edwards, Lilian and Urquhart, Lachlan: “In this paper we give a basic introduction to the transition in contemporary surveillance from traditional top-down police surveillance to profiling and “pre-crime” methods. We then review in more detail the rise of open source (OSINT) and social media (SOCMINT) intelligence and its use by law enforcement and security authorities. Following this, we consider what, if any, privacy protection is currently given in UK law to SOCMINT. Given the largely negative response to the above question, we analyse what reasonable expectations of privacy there may be for users of public social media, with reference to existing case law on Article 8 of the ECHR. Two factors in particular are argued to be supportive of a reasonable expectation of privacy in open public social media communications: first, the failure of many social network users to perceive the environment where they communicate as “public”; and second, the impact of search engines (and other automated analytics) on traditional conceptions of structured dossiers as most problematic for state surveillance. Lastly, we conclude that existing law does not provide adequate protection for open SOCMINT and that this will be increasingly significant as more and more personal data is disclosed and collected in public without well-defined expectations of privacy….(More)”