When Guarding Student Data Endangers Valuable Research


Susan M. Dynarski in the New York Times: “There is widespread concern over threats to privacy posed by the extensive personal data collected by private companies and public agencies.

Some of the potential danger comes from the government: The National Security Agency has swept up the telephone records of millions of people, in what it describes as a search for terrorists. Other threats are posed by hackers, who have exploited security gaps to steal data from retail giants like Target and from the federal Office of Personnel Management.

Resistance to data collection was inevitable — and it has been particularly intense in education.

Privacy laws have already been strengthened in some states, and multiple bills now pending in state legislatures and in Congress would tighten the security and privacy of student data. Some of this proposed legislation is so broadly written, however, that it could unintentionally choke off the use of student data for its original purpose: assessing and improving education. This data has already exposed inequities, allowing researchers and advocates to pinpoint where poor, nonwhite and non-English-speaking children have been educated inadequately by their schools.

Data gathering in education is indeed extensive: Across the United States, large, comprehensive administrative data sets now track the academic progress of tens of millions of students. Educators parse this data to understand what is working in their schools. Advocates plumb the data to expose unfair disparities in test scores and graduation rates, building cases to target more resources for the poor. Researchers rely on this data when measuring the effectiveness of education interventions.

To my knowledge there has been no large-scale, Target-like theft of private student records — probably because students’ test scores don’t have the market value of consumers’ credit card numbers. Parents’ concerns have mainly centered not on theft, but on the sharing of student data with third parties, including education technology companies. Last year, parents resisted efforts by the tech start-up InBloom to draw data on millions of students into the cloud and return it to schools as teacher-friendly “data dashboards.” Parents were deeply uncomfortable with a third party receiving and analyzing data about their children.

In response to such concerns, some pending legislation would scale back the authority of schools, districts and states to share student data with third parties, including researchers. Perhaps the most stringent of these proposals, sponsored by Senator David Vitter, a Louisiana Republican, would effectively end the analysis of student data by outside social scientists. This legislation would have banned recent prominent research documenting the benefits of smaller classes, the value of excellent teachers and the varied performance of charter schools.

Under current law, education agencies can share data with outside researchers only to benefit students and improve education. Collaborations with researchers allow districts and states to tap specialized expertise that they otherwise couldn’t afford. The Boston public school district, for example, has teamed up with early-childhood experts at Harvard to plan and evaluate its universal prekindergarten program.

In one of the longest-standing research partnerships, the University of Chicago works with the Chicago Public Schools to improve education. Partnerships like Chicago’s exist across the nation, funded by foundations and the United States Department of Education. In one initiative, a Chicago research consortium compiled reports showing high school principals that many of the seniors they had sent off to college swiftly dropped out without earning a degree. This information spurred efforts to improve high school counseling and college placement.

Specific, tailored information in the hands of teachers, principals or superintendents empowers them to do better by their students. No national survey could have told Chicago’s principals how their students were doing in college. Administrative data can provide this information, cheaply and accurately…(More)”

What cybersecurity can learn from citizen science


But as anyone who has observed an online forum thread dissecting the minutiae of geek culture can attest, hobbyists can be remarkably thorough in their exploration of topics they are passionate about. And it is often a point of pride to pick the subject that is the least conventional or popular.

The idea of citizen science is to include amateur science enthusiasts in the collection and processing of data. Thanks to the Internet, we’ve seen a surge in the number of self-taught experts in a variety of subjects. New participation platforms are social and gamified – utilizing people’s desire to compete or collaborate with others who share their passion.

How this process plays out differs from one project to the next, according to its needs: StarDust@Home asks volunteers to help sort through samples captured by the Stardust spacecraft when it flew through the coma of comet Wild 2 in 2004. They do this by viewing movies of the contents of the aerogel tiles that were used as collectors.

The security community is ripe for using citizen science in similar ways. Most antimalware vendors make use of customer samples for adding detection and cleaning to their products. Many security companies use customers’ reports to gather file reputation, telemetry and prevalence data. And bug reports come from researchers of all ages and education levels – not just professional security researchers. “Month of Bug” events are a more controversial way that security is gamified. Could security companies or organizations be doing more to engage enthusiasts to help improve our response to security issues?
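The file reputation and prevalence data that vendors gather from customer reports can be illustrated with a minimal sketch. The in-memory store, the threshold, and the reputation heuristic below are all invented for illustration; real vendor pipelines are far more elaborate:

```python
import hashlib
from collections import Counter

# Hypothetical prevalence store: maps a file's SHA-256 digest to the
# number of customer reports that mentioned it.
prevalence = Counter()

def report_sample(contents: bytes) -> str:
    """Record one customer report of a file and return its digest."""
    digest = hashlib.sha256(contents).hexdigest()
    prevalence[digest] += 1
    return digest

def reputation(digest: str, common_threshold: int = 100) -> str:
    """Crude heuristic: widely seen files are usually benign;
    rare or never-seen files deserve closer analysis."""
    count = prevalence[digest]  # Counter returns 0 for unseen keys
    if count == 0:
        return "unknown"
    return "common" if count >= common_threshold else "rare"

d = report_sample(b"example file contents")
print(reputation(d))  # a single report -> "rare"
```

Hashing lets reports from many customers be aggregated without the vendor retaining or sharing the files themselves.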

It could be argued that the stuff of security research – especially malware research – is potentially harmful in the hands of amateurs and should be handled only by seasoned professionals. Moreover, security is an adversarial system in which criminals would likely try to game any such program to improve their profits. These are important concerns that would need to be addressed.

But the citizen science approach provides good lessons…(More)”

Did Performance Measurement Cause America’s Police Problem?


Katherine Barrett and Richard Greene in Governing: “You’ve doubtless heard the maxim “what gets measured, gets managed.” Sometimes it’s attributed to management guru Peter Drucker, though others also get credit for it. But whoever actually coined the phrase, we remember the first time we became aware of it, about a quarter of a century ago.

It seemed like a purely positive sentiment to us back in the days when we naively believed that performance measurement could cure most governmental ills. If gathering data about inputs, outputs and outcomes could solve all management problems, then cities and states had access to a golden key to a more effective and efficient future. Then reality intervened and we recognized that even good measurements don’t necessarily result in the right policy or practice changes.

But, somewhat more ominously, we’ve become aware of a troubling question that lurks in the field of performance measurement: What happens if we’re not measuring the right things in the first place? If Drucker — or whoever — was right, doesn’t that mean that we may manage government programs in a way that leads to more problems? Sometimes, for example, states and localities focus their measurements on the speed with which a service is delivered. Faster always seems better. But often delivering a service quickly means doing so less effectively.

For fire departments, response times are a commonly used measure of service quality.  But “the requirement for low response times may incentivize firefighters to drive fast,” said Amy Donahue, professor and vice-provost for academic operations at the University of Connecticut. “And it has been shown that while speeding saves very little in terms of total driving time, it is much more dangerous — both to those in the emergency vehicle and other innocents who might get in their way. The potential for accidents is high, and when they happen, the consequences can be very tragic.”

As the field has become aware of these dangers, many agencies are trying to mitigate them by improving education, prohibiting responders from exceeding speed limits, and requiring responders to participate in emergency vehicle operators programs.

Examples like this one are everywhere. But we just came across something in the March 2015 edition of New Perspectives in Policing that had never occurred to us before and that seems to be widely ignored by public safety organizations around the country. It was written by Malcolm K. Sparrow, professor of practice of public management at the John F. Kennedy School of Government at Harvard University.

As violent incidents in several of America’s cities show the underlying tensions between police and the public they serve, Sparrow argues that some of this dissonance has actually been encouraged by the fact that most police departments are pushed to measure crime clearance and enforcement. These are important factors, but they have little to do with community satisfaction. Meanwhile, he points out that “a few departments now use citizen satisfaction surveys on a regular basis, but most do not.”…(More)”

Nudges Do Not Undermine Human Agency


Cass R. Sunstein in the Journal of Consumer Policy: “Some people believe that nudges undermine human agency, but with appropriate nudges, neither agency nor consumer freedom is at risk. On the contrary, nudges can promote both goals. In some contexts, they are indispensable. There is no opposition between education on the one hand and nudges on the other. Many nudges are educative. Even when they are not, they can complement, and not displace, consumer education…(More)”

New ODI research shows open data reaching every sector of UK industry


ODI: “New research has been published today (1 June) by the Open Data Institute showing that open data is reaching every sector of UK industry.

In various forms, open data is being adopted by a wide variety of businesses – small and large, new and old, from right across the country. The findings from Open data means business: UK innovation across sectors and regions draw on 270 companies with a combined turnover of £92bn and over 500k employees, identified by the ODI as using, producing or investing in open data as part of their business. The project included desk research, surveys and interviews on the companies’ experiences.

Key findings from the research include:

  • Companies using open data come from many sectors; over 46% from outside the information and communication sector. These include finance & insurance, science & technology, business administration & support, arts & entertainment, health, retail, transportation, education and energy.
  • The most popular datasets for companies are geospatial/mapping data (57%), transport data (43%) and environment data (42%).
  • 39% of companies innovating with open data are over 10 years old, with some more than 25 years old, proving open data isn’t just for new digital startups.
  • ‘Micro-enterprises’ (businesses with fewer than 10 employees) represented 70% of survey respondents, demonstrating a thriving open data startup scene. These businesses are using it to create services, products and platforms. 8% of respondents were drawn from large companies of 251 or more employees….
  • The companies surveyed listed 25 different government sources for the data they use. Notably, Ordnance Survey data was cited most frequently, by 14% of the companies. The non-government source most commonly used was OpenStreetMap, an openly licensed map of the world created by volunteers….(More)

Measuring ‘governance’ to improve lives


Robert Rotberg at the Conversation: “…Citizens everywhere desire “good governance” – to be governed well within their nation-states, their provinces, their states and their cities.

Governance is more useful than “democracy” if we wish to understand how different political rulers and ruling elites satisfy the aspirations of their citizens.

But to make the notion of “governance” useful, we need both a practical definition and a method of measuring the gradations between good and bad governance.

What’s more, if we can measure well, we can diagnose weak areas of governance and, hence, seek ways to make the weak actors strong.

Governance, defined as “the performance of governments and the delivery of services by governments,” tells us if and when governments are in fact meeting the expectations of their constituents and providing for them effectively and responsibly.

Democracy outcomes, by contrast, are much harder to gauge: the very meaning of the word is contested, making democracy impossible to measure accurately.

For the purposes of making policy decisions, if we seek to learn how citizens are faring under regime X or regime Y, we need to compare governance (not democracy) in those respective places.

In other words, governance is a construct that enables us to discern exactly whether citizens are progressing in meeting life’s goals.

Measuring governance: five bundles and 57 subcategories

Are citizens of a given country better off economically, socially and politically than they were in an earlier decade? Are their various human causes, such as being secure or being free, advancing? Are their governments treating them well, and attempting to respond to their various needs and aspirations and relieving them of anxiety?

Just comparing national gross domestic products (GDPs), life expectancies or literacy rates provides helpful distinguishing data, but governance data are more comprehensive, more telling and much more useful.

Assessing governance tells us far more about life in different developing societies than we would learn by weighing the varieties of democracy or “human development” in such places.

Government’s performance, in turn, is, according to the scheme advanced in my book On Governance and in my Index of African Governance, the delivery to citizens of five bundles (divided into 57 underlying subcategories) of political goods that citizens within any kind of political jurisdiction demand.

The five major bundles are Security and Safety, Rule of Law and Transparency, Political Participation and Respect for Human Rights, Sustainable Economic Opportunity, and Human Development (education and health)….(More)”
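The article does not say how the Index of African Governance weights its subcategories within bundles, but the structure can be made concrete with a hypothetical equal-weight aggregation. The subcategory scores below are invented for illustration:

```python
# Hypothetical illustration of bundle/index aggregation; the actual
# Index of African Governance weighting is not specified in the text.
from statistics import mean

bundles = {  # subcategory scores on a 0-100 scale (made up)
    "Security and Safety": [72.0, 65.5, 80.1],
    "Rule of Law and Transparency": [58.3, 61.0],
    "Political Participation and Respect for Human Rights": [70.2, 66.8],
    "Sustainable Economic Opportunity": [55.0, 49.7, 62.4],
    "Human Development": [68.9, 74.3],  # education, health
}

# Each bundle score is the mean of its subcategory scores...
bundle_scores = {name: mean(scores) for name, scores in bundles.items()}

# ...and the overall index is the mean of the five bundle scores.
overall = mean(bundle_scores.values())
print(f"Overall governance score: {overall:.1f}")
```

Because the score decomposes into named subcategories, a weak overall number can be traced back to the specific areas of governance that are dragging it down, which is exactly the diagnostic use the author describes.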

Governing methods: policy innovation labs, design and data science in the digital governance of education


Paper by Ben Williamson in the Journal of Educational Administration and History: “Policy innovation labs are emerging knowledge actors and technical experts in the governing of education. The article offers a historical and conceptual account of the organisational form of the policy innovation lab. Policy innovation labs are characterised by specific methods and techniques of design, data science, and digitisation in public services such as education. The second half of the article details how labs promote the use of digital data analysis, evidence-based evaluation and ‘design-for-policy’ techniques as methods for the governing of education. In particular, they promote the ‘computational thinking’ associated with computer programming as a capacity required by a ‘reluctant state’ that is increasingly concerned to delegate its responsibilities to digitally enabled citizens with the ‘designerly’ capacities and technical expertise to ‘code’ solutions to public and social problems. Policy innovation labs are experimental laboratories trialling new methods within education for administering and governing the future of the state itself….(More)”

Law school students crowdsource commencement address


Chronicle of Higher Education: “Though higher education is constantly changing, commencement ceremonies have largely stayed the same. A graduating student at Stanford Law School is trying to change that.

Marta F. Belcher is crowdsourcing the speech she will give next month at the law school’s precommencement diploma ceremony, offering her classmates an opportunity to share in crafting that final message.

The point of a student commencement speaker, Ms. Belcher said, is to have someone who can speak to the student experience. But as she learned when she gave the student address at her undergraduate ceremony, it’s not easy for one person to represent hundreds, or even thousands, of classmates.

With all the online collaboration tools that are available today, Ms. Belcher saw the possibility of updating the tradition. So she competed to be the student speaker and invited classmates to contribute to her address.

“That was so clearly the right choice — for Stanford, especially, in the Silicon Valley at the cutting edge of innovation — that we should be the ones to sort of pioneer this new kind of way of writing a graduation speech,” she said.

After holding a number of meetings and fielding questions from skeptics, Ms. Belcher set up a wiki to gather ideas. The months-long effort was divided into three stages. First students would establish themes and ideas; next they would start contributing actual content for the speech; and finally, those pieces would be edited into a cohesive narrative during collaborative “edit-a-thons.”

Since the wiki went up, in February, 85 students have contributed to it….(More)”

Big Data. Big Obstacles.


Dalton Conley et al. in the Chronicle of Higher Education: “After decades of fretting over declining response rates to traditional surveys (the mainstay of 20th-century social research), an exciting new era would appear to be dawning thanks to the rise of big data. Social contagion can be studied by scraping Twitter feeds; peer effects are tested on Facebook; long-term trends in inequality and mobility can be assessed by linking tax records across years and generations; social-psychology experiments can be run on Amazon’s Mechanical Turk service; and cultural change can be mapped by studying the rise and fall of specific Google search terms. In many ways there has been no better time to be a scholar in sociology, political science, economics, or related fields.

However, what should be an opportunity for social science is now threatened by a three-headed monster of privatization, amateurization, and Balkanization. A coordinated public effort is needed to overcome all of these obstacles.

While the availability of social-media data may obviate the problem of declining response rates, it introduces all sorts of problems with the level of access that researchers enjoy. Although some data can be culled from the web—Twitter feeds and Google searches—other data sit behind proprietary firewalls. And as individual users tune up their privacy settings, the typical university or independent researcher is increasingly locked out. Unlike federally funded studies, there is no mandate for Yahoo or Alibaba to make its data publicly available. The result, we fear, is a two-tiered system of research. Scientists working for or with big Internet companies will feast on humongous data sets—and even conduct experiments—and scholars who do not work in Silicon Valley (or Alley) will be left with proverbial scraps….

To address this triple threat of privatization, amateurization, and Balkanization, public social science needs to be bolstered for the 21st century. In the current political and economic climate, social scientists are not waiting for huge government investment like we saw during the Cold War. Instead, researchers have started to knit together disparate data sources by scraping, harmonizing, and geo­coding any and all information they can get their hands on.
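The knitting-together the authors describe (scraping, harmonizing, geocoding) can be sketched in miniature. All records, field names and coordinates below are made up, and a small lookup table stands in for a real geocoding service:

```python
# Toy illustration of harmonizing two data sources onto a shared key.
# Real work would use a geocoding API and far messier inputs.

survey = [  # e.g. scraped records, with inconsistent city strings
    {"respondent": 1, "city": "New York, NY"},
    {"respondent": 2, "city": "chicago"},
]

admin = [  # e.g. administrative records keyed by a cleaned city name
    {"city": "new york", "graduation_rate": 0.71},
    {"city": "chicago", "graduation_rate": 0.77},
]

GEO = {"new york": (40.71, -74.01), "chicago": (41.88, -87.63)}  # stub geocoder

def harmonize(city: str) -> str:
    """Normalize a city string so both sources share one join key."""
    return city.lower().split(",")[0].strip()

rates = {row["city"]: row["graduation_rate"] for row in admin}

linked = []
for row in survey:
    key = harmonize(row["city"])
    linked.append({
        "respondent": row["respondent"],
        "city": key,
        "graduation_rate": rates.get(key),  # None if no match
        "latlon": GEO.get(key),
    })
```

Most of the effort in such projects goes into the `harmonize` step: the two sources rarely share a clean key, and linkage errors propagate into every downstream analysis.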

Currently, many firms employ some well-trained social and behavioral scientists free to pursue their own research; likewise, some companies have programs by which scholars can apply to be in residence or work with their data extramurally. However, as Facebook states, its program is “by invitation only and requires an internal Facebook champion.” And while Google provides services like Ngram to the public, such limited efforts at data sharing are not enough for truly transparent and replicable science….(More)”


Contest Aims to Harness Low-Cost Devices to Help the Poor


Steve Lohr in the New York Times: “The timing and technology are right to bring the power of digital sensing to the poor to improve health, safety and education.

That is the animating assumption behind a new project announced on Tuesday. The initiative is led by Unicef and ARM, the British chip designer whose microprocessors power most smartphones and tablets. They are being joined by Frog, the San Francisco-based product strategy and design firm, along with people described as coaches and advisers from companies and organizations including Google, Orange, Singularity University, the Red Cross and the Senseable City Lab at the Massachusetts Institute of Technology.

The long-term ambition is to jump-start an industrial ecosystem for sensing and data technology that serves the needs of mothers and children in developing nations.

The project, called Wearables for Good, is beginning with a contest to generate ideas. Applications can be submitted online on the project’s website until August 4. Two winners will be selected in the fall. Each will receive $15,000, and assistance and advice from ARM, Frog and others on translating their ideas into a product and perhaps a company.

The online application lists the required characteristics for device ideas. They should be, according to the form, “cost-effective, rugged and durable, low-power and scalable.” The form offers no price limits, but it is safe to assume the project is looking for devices priced far less than an Apple Watch or a Fitbit device.

…. the Wearables for Good project goes further, focusing less on aggregated data and more on personal monitoring. “This is the next level of what we’re doing,” said Erica Kochi, co-founder of Unicef Innovation, which pursues technology initiatives that advance the agency’s goals….(More)”