Enchanted Objects


Book by David Rose: “Some believe the future will look like more of the same—more smartphones, tablets, screens embedded in every conceivable surface. David Rose has a different vision: technology that atomizes, combining itself with the objects that make up the very fabric of daily living. Such technology will be woven into the background of our environment, enhancing human relationships and channeling desires for omniscience, long life, and creative expression. The enchanted objects of fairy tales and science fiction will enter real life.
Groundbreaking, timely, and provocative, Enchanted Objects is a blueprint for a better future, where efficient solutions come hand in hand with technology that delights our senses. It is essential reading for designers, technologists, entrepreneurs, business leaders, and anyone who wishes to understand the future and stay relevant in the Internet of Things.”

For Big-Data Scientists, ‘Janitor Work’ Is Key Hurdle to Insights


in the New York Times: “Technology revolutions come in measured, sometimes foot-dragging steps. The lab science and marketing enthusiasm tend to underestimate the bottlenecks to progress that must be overcome with hard work and practical engineering.

The field known as “big data” offers a contemporary case study. The catchphrase stands for the modern abundance of digital data from many sources — the web, sensors, smartphones and corporate databases — that can be mined with clever software for discoveries and insights. Its promise is smarter, data-driven decision-making in every field. That is why data scientist is the economy’s hot new job.

Yet far too much handcrafted work — what data scientists call “data wrangling,” “data munging” and “data janitor work” — is still required. Data scientists, according to interviews and expert estimates, spend from 50 percent to 80 percent of their time mired in this more mundane labor of collecting and preparing unruly digital data, before it can be explored for useful nuggets….”
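The “janitor work” described above can be sketched with a small, hypothetical cleaning step in Python. The column names, sentinel values, and messy rows below are invented for illustration; real wrangling pipelines are far larger, but the flavor is the same:

```python
import csv
import io

# Hypothetical messy export: inconsistent casing, stray whitespace,
# and sentinel strings standing in for missing values (all invented
# for illustration).
raw = """city,reading
 New York,42
new york ,n/a
BOSTON,17.5
,-999
"""

MISSING = {"n/a", "-999", ""}

def clean_row(row):
    # Normalize free-text fields and coerce numeric fields,
    # mapping sentinels to None -- the unglamorous bulk of the job.
    city = row["city"].strip().title() or None
    val = row["reading"].strip()
    reading = None if val in MISSING else float(val)
    return {"city": city, "reading": reading}

rows = [clean_row(r) for r in csv.DictReader(io.StringIO(raw))]
usable = [r for r in rows if r["reading"] is not None]
print(len(usable), "of", len(rows), "rows usable")
```

Only after steps like these, repeated across every source, can the data be “explored for useful nuggets.”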

Technology’s Crucial Role in the Fight Against Hunger


Crowdsourcing, predictive analytics and other new tools could go far toward finding innovative solutions for America’s food insecurity.

National Geographic recently sent three photographers to explore hunger in the United States. It was an effort to give a face to a very troubling statistic: Even today, one-sixth of Americans do not have enough food to eat. Fifty million people in this country are “food insecure” — having to make daily trade-offs among paying for food, housing or medical care — and 17 million of them skip at least one meal a day to get by. When choosing what to eat, many of these individuals must make choices between lesser quantities of higher-quality food and larger quantities of less-nutritious processed foods, the consumption of which often leads to expensive health problems down the road.
This is an extremely serious, but not easily visible, social problem. Nor does the challenge it poses become any easier when poorly designed public-assistance programs continue to count the sauce on a pizza as a vegetable. The deficiencies caused by hunger increase the likelihood that a child will drop out of school, lowering her lifetime earning potential. In 2010 alone, food insecurity cost America $167.5 billion, a figure that includes lost economic productivity, avoidable health-care expenses and social-services programs.
As much as we need specific policy innovations if we are to eliminate hunger in America, food insecurity is just one of many extraordinarily complex and interdependent “systemic” problems facing us that would benefit from the application of technology, not just to identify innovative solutions but to implement them as well. In addition to laudable policy initiatives by such states as Illinois and Nevada, which have made hunger a priority, or Arkansas, which suffers the greatest level of food insecurity but which is making great strides at providing breakfast to schoolchildren, we can — we must — bring technology to bear to create a sustained conversation between government and citizens to engage more Americans in the fight against hunger.

Identifying who is genuinely in need cannot be done as well by a centralized government bureaucracy — even one with regional offices — as it can through a distributed network of individuals and organizations able to pinpoint with on-the-ground accuracy where the demand is greatest. Just as Ushahidi uses crowdsourcing to help locate and identify disaster victims, it should be possible to leverage the crowd to spot victims of hunger. As it stands, attempts to eradicate so-called food deserts are often built around developing solutions for residents rather than with residents. Strategies to date tend to focus on the introduction of new grocery stores or farmers’ markets but with little input from or involvement of the citizens actually affected.

Applying predictive analytics to newly available sources of public as well as private data, such as that regularly gathered by supermarkets and other vendors, could also make it easier to offer coupons and discounts to those most in need. In addition, analyzing nonprofits’ tax returns, which are legally open and available to all, could help map where the organizations serving those in need leave gaps that need to be closed by other efforts. The Governance Lab recently brought together U.S. Department of Agriculture officials with companies that use USDA data in an effort to focus on strategies supporting a White House initiative to use climate-change and other open data to improve food production.

Such innovative uses of technology, which put citizens at the center of the service-delivery process and streamline the delivery of government support, could also speed the delivery of benefits, thus reducing both costs and, every bit as important, the indignity of applying for assistance.

Being open to new and creative ideas from outside government through brainstorming and crowdsourcing exercises using social media can go beyond simply improving the quality of the services delivered. Some of these ideas, such as those arising from exciting new social-science experiments involving the use of incentives for “nudging” people to change their behaviors, might even lead them to purchase more healthful food.

Further, new kinds of public-private collaborative partnerships could create the means for people to produce their own food. Both new kinds of financing arrangements and new apps for managing the shared use of common real estate could make more community gardens possible. Similarly, with the kind of attention, convening and funding that government can bring to an issue, new neighbor-helping-neighbor programs — where, for example, people take turns shopping and cooking for one another to alleviate time away from work — could be scaled up.

Then, too, advances in citizen engagement and oversight could make it more difficult for lawmakers to cave to the pressures of lobbying groups that push for subsidies for those crops, such as white potatoes and corn, that result in our current large-scale reliance on less-nutritious foods. At the same time, citizen scientists reporting data through an app would be able to do a much better job than government inspectors in reporting what is and is not working in local communities.

As a society, we may not yet be able to banish hunger entirely. But if we commit to using new technologies and mechanisms of citizen engagement widely and wisely, we could vastly reduce its power to do harm.

As Data Overflows Online, Researchers Grapple With Ethics


at The New York Times: “Scholars are exhilarated by the prospect of tapping into the vast troves of personal data collected by Facebook, Google, Amazon and a host of start-ups, which they say could transform social science research.

Once forced to conduct painstaking personal interviews with subjects, scientists can now sit at a screen and instantly play with the digital experiences of millions of Internet users. It is the frontier of social science — experiments on people who may never even know they are subjects of study, let alone explicitly consent.

“This is a new era,” said Jeffrey T. Hancock, a Cornell University professor of communication and information science. “I liken it a little bit to when chemistry got the microscope.”

But the new era has brought some controversy with it. Professor Hancock was a co-author of the Facebook study in which the social network quietly manipulated the news feeds of nearly 700,000 people to learn how the changes affected their emotions. When the research was published in June, the outrage was immediate…

Such testing raises fundamental questions. What types of experiments are so intrusive that they need prior consent or prompt disclosure after the fact? How do companies make sure that customers have a clear understanding of how their personal information might be used? Who even decides what the rules should be?

Existing federal rules governing research on human subjects, intended for medical research, generally require consent from those studied unless the potential for harm is minimal. But many social science scholars say the federal rules never contemplated large-scale research on Internet users and provide inadequate guidance for it.

For Internet projects conducted by university researchers, institutional review boards can be helpful in vetting projects. However, corporate researchers like those at Facebook don’t face such formal reviews.

Sinan Aral, a professor at the Massachusetts Institute of Technology’s Sloan School of Management who has conducted large-scale social experiments with several tech companies, said any new rules must be carefully formulated.

“We need to understand how to think about these rules without chilling the research that has the promise of moving us miles and miles ahead of where we are today in understanding human populations,” he said. Professor Aral is planning a panel discussion on ethics at an M.I.T. conference on digital experimentation in October. (The professor also does some data analysis for The New York Times Company.)

Mary L. Gray, a senior researcher at Microsoft Research and associate professor at Indiana University’s Media School, who has worked extensively on ethics in social science, said that too often, researchers conducting digital experiments work in isolation with little outside guidance.

She and others at Microsoft Research spent the last two years setting up an ethics advisory committee and training program for researchers in the company’s labs who are working with human subjects. She is now working with Professor Hancock to bring such thinking to the broader research world.

“If everyone knew the right thing to do, we would never have anyone hurt,” she said. “We really don’t have a place where we can have these conversations.”…

An Infographic That Maps 2,000 Years of Cultural History in 5 Minutes


in Wired:  “…Last week in the journal Science, the researchers (led by University of Texas art historian Maximilian Schich) published a study that looked at the cultural history of Europe and North America by mapping the births and deaths of more than 150,000 notable figures—including everyone from Leonardo Da Vinci to Ernest Hemingway. That data was turned into an amazing animated infographic that looks strikingly similar to the illustrated flight paths you find in the back of your inflight magazine. Blue dots indicate a birth, red ones mean a death.

The researchers used data from Freebase, which touts itself as a “community curated database of people, places and things.” This gives the data a strong Western bent. You’ll notice that many parts of Asia and the Middle East (not to mention pre-colonized North America) are almost wholly ignored in this video. But to be fair, the abstract did acknowledge that the study was focused mainly on Europe and North America.
Still, mapping the geography of cultural migration does give you some insight into how the kind of culture we value has shifted over the centuries. It’s also a novel lens through which to view our more general history, as those migration trends likely illuminate bigger historical happenings like wars and the building of cross-country infrastructure.

Collective Genius


Linda A. Hill, Greg Brandeau, Emily Truelove, and Kent Lineback in HBR Review: “Google’s astonishing success in its first decade now seems to have been almost inevitable. But step inside its systems infrastructure group, and you quickly learn otherwise. The company’s meteoric growth depended in large part on its ability to innovate and scale up its infrastructure at an unprecedented pace. Bill Coughran, as a senior vice president of engineering, led the group from 2003 to 2011. His 1,000-person organization built Google’s “engine room,” the systems and equipment that allow us all to use Google and its many services 24/7. “We were doing work that no one else in the world was doing,” he says. “So when a problem happened, we couldn’t just go out and buy a solution. We had to create it.”
Coughran joined Google in 2003, just five years after its founding. By then it had already reinvented the way it handled web search and data storage multiple times. His group was using Google File System (GFS) to store the massive amount of data required to support Google searches. Given Google’s ferocious appetite for growth, Coughran knew that GFS—once a groundbreaking innovation—would have to be replaced within a couple of years. The number of searches was growing dramatically, and Google was adding Gmail and other applications that needed not just more storage but storage of a kind different from what GFS had been optimized to handle.
Building the next-generation system—and the next one, and the one after that—was the job of the systems infrastructure group. It had to create the new engine room, in-house, while simultaneously refining the current one. Because this was Coughran’s top priority—and given that he had led the storied Bell Labs and had a PhD in computer science from Stanford and degrees in mathematics from Caltech—one might expect that he would first focus on developing a technical solution for Google’s storage problems and then lead his group through its implementation.
But that’s not how Coughran proceeded. To him, there was a bigger problem, a perennial challenge that many leaders inevitably come to contemplate: How do I build an organization capable of innovating continually over time? Coughran knew that the role of a leader of innovation is not to set a vision and motivate others to follow it. It’s to create a community that is willing and able to generate new ideas…”

In Tests, Scientists Try to Change Behaviors


Wall Street Journal: “Behavioral scientists look for environmental ‘nudges’ to influence how people act. Pelle Guldborg Hansen, a behavioral scientist, is trying to figure out how to board passengers on a plane with less fuss.
The goal is to make plane-boarding more efficient by coaxing passengers to want to be more orderly, not by telling them they must. It is one of many projects in which Dr. Hansen seeks to encourage people, when faced with options, to make better choices. Among these: prompting people to properly dispose of cigarette butts outside of bars and clubs and inducing hospital workers to use hand sanitizers.
Dr. Hansen, 37 years old, is director of the Initiative for Science, Society & Policy, a collaboration of the University of Southern Denmark and Roskilde University. The concept behind his work is known commonly as a nudge, dubbed such because of the popular 2008 book of the same name by U.S. academics Richard Thaler and Cass Sunstein that examined how people make decisions.
At the Copenhagen airport, Dr. Hansen recently deployed a team of three young researchers to mill about a gate in terminal B. The trio was dressed casually in jeans and wore backpacks. They blended in with the passengers, except for the badges they wore displaying airport credentials, and the clipboards and pens they carried to record how the boarding process unfolds.
Thirty-five minutes before a flight departed, the team got into position. Andreas Rathmann Jensen stood in one corner, then moved to another, so he could survey the entire gate area. He mapped where people were sitting and where they placed their bags. This behavior can vary depending, for example, if people are flying alone, with a partner or in a group.
Johannes Schuldt-Jensen circulated among the rows and counted how many bags were blocking seats and how many seats were empty as boarding time approached. He wore headphones, though he wasn’t listening to music, because people seem less suspicious of behavior when a person has headphones on, he says. Another researcher, Kasper Hulgaard, counted how many people were standing versus sitting.
The researchers are mapping out gate-seating patterns for a total of about 500 flights. Some early observations: The more people who are standing, the more chaotic boarding tends to be. Copenhagen airport seating areas are designed for groups, even though most travelers come solo or in pairs. Solo flyers like to sit in a corner and put their bag on an adjacent seat. Pairs of travelers tend to perch anywhere as long as they can sit side-by-side….”

Complexity, Governance, and Networks: Perspectives from Public Administration


Paper by Naim Kapucu: “Complex public policy problems require a productive collaboration among different actors from multiple sectors. Networks are widely applied as a public management tool and strategy. This warrants a deeper analysis of networks and network management in public administration. There is strong interest in networks in both the practice and theory of public administration. This requires an analysis of complex networks within public governance settings. In this essay I briefly discuss research streams on complex networks, network governance, and current research challenges in public administration.”

Quantifying the Interoperability of Open Government Datasets


Paper by Pieter Colpaert, Mathias Van Compernolle, Laurens De Vocht, Anastasia Dimou, Miel Vander Sande, Peter Mechant, Ruben Verborgh, and Erik Mannens, to be published in Computer: “Open Governments use the Web as a global dataspace for datasets. It is in the interest of these governments to be interoperable with other governments worldwide, yet there is currently no way to identify relevant datasets to be interoperable with and there is no way to measure the interoperability itself. In this article we discuss the possibility of comparing identifiers used within various datasets as a way to measure semantic interoperability. We introduce three metrics to express the interoperability between two datasets: the identifier interoperability, the relevance, and the number of conflicts. The metrics are calculated from a list of statements which indicate, for each pair of identifiers in the system, whether they identify the same concept or not. While a lot of effort is needed to collect these statements, the return is high: not only are relevant datasets identified, but machine-readable feedback is also provided to the data maintainer.”
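The core idea — scoring interoperability from curated same-concept statements about identifier pairs — can be illustrated with a toy sketch. The identifiers, the statement format, and the simple overlap ratio below are assumptions made for illustration; they are not the metrics as formally defined in the paper:

```python
# Two hypothetical government datasets, each with its own identifiers
# for (possibly) the same real-world places.
dataset_a = {"geo/brussels", "geo/ghent", "geo/antwerp"}
dataset_b = {"stad/gent", "stad/antwerpen", "stad/leuven"}

# Human-curated statements: for a pair of identifiers, do they
# denote the same concept? (Invented examples.)
statements = [
    ("geo/ghent", "stad/gent", True),
    ("geo/antwerp", "stad/antwerpen", True),
    ("geo/brussels", "stad/leuven", False),
]

matched_a = {a for a, b, same in statements if same}
matched_b = {b for a, b, same in statements if same}

# One possible interoperability score: the share of identifiers,
# across both datasets, with a confirmed counterpart in the other.
overlap = (len(matched_a) + len(matched_b)) / (len(dataset_a) + len(dataset_b))
print(f"identifier overlap: {overlap:.2f}")
```

Even this simplified version shows the machine-readable feedback the authors describe: the unmatched identifiers (`geo/brussels`, `stad/leuven` here) point the data maintainer at exactly the gaps to close.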