Blood donors in Sweden get a text message whenever their blood saves someone’s life


Jon Stone at the Independent: “With blood donation rates in decline all over the developed world, Sweden’s blood service is enlisting new technology to help push back against shortages.

One new initiative, where donors are sent automatic text messages telling them when their blood has actually been used, has caught the public eye.

People who donate initially receive a ‘thank you’ text when they give blood, but they get another message when their blood makes it into somebody else’s veins.

“We are constantly trying to develop ways to express [donors’] importance,” Karolina Blom Wiberg, a communications manager at the Stockholm blood service told The Independent.

“We want to give them feedback on their effort, and we find this is a good way to do that.”

The service says the messages give donors more positive feedback about how they’ve helped their fellow citizens – which encourages them to donate again.

But the new policy has also been a hit on social media and has got people talking about blood donation amongst their friends….(More)”

How a Mexico City Traffic Experiment Connects to Community Trust


Zoe Mendelson in Next City: “Last November, Gómez-Mont, Jose Castillo, an urban planning professor at Harvard’s Graduate School of Design, and Carlos Gershenson, their data analyst, won the Audi Urban Future award for their plan to use big data to solve Mexico City’s traffic problem. The plan consists of three parts, the first a data-donating platform that collects information on origin and destination, transit times, and modes of transit. The app, Living Mobs, is now in use in beta form. The plan also establishes data-sharing partnerships with companies, educational institutions and government agencies. So far, they’ve already signed on Yaxi, Microsoft, Movistar and Uber, among others, and collected 14,000 datasets.

The data will be a welcome new resource for the city. “We just don’t have enough,” explains Gómez-Mont. “We call it ‘big city, little data.’” The city’s last origin-destination survey, conducted in 2007, only caught data from 50,000 people, which at the time was somewhat of a feat. Now, just one of their current data-sharing partners, Yaxi, alone has 10,000 cars circulating. Still, they have one major obstacle to a comprehensive citywide survey that can only be partially addressed by their data-donating platform (which also, of course, depends on people having smartphones): 60 percent of transportation in Mexico City is on a hard-to-track informal bus system.

The data will eventually end up in an app that gives people real-time transit information. But an underlying idea — that traffic can be solved simply by asking people to take turns — is the project’s most radical and interesting component. Gómez-Mont paints a seductive futuristic vision of an incentivized negotiation with the city.

“Say I wake up and while getting ready for work I check and see that Périferico is packed and I say, ‘OK, today I’m going to use my bike or take public transit,’ and maybe I earn some kind of City Points, which translates into a tax break. Or maybe I’m on Périferico and earn points for getting off to relieve congestion.” She even envisions a system through which people could submit their calendar data weeks in advance. With the increasing popularity of Google Calendar and other similar systems that sync with smartphones, advanced “data donation” doesn’t seem that far-fetched.

Essentially, the app would create the opportunity for an entire city to behave as a group and solve its own problems together in real time.

Gómez-Mont insists that mobility is not just a problem for the government to solve. “It’s also very much about citizens and how we behave and what type of culture is embedded in the world outside of the government,” she notes….(More)”.

Introducing the News Lab


Steve Grove at Google: “It’s hard to think of a more important source of information in the world than quality journalism. At its best, news communicates truth to power, keeps societies free and open, and leads to more informed decision-making by people and leaders. In the past decade, better technology and an open Internet have led to a revolution in how news is created, distributed, and consumed. And given Google’s mission to ensure quality information is accessible and useful everywhere, we want to help ensure that innovation in news leads to a more informed, more democratic world.

That’s why we’ve created the News Lab, a new effort at Google to empower innovation at the intersection of technology and media. Our mission is to collaborate with journalists and entrepreneurs to help build the future of media. And we’re tackling this in three ways: through ensuring our tools are made available to journalists around the world (and that newsrooms know how to use them); by getting helpful Google data sets in the hands of journalists everywhere; and through programs designed to build on some of the biggest opportunities that exist in the media industry today…..

Data for more insightful storytelling

There’s a revolution in data journalism happening in newsrooms today, as more data sets and more tools for analysis are allowing journalists to create insights that were never before possible. To help journalists use our data to offer a unique window to the world, last week we announced an update to our Google Trends platform. The new Google Trends provides journalists with deeper, broader, and real-time data, and incorporates feedback we collected from newsrooms and data journalists around the world. We’re also helping newsrooms around the world tell stories using data, with a daily feed of curated Google Trends based on the headlines of the day, and through partnerships with newsrooms on specific data experiments.

Another area we’ve focused our programs on is citizen reporting. Now that mobile technology allows anyone to be a reporter, we want to do our part to ensure that user-generated news content is a positive and game-changing force in media. We’re doing that with three projects: First Draft, the WITNESS Media Lab, and the YouTube Newswire—each of which aims to make YouTube and other open platforms more useful places for first-hand news content from citizen reporters around the world….(More)

Researcher uncovers inherent biases of big data collected from social media sites


Phys.org: “With every click, Facebook, Twitter and other social media users leave behind digital traces of themselves, information that can be used by businesses, government agencies and other groups that rely on “big data.”

But while the information derived from social network sites can shed light on social behavioral traits, some analyses based on this type of data collection are prone to bias from the get-go, according to new research by Northwestern University professor Eszter Hargittai, who heads the Web Use Project.

Since people don’t randomly join Facebook, Twitter or LinkedIn—they deliberately choose to engage—the data are potentially biased in terms of demographics, socioeconomic background or Internet skills, according to the research. This has implications for businesses, municipalities and other groups who use such data, because it excludes certain segments of the population and could lead to unwarranted or faulty conclusions, Hargittai said.

The study, “Is Bigger Always Better? Potential Biases of Big Data Derived from Social Network Sites” was published last month in the journal The Annals of the American Academy of Political and Social Science and is part of a larger, ongoing study.

The buzzword “big data” refers to automatically generated information about people’s behavior. It’s called “big” because it can easily include millions of observations if not more. In contrast to surveys, which require explicit responses to questions, big data is created when people do things using a service or system.

“The problem is that the only people whose behaviors and opinions are represented are those who decided to join the site in the first place,” said Hargittai, the April McClain-Delaney and John Delaney Professor in the School of Communication. “If people are analyzing big data to answer certain questions, they may be leaving out entire groups of people and their voices.”

For example, a city could use Twitter to collect local opinion regarding how to make the community more “age-friendly” or whether more bike lanes are needed. In those cases, “it’s really important to know that people aren’t on Twitter randomly, and you would only get a certain type of person’s response to the question,” said Hargittai.

“You could be missing half the population, if not more. The same holds true for companies who only use Twitter and Facebook and are looking for feedback about their products,” she said. “It really has implications for every kind of group.”…
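The selection problem Hargittai describes can be illustrated with a small simulation. The adoption and support rates below are invented for illustration, not drawn from the study: one age group both supports a policy more and joins the platform more, so a platform-only "survey" overstates support.

```python
import random

random.seed(0)

# Hypothetical population: support for more bike lanes differs by age group,
# and so does the (assumed) probability of being on the platform at all.
groups = [
    # (share of population, support rate, platform adoption rate)
    (0.30, 0.80, 0.60),   # younger residents: high support, high adoption
    (0.70, 0.40, 0.15),   # older residents: lower support, low adoption
]

population = []
for share, support_rate, adoption_rate in groups:
    for _ in range(int(share * 100_000)):
        population.append((random.random() < support_rate,
                           random.random() < adoption_rate))

true_support = sum(s for s, _ in population) / len(population)
on_platform = [s for s, joined in population if joined]
observed_support = sum(on_platform) / len(on_platform)

print(f"true support:    {true_support:.2f}")      # close to 0.52
print(f"platform sample: {observed_support:.2f}")  # noticeably higher
```

The platform sample skews toward the high-adoption group, so the estimate drifts well above the population value — exactly the "certain type of person's response" problem the article describes.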

More information: “Is Bigger Always Better? Potential Biases of Big Data Derived from Social Network Sites” The Annals of the American Academy of Political and Social Science May 2015 659: 63-76, DOI: 10.1177/0002716215570866

Open Innovation, Open Science, Open to the World


Speech by Carlos Moedas, EU Commissioner for Research, Science and Innovation: “On 25 April this year, an earthquake of magnitude 7.3 hit Nepal. To get real-time geographical information, the response teams used an online mapping tool called Open Street Map. Open Street Map has created an entire online map of the world using local knowledge, GPS tracks and donated sources, all provided on a voluntary basis. It is open license for any use.

Open Street Map, created by a 24-year-old computer science student at University College London in 2004, today has 2 million users and has been used for many digital humanitarian and commercial purposes: from the earthquakes in Haiti and Nepal to the Ebola outbreak in West Africa.

This story is one of many that demonstrate that we are moving into a world of open innovation and user innovation. A world where the digital and physical are coming together. A world where new knowledge is created through global collaborations involving thousands of people from across the world and from all walks of life.

Ladies and gentlemen, over the next two days I would like us to chart a new path for European research and innovation policy. A new strategy that is fit for purpose for a world that is open, digital and global. And I would like to set out at the start of this important conference my own ambitions for the coming years….

Open innovation is about involving far more actors in the innovation process, from researchers, to entrepreneurs, to users, to governments and civil society. We need open innovation to capitalise on the results of European research and innovation. This means creating the right ecosystems, increasing investment, and bringing more companies and regions into the knowledge economy. I would like to go further and faster towards open innovation….

I am convinced that excellent science is the foundation of future prosperity, and that openness is the key to excellence. We are often told that it takes many decades for scientific breakthroughs to find commercial application.

Let me tell you a story which shows the opposite. Graphene was first isolated in the laboratory by Profs. Geim and Novoselov at the University of Manchester in 2003 (Nobel Prize in Physics, 2010). The development of graphene has since benefitted from major EU support, including ERC grants for Profs. Geim and Novoselov. So I am proud to show you one of the new graphene products that will soon be available on the market.

This light bulb uses the unique thermal dissipation properties of graphene to achieve greater energy efficiency and a longer lifetime than LED bulbs. It was developed by a spin-out company from the University of Manchester, called Graphene Lighting, and is expected to go on sale by the end of the year.

But we must not be complacent. If we look at indicators of the most excellent science, we find that Europe is not top of the rankings in certain areas. Our ultimate goal should always be to promote excellence not only through ERC and Marie Skłodowska-Curie but throughout the entire H2020.

For such an objective we have to move forward on two fronts:

First, we are preparing a call for a European Science Cloud project in order to explore the possibility of creating a cloud for our scientists. We need more open access to research results and the underlying data. Open access publication is already a requirement under Horizon 2020, but we now need to look seriously at open data…

When innovators like LEGO start fusing real bricks with digital magic, when citizens conduct their own R&D through online community projects, when doctors start printing live tissues for patients … Policymakers must follow suit…(More)”

Transforming Government Information


Sharyn Clarkson at the (Interim) Digital Transformation Office (Australia): “Our challenge: How do we get the right information and services to people when and where they need it?

The public relies on Government for a broad range of information – advice for individuals and businesses, what services are available and how to access them, and how various rules and laws impact our lives.

The government’s digital environment has grown organically over the last couple of decades. At the moment, information is largely created and managed within agencies and published across more than 1200 disparate gov.au websites, plus a range of social media accounts, apps and other digital formats.

This creates some difficulties for people looking for government information. By publishing within agency silos we are presenting people with an agency-centric view of government information. This is a problem because people largely don’t understand or care about how government organises itself and the structure of government does not map to the needs of people. Having a baby or travelling overseas? Up to a dozen government agencies may have information relevant to you. And as people’s needs span more than one agency, they end up with a disjointed and confusing user experience as they have to navigate across disparate government sites. And even if you begin at your favourite search engine how do you know which of the many government search results is the right place to start?

There are two government entry points already in place to help users – Australia.gov.au and business.gov.au – but they largely act as an umbrella across the 1200+ sites and currently only provide a very thin layer of whole of government information and mainly refer people off to other websites.

The establishment of the DTO has provided the first opportunity for people to come together and better understand how our underlying structural landscape is impacting people’s experience with government. It’s also given us an opportunity to take a step back and ask some of the big questions about how we manage information and what problems can only really be solved through whole of government transformation.

How do we make information and services easier to find? How do we make sure we provide information that people can trust and rely upon at times of need? How should the gov.au landscape be organised to make it easier for us to meet users’ needs and expectations? How many websites should we have – assuming 1200 is too many? What makes up a better user experience – does it mean all sites should look and feel the same? How can we provide government information at the places people naturally go looking for assistance – even if these are not government sites?

As we asked these questions we started to come across some central ideas:

  • What if we could decouple the authoring and management of information from the publishing process, so the subject experts in government still manage their content but we have flexibility to present it in more user-centric ways?
  • What if we unleashed government information? Making it possible for state and local governments, non-profit groups and businesses to deliver content and services alongside their own information to give better value to users.
  • Should we move the bureaucratic content (information about agencies and how they are managed such as annual reports, budget statements and operating rules) out of the way of core content and services for people? Can we simplify our environment and base it around topics and life events instead of agencies? What if we had people in government responsible for curating these topics and life events across agencies and creating simpler pathways for users?…(More)”

Rethinking Smart Cities From The Ground Up


New report by Tom Saunders and Peter Baeck (NESTA): “This report tells the stories of cities around the world – from Beijing to Amsterdam, and from London to Jakarta – that are addressing urban challenges by using digital technologies to engage and enable citizens.

Key findings

  • Many ‘top down’ smart city ideas have failed to deliver on their promise, combining high costs and low returns.
  • ‘Collaborative technologies’ offer cities another way to make smarter use of resources, smarter ways of collecting data and smarter ways to make decisions.
  • Collaborative technologies can also help citizens themselves shape the future of their cities.
  • We have created five recommendations for city governments that want to make their cities smarter.

As cities bring people together to live, work and play, they amplify their ability to create wealth and ideas. But scale and density also bring acute challenges: how to move people and things around; how to provide energy; how to keep people safe.

‘Smart cities’ offer sensors, ‘big data’ and advanced computing as answers to these challenges, but they have often faced criticism for being too concerned with hardware rather than with people.

In this report we argue that successful smart cities of the future will combine the best aspects of technology infrastructure while making the most of the growing potential of ‘collaborative technologies’, technologies that enable greater collaboration between urban communities and between citizens and city governments.

How will this work in practice? Drawing on examples from all around the world we investigate four emerging methods which are helping city governments engage and enable citizens: the collaborative economy, crowdsourcing data, collective intelligence and crowdfunding.

Policy recommendations

  1. Set up a civic innovation lab to drive innovation in collaborative technologies.
  2. Use open data and open platforms to mobilise collective knowledge.
  3. Take human behaviour as seriously as technology.
  4. Invest in smart people, not just smart technology.
  5. Spread the potential of collaborative technologies to all parts of society….(More)”

Please, Corporations, Experiment on Us


Michelle N. Meyer and Christopher Chabris in the New York Times: ” Can it ever be ethical for companies or governments to experiment on their employees, customers or citizens without their consent?

The conventional answer — of course not! — animated public outrage last year after Facebook published a study in which it manipulated how much emotional content more than half a million of its users saw. Similar indignation followed the revelation by the dating site OkCupid that, as an experiment, it briefly told some pairs of users that they were good matches when its algorithm had predicted otherwise.

But this outrage is misguided. Indeed, we believe that it is based on a kind of moral illusion.

Companies — and other powerful actors, including lawmakers, educators and doctors — “experiment” on us without our consent every time they implement a new policy, practice or product without knowing its consequences. When Facebook started, it created a radical new way for people to share emotionally laden information, with unknown effects on their moods. And when OkCupid started, it advised users to go on dates based on an algorithm without knowing whether it worked.

Why does one “experiment” (i.e., introducing a new product) fail to raise ethical concerns, whereas a true scientific experiment (i.e., introducing a variation of the product to determine the comparative safety or efficacy of the original) sets off ethical alarms?

In a forthcoming article in the Colorado Technology Law Journal, one of us (Professor Meyer) calls this the “A/B illusion” — the human tendency to focus on the risk, uncertainty and power asymmetries of running a test that compares A to B, while ignoring those factors when A is simply imposed by itself.

Consider a hypothetical example. A chief executive is concerned that her employees are taking insufficient advantage of the company’s policy of matching contributions to retirement savings accounts. She suspects that telling her workers how many others their age are making the maximum contribution would nudge them to save more, so she includes this information in personalized letters to them.

If contributions go up, maybe the new policy worked. But perhaps contributions would have gone up anyhow (say, because of an improving economy). If contributions go down, it might be because the policy failed. Or perhaps a declining economy is to blame, and contributions would have gone down even more without the letter.

You can’t answer these questions without doing a true scientific experiment — in technology jargon, an “A/B test.” The company could randomly assign its employees to receive either the old enrollment packet or the new one that includes the peer contribution information, and then statistically compare the two groups of employees to see which saved more.
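The randomized comparison described above can be sketched in a few lines. The contribution figures and the $400 "peer information" effect below are invented purely for illustration; the point is the mechanics of random assignment plus a permutation test to judge whether the observed gap could be chance.

```python
import random
import statistics

random.seed(42)

# Invented model of annual retirement contributions (in dollars): the "B"
# letter with peer-contribution information is assumed to add about $400.
def simulate_contribution(got_peer_info: bool) -> float:
    base = random.gauss(3000, 800)
    return max(0.0, base + (400 if got_peer_info else 0))

# Random assignment: each employee receives the old packet (A) or new one (B).
assignment = [random.random() < 0.5 for _ in range(2000)]
group_a = [simulate_contribution(False) for b in assignment if not b]
group_b = [simulate_contribution(True) for b in assignment if b]

diff = statistics.mean(group_b) - statistics.mean(group_a)
print(f"observed difference: ${diff:.0f}")

# Permutation test: how often does randomly relabeling the pooled data
# produce a gap at least this large? Rarely, if the letter really mattered.
pooled = group_a + group_b
extreme = 0
for _ in range(500):
    random.shuffle(pooled)
    perm = (statistics.mean(pooled[:len(group_b)])
            - statistics.mean(pooled[len(group_b):]))
    if perm >= diff:
        extreme += 1
print(f"permutation p-value: {extreme / 500:.3f}")
```

Because assignment is random, an improving or declining economy hits both groups equally, which is exactly why the A/B test can answer the question the before/after comparison cannot.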

Let’s be clear: This is experimenting on people without their consent, and the absence of consent is essential to the validity of the entire endeavor. If the C.E.O. were to tell the workers that they had been randomly assigned to receive one of two different letters, and why, that information would be likely to distort their choices.

Our chief executive isn’t so hypothetical. Economists do help corporations run such experiments, but many managers chafe at debriefing their employees afterward, fearing that they will be outraged that they were experimented on without their consent. A company’s unwillingness to debrief, in turn, can be a deal-breaker for the ethics boards that authorize research. So those C.E.O.s do what powerful people usually do: Pick the policy that their intuition tells them will work best, and apply it to everyone….(More)”

Forging Trust Communities: How Technology Changes Politics


Book by Irene S. Wu: “Bloggers in India used social media and wikis to broadcast news and bring humanitarian aid to tsunami victims in South Asia. Terrorist groups like ISIS pour out messages and recruit new members on websites. The Internet is the new public square, bringing to politics a platform on which to create community at both the grassroots and bureaucratic level. Drawing on historical and contemporary case studies from more than ten countries, Irene S. Wu’s Forging Trust Communities argues that the Internet, and the technologies that predate it, catalyze political change by creating new opportunities for cooperation. The Internet does not simply enable faster and easier communication, but makes it possible for people around the world to interact closely, reciprocate favors, and build trust. The information and ideas exchanged by members of these cooperative communities become key sources of political power akin to military might and economic strength.

Wu illustrates the rich world history of citizens and leaders exercising political power through communications technology. People in nineteenth-century China, for example, used the telegraph and newspapers to mobilize against the emperor. In 1970, Taiwanese cable television gave voice to a political opposition demanding democracy. Both Qatar (in the 1990s) and Great Britain (in the 1930s) relied on public broadcasters to enhance their influence abroad. Additional case studies from Brazil, Egypt, the United States, Russia, India, the Philippines, and Tunisia reveal how various technologies function to create new political energy, enabling activists to challenge institutions while allowing governments to increase their power at home and abroad.

Forging Trust Communities demonstrates that the way people receive and share information through network communities reveals as much about their political identity as their socioeconomic class, ethnicity, or religion. Scholars and students in political science, public administration, international studies, sociology, and the history of science and technology will find this to be an insightful and indispensable work…(More)”

A computational algorithm for fact-checking


Kurzweil News: “Computers can now do fact-checking for any body of knowledge, according to Indiana University network scientists, writing in an open-access paper published June 17 in PLoS ONE.

Using factual information from summary infoboxes from Wikipedia* as a source, they built a “knowledge graph” with 3 million concepts and 23 million links between them. A link between two concepts in the graph can be read as a simple factual statement, such as “Socrates is a person” or “Paris is the capital of France.”

In the first use of this method, IU scientists created a simple computational fact-checker that assigns “truth scores” to statements concerning history, geography and entertainment, as well as random statements drawn from the text of Wikipedia. In multiple experiments, the automated system consistently matched the assessment of human fact-checkers in terms of the humans’ certitude about the accuracy of these statements.

Dealing with misinformation and disinformation

In what the IU scientists describe as an “automatic game of trivia,” the team applied their algorithm to answer simple questions related to geography, history, and entertainment, including statements that matched states or nations with their capitals, presidents with their spouses, and Oscar-winning film directors with the movie for which they won the Best Picture award. The majority of tests returned highly accurate truth scores.

Lastly, the scientists used the algorithm to fact-check excerpts from the main text of Wikipedia, which were previously labeled by human fact-checkers as true or false, and found a positive correlation between the truth scores produced by the algorithm and the answers provided by the fact-checkers.

Significantly, the IU team found their computational method could even assess the truthfulness of statements about information not directly contained in the infoboxes. For example, it could verify that Steve Tesich — the Serbian-American screenwriter of the classic Hoosier film “Breaking Away” — graduated from IU, even though that information is not specifically addressed in the infobox about him.

Using multiple sources to improve accuracy and richness of data

“The measurement of the truthfulness of statements appears to rely strongly on indirect connections, or ‘paths,’ between concepts,” said Giovanni Luca Ciampaglia, a postdoctoral fellow at the Center for Complex Networks and Systems Research in the IU Bloomington School of Informatics and Computing, who led the study….
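A toy version of this path-based idea can be sketched as a breadth-first search over a small knowledge graph. This is not the authors' actual algorithm (which, among other things, discounts paths that run through generic, high-degree concepts); the extra edges beyond the article's two example facts are invented for illustration.

```python
from collections import deque

# Toy knowledge graph: each undirected edge is a simple factual statement,
# e.g. ("Paris", "France") for "Paris is the capital of France".
edges = [
    ("Socrates", "person"),
    ("Paris", "France"),
    ("France", "Europe"),
    ("Rome", "Italy"),
    ("Italy", "Europe"),
]

graph: dict[str, set[str]] = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def truth_score(subject: str, obj: str) -> float:
    """Score a statement by the shortest path between its two concepts:
    1.0 for a direct link, decaying toward 0.0 as the path lengthens."""
    if subject == obj:
        return 1.0
    if subject not in graph or obj not in graph:
        return 0.0
    seen = {subject}
    frontier = deque([(subject, 0)])
    while frontier:
        node, dist = frontier.popleft()
        for neighbor in graph[node]:
            if neighbor == obj:
                return 1.0 / (dist + 1)   # dist + 1 = shortest path length
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, dist + 1))
    return 0.0  # no connecting path: no support for the statement

print(truth_score("Paris", "France"))    # direct link -> 1.0
print(truth_score("Paris", "Europe"))    # one intermediate hop -> 0.5
print(truth_score("Paris", "Rome"))      # Paris-France-Europe-Italy-Rome -> 0.25
print(truth_score("Paris", "Socrates"))  # disconnected -> 0.0
```

The hedged intuition matches the quote: statements whose concepts are tightly connected through short indirect paths score high, while statements requiring long or nonexistent chains of links score low.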

“These results are encouraging and exciting. We live in an age of information overload, including abundant misinformation, unsubstantiated rumors and conspiracy theories whose volume threatens to overwhelm journalists and the public. Our experiments point to methods to abstract the vital and complex human task of fact-checking into a network analysis problem, which is easy to solve computationally.”

Expanding the knowledge base

Although the experiments were conducted using Wikipedia, the IU team’s method does not assume any particular source of knowledge. The scientists aim to conduct additional experiments using knowledge graphs built from other sources of human knowledge, such as Freebase, the open-knowledge base built by Google, and note that multiple information sources could be used together to account for different belief systems….(More)”