Browser extension automates citations of online material


Springwise: “Plagiarism is a major concern for colleges today, which means that when it comes to writing a thesis or essay, college students can often spend an inordinate amount of time ensuring their bibliographies are up to scratch, to the detriment of the quality of the actual writing. In the past, services such as ReadCube have made it easier to annotate and search online articles, and now Citelighter automatically generates a citation for any web resource, along with a number of tools to help students organize their research.

The service is a toolbar that sits at the top of the user’s browser while they search for material for their paper. When they’ve found a fact or quote that’s useful, users simply highlight the text and click the Capture button, which saves the clipping to the project they’re working on. Citelighter automatically captures the bibliographic information necessary to create a citation that meets academic standards, and users can also add their own comments for when they come to use the quote in their essay. Citations can be re-ordered within each project to enable students to plot out a rough version of their paper before sitting down to write…”
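For readers curious about the mechanics, the core trick is metadata extraction: a web page’s title, author, and publication date are usually embedded in its HTML. Below is a minimal Python sketch of that idea. It is purely illustrative and is not Citelighter’s code; the meta-tag names checked and the citation format are assumptions.

```python
# A minimal, hypothetical sketch (not Citelighter's code): harvest the
# metadata a citation needs from a page's HTML and format it.
from datetime import date
from html.parser import HTMLParser
from urllib.request import urlopen

class MetadataParser(HTMLParser):
    """Collects the <title> text and common author/date <meta> tags."""
    def __init__(self):
        super().__init__()
        self._in_title = False
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta":
            name = attrs.get("name") or attrs.get("property") or ""
            if name in ("author", "article:author"):
                self.fields["author"] = attrs.get("content", "")
            elif name in ("date", "article:published_time"):
                self.fields["date"] = attrs.get("content", "")

    def handle_data(self, data):
        if self._in_title and data.strip():
            self.fields.setdefault("title", data.strip())

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

def cite(url):
    """Return a rough, MLA-ish citation string for the page at `url`."""
    parser = MetadataParser()
    parser.feed(urlopen(url).read().decode("utf-8", errors="replace"))
    f = parser.fields
    return (f"{f.get('author', 'Unknown author')}. "
            f"\"{f.get('title', 'Untitled')}.\" {f.get('date', 'n.d.')}. "
            f"{url} (accessed {date.today():%d %B %Y}).")

print(cite("https://example.com/"))
```

A real extension would add per-source heuristics and multiple citation styles (MLA, APA, Chicago), but the capture step reduces to exactly this kind of scrape-and-format.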

Potholes and Big Data: Crowdsourcing Our Way to Better Government


Phil Simon in Wired: “Big Data is transforming many industries and functions within organizations, often with relatively limited budgets.
Consider Thomas M. Menino, up until recently Boston’s longest-serving mayor. At some point in the past few years, Menino realized that it was no longer 1950. Perhaps he was hobnobbing with some techies from MIT at dinner one night. Whatever his motivation, he decided that there just had to be a better, more cost-effective way to maintain and fix the city’s roads. Maybe smartphones could help the city take a more proactive approach to road maintenance.
To that end, in July 2012, the Mayor’s Office of New Urban Mechanics launched a new project called Street Bump, an app that allows drivers to automatically report road hazards to the city as soon as they hear that unfortunate “thud,” with their smartphones doing all the work.
The app’s developers say their work has already sparked interest from other cities in the U.S., Europe, Africa and elsewhere that are imagining other ways to harness the technology.
Before they even start their trip, drivers using Street Bump fire up the app, then set their smartphones either on the dashboard or in a cup holder. The app takes care of the rest, using the phone’s accelerometer — a motion detector — to sense when a bump is hit. GPS records the location, and the phone transmits it to a remote AWS server.
But that’s not the end of the story. It turned out that the first version of the app reported far too many false positives (i.e., phantom potholes). This finding no doubt gave ammunition to the many naysayers who believe that technology will never be able to do what people can and that things are just fine as they are, thank you. Street Bump 1.0 “collected lots of data but couldn’t differentiate between potholes and other bumps.” After all, your smartphone or cell phone isn’t inert; it moves in the car naturally because the car is moving. And what about the scores of people whose phones “move” because they check their messages at a stoplight?
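Street Bump’s detection algorithm isn’t spelled out here, but a toy sketch shows the shape of the problem and why version 1.0 misfired: a spike in accelerometer magnitude alone cannot distinguish a pothole from a phone being handled, whereas gating on GPS speed filters out the stoplight case. All thresholds below are invented for illustration; this is not the app’s actual logic.

```python
# Toy pothole detector: flag a reading only when a sharp acceleration
# spike coincides with the car actually moving at driving speed.
# Thresholds are illustrative assumptions, not Street Bump's values.
import math

GRAVITY = 9.81          # m/s^2, resting accelerometer magnitude
SPIKE_THRESHOLD = 8.0   # m/s^2 above gravity before we call it a bump
MIN_SPEED = 5.0         # m/s; below this, motion is likely hand movement

def detect_bumps(samples):
    """samples: iterable of (ax, ay, az, gps_speed, lat, lon) readings."""
    bumps = []
    for ax, ay, az, speed, lat, lon in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        spike = abs(magnitude - GRAVITY)
        # The speed gate discards the checking-messages-at-a-stoplight case.
        if spike > SPIKE_THRESHOLD and speed > MIN_SPEED:
            bumps.append((lat, lon, spike))
    return bumps

readings = [
    (0.1, 0.2, 9.8, 13.0, 42.3601, -71.0589),   # cruising on smooth road
    (0.3, 0.5, 19.5, 12.5, 42.3602, -71.0590),  # sharp jolt while moving
    (1.5, 2.0, 17.0, 0.0, 42.3603, -71.0591),   # big motion, car stopped
]
print(detect_bumps(readings))  # only the second reading is reported
```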
To their credit, Menino and his motley crew weren’t entirely discouraged by this initial setback. In their gut, they knew that they were on to something. The idea and potential of the Street Bump app were worth pursuing and refining, even if the first version was a bit lacking. Plus, they have plenty of examples from which to learn. It’s not like the iPad, iPod, and iPhone haven’t evolved considerably over time.
Enter InnoCentive, a Massachusetts-based firm specializing in open innovation and crowdsourcing. The City of Boston contracted InnoCentive to improve Street Bump and reduce the amount of tail chasing. The company accepted the challenge and essentially turned it into a contest, a process sometimes called gamification. InnoCentive offered a network of 400,000 experts a share of $25,000 in prize money donated by Liberty Mutual.
Almost immediately, the ideas to improve Street Bump poured in from unexpected places. This crowd had wisdom. Ultimately, the best suggestions came from:

  • A group of hackers in Somerville, Massachusetts, that promotes community education and research
  • The head of the mathematics department at Grand Valley State University in Allendale, Michigan
  • An anonymous software engineer

…Crowdsourcing roadside maintenance isn’t just cool. Increasingly, projects like Street Bump are resulting in substantial savings — and better government.”

European Commission launches network to foster web talent through Massive Open Online Courses (MOOCs)


Press Release: “The Commission is launching a network of providers of MOOCs related to web and apps skills. MOOCs are online university courses which enable people to access quality education without having to leave their homes. The new network aims to map the demand for web-related skills across Europe and to promote the use of Massive Open Online Courses (MOOCs) for capacity-building in those fields.
The web-related industry is generating more economic growth than any other part of the European economy, but hundreds of thousands of jobs remain unfilled due to a lack of qualified staff.
European Commission Vice President Neelie Kroes, responsible for the Digital Agenda, said:
“By 2020, 90% of jobs will need digital skills. That is just around the corner, and we aren’t ready! Already businesses in Europe are facing a shortage of skilled ICT workers. We have to fill that gap, and this network we are launching will help us identify where the gaps are. This goes hand in hand with the work being done through the Grand Coalition for Digital Jobs”.
The Commission calls upon web entrepreneurs, universities, MOOC providers and online learners to join the network, which is part of the “Startup Europe” initiative.
Participants in the network benefit from the exchange of experiences and best practices, opportunities for networking, news updates, and the chance to participate in a conference dedicated to MOOCs for web and apps skills scheduled for the second half of 2014. In addition, the network offers a discussion group that can be found on the European Commission’s portal Open Education Europa. The initiative is coordinated by p.a.u. education in partnership with Iversity.
Useful links
Link to EC press release on the launch of first pan-European university MOOCs
Open Education Europa website
Startup Europe website
Grand Coalition for Digital Jobs website”

Statistics and Open Data: Harvesting unused knowledge, empowering citizens and improving public services


House of Commons Public Administration Committee (Tenth Report):
“1. Open data is playing an increasingly important role in Government and society. It is data that is accessible to all, free of restrictions on use or redistribution, and digital and machine-readable so that it can be combined with other data and thereby made more useful. This report looks at how the vast amounts of data generated by central and local Government can be used in open ways to improve accountability, make Government work better and strengthen the economy.

2. In this inquiry, we examined progress against a series of major government policy announcements on open data in recent years, and considered the prospects for further development. We heard of government open data initiatives going back some years, including the decision in 2009 to release some Ordnance Survey (OS) data as open data, and the Public Sector Mapping Agreement (PSMA), which makes OS data available for free to the public sector. The 2012 Open Data White Paper ‘Unleashing the Potential’ says that transparency through open data is “at the heart” of the Government’s agenda and that opening up would “foster innovation and reform public services”. In 2013 the independently-chaired review of the use, re-use, funding and regulation of Public Sector Information, led by Stephan Shakespeare, Chief Executive of the market research and polling company YouGov, urged Government to move fast to make use of data. He criticised traditional public service attitudes to data before setting out his vision:

    • To paraphrase the great retailer Sir Terry Leahy, to run an enterprise without data is like driving by night with no headlights. And yet that is what Government often does. It has a strong institutional tendency to proceed by hunch, or prejudice, or by the easy option. So the new world of data is good for government, good for business, and above all good for citizens. Imagine if we could combine all the data we produce on education and health, tax and spending, work and productivity, and use that to enhance the myriad decisions which define our future; well, we can, right now. And Britain can be first to make it happen for real.

3. This was followed by publication in October 2013 of a National Action Plan which sets out the Government’s view of the economic potential of open data as well as its aspirations for greater transparency.

4. This inquiry is part of our wider programme of work on statistics and their use in Government. A full description of the studies is set out under the heading “Statistics” in the inquiries section of our website, which can be found at www.parliament.uk/pasc. For this inquiry we received 30 pieces of written evidence and took oral evidence from 12 witnesses. We are grateful to all those who have provided evidence and to our Specialist Adviser on statistics, Simon Briscoe, for his assistance with this inquiry.”

Table of Contents:

Summary
1 Introduction
2 Improving accountability through open data
3 Open Data and Economic Growth
4 Improving Government through open data
5 Moving faster to make a reality of open data
6 A strategic approach to open data?
Conclusion
Conclusions and recommendations

Climate Data Initiative Launches with Strong Public and Private Sector Commitments


John Podesta and Dr. John P. Holdren at the White House blog:  “…today, delivering on a commitment in the President’s Climate Action Plan, we are launching the Climate Data Initiative, an ambitious new effort bringing together extensive open government data and design competitions with commitments from the private and philanthropic sectors to develop data-driven planning and resilience tools for local communities. This effort will help give communities across America the information and tools they need to plan for current and future climate impacts.
The Climate Data Initiative builds on the success of the Obama Administration’s ongoing efforts to unleash the power of open government data. Since data.gov, the central site to find U.S. government data resources, launched in 2009, the Federal government has released troves of valuable data that were previously hard to access in areas such as health, energy, education, public safety, and global development. Today these data are being used by entrepreneurs, researchers, tech innovators, and others to create countless new applications, tools, services, and businesses.
Data from NOAA, NASA, the U.S. Geological Survey, the Department of Defense, and other Federal agencies will be featured on climate.data.gov, a new section within data.gov that opens for business today. The first batch of climate data being made available will focus on coastal flooding and sea level rise. NOAA and NASA will also be announcing an innovation challenge calling on researchers and developers to create data-driven simulations to help plan for the future and to educate the public about the vulnerability of their own communities to sea level rise and flood events.
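Since climate.data.gov sits within data.gov, whose catalog runs on the open-source CKAN platform, the new datasets should also be discoverable programmatically, not just through the website. A rough sketch, assuming CKAN’s standard package_search endpoint at catalog.data.gov (an illustration, not an official example):

```python
# Query the data.gov catalog (a CKAN instance) for climate-related datasets.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def search_datasets(query, rows=5):
    """Return (total match count, titles of the first `rows` datasets)."""
    params = urlencode({"q": query, "rows": rows})
    url = f"https://catalog.data.gov/api/3/action/package_search?{params}"
    with urlopen(url) as response:
        result = json.load(response)["result"]
    return result["count"], [pkg["title"] for pkg in result["results"]]

count, titles = search_datasets("coastal flooding sea level rise")
print(f"{count} matching datasets; first {len(titles)}:")
for title in titles:
    print(" -", title)
```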
These and other Federal efforts will be amplified by a number of ambitious private commitments. For example, Esri, the company that produces the ArcGIS software used by thousands of city and regional planning experts, will be partnering with 12 cities across the country to create free and open “maps and apps” to help state and local governments plan for climate change impacts. Google will donate one petabyte—that’s 1,000 terabytes—of cloud storage for climate data, as well as 50 million hours of high-performance computing with the Google Earth Engine platform. The company is challenging the global innovation community to build a high-resolution global terrain model to help communities build resilience to anticipated climate impacts in decades to come. And the World Bank will release a new field guide for the Open Data for Resilience Initiative, which is working in more than 20 countries to map millions of buildings and urban infrastructure….”

Expanding Opportunity through Open Educational Resources


Hal Plotkin and Colleen Chien at the White House: “Using advanced technology to dramatically expand the quality and reach of education has long been a key priority for the Obama Administration.
In December 2013, the President’s Council of Advisors on Science and Technology (PCAST) issued a report exploring the potential of Massive Open Online Courses (MOOCs) to expand access to higher education opportunities. Last month, the President announced a $2B down payment and another $750M in private-sector commitments to deliver on the President’s ConnectED initiative, which will connect 99% of American K-12 students to broadband by 2017 at no cost to American taxpayers.
This week, we are happy to be joining with educators, students, and technologists worldwide to recognize and celebrate Open Education Week.
Open Educational Resources (“OER”) are educational resources that are released with copyright licenses allowing for their free use, continuous improvement, and modification by others. The world is moving fast, and OER enables educators and students to access, customize, and remix high-quality course materials reflecting the latest understanding of the world and incorporating state-of-the-art teaching methods – adding their own insights along the way. OER is not a silver-bullet solution to the many challenges that teachers, students and schools face. But it is a tool increasingly being used, for example by players like edX and the Khan Academy, to improve learning outcomes and create scalable platforms for sharing educational resources that reach millions of students worldwide.
Launched at MIT in 2001, OER became a global movement in 2007 when thousands of educators around the globe endorsed the Cape Town Declaration on Open Educational Resources. Another major milestone came in 2011, when Secretary of Education Arne Duncan and then-Secretary of Labor Hilda Solis unveiled the four-year, $2B Trade Adjustment Assistance Community College and Career Training Grant Program (TAACCCT). It was the first Federal program to leverage OER to support the development of a new generation of affordable, post-secondary educational programs that can be completed in two years or less to prepare students for careers in emerging and expanding industries….
Building on this record of success, OSTP and the U.S. Agency for International Development (USAID) are exploring an effort to inspire and empower university students through multidisciplinary OER focused on one of the USAID Grand Challenges, such as securing clean water, saving lives at birth, or improving green agriculture. This effort promises to be a stepping stone towards leveraging OER to help solve other grand challenges such as the NAE Grand Challenges in Engineering or Grand Challenges in Global Health….
This is great progress, but there is more work to do. We look forward to keeping the community updated right here. To see the winning videos from the U.S. Department of Education’s “Why Open Education Matters” Video Contest, click here.”

Making digital government better


A McKinsey Insights interview with Mike Bracken (UK): “When it comes to the digital world, governments have traditionally placed political, policy, and system needs ahead of the people who require services. Mike Bracken, the executive director of the United Kingdom’s Government Digital Service, is attempting to reverse that paradigm by empowering citizens—and, in the process, improving the delivery of services and saving money. In this video interview, Bracken discusses the philosophy behind the digital transformation of public services in the United Kingdom, some early successes, and next steps.

Interview transcript

Putting users first

Government around the world is pretty good at thinking about its own needs. Government puts its own needs first—they often put their political needs followed by the policy needs. The actual machine of government comes second. The third need then generally becomes the system needs, so the IT or whatever system’s driving it. And then out of those four, the user comes a poor fourth, really.
And we’ve inverted that. So let me give you an example. At the moment, if you want to know about tax in the UK, you’re probably going to know that Her Majesty’s Revenue and Customs is a part of government that deals with tax. You’re probably going to know that because you pay tax, right?
But why should you have to know that? Because, really, it’s OK to know that, for that one—but we’ve got 300 agencies, more than that; we’ve got 24 parts of government. If you want to know about, say, gangs, is that a health issue or is that a local issue? Is it a police issue? Is it a social issue, an education issue? Well, actually it’s all of those issues. But you shouldn’t have to know how government is constructed to know what each bit of government is doing about an esoteric issue like gangs.
What we’ve done with gov.uk, and what we’re doing with our transactions, is to make them consistent at the point of user need. Because there’s only one real user need of government digitally, and that’s to recognize that at the point of need, users need to deal with the government. Not a department name or an agency name, they’re dealing with the government. And when they do that, they need it to be consistent, and they need it to be easy to find. Ninety-five percent of our journeys digitally start with a search.
And so our elegantly constructed and expensively constructed front doors are often completely routed around. We’ve got to recognize that and construct our digital services based on user needs….”

Doctors’ #1 Source for Healthcare Information: Wikipedia


Article By Julie Beck in the Atlantic: “In spite of all of our teachers’ and bosses’ warnings that it’s not a trustworthy source of information, we all rely on Wikipedia. Not only when we can’t remember the name of that guy from that movie, which is a fairly low-risk use, but also when we find a weird rash or are just feeling a little off and we’re not sure why. One in three Americans have tried to diagnose a medical condition with the help of the Internet, and a new report says doctors are just as drawn to Wikipedia’s flickering flame.
According to the IMS Institute for Healthcare Informatics’ “Engaging patients through social media” report, Wikipedia is the top source of healthcare information for both doctors and patients. Fifty percent of physicians use Wikipedia for information, especially for specific conditions.
Generally, more people turn to Wikipedia for rare diseases than common conditions. The top five conditions looked up on the site over the past year were: tuberculosis, Crohn’s disease, pneumonia, multiple sclerosis, and diabetes. Patients tend to use Wikipedia as a “starting point for their online self education,” the report says. It also found a “direct correlation between Wikipedia page visits and prescription volumes.”
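The report does not say how that correlation was measured, but the underlying arithmetic is standard. A toy illustration in Python, using invented numbers rather than the report’s data:

```python
# Purely illustrative: what a "direct correlation between Wikipedia page
# visits and prescription volumes" means, computed on invented toy numbers
# (NOT data from the IMS report).
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical monthly figures for one condition's article and drug class:
page_visits = [12000, 15000, 14000, 18000, 21000, 20000]
prescriptions = [830, 910, 900, 1000, 1150, 1120]
print(f"r = {pearson(page_visits, prescriptions):.2f}")  # close to +1
```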
We already knew that more and more people were turning to the Internet in general and Wikipedia specifically for health information, and we could hardly stop them if we tried.

Being crowd-sourced, the information may well be neutral, but is it accurate? Knowing that doctors, too, are using these resources raises old concerns about the quality of information that comes up when you type your condition into Google.
But doctors are aware of this, and an effort called WikiProject Medicine is dedicated to improving the quality of medical information on Wikipedia. The IMS report looked at changes to five articles—diabetes, multiple sclerosis, rheumatoid arthritis, breast cancer and prostate cancer—and found them to be in a state of constant flux. Those articles were changed, on average, between 16 and 46 times a month. But one of the major contributors to those articles was Dr. James Heilman, the founder of WikiProject Medicine’s Medicine Translation task force.
“This task force’s goal is getting 200 medical articles to a good or featured status (only 0.1 percent of articles on Wikipedia have this status), simplifying the English and then translating this content to as many languages as possible,” the report says. “The aim is to improve the quality of the most read medical articles on Wikipedia and ensure that this quality will reach non-English speakers.”…”

Big Data, Big New Businesses


Nigel Shadbolt and Michael Chui: “Many people have long believed that if government and the private sector agreed to share their data more freely, and allow it to be processed using the right analytics, previously unimaginable solutions to countless social, economic, and commercial problems would emerge. They may have no idea how right they are.

Even the most vocal proponents of open data appear to have underestimated how many profitable ideas and businesses stand to be created. More than 40 governments worldwide have committed to opening up their electronic data – including weather records, crime statistics, transport information, and much more – to businesses, consumers, and the general public. The McKinsey Global Institute estimates that the annual value of open data in education, transportation, consumer products, electricity, oil and gas, health care, and consumer finance could reach $3 trillion.

These benefits come in the form of new and better goods and services, as well as efficiency savings for businesses, consumers, and citizens. The range is vast. For example, drawing on data from various government agencies, the Climate Corporation (recently bought for $1 billion) has taken 30 years of weather data, 60 years of data on crop yields, and 14 terabytes of information on soil types to create customized insurance products.

Similarly, real-time traffic and transit information can be accessed on smartphone apps to inform users when the next bus is coming or how to avoid traffic congestion. And, by analyzing online comments about their products, manufacturers can identify which features consumers are most willing to pay for, and develop their business and investment strategies accordingly.

Opportunities are everywhere. A raft of open-data start-ups are now being incubated at the London-based Open Data Institute (ODI), which focuses on improving our understanding of corporate ownership, health-care delivery, energy, finance, transport, and many other areas of public interest.

Consumers are the main beneficiaries, especially in the household-goods market. Consumers making better-informed buying decisions across sectors could capture an estimated $1.1 trillion in value annually. Third-party data aggregators are already allowing customers to compare prices across online and brick-and-mortar shops. Many also permit customers to compare quality ratings, safety data (drawn, for example, from official injury reports), information about the provenance of food, and producers’ environmental and labor practices.

Consider the book industry. Bookstores once regarded their inventory as a trade secret. Customers, competitors, and even suppliers seldom knew what stock bookstores held. Nowadays, by contrast, bookstores not only report what stock they carry but also when customers’ orders will arrive. If they did not, they would be excluded from the product-aggregation sites that have come to determine so many buying decisions.

The health-care sector is a prime target for achieving new efficiencies. By sharing the treatment data of a large patient population, for example, care providers can better identify practices that could save $180 billion annually.

The Open Data Institute-backed start-up Mastodon C uses open data on doctors’ prescriptions to differentiate among expensive patent medicines and cheaper “off-patent” varieties; when applied to just one class of drug, that could save around $400 million in one year for the British National Health Service. Meanwhile, open data on acquired infections in British hospitals has led to the publication of hospital-performance tables, a major factor in the 85% drop in reported infections.

There are also opportunities to prevent lifestyle-related diseases and improve treatment by enabling patients to compare their own data with aggregated data on similar patients. This has been shown to motivate patients to improve their diet, exercise more often, and take their medicines regularly. Similarly, letting people compare their energy use with that of their peers could prompt them to save hundreds of billions of dollars in electricity costs each year, to say nothing of reducing carbon emissions.

Such benchmarking is even more valuable for businesses seeking to improve their operational efficiency. The oil and gas industry, for example, could save $450 billion annually by sharing anonymized and aggregated data on the management of upstream and downstream facilities.

Finally, the move toward open data serves a variety of socially desirable ends, ranging from the reuse of publicly funded research to support work on poverty, inclusion, or discrimination, to the disclosure by corporations such as Nike of their supply-chain data and environmental impact.

There are, of course, challenges arising from the proliferation and systematic use of open data. Companies fear for their intellectual property; ordinary citizens worry about how their private information might be used and abused. Last year, Telefónica, the world’s fifth-largest mobile-network provider, tried to allay such fears by launching a digital confidence program to reassure customers that innovations in transparency would be implemented responsibly and without compromising users’ personal information.

The sensitive handling of these issues will be essential if we are to reap the potential $3 trillion in value that usage of open data could deliver each year. Consumers, policymakers, and companies must work together, not just to agree on common standards of analysis, but also to set the ground rules for the protection of privacy and property.”

Are bots taking over Wikipedia?


Kurzweil News: “As crowdsourced Wikipedia has grown too large — with more than 30 million articles in 287 languages — to be entirely edited and managed by volunteers, 12 Wikipedia bots have emerged to pick up the slack.

The bots use Wikidata — a free knowledge base that can be read and edited by both humans and bots — to exchange information between entries and between the 287 languages.

Which raises an interesting question: what portion of Wikipedia edits are generated by humans versus bots?

To find out (and keep track of other bot activity), Thomas Steiner of Google Germany has created an open-source application (and API): Wikipedia and Wikidata Realtime Edit Stats, described in an arXiv paper.
The percentages of bot vs. human edits shown in the application are constantly changing. A KurzweilAI snapshot on Feb. 20 at 5:19 AM EST showed an astonishing 42% of Wikipedia edits being made by bots. (The application lists the 12 bots.)


Anonymous vs. logged-in humans (credit: Thomas Steiner)
The percentages also vary by language. Only 5% of English edits were by bots, but for Serbian pages, in which few Wikipedians apparently participate, 96% of edits were by bots.

The application also tracks what percentage of edits are by anonymous users. Globally, it was 25 percent in our snapshot and a surprising 34 percent for English — raising interesting questions about corporate and other interests covertly manipulating Wikipedia information.
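Readers who want to check such numbers themselves can sample the same raw material through the public MediaWiki API. The sketch below is not Steiner’s application (which streams edits in real time); it simply pulls a batch of recent changes and tallies bot, anonymous, and logged-in edits:

```python
# Sample recent Wikipedia edits and compute a rough bot/anon/human breakdown.
# Uses the standard MediaWiki recentchanges API; not Steiner's real-time app.
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def edit_breakdown(lang="en", sample_size=500):
    params = urlencode({
        "action": "query", "list": "recentchanges", "rctype": "edit",
        "rcprop": "flags|user", "rclimit": sample_size, "format": "json",
    })
    url = f"https://{lang}.wikipedia.org/w/api.php?{params}"
    req = Request(url, headers={"User-Agent": "edit-stats-sketch/0.1"})
    with urlopen(req) as resp:
        changes = json.load(resp)["query"]["recentchanges"]
    n = len(changes) or 1
    bots = sum(1 for c in changes if "bot" in c)    # edits flagged as bot
    anons = sum(1 for c in changes if "anon" in c)  # edits from IP addresses
    return {"bot": bots / n, "anonymous": anons / n,
            "logged-in human": (n - bots - anons) / n}

for lang in ("en", "sr"):  # English vs. Serbian, as discussed above
    print(lang, {k: f"{v:.0%}" for k, v in edit_breakdown(lang).items()})
```

Any snapshot from this sketch will differ from the article’s figures, since the mix changes minute to minute.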