All European scientific articles to be freely accessible by 2020


EU Presidency: “All scientific articles in Europe must be freely accessible as of 2020. EU member states want to achieve optimal reuse of research data. They are also looking into a European visa for foreign start-up founders.

And, according to the new Innovation Principle, new European legislation must take account of its impact on innovation. These are the main outcomes of the meeting of the Competitiveness Council in Brussels on 27 May.

Sharing knowledge freely

Under the presidency of Netherlands State Secretary for Education, Culture and Science Sander Dekker, the EU ministers responsible for research and innovation decided unanimously to take these significant steps. Mr Dekker is pleased that these ambitions have been translated into clear agreements to maximise the impact of research. ‘Research and innovation generate economic growth and more jobs and provide solutions to societal challenges,’ the state secretary said. ‘And that means a stronger Europe. To achieve that, Europe must be as attractive as possible for researchers and start-ups to locate here and for companies to invest. That calls for knowledge to be freely shared. The time for talking about open access is now past. With these agreements, we are going to achieve it in practice.’

Open access

Open access means that scientific publications on the results of research supported by public and public-private funds must be freely accessible to everyone. That is not yet the case. The results of publicly funded research are currently not accessible to people outside universities and knowledge institutions. As a result, teachers, doctors and entrepreneurs do not have access to the latest scientific insights that are so relevant to their work, and universities have to take out expensive subscriptions with publishers to gain access to publications.

Reusing research data

From 2020, all scientific publications on the results of publicly funded research must be freely available. It must also be possible to reuse research data optimally. To achieve that, the data must be made accessible, unless there are well-founded reasons for not doing so, for example intellectual property rights or security or privacy issues….(More)”

Data Science Ethical Framework


UK Cabinet Office: “Data science provides huge opportunities for government. Harnessing new forms of data with increasingly powerful computer techniques increases operational efficiency, improves public services and provides insight for better policymaking.

We want people in government to feel confident using data science techniques to innovate. This guidance is intended to bring together relevant laws and best practice, to give teams robust principles to work with.

The publication is a first version that we are asking the public, experts, civil servants and other interested parties to help us perfect and iterate. This will include taking on evidence from a public dialogue on data science ethics. It was published on 19 May by the Minister for Cabinet Office, Matt Hancock. If you would like to help us iterate the framework, find out how to get in touch at the end of this blog. See Data Science Ethical Framework (PDF, 8.28MB, 17 pages)….(More)”

How innovation agencies work


Kirsten Bound and Alex Glennie at NESTA: “This report considers how governments can get better at designing and running innovation agencies, drawing on examples from around the world.

Key findings

  • There is no single model for a ‘successful’ innovation agency.  Although there is much to learn from other countries about best practice in institution and programme design, attempts to directly replicate organisational models that operate in very different contexts are likely to fail.
  • There are a variety of roles that innovation agencies can play. From our case studies, we have identified a number of different approaches that an innovation agency might take, depending on the specific nature of a country’s innovation system, the priorities of policymakers, and available resources.
  • Innovation agencies need a clear mission, but an ability to adapt and experiment. Working towards many different objectives at once or constantly changing strategic direction can make it difficult for an innovation agency to deliver impactful innovation support for businesses. However, a long-term vision of what success looks like should not prevent innovation agencies from experimenting with new approaches, and responding to new needs and opportunities.
  • Innovation agencies should be assessed both quantitatively and qualitatively. Evaluations tend to focus on the financial return they generate, but our research suggests that more effort needs to be put into assessing some of the more qualitative aspects of their role, including the quality of their management, their ability to take (and learn from) strategic risks, and the skill with which they design and implement their programmes.
  • Governments should be both ambitious and realistic about what they expect an innovation agency to achieve. An innovation agency’s role will inevitably be affected by shifts in government priorities. Understanding how innovation agencies shape (and are shaped by) the broader political environment around innovation is a necessary part of ensuring that they are able to deliver on their potential.

Governments around the world are looking for ways to nurture innovative businesses, as a way of solving some of their most urgent economic and societal challenges. Many seek to do this by setting up national innovation agencies: institutions that provide financial and other support to catalyse or drive private sector innovation. Yet we still know relatively little about the range of approaches that these agencies take, what programmes and instruments are likely to work best in a given context, and how to assess their long-term impact.

We have been investigating these questions by studying a diverse selection of innovation agencies in ten different countries. Our aim has been to improve understanding of the range of existing institutional models and to learn more about their design, evolution and effectiveness. In doing so, we have developed a broad framework to help policymakers think about the set of choices and options they face in the design and management of an innovation agency….(More)”

Smart crowds in smart cities: real life, city scale deployments of a smartphone based participatory crowd management platform


Tobias Franke, Paul Lukowicz and Ulf Blanke at the Journal of Internet Services and Applications: “Pedestrian crowds are an integral part of cities. Planning for crowds, monitoring crowds and managing crowds are fundamental tasks in city management. As a consequence, crowd management is a sprawling R&D area (see related work) that includes theoretical models, simulation tools, as well as various support systems. There has also been significant interest in using computer vision techniques to monitor crowds. However, overall, the topic of crowd management has been given only little attention within the smart city domain. In this paper we report on a platform for smart, city-wide crowd management based on a participatory mobile phone sensing platform. Originally, the apps based on this platform were conceived as a technology validation tool for crowd-based sensing within a basic research project. However, the initial deployments at the Notte Bianca Festival in Malta and at the Lord Mayor’s Show in London generated so much interest within the civil protection community that the platform has gradually evolved into a full-blown participatory crowd management system and is now in the process of being commercialized through a startup company. To date it has been deployed at 14 events in three European countries (UK, Netherlands, Switzerland) and used by well over 100,000 people….

Obtaining knowledge about the current size and density of a crowd is one of the central aspects of crowd monitoring. For the last decades, automatic crowd monitoring in urban areas has mainly been performed by means of image processing. One use case for such video-based applications is a CCTV camera-based system that automatically alerts the staff of subway stations when the waiting platform is congested. However, one of the downsides of video-based crowd monitoring is the fact that video cameras tend to be considered as privacy invading. This has motivated privacy-preserving approaches to video-based crowd monitoring in which crowd sizes are estimated without people models or object tracking.
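As a rough illustration of the privacy-preserving, model-free monitoring described above, the sketch below estimates crowd density from a video feed without detecting or tracking any individual: it measures the fraction of pixels that a background subtractor flags as foreground. The video source and the congestion threshold are assumptions; a real deployment would calibrate the measure against ground-truth counts.

```python
import cv2  # OpenCV: pip install opencv-python

# Background subtractor: learns the empty scene, flags moving pixels.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)

def crowd_density(frame) -> float:
    """Fraction of foreground pixels: a crude, privacy-preserving proxy
    for crowd density (no person models, no object tracking)."""
    mask = subtractor.apply(frame)
    mask = cv2.medianBlur(mask, 5)          # suppress speckle noise
    return cv2.countNonZero(mask) / mask.size

cap = cv2.VideoCapture("platform_cam.mp4")  # hypothetical CCTV recording
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if crowd_density(frame) > 0.4:          # assumed congestion threshold
        print("Platform congested - alert staff")
cap.release()
```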

With respect to the mitigation of catastrophes induced by panicking crowds (e.g. during an evacuation), city planners and architects increasingly rely on tools simulating crowd behaviors in order to optimize infrastructures. Murakami et al. present an agent-based simulation for evacuation scenarios. Shendarkar et al. present work that is also based on BDI (belief, desire, intention) agents – those agents, however, are trained in a virtual reality environment, thereby giving greater flexibility to the modeling. Kluepfel et al., on the other hand, use a cellular automaton model for the simulation of crowd movement and egress behavior.
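To make the cellular-automaton idea concrete, here is a minimal toy egress model in the spirit of (but not reproducing) the Kluepfel et al. approach: pedestrians occupy grid cells and, at each time step, move to a free neighbouring cell that is closer to the exit along a static floor field. Grid size, crowd size and update order are all assumptions of this sketch.

```python
import random

W, H = 20, 10
EXIT = (0, H // 2)                        # exit cell on the left wall
grid = {(x, y) for x in range(W) for y in range(H)}

def dist(cell):
    """Static floor field: Manhattan distance to the exit."""
    x, y = cell
    return abs(x - EXIT[0]) + abs(y - EXIT[1])

# Scatter 30 pedestrians over distinct non-exit cells.
peds = set(random.sample(sorted(grid - {EXIT}), 30))

steps = 0
while peds:
    steps += 1
    for p in sorted(peds, key=dist):      # pedestrians nearest the exit move first
        x, y = p
        nbrs = [(x + dx, y + dy) for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1))]
        closer = [n for n in nbrs if n in grid and n not in peds and dist(n) < dist(p)]
        if closer:
            peds.remove(p)
            target = min(closer, key=dist)
            if target != EXIT:            # reaching the exit removes the pedestrian
                peds.add(target)

print(f"room evacuated in {steps} time steps")
```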

With smartphones becoming everyday items, the concept of crowdsourcing information from users of mobile applications has significantly gained traction. Roitman et al. present a smart city system where the crowd can send eyewitness reports, thereby creating deeper insights for city officials. Szabo et al. take this approach one step further and employ the sensors built into smartphones to gather data for city services such as live transit information. Ghose et al. utilize the same principle for gathering information on road conditions. Pan et al. use a combination of crowdsourcing and social media analysis to identify traffic anomalies….(More)”.

Could a tweet or a text increase college enrollment or student achievement?


From the Conversation: “Can a few text messages, a timely email or a letter increase college enrollment and student achievement? Such “nudges,” designed carefully using behavioral economics, can be effective.

But when do they work – and when not?

Barriers to success

Consider students who have just graduated high school intending to enroll in college. Even among those who have been accepted to college, 15 percent of low-income students do not enroll by the next fall. For the large share who intend to enroll in community colleges, this number can be as high as 40 percent….

Can a few text messages or a timely email overcome these barriers? My research uses behavioral economics to design low-cost, scalable interventions aimed at improving education outcomes. Behavioral economics suggests several important features to make a nudge effective: simplify complex information, make tasks easier to complete and ensure that support is timely.

So, what makes for an effective nudge?

Improving college enrollment

In 2012, researchers Ben Castleman and Lindsay Page sent 10 text messages to nearly 2,000 college-intending students the summer after high school graduation. These messages provided just-in-time reminders on key financial aid, housing and enrollment deadlines from early July to mid-August.

Instead of set meetings with counselors, students could reply to messages and receive on-demand support from college guidance counselors to complete key tasks.

In another intervention – the Expanding College Opportunities Project (ECO) – researchers Caroline Hoxby and Sarah Turner worked to help high-achieving, low-income students enroll in colleges on par with their achievement. The intervention reached students as a packet in the mail.

The mailer simplified information by providing a list of colleges tailored to each student’s location along with information about net costs, graduation rates, and application deadlines. Moreover, the mailer included easy-to-claim application fee waivers. All these features reduced both the complexity and cost in applying to a wider range of colleges.

In both cases, researchers found that the interventions significantly improved college outcomes. College enrollment went up by 15 percent in the intervention designed to reduce summer melt for community college students. The ECO project increased the likelihood of admission to a selective college by 78 percent.

When there is no impact

While these interventions are promising, there are important caveats.

For instance, our preliminary findings from ongoing research show that information alone may not be enough. We sent emails and letters to more than one hundred thousand college applicants about financial aid and education-related tax benefits. However, we didn’t provide any additional support to help families through the process of claiming these benefits.

In other words, we didn’t provide any support to complete the tasks – no fee waivers, no connection to guidance counselors – just the email and the letter. Without this support to answer questions or help families complete forms to claim the benefits, we found no impact, even when students opened the emails.

More generally, “nudges” often lead to modest impacts and should be considered only a part of the solution. But there’s a dearth of low-cost, scalable interventions in education, and behavioral economics can help.

Identifying the crucial decision points – when applications are due, forms need to be filled out or school choices are made – and supplying the just-in-time support to families is key….(More).”

Moneyballing Criminal Justice


Anne Milgram in the Atlantic: “…One area in which the potential of data analysis is still not adequately realized, however, is criminal justice. This is somewhat surprising given the success of CompStat, a law enforcement management tool that uses data to figure out how police resources can be used to reduce crime and hold law enforcement officials accountable for results. CompStat is widely credited with contributing to New York City’s dramatic reduction in serious crime over the past two decades. Yet data-driven decision-making has not expanded to the whole of the criminal justice system.

But it could. And, in this respect, the front end of the system — the part of the process that runs from arrest through sentencing — is particularly important. At this stage, police, prosecutors, defenders, and courts make key choices about how to deal with offenders — choices that, taken together, have an enormous impact on crime. Yet most jurisdictions do not collect or analyze the data necessary to know whether these decisions are being made in a way that accomplishes the most important goals of the criminal justice system: increased public safety, decreased recidivism, reduced cost, and the fair, efficient administration of justice.

Even in jurisdictions where good data exists, a lack of technology is often an obstacle to using it effectively. Police, jails, courts, district attorneys, and public defenders each keep separate information systems, the data from which is almost never pulled together and analyzed in a way that could answer the questions that matter most: Who is in our criminal justice system? What crimes have been charged? What risks do individual offenders pose? And which option would best protect the public and make the best use of our limited resources?
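As a hedged sketch of what pulling that data together could look like, the snippet below joins hypothetical extracts from three siloed systems on a shared person identifier and asks a few cross-system questions. The file names, column names and the existence of a common identifier are all assumptions; in practice, record linkage across agencies is itself a hard problem.

```python
import pandas as pd

# Hypothetical extracts from three siloed systems (schemas assumed).
arrests = pd.read_csv("police_arrests.csv")   # person_id, arrest_date, charge
jail    = pd.read_csv("jail_bookings.csv")    # person_id, booked, released
courts  = pd.read_csv("court_cases.csv")      # person_id, case_id, disposition

# One join answers: who is in our system, and what have they been charged with?
system = (arrests
          .merge(jail, on="person_id", how="left")
          .merge(courts, on="person_id", how="left"))

# Cross-system questions that separate silos make hard to ask:
in_custody = system[system["booked"].notna() & system["released"].isna()]
print(in_custody["charge"].value_counts().head(10))   # most common charges in custody
print("people appearing in all three systems:",
      system.dropna(subset=["booked", "case_id"])["person_id"].nunique())
```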

While debates about prison over-crowding, three-strikes laws, and mandatory minimum sentences have captured public attention, the importance of what happens between arrest and sentencing has gone largely unnoticed. Even though I ran the criminal justice system in New Jersey, one of the largest states in the country, I had not realized the magnitude of the pretrial issues until I was tasked by the Laura and John Arnold Foundation with figuring out which aspects of criminal justice had the most need and presented the greatest opportunity for reform….

Technology could help us leverage data to identify offenders who will pose unacceptable risks to society if they are not behind bars and distinguish them from those defendants who will have lower recidivism rates if they are supervised in the community or given alternatives to incarceration before trial. Likewise, it could help us figure out which terms of imprisonment, alternatives to incarceration, and other interventions work best – and for whom. And the list does not end there.
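As a sketch of the kind of statistical tool this implies (not any specific deployed instrument), the snippet below trains a logistic-regression risk score on hypothetical historical pretrial outcomes. The file name and features are invented, and a real tool would require careful validation, calibration and bias auditing before it informed any decision about a defendant.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical history: one row per released defendant; 'reoffended'
# records whether they were rearrested before trial.
df = pd.read_csv("pretrial_history.csv")
X = df[["age", "prior_arrests", "prior_failures_to_appear", "charge_severity"]]
y = df["reoffended"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Held-out discrimination; a deployed tool would also need calibration
# checks and bias audits across demographic groups.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Score a new defendant (feature values hypothetical).
new = pd.DataFrame([{"age": 31, "prior_arrests": 2,
                     "prior_failures_to_appear": 0, "charge_severity": 3}])
print("estimated pretrial risk:", model.predict_proba(new)[0, 1])
```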

The truth is our criminal justice system already makes these decisions every day. But it makes them without knowing whether they’re the right ones. That needs to change. If data is powerful enough to transform baseball, health care, and education, it can do the same for criminal justice….(More)”


Twelve principles for open innovation 2.0


Martin Curley in Nature: “A new mode of innovation is emerging that blurs the lines between universities, industry, governments and communities. It exploits disruptive technologies — such as cloud computing, the Internet of Things and big data — to solve societal challenges sustainably and profitably, and more quickly and ably than before. It is called open innovation 2.0 (ref. 1).

Such innovations are being tested in ‘living labs’ in hundreds of cities. In Dublin, for example, the city council has partnered with my company, the technology firm Intel (of which I am a vice-president), to install a pilot network of sensors to improve flood management by measuring local rainfall and river levels, and detecting blocked drains. Eindhoven in the Netherlands is working with electronics firm Philips and others to develop intelligent street lighting. Communications-technology firm Ericsson, the KTH Royal Institute of Technology, IBM and others are collaborating to test self-driving buses in Kista, Sweden.

Yet many institutions and companies remain unaware of this radical shift. They often confuse invention and innovation. Invention is the creation of a technology or method. Innovation concerns the use of that technology or method to create value. The agile approaches needed for open innovation 2.0 conflict with the ‘command and control’ organizations of the industrial age (see ‘How innovation modes have evolved’). Institutional or societal cultures can inhibit user and citizen involvement. Intellectual-property (IP) models may inhibit collaboration. Government funders can stifle the emergence of ideas by requiring that detailed descriptions of proposed work are specified before research can begin. Measures of success, such as citations, discount innovation and impact. Policymaking lags behind the marketplace….

Keys to collaborative innovation

  1. Purpose. Efforts and intellects aligned through commitment rather than compliance deliver an impact greater than the sum of their parts. A great example is former US President John F. Kennedy’s vision of putting a man on the Moon. Articulating a shared value that can be created is important. A win–win scenario is more sustainable than a win–lose outcome.
  2. Partner. The ‘quadruple helix’ of government, industry, academia and citizens joining forces aligns goals, amplifies resources, attenuates risk and accelerates progress. A collaboration between Intel, University College London, Imperial College London and Innovate UK’s Future Cities Catapult is working in the Intel Collaborative Research Institute to improve people’s well-being in cities, for example by enabling reductions in air pollution.
  3. Platform. An environment for collaboration is a basic requirement. Platforms should be integrated and modular, allowing a plug-and-play approach. They must be open to ensure low barriers to use, catalysing the evolution of a community. Challenges in security, standards, trust and privacy need to be addressed. For example, the Open Connectivity Foundation is securing interoperability for the Internet of Things.
  4. Possibilities. Returns may not come from a product but from the business model that enabled it, a better process or a new user experience. Strategic tools are available, such as industrial designer Larry Keeley’s breakdown of innovations into ten types in four categories: finance, process, offerings and delivery.
  5. Plan. Adoption and scale should be the focus of innovation efforts, not product creation. Around 20% of value is created when an innovation is established; more than 80% comes when it is widely adopted (ref. 7). Focus on the ‘four Us’: utility (value to the user); usability; user experience; and ubiquity (designing in network effects).
  6. Pyramid. Enable users to drive innovation. They inspired two-thirds of innovations in semiconductors and printed circuit boards, for example. Lego Ideas encourages children and others to submit product proposals — submitters must get 10,000 supporters for their idea to be reviewed. Successful inventors get 1% of royalties.
  7. Problem. Most innovations come from a stated need. Ethnographic research with users, customers or the environment can identify problems and support brainstorming of solutions. Create a road map to ensure the shortest path to a solution.
  8. Prototype. Solutions need to be tested and improved through rapid experimentation with users and citizens. Prototyping shows how applicable a solution is, reduces the risks of failures and can reveal pain points. ‘Hackathons’, where developers come together to rapidly try things, are increasingly common.
  9. Pilot. Projects need to be implemented in the real world on small scales first. The Intel Collaborative Research Institute runs research projects in London’s parks, neighbourhoods and schools. Barcelona’s Laboratori — which involves the quadruple helix — is pioneering open ‘living lab’ methods in the city to boost culture, knowledge, creativity and innovation.
  10. Product. Prototypes need to be converted into viable commercial products or services through scaling up and new infrastructure globally. Cloud computing allows even small start-ups to scale with volume, velocity and resilience.
  11. Product service systems. Organizations need to move from just delivering products to also delivering related services that improve sustainability as well as profitability. Rolls-Royce sells ‘power by the hour’ — hours of flight time rather than jet engines — enabled by advanced telemetry. The ultimate goal of open innovation 2.0 is a circular or performance economy, focused on services and reuse rather than consumption and waste.
  12. Process. Innovation is a team sport. Organizations, ecosystems and communities should measure, manage and improve their innovation processes to deliver results that are predictable, probable and profitable. Agile methods supported by automation shorten the time from idea to implementation….(More)”

Policy Compass


“The Policy Compass helps you to utilise, interact with, visualise and interpret the growing amount of publicly available Open Data. By offering easy-to-use, web-based data processing tools, it opens up the potential of Open Data for people with limited knowledge about technology. By doing so, it enables journalists, scientists and citizens to easily execute transparent and accountable policy evaluation or analysis.

The website already offers selected Open Data from a number of reliable international sources, but it is also possible to upload your own data for processing. Once you have found the data you’re looking for, you can visualise it or create a causal model. The results of your processing activities are then easily shared for discussion or use in your work….(More)”

We know where you live


MIT News Office: “From location data alone, even low-tech snoopers can identify Twitter users’ homes, workplaces…. Researchers at MIT and Oxford University have shown that the location stamps on just a handful of Twitter posts — as few as eight over the course of a single day — can be enough to disclose the addresses of the poster’s home and workplace to a relatively low-tech snooper.

The tweets themselves might be otherwise innocuous — links to funny videos, say, or comments on the news. The location information comes from geographic coordinates automatically associated with the tweets.

Twitter’s location-reporting service is off by default, but many Twitter users choose to activate it. The new study is part of a more general project at MIT’s Internet Policy Research Initiative to help raise awareness about just how much privacy people may be giving up when they use social media.

The researchers describe their research in a paper presented last week at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems, where it received an honorable mention in the best-paper competition, a distinction reserved for only 4 percent of papers accepted to the conference.

“Many people have this idea that only machine-learning techniques can discover interesting patterns in location data,” says Ilaria Liccardi, a research scientist at MIT’s Internet Policy Research Initiative and first author on the paper. “And they feel secure that not everyone has the technical knowledge to do that. With this study, what we wanted to show is that when you send location data as a secondary piece of information, it is extremely simple for people with very little technical knowledge to find out where you work or live.”

Conclusions from clustering

In their study, Liccardi and her colleagues — Alfie Abdul-Rahman and Min Chen of Oxford’s e-Research Centre in the U.K. — used real tweets from Twitter users in the Boston area. The users consented to the use of their data, and they also confirmed their home and work addresses, their commuting routes, and the locations of various leisure destinations from which they had tweeted.

The time and location data associated with the tweets were then presented to a group of 45 study participants, who were asked to try to deduce whether the tweets had originated at the Twitter users’ homes, their workplaces, leisure destinations, or locations along their commutes. The participants were not recruited on the basis of any particular expertise in urban studies or the social sciences; they just drew what conclusions they could from location clustering….
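The study’s point is that no machine learning is needed; the same inference takes only a few lines of code, or a pencil and a map. As an illustration (with invented coordinates), the sketch below snaps each geotagged post to a coarse grid, then labels the most frequent night-time cell ‘home’ and the most frequent working-hours cell ‘work’. The grid precision and hour windows are assumptions.

```python
from collections import Counter

# (latitude, longitude, hour-of-day) triples from geotagged posts; values invented.
posts = [
    (42.3601, -71.0589, 23), (42.3602, -71.0587, 7), (42.3599, -71.0590, 22),
    (42.3736, -71.1097, 10), (42.3738, -71.1099, 14), (42.3737, -71.1096, 15),
    (42.3603, -71.0588, 0),  (42.3734, -71.1098, 11),
]

def cell(lat, lon, precision=3):
    """Snap coordinates to a coarse grid (3 decimals is roughly 100 m)."""
    return (round(lat, precision), round(lon, precision))

night = Counter(cell(lat, lon) for lat, lon, h in posts if h >= 21 or h <= 8)
day   = Counter(cell(lat, lon) for lat, lon, h in posts if 9 <= h <= 17)

print("likely home:", night.most_common(1)[0][0])   # most frequent night-time cell
print("likely work:", day.most_common(1)[0][0])     # most frequent daytime cell
```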

Predictably, participants fared better with map-based representations, correctly identifying Twitter users’ homes roughly 65 percent of the time and their workplaces closer to 70 percent of the time. Even the tabular representation was informative, however, with accuracy rates of just under 50 percent for homes and a surprisingly high 70 percent for workplaces….(More; Full paper)”

Robot Regulators Could Eliminate Human Error


From the San Francisco Chronicle and Regblog: “Long a fixture of science fiction, artificial intelligence is now part of our daily lives, even if we do not realize it. Through the use of sophisticated machine learning algorithms, for example, computers now work to filter out spam messages automatically from our email. Algorithms also identify us by our photos on Facebook, match us with new friends on online dating sites, and suggest movies to watch on Netflix.
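To ground the first of those examples: a spam filter of the kind described can be a few lines of off-the-shelf machine learning. This minimal sketch trains a naive Bayes classifier on a tiny invented message set; production filters apply the same basic recipe at vastly larger scale.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training set; real filters learn from millions of messages.
messages = ["win a free prize now", "meeting at 3pm tomorrow",
            "claim your free money", "project status update attached",
            "free money prize claim", "lunch tomorrow?"]
labels = ["spam", "ham", "spam", "ham", "spam", "ham"]

# Bag-of-words features plus naive Bayes: the classic spam-filter recipe.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["free prize money now"]))     # -> ['spam']
print(model.predict(["status meeting tomorrow"]))  # -> ['ham']
```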

These uses of artificial intelligence hardly seem very troublesome. But should we worry if government agencies start to use machine learning?

Complaints abound even today about the uncaring “bureaucratic machinery” of government. Yet seeing how machine learning is starting to replace jobs in the private sector, we can easily fathom a literal machinery of government in which decisions made by human public servants increasingly become made by machines.

Technologists warn of an impending “singularity,” when artificial intelligence surpasses human intelligence. Entrepreneur Elon Musk cautions that artificial intelligence poses one of our “biggest existential threats.” Renowned physicist Stephen Hawking eerily forecasts that artificial intelligence might even “spell the end of the human race.”

Are we ready for a world of regulation by robot? Such a world is closer than we think—and it could actually be worth welcoming.

Already government agencies rely on machine learning for a variety of routine functions. The Postal Service uses learning algorithms to sort mail, and cities such as Los Angeles use them to time their traffic lights. But while uses like these seem relatively benign, consider that machine learning could also be used to make more consequential decisions. Disability claims might one day be processed automatically with the aid of artificial intelligence. Licenses could be awarded to airplane pilots based on what kinds of safety risks complex algorithms predict each applicant poses.

Learning algorithms are already being explored by the Environmental Protection Agency to help make regulatory decisions about which toxic chemicals to control. Faced with tens of thousands of new chemicals that could potentially be harmful to human health, federal regulators have supported the development of a program to prioritize which of the many chemicals in production should undergo more in-depth testing. By some estimates, machine learning could save the EPA up to $980,000 per toxic chemical positively identified.

It’s not hard then to imagine a day in which even more regulatory decisions are automated. Researchers have shown that machine learning can lead to better outcomes when determining whether parolees ought to be released or domestic violence orders should be imposed. Could the imposition of regulatory fines one day be determined by a computer instead of a human inspector or judge? Quite possibly so, and this would be a good thing if machine learning could improve accuracy, eliminate bias and prejudice, and reduce human error, all while saving money.

But can we trust a government that bungled the initial rollout of Healthcare.gov to deploy artificial intelligence responsibly? In some circumstances we should….(More)”