La Primaire Wants To Help French Voters Bypass Traditional Parties


Federico Guerrini in Forbes: “French people, like the citizens of many other countries, have little confidence in their government or in their members of parliament.

A recent study by the Center for Political Research at Sciences Po (CEVIPOF) in Paris shows that while residents still trust, in part, their local officials, only 37% of them on average feel the same about those belonging to the National Assembly, the Senate or the executive.

Three years earlier, when asked in another poll what sprang to mind first when thinking of politics, their most common answer was “disgust”.

With this sort of background, it is perhaps unsurprising that a number of activists have decided to try and find new ways to boost political participation, using crowdsourcing, smartphone applications and online platforms to look for candidates outside of the usual circles.

There are several civic tech initiatives in place in France right now. One of the most fascinating is called LaPrimaire.org.

It’s an online platform whose main aim is to organize an open primary election, select a suitable candidate, and allow them to run for President in the 2017 elections.

Launched in April by Thibauld Favre and David Guez, respectively an engineer and a lawyer by trade, neither with any connection to the political establishment, it has so far attracted 164 self-proposed candidates and some 26,000 voters. Anyone can be elected, as long as they live in France, do not belong to any political party and have a clean criminal record.


A different class of possible candidates, also present on the website, is composed of the so-called “citoyens plébiscités”: VIPs, politicians or celebrities that backers of LaPrimaire.org think should run for president. In both cases, in order to qualify for the next phase of the selection, these people have to secure the vote of at least 500 supporters by July 14….(More)”

Transparency reports make AI decision-making accountable


Phys.org: “Machine-learning algorithms increasingly make decisions about credit, medical diagnoses, personalized recommendations, advertising and job opportunities, among other things, but exactly how usually remains a mystery. Now, new measurement methods developed by Carnegie Mellon University researchers could provide important insights into this process.

Was it a person’s age, gender or education level that had the most influence on a decision? Was it a particular combination of factors? CMU’s Quantitative Input Influence (QII) measures can provide the relative weight of each factor in the final decision, said Anupam Datta, associate professor of computer science and electrical and computer engineering.
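The core intuition behind influence measures like QII can be illustrated with a small sketch: to estimate one input’s weight, intervene on that input (resample it from the population while holding everything else fixed) and see how often the decision changes. This is a minimal illustrative example, not the CMU implementation; the toy credit rule, feature names and population are all hypothetical.

```python
import random

def input_influence(model, population, individual, feature, samples=100):
    """Estimate a feature's influence on one decision by intervention:
    resample that feature from the population, keep the rest of the
    individual's record fixed, and count how often the decision flips."""
    baseline = model(individual)
    flips = 0
    for _ in range(samples):
        intervened = dict(individual)
        intervened[feature] = random.choice(population)[feature]
        if model(intervened) != baseline:
            flips += 1
    return flips / samples

# Hypothetical black-box rule: approve credit if income >= 50 and age >= 25.
model = lambda p: p["income"] >= 50 and p["age"] >= 25
population = [{"income": random.randint(10, 100), "age": random.randint(18, 70)}
              for _ in range(500)]
applicant = {"income": 60, "age": 22}  # denied: age is the deciding factor

for feature in ("income", "age"):
    print(feature, input_influence(model, population, applicant, feature))
```

For this applicant, resampling income never changes the outcome (age alone blocks approval), while resampling age frequently does, so the measure correctly attributes the denial to age rather than income.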

“Demands for algorithmic transparency are increasing as the use of algorithmic decision-making systems grows and as people realize the potential of these systems to introduce or perpetuate racial or sex discrimination or other social harms,” Datta said.

“Some companies are already beginning to provide transparency reports, but work on the computational foundations for these reports has been limited,” he continued. “Our goal was to develop measures of the degree of influence of each factor considered by a system, which could be used to generate transparency reports.”

These reports might be generated in response to a particular incident—why an individual’s loan application was rejected, or why police targeted an individual for scrutiny or what prompted a particular medical diagnosis or treatment. Or they might be used proactively by an organization to see if an artificial intelligence system is working as desired, or by a regulatory agency to see whether a decision-making system inappropriately discriminated between groups of people….(More)”

Nudge 2.0: A broader toolkit for lasting behavior change


Cait Lamberton and Benjamin Castleman in the Huffington Post: “Nudges are all around us. Chances are that someone has nudged you today—even if you didn’t realize it. Maybe it was your doctor’s office, sending you a text message about an upcoming appointment. Or maybe it was an airline website, urging you to make a reservation because “only three tickets are left at this price.” In fact, the private sector has been nudging us in one way or another for at least 75 years, since the heyday of the Madison Avenue Ad Men.

It’s taken a few generations, but the public sector is starting to catch on. In policy domains ranging from consumer finance and public health to retirement planning and education, researchers are applying behavioral science insights to help people make more informed decisions that lead to better long-term outcomes.

Sometimes these nudges take the form of changing the rules that determine whether someone participates in a program or not (like switching the default so people are automatically enrolled in a retirement savings plan unless they opt out, rather than only enrolling people who actively sign up for the program). But oftentimes, nudges can be as simple as sending people simplified information about opportunities that are available to them, or reminders about important tasks they have to complete in order to participate in beneficial programs.

A growing body of research demonstrates that nudges like these, despite being low touch and costing very little, can lead to substantial improvements in educational outcomes, whether it’s parents reading more to their children, middle school students completing more class assignments, or college students successfully persisting in college….

As impressive as these results have been, many of the early nudge studies in education have focused on fairly low-hanging fruit. We’re often helping people follow through on an intention they already have, or informing them about opportunities or resources that they didn’t know existed or were confused about. What’s less clear, however, is how well these strategies can support sustained behavior change, like going to school every day or avoiding substance abuse….

But what if we want to change someone’s direction? In real-world terms, what if a student is struggling in school but isn’t even considering looking for help? What if their lives are too busy for them to search for or meet with a tutor on a consistent basis? What if they have a nagging feeling that they’re just not the kind of person who succeeds in school, so they don’t see the point in even trying?

For these types of behavior change, we need an expanded nudge toolkit—what we’ll call Nudge 2.0. These strategies go beyond information simplification, reminders, and professional assistance, and address the decision maker more holistically: people’s identity, their psychology, their emotions, and the competing forces that vie for their attention….(More)”

All European scientific articles to be freely accessible by 2020


EU Presidency: “All scientific articles in Europe must be freely accessible as of 2020. EU member states want to achieve optimal reuse of research data. They are also looking into a European visa for foreign start-up founders.

And, according to the new Innovation Principle, new European legislation must take account of its impact on innovation. These are the main outcomes of the meeting of the Competitiveness Council in Brussels on 27 May.

Sharing knowledge freely

Under the presidency of Netherlands State Secretary for Education, Culture and Science Sander Dekker, the EU ministers responsible for research and innovation decided unanimously to take these significant steps. Mr Dekker is pleased that these ambitions have been translated into clear agreements to maximise the impact of research. ‘Research and innovation generate economic growth and more jobs and provide solutions to societal challenges,’ the state secretary said. ‘And that means a stronger Europe. To achieve that, Europe must be as attractive as possible for researchers and start-ups to locate here and for companies to invest. That calls for knowledge to be freely shared. The time for talking about open access is now past. With these agreements, we are going to achieve it in practice.’

Open access

Open access means that scientific publications on the results of research supported by public and public-private funds must be freely accessible to everyone. That is not yet the case. The results of publicly funded research are currently not accessible to people outside universities and knowledge institutions. As a result, teachers, doctors and entrepreneurs do not have access to the latest scientific insights that are so relevant to their work, and universities have to take out expensive subscriptions with publishers to gain access to publications.

Reusing research data

From 2020, all scientific publications on the results of publicly funded research must be freely available. It must also be possible to optimally reuse research data. To achieve that, the data must be made accessible, unless there are well-founded reasons for not doing so, for example intellectual property rights or security or privacy issues….(More)”

Time for sharing data to become routine: the seven excuses for not doing so are all invalid


Paper by Richard Smith and Ian Roberts: “Data are more valuable than scientific papers but researchers are incentivised to publish papers not share data. Patients are the main beneficiaries of data sharing but researchers have several incentives not to share: others might use their data to get ahead in the academic rat race; they might be scooped; their results might not be replicable; competitors may reach different conclusions; their data management might be exposed as poor; patient confidentiality might be breached; and technical difficulties make sharing impossible. All of these barriers can be overcome and researchers should be rewarded for sharing data. Data sharing must become routine….(More)”

Data Science Ethical Framework


UK Cabinet Office: “Data science provides huge opportunities for government. Harnessing new forms of data with increasingly powerful computer techniques increases operational efficiency, improves public services and provides insight for better policymaking.

We want people in government to feel confident using data science techniques to innovate. This guidance is intended to bring together relevant laws and best practice, to give teams robust principles to work with.

The publication is a first version that we are asking the public, experts, civil servants and other interested parties to help us perfect and iterate. This will include taking on evidence from a public dialogue on data science ethics. It was published on 19 May by the Minister for the Cabinet Office, Matt Hancock. If you would like to help us iterate the framework, find out how to get in touch at the end of this blog. See the Data Science Ethical Framework (PDF, 8.28MB, 17 pages).

Improving patient care by bridging the divide between doctors and data scientists


In the Conversation: “While wonderful new medical discoveries and innovations are in the news every day, doctors struggle daily with using information and techniques available right now while carefully adopting new concepts and treatments. As a practicing doctor, I deal with uncertainties and unanswered clinical questions all the time….At the moment, a report from the National Academy of Medicine tells us, most doctors base most of their everyday decisions on guidelines from (sometimes biased) expert opinions or small clinical trials. It would be better if they were from multicenter, large, randomized controlled studies, with tightly controlled conditions ensuring the results are as reliable as possible. However, those are expensive and difficult to perform, and even then often exclude a number of important patient groups on the basis of age, disease and sociological factors.

Part of the problem is that health records are traditionally kept on paper, making them hard to analyze en masse. As a result, most of what medical professionals might have learned from experiences was lost – or at least was inaccessible to another doctor meeting with a similar patient.

A digital system would collect and store as much clinical data as possible from as many patients as possible. It could then use information from the past – such as blood pressure, blood sugar levels, heart rate and other measurements of patients’ body functions – to guide future doctors to the best diagnosis and treatment of similar patients.
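The retrieval step such a system relies on can be sketched very simply: rank past patients by how close their vital signs are to the new patient’s, then surface the outcomes recorded for the closest matches. This is a toy illustration under invented data, not how MIMIC-based tools actually work; real systems must handle many more variables, missing values and confounders.

```python
import math

# Hypothetical past patient records: vital signs plus the treatment that worked.
records = [
    {"bp": 120, "glucose": 90,  "hr": 70, "outcome": "treatment A"},
    {"bp": 160, "glucose": 180, "hr": 95, "outcome": "treatment B"},
    {"bp": 155, "glucose": 170, "hr": 90, "outcome": "treatment B"},
    {"bp": 118, "glucose": 85,  "hr": 72, "outcome": "treatment A"},
]

def similar_patients(query, records, k=3):
    """Rank past patients by Euclidean distance over shared vital signs."""
    def distance(rec):
        return math.sqrt(sum((query[f] - rec[f]) ** 2
                             for f in ("bp", "glucose", "hr")))
    return sorted(records, key=distance)[:k]

new_patient = {"bp": 158, "glucose": 175, "hr": 92}
matches = similar_patients(new_patient, records, k=2)
print([m["outcome"] for m in matches])
```

Here the two nearest past patients both responded to the same treatment, which is the kind of signal the article describes guiding a future doctor toward a diagnosis or therapy for a similar patient.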

Industrial giants such as Google, IBM, SAP and Hewlett-Packard have also recognized the potential for this kind of approach, and are now working on how to leverage population data for the precise medical care of individuals.

Collaborating on data and medicine

At the Laboratory of Computational Physiology at the Massachusetts Institute of Technology, we have begun to collect large amounts of detailed patient data in the Medical Information Mart in Intensive Care (MIMIC). It is a database containing information from 60,000 patient admissions to the intensive care units of the Beth Israel Deaconess Medical Center, a Boston teaching hospital affiliated with Harvard Medical School. The data in MIMIC has been meticulously scoured so individual patients cannot be recognized, and is freely shared online with the research community.

But the database itself is not enough. We bring together front-line clinicians (such as nurses, pharmacists and doctors) to identify questions they want to investigate, and data scientists to conduct the appropriate analyses of the MIMIC records. This gives caregivers and patients the best individualized treatment options in the absence of a randomized controlled trial.

Bringing data analysis to the world

At the same time we are working to bring these data-enabled systems to assist with medical decisions to countries with limited health care resources, where research is considered an expensive luxury. Often these countries have few or no medical records – even on paper – to analyze. We can help them collect health data digitally, creating the potential to significantly improve medical care for their populations.

This task is the focus of Sana, a collection of technical, medical and community experts from across the globe that is also based in our group at MIT. Sana has designed a digital health information system specifically for use by health providers and patients in rural and underserved areas.

At its core is an open-source system that uses cellphones – common even in poor and rural nations – to collect, transmit and store all sorts of medical data. It can handle not only basic patient data such as height and weight, but also photos and X-rays, ultrasound videos, and electrical signals from a patient’s brain (EEG) and heart (ECG).

Partnering with universities and health organizations, Sana organizes training sessions (which we call “bootcamps”) and collaborative workshops (called “hackathons”) to connect nurses, doctors and community health workers at the front lines of care with technology experts in or near their communities. In 2015, we held bootcamps and hackathons in Colombia, Uganda, Greece and Mexico. The bootcamps teach students in technical fields like computer science and engineering how to design and develop health apps that can run on cellphones. Immediately following the bootcamp, the medical providers join the group and the hackathon begins…At the end of the day, though, the purpose is not the apps….(More)

Health care data as a public utility: how do we get there?


Mohit Kaushal and Margaret Darling at Brookings: “Forty-six million Americans use mobile fitness and health apps. Over half of providers serving Medicare or Medicaid patients are using electronic health records (EHRs). Despite such advances and proliferation of health data and its collection, we are not yet on an inevitable path to unleashing the often-promised “power of data” because data remain proprietary and fragmented among insurers, providers, health record companies, government agencies, and researchers.

Despite the technological integration seen in banking and other industries, health care data has remained scattered and inaccessible. EHRs remain fragmented among 861 distinct ambulatory vendors and 277 inpatient vendors as of 2013. Similarly, insurance claims are stored in the databases of insurers, and information about public health—including information about the social determinants of health, such as housing, food security, safety, and education—is often kept in databases belonging to various governmental agencies. These silos wouldn’t necessarily be a problem, except for the lack of interoperability that has long plagued the health care industry.

For this reason, many are reconsidering whether health care data is a public good, provided to all members of the public without profit. This idea is not new. In fact, the Institute of Medicine established the Roundtable on Value and Science-Driven Healthcare, citing that:

“A significant challenge to progress resides in the barriers and restrictions that derive from the treatment of medical care data as a proprietary commodity by the organizations involved. Even clinical research and medical care data developed with public funds are often not available for broader analysis and insights. Broader access and use of healthcare data for new insights require not only fostering data system reliability and interoperability but also addressing the matter of individual data ownership and the extent to which data central to progress in health and health care should constitute a public good.”

Indeed, publicly available health care data holds the potential to unlock many innovations, much like what public goods have done in other industries. As publicly available weather data has shown, the public utility of open access information is not only good for consumers, it is good for businesses…(More)”

The Small World Initiative: An Innovative Crowdsourcing Platform for Antibiotics


Ana Maria Barral et al in FASEB Journal: “The Small World Initiative™ (SWI) is an innovative program that encourages students to pursue careers in science and sets forth a unique platform to crowdsource new antibiotics. It centers around an introductory biology course through which students perform original hands-on field and laboratory research in the hunt for new antibiotics. Through a series of student-driven experiments, students collect soil samples, isolate diverse bacteria, test their bacteria against clinically-relevant microorganisms, and characterize those showing inhibitory activity. This is particularly relevant since over two thirds of antibiotics originate from soil bacteria or fungi. SWI’s approach also provides a platform to crowdsource antibiotic discovery by tapping into the intellectual power of many people concurrently addressing a global challenge and advances promising candidates into the drug development pipeline. This unique class approach harnesses the power of active learning to achieve both educational and scientific goals…..We will discuss our preliminary student evaluation results, which show the compelling impact of the program in comparison to traditional introductory courses. Ultimately, the mission of the program is to provide an evidence-based approach to teaching introductory biology concepts in the context of a real-world problem. This approach has been shown to be particularly impactful on underrepresented STEM talent pools, including women and minorities….(More)”

Scientists Are Just as Confused About the Ethics of Big-Data Research as You


Sarah Zhang at Wired: “When a rogue researcher last week released 70,000 OkCupid profiles, complete with usernames and sexual preferences, people were pissed. When Facebook researchers manipulated stories appearing in Newsfeeds for a mood contagion study in 2014, people were really pissed. OkCupid filed a copyright claim to take down the dataset; the journal that published Facebook’s study issued an “expression of concern.” Outrage has a way of shaping ethical boundaries. We learn from mistakes.

Shockingly, though, the researchers behind both of those big data blowups never anticipated public outrage. (The OkCupid research does not seem to have gone through any kind of ethical review process, and a Cornell ethics review board approved the Facebook experiment.) And that shows just how untested the ethics of this new field of research is. Unlike medical research, which has been shaped by decades of clinical trials, the risks—and rewards—of analyzing big, semi-public databases are just beginning to become clear.

And the patchwork of review boards responsible for overseeing those risks is only slowly inching into the 21st century. Under the Common Rule in the US, federally funded research has to go through ethical review. Rather than one unified system though, every single university has its own institutional review board, or IRB. Most IRB members are researchers at the university, most often in the biomedical sciences. Few are professional ethicists.

Even fewer have computer science or security expertise, which may be necessary to protect participants in this new kind of research. “The IRB may make very different decisions based on who is on the board, what university it is, and what they’re feeling that day,” says Kelsey Finch, policy counsel at the Future of Privacy Forum. There are hundreds of these IRBs in the US—and they’re grappling with research ethics in the digital age largely on their own….

Or maybe other institutions, like the open science repositories asking researchers to share data, should be picking up the slack on ethical issues. “Someone needs to provide oversight, but the optimal body is unlikely to be an IRB, which usually lacks subject matter expertise in de-identification and re-identification techniques,” Michelle Meyer, a bioethicist at Mount Sinai, writes in an email.

Even among Internet researchers familiar with the power of big data, attitudes vary. When Katie Shilton, an information technology researcher at the University of Maryland, interviewed 20 online data researchers, she found “significant disagreement” over issues like the ethics of ignoring Terms of Service and obtaining informed consent. Surprisingly, the researchers also said that ethical review boards had never challenged the ethics of their work—but peer reviewers and colleagues had. Various groups like the Association of Internet Researchers and the Center for Applied Internet Data Analysis have issued guidelines, but the people who actually have power—those on institutional review boards—are only just catching up.

Outside of academia, companies like Microsoft have started to institute their own ethical review processes. In December, Finch at the Future of Privacy Forum organized a workshop called Beyond IRBs to consider processes for ethical review outside of federally funded research. After all, modern tech companies like Facebook, OkCupid, Snapchat and Netflix sit atop a trove of data that 20th-century social scientists could only have dreamed of.

Of course, companies experiment on us all the time, whether it’s websites A/B testing headlines or grocery stores changing the configuration of their checkout line. But as these companies hire more data scientists out of PhD programs, academics are seeing an opportunity to bridge the divide and use that data to contribute to public knowledge. Maybe updated ethical guidelines can be forged out of those collaborations. Or it just might be a mess for a while….(More)”