Innovation Meets Citizen Science


Caroline Nickerson at SciStarter: “Citizen science has been around as long as science, but innovative approaches are opening doors to more and deeper forms of public participation.

Below, our editors spotlight a few projects that feature new approaches, novel research, or low-cost instruments. …

Colony B: Unravel the secrets of microscopic life! Colony B is a mobile gaming app developed at McGill University that enables you to contribute to research on microbes. Collect microbes and grow your colony in a fast-paced puzzle game that advances important scientific research.

AirCasting: AirCasting is an open-source, end-to-end solution for collecting, displaying, and sharing health and environmental data using your smartphone. The platform consists of wearable sensors, including a palm-sized air quality monitor called the AirBeam, that detect and report changes in your environment. (Android only.)

LingoBoingo: Getting computers to understand language requires large amounts of linguistic data and “correct” answers to language tasks (what researchers call “gold standard annotations”). Simply by playing language games online, you can help archive languages and create the linguistic data used by researchers to improve language technologies. These games are in English, French, and a new “multi-lingual” category.

TreeSnap: Help our nation’s trees and protect human health in the process. Invasive diseases and pests threaten the health of America’s forests. With the TreeSnap app, you can record the location and health of particular tree species–those unharmed by diseases that have wiped out other species. Scientists then use the collected information to locate candidates for genetic sequencing and breeding programs. Tag trees you find in your community, on your property, or out in the wild to help scientists understand forest health….(More)”.

This tech tells cities when floods are coming–and what they will destroy


Ben Paynter at FastCompany: “Several years ago, one of the eventual founders of One Concern nearly died in a tragic flood. Today, the company specializes in using artificial intelligence to predict how natural disasters are unfolding in real time on a city-block-level basis, in order to help disaster responders save as many lives as possible….

To fix that, One Concern debuted Flood Concern in late 2018. It creates map-based visualizations of where water surges may hit hardest, up to five days ahead of an impending storm. For cities, that includes not just time-lapse breakdowns of how the water will rise, how fast it could move, and what direction it will be flowing, but also what structures will get swamped or washed away, and how differing mitigation efforts–from levee building to dam releases–will impact each scenario. It’s the winner of Fast Company’s 2019 World Changing Ideas Awards in the AI and Data category.

[Image: One Concern]

So far, Flood Concern has been retroactively tested against events like Hurricane Harvey to show that it could have predicted what areas would be most impacted well ahead of the storm. The company, which was founded in Silicon Valley in 2015, started with one of that region’s pressing threats: earthquakes. It’s since earned contracts with cities like San Francisco, Los Angeles, and Cupertino, as well as private insurance companies….

One Concern’s first offering, dubbed Seismic Concern, takes existing information from satellite images and building permits to figure out what kind of ground structures are built on, and what might happen if they started shaking. If a big one hits, the program can extrapolate from the epicenter to suggest the likeliest places for destruction, and then adjust as more data from things like 911 calls and social media gets factored in….(More)”.


Does increased ‘participation’ equal a new-found enthusiasm for democracy?


Blog by Stephen King and Paige Nicol: “With a few months under our belts, 2019 looks unlikely to be the year of a great global turnaround for democracy. The decade of democratic ‘recession’ that Larry Diamond declared in 2015 has dragged on and deepened, and may now be teetering on the edge of becoming a full-blown depression. 

The start of each calendar year is marked by the release of annual indices, rankings, and reports on how democracy is faring around the world. 2018 reports from Freedom House and the Economist Intelligence Unit (EIU) highlighted precipitous declines in civil liberties in long-standing democracies as well as authoritarian states. Some groups, including migrants, women, ethnic and other minorities, opposition politicians, and journalists, have been particularly affected by these setbacks. According to the Committee to Protect Journalists, the number of journalists murdered nearly doubled last year, while the number imprisoned remained above 250 for the third consecutive year. 

Yet, the EIU also found a considerable increase in political participation worldwide. Levels of participation (including voting, protesting, and running for elected office, among other dimensions) increased substantially enough last year to offset falling scores in the other four categories of the index. Based on the methodology used, the rise in political participation was significant enough to prevent a decline in the global overall score for democracy for the first time in three years.
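
A toy illustration of how that offsetting works arithmetically, assuming (per the EIU's published methodology) that the overall score is the simple average of five category scores; the category names match the EIU's, but the numbers are invented for illustration, not EIU data:

```python
# Hypothetical category scores for a composite democracy index whose overall
# score is the simple average of five categories (as in the EIU index).
categories_2017 = {
    "electoral process and pluralism": 6.0,
    "functioning of government": 5.0,
    "political participation": 5.0,
    "political culture": 5.5,
    "civil liberties": 6.0,
}
categories_2018 = {
    "electoral process and pluralism": 5.9,  # -0.1
    "functioning of government": 4.9,        # -0.1
    "political participation": 5.4,          # +0.4 offsets the four declines
    "political culture": 5.4,                # -0.1
    "civil liberties": 5.9,                  # -0.1
}

def overall(scores):
    return sum(scores.values()) / len(scores)

print(overall(categories_2017), overall(categories_2018))  # 5.5 5.5: no net decline
```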

Though this development could give cause for optimism, we believe it could also raise new concerns. 

In Zimbabwe, Sudan, and Venezuela we see people who, through desperation and frustration, have taken to the streets – a form of participation which has been met with brutal crackdowns. Time has yet to tell what the ultimate outcome of these protests will be, but it is clear that governments with autocratic tendencies have more – and cheaper – tools to monitor, direct, control, and suppress participation than ever before. 

Elsewhere, we see a danger of people becoming disconnected from and disenchanted with democracy, as their representatives fail to take meaningful action on the issues that matter to them. In the UK Parliament, as Brexit discussions have become increasingly polarised and fractured along party-political and ideological lines, Foreign Secretary Jeremy Hunt warned that there was a threat of social unrest if Parliament was seen to be frustrating the ‘will of the people.’ 

While we see enhanced participation as crucial to just and fair societies, it alone will not be the silver bullet that saves democracy. Whether this trend becomes a cause for hope or concern will depend on three factors: who participates, what form participation takes, and how it is received by those with power….(More)”.

Data Can Help Students Maximize Return on Their College Investment


Blog by Jennifer Latson for Arnold Ventures: “When you buy a car, you want to know it will get you where you’re going. Before you invest in a certain model, you check its record. How does it do in crash tests? Does it have a history of breaking down? Are other owners glad they bought it?

Students choosing between college programs can’t do the same kind of homework. Much of the detailed data we demand when we buy a car isn’t available for postsecondary education — data such as how many students find jobs in the fields they studied, what they earn, how much debt they accumulate, and how quickly they repay it — yet choosing a college is a much more important financial decision.

The most promising solution to filling in the gaps, according to data advocates, is the College Transparency Act, which would create a secure, comprehensive national data network with information on college costs, graduation rates, and student career paths — and make this data publicly available. The bill, which will be discussed in Congress this year, has broad support from both Republicans and Democrats in the House and the Senate in part because it includes precautions to protect privacy and secure student data….

The data needed to answer questions about student success already exists but is scattered among various agencies and institutions: the Department of Education for data on student loan repayment; the Treasury Department for earnings information; and schools themselves for graduation rates.

“We can’t connect the dots to find out how these programs are serving certain students, and that’s because the Department of Education isn’t allowed to connect all the information these places have already collected,” says Amy Laitinen, director for higher education at New America, a think tank collaborating with the Institute for Higher Education Policy (IHEP) to promote educational transparency.

And until recently, publicly available federal postsecondary data included only full-time students who’d never enrolled in a college program before, ignoring the more than half of the higher-ed population made up of students who attend school part time or who transfer from one institution to another….(More)”.

Progression of the Inevitable


Kevin Kelly at Technium: “…The procession of technological discoveries is inevitable. When the conditions are right — when the necessary web of supporting technology needed for every invention is established — then the next adjacent technological step will emerge as if on cue. If inventor X does not produce it, inventor Y will. The invention of the microphone, the laser, the transistor, the steam turbine, the waterwheel, and the discoveries of oxygen, DNA, and Boolean logic, were all inevitable in roughly the period they appeared. However, the particular form of the microphone, its exact circuit, or the specific design of the laser, or the particular materials of the transistor, or the dimensions of the steam turbine, or the peculiar notation of the formula, or the specifics of any invention are not inevitable. Rather, they will vary quite widely due to the personality of their finder, the resources at hand, the culture of society they are born into, the economics funding the discovery, and the influence of luck and chance. An incandescent light bulb based on a coil of carbonized bamboo filament heated within a vacuum bulb is not inevitable, but “the electric incandescent light bulb” is. The concept of “the electric incandescent light bulb” abstracted from all the details that can vary while still producing the result — luminance from electricity, for instance — is ordained by the technium’s trajectory. We know this because “the electric incandescent light bulb” was invented, re-invented, co-invented, or “first invented” dozens of times. In their book “Edison’s Electric Light: Biography of an Invention”, Robert Friedel and Paul Israel list 23 inventors of incandescent bulbs prior to Edison. It might be fairer to say that Edison was the very last “first” inventor of the electric light.

[Image: Three independently invented electric light bulbs: Edison’s, Swan’s, and Maxim’s.]

Any claim of inevitability is difficult to prove. Convincing proof requires re-running a progression more than once and showing that the outcome is the same each time: that no matter what perturbations are thrown at the system, it yields an identical result. To claim that the large-scale trajectory of the technium is inevitable would mean demonstrating that if we re-ran history, the same abstracted inventions would arise again, and in roughly the same relative order. Without a time machine, there’ll be no indisputable proof, but we do have three types of evidence that suggest that the paths of technologies are inevitable. They are 1) that quantifiable trajectories of progress don’t waver despite attempts to shift them (see my Moore’s Law); 2) that in ancient times when transcontinental communication was slow or nonexistent, we find independent timelines of technology on different continents converging upon a set order; and 3) that most inventions and discoveries have been made independently by more than one person….(More)”.

What if You Could Vote for President Like You Rate Uber Drivers?


Essay by Guru Madhavan and Charles Phelps: “…Some experimental studies have begun to offer insights into the benefits of making voting methods—and the very goals of voting—more expressive. In the 2007 French presidential election, for instance, people were offered the chance to participate in an experimental ballot that allowed them to use letter grades to evaluate the candidates just as professors evaluate students. This approach, called the “majority judgment,” provides a clear method to combine those grades into rankings or a final winner. But instead of merely selecting a winner, majority judgment conveys—with a greater degree of expressivity—the voters’ evaluations of their choices. In this experiment, people completed their ballots in about a minute, thus allaying potential concerns that a letter grading system was too complicated to use. What’s more, they seemed more enthusiastic about this method. Scholars Michel Balinski and Rida Laraki, who led this study, point out: “Indeed, one of the most effective arguments for persuading reluctant voters to participate was that the majority judgment allows fuller expression of opinion.”
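
For readers curious how letter grades become a single winner, here is a minimal sketch of majority-judgment tallying as Balinski and Laraki describe it: each candidate's majority grade is the median of the grades they receive, and ties at the median are broken by repeatedly removing median grades. The candidate names, grade labels, and ballots below are hypothetical illustrations, not data from the French experiment.

```python
# A minimal sketch of majority-judgment tallying, assuming letter grades map
# to numeric ranks (higher is better). All names and ballots are made up.
from statistics import median_low

GRADES = {"Reject": 0, "Poor": 1, "Acceptable": 2, "Good": 3, "Excellent": 4}

def majority_judgment_winner(ballots):
    """ballots: candidate -> list of grade labels, one per voter.

    A candidate's majority grade is the (lower) median of their grades; ties
    are broken by repeatedly removing one median grade from each tied
    candidate and re-comparing. Assumes every voter grades every candidate,
    so all grade lists have the same length."""
    remaining = {c: sorted(GRADES[g] for g in gs) for c, gs in ballots.items()}
    while True:
        medians = {c: median_low(gs) for c, gs in remaining.items()}
        best = max(medians.values())
        tied = [c for c, m in medians.items() if m == best]
        if len(tied) == 1:
            return tied[0]
        remaining = {c: remaining[c] for c in tied}  # drop beaten candidates
        for c in tied:
            remaining[c].remove(medians[c])          # strip one median grade
        if any(not gs for gs in remaining.values()):
            return tied[0]  # grades exhausted: a full tie, pick arbitrarily

ballots = {
    "Candidate A": ["Excellent", "Good", "Good", "Poor", "Reject"],
    "Candidate B": ["Excellent", "Excellent", "Good", "Acceptable", "Reject"],
}
print(majority_judgment_winner(ballots))  # Candidate B
```

Note how the tie at the shared median grade ("Good") is resolved by the distribution of the remaining grades, information that a single-mark ballot never collects.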

Additional experiments with more expressive ballots have now been repeated across different countries and elections. According to a 2018 summary of these experiments by social choice theorist Annick Laruelle, “While ranking all candidates appears to be difficult … participants enjoy the possibility of choosing a grade for each candidate … [and] ballots with three grades are preferred to those … with two grades.” Some participant comments are revealing, stating, “With this ballot we can at last vote with the heart,” or, “Voting with this ballot is a relief.” Voters, according to Laruelle, “Enjoyed the option of voting in favor of several candidates and were especially satisfied of being offered the opportunity to vote against candidates.”…

These opportunities for expression might increase public interest in (and engagement with) democratic decision making, encouraging more thoughtful candidate debates, more substantive election campaigns and advertisements, and richer use of opinion polling to help candidates shape their position statements (once they are aware that the public’s selection process has changed). One could even envision that the basis for funding election campaigns might evolve if funders focused on policy ideas rather than political allegiances and specific candidates. Changes such as these would ideally put the power back in the hands of the people, where it actually belongs in a democracy. These conjectures need to be tested and retested across contexts, ideally through field experiments that leverage research and expertise in engineering, social choice, and political and behavioral sciences.

Standard left-to-right political scales and the way we currently vote do not capture the true complexity of our evolving political identities and preferences. If voting is indeed the true instrument of democracy and much more than a repeated political ritual, it must allow for richer expression. Current methods seem to discourage public participation, the very nucleus of civic life. The essence of civility and democracy lies not merely in providing issues and options to vote on but in enabling people to fully express their preferences. For a country founded on choice as a core tenet, is it too much to ask for a little more choice in how we select our leaders? …(More)”.

Know-how: Big Data, AI and the peculiar dignity of tacit knowledge


Essay by Tim Rogan: “Machine learning – a kind of sub-field of artificial intelligence (AI) – is a means of training algorithms to discern empirical relationships within immense reams of data. Run a purpose-built algorithm by a pile of images of moles that might or might not be cancerous. Then show it images of diagnosed melanoma. Using analytical protocols modelled on the neurons of the human brain, in an iterative process of trial and error, the algorithm figures out how to discriminate between cancers and freckles. It can approximate its answers with a specified and steadily increasing degree of certainty, reaching levels of accuracy that surpass human specialists. Similar processes that refine algorithms to recognise or discover patterns in reams of data are now running right across the global economy: medicine, law, tax collection, marketing and research science are among the domains affected. Welcome to the future, say the economist Erik Brynjolfsson and the computer scientist Tom Mitchell: machine learning is about to transform our lives in something like the way that steam engines and then electricity did in the 19th and 20th centuries. 
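
As a concrete illustration of the loop Rogan describes, here is a minimal sketch of supervised training in PyTorch (a library choice of ours, not the essay's): random tensors stand in for labeled mole images, and gradient descent plays the role of the iterative trial and error. This is a hypothetical toy, not a clinical model or any system the essay refers to.

```python
# A minimal sketch of the supervised loop described above: an algorithm is
# shown labeled examples ("melanoma" vs. "benign") and, by iterative trial
# and error (gradient descent), adjusts its parameters to discriminate
# between them. Random tensors stand in for real dermatology images.
import torch
import torch.nn as nn

# Stand-in data: 200 "images" of 64x64 grayscale pixels with binary labels.
images = torch.randn(200, 1, 64, 64)
labels = torch.randint(0, 2, (200,)).float()

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 1),
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):            # iterative trial and error
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()                # measure the error...
    optimizer.step()               # ...and nudge the parameters to reduce it

# The trained model outputs a probability for each image, i.e. an answer
# with a "specified degree of certainty".
probabilities = torch.sigmoid(model(images).squeeze(1))
```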

Signs of this impending change can still be hard to see. Productivity statistics, for instance, remain worryingly unaffected. This lag is consistent with earlier episodes of the advent of new ‘general purpose technologies’. In past cases, technological innovation took decades to prove transformative. But ideas often move ahead of social and political change. Some of the ways in which machine learning might upend the status quo are already becoming apparent in political economy debates.

The discipline of political economy was created to make sense of a world set spinning by steam-powered and then electric industrialisation. Its central question became how best to regulate economic activity. Centralised control by government or industry, or market freedoms – which optimised outcomes? By the end of the 20th century, the answer seemed, emphatically, to be market-based order. But the advent of machine learning is reopening the state vs market debate. Which of state, firm or market is the best means of coordinating supply and demand? Old answers to that question are coming under new scrutiny. In an eye-catching paper in 2017, the economists Binbin Wang and Xiaoyan Li at Sichuan University in China argued that big data and machine learning give centralised planning a new lease of life. The notion that market coordination of supply and demand encompasses more information than any single intelligence could handle would, they argued, soon be proved false by 21st-century AI.

How seriously should we take such speculations? Might machine learning bring us full circle in the history of economic thought, to where measures of economic centralisation and control – condemned long ago as dangerous utopian schemes – return, boasting new levels of efficiency, to constitute a new orthodoxy?

A great deal turns on the status of tacit knowledge….(More)”.

Data: The Lever to Promote Innovation in the EU


Blog Post by Juan Murillo Arias: “…But in order for data to truly become a lever that fosters innovation for the benefit of society as a whole, we must understand and address the following factors:

1. Disconnected, dispersed sources. As users of digital services (transportation, finance, telecommunications, news or entertainment) we leave a different digital footprint for each service that we use. These footprints, which are different facets of the same polyhedron, can even be contradictory on occasion. For this reason, they must be seen as complementary. Analysts should be aware that they must cross data sources from different origins in order to create a reliable picture of our preferences; otherwise we will be basing decisions on partial or biased information (see the sketch at the end of this point). How many times do we receive advertising for items we have already purchased, or for tourist destinations we have already visited? And this is just one example from digital marketing. When scoring financial solvency or monitoring health, the more complete the digital picture of the person, the more accurate the diagnosis will be.

Furthermore, from the user’s standpoint, proper management of their entire, dispersed digital footprint is a challenge; perhaps centralized consent would be very beneficial. In the financial world, the PSD2 regulations have already forced banks to open this information to other banks if customers so desire. The purpose is to foster competition and facilitate portability, but this opening up has also enabled the development of new information-aggregation services that are very useful to financial services users. It would be ideal if this step of breaking down barriers and moving toward a more transparent market took place simultaneously in all sectors, in order to avoid possible distortions to competition and, by extension, consumer harm. Customer consent would thereby open the door to building a more accurate picture of our preferences.
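
A minimal sketch of what “crossing” footprints from different origins might look like in practice, assuming the user has consented to link them through a shared identifier; the tables, fields, and values are hypothetical illustrations:

```python
# Each table below is one facet of the "polyhedron": a partial footprint left
# with a different service. All identifiers and values are made up.
import pandas as pd

transport = pd.DataFrame({"user_id": [1, 2], "monthly_trips": [42, 7]})
finance = pd.DataFrame({"user_id": [1, 2], "avg_card_spend": [850.0, 310.0]})
media = pd.DataFrame({"user_id": [1, 2], "news_minutes_per_day": [12, 95]})

# Joined on the consented identifier, the facets give a fuller (though still
# partial) picture of the same person than any single source does.
profile = transport.merge(finance, on="user_id").merge(media, on="user_id")
print(profile)
```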

2. The public and private sectors’ asymmetric capacity to gather data. This is related to citizens using public services less frequently than private services in the new digital channels. However, governments could benefit from the information possessed by private companies. These anonymous, aggregated data can help to ensure a more dynamic public management. Even personal data could open the door to customized education or healthcare on an individual level. In order to analyze all of this, the European Commission has created a working group of 23 experts. The purpose is to come up with a series of recommendations regarding the best legal, technical and economic framework to encourage this information transfer across sectors.

3. The lack of incentives for companies and citizens to encourage the reuse of their data. The reality today is that most companies use their data sources solely internally. Only a few have decided to explore data sharing through different models (for academic research or for the development of commercial services). As a result of this and other factors, the public sector largely continues to use the survey method to gather information instead of reading the digital footprint citizens produce. Multiple studies have demonstrated that this digital footprint would be useful for describing socioeconomic dynamics and monitoring the evolution of official statistical indicators. However, these studies have rarely gone on to become pilot projects, due to the lack of incentives for a private company to open up to the public sector, or to society in general, in a way that makes this new activity sustainable.

4. Limited commitment to the diversification of services. Another barrier is the fact that information-based product development is somewhat removed from the type of services that the main data generators (telecommunications, banks, commerce, electricity, transportation, etc.) traditionally provide. Therefore, these data-based initiatives are not part of their main business and are more closely tied to companies’ innovation areas, where exploratory proofs of concept are often not consolidated into a new line of business.

5. Bidirectionality. Data should also flow from the public sector to the rest of society. The first regulatory framework was created for this purpose. Although it is still very recent (the PSI Directive on the re-use of public sector information was passed in 2013), it is currently being revised in an attempt to foster the consolidation of an open data ecosystem that emanates from the public sector as well. On the one hand this would enable greater transparency; on the other, the development of solutions to improve multiple fields in which public actors are key, such as the environment, transportation and mobility, health, education, justice and the planning and execution of public works. Special emphasis will be placed on high-value data sets, such as statistical or geospatial data: data with tremendous potential to accelerate the emergence of a wide variety of information-based products and services that add value. The Commission will begin working with the Member States to identify these data sets.

In its report, Creating Value through Open Data, the European Data Portal estimates that government agencies making their data accessible will inject an extra €65 billion into the EU economy this year.

6. The commitment to analytical training and financial incentives for innovation. These are the key factors behind the digital unicorns that have emerged, more so in the U.S. and China than in Europe….(More)”

New York City ‘Open Data’ Paves Way for Innovative Technology


Leo Gringut at the International Policy Digest: “The philosophy behind “Open Data for All” turns on the idea that easy access to government data offers everyday New Yorkers the chance to grow and innovate: “Data is more than just numbers – it’s information that can create new opportunities and level the playing field for New Yorkers. It’s the illumination that changes frameworks, the insight that turns impenetrable issues into solvable problems.” Fundamentally, the newfound accessibility of City data is revolutionizing NYC business. According to Albert Webber, Program Manager for Open Data, City of New York, a key part of his job is “to engage the civic technology community that we have, which is very strong, very powerful in New York City.”

Fundamentally, Open Data is a game-changer for hundreds of New York companies, from startups to corporate giants, all of whom rely on data for their operations. The effect is set to be particularly profound in New York City’s most important economic sector: real estate. Seeking to transform the real estate and construction market in the City, valued at a record-setting $1 trillion in 2016, companies have been racing to develop tools that will harness the power of Open Data to streamline bureaucracy and management processes.
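
Much of this data is exposed programmatically: NYC Open Data is served through the Socrata SODA API, which returns datasets as JSON. Here is a minimal sketch of a query; the dataset ID is a placeholder to be looked up on data.cityofnewyork.us, and the fields returned depend on the dataset chosen.

```python
# A minimal sketch of querying NYC Open Data via the Socrata SODA API.
# The dataset ID below is a placeholder; real IDs appear in each dataset's
# URL on data.cityofnewyork.us.
import requests

DATASET_ID = "xxxx-xxxx"  # placeholder: look up a real ID on the portal
url = f"https://data.cityofnewyork.us/resource/{DATASET_ID}.json"

# SoQL parameters: cap the response at the first ten rows.
rows = requests.get(url, params={"$limit": 10}, timeout=30).json()
for row in rows:
    print(row)
```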

One such technology is the Citiscape app. Developed by a passionate team of real estate experts with more than 15 years of experience in the field, the app assembles data from the Department of Buildings (DOB) and the Environmental Control Board (ECB) into one easy-to-navigate interface. According to Citiscape Chief Operational Officer Olga Khaykina, the secret is in the app’s simplicity, which puts every aspect of project management at the user’s fingertips. “We made DOB and ECB just one tap away,” said Khaykina. “You’re one tap away from instant and accurate updates and alerts from the DOB that will keep you informed about any changes to an ongoing project. One tap away from organized and cloud-saved projects, including accessible and coordinated interaction with all team members through our in-app messenger. And one tap away from uncovering technical information about any building in NYC, just by entering its address.” Gone are the days of continuously refreshing the DOB website in hopes of an update on a minor complaint or a status change regarding your project; Citiscape does the busywork so you can focus on your project.

The Citiscape team emphasized that, without access to Open Data, this project would have been impossible….(More)”.

AI Ethics: Seven Traps


Blog Post by Annette Zimmermann and Bendert Zevenbergen: “… In what follows, we outline seven ‘AI ethics traps’. In doing so, we hope to provide a resource for readers who want to understand and navigate the public debate on the ethics of AI better, who want to contribute to ongoing discussions in an informed and nuanced way, and who want to think critically and constructively about ethical considerations in science and technology more broadly. Of course, not everybody who contributes to the current debate on AI Ethics is guilty of endorsing any or all of these traps: the traps articulate extreme versions of a range of possible misconceptions, formulated in a deliberately strong way to highlight the ways in which one might prematurely dismiss ethical reasoning about AI as futile.

1. The reductionism trap:

“Doing the morally right thing is essentially the same as acting in a fair way. (or: transparent, or egalitarian, or <substitute any other value>). So ethics is the same as fairness (or transparency, or equality, etc.). If we’re being fair, then we’re being ethical.”

Even though algorithmic bias and its unfair impact on decision outcomes is an urgent problem, it does not exhaust the ethical problem space. As important as algorithmic fairness is, it is crucial to avoid reducing ethics to a fairness problem alone. Instead, it is important to pay attention to how the ethically valuable goal of optimizing for a specific value like fairness interacts with other important ethical goals. Such goals could include—amongst many others—the goal of creating transparent and explainable systems which are open to democratic oversight and contestation, the goal of improving the predictive accuracy of machine learning systems, the goal of avoiding paternalistic infringements of autonomy rights, or the goal of protecting the privacy interests of data subjects. Sometimes, these different values may conflict: we cannot always optimize for everything at once. This makes it all the more important to adopt a sufficiently rich, pluralistic view of the full range of relevant ethical values at stake—only then can one reflect critically on what kinds of ethical trade-offs one may have to confront.

2. The simplicity trap:

“In order to make ethics practical and action-guiding, we need to distill our moral framework into a user-friendly compliance checklist. After we’ve decided on a particular path of action, we’ll go through that checklist to make sure that we’re being ethical.”

Given the high visibility and urgency of ethical dilemmas arising in the context of AI, it is not surprising that there are more and more calls to develop actionable AI ethics checklists. For instance, a 2018 draft report by the European Commission’s High-Level Expert Group on Artificial Intelligence specifies a preliminary ‘assessment list’ for ‘trustworthy AI’. While the report plausibly acknowledges that such an assessment list must be context-sensitive and that it is not exhaustive, it nevertheless identifies a list of ten fixed ethical goals, including privacy and transparency. But can and should ethical values be articulated in a checklist in the first place? It is worth examining this underlying assumption critically. After all, a checklist implies a one-off review process: on that view, developers or policy-makers could determine whether a particular system is ethically defensible at a specific moment in time, and then move on without confronting any further ethical concerns once the checklist criteria have been satisfied. But ethical reasoning cannot be a static one-off assessment: it requires an ongoing process of reflection, deliberation, and contestation. Simplicity is good—but the willingness to reconsider simple frameworks, when required, is better. Setting a fixed ethical agenda ahead of time risks obscuring new ethical problems that may arise at a later point in time, or ongoing ethical problems that become apparent to human decision-makers only later.

3. The relativism trap:

“We all disagree about what is morally valuable, so it’s pointless to imagine that there is a universal baseline against which we can evaluate moral choices. Nothing is objectively morally good: things can only be morally good relative to each person’s individual value framework.”

Public discourse on the ethics of AI frequently produces little more than an exchange of personal opinions or institutional positions. In light of pervasive moral disagreement, it is easy to conclude that ethical reasoning can never stand on firm ground: it always seems to be relative to a person’s views and context. But this does not mean that ethical reasoning about AI and its social and political implications is futile: some ethical arguments about AI may ultimately be more persuasive than others. While it may not always be possible to determine ‘the one right answer’, it is often possible to identify at least some paths of action that are clearly wrong, and some that are comparatively better (if not optimal, all things considered). If that is the case, comparing the respective merits of ethical arguments can be action-guiding for developers and policy-makers, despite the presence of moral disagreement. Thus, it is possible and indeed constructive for AI ethics to welcome value pluralism, without collapsing into extreme value relativism.

4. The value alignment trap:

“If relativism is wrong (see #3), there must be one morally right answer. We need to find that right answer, and ensure that everyone in our organisation acts in alignment with that answer. If our ethical reasoning leads to moral disagreement, that means that we have failed.”…(More)”.