The Future of Fishing Is Big Data and Artificial Intelligence


Meg Wilcox at Civil Eats: “New England’s groundfish season is in full swing, as hundreds of dayboat fishermen from Rhode Island to Maine take to the water in search of the region’s iconic cod and haddock. But this year, several dozen of them are hauling in their catch under the watchful eye of video cameras as part of a new effort to use technology to better sustain the area’s fisheries and the communities that depend on them.

Video observation on fishing boats—electronic monitoring—is picking up steam in the Northeast and nationally as a cost-effective means to ensure that fishing vessels aren’t catching more fish than allowed while informing local fisheries management. While several issues remain to be solved before the technology can be widely deployed—such as the costs of reviewing and storing data—electronic monitoring is beginning to deliver on its potential to lower fishermen’s costs, provide scientists with better data, restore trust where it’s broken, and ultimately help consumers gain a greater understanding of where their seafood is coming from….

Muto’s vessel was outfitted with cameras, at a cost of about $8,000, through a collaborative venture between NOAA’s regional office and science center, The Nature Conservancy (TNC), the Gulf of Maine Research Institute, and the Cape Cod Commercial Fishermen’s Alliance. Camera costs are currently subsidized by NOAA Fisheries and its partners.

The cameras run the entire time Muto and his crew are out on the water. They record how the fishermen handle their discards: the fish they’re not allowed to keep because of size or species, but that still count toward their quotas. The cost is lower than what he’d pay for an in-person monitor. The biggest cost of electronic monitoring, however, is the labor required to review the video. …

Another way to cut costs is to use computers to review the footage. McGuire says there’s been a lot of talk about automating the review, but the common refrain is that it’s still five years off.

To spur faster action, TNC last year spearheaded an online competition, offering a $50,000 prize to computer scientists who could crack the code—that is, teach a computer how to count fish, size them, and identify their species.

“We created an arms race,” says McGuire. “That’s why you do a competition. You’ll never get the top minds to do this because they don’t care about your fish. They all want to work for Google, and one way to get recognized by Google is to win a few of these competitions.” The contest exceeded McGuire’s expectations. “Winners got close to 100 percent in count and 75 percent accurate on identifying species,” he says. “We proved that automated review is now. Not in five years. And now all of the video-review companies are investing in machine learning.” It’s only a matter of time before a commercial product is available, McGuire believes….(More).
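Once a model can identify and size fish in each frame, the remaining work is aggregation: turning per-frame detections into per-species counts and size estimates, with low-confidence detections routed to a human reviewer. The sketch below is illustrative only — the detection tuples, confidence threshold, and review rule are assumptions, not the competition winners’ actual pipeline.

```python
from collections import Counter
from statistics import mean

# Hypothetical per-frame output from an automated video-review model:
# (predicted_species, estimated_length_cm, confidence).
detections = [
    ("cod", 58.0, 0.94),
    ("haddock", 41.0, 0.88),
    ("cod", 62.0, 0.91),
    ("haddock", 39.4, 0.52),  # low confidence: flag for human review
]

CONF_THRESHOLD = 0.8  # assumed cutoff for accepting a detection unreviewed

def summarize(detections, threshold=CONF_THRESHOLD):
    """Aggregate model output into per-species counts and mean lengths,
    routing low-confidence detections to a human reviewer."""
    accepted = [d for d in detections if d[2] >= threshold]
    flagged = [d for d in detections if d[2] < threshold]
    counts = Counter(species for species, _, _ in accepted)
    mean_lengths = {
        sp: round(mean(l for s, l, _ in accepted if s == sp), 1)
        for sp in counts
    }
    return counts, mean_lengths, flagged

counts, mean_lengths, flagged = summarize(detections)
print(counts)        # Counter({'cod': 2, 'haddock': 1})
print(mean_lengths)  # {'cod': 60.0, 'haddock': 41.0}
print(len(flagged))  # 1
```

A design like this keeps humans in the loop only where the model is unsure, which is where the cost savings over full manual review come from.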

Prescription drugs that kill: The challenge of identifying deaths in government data


Mike Stucka at Data Driven Journalism: “An editor at The Palm Beach Post printed out hundreds of pages of reports and asked a simple question that turned out to be weirdly complex: How many people were being killed by a prescription drug?

That question relied on a version of a report that was soon discontinued by the U.S. Food and Drug Administration. Instead, the agency built a new website that doesn’t allow exports or the ability to see substantial chunks of the data. So I went to the raw data files, which were horribly formatted — and, before the project was over, the FDA had reissued some of those data files and taken most of them offline.

But I didn’t give up hope. Behind the data — known as FAERS, or FDA Adverse Event Reporting System — are more than a decade of data for suspected drug complications of nearly every kind. With multiple drugs in many reports, and multiple versions of many reports, the list of drugs alone comes to some 35 million records. And it’s a potential gold mine.

How much of a gold mine? For one relatively rare drug, meant only for the worst kind of cancer pain, we found records tying the drug to more than 900 deaths. A salesman had hired a former exotic dancer and a former Playboy model to help sell the drug known as Subsys. He then pushed salesmen to up the dosage, John Pacenti and Holly Baltz found in their package, “Pay To Prescribe? The Fentanyl Scandal.”

FAERS has some serious limitations, but some serious benefits. The data can tell you why a drug was prescribed; it can tell you if a person was hospitalized because of a drug reaction, or killed, or permanently disabled. It can tell you what country the report came from. It’s got the patient age. It’s got the date of reporting. It’s got other drugs involved. Dosage. There’s a ton of useful information.

Now the bad stuff: There may be multiple reports for each actual case, as well as multiple versions of a single “case” ID….(More)”
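That deduplication problem — several versions of one case ID — is the first thing any analysis has to handle, or deaths get double-counted. A minimal sketch of one approach, keeping only the latest version of each case, is below; the field names and records are illustrative placeholders, not the actual FAERS column names.

```python
# Toy records standing in for parsed FAERS rows; "DE" is used here as a
# death-outcome code, but treat all field names and values as assumptions.
reports = [
    {"case_id": "1001", "version": 1, "drug": "SUBSYS", "outcome": "HO"},
    {"case_id": "1001", "version": 2, "drug": "SUBSYS", "outcome": "DE"},
    {"case_id": "1002", "version": 1, "drug": "SUBSYS", "outcome": "DE"},
]

def latest_versions(reports):
    """Keep only the most recent version of each case ID, so one
    real-world case is not counted several times."""
    latest = {}
    for r in reports:
        prev = latest.get(r["case_id"])
        if prev is None or r["version"] > prev["version"]:
            latest[r["case_id"]] = r
    return list(latest.values())

deduped = latest_versions(reports)
deaths = sum(1 for r in deduped if r["outcome"] == "DE")
print(len(deduped), deaths)  # 2 2
```

Without the version filter, the three raw rows above would suggest two deaths out of three cases when there are really two deaths in two cases — the kind of distortion that grows with a decade of refiled reports.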

Help NASA create the world’s largest landslide database


EarthSky: “Landslides cause thousands of deaths and billions of dollars in property damage each year. Surprisingly, very few centralized global landslide databases exist, especially those that are publicly available.

Now NASA scientists are working to fill the gap—and they want your help collecting information. In March 2018, NASA scientist Dalia Kirschbaum and several colleagues launched a citizen science project that will make it possible to report landslides you have witnessed, heard about in the news, or found on an online database. All you need to do is log into the Landslide Reporter portal and report the time, location, and date of the landslide – as well as your source of information. You are also encouraged to submit additional details, such as the size of the landslide and what triggered it. And if you have photos, you can upload them.

Kirschbaum’s team will review each entry and submit credible reports to the Cooperative Open Online Landslide Repository (COOLR) — which they hope will eventually be the largest global online landslide catalog available.
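A citizen report of this kind is essentially a small structured record plus a screening step before it enters COOLR. The sketch below models the fields the article lists; the class, field names, and credibility rule are hypothetical illustrations, not the actual Landslide Reporter schema or NASA’s review criteria.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LandslideReport:
    """Illustrative record for a citizen-submitted landslide report."""
    date: str                        # e.g. "2018-03-14"
    latitude: float
    longitude: float
    source: str                      # news link, database entry, or "eyewitness"
    size: Optional[str] = None       # optional detail, e.g. "large"
    trigger: Optional[str] = None    # optional detail, e.g. "rainfall"
    photos: List[str] = field(default_factory=list)

    def passes_screening(self) -> bool:
        """A minimal automated check before human review: the report must
        cite a source and carry plausible coordinates."""
        return bool(self.source) and (-90 <= self.latitude <= 90) \
            and (-180 <= self.longitude <= 180)

r = LandslideReport("2018-03-14", 47.6, 13.0, "local news article")
print(r.passes_screening())  # True
```

In practice such a check would only filter obvious errors; the article makes clear that credibility is judged by Kirschbaum’s team, not by software.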

Landslide Reporter is designed to improve the quantity and quality of data in COOLR. Currently, COOLR contains NASA’s Global Landslide Catalog, which includes more than 11,000 reports on landslides, debris flows, and rock avalanches. Since the current catalog is based mainly on information from English-language news reports and journalists tend to cover only large and deadly landslides in densely populated areas, many landslides never make it into the database….(More)”.

Open Standards for Data


Guidebook by the Open Data Institute: “Standards for data are often seen as a technical topic that is only relevant to developers and other technologists.

Using this guidebook we hope to highlight that standards are an important tool that are worthy of wider attention.

Standards have an important role in helping us to consistently and repeatably share data. But they are also a tool to help implement policy, create and shape markets and drive social change.

The guidebook isn’t intended to be read from start to finish. Instead we’ve focused on curating a variety of guidance, tools and resources that will be relevant no matter your experience.

On top of providing useful background and case studies, we’ve also provided pointers to help you find existing standards.

Other parts of the guidebook will be most relevant when you’re engaged in the process of scoping and designing new standards….(More)”.

New Zealand explores machine-readable laws to transform government


Apolitical: “The team working to drive New Zealand’s government into the digital age believes that part of the problem is the way that laws themselves are written. Earlier this year, in a three-week experiment, they tested the theory by rewriting legislation itself as software code.

The team in New Zealand, led by the government’s service innovations team LabPlus, has attempted to improve the interpretation of legislation and vastly ease the creation of digital services by rewriting legislation as code.

Legislation-as-code means taking the “rules” or components of legislation — its logic, requirements and exemptions — and laying them out programmatically so that it can be parsed by a machine. If law can be broken down by a machine, then anyone, even those who aren’t legally trained, can work with it. It helps to standardise the rules in a consistent language across an entire system, giving a view of services, compliance and all the different rules of government.

Over the course of three weeks the team in New Zealand rewrote two sets of legislation as software code: the Rates Rebate Act, a tax rebate designed to lower the costs of owning a home for people on low incomes, and the Holidays Act, which was enacted to grant each employee in New Zealand a guaranteed four weeks a year of holiday.

The way that both policies are written makes them difficult to interpret and, consequently, deliver. They were written for a paper-based world, and require different service responses from distinct bodies within government depending on the legal status of the citizen using them. For instance, the residents of retirement villages are eligible for rebates under the Rates Rebate Act, but access them via different people and provide different information than ordinary ratepayers.

The teams worked to rewrite the legislation, first as “pseudocode” — the rules behind the legislation in a logical chain — then as human-readable legislation and finally as software code, designed to make it far easier for public servants and the public to work out who was eligible for what outcome. In the end, the team had working code for how to digitally deliver two policies.
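The end state the article describes — eligibility rules laid out programmatically so a machine can parse them — can be illustrated with a toy rule. To be clear, the thresholds, formula, and function below are invented placeholders for illustration; they are not the actual provisions of the Rates Rebate Act or the New Zealand team’s code.

```python
# Hypothetical figures, standing in for amounts the real Act specifies.
INCOME_THRESHOLD = 26_000   # assumed base income limit
DEPENDANT_ALLOWANCE = 500   # assumed extra allowance per dependant
MAX_REBATE = 630            # assumed rebate cap

def rates_rebate(income: int, rates_paid: int, dependants: int) -> int:
    """Return the rebate a ratepayer would receive under this sketched rule.

    Encoding the rule as a pure function means the same logic can drive an
    online eligibility checker, a back-office system, and policy testing."""
    allowance = INCOME_THRESHOLD + DEPENDANT_ALLOWANCE * dependants
    if income > allowance:
        return 0
    # Hypothetical formula: rebate covers two-thirds of rates, capped.
    return min(MAX_REBATE, round(rates_paid * 2 / 3))

print(rates_rebate(income=24_000, rates_paid=900, dependants=0))  # 600
print(rates_rebate(income=30_000, rates_paid=900, dependants=0))  # 0
```

Even this toy version shows the payoff the article points to: changing a threshold constant changes every downstream service consistently, and the policy’s effect on sample households can be tested before anything is deployed.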

A step towards digital government

The implications of such techniques are significant. Firstly, machine-readable legislation could speed up interactions between government and business, sparing private organisations the costs in time and money they currently spend interpreting the laws they need to comply with.

If legislation changes, the machine can process it automatically and consistently, saving the cost of employing an expert, or a lawyer, to do this job.

More transformatively for policymaking itself, machine-readable legislation allows public servants to test the impact of policy before they implement it.

“What happens currently is that people design the policy up front and wait to see how it works when you eventually deploy it,” said Richard Pope, one of the original pioneers in the UK’s Government Digital Service (GDS) and the co-author of the UK’s digital service standard. “A better approach is to design the legislation in such a way that gives the teams that are making and delivering a service enough wiggle room to be able to test things.”…(More)”.

The promise and peril of military applications of artificial intelligence


Michael C. Horowitz at the Bulletin of the Atomic Scientists: “Artificial intelligence (AI) is having a moment in the national security space. While the public may still equate the notion of artificial intelligence in the military context with the humanoid robots of the Terminator franchise, there has been a significant growth in discussions about the national security consequences of artificial intelligence. These discussions span academia, business, and governments, from Oxford philosopher Nick Bostrom’s concern about the existential risk to humanity posed by artificial intelligence to Tesla founder Elon Musk’s concern that artificial intelligence could trigger World War III to Vladimir Putin’s statement that leadership in AI will be essential to global power in the 21st century.

What does this really mean, especially when you move beyond the rhetoric of revolutionary change and think about the real world consequences of potential applications of artificial intelligence to militaries? Artificial intelligence is not a weapon. Instead, artificial intelligence, from a military perspective, is an enabler, much like electricity and the combustion engine. Thus, the effect of artificial intelligence on military power and international conflict will depend on particular applications of AI for militaries and policymakers. What follows are key issues for thinking about the military consequences of artificial intelligence, including principles for evaluating what artificial intelligence “is” and how it compares to technological changes in the past, what militaries might use artificial intelligence for, potential limitations to the use of artificial intelligence, and then the impact of AI military applications for international politics.

The potential promise of AI—including its ability to improve the speed and accuracy of everything from logistics to battlefield planning and to help improve human decision-making—is driving militaries around the world to accelerate their research into and development of AI applications. For the US military, AI offers a new avenue to sustain its military superiority while potentially reducing costs and risk to US soldiers. For others, especially Russia and China, AI offers something potentially even more valuable—the ability to disrupt US military superiority. National competition in AI leadership is as much or more an issue of economic competition and leadership than anything else, but the potential military impact is also clear. There is significant uncertainty about the pace and trajectory of artificial intelligence research, which means it is always possible that the promise of AI will turn into more hype than reality. Moreover, safety and reliability concerns could limit the ways that militaries choose to employ AI…(More)”.

Navigation by Judgment: Why and When Top Down Management of Foreign Aid Doesn’t Work


Book by Dan Honig: “Foreign aid organizations collectively spend hundreds of billions of dollars annually, with mixed results. Part of the problem in these endeavors lies in their execution. When should foreign aid organizations empower actors on the front lines of delivery to guide aid interventions, and when should distant headquarters lead?

In Navigation by Judgment, Dan Honig argues that high-quality implementation of foreign aid programs often requires contextual information that cannot be seen by those in distant headquarters. Tight controls and a focus on reaching pre-set measurable targets often prevent front-line workers from using skill, local knowledge, and creativity to solve problems in ways that maximize the impact of foreign aid. Drawing on a novel database of over 14,000 discrete development projects across nine aid agencies and eight paired case studies of development projects, Honig concludes that aid agencies will often benefit from giving field agents the authority to use their own judgments to guide aid delivery. This “navigation by judgment” is particularly valuable when environments are unpredictable and when accomplishing an aid program’s goals is hard to accurately measure.

Highlighting a crucial obstacle for effective global aid, Navigation by Judgment shows that the management of aid projects matters for aid effectiveness….(More)”.

Citizenship and democratic production


Article by Mara Balestrini and Valeria Right in Open Democracy: “In the last decades we have seen how the concept of innovation has changed, as not only the ecosystem of innovation-producing agents, but also the ways in which innovation is produced have expanded. The concept of producer-innovation, for example, where companies innovate on the basis of self-generated ideas, has been superseded by the concept of user-innovation, where innovation originates from the observation of the consumers’ needs, and then by the concept of consumer-innovation, where consumers enhanced by the new technologies are themselves able to create their own products. Innovation-related business models have changed too. We now talk about not only patent-protected innovation, but also open innovation and even free innovation, where open knowledge sharing plays a key role.

A similar evolution has taken place in the field of the smart city. While the first smart city models prioritized technology left in the hands of experts as a key factor for solving urban problems, more recent initiatives such as Sharing City (Seoul), Co-city (Bologna), or Fab City (Barcelona) focus on citizen participation, open data economics and collaborative-distributed processes as catalysts for innovative solutions to urban challenges. These initiatives could prompt a new wave in the design of more inclusive and sustainable cities by challenging existing power structures, amplifying the range of solutions to urban problems and, possibly, creating value on a larger scale.

In a context of economic austerity and massive urbanization, public administrations are acknowledging the need to seek innovative alternatives to increasing urban demands. Meanwhile, citizens, harnessing the potential of technologies – many of them accessible through open licenses – are putting their creative capacity into practice and contributing to a wave of innovation that could reinvent even the most established sectors.

Contributive production

The virtuous combination of citizen participation and abilities, digital technologies, and open and collaborative strategies is catalyzing innovation in all areas. Citizen innovation encompasses everything, from work and housing to food and health. The scope of work, for example, is potentially affected by the new processes of manufacturing and production on an individual scale: citizens can now produce small and large objects (new capacity), thanks to easy access to new technologies such as 3D printers (new element); they can also take advantage of new intellectual property licenses by adapting innovations from others and freely sharing their own (new rule) in response to a wide range of needs.

Along these lines, between 2015 and 2016, the city of Bristol launched a citizen innovation program aimed at solving problems related to the state of rented homes, which produced solutions through citizen participation and the use of sensors and open data. Citizens themselves designed and produced temperature and humidity sensors – using open hardware (Raspberry Pi), 3D printers and laser cutters – to combat problems related to home damp. These sensors, placed in the homes, made it possible to map the scale of the problem, to differentiate between condensation and humidity, and thus to understand whether the problem was due to structural failures of the buildings or to bad habits of the tenants. Through the inclusion of affected citizens, the community felt empowered to contribute ideas towards solutions to its problems, together with the landlords and the City Council.
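One standard way to turn temperature and humidity readings into the condensation-versus-structural-damp distinction the article mentions is a dew-point calculation: if an indoor surface is colder than the dew point of the room air, moisture will condense on it. The sketch below uses the well-known Magnus approximation; it is offered as an assumption about how such sensor data might be analysed, not as the Bristol project’s actual method.

```python
import math

def dew_point(temp_c: float, rel_humidity: float) -> float:
    """Magnus-formula approximation of the dew point in degrees Celsius."""
    a, b = 17.62, 243.12  # Magnus coefficients for water over a liquid surface
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity / 100)
    return (b * gamma) / (a - gamma)

def condensation_risk(air_temp_c: float, rel_humidity: float,
                      surface_temp_c: float) -> bool:
    """Flag condensation risk when a surface sits at or below the dew point."""
    return surface_temp_c <= dew_point(air_temp_c, rel_humidity)

# At 100% relative humidity the dew point equals the air temperature.
print(round(dew_point(20, 100), 1))   # 20.0
# A 12 °C wall in a 20 °C room at 80% humidity will attract condensation.
print(condensation_risk(20, 80, 12))  # True
```

Persistent high humidity with no cold-surface condensation would instead point to structural problems such as rising or penetrating damp, which is the kind of inference that lets tenants and landlords argue from data rather than blame.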

A similar process is currently being undertaken in Amsterdam, Barcelona and Pristina under the umbrella of the Making Sense Project. In this case, citizens affected by environmental issues are producing their own sensors and urban devices to collect open data about the city and organizing collective action and awareness interventions….

Digital social innovation is disrupting the field of health too. There are different manifestations of these processes. First, platforms such as DataDonors or PatientsLikeMe show that there is an increasing citizen participation in biomedical research through the donation of their own health data…. projects such as OpenCare in Milan and mobile applications like Good Sam show how citizens can organize themselves to provide medical services that otherwise would be very costly or at a scale and granularity that the public sector could hardly afford….

The production processes of these products and services force us to think about their political implications and the role of public institutions, as they question the cities’ existing participation and contribution rules. In times of sociopolitical turbulence and austerity plans such as these, there is a need to design and test new approaches to civic participation, production and management which can strengthen democracy, add value and take into account the aspirations, emotional intelligence and agency of both individuals and communities.

In order for the new wave of citizen production to generate social capital, inclusive innovation and well-being, it is necessary to ensure that all citizens, particularly those from less-represented communities, are empowered to contribute and participate in the design of cities-for-all. It is therefore essential to develop programs to increase citizen access to the new technologies and the acquisition of the knowhow and skills needed to use and transform them….(More)

This piece is an excerpt from an original article published as part of the eBook El ecosistema de la Democracia Abierta.

Israeli, French Politicians Endorse Blockchain for Governance Transparency


Komfie Manolo at Cryptovest: “Blockchain is moving into the world’s political systems, with several influential political figures in Israel and France recently emerging as new believers in the technology. They are betting on blockchain for more transparent governance and have joined the decentralized platform developed by Coalichain.

Among the seven Israeli politicians to endorse the platform are former deputy minister and interior minister Eli Yishay, deputy defense minister Eli Ben-Dan, and HaBait HaYehudi leader Shulamit Mualem-Refaeli. The move for a more accountable democracy has also been supported by Frederic Lefebvre, the founder of French political party Agir.

Levi Samama, co-founder and CEO of Coalichain, said that support for the platform was “a positive indication that politicians are actively seeking ways to be transparent and direct in the way they communicate with the public. In order to impact existing governance mechanisms we need the support and engagement of politicians and citizens alike.”

Acceptance of blockchain is gaining traction in the world of politics.

During last month’s presidential election in Russia, blockchain was used by state-run public opinion research center VTSIOM to track exit polls.

In the US, budding political group Indie Party wants to redefine the country’s political environment by providing an alternative to the established two-party system with a political marketplace that uses blockchain and cryptocurrency….(More)”

Privacy by Design: Building a Privacy Policy People Actually Want to Read


Richard Mabey at the Artificial Lawyer: “…when it came to updating our privacy policy ahead of GDPR it was important to us from the get-go that our privacy policy was not simply a compliance exercise. Legal documents should not be written by lawyers for lawyers; they should be useful, engaging and designed for the end user. But it seemed that we weren’t the only ones to think this. When we read the regulations, it turned out the EU agreed.

Article 12 mandates that privacy notices be “concise, transparent, intelligible and easily accessible”. Legal design is not just a nice to have in the context of privacy; it’s actually a regulatory imperative. With this mandate, the team at Juro set out with a simple aim: design a privacy policy that people would actually want to read.

Here’s how we did it.

Step 1: framing the problem

When it comes to privacy notices, the requirements of GDPR are heavy and the consequences of non-compliance enormous (potentially 4% of annual turnover). We knew therefore that there would be an inherent tension between making the policy engaging and readable, and at the same time robust and legally watertight.

Lawyers know that when it comes to legal drafting, it’s much harder to be concise than wordy. Specifically, it’s much harder to be concise and preserve legal meaning than it is to be wordy. But the fact remains: privacy notices are suffered as downside-risk protections or compliance items, rather than embraced as important customer communications at key touchpoints. So how to marry the two?

We decided that the obvious route of striking out words and translating legalese was not enough. We wanted cakeism: how can we have an exceptionally robust privacy policy, preserve legal nuance and actually make it readable?

Step 2: changing the design process

The usual flow of creating a privacy policy is pretty basic: (1) management asks legal to produce privacy policy, (2) legal sends Word version of privacy policy back to management (back and forth ensues), (3) management checks Word doc and sends it on to engineering for implementation, (4) privacy policy goes live…

Rather than the standard process, we decided to start with the end user and work backwards and started a design sprint (more about this here) on our privacy notice with multiple iterations, rapid prototyping and user testing.

Similarly, this was not going to be a process just for lawyers. We put together a multi-disciplinary team co-led by me and legal information designer Stefania Passera, with input from our legal counsel Adam, Tom (our content editor), Alice (our marketing manager) and Anton (our front-end developer).

Step 3: choosing design patterns...(More).