Oliver Wearn, Robin Freeman and David Jacoby in Nature: “Machine learning (ML) is revolutionizing efforts to conserve nature. ML algorithms are being applied to predict the extinction risk of thousands of species, assess the global footprint of fisheries, and identify animals and humans in wildlife sensor data recorded in the field. These efforts have recently been given a huge boost with support from the commercial sector. New initiatives, such as Microsoft’s AI for Earth and Google’s AI for Social Good, are bringing new resources and new ML tools to bear on some of the biggest challenges in conservation. In parallel to this, the open data revolution means that global-scale, conservation-relevant datasets can be fed directly to ML algorithms from open data repositories, such as Google Earth Engine for satellite data or Movebank for animal tracking data. Added to these will be Wildlife Insights, a Google-supported platform for hosting and…
Weather Service prepares to launch prediction model many forecasters don’t trust
Jason Samenow in the Washington Post: “In a month, the National Weather Service plans to launch its “next generation” weather prediction model with the aim of “better, more timely forecasts.” But many meteorologists familiar with the model fear it is unreliable.
The introduction of a model in which forecasters lack confidence matters, considering the enormous impact that weather has on the economy, valued at around $485 billion annually.
The Weather Service announced Wednesday that the model, known as the GFS-FV3 (FV3 stands for Finite Volume Cubed-Sphere dynamical core), is “tentatively” set to become the United States’ primary forecast model on March 20, pending tests. It is an update to the current version of the GFS (Global Forecast System), popularly known as the American model, which has existed in various forms for more than 30 years.
A concern is that if forecasters cannot rely on the FV3, they will be left to rely only on the European model for their predictions without a credible alternative for comparisons. And they’ll also have to pay large fees for the European model data. Whereas model data from the Weather Service is free, the European Center for Medium-Range Weather Forecasts, which produces the European model, charges for access.
But there is an alternative perspective, which is that forecasters will just need to adjust to the new model and learn to account for its biases. That is, a little short-term pain is worth the long-term potential benefits as the model improves.
The Weather Service’s parent agency, the National Oceanic and Atmospheric Administration, recently entered an agreement with the National Center for Atmospheric Research to increase collaboration between forecasters and researchers in improving forecast modeling.
In addition, President Trump recently signed into law the Weather Research and Forecast Innovation Act Reauthorization, which establishes the NOAA Earth Prediction Innovation Center, aimed at further enhancing prediction capabilities. But even while NOAA develops relationships and infrastructure to improve the Weather Service’s modeling, the question remains whether the FV3 can meet the forecasting needs of the moment. Until the problems identified are addressed, its introduction could represent a step back in U.S. weather prediction despite a well-intended effort to leap forward….(More).
Should Libraries Be the Keepers of Their Cities’ Public Data?
Linda Poon at CityLab: “In recent years, dozens of U.S. cities have released pools of public data. It’s an effort to improve transparency and drive…
But what often gets lost in the conversation is the idea of how public data should be collected, managed, and disseminated so that it serves everyone—rather than just a few residents—and so that people’s privacy and data rights are protected. That’s where librarians come in.
“As far as how private and public data should be handled, there isn’t really a strong model out there,” says Curtis Rogers, communications director for the Urban Libraries Council (ULC), an association of leading libraries across North America. “So to have the library as the local institution that is the most trusted, and to give them that responsibility, is a whole new paradigm for how data could be handled in a local government.”
In fact, librarians have long been advocates of digital inclusion and literacy. That’s why, last month, ULC launched a new initiative to give public libraries a leading role in a future with artificial intelligence. They kicked it off with a working group meeting in Washington, D.C., where representatives from libraries in cities like Baltimore, Toronto, Toledo, and Milwaukee met to exchange ideas on how to achieve that through education and by taking on a larger role in data governance.
It’s a broad initiative, and Rogers says they are still in the beginning stages of determining what that role will ultimately look like. But the group will discuss how data should be organized and managed, hash out the potential risks of artificial intelligence, and eventually develop a field-wide framework for how libraries can help drive equitable public data policies in cities.
Already, individual libraries are involved with their city’s data. Chattanooga Public Library (which wasn’t part of the working group, but is a member of ULC) began hosting the city’s open data portal in 2014, turning a traditionally print-centered institution into a community data hub. Since then, the portal has added more than 280 data sets and garnered hundreds of thousands of page views, according to a report for the 2018 fiscal year….
The Toronto Public Library is also in a unique position because it may soon sit inside one of North America’s “smartest” cities. Last month, the city’s board of trade published a 17-page report titled “BiblioTech,” calling for the library to oversee data governance for all smart city projects.
It’s a grand example of just how big the potential is for public libraries. Ryan says the proposal remains just that at the moment, and there are no details yet on what such a model would even look like. She adds that they were not involved in drafting the proposal, and were only asked to provide feedback. But the library is willing to entertain the idea.
Such ambitions would be a large undertaking in the U.S., however, especially for smaller libraries that are already understaffed and under-resourced. According to ULC’s survey of its members, only 23 percent of respondents said they have a staff person designated as the AI lead. A little over a quarter said they even have AI-related educational programming, and just 15 percent report being part of any local or national initiative.
Debbie Rabina, a professor of library science at Pratt Institute in New York, also cautions that putting libraries in charge of data governance has to be carefully thought out. It’s one thing for libraries to teach data literacy and privacy, and to help cities disseminate data. But to go further than that—to have libraries collecting and owning data and to have them assessing who can and can’t use the data—can lead to ethical conflicts and unintended consequences that could erode the public’s trust….(More)”.
Bureaucracy vs. Democracy
Philip Howard in The American Interest: “…For 50 years since the 1960s, modern government has been rebuilt on what I call the “philosophy of correctness.” The person making the decision must be able to demonstrate its correctness by compliance with a precise rule or metric, or by objective evidence in a trial-type proceeding. All day long, Americans are trained to ask themselves, “Can I prove that what I’m about to do is legally correct?”
In the age of individual rights, no one talks about the rights of institutions. But the disempowerment of institutional authority in the name of individual rights has led, ironically, to the disempowerment of individuals at every level of responsibility. Instead of striding confidently toward their goals, Americans tiptoe through legal minefields. In virtually every area of social interaction—schools, healthcare, business, public agencies, public works, entrepreneurship, personal services, community activities, nonprofit organizations, churches and synagogues, candor in the workplace, children’s play, speech on campus, and more—studies and reports confirm all the ways that sensible choices are prevented, delayed, or skewed by overbearing regulation, by an overemphasis on objective metrics, or by legal fear of violating someone’s alleged rights.
A Three-Part Indictment of Modern Bureaucracy
Reformers have promised to rein in bureaucracy for 40 years, and it’s only gotten more tangled. Public anger at government has escalated at the same time, and particularly in the past decade. While there’s a natural reluctance to abandon a bureaucratic structure that is well-intended, public anger is unlikely to be mollified until there is change, and populist solutions do not bode well for the future of democracy. Overhauling operating structures to permit practical governing choices would re-energize democracy as well as relieve the pressures Americans feel from Big Brother breathing down their necks.
Viewed in hindsight, the operating premise of modern bureaucracy was utopian and designed to fail. Here’s the three-part indictment of why we should abandon it.
1. The Economic Dysfunction of Modern Bureaucracy
Regulatory programs are indisputably wasteful, and frequently extract costs that exceed benefits. The total cost of compliance is high, about $2 trillion for federal regulation alone….
2. Bureaucracy Causes Cognitive Overload
The complex tangle of bureaucratic rules impairs a human’s ability to focus on the actual problem at hand. The phenomenon of the unhelpful bureaucrat, famously depicted in fiction by Dickens, Balzac, Kafka, Gogol, Heller, and others, has generally been characterized as a cultural flaw of the bureaucratic personality. But studies of cognitive overload suggest that the real problem is that people who are thinking about rules actually have diminished capacity to think about solving problems. This overload not only impedes drawing on…
3. Bureaucracy Subverts the Rule of Law
The purpose of…
The Big (data) Bang: Opportunities and Challenges for Compiling SDG Indicators
Steve…
Facebook could be forced to share data on effects to the young
Nicola Davis at The Guardian: “Social media companies such as Facebook and Twitter could be required by law to share data with researchers to help examine potential harms to young people’s health and identify who may be at risk.
Surveys and studies have previously suggested a link between the use of devices and networking sites and an increase in problems among teenagers and younger children ranging from poor sleep to bullying, mental health issues and grooming.
However, high-quality research in the area is scarce: among the conundrums that need to be looked at are matters of cause and effect, the size of any impacts, and the importance of the content of material accessed online.
According to a report by the Commons science and technology committee on the effects of social media and screen time among young people, companies should be compelled to protect users, and legislation is needed to enable access to data so that high-quality studies can be carried out.
The committee noted that the government had failed to commission such research and had instead relied on requesting reviews of existing studies. This was despite a 2017 green paper that set out a consultation process on a UK internet safety strategy.
“We understand [social media companies’] eagerness to protect the privacy of users but sharing data with bona fide researchers is the only way society can truly start to understand the impact, both positive and negative, that social media is having on the modern world,” said Norman Lamb, the Liberal Democrat MP who chairs the committee. “During our inquiry, we heard that social media companies had openly refused to share data with researchers who are keen to examine patterns of use and their effects. This is not good enough.”
Prof Andrew Przybylski, the director of research at the Oxford Internet Institute, said the issue of good quality research was vital, adding that many people’s perception of the effect of social media is largely rooted in hype.
“Social media companies must participate in open, robust, and transparent science with independent scientists,” he said. “Their data, which we give them, is both their most valuable resource and it is the only means by which we can effectively study how these platforms affect users.”…(More)”
Privacy concerns collide with the public interest in data
Gillian Tett in the Financial Times: “Late last year Statistics Canada — the agency that collects government figures — launched an innovation: it asked the country’s banks to supply “individual-level financial transactions data” for 500,000 customers to allow it to track economic trends. The agency argued…
Corporate boards around the world should take note. In the past year, executive angst has exploded about the legal and reputational risks created when private customer data leak out, either by accident or in a cyber hack. Last year’s Facebook scandals have been a hot debating topic among chief executives at this week’s World Economic Forum in Davos, as has the EU’s General Data Protection Regulation. However, there is another important side to this Big Data debate: must companies provide private digital data to public bodies for statistical and policy purposes? Or to put it another way, it is time to widen the debate beyond emotive privacy issues to include the public interest and policy needs. The issue has received little public debate thus far, except in Canada. But it is becoming increasingly important.
Companies are sitting on a treasure trove of digital data that offers valuable real-time signals about economic activity. This information could be even more significant than existing…
But the biggest data collections sit inside private companies. Big groups know this, and some are trying to respond. Google has created its own measures to track inflation, which it makes publicly available. JPMorgan and other banks crunch customer data and publish reports about general economic and financial trends. Some tech groups are even starting to volunteer data to government bodies. LinkedIn has offered to provide…
This Startup Is Challenging Google Maps—and It Needs You
Aarian Marshall at Wired: “A whole lifetime in New York…
Ting was one of 761 New Yorkers who downloaded, played with, and occasionally became obsessed with an app called MapNYC this fall, vying for their share of an 8-bitcoin prize (worth about $50,000 at the time). The month-long contest, run by a new mapping startup called StreetCred, was really an experiment. StreetCred’s main research question: How do you convince regular people to build and verify mapping data?
It turns out that the maps that guide you to the nearest Arby’s, or help your Lyft driver find your house, don’t just materialize. “I took mapping for granted until I started the competition,” Ting says, even though she pulls up Google Maps at least twice a day. “But it’s such an inconvenience if the info on the map is wrong, especially in a place like New York, that’s changing all the time.”
For regular folk, detailed, reliable mapping info is helpful. For businesses, it can be crucial. Some want to be found when a map user searches for the nearest sandwich shop. Others use products that rely on base maps—think Uber, the Weather Channel, your car’s navigation system—and require up-to-date location data. “One of the huge challenges to any geographic database is its currency,” says Renee Sieber, a geographer who studies participatory mapping at McGill University. That is to say, yesterday’s map is no good to anybody doing business today.
StreetCred sees that as an opportunity. “There’s a lot of companies, none of whom I can name, who have location data, and that data needs improvement,” says Randy Meech, CEO of the small startup. (Meech’s last open-source mapping company, a Samsung subsidiary called Mapzen, shut down in January.) Maybe a client found a data set online or purchased one from another company. Either way, it’s static, and that means it’s only a matter of time before it fails to represent reality.
Google Maps, the giant in this space, has created its extensive database through years of web scraping, Streetview roaming, purchasing and collecting satellite data, and both paying and asking volunteers to verify that the businesses it identifies are still in the same place. But the company doesn’t provide all of its specific location or “point of interest” data to developers—where that Thai restaurant is, or where the hiking trail starts, or where the hospital parking lot is located. When it and other mapping services like HERE Technologies, TomTom, and Foursquare do offer that…
Data Was Supposed to Fix the U.S. Education System. Here’s Why It Hasn’t.
Simon Rodberg at Harvard Business Review: “For too long, the American education system failed too many kids, including far too many poor kids and kids of color, without enough public notice or accountability. To combat this, leaders of all political persuasions championed the use of testing to measure progress and drive better results. Measurement has become so common that in school districts from coast to coast you can now find calendars marked “Data Days,” when teachers are expected to spend time not on teaching, but on analyzing data like end-of-year and mid-year exams, interim assessments, science and social studies tests, teacher-created and computer-adaptive tests, surveys, and attendance and behavior notes. It’s been this way for more than 30 years, and it’s time to try a different approach.
The big numbers are necessary, but the more they proliferate, the less value they add. Data-based answers lead to further data-based questions, testing, and analysis; and the psychology of leaders and policymakers means that the hunt for data gets in the way of actual learning. The drive for data responded to a real problem in education, but bad thinking about testing and data use has made the data cure worse than the disease.
The leadership decision at stake is how much data to collect. I’ve heard variations on “In God we trust; all others bring data” at any number of conferences and beginning-of-school-year speeches. But the mantra “we believe in data” is actually only shorthand for “we believe our actions should be informed by the best available data.” In education, that mostly means testing. In other fields, the process is different, but the issue is the same. The key question is not, “will the data be useful?” (of course it can be), or, “will the data be interesting?” (yes, again). The proper question for leaders to ask is: will the data help us make better-enough decisions to be worth the cost of getting and using it? So far, the answer is “no.”
Nationwide data suggests that the growth of data-driven schooling hasn’t worked even by its own lights. Harvard professor Daniel Koretz says, “The best estimate is that test-based accountability may have produced modest gains in elementary-school mathematics but no appreciable gains in either reading or high-school mathematics — even though reading and mathematics have been its primary focus.”
We wanted data to help us get past the problem of too many students learning too little, but it turns out that data is an insufficient, even misleading answer. It’s possible that all we’ve learned from our hyper-focus on data is that better instruction won’t come from more detailed information, but from changing what people do. That’s what data-driven reform is meant for, of course: convincing teachers of the need to change and…
The Internet of Bodies: A Convenient—and, Yes, Creepy—New Platform for Data Discovery
David Horrigan at ALM: “In the Era of the Internet of Things, we’ve become (at least somewhat) comfortable with our refrigerators knowing more about us than we know about ourselves and our Apple watches transmitting our every movement. The Internet of Things has even made it into the courtroom in cases such as the hot tub saga of Amazon Echo’s Alexa in State v. Bates and an unfortunate wife’s Fitbit in State v.…
But the Internet of Bodies?…
“The Internet of Bodies refers to the legal and policy implications of using the human body as a technology platform,” said Northeastern University law professor Andrea Matwyshyn, who also serves as co-director of Northeastern’s Center for Law, Innovation, and Creativity (CLIC).
“In brief, the Internet of Things (IoT) is moving onto and inside the human body, becoming the Internet of Bodies (IoB),” Matwyshyn added.
The Internet of Bodies is not merely a theoretical discussion of what might happen in the future. It’s happening already.
Former U.S. Vice President Dick Cheney revealed in 2013 that his physicians ordered the wireless capabilities of his heart implant disabled out of concern for potential assassin hackers, and in 2017, the U.S. Food and Drug Administration recalled almost half a million pacemakers over security issues requiring a firmware update.
It’s not just former vice presidents and heart patients becoming part of the Internet of Bodies. Northeastern’s Matwyshyn notes that so-called “smart pills” with sensors can report back health data from your stomach to smartphones, and a self-tuning brain implant is being tested to treat Alzheimer’s and Parkinson’s.
So, what’s not to like?
Better with Bacon?
“We are attaching everything to the Internet whether we need to or not,” Matwyshyn said, calling it the “Better with Bacon” problem, noting that—as bacon has become a popular condiment in restaurants—chefs are putting it on everything from drinks to cupcakes.
“It’s great if you love bacon, but not if you’re a vegetarian or if you just don’t like bacon. It’s not a bonus,” Matwyshyn added.
Matwyshyn’s bacon analogy raises interesting questions: Do we really need to connect everything to the Internet? Do the data privacy and data protection risks outweigh the benefits?
The Northeastern Law professor divides these IoB devices into three generations: 1) “body external” devices, such as…
Chip Party for Chipped Employees
A Wisconsin company, Three Square Market, made headlines in 2017—including an appearance on The Today Show—when the company microchipped its employees, not unlike what veterinarians do with the family pet. Not surprisingly, the company touted the benefits of implanting microchips under the skin of employees, including being able to wave one’s hand at a door instead of having to carry a badge or use a password….(More)”.