GovEx Launches First International Open Data Standards Directory


GT Magazine: “…A nonprofit gov tech group has created an international open data standards directory, aspiring to give cities a singular resource for guidance on formatting data they release to the public…The nature of municipal data is nuanced and diverse, and the format in which it is released often varies depending on subject matter. In other words, a format that works well for public safety data is not necessarily the same as one that works for info about building permits, transit or budgets. Not having a coordinated and agreed-upon resource to identify the best standards for these different types of info, Nicklin said, creates problems.

One such problem is that it can be time-consuming and challenging for city government data workers to research and identify ideal formats for data. Another is that the lack of info leads to discord between different jurisdictions, meaning one city might format a data set about economic development in an entirely different way than another, making collaboration and comparisons problematic.

What the directory does is provide a list of standards that are in use within municipal governments, as well as an evaluation based on how frequent that use is, whether the format is machine-readable, and whether users have to pay to license it, among other factors.
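
As a rough illustration of the kind of record and ranking such a directory makes possible, the sketch below shows one way an entry might be represented. The field names, the adoption figures and the “Proprietary Transit XML” entry are invented for the example, not taken from the GovEx directory.

```python
from dataclasses import dataclass

@dataclass
class DataStandard:
    """One entry in a hypothetical standards directory."""
    name: str
    domain: str              # e.g. "transit", "budgets", "building permits"
    adoption_count: int      # how many municipalities are known to use it
    machine_readable: bool
    license_fee: bool        # must users pay to license it?

def shortlist(standards, domain):
    """Standards for a domain: free, machine-readable formats first,
    then the most widely adopted."""
    candidates = [s for s in standards if s.domain == domain]
    return sorted(
        candidates,
        key=lambda s: (s.license_fee, not s.machine_readable, -s.adoption_count),
    )

catalog = [
    DataStandard("GTFS", "transit", 1200, True, False),
    DataStandard("Proprietary Transit XML", "transit", 40, True, True),
]
print([s.name for s in shortlist(catalog, "transit")])  # ['GTFS', 'Proprietary Transit XML']
```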

The directory currently contains 60 standards, some of which are in Spanish, and those involved with the project say they hope to expand their efforts to include more languages. There is also a crowdsourcing component to the directory, in that users are encouraged to make additions and updates….(More)”

Science’s Next Frontier? It’s Civic Engagement


Louise Lief at Discover Magazine: “…As a lay observer who has explored scientists’ relationship to the public, I have often wondered why many scientists and scientific institutions continue to rely on what is known as the “deficit model” of science communication, despite its well-documented shortcomings and even a backfire effect. This approach views the public as  “empty vessels” or “warped minds” ready to be set straight with facts. Perhaps many scientists continue to use it because it’s familiar and mimics classroom instruction. But it’s not doing the job.

Scientists spend much of their time with the public defending science, and little time building trust.

Many scientists also give low priority to trust building. At the 2016 American Association for the Advancement of Science conference, Michigan State University professor John C. Besley showed the results of a survey of scientists’ priorities for engaging with the public online.

Scientists are focusing on the frustrating, reactive task of defending science, spending little time establishing bonds of trust with the public, which comes in last as a professional priority. How much more productive their interactions with the public – and, through them, policymakers – would be if establishing trust were a top priority!

There is evidence that the public is hungry for such exchanges. When Research!America asked the public in 2016 how important it is for scientists to inform elected officials and the public about their research and its impact on society, 84 percent said it was very or somewhat important — a number that ironically mirrors the percentage of Americans who cannot name a scientist….

This means scientists need to go even further, venturing into unfamiliar local venues where science may not be mentioned but where communities gather to discuss their problems. Interesting new opportunities to do this are emerging nationwide. In 2014 the Chicago Community Trust, one of the nation’s largest community foundations, launched a series of dinners across the city through a program called On the Table, to discuss community problems and brainstorm possible solutions. In the first year, almost 10,000 city residents participated. In 2017, almost 100,000 Chicago residents took part. Recently the Trust added a grants component to the program, awarding more than $135,000 in small grants to help participants translate their ideas into action….(More)”.

How We Can Stop Earthquakes From Killing People Before They Even Hit


Justin Worland in Time Magazine: “…Out of that realization came a plan to reshape disaster management using big data. Just a few months later, Wani worked with two fellow Stanford students to create a platform to predict the toll of natural disasters. The concept is simple but also revolutionary. The One Concern software pulls geological and structural data from a variety of public and private sources and uses machine learning to predict the impact of an earthquake down to individual city blocks and buildings. Real-time information input during an earthquake improves how the system responds. And earthquakes represent just the start for the company, which plans to launch a similar program for floods and eventually other natural disasters….

Previous software might identify a general area where responders could expect damage, but it would appear as a “big red blob” that wasn’t helpful when deciding exactly where to send resources, Dayton says. The technology also integrates information from many sources and makes it easy to parse in an emergency situation when every moment matters. The instant damage evaluations mean fast and actionable information, so first responders can prioritize search and rescue in areas most likely to be worst-hit, rather than responding to 911 calls in the order they are received.
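
The general approach described here can be sketched, very loosely, as combining a pre-computed vulnerability estimate for each block with real-time shaking data and ranking blocks by expected harm. The toy below is illustrative only; it is not One Concern’s model, and every score, threshold and block in it is invented.

```python
# Toy block-level damage ranking (illustrative, not One Concern's model):
# combine structural vulnerability with live shaking intensity and
# population to decide where search and rescue should go first.

def predicted_damage(vulnerability, shaking_intensity):
    """Hypothetical damage score in [0, 1]: weaker buildings hit by
    stronger shaking score higher."""
    return min(1.0, vulnerability * shaking_intensity)

blocks = {
    # block_id: (structural vulnerability 0-1, resident population)
    "block_017": (0.9, 450),
    "block_042": (0.3, 900),
    "block_108": (0.7, 120),
}

# Hypothetical real-time shaking estimates per block.
shaking = {"block_017": 0.8, "block_042": 0.5, "block_108": 0.9}

priorities = sorted(
    blocks,
    key=lambda b: predicted_damage(blocks[b][0], shaking[b]) * blocks[b][1],
    reverse=True,
)
print(priorities)  # block IDs ranked by expected harm, worst first
```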

One Concern is not the only company that sees an opportunity to use data to rethink disaster response. The mapping company Esri has built rapid-response software that shows expected damage from disasters like earthquakes, wildfires and hurricanes. And the U.S. government has invested in programs to use data to shape disaster response at agencies like the National Oceanic and Atmospheric Administration (NOAA)….(More)”.

Software used to predict crime can now be scoured for bias


Dave Gershgorn in Quartz: “Predictive policing, or the idea that software can foresee where crime will take place, is being adopted across the country—despite being riddled with issues. These algorithms have been shown to disproportionately target minorities, and private companies won’t reveal how their software reached those conclusions.

In an attempt to stand out from the pack, predictive-policing startup CivicScape has released its algorithm and data online for experts to scour, according to Government Technology magazine. The company’s GitHub page is already populated with its code, as well as a variety of documents detailing how its algorithm interprets police data and what variables are included when predicting crime.

“By making our code and data open-source, we are inviting feedback and conversation about CivicScape in the belief that many eyes make our tools better for all,” the company writes on GitHub. “We must understand and measure bias in crime data that can result in disparate public safety outcomes within a community.”…

CivicScape claims to not use race or ethnic data to make predictions, although it is aware of other indirect indicators of race that could bias its software. The software also filters out low-level drug crimes, which have been found to be heavily biased against African Americans.
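
CivicScape’s actual code is the material on its GitHub page; purely to illustrate the two safeguards described above, the sketch below drops low-level drug offenses from the input data and screens the remaining features for correlation with neighborhood demographics, a rough check for indirect proxies of race. The offense labels and column names are assumptions, not CivicScape’s schema.

```python
import pandas as pd

# Hypothetical labels for offenses to exclude from training data.
LOW_LEVEL_DRUG = {"drug possession", "paraphernalia"}

def filter_incidents(incidents: pd.DataFrame) -> pd.DataFrame:
    """Drop low-level drug offenses before any model sees the data."""
    return incidents[~incidents["offense"].str.lower().isin(LOW_LEVEL_DRUG)]

def proxy_check(features: pd.DataFrame, pct_minority: pd.Series) -> pd.Series:
    """Correlation of each candidate predictor with the share of minority
    residents per area; large values flag potential proxy variables."""
    return features.corrwith(pct_minority).abs().sort_values(ascending=False)
```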

While this startup might be the first to publicly reveal the inner machinations of its algorithm and data practices, it’s not an assurance that predictive policing can be made fair and transparent across the board.

“Lots of research is going on about how algorithms can be transparent, accountable, and fair,” the company writes. “We look forward to being involved in this important conversation.”…(More)”.

Fighting Illegal Fishing With Big Data


Emily Matchar in Smithsonian: “In many ways, the ocean is the Wild West. The distances are vast, the law enforcement agents few and far between, and the legal jurisdiction often unclear. In this environment, illegal activity flourishes. Illegal fishing is so common that experts estimate as much as a third of fish sold in the U.S. was fished illegally. This illegal fishing decimates the ocean’s already dwindling fish populations and gives rise to modern slavery, where fishermen are tricked onto vessels and forced to work, sometimes for years.

A new use of data technology aims to help curb these abuses by shining a light on the high seas. The technology uses ships’ satellite signals to detect instances of transshipment, when two vessels meet at sea to exchange cargo. As transshipment is a major way illegally caught fish makes it into the legal supply chain, tracking it could potentially help stop the practice.

“[Transshipment] really allows people to do something out of sight,” says David Kroodsma, the research program director at Global Fishing Watch, an online data platform launched by Google in partnership with the nonprofits Oceana and SkyTruth. “It’s something that obscures supply chains. It’s basically being able to do things without any oversight. And that’s a problem when you’re using a shared resource like the oceans.”

Global Fishing Watch analyzed some 21 billion satellite signals broadcast by ships, which are required to carry transceivers for collision avoidance, between 2012 and 2016. It then used an artificial intelligence system it created to identify which ships were refrigerated cargo vessels (known in the industry as “reefers”). They then verified this information with fishery registries and other sources, eventually identifying 794 reefers—90 percent of the world’s total number of such vessels. They tracked instances where a reefer and a fishing vessel were moving at similar speeds in close proximity, labeling these instances as “likely transshipments,” and also traced instances where reefers were traveling in a way that indicated a rendezvous with a fishing vessel, even if no fishing vessel was present—fishing vessels often turn off their satellite systems when they don’t want to be seen. All in all, there were more than 90,000 likely or potential transshipments recorded.
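
Global Fishing Watch’s actual pipeline is far more elaborate, but the core rendezvous rule described above (a reefer and a fishing vessel close together, moving slowly at matched speeds) can be sketched roughly as follows; all of the thresholds here are hypothetical.

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical thresholds for flagging a likely transshipment.
MAX_DISTANCE_KM = 0.5
MAX_SPEED_KNOTS = 2.0
MAX_SPEED_DIFF_KNOTS = 0.5

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two positions, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def likely_transshipment(reefer, fisher):
    """Each argument: a dict with lat, lon and speed (knots) at one timestamp."""
    close = haversine_km(reefer["lat"], reefer["lon"],
                         fisher["lat"], fisher["lon"]) < MAX_DISTANCE_KM
    slow = reefer["speed"] < MAX_SPEED_KNOTS and fisher["speed"] < MAX_SPEED_KNOTS
    matched = abs(reefer["speed"] - fisher["speed"]) < MAX_SPEED_DIFF_KNOTS
    return close and slow and matched
```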

Even if these encounters were in fact transshipments, they would not all have been for nefarious purposes. They may have taken place to refuel or load up on supplies. But looking at the patterns of where the potential transshipments happen is revealing. Very few are seen close to the coasts of the U.S., Canada and much of Europe, all places with tight fishery regulations. There are hotspots off the coast of Peru and Argentina, all over Africa, and off the coast of Russia. Some 40 percent of encounters happen in international waters, far enough off the coast that no country has jurisdiction.

The tracked reefers were flying flags from some 40 different countries. But that doesn’t necessarily tell us much about where they really come from. Nearly half of the reefers tracked were flying “flags of convenience,” meaning they’re registered in countries other than where the ship’s owners are from to take advantage of those countries’ lax regulations….(More)”

Mass Observation: The amazing 80-year experiment to record our daily lives


William Cook at BBC Arts: “Eighty years ago, on 30th January 1937, the New Statesman published a letter which launched the largest (and strangest) writers’ group in British literary history.

An anthropologist called Tom Harrisson, a journalist called Charles Madge and a filmmaker called Humphrey Jennings wrote to the magazine asking for volunteers to take part in a new project called Mass Observation. Over a thousand readers responded, offering their services. Remarkably, this ‘scientific study of human social behaviour’ is still going strong today.

Mass Observation was the product of a growing interest in the social sciences, and a growing belief that the mass media wasn’t accurately reflecting the lives of so-called ordinary people. Instead of entrusting news gathering to jobbing journalists, who were under pressure to provide the stories their editors and proprietors wanted, Mass Observation recruited a secret army of amateur reporters, to track the habits and opinions of ‘the man in the street.’

Ironically, the three founders of this egalitarian movement were all extremely well-to-do. They’d all been to public schools and Oxbridge, but this was the ‘Age of Anxiety’, when capitalism was in chaos and dangerous demagogues were on the rise (plus ça change…).

For these idealistic public schoolboys, socialism was the answer, and Mass Observation was the future. By finding out what ‘ordinary’ folk were really doing, and really thinking, they would forge a new society, more attuned to the needs of the common man.

Mass Observation selected 500 citizen journalists, and gave them regular ‘directives’ to report back on virtually every aspect of their daily lives. They were guaranteed anonymity, which gave them enormous freedom. People opened up about themselves (and their peers) to an unprecedented degree.

Even though they were all unpaid, correspondents devoted a great deal of time to this endeavour – writing at great length, in great detail, over many years. As well as its academic value, Mass Observation proved that autobiography is not the sole preserve of the professional writer. For all of us, the urge to record and reflect upon our lives is a basic human need.

The Second World War was the perfect forum for this vast collective enterprise. Mass Observation became a national diary of life on the home front. For historians, the value of such uncensored revelations is enormous. These intimate accounts of air raids and rationing are far more revealing and evocative than the jolly state-sanctioned reportage of the war years.

After the war, Mass Observation became more commercial, supplying data for market research, and during the 1960s this extraordinary experiment gradually wound down. It was rescued from extinction by the historian Asa Briggs….

The founders of Mass Observation were horrified by what they called “the revival of racial superstition.” Hitler, Franco and Mussolini were in the forefront of their minds. “We are all in danger of extinction from such outbursts of atavism,” they wrote, in 1937. “We look to science to help us, only to find that science is too busy forging new weapons of mass destruction.”

For its founders, Mass Observation was a new science which would build a better future. For its countless correspondents, however, it became something more than that – not merely a social science, but a communal work of art….(More)”.

Artificial Intelligence “Jolted by Success”


Steven Aftergood in SecrecyNews: “Since 2010, the field of artificial intelligence (AI) has been “jolted” by the “broad and unforeseen successes” of one of its component technologies, known as multi-layer neural networks, leading to rapid developments and new applications, according to a new study from the JASON scientific advisory panel.
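
For readers unfamiliar with the term, a multi-layer neural network is just a stack of learned transformations applied one after another. The toy forward pass below (arbitrary layer sizes, random weights) shows the shape of such a model; it is purely illustrative and has nothing to do with the specific systems the JASON panel reviewed.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w1, b1, w2, b2):
    """Two-layer network: ReLU hidden layer, then softmax over classes."""
    hidden = np.maximum(0, x @ w1 + b1)
    logits = hidden @ w2 + b2
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

x = rng.normal(size=8)                           # a toy 8-feature input
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)  # hidden layer weights
w2, b2 = rng.normal(size=(16, 3)), np.zeros(3)   # output layer weights
print(forward(x, w1, b1, w2, b2))                # probabilities over 3 classes
```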

The JASON panel reviewed the current state of AI research and its potential use by the Department of Defense. See Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD, JSR-16-Task-003, January 2017….

The JASON report distinguishes between artificial intelligence — referring to the ability of computers to perform particular tasks that humans do with their brains — and artificial general intelligence (AGI) — meaning a human-like ability to pursue long-term goals and exercise purposive behavior.

“Where AI is oriented around specific tasks, AGI seeks general cognitive abilities.” Recent progress in AI has not been matched by comparable advances in AGI. Sentient machines, let alone a revolt of robots against their creators, are still somewhere far over the horizon, and may be permanently in the realm of fiction.

While many existing DoD weapon systems “have some degree of ‘autonomy’ relying on the technologies of AI, they are in no sense a step–not even a small step–towards ‘autonomy’ in the sense of AGI, that is, the ability to set independent goals or intent,” the JASONs said.

“Indeed, the word ‘autonomy’ conflates two quite different meanings, one relating to ‘freedom of will or action’ (like humans, or as in AGI), and the other the much more prosaic ability to act in accordance with a possibly complex rule set based on possibly complex sensor input, as in the word ‘automatic’. In using a terminology like ‘autonomous weapons’, the DoD may, as an unintended consequence, enhance the public’s confusion on this point.”…

This week the Department of Defense announced the demonstration of swarms of “autonomous” micro-drones. “The micro-drones demonstrated advanced swarm behaviors such as collective decision-making, adaptive formation flying, and self-healing,” according to a January 9 news release.

A journalistic account of recent breakthroughs in the use of artificial intelligence for machine translation appeared in the New York Times Magazine last month. See “The Great A.I. Awakening” by Gideon Lewis-Kraus, December 14, 2016…(More)”

Maybe the Internet Isn’t a Fantastic Tool for Democracy After All


In New York Magazine: “My favorite story about the internet is the one about the anonymous Japanese guy who liberated Czechoslovakia. In 1989, as open dissent was spreading across the country, dissidents were attempting to coordinate efforts outside the watchful eye of Czechoslovak state security. The internet was a nascent technology, and the cops didn’t use it; modems were banned, and activists were able to use only those they could smuggle over the border, one at a time. Enter our Japanese guy. Bruce Sterling, who first told the story of the Japanese guy in a 1995 Wired article, says he talked to four different people who’d met the quiet stranger, but no one knew his name. What really mattered, anyway, is what he brought with him: “a valise full of brand-new and unmarked 2400-baud Taiwanese modems,” which he handed over to a group of engineering students in Prague before walking away. “The students,” Sterling would later write, “immediately used these red-hot 2400-baud scorcher modems to circulate manifestos, declarations of solidarity, rumors, and riot news.” Unrest expanded, the opposition grew, and within months, the Communist regime collapsed.

Is it true? Were free modems the catalyst for the Velvet Revolution? Probably not. But it’s a good story, the kind whose logic and lesson have become so widely understood — and so foundational to the worldview of Silicon Valley — as to make its truth irrelevant. Isn’t the best way to fortify the town square by giving more people access to it? And isn’t it nice to know, as one storied institution and industry after another falls to the internet’s disrupting sword, that everything will be okay in the end — that there might be some growing pains, but connecting billions of people to one another is both inevitable and good? Free speech will expand, democracy will flower, and we’ll all be rich enough to own MacBooks. The new princes of Silicon Valley will lead us into the rational, algorithmically enhanced, globally free future.

Or, they were going to, until earlier this month. The question we face now is: What happens when the industry destroyed is professional politics, the institutions leveled are the same few that prop up liberal democracy, and the values the internet disseminates are racism, nationalism, and demagoguery?

Powerful undemocratic states like China and Russia have for a while now put the internet to use to mislead the public, create the illusion of mass support, and either render opposition invisible or expose it to targeting…(More)”

What’s wrong with big data?


James Bridle in the New Humanist: “In a 2008 article in Wired magazine entitled “The End of Theory”, Chris Anderson argued that the vast amounts of data now available to researchers made the traditional scientific process obsolete. No longer would they need to build models of the world and test them against sampled data. Instead, the complexities of huge and totalising datasets would be processed by immense computing clusters to produce truth itself: “With enough data, the numbers speak for themselves.” As an example, Anderson cited Google’s translation algorithms which, with no knowledge of the underlying structures of languages, were capable of inferring the relationship between them using extensive corpora of translated texts. He extended this approach to genomics, neurology and physics, where scientists are increasingly turning to massive computation to make sense of the volumes of information they have gathered about complex systems. In the age of big data, he argued, “Correlation is enough. We can stop looking for models.”

This belief in the power of data, of technology untrammelled by petty human worldviews, is the practical cousin of more metaphysical assertions. A belief in the unquestionability of data leads directly to a belief in the truth of data-derived assertions. And if data contains truth, then it will, without moral intervention, produce better outcomes. Speaking at Google’s private London Zeitgeist conference in 2013, Eric Schmidt, Google Chairman, asserted that “if they had had cellphones in Rwanda in 1994, the genocide would not have happened.” Schmidt’s claim was that technological visibility – the rendering of events and actions legible to everyone – would change the character of those actions. Not only is this statement historically inaccurate (there was plenty of evidence available of what was occurring during the genocide from UN officials, US satellite photographs and other sources), it’s also demonstrably untrue. Analysis of unrest in Kenya in 2007, when over 1,000 people were killed in ethnic conflicts, showed that mobile phones not only spread but accelerated the violence. But you don’t need to look to such extreme examples to see how a belief in technological determinism underlies much of our thinking and reasoning about the world.

“Big data” is not merely a business buzzword, but a way of seeing the world. Driven by technology, markets and politics, it has come to determine much of our thinking, but it is flawed and dangerous. It runs counter to our actual findings when we employ such technologies honestly and with the full understanding of their workings and capabilities. This over-reliance on data, which I call “quantified thinking”, has come to undermine our ability to reason meaningfully about the world, and its effects can be seen across multiple domains.

The assertion is hardly new. Writing in the Dialectic of Enlightenment in 1947, Theodor Adorno and Max Horkheimer decried “the present triumph of the factual mentality” – the predecessor to quantified thinking – and succinctly analysed the big data fallacy, set out by Anderson above. “It does not work by images or concepts, by the fortunate insights, but refers to method, the exploitation of others’ work, and capital … What men want to learn from nature is how to use it in order wholly to dominate it and other men. That is the only aim.” What is different in our own time is that we have built a world-spanning network of communication and computation to test this assertion. While it occasionally engenders entirely new forms of behaviour and interaction, the network most often shows to us with startling clarity the relationships and tendencies which have been latent or occluded until now. In the face of the increased standardisation of knowledge, it becomes harder and harder to argue against quantified thinking, because the advances of technology have been conjoined with the scientific method and social progress. But as I hope to show, technology ultimately reveals its limitations….

“Eroom’s law” – Moore’s law backwards – was recently formulated to describe a problem in pharmacology. Drug discovery has been getting more expensive. Since the 1950s the number of drugs approved for use in human patients per billion US dollars spent on research and development has halved every nine years. This problem has long perplexed researchers. According to the principles of technological growth, the trend should be in the opposite direction. In a 2012 paper in Nature entitled “Diagnosing the decline in pharmaceutical R&D efficiency”, the authors propose and investigate several possible causes for this. They begin with social and physical influences, such as increased regulation, increased expectations and the exhaustion of easy targets (the “low hanging fruit” problem). Each of these is – with qualifications – disposed of, leaving open the question of the discovery process itself….(More)
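
As a back-of-the-envelope restatement of that halving rule (the 1950 baseline value below is assumed, since the excerpt gives none):

```python
def drugs_per_billion(year, baseline_year=1950, baseline_value=100.0):
    """Approved drugs per billion (inflation-adjusted) R&D dollars,
    assuming a hypothetical baseline and a halving time of nine years."""
    return baseline_value * 0.5 ** ((year - baseline_year) / 9)

for year in (1950, 1980, 2010):
    print(year, round(drugs_per_billion(year), 2))
# Over 60 years the same spending yields roughly one-hundredth as many
# approved drugs, since 0.5 ** (60 / 9) is about 0.01.
```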

Crowdsourcing and cellphone data could help guide urban revitalization


Science Magazine: “For years, researchers at the MIT Media Lab have been developing a database of images captured at regular distances around several major cities. The images are scored according to different visual characteristics — how safe the depicted areas look, how affluent, how lively, and the like…. Adjusted for factors such as population density and distance from city centers, the correlation between perceived safety and visitation rates was strong, but it was particularly strong for women and people over 50. The correlation was negative for people under 30, which means that males in their 20s were actually more likely to visit neighborhoods generally perceived to be unsafe than to visit neighborhoods perceived to be safe.
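
One standard way to compute that kind of adjusted association, offered here only as a sketch and not necessarily the method the MIT researchers used, is to regress both variables on the controls and then correlate the residuals.

```python
import numpy as np

def residualize(y, controls):
    """Residuals of y after an ordinary least-squares fit on the controls."""
    X = np.column_stack([np.ones(len(y)), controls])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def adjusted_correlation(safety_score, visits, density, dist_to_center):
    """Correlation of perceived safety with visitation rates after
    removing the linear effect of density and distance from the center."""
    controls = np.column_stack([density, dist_to_center])
    return np.corrcoef(residualize(safety_score, controls),
                       residualize(visits, controls))[0, 1]
```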

In the same paper, the researchers also identified several visual features that are highly correlated with judgments that a particular area is safe or unsafe. Consequently, the work could help guide city planners in decisions about how to revitalize declining neighborhoods….

Jacobs’ theory, Hidalgo says, is that neighborhoods in which residents can continuously keep track of street activity tend to be safer; a corollary is that buildings with street-facing windows tend to create a sense of safety, since they imply the possibility of surveillance. Newman’s theory is an elaboration on Jacobs’, suggesting that architectural features that demarcate public and private spaces, such as flights of stairs leading up to apartment entryways or archways separating plazas from the surrounding streets, foster the sense that crossing a threshold will bring on closer scrutiny….(More)”