Open data aims to boost food security prospects


Mark Kinver at BBC News: “Rothamsted Research, a leading agricultural research institution, is attempting to make data from long-term experiments available to all.

In partnership with a data consultancy, it is developing a method to make complex results accessible and useable.

The institution is a member of the Godan Initiative that aims to make data available to the scientific community.

In September, Godan called on the public to sign its global petition to open agricultural research data.

“The continuing challenge we face is that the raw data alone is not sufficient for people to make sense of it,” said Chris Rawlings, head of computational and systems biology at Rothamsted Research.

“This is because the long-term experiments are very complex, and they are looking at agriculture and agricultural ecosystems, so you need to know a lot about what the intentions of the studies are, how they are being used, and the changes that have taken place over time.”

However, he added: “Even with this level of complexity, we do see a significant number of users contacting us or developing links with us.”

One size fits all

The ability to provide open data to all is one of the research organisation’s national capabilities, and forms a defining principle of its web portal to the experiments carried out at its North Wyke Farm Platform in North Devon.

Rothamsted worked in partnership with Tessella, a data consultancy, on the data collected from the experiments, which focused on livestock pastures.

The information, collected as often as every 15 minutes, includes water run-off levels, soil moisture, meteorological data, and soil nutrients; collection is expected to continue for decades.

“The data is quite varied and quite diverse, and [Rothamsted] wants to make this data available to the wider research community,” explained Tessella’s Andrew Bowen.

“What Rothamsted needed was a way to store it and a way to present it in a portal in which people could see what they had to offer.”

He told BBC News that there were a number of challenges that needed to be tackled.

One was the management of the data, and the team from Tessella adopted an “agile scrum” approach.

“Basically, what you do is draw up a list of the requirements, of what you need, and we break the project down into short iterations, starting with the highest priority,” he said.

“This means that you are able to take a more exploratory approach to the process of developing software. This is very well suited to the research environment.”…(More)”

Understanding the four types of AI, from reactive robots to self-aware beings


 at The Conversation: “…We need to overcome the boundaries that define the four different types of artificial intelligence, the barriers that separate machines from us – and us from them.

Type I AI: Reactive machines

The most basic types of AI systems are purely reactive, and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM’s chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.

Deep Blue can identify the pieces on a chess board and know how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the optimal moves from among the possibilities.

But it doesn’t have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same move three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.
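The “purely reactive” idea described above can be sketched in a few lines: the agent is a pure function of the current state, with no stored history, so the same board always yields the same move. The move generator and evaluation function below are toy stand-ins for illustration, not Deep Blue’s actual algorithms.

```python
# A purely reactive agent: its choice depends only on the current state.
# legal_moves() and evaluate() are hypothetical toy stand-ins.

def legal_moves(board):
    # Toy move generator: each "move" is just a value we could add.
    return [1, 2, 3]

def evaluate(board, move):
    # Toy evaluation: score the position that would result from the move.
    return board + move

def reactive_agent(board):
    # No memory, no learning: nothing before the present moment matters.
    return max(legal_moves(board), key=lambda m: evaluate(board, m))

print(reactive_agent(10))  # always 3 here, regardless of what came before
```

Because the agent carries no state between calls, replaying the same position produces the same decision every time — exactly the property the article attributes to reactive machines.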

This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn’t rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a “representation” of the world….

Type II AI: Limited memory

This Type II class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars’ speed and direction. That can’t be done in just one moment; it requires identifying specific objects and monitoring them over time.

These observations are added to the self-driving cars’ preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They’re included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.

But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience it can learn from, the way human drivers compile experience over years behind the wheel…
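The “limited memory” distinction can be sketched as a tracker that keeps only a short, transient window of past observations — enough to estimate another car’s speed over time, but discarded rather than accumulated into long-term experience. The class and numbers below are illustrative assumptions, not any vendor’s actual code.

```python
from collections import deque

class TransientTracker:
    """Keeps only the last few position observations of a tracked object.
    Older observations fall off the deque; nothing is ever added to a
    long-term library of experience."""

    def __init__(self, window=3):
        self.observations = deque(maxlen=window)  # transient memory only

    def observe(self, position, timestamp):
        self.observations.append((position, timestamp))

    def estimated_speed(self):
        # Speed cannot be inferred from a single moment; it needs at
        # least two observations spread over time.
        if len(self.observations) < 2:
            return None
        (p0, t0), (p1, t1) = self.observations[0], self.observations[-1]
        return (p1 - p0) / (t1 - t0)

tracker = TransientTracker()
tracker.observe(0.0, 0.0)    # position in metres, time in seconds
tracker.observe(15.0, 1.0)
print(tracker.estimated_speed())  # 15.0 m/s — used now, then forgotten
```

The fixed-size `deque` is the whole point: the representation of the past exists only long enough to inform the current decision.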

Type III AI: Theory of mind

We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific and discuss the types of representations machines need to form, and what they need to be about.

Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called “theory of mind” – the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.

This is crucial to how we humans formed societies, because it allowed us to have social interactions. Without understanding each other’s motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.

If AI systems are indeed ever to walk among us, they’ll have to be able to understand that each of us has thoughts and feelings and expectations for how we’ll be treated. And they’ll have to adjust their behavior accordingly.

Type IV AI: Self-awareness

The final step of AI development is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it….

While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences….(More)”

AI Ethics: The Future of Humanity 


Report by sparks & honey: “Through our interaction with machines, we develop emotional, human expectations of them. Alexa, for example, comes alive when we speak with it. AI is and will be a representation of its cultural context, the values and ethics we apply to one another as humans.

This machinery is eerily familiar as it mirrors us, and eventually becomes even smarter than us mere mortals. We’re programming its advantages based on how we see ourselves and the world around us, and we’re doing this at an incredible pace. This shift is pervading culture from our perceptions of beauty and aesthetics to how we interact with one another – and our AI.

Infused with technology, we’re asking: what does it mean to be human?

Our report examines:

• The evolution of our empathy from humans to animals and robots
• How we treat AI in its infancy like we do a child, allowing it space to grow
• The spectrum of our emotional comfort in a world embracing AI
• The cultural contexts fueling AI biases, such as gender stereotypes, that drive the direction of AI
• How we place an innate trust in machines, more than we do one another (Download for free)”

 

Environmental Law, Big Data, and the Torrent of Singularities


Essay by William Boyd: “How will big data impact environmental law in the near future? This Essay imagines one possible future for environmental law in 2030 that focuses on the implications of big data for the protection of public health from risks associated with pollution and industrial chemicals. It assumes the perspective of an historian looking back from the end of the twenty-first century at the evolution of environmental law during the late twentieth and early twenty-first centuries.

The premise of the Essay is that big data will drive a major shift in the underlying knowledge practices of environmental law (along with other areas of law focused on health and safety). This change in the epistemic foundations of environmental law, it is argued, will in turn have important, far-reaching implications for environmental law’s normative commitments and for its ability to discharge its statutory responsibilities. In particular, by significantly enhancing the ability of environmental regulators to make harm more visible and more traceable, big data will put considerable pressure on previous understandings of acceptable risk across populations, pushing toward a more singular and more individualized understanding of harm. This will raise new and difficult questions regarding environmental law’s capacity to confront and take responsibility for the actual lives caught up in the tragic choices it is called upon to make. In imagining this near future, the Essay takes a somewhat exaggerated and, some might argue, overly pessimistic view of the implications of big data for environmental law’s efforts to protect public health. This is done not out of a conviction that such a future is likely, but rather to highlight some of the potential problems that may arise as big data becomes a more prominent part of environmental protection. In an age of data triumphalism, such a perspective, it is hoped, may provide grounds for a more critical engagement with the tools and knowledge practices that inform environmental law and the implications of those tools for environmental law’s ability to meet its obligations. Of course, there are other possible futures, and big data surely has the potential to make many positive contributions to environmental protection in the coming decades. Whether it will do so will depend in no small part on the collective choices we make to manage these new capabilities in the years ahead….(More)”

Power to the People: Addressing Big Data Challenges in Neuroscience by Creating a New Cadre of Citizen Neuroscientists


Jane Roskams and Zoran Popović in Neuron: “Global neuroscience projects are producing big data at an unprecedented rate that informatic and artificial intelligence (AI) analytics simply cannot handle. Online games, like Foldit, Eterna, and Eyewire—and now a new neuroscience game, Mozak—are fueling a people-powered research science (PPRS) revolution, creating a global community of “new experts” that over time synergize with computational efforts to accelerate scientific progress, empowering us to use our collective cerebral talents to drive our understanding of our brain….(More)”

Portugal has announced the world’s first nationwide participatory budget


Graça Fonseca at apolitical: “Portugal has announced the world’s first participatory budget on a national scale. The project will let people submit ideas for what the government should spend its money on, and then vote on which ideas are adopted.

Although participatory budgeting has become increasingly popular around the world in the past few years, it has so far been confined to cities and regions, and no country that we know of has attempted it nationwide. To reach as many people as possible, Portugal is also examining another innovation: letting people cast their votes via ATMs.

‘It’s about quality of life, it’s about the quality of public space, it’s about the quality of life for your children, it’s about your life, OK?’ Graça Fonseca, the minister responsible, told Apolitical. ‘And you have a huge deficit of trust between people and the institutions of democracy. That’s the point we’re starting from and, if you look around, Portugal is not an exception in that among Western societies. We need to build that trust and, in my opinion, it’s urgent. If you don’t do anything, in ten, twenty years you’ll have serious problems.’

Although the official window for proposals begins in January, some have already been submitted to the project’s website. One suggests equipping kindergartens with technology to teach children about robotics. Using the open-source platform Arduino, the plan is to let children play with the tech and so foster scientific understanding from the earliest age.

Proposals can be made in the areas of science, culture, agriculture and lifelong learning, and there will be more than forty events in the new year for people to present and discuss their ideas.

The organisers hope that it will go some way to restoring closer contact between government and its citizens. Previous projects have shown that people who don’t vote in general elections often do cast their ballot on the specific proposals that participatory budgeting entails. Moreover, those who make the proposals often become passionate about them, campaigning for votes, flyering, making YouTube videos, going door-to-door and so fuelling a public discussion that involves ever more people in the process.

On the other side, it can bring public servants nearer to their fellow citizens by sharpening their understanding of what people want and what their priorities are. It can also raise the quality of public services by directing them more precisely to where they’re needed as well as by tapping the collective intelligence and imagination of thousands of participants….

Although it will not be used this year, because the project is still very much in the trial phase, the use of ATMs is potentially revolutionary. As Fonseca puts it, ‘In every remote part of the country, you might have nothing else, but you have an ATM.’ Moreover, an ATM could display proposals and allow people to vote directly, not least because it already contains a secure way of verifying their identity. At the moment, for comparison, people can vote by text or online, sending in the number from their ID card, which is checked against a database….(More)”.

Wikipedia’s not as biased as you might think


Ananya Bhattacharya in Quartz: “The internet is as open as people make it. Often, people limit their Facebook and Twitter circles to like-minded people and only follow certain subreddits, blogs, and news sites, creating an echo chamber of sorts. In a sea of biased content, Wikipedia is one of the few online outlets that strives for neutrality. After 15 years in operation, it’s starting to see results.

Researchers at Harvard Business School evaluated almost 4,000 articles in Wikipedia’s online database against the same entries in Encyclopaedia Britannica to compare their biases. They focused on English-language articles about US politics, especially controversial topics, that appeared in both outlets in 2012.

“That is just not a recipe for coming to a conclusion,” Shane Greenstein, one of the study’s authors, said in an interview. “We were surprised that Wikipedia had not failed, had not fallen apart in the last several years.”

Greenstein and his co-author Feng Zhu categorized each article as “blue” or “red.” Drawing from research in political science, they identified terms that are idiosyncratic to each party. For instance, political scientists have identified that Democrats were more likely to use phrases such as “war in Iraq,” “civil rights,” and “trade deficit,” while Republicans used phrases such as “economic growth,” “illegal immigration,” and “border security.”…
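The classification method described — counting phrases idiosyncratic to each party — can be roughly sketched as below. The phrase lists are the examples quoted in the article; the scoring rule (simple phrase counts) is an illustrative simplification, not the study’s actual methodology.

```python
# Toy slant scorer: count party-idiosyncratic phrases in an article's text.
# Phrase lists are the examples quoted above; the scoring rule is a
# simplification of the Greenstein-Zhu approach, for illustration only.

DEMOCRAT_PHRASES = ["war in iraq", "civil rights", "trade deficit"]
REPUBLICAN_PHRASES = ["economic growth", "illegal immigration", "border security"]

def slant(text):
    text = text.lower()
    blue = sum(text.count(p) for p in DEMOCRAT_PHRASES)
    red = sum(text.count(p) for p in REPUBLICAN_PHRASES)
    if blue > red:
        return "blue"
    if red > blue:
        return "red"
    return "neutral"

print(slant("Debate focused on the war in Iraq and civil rights."))  # blue
print(slant("Economic growth depends on border security."))          # red
```

An article that draws its vocabulary evenly from both lists — or from neither — scores as neutral, which is the balance the study found Wikipedia articles drifting toward as they were revised.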

“In comparison to expert-based knowledge, collective intelligence does not aggravate the bias of online content when articles are substantially revised,” the authors wrote in the paper. “This is consistent with a best-case scenario in which contributors with different ideologies appear to engage in fruitful online conversations with each other, in contrast to findings from offline settings.”

More surprisingly, the authors found that the 2.8 million registered volunteer editors who were reviewing the articles also became less biased over time. “You can ask questions like ‘do editors with red tendencies tend to go to red articles or blue articles?’” Greenstein said. “You find a prevalence of opposites attract, and that was striking.” The researchers even identified the political stance for a number of anonymous editors based on their IP locations, and the trend held steadfast….(More)”

The Risk to Civil Liberties of Fighting Crime With Big Data


 in the New York Times: “…Sharing data, both among the parts of a big police department and between the police and the private sector, “is a force multiplier,” he said.

Companies working with the military and intelligence agencies have long practiced these kinds of techniques, which the companies are bringing to domestic policing, in much the way surplus military gear has beefed up American SWAT teams.

Palantir first built up its business by offering products like maps of social networks of extremist bombers and terrorist money launderers, and figuring out efficient driving routes to avoid improvised explosive devices.

Palantir used similar data-sifting techniques in New Orleans to spot individuals most associated with murders. Law enforcement departments around Salt Lake City used Palantir to allow common access to 40,000 arrest photos, 520,000 case reports and information like highway and airport data — building human maps of suspected criminal networks.

People in the predictive business sometimes compare what they do to controlling the other side’s “OODA loop,” a term first developed by a fighter pilot and military strategist named John Boyd.

OODA stands for “observe, orient, decide, act” and is a means of managing information in battle.

“Whether it’s war or crime, you have to get inside the other side’s decision cycle and control their environment,” said Robert Stasio, a project manager for cyberanalysis at IBM, and a former United States government intelligence official. “Criminals can learn to anticipate what you’re going to do and shift where they’re working, employ more lookouts.”

IBM sells tools that also enable police to become less predictable, for example, by taking different routes into an area identified as a crime hotspot. It has also conducted studies that show changing tastes among online criminals — for example, a move from hacking retailers’ computers to stealing health care data, which can be used to file for federal tax refunds.

But there are worries about what military-type data analysis means for civil liberties, even among the companies that get rich on it.

“It definitely presents challenges to the less sophisticated type of criminal, but it’s creating a lot of what is called ‘Big Brother’s little helpers,’” Mr. Bowman said. For now, he added, much of the data abundance problem is that “most police aren’t very good at this.”…(More)”

Data Ethics – The New Competitive Advantage


Book by Gry Hasselbalch and Pernille Tranberg: “…describes over 50 cases of mainly private companies working with data ethics to varying degrees.

Respect for privacy and the right to control one’s own data are becoming key parameters to gain a competitive edge in today’s business world. Companies, organisations and authorities which view data ethics as a social responsibility, giving it the same importance as environmental awareness and respect for human rights, are tomorrow’s winners. Digital trust is paramount to digital growth and prosperity.
This book combines broad trend analyses with case studies to examine companies which use data ethics to varying degrees. The authors make the case that citizens and consumers are no longer just concerned about a lack of control over their data, but they also have begun to act. In addition, they describe alternative business models, advances in technology and a new European data protection regulation, all of which combine to foster a growing market for data-ethical products and services….(More)”.

Thinking about Smart Cities: The Travels of a Policy Idea that Promises a Great Deal, but So Far Has Delivered Modest Results


Paper by Amy K. Glasmeier and Molly Nebiolo in Sustainability: “… explores the unique challenge of contemporary urban problems and the technologies that vendors have to solve them. An acknowledged gap exists between widely referenced technologies that city managers utilize to optimize scheduled operations and those that reflect the capability of spontaneity in search of nuance–laden solutions to problems related to the reflexivity of entire systems. With regulation, the first issue type succumbs to rehearsed preparation whereas the second hinges on extemporaneous practice. One is susceptible to ready-made technology applications while the other requires systemic deconstruction and solution-seeking redesign. Research suggests that smart city vendors are expertly configured to address the former, but less adept at and even ill-configured to react to and address the latter. Departures from status quo responses to systemic problems depend on formalizing metrics that enable city monitoring and data collection to assess “smart investments”, regardless of the size of the intervention, and to anticipate the need for designs that preserve the individuality of urban settings as they undergo the transformation to become “smart”….(More)”