Open data aims to boost food security prospects


Mark Kinver at BBC News: “Rothamsted Research, a leading agricultural research institution, is attempting to make data from long-term experiments available to all.

In partnership with a data consultancy, it is developing a method to make complex results accessible and usable.

The institution is a member of the Godan Initiative that aims to make data available to the scientific community.

In September, Godan called on the public to sign its global petition to open agricultural research data.

“The continuing challenge we face is that the raw data alone is not sufficient for people to make sense of it,” said Chris Rawlings, head of computational and systems biology at Rothamsted Research.

“This is because the long-term experiments are very complex, and they are looking at agriculture and agricultural ecosystems, so you need to know a lot about what the intentions of the studies are, how they are being used, and the changes that have taken place over time.”

However, he added: “Even with this level of complexity, we do see a significant number of users contacting us or developing links with us.”

One size fits all

The ability to provide open data to all is one of the research organisation’s national capabilities, and forms a defining principle of its web portal to the experiments carried out at its North Wyke Farm Platform in North Devon.

Rothamsted worked in partnership with Tessella, a data consultancy, on the data collected from the experiments, which focused on livestock pastures.

The information being collected, as often as every 15 minutes, includes water run-off levels, soil moisture, meteorological data, and soil nutrients, and this is expected to run for decades.

“The data is quite varied and quite diverse, and [Rothamsted] wants to make this data available to the wider research community,” explained Tessella’s Andrew Bowen.

“What Rothamsted needed was a way to store it and a way to present it in a portal in which people could see what they had to offer.”

He told BBC News that there were a number of challenges that needed to be tackled.

One was the management of the data, and the team from Tessella adopted an “agile scrum” approach.

“Basically, what you do is draw up a list of the requirements, of what you need, and we break the project down into short iterations, starting with the highest priority,” he said.

“This means that you are able to take a more exploratory approach to the process of developing software. This is very well suited to the research environment.”…(More)”

Understanding the four types of AI, from reactive robots to self-aware beings


 at The Conversation: “…We need to overcome the boundaries that define the four different types of artificial intelligence, the barriers that separate machines from us – and us from them.

Type I AI: Reactive machines

The most basic types of AI systems are purely reactive, and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM’s chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.

Deep Blue can identify the pieces on a chess board and know how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the best moves from among the possibilities.

But it doesn’t have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same move three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.

This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn’t rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a “representation” of the world….

Type II AI: Limited memory

This Type II class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars’ speed and direction. That can’t be done in just one moment; it requires identifying specific objects and monitoring them over time.

These observations are added to the self-driving cars’ preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They’re included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.

But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience it can learn from, the way human drivers compile experience over years behind the wheel…

Type III AI: Theory of mind

We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific and discuss the types of representations machines need to form, and what they need to be about.

Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called “theory of mind” – the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.

This is crucial to how we humans formed societies, because it allowed us to have social interactions. Without understanding each other’s motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.

If AI systems are indeed ever to walk among us, they’ll have to be able to understand that each of us has thoughts and feelings and expectations for how we’ll be treated. And they’ll have to adjust their behavior accordingly.

Type IV AI: Self-awareness

The final step of AI development is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it….

While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences….(More)”

Beyond nudging: it’s time for a second generation of behaviourally-informed social policy


Katherine Curchin at LSE Blog: “…behavioural scientists are calling for a second generation of behaviourally-informed policy. In some policy areas, nudges simply aren’t enough. Behavioural research shows stronger action is required to attack the underlying cause of problems. For example, many scholars have argued that behavioural insights provide a rationale for regulation to protect consumers from manipulation by private sector companies. But what might a second generation of behaviourally-informed social policy look like?

Behavioural insights could provide a justification to change the trajectory of income support policy. Since the 1990s policy attention has focused on the moral character of benefits recipients. Inspired by Lawrence Mead’s paternalist philosophy, governments have tried to increase the resolve of the unemployed to work their way out of poverty. More and more behavioural requirements have been attached to benefits to motivate people to fulfil their obligations to society.

But behavioural research now suggests that these harsh policies are misguided. Behavioural science supports the idea that people often make poor decisions and do things which are not in their long-term interests. But the weakness of individuals’ moral constitution isn’t so much the problem as the unequal distribution of opportunities in society. There are circumstances in which humans are unlikely to flourish no matter how motivated they are.

Normal human psychological limitations – our limited cognitive capacity, limited attention and limited self-control – interact with the environment to produce the behaviour that advocates of harsh welfare regimes attribute to permissive welfare. In their book Scarcity, Sendhil Mullainathan and Eldar Shafir argue that the experience of deprivation creates a mindset that makes it harder to process information, pay attention, make good decisions, plan for the future, and resist temptations.

Importantly, behavioural scientists have demonstrated that this mindset can be temporarily created in the laboratory by placing subjects in artificial situations which induce the feeling of not having enough. As a consequence, experimental subjects from middle-class backgrounds suddenly display the short-term thinking and irrational decision making often attributed to a culture of poverty.

Tying inadequate income support to a list of behavioural conditions will most punish those who are suffering most. Empirical studies of welfare conditionality have found that benefit claimants often do not comprehend the complicated rules that apply to them. Some are being punished for lack of understanding rather than deliberate non-compliance.

Behavioural insights can be used to mount a case for a more generous, less punitive approach to income support. The starting point is to acknowledge that some of Mead’s psychological assumptions have turned out to be wrong. The nature of the cognitive machinery humans share imposes limits on how self-disciplined and conscientious we can reasonably expect people living in adverse circumstances to be. We have placed too much emphasis on personal responsibility in recent decades. Why should people internalize the consequences of their behaviour when this behaviour is to a large extent the product of their environment?…(More)”

The Risk to Civil Liberties of Fighting Crime With Big Data


 in the New York Times: “…Sharing data, both among the parts of a big police department and between the police and the private sector, “is a force multiplier,” he said.

Companies working with the military and intelligence agencies have long practiced these kinds of techniques, which the companies are bringing to domestic policing, in much the way surplus military gear has beefed up American SWAT teams.

Palantir first built up its business by offering products like maps of social networks of extremist bombers and terrorist money launderers, and figuring out efficient driving routes to avoid improvised explosive devices.

Palantir used similar data-sifting techniques in New Orleans to spot individuals most associated with murders. Law enforcement departments around Salt Lake City used Palantir to allow common access to 40,000 arrest photos, 520,000 case reports and information like highway and airport data — building human maps of suspected criminal networks.

People in the predictive business sometimes compare what they do to controlling the other side’s “OODA loop,” a term first developed by a fighter pilot and military strategist named John Boyd.

OODA stands for “observe, orient, decide, act” and is a means of managing information in battle.

“Whether it’s war or crime, you have to get inside the other side’s decision cycle and control their environment,” said Robert Stasio, a project manager for cyberanalysis at IBM, and a former United States government intelligence official. “Criminals can learn to anticipate what you’re going to do and shift where they’re working, employ more lookouts.”

IBM sells tools that also enable police to become less predictable, for example, by taking different routes into an area identified as a crime hotspot. It has also conducted studies that show changing tastes among online criminals — for example, a move from hacking retailers’ computers to stealing health care data, which can be used to file for federal tax refunds.

But there are worries about what military-type data analysis means for civil liberties, even among the companies that get rich on it.

“It definitely presents challenges to the less sophisticated type of criminal, but it’s creating a lot of what is called ‘Big Brother’s little helpers,’” Mr. Bowman said. For now, he added, much of the data abundance problem is that “most police aren’t very good at this.”…(More)”

How to ensure smart cities benefit everyone


 at the Conversation Global: “By 2030, 60 percent of the world’s population is expected to live in mega-cities. How all those people live, and what their lives are like, will depend on important choices leaders make today and in the coming years.

Technology has the power to help people live in communities that are more responsive to their needs and that can actually improve their lives. For example, Beijing, notorious for air pollution, is testing a 23-foot-tall air purifier that vacuums up smog, filters the bad particles and releases clear air.

This isn’t a vision of life like on “The Jetsons.” It’s real urban communities responding in real-time to changing weather, times of day and citizen needs. These efforts can span entire communities. They can vary from monitoring traffic to keep cars moving efficiently or measuring air quality to warn residents (or turn on massive air purifiers) when pollution levels climb.

Using data and electronic sensors in this way is often referred to as building “smart cities,” which are the subject of a major global push to improve how cities function. In part a response to incoherent infrastructure design and urban planning of the past, smart cities promise real-time monitoring, analysis and improvement of city decision-making. The results, proponents say, will improve efficiency, environmental sustainability and citizen engagement.

Smart city projects are big investments that are supposed to drive social transformation. Decisions made early in the process determine what exactly will change. But most research and planning regarding smart cities is driven by the technology, rather than the needs of the citizens. Little attention is given to the social, policy and organizational changes that will be required to ensure smart cities are not just technologically savvy but intelligently adaptive to their residents’ needs. Design will make the difference between smart city projects that deliver on their promise and projects that reinforce, or even widen, the existing gaps in how unequally cities serve their residents.

City benefits from efficiency

A key feature of smart cities is that they create efficiency. Well-designed technology tools can benefit government agencies, the environment and residents. Smart cities can improve the efficiency of city services by eliminating redundancies, finding ways to save money and streamlining workers’ responsibilities. The results can provide higher-quality services at lower cost….

Environmental effects

Another way to save money involves real-time monitoring of energy use, which can also identify opportunities for environmental improvement.

The city of Chicago has begun implementing an “Array of Things” initiative by installing boxes on municipal light poles with sensors and cameras that can capture air quality, sound levels, temperature, water levels on streets and gutters, and traffic.

The data collected are expected to serve as a sort of “fitness tracker for the city,” by identifying ways to save energy, to address urban flooding and improve living conditions.

Helping residents

Perhaps the largest potential benefit from smart cities will come from enhancing residents’ quality of life. The opportunities cover a broad range of issues, including housing and transportation, happiness and optimism, educational services, environmental conditions and community relationships.

Efforts along this line can include tracking and mapping residents’ health, using data to fight neighborhood blight, identifying instances of discrimination and deploying autonomous vehicles to increase residents’ safety and mobility….(More)“.

Tackling Corruption with People-Powered Data


Sandra Prüfer at Mastercard Center for Inclusive Growth: “Informal fees plague India’s “free” maternal health services. In Nigeria, village households don’t receive the clean cookstoves their government paid for. Around the world, corruption – coupled with the inability to find and share information about it – stymies development in low-income communities.

Now, digital transparency platforms – supplemented with features illiterate and rural populations can use – make it possible for traditionally excluded groups to make their voices heard and access tools they need to grow.

Mapping Corruption Hot Spots in India

One of the problems surrounding access to information is the lack of reliable information in the first place: a popular method to create knowledge is crowdsourcing and enlisting the public to monitor and report on certain issues.

The Mera Swasthya Meri Aawaz platform, which means “Our Health, Our Voice”, is an interactive map in Uttar Pradesh launched by the Indian non-profit organization SAHAYOG. It enables women to anonymously report illicit fees charged for services at maternal health clinics using their mobile phones.

To reduce infant mortality and deaths in childbirth, the Indian government provides free prenatal care and cash incentives to use maternal health clinics, but many charge illegal fees anyway – cutting mothers off from lifesaving healthcare and inhibiting communities’ growth. An estimated 45,000 women in India died in 2015 from complications of pregnancy and childbirth – one of the highest rates of any country in the world; low-income women are disproportionately affected….“Documenting illegal payment demands in real time and aggregating the data online increased governmental willingness to listen,” Sandhya says. “Because the data is linked to technology, its authenticity is not questioned.”

Following the Money in Nigeria

In Nigeria, Connected Development (CODE) also champions open data to combat corruption in infrastructure building, health and education projects. Its mission is to improve access to information and empower local communities to share data that can expose financial irregularities. Since 2012, the Abuja-based watchdog group has investigated twelve capital projects, successfully pressuring the government to release funds including $5.3 million to treat 1,500 lead-poisoned children.

“People activate us: if they know about any project that is supposed to be in their community, but isn’t, they tell us they want us to follow the money – and we’ll take it from there,” says CODE co-founder Oludotun Babayemi.

Users alert the watchdog group directly through its webpage, which publishes open-source data about development projects that are supposed to be happening, based on reports from freedom of information requests to Nigeria’s federal minister of environment, World Bank data and government press releases.

Last year, as part of their #WomenCookstoves reporting campaign, CODE revealed an apparent scam by tracking a $49.8 million government project that was supposed to purchase 750,000 clean cookstoves for rural women. Smoke inhalation diseases disproportionately affect women who spend time cooking over wood fires; according to the World Health Organization, almost 100,000 people die yearly in Nigeria from inhaling wood smoke, the country’s third biggest killer after malaria and AIDS.

“After three months, we found out that only 15 percent of the $48 million was given to the contractor – meaning there were only 45,000 cook stoves out of 750,000 in the country,” Babayemi says….(More)”

Essays on collective intelligence


Thesis by Yiftach Nagar: “This dissertation consists of three essays that advance our understanding of collective-intelligence: how it works, how it can be used, and how it can be augmented. I combine theoretical and empirical work, spanning qualitative inquiry, lab experiments, and design, exploring how novel ways of organizing, enabled by advancements in information technology, can help us work better, innovate, and solve complex problems.

The first essay offers a collective sensemaking model to explain structurational processes in online communities. I draw upon Weick’s model of sensemaking as committed-interpretation, which I ground in a qualitative inquiry into Wikipedia’s policy discussion pages, in an attempt to explain how structuration emerges as interpretations are negotiated, and then committed through conversation. I argue that the wiki environment provides conditions that help commitments form, strengthen and diffuse, and that this, in turn, helps explain trends of stabilization observed in previous research.

In the second essay, we characterize a class of semi-structured prediction problems, where patterns are difficult to discern, data are difficult to quantify, and changes occur unexpectedly. Making correct predictions under these conditions can be extremely difficult, and is often associated with high stakes. We argue that in these settings, combining predictions from humans and models can outperform predictions made by groups of people, or computers. In laboratory experiments, we combined human and machine predictions, and found the combined predictions more accurate and more robust than predictions made by groups of only people or only machines.
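The thesis’s experiments are not reproduced here, but the core idea can be sketched in a few lines. In this toy illustration (the forecasts below are made-up numbers, not data from the essay), averaging a human’s and a model’s probability forecasts scores better, by the Brier score, than either forecaster alone:

```python
# Toy illustration of human-machine prediction combination: average the
# two sets of probability forecasts and score them with the Brier score
# (mean squared error against 0/1 outcomes; lower is better).

def brier(predictions, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(outcomes)

outcomes = [1, 0, 1, 1]            # what actually happened
human    = [0.6, 0.4, 0.4, 0.7]    # a human forecaster's probabilities
model    = [0.4, 0.2, 0.8, 0.5]    # a statistical model's probabilities

# Simple unweighted average of the two forecasters
combined = [(h + m) / 2 for h, m in zip(human, model)]

print(round(brier(human, outcomes), 4))     # 0.1925
print(round(brier(model, outcomes), 4))     # 0.1725
print(round(brier(combined, outcomes), 4))  # 0.165 – better than either alone
```

Averaging works here because the two forecasters err in different directions on different events, so their errors partially cancel; the thesis studies when and why such combinations are more accurate and more robust.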

The third essay addresses a critical bottleneck in open-innovation systems: reviewing and selecting the best submissions, in settings where submissions are complex intellectual artifacts whose evaluation requires expertise. To aid expert reviewers, we offer a computational approach we developed and tested using data from the Climate CoLab – a large citizen science platform. Our models approximate expert decisions about the submissions with high accuracy, and their use can save review labor and accelerate the review process….(More)”

Could online democracy lead to governance by Trumps and trolls?


in The Guardian: “The first two user tutorials are pretty stock standard but, from there, things escalate dramatically. After mastering How to Sign Up and How to Recover Your Password, users are apparently ready to advance to lesson number three: How to Create a Democracy.

As it turns out, on DemocracyOS, this is a relatively straightforward matter – not overthrowing the previous regime nor exterminating the last traces of the royal lineage in order to pave the way for a new world order. Instead Argentinian developers Democracia en Red have made it a simple matter of clicking a button to form a group and thrash out the policies voters wish to see enacted.

It is one of a range of digital platforms for direct democracy created by developers and activists to redefine the relationship between citizens and their governments, with the powers that be in Latin American city councils through to European anti-austerity parties making the upgrade to democracy 2.0.

Reshaping how government works is a difficult enough pitch by itself but, beyond that, there’s another challenge facing developers – the online trolls are ready and waiting.

Britain alone this year offered up two examples of what impact trolls could have on online direct democracy – there was the case of “Boaty McBoatface” famously winning a Natural Environment Research Council poll to determine the name of a multimillion-pound arctic research vessel, and then there was the more serious case of trolls adding the signatures of thousands of residents of countries such as the Cayman Islands and Vatican City to a formal petition calling for a second Brexit referendum, in order to have the entire document disregarded as an online prank.

In the US presidential election even the politicians are getting in on it, with a pro-Hillary Clinton super PAC (political action committee) hiring an army of online commenters to defend the candidate in arguments on social media, while the Republican contender, Donald Trump, is himself engaging in textbook trolling behaviour – whether that’s urging the hacking of Clinton’s emails, revealing the phone number of a Republican rival during the primaries, or unleashing a constant stream of controversial statements as a means of derailing conversations, attracting attention and humiliating his targets.

So what does this mean for digital platforms for direct democracy? By merging the world of the internet with that of politics, will we all end up governed by some fusion of trolls and Trumps promising to build Wally McWallfaces on our borders? And will the technologies of the fourth industrial revolution also usher in a revolution in how democracy functions?…(More)”

The openness buzz in the knowledge economy: Towards a taxonomy


Paper by Anne Lundgren in “Environment and Planning C: Government and Policy”: “In the networked information and knowledge-based economy and society, the notions of ‘open’ and ‘openness’ are used in a variety of contexts; open source, open access, open economy, open government, open innovation – just to name a few. This paper aims at discussing openness and developing a taxonomy that may be used to analyse the concept of openness. Are there different qualities of openness? How are these qualities interrelated? What analytical tools may be used to understand openness? In this paper four qualities of openness recurrent in literature and debate are explored: accessibility, transparency, participation and sharing. To further analyse openness new institutional theory as interpreted by Williamson (2000) is used, encompassing four different institutional levels; cultural embeddedness, institutional environment, governance structure and resource allocations. At what institutional levels is openness supported and/or constrained? Accessibility as a quality of openness seems to have a particularly strong relation to the other qualities of openness, whereas the notions of sharing and collaborative economics seem to be the most complex and contested quality of openness in the knowledge-based economy. This research contributes to academia, policy and governance, as handling of challenges with regard to openness vs. closure in different contexts, territorial, institutional and/or organizational, demand not only a better understanding of the concept, but also tools for analysis….(More)”

A decentralized web would give power back to the people online


 at TechCrunch: “…The original purpose of the web and internet, if you recall, was to build a common neural network which everyone can participate in equally for the betterment of humanity.

Fortunately, there is an emerging movement to bring the web back to this vision and it even involves some of the key figures from the birth of the web. It’s called the Decentralised Web or Web 3.0, and it describes an emerging trend to build services on the internet which do not depend on any single “central” organisation to function.

So what happened to the initial dream of the web? Much of the altruism faded during the first dot-com bubble, as people realised that an easy way to create value on top of this neutral fabric was to build centralised services which gather, trap and monetise information.

Search Engines (e.g. Google), Social Networks (e.g. Facebook) and Chat Apps (e.g. WhatsApp) have grown huge by providing centralised services on the internet. For example, Facebook’s future vision of the internet is to provide access only to the subset of centralised services it endorses (Internet.org and Free Basics).

Meanwhile, it disables fundamental internet freedoms such as the ability to link to content via a URL (forcing you to share content only within Facebook) or the ability for search engines to index its contents (other than the Facebook search function).

The Decentralised Web envisions a future world where services such as communication, currency, publishing, social networking, search, archiving etc are provided not by centralised services owned by single organisations, but by technologies which are powered by the people: their own community. Their users.

The core idea of decentralisation is that the operation of a service is not blindly trusted to any single omnipotent company. Instead, responsibility for the service is shared: perhaps by running across multiple federated servers, or perhaps running across client side apps in an entirely “distributed” peer-to-peer model.

Even though the community may be “byzantine” and not have any reason to trust or depend on each other, the rules that describe the decentralised service’s behaviour are designed to force participants to act fairly in order to participate at all, relying heavily on cryptographic techniques such as Merkle trees and digital signatures to allow participants to hold each other accountable.
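As a minimal sketch of that machinery (illustrative only – not code from any particular decentralised-web project), a Merkle tree lets peers who do not trust each other verify that a piece of data belongs to a shared dataset, using nothing but a short chain of sibling hashes and the tree’s 32-byte root:

```python
# Illustrative Merkle tree: a peer holding only the root hash can check a
# logarithmic-size proof that a given data block belongs to the dataset.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash the leaves, then repeatedly hash adjacent pairs up to one root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last hash if odd count
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hash at each level for the leaf at `index`."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1                # the pair partner at this level
        proof.append((level[sibling], index % 2 == 0))  # (hash, our node was left?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from a leaf and its proof; True if they match."""
    node = h(leaf)
    for sibling, node_was_left in proof:
        node = h(node + sibling) if node_was_left else h(sibling + node)
    return node == root
```

A participant who stores only the root can verify any block another peer hands over, and a tampered block fails verification, which is the sense in which cryptography, rather than a trusted central server, holds participants accountable.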

There are three fundamental areas that the Decentralised Web necessarily champions: privacy, data portability and security.

  • Privacy: Decentralisation forces an increased focus on data privacy. Data is distributed across the network and end-to-end encryption technologies are critical for ensuring that only authorized users can read and write. Access to the data itself is entirely controlled algorithmically by the network as opposed to more centralized networks where typically the owner of that network has full access to data, facilitating customer profiling and ad targeting.
  • Data Portability: In a decentralized environment, users own their data and choose with whom they share this data. Moreover they retain control of it when they leave a given service provider (assuming the service even has the concept of service providers). This is important. If I want to move from General Motors to BMW today, why should I not be able to take my driving records with me? The same applies to chat platform history or health records.
  • Security: Finally, we live in a world of increased security threats. In a centralized environment, the bigger the silo, the bigger the honeypot is to attract bad actors. Decentralized environments are safer by their general nature against being hacked, infiltrated, acquired, bankrupted or otherwise compromised as they have been built to exist under public scrutiny from the outset….(More)”