Open Data’s Effect on Food Security


Jeremy de Beer, Jeremiah Baarbé, and Sarah Thuswaldner at Open AIR: “Agricultural data is a vital resource in the effort to address food insecurity. This data is used across the food-production chain. For example, farmers rely on agricultural data to decide when to plant crops, scientists use data to conduct research on pests and design disease-resistant plants, and governments make policy based on land use data. As the value of agricultural data is understood, there is a growing call for governments and firms to open their agricultural data.

Open data is data that anyone can access, use, or share. Open agricultural data has the potential to address food insecurity by making it easier for farmers and other stakeholders to access and use the data they need. Open data also builds trust and fosters collaboration among stakeholders that can lead to new discoveries to address the problems of feeding a growing population.

 

A network of partnerships is growing around agricultural data research. The Open African Innovation Research (Open AIR) network is researching open agricultural data in partnership with the Plant Phenotyping and Imaging Research Centre (P2IRC) and the Global Institute for Food Security (GIFS). This research builds on a partnership with Global Open Data for Agriculture and Nutrition (GODAN), and the network is exploring partnerships with Open Data for Development (OD4D) and other open data organizations.

…published two works on open agricultural data. Published in partnership with GODAN, “Ownership of Open Data” describes how intellectual property law defines ownership rights in data. Firms that collect data own the rights to that data, which is a major factor in the power dynamics of open data. In July, Jeremiah Baarbé and Jeremy de Beer will present “A Data Commons for Food Security” …The paper proposes a licensing model that allows farmers to benefit from the datasets to which they contribute. The license supports SME data collectors, who need sophisticated legal tools; contributors, who need engagement, privacy, control, and benefit sharing; and consumers, who need open access….(More)”.

Teaching machines to understand – and summarize – text


In The Conversation: “We humans are swamped with text. It’s not just news and other timely information: Regular people are drowning in legal documents. The problem is so bad we mostly ignore it. Every time a person uses a store’s loyalty rewards card or connects to an online service, his or her activities are governed by the equivalent of hundreds of pages of legalese. Most people pay no attention to these massive documents, often labeled “terms of service,” “user agreement” or “privacy policy.”

These are just part of a much wider societal problem of information overload. There is so much data stored – exabytes of it, as much as all the words ever spoken in human history – that it’s humanly impossible to read and interpret everything. Often, we narrow down our pool of information by choosing particular topics or issues to pay attention to. But it’s important to actually know the meaning and contents of the legal documents that govern how our data is stored and who can see it.

As computer science researchers, we are working on ways artificial intelligence algorithms could digest these massive texts and extract their meaning, presenting it in terms regular people can understand….

Examining privacy policies

A modern internet-enabled life more or less requires trusting for-profit companies with private information (like physical and email addresses, credit card numbers and bank account details) and personal data (photos and videos, email messages and location information).

These companies’ cloud-based systems typically keep multiple copies of users’ data as part of backup plans to prevent service outages. That means there are more potential targets – each data center must be securely protected both physically and electronically. Of course, internet companies recognize customers’ concerns and employ security teams to protect users’ data. But the specific and detailed legal obligations they undertake to do that are found in their impenetrable privacy policies. No regular human – and perhaps even no single attorney – can truly understand them.

In our study, we ask computers to summarize the terms and conditions regular users say they agree to when they click “Accept” or “Agree” buttons for online services. We downloaded the publicly available privacy policies of various internet companies, including Amazon AWS, Facebook, Google, HP, Oracle, PayPal, Salesforce, Snapchat, Twitter and WhatsApp….

Our software examines the text and uses information extraction techniques to identify key information specifying the legal rights, obligations and prohibitions identified in the document. It also uses linguistic analysis to identify whether each rule applies to the service provider, the user or a third-party entity, such as advertisers and marketing companies. Then it presents that information in clear, direct, human-readable statements….(More)”
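
The excerpt doesn’t include the researchers’ code, but the extraction step it describes can be illustrated with a deliberately minimal sketch: classify each sentence as a right, obligation, or prohibition from modal-verb cues, and guess the party it applies to from the grammatical subject. Everything below is a hypothetical toy, far simpler than the information extraction and linguistic analysis the article describes:

```python
import re

# Minimal, hypothetical sketch of modal-verb rule extraction from a
# privacy policy. The actual system described uses far richer
# information extraction and linguistic analysis than this.

RULE_PATTERNS = {  # checked in order, most specific first
    "prohibition": re.compile(r"\b(may not|must not|shall not|will not)\b", re.I),
    "obligation":  re.compile(r"\b(must|shall|is required to|will)\b", re.I),
    "right":       re.compile(r"\b(may|can|is permitted to)\b", re.I),
}

def rule_type(sentence: str):
    """Label a sentence as a right, obligation, or prohibition, if any."""
    for label, pattern in RULE_PATTERNS.items():
        if pattern.search(sentence):
            return label
    return None

def rule_party(sentence: str) -> str:
    """Crude subject detection: whom does the rule apply to?"""
    words = sentence.strip().lower().split()
    if not words:
        return "unspecified"
    if words[0] in ("you", "user", "users"):
        return "user"
    if words[0] in ("we", "company"):
        return "service provider"
    return "third party / unspecified"

policy = (
    "We may share your information with advertisers. "
    "You must not reverse engineer the service. "
    "We will notify you of changes to this policy."
)
for sentence in policy.rstrip(".").split(". "):
    label = rule_type(sentence)
    if label:
        print(f"{label:<11} | {rule_party(sentence):<16} | {sentence}")
```

A real system would use a parser to find the actual subject and scope of each clause; the point here is only the shape of the pipeline.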

Artificial intelligence can predict which congressional bills will pass


Other algorithms have predicted whether a bill will survive a congressional committee, or whether the Senate or House of Representatives will vote to approve it—all with varying degrees of success. But John Nay, a computer scientist and co-founder of Skopos Labs, a Nashville-based AI company focused on studying policymaking, wanted to take things one step further. He wanted to predict whether an introduced bill would make it all the way through both chambers—and precisely what its chances were.

Nay started with data on the 103rd Congress (1993–1995) through the 113th Congress (2013–2015), downloaded from a legislation-tracking website called GovTrack. This included the full text of the bills, plus a set of variables, including the number of co-sponsors, the month the bill was introduced, and whether the sponsor was in the majority party of their chamber. Using data on Congresses 103 through 106, he trained machine-learning algorithms—programs that find patterns on their own—to associate bills’ text and contextual variables with their outcomes. He then predicted how each bill would do in the 107th Congress. Then, he trained his algorithms on Congresses 103 through 107 to predict the 108th Congress, and so on.
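
This train-on-the-past, predict-the-next-Congress scheme is a standard expanding-window evaluation that keeps future information out of the training data. A hedged sketch of that loop, assuming a `bills` mapping from Congress number to features and outcomes (the classifier is a stand-in, not Nay’s actual model):

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical sketch of the expanding-window evaluation described above:
# train on Congresses 103..k, predict Congress k+1, then grow the window.
# `bills` is an assumed dict: congress number -> (feature rows X, outcomes y).

def walk_forward(bills, first_train_end=106, last_congress=113):
    scores = {}
    for k in range(first_train_end, last_congress):
        train_congresses = range(103, k + 1)
        X_train = [row for c in train_congresses for row in bills[c][0]]
        y_train = [lbl for c in train_congresses for lbl in bills[c][1]]
        X_test, y_test = bills[k + 1]

        model = GradientBoostingClassifier()  # stand-in for Nay's ensemble
        model.fit(X_train, y_train)
        # predict_proba gives each bill's estimated chance of enactment
        probs = model.predict_proba(X_test)[:, 1]
        scores[k + 1] = roc_auc_score(y_test, probs)
    return scores
```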

Nay’s most complex machine-learning algorithm combined several parts. The first part analyzed the language in the bill. It interpreted the meaning of words by how they were embedded in surrounding words. For example, it might see the phrase “obtain a loan for education” and assume “loan” has something to do with “obtain” and “education.” A word’s meaning was then represented as a string of numbers describing its relation to other words. The algorithm combined these numbers to assign each sentence a meaning. Then, it found links between the meanings of sentences and the success of bills that contained them. Three other algorithms found connections between contextual data and bill success. Finally, an umbrella algorithm used the results from those four algorithms to predict what would happen…. His program scored about 65% better than simply guessing that a bill wouldn’t pass, Nay reported last month in PLOS ONE…(More).
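
That description maps onto two familiar techniques: word embeddings averaged into per-sentence vectors, and a stacked ensemble whose “umbrella” meta-model combines the base models’ predictions. A rough sketch under those assumptions, not Nay’s actual code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of the two techniques the description maps onto; this is not
# Nay's implementation, and all names here are illustrative.

def sentence_vector(sentence, word_vectors, dim=100):
    """Average word embeddings (e.g., word2vec vectors) into one sentence vector."""
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def train_umbrella(base_probs, y):
    """Stacking: fit an umbrella model on the base models' predicted probabilities.

    base_probs is a list of 1-D arrays, one per base model (the text model
    plus the three contextual models the article mentions).
    """
    X_meta = np.column_stack(base_probs)  # shape: (n_bills, n_base_models)
    return LogisticRegression().fit(X_meta, y)
```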

Why blockchain could be your next form of ID as a world citizen


TechRepublic: “Blockchain is moving from banking to the refugee crisis, as Microsoft and Accenture on Monday announced a partnership to use the technology to provide a legal form of identification for 1.1 billion people worldwide as part of the global public-private partnership ID2020.

The two tech giants developed a prototype that taps Accenture’s blockchain capabilities and runs on Microsoft Azure. The tech tool uses a person’s biometric data, such as a fingerprint or iris scan, to unlock the record-keeping blockchain technology and create a legal ID. This will allow refugees to have a personal identity record they can access from an app on a smartphone to receive assistance at border crossings, or to access basic services such as healthcare, according to a press release.

The prototype is designed so that personally identifiable information (PII) always exists “off chain,” and is not stored in a centralized system. Citizens use their biometric data to access their information, and choose when to share it—preventing the system from being accessed by tyrannical governments that refugees are fleeing from, as ZDNet noted.
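
The prototype’s internals are not public, but the “off chain” pattern described here is well established: the chain holds only a salted commitment that a biometric-derived key can later prove knowledge of, while the PII itself stays in storage the individual controls. A minimal illustration of that pattern, with every name and detail hypothetical:

```python
import hashlib
import json
import os

# Illustrative sketch of the "PII off chain" pattern described above; the
# Accenture/Microsoft prototype's actual design is not public.

def make_attestation(pii: dict, biometric_key: bytes):
    """Keep PII off chain; put only a salted commitment on chain."""
    salt = os.urandom(16)
    record = json.dumps(pii, sort_keys=True).encode()
    commitment = hashlib.sha256(salt + biometric_key + record).hexdigest()
    off_chain = {"salt": salt.hex(), "record": record.decode()}  # user-held
    on_chain = {"commitment": commitment}  # reveals nothing about the PII
    return off_chain, on_chain

def verify(off_chain, on_chain, biometric_key: bytes) -> bool:
    """Holder re-derives the commitment to prove the record is theirs."""
    digest = hashlib.sha256(
        bytes.fromhex(off_chain["salt"]) + biometric_key
        + off_chain["record"].encode()
    ).hexdigest()
    return digest == on_chain["commitment"]

off_chain, on_chain = make_attestation(
    {"name": "A. Refugee", "dob": "1990-01-01"},
    biometric_key=b"iris-scan-derived-key",
)
assert verify(off_chain, on_chain, b"iris-scan-derived-key")
```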

Accenture’s platform is currently used in the Biometric Identity Management System operated by the United Nations High Commissioner for Refugees, which has enrolled more than 1.3 million refugees in 29 nations across Asia, Africa, and the Caribbean. The system is predicted to support more than 7 million refugees from 75 countries by 2020, the press release noted.

“People without a documented identity suffer by being excluded from modern society,” said David Treat, a managing director in Accenture’s global blockchain business, in the press release. “Our prototype is personal, private and portable, empowering individuals to access and share appropriate information when convenient and without the worry of using or losing paper documentation.”

ID is key for accessing education, healthcare, voting, banking, housing, and other family benefits, the press release noted. ID2020’s goal is to create a secure, established digital ID system for all citizens worldwide….

Blockchain will likely play an increasing role in both identification and security moving forward, especially as it relates to the Internet of Things (IoT). For example, Telstra, an Australian telecommunications company, is currently experimenting with a combination of blockchain and biometric security for its smart home products, ZDNet reported….(More)”.

AI software created for drones monitors wild animals and poachers


Springwise: “Artificial intelligence software installed into drones is to be used by US tech company Neurala to help protect endangered species from poachers. Working with the Lindbergh Foundation, Neurala is currently helping operations in South Africa, Malawi and Zimbabwe and has had requests from Botswana, Mozambique and Zambia for assistance with combatting poaching.

The software is designed to monitor video as it is streamed back to researchers from unmanned drones that can fly for up to five hours, identifying animals, vehicles and poachers in real time without any human input. It can then alert rangers via the mobile command center if anything out of the ordinary is detected. The software can analyze regular or infrared footage, and therefore works with video taken day or night.
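
Neurala’s software is proprietary, but the behaviour described, classifying each streamed frame and alerting only when something out of the ordinary appears, reduces to a simple loop around an object-detection model. A schematic sketch in which every name is hypothetical:

```python
# Schematic sketch of the monitor-and-alert loop described above; Neurala's
# actual software is proprietary and all names here are hypothetical.

ORDINARY = {"elephant", "rhino", "zebra"}      # expected wildlife detections
ALERT_ON = {"person", "vehicle", "campfire"}   # possible signs of poaching

def monitor(frames, detect, alert, log):
    """detect(frame) -> set of labels; alert/log notify rangers and researchers."""
    for frame in frames:
        labels = detect(frame)            # works on visible or infrared footage
        if labels & ORDINARY:
            log(labels & ORDINARY)        # wildlife counts for the researchers
        suspicious = labels & ALERT_ON
        if suspicious:
            alert(suspicious)             # e.g., notify the mobile command center

# Stubbed usage with two fake frames:
monitor(
    frames=[{"id": 1}, {"id": 2}],
    detect=lambda f: {"elephant"} if f["id"] == 1 else {"person", "vehicle"},
    alert=lambda labels: print("ALERT:", ", ".join(sorted(labels))),
    log=lambda labels: print("sighted:", ", ".join(sorted(labels))),
)
```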

The Lindbergh Foundation will be deploying the technology as part of operation Air Shepherd, which is aimed at protecting elephants and rhinos in Southern Africa from poachers and has logged 5,000 hours of drone flight time over the course of 4,000 missions to date. According to the Foundation, elephants and rhinos could be extinct within just 10 years if current poaching rates continue.

The use of drones within business models is proving popular, with recent innovations including a drone painting system that created crowdfunded murals and two Swiss hospitals that used a drone to deliver lab samples between them….(More)”.

LSE launches crowdsourcing project inspiring millennials to shape Brexit


LSE Press Release: “A crowdsourcing project inspiring millennials in Britain and the EU to help shape the upcoming Brexit negotiations is being launched by the London School of Economics and Political Science (LSE) this week.

The social media-based project, which hopes to engage 3000 millennials aged 35 and under, kicks off on 23 June, the first anniversary of the life-changing vote to take Britain out of the EU.

One of the Generation Brexit project leaders, Dr Jennifer Jackson-Preece from LSE’s European Institute, said the online platform would give a voice to British and European millennials on the future of Europe in the Brexit negotiations and beyond.

She said: “We’re going to invite millennials from across the UK and Europe to debate, decide and draft policy proposals that will be sent to parliaments in Westminster and Strasbourg during the negotiations.”

Another project leader, Dr Roch Dunin-Wąsowicz, said the pan-European project would seek views from a whole cross section of millennials, including Leavers, Remainers, left and right-wingers, European federalists and nationalists.

“We want to come up with millennial proposals for a mutually beneficial relationship, reflecting the diverse political, cultural, religious and economic backgrounds in the UK and EU.

“We are especially keen to engage the forgotten, the apolitical and the apathetic – for whom Brexit has become a moment of political awakening,” he said.

Generation Brexit follows on the heels of LSE’s Constitution UK crowdsourcing project in 2015, which broke new ground in galvanising people around the country to help shape Britain’s first constitution. The 10-week internet project signed up 1500 people from all corners of the UK to debate how the country should be governed.

Dr Manmit Bhambra, also working on the project, said the success of the Constitution UK platform had laid the foundation for Generation Brexit, with LSE hoping to double the numbers and sign up 3000 participants, split equally between Britain and Europe.

The project can be accessed at www.generationbrexit.org and all updates will be available on Twitter @genbrexit & @lsebrexitvote with the hashtag #GenBrexit, and on facebook.com/GenBrexit… (More)”.

Fly on the Facebook Wall: How UNHCR Listened to Refugees on Social Media


Social Media for Good: “In “From a Refugee Perspective” UNHCR shows how to conduct meaningful, qualitative social media monitoring in a humanitarian crisis.

Between March and December 2016, the project team (one project manager, one Pashto and Dari speaker, two native Arabic speakers and an English copy editor) monitored Facebook conversations related to flight and migration in the Afghan and Arabic-speaking communities.

To do this, the team created Facebook accounts, joined relevant Facebook groups and summarised their findings in weekly monitoring reports to UNHCR staff and other interested people. I received these reports every week while working as the UNHCR team leader for the Communicating with Communities team in Greece and found them very useful, since they gave me insight into each week’s burning issues.

The project did not monitor Twitter because Twitter was not widely used by the communities.

In “From a Refugee Perspective” UNHCR has now summarised their findings from the ten-month project. The main thing I really liked about this project is that UNHCR invested the resources for proper qualitative social media monitoring, as opposed to the purely quantitative analyses that we see so often and which rarely go beyond keyword counting. To complement the social media information, the team held focus group and other discussions with refugees who had arrived in Europe. Among other things, these discussions provided information on how the refugees and migrants are consuming and exchanging information (related: see this BBC Media Action report).

Of course, this type of research is much more resource intensive than what most organisations have in mind when they want to do social media monitoring, but this report shows that additional resources can also result in more meaningful information.

Image: Smuggling prices according to a monitored Facebook page. Source: “From A Refugee Perspective”

Monitoring the conversations on Facebook enabled the team to track trends, such as the rise and fall of prices that smugglers asked for different routes (see image). In addition, it provided fascinating insights into how smugglers are selling their services online….(More)”

Big Data, Data Science, and Civil Rights


Paper by Solon Barocas, Elizabeth Bradley, Vasant Honavar, and Foster Provost:  “Advances in data analytics bring with them civil rights implications. Data-driven and algorithmic decision making increasingly determine how businesses target advertisements to consumers, how police departments monitor individuals or groups, how banks decide who gets a loan and who does not, how employers hire, how colleges and universities make admissions and financial aid decisions, and much more. As data-driven decisions increasingly affect every corner of our lives, there is an urgent need to ensure they do not become instruments of discrimination, barriers to equality, threats to social justice, and sources of unfairness. In this paper, we argue for a concrete research agenda aimed at addressing these concerns, comprising five areas of emphasis: (i) Determining if models and modeling procedures exhibit objectionable bias; (ii) Building awareness of fairness into machine learning methods; (iii) Improving the transparency and control of data- and model-driven decision making; (iv) Looking beyond the algorithm(s) for sources of bias and unfairness—in the myriad human decisions made during the problem formulation and modeling process; and (v) Supporting the cross-disciplinary scholarship necessary to do all of that well…(More)”.
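
As a concrete instance of emphasis (i), one common first test is the disparate-impact ratio (the “four-fifths rule” from US employment-discrimination practice): compare favourable-outcome rates across groups. A minimal sketch with toy data:

```python
import numpy as np

# Minimal sketch of one bias check from area (i): the disparate-impact
# ratio (the "four-fifths rule"). Group labels and data are toy examples.

def disparate_impact(decisions, groups, protected, reference):
    """Ratio of favourable-decision rates: protected group vs reference group."""
    decisions, groups = np.asarray(decisions), np.asarray(groups)
    rate = lambda g: decisions[groups == g].mean()
    return rate(protected) / rate(reference)

# Toy loan decisions (1 = approved) for two groups:
decisions = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
ratio = disparate_impact(decisions, groups, protected="b", reference="a")
print(f"disparate impact ratio: {ratio:.2f}")  # below 0.8 is a common red flag
```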

Big Mind: How Collective Intelligence Can Change Our World


Book by Geoff Mulgan: “A new field of collective intelligence has emerged in the last few years, prompted by a wave of digital technologies that make it possible for organizations and societies to think at large scale. This “bigger mind”—human and machine capabilities working together—has the potential to solve the great challenges of our time. So why do smart technologies not automatically lead to smart results? Gathering insights from diverse fields, including philosophy, computer science, and biology, Big Mind reveals how collective intelligence can guide corporations, governments, universities, and societies to make the most of human brains and digital technologies.

Geoff Mulgan explores how collective intelligence has to be consciously organized and orchestrated in order to harness its powers. He looks at recent experiments mobilizing millions of people to solve problems, and at groundbreaking technology like Google Maps and Dove satellites. He also considers why organizations full of smart people and machines can make foolish mistakes—from investment banks losing billions to intelligence agencies misjudging geopolitical events—and shows how to avoid them.

Highlighting differences between environments that stimulate intelligence and those that blunt it, Mulgan shows how human and machine intelligence could solve challenges in business, climate change, democracy, and public health. But for that to happen we’ll need radically new professions, institutions, and ways of thinking.

Informed by the latest work on data, web platforms, and artificial intelligence, Big Mind shows how collective intelligence could help us survive and thrive….(More)”

Nobody Is Smarter or Faster Than Everybody


Rod Collins at Huffington Post: “One of the deepest beliefs of command-and-control management is the assumption that the smartest organization is the one with the smartest individuals. This belief is as old as scientific management itself. According to this way of thinking, just as there is a right way to perform every activity, there are right individuals who are essential for defining what are the right things and for making sure that things are done right. Thus, traditional organizations have long held that the key to the successful achievement of the corporation’s two basic accountabilities of strategy and execution is to hire the smartest individual managers and the brightest functional experts.

Command-and-control management assumes that intelligence fundamentally resides in a select number of star performers who are able to leverage their expertise across large groups of people through proper direction and effective control. Thus, the recruiting efforts and the promotional practices of most companies are focused on competing for and retaining the most talented people. While established management thinking holds that most individual workers are replaceable, this is not so for those star performers whose decision-making and problem-solving prowess are heroically revered. Traditional hierarchical organizations firmly believe in the myth of the individual hero. They are convinced that a single highly intelligent individual can make the difference between success and failure, whether that person is a key senior executive, a functional expert, or even a highly paid consultant.

However, in a rapidly changing world, it is becoming painfully obvious to harried executives that no single individual or even an elite cadre of star performers can adequately process the ever-evolving knowledge of fast-changing markets into operational excellence in real-time. Eric Teller, the CEO of Google X, has astutely recognized that we now live in a world where the pace of technological change exceeds the capacity for most individuals to absorb these changes in real time. If we can’t depend upon smart individuals to process change in time to respond to market developments, what options do business leaders have?

Nobody Is Smarter Than Everybody

If business executives want to build smart companies in a rapidly changing world, they will need to think differently and discover the most untapped resource in their organizations: the collective intelligence of their own people. Innovative organizations, such as Wikipedia and Google, have made this discovery and have leveraged the power of collective intelligence into powerful business models that have radically transformed their industries. The struggling online encyclopedia Nupedia rescued itself from oblivion when it serendipitously discovered an obscure application known as a wiki and transformed itself into Wikipedia by using the wiki platform to leverage the power of collective intelligence. In less than a decade, Wikipedia became the world’s most popular general reference resource. Google, which was a late entry into a crowded field of search engine upstarts, quickly garnered two-thirds of the search market by becoming the first engine to use the wisdom of crowds to rank web pages. These successful enterprises have uncovered the essential management wisdom for our times: Nobody is smarter or faster than everybody….

While smart individuals are important in any organization, it isn’t their unique intelligence that is paramount but rather their unique contributions to the overall intelligence of teams. That’s because the blending of the diverse perspectives of different types of intelligences is often the fastest path to the solution of complex problems, as we learned in the summer of 2011 when a diverse group of over 250,000 experts, non-experts, and unusual suspects in a scientific gaming community called Foldit solved in ten days a biomolecular problem that had eluded the world’s best scientists for over ten years. This means a self-organized group that required no particular credentials for membership was 365 times more effective and efficient than the world’s most credentialed individual experts. Similarly, the non-credentialed contributors of Wikipedia were able to produce approximately 18,000 articles in its first year of operation compared to only 25 articles produced by academic experts in Nupedia’s first year. This means the wisdom of the crowd was 720 times more effective and efficient than the individual experts. These results are completely counterintuitive to everything that most of us have been taught about how intelligence works. However, as counterintuitive as this may seem, the preeminence of collective intelligence has suddenly become a practical reality thanks to the proliferation of digital technology over the last two decades.

As we move from the first wave of the digital revolution, which was sparked by connecting people via the Internet, to the second wave where everyone and everything will be hyper-connected in the emerging Internet of Things, our capacity to aggregate and leverage collective intelligence is likely to accelerate as practical applications of artificial intelligence become everyday realities….(More)”.