Online gamers control trash-collecting water robot


Springwise: “Urban Rivers is a Chicago-based charity focused on cleaning up the city’s rivers and re-wilding bankside habitats. One of their most visible pieces of work is a floating habitat installed in the middle of the river that runs through the city. An immediate problem that arose after installation was the accumulation of trash. At first, the company sent someone out on a kayak every other day to clean the habitat. Yet in less than a day, huge amounts of garbage would again be choking the space. The company’s solution was to create a Trash Task Force. The outcome of the Task Force’s work is the TrashBot, a remote-controlled garbage-collecting robot. The TrashBot allows gamers all over the world to do their bit in cleaning up Chicago’s river.

Anyone interested in playing the cleaning game can sign up via the Urban Rivers website. Future development of the bot will likely focus on wildlife monitoring. Ultimately, the end goal of the game will be that no one wants to play because there is no more garbage left to collect.

From crowdsourced ocean data gathered by the fins of surfers’ boards to a solar-powered autonomous drone that gathers waste from harbor waters, the health of the world’s waterways is being improved in a number of ways. The surfboard fins use sensors to monitor sea salinity, acidity levels and wave motion. Those are all important coastal ecosystem factors that could be affected by climate change. The water drones are intelligent and use on-board cameras and sensors to learn about their environment and avoid other craft as they collect garbage from rivers, canals and harbors….(More)”.

What if a nuke goes off in Washington, D.C.? Simulations of artificial societies help planners cope with the unthinkable


Mitchell Waldrop at Science: “…The point of such models is to avoid describing human affairs from the top down with fixed equations, as is traditionally done in such fields as economics and epidemiology. Instead, outcomes such as a financial crash or the spread of a disease emerge from the bottom up, through the interactions of many individuals, leading to a real-world richness and spontaneity that is otherwise hard to simulate.
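
As a concrete illustration of this bottom-up logic (and emphatically not the NPS1 model discussed below), here is a minimal agent-based sketch in Python of the disease-spread example: each agent knows only its own infection status and the handful of people it bumps into, yet an epidemic curve emerges from those local interactions. All parameter values are arbitrary placeholders.

```python
import random

class Agent:
    """One individual; its state is Susceptible, Infected, or Recovered."""
    def __init__(self):
        self.state = "S"

def step(agents, contacts_per_agent=5, p_transmit=0.05, p_recover=0.10):
    """Advance the simulation one day via purely local interactions."""
    infected_now = [a for a in agents if a.state == "I"]
    for agent in infected_now:
        # Each infected agent meets a few randomly chosen others today.
        for other in random.sample(agents, contacts_per_agent):
            if other.state == "S" and random.random() < p_transmit:
                other.state = "I"
        if random.random() < p_recover:
            agent.state = "R"

agents = [Agent() for _ in range(10_000)]
for seed in random.sample(agents, 10):
    seed.state = "I"  # seed a small initial outbreak

for day in range(120):
    step(agents)
    # The epidemic curve printed below is not written into any equation;
    # it emerges from the individual contacts simulated above.
    print(day, sum(a.state == "I" for a in agents))
```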

That kind of detail is exactly what emergency managers need, says Christopher Barrett, a computer scientist who directs the Biocomplexity Institute at Virginia Polytechnic Institute and State University (Virginia Tech) in Blacksburg, which developed the NPS1 model for the government. The model can warn managers, for example, that a power failure at point X might well lead to a surprise traffic jam at point Y. If they decide to deploy mobile cell towers in the early hours of the crisis to restore communications, NPS1 can tell them whether more civilians will take to the roads, or fewer. “Agent-based models are how you get all these pieces sorted out and look at the interactions,” Barrett says.

The downside is that models like NPS1 tend to be big—each of the model’s initial runs kept a 500-microprocessor computing cluster busy for a day and a half—forcing the agents to be relatively simple-minded. “There’s a fundamental trade-off between the complexity of individual agents and the size of the simulation,” says Jonathan Pfautz, who funds agent-based modeling of social behavior as a program manager at the Defense Advanced Research Projects Agency in Arlington, Virginia.

But computers keep getting bigger and more powerful, as do the data sets used to populate and calibrate the models. In fields as diverse as economics, transportation, public health, and urban planning, more and more decision-makers are taking agent-based models seriously. “They’re the most flexible and detailed models out there,” says Ira Longini, who models epidemics at the University of Florida in Gainesville, “which makes them by far the most effective in understanding and directing policy.”

The roots of agent-based modeling go back at least to the 1940s, when computer pioneers such as Alan Turing experimented with locally interacting bits of software to model complex behavior in physics and biology. But the current wave of development didn’t get underway until the mid-1990s….(More)”.

UK can lead the way on ethical AI, says Lords Committee


Lords Select Committee: “The UK is in a strong position to be a world leader in the development of artificial intelligence (AI). This position, coupled with the wider adoption of AI, could deliver a major boost to the economy for years to come. The best way to do this is to put ethics at the centre of AI’s development and use, concludes a report by the House of Lords Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able?, published today….

One of the recommendations of the report is for a cross-sector AI Code to be established, which can be adopted nationally, and internationally. The Committee’s suggested five principles for such a code are:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

Other conclusions from the report include:

  • Many jobs will be enhanced by AI, many will disappear, and many new, as-yet-unknown jobs will be created. Significant Government investment in skills and training will be necessary to mitigate the negative effects of AI. Retraining will become a lifelong necessity.
  • Individuals need to be able to have greater personal control over their data, and the way in which it is used. The ways in which data is gathered and accessed needs to change, so that everyone can have fair and reasonable access to data, while citizens and consumers can protect their privacy and personal agency. This means using established concepts, such as open data, ethics advisory boards and data protection legislation, and developing new frameworks and mechanisms, such as data portability and data trusts.
  • The monopolisation of data by big technology companies must be avoided, and greater competition is required. The Government, with the Competition and Markets Authority, must review the use of data by large technology companies operating in the UK.
  • The prejudices of the past must not be unwittingly built into automated systems. The Government should incentivise the development of new approaches to the auditing of datasets used in AI, and encourage greater diversity in the training and recruitment of AI specialists.
  • Transparency in AI is needed. The industry, through the AI Council, should establish a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions.
  • At earlier stages of education, children need to be adequately prepared for working with, and using, AI. The ethical design and use of AI should become an integral part of the curriculum.
  • The Government should be bold and use targeted procurement to provide a boost to AI development and deployment. It could encourage the development of solutions to public policy challenges through speculative investment. There have been impressive advances in AI for healthcare, which the NHS should capitalise on.
  • It is not currently clear whether existing liability law will be sufficient when AI systems malfunction or cause harm to users, and clarity in this area is needed. The Committee recommend that the Law Commission investigate this issue.
  • The Government needs to draw up a national policy framework, in lockstep with the Industrial Strategy, to ensure the coordination and successful delivery of AI policy in the UK….(More)”.

From Texts to Tweets to Satellites: The Power of Big Data to Fill Gender Data Gaps


From the UN Foundation Blog: “Twitter posts, credit card purchases, phone calls, and satellites are all part of our day-to-day digital landscape.

Detailed data, known broadly as “big data” because of the massive amounts of passively collected and high-frequency information that such interactions generate, are produced every time we use one of these technologies. These digital traces have great potential and have already developed a track record for application in global development and humanitarian response.

Data2X has focused particularly on what big data can tell us about the lives of women and girls in resource-poor settings. Our research, released today in a new report, Big Data and the Well-Being of Women and Girls, demonstrates how four big data sources can be harnessed to fill gender data gaps and inform policy aimed at mitigating global gender inequality. Big data can complement traditional surveys and other data sources, offering a glimpse into dimensions of girls’ and women’s lives that have otherwise been overlooked and providing a level of precision and timeliness that policymakers need to make actionable decisions.

Here are three findings from our report that underscore the power and potential offered by big data to fill gender data gaps:

  1. Social media data can improve understanding of the mental health of girls and women.

Mental health conditions, from anxiety to depression, are thought to be significant contributors to the global burden of disease, particularly for young women, though precise data on mental health is sparse in most countries. However, research by Georgia Tech, commissioned by Data2X, finds that social media provides an accurate barometer of mental health status…

  2. Cell phone and credit card records can illustrate women’s economic and social patterns – and track impacts of shocks in the economy.

Our spending priorities and social habits often indicate economic status, and these activities can also expose economic disparities between women and men.

By compiling cell phone and credit card records, our research partners at MIT traced patterns of women’s expenditures, spending priorities, and physical mobility. The research found that women have less mobility diversity than men, live further away from city centers, and report less total expenditure per capita…
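
The report’s exact metric is not reproduced in this excerpt, but “mobility diversity” is commonly quantified as the Shannon entropy of the places a person visits in their call records; the sketch below, with hypothetical place names, illustrates that idea only.

```python
import math
from collections import Counter

def mobility_diversity(visited_locations):
    """Shannon entropy (in bits) of a person's visited locations,
    e.g. the cell towers appearing in their call detail records."""
    counts = Counter(visited_locations)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A routine confined to two places scores lower than trips spread
# across many distinct places.
print(mobility_diversity(["home", "market", "home", "market", "home"]))   # ~0.97
print(mobility_diversity(["home", "market", "clinic", "school", "work"]))  # ~2.32
```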

  3. Satellite imagery can map rivers and roads, but it can also measure gender inequality.

Satellite imagery has the power to capture high-resolution, real-time data on everything from natural landscape features, like vegetation and river flows, to human infrastructure, like roads and schools. Research by our partners at the Flowminder Foundation finds that it is also able to measure gender inequality….(More)”.

Behavior Change for Good Initiative


“At the Behavior Change for Good Initiative, we know that solving the mystery of enduring behavior change offers an enormous opportunity to improve lives. We unite an interdisciplinary team of scientists with leading practitioners in education, healthcare, and consumer financial services, all of whom seek to address the question: How can we make behavior change stick?…

We are developing an interactive digital platform to improve daily decisions about health, education, and savings. For the first time, a world-class team of scientific experts will be able to continually test and improve a behavior change program by seamlessly incorporating the latest insights from their research into massive random-assignment experiments. The platform seeks to improve the daily health, education, and savings decisions of millions…(More)”.

The citation graph is one of humankind’s most important intellectual achievements


Dario Taraborelli at BoingBoing: “When researchers write, we don’t just describe new findings — we place them in context by citing the work of others. Citations trace the lineage of ideas, connecting disparate lines of scholarship into a cohesive body of knowledge, and forming the basis of how we know what we know.

Today, citations are also a primary source of data. Funders and evaluation bodies use them to appraise scientific impact and decide which ideas are worth funding to support scientific progress. Because of this, data that forms the citation graph should belong to the public. The Initiative for Open Citations was created to achieve this goal.

Back in the 1950s, reference works like Shepard’s Citations provided lawyers with tools to reconstruct which relevant cases to cite in the context of a court trial. No such tool existed at the time for identifying citations in scientific publications. Eugene Garfield — the pioneer of modern citation analysis and citation indexing — described the idea of extending this approach to science and engineering as his Eureka moment. Garfield’s first experimental Genetics Citation Index, compiled by the newly-formed Institute for Scientific Information (ISI) in 1961, offered a glimpse into what a full citation index could mean for science at large. It was distributed, for free, to 1,000 libraries and scientists in the United States.

Fast forward to the end of the 20th century: the Web of Science citation index — maintained by Thomson Reuters, who acquired ISI in 1992 — has become the canonical source for scientists, librarians, and funders to search scholarly citations, and for the field of scientometrics to study the structure and evolution of scientific knowledge. ISI could have turned into a publicly funded initiative, but it started instead as a for-profit effort. In 2016, Thomson Reuters sold its Intellectual Property & Science business to a private-equity fund for $3.55 billion. Its citation index is now owned by Clarivate Analytics.

Since raw citation data is not copyrightable, it is ironic that the vision of building a comprehensive index of scientific literature has turned into a billion-dollar business, with academic institutions paying cripplingly expensive annual subscriptions for access and the public locked out.

Enter the Initiative for Open Citations.

In 2016, a small group founded the Initiative for Open Citations (I4OC) as a voluntary effort to work with scholarly publishers — who routinely publish this data — to persuade them to release it in the open and promote its unrestricted availability. Before the launch of the I4OC, only 1% of indexed scholarly publications with references were making citation data available in the public domain. When the I4OC was officially announced in 2017, we were able to report that this number had shifted from 1% to 40%. In the main, this was thanks to the swift action of a small number of large academic publishers.

In April 2018, we are celebrating the first anniversary of the initiative. Since the launch, the fraction of indexed scientific articles with open citation data (as measured by Crossref) has surpassed 50% and the number of participating publishers has risen to 490. Over half a billion references are now openly available to the public without any copyright restriction. Of the 20 biggest publishers with citation data, all but 5 — Elsevier, IEEE, Wolters Kluwer Health, IOP Publishing, ACS — now make this data open via Crossref and its APIs. Over 50 organisations — including science funders, platforms and technology organizations, libraries, research and advocacy institutions — have joined us in this journey to help advocate and promote the reuse of open citations….(More)”.
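
For readers who want to touch this data directly, here is a small sketch (assuming the public Crossref REST API and the `requests` library; the DOI shown is a placeholder) that pulls the open reference list — the outgoing edges of the citation graph — for a single article:

```python
import requests

DOI = "10.1000/example-doi"  # hypothetical; substitute a real DOI whose
                             # publisher deposits open references

resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
resp.raise_for_status()
work = resp.json()["message"]

# Each entry in "reference" is one outgoing citation edge; entries carry a
# resolved DOI when one is known, otherwise an unstructured text string.
for ref in work.get("reference", []):
    print(ref.get("DOI") or ref.get("unstructured", "unresolved reference"))
```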

Blockchain Slashes US Govt. Contract Award Time From 100 To 10 Days


Article by Cameron Bishop: “…The US General Services Administration built the first federal procurement blockchain proof of concept about six months ago. The procurement blockchain was built to demonstrate how distributed ledger technology can modernize federal procurement. The pilot project made them realize that blockchain, when combined with artificial intelligence and robotics, provides the foundational architecture for widespread automation.

The proof of concept, which was built in seven weeks, automated the procurement process. More importantly, it reduced the average contract award time from 100 days to less than 10 days. Complex tasks such as financial review were automated through the use of blockchain. It also eliminated human error, bias and subjectivity from the process. A smart contract deployed on the blockchain automatically calculated the financial health score from the offerors’ balance sheets and income statements. The entire process was standardized using commercial and government practices.
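
The article does not publish the scoring rule the GSA contract encoded, so the following is a purely hypothetical sketch — written in Python rather than on-chain contract code for readability — of the kind of deterministic financial-health calculation a smart contract could apply to an offeror’s balance sheet and income statement; all ratios, weights, and field names are invented for illustration.

```python
def financial_health_score(balance_sheet, income_statement):
    """Toy 0-100 score from three standard ratios; weights are arbitrary."""
    current_ratio = balance_sheet["current_assets"] / balance_sheet["current_liabilities"]
    debt_to_equity = balance_sheet["total_liabilities"] / balance_sheet["shareholder_equity"]
    net_margin = income_statement["net_income"] / income_statement["revenue"]

    liquidity = min(current_ratio / 2.0, 1.0)               # full marks at 2.0x
    leverage = max(1.0 - debt_to_equity / 3.0, 0.0)         # penalise high debt
    profitability = max(min(net_margin / 0.15, 1.0), 0.0)   # full marks at 15%

    return round(40 * liquidity + 30 * leverage + 30 * profitability, 1)

print(financial_health_score(
    {"current_assets": 5e6, "current_liabilities": 2e6,
     "total_liabilities": 4e6, "shareholder_equity": 6e6},
    {"net_income": 8e5, "revenue": 9e6},
))  # prints a single 0-100 score for this hypothetical offeror
```

The relevant design property is determinism: because every node evaluating such a rule computes the same result from the same inputs, the score can be verified by all parties without trusting a single human reviewer.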

Furthermore, the use of a blockchain ledger ensured that vendors were kept abreast of developments. Vendors received alerts in real time as offers progressed through the workflow. This made the process transparent, while preserving the privacy of each transaction. The success of this pilot project is expected to bring a drastic change in the federal procurement process.

While a blockchain can be public, permissioned, or private, federal agencies may opt for a private blockchain to facilitate procurement transactions among pre-screened vendors with digital identity certificates.

The Federal Acquisition Regulation (FAR) provides guidelines to ensure integrity, openness and fairness in federal procurement. The blockchain technology will enforce those policies through a system of procedural trust embedded into the platform.

By using blockchain technology, the federal procurement process can be more transparent, efficient, faster, and less vulnerable to fraud and abuse. More importantly, by design, a blockchain preserves the integrity of the assets and transactions between multiple parties within the value chain. Additionally, blockchain will avoid unnecessary litigation, while promoting competition in a healthy manner. It will also provide an organization with insights into the procurement value chain that were previously unavailable….(More)”.

To serve a free society, social media must evolve beyond data mining


Barbara Romzek and Aram Sinnreich at The Conversation: “…For years, watchdogs have been warning about sharing information with data-collecting companies, firms engaged in the relatively new line of business some academics have called “surveillance capitalism.” Most casual internet users are only now realizing how easy – and common – it is for unaccountable and unknown organizations to assemble detailed digital profiles of them. They do this by combining the discrete bits of information consumers have given up to e-tailers, health sites, quiz apps and countless other digital services.

As scholars of public accountability and digital media systems, we know that the business of social media is based on extracting user data and offering it for sale. There’s no simple way for these companies to protect data in the way many users might expect. Like the social pollution of fake news, bullying and spam that Facebook’s platform spreads, the company’s privacy crisis also stems from a power imbalance: Facebook knows nearly everything about its users, who know little to nothing about it.

It’s not enough for people to delete their Facebook accounts. Nor is it likely that anyone will successfully replace it with a nonprofit alternative centering on privacy, transparency and accountability. Furthermore, this problem is not specific just to Facebook. Other companies, including Google and Amazon, also gather and exploit extensive personal data, and are locked in a digital arms race that we believe threatens to destroy privacy altogether….

Governments need to be better guardians of public welfare – including privacy. Many companies using various aspects of technology in new ways have so far avoided regulation by stoking fears that rules might stifle innovation. Facebook and others have often claimed that they’re better at regulating themselves in an ever-changing environment than a slow-moving legislative process could be….

To encourage companies to serve democratic principles and focus on improving people’s lives, we believe the chief business model of the internet needs to shift to building trust and verifying information. While it won’t be an immediate change, social media companies pride themselves on their adaptability and should be able to take on this challenge.

The alternative, of course, could be far more severe. In the 1980s, when federal regulators decided that AT&T was using its power in the telephone market to hurt competition and consumers, they forced the massive conglomerate to break up. A similar but less dramatic change happened in the early 2000s when cellphone companies were forced to let people keep their phone numbers even if they switched carriers.

Data, and particularly individuals’ personal data, are the precious metals of the internet age. Protecting individual data while expanding access to the internet and its many social benefits is a fundamental challenge for free societies. Creating, using and protecting data properly will be crucial to preserving and improving human rights and civil liberties in this still young century. To meet this challenge will require both vigilance and vision, from businesses and their customers, as well as governments and their citizens….(More).

From Crowdsourcing to Extreme Citizen Science: Participatory Research for Environmental Health


P.B. English, M.J. Richardson, and C. Garzón-Galvis in the Annual Review of Public Health: “Environmental health issues are becoming more challenging, and addressing them requires new approaches to research design and decision-making processes. Participatory research approaches, in which researchers and communities are involved in all aspects of a research study, can improve study outcomes and foster greater data accessibility and utility as well as increase public transparency. Here we review varied concepts of participatory research, describe how it complements and overlaps with community engagement and environmental justice, examine its intersection with emerging environmental sensor technologies, and discuss the strengths and limitations of participatory research. Although participatory research includes methodological challenges, such as biases in data collection and data quality, it has been found to increase the relevance of research questions, result in better knowledge production, and impact health policies. Improved research partnerships among government agencies, academia, and communities can increase scientific rigor, build community capacity, and produce sustainable outcomes….(More)”

Who Maps the World?


Sarah Holder at CityLab: “For most of human history, maps have been very exclusive,” said Marie Price, the first woman president of the American Geographical Society, appointed 165 years into its 167-year history. “Only a few people got to make maps, and they were carefully guarded, and they were not participatory.” That’s slowly changing, she said, thanks to democratizing projects like OpenStreetMap (OSM)….

But despite OSM’s democratic aims, and despite the long (albeit mostly hidden) history of lady cartographers, the OSM volunteer community is still composed overwhelmingly of men. A comprehensive statistical breakdown of gender equity in the OSM space has not yet been conducted, but Rachel Levine, a GIS operations and training coordinator with the American Red Cross, said experts estimate that only 2 to 5 percent of OSMers are women. The professional field of cartography is also male-dominated, as is the smaller subset of GIS professionals. While it would follow that the numbers of mappers of color and LGBTQ and gender-nonconforming mappers are similarly small, those statistics have gone largely unexamined….

When it comes to increasing access to health services, safety, and education—things women in many developing countries disproportionately lack—equitable cartographic representation matters. It’s the people who make the map who shape what shows up. On OSM, buildings aren’t just identified as buildings; they’re “tagged” with specifics according to mappers’ and editors’ preferences. “If two to five percent of our mappers are women, that means only a subset of that get[s] to decide what tags are important, and what tags get our attention,” said Levine.

Sports arenas? Lots of those. Strip clubs? Cities contain multitudes. Bars? More than one could possibly comprehend.

Meanwhile, childcare centers, health clinics, abortion clinics, and specialty clinics that deal with women’s health are vastly underrepresented. In 2011, the OSM community rejected an appeal to add the “childcare” tag at all. It was finally approved in 2013, and in the time since, it’s been used more than 12,000 times.

Doctors have been tagged more than 80,000 times, while healthcare facilities that specialize in abortion have been tagged only 10 times; gynecology, nearly 1,500; midwife, 233; fertility clinics, none. Only one building has been tagged as a domestic violence facility, and 15 as a gender-based violence facility. That’s not because these facilities don’t exist—it’s because the men mapping them don’t know they do, or don’t care enough to notice.

So much of the importance of mapping is about navigating the world safely. For women, especially women in less developed countries, that safety is harder to secure. “If we tag something as a public toilet, does that mean it has facilities for women? Does it mean the facilities are safe?” asked Levine. “When we’re tagging specifically, ‘This is a female toilet,’ that means somebody has gone in and said, ‘This is accessible to me.’ When women aren’t doing the tagging, we just get the toilet tag.”
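
To make the tagging mechanics concrete, here is an illustrative sketch of how that detail lives in OSM as key-value tags on a mapped element; the tag names follow common OSM conventions, but whether any given facility actually carries them depends entirely on who mapped it, which is the article’s point.

```python
# The bare tag many mappers stop at: a toilet exists, and nothing more.
public_toilet_minimal = {
    "amenity": "toilets",
}

# The same facility tagged by someone who checked whether it serves women.
public_toilet_detailed = {
    "amenity": "toilets",
    "female": "yes",
    "wheelchair": "yes",
    "changing_table": "yes",
}

# The childcare tag the article notes was only approved in 2013.
childcare_centre = {
    "amenity": "childcare",
    "name": "Example Childcare",  # hypothetical name
}
```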

“Women’s geography,” Price tells her students, is made up of more than bridges and tunnels. It’s shaped by asking things like: Where on the map do you feel safe? How would you walk from A to B in the city without having to look over your shoulder? It’s hard to map these intangibles—but not impossible….(More).