Peering at Open Peer Review


at the Political Methodologist: “Peer review is an essential part of the modern scientific process. Sending manuscripts for others to scrutinize is such a widespread practice in academia that its importance cannot be overstated. Since the late eighteenth century, when the Philosophical Transactions of the Royal Society pioneered editorial review, virtually every scholarly outlet has adopted some sort of pre-publication assessment of received works. Although the specifics may vary, the procedure has remained largely the same since its inception: submit, receive anonymous criticism, revise, restart the process if required. A recent survey of APSA members indicates that political scientists overwhelmingly believe in the value of peer review (95%) and the vast majority of them (80%) think peer review is a useful tool to keep themselves up to date with cutting-edge research (Djupe 2015, 349). But do these figures suggest that journal editors can rest upon their laurels and leave the system as it is?

Not quite. A number of studies have documented the shortcomings of peer review. The system has been criticised for being too slow (Ware 2008), conservative (Eisenhart 2002), inconsistent (Smith 2006; Hojat, Gonnella, and Caelleigh 2003) and nepotistic (Sandström and Hällsten 2008), and for being biased with respect to gender (Wold and Wennerås 1997), affiliation (Peters and Ceci 1982), nationality (Ernst and Kienbacher 1991) and language (Ross et al. 2006). These complaints have fostered interesting academic debates (e.g. Meadows 1998; Weller 2001), but thus far the literature offers little practical advice on how to tackle peer review’s problems. One often overlooked aspect in these discussions is how to give reviewers incentives to write well-balanced reports. On the one hand, it is not uncommon for reviewers to feel that their work is burdensome and not properly acknowledged; further, due to the anonymous nature of the reviewing process itself, it is impossible to give a referee proper credit for a constructive report. On the other hand, the reviewers’ right to full anonymity may lead to sub-optimal outcomes, as referees can rarely be held accountable for unduly harsh judgements (Fabiato 1994).

Open peer review (henceforth OPR) is largely in line with a broader trend towards a more transparent political science. Several definitions of OPR have been suggested, including radical ones such as allowing anyone to write pre-publication reviews (crowdsourcing) or fully replacing peer review with post-publication comments (Ford 2013). However, I believe that by adopting a narrow definition of OPR – asking referees only to sign their reports – we can better accommodate the positive aspects of traditional peer review, such as author blinding, within an open framework. Hence, in this text OPR is understood as a reviewing method in which both referee information and referee reports are disclosed to the public, while the authors’ identities are not known to the reviewers before manuscript publication.

How exactly would OPR increase transparency in political science? As a number of articles on the topic note, OPR creates incentives for referees to write insightful reports, or at the very least has no adverse impact on the quality of reviews (DeCoursey 2006; Godlee 2002; Groves 2010; Pöschl 2012; Shanahan and Olsen 2014). In a study that used randomized trials to assess the effect of OPR in the British Journal of Psychiatry, Walsh et al. (2000) show that “signed reviews were of higher quality, were more courteous and took longer to complete than unsigned reviews.” Similar results were reported by McNutt et al. (1990, 1374), who affirm that “editors graded signers as more constructive and courteous […], [and] authors graded signers as fairer.” In the same vein, Kowalczuk et al. (2013) measured the difference in review quality in BMC Microbiology and BMC Infectious Diseases and found that signers received higher ratings for their feedback on methods and for the amount of evidence they mobilised to substantiate their decisions. Van Rooyen and her colleagues (1999; 2010) also ran two randomized studies on the subject; although they found no major difference in the perceived quality of the two types of review, reviewers in the treatment group took significantly more time to evaluate the manuscripts than those in the control group. They also note that authors broadly favoured the open system over closed peer review.

Another advantage of OPR is that it offers a clear way for referees to highlight their specialized knowledge. When reviews are signed, referees are able to receive credit for their important, yet virtually unsung, academic contributions. Instead of a rather vague “service to profession” section in their CVs, referees can provide precise information about the topics they are knowledgeable about and the sort of advice they give prospective authors. Moreover, reports assigned a DOI can be shared like any other piece of scholarly work, which enlarges the body of knowledge of our discipline and increases the number of citations referees receive. In this sense, signed reviews can also be useful to universities and funding bodies as an additional way to assess the expert knowledge of a prospective candidate. And since supervision skills are difficult to measure directly, signed reviews offer a reasonable proxy for an applicant’s mentoring and teaching abilities.

OPR provides background to manuscripts at the time of publication (Ford 2015; Lipworth et al. 2011). It is not uncommon for a manuscript to take months, or even years, to be published in a peer-reviewed journal. In the meantime, the text usually undergoes several major revisions, but readers rarely, if ever, see this trial-and-error approach in action. With public reviews, everyone would be able to track the changes made in the original manuscript and understand how the referees improved the text before its final version. Hence, OPR makes the scientific exchange clear, provides useful background information to manuscripts and fosters post-publication discussions by the readership at large.

Signed and public reviews are also important pedagogical tools. OPR gives a rare glimpse of how academic research is actually conducted, making explicit the usual need for multiple iterations between the authors and the editors before an article appears in print. Furthermore, OPR can fill some of the gap in peer-review training for graduate students. OPR allows junior scholars to compare different review styles, understand what the current empirical or theoretical puzzles of their discipline are, and engage in post-publication discussions about topics in which they are interested (Ford 2015; Lipworth et al. 2011)….(More)”

Decoding the Future for National Security


George I. Seffers at Signal: “U.S. intelligence agencies are in the business of predicting the future, but no one has systematically evaluated the accuracy of those predictions—until now. The intelligence community’s cutting-edge research and development agency uses a handful of predictive analytics programs to measure and improve the ability to forecast major events, including political upheavals, disease outbreaks, insider threats and cyber attacks.

The Office for Anticipating Surprise at the Intelligence Advanced Research Projects Activity (IARPA) is a place where crystal balls come in the form of software, tournaments and throngs of people. The office sponsors eight programs designed to improve predictive analytics, which uses a variety of data to forecast events. The programs all focus on incidents outside of the United States, and the information is anonymized to protect privacy. The programs are in different stages, some having recently ended as others are preparing to award contracts.

But they all have one more thing in common: They use tournaments to advance the state of the predictive analytic arts. “We decided to run a series of forecasting tournaments in which people from around the world generate forecasts about, now, thousands of real-world events,” says Jason Matheny, IARPA’s new director. “All of our programs on predictive analytics do use this tournament style of funding and evaluating research.” The Open Source Indicators program used a crowdsourcing technique in which people across the globe offered their predictions on such events as political uprisings, disease outbreaks and elections.

The data analyzed included social media trends, Web search queries and even cancelled dinner reservations—an indication that people are sick. “The methods applied to this were all automated. They used machine learning to comb through billions of pieces of data to look for that signal, that leading indicator, that an event was about to happen,” Matheny explains. “And they made amazing progress. They were able to predict disease outbreaks weeks earlier than traditional reporting.” The recently completed Aggregative Contingent Estimation (ACE) program also used a crowdsourcing competition in which people predicted events, including whether weapons would be tested, treaties would be signed or armed conflict would break out along certain borders. Volunteers were asked to provide information about their own background and what sources they used. IARPA also tested participants’ cognitive reasoning abilities. Volunteers provided their forecasts every day, and IARPA personnel kept score. Interestingly, they discovered the “deep domain” experts were not the best at predicting events. Instead, people with a certain style of thinking came out the winners. “They read a lot, not just from one source, but from multiple sources that come from different viewpoints. They have different sources of data, and they revise their judgments when presented with new information. They don’t stick to their guns,” Matheny reveals. …
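The article doesn’t say how exactly IARPA personnel “kept score”, but forecasting tournaments of this kind are conventionally evaluated with a proper scoring rule; the Brier score is the standard example. A minimal sketch (the metric choice here is an assumption, not a detail from the excerpt):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and binary
    outcomes (0 = event did not happen, 1 = it did).
    Lower is better; a perfect forecaster scores 0."""
    if len(forecasts) != len(outcomes):
        raise ValueError("need one outcome per forecast")
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who hedges and revises scores better than an
# overconfident one who "sticks to their guns" and is sometimes wrong.
careful = brier_score([0.7, 0.2, 0.9], [1, 0, 1])
stubborn = brier_score([1.0, 0.0, 0.0], [1, 0, 1])
print(careful, stubborn)
```

Averaged over thousands of resolved events, scores like these make it possible to compare “deep domain” experts against the open-minded generalists Matheny describes.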

The ACE research also contributed to a recently released book, Superforecasting: The Art and Science of Prediction, according to the IARPA director. The book was co-authored by Dan Gardner and Philip Tetlock, the Annenberg University Professor of Psychology and Management at the University of Pennsylvania, who also served as a principal investigator for the ACE program. Like ACE, the Crowdsourcing Evidence, Argumentation, Thinking and Evaluation program uses the forecasting tournament format, but it also requires participants to explain and defend their reasoning. The initiative aims to improve analytic thinking by combining structured reasoning techniques with crowdsourcing.

Meanwhile, the Foresight and Understanding from Scientific Exposition (FUSE) program forecasts science and technology breakthroughs….(More)”

Biases in collective platforms: Wikipedia, GitHub and crowdmapping


Stefana Broadbent at Nesta: “Many of the collaboratively developed knowledge platforms we discussed at our recent conference, At The Roots of Collective Intelligence, suffer from a well-known “contributors’ bias”.

More than 85% of Wikipedia’s entries have been written by men 

OpenStack, as with most other open-source projects, has seen the emergence of a small group of developers who author the majority of the work. In fact, 80% of the commits have been authored by slightly less than 8% of the authors, while 90% of the commits correspond to about 17% of all the authors.

GitHub’s Be Social function allows users to “follow” other participants and receive notification of their activity. The most popular contributors tend therefore to attract other users to the projects they are working on. And Open Street Map has 1.2 million registered users, but less than 15% of them have produced the majority of the 13 million elements of information.
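Figures like “80% of the commits from 8% of the authors” fall out of sorting contributors by activity and reading off cumulative shares. A small illustration with made-up commit counts (the numbers below are hypothetical, not OpenStack’s or OSM’s data):

```python
def top_share(counts, commit_share=0.8):
    """Smallest fraction of contributors who together account for
    at least `commit_share` of all contributions."""
    ordered = sorted(counts, reverse=True)
    total = sum(ordered)
    running = 0
    for i, c in enumerate(ordered, start=1):
        running += c
        if running >= commit_share * total:
            return i / len(ordered)
    return 1.0

# Hypothetical commit counts for a ten-person project:
commits = [400, 250, 150, 80, 40, 30, 20, 15, 10, 5]
print(top_share(commits, 0.8))  # fraction of authors behind 80% of commits
```

On this toy distribution, just 30% of contributors produce 80% of the commits; the real platforms discussed above are far more skewed.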

Research by Quattrone, Capra and De Meo (2015) showed that while the content mapped did not differ between active and occasional mappers, the social composition of the power users led to a geographical bias, with less affluent areas remaining unmapped more frequently than urban centres.

These well-known biases in crowdsourcing information, also known as the ‘power users’ effect, were discussed by Professor Licia Capra from the Department of Engineering at UCL. Watch the video of her talk here.

In essence, despite the fact that crowdsourcing platforms are inclusive and open to anyone willing to dedicate the time and effort, a process of self-selection takes place. Different factors can explain why certain gender and socio-economic groups are drawn to specific activities, but it is clear that there is a progressive reduction in the diversity of contributors over time.

The effect is more extreme where continuous contributions are needed. As data from the Humanitarian OpenStreetMap Team project showed, humanitarian crises attract many users who contribute intensely for a short time, but only very few participants contribute regularly over the long term. Only a small proportion of power users continue editing or adding code for sustained periods. This effect raises two important questions: does the editing work of the active few skew the information made available, and what can be done to avoid this type of concentration?….

The issue of how to attract more volunteers and editors is more complex and is a constant challenge for any crowdsourcing platform. We can look back at when Wikipedia started losing contributors, which coincided with a period of tighter restrictions on the editing process. This suggests that alongside designing the interface to make contributions easy to create and share, it is also necessary to design practices and social norms that are immediately and continuously inclusive. – (More)”


Tech and Innovation to Re-engage Civic Life


Hollie Russon Gilman at the Stanford Social Innovation Review: “Sometimes even the best-intentioned policymakers overlook the power of people. And even the best-intentioned discussions on social impact and leveraging big data for the social sector can obscure the power of everyday people in their communities.

But time and time again, I’ve seen the transformative power of civic engagement when initiatives are structured well. For example, the other year I witnessed a high school student walk into a school auditorium one evening during Boston’s first-ever youth-driven participatory budgeting project. Participatory budgeting gives residents a structured opportunity to work together to identify neighborhood priorities, work in tandem with government officials to draft viable projects, and prioritize projects to fund. Elected officials in turn pledge to implement these projects and are held accountable to their constituents. Initially intrigued by an experiment in democracy (and maybe the free pizza), this student remained engaged over several months, because she met new members of her community; got to interact with elected officials; and felt like she was working on a concrete objective that could have a tangible, positive impact on her neighborhood.

For many of the young participants, ages 12-25, being part of a participatory budgeting initiative is the first time they are involved in civic life. Many were excited that the City of Boston, in collaboration with the nonprofit Participatory Budgeting Project, empowered young people with the opportunity to allocate $1 million in public funds. Through participating, young people gain invaluable civic skills, and sometimes even a passion that can fuel other engagements in civic and communal life.

This is just one example of a broader civic and social innovation trend. Across the globe, people are working together with their communities to solve seemingly intractable problems, but as diverse as those efforts are, there are also commonalities. Well-structured civic engagement creates the space and provides the tools for people to exert agency over policies. When citizens have concrete objectives, access to necessary technology (whether it’s postcards, trucks, or open data portals), and an eye toward outcomes, social change happens.

Using Technology to Distribute Expertise

Technology is allowing citizens around the world to participate in solving local, national, and global problems. When it comes to large, public bureaucracies, expertise is largely top-down and concentrated. Leveraging technology creates opportunities for people to work together in new ways to solve public problems. One way is through civic crowdfunding platforms like Citizinvestor.com, which cities can use to develop public sector projects for citizen support; several cities in Rhode Island and Oregon, as well as Philadelphia, have successfully pooled citizen resources to fund new public works. Another way is through citizen science. Old Weather, a crowdsourcing project from the National Archives and Zooniverse, enrolls people to transcribe old British ship logs to identify climate change patterns. Platforms like these allow anyone to devote a small amount of time or resources toward a broader public good. And because they have a degree of transparency, people can see the progress and impact of their efforts. ….(More)”

How Fast is Your Carrier? Crowdsourcing Mobile Network Quality with OpenSignal


Discover: I was on a call with Teresa Murphy-Skorzova, Community Growth Manager for OpenSignal, an app that uses crowd-sourcing to aggregate cell phone signals and WiFi strength data throughout the world. …She explains that while cell phone networks like Verizon and AT&T measure the percent of the population that usually has coverage, OpenSignal is “measuring the experience of the user,” mapping signals from the devices themselves in real time. Individuals record their connection as they go about their day. The app recognizes that people and their cell phone devices are, well… mobile.

In response to Teresa’s curiosity about my connection, I opened the app and pressed the start button, trying a “Speedtest”. A number began to fluctuate on my screen. Download speed: 14.9 Mbps. A new number began to fluctuate, testing upload speed: 5.3 Mbps. I felt like I had just played slots, already anticipating my next results. I tried again, and saw that my download speed was up to 17.5 Mbps. I wondered what my speeds were at the coffee shops I frequent. What about in the woods where I hiked last weekend, or in the subway tunnel where my texts rarely send?
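The numbers the app reports boil down to bits transferred over elapsed time. A minimal sketch of that conversion (the byte counts here are illustrative, and this is not OpenSignal’s actual measurement code):

```python
def throughput_mbps(n_bytes, seconds):
    """Convert a transfer of n_bytes completed in `seconds`
    into megabits per second (1 Mbps = 1e6 bits/s)."""
    if seconds <= 0:
        raise ValueError("elapsed time must be positive")
    return n_bytes * 8 / seconds / 1e6

# Downloading ~9.3 MB of test data in 5 seconds corresponds to the
# 14.9 Mbps figure reported above.
print(round(throughput_mbps(9_312_500, 5.0), 1))
```

A real speed test additionally has to saturate the link with parallel connections and discard the slow-start ramp-up, which is why the on-screen number fluctuates before settling.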


…While individuals learn where to find their own best signals, they contribute to a much larger picture of network quality, Teresa explained. “When a user discovers an area that hasn’t been measured or when they discover an area with poor signal, they’re eager to contribute.” While users are interested in their personal signals, OpenSignal is interested in the aggregated signal of all devices on a particular location and network. Individual device data is therefore kept anonymous.

Some surprising research projects have used OpenSignal’s data to explore implications for health, the economy, and weather. In one of these projects, a team at the Royal Netherlands Meteorological Institute (RNMI) collaborated with OpenSignal to expand its rain radar program. Rainfall weakens the microwave links between cell phone towers, so measuring that attenuation across a network yields a space-time map of rainfall (a “rain radar” map) built from cellular link data. RNMI looked at OpenSignal data from unlikely rain radar locations: some areas were remote or impoverished, while others had fairly arid climates. The team can now determine whether link-based rain radar is feasible on a larger scale….(More)”
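Link-based rain radar rests on a power-law relation between the specific attenuation k (in dB/km) observed on a tower-to-tower link and the rain rate R (in mm/h): k ≈ a·R^b, with frequency-dependent coefficients. A minimal sketch that inverts this relation; the coefficient values below are illustrative placeholders, not the ones used by RNMI:

```python
def rain_rate(loss_db, link_km, a=0.0364, b=1.06):
    """Estimate the path-averaged rain rate (mm/h) along a microwave
    link from the rain-induced loss (dB) over a link of link_km
    kilometres, using the power law k = a * R**b, where
    k = loss_db / link_km is the specific attenuation in dB/km.
    The coefficients a and b depend on the link frequency; these
    values are for illustration only."""
    k = loss_db / link_km
    return (k / a) ** (1 / b)

# A 3 dB rain-induced loss over a 2 km link suggests heavy rain:
print(round(rain_rate(3.0, 2.0), 1))
```

In practice the hard part is isolating the rain-induced component of the loss from everything else that perturbs a link (wet antennas, multipath, hardware drift), which is where the meteorologists’ expertise comes in.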

Batea: a Wikipedia hack for medical students


Tom Sullivan at HealthCareIT: “Medical students use Wikipedia in great numbers, but what if it were a more trusted source of information?

That’s the idea behind Batea, a piece of software that essentially collects data from clinical reference URLs medical students visit, then aggregates that information to share with WikiProject Medicine, such that relevant medical editors can glean insights about how best to enhance Wikipedia’s medical content.
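The announcement describes only the general flow (collect clinical reference URLs, anonymize, aggregate). A minimal sketch of that aggregation step, counting anonymized visits per Wikipedia medical article; everything here, including the `article_counts` helper, is illustrative rather than Batea’s actual schema:

```python
from collections import Counter
from urllib.parse import urlparse, unquote

def article_counts(visited_urls):
    """Aggregate a donated browsing history into per-article visit
    counts, keeping only Wikipedia article pages and discarding
    everything else (which could identify the donor)."""
    counts = Counter()
    for url in visited_urls:
        parts = urlparse(url)
        if parts.netloc.endswith(".wikipedia.org") and parts.path.startswith("/wiki/"):
            title = unquote(parts.path[len("/wiki/"):])
            counts[title] += 1
    return counts

history = [
    "https://en.wikipedia.org/wiki/Myocardial_infarction",
    "https://en.wikipedia.org/wiki/Myocardial_infarction",
    "https://en.wikipedia.org/wiki/Aspirin",
    "https://example.com/not-wikipedia",
]
print(article_counts(history).most_common(1))
```

Summed across many students, counts like these tell WikiProject Medicine editors which articles clinicians-in-training actually lean on, and therefore which ones most deserve editorial attention.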

Batea takes its name from the Spanish word for a gold pan, according to Fred Trotter, a data journalist at DocGraph.

“It’s a data mining project,” Trotter explained, “so we wanted a short term that positively referenced mining.”

DocGraph built Batea with support from the Robert Wood Johnson Foundation and, prior to releasing it on Tuesday, operated beta testing pilots of the browser extension at the University of California, San Francisco and the University of Texas, Houston.

UCSF, for instance, has what Trotter described as “a unique program where medical students edit Wikipedia for credit. They helped us tremendously in testing the alpha versions of the software.”

Wikipedia houses some 25,000 medical articles that receive more than 200 million views each month, according to the DocGraph announcement, while 8,000 pharmacology articles are read more than 40 million times a month.

DocGraph is encouraging medical students around the country to download the Batea extension – and anonymously donate their clinical-related browsing history. Should Batea gain critical mass, the potential exists for it to substantively enhance Wikipedia….(More)”

How Big Data is Helping to Tackle Climate Change


Bernard Marr at DataInformed: “Climate scientists have been gathering a great deal of data for a long time, but analytics technology has only recently caught up. Now that cloud computing, distributed storage, and massive amounts of processing power are affordable for almost everyone, those data sets are being put to use. On top of that, the growing number of Internet of Things devices we carry around is adding to the amount of data we collect. And the rise of social media means more and more people are reporting environmental data and uploading photos and videos of their environment, which also can be analyzed for clues.

Perhaps one of the most ambitious projects that employ big data to study the environment is Microsoft’s Madingley, which is being developed with the intention of creating a simulation of all life on Earth. The project already provides a working simulation of the global carbon cycle, and it is hoped that, eventually, everything from deforestation to animal migration, pollution, and overfishing will be modeled in a real-time “virtual biosphere.” Just a few years ago, the idea of a simulation of the entire planet’s ecosphere would have seemed like ridiculous, pie-in-the-sky thinking. But today it’s something into which one of the world’s biggest companies is pouring serious money. Microsoft is doing this because it believes that analytical technology has finally caught up with the ability to collect and store data.

Another data giant that is developing tools to facilitate analysis of climate and ecological data is EMC. Working with scientists at Acadia National Park in Maine, the company has developed platforms to pull in crowd-sourced data from citizen science portals such as eBird and iNaturalist. This allows park administrators to monitor the impact of climate change on wildlife populations as well as to plan and implement conservation strategies.

Last year, the United Nations, under its Global Pulse data analytics initiative, launched the Big Data Climate Challenge, a competition aimed at promoting innovative data-driven climate change projects. Among the first to receive recognition under the program is Global Forest Watch, which combines satellite imagery, crowd-sourced witness accounts, and public datasets to track deforestation around the world; deforestation is believed to be a leading man-made cause of climate change. The project has been promoted as a way for ethical businesses to ensure that their supply chains are not complicit in deforestation.

Other initiatives are targeted at a more personal level, for example analyzing the transit routes available for an individual journey in Google Maps and making recommendations based on the carbon emissions of each route.

The idea of “smart cities” is central to the concept of the Internet of Things – the idea that everyday objects and tools are becoming increasingly connected, interactive, and intelligent, and capable of communicating with each other independently of humans. Many of the ideas put forward by smart-city pioneers are grounded in climate awareness, such as reducing carbon dioxide emissions and energy waste across urban areas. Smart metering allows utility companies to increase or restrict the flow of electricity, gas, or water to reduce waste and ensure adequate supply at peak periods. Public transport can be efficiently planned to avoid wasted journeys and provide a reliable service that will encourage citizens to leave their cars at home.

These examples raise an important point: It’s apparent that data – big or small – can tell us if, how, and why climate change is happening. But, of course, this is only really valuable to us if it also can tell us what we can do about it. Some projects, such as Weathersafe, which helps coffee growers adapt to changing weather patterns and soil conditions, are designed to help humans deal with climate change. Others are designed to tackle the problem at the root, by highlighting the factors that cause it in the first place and showing us how we can change our behavior to minimize damage….(More)”

Crowdsourced phone camera footage maps conflicts


Springwise: “The UN requires accurate proof when investigating possible war crimes, but with different sides of a conflict providing contradictory evidence, and the unsafe nature of the environment, gaining genuine insight can be problematic. A team based at Goldsmiths, University of London is using amateur footage to investigate.

Forensic Architecture makes use of the increasingly prevalent smartphone footage on social media networks. By crowdsourcing several viewpoints around a given location on an accurately 3D-rendered map, the team is able to determine where explosive devices were used, and of what calibre. Key resources are the smoke plumes from explosions, which have a unique shape at any given moment; the team can map a plume and match it across footage from various viewpoints at the exact same instant, building a dossier of evidence for a war crimes case.
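Reduced to two dimensions, matching a plume across viewpoints is a bearing-intersection problem: each camera position plus the direction it faces defines a ray, and the plume sits where the rays cross. A toy sketch of that geometry (Forensic Architecture’s actual photogrammetric pipeline is far more involved):

```python
import math

def intersect_bearings(p1, brg1, p2, brg2):
    """Intersect two sight lines given as (x, y) observer positions and
    compass-style bearings in degrees (0 = north/+y, clockwise).
    Returns the (x, y) intersection, or None if the lines are parallel."""
    d1 = (math.sin(math.radians(brg1)), math.cos(math.radians(brg1)))
    d2 = (math.sin(math.radians(brg2)), math.cos(math.radians(brg2)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None  # parallel lines of sight, no unique fix
    # Solve p1 + t*d1 = p2 + s*d2 for t via Cramer's rule.
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two observers, 10 units apart, sight the same smoke plume:
print(intersect_bearings((0, 0), 45, (10, 0), 315))
```

With more than two viewpoints the fixes can be cross-checked against each other, which is what lets the team treat amateur footage as evidence rather than anecdote.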

While Forensic Architecture’s method has been developed to substantiate war crime allegations, the potential uses in other areas where satellite data are not available are numerous — forest fire sources could be located based on smoke plumes, and potential crowd crush scenarios may be spotted before they occur….(More)”

Build digital democracy


Dirk Helbing & Evangelos Pournaras in Nature: “Fridges, coffee machines, toothbrushes, phones and smart devices are all now equipped with communicating sensors. In ten years, 150 billion ‘things’ will connect with each other and with billions of people. The ‘Internet of Things’ will generate data volumes that double every 12 hours rather than every 12 months, as is the case now.

Blinded by information, we need ‘digital sunglasses’. Whoever builds the filters to monetize this information determines what we see — Google and Facebook, for example. Many choices that people consider their own are already determined by algorithms. Such remote control weakens responsible, self-determined decision-making and thus society too.

The European Court of Justice’s ruling on 6 October that countries and companies must comply with European data-protection laws when transferring data outside the European Union demonstrates that a new digital paradigm is overdue. To ensure that no government, company or person with sole control of digital filters can manipulate our decisions, we need information systems that are transparent, trustworthy and user-controlled. Each of us must be able to choose, modify and build our own tools for winnowing information.

With this in mind, our research team at the Swiss Federal Institute of Technology in Zurich (ETH Zurich), alongside international partners, has started to create a distributed, privacy-preserving ‘digital nervous system’ called Nervousnet. Nervousnet uses the sensor networks that make up the Internet of Things, including those in smartphones, to measure the world around us and to build a collective ‘data commons’. The many challenges ahead will be best solved using an open, participatory platform, an approach that has proved successful for projects such as Wikipedia and the open-source operating system Linux.

A wise king?

The science of human decision-making is far from understood. Yet our habits, routines and social interactions are surprisingly predictable. Our behaviour is increasingly steered by personalized advertisements and search results, recommendation systems and emotion-tracking technologies. Thousands of pieces of metadata have been collected about every one of us (see go.nature.com/stoqsu). Companies and governments can increasingly manipulate our decisions, behaviour and feelings [1].

Many policymakers believe that personal data may be used to ‘nudge’ people to make healthier and environmentally friendly decisions. Yet the same technology may also promote nationalism, fuel hate against minorities or skew election outcomes [2] if ethical scrutiny, transparency and democratic control are lacking — as they are in most private companies and institutions that use ‘big data’. The combination of nudging with big data about everyone’s behaviour, feelings and interests (‘big nudging’, if you will) could eventually create close to totalitarian power.

Countries have long experimented with using data to run their societies. In the 1970s, Chilean President Salvador Allende created computer networks to optimize industrial productivity [3]. Today, Singapore considers itself a data-driven ‘social laboratory’ [4] and other countries seem keen to copy this model.

The Chinese government has begun rating the behaviour of its citizens [5]. Loans, jobs and travel visas will depend on an individual’s ‘citizen score’, their web history and political opinion. Meanwhile, Baidu — the Chinese equivalent of Google — is joining forces with the military for the ‘China brain project’, using ‘deep learning’ artificial-intelligence algorithms to predict the behaviour of people on the basis of their Internet activity [6].

The intentions may be good: it is hoped that big data can improve governance by overcoming irrationality and partisan interests. But the situation also evokes the warning of the eighteenth-century philosopher Immanuel Kant, that the “sovereign acting … to make the people happy according to his notions … becomes a despot”. It is for this reason that the US Declaration of Independence emphasizes the pursuit of happiness of individuals.

Ruling like a ‘benevolent dictator’ or ‘wise king’ cannot work because there is no way to determine a single metric or goal that a leader should maximize. Should it be gross domestic product per capita or sustainability, power or peace, average life span or happiness, or something else?

Better is pluralism. It hedges risks, promotes innovation, collective intelligence and well-being. Approaching complex problems from varied perspectives also helps people to cope with rare and extreme events that are costly for society — such as natural disasters, blackouts or financial meltdowns.

Centralized, top-down control of data has various flaws. First, it will inevitably become corrupted or hacked by extremists or criminals. Second, owing to limitations in data-transmission rates and processing power, top-down solutions often fail to address local needs. Third, manipulating the search for information and intervening in individual choices undermines ‘collective intelligence’ [7]. Fourth, personalized information creates ‘filter bubbles’ [8]. People are exposed less to other opinions, which can increase polarization and conflict [9].

Fifth, reducing pluralism is as bad as losing biodiversity, because our economies and societies are like ecosystems with millions of interdependencies. Historically, a reduction in diversity has often led to political instability, collapse or war. Finally, by altering the cultural cues that guide people’s decisions, everyday decision-making is disrupted, which undermines rather than bolsters social stability and order.

Big data should be used to solve the world’s problems, not for illegitimate manipulation. But the assumption that ‘more data equals more knowledge, power and success’ does not hold. Although we have never had so much information, we face ever more global threats, including climate change, unstable peace and socio-economic fragility, and political satisfaction is low worldwide. About 50% of today’s jobs will be lost in the next two decades as computers and robots take over tasks. But will we see the macroeconomic benefits that would justify such large-scale ‘creative destruction’? And how can we reinvent half of our economy?

The digital revolution will mainly benefit countries that achieve a ‘win–win–win’ situation for business, politics and citizens alike [10]. To mobilize the ideas, skills and resources of all, we must build information systems capable of bringing diverse knowledge and ideas together. Online deliberation platforms and reconfigurable networks of smart human minds and artificially intelligent systems can now be used to produce collective intelligence that can cope with the diverse and complex challenges surrounding us….(More)” See Nervousnet project

UK police force trials virtual crime visits over Skype


Nick Summers at Engadget: In an effort to cut costs and make its officers more efficient, police in Peterborough, England are asking citizens to report their crimes over Skype. So, whereas before a local “bobby” would come round to their house, notepad in hand, to ask questions and take down what happened, the entire process will now be conducted over webcam. Alternatively, victims can do the follow-up on the phone or at the station — handy if Skype is being its usual, unreliable self. The system is being trialled for crimes reported via 101, the police’s non-emergency contact number. The force says it’ll give people more flexibility with appointment times, and also ensure officers spend more hours each day on patrol. We suspect it also has something to do with the major budget cuts facing forces up and down the country….(More)”