The Internet of Us: Knowing More and Understanding Less in the Age of Big Data
Book by Michael P. Lynch: “We used to say ‘seeing is believing’; now, googling is believing. With 24/7 access to nearly all of the world’s information at our fingertips, we no longer trek to the library or the encyclopedia shelf in search of answers. We just open our browsers, type in a few keywords and wait for the information to come to us. Now firmly established as a pioneering work of modern philosophy, The Internet of Us has helped revolutionize our understanding of what it means to be human in the digital age. Demonstrating that knowledge based on reason plays an essential role in society and that there is more to ‘knowing’ than just acquiring information, leading philosopher Michael P. Lynch shows how our digital way of life makes us value some ways of processing information over others, and thus risks distorting the greatest traits of mankind. Charting a path from Plato’s cave to Google Glass, the result is a necessary guide to navigating the philosophical quagmire that is the ‘Internet of Things.’…(More)”.
Dictionaries and crowdsourcing, wikis and user-generated content
Living Reference Work Entry by Michael Rundell: “It is tempting to dismiss crowdsourcing as a largely trivial recent development which has nothing useful to contribute to serious lexicography. This temptation should be resisted. When applied to dictionary-making, the broad term “crowdsourcing” in fact describes a range of distinct methods for creating or gathering linguistic data. A provisional typology is proposed, distinguishing three approaches which are often lumped under the heading “crowdsourcing.” These are: user-generated content (UGC), the wiki model, and what is referred to here as “crowdsourcing proper.” Each approach is explained, and examples are given of their applications in linguistic and lexicographic projects. The main argument of this chapter is that each of these methods – if properly understood and carefully managed – has significant potential for lexicography. The strengths and weaknesses of each model are identified, and suggestions are made for exploiting them in order to facilitate or enhance different operations within the process of developing descriptions of language. Crowdsourcing – in its various forms – should be seen as an opportunity rather than as a threat or diversion….(More)”.
The Case for Sharing All of America’s Data on Mosquitoes
Ed Yong in the Atlantic: “The U.S. is sitting on one of the largest data sets on any animal group, but most of it is inaccessible and restricted to local agencies….For decades, agencies around the United States have been collecting data on mosquitoes. Biologists set traps, dissect captured insects, and identify which species they belong to. They’ve done this for millions of mosquitoes, creating an unprecedented trove of information—easily one of the biggest long-term attempts to monitor any group of animals, if not the very biggest.
Currently, these agencies can use their data to check if their attempts to curtail mosquito populations are working. Are they doing enough to remove stagnant water, for example? Do they need to spray pesticides? But if they shared their findings, say researchers Micaela Martinez and Samuel Rund, scientists could do much more. They could better understand the ecology of these insects, predict the spread of mosquito-borne diseases like dengue fever or Zika, coordinate control efforts across states and counties, and quickly spot the arrival of new invasive species.
That’s why Martinez and Rund are now calling for the creation of a national database of mosquito records that anyone can access. “There’s a huge amount of taxpayer investment and human effort that goes into setting traps, checking them weekly, dissecting all those mosquitoes under a microscope, and tabulating the data,” says Martinez. “It would be a big bang for our buck to collate all that data and make it available.”
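To make concrete what collating the data would involve, here is a minimal sketch of what a single record in such a national database might look like. The fields are illustrative assumptions only, not a schema from the article or from Martinez and Rund’s preprint:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch only: these fields are assumptions, not a schema
# proposed by Martinez and Rund. One row represents one species count
# from one trap check -- the unit of effort the article describes.
@dataclass
class TrapRecord:
    agency: str          # e.g., a county mosquito-control district
    trap_id: str         # stable identifier for the physical trap
    latitude: float
    longitude: float
    collected_on: date   # date the trap was checked (typically weekly)
    trap_type: str       # e.g., "CDC light trap" or "gravid trap"
    species: str         # binomial name, e.g., "Aedes aegypti"
    count: int           # number of that species identified

# A national, open database would amount to millions of rows like this,
# pooled across agencies and queryable in one place.
example = TrapRecord(
    agency="Example County MCD", trap_id="EX-014",
    latitude=29.76, longitude=-95.37,
    collected_on=date(2017, 8, 1),
    trap_type="CDC light trap", species="Aedes aegypti", count=12,
)
print(example)
```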
Cases of mosquito-borne diseases like dengue and Zika are already tracked and shared, but data on populations of the insects themselves are not. So, during last year’s Zika epidemic, when Martinez wanted to study the Aedes aegypti mosquito that spreads the disease, she had a tough time. “I was really surprised that I couldn’t find data on Aedes aegypti numbers,” she says. Her colleagues explained that scientists instead use climate variables like temperature and humidity to predict where mosquitoes are going to be abundant. That seemed ludicrous to her, especially since organizations do collect information on the actual insects. It’s just that no one ever gathers those figures together….
A few mosquito-related databases do exist, but none are quite right. ArboNET, which is managed by the CDC and state health departments, mainly stores data about mosquito-borne diseases, and whatever information it has on the insects themselves isn’t precise enough in either time or space to be useful for modeling. MosquitoNET, which was developed by the CDC, does track mosquitoes, but “it’s a completely closed system, and hardly anyone has access to it,” says Rund. The Smithsonian Institution’s VectorMap is better in that it’s accessible, “but it lacks any real-time data from the continental United States,” says Rund. “When I checked a few months ago, it had just one record of Aedes aegypti since 2013.”…
Some scientists who work on mosquito control apparently disagree, and negative reviews have stopped Martinez and Rund from publishing their ideas in prominent academic journals. (For now, they’ve uploaded a paper describing their vision to the preprint repository bioRxiv.) “Some control boards say: What if people want to sue us because we’re showing that they have mosquito vectors near their homes, or if their house prices go down?” says Martinez. “And one mosquito-control scientist told me that no one should be able to work with mosquito data unless they’ve gone out and trapped mosquitoes themselves.”…
Debating big data: A literature review on realizing value from big data
Wendy Arianne Günther et al. in The Journal of Strategic Information Systems: “Big data has been considered to be a breakthrough technological development over recent years. Notwithstanding, we have as yet limited understanding of how organizations translate its potential into actual social and economic value. We conduct an in-depth systematic review of IS literature on the topic and identify six debates central to how organizations realize value from big data, at different levels of analysis. Based on this review, we identify two socio-technical features of big data that influence value realization: portability and interconnectivity. We argue that, in practice, organizations need to continuously realign work practices, organizational models, and stakeholder interests in order to reap the benefits from big data. We synthesize the findings by means of an integrated model….(More)”.
From Katrina To Harvey: How Disaster Relief Is Evolving With Technology
Cale Guthrie Weissman at Fast Company: “Open data may sound like a nerdy thing, but this weekend has proven it’s also a lifesaver in more ways than one.
As Hurricane Harvey pelted the southern coast of Texas, a local open-data resource helped provide accurate and up-to-date information to the state’s residents. Inside Harris County’s intricate bayou system–intended to both collect water and effectively drain it–gauges were installed to sense when water is overflowing. The sensors transmit the data to a website, which has become a vital go-to for Houston residents….
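As a rough sketch of why machine-readable gauge data matters, the snippet below polls a feed and flags gauges nearing capacity. The URL and field names are hypothetical, invented for illustration; the article does not describe the real site’s interface:

```python
import json
from urllib.request import urlopen

# Hypothetical feed: the article does not document the real site's API,
# so this only sketches the general pattern of polling an open sensor
# feed and flagging gauges that are close to overflowing.
FEED_URL = "https://example.org/floodgauges.json"  # placeholder URL

def gauges_near_overflow(threshold_pct=90):
    with urlopen(FEED_URL) as resp:
        gauges = json.load(resp)  # assume a list of per-gauge readings
    return [
        g["name"]
        for g in gauges
        if 100 * g["level_ft"] / g["bank_height_ft"] >= threshold_pct
    ]

if __name__ == "__main__":
    for name in gauges_near_overflow():
        print("Gauge nearing capacity:", name)
```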
This open access to flood gauges is just one of the many ways new tech-driven projects have helped improve responses to disasters over the years. “There’s no question that technology has played a much more significant role,” says Rafael Lemaitre, former director of public affairs at FEMA, “since even Hurricane Sandy.”
While Sandy was noted in 2012 for its ability to connect people with Twitter hashtags and other relatively nascent social apps like Instagram, the last few years have brought a paradigm shift in terms of how emergency relief organizations integrate technology into their responses….
Social media isn’t just for the residents. Local and national agencies–including FEMA–rely on this information and are using it to help create faster and more effective disaster responses. Following Hurricane Katrina, FEMA has spent the last decade revamping its culture and methods for reacting to these sorts of situations. “You’re seeing the federal government adapt pretty quickly,” says Lemaitre.
There are a few examples of this. For instance, FEMA now has an app to push necessary information about disaster preparedness. The agency also employs people to cull the open web for information that would help make its efforts more effective. These “social listeners” look at all the available Facebook, Snapchat, and other social media posts in aggregate. Crews are brought on during disasters to gather intelligence and report on areas that need relief efforts–getting “the right information to the right people,” says Lemaitre.
There’s also been a change in how this information is used. Often, when disasters are predicted, people send supplies to the affected areas to try to help out. Yet they don’t know exactly where to send them, and local organizations can become inundated. This creates a huge logistical nightmare for relief organizations that end up sitting on thousands of blankets and tarps in one place when they should be actively dispersing them across hundreds of miles.
“Before, you would just have a deluge of things dropped on top of a disaster that weren’t particularly helpful at times,” says Lemaitre. Now people are using sites like Facebook to ask where they should direct supplies. For example, after a bad flood in Louisiana last year, a woman announced on Facebook that she had food and other necessities, and was able to direct the supplies to an area in need. This, says Lemaitre, is “the most effective way.”
Put together, Lemaitre has seen agencies evolve with technology to create better systems for quicker disaster relief. This has also created a culture of learning, updating, and reacting in real time. Meanwhile, more data is becoming open, which is helping people and agencies alike. (The National Weather Service, which has long trumpeted its open data for all, has become a revered stalwart for such information, and has already proven indispensable in Houston.)
Most important, the pace of technology has caused organizations to change their own procedures. Twelve years ago, during Katrina, the protocol was to wait for an assessment before deploying any assistance. Now organizations like FEMA know that just doesn’t work. “You can’t afford to lose time,” says Lemaitre. “Deploy as much as you can and be fast about it–you can always scale back.”
It’s important to note that, even with rapid technological improvements, there’s no way to compare one disaster response to another–it’s simply not apples to apples. All the same, organizations are still learning about where they should be looking and how to react, connecting people to their local communities when they need them most….(More)”.
From ‘Opening Up’ to Democratic Renewal: Deepening Public Engagement in Legislative Committees
Carolyn M. Hendriks and Adrian Kay in Government and Opposition: “Many legislatures around the world are undergoing a ‘participatory makeover’. Parliaments are hosting open days and communicating the latest parliamentary updates via websites and social media. Public activities such as these may make parliaments more informative and accessible, but much more could be done to foster meaningful democratic renewal. In particular, participatory efforts ought to be engaging citizens in a central task of legislatures – to deliberate and make decisions on collective issues. In this article, the potential of parliamentary committees to bring the public closer to legislative deliberations is considered. Drawing on insights from the practice and theory of deliberative democracy, the article discusses why and how deeper and more inclusive forms of public engagement can strengthen the epistemic, representative and deliberative capacities of parliamentary committees. Practical examples are considered to illustrate the possibilities and challenges of broadening public involvement in committee work….(More)”
Crowdsourcing the Charlottesville Investigation
Internet sleuths got to work, and by Monday morning they were naming names and calling for arrests.
The name of the helmeted man went viral after New York Daily News columnist Shaun King posted a series of photos on Twitter and Facebook that more clearly showed his face and connected him to photos from a Facebook account. “Neck moles gave it away,” King wrote in his posts, which were shared more than 77,000 times. But the name of the red-bearded assailant was less clear: some on Twitter claimed it was a Texas man who goes by a Nordic alias online. Others were sure it was a Michigan man who, according to Facebook, attended high school with other white nationalist demonstrators depicted in photos from Charlottesville.
After being contacted for comment by The Marshall Project, the Michigan man removed his Facebook page from public view.
Such speculation, especially when it is not conclusive, has created new challenges for law enforcement. There is the obvious risk of false identification. In 2013, internet users wrongly identified university student Sunil Tripathi as a suspect in the Boston marathon bombing, prompting the internet forum Reddit to issue an apology for fostering “online witch hunts.” Already, an Arkansas professor was misidentified as a torch-bearing protester, though not a criminal suspect, at the Charlottesville rallies.
Beyond the cost to misidentified suspects, the crowdsourced identification of criminal suspects is both a benefit and burden to investigators.
“If someone says: ‘hey, I have a picture of someone assaulting another person, and committing a hate crime,’ that’s great,” said Sgt. Sean Whitcomb, the spokesman for the Seattle Police Department, which used social media to help identify the pilot of a drone that crashed into a 2015 Pride Parade. (The man was convicted in January.) “But saying, ‘I am pretty sure that this person is so and so’. Well, ‘pretty sure’ is not going to cut it.”
Still, credible information can help police establish probable cause, which means they can ask a judge to sign off on either a search warrant, an arrest warrant, or both….(More)“.
Gaming for Infrastructure
Nilmini Rubin & Jennifer Hara at the Stanford Social Innovation Review: “…the American Society of Civil Engineers (ASCE) estimates that the United States needs $4.56 trillion to keep its deteriorating infrastructure current but only has funding to cover less than half of necessary infrastructure spending—leaving the at least country $2.0 trillion short through the next decade. Globally, the picture is bleak as well: World Economic Forum estimates that the infrastructure gap is $1 trillion each year.
What can be done? Some argue that public-private partnerships (PPPs or P3s) are the answer. We agree that they can play an important role—if done well. In a PPP, a private party provides a public asset or service for a government entity, bears significant risk, and is paid on performance. The upside for governments and their citizens is that the private sector can be incentivized to deliver projects on time, within budget, and with reduced construction risk. The private sector can benefit by earning a steady stream of income from a long-term investment from a secure client. From the Grand Parkway Project in Texas to the Queen Alia International Airport in Jordan, PPPs have succeeded domestically and internationally.
The problem is that PPPs can be very hard to design and implement. And since they can involve commitments of millions or even billions of dollars, a PPP failure can be awful. For example, the Berlin Airport is a PPP that is six years behind schedule, and its cost overruns total roughly $3.8 billion to date.
In our experience, it can be useful for would-be partners to practice engaging in a PPP before they dive into a live project. At our organization, Tetra Tech’s Institute for Public-Private Partnerships, for example, we use an online and multiplayer game—the P3 Game—to help make PPPs work.
The game is played with 12 to 16 people divided into two teams: a Consortium and a Contracting Authority. In each of four rounds, players mimic the activities they would engage in during the course of a real PPP, and, as in real life, they are confronted with unexpected events. The Consortium fails to comply with a routine road inspection: how should the Contracting Authority team respond? The cost of materials skyrockets: how should the Consortium team manage when it has a fixed-price contract?
Players from government ministries, legislatures, construction companies, financial institutions, and other entities get to swap roles and experience a PPP from different vantage points. They think through challenges and solve problems together—practicing, failing, learning, and growing—within the confines of the game and with no real-world cost.
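To make the structure concrete, here is a toy sketch of the round-and-event loop described above. The two events are drawn from the article; the rest (flow, names, the lack of scoring) is invented for illustration and is not the actual P3 Game:

```python
import random

# Toy model of the structure described in the article: two teams, four
# rounds, and an unexpected event each round. Not the real P3 Game.
TEAMS = ("Consortium", "Contracting Authority")
EVENTS = [
    "Consortium misses a routine road inspection",
    "Materials cost spikes under a fixed-price contract",
]

def play(rounds=4, seed=None):
    rng = random.Random(seed)
    for rnd in range(1, rounds + 1):
        event = rng.choice(EVENTS)
        print("Round {}: {}".format(rnd, event))
        for team in TEAMS:
            # In the real game, each team deliberates and responds here.
            print("  {} decides its response".format(team))

play(seed=42)
```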
More than 1,000 people have participated to date, including representatives of the US Army Corps of Engineers, the World Bank, and Johns Hopkins University, using a variety of scenarios. PPP team members who work on part of the Schiphol-Amsterdam-Almere Project, a $5.6-billion road project in the Netherlands, played the game using their actual contract document….(More)”.
Can AI tools replace feds?
Derek B. Johnson at FCW: “The Heritage Foundation…is calling for increased reliance on automation and the potential creation of a “contractor cloud” offering streamlined access to private sector labor as part of its broader strategy for reorganizing the federal government.
Seeking to take advantage of a united Republican government and a president who has vowed to reform the civil service, the foundation drafted a pair of reports this year attempting to identify strategies for consolidating, merging or eliminating various federal agencies, programs and functions. Among those strategies is a proposal for the Office of Management and Budget to issue a report “examining existing government tasks performed by generously-paid government employees that could be automated.”
Citing research on the potential impacts of automation on the United Kingdom’s civil service, the foundation’s authors estimated that similar efforts across the U.S. government could yield $23.9 billion in reduced personnel costs and a 288,000-person reduction in the federal workforce….
The Heritage report also called on the federal government to consider a “contracting cloud.” The idea would essentially be a government version of TaskRabbit, where agencies could select from a pool of pre-approved individual contractors from the private sector who could be brought in for specialized or seasonal work without going through established contracts. Heritage’s Rachel Greszler said the idea came from speaking with subcontractors who complained about having to kick over a certain percentage of their payments to prime contractors even as they did all the work.
Right now the foundation is only calling for the government to examine the idea’s potential and how it would interact with existing or similar contracting vehicles like the GSA schedule. Greszler emphasized that any pool of workers would need to be properly vetted to ensure they met federal standards and practices.
“There has to be guidelines or some type of checks, so you’re not having people come off the street and getting access to secure government data,” she said….(More)
Open & Shut
Harsha Devulapalli: “Welcome to Open & Shut — a new blog dedicated to exploring the opportunities and challenges of working with open data in closed societies around the world. Although we’ll be exploring questions relevant to open data practitioners worldwide, we’re particularly interested in seeing how civil society groups and actors in the Global South are using open data to push for greater government transparency, and tackle daunting social and economic challenges facing their societies….Throughout this series we’ll be profiling and interviewing organisations working with open data worldwide, and providing do-it-yourself data tutorials that will be useful for beginners as well as data experts. …
What do we mean by the terms ‘open data’ and ‘closed societies’?
It’s important to be clear about what we’re dealing with here. So let’s establish some key terms. When we talk about ‘open data’, we mean data that anyone can access, use and share freely. And when we say ‘closed societies’, we’re referring to states or regions in which the political and social environment is actively hostile to notions of openness and public scrutiny, and which hold principles of freedom of information in low esteem. In closed societies, data is either not published at all by the government, or is published only in inaccessible formats, is incomplete, is hard to find, or is simply not digitised at all.
Iran is one such state that we would characterise as a ‘closed society’. At Small Media, we’ve had to confront the challenges of poor data practice, secrecy, and government opaqueness while undertaking work to support freedom of information and freedom of expression in the country. Based on these experiences, we’ve been working to build Iran Open Data — a civil society-led open data portal for Iran, in an effort to make Iranian government data more accessible and easier for researchers, journalists, and civil society actors to work with.
…Open & Shut will shine a light on the exciting new ways that different groups are using data to question dominant narratives, transform public opinion, and bring about tangible change in closed societies. At the same time, it’ll demonstrate the challenges faced by open data advocates in opening up this valuable data. We intend to get the community talking about the need to build cross-border alliances in order to empower the open data movement, and to exchange knowledge and best practices despite the different needs and circumstances we all face….(More)”