Living Reference Work Entry by Michael Rundell: “It is tempting to dismiss crowdsourcing as a largely trivial recent development which has nothing useful to contribute to serious lexicography. This temptation should be resisted. When applied to dictionary-making, the broad term “crowdsourcing” in fact describes a range of distinct methods for creating or gathering linguistic data. A provisional typology is proposed, distinguishing three approaches which are often lumped under the heading “crowdsourcing.” These are: user-generated content (UGC), the wiki model, and what is referred to here as “crowd-sourcing proper.” Each approach is explained, and examples are given of their applications in linguistic and lexicographic projects. The main argument of this chapter is that each of these methods – if properly understood and carefully managed – has significant potential for lexicography. The strengths and weaknesses of each model are identified, and suggestions are made for exploiting them in order to facilitate or enhance different operations within the process of developing descriptions of language. Crowdsourcing – in its various forms – should be seen as an opportunity rather than as a threat or diversion….(More)”.
Debating big data: A literature review on realizing value from big data
Wendy Arianne Günther et al. in The Journal of Strategic Information Systems: “Big data has been considered to be a breakthrough technological development over recent years. Notwithstanding, we have as yet limited understanding of how organizations translate its potential into actual social and economic value. We conduct an in-depth systematic review of IS literature on the topic and identify six debates central to how organizations realize value from big data, at different levels of analysis. Based on this review, we identify two socio-technical features of big data that influence value realization: portability and interconnectivity. We argue that, in practice, organizations need to continuously realign work practices, organizational models, and stakeholder interests in order to reap the benefits from big data. We synthesize the findings by means of an integrated model….(More)”.
Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessments in Sentencing
Priscilla Guo, Danielle Kehl, and Sam Kessler at Responsive Communities (Harvard): “In the summer of 2016, some unusual headlines began appearing in news outlets across the United States. “Secret Algorithms That Predict Future Criminals Get a Thumbs Up From the Wisconsin Supreme Court,” read one. Another declared: “There’s software used across the country to predict future criminals. And it’s biased against blacks.” These news stories (and others like them) drew attention to a previously obscure but fast-growing area in the field of criminal justice: the use of risk assessment software, powered by sophisticated and sometimes proprietary algorithms, to predict whether individual criminals are likely candidates for recidivism. In recent years, these programs have spread like wildfire throughout the American judicial system. They are now being used in a broad capacity, in areas ranging from pre-trial risk assessment to sentencing and probation hearings. This paper focuses on the latest—and perhaps most concerning—use of these risk assessment tools: their incorporation into the criminal sentencing process, a development which raises fundamental legal and ethical questions about fairness, accountability, and transparency. The goal is to provide an overview of these issues and offer a set of key considerations and questions for further research that can help local policymakers who are currently implementing or considering implementing similar systems. We start by putting this trend in context: the history of actuarial risk in the American legal system and the evolution of algorithmic risk assessments as the latest incarnation of a much broader trend. We go on to discuss how these tools are used in sentencing specifically and how that differs from other contexts like pre-trial risk assessment. We then delve into the legal and policy questions raised by the use of risk assessment software in sentencing decisions, including the potential for constitutional challenges under the Due Process and Equal Protection clauses of the Fourteenth Amendment. Finally, we summarize the challenges that these systems create for law and policymakers in the United States, and outline a series of possible best practices to ensure that these systems are deployed in a manner that promotes fairness, transparency, and accountability in the criminal justice system….(More)”.
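To make the transparency concern concrete, here is a minimal sketch of how an actuarial risk score is typically computed: a weighted sum of defendant attributes, squashed into a probability and mapped to a risk band. Every feature name, weight and threshold below is a hypothetical illustration; tools like COMPAS keep their actual inputs and scoring proprietary, which is exactly what complicates due-process review.

```python
import math

# Hypothetical features and weights -- real tools keep theirs proprietary,
# which is precisely what makes them hard for defendants and courts to audit.
FEATURE_WEIGHTS = {
    "prior_convictions": 0.30,
    "age_at_first_offense": -0.02,
    "employed": -0.15,  # 1 = employed, 0 = unemployed
}

def risk_score(defendant: dict) -> float:
    """Weighted sum of attributes, squashed to a 0-1 'recidivism risk'."""
    linear = sum(w * defendant.get(f, 0) for f, w in FEATURE_WEIGHTS.items())
    return 1 / (1 + math.exp(-linear))  # logistic squashing

def risk_band(score: float) -> str:
    """Map the continuous score onto the low/medium/high bands a judge sees."""
    return "high" if score >= 0.7 else "medium" if score >= 0.4 else "low"

print(risk_band(risk_score({"prior_convictions": 3, "employed": 0})))  # "high"
```

Even this toy version makes the policy problem visible: a defendant cannot meaningfully contest a “high” label without access to the features, weights and thresholds that produced it.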
Where’s the ‘Civic’ in CivicTech?
Blog by Pius Enywaru: “The ideology of community participation and development is a crucial topic for any nation or community seeking to attain sustainable development. Here in Uganda, oftentimes when the opportunity arises for public participation, either in local planning or in holding local politicians to account, the ‘don’t care’ attitude reigns….
What works?
Some of these tools include Ask Your Government Uganda, a platform built to help members of the public get the information they want from 106 public agencies in Uganda. U-Report, developed by UNICEF, is an SMS-based social monitoring tool designed to address issues affecting the youth of Uganda. Mentioned in a previous blog post, Parliament Watch brings the proceedings of the Parliament of Uganda to the citizens. The organization leverages technology to share live updates on social media and provides in-depth analysis to create a better understanding of the business of Parliament. Other tools used include citizen scorecards, public media campaigns and public petitions. Just recently, we have had a few calls to action to get people to sign petitions, with somewhat lackluster results.
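For readers curious about the mechanics behind an SMS-based monitoring tool like U-Report, the core loop is simple: receive a short coded reply, normalise it, and tally. The sketch below is a generic illustration under assumed message formats; it does not reflect UNICEF’s actual implementation.

```python
from collections import Counter

# Hypothetical poll: "Does your school have clean drinking water? Reply YES or NO."
VALID_ANSWERS = {"YES", "NO"}

def tally_replies(raw_sms: list[str]) -> Counter:
    """Normalise free-text SMS replies and count the valid answers."""
    tally = Counter()
    for message in raw_sms:
        answer = message.strip().upper()
        if answer in VALID_ANSWERS:
            tally[answer] += 1
        else:
            tally["UNPARSED"] += 1  # flag for human review
    return tally

print(tally_replies(["yes", "NO", "Yes!", "no"]))
# Counter({'NO': 2, 'YES': 1, 'UNPARSED': 1})
```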
What doesn’t work?
Although the usage of these tools has grown dramatically, there is still a lack of awareness and, consequently, of community participation. In order to understand the interventions which the Government of Uganda believes are necessary for sustainable urban development, it is important to examine the realities pertaining to urban areas and their planning processes. There are many challenges in deploying ICT-based community participation tools: limited funding and support for such initiatives, low literacy levels, low technical literacy, a large digital divide, low rates of seeking input from communities when developing these tools, a lack of adequate government involvement, and resistance to or distrust of change by both government and citizens. Furthermore, many of these initiatives need a large marketing or sensitization push to let citizens know that these services exist for their benefit.
There are great minds with brilliant ideas for bringing literally everyone on board through civic engagement. When you look at their ideas, you will agree that they might indeed make for a reputable service and bring about remarkable change in different communities. However, the biggest question has always been: “How do these ideas get executed and adopted by the communities they target?” These ideas suffer a major setback: a lack of inclusivity, which undermines community participation. This remains a puzzle for most folks who have these ideas….(More)”.
E-residency and blockchain
Clare Sullivan and Eric Burger in Computer Law & Security Review: “In December 2014, Estonia became the first nation to open its digital borders to enable anyone, anywhere in the world to apply to become an e-Resident. Estonian e-Residency is essentially a commercial initiative. The e-ID issued to Estonian e-Residents enables commercial activities with the public and private sectors. It does not provide citizenship in its traditional sense, and the e-ID provided to e-Residents is not a travel document. However, in many ways it is an international ‘passport’ to the virtual world. E-Residency is a profound change, and the recent announcement that the Estonian government is partnering with Bitnation to offer Estonian e-Residents a blockchain-based public notary service is significant. The application of blockchain to e-Residency has the potential to fundamentally change the way identity information is controlled and authenticated. This paper examines the legal, policy, and technical implications of this development….(More)”.
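The notary use case rests on a simple cryptographic primitive: publish a hash of the document on a tamper-evident ledger, so that anyone can later prove the document existed, unaltered, at that time. The sketch below shows the idea; the on-chain anchoring step is only indicated in a comment, and nothing here reflects Bitnation’s actual service or APIs.

```python
import hashlib
from datetime import datetime, timezone

def notarise(document: bytes) -> dict:
    """Build the record a blockchain notary would anchor on-chain.

    Only the digest leaves the user's machine; the document itself is
    never published, which keeps its contents private.
    """
    return {
        "sha256": hashlib.sha256(document).hexdigest(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # A real service would write this record into a blockchain
        # transaction; the chain's immutability substitutes for the
        # notary's seal and register.
    }

def verify(document: bytes, record: dict) -> bool:
    """Re-hash the document and compare it against the anchored digest."""
    return hashlib.sha256(document).hexdigest() == record["sha256"]

record = notarise(b"e-Resident contract, v1")
assert verify(b"e-Resident contract, v1", record)
assert not verify(b"e-Resident contract, v2", record)  # any edit breaks the proof
```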
Africa’s open data revolution hampered by challenges
Building Digital Government Strategies
Book by Rodrigo Sandoval-Almazan et al.: “This book provides key strategic principles and best practices to guide the design and implementation of digital government strategies. It provides a series of recommendations and findings for thinking about IT applications in government as a platform for information, services and collaboration, and strategies to avoid identified pitfalls. Digital government research suggests that information technologies have the potential to generate immense public value and transform the relationships between governments, citizens, businesses and other stakeholders. However, developing innovative and high impact solutions for citizens hinges on the development of strategic institutional, organizational and technical capabilities.
Thus far, particular characteristics and problems of public sector organizations have promoted the development of poorly integrated and difficult-to-maintain applications. For example, governments maintain separate applications for open data, transparency, and public services, leading to duplication of efforts and a waste of resources. The costs of maintaining such sets of poorly integrated systems may leave few resources for future projects and innovation.
This book provides best practices and recommendations, based on extensive research in both Mexico and the United States, on how governments can develop a digital government strategy for creating public value, how to finance digital innovation in the public sector, how to build successful collaboration networks and foster citizen engagement, and how to correctly implement open government projects and open data. It will be of interest to researchers, practitioners, students, and public sector IT professionals who work in the design and implementation of technology-based projects and programs….(More)”.
How data can heal our oceans
Nishan Degnarain and Steve Adler at WEF: “We have collected more data on our oceans in the past two years than in the history of the planet.
There has been a proliferation of remote and near sensors above, on, and beneath the oceans. New low-cost micro-satellites ring the earth and can record daily what happens below. Thousands of tidal buoys follow currents, transmitting ocean temperature, salinity, acidity and current speed every minute. Undersea autonomous drones photograph and map the continental shelf and seabed, explore deep sea volcanic vents, and can help discover mineral and rare earth deposits.
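As a concrete illustration of what one of those per-minute buoy transmissions might look like once it lands in an archive, here is a sketch with illustrative field names and coarse physical-range checks; no actual buoy network’s schema is implied.

```python
from dataclasses import dataclass

@dataclass
class BuoyReading:
    """One per-minute telemetry record (field names are illustrative)."""
    buoy_id: str
    temperature_c: float
    salinity_psu: float    # practical salinity units
    ph: float              # acidity
    current_speed_ms: float

def plausible(r: BuoyReading) -> bool:
    """Coarse physical-range check before a reading enters an archive."""
    return (-2.5 <= r.temperature_c <= 40.0
            and 0.0 <= r.salinity_psu <= 42.0
            and 6.0 <= r.ph <= 9.0
            and 0.0 <= r.current_speed_ms <= 5.0)

readings = [
    BuoyReading("B-117", 18.4, 35.1, 8.05, 0.6),
    BuoyReading("B-117", 99.0, 35.1, 8.05, 0.6),  # sensor glitch
]
print([r for r in readings if plausible(r)])  # keeps only the first reading
```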
The volume, diversity and frequency of data are increasing as the cost of sensors falls, new low-cost satellites are launched, and an emerging drone sector begins to offer new insights into our oceans. In addition, new processing capabilities are enhancing the value we receive from such data on the biological, physical and chemical properties of our oceans.
Yet it is not enough.
We need much more data at higher frequency, quality, and variety to understand our oceans to the degree we already understand the land. Less than 5% of the oceans are comprehensively monitored. We need more data collection capacity to unlock the sustainable development potential of the oceans and protect critical ecosystems.
More data from satellites will help identify illegal fishing activity, track plastic pollution, and detect whales and prevent vessel collisions. More data will help speed the placement of offshore wind and tide farms, improve vessel telematics, develop smart aquaculture, protect urban coastal zones, and enhance coastal tourism.
Unlocking the ocean data market
But we’re not there yet.
This new wave of data innovation is constrained by inadequate data supply, demand, and governance. The supply of existing ocean data is locked by paper records, old formats, proprietary archives, inadequate infrastructure, and scarce ocean data skills and capacity.
The market for ocean observation is driven by science and science isn’t adequately funded.
To unlock future commercial potential, new financing mechanisms are needed to create market demand that will stimulate greater investments in new ocean data collection, innovation and capacity.
Efforts such as the Financial Stability Board’s Task Force on Climate-related Financial Disclosures have gone some way toward raising awareness and creating demand for such ocean-related climate risk data.
Much of the data produced is collected by nations, universities, research organizations, NGOs, and the private sector, but only a small percentage is Open Data and widely available.
Data creates more value when it is widely utilized and well governed. Organizing to improve data infrastructure, quality, integrity, and availability is a prerequisite for new ocean data-driven business models and markets. New Ocean Data Governance models, standards, platforms, and skills are urgently needed to stimulate new market demand for innovation and sustainable development….(More)”.
Smart or dumb? The real impact of India’s proposal to build 100 smart cities
In promoting this objective, the Indian government gave the example of a large development in the island city of Mumbai, Bhendi Bazaar. There, 3-5-storey housing would be replaced with towers of 40 to 60 storeys to increase density. This has come to be known as “vertical with a vengeance”.
We obtained details of the proposed project from the developer and the municipal authorities. Using an extended urban metabolism model, which measures the flows of materials and energy through the built environment, we assessed the project’s overall impact. We determined how these flows will change as a result of the redevelopment.
Our research shows that the proposal is neither smart nor sustainable.
Measuring impacts
The Indian government clearly defined what it meant by “smart”. Over half of the 11 objectives were environmental, covering the main components of a city’s metabolism. These include adequate water and sanitation, assured electricity, efficient transport, reduced air pollution and resource depletion, and sustainability.
We collected data from various primary and secondary sources, including physical surveys during site visits, local government agencies, non-governmental organisations, the construction industry and published research.
We then made three-dimensional models of the existing and proposed developments to establish morphological changes, including building heights, street widths, parking provision, roof areas, open space, landscaping and other aspects of built form.
Demographic changes (population density, total population) were based on census data, the developer’s calculations and an assessment of available space. Such information about the magnitude of the development and the associated population changes allowed us to analyse the additional resources required as well as the environmental impact….
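As a rough illustration of how such an analysis works, the sketch below scales hypothetical per-capita daily demands by population, with an overhead factor for high-rise living (lifts, pumping water to upper floors, and so on). All numbers are placeholders, not the study’s measured values for Bhendi Bazaar.

```python
# Back-of-the-envelope urban-metabolism comparison for a redevelopment.
# All per-capita rates, populations and overheads are hypothetical.
PER_CAPITA_DAILY = {
    "water_litres": 135,
    "electricity_kwh": 2.5,
    "waste_kg": 0.45,
}

def metabolism(population: int, highrise_overhead: float = 1.0) -> dict:
    """Total daily flows; overhead > 1 models the extra demand of tall towers."""
    return {k: v * population * highrise_overhead
            for k, v in PER_CAPITA_DAILY.items()}

before = metabolism(population=20_000)
after = metabolism(population=30_000, highrise_overhead=1.25)
for flow in PER_CAPITA_DAILY:
    print(f"{flow}: +{after[flow] / before[flow] - 1:.0%}")  # +88% for each flow
```

Even with placeholder numbers, the shape of the finding is visible: a 50% population increase produces an 88% increase in every flow once the overhead of vertical living is counted, with the metabolism growing faster than the population.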
Case studies such as Bhendi Bazaar provide an example of plans for increased density and urban regeneration. However, they do not offer an answer to the challenge of limited infrastructure to support the resource requirements of such developments.
The results of our research indicate significant adverse impacts on the environment. They show that the metabolism increases at a greater rate than the population grows. On this basis, this proposed development for Mumbai, or the other 99 cities, should not be called smart or sustainable.
With policies that aim to prevent urban sprawl, cities will inevitably grow vertically. But with high-rise housing comes dependence on centralised flows of energy, water supplies and waste disposal. Dependency in turn leads to vulnerability and insecurity….(More)”.
The hidden costs of open data
Sara Friedman at GCN: “As more local governments open their data for public use, the emphasis is often on “free” — using open source tools to freely share already-created government datasets, often with pro bono help from outside groups. But according to a new report, there are unforeseen costs when it comes to pushing government datasets out on public-facing platforms — especially when geospatial data is involved.
The research, led by University of Waterloo professor Peter A. Johnson and McGill University professor Renee Sieber, was conducted as part of a Geothink.ca partnership research grant and explores the direct and indirect costs of open data.
Costs related to data collection, publishing, data sharing, maintenance and updates are increasingly driving governments to third-party providers to help with hosting, standardization and analytical tools for data inspection, the researchers found. GIS implementation also has associated costs to train staff, develop standards, create valuations for geospatial data, connect data to various user communities and get feedback on challenges.
The researchers identify four problems. First, open data can create a “smoke and mirrors” situation in which insufficient resources are put toward deploying open data for government use. Users then face “transaction costs” when working with specialist data formats that require additional skills, training and software (a sketch of this format friction follows the four problems below).
Second, the level of investment and quality of open data can lead to “material benefits and social privilege” for communities that devote resources to providing more comprehensive platforms.
While there are some open source data platforms, the majority of solutions are proprietary and priced on a pro-rata basis, which can present a challenge for large cities with poor populations compared to smaller, wealthier ones. Issues also arise when governments try to combine their data sets, leading to increased costs to reconcile inconsistencies.
The third problem revolves around the private sector pushing for the release of data sets that can benefit their business objectives. Companies could push for the release of high-value sets, such as real-time transit data, to help with their product development goals. This can divert attention from low-value sets, such as those detailing municipal services or installations, that could have a bigger impact on residents “from a civil society perspective.”
If communities decide to release the low-value sets first, Johnson and Sieber think the focus can then be shifted to high-value sets that can help recoup the costs of developing the platforms.
Lastly, the report finds inadvertent consequences could result from tying open data resources to private-sector companies. Public-private open data partnerships could lead to infrastructure problems that prevent data from being widely shared, and help private companies in developing their bids for public services….
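The “transaction costs” from the first problem are easy to make concrete: a municipal dataset released as an Esri shapefile requires GIS tooling before an ordinary web developer can use it at all. Below is a sketch of the conversion using the open source geopandas library; the file names are placeholders, and geopandas itself drags in the GDAL stack, which is part of the cost being described.

```python
import geopandas as gpd  # open source, but built on the heavyweight GDAL stack

# Placeholder file names: a typical municipal GIS release and a
# web-friendly conversion of it.
parcels = gpd.read_file("city_parcels.shp")   # specialist format
parcels = parcels.to_crs(epsg=4326)           # reproject to plain lat/lon
parcels.to_file("city_parcels.geojson", driver="GeoJSON")
```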
Johnson and Sieber encourage communities to ask the following questions before investing in open data:
- Who are the intended constituents for this open data?
- What is the purpose behind the structure for providing this data set?
- Does this data enable the intended users to meet their goals?
- How are privacy concerns addressed?
- Who sets the priorities for release and updates?…(More)”