Stefaan Verhulst
Centre for Humanitarian Data: “Survey and needs assessment data, or what is known as ‘microdata’, is essential for providing an adequate response to crisis-affected people. However, collecting this information does present risks. Even when great effort is taken to remove unique identifiers such as names and phone numbers from microdata so that no individuals or communities are exposed, combining key variables such as location or ethnicity can still allow individual respondents to be re-identified. Statistical Disclosure Control (SDC) is one method for reducing this risk.
The Centre has developed a Guidance Note on Statistical Disclosure Control that outlines the steps involved in the SDC process, potential applications, case studies, and key actions for humanitarian data practitioners to take when managing sensitive microdata. Along with an overview of what SDC is and what tools are available, the Guidance Note outlines how the Centre is using this process to mitigate risk for datasets shared on HDX. …(More)”.
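To make the re-identification risk concrete: one common SDC measure is k-anonymity, which counts how many respondents share each combination of quasi-identifiers. Below is a minimal sketch in Python, assuming the microdata sits in a pandas DataFrame; the column names and threshold are hypothetical, and real assessments rely on dedicated SDC tooling rather than this toy check.

```python
import pandas as pd

# Hypothetical survey microdata; direct identifiers (names, phone numbers)
# have already been removed.
records = pd.DataFrame({
    "district":  ["North", "North", "South", "South", "South"],
    "ethnicity": ["A", "A", "B", "B", "B"],
    "age_group": ["18-29", "18-29", "30-44", "30-44", "45-59"],
})

# Quasi-identifiers: attributes that are harmless alone but
# re-identifying in combination.
quasi_identifiers = ["district", "ethnicity", "age_group"]

# k-anonymity check: how many respondents share each combination?
group_sizes = records.groupby(quasi_identifiers).size()
print(f"Smallest group size (k): {group_sizes.min()}")

# Combinations held by fewer than k_threshold respondents are the cells
# that SDC techniques such as suppression or generalization target.
k_threshold = 2
print(group_sizes[group_sizes < k_threshold])
```

In this toy table, the lone respondent in the ("South", "B", "45-59") cell is the disclosure risk: anyone who knows a 45-59-year-old of ethnicity B in the South district can recover that person's survey answers.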
Book edited by Sébastien Lechevalier: “The major purpose of this book is to clarify the importance of non-technological factors in innovation to cope with complex contemporary societal issues while critically reconsidering the relations between science, technology, innovation (STI), and society. For a few decades now, innovation—mainly derived from technological advancement—has been considered a driving force of economic and societal development and prosperity.
With that in mind, this book addresses the following questions: What are the non-technological sources of innovation? What can the progress of STI bring to humankind? What roles will society be expected to play in the new model of innovation? The authors argue that the majority of so-called technological innovations are actually socio-technical innovations, requiring huge resources for financing activities, adapting regulations, designing adequate policy frames, and shaping new uses and new users while interacting appropriately with society.
This book gathers multi- and trans-disciplinary approaches to innovation that go beyond technology and take into account its interrelations with social and human phenomena. Illustrated by carefully chosen examples and based on broad and well-informed analyses, it is highly recommended to readers who seek an in-depth and up-to-date integrated overview of innovation in its non-technological dimensions….(More)”.
Matthew Hutson at Science: “Artificial intelligence (AI) used to be the specialized domain of data scientists and computer programmers. But companies such as Wolfram Research, which makes Mathematica, are trying to democratize the field, so scientists without AI skills can harness the technology for recognizing patterns in big data. In some cases, they don’t need to code at all. Insights are just a drag-and-drop away. One of the latest systems is software called Ludwig, first made open-source by Uber in February and updated last week. Uber used Ludwig for projects such as predicting food delivery times before releasing it publicly. At least a dozen startups are using it, plus big companies such as Apple, IBM, and Nvidia. And scientists: Tobias Boothe, a biologist at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany, uses it to visually distinguish thousands of species of flatworms, a difficult task even for experts. To train Ludwig, he just uploads images and labels….(More)”.
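Ludwig's pitch is that a model is specified declaratively rather than coded. A rough sketch of what a workflow like Boothe's might look like in Python — the column names and file paths here are hypothetical, and argument names and return values vary across Ludwig versions:

```python
from ludwig.api import LudwigModel

# Declarative model definition: input and output features only, no model code.
# "image_path" and "species" are hypothetical column names in a labeled CSV.
config = {
    "input_features": [{"name": "image_path", "type": "image"}],
    "output_features": [{"name": "species", "type": "category"}],
}

model = LudwigModel(config)

# Training is driven entirely by the labeled data: Ludwig infers the
# preprocessing and architecture from the declared feature types.
results = model.train(dataset="flatworm_images.csv")

# Classify new, unlabeled specimens.
predictions = model.predict(dataset="new_specimens.csv")
```

The declarative config is the whole point: swapping "image" for "text", or "category" for "numerical", retargets the same workflow to a different problem without writing model code.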
Paper by Robin Carnahan, Randy Hart, and Waldo Jaquith: “Only 13% of large government software projects are successful. State IT projects, in particular, are often challenged because states lack basic knowledge about modern software development, relying on outdated procurement processes.
State governments are increasingly reliant on modern software and hardware to deliver essential services to the public, and the success of any major policy initiative depends on the success of the underlying software infrastructure. Government agencies all confront similar challenges, facing budget and staffing constraints while struggling to modernize legacy technology systems that are out-of-date, inflexible, expensive, and ineffective. Government officials and agencies often rely on the same legacy processes that led to problems in the first place.
The public deserves a government that provides the same world-class technology they get from the commercial marketplace. Trust in government depends on it.
This handbook is designed for executives, budget specialists, legislators, and other “non-technical” decision-makers who fund or oversee state government technology projects. It can help you set these projects up for success by asking the right questions, identifying the right outcomes, and, equally important, empowering you with a basic knowledge of the fundamental principles of modern software design.
This handbook also gives you the tools you need to start tackling related problems like:
- The need to use, maintain, and modernize legacy systems simultaneously
- Lock-in from legacy commercial arrangements
- Siloed organizations and risk-averse cultures
- Long budget cycles that don’t always match modern software design practices
- Security threats
- Hiring, staffing, and other resource constraints
This is written specifically for procurement of custom software, but it’s important to recognize that commercial off-the-shelf software (COTS) is often customized and Software as a Service (SaaS) often requires custom code. Once any customization is made, the bulk of the advice in this handbook applies to these commercial offerings. (See “Beware the customized commercial software trap” for details.)
As government leaders, we must be good stewards of public money by demanding easy-to-use, cost-effective, sustainable digital tools for use by the public and civil servants. This handbook will help you do just that….(More)”
Proceedings edited by Alessandra Lazazzara, Francesca Ricciardi and Stefano Za: “The recent surge of interest in digital ecosystems is not only transforming the business landscape, but also poses several human and organizational challenges. Due to the pervasive effects of the transformation on firms and societies alike, both scholars and practitioners are interested in understanding the key mechanisms behind digital ecosystems, their emergence and evolution. In order to disentangle such factors, this book presents a collection of research papers focusing on the relationship between technologies (e.g. digital platforms, AI, infrastructure) and behaviours (e.g. digital learning, knowledge sharing, decision-making). Moreover, it provides critical insights into how digital ecosystems can shape value creation and benefit various stakeholders. The plurality of perspectives offered makes the book particularly relevant for users, companies, scientists and governments. The content is based on a selection of the best papers – original double-blind peer-reviewed contributions – presented at the annual conference of the Italian chapter of the AIS, which took place in Pavia, Italy in October 2018….(More)”.
Paper by Jaehyuk Park et al: “…One of the most popular concepts for policy makers and business economists to understand the structure of the global economy is the “cluster”: the geographical agglomeration of interconnected firms such as Silicon Valley, Wall Street, and Hollywood. By studying those well-known clusters, we come to understand the advantage for firms of participating in a geo-industrial cluster and how it is related to the economic growth of a region.
However, the existing definition of the geo-industrial cluster is not systematic enough to reveal the whole picture of the global economy. Often, after being defined as a group of firms in a certain area, geo-industrial clusters are treated as independent of one another. Just as we must consider the interaction between the accounting team and the marketing team to understand the organizational structure of a firm, the relationships among geo-industrial clusters are an essential part of the whole picture….
In this new study, my colleagues and I at Indiana University — with support from LinkedIn — have finally overcome these limitations by defining geo-industrial clusters through labor flow and constructing a global labor flow network from LinkedIn’s individual-level job history dataset. Our access to this data was made possible by our selection as one of 11 teams to participate in the LinkedIn Economic Graph Challenge.
The transitioning of workers between jobs and firms — also known as labor flow — is considered central in driving firms towards geo-industrial clusters due to knowledge spillover and labor market pooling. In response, we mapped the cluster structure of the world economy based on labor mobility between firms during the last 25 years, constructing a “labor flow network.”
To do this, we leveraged LinkedIn’s data on professional demographics and employment histories from more than 500 million people between 1990 and 2015. The network, which captures approximately 130 million job transitions between more than 4 million firms, is the first-ever flow network of global labor.
The resulting “map” allows us to:
- identify geo-industrial clusters systematically and organically using network community detection;
- verify the importance of region and industry in labor mobility;
- compare the relative importance of those two constraints at different hierarchical levels;
- reveal the practical advantage of the geo-industrial cluster as a unit of future economic analyses;
- show more clearly which industry in which region leads the economic growth of that industry or region; and
- identify emerging and declining skills based on how well they are represented in growing and declining geo-industrial clusters…(More)”.
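The pipeline is easy to picture at toy scale: aggregate individual job transitions into a weighted firm-to-firm network, then run community detection on it. The sketch below uses networkx with a generic modularity-based algorithm and invented firm names — an illustration of the general approach, not the study's actual method or data.

```python
import networkx as nx
from networkx.algorithms import community

# Toy stand-in for job-transition records: (from_firm, to_firm) pairs drawn
# from individual employment histories. Firm names are invented.
transitions = [
    ("AcmeSoft", "ByteWorks"), ("AcmeSoft", "ByteWorks"), ("ByteWorks", "AcmeSoft"),
    ("OilCo", "PetroMax"), ("PetroMax", "OilCo"), ("OilCo", "DrillCorp"),
    ("DrillCorp", "PetroMax"), ("ByteWorks", "OilCo"),  # one rare cross-cluster move
]

# Aggregate transitions into a weighted labor flow network;
# edge weight = number of workers who moved between the two firms.
G = nx.Graph()
for src, dst in transitions:
    if G.has_edge(src, dst):
        G[src][dst]["weight"] += 1
    else:
        G.add_edge(src, dst, weight=1)

# Community detection groups firms that exchange workers frequently --
# the geo-industrial clusters of the passage above, recovered here with a
# generic modularity-based algorithm.
clusters = community.greedy_modularity_communities(G, weight="weight")
for i, firms in enumerate(clusters):
    print(f"Cluster {i}: {sorted(firms)}")
```

On this toy input the algorithm separates the two software firms from the three energy firms, despite the single cross-cluster transition.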
Katie Langin at Science: “With more than 30,000 academic journals now in circulation, academics can have a hard time figuring out where to submit their work for publication. The decision is made all the more difficult by the sky-high pressure of today’s academic environment—including working toward tenure and trying to secure funding, which can depend on a researcher’s publication record. So, what does a researcher prioritize?
According to a new study posted on the bioRxiv preprint server, faculty members say they care most about whether the journal is read by the people they most want to reach—but they think their colleagues care most about journal prestige. Perhaps unsurprisingly, prestige also held more sway for untenured faculty members than for their tenured colleagues.
“I think that it is about the security that comes with being later in your career,” says study co-author Juan Pablo Alperin, an assistant professor in the publishing program at Simon Fraser University in Vancouver, Canada. “It means you can stop worrying so much about the specifics of what is being valued; there’s a lot less at stake.”
According to a different preprint that Alperin and his colleagues posted on PeerJ in April, 40% of research-intensive universities in the United States and Canada explicitly mention that journal impact factors can be considered in promotion and tenure decisions. Many more likely do so unofficially, with faculty members using journal names on a CV as a kind of shorthand for how “good” a candidate’s publication record is. “You can’t ignore the fact that journal impact factor is a reality that gets looked at,” Alperin says. But some argue that journal prestige and impact factor are overemphasized and harm science, and that academics should focus on the quality of individual work rather than journal-wide metrics.
In the new study, only 31% of the 338 faculty members who were surveyed—all from U.S. and Canadian institutions and from a variety of disciplines, including 38% in the life and physical sciences and math—said that journal prestige was “very important” to them when deciding where to submit a manuscript. The highest priority was journal readership, which half said was very important. Fewer respondents felt that publication costs (24%) and open access (10%) deserved the highest importance rating.
But, when those same faculty members were asked to assess how their colleagues make the same decision, journal prestige shot to the top of the list, with 43% of faculty members saying that it was very important to their peers when deciding where to submit a manuscript. Only 30% of faculty members thought the same thing about journal readership—a drop of 20 percentage points compared with how faculty members assessed their own motivations….(More)”.
Literature Review by Jörn Erbguth: “Democratic states are entities where issues are decided by a large group – the people. There is a democratic process that builds upon elections, a legislative procedure, judicial review and separation of powers by checks and balances. Blockchains rely on decentralization, meaning they rely on a large group of participants as well. Blockchains are therefore confronted with similar problems. Even further, blockchains try to avoid central coordinating authorities.
Consensus methods ensure that the systems align with the majority of their participants. Above the layer of the consensus method, blockchain governance coordinates decisions about software updates, bugfixes and possibly other interventions. What are the strengths and weaknesses of this blockchain governance?
Should we use blockchain to secure e-voting? Blockchain governance has two central aspects. First, it is decentralized governance based on a large group of people, which resembles democratic decision-making. Second, it is algorithmic decision-making that limits unwanted human intervention.
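The “align with the majority” property is easiest to see in miniature. The toy rule below is not any real blockchain protocol — just an illustration of accepting a proposal only when a qualified majority of participants approves, with no central coordinator:

```python
# Toy consensus rule (not a real blockchain protocol): a proposal is
# accepted only if a qualified majority of participants approves it,
# so the system follows its majority without any central authority.
def reaches_consensus(votes, threshold=2 / 3):
    approvals = sum(1 for approved in votes.values() if approved)
    return approvals / len(votes) >= threshold

# Four independent nodes vote on a proposed protocol update.
votes = {"node_a": True, "node_b": True, "node_c": False, "node_d": True}
print(reaches_consensus(votes))  # True: 3 of 4 approve (75% >= 66.7%)
```

Real consensus methods (proof of work, proof of stake, BFT variants) replace the explicit vote with costlier signals, but the goal is the same alignment with the majority of participants.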
Cornerstones
Blockchain and democracy can be split into three areas:
First, the use of democratic principles to make blockchain work. This ranges from the basic consensus algorithm to the (self-)governance of a blockchain.
Second, the use of blockchain as a reliable tool for democracy. This ranges from the use of blockchain for electronic voting to its use in administration.
Third, the study of the possible impacts of blockchain technology on a democratic society. This focuses on regulatory and legal aspects as well as ethical aspects….(More)”
Krista Chan at Sunlight: “…Housing advocates have an essential role to play in protecting residents from the consequences of real estate speculation. But they’re often at a significant disadvantage; the real estate lobby has access to a wealth of data and technological expertise. Civic hackers and open data could play an essential role in leveling the playing field.
Civic hackers have facilitated wins for housing advocates by scraping data or submitting FOIA requests where data is not open and creating apps to help advocates gain insights that they can turn into action.
Hackers at New York City’s Housing Data Coalition created a host of civic apps that identify problematic landlords by exposing owners behind shell companies, or flagging buildings where tenants are at risk of displacement. In a similar vein, Washington DC’s Housing Insights tool aggregates a wide variety of data to help advocates make decisions about affordable housing.
Barriers and opportunities
Today, the degree to which housing data exists, is openly available, and is consistently reliable varies widely, even within cities themselves. Cities with robust communities of affordable housing advocacy groups may not be connected to people who can help open data and build usable tools. Even in cities with robust advocacy and civic tech communities, these groups may not know how to work together because of the significant institutional knowledge required to understand how best to support housing advocacy efforts.
In cities where civic hackers have tried to create useful open housing data repositories, similar data cleaning processes have been replicated, such as record linkage of building owners or identification of rent-controlled units. Civic hackers need to take on these data cleaning and “extract, transform, load” (ETL) processes in order to work with the data itself, even if it’s openly available. The Housing Data Coalition has assembled NYC-DB, a tool that builds a postgres database containing a variety of housing-related data for New York City, and Washington DC’s Housing Insights similarly ingests housing data into a postgres database and API for front-end access.
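For a flavor of what one such ETL step involves, here is a hypothetical sketch — not NYC-DB's or Housing Insights' actual code; the file name, columns, and connection string are all placeholders:

```python
import pandas as pd
from sqlalchemy import create_engine

# Extract: a city open-data export (placeholder file and column names).
raw = pd.read_csv("building_registrations.csv")

# Transform: crude owner-name normalization for record linkage, so the
# same landlord behind different spellings or shell-company suffixes
# can be matched across datasets.
def normalize_owner(name):
    name = name.upper().strip()
    for suffix in (" LLC", " L.L.C.", " INC", " CORP"):
        name = name.removesuffix(suffix)  # Python 3.9+
    return name.strip(" .,")

raw["owner_key"] = raw["owner_name"].map(normalize_owner)

# Load: write into a postgres database for advocates' tools to query
# (placeholder connection string).
engine = create_engine("postgresql://user:password@localhost:5432/housing")
raw.to_sql("building_registrations", engine, if_exists="replace", index=False)
```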
Since tools like NYC-DB and Housing Insights are open source, civic hackers in a multitude of cities can use that existing work to develop their own, locally relevant tools to support local housing advocates….(More)”.
Internet Innovations Alliance: “Are Millennials okay with the collection and use of their data online because they grew up with the internet?
In an effort to help inform policymakers about the views of Americans across generations on internet privacy, the Internet Innovation Alliance, in partnership with Icon Talks, the Hispanic Technology & Telecommunications Partnership (HTTP), and the Millennial Action Project, commissioned a national study of U.S. consumers, who have witnessed a steady stream of online privacy abuses, data misuses, and security breaches in recent years. The survey examined the concerns of U.S. adults—overall and broken down by age group and other demographics—about the collection and use of personal data and location information by tech and social media companies, including the tailoring of the online experience, the potential for personal financial information to be hacked, and the need for a single, national policy addressing consumer data privacy.
Download: “Concerns About Online Data Privacy Span Generations” IIA white paper pdf.
Download: “Consumer Data Privacy Concerns” Civic Science report pdf….(More)”