Paper by Samiul Hasan: “Post-COVID-19 technologies for higher education and corporate communication have opened up wonderful opportunities for online survey research. These technologies could be used for one-to-one interviews, group interviews, group questionnaire surveys, online questionnaire surveys, or even ‘focus group’ discussions. This new trend, which may aptly be called ‘armchair survey research’, may become the dominant, if not the only, trend in social science research. If that is the case, an obvious question might be: what is ‘survey research’, and how is it going to be easier in the post-COVID-19 world? My intention is to offer some help to promising researchers who have all the qualities and eagerness needed to undertake good social science research for publication, but no funding.
The text is divided into three main parts. Part one deals with “Science, Social Science and Research” to highlight some important points about the importance of the ‘What’, ‘Why’, and ‘So what’, and the framing of a research question, for good research. The discussion then moves to ‘reliability and validity’ in social science research, including falsifiability, content validity, and construct validity. This part ends with discussions of concepts, constructs, and variables in a theoretical (conceptual) framework. The second part deals categorically with ‘survey research’, highlighting the use and features of interviews and questionnaire surveys. It deals primarily with the importance and use of nominal and ordinal responses or scales, as well as the essentials of question content, wording, and sequencing. The last part deals with survey research in the post-COVID-19 period, highlighting strategies for undertaking better online survey research without any funding….(More)”.
Paper by Jessica Feldman: “This scoping paper considers how digital tools, such as ICTs and AI, have failed to contribute to the “common good” in any sustained or scalable way. This failure is attributed to a problem that is at once political-economic and technical.
Many digital tools’ business models are predicated on advertising: framing the user as an individual consumer-to-be-targeted, not as an organization, movement, or any sort of commons. At the level of infrastructure and hardware, the increased privatization and centralization of transmission and production leads to a dangerous bottlenecking of communication power, and to labor and production practices that are undemocratic and damaging to common resources.
These practices escalate collective action problems, pose a threat to democratic decision making, aggravate issues of economic and labor inequality, and harm the environment and health. At the same time, the growth of both AI and online community formation raises questions about the very definition of human subjectivity and modes of relationality. Based on an operational definition of the common good grounded in ethics of care, sustainability, and redistributive justice, suggestions are made for solutions and further research in the areas of participatory design, digital democracy, digital labor, and environmental sustainability….(More)”
Paper by Khaled El Emam et al: “Background: There has been growing interest in data synthesis for enabling the sharing of data for secondary analysis; however, there is a need for a comprehensive privacy risk model for fully synthetic data: if the generative models have been overfit, then it is possible to identify individuals from synthetic data and learn something new about them.
Objective: The purpose of this study is to develop and apply a methodology for evaluating the identity disclosure risks of fully synthetic data.
Methods: A full risk model is presented, which evaluates both identity disclosure and the ability of an adversary to learn something new if there is a match between a synthetic record and a real person. We term this “meaningful identity disclosure risk.” The model is applied on samples from the Washington State Hospital discharge database (2007) and the Canadian COVID-19 cases database. Both of these datasets were synthesized using a sequential decision tree process commonly used to synthesize health and social science data.
Results: The meaningful identity disclosure risk for both of these synthesized samples was below the commonly used 0.09 risk threshold (0.0198 and 0.0086, respectively), and, respectively, 4 and 5 times lower than the risk values for the original datasets.
Conclusions: We have presented a comprehensive identity disclosure risk model for fully synthetic data. The results for this synthesis method on 2 datasets demonstrate that synthesis can reduce meaningful identity disclosure risks considerably. The risk model can be applied in the future to evaluate the privacy of fully synthetic data….(More)”.
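To make the matching intuition behind the abstract concrete, here is a minimal toy sketch of a “meaningful identity disclosure” check. It is an illustrative simplification, not the authors’ risk model: a synthetic record contributes to the risk only if it matches a real person uniquely on quasi-identifiers *and* the match would let an adversary correctly learn the sensitive attribute. The field names and records are hypothetical.

```python
def meaningful_disclosure_risk(real, synthetic, quasi_ids, sensitive):
    """Toy estimate: fraction of synthetic records that uniquely match a
    real record on quasi-identifiers AND reveal the sensitive value."""
    # Index real records by their quasi-identifier values.
    index = {}
    for rec in real:
        key = tuple(rec[q] for q in quasi_ids)
        index.setdefault(key, []).append(rec)

    risky = 0
    for rec in synthetic:
        key = tuple(rec[q] for q in quasi_ids)
        matches = index.get(key, [])
        # A unique match means potential identification; it is "meaningful"
        # only if the adversary also learns the sensitive value correctly.
        if len(matches) == 1 and matches[0][sensitive] == rec[sensitive]:
            risky += 1
    return risky / len(synthetic)

# Hypothetical data: "age" and "zip" are quasi-identifiers, "dx" is sensitive.
real = [
    {"age": 34, "zip": "98101", "dx": "flu"},
    {"age": 34, "zip": "98101", "dx": "asthma"},
    {"age": 51, "zip": "98052", "dx": "covid"},
]
synthetic = [
    {"age": 34, "zip": "98101", "dx": "flu"},    # two real matches: not unique
    {"age": 51, "zip": "98052", "dx": "covid"},  # unique match, dx learned
    {"age": 29, "zip": "98004", "dx": "flu"},    # no real match
]
risk = meaningful_disclosure_risk(real, synthetic, ["age", "zip"], "dx")
```

Here only the middle synthetic record counts, giving a risk of 1/3; the paper’s full model additionally weights matches by adversary knowledge and population-level matching probabilities.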
Article by Daria Gritsenko and Matthew Wood: “This article examines how modes of governance are reconfigured as a result of using algorithms in the governance process. We argue that deploying algorithmic systems creates a shift toward a special form of design‐based governance, with power exercised ex ante via choice architectures defined through protocols, requiring lower levels of commitment from governing actors. We use governance of three policy problems – speeding, disinformation, and social sharing – to illustrate what happens when algorithms are deployed to enable coordination in modes of hierarchical governance, self‐governance, and co‐governance. Our analysis shows that algorithms increase efficiency while decreasing the space for governing actors’ discretion. Furthermore, we compare the effects of algorithms in each of these cases and explore sources of convergence and divergence between the governance modes. We suggest that design‐based governance modes that rely on algorithmic systems might be re‐conceptualized as algorithmic governance, to account for the prevalence of algorithms and the significance of their effects….(More)”.
Paper: “A humanoid robot named ‘Sophia’ has sparked controversy since it was granted citizenship and has given media performances all over the world. The company that made the robot, Hanson Robotics, has touted Sophia as the future of artificial intelligence (AI). Robot scientists and philosophers have been more pessimistic about its capabilities, describing Sophia as a sophisticated puppet or chatbot. Looking behind the rhetoric about Sophia’s citizenship and intelligence, and going beyond recent discussions on the moral status or legal personhood of AI robots, we analyse the performativity of Sophia from the perspective of what we call ‘political choreography’, drawing on phenomenological approaches to performance-oriented philosophy of technology. This paper proposes to interpret and discuss the world tour of Sophia as a political choreography that boosts the rise of the social robot market, rather than as a statement about robot citizenship or artificial intelligence. We argue that the media performances of the Sophia robot were choreographed to advance specific political interests. We illustrate our philosophical discussion with media material from the Sophia performances, which helps us explore the mechanisms through which the media spectacle functions hand in hand with advancing the economic interests of technology industries and their governmental promoters. Using a phenomenological approach and attending to the movement of robots, we also criticize the notion of ‘embodied intelligence’ used in the context of social robotics and AI. In this way, we place the discussions about the robot’s rights or citizenship in the context of AI politics and economics….(More)”
Paper by Cass R. Sunstein: “Behavioral science is playing an increasing role in public policy, and it is raising new questions about fundamental issues – the role of government, freedom of choice, paternalism, and human welfare. In diverse nations, public officials are using behavioral findings to combat serious problems – poverty, air pollution, highway safety, COVID-19, discrimination, employment, climate change, and occupational health. Exploring theory and practice, this Element attempts to provide one-stop shopping for those who are new to the area and for those who are familiar with it. With reference to nudges, taxes, mandates, and bans, it offers concrete examples of behaviorally informed policies. It also engages the fundamental questions, including the proper analysis of human welfare in light of behavioral findings. It offers a plea for respecting freedom of choice – so long as people’s choices are adequately informed and free from behavioral biases….(More)”.
Paper by Małgorzata Śmietanka, Hirsh Pithadia and Philip Treleaven: “Federated learning is a pioneering privacy-preserving data technology and also a new machine learning model trained on distributed data sets.
Companies collect huge amounts of historic and real-time data to drive their business and collaborate with other organisations. However, data privacy is becoming increasingly important because of regulations (e.g. the EU GDPR) and the need to protect sensitive and personal data. Companies need to manage data access: firstly within their organisations (so they can control staff access), and secondly to protect raw data when collaborating with third parties. What is more, companies are increasingly looking to ‘monetize’ the data they have collected. However, under new legislation, utilising data across different organisations is becoming increasingly difficult (Yu, 2016).
Federated learning, pioneered by Google, is an emerging privacy-preserving data technology and also a new class of distributed machine learning model. This paper discusses federated learning as a solution for privacy-preserving data access and for distributed machine learning applied to distributed data sets. It also presents a privacy-preserving federated learning infrastructure….(More)”.
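The core idea of federated learning can be sketched in a few lines. The following is a minimal illustrative FedAvg-style simulation, not the paper’s infrastructure: each client fits a model on its own private data and shares only parameters, which the server averages weighted by dataset size; raw records never leave the clients. The one-parameter linear model, learning rate, and client data here are all illustrative assumptions.

```python
def local_update(weights, data, lr=0.1):
    """One pass of gradient descent for the model y = w * x on a single
    client's private data. Returns the updated weight and the sample count."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x  # gradient of squared error (w*x - y)^2
        w -= lr * grad
    return w, len(data)

def federated_average(global_w, client_datasets, rounds=50):
    """FedAvg-style loop: broadcast the global model, collect local updates,
    and aggregate them weighted by each client's dataset size."""
    for _ in range(rounds):
        updates = [local_update(global_w, d) for d in client_datasets]
        total = sum(n for _, n in updates)
        # Only parameters are aggregated; raw data stays on the clients.
        global_w = sum(w * n for w, n in updates) / total
    return global_w

# Two clients holding private samples of the same underlying relation y = 3x.
clients = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(3.0, 9.0)],
]
w = federated_average(0.0, clients)  # converges toward 3.0
```

In production systems the same pattern is applied to neural-network weight vectors, often combined with secure aggregation or differential privacy so that individual client updates cannot be inspected either.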
Paper by Redeemer Dornudo Yao Krah and Gerard Mertens: “The study is a systematic literature review that assembles scientific knowledge on local government transparency in the twenty-first century. The study finds a remarkable growth in research on local government transparency in the first nineteen years of the century, particularly in Europe and North America. Social, economic, political, and institutional factors are found to account for this trend. In vogue among local governments is the use of information technology to enhance transparency. The pressure to become transparent largely comes from the passage of Freedom of Information laws and the open data initiatives of governments….(More)”.
Paper by Heike Schweitzer and Robert Welker: “The paper strives to systematise the debate on access to data from a competition policy angle. At the outset, two general policy approaches to access to data are distinguished: a “private control of data” approach versus an “open access” approach. We argue that, when it comes to private sector data, the “private control of data” approach is preferable. According to this approach, the “whether” and “how” of data access should generally be left to the market. However, public intervention can be justified by significant market failures. We discuss the presence of such market failures and the policy responses, including, in particular, competition policy responses, with a view to three different data access scenarios: access to data by co-generators of usage data (Scenario 1); requests for access to bundled or aggregated usage data by third parties vis-à-vis a service or product provider who controls such datasets, with the goal of entering complementary markets (Scenario 2); and requests by firms to access the large usage data troves of the Big Tech online platforms for innovative purposes (Scenario 3). On this basis we develop recommendations for data access policies….(More)”.
Paper by Chris Culnane, Benjamin I. P. Rubinstein, and David Watts: “Adopted by government agencies in Australia, New Zealand, and the UK as a policy instrument or as embodied in legislation, the ‘Five Safes’ framework aims to manage the risks of releasing data derived from personal information. Despite its popularity, the Five Safes has undergone little legal or technical critical analysis. We argue that the Five Safes is fundamentally flawed: it is disconnected from existing legal protections; it appropriates notions of safety without providing any means to prefer strong technical measures; and it views disclosure risk as static through time, without requiring repeat assessment. The Five Safes provides little confidence that resulting data sharing is performed using ‘safety’ best practice or for purposes in service of the public interest….(More)”.