The 5Ps of the Crowd Economy


Crowdsourcing Week: “As a first step towards a transition to a crowd-focused organization, it helps to understand what makes up the crowd economy.

1. The people. The crowd economy is empowering, inclusive, disruptive and human-centric.

Human-centric values need to be embedded in applications geared towards the crowd economy, where the community is the starting point. The crowd economy, or collective action, is not about mob behavior but about very targeted cooperative solutions that help communities better their lives. People-powered platforms are forging interconnections between users that break down the barriers between creators, producers and end users. By empowering people, organizations are finding new, previously unimagined pathways and solutions to complex problems.

2. The purpose. The crowd economy creates meaningful experiences and shared value.

The crowd economy embodies a culture of shared value creation and social responsibility that distinguishes itself from the traditional one-dimensional thinking and practices of the old economy. People-driven initiatives often embody a larger mission to create solutions that work for, and with, all stakeholders. There is more than one channel of communication, and the notion that everyone can further his or her purpose is life-changing.

3. The platform. Crowds need a medium to interact and produce results.

This pillar of the crowd economy has manifested in the form of technology, connectivity and mobile networks. Soon the Internet of Things will contribute to this medium, amplifying human interactions with powerful data. Platforms like Airbnb and Uber have become synonymous with peer marketplaces and have led to new business paradigms taking shape.

4. The participation. Co-creation and participation are emphasized in the crowd economy and communities take an active stake in crafting positive futures.

The power of participation to accelerate innovation is best seen through crowdfunding, which has enabled early ideas to get a jumpstart. The crowd’s verdict is critical to validating business plans and ideas, and working with the crowd brings not only financial support but also valuable product input and iteration.

5. The productivity. The crowd economy fosters faster, cheaper, better and more resource-efficient processes.

Digital crowd applications for civic activities, disaster relief and humanitarian work are creating widespread impact. Helping and participating come naturally to us, and the networked web has given this mindset wings. …(More)”

NYDatabases


Pressconnects: “As journalists, we work with mountains of data to help us spot trends, identify problems in our community, and hold public officials accountable. We present that data here at NYDatabases.com (formerly RocDocs), so that you can explore the issues yourself.

Learn more about where you live, make informed decisions as a citizen, parent, or homeowner, and help identify stories that we should investigate…..

Data is an important ingredient in our efforts to provide a well-rounded local report on what’s going on in New York. Some of this information comes straight from the source and other information is compiled from reporters and staff on our Data Desk. When possible, we present the databases with the context of news or enterprise from the Pressconnects. Sometimes, however, utility databases such as restaurant inspections and real estate sales throughout New York are there as a resource and updated on a regular basis….(More)”

The Last Mile: Creating Social and Economic Value from Behavioral Insights


New book by Dilip Soman: “Most organizations spend much of their effort on the start of the value creation process: namely, creating a strategy, developing new products or services, and analyzing the market. They pay a lot less attention to the end: the crucial “last mile” where consumers come to their website, store, or sales representatives and make a choice.

In The Last Mile, Dilip Soman shows how to use insights from behavioral science in order to close that gap. Beginning with an introduction to the last mile problem and the concept of choice architecture, the book takes a deep dive into the psychology of choice, money, and time. It explains how to construct behavioral experiments and understand the data on preferences that they provide. Finally, it provides a range of practical tools with which to overcome common last mile difficulties.

The Last Mile helps lay readers not only understand behavioral science, but also apply its lessons to their own organizations’ last mile problems, whether they work in business, government, or the nonprofit sector. Appealing to anyone who was fascinated by Dan Ariely’s Predictably Irrational, Richard Thaler and Cass Sunstein’s Nudge, or Daniel Kahneman’s Thinking, Fast and Slow but was not sure how those insights could be practically used, The Last Mile is full of solid, practical advice on how to put the lessons of behavioral science to work….(More)”

Policy makers’ perceptions on the transformational effect of Web 2.0 technologies on public services delivery


Paper by Manuel Pedro Rodríguez Bolívar at Electronic Commerce Research: “The growing participation in social networking sites is altering the nature of social relations and changing the nature of political and public dialogue. This paper contributes to the current debate on Web 2.0 technologies and their implications for local governance, identifying the perceptions of policy makers on the use of Web 2.0 in providing public services and on the changing roles that could arise from the resulting interaction between local governments and their stakeholders. The results obtained suggest that policy makers are willing to implement Web 2.0 technologies in providing public services, but preferably under the Bureaucratic model framework, thus retaining a leading role in this implementation. The learning curve of local governments in the use of Web 2.0 technologies is a factor that could influence policy makers’ perceptions. In this respect, many research gaps are identified and further study of the question is recommended….(More)”

One way traffic: The open data initiative project and the need for an effective demand side initiative in Ghana


Paper by Frank L. K. Ohemeng and Kwaku Ofosu-Adarkwa in the Government Information Quarterly: “In recent years the necessity for governments to develop new public values of openness and transparency, and thereby increase their citizenries’ sense of inclusiveness, and their trust in and confidence about their governments, has risen to the point of urgency. The decline of trust in governments, especially in developing countries, has been unprecedented and continuous. A new paradigm that signifies a shift to citizen-driven initiatives over and above state- and market-centric ones calls for innovative thinking that requires openness in government. The need for this new synergy notwithstanding, Open Government cannot be considered truly open unless it also enhances citizen participation and engagement. The Ghana Open Data Initiative (GODI) project strives to create an open data community that will enable government (supply side) and civil society in general (demand side) to exchange data and information. We argue that the GODI is too narrowly focused on the supply side of the project, and suggest that it should generate an even platform to improve interaction between government and citizens to ensure a balance in knowledge sharing with and among all constituencies….(More)”

Big data algorithms can discriminate, and it’s not clear what to do about it


At The Conversation: “This program had absolutely nothing to do with race…but multi-variable equations.”

That’s what Brett Goldstein, a former policeman for the Chicago Police Department (CPD) and current Urban Science Fellow at the University of Chicago’s School for Public Policy, said about a predictive policing algorithm he deployed at the CPD in 2010. His algorithm tells police where to look for criminals based on where people have been arrested previously. It’s a “heat map” of Chicago, and the CPD claims it helps them allocate resources more effectively.
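
Stripped of branding, what the article describes is a spatial count model: bin historical arrests into a grid and treat the densest cells as the places to send patrols. The sketch below illustrates that general technique only, not the CPD's actual system; the file name, the column names and the grid size are all assumptions.

```python
import numpy as np
import pandas as pd

# Hypothetical input: one row per historical arrest, with coordinates.
# File name and column names are assumptions for illustration.
arrests = pd.read_csv("arrests.csv")

# Bin arrests into a coarse spatial grid; each cell's count is its "heat".
counts, lat_edges, lon_edges = np.histogram2d(
    arrests["latitude"], arrests["longitude"], bins=50
)

# Rank cells by past-arrest density; a model of this kind directs
# patrols to the densest cells.
order = np.argsort(counts, axis=None)[::-1][:10]
rows, cols = np.unravel_index(order, counts.shape)
for i, j in zip(rows, cols):
    print(f"lat {lat_edges[i]:.3f}-{lat_edges[i + 1]:.3f}, "
          f"lon {lon_edges[j]:.3f}-{lon_edges[j + 1]:.3f}: "
          f"{int(counts[i, j])} arrests")
```

The article's worry follows directly: because "where people have been arrested previously" reflects past patrol decisions as much as underlying crime, the densest cells inherit whatever bias shaped those decisions.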

Chicago police also recently collaborated with Miles Wernick, a professor of electrical engineering at Illinois Institute of Technology, to algorithmically generate a “heat list” of 400 individuals the department claims have the highest chance of committing a violent crime. In response to criticism, Wernick said the algorithm does not use “any racial, neighborhood, or other such information” and that the approach is “unbiased” and “quantitative.” By deferring decisions to poorly understood algorithms, industry professionals effectively shed accountability for any negative effects of their code.

But do these algorithms discriminate, treating low-income and black neighborhoods and their inhabitants unfairly? It’s the kind of question many researchers are starting to ask as more and more industries use algorithms to make decisions. It’s true that an algorithm itself is quantitative – it boils down to a sequence of arithmetic steps for solving a problem. The danger is that these algorithms, which are trained on data produced by people, may reflect the biases in that data, perpetuating structural racism and negative biases about minority groups.
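
A toy simulation makes the mechanism concrete without any demographic variable appearing in the model. Everything below is invented for illustration: two neighborhoods with identical offense rates but different historical enforcement intensity generate skewed arrest data, and a model that allocates patrols in proportion to past arrests reproduces and reinforces that skew.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical neighborhoods with the SAME underlying offense rate.
true_offense_rate = 0.05
population = 10_000

# Historical enforcement differed: neighborhood A was patrolled twice as
# heavily, so its offenses were twice as likely to show up as arrests in
# the data the model is trained on.
detection_prob = {"A": 0.60, "B": 0.30}

arrest_counts = {
    hood: rng.binomial(population, true_offense_rate * p)
    for hood, p in detection_prob.items()
}
print("training data (arrests):", arrest_counts)

# A naive predictive model that allocates patrols proportionally to past
# arrests recommends roughly twice the resources for A, even though the
# underlying behavior in A and B is identical.
total = sum(arrest_counts.values())
allocation = {hood: count / total for hood, count in arrest_counts.items()}
print("recommended patrol share:", allocation)
```

Feed that allocation back into next year's patrols and the gap widens further; the feedback loop, not any explicit racial input, is what produces the disparate outcome.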

There are a lot of challenges to figuring out whether an algorithm embodies bias. First and foremost, many practitioners and “computer experts” still don’t publicly admit that algorithms can easily discriminate. More and more evidence supports that not only is this possible, but it’s happening already. The law is unclear on the legality of biased algorithms, and even algorithms researchers don’t precisely understand what it means for an algorithm to discriminate….

While researchers clearly understand the theoretical dangers of algorithmic discrimination, it’s difficult to cleanly measure the scope of the issue in practice. No company or public institution is willing to publicize its data and algorithms for fear of being labeled racist or sexist, or maybe worse, having a great algorithm stolen by a competitor.

Even when the Chicago Police Department was hit with a Freedom of Information Act request, they did not release their algorithms or heat list, claiming a credible threat to police officers and the people on the list. This makes it difficult for researchers to identify problems and potentially provide solutions.

Legal hurdles

Existing discrimination law in the United States isn’t helping. At best, it’s unclear on how it applies to algorithms; at worst, it’s a mess. Solon Barocas, a postdoc at Princeton, and Andrew Selbst, a law clerk for the Third Circuit US Court of Appeals, argued together that US hiring law fails to address claims about discriminatory algorithms in hiring.

The crux of the argument is called the “business necessity” defense, in which the employer argues that a practice that has a discriminatory effect is justified by being directly related to job performance….(More)”

What factors influence transparency in US local government?


Grichawat Lowatcharin and Charles Menifield at LSE Impact Blog: “The Internet has opened a new arena for interaction between governments and citizens, as it not only provides more efficient and cooperative ways of interacting, but also more efficient service delivery, and more efficient transaction activities. …But to what extent does increased Internet access lead to higher levels of government transparency? …While we found Internet access to be a significant predictor of Internet-enabled transparency in our simplest model, this finding did not hold true in our most extensive model. This does not negate the fact that the variable is an important factor in assessing transparency levels. … Our data show that total land area, population density, minority percentage, educational attainment, and the council-manager form of government are statistically significant predictors of Internet-enabled transparency. These findings both confirm and negate the findings of previous researchers. For example, while the effect of education on transparency appears to be the most consistent finding in previous research, we also noted that the rural/urban (population density) dichotomy and the education variable are important factors in assessing transparency levels. Hence, as governments create strategic plans that include growth models, they should not only consider the budgetary ramifications of growth, but also the fact that educated residents want more web-based interaction with government. This finding was reinforced by a recent Census Bureau report indicating that some of the cities and counties in Florida and California had population increases greater than ten thousand persons per month during the period 2013-2014.
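
For readers who want a feel for the analysis, the model described is a regression of a transparency measure on county characteristics. The sketch below, using statsmodels, mirrors only the shape of that specification; the file name and variable names are made up, and the paper's actual estimator, measures and controls may differ.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical county-level dataset; column names are illustrative only.
counties = pd.read_csv("county_websites.csv")

# Internet-enabled transparency regressed on the predictors the post
# reports as significant, with Internet access included as a control.
model = smf.ols(
    "transparency_score ~ internet_access + land_area + pop_density"
    " + pct_minority + pct_college + council_manager",
    data=counties,
).fit()

print(model.summary())
```

If the transparency measure were binary or ordinal, a logit or ordered model would replace OLS; the point of the sketch is simply which predictors enter the equation.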

This article is based on the paper ‘Determinants of Internet-enabled Transparency at the Local Level: A Study of Midwestern County Web Sites’, in State and Local Government Review. (More)”

Making data open for everyone


Kathryn L.S. Pettit and Jonathan Schwabish at UrbanWire: “Over the past few years, there have been some exciting developments in open source tools and programming languages, business intelligence tools, big data, open data, and data visualization. These trends, and others, are changing the way we interact with and consume information and data. And that change is driving more organizations and governments to consider better ways to provide their data to more people.

The World Bank, for example, has a concerted effort underway to open its data in better and more visual ways. Google’s Public Data Explorer brings together large datasets from around the world into a single interface. For-profit providers like OpenGov and Socrata are helping local, state, and federal governments open their data (both internally and externally) in newer platforms.

We are firm believers in open data. (There are, of course, limitations to open data because of privacy or security, but that’s a discussion for another time). But open data is not simply about putting more data on the Internet. It’s not only about posting files and telling people where to find them. To allow and encourage more people to use and interact with data, that data needs to be useful and readable not only by researchers, but also by the dad in northern Virginia or the student in rural Indiana who wants to know more about their public libraries.

Open data should be easy to access, analyze, and visualize

Many are working hard to provide more data in better ways, but we have a long way to go. Take, for example, the Congressional Budget Office (full disclosure, one of us used to work at CBO). Twice a year, CBO releases its Budget and Economic Outlook, which provides the 10-year budget projections for the federal government. Say you want to analyze 10-year budget projections for the Pell Grant program. You’d need to select “Get Data” and click on “Baseline Projections for Education” and then choose “Pell Grant Programs.” This brings you to a PDF report, where you can copy the data table you’re looking for into a format you can actually use (say, Excel). You would need to repeat the exercise to find projections for the 21 other programs for which the CBO provides data.

In another case, the Bureau of Labor Statistics has tried to provide users with query tools that avoid the use of PDFs, but still require extra steps to process. You can get the unemployment rate data through their Java Applet (which doesn’t work on all browsers, by the way), select the various series you want, and click “Get Data.” On the subsequent screen, you are given some basic formatting options, but the default display shows all of your data series as separate Excel files. You can then copy and paste or download each one and then piece them together.
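
For what it is worth, BLS also publishes a public JSON API that sidesteps the applet entirely. The sketch below assumes the v2 endpoint and the series ID LNS14000000 (the seasonally adjusted unemployment rate) as we understand them from BLS documentation; both should be verified against the current docs before relying on them.

```python
import requests

# Assumed BLS public data API (v2) endpoint and series ID; LNS14000000 is,
# to our knowledge, the seasonally adjusted civilian unemployment rate.
# Verify both against current BLS documentation.
URL = "https://api.bls.gov/publicAPI/v2/timeseries/data/"
payload = {"seriesid": ["LNS14000000"], "startyear": "2013", "endyear": "2015"}

response = requests.post(URL, json=payload, timeout=30)
response.raise_for_status()

# The response nests observations under Results -> series -> data.
observations = response.json()["Results"]["series"][0]["data"]
for obs in observations:
    print(obs["year"], obs["periodName"], obs["value"])
```

Even so, the larger point stands: a user should not need to write code, or know that an API exists, just to compare a handful of series in one table.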

Taking a step closer to the ideal of open data, the Institute of Museum and Library Services (IMLS) followed President Obama’s May 2013 executive order to make their data open in a machine-readable format. That’s great, but it only goes so far. The IMLS platform, for example, allows you to explore information about your own public library. But the data are labeled with variable names such as BRANLIB and BKMOB that are not intuitive or clear. Users then have to find the data dictionary to understand what data fields mean, how they’re defined, and how to use them.
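
Bridging that gap takes only a few lines of code, which is precisely the work the average user should not have to do. As a sketch, the data dictionary can be folded into the load step by renaming the cryptic fields; BRANLIB and BKMOB are the codes mentioned above, while the file name and the readable labels are assumptions.

```python
import pandas as pd

# Mapping from IMLS variable codes to readable names. The two codes cited
# above are included; a complete mapping would have to come from the IMLS
# data dictionary itself, and the labels here are assumptions.
DATA_DICTIONARY = {
    "BRANLIB": "branch_libraries",
    "BKMOB": "bookmobiles",
}

# Hypothetical export of the IMLS public libraries data.
libraries = pd.read_csv("imls_public_libraries.csv").rename(columns=DATA_DICTIONARY)

print(libraries[["branch_libraries", "bookmobiles"]].describe())
```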

These efforts to provide more data represent real progress, but often fail to be useful to the average person. They move from publishing data that are not readable (buried in PDFs or systems that allow the user to see only one record at a time) to data that are machine-readable (libraries of raw data files or APIs, from which data can be extracted using computer code). We now need to move from a world in which data are simply machine-readable to one in which data are human-readable….(More)”

New Privacy Research Has Implications for Design and Policy


At PrivacyTech: “Try visualizing the Internet’s basic architecture. Could you draw it? What would be your mental model for it?

Let’s be more specific: Say you just purchased shoes off a website using your mobile phone at work. How would you visualize that digital process? Would a deeper knowledge of this architecture make more apparent the myriad potential privacy risks in this transaction? Or to put it another way, what would your knowledge, or lack thereof, for these architectural underpinnings reveal about your understanding of privacy and security risks?

Whether you’re a Luddite or a tech wiz, creating these mental models of the Internet is not the easiest endeavor. Just try doing so yourself.

It is an exercise, however, that several individuals underwent for new research that has instructive implications for privacy and security pros.

“So everything I do on the Internet or that other people do on the Internet is basically asking the Internet for information, and the Internet is sending us to various places where the information is and then bringing us back.” – CO1

You’d think those who have a better understanding of how the Internet works would probably have a better understanding of the privacy and security risks, right? Most likely. Paradoxically, though, a better technological understanding may have very little influence on an individual’s response to potential privacy risks.

This is what a dedicated team of researchers from Carnegie Mellon University worked to discover recently in their award-winning paper, “My Data Just Goes Everywhere”: User Mental Models of the Internet and Implications for Privacy and Security—a culmination of research from Ruogu Kang, Laura Dabbish, Nathaniel Fruchter and Sara Kiesler—all from CMU’s Human-Computer Interaction Institute and the Heinz College in Pittsburgh, PA.

“I try to browse through the terms and conditions but there’s so much there I really don’t retain it.” – T11

Presented at the CyLab Usable Privacy and Security Laboratory’s (CUPS) 11th Symposium on Usable Privacy and Security (SOUPS), their research demonstrated that even though savvy and non-savvy users of the Internet have much different perceptions of its architecture, such knowledge was not predictive of whether a user would take the necessary steps to protect their privacy online. Experience, rather, appears to play a more determining role.

Kang, who led the team, said she was surprised by the results….(More)”

Mining Administrative Data to Spur Urban Revitalization


New paper by Ben Green presented at the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining: “After decades of urban investment dominated by sprawl and outward growth, municipal governments in the United States are responsible for the upkeep of urban neighborhoods that have not received sufficient resources or maintenance in many years. One of city governments’ biggest challenges is to revitalize decaying neighborhoods given only limited resources. In this paper, we apply data science techniques to administrative data to help the City of Memphis, Tennessee improve distressed neighborhoods. We develop new methods to efficiently identify homes in need of rehabilitation and to predict the impacts of potential investments on neighborhoods. Our analyses allow Memphis to design neighborhood-improvement strategies that generate greater impacts on communities. Since our work uses data that most US cities already collect, our models and methods are highly portable and inexpensive to implement. We also discuss the challenges we encountered while analyzing government data and deploying our tools, and highlight important steps to improve future data-driven efforts in urban policy….(More)”
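
The abstract does not spell out the models, but the pattern it describes, scoring properties from administrative records so that limited inspection resources reach the right homes, can be sketched as a plain supervised classifier. Every column name below and the choice of a random forest are illustrative assumptions, not the authors' method.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical administrative data: one row per parcel, with features a city
# typically already collects and a label derived from past inspections.
parcels = pd.read_csv("parcels.csv")
features = ["tax_delinquent_years", "code_violations", "utility_shutoffs",
            "assessed_value_change", "vacancy_flag"]
X, y = parcels[features], parcels["needs_rehab"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Rank parcels by predicted need so inspectors can prioritize visits
# given limited rehabilitation resources.
parcels["rehab_score"] = model.predict_proba(X)[:, 1]
print(parcels.sort_values("rehab_score", ascending=False).head(10))
```

Because the inputs are records most US cities already hold, the portability claim in the abstract is plausible; the harder work, as the authors note, lies in cleaning the data and deploying the tools.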