A digital white paper by Public Innovation: “In an increasingly complex world, today’s challenges are interconnected. Many have argued that our civic institutions are not equipped to respond with the same velocity at which technology is advancing other sectors of the economy. While this may, in fact, be a fair criticism of our electoral, fiscal, and policy structures, a new mindset is emerging at government’s service delivery layer.
Civic innovation offers a new approach to solving community problems that is emergent, generative, resilient, participatory, human-centered, and driven by a process of validated learning where core assumptions are tested quickly and iteratively – and lead to better solutions that are both impactful and durable. And perhaps most surprisingly, new markets are being created that enable creative problem solvers to sustain their social impact through activities that don’t rely on traditional models of grant funding.
While the Sacramento region is making significant progress in this space, our civic innovation and entrepreneurship ecosystem has yet to reach its full potential. The purpose of this white paper is to make the case for why now is the time for a Regional Civic Technology, Innovation and Entrepreneurship Agenda.
The paper concludes with a set of recommendations for collective action among the region’s public, private, and nonprofit organizations, and, of course, our fellow citizens. Appendix A articulates this agenda in the form of a resolution to be adopted by as many cities and counties in the region as possible.
A recurring theme in this paper is that technology is fundamentally changing the way humans interact with organizations and each other. If we as regional leaders and residents are honest with ourselves, we must consciously choose whether or not we are going to raise our expectations and co-create a new civic experience.
Because the future is now and the opportunities are infinite…”
Agency Liability Stemming from Citizen-Generated Data
Paper by Bailey Smith for The Wilson Center’s Science and Technology Innovation Program: “New ways to gather data are on the rise. One of these ways is through citizen science. According to a new paper by Bailey Smith, JD, federal agencies can feel confident about using citizen science for a few reasons. First, the legal system provides significant protection from liability through the Federal Tort Claims Act (FTCA) and Administrative Procedure Act (APA). Second, training and technological innovation have made it easier for the non-scientist to collect high quality data.”
When Big Data Maps Your Safest, Shortest Walk Home
Sarah Laskow at NextCity: “Boston University and University of Pittsburgh researchers are trying to do the same thing that got the creators of the app SketchFactor into so much trouble over the summer. They’re trying to show people how to avoid dangerous spots on city streets while walking from one place to another.
“What we are interested in is finding paths that offer trade-offs between safety and distance,” Esther Galbrun, a postdoc at Boston University, recently said in New York at the 3rd International Workshop on Urban Computing, held in conjunction with KDD2014.
She was presenting, “Safe Navigation in Urban Environments,” which describes a set of algorithms that would give a person walking through a city options for getting from one place to another — the shortest path, the safest path and a number of alternatives that balanced between both factors. The paper takes existing algorithms, well defined in theory — nothing new or fancy, Galbrun says — and applies them to a problem that people face every day.
Imagine, she suggests, that a person is standing at the Philadelphia Museum of Art, and he wants to walk home, to his place on Wharton Street. (Galbrun and her colleagues looked at Philadelphia and Chicago because those cities have made their crime data openly available.) The walk is about three miles, and one option would be to take the shortest path back. But maybe he’s worried about safety. Maybe he’s willing to take a little bit of a longer walk if it means he has to worry less about crime. What route should he take then?
Services like Google Maps have excelled at finding the shortest, most direct routes from Point A to Point B. But, increasingly, urban computing is looking to capture other aspects of moving about a place. “Fast is only one option,” says co-author Konstantinos Pelechrinis. “There are noble objectives beyond the surface path that you can put inside this navigation problem.” You might look for the path that will burn the most calories; a Yahoo! lab has considered how to send people along the most scenic route.
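The trade-off Galbrun and Pelechrinis describe can be sketched as a weighted shortest-path search: blend each street’s length with a crime-risk score and vary the weight to slide between the fastest and the safest route. The tiny street network, lengths, and risk values below are invented for illustration; the paper’s actual algorithms are more involved than this single-weight blend.

```python
import heapq

def best_path(graph, start, goal, alpha):
    """Dijkstra's algorithm over a combined edge cost:
    cost = alpha * length + (1 - alpha) * risk.
    alpha = 1.0 yields the shortest path, alpha = 0.0 the safest."""
    # graph: {node: [(neighbor, length_km, risk_score), ...]}
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, length, risk in graph.get(node, []):
            if nbr not in visited:
                step = alpha * length + (1 - alpha) * risk
                heapq.heappush(frontier, (cost + step, nbr, path + [nbr]))
    return float("inf"), []

# Hypothetical street network: each edge carries (length in km, relative risk).
streets = {
    "museum": [("park", 1.0, 5.0), ("market", 1.5, 1.0)],
    "park":   [("home", 1.0, 4.0)],
    "market": [("home", 1.8, 1.0)],
}

_, shortest = best_path(streets, "museum", "home", alpha=1.0)
_, safest   = best_path(streets, "museum", "home", alpha=0.0)
```

Intermediate values of `alpha` produce the “number of alternatives that balanced between both factors” the paper mentions.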
But working on routes that do more than give simple directions can have its pitfalls. The SketchFactor app relies both on crime data, when it’s available, and crowdsourced comments to reveal potential trouble spots to users. When it was released this summer, tech reporters and other critics immediately started talking about how it could easily become a conduit for racism. (“Sketchy” is, after all, a very subjective measure.)
So far, though, the problem with the SketchFactor app is less that it offers racially skewed perspectives than that the information it does offer is pretty useless — if entertaining. A pinpoint marked “very sketchy” is just as likely to flag an incident like a Jewish man eating pork products or hipster kids making too much noise as it is to flag a mugging.
Here, then, is a clear example of how Big Data has an advantage over Big Anecdata. The SafePath set-up measures risk more objectively and elegantly. It pulls in openly available crime data and considers simple data like time, location and types of crime. While a crime occurs at a discrete point, the researchers wanted to estimate the risk of a crime on every street, at every point. So they use a mathematical tool that smooths out the crime data over the space of the city and allows them to measure the relative risk of witnessing a crime on every street segment in a city….”
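That smoothing step can be sketched with a Gaussian kernel: each discrete crime contributes risk to nearby points, and a street segment’s risk is the average over sample points along it. The crime coordinates and bandwidth below are invented, and the researchers’ exact estimator may differ.

```python
import math

def crime_risk(point, crimes, bandwidth=0.5):
    """Estimate relative risk at a point by smoothing discrete crime
    locations with a Gaussian kernel (bandwidth in km)."""
    x, y = point
    risk = 0.0
    for cx, cy in crimes:
        d2 = (x - cx) ** 2 + (y - cy) ** 2
        risk += math.exp(-d2 / (2 * bandwidth ** 2))
    return risk

def segment_risk(a, b, crimes, samples=10):
    """Average risk along a street segment from a to b,
    evaluated at evenly spaced sample points."""
    total = 0.0
    for i in range(samples + 1):
        t = i / samples
        p = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        total += crime_risk(p, crimes)
    return total / (samples + 1)

# Hypothetical crime locations (km coordinates).
crimes = [(0.0, 0.1), (0.1, 0.0), (2.0, 2.0)]
near = segment_risk((0.0, 0.0), (0.5, 0.0), crimes)  # segment amid crimes
far  = segment_risk((3.0, 3.0), (3.5, 3.0), crimes)  # remote segment
```

The smoothing turns pointwise incident data into a continuous risk surface, which is what lets every street segment in the city be scored.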
What Is Big Data?
datascience@berkeley Blog: ““Big Data.” It seems like the phrase is everywhere. The term was added to the Oxford English Dictionary in 2013, appeared in Merriam-Webster’s Collegiate Dictionary by 2014, and Gartner’s just-released 2014 Hype Cycle shows “Big Data” passing the “Peak of Inflated Expectations” and on its way down into the “Trough of Disillusionment.” Big Data is all the rage. But what does it actually mean?
A commonly repeated definition cites the three Vs: volume, velocity, and variety. But others argue that it’s not the size of data that counts, but the tools being used, or the insights that can be drawn from a dataset.
To settle the question once and for all, we asked 40+ thought leaders in publishing, fashion, food, automobiles, medicine, marketing and every industry in between how exactly they would define the phrase “Big Data.” Their answers might surprise you! Take a look below to find out what big data is:
- John Akred, Founder and CTO, Silicon Valley Data Science
- Philip Ashlock, Chief Architect of Data.gov
- Jon Bruner, Editor-at-Large, O’Reilly Media
- Reid Bryant, Data Scientist, Brooks Bell
- Mike Cavaretta, Data Scientist and Manager, Ford Motor Company
- Drew Conway, Head of Data, Project Florida
- Rohan Deuskar, CEO and Co-Founder, Stylitics
- Amy Escobar, Data Scientist, 2U
- Josh Ferguson, Chief Technology Officer, Mode Analytics
- John Foreman, Chief Data Scientist, MailChimp
- …
FULL LIST at datascience@berkeley Blog”
Data Mining Reveals How Social Coding Succeeds (And Fails)
Emerging Technology From the arXiv: “Collaborative software development can be hugely successful or fail spectacularly. An analysis of the metadata associated with these projects is teasing apart the difference….
The process of developing software has undergone huge transformation in the last decade or so. One of the key changes has been the evolution of social coding websites, such as GitHub and BitBucket.
These allow anyone to start a collaborative software project that other developers can contribute to on a voluntary basis. Millions of people have used these sites to build software, sometimes with extraordinary success.
Of course, some projects are more successful than others. And that raises an interesting question: what are the differences between successful and unsuccessful projects on these sites?
Today, we get an answer from Yuya Yoshikawa at the Nara Institute of Science and Technology in Japan and a couple of pals at the NTT Laboratories, also in Japan. These guys have analysed the characteristics of over 300,000 collaborative software projects on GitHub to tease apart the factors that contribute to success. Their results provide the first insights into social coding success from this kind of data mining.
A social coding project begins when a group of developers outline a project and begin work on it. These are the “internal developers” and have the power to update the software in a process known as a “commit”. The number of commits is a measure of the activity on the project.
External developers can follow the progress of the project by “starring” it, a form of bookmarking on GitHub. The number of stars is a measure of the project’s popularity. These external developers can also request changes, such as additional features and so on, in a process known as a pull request.
Yoshikawa and co begin by downloading the data associated with over 300,000 projects from the GitHub website. This includes the number of internal developers, the number of stars a project receives over time and the number of pull requests it gets.
The team then analyse the effectiveness of the project by calculating factors such as the number of commits per internal team member, the popularity of the project over time, the number of pull requests that are fulfilled and so on.
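The measures described above can be computed mechanically from per-project metadata. The records and field names here are hypothetical stand-ins for what Yoshikawa and co extract from GitHub (developer counts, commits, stars, and pull requests), not their actual dataset:

```python
# Hypothetical per-project records mirroring the metadata the study uses.
projects = [
    {"name": "libfoo", "devs": 12, "commits": 940, "stars": 310,
     "pulls_opened": 80, "pulls_merged": 64},
    {"name": "barjs", "devs": 2, "commits": 150, "stars": 45,
     "pulls_opened": 10, "pulls_merged": 3},
]

def project_metrics(p):
    """Per-project measures of activity, popularity, and responsiveness."""
    return {
        "commits_per_dev": p["commits"] / p["devs"],               # activity
        "stars": p["stars"],                                       # popularity
        "pull_fulfilment": p["pulls_merged"] / p["pulls_opened"],  # sociality
    }

metrics = {p["name"]: project_metrics(p) for p in projects}
```

Aggregating such per-project measures across 300,000 repositories is what lets the authors correlate team size with activity, popularity and sociality.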
The results provide a fascinating insight into the nature of social coding. Yoshikawa and co say the number of internal developers on a project plays a significant role in its success. “Projects with larger numbers of internal members have higher activity, popularity and sociality,” they say….
Ref: arxiv.org/abs/1408.6012: Collaboration on Social Media: Analyzing Successful Projects on Social Coding”
DATAcide
Douglas Haddow at Adbusters on “The Total Annihilation of Life as We Know It”: “…When the TechCrunchers preach the gospel of disruption, it’s from an industrial perspective that sees life on Earth as a series of business models to be upended. Disrupt or die is the motto, but they never mention the disruptees — the travel agents, the cab drivers, the bellhops. The journalists. The meat in the box before the box is crushed by the anvil of innovation.
“People have ideas about things but it’s a bunch of things. Sign up flow for example, high level things, but sometimes I think — let’s table this for now and put together some idea maps. I feel so empowered because we’re aligned,” someone else says. I look around but can’t trace the source.
It’s hard to focus on his questions when all the conversations occurring parallel to ours combine in a cacophony of sameness, as if we’re all Tedtalking a mantra of ancient buzzwords: Engagement. Intuitive. Connection. User base. Revolutionary. It’s like coke talk gone sour, not words that are meant to say things, but stale semiotics that signify you belong. This is the new language of business. This is where Wall Street goes to find itself…
The internet is a failed utopia. And we’re all trapped inside of it. But I’m not willing to give up on it yet. It’s where I first discovered punk rock and anarchism. Where I learned about the I Ching and Albert Camus while downloading “Holiday in Cambodia” at 15kbps. It’s where I first perved out on the photos of a girl I would eventually fall in love with. It’s home to me, you and everybody we know.
No, the appropriate question to ask is: “What is the purpose of my life?”
I’ve seen the best minds of my generation sucked dry by the economics of the infinite scroll. Amidst the innovation fatigue inherent to a world with more phones than people, we’ve experienced a spectacular failure of the imagination and turned the internet, likely the only thing between us and a very dark future, into little more than a glorified counting machine.
Am I data, or am I human? The truth is somewhere in between. Next time you click I AGREE on some purposefully confusing terms and conditions form, pause for a moment to interrogate the power that lies behind the code. The dream of the internet may have proven difficult to maintain, but the solution is not to dream less, but to dream harder.”
Roles, Trust, and Reputation in Social Media Knowledge Markets
New book edited by Elisa Bertino and Sorin Adam Matei: “This title discusses the emerging trends in defining, measuring, and operationalizing reputation as a new and essential component of the knowledge that is generated and consumed online. The book also proposes a future research agenda related to these issues—with the ultimate goal of shaping the next generation of theoretical and analytic strategies needed for understanding how knowledge markets are influenced by social interactions and reputations built around functional roles.
Roles, Trust, and Reputation in Social Media Knowledge Markets exposes issues that have not been satisfactorily dealt with in the current literature. In a broader sense, the volume aims to change the way in which knowledge generation in social media spaces is understood and utilized. The tools, theories, and methodologies proposed here offer concrete avenues for developing the next generation of research strategies and applications that will help tomorrow’s information consumers make smarter choices, developers create new tools, and researchers launch new research programs….
- Proposes new methods for understanding how opinion leaders and influential authors emerge on social media knowledge markets
- Advances new approaches to theory-based understanding of how social media reputations emerge and shape content and public opinion
- Highlights the most important understudied or promising areas of research regarding reputation and authorship on social media
- Reviews existing accomplishments in the field of reputation research on social media knowledge markets
- Features a multidisciplinary team of authors, covering several disciplines
- Includes both senior, established authors and emerging, innovative voices”
Making Policy Public: Participatory Bureaucracy in American Democracy
New book by Susan L. Moffitt: “This book challenges the conventional wisdom that government bureaucrats inevitably seek secrecy and demonstrates how and when participatory bureaucracy manages the enduring tension between bureaucratic administration and democratic accountability. Looking closely at federal level public participation in pharmaceutical regulation and educational assessments within the context of the vast system of American federal advisory committees, this book demonstrates that participatory bureaucracy supports bureaucratic administration in ways consistent with democratic accountability when it focuses on complex tasks and engages diverse expertise. In these conditions, public participation can help produce better policy outcomes, such as safer prescription drugs. Instead of bureaucracy’s opposite or alternative, public participation can work as its complement.
- Argues that public participation through FDA drug review advisory committees leads to safer drug experiences on the market: fewer boxed warnings and fewer drug withdrawals
- Suggests that the American system of public committees is truly vast, involving upwards of 70,000 committee members across 1,000 different committees
- Details that public committees can be a source of transparency in government operations”
With Wikistrat, crowdsourcing gets geopolitical
Much to the surprise of western intelligence, in a matter of weeks Vladimir Putin’s troops would occupy the disputed peninsula and a referendum would be passed authorising secession from Ukraine.
That a dispersed team of thinkers – assembled by a consultancy known as Wikistrat – could out-forecast the world’s leading intelligence agencies seems almost farcical. But it is an eye-opening example of yet another way that crowdsourcing is upending conventional wisdom.
Crowdsourcing has long been heralded as a means to shake up stale thinking in corporate spheres by providing cheaper, faster means of processing information and problem solving. But now even traditionally enigmatic defence and intelligence organisations and other geopolitical soothsayers are getting in on the act by using the “wisdom of the crowd” to predict how the chips of world events might fall.
Meanwhile, companies with crucial geopolitical interests, such as energy and financial services firms, have begun commissioning crowdsourced simulations of their own from Wikistrat to better gauge investment risk.
While some intelligence agencies have experimented with crowdsourcing to gain insights from the general public, Wikistrat uses a “closed crowd” of subject experts and bills itself as the world’s first crowdsourced analytical services consultancy.
A typical simulation, run on its interactive web platform, has roughly 70 participants. The crowd’s expertise and diversity are combined with Wikistrat’s patented model of “collaborative competition” that rewards participants for the breadth and quality of their contributions. The process is designed to provide a fresh view and shatter the traditional confines of groupthink….”
Beta Release of the Open Contracting Data Standard
Open Contracting: “Each year, governments around the world spend over $9 trillion of citizens’ money through public contracts. All too often, however, little to no data is made available to the public about these contracts. If data is available, it is often supplied in ways which make analysis very challenging or downright impossible.
Yet, if data relating to public contracts is released in a clear, reusable and timely way, the rewards will be great. Governments will have data to make better decisions and enhance their effectiveness, private companies will be better able to compete in the market and citizens will be able to hold their governments accountable for how they spend public resources.
To help unlock these benefits, the Open Contracting Partnership (OCP) is pleased to share for broad consultation the Beta Release of the Open Contracting Data Standard (OCDS). This Standard is currently being developed for the OCP by the World Wide Web Foundation through the support of Omidyar Network and the World Bank.
The objective of the Data Standard is to support governments to publish contracting data in a more accessible, interoperable and useful manner and to enable the widest possible range of stakeholders to use contracting data effectively.
Some of the features provided by this Beta Release include a description of the overall Open Contracting Data Standard Model and a JSON Schema for open contracting releases and records that includes a set of recommended fields.
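As a rough illustration of what a single contracting release in that JSON shape might look like, the record below uses simplified, hypothetical field names; the beta JSON Schema itself is the authoritative source for the recommended fields.

```python
import json

# Illustrative only: these field names are simplified stand-ins, not the
# authoritative OCDS schema -- consult the published JSON Schema for the
# real required fields and structure.
release = {
    "ocid": "ocds-example-000001",  # open contracting process identifier
    "releaseDate": "2014-09-01",
    "tag": ["tender"],              # which contracting stage this describes
    "buyer": {"name": "Example City Department of Works"},
    "tender": {
        "description": "Road resurfacing, District 4",
        "value": {"amount": 250000, "currency": "USD"},
    },
}

# Serialize and re-parse, as a publisher and a consumer would.
doc = json.dumps(release, indent=2)
parsed = json.loads(doc)
```

Because every release shares one schema, records from different governments can be merged and compared by the same tools, which is the interoperability the Standard is after.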
The development of the Open Contracting Data Standard is an open process, and input and feedback are encouraged. Although this will be an ongoing process, comments provided before September 30, 2014 are most likely to inform version 1.0 of the Standard. These comments will help refine the standard, both the structure and fields, in preparation for the initial release version.
Those interested in providing comments can do so in two different ways:
- Inline comments on the document – Log in to the Open Contracting Data Standard Github site and then highlight portions of text to add a comment. To “reply” to an existing comment, highlight the same portion of text, and then add your comment. See instructions at the top of the Github login page for more help on commenting.
- Mailing list – If you have more general comments that don’t fit well as inline comments, please join the OCDS mailing list and start a discussion with your thoughts….”