Emerging Technology From the arXiv: “Nobody agrees on how to define a city. But the emergence of “natural cities” from social media data sets may change that, say computational geographers…
A city is a large, permanent human settlement. But try to define it more carefully and you’ll soon run into trouble. A settlement that qualifies as a city in Sweden may not qualify in China, for example. And the reasons why one settlement is classified as a town while another is classified as a city can sometimes seem almost arbitrary.
City planners know this problem well. They tend to define cities by administrative, legal or even historical boundaries that have little logic to them. Indeed, the same city can sometimes be defined in several different ways.
That causes all kinds of problems, from counting the total population to working out who pays for the upkeep of the place. Which definition do you use?
Now help may be at hand thanks to the work of Bin Jiang and Yufan Miao at the University of Gävle in Sweden. These guys have found a way to use people’s locations, as recorded by social media, to define the boundaries of so-called natural cities, which closely resemble real cities in the US.
Jiang and Miao began with a dataset from the Brightkite social network, which was active between 2008 and 2010. The site encouraged users to log in with their location details so that they could see other users nearby. So the dataset consists of almost 3 million locations in the US and the dates on which they were logged.
To start off, Jiang and Miao simply placed a dot on a map at the location of each login. They then connected these dots to their neighbours to form triangles that end up covering the entire mainland US.
Next, they calculated the size of each triangle on the map and plotted this size distribution, which turns out to follow a power law. So there are lots of tiny triangles but only a few large ones.
Finally, they calculated the average size of the triangles and then coloured in all those that were smaller than average. The coloured areas are “natural cities”, say Jiang and Miao.
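The paper isn’t quoted here in enough detail to pin down every step, but a minimal sketch of the triangle-and-threshold procedure, assuming a standard Delaunay triangulation is what connects each point to its neighbours and using random points in place of the Brightkite logins, might look like this:

```python
# Sketch of the "natural cities" procedure described above (assumptions noted).
# The login coordinates here are random placeholders, not the Brightkite data.
import numpy as np
from scipy.spatial import Delaunay

def natural_city_triangles(points):
    """Return the triangles whose area is smaller than the mean triangle area."""
    tri = Delaunay(points)                  # connect each dot to its neighbours
    corners = points[tri.simplices]         # shape: (n_triangles, 3, 2)
    a, b, c = corners[:, 0], corners[:, 1], corners[:, 2]
    # Triangle area from the cross product of two edge vectors.
    areas = 0.5 * np.abs(np.cross(b - a, c - a))
    # The size distribution is heavy-tailed: many tiny triangles, few large ones.
    # "Natural cities" are the patches formed by the below-average triangles.
    return tri.simplices[areas < areas.mean()]

# Toy example standing in for the ~3 million US login locations.
logins = np.random.rand(1000, 2)
small_triangles = natural_city_triangles(logins)
print(f"{len(small_triangles)} triangles are smaller than average")
```

In this sketch the returned triangles would then be drawn on the map; the connected clusters of coloured triangles are the candidate “natural cities”.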
It’s easy to imagine that the resulting map of triangles is of little value. But to the evident surprise of the researchers, it produces a pretty good approximation of the cities in the US. “We know little about why the procedure works so well but the resulting patterns suggest that the natural cities effectively capture the evolution of real cities,” they say.
That’s handy because it suddenly gives city planners a way to study and compare cities on a level playing field. It allows them to see how cities evolve and change over time too. And it gives them a way to analyse how cities in different parts of the world differ.
Of course, Jiang and Miao will want to find out why this approach reveals city structures in this way. That’s still something of a puzzle but the answer itself may provide an important insight into the nature of cities (or at least into the nature of this dataset).
A few days ago, this blog wrote about how a new science of cities is emerging from the analysis of big data. This is another example, and expect to see more.
Ref: http://arxiv.org/abs/1401.6756 : “The Evolution of Natural Cities from the Perspective of Location-Based Social Media”
Civic Works Project translates data into community tools
The blog of the John S. and James L. Knight Foundation: “The Civic Works Project is a two-year effort to create apps and other tools to help increase the utility of local government data to benefit community organizations and the broader public.
This project looks systemically at public and private information that can be used to engage residents, solve community problems and increase government accountability. We believe that there is a new frontier where information can be used to improve public services and community building efforts that benefit local residents.
Through the Civic Works Project, we’re seeking to improve access to information and identify solutions to problems facing diverse communities. Uncovering the value of data—and the stories behind it—can enhance the provision of public services through the smart application of technology.
Here’s some of what we’ve accomplished.
Partnership with WBEZ Public Data Blog
The WBEZ Public Data Blog is dedicated to examining and promoting civic data in Chicago, Cook County and Illinois. WBEZ is partnering with the Smart Chicago Collaborative to provide news and analysis on open government by producing content items that explain and tell stories hidden in public data. The project seeks to increase the utility, understanding, awareness and availability of local civic data. It comprises blog postings on the hidden uses of data and stories from the data, while including diverse voices and discussions on how innovations can improve civic life. It also features interviews with community organizations, businesses, government leaders and residents on challenges that could be solved through more effective use of public data.
Crime and Punishment in Chicago
The Crime and Punishment in Chicago project will provide an index of data sources regarding the criminal justice system in Chicago. The site will aggregate sources of data and explain how the data is generated, how to get it, and what data is unavailable.
Illinois Open Technology Challenge
The Illinois Open Technology Challenge aims to bring governments, developers and communities together to create digital tools that use public data to serve today’s civic needs and promote economic development. Smart Chicago and our partners worked with government officials to publish 138 new datasets (34 in Champaign, 15 in Rockford, 12 in Belleville, and 77 from the 42 municipalities in the South Suburban Mayors and Managers Association) on the State of Illinois data portal. Smart Chicago has worked with developers in meet-ups all over the state—in six locations in four cities with 149 people. The project has also allowed Smart Chicago to conduct outreach in each of our communities to reach regular residents with needs that can be addressed through data and technology.
LocalData + SWOP
The LocalData + SWOP project is part of our effort to help bridge technology gaps in high-capacity organizations. This effort helps the Southwest Organizing Project collect information about vacant and abandoned housing using the LocalData tool.
Affordable Care Act Outreach App
With the ongoing implementation of the Affordable Care Act, community organizations such as LISC-Chicago have been hard at work providing navigators to help residents register through the healthcare.gov site.
Currently, LISC-Chicago organizers are in neighborhoods contacting residents and encouraging them to go to their closest Center for Working Families. Using a combination of software such as Wufoo and Twilio, Smart Chicago is helping LISC with its outreach by building a tool that lets organizers send residents text reminders to sign up for health insurance.
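The post doesn’t describe how the reminder tool is built, but as an illustration, a minimal sketch of sending such a reminder with Twilio’s Python helper library might look like the following. The credentials, phone numbers, function name and message text are all placeholders, not details of the actual Smart Chicago / LISC tool.

```python
# Minimal sketch: texting a sign-up reminder via the Twilio Python library.
# All identifiers below are hypothetical placeholders.
from twilio.rest import Client

ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"  # from the Twilio console
AUTH_TOKEN = "your_auth_token"

client = Client(ACCOUNT_SID, AUTH_TOKEN)

def send_reminder(resident_number, center_name):
    """Text a resident a reminder to visit their nearest Center for Working Families."""
    message = client.messages.create(
        to=resident_number,
        from_="+13125550100",  # a Twilio-provisioned sending number
        body=f"Reminder: visit {center_name} to sign up for health insurance.",
    )
    return message.sid

send_reminder("+17735550123", "your closest Center for Working Families")
```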
Texting Tools: Twilio and Textizen
Smart Chicago is expanding the Affordable Care Act outreach project to engage residents in other ways using SMS messaging.
Smart Chicago is also a local provider for Textizen, an SMS-based survey tool that civic organizations can use to obtain resident feedback. Organizations can create a survey campaign and then place the survey options on posters, postcards or screens during live events. They can then receive real-time feedback as people text in their answers.
WikiChicago
WikiChicago will be a hyper-local Wikipedia-like website that anyone can edit. For this project, Smart Chicago is partnering with the Chicago Public Library to feature local authors and books about Chicago, and to publish more information about Chicago’s rich history.”
Open data: Strategies for impact
Important though these considerations are, they miss what should be an obvious and more profound alternative.
Right now, organisations like DataKind™ and Periscopic, and many other entrepreneurs, innovators and established social enterprises that use open data, see things differently. They are using these straplines to shake up the status quo, to demonstrate that data-driven businesses can do well by doing good.
And it’s the confluence of the many national and international open data initiatives, and the growing number of technically able, socially responsible organisations, that provides the opportunity for social as well as economic growth. The World Wide Web Foundation now estimates that there are over 370 open data initiatives around the world. Collectively, and through portals such as Quandl and datacatalogs.org, these initiatives have made a staggering quantity of data available – in excess of eight million data sets. In addition, several successful and data-rich companies are entering into a new spirit of philanthropy – by donating their data for the public good. There’s no doubt that opening up data signals a new willingness by governments and businesses all over the world to engage with their citizens and customers in a new and more transparent way.
The challenge, though, is ensuring that these popular national and international open data initiatives are cohesive and impactful. And that the plans drawn up by public sector bodies to release specific data sets are based on the potential the data has to achieve a beneficial outcome, not – or, at least, not solely – based on the cost or ease of publication. Despite the best of intentions, only a relatively small proportion of open data sets now available has the latent potential to create significant economic or social impact. In our push to open up data and government, it seems that we may have fallen into the trap of believing the ends are the same as the means; that effect is the same as cause…”
It’s the Neoliberalism, Stupid: Why instrumentalist arguments for Open Access, Open Data, and Open Science are not enough.
Eric Kansa at LSE Blog: “…However, I’m increasingly convinced that advocating for openness in research (or government) isn’t nearly enough. There’s been too much of an instrumentalist justification for open data and open access. Many advocates talk about how it will cut costs and speed up research and innovation. They also argue that it will make research more “reproducible” and transparent so interpretations can be better vetted by the wider community. Advocates for openness, particularly in open government, also talk about the wonderful commercial opportunities that will come from freeing research…
These are all very big policy issues, but they need to be asked if the Open Movement really stands for reform and not just a further expansion and entrenchment of Neoliberalism. I’m using the term “Neoliberalism” because it resonates as a convenient label for describing how and why so many things seem to suck in Academia. Exploding student debt, vanishing job security, increasing compensation for top administrators, expanding bureaucracy and committee work, corporate management methodologies (Taylorism), and intensified competition for ever-shrinking public funding all fall under the general rubric of Neoliberalism. Neoliberal universities primarily serve the needs of commerce. They need to churn out technically skilled human resources (made desperate for any work by high loads of debt) and easily monetized technical advancements….
“Big Data,” “Data Science,” and “Open Data” are now hot topics at universities. Investments are flowing into dedicated centers and programs to establish institutional leadership in all things related to data. I welcome the new Data Science effort at UC Berkeley to explore how to make research data professionalism fit into the academic reward systems. That sounds great! But will these new data professionals have any real autonomy in shaping how they conduct their research and build their careers? Or will they simply be part of an expanding class of harried and contingent employees, hired and fired through the whims of creative destruction fueled by the latest corporate-academic hype-cycle?
Researchers, including #AltAcs and “data professionals”, need a large measure of freedom. Miriam Posner’s discussion about the career and autonomy limits of Alt-academic-hood helps highlight these issues. Unfortunately, there’s only one area where innovation and failure seem survivable, and that’s the world of the start-up. I’ve noticed how the “Entrepreneurial Spirit” gets celebrated a lot in this space. I’m guilty of basking in it myself (10 years as a quasi-independent #altAc in a nonprofit I co-founded!).
But in the current Neoliberal setting, being an entrepreneur requires a singular focus on monetizing innovation. PeerJ and Figshare are nice, since they have business models that are less “evil” than Elsevier’s. But we need to stop fooling ourselves that the only institutions and programs that we can and should sustain are the ones that can turn a profit. For every PeerJ or Figshare (and these are ultimately just as dependent on continued public financing of research as any grant-driven project), we also need more innovative organizations like the Internet Archive, wholly dedicated to the public good and not the relentless pressure to commoditize everything (especially their patrons’ privacy). We need to be much more critical about the kinds of programs, organizations, and financing strategies we (as a society) can support. I raised the political economy of sustainability issue at a recent THATCamp and hope to see more discussion.
In reality, so much of the Academy’s dysfunction is driven by our new Gilded Age’s artificial scarcity of money. With wealth concentrated in so few hands, it is very hard to finance risk-taking and entrepreneurialism in the scholarly community, especially any form of entrepreneurialism that does not turn a profit in a year or two.
Open Access and Open Data would make so much more of a difference if we had the same kind of dynamism in the academic and nonprofit sector as we have in the for-profit start-up sector. After all, Open Access and Open Data can be key enablers of much broader participation in research and education. However, broader participation still needs to be financed: you cannot eat an open access publication. We cannot gloss over this key issue.
We need more diverse institutional forms so that researchers can find (or found) the kinds of organizations that best channel their passions into contributions that enrich us all. We need more diverse sources of financing (new foundations, better financed Kickstarters) to connect innovative ideas with the capital needed to see them implemented. Such institutional reforms will make life in the research community much more livable, creative, and dynamic. It would give researchers more options for diverse and varied career trajectories (for-profit or not-for-profit) suited to their interests and contributions.
Making the case to reinvest in the public good will require a long, hard slog. It will be much harder than the campaign for Open Access and Open Data because it will mean contesting Neoliberal ideologies and constituencies that are deeply entrenched in our institutions. However, the constituencies harmed by Neoliberalism, particularly the student community now burdened by over $1 trillion in debt and the middle class more generally, are much larger and very much aware that something is badly amiss. As we celebrate the impressive strides made by the Open Movement in the past year, it’s time we broaden our goals to tackle the needs for wider reform in the financing and organization of research and education.
This post originally appeared on Digging Digitally and is reposted under a CC-BY license.”
Big Data’s Dangerous New Era of Discrimination
Michael Schrage in HBR blog: “Congratulations. You bought into Big Data and it’s paying off Big Time. You slice, dice, parse and process every screen-stroke, clickstream, Like, tweet and touch point that matters to your enterprise. You now know exactly who your best — and worst — customers, clients, employees and partners are. Knowledge is power. But what kind of power does all that knowledge buy?
Big Data creates Big Dilemmas. Greater knowledge of customers creates new potential and power to discriminate. Big Data — and its associated analytics — dramatically increase both the dimensionality and degrees of freedom for detailed discrimination. So where, in your corporate culture and strategy, does value-added personalization and segmentation end and harmful discrimination begin?
Let’s say, for example, that your segmentation data tells you the following:
Your most profitable customers by far are single women between the ages of 34 and 55, closely followed by “happily married” women with at least one child. Divorced women are slightly more profitable than “never marrieds.” Gay males — single and in relationships — are also disproportionately profitable. The “sweet spot” is urban and 28 to 50. These segments collectively account for roughly two-thirds of your profitability. (Unexpected factoid: Your most profitable customers are overwhelmingly Amazon Prime subscribers. What might that mean?)
Going more granular, as Big Data does, offers even sharper ethno-geographic insight into customer behavior and influence:
- Single Asian, Hispanic, and African-American women with urban post codes are most likely to complain about product and service quality to the company. Asian and Hispanic complainers happy with resolution/refund tend to be in the top quintile of profitability. African-American women do not.
- Suburban Caucasian mothers are most likely to use social media to share their complaints, followed closely by Asian and Hispanic mothers. But if resolved early, they’ll promote the firm’s responsiveness online.
- Gay urban males receiving special discounts and promotions are the most effective at driving traffic to your sites.
My point here is that these data are explicit, compelling and undeniable. But how should sophisticated marketers and merchandisers use them?
Campaigns, promotions and loyalty programs targeting women and gay males seem obvious. But should Asian, Hispanic and white females enjoy preferential treatment over African-American women when resolving complaints? After all, they tend to be both more profitable and measurably more willing to use social media effectively. Does it make more marketing sense to encourage African-American female customers to become more social media savvy? Or are resources better invested in getting more from one’s best customers? Similarly, how much effort and ingenuity should go into making more gay male customers better social media evangelists? What kinds of offers and promotions could go viral on their networks?…
Of course, the difference between price discrimination and discrimination positively correlated with gender, ethnicity, geography, class, personality and/or technological fluency is vanishingly small. Indeed, the entire epistemological underpinning of Big Data for business is that it cost-effectively makes informed segmentation and personalization possible…..
But the main source of concern won’t be privacy, per se — it will be whether and how companies and organizations like your own use Big Data analytics to justify their segmentation/personalization/discrimination strategies. The more effective Big Data analytics are in profitably segmenting and serving customers, the more likely those algorithms will be audited by regulators or litigators.
Tomorrow’s Big Data challenge isn’t technical; it’s whether managements have algorithms and analytics that are both fairly transparent and transparently fair. Big Data champions and practitioners had better be discriminating about how discriminating they want to be.”
Big Data and the Future of Privacy
John Podesta at the White House blog: “Last Friday, the President spoke to the American people, and the international community, about how to keep us safe from terrorism in a changing world while upholding America’s commitment to liberty and privacy that our values and Constitution require. Our national security challenges are real, but that is surely not the only space where changes in technology are altering the landscape and challenging conceptions of privacy.
That’s why in his speech, the President asked me to lead a comprehensive review of the way that “big data” will affect the way we live and work; the relationship between government and citizens; and how public and private sectors can spur innovation and maximize the opportunities and free flow of this information while minimizing the risks to privacy. I will be joined in this effort by Secretary of Commerce Penny Pritzker, Secretary of Energy Ernie Moniz, the President’s Science Advisor John Holdren, the President’s Economic Advisor Gene Sperling and other senior government officials.
I would like to explain a little bit more about the review, its scope, and what you can expect over the next 90 days.
We are undergoing a revolution in the way that information about our purchases, our conversations, our social networks, our movements, and even our physical identities is collected, stored, analyzed and used. The immense volume, diversity and potential value of data will have profound implications for privacy, the economy, and public policy. The working group will consider all those issues, and specifically how the present and future state of these technologies might motivate changes in our policies across a range of sectors.
When we complete our work, we expect to deliver to the President a report that anticipates future technological trends and frames the key questions that the collection, availability, and use of “big data” raise – both for our government, and the nation as a whole. It will help identify technological changes to watch, assess whether those changes are addressed by the current U.S. policy framework, and highlight where further government action, funding, research and consideration may be required.
This is going to be a collaborative effort. The President’s Council of Advisors on Science and Technology (PCAST) will conduct a study to explore in-depth the technological dimensions of the intersection of big data and privacy, which will feed into this broader effort. Our working group will consult with industry, civil liberties groups, technologists, privacy experts, international partners, and other national and local government officials on the significance of and future for these technologies. Finally, we will be working with a number of think tanks, academic institutions, and other organizations around the country as they convene stakeholders to discuss these very issues and questions. Likewise, many abroad are analyzing and responding to the challenge and seizing the opportunity of big data. These discussions will help to inform our study.
While we don’t expect to answer all these questions, or produce a comprehensive new policy in 90 days, we expect this work to serve as the foundation for a robust and forward-looking plan of action. Check back on this blog for updates on how you can get involved in the debate and for status updates on our progress.”
Needed: A New Generation of Game Changers to Solve Public Problems
Beth Noveck: “In order to change the way we govern, it is important to train and nurture a new generation of problem solvers who possess the multidisciplinary skills to become effective agents of change. That’s why we at the GovLab have launched The GovLab Academy with the support of the Knight Foundation.
In an effort to help people in their own communities become more effective at developing and implementing creative solutions to compelling challenges, The GovLab Academy is offering two new training programs:
1) An online platform with an unbundled and evolving set of topics, modules and instructors on innovations in governance, including themes such as big and open data and crowdsourcing, with forthcoming topics on behavioral economics, prizes and challenges, open contracting and performance management for governance;
2) Gov 3.0: A curated and sequenced, 14-week mentoring and training program.
While the online platform is always freely available, Gov 3.0 begins on January 29, 2014, and we invite you to participate. Please forward this email to your networks and help us spread the word about the opportunity to participate.
Please consider applying (individuals or teams may apply), if you are:
- an expert in communications, public policy, law, computer science, engineering, business or design who wants to expand your ability to bring about social change;
- a public servant who wants to bring innovation to your job;
- someone with an important idea for positive change but who lacks key skills or resources to realize the vision;
- interested in joining a network of like-minded, purpose-driven individuals across the country; or
- someone who is passionate about using technology to solve public problems.
The program includes live instruction and conversation every Wednesday from 5:00–6:30 PM EST for 14 weeks, starting January 29, 2014. You will be able to participate remotely via Google Hangout.
Gov 3.0 will allow you to apply evolving technology to the design and implementation of effective solutions to public interest challenges. It will give you an overview of the most current approaches to smarter governance and help you improve your skills in collaboration, communication, and developing and presenting innovative ideas.
Over 14 weeks, you will develop a project and a plan for its implementation, including a long and short description, a presentation deck, a persuasive video and a project blog. Last term’s projects covered such diverse issues as post-Fukushima food safety, science literacy for high schoolers and prison reform for the elderly. In every case, the goal was to identify realistic strategies for making a difference quickly. You can read the entire Gov 3.0 syllabus here.
The program will include national experts and instructors in technology and governance both as guests and as mentors to help you design your project. Last term’s mentors included current and former officials from the White House and various state, local and international governments, academics from a variety of fields, and prominent philanthropists.
People who complete the program will have the opportunity to apply for a special fellowship to pursue their projects further.
Previously taught only on campus, Gov 3.0 is now being offered in beta as an online program. This is not a MOOC. It is a mentoring-intensive coaching experience. To maximize the quality of the experience, enrollment is limited.
Please submit your application by January 22, 2014. Accepted applicants (individuals and teams) will be notified on January 24, 2014. We hope to expand the program in the future so please use the same form to let us know if you would like to be kept informed about future opportunities.”
Supporting open government in New Europe
Google Europe Blog: “The “New Europe” countries that joined the European Union over the past decade are moving ahead fast to use the Internet to improve transparency and open government. We recently partnered with Techsoup Global to support online projects driving forward good governance in Romania, the Czech Republic, and most recently, in Slovakia.
Techsoup Global, in partnership with the Slovak Center for Philanthropy, recently held Restart Slovakia 2013, an exciting social-startups awards ceremony, in Bratislava. Slovakia’s Deputy Minister of Finance and Digital Champion Peter Pellegrini delivered a keynote promoting the Internet and open data and announced the winners of this year’s contest. Ambassadors from the U.S., Israel and Romania and several distinguished Slovak NGOs also attended the ceremony.
Winning projects included:
- Vzdy a vsade – Always and Everywhere – a volunteer portal offering online and anonymous psychological advice to internet users via chat.
- Nemlcme.sk – a portal providing counsel for victims of sexual assaults.
- Co robim (What I Do) – an educational online library of careers, advising young people on how to choose their career paths and dream jobs.
- Mapa zlocinu (Crime Map) – an online map displaying rates of criminality across different neighbourhoods.
- Demagog.sk – a platform focused on analyzing public statements of politicians and releasing information about politicians and truthfulness of their speeches in a user-friendly format.”
What Jelly Means
Steven Johnson: “A few months ago, I found this strange white mold growing in my garden in California. I’m a novice gardener, and to make matters worse, a novice Californian, so I had no idea what these small white cells might portend for my flowers.
This is one of those odd blank spots — I used to call them Googleholes in the early days of the service — where the usual Delphic source of all knowledge comes up relatively useless. The Google algorithm doesn’t know what those white spots are, the way it knows the answers to more computational questions, like “What is the top-ranked page for ‘white mold’?” or “What is the capital of Illinois?” What I want, in this situation, is the distinction we usually draw between information and wisdom. I don’t just want to know what the white spots are; I want to know if I should be worried about them, or if they’re just a normal thing during late summer in Northern California gardens.
Now, I’m sure I know a dozen people who would be able to answer this question, but the problem is I don’t really know which people they are. But someone in my extended social network has likely experienced these white spots on their plants, or better yet, gotten rid of them. (Or, for all I know, ate them — I’m trying not to be judgmental.) There are tools out there that would help me run the social search required to find that person. I can just bulk email my entire address book with images of the mold and ask for help. I could go on Quora, or a gardening site.
But the thing is, it’s a type of question that I find myself wanting to ask a lot, and there’s something inefficient about trying to figure out the exact right tool to use each time, particularly when we have seen the value of consolidating so many of our queries into a single, predictable search field at Google.
This is why I am so excited about the new app, Jelly, which launched today. …
Jelly, if you haven’t heard, is the brainchild of Biz Stone, one of Twitter’s co-founders. The service launches today with apps on iOS and Android. (Biz himself has a blog post and video, which you should check out.) I’ve known Biz since the early days of Twitter, and I’m excited to be an adviser and small investor in a company that shares so many of the values around networks and collective intelligence that I’ve been writing about since Emergence.
The thing that’s most surprising about Jelly is how fun it is to answer questions. There’s something strangely satisfying in flipping through the cards, reading questions, scanning the pictures, and looking for a place to be helpful. It’s the same broad gesture of reading, say, a Twitter feed, and pleasantly addictive in the same way, but the intent is so different. Scanning a Twitter feed while waiting for the train has the feel of “Here we are now, entertain us.” Scanning Jelly is more like: “I’m here. How can I help?”
A permanent hacker space in the Brazilian Congress
Blog entry by Dan Swislow at OpeningParliament: “On December 17, the presidency of the Brazilian Chamber of Deputies passed a resolution that creates a permanent Laboratório Ráquer or “Hacker Lab” inside the Chamber—a global first.
Read the full text of the resolution in Portuguese.
The resolution mandates the creation of a physical space at the Chamber that is “open for access and use by any citizen, especially programmers and software developers, members of parliament and other public workers, where they can utilize public data in a collaborative fashion for actions that enhance citizenship.”
The idea was born out of a week-long hackathon (or “hacker marathon”) hosted by the Chamber of Deputies in November, with the goal of using technology to enhance the transparency of legislative work and increase citizen understanding of the legislative process. More than 40 software developers and designers worked to create 22 applications for computers and mobile devices. The applications were voted on and the top three awarded prizes.
The winner was Meu Congress, a website that allows citizens to track the activities of their elected representatives and monitor their expenses. Runners-up included Monitora, Brasil!, an Android application that allows users to track proposed bills, attendance and the Twitter feeds of members; and Deliberatório, an online card game that simulates the deliberation of bills in the Chamber of Deputies.
The hackathon engaged the software developers directly with members and staff of the Chamber of Deputies, including the Chamber’s President, Henrique Eduardo Alves. Hackathon organizer Pedro Markun of Transparencia Hacker made a formal proposal to the President of the Chamber for a permanent outpost, where, as Markun said in an email, “we could hack from inside the leviathan’s belly.”
The Chamber’s Director-General has established nine staff positions for the Hacker Lab under the leadership of Cristiano Ferri Faria, who spoke with me about the new project.
Faria explained that the hackathon event was a watershed moment for many public officials: “For 90-95% of parliamentarians and probably 80% of civil servants, they didn’t know how a simple app, for instance, can make it much easier to analyze speeches.” Faria pointed to one of the hackathon contest entries, Retórica Parlamentar, which provides an interactive visualization of plenary remarks by members of the Chamber. “When members saw that, they got impressed and wondered, ‘There’s something new going on and we need to understand it and support it.’”