Does Walk Score Walk the Walk?
I remember when I first met Matt Lerner (then CTO of Walk Score - now Vice President of Product and Design at Redfin). Of course, it was at a painfully hip coffee shop in Seattle (where their offices are located). I admit, I was a bit starstruck. Here was this guy who had managed to put walkability on the map (literally!) after countless academic papers showing the health, environmental, and even economic benefits of walkable neighborhoods had failed to really move the needle (which, of course, I totally get, given that this blog, as you will see, is an attempt to “humanize” a peer-reviewed journal article hardly anyone has read yet because it’s written in academic-ese and sits behind a $36 paywall). But we soon started geeking out about data and methods and my nerves eased. He eagerly showed me what was on deck for Walk Score - which at the time was a cool heat map showing how many places you could walk to within different time ranges and early versions of Bike Score and Transit Score. I told him about my then-project with the Metropolitan Washington Council of Governments (our first official State of Place customer).
Of course, we also discussed Walk Score’s limitations - something that, to their credit, they’ve always been very upfront about. I mean, to hear Matt tell it, this was originally about using tech to do something cool with maps. They certainly fulfilled and surpassed that goal - walkability is now a major shoe company’s tagline! But even though Matt and the Walk Score team have clearly laid out their methodology and its limits, Walk Score’s prominence and easy accessibility make it tempting to use what is essentially a measure of the density of destinations as a proxy for walkability, livability, quality of place, and more. And it has been used that way (see here, here, and here for some examples). But should Walk Score be used to apportion government spending and/or approve development proposals or plans? My colleagues Julia Koschinsky, Emily Talen, Sungduck Lee, and I recently conducted a study to truly understand when it is and is not appropriate to use Walk Score as a proxy for walkability tied to policy, funding, and/or project approvals.
Here’s how we did it:
We used data from 115 neighborhoods in the Washington, DC Metro region to compare Walk Score to the State of Place Index. Now, this wasn’t meant to be self-serving. It so happens that my colleagues had Walk Score data for the DC Metro region, which they had obtained for a related HUD-funded study, and I had State of Place data for the same region from previous work. Also, State of Place is indeed based on “microscale aspects of walkability” - in other words, the nitty-gritty aspects of the built environment, like trees, benches, crosswalks, windows, lighting, etc., that have been empirically tied to whether or not people walk (we collected 162 of these features at the time, and there are now over 290, so I’ll spare you and won’t list them all, but you can see them here). So by comparing Walk Score to a measure like State of Place, you are essentially looking at whether or not the former is an effective proxy for the urban design features that the latter measures.
We ran this comparison in a variety of contexts to better understand under what circumstances it would and would not be appropriate to use Walk Score as a proxy for walkability. We compared Walk Score to State of Place for the following (a rough sketch of this kind of stratified comparison follows the list):
Overall, for all neighborhoods
Within high “access” neighborhoods (Walk Score over 70) vs. lower “access” neighborhoods (Walk Score less than 70)
Within low vs. high-income neighborhoods overall
Within low vs. high-income neighborhoods with high “access” (Walk Score over 70)
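For the data-curious, here is a minimal sketch of what this kind of stratified comparison could look like in Python/pandas. This is not our actual analysis code - the file name, column names, and income cutoff below are all hypothetical - it just illustrates the idea of correlating the two measures within each subgroup.

```python
# Minimal sketch only - not the study's actual code or data.
# Assumes a hypothetical CSV with one row per neighborhood and made-up columns:
# walk_score, state_of_place, and median_income.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("dc_neighborhoods.csv")  # hypothetical file of 115 neighborhoods

# Flag subgroups: the 70-point Walk Score threshold and an illustrative income split
df["high_access"] = df["walk_score"] > 70
df["low_income"] = df["median_income"] < df["median_income"].median()

def correlate(group, label):
    """Print how strongly Walk Score tracks the walkability index within a subgroup."""
    r, p = pearsonr(group["walk_score"], group["state_of_place"])
    print(f"{label}: r = {r:.2f} (p = {p:.3f}, n = {len(group)})")

correlate(df, "All neighborhoods")            # overall
for access, g in df.groupby("high_access"):   # high vs. lower "access"
    correlate(g, f"high_access={access}")
for income, g in df.groupby("low_income"):    # low vs. high income
    correlate(g, f"low_income={income}")
for (access, income), g in df.groupby(["high_access", "low_income"]):
    correlate(g, f"high_access={access}, low_income={income}")  # income within access
```

The intuition is the same as in the study: a good proxy should track the walkability measure within every subgroup, not just overall.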
Here’s (the CliffsNotes, lay-friendly version of) what we found:
Overall, Walk Score and State of Place are correlated except with respect to:
Personal Safety - As a proxy, Walk Score does not pick up urban design features like graffiti, litter, lighting, etc. (known as “physical incivilities”) that influence people’s perception of safety.
Recreational Facilities - including outdoor and indoor recreational destinations, known to impact physical activity levels overall. Walk Score was not related to recreational facilities in any of the situations tested above.
Walk Score serves as a good proxy for urban design features related to walkability in neighborhoods with Walk Scores above 70 and in high-income areas.
BUT
Walk Score does NOT serve as an accurate, reliable measure of walkability for neighborhoods with Walk Scores under 70 and in lower-income areas.
Walk Score tends to overestimate the walkability of high-access, low-income communities.
Walk Score does not pick up on the poorer quality of the walking experience in these neighborhoods, including lower connectivity, aesthetics, and personal safety.
Features such as personal safety, aesthetics, and street connectivity are more likely to be jeopardized in low-income areas with good walkable access.
Ok, so why does this matter?
Well, it turns out that the vast majority of cities in Walk Score’s database score less than 70... In fact, as of 2014, the average Walk Score for the 141 cities with more than 200,000 people that it ranks is 47 (it was 48 in 2015), ranging from 18 to 87.6; only 9.2% of these cities score above a 70. That means that Walk Score serves as an ineffective proxy for walkability for the vast majority of cities it ranks. For example, nearly 90% of the neighborhoods in Madison, WI (whose overall Walk Score is close to the national average) score below a 70. And even in New York City, which has the highest Walk Score in the U.S., 26.2% of the neighborhoods ranked had a Walk Score of less than 70.
This is why you sometimes shake your head at Walk Score’s yearly rankings. Miami? Really? This is one that makes me want to call up Matt (Lerner) every year... Miami is not the 5th most walkable city in the U.S. That’s a ridiculous notion. There’s a reason I start all of my talks bemoaning the walkability of my hometown, explaining how navigating its streets as a car-less teenager was akin to playing a sad, dangerous game of Frogger. Seriously, the fact that Overtown, a very low-income neighborhood in Miami, is ranked as Miami’s 4th most walkable neighborhood, with a Walk Score of 80, exemplifies the issues we highlight in this study. Not only does its built environment lack walkability - with respect to personal safety, aesthetics, and traffic safety, among other features - but as of 2014, this neighborhood also had a crime index of 10,685, 55% higher than the Miami average and nearly 2.3 times higher than the US average. But it’s not just Miami that’s being undeservedly flattered. Our paper highlighted three examples, comparing Walk Score to State of Place, that illustrate why Walk Score is a poor proxy for walkability in lower-access (density of destinations) and low-income neighborhoods:
This block in Largo Town Center, MD, has a Walk Score of 68, close to the 70 threshold, but its much lower State of Place Index of 30 more accurately reflects the auto-centric nature of this road.
Similarly, Gateway Arts, MD, has a moderate Walk Score of 54, but its very low State of Place Index of 6.1 indicates a severe lack of walkability, including no sidewalks, few destinations, and poor aesthetics and personal safety.
Finally, Langley Park, MD, has a Walk Score of 79 but is in a low-income area. Its State of Place Index is only 14.4, indicating poor walkability due to interruptions in streetscape continuity, poor pedestrian comfort, and a lack of traffic safety.
So what does this mean in practice?
Walk Score is still an easy-to-use tool for lay purposes - like finding an apartment with many different destinations close to you (as long as you understand it’s not really telling you whether that walk will have sidewalks, trees, curb cuts, lighting, blank street facades, etc.). And most practitioners are already aware that Walk Score fails to capture the micro-scale, “on the ground” built environment features - what I call the look, feel, and touch of walkability - that actually impact people’s decisions to walk. What this study now clearly highlights, however, is that this is not a tool that should be used in the context of the redevelopment of less walkable places (i.e., most of the redevelopment happening in the U.S.) nor in low-income neighborhoods.
To date, Walk Score has persisted as an easy, shorthand measure of walkability despite the fact that many professionals know it is more precisely a measure of the density (and to some extent, quality) of destinations within a specific walking distance. But it is one thing to boast about (or bemoan) your latest Walk Score ranking - that’s innocuous (although at times infuriating to data geeks like me). It’s an entirely different thing to use Walk Score as a metric by which to make planning, private investment, public funding, or policy decisions; given these findings, that’s irresponsible at best and potentially discriminatory at worst.
As my colleagues and I conclude:
“Walk Score is [either] ‘biased’ toward higher income neighborhoods...[or] lower income neighborhoods remain plagued by walkability risk factors, and...the differentiation with Walk Score serves to highlight that paradox. Larger gaps between pedestrian-based access and walkability measures in low-income and lower-access neighborhoods should motivate increased attention to the continued poor neighborhood quality of places that otherwise seem to provide geographic access.”
State of Place is striving to serve as a transparent, objective, and accurate measurement and forecasting tool for walkability and quality of place, especially as part of the official metrics used in evaluating potential low-income and publicly subsidized development projects. To learn more about our methodology, please contact mariela@stateofplace.co.