Can We Make Math More Accessible?

Why are 76% of all math PhDs awarded to men?  One major reason, according to Stanford math professor Jo Boaler, is the way math is taught.

At Stanford University, I teach some of the country’s highest achievers. But when they enter fast-paced lecture halls, even those who were successful in high school mathematics start to think they’re not good enough. One of my undergraduates described the panic she felt when trying to keep pace with a professor: “The material felt like it was flying over my head,” she wrote. “It was like I was watching a lecture at 2x or 3x speed and there was no way to pause or replay it.” She described her fear of failure as “crippling.” This student questioned her intelligence and started to rethink whether she belonged in the field of math at all.

Research tells us that lecturers typically speak at between 100 and 125 words a minute, but students can write down only about 20 words a minute, often leaving them feeling frustrated and defeated.

This style of teaching doesn’t work for lots of people — one college math class was enough to turn me off from math.  But it hits women and people of color especially hard.

When students struggle in speed-driven math classes, they often believe the problem lies within themselves, not realizing that fast-paced lecturing is a faulty teaching method. The students most likely to internalize the problem are women and students of color.

But there’s no reason math has to be taught the way it currently is.  Recently Boaler ran an interesting math teaching experiment that had impressive results.

In a recent summer camp with 81 middle school students, we taught mathematics through open, creative lessons to demonstrate how mathematics is about thinking deeply, rather than calculating quickly. After 18 lessons, the students improved their mathematics achievement on standardized tests by an average of 50%, the equivalent of 1.6 years of school.

What’s true in math is even more true in data science: if we want more people to use data science, we need to take a hard look at how it’s taught.

What Data Science Could Learn From Mavo

Yesterday, Lea Verou, author of the fabulous book CSS Secrets, announced the launch of Mavo.

Mavo helps you turn your static HTML into reactive web applications without a single line of programming code and no server backend.

Although Mavo is a tool for creating websites and web apps, I think it’s also got a lot to teach data science.

Just as there are a bunch of data science tools that let you quickly take care of business via an easy to use UI, there are hundreds of drag and drop tools for easily designing websites & web apps. And just like easy to use data science tools, these web building tools are “easy to use” right up until you want to make something that’s even a little different from what the tool’s creators originally envisioned. As a Mavo research paper puts it,

research indicates that there are high levels of dissatisfaction with [Content Management Systems (CMSs) for building websites]. One reason is that CMSs impose narrow constraints on authors in terms of possible presentation – far narrower than when editing a standalone HTML and CSS document.

What happens when you need to move beyond these narrow constraints? The same thing that happens with data science: a heck of a lot of blood, sweat, and tears.

It is indicative that even implementing a simple to-do application similar to the one in Figure 1 needs 294 lines of JavaScript (not including comments) with AngularJS, 246 with Polymer, 297 with Backbone.js, and 421 with React. Other JavaScript frameworks are in the same ballpark.


Mavo overcomes this problem by extending HTML so you can do an awful lot with just a few lines of simple code. For example, Mavo has a wonderful system for letting you store data in the browser, in GitHub, or on Dropbox just by adding a little HTML, and you can easily edit that data with an auto-generated, customizable UI.

Similarly, you can create a slider, store its value in a variable, and display the result with just two simple lines of HTML:

<input type="range" property="strength" />
Slider value: [strength]/100

Want to display the slider value as a percentage? It’s easy to add a calculation:

Slider value: [strength/100]

At the same time, because it’s built in HTML, you’ve got a lot of control over how it looks; just change the HTML and CSS and you’re good to go.

What’s particularly nice about Mavo is that if you outstrip what’s built into it, you can switch to full-blown JavaScript.

MavoScript is based on a subset of JavaScript, with a few simplifications to make it easy to use even by people with no JavaScript knowledge. If a Mavo expression is more advanced than MavoScript, it is automatically parsed as JavaScript, allowing experienced web developers to do more with Mavo expressions.

Similarly, Mavo was “designed for extensibility from the ground up,” allowing you to add plug-ins to extend what users can do using HTML.

OK, you say, but still, you’re working in HTML. That’s going to turn off a lot of folks, right?

Mavo says no — and they have peer-reviewed research to back up that claim. They did a usability study with 20 users, and they discovered that

Even users with no programming experience were able to quickly craft Mavo applications.

It’s worth pausing for a moment to acknowledge what the Mavo crew did. Rather than just assuming that Mavo must be easy for beginners because they and the early adopters who checked out their work liked it, they ran a study to find out. Considering that research has demonstrated that most programming languages are no easier for beginners to understand than a coding language that was randomly generated, that’s a really big step.

Obviously, if all you ever want to do is build really simple websites, a drag and drop tool is going to be hard to beat. But what Mavo has shown is that it’s possible to create a tool that gives ordinary users an awful lot of room to grow without getting clobbered by a very steep learning curve.  Pandas, R, D3, and the rest of data science could learn a lot from this accomplishment.

For more info on Mavo, which is currently in beta, check out mavo.io.

Why Data Science Needs to Embrace Part-Time Analysts

You’ve started using a data science tool that’s supposed to “empower users.” And for some features, that’s true; it’s really easy to get some things done. But as soon as you need to take one step beyond those features — which almost always happens — it’s bang-head-against-wall time.

But that’s ok. You’re a data analyst. You know the drill. Spend enough time with a tool and eventually you’ll get it. In a few months, the weirdnesses will be second nature to you.

But there’s a big if: only if you spend a lot of your time as a data analyst.

That’s not true for a lot of people who need to crunch data. They probably have a few weekly/monthly reports or analyses that are critical to their work. But they only tweak these reports once or twice a year.

Two to three times a year, they do get to spend more time on analysis. For example, at the beginning of the year they may set up some reports to track their team or department’s new goals, and they analyze the results at the end of the year. They may also have a quarterly report they tweak every once in a while. But they’re not spending time every week or even every month immersed in the tool.

And that is going to bite them on the ass. Even if initially they can carve out enough time to figure out the bizarre commands needed to get something done, six months from now will they remember what they did and why? Not likely.

It’s not that these analysts don’t want to spend more time crunching data. They can see the potential of what they could do with the data they have if they only could spare the time. But it’s only a small slice of the work on their plate.

Ironically, if it were quicker & easier to do some work in data science, they might be able to muck around more frequently. Right now, it’s just not worth the hassle given all the other work on a typical part-time data analyst’s plate.

As AI technology improves, there will be even more part-time analysts struggling with this challenge. IBM, Microsoft, and hundreds of startups are trying to figure out how to automate as much as possible of the work involved in using machine learning and other complex techniques. The closer they get to putting these techniques in the hands of Excel power users, the more likely it is that the world of data science will include lots of people who flex their data science muscles only infrequently.

Most of data science is built around the implicit assumption that the people who do it will spend a big chunk of their working hours doing it. That assumption is understandable: in the world of coding, it’s largely true. But for data science to reach its full potential, it’s going to need to embrace users who don’t or can’t spend anywhere near that kind of time.

Beyond Boot Camps

Boot camps have become an increasingly popular way for folks in the community to get started in Data Science. It’s understandable why. Data Science can be pretty overwhelming at first, so getting a concentrated dose with lots of support can be invaluable.

I have a tremendous amount of respect for the people who make data science boot camps happen, and they have made a huge difference in the lives of some of the people who have gone through them. But I think we are at the point where we are hitting the limits of boot camps.

First, most boot camps cost more money than many people can afford. A number of programs aimed at increasing the diversity of Data Science offer scholarships for some or all of their participants. But given how much boot camps cost to run, they can only reach a limited number of people. As a model, it just doesn’t scale – and given how many data science jobs there are out there, that’s a serious problem.

Similarly, most boot camps take far more time than many people can afford. Again, boot camps that try to increase the diversity of data science work very hard to help folks overcome this barrier. But for single working parents and many other people, a model built on one very concentrated dose of learning over several months just isn’t going to work.

Finally, most boot camps simply can’t afford to provide real support once the boot camp is over. This is an issue for a lot of folks who go through boot camps, because no matter how dedicated the instructors are, many folks can only absorb so much info at one time. That’s a problem even if you’re just learning one programming language or skill. And when it comes to retaining even a basic mastery of the array of skills many data science jobs require, boot camps don’t offer a good answer.

So in addition to boot camps, we need another approach that can scale up. That’s why Data Chefs argues for creating a continuum of tools and smoothing the learning curve among them. Part of the reason we need boot camps is that learning these tools is way too hard. Many of these tools are open source, and the makers of the tools that aren’t are very interested in growing their markets. There is no reason why a movement couldn’t change the trajectory of these tools to make it far easier to get started and far easier to make progress.

Similarly, there’s no reason we couldn’t create a more robust, community-centered ecosystem around learning and using these tools so a much wider range of folks could get exposed to them, get their feet wet, and begin to make progress at a pace that their lives could handle.

But won’t this take a lot of work? Yes, it will. But so do boot camps.

Boot camps require a staggering amount of time and energy – one of the many reasons I have so much respect for the people who make them happen. For all the time and energy that go into boot camps, they can only reach a limited number of people. And for the most part, each boot camp – or school of boot camps – is an island unto itself. As a result, they never get the payoff of having many people across many communities working together towards a common goal.

So maybe it’s time to think about taking some of the considerable energy going into boot camps right now and use it to build a solution that can reach a lot more people.

Our Data, Ourselves

One of Data Chefs’ core assumptions is that there’s no reason data science can’t be accessible to a much wider audience. Some people think that’s crazy. Slicing and dicing data, making sense of data – it’s just too complicated for anyone other than an expert.

Back in the early 60s, that’s exactly how most folks thought about medicine. Nancy Miriam Hawley recalls an encounter she had with her OB/GYN:

Imagine me as a 23 year old professional young woman asking a question after the doctor (he) recommended that I use a new-to-market pill for birth control.  What’s in this pill? I ask.  His response: condescending pat on my head and literally said “don’t worry your pretty little head!”

Minus the head pat, that was pretty much the standard answer doctors were expected to give. They had years and years of intensive training. How could anyone — let alone a woman — be expected to have any real say in their treatment given that they couldn’t possibly understand medicine?

In 1969, Hawley and several other women who had met at a women’s conference decided it was time for a change.

We had all experienced similar feelings of frustration and anger toward specific doctors and the medical maze in general, and initially we wanted to do something about those doctors who were condescending, paternalistic, judgmental and noninformative. As we talked and shared our experiences with one another, we realized just how much we had to learn about our bodies. So we decided on a summer project: to research those topics which we felt were particularly pertinent to learning about our bodies, to discuss in the group what we had learned, then to write papers individually or in groups of two or three, and finally to present the results in the fall as a course for women on women and their bodies.

As we developed the course we realized more and more that we really were capable of collecting, understanding, and evaluating medical information. Together we evaluated our reading of books and journals, our talks with doctors and friends who were medical students. We found we could discuss, question, and argue with each other in a new spirit of cooperation rather than competition. We were equally struck by how important it was for us to be able to open up with one another and share our feelings about our bodies. The process of talking was as crucial as the facts themselves. Over time the facts and feelings melted together in ways that touched us very deeply, and that is reflected in the changing titles of the course and then the book, from “Women and Their Bodies” to “Women and Our Bodies” to, finally, “Our Bodies, Ourselves.”

Today, the idea that we couldn’t understand enough about medicine to have an informed opinion seems about as antiquated as using leeches. In fact, these days you can even get a degree in the art and science of making medical information accessible to the public.

And as complex as data science is, it’s not in the same league as medicine. To understand the human body, you need to understand biology, physics, chemistry, psychology, statistics, etc. In fact, medicine is so complex that even someone with years and years of training in one medical specialty isn’t qualified to have an expert opinion about another specialty.

So the next time someone talking about data science does the equivalent of patting you on the head, remember that the only reason they can get away with that crap is that we are just at the beginning of a movement that’s committed to doing in data science what those women did “about those doctors who were condescending, paternalistic, judgmental and noninformative.”

Why We Still Need to Worry About Diversity in Tech: the Sexist Idiot Edition

Sarah Drasner is an expert in the arcane, super geeky world of Scalable Vector Graphics (SVG) animation — basically one of the main ways to do  really cool interactive work, like data viz, on the web.  Parts of SVG animation can be mind-numbingly painful enough that it can make daytime drinking under your desk seem like a very reasonable response.  Drasner’s book, SVG Animation, which was published by O’Reilly, is hands down the best book on this subject.  And yet in 2017, she still has to put up with crap like this:

After my talk:

Guy: so who coded your demos?

Me: I did

G: so you used a GUI?

M: no I coded it

G: you code?

M: yes

G: no, like actual code

And as she tweets, this wasn’t a one off:

It’s like, every day now. Just cut it out.

please stop, this shit is exhausting

In case anyone is having trouble wrapping their head around why diversity in tech matters, this is why:  so there are enough women in tech that no guy would dare do this.

Data Viz Revision: Maeve’s Westworld Attribute Matrix

This post contains spoilers about “Westworld.”


In episode 6 of HBO’s wildly popular drama “Westworld,” viewers got a brief look at the “Attribute Matrix” of Maeve, one of the host androids featured in the show (h/t reddit):


The attribute matrix is a graph of the values assigned to each trait (on a scale from 0 (1?) to 20). The visualization itself is just a radar chart. I’ve reproduced a rough version below for better visibility:


Since I first got a glimpse of this back in episode 6, I’ve been thinking about better ways to visualize the Westworld hosts’ attributes. The biggest problem with using a radar chart is that there doesn’t seem to be any meaningful order or organization of the host attributes; the polygon carved out by the radar chart values is an arbitrary shape that could change drastically with a different attribute order.
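That arbitrariness is easy to demonstrate. Here’s a minimal Python sketch (the attribute values are made up for illustration, not taken from the show) that computes the area of the polygon a radar chart traces for a given ordering. Reordering the same four values changes the polygon’s area from 200 to 312.5 square units, even though the data hasn’t changed at all:

```python
import math

def radar_polygon_area(values):
    """Area of the polygon a radar chart traces for `values`,
    with the axes spaced at equal angles around the center."""
    n = len(values)
    points = [(v * math.cos(2 * math.pi * i / n),
               v * math.sin(2 * math.pi * i / n))
              for i, v in enumerate(values)]
    # Shoelace formula: area of a polygon from its vertex coordinates.
    return abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2)
                   in zip(points, points[1:] + points[:1]))) / 2

# Same four hypothetical attribute values, two different axis orderings:
print(radar_polygon_area([5, 20, 5, 20]))  # lows and highs alternating
print(radar_polygon_area([5, 5, 20, 20]))  # lows grouped together
```

A viewer comparing two hosts’ polygons would read that size difference as meaningful, when it’s purely a byproduct of how someone ordered the axes.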

Radar charts are sometimes used when comparing multiple attributes among different series of values.  In this example, the values of six different attributes are compared across several countries and the resulting polygons are laid on top of one another:


Kap Lab (via Scott Logic)

In this next example, the concept of small multiples is used to compare the 12 NBA players who made the 2013 All Star Game for the Eastern Conference based on how they rank in 11 statistical categories:


Rami Moghadam

In these two examples, the polygon shapes formed by connecting each series’ values make sense to compare in the context of the visualizations. Each chart compares multiple bundles of values on a common scale; here, those bundles are countries and NBA players, respectively.

But in the image from Westworld, only one host’s values – Maeve’s – are shown.  This removes the main advantage a radar chart has, namely, comparing multiple values across many series.

Given that, I decided on 4 potential revisions.


Option 1: Bar Chart


I decided to do a pretty standard bar chart for the first revision.  Gone is the unwieldy polygon of the radar chart; in its place is a series of bars, ranked from highest to lowest.  This allows the audience to more easily grasp the relationship among the attribute values.

I decided against the random order of the original attribute matrix or alphabetical ordering because they don’t really help when looking at a single host’s data.


Option 2: Bullet Chart


This is like the previous bar chart, only with an added series showing the maximum value of 20.  The benefit of this one is that for each attribute, you can see how far the value is from the maximum, so it gives the effect of a bar filling up.  I like this one.


Option 3: Lollipop Chart


This one is similar to the bar chart, only with thinner bars and a filled circle at the end.  The lollipop looks a bit cleaner to me, probably because the bars take up less space.

(h/t Stephanie Evergreen for the Excel tutorial).


Option 4: Table with Conditional Formatting


This final revision is just a table with the values ranked from highest value to lowest. I added shading created by conditional formatting based on the same ranking.  For that reason, the shading is redundant, but I like the look.



I think any one of these is preferable to the original radar chart.  Which one would you choose?  Is there another, more effective visualization that I’ve overlooked?

Data Viz Revision: O’Reilly 2016 Data Science Salary Survey (Part 3)

This post is part of a series based on the data displayed in O’Reilly’s 2016 Data Science Salary Survey. Using the Data Chefs Revision Organizer as a guide, we will rethink and revise some of the visualizations featured in the report.

In this visualization, the authors are trying to show the proportion of  survey respondents based on their location in specific regions of the world:


The blue circles do not depict the underlying data in this map, as they did in the visualizations from the first two posts in this series.  Instead, the blue bubbles here are merely a stylistic choice: they serve as pixels representing the world’s land mass. The numeric values are then laid on top of their corresponding regions.

It’s important to note that while all the categories are regional, the units vary. Sometimes they refer to countries (e.g., the United States, Canada), sometimes to entire continents (e.g., Africa, Asia), and sometimes to vague regional groupings (e.g., Latin America). Given the inconsistency in the data categories, it’s not surprising that the visualization is a little unclear too.

One of the problems with this visualization is that the values are represented as numbers, so the reader doesn’t immediately grasp the difference in magnitude between them.  If you move back a little bit or squint your eyes until you can’t quite read the exact values, there’s nothing that immediately distinguishes the highest value (United States) from the lowest (Africa). Both appear as white text taking up roughly the same amount of space on a blue grid.

As I considered how to revise this map, my first thought was to try to salvage the blue bubble theme by using blue bubbles sized based on the values and placed over a geographic map.  Here’s a mockup I did using carto:


And here’s one I did using PowerBI:


While you can immediately see the size difference in values on these revisions, this type of map still has the same issue as the original, namely, confusion caused by inconsistent geographic categories.  What countries constitute “Latin America,” for instance? If we assume that a number of the Caribbean island nations are part of Latin America, then it seems a little weird that the value is placed in the middle of South America.  To take another example, respondents from Iceland probably fall under Europe/non-UK, but there’s a disconnect (literally), because the value bubble is all the way in mainland Europe.

There’s also a secondary problem that arises from the limitations of the tools I used: PowerBI and carto. If you look at my examples, the bubbles are not sized consistently.  In both tools, it’s difficult to make bubble maps in which the circles are sized so that their area, rather than their diameter, reflects the underlying value.  For these reasons, I ruled out the bubble map.
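For anyone rolling their own bubble map, the fix is straightforward: scale each radius by the square root of its value, so the circle’s area (which is what the eye actually compares) is proportional to the value. Scaling the radius linearly makes area grow with the value squared, wildly exaggerating the biggest bubbles. A quick Python sketch, with made-up respondent counts:

```python
import math

def bubble_radii(values, max_radius=50.0):
    """Radii (in px) so each bubble's *area* is proportional to its value.

    The largest value gets max_radius; everything else is scaled by the
    square root of its ratio to the largest value.
    """
    biggest = max(values.values())
    return {region: max_radius * math.sqrt(v / biggest)
            for region, v in values.items()}

# Hypothetical respondent counts by region (not O'Reilly's actual data)
radii = bubble_radii({"United States": 640, "Europe": 160, "Asia": 40})
# → {"United States": 50.0, "Europe": 25.0, "Asia": 12.5}
```

Note that Europe has a quarter of the US count but half the radius; that’s exactly the area-proportional behavior you want.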

Next, I considered a part/whole visualization, like the ones in part 2. But given that there are eight distinct categories and some of the values are relatively small, I knew that there would be issues seeing the smaller values and their labels.

So, ultimately, I settled on this revision:





It’s just a simple bar chart, with values ranked from highest to lowest.  The benefit of using this simple graph, rather than the map, is that it eliminates the confusion caused by the inconsistent units of the regional categories. Because the chart no longer shows a map, we’re not distracted by exactly which countries belong to each region.

This may not be as visually appealing as the original, but, sometimes, the simplest solution is the best solution.

Data Viz Revision: O’Reilly 2016 Data Science Salary Survey (Part 2)

This post is part of a series based on the data displayed in O’Reilly’s 2016 Data Science Salary Survey. Using the Data Chefs Revision Organizer as a guide, we will rethink and revise some of the visualizations featured in the report.

In this post, I want to focus on the visualization for the share of survey respondents by self-reported age category:


Again, the authors used the arcing blue circle theme to depict the breakdown by age category.  On the plus side, the data labels are consistently placed, all falling along the bottom-right of each value circle (or the inside of the arc), and the order is intuitive: youngest to oldest. Also, the circles appear to be sized properly by area (as opposed to diameter).

Using circles is not necessarily a bad way to depict category data, but doing so has some limitations. The main drawback is that by using distinct circles, you lose the relation of each part to the whole.



For this data, I propose using a form of visualization in which the part/whole relationship is central: pie chart, donut chart, waffle chart, or stacked 100% bar chart, shown below:


The biggest downside to using these part/whole visualizations is that there isn’t a lot of room to label smaller values.  For that reason, I created a legend for all the values in each graph.

And, although this isn’t a problem with the visualization itself, if you pay attention to the values in the original, you’ll see that they add up to greater than 100%: 101%, to be exact. What probably happened is that more than one value was rounded up, giving the total an extra full percent.  In my revisions, I changed the value for the 41-50 category from 16% to 15% so that the values would sum to 100%. This was a completely arbitrary choice because I had no access to the raw data to know exactly how the values were rounded.
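If you do have the raw data, there’s a standard fix for this: the largest-remainder method, which rounds every value down and then hands the leftover percentage points to the values with the biggest fractional parts. A sketch in Python (the input percentages are invented for illustration, not O’Reilly’s actual figures):

```python
def round_to_100(shares):
    """Round percentages (summing to roughly 100) to whole numbers that
    sum to exactly 100, using the largest-remainder method."""
    floors = [int(s) for s in shares]
    leftover = 100 - sum(floors)
    # Give the leftover points to the values with the biggest fractional
    # parts, so the total distortion is as small as possible.
    by_fraction = sorted(range(len(shares)),
                         key=lambda i: shares[i] - floors[i],
                         reverse=True)
    for i in by_fraction[:leftover]:
        floors[i] += 1
    return floors

print(round_to_100([40.6, 30.2, 29.2]))  # [41, 30, 29]
```

Simple flooring would give 40 + 30 + 29 = 99; the method bumps up 40.6, the value closest to the next whole number, so the labels always sum to a tidy 100%.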

I think any one of these would work in place of the original.  Thoughts?




Data Viz Revision: O’Reilly 2016 Data Science Salary Survey (Part 1)

We will be posting a series based on the data displayed in O’Reilly’s 2016 Data Science Salary Survey. Using the Data Chefs Revision Organizer as a guide, we will rethink and revise some of the visualizations featured in the report.


I recently read O’Reilly’s 2016 Data Science Salary Survey (by John King & Roger Magoulas). People who worked in the field of Data Science answered questions about their job titles, age, salaries, tools, tasks, etc., and this report summarized the results.  I thought the report offered a pretty fascinating overview of the data science industry, and it’s definitely worth the read.

However, I was a little thrown off by the choices the authors made in visualizing the data.  Here is a selection of representative pages:


As you can see, King & Magoulas opted to use a series of blue circles to represent the data throughout the report.  While the circles provide a common visual theme, I don’t think they best represent this particular data.

One example is the visualization for tasks: work activities in which the data science survey respondents reported major engagement:


The values are displayed as circle areas, sorted from highest to lowest, starting from the bottom-left and curving clockwise around to the bottom-middle.  The relative sizes of the circle areas seem to be accurate, but notice the positioning of the labels on the circles.  From 69% down through 36%, the data and category labels are consistently positioned to the right of each circle.  From 32% on down, the data label placement starts to get inconsistent: left sometimes, right other times, based on space constraints.

This space constraint also forces the authors to alter the positioning of the value circles.  In order to fit the long text of the categories, the bottom right side of the arc had to be squashed. This gives the visualization an odd, bean-like shape.



The revision I’ve proposed, a horizontal bar chart, is a lot cleaner. The data labels are consistent: categories to the left of the bars, values to the right.  Also, the relative sizes of the bars are pretty clear. That’s not really the case with the circle values.


This bar chart may lack the novelty or the visual pop of the original, but I think it’s more appropriate for the data, and far easier to understand.

What do you think?