Chrissy Fleming

What can election night teach us about testing?

I often tell my students that once you have a product mindset, you start to see parallels to product management almost everywhere.


I may have gone too far.


While I--and millions of Americans and people around the world--await the results of our elections, I find myself reading a lot of predictions. Some people might find comfort in predictions about who is going to win, but the article that has given me the most comfort today is this Reuters article entitled "Don't be fooled by early U.S. vote counts: they might be misleading."


The article doesn't try to predict who will win or lose. Instead, it predicts how the data is likely to arrive based on election processing laws and practices in each state. I've summarized the key points in combination with an electoral map like so:



So what does this have to do with Product Management and testing?

Everything.


When I was interviewing for a Product Director job at Gilt, our Head of Product drew some lines on the board and said, "This is what the data looks like after we ran an A/B test. What can you tell from this?" It looked something like this:


We then discussed how the early data seemed erratic because there wasn't enough of it yet. It seemed as if the test variant was pulling ahead now that we had more consistent data, but it was still close enough that we likely needed more time to understand the full implications. It could still be that introducing a new pattern was giving a slight bump that may level out over time.

(I'd be interested to hear what YOU would read into this, too!)
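A quick way to build intuition for why early test data looks erratic is to simulate it. Here is a minimal sketch--the conversion rates are hypothetical, not data from any actual test--showing how the running rates for two close variants bounce around at small sample sizes and only settle down as traffic accumulates:

```python
import random

random.seed(7)

# Hypothetical example: two variants with true conversion rates of
# 10% (control) and 11% (test) -- close, like the lines on the board.
TRUE_RATES = {"control": 0.10, "test": 0.11}

counts = {name: {"visits": 0, "conversions": 0} for name in TRUE_RATES}

# Check the observed rates at a few cumulative sample sizes.
for target in [100, 1_000, 10_000, 100_000]:
    for name, rate in TRUE_RATES.items():
        while counts[name]["visits"] < target:
            counts[name]["visits"] += 1
            if random.random() < rate:
                counts[name]["conversions"] += 1
    rates = {n: c["conversions"] / c["visits"] for n, c in counts.items()}
    leader = max(rates, key=rates.get)
    print(f"n={target:>7}: control={rates['control']:.3f} "
          f"test={rates['test']:.3f} leader={leader}")
```

At the early checkpoints the "leader" can easily be the variant that is actually worse; by the later checkpoints the observed rates hug the true ones.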


And here we finally come to my point. Those squiggles on the white board in my interview and the early election data we will encounter have a lot in common. We have a stake in the game and want the data to point in a certain direction, and with every twist and turn of that early data, we could also find ourselves declaring a win or loss too soon. If we are more aware of our own biases and of how the data arrives on our desks, we are more capable of looking at it with clear eyes and a calm head.


And so, here are 3 lessons we can draw from our experience with the election vote and apply to our product practice.



Lesson 1: Understand where, when, and how you will get your data

In my more ignorant days as a young product manager, I may have left test design to someone else. I grew to understand how important it was to know how every test was run--how were users segmented? How were surveys sent? What system processes the data? How often is it refreshed? Am I getting the raw results, or are they being "filtered" or interpreted for me? How long will I likely have to run the test to reach a result that is significant and valid?
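On the segmentation question in particular, it helps to know whether assignment is deterministic. A common approach (this is an illustrative sketch, not any particular vendor's implementation) is to hash the user ID together with the experiment name, so a user always lands in the same variant across sessions and assignments stay independent across experiments:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "test")) -> str:
    """Deterministically assign a user to a variant.

    Hypothetical helper: hashing user_id together with the experiment
    name keeps a user's assignment stable across sessions while
    different experiments bucket users independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always gets the same answer for the same experiment:
print(assign_variant("user-42", "checkout-button-color"))
```

Knowing this detail tells you, for example, whether a returning user can "flip" variants mid-test and contaminate your numbers.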


Tonight, we will be comparing data sources with our friends: "this network is saying this, but THAT network says that." The underlying source--the electoral offices--is the same, but the data will be interpreted for us in many ways. Always scrutinize your data sources. If you know that a source isn't likely to come in for days, be skeptical of someone saying it's done.



Lesson 2: Try to predict how the data might look in advance...and how it may mislead you

Just as the article on electoral data does so beautifully, you, too, must spend some thought predicting what the data will look like so you're not swayed into jumping to conclusions. In simplest terms, you should be clear before you run any test what your baseline is and what it will take to declare another version a "winner." I have seen some teams wait until after they have some data to do this math, and it's a huge mistake because they are swayed by their interpretation and by the sunk cost fallacy (well, it's already out there, so...).
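Doing that math in advance mostly means asking: given our baseline and the smallest lift we'd care about, how many visitors does each variant need? A rough back-of-the-envelope sketch (the standard two-proportion power calculation, with the common 5% significance / 80% power values baked in--your team's thresholds may differ):

```python
from math import sqrt

def sample_size_per_arm(baseline: float, mde: float) -> int:
    """Rough visitors needed per variant to detect an absolute lift
    of `mde` over `baseline`, at 5% significance and 80% power.
    """
    z_alpha, z_beta = 1.96, 0.84  # two-sided alpha=0.05, power=0.80
    p1, p2 = baseline, baseline + mde
    p_bar = (p1 + p2) / 2
    n = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
         + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / mde ** 2
    return int(n) + 1

# e.g. a 10% baseline conversion rate, hoping to detect a 1-point lift:
print(sample_size_per_arm(0.10, 0.01))
```

Divide that number by your daily traffic per variant and you have a defensible answer, before the test starts, to "how long do we have to wait?"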


My favorite thing to do when I'm working with a team is to place bets. "How do you think this will go? Our baseline is X." Get everyone on the team to place a bet on what they think the result of the test will be, and agree in advance on what winning the test--and the bet!--looks like.


Either way, it's worth the time spent to think through what the data will look like, and how it might mislead you--perhaps you run the risk of over-sampling mobile vs. desktop, perhaps you are running a survey that has a leading question in it. The more you can think through this in advance, the more you can design against it.


Lesson 3: Sleep on it

Our election isn't going to be called in a few hours, and unless you have absolutely insane amounts of traffic, you will have to wait a while to get definitive results, too. You and your team should absolutely check once a test is live to make sure things are running smoothly and the data is coming in accurately, and then... LEAVE IT ALONE! Resist all urges to report out early or declare a winner, no matter how excited your stakeholders are for news. Communicate to everyone how long it will be before you have clear results.
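When the agreed-upon run time is finally up, the math for making the call is simple. A minimal sketch of a two-proportion z-test (the counts below are made up for illustration); the key discipline is running it once, at the pre-agreed sample size, because repeatedly peeking and re-testing inflates your false-positive rate:

```python
from math import sqrt, erf

def two_proportion_p_value(conv_a: int, n_a: int,
                           conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion
    rates, using the pooled two-proportion z-test.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical final counts: 10.0% vs. 11.0% conversion on 10k visitors each.
print(two_proportion_p_value(1_000, 10_000, 1_100, 10_000))
```

If the p-value clears the threshold you committed to in advance, you have your winner; if not, you report that, too.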


Once you've done your part and the results are in other people's hands, try to get some sleep. You'll have to deal with the results--whatever they are--tomorrow.

