June 16, 2025
In this episode, Michael Levitz and Robin Tully discuss the challenges of stagnation in email marketing, exploring how marketers often feel stuck with their strategies. They delve into real-world examples, the importance of data science, and the need for experimentation in topic selection and audience segmentation. The conversation emphasizes the significance of A/B testing, the dynamics of email frequency, and the necessity of continuous testing to drive engagement and improve results.
Michael Levitz (00:00)
Hello and welcome to episode five of Forecasting the Brief. I'm Michael Levitz.
Robin (00:05)
I'm Robin Tully.
Michael Levitz (00:06)
Today, we are going to talk about a phenomenon that we have noticed for a long time and something that we focus on pretty much every day, which is that email marketing can often feel stuck.
Robin (00:19)
Yeah, that is a thing we've heard from people, and a thing we've seen in previous work experience. Do you want to talk about what we actually mean by email marketing being stuck, and how people find themselves stuck? What does it look like?
Michael Levitz (00:34)
Something we experience frequently is we'll be talking to a customer or a prospect, and they'll say something to the effect of, I'm stuck. We have our list. No matter what we do, we get the same open rates, the same click rates. We've tried A/B testing. It doesn't give us any insight. It doesn't change the way we're doing things. And now we're just kind of stuck feeding the beast, sending the same types of emails over and over because we don't want things to go down. We don't want our numbers to go down, but we also don't know how to introduce anything new, and we don't know how to make the numbers go up.
Robin (01:10)
Yeah. What do you think are the kind of repercussions of being stuck?
Michael Levitz (01:13)
I think probably one of the most damaging ones is that people within the organization start to look at the email channel as a constant and not as something they can directly impact anymore. What happens is the contribution of email basically gets baked into monthly numbers, monthly KPIs. And the goal becomes a keep-the-trains-running kind of situation, as opposed to email being a dynamic channel that you can grow, that you can experiment in, that you can learn things about and bake that learning back into the product or the experience. And that's where we meet a lot of email marketers who feel behind the eight ball. They feel like they can't experiment, that they can't take risks, that what they have to do is just constantly feed the beast.
Robin (02:03)
Yeah, and that pushes email into a place where it suffers, where I think in a healthy system, email should be one of your more experimental things. It's one of the things you're most in control of. You have first-party control over it. You're in control of who sees it. You're in control of segmentation, all of these things. So it can be one of the best ways to test hypotheses you have about your audience and messaging and all of that. But if it just gets batted back into being a KPI in a monthly report, where else could you be gaining that level of dynamic insight into your audience?
Michael Levitz (02:37)
And as you and I have talked about before, it's generally your most passionate customers. So it's a place where you can, in a way, make mistakes, try things and fail, pressure test and see if you're leading your audience into a good place, if you're following them where they want to be, or if you're going in a direction they don't want to go and you need to pull back. These are often your highest-value customers, which also means you can't piss them off, but they are there. They're engaged with you and they're going to tell you what they really think. And it's a great place to understand how to keep the brand in sync with their interests. I think when email stops being that and starts being this thing of play it safe and play the hits all the time, it loses its resonance, and it also loses its meaning in the organization.
Robin (03:29)
Totally.
So let's talk a bit about the structure of being stuck. What does this look like? What does this mean? What are some of the costs of it?
So Michael, you've been doing email marketing for a long time. Can you tell me some of the war stories of email marketing being stuck?
Michael Levitz (03:46)
I think one of the most classic ones is not just an email story but an overarching brand story, from what was called back then the diaper wars. Back then you basically had just Pampers against Huggies, and a bunch of these new entrants hadn't come in yet. And Pampers had accidentally pigeonholed themselves into this role of presenting parenthood as this absolutely perfect experience where nothing ever went wrong. Parents never got divorced, all that kind of stuff. It was just always perfect. You always loved it. And Huggies came along and was like, you know, parenting is awesome, but it's also crazy. People are overtired and they say things they don't mean to the people they love. Sometimes poop accidentally hits the wall and all these crazy things happen. You're changing a diaper and you get peed on. They just talked about the things that every single parent knows to be true, and people loved it. I was changing diapers at that time.
I was actually also working on the Pampers email CRM stream, and I was like, man, how did they beat us to this? They got a fantastic reaction from parents who wanted to see their reality reflected in advertising and in the brands they were buying from. And they stole actual market share. You could see that their percent of share went up directly as they started telling these real stories and getting people to tell their parenting stories in social. That was very visceral for me, because here I was, participating in the Pampers CRM stream every single day, and we were stuck playing it really safe, defending brand equity, while Huggies was out there saying, hey, what are the things nobody's talking about that everybody knows are real? Let's talk about that stuff.
Robin (05:44)
Yeah, it's interesting. For one thing, it's fun to think about there being a bad boy of marketing in the diaper wars. But I think it's interesting to hear about the difference between dynamic campaigns that grow with the audience and try to anticipate what's going on, versus this static brand representation of, don't you know who we are, we will always be this. I think for a lot of people receiving marketing, it is nice to see the brand moving, the brand growing. As we mentioned earlier, with email being such a primary interaction you have with the audience, it is one of the best channels to anchor what your positioning and voice are and what you're growing towards.
Michael Levitz (06:26)
And the one thing I'll mention is, another situation where you get stuck is you take a risk and it doesn't work, and it becomes visible in the organization. Maybe the revenue from that email or that month is down, or open rates are down, and it gets presented in a monthly report. And you just don't want to do that again. There's no reason to put yourself in the doghouse unnecessarily, and then you just stop taking risks. The incentive structure is not aligned to that kind of risk-taking. And as a result, you get in this mode of playing the hits over and over and running into diminishing returns.
Robin (07:08)
I think it's interesting to use the word risk there, because if we're talking about email and marketing as this game theory thing of opponents and information, you always need to be taking risk. You always need to be taking actions that are intended to have gains over time. So fundamentally, if you are stuck and you are static and you are in an oppositional game with competitors, you can't really anticipate that they will just be static too. Sure, there is always risk, but in order to make sure your boat is floating better than their boats, you do need to have some tolerance for risk.
Michael Levitz (07:42)
So one thing we wanted to talk about today is, I think everybody working in email knows the feeling of being stuck. It's this feeling of, I send a little bit better email, I send a little bit worse email, I put more time into it, less time into it, and it's basically the same engagement rates. And it starts to feel like there's no cause and effect. Like, I can't move the needle. I have this audience, that's fantastic, they're with me, but I don't directly impact whether they're engaging with my stuff or not. Can you talk a little bit about how we unpack, from a data science perspective, what's really happening when we say stuck?
Robin (08:20)
Yeah, well, I think there are parallels between what we're seeing in that flow of the writing and what we see in data science modeling. To give a lightning tour of what modeling in data science is: you have some data, and you have some parameters that you're in control of, and there's a thing called a loss function that you're trying to optimize. It's just: if I tweak these parameters and I plug in this data, this number will change. There are all these other terms like gradient descent and backpropagation, but that's a simple explanation of it.
And you as the marketer have different things that are in your control. Some of them are a little more direct and some of them are a little more indirect, but you have the content that you're writing, the topic that you're discussing, the subject line, the visuals of the email, all these different things that you have access to. And you can tweak those a little bit. But one of the things that becomes interesting in data science is this notion of a local minimum. If you're not willing to change the parameters far enough, you can be optimizing within a small corner of the total solution space. You can be saying, all right, I tweak the color of the Jumbotron image in my email and I go from X percent conversion rate to X plus 0.1 percent conversion rate, and then you tweak it again and you go back the other way. But ultimately you're just stuck in this little pit. You don't have the resources or the knowledge or the ability to hop out of that space.
And one of the things that you actually do in data science is arbitrarily nudge the parameters you're dealing with to try to knock the model out of these local minima. Just say, all right, you've explored this space for a while, how about you hop over here and look at what's over here. In the email marketing space, this could be topic selection. The example we gave before was this notion of Accessory Tuesday: every Tuesday, you're writing a campaign about fashion accessories. Well, what if you did a different thing on Tuesday? What are the gains of that? All of these actions you're taking are meant to gain information over time about what the ultimate payout of any of these options is, and which of the things you're in control of matter and which don't.
So in a lot of ways, the way to get unstuck is to explore a larger swath of the field of options that you have and then reiterate and reprocess.
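To make that local-minimum picture concrete, here is a minimal Python sketch. The payout curve, step sizes, and every number are invented for illustration, not a real model: a greedy optimizer that only makes small tweaks settles on the nearby peak, while random restarts, the arbitrary nudge Robin describes, let it find the better one.

```python
import random

# Toy "payout" landscape: a small peak near x=2 and a bigger one near x=8,
# separated by a flat dead zone. All values are invented for illustration.
def payout(x):
    near = 0.4 * max(0.0, 1 - (x - 2) ** 2)       # local peak, payout 0.4
    far = 1.0 * max(0.0, 1 - ((x - 8) / 2) ** 2)  # global peak, payout 1.0
    return near + far

def hill_climb(x, step=0.1, iters=300):
    """Greedy small tweaks: accept a change only if payout improves."""
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if payout(candidate) > payout(x):
            x = candidate
    return x

random.seed(0)
stuck = hill_climb(2.0)  # small tweaks alone never cross the dead zone
print(f"small tweaks only: x={stuck:.2f}, payout={payout(stuck):.2f}")

# The "nudge": restart from random points across the whole space,
# then keep whichever peak pays out best.
best = max((hill_climb(random.uniform(0, 10)) for _ in range(15)), key=payout)
print(f"with random restarts: x={best:.2f}, payout={payout(best):.2f}")
```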
Michael Levitz (10:46)
You named several key dials that we have control over as email marketers. Why don't we talk through a few of those and see how we can apply this to those specific areas, maybe starting with topic selection.
So let's take this Accessory Tuesday. How am I deciding, and how should I think about, what I'm going to talk about in that particular campaign?
Robin (11:15)
Well, I think the answer I have to a lot of that is: what is your audience desiring? How do you gauge what the audience is desiring, and how do you get closer and closer to that? How can you bucket out these different aspects of engagement with the audience? Sure, if you have Accessory Tuesday, what if you had Shoes Tuesday, whatever the example is? How do you get signal about what these other options are worth? So a lot of it is: can you start reaping all the signals you have from the audience and figure out which of these are testable and unique and measurably different from what you would otherwise have been doing?
The other thing I think about with topic selection specifically, when we're talking about optimization, is that you have this multi-step process: first you pick a topic, then you pick the copy, then you pick the execution of that. Every one of those steps impacts the final payout of the campaign, and every one of those steps inherits the compound successes and losses of the previous step. So maybe if we have topic A, the total realm of possibility for the success of topic A is between 2% and 8%. If we start talking about the implementation of topic A into campaign A, we're now playing inside that 2 to 8% subset. And then if we're talking about the layout of the final messaging, we're playing inside whatever the compound success or failure of the previous steps left us. So topic, in my mind, is one of these very broad things that is ultimately going to be very determinative of the final success rate of the campaign. And by exploring different topics, you have a larger realm of gains to be acquired than if you were just tweaking the final step of the system.
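A rough back-of-the-envelope version of that compounding; every number here is invented to illustrate the shape of the argument, not a benchmark:

```python
# Illustrative compounding of funnel steps; none of these numbers are real.
topic = (0.02, 0.08)    # topic choice sets the outer bounds: 2% to 8%
copy = (0.60, 1.00)     # copy realizes 60-100% of whatever the topic allows
layout = (0.85, 1.00)   # layout realizes 85-100% of whatever the copy allows

lo = topic[0] * copy[0] * layout[0]
hi = topic[1] * copy[1] * layout[1]
print(f"final campaign range: {lo:.2%} to {hi:.2%}")  # about 1.02% to 8.00%

# Swing each lever on its own, holding the others at their best:
topic_swing = (topic[1] - topic[0]) * copy[1] * layout[1]    # about 6 points
layout_swing = topic[1] * copy[1] * (layout[1] - layout[0])  # about 1.2 points
print(f"topic swing: {topic_swing:.2%}, layout swing: {layout_swing:.2%}")
```

Under these made-up ranges, exploring topics moves you across roughly a six-point band while perfecting layout moves you across about one point, which is the asymmetry Robin is pointing at.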
Michael Levitz (13:16)
I think this is one of the things I do not do as an email marketer, and I think a lot of people don't, which is, before committing to a specific topic or even a specific execution of that topic, laying out a diverse set of options that could be activated. So you mentioned before having a broader spectrum, not playing within this narrow field of focus. I definitely fall into that narrow field. I know what works, quote unquote, I know what the audience likes, so I'm starting from that pinhole, versus starting by looking at, hey, conditions are changing all the time and there's a lot of stuff I'm not aware of. I don't first look at all of Reddit and Google search trends and all the things that I know I should do, because I just don't have time. So I start from that narrow set. How can you start by taking in the full breadth of inputs and then laying out a larger spectrum of options?
Robin (14:24)
Right. So there isn't some golden rule of, hey, you should spend X amount of time and come up with 30 different topics, and they should be of this granularity, and you should execute from there. All of this idea that we have of game theory and marketing and thought leadership is meant to be an iterative, hypothesis-driven process over time. So I wouldn't recommend you just sit there and try to perfect what this first step looks like. The word of the day, the word of the era, is priors. A lot of this is: look at the priors that you have right now, which is just the beliefs you presently hold about the levers you have for marketing, and try to go one step broader, and then go one step broader than that.
We can come up with what these different signals are. We can say, all right, social media is a good signal, and competitors are a good signal, and SEO volume is a good signal, and we're seeing more and more of this, I don't know if there's a term of art for it yet, but the AI SEO, what will make your thing appear in the Gemini answer at the top of Google, right? We can think of all these things as signals and ways to understand what the audience is seeing. But we shouldn't start by saying, hey, here's the perfect knowledge we have about the entire space, because we will never have that. We are only ever sampling from the actions that we can take and the information to be gained. So yeah, there isn't some golden rule that you need to do 20 hours of social media research and look at these exact metrics. It really is: what steps can you take now that allow you to acquire information sooner, that will then unlock more doors? It's just iterative gains.
So I guess that's the preface to the answer. And then there are things we can look at, and we've mentioned some of these, social media, news, all these different signals that are a good starting point to triage what possible topics are in your playbook. But I really just wanted to highlight that the best way to be data-driven with any of this is to take the amount of data-driven that you are today and add one to it, and then add one to it tomorrow, and do that forever, rather than over-optimize what the entire system will be.
Michael Levitz (16:37)
Maybe one tactical thing people can do: you start with a campaign idea, and these are often done under pressure. You have a lot of other things to do, and you need to get this one thing done while everything else is piling up, so there is an opportunity cost to spending more time on this. Let's say you're doing this Tuesday email and you have a gut instinct on what you would normally write. What if you then came up with one idea to the left of that, like far to the left, and one idea far to the right? You've broadened that spectrum of topic options and can make your selection based on having this more diverse set of things you could talk about.
Robin (17:22)
Yeah, I think that's a great first step.
Michael Levitz (17:25)
So that could be an interesting idea. Come up with the thing that you would normally do, and then come up with a crazy-but-viable idea to the left and a crazy-but-viable idea to the right. Maybe one is based on something you've seen some critical mass of people talking about on social, maybe one is based on something picking up steam in industry press or the news or some kind of cultural phenomenon.
And then, I guess, that could lead us into why people often feel disappointed with A/B testing. If we have this diverse set of three options, where normally I would be starting with just one and executing, I think often a key problem with A/B testing is you're testing things that are just too similar. You know: I can't decide, I have two variations on the subject line, and I can't decide myself, so I'm just going to A/B test that. Or I have the red button and the green button, the classic thing. Can you talk a little bit about how to look at A/B testing in a different way, a less frustrating way?
Robin (18:29)
Right. Well, one of the things that ties into how frustrating it can be is the cost of A/B testing at the level you're talking about, where for a lot of people A/B testing is just going to be: we will send A to 50% of our full list, and we will send B to the other 50%, and we are then committed to the results of that. A large number of people will see option A.
So there are multiple things I would say, but one of them is, in this idea of start with one idea and then go far to the left and far to the right: are there things you can do that start to vet the validity of those ideas before they are sent out to everybody? This can be simple things. This can be the rubber duck programming kind of idea: if you have a team member and you tell them about options A, B, and C, what is their reaction? Are you seeing external signal that option A, B, or C matters? Fundamentally, you can start getting some of this information pretty early on about the validity of these ideas, and you can do that at a cheaper cost, a cheaper expense, than a full email blast.
So that's the first portion. The second thing is to think about what the actual methodology of your A/B test is, and do a canary test, where an email is sent to 3% of your total audience, and if it doesn't hit some threshold, you don't send it to the rest, right? That can be a lower-cost way to start testing these things. You have some kind of budget of experimentation, and you don't need to spend that entire budget on A/B testing the color of the sign-up button. You can do more, smaller experiments that let you gain information about more of the space, rather than spending your entire experimentation currency on just the color of the button.
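A minimal sketch of that canary gate, assuming a made-up stand-in for the ESP send; the helper function, the 3% slice, and the open-rate threshold are all illustrative, not a real API:

```python
import random

def send_and_count_opens(recipients, variant):
    """Stand-in for your ESP: 'sends' the variant and returns an open count.
    Simulated here so the sketch runs end to end; not a real API."""
    simulated_rate = {"A": 0.22, "B": 0.14}.get(variant, 0.10)
    return sum(random.random() < simulated_rate for _ in recipients)

def canary_send(audience, variant, canary_share=0.03, min_open_rate=0.18):
    """Send to a small random slice first; roll out only if it clears the bar."""
    audience = list(audience)
    random.shuffle(audience)
    cut = max(1, int(len(audience) * canary_share))
    canary, rest = audience[:cut], audience[cut:]
    open_rate = send_and_count_opens(canary, variant) / len(canary)
    if open_rate >= min_open_rate:
        send_and_count_opens(rest, variant)  # roll out to the remainder
        return f"{variant}: canary {open_rate:.1%}, rolled out to {len(rest)}"
    return f"{variant}: canary {open_rate:.1%}, held back"  # budget saved

random.seed(7)
audience = [f"user{i}@example.com" for i in range(10_000)]
print(canary_send(audience, "A"))
print(canary_send(audience, "B"))
```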
Michael Levitz (20:23)
I have a theory, not grounded in anything other than a focus-group-of-one experience, that I want to bounce off you here. When I try to experiment with new things, I'll do a test. Maybe I don't even do an A/B test; I just make this email significantly different from the previous one. Or I do an A/B test with a variant that's significantly different. And that's the end of the test: not a lot happens, and then I just revert back to the mean, business as usual.
So something I wanted to bounce off you is, I think in email, you can't look at one email to decide whether a new approach is valid, whether your followers are going to enjoy it. There needs to be some level of repetition. So let's say you're going to take that crazy idea on the right, and you're going to test it out. Maybe, like you said, you test it with a small percentage of the audience, but not in just one blast. I'm going to say it has to be something like three to four sends before you can really tell if it's having an impact, good or bad. What do you think about that?
Robin (21:36)
Yeah, the way I view that is in terms of the variance of what you're trying to predict. There's always going to be some amount of uncertainty, and the statistical term here is R squared: how much of the total variance of the thing you're predicting is captured by what you're looking at. You don't know if your audience segment is sick that day. You don't know if some news cycle will push people away from email that day. So there are a lot of things you don't have control over, and a smaller subset of things you do have control over, and you want to be sampling and learning about what you do control.
But it won't be just this finite test where I A/B tested the color of the button one time and it had a 3% higher open rate, and thus I am forever certain that this color of the button is the best, right? This is fundamental to multi-armed bandits: you're always learning about the space, always re-indexing, recalculating the payout of these things. Once again, I would say you want to come up with low-cost experiments that keep you engaged in different aspects of the space you control, and you want to keep running new things and keep learning over time.
This ties in a little to what I was saying earlier about the compound loss of emails: good-or-bad topic, plus good-or-bad execution, plus good-or-bad styling, is ultimately going to determine the success of this email. If I am learning the A/B-tested result of good or bad style, it's contextually dependent on the email also having a good topic and all these other prior elements. So you want to be testing things at the start of those funnels and seeing how they play through. But you don't want this implicit, I have learned one thing and it is a static truth of the world that hex color blah blah blah is the best for my Jumbotron button.
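Here's a tiny epsilon-greedy sketch of that multi-armed-bandit posture. Topic names, click rates, and the epsilon value are all invented; the "true" rates exist only inside the simulation. Most sends exploit the current best estimate, but a slice always keeps re-measuring every arm, so nothing is ever locked in as a forever answer:

```python
import random

arms = {"accessories": [0, 0], "shoes": [0, 0], "care-guide": [0, 0]}  # [clicks, sends]
TRUE_CTR = {"accessories": 0.04, "shoes": 0.06, "care-guide": 0.05}    # unknown in reality

def estimate(arm):
    clicks, sends = arms[arm]
    return clicks / sends if sends else float("inf")  # untried arms look promising

def choose(epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(list(arms))  # explore: keep re-measuring everything
    return max(arms, key=estimate)        # exploit the current best estimate

random.seed(3)
for _ in range(20_000):  # each loop simulates one send
    arm = choose()
    arms[arm][0] += random.random() < TRUE_CTR[arm]
    arms[arm][1] += 1

for arm, (clicks, sends) in arms.items():
    print(f"{arm}: estimated CTR {clicks / sends:.3f} over {sends} sends")
```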
Michael Levitz (23:43)
That's a great point. I think often in email, I am trying to quickly find the answer, like the forever answer. I'm going to test these two things, and now the thing I learned is forever: the button shall always be red, the headline will always be one word, whatever the thing is. I want to just lock it in and have that become a constant. So it's interesting to start thinking about A/B testing differently, as this always-on experimentation layer that acknowledges, hey, there's a lot we don't control that's changing all the time in the variables of whether an email is successful or not, and we need to constantly have our antenna up, or else we'll get left behind.
Even as we're talking about this, I'm getting a little stressed out, because I feel like I'm now making more work for myself that I don't have time for. And I know most email marketers find themselves very strapped for time and behind the eight ball. When we talk about doing one A/B test, I can get myself psyched up to do that. When we're saying, okay, now one isn't enough, you always need to be doing A/B testing, or you need to test this thing four times before you can decide if it's settled, now I'm kind of unplugging. I'm like, I'm not signing up for that. I kind of wanted to know, but maybe now I don't want to know. So this sounds really interesting, but there are also the realities of, I'm super busy. How do I embrace these things without killing myself?
Robin (25:16)
I think for me, there are two sides of that coin. Number one, I do think some portion of this is automatable. We are working on a platform that will help you determine which topics are testable and will do some of that research for you. So I do think there are solutions that can help you figure out some of this without the high cost of doing all that research yourself.
And then, there's always a cost, right? Even if you stay stuck in the same playbook as before, you still pay that cost, through the blinking cursor problem we've been talking about. You might not think you have the time or effort to do something new, but by not having an idea of a new thing to do, you're still stuck on, how do I re-implement the same previous thing again? So you are always paying the cost of experimentation. It's just whether you want to pay the same cost for the same results you've seen before, or whether you're willing to pay an equivalent cost and learn something new.
There's always a cost to coming up with the idea for the campaign. That cost can be paid by replaying a campaign idea you've done in the past and having the stress of, how do I write this in a novel way? Is this still contemporary? Is my audience going to be tired of this messaging? That's the confidence payment: you will be sitting there staring at the blinking cursor for longer, not knowing how to proceed. Or that cost can be paid a little more actively, where you say, okay, I'm willing to experiment with something new, and I don't have the blinking cursor problem because I have an idea of a new space I want to test, and I can just play in that new space and try the new thing. So the two sides of this cost are the pain of not knowing what to act on, or the excitement of being able to act on something new.
If that cost is being paid either way, I think the payout of testing something new is going to be greater than the payout of just replaying the hypothesis you already have. After X amount of time on Accessory Tuesday, you have a pretty good prior about the conversion rate you'll get from your Accessory Tuesday email. If that's sufficient for all relevant parties, so be it. But for most parties you want improvement, and you as the marketer will be more motivated if you're trying new things, not just pushing the same accessories every Tuesday. When you're paying that cost either way, it is more motivating, and potentially the payout will be higher, to spend that cost gaining new insights about the possible successes of your email, about your audience, about potential tactics and topics.
Michael Levitz (28:02)
So I think the next big area for us to talk about from a game theory perspective is audience segmentation. I think that's the next place where you probably have the most powerful dial to turn as an email marketer. You've got your topics, you've got your A/B testing, and the third tier is: am I sending this to everybody, or am I sending this to a specific audience, and what do I know about what that audience is interested in right now that's going to help me connect with them?
Robin (28:32)
Yeah. Well, you've seen a lot of positions out there about the value of audience segmentation, the cost of more segments versus fewer segments, and the potential gains. So do you want to share a little about your insights on the gains to be had and the costs of audience segmentation?
Michael Levitz (28:50)
Yeah, I think the first important thing from the email marketer side of things is that audience segmentation creates more work. So yes, if I segment to the moon, I can probably increase engagement, but that's not possible without building some massive newsroom of email marketers that we'll never have. So I think it's important to, A, factor in time spent when we're deciding on how many segments we're going to go after, and ideally come up with the fewest number of segments that have the highest impact. Step one on audience segmentation is keep it as tight as possible so that you have time to do it really well.
Then step two is this interesting dynamic with some of the behind-the-scenes mechanics of how email deliverability works, how your emails actually get to someone's inbox, promotions tab, et cetera. There is a benefit to brands in not always sending to their full list; you are rewarded for segmenting. A lot of brands do only batch-and-blast, especially people that are newer to email, and you're getting penalized for that. Just by introducing segmented emails, by that factor alone, your whole email program will benefit and your deliverability will go up, which basically means your engagement rates should see benefits. That's a behind-the-scenes thing you don't see, but it's happening as you segment.
And then the third piece is you can really start to move the needle. Once you segment, you can find some outsize gains there. What's interesting is that segments aren't always pure audience segments in the way we traditionally think of them, men that are 35 to 40, or women that are 21 to 25. It can also be things like people who've bought three things in the last 90 days, or people who haven't engaged with you in the last 180 days, et cetera. So we recommend having a mix of audience segmentation that's partly based on, let's say, demographics and partly based on behavior.
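Those behavioral definitions are computable straight off data you already own. A minimal sketch, with invented field names, dates, and thresholds:

```python
from datetime import date, timedelta

# Hypothetical first-party contact records; field names are invented.
contacts = [
    {"email": "a@example.com", "orders_90d": 3, "last_open": date(2025, 6, 1)},
    {"email": "b@example.com", "orders_90d": 0, "last_open": date(2024, 10, 2)},
    {"email": "c@example.com", "orders_90d": 1, "last_open": date(2025, 5, 20)},
]
today = date(2025, 6, 16)

# Recompute membership at send time, so segments stay dynamic rather than
# permanently labeling anyone.
frequent_buyers = [c for c in contacts if c["orders_90d"] >= 3]
lapsed = [c for c in contacts if today - c["last_open"] > timedelta(days=180)]

print("frequent buyers:", [c["email"] for c in frequent_buyers])
print("lapsed 180d+:", [c["email"] for c in lapsed])
```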
Robin (31:13)
I think the behavior segmentation is very interesting because it is first-party stuff that you conclusively know, that you can test, and that people cleanly fall into. The demographic stuff, you can do it, but I would say there is always an increased cost to doing it. It is hard to solve some of these problems, like named entity reconciliation: is this John Smith the same as that John Smith, and what do I know about these John Smiths? It's kind of an unsolved problem. It is a hard thing to do. Maybe you have some identifier for this person, and you can send it to some third-party service and get some contextualizing information back, but that is hard too. How much do you trust the service to know that this IP address is John Smith with these demographic traits?
It's useful to think of that stuff as signals and testable hypotheses, elements that play into this whole decision chain we're talking about. But I would shy away from the pure, I am entirely confident that my list cleanly falls into these segments because of these demographic traits. That's just hard. It should be in balance with behavior-driven segmentation.
Michael Levitz (32:34)
So then, thinking about applying game theory to segmentation, what's the first step there? What's the 101 of doing a test?
Robin (32:46)
Well, I think it starts with that behavior-driven segmentation, because that is the thing you are most purely able to observe. You have a legitimate signal that this number of people purchased a product in the last 90 days. You have a clean signal that this number of people opened this email. So you have the ability to say: this segment of people that opened my newsletter and bought a product in the last 30 days are my loyal customers, and thus they can have a higher capacity for receiving emails, and maybe they want a deeper brand narrative now, because they're true fans, rather than just wanting the coupon and discount code. Whatever test you want to make on that group of people is fine, but when you have behavior-driven segmentation, it is easier to test because you have clear signals.
One of the gains of that, too, is that it's more fluid about who is in what segment. You don't have to conclude, John Smith is always segment A; it's dynamic which audience member falls into which bucket. Then you have the full signal of, okay, audience segment A, with these behavioral traits, has this payout when I send this type of email.
That is a good starting point. It gives you testable stuff. You have less work to do, because you'll have fewer buckets, and you're in direct control: these are the signals, these are the behaviors most associated with the customers I have and with what's paying out. You don't have to do the, well, I believe 18-to-25-year-old men are my core audience, but I also need to write an email for 65-plus. Just test the behaviors that are working, iterate from there, expand from there, rather than have the full permutation tree of, I need one little campaign to test every permutation of all these segments I've created but don't really know are true or valuable.
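One way to keep that "segment A has this payout for this type of email" signal honest is just to tally results per pairing as sends happen. A sketch with invented segment names and numbers:

```python
from collections import defaultdict

# (segment, email_type) -> [conversions, sends]; every send updates the belief.
payouts = defaultdict(lambda: [0, 0])

def record(segment, email_type, conversions, sends):
    payouts[(segment, email_type)][0] += conversions
    payouts[(segment, email_type)][1] += sends

# Invented results from three hypothetical sends:
record("recent_buyers", "brand_story", 42, 1000)
record("recent_buyers", "discount", 35, 1000)
record("lapsed_180d", "discount", 12, 1000)

for (segment, email_type), (conv, sends) in sorted(payouts.items()):
    print(f"{segment} x {email_type}: {conv / sends:.1%} over {sends} sends")
```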
Michael Levitz (34:50)
Yes. When we launched one of the Samsung phones, we had a huge map of segments and tried to create content against each of them, and then segment people based on data we were getting from the advertising platform. We just took on too much, and it became very watered down. What was interesting was that a couple of the segments ended up revealing themselves to us as more powerful than the rest. The one I remember most, and this was not really a recognized thing at the time, was taking pictures of food at a restaurant. We were just like, there are people that are really enjoying this taking-pictures-of-their-food thing, why don't we double down on that?
And we ended up going with that, and with a segment around fitness as well. Those two ended up having an awesome impact for the campaign, whereas originally they had just been two cells on a spreadsheet, equal to the, I think it was about a hundred, other ones. So it's interesting: had we not done that work, those never would have made it onto our radar. But then as we executed the campaign, we ended up dropping the vast majority of them, because just because they existed didn't mean people really wanted content around some esoteric thing we'd identified.
Robin (36:11)
Well, this is kind of a nerdy diatribe, but one of the funny things that happens on a political campaign is that you have so much data you have access to. You have the voter file from your party, and you're buying a lot of data, just these huge lists of people. You start looking at all these fields in the data, and something will say, is this person a high income earner, is this person a luxury goods purchaser. Then you start looking at the correlation between these things across all these different data sets, and you get this funny thing where everyone is just buying data from each other, and you're modeling things off of what people have modeled off of the data you already have access to. You can get overwhelmed, drowning in the question of what the grounded truth of any of these things is. And statistically it becomes a problem when all the variables you're plugging in are perfectly correlated with each other, because a system can't really do anything with that.
So there is that funny thing too: once you start buying too much data, gathering too much data, associating too much stuff, you lose all meaning, because it becomes too hard to determine what the ground truth is. With behavior-driven stuff, you are the sole owner of the ground truth.
Michael Levitz (37:36)
And I think that brings us to our last point, which you touched on for a minute: frequency. Frequency can be a very contentious thing. It can feel like a dangerous thing to a lot of brands. They absolutely don't want to annoy their most engaged customers, and they don't want to damage their brand. I think where a lot of brands end up is not sending enough email, scared of testing their frequency dial. Is there a way you would recommend approaching testing to hone in on what the right frequency could be?
Robin (38:12)
Well, I think a lot of people are constrained by the environment they're in. If you are currently sending one email a month, it is hard to just say, okay, we're going to start sending five emails next week. I would not recommend diving into the deep end. I think one of the tests you can run is occasionally trying a different frequency cadence and seeing how that pays out.
But the other element here is that when we're talking about gaining this information and running these hypotheses, the evaluation doesn't always need to be, I sent an email to these people. You have these other, lower-cost tests. You can look at any of these signals. You can do the rubber duck testing. You can send something to a limited audience segment. You can do a focus group. There are many different things you can do that all help you acquire the hypothesis information without spending the currency of how often you can email your audience. Yes, the audience email is ultimately the test that matters most, but every decision along the way is something you're in control of, with baked-in priors and hypotheses that are worth testing.
Michael Levitz (39:28)
We had Dela Quist on this podcast a couple of weeks ago, and he writes a blog called The Frequency File. The more I have looked at this issue, the more I give him credit for championing this before it was popular, and I think it still is less popular than it should be. His point is: look, you can email a lot more than you think you can, and the returns are great for doing it. Obviously there's a time cost, but the returns are far greater. So stop being afraid, and stop dinging yourself for needing to send more in order to get more returns. Even us, when we started working together, we noticed that as we sent more email, we got better results. We increased sales, we did all kinds of stuff, and then we kind of discredited it. We were like, well, but we had to send more email to do that, how is that impressive? And it is impressive. It's not easy. And it's just a part of the channel: many people are not paying attention to every single email you're sending. You can't assume those are being read. So you have to strike that balance where you're not bothering people, but you're also showing up in the moments that matter.
That can be one of the hardest conversations to have, because you just see fear. The organization can have a very strong, visceral, common-sense reaction to these things: no, we don't want to bother people, we don't want to send more email. One of the best ways to help that conversation is to actually do some counts of what competitors are doing, what brands you like are doing out there. People are often surprised to see, whoa, they're emailing every other day, and I love this brand. I've been getting those emails and didn't even notice, because I like them.
All right, so in conclusion: email can often feel stuck, and what we're advocating here is to shake it up a bit, to get unstuck by looking at a much wider set of possibilities and giving yourself the freedom to test some of them. And Robin, you made a great point that not every test needs to be an email. I hadn't even thought about that. I'm just such a hammer looking for a nail. I haven't just run some crazy ideas by you, or by the people who, in a non-virtual, physical world, would be sitting next to me. From your perspective, what are the immediate next steps someone could take to get unstuck?
Robin (41:59)
I would say just what you said earlier: take a pen and paper and write down what you think your current process is. Then on that same pen and paper, write down on the left some idea that swings in one direction, and on the right some idea that swings in the other direction. Put these into the world and then look at the paper. You will have a different reaction, I would say, just from looking at these ideas on paper and writing down: are there signals that support either of these options? Are there ways I can test either of these options? That is a low-cost way to go forth and experiment.
And then, once it's down there, you can do the rubber duck testing, which is literally just voicing it to anybody, or anything. In the actual programming example, you're talking to a literal rubber duck on your desk, and as silly as that sounds, explaining your idea at a low level, even if it's to nobody, helps you gain understanding of what it is and what its strengths are. Both in writing it down and in talking about it, you will naturally explore a little of what works about it and what doesn't, what's testable and what isn't, all of these things.
So push these ideas into the world in a somewhat low-stakes way, and then, once some of them start appearing more plausible than the others, increase the rigor of the test. If you've written it down and it sounds good to you, and you've talked to your co-worker and it sounds good to them, try to get approval to send it to a limited segment of your audience, an audience segment that is apt to test it, maybe the group of people least likely to churn, or whatever it is. Just this incremental testing that gains more and more information over time at the incrementally cheapest cost.
Michael Levitz (44:01)
And I think the last note is if a test fails, keep testing. Don't just do what I've done, which is, okay, I'm just going back to what I know works because I don't have time for this or I don't want to make myself look bad. Keep going with the testing and come out the other side.
Robin (44:17)
Yeah, and equivalently, don't set up the test just to fail and validate the biases you have. You should be excited about testing, excited about these ideas, and they should be going through this process we've been talking about. But if in your head you're just thinking, this is doomed to fail, but I'm pushing it ahead anyway, then you're not really testing something plausible. You're just allowing your priors to persist.
Michael Levitz (44:45)
All right, well, thank you so much. This has been a great conversation and we will see everyone next week.
Robin (44:51)
Thank you.