Lightning In A Bottle, Every Week

June 15, 2025

Episode Description

In this episode, Michael Levitz and Robin Tully discuss the intricacies of creating effective content strategy briefs. They explore the challenges writers face, such as the 'blinking cursor' syndrome, and how to build confidence in writing through structured briefs. The conversation delves into understanding audience insights, the importance of data-driven strategies, and leveraging weak signals to inform content creation. They emphasize the need for a comprehensive approach to content strategy that combines various data sources to enhance the writing process and meet audience needs effectively.

Transcript

Michael Levitz (00:00)
Hello and welcome to episode four of our newly named podcast, Forecasting the Brief.

My name is Michael Levitz, and I'm joined by my partner here, Robin Tully. So today we're going to talk about content strategy briefs. A core part of what we do is building effective content strategy briefs, and we've spent about a year deconstructing and reconstructing what we think is a great brief.

All right, so when we started this, one of the key things we wanted to do was target what we call the blinking cursor, or actually what we heard a lot of people call the blinking cursor, and we adopted that as a key pain point. A lot of the marketers and writers we talked to talked about two key things. One is just facing this blank page and not knowing if they're ready to write, if they were done with their research,

if they had the strategy, if they had the insight. And the other thing was just how do we time box that research, just kind of going down these black holes of unstructured time and having a lot of deadlines, a lot of to-dos and not being able to manage how far to go before kind of just forcing themselves to write and publish. And that led us to...

developing a content strategy brief that basically attacked that blinking cursor and gave them all of the key components that they need to start writing and start having fun and stop procrastinating and stop, you know, going down these endless research holes.

Robin (01:28)
Yeah, I think two of the main side effects of the blinking cursor that we heard about: one was the lack of predictability, in terms of, how do I know what I'm actually gonna write about? If I dive down the rabbit hole of doing research myself, I can't guarantee that I will find the correct number of citations. I can't guarantee that I'll find relevant topics. And just not having that certainty causes a bit of an

inability to begin acting. So there's both the uncertainty and the inability to predict how long this research endeavor will take. And because of differing time crunches and different amounts of pressure week to week, the marketer is probably going to be spending different amounts of time on each of these steps. So we're really trying to shore up the confidence and the repeatability: guaranteeing that the marketer has an actionable insight every week that's vetted and cited and prioritized

against all the other signals that they could be seeing, and giving that to the marketer on a silver platter every week so they can just start writing.

Michael Levitz (02:29)
One thing that I feel a lot, and I think this is pretty universal, at least to B2B writers, is that you feel some level of imposter syndrome. You're not the subject matter expert. You're not the engineer. You're not working in the facility where these things are built. And you're being asked to write about these topics and...

there's a feeling of wanting to do endless research because you just have that feeling like someone's going to be reading this and say this person really has no idea what they're talking about. So you kind of go into this endless kind of research cycle and it's largely unnecessary, but you don't know when you know enough to be an expert. And then, you know, as soon as you're done with this piece, you're basically jumping onto the next piece and starting that cycle over again.

So after hearing that a bunch, one of the things we set out to do is give you that confidence to ideally skip that imposter syndrome, skip that identity crisis, skip the gut-wrenching "am I really the right person to be doing this task right now?" and let you just proceed with confidence.

Robin (03:37)
I think also that giving people the ability to proceed with confidence just allows them to explore new spaces that they were previously not confident enough to explore. Whereas if you have all that pressure you talked about, you might find yourself in that pattern of just

picking the hits that you've seen before and trying to replicate those successes, because you'll have pretty good buy-in from your boss: well, this worked three months ago, so let's try it again now. But really trying to use all of these different data signals that we have access to, to explore the space of opportunity before you've even started writing, just allows you to explore the field, get more confidence, and also try new things. And hopefully that is

easier but also just more engaging.

Michael Levitz (04:18)
Yeah, I think one of the things that often happens is you fall into a template, and then that defines the way you're going to write, as you said, for the next, let's say, three months. And you almost get afraid to change it because, you know, you feel like it's tried and true. And I think one of the things we're encouraging and kind of empowering people with is

that ability to kind of flex into different styles and different topics and different news cycles to bring kind of fresh ideas and understand, you know, what that audience perspective is rather than just kind of coming with a business perspective and hoping that the audience is kind of following you down that kind of rabbit hole.

Robin (05:02)
Yeah, there's been a lot of discussion in the past about meeting the audience where they are, and about where the brand is currently positioned. And having the data crunched, having the topics generated, will allow you that flexibility of understanding where the audience actually is and how to reach them, what topics are gonna resonate with them.

Michael Levitz (05:25)
So for us, I think that's where things got really interesting. If we accept the premise that understanding where the audience is and meeting them where they are is a key part of successful business communications, the question becomes, how do you do that in a way that's efficient? Because it can be incredibly time consuming, especially when

you're a writer working on a brand. Maybe you're internal, maybe you're an agency or a freelancer with a ton of stuff going on. And again, that's exactly when you get into this unstructured thing of: right, I'm going to go on Twitter, I'm going to go on Reddit, and now I have five white papers open. I want to know what Gartner said, I want to know what all these industry pundits say, so I'm not embarrassing myself and I'm not missing some trend. And all of a sudden you've

totally blown the amount of time you have. So when we talked to people, what often happened was they'd stopped meeting the audience where they were. There's just not an efficient way to do it. And they just relied on: hey, I have a decent understanding of what's going on from working in this industry, we get these audience insights from, let's say, the business intelligence group, so I have a decent feel for it.

I'm just going to go with that, because doing those other things is going to make this task take five times longer than I have.

So where we decided to try to fill a gap is: how do we hand them that "meet your audience where they are" insight on a silver platter? It's there, it doesn't take any more time, and they can just build it into their writing process.

Robin (07:08)
Well, one of the things that becomes difficult about understanding where your audience is at is even kind of baked into the way we're talking about it, where we're saying "the audience" as if it is, you know, a single person you can go talk to and ask, where are you at? When really the audience is just

a thing that is essentially sampleable from all of these different signals. Some portion of your audience signal will come from social media, some portion will come from competitors, some portion will come from what people are searching for, some portion comes from news and articles, and all of this. So we try to look at all these different signals to just kind of chip away at the marble slab of who your audience is and get closer and closer. But in that process,

there is kind of an iterative growth thing, where we do want to look at what topics are working and use that to figure out what the actual core audience wants, too. But really, it's about what are all the different tools we have to chip away at this marble block to start carving out the core audience. And the reason the marble block analogy actually falls short is, well, your audience is also going to change and grow over time.

So really it's just this kind of what are all the different signals we can gather to just take the best stab we can at where is the audience today.

Michael Levitz (08:26)
Yeah, it's a great analogy. I think on the B2C side, one of the things that comes to mind: I had worked on Blue Bottle Coffee for a while. And one of our biggest fears was, you know, this person's a coffee subscriber, and they now have bags of coffee across multiple shelves. They're overflowing with coffee. And at some point they're just going to completely cancel the subscription. You know, so

subscriptions got more sophisticated, and all of a sudden you started seeing people do things like ask: how many bags of coffee do you have right now? Can we update your subscription dynamically for you? There are hilarious stories of people with rolls of paper towels in their oven, in every place they could store them, these subscriptions that had just gotten out of control. And I think, you know, for us, it's

you know, how do we understand if the audience is starting to disconnect, if they're starting to go down a different path from us? How do we not let them end up with shelves full of coffee without us realizing it, and then all of a sudden we've just lost them? So I think the interesting question we approached was, first, working backwards: what do we need to give to a writer to

have that information? I think that's kind of a new type of brief. And then, working backwards from that: what are those data sources, and how do we combine them to fill that brief with what they need to meet the audience where they are?

Robin (09:57)
Yeah, I think a lot of people would hear what we're saying and think that it's kind of equivalent to classic audience segmentation, but it's really very different. To your example about having too many paper towels, you can go down these

infinite processes of audience segmentation, trying to figure out exactly, you know, who are these people, what are their socio-economic attributes, where are they. And in carving that out, you're never really getting closer to

what these people want, or what is the leading carrot that guides these people's interests. But by starting with the brief and then looking at signals that validate and stack rank those topics, we are getting closer to that carrot. It's a little bit more about what are the trends your audience is following, where are they going, and what are the expressions of their actions, both in terms of your product and how they're purchasing it, but also broader

market ideas of what people are talking about online. So really, having that brief, finding the citations, using those to stack rank the topics, and doing this thing where we can look at an immense number of topics and an immense number of signals and use all of that to forecast the likely value of these channels and briefs, really does make it a bit more

actionable and a little bit more, you know, ready to implement than what could otherwise be an infinite process of audience segmentation.
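
To make the stack-ranking idea concrete, here is a minimal sketch of combining per-source signal scores into one ranked list. The source names, weights, and scores are hypothetical illustrations, not the production pipeline:

```python
# Sketch: stack-rank candidate topics by combining weak signals from
# several sources into one weighted score. Weights and data are made up.

SOURCE_WEIGHTS = {"social": 0.4, "search": 0.3, "news": 0.2, "competitors": 0.1}

def stack_rank(topics: dict[str, dict[str, float]]) -> list[tuple[str, float]]:
    """Combine per-source scores (0..1) into one weighted score per topic."""
    ranked = [
        (topic, sum(SOURCE_WEIGHTS[src] * score for src, score in signals.items()))
        for topic, signals in topics.items()
    ]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

candidates = {
    "supply chain AI":  {"social": 0.8, "search": 0.5, "news": 0.9, "competitors": 0.2},
    "cold chain costs": {"social": 0.3, "search": 0.7, "news": 0.4, "competitors": 0.6},
}
for topic, score in stack_rank(candidates):
    print(f"{topic}: {score:.2f}")
```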

Michael Levitz (11:23)
Yeah, that's very true. And we had some interesting conversations with people, where at first we wanted to really lean in on audience segmentation. And a lot of the marketers we talked to gave us feedback, where we had actually given them some micro-segments, a decent amount of segmentation, and they said: hey, this is creating a lot more work for me. I need

you to make less work for me, not more. So we went from, you know, the really micro-segmented approach of a lot of the work you'd done on the Yang campaign, to making the fewest segments possible that were the most impactful. How do we create the widest message that's not so wide that it's meaningless?

Robin (12:07)
Yeah: what is the best big tent you can have this week to gather your audience under, rather than carving up 90 different little tents that you then have to edit just as much and segment just as much, where you lose people, and maybe people in tent A belong in tent B. If we can just help you figure out the most actionable big tent available to you this week, you will naturally reap a huge amount of benefit.

Michael Levitz (12:34)
So when we started, we put together, let's say, a generic amalgam of your typical content strategy brief. We went out there, did a survey of as many briefs as we could find, looked at what the similarities and differences were, and created a template for ourselves that pulled together the best of all of those things.

And we implemented it, we ran with that for a few months. An interesting thing happened, where what came out was not particularly actionable. And as we deconstructed it, what we ultimately found was that the typical content strategy brief really varies in altitude. It'll be some

high-level stuff: you know, who's the audience, what's the business imperative, what's the objective, that kind of stuff. And then some really tactical stuff: potentially, what's the H1 tag, what are the keywords. And it's missing a middle piece, which I think is where the writer is supposed to begin the research. And again, that's the part we were setting out to solve.

We were still handing the writer their biggest time suck. Their biggest problem was left unsolved. So some of the feedback we got was: okay, this is a totally reasonable content brief, but it doesn't do the things you guys told me you were going to do. I still need to go start my normal process. So we took a step back and deconstructed it again, leaving out the stuff you get in regular

content strategy briefs and really just trying to fill in that, you know, that messy middle, the part of, you know, how do we meet the audience where they are?

Robin (14:15)
Yeah, and do you have a picture to paint of where the writer will be left with that brief, and what the next step for them is?

Michael Levitz (14:24)
Yeah, I mean, the light bulb moment for me was I was really trying to create a new brief template, and then I just realized I didn't have the ingredients for the brief I wanted to give someone. So no amount of creating new labels, or changing "goal" to "objective", or any of those kinds of moves was going to matter, because the

substance wasn't there. And, you know, that made us go on this journey where we started looking at, well, I kind of shorthanded it as lightning in a bottle: how do I give that spark to a writer over and over? And how do we set it up in a way that we can operationalize, and guarantee that we're going to come up with that? And I think that's where

what you had been talking about with weak signals really started to come into play because we all know these moments where you find that thing that just unlocks the article. But how do we find that every single time at scale?

Robin (15:31)
Yeah, I think this is just one of the enablements of modern technology: our ability to parse vast amounts of data, crunch the numbers, and do it just in time for when you, the marketer, need it, giving you the most relevant themes. And as we talked about earlier, giving you the big tent that is an actionable thing for you to proceed with, so you

have access to citations, have access to the numbers, and you have that lightning in a bottle. And, you know, one of the things built into that phrase is the ability for it always to be novel and to be powerful. So in order to do that, we do have to look at this immense amount of data, prioritize it, and

figure out: well, this week, this specific social media group is having an emergent conversation about a topic, and it's exploding at some velocity that is very relevant. So we can use that as the signal for why this thing is lightning in a bottle this week. And the next week, we can see an example where this brand works in some relatively niche field, perhaps, and the Wall Street Journal has now published an article directly related to them.

So if we can always be tracking and indexing all of these different data sources, we're able to always capture the most valuable lightning in a bottle this week, rather than being in a position where we're only ever looking at a limited set of signals, and there might not be lightning in a bottle this week.
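
As a rough illustration of "exploding at some velocity," here is a sketch that flags topics whose weekly mention counts are accelerating. The growth threshold and sample data are assumptions for illustration:

```python
# Sketch: flag topics whose weekly mention counts are accelerating.
# The 2x week-over-week threshold and the sample counts are illustrative.

def is_emerging(weekly_mentions: list[int], growth: float = 2.0) -> bool:
    """True if the latest week grew at least `growth`-fold over the prior week."""
    if len(weekly_mentions) < 2 or weekly_mentions[-2] == 0:
        return False
    return weekly_mentions[-1] / weekly_mentions[-2] >= growth

mentions = {
    "niche forum conversation": [12, 15, 14, 45],    # sudden spike -> emerging
    "evergreen industry topic": [200, 210, 190, 205],
}
for topic, counts in mentions.items():
    print(topic, "->", "emerging" if is_emerging(counts) else "steady")
```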

Michael Levitz (17:00)
That's been one of the most rewarding parts of this. I think a lot of B2B brands that we talk to often feel like their space doesn't move fast enough, or they're not connected to culture in a way where they have something interesting to say every week. And I think as you started surveying the space and getting deeper into some of these more niche conversations, all of a sudden it was

culturally relevant, and it was interesting and fast moving. It was just a matter of discovering where these things were happening, as opposed to staying at the surface. If I look at Gartner, McKinsey, and the Wall Street Journal on a week-to-week basis, yes, there may not be anything changing in a particular niche space. But if I go down a few levels and look at where some of these conversations are happening,

then there's often pretty significant debate and also a connection to the news cycle. And I think we've really found that magic of when there's something in the news cycle plus something in the social world and they're combining, that is a huge unlock.

Robin (18:08)
I mean, I think it's all about trying to be data-driven in the ways that you can be. And sure, there are some positions where it might not seem directly actionable to be data-driven. But as we were saying earlier, for one thing, we look at enough signals that there should be some signal in one of the channels every week relevant to your brand. And there's also a little bit of the,

well, even if the brand believes there's less ability to be data-driven, I'd still believe you would want to be. Because, you know, what is the alternative? To guess?

And if we're just automating the process behind the scenes for you, it's still as valuable a data-driven signal as you could have, and the best position you could be in. There might be slightly more variance in the expected value of these campaigns, but it is still maximizing, compounding over time, getting you the best results on a broader time scale.

Michael Levitz (19:07)
Now, when you say being data-driven, having worked with you for a couple of years at this point, I know that what you mean by that is different than what is generally meant by it. For me as, you know, a marketing manager somewhere, being data-driven means basically I'm looking at my previous open rates, click rates, that kind of stuff. I'm seeing what kind of content performed the best. Maybe I'm grabbing some reports from the

business intelligence group, if that exists. So I'm not just shooting in the dark. Now, what you're talking about is actually very different. Do you want to explain what you mean?

Robin (19:41)
Yeah, I don't know that I have different language for what you were just describing, but to me, that is more being data-tracked, data-indexed: you are taking an action and then you are measuring it afterwards. And sure, you can have some idea there of, well, this action performed better than other actions, because I looked at the click-through rates.

Whereas for our approach, and for my approach, it's this kind of Bayesian idea of how to actually be propelled by the data that you're gathering, the data that you have access to, rather than purely taking the actions and then measuring them after. It's kind of a pre-measure. You know, we can look at this topic and how much it's

gaining traction in a social media space, and we can do simulations of the future and figure out: in X percent of these simulations, that topic will keep growing. And, you know, we did a thousand simulations, and in 85% of them, that topic keeps growing. And thus, if we play into that, we are playing more into what the data is telling us about where things are going.

How do I gather as many different things that can all compound together into this kind of like painting of the world, painting of the universe?

And how do we use that to fuel ourselves? This might be a little bit too esoteric, but there's a book I like a lot called Statistical Rethinking, and there's a chapter in it about the big world and the small world. The big world is, you know, the true rules of the world; you may never actually directly interact with them. You're always just sampling from that real world through your observations. And that gives you the small world, which is what you know about the big world from your interactions with it. So in some grand sense, there is a true open rate that your audience will have to your emails, and with every campaign you send, you're resampling that big-world number and figuring out: okay, well, it's a little bit closer to this number, it's a little bit closer to that other number.

But by doing the Bayesian approach, we're able to say: all right, we have an 85% confidence interval on what the click-through rate for these campaigns will be. We're capturing what that big world is, what that truth is, and then using that as the determination of: well, in X percent of simulations of the future, this topic will do better than these other topics. It's future-guided. It's this forecasted Bayesian approach, rather than

just, you know, "I measured KPIs for a campaign that I sent out, and I'm going to chase the past, chase the measured results that I've already looked at."
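
For readers who want the "thousand simulations" idea in concrete form, here is a minimal sketch assuming a simple Beta-Binomial model: update a belief about a rate from observed counts, then count in what fraction of simulated futures one topic beats another. It illustrates the approach Robin describes, not the actual system:

```python
# Sketch of the Bayesian "pre-measure" idea: keep a posterior over a rate
# (a topic's traction, an email's open rate), then simulate futures and
# count how often one option beats another. Counts and priors are made up.
import random

def draw_rate(successes: int, failures: int) -> float:
    """Draw one plausible 'big world' rate from a Beta(1+s, 1+f) posterior."""
    return random.betavariate(1 + successes, 1 + failures)

def prob_a_beats_b(a: tuple[int, int], b: tuple[int, int], runs: int = 1000) -> float:
    wins = sum(draw_rate(*a) > draw_rate(*b) for _ in range(runs))
    return wins / runs

# Topic A gained traction in 85 of 100 sampled windows; topic B in 60 of 100.
print(f"P(topic A keeps outgrowing topic B) ~ {prob_a_beats_b((85, 15), (60, 40)):.2f}")
```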

Michael Levitz (22:23)
I think one of the most interesting things here is that when you're talking about data-driven, you're talking about exploring a space using data tools. Whereas a lot of people think about this as some kind of competition with intuition or gut instinct: I'm replacing my gut instinct with whatever the data tells me. And the way we're using it here is that,

you know, I talked earlier about desk research being such an important part of writing, and you were throttled by the amount of time it took. You were throttled by tools and access, all these aspects of how limited you were: you could only dive into Reddit, into Twitter, into white papers, et cetera, for so much time before you were overrun and couldn't even make sense of it anymore.

When you're talking about being data-driven, one layer of this is: instead of having 20 tabs open in your browser and feeling like, whoa, I just went deep on this stuff, it's reading 150,000 news sources, reading everything that's been said on Reddit in, let's say, the last three months, looking at a full spectrum of content, and then

homing in on the few key signals in there that are relevant to the topic you're researching. Would you agree that that's a part of what you define as data-driven?

Robin (23:42)
I would agree. I mean, I think that's right. If I just open a tab for the thing I'm going to research, and then I click on a link from that, and then a link from that, I am only ever doing a kind of depth-first search, where the constraints of the first thing I looked at are always going to exist down that chain. So sure, at the end of 20 tabs, I will have an idea about something. But really, I just

have an idea about one of these paths, one of these research endeavors, one of these permutations, where I looked at social media conversation X and article Y, and those merged together in my brain to give me a certain intuition about the value of that pairing. Whereas, as you just said,

we want to look at it as much more of a breadth-first search, where instead of diving down one of these pairings of signals and resources that you found, we are crunching all of them, and out of a basically infinite number of paths that could have been researched, we're surfacing: well, in the majority of circumstances, topics A, B, and C rise to the top.
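
One way to picture the depth-first versus breadth-first contrast: instead of following one chain of links, evaluate every signal-and-source pairing and tally which topics recur across the majority of paths. A toy sketch with entirely made-up pairings:

```python
# Toy contrast: depth-first desk research follows one chain of links and
# inherits the first tab's framing; a breadth-first pass crunches every
# (signal, source) pairing and tallies recurring topics. Data is made up.
from collections import Counter
from itertools import product

signals = ["reddit thread", "search trend", "analyst note"]
sources = ["news article", "white paper", "competitor blog"]

def topics_for(signal: str, source: str) -> set[str]:
    """Stand-in for real analysis: which topics does this pairing suggest?"""
    return {"topic A"} if "thread" in signal or "news" in source else {"topic B"}

tally = Counter()
for signal, source in product(signals, sources):  # breadth-first: all pairings
    tally.update(topics_for(signal, source))

print(tally.most_common())  # topics that rise to the top across most paths
```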

Michael Levitz (24:56)
That's the other really interesting thing that often doesn't happen in self-directed desk research. When we're doing this, we are generally looking for a thing. We have a hunch, and now we're going on a search to see if we can pin that thing down and come back with a silver bullet: I want to make this argument, now I've found the perfect example and the perfect stat, and I can say this with confidence.

On the other hand, what you're doing is really allowing surprising things to emerge, and seeing where everything settles after you've read everything, which is very different. And I think both have merits, but what I like to do now is start the same process that I used to do, but armed with all the things that you've brought to the fore.

So instead of kind of starting at that first article and just going down this kind of, you know, treasure hunt of link chasing, I'm starting with these emergent signals and then I can go pursue the ones that, you know, kind of are most important to what I'm writing.

Robin (25:56)
Yeah, like if you're training a data science model, you'll have a training set, a testing set, and a validation set. The model will read and look at the training set, it'll then tune itself based off of the testing set, and then it'll be scored off the validation set. And in the ideal world, the model never looks at the ultimate criteria by which it is graded. But you as a human, sitting there doing the desk research, don't really have that ability, right? From the very first click, you know, you do. It's just what you said: you know what you're looking for,

what will validate the biases that you have, that we all have, and you will just kind of repeat those, right? We live in a world with a replication crisis, where so many papers just can't be replicated, because it turns out the methodology was flawed, or people were biased and chased the bias rather than letting the data speak for itself.

And there's an old adage in data science, too: you can torture the data to make it say anything. You can always say, well, I wanted this campaign to really be the best one, so I'm going to exclude people who've had the email for a certain amount of time. And I can keep shifting that number to change the thing I'm measuring until it becomes favorable for me. But really, we just want to be as hands-off as possible about

what those results are. You know, just allow the data to breathe, allow it to speak to what's actually there. And then you will have a topic, it'll be vetted, and you can interact with it in the way that you want, in the way you prefer. You can write the campaign that best plays out that topic in your voicing,

with themes that you want to include, but the kind of validity of the topic is pure.
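
Robin's analogy maps onto the standard three-way split in model training. A minimal sketch, using his naming (tune on the testing set, grade once on the validation set) and made-up split ratios:

```python
# Sketch of the three-way split: fit on train, tune on test, and score
# exactly once on a validation set the model has never looked at.
# The 70/15/15 ratios are a common convention, not a prescription.
import random

def three_way_split(rows: list, seed: int = 42) -> tuple[list, list, list]:
    shuffled = rows[:]
    random.Random(seed).shuffle(shuffled)          # fixed seed for repeatability
    n = len(shuffled)
    train = shuffled[: int(0.70 * n)]              # the model reads this
    test = shuffled[int(0.70 * n): int(0.85 * n)]  # tuning happens against this
    validation = shuffled[int(0.85 * n):]          # graded here, exactly once
    return train, test, validation

train, test, validation = three_way_split(list(range(100)))
print(len(train), len(test), len(validation))      # 70 15 15
```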

Michael Levitz (27:46)
It reminds me of a time when I was working with one of the smartest engineers I'd ever worked with, and we both got into cooking at the same time, both cooking off of YouTube lessons. And he was very excited. He had found this one guy who was amazing and sent it over to me. I looked at the playlist and there were a lot of lessons, I don't remember, 30 to 50 of them.

So being, like, a dilettante, you know, I'm always poking at ten things at once instead of going deep on any one thing, I asked him: hey, which ones of these have you tried? What's the best recipe? And he just looked at me like I was crazy. He was like: I did them all. Why wouldn't you do them all? And I think, to me, that was just a shock, and it really changed the way I work.

I just started being way more exhaustive and looking for tools that allowed me to do things like what you're saying, where the picking and poking wasn't cutting it anymore, and I really needed to have the full thing and then proceed once I had it. But that can also be paralyzing, right? If you want to work like that, how do you do it? How do you balance that with everything else you have to do?

So I think first seeing the benefit of that working style, but then not having tools that gave that power to me, really led me to want to give other people this thing and take that work away from them. Because if you have to go out and do all of that yourself every time, it's just not possible.

So we have talked about a quote that I cannot remember. So I'll need you to say the exact quote of weak signals versus strong signals. Can you tell us the accurate quote and then talk about why that's important?

Robin (29:36)
I don't know if I can quote it exactly, but there's another data science adage: an ensemble of weak classifiers will outperform one strong classifier.

And the basic idea there is that any model you have, in any domain, will have biases. It'll have biases from the data that's available to it, biases from the mechanisms with which it's trained, biases from the proportional split of training and testing sets, and all these things, right? All these models are just loaded with biases. And that's not a value-laden term in this context; it just is what it is. They are tuned to certain things.

So a position you can find yourself in is: if I go all in on my one signal, my one indicator, I will live or die based off of the validity, or the degree of bias, of that one model. And for a lot of reasons, models will have

amounts of bias that are unknown to you, and they can have sampling error and all these other things. But a good way to work around that is to have an ensemble of weak classifiers. Each of these covers a smaller portion of what the model is actually fit to, what it's actually looking at, but they all get to float together in this ensemble of experts at the end of the day, where a

model that looks at A and then votes will be tallied next to a model that looks at B and votes, and however many of these you want, and that will outperform a model that looked at A, B, C, D, E, and F all at the same time and applied the same bias to all those different things. Because in that ensemble approach, if model A looking at data A has bias plus one, and model B looking at data B has bias minus one, well, those wash out in a way that wouldn't have happened with the same model looking at the whole thing. So the way that connects to what we've been talking about is just this distribution of risk across all the different sources that we're looking at. It's a little bit like, you know, the ETF of topics: all the risk is just baked into it,

all the information that's available in the market of the audience voice is priced into this ETF and we can just kind of stack rank these and look at them.
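
A tiny numerical sketch of the adage, using the plus-one and minus-one biases from the example: several noisy estimators with opposing biases, averaged together, tend to land closer to the truth than any single biased model. The true value, biases, and noise level are assumptions:

```python
# Sketch of "an ensemble of weak classifiers outperforms one strong one":
# opposing biases wash out when votes are pooled. All numbers are made up.
import random

random.seed(0)
TRUE_VALUE = 10.0
BIASES = [+1.0, -1.0, +0.5, -0.5, 0.0]  # opposing biases cancel in the pool

def weak_estimate(bias: float) -> float:
    """One weak estimator: the truth, plus its own bias, plus noise."""
    return TRUE_VALUE + bias + random.gauss(0, 1)

trials = 10_000
ensemble_err = single_err = 0.0
for _ in range(trials):
    votes = [weak_estimate(b) for b in BIASES]
    ensemble_err += abs(sum(votes) / len(votes) - TRUE_VALUE)
    single_err += abs(weak_estimate(+1.0) - TRUE_VALUE)  # one model, one bias

print(f"mean ensemble error:     {ensemble_err / trials:.2f}")  # ~0.36
print(f"mean single-model error: {single_err / trials:.2f}")    # ~1.17
```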

Michael Levitz (32:03)
I remember a long time ago, I believe it was at an IBM conference, they were talking about looking at what links were being sent, I don't remember if it was in some kind of internal messaging app, and just curating the top 10

news links that their, what is it, 100,000-plus employee base was sending. And they just realized: hey, if a critical mass of people are all sharing this article on a specific day, there's a good chance it's important. At the time they had no idea what that article was about, or any way of analyzing it, but they just started to bubble up those top shared links, and that

Robin (32:33)
Yeah.

Michael Levitz (32:43)
you know, became valuable for them. And I think in certain ways, what we're doing has roots in something that basic: as you're saying with these weak signals, if many people are, in different ways, addressing a particular topic or sentiment, that in itself is interesting to us.
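
That "bubble up the most-shared links" heuristic is about as simple as signal curation gets; a sketch with hypothetical URLs:

```python
# Sketch of the heuristic Michael describes: no understanding of the
# articles needed, just a tally of what a critical mass of people shared
# on a given day. The URLs are hypothetical.
from collections import Counter

shares_today = [
    "https://example.com/supply-chain-ai",
    "https://example.com/q3-outlook",
    "https://example.com/supply-chain-ai",
    "https://example.com/supply-chain-ai",
    "https://example.com/q3-outlook",
    "https://example.com/new-regulation",
]

for url, count in Counter(shares_today).most_common(10):  # curate the top 10
    print(count, url)
```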

Robin (33:02)
Right, what else could it be? What better indicator could there be than all of these things pointing to the same thing? If it walks like a duck and it quacks like a duck, you know. When you have 100 signals that say this topic is a duck, and people like ducks, then that's great. That's all there could be. Because as we said earlier, there isn't just, you know, this one version of

your audience that you can go talk to and directly ask, one that's representative of all the permutations, all the variances of it. So all you're ever doing is sampling the different angles of it, trying to carve off the minutiae, the details, and using that to best understand what the audience is.

Michael Levitz (33:41)
Thank you for this conversation. I know we're not supposed to say what day it is in podcast land, but we ended up doing this on a Saturday, after I had a rough reaction to a tetanus shot on Thursday. So thank you for joining me on a weekend.

Robin (33:57)
Thank you. Thank you for talking. Thanks to the audience for listening.

Michael Levitz (34:01)
Thanks, everyone, for listening. We will be back next week. We're not at the stage yet where we plan out topics too far in advance, so whatever we happen to be banging our heads against next Friday, we'll talk to you about it that weekend.


© 2025 Forecast.ing, Inc. All Rights Reserved.