Michael Levitz
May 19, 2025
It was early days, and I was on a sales call. Everything was going great until the prospect offhandedly mentioned, "Hey, just so you know, your website's down."
Not again, I thought. We had no users, no process. Sometimes we pushed our entire code base when we needed to make changes. And sometimes that brought the site down. But in those early days, it felt better to move fast and break things.
I checked immediately and exhaled in relief. The website was live! The problem must be on their side.
"Can you send a screenshot?" I asked, awaiting vindication.
He emailed it over, and my eyes nearly popped out of my head. It was a picture of our live website, so barren of text and images that he thought it was broken.
This moment captures our first year building Forecast.ing: talking to hundreds of content marketers and building out a data science and AI pipeline. But not making time to create a homepage that told our story.
What began as an open-ended research assistant evolved into a content prediction platform inspired by political campaigns and Bayesian statistics. It came to life in three distinct phases: software that we thought the world needed (spoiler alert: it didn't), a Jupyter Notebook-powered spreadsheet that helped us run experiments with content marketers, and the MVP of our prediction engine that launched on May 1st.
Our journey began with Andrew Yang's presidential campaign, where my co-founder, Robin Tully, was Director of Data and Analytics. Faced with limited resources and the challenge of introducing a relatively unknown candidate with unconventional policies, Robin's team built a system they called "Bob."
Bob was a war room map that helped the campaign identify which precincts, caucuses, and districts they should focus on. The system ensemble-modeled a hundred different signals, from demographic data to the distance between a caucus and the nearest campaign office, to predict where they'd find supporters.
But the real innovation wasn't just gathering lots of data. It was in their approach to probability.
As Robin explains it: "It was less relevant to say, 'We think there is a 52% chance that we will get three supporters if we go here.' We thought it was more relevant to say, 'Across these two different distributions, if we have area A or area B, we think that across a large number of simulated results, area B will be more effective.'"
This Bayesian approach, looking at entire probability distributions rather than single-point predictions, became the foundation for our work in content marketing.
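To make that concrete, here's a toy version of the idea (all numbers invented, not campaign data): a single-point comparison slightly favors area A, but simulating both full distributions shows area B is the more reliable bet.

```python
# Toy illustration of distribution-level thinking (all numbers invented).
# A point estimate says area A is better (mean 3.1 vs 2.9 supporters per visit),
# but simulating the full distributions shows area B almost never comes up empty.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

area_a = rng.normal(loc=3.1, scale=2.0, size=N)  # higher mean, very uncertain
area_b = rng.normal(loc=2.9, scale=0.5, size=N)  # slightly lower mean, reliable

print(f"P(B beats A in a given sim): {(area_b > area_a).mean():.2f}")
print(f"A 5th percentile: {np.percentile(area_a, 5):.2f}")  # often near zero
print(f"B 5th percentile: {np.percentile(area_b, 5):.2f}")  # consistently solid
```

With these made-up numbers, the point estimate picks A, but B wins on reliability across thousands of simulated outcomes, which is the whole argument for looking at distributions instead of single predictions.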
We realized that marketers face the same fundamental problem: limited resources, too many options, and high stakes. Just as campaigns need to know where to invest, marketers need to reduce the guesswork in content decisions.
Our first version was technically impressive. It pulled from multiple data sources, ensemble-modeled topic relevance, and let users configure what signals mattered most to them. It was flexible and could adapt to various tasks.
It was also completely wrong for our users.
After over 100 discovery calls with content marketers, the overarching theme was clear: they're under pressure to publish more frequently to more channels and more audience segments. And they're doing more with smaller teams:
"Content marketing used to be my entire job. Now it's the third comma in my title."
Instead of making their work easier, we'd made it harder. We created a customizable, flexible environment where each user had to carefully calibrate their searches and settings. And the results were overwhelming: 20 trending topics and 50 citations per topic.
We were like Apple Music (you tell us what you want to listen to), when we needed to be Spotify (just press play, and we'll take it from here).
We'd written a long letter because we didn't know how to write a short letter.
Having working software felt good, like we were quickly moving through the startup stages. But we'd missed the part about providing the right value for our users.
So we stopped building software and started working with a scrappy Jupyter notebook, followed by what must have been one of the ugliest Google Sheets ever. To make matters worse, that Google Sheet was our sales deck. If you jumped on a call with us at that time, it opened and closed with that Google Sheet.
You know Andy Raskin's "The Greatest Sales Deck I've Ever Seen"? We were the opposite: no story, no slides, no shift.
But we were constantly crunching a huge amount of trend data and presenting topic analysis with deep citations. When people responded positively, we built on what they found useful. When they told us they didn't get it or didn't like it, we just hit delete and tried something different. There was no pride, no sunk cost, no delay.
We did about 20 discovery calls a week for about three months. After each day of calls, we worked together in a Google Meet for hours each night, dissecting what we'd heard and updating the data model.
And then a crazy thing happened: we started to be able to predict content topics at the intersection of brand, audience, competitors, and industry. We started to see patterns: when something worked, we did more of it; when it didn't, we eliminated it from our process.
Instead of Apple Music (you tell us what you want to listen to), we were, in our tiny way, taking the Spotify approach: here's the best thing for you to say right now. That is, if your Daylist were a local Python script that two people manually ran each day.
We called it Moneyball for Content Marketing.
After months living in the spreadsheet, we had a critical mass of people asking, "Can we use this?" We started rebuilding the software, planning to launch in January.
January came and went, and we still needed one more week. And then one more week. I don't know what happened between January and April, other than the fact that Robin and I worked all day, every day, through every weekend.
If one scene summed it up, it was this: I came to LA to work with Robin IRL, with the goal of launching our MVP at the beginning of the trip and using the rest of the time to plan out the next phase of work.
Fast forward to the last day of the trip. The MVP was still not live. I arrived at Robin's house at 10:30 a.m., and we each went into separate rooms and worked intensely until we were exhausted. We checked in at midnight, and guess what? We figured we had about a week of work left.
Behind the scenes, we were implementing ModernBERT, an open-source encoder model that let us embed text into a vector space, essentially turning words into mathematical objects. This unlocked our ability to do true data science on content topics, objectively scoring their relevance and potential performance.
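To give a feel for what that embedding step looks like, here's a minimal sketch using Hugging Face's transformers library with the answerdotai/ModernBERT-base checkpoint. The brand description, topic candidates, and cosine-similarity scoring below are illustrative assumptions, not our production pipeline.

```python
# Minimal sketch: embed topic candidates with ModernBERT, then score relevance
# by cosine similarity to a brand description. Requires a recent transformers
# release with ModernBERT support; the scoring scheme is illustrative only.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
model = AutoModel.from_pretrained("answerdotai/ModernBERT-base")

def embed(texts):
    """Mean-pool the last hidden state into one vector per input text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state       # (n, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1)        # (n, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1) # (n, dim)

brand = embed(["Data-driven content marketing for B2B SaaS teams"])
topics = embed(["AI content pipelines", "Office snack rankings"])
scores = torch.nn.functional.cosine_similarity(brand, topics)
print(scores)  # higher = closer to the brand in embedding space, in this toy setup
```

Once topics live in the same vector space as brand and audience descriptions, "relevance" stops being a gut call and becomes a number you can rank, test, and track.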
Instead of just predicting the next word in a sequence (like ChatGPT), we were building a system that could predict which topics would actually drive business results based on quantifiable signals from social media, search, news, and competitive intelligence.
We were creating a poker-influenced decision framework for content. One that focused on expected value across many possible outcomes rather than single-point estimates.
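A stripped-down version of that logic (the probabilities and traffic figures here are made up for illustration): topic A's most likely outcome looks better, but topic B carries the higher expected value once you weigh every outcome.

```python
# Toy expected-value comparison between two topic options (all numbers invented).
# Poker logic: pick the option with the best EV across all outcomes, not the one
# whose single most-likely outcome looks best.
def expected_value(outcomes):
    """Sum of probability-weighted payoffs over all outcomes."""
    return sum(p * visits for p, visits in outcomes)

outcomes_a = [(0.70, 1_000), (0.30, 200)]    # safe topic: likely decent traffic
outcomes_b = [(0.10, 20_000), (0.90, 300)]   # long shot: rare but huge upside

print(f"EV(A) = {expected_value(outcomes_a):,.0f} visits")  # 760
print(f"EV(B) = {expected_value(outcomes_b):,.0f} visits")  # 2,270
```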
Finally, on May 1, we launched our MVP. In the spirit of Reid Hoffman's famous line, we were properly embarrassed. And proud. And tired. And relieved.
"If you're not embarrassed by the first version of your product, you've launched too late."
While our LinkedIn feed seems full of people vibe-coding apps over the weekend and launching with users and money, we spent a year elbow to elbow with content marketers, doing research, trading drafts, and digging deep into their industry and audiences. It wasn't glamorous. Much of it happened in Google Sheets and Jupyter notebooks. But it was grounded in mathematical thinking about what actually drives content performance.
In a world where content demands keep increasing while teams shrink, the ability to predict what will work becomes even more critical. As Robin points out:
"It's going to be increasingly significant to create relevant content that constantly calibrates your brand at the center of your audience's world."
That's the challenge we're dedicated to solving. Sometimes with sophisticated algorithms, and sometimes with ugly spreadsheets.
And in this case, one great Jupyter notebook really did change our world.
Michael Levitz is the co-founder of Forecast.ing, a content prediction platform that helps marketers identify high-performing topics using data science and game theory. He previously led content strategy for global brands like Pampers, Samsung, and Verizon during his time as a Managing Director at R/GA. Michael co-hosts the AI content marketing podcast Forecast.ing the Brief, and his insights have been featured in Forbes, Inc., and TheStreet. Connect with him on LinkedIn.