Interview: 9 Proven Revenue Forecasting Methods for 2026

Meet the Expert

Ivan Petrov, now Director of Supply Chain at EduForge, has held senior roles at three top online course providers operating in Eastern Europe since 2016. He’s seen revenue forecasts go from wild guesswork to tight, actionable projections—often with little formal process. We asked Ivan for a candid walkthrough: what actually moves the needle, what’s overrated, and how to get started when you need results this quarter—not in an imaginary “future state.”


Q1: Where should experienced teams start when moving from gut-feel to data-driven forecasting?

Ivan:
You’d think the answer is “buy a forecasting tool and throw data at it.” That’s what we did at my first edtech company—and ended up with a pretty dashboard that never matched reality.
Start with two things:

  1. Get your data clean—especially registration and payment timestamps, discount codes, and refund events.
  2. Clarify your primary use case. Are you forecasting for cash-flow timing? Inventory of digital seats? Instructor availability?

Sounds boring, but in 2022, at SkillLeap, we cut our forecast variance from 35% to 12% in 90 days just by reconciling inconsistent course SKU IDs across regions.
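
To make the cleanup concrete, here's a minimal sketch of that kind of SKU reconciliation. The rows, region codes, and `canonical_sku` helper are hypothetical; real exports will need messier normalization rules.

```python
import pandas as pd

# Hypothetical regional exports where the same course carries
# different SKU spellings ("DS-101", "ds_101", "DS 101").
sales = pd.DataFrame({
    "sku": ["DS-101", "ds_101", "DS 101", "PY-201"],
    "region": ["PL", "RO", "BG", "PL"],
    "net_revenue": [1200.0, 950.0, 430.0, 800.0],
})

def canonical_sku(raw: str) -> str:
    """Normalize case and separators so regional variants collapse."""
    return raw.strip().upper().replace("_", "-").replace(" ", "-")

sales["sku"] = sales["sku"].map(canonical_sku)

# After reconciliation, revenue rolls up to one row per real course.
print(sales.groupby("sku")["net_revenue"].sum())
```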

Q2: For edtech in Eastern Europe, what’s unique about forecasting revenue compared to the US or UK?

Ivan:
There are three differences I see over and over:

1. Regional payment friction:
A Forrester report from 2024 showed cart abandonment rates in Poland and Romania are still 13% higher than in Western Europe, mostly due to local payment gateway quirks. That introduces a lag between “enroll” and “valid revenue.” If you’re not segmenting by payment method, your forecast will skew optimistic.

2. Academic calendar effects:
Forget Black Friday as your big event. For us, “September Surge” (when parents and teachers scramble for upskilling before the new school year) drives as much as 20% of annual revenue in a single month. Most Western models miss this.

3. Currency swings:
About 30% of our Eastern European revenue in 2023 came from cross-border enrollments—Ukrainian students buying Bulgarian courses, for example. If you’re forecasting in euros but collecting in hryvnia or zloty, FX swings can wipe out margins.
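
Here's a sketch of what FX-aware segmentation looks like, with invented enrollments and illustrative rates (in practice, pull daily rates from your payment provider or the ECB reference feed):

```python
import pandas as pd

# Hypothetical cross-border enrollments booked in local currencies.
enrollments = pd.DataFrame({
    "currency": ["UAH", "PLN", "BGN", "UAH"],
    "amount": [4100.0, 260.0, 120.0, 3800.0],
})

# Illustrative EUR conversion rates only.
eur_rate = {"UAH": 0.023, "PLN": 0.23, "BGN": 0.51}
enrollments["eur"] = enrollments["amount"] * enrollments["currency"].map(eur_rate)

# Stress test: a 10% adverse move on the hryvnia alone.
shocked = {**eur_rate, "UAH": eur_rate["UAH"] * 0.9}
enrollments["eur_shocked"] = enrollments["amount"] * enrollments["currency"].map(shocked)

print(enrollments[["eur", "eur_shocked"]].sum())
```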


Q3: What’s your short-list of forecasting methods that actually work for the first 90 days?

Ivan:
I’ve tried a dozen. Here’s my “starter pack.” Some are obvious, but execution is everything:

| Method | Practicality | Data Required | Speed to Results | What Can Go Wrong |
| --- | --- | --- | --- | --- |
| Simple Moving Average | Quick setup | 6-12 months clean sales | 1 day | Can't catch seasonality |
| Weighted Pipeline | Easy if CRM used | Lead stage (sales/auto) | 1-2 days | Inflated by junk leads |
| Cohort Backtesting | High payoff | User registration events | 1 week | Needs de-duped data |
| Market-Driven Scenario | Useful in Eastern Europe | Macro calendar, promo | 2-3 days | Needs local context |
| Promo-Effect Modeling | Immediate value | Promo code usage | 2 days | Overfits to one-off events |

The lowest-hanging fruit is usually cleaning promo code attribution. One team I worked with in Sofia saw a 9% forecast error drop by just aligning promo code naming between Facebook and Udemy campaigns.
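
For the first two rows of the table, here's a minimal sketch with invented numbers. The three-month window and the stage weights are placeholder assumptions; derive your own weights from historical close rates, or junk leads will inflate the result.

```python
import pandas as pd

# Twelve months of hypothetical net revenue (EUR thousands).
monthly = pd.Series(
    [82, 75, 90, 71, 58, 64, 70, 88, 120, 95, 86, 92],
    index=pd.period_range("2025-01", periods=12, freq="M"),
)

# Simple moving average: next month = mean of the trailing window.
sma_forecast = monthly.rolling(window=3).mean().iloc[-1]

# Weighted pipeline: open deal value x stage-conversion probability.
pipeline = pd.DataFrame({
    "stage": ["lead", "demo", "negotiation"],
    "value": [40.0, 25.0, 15.0],      # EUR thousands
    "weight": [0.05, 0.30, 0.70],     # placeholder close rates
})
pipeline_forecast = (pipeline["value"] * pipeline["weight"]).sum()

print(f"SMA: {sma_forecast:.1f}k, weighted pipeline: {pipeline_forecast:.1f}k")
```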


Q4: Can you give an example of a “quick win” that surprised you?

Ivan:
Absolutely. At EduForge in 2023, we noticed our forecast always missed the spike from WhatsApp-driven micro-campaigns, especially for Russian-language courses in the Baltics. We added a one-line field to the registration form—“How did you hear about us?”—then tracked WhatsApp campaigns separately in our pipeline.
That single tweak let us anticipate a 17% revenue bump in March that we would’ve missed.
It sounds dumb-simple, but attribution is everything. Survey tools like Zigpoll or Typeform help, but we also used in-app popups for immediate feedback. The downside? Attribution is never perfect. About 18% of users just picked “Other” or ignored the question.
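
Here's a sketch of the channel bucketing that makes a free-text "How did you hear about us?" field usable downstream. The answer strings and the `bucket` helper are illustrative; "Other" stays noisy no matter what you do.

```python
import pandas as pd

# Hypothetical registrations with the new attribution field.
regs = pd.DataFrame({
    "heard_from": ["WhatsApp", "whatsapp group", "Facebook", "Other",
                   "WhatsApp", "Google", "Other"],
    "revenue": [29.0, 29.0, 49.0, 19.0, 29.0, 49.0, 0.0],
})

def bucket(answer: str) -> str:
    """Collapse free-text answers into coarse channels."""
    a = answer.lower()
    if "whatsapp" in a:
        return "whatsapp"
    if "facebook" in a:
        return "facebook"
    if "google" in a:
        return "google"
    return "other"

regs["channel"] = regs["heard_from"].map(bucket)
print(regs.groupby("channel")["revenue"].agg(["count", "sum"]))
```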


Q5: How about more ambitious methods—what’s worth pursuing after the basics?

Ivan:
If you’ve got your basics dialed, move on to hybrid models. I’ve found that blending quantitative (e.g., rolling average, weighted pipeline) with qualitative inputs (local market intelligence, partnership announcements) gives the best results.
Last year, we started layering community manager feedback—manually scoring “buzz” around new AI courses—into our weekly projections. When our Polish team reported a spike in Discord mentions, it predicted a surge that outperformed our algorithmic forecast by 7%.

This works best in markets where word-of-mouth and influencer micro-campaigns aren’t visible to automated tools. But beware: if you turn subjective feedback into a hard number without weighting for bias, forecasts degrade.
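
One way to wire a subjective buzz score into the number without letting it dominate: cap the qualitative adjustment. The weight and cap below are assumptions for illustration, not Ivan's actual values.

```python
def blended_forecast(algo_forecast: float,
                     buzz_score: float,       # manual score in -1.0 .. 1.0
                     buzz_weight: float = 0.3,
                     max_lift: float = 0.10) -> float:
    """Cap qualitative influence at +/- max_lift of the base forecast."""
    adjustment = max(-max_lift, min(max_lift, buzz_weight * buzz_score))
    return algo_forecast * (1.0 + adjustment)

# Strong Discord buzz on a new AI course: lift is capped at 10%.
print(blended_forecast(100_000, buzz_score=0.8))  # 110000.0
```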


Q6: Any classic mistakes you see teams make with revenue forecasting in edtech?

Ivan:
Two big ones:

  1. Overtrusting automated CRM pipeline numbers.
    Every CRM inflates the pipeline. If your SDRs are incentivized to log every “maybe” as a lead, you’ll double-count. In 2022, our sales pipeline “forecast” was 60% higher than actuals until we implemented weekly pipeline scrubs.

  2. Ignoring failed payments and refund rates.
    Especially in Eastern Europe, failed card payments can be as high as 10% on certain gateways. If you predict off gross enrollments instead of net paid, you’ll be off by miles.
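
A minimal net-vs-gross sketch. The 10% failure and 5% refund defaults are illustrative, not benchmarks; measure yours per gateway and per region.

```python
def net_revenue(gross_enrollments: int,
                avg_price: float,
                payment_failure_rate: float = 0.10,
                refund_rate: float = 0.05) -> float:
    """Gross bookings minus failed payments, minus refunds on what settled."""
    settled = gross_enrollments * (1.0 - payment_failure_rate)
    return settled * avg_price * (1.0 - refund_rate)

# 1,000 enrollments at EUR 49 shrink fast once failures and refunds bite:
print(net_revenue(1_000, 49.0))  # 41895.0 vs 49000.0 gross
```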


Q7: How do you handle seasonality and “course drops” (new course launches)?

Ivan:
We model these separately.
For seasonality, I recommend decomposing the last 2-3 years of sales into monthly “lift” factors. For example, in our data, May is always down 18% vs. March due to exam season distraction, while September is up 22%.
For course drops, we tag new SKUs and track their first-90-days performance as independent cohorts. One new data-science course in 2023 hit 3,100 enrollments in week 1—ten times the average—but settled at 500/week by month two. Including just the launch week in your forecast will blow up the rest of your year.
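
A toy version of the lift-factor decomposition, with invented monthly figures. In practice you'd first strip launch-week cohorts out of the history so they don't contaminate the seasonal baseline.

```python
import pandas as pd

# Three years of hypothetical monthly revenue (EUR thousands).
revenue = pd.Series(
    [70, 65, 85, 72, 58, 60, 66, 80, 104, 88, 78, 82] * 3,
    index=pd.period_range("2023-01", periods=36, freq="M"),
    dtype=float,
)

# Monthly lift: each month's average vs the overall average.
lift = revenue.groupby(revenue.index.month).mean() / revenue.mean()
print(lift.round(2))  # in this toy data: September ~1.37, May ~0.77

# Applying it: baseline forecast x that month's lift factor.
baseline = 80.0
print(baseline * lift[9])  # September projection
```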


Q8: What signals do you consider “advanced” but worth adding sooner rather than later?

Ivan:
Three things:

  1. Micro-payment channel data.
    We saw, in late 2023, that Apple Pay enrollments were growing 3x faster than Visa in Romania. Layering payment method as a revenue predictor gave us a 4% uplift in forecast accuracy.

  2. Drop-off/abandon rates at specific checkout steps.
    Using Zigpoll exit surveys helped us quantify why 11% abandoned at the payment screen. When we fixed a translation bug, conversion jumped.

  3. Partner/affiliate funnels.
If you use third-party affiliates, segment their pipelines. Their student quality and refund rates can differ by 15+ points compared to direct.
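
On point 3, here's a sketch of per-channel segmentation with fabricated rows; the point is that settle and refund rates become separate forecast inputs per channel rather than one blended average.

```python
import pandas as pd

# Hypothetical enrollments tagged by acquisition channel.
df = pd.DataFrame({
    "channel": ["direct", "direct", "affiliate_A", "affiliate_A", "affiliate_B"],
    "paid": [True, True, True, False, True],
    "refunded": [False, False, True, False, False],
    "amount": [49.0, 49.0, 49.0, 49.0, 29.0],
})

# Per-channel settle and refund rates feed separate forecast lines.
print(df.groupby("channel").agg(
    settle_rate=("paid", "mean"),
    refund_rate=("refunded", "mean"),
    gross=("amount", "sum"),
))
```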


Q9: What’s your process for ongoing forecast optimization—how often do you recalibrate?

Ivan:
Every Friday we compare forecast vs. close for the week.
Monthly, we “post-mortem” misses: was it a payment failure spike, a misread on a promo, or a data-merge error? Quarterly, we recalibrate our seasonality multipliers.
You have to accept that no forecast survives first contact with reality—especially in edtech, where a TikTok influencer can create a 300% spike overnight. Don’t get precious about your model.
In 2023, we cut forecast error from 14% to 8% in one quarter by simply adding a “wildcard” adjustment—subjective, human review for outlier events.
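
A bare-bones version of the Friday scorecard, with made-up weeks; the 10% review threshold is an assumption, so tune it to your own tolerance.

```python
# Weekly forecast-vs-close comparison; flag weeks that need a post-mortem.
weeks = [
    ("W14", 92_000, 88_500),
    ("W15", 95_000, 81_000),  # promo misread: large miss
    ("W16", 90_000, 91_200),
]

for label, forecast, actual in weeks:
    error = abs(forecast - actual) / actual  # absolute percentage error
    flag = "REVIEW" if error > 0.10 else "ok"
    print(f"{label}: forecast {forecast:,} actual {actual:,} "
          f"error {error:.1%} {flag}")
```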


Q10: Final advice to a senior supply-chain pro getting started now in 2026?

Ivan:
Don’t overcomplicate it out of the gate.
Start with last year’s numbers, clean your data like a maniac, and double-check every promo source. Segment by payment method and region—Eastern Europe isn’t a monolith.
Use simple models, validate them weekly, and layer in qualitative feedback once you’re 80% of the way there. Ignore the siren song of AI-for-everything until you’re hitting sub-10% variance with “dumb” models.
And—send out a survey after every failed checkout. Zigpoll, Google Forms, whatever. The answers are always humbling.


Actionable Next Steps

  • Audit: Spend a week cleaning your registration, payment, and refund data. Fix naming and IDs now.
  • Model: Run a simple moving average and weighted pipeline forecast—compare them side by side.
  • Segment: Break forecasts by payment method and calendar month to catch local quirks.
  • Validate: Survey failed checkouts and flag promo-driven surges.
  • Iterate: Recalibrate every week for 90 days—then add complexity.

As Ivan says: “In Eastern European edtech, it’s rarely the fancy AI model that wins—just ruthless cleaning, segmentation, and a willingness to admit you missed something.”
