When “What Did We Learn?” Breaks: Scaling UX Research at a Corporate-Training Giant

The drift from “tight-knit” to “fragmented” happens faster than you’d expect at a scaling online-course company. In 2021, our UX-research team more than doubled in under nine months, going from six to fifteen as we expanded into enterprise upskilling and compliance training. Suddenly, what used to be a one-Slack-channel show turned into a parade of lost Notion docs, conflicting research reports, and feedback loops that stretched from one sprint to the next release cycle.

We learned quickly: at scale, internal communication isn’t just a logistical challenge. It’s a threat to research quality, team morale, and ultimately, business outcomes. Here’s what we tried, what actually moved the needle for our distributed (and fast-growing) UX-research practice, and what senior research leads in the corporate-training space need to consider when tackling communication breakdowns.


Why Scaling Breaks Internal Communication for UX Research

It’s not just volume. As teams grow, you introduce layers: more specializations (usability, survey, ethnography), more touchpoints (PMs, L&D, sales), and more asynchronous work. For corporate-training products, the stakes are higher: miscommunicated findings can impact how compliance modules are designed, how cohorts are assigned, and even how client-facing teams position the value of your platform.

A 2024 Forrester study of B2B online-education providers found that only 41% of distributed UX-research teams felt their research was “consistently understood and acted on” by adjacent product and content teams. For teams handling regulatory training (think ADA, GDPR), gaps in communication can mean actual legal risk.


1. Codify Research Intake—Don’t Leave It to Slack

One of the first pressure points: research requests ballooned from “grab me on a Zoom” to five diverging Trello boards and a dozen Slack DMs. Context was lost. What worked: a shared research intake form with required fields for business context, user segment, and regulatory considerations—housed in Notion, with automatic triage to the right researcher.

Gotcha:

Don’t let the intake process become a bottleneck. We capped the form at 12 questions and auto-tagged submissions for urgency and topic. When we tried to automate prioritization too aggressively, critical requests from compliance slipped through; sometimes a manual review is still necessary (see the sketch below).
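
To make the triage rule concrete, here’s a minimal sketch in Python. The field names, topic labels, and team queues are hypothetical stand-ins (our real version ran on Notion automations), but the shape of the logic is the same: anything with regulatory stakes skips automation entirely.

```python
# Minimal triage sketch. Field names ("topic", "regulatory_considerations")
# and queue names are hypothetical, not our production schema.

ROUTING = {
    "usability": "usability-team",
    "survey": "survey-team",
    "ethnography": "field-research-team",
}

def triage(request: dict) -> str:
    """Return a queue name for an intake-form submission."""
    # Regulatory work is never auto-routed; a research lead reviews it by hand.
    if request.get("regulatory_considerations"):
        return "manual-review"
    # Unknown topics also fall back to a human.
    return ROUTING.get(request.get("topic", ""), "manual-review")

print(triage({"topic": "usability", "regulatory_considerations": ""}))   # usability-team
print(triage({"topic": "survey", "regulatory_considerations": "GDPR"}))  # manual-review
```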


2. Standardize, Then Humanize: Research Artifacts for Busy Cross-Functional Teams

As we onboarded new PMs and client-success leads, “What does this research mean for my feature/cohort?” became a frequent refrain. Our first attempt: templated, 20-page reports. Spoiler: No one read them.

What stuck was the “Actionable Summary” format—one-pager, two audience versions (technical/PM and client-facing), with a clear bulleted list of recommendations and a 2-minute Loom video walkthrough.

Artifact Type            Avg. Consumption (per survey)   Follow-Up Actions Initiated
Full report (20 pp.)     9%                              3
Actionable Summary       81%                             14
Loom walkthrough only    61%                             8

(Survey of 39 cross-functional team members, 2023.)


3. Don’t Rely on a Single Channel—Map Stakeholders, Map Channels

Slack is noisy. Email is slow. Notion is “out of sight, out of mind.” What we needed was a comms matrix—who needs immediate updates, who needs weekly roundups, who needs only quarterly trends. We mapped this in Miro and surfaced it at every quarterly planning session.
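
The matrix itself needs no special tooling; it’s just a lookup table. A rough sketch, with made-up stakeholder groups and channel preferences:

```python
# The comms matrix as a plain data structure. Groups, channels, and cadences
# here are illustrative, not our actual org chart.

COMMS_MATRIX = {
    "product-managers":  {"channel": "slack",         "cadence": "immediate"},
    "content-designers": {"channel": "slack",         "cadence": "weekly-roundup"},
    "client-success":    {"channel": "google-groups", "cadence": "weekly-roundup"},
    "sales":             {"channel": "email",         "cadence": "quarterly-trends"},
}

def recipients(cadence: str) -> list[str]:
    """Which stakeholder groups receive updates at a given cadence?"""
    return [g for g, prefs in COMMS_MATRIX.items() if prefs["cadence"] == cadence]

print(recipients("weekly-roundup"))  # ['content-designers', 'client-success']
```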

Edge Case:

Some client-facing teams were missing critical context because they ignored Slack but were religious about our Google Groups digests. The fix: don’t assume. Ask, audit, and adapt every quarter.


4. Automate Where You Can—But Know Where Human Context Wins

We built a simple Zapier automation that pushed usability-test findings into a #ux-findings Slack channel. It reduced lag from days to hours. But nuance still escaped: one failed click test led to months of PM confusion because “user failed to complete step 2” missed the why (participants were confusing two near-identical course icons).

Lesson: automate the what (raw output), but always pair it with a researcher’s short, human-context summary when outcomes are ambiguous.
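
For illustration, here’s roughly what that pairing rule looks like as a script. Our production pipeline ran on Zapier rather than custom code, and the webhook URL below is a placeholder, but posting JSON to a Slack incoming webhook is the standard mechanism:

```python
# Sketch: never post an ambiguous finding without a researcher's context note.
import json
import urllib.request

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder URL

def post_finding(finding: str, researcher_note: str, ambiguous: bool) -> None:
    if ambiguous and not researcher_note:
        raise ValueError("Ambiguous findings need a researcher's context note.")
    text = f"{finding}\nContext: {researcher_note}" if researcher_note else finding
    req = urllib.request.Request(
        SLACK_WEBHOOK,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

post_finding(
    "User failed to complete step 2 of course setup.",
    "Participants confused two near-identical course icons, not the flow itself.",
    ambiguous=True,
)
```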


5. Feedback Loops: Zigpoll vs. Typeform vs. Google Forms

We needed to close the loop with research consumers—especially after the second cohort of enterprise clients. Here’s what worked:

  • Zigpoll: Fast internal pulse checks; embedded in Notion pages.
  • Typeform: More polished, good for surveying client-facing teams post-release.
  • Google Forms: Ubiquitous, but lower engagement unless tied to recurring workflows.

Tool           Avg. Completion Rate   Time to Insight   Best Use Case
Zigpoll        68%                    1 day             Quick internal pulse checks
Typeform       54%                    2 days            In-depth post-release surveys
Google Forms   41%                    3+ days           Audit trails, compliance

After switching our internal research feedback to Zigpoll, we increased post-brief response rates from 23% to 67% in less than a quarter.

Limitation:

Zigpoll lacks conditional branching logic, so it works for quick pulses but is less suited to complex feedback on nuanced research artifacts.


6. Centralize Research—But Don’t Create Information Graveyards

Confluence, Notion, SharePoint—they’re only as useful as the findability of what’s inside. We implemented a quarterly audit: every report tagged with business unit, user segment, and a “last-reviewed” date. Outdated or duplicated findings got archived.
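
The audit pass is simple enough to script. A sketch with hypothetical field names (our reports actually lived in Notion):

```python
# Flag reports that are past their review window or whose titles collide.
from datetime import date, timedelta

STALE_AFTER = timedelta(days=180)  # hypothetical review window

def audit(reports: list[dict], today: date) -> dict[str, list[str]]:
    flagged = {"stale": [], "duplicate": []}
    seen_titles = set()
    for r in reports:
        if today - r["last_reviewed"] > STALE_AFTER:
            flagged["stale"].append(r["title"])
        key = r["title"].strip().lower()
        if key in seen_titles:
            flagged["duplicate"].append(r["title"])
        seen_titles.add(key)
    return flagged

reports = [
    {"title": "Onboarding usability, Q1", "last_reviewed": date(2022, 1, 10)},
    {"title": "onboarding usability, q1", "last_reviewed": date(2023, 3, 2)},
]
print(audit(reports, today=date(2023, 6, 1)))
# {'stale': ['Onboarding usability, Q1'], 'duplicate': ['onboarding usability, q1']}
```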

Caveat:

We tried auto-tagging via NLP (using MonkeyLearn API), but it confused “learner” with “instructor” in 18% of cases—enough to erode trust. Manual review for tags is still the standard.


7. Embedded Researchers: Integration vs. Isolation

Embedding a UX researcher in each agile squad (compliance, onboarding, role-based learning) worked—until it didn’t. As squads grew, embedded researchers risked isolation from peer learning.

Every month, we ran cross-squad “research roundtables”—quick, 30-minute sessions to surface findings and blockers and to cross-pollinate insights. One example: the compliance pod’s finding that learners were skipping mandatory feedback loops led to a content rewrite across all enterprise cohorts, increasing completion by 11% for a Fortune 500 client.


8. Visibility: Make Research “Loud” in Product and Go-to-Market Forums

Findings only matter if they're seen. We set a goal: every all-hands, every monthly PM sync, one “insight from research” spotlight. This built trust—the sales team started citing usability findings in RFPs, which led to winning a $750K client who cited “evidence-based design” in their procurement debrief.

Edge Case:

There’s a risk of over-summarizing for leadership. We lost nuance around accessibility pain points when reducing a 50-page report to a three-sentence slide. For critical findings (ADA, high-churn segments), always attach an “If you read only one thing, read this” deep-dive link.


9. Shared Taxonomy: Build It, Then Defend It

As our course catalog scaled (120+ courses, 9 industries), we realized “learner,” “facilitator,” “admin,” “coach,” and “SME” were being used interchangeably by different functions. We ran a two-day working session with reps from UX, content, and client ops. The final taxonomy: 13 persona definitions, 8 course-module types, and 3 learning modes, embedded directly in our Notion docs and Figma components.

Optimization:

Every research report now references these shared terms, which reduced “who is this for?” questions in feedback rounds by 54% quarter-over-quarter.
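
One lightweight way to defend the taxonomy in tooling is to encode the agreed terms as an enum and lint report metadata against it. A sketch using a sample of the persona terms (not the full 13):

```python
# Lint report metadata against the canonical persona list.
from enum import Enum

class Persona(Enum):
    LEARNER = "learner"
    FACILITATOR = "facilitator"
    ADMIN = "admin"
    COACH = "coach"
    SME = "sme"

def noncanonical_personas(report_meta: dict) -> list[str]:
    """Return persona terms in a report's metadata that aren't in the taxonomy."""
    valid = {p.value for p in Persona}
    return [t for t in report_meta.get("personas", []) if t.lower() not in valid]

print(noncanonical_personas({"personas": ["learner", "instructor"]}))  # ['instructor']
```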


10. Tier Your Comms for Urgency and Audience

We split research comms into “critical,” “important,” and “FYI.” Any finding that could impact legal, compliance, or enterprise clients shipped as a “critical” Slack + email + SMS alert to stakeholders. Routine usability or satisfaction research went into a weekly digest.
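
As a sketch, the routing rule is only a few lines; the send functions below are placeholders for real Slack, email, and SMS integrations:

```python
# Tiered routing: "critical" fans out to every fast channel; everything else
# accumulates into the weekly digest.

def send_slack(msg: str) -> None: print(f"[slack] {msg}")
def send_email(msg: str) -> None: print(f"[email] {msg}")
def send_sms(msg: str) -> None:   print(f"[sms]   {msg}")

WEEKLY_DIGEST: list[str] = []

def route(finding: str, tier: str) -> None:
    if tier == "critical":      # legal, compliance, or enterprise-client impact
        for send in (send_slack, send_email, send_sms):
            send(finding)
    elif tier == "important":   # same-day Slack, no SMS
        send_slack(finding)
    else:                       # "fyi" and anything unclassified
        WEEKLY_DIGEST.append(finding)

route("Keyboard trap in compliance quiz blocks screen-reader users.", "critical")
route("Minor copy confusion on cohort page.", "fyi")
```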

Nuance:

Sometimes, what looks “routine” gets reclassified after the fact (e.g., a minor accessibility issue that escalated into a legal matter). Build in a review loop: every two weeks, audit past comms for misclassification and iterate.


11. Continuous Onboarding: Scaling Without Losing Tribal Knowledge

New researchers struggled to find context on legacy projects. “Why did we drop cohort-based onboarding for sales enablement in 2022?” became a recurring Slack question. We spun up a searchable “Research Decision Log”—a simple Airtable with decisions, rationale, links to artifacts, and business outcomes.
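
Because the log is a plain Airtable base, it’s easy to search from scripts as well as the UI. A sketch using Airtable’s standard REST API; the token, base ID, and field names below are placeholders:

```python
# Search the Decision Log via Airtable's REST endpoint with filterByFormula.
import json
import urllib.parse
import urllib.request

API_TOKEN = "patXXXXXXXX"  # placeholder personal access token
BASE_ID = "appXXXXXXXX"    # placeholder base ID
TABLE = "Decision Log"     # hypothetical table name

def search_decisions(term: str) -> list[dict]:
    formula = f'SEARCH("{term.lower()}", LOWER({{Decision}}))'
    url = (
        f"https://api.airtable.com/v0/{BASE_ID}/{urllib.parse.quote(TABLE)}"
        f"?filterByFormula={urllib.parse.quote(formula)}"
    )
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {API_TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["records"]

for rec in search_decisions("cohort-based onboarding"):
    print(rec["fields"].get("Decision"), "->", rec["fields"].get("Rationale"))
```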

Result:

Average onboarding time for new UX researchers dropped from 5.5 weeks to 3.2 weeks (tracked across two cohorts, 2022-2023), with feedback scores improving from 3.1 to 4.4 out of 5 for “clarity of research rationale.”


12. Instrument Cross-Team Retros—Make Them Safe, Make Them Count

Internal comms don’t improve in a vacuum. After a failed enterprise module launch (22% client dissatisfaction, 2023), we learned: retros that just recap “what went wrong” fall flat if there’s blame or ambiguity. We transitioned to “blameless” retros with anonymized input via Zigpoll, and tracked action items with true accountability (owners and deadlines posted in a shared doc).

Downside:

Anonymity encouraged more honesty, but occasionally led to venting without context. We had to pair this with a follow-up session—turning raw complaints into actionable, specific next steps.


Synthesis: What’s Transferable, What’s Not

Not every strategy scales linearly. Standardizing intake, artifacts, and comms channels worked well up to about 15 researchers, but began to strain as we approached 30—especially as we added multilingual research and new client verticals (healthcare, fintech). Automation helped on volume but failed on nuance. Centralizing artifacts and defining taxonomy reduced chaos but required relentless upkeep and buy-in. Some things just require human judgment.

For senior UX-researchers in the corporate-training world, the best internal communication strategies aren’t one-size-fits-all—they’re a portfolio. Audit often, adapt relentlessly, and keep a human in the loop where context matters most. Scaling multiplies not just your impact, but also your risk of misalignment.

What we got right: simpler, audience-aware artifacts; tiered, mapped comms; shared, defended taxonomy. What we’re still working on: making sure context never gets lost as the team—and the stakes—keep growing.
