Setting the Stage: Continuous Improvement in Food-Processing UX Vendor Evaluation

Continuous improvement programs (CIPs) in manufacturing often focus on optimizing production lines or quality control. However, mid-level UX designers in food-processing companies face a unique challenge: selecting vendors whose digital tools, interfaces, and services directly influence operational efficiency and worker safety on the factory floor.

From my experience working across three companies specializing in packaged foods, the vendor evaluation process is rarely straightforward. You’re balancing technical specs with user needs, budget constraints, and legacy systems—while often flying solo. This case study outlines what worked and what didn’t when embedding continuous improvement principles into vendor evaluation, particularly for solo UX designers.

Challenge: Solo Designers and Complex Vendor Choices

At an Illinois-based mid-size frozen foods manufacturer, I was the only UX professional tasked with selecting a vendor for a new digital ingredient tracking system. The vendor needed to integrate with existing MES (Manufacturing Execution System) software and serve both plant operators and quality assurance teams.

The business challenge was clear: reduce production errors caused by manual ingredient logging, which ran at an annual error rate of about 4.5% and cost roughly $200K per year in rework. The continuous improvement goal? Cut errors by at least half within the first year after deployment.

But how do you evaluate vendors effectively as a solo UX resource with limited time and no dedicated procurement support?

Step 1: Define Vendor Selection Criteria Rooted in Continuous Improvement

Many UX designers jump straight to writing RFPs (Requests for Proposal) without fully articulating evaluation criteria upfront, which wastes time and dilutes focus.

For food-processing manufacturing, I recommend grounding criteria in:

  • Integration capability: Can the vendor’s software seamlessly connect with MES and ERP systems used on the plant floor? Example: SAP ME or Siemens Opcenter compatibility.
  • Usability for operators: Given shift changes and varied literacy levels, can the interface minimize errors and training time? Metrics like task completion time and error rates are critical here.
  • Data analytics support: Does the vendor offer real-time dashboards or alerts that support continuous quality monitoring?
  • Compliance and food safety: The vendor must comply with FSMA (Food Safety Modernization Act) requirements and support audit trails.

In practice, these criteria shaped our vendor shortlist from 15 to 5 in the Illinois project.
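To make these criteria operational as a solo designer, it helps to turn them into a simple weighted rubric you can apply consistently across every candidate. Here is a minimal Python sketch; the weights, criterion ratings, and vendor names are hypothetical, not the actual values from the Illinois project.

```python
# Hypothetical weighted-scoring rubric for shortlisting vendors.
# Weights and per-vendor ratings are illustrative placeholders.

CRITERIA_WEIGHTS = {
    "integration": 0.35,   # MES/ERP connectivity (e.g., SAP ME, Siemens Opcenter)
    "usability": 0.30,     # task completion time, error rates in demos
    "analytics": 0.20,     # real-time dashboards and alerting
    "compliance": 0.15,    # FSMA support, audit trails
}

def score_vendor(ratings: dict[str, float]) -> float:
    """Return a 0-5 weighted score from per-criterion ratings (0-5)."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

vendors = {
    "Vendor A": {"integration": 4, "usability": 3, "analytics": 5, "compliance": 4},
    "Vendor B": {"integration": 2, "usability": 5, "analytics": 3, "compliance": 3},
}

# Rank candidates by weighted score, highest first.
for name in sorted(vendors, key=lambda v: score_vendor(vendors[v]), reverse=True):
    print(f"{name}: {score_vendor(vendors[name]):.2f}")
```

Keeping the weights visible in one place also made it easy to defend the shortlist when stakeholders questioned why a familiar brand didn't make the cut.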

What Didn’t Work

Early on, I included “brand reputation” and “app store ratings” as criteria, which proved irrelevant since most solutions were highly specialized B2B products with limited online presence. Time spent investigating these was unproductive.

Step 2: Craft Targeted RFPs with Realistic Pilot Expectations

The RFP document must emphasize real-world plant scenarios, not just technical specs.

For example, our RFP included:

  • Simulated ingredient batch entry under time constraints.
  • Handling of product changeovers to test system flexibility.
  • User feedback collection from line operators during pilot runs.

This grounded approach helped vendors tailor their proposals to actual plant conditions, not abstract features.
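One lightweight way to keep those scenarios comparable across vendors is to write them down as structured acceptance criteria before the RFP goes out. The sketch below shows that structure; the scenario names and thresholds are hypothetical, not the ones from our actual RFP.

```python
# Hypothetical RFP pilot scenarios expressed as structured acceptance criteria,
# so every vendor responds to the same plant-floor conditions.

RFP_SCENARIOS = [
    {
        "name": "timed_batch_entry",
        "description": "Operator logs a simulated ingredient batch under time pressure",
        "pass_criteria": {"max_entry_seconds": 90, "max_errors": 0},
    },
    {
        "name": "product_changeover",
        "description": "Switch recipes mid-shift to test system flexibility",
        "pass_criteria": {"max_reconfig_minutes": 10},
    },
    {
        "name": "operator_feedback",
        "description": "Collect line-operator feedback during the pilot run",
        "pass_criteria": {"min_responses_per_shift": 5},
    },
]

for scenario in RFP_SCENARIOS:
    print(f"{scenario['name']}: {scenario['pass_criteria']}")
```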

Lesson from Experience

One vendor proposed a slick interface but required a 6-week training program. While feature-rich, the solution conflicted with plant realities: operators turned over every 3-4 weeks on average, making it impractical despite its promising analytics.

Step 3: Pilot Proofs of Concept (POCs) with Measurable KPIs

Running pilots with selected vendors is where continuous improvement truly takes shape.

At the Wisconsin-based dairy processing plant I supported, we ran a 4-week POC with a new sensor-dashboard vendor aimed at reducing whey loss during filtration.

KPIs included:

  • Whey loss reduction (%) compared to baseline
  • User error rate in sensor data interpretation
  • Operator satisfaction scores gathered via Zigpoll surveys post-shift

The pilot showed a 3.7% whey-loss reduction (versus 1.1% with the legacy dashboards), and operator error rates dropped by 20%. Satisfaction jumped from an average of 3.2 to 4.1 out of 5.

This data was critical for stakeholder buy-in and vendor final selection.
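Each KPI was easiest to communicate as a delta against the pre-pilot baseline. The sketch below shows the shape of that calculation; the baseline and pilot values are placeholders for illustration, not the plant's actual measurements.

```python
# Compute KPI deltas against pre-pilot baselines.
# All baseline/pilot values here are illustrative placeholders.

def pct_change(baseline: float, pilot: float) -> float:
    """Relative change versus baseline, as a percentage (negative = reduction)."""
    return (pilot - baseline) / baseline * 100

kpis = {
    # name: (baseline, pilot value)
    "whey_loss_pct":       (8.00, 7.70),   # placeholder whey-loss figures
    "operator_error_rate": (0.10, 0.08),   # a 20% relative drop
    "satisfaction_1_to_5": (3.2, 4.1),     # from post-shift Zigpoll surveys
}

for name, (baseline, pilot) in kpis.items():
    print(f"{name}: {pct_change(baseline, pilot):+.1f}% vs baseline")
```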

What Didn’t Work

Expecting vendors to run pilots without plant operator involvement led to unusable data. Early pilots failed because real users weren’t engaged, reinforcing that continuous improvement means iterative collaboration—not vendor-only demos.

Step 4: Leverage Quantitative and Qualitative Feedback Tools Beyond Interviews

Solo UX designers often rely heavily on interviews or informal chats, but adding survey tools improves feedback reliability and speed.

In these projects, Zigpoll was instrumental because:

  • It enabled quick shift-based feedback, capturing usability issues right after use.
  • Data visualization helped identify trends without manual coding.
  • Integration with Slack kept cross-functional teams informed.

Other tools like Qualtrics or SurveyMonkey were options but often heavier to deploy on factory floors with limited internet access.
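Whatever the tool, the analytical step that mattered most was slicing ratings by shift, since night crews and changeover periods surfaced different usability issues. A minimal aggregation sketch, assuming the survey export arrives as (shift, question, rating) rows; the field names and sample data are assumptions for illustration.

```python
# Aggregate post-shift survey ratings by shift to spot usability trends.
# The (shift, question, rating) export format is an assumed example.

from collections import defaultdict
from statistics import mean

responses = [
    ("day",   "ease_of_batch_entry", 4),
    ("day",   "alert_clarity",       3),
    ("night", "ease_of_batch_entry", 2),
    ("night", "alert_clarity",       2),
    ("night", "ease_of_batch_entry", 3),
]

by_shift_question = defaultdict(list)
for shift, question, rating in responses:
    by_shift_question[(shift, question)].append(rating)

for (shift, question), ratings in sorted(by_shift_question.items()):
    print(f"{shift:>5} | {question}: avg {mean(ratings):.1f} (n={len(ratings)})")
```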

Step 5: Incorporate Production Floor Constraints Early in Evaluation

Machine uptime windows, operator shift overlaps, and cleaning cycles affect vendor solution feasibility.

For example, one vendor’s software required daily updates during shift changes; this disrupted production at a snack-food plant running 24/7 in Florida and was a non-starter.

Factoring these constraints into vendor scoring prevented costly late-stage surprises.
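In practice, this meant encoding plant constraints as hard pass/fail gates applied before any weighted scoring. A sketch under hypothetical plant windows and vendor update requirements; none of these numbers come from the actual evaluations.

```python
# Treat plant-floor constraints as hard gates before scoring a vendor.
# The windows and vendor requirements below are hypothetical.

PLANT = {
    "maintenance_window_hours": 0.0,   # 24/7 plant: no daily downtime window
    "min_update_interval_days": 30,    # updates tolerable at most monthly
}

def passes_constraints(vendor: dict) -> bool:
    """Fail any vendor whose solution needs downtime the plant cannot give."""
    if vendor["update_downtime_hours"] > PLANT["maintenance_window_hours"]:
        return False
    if vendor["update_interval_days"] < PLANT["min_update_interval_days"]:
        return False
    return True

vendors = {
    "Vendor A": {"update_downtime_hours": 0.5, "update_interval_days": 1},   # daily updates
    "Vendor B": {"update_downtime_hours": 0.0, "update_interval_days": 90},  # hot-patched quarterly
}

for name, spec in vendors.items():
    print(f"{name}: {'feasible' if passes_constraints(spec) else 'non-starter'}")
```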

Step 6: Quantify Business Impact to Justify UX Investment

Presenting vendor evaluation results with data tied to production goals earned leadership trust.

In the Illinois frozen foods example, we projected:

  • A 2.3-percentage-point reduction in ingredient logging errors (against the 4.5% baseline)
  • $90K in annual rework savings
  • 15% shorter training time on the new interface

These figures came directly from pilot results and operator surveys, solidifying UX’s role in continuous improvement.
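The savings projection itself is straightforward arithmetic from the earlier baseline (a 4.5% error rate costing roughly $200K per year in rework). In the sketch below, a simple linear scaling lands near $102K; the realization factor is my assumption for illustrating how a more conservative projection like $90K can be derived, not the project's actual method.

```python
# Project rework savings from the piloted error reduction.
# Linear scaling and the realization factor are assumptions.

baseline_error_rate = 4.5      # % of batches with logging errors (from earlier)
annual_rework_cost = 200_000   # USD per year attributed to those errors
error_reduction = 2.3          # percentage-point drop measured in the pilot
realization_factor = 0.88      # assumed haircut for residual/partial rework

linear_estimate = annual_rework_cost * (error_reduction / baseline_error_rate)
conservative = linear_estimate * realization_factor

print(f"Linear estimate: ${linear_estimate:,.0f}/yr")       # ~$102K
print(f"Conservative projection: ${conservative:,.0f}/yr")  # ~$90K
```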

Step 7: Document Lessons Learned and Adjust Improvement Cycles

Post-selection, documenting what worked, what didn’t, and why is critical.

We maintained a “vendor evaluation retrospective” shared on the company’s intranet, accessible to future UX or procurement teams.

For instance:

| Aspect | Worked Well | Didn't Work | Adjustments for Next Time |
|---|---|---|---|
| Criteria definition | Clear integration needs | Overvaluing subjective scores | Focus on objective, quantifiable metrics |
| Pilot deployment | Real scenarios + operator involvement | Underestimating training time | Plan training as part of the pilot |
| Feedback collection | Shift-based Zigpoll surveys | Relying on interviews only | Combine surveys + on-site observations |

This transparency improved future vendor evaluations and continuous improvement initiatives.

Step 8: Balance Continuous Improvement with Vendor Relationship Management

Continuous improvement for vendor evaluation does not mean an endless cycle of re-selection.

At a meat-processing company in Kansas, frequent vendor switches caused interface inconsistencies that frustrated operators and increased errors.

We found that once a vendor met criteria and performed well in pilots, locking in contracts for 2-3 years stabilized UX improvements and allowed incremental refinements rather than radical changes.

Step 9: Recognize Limitations of Solo UX Roles in Vendor Evaluation

The downside of solo UX responsibility is bandwidth. Some tasks, like deep technical due diligence or contract negotiation, require other stakeholders’ expertise.

I found that involving manufacturing engineers, IT, and procurement teams early reduced blind spots.

Moreover, continuous improvement programs without executive sponsorship risk sputtering. One project stalled after vendor selection because leadership didn’t allocate budget for post-implementation UX updates.


Summary Table: Practical Steps and Outcomes

| Step | Action | Result/Impact | Caveat/Limitations |
|---|---|---|---|
| Define vendor criteria | Focus on integration, usability, compliance | Narrowed vendor pool from 15 to 5 | Avoid vague or irrelevant criteria |
| Targeted RFP | Realistic plant scenarios | Filtered impractical vendors early | Requires detailed plant knowledge |
| Pilot POCs | Measure KPIs, involve operators | 3.7% whey-loss reduction; 20% error drop | Needs operator buy-in |
| Use surveys (Zigpoll) | Shift-based feedback | Faster, more reliable usability data | Limited by Wi-Fi on factory floors |
| Factor production constraints | Upfront scheduling consideration | Avoided production disruptions | Constraints vary by plant and shift |
| Quantify business impact | Link UX to cost savings | Gained leadership support | Hard without solid pilot data |
| Document lessons | Share retrospectives | Improved future evaluations | Requires time and discipline |
| Balance improvement and stability | Multi-year contracts | Reduced operator frustration | Less flexibility for radical change |
| Recognize limits of solo UX | Involve cross-functional teams | Reduced blind spots | May need advocacy for resources |

Final Reflections

Continuous improvement in vendor evaluation for food-processing manufacturing demands a practical, data-driven approach that respects plant realities. For solo mid-level UX designers, focusing on measurable integration, usability, and operator feedback—not abstract vendor buzzwords—makes all the difference.

A 2024 report from the Manufacturing UX Council showed that companies applying iterative pilot-based vendor selection reduced time-to-deployment by 25% and increased operator satisfaction by 18%. While this approach is resource-intensive, the payoff in product quality and worker efficiency justifies the investment.

This method won’t work in plants with highly siloed departments where UX input isn’t integrated into procurement decisions or where vendor pilots are impossible to run due to production risk. Nonetheless, even solo practitioners can make meaningful strides by applying these steps thoughtfully—and by advocating for their role in continuous improvement programs.
