
Leading a 17-Person International Team: Product Management Lessons from Softavera Kiosk

January 26, 2026 · 15 min read

Lessons learned from managing a 17-person team through an 18-month international system transformation. Agile methodology, stakeholder management, conflict resolution, and quality-first approach in action.

Introduction: The Product Owner Challenge

In January 2020, I took on what would become the most challenging project of my career: leading the complete transformation of Softavera's kiosk system from a Windows-only WinDev application to a modern, international-ready React platform. The technical challenge was significant, but the real test was managing 17 people across three distinct teams—2 project managers, 10 developers, and 5 QA engineers—while delivering on aggressive timelines with international clients waiting.

As Product Owner and UI/UX Supervisor, my role wasn't just about defining features or prioritizing the backlog. It was about orchestrating a complex dance between demanding stakeholders (Burger King, Quick, and other major restaurant chains), a development team pushing for speed, and a QA team insisting on quality. Every decision rippled across multiple time zones, regulatory frameworks (NF525 in France, PMR accessibility, GDPR), and technical domains.

This article shares the unfiltered lessons I learned over 18 months—what worked, what failed spectacularly, and how I adjusted mid-flight. If you're managing a technical team through a critical transformation, this is the playbook I wish I had at the start.

Team Structure & Roles

Understanding who does what is critical when you're coordinating 17 people. Here's how we structured the team:

  • 2 Project Managers: Handled external client communication, contract management, and high-level planning. They were my buffer between business demands and technical reality. One PM focused on French clients (Burger King La Réunion, Quick), the other on international expansion (USA, UK, Germany, Australia).
  • 10 Developers: Split into two squads—Frontend (React, responsive UI, CMS Kiosk interface) and Backend (Node.js, payment gateway integrations, POS systems like Cashpad and Merim). Each squad had a tech lead who reported directly to me during sprint planning.
  • 5 QA Engineers: The gatekeepers. Two focused on functional testing (user flows, edge cases), two on automation (Selenium, Cypress), and one on compliance testing (NF525 certification, PMR accessibility standards). This team had veto power over releases—no exceptions.
  • Me (Product Owner & UI/UX Supervisor): My job was to translate business needs into technical specs, prioritize ruthlessly, mediate conflicts, and ensure the design system maintained coherence across devices. I also ran A/B tests on the kiosk UI and made data-driven decisions on conversion optimization.

Communication flows were deliberately designed: PMs owned client relationships, but I had final say on scope and timeline. Developers worked directly with QA during sprints, but I mediated when conflicts arose (which was often). Every Friday, I held a 30-minute alignment meeting with the entire team to celebrate wins, address blockers, and course-correct.

Agile Methodology Implementation

We adopted Agile because waterfall was a non-starter for this project. With international regulatory requirements changing mid-project (NF525 updates, new GDPR interpretations), we needed flexibility. But Agile isn't a magic wand—it requires discipline.

Why Agile for this project:

  • Unpredictable requirements: Payment providers changed APIs, client feedback from pilot projects required pivots, and compliance rules evolved. 2-week sprints let us adjust without derailing the entire roadmap.
  • Continuous stakeholder feedback: Burger King and Quick tested prototypes every month. Sprint reviews became client demos, turning feedback loops into product validation.
  • Team morale: Shipping something every 2 weeks kept momentum high. Developers saw their work go live, QA felt their impact, and PMs had tangible updates for clients.

Our sprint rhythm:

  • Monday morning: Sprint planning (2 hours). I presented pre-groomed stories, the team estimated complexity, and we committed to a realistic scope. Key rule: No story entered a sprint without acceptance criteria and mockups.
  • Daily standups: 15 minutes, strictly time-boxed. What shipped yesterday, what's shipping today, what's blocked. Blockers got resolved in separate "parking lot" sessions immediately after.
  • Friday afternoon: Sprint review (1 hour) where we demoed completed work to stakeholders, followed by a retrospective (45 minutes) where the team reflected on process improvements. Retros surfaced issues early—like when QA felt excluded from design decisions, prompting us to add them to UI review sessions.

The result: Our sprint velocity improved 30% over 12 months as the team matured. We delivered 85% of sprints on time—not perfect, but significantly better than the 60% industry average for complex projects.

Sprint Planning: Feeding the Development Machine

The biggest challenge in product management isn't defining the roadmap—it's keeping 10 developers fed with clear, executable work every single sprint. If devs are waiting on specs, you've already lost.

My sprint planning workflow:

  • Week before sprint: Backlog grooming. I wrote user stories with acceptance criteria, attached Figma mockups, and defined API contracts. Each story had to answer: Who needs this? Why now? What does success look like?
  • Prioritization (MoSCoW method): Must-have (regulatory compliance like NF525), Should-have (CMS Kiosk customization features), Could-have (analytics dashboards), Won't-have (advanced A/B testing in v1). This prevented scope creep—clients always wanted everything, but MoSCoW gave me a framework to say "not yet."
  • Capacity planning: I tracked each developer's velocity and allocated points conservatively. Rule of thumb: plan for 70% capacity to account for bugs, meetings, and the inevitable "this API changed overnight" surprises. We also reserved 20% of each sprint for tech debt—refactoring, test coverage, and performance optimization.
  • Pre-sprint tech sync: 30 minutes with tech leads before sprint planning to flag any blockers. "Do we have API keys for the new payment provider?" "Is the staging environment stable?" Address these before Monday, not during.
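The capacity math above (plan for 70% of raw velocity, then reserve 20% of the plan for tech debt) can be sketched as a small helper. The function and the team shape are illustrative assumptions, not the project's actual tooling:

```typescript
// Sketch of the sprint capacity rule described above. The 70% planning
// factor and 20% tech-debt reserve mirror the article; the helper itself
// is a hypothetical illustration.

interface DeveloperVelocity {
  name: string;
  averagePoints: number; // rolling average of completed story points
}

const PLANNING_FACTOR = 0.7;   // plan for 70% of raw capacity
const TECH_DEBT_RESERVE = 0.2; // reserve 20% of the plan for tech debt

function plannableFeaturePoints(team: DeveloperVelocity[]): number {
  const rawVelocity = team.reduce((sum, dev) => sum + dev.averagePoints, 0);
  const planned = rawVelocity * PLANNING_FACTOR;           // bugs, meetings, surprises
  const featureBudget = planned * (1 - TECH_DEBT_RESERVE); // rest goes to refactoring
  return Math.floor(featureBudget);
}

// Example: a 10-developer squad averaging 8 points each.
const squad = Array.from({ length: 10 }, (_, i) => ({
  name: `dev-${i + 1}`,
  averagePoints: 8,
}));
console.log(plannableFeaturePoints(squad)); // 80 * 0.7 * 0.8 = 44
```

The point of the floor-not-ceiling approach is that an under-committed sprint finishes early and pulls from the backlog, while an over-committed one erodes trust with stakeholders.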

The biggest mistake I made early on: Underestimating integration complexity. Payment gateway work that seemed like "3 story points" turned into "8 story points" when we discovered Stripe's webhook retry logic conflicted with our idempotency design. After that, I added a 1.5x buffer to any story involving third-party APIs.
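The webhook issue above is worth a sketch: payment providers such as Stripe retry webhook deliveries, so the receiving side must deduplicate by event ID rather than assume exactly-once delivery. This is a minimal, hypothetical handler; the in-memory store and event shape are assumptions, not Softavera's actual design:

```typescript
// Minimal webhook deduplication sketch: providers like Stripe retry
// deliveries until acknowledged, so handlers must be idempotent. The
// in-memory Set and event shape here are illustrative assumptions;
// a real system would use durable storage.

interface WebhookEvent {
  id: string;   // provider-assigned unique event ID
  type: string; // e.g. "payment_intent.succeeded"
}

const processedEventIds = new Set<string>();

// Returns true if the event was processed, false if it was a duplicate retry.
function handleWebhook(event: WebhookEvent): boolean {
  if (processedEventIds.has(event.id)) {
    return false; // retry of an event we already handled: acknowledge, do nothing
  }
  processedEventIds.add(event.id);
  // ...apply the business effect exactly once (record payment, print receipt)...
  return true;
}

// A retried delivery of the same event is a no-op:
const evt = { id: "evt_123", type: "payment_intent.succeeded" };
console.log(handleWebhook(evt)); // true  (first delivery)
console.log(handleWebhook(evt)); // false (provider retry, deduplicated)
```

Getting this interaction wrong is exactly the kind of detail that turns a 3-point story into an 8-point one.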

Stakeholder Management

Managing stakeholders is like conducting an orchestra—everyone has different priorities, and your job is to keep them in harmony (or at least not openly fighting).

Our stakeholder map:

  • Clients (Burger King, Quick, etc.): Wanted features fast, customization without limits, and zero downtime during rollout. Communication cadence: bi-weekly demos, monthly roadmap reviews.
  • Tech team (developers, QA): Wanted clean architecture, time for refactoring, and protection from scope creep. Communication cadence: daily standups, weekly tech syncs where I defended their need for "boring" work like test coverage.
  • Business (Softavera leadership): Wanted revenue growth, international expansion, and competitive differentiation (hence the CMS Kiosk). Communication cadence: monthly executive updates with metrics—adoption rates, client satisfaction, revenue pipeline.
  • Legal & Compliance: Cared about NF525 certification, GDPR compliance, PMR accessibility. Communication cadence: quarterly audits, ad-hoc consultations when regulations changed.
  • External partners (Stripe, Adyen, POS vendors): Needed integration specs, support contracts, and joint customer success. Communication cadence: quarterly business reviews, reactive escalations when APIs broke.

My stakeholder management principles:

  • Over-communicate, then communicate more: Send weekly updates even when there's "nothing to report." Silence creates anxiety. A 3-bullet email ("Shipped X, blocked on Y, next up Z") keeps everyone aligned.
  • Translate between worlds: Clients don't care about "microservices architecture." They care that the kiosk boots in under 3 seconds. Developers don't care about "brand consistency." They care about reusable component libraries. My job was to speak both languages.
  • Set expectations early: When Burger King asked for a loyalty program integration in sprint 12, I showed them the roadmap and said "Sprint 18, after payment gateways are stable." Managing expectations prevents resentment.
  • Protect the team: Clients should never email developers directly with "urgent" requests. All requests flow through me or the PMs, where we triage, prioritize, and batch them into sprints. This prevents context-switching and chaos.

Result: Zero client escalations in 18 months. That doesn't mean clients were always happy, but they were always informed.

Conflict Resolution: Dev vs. QA Dynamics

If there's one universal truth in software projects, it's this: developers and QA will clash. Developers want to ship fast; QA wants to ship right. Both are correct, and that's the problem.

Real conflicts we faced:

  • Sprint 7 - Payment gateway release: Developers wanted to ship Stripe integration so clients could test. QA found 3 edge cases (network timeout handling, duplicate transaction prevention, currency conversion rounding). Developers argued "those are rare, we'll fix in production." QA refused to sign off. Standoff.
  • My call: QA was right. Payment bugs lose money and trust. We delayed the release by 3 days, fixed the edge cases, and shipped. Developers grumbled, but 2 weeks later, a client hit the exact timeout scenario QA had flagged. No data loss, no downtime, no incident. The team learned.
  • Sprint 14 - UI redesign for accessibility: QA flagged that our new button sizes didn't meet PMR (reduced mobility) standards—touch targets needed to be 48x48px minimum. Developers pushed back: "That breaks the design system, everything will look huge." Designers were caught in the middle.
  • My call: Compliance wasn't negotiable—France required PMR certification. But I gave developers creative freedom: "Meet the 48px requirement however you want—larger buttons, more spacing, different layouts. You have 1 sprint to prototype 3 options, and we'll user-test." They came back with a solution that satisfied QA, passed certification, and actually improved usability. Conflict became collaboration.
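The 48×48px floor from the PMR story is easy to encode as an automated design-system check, which turns a recurring argument into a lint failure. A hedged sketch; the component descriptor is a hypothetical shape, not the real design-system schema:

```typescript
// Sketch of a touch-target audit for the 48x48px minimum described above.
// The TouchTarget descriptor is a hypothetical shape invented for
// illustration, not the team's actual schema.

const MIN_TOUCH_TARGET_PX = 48;

interface TouchTarget {
  name: string;
  width: number;  // rendered size in px, including padding
  height: number;
}

function violatesTouchTarget(t: TouchTarget): boolean {
  return t.width < MIN_TOUCH_TARGET_PX || t.height < MIN_TOUCH_TARGET_PX;
}

// Returns the names of components that fail the minimum, for a CI report.
function auditTouchTargets(targets: TouchTarget[]): string[] {
  return targets.filter(violatesTouchTarget).map((t) => t.name);
}

console.log(
  auditTouchTargets([
    { name: "add-to-cart", width: 56, height: 48 },
    { name: "quantity-minus", width: 32, height: 32 }, // too small
  ])
); // ["quantity-minus"]
```

A check like this frames compliance as a constraint the team designs within, rather than a reviewer's opinion to argue with.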

My conflict resolution framework:

  • Listen first: Let both sides vent. Developers feel unheard when QA "nitpicks." QA feels disrespected when developers "rush crap code." Acknowledge emotions before solving problems.
  • Identify the real constraint: Is it time? Money? Regulatory compliance? Technical debt? Name the constraint, and the solution often becomes obvious.
  • Escalate to data: "What's the impact if we ship this bug?" "How many users will hit this edge case?" "What's the rollback cost?" Data cuts through opinions.
  • Make the call: As Product Owner, the decision is mine. I own the consequences. Developers and QA can disagree, but once I decide, we move forward as a team.

Creating healthy tension, not toxic conflict: Dev vs. QA tension is good—it's the friction that produces quality. But it becomes toxic when it's personal. I enforced a rule: critique the code, not the coder. "This function has a race condition" is constructive. "You always write buggy code" is toxic. I shut down personal attacks immediately.

Quality Gates & Process Before Vibe Coding

Let me be blunt: "vibe coding"—shipping features fast with minimal process—works for prototypes and side projects. It does not work for mission-critical systems handling payments for 7,000 restaurants. Quality gates aren't bureaucracy; they're insurance.

Our quality gates (non-negotiable):

  • Code reviews (mandatory): Every pull request required 2 approvals—one from a peer, one from a tech lead. No exceptions, not even for "hotfixes." Code reviews caught bugs, spread knowledge, and enforced consistency.
  • Unit test coverage (80% minimum): CI/CD pipeline blocked merges if coverage dropped below 80%. Developers hated this at first ("writing tests is slow!"), but after 6 months, they saw the payoff: refactoring became safe, regressions became rare.
  • Integration testing (E2E on critical paths): We used Cypress to test full user flows: add item to cart, apply discount, pay with Stripe, print receipt. These tests ran on every deployment to staging. Slow? Yes. Essential? Absolutely.
  • QA sign-off: Nothing shipped to production without QA approval. QA tested functionality, compliance (NF525, PMR), cross-browser compatibility, and performance. They had a checklist for each feature, and incomplete checklists meant no deploy.
  • Staging deployment (48-hour soak): Every release lived in staging for 48 hours before production. This caught environment-specific issues (database connection pooling, CDN caching) that unit tests missed.
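The 80% coverage gate above can be expressed as a tiny CI step: parse the coverage summary and fail the pipeline below the threshold. A sketch only; the summary shape loosely follows Istanbul/nyc's coverage-summary.json, but treat the field names and wiring as assumptions rather than the team's exact pipeline:

```typescript
// Sketch of the "block merges below 80% coverage" gate described above.
// The summary shape loosely follows Istanbul/nyc's coverage-summary.json;
// field names and wiring are assumptions, not the actual pipeline config.

const COVERAGE_THRESHOLD = 80;

interface CoverageSummary {
  total: { lines: { pct: number }; branches: { pct: number } };
}

function coverageGate(summary: CoverageSummary): { pass: boolean; reason: string } {
  const { lines, branches } = summary.total;
  const worst = Math.min(lines.pct, branches.pct); // gate on the weakest metric
  return worst >= COVERAGE_THRESHOLD
    ? { pass: true, reason: `coverage ${worst}% >= ${COVERAGE_THRESHOLD}%` }
    : { pass: false, reason: `coverage ${worst}% < ${COVERAGE_THRESHOLD}%` };
}

// In CI this would read the coverage report from disk and exit non-zero on
// failure; here we just evaluate two sample summaries.
console.log(coverageGate({ total: { lines: { pct: 86 }, branches: { pct: 81 } } }).pass); // true
console.log(coverageGate({ total: { lines: { pct: 86 }, branches: { pct: 74 } } }).pass); // false
```

Making the gate mechanical matters: a threshold enforced by a script never gets waived "just this once" the way a human reviewer can.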

Why process mattered: In the first 6 months post-launch, we had zero critical bugs in production. Not "low bugs"—zero. No payment failures, no data corruption, no downtime. That track record earned client trust and gave us room to move fast later.

The pushback: Developers initially saw these gates as "red tape." My response: "Would you trust your credit card to a kiosk with no testing?" Silence. Process exists to protect users, not slow developers.

UI/UX Design Supervision

As Product Owner, I wasn't just defining what to build—I was also supervising how it looked and felt. UI/UX decisions directly impacted conversion rates, and we treated them as seriously as backend architecture.

Design System creation: We built a component library in React (buttons, cards, modals, forms) with strict design tokens (colors, typography, spacing). This ensured consistency across tablets, phones, and large kiosk screens. The design system wasn't just documentation—it was enforced through code reviews.

User testing with real restaurant staff: Every quarter, we brought in restaurant employees to test the kiosk. We watched them struggle with the interface, asked "Why did you click there?" and iterated. One insight: staff needed a "manager override" button accessible within 2 taps, not buried in settings. We shipped that change and satisfaction jumped.

A/B testing for conversion optimization: We ran A/B tests on critical flows: menu layout (grid vs. list), upsell timing (before payment vs. after cart), and CTA button text ("Order Now" vs. "Continue"). One test showed that moving the "Add to Cart" button from bottom-right to center increased conversions by 12%. Data, not opinions, drove design.
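A relative-lift number like the 12% above comes from comparing conversion rates between the two variants. The arithmetic is simple enough to sketch; the sample counts are invented for illustration:

```typescript
// Minimal A/B relative-lift arithmetic, as used to compare kiosk variants.
// Sample counts are invented for illustration.

interface Variant {
  visitors: number;
  conversions: number;
}

function conversionRate(v: Variant): number {
  return v.conversions / v.visitors;
}

// Relative lift of B over A, as a percentage rounded to one decimal place.
function relativeLiftPct(a: Variant, b: Variant): number {
  const ra = conversionRate(a);
  const rb = conversionRate(b);
  return Math.round(((rb - ra) / ra) * 1000) / 10;
}

const control = { visitors: 10000, conversions: 2500 }; // button bottom-right
const variant = { visitors: 10000, conversions: 2800 }; // button centered
console.log(relativeLiftPct(control, variant)); // 12
```

In practice you would also check statistical significance before acting on a lift; a 12% difference on a handful of sessions is noise, not signal.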

Results:

  • Average cart value increased 15% (from better upsell placement)
  • Order time dropped 30% (from streamlined menu navigation)
  • Staff satisfaction: 4.7/5 (from improved manager tools)
  • Customer satisfaction: 4.5/5 (from intuitive interface)

The lesson: UI/UX isn't subjective. It's measurable. If you can't A/B test a design decision, you're guessing.

Lessons Learned: What Worked & What Didn't

Let's be honest: not everything went smoothly. Here's what I got right, what I got wrong, and what I'd change.

What worked:

  • Agile sprints with clear ownership: 2-week sprints kept us nimble. Every story had a single owner (developer or QA lead), and ownership eliminated confusion. Result: 85% on-time delivery.
  • Over-communication: Weekly updates, daily standups, and transparent roadmaps kept everyone aligned. Zero surprises = zero crises.
  • Quality gates: Code reviews, test coverage, QA sign-off—these slowed us down initially, but the payoff was massive. Zero critical bugs in the first 6 months post-launch.
  • Data-driven design: A/B testing settled design debates. When data showed a 12% conversion lift from a button placement change, subjective opinions became irrelevant.

What didn't work (my failures):

  • Initial underestimation: I underestimated payment gateway complexity. What I thought was "2 sprints" became "5 sprints" when we hit webhook retry logic, currency conversion edge cases, and PCI compliance requirements. Lesson: add 1.5x buffer to any story involving third-party APIs.
  • Scope creep (my fault): In Sprint 8, I said yes to a "small feature request" from Burger King (custom loyalty points display). It spiraled into 3 sprints of work because it required backend changes, UI redesigns, and QA testing across all clients. I should have pushed it to phase 2.
  • Not involving QA early enough: In Sprint 5, developers built a feature without QA input, only to have QA flag compliance issues during review. We had to rework 40% of the code. After that, QA joined design reviews from day 1.

What I'd do differently:

  • Build a dedicated DevOps role earlier: We spent too much time debugging deployment issues. A full-time DevOps engineer would have saved 3-4 weeks across the project.
  • Run pilot programs sooner: We waited until Sprint 16 to deploy to Burger King. We should have piloted at Sprint 10 with a limited rollout. Real user feedback beats internal testing.
  • Say "no" more often: I said "yes" too often to client requests. Every "yes" was a "no" to something else. I should have been more ruthless with the MoSCoW framework.

Conclusion: Product Leadership Principles

After 18 months and 7,000 deployed kiosks, here are the principles I'd stake my career on:

  • Clear vision + flexibility: Have a North Star (for us: international kiosk platform with CMS), but adapt the path. Agile isn't "no plan"—it's "flexible plan."
  • Empower teams, own outcomes: Give developers autonomy in how they build, but as Product Owner, you own whether it ships and whether it works. Trust, but verify.
  • Over-communicate by default: When in doubt, share more. Weekly updates, transparent roadmaps, and open retrospectives prevent 90% of conflicts.
  • Quality is never compromised: Speed without quality is just rework. Code reviews, test coverage, QA gates—these aren't overhead, they're insurance. Zero critical bugs in 6 months proved it.
  • People > processes: Agile frameworks, sprint planning, MoSCoW prioritization—these are tools. But the real work is understanding what motivates your team, resolving conflicts with empathy, and celebrating wins together. The Softavera Kiosk succeeded because 17 people believed in it and each other.

Product management isn't about writing perfect specs or running flawless sprints. It's about keeping 17 people aligned, motivated, and shipping value—even when payment APIs break, clients change their minds, and QA finds bugs 2 hours before a demo. If you can navigate that chaos with clarity and empathy, you're not just managing a product. You're leading a team.