
When people hear “QA,” they often picture a team writing tests – or worse, holding up a much-anticipated release to run them. But in reality, our Software Engineer in Test (SET) role goes far beyond test execution, and it’s definitely not about slowing things down. In fact, because we’re connected to every team involved in a project, we see it as our responsibility to accelerate delivery by promoting alignment, reducing risk, and supporting smarter decisions across the board.
That mindset was essential when we launched the Central Prioritization System (CPS) project. With multiple teams contributing and urgency driving every milestone, QA had to step up—not just to write tests, but to ensure testability, enable fast feedback loops, and foster shared ownership. In this post, we’ll walk through how we approached QA in this cross-functional, fast-moving environment, what challenges we faced, and what we learned about driving quality without becoming a bottleneck.
The Project
Implementing CPS was a major company-wide initiative, aimed at significantly boosting productivity in our core revenue stream. It played a key role in supporting one of our top strategic priorities.
The system was designed to improve the efficiency of the claiming team by delivering higher-value, pre-qualified video leads. Instead of spending time manually searching for claimable content, claimers (team members responsible for identifying and confirming copyright-eligible content) receive ranked leads directly through ClaimMate (an internal custom plugin claimers use) – allowing them to focus on review and decision-making rather than discovery. This lead generation was powered by data pulled from both internal systems and external third-party content platforms we integrated with.
While the concept was straightforward, the implementation was anything but. Behind the scenes, CPS relied on complex logic and intricate system integrations.
The project brought together multiple product managers, data scientists, ML engineers, and several engineering teams responsible for platform services, content matching, and tooling, along with their dedicated SETs and the entire claims operations team. It was a true cross-functional effort, uniting expertise from across the company to deliver a high-impact solution. With this many moving parts, teams working in different time zones, and intense time pressure – CPS was the company’s top strategic priority – coordination and alignment were essential from day one.
Common QA Challenges
When a project is both top priority and under intense urgency, bottlenecks are almost inevitable. Features often reach the testing phase already behind schedule, and suddenly every hour spent in QA feels like a delay – reinforcing the perception that testing is the bottleneck. That perception isn’t entirely unfair, but the reality is often more about timing: when testing is the final step, any upstream delays compress our timeline, amplifying pressure and expectations.
Coordinating across many teams is also inherently challenging. Miscommunications, undocumented decisions, or parallel initiatives can easily derail plans – leading to mismatched expectations, incomplete dependencies, and repeated rework. Getting everyone back in sync takes time and effort, especially when the project is moving fast.
Finally, in a large cross-functional effort, ownership can blur. For QA in particular, the role can shift from enabling quality to tracking status – and when testers communicate with nearly every team, it’s easy for them to become unofficial project coordinators. Without clear boundaries, that can dilute focus and stretch capacity in the wrong direction.
Unique Challenges We Faced
On a personal level, the first challenge was that I had just joined the company as a Lead SET. Just as I was starting to get familiar with our business, systems, and people, I found myself responsible for coordinating the testing of a project that required deep, cross-domain knowledge in all three. To add to that, our QA team was essentially newly formed – most members were still ramping up, without the context or experience needed to follow such a complex initiative end-to-end.
Another major challenge was test data. Most of the time, this wasn’t data we could generate ourselves. The complexity of the system, combined with our limited internal knowledge of it, made it difficult to ask the right questions – or write the right queries – to get what we needed. On top of that, some of the data lived in systems controlled by other teams, or required specialized setup, making our tests fragile by default. Many test failures weren’t bugs, but data inconsistencies or missing dependencies.
We also faced the challenge of testing in isolation. For much of the early project timeline, key components – such as claim checks and video eligibility – were being developed separately. Without full integration, or even a shared data source like CPS data in Snowflake (Orfium’s data warehouse), our tests were limited to checking individual database entries. We couldn’t validate end-to-end behavior, which made it difficult to assess real-world scenarios or have full confidence in system readiness.
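For a sense of what those isolated checks looked like in practice, here is a minimal sketch of the kind of targeted verification we were limited to at that stage. The table, column names, and eligibility rule are hypothetical placeholders rather than the real CPS schema, but the pattern is the same: confirm that an individual lead record exists in Snowflake and satisfies basic integrity rules, without exercising the end-to-end flow.

```python
# Hypothetical sketch: verifying a single CPS lead entry in Snowflake.
# Table and column names are illustrative, not the real CPS schema.
import snowflake.connector

def check_lead_entry(video_id: str) -> None:
    conn = snowflake.connector.connect(
        account="<account>", user="<user>", password="<password>",
        warehouse="<warehouse>", database="<database>", schema="<schema>",
    )
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT eligibility_status, priority_score "
            "FROM cps_leads WHERE video_id = %s",
            (video_id,),
        )
        row = cur.fetchone()
        # A missing row usually meant a data dependency hadn't landed yet,
        # not a product bug: exactly the fragility described above.
        assert row is not None, f"No CPS lead found for video {video_id}"
        eligibility_status, priority_score = row
        assert eligibility_status == "ELIGIBLE"
        assert priority_score is not None and priority_score >= 0
    finally:
        conn.close()
```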
Lastly, we had to carefully plan how to roll out the feature to over 140 claimers without disrupting their work. These users directly impact company revenue, so we needed their feedback – but under no circumstances could we risk delivering a broken or inefficient tool. Balancing rollout safety with early validation required close coordination, constant monitoring, and a great deal of care.
How We Navigated Them
To overcome our limited experience and knowledge without slowing the project down, we leaned heavily on trust, planning, and collaboration. As QA, one of our most important responsibilities is to ask the right questions – even when we’re still learning. Thankfully, we work in an environment where help is readily offered and communication is open. By consistently reaching out to the right people, we filled knowledge gaps quickly – and in doing so, we also made our presence and intent visible to the wider team.
What initially seemed like a setback – having a newly formed team with limited context – turned out to be a great opportunity for growth. To make that growth sustainable, we needed clear guidance, support, and most importantly, trust. Everyone knew they weren’t on their own. Mistakes were learning points, not landmines, and that mindset built both confidence and momentum.
We also made it a priority to bring test planning into the conversation early. By clearly scheduling testing activities from the beginning, we ensured that the right people were available when we needed them – despite the project’s intense timeline. This early visibility helped position testing as a planned, integrated phase of delivery rather than a late-stage bottleneck.
Communication was key on multiple fronts. While we drew clear boundaries around our role and avoided becoming de facto project managers, we used our cross-team visibility to flag misalignments early and push for clarity when things got fuzzy. This helped the entire project stay on track.
Data preparation was one of the hardest challenges. We often didn’t have control over the data we needed, and due to system complexity and limited internal knowledge, even knowing what to ask for was tricky. But again, asking early, escalating appropriately, and leaning on more experienced colleagues helped us navigate those blockers. Some compromises were necessary, but QA isn’t about chasing perfection—it’s about managing risk pragmatically.
Finally, when it came time to roll out the feature to 140 claimers, we took a staggered approach to minimize disruption and risk. We identified a small group of experienced claimers to act as early adopters, effectively turning them into expert alpha testers. This not only gave us valuable real-world feedback but also ensured we were already delivering value – even as the rest of the team was still onboarding.
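A staggered rollout like this can be gated with something as simple as an allowlist in front of the feature. The sketch below is purely illustrative (the names, configuration, and bucketing logic are assumptions, not our actual ClaimMate code), but it shows the idea: the designated alpha claimers see CPS-ranked leads first, and the gate widens gradually as confidence grows.

```python
# Hypothetical sketch of an allowlist-based staged rollout.
# Claimer IDs, the percentage knob, and the bucketing are illustrative only.
import hashlib

ALPHA_CLAIMERS = {"claimer_017", "claimer_042", "claimer_108"}  # experienced early adopters
ROLLOUT_PERCENTAGE = 0  # widened gradually after alpha feedback

def cps_leads_enabled(claimer_id: str) -> bool:
    """Decide whether this claimer should see CPS-ranked leads."""
    if claimer_id in ALPHA_CLAIMERS:
        return True
    # Stable bucketing so a claimer's experience doesn't flip between sessions.
    bucket = int(hashlib.sha256(claimer_id.encode()).hexdigest(), 16) % 100
    return bucket < ROLLOUT_PERCENTAGE
```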
Rollout and Results
With positive feedback already coming in from the alpha team, we felt confident heading into the full rollout. Everyone had put in their best effort, and the launch reflected that – smooth, stable, and without major surprises. Only a handful of minor bugs surfaced, most of which were tied to conditions we simply couldn’t replicate in the integration environment.
In the days that followed, the results spoke for themselves. Claimer efficiency started to climb, and with it, company revenue. Seeing the numbers validate the work was a powerful moment for everyone involved – and it made us even more motivated to build on that foundation with future improvements and iterations.
But beyond the metrics, there was something even more important: we delivered. Despite all the complexity, pressure, and uncertainty, we hit the deadline and brought the project over the finish line together.
In Hindsight
So, did everything go perfectly? Not quite. There were bumps along the way – as there always are. But as a company, we achieved something remarkable: delivering a critical, high-stakes project under pressure and within the deadline. And as a QA team, we did it without letting any major bugs slip through.
What should we carry forward? The excellent collaboration. The willingness to help each other. The pragmatic, realistic mindset. The early involvement of QA in planning. And the unwavering focus and commitment everyone showed in pushing toward a shared goal.
For us in QA especially, this was a strong reminder of the importance of our role – not just in testing, but in connecting the dots across teams. We need to embrace that, while also protecting our focus by defining our limits. Acting with clarity and purpose, rather than reactively, makes a huge difference.
What could improve? First and foremost: automation. While the urgency was real, skipping test automation entirely created technical debt that quickly outweighed the time we thought we were saving. The absence of automated coverage made regression testing much harder than it needed to be – and we felt that pain when it came time to retest the full flow.
To be clear, automation wasn’t abandoned – just postponed. We had collectively agreed to prioritize meeting the deadline first, with the understanding that automation would become a top focus immediately after. But that choice had a cost. In hindsight, even a minimal automation strategy from the start could’ve saved us significant effort down the line.
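To make “minimal” concrete: even a couple of smoke tests along the lines of the sketch below, run against the integration environment, would have covered the regressions that hurt the most. The helper functions and field names are hypothetical stand-ins for whatever client code a real suite would call; pytest is assumed as the runner.

```python
# Hypothetical smoke tests for the CPS lead flow (pytest assumed as the runner).
# get_ranked_leads / fetch_claimmate_queue stand in for real client helpers;
# they are not actual CPS or ClaimMate APIs.
from cps_client import get_ranked_leads, fetch_claimmate_queue  # assumed helpers

TEST_CLAIMER = "claimer_smoke_test"  # a claimer reserved for automated checks

def test_leads_are_returned_and_ranked():
    leads = get_ranked_leads(TEST_CLAIMER)
    assert leads, "CPS returned no leads for the smoke-test claimer"
    scores = [lead.priority_score for lead in leads]
    assert scores == sorted(scores, reverse=True), "Leads are not ranked by priority"

def test_only_eligible_videos_reach_claimmate():
    queue = fetch_claimmate_queue(TEST_CLAIMER)
    ineligible = [item for item in queue if not item.is_eligible]
    assert not ineligible, f"Ineligible videos surfaced to claimers: {ineligible}"
```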
Ownership could also have been clearer. With better-defined responsibilities, we could’ve focused more on actual testing, and less on chasing updates or bridging gaps in communication. QA can – and should – support alignment, but we shouldn’t become the de facto information hub.
Lastly, more structured coordination across teams would have saved time and confusion. Under pressure, it’s tempting to retreat into team silos and treat cross-team communication as optional. But early, intentional alignment – even if it’s just through a designated representative – can prevent costly missteps down the line.
True quality isn’t a final checkbox, nor is testing a one-off task tacked on at the end; instead, they serve as the compass guiding every handoff, every decision, and every team toward a shared destination.