Your teams are shipping faster than ever. AI tooling has changed what's possible in a sprint, and most of it feels like progress. But it's also changing what product ownership actually means, faster than most job descriptions admit.
Somewhere between the prompt and production, accountability gets blurry. Who reviewed that change? What did the tests actually cover? Was that dependency safe? These aren't hypothetical questions anymore, and they're increasingly landing on product people to answer.
We're hosting a small evening for product and innovation leaders navigating exactly this. We'll open our own codebase on screen and walk through real moments where AI-assisted development created gaps we didn't expect. Then we'll discuss what that means for how you govern, review, and ship, and what good product ownership actually looks like now.
25 seats. Mostly discussion. Bring your own examples.
Thu 25 Jun 2026 · 18:00–21:00 CEST
Panenco HQ · Diestsevest 25 · 3000 Leuven



We look at what changes when your team builds faster with AI, and what that means for the people responsible for quality, accountability, and the decisions that happen before something goes live.
01
When AI accelerates the build, it gets harder to know what your team actually produced and whether anyone understands it well enough to change it later. We discuss what meaningful review looks like when the pace is this high.
02
AI tooling pulls in libraries and suggests fixes fast. We look at what that means for your exposure on secrets, dependencies, and access assumptions, and why "the tool checked it" is not the same as "we checked it."
03
A green dashboard is not the same as confidence. We show the gap between tests that pass and tests that would have caught what actually broke, and what that means for how you define done.
04
Everyone on the team needs the same answer to "is this ready?" We cover what simple agreements on review, logging, and sign-off look like when AI is part of the build process.
Food and drinks. Programme starts at 18:30.
What we'd tell our own leadership: what worked, what surprised us, and what we'd do differently.
Name, company, and the one question about AI-assisted development that you haven't found a good answer to yet.
We open our codebase on screen and work through four questions with the room:
Each question starts with a real moment from our own build. Then we open it up.
Seats are limited.