I · The invisible fleet
Right now, inside your company, employees are using AI tools you don't know about. Not because they're being reckless — because they're trying to do their jobs.
ChatGPT for drafting proposals. Notion AI for meeting notes. Cursor for shipping code faster. Perplexity instead of Google. GitHub Copilot that IT approved — but nobody tracked what data it can reach, or who has access to what it generates.
Then there are the agents. Tools that don't just assist humans but act on their behalf. Connected to your CRM. Able to send emails. With read access to customer records. Deployed by a product manager on a Tuesday afternoon, without a single security review, because the vendor's onboarding flow made it feel as routine as adding a browser extension.
This isn't a story about negligence. It's a story about speed. Business moves faster than governance. AI has made the gap between them catastrophic.
The tools are useful. That's the problem. People adopt what makes them more effective, and AI tools are the most effective productivity lever most knowledge workers have ever had access to. Governance can't win by being the department of "no." It has to become visible before the risk does.
II · The control illusion
Every major AI vendor will tell you their product is enterprise-grade. Secure. Compliant. Governed. They're not lying. They're describing the wrong thing.
Microsoft Copilot ships with DLP policies. In early 2024, those policies failed for weeks without detection: Copilot was surfacing documents users had no right to access, and no alerts were raised on either Microsoft's side or the customer's. The vendor's own control system failed to catch the vendor's own control system failing.
Salesforce Agentforce, released to general availability in late 2024, was shown to be vulnerable to a prompt-injection attack that cost as little as $5 to mount. An attacker could redirect an autonomous agent's actions, actions taken on behalf of your company against your customers, simply by planting a malicious instruction in the agent's input stream.
When the AI tool is also the governance layer, you have a single point of failure with no independent verification. You're trusting the vendor to audit themselves.
These aren't edge cases to be patched and forgotten. They're a structural problem with how AI governance is being sold. A vendor has every incentive to make their product feel safe, and very little incentive to surface the ways it isn't. Their security logs are designed to satisfy their legal department, not yours.
You wouldn't let a bank audit its own books. You wouldn't trust a contractor to inspect their own work. The logic is no different for AI systems with access to your data, your customers, and your operations.
III · The regulatory moment
The EU didn't pass the AI Act because politicians wanted to regulate technology. They passed it because they watched what happened with social media — a technology that reshaped society before anyone understood its externalities — and decided not to repeat the mistake with AI.
The AI Act matters to you if you're a mid-market company in Europe, even if you think it doesn't. Under Article 26, deployers of AI systems (that's you, when you give employees access to AI tools) share responsibility for ensuring those systems are used in compliance with the Act. You need a register. You need risk classification. You need to be able to demonstrate this to a regulator on request.
GDPR applies too. Every time an employee pastes customer data into an AI prompt, that's a data processing event. If that AI tool's servers are outside the EU, that's a cross-border transfer. If there's no signed data processing agreement (DPA) with the vendor, that gap is yours to explain, not theirs.
NIS2, whose national transposition deadline passed in October 2024, extends supply chain risk obligations to third-party software and services. AI tools are supply chain. A significant security incident involving an AI vendor that affects your operations is now your incident to report: an early warning within 24 hours, a full incident notification within 72, with documented remediation steps.
The regulations aren't the enemy. They're the forcing function that makes governance visible, budgetable, and defensible inside your organisation.
We've spoken to dozens of compliance leads at mid-market companies across Germany, the Netherlands, and Poland. Almost none of them have a complete inventory of the AI tools their company uses. Most have no formal process for approving AI tools before deployment. None of them feel comfortable saying they're fully compliant with the AI Act requirements that are already in force. The regulations have arrived. The tooling to meet them, for companies their size, hasn't.
IV · The mid-market blindspot
Large enterprises have security teams. They have GRC platforms, vendor risk management workflows, and people whose entire job is to think about exactly these problems. They're still not fully sorted — but they have infrastructure and budget to get there.
Companies with 50 to 300 employees don't. The existing governance tools — ServiceNow, Archer, the enterprise tiers of every compliance platform — are designed for organisations with dedicated compliance departments. They cost six figures. They take quarters to implement. They assume organisational maturity that most growing companies haven't reached yet, and shouldn't need to reach before they can govern the tools their employees use.
So mid-market companies do nothing. Or they create a shared spreadsheet that goes stale within a month. Or they add "AI governance" as a quarterly agenda item that gets deprioritised when the pipeline is on fire.
Meanwhile, the AI tools proliferate. The agents multiply. The regulatory clock ticks. And the gap between what's happening inside the company and what leadership knows about it grows wider every week.
This is the blindspot the market has decided to ignore because mid-market companies are harder to sell to than enterprises, their deals are smaller, and their governance needs are messier. We think that's the wrong call — both commercially and ethically. The companies that will suffer most from ungoverned AI aren't the ones with dedicated security teams. They're the ones without them.
V · What an independent layer means
The insight behind PanelSec is straightforward: governance has to sit outside the stack it governs. Not inside Copilot settings. Not inside the ChatGPT admin console. Not in a spreadsheet owned by IT. Outside, independent, with its own chain of custody for the data it collects.
An independent governance layer does three things:
It inventories. Every AI tool, every agent, every integration — whether IT-approved or not. The system of record for what AI your company actually uses, not what IT thinks it approved. Discovery has to be automatic, because manual inventories are always wrong by the time you finish them.
It enforces. Not by trusting the vendor's own controls, but by sitting between your people and the tools they use — at the network and identity layer. Block unapproved tools. Require business justification before a new agent gets production credentials. Enforce data classification policies that apply regardless of which AI vendor is on the other end.
It logs. Immutably, in a format that's useful to a regulator, an auditor, or a lawyer. Not a vendor activity log you can't export. A system of record you own, that you can produce on request, that tells a coherent story about what your AI systems did and what controls were in place when they did it.
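The enforcement idea above can be sketched in a few lines. This is a hypothetical illustration, not PanelSec's implementation: the tool names, data classifications, and decision values are all invented for the example. The point is that the decision is made at the boundary, independent of whatever controls the AI vendor ships.

```python
# Hypothetical sketch of boundary enforcement: a policy decision made at the
# identity/network layer, before a request reaches any AI vendor.
# Tool names, classifications, and decision strings are illustrative assumptions.
from dataclasses import dataclass

APPROVED_TOOLS = {"copilot", "notion-ai"}           # the IT-approved inventory
BLOCKED_DATA_CLASSES = {"customer-pii", "secrets"}  # data that must not leave

@dataclass
class Request:
    user: str
    tool: str
    data_class: str  # classification attached upstream, e.g. by a DLP scanner

def decide(req: Request) -> str:
    """Return 'allow', 'block', or 'needs-justification' for an outbound AI request."""
    if req.tool not in APPROVED_TOOLS:
        return "needs-justification"  # unapproved tool: require a business case first
    if req.data_class in BLOCKED_DATA_CLASSES:
        return "block"                # approved tool, but this data must not leave
    return "allow"

print(decide(Request("anna", "copilot", "public")))        # allow
print(decide(Request("anna", "copilot", "customer-pii")))  # block
print(decide(Request("anna", "new-agent", "public")))      # needs-justification
```

The same check applies regardless of which vendor sits on the other end, which is what makes the policy enforceable rather than advisory.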
You can't prove governance happened if your only evidence is the vendor's own logs. That's not independence. That's circular reporting.
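One way to make a log tamper-evident, sketched below under stated assumptions: each entry's hash covers the previous entry's hash, so any edit to history breaks the chain. The field names are illustrative, and a production system would also need signing, timestamping, and replication; this only shows the core property of verifiable, vendor-independent evidence.

```python
# Minimal sketch of a hash-chained (tamper-evident) audit log.
# Field names and events are illustrative assumptions, not a real schema.
import hashlib
import json

def append(log: list, event: dict) -> None:
    """Append an event, chaining its hash to the previous entry's hash."""
    prev = log[-1]["hash"] if log else "genesis"
    payload = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    log.append({"event": event, "prev": prev,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(log: list) -> bool:
    """Recompute the chain; any edited or reordered entry breaks verification."""
    prev = "genesis"
    for entry in log:
        payload = json.dumps({"event": entry["event"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"actor": "agent-7", "action": "crm.read", "ts": "2026-02-01T09:00Z"})
append(log, {"actor": "agent-7", "action": "email.send", "ts": "2026-02-01T09:01Z"})
assert verify(log)
log[0]["event"]["action"] = "nothing-happened"  # tamper with history...
assert not verify(log)                          # ...and verification fails
```

A log with this property can be handed to an auditor as evidence in its own right, rather than as a claim the vendor must be trusted to back up.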
The architectural principle is old. It's how financial auditing works. It's how industrial safety regulation works. It's why you hire an external penetration tester instead of asking your developer if their code is secure. Independent verification isn't a sign of distrust — it's the mechanism that makes trust possible between organisations that have different incentives.
VI · What we're building
PanelSec is that layer, built for companies that can't afford to hire an enterprise GRC team but can't afford the consequences of not having one.
We started from a specific person: the compliance lead at a 150-person professional services firm who has EU AI Act obligations landing on her desk, a part-time IT person, and no budget for six-figure software. She needs to know what AI tools her colleagues use. She needs to classify them by risk. She needs to generate a compliance report she can show to a client or a regulator without spending a week pulling data from five different places.
We're also building for the CTO at a 200-person SaaS company who just realised his sales team has been feeding customer data into an AI tool with servers in the US, no DPA, and no approved business justification. And for the IT manager who was told to "get a handle on the AI situation" with a three-day deadline and no tools.
We are not building another security product that generates reports nobody reads. We're building a governance system that makes the right thing easy — inventory your AI, classify its risk, enforce your policies, produce your documentation — with the minimum overhead necessary to actually get used by the people responsible for making it work.
A governance tool that nobody uses is worse than no governance tool. It creates the illusion of control without the substance of it. We have no interest in building that.
We are EU-native. Our servers are in Frankfurt. We don't route your data through the US. We think the regulatory environment in Europe is asking the right questions, even when the specific answers in the legislation are imperfect. We're building for the long term in this regulatory context, not trying to abstract it away.
We're onboarding design partners now — a small number of companies who will use PanelSec in production and have direct input into what we build. If AI governance is your problem today, we'd like to work on it with you.
PanelSec Team
Built in Europe · Hosted in Germany · 2026-02-01