Metadata alone is not enough
Most discussions around AI Act compliance start with metadata.
They shouldn’t.
Metadata is useful, but it is not resilient enough on its own.
Under Article 50, AI-generated content must not only be labeled; the labeling must also remain detectable and reliable in real-world conditions.
And real-world conditions are messy:
- content is compressed
- resized or cropped
- reformatted or screenshotted
- re-uploaded across platforms
In many of these cases, metadata is lost.
A system that relies only on metadata can fail exactly when verification is needed.
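The failure mode is structural: metadata travels beside the content rather than inside it, so any pipeline that re-encodes only the pixels silently drops the label. A minimal illustrative sketch (the function and field names here are hypothetical, not any real platform API):

```python
# Illustrative only: metadata is stored *beside* the content,
# so a re-encode that keeps only the pixels drops the label.

def generate_image():
    """An AI image: raw pixel bytes plus an external metadata record."""
    pixels = bytes(range(16))
    metadata = {"generator": "some-model", "ai_generated": True}
    return pixels, metadata

def platform_reupload(pixels, metadata=None):
    """A typical re-encode keeps the pixels and ignores the metadata."""
    return bytes(pixels), None  # the label is simply not carried over

pixels, metadata = generate_image()
pixels, metadata = platform_reupload(pixels, metadata)

print(metadata is None)  # True: the label is gone
print(len(pixels))       # 16: the visible content is intact
```

The content arrives unharmed; the compliance signal does not. That is exactly the gap the rest of this piece addresses.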
What invisible watermarking solves
Invisible watermarking addresses this limitation.
It embeds a signal directly into the content itself: not as external metadata, but as part of the data.
This enables detection without dependency on metadata.
A properly implemented watermark can survive:
- compression
- resizing and cropping
- format changes
- platform re-uploads
Even if metadata is removed, the content can still be identified as AI-generated.
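To make "part of the data" concrete, here is a deliberately naive least-significant-bit (LSB) sketch. It is not a production scheme; real watermarks spread the signal redundantly so it survives compression and cropping, which this toy version would not. The signature value and function names are assumptions for illustration:

```python
# Toy LSB watermark: the mark lives in the pixel bytes themselves,
# so stripping metadata cannot remove it. Real schemes are far more
# robust; this exists only to show where the signal is stored.

MARK = 0b1010  # hypothetical 4-bit "AI-generated" signature

def embed(pixels: bytes, mark: int = MARK, nbits: int = 4) -> bytes:
    out = bytearray(pixels)
    for i in range(nbits):
        bit = (mark >> i) & 1
        out[i] = (out[i] & 0xFE) | bit  # overwrite the lowest bit
    return bytes(out)

def detect(pixels: bytes, nbits: int = 4) -> int:
    mark = 0
    for i in range(nbits):
        mark |= (pixels[i] & 1) << i
    return mark

pixels = bytes([10, 20, 30, 40, 50])
marked = embed(pixels)
# "Strip the metadata": the pixel bytes are all that remain.
print(detect(marked) == MARK)  # True: the signal survives in the data
```

Detection here needs nothing but the bytes themselves, which is the property that matters for Article 50.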
Why this is required for compliance
The AI Act does not just require labeling.
It requires labeling that is robust against manipulation.
If a system breaks when metadata is stripped, it does not meet that requirement.
Why watermarking alone is not sufficient
Watermarking is essential, but it is not sufficient on its own.
It has limitations:
- detection can degrade under certain transformations
- implementation varies across formats and models
- it does not provide auditability or standardized verification
For this reason, the AI Office defines a multi-layer approach.
Watermarking is the foundation layer, but it must be combined with:
- standardized credentials (such as C2PA)
- immutable audit logs
- public verification mechanisms
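One of those layers, the immutable audit log, is often implemented as a hash chain: each record commits to the previous one, so history cannot be rewritten without breaking verification. A minimal stdlib sketch, with illustrative field names rather than any standardized schema:

```python
import hashlib
import json

# Sketch of a hash-chained (append-only) audit log. Each record's
# hash covers the previous hash, so tampering anywhere breaks the chain.

GENESIS = "0" * 64  # placeholder "previous hash" for the first record

def append(log, event):
    prev = log[-1]["hash"] if log else GENESIS
    digest = hashlib.sha256(
        (prev + json.dumps(event, sort_keys=True)).encode()
    ).hexdigest()
    log.append({"event": event, "prev": prev, "hash": digest})

def verify(log):
    prev = GENESIS
    for rec in log:
        expected = hashlib.sha256(
            (prev + json.dumps(rec["event"], sort_keys=True)).encode()
        ).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log = []
append(log, {"action": "generate", "id": "img-001"})
append(log, {"action": "watermark", "id": "img-001"})
print(verify(log))  # True

log[0]["event"]["action"] = "edit"  # tamper with history
print(verify(log))  # False: the chain no longer verifies
```

The watermark identifies the content; the log proves when and how it was labeled. Each layer covers a failure mode the others cannot.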
The key takeaway
Watermarking is not optional.
But it is not the full solution either.
It is the foundation of a compliant system.
To understand how watermarking fits into a complete compliance architecture:
Explore AI Act 50 →