Waiva shows you in real time when an AI response is unstable – before you act on it. The last checkpoint between the black box and your decision.
Article 26 of the EU AI Act requires everyone who deploys AI systems to actively monitor them. Not the providers – the users. You. This will be enforced from August 2026.
The problem: You cannot control what happens inside the black box. Training, alignment, safeguards – all beyond your reach.
But you are responsible for what comes out.
Waiva sits at the only point you can control: between output and decision.
No filter. No intervention. Just a signal: This response is stable. Or: You should take a closer look here.
The result: Documented Human Oversight. Not what you decided – but that you checked.
Based on the sReact framework – developed from over 11,500 analyzed AI interactions.
Waiva detects when a response sounds convincing but is actually uncertain, when its content is distorted, or when it is linguistically clean but factually fragile.
A clear score (0–90) makes the structural stability of an AI response immediately visible. You decide what to do with it.
When a response threatens to drift, Waiva shows you concrete options for action. Not as an instruction – as guidance for your next input.
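To make this concrete, here is a minimal sketch of how a 0–90 stability score could be translated into a signal. The thresholds, labels, and function below are illustrative assumptions, not Waiva's actual scoring logic or API.

```typescript
// Hypothetical mapping from a 0–90 stability score to a user-facing signal.
// Thresholds and labels are assumptions for illustration only.

type Signal = "stable" | "review" | "unstable";

function toSignal(score: number): Signal {
  if (score < 0 || score > 90) {
    throw new RangeError("score must be within 0–90");
  }
  if (score >= 60) return "stable";   // response reads structurally solid
  if (score >= 30) return "review";   // take a closer look before acting
  return "unstable";                  // likely drift: adjust your next input
}

console.log(toSignal(72)); // "stable"
console.log(toSignal(41)); // "review"
```

The point is the flow, not the exact numbers: the score informs, you decide.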
Every response is analyzed live. No delay, no waiting. The signal appears while you are still reading.
Works with ChatGPT, Claude, Gemini, and others. Waiva is model-agnostic – you stay flexible.
From Plan L upward: documented proof of your human oversight practice for auditors and regulators.
Waiva is based on the +1 Principle – developed from over 11,500 human-AI interactions and published in peer-reviewed research.
Read Research on SSRN →
"I think it could be a valuable insight."
"This sounds very interesting indeed, perhaps worth publishing."
"The pattern you describe makes sense, and documenting it could be useful."
"This is indeed cognate with the definition of ethics I have been working with for some time."
Prof. Yoshua Bengio (Turing Award 2018) forwarded the research to his team working on AI Honesty Benchmarks.
Use Waiva where you work with AI.
For Chrome & Edge – directly integrated with ChatGPT, Claude, Gemini.
Native iPhone app – with Dynamic Island integration. Planned for Q4 2026.
Waiva does not verify facts. That remains with you.
Waiva does not evaluate people. Only responses.
Waiva does not make decisions. You do.
Waiva stores no content. Zero-logging.
Waiva does not monitor usage. No logs, no profiles.
Waiva does not intervene in responses. No filter, no block.
From individuals to institutions – wherever AI outputs influence decisions.
Waiva analyzes responses, not people. Your usage stays private. Your decisions stay yours.
IONOS servers, German law, full digital sovereignty.
Transient analysis in RAM. No permanent storage.
SSH key-only access, Fail2ban, TLS 1.3, daily backups.
Self-service. Clear boundaries. No negotiation.
All plans: self-service, no negotiation.
Documentation: Data Processing Agreement (AVV) · Security Factsheet · DORA Sheet · Executive Briefing
All prices exclusive of statutory VAT.