ONIOKO builds perception systems that extend human observation. We don't label. We don't judge. We make the invisible visible.
OPM doesn't tell you what someone feels. It shows you what your eyes missed.
The ONIOKO Principle
Four specialized layers work in sequence to extract, compare, interpret, and track observable human expressions across video, audio, and text channels.
Signal Extraction
The perception layer. CYGNUS processes raw video and audio to extract observable signals: facial Action Units (FACS), vocal prosody patterns (pitch, tempo, rhythm), and postural dynamics. No interpretation happens here. Pure signal extraction at frame-level precision.
Pattern Recognition
The crossmodal engine. ORACLE compares signals across channels simultaneously. When vocal warmth contradicts facial tension, ORACLE flags the incongruence. It doesn't say why. It shows where signals diverge, applying Crossmodal Rules that define congruent versus incongruent expression patterns.
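A minimal sketch of the kind of Crossmodal Rule described above. Every name, field, threshold, and value here is illustrative; the actual ORACLE interface is not shown in this document.

```python
from dataclasses import dataclass

@dataclass
class ChannelSignal:
    channel: str      # e.g. "face", "voice" (illustrative labels)
    feature: str      # e.g. a FACS AU intensity or a prosody descriptor
    value: float      # normalized 0..1
    timestamp: float  # seconds into the session

def check_congruence(a: ChannelSignal, b: ChannelSignal,
                     threshold: float = 0.4) -> dict:
    """Flag an incongruence when two co-occurring signals that a rule
    expects to align diverge beyond a threshold. Note what diverges,
    never why: no internal state is inferred."""
    divergence = abs(a.value - b.value)
    return {
        "channels": (a.channel, b.channel),
        "features": (a.feature, b.feature),
        "divergence": round(divergence, 3),
        "incongruent": divergence > threshold,
    }

# Warm voice paired with low facial ease at the same moment:
flag = check_congruence(
    ChannelSignal("voice", "vocal_warmth", 0.85, 12.0),
    ChannelSignal("face", "facial_ease", 0.25, 12.0),
)
```

The rule reports where the channels disagree and by how much; interpreting that disagreement is left to the later layers and, ultimately, the human observer.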
Contextual Interpretation
The insight layer. LUCID takes the patterns ORACLE identified and generates contextual, human-readable observations. In coaching, it might note that verbal confidence didn't match visible tension. In clinical support, it provides FACS-based observation notes for the practitioner. Available for coaching, clinical, and enterprise contexts.
Longitudinal Memory
The memory layer. TRACE tracks patterns across sessions, building a temporal map of behavioral evolution. It surfaces that a particular incongruence appeared in three consecutive sessions, or that posture confidence has improved over time. Available for coaching, clinical, and enterprise contexts.
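The four layers above can be sketched as a simple sequence: extract, compare, interpret, track. All function bodies, signal names, and values below are illustrative stand-ins, not the real CYGNUS, ORACLE, LUCID, or TRACE implementations.

```python
from typing import Dict, List

def cygnus_extract(frames: list, audio: list) -> Dict[str, float]:
    """Layer 1: pure signal extraction. Returns observable descriptors
    only (e.g. FACS Action Unit intensities, prosody features)."""
    return {"face.AU12_intensity": 0.7, "voice.pitch_variability": 0.3}

def oracle_compare(signals: Dict[str, float],
                   threshold: float = 0.3) -> List[dict]:
    """Layer 2: crossmodal comparison. Flags divergence, not causes."""
    a, b = "face.AU12_intensity", "voice.pitch_variability"
    divergence = abs(signals[a] - signals[b])
    return [{"pair": (a, b), "divergence": divergence}] if divergence > threshold else []

def lucid_interpret(flags: List[dict]) -> List[str]:
    """Layer 3: human-readable observation notes, never internal states."""
    return [f"Signals {f['pair'][0]} and {f['pair'][1]} diverge "
            f"(delta={f['divergence']:.2f})" for f in flags]

class Trace:
    """Layer 4: longitudinal memory, accumulating notes per session."""
    def __init__(self) -> None:
        self.sessions: List[List[str]] = []

    def record(self, observations: List[str]) -> None:
        self.sessions.append(observations)

def run_opm_pipeline(frames: list, audio: list, trace: Trace) -> List[str]:
    signals = cygnus_extract(frames, audio)   # extract
    flags = oracle_compare(signals)           # compare
    observations = lucid_interpret(flags)     # interpret
    trace.record(observations)                # track
    return observations

trace = Trace()
notes = run_opm_pipeline(frames=[], audio=[], trace=trace)
```

The point of the sequence is that each layer consumes only the previous layer's output: interpretation never touches raw media, and memory never stores anything the interpretation layer did not produce.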
Most "emotion AI" systems claim to know what someone feels. They assign labels like "happy" or "angry" based on facial expressions. That approach is fundamentally flawed, scientifically contested, and legally problematic under the EU AI Act.
ONIOKO takes a different path. OPM observes what is objectively visible: muscle activations, vocal patterns, posture shifts. It compares these signals across channels to detect incongruences. It never claims to know the internal state. The human observer always makes the final interpretation.
ONIOKO packages the OPM architecture into focused experiences. Each product turns the same observation engine into a more usable, context-shaped workflow.
One perception engine, configured for the context. Each vertical gets the modules it needs and nothing it doesn't.
AI learning companions that perceive student engagement in real time. Tutoring systems that notice when observable signals suggest confusion before a student says a word. Runs CYGNUS and ORACLE only, keeping the configuration fully EU AI Act compliant.
See your own incongruences. OPM shows coaches and their clients where verbal intention and visible expression diverge. Personal communication improvement grounded in observable patterns.
Train teams to read and project confidence. Private to the individual; no HR access, no surveillance. Leaders practice high-stakes conversations with real-time perception feedback.
A perception layer for therapists and clinicians. FACS-based observation assistance that helps practitioners notice what unfolds in session. Not a diagnostic tool. A second pair of trained eyes.
Practice with an avatar audience that perceives your delivery. Get feedback on where your confidence shows and where your signals tell a different story.
A consistent observation instrument at scale. Where human coders introduce fatigue and drift, OPM applies the same detection criteria across thousands of hours.
Whether you're exploring OPM for education, coaching, research, or enterprise, we'd like to hear what you're building.
Questions serious buyers, partners and compliance teams actually ask before they adopt OPM in production.
No. OPM does not infer emotions or mental states. It detects observable signals and cross-channel congruence patterns. That distinction is exactly why education-safe and workplace-safe configurations can be deployed responsibly. Read the compliance documentation.
Never. The system reports observable expression and signal relationships, not hidden feelings. The final interpretation remains with the human professional using the instrument.
Emotion recognition tries to classify inner states from outer behavior. OPM stops one step earlier. It documents what is visible, measurable and crossmodally aligned or misaligned, without making the inferential leap.
In education, ONIOKO can be configured with CYGNUS and ORACLE only. LUCID and TRACE stay off, which prevents emotional profiling and longitudinal behavioral tracking of students.
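The education configuration described above amounts to a per-context module gate. A minimal sketch, with a hypothetical config shape; only the four module names come from this document.

```python
from typing import Dict, List

# Hypothetical education-safe preset: interpretation and memory stay off,
# which is what prevents emotional profiling and longitudinal tracking.
EDUCATION_SAFE: Dict[str, bool] = {
    "CYGNUS": True,   # signal extraction
    "ORACLE": True,   # crossmodal pattern recognition
    "LUCID": False,   # contextual interpretation disabled
    "TRACE": False,   # longitudinal memory disabled
}

def active_modules(config: Dict[str, bool]) -> List[str]:
    """Return the modules a deployment is allowed to run."""
    return [name for name, enabled in config.items() if enabled]

enabled = active_modules(EDUCATION_SAFE)
```

Other verticals would swap in a different preset rather than change the pipeline itself, which matches the claim that guardrails vary per context while the core observational logic stays fixed.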
Most configurations process video and audio in real time and retain only derived descriptors or approved observation outputs. Storage policies depend on the context, consent model and feature set.
Yes. CYGNUS and ORACLE are designed for live readouts, while specialized configurations such as CYGNUS Lite and ORACLE RT support low-latency deployment patterns.
Yes. OPM is modular. Some deployments stop at raw signal extraction and pattern recognition, while others activate contextual insight and longitudinal tracking only where appropriate.
That depends on the deployment contract, but the architecture is built so outputs, retention logic and access controls can be scoped tightly per customer, product and role.
Yes. The architecture is explicitly designed for explainability at the signal layer. Compliance teams can inspect what was detected, what was communicated, and what the system was never allowed to claim.
No. It is an observational instrument, not an autonomous authority. The human expert remains the decision-maker, interpreter and accountable actor.
Yes. OPM can power ONIOKO-native products or be embedded into customer-specific experiences, dashboards, coaching tools and research workflows.
That is exactly what the architecture layer is for. Different sectors can activate different modules, context presets and guardrails without changing the core observational logic.