Emotion Policy + Face Verification for High-Risk Operations
A DeepFace-based security framework that grants access only when identity verification and emotional-risk policy checks both pass.
- About
- Architecture
- Quick Start
- Notebook Demo
- Use Cases
- Why Emotion Gating Matters
- Security Rules
- Documentation
- Contributing
Designed for high-impact environments, this project helps teams prototype emotion-aware access control with deterministic decisions and auditable outcomes.
This repository demonstrates how to secure critical gateways using:
- face verification against a reference image or admin pool with multi-frame identity consensus,
- emotion classification with threshold + weighted policy,
- strict 2/2 authorization rule before protected action execution,
- multi-frame emotion voting with bounded retries and timeout handling.
- Capture live frame from camera.
- Verify identity against reference image and/or admin pool.
- Analyze emotions and evaluate policy (`blocked_emotions`, `weights`, `threshold`).
- If identity and emotion pass, grant access to the protected resource.
- Write structured audit event for every decision.
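The decision flow above can be sketched as follows. The callables `verify_identity` and `evaluate_emotion_policy`, the `Decision` type, and the audit-event fields are hypothetical stand-ins for the framework's actual functions, shown only to illustrate the strict 2/2 rule and the per-decision audit write.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Decision:
    granted: bool
    reason: str

def gated_access(frames, verify_identity, evaluate_emotion_policy, audit_log):
    """Strict 2/2 rule: identity AND emotion must both pass.

    The two callables are illustrative stand-ins for the framework's
    real identity-consensus and emotion-policy checks.
    """
    if not verify_identity(frames):
        decision = Decision(False, "identity verification failed")
    elif not evaluate_emotion_policy(frames):
        decision = Decision(False, "emotion policy check failed")
    else:
        decision = Decision(True, "2/2 checks passed")
    # Structured audit event is written for every decision, pass or deny.
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "granted": decision.granted,
        "reason": decision.reason,
    })
    return decision
```

With stub callables that both return `True`, the call yields a granted decision and exactly one audit entry; any single failing check denies access with a stage-specific reason.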
Identity and emotion use separate thresholds:
- identity uses `identity.distance_threshold`,
- emotion uses `emotion.threshold`.
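A minimal sketch of the weighted blocked-emotion check, assuming DeepFace-style per-emotion scores in the 0-100 range; the key names, default weight of 1.0, and aggregation shown here are illustrative, not the framework's exact implementation.

```python
def emotion_policy_passes(emotion_scores, blocked_emotions, weights, threshold):
    """Return True when the weighted blocked-emotion score stays below threshold.

    emotion_scores: e.g. {"fear": 62.0, "neutral": 30.0} as produced by a
    DeepFace-style analyzer (illustrative key names).
    weights: per-emotion multipliers; emotions without an entry default to 1.0.
    """
    score = sum(
        weights.get(e, 1.0) * emotion_scores.get(e, 0.0)
        for e in blocked_emotions
    )
    return score < threshold

# Denied: fear dominates and is heavily weighted (1.5 * 62.0 = 93.0 >= 50.0).
emotion_policy_passes({"fear": 62.0, "neutral": 30.0},
                      ["fear", "angry"], {"fear": 1.5}, 50.0)  # -> False
```

Keeping the rule as a single weighted sum against `emotion.threshold` is what makes denials deterministic: the same scores and configuration always produce the same decision.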
```mermaid
flowchart TD
    start[Access request] --> capture[Capture live frames]
    capture --> identity[Identity consensus check]
    identity --> identity_ok{Identity passed}
    identity_ok -- No --> denied[ACCESS DENIED]
    identity_ok -- Yes --> emotion[Emotion policy check]
    emotion --> emotion_ok{Emotion policy passed}
    emotion_ok -- No --> denied
    emotion_ok -- Yes --> granted[ACCESS GRANTED]
```
Primary runtime (Python 3.13):

```powershell
py -3.13 -m venv .venv
.\.venv\Scripts\Activate.ps1
python -m pip install -r requirements.txt
```

Note: on some Python 3.13 TensorFlow/DeepFace setups, `tf-keras` is required and is included in requirements.txt.
Fallback runtime when backend wheels fail:

```powershell
py -3.12 -m venv .venv-fallback
.\.venv-fallback\Scripts\Activate.ps1
python -m pip install -r requirements-fallback.txt
```

config.yaml is auto-created on first run if it does not exist.
```powershell
python scripts/run_terminal.py --config-path config.yaml
```

The terminal flow is:
- Step 1: identity source setup,
- Step 2: choose advanced configuration (`yes` to customize, `no` to use tuned production defaults).
Default non-advanced profile is optimized for speed:
- identity consensus: 5 frames / 2 matches required,
- emotion voting: 3 frames per batch, up to 2 batches.
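Under the default profile, identity consensus and bounded emotion voting can be pictured like this. The function names are hypothetical, and the real framework operates on live camera frames rather than the precomputed match flags and batch outcomes used here for illustration.

```python
def identity_consensus(frame_matches, min_matches_required=2):
    """Pass when at least min_matches_required of the sampled frames
    matched the reference identity (default profile: 5 frames sampled,
    2 matches required)."""
    return sum(frame_matches) >= min_matches_required

def emotion_voting(batch_outcomes, max_batches=2):
    """Evaluate up to max_batches of per-batch emotion-policy outcomes
    (default profile: 3 frames per batch, up to 2 batches).
    True/False ends the vote; None means the batch was inconclusive
    and the next batch is tried. Returns False (deny) when the bounded
    retries are exhausted without a stable classification."""
    for outcome in batch_outcomes[:max_batches]:
        if outcome is not None:
            return outcome
    return False  # bounded retries exhausted -> deny

identity_consensus([True, False, True, False, False])  # 2 of 5 matched -> passes
```

The deny-on-exhaustion branch matches the security rule that an unstable emotional classification after the configured batches results in denial rather than a retry loop.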
Runtime output includes stage-level results:
- Identity check: pass/fail details
- Emotion check: pass/fail details
- Final decision: 2/2 pass or deny
Example terminal configuration and runtime view:
On first identity verification run, DeepFace may try downloading vgg_face_weights.h5 (used by VGG-Face) into:
```text
C:\Users\<your_user>\.deepface\weights\vgg_face_weights.h5
```
If automatic download fails, run:

```powershell
curl.exe -L "https://github.com/serengil/deepface_models/releases/download/v1.0/vgg_face_weights.h5" -o "$env:USERPROFILE\.deepface\weights\vgg_face_weights.h5"
```

If your network blocks large downloads from GitHub, download the same URL in your browser and place the file at the same path.
On first emotion analysis run, DeepFace may also download facial_expression_model_weights.h5.
After first-time downloads complete, subsequent runs are significantly faster.
TensorFlow startup logs are suppressed by default by the launcher, but some backend messages may still appear depending on platform and runtime.
On some Windows setups, OpenCV can fail reading image paths containing non-ASCII characters (for example Greek folder names), which may surface as:
```text
Exception while processing img2_path
```
The framework includes a Unicode-safe fallback loader (numpy.fromfile + cv2.imdecode) for reference/admin images.
If you still see path-related failures, verify file readability and consider using an ASCII-only path for source images.
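The byte-level read that makes this fallback work can be sketched as below. `read_image_bytes` is a hypothetical helper name; the returned byte array would then be passed to `cv2.imdecode(data, cv2.IMREAD_COLOR)` as the loader described above does.

```python
import numpy as np

def read_image_bytes(path):
    """Read a file as a raw uint8 array via numpy.fromfile.

    numpy.fromfile handles non-ASCII paths (for example Greek folder
    names) where cv2.imread can fail on some Windows setups. Decode the
    result with cv2.imdecode to obtain the image array. Returns None
    for an empty or unreadable file, mirroring cv2.imread's failure mode.
    """
    data = np.fromfile(path, dtype=np.uint8)
    return data if data.size else None
```

Because the path never reaches OpenCV's own file I/O, only Python's, the Unicode handling is delegated to the operating system's native filename APIs.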
Open notebooks/security_framework_demo.ipynb to run the interactive demo with widgets for:
- identity source,
- blocked emotions,
- identity consensus tuning (`frames_per_check`, `min_matches_required`, `distance_threshold`),
- emotion threshold + voting controls (`frames_per_batch`, `max_batches`),
- camera window toggle,
- resource naming and gated authorization flow.
- Physical access control for restricted spaces.
- Access to privileged databases, tools, or microservices.
- Mission-critical infrastructure operations.
- Financial signing keys and sensitive document systems.
- AI inference gateways (optional example, not the primary focus).
Biometric identity confirms who requests access.
Emotional risk analysis helps evaluate whether the person should proceed right now.
In high-impact environments, an otherwise authorized operator may pass face verification while in a compromised state: under coercion (for example, forced to authenticate) or in severe distress (fear, panic, or aggression after a major personal conflict). This framework adds an emotional-risk gate to reduce approvals during those moments, helping protect critical infrastructure, high-value financial operations, and other irreversible systems where temporary instability creates outsized risk.
Example output after a successful gated decision:
- Add liveness detection and anti-spoofing.
- Add additional biometrics (voice, token, geofencing, hardware keys).
- Add policy profiles by environment and risk level.
- Attach a custom action executor for any protected workflow.
Access is granted only when:
- identity verification succeeds, and
- weighted blocked-emotion score stays below threshold.
All denied decisions return deterministic reason messages.
If a stable emotional classification is not reached after configured batches, access is denied with guidance to retry under better camera/visibility conditions.
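One way to keep denial messages deterministic is a fixed reason table so that every deny path maps to exactly one message. The reason codes and strings below are illustrative, not the framework's actual output.

```python
# Hypothetical reason table: each deny path has one fixed, auditable message.
DENY_REASONS = {
    "identity_failed": "ACCESS DENIED: identity verification failed",
    "emotion_blocked": (
        "ACCESS DENIED: weighted blocked-emotion score at or above threshold"
    ),
    "emotion_unstable": (
        "ACCESS DENIED: no stable emotion classification after configured "
        "batches; retry under better camera/visibility conditions"
    ),
}

def deny_message(code):
    """Same reason code in, same message out -- decisions stay reproducible
    in audit logs and simple to assert on in tests."""
    return DENY_REASONS[code]
```

Centralizing the strings also keeps the audit trail stable across releases: a log consumer can match on exact messages without worrying about ad-hoc formatting in each code path.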
Contributions are welcome for policy modules, biometric extensions, and UI improvements. See CONTRIBUTING.md for contribution workflow, standards, and PR checklist.

