
The Context Tensor

The Geometry of Trust in a Digital Universe

Trust isn't a yes/no flag. It's a field. The Context Tensor is the mathematical structure that captures the real-time state of trust for any interaction—its weight, direction, and volatility—so authorization can respond like physics, not policy. Instead of static rules, KTP evaluates a living vector space that changes as conditions change.

Live Trust Geometry

Interactive visualization: Ebase → Risk Deflation → Etrust, explored across the Performance, Risk, and Soul dimensions.

The Seven Dimensions

The Context Tensor is expressed across seven primary dimensions; each is explored in detail below.

For the full dimensional breakdown (1,707 measurements), see the KTP-Tensors RFC.

Incoming Signals

The Context Tensor doesn't operate in a vacuum—it's fed by a continuous stream of live telemetry from across your infrastructure. These signals are normalized, classified through ARQ, and projected into the seven-dimensional trust geometry.

Signal Ingestion Pipeline

From raw telemetry to trust geometry

1,707 discrete signals · 4 categories · 3 ARQ classes · 7 dimensions

Signal Categories

Every signal entering the system falls into one of four categories. Each category has distinct sources, update frequencies, and trust implications.

Category Deep Dive

Packets: The pulse of the network

What It Measures

Packet telemetry captures the physical and logical reality of network connectivity. Every TCP handshake, every dropped packet, every millisecond of latency tells a story about reachability, stability, and infrastructure health.

This is the most fundamental signal category—if packets aren't flowing, nothing else matters. Packet signals form the bedrock of Accessibility scoring and provide early warning of infrastructure degradation.

Why It Matters

Network telemetry is impossible to fake at scale. An attacker can forge credentials, but they can't hide the latency of routing through Tor. They can compromise an endpoint, but the C2 beaconing pattern shows up in flow data. Packet signals provide ground truth that higher-layer deception can't mask.

Key Signals

Latency: Round-trip time, P50/P95/P99 percentiles
Packet Loss: Drop rate, loss spikes, path-specific loss
Throughput: Bytes/sec, bandwidth utilization
Handshake Success: TCP SYN/ACK completion rate
Retransmissions: Packet resend rate, congestion indicators
Flow Duration: Connection longevity, session patterns
Path Changes: Route shifts, next-hop variations
Link Status: Interface up/down, flapping detection

Signal Sources

NetFlow/IPFIX, sFlow, SNMP, Cloud VPC Flow Logs, SD-WAN Telemetry, Wireless Controllers

ARQ Mapping

Accessibility: 40%
Retainability: 30%
Quality: 30%

Tensor Projection

Primary: Mass, Time
Secondary: Momentum

Full Signal Catalog (~250 signals)

Layer 4: Transport

Signal Description Derived Metrics
src_port Client ephemeral port Port exhaustion, fixed port detection, scan patterns
dest_port Service port Service distribution, dark port access, scan detection
tcp_flags SYN/ACK/FIN/RST SYN flood, RST rate, null/xmas scans, handshake completion
window_size TCP receive window Zero window, window scaling, retransmission correlation
retransmission_rate Packets resent Spike detection, high-retransmit hosts, global trends

Layer 3.5: Flow

Signal Description Derived Metrics
flow_bytes_in/out Byte counts Volume spikes, exfiltration detection, asymmetric flows
flow_packets Packet count per flow Small packet floods, scan detection, avg packet size
flow_duration Connection length Long-lived flows, C2 beaconing, tunnel detection
flow_start_time Initiation timestamp Off-hours activity, burst detection, time correlation
application_id DPI-identified app Shadow IT, new application alerts, usage trends

Layer 3: Network

Signal Description Derived Metrics
src_ip / dest_ip Source/destination Top talkers, reputation, beaconing, new host detection
latency Round-trip time P50/P95/P99, anomaly detection, trend analysis
packet_loss Drop indication Loss rate/spikes, path-specific loss, P99 loss
hop_count TTL-derived Path length, routing changes, excessive hops
bgp_peer_state Routing health Flap detection, idle alerts, prefix changes

Layer 2: Data Link

Signal Description Derived Metrics
src_mac Asset identifier New/rogue MAC detection, spoofing, OUI distribution
vlan_id Broadcast domain VLAN hopping, unused VLAN detection
interface Physical port Utilization, flapping, broadcast storms, errors
link_status UP/DOWN state Availability, flapping, MTTR

Layer 1: Physical

Signal Description Derived Metrics
rssi / snr WiFi signal quality Coverage holes, low-signal clients, interference
channel WiFi channel Utilization, co-channel interference, DFS events
optical_rx_power Fiber light levels Power degradation, link margin, asymmetry
transceiver_temp SFP temperature Overheating alerts, thermal trends
poe_power_draw PoE consumption Budget usage, power anomalies

Logs & Events: The narrative of activity

What It Measures

Log telemetry captures the semantic story of what's happening across identities, applications, and endpoints. Authentication attempts, policy violations, configuration changes, and security events all flow through this category.

Unlike packet data (which shows that something happened), logs show what happened and who did it. This category is essential for behavioral analysis and detecting patterns that span multiple sessions.

Why It Matters

Logs are where intent becomes visible. A failed login is just noise—but 50 failed logins from 50 different IPs against the same account is credential stuffing. Logs provide the context to distinguish between accidents and attacks, mistakes and malice.

Security events in this category directly influence Heat (anomalies increase friction) and Observer (audit coverage affects confidence).

Key Signals

Auth Failures: Failed logins, lockouts, MFA denials
Privilege Changes: Role escalation, permission grants
Impossible Travel: Geo-velocity anomalies
Policy Violations: WAF blocks, ACL denials, DLP triggers
Process Execution: Command lines, parent-child trees
Config Changes: Registry mods, file changes, drift
Session Patterns: Duration, concurrency, churn
API Abuse: Rate limits, invalid keys, shadow APIs

Signal Sources

SIEM Platforms, IdP/IAM Logs, EDR/XDR, Cloud Audit Logs, WAF/Firewall Logs, Application Logs

ARQ Mapping

Accessibility: 20%
Retainability: 30%
Quality: 50%

Tensor Projection

Primary: Heat, Observer
Secondary: Inertia

Full Signal Catalog (~120 signals)

Layer 8: Identity

Signal Description Derived Metrics
user_id Human/machine identity Auth volume, failed login rate, concurrent sessions
role RBAC assignment Privilege escalation, toxic combinations, dormant usage
geo_location Access location Impossible travel, new country, high-risk country
device_id Endpoint identifier Device trust score, new device rate, jailbreak detection

Layer 7: Application

Signal Description Derived Metrics
http_method GET/POST/PUT/DELETE Method distribution, unusual usage, high-volume POST
http_status Response codes 5xx error rate, 403 denials, 404 spikes
url_path Resource accessed Path traversal, admin access, sensitive files
user_agent Client string Rare agents, bot detection, spoofing

Layer 6.5: API

Signal Description Derived Metrics
api_endpoint API route Usage patterns, deprecated endpoints, shadow APIs
api_key_id Client identifier Invalid key rate, concurrent usage, key rotation
rate_limit_status Throttling events Quota consumption, abusive clients

Layer 6: Presentation

Signal Description Derived Metrics
tls_version Protocol version Legacy protocol usage, downgrade attacks
cipher_suite Encryption algo Weak cipher usage, PFS adoption

Layer 5: Session

Signal Description Derived Metrics
session_id Session identifier Fixation, churn, concurrent sessions
session_duration Active length Short/long sessions, timeout rate
login_status Auth result Brute force, credential stuffing

Layer 0: Endpoint/Host

Signal Description Derived Metrics
process_name Executable Rare process, living-off-the-land techniques
process_hash File hash Malware match, unknown hashes, first-seen
process_cmd_line Full command Encoded commands, suspicious patterns
registry_key Windows registry Run key mods, persistence detection
file_operation File changes Mass changes, sensitive file access

Metrics: The stress gauge

What It Measures

Metrics telemetry captures the quantitative health of infrastructure. CPU utilization, memory pressure, queue depths, error rates, and saturation levels all reveal whether systems are operating within safe bounds or approaching failure.

These signals are periodic snapshots rather than event-driven—they show the state of the system at regular intervals, enabling trend analysis and capacity planning.

Why It Matters

Infrastructure stress directly correlates with trust risk. A service running at 95% CPU has less capacity to validate requests properly. A queue backing up might indicate an attack or a failing dependency. Metrics provide early warning before problems surface as discrete events.

Metrics heavily influence Heat (saturation increases friction) and Momentum (degradation trends signal declining trust trajectory).

Key Signals

CPU Utilization: Processing pressure, core saturation
Memory Pressure: Heap usage, swap activity, OOM risk
Queue Depth: Backlog size, processing lag
Error Rate: 5xx responses, exceptions, timeouts
Throughput: Requests/sec, transactions/sec
Saturation: Resource exhaustion indicators
Response Time: P50/P95/P99 latency distributions
Connection Pools: Pool exhaustion, wait times

Signal Sources

Prometheus, CloudWatch, Datadog, StatsD, OpenTelemetry, APM Platforms

ARQ Mapping

Accessibility: 30%
Retainability: 30%
Quality: 40%

Tensor Projection

Primary: Heat, Momentum
Secondary: Mass

Full Signal Catalog (~50 signals)

Compute Resources

Signal Description Derived Metrics
cpu_utilization Processor load Average, peak, per-core distribution
memory_usage RAM consumption Heap/stack, swap activity, OOM proximity
disk_io Storage throughput IOPS, latency, queue depth
network_io NIC throughput Bytes in/out, packet rate

Application Health

Signal Description Derived Metrics
request_rate Incoming traffic Requests/sec, burst detection
error_rate Failure ratio 5xx rate, exception frequency
response_time Latency P50/P95/P99, anomaly detection
queue_depth Backlog Size, growth rate, drain time

Trust Dynamics

Signal Description Derived Metrics
throughput Session velocity Sessions/sec processing rate
trust_mass Friction capacity 100 - Accumulated Risk
env_friction Instantaneous risk Current environmental resistance (0-100)
accumulated_risk Session risk Total risk accrued during session

Infrastructure

Signal Description Derived Metrics
connection_pool Pool status Active/idle/waiting counts
thread_count Concurrency Active threads, pool exhaustion
gc_pressure Garbage collection Pause time, frequency, heap churn

Attestations: The trust anchors

What It Measures

Attestation telemetry captures cryptographic proofs and human validations. Digital signatures, audit certifications, provenance trails, and explicit approvals all provide external verification that grounds trust in something more than self-reported behavior.

This is the only signal category that can directly influence the Soul dimension—attestations carry the weight of constitutional constraints, jurisdictional requirements, and consent verification.

Why It Matters

Attestations provide trust that cannot be fabricated by the subject being evaluated. An agent can manipulate its own logs, but it can't forge a signature from a trusted third party. Attestations anchor the trust calculation in external reality.

When multiple independent attesters agree, confidence compounds. When attestations are missing or stale, Observer confidence drops. When constitutional constraints are attested (GDPR consent, TK Labels), Soul gates engage.

Key Signals

Digital Signatures: Cryptographic proofs of origin
Audit Certifications: SOC2, ISO 27001, third-party audits
Provenance Trails: Chain of custody, origin verification
Human Approvals: Manual authorizations, break-glass
Consent Records: GDPR consent, opt-in/opt-out state
Sovereignty Markers: TK Labels, OCAP/CARE, geofences
Peer Attestations: Cross-agent vouching, mesh trust
Time Anchors: Notarization, timestamping

Signal Sources

PKI/Certificate Authorities, Audit Platforms, Consent Management, Blockchain/Notary, Identity Providers, Governance Systems

ARQ Mapping

Accessibility: 10%
Retainability: 20%
Quality: 70%

Tensor Projection

Primary: Observer, Soul
Secondary: Inertia

Full Signal Catalog (~30 signals)

Cryptographic Proofs

Signal Description Derived Metrics
signature_valid Digital signature verification Signer identity, algorithm strength, timestamp
certificate_chain PKI chain validation Chain completeness, revocation status
hash_verification Content integrity Match status, algorithm used

Audit & Compliance

Signal Description Derived Metrics
audit_certification Third-party audit status Certification type, expiry, scope
compliance_attestation Regulatory compliance Standard (SOC2, ISO, etc.), last audit date
policy_attestation Internal policy compliance Policy version, attestation freshness

Consent & Sovereignty

Signal Description Derived Metrics
consent_state User consent record Consent type, timestamp, scope
data_sovereignty Jurisdictional constraints TK Labels, OCAP/CARE markers, geofence status
opt_out_flags Explicit denials Scope of denial, timestamp

Peer & Human Validation

Signal Description Derived Metrics
peer_attestation Cross-agent vouching Attester trust level, attestation count
human_approval Manual authorization Approver identity, approval scope, expiry
break_glass_event Emergency override Override reason, authorizer, audit trail

The Signal Pipeline: From Telemetry to Trust

Before signals reach the seven-dimensional tensor, they pass through a classification layer called ARQ — the foundational physics of digital experience.

The Trust Pipeline

Raw signals → Classification → Projection → Score

Telemetry (Packets, Logs, Metrics, Attestations) → ARQ Classify (Accessibility, Retainability, Quality) → Context Tensor (multidimensional trust geometry) → Trust Score (E = 0–100: action permitted?)
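The four stages can be sketched in code. This is a minimal illustration only: the signal names, normalization, and weights below are assumptions for the example, not the KTP classifiers or RFC weightings.

```python
def arq_classify(signals):
    """Bucket normalized telemetry (0-1 values) into the three ARQ classes."""
    return {
        "accessibility": signals["reachable"] * signals["latency_ok"],
        "retainability": signals["session_stable"],
        "quality": 1.0 - signals["error_rate"],
    }

def project_to_tensor(arq):
    """Project ARQ classes onto a subset of the tensor dimensions."""
    return {
        "mass": arq["accessibility"],
        "inertia": arq["retainability"],
        "heat": 1.0 - arq["quality"],  # poor quality reads as friction
    }

def trust_score(tensor):
    """Collapse the tensor into E on a 0-100 scale (toy weighting)."""
    e = (0.4 * tensor["mass"] + 0.4 * tensor["inertia"]
         + 0.2 * (1.0 - tensor["heat"]))
    return round(100 * e, 1)

signals = {"reachable": 1.0, "latency_ok": 0.9,
           "session_stable": 0.8, "error_rate": 0.05}
score = trust_score(project_to_tensor(arq_classify(signals)))  # ≈ 87.0
```

The point of the shape, not the numbers: each stage consumes the previous stage's output, so a degraded raw signal propagates all the way through to the final score.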

The Three Foundational Questions

Every signal entering the system is classified by three fundamental questions about the digital experience it represents:

Accessibility: Reachability
"Can we reach it?"
Measures whether the resource, agent, or service is reachable. Latency, packet loss, handshake success, and path availability all contribute. If you can't reach it, nothing else matters.
Projects into: Mass, Time

Retainability: Stability
"Can we hold it?"
Measures whether the connection can be sustained: session stability, connection drops, retry rates, and consistency over time. Reaching something once means little if you can't stay connected.
Projects into: Inertia, Momentum

Quality: Fidelity
"Is it good?"
Measures the integrity and performance of the interaction: error rates, throughput, security signals, and validation status. A stable connection that delivers garbage is still a failure.
Projects into: Heat, Observer

ARQ-to-Tensor Projection Matrix

Each ARQ classification feeds specific tensor dimensions. The mapping is not 1:1 — signals often influence multiple dimensions with different weights.

ARQ Input | Primary Dimensions | Secondary | Signal Examples
Accessibility | Mass, Time | Momentum | Latency, packet loss, reachability, handshake success
Retainability | Inertia, Momentum | Mass | Session stability, connection drops, retry rates
Quality | Heat, Observer | Inertia | Error rates, throughput, security signals, attestations

What about Soul? The Soul dimension stands apart from the ARQ pipeline. It is not derived from telemetry signals — it represents constitutional constraints (jurisdiction, consent, sovereignty) that exist outside the physics of network performance. Soul is checked separately and can veto any action regardless of ARQ-derived trust scores.
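The projection matrix can be encoded directly as a weighted mapping. The numeric weights below are illustrative placeholders (the table specifies only primary/secondary roles, not values).

```python
# Weights are assumed for illustration; the RFC defines real values.
PROJECTION = {
    "accessibility": {"mass": 0.4, "time": 0.4, "momentum": 0.2},
    "retainability": {"inertia": 0.4, "momentum": 0.4, "mass": 0.2},
    "quality":       {"heat": 0.4, "observer": 0.4, "inertia": 0.2},
}

def project(arq_scores):
    """Accumulate each ARQ class's weighted contribution per dimension."""
    dims = {}
    for arq_class, score in arq_scores.items():
        for dim, weight in PROJECTION[arq_class].items():
            dims[dim] = dims.get(dim, 0.0) + weight * score
    return dims
```

Note that Soul has no entry in the mapping: consistent with the text above, it is gated outside the ARQ pipeline entirely.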

Dimension Lens

Each dimension has a distinct signal signature and behavioral effect; each is explored below.

Mass: The weight of observed behavior

How It Works

Mass measures the density and volume of telemetry flowing through the system. Think of it as gravitational weight—agents with more observed, consistent behavior carry more mass and are harder to perturb.

Unlike a simple counter, Mass rewards sustained activity over spikes. An agent processing 1,000 requests over an hour gains more mass than one processing 1,000 requests in a burst. The tensor values consistency and predictability.

Signal Source | Example Signals | Influence
Packets | Request volume, throughput, bytes transferred | Primary
Logs | Event density, audit trail depth | Secondary
Metrics | Concurrent sessions, queue depth, active connections | Secondary

Behavioral Pattern

Consistency compounds. Steady behavior over time builds mass more effectively than volume spikes. The tensor rewards agents who show up reliably, not those who flood the system episodically.

Failure Mode

Noisy traffic inflation. High-volume garbage requests can artificially inflate Mass without genuine trust improvement. Anti-Goodhart measures detect and penalize this pattern.

Interacts with: Momentum (velocity × mass = force), Inertia (stability anchors gains), Heat (stress erodes accumulated mass)
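A burst-resistant accrual rule illustrates why steady traffic outweighs spikes. The per-window cap is an assumed parameter for this sketch, not a KTP constant.

```python
def accrue_mass(requests_per_window, cap=50):
    """Sum per-window activity, capping each window so a burst
    cannot substitute for sustained presence."""
    return sum(min(r, cap) for r in requests_per_window)

steady = accrue_mass([17] * 60)          # ~1,000 requests spread evenly
burst  = accrue_mass([1000] + [0] * 59)  # same volume in a single window
# steady accrues far more mass than burst, despite equal total volume
```

The cap is also a crude anti-Goodhart measure: flooding one window with garbage requests buys at most one window's worth of mass.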

Momentum: The velocity of trust change

How It Works

Momentum captures the rate and direction of change in an agent's trust posture. It's not about where you are—it's about where you're heading and how fast.

Positive momentum (improving trust trajectory) can accelerate permission grants. Negative momentum (declining trust) triggers early intervention before thresholds are crossed. The system watches the derivative, not just the value.

Signal Source | Example Signals | Influence
Trust Deltas | Score changes per window, trend direction | Primary
Volatility | Swing magnitude, oscillation frequency | Primary
Step Changes | Sudden jumps, threshold crossings | Secondary

Behavioral Pattern

Direction matters more than position. An agent at E=60 with positive momentum may be granted more than one at E=75 with negative momentum. The system anticipates where you'll be, not just where you are.

Failure Mode

Oscillation whiplash. Rapidly swinging between states—even if averaging to a good score—signals instability. The tensor penalizes agents that can't hold a steady course.

Interacts with: Mass (momentum × mass = force), Inertia (high inertia dampens momentum swings), Time (recent momentum weighted higher)
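The "direction beats position" claim can be made concrete with a naive linear extrapolation; the look-ahead horizon and the extrapolation itself are assumptions for illustration.

```python
def effective_score(history, horizon=3):
    """Blend current position with recent trend: extrapolate the
    average slope of the score history a few windows ahead."""
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + horizon * slope

rising  = effective_score([50, 55, 60])  # at E=60, climbing
falling = effective_score([85, 80, 75])  # at E=75, declining
# rising (75.0) overtakes falling (60.0): the derivative matters
```

This reproduces the example from the text: the agent at E=60 with positive momentum is projected above the agent at E=75 with negative momentum.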

Inertia: Resistance to sudden change

How It Works

Inertia measures an agent's resistance to rapid fluctuation. High inertia means the agent has established stable patterns that dampen noise. Low inertia indicates volatility and fragility.

Think of inertia as behavioral ballast. Agents with deep, consistent histories are harder to knock off course by a single bad event—but also slower to recover from genuine compromise. It's a stabilizing force that cuts both ways.

Signal Source | Example Signals | Influence
Configuration | Config stability, drift detection | Primary
Dependencies | Dependency consistency, version churn | Primary
Behavior | Pattern consistency, routine adherence | Secondary

Behavioral Pattern

Stability is earned over time. New agents have low inertia by design—they haven't proven themselves yet. Inertia builds through sustained, predictable operation. It cannot be rushed.

Failure Mode

Rapid drift signals fragility. Frequent configuration changes, dependency updates, or behavioral shifts indicate an agent that hasn't settled. The tensor interprets this as risk.

Interacts with: Momentum (inertia dampens momentum swings), Mass (high mass + high inertia = stable anchor), Heat (heat can overcome inertia under sustained stress)
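One simple way to model inertia as behavioral ballast is as the smoothing factor of an exponential moving average: high inertia means a single new observation moves the accumulated state only slightly. The specific values are illustrative.

```python
def update(state, observation, inertia):
    """Exponentially smoothed state: inertia damps the pull each
    new observation exerts on the accumulated estimate."""
    alpha = 1.0 - inertia  # low inertia -> highly reactive
    return state + alpha * (observation - state)

stable  = update(80.0, 20.0, inertia=0.9)  # one bad event: drops to ~74
fragile = update(80.0, 20.0, inertia=0.1)  # same event: crashes to ~26
```

The cut-both-ways property from the text falls out directly: with `observation` above `state` (genuine recovery), the high-inertia agent climbs back just as slowly.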

Heat: Environmental stress and friction

How It Works

Heat measures environmental stress, anomaly load, and adversarial pressure. It's the friction coefficient of the trust environment—the resistance that actions must overcome.

Heat rises fast but cools slowly. A burst of errors, a spike in blocked requests, or detected attack patterns all generate heat. Recovery requires sustained periods of clean operation—there are no shortcuts to cooling down.

Signal Source | Example Signals | Influence
Errors | Error rates, exception frequency, 5xx responses | Primary
Security | WAF blocks, anomaly detections, threat indicators | Primary
Load | CPU pressure, memory saturation, queue depth | Secondary

Behavioral Pattern

Heat is asymmetric. It spikes immediately on stress events but decays on a longer curve. This prevents agents from rapidly alternating between attack and recovery to game the system.

Failure Mode

Sustained heat collapse. Prolonged stress doesn't just reduce trust—it can trigger adaptive dormancy, forcing the agent into Hibernation mode until the environment stabilizes.

Interacts with: Mass (heat erodes accumulated mass), Inertia (high inertia resists heat longer), Observer (observed heat is weighted higher than inferred)
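The "rises fast, cools slowly" asymmetry can be sketched as a two-regime update: spikes are absorbed instantly, cooling follows a slow exponential decay. The cooling rate is an assumed parameter.

```python
def step_heat(heat, stress, cool_rate=0.05):
    """Jump straight to the stress level on a spike; otherwise
    decay the current heat by a small fraction per window."""
    if stress > heat:
        return stress
    return heat * (1.0 - cool_rate)

h = step_heat(0.0, 0.8)    # incident: heat jumps immediately to 0.8
for _ in range(10):
    h = step_heat(h, 0.0)  # ten quiet windows later...
# h is still roughly 0.48: cooling deliberately lags the spike
```

Because the decay is multiplicative, alternating attack and recovery windows cannot average the heat away, which is exactly the gaming pattern the asymmetry is meant to defeat.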

Time: Temporal context and signal decay

How It Works

Time captures temporal context, signal freshness, and decay functions. Not all history is equal—recent signals carry more weight than stale ones, and the tensor applies exponential decay to age out old data.

Time also encodes session context. Long-lived sessions shift baselines differently than ephemeral requests. The tensor understands that an agent's behavior at hour 1 of a session may differ from hour 8—and accounts for it.

Signal Source | Example Signals | Influence
Freshness | Signal age, last-seen timestamps | Primary
Session | Session duration, activity windows | Primary
History | Historical baselines, trend windows | Secondary

Behavioral Pattern

Recency dominates. A clean last hour matters more than a clean last month. The tensor implements exponential decay—old signals fade, giving agents the ability to recover from past mistakes.

Failure Mode

Stale signal poisoning. If the tensor relies on outdated data, it may grant trust based on conditions that no longer exist. Time decay prevents this, but gaps in telemetry can still mislead.

Interacts with: Momentum (recent momentum weighted higher), Mass (old mass decays without fresh signals), Observer (observation freshness affects confidence)
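Exponential decay is the standard way to make recency dominate. In this sketch a signal's weight halves every `half_life` windows; the half-life value is an assumption, not a KTP constant.

```python
def decay_weight(age_windows, half_life=6):
    """Weight halves every half_life windows of age."""
    return 0.5 ** (age_windows / half_life)

def weighted_score(signals):
    """signals: iterable of (value, age_in_windows) pairs;
    returns the decay-weighted average of the values."""
    num = sum(value * decay_weight(age) for value, age in signals)
    den = sum(decay_weight(age) for _, age in signals)
    return num / den

# A clean fresh window outweighs an equally sized stale failure:
weighted_score([(100, 0), (0, 6)])  # ≈ 66.7, tilted toward the fresh signal
```

This is also how agents recover from past mistakes: the old failure never disappears, but its weight asymptotically approaches zero.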

Observer: Who is watching, and how reliably

How It Works

Observer measures attestation coverage, audit visibility, and peer corroboration. Trust confidence depends not just on what happened, but on how well it was witnessed.

Multiple independent observers raise confidence. A single observer—or gaps in observation—lower it. The tensor implements observer-weighted trust: signals from high-trust attesters carry more weight than those from unknown sources.

Signal Source | Example Signals | Influence
Attestations | Signed proofs, audit logs, peer validations | Primary
Coverage | Observation gaps, blind spot detection | Primary
Peer Visibility | Cross-agent corroboration, mesh attestation | Secondary

Behavioral Pattern

Independent witnesses compound trust. Three independent observers seeing the same behavior creates higher confidence than one observer seeing it three times. The tensor rewards attestation diversity.

Failure Mode

Blind spot exploitation. Attackers may target gaps in observation coverage. The tensor flags low-visibility zones and may require additional attestation before granting high-privilege actions.

Interacts with: Heat (observed heat weighted higher), Time (stale attestations decay), Soul (Soul constraints require explicit observer confirmation)
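Why independent witnesses compound: if attesters are truly independent, the chance that all of them missed a misbehavior shrinks multiplicatively. Independence is the key assumption in this sketch.

```python
from math import prod

def combined_confidence(attesters):
    """Probability that at least one independent attester would
    catch a misbehavior, given each one's individual confidence."""
    return 1.0 - prod(1.0 - c for c in attesters)

three_observers = combined_confidence([0.7, 0.7, 0.7])  # ≈ 0.973
one_observer    = combined_confidence([0.7])            # 0.7
```

Note that one observer reporting three times contributes only a single term, which is exactly why the tensor rewards attestation diversity rather than attestation volume.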

Soul: Constitutional constraints that cannot be overridden

How It Works

Soul is the constitutional veto layer. Unlike the other six dimensions (which contribute weighted values to trust calculations), Soul operates as a binary gate. If a Soul constraint is violated, the action is forbidden—regardless of how high the trust score is.

Soul encodes immutable constraints: jurisdictional requirements, consent state, Indigenous data sovereignty (TK Labels, OCAP/CARE), sacred land protections, and governance vetoes. These are the laws of the digital universe that cannot be negotiated away for operational convenience.

Signal Source | Example Signals | Influence
Jurisdiction | Regulatory flags, data residency requirements | Veto
Consent | User consent state, opt-out flags | Veto
Sovereignty | TK Labels, OCAP/CARE markers, sacred geofences | Veto

Behavioral Pattern

Soul is not a score—it's a gate. An agent with E=99 is still blocked if even one Soul constraint is violated. This ensures that human rights, cultural sovereignty, and constitutional constraints are never traded away for performance or efficiency.

Failure Mode

Soul violations are absolute. There is no gradual degradation, no warning tier—just immediate denial. Recovery requires explicit governance action to change the underlying constraint, not just improved behavior.

Interacts with: Observer (Soul constraints require explicit attestation), All Dimensions (Soul overrides any numerical trust calculation)
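The veto-before-scoring ordering can be sketched directly; the constraint names and threshold are illustrative assumptions.

```python
def authorize(trust_score, soul_constraints, threshold=50):
    """Soul checks run first: a single violated constraint vetoes
    the action no matter how high the numeric trust score is."""
    for name, violated in soul_constraints.items():
        if violated:
            return f"DENY (soul: {name})"
    return "PERMIT" if trust_score >= threshold else "DENY (score)"

authorize(99, {"consent_revoked": True})   # denied despite E=99
authorize(60, {"consent_revoked": False})  # permitted
```

Structurally, Soul is a short-circuit ahead of the scoring math, which is what makes it impossible to buy back a violation with good behavior elsewhere in the tensor.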

Risk Deflation

Trust isn't just built—it's also deflated by risk. The tensor doesn't ignore threats; it applies them as friction that reduces your effective trust score.

Etrust = Ebase × (1 − R)

Your base performance (Ebase) is deflated by the risk factor (1 − R): higher risk means lower effective trust (Etrust).

How Risk Deflates Trust

Even a perfect operational score can be crushed by unaddressed risk:

Security Risk

Unpatched vulnerabilities, exposed secrets, insecure configurations, active threat indicators.

Ebase = 85, R = 0.40 → Etrust = 85 × 0.6 = 51

40% security risk drops an excellent score to marginal.

Compliance Risk

Regulatory violations, audit failures, policy breaches, expired certifications.

Ebase = 92, R = 0.25 → Etrust = 92 × 0.75 = 69

25% compliance risk turns excellent into merely acceptable.

Behavioral Risk

Anomalous patterns, deviation from baseline, sudden capability changes.

Ebase = 78, R = 0.50 → Etrust = 78 × 0.5 = 39

50% behavioral risk triggers adaptive dormancy.

Risk Compounds

Multiple risk factors multiply together. Security risk of 20% plus compliance risk of 15% doesn't equal 35%—it equals 0.8 × 0.85 = 0.68, or 32% total deflation. Risk stacks against you.
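The compounding rule above, in code: the surviving trust factors multiply, so two moderate risks deflate more than their sum suggests.

```python
from math import prod

def deflate(e_base, risks):
    """risks: fractional risk factors, e.g. [0.20, 0.15].
    Each factor independently deflates the surviving trust."""
    return e_base * prod(1.0 - r for r in risks)

deflate(100, [0.20, 0.15])  # 100 x 0.8 x 0.85 ≈ 68.0, i.e. 32% deflation
```

With no risks the product is empty and the score passes through untouched, which matches the intuition that deflation only ever subtracts.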

Deep Dive: For full risk taxonomy, scoring methodology, and mitigation strategies, see Risk & Deflation.