MCC Protocol

A cryptographic orchestration layer for agents — built in Rust, secured by certificate-based verification

MCC is a cryptographic orchestration layer for agents and a trusted execution agent framework implemented in Rust. The platform enables secure peer-to-peer communication across distributed networks using certificate-based authentication and end-to-end encryption. MCC is designed for IoT devices, industrial networks, and autonomous AI agent systems that require secure, low-latency communication without relying on centralized infrastructure.

Six properties that make MCC different from a VPN or service mesh.

Every architectural decision in MCC is traceable to one of these six goals. No marketing claims, no vague promises — just the properties the protocol actually enforces.

Verified-Trust Architecture

A trusted execution agent framework where every connection requires cryptographic verification. No packet moves without explicit identity, signed certificate, and matching tunnel configuration.

End-to-End Encryption

ChaCha20-Poly1305 AEAD encryption for all traffic. Double-encrypted: tunnel layer + RPC layer. Even compromised intermediate nodes cannot read your data.

Multi-Platform Support

Native TUN/TAP integration on Linux, Android, and macOS. Windows support planned via WinTUN. Same Rust core everywhere.

Resource Efficient

Optimized for embedded systems and resource-constrained devices. Base node uses ~10 MB memory. Idle CPU under 1%. Runs on MIPS-32 routers.

Certificate-Based Authentication

X25519 elliptic curve key exchange with Ed25519-signed certificates. Three-tier hierarchy: Certificate Authority → Network Certificates → Node & Tunnel Certificates.

Hierarchical Mesh Topology

Tree-based parent-child routing optimized for typical IoT deployments. Pancake routing engine with Dijkstra shortest-path computation and next-hop caching.

Six core modules. One Rust binary.

MCC ships as a single Rust binary that exposes a Web UI (Vue.js), a REST API (Axum), and a Unix socket API. Internally, the Core Node Engine orchestrates the Tunnels Manager, the Pancake routing engine, and the Crypto Engine, which together drive the platform-specific TUN device.

Core Node

Central orchestration engine with event-driven architecture using mio for async I/O. Manages peer connections, routing, message scheduling, keep-alive mechanisms, and parent failover.

Tunnel System

Secure endpoint-to-endpoint encrypted communication channels. Client-server tunnel model with configurable TCP/UDP port targets. Each tunnel has a unique cryptographic identity. Gateway tunnels bridge MCC and external networks.

Routing Engine — Pancake

Hierarchical tree-based topology with Dijkstra shortest-path routing and next-hop caching. Parent-child relationship model with automatic parent failover by priority ordering. Property-based routing via tags and DNS names.

Cryptography Module

X25519 ECDH for key exchange (RFC 7748). ChaCha20-Poly1305 AEAD for symmetric encryption (RFC 8439, which obsoletes RFC 7539). HKDF for key derivation (RFC 5869). Ed25519 for certificate signing (RFC 8032). Separate cipher instances per connection.

Name Resolution

DNS-based service discovery using BASE32-encoded node IDs. dnsmasq integration for local DNS. mDNS responder for zero-configuration networking on macOS. Tag-based naming for logical grouping.
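
The BASE32 node-ID naming above can be sketched with plain RFC 4648 base32, lowercased and unpadded so the result survives inside a case-insensitive DNS label. This illustrates the encoding itself, not MCC's exact label format:

```rust
/// RFC 4648 base32 alphabet, lowercased for use in DNS labels.
const ALPHABET: &[u8; 32] = b"abcdefghijklmnopqrstuvwxyz234567";

/// Encode raw bytes (e.g. a node ID) as unpadded lowercase base32.
fn base32_encode(data: &[u8]) -> String {
    let mut out = String::new();
    let mut buf: u32 = 0; // bit accumulator
    let mut bits: u32 = 0; // number of pending bits in `buf`
    for &byte in data {
        buf = (buf << 8) | byte as u32;
        bits += 8;
        while bits >= 5 {
            bits -= 5;
            out.push(ALPHABET[((buf >> bits) & 0x1f) as usize] as char);
            buf &= (1 << bits) - 1; // drop consumed bits so `buf` never overflows
        }
    }
    if bits > 0 {
        // Final partial group is left-aligned per RFC 4648; padding is omitted.
        out.push(ALPHABET[((buf << (5 - bits)) & 0x1f) as usize] as char);
    }
    out
}
```

For example, `base32_encode(b"foo")` yields `mzxw6`, matching the RFC 4648 test vector `MZXW6===` with padding stripped.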

TUN Device Interface

Cross-platform TUN device abstraction. Native implementations for Linux (linux.rs), macOS (macos.rs via utun), and Android (JNI bridge). Windows via WinTUN planned.

Five principles, enforced by the protocol itself.

MCC implements verified-trust networking as code, not as policy. Each principle below is enforced at the packet level — there is no admin panel where you can disable it.

01

Never Trust, Always Verify

Every packet requires cryptographic authentication. There is no implicit trust based on network location or device origin.

02

Least Privilege Access

Tunnels define explicit port and protocol permissions. A tunnel for tcp:443 cannot carry tcp:5432 traffic. Misuse drops the packet.
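
A minimal sketch of this rule, assuming a per-tunnel (protocol, port) whitelist consulted on every packet. `Proto` and `TunnelPolicy` are illustrative names, not MCC's actual API:

```rust
use std::collections::HashSet;

// Illustrative types, not MCC's actual API.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum Proto {
    Tcp,
    Udp,
}

struct TunnelPolicy {
    // Explicit (protocol, port) whitelist from the tunnel configuration.
    allowed: HashSet<(Proto, u16)>,
}

impl TunnelPolicy {
    /// True only for traffic the tunnel was configured to carry;
    /// everything else is dropped by the caller.
    fn permits(&self, proto: Proto, port: u16) -> bool {
        self.allowed.contains(&(proto, port))
    }
}
```

A tunnel built with `allowed = {(Tcp, 443)}` passes HTTPS but rejects `tcp:5432`, which is exactly the drop behavior described above.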

03

Certificate-Based Identity

All nodes are authenticated via signed certificates issued under a certificate hierarchy controlled by the network operator.

04

End-to-End Encryption

Data is encrypted at the source and decrypted only at the destination. Intermediate relay nodes can route but cannot read.

05

Network Segmentation

Tunnels isolate traffic between specific endpoints. A compromise in one tunnel does not affect any other tunnel or service.

Tunnels: cryptographically secured, port-restricted, NIS2-aligned.

A tunnel in MCC is a cryptographically secured, bidirectional communication channel between two endpoints with explicit protocol and port restrictions. Each tunnel has a unique Ed25519 endpoint identity, a client or server role, and an explicit TCP/UDP port whitelist; packets that don't match are silently dropped. By isolating individual communication channels and preventing lateral movement, the architecture aids compliance with the EU NIS2 Directive (2023) and the EU Cyber Resilience Act (2024), which is how regulators now expect critical infrastructure to be built.

Layer 1 — Tunnel Encryption

End-to-end for tunneled traffic

  • ChaCha20-Poly1305 AEAD payload encryption
  • Key derived from X25519 ECDH (local, remote)
  • HKDF-derived tunnel key with constant salt
  • Each tunnel has unique Ed25519 endpoint IDs

Layer 2 — RPC Encryption

Outer protection between hops

  • All RPC messages (except Init/InitAck) encrypted
  • Random 12-byte nonce prepended per message
  • Random salts sent in InitAck (different per connection)
  • Separate keys for incoming vs outgoing messages

Why double encryption?

The tunnel layer, which uses a constant nonce and salt for performance, is wrapped inside the RPC layer, which uses a random nonce and salt per message. The outer layer provides replay protection, fresh keys per session, and a second round of verification: even though the inner tunnel reuses cryptographic material for efficiency, an attacker who recovers tunnel ciphertext still has nothing to replay.
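
The per-message layout this implies can be sketched as pure framing. The ChaCha20-Poly1305 sealing itself is elided here; only the assumed `[12-byte nonce || ciphertext]` wire shape is shown, with hypothetical `frame`/`deframe` helpers:

```rust
// Wire-layout sketch only: the RPC layer prepends a fresh 12-byte nonce to
// each encrypted message. The AEAD step is elided; these helpers show the
// assumed [nonce || ciphertext] layout, nothing more.
const NONCE_LEN: usize = 12;

fn frame(nonce: [u8; NONCE_LEN], ciphertext: &[u8]) -> Vec<u8> {
    let mut msg = Vec::with_capacity(NONCE_LEN + ciphertext.len());
    msg.extend_from_slice(&nonce); // nonce travels in the clear
    msg.extend_from_slice(ciphertext);
    msg
}

fn deframe(msg: &[u8]) -> Option<([u8; NONCE_LEN], &[u8])> {
    if msg.len() < NONCE_LEN {
        return None; // too short to carry a nonce
    }
    let mut nonce = [0u8; NONCE_LEN];
    nonce.copy_from_slice(&msg[..NONCE_LEN]);
    Some((nonce, &msg[NONCE_LEN..]))
}
```

In practice the nonce must be freshly random per message, as the section above states; reusing one with the same key would break the AEAD's guarantees.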

Four IETF-standardized primitives. No homegrown cryptography.

Every cryptographic operation in MCC is built on a peer-reviewed RFC standard. We use the same primitives that secure Signal, WireGuard, and TLS 1.3.

Component            Standard   Reference
X25519 ECDH          RFC 7748   Curve25519 — 128-bit security
HKDF                 RFC 5869   HMAC-based key derivation
ChaCha20-Poly1305    RFC 8439   AEAD, ~1.5 GB/s software-only
Ed25519 Signatures   RFC 8032   EdDSA — used for certificates
Why X25519?

Current state-of-the-art elliptic curve. 128-bit security level (equivalent to 3072-bit RSA). Constant-time implementation prevents timing attacks. Used by Signal, WireGuard, TLS 1.3.

Why ChaCha20-Poly1305?

No hardware acceleration required (unlike AES-NI). ~1.5 GB/s on modern CPUs, still fast on weak hardware. Constant-time by design (cache-timing resistant). Ideal for MIPS-32 routers and embedded systems.

Hierarchical mesh, not flat DHT. Pancake replaces Kademlia.

MCC implements a hierarchical parent-child tree topology optimized for typical IoT deployments. The current implementation replaces flat Kademlia-style DHT routing with a hierarchical tree because most IoT networks naturally form trees with one or two root nodes, and flat DHT routing wastes bandwidth on networks that are not flat.

Pancake key features
  • Dijkstra shortest-path computation
  • Next-hop caching for performance
  • Automatic loop prevention
  • Parent failover with priority ordering
  • Property-based routing via tags and DNS names
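
As an illustration of the first two bullets (a sketch, not the Pancake source): Dijkstra from the local node over a weighted adjacency map, recording the first hop toward each destination so it can be cached for later forwarding decisions:

```rust
use std::cmp::Reverse;
use std::collections::{BinaryHeap, HashMap};

type NodeId = u32;

/// Run Dijkstra from `src` over `adj` (node -> [(neighbor, link cost)]) and
/// return the next-hop cache: destination -> first hop out of `src`.
fn next_hops(adj: &HashMap<NodeId, Vec<(NodeId, u32)>>, src: NodeId) -> HashMap<NodeId, NodeId> {
    let mut dist: HashMap<NodeId, u32> = HashMap::from([(src, 0)]);
    let mut first_hop: HashMap<NodeId, NodeId> = HashMap::new(); // the cache
    let mut heap = BinaryHeap::new();
    heap.push(Reverse((0u32, src, src))); // (cost, node, first hop on this path)

    while let Some(Reverse((cost, node, hop))) = heap.pop() {
        if cost > *dist.get(&node).unwrap_or(&u32::MAX) {
            continue; // stale heap entry
        }
        for &(next, w) in adj.get(&node).map(Vec::as_slice).unwrap_or(&[]) {
            let nd = cost + w;
            if nd < *dist.get(&next).unwrap_or(&u32::MAX) {
                dist.insert(next, nd);
                // Direct neighbors of `src` start their own path; deeper
                // nodes inherit the first hop already on the path.
                let h = if node == src { next } else { hop };
                first_hop.insert(next, h);
                heap.push(Reverse((nd, next, h)));
            }
        }
    }
    first_hop
}
```

Once `next_hops` is computed, forwarding a packet is a single `HashMap` lookup until the topology changes.
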

Why hierarchical, not flat?
  • Topology updates flow only upward to root, not bidirectionally
  • Eliminates the scanning overhead of flat DHTs
  • Single-direction updates are easier to reason about
  • Reflects the actual physical topology of typical MCC deployments

Failover behavior

Each child node maintains a priority-sorted list of parents (by IP:port). On timeout it switches to the next parent in the list, and it periodically pings higher-priority parents; when the original parent comes back, the child switches back. KeepAlive messages every 20 seconds maintain bidirectional connectivity through NAT. Parents are never removed from the topology (persistent configuration).
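
The failover rule can be sketched as a priority scan, with illustrative names. Parents stay in a fixed priority order and are never removed; the child simply uses the highest-priority parent that currently answers keep-alives:

```rust
// Hedged sketch of the failover rule, not MCC's actual types.
#[derive(Debug)]
struct Parent {
    addr: &'static str, // "IP:port", listed highest-priority first
    reachable: bool,    // updated by KeepAlive pings / timeouts
}

/// Select the highest-priority reachable parent, if any.
/// Because the list order never changes, recovery of a higher-priority
/// parent automatically switches the child back to it.
fn active_parent(parents: &[Parent]) -> Option<&Parent> {
    parents.iter().find(|p| p.reachable)
}
```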

Built for the smallest devices. Tested on the largest networks.

MCC was engineered for resource-constrained edge devices first. The same Rust binary runs on a Raspberry Pi Zero 2W, a Teltonika RUT956 router (128 MiB RAM), and 12,000 nodes on a single cloud server.

  • 12,000 nodes bootstrapped on a single cloud server (<30 s)
  • ~10 MB base node memory footprint
  • ~1.5 GB/s ChaCha20-Poly1305 throughput on modern CPUs
  • €1.7 saved per device per month vs. legacy MVNOs

Network size
  • Tested with hundreds of nodes
  • Root node bottleneck: full topology storage
  • Scales to thousands per region

Tunnels per node
  • Configurable via network certificate
  • Typical: 10-100 tunnels per node
  • Each tunnel ~2 KB memory overhead

Message rate
  • Event-driven architecture handles thousands/sec
  • UDP transport eliminates TCP handshake overhead
  • Packet loss tolerance via application-level retry

Linux, Android, macOS — all production. Windows planned.

The same Rust core runs on every supported platform with thin platform-specific TUN device implementations. Windows support is planned via WinTUN.

Platform   Status    Features
Linux      Full      TUN/TAP, iptables integration, capability management (CAP_NET_ADMIN, CAP_NET_RAW), kill switch, port rerouting
Android    Full      VPN service integration, JNI bindings via mcc-android, native DNS resolver, process-based routing
macOS      Full      Native TUN device support, mDNS responder for .local, system DNS integration
Windows    Planned   TUN/TAP via WinTUN
Servers, desktops, single-board
  • Raspberry Pi 4
  • Raspberry Pi Zero 2W
  • Intel NUC
  • Cloud servers (AMD EPYC, Intel Xeon)

amd64, arm64, armhf, armel — DEB, RPM, tar.gz packages

Industrial routers
  • Teltonika RUT956
  • Teltonika RUT950
  • Teltonika RUT200
  • Teltonika RUTX50

OpenWRT and derivatives — IPK and tar.gz packages

Three deployment scenarios. One protocol. One production fleet.

MCC fits three common deployment topologies: industrial IoT with on-premise parents, distributed services across data centers, and remote access for mobile devices. All three are running today.

Industrial IoT

  • Edge devices (sensors, actuators) connect to on-premise parent
  • On-premise parent connects to cloud root
  • Gateway tunnels expose specific services (Modbus, OPC-UA)
  • Private network certificate for isolation

Distributed Services

  • Microservices across multiple data centers
  • MCC provides secure overlay network
  • DNS-based service discovery
  • SSL certificate distribution for HTTPS

Remote Access

  • Mobile devices connect via public parent nodes
  • Tunnels to specific services (SSH, RDP, VNC)
  • Certificate-based access control via tunnel whitelists

In production

MIBO — 150-bus fleet, North Rhine-Westphalia

MCC is the connectivity layer powering MIBO's 150-bus fleet on the MINT network. Vehicle-signed telemetry (passenger count, CO₂, GPS) flows through MCC tunnels to a Solana & U2U-anchored data chain — production-grade, audit-ready, every day.

Identity layer

Multi-chain attestation

MCC node identities can be anchored on peaq, ICP, Solana, U2U, Linera, Celo, Hedera, Optimism and Lisk — same protocol, multiple roots of trust. Pick the chain your customers and regulators already audit.

Want to run MCC?

MCC is the connectivity layer underneath every Staex product — Connectivity, Connect & Transfer (MINT), and the Agentic Suite. Talk to us about deploying it on your own infrastructure, or dive straight into the documentation.