OPC UA Benchmark

This document records the performance benchmarking process and results of NG Gateway for the OPC UA protocol. The tests run the gateway in a resource-constrained Docker container (1 CPU / 1 GB memory) and use an external OPC UA simulation server to provide a realistic protocol workload. A Prometheus + Grafana + cAdvisor monitoring stack collects container-level resource metrics in real time, allowing the gateway's resource consumption and operational stability as an OPC UA client to be evaluated systematically across different collection scales and frequencies.

The tests cover the following dimensions:

  • Collection Scale Gradient: From a single channel with 10 devices (10,000 points) scaling up to 10 channels with 100 devices (100,000 points)
  • Collection Frequency Comparison: Standard cycle (1000 ms) vs. high-frequency collection (100 ms)
  • Mixed Workload Stress Test: Large-scale data collection combined with concurrent random command dispatching

Test Environment

Hardware Platform

| Item | Specification |
| --- | --- |
| CPU | 4 Cores |
| Memory | 24 GB |
| OS | Debian GNU/Linux 12 |

Gateway Deployment

The gateway is deployed as a container via Docker Compose, with resource limits applied to simulate a constrained edge-side environment:

| Resource | Limit | Reservation |
| --- | --- | --- |
| CPU | 1.0 Core | 0.5 Core |
| Memory | 1000 MiB | 256 MiB |

TIP

Resource constraints are configured via Docker Compose deploy.resources.limits, consistent with Kubernetes Pod resource quota semantics.
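For reference, the same constraints expressed in Kubernetes Pod terms might look like the following sketch (the Pod and container names are illustrative, not part of this setup):

```yaml
# Illustrative Kubernetes equivalent of the Compose resource constraints above.
apiVersion: v1
kind: Pod
metadata:
  name: ng-gateway
spec:
  containers:
    - name: gateway
      image: shiyuecamus/ng-gateway:latest
      resources:
        requests:          # analogous to Compose "reservations"
          cpu: "500m"
          memory: "256Mi"
        limits:            # analogous to Compose "limits"
          cpu: "1"
          memory: "1000Mi"
```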

docker-compose.yaml
```yaml
services:
  gateway:
    image: ${GATEWAY_IMAGE:-shiyuecamus/ng-gateway}:${GATEWAY_TAG:-latest}
    container_name: ng-gateway
    restart: unless-stopped
    ports:
      - "${GATEWAY_HTTP_PORT:-8978}:5678"
      - "${GATEWAY_WS_PORT:-8979}:5679"
    volumes:
      - gateway-data:/app/data
      - gateway-drivers:/app/drivers/custom
      - gateway-plugins:/app/plugins/custom
    deploy:
      resources:
        limits:
          cpus: "${BENCH_CPU_LIMIT:-1.0}"
          memory: "${BENCH_MEM_LIMIT:-1000M}"
        reservations:
          cpus: "${BENCH_CPU_RESERVE:-0.5}"
          memory: "${BENCH_MEM_RESERVE:-256M}"

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:v0.51.0
    container_name: ng-cadvisor
    restart: unless-stopped
    ports:
      - "8080:8080"
    command:
      - --docker_only=true
      - --housekeeping_interval=2s
      - --store_container_labels=true
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /var/run/docker.sock:/var/run/docker.sock:rw
      - /sys:/sys:ro
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
    privileged: true
    devices:
      - /dev/kmsg:/dev/kmsg

  prometheus:
    image: prom/prometheus:latest
    container_name: ng-prometheus
    restart: unless-stopped
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - ng-prometheus-data:/prometheus
    command:
      - --config.file=/etc/prometheus/prometheus.yml
      - --storage.tsdb.path=/prometheus
      - --web.enable-lifecycle
    depends_on:
      - cadvisor
      - gateway

  grafana:
    image: grafana/grafana:latest
    container_name: ng-grafana
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      GF_SECURITY_ADMIN_USER: admin
      GF_SECURITY_ADMIN_PASSWORD: admin
      GF_USERS_ALLOW_SIGN_UP: "false"
      GF_PATHS_PROVISIONING: /etc/grafana/provisioning
    volumes:
      - ./grafana/provisioning:/etc/grafana/provisioning:ro
      - ./grafana/dashboards:/var/lib/grafana/dashboards:ro
      - ng-grafana-data:/var/lib/grafana
    depends_on:
      - prometheus

volumes:
  ng-prometheus-data:
  ng-grafana-data:
  gateway-data:
  gateway-drivers:
  gateway-plugins:
```

Test Tools

OPC UA Simulation Server

Prosys OPC UA Simulation Server is used as the OPC UA server. It is a fully featured, free OPC UA simulation tool supporting the OPC UA TCP binary transport protocol. It can simulate nodes of various data types (Analog / Discrete / String, etc.), provides flexible address-space configuration and data-change simulation (sine, random, and increment patterns), and is widely used for OPC UA client development, debugging, and performance verification.

Simulation Topology:

| Item | Configuration |
| --- | --- |
| Server Endpoint | opc.tcp://<host>:4840 |
| Simulation Node Type | Analog (Float / Double) |
| Data Change Mode | Periodic random updates |

Mapping Relationship

  • Each OPC UA server endpoint maps to a Channel in ng-gateway — an independent OPC UA session connection
  • Each logical node group maps to a Device within the channel — node changes are collected in batches via the Subscription mechanism
  • Test scenarios create multiple channel connections to the same or different server instances as needed to build different collection workloads
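Purely as an illustration of this mapping — the field names below are hypothetical and do not reflect the actual ng-gateway configuration schema — a channel/device layout for one test scenario could be sketched as:

```yaml
# Hypothetical sketch only: illustrates channel -> device -> points mapping,
# not the real ng-gateway configuration format.
channels:
  - name: channel-01                 # one independent OPC UA session
    endpoint: opc.tcp://<host>:4840
    devices:
      - name: device-01              # one logical node group
        points: 1000                 # monitored items batched in one Subscription
        interval: 1000ms             # collection period
```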

Performance Monitoring Stack

Resource metrics during testing are collected using the cAdvisor + Prometheus + Grafana stack, all orchestrated alongside the gateway container via the same docker compose file:

| Component | Version | Role |
| --- | --- | --- |
| cAdvisor | v0.51.0 | Collects container-level resource metrics: CPU usage, memory (RSS / Cache), network bytes sent/received |
| Prometheus | latest | Scrapes the cAdvisor /metrics endpoint every 2 s and persists the time-series data |
| Grafana | latest | Visualization dashboards with pre-configured cAdvisor Docker container monitoring |
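The prometheus.yml mounted into the Prometheus container (see the Compose file) is not reproduced here; a minimal configuration consistent with the 2 s scrape interval could look like the following sketch (the job name is illustrative):

```yaml
# Minimal sketch of prometheus.yml for scraping cAdvisor every 2 s.
global:
  scrape_interval: 2s
scrape_configs:
  - job_name: cadvisor
    static_configs:
      - targets: ["cadvisor:8080"]   # cAdvisor /metrics inside the Compose network
```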

Core Metrics Collected:

| Metric | Prometheus Metric | Description |
| --- | --- | --- |
| CPU Usage | container_cpu_usage_seconds_total | Cumulative CPU seconds consumed per container; usage percentage is derived via rate() |
| Memory Usage | container_memory_rss | Resident set size (RSS) |
| Network Receive | container_network_receive_bytes_total | Total bytes received (rate computed) |
| Network Transmit | container_network_transmit_bytes_total | Total bytes transmitted (rate computed) |
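The per-container rates referenced above can be derived with standard PromQL expressions such as the following (the container name matches this Compose file; the 1 m window is a typical choice, not one prescribed by this setup):

```promql
# CPU usage as a percentage of one core
rate(container_cpu_usage_seconds_total{name="ng-gateway"}[1m]) * 100

# Memory RSS in bytes (a gauge, no rate needed)
container_memory_rss{name="ng-gateway"}

# Network receive / transmit rates in bytes per second
rate(container_network_receive_bytes_total{name="ng-gateway"}[1m])
rate(container_network_transmit_bytes_total{name="ng-gateway"}[1m])
```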

Quick Start:

```bash
cd deploy/compose/bench && docker compose up -d
```

| Service | Access URL |
| --- | --- |
| Grafana | http://localhost:3000 (admin / admin) |
| Prometheus | http://localhost:9090 |
| cAdvisor | http://localhost:8080 |
| ng-gateway | http://localhost:8978 |

Summary

Data Collection Performance

| Scenario | Channels | Devices/Channel | Points/Device | Frequency | Total Points | Type | Memory | CPU | Network Bandwidth |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 10 | 1,000 | 1000 ms | 10,000 | Float32 | 67.1 MiB | 3.12% | rx: 434.0 kB/s / tx: 356.0 kB/s |
| 2 | 5 | 10 | 1,000 | 1000 ms | 50,000 | Float32 | 115.0 MiB | 5.71% | rx: 1.32 MB/s / tx: 1.19 MB/s |
| 3 | 10 | 10 | 1,000 | 1000 ms | 100,000 | Float32 | 165.0 MiB | 8.28% | rx: 2.38 MB/s / tx: 1.95 MB/s |
| 4 | 1 | 1 | 1,000 | 100 ms | 1,000 | Float32 | 45.0 MiB | 3.50% | rx: 216.0 kB/s / tx: 178.0 kB/s |
| 5 | 5 | 1 | 1,000 | 100 ms | 5,000 | Float32 | 51.6 MiB | 6.82% | rx: 1.08 MB/s / tx: 887.0 kB/s |
| 6 | 10 | 1 | 1,000 | 100 ms | 10,000 | Float32 | 56.3 MiB | 9.48% | rx: 2.16 MB/s / tx: 1.78 MB/s |
| 7 | 10 | 10 | 1,000 | 1000 ms | 100,000 | Float32 | 165.0 MiB | 8.28% | rx: 2.38 MB/s / tx: 1.95 MB/s |

Mixed Load Performance

| Scenario | Channels | Devices/Channel | Points/Device | Frequency | Total Points | Type | Downlink Method | Downlink Points | Iterations | Min Latency | Max Latency | Avg Latency |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 7 | 10 | 10 | 1,000 | 1000 ms | 100,000 | Float32 | API | 100 | 100 | 1.795 ms | 113.257 ms | 4.194 ms |

Test Scenarios & Results

Scenario 1: Basic Collection

  • Config: 1 Channel · 10 Devices · 1,000 Points/Device · 1000 ms Period (Total 10,000 Points)

Metrics

| Memory | CPU | Network Bandwidth |
| --- | --- | --- |
| 67.1 MiB | 3.12% | rx: 434.0 kB/s / tx: 356.0 kB/s |

Resource Monitor Screenshots

Scenario 1 CPU · Scenario 1 Memory · Scenario 1 Network


Scenario 2: Medium Scale Collection

  • Config: 5 Channels · 10 Devices · 1,000 Points/Device · 1000 ms Period (Total 50,000 Points)

Metrics

| Memory | CPU | Network Bandwidth |
| --- | --- | --- |
| 115.0 MiB | 5.71% | rx: 1.32 MB/s / tx: 1.19 MB/s |

Resource Monitor Screenshots

Scenario 2 CPU · Scenario 2 Memory · Scenario 2 Network


Scenario 3: Large Scale Collection

  • Config: 10 Channels · 10 Devices · 1,000 Points/Device · 1000 ms Period (Total 100,000 Points)

Metrics

| Memory | CPU | Network Bandwidth |
| --- | --- | --- |
| 165.0 MiB | 8.28% | rx: 2.38 MB/s / tx: 1.95 MB/s |

Resource Monitor Screenshots

Scenario 3 CPU · Scenario 3 Memory · Scenario 3 Network


Scenario 4: High Frequency (Single Channel)

  • Config: 1 Channel · 1 Device · 1,000 Points/Device · 100 ms Period (Total 1,000 Points)

Metrics

| Memory | CPU | Network Bandwidth |
| --- | --- | --- |
| 45.0 MiB | 3.50% | rx: 216.0 kB/s / tx: 178.0 kB/s |

Resource Monitor Screenshots

Scenario 4 CPU · Scenario 4 Memory · Scenario 4 Network


Scenario 5: High Frequency (Multi Channel)

  • Config: 5 Channels · 1 Device · 1,000 Points/Device · 100 ms Period (Total 5,000 Points)

Metrics

| Memory | CPU | Network Bandwidth |
| --- | --- | --- |
| 51.6 MiB | 6.82% | rx: 1.08 MB/s / tx: 887.0 kB/s |

Resource Monitor Screenshots

Scenario 5 CPU · Scenario 5 Memory · Scenario 5 Network


Scenario 6: High Frequency (Large Scale)

  • Config: 10 Channels · 1 Device · 1,000 Points/Device · 100 ms Period (Total 10,000 Points)

Metrics

| Memory | CPU | Network Bandwidth |
| --- | --- | --- |
| 56.3 MiB | 9.48% | rx: 2.16 MB/s / tx: 1.78 MB/s |

Resource Monitor Screenshots

Scenario 6 CPU · Scenario 6 Memory · Scenario 6 Network


Scenario 7: Mixed Workload Stress Test

  • Config: 10 Channels · 10 Devices · 1,000 Points/Device · 1000 ms Period (Total 100,000 Points) + Random Command Dispatching

Metrics (Collection)

| Memory | CPU | Network Bandwidth |
| --- | --- | --- |
| 165.0 MiB | 8.28% | rx: 2.38 MB/s / tx: 1.95 MB/s |

Metrics (Downlink)

| Success/Fail | Min Latency | Max Latency | Avg Latency |
| --- | --- | --- | --- |
| 100 / 0 | 1.795 ms | 113.257 ms | 4.194 ms |

Resource Monitor Screenshots

Scenario 7 Console · Scenario 3 CPU · Scenario 3 Memory · Scenario 3 Network

Released under the Apache License 2.0.