
# Modbus Benchmark

This document records the performance benchmark process and results of NG Gateway for the Modbus TCP protocol. The tests run the gateway in a resource-constrained Docker container (1 CPU / 1 GB memory) and use an external Modbus slave simulator to provide realistic protocol interaction workloads. A Prometheus + Grafana + cAdvisor monitoring stack collects container-level resource metrics in real time, enabling a systematic evaluation of the gateway's resource consumption and operational stability under different collection scales and frequencies.

The tests cover the following dimensions:

- **Collection Scale Gradient**: from a single channel with 10 devices (10,000 points) up to 10 channels with 100 devices (100,000 points)
- **Collection Frequency Comparison**: standard cycle (1000 ms) vs. high-frequency collection (100 ms)
- **Mixed Workload Stress Test**: large-scale data collection combined with concurrent random command dispatching

## Test Environment

### Hardware Platform

| Item | Specification |
| --- | --- |
| CPU | 4 Cores |
| Memory | 24 GB |
| OS | Debian GNU/Linux 12 |

### Gateway Deployment

The gateway is deployed as a Docker Compose container with resource limits that simulate a constrained edge-side environment:

| Resource | Limit | Reservation |
| --- | --- | --- |
| CPU | 1.0 Core | 0.5 Core |
| Memory | 1000 MiB | 256 MiB |

> **TIP**
>
> Resource constraints are configured via Docker Compose `deploy.resources.limits`, consistent with Kubernetes Pod resource quota semantics.
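Since the TIP above compares these limits to Kubernetes resource quota semantics, here is a minimal sketch of the equivalent Pod resource stanza. It is illustrative only; the Pod/container names mirror the Compose file, and the values mirror the benchmark defaults (1 CPU / 1000 MiB limit, 0.5 CPU / 256 MiB reservation):

```yaml
# Hypothetical Kubernetes equivalent of the Compose resource limits.
apiVersion: v1
kind: Pod
metadata:
  name: ng-gateway
spec:
  containers:
    - name: gateway
      image: shiyuecamus/ng-gateway:latest
      resources:
        requests:          # analogous to Compose "reservations"
          cpu: "500m"
          memory: "256Mi"
        limits:            # analogous to Compose "limits"
          cpu: "1"
          memory: "1000Mi"
```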

`docker-compose.yaml`:

```yaml
services:
  gateway:
    image: ${GATEWAY_IMAGE:-shiyuecamus/ng-gateway}:${GATEWAY_TAG:-latest}
    container_name: ng-gateway
    restart: unless-stopped
    ports:
      - "${GATEWAY_HTTP_PORT:-8978}:5678"
      - "${GATEWAY_WS_PORT:-8979}:5679"
    volumes:
      - gateway-data:/app/data
      - gateway-drivers:/app/drivers/custom
      - gateway-plugins:/app/plugins/custom
    deploy:
      resources:
        limits:
          cpus: "${BENCH_CPU_LIMIT:-1.0}"
          memory: "${BENCH_MEM_LIMIT:-1000M}"
        reservations:
          cpus: "${BENCH_CPU_RESERVE:-0.5}"
          memory: "${BENCH_MEM_RESERVE:-256M}"

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:v0.51.0
    container_name: ng-cadvisor
    restart: unless-stopped
    ports:
      - "8080:8080"
    command:
      - --docker_only=true
      - --housekeeping_interval=2s
      - --store_container_labels=true
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /var/run/docker.sock:/var/run/docker.sock:rw
      - /sys:/sys:ro
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
    privileged: true
    devices:
      - /dev/kmsg:/dev/kmsg

  prometheus:
    image: prom/prometheus:latest
    container_name: ng-prometheus
    restart: unless-stopped
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - ng-prometheus-data:/prometheus
    command:
      - --config.file=/etc/prometheus/prometheus.yml
      - --storage.tsdb.path=/prometheus
      - --web.enable-lifecycle
    depends_on:
      - cadvisor
      - gateway

  grafana:
    image: grafana/grafana:latest
    container_name: ng-grafana
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      GF_SECURITY_ADMIN_USER: admin
      GF_SECURITY_ADMIN_PASSWORD: admin
      GF_USERS_ALLOW_SIGN_UP: "false"
      GF_PATHS_PROVISIONING: /etc/grafana/provisioning
    volumes:
      - ./grafana/provisioning:/etc/grafana/provisioning:ro
      - ./grafana/dashboards:/var/lib/grafana/dashboards:ro
      - ng-grafana-data:/var/lib/grafana
    depends_on:
      - prometheus

volumes:
  ng-prometheus-data:
  ng-grafana-data:
  gateway-data:
  gateway-drivers:
  gateway-plugins:
```

## Test Tools

### Modbus Slave Simulator

Modbus Slave (by Witte Software) is used as the Modbus TCP slave simulator. It is a widely adopted commercial simulation tool in industrial automation, supporting the Modbus TCP / RTU / ASCII protocols. It can simulate multiple independent slave instances simultaneously and provides flexible register type configuration, auto-increment, and randomized data simulation, making it suitable for driver development, debugging, and performance benchmarking.

**Simulation Topology:**

| Item | Configuration |
| --- | --- |
| Independent TCP Connections | 10 (listening ports 500 ~ 509) |
| Slaves per Connection | 10 (Slave ID 1 ~ 10) |
| Total Simulated Slaves | 100 |

#### Mapping Relationship

- Each TCP port maps to a Channel in ng-gateway (an independent Modbus TCP connection)
- Each Slave ID maps to a Device within the channel, polled via function codes at its own slave address
- Test scenarios connect to a subset or all ports as needed to build collection workloads ranging from 10,000 to 100,000 points
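With this topology fixed, the per-cycle Modbus traffic can be estimated from protocol framing alone, which gives a useful sanity check on the bandwidth figures reported later. This is a rough sketch: it assumes Float32 points are stored as contiguous holding registers, uses the standard 125-registers-per-read limit, and ignores TCP/IP headers, request echoes, and reconnects:

```python
import math

# Modbus TCP framing constants (per the Modbus Application Protocol spec):
# a Float32 point occupies 2 holding registers (4 bytes), and one
# Read Holding Registers request may fetch at most 125 registers.
MBAP_HEADER = 7          # transaction + protocol + length + unit id
MAX_REGS_PER_READ = 125

def reads_per_device(points: int, regs_per_point: int = 2) -> int:
    """Read Holding Registers requests needed per device per poll cycle."""
    return math.ceil(points * regs_per_point / MAX_REGS_PER_READ)

def response_bytes(points: int, regs_per_point: int = 2) -> int:
    """Approximate bytes received per device per cycle (responses only)."""
    n_reads = reads_per_device(points, regs_per_point)
    data_bytes = points * regs_per_point * 2  # 2 bytes per register
    # each response: MBAP header + function code + byte count + register data
    return n_reads * (MBAP_HEADER + 2) + data_bytes

# Scenario 3 shape: 10 channels x 10 devices x 1,000 Float32 points, 1 s cycle
devices = 100
points = 1000
print(reads_per_device(points))           # → 16 requests per device per cycle
print(devices * response_bytes(points))   # → 414400 (≈ 414 kB/s of responses)
```

The estimate lands in the same order of magnitude as the measured Scenario 3 rx rate (~542 kB/s); the gap is plausibly TCP/IP framing overhead, which this payload-only sketch deliberately leaves out.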

### Performance Monitoring Stack

Resource metrics during testing are collected using the cAdvisor + Prometheus + Grafana stack, all orchestrated alongside the gateway container via the same Docker Compose file:

| Component | Version | Role |
| --- | --- | --- |
| cAdvisor | v0.51.0 | Collects container-level resource metrics: CPU usage, memory (RSS / Cache), network bytes sent/received |
| Prometheus | latest | Scrapes the cAdvisor `/metrics` endpoint every 2 s and persists the time-series data |
| Grafana | latest | Visualization dashboards with pre-configured cAdvisor Docker container monitoring |
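The 2 s scrape described above corresponds to a `prometheus.yml` along these lines. This is a minimal sketch, not the project's actual file; the job name is an assumption, and the target reuses the cAdvisor service name and port from the Compose file:

```yaml
# Hypothetical prometheus.yml matching the 2 s cAdvisor scrape interval.
global:
  scrape_interval: 2s

scrape_configs:
  - job_name: cadvisor                # assumed job name
    static_configs:
      - targets: ["cadvisor:8080"]    # cAdvisor service from the Compose file
```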

**Core Metrics Collected:**

| Metric | Prometheus Metric | Description |
| --- | --- | --- |
| CPU Usage | `container_cpu_usage_seconds_total` | Cumulative CPU time per container; usage percentage is derived with `rate()` |
| Memory Usage | `container_memory_rss` | Resident Set Size |
| Network Receive | `container_network_receive_bytes_total` | Total bytes received (rate computed) |
| Network Transmit | `container_network_transmit_bytes_total` | Total bytes transmitted (rate computed) |
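The dashboard panels can be reproduced with standard PromQL over these metrics, filtering on the container name set in the Compose file, for example:

```promql
# CPU usage (% of one core) for the gateway container
sum by (name) (rate(container_cpu_usage_seconds_total{name="ng-gateway"}[1m])) * 100

# Resident memory
container_memory_rss{name="ng-gateway"}

# Network receive / transmit rates (bytes per second)
rate(container_network_receive_bytes_total{name="ng-gateway"}[1m])
rate(container_network_transmit_bytes_total{name="ng-gateway"}[1m])
```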

**Quick Start:**

```bash
cd deploy/compose/bench && docker compose up -d
```

| Service | Access URL |
| --- | --- |
| Grafana | http://localhost:3000 (admin / admin) |
| Prometheus | http://localhost:9090 |
| cAdvisor | http://localhost:8080 |
| ng-gateway | http://localhost:8978 |

## Summary

### Data Collection Performance

| Scenario | Channels | Devices/Channel | Points/Device | Frequency | Total Points | Type | Memory | CPU | Network Bandwidth |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 10 | 1,000 | 1000 ms | 10,000 | Float32 | 50.8 MiB | 2.62% | rx: 55.2 kB/s<br>tx: 14 kB/s |
| 2 | 5 | 10 | 1,000 | 1000 ms | 50,000 | Float32 | 103 MiB | 4.41% | rx: 269.0 kB/s<br>tx: 72.0 kB/s |
| 3 | 10 | 10 | 1,000 | 1000 ms | 100,000 | Float32 | 153 MiB | 7.03% | rx: 542.0 kB/s<br>tx: 144.0 kB/s |
| 4 | 1 | 1 | 1,000 | 100 ms | 1,000 | Float32 | 44.8 MiB | 2.60% | rx: 47.8 kB/s<br>tx: 13.5 kB/s |
| 5 | 5 | 1 | 1,000 | 100 ms | 5,000 | Float32 | 50.9 MiB | 4.61% | rx: 265.0 kB/s<br>tx: 87.3 kB/s |
| 6 | 10 | 1 | 1,000 | 100 ms | 10,000 | Float32 | 55.2 MiB | 7.56% | rx: 530.0 kB/s<br>tx: 173.0 kB/s |
| 7 | 10 | 10 | 1,000 | 1000 ms | 100,000 | Float32 | 153 MiB | 7.03% | rx: 542.0 kB/s<br>tx: 144.0 kB/s |
### Mixed Load Performance

| Scenario | Channels | Devices/Channel | Points/Device | Frequency | Total Points | Type | Downlink Method | Downlink Points | Iterations | Min Latency | Max Latency | Avg Latency |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 7 | 10 | 10 | 1,000 | 1000 ms | 100,000 | Float32 | API | 100 | 100 | 14.572 ms | 536.517 ms | 75.600 ms |
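The min/max/avg latency columns can be produced by a small measurement harness along the following lines. This is a sketch: the gateway's command-dispatch API is not specified in this document, so `dispatch` is a placeholder callable standing in for one downlink write:

```python
import time
from typing import Callable, Tuple

def measure_latency(dispatch: Callable[[], None],
                    iterations: int = 100) -> Tuple[float, float, float]:
    """Call `dispatch` repeatedly and return (min, max, avg) latency in ms."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        dispatch()  # e.g. one random point write via the gateway API
        samples.append((time.perf_counter() - start) * 1000.0)
    return min(samples), max(samples), sum(samples) / len(samples)

# Example with a stand-in dispatch function that just sleeps ~1 ms:
lo, hi, avg = measure_latency(lambda: time.sleep(0.001), iterations=10)
print(f"min={lo:.3f} ms  max={hi:.3f} ms  avg={avg:.3f} ms")
```

`time.perf_counter()` is used rather than `time.time()` because it is a monotonic high-resolution clock, so the samples are not distorted by system clock adjustments.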

## Test Scenarios & Results

### Scenario 1: Basic Collection

- **Config**: 1 Channel · 10 Devices · 1,000 Points/Device · 1000 ms Period (Total 10,000 Points)

#### Metrics

| Memory | CPU | Network Bandwidth |
| --- | --- | --- |
| 50.8 MiB | 2.62% | rx: 55.2 kB/s<br>tx: 14 kB/s |

#### Resource Monitor Screenshots

*Scenario 1 CPU · Scenario 1 Memory · Scenario 1 Network*


### Scenario 2: Medium Scale Collection

- **Config**: 5 Channels · 10 Devices · 1,000 Points/Device · 1000 ms Period (Total 50,000 Points)

#### Metrics

| Memory | CPU | Network Bandwidth |
| --- | --- | --- |
| 103 MiB | 4.41% | rx: 269.0 kB/s<br>tx: 72.0 kB/s |

#### Resource Monitor Screenshots

*Scenario 2 CPU · Scenario 2 Memory · Scenario 2 Network*


### Scenario 3: Large Scale Collection

- **Config**: 10 Channels · 10 Devices · 1,000 Points/Device · 1000 ms Period (Total 100,000 Points)

#### Metrics

| Memory | CPU | Network Bandwidth |
| --- | --- | --- |
| 153 MiB | 7.03% | rx: 542.0 kB/s<br>tx: 144.0 kB/s |

#### Resource Monitor Screenshots

*Scenario 3 CPU · Scenario 3 Memory · Scenario 3 Network*


### Scenario 4: High Frequency (Single Channel)

- **Config**: 1 Channel · 1 Device · 1,000 Points/Device · 100 ms Period (Total 1,000 Points)

#### Metrics

| Memory | CPU | Network Bandwidth |
| --- | --- | --- |
| 44.8 MiB | 2.60% | rx: 47.8 kB/s<br>tx: 13.5 kB/s |

#### Resource Monitor Screenshots

*Scenario 4 CPU · Scenario 4 Memory · Scenario 4 Network*


### Scenario 5: High Frequency (Multi Channel)

- **Config**: 5 Channels · 1 Device · 1,000 Points/Device · 100 ms Period (Total 5,000 Points)

#### Metrics

| Memory | CPU | Network Bandwidth |
| --- | --- | --- |
| 50.9 MiB | 4.61% | rx: 265.0 kB/s<br>tx: 87.3 kB/s |

#### Resource Monitor Screenshots

*Scenario 5 CPU · Scenario 5 Memory · Scenario 5 Network*


### Scenario 6: High Frequency (Large Scale)

- **Config**: 10 Channels · 1 Device · 1,000 Points/Device · 100 ms Period (Total 10,000 Points)

#### Metrics

| Memory | CPU | Network Bandwidth |
| --- | --- | --- |
| 55.2 MiB | 7.56% | rx: 530.0 kB/s<br>tx: 173.0 kB/s |

#### Resource Monitor Screenshots

*Scenario 6 CPU · Scenario 6 Memory · Scenario 6 Network*


### Scenario 7: Mixed Load Stress Test

- **Config**: 10 Channels · 10 Devices · 1,000 Points/Device · 1000 ms Period (Total 100,000 Points) + Random Command Dispatching

#### Metrics (Collection)

| Memory | CPU | Network Bandwidth |
| --- | --- | --- |
| 153 MiB | 7.03% | rx: 542.0 kB/s<br>tx: 144.0 kB/s |

#### Metrics (Command Dispatch)

| Success/Fail | Min Latency | Max Latency | Avg Latency |
| --- | --- | --- | --- |
| 100 / 0 | 14.572 ms | 536.517 ms | 75.600 ms |

#### Resource Monitor Screenshots

*Scenario 7 Console · Scenario 3 CPU · Scenario 3 Memory · Scenario 3 Network*

Released under the Apache License 2.0.