
Embedded Software Testing
Embedded testing is often slowed by hardware bottlenecks and hard-to-reproduce failures. SPX provides a single repeatable platform for firmware testing, protocol communication, and integration flows from local development to CI/CD.
Use SPX as a virtual device lab for firmware integration testing — deterministic scenarios, protocol simulation, and CI-ready automation.
- Hardware-free protocol simulation (MQTT, Modbus, BLE, and more)
- Deterministic, repeatable environments (snapshots, scenarios, logs)
- Automation-ready for CI/CD (spx-python control channel + Docker stack)
SPX platform overview
SPX is a deterministic simulation stack you can run locally (Docker) and drive from code for repeatable testing. It includes a runtime server for models, instances, scenarios, snapshots, and a REST API, plus an SDK for authoring models and extending simulations.
Why embedded testing is hard
Most embedded teams face the same blockers:
- Limited access to physical hardware and lab time
- Many protocol combinations (SCPI, Modbus, MQTT, BLE) across product lines
- Flaky tests caused by nondeterministic environments
- Differences between local developer machines and CI runners
The result is slower releases, expensive regressions, and long debugging cycles.

How SPX supports embedded testing
SPX gives you a repeatable and automatable environment where your system under test (SUT) communicates over real protocols, while your tests use spx-python as a control channel to set parameters, advance time, and validate outcomes. 
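The control-channel pattern can be sketched in plain Python. `FakeControlChannel` below is an illustrative stand-in for spx-python (whose actual API may differ): the test sets state and advances simulated time through the control channel, while the SUT (not shown) talks to the simulator over a real protocol.

```python
# Sketch of the control-channel test pattern. The class and method names
# here are illustrative assumptions, not the documented spx-python API.

class FakeControlChannel:
    def __init__(self):
        self.attributes = {}
        self.sim_time = 0.0

    def set_attribute(self, name, value):
        self.attributes[name] = value

    def read_attribute(self, name):
        return self.attributes[name]

    def advance_time(self, seconds):
        # Deterministic time step: the test, not the wall clock, owns time.
        self.sim_time += seconds


def test_sensor_state_after_time_step():
    ctrl = FakeControlChannel()
    ctrl.set_attribute("sensor.temperature", 21.5)  # arrange: known state
    ctrl.advance_time(30.0)                         # act: advance sim time
    assert ctrl.read_attribute("sensor.temperature") == 21.5
    assert ctrl.sim_time == 30.0


test_sensor_state_after_time_step()
```

The arrange/act/assert shape stays the same when the fake is swapped for the real control channel; only the transport changes.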
Core capabilities:
- Model-based device definitions (code-defined, versioned)
- API-driven runtime instances (start/stop/reset)
- Protocol-level testing for MQTT, Modbus, BLE (and extensible adapters)
- Scenario-driven validation and deterministic replay
- Automation-friendly workflows for CI/CD
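"Code-defined, versioned" device models can be as simple as a plain class under version control. The sketch below is illustrative, not the actual SPX SDK authoring API: a small model with readable attributes and a fault-injection hook.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a code-defined device model (NOT the SPX SDK's
# real authoring API): attributes and behavior live in ordinary,
# versionable Python.

@dataclass
class TemperatureSensorModel:
    name: str
    unit: str = "degC"
    value: float = 20.0
    online: bool = True
    history: list = field(default_factory=list)

    def update(self, new_value: float) -> None:
        """Record a new reading; simulated clients read `value`."""
        self.history.append(self.value)
        self.value = new_value

    def fail(self) -> None:
        """Take the device offline for fault-injection tests."""
        self.online = False


sensor = TemperatureSensorModel(name="boiler-inlet")
sensor.update(23.4)
# sensor.value is now 23.4; sensor.history holds the previous reading
```

Because the model is code, changes show up in diffs and code review like any other firmware artifact.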

What you can test with SPX
SPX supports layered embedded testing:
- Model and configuration validation
- Unit tests for protocol/client helpers
- Integration tests against running SPX instances
- Regression and stress scenarios for timing/state behavior
- Pack/environment-level tests executed in CI pipelines

Technical capabilities
These capabilities focus on repeatability, protocol realism, and automation—so you can ship embedded integrations with confidence.
Run realistic device behavior without lab hardware:
• Validate protocol communication against simulated devices
• Reproduce edge cases consistently (timeouts, reconnect storms, bad values)
• Extend behavior via protocol adapters when you need bespoke transports

Turn “works on my machine” into repeatable evidence:
• Scenario-driven tests for timing/state transitions
• Snapshots to freeze known-good baselines and compare regressions
• Deterministic behavior to reduce flaky tests

Debug faster with structured visibility:
• Protocol and model logs suitable for CI artifacts
• Timing-aware traces to spot race conditions and integration drift
• Re-run the same scenario to confirm fixes with confidence

Use spx-python to automate and stabilize embedded tests:
• Create/reset/start/stop instances programmatically
• Set/read attributes and drive deterministic setup/teardown
• Keep your SUT talking “real protocol” while tests control the sim

Integrate SPX into pipelines:
1. Validate models and test inputs
2. Start the generated Docker test stack
3. Run automated tests
4. Collect logs/diagnostics on failure
5. Stop and clean the environment

This makes embedded testing versioned, repeatable, and scalable across teams.
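Steps 2 through 5 above amount to a guarantee: the stack always comes down, and diagnostics are captured only on failure. A generic sketch of that shape as a context manager, where the callables stand in for your actual docker/SPX commands (not SPX-provided helpers):

```python
from contextlib import contextmanager

# Generic teardown pattern for the pipeline above: start the test stack,
# always stop it, and collect logs only when the tests inside fail. The
# three callables are placeholders for your own docker/SPX commands.

@contextmanager
def managed_test_stack(start, stop, collect_logs):
    start()                  # step 2: bring up the test stack
    try:
        yield                # step 3: run automated tests here
    except Exception:
        collect_logs()       # step 4: diagnostics on failure only
        raise
    finally:
        stop()               # step 5: clean the environment, pass or fail


# Usage sketch with recording stubs in place of real commands:
calls = []
try:
    with managed_test_stack(lambda: calls.append("start"),
                            lambda: calls.append("stop"),
                            lambda: calls.append("logs")):
        raise RuntimeError("simulated test failure")
except RuntimeError:
    pass
# calls is now ["start", "logs", "stop"]: logs captured, stack cleaned
```

In a real pipeline the same shape is usually expressed as a session-scoped test fixture so every job gets identical setup and teardown.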
SPX projects can be structured to work efficiently with LLM-assisted development, so you can extend simulations faster while keeping them consistent:
• LLM-ready specs describing device models, protocol behavior, scenarios, and expected outcomes.
• Consistent conventions for packs, device models, and integration tests.
• Spec-first iteration — refine specs first, then generate or update models and tests from the specification.
• Outputs — LLM-generated changes are easy to validate because behavior is defined and testable.
• Automation alignment — specs + tests support a tight loop: generate → run scenarios → verify → iterate.

This makes it practical to scale simulation content while maintaining quality and reproducibility.






