# Testing Strategy

## Layers

### 1. Unit tests (no IO)
Pure logic that can be tested with `cargo test`; no containers needed.

**hasher.rs** (already done)
- Deterministic placement for the same key
- All volumes appear when requesting full replication
- Even distribution across volumes
- Correct key path format
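The determinism and full-replication properties above can be asserted with plain `cargo test` logic. As a self-contained sketch, here is the shape of such a test against a stand-in placement function (rendezvous hashing over `DefaultHasher`; the real hasher.rs may use a virtual-node ring and different names):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in for hasher.rs placement: score each (key, volume) pair and
/// take the n highest (rendezvous hashing). Illustrative, not the real API.
fn place(key: &str, volumes: &[&str], n: usize) -> Vec<String> {
    let mut scored: Vec<(u64, &str)> = volumes
        .iter()
        .map(|v| {
            let mut h = DefaultHasher::new();
            (key, *v).hash(&mut h);
            (h.finish(), *v)
        })
        .collect();
    scored.sort_unstable_by(|a, b| b.0.cmp(&a.0)); // highest score first
    scored.into_iter().take(n).map(|(_, v)| v.to_string()).collect()
}

fn main() {
    let vols = ["vol1", "vol2", "vol3"];
    // Deterministic: the same key always maps to the same volumes.
    assert_eq!(place("hello", &vols, 2), place("hello", &vols, 2));
    // Full replication: requesting n == volumes.len() returns every volume.
    let mut all = place("hello", &vols, 3);
    all.sort();
    assert_eq!(all, vec!["vol1", "vol2", "vol3"]);
}
```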
**db.rs** (TODO)
- Open an in-memory SQLite (`:memory:`)
- Test put/get/delete/list_keys/all_records round-trip
- Test upsert behavior (put the same key twice)
- Test soft delete (`deleted` flag)
- Test bulk_put
**Pure decision functions** (TODO, after refactor)
- Given a record and a set of healthy volumes, which volume to redirect to?
- Given fan-out results (list of Ok/Err), which volumes succeeded? Should we rollback?
- Given current vs desired volume placement, what needs to move?
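The first question above factors into a pure function that needs no IO at all. A sketch of the shape it could take after the refactor (the names here are illustrative, not the actual mkv internals):

```rust
use std::collections::HashSet;

/// Pure redirect decision: given the volumes a record lives on and the set
/// of currently healthy volumes, pick the first healthy replica (or None,
/// in which case the caller would answer 503).
fn pick_redirect<'a>(record_volumes: &'a [String], healthy: &HashSet<String>) -> Option<&'a str> {
    record_volumes
        .iter()
        .find(|v| healthy.contains(*v))
        .map(|s| s.as_str())
}

fn main() {
    let record = vec!["vol1".to_string(), "vol2".to_string()];
    let healthy: HashSet<String> = ["vol2".to_string(), "vol3".to_string()].into();
    // vol1 is down, so the decision falls through to vol2.
    assert_eq!(pick_redirect(&record, &healthy), Some("vol2"));
    // No healthy replica at all -> None.
    assert_eq!(pick_redirect(&record, &HashSet::new()), None);
}
```

Because the function is pure, every failure combination can be enumerated in unit tests without touching containers.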
### 2. Volume client tests (mock HTTP)

Use a lightweight in-process HTTP server (e.g. axum itself, or wiremock) to test volume.rs without a real nginx.
- PUT blob + .key sidecar → verify both requests made
- GET blob → verify body returned
- DELETE blob → verify both blob and .key deleted
- DELETE non-existent → verify 404 is treated as success
- Health check → respond 200 → healthy; timeout → unhealthy
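The pattern behind all of these cases, stripped to the standard library: bind a one-shot responder on a random port, point the client at it, and assert on both sides. In the real suite the server half would be wiremock or a small axum router and the client half would be the volume.rs code under test; this sketch uses raw TCP only to stay dependency-free:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

fn main() {
    // Port 0 asks the OS for any free port, so parallel tests never collide.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();

    // "Mock volume": accept one connection, capture the request, answer 201.
    let server = thread::spawn(move || {
        let (mut stream, _) = listener.accept().unwrap();
        let mut buf = [0u8; 1024];
        // One read is enough for this tiny loopback request.
        let n = stream.read(&mut buf).unwrap();
        stream
            .write_all(b"HTTP/1.1 201 Created\r\nContent-Length: 0\r\n\r\n")
            .unwrap();
        String::from_utf8_lossy(&buf[..n]).to_string()
    });

    // "Client" side: issue a PUT the way volume.rs would.
    let mut client = TcpStream::connect(addr).unwrap();
    client
        .write_all(b"PUT /blob/abc HTTP/1.1\r\nHost: test\r\nContent-Length: 5\r\n\r\nworld")
        .unwrap();
    let mut response = String::new();
    client.read_to_string(&mut response).unwrap();

    // Assert on both what was sent and what came back.
    let request = server.join().unwrap();
    assert!(request.starts_with("PUT /blob/abc"));
    assert!(response.starts_with("HTTP/1.1 201"));
}
```

wiremock adds the missing conveniences over this sketch: request matchers, configurable responses per route, and expectations on call counts (useful for "verify both blob and .key requests were made").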
### 3. Integration tests (real nginx)
Full end-to-end with Docker containers. These are slower but catch real issues.
**Setup**
```yaml
# docker-compose.test.yml
services:
  vol1:
    image: nginx:alpine
    volumes:
      - ./tests/nginx.conf:/etc/nginx/conf.d/default.conf
      - vol1_data:/data
    ports: ["3101:80"]
  vol2:
    image: nginx:alpine
    volumes:
      - ./tests/nginx.conf:/etc/nginx/conf.d/default.conf
      - vol2_data:/data
    ports: ["3102:80"]
  vol3:
    image: nginx:alpine
    volumes:
      - ./tests/nginx.conf:/etc/nginx/conf.d/default.conf
      - vol3_data:/data
    ports: ["3103:80"]

volumes:
  vol1_data:
  vol2_data:
  vol3_data:
```
```nginx
# tests/nginx.conf
server {
    listen 80;
    root /data;

    location / {
        dav_methods PUT DELETE;
        create_full_put_path on;
        autoindex on;
        autoindex_format json;
    }
}
```
```toml
# tests/test_config.toml
[server]
port = 3100
replication_factor = 2
virtual_nodes = 100

[database]
path = "/tmp/mkv-test/index.db"

[[volumes]]
url = "http://localhost:3101"
[[volumes]]
url = "http://localhost:3102"
[[volumes]]
url = "http://localhost:3103"
```
**Test cases**

**Happy path**
- PUT `/hello` with body `"world"` → 201
- HEAD `/hello` → 200, `Content-Length: 5`
- GET `/hello` → 302 to a volume URL
- Follow the redirect → body is `"world"`
- GET `/` → list contains `"hello"`
- DELETE `/hello` → 204
- GET `/hello` → 404
**Replication verification**
- PUT `/replicated` → 201
- Read SQLite directly, verify 2 volumes listed
- GET the blob from both volume URLs directly, verify identical content
**Volume failure**
- PUT `/failtest` → 201
- Stop the vol1 container
- GET `/failtest` → should still 302 to vol2 (healthy replica)
- PUT `/new-during-failure` → should fail if `replication_factor` can't be met, or succeed on the remaining volumes, depending on ring placement
- Restart vol1
**Rebuild**
- PUT several keys
- Delete the SQLite database
- Run `mkv rebuild`
- GET all keys → should all still work
**Rebalance**
- PUT several keys with 3 volumes
- Add a 4th volume to the config
- Run `mkv rebalance --dry-run` → verify the output
- Run `mkv rebalance` → verify keys migrated
- GET all keys → should all work
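The planning step behind `rebalance --dry-run` reduces to the third pure decision function from layer 1: current vs desired placement in, moves out. A self-contained sketch (names are illustrative, not the actual mkv internals):

```rust
use std::collections::HashSet;

/// Pure rebalance planning for one key: compare current vs desired placement
/// and return (copies to add, copies to remove). This is exactly what a
/// dry run can print without touching any volume.
fn plan_moves(current: &[&str], desired: &[&str]) -> (Vec<String>, Vec<String>) {
    let cur: HashSet<&str> = current.iter().copied().collect();
    let des: HashSet<&str> = desired.iter().copied().collect();
    let add = desired
        .iter()
        .filter(|v| !cur.contains(**v))
        .map(|v| v.to_string())
        .collect();
    let remove = current
        .iter()
        .filter(|v| !des.contains(**v))
        .map(|v| v.to_string())
        .collect();
    (add, remove)
}

fn main() {
    // After adding vol4, this key's desired placement shifts from vol1 to vol4:
    // copy to vol4 first, then drop from vol1.
    let (add, remove) = plan_moves(&["vol1", "vol3"], &["vol3", "vol4"]);
    assert_eq!(add, vec!["vol4"]);
    assert_eq!(remove, vec!["vol1"]);
}
```

Applying the plan copy-first, delete-last keeps the replication factor satisfied throughout the migration.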
**Running integration tests**

```sh
# Start volumes
docker compose -f docker-compose.test.yml up -d

# Wait for nginx to be ready
sleep 2

# Run integration tests
cargo test --test integration

# Tear down
docker compose -f docker-compose.test.yml down -v
```
The integration test binary (`tests/integration.rs`) starts the mkv server in-process on a random port, runs all test cases sequentially (shared state), then shuts down.
### 4. What we don't test
- Performance / benchmarks (follow-up)
- TLS, auth (not implemented)
- Concurrent writers racing on the same key (SQLite serializes this correctly by design)
- Blobs > available RAM (streaming is a follow-up)