
Overview

This guide covers common issues you might encounter during boop network development and their solutions.
Quick Fix: Many issues can be resolved by restarting services with docker-compose down && docker-compose up -d.

Service Startup Issues

Services Won’t Start

Symptoms:
  • docker-compose up exits with errors
  • Containers fail to start
  • Health checks fail
Common Causes & Solutions:
# 1. Port conflicts
# Check what's using the ports
netstat -tulpn | grep :40401
lsof -i :40401

# Kill conflicting processes
kill $(lsof -t -i:40401)

# 2. Docker resource limits
# Increase Docker resources (Docker Desktop > Settings > Resources)
# Minimum: 8GB RAM, 4 CPU cores

# 3. Stale containers/volumes
docker-compose down -v
docker system prune -a
docker-compose up -d

# 4. Missing environment files
cp docker/standalone/.env.example docker/standalone/.env
# Edit .env with your settings
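If port conflicts keep recurring, the check itself can be scripted. A minimal Rust sketch (illustrative only, not part of the repository) probes a port by attempting to bind it:

```rust
use std::net::TcpListener;

/// True if nothing is currently bound to `port` on localhost.
fn port_free(port: u16) -> bool {
    TcpListener::bind(("127.0.0.1", port)).is_ok()
}

fn main() {
    // Hold an ephemeral port so we have a known-busy port to demonstrate with.
    let listener = TcpListener::bind(("127.0.0.1", 0)).expect("bind failed");
    let busy = listener.local_addr().unwrap().port();
    println!("port {busy} free while held: {}", port_free(busy)); // false
}
```

The same probe could loop over 40401, 40403, and any other ports your compose file exposes before `docker-compose up`.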
Database Connection Problems

Symptoms:
  • API Gateway can’t connect to PostgreSQL
  • Migration errors
  • Database timeout errors
Solutions:
# Check PostgreSQL is running
docker-compose ps postgres
docker-compose logs postgres

# Test database connection
docker-compose exec postgres psql -U boop -d boop -c "SELECT version();"

# Reset database (note: `down` does not accept a service name)
docker-compose stop postgres && docker-compose rm -f postgres
docker volume rm $(docker volume ls -q | grep postgres)
docker-compose up -d postgres

# Wait for database to be ready
docker-compose exec postgres pg_isready -U boop

# Run migrations
docker-compose exec api-gateway sqlx migrate run
Environment Variables Check:
# Verify database configuration
docker-compose exec api-gateway env | grep DATABASE

# Should show:
# DATABASE_URL=postgresql://boop:password@postgres:5432/boop
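A frequent cause of connection failures is a `DATABASE_URL` that points at `localhost` instead of the `postgres` service name when running inside the compose network. A rough sanity-check sketch (illustrative; it assumes `user:pass@` credentials are present, and a real application should use a URL-parsing crate):

```rust
/// Extracts the host from a postgresql:// URL with credentials.
/// Deliberately simplistic: sanity checks only.
fn db_host(url: &str) -> Option<&str> {
    let rest = url.strip_prefix("postgresql://")?;
    let host_part = rest.rsplit_once('@')?.1;
    // Host ends at the port separator or the path, whichever comes first.
    let end = host_part
        .find(|c: char| c == ':' || c == '/')
        .unwrap_or(host_part.len());
    Some(&host_part[..end])
}

fn main() {
    let url = std::env::var("DATABASE_URL")
        .unwrap_or_else(|_| "postgresql://boop:password@postgres:5432/boop".to_string());
    match db_host(&url) {
        // Inside the compose network the host must be the service name,
        // not localhost.
        Some("localhost") | Some("127.0.0.1") => {
            println!("suspicious host: use the compose service name inside Docker")
        }
        Some(host) => println!("database host: {host}"),
        None => println!("DATABASE_URL is not a postgresql:// URL"),
    }
}
```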
Network Connectivity Problems

Symptoms:
  • Services can’t communicate
  • WebSocket connections fail
  • API calls time out
Solutions:
# Check Docker network
docker network ls
docker network inspect $(docker-compose ps -q | head -1 | xargs docker inspect --format='{{range $net, $conf := .NetworkSettings.Networks}}{{$net}}{{end}}')

# Test service connectivity (ping may be absent from slim images)
docker-compose exec api-gateway ping -c 3 postgres
# Postgres doesn't speak HTTP, but curl still shows whether the port is reachable
docker-compose exec api-gateway curl http://postgres:5432

# Restart Docker networking
docker-compose down
docker network prune
docker-compose up -d
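Because `ping` and `curl` are often missing from slim container images, a plain TCP connect test that needs nothing but the standard library can stand in for both. A sketch (illustrative, not project code):

```rust
use std::net::{SocketAddr, TcpListener, TcpStream};
use std::time::Duration;

/// True if a TCP connection to `addr` succeeds within `timeout`.
/// This only proves the port is reachable, not that the service is healthy.
fn can_connect(addr: SocketAddr, timeout: Duration) -> bool {
    TcpStream::connect_timeout(&addr, timeout).is_ok()
}

fn main() {
    // Demo against a local listener; in practice you would resolve
    // e.g. postgres:5432 from inside the compose network.
    let listener = TcpListener::bind("127.0.0.1:0").expect("bind failed");
    let addr = listener.local_addr().unwrap();
    println!("reachable: {}", can_connect(addr, Duration::from_secs(1)));
}
```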

Development Workflow Issues

Code Changes Not Reflected

Symptoms: Code changes don’t trigger rebuilds
Solutions:
# 1. Use development compose file
cd docker/development
docker-compose up -d

# 2. Verify volume mounts
docker-compose exec api-gateway ls -la /workspace

# 3. Check cargo-watch is running
docker-compose exec api-gateway ps aux | grep cargo

# 4. Manual restart if needed
docker-compose restart api-gateway

# 5. Force rebuild
docker-compose build --no-cache api-gateway
docker-compose up -d api-gateway
Build Failures

Symptoms: Rust compilation fails
Common Issues:
# 1. Dependency issues
# Clear Rust build cache
docker-compose exec api-gateway cargo clean
docker-compose exec api-gateway cargo build

# 2. Outdated Rust version
# Check Rust version
docker-compose exec api-gateway rustc --version
# Should be 1.70+

# 3. Missing dependencies
# Update dependencies
docker-compose exec api-gateway cargo update

# 4. Compile error with clear error message
docker-compose exec api-gateway cargo check
Build Performance:
# Use faster linker (add to .cargo/config.toml)
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=lld"]

# Enable parallel builds
export CARGO_BUILD_JOBS=4

Testing Issues

Symptoms: Tests fail locally but pass in CI, or vice versa
Solutions:
# 1. Clean test environment
cargo clean
cargo test

# 2. Run specific test with output
cargo test test_name -- --nocapture

# 3. Check test database
# Tests should use isolated database
DATABASE_URL=postgresql://test:test@localhost:5433/test_db cargo test

# 4. Verify test data cleanup
# Ensure tests clean up after themselves
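One way to guarantee cleanup is an RAII guard that runs teardown even when a test panics. A minimal sketch (illustrative; in the real suite the closure would truncate the test database tables):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// Runs the wrapped closure when dropped, even if the test body panics first.
struct Cleanup<F: FnMut()>(F);

impl<F: FnMut()> Drop for Cleanup<F> {
    fn drop(&mut self) {
        (self.0)();
    }
}

// Stand-in for "the database was cleaned" so the sketch is self-contained.
static CLEANED: AtomicBool = AtomicBool::new(false);

fn main() {
    {
        let _guard = Cleanup(|| CLEANED.store(true, Ordering::SeqCst));
        // ... test body: insert fixtures, run assertions ...
    } // guard dropped here: cleanup runs
    println!("cleanup ran: {}", CLEANED.load(Ordering::SeqCst));
}
```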
Integration Test Issues:
# Start test environment
cd docker/test
docker-compose up -d

# Run integration tests
cargo test --test '*' -- --test-threads=1

# Debug integration test
RUST_LOG=debug cargo test integration_test -- --nocapture
Benchmark Inconsistencies

Symptoms: Performance benchmarks fail or are inconsistent
Solutions:
# 1. Ensure stable environment
# Close other applications
# Use dedicated test hardware if possible

# 2. Run benchmarks with proper flags
cargo bench --bench authentication_bench

# 3. Check system resources
top
iostat -x 1

# 4. Compare with baseline
git checkout main
cargo bench > baseline.txt
git checkout feature-branch
cargo bench > current.txt
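Once baseline and current numbers are in hand, quantifying the change keeps regression judgments objective. A trivial illustrative helper:

```rust
/// Percentage change from baseline to current: positive means a regression
/// (current is slower), negative an improvement.
fn regression_pct(baseline: f64, current: f64) -> f64 {
    (current - baseline) / baseline * 100.0
}

fn main() {
    // e.g. baseline mean 80 ms, current mean 100 ms
    println!("{:+.1}%", regression_pct(80.0, 100.0)); // +25.0%
}
```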

API and Integration Issues

WebSocket Problems

Symptoms: WebSocket connections drop or fail to establish
Debugging Steps:
# 1. Test WebSocket endpoint
# Install websocat: https://github.com/vi/websocat
websocat ws://localhost:40403/ws/vendor

# 2. Check connection logs
docker-compose logs bridge-service | grep websocket

# 3. Test with curl (HTTP upgrade)
curl -i -N -H "Connection: Upgrade" \
     -H "Upgrade: websocket" \
     -H "Sec-WebSocket-Key: test" \
     -H "Sec-WebSocket-Version: 13" \
     http://localhost:40403/ws/vendor

# 4. Network configuration check
docker-compose exec bridge-service netstat -tulpn | grep 40403
Common Fixes:
// Raise WebSocket size limits (tungstenite-style config; exact field
// names vary by version)
WebSocketConfig {
    max_message_size: Some(16 * 1024 * 1024),
    max_frame_size: Some(16 * 1024 * 1024),
    accept_unmasked_frames: false,
    ..Default::default()
}

// Handle connection drops gracefully
loop {
    match ws_stream.next().await {
        Some(Ok(msg)) => handle_message(msg).await,
        Some(Err(e)) => {
            log::error!("WebSocket error: {}", e);
            break;
        }
        None => break,
    }
}
Message Handling Problems

Symptoms: Messages not processed or malformed
Solutions:
# 1. Validate message format
echo '{"type":"authenticate","vendor_id":"test","api_key":"test"}' | \
  websocat ws://localhost:40403/ws/vendor

# 2. Check message logs
docker-compose logs bridge-service | grep message

# 3. Test with mock client
cargo run --bin mock-vendor -- \
  --api-addr http://localhost:40401 \
  test-websocket
Debug Message Flow:
// Add detailed logging
#[tracing::instrument(skip(msg))]
async fn handle_websocket_message(msg: Message) {
    tracing::debug!("Received message: {:?}", msg);

    match msg {
        Message::Text(text) => {
            match serde_json::from_str::<VendorMessage>(&text) {
                Ok(vendor_msg) => {
                    tracing::info!("Parsed vendor message: {:?}", vendor_msg);
                    process_vendor_message(vendor_msg).await;
                }
                Err(e) => {
                    tracing::error!("Failed to parse message: {}", e);
                }
            }
        }
        _ => tracing::warn!("Unexpected message type: {:?}", msg),
    }
}

API Request Issues

Symptoms: API requests fail, return errors, or time out
Debugging Steps:
# 1. Test API health
curl -v http://localhost:40401/health

# 2. Check API logs
docker-compose logs api-gateway | grep -E "(ERROR|WARN)"

# 3. Test authentication
curl -X POST http://localhost:40401/api/v1/auth \
  -H "Content-Type: application/json" \
  -H "X-API-Key: test-key" \
  -d '{"type":"test"}'

# 4. Monitor request/response
curl -w "@curl-format.txt" -o /dev/null -s http://localhost:40401/health

# curl-format.txt:
# time_namelookup: %{time_namelookup}\n
# time_connect: %{time_connect}\n
# time_appconnect: %{time_appconnect}\n
# time_pretransfer: %{time_pretransfer}\n
# time_redirect: %{time_redirect}\n
# time_starttransfer: %{time_starttransfer}\n
# time_total: %{time_total}\n
Rate Limiting Issues:
# Check rate limit headers
curl -I http://localhost:40401/api/v1/auth

# Should include:
# X-RateLimit-Limit: 100
# X-RateLimit-Remaining: 99
# X-RateLimit-Reset: 1640995200

# If rate limited (429 error):
# Wait for reset time or adjust limits in config
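The wait logic can honor the X-RateLimit-Reset header (a Unix timestamp) when it is in the future and fall back to capped exponential backoff otherwise. A hedged sketch (function and parameter names are illustrative, not the gateway's actual API):

```rust
use std::time::Duration;

/// How long to wait before retrying after a 429 response.
fn retry_delay(reset_unix: Option<u64>, now_unix: u64, attempt: u32) -> Duration {
    match reset_unix {
        // Wait until the advertised reset time, plus a one-second buffer.
        Some(reset) if reset > now_unix => Duration::from_secs(reset - now_unix + 1),
        // Otherwise back off exponentially: 500 ms, 1 s, 2 s, ... capped at 30 s.
        _ => {
            let ms = 500u64.saturating_mul(1 << attempt.min(6));
            Duration::from_millis(ms.min(30_000))
        }
    }
}

fn main() {
    // Reset at t=1640995200, current time t=1640995190: wait 11 seconds.
    println!("{:?}", retry_delay(Some(1_640_995_200), 1_640_995_190, 0));
}
```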
Authentication Failures

Symptoms: Authentication fails with valid credentials
Solutions:
# 1. Verify API key format
echo "API Key: sk_test_..." # Should start with sk_test_ or sk_live_

# 2. Check authentication logs
docker-compose logs api-gateway | grep auth

# 3. Test with different credentials
curl -X POST http://localhost:40401/api/v1/auth \
  -H "X-API-Key: sk_test_valid_key" \
  -H "Content-Type: application/json" \
  -d '{}'

# 4. Verify database has vendors
docker-compose exec postgres psql -U boop -d boop \
  -c "SELECT vendor_id, api_key_hash FROM vendors;"
API Key Management:
// Generate development API key
use sha2::{Sha256, Digest};

fn generate_api_key_hash(api_key: &str) -> String {
    let mut hasher = Sha256::new();
    hasher.update(api_key.as_bytes());
    format!("{:x}", hasher.finalize())
}

// For development, insert test vendor:
// INSERT INTO vendors (vendor_id, api_key_hash)
// VALUES ('test_vendor', 'hash_of_sk_test_key');

Performance Issues

Slow Response Times

Symptoms: API requests take longer than expected
Diagnosis:
# 1. Measure API performance
ab -n 100 -c 10 http://localhost:40401/health

# 2. Profile specific endpoints
curl -w "%{time_total}\n" -o /dev/null -s http://localhost:40401/api/v1/auth

# 3. Check database query performance
docker-compose exec postgres psql -U boop -d boop \
  -c "EXPLAIN ANALYZE SELECT * FROM users WHERE user_id = 'test';"

# 4. Monitor system resources
docker stats --no-stream

# 5. Check for database locks
docker-compose exec postgres psql -U boop -d boop \
  -c "SELECT * FROM pg_stat_activity WHERE state = 'active';"
Optimization Strategies:
// Add database connection pooling
let pool = PgPoolOptions::new()
    .max_connections(50)
    .min_connections(5)
    .acquire_timeout(Duration::from_secs(30))
    .idle_timeout(Duration::from_secs(600))
    .max_lifetime(Duration::from_secs(1800))
    .connect(&database_url)
    .await?;

// sqlx queries are prepared and cached automatically; the macros also
// check them against the schema at compile time
let user = sqlx::query!("SELECT * FROM users WHERE user_id = $1", user_id)
    .fetch_optional(&pool)
    .await?;

// Add caching for frequent queries
use moka::future::Cache;
let cache: Cache<String, User> = Cache::builder()
    .max_capacity(10_000)
    .time_to_live(Duration::from_secs(300))
    .build();
Database Performance Problems

Symptoms: Slow database queries, connection timeouts
Solutions:
-- 1. Check slow queries
SELECT query, mean_exec_time, calls
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- 2. Analyze table statistics
ANALYZE users;
ANALYZE auth_requests;

-- 3. Check missing indexes
SELECT schemaname, tablename, attname, n_distinct, correlation
FROM pg_stats
WHERE schemaname = 'public'
ORDER BY n_distinct DESC;

-- 4. Add useful indexes
CREATE INDEX CONCURRENTLY idx_users_created_at ON users(created_at);
CREATE INDEX CONCURRENTLY idx_auth_requests_vendor_id ON auth_requests(vendor_id);
Configuration Tuning:
# postgresql.conf optimizations for development
shared_buffers = 256MB
effective_cache_size = 1GB
maintenance_work_mem = 64MB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100

Memory Issues

Symptoms: Memory usage continuously increases
Debugging:
# 1. Monitor memory usage
docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}"

# 2. Use memory profiling
# Add to Cargo.toml: dhat = "0.3"
# Enable memory profiling in code

# 3. Check for leaked file descriptors (the PID is container-local,
#    so run lsof inside the container)
docker-compose exec api-gateway sh -c 'lsof -p $(pidof api-gateway)'

# 4. Trigger a heap dump, if the service installs a SIGUSR1 handler
docker-compose exec api-gateway sh -c 'kill -USR1 $(pidof api-gateway)'
Common Leak Sources:
// ❌ Bad: builds a new pool on every call and blocks a connection
async fn bad_database_usage() {
    let pool = create_pool().await;            // fresh pool each call
    let _conn = pool.acquire().await.unwrap(); // held until the function returns
}

// ✅ Good: Proper resource management
async fn good_database_usage(pool: &PgPool) -> Result<User, Error> {
    let user = sqlx::query_as!(User, "SELECT * FROM users WHERE id = $1", 1)
        .fetch_optional(pool)
        .await?;
    Ok(user.unwrap_or_default())
} // Connection automatically returned to pool

// ❌ Bad: Unbounded cache (grows forever)
static CACHE: Lazy<Mutex<HashMap<String, User>>> =
    Lazy::new(|| Mutex::new(HashMap::new()));

// ✅ Good: Bounded cache with TTL
static CACHE: Lazy<Cache<String, User>> = Lazy::new(|| {
    Cache::builder()
        .max_capacity(10_000)
        .time_to_live(Duration::from_secs(300))
        .build()
});

Common Error Messages

Compilation Errors

# Error: "could not find `Cargo.toml`"
# Solution: Make sure you're in the right directory
cd /path/to/boop-network
ls Cargo.toml  # Should exist

# Error: "failed to select a version for `dependency`"
# Solution: Update dependencies
cargo update

# Error: "linking with `cc` failed"
# Solution: Install build dependencies
apt-get update && apt-get install -y build-essential pkg-config libssl-dev

# Error: "could not compile `sqlx`"
# Solution: Database URL needed for compile-time checks
export DATABASE_URL="postgresql://boop:password@localhost:5432/boop"
Runtime Errors

# Error: "Connection refused"
# Check if service is running
docker-compose ps
docker-compose logs service-name

# Error: "Permission denied"
# Fix file permissions
chmod +x scripts/*.sh
sudo chown -R $USER:$USER .

# Error: "Port already in use"
# Find and kill the process
lsof -i :40401
kill -9 $(lsof -t -i:40401)

# Error: "No space left on device"
# Clean Docker resources
docker system prune -a
docker volume prune

Database Errors

-- Error: "role 'boop' does not exist"
CREATE USER boop WITH PASSWORD 'password';
CREATE DATABASE boop OWNER boop;

-- Error: "relation does not exist"
-- Run migrations
sqlx migrate run --database-url $DATABASE_URL

-- Error: "could not connect to server"
-- Check PostgreSQL is running and accepting connections
pg_isready -h localhost -p 5432

-- Error: "too many connections"
-- Check current connections
SELECT count(*) FROM pg_stat_activity;

-- Raise the limit (requires a server restart; pg_reload_conf() is not enough)
ALTER SYSTEM SET max_connections = 200;
-- then restart: docker-compose restart postgres

Emergency Procedures

Complete Reset

When everything is broken:
1. Stop everything:

docker-compose down -v
docker stop $(docker ps -aq)
docker rm $(docker ps -aq)

2. Clean Docker:

docker system prune -a
docker volume prune
docker network prune

3. Reset the repository (warning: this discards all uncommitted work and untracked files):

git clean -fdx
git reset --hard HEAD
git pull origin main

4. Fresh start:

cd docker/standalone
cp .env.example .env
docker-compose build --no-cache
docker-compose up -d

Data Recovery

If you need to recover data:
# 1. Check for database backups
ls -la ./backups/

# 2. Export current data before reset
docker-compose exec postgres pg_dump -U boop boop > emergency_backup.sql

# 3. After reset, restore data
docker-compose exec -T postgres psql -U boop -d boop < emergency_backup.sql

Getting Additional Help

  • Create an Issue: report bugs or ask for help with specific problems
  • Join Discord: get real-time help from the development community
  • Check Logs: always include relevant logs when asking for help
  • System Info: include OS, Docker version, and hardware specs

When asking for help, always include:
  • What you were trying to do
  • What you expected to happen
  • What actually happened
  • Relevant error messages and logs
  • Your environment details (OS, Docker version, etc.)
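Gathering those environment details can be scripted so reports stay consistent. An illustrative sketch (not part of the repository):

```rust
use std::process::Command;

/// Runs a command and returns its trimmed stdout, or a placeholder if
/// the command is missing.
fn capture(cmd: &str, args: &[&str]) -> String {
    Command::new(cmd)
        .args(args)
        .output()
        .map(|o| String::from_utf8_lossy(&o.stdout).trim().to_string())
        .unwrap_or_else(|_| format!("{cmd}: not available"))
}

fn main() {
    // Paste this output into bug reports alongside the error logs.
    println!("os:     {}", capture("uname", &["-srm"]));
    println!("docker: {}", capture("docker", &["--version"]));
    println!("rustc:  {}", capture("rustc", &["--version"]));
}
```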