A multi-exchange copy trading engine built in Rust.
Copy Trader is the real-time copy trading engine behind Autowealth. It copies trades from
a master Bybit account to follower accounts across Bybit, Binance, and Hyperliquid via
WebSocket streams. Parallelizing follower execution via tokio::spawn cut the copy cycle
from ~20s to ~4s. The engine combines position-based quantity math with BigDecimal
precision, idempotent execution, and a periodic audit service with alert deduplication.
251 commits, deployed on AWS via CDK.
Copy trading sounds simple: master opens a position, followers open the same position. The reality is harder. Each follower has different margin, different leverage, and different position sizes. A naive ratio-based approach breaks on partial closes, leverage mismatches, and exchange precision requirements. Copy Trader solves this with position-based math: each follower's quantity is calculated from their current position relative to the master's, not from a fixed ratio. The formula self-corrects on every event because it recalculates from live API state rather than accumulating from a running counter, so rounding drift doesn't compound across multiple position increases.
```
┌─ master account (Bybit only)
│  WebSocket stream → ExecutionType::Trade events
│
├──▶ pending_orders_cache   [ in-memory · keyed by symbol:position_idx ]
│    parks OrderUpdate (no margin) until PositionUpdate arrives with confirmed IM
│
├──▶ position_event_handler [ classify: open · increase · partial_close · full_close ]
│
├──▶ quantity_calculator    [ position-based math · BigDecimal · no f64 ]
│    target = master_position × (follower_equity × margin_pct / master_IM)
│    incremental = target − current_follower_position  (self-correcting)
│    rounding: down for opens (under-expose), up for closes (avoid residuals)
│
├──▶ follower_executor      [ tokio::spawn per follower · join_all ]
│    ├── leverage_sync      [ block opens on failure ]
│    ├── validate           [ precision · min/max · notional per exchange ]
│    ├── smart_reduction    [ fallback if order fails validation ]
│    └── execute            [ idempotency check → release DB conn → API call ]
│
├──▶ audit_service          [ periodic reconciliation · MissingPosition / SizeMismatch ]
│    60s grace · 1pp re-alert threshold · telegram alerts
│
└──▶ notification_pipeline  [ telegram bot · twilio sms · dashboard push ]
```
A fixed ratio breaks on partial closes and leverage changes. The formula is: follower_target = master_position × (follower_equity × margin_pct / master_IM), then subtract the follower's current position to get the incremental order. The subtraction step is the key insight. It makes the formula self-correcting. Each event recalculates from live API state instead of accumulating from a running counter, so rounding errors on partial closes and leverage adjustments don't compound over a session.
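The sizing step can be sketched as follows. Function and parameter names are illustrative, and where the engine uses BigDecimal this stdlib-only sketch uses fixed-point i128 values (scaled by 1e8) to keep the no-f64 property:

```rust
/// target = master_position × (follower_equity × margin_pct / master_IM)
/// incremental = target − current_follower_position (self-correcting)
/// All quantities are fixed-point i128 scaled by 1e8; margin_pct is in
/// basis points. These names are a sketch, not the engine's real API.
fn incremental_qty(
    master_position: i128,   // master size, 1e-8 lots
    follower_equity: i128,   // follower equity, 1e-8 quote units
    margin_pct: i128,        // allocation, basis points (1_000 = 10%)
    master_im: i128,         // master's confirmed initial margin, 1e-8 quote units
    follower_position: i128, // follower's live position, 1e-8 lots
    step: i128,              // exchange quantity step, 1e-8 lots
    is_close: bool,
) -> i128 {
    // follower allocation in quote units (same 1e-8 scale as master_im)
    let alloc = follower_equity * margin_pct / 10_000;
    let target = master_position * alloc / master_im;
    let raw = target - follower_position;
    // round toward the safe side: down for opens (under-expose),
    // up for closes (avoid leaving residual position)
    round_to_step(raw, step, is_close)
}

fn round_to_step(qty: i128, step: i128, round_up: bool) -> i128 {
    let sign = if qty < 0 { -1 } else { 1 };
    let mag = qty.abs();
    let floored = mag / step * step;
    let out = if round_up && mag % step != 0 { floored + step } else { floored };
    sign * out
}
```

Because the incremental order is always `target − live follower position`, a follower who is already at target gets a zero-quantity order, and any earlier rounding is absorbed on the next event.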
Bybit fires two events for every fill: OrderUpdate arrives first with fill qty and price but no margin data, then PositionUpdate arrives with the confirmed initial margin (IM). The engine parks the order in an in-memory pending_orders_cache keyed by symbol:position_idx and only processes followers when the position event delivers the real IM. Estimating IM from price × qty × leverage is wrong in cross-margin, fractional leverage, and hedge mode. This architecture eliminates that class of error.
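The pairing logic amounts to a map keyed by symbol:position_idx. A minimal sketch, with struct and field names hypothetical and pared down from whatever the real events carry:

```rust
use std::collections::HashMap;

// Hypothetical pared-down events; the real Bybit payloads carry more fields.
struct OrderUpdate { symbol: String, position_idx: u8, qty: String }
struct PositionUpdate { symbol: String, position_idx: u8, initial_margin: String }

/// Parks OrderUpdates (which arrive without margin data) until the matching
/// PositionUpdate delivers the confirmed initial margin.
#[derive(Default)]
struct PendingOrdersCache {
    pending: HashMap<String, OrderUpdate>, // keyed by "symbol:position_idx"
}

impl PendingOrdersCache {
    fn on_order(&mut self, ord: OrderUpdate) {
        let key = format!("{}:{}", ord.symbol, ord.position_idx);
        self.pending.insert(key, ord); // no margin yet: park it
    }

    /// Returns (parked order, confirmed IM) once both events have arrived.
    fn on_position(&mut self, pos: PositionUpdate) -> Option<(OrderUpdate, String)> {
        let key = format!("{}:{}", pos.symbol, pos.position_idx);
        self.pending.remove(&key).map(|ord| (ord, pos.initial_margin))
    }
}
```

Follower processing only starts when `on_position` returns `Some`, so the quantity math always sees a confirmed IM, never an estimate.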
The original sequential model ran each follower's pipeline (leverage sync → equity fetch → position fetch → quantity calc → execute) one after another. With per-follower exchange API calls taking 1–9s each, that summed to ~20s per copy cycle. Moving each follower's work into its own tokio::spawn task and collecting results with join_all brought the cycle to ~4s. The tradeoff: DB connection handling had to be restructured to avoid pool exhaustion under concurrent tasks.
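The fan-out/join shape looks roughly like this. The engine uses tokio::spawn and join_all; this stdlib sketch substitutes OS threads and a 50ms sleep for the 1–9s exchange call, but the structure is the same: one task per follower, then wait for and collect every per-follower result.

```rust
use std::thread;
use std::time::Duration;

/// Stand-in for one follower's pipeline; the sleep simulates the slow
/// exchange API call (1-9s in the real engine, 50ms here).
fn execute_for_follower(id: u32) -> Result<u32, String> {
    thread::sleep(Duration::from_millis(50));
    Ok(id)
}

/// Fan out one task per follower, then join them all, keeping each
/// follower's result so one failure doesn't hide the others.
fn copy_cycle(follower_ids: Vec<u32>) -> Vec<Result<u32, String>> {
    let handles: Vec<_> = follower_ids
        .into_iter()
        .map(|id| thread::spawn(move || execute_for_follower(id)))
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}
```

The cycle time becomes roughly the slowest single follower rather than the sum of all followers, which is where the ~20s → ~4s gain comes from.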
With 15+ concurrent tokio::spawn tasks each holding an r2d2 PooledConnection through a 3–9s exchange API call, a 5-connection Supabase pool deadlocked. A pooled connection stays checked out for as long as its guard remains in scope, so holding one across a multi-second HTTP call starves every other task waiting on the pool. The fix: acquire connection → idempotency check → drop connection (explicit scope) → exchange API call → acquire new connection → write result. Pool timing instrumentation was added afterward to catch regressions (warns if pool.get() exceeds 500ms).
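The scoping fix can be illustrated with a toy one-connection pool. This is not r2d2's actual API; the pool, the stubs, and the names are stand-ins that only show where the connection guard goes out of scope.

```rust
use std::sync::{Arc, Mutex};

// Toy pool: a counter of free connections; dropping a Conn returns it.
#[derive(Clone)]
struct Pool { free: Arc<Mutex<u32>> }
struct Conn { pool: Pool }

impl Pool {
    fn new(size: u32) -> Self { Pool { free: Arc::new(Mutex::new(size)) } }
    fn get(&self) -> Result<Conn, String> {
        let mut free = self.free.lock().unwrap();
        if *free == 0 { return Err("pool exhausted".into()); }
        *free -= 1;
        Ok(Conn { pool: self.clone() })
    }
}
impl Drop for Conn {
    fn drop(&mut self) { *self.pool.free.lock().unwrap() += 1; }
}

// Stubs standing in for the real DB reads/writes and the 3-9s exchange call.
fn idempotency_check(_c: &Conn, _order_id: u64) -> bool { false }
fn exchange_api_call(_order_id: u64) -> String { "filled".into() }
fn write_result(_c: &Conn, _fill: &str) {}

fn execute_order(pool: &Pool, order_id: u64) -> Result<(), String> {
    let done = {
        let conn = pool.get()?;              // acquire
        idempotency_check(&conn, order_id)   // short DB read
    };                                       // conn dropped here → back to pool
    if done { return Ok(()); }
    let fill = exchange_api_call(order_id);  // slow HTTP call; no connection held
    let conn = pool.get()?;                  // re-acquire for the write
    write_result(&conn, &fill);
    Ok(())
}
```

The explicit inner scope is the whole fix: the connection lives only around DB work, so a 5-connection pool can serve 15+ concurrent followers because nobody holds a connection through the slow HTTP leg.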
A periodic audit service compares master positions against follower positions via live API calls and detects MissingPosition, ExtraPosition, and SizeMismatch discrepancies. Without deduplication, a persistent 10% mismatch would spam Telegram on every audit cycle. The 1pp re-alert threshold means an alert fires only if deviation changed by at least 1 percentage point since the last notification, and a 60s grace period suppresses alerts while the copy engine catches up on recent trades. The system stays supervisable rather than noisy.
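The re-alert gate reduces to a small piece of state per discrepancy. A sketch with a hypothetical state layout (the 60s grace period is a separate check, omitted here): fire only when the deviation has moved at least 1 percentage point from the last value that was actually notified.

```rust
/// Per-discrepancy alert state (hypothetical layout). Tracks the deviation
/// at which the last Telegram alert fired, not the last observed deviation,
/// so slow drift still re-alerts once it sums to 1pp.
struct AlertState { last_notified_pct: Option<f64> }

impl AlertState {
    /// Returns true if an alert should fire for this deviation, and if so
    /// records it as the new baseline for the 1pp threshold.
    fn should_alert(&mut self, deviation_pct: f64) -> bool {
        match self.last_notified_pct {
            Some(prev) if (deviation_pct - prev).abs() < 1.0 => false,
            _ => {
                self.last_notified_pct = Some(deviation_pct);
                true
            }
        }
    }
}
```

Comparing against the last *notified* value (rather than the last *seen* value) is the detail that keeps a persistent 10% mismatch quiet while still surfacing genuine drift.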
Copy Trader handles the full coordination problem: three exchanges with divergent precision,
notional, and position-mode rules; WebSocket streams that can disconnect (manual restart
required, alerted via the audit service); and followers with different margin sizes that need
proportional exposure across partial closes without accumulating residuals. The engineering
breakthroughs are concrete: a two-event Bybit cache that waits for confirmed IM rather than
estimating it; a DB connection scoping fix that kept a 5-connection pool alive under 15+
concurrent followers; and a tokio::spawn parallelization that cut the copy cycle
from 20s to 4s. The engine now runs in production for Autowealth's CompoundFox
subscribers, with the periodic audit service watching for position drift.