Fortress Rollback API Contracts¶
Version: 1.1 Date: May 7, 2026 Status: Complete
This document specifies preconditions, postconditions, and invariants for all public APIs. It complements formal-spec.md and serves as a reference for verification and documentation.
Table of Contents¶
- Contract Notation
- SessionBuilder
- P2PSession
- SpectatorSession
- SyncTestSession
- GameStateCell
- Request Handling
- Error Catalog
- Event Catalog
- Cross-Cutting Invariants
- Revision History
Contract Notation¶
Each API is documented with:
- Signature: The function signature
- Pre: Preconditions that must hold before calling
- Post: Postconditions guaranteed after successful return
- Errors: Conditions that cause specific errors
- Panics: Should always be "Never" for public APIs
- Invariants: Properties preserved across the call
SessionBuilder¶
SessionBuilder::new() -> Self¶
Pre: None
Post:
- `num_players = 2`
- `max_prediction = 8`
- `fps = 60`
- `input_delay = 0`
- `save_mode = SaveMode::EveryFrame`
- `desync_detection = On { interval: 60 }`
- `disconnect_timeout = 2000ms`
- `disconnect_notify_start = 500ms`
Errors: None
Panics: Never
with_num_players(self, n: usize) -> Result<Self, FortressError>¶
Pre: n > 0
Post: self.num_players = n
Errors:
- `InvalidRequestStructured { kind: ZeroPlayers }` - if `n = 0`
Panics: Never
add_player(self, player_type: PlayerType, handle: PlayerHandle) -> Result<Self, FortressError>¶
Pre:
- `handle` not already registered
- For `Local` or `Remote`: `handle.0 < num_players`
- For `Spectator`: `handle.0 >= num_players`
Post:
- Player registered with given type
- For `Local`: `local_players += 1`
Errors:
- `InvalidRequest("Player handle already in use")` - duplicate handle
- `InvalidRequest("...handle should be between 0 and num_players")` - invalid player handle
- `InvalidRequest("...handle should be num_players or higher")` - invalid spectator handle
Panics: Never
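The handle preconditions above can be sketched as a standalone validation routine. This is an illustrative stand-in, not the library's code: `PlayerKind` and `validate_handle` are hypothetical names, and the error strings are abbreviated from the `InvalidRequest` messages listed above.

```rust
/// Simplified stand-in for the documented player types.
#[derive(Clone, Copy)]
enum PlayerKind {
    Local,
    Remote,
    Spectator,
}

/// Mirrors the documented `add_player` preconditions:
/// - the handle must not already be registered,
/// - Local/Remote handles must satisfy `handle < num_players`,
/// - Spectator handles must satisfy `handle >= num_players`.
fn validate_handle(
    registered: &[usize],
    num_players: usize,
    kind: PlayerKind,
    handle: usize,
) -> Result<(), &'static str> {
    if registered.contains(&handle) {
        return Err("Player handle already in use");
    }
    match kind {
        PlayerKind::Local | PlayerKind::Remote if handle >= num_players => {
            Err("handle should be between 0 and num_players")
        }
        PlayerKind::Spectator if handle < num_players => {
            Err("handle should be num_players or higher")
        }
        _ => Ok(()),
    }
}
```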
with_max_prediction_window(self, window: usize) -> Self¶
Pre: None
Post:
- `self.max_prediction = window`
- `window = 0` → session operates in lockstep (no rollbacks)
Errors: None
Panics: Never
with_input_delay(self, delay: usize) -> Result<Self, FortressError>¶
Pre: delay <= queue_length - 1 (default max: 127)
Post: self.input_delay = delay
Errors:
- `InvalidRequestStructured { kind: FrameDelayTooLarge { delay, max_delay } }` - if `delay` exceeds `input_queue_config.max_frame_delay()`
Panics: Never
with_fps(self, fps: usize) -> Result<Self, FortressError>¶
Pre: fps > 0
Post: self.fps = fps
Errors:
- `InvalidRequest("FPS should be higher than 0")` - if `fps = 0`
Panics: Never
with_desync_detection_mode(self, mode: DesyncDetection) -> Self¶
Pre: None
Post: self.desync_detection = mode
Errors: None
Panics: Never
with_sparse_saving_mode(self, sparse_saving: bool) -> Self (deprecated)¶
Deprecated since 0.2.0: Use `with_save_mode(SaveMode::Sparse)` instead.
Pre: None
Post:
- `sparse_saving = true` → `self.save_mode = SaveMode::Sparse`
- `sparse_saving = false` → `self.save_mode = SaveMode::EveryFrame`
Errors: None
Panics: Never
with_disconnect_timeout(self, timeout: Duration) -> Self¶
Pre: None
Post: self.disconnect_timeout = timeout
Errors: None
Panics: Never
with_disconnect_behavior(self, behavior: DisconnectBehavior) -> Self¶
Pre: None
Post: self.disconnect_behavior = behavior
Errors: None
Panics: Never
Notes:
- Default is `DisconnectBehavior::Halt`, preserving the legacy GGRS-style halt-on-drop semantics.
- `DisconnectBehavior::ContinueWithout` enables graceful peer drop on the automatic disconnect-timeout path: the dropped peer's input queue is frozen at the last confirmed input, `FortressEvent::PeerDropped` and `FortressEvent::Disconnected` are both emitted, and remaining peers continue advancing.
- The setting governs only the automatic-timeout path. The explicit `P2PSession::remove_player` always performs a graceful drop regardless of this setting; the legacy `P2PSession::disconnect_player` retains its non-graceful semantics regardless of this setting.
start_p2p_session(self, socket: impl NonBlockingSocket<T::Address>) -> Result<P2PSession<T>, FortressError>¶
Pre:
- All player handles `0..num_players` have been registered via `add_player`
- At least one local player
Post:
- Session created in `Synchronizing` state
- All remote endpoints begin synchronization
- Socket ownership transferred to session
Errors:
- `InvalidRequest("Not enough players have been added...")` - missing players
Panics: Never
Invariants Established:
- INV-4: Queue length bounds
- INV-5: Queue index validity
- INV-11: No panics guarantee
start_spectator_session(self, host_addr: T::Address, socket: impl NonBlockingSocket<T::Address>) -> Option<SpectatorSession<T>>¶
Pre: None (no player registration required)
Post:
- Returns `Some(session)` with session created in `Synchronizing` state
- Host endpoint begins synchronization
- Returns `None` if protocol initialization fails (e.g., serialization issues)
Errors: None (returns Option, not Result)
Panics: Never
start_synctest_session(self) -> Result<SyncTestSession, FortressError>¶
Pre: check_distance < max_prediction
Post:
- Session created (no network, immediate Running state equivalent)
- Rollback simulation enabled
Errors:
- `InvalidRequest("Check distance too big")` - if `check_distance >= max_prediction`
Panics: Never
P2PSession¶
current_state(&self) -> SessionState¶
Pre: None
Post: Returns Synchronizing or Running
Errors: None
Panics: Never
local_player_handles(&self) -> HandleVec¶
Pre: None
Post: Returns HandleVec of handles where player_type = Local
Errors: None
Panics: Never
poll_remote_clients(&mut self)¶
Pre: None
Post:
- All pending messages from socket processed
- Input queues updated with remote inputs
- Protocol state machines advanced
- Events queued for retrieval
Errors: None (errors converted to events)
Panics: Never
Side Effects:
- May trigger state transitions (Synchronizing → Running)
- May queue `Synchronized`, `Disconnected`, `NetworkInterrupted` events
add_local_input(&mut self, handle: PlayerHandle, input: T::Input) -> Result<(), FortressError>¶
Pre:
- `handle` is a local player
- `current_state() = Running` OR input is being buffered
- Not exceeding prediction threshold
Post:
- Input stored in `local_inputs` map
- Input will be transmitted to remotes on next `advance_frame`
Errors:
- `InvalidPlayerHandle` - handle not registered or not local
- `InvalidRequest("Prediction threshold reached")` - too far ahead
Panics: Never
advance_frame(&mut self) -> FortressResult<RequestVec<T>>¶
Pre:
- `current_state() = Running`
- All local players have provided input via `add_local_input`
Post:
- Returns sequence of requests to be processed in order
- `current_frame` incremented (after processing requests)
- If rollback needed: `LoadGameState` followed by `SaveGameState`/`AdvanceFrame` pairs
- If no rollback: `SaveGameState` (unless sparse) then `AdvanceFrame`
Errors:
- `NotSynchronized` - if `current_state() != Running`
- `InvalidRequestStructured { kind: MissingLocalInput }` - not all local players provided input
Panics: Never
Request Sequence (no rollback, full saving):
[SaveGameState { frame: N }, AdvanceFrame { inputs_N }]
Request Sequence (with rollback):
[LoadGameState { frame: K },
SaveGameState { frame: K }, AdvanceFrame { inputs_K },
SaveGameState { frame: K+1 }, AdvanceFrame { inputs_K+1 },
...
SaveGameState { frame: N }, AdvanceFrame { inputs_N }]
Invariants Preserved:
- INV-1: Frame monotonicity (within rollback bounds)
- INV-2: Rollback boundedness
- INV-7: Confirmed frame consistency
- INV-8: Saved frame consistency
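The rollback request sequence above can be sketched as a small generator. This is an illustrative model, not library code: `Req` is a hypothetical tag type standing in for `FortressRequest`, and `rollback_sequence` only reproduces the documented shape (one `LoadGameState` at the rollback frame, then a `SaveGameState`/`AdvanceFrame` pair per resimulated frame).

```rust
/// Simplified tags for the documented request kinds (illustrative only).
#[derive(Debug, PartialEq)]
enum Req {
    Load(i32),
    Save(i32),
    Advance(i32),
}

/// Builds the documented rollback sequence: `LoadGameState { frame: k }`,
/// followed by a `SaveGameState`/`AdvanceFrame` pair for every frame in
/// `k..=n` (full saving, no sparse mode).
fn rollback_sequence(k: i32, n: i32) -> Vec<Req> {
    let mut reqs = vec![Req::Load(k)];
    for frame in k..=n {
        reqs.push(Req::Save(frame));
        reqs.push(Req::Advance(frame));
    }
    reqs
}
```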
events(&mut self) -> EventDrain<'_, T>¶
Pre: None
Post:
- Returns iterator over pending events
- Event queue emptied
Errors: None
Panics: Never
frames_ahead(&self) -> i32¶
Pre: None
Post: Returns frame advantage estimate
Errors: None
Panics: Never
network_stats(&self, handle: PlayerHandle) -> Result<NetworkStats, FortressError>¶
Pre: handle is a remote player
Post: Returns stats (ping, bandwidth, etc.)
Errors:
- `InvalidPlayerHandle` - not a remote player
- `NotSynchronized` - stats not yet available
Panics: Never
disconnect_player(&mut self, handle: PlayerHandle) -> Result<(), FortressError>¶
Pre: None — all caller-side conditions are validated and returned via the Errors section below.
Post:
- Every player handle owned by the dropped endpoint is marked as disconnected on the local connection-status table (multi-handle endpoints — multiple handles sharing a single address — are wound down in full)
- The corresponding network endpoint is disconnected
- Future inputs for any disconnected handle use the default value (the input queue is not frozen — see `remove_player` for graceful drop, which freezes the queue and replays the last confirmed input)
Errors:
- `InvalidRequestStructured { kind: DisconnectInvalidHandle { handle } }` - handle not registered
- `InvalidRequestStructured { kind: DisconnectLocalPlayer { handle } }` - handle refers to a local player
- `InvalidRequestStructured { kind: AlreadyDisconnected { handle } }` - handle was already disconnected
- `InternalErrorStructured { kind: DisconnectStatusNotFound { handle } }` - internal-invariant violation (a registered remote handle has no corresponding connection-status entry); should not occur in correct code, treat as a library bug
Panics: Never
Notes:
- Does not freeze the player's input queue and does not emit `FortressEvent::PeerDropped`.
- Always preserves halt-on-drop semantics regardless of the configured `DisconnectBehavior`: remaining peers no longer produce confirmed inputs from the dropped peer's endpoint, so `advance_frame` cannot make progress past that peer's last confirmed frame.
- For an explicit graceful drop, prefer `remove_player`.
- When `player_handle` is Remote, operates on the Remote endpoint at the address only — a `Spectator` endpoint registered at the same `T::Address` is independent and is not affected, remaining running until it disconnects on its own. When `player_handle` is Spectator, only that specific spectator endpoint is disconnected; any Remote endpoint at the same address is left running. Co-locating a `Remote` and a `Spectator` at the same address is unusual; this note documents the behavior for that edge case.
remove_player(&mut self, player_handle: PlayerHandle) -> Result<(), FortressError>¶
Removes a remote player from the session and continues with the remaining peers (graceful drop), regardless of the configured `DisconnectBehavior`.
Pre: None — all caller-side conditions are validated and returned via the Errors section below.
Post:
- Every non-spectator player handle owned by the dropped endpoint is marked disconnected on the local connection-status table
- Every non-spectator handle's input queue is frozen: it repeats its last confirmed input forever for remaining peers' simulation
- The corresponding network endpoint is disconnected
- One `FortressEvent::PeerDropped { handle, addr }` per non-spectator handle at the dropped address is queued, followed by exactly one `FortressEvent::Disconnected { addr }` in the same batch
- `confirmed_frame()` continues to advance for remaining peers
Errors:
- `InvalidRequestStructured { kind: DisconnectInvalidHandle { handle } }` - handle not registered, or refers to a spectator
- `InvalidRequestStructured { kind: DisconnectLocalPlayer { handle } }` - handle refers to a local player
- `InvalidRequestStructured { kind: PlayerAlreadyRemoved { handle } }` - handle is already marked disconnected (either via a prior `remove_player` call, via auto-removal under `DisconnectBehavior::ContinueWithout`, or via a previous explicit `disconnect_player` call)
- `InternalErrorStructured { kind: DisconnectStatusNotFound { handle } | IndexOutOfBounds { .. } }` - internal-invariant violation (a registered handle has no corresponding input queue or connection-status entry); should not occur in correct code, treat as a library bug
Panics: Never
Notes:
- Always opts in to graceful-drop semantics regardless of the session's `DisconnectBehavior`. The configured `DisconnectBehavior` only governs the automatic disconnect-timeout path.
- The `PeerDropped` event coexists with the legacy `Disconnected` event; new code should match on `PeerDropped` for graceful-drop-aware handling.
- Operates on the Remote endpoint at the targeted address only. A `Spectator` endpoint registered at the same `T::Address` is an independent endpoint and is not affected — it remains running until it disconnects on its own. Co-locating a `Remote` and a `Spectator` at the same address is unusual; this note documents the behavior for that edge case.
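The event-ordering guarantee of a graceful drop (one `PeerDropped` per non-spectator handle, then exactly one `Disconnected` for the address, all in the same batch) can be modeled as a small generator. This is a sketch over hypothetical stand-in types, not the library's internal emitter; the real `FortressEvent` carries a typed `T::Address` and more variants.

```rust
/// Simplified stand-ins for the two documented drop events
/// (illustrative only).
#[derive(Debug, PartialEq)]
enum Event {
    PeerDropped { handle: usize, addr: &'static str },
    Disconnected { addr: &'static str },
}

/// Queues events in the documented graceful-drop order: one `PeerDropped`
/// per non-spectator handle at the dropped address, then exactly one
/// `Disconnected` for that address.
fn graceful_drop_events(handles: &[usize], addr: &'static str) -> Vec<Event> {
    let mut batch: Vec<Event> = handles
        .iter()
        .map(|&handle| Event::PeerDropped { handle, addr })
        .collect();
    batch.push(Event::Disconnected { addr });
    batch
}
```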
disconnect_behavior(&self) -> DisconnectBehavior¶
Pre: None
Post: Returns the DisconnectBehavior set via SessionBuilder::with_disconnect_behavior (default Halt).
Errors: None
Panics: Never
set_input_delay(&mut self, player_handle: PlayerHandle, delay: usize) -> Result<(), FortressError>¶
Pre: delay is within the configured max_frame_delay() of the input queue (set via InputQueueConfig::max_frame_delay; defaults to queue_length - 1). All other caller-side conditions are validated and returned via the Errors section below.
Post:
- The local player's frame-delay is set to `delay`
- No-op case (`delay == current_delay`): no further side effects
- Initial-setup case (no inputs added yet): the new delay applies cleanly with no gap-fill replication. Decreases are also permitted in this case.
- Mid-session increase case (`delay > current_delay` after inputs have been added on a peer with exactly one local player):
  - The input queue replicates the most recently added input across `delta = delay - current_delay` new gap frames
  - The same replicated frames are pushed onto every remote endpoint's pending-output buffer and flushed
  - The local connection-status `last_frame` is advanced to match the queue's new `last_added_frame`
  - Remote peers' input sequences remain strictly monotonic
Errors:
- `InvalidRequestStructured { kind: NotLocalPlayer { handle } }` - handle is not a local player
- `InvalidRequestStructured { kind: FrameDelayTooLarge { delay, max_delay } }` - `delay` exceeds `queue_length - 1`
- `InvalidRequestStructured { kind: InputDelayDecreaseUnsupported { current, requested } }` - `requested < current` and inputs have already been added
- `InvalidRequestStructured { kind: InputDelayMidSessionMultiLocalUnsupported { local_players } }` - mid-session increase attempted with more than one local player on this peer
- `InvalidRequestStructured { kind: InputDelayMidSessionPendingOutputFull { delta, capacity } }` - mid-session increase would push more gap-fill frames into a remote's pending-output buffer than the configured `pending_output_limit` allows
- `InternalErrorStructured { kind: InputQueueGapFillFailed { frame } }` - internal invariant violation while replicating gap-fill bytes (should be reported as a bug)
Panics: Never
Invariants Preserved:
- INV-3 (Input Immutability): confirmed inputs are not modified by gap-fill replication
- INV-4 (Queue Bounds): the queue length is unchanged
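The mid-session gap-fill step above can be sketched in isolation: replicate the last added input across `delta` new frames, keeping frame numbers strictly monotonic. This is a hypothetical model (the real queue stores serialized input bytes, not `(frame, input)` pairs) meant only to illustrate the documented postconditions.

```rust
/// Replicates the most recently added input across `delta` new gap frames,
/// mirroring the documented mid-session delay-increase behavior:
/// each gap frame carries the same input, and frames stay strictly
/// monotonic starting at `last_frame + 1`.
fn gap_fill(last_frame: i32, last_input: u8, delta: usize) -> Vec<(i32, u8)> {
    (1..=delta as i32)
        .map(|i| (last_frame + i, last_input))
        .collect()
}
```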
input_delay(&self, player_handle: PlayerHandle) -> Result<usize, FortressError>¶
Pre: None — all caller-side conditions are validated and returned via the Errors section below.
Post: Returns the current frame-delay for player_handle
Errors:
- `InvalidRequestStructured { kind: NotLocalPlayer { handle } }` - handle is not a local player
Panics: Never
SpectatorSession¶
advance_frame(&mut self) -> FortressResult<RequestVec<T>>¶
Pre: current_state() = Running
Post:
- Returns `AdvanceFrame` requests only (no save/load)
- May return multiple frames if catching up
Errors:
- `NotSynchronized` - not yet synchronized with host
Panics: Never
SyncTestSession¶
advance_frame(&mut self) -> FortressResult<RequestVec<T>>¶
Pre: All local inputs provided
Post:
- Simulates rollback of `check_distance` frames
- Compares checksums for mismatch detection
- Returns requests including save/load/advance
Errors:
- `InvalidRequest` on checksum mismatch (desync detected)
Panics: Never
GameStateCell¶
save(&self, frame: Frame, state: Option<T::State>, checksum: Option<u128>) -> bool¶
Pre:
- Called in response to `SaveGameState` request
- `frame` matches request frame
Post:
- Returns `true` if the save succeeded
- Returns `false` if `frame` is `Frame::NULL` (save rejected)
- State stored and retrievable via `load()`
- Checksum stored for desync detection (if provided)
Errors: None
Panics: Never
load(&self) -> Option<T::State>¶
Pre: save() was previously called
Post: Returns cloned state
Errors: None (returns None if empty)
Panics: Never
Request Handling¶
Processing Order Contract¶
CRITICAL: Requests from advance_frame() MUST be processed in the exact order returned.
// CORRECT
for request in session.advance_frame()? {
match request {
FortressRequest::LoadGameState { cell, .. } => { /* load */ }
FortressRequest::SaveGameState { cell, frame } => { /* save */ }
FortressRequest::AdvanceFrame { inputs } => { /* advance */ }
}
}
// INCORRECT - DO NOT reorder or skip requests
SaveGameState Contract¶
Pre: game_state.frame == frame (your state matches requested frame)
Post (after handling):
- `cell.save(frame, Some(state), checksum)` called
- State is now loadable for rollback
User Responsibility:
- Clone entire game state
- Compute checksum if desync detection enabled
- Call `cell.save()` before processing next request
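`save` takes an optional `u128` checksum; the contract only requires that it be computed deterministically from the serialized state. A minimal sketch using FNV-1a — an assumption, since the library does not mandate any particular hash — widened to the expected `u128`:

```rust
/// Deterministic FNV-1a (64-bit) over serialized state bytes, widened to
/// the `u128` that `GameStateCell::save` accepts. Any deterministic hash
/// works for desync detection; FNV-1a is just a compact example.
fn state_checksum(bytes: &[u8]) -> u128 {
    let mut hash: u64 = 0xcbf2_9ce4_8422_2325; // FNV offset basis
    for &b in bytes {
        hash ^= u64::from(b);
        hash = hash.wrapping_mul(0x0000_0100_0000_01b3); // FNV prime
    }
    u128::from(hash)
}
```

Identical states must hash identically on every peer, so the hash input must be the serialized bytes, never addresses or iteration order of unordered containers.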
LoadGameState Contract¶
Pre: State was previously saved at frame
Post (after handling):
- `if let Some(state) = cell.load() { game_state = state; }`
- Game state restored to frame `frame`
User Responsibility:
- Replace entire game state with loaded state
- Subsequent `AdvanceFrame` requests will resimulate
AdvanceFrame Contract¶
Pre: Game state is at the correct frame
Post (after handling):
- Game state advanced by one frame
- `game_state.frame += 1` (or equivalent)
User Responsibility:
- Apply all inputs to game state deterministically
- Handle `InputStatus::Disconnected` appropriately
- Increment frame counter
Error Catalog¶
Legacy Variants¶
| Error | Cause | Recovery |
|---|---|---|
| `InvalidRequest { info }` | Invalid operation/parameter (legacy) | Check `info` message, fix call |
| `InvalidPlayerHandle { handle, max_handle }` | Handle out of range or wrong type | Use valid handle |
| `InvalidFrame { frame, reason }` | Frame out of valid range (legacy) | Check frame bounds |
| `NotSynchronized` | Operation requires Running state | Wait for sync or call poll |
| `MissingInput { player_handle, frame }` | Confirmed input not available | Internal error, report bug |
| `PredictionThreshold` | Prediction window exceeded | Wait before adding more input |
Structured Variants (Preferred)¶
| Error | Cause | Recovery |
|---|---|---|
| `InvalidRequestStructured { kind }` | Invalid operation with structured reason | Match on `InvalidRequestKind` variants |
| `InvalidFrameStructured { frame, reason }` | Frame invalid with structured reason | Match on `InvalidFrameReason` variants |
| `InternalErrorStructured { kind }` | Library bug with structured context | Report bug with error details |
| `SerializationErrorStructured { kind }` | Serialization failure | Check input data format |
| `FrameArithmeticOverflow { frame, operand, operation }` | Frame arithmetic overflow | Check frame bounds |
Selected InvalidRequestKind Variants — Runtime Input Delay and Peer Removal¶
| Variant | Source API | Cause | Recovery |
|---|---|---|---|
| `InputDelayDecreaseUnsupported { current, requested }` | `P2PSession::set_input_delay` | `requested < current` after inputs have been added | Mid-session decreases are not supported; carry the lower delay over to the next session |
| `InputDelayMidSessionMultiLocalUnsupported { local_players }` | `P2PSession::set_input_delay` | Mid-session increase attempted with more than one local player on this peer | Set the delay before adding inputs (typically via `SessionBuilder::with_input_delay`) when running multi-local |
| `InputDelayMidSessionPendingOutputFull { delta, capacity }` | `P2PSession::set_input_delay` | Mid-session increase would enqueue `delta` gap-fill frames, exceeding remote `pending_output_limit` capacity | Apply the change in smaller increments, or wait for the remote to acknowledge outstanding inputs and retry |
| `PlayerAlreadyRemoved { handle }` | `P2PSession::remove_player` | `remove_player` called when the handle is already marked disconnected — either by a previous `remove_player` call, by auto-removal via `ContinueWithout`, or by a previous explicit `disconnect_player` call | Treat as a no-op; the peer is already in the graceful-drop terminal state |
| `NotLocalPlayer { handle }` (pre-existing variant) | `P2PSession::set_input_delay` / `P2PSession::input_delay` | `handle` is not registered as a local player (it may be a remote player, spectator, or unregistered) | Pass a registered local player handle (use `SessionBuilder::add_player(PlayerType::Local, ..)` to register one) |
Selected InternalErrorKind Variants — Runtime Input Delay¶
| Variant | Source API | Cause | Recovery |
|---|---|---|---|
| `InputQueueGapFillFailed { frame }` | `P2PSession::set_input_delay` | Mid-session gap-fill replication failed an internal invariant at `frame` | Report as a library bug with the failing frame and the call's parameters |
Event Catalog¶
`FortressEvent<T>` is not `#[non_exhaustive]`. Adding new variants is a breaking change for exhaustive matches; recent additions are listed below.
Selected FortressEvent Variants — Disconnect, Graceful Drop, and Input Delay¶
| Variant | When emitted | Coexisting events |
|---|---|---|
| `PeerDropped { handle, addr }` | Auto-removal under `DisconnectBehavior::ContinueWithout` after a disconnect timeout, or explicit `P2PSession::remove_player` call | One event per non-spectator handle at the dropped address; followed by exactly one `Disconnected { addr }` after all `PeerDropped` for the same address in the same batch |
| `Disconnected { addr }` | Always emitted on peer drop (legacy event); under `Halt` it appears alone, under graceful drop it appears once per address after that address's `PeerDropped` events | Optionally preceded by one or more `PeerDropped { handle, addr }` (graceful drop, one per handle at the dropped address) |
| `InputDelayRecommendation { player_handle, current_delay, suggested_delay }` | Reserved for application-level heuristics or future automatic emitters. No built-in emitter currently produces this event. | None |
Cross-Cutting Invariants¶
These invariants are preserved across ALL public API calls:
- INV-3 (Input Immutability): Confirmed inputs never change
- INV-4 (Queue Bounds): `0 ≤ queue.length ≤ 128`
- INV-5 (Index Validity): `head, tail ∈ [0, 128)`
- INV-11 (No Panics): All errors are `Result::Err`, never panic
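INV-4 and INV-5 together describe a bounded ring buffer: indices advance modulo the queue capacity, so they can never leave `[0, 128)`. A minimal sketch of that index discipline (the capacity of 128 is taken from INV-4; the function name is illustrative, not the library's):

```rust
/// Queue capacity from INV-4.
const QUEUE_LEN: usize = 128;

/// Advances a ring-buffer index one slot. Modular arithmetic keeps the
/// result in `[0, QUEUE_LEN)` for any starting index, which is exactly
/// the bound INV-5 requires of `head` and `tail`.
fn advance_index(i: usize) -> usize {
    (i + 1) % QUEUE_LEN
}
```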
Revision History¶
| Version | Date | Changes |
|---|---|---|
| 1.1 | 2026-05-07 | Added contracts for runtime input delay (P2PSession::set_input_delay, P2PSession::input_delay), configurable disconnect behavior (SessionBuilder::with_disconnect_behavior, P2PSession::disconnect_behavior), and explicit graceful peer removal (P2PSession::remove_player). Documented new InvalidRequestKind/InternalErrorKind variants and the new FortressEvent::PeerDropped and FortressEvent::InputDelayRecommendation events. Added Event Catalog. |
| 1.0 | 2025-12-06 | Complete API contracts |
| 0.1 | 2025-12-06 | Initial draft |