Package jmcore

jmcore - Core library for JoinMarket components

Provides shared functionality for protocol, crypto, and networking.

Sub-modules

jmcore.bitcoin

Bitcoin utilities for JoinMarket …

jmcore.bond_calc

Fidelity bond value calculations.

jmcore.btc_script

Bitcoin script utilities for fidelity bonds …

jmcore.cli_common

Common CLI components for JoinMarket NG …

jmcore.commitment_blacklist

PoDLE commitment blacklist for preventing commitment reuse …

jmcore.config

Base configuration classes for JoinMarket components …

jmcore.confirmation

User confirmation prompts for fund-moving operations.

jmcore.constants

Bitcoin and JoinMarket protocol constants …

jmcore.crypto

Cryptographic primitives for JoinMarket.

jmcore.deduplication

Message deduplication for multi-directory connections …

jmcore.directory_client

Shared DirectoryClient for connecting to JoinMarket directory nodes …

jmcore.encryption

End-to-end encryption wrapper using NaCl public-key authenticated encryption …

jmcore.mempool_api

Mempool.space API client for Bitcoin blockchain queries.

jmcore.models

Core data models using Pydantic for validation and serialization.

jmcore.network

Network primitives and connection management.

jmcore.nick_tracker

Multi-directory aware nick tracking …

jmcore.notifications

Notification system for JoinMarket components …

jmcore.paths

Shared path utilities for JoinMarket data directories …

jmcore.podle

Proof of Discrete Log Equivalence (PoDLE) for JoinMarket …

jmcore.protocol

JoinMarket protocol definitions, message types, and serialization …

jmcore.rate_limiter

Per-peer rate limiting using token bucket algorithm …

jmcore.settings

Unified settings management for JoinMarket components …

jmcore.timenumber

Timenumber utilities for fidelity bond locktimes …

jmcore.tor_control

Tor control port client for creating ephemeral hidden services …

jmcore.version

Centralized version management for JoinMarket NG …

Functions

def add_commitment(commitment: str, persist: bool = True) ‑> bool
Expand source code
def add_commitment(commitment: str, persist: bool = True) -> bool:
    """
    Add a commitment to the global blacklist.

    Convenience function that uses the global blacklist.

    Args:
        commitment: The commitment hash (hex string)
        persist: If True, save to disk immediately

    Returns:
        True if the commitment was newly added, False if already present
    """
    return get_blacklist().add(commitment, persist=persist)

Add a commitment to the global blacklist.

Convenience function that uses the global blacklist.

Args

commitment
The commitment hash (hex string)
persist
If True, save to disk immediately

Returns

True if the commitment was newly added, False if already present

def address_to_scriptpubkey(address: str) ‑> bytes
Expand source code
def address_to_scriptpubkey(address: str) -> bytes:
    """
    Convert Bitcoin address to scriptPubKey.

    Supports:
    - P2WPKH (bc1q..., tb1q..., bcrt1q...)
    - P2WSH (bc1q... 62 chars)
    - P2TR (bc1p... taproot)
    - P2PKH (1..., m..., n...)
    - P2SH (3..., 2...)

    Args:
        address: Bitcoin address string

    Returns:
        scriptPubKey bytes
    """
    # Bech32 (SegWit) addresses
    if address.startswith(("bc1", "tb1", "bcrt1")):
        hrp_end = 4 if address.startswith("bcrt") else 2
        hrp = address[:hrp_end]

        bech32_decoded = bech32_lib.decode(hrp, address)
        if bech32_decoded[0] is None or bech32_decoded[1] is None:
            raise ValueError(f"Invalid bech32 address: {address}")

        witver = bech32_decoded[0]
        witprog = bytes(bech32_decoded[1])

        if witver == 0:
            if len(witprog) == 20:
                # P2WPKH: OP_0 <20-byte-pubkeyhash>
                return bytes([0x00, 0x14]) + witprog
            elif len(witprog) == 32:
                # P2WSH: OP_0 <32-byte-scripthash>
                return bytes([0x00, 0x20]) + witprog
        elif witver == 1 and len(witprog) == 32:
            # P2TR: OP_1 <32-byte-pubkey>
            return bytes([0x51, 0x20]) + witprog

        raise ValueError(f"Unsupported witness version: {witver}")

    # Base58 addresses (legacy)
    decoded = base58.b58decode_check(address)
    version = decoded[0]
    payload = decoded[1:]

    if version in (0x00, 0x6F):  # Mainnet/Testnet P2PKH
        # P2PKH: OP_DUP OP_HASH160 <20-byte-pubkeyhash> OP_EQUALVERIFY OP_CHECKSIG
        return bytes([0x76, 0xA9, 0x14]) + payload + bytes([0x88, 0xAC])
    elif version in (0x05, 0xC4):  # Mainnet/Testnet P2SH
        # P2SH: OP_HASH160 <20-byte-scripthash> OP_EQUAL
        return bytes([0xA9, 0x14]) + payload + bytes([0x87])

    raise ValueError(f"Unknown address version: {version}")

Convert Bitcoin address to scriptPubKey.

Supports:

- P2WPKH (bc1q…, tb1q…, bcrt1q…)
- P2WSH (bc1q… 62 chars)
- P2TR (bc1p… taproot)
- P2PKH (1…, m…, n…)
- P2SH (3…, 2…)

Args

address
Bitcoin address string

Returns

scriptPubKey bytes
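
Example (not part of the package source): a minimal sketch that assumes the function is importable from the top-level jmcore package as listed on this page; the address is the published P2WPKH example from BIP173.

from jmcore import address_to_scriptpubkey

# Native SegWit (P2WPKH) example address from BIP173
spk = address_to_scriptpubkey("bc1qw508d6qejxtdg4y5r3zarvary0c5xw7kv8f3t4")
assert spk[:2] == bytes([0x00, 0x14]) and len(spk) == 22   # OP_0 <20-byte key hash>

# Legacy base58 addresses (1..., 3..., m/n..., 2...) yield the matching P2PKH/P2SH
# script; an unrecognised witness version or version byte raises ValueError.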

def check_and_add_commitment(commitment: str, persist: bool = True) ‑> bool
Expand source code
def check_and_add_commitment(commitment: str, persist: bool = True) -> bool:
    """
    Check if a commitment is allowed and add it to the blacklist.

    Convenience function that uses the global blacklist.
    This is the primary function to use during CoinJoin processing.

    Args:
        commitment: The commitment hash (hex string)
        persist: If True, save to disk immediately after adding

    Returns:
        True if the commitment is NEW (allowed), False if already blacklisted
    """
    return get_blacklist().check_and_add(commitment, persist=persist)

Check if a commitment is allowed and add it to the blacklist.

Convenience function that uses the global blacklist. This is the primary function to use during CoinJoin processing.

Args

commitment
The commitment hash (hex string)
persist
If True, save to disk immediately after adding

Returns

True if the commitment is NEW (allowed), False if already blacklisted

def check_commitment(commitment: str) ‑> bool
Expand source code
def check_commitment(commitment: str) -> bool:
    """
    Check if a commitment is allowed (not blacklisted).

    Convenience function that uses the global blacklist.

    Args:
        commitment: The commitment hash (hex string)

    Returns:
        True if the commitment is allowed, False if blacklisted
    """
    return not get_blacklist().is_blacklisted(commitment)

Check if a commitment is allowed (not blacklisted).

Convenience function that uses the global blacklist.

Args

commitment
The commitment hash (hex string)

Returns

True if the commitment is allowed, False if blacklisted
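
Example (not part of the package source): how the three convenience helpers fit together during CoinJoin processing; the commitment values are made-up placeholders and the names are assumed to be importable from the top-level jmcore package.

from jmcore import add_commitment, check_and_add_commitment, check_commitment

commitment = "ab" * 32   # placeholder 64-char hex commitment hash

if check_and_add_commitment(commitment):
    pass   # new commitment: proceed with the CoinJoin negotiation
else:
    pass   # already seen: reject to prevent commitment reuse

assert check_commitment(commitment) is False   # now blacklisted
add_commitment("cd" * 32, persist=False)       # record without an immediate disk write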

def convert_settings_to_notification_config(settings: JoinMarketSettings, component_name: str = '') ‑> NotificationConfig
Expand source code
def convert_settings_to_notification_config(
    settings: JoinMarketSettings,
    component_name: str = "",
) -> NotificationConfig:
    """
    Convert NotificationSettings from JoinMarketSettings to NotificationConfig.

    This allows the notification system to use the unified settings system
    (config file + env vars + CLI args) instead of only environment variables.

    Args:
        settings: JoinMarketSettings instance with notification configuration
        component_name: Optional component name to include in notification titles.
            If provided, overrides settings.notifications.component_name.
            Examples: "Maker", "Taker", "Directory", "Orderbook Watcher"

    Returns:
        NotificationConfig suitable for use with Notifier
    """
    ns = settings.notifications

    # Convert URL strings to SecretStr
    urls = [SecretStr(url) for url in ns.urls]

    # Notifications are enabled if URLs are provided (auto-enable) or explicitly enabled
    # The enabled flag is primarily for explicit control when URLs are managed elsewhere
    enabled = bool(ns.urls) or ns.enabled

    # Use provided component_name or fall back to settings
    effective_component_name = component_name or ns.component_name

    return NotificationConfig(
        enabled=enabled,
        urls=urls,
        title_prefix=ns.title_prefix,
        component_name=effective_component_name,
        include_amounts=ns.include_amounts,
        include_txids=ns.include_txids,
        include_nick=ns.include_nick,
        use_tor=ns.use_tor,
        tor_socks_host=settings.tor.socks_host,
        tor_socks_port=settings.tor.socks_port,
        notify_fill=ns.notify_fill,
        notify_rejection=ns.notify_rejection,
        notify_signing=ns.notify_signing,
        notify_mempool=ns.notify_mempool,
        notify_confirmed=ns.notify_confirmed,
        notify_nick_change=ns.notify_nick_change,
        notify_disconnect=ns.notify_disconnect,
        notify_coinjoin_start=ns.notify_coinjoin_start,
        notify_coinjoin_complete=ns.notify_coinjoin_complete,
        notify_coinjoin_failed=ns.notify_coinjoin_failed,
        notify_peer_events=ns.notify_peer_events,
        notify_rate_limit=ns.notify_rate_limit,
        notify_startup=ns.notify_startup,
    )

Convert NotificationSettings from JoinMarketSettings to NotificationConfig.

This allows the notification system to use the unified settings system (config file + env vars + CLI args) instead of only environment variables.

Args

settings
JoinMarketSettings instance with notification configuration
component_name
Optional component name to include in notification titles. If provided, overrides settings.notifications.component_name. Examples: "Maker", "Taker", "Directory", "Orderbook Watcher"

Returns

NotificationConfig suitable for use with Notifier

def create_p2wpkh_script_code(pubkey: bytes | str) ‑> bytes
Expand source code
def create_p2wpkh_script_code(pubkey: bytes | str) -> bytes:
    """
    Create scriptCode for P2WPKH signing (BIP143).

    For P2WPKH, the scriptCode is the P2PKH script:
    OP_DUP OP_HASH160 <20-byte-pubkeyhash> OP_EQUALVERIFY OP_CHECKSIG

    Args:
        pubkey: Public key bytes or hex

    Returns:
        25-byte scriptCode
    """
    if isinstance(pubkey, str):
        pubkey = bytes.fromhex(pubkey)

    pubkey_hash = hash160(pubkey)
    # OP_DUP OP_HASH160 PUSH20 <pkh> OP_EQUALVERIFY OP_CHECKSIG
    return b"\x76\xa9\x14" + pubkey_hash + b"\x88\xac"

Create scriptCode for P2WPKH signing (BIP143).

For P2WPKH, the scriptCode is the P2PKH script: OP_DUP OP_HASH160 <20-byte-pubkeyhash> OP_EQUALVERIFY OP_CHECKSIG

Args

pubkey
Public key bytes or hex

Returns

25-byte scriptCode
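
Example (not part of the package source): building the BIP143 scriptCode for the compressed pubkey used in the BIP173 examples (the secp256k1 generator point) and checking that it carries the same HASH160 as the P2WPKH output script; hash160 and pubkey_to_p2wpkh_script are documented further down this page.

from jmcore import create_p2wpkh_script_code, hash160, pubkey_to_p2wpkh_script

pubkey = "0279be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798"

script_code = create_p2wpkh_script_code(pubkey)
assert len(script_code) == 25
# The same 20-byte pubkey hash appears in the P2WPKH scriptPubKey (OP_0 <hash>)
assert script_code[3:23] == hash160(bytes.fromhex(pubkey))
assert script_code[3:23] == pubkey_to_p2wpkh_script(pubkey)[2:]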

def decode_varint(data: bytes, offset: int = 0) ‑> tuple[int, int]
Expand source code
def decode_varint(data: bytes, offset: int = 0) -> tuple[int, int]:
    """
    Decode Bitcoin varint from bytes.

    Args:
        data: Input bytes
        offset: Starting offset in data

    Returns:
        (value, new_offset) tuple
    """
    first = data[offset]
    if first < 0xFD:
        return first, offset + 1
    elif first == 0xFD:
        return struct.unpack("<H", data[offset + 1 : offset + 3])[0], offset + 3
    elif first == 0xFE:
        return struct.unpack("<I", data[offset + 1 : offset + 5])[0], offset + 5
    else:
        return struct.unpack("<Q", data[offset + 1 : offset + 9])[0], offset + 9

Decode Bitcoin varint from bytes.

Args

data
Input bytes
offset
Starting offset in data

Returns

(value, new_offset) tuple

def deserialize_revelation(revelation_str: str) ‑> dict[str, typing.Any] | None
Expand source code
def deserialize_revelation(revelation_str: str) -> dict[str, Any] | None:
    """
    Deserialize PoDLE revelation from wire format.

    Format: P|P2|sig|e|utxo (pipe-separated hex strings)
    """
    try:
        parts = revelation_str.split("|")
        if len(parts) != 5:
            logger.warning(f"Invalid revelation format: expected 5 parts, got {len(parts)}")
            return None

        return {
            "P": parts[0],
            "P2": parts[1],
            "sig": parts[2],
            "e": parts[3],
            "utxo": parts[4],
        }

    except Exception as e:
        logger.error(f"Failed to deserialize PoDLE revelation: {e}")
        return None

Deserialize PoDLE revelation from wire format.

Format: P|P2|sig|e|utxo (pipe-separated hex strings)
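
Example (not part of the package source): splitting a hypothetical wire string back into its named fields; all hex values are placeholders, and malformed input is logged and yields None.

from jmcore import deserialize_revelation

# Hypothetical revelation: P|P2|sig|e|utxo with placeholder hex fields
wire = "|".join(["02" + "11" * 32, "03" + "22" * 32, "33" * 32, "44" * 32, "aa" * 32 + ":0"])

rev = deserialize_revelation(wire)
assert rev is not None and rev["utxo"] == "aa" * 32 + ":0"

assert deserialize_revelation("only|four|fields|here") is None   # wrong arity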

def encode_varint(n: int) ‑> bytes
Expand source code
def encode_varint(n: int) -> bytes:
    """
    Encode integer as Bitcoin varint.

    Args:
        n: Integer to encode

    Returns:
        Encoded bytes
    """
    if n < 0xFD:
        return bytes([n])
    elif n <= 0xFFFF:
        return bytes([0xFD]) + struct.pack("<H", n)
    elif n <= 0xFFFFFFFF:
        return bytes([0xFE]) + struct.pack("<I", n)
    else:
        return bytes([0xFF]) + struct.pack("<Q", n)

Encode integer as Bitcoin varint.

Args

n
Integer to encode

Returns

Encoded bytes
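
Example (not part of the package source): the two varint helpers round-trip across each width boundary, and the offset returned by decode_varint supports reading consecutive fields out of one buffer.

from jmcore import decode_varint, encode_varint

for n in (0, 252, 253, 0xFFFF, 0x10000, 0xFFFFFFFF, 0x1_0000_0000):
    encoded = encode_varint(n)
    value, end = decode_varint(encoded)
    assert value == n and end == len(encoded)

# Reading consecutive varints from one buffer via the returned offset
buf = encode_varint(2) + encode_varint(300)
count, offset = decode_varint(buf)          # -> (2, 1)
size, offset = decode_varint(buf, offset)   # -> (300, 4)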

def ensure_config_file(data_dir: Path | None = None) ‑> pathlib.Path
Expand source code
def ensure_config_file(data_dir: Path | None = None) -> Path:
    """
    Ensure the config file exists, creating a template if it doesn't.

    Args:
        data_dir: Optional data directory path. Uses default if not provided.

    Returns:
        Path to the config file.
    """
    if data_dir is None:
        data_dir = get_default_data_dir()

    config_path = data_dir / "config.toml"

    if not config_path.exists():
        logger.info(f"Creating config file template at {config_path}")
        data_dir.mkdir(parents=True, exist_ok=True)
        config_path.write_text(generate_config_template())

    return config_path

Ensure the config file exists, creating a template if it doesn't.

Args

data_dir
Optional data directory path. Uses default if not provided.

Returns

Path to the config file.

def format_locktime_date(locktime: int) ‑> str
Expand source code
def format_locktime_date(locktime: int) -> str:
    """
    Format a locktime timestamp as a human-readable date.

    Args:
        locktime: Unix timestamp

    Returns:
        Date string in YYYY-MM-DD format
    """
    dt = datetime.fromtimestamp(locktime, tz=UTC)
    return dt.strftime("%Y-%m-%d")

Format a locktime timestamp as a human-readable date.

Args

locktime
Unix timestamp

Returns

Date string in YYYY-MM-DD format

def format_utxo_list(utxos: list[UTXOMetadata], extended: bool = False) ‑> str
Expand source code
def format_utxo_list(utxos: list[UTXOMetadata], extended: bool = False) -> str:
    """
    Format a list of UTXOs as comma-separated string.

    Args:
        utxos: List of UTXOMetadata objects
        extended: If True, use extended format with scriptpubkey:blockheight

    Returns:
        Comma-separated UTXO string
    """
    if extended:
        return ",".join(u.to_extended_str() for u in utxos)
    else:
        return ",".join(u.to_legacy_str() for u in utxos)

Format a list of UTXOs as comma-separated string.

Args

utxos
List of UTXOMetadata objects
extended
If True, use extended format with scriptpubkey:blockheight

Returns

Comma-separated UTXO string

def generate_config_template() ‑> str
Expand source code
def generate_config_template() -> str:
    """
    Generate a config file template with all settings commented out.

    This allows users to see all available settings with their defaults
    and descriptions, while only uncommenting what they want to change.
    """
    lines: list[str] = []

    lines.append("# JoinMarket NG Configuration")
    lines.append("#")
    lines.append("# This file contains all available settings with their default values.")
    lines.append("# Settings are commented out by default - uncomment to override.")
    lines.append("#")
    lines.append("# Priority (highest to lowest):")
    lines.append("#   1. CLI arguments")
    lines.append("#   2. Environment variables")
    lines.append("#   3. This config file")
    lines.append("#   4. Built-in defaults")
    lines.append("#")
    lines.append("# Environment variables use uppercase with double underscore for nesting:")
    lines.append("#   TOR__SOCKS_HOST=127.0.0.1")
    lines.append("#   BITCOIN__RPC_URL=http://localhost:8332")
    lines.append("#")
    lines.append("")

    # Generate sections for each nested model
    def add_section(title: str, model_cls: type[BaseModel], prefix: str = "") -> None:
        lines.append(f"# {'=' * 60}")
        lines.append(f"# {title}")
        lines.append(f"# {'=' * 60}")
        lines.append(f"[{prefix}]" if prefix else "")
        lines.append("")

        for field_name, field_info in model_cls.model_fields.items():
            # Get description
            desc = field_info.description or ""
            if desc:
                lines.append(f"# {desc}")

            # Get default value
            default = field_info.default
            factory = field_info.default_factory
            if factory is not None:
                # default_factory can be Callable[[], Any] or Callable[[dict], Any]
                # We call with no args for the common case
                try:
                    default = factory()  # type: ignore[call-arg]
                except TypeError:
                    default = factory({})  # type: ignore[call-arg]

            # Format the value for TOML
            if isinstance(default, bool):
                value_str = str(default).lower()
            elif isinstance(default, str):
                value_str = f'"{default}"'
            elif isinstance(default, list):
                # For directory_servers, show example from defaults
                if field_name == "directory_servers" and prefix == "network_config":
                    lines.append("# directory_servers = [")
                    for server in DEFAULT_DIRECTORY_SERVERS["mainnet"]:
                        lines.append(f'#   "{server}",')
                    lines.append("# ]")
                    lines.append("")
                    continue
                value_str = "[]" if not default else str(default).replace("'", '"')
            elif isinstance(default, SecretStr):
                value_str = '""'
            elif default is None:
                # Skip None values with a comment
                lines.append(f"# {field_name} = ")
                lines.append("")
                continue
            elif hasattr(default, "value"):  # Enum - use string value
                value_str = f'"{default.value}"'
            else:
                value_str = str(default)

            lines.append(f"# {field_name} = {value_str}")
            lines.append("")

    # Data directory (top-level)
    lines.append("# Data directory for JoinMarket files")
    lines.append("# Defaults to ~/.joinmarket-ng or $JOINMARKET_DATA_DIR")
    lines.append("# data_dir = ")
    lines.append("")

    # Add all sections
    add_section("Tor Settings", TorSettings, "tor")
    add_section("Bitcoin Backend Settings", BitcoinSettings, "bitcoin")
    add_section("Network Settings", NetworkSettings, "network_config")
    add_section("Wallet Settings", WalletSettings, "wallet")
    add_section("Notification Settings", NotificationSettings, "notifications")
    add_section("Logging Settings", LoggingSettings, "logging")
    add_section("Maker Settings", MakerSettings, "maker")
    add_section("Taker Settings", TakerSettings, "taker")
    add_section("Directory Server Settings", DirectoryServerSettings, "directory_server")
    add_section("Orderbook Watcher Settings", OrderbookWatcherSettings, "orderbook_watcher")

    return "\n".join(lines)

Generate a config file template with all settings commented out.

This allows users to see all available settings with their defaults and descriptions, while only uncommenting what they want to change.
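
Example (not part of the package source): generating the commented-out template and materialising it under a throwaway data directory; the /tmp path is purely illustrative and the names are assumed to be importable from the top-level jmcore package.

from pathlib import Path
from jmcore import ensure_config_file, generate_config_template

template = generate_config_template()
assert template.startswith("# JoinMarket NG Configuration")

# Writes config.toml only if it does not exist yet, then returns its path
cfg_path = ensure_config_file(Path("/tmp/jm-demo"))
print(cfg_path)   # /tmp/jm-demo/config.toml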

def generate_podle(private_key_bytes: bytes, utxo_str: str, index: int = 0) ‑> PoDLECommitment
Expand source code
def generate_podle(
    private_key_bytes: bytes,
    utxo_str: str,
    index: int = 0,
) -> PoDLECommitment:
    """
    Generate a PoDLE commitment for a UTXO.

    The PoDLE proves that the taker owns the UTXO without revealing
    the private key. It creates a zero-knowledge proof that:
    P = k*G and P2 = k*J have the same discrete log k.

    Args:
        private_key_bytes: 32-byte private key
        utxo_str: UTXO reference as "txid:vout"
        index: NUMS point index (0-9)

    Returns:
        PoDLECommitment with all proof data
    """
    if len(private_key_bytes) != 32:
        raise PoDLEError(f"Invalid private key length: {len(private_key_bytes)}")

    if index not in PRECOMPUTED_NUMS:
        raise PoDLEError(f"Invalid NUMS index: {index}")

    # Get private key as integer
    k = int.from_bytes(private_key_bytes, "big")
    if k == 0 or k >= SECP256K1_N:
        raise PoDLEError("Invalid private key value")

    # Calculate P = k*G (standard public key)
    p_point = scalar_mult_g(k)
    p_bytes = point_to_bytes(p_point)

    # Get NUMS point J
    j_point = get_nums_point(index)

    # Calculate P2 = k*J
    p2_point = point_mult(k, j_point)
    p2_bytes = point_to_bytes(p2_point)

    # Generate commitment C = H(P2)
    commitment = hashlib.sha256(p2_bytes).digest()

    # Generate Schnorr-like proof
    # Choose random nonce k_proof
    k_proof = int.from_bytes(secrets.token_bytes(32), "big") % SECP256K1_N
    if k_proof == 0:
        k_proof = 1

    # Kg = k_proof * G
    kg_point = scalar_mult_g(k_proof)
    kg_bytes = point_to_bytes(kg_point)

    # Kj = k_proof * J
    kj_point = point_mult(k_proof, j_point)
    kj_bytes = point_to_bytes(kj_point)

    # Challenge e = H(Kg || Kj || P || P2)
    e_bytes = hashlib.sha256(kg_bytes + kj_bytes + p_bytes + p2_bytes).digest()
    e = int.from_bytes(e_bytes, "big") % SECP256K1_N

    # Response s = k_proof + e * k (mod n) - JAM compatible
    s = (k_proof + e * k) % SECP256K1_N
    s_bytes = s.to_bytes(32, "big")

    logger.debug(
        f"Generated PoDLE for {utxo_str} using NUMS index {index}, "
        f"commitment={commitment.hex()[:16]}..."
    )

    return PoDLECommitment(
        commitment=commitment,
        p=p_bytes,
        p2=p2_bytes,
        sig=s_bytes,
        e=e_bytes,
        utxo=utxo_str,
        index=index,
    )

Generate a PoDLE commitment for a UTXO.

The PoDLE proves that the taker owns the UTXO without revealing the private key. It creates a zero-knowledge proof that P = k*G and P2 = k*J have the same discrete log k.

Args

private_key_bytes
32-byte private key
utxo_str
UTXO reference as "txid:vout"
index
NUMS point index (0-9)

Returns

PoDLECommitment with all proof data
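
Example (not part of the package source): generating a commitment for a placeholder UTXO and serialising the revelation with serialize_revelation (documented below). Given the relations stated above, a verifier who receives P, P2, s and e can recompute Kg = s*G - e*P and Kj = s*J - e*P2 and check that H(Kg || Kj || P || P2) reproduces e.

import secrets
from jmcore import generate_podle, serialize_revelation

privkey = secrets.token_bytes(32)     # placeholder key; must be a valid scalar
utxo = "aa" * 32 + ":0"               # placeholder txid:vout reference

podle = generate_podle(privkey, utxo, index=0)
commitment_hex = podle.commitment.hex()    # H(P2), shared ahead of the full revelation
wire = serialize_revelation(podle)         # "P|P2|sig|e|utxo" pipe-separated form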

def get_all_locktimes() ‑> list[int]
Expand source code
def get_all_locktimes() -> list[int]:
    """
    Get all valid locktime timestamps for fidelity bonds.

    This generates all 960 possible locktimes from January 2020
    through December 2099.

    Returns:
        List of Unix timestamps (1st of each month, midnight UTC)
    """
    return [timenumber_to_timestamp(i) for i in range(TIMENUMBER_COUNT)]

Get all valid locktime timestamps for fidelity bonds.

This generates all 960 possible locktimes from January 2020 through December 2099.

Returns

List of Unix timestamps (1st of each month, midnight UTC)

def get_all_nick_states(data_dir: Path | str | None = None) ‑> dict[str, str]
Expand source code
def get_all_nick_states(data_dir: Path | str | None = None) -> dict[str, str]:
    """
    Read all component nick state files from the data directory.

    Useful for discovering all running components and their nicks.

    Args:
        data_dir: Optional data directory (defaults to get_default_data_dir())

    Returns:
        Dict mapping component names to their nicks (e.g., {'maker': 'J5XXX', 'taker': 'J5YYY'})
    """
    if data_dir is None:
        data_dir = get_default_data_dir()
    elif isinstance(data_dir, str):
        data_dir = Path(data_dir)

    state_dir = data_dir / "state"
    if not state_dir.exists():
        return {}

    result: dict[str, str] = {}
    for path in state_dir.glob("*.nick"):
        component = path.stem  # e.g., 'maker' from 'maker.nick'
        try:
            nick = path.read_text().strip()
            if nick:
                result[component] = nick
        except OSError:
            continue

    return result

Read all component nick state files from the data directory.

Useful for discovering all running components and their nicks.

Args

data_dir
Optional data directory (defaults to get_default_data_dir())

Returns

Dict mapping component names to their nicks (e.g., {'maker': 'J5XXX', 'taker': 'J5YYY'})
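
Example (not part of the package source): the cross-component protection described above, with placeholder nicks; the names are assumed to be importable from the top-level jmcore package.

from jmcore import get_all_nick_states, read_nick_state

own_maker_nick = read_nick_state(None, "maker")     # None -> default data directory
candidates = ["J5aaaaaaaaa", "J5bbbbbbbbb"]          # placeholder peer nicks
if own_maker_nick:
    candidates = [nick for nick in candidates if nick != own_maker_nick]

print(get_all_nick_states())    # e.g. {'maker': 'J5XXX', 'taker': 'J5YYY'}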

def get_all_timenumbers() ‑> list[int]
Expand source code
def get_all_timenumbers() -> list[int]:
    """
    Get all valid timenumbers (0 to 959).

    Returns:
        List of integers from 0 to TIMENUMBER_COUNT-1
    """
    return list(range(TIMENUMBER_COUNT))

Get all valid timenumbers (0 to 959).

Returns

List of integers from 0 to TIMENUMBER_COUNT-1

def get_blacklist(blacklist_path: Path | None = None, data_dir: Path | None = None) ‑> CommitmentBlacklist
Expand source code
def get_blacklist(
    blacklist_path: Path | None = None, data_dir: Path | None = None
) -> CommitmentBlacklist:
    """
    Get the global commitment blacklist instance.

    Args:
        blacklist_path: Path to the blacklist file. Only used on first call
                       to initialize the singleton.
        data_dir: Data directory for JoinMarket. Only used on first call
                 to initialize the singleton.

    Returns:
        The global CommitmentBlacklist instance
    """
    global _global_blacklist

    with _global_blacklist_lock:
        if _global_blacklist is None:
            _global_blacklist = CommitmentBlacklist(blacklist_path, data_dir)
        return _global_blacklist

Get the global commitment blacklist instance.

Args

blacklist_path
Path to the blacklist file. Only used on first call to initialize the singleton.
data_dir
Data directory for JoinMarket. Only used on first call to initialize the singleton.

Returns

The global CommitmentBlacklist instance

def get_commitment_blacklist_path(data_dir: Path | None = None) ‑> pathlib.Path
Expand source code
def get_commitment_blacklist_path(data_dir: Path | None = None) -> Path:
    """
    Get the path to the commitment blacklist file.

    Args:
        data_dir: Optional data directory (defaults to get_default_data_dir())

    Returns:
        Path to cmtdata/commitmentlist (compatible with reference JoinMarket)
    """
    if data_dir is None:
        data_dir = get_default_data_dir()

    # Use cmtdata/ subdirectory for commitment data (matches reference implementation)
    cmtdata_dir = data_dir / "cmtdata"
    cmtdata_dir.mkdir(parents=True, exist_ok=True)

    return cmtdata_dir / "commitmentlist"

Get the path to the commitment blacklist file.

Args

data_dir
Optional data directory (defaults to get_default_data_dir())

Returns

Path to cmtdata/commitmentlist (compatible with reference JoinMarket)

def get_config_path() ‑> pathlib.Path
Expand source code
def get_config_path() -> Path:
    """Get the path to the config file."""
    data_dir_env = os.environ.get("JOINMARKET_DATA_DIR")
    data_dir = Path(data_dir_env) if data_dir_env else Path.home() / ".joinmarket-ng"
    return data_dir / "config.toml"

Get the path to the config file.

def get_default_data_dir() ‑> pathlib.Path
Expand source code
def get_default_data_dir() -> Path:
    """
    Get the default JoinMarket data directory.

    Returns ~/.joinmarket-ng or $JOINMARKET_DATA_DIR if set.
    Creates the directory if it doesn't exist.

    For compatibility with reference JoinMarket in Docker, users can
    set JOINMARKET_DATA_DIR=/home/jm/.joinmarket-ng to share the same volume.
    """
    env_path = os.getenv("JOINMARKET_DATA_DIR")
    data_dir = Path(env_path) if env_path else Path.home() / ".joinmarket-ng"

    data_dir.mkdir(parents=True, exist_ok=True)
    return data_dir

Get the default JoinMarket data directory.

Returns ~/.joinmarket-ng or $JOINMARKET_DATA_DIR if set. Creates the directory if it doesn't exist.

For compatibility with reference JoinMarket in Docker, users can set JOINMARKET_DATA_DIR=/home/jm/.joinmarket-ng to share the same volume.

def get_default_directory_nodes(network: NetworkType) ‑> list[str]
Expand source code
def get_default_directory_nodes(network: NetworkType) -> list[str]:
    """Get default directory nodes for a given network."""
    if network == NetworkType.MAINNET:
        return DIRECTORY_NODES_MAINNET.copy()
    elif network == NetworkType.SIGNET:
        return DIRECTORY_NODES_SIGNET.copy()
    elif network == NetworkType.TESTNET:
        return DIRECTORY_NODES_TESTNET.copy()
    # Regtest has no default directory nodes - must be configured
    return []

Get default directory nodes for a given network.

def get_future_locktimes(from_time: int | None = None) ‑> list[int]
Expand source code
def get_future_locktimes(from_time: int | None = None) -> list[int]:
    """
    Get all valid locktime timestamps that are in the future.

    Args:
        from_time: Reference timestamp (default: current time)

    Returns:
        List of future locktime timestamps
    """
    if from_time is None:
        from_time = int(datetime.now(UTC).timestamp())

    return [lt for lt in get_all_locktimes() if lt > from_time]

Get all valid locktime timestamps that are in the future.

Args

from_time
Reference timestamp (default: current time)

Returns

List of future locktime timestamps
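
Example (not part of the package source): choosing a fidelity-bond locktime roughly a year ahead and formatting it; format_locktime_date and is_valid_locktime are documented elsewhere on this page.

from jmcore import format_locktime_date, get_future_locktimes, is_valid_locktime

future = get_future_locktimes()                 # 1st-of-month timestamps after "now"
locktime = future[min(11, len(future) - 1)]     # ~12 months out (or the last available)

assert is_valid_locktime(locktime)
print(format_locktime_date(locktime))           # e.g. "2026-07-01"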

def get_hrp(network: str | NetworkType) ‑> str
Expand source code
def get_hrp(network: str | NetworkType) -> str:
    """
    Get bech32 human-readable part for network.

    Args:
        network: Network type (string or enum)

    Returns:
        HRP string (bc, tb, bcrt)
    """
    if isinstance(network, str):
        network = NetworkType(network)
    return HRP_MAP[network]

Get bech32 human-readable part for network.

Args

network
Network type (string or enum)

Returns

HRP string (bc, tb, bcrt)

def get_ignored_makers_path(data_dir: Path | None = None) ‑> pathlib.Path
Expand source code
def get_ignored_makers_path(data_dir: Path | None = None) -> Path:
    """
    Get the path to the ignored makers file (for takers).

    Args:
        data_dir: Optional data directory (defaults to get_default_data_dir())

    Returns:
        Path to ignored_makers.txt
    """
    if data_dir is None:
        data_dir = get_default_data_dir()

    return data_dir / "ignored_makers.txt"

Get the path to the ignored makers file (for takers).

Args

data_dir
Optional data directory (defaults to get_default_data_dir())

Returns

Path to ignored_makers.txt

def get_nearest_valid_locktime(locktime: int, round_up: bool = True) ‑> int
Expand source code
def get_nearest_valid_locktime(locktime: int, round_up: bool = True) -> int:
    """
    Get the nearest valid locktime (1st of month, midnight UTC).

    Args:
        locktime: Any Unix timestamp
        round_up: If True, round to next month; if False, round to previous month

    Returns:
        Valid locktime (1st of month, midnight UTC)

    Example:
        >>> get_nearest_valid_locktime(1577900000)  # Jan 1, 2020, 17:33 UTC
        1580515200  # Feb 1, 2020 (round_up=True)
        >>> get_nearest_valid_locktime(1577900000, round_up=False)
        1577836800  # Jan 1, 2020
    """
    dt = datetime.fromtimestamp(locktime, tz=UTC)

    if round_up:
        # Round to next month if not already 1st at midnight
        if dt.day != 1 or dt.hour != 0 or dt.minute != 0 or dt.second != 0:
            # Move to next month
            if dt.month == 12:
                year = dt.year + 1
                month = 1
            else:
                year = dt.year
                month = dt.month + 1
        else:
            year = dt.year
            month = dt.month
    else:
        # Round to current or previous 1st of month
        year = dt.year
        month = dt.month

    result_dt = datetime(year, month, 1, 0, 0, 0, tzinfo=UTC)
    return int(result_dt.timestamp())

Get the nearest valid locktime (1st of month, midnight UTC).

Args

locktime
Any Unix timestamp
round_up
If True, round to next month; if False, round to previous month

Returns

Valid locktime (1st of month, midnight UTC)

Example

>>> get_nearest_valid_locktime(1577900000)  # Jan 1, 2020, 17:33 UTC
1580515200  # Feb 1, 2020 (round_up=True)
>>> get_nearest_valid_locktime(1577900000, round_up=False)
1577836800  # Jan 1, 2020

def get_nick_state_path(data_dir: Path | str | None = None, component: str = '') ‑> pathlib.Path
Expand source code
def get_nick_state_path(data_dir: Path | str | None = None, component: str = "") -> Path:
    """
    Get the path to a component's nick state file.

    The nick state file stores the current nick of a running component,
    allowing operators to easily identify the nick and enabling cross-component
    protection (e.g., taker excluding own maker nick from peer selection).

    Args:
        data_dir: Optional data directory (defaults to get_default_data_dir())
        component: Component name (e.g., 'maker', 'taker', 'directory', 'orderbook')

    Returns:
        Path to state/<component>.nick (e.g., ~/.joinmarket-ng/state/maker.nick)
    """
    if data_dir is None:
        data_dir = get_default_data_dir()
    elif isinstance(data_dir, str):
        data_dir = Path(data_dir)

    # Use state/ subdirectory to keep state files organized
    state_dir = data_dir / "state"
    state_dir.mkdir(parents=True, exist_ok=True)

    return state_dir / f"{component}.nick"

Get the path to a component's nick state file.

The nick state file stores the current nick of a running component, allowing operators to easily identify the nick and enabling cross-component protection (e.g., taker excluding own maker nick from peer selection).

Args

data_dir
Optional data directory (defaults to get_default_data_dir())
component
Component name (e.g., 'maker', 'taker', 'directory', 'orderbook')

Returns

Path to state/<component>.nick (e.g., ~/.joinmarket-ng/state/maker.nick)

def get_nick_version(nick: str) ‑> int
Expand source code
def get_nick_version(nick: str) -> int:
    """
    Extract protocol version from a JoinMarket nick.

    Nick format: J{version}{hash} where version is a single digit.
    Example: J5abc123... (v5)

    Returns JM_VERSION (5) if version cannot be determined.
    """
    if nick and len(nick) >= 2 and nick[0] == "J" and nick[1].isdigit():
        return int(nick[1])
    return JM_VERSION

Extract protocol version from a JoinMarket nick.

Nick format: J{version}{hash} where version is a single digit. Example: J5abc123… (v5)

Returns JM_VERSION (5) if version cannot be determined.

def get_notifier(settings: JoinMarketSettings | None = None, component_name: str = '') ‑> Notifier
Expand source code
def get_notifier(
    settings: JoinMarketSettings | None = None,
    component_name: str = "",
) -> Notifier:
    """
    Get the global Notifier instance.

    The notifier is lazily initialized on first use. Configuration is loaded
    from JoinMarketSettings if provided, otherwise from environment variables.

    Args:
        settings: Optional JoinMarketSettings instance. If provided, notification
                  configuration will be taken from settings.notifications
                  (which supports config file + env vars + CLI args).
                  If None, falls back to environment variables only (legacy).
        component_name: Component name to include in notification titles.
            Examples: "Maker", "Taker", "Directory", "Orderbook Watcher".
            This makes it easier to identify which component sent a notification
            when running multiple JoinMarket components.

    Returns:
        Notifier instance
    """
    global _notifier
    if _notifier is None:
        if settings is not None:
            config = convert_settings_to_notification_config(settings, component_name)
        else:
            config = load_notification_config()
            # If component_name provided but no settings, update the config
            if component_name:
                config = NotificationConfig(
                    **{**config.model_dump(), "component_name": component_name}
                )
        _notifier = Notifier(config)
    return _notifier

Get the global Notifier instance.

The notifier is lazily initialized on first use. Configuration is loaded from JoinMarketSettings if provided, otherwise from environment variables.

Args

settings
Optional JoinMarketSettings instance. If provided, notification configuration will be taken from settings.notifications (which supports config file + env vars + CLI args). If None, falls back to environment variables only (legacy).
component_name
Component name to include in notification titles. Examples: "Maker", "Taker", "Directory", "Orderbook Watcher". This makes it easier to identify which component sent a notification when running multiple JoinMarket components.

Returns

Notifier instance
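
Example (not part of the package source): obtaining the singleton notifier from the unified settings, tagged with a component name; the names are assumed to be importable from the top-level jmcore package.

from jmcore import get_notifier, get_settings, reset_notifier

notifier = get_notifier(get_settings(), component_name="Maker")
assert get_notifier() is notifier    # lazily created once, then reused

reset_notifier()   # e.g. in tests, so the next call rebuilds the configuration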

def get_settings(**overrides: Any) ‑> JoinMarketSettings
Expand source code
def get_settings(**overrides: Any) -> JoinMarketSettings:
    """
    Get the JoinMarket settings instance.

    On first call, loads settings from all sources. Subsequent calls
    return the cached instance unless reset_settings() is called.

    Args:
        **overrides: Optional settings overrides (highest priority)

    Returns:
        JoinMarketSettings instance
    """
    global _settings
    if _settings is None or overrides:
        _settings = JoinMarketSettings(**overrides)
    return _settings

Get the JoinMarket settings instance.

On first call, loads settings from all sources. Subsequent calls return the cached instance unless reset_settings() is called.

Args

**overrides
Optional settings overrides (highest priority)

Returns

JoinMarketSettings instance
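
Example (not part of the package source): the cached instance is reused until reset_settings() is called or overrides are passed; data_dir is shown as an override only because the config template above lists it as a top-level setting, so treat it as an assumption.

from jmcore import get_settings, reset_settings

settings = get_settings()
assert get_settings() is settings                 # cached after the first call

custom = get_settings(data_dir="/tmp/jm-test")    # overrides rebuild the cached instance
assert get_settings() is custom

reset_settings()                                  # drop the cache (useful in tests)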

def get_txid(tx_hex: str) ‑> str
Expand source code
def get_txid(tx_hex: str) -> str:
    """
    Calculate transaction ID (double SHA256 of non-witness data).

    Args:
        tx_hex: Transaction hex

    Returns:
        Transaction ID as hex string
    """
    parsed = parse_transaction(tx_hex)

    # Serialize without witness for txid calculation
    data = serialize_transaction(
        version=parsed.version,
        inputs=parsed.inputs,
        outputs=parsed.outputs,
        locktime=parsed.locktime,
        witnesses=None,  # No witnesses for txid
    )

    return hash256(data)[::-1].hex()

Calculate transaction ID (double SHA256 of non-witness data).

Args

tx_hex
Transaction hex

Returns

Transaction ID as hex string

def get_version() ‑> str
Expand source code
def get_version() -> str:
    """Return the current version string."""
    return __version__

Return the current version string.

def get_version_info() ‑> dict[str, str | int]
Expand source code
def get_version_info() -> dict[str, str | int]:
    """Return version information as a dictionary."""
    major, minor, patch = get_version_tuple()
    return {
        "version": __version__,
        "major": major,
        "minor": minor,
        "patch": patch,
    }

Return version information as a dictionary.

def get_version_tuple() ‑> tuple[int, int, int]
Expand source code
def get_version_tuple() -> tuple[int, int, int]:
    """Return the version as a tuple of (major, minor, patch)."""
    parts = __version__.split(".")
    return (int(parts[0]), int(parts[1]), int(parts[2]))

Return the version as a tuple of (major, minor, patch).

def hash160(data: bytes) ‑> bytes
Expand source code
def hash160(data: bytes) -> bytes:
    """
    RIPEMD160(SHA256(data)) - Used for Bitcoin addresses.

    Args:
        data: Input data to hash

    Returns:
        20-byte hash
    """
    return hashlib.new("ripemd160", hashlib.sha256(data).digest()).digest()

RIPEMD160(SHA256(data)) - Used for Bitcoin addresses.

Args

data
Input data to hash

Returns

20-byte hash

def hash256(data: bytes) ‑> bytes
Expand source code
def hash256(data: bytes) -> bytes:
    """
    SHA256(SHA256(data)) - Used for Bitcoin txids and block hashes.

    Args:
        data: Input data to hash

    Returns:
        32-byte hash
    """
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

SHA256(SHA256(data)) - Used for Bitcoin txids and block hashes.

Args

data
Input data to hash

Returns

32-byte hash

def is_valid_locktime(locktime: int) ‑> bool
Expand source code
def is_valid_locktime(locktime: int) -> bool:
    """
    Check if a locktime is valid for fidelity bonds.

    A valid locktime is:
    1. Midnight UTC on the 1st of a month
    2. Within the epoch range (Jan 2020 to Dec 2099)

    Args:
        locktime: Unix timestamp to check

    Returns:
        True if valid, False otherwise
    """
    try:
        validate_locktime(locktime)
        timestamp_to_timenumber(locktime)
        return True
    except ValueError:
        return False

Check if a locktime is valid for fidelity bonds.

A valid locktime is:

1. Midnight UTC on the 1st of a month
2. Within the epoch range (Jan 2020 to Dec 2099)

Args

locktime
Unix timestamp to check

Returns

True if valid, False otherwise

def load_notification_config() ‑> NotificationConfig
Expand source code
def load_notification_config() -> NotificationConfig:
    """
    Load notification configuration from the unified settings system.

    This function uses JoinMarketSettings which loads from:
    1. Environment variables (NOTIFICATIONS__*, TOR__*)
    2. Config file (~/.joinmarket-ng/config.toml)
    3. Default values
    """
    from jmcore.settings import JoinMarketSettings

    settings = JoinMarketSettings()
    config = convert_settings_to_notification_config(settings)

    # Log notification configuration status
    if config.enabled:
        logger.info(
            f"Notifications enabled with {len(config.urls)} URL(s), use_tor={config.use_tor}"
        )
    else:
        logger.info("Notifications disabled (no URLs configured)")

    return config

Load notification configuration from the unified settings system.

This function uses JoinMarketSettings which loads from:

1. Environment variables (NOTIFICATIONS__*, TOR__*)
2. Config file (~/.joinmarket-ng/config.toml)
3. Default values

def parse_locktime_date(date_str: str) ‑> int
Expand source code
def parse_locktime_date(date_str: str) -> int:
    """
    Parse a date string to a locktime timestamp.

    Accepts formats:
    - YYYY-MM-DD (must be 1st of month)
    - YYYY-MM (assumes 1st of month)

    Args:
        date_str: Date string in supported format

    Returns:
        Unix timestamp for midnight UTC on the 1st of the month

    Raises:
        ValueError: If format is invalid or date is not 1st of month
    """
    # Try YYYY-MM format first
    if len(date_str) == 7 and date_str[4] == "-":
        try:
            year = int(date_str[:4])
            month = int(date_str[5:7])
            dt = datetime(year, month, 1, 0, 0, 0, tzinfo=UTC)
            locktime = int(dt.timestamp())
            # Validate it's in range
            timestamp_to_timenumber(locktime)
            return locktime
        except (ValueError, IndexError) as e:
            raise ValueError(f"Invalid date format '{date_str}': {e}") from e

    # Try YYYY-MM-DD format
    if len(date_str) == 10 and date_str[4] == "-" and date_str[7] == "-":
        try:
            year = int(date_str[:4])
            month = int(date_str[5:7])
            day = int(date_str[8:10])

            if day != 1:
                raise ValueError(f"Fidelity bond locktime must be 1st of month, got day {day}")

            dt = datetime(year, month, 1, 0, 0, 0, tzinfo=UTC)
            locktime = int(dt.timestamp())
            # Validate it's in range
            timestamp_to_timenumber(locktime)
            return locktime
        except (ValueError, IndexError) as e:
            raise ValueError(f"Invalid date format '{date_str}': {e}") from e

    raise ValueError(f"Invalid date format '{date_str}'. Use YYYY-MM or YYYY-MM-DD (1st of month)")

Parse a date string to a locktime timestamp.

Accepts formats:

- YYYY-MM-DD (must be 1st of month)
- YYYY-MM (assumes 1st of month)

Args

date_str
Date string in supported format

Returns

Unix timestamp for midnight UTC on the 1st of the month

Raises

ValueError
If format is invalid or date is not 1st of month

def parse_podle_revelation(revelation: dict[str, Any]) ‑> dict[str, typing.Any] | None
Expand source code
def parse_podle_revelation(revelation: dict[str, Any]) -> dict[str, Any] | None:
    """
    Parse and validate PoDLE revelation structure.

    Expected format from taker:
    {
        'P': <hex string>,
        'P2': <hex string>,
        'sig': <hex string>,
        'e': <hex string>,
        'utxo': <txid:vout or txid:vout:scriptpubkey:blockheight string>
    }

    Returns parsed structure with bytes, or None if invalid.
    Extended format includes scriptpubkey and blockheight for neutrino_compat feature.
    """
    try:
        required_fields = ["P", "P2", "sig", "e", "utxo"]
        for field in required_fields:
            if field not in revelation:
                logger.warning(f"Missing required field in PoDLE revelation: {field}")
                return None

        p_bytes = bytes.fromhex(revelation["P"])
        p2_bytes = bytes.fromhex(revelation["P2"])
        sig_bytes = bytes.fromhex(revelation["sig"])
        e_bytes = bytes.fromhex(revelation["e"])

        utxo_parts = revelation["utxo"].split(":")

        # Legacy format: txid:vout (2 parts)
        # Extended format: txid:vout:scriptpubkey:blockheight (4 parts)
        if len(utxo_parts) == 2:
            txid = utxo_parts[0]
            vout = int(utxo_parts[1])
            scriptpubkey = None
            blockheight = None
        elif len(utxo_parts) == 4:
            txid = utxo_parts[0]
            vout = int(utxo_parts[1])
            scriptpubkey = utxo_parts[2]
            blockheight = int(utxo_parts[3])
            logger.debug(f"Parsed extended UTXO format: {txid}:{vout} with metadata")
        else:
            logger.warning(f"Invalid UTXO format: {revelation['utxo']}")
            return None

        result: dict[str, Any] = {
            "P": p_bytes,
            "P2": p2_bytes,
            "sig": sig_bytes,
            "e": e_bytes,
            "txid": txid,
            "vout": vout,
        }

        # Add extended metadata if present
        if scriptpubkey is not None:
            result["scriptpubkey"] = scriptpubkey
        if blockheight is not None:
            result["blockheight"] = blockheight

        return result

    except Exception as e:
        logger.error(f"Failed to parse PoDLE revelation: {e}")
        return None

Parse and validate PoDLE revelation structure.

Expected format from taker: {'P': <hex string>, 'P2': <hex string>, 'sig': <hex string>, 'e': <hex string>, 'utxo': <txid:vout or txid:vout:scriptpubkey:blockheight string>}

Returns parsed structure with bytes, or None if invalid. Extended format includes scriptpubkey and blockheight for neutrino_compat feature.
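
Example (not part of the package source): parsing a revelation dict with placeholder hex values and an extended (4-part) UTXO reference.

from jmcore import parse_podle_revelation

revelation = {
    "P": "02" + "11" * 32,
    "P2": "03" + "22" * 32,
    "sig": "33" * 32,
    "e": "44" * 32,
    "utxo": "aa" * 32 + ":1:" + "0014" + "55" * 20 + ":820000",
}

parsed = parse_podle_revelation(revelation)
assert parsed is not None
assert parsed["vout"] == 1 and parsed["blockheight"] == 820000
assert isinstance(parsed["P"], bytes)    # hex fields are decoded to bytes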

def parse_transaction(tx_hex: str) ‑> ParsedTransaction
Expand source code
def parse_transaction(tx_hex: str) -> ParsedTransaction:
    """
    Parse a Bitcoin transaction from hex.

    Handles both SegWit and non-SegWit formats.

    Args:
        tx_hex: Transaction hex string

    Returns:
        ParsedTransaction object
    """
    tx_bytes = bytes.fromhex(tx_hex)
    offset = 0

    # Version
    version = struct.unpack("<I", tx_bytes[offset : offset + 4])[0]
    offset += 4

    # Check for SegWit marker
    marker = tx_bytes[offset]
    flag = tx_bytes[offset + 1]
    has_witness = marker == 0x00 and flag == 0x01
    if has_witness:
        offset += 2

    # Inputs
    input_count, offset = decode_varint(tx_bytes, offset)
    inputs = []
    for _ in range(input_count):
        txid = tx_bytes[offset : offset + 32][::-1].hex()
        offset += 32
        vout = struct.unpack("<I", tx_bytes[offset : offset + 4])[0]
        offset += 4
        script_len, offset = decode_varint(tx_bytes, offset)
        scriptsig = tx_bytes[offset : offset + script_len].hex()
        offset += script_len
        sequence = struct.unpack("<I", tx_bytes[offset : offset + 4])[0]
        offset += 4
        inputs.append({"txid": txid, "vout": vout, "scriptsig": scriptsig, "sequence": sequence})

    # Outputs
    output_count, offset = decode_varint(tx_bytes, offset)
    outputs = []
    for _ in range(output_count):
        value = struct.unpack("<Q", tx_bytes[offset : offset + 8])[0]
        offset += 8
        script_len, offset = decode_varint(tx_bytes, offset)
        scriptpubkey = tx_bytes[offset : offset + script_len].hex()
        offset += script_len
        outputs.append({"value": value, "scriptpubkey": scriptpubkey})

    # Witnesses
    witnesses: list[list[bytes]] = []
    if has_witness:
        for _ in range(input_count):
            wit_count, offset = decode_varint(tx_bytes, offset)
            wit_items = []
            for _ in range(wit_count):
                item_len, offset = decode_varint(tx_bytes, offset)
                wit_items.append(tx_bytes[offset : offset + item_len])
                offset += item_len
            witnesses.append(wit_items)

    # Locktime
    locktime = struct.unpack("<I", tx_bytes[offset : offset + 4])[0]

    return ParsedTransaction(
        version=version,
        inputs=inputs,
        outputs=outputs,
        witnesses=witnesses,
        locktime=locktime,
        has_witness=has_witness,
    )

Parse a Bitcoin transaction from hex.

Handles both SegWit and non-SegWit formats.

Args

tx_hex
Transaction hex string

Returns

ParsedTransaction object
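
Example (not part of the package source): hand-assembling a minimal 1-in/1-out legacy transaction with encode_varint, parsing it back, and computing its txid with get_txid; every field value is a placeholder and the names are assumed to be importable from the top-level jmcore package.

import struct
from jmcore import encode_varint, get_txid, parse_transaction

prev_txid = "aa" * 32                              # placeholder previous txid (RPC order)
raw = (
    struct.pack("<I", 2)                           # version
    + encode_varint(1)                             # input count
    + bytes.fromhex(prev_txid)[::-1]               # outpoint txid, little-endian on the wire
    + struct.pack("<I", 0)                         # outpoint vout
    + encode_varint(0)                             # empty scriptSig
    + struct.pack("<I", 0xFFFFFFFF)                # sequence
    + encode_varint(1)                             # output count
    + struct.pack("<Q", 50_000)                    # value in sats
    + encode_varint(22) + bytes([0x00, 0x14]) + b"\x11" * 20   # P2WPKH scriptPubKey
    + struct.pack("<I", 0)                         # locktime
)

tx = parse_transaction(raw.hex())
assert tx.inputs[0]["txid"] == prev_txid and tx.outputs[0]["value"] == 50_000
print(get_txid(raw.hex()))                         # big-endian (RPC) txid hex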

def parse_utxo_list(utxo_list_str: str, require_metadata: bool = False) ‑> list[UTXOMetadata]
Expand source code
def parse_utxo_list(utxo_list_str: str, require_metadata: bool = False) -> list[UTXOMetadata]:
    """
    Parse a comma-separated list of UTXOs.

    Args:
        utxo_list_str: Comma-separated UTXOs (legacy or extended format)
        require_metadata: If True, raise error if any UTXO lacks Neutrino metadata

    Returns:
        List of UTXOMetadata objects
    """
    if not utxo_list_str:
        return []

    utxos = []
    for utxo_str in utxo_list_str.split(","):
        utxo = UTXOMetadata.from_str(utxo_str.strip())
        if require_metadata and not utxo.has_neutrino_metadata():
            raise ValueError(f"UTXO {utxo.to_legacy_str()} missing Neutrino metadata")
        utxos.append(utxo)
    return utxos

Parse a comma-separated list of UTXOs.

Args

utxo_list_str
Comma-separated UTXOs (legacy or extended format)
require_metadata
If True, raise error if any UTXO lacks Neutrino metadata

Returns

List of UTXOMetadata objects

def peer_supports_neutrino_compat(handshake_data: dict[str, Any]) ‑> bool
Expand source code
def peer_supports_neutrino_compat(handshake_data: dict[str, Any]) -> bool:
    """
    Check if a peer supports Neutrino-compatible UTXO metadata.

    Args:
        handshake_data: Handshake payload from peer

    Returns:
        True if peer advertises neutrino_compat feature
    """
    features = handshake_data.get("features", {})
    return features.get(FEATURE_NEUTRINO_COMPAT, False)

Check if a peer supports Neutrino-compatible UTXO metadata.

Args

handshake_data
Handshake payload from peer

Returns

True if peer advertises neutrino_compat feature

def pubkey_to_p2wpkh_address(pubkey: bytes | str, network: str | NetworkType = 'mainnet') ‑> str
Expand source code
@validate_call
def pubkey_to_p2wpkh_address(pubkey: bytes | str, network: str | NetworkType = "mainnet") -> str:
    """
    Convert compressed public key to P2WPKH (native SegWit) address.

    Args:
        pubkey: 33-byte compressed public key (bytes or hex string)
        network: Network type

    Returns:
        Bech32 P2WPKH address
    """
    if isinstance(pubkey, str):
        pubkey = bytes.fromhex(pubkey)

    if len(pubkey) != 33:
        raise ValueError(f"Invalid compressed pubkey length: {len(pubkey)}")

    pubkey_hash = hash160(pubkey)
    hrp = get_hrp(network)

    result = bech32_lib.encode(hrp, 0, pubkey_hash)
    if result is None:
        raise ValueError("Failed to encode bech32 address")
    return result

Convert compressed public key to P2WPKH (native SegWit) address.

Args

pubkey
33-byte compressed public key (bytes or hex string)
network
Network type

Returns

Bech32 P2WPKH address

def pubkey_to_p2wpkh_script(pubkey: bytes | str) ‑> bytes
Expand source code
def pubkey_to_p2wpkh_script(pubkey: bytes | str) -> bytes:
    """
    Create P2WPKH scriptPubKey from public key.

    Args:
        pubkey: 33-byte compressed public key (bytes or hex string)

    Returns:
        22-byte P2WPKH scriptPubKey (OP_0 <20-byte-hash>)
    """
    if isinstance(pubkey, str):
        pubkey = bytes.fromhex(pubkey)

    pubkey_hash = hash160(pubkey)
    return bytes([0x00, 0x14]) + pubkey_hash

Create P2WPKH scriptPubKey from public key.

Args

pubkey
33-byte compressed public key (bytes or hex string)

Returns

22-byte P2WPKH scriptPubKey (OP_0 <20-byte-hash>)
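
Example (not part of the package source): deriving the script and mainnet address for the compressed pubkey used in the BIP173 examples (the secp256k1 generator point); hash160 is documented above on this page.

from jmcore import hash160, pubkey_to_p2wpkh_address, pubkey_to_p2wpkh_script

pubkey = "0279be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798"

script = pubkey_to_p2wpkh_script(pubkey)
assert script == bytes([0x00, 0x14]) + hash160(bytes.fromhex(pubkey))

print(pubkey_to_p2wpkh_address(pubkey))   # bc1qw508d6qejxtdg4y5r3zarvary0c5xw7kv8f3t4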

def read_nick_state(data_dir: Path | str | None, component: str) ‑> str | None
Expand source code
def read_nick_state(data_dir: Path | str | None, component: str) -> str | None:
    """
    Read a component's nick from its state file.

    Args:
        data_dir: Optional data directory (defaults to get_default_data_dir())
        component: Component name (e.g., 'maker', 'taker', 'directory', 'orderbook')

    Returns:
        The nick string if file exists and is readable, None otherwise
    """
    if data_dir is None:
        data_dir = get_default_data_dir()
    elif isinstance(data_dir, str):
        data_dir = Path(data_dir)

    path = get_nick_state_path(data_dir, component)
    if path.exists():
        try:
            return path.read_text().strip()
        except OSError:
            return None
    return None

Read a component's nick from its state file.

Args

data_dir
Optional data directory (defaults to get_default_data_dir())
component
Component name (e.g., 'maker', 'taker', 'directory', 'orderbook')

Returns

The nick string if file exists and is readable, None otherwise

def remove_nick_state(data_dir: Path | str | None, component: str) ‑> bool
Expand source code
def remove_nick_state(data_dir: Path | str | None, component: str) -> bool:
    """
    Remove a component's nick state file (e.g., on shutdown).

    Args:
        data_dir: Optional data directory (defaults to get_default_data_dir())
        component: Component name (e.g., 'maker', 'taker', 'directory', 'orderbook')

    Returns:
        True if file was removed, False if it didn't exist or removal failed
    """
    if data_dir is None:
        data_dir = get_default_data_dir()
    elif isinstance(data_dir, str):
        data_dir = Path(data_dir)

    path = get_nick_state_path(data_dir, component)
    if path.exists():
        try:
            path.unlink()
            return True
        except OSError:
            return False
    return False

Remove a component's nick state file (e.g., on shutdown).

Args

data_dir
Optional data directory (defaults to get_default_data_dir())
component
Component name (e.g., 'maker', 'taker', 'directory', 'orderbook')

Returns

True if file was removed, False if it didn't exist or removal failed

def reset_notifier() ‑> None
Expand source code
def reset_notifier() -> None:
    """Reset the global notifier (useful for testing)."""
    global _notifier
    _notifier = None

Reset the global notifier (useful for testing).

def reset_settings() ‑> None
Expand source code
def reset_settings() -> None:
    """Reset the global settings instance (useful for testing)."""
    global _settings
    _settings = None

Reset the global settings instance (useful for testing).

def script_to_p2wsh_address(script: bytes, network: str | NetworkType = 'mainnet') ‑> str
Expand source code
@validate_call
def script_to_p2wsh_address(script: bytes, network: str | NetworkType = "mainnet") -> str:
    """
    Convert witness script to P2WSH address.

    Args:
        script: Witness script bytes
        network: Network type

    Returns:
        Bech32 P2WSH address
    """
    script_hash = sha256(script)
    hrp = get_hrp(network)

    result = bech32_lib.encode(hrp, 0, script_hash)
    if result is None:
        raise ValueError("Failed to encode bech32 address")
    return result

Convert witness script to P2WSH address.

Args

script
Witness script bytes
network
Network type

Returns

Bech32 P2WSH address

def script_to_p2wsh_scriptpubkey(script: bytes) ‑> bytes
Expand source code
def script_to_p2wsh_scriptpubkey(script: bytes) -> bytes:
    """
    Create P2WSH scriptPubKey from witness script.

    Args:
        script: Witness script bytes

    Returns:
        34-byte P2WSH scriptPubKey (OP_0 <32-byte-hash>)
    """
    script_hash = sha256(script)
    return bytes([0x00, 0x20]) + script_hash

Create P2WSH scriptPubKey from witness script.

Args

script
Witness script bytes

Returns

34-byte P2WSH scriptPubKey (OP_0 <32-byte-hash>)
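
Example: a short sketch showing the relationship to sha256() (both names assumed to be the package-level exports documented on this page).

    from jmcore import script_to_p2wsh_scriptpubkey, sha256

    witness_script = bytes([0x51])  # OP_TRUE, purely illustrative
    spk = script_to_p2wsh_scriptpubkey(witness_script)
    assert spk[:2] == bytes([0x00, 0x20])     # OP_0 + push of 32 bytes
    assert spk[2:] == sha256(witness_script)  # single SHA256 of the witness script
    assert len(spk) == 34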

def scriptpubkey_to_address(scriptpubkey: bytes, network: str | NetworkType = 'mainnet') ‑> str
Expand source code
@validate_call
def scriptpubkey_to_address(scriptpubkey: bytes, network: str | NetworkType = "mainnet") -> str:
    """
    Convert scriptPubKey to address.

    Supports P2WPKH, P2WSH, P2TR, P2PKH, P2SH.

    Args:
        scriptpubkey: scriptPubKey bytes
        network: Network type

    Returns:
        Bitcoin address string
    """
    if isinstance(network, str):
        network = NetworkType(network)

    hrp = get_hrp(network)

    # P2WPKH
    if len(scriptpubkey) == 22 and scriptpubkey[0] == 0x00 and scriptpubkey[1] == 0x14:
        result = bech32_lib.encode(hrp, 0, scriptpubkey[2:])
        if result is None:
            raise ValueError(f"Failed to encode P2WPKH address: {scriptpubkey.hex()}")
        return result

    # P2WSH
    if len(scriptpubkey) == 34 and scriptpubkey[0] == 0x00 and scriptpubkey[1] == 0x20:
        result = bech32_lib.encode(hrp, 0, scriptpubkey[2:])
        if result is None:
            raise ValueError(f"Failed to encode P2WSH address: {scriptpubkey.hex()}")
        return result

    # P2TR
    if len(scriptpubkey) == 34 and scriptpubkey[0] == 0x51 and scriptpubkey[1] == 0x20:
        result = bech32_lib.encode(hrp, 1, scriptpubkey[2:])
        if result is None:
            raise ValueError(f"Failed to encode P2TR address: {scriptpubkey.hex()}")
        return result

    # P2PKH
    if (
        len(scriptpubkey) == 25
        and scriptpubkey[0] == 0x76
        and scriptpubkey[1] == 0xA9
        and scriptpubkey[2] == 0x14
        and scriptpubkey[23] == 0x88
        and scriptpubkey[24] == 0xAC
    ):
        payload = bytes([P2PKH_VERSION[network]]) + scriptpubkey[3:23]
        return base58.b58encode_check(payload).decode("ascii")

    # P2SH
    if (
        len(scriptpubkey) == 23
        and scriptpubkey[0] == 0xA9
        and scriptpubkey[1] == 0x14
        and scriptpubkey[22] == 0x87
    ):
        payload = bytes([P2SH_VERSION[network]]) + scriptpubkey[2:22]
        return base58.b58encode_check(payload).decode("ascii")

    raise ValueError(f"Unsupported scriptPubKey: {scriptpubkey.hex()}")

Convert scriptPubKey to address.

Supports P2WPKH, P2WSH, P2TR, P2PKH, P2SH.

Args

scriptpubkey
scriptPubKey bytes
network
Network type

Returns

Bitcoin address string
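
Example: decoding the BIP173 P2WPKH test vector (a sketch, assuming the package-level export documented on this page).

    from jmcore import scriptpubkey_to_address

    spk = bytes.fromhex("0014751e76e8199196d454941c45d1b3a323f1433bd6")
    addr = scriptpubkey_to_address(spk, network="mainnet")
    # addr == "bc1qw508d6qejxtdg4y5r3zarvary0c5xw7kv8f3t4"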

def serialize_outpoint(txid: str, vout: int) ‑> bytes
Expand source code
def serialize_outpoint(txid: str, vout: int) -> bytes:
    """
    Serialize outpoint (txid:vout).

    Args:
        txid: Transaction ID in RPC format (big-endian hex)
        vout: Output index

    Returns:
        36-byte outpoint (little-endian txid + 4-byte vout)
    """
    txid_bytes = bytes.fromhex(txid)[::-1]
    return txid_bytes + struct.pack("<I", vout)

Serialize outpoint (txid:vout).

Args

txid
Transaction ID in RPC format (big-endian hex)
vout
Output index

Returns

36-byte outpoint (little-endian txid + 4-byte vout)
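
Example: a sketch with a placeholder txid, showing the byte order of the result.

    from jmcore import serialize_outpoint

    txid = "aa" * 32  # placeholder RPC-format (big-endian) txid
    outpoint = serialize_outpoint(txid, 1)
    assert len(outpoint) == 36
    assert outpoint[:32] == bytes.fromhex(txid)[::-1]  # txid reversed to little-endian
    assert outpoint[32:] == (1).to_bytes(4, "little")  # vout as little-endian uint32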

def serialize_revelation(commitment: PoDLECommitment) ‑> str
Expand source code
def serialize_revelation(commitment: PoDLECommitment) -> str:
    """
    Serialize PoDLE revelation to wire format.

    Format: P|P2|sig|e|utxo (pipe-separated hex strings)
    """
    return "|".join(
        [
            commitment.p.hex(),
            commitment.p2.hex(),
            commitment.sig.hex(),
            commitment.e.hex(),
            commitment.utxo,
        ]
    )

Serialize PoDLE revelation to wire format.

Format: P|P2|sig|e|utxo (pipe-separated hex strings)
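
The resulting string can be split back into its five fields. A minimal sketch (the helper below is illustrative only, not part of jmcore):

    def split_revelation(wire: str) -> dict[str, str]:
        """Split the P|P2|sig|e|utxo wire format into named fields."""
        p, p2, sig, e, utxo = wire.split("|")
        return {"P": p, "P2": p2, "sig": sig, "e": e, "utxo": utxo}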

def serialize_transaction(version: int,
inputs: list[dict[str, Any]],
outputs: list[dict[str, Any]],
locktime: int,
witnesses: list[list[bytes]] | None = None) ‑> bytes
Expand source code
def serialize_transaction(
    version: int,
    inputs: list[dict[str, Any]],
    outputs: list[dict[str, Any]],
    locktime: int,
    witnesses: list[list[bytes]] | None = None,
) -> bytes:
    """
    Serialize a Bitcoin transaction.

    Args:
        version: Transaction version
        inputs: List of input dicts
        outputs: List of output dicts
        locktime: Transaction locktime
        witnesses: Optional list of witness stacks

    Returns:
        Serialized transaction bytes
    """
    has_witness = witnesses is not None and any(w for w in witnesses)

    result = struct.pack("<I", version)

    if has_witness:
        result += bytes([0x00, 0x01])  # SegWit marker and flag

    # Inputs
    result += encode_varint(len(inputs))
    for inp in inputs:
        result += bytes.fromhex(inp["txid"])[::-1]
        result += struct.pack("<I", inp["vout"])
        scriptsig = bytes.fromhex(inp.get("scriptsig", ""))
        result += encode_varint(len(scriptsig))
        result += scriptsig
        result += struct.pack("<I", inp.get("sequence", 0xFFFFFFFF))

    # Outputs
    result += encode_varint(len(outputs))
    for out in outputs:
        result += struct.pack("<Q", out["value"])
        scriptpubkey = bytes.fromhex(out["scriptpubkey"])
        result += encode_varint(len(scriptpubkey))
        result += scriptpubkey

    # Witnesses
    if has_witness and witnesses:
        for witness in witnesses:
            result += encode_varint(len(witness))
            for item in witness:
                result += encode_varint(len(item))
                result += item

    result += struct.pack("<I", locktime)
    return result

Serialize a Bitcoin transaction.

Args

version
Transaction version
inputs
List of input dicts
outputs
List of output dicts
locktime
Transaction locktime
witnesses
Optional list of witness stacks

Returns

Serialized transaction bytes
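
Example: a sketch serializing a 1-input/1-output SegWit transaction; the txid, scripts, and witness items are placeholders.

    from jmcore import serialize_transaction

    inputs = [{"txid": "aa" * 32, "vout": 0}]  # scriptsig/sequence fall back to defaults
    outputs = [{"value": 90_000, "scriptpubkey": "0014" + "00" * 20}]
    witnesses = [[b"\x01" * 71, b"\x02" * 33]]  # e.g. <signature> <pubkey>
    raw = serialize_transaction(2, inputs, outputs, 0, witnesses=witnesses)
    assert raw[4:6] == b"\x00\x01"  # SegWit marker and flag follow the 4-byte version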

def set_blacklist_path(blacklist_path: Path | None = None, data_dir: Path | None = None) ‑> None
Expand source code
def set_blacklist_path(blacklist_path: Path | None = None, data_dir: Path | None = None) -> None:
    """
    Set the path for the global blacklist.

    Must be called before any blacklist operations. If the blacklist
    has already been initialized, this will reinitialize it with the new path.

    Args:
        blacklist_path: Explicit path to blacklist file
        data_dir: Data directory (used if blacklist_path is None)
    """
    global _global_blacklist

    with _global_blacklist_lock:
        _global_blacklist = CommitmentBlacklist(blacklist_path, data_dir)
        logger.info(f"Set blacklist path to {_global_blacklist.blacklist_path}")

Set the path for the global blacklist.

Must be called before any blacklist operations. If the blacklist has already been initialized, this will reinitialize it with the new path.

Args

blacklist_path
Explicit path to blacklist file
data_dir
Data directory (used if blacklist_path is None)
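
Example: a sketch pointing the global blacklist at a throwaway file and then recording a commitment through the package-level add_commitment() convenience helper (assumed to be exported on this page).

    from pathlib import Path
    from jmcore import set_blacklist_path, add_commitment

    set_blacklist_path(blacklist_path=Path("/tmp/jm-blacklist.txt"))
    add_commitment("ab" * 32)  # True: newly added
    add_commitment("ab" * 32)  # False: already present
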
def sha256(data: bytes) ‑> bytes
Expand source code
def sha256(data: bytes) -> bytes:
    """
    Single SHA256 hash.

    Args:
        data: Input data to hash

    Returns:
        32-byte hash
    """
    return hashlib.sha256(data).digest()

Single SHA256 hash.

Args

data
Input data to hash

Returns

32-byte hash

def timenumber_to_timestamp(timenumber: int) ‑> int
Expand source code
def timenumber_to_timestamp(timenumber: int) -> int:
    """
    Convert a timenumber to a Unix timestamp.

    Timenumber 0 = January 2020 (epoch)
    Each timenumber increment = 1 month
    Maximum timenumber = 959 (December 2099)

    Args:
        timenumber: Integer from 0 to TIMENUMBER_COUNT-1

    Returns:
        Unix timestamp for 1st of month at midnight UTC

    Raises:
        ValueError: If timenumber is out of range

    Example:
        >>> timenumber_to_timestamp(0)  # Jan 2020
        1577836800
        >>> timenumber_to_timestamp(12)  # Jan 2021
        1609459200
    """
    if timenumber < 0 or timenumber >= TIMENUMBER_COUNT:
        raise ValueError(f"Timenumber must be 0-{TIMENUMBER_COUNT - 1}, got {timenumber}")

    year = TIMELOCK_EPOCH_YEAR + timenumber // MONTHS_IN_YEAR
    month = TIMELOCK_EPOCH_MONTH + timenumber % MONTHS_IN_YEAR

    # Handle month overflow (not needed with epoch starting at January)
    if month > MONTHS_IN_YEAR:
        year += 1
        month -= MONTHS_IN_YEAR

    dt = datetime(year, month, 1, 0, 0, 0, tzinfo=UTC)
    return int(dt.timestamp())

Convert a timenumber to a Unix timestamp.

Timenumber 0 = January 2020 (epoch)
Each timenumber increment = 1 month
Maximum timenumber = 959 (December 2099)

Args

timenumber
Integer from 0 to TIMENUMBER_COUNT-1

Returns

Unix timestamp for 1st of month at midnight UTC

Raises

ValueError
If timenumber is out of range

Example

>>> timenumber_to_timestamp(0)  # Jan 2020
1577836800
>>> timenumber_to_timestamp(12)  # Jan 2021
1609459200

def timestamp_to_timenumber(locktime: int) ‑> int
Expand source code
def timestamp_to_timenumber(locktime: int) -> int:
    """
    Convert a Unix timestamp to a timenumber.

    The timestamp MUST be midnight UTC on the 1st of a month, otherwise
    this function will raise an error.

    Args:
        locktime: Unix timestamp

    Returns:
        Timenumber (0 to 959)

    Raises:
        ValueError: If locktime is not midnight UTC on 1st of month,
                   or if it's outside the valid range

    Example:
        >>> timestamp_to_timenumber(1577836800)  # Jan 2020
        0
        >>> timestamp_to_timenumber(1609459200)  # Jan 2021
        12
    """
    # Validate the locktime is a valid first-of-month timestamp
    validate_locktime(locktime)

    dt = datetime.fromtimestamp(locktime, tz=UTC)

    # Calculate months since epoch
    year_diff = dt.year - TIMELOCK_EPOCH_YEAR
    month_diff = dt.month - TIMELOCK_EPOCH_MONTH
    timenumber = year_diff * MONTHS_IN_YEAR + month_diff

    if timenumber < 0:
        raise ValueError(
            f"Locktime {locktime} ({dt.strftime('%Y-%m-%d')}) is before epoch "
            f"({TIMELOCK_EPOCH_YEAR}-{TIMELOCK_EPOCH_MONTH:02d})"
        )

    if timenumber >= TIMENUMBER_COUNT:
        max_year = TIMELOCK_EPOCH_YEAR + TIMELOCK_ERA_YEARS - 1
        raise ValueError(
            f"Locktime {locktime} ({dt.strftime('%Y-%m-%d')}) is after maximum ({max_year}-12)"
        )

    return timenumber

Convert a Unix timestamp to a timenumber.

The timestamp MUST be midnight UTC on the 1st of a month, otherwise this function will raise an error.

Args

locktime
Unix timestamp

Returns

Timenumber (0 to 959)

Raises

ValueError
If locktime is not midnight UTC on 1st of month, or if it's outside the valid range

Example

>>> timestamp_to_timenumber(1577836800)  # Jan 2020
0
>>> timestamp_to_timenumber(1609459200)  # Jan 2021
12
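
Example: round-tripping between timenumbers and locktimes (a sketch using the two conversion functions documented here).

    from jmcore import timenumber_to_timestamp, timestamp_to_timenumber

    locktime = timenumber_to_timestamp(60)  # January 2025, midnight UTC
    assert timestamp_to_timenumber(locktime) == 60
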
def validate_locktime(locktime: int) ‑> None
Expand source code
def validate_locktime(locktime: int) -> None:
    """
    Validate that a locktime is midnight UTC on the 1st of a month.

    Fidelity bonds MUST use locktimes that fall on the 1st of a month
    at exactly midnight UTC. This constraint ensures:
    1. Consistent derivation paths across implementations
    2. Efficient scanning (only 960 possible values)
    3. Compatibility with the reference implementation

    Args:
        locktime: Unix timestamp to validate

    Raises:
        ValueError: If locktime doesn't meet constraints
    """
    try:
        dt = datetime.fromtimestamp(locktime, tz=UTC)
    except (ValueError, OSError) as e:
        raise ValueError(f"Invalid timestamp {locktime}: {e}") from e

    if dt.day != 1:
        raise ValueError(
            f"Locktime must be 1st of month, got day {dt.day} ({dt.strftime('%Y-%m-%d %H:%M:%S')})"
        )

    if dt.hour != 0 or dt.minute != 0 or dt.second != 0 or dt.microsecond != 0:
        raise ValueError(
            f"Locktime must be midnight UTC, got {dt.strftime('%H:%M:%S.%f')} "
            f"({dt.strftime('%Y-%m-%d %H:%M:%S')})"
        )

Validate that a locktime is midnight UTC on the 1st of a month.

Fidelity bonds MUST use locktimes that fall on the 1st of a month at exactly midnight UTC. This constraint ensures:

1. Consistent derivation paths across implementations
2. Efficient scanning (only 960 possible values)
3. Compatibility with the reference implementation

Args

locktime
Unix timestamp to validate

Raises

ValueError
If locktime doesn't meet constraints
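
Example: a sketch constructing one valid and one invalid locktime with the standard-library datetime module.

    from datetime import datetime, timezone
    from jmcore import validate_locktime

    good = int(datetime(2030, 6, 1, tzinfo=timezone.utc).timestamp())
    validate_locktime(good)  # passes: 1st of month, midnight UTC

    bad = good + 3600  # 01:00 UTC on the same day
    try:
        validate_locktime(bad)
    except ValueError as exc:
        print(exc)  # "Locktime must be midnight UTC, ..."
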
def verify_podle(p: bytes,
p2: bytes,
sig: bytes,
e: bytes,
commitment: bytes,
index_range: range = range(0, 10)) ‑> tuple[bool, str]
Expand source code
def verify_podle(
    p: bytes,
    p2: bytes,
    sig: bytes,
    e: bytes,
    commitment: bytes,
    index_range: range = range(10),
) -> tuple[bool, str]:
    """
    Verify PoDLE proof.

    Verifies that P and P2 have the same discrete log (private key)
    without revealing the private key itself.

    Args:
        p: Public key bytes (33 bytes compressed)
        p2: Commitment public key bytes (33 bytes compressed)
        sig: Signature s value (32 bytes)
        e: Challenge e value (32 bytes)
        commitment: sha256(P2) commitment (32 bytes)
        index_range: Allowed NUMS indices to try

    Returns:
        (is_valid, error_message)
    """
    try:
        if len(p) != 33:
            return False, f"Invalid P length: {len(p)}, expected 33"
        if len(p2) != 33:
            return False, f"Invalid P2 length: {len(p2)}, expected 33"
        if len(sig) != 32:
            return False, f"Invalid sig length: {len(sig)}, expected 32"
        if len(e) != 32:
            return False, f"Invalid e length: {len(e)}, expected 32"
        if len(commitment) != 32:
            return False, f"Invalid commitment length: {len(commitment)}, expected 32"

        expected_commitment = hashlib.sha256(p2).digest()
        if commitment != expected_commitment:
            return False, "Commitment does not match H(P2)"

        p_point = PublicKey(p)
        p2_point = PublicKey(p2)

        s_int = int.from_bytes(sig, "big")
        e_int = int.from_bytes(e, "big")

        if s_int >= SECP256K1_N or e_int >= SECP256K1_N:
            return False, "Signature values out of range"

        # sg = s * G
        sg = scalar_mult_g(s_int) if s_int > 0 else None

        # Compute -e mod N for subtraction (JAM compatible: s = k + e*x, verify Kg = s*G - e*P)
        minus_e_int = (-e_int) % SECP256K1_N

        for index in index_range:
            try:
                j = get_nums_point(index)

                # Kg = s*G - e*P = s*G + (-e)*P (JAM compatible verification)
                minus_e_p = point_mult(minus_e_int, p_point)
                kg = point_add(sg, minus_e_p) if sg is not None else minus_e_p

                # Kj = s*J - e*P2 = s*J + (-e)*P2
                minus_e_p2 = point_mult(minus_e_int, p2_point)
                if s_int > 0:
                    sj = point_mult(s_int, j)
                    kj = point_add(sj, minus_e_p2)
                else:
                    kj = minus_e_p2

                kg_bytes = point_to_bytes(kg)
                kj_bytes = point_to_bytes(kj)

                e_check = hashlib.sha256(kg_bytes + kj_bytes + p + p2).digest()

                if e_check == e:
                    logger.debug(f"PoDLE verification successful at index {index}")
                    return True, ""

            except Exception as ex:
                logger.debug(f"PoDLE verification failed at index {index}: {ex}")
                continue

        return False, f"PoDLE verification failed for all indices in {index_range}"

    except Exception as ex:
        logger.error(f"PoDLE verification error: {ex}")
        return False, f"Verification error: {ex}"

Verify PoDLE proof.

Verifies that P and P2 have the same discrete log (private key) without revealing the private key itself.

Args

p
Public key bytes (33 bytes compressed)
p2
Commitment public key bytes (33 bytes compressed)
sig
Signature s value (32 bytes)
e
Challenge e value (32 bytes)
commitment
sha256(P2) commitment (32 bytes)
index_range
Allowed NUMS indices to try

Returns

(is_valid, error_message)
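
Example: the calling convention only; the values below are placeholders (the secp256k1 generator point and its double, with zero-filled sig and e), so the proof is expected to fail verification rather than succeed.

    import hashlib
    from jmcore import verify_podle

    p = bytes.fromhex("0279be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798")
    p2 = bytes.fromhex("02c6047f9441ed7d6d3045406e95c07cd85c778e4b8cef3ca7abac09b95c709ee5")
    sig, e = bytes(32), bytes(32)
    commitment = hashlib.sha256(p2).digest()  # commitment must equal H(P2)

    ok, err = verify_podle(p, p2, sig, e, commitment)
    print(ok)  # False: placeholder values are not a valid proof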

def write_nick_state(data_dir: Path | str | None, component: str, nick: str) ‑> pathlib.Path
Expand source code
def write_nick_state(data_dir: Path | str | None, component: str, nick: str) -> Path:
    """
    Write a component's nick to its state file.

    Creates the state directory if it doesn't exist.

    Args:
        data_dir: Optional data directory (defaults to get_default_data_dir())
        component: Component name (e.g., 'maker', 'taker', 'directory', 'orderbook')
        nick: The nick to write (e.g., 'J5XXXXXXXXX')

    Returns:
        Path to the written state file
    """
    path = get_nick_state_path(data_dir, component)
    path.write_text(nick + "\n")
    return path

Write a component's nick to its state file.

Creates the state directory if it doesn't exist.

Args

data_dir
Optional data directory (defaults to get_default_data_dir())
component
Component name (e.g., 'maker', 'taker', 'directory', 'orderbook')
nick
The nick to write (e.g., 'J5XXXXXXXXX')

Returns

Path to the written state file
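
Example: the full nick-state lifecycle for a component using the default data directory (read_nick_state is assumed here to be the companion reader documented near the top of this page).

    from jmcore import write_nick_state, read_nick_state, remove_nick_state

    path = write_nick_state(None, "maker", "J5XXXXXXXXX")
    assert read_nick_state(None, "maker") == "J5XXXXXXXXX"
    remove_nick_state(None, "maker")  # True if the file was deleted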

Classes

class BackendConfig (**data: Any)
Expand source code
class BackendConfig(BaseModel):
    """
    Configuration for Bitcoin backend connection.

    Supports different backend types:
    - scantxoutset: Bitcoin Core RPC with scantxoutset
    - neutrino: Light client using BIP 157/158
    """

    backend_type: str = Field(
        default="scantxoutset",
        description="Backend type: 'scantxoutset' or 'neutrino'",
    )
    backend_config: dict[str, Any] = Field(
        default_factory=dict,
        description="Backend-specific configuration (RPC credentials, neutrino peers, etc.)",
    )

    model_config = {"frozen": False}

Configuration for Bitcoin backend connection.

Supports different backend types:

- scantxoutset: Bitcoin Core RPC with scantxoutset
- neutrino: Light client using BIP 157/158

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError (pydantic_core.ValidationError) if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Ancestors

  • pydantic.main.BaseModel

Class variables

var backend_config : dict[str, typing.Any]

Backend-specific configuration (RPC credentials, neutrino peers, etc.).

var backend_type : str

Backend type: 'scantxoutset' or 'neutrino'.

var model_config

Pydantic model configuration (frozen=False).
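
Example: a sketch of a neutrino backend configuration; the keys inside backend_config are illustrative, since that dict is free-form and backend-specific.

    from jmcore import BackendConfig

    cfg = BackendConfig(
        backend_type="neutrino",
        backend_config={"peers": ["127.0.0.1:18444"]},  # illustrative keys
    )
    print(cfg.backend_type)  # "neutrino"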

class BitcoinSettings (**data: Any)
Expand source code
class BitcoinSettings(BaseModel):
    """Bitcoin backend configuration."""

    backend_type: str = Field(
        default="descriptor_wallet",
        description="Backend type: scantxoutset, descriptor_wallet, or neutrino",
    )
    rpc_url: str = Field(
        default="http://127.0.0.1:8332",
        description="Bitcoin Core RPC URL",
    )
    rpc_user: str = Field(
        default="",
        description="Bitcoin Core RPC username",
    )
    rpc_password: SecretStr = Field(
        default=SecretStr(""),
        description="Bitcoin Core RPC password",
    )
    neutrino_url: str = Field(
        default="http://127.0.0.1:8334",
        description="Neutrino REST API URL (for neutrino backend)",
    )

Bitcoin backend configuration.

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError (pydantic_core.ValidationError) if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Ancestors

  • pydantic.main.BaseModel

Class variables

var backend_type : str

Backend type: scantxoutset, descriptor_wallet, or neutrino.

var model_config

Pydantic model configuration.

var neutrino_url : str

Neutrino REST API URL (for neutrino backend).

var rpc_password : pydantic.types.SecretStr

Bitcoin Core RPC password.

var rpc_url : str

Bitcoin Core RPC URL.

var rpc_user : str

Bitcoin Core RPC username.
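
Example: a sketch overriding the defaults for a regtest Bitcoin Core RPC backend; all values are placeholders.

    from pydantic import SecretStr
    from jmcore import BitcoinSettings

    settings = BitcoinSettings(
        backend_type="descriptor_wallet",
        rpc_url="http://127.0.0.1:18443",  # e.g. a regtest RPC port
        rpc_user="jm",
        rpc_password=SecretStr("not-a-real-password"),
    )
    print(settings.rpc_password)  # SecretStr masks the value when printed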

class CommitmentBlacklist (blacklist_path: Path | None = None, data_dir: Path | None = None)
Expand source code
class CommitmentBlacklist:
    """
    Thread-safe commitment blacklist with file persistence.

    The blacklist is stored as a simple text file with one commitment per line.
    This matches the reference implementation's format for compatibility.
    """

    def __init__(self, blacklist_path: Path | None = None, data_dir: Path | None = None):
        """
        Initialize the commitment blacklist.

        Args:
            blacklist_path: Path to the blacklist file. If None, uses data_dir.
            data_dir: Data directory for JoinMarket (defaults to get_default_data_dir()).
                     Only used if blacklist_path is None.
        """
        if blacklist_path is None:
            blacklist_path = get_commitment_blacklist_path(data_dir)
        self.blacklist_path = blacklist_path

        # In-memory cache of blacklisted commitments
        self._commitments: set[str] = set()
        self._lock = threading.Lock()

        # Load existing blacklist from disk
        self._load_from_disk()

    def _load_from_disk(self) -> None:
        """Load blacklist from disk into memory."""
        if not self.blacklist_path.exists():
            logger.debug(f"No existing blacklist at {self.blacklist_path}")
            return

        try:
            with open(self.blacklist_path, encoding="ascii") as f:
                for line in f:
                    commitment = line.strip()
                    if commitment:
                        self._commitments.add(commitment)
            logger.info(f"Loaded {len(self._commitments)} commitments from blacklist")
        except Exception as e:
            logger.error(f"Failed to load blacklist from {self.blacklist_path}: {e}")

    def _save_to_disk(self) -> None:
        """Save in-memory blacklist to disk."""
        try:
            # Ensure parent directory exists
            self.blacklist_path.parent.mkdir(parents=True, exist_ok=True)

            with open(self.blacklist_path, "w", encoding="ascii") as f:
                for commitment in sorted(self._commitments):
                    f.write(commitment + "\n")
                f.flush()
            logger.debug(f"Saved {len(self._commitments)} commitments to blacklist")
        except Exception as e:
            logger.error(f"Failed to save blacklist to {self.blacklist_path}: {e}")

    def is_blacklisted(self, commitment: str) -> bool:
        """
        Check if a commitment is blacklisted.

        Args:
            commitment: The commitment hash (hex string, typically 64 chars)

        Returns:
            True if the commitment is blacklisted, False otherwise
        """
        # Normalize commitment (strip whitespace, lowercase)
        commitment = commitment.strip().lower()

        with self._lock:
            return commitment in self._commitments

    def add(self, commitment: str, persist: bool = True) -> bool:
        """
        Add a commitment to the blacklist.

        Args:
            commitment: The commitment hash (hex string)
            persist: If True, save to disk immediately

        Returns:
            True if the commitment was newly added, False if already present
        """
        # Normalize commitment
        commitment = commitment.strip().lower()

        if not commitment:
            logger.warning("Attempted to add empty commitment to blacklist")
            return False

        with self._lock:
            if commitment in self._commitments:
                return False

            self._commitments.add(commitment)
            logger.debug(f"Added commitment to blacklist: {commitment[:16]}...")

            if persist:
                self._save_to_disk()

            return True

    def check_and_add(self, commitment: str, persist: bool = True) -> bool:
        """
        Check if a commitment is blacklisted, and if not, add it.

        This is the primary method for handling commitments during CoinJoin.
        It atomically checks and adds in a single operation.

        Args:
            commitment: The commitment hash (hex string)
            persist: If True, save to disk immediately after adding

        Returns:
            True if the commitment is NEW (allowed), False if already blacklisted
        """
        # Normalize commitment
        commitment = commitment.strip().lower()

        if not commitment:
            logger.warning("Attempted to check empty commitment")
            return False

        with self._lock:
            if commitment in self._commitments:
                logger.info(f"Commitment already blacklisted: {commitment[:16]}...")
                return False

            self._commitments.add(commitment)
            logger.debug(f"Added commitment to blacklist: {commitment[:16]}...")

            if persist:
                self._save_to_disk()

            return True

    def __len__(self) -> int:
        """Return the number of blacklisted commitments."""
        with self._lock:
            return len(self._commitments)

    def __contains__(self, commitment: str) -> bool:
        """Check if a commitment is blacklisted using 'in' operator."""
        return self.is_blacklisted(commitment)

Thread-safe commitment blacklist with file persistence.

The blacklist is stored as a simple text file with one commitment per line. This matches the reference implementation's format for compatibility.

Initialize the commitment blacklist.

Args

blacklist_path
Path to the blacklist file. If None, uses data_dir.
data_dir
Data directory for JoinMarket (defaults to get_default_data_dir()). Only used if blacklist_path is None.

Methods

def add(self, commitment: str, persist: bool = True) ‑> bool
Expand source code
def add(self, commitment: str, persist: bool = True) -> bool:
    """
    Add a commitment to the blacklist.

    Args:
        commitment: The commitment hash (hex string)
        persist: If True, save to disk immediately

    Returns:
        True if the commitment was newly added, False if already present
    """
    # Normalize commitment
    commitment = commitment.strip().lower()

    if not commitment:
        logger.warning("Attempted to add empty commitment to blacklist")
        return False

    with self._lock:
        if commitment in self._commitments:
            return False

        self._commitments.add(commitment)
        logger.debug(f"Added commitment to blacklist: {commitment[:16]}...")

        if persist:
            self._save_to_disk()

        return True

Add a commitment to the blacklist.

Args

commitment
The commitment hash (hex string)
persist
If True, save to disk immediately

Returns

True if the commitment was newly added, False if already present

def check_and_add(self, commitment: str, persist: bool = True) ‑> bool
Expand source code
def check_and_add(self, commitment: str, persist: bool = True) -> bool:
    """
    Check if a commitment is blacklisted, and if not, add it.

    This is the primary method for handling commitments during CoinJoin.
    It atomically checks and adds in a single operation.

    Args:
        commitment: The commitment hash (hex string)
        persist: If True, save to disk immediately after adding

    Returns:
        True if the commitment is NEW (allowed), False if already blacklisted
    """
    # Normalize commitment
    commitment = commitment.strip().lower()

    if not commitment:
        logger.warning("Attempted to check empty commitment")
        return False

    with self._lock:
        if commitment in self._commitments:
            logger.info(f"Commitment already blacklisted: {commitment[:16]}...")
            return False

        self._commitments.add(commitment)
        logger.debug(f"Added commitment to blacklist: {commitment[:16]}...")

        if persist:
            self._save_to_disk()

        return True

Check if a commitment is blacklisted, and if not, add it.

This is the primary method for handling commitments during CoinJoin. It atomically checks and adds in a single operation.

Args

commitment
The commitment hash (hex string)
persist
If True, save to disk immediately after adding

Returns

True if the commitment is NEW (allowed), False if already blacklisted
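
Example: a sketch using check_and_add() as a gate on incoming commitments, backed by a throwaway file.

    from pathlib import Path
    from jmcore import CommitmentBlacklist

    bl = CommitmentBlacklist(blacklist_path=Path("/tmp/jm-commitments.txt"))
    commitment = "ab" * 32
    if bl.check_and_add(commitment):
        print("fresh commitment - proceed")  # first use
    else:
        print("commitment reused - reject")  # any later attempt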

def is_blacklisted(self, commitment: str) ‑> bool
Expand source code
def is_blacklisted(self, commitment: str) -> bool:
    """
    Check if a commitment is blacklisted.

    Args:
        commitment: The commitment hash (hex string, typically 64 chars)

    Returns:
        True if the commitment is blacklisted, False otherwise
    """
    # Normalize commitment (strip whitespace, lowercase)
    commitment = commitment.strip().lower()

    with self._lock:
        return commitment in self._commitments

Check if a commitment is blacklisted.

Args

commitment
The commitment hash (hex string, typically 64 chars)

Returns

True if the commitment is blacklisted, False otherwise

class CryptoSession
Expand source code
class CryptoSession:
    """
    Manages encryption state for a coinjoin session with a taker.
    """

    def __init__(self) -> None:
        """Initialize a new crypto session with a fresh keypair."""
        self.keypair: SecretKey = init_keypair()
        self.box: Box | None = None
        self.counterparty_pubkey: str = ""

    def get_pubkey_hex(self) -> str:
        """Get our public key as hex string."""
        pk = get_pubkey(self.keypair, as_hex=True)
        assert isinstance(pk, str)
        return pk

    def setup_encryption(self, counterparty_pubkey_hex: str) -> None:
        """
        Set up encryption with a counterparty's public key.

        Args:
            counterparty_pubkey_hex: Counterparty's public key in hex.
        """
        try:
            counterparty_pk = init_pubkey(counterparty_pubkey_hex)
            self.box = create_encryption_box(self.keypair, counterparty_pk)
            self.counterparty_pubkey = counterparty_pubkey_hex
            logger.debug("Set up encryption box with counterparty")
        except NaclError as e:
            logger.error(f"Failed to set up encryption: {e}")
            raise

    def encrypt(self, message: str) -> str:
        """
        Encrypt a message for the counterparty.

        Args:
            message: Plaintext message.

        Returns:
            Base64-encoded encrypted message.
        """
        if self.box is None:
            raise NaclError("Encryption not set up - call setup_encryption first")
        return encrypt_encode(message, self.box)

    def decrypt(self, message: str) -> str:
        """
        Decrypt a message from the counterparty.

        Args:
            message: Base64-encoded encrypted message.

        Returns:
            Decrypted plaintext.
        """
        if self.box is None:
            raise NaclError("Encryption not set up - call setup_encryption first")
        decrypted = decode_decrypt(message, self.box)
        return decrypted.decode("utf-8")

    @property
    def is_encrypted(self) -> bool:
        """Check if encryption has been set up."""
        return self.box is not None

Manages encryption state for a coinjoin session with a taker.

Initialize a new crypto session with a fresh keypair.

Instance variables

prop is_encrypted : bool
Expand source code
@property
def is_encrypted(self) -> bool:
    """Check if encryption has been set up."""
    return self.box is not None

Check if encryption has been set up.

Methods

def decrypt(self, message: str) ‑> str
Expand source code
def decrypt(self, message: str) -> str:
    """
    Decrypt a message from the counterparty.

    Args:
        message: Base64-encoded encrypted message.

    Returns:
        Decrypted plaintext.
    """
    if self.box is None:
        raise NaclError("Encryption not set up - call setup_encryption first")
    decrypted = decode_decrypt(message, self.box)
    return decrypted.decode("utf-8")

Decrypt a message from the counterparty.

Args

message
Base64-encoded encrypted message.

Returns

Decrypted plaintext.

def encrypt(self, message: str) ‑> str
Expand source code
def encrypt(self, message: str) -> str:
    """
    Encrypt a message for the counterparty.

    Args:
        message: Plaintext message.

    Returns:
        Base64-encoded encrypted message.
    """
    if self.box is None:
        raise NaclError("Encryption not set up - call setup_encryption first")
    return encrypt_encode(message, self.box)

Encrypt a message for the counterparty.

Args

message
Plaintext message.

Returns

Base64-encoded encrypted message.

def get_pubkey_hex(self) ‑> str
Expand source code
def get_pubkey_hex(self) -> str:
    """Get our public key as hex string."""
    pk = get_pubkey(self.keypair, as_hex=True)
    assert isinstance(pk, str)
    return pk

Get our public key as hex string.

def setup_encryption(self, counterparty_pubkey_hex: str) ‑> None
Expand source code
def setup_encryption(self, counterparty_pubkey_hex: str) -> None:
    """
    Set up encryption with a counterparty's public key.

    Args:
        counterparty_pubkey_hex: Counterparty's public key in hex.
    """
    try:
        counterparty_pk = init_pubkey(counterparty_pubkey_hex)
        self.box = create_encryption_box(self.keypair, counterparty_pk)
        self.counterparty_pubkey = counterparty_pubkey_hex
        logger.debug("Set up encryption box with counterparty")
    except NaclError as e:
        logger.error(f"Failed to set up encryption: {e}")
        raise

Set up encryption with a counterparty's public key.

Args

counterparty_pubkey_hex
Counterparty's public key in hex.
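
Example: a sketch of two sessions exchanging public keys and then encrypting in both directions; the underlying NaCl boxes share the same key material, so either side can decrypt the other's messages.

    from jmcore import CryptoSession

    alice, bob = CryptoSession(), CryptoSession()
    alice.setup_encryption(bob.get_pubkey_hex())
    bob.setup_encryption(alice.get_pubkey_hex())

    wire = alice.encrypt("hello from alice")  # base64 ciphertext
    assert bob.decrypt(wire) == "hello from alice"
    assert alice.is_encrypted and bob.is_encrypted
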
class DeduplicationStats (total_processed: int = 0, duplicates_dropped: int = 0, unique_messages: int = 0)
Expand source code
@dataclass
class DeduplicationStats:
    """Statistics about deduplication activity."""

    total_processed: int = 0
    duplicates_dropped: int = 0
    unique_messages: int = 0

    @property
    def duplicate_rate(self) -> float:
        """Return the percentage of messages that were duplicates."""
        if self.total_processed == 0:
            return 0.0
        return (self.duplicates_dropped / self.total_processed) * 100

Statistics about deduplication activity.

Instance variables

prop duplicate_rate : float
Expand source code
@property
def duplicate_rate(self) -> float:
    """Return the percentage of messages that were duplicates."""
    if self.total_processed == 0:
        return 0.0
    return (self.duplicates_dropped / self.total_processed) * 100

Return the percentage of messages that were duplicates.

var duplicates_dropped : int

Count of messages dropped as duplicates.

var total_processed : int

Total number of messages processed.

var unique_messages : int

Count of unique (non-duplicate) messages.
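
Example: duplicate_rate expressed as a percentage of processed messages.

    from jmcore import DeduplicationStats

    stats = DeduplicationStats(total_processed=200, duplicates_dropped=50, unique_messages=150)
    print(f"{stats.duplicate_rate:.1f}%")  # 25.0%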

class DirectoryClient (host: str,
port: int,
network: str,
nick_identity: NickIdentity | None = None,
location: str = 'NOT-SERVING-ONION',
socks_host: str = '127.0.0.1',
socks_port: int = 9050,
timeout: float = 30.0,
max_message_size: int = 2097152,
on_disconnect: Callable[[], None] | None = None,
neutrino_compat: bool = False,
peerlist_timeout: float = 60.0)
Expand source code
class DirectoryClient:
    """
    Client for connecting to JoinMarket directory servers.

    Supports:
    - Direct TCP connections (for local/dev)
    - Tor connections (for .onion addresses)
    - Handshake protocol
    - Peerlist fetching
    - Orderbook fetching
    - Continuous listening for updates
    """

    def __init__(
        self,
        host: str,
        port: int,
        network: str,
        nick_identity: NickIdentity | None = None,
        location: str = "NOT-SERVING-ONION",
        socks_host: str = "127.0.0.1",
        socks_port: int = 9050,
        timeout: float = 30.0,
        max_message_size: int = 2097152,
        on_disconnect: Callable[[], None] | None = None,
        neutrino_compat: bool = False,
        peerlist_timeout: float = 60.0,
    ) -> None:
        """
        Initialize DirectoryClient.

        Args:
            host: Directory server hostname or .onion address
            port: Directory server port
            network: Bitcoin network (mainnet, testnet, signet, regtest)
            nick_identity: NickIdentity for message signing (generated if None)
            location: Our location string (onion address or NOT-SERVING-ONION)
            socks_host: SOCKS proxy host for Tor
            socks_port: SOCKS proxy port for Tor
            timeout: Connection timeout in seconds
            max_message_size: Maximum message size in bytes
            on_disconnect: Callback when connection drops
            neutrino_compat: Advertise support for Neutrino-compatible UTXO metadata
            peerlist_timeout: Timeout for first PEERLIST chunk (default 60s, subsequent chunks use 5s)
        """
        self.host = host
        self.port = port
        self.network = network
        self.location = location
        self.socks_host = socks_host
        self.socks_port = socks_port
        self.timeout = timeout
        self.max_message_size = max_message_size
        self.connection: TCPConnection | None = None
        self.nick_identity = nick_identity or NickIdentity(JM_VERSION)
        self.nick = self.nick_identity.nick
        # hostid retained for possible future use (e.g., logging, debugging)
        # Note: NOT used for message signing - always use ONION_HOSTID constant instead
        self.hostid = host
        # Offers indexed by (counterparty, oid) with timestamp metadata
        self.offers: dict[tuple[str, int], OfferWithTimestamp] = {}
        # Bonds indexed by UTXO key (txid:vout)
        self.bonds: dict[str, FidelityBond] = {}
        # Reverse index: bond UTXO key -> set of (counterparty, oid) keys that use this bond
        # Used for deduplication when same bond is used by different nicks
        self._bond_to_offers: dict[str, set[tuple[str, int]]] = {}
        self.peer_features: dict[str, dict[str, bool]] = {}  # nick -> features dict
        # Active peers from last peerlist (nick -> location)
        self._active_peers: dict[str, str] = {}
        self.running = False
        self.on_disconnect = on_disconnect
        self.initial_orderbook_received = False
        self.last_orderbook_request_time: float = 0.0
        self.last_offer_received_time: float | None = None
        self.neutrino_compat = neutrino_compat

        # Version negotiation state (set after handshake)
        self.negotiated_version: int | None = None
        self.directory_neutrino_compat: bool = False
        self.directory_peerlist_features: bool = False  # True if directory supports F: suffix

        # Directory metadata from handshake
        self.directory_motd: str | None = None
        self.directory_nick: str | None = None
        self.directory_proto_ver_min: int | None = None
        self.directory_proto_ver_max: int | None = None
        self.directory_features: dict[str, bool] = {}

        # Timing intervals
        self.peerlist_check_interval = 1800.0
        self.orderbook_refresh_interval = 1800.0
        self.orderbook_retry_interval = 300.0
        self.zero_offer_retry_interval = 600.0

        # Peerlist support tracking
        # If the directory doesn't support getpeerlist (e.g., reference implementation),
        # we track this to avoid spamming unsupported requests
        self._peerlist_supported: bool | None = None  # None = unknown, True/False = known
        self._last_peerlist_request_time: float = 0.0
        self._peerlist_min_interval: float = 60.0  # Minimum seconds between peerlist requests
        self._peerlist_timeout: float = peerlist_timeout  # Timeout for first peerlist chunk
        self._peerlist_chunk_timeout: float = (
            5.0  # Timeout between chunks (end of chunked response)
        )
        self._peerlist_timeout_count: int = 0  # Track consecutive timeouts

        # Message buffer for messages received while waiting for specific responses
        # (e.g., PEERLIST). These messages should be processed, not discarded.
        self._message_buffer: asyncio.Queue[dict[str, Any]] = asyncio.Queue()

    async def connect(self) -> None:
        """Connect to the directory server and perform handshake."""
        try:
            logger.debug(f"DirectoryClient.connect: connecting to {self.host}:{self.port}")
            if not self.host.endswith(".onion"):
                self.connection = await connect_direct(
                    self.host,
                    self.port,
                    self.max_message_size,
                    self.timeout,
                )
                logger.debug("DirectoryClient.connect: direct connection established")
            else:
                self.connection = await connect_via_tor(
                    self.host,
                    self.port,
                    self.socks_host,
                    self.socks_port,
                    self.max_message_size,
                    self.timeout,
                )
                logger.debug("DirectoryClient.connect: tor connection established")
            logger.debug("DirectoryClient.connect: starting handshake")
            await self._handshake()
            logger.debug("DirectoryClient.connect: handshake complete")
        except Exception as e:
            logger.error(f"Failed to connect to {self.host}:{self.port}: {e}", exc_info=True)
            # Clean up connection if handshake failed
            if self.connection:
                with contextlib.suppress(Exception):
                    await self.connection.close()
                self.connection = None
            raise DirectoryClientError(f"Connection failed: {e}") from e

    async def _handshake(self) -> None:
        """
        Perform directory server handshake with feature negotiation.

        We use proto-ver=5 for reference implementation compatibility.
        Features like neutrino_compat are negotiated independently via
        the features dict in the handshake payload.
        """
        if not self.connection:
            raise DirectoryClientError("Not connected")

        # Build our feature set - always include peerlist_features to indicate we support
        # the extended peerlist format with F: suffix for feature information
        our_features: set[str] = {FEATURE_PEERLIST_FEATURES}
        if self.neutrino_compat:
            our_features.add(FEATURE_NEUTRINO_COMPAT)
        feature_set = FeatureSet(features=our_features)

        # Send our handshake with current version and features
        handshake_data = create_handshake_request(
            nick=self.nick,
            location=self.location,
            network=self.network,
            directory=False,
            features=feature_set,
        )
        logger.debug(f"DirectoryClient._handshake: created handshake data: {handshake_data}")
        handshake_msg = {
            "type": MessageType.HANDSHAKE.value,
            "line": json.dumps(handshake_data),
        }
        logger.debug("DirectoryClient._handshake: sending handshake message")
        await self.connection.send(json.dumps(handshake_msg).encode("utf-8"))
        logger.debug("DirectoryClient._handshake: handshake sent, waiting for response")

        # Receive and parse directory's response
        response_data = await asyncio.wait_for(self.connection.receive(), timeout=self.timeout)
        logger.debug(f"DirectoryClient._handshake: received response: {response_data[:200]!r}")
        response = json.loads(response_data.decode("utf-8"))

        if response["type"] not in (MessageType.HANDSHAKE.value, MessageType.DN_HANDSHAKE.value):
            raise DirectoryClientError(f"Unexpected response type: {response['type']}")

        handshake_response = json.loads(response["line"])
        if not handshake_response.get("accepted", False):
            raise DirectoryClientError("Handshake rejected")

        # Extract directory's version range
        # Reference directories only send "proto-ver" (single value, typically 5)
        dir_ver_min = handshake_response.get("proto-ver-min")
        dir_ver_max = handshake_response.get("proto-ver-max")

        if dir_ver_min is None or dir_ver_max is None:
            # Reference directory: only sends single proto-ver
            dir_version = handshake_response.get("proto-ver", 5)
            dir_ver_min = dir_ver_max = dir_version

        # Verify compatibility with our version (we only support v5)
        if not (dir_ver_min <= JM_VERSION <= dir_ver_max):
            raise DirectoryClientError(
                f"No compatible protocol version: we support v{JM_VERSION}, "
                f"directory supports [{dir_ver_min}, {dir_ver_max}]"
            )

        # Use v5 (our only supported version)
        self.negotiated_version = JM_VERSION

        # Check if directory supports Neutrino-compatible metadata
        self.directory_neutrino_compat = peer_supports_neutrino_compat(handshake_response)

        # Check if directory supports peerlist_features (extended peerlist with F: suffix)
        dir_features = handshake_response.get("features", {})
        self.directory_peerlist_features = dir_features.get(FEATURE_PEERLIST_FEATURES, False)

        # Store directory metadata
        self.directory_motd = handshake_response.get("motd")
        self.directory_nick = handshake_response.get("nick")
        self.directory_proto_ver_min = dir_ver_min
        self.directory_proto_ver_max = dir_ver_max
        self.directory_features = dir_features

        logger.info(
            f"Handshake successful with {self.host}:{self.port} (nick: {self.nick}, "
            f"negotiated_version: v{self.negotiated_version}, "
            f"neutrino_compat: {self.directory_neutrino_compat}, "
            f"peerlist_features: {self.directory_peerlist_features})"
        )

    async def get_peerlist(self) -> list[str] | None:
        """
        Fetch the current list of connected peers.

        Note: Reference implementation directories do NOT support GETPEERLIST.
        This method shares peerlist support tracking with get_peerlist_with_features().

        The directory may send multiple PEERLIST messages (chunked response) to
        avoid overwhelming slow Tor connections. This method accumulates peers
        from all chunks.

        Returns:
            List of active peer nicks. Returns empty list if directory doesn't
            support GETPEERLIST. Returns None if rate-limited (use cached data).
        """
        if not self.connection:
            raise DirectoryClientError("Not connected")

        # Skip if we already know this directory doesn't support GETPEERLIST
        # (only applies to directories that didn't announce peerlist_features)
        if self._peerlist_supported is False and not self.directory_peerlist_features:
            logger.debug("Skipping GETPEERLIST - directory doesn't support it")
            return []

        # Rate-limit peerlist requests to avoid spamming
        import time

        current_time = time.time()
        if current_time - self._last_peerlist_request_time < self._peerlist_min_interval:
            logger.debug(
                f"Skipping GETPEERLIST - rate limited "
                f"(last request {current_time - self._last_peerlist_request_time:.1f}s ago)"
            )
            return None

        self._last_peerlist_request_time = current_time

        getpeerlist_msg = {"type": MessageType.GETPEERLIST.value, "line": ""}
        logger.debug("Sending GETPEERLIST request")
        await self.connection.send(json.dumps(getpeerlist_msg).encode("utf-8"))

        start_time = asyncio.get_event_loop().time()

        # Timeout for waiting for the first PEERLIST response
        first_response_timeout = (
            self._peerlist_timeout if self.directory_peerlist_features else self.timeout
        )

        # Timeout between chunks - when this expires after receiving at least one
        # PEERLIST message, we know the directory has finished sending all chunks
        inter_chunk_timeout = self._peerlist_chunk_timeout

        # Accumulate peers from multiple PEERLIST chunks
        all_peers: list[str] = []
        chunks_received = 0
        got_first_response = False

        while True:
            elapsed = asyncio.get_event_loop().time() - start_time

            # Determine timeout for this receive
            if not got_first_response:
                remaining = first_response_timeout - elapsed
                if remaining <= 0:
                    self._handle_peerlist_timeout()
                    return []
                receive_timeout = remaining
            else:
                receive_timeout = inter_chunk_timeout

            try:
                response_data = await asyncio.wait_for(
                    self.connection.receive(), timeout=receive_timeout
                )
                response = json.loads(response_data.decode("utf-8"))
                msg_type = response.get("type")

                if msg_type == MessageType.PEERLIST.value:
                    got_first_response = True
                    chunks_received += 1
                    peerlist_str = response.get("line", "")

                    # Parse this chunk
                    chunk_peers: list[str] = []
                    if peerlist_str:
                        for entry in peerlist_str.split(","):
                            if not entry or not entry.strip():
                                continue
                            if NICK_PEERLOCATOR_SEPARATOR not in entry:
                                logger.debug(f"Skipping metadata entry in peerlist: '{entry}'")
                                continue
                            try:
                                nick, location, disconnected, _features = parse_peerlist_entry(
                                    entry
                                )
                                logger.debug(
                                    f"Parsed peer: {nick} at {location}, "
                                    f"disconnected={disconnected}"
                                )
                                if not disconnected:
                                    chunk_peers.append(nick)
                            except ValueError as e:
                                logger.warning(f"Failed to parse peerlist entry '{entry}': {e}")
                                continue

                    all_peers.extend(chunk_peers)
                    logger.debug(
                        f"Received PEERLIST chunk {chunks_received} with "
                        f"{len(chunk_peers)} peers (total: {len(all_peers)})"
                    )
                    continue

                # Buffer unexpected messages
                logger.trace(
                    f"Buffering unexpected message type {msg_type} while waiting for PEERLIST"
                )
                await self._message_buffer.put(response)

            except TimeoutError:
                if not got_first_response:
                    self._handle_peerlist_timeout()
                    return []
                # Inter-chunk timeout means we're done
                break

            except Exception as e:
                logger.warning(f"Error receiving/parsing message while waiting for PEERLIST: {e}")
                elapsed = asyncio.get_event_loop().time() - start_time
                if not got_first_response and elapsed > first_response_timeout:
                    self._handle_peerlist_timeout()
                    return []
                if got_first_response:
                    break

        # Mark peerlist as supported since we got a valid response
        self._peerlist_supported = True
        self._peerlist_timeout_count = 0

        logger.info(f"Received {len(all_peers)} active peers from {self.host}:{self.port}")
        return all_peers

    async def get_peerlist_with_features(self) -> list[tuple[str, str, FeatureSet]]:
        """
        Fetch the current list of connected peers with their features.

        Uses the standard GETPEERLIST message. If the directory supports
        peerlist_features, the response will include F: suffix with features.

        Note: Reference implementation directories do NOT support GETPEERLIST.
        This method tracks whether the directory supports it and skips requests
        to unsupported directories to avoid spamming warnings in their logs.

        The directory may send multiple PEERLIST messages (chunked response) to
        avoid overwhelming slow Tor connections. This method accumulates peers
        from all chunks until no more PEERLIST messages arrive within the
        inter-chunk timeout.

        Returns:
            List of (nick, location, features) tuples for active peers.
            Features will be empty for directories that don't support peerlist_features.
            Returns empty list if directory doesn't support GETPEERLIST or is rate-limited.
        """
        if not self.connection:
            raise DirectoryClientError("Not connected")

        # Skip if we already know this directory doesn't support GETPEERLIST
        # (only applies to directories that didn't announce peerlist_features)
        if self._peerlist_supported is False and not self.directory_peerlist_features:
            logger.debug("Skipping GETPEERLIST - directory doesn't support it")
            return []

        # Rate-limit peerlist requests to avoid spamming
        import time

        current_time = time.time()
        if current_time - self._last_peerlist_request_time < self._peerlist_min_interval:
            logger.debug(
                f"Skipping GETPEERLIST - rate limited "
                f"(last request {current_time - self._last_peerlist_request_time:.1f}s ago)"
            )
            return []  # Return empty - will use offers for nick tracking

        self._last_peerlist_request_time = current_time

        getpeerlist_msg = {"type": MessageType.GETPEERLIST.value, "line": ""}
        logger.debug("Sending GETPEERLIST request")
        await self.connection.send(json.dumps(getpeerlist_msg).encode("utf-8"))

        start_time = asyncio.get_event_loop().time()

        # Timeout for waiting for the first PEERLIST response
        # Use longer timeout for directories that support peerlist_features
        first_response_timeout = (
            self._peerlist_timeout if self.directory_peerlist_features else self.timeout
        )

        # Timeout between chunks - when this expires after receiving at least one
        # PEERLIST message, we know the directory has finished sending all chunks
        inter_chunk_timeout = self._peerlist_chunk_timeout

        # Accumulate peers from multiple PEERLIST chunks
        all_peers: list[tuple[str, str, FeatureSet]] = []
        chunks_received = 0
        got_first_response = False

        while True:
            elapsed = asyncio.get_event_loop().time() - start_time

            # Determine timeout for this receive
            if not got_first_response:
                # Waiting for first PEERLIST - use full timeout
                remaining = first_response_timeout - elapsed
                if remaining <= 0:
                    self._handle_peerlist_timeout()
                    return []
                receive_timeout = remaining
            else:
                # Already received at least one chunk - use shorter inter-chunk timeout
                receive_timeout = inter_chunk_timeout

            try:
                response_data = await asyncio.wait_for(
                    self.connection.receive(), timeout=receive_timeout
                )
                response = json.loads(response_data.decode("utf-8"))
                msg_type = response.get("type")

                if msg_type == MessageType.PEERLIST.value:
                    got_first_response = True
                    chunks_received += 1
                    peerlist_str = response.get("line", "")
                    chunk_peers = self._handle_peerlist_response(peerlist_str)
                    all_peers.extend(chunk_peers)
                    logger.debug(
                        f"Received PEERLIST chunk {chunks_received} with "
                        f"{len(chunk_peers)} peers (total: {len(all_peers)})"
                    )
                    # Continue to check for more chunks
                    continue

                # Buffer unexpected messages (like PUBMSG offers) for later processing
                logger.trace(
                    f"Buffering unexpected message type {msg_type} while waiting for PEERLIST"
                )
                await self._message_buffer.put(response)

            except TimeoutError:
                if not got_first_response:
                    # Never received any PEERLIST - this is a real timeout
                    self._handle_peerlist_timeout()
                    return []
                # Received at least one chunk, inter-chunk timeout means we're done
                logger.debug(
                    f"Peerlist complete: received {len(all_peers)} peers "
                    f"in {chunks_received} chunks"
                )
                break

            except Exception as e:
                logger.warning(f"Error receiving/parsing message while waiting for PEERLIST: {e}")
                elapsed = asyncio.get_event_loop().time() - start_time
                if not got_first_response and elapsed > first_response_timeout:
                    self._handle_peerlist_timeout()
                    return []
                # If we already have some data, return what we have
                if got_first_response:
                    break

        # Success - reset timeout counter and mark as supported
        self._peerlist_timeout_count = 0
        self._peerlist_supported = True
        return all_peers

    def _handle_peerlist_timeout(self) -> None:
        """Handle timeout when waiting for PEERLIST response."""
        self._peerlist_timeout_count += 1

        if self.directory_peerlist_features:
            # Directory announced peerlist_features during handshake, so it supports
            # GETPEERLIST. Timeout is likely due to large peerlist or network issues.
            logger.warning(
                f"Timed out waiting for PEERLIST from {self.host}:{self.port} "
                f"(attempt {self._peerlist_timeout_count}) - "
                "peerlist may be large or network is slow"
            )
            # Don't disable peerlist requests - directory supports it, just slow
        else:
            # Directory didn't announce peerlist_features - likely reference impl
            logger.info(
                f"Timed out waiting for PEERLIST from {self.host}:{self.port} - "
                "directory likely doesn't support GETPEERLIST (reference implementation)"
            )
            self._peerlist_supported = False

    def _handle_peerlist_response(self, peerlist_str: str) -> list[tuple[str, str, FeatureSet]]:
        """
        Process a PEERLIST response and update internal state.

        Note: Some directories send multiple partial PEERLIST responses (one per peer)
        instead of a single complete list. We handle this by only adding/updating
        peers from each response, not removing nicks that aren't present.

        Removal of stale offers is handled by:
        1. Explicit disconnect markers (;D suffix) in peerlist entries
        2. The periodic peerlist refresh in OrderbookAggregator
        3. Staleness cleanup for directories without GETPEERLIST support

        Args:
            peerlist_str: Comma-separated list of peer entries

        Returns:
            List of active peers (nick, location, features) in this response
        """
        logger.debug(f"Peerlist string: {peerlist_str}")

        # Mark peerlist as supported since we got a valid response
        self._peerlist_supported = True

        if not peerlist_str:
            # Empty peerlist response - just return empty list
            # Don't remove offers as this might be a partial response
            return []

        peers: list[tuple[str, str, FeatureSet]] = []
        explicitly_disconnected: list[str] = []

        for entry in peerlist_str.split(","):
            # Skip empty entries
            if not entry or not entry.strip():
                continue
            # Skip entries without separator - these are metadata (e.g., 'peerlist_features')
            # from the reference implementation, not actual peer entries
            if NICK_PEERLOCATOR_SEPARATOR not in entry:
                logger.debug(f"Skipping metadata entry in peerlist: '{entry}'")
                continue
            try:
                nick, location, disconnected, features = parse_peerlist_entry(entry)
                logger.debug(
                    f"Parsed peer: {nick} at {location}, "
                    f"disconnected={disconnected}, features={features.to_comma_string()}"
                )
                if disconnected:
                    # Nick explicitly marked as disconnected - remove their offers
                    explicitly_disconnected.append(nick)
                else:
                    peers.append((nick, location, features))
                    # Update/add this nick to active peers
                    self._active_peers[nick] = location
                    # Merge features into peer_features cache (never overwrite/downgrade)
                    # This prevents losing features when receiving peerlist from directories
                    # that don't support peerlist_features
                    features_dict = features.to_dict()
                    self._merge_peer_features(nick, features_dict)

                    # Update features on any cached offers for this peer
                    # This fixes the race condition where offers are stored before
                    # peerlist response arrives with features
                    self._update_offer_features(nick, features_dict)
            except ValueError as e:
                logger.warning(f"Failed to parse peerlist entry '{entry}': {e}")
                continue

        # Only remove offers for nicks that are explicitly marked as disconnected
        for nick in explicitly_disconnected:
            self.remove_offers_for_nick(nick)

        logger.trace(
            f"Received {len(peers)} active peers with features from {self.host}:{self.port}"
            + (
                f", {len(explicitly_disconnected)} explicitly disconnected"
                if explicitly_disconnected
                else ""
            )
        )
        return peers

    async def listen_for_messages(self, duration: float = 5.0) -> list[dict[str, Any]]:
        """
        Listen for messages for a specified duration.

        This method collects all messages received within the specified duration.
        It properly handles connection closed errors by raising DirectoryClientError.

        Args:
            duration: How long to listen in seconds

        Returns:
            List of received messages

        Raises:
            DirectoryClientError: If not connected or connection is lost
        """
        if not self.connection:
            raise DirectoryClientError("Not connected")

        # Check connection state before starting
        if not self.connection.is_connected():
            raise DirectoryClientError("Connection closed")

        messages: list[dict[str, Any]] = []
        start_time = asyncio.get_event_loop().time()

        # First, drain any buffered messages into our result list
        # These are messages that were received while waiting for other responses
        while not self._message_buffer.empty():
            try:
                buffered_msg = self._message_buffer.get_nowait()
                logger.trace(
                    f"Processing buffered message type {buffered_msg.get('type')}: "
                    f"{buffered_msg.get('line', '')[:80]}..."
                )
                messages.append(buffered_msg)
            except asyncio.QueueEmpty:
                break

        while asyncio.get_event_loop().time() - start_time < duration:
            try:
                remaining_time = duration - (asyncio.get_event_loop().time() - start_time)
                if remaining_time <= 0:
                    break

                response_data = await asyncio.wait_for(
                    self.connection.receive(), timeout=remaining_time
                )
                response = json.loads(response_data.decode("utf-8"))
                logger.trace(
                    f"Received message type {response.get('type')}: "
                    f"{response.get('line', '')[:80]}..."
                )
                messages.append(response)

            except TimeoutError:
                # Normal timeout - no more messages within duration
                break
            except Exception as e:
                # Connection errors should propagate up so caller can reconnect
                error_msg = str(e).lower()
                if "connection" in error_msg and ("closed" in error_msg or "lost" in error_msg):
                    raise DirectoryClientError(f"Connection lost: {e}") from e
                # Other errors (JSON parse, etc) - log and continue
                logger.warning(f"Error processing message: {e}")
                continue

        logger.trace(f"Collected {len(messages)} messages in {duration}s")
        return messages

    async def fetch_orderbooks(self) -> tuple[list[Offer], list[FidelityBond]]:
        """
        Fetch orderbooks from all connected peers.

        Trusts the directory's orderbook as authoritative - if a maker has an offer
        in the directory, they are considered online. The directory server maintains
        the connection state and removes offers when makers disconnect.

        Returns:
            Tuple of (offers, fidelity_bonds)
        """
        # Use get_peerlist_with_features to populate peer_features cache for neutrino_compat
        # detection. The peerlist itself is not used for offer filtering.
        peers_with_features = await self.get_peerlist_with_features()
        offers: list[Offer] = []
        bonds: list[FidelityBond] = []
        bond_utxo_set: set[str] = set()

        # Log peer count for visibility (but don't filter based on peerlist)
        if peers_with_features:
            logger.info(f"Found {len(peers_with_features)} peers on {self.host}:{self.port}")

        if not self.connection:
            raise DirectoryClientError("Not connected")

        pubmsg = {
            "type": MessageType.PUBMSG.value,
            "line": f"{self.nick}!PUBLIC!!orderbook",
        }
        await self.connection.send(json.dumps(pubmsg).encode("utf-8"))
        logger.debug("Sent !orderbook broadcast to PUBLIC")

        # Based on empirical testing with the main JoinMarket directory server over Tor,
        # the response time distribution shows significant delays:
        # - Median: ~38s (50% of offers)
        # - 75th percentile: ~65s
        # - 90th percentile: ~93s
        # - 95th percentile: ~101s
        # - 99th percentile: ~115s
        # - Max observed: ~119s
        # Using 120s (95th percentile + 20% buffer) ensures we capture ~95% of all offers.
        # The wide distribution is due to Tor latency and maker response times.
        #
        # For testing, this can be overridden via the JM_ORDERBOOK_WAIT_TIME environment variable.
        listen_duration = float(os.environ.get("JM_ORDERBOOK_WAIT_TIME", "120.0"))
        logger.info(f"Listening for offer announcements for {listen_duration} seconds...")
        messages = await self.listen_for_messages(duration=listen_duration)

        logger.info(f"Received {len(messages)} messages, parsing offers...")

        for response in messages:
            try:
                msg_type = response.get("type")
                line = response["line"]

                # Handle PEERLIST messages to keep peer features and active peers updated
                if msg_type == MessageType.PEERLIST.value:
                    try:
                        self._handle_peerlist_response(line)
                        logger.debug("Processed PEERLIST during orderbook fetch")
                    except Exception as e:
                        logger.debug(f"Failed to process PEERLIST: {e}")
                    continue

                if msg_type not in (MessageType.PUBMSG.value, MessageType.PRIVMSG.value):
                    logger.debug(f"Skipping message type {msg_type}")
                    continue

                logger.debug(f"Processing message type {msg_type}: {line[:100]}...")

                parts = line.split(COMMAND_PREFIX)
                if len(parts) < 3:
                    logger.debug(f"Message has insufficient parts: {len(parts)}")
                    continue

                from_nick = parts[0]
                to_nick = parts[1]
                rest = COMMAND_PREFIX.join(parts[2:])

                if not rest.strip():
                    logger.debug("Empty message content")
                    continue

                offer_types = ["sw0absoffer", "sw0reloffer", "swabsoffer", "swreloffer"]
                parsed = False
                for offer_type in offer_types:
                    if rest.startswith(offer_type):
                        try:
                            # Split on '!' to extract flags (neutrino, tbond)
                            # Format: sw0reloffer 0 750000 790107726787 500 0.001!neutrino!tbond <proof>
                            # NOTE: !neutrino in offers is deprecated - primary detection is via
                            # handshake features. This parsing is kept for backwards compatibility.
                            rest_parts = rest.split(COMMAND_PREFIX)
                            offer_line = rest_parts[0]
                            bond_data = None
                            neutrino_compat = False

                            # Parse flags after the offer line (backwards compat for !neutrino)
                            for flag_part in rest_parts[1:]:
                                if flag_part.startswith("neutrino"):
                                    neutrino_compat = True
                                    logger.debug(f"Maker {from_nick} requires neutrino_compat")
                                elif flag_part.startswith("tbond "):
                                    bond_parts = flag_part[6:].split()
                                    if bond_parts:
                                        bond_proof_b64 = bond_parts[0]
                                        # For PRIVMSG, the maker signs with taker's actual nick
                                        # For PUBMSG, both nicks are the maker's (self-signed)
                                        is_privmsg = msg_type == MessageType.PRIVMSG.value
                                        taker_nick_for_proof = to_nick if is_privmsg else from_nick
                                        bond_data = parse_fidelity_bond_proof(
                                            bond_proof_b64, from_nick, taker_nick_for_proof
                                        )
                                        if bond_data:
                                            logger.debug(
                                                f"Parsed fidelity bond from {from_nick}: "
                                                f"txid={bond_data['utxo_txid'][:16]}..., "
                                                f"locktime={bond_data['locktime']}"
                                            )

                                            utxo_str = (
                                                f"{bond_data['utxo_txid']}:{bond_data['utxo_vout']}"
                                            )
                                            if utxo_str not in bond_utxo_set:
                                                bond_utxo_set.add(utxo_str)
                                                bond = FidelityBond(
                                                    counterparty=from_nick,
                                                    utxo_txid=bond_data["utxo_txid"],
                                                    utxo_vout=bond_data["utxo_vout"],
                                                    locktime=bond_data["locktime"],
                                                    script=bond_data["utxo_pub"],
                                                    utxo_confirmations=0,
                                                    cert_expiry=bond_data["cert_expiry"],
                                                    fidelity_bond_data=bond_data,
                                                )
                                                bonds.append(bond)

                            offer_parts = offer_line.split()
                            if len(offer_parts) < 6:
                                logger.warning(
                                    f"Offer from {from_nick} has {len(offer_parts)} parts, need 6"
                                )
                                continue

                            oid = int(offer_parts[1])
                            minsize = int(offer_parts[2])
                            maxsize = int(offer_parts[3])
                            txfee = int(offer_parts[4])
                            cjfee_str = offer_parts[5]

                            if offer_type in ["sw0absoffer", "swabsoffer"]:
                                cjfee = str(int(cjfee_str))
                            else:
                                cjfee = str(Decimal(cjfee_str))

                            offer = Offer(
                                counterparty=from_nick,
                                oid=oid,
                                ordertype=OfferType(offer_type),
                                minsize=minsize,
                                maxsize=maxsize,
                                txfee=txfee,
                                cjfee=cjfee,
                                fidelity_bond_value=0,
                                neutrino_compat=neutrino_compat,
                                features=self.peer_features.get(from_nick, {}),
                            )
                            offers.append(offer)

                            if bond_data:
                                offer.fidelity_bond_data = bond_data

                            logger.debug(
                                f"Parsed {offer_type} from {from_nick}: "
                                f"oid={oid}, size={minsize}-{maxsize}, fee={cjfee}, "
                                f"has_bond={bond_data is not None}, neutrino_compat={neutrino_compat}"
                            )
                            parsed = True
                        except Exception as e:
                            logger.warning(f"Failed to parse {offer_type} from {from_nick}: {e}")
                        break

                if not parsed:
                    logger.debug(f"Message not an offer: {rest[:50]}...")

            except Exception as e:
                logger.warning(f"Failed to process message: {e}")
                continue

        # NOTE: We trust the directory's orderbook as authoritative.
        # If a maker has an offer in the directory, they are considered online.
        # The directory server maintains the connection state and removes offers
        # when makers disconnect. Peerlist responses may be delayed or unavailable,
        # so we don't filter offers based on peerlist presence.
        #
        # This prevents incorrectly rejecting valid offers from active makers
        # whose peerlist entry hasn't been received yet.

        logger.info(
            f"Fetched {len(offers)} offers and {len(bonds)} fidelity bonds from "
            f"{self.host}:{self.port}"
        )
        return offers, bonds

    async def send_public_message(self, message: str) -> None:
        """
        Send a public message to all peers.

        Args:
            message: Message to broadcast
        """
        if not self.connection:
            raise DirectoryClientError("Not connected")

        pubmsg = {
            "type": MessageType.PUBMSG.value,
            "line": f"{self.nick}!PUBLIC!{message}",
        }
        await self.connection.send(json.dumps(pubmsg).encode("utf-8"))

    async def send_private_message(self, recipient: str, command: str, data: str) -> None:
        """
        Send a signed private message to a specific peer.

        JoinMarket requires all private messages to be signed with the sender's
        nick private key. The signature is appended to the message:
        Format: "!<command> <data> <pubkey_hex> <signature>"

        The message-to-sign is: data + hostid (to prevent replay attacks)
        Note: Only the data is signed, NOT the command prefix.

        Args:
            recipient: Target peer nick
            command: Command name (without ! prefix, e.g., 'fill', 'auth', 'tx')
            data: Command arguments to send (will be signed)
        """
        if not self.connection:
            raise DirectoryClientError("Not connected")

        # Sign just the data (not the command) with our nick identity
        # Reference: rawmessage = ' '.join(message[1:].split(' ')[1:-2])
        # This means they extract [1:-2] which is the args, not the command
        # So we sign: data + hostid
        # IMPORTANT: Always use ONION_HOSTID ("onion-network"), NOT the directory hostname.
        # The reference implementation uses a fixed hostid for ALL onion message channels
        # (see jmdaemon/onionmc.py line 635: self.hostid = "onion-network")
        signed_data = self.nick_identity.sign_message(data, ONION_HOSTID)

        # JoinMarket message format: from_nick!to_nick!command <args>
        # The COMMAND_PREFIX ("!") is used ONLY as a field separator between
        # from_nick, to_nick, and the message content. The command itself
        # does NOT have a "!" prefix.
        # Format: "<command> <signed_data>" where signed_data = "<data> <pubkey_hex> <sig_b64>"
        full_message = f"{command} {signed_data}"

        privmsg = {
            "type": MessageType.PRIVMSG.value,
            "line": f"{self.nick}!{recipient}!{full_message}",
        }
        await self.connection.send(json.dumps(privmsg).encode("utf-8"))

    async def close(self) -> None:
        """Close the connection to the directory server."""
        if self.connection:
            try:
                # NOTE: We skip sending DISCONNECT (801) because the reference implementation
                # crashes on unhandled control messages.
                pass
            except Exception:
                pass
            finally:
                await self.connection.close()
                self.connection = None

    def stop(self) -> None:
        """Stop continuous listening."""
        self.running = False

    async def listen_continuously(self, request_orderbook: bool = True) -> None:
        """
        Continuously listen for messages and update internal offer/bond caches.

        This method runs indefinitely until stop() is called or connection is lost.
        Used by orderbook_watcher and maker to maintain live orderbook state.

        Args:
            request_orderbook: If True, send !orderbook request on startup to get
                current offers from makers. Set to False for maker bots that don't
                need to receive other offers.
        """
        if not self.connection:
            raise DirectoryClientError("Not connected")

        logger.info(f"Starting continuous listening on {self.host}:{self.port}")
        self.running = True

        # Fetch peerlist with features to populate peer_features cache
        # This allows us to know which features each maker supports
        # Note: This may return empty if directory doesn't support GETPEERLIST (reference impl)
        try:
            await self.get_peerlist_with_features()
            if self._peerlist_supported:
                logger.info(f"Populated peer_features cache with {len(self.peer_features)} peers")
            else:
                logger.info(
                    "Directory doesn't support GETPEERLIST - peer features will be "
                    "learned from offer messages"
                )
        except Exception as e:
            logger.warning(f"Failed to fetch peerlist with features: {e}")

        # Request current orderbook from makers
        if request_orderbook:
            try:
                pubmsg = {
                    "type": MessageType.PUBMSG.value,
                    "line": f"{self.nick}!PUBLIC!!orderbook",
                }
                await self.connection.send(json.dumps(pubmsg).encode("utf-8"))
                logger.info("Sent !orderbook request to get current offers")
            except Exception as e:
                logger.warning(f"Failed to send !orderbook request: {e}")

        # Track when we last sent an orderbook request (to avoid spamming)
        import time

        last_orderbook_request = time.time()
        orderbook_request_min_interval = 60.0  # Minimum 60 seconds between requests

        while self.running:
            try:
                # First check if we have buffered messages from previous operations
                # (e.g., messages received while waiting for PEERLIST)
                if not self._message_buffer.empty():
                    message = await self._message_buffer.get()
                    logger.trace("Processing buffered message from queue")
                else:
                    # Read next message with timeout
                    data = await asyncio.wait_for(self.connection.receive(), timeout=5.0)

                    if not data:
                        logger.warning(f"Connection to {self.host}:{self.port} closed")
                        break

                    message = json.loads(data.decode("utf-8"))
                msg_type = message.get("type")
                line = message.get("line", "")

                # Handle PEERLIST responses (from periodic or automatic requests)
                if msg_type == MessageType.PEERLIST.value:
                    try:
                        self._handle_peerlist_response(line)
                    except Exception as e:
                        logger.debug(f"Failed to process PEERLIST: {e}")
                    continue

                # Process PUBMSG and PRIVMSG to update offers/bonds cache
                # Reference implementation sends offer responses to !orderbook via PRIVMSG
                if msg_type in (MessageType.PUBMSG.value, MessageType.PRIVMSG.value):
                    try:
                        parts = line.split(COMMAND_PREFIX)
                        if len(parts) >= 3:
                            from_nick = parts[0]
                            to_nick = parts[1]
                            rest = COMMAND_PREFIX.join(parts[2:])

                            # Accept PUBLIC broadcasts or messages addressed to us
                            if to_nick == "PUBLIC" or to_nick == self.nick:
                                # If we don't have features for this peer, it's a new peer.
                                # Track them with empty features for now - we'll get their features
                                # from the initial peerlist or from their offer messages
                                is_new_peer = from_nick not in self.peer_features
                                current_time = time.time()

                                if is_new_peer:
                                    # Track new peer - merge empty features (will be a no-op
                                    # if we already know their features from another source)
                                    # Features will be populated from offer messages or peerlist
                                    self._merge_peer_features(from_nick, {})
                                    logger.debug(f"Discovered new peer: {from_nick}")

                                    # If directory supports peerlist_features, request updated peerlist
                                    # to get this peer's features immediately
                                    if (
                                        self.directory_peerlist_features
                                        and self._peerlist_supported
                                    ):
                                        try:
                                            # Request peerlist to get features for new peer
                                            # This is a background task - don't block message processing
                                            asyncio.create_task(
                                                self._refresh_peerlist_for_new_peer()
                                            )
                                        except Exception as e:
                                            logger.debug(
                                                f"Failed to request peerlist for new peer: {e}"
                                            )

                                    # Request orderbook from new peer (rate-limited)
                                    if (
                                        request_orderbook
                                        and current_time - last_orderbook_request
                                        > orderbook_request_min_interval
                                    ):
                                        try:
                                            pubmsg = {
                                                "type": MessageType.PUBMSG.value,
                                                "line": f"{self.nick}!PUBLIC!!orderbook",
                                            }
                                            await self.connection.send(
                                                json.dumps(pubmsg).encode("utf-8")
                                            )
                                            last_orderbook_request = current_time
                                            logger.info(
                                                f"Sent !orderbook request for new peer {from_nick}"
                                            )
                                        except Exception as e:
                                            logger.debug(f"Failed to send !orderbook: {e}")

                                # Parse offer announcements
                                for offer_type_prefix in [
                                    "sw0reloffer",
                                    "sw0absoffer",
                                    "swreloffer",
                                    "swabsoffer",
                                ]:
                                    if rest.startswith(offer_type_prefix):
                                        # Separate offer from fidelity bond data
                                        rest_parts = rest.split(COMMAND_PREFIX, 1)
                                        offer_line = rest_parts[0].strip()

                                        # Parse fidelity bond if present
                                        bond_data = None
                                        if len(rest_parts) > 1 and rest_parts[1].startswith(
                                            "tbond "
                                        ):
                                            bond_parts = rest_parts[1][6:].split()
                                            if bond_parts:
                                                bond_proof_b64 = bond_parts[0]
                                                # For PUBLIC announcements, maker uses their own nick
                                                # as taker_nick when creating the proof.
                                                # For PRIVMSG (response to !orderbook), maker signs
                                                # for the recipient (us).
                                                taker_nick_for_proof = (
                                                    from_nick if to_nick == "PUBLIC" else to_nick
                                                )
                                                bond_data = parse_fidelity_bond_proof(
                                                    bond_proof_b64, from_nick, taker_nick_for_proof
                                                )
                                                if bond_data:
                                                    logger.debug(
                                                        f"Parsed fidelity bond from {from_nick}: "
                                                        f"txid={bond_data['utxo_txid'][:16]}..., "
                                                        f"locktime={bond_data['locktime']}"
                                                    )
                                                    # Store bond in bonds cache
                                                    utxo_str = (
                                                        f"{bond_data['utxo_txid']}:"
                                                        f"{bond_data['utxo_vout']}"
                                                    )
                                                    bond = FidelityBond(
                                                        counterparty=from_nick,
                                                        utxo_txid=bond_data["utxo_txid"],
                                                        utxo_vout=bond_data["utxo_vout"],
                                                        locktime=bond_data["locktime"],
                                                        script=bond_data["utxo_pub"],
                                                        utxo_confirmations=0,
                                                        cert_expiry=bond_data["cert_expiry"],
                                                        fidelity_bond_data=bond_data,
                                                    )
                                                    self.bonds[utxo_str] = bond

                                        offer_parts = offer_line.split()
                                        if len(offer_parts) >= 6:
                                            try:
                                                oid = int(offer_parts[1])
                                                minsize = int(offer_parts[2])
                                                maxsize = int(offer_parts[3])
                                                txfee = int(offer_parts[4])
                                                cjfee_str = offer_parts[5]

                                                if offer_type_prefix in [
                                                    "sw0absoffer",
                                                    "swabsoffer",
                                                ]:
                                                    cjfee = str(int(cjfee_str))
                                                else:
                                                    cjfee = str(Decimal(cjfee_str))

                                                offer = Offer(
                                                    counterparty=from_nick,
                                                    oid=oid,
                                                    ordertype=OfferType(offer_type_prefix),
                                                    minsize=minsize,
                                                    maxsize=maxsize,
                                                    txfee=txfee,
                                                    cjfee=cjfee,
                                                    fidelity_bond_value=0,
                                                    fidelity_bond_data=bond_data,
                                                    features=self.peer_features.get(from_nick, {}),
                                                )

                                                # Extract bond UTXO key for deduplication
                                                bond_utxo_key: str | None = None
                                                if bond_data:
                                                    bond_utxo_key = (
                                                        f"{bond_data['utxo_txid']}:"
                                                        f"{bond_data['utxo_vout']}"
                                                    )

                                                # Update cache using tuple key
                                                offer_key = (from_nick, oid)
                                                self._store_offer(offer_key, offer, bond_utxo_key)

                                                # Track this peer as "known" even if peerlist didn't
                                                # return features. This prevents re-triggering new peer
                                                # logic for every message from this peer.
                                                if from_nick not in self.peer_features:
                                                    self.peer_features[from_nick] = {}

                                                logger.debug(
                                                    f"Updated offer cache: {from_nick} "
                                                    f"{offer_type_prefix} oid={oid}"
                                                    + (" (with bond)" if bond_data else "")
                                                )
                                            except Exception as e:
                                                logger.debug(f"Failed to parse offer update: {e}")
                                        break
                    except Exception as e:
                        logger.debug(f"Failed to process PUBMSG: {e}")

            except TimeoutError:
                continue
            except asyncio.CancelledError:
                logger.info(f"Continuous listening on {self.host}:{self.port} cancelled")
                break
            except Exception as e:
                logger.error(f"Error in continuous listening: {e}")
                if self.on_disconnect:
                    self.on_disconnect()
                break

        self.running = False
        logger.info(f"Stopped continuous listening on {self.host}:{self.port}")

    def _store_offer(
        self,
        offer_key: tuple[str, int],
        offer: Offer,
        bond_utxo_key: str | None = None,
    ) -> None:
        """
        Store an offer with timestamp and handle bond-based deduplication.

        When a maker restarts with a new nick but the same fidelity bond, we need to
        remove the old offer(s) associated with that bond to prevent duplicates.

        Args:
            offer_key: Tuple of (counterparty, oid)
            offer: The offer to store
            bond_utxo_key: Bond UTXO key (txid:vout) if offer has a fidelity bond
        """
        current_time = time.time()

        # If this offer has a fidelity bond, check for and remove old offers with same bond
        if bond_utxo_key:
            # Get all offer keys that previously used this bond
            old_offer_keys = self._bond_to_offers.get(bond_utxo_key, set()).copy()

            # Remove old offers from DIFFERENT makers using same bond (maker restart scenario)
            # Keep multiple offers from SAME maker (same counterparty, different oids)
            for old_key in old_offer_keys:
                if (
                    old_key != offer_key
                    and old_key in self.offers
                    and old_key[0] != offer_key[0]  # Different counterparty
                ):
                    logger.debug(
                        f"Removing stale offer from {old_key[0]} oid={old_key[1]} - "
                        f"same bond UTXO now used by {offer_key[0]}"
                    )
                    del self.offers[old_key]

            # Update bond -> offers mapping: add this offer to the set
            if bond_utxo_key not in self._bond_to_offers:
                self._bond_to_offers[bond_utxo_key] = set()
            self._bond_to_offers[bond_utxo_key].add(offer_key)
        else:
            # Remove this offer from any previous bond mapping
            old_offer_data = self.offers.get(offer_key)
            if old_offer_data and old_offer_data.bond_utxo_key:
                old_bond_key = old_offer_data.bond_utxo_key
                if old_bond_key in self._bond_to_offers:
                    self._bond_to_offers[old_bond_key].discard(offer_key)

        # Store the new offer with timestamp
        self.offers[offer_key] = OfferWithTimestamp(
            offer=offer, received_at=current_time, bond_utxo_key=bond_utxo_key
        )

    def _update_offer_features(self, nick: str, features: dict[str, bool]) -> int:
        """
        Update features on all cached offers for a specific peer.

        This is called when we receive updated feature information from peerlist,
        ensuring that offers stored before features were known get updated.

        Args:
            nick: The nick to update features for
            features: New features dict to apply

        Returns:
            Number of offers updated
        """
        updated = 0
        for key, offer_ts in self.offers.items():
            if key[0] == nick:
                # Update features on the cached offer
                # Merge new features with any existing ones
                # (only True values are applied; features are never downgraded)
                for feature, value in features.items():
                    if value:  # Only set true features
                        offer_ts.offer.features[feature] = value
                updated += 1

        if updated > 0:
            logger.debug(
                f"Updated features on {updated} cached offer(s) for {nick}: "
                f"{[k for k, v in features.items() if v]}"
            )

        return updated

    def _merge_peer_features(self, nick: str, new_features: dict[str, bool]) -> None:
        """
        Merge new features into the peer_features cache for a nick.

        Features are cumulative - once a peer advertises a feature, we keep it.
        This prevents losing features when receiving updates from directories
        that don't support peerlist_features.

        Args:
            nick: The peer's nick
            new_features: New features dict to merge (only True values are added)
        """
        existing = self.peer_features.get(nick, {})
        for feature, value in new_features.items():
            if value:  # Only set true features, never downgrade
                existing[feature] = value
        self.peer_features[nick] = existing

    def remove_offers_for_nick(self, nick: str) -> int:
        """
        Remove all offers from a specific nick (e.g., when nick goes offline).

        This is the equivalent of the reference implementation's on_nick_leave callback.

        Args:
            nick: The nick to remove offers for

        Returns:
            Number of offers removed
        """
        keys_to_remove = [key for key in self.offers if key[0] == nick]
        removed = 0

        for key in keys_to_remove:
            offer_data = self.offers.pop(key, None)
            if offer_data:
                removed += 1
                # Clean up bond mapping
                if offer_data.bond_utxo_key and offer_data.bond_utxo_key in self._bond_to_offers:
                    self._bond_to_offers[offer_data.bond_utxo_key].discard(key)

        if removed > 0:
            logger.info(f"Removed {removed} offers for nick {nick} (left/offline)")

        # Also remove from peer_features and active_peers
        self.peer_features.pop(nick, None)
        self._active_peers.pop(nick, None)

        # Remove any bonds from this nick
        bonds_to_remove = [k for k, v in self.bonds.items() if v.counterparty == nick]
        for bond_key in bonds_to_remove:
            del self.bonds[bond_key]

        return removed

    async def _refresh_peerlist_for_new_peer(self) -> None:
        """
        Refresh peerlist to get features for newly discovered peers.

        This is called as a background task when a new peer is discovered
        to immediately fetch their features from the directory's peerlist.
        """
        try:
            # Small delay to batch multiple new peer discoveries
            await asyncio.sleep(2.0)

            # Request peerlist - this will update peer_features
            peers = await self.get_peerlist_with_features()
            if peers:
                logger.debug(
                    f"Refreshed peerlist for new peer discovery: {len(peers)} active peers"
                )
        except Exception as e:
            logger.debug(f"Failed to refresh peerlist for new peer: {e}")

    def get_active_nicks(self) -> set[str]:
        """Get set of nicks from the last peerlist update."""
        return set(self._active_peers.keys())

    def cleanup_stale_offers(self, max_age_seconds: float = 1800.0) -> int:
        """
        Remove offers that haven't been re-announced within the staleness threshold.

        This is a fallback cleanup mechanism for directories that don't support
        GETPEERLIST (reference implementation). For offers with fidelity bonds,
        bond-based deduplication handles most cases, but this catches offers
        from makers that silently went offline.

        Args:
            max_age_seconds: Maximum age in seconds before an offer is considered stale.
                Default is 30 minutes (1800 seconds).

        Returns:
            Number of stale offers removed
        """
        current_time = time.time()
        stale_keys: list[tuple[str, int]] = []

        for key, offer_data in self.offers.items():
            age = current_time - offer_data.received_at
            if age > max_age_seconds:
                stale_keys.append(key)

        removed = 0
        for key in stale_keys:
            removed_offer: OfferWithTimestamp | None = self.offers.pop(key, None)
            if removed_offer:
                removed += 1
                # Clean up bond mapping
                if (
                    removed_offer.bond_utxo_key
                    and removed_offer.bond_utxo_key in self._bond_to_offers
                ):
                    self._bond_to_offers[removed_offer.bond_utxo_key].discard(key)
                logger.debug(
                    f"Removed stale offer from {key[0]} oid={key[1]} "
                    f"(age={current_time - removed_offer.received_at:.0f}s)"
                )

        if removed > 0:
            logger.info(f"Cleaned up {removed} stale offers (older than {max_age_seconds}s)")

        return removed

    def get_current_offers(self) -> list[Offer]:
        """Get the current list of cached offers."""
        return [offer_data.offer for offer_data in self.offers.values()]

    def get_offers_with_timestamps(self) -> list[OfferWithTimestamp]:
        """Get offers with their timestamp metadata."""
        return list(self.offers.values())

    def get_current_bonds(self) -> list[FidelityBond]:
        """Get the current list of cached fidelity bonds."""
        return list(self.bonds.values())

    def supports_extended_utxo_format(self) -> bool:
        """
        Check if we should use extended UTXO format with this directory.

        Extended format (txid:vout:scriptpubkey:blockheight) is used when
        both sides advertise neutrino_compat feature. Protocol version
        is not checked - features are negotiated independently.

        Returns:
            True if extended UTXO format should be used
        """
        return self.neutrino_compat and self.directory_neutrino_compat

    def get_negotiated_version(self) -> int:
        """
        Get the negotiated protocol version.

        Returns:
            Negotiated version (always 5 with feature-based approach)
        """
        return self.negotiated_version if self.negotiated_version is not None else JM_VERSION

Client for connecting to JoinMarket directory servers.

Supports:

- Direct TCP connections (for local/dev)
- Tor connections (for .onion addresses)
- Handshake protocol
- Peerlist fetching
- Orderbook fetching
- Continuous listening for updates

Initialize DirectoryClient (a usage sketch follows the argument list below).

Args

host
Directory server hostname or .onion address
port
Directory server port
network
Bitcoin network (mainnet, testnet, signet, regtest)
nick_identity
NickIdentity for message signing (generated if None)
location
Our location string (onion address or NOT-SERVING-ONION)
socks_host
SOCKS proxy host for Tor
socks_port
SOCKS proxy port for Tor
timeout
Connection timeout in seconds
max_message_size
Maximum message size in bytes
on_disconnect
Callback when connection drops
neutrino_compat
Advertise support for Neutrino-compatible UTXO metadata
peerlist_timeout
Timeout for first PEERLIST chunk (default 60s, subsequent chunks use 5s)
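
A minimal usage sketch, not taken from the library itself: it assumes DirectoryClient and DirectoryClientError are importable from jmcore.directory_client, that the omitted constructor arguments have sensible defaults, and that the host, port, and SOCKS settings shown are placeholders.

import asyncio

# Assumed import path based on the jmcore.directory_client sub-module listed above.
from jmcore.directory_client import DirectoryClient, DirectoryClientError


async def main() -> None:
    # Placeholder directory address and port; socks_host/socks_port assume a local Tor SOCKS proxy.
    client = DirectoryClient(
        host="exampledirectory.onion",
        port=5222,
        network="mainnet",
        socks_host="127.0.0.1",
        socks_port=9050,
    )
    try:
        await client.connect()  # TCP/Tor connection plus handshake
        offers, bonds = await client.fetch_orderbooks()  # broadcasts !orderbook and listens
        print(f"Fetched {len(offers)} offers and {len(bonds)} fidelity bonds")
    except DirectoryClientError as exc:
        print(f"Directory unavailable: {exc}")
    finally:
        await client.close()  # safe even if connect() failed


asyncio.run(main())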

Methods

def cleanup_stale_offers(self, max_age_seconds: float = 1800.0) ‑> int
Expand source code
def cleanup_stale_offers(self, max_age_seconds: float = 1800.0) -> int:
    """
    Remove offers that haven't been re-announced within the staleness threshold.

    This is a fallback cleanup mechanism for directories that don't support
    GETPEERLIST (reference implementation). For offers with fidelity bonds,
    bond-based deduplication handles most cases, but this catches offers
    from makers that silently went offline.

    Args:
        max_age_seconds: Maximum age in seconds before an offer is considered stale.
            Default is 30 minutes (1800 seconds).

    Returns:
        Number of stale offers removed
    """
    current_time = time.time()
    stale_keys: list[tuple[str, int]] = []

    for key, offer_data in self.offers.items():
        age = current_time - offer_data.received_at
        if age > max_age_seconds:
            stale_keys.append(key)

    removed = 0
    for key in stale_keys:
        removed_offer: OfferWithTimestamp | None = self.offers.pop(key, None)
        if removed_offer:
            removed += 1
            # Clean up bond mapping
            if (
                removed_offer.bond_utxo_key
                and removed_offer.bond_utxo_key in self._bond_to_offers
            ):
                self._bond_to_offers[removed_offer.bond_utxo_key].discard(key)
            logger.debug(
                f"Removed stale offer from {key[0]} oid={key[1]} "
                f"(age={current_time - removed_offer.received_at:.0f}s)"
            )

    if removed > 0:
        logger.info(f"Cleaned up {removed} stale offers (older than {max_age_seconds}s)")

    return removed

Remove offers that haven't been re-announced within the staleness threshold.

This is a fallback cleanup mechanism for directories that don't support GETPEERLIST (reference implementation). For offers with fidelity bonds, bond-based deduplication handles most cases, but this catches offers from makers that silently went offline.

Args

max_age_seconds
Maximum age in seconds before an offer is considered stale. Default is 30 minutes (1800 seconds).

Returns

Number of stale offers removed
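
An illustrative sketch of calling this periodically from a long-running component; the interval and max_age_seconds are chosen for illustration, and the client is assumed to be running listen_continuously() so that client.running is True.

import asyncio


async def periodic_stale_offer_cleanup(client: "DirectoryClient", interval: float = 600.0) -> None:
    # Prune offers that have not been re-announced recently while the listen loop is active.
    while client.running:
        removed = client.cleanup_stale_offers(max_age_seconds=1800.0)
        if removed:
            print(f"Pruned {removed} stale offers")
        await asyncio.sleep(interval)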

async def close(self) ‑> None
Expand source code
async def close(self) -> None:
    """Close the connection to the directory server."""
    if self.connection:
        try:
            # NOTE: We skip sending DISCONNECT (801) because the reference implementation
            # crashes on unhandled control messages.
            pass
        except Exception:
            pass
        finally:
            await self.connection.close()
            self.connection = None

Close the connection to the directory server.

async def connect(self) ‑> None
Expand source code
async def connect(self) -> None:
    """Connect to the directory server and perform handshake."""
    try:
        logger.debug(f"DirectoryClient.connect: connecting to {self.host}:{self.port}")
        if not self.host.endswith(".onion"):
            self.connection = await connect_direct(
                self.host,
                self.port,
                self.max_message_size,
                self.timeout,
            )
            logger.debug("DirectoryClient.connect: direct connection established")
        else:
            self.connection = await connect_via_tor(
                self.host,
                self.port,
                self.socks_host,
                self.socks_port,
                self.max_message_size,
                self.timeout,
            )
            logger.debug("DirectoryClient.connect: tor connection established")
        logger.debug("DirectoryClient.connect: starting handshake")
        await self._handshake()
        logger.debug("DirectoryClient.connect: handshake complete")
    except Exception as e:
        logger.error(f"Failed to connect to {self.host}:{self.port}: {e}", exc_info=True)
        # Clean up connection if handshake failed
        if self.connection:
            with contextlib.suppress(Exception):
                await self.connection.close()
            self.connection = None
        raise DirectoryClientError(f"Connection failed: {e}") from e

Connect to the directory server and perform handshake.

async def fetch_orderbooks(self) ‑> tuple[list[Offer], list[FidelityBond]]
Expand source code
async def fetch_orderbooks(self) -> tuple[list[Offer], list[FidelityBond]]:
    """
    Fetch orderbooks from all connected peers.

    Trusts the directory's orderbook as authoritative - if a maker has an offer
    in the directory, they are considered online. The directory server maintains
    the connection state and removes offers when makers disconnect.

    Returns:
        Tuple of (offers, fidelity_bonds)
    """
    # Use get_peerlist_with_features to populate peer_features cache for neutrino_compat
    # detection. The peerlist itself is not used for offer filtering.
    peers_with_features = await self.get_peerlist_with_features()
    offers: list[Offer] = []
    bonds: list[FidelityBond] = []
    bond_utxo_set: set[str] = set()

    # Log peer count for visibility (but don't filter based on peerlist)
    if peers_with_features:
        logger.info(f"Found {len(peers_with_features)} peers on {self.host}:{self.port}")

    if not self.connection:
        raise DirectoryClientError("Not connected")

    pubmsg = {
        "type": MessageType.PUBMSG.value,
        "line": f"{self.nick}!PUBLIC!!orderbook",
    }
    await self.connection.send(json.dumps(pubmsg).encode("utf-8"))
    logger.debug("Sent !orderbook broadcast to PUBLIC")

    # Based on empirical testing with the main JoinMarket directory server over Tor,
    # the response time distribution shows significant delays:
    # - Median: ~38s (50% of offers)
    # - 75th percentile: ~65s
    # - 90th percentile: ~93s
    # - 95th percentile: ~101s
    # - 99th percentile: ~115s
    # - Max observed: ~119s
    # Using 120s (95th percentile + 20% buffer) ensures we capture ~95% of all offers.
    # The wide distribution is due to Tor latency and maker response times.
    #
    # For testing, this can be overridden via the JM_ORDERBOOK_WAIT_TIME environment variable.
    listen_duration = float(os.environ.get("JM_ORDERBOOK_WAIT_TIME", "120.0"))
    logger.info(f"Listening for offer announcements for {listen_duration} seconds...")
    messages = await self.listen_for_messages(duration=listen_duration)

    logger.info(f"Received {len(messages)} messages, parsing offers...")

    for response in messages:
        try:
            msg_type = response.get("type")
            line = response["line"]

            # Handle PEERLIST messages to keep peer features and active peers updated
            if msg_type == MessageType.PEERLIST.value:
                try:
                    self._handle_peerlist_response(line)
                    logger.debug("Processed PEERLIST during orderbook fetch")
                except Exception as e:
                    logger.debug(f"Failed to process PEERLIST: {e}")
                continue

            if msg_type not in (MessageType.PUBMSG.value, MessageType.PRIVMSG.value):
                logger.debug(f"Skipping message type {msg_type}")
                continue

            logger.debug(f"Processing message type {msg_type}: {line[:100]}...")

            parts = line.split(COMMAND_PREFIX)
            if len(parts) < 3:
                logger.debug(f"Message has insufficient parts: {len(parts)}")
                continue

            from_nick = parts[0]
            to_nick = parts[1]
            rest = COMMAND_PREFIX.join(parts[2:])

            if not rest.strip():
                logger.debug("Empty message content")
                continue

            offer_types = ["sw0absoffer", "sw0reloffer", "swabsoffer", "swreloffer"]
            parsed = False
            for offer_type in offer_types:
                if rest.startswith(offer_type):
                    try:
                        # Split on '!' to extract flags (neutrino, tbond)
                        # Format: sw0reloffer 0 750000 790107726787 500 0.001!neutrino!tbond <proof>
                        # NOTE: !neutrino in offers is deprecated - primary detection is via
                        # handshake features. This parsing is kept for backwards compatibility.
                        rest_parts = rest.split(COMMAND_PREFIX)
                        offer_line = rest_parts[0]
                        bond_data = None
                        neutrino_compat = False

                        # Parse flags after the offer line (backwards compat for !neutrino)
                        for flag_part in rest_parts[1:]:
                            if flag_part.startswith("neutrino"):
                                neutrino_compat = True
                                logger.debug(f"Maker {from_nick} requires neutrino_compat")
                            elif flag_part.startswith("tbond "):
                                bond_parts = flag_part[6:].split()
                                if bond_parts:
                                    bond_proof_b64 = bond_parts[0]
                                    # For PRIVMSG, the maker signs with taker's actual nick
                                    # For PUBMSG, both nicks are the maker's (self-signed)
                                    is_privmsg = msg_type == MessageType.PRIVMSG.value
                                    taker_nick_for_proof = to_nick if is_privmsg else from_nick
                                    bond_data = parse_fidelity_bond_proof(
                                        bond_proof_b64, from_nick, taker_nick_for_proof
                                    )
                                    if bond_data:
                                        logger.debug(
                                            f"Parsed fidelity bond from {from_nick}: "
                                            f"txid={bond_data['utxo_txid'][:16]}..., "
                                            f"locktime={bond_data['locktime']}"
                                        )

                                        utxo_str = (
                                            f"{bond_data['utxo_txid']}:{bond_data['utxo_vout']}"
                                        )
                                        if utxo_str not in bond_utxo_set:
                                            bond_utxo_set.add(utxo_str)
                                            bond = FidelityBond(
                                                counterparty=from_nick,
                                                utxo_txid=bond_data["utxo_txid"],
                                                utxo_vout=bond_data["utxo_vout"],
                                                locktime=bond_data["locktime"],
                                                script=bond_data["utxo_pub"],
                                                utxo_confirmations=0,
                                                cert_expiry=bond_data["cert_expiry"],
                                                fidelity_bond_data=bond_data,
                                            )
                                            bonds.append(bond)

                        offer_parts = offer_line.split()
                        if len(offer_parts) < 6:
                            logger.warning(
                                f"Offer from {from_nick} has {len(offer_parts)} parts, need 6"
                            )
                            continue

                        oid = int(offer_parts[1])
                        minsize = int(offer_parts[2])
                        maxsize = int(offer_parts[3])
                        txfee = int(offer_parts[4])
                        cjfee_str = offer_parts[5]

                        if offer_type in ["sw0absoffer", "swabsoffer"]:
                            cjfee = str(int(cjfee_str))
                        else:
                            cjfee = str(Decimal(cjfee_str))

                        offer = Offer(
                            counterparty=from_nick,
                            oid=oid,
                            ordertype=OfferType(offer_type),
                            minsize=minsize,
                            maxsize=maxsize,
                            txfee=txfee,
                            cjfee=cjfee,
                            fidelity_bond_value=0,
                            neutrino_compat=neutrino_compat,
                            features=self.peer_features.get(from_nick, {}),
                        )
                        offers.append(offer)

                        if bond_data:
                            offer.fidelity_bond_data = bond_data

                        logger.debug(
                            f"Parsed {offer_type} from {from_nick}: "
                            f"oid={oid}, size={minsize}-{maxsize}, fee={cjfee}, "
                            f"has_bond={bond_data is not None}, neutrino_compat={neutrino_compat}"
                        )
                        parsed = True
                    except Exception as e:
                        logger.warning(f"Failed to parse {offer_type} from {from_nick}: {e}")
                    break

            if not parsed:
                logger.debug(f"Message not an offer: {rest[:50]}...")

        except Exception as e:
            logger.warning(f"Failed to process message: {e}")
            continue

    # NOTE: We trust the directory's orderbook as authoritative.
    # If a maker has an offer in the directory, they are considered online.
    # The directory server maintains the connection state and removes offers
    # when makers disconnect. Peerlist responses may be delayed or unavailable,
    # so we don't filter offers based on peerlist presence.
    #
    # This prevents incorrectly rejecting valid offers from active makers
    # whose peerlist entry hasn't been received yet.

    logger.info(
        f"Fetched {len(offers)} offers and {len(bonds)} fidelity bonds from "
        f"{self.host}:{self.port}"
    )
    return offers, bonds

Fetch orderbooks from all connected peers.

Trusts the directory's orderbook as authoritative - if a maker has an offer in the directory, they are considered online. The directory server maintains the connection state and removes offers when makers disconnect.

Returns

Tuple of (offers, fidelity_bonds)
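
A minimal usage sketch. The orderbook-fetch coroutine shown in the source above is assumed here to be exposed as fetch_orderbook, and client stands for an already connected DirectoryClient; the shortened JM_ORDERBOOK_WAIT_TIME value is only for testing, as noted in the source comments.

import os

async def dump_orderbook(client) -> None:
    # Illustrative: shorten the 120 s default listen window for a quick test run.
    os.environ.setdefault("JM_ORDERBOOK_WAIT_TIME", "30")

    offers, bonds = await client.fetch_orderbook()  # assumed method name
    for offer in offers:
        print(
            f"{offer.counterparty} oid={offer.oid} {offer.ordertype.value} "
            f"size={offer.minsize}-{offer.maxsize} cjfee={offer.cjfee}"
        )
    print(f"{len(offers)} offers, {len(bonds)} fidelity bonds")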

def get_active_nicks(self) ‑> set[str]
Expand source code
def get_active_nicks(self) -> set[str]:
    """Get set of nicks from the last peerlist update."""
    return set(self._active_peers.keys())

Get set of nicks from the last peerlist update.

def get_current_bonds(self) ‑> list[FidelityBond]
Expand source code
def get_current_bonds(self) -> list[FidelityBond]:
    """Get the current list of cached fidelity bonds."""
    return list(self.bonds.values())

Get the current list of cached fidelity bonds.

def get_current_offers(self) ‑> list[Offer]
Expand source code
def get_current_offers(self) -> list[Offer]:
    """Get the current list of cached offers."""
    return [offer_data.offer for offer_data in self.offers.values()]

Get the current list of cached offers.

def get_negotiated_version(self) ‑> int
Expand source code
def get_negotiated_version(self) -> int:
    """
    Get the negotiated protocol version.

    Returns:
        Negotiated version (always 5 with feature-based approach)
    """
    return self.negotiated_version if self.negotiated_version is not None else JM_VERSION

Get the negotiated protocol version.

Returns

Negotiated version (always 5 with feature-based approach)

def get_offers_with_timestamps(self) ‑> list[OfferWithTimestamp]
Expand source code
def get_offers_with_timestamps(self) -> list[OfferWithTimestamp]:
    """Get offers with their timestamp metadata."""
    return list(self.offers.values())

Get offers with their timestamp metadata.

async def get_peerlist(self) ‑> list[str] | None
Expand source code
async def get_peerlist(self) -> list[str] | None:
    """
    Fetch the current list of connected peers.

    Note: Reference implementation directories do NOT support GETPEERLIST.
    This method shares peerlist support tracking with get_peerlist_with_features().

    The directory may send multiple PEERLIST messages (chunked response) to
    avoid overwhelming slow Tor connections. This method accumulates peers
    from all chunks.

    Returns:
        List of active peer nicks. Returns empty list if directory doesn't
        support GETPEERLIST. Returns None if rate-limited (use cached data).
    """
    if not self.connection:
        raise DirectoryClientError("Not connected")

    # Skip if we already know this directory doesn't support GETPEERLIST
    # (only applies to directories that didn't announce peerlist_features)
    if self._peerlist_supported is False and not self.directory_peerlist_features:
        logger.debug("Skipping GETPEERLIST - directory doesn't support it")
        return []

    # Rate-limit peerlist requests to avoid spamming
    import time

    current_time = time.time()
    if current_time - self._last_peerlist_request_time < self._peerlist_min_interval:
        logger.debug(
            f"Skipping GETPEERLIST - rate limited "
            f"(last request {current_time - self._last_peerlist_request_time:.1f}s ago)"
        )
        return None

    self._last_peerlist_request_time = current_time

    getpeerlist_msg = {"type": MessageType.GETPEERLIST.value, "line": ""}
    logger.debug("Sending GETPEERLIST request")
    await self.connection.send(json.dumps(getpeerlist_msg).encode("utf-8"))

    start_time = asyncio.get_event_loop().time()

    # Timeout for waiting for the first PEERLIST response
    first_response_timeout = (
        self._peerlist_timeout if self.directory_peerlist_features else self.timeout
    )

    # Timeout between chunks - when this expires after receiving at least one
    # PEERLIST message, we know the directory has finished sending all chunks
    inter_chunk_timeout = self._peerlist_chunk_timeout

    # Accumulate peers from multiple PEERLIST chunks
    all_peers: list[str] = []
    chunks_received = 0
    got_first_response = False

    while True:
        elapsed = asyncio.get_event_loop().time() - start_time

        # Determine timeout for this receive
        if not got_first_response:
            remaining = first_response_timeout - elapsed
            if remaining <= 0:
                self._handle_peerlist_timeout()
                return []
            receive_timeout = remaining
        else:
            receive_timeout = inter_chunk_timeout

        try:
            response_data = await asyncio.wait_for(
                self.connection.receive(), timeout=receive_timeout
            )
            response = json.loads(response_data.decode("utf-8"))
            msg_type = response.get("type")

            if msg_type == MessageType.PEERLIST.value:
                got_first_response = True
                chunks_received += 1
                peerlist_str = response.get("line", "")

                # Parse this chunk
                chunk_peers: list[str] = []
                if peerlist_str:
                    for entry in peerlist_str.split(","):
                        if not entry or not entry.strip():
                            continue
                        if NICK_PEERLOCATOR_SEPARATOR not in entry:
                            logger.debug(f"Skipping metadata entry in peerlist: '{entry}'")
                            continue
                        try:
                            nick, location, disconnected, _features = parse_peerlist_entry(
                                entry
                            )
                            logger.debug(
                                f"Parsed peer: {nick} at {location}, "
                                f"disconnected={disconnected}"
                            )
                            if not disconnected:
                                chunk_peers.append(nick)
                        except ValueError as e:
                            logger.warning(f"Failed to parse peerlist entry '{entry}': {e}")
                            continue

                all_peers.extend(chunk_peers)
                logger.debug(
                    f"Received PEERLIST chunk {chunks_received} with "
                    f"{len(chunk_peers)} peers (total: {len(all_peers)})"
                )
                continue

            # Buffer unexpected messages
            logger.trace(
                f"Buffering unexpected message type {msg_type} while waiting for PEERLIST"
            )
            await self._message_buffer.put(response)

        except TimeoutError:
            if not got_first_response:
                self._handle_peerlist_timeout()
                return []
            # Inter-chunk timeout means we're done
            break

        except Exception as e:
            logger.warning(f"Error receiving/parsing message while waiting for PEERLIST: {e}")
            elapsed = asyncio.get_event_loop().time() - start_time
            if not got_first_response and elapsed > first_response_timeout:
                self._handle_peerlist_timeout()
                return []
            if got_first_response:
                break

    # Mark peerlist as supported since we got a valid response
    self._peerlist_supported = True
    self._peerlist_timeout_count = 0

    logger.info(f"Received {len(all_peers)} active peers from {self.host}:{self.port}")
    return all_peers

Fetch the current list of connected peers.

Note: Reference implementation directories do NOT support GETPEERLIST. This method shares peerlist support tracking with get_peerlist_with_features().

The directory may send multiple PEERLIST messages (chunked response) to avoid overwhelming slow Tor connections. This method accumulates peers from all chunks.

Returns

List of active peer nicks. Returns empty list if directory doesn't support GETPEERLIST. Returns None if rate-limited (use cached data).
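
A sketch of handling the three return cases; client and cached_nicks are illustrative names for a connected DirectoryClient and a previously stored result.

async def refresh_peers(client, cached_nicks: list[str]) -> list[str]:
    # None  -> rate-limited: reuse previously cached data
    # []    -> directory doesn't support GETPEERLIST (or it timed out)
    # [...] -> fresh list of active peer nicks
    peers = await client.get_peerlist()
    if peers is None:
        return cached_nicks  # rate-limited: fall back to cache
    if not peers:
        return cached_nicks  # unsupported directory: nothing new learned
    return peers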

async def get_peerlist_with_features(self) ‑> list[tuple[str, str, FeatureSet]]
Expand source code
async def get_peerlist_with_features(self) -> list[tuple[str, str, FeatureSet]]:
    """
    Fetch the current list of connected peers with their features.

    Uses the standard GETPEERLIST message. If the directory supports
    peerlist_features, the response will include F: suffix with features.

    Note: Reference implementation directories do NOT support GETPEERLIST.
    This method tracks whether the directory supports it and skips requests
    to unsupported directories to avoid spamming warnings in their logs.

    The directory may send multiple PEERLIST messages (chunked response) to
    avoid overwhelming slow Tor connections. This method accumulates peers
    from all chunks until no more PEERLIST messages arrive within the
    inter-chunk timeout.

    Returns:
        List of (nick, location, features) tuples for active peers.
        Features will be empty for directories that don't support peerlist_features.
        Returns empty list if directory doesn't support GETPEERLIST or is rate-limited.
    """
    if not self.connection:
        raise DirectoryClientError("Not connected")

    # Skip if we already know this directory doesn't support GETPEERLIST
    # (only applies to directories that didn't announce peerlist_features)
    if self._peerlist_supported is False and not self.directory_peerlist_features:
        logger.debug("Skipping GETPEERLIST - directory doesn't support it")
        return []

    # Rate-limit peerlist requests to avoid spamming
    import time

    current_time = time.time()
    if current_time - self._last_peerlist_request_time < self._peerlist_min_interval:
        logger.debug(
            f"Skipping GETPEERLIST - rate limited "
            f"(last request {current_time - self._last_peerlist_request_time:.1f}s ago)"
        )
        return []  # Return empty - will use offers for nick tracking

    self._last_peerlist_request_time = current_time

    getpeerlist_msg = {"type": MessageType.GETPEERLIST.value, "line": ""}
    logger.debug("Sending GETPEERLIST request")
    await self.connection.send(json.dumps(getpeerlist_msg).encode("utf-8"))

    start_time = asyncio.get_event_loop().time()

    # Timeout for waiting for the first PEERLIST response
    # Use longer timeout for directories that support peerlist_features
    first_response_timeout = (
        self._peerlist_timeout if self.directory_peerlist_features else self.timeout
    )

    # Timeout between chunks - when this expires after receiving at least one
    # PEERLIST message, we know the directory has finished sending all chunks
    inter_chunk_timeout = self._peerlist_chunk_timeout

    # Accumulate peers from multiple PEERLIST chunks
    all_peers: list[tuple[str, str, FeatureSet]] = []
    chunks_received = 0
    got_first_response = False

    while True:
        elapsed = asyncio.get_event_loop().time() - start_time

        # Determine timeout for this receive
        if not got_first_response:
            # Waiting for first PEERLIST - use full timeout
            remaining = first_response_timeout - elapsed
            if remaining <= 0:
                self._handle_peerlist_timeout()
                return []
            receive_timeout = remaining
        else:
            # Already received at least one chunk - use shorter inter-chunk timeout
            receive_timeout = inter_chunk_timeout

        try:
            response_data = await asyncio.wait_for(
                self.connection.receive(), timeout=receive_timeout
            )
            response = json.loads(response_data.decode("utf-8"))
            msg_type = response.get("type")

            if msg_type == MessageType.PEERLIST.value:
                got_first_response = True
                chunks_received += 1
                peerlist_str = response.get("line", "")
                chunk_peers = self._handle_peerlist_response(peerlist_str)
                all_peers.extend(chunk_peers)
                logger.debug(
                    f"Received PEERLIST chunk {chunks_received} with "
                    f"{len(chunk_peers)} peers (total: {len(all_peers)})"
                )
                # Continue to check for more chunks
                continue

            # Buffer unexpected messages (like PUBMSG offers) for later processing
            logger.trace(
                f"Buffering unexpected message type {msg_type} while waiting for PEERLIST"
            )
            await self._message_buffer.put(response)

        except TimeoutError:
            if not got_first_response:
                # Never received any PEERLIST - this is a real timeout
                self._handle_peerlist_timeout()
                return []
            # Received at least one chunk, inter-chunk timeout means we're done
            logger.debug(
                f"Peerlist complete: received {len(all_peers)} peers "
                f"in {chunks_received} chunks"
            )
            break

        except Exception as e:
            logger.warning(f"Error receiving/parsing message while waiting for PEERLIST: {e}")
            elapsed = asyncio.get_event_loop().time() - start_time
            if not got_first_response and elapsed > first_response_timeout:
                self._handle_peerlist_timeout()
                return []
            # If we already have some data, return what we have
            if got_first_response:
                break

    # Success - reset timeout counter and mark as supported
    self._peerlist_timeout_count = 0
    self._peerlist_supported = True
    return all_peers

Fetch the current list of connected peers with their features.

Uses the standard GETPEERLIST message. If the directory supports peerlist_features, the response will include F: suffix with features.

Note: Reference implementation directories do NOT support GETPEERLIST. This method tracks whether the directory supports it and skips requests to unsupported directories to avoid spamming warnings in their logs.

The directory may send multiple PEERLIST messages (chunked response) to avoid overwhelming slow Tor connections. This method accumulates peers from all chunks until no more PEERLIST messages arrive within the inter-chunk timeout.

Returns

List of (nick, location, features) tuples for active peers. Features will be empty for directories that don't support peerlist_features. Returns empty list if directory doesn't support GETPEERLIST or is rate-limited.
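
A sketch of consuming the returned tuples; the supports_neutrino_compat() helper is part of the FeatureSet class documented further below.

async def neutrino_capable_peers(client) -> list[str]:
    # Illustrative: keep only peers that advertise neutrino_compat.
    peers = await client.get_peerlist_with_features()
    return [
        nick
        for nick, _location, features in peers
        if features.supports_neutrino_compat()
    ]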

async def listen_continuously(self, request_orderbook: bool = True) ‑> None
Expand source code
async def listen_continuously(self, request_orderbook: bool = True) -> None:
    """
    Continuously listen for messages and update internal offer/bond caches.

    This method runs indefinitely until stop() is called or connection is lost.
    Used by orderbook_watcher and maker to maintain live orderbook state.

    Args:
        request_orderbook: If True, send !orderbook request on startup to get
            current offers from makers. Set to False for maker bots that don't
            need to receive other offers.
    """
    if not self.connection:
        raise DirectoryClientError("Not connected")

    logger.info(f"Starting continuous listening on {self.host}:{self.port}")
    self.running = True

    # Fetch peerlist with features to populate peer_features cache
    # This allows us to know which features each maker supports
    # Note: This may return empty if directory doesn't support GETPEERLIST (reference impl)
    try:
        await self.get_peerlist_with_features()
        if self._peerlist_supported:
            logger.info(f"Populated peer_features cache with {len(self.peer_features)} peers")
        else:
            logger.info(
                "Directory doesn't support GETPEERLIST - peer features will be "
                "learned from offer messages"
            )
    except Exception as e:
        logger.warning(f"Failed to fetch peerlist with features: {e}")

    # Request current orderbook from makers
    if request_orderbook:
        try:
            pubmsg = {
                "type": MessageType.PUBMSG.value,
                "line": f"{self.nick}!PUBLIC!!orderbook",
            }
            await self.connection.send(json.dumps(pubmsg).encode("utf-8"))
            logger.info("Sent !orderbook request to get current offers")
        except Exception as e:
            logger.warning(f"Failed to send !orderbook request: {e}")

    # Track when we last sent an orderbook request (to avoid spamming)
    import time

    last_orderbook_request = time.time()
    orderbook_request_min_interval = 60.0  # Minimum 60 seconds between requests

    while self.running:
        try:
            # First check if we have buffered messages from previous operations
            # (e.g., messages received while waiting for PEERLIST)
            if not self._message_buffer.empty():
                message = await self._message_buffer.get()
                logger.trace("Processing buffered message from queue")
            else:
                # Read next message with timeout
                data = await asyncio.wait_for(self.connection.receive(), timeout=5.0)

                if not data:
                    logger.warning(f"Connection to {self.host}:{self.port} closed")
                    break

                message = json.loads(data.decode("utf-8"))
            msg_type = message.get("type")
            line = message.get("line", "")

            # Handle PEERLIST responses (from periodic or automatic requests)
            if msg_type == MessageType.PEERLIST.value:
                try:
                    self._handle_peerlist_response(line)
                except Exception as e:
                    logger.debug(f"Failed to process PEERLIST: {e}")
                continue

            # Process PUBMSG and PRIVMSG to update offers/bonds cache
            # Reference implementation sends offer responses to !orderbook via PRIVMSG
            if msg_type in (MessageType.PUBMSG.value, MessageType.PRIVMSG.value):
                try:
                    parts = line.split(COMMAND_PREFIX)
                    if len(parts) >= 3:
                        from_nick = parts[0]
                        to_nick = parts[1]
                        rest = COMMAND_PREFIX.join(parts[2:])

                        # Accept PUBLIC broadcasts or messages addressed to us
                        if to_nick == "PUBLIC" or to_nick == self.nick:
                            # If we don't have features for this peer, it's a new peer.
                            # Track them with empty features for now - we'll get their features
                            # from the initial peerlist or from their offer messages
                            is_new_peer = from_nick not in self.peer_features
                            current_time = time.time()

                            if is_new_peer:
                                # Track new peer - merge empty features (will be a no-op
                                # if we already know their features from another source)
                                # Features will be populated from offer messages or peerlist
                                self._merge_peer_features(from_nick, {})
                                logger.debug(f"Discovered new peer: {from_nick}")

                                # If directory supports peerlist_features, request updated peerlist
                                # to get this peer's features immediately
                                if (
                                    self.directory_peerlist_features
                                    and self._peerlist_supported
                                ):
                                    try:
                                        # Request peerlist to get features for new peer
                                        # This is a background task - don't block message processing
                                        asyncio.create_task(
                                            self._refresh_peerlist_for_new_peer()
                                        )
                                    except Exception as e:
                                        logger.debug(
                                            f"Failed to request peerlist for new peer: {e}"
                                        )

                                # Request orderbook from new peer (rate-limited)
                                if (
                                    request_orderbook
                                    and current_time - last_orderbook_request
                                    > orderbook_request_min_interval
                                ):
                                    try:
                                        pubmsg = {
                                            "type": MessageType.PUBMSG.value,
                                            "line": f"{self.nick}!PUBLIC!!orderbook",
                                        }
                                        await self.connection.send(
                                            json.dumps(pubmsg).encode("utf-8")
                                        )
                                        last_orderbook_request = current_time
                                        logger.info(
                                            f"Sent !orderbook request for new peer {from_nick}"
                                        )
                                    except Exception as e:
                                        logger.debug(f"Failed to send !orderbook: {e}")

                            # Parse offer announcements
                            for offer_type_prefix in [
                                "sw0reloffer",
                                "sw0absoffer",
                                "swreloffer",
                                "swabsoffer",
                            ]:
                                if rest.startswith(offer_type_prefix):
                                    # Separate offer from fidelity bond data
                                    rest_parts = rest.split(COMMAND_PREFIX, 1)
                                    offer_line = rest_parts[0].strip()

                                    # Parse fidelity bond if present
                                    bond_data = None
                                    if len(rest_parts) > 1 and rest_parts[1].startswith(
                                        "tbond "
                                    ):
                                        bond_parts = rest_parts[1][6:].split()
                                        if bond_parts:
                                            bond_proof_b64 = bond_parts[0]
                                            # For PUBLIC announcements, maker uses their own nick
                                            # as taker_nick when creating the proof.
                                            # For PRIVMSG (response to !orderbook), maker signs
                                            # for the recipient (us).
                                            taker_nick_for_proof = (
                                                from_nick if to_nick == "PUBLIC" else to_nick
                                            )
                                            bond_data = parse_fidelity_bond_proof(
                                                bond_proof_b64, from_nick, taker_nick_for_proof
                                            )
                                            if bond_data:
                                                logger.debug(
                                                    f"Parsed fidelity bond from {from_nick}: "
                                                    f"txid={bond_data['utxo_txid'][:16]}..., "
                                                    f"locktime={bond_data['locktime']}"
                                                )
                                                # Store bond in bonds cache
                                                utxo_str = (
                                                    f"{bond_data['utxo_txid']}:"
                                                    f"{bond_data['utxo_vout']}"
                                                )
                                                bond = FidelityBond(
                                                    counterparty=from_nick,
                                                    utxo_txid=bond_data["utxo_txid"],
                                                    utxo_vout=bond_data["utxo_vout"],
                                                    locktime=bond_data["locktime"],
                                                    script=bond_data["utxo_pub"],
                                                    utxo_confirmations=0,
                                                    cert_expiry=bond_data["cert_expiry"],
                                                    fidelity_bond_data=bond_data,
                                                )
                                                self.bonds[utxo_str] = bond

                                    offer_parts = offer_line.split()
                                    if len(offer_parts) >= 6:
                                        try:
                                            oid = int(offer_parts[1])
                                            minsize = int(offer_parts[2])
                                            maxsize = int(offer_parts[3])
                                            txfee = int(offer_parts[4])
                                            cjfee_str = offer_parts[5]

                                            if offer_type_prefix in [
                                                "sw0absoffer",
                                                "swabsoffer",
                                            ]:
                                                cjfee = str(int(cjfee_str))
                                            else:
                                                cjfee = str(Decimal(cjfee_str))

                                            offer = Offer(
                                                counterparty=from_nick,
                                                oid=oid,
                                                ordertype=OfferType(offer_type_prefix),
                                                minsize=minsize,
                                                maxsize=maxsize,
                                                txfee=txfee,
                                                cjfee=cjfee,
                                                fidelity_bond_value=0,
                                                fidelity_bond_data=bond_data,
                                                features=self.peer_features.get(from_nick, {}),
                                            )

                                            # Extract bond UTXO key for deduplication
                                            bond_utxo_key: str | None = None
                                            if bond_data:
                                                bond_utxo_key = (
                                                    f"{bond_data['utxo_txid']}:"
                                                    f"{bond_data['utxo_vout']}"
                                                )

                                            # Update cache using tuple key
                                            offer_key = (from_nick, oid)
                                            self._store_offer(offer_key, offer, bond_utxo_key)

                                            # Track this peer as "known" even if peerlist didn't
                                            # return features. This prevents re-triggering new peer
                                            # logic for every message from this peer.
                                            if from_nick not in self.peer_features:
                                                self.peer_features[from_nick] = {}

                                            logger.debug(
                                                f"Updated offer cache: {from_nick} "
                                                f"{offer_type_prefix} oid={oid}"
                                                + (" (with bond)" if bond_data else "")
                                            )
                                        except Exception as e:
                                            logger.debug(f"Failed to parse offer update: {e}")
                                    break
                except Exception as e:
                    logger.debug(f"Failed to process PUBMSG: {e}")

        except TimeoutError:
            continue
        except asyncio.CancelledError:
            logger.info(f"Continuous listening on {self.host}:{self.port} cancelled")
            break
        except Exception as e:
            logger.error(f"Error in continuous listening: {e}")
            if self.on_disconnect:
                self.on_disconnect()
            break

    self.running = False
    logger.info(f"Stopped continuous listening on {self.host}:{self.port}")

Continuously listen for messages and update internal offer/bond caches.

This method runs indefinitely until stop() is called or connection is lost. Used by orderbook_watcher and maker to maintain live orderbook state.

Args

request_orderbook
If True, send !orderbook request on startup to get current offers from makers. Set to False for maker bots that don't need to receive other offers.
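
A sketch of running the listener as a background task while polling the offer cache via get_current_offers(); the 30-second poll interval is arbitrary.

import asyncio

async def watch_orderbook(client) -> None:
    # Illustrative: keep the cache warm in the background, read it periodically.
    listener = asyncio.create_task(client.listen_continuously(request_orderbook=True))
    try:
        while True:
            await asyncio.sleep(30)  # arbitrary poll interval
            offers = client.get_current_offers()
            print(f"orderbook currently holds {len(offers)} offers")
    finally:
        client.stop()  # ask the listener loop to exit
        await listener
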
async def listen_for_messages(self, duration: float = 5.0) ‑> list[dict[str, typing.Any]]
Expand source code
async def listen_for_messages(self, duration: float = 5.0) -> list[dict[str, Any]]:
    """
    Listen for messages for a specified duration.

    This method collects all messages received within the specified duration.
    It properly handles connection closed errors by raising DirectoryClientError.

    Args:
        duration: How long to listen in seconds

    Returns:
        List of received messages

    Raises:
        DirectoryClientError: If not connected or connection is lost
    """
    if not self.connection:
        raise DirectoryClientError("Not connected")

    # Check connection state before starting
    if not self.connection.is_connected():
        raise DirectoryClientError("Connection closed")

    messages: list[dict[str, Any]] = []
    start_time = asyncio.get_event_loop().time()

    # First, drain any buffered messages into our result list
    # These are messages that were received while waiting for other responses
    while not self._message_buffer.empty():
        try:
            buffered_msg = self._message_buffer.get_nowait()
            logger.trace(
                f"Processing buffered message type {buffered_msg.get('type')}: "
                f"{buffered_msg.get('line', '')[:80]}..."
            )
            messages.append(buffered_msg)
        except asyncio.QueueEmpty:
            break

    while asyncio.get_event_loop().time() - start_time < duration:
        try:
            remaining_time = duration - (asyncio.get_event_loop().time() - start_time)
            if remaining_time <= 0:
                break

            response_data = await asyncio.wait_for(
                self.connection.receive(), timeout=remaining_time
            )
            response = json.loads(response_data.decode("utf-8"))
            logger.trace(
                f"Received message type {response.get('type')}: "
                f"{response.get('line', '')[:80]}..."
            )
            messages.append(response)

        except TimeoutError:
            # Normal timeout - no more messages within duration
            break
        except Exception as e:
            # Connection errors should propagate up so caller can reconnect
            error_msg = str(e).lower()
            if "connection" in error_msg and ("closed" in error_msg or "lost" in error_msg):
                raise DirectoryClientError(f"Connection lost: {e}") from e
            # Other errors (JSON parse, etc) - log and continue
            logger.warning(f"Error processing message: {e}")
            continue

    logger.trace(f"Collected {len(messages)} messages in {duration}s")
    return messages

Listen for messages for a specified duration.

This method collects all messages received within the specified duration. It properly handles connection closed errors by raising DirectoryClientError.

Args

duration
How long to listen in seconds

Returns

List of received messages

Raises

DirectoryClientError
If not connected or connection is lost
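
A minimal sketch pairing a broadcast with a bounded listen window; the 30-second duration is arbitrary, and "!orderbook" matches the broadcast used elsewhere in this class.

async def poll_once(client) -> list[dict]:
    # Illustrative: request the orderbook, then collect replies for 30 seconds.
    await client.send_public_message("!orderbook")
    return await client.listen_for_messages(duration=30.0)
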
def remove_offers_for_nick(self, nick: str) ‑> int
Expand source code
def remove_offers_for_nick(self, nick: str) -> int:
    """
    Remove all offers from a specific nick (e.g., when nick goes offline).

    This is the equivalent of the reference implementation's on_nick_leave callback.

    Args:
        nick: The nick to remove offers for

    Returns:
        Number of offers removed
    """
    keys_to_remove = [key for key in self.offers if key[0] == nick]
    removed = 0

    for key in keys_to_remove:
        offer_data = self.offers.pop(key, None)
        if offer_data:
            removed += 1
            # Clean up bond mapping
            if offer_data.bond_utxo_key and offer_data.bond_utxo_key in self._bond_to_offers:
                self._bond_to_offers[offer_data.bond_utxo_key].discard(key)

    if removed > 0:
        logger.info(f"Removed {removed} offers for nick {nick} (left/offline)")

    # Also remove from peer_features and active_peers
    self.peer_features.pop(nick, None)
    self._active_peers.pop(nick, None)

    # Remove any bonds from this nick
    bonds_to_remove = [k for k, v in self.bonds.items() if v.counterparty == nick]
    for bond_key in bonds_to_remove:
        del self.bonds[bond_key]

    return removed

Remove all offers from a specific nick (e.g., when nick goes offline).

This is the equivalent of the reference implementation's on_nick_leave callback.

Args

nick
The nick to remove offers for

Returns

Number of offers removed
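
For example, a caller that learns a peer has gone offline can prune its cached state; the nick shown is a placeholder.

removed = client.remove_offers_for_nick("J5Edepartednick")  # placeholder nick
if removed:
    print(f"pruned {removed} stale offers")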

async def send_private_message(self, recipient: str, command: str, data: str) ‑> None
Expand source code
async def send_private_message(self, recipient: str, command: str, data: str) -> None:
    """
    Send a signed private message to a specific peer.

    JoinMarket requires all private messages to be signed with the sender's
    nick private key. The signature is appended to the message:
    Format: "!<command> <data> <pubkey_hex> <signature>"

    The message-to-sign is: data + hostid (to prevent replay attacks)
    Note: Only the data is signed, NOT the command prefix.

    Args:
        recipient: Target peer nick
        command: Command name (without ! prefix, e.g., 'fill', 'auth', 'tx')
        data: Command arguments to send (will be signed)
    """
    if not self.connection:
        raise DirectoryClientError("Not connected")

    # Sign just the data (not the command) with our nick identity
    # Reference: rawmessage = ' '.join(message[1:].split(' ')[1:-2])
    # This means they extract [1:-2] which is the args, not the command
    # So we sign: data + hostid
    # IMPORTANT: Always use ONION_HOSTID ("onion-network"), NOT the directory hostname.
    # The reference implementation uses a fixed hostid for ALL onion message channels
    # (see jmdaemon/onionmc.py line 635: self.hostid = "onion-network")
    signed_data = self.nick_identity.sign_message(data, ONION_HOSTID)

    # JoinMarket message format: from_nick!to_nick!command <args>
    # The COMMAND_PREFIX ("!") is used ONLY as a field separator between
    # from_nick, to_nick, and the message content. The command itself
    # does NOT have a "!" prefix.
    # Format: "<command> <signed_data>" where signed_data = "<data> <pubkey_hex> <sig_b64>"
    full_message = f"{command} {signed_data}"

    privmsg = {
        "type": MessageType.PRIVMSG.value,
        "line": f"{self.nick}!{recipient}!{full_message}",
    }
    await self.connection.send(json.dumps(privmsg).encode("utf-8"))

Send a signed private message to a specific peer.

JoinMarket requires all private messages to be signed with the sender's nick private key. The signature is appended to the message. Format: "!<command> <data> <pubkey_hex> <signature>"

The message-to-sign is: data + hostid (to prevent replay attacks). Note: only the data is signed, NOT the command prefix.

Args

recipient
Target peer nick
command
Command name (without ! prefix, e.g., 'fill', 'auth', 'tx')
data
Command arguments to send (will be signed)
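
A sketch of a typical call and the resulting wire line; the recipient nick and the command arguments are placeholders, and 'fill' is used only as an example command name.

# Illustrative (inside an async context):
await client.send_private_message(
    recipient="J5Emakernick",   # placeholder maker nick
    command="fill",             # example command name, no "!" prefix
    data="0 1000000 deadbeef",  # placeholder arguments (these get signed)
)
# Schematic resulting message:
#   {"type": PRIVMSG, "line": "<our_nick>!J5Emakernick!fill 0 1000000 deadbeef <pubkey_hex> <sig_b64>"}
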
async def send_public_message(self, message: str) ‑> None
Expand source code
async def send_public_message(self, message: str) -> None:
    """
    Send a public message to all peers.

    Args:
        message: Message to broadcast
    """
    if not self.connection:
        raise DirectoryClientError("Not connected")

    pubmsg = {
        "type": MessageType.PUBMSG.value,
        "line": f"{self.nick}!PUBLIC!{message}",
    }
    await self.connection.send(json.dumps(pubmsg).encode("utf-8"))

Send a public message to all peers.

Args

message
Message to broadcast
def stop(self) ‑> None
Expand source code
def stop(self) -> None:
    """Stop continuous listening."""
    self.running = False

Stop continuous listening.

def supports_extended_utxo_format(self) ‑> bool
Expand source code
def supports_extended_utxo_format(self) -> bool:
    """
    Check if we should use extended UTXO format with this directory.

    Extended format (txid:vout:scriptpubkey:blockheight) is used when
    both sides advertise neutrino_compat feature. Protocol version
    is not checked - features are negotiated independently.

    Returns:
        True if extended UTXO format should be used
    """
    return self.neutrino_compat and self.directory_neutrino_compat

Check if we should use extended UTXO format with this directory.

Extended format (txid:vout:scriptpubkey:blockheight) is used when both sides advertise neutrino_compat feature. Protocol version is not checked - features are negotiated independently.

Returns

True if extended UTXO format should be used
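
A sketch of branching on the negotiated format; the short txid:vout form used for the non-extended branch is an assumption here, not taken from this module.

def format_utxo(client, txid: str, vout: int, spk_hex: str, height: int) -> str:
    # Extended format requires neutrino_compat on both sides (see above).
    if client.supports_extended_utxo_format():
        return f"{txid}:{vout}:{spk_hex}:{height}"
    return f"{txid}:{vout}"  # assumed legacy form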

class DirectoryClientError (*args, **kwargs)
Expand source code
class DirectoryClientError(Exception):
    """Error raised by DirectoryClient operations."""

Error raised by DirectoryClient operations.

Ancestors

  • builtins.Exception
  • builtins.BaseException
class DirectoryServerConfig (**data: Any)
Expand source code
class DirectoryServerConfig(BaseModel):
    """
    Configuration for directory server instances.

    Used by standalone directory servers, not by clients.
    """

    network: NetworkType = Field(
        default=NetworkType.MAINNET, description="Network type for the directory server"
    )
    host: str = Field(default="127.0.0.1", description="Host address to bind to")
    port: int = Field(default=5222, ge=1, le=65535, description="Port to listen on")

    # Limits
    max_peers: int = Field(default=10000, ge=1, description="Maximum number of connected peers")
    max_message_size: int = Field(
        default=2097152, ge=1024, description="Maximum message size in bytes (default: 2MB)"
    )
    max_line_length: int = Field(
        default=65536, ge=1024, description="Maximum JSON-line message length (default: 64KB)"
    )
    max_json_nesting_depth: int = Field(
        default=10, ge=1, le=100, description="Maximum nesting depth for JSON parsing"
    )

    # Rate limiting
    # Higher limits to accommodate makers responding to orderbook requests
    # A single maker might send multiple offer messages + bond proofs rapidly
    message_rate_limit: int = Field(
        default=500, ge=1, description="Messages per second (sustained)"
    )
    message_burst_limit: int = Field(default=1000, ge=1, description="Maximum burst size")
    rate_limit_disconnect_threshold: int = Field(
        default=200, ge=1, description="Disconnect after N violations"
    )

    # Broadcasting
    broadcast_batch_size: int = Field(
        default=50,
        ge=1,
        description="Batch size for concurrent broadcasts (lower = less memory)",
    )

    # Logging
    log_level: str = Field(default="INFO", description="Logging level")

    # Server info
    motd: str = Field(
        default="JoinMarket Directory Server https://github.com/m0wer/joinmarket-ng",
        description="Message of the day sent to clients",
    )

    # Health check
    health_check_host: str = Field(
        default="127.0.0.1", description="Host for health check endpoint"
    )
    health_check_port: int = Field(
        default=8080, ge=1, le=65535, description="Port for health check endpoint"
    )

    model_config = {"frozen": False}

Configuration for directory server instances.

Used by standalone directory servers, not by clients.

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError (pydantic_core.ValidationError) if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.
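
A minimal construction sketch; the overridden values are illustrative and every omitted field keeps its default.

config = DirectoryServerConfig(
    host="0.0.0.0",
    port=5222,
    max_peers=500,
    log_level="DEBUG",
)
print(config.model_dump())  # Pydantic v2 serialization of all fields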

Ancestors

  • pydantic.main.BaseModel

Class variables

var broadcast_batch_size : int

Batch size for concurrent broadcasts (lower = less memory).

var health_check_host : str

Host for health check endpoint.

var health_check_port : int

Port for health check endpoint.

var host : str

Host address to bind to.

var log_level : str

Logging level.

var max_json_nesting_depth : int

Maximum nesting depth for JSON parsing.

var max_line_length : int

Maximum JSON-line message length (default: 64KB).

var max_message_size : int

Maximum message size in bytes (default: 2MB).

var max_peers : int

Maximum number of connected peers.

var message_burst_limit : int

Maximum burst size.

var message_rate_limit : int

Messages per second (sustained).

var model_config

Pydantic model configuration (frozen=False).

var motd : str

Message of the day sent to clients.

var network : NetworkType

Network type for the directory server.

var port : int

Port to listen on.

var rate_limit_disconnect_threshold : int

Disconnect after N rate limit violations.

class DirectoryServerSettings (**data: Any)
Expand source code
class DirectoryServerSettings(BaseModel):
    """Directory server specific settings."""

    host: str = Field(
        default="127.0.0.1",
        description="Host address to bind to",
    )
    port: int = Field(
        default=5222,
        ge=0,
        le=65535,
        description="Port to listen on (0 = let OS assign)",
    )
    max_peers: int = Field(
        default=10000,
        ge=1,
        description="Maximum number of connected peers",
    )
    max_message_size: int = Field(
        default=2097152,
        ge=1024,
        description="Maximum message size in bytes (2MB default)",
    )
    max_line_length: int = Field(
        default=65536,
        ge=1024,
        description="Maximum JSON-line message length (64KB default)",
    )
    max_json_nesting_depth: int = Field(
        default=10,
        ge=1,
        description="Maximum nesting depth for JSON parsing",
    )
    message_rate_limit: int = Field(
        default=500,
        ge=1,
        description="Messages per second (sustained)",
    )
    message_burst_limit: int = Field(
        default=1000,
        ge=1,
        description="Maximum burst size",
    )
    rate_limit_disconnect_threshold: int = Field(
        default=0,
        ge=0,
        description="Disconnect after N rate limit violations (0 = never disconnect)",
    )
    broadcast_batch_size: int = Field(
        default=50,
        ge=1,
        description="Batch size for concurrent broadcasts",
    )
    health_check_host: str = Field(
        default="127.0.0.1",
        description="Host for health check endpoint",
    )
    health_check_port: int = Field(
        default=8080,
        ge=0,
        le=65535,
        description="Port for health check endpoint (0 = let OS assign)",
    )
    motd: str = Field(
        default="JoinMarket NG Directory Server https://github.com/m0wer/joinmarket-ng/",
        description="Message of the day sent to clients",
    )

Directory server specific settings.

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError (pydantic_core.ValidationError) if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Ancestors

  • pydantic.main.BaseModel

Class variables

var broadcast_batch_size : int

Batch size for concurrent broadcasts.

var health_check_host : str

Host for health check endpoint.

var health_check_port : int

Port for health check endpoint (0 = let OS assign).

var host : str

Host address to bind to.

var max_json_nesting_depth : int

Maximum nesting depth for JSON parsing.

var max_line_length : int

Maximum JSON-line message length (64KB default).

var max_message_size : int

Maximum message size in bytes (2MB default).

var max_peers : int

Maximum number of connected peers.

var message_burst_limit : int

Maximum burst size.

var message_rate_limit : int

Messages per second (sustained).

var model_config

Pydantic model configuration.

var motd : str

Message of the day sent to clients.

var port : int

Port to listen on (0 = let OS assign).

var rate_limit_disconnect_threshold : int

Disconnect after N rate limit violations (0 = never disconnect).

class EphemeralHiddenService (service_id: str,
private_key: str | None = None,
ports: list[tuple[int, str]] | None = None)
Expand source code
class EphemeralHiddenService:
    """
    Represents an ephemeral hidden service created via Tor control port.

    Ephemeral hidden services are transient - they exist only while
    the control connection is open. When the connection closes,
    the hidden service is automatically removed.
    """

    def __init__(
        self,
        service_id: str,
        private_key: str | None = None,
        ports: list[tuple[int, str]] | None = None,
    ):
        """
        Initialize ephemeral hidden service info.

        Args:
            service_id: The .onion address without .onion suffix (56 chars for v3)
            private_key: Optional private key for recreating the service
            ports: List of (virtual_port, target) mappings
        """
        self.service_id = service_id
        self.private_key = private_key
        self.ports = ports or []

    @property
    def onion_address(self) -> str:
        """Get the full .onion address."""
        return f"{self.service_id}.onion"

    def __repr__(self) -> str:
        return f"EphemeralHiddenService({self.onion_address}, ports={self.ports})"

Represents an ephemeral hidden service created via Tor control port.

Ephemeral hidden services are transient - they exist only while the control connection is open. When the connection closes, the hidden service is automatically removed.

Initialize ephemeral hidden service info.

Args

service_id
The .onion address without .onion suffix (56 chars for v3)
private_key
Optional private key for recreating the service
ports
List of (virtual_port, target) mappings

Instance variables

prop onion_address : str
Expand source code
@property
def onion_address(self) -> str:
    """Get the full .onion address."""
    return f"{self.service_id}.onion"

Get the full .onion address.
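
A construction sketch; the service id is a placeholder, not a real onion address.

svc = EphemeralHiddenService(
    service_id="x" * 56,               # v3 service ids are 56 characters
    private_key=None,                  # None when Tor retains the key
    ports=[(5222, "127.0.0.1:5222")],  # (virtual_port, target) mapping
)
print(svc.onion_address)               # "xxx...x.onion"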

class FeatureSet (*args: Any, **kwargs: Any)
Expand source code
@dataclass
class FeatureSet:
    """
    Represents a set of protocol features advertised by a peer.

    Used for feature negotiation during handshake and CoinJoin sessions.
    """

    features: set[str] = Field(default_factory=set)

    @classmethod
    def from_handshake(cls, handshake_data: dict[str, Any]) -> FeatureSet:
        """Extract features from a handshake payload."""
        features_dict = handshake_data.get("features", {})
        # Only include features that are set to True
        features = {k for k, v in features_dict.items() if v is True}
        return cls(features=features)

    @classmethod
    def from_list(cls, feature_list: list[str]) -> FeatureSet:
        """Create from a list of feature names."""
        return cls(features=set(feature_list))

    @classmethod
    def from_comma_string(cls, s: str) -> FeatureSet:
        """Parse from plus-separated string (e.g., 'neutrino_compat+push_encrypted').

        Note: Despite the method name, uses '+' as separator because the peerlist
        itself uses ',' to separate entries. The name is kept for backward compatibility.
        Also accepts ',' for legacy/handshake use cases.
        """
        if not s or not s.strip():
            return cls(features=set())
        # Support both + (peerlist) and , (legacy/handshake) separators
        if "+" in s:
            return cls(features={f.strip() for f in s.split("+") if f.strip()})
        return cls(features={f.strip() for f in s.split(",") if f.strip()})

    def to_dict(self) -> dict[str, bool]:
        """Convert to dict for JSON serialization."""
        return dict.fromkeys(sorted(self.features), True)

    def to_comma_string(self) -> str:
        """Convert to plus-separated string for peerlist F: suffix.

        Note: Uses '+' as separator instead of ',' because the peerlist
        itself uses ',' to separate entries. Using ',' for features would
        cause parsing ambiguity.
        """
        return "+".join(sorted(self.features))

    def supports(self, feature: str) -> bool:
        """Check if this set includes a specific feature."""
        return feature in self.features

    def supports_neutrino_compat(self) -> bool:
        """Check if neutrino_compat is supported."""
        return FEATURE_NEUTRINO_COMPAT in self.features

    def supports_push_encrypted(self) -> bool:
        """Check if push_encrypted is supported."""
        return FEATURE_PUSH_ENCRYPTED in self.features

    def supports_peerlist_features(self) -> bool:
        """Check if peer supports extended peerlist with features (F: suffix)."""
        return FEATURE_PEERLIST_FEATURES in self.features

    def validate_dependencies(self) -> tuple[bool, str]:
        """Check that all feature dependencies are satisfied."""
        for feature in self.features:
            deps = FEATURE_DEPENDENCIES.get(feature, [])
            for dep in deps:
                if dep not in self.features:
                    return False, f"Feature '{feature}' requires '{dep}'"
        return True, ""

    def intersection(self, other: FeatureSet) -> FeatureSet:
        """Return features supported by both sets."""
        return FeatureSet(features=self.features & other.features)

    def __bool__(self) -> bool:
        """True if any features are set."""
        return bool(self.features)

    def __contains__(self, feature: str) -> bool:
        return feature in self.features

    def __iter__(self):
        return iter(self.features)

    def __len__(self) -> int:
        return len(self.features)

Represents a set of protocol features advertised by a peer.

Used for feature negotiation during handshake and CoinJoin sessions.

Static methods

def from_comma_string(s: str) ‑> FeatureSet

Parse from plus-separated string (e.g., 'neutrino_compat+push_encrypted').

Note: Despite the method name, uses '+' as separator because the peerlist itself uses ',' to separate entries. The name is kept for backward compatibility. Also accepts ',' for legacy/handshake use cases.

def from_handshake(handshake_data: dict[str, Any]) ‑> FeatureSet

Extract features from a handshake payload.

def from_list(feature_list: list[str]) ‑> FeatureSet

Create from a list of feature names.

Instance variables

var features : set[str]

Set of protocol feature names enabled in this set.

Methods

def intersection(self,
other: FeatureSet) ‑> FeatureSet
Expand source code
def intersection(self, other: FeatureSet) -> FeatureSet:
    """Return features supported by both sets."""
    return FeatureSet(features=self.features & other.features)

Return features supported by both sets.

def supports(self, feature: str) ‑> bool
Expand source code
def supports(self, feature: str) -> bool:
    """Check if this set includes a specific feature."""
    return feature in self.features

Check if this set includes a specific feature.

def supports_neutrino_compat(self) ‑> bool
Expand source code
def supports_neutrino_compat(self) -> bool:
    """Check if neutrino_compat is supported."""
    return FEATURE_NEUTRINO_COMPAT in self.features

Check if neutrino_compat is supported.

def supports_peerlist_features(self) ‑> bool
Expand source code
def supports_peerlist_features(self) -> bool:
    """Check if peer supports extended peerlist with features (F: suffix)."""
    return FEATURE_PEERLIST_FEATURES in self.features

Check if peer supports extended peerlist with features (F: suffix).

def supports_push_encrypted(self) ‑> bool
Expand source code
def supports_push_encrypted(self) -> bool:
    """Check if push_encrypted is supported."""
    return FEATURE_PUSH_ENCRYPTED in self.features

Check if push_encrypted is supported.

def to_comma_string(self) ‑> str
Expand source code
def to_comma_string(self) -> str:
    """Convert to plus-separated string for peerlist F: suffix.

    Note: Uses '+' as separator instead of ',' because the peerlist
    itself uses ',' to separate entries. Using ',' for features would
    cause parsing ambiguity.
    """
    return "+".join(sorted(self.features))

Convert to plus-separated string for peerlist F: suffix.

Note: Uses '+' as separator instead of ',' because the peerlist itself uses ',' to separate entries. Using ',' for features would cause parsing ambiguity.

def to_dict(self) ‑> dict[str, bool]
Expand source code
def to_dict(self) -> dict[str, bool]:
    """Convert to dict for JSON serialization."""
    return dict.fromkeys(sorted(self.features), True)

Convert to dict for JSON serialization.

def validate_dependencies(self) ‑> tuple[bool, str]
Expand source code
def validate_dependencies(self) -> tuple[bool, str]:
    """Check that all feature dependencies are satisfied."""
    for feature in self.features:
        deps = FEATURE_DEPENDENCIES.get(feature, [])
        for dep in deps:
            if dep not in self.features:
                return False, f"Feature '{feature}' requires '{dep}'"
    return True, ""

Check that all feature dependencies are satisfied.
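
A short usage sketch (assuming FeatureSet is importable from the jmcore package as documented on this page):

# Sketch only: import path assumed.
from jmcore import FeatureSet

ours = FeatureSet.from_list(["neutrino_compat", "push_encrypted"])
theirs = FeatureSet.from_comma_string("neutrino_compat+peerlist_features")

# Features advertised by both sides.
common = ours.intersection(theirs)
print(common.supports("neutrino_compat"))   # True
print("push_encrypted" in common)           # False (not advertised by the peer)

# Serialize for the peerlist F: suffix ('+' separated) and for handshake JSON.
print(common.to_comma_string())             # neutrino_compat
print(ours.to_dict())                       # {'neutrino_compat': True, 'push_encrypted': True}

# Dependency validation returns (ok, error_message).
ok, err = ours.validate_dependencies()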

class JoinMarketSettings (**values: Any)
Expand source code
class JoinMarketSettings(BaseSettings):
    """
    Main JoinMarket settings class.

    Loads configuration from multiple sources with the following priority:
    1. CLI arguments (not handled here, passed to component __init__)
    2. Environment variables
    3. TOML config file (~/.joinmarket-ng/config.toml)
    4. Default values
    """

    model_config = SettingsConfigDict(
        env_prefix="",  # No prefix by default, use env_nested_delimiter for nested
        env_nested_delimiter="__",
        case_sensitive=False,
        extra="ignore",  # Ignore unknown fields (for forward compatibility)
    )

    # Marker for config file path discovery
    _config_file_path: ClassVar[Path | None] = None

    # Core settings
    data_dir: Path | None = Field(
        default=None,
        description="Data directory (defaults to ~/.joinmarket-ng)",
    )

    # Nested settings groups
    tor: TorSettings = Field(default_factory=TorSettings)
    bitcoin: BitcoinSettings = Field(default_factory=BitcoinSettings)
    network_config: NetworkSettings = Field(default_factory=NetworkSettings)
    wallet: WalletSettings = Field(default_factory=WalletSettings)
    notifications: NotificationSettings = Field(default_factory=NotificationSettings)
    logging: LoggingSettings = Field(default_factory=LoggingSettings)

    # Component-specific settings
    maker: MakerSettings = Field(default_factory=MakerSettings)
    taker: TakerSettings = Field(default_factory=TakerSettings)
    directory_server: DirectoryServerSettings = Field(default_factory=DirectoryServerSettings)
    orderbook_watcher: OrderbookWatcherSettings = Field(default_factory=OrderbookWatcherSettings)

    @classmethod
    def settings_customise_sources(
        cls,
        settings_cls: type[BaseSettings],
        init_settings: PydanticBaseSettingsSource,
        env_settings: PydanticBaseSettingsSource,
        dotenv_settings: PydanticBaseSettingsSource,
        file_secret_settings: PydanticBaseSettingsSource,
    ) -> tuple[PydanticBaseSettingsSource, ...]:
        """
        Customize settings sources and their priority.

        Priority (highest to lowest):
        1. init_settings (CLI arguments passed to constructor)
        2. env_settings (environment variables with __ delimiter)
        3. toml_settings (config.toml file)
        4. defaults (in field definitions)
        """
        toml_source = TomlConfigSettingsSource(settings_cls)
        return (
            init_settings,
            env_settings,
            toml_source,
        )

    def get_data_dir(self) -> Path:
        """Get the data directory, using default if not set."""
        if self.data_dir is not None:
            return self.data_dir
        return get_default_data_dir()

    def get_directory_servers(self) -> list[str]:
        """Get directory servers, using network defaults if not set."""
        if self.network_config.directory_servers:
            return self.network_config.directory_servers
        network_name = self.network_config.network.value
        return DEFAULT_DIRECTORY_SERVERS.get(network_name, [])

Main JoinMarket settings class.

Loads configuration from multiple sources with the following priority:

1. CLI arguments (not handled here, passed to component __init__)
2. Environment variables
3. TOML config file (~/.joinmarket-ng/config.toml)
4. Default values

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Ancestors

  • pydantic_settings.main.BaseSettings
  • pydantic.main.BaseModel

Class variables

var bitcoin : BitcoinSettings

Nested Bitcoin settings group.

var data_dir : pathlib.Path | None

Data directory (defaults to ~/.joinmarket-ng).

var directory_server : DirectoryServerSettings

Directory server component settings.

var logging : LoggingSettings

Nested logging settings group.

var maker : MakerSettings

Maker component settings.

var model_config : ClassVar[pydantic_settings.main.SettingsConfigDict]

Pydantic settings configuration: no env prefix, "__" as the nested delimiter, case-insensitive, unknown fields ignored.

var network_config : NetworkSettings

Nested network settings group.

var notifications : NotificationSettings

Nested notification settings group.

var orderbook_watcher : OrderbookWatcherSettings

Orderbook watcher component settings.

var taker : TakerSettings

Taker component settings.

var tor : TorSettings

Nested Tor settings group.

var wallet : WalletSettings

Nested wallet settings group.

Static methods

def settings_customise_sources(settings_cls: type[BaseSettings],
init_settings: PydanticBaseSettingsSource,
env_settings: PydanticBaseSettingsSource,
dotenv_settings: PydanticBaseSettingsSource,
file_secret_settings: PydanticBaseSettingsSource) ‑> tuple[pydantic_settings.sources.base.PydanticBaseSettingsSource, ...]

Customize settings sources and their priority.

Priority (highest to lowest):

1. init_settings (CLI arguments passed to constructor)
2. env_settings (environment variables with __ delimiter)
3. toml_settings (config.toml file)
4. defaults (in field definitions)

Methods

def get_data_dir(self) ‑> pathlib.Path
Expand source code
def get_data_dir(self) -> Path:
    """Get the data directory, using default if not set."""
    if self.data_dir is not None:
        return self.data_dir
    return get_default_data_dir()

Get the data directory, using default if not set.

def get_directory_servers(self) ‑> list[str]
Expand source code
def get_directory_servers(self) -> list[str]:
    """Get directory servers, using network defaults if not set."""
    if self.network_config.directory_servers:
        return self.network_config.directory_servers
    network_name = self.network_config.network.value
    return DEFAULT_DIRECTORY_SERVERS.get(network_name, [])

Get directory servers, using network defaults if not set.
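
A short sketch of the source priority (import path and the exact environment variable names are assumptions derived from the env_nested_delimiter="__" configuration shown above):

# Sketch only: import path and env variable names assumed.
import os
from jmcore import JoinMarketSettings

# Environment variables override the TOML file and field defaults.
os.environ["MAKER__MIN_SIZE"] = "250000"
os.environ["LOGGING__LEVEL"] = "DEBUG"

settings = JoinMarketSettings()
print(settings.maker.min_size)        # 250000
print(settings.logging.level)         # DEBUG

# Constructor keyword arguments (init_settings) take precedence over everything.
settings = JoinMarketSettings(data_dir="/tmp/jm-test")
print(settings.get_data_dir())        # Path('/tmp/jm-test')

# Falls back to the per-network defaults when no servers are configured.
print(settings.get_directory_servers())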

class LoggingSettings (**data: Any)
Expand source code
class LoggingSettings(BaseModel):
    """Logging configuration."""

    level: str = Field(
        default="INFO",
        description="Log level: TRACE, DEBUG, INFO, WARNING, ERROR",
    )
    sensitive: bool = Field(
        default=False,
        description="Enable sensitive logging (mnemonics, keys)",
    )

Logging configuration.

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Ancestors

  • pydantic.main.BaseModel

Class variables

var level : str

Log level: TRACE, DEBUG, INFO, WARNING, ERROR.

var model_config

Pydantic model configuration.

var sensitive : bool

Enable sensitive logging (mnemonics, keys).

class MakerSettings (**data: Any)
Expand source code
class MakerSettings(BaseModel):
    """Maker-specific settings."""

    min_size: int = Field(
        default=100000,
        ge=0,
        description="Minimum CoinJoin amount in satoshis",
    )
    offer_type: str = Field(
        default="sw0reloffer",
        description="Offer type: sw0reloffer (relative) or sw0absoffer (absolute)",
    )
    cj_fee_relative: str = Field(
        default="0.001",
        description="Relative CoinJoin fee (0.001 = 0.1%)",
    )
    cj_fee_absolute: int = Field(
        default=500,
        ge=0,
        description="Absolute CoinJoin fee in satoshis",
    )
    tx_fee_contribution: int = Field(
        default=0,
        ge=0,
        description="Transaction fee contribution in satoshis",
    )
    min_confirmations: int = Field(
        default=1,
        ge=0,
        description="Minimum confirmations for UTXOs",
    )
    merge_algorithm: str = Field(
        default="default",
        description="UTXO selection: default, gradual, greedy, random",
    )
    session_timeout_sec: int = Field(
        default=300,
        ge=60,
        description="Maximum time for a CoinJoin session",
    )
    pending_tx_timeout_min: int = Field(
        default=60,
        ge=10,
        le=1440,
        description="Minutes before marking unbroadcast CoinJoins as failed",
    )
    rescan_interval_sec: int = Field(
        default=600,
        ge=60,
        description="Interval for periodic wallet rescans",
    )
    # Hidden service settings
    onion_serving_host: str = Field(
        default="127.0.0.1",
        description="Bind address for incoming connections",
    )
    onion_serving_port: int = Field(
        default=5222,
        ge=0,
        le=65535,
        description="Port for incoming onion connections",
    )
    # Rate limiting
    message_rate_limit: int = Field(
        default=10,
        ge=1,
        description="Messages per second per peer (sustained)",
    )
    message_burst_limit: int = Field(
        default=100,
        ge=1,
        description="Maximum burst messages per peer",
    )

    @field_validator("cj_fee_relative", mode="before")
    @classmethod
    def normalize_cj_fee_relative(cls, v: str | float | int) -> str:
        """
        Normalize cj_fee_relative to avoid scientific notation.

        Pydantic may coerce float values (from env vars, TOML, or JSON) to strings,
        which can result in scientific notation for small values (e.g., 1e-05).
        The JoinMarket protocol expects decimal notation (e.g., 0.00001).
        """
        if isinstance(v, (int, float)):
            # Use Decimal to preserve precision and avoid scientific notation
            return format(Decimal(str(v)), "f")
        # Already a string - check if it contains scientific notation
        if "e" in v.lower():
            try:
                return format(Decimal(v), "f")
            except InvalidOperation:
                pass  # Let pydantic handle the validation error
        return v

Maker-specific settings.

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Ancestors

  • pydantic.main.BaseModel

Class variables

var cj_fee_absolute : int

Absolute CoinJoin fee in satoshis.

var cj_fee_relative : str

Relative CoinJoin fee (0.001 = 0.1%).

var merge_algorithm : str

UTXO selection: default, gradual, greedy, random.

var message_burst_limit : int

Maximum burst messages per peer.

var message_rate_limit : int

Messages per second per peer (sustained).

var min_confirmations : int

Minimum confirmations for UTXOs.

var min_size : int

Minimum CoinJoin amount in satoshis.

var model_config

Pydantic model configuration.

var offer_type : str

Offer type: sw0reloffer (relative) or sw0absoffer (absolute).

var onion_serving_host : str

Bind address for incoming connections.

var onion_serving_port : int

Port for incoming onion connections.

var pending_tx_timeout_min : int

Minutes before marking unbroadcast CoinJoins as failed.

var rescan_interval_sec : int

Interval for periodic wallet rescans.

var session_timeout_sec : int

Maximum time for a CoinJoin session.

var tx_fee_contribution : int

Transaction fee contribution in satoshis.

Static methods

def normalize_cj_fee_relative(v: str | float | int) ‑> str

Normalize cj_fee_relative to avoid scientific notation.

Pydantic may coerce float values (from env vars, TOML, or JSON) to strings, which can result in scientific notation for small values (e.g., 1e-05). The JoinMarket protocol expects decimal notation (e.g., 0.00001).
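
A small sketch of the normalization (import path assumed):

# Sketch only: import path assumed.
from jmcore import MakerSettings

# A small float from TOML/env would otherwise stringify as '1e-05';
# the validator rewrites it to plain decimal notation.
m = MakerSettings(cj_fee_relative=0.00001)
print(m.cj_fee_relative)   # 0.00001

# Scientific-notation strings are normalized too.
m = MakerSettings(cj_fee_relative="2e-4")
print(m.cj_fee_relative)   # 0.0002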

class MessageDeduplicator (window_seconds: float = 30.0)
Expand source code
class MessageDeduplicator:
    """
    Deduplicates messages received from multiple sources.

    When makers/takers are connected to N directory servers, they receive each
    message N times. This class tracks recently-seen messages to:
    1. Avoid processing duplicates (especially expensive operations like !auth, !tx)
    2. Prevent rate limiter from counting duplicates as violations
    3. Track which source each message came from for better logging

    Design:
    - Simple time-based deduplication window (default 30s)
    - Message fingerprint: from_nick + command + first_arg (e.g., "alice:fill:order123")
    - Tracks first source for each message to enable better logging
    - Auto-cleanup of old entries to prevent memory leaks

    Example:
        >>> dedup = MessageDeduplicator(window_seconds=30.0)
        >>> fp = MessageDeduplicator.make_fingerprint("alice", "fill", "order123")
        >>> is_dup, source, count = dedup.is_duplicate(fp, "dir1.onion")
        >>> print(f"Duplicate: {is_dup}, first source: {source}, count: {count}")
        Duplicate: False, first source: dir1.onion, count: 1
        >>> is_dup, source, count = dedup.is_duplicate(fp, "dir2.onion")
        >>> print(f"Duplicate: {is_dup}, first source: {source}, count: {count}")
        Duplicate: True, first source: dir1.onion, count: 2
    """

    def __init__(self, window_seconds: float = 30.0):
        """
        Initialize deduplicator.

        Args:
            window_seconds: How long to remember messages (default 30s).
                           Should be longer than expected network latency variance.
        """
        self.window_seconds = window_seconds
        self._seen: dict[str, SeenMessage] = {}
        self._stats = DeduplicationStats()

    def is_duplicate(self, fingerprint: str, source: str) -> tuple[bool, str, int]:
        """
        Check if this message is a duplicate.

        Args:
            fingerprint: Unique identifier for the message (use make_fingerprint)
            source: Identifier for where message came from (e.g., directory URL)

        Returns:
            Tuple of (is_duplicate, first_source, total_count):
            - is_duplicate: True if seen before within window
            - first_source: Which source saw it first
            - total_count: How many times we've seen this message
        """
        now = time.monotonic()
        self._cleanup(now)
        self._stats.total_processed += 1

        if fingerprint in self._seen:
            entry = self._seen[fingerprint]
            entry.count += 1
            self._stats.duplicates_dropped += 1
            return (True, entry.source, entry.count)

        # First time seeing this message
        self._seen[fingerprint] = SeenMessage(timestamp=now, source=source, count=1)
        self._stats.unique_messages += 1
        return (False, source, 1)

    def _cleanup(self, now: float) -> None:
        """Remove entries older than the window."""
        cutoff = now - self.window_seconds
        expired = [fp for fp, entry in self._seen.items() if entry.timestamp < cutoff]
        for fp in expired:
            del self._seen[fp]

    @staticmethod
    def make_fingerprint(from_nick: str, command: str, first_arg: str = "") -> str:
        """
        Create a message fingerprint for deduplication.

        The fingerprint uniquely identifies a message based on:
        - Who sent it (from_nick)
        - What command it is (fill, auth, tx, pubkey, ioauth, sig, etc.)
        - The primary identifier (order ID, transaction hash, etc.)

        Args:
            from_nick: Who sent the message
            command: Command name (fill, auth, tx, etc.)
            first_arg: First argument (e.g., order ID for fill)

        Returns:
            String fingerprint like "alice:fill:order123"
        """
        return f"{from_nick}:{command}:{first_arg}"

    @property
    def stats(self) -> DeduplicationStats:
        """Get deduplication statistics."""
        return self._stats

    def reset_stats(self) -> None:
        """Reset statistics counters."""
        self._stats = DeduplicationStats()

    def clear(self) -> None:
        """Clear all seen messages and reset stats."""
        self._seen.clear()
        self.reset_stats()

    def __len__(self) -> int:
        """Return number of messages currently being tracked."""
        return len(self._seen)

Deduplicates messages received from multiple sources.

When makers/takers are connected to N directory servers, they receive each message N times. This class tracks recently-seen messages to:

1. Avoid processing duplicates (especially expensive operations like !auth, !tx)
2. Prevent rate limiter from counting duplicates as violations
3. Track which source each message came from for better logging

Design:

  • Simple time-based deduplication window (default 30s)
  • Message fingerprint: from_nick + command + first_arg (e.g., "alice:fill:order123")
  • Tracks first source for each message to enable better logging
  • Auto-cleanup of old entries to prevent memory leaks

Example

>>> dedup = MessageDeduplicator(window_seconds=30.0)
>>> fp = MessageDeduplicator.make_fingerprint("alice", "fill", "order123")
>>> is_dup, source, count = dedup.is_duplicate(fp, "dir1.onion")
>>> print(f"Duplicate: {is_dup}, first source: {source}, count: {count}")
Duplicate: False, first source: dir1.onion, count: 1
>>> is_dup, source, count = dedup.is_duplicate(fp, "dir2.onion")
>>> print(f"Duplicate: {is_dup}, first source: {source}, count: {count}")
Duplicate: True, first source: dir1.onion, count: 2

Initialize deduplicator.

Args

window_seconds
How long to remember messages (default 30s). Should be longer than expected network latency variance.

Static methods

def make_fingerprint(from_nick: str, command: str, first_arg: str = '') ‑> str
Expand source code
@staticmethod
def make_fingerprint(from_nick: str, command: str, first_arg: str = "") -> str:
    """
    Create a message fingerprint for deduplication.

    The fingerprint uniquely identifies a message based on:
    - Who sent it (from_nick)
    - What command it is (fill, auth, tx, pubkey, ioauth, sig, etc.)
    - The primary identifier (order ID, transaction hash, etc.)

    Args:
        from_nick: Who sent the message
        command: Command name (fill, auth, tx, etc.)
        first_arg: First argument (e.g., order ID for fill)

    Returns:
        String fingerprint like "alice:fill:order123"
    """
    return f"{from_nick}:{command}:{first_arg}"

Create a message fingerprint for deduplication.

The fingerprint uniquely identifies a message based on:

  • Who sent it (from_nick)
  • What command it is (fill, auth, tx, pubkey, ioauth, sig, etc.)
  • The primary identifier (order ID, transaction hash, etc.)

Args

from_nick
Who sent the message
command
Command name (fill, auth, tx, etc.)
first_arg
First argument (e.g., order ID for fill)

Returns

String fingerprint like "alice:fill:order123"

Instance variables

prop stats : DeduplicationStats
Expand source code
@property
def stats(self) -> DeduplicationStats:
    """Get deduplication statistics."""
    return self._stats

Get deduplication statistics.

Methods

def clear(self) ‑> None
Expand source code
def clear(self) -> None:
    """Clear all seen messages and reset stats."""
    self._seen.clear()
    self.reset_stats()

Clear all seen messages and reset stats.

def is_duplicate(self, fingerprint: str, source: str) ‑> tuple[bool, str, int]
Expand source code
def is_duplicate(self, fingerprint: str, source: str) -> tuple[bool, str, int]:
    """
    Check if this message is a duplicate.

    Args:
        fingerprint: Unique identifier for the message (use make_fingerprint)
        source: Identifier for where message came from (e.g., directory URL)

    Returns:
        Tuple of (is_duplicate, first_source, total_count):
        - is_duplicate: True if seen before within window
        - first_source: Which source saw it first
        - total_count: How many times we've seen this message
    """
    now = time.monotonic()
    self._cleanup(now)
    self._stats.total_processed += 1

    if fingerprint in self._seen:
        entry = self._seen[fingerprint]
        entry.count += 1
        self._stats.duplicates_dropped += 1
        return (True, entry.source, entry.count)

    # First time seeing this message
    self._seen[fingerprint] = SeenMessage(timestamp=now, source=source, count=1)
    self._stats.unique_messages += 1
    return (False, source, 1)

Check if this message is a duplicate.

Args

fingerprint
Unique identifier for the message (use make_fingerprint)
source
Identifier for where message came from (e.g., directory URL)

Returns

Tuple of (is_duplicate, first_source, total_count):

  • is_duplicate: True if seen before within window
  • first_source: Which source saw it first
  • total_count: How many times we've seen this message

def reset_stats(self) ‑> None
Expand source code
def reset_stats(self) -> None:
    """Reset statistics counters."""
    self._stats = DeduplicationStats()

Reset statistics counters.
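
A small sketch of window expiry (import path assumed; a deliberately short window is used for illustration):

# Sketch only: import path assumed.
import time
from jmcore import MessageDeduplicator

dedup = MessageDeduplicator(window_seconds=0.5)
fp = MessageDeduplicator.make_fingerprint("alice", "tx", "deadbeef")

print(dedup.is_duplicate(fp, "dir1.onion"))   # (False, 'dir1.onion', 1)
print(dedup.is_duplicate(fp, "dir2.onion"))   # (True, 'dir1.onion', 2)

# Entries older than the window are dropped on the next lookup,
# so the same fingerprint is treated as new again.
time.sleep(0.6)
print(dedup.is_duplicate(fp, "dir2.onion"))   # (False, 'dir2.onion', 1)
print(dedup.stats.duplicates_dropped)         # 1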

class MessageEnvelope (**data: Any)
Expand source code
class MessageEnvelope(BaseModel):
    message_type: int = Field(..., ge=0)
    payload: str
    timestamp: datetime = Field(default_factory=lambda: datetime.now(UTC))

    def to_bytes(self) -> bytes:
        import json

        result = json.dumps({"type": self.message_type, "line": self.payload}).encode("utf-8")
        return result

    @classmethod
    def from_bytes(
        cls, data: bytes, max_line_length: int = 65536, max_json_nesting_depth: int = 10
    ) -> MessageEnvelope:
        """
        Parse a message envelope from bytes with security limits.

        Args:
            data: Raw message bytes (without \\r\\n terminator)
            max_line_length: Maximum allowed line length in bytes (default 64KB)
            max_json_nesting_depth: Maximum JSON nesting depth (default 10)

        Returns:
            Parsed MessageEnvelope

        Raises:
            MessageParsingError: If message exceeds security limits
            json.JSONDecodeError: If JSON is malformed
        """
        import json

        # Check line length BEFORE parsing to prevent DoS
        if len(data) > max_line_length:
            raise MessageParsingError(
                f"Message line length {len(data)} exceeds maximum of {max_line_length} bytes"
            )

        # Parse JSON
        obj = json.loads(data)

        # Validate nesting depth BEFORE creating model
        validate_json_nesting_depth(obj, max_json_nesting_depth)

        return cls(message_type=obj["type"], payload=obj["line"])

Wire envelope for JoinMarket NG protocol messages: a numeric message type, a payload line, and a creation timestamp, serialized to and from the JSON wire format {"type": ..., "line": ...}.

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Ancestors

  • pydantic.main.BaseModel

Class variables

var message_type : int

Numeric protocol message type (must be >= 0).

var model_config

Pydantic model configuration.

var payload : str

Message payload line.

var timestamp : datetime.datetime

Creation time; defaults to the current UTC time.

Static methods

def from_bytes(data: bytes, max_line_length: int = 65536, max_json_nesting_depth: int = 10) ‑> MessageEnvelope

Parse a message envelope from bytes with security limits.

Args

data
Raw message bytes (without \r\n terminator)
max_line_length
Maximum allowed line length in bytes (default 64KB)
max_json_nesting_depth
Maximum JSON nesting depth (default 10)

Returns

Parsed MessageEnvelope

Raises

MessageParsingError
If message exceeds security limits
json.JSONDecodeError
If JSON is malformed

Methods

def to_bytes(self) ‑> bytes
Expand source code
def to_bytes(self) -> bytes:
    import json

    result = json.dumps({"type": self.message_type, "line": self.payload}).encode("utf-8")
    return result
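
A round-trip sketch (import path assumed; 685 is the PRIVMSG value listed under MessageType below):

# Sketch only: import path assumed.
from jmcore import MessageEnvelope

env = MessageEnvelope(message_type=685, payload="!orderbook")
wire = env.to_bytes()
print(wire)   # b'{"type": 685, "line": "!orderbook"}'

parsed = MessageEnvelope.from_bytes(wire)
print(parsed.message_type, parsed.payload)   # 685 !orderbook

# The security limits are enforced before the JSON is parsed.
try:
    MessageEnvelope.from_bytes(b"x" * 70000, max_line_length=65536)
except Exception as exc:   # MessageParsingError in this module
    print(type(exc).__name__)
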
class MessageType (*values)
Expand source code
class MessageType(IntEnum):
    PRIVMSG = 685
    PUBMSG = 687
    PEERLIST = 789
    GETPEERLIST = 791
    HANDSHAKE = 793
    DN_HANDSHAKE = 795
    PING = 797
    PONG = 799
    DISCONNECT = 801

    CONNECT = 785
    CONNECT_IN = 797

Enum where members are also (and must be) ints

Ancestors

  • enum.IntEnum
  • builtins.int
  • enum.ReprEnum
  • enum.Enum

Class variables

var CONNECT

Message type 785.

var CONNECT_IN

Message type 797 (same value as PING).

var DISCONNECT

Message type 801.

var DN_HANDSHAKE

Message type 795.

var GETPEERLIST

Message type 791.

var HANDSHAKE

Message type 793.

var PEERLIST

Message type 789.

var PING

Message type 797.

var PONG

Message type 799.

var PRIVMSG

Message type 685.

var PUBMSG

Message type 687.
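
A small sketch of the enum behaviour (import path assumed). Because CONNECT_IN reuses the value 797 already assigned to PING, it resolves to the same member:

# Sketch only: import path assumed.
from jmcore import MessageType

print(MessageType(685))                            # MessageType.PRIVMSG
print(MessageType.CONNECT_IN is MessageType.PING)  # True — alias via the shared value 797
print(int(MessageType.PUBMSG))                     # 687, usable wherever an int is expected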

class NaclError (*args, **kwargs)
Expand source code
class NaclError(Exception):
    """Exception for NaCl encryption errors."""

    pass

Exception for NaCl encryption errors.

Ancestors

  • builtins.Exception
  • builtins.BaseException
class NetworkSettings (**data: Any)
Expand source code
class NetworkSettings(BaseModel):
    """Network configuration."""

    network: NetworkType = Field(
        default=NetworkType.MAINNET,
        description="JoinMarket protocol network (mainnet, testnet, signet, regtest)",
    )
    bitcoin_network: NetworkType | None = Field(
        default=None,
        description="Bitcoin network for address generation (defaults to network)",
    )
    directory_servers: list[str] = Field(
        default_factory=list,
        description="Directory server addresses (host:port). Uses defaults if empty.",
    )

Network configuration.

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Ancestors

  • pydantic.main.BaseModel

Class variables

var bitcoin_network : NetworkType | None

Bitcoin network for address generation (defaults to network).

var directory_servers : list[str]

Directory server addresses (host:port). Uses defaults if empty.

var model_config

Pydantic model configuration.

var network : NetworkType

JoinMarket protocol network (mainnet, testnet, signet, regtest).

class NickTracker (on_nick_leave: Callable[[str], None] | None = None)
Expand source code
class NickTracker(Generic[TDirectory]):
    """
    Tracks nick availability across multiple directory servers.

    A nick is considered "active" if it appears on at least one directory.
    A nick is only marked as "gone" when ALL directories report it as disconnected.

    This implements the multi-directory awareness pattern from the reference
    implementation (onionmc.py lines 1078-1103).
    """

    def __init__(self, on_nick_leave: Callable[[str], None] | None = None):
        """
        Initialize the nick tracker.

        Args:
            on_nick_leave: Optional callback when a nick leaves ALL directories
        """
        # active_nicks[nick] = {directory1: True, directory2: True, ...}
        # True = nick is present on this directory, False = gone from this directory
        self.active_nicks: dict[str, dict[TDirectory, bool]] = {}
        self.on_nick_leave = on_nick_leave

    def update_nick(self, nick: str, directory: TDirectory, is_present: bool) -> None:
        """
        Update a nick's presence status on a specific directory.

        Args:
            nick: The nick to update
            directory: The directory reporting the status
            is_present: True if nick is present on this directory, False if gone
        """
        if nick not in self.active_nicks:
            self.active_nicks[nick] = {}

        old_status = self.active_nicks[nick].get(directory)
        self.active_nicks[nick][directory] = is_present

        # Check if this update causes the nick to be completely gone
        if not is_present and old_status is True:
            # Nick just disappeared from this directory
            # Check if it's still present on any other directory
            if not self.is_nick_active(nick):
                logger.info(
                    f"Nick {nick} has left all directories "
                    f"(directories: {list(self.active_nicks[nick].keys())})"
                )
                if self.on_nick_leave:
                    self.on_nick_leave(nick)
                # Clean up the entry
                del self.active_nicks[nick]
        elif is_present and old_status is False:
            logger.debug(
                f"Nick {nick} returned to directory {directory} (was previously marked gone)"
            )

    def mark_nick_present(self, nick: str, directory: TDirectory) -> None:
        """
        Mark a nick as present on a directory.

        Args:
            nick: The nick
            directory: The directory where the nick is present
        """
        self.update_nick(nick, directory, True)

    def mark_nick_gone(self, nick: str, directory: TDirectory) -> None:
        """
        Mark a nick as gone from a directory.

        If this is the last directory where the nick was present,
        triggers the on_nick_leave callback.

        Args:
            nick: The nick
            directory: The directory where the nick left
        """
        self.update_nick(nick, directory, False)

    def is_nick_active(self, nick: str) -> bool:
        """
        Check if a nick is active on at least one directory.

        Args:
            nick: The nick to check

        Returns:
            True if nick is present on at least one directory
        """
        if nick not in self.active_nicks:
            return False
        return any(status for status in self.active_nicks[nick].values())

    def get_active_directories_for_nick(self, nick: str) -> list[TDirectory]:
        """
        Get list of directories where a nick is currently present.

        Args:
            nick: The nick to query

        Returns:
            List of directories where nick is active
        """
        if nick not in self.active_nicks:
            return []
        return [
            directory for directory, is_present in self.active_nicks[nick].items() if is_present
        ]

    def get_all_active_nicks(self) -> set[str]:
        """
        Get all nicks that are active on at least one directory.

        Returns:
            Set of active nicks
        """
        return {nick for nick in self.active_nicks if self.is_nick_active(nick)}

    def remove_directory(self, directory: TDirectory) -> list[str]:
        """
        Remove a directory from tracking (when connection is lost).

        Returns list of nicks that became completely gone after removing this directory.

        Args:
            directory: The directory to remove

        Returns:
            List of nicks that are no longer active after removing this directory
        """
        gone_nicks = []

        for nick in list(self.active_nicks.keys()):
            if directory in self.active_nicks[nick]:
                # Remove this directory from the nick's tracking
                del self.active_nicks[nick][directory]

                # Check if nick is now gone from all directories
                if not self.active_nicks[nick]:
                    # No directories left for this nick
                    logger.info(f"Nick {nick} is gone (last directory {directory} was removed)")
                    gone_nicks.append(nick)
                    if self.on_nick_leave:
                        self.on_nick_leave(nick)
                    del self.active_nicks[nick]
                elif not self.is_nick_active(nick):
                    # Still tracked on some directories but marked as gone on all
                    logger.info(
                        f"Nick {nick} is gone from all remaining directories "
                        f"after removing {directory}"
                    )
                    gone_nicks.append(nick)
                    if self.on_nick_leave:
                        self.on_nick_leave(nick)
                    del self.active_nicks[nick]

        if gone_nicks:
            logger.info(
                f"After removing directory {directory}, {len(gone_nicks)} nicks are gone: "
                f"{gone_nicks}"
            )

        return gone_nicks

    def sync_with_peerlist(self, directory: TDirectory, active_nicks: set[str]) -> None:
        """
        Synchronize nick tracking with a directory's peerlist.

        This is called after fetching a peerlist from a directory to update
        the nick tracking state. Nicks not in the peerlist are marked as gone
        from that directory.

        Args:
            directory: The directory reporting the peerlist
            active_nicks: Set of nicks currently active on this directory
        """
        # First, mark all nicks in the peerlist as present
        for nick in active_nicks:
            self.mark_nick_present(nick, directory)

        # Then, mark nicks we're tracking but not in this peerlist as gone from this directory
        for nick in list(self.active_nicks.keys()):
            if directory in self.active_nicks[nick] and nick not in active_nicks:
                self.mark_nick_gone(nick, directory)

    def __repr__(self) -> str:
        """String representation showing active nicks and their directories."""
        return f"NickTracker(active_nicks={len(self.get_all_active_nicks())})"

Tracks nick availability across multiple directory servers.

A nick is considered "active" if it appears on at least one directory. A nick is only marked as "gone" when ALL directories report it as disconnected.

This implements the multi-directory awareness pattern from the reference implementation (onionmc.py lines 1078-1103).

Initialize the nick tracker.

Args

on_nick_leave
Optional callback when a nick leaves ALL directories

Ancestors

  • typing.Generic

Methods

def get_active_directories_for_nick(self, nick: str) ‑> list[~TDirectory]
Expand source code
def get_active_directories_for_nick(self, nick: str) -> list[TDirectory]:
    """
    Get list of directories where a nick is currently present.

    Args:
        nick: The nick to query

    Returns:
        List of directories where nick is active
    """
    if nick not in self.active_nicks:
        return []
    return [
        directory for directory, is_present in self.active_nicks[nick].items() if is_present
    ]

Get list of directories where a nick is currently present.

Args

nick
The nick to query

Returns

List of directories where nick is active

def get_all_active_nicks(self) ‑> set[str]
Expand source code
def get_all_active_nicks(self) -> set[str]:
    """
    Get all nicks that are active on at least one directory.

    Returns:
        Set of active nicks
    """
    return {nick for nick in self.active_nicks if self.is_nick_active(nick)}

Get all nicks that are active on at least one directory.

Returns

Set of active nicks

def is_nick_active(self, nick: str) ‑> bool
Expand source code
def is_nick_active(self, nick: str) -> bool:
    """
    Check if a nick is active on at least one directory.

    Args:
        nick: The nick to check

    Returns:
        True if nick is present on at least one directory
    """
    if nick not in self.active_nicks:
        return False
    return any(status for status in self.active_nicks[nick].values())

Check if a nick is active on at least one directory.

Args

nick
The nick to check

Returns

True if nick is present on at least one directory

def mark_nick_gone(self, nick: str, directory: TDirectory) ‑> None
Expand source code
def mark_nick_gone(self, nick: str, directory: TDirectory) -> None:
    """
    Mark a nick as gone from a directory.

    If this is the last directory where the nick was present,
    triggers the on_nick_leave callback.

    Args:
        nick: The nick
        directory: The directory where the nick left
    """
    self.update_nick(nick, directory, False)

Mark a nick as gone from a directory.

If this is the last directory where the nick was present, triggers the on_nick_leave callback.

Args

nick
The nick
directory
The directory where the nick left
def mark_nick_present(self, nick: str, directory: TDirectory) ‑> None
Expand source code
def mark_nick_present(self, nick: str, directory: TDirectory) -> None:
    """
    Mark a nick as present on a directory.

    Args:
        nick: The nick
        directory: The directory where the nick is present
    """
    self.update_nick(nick, directory, True)

Mark a nick as present on a directory.

Args

nick
The nick
directory
The directory where the nick is present
def remove_directory(self, directory: TDirectory) ‑> list[str]
Expand source code
def remove_directory(self, directory: TDirectory) -> list[str]:
    """
    Remove a directory from tracking (when connection is lost).

    Returns list of nicks that became completely gone after removing this directory.

    Args:
        directory: The directory to remove

    Returns:
        List of nicks that are no longer active after removing this directory
    """
    gone_nicks = []

    for nick in list(self.active_nicks.keys()):
        if directory in self.active_nicks[nick]:
            # Remove this directory from the nick's tracking
            del self.active_nicks[nick][directory]

            # Check if nick is now gone from all directories
            if not self.active_nicks[nick]:
                # No directories left for this nick
                logger.info(f"Nick {nick} is gone (last directory {directory} was removed)")
                gone_nicks.append(nick)
                if self.on_nick_leave:
                    self.on_nick_leave(nick)
                del self.active_nicks[nick]
            elif not self.is_nick_active(nick):
                # Still tracked on some directories but marked as gone on all
                logger.info(
                    f"Nick {nick} is gone from all remaining directories "
                    f"after removing {directory}"
                )
                gone_nicks.append(nick)
                if self.on_nick_leave:
                    self.on_nick_leave(nick)
                del self.active_nicks[nick]

    if gone_nicks:
        logger.info(
            f"After removing directory {directory}, {len(gone_nicks)} nicks are gone: "
            f"{gone_nicks}"
        )

    return gone_nicks

Remove a directory from tracking (when connection is lost).

Returns list of nicks that became completely gone after removing this directory.

Args

directory
The directory to remove

Returns

List of nicks that are no longer active after removing this directory

def sync_with_peerlist(self, directory: TDirectory, active_nicks: set[str]) ‑> None
Expand source code
def sync_with_peerlist(self, directory: TDirectory, active_nicks: set[str]) -> None:
    """
    Synchronize nick tracking with a directory's peerlist.

    This is called after fetching a peerlist from a directory to update
    the nick tracking state. Nicks not in the peerlist are marked as gone
    from that directory.

    Args:
        directory: The directory reporting the peerlist
        active_nicks: Set of nicks currently active on this directory
    """
    # First, mark all nicks in the peerlist as present
    for nick in active_nicks:
        self.mark_nick_present(nick, directory)

    # Then, mark nicks we're tracking but not in this peerlist as gone from this directory
    for nick in list(self.active_nicks.keys()):
        if directory in self.active_nicks[nick] and nick not in active_nicks:
            self.mark_nick_gone(nick, directory)

Synchronize nick tracking with a directory's peerlist.

This is called after fetching a peerlist from a directory to update the nick tracking state. Nicks not in the peerlist are marked as gone from that directory.

Args

directory
The directory reporting the peerlist
active_nicks
Set of nicks currently active on this directory
def update_nick(self, nick: str, directory: TDirectory, is_present: bool) ‑> None
Expand source code
def update_nick(self, nick: str, directory: TDirectory, is_present: bool) -> None:
    """
    Update a nick's presence status on a specific directory.

    Args:
        nick: The nick to update
        directory: The directory reporting the status
        is_present: True if nick is present on this directory, False if gone
    """
    if nick not in self.active_nicks:
        self.active_nicks[nick] = {}

    old_status = self.active_nicks[nick].get(directory)
    self.active_nicks[nick][directory] = is_present

    # Check if this update causes the nick to be completely gone
    if not is_present and old_status is True:
        # Nick just disappeared from this directory
        # Check if it's still present on any other directory
        if not self.is_nick_active(nick):
            logger.info(
                f"Nick {nick} has left all directories "
                f"(directories: {list(self.active_nicks[nick].keys())})"
            )
            if self.on_nick_leave:
                self.on_nick_leave(nick)
            # Clean up the entry
            del self.active_nicks[nick]
    elif is_present and old_status is False:
        logger.debug(
            f"Nick {nick} returned to directory {directory} (was previously marked gone)"
        )

Update a nick's presence status on a specific directory.

Args

nick
The nick to update
directory
The directory reporting the status
is_present
True if nick is present on this directory, False if gone
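
A sketch of the multi-directory semantics (import path assumed; plain strings stand in for directory objects, since the class is generic over the directory type):

# Sketch only: import path assumed.
from jmcore import NickTracker

gone = []
tracker = NickTracker(on_nick_leave=gone.append)

tracker.mark_nick_present("J5maker1", "dir1.onion")
tracker.mark_nick_present("J5maker1", "dir2.onion")

# Leaving one directory does not make the nick inactive...
tracker.mark_nick_gone("J5maker1", "dir1.onion")
print(tracker.is_nick_active("J5maker1"))                    # True
print(tracker.get_active_directories_for_nick("J5maker1"))   # ['dir2.onion']

# ...only leaving the last one does, which fires the callback.
tracker.mark_nick_gone("J5maker1", "dir2.onion")
print(gone)                                                   # ['J5maker1']
print(tracker.get_all_active_nicks())                         # set()
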
class NotificationConfig (**data: Any)
Expand source code
class NotificationConfig(BaseModel):
    """
    Configuration for the notification system.

    All configuration is loaded from environment variables.
    """

    # Core settings
    enabled: bool = Field(
        default=False,
        description="Master switch for notifications",
    )
    urls: list[SecretStr] = Field(
        default_factory=list,
        description="List of Apprise notification URLs",
    )
    title_prefix: str = Field(
        default="JoinMarket NG",
        description="Prefix for all notification titles",
    )
    component_name: str = Field(
        default="",
        description="Component name to include in notification titles (e.g., 'Maker', 'Taker')",
    )

    # Privacy settings - exclude sensitive data from notifications
    include_amounts: bool = Field(
        default=True,
        description="Include amounts in notifications",
    )
    include_txids: bool = Field(
        default=False,
        description="Include transaction IDs in notifications (privacy risk)",
    )
    include_nick: bool = Field(
        default=True,
        description="Include peer nicks in notifications",
    )

    # Tor/Proxy settings
    use_tor: bool = Field(
        default=True,
        description="Route notifications through Tor SOCKS proxy",
    )
    tor_socks_host: str = Field(
        default="127.0.0.1",
        description="Tor SOCKS5 proxy host (only used if use_tor=True)",
    )
    tor_socks_port: int = Field(
        default=9050,
        ge=1,
        le=65535,
        description="Tor SOCKS5 proxy port (only used if use_tor=True)",
    )

    # Event type toggles (all enabled by default if notifications are enabled)
    notify_fill: bool = Field(default=True, description="Notify on !fill requests")
    notify_rejection: bool = Field(default=True, description="Notify on rejections")
    notify_signing: bool = Field(default=True, description="Notify on tx signing")
    notify_mempool: bool = Field(default=True, description="Notify on mempool detection")
    notify_confirmed: bool = Field(default=True, description="Notify on confirmation")
    notify_nick_change: bool = Field(default=True, description="Notify on nick change")
    notify_disconnect: bool = Field(default=True, description="Notify on directory disconnect")
    notify_coinjoin_start: bool = Field(default=True, description="Notify on CoinJoin start")
    notify_coinjoin_complete: bool = Field(default=True, description="Notify on CoinJoin complete")
    notify_coinjoin_failed: bool = Field(default=True, description="Notify on CoinJoin failure")
    notify_peer_events: bool = Field(default=False, description="Notify on peer connect/disconnect")
    notify_rate_limit: bool = Field(default=True, description="Notify on rate limit bans")
    notify_startup: bool = Field(default=True, description="Notify on component startup")

    model_config = {"frozen": False}

Configuration for the notification system.

All configuration is loaded from environment variables.

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Ancestors

  • pydantic.main.BaseModel

Class variables

var component_name : str

Component name to include in notification titles (e.g., 'Maker', 'Taker').

var enabled : bool

Master switch for notifications.

var include_amounts : bool

Include amounts in notifications.

var include_nick : bool

Include peer nicks in notifications.

var include_txids : bool

Include transaction IDs in notifications (privacy risk).

var model_config

Pydantic model configuration (frozen=False).

var notify_coinjoin_complete : bool

Notify on CoinJoin complete.

var notify_coinjoin_failed : bool

Notify on CoinJoin failure.

var notify_coinjoin_start : bool

Notify on CoinJoin start.

var notify_confirmed : bool

Notify on confirmation.

var notify_disconnect : bool

Notify on directory disconnect.

var notify_fill : bool

Notify on !fill requests.

var notify_mempool : bool

Notify on mempool detection.

var notify_nick_change : bool

Notify on nick change.

var notify_peer_events : bool

Notify on peer connect/disconnect.

var notify_rate_limit : bool

Notify on rate limit bans.

var notify_rejection : bool

Notify on rejections.

var notify_signing : bool

Notify on tx signing.

var notify_startup : bool

Notify on component startup.

var title_prefix : str

Prefix for all notification titles.

var tor_socks_host : str

Tor SOCKS5 proxy host (only used if use_tor=True).

var tor_socks_port : int

Tor SOCKS5 proxy port (only used if use_tor=True).

var urls : list[pydantic.types.SecretStr]

List of Apprise notification URLs.

var use_tor : bool

Route notifications through Tor SOCKS proxy.
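
A sketch of constructing the configuration directly instead of loading it from environment variables (import paths assumed; the notification URL is a placeholder):

# Sketch only: import paths assumed; the Telegram URL is a placeholder.
from pydantic import SecretStr
from jmcore import NotificationConfig, Notifier

config = NotificationConfig(
    enabled=True,
    urls=[SecretStr("tgram://bottoken/ChatID")],
    component_name="Maker",
    include_txids=False,   # keep transaction IDs out of notification payloads
    use_tor=True,          # routed via socks5h://127.0.0.1:9050 by default
)

notifier = Notifier(config=config)
# Apprise is initialized lazily on the first send; failures are logged,
# never raised, so notifications cannot block protocol operations.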

class NotificationPriority (*values)
Expand source code
class NotificationPriority(str, Enum):
    """Notification priority levels (maps to Apprise NotifyType)."""

    INFO = "info"
    SUCCESS = "success"
    WARNING = "warning"
    FAILURE = "failure"

Notification priority levels (maps to Apprise NotifyType).

Ancestors

  • builtins.str
  • enum.Enum

Class variables

var FAILURE

Priority value "failure".

var INFO

Priority value "info".

var SUCCESS

Priority value "success".

var WARNING

Priority value "warning".

class NotificationSettings (**data: Any)
Expand source code
class NotificationSettings(BaseModel):
    """Notification system configuration."""

    enabled: bool = Field(
        default=False,
        description="Enable notifications (requires urls to be set)",
    )
    urls: list[str] = Field(
        default_factory=list,
        description='Apprise notification URLs (e.g., ["tgram://bottoken/ChatID", "gotify://hostname/token"])',
    )
    title_prefix: str = Field(
        default="JoinMarket NG",
        description="Prefix for notification titles",
    )
    component_name: str = Field(
        default="",
        description="Component name in notification titles (e.g., 'Maker', 'Taker'). "
        "Usually set programmatically by each component.",
    )
    include_amounts: bool = Field(
        default=True,
        description="Include amounts in notifications",
    )
    include_txids: bool = Field(
        default=False,
        description="Include transaction IDs in notifications (privacy risk)",
    )
    include_nick: bool = Field(
        default=True,
        description="Include peer nicks in notifications",
    )
    use_tor: bool = Field(
        default=True,
        description="Route notifications through Tor SOCKS proxy",
    )
    # Event type toggles
    notify_fill: bool = Field(default=True, description="Notify on !fill requests")
    notify_rejection: bool = Field(default=True, description="Notify on rejections")
    notify_signing: bool = Field(default=True, description="Notify on transaction signing")
    notify_mempool: bool = Field(default=True, description="Notify on mempool detection")
    notify_confirmed: bool = Field(default=True, description="Notify on confirmation")
    notify_nick_change: bool = Field(default=True, description="Notify on nick change")
    notify_disconnect: bool = Field(default=True, description="Notify on directory disconnect")
    notify_coinjoin_start: bool = Field(default=True, description="Notify on CoinJoin start")
    notify_coinjoin_complete: bool = Field(default=True, description="Notify on CoinJoin complete")
    notify_coinjoin_failed: bool = Field(default=True, description="Notify on CoinJoin failure")
    notify_peer_events: bool = Field(default=False, description="Notify on peer connect/disconnect")
    notify_rate_limit: bool = Field(default=True, description="Notify on rate limit bans")
    notify_startup: bool = Field(default=True, description="Notify on component startup")

Notification system configuration.

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Ancestors

  • pydantic.main.BaseModel

Class variables

var component_name : str

Component name in notification titles (e.g., 'Maker', 'Taker'). Usually set programmatically by each component.

var enabled : bool

Enable notifications (requires urls to be set).

var include_amounts : bool

Include amounts in notifications.

var include_nick : bool

Include peer nicks in notifications.

var include_txids : bool

Include transaction IDs in notifications (privacy risk).

var model_config

Pydantic model configuration.

var notify_coinjoin_complete : bool

Notify on CoinJoin complete.

var notify_coinjoin_failed : bool

Notify on CoinJoin failure.

var notify_coinjoin_start : bool

Notify on CoinJoin start.

var notify_confirmed : bool

Notify on confirmation.

var notify_disconnect : bool

Notify on directory disconnect.

var notify_fill : bool

Notify on !fill requests.

var notify_mempool : bool

Notify on mempool detection.

var notify_nick_change : bool

Notify on nick change.

var notify_peer_events : bool

Notify on peer connect/disconnect.

var notify_rate_limit : bool

Notify on rate limit bans.

var notify_rejection : bool

Notify on rejections.

var notify_signing : bool

Notify on transaction signing.

var notify_startup : bool

Notify on component startup.

var title_prefix : str

Prefix for notification titles.

var urls : list[str]

Apprise notification URLs (e.g., ["tgram://bottoken/ChatID", "gotify://hostname/token"]).

var use_tor : bool

Route notifications through Tor SOCKS proxy.

class Notifier (config: NotificationConfig | None = None)
Expand source code
class Notifier:
    """
    Notification sender using Apprise.

    Thread-safe and async-friendly. Notification failures are logged but
    don't raise exceptions - notifications should never block protocol operations.
    """

    def __init__(self, config: NotificationConfig | None = None):
        """
        Initialize the notifier.

        Args:
            config: Notification configuration. If None, loads from environment.
        """
        self.config = config or load_notification_config()
        self._apprise: Any | None = None
        self._initialized = False
        self._lock = asyncio.Lock()

    async def _ensure_initialized(self) -> bool:
        """Lazily initialize Apprise. Returns True if ready to send."""
        if not self.config.enabled or not self.config.urls:
            return False

        if self._initialized:
            return self._apprise is not None

        async with self._lock:
            if self._initialized:
                return self._apprise is not None

            try:
                import apprise

                # Configure proxy environment variables if Tor is enabled
                if self.config.use_tor:
                    # Use the Tor configuration from settings
                    tor_host = self.config.tor_socks_host
                    tor_port = self.config.tor_socks_port
                    # Use socks5h:// to resolve DNS through the proxy (important for .onion)
                    proxy_url = f"socks5h://{tor_host}:{tor_port}"
                    # Set environment variables that Apprise/requests will use
                    os.environ["HTTP_PROXY"] = proxy_url
                    os.environ["HTTPS_PROXY"] = proxy_url
                    logger.info(f"Configuring notifications to route through Tor: {proxy_url}")

                self._apprise = apprise.Apprise()

                # Use longer timeout for Tor connections (default is 4s, too short for Tor)
                # Tor circuit establishment can take 10-30 seconds
                # Use Apprise's cto (connection timeout) and rto (read timeout) URL parameters
                for secret_url in self.config.urls:
                    # Get the actual URL string from SecretStr
                    url = secret_url.get_secret_value()

                    if self.config.use_tor:
                        # Append timeout parameters to URL for Tor connections
                        # cto = connection timeout, rto = read timeout (both in seconds)
                        timeout_params = "cto=30&rto=30"
                        if "?" in url:
                            url_with_timeout = f"{url}&{timeout_params}"
                        else:
                            url_with_timeout = f"{url}?{timeout_params}"
                    else:
                        url_with_timeout = url

                    if not self._apprise.add(url_with_timeout):
                        logger.warning(f"Failed to add notification URL: {url[:30]}...")

                if len(self._apprise) == 0:
                    logger.warning("No valid notification URLs configured")
                    self._apprise = None
                else:
                    logger.info(f"Notifications enabled with {len(self._apprise)} service(s)")

            except ImportError:
                logger.warning(
                    "Apprise not installed. Install with: pip install apprise\n"
                    "Notifications will be disabled."
                )
                self._apprise = None
            except Exception as e:
                logger.warning(f"Failed to initialize notifications: {e}")
                self._apprise = None

            self._initialized = True
            return self._apprise is not None

    async def _send(
        self,
        title: str,
        body: str,
        priority: NotificationPriority = NotificationPriority.INFO,
    ) -> bool:
        """
        Send a notification via Apprise.

        Args:
            title: Notification title (will be prefixed)
            body: Notification body
            priority: Notification priority

        Returns:
            True if sent successfully to at least one service
        """
        if not await self._ensure_initialized():
            return False

        # At this point, _apprise is guaranteed to be initialized
        assert self._apprise is not None
        apprise_instance = self._apprise  # Bind to local for type narrowing

        try:
            import apprise

            # Map our priority to Apprise NotifyType
            notify_type = {
                NotificationPriority.INFO: apprise.NotifyType.INFO,
                NotificationPriority.SUCCESS: apprise.NotifyType.SUCCESS,
                NotificationPriority.WARNING: apprise.NotifyType.WARNING,
                NotificationPriority.FAILURE: apprise.NotifyType.FAILURE,
            }.get(priority, apprise.NotifyType.INFO)

            # Build title: "JoinMarket NG (Maker): Title" or "JoinMarket NG: Title" if no component
            if self.config.component_name:
                full_title = f"{self.config.title_prefix} ({self.config.component_name}): {title}"
            else:
                full_title = f"{self.config.title_prefix}: {title}"

            # Send asynchronously if apprise supports it, otherwise in executor
            if hasattr(apprise_instance, "async_notify"):
                result = await apprise_instance.async_notify(
                    title=full_title,
                    body=body,
                    notify_type=notify_type,
                )
            else:
                # Run synchronous notify in thread pool
                loop = asyncio.get_event_loop()
                result = await loop.run_in_executor(
                    None,
                    lambda: apprise_instance.notify(
                        title=full_title,
                        body=body,
                        notify_type=notify_type,
                    ),
                )

            if not result:
                logger.warning(
                    f"Notification failed: {title}. "
                    "Check Tor connectivity and notification service URL. "
                    "Ensure PySocks is installed for SOCKS proxy support."
                )
            else:
                logger.debug(f"Notification sent: {title}")
            return result

        except Exception as e:
            logger.warning(f"Failed to send notification '{title}': {e}")
            return False

    def _format_amount(self, sats: int) -> str:
        """Format satoshi amount for display."""
        if not self.config.include_amounts:
            return "[hidden]"
        if sats >= 100_000_000:
            return f"{sats / 100_000_000:.4f} BTC"
        return f"{sats:,} sats"

    def _format_nick(self, nick: str) -> str:
        """Format nick for display."""
        if not self.config.include_nick:
            return "[hidden]"
        return nick

    def _format_txid(self, txid: str) -> str:
        """Format txid for display."""
        if not self.config.include_txids:
            return "[hidden]"
        return f"{txid[:16]}..."

    # =========================================================================
    # Maker notifications
    # =========================================================================

    async def notify_fill_request(
        self,
        taker_nick: str,
        cj_amount: int,
        offer_id: int,
    ) -> bool:
        """Notify when a !fill request is received (maker)."""
        if not self.config.notify_fill:
            return False

        return await self._send(
            title="Fill Request Received",
            body=(
                f"Taker: {self._format_nick(taker_nick)}\n"
                f"Amount: {self._format_amount(cj_amount)}\n"
                f"Offer ID: {offer_id}"
            ),
            priority=NotificationPriority.INFO,
        )

    async def notify_rejection(
        self,
        taker_nick: str,
        reason: str,
        details: str = "",
    ) -> bool:
        """Notify when rejecting a taker request (maker)."""
        if not self.config.notify_rejection:
            return False

        body = f"Taker: {self._format_nick(taker_nick)}\nReason: {reason}"
        if details:
            body += f"\nDetails: {details}"

        return await self._send(
            title="Request Rejected",
            body=body,
            priority=NotificationPriority.WARNING,
        )

    async def notify_tx_signed(
        self,
        taker_nick: str,
        cj_amount: int,
        num_inputs: int,
        fee_earned: int,
    ) -> bool:
        """Notify when transaction is signed (maker)."""
        if not self.config.notify_signing:
            return False

        return await self._send(
            title="Transaction Signed",
            body=(
                f"Taker: {self._format_nick(taker_nick)}\n"
                f"CJ Amount: {self._format_amount(cj_amount)}\n"
                f"Inputs signed: {num_inputs}\n"
                f"Fee earned: {self._format_amount(fee_earned)}"
            ),
            priority=NotificationPriority.SUCCESS,
        )

    async def notify_mempool(
        self,
        txid: str,
        cj_amount: int,
        role: str = "maker",
    ) -> bool:
        """Notify when CoinJoin is seen in mempool."""
        if not self.config.notify_mempool:
            return False

        return await self._send(
            title="CoinJoin in Mempool",
            body=(
                f"Role: {role.capitalize()}\n"
                f"TxID: {self._format_txid(txid)}\n"
                f"Amount: {self._format_amount(cj_amount)}"
            ),
            priority=NotificationPriority.INFO,
        )

    async def notify_confirmed(
        self,
        txid: str,
        cj_amount: int,
        confirmations: int,
        role: str = "maker",
    ) -> bool:
        """Notify when CoinJoin is confirmed."""
        if not self.config.notify_confirmed:
            return False

        return await self._send(
            title="CoinJoin Confirmed",
            body=(
                f"Role: {role.capitalize()}\n"
                f"TxID: {self._format_txid(txid)}\n"
                f"Amount: {self._format_amount(cj_amount)}\n"
                f"Confirmations: {confirmations}"
            ),
            priority=NotificationPriority.SUCCESS,
        )

    async def notify_nick_change(
        self,
        old_nick: str,
        new_nick: str,
    ) -> bool:
        """Notify when maker nick changes (privacy feature)."""
        if not self.config.notify_nick_change:
            return False

        return await self._send(
            title="Nick Changed",
            body=(f"Old: {self._format_nick(old_nick)}\nNew: {self._format_nick(new_nick)}"),
            priority=NotificationPriority.INFO,
        )

    async def notify_directory_disconnect(
        self,
        server: str,
        connected_count: int,
        total_count: int,
        reconnecting: bool = True,
    ) -> bool:
        """Notify when disconnected from a directory server."""
        if not self.config.notify_disconnect:
            return False

        status = "reconnecting" if reconnecting else "disconnected"
        priority = NotificationPriority.WARNING
        if connected_count == 0:
            priority = NotificationPriority.FAILURE

        return await self._send(
            title="Directory Server Disconnected",
            body=(
                f"Server: {server[:30]}...\n"
                f"Status: {status}\n"
                f"Connected: {connected_count}/{total_count}"
            ),
            priority=priority,
        )

    async def notify_all_directories_disconnected(self) -> bool:
        """Notify when disconnected from ALL directory servers (critical)."""
        return await self._send(
            title="CRITICAL: All Directories Disconnected",
            body=(
                "Lost connection to ALL directory servers.\n"
                "No CoinJoins possible until reconnected.\n"
                "Check network connectivity and Tor status."
            ),
            priority=NotificationPriority.FAILURE,
        )

    async def notify_directory_reconnect(
        self,
        server: str,
        connected_count: int,
        total_count: int,
    ) -> bool:
        """Notify when successfully reconnected to a directory server."""
        if not self.config.notify_disconnect:
            return False

        return await self._send(
            title="Directory Server Reconnected",
            body=(f"Server: {server[:30]}...\nConnected: {connected_count}/{total_count}"),
            priority=NotificationPriority.SUCCESS,
        )

    # =========================================================================
    # Taker notifications
    # =========================================================================

    async def notify_coinjoin_start(
        self,
        cj_amount: int,
        num_makers: int,
        destination: str,
    ) -> bool:
        """Notify when CoinJoin is initiated (taker)."""
        if not self.config.notify_coinjoin_start:
            return False

        dest_display = "internal" if destination == "INTERNAL" else f"{destination[:12]}..."

        return await self._send(
            title="CoinJoin Started",
            body=(
                f"Amount: {self._format_amount(cj_amount)}\n"
                f"Makers: {num_makers}\n"
                f"Destination: {dest_display}"
            ),
            priority=NotificationPriority.INFO,
        )

    async def notify_coinjoin_complete(
        self,
        txid: str,
        cj_amount: int,
        num_makers: int,
        total_fees: int,
    ) -> bool:
        """Notify when CoinJoin completes successfully (taker)."""
        if not self.config.notify_coinjoin_complete:
            return False

        return await self._send(
            title="CoinJoin Complete",
            body=(
                f"TxID: {self._format_txid(txid)}\n"
                f"Amount: {self._format_amount(cj_amount)}\n"
                f"Makers: {num_makers}\n"
                f"Total fees: {self._format_amount(total_fees)}"
            ),
            priority=NotificationPriority.SUCCESS,
        )

    async def notify_coinjoin_failed(
        self,
        reason: str,
        phase: str = "",
        cj_amount: int = 0,
    ) -> bool:
        """Notify when CoinJoin fails (taker)."""
        if not self.config.notify_coinjoin_failed:
            return False

        body = f"Reason: {reason}"
        if phase:
            body = f"Phase: {phase}\n" + body
        if cj_amount > 0:
            body += f"\nAmount: {self._format_amount(cj_amount)}"

        return await self._send(
            title="CoinJoin Failed",
            body=body,
            priority=NotificationPriority.FAILURE,
        )

    # =========================================================================
    # Directory server notifications
    # =========================================================================

    async def notify_peer_connected(
        self,
        nick: str,
        location: str,
        total_peers: int,
    ) -> bool:
        """Notify when a new peer connects (directory server)."""
        if not self.config.notify_peer_events:
            return False

        return await self._send(
            title="Peer Connected",
            body=(
                f"Nick: {self._format_nick(nick)}\n"
                f"Location: {location[:30]}...\n"
                f"Total peers: {total_peers}"
            ),
            priority=NotificationPriority.INFO,
        )

    async def notify_peer_disconnected(
        self,
        nick: str,
        total_peers: int,
    ) -> bool:
        """Notify when a peer disconnects (directory server)."""
        if not self.config.notify_peer_events:
            return False

        return await self._send(
            title="Peer Disconnected",
            body=(f"Nick: {self._format_nick(nick)}\nRemaining peers: {total_peers}"),
            priority=NotificationPriority.INFO,
        )

    async def notify_peer_banned(
        self,
        nick: str,
        reason: str,
        duration: int,
    ) -> bool:
        """Notify when a peer is banned for rate limit violations."""
        if not self.config.notify_rate_limit:
            return False

        return await self._send(
            title="Peer Banned",
            body=(f"Nick: {self._format_nick(nick)}\nReason: {reason}\nDuration: {duration}s"),
            priority=NotificationPriority.WARNING,
        )

    # =========================================================================
    # Orderbook watcher notifications
    # =========================================================================

    async def notify_orderbook_status(
        self,
        connected_directories: int,
        total_directories: int,
        total_offers: int,
        total_makers: int,
    ) -> bool:
        """Notify orderbook status summary."""
        return await self._send(
            title="Orderbook Status",
            body=(
                f"Directories: {connected_directories}/{total_directories}\n"
                f"Offers: {total_offers}\n"
                f"Makers: {total_makers}"
            ),
            priority=NotificationPriority.INFO,
        )

    async def notify_maker_offline(
        self,
        nick: str,
        last_seen: str,
    ) -> bool:
        """Notify when a maker goes offline."""
        return await self._send(
            title="Maker Offline",
            body=(f"Nick: {self._format_nick(nick)}\nLast seen: {last_seen}"),
            priority=NotificationPriority.INFO,
        )

    # =========================================================================
    # Generic notification
    # =========================================================================

    async def notify_startup(
        self,
        component: str,
        version: str = "",
        network: str = "",
        nick: str = "",
    ) -> bool:
        """
        Notify when a component starts up.

        Args:
            component: Component name (e.g., "Maker", "Taker", "Directory", "Orderbook Watcher")
            version: Optional version string
            network: Optional network name (e.g., "mainnet", "signet")
            nick: Optional component nick (e.g., "J5XXXXXXXXX")
        """
        if not self.config.notify_startup:
            return False

        body = f"Component: {component}"
        if nick:
            body += f"\nNick: {self._format_nick(nick)}"
        if version:
            body += f"\nVersion: {version}"
        if network:
            body += f"\nNetwork: {network}"

        return await self._send(
            title="Component Started",
            body=body,
            priority=NotificationPriority.INFO,
        )

    async def notify(
        self,
        title: str,
        body: str,
        priority: NotificationPriority = NotificationPriority.INFO,
    ) -> bool:
        """Send a generic notification."""
        return await self._send(title, body, priority)

Notification sender using Apprise.

Thread-safe and async-friendly. Notification failures are logged but don't raise exceptions - notifications should never block protocol operations.

Initialize the notifier.

Args

config
Notification configuration. If None, loads from environment.
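
A usage sketch (assuming the class is importable from the package root; no error handling is needed because failures are only logged):

    import asyncio

    from jmcore import Notifier  # import location assumed

    async def main() -> None:
        notifier = Notifier()  # with no config, settings are loaded from the environment
        await notifier.notify_startup(component="Maker", version="0.1.0", network="signet")
        await notifier.notify("Test", "Notifications are working")

    asyncio.run(main())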

Methods

async def notify(self,
title: str,
body: str,
priority: NotificationPriority = NotificationPriority.INFO) ‑> bool
Expand source code
async def notify(
    self,
    title: str,
    body: str,
    priority: NotificationPriority = NotificationPriority.INFO,
) -> bool:
    """Send a generic notification."""
    return await self._send(title, body, priority)

Send a generic notification.

async def notify_all_directories_disconnected(self) ‑> bool
Expand source code
async def notify_all_directories_disconnected(self) -> bool:
    """Notify when disconnected from ALL directory servers (critical)."""
    return await self._send(
        title="CRITICAL: All Directories Disconnected",
        body=(
            "Lost connection to ALL directory servers.\n"
            "No CoinJoins possible until reconnected.\n"
            "Check network connectivity and Tor status."
        ),
        priority=NotificationPriority.FAILURE,
    )

Notify when disconnected from ALL directory servers (critical).

async def notify_coinjoin_complete(self, txid: str, cj_amount: int, num_makers: int, total_fees: int) ‑> bool
Expand source code
async def notify_coinjoin_complete(
    self,
    txid: str,
    cj_amount: int,
    num_makers: int,
    total_fees: int,
) -> bool:
    """Notify when CoinJoin completes successfully (taker)."""
    if not self.config.notify_coinjoin_complete:
        return False

    return await self._send(
        title="CoinJoin Complete",
        body=(
            f"TxID: {self._format_txid(txid)}\n"
            f"Amount: {self._format_amount(cj_amount)}\n"
            f"Makers: {num_makers}\n"
            f"Total fees: {self._format_amount(total_fees)}"
        ),
        priority=NotificationPriority.SUCCESS,
    )

Notify when CoinJoin completes successfully (taker).

async def notify_coinjoin_failed(self, reason: str, phase: str = '', cj_amount: int = 0) ‑> bool
Expand source code
async def notify_coinjoin_failed(
    self,
    reason: str,
    phase: str = "",
    cj_amount: int = 0,
) -> bool:
    """Notify when CoinJoin fails (taker)."""
    if not self.config.notify_coinjoin_failed:
        return False

    body = f"Reason: {reason}"
    if phase:
        body = f"Phase: {phase}\n" + body
    if cj_amount > 0:
        body += f"\nAmount: {self._format_amount(cj_amount)}"

    return await self._send(
        title="CoinJoin Failed",
        body=body,
        priority=NotificationPriority.FAILURE,
    )

Notify when CoinJoin fails (taker).

async def notify_coinjoin_start(self, cj_amount: int, num_makers: int, destination: str) ‑> bool
Expand source code
async def notify_coinjoin_start(
    self,
    cj_amount: int,
    num_makers: int,
    destination: str,
) -> bool:
    """Notify when CoinJoin is initiated (taker)."""
    if not self.config.notify_coinjoin_start:
        return False

    dest_display = "internal" if destination == "INTERNAL" else f"{destination[:12]}..."

    return await self._send(
        title="CoinJoin Started",
        body=(
            f"Amount: {self._format_amount(cj_amount)}\n"
            f"Makers: {num_makers}\n"
            f"Destination: {dest_display}"
        ),
        priority=NotificationPriority.INFO,
    )

Notify when CoinJoin is initiated (taker).

async def notify_confirmed(self, txid: str, cj_amount: int, confirmations: int, role: str = 'maker') ‑> bool
Expand source code
async def notify_confirmed(
    self,
    txid: str,
    cj_amount: int,
    confirmations: int,
    role: str = "maker",
) -> bool:
    """Notify when CoinJoin is confirmed."""
    if not self.config.notify_confirmed:
        return False

    return await self._send(
        title="CoinJoin Confirmed",
        body=(
            f"Role: {role.capitalize()}\n"
            f"TxID: {self._format_txid(txid)}\n"
            f"Amount: {self._format_amount(cj_amount)}\n"
            f"Confirmations: {confirmations}"
        ),
        priority=NotificationPriority.SUCCESS,
    )

Notify when CoinJoin is confirmed.

async def notify_directory_disconnect(self,
server: str,
connected_count: int,
total_count: int,
reconnecting: bool = True) ‑> bool
Expand source code
async def notify_directory_disconnect(
    self,
    server: str,
    connected_count: int,
    total_count: int,
    reconnecting: bool = True,
) -> bool:
    """Notify when disconnected from a directory server."""
    if not self.config.notify_disconnect:
        return False

    status = "reconnecting" if reconnecting else "disconnected"
    priority = NotificationPriority.WARNING
    if connected_count == 0:
        priority = NotificationPriority.FAILURE

    return await self._send(
        title="Directory Server Disconnected",
        body=(
            f"Server: {server[:30]}...\n"
            f"Status: {status}\n"
            f"Connected: {connected_count}/{total_count}"
        ),
        priority=priority,
    )

Notify when disconnected from a directory server.

async def notify_directory_reconnect(self, server: str, connected_count: int, total_count: int) ‑> bool
Expand source code
async def notify_directory_reconnect(
    self,
    server: str,
    connected_count: int,
    total_count: int,
) -> bool:
    """Notify when successfully reconnected to a directory server."""
    if not self.config.notify_disconnect:
        return False

    return await self._send(
        title="Directory Server Reconnected",
        body=(f"Server: {server[:30]}...\nConnected: {connected_count}/{total_count}"),
        priority=NotificationPriority.SUCCESS,
    )

Notify when successfully reconnected to a directory server.

async def notify_fill_request(self, taker_nick: str, cj_amount: int, offer_id: int) ‑> bool
Expand source code
async def notify_fill_request(
    self,
    taker_nick: str,
    cj_amount: int,
    offer_id: int,
) -> bool:
    """Notify when a !fill request is received (maker)."""
    if not self.config.notify_fill:
        return False

    return await self._send(
        title="Fill Request Received",
        body=(
            f"Taker: {self._format_nick(taker_nick)}\n"
            f"Amount: {self._format_amount(cj_amount)}\n"
            f"Offer ID: {offer_id}"
        ),
        priority=NotificationPriority.INFO,
    )

Notify when a !fill request is received (maker).

async def notify_maker_offline(self, nick: str, last_seen: str) ‑> bool
Expand source code
async def notify_maker_offline(
    self,
    nick: str,
    last_seen: str,
) -> bool:
    """Notify when a maker goes offline."""
    return await self._send(
        title="Maker Offline",
        body=(f"Nick: {self._format_nick(nick)}\nLast seen: {last_seen}"),
        priority=NotificationPriority.INFO,
    )

Notify when a maker goes offline.

async def notify_mempool(self, txid: str, cj_amount: int, role: str = 'maker') ‑> bool
Expand source code
async def notify_mempool(
    self,
    txid: str,
    cj_amount: int,
    role: str = "maker",
) -> bool:
    """Notify when CoinJoin is seen in mempool."""
    if not self.config.notify_mempool:
        return False

    return await self._send(
        title="CoinJoin in Mempool",
        body=(
            f"Role: {role.capitalize()}\n"
            f"TxID: {self._format_txid(txid)}\n"
            f"Amount: {self._format_amount(cj_amount)}"
        ),
        priority=NotificationPriority.INFO,
    )

Notify when CoinJoin is seen in mempool.

async def notify_nick_change(self, old_nick: str, new_nick: str) ‑> bool
Expand source code
async def notify_nick_change(
    self,
    old_nick: str,
    new_nick: str,
) -> bool:
    """Notify when maker nick changes (privacy feature)."""
    if not self.config.notify_nick_change:
        return False

    return await self._send(
        title="Nick Changed",
        body=(f"Old: {self._format_nick(old_nick)}\nNew: {self._format_nick(new_nick)}"),
        priority=NotificationPriority.INFO,
    )

Notify when maker nick changes (privacy feature).

async def notify_orderbook_status(self,
connected_directories: int,
total_directories: int,
total_offers: int,
total_makers: int) ‑> bool
Expand source code
async def notify_orderbook_status(
    self,
    connected_directories: int,
    total_directories: int,
    total_offers: int,
    total_makers: int,
) -> bool:
    """Notify orderbook status summary."""
    return await self._send(
        title="Orderbook Status",
        body=(
            f"Directories: {connected_directories}/{total_directories}\n"
            f"Offers: {total_offers}\n"
            f"Makers: {total_makers}"
        ),
        priority=NotificationPriority.INFO,
    )

Notify orderbook status summary.

async def notify_peer_banned(self, nick: str, reason: str, duration: int) ‑> bool
Expand source code
async def notify_peer_banned(
    self,
    nick: str,
    reason: str,
    duration: int,
) -> bool:
    """Notify when a peer is banned for rate limit violations."""
    if not self.config.notify_rate_limit:
        return False

    return await self._send(
        title="Peer Banned",
        body=(f"Nick: {self._format_nick(nick)}\nReason: {reason}\nDuration: {duration}s"),
        priority=NotificationPriority.WARNING,
    )

Notify when a peer is banned for rate limit violations.

async def notify_peer_connected(self, nick: str, location: str, total_peers: int) ‑> bool
Expand source code
async def notify_peer_connected(
    self,
    nick: str,
    location: str,
    total_peers: int,
) -> bool:
    """Notify when a new peer connects (directory server)."""
    if not self.config.notify_peer_events:
        return False

    return await self._send(
        title="Peer Connected",
        body=(
            f"Nick: {self._format_nick(nick)}\n"
            f"Location: {location[:30]}...\n"
            f"Total peers: {total_peers}"
        ),
        priority=NotificationPriority.INFO,
    )

Notify when a new peer connects (directory server).

async def notify_peer_disconnected(self, nick: str, total_peers: int) ‑> bool
Expand source code
async def notify_peer_disconnected(
    self,
    nick: str,
    total_peers: int,
) -> bool:
    """Notify when a peer disconnects (directory server)."""
    if not self.config.notify_peer_events:
        return False

    return await self._send(
        title="Peer Disconnected",
        body=(f"Nick: {self._format_nick(nick)}\nRemaining peers: {total_peers}"),
        priority=NotificationPriority.INFO,
    )

Notify when a peer disconnects (directory server).

async def notify_rejection(self, taker_nick: str, reason: str, details: str = '') ‑> bool
Expand source code
async def notify_rejection(
    self,
    taker_nick: str,
    reason: str,
    details: str = "",
) -> bool:
    """Notify when rejecting a taker request (maker)."""
    if not self.config.notify_rejection:
        return False

    body = f"Taker: {self._format_nick(taker_nick)}\nReason: {reason}"
    if details:
        body += f"\nDetails: {details}"

    return await self._send(
        title="Request Rejected",
        body=body,
        priority=NotificationPriority.WARNING,
    )

Notify when rejecting a taker request (maker).

async def notify_startup(self, component: str, version: str = '', network: str = '', nick: str = '') ‑> bool
Expand source code
async def notify_startup(
    self,
    component: str,
    version: str = "",
    network: str = "",
    nick: str = "",
) -> bool:
    """
    Notify when a component starts up.

    Args:
        component: Component name (e.g., "Maker", "Taker", "Directory", "Orderbook Watcher")
        version: Optional version string
        network: Optional network name (e.g., "mainnet", "signet")
        nick: Optional component nick (e.g., "J5XXXXXXXXX")
    """
    if not self.config.notify_startup:
        return False

    body = f"Component: {component}"
    if nick:
        body += f"\nNick: {self._format_nick(nick)}"
    if version:
        body += f"\nVersion: {version}"
    if network:
        body += f"\nNetwork: {network}"

    return await self._send(
        title="Component Started",
        body=body,
        priority=NotificationPriority.INFO,
    )

Notify when a component starts up.

Args

component
Component name (e.g., "Maker", "Taker", "Directory", "Orderbook Watcher")
version
Optional version string
network
Optional network name (e.g., "mainnet", "signet")
nick
Optional component nick (e.g., "J5XXXXXXXXX")
async def notify_tx_signed(self, taker_nick: str, cj_amount: int, num_inputs: int, fee_earned: int) ‑> bool
Expand source code
async def notify_tx_signed(
    self,
    taker_nick: str,
    cj_amount: int,
    num_inputs: int,
    fee_earned: int,
) -> bool:
    """Notify when transaction is signed (maker)."""
    if not self.config.notify_signing:
        return False

    return await self._send(
        title="Transaction Signed",
        body=(
            f"Taker: {self._format_nick(taker_nick)}\n"
            f"CJ Amount: {self._format_amount(cj_amount)}\n"
            f"Inputs signed: {num_inputs}\n"
            f"Fee earned: {self._format_amount(fee_earned)}"
        ),
        priority=NotificationPriority.SUCCESS,
    )

Notify when transaction is signed (maker).

class OfferWithTimestamp (offer: Offer, received_at: float, bond_utxo_key: str | None = None)
Expand source code
class OfferWithTimestamp:
    """Wrapper for Offer with metadata for staleness tracking."""

    __slots__ = ("offer", "received_at", "bond_utxo_key")

    def __init__(self, offer: Offer, received_at: float, bond_utxo_key: str | None = None) -> None:
        self.offer = offer
        self.received_at = received_at
        # Bond UTXO key (txid:vout) for deduplication across nick changes
        self.bond_utxo_key = bond_utxo_key

Wrapper for Offer with metadata for staleness tracking.

Instance variables

var bond_utxo_key : str | None

Bond UTXO key ("txid:vout") used for offer deduplication across nick changes.

var offer : Offer

The wrapped Offer.

var received_at : float

Timestamp at which the offer was received (used for staleness tracking).

class OrderbookWatcherSettings (**data: Any)
Expand source code
class OrderbookWatcherSettings(BaseModel):
    """Orderbook watcher specific settings."""

    http_host: str = Field(
        default="0.0.0.0",
        description="HTTP server bind address",
    )
    http_port: int = Field(
        default=8000,
        ge=1,
        le=65535,
        description="HTTP server port",
    )
    update_interval: int = Field(
        default=60,
        ge=10,
        description="Update interval in seconds",
    )
    mempool_api_url: str = Field(
        default="http://mempopwcaqoi7z5xj5zplfdwk5bgzyl3hemx725d4a3agado6xtk3kqd.onion/api",
        description="Mempool API URL for transaction lookups",
    )
    mempool_web_url: str | None = Field(
        default="https://mempool.sgn.space",
        description="Mempool web URL for human-readable links",
    )
    uptime_grace_period: int = Field(
        default=60,
        ge=0,
        description="Grace period before tracking uptime",
    )
    max_message_size: int = Field(
        default=2097152,
        ge=1024,
        description="Maximum message size in bytes (2MB default)",
    )
    connection_timeout: float = Field(
        default=30.0,
        gt=0.0,
        description="Connection timeout in seconds",
    )

Orderbook watcher specific settings.

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.
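
Since every field has a default, overriding only what you need works; a sketch (package-root import assumed):

    from jmcore import OrderbookWatcherSettings  # import location assumed

    settings = OrderbookWatcherSettings(http_port=8080, update_interval=30)
    print(settings.http_host)            # "0.0.0.0"
    print(settings.connection_timeout)   # 30.0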

Ancestors

  • pydantic.main.BaseModel

Class variables

var connection_timeout : float

var http_host : str

var http_port : int

var max_message_size : int

var mempool_api_url : str

var mempool_web_url : str | None

var model_config

var update_interval : int

var uptime_grace_period : int

class ParsedTransaction (*args: Any, **kwargs: Any)
Expand source code
@dataclass
class ParsedTransaction:
    """Parsed Bitcoin transaction."""

    version: int
    inputs: list[dict[str, Any]]
    outputs: list[dict[str, Any]]
    witnesses: list[list[bytes]]
    locktime: int
    has_witness: bool

Parsed Bitcoin transaction.
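
An illustrative instance (the dict keys inside inputs and outputs are assumptions; real objects come from the library's transaction parsing code):

    tx = ParsedTransaction(
        version=2,
        inputs=[{"txid": "00" * 32, "vout": 0}],           # key names are illustrative
        outputs=[{"value": 100_000, "scriptpubkey": b""}],  # key names are illustrative
        witnesses=[[]],
        locktime=0,
        has_witness=False,
    )
    print(len(tx.inputs), len(tx.outputs))  # 1 1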

Instance variables

var has_witness : bool

var inputs : list[dict[str, typing.Any]]

var locktime : int

var outputs : list[dict[str, typing.Any]]

var version : int

var witnesses : list[list[bytes]]

class PeerInfo (**data: Any)
Expand source code
class PeerInfo(BaseModel):
    nick: str = Field(..., min_length=1, max_length=64)
    onion_address: str = Field(..., pattern=r"^[a-z2-7]{56}\.onion$|^NOT-SERVING-ONION$")
    port: int = Field(..., ge=-1, le=65535)
    status: PeerStatus = PeerStatus.UNCONNECTED
    is_directory: bool = False
    network: NetworkType = NetworkType.MAINNET
    last_seen: datetime | None = None
    features: dict[str, Any] = Field(default_factory=dict)
    protocol_version: int = Field(default=5, ge=5, le=10)  # Negotiated protocol version
    neutrino_compat: bool = False  # True if peer supports extended UTXO metadata

    @field_validator("onion_address")
    @classmethod
    def validate_onion(cls, v: str) -> str:
        if v == "NOT-SERVING-ONION":
            return v
        if not v.endswith(".onion"):
            raise ValueError("Invalid onion address")
        return v

    @field_validator("port")
    @classmethod
    def validate_port(cls, v: int, info) -> int:
        if v == -1 and info.data.get("onion_address") == "NOT-SERVING-ONION":
            return v
        if v < 1 or v > 65535:
            raise ValueError("Port must be between 1 and 65535")
        return v

    @cached_property
    def location_string(self) -> str:
        if self.onion_address == "NOT-SERVING-ONION":
            return "NOT-SERVING-ONION"
        return f"{self.onion_address}:{self.port}"

    def supports_extended_utxo(self) -> bool:
        """Check if this peer supports extended UTXO format (neutrino_compat)."""
        # With feature-based detection, we check the neutrino_compat flag
        # which is set from the features dict during handshake
        return self.neutrino_compat

    model_config = {"frozen": False}

Information about a JoinMarket peer: nick, onion location, port, connection status, network, last-seen time, and negotiated protocol features.

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Ancestors

  • pydantic.main.BaseModel

Class variables

var features : dict[str, typing.Any]

var is_directory : bool

var last_seen : datetime.datetime | None

var model_config

var network : NetworkType

var neutrino_compat : bool

var nick : str

var onion_address : str

var port : int

var protocol_version : int

var status : PeerStatus

Static methods

def validate_onion(v: str) ‑> str
def validate_port(v: int, info) ‑> int

Instance variables

var location_string : str
Expand source code
@cached_property
def location_string(self) -> str:
    if self.onion_address == "NOT-SERVING-ONION":
        return "NOT-SERVING-ONION"
    return f"{self.onion_address}:{self.port}"

Methods

def supports_extended_utxo(self) ‑> bool
Expand source code
def supports_extended_utxo(self) -> bool:
    """Check if this peer supports extended UTXO format (neutrino_compat)."""
    # With feature-based detection, we check the neutrino_compat flag
    # which is set from the features dict during handshake
    return self.neutrino_compat

Check if this peer supports extended UTXO format (neutrino_compat).
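
A construction sketch using the validators and helpers above (nick and onion address values are placeholders):

    peer = PeerInfo(
        nick="J5examplenick",
        onion_address="a" * 56 + ".onion",  # must match the 56-character v3 onion pattern
        port=5222,
    )
    print(peer.location_string)            # "aaa...a.onion:5222"
    print(peer.supports_extended_utxo())   # False until neutrino_compat is negotiated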

class PoDLECommitment (*args: Any, **kwargs: Any)
Expand source code
@dataclass
class PoDLECommitment:
    """PoDLE commitment data generated by taker."""

    commitment: bytes  # H(P2) - 32 bytes
    p: bytes  # Public key P = k*G - 33 bytes compressed
    p2: bytes  # Commitment point P2 = k*J - 33 bytes compressed
    sig: bytes  # Schnorr signature s - 32 bytes
    e: bytes  # Challenge e - 32 bytes
    utxo: str  # UTXO reference "txid:vout"
    index: int  # NUMS point index used

    def to_revelation(self) -> dict[str, str]:
        """Convert to revelation format for sending to maker."""
        return {
            "P": self.p.hex(),
            "P2": self.p2.hex(),
            "sig": self.sig.hex(),
            "e": self.e.hex(),
            "utxo": self.utxo,
        }

    def to_commitment_str(self) -> str:
        """
        Get commitment as string with type prefix.

        JoinMarket requires a commitment type prefix to allow future
        commitment schemes. "P" indicates a standard PoDLE commitment.
        Format: "P" + hex(commitment)
        """
        return "P" + self.commitment.hex()

PoDLE commitment data generated by taker.

Instance variables

var commitment : bytes

H(P2) commitment hash (32 bytes).

var e : bytes

Challenge e (32 bytes).

var index : int

NUMS point index used.

var p : bytes

Public key P = k*G (33 bytes, compressed).

var p2 : bytes

Commitment point P2 = k*J (33 bytes, compressed).

var sig : bytes

Schnorr signature s (32 bytes).

var utxo : str

UTXO reference "txid:vout".

Methods

def to_commitment_str(self) ‑> str
Expand source code
def to_commitment_str(self) -> str:
    """
    Get commitment as string with type prefix.

    JoinMarket requires a commitment type prefix to allow future
    commitment schemes. "P" indicates a standard PoDLE commitment.
    Format: "P" + hex(commitment)
    """
    return "P" + self.commitment.hex()

Get commitment as string with type prefix.

JoinMarket requires a commitment type prefix to allow future commitment schemes. "P" indicates a standard PoDLE commitment. Format: "P" + hex(commitment)
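
For example, with placeholder byte values (real values come from PoDLE generation):

    c = PoDLECommitment(
        commitment=bytes(32),
        p=bytes(33),
        p2=bytes(33),
        sig=bytes(32),
        e=bytes(32),
        utxo="ff" * 32 + ":0",
        index=0,
    )
    assert c.to_commitment_str() == "P" + "00" * 32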

def to_revelation(self) ‑> dict[str, str]
Expand source code
def to_revelation(self) -> dict[str, str]:
    """Convert to revelation format for sending to maker."""
    return {
        "P": self.p.hex(),
        "P2": self.p2.hex(),
        "sig": self.sig.hex(),
        "e": self.e.hex(),
        "utxo": self.utxo,
    }

Convert to revelation format for sending to maker.

class PoDLEError (*args, **kwargs)
Expand source code
class PoDLEError(Exception):
    """PoDLE generation or verification error."""

    pass

PoDLE generation or verification error.

Ancestors

  • builtins.Exception
  • builtins.BaseException
class ProtocolMessage (**data: Any)
Expand source code
class ProtocolMessage(BaseModel):
    type: MessageType
    payload: dict[str, Any]

    def to_json(self) -> str:
        return json.dumps({"type": self.type.value, "data": self.payload})

    @classmethod
    def from_json(cls, data: str) -> ProtocolMessage:
        obj = json.loads(data)
        return cls(type=MessageType(obj["type"]), payload=obj["data"])

    def to_bytes(self) -> bytes:
        return self.to_json().encode("utf-8")

    @classmethod
    def from_bytes(cls, data: bytes) -> ProtocolMessage:
        return cls.from_json(data.decode("utf-8"))

A JoinMarket protocol message: a MessageType plus a JSON-serializable payload dict, with helpers for converting to and from JSON strings and UTF-8 bytes.

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Ancestors

  • pydantic.main.BaseModel

Class variables

var model_config

var payload : dict[str, typing.Any]

var type : MessageType

Static methods

def from_bytes(data: bytes) ‑> ProtocolMessage
def from_json(data: str) ‑> ProtocolMessage

Methods

def to_bytes(self) ‑> bytes
Expand source code
def to_bytes(self) -> bytes:
    return self.to_json().encode("utf-8")
def to_json(self) ‑> str
Expand source code
def to_json(self) -> str:
    return json.dumps({"type": self.type.value, "data": self.payload})
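
A round-trip sketch (the MessageType member name is assumed for illustration):

    msg = ProtocolMessage(
        type=MessageType.HANDSHAKE,       # member name assumed
        payload={"app-name": "joinmarket"},
    )
    wire = msg.to_bytes()                 # JSON object encoded as UTF-8 bytes
    assert ProtocolMessage.from_bytes(wire) == msg
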
class RateLimiter (rate_limit: int = 10,
burst_limit: int | None = None,
disconnect_threshold: int | None = None)
Expand source code
class RateLimiter:
    """
    Per-peer rate limiter using token bucket algorithm.

    Configuration:
    - rate_limit: messages per second (sustained rate)
    - burst_limit: maximum burst size (default: 10x rate_limit)
    - disconnect_threshold: violations before disconnect (default: None = never)

    Default settings (10 msg/sec sustained, 100 msg burst):
    - Allows ~10 seconds of continuous max-rate traffic before throttling
    - Prevents DoS from rapid spam while allowing legitimate burst patterns
    - Example: taker requesting orderbook from multiple makers simultaneously

    Security:
    - Rate limit by connection ID, not self-declared nick, to prevent impersonation
    - Nick spoofing attack: attacker claims victim's nick to get them rate limited
    - Use connection-based keys until identity is cryptographically verified
    """

    @validate_call
    def __init__(
        self,
        rate_limit: int = 10,
        burst_limit: int | None = None,
        disconnect_threshold: int | None = None,
    ):
        """
        Initialize rate limiter.

        Args:
            rate_limit: Maximum messages per second (sustained, default: 10)
            burst_limit: Maximum burst size (default: 10x rate_limit = 100)
            disconnect_threshold: Violations before disconnect (None = never disconnect)
        """
        self.rate_limit = rate_limit
        self.burst_limit = burst_limit or (rate_limit * 10)
        self.disconnect_threshold = disconnect_threshold
        self._buckets: dict[str, TokenBucket] = {}
        self._violation_counts: dict[str, int] = {}

    def check(self, peer_key: str) -> tuple[RateLimitAction, float]:
        """
        Check rate limit and return recommended action.

        Returns:
            Tuple of (action, delay_seconds):
            - ALLOW: Message allowed, delay=0
            - DELAY: Message should be delayed/dropped, delay=recommended wait time
            - DISCONNECT: Peer should be disconnected (severe abuse), delay=0
        """
        if peer_key not in self._buckets:
            self._buckets[peer_key] = TokenBucket(
                capacity=self.burst_limit,
                refill_rate=float(self.rate_limit),
            )

        bucket = self._buckets[peer_key]
        allowed = bucket.consume()

        if allowed:
            return (RateLimitAction.ALLOW, 0.0)

        # Rate limited - increment violation count
        self._violation_counts[peer_key] = self._violation_counts.get(peer_key, 0) + 1
        violations = self._violation_counts[peer_key]

        # Check if we should disconnect (only if threshold is set)
        if self.disconnect_threshold is not None and violations >= self.disconnect_threshold:
            return (RateLimitAction.DISCONNECT, 0.0)

        # Otherwise, recommend delay
        delay = bucket.get_delay_seconds()
        return (RateLimitAction.DELAY, delay)

    def remove_peer(self, peer_key: str) -> None:
        """Remove rate limit state for a disconnected peer."""
        self._buckets.pop(peer_key, None)
        self._violation_counts.pop(peer_key, None)

    def get_violation_count(self, peer_key: str) -> int:
        """Get the number of rate limit violations for a peer."""
        return self._violation_counts.get(peer_key, 0)

    def get_delay_for_peer(self, peer_key: str) -> float:
        """Get recommended delay in seconds for a rate-limited peer."""
        bucket = self._buckets.get(peer_key)
        if bucket is None:
            return 0.0
        return bucket.get_delay_seconds()

    def cleanup_old_peers(self, max_idle_seconds: float = 3600.0) -> int:
        """
        Remove peers that haven't sent messages in max_idle_seconds.

        Returns the number of peers removed.
        """
        now = time.monotonic()
        stale_peers = [
            peer_key
            for peer_key, bucket in self._buckets.items()
            if now - bucket.last_refill > max_idle_seconds
        ]

        for peer_key in stale_peers:
            self.remove_peer(peer_key)

        return len(stale_peers)

    def get_stats(self) -> dict:
        """Get rate limiter statistics."""
        return {
            "tracked_peers": len(self._buckets),
            "total_violations": sum(self._violation_counts.values()),
            "top_violators": sorted(
                self._violation_counts.items(),
                key=lambda x: x[1],
                reverse=True,
            )[:10],
        }

    def clear(self) -> None:
        """Clear all rate limit state."""
        self._buckets.clear()
        self._violation_counts.clear()

Per-peer rate limiter using token bucket algorithm.

Configuration:

- rate_limit: messages per second (sustained rate)
- burst_limit: maximum burst size (default: 10x rate_limit)
- disconnect_threshold: violations before disconnect (default: None = never)

Default settings (10 msg/sec sustained, 100 msg burst):

- Allows ~10 seconds of continuous max-rate traffic before throttling
- Prevents DoS from rapid spam while allowing legitimate burst patterns
- Example: taker requesting orderbook from multiple makers simultaneously

Security:

- Rate limit by connection ID, not self-declared nick, to prevent impersonation
- Nick spoofing attack: attacker claims victim's nick to get them rate limited
- Use connection-based keys until identity is cryptographically verified

Initialize rate limiter.

Args

rate_limit
Maximum messages per second (sustained, default: 10)
burst_limit
Maximum burst size (default: 10x rate_limit = 100)
disconnect_threshold
Violations before disconnect (None = never disconnect)
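
A sketch of how a server loop might consume check() (the handler wrapper is illustrative; RateLimitAction members are taken from the docstring above):

    import asyncio

    limiter = RateLimiter(rate_limit=10, burst_limit=100, disconnect_threshold=50)

    async def handle_message(connection_id: str, raw: bytes) -> bool:
        # Key by connection ID, not by self-declared nick (see Security above).
        action, delay = limiter.check(connection_id)
        if action is RateLimitAction.DISCONNECT:
            return False                 # caller should drop the connection
        if action is RateLimitAction.DELAY:
            await asyncio.sleep(delay)   # or drop the message instead of waiting
        # ... process the message ...
        return True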

Methods

def check(self, peer_key: str) ‑> tuple[RateLimitAction, float]
Expand source code
def check(self, peer_key: str) -> tuple[RateLimitAction, float]:
    """
    Check rate limit and return recommended action.

    Returns:
        Tuple of (action, delay_seconds):
        - ALLOW: Message allowed, delay=0
        - DELAY: Message should be delayed/dropped, delay=recommended wait time
        - DISCONNECT: Peer should be disconnected (severe abuse), delay=0
    """
    if peer_key not in self._buckets:
        self._buckets[peer_key] = TokenBucket(
            capacity=self.burst_limit,
            refill_rate=float(self.rate_limit),
        )

    bucket = self._buckets[peer_key]
    allowed = bucket.consume()

    if allowed:
        return (RateLimitAction.ALLOW, 0.0)

    # Rate limited - increment violation count
    self._violation_counts[peer_key] = self._violation_counts.get(peer_key, 0) + 1
    violations = self._violation_counts[peer_key]

    # Check if we should disconnect (only if threshold is set)
    if self.disconnect_threshold is not None and violations >= self.disconnect_threshold:
        return (RateLimitAction.DISCONNECT, 0.0)

    # Otherwise, recommend delay
    delay = bucket.get_delay_seconds()
    return (RateLimitAction.DELAY, delay)

Check rate limit and return recommended action.

Returns

Tuple of (action, delay_seconds):

- ALLOW: Message allowed, delay=0
- DELAY: Message should be delayed/dropped, delay=recommended wait time
- DISCONNECT: Peer should be disconnected (severe abuse), delay=0

def cleanup_old_peers(self, max_idle_seconds: float = 3600.0) ‑> int
Expand source code
def cleanup_old_peers(self, max_idle_seconds: float = 3600.0) -> int:
    """
    Remove peers that haven't sent messages in max_idle_seconds.

    Returns the number of peers removed.
    """
    now = time.monotonic()
    stale_peers = [
        peer_key
        for peer_key, bucket in self._buckets.items()
        if now - bucket.last_refill > max_idle_seconds
    ]

    for peer_key in stale_peers:
        self.remove_peer(peer_key)

    return len(stale_peers)

Remove peers that haven't sent messages in max_idle_seconds.

Returns the number of peers removed.

def clear(self) ‑> None
Expand source code
def clear(self) -> None:
    """Clear all rate limit state."""
    self._buckets.clear()
    self._violation_counts.clear()

Clear all rate limit state.

def get_delay_for_peer(self, peer_key: str) ‑> float
Expand source code
def get_delay_for_peer(self, peer_key: str) -> float:
    """Get recommended delay in seconds for a rate-limited peer."""
    bucket = self._buckets.get(peer_key)
    if bucket is None:
        return 0.0
    return bucket.get_delay_seconds()

Get recommended delay in seconds for a rate-limited peer.

def get_stats(self) ‑> dict
Expand source code
def get_stats(self) -> dict:
    """Get rate limiter statistics."""
    return {
        "tracked_peers": len(self._buckets),
        "total_violations": sum(self._violation_counts.values()),
        "top_violators": sorted(
            self._violation_counts.items(),
            key=lambda x: x[1],
            reverse=True,
        )[:10],
    }

Get rate limiter statistics.

def get_violation_count(self, peer_key: str) ‑> int
Expand source code
def get_violation_count(self, peer_key: str) -> int:
    """Get the number of rate limit violations for a peer."""
    return self._violation_counts.get(peer_key, 0)

Get the number of rate limit violations for a peer.

def remove_peer(self, peer_key: str) ‑> None
Expand source code
def remove_peer(self, peer_key: str) -> None:
    """Remove rate limit state for a disconnected peer."""
    self._buckets.pop(peer_key, None)
    self._violation_counts.pop(peer_key, None)

Remove rate limit state for a disconnected peer.

class RequiredFeatures (*args: Any, **kwargs: Any)
Expand source code
@dataclass
class RequiredFeatures:
    """
    Features that this peer requires from counterparties.

    Used to filter incompatible peers during maker selection.
    """

    required: set[str] = Field(default_factory=set)

    @classmethod
    def for_neutrino_taker(cls) -> RequiredFeatures:
        """Create requirements for a taker using Neutrino backend."""
        return cls(required={FEATURE_NEUTRINO_COMPAT})

    @classmethod
    def none(cls) -> RequiredFeatures:
        """No required features."""
        return cls(required=set())

    def is_compatible(self, peer_features: FeatureSet) -> tuple[bool, str]:
        """Check if peer supports all required features."""
        missing = self.required - peer_features.features
        if missing:
            return False, f"Missing required features: {missing}"
        return True, ""

    def __bool__(self) -> bool:
        return bool(self.required)

Features that this peer requires from counterparties.

Used to filter incompatible peers during maker selection.
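
A hypothetical filtering sketch: the candidate_makers mapping is illustrative, and only the FeatureSet.features attribute (a set of strings, as used by is_compatible) is assumed here:

required = RequiredFeatures.for_neutrino_taker()

compatible = []
for nick, peer_features in candidate_makers.items():  # hypothetical dict of nick -> FeatureSet
    ok, reason = required.is_compatible(peer_features)
    if ok:
        compatible.append(nick)
    else:
        print(f"Skipping {nick}: {reason}")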

Static methods

def for_neutrino_taker() ‑> RequiredFeatures

Create requirements for a taker using Neutrino backend.

def none() ‑> RequiredFeatures

No required features.

Instance variables

var required : set[str]

Feature identifiers that counterparties must support.

Methods

def is_compatible(self,
peer_features: FeatureSet) ‑> tuple[bool, str]
Expand source code
def is_compatible(self, peer_features: FeatureSet) -> tuple[bool, str]:
    """Check if peer supports all required features."""
    missing = self.required - peer_features.features
    if missing:
        return False, f"Missing required features: {missing}"
    return True, ""

Check if peer supports all required features.

class ResponseDeduplicator
Expand source code
class ResponseDeduplicator:
    """
    Specialized deduplicator for taker response collection.

    When a taker sends requests to makers via multiple directory servers,
    it may receive duplicate responses. This class helps collect unique
    responses while tracking duplicates.

    Unlike MessageDeduplicator which uses time-based expiry, this class
    is designed for short-lived request-response cycles and requires
    explicit reset between rounds.

    Example:
        >>> dedup = ResponseDeduplicator()
        >>> # Collect pubkey responses from makers
        >>> dedup.add_response("maker1", "pubkey", pubkey_data, "dir1")
        True  # First response
        >>> dedup.add_response("maker1", "pubkey", pubkey_data, "dir2")
        False  # Duplicate
        >>> responses = dedup.get_responses("pubkey")
        >>> len(responses)
        1
    """

    @dataclass
    class ResponseEntry:
        """A collected response."""

        nick: str
        data: object
        source: str
        timestamp: float = field(default_factory=time.monotonic)
        duplicate_count: int = 0

    def __init__(self) -> None:
        """Initialize response deduplicator."""
        # command -> nick -> ResponseEntry
        self._responses: dict[str, dict[str, ResponseDeduplicator.ResponseEntry]] = {}
        self._stats = DeduplicationStats()

    def add_response(self, nick: str, command: str, data: object, source: str) -> bool:
        """
        Add a response, returning True if it's new (not a duplicate).

        Args:
            nick: The maker nick who sent the response
            command: Response type (pubkey, ioauth, sig, etc.)
            data: The response data
            source: Which directory server it came from

        Returns:
            True if this is a new response, False if duplicate
        """
        self._stats.total_processed += 1

        if command not in self._responses:
            self._responses[command] = {}

        if nick in self._responses[command]:
            # Duplicate response from same maker
            self._responses[command][nick].duplicate_count += 1
            self._stats.duplicates_dropped += 1
            return False

        # New response
        self._responses[command][nick] = self.ResponseEntry(nick=nick, data=data, source=source)
        self._stats.unique_messages += 1
        return True

    def get_responses(self, command: str) -> dict[str, ResponseEntry]:
        """
        Get all unique responses for a command type.

        Args:
            command: Response type to get

        Returns:
            Dict mapping nick -> ResponseEntry
        """
        return self._responses.get(command, {})

    def get_response_count(self, command: str) -> int:
        """Get number of unique responses for a command."""
        return len(self._responses.get(command, {}))

    def has_response(self, nick: str, command: str) -> bool:
        """Check if we have a response from a specific maker."""
        return nick in self._responses.get(command, {})

    @property
    def stats(self) -> DeduplicationStats:
        """Get deduplication statistics."""
        return self._stats

    def reset(self) -> None:
        """Clear all responses and reset stats for next round."""
        self._responses.clear()
        self._stats = DeduplicationStats()

    def reset_command(self, command: str) -> None:
        """Clear responses for a specific command type."""
        if command in self._responses:
            del self._responses[command]

Specialized deduplicator for taker response collection.

When a taker sends requests to makers via multiple directory servers, it may receive duplicate responses. This class helps collect unique responses while tracking duplicates.

Unlike MessageDeduplicator which uses time-based expiry, this class is designed for short-lived request-response cycles and requires explicit reset between rounds.

Example

>>> dedup = ResponseDeduplicator()
>>> # Collect pubkey responses from makers
>>> dedup.add_response("maker1", "pubkey", pubkey_data, "dir1")
True  # First response
>>> dedup.add_response("maker1", "pubkey", pubkey_data, "dir2")
False  # Duplicate
>>> responses = dedup.get_responses("pubkey")
>>> len(responses)
1

Initialize response deduplicator.

Class variables

var ResponseEntry

A collected response.

Instance variables

prop stats : DeduplicationStats
Expand source code
@property
def stats(self) -> DeduplicationStats:
    """Get deduplication statistics."""
    return self._stats

Get deduplication statistics.

Methods

def add_response(self, nick: str, command: str, data: object, source: str) ‑> bool
Expand source code
def add_response(self, nick: str, command: str, data: object, source: str) -> bool:
    """
    Add a response, returning True if it's new (not a duplicate).

    Args:
        nick: The maker nick who sent the response
        command: Response type (pubkey, ioauth, sig, etc.)
        data: The response data
        source: Which directory server it came from

    Returns:
        True if this is a new response, False if duplicate
    """
    self._stats.total_processed += 1

    if command not in self._responses:
        self._responses[command] = {}

    if nick in self._responses[command]:
        # Duplicate response from same maker
        self._responses[command][nick].duplicate_count += 1
        self._stats.duplicates_dropped += 1
        return False

    # New response
    self._responses[command][nick] = self.ResponseEntry(nick=nick, data=data, source=source)
    self._stats.unique_messages += 1
    return True

Add a response, returning True if it's new (not a duplicate).

Args

nick
The maker nick who sent the response
command
Response type (pubkey, ioauth, sig, etc.)
data
The response data
source
Which directory server it came from

Returns

True if this is a new response, False if duplicate

def get_response_count(self, command: str) ‑> int
Expand source code
def get_response_count(self, command: str) -> int:
    """Get number of unique responses for a command."""
    return len(self._responses.get(command, {}))

Get number of unique responses for a command.

def get_responses(self, command: str) ‑> dict[str, ResponseEntry]
Expand source code
def get_responses(self, command: str) -> dict[str, ResponseEntry]:
    """
    Get all unique responses for a command type.

    Args:
        command: Response type to get

    Returns:
        Dict mapping nick -> ResponseEntry
    """
    return self._responses.get(command, {})

Get all unique responses for a command type.

Args

command
Response type to get

Returns

Dict mapping nick -> ResponseEntry

def has_response(self, nick: str, command: str) ‑> bool
Expand source code
def has_response(self, nick: str, command: str) -> bool:
    """Check if we have a response from a specific maker."""
    return nick in self._responses.get(command, {})

Check if we have a response from a specific maker.

def reset(self) ‑> None
Expand source code
def reset(self) -> None:
    """Clear all responses and reset stats for next round."""
    self._responses.clear()
    self._stats = DeduplicationStats()

Clear all responses and reset stats for next round.

def reset_command(self, command: str) ‑> None
Expand source code
def reset_command(self, command: str) -> None:
    """Clear responses for a specific command type."""
    if command in self._responses:
        del self._responses[command]

Clear responses for a specific command type.

class TakerSettings (**data: Any)
Expand source code
class TakerSettings(BaseModel):
    """Taker-specific settings."""

    counterparty_count: int = Field(
        default=10,
        ge=1,
        le=20,
        description="Number of makers to select for CoinJoin",
    )
    max_cj_fee_abs: int = Field(
        default=500,
        ge=0,
        description="Maximum absolute CoinJoin fee in satoshis",
    )
    max_cj_fee_rel: str = Field(
        default="0.001",
        description="Maximum relative CoinJoin fee (0.001 = 0.1%)",
    )
    tx_fee_factor: float = Field(
        default=3.0,
        ge=1.0,
        description="Multiply estimated fee by this factor",
    )
    fee_block_target: int | None = Field(
        default=None,
        ge=1,
        le=1008,
        description="Target blocks for fee estimation",
    )
    bondless_makers_allowance: float = Field(
        default=0.0,
        ge=0.0,
        le=1.0,
        description="Fraction of time to choose makers randomly",
    )
    bond_value_exponent: float = Field(
        default=1.3,
        gt=0.0,
        description="Exponent for fidelity bond value calculation",
    )
    bondless_require_zero_fee: bool = Field(
        default=True,
        description="Require zero absolute fee for bondless maker spots",
    )
    maker_timeout_sec: int = Field(
        default=60,
        ge=10,
        description="Timeout for maker responses",
    )
    order_wait_time: float = Field(
        default=120.0,
        ge=1.0,
        description=(
            "Seconds to wait for orderbook responses. Empirical testing shows 95th "
            "percentile response time over Tor is ~101s. Default 120s (with 20% buffer) "
            "captures ~95% of offers."
        ),
    )
    tx_broadcast: str = Field(
        default="random-peer",
        description="Broadcast policy: self, random-peer, multiple-peers, not-self",
    )
    broadcast_peer_count: int = Field(
        default=3,
        ge=1,
        description="Number of peers for multiple-peers broadcast",
    )
    minimum_makers: int = Field(
        default=1,
        ge=1,
        description="Minimum number of makers required",
    )
    rescan_interval_sec: int = Field(
        default=600,
        ge=60,
        description="Interval for periodic wallet rescans",
    )

Taker-specific settings.

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.
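
As an illustrative sketch, fields can be overridden at construction and the rest keep their validated defaults:

settings = TakerSettings(
    counterparty_count=8,       # select 8 makers (allowed range 1-20)
    max_cj_fee_abs=300,         # cap the absolute CoinJoin fee at 300 sats
    max_cj_fee_rel="0.0005",    # cap the relative fee at 0.05%
    maker_timeout_sec=90,
)
print(settings.order_wait_time)  # 120.0 - untouched fields keep their defaults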

Ancestors

  • pydantic.main.BaseModel

Class variables

var bond_value_exponent : float

Exponent for fidelity bond value calculation.

var bondless_makers_allowance : float

Fraction of time to choose makers randomly.

var bondless_require_zero_fee : bool

Require zero absolute fee for bondless maker spots.

var broadcast_peer_count : int

Number of peers for multiple-peers broadcast.

var counterparty_count : int

Number of makers to select for CoinJoin.

var fee_block_target : int | None

Target blocks for fee estimation.

var maker_timeout_sec : int

Timeout for maker responses.

var max_cj_fee_abs : int

Maximum absolute CoinJoin fee in satoshis.

var max_cj_fee_rel : str

Maximum relative CoinJoin fee (0.001 = 0.1%).

var minimum_makers : int

Minimum number of makers required.

var model_config

Pydantic model configuration.

var order_wait_time : float

Seconds to wait for orderbook responses. Default 120s captures ~95% of offers over Tor.

var rescan_interval_sec : int

Interval for periodic wallet rescans.

var tx_broadcast : str

Broadcast policy: self, random-peer, multiple-peers, not-self.

var tx_fee_factor : float

Multiply estimated fee by this factor.

class TokenBucket (*args: Any, **kwargs: Any)
Expand source code
@dataclass
class TokenBucket:
    """
    Token bucket for rate limiting.

    Tokens are added at a fixed rate up to a maximum capacity.
    Each message consumes one token. If no tokens are available,
    the message is rejected.
    """

    capacity: int  # Maximum tokens (burst allowance)
    refill_rate: float  # Tokens per second
    tokens: float = Field(init=False)
    last_refill: float = Field(init=False)

    def __post_init__(self) -> None:
        self.tokens = float(self.capacity)
        self.last_refill = time.monotonic()

    def consume(self, tokens: int = 1) -> bool:
        """
        Try to consume tokens. Returns True if successful, False if rate limited.
        """
        now = time.monotonic()
        elapsed = now - self.last_refill
        self.last_refill = now

        # Refill tokens based on elapsed time
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)

        if self.tokens >= tokens:
            self.tokens -= tokens
            return True
        return False

    def get_delay_seconds(self) -> float:
        """
        Get recommended delay in seconds before next message would be allowed.
        Returns 0 if tokens are available.
        """
        if self.tokens >= 1:
            return 0.0
        # Calculate time needed to refill 1 token
        tokens_needed = 1 - self.tokens
        return tokens_needed / self.refill_rate

    def reset(self) -> None:
        """Reset bucket to full capacity."""
        self.tokens = float(self.capacity)
        self.last_refill = time.monotonic()

Token bucket for rate limiting.

Tokens are added at a fixed rate up to a maximum capacity. Each message consumes one token. If no tokens are available, the message is rejected.
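
A small worked example with the defaults used by the rate limiter (capacity=100, refill_rate=10.0); the exact printed delay depends on timing, so treat the numbers as approximate:

bucket = TokenBucket(capacity=100, refill_rate=10.0)

# A rapid burst of 100 messages is absorbed by the bucket's capacity.
allowed = sum(bucket.consume() for _ in range(100))
print(allowed)                               # 100, assuming the burst takes well under 0.1 s

# The bucket is now (nearly) empty: the next message is rejected and the
# recommended wait is roughly 1 / refill_rate = 0.1 s for one token to refill.
print(bucket.consume())                      # False
print(round(bucket.get_delay_seconds(), 2))  # ~0.1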

Instance variables

var capacity : int

Maximum tokens (burst allowance).

var last_refill : float

Monotonic timestamp of the last refill.

var refill_rate : float

Tokens added per second.

var tokens : float

Current number of available tokens.

Methods

def consume(self, tokens: int = 1) ‑> bool
Expand source code
def consume(self, tokens: int = 1) -> bool:
    """
    Try to consume tokens. Returns True if successful, False if rate limited.
    """
    now = time.monotonic()
    elapsed = now - self.last_refill
    self.last_refill = now

    # Refill tokens based on elapsed time
    self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)

    if self.tokens >= tokens:
        self.tokens -= tokens
        return True
    return False

Try to consume tokens. Returns True if successful, False if rate limited.

def get_delay_seconds(self) ‑> float
Expand source code
def get_delay_seconds(self) -> float:
    """
    Get recommended delay in seconds before next message would be allowed.
    Returns 0 if tokens are available.
    """
    if self.tokens >= 1:
        return 0.0
    # Calculate time needed to refill 1 token
    tokens_needed = 1 - self.tokens
    return tokens_needed / self.refill_rate

Get recommended delay in seconds before next message would be allowed. Returns 0 if tokens are available.

def reset(self) ‑> None
Expand source code
def reset(self) -> None:
    """Reset bucket to full capacity."""
    self.tokens = float(self.capacity)
    self.last_refill = time.monotonic()

Reset bucket to full capacity.

class TorAuthenticationError (*args, **kwargs)
Expand source code
class TorAuthenticationError(TorControlError):
    """Authentication with Tor control port failed."""

    pass

Authentication with Tor control port failed.

Ancestors

  • TorControlError
  • builtins.Exception
  • builtins.BaseException

class TorConfig (**data: Any)
Expand source code
class TorConfig(BaseModel):
    """
    Configuration for Tor SOCKS proxy connection.

    Used for outgoing connections to directory servers and peers.
    """

    socks_host: str = Field(default="127.0.0.1", description="Tor SOCKS5 proxy host address")
    socks_port: int = Field(default=9050, ge=1, le=65535, description="Tor SOCKS5 proxy port")

    model_config = {"frozen": False}

Configuration for Tor SOCKS proxy connection.

Used for outgoing connections to directory servers and peers.

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Ancestors

  • pydantic.main.BaseModel

Class variables

var model_config

Pydantic model configuration ({"frozen": False}).

var socks_host : str

Tor SOCKS5 proxy host address.

var socks_port : int

Tor SOCKS5 proxy port.

class TorControlClient (control_host: str = '127.0.0.1',
control_port: int = 9051,
cookie_path: str | Path | None = None,
password: str | None = None)
Expand source code
class TorControlClient:
    """
    Async client for Tor control protocol.

    Supports cookie authentication and ephemeral hidden service creation.
    The client maintains a persistent connection to control port.

    Example:
        async with TorControlClient() as client:
            hs = await client.create_ephemeral_hidden_service(
                ports=[(8765, "127.0.0.1:8765")]
            )
            print(f"Hidden service: {hs.onion_address}")
            # Service exists while connection is open
        # Service removed when context exits
    """

    def __init__(
        self,
        control_host: str = "127.0.0.1",
        control_port: int = 9051,
        cookie_path: str | Path | None = None,
        password: str | None = None,
    ):
        """
        Initialize Tor control client.

        Args:
            control_host: Tor control port host
            control_port: Tor control port number
            cookie_path: Path to cookie auth file (usually /var/lib/tor/control_auth_cookie)
            password: Optional password for HASHEDPASSWORD auth (not recommended)
        """
        self.control_host = control_host
        self.control_port = control_port
        self.cookie_path = Path(cookie_path) if cookie_path else None
        self.password = password

        self._reader: asyncio.StreamReader | None = None
        self._writer: asyncio.StreamWriter | None = None
        self._connected = False
        self._authenticated = False
        self._read_lock = asyncio.Lock()
        self._write_lock = asyncio.Lock()

        # Track created hidden services for cleanup
        self._hidden_services: list[EphemeralHiddenService] = []

    async def __aenter__(self) -> TorControlClient:
        """Async context manager entry - connect and authenticate."""
        await self.connect()
        await self.authenticate()
        return self

    async def __aexit__(
        self,
        exc_type: type[BaseException] | None,
        exc_val: BaseException | None,
        exc_tb: object,
    ) -> None:
        """Async context manager exit - close connection."""
        await self.close()

    async def connect(self) -> None:
        """Connect to Tor control port."""
        if self._connected:
            return

        try:
            logger.debug(f"Connecting to Tor control port {self.control_host}:{self.control_port}")
            self._reader, self._writer = await asyncio.wait_for(
                asyncio.open_connection(self.control_host, self.control_port),
                timeout=10.0,
            )
            self._connected = True
            logger.info(f"Connected to Tor control port at {self.control_host}:{self.control_port}")
        except TimeoutError as e:
            raise TorControlError(
                f"Timeout connecting to Tor control port at {self.control_host}:{self.control_port}"
            ) from e
        except OSError as e:
            raise TorControlError(
                f"Failed to connect to Tor control port at "
                f"{self.control_host}:{self.control_port}: {e}"
            ) from e

    async def close(self) -> None:
        """Close connection to Tor control port."""
        if not self._connected:
            return

        self._connected = False
        self._authenticated = False
        self._hidden_services.clear()

        if self._writer:
            try:
                self._writer.close()
                await self._writer.wait_closed()
            except Exception:
                pass
            self._writer = None
        self._reader = None

        logger.debug("Closed Tor control connection")

    async def _send_command(self, command: str) -> None:
        """Send a command to Tor control port."""
        if not self._connected or not self._writer:
            raise TorControlError("Not connected to Tor control port")

        async with self._write_lock:
            logger.trace(f"Tor control send: {command}")
            self._writer.write(f"{command}\r\n".encode())
            await self._writer.drain()

    async def _read_response(self) -> list[tuple[str, str, str]]:
        """
        Read response from Tor control port.

        Returns:
            List of (status_code, separator, message) tuples.
            Separator is '-' for multi-line, ' ' for last/single line, '+' for data.
        """
        if not self._connected or not self._reader:
            raise TorControlError("Not connected to Tor control port")

        responses: list[tuple[str, str, str]] = []

        async with self._read_lock:
            while True:
                try:
                    line = await asyncio.wait_for(self._reader.readline(), timeout=30.0)
                except TimeoutError as e:
                    raise TorControlError("Timeout reading from Tor control port") from e

                if not line:
                    raise TorControlError("Connection closed by Tor")

                line_str = line.decode("utf-8").rstrip("\r\n")
                logger.trace(f"Tor control recv: {line_str}")

                if len(line_str) < 4:
                    raise TorControlError(f"Invalid response format: {line_str}")

                status_code = line_str[:3]
                separator = line_str[3]
                message = line_str[4:]

                responses.append((status_code, separator, message))

                # Handle multi-line data responses (status+data)
                if separator == "+":
                    # Read until we see a line with just "."
                    data_lines: list[str] = []
                    while True:
                        data_line = await self._reader.readline()
                        data_str = data_line.decode("utf-8").rstrip("\r\n")
                        if data_str == ".":
                            break
                        data_lines.append(data_str)
                    # Store data as message content
                    responses[-1] = (status_code, separator, "\n".join(data_lines))

                # Single line or last line of multi-line response
                if separator == " ":
                    break

        return responses

    async def _command(self, command: str) -> list[tuple[str, str, str]]:
        """Send command and read response."""
        await self._send_command(command)
        return await self._read_response()

    def _check_success(
        self, responses: list[tuple[str, str, str]], expected_code: str = "250"
    ) -> None:
        """Check if response indicates success."""
        if not responses:
            raise TorControlError("Empty response from Tor")

        # Check the last response (final status)
        status_code, _, message = responses[-1]
        if status_code != expected_code:
            raise TorControlError(f"Tor command failed: {status_code} {message}")

    async def authenticate(self) -> None:
        """
        Authenticate with Tor control port.

        Tries cookie authentication first if cookie_path is set,
        then falls back to password if provided.
        """
        if self._authenticated:
            return

        if not self._connected:
            await self.connect()

        # Try cookie authentication
        if self.cookie_path:
            await self._authenticate_cookie()
            return

        # Try password authentication
        if self.password:
            await self._authenticate_password()
            return

        # Try null authentication (for permissive configs)
        try:
            responses = await self._command("AUTHENTICATE")
            self._check_success(responses)
            self._authenticated = True
            logger.info("Authenticated with Tor (null auth)")
        except TorControlError as e:
            raise TorAuthenticationError(
                "No authentication method configured. Provide cookie_path or password."
            ) from e

    async def _authenticate_cookie(self) -> None:
        """Authenticate using cookie file."""
        if not self.cookie_path:
            raise TorAuthenticationError("Cookie path not configured")

        try:
            cookie_data = self.cookie_path.read_bytes()
            cookie_hex = cookie_data.hex()
        except FileNotFoundError as e:
            raise TorAuthenticationError(f"Cookie file not found: {self.cookie_path}") from e
        except PermissionError as e:
            raise TorAuthenticationError(
                f"Permission denied reading cookie file: {self.cookie_path}"
            ) from e

        try:
            responses = await self._command(f"AUTHENTICATE {cookie_hex}")
            self._check_success(responses)
            self._authenticated = True
            logger.info("Authenticated with Tor using cookie")
        except TorControlError as e:
            raise TorAuthenticationError(f"Cookie authentication failed: {e}") from e

    async def _authenticate_password(self) -> None:
        """Authenticate using password."""
        if not self.password:
            raise TorAuthenticationError("Password not configured")

        # Quote the password properly
        escaped_password = self.password.replace("\\", "\\\\").replace('"', '\\"')

        try:
            responses = await self._command(f'AUTHENTICATE "{escaped_password}"')
            self._check_success(responses)
            self._authenticated = True
            logger.info("Authenticated with Tor using password")
        except TorControlError as e:
            raise TorAuthenticationError(f"Password authentication failed: {e}") from e

    async def get_info(self, key: str) -> str:
        """
        Get information from Tor.

        Args:
            key: Information key (e.g., "version", "config-file")

        Returns:
            The requested information value
        """
        if not self._authenticated:
            raise TorControlError("Not authenticated")

        responses = await self._command(f"GETINFO {key}")
        self._check_success(responses)

        # Parse key=value from first response
        for status, _, message in responses:
            if status == "250" and "=" in message:
                _, value = message.split("=", 1)
                return value

        raise TorControlError(f"Could not parse GETINFO response for {key}")

    async def create_ephemeral_hidden_service(
        self,
        ports: list[tuple[int, str]],
        key_type: str = "NEW",
        key_blob: str = "ED25519-V3",
        discard_pk: bool = False,
        detach: bool = False,
        await_publication: bool = False,
        max_streams: int | None = None,
    ) -> EphemeralHiddenService:
        """
        Create an ephemeral hidden service using ADD_ONION.

        Ephemeral services exist only while the control connection is open.
        When the connection closes, the hidden service is automatically removed.

        Args:
            ports: List of (virtual_port, target) tuples.
                   Target is "host:port" or just "port" for localhost.
            key_type: "NEW" for new key, "ED25519-V3" or "RSA1024" for existing key
            key_blob: For NEW: "ED25519-V3" (recommended) or "RSA1024"
                      For existing: base64-encoded private key
            discard_pk: If True, don't return the private key
            detach: If True, service persists after control connection closes
            await_publication: If True, wait for HS descriptor to be published
            max_streams: Maximum concurrent streams (None for unlimited)

        Returns:
            EphemeralHiddenService with the created service details

        Example:
            # Create service that forwards port 80 to local 8080
            hs = await client.create_ephemeral_hidden_service(
                ports=[(80, "127.0.0.1:8080")]
            )
        """
        if not self._authenticated:
            raise TorControlError("Not authenticated")

        # Build port specifications
        port_specs = []
        for virtual_port, target in ports:
            port_specs.append(f"Port={virtual_port},{target}")

        # Build flags
        flags = []
        if discard_pk:
            flags.append("DiscardPK")
        if detach:
            flags.append("Detach")
        if await_publication:
            flags.append("AwaitPublication")

        # Build command
        cmd_parts = [f"ADD_ONION {key_type}:{key_blob}"]
        cmd_parts.extend(port_specs)

        if flags:
            cmd_parts.append(f"Flags={','.join(flags)}")

        if max_streams is not None:
            cmd_parts.append(f"MaxStreams={max_streams}")

        command = " ".join(cmd_parts)

        try:
            responses = await self._command(command)
            self._check_success(responses)
        except TorControlError as e:
            raise TorHiddenServiceError(f"Failed to create hidden service: {e}") from e

        # Parse response to get service ID and optional private key
        service_id: str | None = None
        private_key: str | None = None

        for status, _, message in responses:
            if status == "250":
                if message.startswith("ServiceID="):
                    service_id = message.split("=", 1)[1]
                elif message.startswith("PrivateKey="):
                    private_key = message.split("=", 1)[1]

        if not service_id:
            raise TorHiddenServiceError("No ServiceID in ADD_ONION response")

        hs = EphemeralHiddenService(
            service_id=service_id,
            private_key=private_key,
            ports=list(ports),
        )

        if not detach:
            self._hidden_services.append(hs)

        logger.info(f"Created ephemeral hidden service: {hs.onion_address}")
        return hs

    async def delete_ephemeral_hidden_service(self, service_id: str) -> None:
        """
        Delete an ephemeral hidden service.

        Args:
            service_id: The service ID (without .onion suffix)
        """
        if not self._authenticated:
            raise TorControlError("Not authenticated")

        # Strip .onion if included
        if service_id.endswith(".onion"):
            service_id = service_id[:-6]

        try:
            responses = await self._command(f"DEL_ONION {service_id}")
            self._check_success(responses)
            logger.info(f"Deleted hidden service: {service_id}")
        except TorControlError as e:
            raise TorHiddenServiceError(f"Failed to delete hidden service: {e}") from e

        # Remove from tracking
        self._hidden_services = [hs for hs in self._hidden_services if hs.service_id != service_id]

    async def get_version(self) -> str:
        """Get Tor version string."""
        return await self.get_info("version")

    @property
    def is_connected(self) -> bool:
        """Check if connected to control port."""
        return self._connected

    @property
    def is_authenticated(self) -> bool:
        """Check if authenticated."""
        return self._authenticated

    @property
    def hidden_services(self) -> list[EphemeralHiddenService]:
        """Get list of active ephemeral hidden services created by this client."""
        return list(self._hidden_services)

Async client for Tor control protocol.

Supports cookie authentication and ephemeral hidden service creation. The client maintains a persistent connection to the control port.

Example

async with TorControlClient() as client:
    hs = await client.create_ephemeral_hidden_service(
        ports=[(8765, "127.0.0.1:8765")]
    )
    print(f"Hidden service: {hs.onion_address}")
    # Service exists while connection is open
# Service removed when context exits

Initialize Tor control client.

Args

control_host
Tor control port host
control_port
Tor control port number
cookie_path
Path to cookie auth file (usually /var/lib/tor/control_auth_cookie)
password
Optional password for HASHEDPASSWORD auth (not recommended)

Instance variables

prop hidden_services : list[EphemeralHiddenService]
Expand source code
@property
def hidden_services(self) -> list[EphemeralHiddenService]:
    """Get list of active ephemeral hidden services created by this client."""
    return list(self._hidden_services)

Get list of active ephemeral hidden services created by this client.

prop is_authenticated : bool
Expand source code
@property
def is_authenticated(self) -> bool:
    """Check if authenticated."""
    return self._authenticated

Check if authenticated.

prop is_connected : bool
Expand source code
@property
def is_connected(self) -> bool:
    """Check if connected to control port."""
    return self._connected

Check if connected to control port.

Methods

async def authenticate(self) ‑> None
Expand source code
async def authenticate(self) -> None:
    """
    Authenticate with Tor control port.

    Tries cookie authentication first if cookie_path is set,
    then falls back to password if provided.
    """
    if self._authenticated:
        return

    if not self._connected:
        await self.connect()

    # Try cookie authentication
    if self.cookie_path:
        await self._authenticate_cookie()
        return

    # Try password authentication
    if self.password:
        await self._authenticate_password()
        return

    # Try null authentication (for permissive configs)
    try:
        responses = await self._command("AUTHENTICATE")
        self._check_success(responses)
        self._authenticated = True
        logger.info("Authenticated with Tor (null auth)")
    except TorControlError as e:
        raise TorAuthenticationError(
            "No authentication method configured. Provide cookie_path or password."
        ) from e

Authenticate with Tor control port.

Tries cookie authentication first if cookie_path is set, then falls back to password if provided.
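
A cookie-auth sketch outside the context-manager form (the cookie path is the common default mentioned in the constructor docs and may differ on your system; run this from within a coroutine):

async def tor_version() -> str:
    client = TorControlClient(cookie_path="/var/lib/tor/control_auth_cookie")
    await client.connect()
    await client.authenticate()  # cookie auth is tried first because cookie_path is set
    try:
        return await client.get_version()
    finally:
        await client.close()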

async def close(self) ‑> None
Expand source code
async def close(self) -> None:
    """Close connection to Tor control port."""
    if not self._connected:
        return

    self._connected = False
    self._authenticated = False
    self._hidden_services.clear()

    if self._writer:
        try:
            self._writer.close()
            await self._writer.wait_closed()
        except Exception:
            pass
        self._writer = None
    self._reader = None

    logger.debug("Closed Tor control connection")

Close connection to Tor control port.

async def connect(self) ‑> None
Expand source code
async def connect(self) -> None:
    """Connect to Tor control port."""
    if self._connected:
        return

    try:
        logger.debug(f"Connecting to Tor control port {self.control_host}:{self.control_port}")
        self._reader, self._writer = await asyncio.wait_for(
            asyncio.open_connection(self.control_host, self.control_port),
            timeout=10.0,
        )
        self._connected = True
        logger.info(f"Connected to Tor control port at {self.control_host}:{self.control_port}")
    except TimeoutError as e:
        raise TorControlError(
            f"Timeout connecting to Tor control port at {self.control_host}:{self.control_port}"
        ) from e
    except OSError as e:
        raise TorControlError(
            f"Failed to connect to Tor control port at "
            f"{self.control_host}:{self.control_port}: {e}"
        ) from e

Connect to Tor control port.

async def create_ephemeral_hidden_service(self,
ports: list[tuple[int, str]],
key_type: str = 'NEW',
key_blob: str = 'ED25519-V3',
discard_pk: bool = False,
detach: bool = False,
await_publication: bool = False,
max_streams: int | None = None) ‑> EphemeralHiddenService
Expand source code
async def create_ephemeral_hidden_service(
    self,
    ports: list[tuple[int, str]],
    key_type: str = "NEW",
    key_blob: str = "ED25519-V3",
    discard_pk: bool = False,
    detach: bool = False,
    await_publication: bool = False,
    max_streams: int | None = None,
) -> EphemeralHiddenService:
    """
    Create an ephemeral hidden service using ADD_ONION.

    Ephemeral services exist only while the control connection is open.
    When the connection closes, the hidden service is automatically removed.

    Args:
        ports: List of (virtual_port, target) tuples.
               Target is "host:port" or just "port" for localhost.
        key_type: "NEW" for new key, "ED25519-V3" or "RSA1024" for existing key
        key_blob: For NEW: "ED25519-V3" (recommended) or "RSA1024"
                  For existing: base64-encoded private key
        discard_pk: If True, don't return the private key
        detach: If True, service persists after control connection closes
        await_publication: If True, wait for HS descriptor to be published
        max_streams: Maximum concurrent streams (None for unlimited)

    Returns:
        EphemeralHiddenService with the created service details

    Example:
        # Create service that forwards port 80 to local 8080
        hs = await client.create_ephemeral_hidden_service(
            ports=[(80, "127.0.0.1:8080")]
        )
    """
    if not self._authenticated:
        raise TorControlError("Not authenticated")

    # Build port specifications
    port_specs = []
    for virtual_port, target in ports:
        port_specs.append(f"Port={virtual_port},{target}")

    # Build flags
    flags = []
    if discard_pk:
        flags.append("DiscardPK")
    if detach:
        flags.append("Detach")
    if await_publication:
        flags.append("AwaitPublication")

    # Build command
    cmd_parts = [f"ADD_ONION {key_type}:{key_blob}"]
    cmd_parts.extend(port_specs)

    if flags:
        cmd_parts.append(f"Flags={','.join(flags)}")

    if max_streams is not None:
        cmd_parts.append(f"MaxStreams={max_streams}")

    command = " ".join(cmd_parts)

    try:
        responses = await self._command(command)
        self._check_success(responses)
    except TorControlError as e:
        raise TorHiddenServiceError(f"Failed to create hidden service: {e}") from e

    # Parse response to get service ID and optional private key
    service_id: str | None = None
    private_key: str | None = None

    for status, _, message in responses:
        if status == "250":
            if message.startswith("ServiceID="):
                service_id = message.split("=", 1)[1]
            elif message.startswith("PrivateKey="):
                private_key = message.split("=", 1)[1]

    if not service_id:
        raise TorHiddenServiceError("No ServiceID in ADD_ONION response")

    hs = EphemeralHiddenService(
        service_id=service_id,
        private_key=private_key,
        ports=list(ports),
    )

    if not detach:
        self._hidden_services.append(hs)

    logger.info(f"Created ephemeral hidden service: {hs.onion_address}")
    return hs

Create an ephemeral hidden service using ADD_ONION.

Ephemeral services exist only while the control connection is open. When the connection closes, the hidden service is automatically removed.

Args

ports
List of (virtual_port, target) tuples. Target is "host:port" or just "port" for localhost.
key_type
"NEW" for new key, "ED25519-V3" or "RSA1024" for existing key
key_blob
For NEW: "ED25519-V3" (recommended) or "RSA1024" For existing: base64-encoded private key
discard_pk
If True, don't return the private key
detach
If True, service persists after control connection closes
await_publication
If True, wait for HS descriptor to be published
max_streams
Maximum concurrent streams (None for unlimited)

Returns

EphemeralHiddenService with the created service details

Example

# Create service that forwards port 80 to local 8080
hs = await client.create_ephemeral_hidden_service(
    ports=[(80, "127.0.0.1:8080")]
)

async def delete_ephemeral_hidden_service(self, service_id: str) ‑> None
Expand source code
async def delete_ephemeral_hidden_service(self, service_id: str) -> None:
    """
    Delete an ephemeral hidden service.

    Args:
        service_id: The service ID (without .onion suffix)
    """
    if not self._authenticated:
        raise TorControlError("Not authenticated")

    # Strip .onion if included
    if service_id.endswith(".onion"):
        service_id = service_id[:-6]

    try:
        responses = await self._command(f"DEL_ONION {service_id}")
        self._check_success(responses)
        logger.info(f"Deleted hidden service: {service_id}")
    except TorControlError as e:
        raise TorHiddenServiceError(f"Failed to delete hidden service: {e}") from e

    # Remove from tracking
    self._hidden_services = [hs for hs in self._hidden_services if hs.service_id != service_id]

Delete an ephemeral hidden service.

Args

service_id
The service ID (without .onion suffix)
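
A short sketch pairing creation with explicit deletion; this is optional for non-detached services, which are removed automatically when the control connection closes (run inside a coroutine):

async with TorControlClient() as client:
    hs = await client.create_ephemeral_hidden_service(ports=[(80, "127.0.0.1:8080")])
    # ... use the service ...
    await client.delete_ephemeral_hidden_service(hs.service_id)
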
async def get_info(self, key: str) ‑> str
Expand source code
async def get_info(self, key: str) -> str:
    """
    Get information from Tor.

    Args:
        key: Information key (e.g., "version", "config-file")

    Returns:
        The requested information value
    """
    if not self._authenticated:
        raise TorControlError("Not authenticated")

    responses = await self._command(f"GETINFO {key}")
    self._check_success(responses)

    # Parse key=value from first response
    for status, _, message in responses:
        if status == "250" and "=" in message:
            _, value = message.split("=", 1)
            return value

    raise TorControlError(f"Could not parse GETINFO response for {key}")

Get information from Tor.

Args

key
Information key (e.g., "version", "config-file")

Returns

The requested information value

async def get_version(self) ‑> str
Expand source code
async def get_version(self) -> str:
    """Get Tor version string."""
    return await self.get_info("version")

Get Tor version string.

class TorControlConfig (**data: Any)
Expand source code
class TorControlConfig(BaseModel):
    """
    Configuration for Tor control port connection.

    When enabled, allows dynamic creation of ephemeral hidden services
    at startup using Tor's control port. This allows generating a new
    .onion address each time without needing to pre-configure the hidden
    service in torrc.

    Requires Tor to be configured with:
        ControlPort 127.0.0.1:9051
        CookieAuthentication 1
        CookieAuthFile /var/lib/tor/control_auth_cookie

    Environment variables (via pydantic-settings):
        TOR__CONTROL_HOST - Tor control host (default: 127.0.0.1)
        TOR__CONTROL_PORT - Tor control port (default: 9051)
        TOR__COOKIE_PATH - Cookie auth file path
        TOR__PASSWORD - Tor control password (not recommended)
    """

    enabled: bool = Field(default=True, description="Enable Tor control port integration")
    host: str = Field(default="127.0.0.1", description="Tor control port host")
    port: int = Field(default=9051, ge=1, le=65535, description="Tor control port")
    cookie_path: Path | None = Field(
        default=None,
        description="Path to Tor cookie auth file (e.g., /var/lib/tor/control_auth_cookie)",
    )
    password: SecretStr | None = Field(
        default=None,
        description="Password for HASHEDPASSWORD auth (not recommended, use cookie auth)",
    )

    model_config = {"frozen": False}

Configuration for Tor control port connection.

When enabled, allows dynamic creation of ephemeral hidden services at startup using Tor's control port. This allows generating a new .onion address each time without needing to pre-configure the hidden service in torrc.

Requires Tor to be configured with:

    ControlPort 127.0.0.1:9051
    CookieAuthentication 1
    CookieAuthFile /var/lib/tor/control_auth_cookie

Environment variables (via pydantic-settings):

    TOR__CONTROL_HOST - Tor control host (default: 127.0.0.1)
    TOR__CONTROL_PORT - Tor control port (default: 9051)
    TOR__COOKIE_PATH - Cookie auth file path
    TOR__PASSWORD - Tor control password (not recommended)
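
The same configuration can also be constructed directly; this sketch assumes TorControlConfig is imported from whichever jmcore module exposes it in your setup:

from pathlib import Path

config = TorControlConfig(
    enabled=True,
    host="127.0.0.1",
    port=9051,
    cookie_path=Path("/var/lib/tor/control_auth_cookie"),
)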

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Ancestors

  • pydantic.main.BaseModel

Class variables

var cookie_path : pathlib.Path | None

Path to Tor cookie auth file (e.g., /var/lib/tor/control_auth_cookie).

var enabled : bool

Enable Tor control port integration.

var host : str

Tor control port host.

var model_config

Pydantic model configuration ({"frozen": False}).

var password : pydantic.types.SecretStr | None

Password for HASHEDPASSWORD auth (not recommended, use cookie auth).

var port : int

Tor control port.

class TorControlError (*args, **kwargs)
Expand source code
class TorControlError(Exception):
    """Base exception for Tor control errors."""

    pass

Base exception for Tor control errors.

Ancestors

  • builtins.Exception
  • builtins.BaseException

Subclasses

  • TorAuthenticationError
  • TorHiddenServiceError

class TorHiddenServiceError (*args, **kwargs)
Expand source code
class TorHiddenServiceError(TorControlError):
    """Failed to create or manage hidden service."""

    pass

Failed to create or manage hidden service.

Ancestors

  • TorControlError
  • builtins.Exception
  • builtins.BaseException

class TorSettings (**data: Any)
Expand source code
class TorSettings(BaseModel):
    """Tor proxy and control port configuration."""

    # SOCKS proxy settings
    socks_host: str = Field(
        default="127.0.0.1",
        description="Tor SOCKS5 proxy host",
    )
    socks_port: int = Field(
        default=9050,
        ge=1,
        le=65535,
        description="Tor SOCKS5 proxy port",
    )

    # Control port settings
    control_enabled: bool = Field(
        default=True,
        description="Enable Tor control port integration for ephemeral hidden services",
    )
    control_host: str = Field(
        default="127.0.0.1",
        description="Tor control port host",
    )
    control_port: int = Field(
        default=9051,
        ge=1,
        le=65535,
        description="Tor control port",
    )
    cookie_path: str | None = Field(
        default=None,
        description="Path to Tor cookie auth file",
    )
    password: SecretStr | None = Field(
        default=None,
        description="Tor control port password (use cookie auth instead if possible)",
    )

    # Hidden service target (for makers)
    target_host: str = Field(
        default="127.0.0.1",
        description="Target host for Tor hidden service (usually container name in Docker)",
    )

Tor proxy and control port configuration.

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Ancestors

  • pydantic.main.BaseModel

Class variables

var control_enabled : bool

Enable Tor control port integration for ephemeral hidden services.

var control_host : str

Tor control port host.

var control_port : int

Tor control port.

var cookie_path : str | None

Path to Tor cookie auth file.

var model_config

Pydantic model configuration.

var password : pydantic.types.SecretStr | None

Tor control port password (use cookie auth instead if possible).

var socks_host : str

Tor SOCKS5 proxy host.

var socks_port : int

Tor SOCKS5 proxy port.

var target_host : str

Target host for Tor hidden service (usually container name in Docker).

class TxInput (*args: Any, **kwargs: Any)
Expand source code
@dataclass
class TxInput:
    """Transaction input."""

    txid: str  # In RPC format (big-endian hex)
    vout: int
    value: int = 0
    scriptpubkey: str = ""
    scriptsig: str = ""
    sequence: int = 0xFFFFFFFF

Transaction input.

Instance variables

var scriptpubkey : str

Hex-encoded scriptPubKey of the spent output (may be empty if unknown).

var scriptsig : str

Hex-encoded scriptSig (empty until signed).

var sequence : int

Input sequence number (default 0xFFFFFFFF).

var txid : str

Transaction ID in RPC format (big-endian hex).

var value : int

Value of the spent output in satoshis.

var vout : int

Output index of the spent UTXO.

class TxOutput (*args: Any, **kwargs: Any)
Expand source code
@dataclass
class TxOutput:
    """Transaction output."""

    address: str
    value: int
    scriptpubkey: str = ""

Transaction output.

Instance variables

var address : str

Destination address.

var scriptpubkey : str

Hex-encoded scriptPubKey (may be empty).

var value : int

Output value in satoshis.

class UTXOMetadata (*args: Any, **kwargs: Any)
Expand source code
@dataclass
class UTXOMetadata:
    """
    Extended UTXO metadata for Neutrino-compatible verification.

    This allows light clients to verify UTXOs without arbitrary blockchain queries
    by providing the scriptPubKey (for Neutrino watch list) and block height
    (for efficient rescan starting point).
    """

    txid: str
    vout: int
    scriptpubkey: str | None = None  # Hex-encoded scriptPubKey
    blockheight: int | None = None  # Block height where UTXO was confirmed

    def to_legacy_str(self) -> str:
        """Format as legacy string: txid:vout"""
        return f"{self.txid}:{self.vout}"

    def to_extended_str(self) -> str:
        """Format as extended string: txid:vout:scriptpubkey:blockheight"""
        if self.scriptpubkey is None or self.blockheight is None:
            return self.to_legacy_str()
        return f"{self.txid}:{self.vout}:{self.scriptpubkey}:{self.blockheight}"

    @classmethod
    def from_str(cls, s: str) -> UTXOMetadata:
        """
        Parse UTXO string in either legacy or extended format.

        Legacy format: txid:vout
        Extended format: txid:vout:scriptpubkey:blockheight
        """
        parts = s.split(":")
        if len(parts) == 2:
            # Legacy format
            return cls(txid=parts[0], vout=int(parts[1]))
        elif len(parts) == 4:
            # Extended format
            return cls(
                txid=parts[0],
                vout=int(parts[1]),
                scriptpubkey=parts[2],
                blockheight=int(parts[3]),
            )
        else:
            raise ValueError(f"Invalid UTXO format: {s}")

    def has_neutrino_metadata(self) -> bool:
        """Check if this UTXO has the metadata needed for Neutrino verification."""
        return self.scriptpubkey is not None and self.blockheight is not None

    @staticmethod
    def is_valid_scriptpubkey(scriptpubkey: str) -> bool:
        """Validate scriptPubKey format (hex string)."""
        if not scriptpubkey:
            return False
        # Must be valid hex
        if not re.match(r"^[0-9a-fA-F]+$", scriptpubkey):
            return False
        # Common scriptPubKey lengths (in hex chars):
        # P2PKH: 50 (25 bytes), P2SH: 46 (23 bytes)
        # P2WPKH: 44 (22 bytes), P2WSH: 68 (34 bytes)
        # P2TR: 68 (34 bytes)
        return not (len(scriptpubkey) < 4 or len(scriptpubkey) > 200)

Extended UTXO metadata for Neutrino-compatible verification.

This allows light clients to verify UTXOs without arbitrary blockchain queries by providing the scriptPubKey (for Neutrino watch list) and block height (for efficient rescan starting point).

Static methods

def from_str(s: str) ‑> UTXOMetadata

Parse UTXO string in either legacy or extended format.

Legacy format: txid:vout
Extended format: txid:vout:scriptpubkey:blockheight
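
A round-trip sketch using placeholder values (the txid and scriptPubKey below are dummies):

txid = "ab" * 32  # placeholder 64-character hex transaction id

legacy = UTXOMetadata.from_str(f"{txid}:1")
print(legacy.has_neutrino_metadata())       # False - no scriptPubKey or block height

spk = "0014" + "cd" * 20                    # placeholder P2WPKH-length scriptPubKey
extended = UTXOMetadata.from_str(f"{txid}:1:{spk}:820000")
print(extended.has_neutrino_metadata())     # True
print(extended.to_extended_str() == f"{txid}:1:{spk}:820000")  # True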

def is_valid_scriptpubkey(scriptpubkey: str) ‑> bool
Expand source code
@staticmethod
def is_valid_scriptpubkey(scriptpubkey: str) -> bool:
    """Validate scriptPubKey format (hex string)."""
    if not scriptpubkey:
        return False
    # Must be valid hex
    if not re.match(r"^[0-9a-fA-F]+$", scriptpubkey):
        return False
    # Common scriptPubKey lengths (in hex chars):
    # P2PKH: 50 (25 bytes), P2SH: 46 (23 bytes)
    # P2WPKH: 44 (22 bytes), P2WSH: 68 (34 bytes)
    # P2TR: 68 (34 bytes)
    return not (len(scriptpubkey) < 4 or len(scriptpubkey) > 200)

Validate scriptPubKey format (hex string).

Instance variables

var blockheight : int | None

Block height where the UTXO was confirmed.

var scriptpubkey : str | None

Hex-encoded scriptPubKey.

var txid : str

Transaction ID.

var vout : int

Output index.

Methods

def has_neutrino_metadata(self) ‑> bool
Expand source code
def has_neutrino_metadata(self) -> bool:
    """Check if this UTXO has the metadata needed for Neutrino verification."""
    return self.scriptpubkey is not None and self.blockheight is not None

Check if this UTXO has the metadata needed for Neutrino verification.

def to_extended_str(self) ‑> str
Expand source code
def to_extended_str(self) -> str:
    """Format as extended string: txid:vout:scriptpubkey:blockheight"""
    if self.scriptpubkey is None or self.blockheight is None:
        return self.to_legacy_str()
    return f"{self.txid}:{self.vout}:{self.scriptpubkey}:{self.blockheight}"

Format as extended string: txid:vout:scriptpubkey:blockheight

def to_legacy_str(self) ‑> str
Expand source code
def to_legacy_str(self) -> str:
    """Format as legacy string: txid:vout"""
    return f"{self.txid}:{self.vout}"

Format as legacy string: txid:vout

class WalletConfig (**data: Any)
Expand source code
class WalletConfig(BaseModel):
    """
    Base wallet configuration shared by all JoinMarket wallet users.

    Includes wallet seed, network settings, HD wallet structure, and
    backend connection details.
    """

    # Wallet seed
    mnemonic: SecretStr = Field(..., description="BIP39 mnemonic phrase for wallet seed")
    passphrase: SecretStr = Field(
        default_factory=lambda: SecretStr(""),
        description="BIP39 passphrase (13th/25th word)",
    )

    # Network settings
    network: NetworkType = Field(
        default=NetworkType.MAINNET,
        description="Protocol network for directory server handshakes",
    )
    bitcoin_network: NetworkType | None = Field(
        default=None,
        description="Bitcoin network for address generation (defaults to same as network)",
    )

    # Data directory
    data_dir: Path | None = Field(
        default=None,
        description=(
            "Data directory for JoinMarket files (commitment blacklist, history, etc.). "
            "Defaults to ~/.joinmarket-ng or $JOINMARKET_DATA_DIR if set"
        ),
    )

    # Backend configuration
    backend_type: str = Field(
        default="scantxoutset",
        description="Backend type: 'scantxoutset' or 'neutrino'",
    )
    backend_config: dict[str, Any] = Field(
        default_factory=dict,
        description="Backend-specific configuration",
    )

    # Directory servers
    directory_servers: list[str] = Field(
        default_factory=list,
        description="List of directory server URLs (e.g., ['onion_host:port', ...])",
    )

    # Tor/SOCKS configuration
    socks_host: str = Field(default="127.0.0.1", description="Tor SOCKS5 proxy host")
    socks_port: int = Field(default=9050, ge=1, le=65535, description="Tor SOCKS5 proxy port")

    # HD wallet structure
    mixdepth_count: int = Field(
        default=5,
        ge=1,
        le=10,
        description="Number of mixdepths in the wallet (privacy compartments)",
    )
    gap_limit: int = Field(default=20, ge=6, description="BIP44 gap limit for address scanning")

    # Dust threshold
    dust_threshold: int = Field(
        default=DUST_THRESHOLD,
        ge=0,
        description="Dust threshold in satoshis for change outputs (default: 27300)",
    )

    # Descriptor wallet scan configuration
    smart_scan: bool = Field(
        default=True,
        description=(
            "Use smart scan for fast startup (scan from ~1 year ago instead of genesis). "
            "A full rescan runs in background to catch any older transactions."
        ),
    )
    background_full_rescan: bool = Field(
        default=True,
        description=(
            "Run full blockchain rescan in background after smart scan. "
            "This ensures no transactions are missed while allowing fast startup."
        ),
    )
    scan_lookback_blocks: int = Field(
        default=52_560,
        ge=0,
        description=(
            "Number of blocks to look back for smart scan (default: ~1 year = 52560 blocks). "
            "Set to 0 to always scan from genesis (slow but complete)."
        ),
    )

    model_config = {"frozen": False}

    @model_validator(mode="after")
    def set_bitcoin_network_default(self) -> WalletConfig:
        """If bitcoin_network is not set, default to the protocol network."""
        if self.bitcoin_network is None:
            object.__setattr__(self, "bitcoin_network", self.network)
        return self

Base wallet configuration shared by all JoinMarket wallet users.

Includes wallet seed, network settings, HD wallet structure, and backend connection details.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.
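
A minimal construction sketch: only mnemonic is required and everything else falls back to the defaults shown above (the import path and all values below are placeholder assumptions):

from jmcore import WalletConfig  # assumed import path

config = WalletConfig(
    mnemonic="abandon abandon abandon ...",   # placeholder BIP39 phrase
    backend_type="neutrino",
    directory_servers=["exampledirectoryhost.onion:5222"],
)

assert config.mixdepth_count == 5   # default
assert config.socks_port == 9050    # default Tor SOCKS port
print(config.mnemonic)              # SecretStr masks the value: **********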

Ancestors

  • pydantic.main.BaseModel

Subclasses

Class variables

var backend_config : dict[str, typing.Any]

Backend-specific configuration.

var backend_type : str

Backend type: 'scantxoutset' or 'neutrino'.

var background_full_rescan : bool

Run full blockchain rescan in background after smart scan.

var bitcoin_network : NetworkType | None

Bitcoin network for address generation (defaults to same as network).

var data_dir : pathlib.Path | None

Data directory for JoinMarket files (commitment blacklist, history, etc.). Defaults to ~/.joinmarket-ng or $JOINMARKET_DATA_DIR if set.

var directory_servers : list[str]

List of directory server URLs (e.g., ['onion_host:port', ...]).

var dust_threshold : int

Dust threshold in satoshis for change outputs (default: 27300).

var gap_limit : int

BIP44 gap limit for address scanning.

var mixdepth_count : int

Number of mixdepths in the wallet (privacy compartments).

var mnemonic : pydantic.types.SecretStr

BIP39 mnemonic phrase for wallet seed.

var model_config

Pydantic model configuration ({"frozen": False}).

var network : NetworkType

Protocol network for directory server handshakes.

var passphrase : pydantic.types.SecretStr

BIP39 passphrase (13th/25th word).

var scan_lookback_blocks : int

Number of blocks to look back for smart scan (default: ~1 year = 52560 blocks). Set to 0 to always scan from genesis.

var smart_scan : bool

Use smart scan for fast startup (scan from ~1 year ago instead of genesis).

var socks_host : str

Tor SOCKS5 proxy host.

var socks_port : int

Tor SOCKS5 proxy port.

Methods

def set_bitcoin_network_default(self) ‑> WalletConfig
Expand source code
@model_validator(mode="after")
def set_bitcoin_network_default(self) -> WalletConfig:
    """If bitcoin_network is not set, default to the protocol network."""
    if self.bitcoin_network is None:
        object.__setattr__(self, "bitcoin_network", self.network)
    return self

If bitcoin_network is not set, default to the protocol network.
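
Concretely (a sketch; NetworkType's import path and a TESTNET member are assumed here), giving only network propagates it to bitcoin_network, while an explicit value is preserved:

from jmcore import NetworkType, WalletConfig  # assumed import paths

cfg = WalletConfig(mnemonic="placeholder phrase", network=NetworkType.TESTNET)
assert cfg.bitcoin_network == NetworkType.TESTNET  # inherited from network

cfg = WalletConfig(
    mnemonic="placeholder phrase",
    network=NetworkType.TESTNET,
    bitcoin_network=NetworkType.MAINNET,
)
assert cfg.bitcoin_network == NetworkType.MAINNET  # explicit value kept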

class WalletSettings (**data: Any)
Expand source code
class WalletSettings(BaseModel):
    """Wallet configuration."""

    mixdepth_count: int = Field(
        default=5,
        ge=1,
        le=10,
        description="Number of mixdepths (privacy compartments)",
    )
    gap_limit: int = Field(
        default=20,
        ge=6,
        description="BIP44 gap limit for address scanning",
    )
    dust_threshold: int = Field(
        default=27300,
        ge=0,
        description="Dust threshold in satoshis",
    )
    smart_scan: bool = Field(
        default=True,
        description="Use smart scan for fast startup",
    )
    background_full_rescan: bool = Field(
        default=True,
        description="Run full blockchain rescan in background",
    )
    scan_lookback_blocks: int = Field(
        default=52560,
        ge=0,
        description="Blocks to look back for smart scan (~1 year default)",
    )
    scan_start_height: int | None = Field(
        default=None,
        ge=0,
        description="Explicit start height for initial scan (overrides scan_lookback_blocks if set)",
    )
    default_fee_block_target: int = Field(
        default=3,
        ge=1,
        le=1008,
        description="Default block target for fee estimation in wallet transactions",
    )
    mnemonic_file: str | None = Field(
        default=None,
        description="Default path to mnemonic file",
    )
    mnemonic_password: SecretStr | None = Field(
        default=None,
        description="Password for encrypted mnemonic file",
    )
    bip39_passphrase: SecretStr | None = Field(
        default=None,
        description="BIP39 passphrase (13th/25th word). For security, prefer BIP39_PASSPHRASE env var.",
    )

Wallet configuration.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.
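
A short usage sketch (assuming the class is importable from the package): overridden fields are validated against their constraints, and everything else keeps its default:

from pydantic import ValidationError
from jmcore import WalletSettings  # assumed import path

settings = WalletSettings(mixdepth_count=3, scan_start_height=840_000)
assert settings.gap_limit == 20               # default
assert settings.scan_start_height == 840_000  # overrides scan_lookback_blocks

# Field constraints are enforced, e.g. gap_limit must be >= 6.
try:
    WalletSettings(gap_limit=2)
except ValidationError as exc:
    print(f"rejected: {exc.error_count()} validation error(s)")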

Ancestors

  • pydantic.main.BaseModel

Class variables

var background_full_rescan : bool

Run full blockchain rescan in background.

var bip39_passphrase : pydantic.types.SecretStr | None

BIP39 passphrase (13th/25th word). For security, prefer the BIP39_PASSPHRASE env var.

var default_fee_block_target : int

Default block target for fee estimation in wallet transactions.

var dust_threshold : int

Dust threshold in satoshis.

var gap_limit : int

BIP44 gap limit for address scanning.

var mixdepth_count : int

Number of mixdepths (privacy compartments).

var mnemonic_file : str | None

Default path to mnemonic file.

var mnemonic_password : pydantic.types.SecretStr | None

Password for encrypted mnemonic file.

var model_config

Pydantic model configuration.

var scan_lookback_blocks : int

Blocks to look back for smart scan (~1 year default).

var scan_start_height : int | None

Explicit start height for initial scan (overrides scan_lookback_blocks if set).

var smart_scan : bool

Use smart scan for fast startup.