Discrete Module

Problems

DiscreteProblem

Bases: ABC, Generic[SolutionT]

Abstract base class for discrete optimization problems.

A DiscreteProblem defines the search space, objective function, neighborhood structure, and perturbation operator. It is stateless: all randomness is injected via an rng parameter owned by the caller (typically the ILS sampler).

Subclasses must implement all abstract methods to define:

- Solution generation and identity
- Fitness evaluation (pure function)
- Local search (hill climbing with problem-specific neighborhood)
- Perturbation (escape operator for ILS)

The ILS sampler calls these methods without knowing the solution representation, neighborhood structure, or move operators.

Type parameter

SolutionT: The solution representation type (e.g., list[int] for bitstrings or permutations).
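For orientation, here is a standalone sketch of what a concrete problem looks like. It mirrors the interface above without importing lonkit, and the LeadingOnes objective is a hypothetical example; a real subclass would derive from DiscreteProblem and also implement local_search and perturb.

```python
# Illustrative sketch only -- mirrors the DiscreteProblem interface
# without importing lonkit. LeadingOnes is a hypothetical toy objective.
import numpy as np

class LeadingOnes:
    """Maximize the number of leading 1-bits in a bitstring of length n."""

    minimize = False  # maximization problem

    def __init__(self, n: int):
        self.n = n

    def evaluate(self, solution: list[int]) -> float:
        # Pure function: count 1-bits from the left until the first 0.
        count = 0
        for b in solution:
            if b != 1:
                break
            count += 1
        return float(count)

    def random_solution(self, rng: np.random.Generator) -> list[int]:
        # All randomness comes from the caller-owned rng.
        return rng.integers(0, 2, size=self.n).tolist()

    def solution_id(self, solution: list[int]) -> str:
        # Identical bitstrings map to identical IDs.
        return "".join(str(b) for b in solution)

problem = LeadingOnes(n=5)
print(problem.evaluate([1, 1, 0, 1, 1]))  # → 2.0
```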

Source code in src/lonkit/discrete/problems/problem.py
class DiscreteProblem(ABC, Generic[SolutionT]):
    """
    Abstract base class for discrete optimization problems.

    A DiscreteProblem defines the search space, objective function,
    neighborhood structure, and perturbation operator. It is **stateless**:
    all randomness is injected via an `rng` parameter owned by the caller
    (typically the ILS sampler).

    Subclasses must implement all abstract methods to define:
    - Solution generation and identity
    - Fitness evaluation (pure function)
    - Local search (hill climbing with problem-specific neighborhood)
    - Perturbation (escape operator for ILS)

    The ILS sampler calls these methods without knowing the solution
    representation, neighborhood structure, or move operators.

    Type parameter:
        SolutionT: The solution representation type (e.g., list[int] for
        bitstrings or permutations).
    """

    @property
    def minimize(self) -> bool:
        """Whether this is a minimization problem. Default: True."""
        return True

    def is_better(self, a: float, b: float) -> bool:
        """Return True if fitness `a` is strictly better than `b`."""
        return a < b if self.minimize else a > b

    def is_better_or_equal(self, a: float, b: float) -> bool:
        """Return True if fitness `a` is better than or equal to `b`."""
        return a <= b if self.minimize else a >= b

    def compare(self, a: float, b: float) -> int:
        """
        Compare two fitness values.

        Returns:
            1 if a is better, -1 if b is better, 0 if equal.
        """
        if self.is_better(a, b):
            return 1
        if self.is_better(b, a):
            return -1
        return 0

    @abstractmethod
    def random_solution(self, rng: np.random.Generator) -> SolutionT:
        """
        Generate a random initial solution.

        Args:
            rng: Random number generator (owned by the caller).

        Returns:
            A solution object (representation is problem-specific).
            The returned solution does NOT need to be a local optimum.
        """
        ...

    @abstractmethod
    def evaluate(self, solution: SolutionT) -> float:
        """
        Evaluate and return the fitness of a solution.

        This must be a **pure function**: same solution always returns
        the same fitness, no side effects.

        Args:
            solution: A solution object.

        Returns:
            Scalar fitness value.
        """
        ...

    @abstractmethod
    def local_search(
        self, solution: SolutionT, rng: np.random.Generator
    ) -> tuple[SolutionT, float]:
        """
        Run local search from a starting solution to find a local optimum.

        This encapsulates the full hill-climbing loop:
        neighborhood definition, scanning strategy (first/best improvement),
        and termination.

        **Stochastic local search note**: When using first-improvement with
        randomized scan order, the local optimum reached from a given
        starting solution depends on the RNG state. This means basin
        identity is sampler-state dependent -- the same starting solution
        may reach different local optima across runs. This is an intentional
        and well-understood property of stochastic local search in LON
        construction.

        Args:
            solution: Starting solution (not necessarily a local optimum).
            rng: Random number generator (owned by the caller).

        Returns:
            Tuple of (local_optimum_solution, fitness).
        """
        ...

    @abstractmethod
    def perturb(self, solution: SolutionT, rng: np.random.Generator) -> SolutionT:
        """
        Apply a perturbation to escape the current basin of attraction.

        The perturbation should be strong enough to potentially reach a
        different basin, but not so strong that it amounts to a random restart.

        Args:
            solution: A local optimum solution.
            rng: Random number generator (owned by the caller).

        Returns:
            A perturbed solution (NOT necessarily a local optimum).
        """
        ...

    @abstractmethod
    def solution_id(self, solution: SolutionT) -> str:
        """
        Return a unique string identifier for this solution.

        Two solutions that represent the same point in the search space
        MUST return the same ID. Two different solutions MUST return
        different IDs.

        Args:
            solution: A solution object.

        Returns:
            String identifier (used as node ID in the LON).
        """
        ...

minimize property

minimize: bool

Whether this is a minimization problem. Default: True.

is_better

is_better(a: float, b: float) -> bool

Return True if fitness a is strictly better than b.

Source code in src/lonkit/discrete/problems/problem.py
def is_better(self, a: float, b: float) -> bool:
    """Return True if fitness `a` is strictly better than `b`."""
    return a < b if self.minimize else a > b

is_better_or_equal

is_better_or_equal(a: float, b: float) -> bool

Return True if fitness a is better than or equal to b.

Source code in src/lonkit/discrete/problems/problem.py
def is_better_or_equal(self, a: float, b: float) -> bool:
    """Return True if fitness `a` is better than or equal to `b`."""
    return a <= b if self.minimize else a >= b

compare

compare(a: float, b: float) -> int

Compare two fitness values.

Returns:

Type Description
int

1 if a is better, -1 if b is better, 0 if equal.
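The comparison helpers reduce to simple sign logic driven by the search direction. A standalone sketch (same logic as the methods above, with minimize passed explicitly instead of read from a property):

```python
# Standalone sketch of the comparison helpers; not imported from lonkit.
def is_better(a: float, b: float, minimize: bool = True) -> bool:
    # Lower is better when minimizing, higher when maximizing.
    return a < b if minimize else a > b

def compare(a: float, b: float, minimize: bool = True) -> int:
    if is_better(a, b, minimize):
        return 1
    if is_better(b, a, minimize):
        return -1
    return 0

print(compare(1.0, 2.0))                  # → 1 (minimizing: 1.0 is better)
print(compare(1.0, 2.0, minimize=False))  # → -1 (maximizing: 2.0 is better)
```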

Source code in src/lonkit/discrete/problems/problem.py
def compare(self, a: float, b: float) -> int:
    """
    Compare two fitness values.

    Returns:
        1 if a is better, -1 if b is better, 0 if equal.
    """
    if self.is_better(a, b):
        return 1
    if self.is_better(b, a):
        return -1
    return 0

random_solution abstractmethod

random_solution(rng: Generator) -> SolutionT

Generate a random initial solution.

Parameters:

Name Type Description Default
rng Generator

Random number generator (owned by the caller).

required

Returns:

Type Description
SolutionT

A solution object (representation is problem-specific).

SolutionT

The returned solution does NOT need to be a local optimum.

Source code in src/lonkit/discrete/problems/problem.py
@abstractmethod
def random_solution(self, rng: np.random.Generator) -> SolutionT:
    """
    Generate a random initial solution.

    Args:
        rng: Random number generator (owned by the caller).

    Returns:
        A solution object (representation is problem-specific).
        The returned solution does NOT need to be a local optimum.
    """
    ...

evaluate abstractmethod

evaluate(solution: SolutionT) -> float

Evaluate and return the fitness of a solution.

This must be a pure function: same solution always returns the same fitness, no side effects.

Parameters:

Name Type Description Default
solution SolutionT

A solution object.

required

Returns:

Type Description
float

Scalar fitness value.

Source code in src/lonkit/discrete/problems/problem.py
@abstractmethod
def evaluate(self, solution: SolutionT) -> float:
    """
    Evaluate and return the fitness of a solution.

    This must be a **pure function**: same solution always returns
    the same fitness, no side effects.

    Args:
        solution: A solution object.

    Returns:
        Scalar fitness value.
    """
    ...

local_search abstractmethod

local_search(
    solution: SolutionT, rng: Generator
) -> tuple[SolutionT, float]

Run local search from a starting solution to find a local optimum.

This encapsulates the full hill-climbing loop: neighborhood definition, scanning strategy (first/best improvement), and termination.

Stochastic local search note: When using first-improvement with randomized scan order, the local optimum reached from a given starting solution depends on the RNG state. This means basin identity is sampler-state dependent -- the same starting solution may reach different local optima across runs. This is an intentional and well-understood property of stochastic local search in LON construction.
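A standalone toy (not lonkit code) makes this concrete: with an rng-shuffled scan order, which improving neighbor a first-improvement step accepts depends on the generator state, so trajectories from the same start can diverge.

```python
# Toy demo of rng-dependent first-improvement; not lonkit's implementation.
import numpy as np

def first_improvement_step(bits, evaluate, rng):
    """Flip the first improving bit found in a shuffled scan order."""
    current = evaluate(bits)
    order = list(range(len(bits)))
    rng.shuffle(order)  # scan order depends on the rng state
    for i in order:
        cand = list(bits)
        cand[i] = 1 - cand[i]
        if evaluate(cand) > current:  # maximization for this demo
            return i, cand
    return None, list(bits)

evaluate = lambda b: float(sum(b))  # every 0 -> 1 flip improves
start = [0, 0, 0, 0]
i1, _ = first_improvement_step(start, evaluate, np.random.default_rng(1))
i2, _ = first_improvement_step(start, evaluate, np.random.default_rng(2))
print(i1, i2)  # the accepted flip index can differ between generator states
```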

Parameters:

Name Type Description Default
solution SolutionT

Starting solution (not necessarily a local optimum).

required
rng Generator

Random number generator (owned by the caller).

required

Returns:

Type Description
tuple[SolutionT, float]

Tuple of (local_optimum_solution, fitness).

Source code in src/lonkit/discrete/problems/problem.py
@abstractmethod
def local_search(
    self, solution: SolutionT, rng: np.random.Generator
) -> tuple[SolutionT, float]:
    """
    Run local search from a starting solution to find a local optimum.

    This encapsulates the full hill-climbing loop:
    neighborhood definition, scanning strategy (first/best improvement),
    and termination.

    **Stochastic local search note**: When using first-improvement with
    randomized scan order, the local optimum reached from a given
    starting solution depends on the RNG state. This means basin
    identity is sampler-state dependent -- the same starting solution
    may reach different local optima across runs. This is an intentional
    and well-understood property of stochastic local search in LON
    construction.

    Args:
        solution: Starting solution (not necessarily a local optimum).
        rng: Random number generator (owned by the caller).

    Returns:
        Tuple of (local_optimum_solution, fitness).
    """
    ...

perturb abstractmethod

perturb(solution: SolutionT, rng: Generator) -> SolutionT

Apply a perturbation to escape the current basin of attraction.

The perturbation should be strong enough to potentially reach a different basin, but not so strong that it amounts to a random restart.

Parameters:

Name Type Description Default
solution SolutionT

A local optimum solution.

required
rng Generator

Random number generator (owned by the caller).

required

Returns:

Type Description
SolutionT

A perturbed solution (NOT necessarily a local optimum).

Source code in src/lonkit/discrete/problems/problem.py
@abstractmethod
def perturb(self, solution: SolutionT, rng: np.random.Generator) -> SolutionT:
    """
    Apply a perturbation to escape the current basin of attraction.

    The perturbation should be strong enough to potentially reach a
    different basin, but not so strong that it amounts to a random restart.

    Args:
        solution: A local optimum solution.
        rng: Random number generator (owned by the caller).

    Returns:
        A perturbed solution (NOT necessarily a local optimum).
    """
    ...

solution_id abstractmethod

solution_id(solution: SolutionT) -> str

Return a unique string identifier for this solution.

Two solutions that represent the same point in the search space MUST return the same ID. Two different solutions MUST return different IDs.

Parameters:

Name Type Description Default
solution SolutionT

A solution object.

required

Returns:

Type Description
str

String identifier (used as node ID in the LON).

Source code in src/lonkit/discrete/problems/problem.py
@abstractmethod
def solution_id(self, solution: SolutionT) -> str:
    """
    Return a unique string identifier for this solution.

    Two solutions that represent the same point in the search space
    MUST return the same ID. Two different solutions MUST return
    different IDs.

    Args:
        solution: A solution object.

    Returns:
        String identifier (used as node ID in the LON).
    """
    ...

BitstringProblem

Bases: DiscreteProblem[list[int]]

Base class for problems with binary string representation.

Provides common bitstring operations:

- Random solution generation (random bitstring)
- Perturbation (k random bit flips)
- Solution identity (join bits as string)
- Hill climbing with first/best-improvement flip neighborhood

Subclasses must implement evaluate() and may override minimize or local_search().

The problem is stateless: n, n_perturbation_flips, and first_improvement are immutable configuration. All randomness comes from the rng parameter passed by the caller.
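The inherited perturbation can be sketched standalone: drawing k indices without replacement guarantees exactly k distinct bits change (a minimal mirror of the logic, not imported from lonkit):

```python
# Standalone sketch of k-distinct-flip perturbation; not lonkit code.
import numpy as np

def perturb(solution: list[int], k: int, rng: np.random.Generator) -> list[int]:
    sol = list(solution)  # copy; the input is left untouched
    # choice(..., replace=False) yields k distinct indices
    for i in rng.choice(len(sol), size=k, replace=False):
        sol[i] = 1 - sol[i]
    return sol

before = [0] * 8
after = perturb(before, 2, np.random.default_rng(0))
print(sum(b != a for b, a in zip(before, after)))  # → 2
```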

Source code in src/lonkit/discrete/problems/bitstring.py
class BitstringProblem(DiscreteProblem[list[int]]):
    """
    Base class for problems with binary string representation.

    Provides common bitstring operations:
    - Random solution generation (random bitstring)
    - Perturbation (k random bit flips)
    - Solution identity (join bits as string)
    - Hill climbing with first/best-improvement flip neighborhood

    Subclasses must implement `evaluate()` and may override
    `minimize` or `local_search()`.

    The problem is **stateless**: `n`, `n_perturbation_flips`, and
    `first_improvement` are immutable configuration. All randomness
    comes from the `rng` parameter passed by the caller.
    """

    def __init__(
        self,
        n: int,
        n_perturbation_flips: int = 2,
        first_improvement: bool = True,
    ):
        """
        Args:
            n: Length of the bitstring. Must be > 0.
            n_perturbation_flips: Number of random bit flips per perturbation.
                Must be in [1, n].
            first_improvement: If True, local search uses first-improvement
                hill climbing (stochastic -- scan order randomized each pass).
                If False, uses best-improvement (deterministic).

        Raises:
            ValueError: If n <= 0 or n_perturbation_flips is out of [1, n].
        """
        if n <= 0:
            raise ValueError(f"n must be positive, got {n}")
        if n_perturbation_flips <= 0 or n_perturbation_flips > n:
            raise ValueError(
                f"n_perturbation_flips must be in [1, {n}], got {n_perturbation_flips}"
            )
        self.n = n
        self.n_perturbation_flips = n_perturbation_flips
        self.first_improvement = first_improvement

    def random_solution(self, rng: np.random.Generator) -> list[int]:
        result: list[int] = rng.integers(0, 2, size=self.n).tolist()
        return result

    def solution_id(self, solution: list[int]) -> str:
        return "".join(str(b) for b in solution)

    def perturb(self, solution: list[int], rng: np.random.Generator) -> list[int]:
        """Flip `n_perturbation_flips` random distinct bits."""
        sol = list(solution)  # copy
        indices = rng.choice(self.n, size=self.n_perturbation_flips, replace=False)
        for i in indices:
            sol[i] = 1 - sol[i]
        return sol

    def delta_evaluate(
        self,
        solution: list[int],  # noqa: ARG002
        index: int,  # noqa: ARG002
    ) -> float | None:
        """
        Optional delta evaluation hook for flipping bit at `index`.

        Returns the fitness *change* (delta) if efficient delta evaluation
        is supported, or None to fall back to full evaluation.

        The default implementation returns None. Subclasses like OneMax
        can override this for O(1) evaluation instead of O(n).

        Args:
            solution: Current solution (not modified).
            index: Bit index that would be flipped.

        Returns:
            Fitness delta (new_fitness - current_fitness), or None.
        """
        return None

    def local_search(
        self, solution: list[int], rng: np.random.Generator
    ) -> tuple[list[int], float]:
        """
        First/best improvement hill climbing with 1-bit-flip neighborhood.

        Scans all N bit positions. For first-improvement, the scan order
        is randomized each pass (stochastic). For best-improvement, the
        scan is deterministic. Stops when no improving neighbor exists.

        Uses `delta_evaluate()` when available for O(1) neighbor evaluation;
        falls back to full `evaluate()` otherwise.
        """
        sol = list(solution)  # copy
        current_fitness = self.evaluate(sol)
        improved = True

        while improved:
            improved = False
            indices = list(range(self.n))
            if self.first_improvement:
                rng.shuffle(indices)

            best_delta_index = -1
            best_delta_fitness = current_fitness

            for i in indices:
                # Try delta evaluation first
                delta = self.delta_evaluate(sol, i)
                if delta is not None:
                    new_fitness = current_fitness + delta
                else:
                    # Full evaluation: flip, evaluate, undo
                    sol[i] = 1 - sol[i]
                    new_fitness = self.evaluate(sol)
                    sol[i] = 1 - sol[i]

                if self.is_better(new_fitness, current_fitness):
                    if self.first_improvement:
                        # Accept immediately
                        sol[i] = 1 - sol[i]
                        current_fitness = new_fitness
                        improved = True
                        break
                    else:
                        # Track best
                        if self.is_better(new_fitness, best_delta_fitness):
                            best_delta_fitness = new_fitness
                            best_delta_index = i

            if not self.first_improvement and best_delta_index >= 0:
                sol[best_delta_index] = 1 - sol[best_delta_index]
                current_fitness = best_delta_fitness
                improved = True

        return sol, current_fitness

__init__

__init__(
    n: int,
    n_perturbation_flips: int = 2,
    first_improvement: bool = True,
)

Parameters:

Name Type Description Default
n int

Length of the bitstring. Must be > 0.

required
n_perturbation_flips int

Number of random bit flips per perturbation. Must be in [1, n].

2
first_improvement bool

If True, local search uses first-improvement hill climbing (stochastic -- scan order randomized each pass). If False, uses best-improvement (deterministic).

True

Raises:

Type Description
ValueError

If n <= 0 or n_perturbation_flips is out of [1, n].

Source code in src/lonkit/discrete/problems/bitstring.py
def __init__(
    self,
    n: int,
    n_perturbation_flips: int = 2,
    first_improvement: bool = True,
):
    """
    Args:
        n: Length of the bitstring. Must be > 0.
        n_perturbation_flips: Number of random bit flips per perturbation.
            Must be in [1, n].
        first_improvement: If True, local search uses first-improvement
            hill climbing (stochastic -- scan order randomized each pass).
            If False, uses best-improvement (deterministic).

    Raises:
        ValueError: If n <= 0 or n_perturbation_flips is out of [1, n].
    """
    if n <= 0:
        raise ValueError(f"n must be positive, got {n}")
    if n_perturbation_flips <= 0 or n_perturbation_flips > n:
        raise ValueError(
            f"n_perturbation_flips must be in [1, {n}], got {n_perturbation_flips}"
        )
    self.n = n
    self.n_perturbation_flips = n_perturbation_flips
    self.first_improvement = first_improvement

perturb

perturb(solution: list[int], rng: Generator) -> list[int]

Flip n_perturbation_flips random distinct bits.

Source code in src/lonkit/discrete/problems/bitstring.py
def perturb(self, solution: list[int], rng: np.random.Generator) -> list[int]:
    """Flip `n_perturbation_flips` random distinct bits."""
    sol = list(solution)  # copy
    indices = rng.choice(self.n, size=self.n_perturbation_flips, replace=False)
    for i in indices:
        sol[i] = 1 - sol[i]
    return sol

delta_evaluate

delta_evaluate(
    solution: list[int], index: int
) -> float | None

Optional delta evaluation hook for flipping bit at index.

Returns the fitness change (delta) if efficient delta evaluation is supported, or None to fall back to full evaluation.

The default implementation returns None. Subclasses like OneMax can override this for O(1) evaluation instead of O(n).

Parameters:

Name Type Description Default
solution list[int]

Current solution (not modified).

required
index int

Bit index that would be flipped.

required

Returns:

Type Description
float | None

Fitness delta (new_fitness - current_fitness), or None.

Source code in src/lonkit/discrete/problems/bitstring.py
def delta_evaluate(
    self,
    solution: list[int],  # noqa: ARG002
    index: int,  # noqa: ARG002
) -> float | None:
    """
    Optional delta evaluation hook for flipping bit at `index`.

    Returns the fitness *change* (delta) if efficient delta evaluation
    is supported, or None to fall back to full evaluation.

    The default implementation returns None. Subclasses like OneMax
    can override this for O(1) evaluation instead of O(n).

    Args:
        solution: Current solution (not modified).
        index: Bit index that would be flipped.

    Returns:
        Fitness delta (new_fitness - current_fitness), or None.
    """
    return None

local_search

local_search(
    solution: list[int], rng: Generator
) -> tuple[list[int], float]

First/best improvement hill climbing with 1-bit-flip neighborhood.

Scans all N bit positions. For first-improvement, the scan order is randomized each pass (stochastic). For best-improvement, the scan is deterministic. Stops when no improving neighbor exists.

Uses delta_evaluate() when available for O(1) neighbor evaluation; falls back to full evaluate() otherwise.

Source code in src/lonkit/discrete/problems/bitstring.py
def local_search(
    self, solution: list[int], rng: np.random.Generator
) -> tuple[list[int], float]:
    """
    First/best improvement hill climbing with 1-bit-flip neighborhood.

    Scans all N bit positions. For first-improvement, the scan order
    is randomized each pass (stochastic). For best-improvement, the
    scan is deterministic. Stops when no improving neighbor exists.

    Uses `delta_evaluate()` when available for O(1) neighbor evaluation;
    falls back to full `evaluate()` otherwise.
    """
    sol = list(solution)  # copy
    current_fitness = self.evaluate(sol)
    improved = True

    while improved:
        improved = False
        indices = list(range(self.n))
        if self.first_improvement:
            rng.shuffle(indices)

        best_delta_index = -1
        best_delta_fitness = current_fitness

        for i in indices:
            # Try delta evaluation first
            delta = self.delta_evaluate(sol, i)
            if delta is not None:
                new_fitness = current_fitness + delta
            else:
                # Full evaluation: flip, evaluate, undo
                sol[i] = 1 - sol[i]
                new_fitness = self.evaluate(sol)
                sol[i] = 1 - sol[i]

            if self.is_better(new_fitness, current_fitness):
                if self.first_improvement:
                    # Accept immediately
                    sol[i] = 1 - sol[i]
                    current_fitness = new_fitness
                    improved = True
                    break
                else:
                    # Track best
                    if self.is_better(new_fitness, best_delta_fitness):
                        best_delta_fitness = new_fitness
                        best_delta_index = i

        if not self.first_improvement and best_delta_index >= 0:
            sol[best_delta_index] = 1 - sol[best_delta_index]
            current_fitness = best_delta_fitness
            improved = True

    return sol, current_fitness

NumberPartitioning

Bases: BitstringProblem

Number Partitioning Problem (NPP).

Given a set of N positive integers, partition them into two subsets such that the absolute difference of their sums is minimized.

A solution is a bitstring of length N. Bit i=0 means item i goes to subset A, bit i=1 means subset B.

Fitness = |sum(A) - sum(B)| (minimization, optimal = 0).

Construction: provide either explicit weights or both k and instance_seed for random instance generation.
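A worked fitness example, using a hypothetical 5-item instance (the weights below are chosen purely for illustration):

```python
# Hypothetical 5-item NPP instance; weights chosen for illustration only.
weights = [8, 7, 6, 5, 4]
solution = [0, 1, 0, 1, 0]  # bit 0 -> subset A, bit 1 -> subset B

sum_a = sum(w for w, b in zip(weights, solution) if b == 0)  # 8 + 6 + 4 = 18
sum_b = sum(w for w, b in zip(weights, solution) if b == 1)  # 7 + 5 = 12
print(abs(sum_a - sum_b))  # → 6
```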

Parameters:

Name Type Description Default
n int

Number of items. Must be > 0.

required
k float | None

Hardness parameter. Items drawn uniformly from [1, 2^(n*k)]. Higher k = harder instances (phase transition around k ~ 1.0). Required if weights is not provided. Must be > 0.

None
instance_seed int | None

Seed for generating item weights. Required if weights is not provided.

None
weights list[int] | None

Explicit item weights. If provided, k and instance_seed are ignored. Length must equal n.

None
n_perturbation_flips int

Number of random flips per perturbation (default: 2).

2
first_improvement bool

If True, local search uses first-improvement hill climbing (stochastic -- scan order randomized each pass). If False, uses best-improvement (deterministic). Default: True.

True
Source code in src/lonkit/discrete/problems/bitstring.py
class NumberPartitioning(BitstringProblem):
    """
    Number Partitioning Problem (NPP).

    Given a set of N positive integers, partition them into two subsets
    such that the absolute difference of their sums is minimized.

    A solution is a bitstring of length N. Bit i=0 means item i goes
    to subset A, bit i=1 means subset B.

    Fitness = |sum(A) - sum(B)|  (minimization, optimal = 0).

    Construction: provide either explicit `weights` or both `k` and
    `instance_seed` for random instance generation.

    Args:
        n: Number of items. Must be > 0.
        k: Hardness parameter. Items drawn uniformly from [1, 2^(n*k)].
            Higher k = harder instances (phase transition around k ~ 1.0).
            Required if `weights` is not provided. Must be > 0.
        instance_seed: Seed for generating item weights.
            Required if `weights` is not provided.
        weights: Explicit item weights. If provided, `k` and
            `instance_seed` are ignored. Length must equal `n`.
        n_perturbation_flips: Number of random flips per perturbation (default: 2).
        first_improvement: If True, local search uses first-improvement
            hill climbing (stochastic -- scan order randomized each pass).
            If False, uses best-improvement (deterministic). Default: True.
    """

    @property
    def minimize(self) -> bool:
        return True

    def __init__(
        self,
        n: int,
        k: float | None = None,
        instance_seed: int | None = None,
        weights: list[int] | None = None,
        n_perturbation_flips: int = 2,
        first_improvement: bool = True,
    ):
        super().__init__(n, n_perturbation_flips, first_improvement)
        if weights is not None:
            if k is not None or instance_seed is not None:
                warnings.warn(
                    "Both `weights` and `k`/`instance_seed` were provided. "
                    "`weights` will be used and `k`/`instance_seed` will be ignored.",
                    UserWarning,
                    stacklevel=2,
                )
            if len(weights) != n:
                raise ValueError(f"weights length ({len(weights)}) must equal n ({n})")
            if any(w <= 0 for w in weights):
                raise ValueError("All weights must be positive")
            self.weights = list(weights)
            self.k = None
            self.instance_seed = None
        elif k is not None and instance_seed is not None:
            if k <= 0:
                raise ValueError(f"k must be positive, got {k}")
            self.k = k
            self.instance_seed = instance_seed
            # Generate item weights using a separate RNG
            rng = np.random.default_rng(instance_seed)
            upper = round(math.pow(2, n * k))
            self.weights = rng.integers(1, upper + 1, size=n).tolist()
        else:
            raise ValueError("Provide either `weights` or both `k` and `instance_seed`")

    def evaluate(self, solution: list[int]) -> float:
        cost_a = sum(self.weights[i] for i in range(self.n) if solution[i] == 0)
        cost_b = sum(self.weights[i] for i in range(self.n) if solution[i] == 1)
        return float(abs(cost_a - cost_b))

OneMax

Bases: BitstringProblem

OneMax problem: maximize the number of 1-bits.

Fitness = sum(bits). Single global optimum at all-ones.

Supports O(1) delta evaluation: flipping bit i changes fitness by -1 (if 1->0) or +1 (if 0->1).
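A standalone check (not lonkit code) that the O(1) delta matches full re-evaluation for every single-bit flip:

```python
# Verify the OneMax delta against full re-evaluation; standalone sketch.
def evaluate(bits: list[int]) -> float:
    return float(sum(bits))

def delta_evaluate(bits: list[int], i: int) -> float:
    # Flipping a 1 loses one unit of fitness; flipping a 0 gains one.
    return -1.0 if bits[i] == 1 else 1.0

bits = [1, 0, 1, 1]
for i in range(len(bits)):
    flipped = list(bits)
    flipped[i] = 1 - flipped[i]
    assert evaluate(flipped) == evaluate(bits) + delta_evaluate(bits, i)
```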

Source code in src/lonkit/discrete/problems/bitstring.py
class OneMax(BitstringProblem):
    """
    OneMax problem: maximize the number of 1-bits.

    Fitness = sum(bits). Single global optimum at all-ones.

    Supports O(1) delta evaluation: flipping bit i changes
    fitness by -1 (if 1->0) or +1 (if 0->1).
    """

    @property
    def minimize(self) -> bool:
        return False

    def evaluate(self, solution: list[int]) -> float:
        return float(sum(solution))

    def delta_evaluate(self, solution: list[int], index: int) -> float | None:
        """O(1) delta evaluation for OneMax."""
        return -1.0 if solution[index] == 1 else 1.0

delta_evaluate

delta_evaluate(
    solution: list[int], index: int
) -> float | None

O(1) delta evaluation for OneMax.

Source code in src/lonkit/discrete/problems/bitstring.py
def delta_evaluate(self, solution: list[int], index: int) -> float | None:
    """O(1) delta evaluation for OneMax."""
    return -1.0 if solution[index] == 1 else 1.0

Sampling

ILSSamplerConfig dataclass

Configuration for Iterated Local Search sampling.

Attributes:

Name Type Description
n_runs int

Number of independent ILS runs. Default: 100.

n_iter_no_change int | None

Maximum consecutive non-improving iterations before stopping each run. Use None for no limit. Setting both n_iter_no_change and max_iter to None will result in an error. Default: 100.

max_iter int | None

Maximum total iterations per run. Use None for no limit. Setting both n_iter_no_change and max_iter to None will result in an error. Default: None.

accept_equal bool

If True, accept moves to equal-fitness optima (greedy with equal acceptance). If False, only accept strictly improving moves. Default: True.

seed int | None

Random seed for reproducibility. Controls ALL search randomness: initial solution generation, local search scan order, and perturbation. The problem instance is stateless and receives the sampler's RNG. Default: None.

Source code in src/lonkit/discrete/sampling.py
@dataclass
class ILSSamplerConfig:
    """
    Configuration for Iterated Local Search sampling.

    Attributes:
        n_runs: Number of independent ILS runs. Default: 100.
        n_iter_no_change: Maximum consecutive non-improving iterations
            before stopping each run. Use None for no limit.
            Setting both n_iter_no_change and max_iter to None will
            result in an error. Default: 100.
        max_iter: Maximum total iterations per run. Use None for no
            limit. Setting both n_iter_no_change and max_iter to None
            will result in an error. Default: None.
        accept_equal: If True, accept moves to equal-fitness optima
            (greedy with equal acceptance). If False, only accept
            strictly improving moves. Default: True.
        seed: Random seed for reproducibility. Controls ALL search
            randomness: initial solution generation, local search
            scan order, and perturbation. The problem instance is
            stateless and receives the sampler's RNG. Default: None.
    """

    n_runs: int = 100
    n_iter_no_change: int | None = 100
    max_iter: int | None = None
    accept_equal: bool = True
    seed: int | None = None

    def __post_init__(self) -> None:
        if self.n_iter_no_change is not None and self.n_iter_no_change <= 0:
            raise ValueError("n_iter_no_change must be positive or None.")
        if self.max_iter is not None and self.max_iter <= 0:
            raise ValueError("max_iter must be positive or None.")
        if self.n_iter_no_change is None and self.max_iter is None:
            raise ValueError(
                "At least one stopping criterion must be set: " "n_iter_no_change and/or max_iter."
            )

ILSSampler

Iterated Local Search sampler for constructing Local Optima Networks from discrete optimization problems.

ILS alternates between perturbation and local search to explore the space of local optima. Transitions between local optima are recorded in the same trace format as BasinHoppingSampler, enabling direct use with LON.from_trace_data().

The sampler owns all search randomness via a single numpy.random.Generator instance created from ILSSamplerConfig.seed. The problem instance is stateless — the sampler passes its RNG into every problem method call (random_solution, local_search, perturb).

Example

from lonkit.discrete import NumberPartitioning, ILSSampler, ILSSamplerConfig
problem = NumberPartitioning(n=20, k=0.5, instance_seed=1)
config = ILSSamplerConfig(n_runs=10, n_iter_no_change=100, seed=42)
sampler = ILSSampler(config)
result = sampler.sample(problem)
lon = sampler.sample_to_lon(result)

Source code in src/lonkit/discrete/sampling.py
class ILSSampler:
    """
    Iterated Local Search sampler for constructing Local Optima Networks
    from discrete optimization problems.

    ILS alternates between perturbation and local search to explore the
    space of local optima. Transitions between local optima are recorded
    in the same trace format as BasinHoppingSampler, enabling direct use
    with LON.from_trace_data().

    The sampler owns all search randomness via a single `numpy.random.Generator`
    instance created from `ILSSamplerConfig.seed`. The problem instance
    is stateless — the sampler passes its RNG into every problem method
    call (`random_solution`, `local_search`, `perturb`).

    Example:
        >>> from lonkit.discrete import NumberPartitioning, ILSSampler, ILSSamplerConfig
        >>> problem = NumberPartitioning(n=20, k=0.5, instance_seed=1)
        >>> config = ILSSamplerConfig(n_runs=10, n_iter_no_change=100, seed=42)
        >>> sampler = ILSSampler(config)
        >>> result = sampler.sample(problem)
        >>> lon = sampler.sample_to_lon(result)
    """

    def __init__(self, config: ILSSamplerConfig | None = None):
        self.config = config or ILSSamplerConfig()

    def sample(
        self,
        problem: DiscreteProblem[Any],
        progress_callback: Callable[[int, int], None] | None = None,
    ) -> ILSResult:
        """
        Run ILS sampling and construct trace data.

        Args:
            problem: A DiscreteProblem instance defining the optimization
                problem, local search, and perturbation operators.
                The problem must be stateless — the sampler provides
                the RNG.
            progress_callback: Optional callback(run, total_runs) for
                progress reporting. Default: None.

        Returns:
            ILSResult with trace DataFrame and raw records.
        """
        rng = np.random.default_rng(self.config.seed)
        raw_records = []

        for run in range(1, self.config.n_runs + 1):
            if progress_callback:
                progress_callback(run, self.config.n_runs)

            run_records = self._ils_run(problem, run, rng)
            raw_records.extend(run_records)

        trace_df = self._construct_trace_data(raw_records)
        return ILSResult(trace_df=trace_df, raw_records=raw_records, minimize=problem.minimize)

    def _ils_run(
        self,
        problem: DiscreteProblem[Any],
        run: int,
        rng: np.random.Generator,
    ) -> list[dict]:
        """
        Execute a single ILS run.

        The algorithm:
        1. Generate random initial solution
        2. Run local search to find first local optimum
        3. Loop:
           a. Perturb current best local optimum
           b. Run local search on perturbed solution
           c. Record transition (current -> new)
           d. Accept if new is better or equal (configurable)
           e. Stop after n_iter_no_change or max_iter

        Args:
            problem: The discrete problem instance.
            run: Run number (1-indexed, for trace DataFrame).
            rng: Random number generator (owned by the sampler).

        Returns:
            List of raw record dicts for this run.
        """
        records = []

        # Initial solution and local search
        initial_sol = problem.random_solution(rng)
        current_sol, current_fitness = problem.local_search(initial_sol, rng)
        current_id = problem.solution_id(current_sol)

        iters_without_improvement = 0
        iter_index = 0

        while True:
            # Check stopping criteria
            if self.config.max_iter is not None and iter_index >= self.config.max_iter:
                break
            if (
                self.config.n_iter_no_change is not None
                and iters_without_improvement >= self.config.n_iter_no_change
            ):
                break

            # Perturb and local search
            perturbed_sol = problem.perturb(current_sol, rng)
            new_sol, new_fitness = problem.local_search(perturbed_sol, rng)
            new_id = problem.solution_id(new_sol)

            if self.config.accept_equal:
                accepted = problem.is_better_or_equal(new_fitness, current_fitness)
            else:
                accepted = problem.is_better(new_fitness, current_fitness)

            records.append(
                {
                    "run": run,
                    "iteration": iter_index,
                    "current_id": current_id,
                    "current_fitness": current_fitness,
                    "new_id": new_id,
                    "new_fitness": new_fitness,
                    "accepted": accepted,
                }
            )

            if problem.is_better(new_fitness, current_fitness):
                iters_without_improvement = 0
            else:
                iters_without_improvement += 1

            if accepted:
                current_sol = new_sol
                current_fitness = new_fitness
                current_id = new_id

            iter_index += 1

        return records

    def _construct_trace_data(self, raw_records: list[dict]) -> pd.DataFrame:
        """
        Construct trace data from accepted transitions in raw records.

        Args:
            raw_records: List of raw sampling records from ILS.

        Returns:
            DataFrame with columns `[run, fit1, node1, fit2, node2]` representing
            accepted transitions only.
        """
        trace_records = []

        for rec in raw_records:
            if not rec["accepted"]:
                continue

            trace_records.append(
                {
                    "run": rec["run"],
                    "fit1": rec["current_fitness"],
                    "node1": rec["current_id"],
                    "fit2": rec["new_fitness"],
                    "node2": rec["new_id"],
                }
            )

        return pd.DataFrame(
            trace_records,
            columns=["run", "fit1", "node1", "fit2", "node2"],
        )

    def sample_to_lon(
        self,
        sampler_result: ILSResult,
        lon_config: LONConfig | None = None,
    ) -> LON:
        """
        Construct a LON from an `ILSResult`.

        Convenience wrapper that passes the trace data to
        LON.from_trace_data(). Equivalent to calling
        LON.from_trace_data(sampler_result.trace_df, config=lon_config).

        Args:
            sampler_result: Result returned by `sample()`.
            lon_config: LON construction configuration. If `None`, uses
                default `LONConfig`. Default: `None`.

        Returns:
            `LON` instance constructed from the sampling trace.
        """
        trace_df = sampler_result.trace_df

        if trace_df.empty:
            return LON(minimize=sampler_result.minimize)

        config = (
            replace(lon_config, minimize=sampler_result.minimize)
            if lon_config is not None
            else LONConfig(minimize=sampler_result.minimize)
        )
        return LON.from_trace_data(trace_df, config=config)
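One subtlety in the `_ils_run` loop above: acceptance and improvement are tracked separately. With accept_equal=True, an equal-fitness move is accepted (the walk continues on the plateau) but still increments the non-improvement counter, so a long plateau eventually triggers n_iter_no_change. A toy sketch of just that bookkeeping, with made-up fitness values (lower is better, as in the default minimize=True):

```python
def count_iterations(fitness_sequence, n_iter_no_change, accept_equal=True):
    """Replay a made-up sequence of local-optimum fitnesses through the
    acceptance / non-improvement bookkeeping used by the ILS loop."""
    current = fitness_sequence[0]
    stale = 0        # consecutive non-improving iterations
    iterations = 0
    for new in fitness_sequence[1:]:
        if stale >= n_iter_no_change:
            break
        iterations += 1
        improved = new < current                       # minimize: strictly better
        accepted = (new <= current) if accept_equal else improved
        stale = 0 if improved else stale + 1           # equal moves do NOT reset
        if accepted:
            current = new
    return iterations, current

# A plateau of equal values is accepted but counts as non-improving,
# so the run stops after n_iter_no_change stale iterations:
iters, best = count_iterations([5.0, 5.0, 5.0, 5.0, 5.0], n_iter_no_change=2)
```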

sample

sample(
    problem: DiscreteProblem[Any],
    progress_callback: Callable[[int, int], None]
    | None = None,
) -> ILSResult

Run ILS sampling and construct trace data.

Parameters:

problem (DiscreteProblem[Any], required): A DiscreteProblem instance defining the optimization problem, local search, and perturbation operators. The problem must be stateless — the sampler provides the RNG.

progress_callback (Callable[[int, int], None] | None): Optional callback(run, total_runs) for progress reporting. Default: None.

Returns:

ILSResult with trace DataFrame and raw records.

Source code in src/lonkit/discrete/sampling.py
def sample(
    self,
    problem: DiscreteProblem[Any],
    progress_callback: Callable[[int, int], None] | None = None,
) -> ILSResult:
    """
    Run ILS sampling and construct trace data.

    Args:
        problem: A DiscreteProblem instance defining the optimization
            problem, local search, and perturbation operators.
            The problem must be stateless — the sampler provides
            the RNG.
        progress_callback: Optional callback(run, total_runs) for
            progress reporting. Default: None.

    Returns:
        ILSResult with trace DataFrame and raw records.
    """
    rng = np.random.default_rng(self.config.seed)
    raw_records = []

    for run in range(1, self.config.n_runs + 1):
        if progress_callback:
            progress_callback(run, self.config.n_runs)

        run_records = self._ils_run(problem, run, rng)
        raw_records.extend(run_records)

    trace_df = self._construct_trace_data(raw_records)
    return ILSResult(trace_df=trace_df, raw_records=raw_records, minimize=problem.minimize)

sample_to_lon

sample_to_lon(
    sampler_result: ILSResult,
    lon_config: LONConfig | None = None,
) -> LON

Construct a LON from an ILSResult.

Convenience wrapper that passes the trace data to LON.from_trace_data(). Equivalent to calling LON.from_trace_data(sampler_result.trace_df, config=lon_config).

Parameters:

sampler_result (ILSResult, required): Result returned by sample().

lon_config (LONConfig | None): LON construction configuration. If None, uses default LONConfig. Default: None.

Returns:

LON instance constructed from the sampling trace.

Source code in src/lonkit/discrete/sampling.py
def sample_to_lon(
    self,
    sampler_result: ILSResult,
    lon_config: LONConfig | None = None,
) -> LON:
    """
    Construct a LON from an `ILSResult`.

    Convenience wrapper that passes the trace data to
    LON.from_trace_data(). Equivalent to calling
    LON.from_trace_data(sampler_result.trace_df, config=lon_config).

    Args:
        sampler_result: Result returned by `sample()`.
        lon_config: LON construction configuration. If `None`, uses
            default `LONConfig`. Default: `None`.

    Returns:
        `LON` instance constructed from the sampling trace.
    """
    trace_df = sampler_result.trace_df

    if trace_df.empty:
        return LON(minimize=sampler_result.minimize)

    config = (
        replace(lon_config, minimize=sampler_result.minimize)
        if lon_config is not None
        else LONConfig(minimize=sampler_result.minimize)
    )
    return LON.from_trace_data(trace_df, config=config)
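Note the config handling in `sample_to_lon` above: the result's minimize flag always wins, applied via `dataclasses.replace` so the caller's config object is never mutated. A sketch of that pattern on a stand-in config dataclass (`_Config` is hypothetical; it assumes only that LONConfig is a dataclass with a minimize field):

```python
from dataclasses import dataclass, replace

@dataclass
class _Config:
    """Stand-in for LONConfig; only the fields needed for illustration."""
    minimize: bool = True
    threshold: float = 0.0

user_config = _Config(minimize=True, threshold=0.5)

# The sampler result's search direction overrides whatever the caller
# set, while other settings (threshold) are preserved untouched.
result_minimize = False
effective = replace(user_config, minimize=result_minimize)
```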

ILSResult dataclass

Result of ILS sampling.

Attributes:

trace_df (DataFrame): DataFrame with columns [run, fit1, node1, fit2, node2] representing transitions between local optima.

raw_records (list[dict]): List of dicts with per-iteration data. Each dict has keys: run, iteration, current_id, current_fitness, new_id, new_fitness, accepted.

minimize (bool): Whether the problem is a minimization problem. Default: True.

Source code in src/lonkit/discrete/sampling.py
@dataclass
class ILSResult:
    """
    Result of ILS sampling.

    Attributes:
        trace_df: DataFrame with columns [run, fit1, node1, fit2, node2]
            representing transitions between local optima.
        raw_records: List of dicts with per-iteration data. Each dict
            has keys: run, iteration, current_id, current_fitness,
            new_id, new_fitness, accepted.
        minimize: Whether the problem is a minimization problem. Default: True.
    """

    trace_df: pd.DataFrame
    raw_records: list[dict]
    minimize: bool = True
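The relationship between the two fields above mirrors what `_construct_trace_data` does: trace_df keeps only the records with accepted=True, renamed to the [run, fit1, node1, fit2, node2] schema. A minimal sketch with made-up records (assuming pandas is available):

```python
import pandas as pd

raw_records = [  # made-up ILS records; keys follow the documented schema
    {"run": 1, "iteration": 0, "current_id": "a", "current_fitness": 5.0,
     "new_id": "b", "new_fitness": 3.0, "accepted": True},
    {"run": 1, "iteration": 1, "current_id": "b", "current_fitness": 3.0,
     "new_id": "c", "new_fitness": 4.0, "accepted": False},
]

# Keep accepted transitions only, renamed to the trace schema.
trace_df = pd.DataFrame(
    [
        {"run": r["run"], "fit1": r["current_fitness"], "node1": r["current_id"],
         "fit2": r["new_fitness"], "node2": r["new_id"]}
        for r in raw_records
        if r["accepted"]
    ],
    columns=["run", "fit1", "node1", "fit2", "node2"],
)
```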