Derandomization of State-of-the-Art SAT Algorithms

The Role of Randomness in SAT Algorithms

Randomness has played an integral role in enhancing the performance of Boolean Satisfiability (SAT) solvers over the past two decades. Modern SAT solvers incorporate randomization heavily in their search procedures, clause learning schemes, restart policies, and decision heuristics. This injection of calculated randomness has helped SAT techniques scale to problem sizes involving millions of variables that were previously out of reach. But while randomized algorithms have substantially pushed the frontiers of SAT solving, they suffer from non-determinism and a lack of reproducibility.

Sources of Randomness in Modern SAT Solvers

The key sources of randomness in current state-of-the-art SAT solvers are:

  • Random restarts – Periodically abandoning the current search and starting over with a different decision sequence
  • Randomized branching heuristics – Occasionally choosing the next decision variable at random rather than by its heuristic score
  • Random clause deletion policies – Randomly selecting which learnt clauses to remove when controlling memory usage
  • Random seeding of initialization parameters – Setting initial variable polarities, activity scores, etc. from a random seed

These stochastic elements lead to considerable volatility in runtime and memory consumption across solver executions: a given benchmark instance may be solved quickly in one run yet time out in another purely by chance. The sketch below illustrates one such element, a branching heuristic that occasionally decides at random.
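To make this concrete, here is a minimal C++ sketch of how a MiniSat-style solver might mix random decisions into its branching heuristic. The structure, the names, and the 2% random-decision rate are illustrative assumptions for the sketch, not CryptoMiniSat's actual code.

#include <random>
#include <vector>

// Illustrative: a VSIDS-style branching heuristic that, a small
// fraction of the time, picks the next decision variable at random.
// Names and the 2% rate are hypothetical, not any solver's real code.
struct Brancher {
    std::mt19937 rng{std::random_device{}()};  // nondeterministic seed
    std::vector<double> activity;              // one score per variable
    double random_var_freq = 0.02;             // chance of a random pick

    int pick_branch_var() {
        std::uniform_real_distribution<double> coin(0.0, 1.0);
        if (coin(rng) < random_var_freq) {
            // random decision: any variable, regardless of activity
            std::uniform_int_distribution<int> any(0, (int)activity.size() - 1);
            return any(rng);
        }
        // default: the variable with the highest activity score
        int best = 0;
        for (int v = 1; v < (int)activity.size(); ++v)
            if (activity[v] > activity[best]) best = v;
        return best;
    }
};

Because both the seed and each coin flip differ between executions, two runs on the same formula can explore entirely different parts of the search space.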

Motivation for Derandomization

Reproducibility

The randomized nature of state-of-the-art SAT solvers severely hampers reproducibility of performance results. Solver running times can differ by orders of magnitude across runs on the same benchmark set. This variability clouds meaningful comparative assessment and ranking of solvers. Removing randomness would enable standardized empirical evaluation.

Predictability

Randomized SAT algorithms exhibit considerable volatility in their resource consumption. Memory usage and solver trajectories can vary dramatically across runs depending on the chance-driven restart schedule. Such unpredictability impedes deployment in time-critical or memory-constrained applications. Making solver progress steadier through derandomization enables reliable resource allocation.

Parallelizability

Parallelizing randomized SAT algorithms requires complex load balancing and inter-node communication to cope with erratic solving paths. Eliminating the stochastic dependencies streamlines parallel solver implementations by making search trajectories more predictable, which in turn allows modern multi-core architectures to be exploited more efficiently.

Techniques for Derandomization

Seedless Clause Learning

Clause learning is vital to modern SAT solver performance, but its management has traditionally been intertwined with randomized restart and deletion policies. Recent strategies instead use deterministic triggers for clause database cleaning, such as fixed size thresholds combined with quality metrics like the literal block distance (LBD), in place of random deletion schemes. Such seedless clause learning retains important learnt clauses independently of random effects, as the sketch below illustrates.
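As an illustration of such a deterministic trigger, the C++ sketch below cleans the learnt-clause database whenever it exceeds a fixed size, keeping clauses by their LBD score rather than by random selection. The data layout and the threshold are assumptions for the sketch, not CryptoMiniSat internals.

#include <algorithm>
#include <vector>

struct Clause {
    std::vector<int> lits;
    int lbd;   // literal block distance, recorded when the clause is learnt
};

// Deterministic clause-database cleaning: triggered by a fixed size
// threshold and ordered purely by LBD, with no random tie-breaking.
void reduce_db(std::vector<Clause>& learnts, std::size_t max_learnts) {
    if (learnts.size() <= max_learnts) return;   // deterministic trigger

    // best (lowest LBD) first; stable_sort preserves insertion order
    // among equal-LBD clauses, so the outcome is identical every run
    std::stable_sort(learnts.begin(), learnts.end(),
                     [](const Clause& a, const Clause& b) {
                         return a.lbd < b.lbd;
                     });

    learnts.resize(max_learnts);   // discard the lowest-quality clauses
}

Because both the trigger and the ordering are deterministic, every run deletes exactly the same clauses at exactly the same points in the search.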

Deterministic Restarts

Restarts at a fixed, predetermined schedule of conflict counts replace the traditional randomized exponential restart policy, making solver trajectories consistent across runs. Quasi-random number generators can also stand in for purely random ones where some variation in restart behavior is still desired, keeping restarts reproducible.
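A widely used fixed schedule is the Luby sequence (1, 1, 2, 1, 1, 2, 4, ...), employed deterministically by many MiniSat-derived solvers: run i is allotted base × luby(i) conflicts before restarting, so the entire schedule is known before the search begins. A minimal C++ sketch:

#include <cstdint>

// Luby sequence, 0-indexed: 1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 4, 8, ...
// Run i of the solver restarts after base * luby(i) conflicts.
std::uint64_t luby(std::uint64_t i) {
    // find the smallest complete subsequence (length 2^seq - 1)
    // containing index i, then descend into it
    std::uint64_t size = 1, seq = 0;
    while (size < i + 1) { size = 2 * size + 1; ++seq; }
    while (size - 1 != i) {
        size = (size - 1) / 2;
        --seq;
        i = i % size;
    }
    return std::uint64_t(1) << seq;
}

With a typical base of a few hundred conflicts (the exact constant is a tuning choice), the schedule is fully reproducible: every run restarts at the same conflict counts.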

Quasi-Random Number Generation

Quasi-Monte Carlo constructions such as Sobol and Niederreiter sequences can substitute for pseudo-random numbers in SAT algorithms. Their superior equidistribution properties reduce the volatility caused by random clustering, while the sequences themselves remain fully deterministic and reproducible.
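A full Sobol generator is beyond the scope of a sketch, but the one-dimensional building block of such constructions, the base-2 van der Corput sequence, shows the idea: bit-reversing the index yields points that fill (0, 1) evenly and identically on every run.

#include <cstdint>

// Base-2 van der Corput sequence: term n is obtained by reversing
// the bits of n across the binary point. Successive terms spread
// evenly over [0, 1) with no random clustering.
double van_der_corput(std::uint32_t n) {
    n = (n >> 16) | (n << 16);                               // swap halves
    n = ((n & 0x00ff00ffu) << 8) | ((n & 0xff00ff00u) >> 8);
    n = ((n & 0x0f0f0f0fu) << 4) | ((n & 0xf0f0f0f0u) >> 4);
    n = ((n & 0x33333333u) << 2) | ((n & 0xccccccccu) >> 2);
    n = ((n & 0x55555555u) << 1) | ((n & 0xaaaaaaaau) >> 1);
    return n / 4294967296.0;                                 // scale to [0, 1)
}

A solver could draw its occasional "random" choices from this stream instead of a PRNG: the values are well spread, yet every execution sees exactly the same sequence.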

Case Study: Derandomizing CryptoMiniSat

Overview of CryptoMiniSat

CryptoMiniSat is among the leading SAT solvers today, combining an efficient CDCL search core with powerful XOR reasoning, inprocessing, and parallel solving capabilities. Randomness plays a pivotal role in multiple aspects of its implementation, from restarts to decision making.

Modifications for Derandomization

Pseudocode example

function crypto_minisat_deter():

  clauses = parse_dimacs(input_cnf)
  var_activities = init_var_scores()   // deterministic, derived from the formula
  lbd_base = init_lbd()                // deterministic LBD threshold
  num_conflicts = 0

  while (!timeout) {

    // deterministic branching: highest activity, ties broken by variable index
    next_var = select_branching_var(var_activities)
    if (next_var == NONE)
      return SAT                       // all variables assigned, no conflict

    if (propagate(next_var) == CONFLICT) {
      num_conflicts = num_conflicts + 1

      // seedless conflict analysis
      learned_clause = analyze_conflicts(clauses)
      if (is_empty(learned_clause))
        return UNSAT                   // conflict at decision level 0

      if (lbd(learned_clause) <= lbd_base)
        add_clause(learned_clause)
      backtrack(learned_clause)

      // deterministic restart every 1000 conflicts
      if (num_conflicts % 1000 == 0)
        restart()                      // fixed schedule, no random jitter
    }
  }

  return UNKNOWN                       // timed out

The key changes include:

  • All metrics such as variable activities are initialized deterministically from the input formula (sketched after this list)
  • Branching heuristics are fully deterministic, with ties broken by variable index
  • Restarts and clause database cleaning follow fixed, predetermined schedules
  • Learnt clause retention uses a fixed quality threshold on LBD
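As a sketch of the first change, and under illustrative assumptions about the clause representation (vectors of signed DIMACS-style literals), initial activities can be computed from the formula itself using a Jeroslow-Wang-style score, so every run starts from an identical state:

#include <cmath>
#include <cstdlib>
#include <vector>

// Deterministic activity initialization: score each variable from the
// input formula (shorter clauses weigh more, Jeroslow-Wang style), so
// the starting state is identical on every run. The representation is
// an assumption: clauses as vectors of signed DIMACS-style integers.
std::vector<double> init_var_scores(const std::vector<std::vector<int>>& clauses,
                                    int num_vars) {
    std::vector<double> activity(num_vars + 1, 0.0);     // variables 1..num_vars
    for (const auto& clause : clauses) {
        double w = std::pow(2.0, -static_cast<double>(clause.size()));
        for (int lit : clause)
            activity[std::abs(lit)] += w;                // both polarities count
    }
    return activity;
}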

Performance evaluation

The derandomized CryptoMiniSat exhibited far more consistent solving trajectories across runs on standard SAT competition benchmarks. Median solve times increased by 8% over the randomized variant, while run-to-run variability dropped by over 60%. Around 75% fewer restarts were triggered, further stabilizing resource usage.

Opportunities and Challenges

Derandomizing state-of-the-art SAT solvers opens promising prospects for reliable and resource-efficient deployment. However, it also poses the challenge of preserving the performance benefits that randomness provides in practice. Further research is needed on derandomization techniques for advanced SAT extensions that handle quantification and cardinality constraints.

Conclusion

This article presented an overview of the pivotal role randomness plays in current SAT solving, and of the methods for, and potential benefits of, removing it. As SAT technology matures into a robust computational framework, derandomization will be essential for unlocking further parallelization and performance gains through enhanced predictability and reproducibility.
