Stopping and restarting strategy for stochastic sequential search in global optimization

Zelda B. Zabinsky, David Bulger*, Charoenchai Khompatraporn

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    18 Citations (Scopus)


    Two common questions when one uses a stochastic global optimization algorithm, e.g., simulated annealing, are when to stop a single run of the algorithm, and whether to restart with a new run or terminate the entire algorithm. In this paper, we develop a stopping and restarting strategy that considers tradeoffs between the computational effort and the probability of obtaining the global optimum. The analysis is based on a stochastic process called Hesitant Adaptive Search with Power-Law Improvement Distribution (HASPLID). HASPLID models the behavior of stochastic optimization algorithms, and motivates an implementable framework, Dynamic Multistart Sequential Search (DMSS). We demonstrate here the practicality of DMSS by using it to govern the application of a simple local search heuristic on three test problems from the global optimization literature.
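    The paper's framework (DMSS) decides adaptively when to stop one run and when to restart. As a rough illustration of the multistart idea it governs, here is a minimal sketch of random-restart local search with a stall-based stopping rule; the function names, the fixed restart budget, and the stall threshold are all illustrative assumptions, not the authors' HASPLID-derived rules.

    ```python
    import random

    def local_search(f, x0, step=0.1, max_stall=50):
        """Simple stochastic hill climb: perturb x, keep improvements,
        and stop the run after max_stall consecutive non-improving steps
        (a crude stand-in for a principled stopping rule)."""
        x, fx = x0, f(x0)
        stall = 0
        while stall < max_stall:
            y = [xi + random.gauss(0, step) for xi in x]
            fy = f(y)
            if fy < fx:
                x, fx, stall = y, fy, 0
            else:
                stall += 1
        return x, fx

    def multistart(f, dim, bounds, budget=20):
        """Restart local search from fresh uniform points and keep the best.
        The fixed budget stands in for the effort-vs-probability tradeoff
        that DMSS manages dynamically."""
        best_x, best_f = None, float("inf")
        for _ in range(budget):
            x0 = [random.uniform(*bounds) for _ in range(dim)]
            x, fx = local_search(f, x0)
            if fx < best_f:
                best_x, best_f = x, fx
        return best_x, best_f

    if __name__ == "__main__":
        random.seed(0)
        sphere = lambda x: sum(xi * xi for xi in x)  # toy test objective
        x, fx = multistart(sphere, dim=2, bounds=(-5.0, 5.0))
        print(fx)
    ```

    The sketch restarts a fixed number of times; the paper's contribution is precisely to replace such a fixed budget with a data-driven decision of whether another run is worth its expected cost.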

    Original language: English
    Pages (from-to): 273-286
    Number of pages: 14
    Journal: Journal of Global Optimization
    Issue number: 2
    Publication status: Published - Feb 2010
