Automatically determining upper and lower bounds on the complexity of term rewrite systems (TRSs for short) is a challenging topic. Here the complexity of a TRS refers to the maximal length of derivations, measured in the size of the initial term. Still, depending on the rewrite strategy and on the restrictions placed on the set of allowed starting terms, various notions of complexity exist. The current focus of the competition is on providing polynomial upper bounds on the complexity of TRSs; eventually the competition will be extended to lower bounds.
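The notions sketched above can be made precise along the lines of the standard definitions in the literature (notation may differ slightly between papers): the derivation height of a terminating term measures the longest rewrite sequence starting from it, and the derivational complexity of a TRS R takes the maximum over all starting terms of bounded size.

```latex
\mathrm{dh}_R(t) = \max \{\, n \mid \exists\, u.\; t \to_R^n u \,\}
\qquad
\mathrm{dc}_R(n) = \max \{\, \mathrm{dh}_R(t) \mid |t| \leq n \,\}
```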
Overview of the Complexity Category
Currently, depending on the considered set of starting terms and the rewrite strategy, the competition is divided into four categories:
|                     | Derivational Complexity           | Runtime Complexity           |
| Full rewriting      | full derivational complexity      | full runtime complexity      |
| Innermost rewriting | innermost derivational complexity | innermost runtime complexity |
Derivational complexity and runtime complexity differ in the sense that for derivational complexity, no restriction is placed on the set of considered starting terms. For runtime complexity, however, only basic terms (i.e., terms whose root is a defined symbol and whose arguments are formed from constructor symbols and variables) are taken into account. See for example (Hirokawa, Moser, 2008) and (Moser, 2009) for further reading on the notion of runtime complexity.
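To see why the two notions can differ drastically, consider a classic doubling TRS in the spirit of the examples from the literature cited above, with the rules d(0) → 0 and d(s(x)) → s(s(d(x))). The following Python sketch (the tuple encoding of terms and the step-counting normalizer are illustrative choices, not part of any tool) counts innermost rewrite steps: basic starting terms d(s^k(0)) normalize in k+1 steps, i.e., linearly in the term size, whereas the nested (non-basic) starting terms d^m(s(0)) need exponentially many steps.

```python
# Example TRS:
#   d(0)    -> 0
#   d(s(x)) -> s(s(d(x)))
# Terms are encoded as nested tuples: ('0',), ('s', t), ('d', t).

def normalize(t):
    """Innermost-normalize t; return (normal form, number of rewrite steps)."""
    if t == ('0',):
        return t, 0
    if t[0] == 's':
        nf, n = normalize(t[1])
        return ('s', nf), n
    # t = ('d', u): normalize the argument first (innermost strategy)
    arg, n = normalize(t[1])
    if arg == ('0',):
        return ('0',), n + 1                  # apply d(0) -> 0
    nf, m = normalize(('d', arg[1]))          # apply d(s(x)) -> s(s(d(x)))
    return ('s', ('s', nf)), n + m + 1

def num(k):
    """Constructor term s^k(0)."""
    t = ('0',)
    for _ in range(k):
        t = ('s', t)
    return t

def nested(m):
    """Non-basic starting term d^m(s(0))."""
    t = ('s', ('0',))
    for _ in range(m):
        t = ('d', t)
    return t

# Basic terms d(s^k(0)), size k+2: step counts grow linearly.
print([normalize(('d', num(k)))[1] for k in range(1, 5)])   # -> [2, 3, 4, 5]

# Nested terms d^m(s(0)), size m+2: step counts grow exponentially
# (2^m + m - 1 steps, since each d doubles the tower of s's).
print([normalize(nested(m))[1] for m in range(1, 6)])       # -> [2, 5, 10, 19, 36]
```

So the runtime complexity of this TRS is linear, while its derivational complexity is exponential: restricting attention to basic terms rules out the nested calls of d that cause the blow-up.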
Furthermore, we distinguish between complexities induced by full rewriting as opposed to complexities induced by innermost rewriting.
Overview of the Competition
The complexity category has existed since 2008 and is part of the termination competition. Since 2014 the competition has been hosted on StarExec; detailed results of current and previous competitions can be found on the Termination Portal or here.
If you want to participate in the competition, check the Termination-Portal.
Participating Teams of Past and Upcoming Competitions
Here you can find a list (sorted by tool name) of participants of past competitions and expected participants of the upcoming competition.
An overview of the proof techniques applied by the participants in past competitions can be found here.
For detailed information on the testsuites employed and the actual results of the tools see the Termination-Portal.
Overview of the Competition Rules
Problems are given in the new TPDB format, see Termination-Portal, and are selected randomly from the newest version of the Termination Problem Database. In the selection process, emphasis is put on selecting problems that could not be automatically verified in past competitions; cf. page problem selection for details. Tools are ranked not only by the number of successful runs: the scoring also takes the precision of the bounds into account. See page scoring for further details. Unfortunately the database is currently somewhat biased towards termination problems. We encourage everybody to submit new problems to the TPDB that are interesting from a complexity-related perspective. (When submitting, please indicate that these examples should go into the complexity benchmark.)