Rules

This page provides the rules for the complexity competition.

Rules of the Competition

Input Format

Problems are given in the new TPDB format, cf. Termination-Portal, where the XML element problem has the type complexity. Further, depending on the category (DC, iDC, RC or iRC), the attributes strategy and startterm are set to FULL/INNERMOST and full/constructor-based, respectively.
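
For illustration only, a tool's driver might read these annotations as in the following sketch. It takes the description above literally (type, strategy and startterm as attributes of the problem element); consult the TPDB schema on the Termination-Portal for the authoritative layout.

    import xml.etree.ElementTree as ET

    def read_problem(path):
        """Read a TPDB complexity problem, assuming type, strategy and
        startterm are attributes of the problem element as described above."""
        problem = ET.parse(path).getroot()
        assert problem.get("type") == "complexity"
        strategy = problem.get("strategy")    # FULL or INNERMOST
        startterm = problem.get("startterm")  # full or constructor-based
        return strategy, startterm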

Output Format

The output format is adapted so that additional information on the asymptotic complexity is given for lower as well as upper bounds. Hence the output written to the first line of STDOUT shall be a complexity statement according to the following grammar:

S -> NO | MAYBE | YES( F, F) | YES( ?, F) | YES( F, ?)
F -> O(1) | O(n^Nat) | POLY

where "Nat" is a non-zero natural number and YES(F1, F2) means F2 is upper bound and that F1 is a lower-bound. "O(n^k)" is the usual big-Oh notation and "POLY" indicates an unspecified polynomial. Either of the functions F1, F2 (but not both) may be replaced by ``don't know, indicated by ?. Any remaining output on STDOUT will be considered as proof output and has to follow the normal rules for the competition.

Rewriting Modulo

In all categories relative rewriting rules are supported. While the semantics of relative rewriting is clear for full rewriting, there is a choice how to define relative rewriting in the innermost case: in order to innermost rewrite R modulo S, we employ innermost rewriting with respect to the union of R and S. For example, for R = {f(x) -> x} and S = {a -> b}, the term f(a) may not be contracted at the root, since its argument a is itself an (S-)redex; the only innermost derivation is f(a) -> f(b) -> b.

Scoring

Currently we focus on (polynomial) upper bounds; as the output format indicates, this restriction is to be lifted later (see below). In order to take the quality of the upper bounds provided by the different tools into account, we propose the following scoring algorithm, where we suppose that the number of competitors is x.

Firstly, for each TRS the competing tools are ranked, where constant complexity, i.e., the output "YES(?,O(1))", is best and "MAYBE", "NO" or a time-out is worst. As long as the output is of the form "YES(?,O(n^k))" or "YES(?,POLY)", the rank of the tool determines the number of points: the best tool gets x+1 points, the second gets x points, and so on. A negative output ("MAYBE", "NO" or a time-out) gets 0 points. If two or more tools share the same rank, the ranks of the remaining tools are adjusted in the usual way.

Secondly, the points over all considered systems are summed up, and the contestant with the highest total wins. If this does not establish a winner, the total number of wins is counted. If this still does not produce a winner, we give up and declare two (or more) winners.
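
The per-TRS part of this scoring is illustrated by the following sketch; function names and the answer encoding are ours, and in line with the current focus only upper bounds are compared:

    import math
    import re

    def upper_bound(answer):
        """Map an answer to a comparable upper bound: 0 for O(1), k for
        O(n^k), infinity for POLY, and None if the answer does not score
        (MAYBE, NO, time-out, or upper bound "?")."""
        m = re.fullmatch(r"YES\(.*,\s*(.+?)\s*\)", answer)
        if not m or m.group(1) == "?":
            return None
        if m.group(1) == "O(1)":
            return 0
        if m.group(1) == "POLY":
            return math.inf
        k = re.fullmatch(r"O\(n\^([1-9][0-9]*)\)", m.group(1))
        return int(k.group(1)) if k else None

    def points_per_trs(answers):
        """answers maps each of the x tools to its output for one TRS; the
        best tool gets x+1 points, ties share a rank, negative output gets 0."""
        x = len(answers)
        bounds = {tool: upper_bound(a) for tool, a in answers.items()}
        ranked = sorted(b for b in bounds.values() if b is not None)
        return {tool: 0 if b is None else x + 1 - ranked.index(b)
                for tool, b in bounds.items()}

For example, with three tools answering "YES(?,O(1))", "YES(?,O(n^2))" and "MAYBE", points_per_trs yields 4, 3 and 0 points, respectively.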

Problem Sets and Problem Selection

Testbeds

All authors of complexity tools that participated in the last complexity competition agreed upon the following selection of examples from the TPDB. For simplicity, we decided to use the same testbed for full and innermost rewriting; however, we decided on two separate testbeds for derivational and runtime complexity analysis. The testbeds are described in detail below; additionally, a list of problems for TPDB version 8.0 is provided for RC and DC. For the individual subcategories DC, iDC, RC and iRC, all strategy and start-term annotations are overwritten appropriately. Thus we simply ignore, for instance, context-sensitive strategies.

Derivational Complexity

For derivational complexity analysis we restrict the TPDB as follows:

  1. keep only one instance of duplicated problems (modulo strategy annotations). A list of the TRSs that appear multiple times in the current TPDB version 8.0 can be found here.
  2. remove all problems with relative, theory or conditional annotations. The reason is that currently none of the tools can handle such problems.

Runtime Complexity

For runtime complexity analysis, we further restrict the testbed for derivational complexity analysis as follows:

  1. remove all examples identified by the tool Oops, which participated in the competition of 2010. A list of these examples for TPDB version 8.0 can be found here. This step aims at eliminating trivial problems that
    1. admit only 0-ary constructors, or
    2. admit only 0-ary defined function symbols, or
    3. admit only rules with nested defined function symbols on left-hand sides, so that all basic terms are normal forms (see the sketch after this list).
  2. remove higher-order examples from the following folders, as here the notion of runtime complexity is inappropriate:
    1. AotoYamada_05,
    2. Applicative_05,
    3. Applicative_AG01_innermost,
    4. Applicative_first_order_05, and
    5. Mixed_HO_10.
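
The three criteria in step 1 are purely syntactic and can be checked mechanically. The following sketch illustrates them under our own term representation (terms as (symbol, argument list) pairs, variables as plain strings); it is an illustration, not the actual Oops implementation.

    def funs(term, acc):
        """Collect the (symbol, arity) pairs occurring in a term."""
        if isinstance(term, tuple):
            f, args = term
            acc.add((f, len(args)))
            for a in args:
                funs(a, acc)
        return acc

    def contains_defined(term, defined):
        """True if a defined symbol occurs anywhere in the term."""
        if isinstance(term, str):
            return False
        f, args = term
        return f in defined or any(contains_defined(a, defined) for a in args)

    def is_trivial(rules):
        """rules is a list of (lhs, rhs) pairs with lhs = (f, args)."""
        defined = {lhs[0] for lhs, _ in rules}
        sig = set()
        for lhs, rhs in rules:
            funs(lhs, sig)
            funs(rhs, sig)
        if all(n == 0 for f, n in sig if f not in defined):
            return True  # only 0-ary constructors
        if all(n == 0 for f, n in sig if f in defined):
            return True  # only 0-ary defined symbols
        # every left-hand side has a defined symbol below the root,
        # hence every basic term is a normal form
        return all(any(contains_defined(a, defined) for a in lhs[1])
                   for lhs, _ in rules)
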
Selection Function

On the testbeds described above, the randomly chosen subset that is actually used in the competition is determined as follows. Here we denote by select the function that maps each family of the TPDB to the number of randomly chosen examples within this family, as defined for the termination competition. The idea is to make select aware of the varying difficulty of proving complexity bounds. We do so by

  1. partitioning each family F into n different sets F = F1 ∪ ... ∪ Fn, where the sets Fi may be seen as collections of TRSs of similar difficulty. For this year's competition we propose to partition each family F into
    (i) those TRSs for which a polynomial bound could be shown automatically in last year's competition (denoted by Fauto below), and
    (ii) those for which a polynomial bound could not be shown (Fnonauto).
  2. In accordance with this partitioning, we define a probability distribution p on F. For this year's competition we propose the following distribution:
    for all subcategories and families F, we set p(Fauto) = 0.4 and p(Fnonauto) = 0.6. That is, we want to consider 40% examples that could be solved automatically in last year's competition and 60% examples that could not. Based on the probability distribution p, we define the extended selection function select_comp(F,i) = min(|Fi|, p(i) * select(F)), where |Fi| denotes the size of Fi.
  3. From each partition Fi of a family F, we randomly select select_comp(F,i) examples (see the sketch below).
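
A minimal sketch of this selection follows; it assumes each family is given as a mapping from partition names to problem lists, and that p(i) * select(F) is rounded down, which the rules above leave open. Function and parameter names are ours.

    import math
    import random

    def select_comp_family(family, select_F, p):
        """family maps each partition name i (e.g. 'auto', 'nonauto') to its
        list of problems; select_F is select(F); p maps partition names to
        probabilities. Implements select_comp(F,i) = min(|Fi|, p(i)*select(F))."""
        chosen = []
        for i, problems in family.items():
            n = min(len(problems), math.floor(p[i] * select_F))
            chosen.extend(random.sample(problems, n))
        return chosen

For instance, with select(F) = 10 and p = {'auto': 0.4, 'nonauto': 0.6}, at most 4 Fauto and 6 Fnonauto examples are drawn.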

Test Cases

In the following test cases we restrict to full rewriting.

Test Cases - Derivational Complexity

R = {a(b(x)) -> b(a(x))} 
    expected output "YES(?,O(n^2))" or "YES(O(n^1),O(n^2))" or "YES(O(n^2),O(n^2))"

R = {a(a(x)) -> b(c(x)), b(b(x)) -> a(c(x)), c(c(x)) -> a(b(x))}
    expected output "YES(O(n^2),?)" or "YES(?,?)"

R = {+(s(x),+(y,z)) -> +(x,+(s(s(y)),z)), +(s(x),+(y,+(z,w))) -> +(x,+(z,+(y,w)))}
    expected output "YES(?,?)"

Test Cases - Runtime Complexity

R = {a(b(x)) -> b(b(a(x)))}
    expected output "YES(?,O(n^1))" or "YES(O(n^1),O(n^1))"

R = {plus(0,y) -> y, plus(s(x),y) -> s(plus(x,y)), mul(0,y) -> 0, 
    mul(s(x),y) -> plus(mul(x,y),y)}
    expected output "YES(?,O(n^2))" or "YES(O(n^1),O(n^2))" or "YES(O(n^2),O(n^2))"

R = {f(x,0) -> s(0), f(s(x),s(y)) -> s(f(x,y)), g(0,x) -> g(f(x,x),x)}
    expected output "YES(?,O(n^1))" or "YES(O(n^1),O(n^1))"

R = {f(0) -> c, f(s(x)) -> c(f(x),f(x))}
    expected output "YES(?,?)"

In the following test cases we restrict to innermost rewriting.

Test Cases - Derivational Complexity

R = {f(x) -> c(x,x)}
    expected output "YES(O(n^1),O(n^1))" or "YES(?,O(n^1))"

Test Cases - Runtime Complexity

R = {f(x) -> c(x,x), g(0) -> 0, g(s(x)) -> f(g(x))}
    expected output "YES(O(n^1),O(n^1))" or "YES(?,O(n^1))"