SYNTCOMP 2024 Results

The results of the 2024 edition of SYNTCOMP are in. First, we give a list of the winners for each of the realizability tracks run this year: parity games, LTL, and finite-word LTL (LTLf). Then, we give more details about running times and scores, as well as some graphs (showing only the best configuration per tool, i.e. the one that solved the most benchmarks). Finally, we link to the participants of the competition.

The data used comes from the StarExec jobs. All data, scripts, benchmarks, and (binaries of the) tools are archived as a Zenodo artifact here. Congratulations to the winners and thanks to all the participants!
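
For readers who want to recompute the tables below from the StarExec job data, the per-configuration statistics are simple aggregations. A minimal Python sketch; the CSV column names, result strings, and time limit are assumptions rather than the actual StarExec export schema, and the seq/par counts are interpreted as solving within the CPU-time vs. the wall-clock budget:

    import csv
    from collections import defaultdict

    LIMIT = 3600.0  # hypothetical per-benchmark time limit, in seconds

    # (solver, configuration) -> [solved seq, solved par, total CPU, total wall]
    stats = defaultdict(lambda: [0, 0, 0.0, 0.0])

    with open("job.csv") as f:
        for row in csv.DictReader(f):
            key = (row["solver"], row["configuration"])
            cpu, wall = float(row["cpu_time"]), float(row["wallclock_time"])
            s = stats[key]
            if row["result"] in ("REALIZABLE", "UNREALIZABLE"):
                s[0] += cpu <= LIMIT   # counts toward "Solved (seq)"
                s[1] += wall <= LIMIT  # counts toward "Solved (par)"
            s[2] += cpu
            s[3] += wall

The best configuration per tool, as used in the graphs, is then simply the configuration with the largest solved count.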

The winners

Parity game realizability

  • Sequential realizability: Knor
  • Parallel realizability: Knor

LTL realizability

  • Sequential realizability: SemML
  • Parallel realizability: SemML

LTLf realizability

  • Sequential realizability: Nike
  • Parallel realizability: Nike

Parity game Realizability

Solver Configuration Solved (seq/par) CPU time (s) Wallclock time (s)
Knor real_rtl 458/458 421.28 421.96
ltlsynt pgreal 378/378 10441.48 10442.41

LTL Realizability

Solver Configuration Solved (seq/par) CPU time (s) Wallclock time (s)
SemML semml_real 972/967 32740.56 24985.99
Acacia bonsai real 768/788 31902.39 35682.57
A.bonsai (k-d trees) real 734/750 27733.07 30132.75
ltlsynt segrealacd 944/936 27596.39 27286.30
ltlsynt segrealds 899/891 27256.03 26623.95
ltlsynt segreallar 941/933 28680.57 28347.82
sdf real 840/838 30893.46 31067.37
sdf real_vg_hors_concours 839/838 29815.81 31876.83

LTLf Realizability

Solver Configuration Solved (seq/par) CPU time (s) Wallclock time (s)
LydiaSyft default.sh 220/220 6882.87 6887.78
LydiaSyft print_strategy_in_file.sh 220/220 6888.98 6893.82
Nike bdd_ff 230/230 3972.70 3943.48
Nike bdd_random 181/181 18472.69 18464.63
Nike bdd_tt 225/225 5319.70 5283.62
Nike default 234/234 5326.30 5326.69
Nike hash_random 137/137 12187.83 12167.53
Nike hash_tt 231/231 6464.69 6462.08
lisa seqreal1.sh 209/209 5649.48 5645.42
lisa seqreal2.sh 192/192 8297.10 8295.19
lisa seqreal3.sh 190/190 7642.33 7646.71
ltlfsynt segltlfrealbase 77/77 22988.60 22990.84
ltlfsynt segltlfrealopt 77/77 22951.00 22952.93
ltlfsynt segltlfsyntbase 78/78 24793.56 24795.99
ltlfsynt segltlfsyntopt 77/77 23028.29 23030.55
tople default 131/131 9375.09 9339.21

Benchmarks and Solvers

The set of benchmarks used in 2024 can be fetched from the usual repository.

Below, you will find information regarding the participating solvers, their source code, license type, and links to their most recent reports or preferred publications to be cited in case you use the tool in your academic work.

Solver Source repo License Report to cite
ltlsynt GitLab repo GPL-3.0+ Paper
Acacia bonsai GitHub repo GPL-3.0 Paper
Knor GitHub repo GPL-3.0 Paper
sdf GitHub repo MIT arXiv report
Nike GitHub repo AGPL v3.0 arXiv report
LydiaSyft GitHub repo AGPL v3.0 report
Lisa GitHub repo GPL v3.0 arXiv report
tople Custom: all rights reserved Contact authors
SemML GitLab repo GPL v3.0 Contact author

SYNTCOMP 2024: Call for Solvers and Benchmarks

We are preparing for the annual reactive synthesis competition, which will run in time for the results to be presented at CAV 2024 and SYNT 2024. The competition will run mostly in the same way as in previous years. If you plan on submitting a benchmark or solver, please go to StarExec, create a user, and contact us to be added to the SYNTCOMP space. There are main tracks for safety specifications in AIGER format, for LTL specifications in TLSF, for parity games in extended HOA, and:

New this year: The LTLf specification semantics have been updated so that the controller is expected to dictate the end of a trace. Please contact the organizers if this is not clear.

Each track is separated into subtracks for realizability checking and synthesis. Tools will be evaluated with respect to the number of solved benchmarks, and with respect to the size of their solutions.
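
For the synthesis subtracks, the size of solutions feeds into a quality ranking. The exact scoring function is defined in the SYNTCOMP rules and reports; purely as an illustration of the flavor of such a score (the constants and the reference-size convention below are assumptions, not the official formula), consider:

    from math import log10

    def quality_points(size, ref_size):
        # Illustrative sketch only: up to 2 points per solved instance,
        # decreasing logarithmically as the produced controller grows
        # relative to a reference solution; never negative.
        q = (size + 1) / (ref_size + 1)
        return max(0.0, min(2.0, 2.0 - log10(q)))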

* Call for Benchmarks *

We are looking for new synthesis benchmarks to include in SYNTCOMP, in any of the formats (AIGER, TLSF, extended HOA). New benchmarks can be included in SYNTCOMP 2024 if they are correctly submitted to our benchmark repository by June 1, 2024. Please create a new issue, attach an archive with your new benchmarks (using a sensible name), and make sure the archive also contains a readme.md or readme.pdf describing the benchmarks, relevant papers, etc. We will check that everything is OK and then add it to the repository. Alternatively, you can create a pull request with your benchmarks. Contact us (see above) if you have any questions regarding the scope or format of benchmarks.

* Call for Solvers *

Solvers for all tracks of the competition will be accepted until June 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in SYNTCOMP 2024 if it is correctly uploaded into StarExec by June 1, 2024.

* Communication *

If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, please write to jacobs@cispa.saarland, guillermo.perez@uantwerpen.be, or philipp@lrde.epita.fr

SYNTCOMP 2023 Results

The results of the 2023 edition of SYNTCOMP are in. First, we give a list of the winners for each of the tracks run this year: parity games, LTL, and finite-word LTL. Then, we give more details about running times and scores, as well as some graphs (showing only the best configuration per tool, i.e. the one that solved the most benchmarks). Finally, we link to the participants of the competition.

All graphs and tables were obtained as described here. The data used comes from the StarExec jobs. All data, scripts, benchmarks, and (binaries of the) tools are archived as a Zenodo artifact here. Congratulations to the winners and thanks to all the participants!

The winners

Parity games

  • Sequential realizability: Knor
  • Parallel realizability: Knor
  • Sequential synthesis: ltlsynt
  • Parallel synthesis: Knor
  • Sequential synthesis (quality): Strix
  • Parallel synthesis (quality): Strix

LTL

  • Sequential realizability: Strix
  • Parallel realizability: Strix
  • Sequential synthesis: Strix
  • Parallel synthesis: Strix
  • Sequential synthesis (quality): Strix
  • Parallel synthesis (quality): Strix

LTLf

  • Sequential realizability: Nike
  • Parallel realizability: Nike

Parity game Realizability

Solver Configuration Solved (seq/par) CPU time (s) Wallclock time (s)
Knor repair real_tl 876/876 1035.45 648.11
Strix PGAME hoa_realizability_parallel 800/835 61377.20 65541.26
Strix PGAME hoa_realizability_sequential 802/802 50668.25 49536.76
ltlsynt pgreal 733/733 39810.50 39811.10

Parity game Synthesis

Solver Config Solved (seq/par) CPU time (s) Wallclock (s) Score (seq/par)
Knor repair synt_sym 282/282 3257.02 3257.45 439.37/439.37
Knor repair synt_sym_abc 283/283 3846.52 3568.78 489.49/489.49
Knor repair synt_tl 272/272 9.66 9.25 289.48/289.48
Strix PGAME hoa_synthesis_parallel 285/285 4596.27 4359.80 554.48/554.48
Strix PGAME hoa_synthesis_sequential 284/284 4562.27 4350.23 552.65/552.65
ltlsynt pgsynt 292/292 1687.63 1672.48 464.30/464.30
ltlsynt pgsyntabc1 294/294 1718.05 1720.58 504.33/504.33

LTL Realizability

Solver Configuration Solved (seq/par) CPU time (s) Wallclock time (s)
SPORE ltl-real-recpar-bdd-seq 479/479 30222.87 30201.10
SPORE ltl-real-recpar-fbdd-32-nodecomp-seq 666/666 35025.32 34928.26
SPORE ltl-real-recpar-fbdd-32-seq 588/588 31316.82 31223.80
SPORE ltl-real-recpar-fbdd-64-nodecomp-seq 660/660 30986.12 30956.60
SPORE ltl-real-recpar-fbdd-64-seq 600/600 33176.62 33168.87
Strix ltl_real_acd_bfs 914/914 31507.38 31516.81
Strix ltl_real_zlk_bfs 921/921 34268.65 34271.52
Strix ltl_real_zlk_pq 948/948 35447.26 35409.72
abonsai real 703/716 26947.08 26005.19
ltlsynt23 segrealacd 845/845 39379.60 39370.62
ltlsynt23 segrealds 809/809 32401.23 32353.34
ltlsynt23 segreallar 845/845 35477.35 35460.15
sdf real 782/786 23287.54 24888.38
sdf real_p 709/723 36894.35 39242.12

LTL Synthesis

Solver Config Solved (seq/par) CPU time (s) Wallclock (s) Score (seq/par)
Otus otus-ltl-synthesis-parallel-sylvan 330/332 7791.82 3340.19 264.98/266.96
Otus otus-ltl-synthesis-sequential-jbdd 327/327 2248.72 2235.18 265.37/265.37
Strix ltl_synth_acd_bfs 472/472 11715.48 11701.96 863.33/863.33
Strix ltl_synth_zlk_bfs 490/490 15849.71 15805.41 895.74/895.74
Strix ltl_synth_zlk_pq 488/488 12977.18 12917.15 888.37/888.37
abonsai synt 382/385 11886.38 7609.35 573.52/578.05
ltlsynt23 segsyntacdabc 416/416 9392.32 9395.10 573.13/573.13
ltlsynt23 segsyntdsabc 412/412 5319.05 5313.80 614.11/614.11
ltlsynt23 segsyntlarabc 413/413 9472.05 9456.58 559.24/559.24
sdf synt 415/417 4007.84 5611.84 493.06/496.44
sdf synt_p 381/383 12838.66 11905.57 431.96/435.34

LTLf Realizability

Solver Configuration Solved (seq/par) CPU time (s) Wallclock time (s)
LydiaSyft default.sh 255/255 3885.47 3890.82
LydiaSyft print_strategy_in_file.sh 255/255 3964.93 3969.52
Nike bdd_ff 264/264 4621.82 4617.48
Nike bdd_random 235/235 40752.96 40766.24
Nike bdd_tt 260/260 5832.94 5820.82
Nike default 269/269 6383.23 6364.09
Nike hash_random 177/177 30639.28 30647.51
Nike hash_tt 266/266 9680.75 9678.20
lisa-syntcomp seqreal1.sh 225/225 7078.87 7081.29
lisa-syntcomp seqreal2.sh 206/206 9674.94 9664.04
lisa-syntcomp seqreal3.sh 205/205 9147.83 9152.23
ltlfsynt23prebuild segltlfrealbase 85/85 18451.05 18453.08
ltlfsynt23prebuild segltlfrealopt 85/85 18422.37 18415.12
ltlfsynt23prebuild segltlfsyntbase 85/85 18555.46 18540.76
ltlfsynt23prebuild segltlfsyntopt 85/85 18503.75 18505.74
tople default 147/147 15514.12 15523.45

Benchmarks and Solvers

The set of benchmarks used in 2023 can be fetched from the usual repository.

Below, you will find information regarding the participating solvers, their
source code, license type, and links to their most recent reports or preferred
publications to be cited in case you use the tool in your academic work.

Solver Source repo License Report to cite
ltlsynt GitLab repo GPL-3.0+
Acacia bonsai GitHub repo GPL-3.0 Paper
Knor GitHub repo GPL-3.0 arXiv report
sdf GitHub repo MIT arXiv report
Nike GitHub repo AGPL v3.0 arXiv report
LydiaSyft GitHub repo AGPL v3.0 report
Lisa GitHub repo GPL v3.0 arXiv report
tople Custom: all rights reserved Contact authors

SYNTCOMP 2023: Call for Solvers and Benchmarks

We are preparing for the annual reactive synthesis competition, which will run in time for the results to be presented at CAV 2023 and SYNT 2023. The competition will run mostly in the same way as in previous years. If you plan on submitting a benchmark or solver, please go to StarExec, create a user, and contact us to be added to the SYNTCOMP space. There are main tracks for safety specifications in AIGER format, for LTL specifications in TLSF, for parity games in extended HOA, and:

New this year: LTLf specifications in an extended version of the TLSF format! If you are interested, please contact us for more details on the exact semantics and the format we will be using for this track.

Each track is separated into subtracks for realizability checking and synthesis. Tools will be evaluated with respect to the number of solved benchmarks, and with respect to the size of their solutions.

* Call for Benchmarks *

We are looking for new synthesis benchmarks to include in SYNTCOMP, in any of the formats (AIGER, TLSF, extended HOA). New benchmarks can be included in SYNTCOMP 2023 if they are correctly submitted to our benchmark repository by June 1, 2023. Please create a new issue, attach an archive with your new benchmarks (using a sensible name), and make sure the archive also contains a readme.md or readme.pdf describing the benchmarks, relevant papers, etc. We will check that everything is OK and then add it to the repository. Alternatively, you can create a pull request with your benchmarks. Contact us (see above) if you have any questions regarding the scope or format of benchmarks.

* Call for Solvers *

Solvers for all tracks of the competition will be accepted until June 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in SYNTCOMP 2023 if it is correctly uploaded into StarExec by June 1, 2023.

* Communication *

If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, please write to jacobs@cispa.saarland, guillermo.perez@uantwerpen.be, or philipp@lrde.epita.fr

SYNTCOMP 2022 Results

The results of the 2022 competition on Reactive Synthesis are in! As in previous years, the competition was run on StarExec. This year, the competition consisted of LTL tracks, with specifications in TLSF, and parity-game tracks, with specifications given in extended HOA. It is worth mentioning that we have removed the separation between so-called parallel and sequential tracks in favor of having two rankings based on total Wallclock time vs. total CPU time used by solvers. This year, though, the rankings happen to coincide. For more information on this and other updates to the competition, you can take a look at the most recent report about the competition.
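
Concretely, the two rankings order configurations by the number of benchmarks solved within the CPU-time budget and within the wall-clock budget, respectively. A small Python sketch of the idea (placeholder numbers, not competition data; breaking ties by total time used is an assumption of this sketch):

    # (solver, solved within CPU budget, solved within wall-clock budget,
    #  total CPU s, total wall-clock s)
    rows = [
        ("toolA", 500, 510, 40000.0, 20000.0),
        ("toolB", 480, 480, 15000.0, 15000.0),
    ]
    cpu_ranking = sorted(rows, key=lambda r: (-r[1], r[3]))
    wall_ranking = sorted(rows, key=lambda r: (-r[2], r[4]))

With data like the above, a heavily parallel tool can gain places in the wall-clock ranking while dropping in the CPU-time one; in 2022 the two orders happened to agree.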

Rankings

We present first the results for the parity-game tracks and then move to the LTL tracks.

Parity games: realizability

To summarize, the ranking of the participating tools is as follows.
  1. Knor
  2. Strix
  3. ltlsynt
More details regarding each solver configuration are given in the table below and the cactus plots that follow (where only the best configuration per tool is shown). The raw information can be fetched from the StarExec job.
Solver Configuration Solved benchmarks Total CPU time (s) Total Wallclock time (s)
Strix hoa_realizability_parallel 548 308532 124160
Strix hoa_realizability_sequential 515 89584 89232
ltlsynt pgreal 441 69416 69421
Knor real_ltl 587 1722 1039
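
The cactus plots themselves are straightforward to produce from the per-benchmark solve times: sort each tool's times and plot how many benchmarks are solved within a growing time budget. A minimal matplotlib sketch (the times below are placeholders, not competition data):

    import matplotlib.pyplot as plt

    # solver -> wall-clock times (s) of its solved benchmarks (placeholder data)
    times = {
        "Knor": [0.1, 0.2, 1.5],
        "Strix": [0.3, 2.0, 10.0],
        "ltlsynt": [0.5, 4.0],
    }

    for solver, ts in times.items():
        ts = sorted(ts)
        # x: number of benchmarks solved, y: time budget needed to solve them
        plt.plot(range(1, len(ts) + 1), ts, label=solver)

    plt.xlabel("benchmarks solved")
    plt.ylabel("time (s)")
    plt.yscale("log")
    plt.legend()
    plt.savefig("cactus.png")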

Parity games: synthesis

To summarize, the time-based ranking of the participating tools is the same as for the realizability sub-track. More details regarding each solver configuration are given in the table below and the cactus plots that follow (where only the best configuration per tool is shown). The raw information can be fetched from the StarExec job.
Solver Configuration Solved benchmarks Total CPU time (s) Total Wallclock time (s)
Strix hoa_synthesis_parallel 470 159456 73077
Strix hoa_synthesis_sequential 451 48321 47884
ltlsynt pgsynt 334 69254 69250
ltlsynt pgsyntabc1 335 69386 69440
Knor synt_sym 278 4839 4839
Knor synt_sym_abc 278 5706 5401
Knor synt_tl 480 1695 994
The quality ranking is quite different.
  1. Strix
  2. ltlsynt
  3. Knor
More details regarding each solver configuration are given in the table below.
Solver Configuration Synthesized controllers Score
Strix hoa_synthesis_parallel 199 385
Strix hoa_synthesis_sequential 199 384
ltlsynt pgsynt 204 321
ltlsynt pgsyntabc1 205 358
Knor synt_sym 197 292
Knor synt_sym_abc 197 340
Knor synt_tl 201 268

LTL: realizability

To summarize, the ranking of the participating tools is as follows.
  1. Strix
  2. ltlsynt
  3. Acacia bonsai
  4. sdf
  5. SPORE
  6. Otus
More details regarding each solver configuration are given in the table below and the cactus plots that follow (where only the best configuration per tool is shown). The raw information can be fetched from the StarExec job.
Solver Configuration Solved benchmarks Total CPU time (s) Total Wallclock time (s)
_acab_sb best 826 211030 86632
SPORE ltl-real-recpar-bdd-seq 519 67800 67830
SPORE ltl-real-recpar-fbdd-32-nodecomp-seq 715 71950 71960
SPORE ltl-real-recpar-fbdd-32-seq 646 69159 69183
SPORE ltl-real-recpar-fbdd-64-nodecomp-seq 715 72367 72309
SPORE ltl-real-recpar-fbdd-64-seq 646 68257 68305
Strix ltl_real_acd_bfs 892 43963 43974
Strix ltl_real_zlk_bfs 893 39510 39518
Strix ltl_real_zlk_pq 924 46340 46294
Otus otus-ltl-realizability-parallel-sylvan 588 227382 57363
Otus otus-ltl-realizability-sequential-jbdd 558 32009 32011
sdf real 783 50466 40399
sdf real_p 734 129252 72772
ltlsynt segrealacd 831 67066 67064
ltlsynt segrealds 800 55803 55810
ltlsynt segreallar 828 57983 57956

LTL: synthesis

To summarize, the time-based ranking of the participating tools is as follows.
  1. Strix
  2. sdf
  3. ltlsynt
  4. Otus
More details regarding each solver configuration are given in the table below and the cactus plots that follow (where only the best configuration per tool is shown). The raw information can be fetched from the StarExec job.
Solver Configuration Solved benchmarks Total CPU time (s) Total Wallclock time (s)
Strix ltl_synth_acd_bfs 820 30218 30195
Strix ltl_synth_zlk_bfs 818 23009 23012
Strix ltl_synth_zlk_pq 845 29458 29380
Otus otus-ltl-synthesis-parallel-sylvan 503 125103 31644
Otus otus-ltl-synthesis-sequential-jbdd 490 27739 27705
ltlsynt segsyntacdabc 709 50445 50446
ltlsynt segsyntdsabc 689 46082 46196
ltlsynt segsyntlarabc 703 49251 49337
sdf synt 721 47772 37822
sdf synt_p 666 112532 61328
For the quality ranking, ltlsynt and sdf swap places.
  1. Strix
  2. ltlsynt
  3. sdf
  4. Otus
More details regarding each solver configuration are given in the table below.
Solver Configuration Synthesized controllers Score
Strix ltl_synth_acd_bfs 433 785
Strix ltl_synth_zlk_bfs 432 780
Strix ltl_synth_zlk_pq 430 772
Otus otus-ltl-synthesis-parallel-sylvan 305 243
Otus otus-ltl-synthesis-sequential-jbdd 298 238
ltlsynt segsyntacdabc 375 500
ltlsynt segsyntdsabc 377 540
ltlsynt segsyntlarabc 371 487
sdf synt 382 447
sdf synt_p 354 400

Benchmarks and Solvers

The set of benchmarks used in 2022 can be fetched from the usual repository. Below, you will find information regarding the participating solvers, their source code, license type, and links to their most recent reports or preferred publications to be cited in case you use the tool in your academic work.
Solver StarExec link Source repo License Report to cite
ltlsynt StarExec GitLab repo GPL-3.0+
Acacia bonsai GitHub repo GPL-3.0 arXiv report
Knor GitHub repo GPL-3.0 arXiv report
sdf GitHub repo MIT arXiv report

SYNTCOMP 2022: Call for tools and benchmarks

We are preparing for the annual reactive synthesis competition. This year, SYNTCOMP will be part of the FLoC’22 Olympic Games. The competition will run in time for the results to be presented during CAV 2022 and SYNT 2022.

The competition will run on StarExec, as in 2020 and 2021. If you plan on submitting a benchmark or solver, please go to https://www.starexec.org/, create a user,
and contact us (jacobs@cispa.saarland, guillermoalberto.perez@uantwerpen.be) to be added to the SYNTCOMP space. There are main tracks for
– safety specifications in AIGER format,
– LTL specifications in TLSF, and
– parity games in Hanoi Omega-Automata (HOA) format,
each separated into subtracks for realizability checking and synthesis. Tools will be evaluated with respect to the number of solved benchmarks, and with respect to the size of their solutions.
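
For illustration, a minimal specification in the TLSF format used by the LTL track looks roughly as follows (a toy request/grant arbiter; a sketch rather than a submission-ready benchmark):

    INFO {
      TITLE:       "simple arbiter"
      DESCRIPTION: "every request is eventually granted"
      SEMANTICS:   Mealy
      TARGET:      Mealy
    }

    MAIN {
      INPUTS  { r; }
      OUTPUTS { g; }
      GUARANTEES { G (r -> F g); }
    }

Tools typically consume TLSF either directly or after translation with the syfco converter.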

* Call for Benchmarks *

We are looking for new synthesis benchmarks to include in SYNTCOMP, in AIGER, TLSF, and HOA. New benchmarks can be included in this year’s SYNTCOMP if they are correctly uploaded (as an issue) to the SYNTCOMP benchmark repository by July 1. Contact us (see above) if you have any questions regarding the scope or format of benchmarks.

* Call for Solvers *

Solvers for all tracks of the competition will be accepted until July 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in this year’s SYNTCOMP if it is correctly uploaded (i.e. passes a test run) into StarExec by July 1.

* Communication *

If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, please write to jacobs@cispa.saarland and guillermoalberto.perez@uantwerpen.be

SYNTCOMP 2021 Results

The results of SYNTCOMP 2021 have been presented at the SYNT workshop, part of CAV 2021. The slides used for the SYNT presentation can be found here. The experiments can now be browsed on StarExec here (see the subspaces corresponding to each track).

This year, the competition consisted of LTL tracks, with specifications in TLSF, and parity-game tracks, with specifications in extended HOA. Since we had an increase in participants, this year we had sequential and parallel versions of the realizability and synthesis sub-tracks! Also worth mentioning is the reorganization of our benchmark library.

Winners

Knor won the parity-game synthesis track, solving 276/303 benchmarks. It also won the hard-parity-game realizability track by solving 216/217 benchmarks.

Strix placed first in the quality ranking for parity-game synthesis. In its LTL/TLSF version, Strix won the LTL sequential realizability track by solving 827/924 benchmarks; and placed first in the quality ranking for LTL sequential synthesis.

sdf won the LTL parallel realizability track, solving 730/942 benchmarks, and the LTL parallel synthesis track.

Congratulations to the winners, and many thanks to all contributors of benchmarks and all participants!

SYNTCOMP 2021: Call for Benchmarks and Solvers

We are preparing for the annual reactive synthesis competition, which will run in time for the results to be presented at CAV 2021 and SYNT 2021.

The competition will run on StarExec, as in 2020. If you plan on submitting a benchmark or solver, please go to https://www.starexec.org/, create a user,
and contact us (jacobs@cispa.saarland, guillermoalberto.perez@uantwerpen.be) to be added to the SYNTCOMP space. There are main tracks for
– safety specifications in AIGER format,
– LTL specifications in TLSF, and
– parity games in Hanoi Omega-Automata (HOA) format,
each separated into subtracks for realizability checking and synthesis, and
into sequential and parallel execution modes. Tools will be evaluated with respect to the number of solved benchmarks, and with respect to the size of their solutions.

* Call for Benchmarks *

We are looking for new synthesis benchmarks to include in SYNTCOMP, in AIGER, TLSF, and HOA. New benchmarks can be included in SYNTCOMP 2021 if they are correctly uploaded (as an issue) to the SYNTCOMP benchmark repository by June 1, 2021. Contact us (see above) if you have any questions regarding the scope or format of benchmarks.

* Call for Solvers *

Solvers for all tracks of the competition will be accepted until June 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in SYNTCOMP 2021 if it is correctly uploaded (i.e. passes a test run) into StarExec by June 1, 2021.

* Communication *

If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, please write to jacobs@cispa.saarland and guillermoalberto.perez@uantwerpen.be

SYNTCOMP 2020 Results

The results of SYNTCOMP 2020 have been presented at the SYNT workshop, part of CAV 2020. The slides used for the SYNT presentation can be found here. The experiments can now be browsed on StarExec here (see the subspaces corresponding to each track).

This year, the competition was split into a safety track, based on specifications in AIGER format, an LTL track, with specifications in TLSF, and a parity-game track, with specifications in an extended HOA format. Just like last year, for the safety track we only considered the realizability-checking task. Furthermore, we only ran the sequential mode of the resulting three categories. For the synthesis task of the LTL track, we have an additional ranking based not only on the quantity but also on the quality of solutions.

Here are the tools that had the best results in these categories:

simpleBDDSolver (solved 186 out of 274 problems in the AIGER/safety Realizability Track)

Strix (solved 424 out of 434 problems in the TLSF/LTL Realizability track and won the quality ranking in the TLSF/LTL Synthesis Track)

Strix (solved all problems in the EHOA/parity game Realizability track and was the only participant in the synthesis track)

Congratulations to the winners, and many thanks to all contributors of benchmarks and all participants!

SYNTCOMP 2020: Call for Benchmarks and Solvers

We are preparing for the annual reactive synthesis competition, which will run in time for the results to be presented at CAV 2020 and SYNT 2020.

The competition will run on StarExec, as in 2019. If you plan on submitting a benchmark or solver, please go to https://www.starexec.org/, create a user,
and contact us (jacobs@cispa.saarland, guillermoalberto.perez@uantwerpen.be) to be added to the SYNTCOMP space. There are main tracks for
– safety specifications in AIGER format,
– LTL specifications in TLSF, and
– a new track for parity games in Hanoi Omega-Automata (HOA) format,
each separated into subtracks for realizability checking and synthesis, and
into sequential and parallel execution modes. Tools will be evaluated with respect to the number of solved benchmarks, and with respect to the size of their solutions.
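
For illustration, the extended HOA format used for the parity-game track is standard HOA plus a controllable-AP header listing the atomic propositions owned by the controller. A minimal two-state sketch (illustrative only, not an actual benchmark; the acceptance line spells out the usual parity max even condition):

    HOA: v1
    States: 2
    Start: 0
    AP: 2 "req" "grant"
    controllable-AP: 1
    acc-name: parity max even 2
    Acceptance: 2 Fin(1) & Inf(0)
    --BODY--
    State: 0 {0}
    [!0] 0
    [0] 1
    State: 1 {1}
    [1] 0
    [!1] 1
    --END--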

* Call for Benchmarks *

We are looking for new synthesis benchmarks to include in SYNTCOMP, in AIGER, TLSF, and HOA. New benchmarks can be included in SYNTCOMP 2020 if they are correctly uploaded (i.e. pass syntax checks) into StarExec by June 1, 2020. Contact us (see above) if you have any questions regarding the scope or format of benchmarks.

* Call for Solvers *

Solvers for all tracks of the competition will be accepted until June 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in SYNTCOMP 2020 if it is correctly uploaded (i.e. passes a test run) into StarExec by June 1, 2020.

* Communication *

If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, please write to jacobs@cispa.saarland and guillermoalberto.perez@uantwerpen.be