
SYNTCOMP 2022 Results

The results of the 2022 competition on Reactive Synthesis are in! As in previous years, the competition was run on StarExec. This year, the competition consisted of LTL tracks, with specifications in TLSF, and parity-game tracks, with specifications in extended HOA. It is worth mentioning that we have removed the separation between so-called parallel and sequential tracks in favor of two rankings: one based on the total wallclock time and one based on the total CPU time used by the solvers. This year, though, the two rankings happen to coincide. For more information on this and other updates, take a look at the most recent competition report.
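For readers who have not seen these formats before, here are two toy examples, both invented purely for illustration (consult the format documents for the full syntax). First, a minimal sketch of a TLSF specification for a one-client arbiter:

    INFO {
      TITLE:       "toy arbiter"
      DESCRIPTION: "every request is eventually granted"
      SEMANTICS:   Mealy
      TARGET:      Mealy
    }

    MAIN {
      INPUTS  { r; }
      OUTPUTS { g; }
      GUARANTEES {
        G (r -> F g);
      }
    }

And a sketch of a parity game in extended HOA, where the controllable-AP header declares which atomic propositions belong to the controller and the Acceptance line spells out a max even parity condition with two colors; the game itself is made up for this example:

    HOA: v1
    States: 2
    Start: 0
    AP: 2 "req" "grant"
    controllable-AP: 1
    acc-name: parity max even 2
    Acceptance: 2 Fin(1) & Inf(0)
    --BODY--
    State: 0
    [0 & !1] 1 {1}
    [!0 | 1] 0 {0}
    State: 1
    [1] 0 {0}
    [!1] 1 {1}
    --END--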

Rankings

We present first the results for the parity-game tracks and then move to the LTL tracks.

Parity games: realizability

To summarize, the ranking of the participating tools is as follows.
  1. Knor
  2. Strix
  3. ltlsynt
More details regarding each solver configuration are given in the table below and in the cactus plots that follow (where only the best configuration per tool is shown). The raw information can be fetched from the StarExec job.
Solver Configuration Solved benchmarks Total CPU time (s) Total Wallclock time (s)
Strix hoa_realizability_parallel 548 308532 124160
Strix hoa_realizability_sequential 515 89584 89232
ltlsynt pgreal 441 69416 69421
Knor real_ltl 587 1722 1039
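Incidentally, cactus plots like the ones in this post are easy to regenerate from the raw job output. The Python sketch below assumes a hypothetical CSV layout; the column names "solver", "configuration", "cpu time", and "result" are guesses at the StarExec export, not its documented schema:

    # Build a cactus plot (instances solved within a per-instance time budget)
    # from a StarExec job-information CSV. Column names are assumptions.
    import csv
    from collections import defaultdict

    import matplotlib.pyplot as plt

    times = defaultdict(list)
    with open("Job_information.csv", newline="") as f:
        for row in csv.DictReader(f):
            # Keep solved instances only; these result strings are assumed.
            if row["result"] in ("REALIZABLE", "UNREALIZABLE"):
                key = (row["solver"], row["configuration"])
                times[key].append(float(row["cpu time"]))

    for (solver, config), ts in sorted(times.items()):
        ts.sort()  # k-th point: k instances each solved within ts[k-1] seconds
        plt.plot(ts, range(1, len(ts) + 1), label=f"{solver} ({config})")

    plt.xlabel("CPU time per instance (s)")
    plt.ylabel("Instances solved")
    plt.legend()
    plt.savefig("cactus.png")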

Parity games: synthesis

To summarize, the time-based ranking of the participating tools is the same as for the realizability sub-track. More details regarding each solver configuration are given in the table below and in the cactus plots that follow (where only the best configuration per tool is shown). The raw information can be fetched from the StarExec job.
Solver Configuration Solved benchmarks Total CPU time (s) Total Wallclock time (s)
Strix hoa_synthesis_parallel 470 159456 73077
Strix hoa_synthesis_sequential 451 48321 47884
ltlsynt pgsynt 334 69254 69250
ltlsynt pgsyntabc1 335 69386 69440
Knor synt_sym 278 4839 4839
Knor synt_sym_abc 278 5706 5401
Knor synt_tl 480 1695 994
The quality ranking is quite different.
  1. Strix
  2. ltlsynt
  3. Knor
More details regarding each solver configuration are given in the table below.
Solver Configuration Synthesized controllers Score
Strix hoa_synthesis_parallel 199 385
Strix hoa_synthesis_sequential 199 384
ltlsynt pgsynt 204 321
ltlsynt pgsyntabc1 205 358
Knor synt_sym 197 292
Knor synt_sym_abc 197 340
Knor synt_tl 201 268

LTL: realizability

To summarize, the ranking of the participating tools is as follows.
  1. Strix
  2. ltlsynt
  3. Acacia bonsai
  4. sdf
  5. SPORE
  6. Otus
More details regarding each solver configuration are given in the table below and in the cactus plots that follow (where only the best configuration per tool is shown). The raw information can be fetched from the StarExec job.
Solver Configuration Solved benchmarks Total CPU time (s) Total Wallclock time (s)
Acacia bonsai _acab_sbbest 826 211030 86632
SPORE ltl-real-recpar-bdd-seq 519 67800 67830
SPORE ltl-real-recpar-fbdd-32-nodecomp-seq 715 71950 71960
SPORE ltl-real-recpar-fbdd-32-seq 646 69159 69183
SPORE ltl-real-recpar-fbdd-64-nodecomp-seq 715 72367 72309
SPORE ltl-real-recpar-fbdd-64-seq 646 68257 68305
Strix ltl_real_acd_bfs 892 43963 43974
Strix ltl_real_zlk_bfs 893 39510 39518
Strix ltl_real_zlk_pq 924 46340 46294
Otus otus-ltl-realizability-parallel-sylvan 588 227382 57363
Otus otus-ltl-realizability-sequential-jbdd 558 32009 32011
sdf real 783 50466 40399
sdf real_p 734 129252 72772
ltlsynt segrealacd 831 67066 67064
ltlsynt segrealds 800 55803 55810
ltlsynt segreallar 828 57983 57956

LTL: synthesis

To summarize, the time-based ranking of the participating tools is as follows.
  1. Strix
  2. sdf
  3. ltlsynt
  4. Otus
More details regarding each solver configuration are given in the table below and in the cactus plots that follow (where only the best configuration per tool is shown). The raw information can be fetched from the StarExec job.
Solver Configuration Solved benchmarks Total CPU time (s) Total Wallclock time (s)
Strix ltl_synth_acd_bfs 820 30218 30195
Strix ltl_synth_zlk_bfs 818 23009 23012
Strix ltl_synth_zlk_pq 845 29458 29380
Otus otus-ltl-synthesis-parallel-sylvan 503 125103 31644
Otus otus-ltl-synthesis-sequential-jbdd 490 27739 27705
ltlsynt segsyntacdabc 709 50445 50446
ltlsynt segsyntdsabc 689 46082 46196
ltlsynt segsyntlarabc 703 49251 49337
sdf synt 721 47772 37822
sdf synt_p 666 112532 61328
For the quality ranking, ltlsynt and sdf swap places.
  1. Strix
  2. ltlsynt
  3. sdf
  4. Otus
More details regarding each solver configuration are given in the table below.
Solver Configuration Synthesized controllers Score
Strix ltl_synth_acd_bfs 433 785
Strix ltl_synth_zlk_bfs 432 780
Strix ltl_synth_zlk_pq 430 772
Otus otus-ltl-synthesis-parallel-sylvan 305 243
Otus otus-ltl-synthesis-sequential-jbdd 298 238
ltlsynt segsyntacdabc 375 500
ltlsynt segsyntdsabc 377 540
ltlsynt segsyntlarabc 371 487
sdf synt 382 447
sdf synt_p 354 400

Benchmarks and Solvers

The set of benchmarks used in 2022 can be fetched from the usual repository. Below, you will find information regarding the participating solvers: their source code, license types, and links to their most recent reports or the publications their authors prefer to have cited if you use the tools in academic work.
Solver StarExec link Source repo License Report to cite
ltlsynt StarExec GitLab repo GPL-3.0+
Acacia bonsai GitHub repo GPL-3.0 arXiv report
Knor GitHub repo GPL-3.0 arXiv report
sdf GitHub repo MIT arXiv report

SYNTCOMP 2022: Call for tools and benchmarks

We are preparing for the annual reactive synthesis competition. This year, SYNTCOMP will be part of the FLoC’22 Olympic Games. The competition will run in time for the results to be presented at CAV 2022 and SYNT 2022.

The competition will run on StarExec, as in 2020 and 2021. If you plan on submitting a benchmark or solver, please go to https://www.starexec.org/, create a user,
and contact us (jacobs@cispa.saarland, guillermoalberto.perez@uantwerpen.be) to be added to the SYNTCOMP space. There are main tracks for
– safety specifications in AIGER format,
– LTL specifications in TLSF, and
– parity games in Hanoi Omega-Automata (HOA) format,
each separated into subtracks for realizability checking and synthesis. Tools will be evaluated with respect to the number of solved benchmarks, and with respect to the size of their solutions.
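To make the size criterion concrete: in the synthesis sub-tracks, the quality ranking rewards solutions that are small relative to a reference implementation. The exact scoring function is fixed in the rules; the sketch below merely illustrates a plausible logarithmic shape, and both the formula and the [0, 2] clamp are assumptions of this illustration, not the official definition.

    import math

    # Illustrative quality score: 2 - log10(size/reference), clamped to [0, 2].
    # This form and the clamp are assumptions; see the SYNTCOMP rules for the
    # authoritative definition.
    def quality_points(solution_size: int, reference_size: int) -> float:
        ratio = (solution_size + 1) / (reference_size + 1)  # +1 guards size 0
        return max(0.0, min(2.0, 2.0 - math.log10(ratio)))

    print(quality_points(100, 100))   # matches the reference: 2.0
    print(quality_points(1000, 100))  # ~10x larger: about 1.0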

* Call for Benchmarks *

We are looking for new synthesis benchmarks to include in SYNTCOMP, in AIGER, TLSF, and HOA. New benchmarks can be included in this year’s SYNTCOMP if they are correctly uploaded (as an issue) to the SYNTCOMP benchmark repository by July 1. Contact us (see above) if you have any questions regarding the scope or format of benchmarks.

* Call for Solvers *

Solvers for all tracks of the competition will be accepted until July 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in this year’s SYNTCOMP if it is correctly uploaded (i.e. passes a test run) into StarExec by July 1.

* Communication *

If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, please write to jacobs@cispa.saarland and guillermoalberto.perez@uantwerpen.be.

SYNTCOMP 2021 Results

The results of SYNTCOMP 2021 have been presented at the SYNT workshop, part of CAV 2021. The slides used for the SYNT presentation can be found here. The experiments can now be browsed on StarExec here (see the subspaces corresponding to each track).

This year, the competition consisted of LTL tracks, with specifications in TLSF, and parity-game tracks, with specifications in extended HOA. Since the number of participants increased, this year we ran sequential and parallel versions of the realizability and synthesis sub-tracks! Also worth mentioning is the reorganization of our benchmark library.

Winners

Knor won the parity-game synthesis track, solving 276/303 benchmarks. It also won the hard-parity-game realizability track by solving 216/217 benchmarks.

Strix placed first in the quality ranking for parity-game synthesis. In its LTL/TLSF version, Strix won the LTL sequential realizability track, solving 827/924 benchmarks, and placed first in the quality ranking for LTL sequential synthesis.

sdf won the LTL parallel realizability track, solving 730/942 benchmarks, and the LTL parallel synthesis track.

Congratulations to the winners, and many thanks to all contributors of benchmarks and all participants!

SYNTCOMP 2021: Call for Benchmarks and Solvers

We are preparing for the annual reactive synthesis competition, which will run in time for the results to be presented at CAV 2021 and SYNT 2021.

The competition will run on StarExec, as in 2020. If you plan on submitting a benchmark or solver, please go to https://www.starexec.org/, create a user,
and contact us (jacobs@cispa.saarland, guillermoalberto.perez@uantwerpen.be) to be added to the SYNTCOMP space. There are main tracks for
– safety specifications in AIGER format,
– LTL specifications in TLSF, and
– parity games in Hanoi Omega-Automata (HOA) format,
each separated into subtracks for realizability checking and synthesis, and
into sequential and parallel execution modes. Tools will be evaluated with respect to the number of solved benchmarks, and with respect to the size of their solutions.

* Call for Benchmarks *

We are looking for new synthesis benchmarks to include in SYNTCOMP, in AIGER, TLSF, and HOA. New benchmarks can be included in SYNTCOMP 2021 if they are correctly uploaded (as an issue) to the SYNTCOMP benchmark repository by June 1, 2021. Contact us (see above) if you have any questions regarding the scope or format of benchmarks.

* Call for Solvers *

Solvers for all tracks of the competition will be accepted until June 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in SYNTCOMP 2021 if it is correctly uploaded (i.e. passes a test run) into StarExec by June 1, 2021.

* Communication *

If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, please write to jacobs@cispa.saarland and guillermoalberto.perez@uantwerpen.be.

SYNTCOMP 2020 Results

The results of SYNTCOMP 2020 have been presented at the SYNT workshop, part of CAV 2020. The slides used for the SYNT presentation can be found here. The experiments can now be browsed on StarExec here (see the subspaces corresponding to each track).

This year, the competition was split into a safety track, based on specifications in AIGER format, an LTL track, with specifications in TLSF, and a parity-game track, with specifications in an extended HOA format. Just like last year, for the safety track we only considered the realizability-checking task. Furthermore, we only ran the sequential mode of the resulting three categories. For the synthesis task of the LTL track, we have an additional ranking based not only on the quantity but also on the quality of solutions.

Here are the tools that had the best results in these categories:

simpleBDDSolver (solved 186 out of 274 problems in the AIGER/safety Realizability Track)

Strix (solved 424 out of 434 problems in the TLSF/LTL Realizability track and won the quality ranking in the TLSF/LTL Synthesis Track)

Strix (solved all problems in the EHOA/parity game Realizability track and was the only participant in the synthesis track)

Congratulations to the winners, and many thanks to all contributors of benchmarks and all participants!

SYNTCOMP 2020: Call for Benchmarks and Solvers

We are preparing for the annual reactive synthesis competition, which will run in time for the results to be presented at CAV 2020 and SYNT 2020.

The competition will run on StarExec, as in 2019. If you plan on submitting a benchmark or solver, please go to https://www.starexec.org/, create a user,
and contact us (jacobs@cispa.saarland, guillermoalberto.perez@uantwerpen.be) to be added to the SYNTCOMP space. There are main tracks for
– safety specifications in AIGER format,
– LTL specifications in TLSF, and
– a new track for parity games in Hanoi Omega-Automata (HOA) format,
each separated into subtracks for realizability checking and synthesis, and
into sequential and parallel execution modes. Tools will be evaluated with respect to the number of solved benchmarks, and with respect to the size of their solutions.

* Call for Benchmarks *

We are looking for new synthesis benchmarks to include in SYNTCOMP, in AIGER, TLSF, and HOA. New benchmarks can be included in SYNTCOMP 2020 if they are correctly uploaded (i.e. pass syntax checks) into StarExec by June 1, 2020. Contact us (see above) if you have any questions regarding the scope or format of benchmarks.

* Call for Solvers *

Solvers for all tracks of the competition will be accepted until June 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in SYNTCOMP 2020 if it is correctly uploaded (i.e. passes a test run) into StarExec by June 1, 2020.

* Communication *

If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, please write to jacobs@cispa.saarland and guillermoalberto.perez@uantwerpen.be.

SYNTCOMP 2019 Results

The results of SYNTCOMP 2019 have been presented at the SYNT workshop and at CAV 2019. The slides used for the SYNT presentation can be found here. The experiments can now be browsed on StarExec here (see the subspaces corresponding to each track).

Like in the previous years, the competition was split into a safety track, based on specifications in AIGER format, and an LTL track, with specifications in TLSF. Exceptionally, this year we only considered the realizability-checking task for the safety track. Furthermore, we only ran the sequential mode of the resulting three categories. For the synthesis task of the LTL track, we have an additional ranking based not only on the quantity but also on the quality of solutions.

Here are the tools that had the best results in these categories:

AbsSynthe (solved 192 out of 274 problems in the AIGER/safety Realizability Track, sequential mode)

Strix (solved the most problems in all TLSF/LTL tracks and modes, and won the quality ranking in the TLSF/LTL Synthesis Track, with 717 points in sequential mode)

Congratulations to the winners, and many thanks to all contributors of benchmarks and all participants!

SYNTCOMP 2019: Call for Benchmarks and Solvers

We are preparing for the annual reactive synthesis competition, which will run in time for the results to be presented at CAV 2019 and SYNT 2019.

The competition will run mostly in the same way as in 2016–2018, but will now be managed and run on StarExec. If you plan on submitting a benchmark or solver, please go to https://www.starexec.org/, create a user, and contact us (jacobs@cispa.saarland, guillermoalberto.perez@uantwerpen.be) to be added to the SYNTCOMP space. There are main tracks for safety specifications in AIGER format and for LTL specifications in TLSF, each separated into subtracks for realizability checking and synthesis, and into sequential and parallel execution modes. Tools will be evaluated with respect to the number of solved benchmarks, and with respect to the size of their solutions.

* Call for Participation: SYNTCOMP tutorial @ ETAPS *

This year, we are organizing a tutorial on the state of the art in reactive synthesis and SYNTCOMP (https://conf.researchr.org/track/etaps-2019/syntcompcamp-2019-tutorial). The target audience of this short tutorial (3.5 hours) is researchers who are interested in the state of the art regarding theory and implementations of the automatic synthesis of reactive systems. The tutorial will be split into two parts: the first part will give background on the history and the theoretical foundations of reactive synthesis and SYNTCOMP. The second part will go in depth on one of the successful approaches and present its implementation in the tool BoSy (see https://www.react.uni-saarland.de/tools/bosy/).

* Call for Benchmarks *

We are looking for new synthesis benchmarks to include in SYNTCOMP, both in AIGER and in TLSF. New benchmarks can be included in SYNTCOMP 2019 if they are correctly uploaded (i.e. pass syntax checks) into StarExec by June 1, 2019. Contact us (see above) if you have any questions regarding the scope or format of benchmarks.

* Call for Solvers *

Solvers for all tracks of the competition will be accepted until June 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in SYNTCOMP 2019 if it is correctly uploaded (i.e. passes a test run) into StarExec by June 1, 2019.

* Communication *

If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, please write to jacobs@cispa.saarland and guillermoalberto.perez@uantwerpen.be.

SYNTCOMP 2018 Results

The results of SYNTCOMP 2018 have been presented at the SYNT workshop and at FLoC 2018, and the slides of the SYNT presentation can be found here. The experiments can now be browsed on the web-frontend of our EDACC instance (experiments with “Legacy” in the name are based on older solvers running on this year’s benchmarks), and a more detailed analysis of the results is forthcoming.

Like in the previous years, the competition was split into a safety track, based on specifications in AIGER format, and an LTL track, with specifications in TLSF. In each of the tracks, we consider the different tasks of realizability checking and synthesis, and split evaluation into sequential and parallel execution modes. Finally, for the synthesis tasks we have an additional ranking based not only on the quantity but also on the quality of solutions.

Here are the tools that had the best results in these categories:

Simple BDD Solver (solved 165 out of 234 problems in the AIGER/safety Realizability Track, sequential mode)

TermiteSAT (solved 179 out of 234 problems in the AIGER/safety Realizability Track, parallel mode)

SafetySynth (solved 154 out of 234 problems in the AIGER/safety Synthesis Track, sequential mode, and won the quality ranking in that mode with 224 points)

AbsSynthe (solved 156 out of 234 in the AIGER/safety Synthesis Track, parallel mode)

Demiurge (won the quality ranking in the AIGER/safety Synthesis Track, parallel mode, with 240 points)

Strix (solved most problems in all TLSF/LTL tracks and modes, and won the quality ranking in the TLSF/LTL Synthesis Track, with 413 points in sequential and 446 in parallel mode)

Congratulations to the winners, and many thanks to all contributors of benchmarks and all participants!