The results of the 2022 competition on Reactive Synthesis are in! As in
previous years, the competition was run on StarExec. This year, the competition
consisted of LTL tracks, with specifications in TLSF, and parity-game tracks,
with specifications given in extended HOA. It is worth mentioning that we have
removed the separation between so-called parallel and sequential tracks in
favor of two rankings, one based on the total Wallclock time and one on the
total CPU time used by the solvers. This year, though, the two rankings happen
to coincide. For more information on this and other updates to the competition,
you can take a look at the most recent report about the competition.
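To make that concrete, here is a minimal sketch of how such rankings can be computed. This is our reading of the scheme, not the authoritative definition (see the report linked above): tools are ordered by the number of solved benchmarks, with the respective time total used as a tie-break.

```python
# Minimal sketch of the two time-based rankings (assumed scheme, see the
# competition report for the authoritative definition).
from typing import NamedTuple


class Result(NamedTuple):
    tool: str
    solved: int            # benchmarks solved by the tool's best configuration
    cpu_time: float        # total CPU seconds
    wallclock_time: float  # total wall-clock seconds


def ranking(results: list[Result], use_wallclock: bool) -> list[str]:
    # More solved benchmarks is better; less total time breaks ties.
    def key(r: Result):
        return (-r.solved, r.wallclock_time if use_wallclock else r.cpu_time)
    return [r.tool for r in sorted(results, key=key)]


# Numbers taken from the parity-game realizability table below
# (best configuration per tool).
results = [
    Result("Knor", 587, 1722.0, 1039.0),
    Result("Strix", 548, 308532.0, 124160.0),
    Result("ltlsynt", 441, 69416.0, 69421.0),
]
# As noted above, the two rankings coincide this year.
assert ranking(results, use_wallclock=True) == ranking(results, use_wallclock=False)
```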
Rankings
We first present the results for the parity-game tracks and then move on to
the LTL tracks.
Parity games: realizability
To summarize, the ranking of the participating tools is as follows.
1. Knor
2. Strix
3. ltlsynt
More details regarding each solver configuration are given in the table below
and in the cactus plots that follow (where only the best configuration per
tool is shown). The raw information can be fetched from the StarExec job.
| Solver | Configuration | Solved benchmarks | Total CPU time (s) | Total Wallclock time (s) |
|---|---|---|---|---|
| Strix | hoa_realizability_parallel | 548 | 308532 | 124160 |
| Strix | hoa_realizability_sequential | 515 | 89584 | 89232 |
| ltlsynt | pgreal | 441 | 69416 | 69421 |
| Knor | real_ltl | 587 | 1722 | 1039 |
Parity games: synthesis
To summarize, the time-based ranking of the participating tools is the same as
for the realizability sub-track. More details regarding each solver
configuration are given in the table below and in the cactus plots that follow
(where only the best configuration per tool is shown). The raw information can
be fetched from the StarExec job.
| Solver | Configuration | Solved benchmarks | Total CPU time (s) | Total Wallclock time (s) |
|---|---|---|---|---|
| Strix | hoa_synthesis_parallel | 470 | 159456 | 73077 |
| Strix | hoa_synthesis_sequential | 451 | 48321 | 47884 |
| ltlsynt | pgsynt | 334 | 69254 | 69250 |
| ltlsynt | pgsyntabc1 | 335 | 69386 | 69440 |
| Knor | synt_sym | 278 | 4839 | 4839 |
| Knor | synt_sym_abc | 278 | 5706 | 5401 |
| Knor | synt_tl | 480 | 1695 | 994 |
The quality ranking is quite different.
1. Strix
2. ltlsynt
3. Knor
More details regarding each solver configuration are given in the table below.
| Solver | Configuration | Synthesized controllers | Score |
|---|---|---|---|
| Strix | hoa_synthesis_parallel | 199 | 385 |
| Strix | hoa_synthesis_sequential | 199 | 384 |
| ltlsynt | pgsynt | 204 | 321 |
| ltlsynt | pgsyntabc1 | 205 | 358 |
| Knor | synt_sym | 197 | 292 |
| Knor | synt_sym_abc | 197 | 340 |
| Knor | synt_tl | 201 | 268 |
LTL: realizability
To summarize, the ranking of the participating tools is as follows.
1. Strix
2. ltlsynt
3. Acacia bonsai (listed as _acab_sb in the table below)
4. sdf
5. SPORE
6. Otus
More details regarding each solver configuration are given in the table below
and in the cactus plots that follow (where only the best configuration per
tool is shown). The raw information can be fetched from the StarExec job.
| Solver | Configuration | Solved benchmarks | Total CPU time (s) | Total Wallclock time (s) |
|---|---|---|---|---|
| _acab_sb | best | 826 | 211030 | 86632 |
| SPORE | ltl-real-recpar-bdd-seq | 519 | 67800 | 67830 |
| SPORE | ltl-real-recpar-fbdd-32-nodecomp-seq | 715 | 71950 | 71960 |
| SPORE | ltl-real-recpar-fbdd-32-seq | 646 | 69159 | 69183 |
| SPORE | ltl-real-recpar-fbdd-64-nodecomp-seq | 715 | 72367 | 72309 |
| SPORE | ltl-real-recpar-fbdd-64-seq | 646 | 68257 | 68305 |
| Strix | ltl_real_acd_bfs | 892 | 43963 | 43974 |
| Strix | ltl_real_zlk_bfs | 893 | 39510 | 39518 |
| Strix | ltl_real_zlk_pq | 924 | 46340 | 46294 |
| Otus | otus-ltl-realizability-parallel-sylvan | 588 | 227382 | 57363 |
| Otus | otus-ltl-realizability-sequential-jbdd | 558 | 32009 | 32011 |
| sdf | real | 783 | 50466 | 40399 |
| sdf | real_p | 734 | 129252 | 72772 |
| ltlsynt | segrealacd | 831 | 67066 | 67064 |
| ltlsynt | segrealds | 800 | 55803 | 55810 |
| ltlsynt | segreallar | 828 | 57983 | 57956 |
LTL: synthesis
To summarize, the time-based ranking of the participating tools is as follows.
1. Strix
2. sdf
3. ltlsynt
4. Otus
More details regarding each solver configuration are given in the table below
and in the cactus plots that follow (where only the best configuration per
tool is shown). The raw information can be fetched from the StarExec job.
| Solver | Configuration | Solved benchmarks | Total CPU time (s) | Total Wallclock time (s) |
|---|---|---|---|---|
| Strix | ltl_synth_acd_bfs | 820 | 30218 | 30195 |
| Strix | ltl_synth_zlk_bfs | 818 | 23009 | 23012 |
| Strix | ltl_synth_zlk_pq | 845 | 29458 | 29380 |
| Otus | otus-ltl-synthesis-parallel-sylvan | 503 | 125103 | 31644 |
| Otus | otus-ltl-synthesis-sequential-jbdd | 490 | 27739 | 27705 |
| ltlsynt | segsyntacdabc | 709 | 50445 | 504463 |
| ltlsynt | segsyntdsabc | 689 | 46082 | 46196 |
| ltlsynt | segsyntlarabc | 703 | 49251 | 49337 |
| sdf | synt | 721 | 47772 | 37822 |
| sdf | synt_p | 666 | 112532 | 61328 |
For the quality ranking, ltlsynt and sdf swap places.
1. Strix
2. ltlsynt
3. sdf
4. Otus
More details regarding each solver configuration are given in the table below.
| Solver | Configuration | Synthesized controllers | Score |
|---|---|---|---|
| Strix | ltl_synth_acd_bfs | 433 | 785 |
| Strix | ltl_synth_zlk_bfs | 432 | 780 |
| Strix | ltl_synth_zlk_pq | 430 | 772 |
| Otus | otus-ltl-synthesis-parallel-sylvan | 305 | 243 |
| Otus | otus-ltl-synthesis-sequential-jbdd | 298 | 238 |
| ltlsynt | segsyntacdabc | 375 | 500 |
| ltlsynt | segsyntdsabc | 377 | 540 |
| ltlsynt | segsyntlarabc | 371 | 487 |
| sdf | synt | 382 | 447 |
| sdf | synt_p | 354 | 400 |
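The Score column sums per-benchmark quality points that reward small verified controllers. Below is a rough sketch of how such a score comes about. The formula is our assumption of the usual SYNTCOMP-style scheme, not the authoritative definition; the exact rule and the reference solution sizes are given in the competition report.

```python
import math


def quality_points(solution_size: int, reference_size: int) -> float:
    # Assumed SYNTCOMP-style rule: roughly 2 points for a verified
    # controller matching the reference size, more points for smaller
    # controllers and fewer for larger ones. The competition report
    # defines the exact rule and the reference sizes.
    return 2.0 - math.log10((solution_size + 1) / (reference_size + 1))


def score(sizes: dict[str, int], references: dict[str, int]) -> float:
    # Sum points over the benchmarks the tool solved with a verified
    # controller; unsolved benchmarks contribute nothing.
    return sum(quality_points(sizes[b], references[b]) for b in sizes)
```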
Benchmarks and Solvers
The set of benchmarks used in 2022 can be fetched from
the usual
repository.
Below, you will find information regarding the participating solvers, their
source code, license type, and links to their most recent reports or preferred
publications to be cited in case you use the tool in your academic work.
We are preparing for the annual reactive synthesis competition. This year, SYNTCOMP will be part of the FLoC'22 Olympic Games. The competition will run in time for the results to be presented during CAV 2022 and SYNT 2022.
The competition will run on StarExec, as in 2020 and 2021. If you plan on submitting a benchmark or solver, please go to https://www.starexec.org/, create a user, and contact us (jacobs@cispa.saarland, guillermoalberto.perez@uantwerpen.be) to be added to the SYNTCOMP space. There are main tracks for safety specifications in AIGER format, LTL specifications in TLSF, and parity games in Hanoi Omega-Automata (HOA) format, each separated into sub-tracks for realizability checking and synthesis. Tools will be evaluated with respect to the number of solved benchmarks and with respect to the size of their solutions.
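For readers unfamiliar with the formats, below is a minimal TLSF specification, shown purely for illustration: a toy request-response property with made-up signal names, not one of the competition benchmarks (those live in the usual benchmark repository).

```
INFO {
  TITLE:       "Toy request-response specification"
  DESCRIPTION: "Every request r is eventually answered by a grant g"
  SEMANTICS:   Mealy
  TARGET:      Mealy
}

MAIN {
  INPUTS  { r; }
  OUTPUTS { g; }
  GUARANTEES {
    G (r -> F g);
  }
}
```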
* Call for Benchmarks *
We are looking for new synthesis benchmarks to include in SYNTCOMP, in AIGER, TLSF, and HOA. New benchmarks can be included in this year's SYNTCOMP if they are correctly uploaded (as an issue) to the SYNTCOMP benchmark repository by July 1. Contact us (see above) if you have any questions regarding the scope or format of benchmarks.
* Call for Solvers *
Solvers for all tracks of the competition will be accepted until July 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in this year’s SYNTCOMP if it is correctly uploaded (i.e. passes a test run) into StarExec by July 1.
The results of SYNTCOMP 2021 have been presented at the SYNT workshop, part of CAV 2021. The slides used for the SYNT presentation can be found here. The experiments can now be browsed on StarExec here (see the subspaces corresponding to each track).
This year, the competition consisted of LTL tracks, with specifications in TLSF, and parity-game tracks, with specifications in extended HOA. Since participation increased, this year we had sequential and parallel versions of the realizability and synthesis sub-tracks! Also worth mentioning is the reorganization of our benchmark library.
Winners
Knor won the parity-game synthesis track, solving 276/303 benchmarks. It also won the hard-parity-game realizability track by solving 216/217 benchmarks.
Strix placed first in the quality ranking for parity-game synthesis. In its LTL/TLSF version, Strix won the LTL sequential realizability track by solving 827/924 benchmarks, and placed first in the quality ranking for LTL sequential synthesis.
sdf won the LTL parallel realizability track, solving 730/942 benchmarks, and the LTL parallel synthesis track.
Congratulations to the winners, and many thanks to all contributors of benchmarks and all participants!
We are preparing for the annual reactive synthesis competition, which will run in time for the results to be presented at CAV 2021 and SYNT 2021.
The competition will run on StarExec, as in 2020. If you plan on submitting a benchmark or solver, please go to https://www.starexec.org/, create a user, and contact us (jacobs@cispa.saarland, guillermoalberto.perez@uantwerpen.be) to be added to the SYNTCOMP space. There are main tracks for safety specifications in AIGER format, LTL specifications in TLSF, and parity games in Hanoi Omega-Automata (HOA) format, each separated into sub-tracks for realizability checking and synthesis, and into sequential and parallel execution modes. Tools will be evaluated with respect to the number of solved benchmarks and with respect to the size of their solutions.
* Call for Benchmarks *
We are looking for new synthesis benchmarks to include in SYNTCOMP, in AIGER, TLSF, and HOA. New benchmarks can be included in SYNTCOMP 2021 if they are correctly uploaded (as an issue) to the SYNTCOMP benchmark repository by June 1, 2021. Contact us (see above) if you have any questions regarding the scope or format of benchmarks.
* Call for Solvers *
Solvers for all tracks of the competition will be accepted until June 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in SYNTCOMP 2021 if it is correctly uploaded (i.e. passes a test run) into StarExec by June 1, 2021.
The results of SYNTCOMP 2020 have been presented at the SYNT workshop, part of CAV 2020. The slides used for the SYNT presentation can be found here. The experiments can now be browsed on StarExec here (see the subspaces corresponding to each track).
This year, the competition was split into a safety track, based on specifications in AIGER format, an LTL track, with specifications in TLSF, and a parity-game track, with specifications in an extended HOA format. Just like last year, for the safety track we considered only the realizability-checking task. Furthermore, we ran only the sequential mode of the resulting three categories. For the synthesis task in the LTL track, we have an additional ranking based not only on the quantity but also on the quality of solutions.
Here are the tools that had the best results in these categories:
simpleBDDSolver (solved 186 out of 274 problems in the AIGER/safety Realizability Track)
Strix (solved 424 out of 434 problems in the TLSF/LTL Realizability track and won the quality ranking in the TLSF/LTL Synthesis Track)
Strix (solved all problems in the EHOA/parity game Realizability track and was the only participant in the synthesis track)
Congratulations to the winners, and many thanks to all contributors of benchmarks and all participants!
We are preparing for the annual reactive synthesis competition, which will run in time for the results to be presented at CAV 2020 and SYNT 2020.
The competition will run on StarExec, as in 2019. If you plan on submitting a benchmark or solver, please go to https://www.starexec.org/, create a user, and contact us (jacobs@cispa.saarland, guillermoalberto.perez@uantwerpen.be) to be added to the SYNTCOMP space. There are main tracks for safety specifications in AIGER format and LTL specifications in TLSF, and a new track for parity games in Hanoi Omega-Automata (HOA) format, each separated into sub-tracks for realizability checking and synthesis, and into sequential and parallel execution modes. Tools will be evaluated with respect to the number of solved benchmarks and with respect to the size of their solutions.
* Call for Benchmarks *
We are looking for new synthesis benchmarks to include in SYNTCOMP, in AIGER, TLSF, and HOA. New benchmarks can be included in SYNTCOMP 2020 if they are correctly uploaded (i.e. pass syntax checks) into StarExec by June 1, 2020. Contact us (see above) if you have any questions regarding the scope or format of benchmarks.
* Call for Solvers *
Solvers for all tracks of the competition will be accepted until June 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in SYNTCOMP 2020 if it is correctly uploaded (i.e. passes a test run) into StarExec by June 1, 2020.
The results of SYNTCOMP 2019 have been presented at the SYNT workshop and at CAV 2019. The slides used for the SYNT presentation can be found here. The experiments can now be browsed on StarExec here (see the subspaces corresponding to each track).
As in previous years, the competition was split into a safety track, based on specifications in AIGER format, and an LTL track, with specifications in TLSF. This year, exceptionally, we considered only the realizability-checking task for the safety track. Furthermore, we ran only the sequential mode of the resulting three categories. For the synthesis task in the LTL track, we have an additional ranking based not only on the quantity but also on the quality of solutions.
Here are the tools that had the best results in these categories:
AbsSynthe (solved 192 out of 274 problems in the AIGER/safety Realizability Track, sequential mode)
Strix (solved the most problems in all TLSF/LTL tracks and modes, and won the quality ranking in the TLSF/LTL Synthesis Track, with 717 points in sequential mode)
Congratulations to the winners, and many thanks to all contributors of benchmarks and all participants!
Since there are some issues with the web frontend of our EDACC installation, the results of SYNTCOMP 2018 can now also be downloaded in our repository at https://bitbucket.org/swenjacobs/syntcomp/src (in folder “Results2018”).
We are preparing for the annual reactive synthesis competition, which will run in time for the results to be presented at CAV 2019 and SYNT 2019.
The competition will run mostly in the same way as in 2016–2018, but will now be managed and run on StarExec. If you plan on submitting a benchmark or solver, please go to https://www.starexec.org/, create a user, and contact us (jacobs@cispa.saarland, guillermoalberto.perez@uantwerpen.be) to be added to the SYNTCOMP space. There are main tracks for safety specifications in AIGER format and for LTL specifications in TLSF, each separated into sub-tracks for realizability checking and synthesis, and into sequential and parallel execution modes. Tools will be evaluated with respect to the number of solved benchmarks and with respect to the size of their solutions.
* Call for Participation: SYNTCOMP tutorial @ ETAPS *
This year, we are organizing a tutorial on the state of the art of reactive synthesis and SYNTCOMP: https://conf.researchr.org/track/etaps-2019/syntcompcamp-2019-tutorial The target audience of this short tutorial (3.5 hours) is researchers interested in the state of the art regarding theory and implementations of the automatic synthesis of reactive systems. The tutorial will be separated into two parts: the first part will give background on the history and the theoretical foundations of reactive synthesis and SYNTCOMP. The second part will go in-depth on one of the successful approaches and present its implementation in the tool BoSy (see https://www.react.uni-saarland.de/tools/bosy/).
* Call for Benchmarks *
We are looking for new synthesis benchmarks to include in SYNTCOMP, both in AIGER and in TLSF. New benchmarks can be included in SYNTCOMP 2019 if they are correctly uploaded (i.e. pass syntax checks) into StarExec by June 1, 2019. Contact us (see above) if you have any questions regarding the scope or format of benchmarks.
* Call for Solvers *
Solvers for all tracks of the competition will be accepted until June 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in SYNTCOMP 2019 if it is correctly uploaded (i.e. passes a test run) into StarExec by June 1, 2019.
The results of SYNTCOMP 2018 have been presented at the SYNT workshop and at FLoC 2018, and the slides of the SYNT presentation can be found here. The experiments can now be browsed on the web-frontend of our EDACC instance (experiments with “Legacy” in the name are based on older solvers running on this year’s benchmarks), and a more detailed analysis of the results is forthcoming.
As in previous years, the competition was split into a safety track, based on specifications in AIGER format, and an LTL track, with specifications in TLSF. In each of the tracks, we consider the different tasks of realizability checking and synthesis, and split the evaluation into sequential and parallel execution modes. Finally, for the synthesis tasks we have an additional ranking based not only on the quantity but also on the quality of solutions.
Here are the tools that had the best results in these categories:
Simple BDD Solver (solved 165 out of 234 problems in the AIGER/safety Realizability Track, sequential mode)
TermiteSAT (solved 179 out of 234 problems in the AIGER/safety Realizability Track, parallel mode)
SafetySynth (solved 154 out of 234 problems in the AIGER/safety Synthesis Track, sequential mode, and won the quality ranking in that mode with 224 points)
AbsSynthe (solved 156 out of 234 in the AIGER/safety Synthesis Track, parallel mode)
Demiurge (won the quality ranking in the AIGER/safety Synthesis Track, parallel mode, with 240 points)
Strix (solved the most problems in all TLSF/LTL tracks and modes, and won the quality ranking in the TLSF/LTL Synthesis Track, with 413 points in sequential and 446 in parallel mode)
Congratulations to the winners, and many thanks to all contributors of benchmarks and all participants!