New Track for SYNTCOMP 2020

Dear synthesis enthusiasts, for the next edition of the Reactive Synthesis Competition (SYNTCOMP 2020) we will have a new track, meant to further distinguish the strengths of LTL/TLSF synthesis tools. The new track takes deterministic parity automata as input, an intermediate representation used by most LTL-synthesis pipelines.

The new input format is an extension of the Hanoi Omega-Automata (HOA) format. Java and C++ parsers for the original format exist (see the website), and the extension is designed so that those parsers should continue to work. Furthermore, for PGSolver-format fanatics we provide a translator from the new format to the PGSolver format. The output format remains AIGER, as explained in the arXiv document describing the input/output formats.
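To give a concrete feel for the input, here is a minimal, hand-written sketch of a specification in the extended format. The toy two-state automaton over a request and a grant signal, and the controllable-AP header, which we assume is the extension's way of marking the atomic propositions controlled by the system, are purely illustrative; please consult the arXiv document for the authoritative definition. Everything else is plain HOA:

    HOA: v1
    States: 2
    Start: 0
    AP: 2 "request" "grant"
    controllable-AP: 1               /* assumed extension header: AP 1 ("grant") is a system output */
    acc-name: parity max even 2
    Acceptance: 2 Fin(1) & Inf(0)    /* colour 1 finitely often, colour 0 infinitely often */
    properties: deterministic state-acc
    --BODY--
    State: 0 {0}                     /* no request pending */
    [!0 | 1] 0
    [0 & !1] 1
    State: 1 {1}                     /* a request is pending */
    [1] 0
    [!1] 1
    --END--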

We encourage you to start preparing benchmarks and tools for this new track as soon as possible and to try them out in our StarExec community. If you have questions or comments regarding the input/output formats, do not hesitate to contact us.

SYNTCOMP 2019 Results

The results of SYNTCOMP 2019 have been presented at the SYNT workshop and at CAV 2019. The slides used for the SYNT presentation can be found here. The experiments can now be browsed on StarExec here (see the subspaces corresponding to each track).

As in previous years, the competition was split into a safety track, based on specifications in AIGER format, and an LTL track, with specifications in TLSF. Exceptionally, this year the safety track only considered the realizability-checking task, and we only ran the sequential mode of the resulting three categories. For the synthesis task of the LTL track there is an additional ranking based not only on the quantity but also on the quality of solutions.

Here are the tools that had the best results in these categories:

AbsSynthe (solved 192 out of 274 problems in the AIGER/safety Realizability Track, sequential mode)

Strix (solved the most problems in all TLSF/LTL tracks and modes, and won the quality ranking in the TLSF/LTL Synthesis Track, with 717 points in sequential mode)

Congratulations to the winners, and many thanks to all contributors of benchmarks and all participants!

SYNTCOMP 2019: Call for Benchmarks and Solvers

We are preparing for the annual reactive synthesis competition, which will run in time for the results to be presented at CAV 2019 and SYNT 2019.

The competition will run mostly in the same way as in 2016–2018, but will now be managed and run on StarExec. If you plan on submitting a benchmark or solver, please go to https://www.starexec.org/, create a user account, and contact us (jacobs@cispa.saarland, guillermoalberto.perez@uantwerpen.be) to be added to the SYNTCOMP space. There are main tracks for safety specifications in AIGER format and for LTL specifications in TLSF, each separated into subtracks for realizability checking and synthesis, and into sequential and parallel execution modes. Tools will be evaluated with respect to the number of solved benchmarks, and with respect to the size of their solutions.

* Call for Participation: SYNTCOMP tutorial @ ETAPS *

This year, we are organizing a tutorial on the state of the art of reactive synthesis and SYNTCOMP: https://conf.researchr.org/track/etaps-2019/syntcompcamp-2019-tutorial. The target audience of this short tutorial (3.5 hours) consists of researchers who are interested in the state of the art regarding theory and implementations of the automatic synthesis of reactive systems. The tutorial will consist of two parts: the first will give background on the history and the theoretical foundations of reactive synthesis and SYNTCOMP; the second will go in depth into one of the successful approaches and present its implementation in the tool BoSy (see https://www.react.uni-saarland.de/tools/bosy/).

* Call for Benchmarks *

We are looking for new synthesis benchmarks to include in SYNTCOMP, both in AIGER and in TLSF. New benchmarks can be included in SYNTCOMP 2019 if they are correctly uploaded to StarExec (i.e. pass the syntax checks) by June 1, 2019. Contact us (see above) if you have any questions regarding the scope or format of benchmarks.

* Call for Solvers *

Solvers for all tracks of the competition will be accepted until June 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in SYNTCOMP 2019 if it is correctly uploaded to StarExec (i.e. passes a test run) by June 1, 2019.

* Communication *

If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, please write to jacobs@cispa.saarland and guillermoalberto.perez@uantwerpen.be.

SYNTCOMP 2018 Results

The results of SYNTCOMP 2018 have been presented at the SYNT workshop and at FLoC 2018, and the slides of the SYNT presentation can be found here. The experiments can now be browsed on the web-frontend of our EDACC instance (experiments with “Legacy” in the name are based on older solvers running on this year’s benchmarks), and a more detailed analysis of the results is forthcoming.

As in previous years, the competition was split into a safety track, based on specifications in AIGER format, and an LTL track, with specifications in TLSF. In each track we consider the two tasks of realizability checking and synthesis, and split the evaluation into sequential and parallel execution modes. Finally, for the synthesis tasks there is an additional ranking based not only on the quantity but also on the quality of solutions.

Here are the tools that had the best results in these categories:

Simple BDD Solver (solved 165 out of 234 problems in the AIGER/safety Realizability Track, sequential mode)

TermiteSAT (solved 179 out of 234 problems in the AIGER/safety Realizability Track, parallel mode)

SafetySynth (solved 154 out of 234 problems in the AIGER/safety Synthesis Track, sequential mode, and won the quality ranking in that mode with 224 points)

AbsSynthe (solved 156 out of 234 problems in the AIGER/safety Synthesis Track, parallel mode)

Demiurge (won the quality ranking in the AIGER/safety Synthesis Track, parallel mode, with 240 points)

Strix (solved most problems in all TLSF/LTL tracks and modes, and won the quality ranking in the TLSF/LTL Synthesis Track, with 413 points in sequential and 446 in parallel mode)

Congratulations to the winners, and many thanks to all contributors of benchmarks and all participants!

SYNTCOMP 2018: Call for Benchmarks and Solvers

We are preparing for the fifth annual reactive synthesis competition, which will again be affiliated with CAV, and will run in time for the results to be presented at CAV and SYNT, which are both part of the Federated Logics Conference (FLoC) 2018.

The competition will run mostly in the same way as in 2016 and 2017: there are main tracks for safety specifications in AIGER format and for LTL specifications in TLSF, each separated into subtracks for realizability checking and synthesis, and evaluated separately in sequential and parallel execution modes. Like last year, we supply the synthesis format conversion tool SyFCo, which can convert TLSF into several formats, including the input formats of a number of existing synthesis tools. Tools will be evaluated with respect to the number of solved benchmarks, and with respect to the size of their solutions.
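For readers who have not used it before, the calls below sketch how SyFCo is typically invoked to translate a TLSF specification. The file name arbiter.tlsf is a placeholder, and the option names and format identifiers are quoted from memory of the SyFCo documentation, so please double-check them against syfco --help:

    syfco --format ltlxba arbiter.tlsf    # plain LTL formula in ltl2ba-style syntax
    syfco --format acacia arbiter.tlsf    # input for Acacia+-style tools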

* Call for Benchmarks *

We are looking for new synthesis benchmarks to include in SYNTCOMP, both in AIGER and in TLSF. We accept new benchmarks at any time. New benchmarks can be included in SYNTCOMP 2018 if they are sent to us by June 1, 2018.

* Call for Solvers *

Solvers for all tracks of the competition will be accepted until May 25. As in previous years, we will allow updates of the solvers for some time after that (possibly after feedback from the organizers), at least until June 1. However, it would help the organizers to know in advance how many solvers may be entering, and who is seriously interested in entering SYNTCOMP. Thus, if you plan to enter one or more solvers into SYNTCOMP 2018, please let us know at your earliest convenience.

* Communication *

If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, write to jacobs@react.uni-saarland.de.

On behalf of the SYNTCOMP team,

Swen

SYNTCOMP 2017 Results

The results of SYNTCOMP 2017 have been presented at the SYNT workshop and at CAV 2017, and the experiments can be inspected in the web-frontend of our EDACC instance. An analysis of the results has been published in the proceedings of SYNT 2017.

As in previous years, the competition was split into a safety track, based on specifications in AIGER format, and an LTL track, with specifications in TLSF. In each track we consider the two tasks of realizability checking and synthesis, and split the evaluation into sequential and parallel execution modes. Finally, for the synthesis tasks there is an additional ranking based not only on the quantity but also on the quality of solutions.

Here are the tools that had the best results in these categories:

Simple BDD Solver (solved 171 out of 234 problems in the AIGER/safety Realizability Track, sequential mode)

TermiteSAT (solved 186 out of 234 problems in the AIGER/safety Realizability Track, parallel mode)

SafetySynth (solved 155 out of 234 problems in the AIGER/safety Synthesis Track, sequential mode, and won the quality ranking in that mode with 236 points)

AbsSynthe (solved 169 out of 234 problems in the AIGER/safety Synthesis Track, parallel mode)

Demiurge (won the quality ranking in the AIGER/safety Synthesis Track, parallel mode, with 266 points)

Party (solved most problems in all TLSF/LTL tracks and modes, and won the quality ranking in the TLSF/LTL Synthesis Track, parallel mode, with 308 points)

BoSy (won the quality ranking in the TLSF/LTL Synthesis Track, sequential mode, with 298 points)

Congratulations to the winners, and many thanks to all contributors of benchmarks and all participants!

SYNTCOMP 2017: Call for Benchmarks and Solvers

We are preparing for the fourth annual reactive synthesis competition, which will again be affiliated with CAV, and will run in time for the results to be presented at CAV 2017 and SYNT 2017.

The competition will run mostly in the same way as last year: there are main tracks for safety specifications in AIGER format and for LTL specifications in TLSF, each separated into subtracks for realizability checking and synthesis, and evaluated separately in sequential and parallel execution modes. Like last year, we supply the synthesis format conversion tool SyFCo, which can convert TLSF into several formats, including the input formats of a number of existing synthesis tools.
The main difference from last year is that we will again have a quality metric, based on the size of the constructed implementations. Additionally, we will try to set up a track for GR(1) specifications. If you are interested in the latter, please contact me.

* Call for Benchmarks *

We are looking for new synthesis benchmarks to include in SYNTCOMP, both in AIGER and in TLSF. We accept new benchmarks at any time. New benchmarks can be included in SYNTCOMP 2017 if they are sent to us by May 7, 2017 (deadline extended from April 30). If you send us a benchmark family for SYNTCOMP, we encourage you to also submit a paper describing the benchmark (including its motivation, properties, and possibly an analysis with existing tools) to the SYNT workshop.

* Call for Solvers *

Solvers for all tracks of the competition will be accepted until May 25. As in previous years, we will allow updates of the solvers for some time after that (possibly after feedback from the organizers), at least until June 1. However, it would help the organizers to know in advance how many solvers may be entering, and who is seriously interested in entering SYNTCOMP. Thus, if you plan to enter one or more solvers into SYNTCOMP 2017, please let us know at your earliest convenience.

* Communication *

If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, write to jacobs@react.uni-saarland.de.

On behalf of the SYNTCOMP team,

Swen