
SYNTCOMP 2018: Call for Benchmarks and Solvers

We are preparing for the fifth annual reactive synthesis competition, which will again be affiliated with CAV, and will run in time for the results to be presented at CAV and SYNT, which are both part of the Federated Logic Conference (FLoC) 2018.

The competition will run mostly in the same way as in 2016 and 2017: there are main tracks for safety specifications in AIGER format and for LTL specifications in TLSF, each separated into subtracks for realizability checking and synthesis, and evaluated separately in sequential and parallel execution modes. Like last year, we supply the synthesis format conversion tool SyFCo, which can convert TLSF into several formats, including the input formats of a number of existing synthesis tools. Tools will be evaluated with respect to the number of solved benchmarks and with respect to the size of their solutions.
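For readers who have not seen TLSF before, here is a minimal specification sketch, modeled on the simple-arbiter examples from the TLSF documentation; the signal names r0, r1, g0, g1 are purely illustrative and not taken from the benchmark set.

    INFO {
      TITLE:       "Simple arbiter (illustrative sketch)"
      DESCRIPTION: "Every request is eventually granted; grants are mutually exclusive"
      SEMANTICS:   Mealy
      TARGET:      Mealy
    }

    MAIN {
      INPUTS {
        r0;
        r1;
      }
      OUTPUTS {
        g0;
        g1;
      }
      GUARANTEES {
        G (r0 -> F g0);
        G (r1 -> F g1);
        G (!g0 || !g1);
      }
    }

A specification like this can be fed to SyFCo to obtain, for example, a plain LTL formula for tools that do not read TLSF directly.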

* Call for Benchmarks *

We are looking for new synthesis benchmarks to include into SYNTCOMP, both in AIGER and in TLSF. We accept new benchmarks at any time. New benchmarks can be included in SYNTCOMP 2018 if they are sent to us by June 1, 2018.

* Call for Solvers *

Solvers for all tracks of the competition will be accepted until May 25. As in previous years, we will allow updates of the solvers for some time after that (possibly after feedback from the organizers), at least until June 1. However, it would help the organizers to know in advance how many solvers may be entering, and who is seriously interested in entering SYNTCOMP. Thus, if you plan to enter one or more solvers into SYNTCOMP 2018, please let us know at your earliest convenience.

* Communication *

If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, write to jacobs@react.uni-saarland.de.

On behalf of the SYNTCOMP team,

Swen

SYNTCOMP 2017 Results

The results of SYNTCOMP 2017 have been presented at the SYNT workshop and at CAV 2017, and the experiments can be inspected in the web-frontend of our EDACC instance. An analysis of the results has been published in the proceedings of SYNT 2017.

As in previous years, the competition was split into a safety track, based on specifications in AIGER format, and an LTL track, with specifications in TLSF. Each track considered the tasks of realizability checking and synthesis, with evaluation split into sequential and parallel execution modes. For the synthesis tasks, there was an additional ranking based not only on the quantity but also on the quality of solutions.

Here are the tools that had the best results in these categories:

Simple BDD Solver (solved 171 out of 234 problems in the AIGER/safety Realizability Track, sequential mode)

TermiteSAT (solved 186 out of 234 problems in the AIGER/safety Realizability Track, parallel mode)

SafetySynth (solved 155 out of 234 problems in the AIGER/safety Synthesis Track, sequential mode, and won the quality ranking in that mode with 236 points)

AbsSynthe (solved 169 out of 234 problems in the AIGER/safety Synthesis Track, parallel mode)

Demiurge (won the quality ranking in the AIGER/safety Synthesis Track, parallel mode, with 266 points)

Party (solved most problems in all TLSF/LTL tracks and modes, and won the quality ranking in the TLSF/LTL Synthesis Track, parallel mode, with 308 points)

BoSy (won the TLSF/LTL Synthesis Track, sequential mode, with 298 points)

Congratulations to the winners, and many thanks to all contributors of benchmarks and all participants!

SYNTCOMP 2017: Call for Benchmarks and Solvers

We are preparing for the fourth annual reactive synthesis competition, which will again be affiliated with CAV, and will run in time for the results to be presented at CAV 2017 and SYNT 2017.

The competition will run mostly in the same way as last year: there are main tracks for safety specifications in AIGER format and for LTL specifications in TLSF, each separated into subtracks for realizability checking and synthesis, and evaluated separately in sequential and parallel execution modes. Like last year, we supply the synthesis format conversion tool SyFCo, which can convert TLSF into several formats, including the input formats of a number of existing synthesis tools.
The main difference from last year is that we will again have a quality metric, based on the size of the constructed implementations. Additionally, we will try to set up a track for GR(1) specifications. If you are interested in the latter, please contact me.

* Call for Benchmarks *

We are looking for new synthesis benchmarks to include into SYNTCOMP, both in AIGER and in TLSF. We accept new benchmarks at any time. New benchmarks can be included in SYNTCOMP 2017 if they are sent to us by May 7, 2017 (extended from April 30). If you send us a benchmark family for SYNTCOMP, we encourage you to also submit a paper describing the benchmark (including its motivation, properties, and possibly an analysis with existing tools) to the SYNT workshop.

* Call for Solvers *

Solvers for all tracks of the competition will be accepted until May 25. As in previous years, we will allow updates of the solvers for some time after that (possibly after feedback from the organizers), at least until June 1. However, it would help the organizers to know in advance how many solvers may be entering, and who is seriously interested in entering SYNTCOMP. Thus, if you plan to enter one or more solvers into SYNTCOMP 2017, please let us know at your earliest convenience.

* Communication *

If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, write to jacobs@react.uni-saarland.de.

On behalf of the SYNTCOMP team,

Swen

SYNTCOMP 2016: Updated Schedule

We have received initial submissions for SYNTCOMP 2016 over the weekend and have made the following update to the schedule:

  • Tools for the AIGER track are still due on May 31. If you have not submitted an initial version, but still want to compete, contact me.
  • Tools for the TLSF track are now due on June 11. But again, if you have not submitted an initial version, please contact me *now* (since the schedule will be pretty tight after the extended deadline).


TLSF v1.1

We have just published a small update to the temporal logic synthesis format (TLSF). The main changes are:

  • additional sections for initial properties of environment and system, and for invariant properties of the environment (these have also been renamed, but the old names are still supported), and
  • support for user-defined enumeration types.

The changes are described in the new TLSF format description, and are supported by the first release of the synthesis format conversion tool (SyFCo).

The rules and call for benchmarks and solvers pages have been updated accordingly.

SYNTCOMP 2015: Results

SYNTCOMP 2015 has been another big success, as witnessed by a greatly extended benchmark library and impressive improvements in most of the participating tools. The results have been presented at the SYNT workshop (slides) and are available on the web-frontend of our EDACC instance.

Most notably, the following tools had the best results in some of the categories:

Simple BDD Solver (solved 195 out of 250 problems in the Realizability Track)

AbsSynthe (solved 161 out of 239 problems in the Synthesis Track, sequential mode)

Demiurge (solved 180 out of 239 problems in the Synthesis Track, parallel mode)

Many thanks to all contributors of benchmarks and all participants!

Synthesis Competition 2014: Results

(This post replicates information that was previously included on the front page of this site. It does not contain new information, except for a link to the written report on arXiv and STTT)

SYNTCOMP 2014 was a big success: 5 synthesis tools competed in 4 different categories, crunching on the 569 benchmark instances we collected for the first competition. The results have been presented at CAV (slides) and the SYNT workshop (slides). The written report, including descriptions of the benchmarks, tools, and results, is available on arXiv.
Update: the report for SYNTCOMP 2014 has now appeared in STTT (Software Tools for Technology Transfer).

The winners of the 4 tracks are:

Synthesis (Sequential):
AbsSynthe from R. Brenguier, G.A. Pérez, J.-F. Raskin, O. Sankur (UL de Bruxelles)

Synthesis (Parallel):
Demiurge from R. Könighofer (TU Graz), M. Seidl (JKU Linz)

Realizability (Sequential):
Simple BDD Solver from L. Ryzhyk (University of Toronto, NICTA) and A. Walker (NICTA)

Realizability (Parallel):
Basil from R. Ehlers (University of Bremen, DFKI Bremen)

As part of the FLoC Olympic Games, every track winner received a Kurt Gödel medal in silver, handed over by Ed Clarke:

[Photo: the track winners receiving their Kurt Gödel medals at VSL 2014]
(f.l.t.r.: Leonid Ryzhyk, Rüdiger Ehlers, Ed Clarke, Guillermo A. Pérez, Martina Seidl, Swen Jacobs, Thomas Krennwallner)
(c) VSL / Nadja Meister

Detailed information on the first competition can be found here. If you are interested, you can have a look at our rules, and download a testing framework that contains a simple reference implementation of a synthesis tool that handles the competition format, along with a set of benchmarks and a model checker to verify results.
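To give a concrete impression of the AIGER-based competition format, here is a minimal hand-written sketch (not a benchmark from the competition set): controllable inputs are marked by the controllable_ prefix in the symbol table, and the single output is the error signal that the synthesized controller must keep low. The names request and grant are illustrative only.

    aag 3 2 0 1 1
    2
    4
    6
    6 5 2
    i0 request
    i1 controllable_grant
    o0 error
    c
    error = request AND NOT grant; illustrative sketch only

In this sketch the error output is raised whenever request is high while grant is low, so a controller that simply copies request to grant would satisfy the specification.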

Benchmark collection for 2015 finished

We have finished collecting benchmarks for SYNTCOMP 2015. Joining the 569 benchmark instances from last year are more than 2000 new benchmark instances, including new and challenging instances of existing benchmarks and 6 completely new sets of benchmarks.
We are currently working hard on testing and categorizing the benchmarks, and on coming up with a scheme for selecting benchmarks for the competition, with the goal of having a good distribution across benchmarks from different problem sets and of different difficulty.

For testing purposes, the preliminary set of new benchmarks is available in folder Benchmarks2015 of our Bitbucket repository. Note that some of the files still contain errors (such as 2 outputs instead of 1), and many of them are very hard and have not been solved by any of the tools from last year in our test runs.