New benchmarks in TLSF have been added to our benchmark repository. They include a decomposed version of the well-known AMBA benchmark, and unrealizable variants of the parameterized benchmarks from last year.
With SYNTCOMP 2016, we have the first major extension of the competition, from pure safety properties to full LTL. The new track is based on the TLSF format and had 3 participants in its first edition. The AIGER-based safety track continues as well, with 3 completely new participants in addition to 3 returning ones.
The SYNTCOMP 2016 results were presented at the SYNT workshop (slides) and are available on the web-frontend of our EDACC instance. An analysis of the results has been published in the proceedings of SYNT 2016.
Most notably, the following tools had the best results in some of the categories:
Simple BDD Solver (solved 175 out of 234 problems in the AIGER/safety Realizability Track, sequential mode)
AbsSynthe (solved 181 out of 234 problems in the AIGER/safety Realizability Track, parallel mode, and 165 out of 215 in the Synthesis Track, parallel mode)
SafetySynth (solved 153 out of 215 problems in the AIGER/safety Synthesis Track, sequential mode)
Acacia4Aiger (solved 153 out of 195 problems in the TLSF/LTL Realizability Track)
BoSy (solved 138 out of 185 problems in the TLSF/LTL Synthesis Track)
Many thanks to all contributors of benchmarks and all participants!
We are preparing for the third annual reactive synthesis competition, which will again be affiliated with CAV, and will run in time for the results to be presented at CAV 2016 and SYNT 2016.
We have a major extension of the competition this year: in addition to safety specifications in AIGER format, we will have tracks for synthesis from temporal logic specifications in LTL and GR(1). Check out the description of the temporal logic synthesis format (TLSF) if you are interested in submitting a solver or benchmarks in the new format. We also supply the synthesis format conversion tool SyFCo, which can rewrite the new format into several forms, including the input formats of a number of existing synthesis tools.
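For readers who have not seen TLSF yet, here is a minimal specification sketch in the spirit of the TLSF basic format (the exact syntax is defined in the format description linked above; the arbiter property shown here is our own illustrative example, not a competition benchmark):

```
INFO {
  TITLE:       "Minimal arbiter sketch"
  DESCRIPTION: "Every request is eventually granted"
  SEMANTICS:   Mealy
  TARGET:      Mealy
}

MAIN {
  INPUTS  { r; }
  OUTPUTS { g; }
  GUARANTEES {
    G (r -> F g);
  }
}
```

A tool like SyFCo can then translate such a specification into the input formats of several existing synthesis tools.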
* Call for Benchmarks *
We are looking for new synthesis benchmarks to include into SYNTCOMP, both in AIGER and in TLSF. We accept new benchmarks at any time. New benchmarks can be included in SYNTCOMP 2016 if they are sent to us by April 30, 2016.
* Call for Solvers *
Solvers for all tracks of the competition will be accepted until May 21 (initial version), with the possibility of updating the solver until May 31 (possibly after feedback from the organizers). However, it would help the organizers to know in advance how many solvers may be entering, and who is seriously interested in entering SYNTCOMP. Thus, if you plan to enter one or more solvers into SYNTCOMP 2016, please let us know at your earliest convenience.
* Communication *
If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, write to firstname.lastname@example.org.
On behalf of the SYNTCOMP team,
Preparations for SYNTCOMP 2015 have started. The main tracks of the competition will be the same as this year (with modified evaluation rules), and the schedule will (presumably) also be similar:
– benchmarks can be submitted until the end of April 2015
– tools can be submitted until the end of May 2015
– after submission, tools will be tested, and we will allow re-submission in case of errors if time permits
– the competition will conclude before CAV 2015
Most importantly at this point, we are looking for more benchmarks for SYNTCOMP 2015. If you can supply us with benchmarks in AIGER format, you will be able to upload them directly to our EDACC-system soon. If you have an interesting benchmark set that still needs to be translated, please let us know.
More details on SYNTCOMP 2015, and a full archive of the data produced at the first competition, will be released soon.
The deadline for submission of synthesis tools for SYNTCOMP 2014 has passed, and we are now starting testing in the competition environment. We are happy to report that we have received a good number of solvers from 6 different research groups.
We are willing to accept last-minute entrants if they run without major problems in our environment (and of course, submitted tools can still be re-submitted during the next 2-3 weeks under the same condition). Please contact me (email@example.com) if you still want to submit a tool.
We found and fixed bugs in the reference implementation supplied with the testing framework for SYNTCOMP (here). However, we will not update the package. Instead, please download the latest version from Bitbucket (to update, simply replace aisy.py in your installation with the one from the repository).
As of today, the benchmark set for SYNTCOMP 2014 is fixed. Overall, we have collected 6 sets of benchmarks, with more than 500 individual synthesis problems. The benchmarks are available in our Bitbucket repository and on the Synthesis Competition Server (after registration). Of course we still welcome new benchmarks, but they will no longer be used in the 2014 competition.
Solver submission for 2014 is open on the same server. If you are developing a synthesis tool for the competition, please have a look at the reference implementation and testing framework, and submit a first version of your tool by the end of May 2014.
Most importantly at this point, we are looking for more benchmarks. These can directly be uploaded to the EDACC-system, which will also be used to run the competition itself. Upon registration, you are able to upload benchmarks, and also download all the benchmarks that have already been converted to the competition format. See the Call for Benchmarks for details.
Furthermore, the same account allows you to upload synthesis tools. If they compile on our competition cluster, they can enter SYNTCOMP 2014. Official submission of tools will be in May 2014. See the rules and the submission page for details.
Today we present the results of our open discussions up to and including the SYNT workshop, along with some internal discussions.
We think that, in order to have a successful competition at CAV/SYNT
2014, the three most important things are: i) keeping the format
simple, ii) having sufficiently many competing tools, iii) supplying
the community with a format, benchmark set and framework for testing
their tools as early as possible.
These three points led us to choosing the AIGER format — restricted to
safety specifications, like in the main category of the HWMCC — both
as input and output format for the synthesis competition. For the input
format, we need to add a partition of inputs into controllable and
uncontrollable, but otherwise we can build on a
well-known language with a clear semantics and existing tool
support. For the output format, choosing AIGER allows us to almost
directly check the synthesized artifacts with any model checker that
supports the format (in particular, any that competes in the HWMCC). A
description of how we plan to use the AIGER format in
reactive synthesis can be found here: Format Proposal (v0.1).
In the coming weeks, we will be working on
i) converting as many benchmarks as possible to the AIGER format, with
bounded approximation of liveness properties by safety properties
ii) implementing a framework that allows potential participants to
adapt their tools to the new input and output format, and test them in
an environment that resembles that of the competition itself
iii) developing a formal description of the format and a detailed
ruleset for the competition.
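To make point (i) above concrete: a liveness property such as G(r -> F g) can be approximated by the safety property "every request is granted within k steps", which a monitor with a simple counter can check. A minimal sketch of such a monitor (names and interface are our own, purely illustrative):

```python
def bounded_response_monitor(k):
    """Safety approximation of G(r -> F g): report a violation if a
    request stays ungranted for more than k steps."""
    def run(trace):  # trace: list of (request, grant) boolean pairs
        pending = 0  # steps the oldest unserved request has waited (0 = none)
        for r, g in trace:
            if g:
                pending = 0      # a grant discharges the obligation
            elif pending:
                pending += 1     # the obligation ages by one step
            if r and not g and pending == 0:
                pending = 1      # a new obligation starts
            if pending > k:
                return False     # safety violation: bound exceeded
        return True
    return run
```

Note that the approximation is sound in one direction: a system that grants every request within some fixed k also satisfies the original liveness property, so only unrealizability results need extra care.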
The discussions in St. Petersburg made clear that there is an interest
in more expressive specifications, like full LTL. There are a number
of additional problems to be solved for supporting this in the
competition. Most importantly, on the one hand LTL poses a rather high
entry barrier, even with existing tools for conversion to different
automata formats, and on the other hand we cannot expect the resulting
artifacts to be verified formally by existing model checkers. Thus, it
will be much more difficult to assess and rank
solutions. Still, we are thinking about how we could include a full
LTL track into the competition, and are open to suggestions on how to
best achieve this.
As always, questions, comments, and suggestions are very welcome!