[asciidoc-lang-dev] Could we create a common test suite for all implementation efforts?
On 9/3/21 at 12:50, Dan Allen wrote:
> If you aren't already, I would strongly encourage you to test
> scenarios in Asciidoctor first. I think you will often get an answer
> faster that way.
I agree. I've tested a lot of scenarios against Asciidoctor in the last
few months and my view of what AsciiDoc actually is has substantially
changed. The documentation we currently have is quite complete for
writers, but frustratingly inaccurate for implementers (it's the job of
this WG to change this situation, of course).
Something that I'd find immensely useful is a shared,
language-independent test suite, based on the current test suite for
Asciidoctor. It would allow for the different implementations to start
converging _before_ we have an accurate, complete and agreed upon
description of the language, and cooperatively discover what are the
common pain points (implementation and definition-wise), and what
Asciidoctor decisions need to be challenged.
I could try to call Asciidoctor's testing machinery from my (non-Ruby
based) implementation, but I'd prefer to try another approach first.
Many test cases in Asciidoctor consist of:
* Some input string
* An XPath expression (or CSS selector)
* The number of objects the XPath expression (or CSS selector) is
expected to find when evaluated against the output produced by
Asciidoctor (or some other condition on those objects)
See, for example:
Those triples could exist in a language-independent format (e.g. JSON or
YAML) so they could easily be used for testing in any programming
language. (Not all of Asciidoctor's test suite can easily be translated
into this declarative form, but the part that can be would, IMO, be very
useful.)
Then, I wonder:
1. Is this testing information already available in a non-Ruby format?
2. Has someone tried to do something similar to what I'm suggesting?
3. Do you think it is feasible/useful to parse the Asciidoctor test
suite (or use Ruby introspection somehow) to semi-automatically generate
this set of language-independent test cases? Do you have a better idea
for starting to collect as many test cases as possible in a way that is
useful for all implementation projects?
4. Would you participate in an effort to create this
language-independent test suite?
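Regarding question 3, the semi-automatic extraction could start as simply as pattern-matching one common assertion shape in the Ruby test sources. The sketch below assumes (hypothetically) assertions of the form `assert_xpath '<expr>', output, <count>`; real Asciidoctor tests are far more varied, so this is only meant to show the approach, not a full parser:

```python
import json
import re

# Matches the hypothetical shape: assert_xpath '<expr>', <var>, <count>
ASSERT_RE = re.compile(r"assert_xpath\s+'([^']+)',\s*\w+,\s*(\d+)")

# An invented Ruby-test-like snippet, standing in for real test source.
RUBY_SNIPPET = """
output = convert_string input
assert_xpath '//p', output, 1
assert_xpath '//ul/li', output, 3
"""

def extract_cases(ruby_source, input_text):
    """Turn matched assertions into language-independent (input, xpath, count) triples."""
    cases = []
    for xpath, count in ASSERT_RE.findall(ruby_source):
        cases.append({"input": input_text, "xpath": xpath, "count": int(count)})
    return cases

print(json.dumps(extract_cases(RUBY_SNIPPET, "sample input"), indent=2))
```

Assertions that compute the expected count dynamically, or that inspect node contents, would still need manual translation, which is why I call this "semi-automatic".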