
Re: A proposal...

On 2018-04-23 09:00, Jim Jagielski wrote:
> It seems that, IMO, if there was not so much concern about
> "regressions" in releases, this whole revisit-versioning debate would
> not have come up. This implies, to me at least, that the root cause
> (as I've said before) appears to be one related to QA and testing more
> than anything. Unless we address this, then nothing else really

We have a test framework. The questions are:

 1. Are we using it?
 2. Are we using it sufficiently well?
 3. If not, what can we do to improve that?
 4. Can we supplement/replace it w/ other frameworks?

My opinion (I think mentioned here on-list before, too) is that the framework is too... mystical. A lot of us do not understand how it works and it's a significant cognitive exercise to get started. Getting it installed and up and running is also non-trivial.

I am willing to invest time working with anyone who would like to generate more documentation to demystify the framework. Pair programming, maybe, to go with this newfangled test-driven design thought??? :-). I do not understand the ins and outs of the framework very well, but am willing to learn more to ferret out the things that should be better documented. Answering questions like "How do I add a vhost for a specific test?", "Are there any convenient test wrappers for HTTP(s) requests?", and "How do I write a test case from scratch?" would be a great starting point.

Also, FWIW, at $dayjob we use serverspec as a testing framework for infrastructure like httpd. After some initial thrashing and avoidance, I've come to like it quite well. If we prefer to stay with a scripting language for tests (I do), Ruby is a decent choice since it has all the niceties that we'd expect (HTTP(s), XML/JSON/YML, threading, native testing framework, crypto) built in. I'm happy to provide an example or two if anyone is interested in exploring the topic in more depth.
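To make that concrete, here is a minimal, stdlib-only Ruby sketch of the kind of thing I mean — not serverspec itself, and not our real suite; the with_stub_server helper and its defaults are hypothetical names I made up for illustration. It shows the niceties mentioned above (threading, HTTP client, simple assertions) with nothing beyond what ships with Ruby:

```ruby
require "socket"
require "net/http"
require "uri"

# Hypothetical helper for illustration only: starts a one-shot HTTP stub
# on a free localhost port in a background thread, yields the port to the
# caller, then tears everything down.
def with_stub_server(body: "hello", content_type: "text/plain")
  server = TCPServer.new("127.0.0.1", 0) # port 0 => OS picks a free port
  port   = server.addr[1]
  thread = Thread.new do
    client = server.accept
    # Drain the request line and headers until the blank line.
    loop do
      line = client.gets
      break if line.nil? || line.chomp.empty?
    end
    client.write "HTTP/1.1 200 OK\r\n" \
                 "Content-Type: #{content_type}\r\n" \
                 "Content-Length: #{body.bytesize}\r\n" \
                 "Connection: close\r\n\r\n#{body}"
    client.close
  end
  yield port
ensure
  thread&.join
  server&.close
end

# The "test": make a real HTTP request and check what came back.
with_stub_server do |port|
  res = Net::HTTP.get_response(URI("http://127.0.0.1:#{port}/"))
  puts res.code            # "200"
  puts res["Content-Type"] # "text/plain"
end
```

In a real suite these assertions would live in minitest (or serverspec's RSpec layer) and the target would be an actual httpd instance rather than a stub, but the shape — issue a request, assert on status/headers/body — is the same.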

> It does seem to me that each time we patch something, there should be
> a test added or extended which covers that bug. We have gotten lax in
> that. Same for features. And the more substantial the change (ie, the
> more core code it touches, or the more it refactors something), the
> more we should envision what tests can be in place which ensure
> nothing breaks.
>
> In other words: nothing backported unless it also involves some
> changes to the Perl test framework or some pretty convincing reasons
> why it's not required.

I completely support creating this as a procedure, provided we tackle the "how do I test stuff" doco challenges, too.

Daniel Ruggeri