
Re: A proposal...

> On 23.04.2018 at 16:00, Jim Jagielski <jim@xxxxxxxxxxx> wrote:
> It seems that, IMO, if there was not so much concern about "regressions" in releases, this whole revisit-versioning debate would not have come up. This implies, to me at least, that the root cause (as I've said before) appears to be one related to QA and testing more than anything. Unless we address this, then nothing else really matters.
> We have a test framework. The questions are:

Personal view/usage answers:

> 1. Are we using it?

On release candidates only.

> 2. Are we using it sufficiently well?

 * I only added very basic tests for h2, since Perl's capabilities here are rather limited.
 * The whole framework was hard to figure out; it took me a while to get vhost setups working.

> 3. If not, what can we do to improve that?

 * A CI setup would help.

> 4. Can we supplement/replace it w/ other frameworks?

 * For mod_h2 I started with just shell scripts. Those still make up my h2 test suite,
   using the nghttp and curl clients as well as Go (if available).
 * For mod_md I used pytest, which I found to be an excellent framework. The test suite
   is available in the GitHub repository of mod_md.
 * Based on Robert Swiecki's honggfuzz, there is an h2fuzz project for fuzzing
   our server. This works very well on a Linux-style system.
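For the pytest-based suites, the pattern is roughly the following. This is only a hedged sketch, not code from the actual mod_md suite: it substitutes Python's stdlib HTTP server for a real httpd instance so the example is self-contained, and the helper names and test paths are illustrative.

```python
# Sketch of a pytest-style HTTP smoke test. A real suite would start a
# configured httpd; here Python's stdlib server stands in for it.
import http.server
import threading
import urllib.error
import urllib.request


def start_test_server():
    """Start a throwaway HTTP server on a random free port.

    Returns the server object and its base URL.
    """
    handler = http.server.SimpleHTTPRequestHandler  # serves the cwd
    srv = http.server.HTTPServer(("127.0.0.1", 0), handler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv, f"http://127.0.0.1:{srv.server_address[1]}"


def get_status(url):
    """Return the HTTP status code for url, treating error codes as data."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code


def test_root_and_missing():
    srv, base = start_test_server()
    try:
        assert get_status(base + "/") == 200            # directory listing
        assert get_status(base + "/nope.html") == 404   # unknown resource
    finally:
        srv.shutdown()
```

The appeal of this style is that each test reads as a plain assertion about observable server behaviour, and pytest fixtures can handle server start/stop and per-test configuration.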

So, I do run a collection of things. All are documented, but none is really tied into
the httpd test framework.

> It does seem to me that each time we patch something, there should be a test added or extended which covers that bug. We have gotten lax in that. Same for features. And the more substantial the change (ie, the more core code it touches, or the more it refactors something), the more we should envision what tests can be in place which ensure nothing breaks.

I do that for the code I wrote myself. Not because I care only about my own code, but because the coverage and documentation of other parts of the server do not really give me an idea of what should and should not work. So, I am the wrong person to place assertions into test cases for those code parts.

Example: the current mod_ssl "enabled" quirkiness discovered by Joe would ideally be documented in a new test case now. But neither I nor Yann would have found it before release via testing (the existing tests passed), nor did we anticipate such breakage.

Such undocumented and untested behaviour, which is nevertheless considered a regression, cannot be avoided, since it cannot be anticipated by the people currently working on those code parts. This seems to be a legacy of the past, one we can only overcome through breakage and the test cases added as a result.

> In other words: nothing backported unless it also involves some changes to the Perl test framework or some pretty convincing reasons why it's not required.

See above: this will not prevent the unforeseeable breakage that results from use cases that are unknown and untested.