[XviD-devel] [In progress] devapi4 -- mpeg matrices

Michael Militzer michael at xvid.org
Fri Nov 28 16:31:54 CET 2003


Hi,

Quoting Edouard Gomez <ed.gomez at free.fr>:

> Michael Militzer (michael at xvid.org) wrote:
> > > h263 is identical. mpeg will be identical too once i reintroduce the
> > > RD optimized bias.
> > 
> > ok, so obviously the new code is _not_ identical after all - so do you
> > have any clue at all why it's different?
> 
> Fixed point computing probably.
> 
> Old mpeg code was using real division and ... let's see old h263...
>  - the 1/(2q) division used 16-bit precision with a +1 bias; now it uses
>    15-bit precision with a +0 bias

The old mpeg code also used a fixed-point multiply + shift and no division
(at least in the SIMD versions; maybe the c-version was intended as a
reference implementation and therefore used real divisions).

> I'll check if changing this generates same bitstream later.
> 
> > Now that the new code behaves differently: do you have any proof that
> > it's really better?
> 
> Quality wise, see next answer. Thread/instance wise, now it's *safe*, so
> it's better.
>
> Let's explain the problem again:
>  - xvid shares a single matrix slot amongst all instances (encoders or
>    decoders). So the code is full of data races at the de/quant stage.

[...]

I know the problem. I was referring to the fact that you reported a nice
gain from the new code but tested only a single sequence. As long as you
haven't tested a wider variety of sequences, you can't say whether this
PSNR difference (whose origin nobody knows yet) really is a gain or
perhaps an overall loss. The difference between the old and the new code
might be a benefit for the one sequence you tested but may turn out to be
a disadvantage over a wider range of sequences. So it's not clear so far
that the new code is better (PSNR-wise) than the old one.

You wrote earlier that the old quant framework had a design mistake: I
don't think you can put it like that. When the old quant code was
written, supporting custom matrices wasn't even planned, so the design
was right and everything worked nicely (after the initial bugs got
squashed). Later on we introduced custom matrices, and to adapt the
quant code to this change we introduced the quant matrices as global
variables - just a quick hack until someone adapted the code to accept
the quant matrices as function parameters. Now, I think the modification
needed to turn a global variable into a parameter doesn't necessarily
require completely replacing/rewriting the whole quantization code
(c, mmx, xmm, 3dn + the equivalent h.263 quant code).

What concerns me is that in dev-api-4 we've been replacing huge amounts
of code with newly rewritten/beautified/untested code without a real
need to do so (the ME split, the 2-pass rewrite, and now the quant code
replacement). Every such replacement of old code by some 'nicer' code
carries the risk of introducing bugs - and it's not just an abstract
risk: with each replacement mentioned above, bugs actually were
introduced. We found some serious ME bugs that were introduced purely by
beautifying code, and the 2-pass code was broken for b-frames. Since
dev-api-4 isn't in public testing yet, we can't say whether we really
found all the bugs or whether there are many more that we simply
overlooked ourselves.

I just want to say that the quant code (like the 2-pass code and other
code as well) had been in use for more than two years and had proven to
be stable. Now we're on the road to XviD 1.0, which should become our
first official _stable_ release, yet again and again we remove stable
code from the XviD code base and replace it with completely different
and far less thoroughly tested code - this is counter-productive to our
goal of reaching a stable milestone to build on. I remember that we had
problems with our quant code when it was rather new: people noticed
strange, ugly-looking blocks that were caused by overflows at very low
quants which we had overlooked. Skal is right to suggest that we need to
check whether his code works as expected for very low and very high
quants...

As a result, all newly introduced code needs special testing in a public
beta phase: we have to figure out whether the new quant code really
works without flaws, we have to wait for user feedback to find out
whether users like the behaviour of the new 2-pass code as much as the
old one, and so on. All of this costs time, and every piece of new code
that needs further testing makes the beta phase longer.

It's not that I don't believe you that all the new code is indeed nicer,
but that's not the point: we wanted 1.0 to be a rock-solid, stable
version, but with each replacement of a bigger part of the code we move
further away from that goal. We agreed on a feature freeze a long time
ago in order to stabilize the code base and concentrate only on bug
fixing and portability issues. However, if bug fixing means removing
several thousand lines of tested code and replacing it with new code,
then having a feature freeze is just utterly ridiculous.

bye,
Michael
