[XviD-devel] HUGE ME regressions between beta1 and beta2

Michael Militzer michael at xvid.org
Sat Nov 26 12:30:37 CET 2005


Hi,

Quoting Radek Czyz <radoslaw at syskin.cjb.net>:

> Michael Militzer wrote:
> 
> > Well, the idea was: imho, the lambda*MV_bits cost function bears the
> > danger that it lowers perceived quality even though it increases PSNR. (...)
> 
> OK that's a good theory. However, isn't "moving walls" the current 
> problem with xvid? Static trails are divx3 problem but we didn't have 
> them since forever. By popular vote, robustness of MVs to noise was one 
> of my goals for 1.2 development.

Yes, certainly moving walls are a big problem. But I wouldn't say that trails
are a problem that cannot appear in XviD. E.g. with the old lambda tables,
trails were very apparent starting from QP=8 and higher. It's just that few
people used higher quants, and you don't notice it in B-frames (due to usually
~90% direct mode). Also, if you want to give it a try: double or triple the
current lambda and trails will appear ;) So the difficulty in removing the
"wobbling walls" will be to use (0,0) on these really static MBs without
favouring (0,0) (or short vectors) in general so much that trails and
smearing appear.
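The trade-off above can be sketched with a toy rate-distortion cost. This is
an illustrative model, not XviD's actual code: `mv_bits` here is a crude
magnitude-based bit estimate (real encoders use VLC tables), but it shows how
a larger lambda makes (0,0) and short vectors cheaper relative to
better-matching long ones:

```c
#include <stdlib.h>

/* Toy bit cost for a differential vector: roughly proportional to its
 * magnitude. Purely illustrative - real codecs use VLC tables. */
static int mv_bits(int dx, int dy)
{
    return 1 + abs(dx) + abs(dy);
}

/* Rate-distortion cost of a candidate vector: the encoder picks the
 * candidate minimizing SAD + lambda * mv_bits. Raising lambda biases
 * the search toward short vectors - fewer "wobbling walls", but, as
 * discussed above, trails and smearing if overdone. */
static int mv_cost(int sad, int lambda, int dx, int dy)
{
    return sad + lambda * mv_bits(dx, dy);
}
```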
 
> > That basically also shouldn't be the case at low quants. At higher quants,
> > yes sure, as the old lambda tables were introducing such a huge cost
> > penalty that each MB was pinned at MV (0,0). Certainly faster but
> > resulted in horrible quality at higher quants.
> 
> Yeah I'm puzzled by the slowdown. It doesn't seem to be related to the 
> lambda values directly - if I double lambda, speed is still pretty much 
> the same. A solid 5% loss, corresponding to ~20% more candidate checks.
> 
> The candidate checks happen because fewer predictors repeat and because 
> diamonds take more iterations to complete (which is partially
> contradictory, btw).

Hm, that's strange. Fewer predictors repeating is actually a good thing - and
I wouldn't expect a huge slow-down from a few more predictors. If the diamond
doesn't terminate early, that may have a larger influence. But it's still
strange: after all, I believe the MV penalty isn't much different with the new
lambda table than with the old one for QP<=6, say. So there shouldn't be much
difference in the search - regarding either speed or quality. Which version do
you use for the speed comparison, btw? One from directly before the lambda
patch was applied and the other with the new lambdas? So is the speed loss
really only due to the new lambdas, or could there also be other reasons?
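To see why a changed cost function alters the candidate-check count, here is a
hypothetical small-diamond refinement loop (names, the callback shape, and the
toy cost surface are illustrative, not XviD's search code). The diamond only
terminates when the center beats all four neighbours, so anything that
flattens the cost surface lets it walk, and check candidates, for longer:

```c
/* Cost callback: returns the RD cost at a candidate position.
 * Illustrative signature, not XviD's. */
typedef int (*cost_fn)(int x, int y);

/* Greedy small-diamond search: repeatedly step to the cheapest of the
 * four neighbours; stop when none improves on the center. Returns the
 * number of candidate evaluations; best position comes back via px/py. */
static int diamond_search(int *px, int *py, cost_fn cost)
{
    static const int dx[4] = { 0, 0, -1, 1 };
    static const int dy[4] = { -1, 1, 0, 0 };
    int checks = 1;
    int best = cost(*px, *py);

    for (;;) {
        int moved = 0, i;
        for (i = 0; i < 4; i++) {
            int nx = *px + dx[i], ny = *py + dy[i];
            int c = cost(nx, ny);
            checks++;
            if (c < best) {        /* strictly better -> move center */
                best = c; *px = nx; *py = ny; moved = 1;
            }
        }
        if (!moved)                /* center beats all neighbours: stop */
            return checks;
    }
}

/* Toy convex cost surface with its minimum at (3, -2), for testing. */
static int toy_cost(int x, int y)
{
    return (x - 3) * (x - 3) + (y + 2) * (y + 2);
}
```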
 
> > Perhaps one of the early-stop criteria doesn't work properly anymore. E.g.
> > the early stop to prevent extensive 8x8 search. I've just seen that these
> > early stops use fixed thresholds. Imho, they should be adaptive to be more
> > robust...
> 
> I turned them off for my investigation. It's not it.

Hm, the vector search shouldn't be so much different at low quants actually.
So that's strange...
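For reference, the adaptive-threshold suggestion quoted above could look
something like the following sketch. Everything here is a hypothetical
illustration (the function name, the scaling factor, and the comparison are
guesses, not XviD's tuned values): the point is only that the cut-off for
skipping the 8x8 sub-search tracks the quantizer instead of being a constant:

```c
/* Hypothetical adaptive early stop for the 8x8 sub-search: instead of
 * comparing the 16x16 SAD against a fixed constant, scale the threshold
 * with the quantizer, so the cut-off follows the expected residual size.
 * The factor 64 is an illustrative guess, not a tuned value. */
static int skip_8x8_search(int sad16, int quant)
{
    int threshold = 64 * quant;   /* grows with QP instead of being fixed */
    return sad16 < threshold;
}
```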
 
> > Actually, there shouldn't be large file size differences. If you look at
> > the graphs I plotted, the sample points of before and after the lambda
> > change are rather close at low quants. With the new linear lambda tables
> > just performing a bit better.
> 
> Yes, the differences are 1-2% of total size, so you can't see them on 
> graphs. But if you look at the first-pass statistics file, every single 
> b-frame is larger with the new code. It's non-texture data that is larger, 
> but boosting lambda for b-frames seems to have little effect.
> It might be the skip (direct_none_mv) decision.

Well, you would expect the new MV bit-cost penalty to be smaller than with
the old lambdas - at least that's what I'd conclude from your report that
the diamonds don't terminate as early anymore. In that case, it should
even be easier for an MB to fulfill the B_SKIP criterion. So it seems odd
that B-frame header/MV data then require more bits...
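The apparent contradiction can be made concrete with a toy B-frame mode
decision. This is a hedged sketch only: the enum, the function, the skip rule,
and all thresholds are illustrative stand-ins for the direct_none_mv/B_SKIP
logic being discussed, not XviD's actual decision code:

```c
/* Toy B-frame macroblock mode decision. DIRECT sends no vectors, so a
 * smaller lambda*MV_bits penalty should make the no-MV paths win at
 * least as often - which is why more MV bits per B-frame is surprising. */
enum b_mode { B_SKIP, B_DIRECT, B_INTERPOLATE };

static enum b_mode choose_b_mode(int direct_sad, int interp_sad,
                                 int interp_mv_bits, int lambda, int quant)
{
    /* direct_none_mv-style skip: direct prediction already near-perfect
     * (threshold is an illustrative guess) */
    if (direct_sad < 2 * quant)
        return B_SKIP;
    /* otherwise compare RD costs; DIRECT pays no MV bits */
    if (direct_sad <= interp_sad + lambda * interp_mv_bits)
        return B_DIRECT;
    return B_INTERPOLATE;
}
```

Under this model, lowering lambda shrinks the interpolated mode's MV penalty, so interpolated mode wins more often and carries MV data that DIRECT would not, one possible explanation for the larger non-texture size in the first-pass statistics.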

Michael
