[XviD-devel] R-D "optimal"

Christoph Lampert chl at math.uni-bonn.de
Wed May 7 16:08:24 CEST 2003


Hi,

a small remark, and I would love some discussion with the experts: When I
first heard of rate-distortion-optimal methods (e.g. Trellis-based
quantization), I understood "optimal" as "best possible". After I saw how
it is done, I realized it is only "optimal" in the sense of minimal cost
for a cost function weighting bits versus distortion with a _fixed_
Lagrangian parameter.

E.g. one rule of thumb is to set   lambda = 0.85*quant*quant ;
and to calculate the cost function as

J(parameters) = sum-of-squared-errors + lambda * bits-needed,

so e.g. for quant 10 we spend 1 extra bit only if this lowers the SSE by
at least 85, while for quant 2 already if it lowers the SSE by about 3.4.
(the scaling might be a little different though...)
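Written down as code, the decision rule I mean is just this (a minimal
Python sketch of the fixed-lambda comparison above; the function names
are mine, not XviD's):

```python
def rd_cost(sse, bits, quant):
    """Lagrangian cost J = SSE + lambda * bits, with lambda = 0.85*quant*quant."""
    lam = 0.85 * quant * quant
    return sse + lam * bits

def pick_candidate(candidates, quant):
    """Pick the (sse, bits) pair with the lowest Lagrangian cost J."""
    return min(candidates, key=lambda c: rd_cost(c[0], c[1], quant))

# At quant 10 (lambda = 85): spending 1 extra bit to save 80 in SSE
# is rejected, because 80 < 85.
print(pick_candidate([(200, 10), (120, 11)], 10))  # -> (200, 10)

# At quant 2 (lambda = 3.4): the very same trade is accepted,
# because 80 > 3.4.
print(pick_candidate([(200, 10), (120, 11)], 2))   # -> (120, 11)
```

The same candidates, so the decision flips purely because of lambda --
which is exactly my point below.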

I hesitate to call this "optimal", because the best value of lambda might
be completely different for different input clips! Or even for different
frames...

What can we do about this? Do you know how to solve this?

gruel 
