chl at math.uni-bonn.de
Fri Jan 30 20:33:30 CET 2004
On Fri, 30 Jan 2004, tomas carnecky wrote:
> The path of a frame through the encoder (feel free to correct me).
> The motion estimation is somewhere between these steps, but I
> don't know where; this is from a ppt presentation of how JPEG works:
> 1. Transforming to YUV colorspace
> 2. Color subsampling: 8x8 blocks
> Motion_Estimation, depending on previous frame
> 3. DCT 
> 4. Quantization
> 5. Serialization
> 6. Coding (Huffman/Arithmetic)
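The quoted stages 3–4 can be illustrated on a single 8x8 block. A minimal pure-Python sketch (XviD itself is C) using a naive O(N^4) 2-D DCT-II and a uniform quantizer; real encoders use fast DCT factorizations and per-frequency quantization matrices:

```python
import math

N = 8

def dct2(block):
    """Naive 2-D DCT-II of an 8x8 block (list of lists of floats)."""
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = cu * cv * s
    return out

def quantize(coeffs, q):
    """Uniform quantization: divide each coefficient and round."""
    return [[round(c / q) for c in row] for row in coeffs]

# A flat (constant) block: all energy lands in the DC coefficient,
# which is why the later serialization/entropy steps compress so well.
flat = [[128.0] * N for _ in range(N)]
q = quantize(dct2(flat), 16)
print(q[0][0])   # DC coefficient: 64
print(q[0][1])   # AC coefficients quantize to zero for a flat block
```

After quantization most AC coefficients are zero, which is exactly what the zigzag serialization and entropy coding steps exploit.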
> And on what kind of data does the 'next' frame depend?
> Christoph said that it depends on 'image data, which
> means the frame after DCT/quantization/iDCT'.
> Why after iDCT? That is the inverse operation of the DCT, and
> I thought it's only used while decoding a stream?
Yes, but the encoder has to build up an internal copy of the
decoded images, because it has to "know" what the decoder sees.
Please check documentation on MPEG, not JPEG, for this stuff.
Speed issues on MPEG are completely different from speed
problems of JPEG.
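That internal decode loop can be sketched in a few lines. A toy model (assumptions: a uniform scalar quantizer and one sample per "frame", just to show the principle): the encoder predicts each frame from its own reconstruction, not from the original, so encoder and decoder stay in sync instead of drifting apart.

```python
def quantize(x, q):
    return round(x / q)

def dequantize(level, q):
    return level * q

def encode(frames, q):
    """Code each frame as a quantized residual against the
    reconstruction of the previous frame (the decoder's view)."""
    recon = 0
    levels = []
    for f in frames:
        level = quantize(f - recon, q)
        levels.append(level)
        recon = recon + dequantize(level, q)   # mirror the decoder
    return levels

def decode(levels, q):
    recon, out = 0, []
    for level in levels:
        recon = recon + dequantize(level, q)
        out.append(recon)
    return out

frames = [100, 103, 110, 111]
levels = encode(frames, 10)
print(decode(levels, 10))   # [100, 100, 110, 110]
```

Each decoded value stays within half a quantizer step of the input; if the encoder predicted from the original frames instead, the quantization errors would accumulate frame after frame.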
> Anyway, there are still two steps after quantization.
> So while thread two processes the frame further (step 5 and 6),
> thread one could already start analyzing the next frame.
> Anyone has an idea of how long these different steps take (in
> percent)? Which one is the most 'expensive' (cpu/memory/io)?
Motion estimation and compensation are (by far) the most expensive steps:
over 80% of total time.
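The core of that cost is the block-matching inner loop: computing a sum of absolute differences (SAD) for every candidate motion vector. A toy exhaustive search for illustration only; real encoders use much smarter search patterns and SIMD SAD kernels, but this is where the cycles go:

```python
def sad(cur, ref, bx, by, dx, dy, n=4):
    """SAD between an n x n block of `cur` at (bx, by) and the
    block of `ref` displaced by the motion vector (dx, dy)."""
    return sum(abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
               for y in range(n) for x in range(n))

def full_search(cur, ref, bx, by, radius=1, n=4):
    """Return (cost, dx, dy) with minimal SAD in the search window."""
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cost = sad(cur, ref, bx, by, dx, dy, n)
            if best is None or cost < best[0]:
                best = (cost, dx, dy)
    return best

# Reference frame: a bright 4x4 square at (2, 2) in an 8x8 frame;
# current frame: the same square shifted one pixel to the right.
ref = [[255 if 2 <= x < 6 and 2 <= y < 6 else 0 for x in range(8)]
       for y in range(8)]
cur = [[255 if 3 <= x < 7 and 2 <= y < 6 else 0 for x in range(8)]
       for y in range(8)]
cost, dx, dy = full_search(cur, ref, 3, 2)
print((dx, dy))   # (-1, 0): the vector points back to the old position
```

Even this tiny example evaluates nine SAD sums of 16 differences each, per block; scale that to thousands of macroblocks per frame and a larger search range, and the 80% figure becomes plausible.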
>  JPEG 2000 uses DWT: Discrete Wavelet Transform,
> which seems to be better than DCT. Why isn't it used in
> MPEG-4?
Because MPEG-4 is rather old; because wavelets are great for still images
but have problems with motion estimation; and because wavelets aren't
that great at the bitrates MPEG-4 (or at least XviD) is targeted at.
> Is the usage of DCT defined in the standard?