============================================================

NOTE: the 'libvo2 from scratch' plan was abandoned, we're changing libvo1 now.

So, this draft is ONLY A DRAFT, see libvo.txt for the current code docs!

============================================================

//First announce by Ivan Kalvachev
//Some explanations by Arpi & Pontscho

If you have any suggestions related to the subjects in this document, you can
send them to the mplayer developer or advanced users mailing lists. If you are
a developer and have CVS access, do not delete parts of this document; instead,
feel free to add paragraphs signed with your name.
Be warned that the text may be changed or modified, and your name may be
moved to the top of the document.

1. libvo2 drivers
1.1 functions
Currently these functions are implemented (a rough C sketch of such an
interface follows the list):
init
control
start
stop
get_surface
update_surface - renamed draw
show_surface - renamed flip_page
query
hw_decode
subpicture
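
Purely as an illustration of the list above, such a driver could be described
by a table of function pointers. This is only a sketch; the names, signatures
and the vo2_driver_t type are assumptions, not the actual libvo2 API:

    /* hypothetical sketch of a libvo2 driver descriptor */
    typedef struct vo2_driver {
        const char *name;
        int  (*init)(void);                           /* called once at mplayer start  */
        int  (*control)(int request, void *arg);      /* message oriented control      */
        int  (*start)(int mode_id);                   /* set the given mode, show it   */
        int  (*stop)(void);                           /* close; start() may follow     */
        int  (*query)(int what, void *result);        /* capability/mode negotiation   */
        void *(*get_surface)(int surface_id);         /* writable surface pointer      */
        int  (*update_surface)(int ystart, int yend); /* slice draw / system update    */
        int  (*show_surface)(int surface_id);         /* flip_page equivalent          */
        int  (*hw_decode)(void *data);                /* dvb/dxr3/TV style devices     */
        int  (*subpicture)(void *bitmaps, int count); /* place/remove OSD bitmaps      */
    } vo2_driver_t;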

Here is a detailed description of the functions:
init - initialisation. It is called once at mplayer start.
control - a message-oriented interface for controlling the libvo2 driver.
start - sets the given mode and displays it on the screen.
stop - closes the libvo2 driver; after stop we may call start again.
query - the negotiation is more complex than just finding which imgfmt the
  device can show; we must have a list of capabilities, etc.
  This function will have at least 3 modes:
   a) return a list with descriptions of the available modes.
   b) check whether we can use this mode with these parameters. E.g. if we want
      RGB32 with 3 surfaces for a windowed 800x600 image we may run out of video
      memory. We don't want an error, because this mode could still be used with
      2 surfaces.
   c) return the supported subpicture formats, if any.
  +d) the functionality supported by hw_decode.

As you can see, I have removed some functionality from control() and made it a
separate function. Why? It is generally a good thing for functions that are
critical to the driver to have their own implementation.

get_surface - this function gives us surfaces we can write to. In most
cases this is video memory, but it may also be ordinary RAM with some
special meaning (AGP memory, X shared memory, GL texture ...).

update_surface - as in the note above, this is the draw function. Why did I
change its name? I have 2 reasons: first, I don't want an implementation like
vo1; second, it really must update the video surface, i.e. directly call the
system function that does it. This function should work only with slices; the
size of a slice should not be limited and should be passed (e.g. ystart, yend).
If we want a whole-frame draw function, we will call one from the libvo2 core
that calls this one with ystart=0; yend=Ymax;. Also, some system screen-update
functions wait for vertical retrace before returning, and other functions just
can't handle partial updates. In that case we should inform the libvo2 core
that the device cannot slice; the libvo2 core must then take care of the
additional buffering, and update_surface becomes a usual draw function.
When update_surface() is used in combination with get_surface(), THE ONLY
VALID POINTERS ARE THOSE RETURNED BY get_surface(). Watch out with cropping.
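
To illustrate the whole-frame case described above, the core-side helper could
be little more than a loop over slices. This is only a sketch; the function
name, the vo2_driver_t type from the earlier sketch and the slice height are
all assumptions:

    #define SLICE_H 8   /* tunable slice height, 8 is the usual value */

    /* draw a full frame through a slice-only update_surface(ystart, yend) */
    static int vo2_core_draw_frame(vo2_driver_t *drv, int image_height)
    {
        int ystart;
        for (ystart = 0; ystart < image_height; ystart += SLICE_H) {
            int yend = ystart + SLICE_H;
            if (yend > image_height)
                yend = image_height;
            if (drv->update_surface(ystart, yend) < 0)
                return -1;    /* the device refused the partial update */
        }
        return 0;
    }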

show_surface - this function is always called on frame change. It is used
to show the given surface on the screen.
If there is only one surface then it is always visible and this function
does nothing.

hw_decode - to make all dvb, dxr3, TV etc. developers happy, this function
is for you. Be careful, don't monopolize it; think of the future too: this
function should also be able to control HW IDCT and MC, which one day will
be supported under linux as well. Be careful:)

subpicture - this function will place subtitles. It must be called once to
place them and once to remove them; it should not be called on every
frame, the driver will take care of this. Currently I propose this
implementation: we get an array of bitmaps. Each one has its own starting
x, y and its own height and width; each one (or all together) could be
in a specific imgfmt (spfmt). THE BITMAPS SHOULD NOT OVERLAP! This may not
be a hw limitation, but sw subtitles may get confused if they work as a 'c'
filter (look at my libvo2 core). Anyway, so far I don't know of hardware that
has such limitations, but it is safer this way (and faster, I think).
It is generally good to merge small bitmaps (like characters) into larger
ones and to make all subtitles one bitmap (or one bitmap per subtitle line).
There will also be one for each OSD item: time & seek/brightness/contrast/volume bar.
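
A minimal sketch of what one element of that bitmap array could look like; the
structure and its field names are assumptions made for illustration, not an
existing MPlayer type:

    /* one OSD/subtitle bitmap; subpicture() receives an array of these */
    typedef struct vo2_spu_bitmap {
        int x, y;              /* top-left position on the frame        */
        int width, height;     /* dimensions of this bitmap, in pixels  */
        int spfmt;             /* subpicture pixel format               */
        unsigned char *data;   /* bitmap data, must not overlap others  */
    } vo2_spu_bitmap_t;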

1.2 control()
OK, here is a list of some control()s that I think could be useful (an
illustrative enum follows below):
SET_ASPECT
SET_SCALE_X, SET_SIZE_X
SET_SCALE_Y, SET_SIZE_Y
RESET_SIZE
GET/SET_POSITION_X
GET/SET_POSITION_Y
GET/SET_RESOLUTION
GET/SET_DISPLAY
GET/SET_ATTRIBUTES
+ GET/SET_WIN_DECORATION
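
Expressed as C, these requests could simply be an enum handed to control();
the identifiers and values below are illustrative only:

    /* hypothetical request codes for control() */
    enum vo2_control {
        VO2_SET_ASPECT,
        VO2_SET_SCALE_X,    VO2_SET_SIZE_X,
        VO2_SET_SCALE_Y,    VO2_SET_SIZE_Y,
        VO2_RESET_SIZE,
        VO2_GET_POSITION_X, VO2_SET_POSITION_X,
        VO2_GET_POSITION_Y, VO2_SET_POSITION_Y,
        VO2_GET_RESOLUTION, VO2_SET_RESOLUTION,
        VO2_GET_DISPLAY,    VO2_SET_DISPLAY,
        VO2_GET_ATTRIBUTES, VO2_SET_ATTRIBUTES,
        VO2_GET_WIN_DECORATION, VO2_SET_WIN_DECORATION
    };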

Here is a description of how these controls are to be used:

SET_ASPECT - this is the movie/video aspect. Why not calculate it in a
different place (mplayer.c) and pass the result to the driver via
SET_SIZE_X/Y? First, this only matters if the hardware can scale. Second,
we may need this value if we have a TV and we won't calculate any new
height and width.

SET_SCALE_X/Y - this is to enlarge/downscale the image. It WILL NOT
override SET_ASPECT; they have a cumulative effect. This could be used
for deinterlacing (HALF SIZE). Also, if we want to zoom to 200% we don't
want to lose the aspect calculations. Or would it be better for SET_SCALE
to work with the current size?

SET_SIZE_X/Y - this is for custom sizing, to save some scale calculations
and for more precise results.

RESET_SIZE - sets the original size of the image; we must call SET_ASPECT again.

GET/SET_POSITION_X/Y - this is for windowed modes only, to allow custom
moving of the window.

GET/SET_RESOLUTION - change the resolution and/or bpp if possible. To be used
for changing the desktop resolution or the resolution of the current
fullscreen mode (NOT TO SET IT, just to change it if we don't like it).

GET/SET_DISPLAY - mainly for X11 and remote displays. Not very useful, but
it may be handy.

GET/SET_ATTRIBUTES - Xv overlays have contrast, brightness, hue,
saturation etc.; these and others could be controlled by this. If we want
to query an attribute we must call GET_* and then check whether it is in
there (Xv developers, be careful: 2 or 3 of the default attributes are
sometimes not reported by X, but can still be set).

Do you think that TV encoding (NTSC, PAL, SECAM) should have its own attribute?
I would like to hear from the GUI developers. Could we separate mouse/keyboard
handling from the driver? What info do you need to do it? Don't forget that SDL
has its own keyboard/mouse interface. Maybe we should allow the video driver to
change the libin driver?

<SOP>
Arpi wrote:
I've asked Pontscho (he doesn't understand english well...).
There are 2 options for the GUI<->mplayer interface.

The current, ugly (IMHO) way:
the gui has control of the video window, it handles resizing, moving,
key events etc. all window manipulation in the libvo drivers is disabled when
the gui is enabled. it was required because libvo isn't inited and running when
the gui already displays the video window.

The wanted way:
The GUI shouldn't control the X window directly, it should use libvo2 control
calls to resize/move/etc. it. But there is a big problem: X cannot be opened
twice from one process. It means the GUI and libvo2 should share the X
connection. And, as the GUI runs first (and only when a file is selected etc.
is libvo2 started), it should connect to X and later pass the connection to
libvo2. It needs an extra control() call and some extra code in mplayer.c.

but this way the gui could work with non-X stuff, like SDL, fbdev (on a second
head for TVout etc), hardware decoders (dvb, dxr3) etc.

as X is so special, libvo2 should have a core function to open/get an X
connection, and it should be used by all X-based drivers and the gui.

also, the GUI needs functions to get mouse and keyboard events, and to
enable/disable window decoration (title, border).

we need a fullscreen switch control function too.

> Maybe we should allow video driver to change the libin driver ?
forget libin. most input stuff is handled by the libvo drivers.
think of all the X stuff (x11, xv, dga, xmga, gl), SDL, aalib, svgalib.
only a few transparent drivers (fbdev, mga, tdfxfb, vesa) don't handle input,
but all of them run on the console (and maybe on a second head) at fullscreen,
so they may not need mouse events. console keyboard events are already caught
and handled by getch2.

I can't see any sense in writing libin.

mplayer.c should _handle_ all input events, collected from the lirc interface,
getch2, libvo2 etc., and it should set update flags for the gui and osd.

but we should share some plugin code. examples: *_vid code, all common X
code. it can be done either by implementing them in the libvo2 core (and
calling them from the plugins) or by including these files from all drivers
which need them. the latter method is a bit cleaner (from the viewpoint of
core-plugin independency) but results in bigger binaries...
<EOP, Arpi>

Btw. when we finish we will have a libin, but it will be spread around mplayer.
I agree that libin could be built into the libvo2 driver, but there has to be a
standard way to send commands to mplayer itself.

1.3 query()

Here come some attributes for the queried modes; each supported mode should
have such a description. It is even possible to have more than one mode that
can display a given imgfmt. I think that we have to separate window from
fullscreen modes, and so to have a yv12 window mode and a yv12 fullscreen mode.
We also need a naming scheme, in order to have *.conf control over the modes -
to disable buggy modes, to limit surfaces (buggy ones), to manually disable
slices etc. The naming should not change from one computer to another and has
to be flexible. (A C sketch of such a mode descriptor follows the block below.)
{
IMGFMT - image format (RGB, YV12, etc...)

Height - the height of the fullscreen mode, or the maximum height of a window mode

Width - the width of the fullscreen mode, or the maximum width of a window mode
}
{
Scale y/n - hardware scaling; do you think we need one flag for x and
one for y (windows does)?

Fullscreen y/n - whether the supported mode is fullscreen; if we have yv12 for
fullscreen and window we must treat them as separate modes.

Window y/n - the mode shows the image in a window. Could be removed, as
it is mutually exclusive with Fullscreen.

GetSurface y/n - if the driver can give us a video surface we'll use get_surface().

UpdateSurface y/n - if the driver updates the video surface through a system function (X, SDL).

HWdecode y/n - if the driver can take advantage of hw_decode().

MaxSurfaces 1..n - theoretical maximum of surfaces.

SubPicture y/n - can we put a subpicture (OSD) of any kind by hw.

WriteCombine y/n - if GetSurface==yes: most (or all) pci&agp cards are
extremely slow on byte access, so this is a hint to the vo2 core which surfaces
are affected by WC. Some surfaces are in main memory (X shm, OpenGL textures).
This is only a hint.

us_clip y/n - if UpdateSurface==yes, this shows whether update_surface() can
remove strides (when stride > width); this is also used for cropping. If
not, we must do it.

us_slice y/n - if UpdateSurface==yes, this shows that update_surface()
can draw slices and that after updating the surface it won't wait for
vertical retrace, so we can update the surface slice by slice.
If us_slice==n we will have to accumulate all slices in some buffer.

us_upsidedown y/n - if UpdateSurface==yes, this shows that update_surface()
can flip the image vertically. In some cases this can be combined with
us_clip /stride tricks/.

switch_resolution y/n - if window==y, this shows whether we can switch the
desktop resolution; if fullscreen==y, it shows that we can change the
resolution after we have set the fullscreen mode.

deinterlace y/n - indicates that the device can deinterlace on its own
(radeon, TV out).
}
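
As promised above, here is one possible C layout for such a per-mode
descriptor. It is only a sketch of the attribute list; the type and field
names are assumptions, not an existing structure:

    /* hypothetical per-mode capability record returned by query() */
    typedef struct vo2_mode_caps {
        char name[32];          /* stable name usable from *.conf       */
        int  imgfmt;            /* image format this mode can display   */
        int  width, height;     /* fullscreen size / max window size    */
        unsigned fullscreen:1;
        unsigned window:1;
        unsigned scale:1;
        unsigned get_surface:1;
        unsigned update_surface:1;
        unsigned hw_decode:1;
        unsigned subpicture:1;
        unsigned write_combine:1;
        unsigned us_clip:1;
        unsigned us_slice:1;
        unsigned us_upsidedown:1;
        unsigned switch_resolution:1;
        unsigned deinterlace:1;
        int  max_surfaces;      /* theoretical maximum of surfaces      */
    } vo2_mode_caps_t;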

1.4 conclusion

As you can see, I have removed all additional buffering from the driver. There
is a lot of functionality that should be checked and handled by the libvo2
core. If some of the functionality is not supported, the libvo2 core should add
filters that provide it in software.

Some of the parameters should be able to be overridden by user config, mainly
to disable buggy modes or parameters. I believe that this should not be done
on the command line, as there are enough commands already.

I await comments and ideas.
//--------------------------------------------------------------------------
2. libvo2 core
2.1 functions
Now these functions are implemented:
init
new
start
query_format
close

and, in draw.c:
choose_buffering
draw_slice_start
draw_slice
draw_frame
flip

init() is called at mplayer start. Internal initialisation.
new() -> rename to open_drv() or something like this.
query_format -> not usable in this form; this function means that all
negotiation will be performed outside libvo2. Replace it or find a better name.
close -> open/close :)

choose_buffering - all buffering must stay hidden. The only exception is for
hw_decode. In the new implementation this function is not usable.
It will be replaced with some kind of negotiation.
draw_slice_start, draw_slice -> if you like it this way, then it's OK. But I
think that a draw_slice_done could help.

draw_frame -> the classic draw function.

2.2 Minimal buffering

I should say that I stand behind the idea that all buffering, postprocessing,
format conversion, sw drawing of subtitles, etc. should be done in the libvo2
core. Why? First, this is the only way we can fully control the buffering and
reduce it to a minimum. Fewer buffers means less copying. In some cases this
could have the opposite effect (look at direct rendering).

The first step of the analysis is to find out what we need:

DECODER - num_out_buffers={1/2/3/...}
          {
            buffer_type:{fixed/static/movable}
            read_only:{yes/no}
          } * (num_out_buffers)
          slice:{not/supported}

FILTER 1..x - processing:{ c-copy(buff1,buff2), p-process(buff1) },
              slice:{not/supported},
              write_combine:{not/safe},
              runtime_remove:{static/dynamic}

VIDEO_OUT - method:{get_surface,update_surface},
            slice:{not/supported},
            write_combine:{not/safe},
            clip:{can/not},
            upsidedown:{can/not},
            surfaces:{1/2/3,..,n}


I use a one-letter code for the filter types. You can find them in the filters
section.
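
Written out as C, the per-element capabilities used by this analysis might look
roughly like the structs below. This is a sketch only; the types, the
MAX_OUT_BUFFERS limit and the field names are assumptions:

    #define MAX_OUT_BUFFERS 4           /* arbitrary limit for the sketch  */

    enum buf_type   { BUF_FIXED, BUF_STATIC, BUF_MOVABLE };
    enum filt_class { FILT_COPY = 'c', FILT_PROCESS = 'p', FILT_EITHER = 't' };

    struct decoder_caps {
        int           num_out_buffers;
        enum buf_type type[MAX_OUT_BUFFERS];
        int           read_only[MAX_OUT_BUFFERS];
        int           slice;            /* can call draw_slice per slice   */
    };

    struct filter_caps {
        enum filt_class processing;     /* 'c', 'p' or 't'                 */
        int  slice;
        int  write_combine_safe;
        int  runtime_removable;         /* e.g. pp with autoq              */
    };

    struct vo_caps {
        int  get_surface;               /* the 'S' method                  */
        int  update_surface;            /* the 'd' method                  */
        int  slice, write_combine_safe, clip, upsidedown;
        int  surfaces;                  /* number of video surfaces        */
    };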

Details:

DECODER - we always get a buffer from the decoder. Some decoders can give a
pointer to their internal buffers, others take pointers to buffers where
they should store the final image. Some decoders can call draw_slice
after they have finished with some portion of the image.

num_out_buffers - number of output buffers. Each one could have its own
parameters. In the usual case there will be only one buffer. Some
decoders may have 2 internal buffers, like odivx, or, like mpeg12, 3 buffers
of different types (2 static and 1 temp).

buffer_type -
   - fixed - we have no control over where the buffer is. We can
     just take a pointer to it. No direct rendering is possible.
   - static - we can set this buffer, but then we can't change its position.
   - movable - we can set this buffer to any location at any time.
read_only - the data in this buffer will be used in the future, so we must not
write there or we'll corrupt the video. If we have any 'p' kind
of filter we'll make a copy.

slice - this flag shows that the decoder knows about and wants to work with slices.

FILTER - postprocessing, sw drawing of subtitles, format conversion, crop,
external filters.

slice - whether this filter can work in slice order. We can use slices even
when the decoder does not support them; we just need 2 or more filters that
do. This could give us a remarkable speed boost.

processing - some filters can copy the image from one buffer to another;
I call them 'c'. Convert and crop (stride copy) are good examples, but don't
forget the simple 1:1 copy. Other filters process only part of the image
and can reuse the given buffer, e.g. putting subtitles; I call them 'p'.
Other filters can work in one buffer but can also work with 2; I call
them 't' class - after the analysis they fade to 'c' or 'p'.

runtime_remove - postprocess with autoq. Subtitles appear and disappear;
should we copy the image from one buffer to another if there is no processing
at all?

//clip, crop, upsidedown - all 'c' filters must support strides, and should
be able to remove them and to do some tricks like crop and upside-down.

VIDEO_OUT - take a look at the libvo2 driver I propose.
method - if we get a surface - 'S'. If we use draw* (update_surface) - 'd'.

As you may see, hw_decode doesn't have complicated buffering:)

I do the analysis this way: first I put the decoder buffers, then I put all
the filters that may be needed, and finally I put the video out method. Then I
add temp buffers where needed. This is simple enough to be done at runtime.

2.5 Various
2.5.1 clip & crop - we have x1,y1 that show how much of the beginning and
x2,y2 how much of the end we should remove:
  plane  += (x1*sizeof(pixel)) + (y1*stride); /* let plane point to the first visible pixel */
  height -= y1+y2;
  width  -= x1+x2;
Isn't it simple? No copying, we just change a few variables. In order to get a
normal plane back we just need to copy it to a frame where stride==width.

2.5.2 flip, upsidedown - in windows this is indicated by a negative height;
here in mplayer we may use a negative stride, so we must make sure that filters
and drivers can handle a negative stride:
  plane  += (height-1)*stride; /* point to the last line */
  stride  = -stride;           /* make stride point to the previous line */
and this one is very simple too; I hope it can work with all known image formats.

Be careful, some modes may pack 2 pixels in 1 byte!
Other modes (YUYV) require the horizontal offset x1 to be a multiple of 2.

stride is always in bytes, while width & height are in pixels.
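
Putting 2.5.1 and 2.5.2 together, a tiny helper might look like the sketch
below. It assumes a single plane with 1 byte per pixel; packed formats need the
alignment care noted above, and the struct is invented for the example:

    struct plane_view {
        unsigned char *data;
        int width, height;    /* in pixels                     */
        int stride;           /* in bytes, may become negative */
    };

    /* crop x1/y1 pixels from the start and x2/y2 from the end, then
     * optionally flip vertically by negating the stride */
    static void crop_and_flip(struct plane_view *p, int x1, int y1,
                              int x2, int y2, int flip)
    {
        p->data   += x1 + y1 * p->stride;            /* first visible pixel */
        p->width  -= x1 + x2;
        p->height -= y1 + y2;
        if (flip) {
            p->data  += (p->height - 1) * p->stride; /* last visible line   */
            p->stride = -p->stride;                  /* walk upwards        */
        }
    }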

2.5.3 PostProcessing
Arpi was afraid that postprocessing needs more internal data to work. I think
that the quantization table should be passed as an additional plane.
How should this be done? When using the Frame structure there is a qbase that
should point to the quantization table. The only problem is that the table
usually has a fixed size. I expect recommendations on how to implement this
properly: should we crop it? Or add qstride, qheight, qwidth? Or mark the size
of the macroblocks and calculate the table size from the image size? Currently
pp works with fixed 8x8 blocks.
There may also be a problem with interlaced images.
/ for the Frame struct look at 2.3.4 /
I recommend splitting postprocessing into its original filters and adding the
ability to use them separately.

2.3. Rules for minimal buffering
2.3.1 Direct rendering.
Direct rendering means that the decoder uses a video surface as its output
buffer. Most decoders have internal buffers and on request copy
the ready image from one of them to a given location. As we can't get a pointer
to the internal buffer, the fastest way is to give a video surface as the
output buffer and let the decoder draw it for us. This is safe, as most
copy routines are optimised for double-word aligned access.
If we can get the internal buffer, we can copy the image on our own. This is
not direct rendering, but it achieves the same speed. In fact that's why
-vc odivx is faster than -vc divx4, while they use the same divx4linux library.
Sometimes it's possible to set the video surface as the internal buffer, but in
most cases the decoding process is byte oriented and many unaligned accesses
are performed. Moreover, reading from video memory on some cards is extremely
slow, about 4 times or more (and this is without setting MTRR), but some
decoders could still take advantage of it. In the best case (reading performed
from the cache and using write combining) we'll watch a DivX movie at the same
speed at which DivX4 is skipping frames.

What do we need for direct rendering? (A sketch of such a check follows the list.)
1. We should be able to get video surfaces.
2. The decoder should have at least one buffer with buffer_type != fixed.
3. If we have a 'c' filter we cannot use direct rendering. If we have a
   'p' filter we may allow it.
4. If the decoder has one static buffer, then we are limited to 1 video surface.
   In this case we may see how the frame is rendered (ugly refresh in the best case).
5. Each static buffer and each read_only buffer needs to have its own
   video surface. If we don't have enough ... well, we may do some tricks,
   but it is too complicated //use direct rendering for the first one in
   the list and let the rest use memory buffering. And we must have (1 or 2) free
   video surfaces for the rest of the decoder buffers//
6. A normal (buffer_type=movable, read_only=no) buffer can be redirected to
   any available video surface.
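
A deliberately simplified sketch of such a check, reusing the hypothetical
capability structs from section 2.2 (it ignores the "tricks" mentioned in
rule 5 and simply demands one surface per decoder buffer):

    /* return 1 if direct rendering looks possible for this chain */
    static int can_direct_render(const struct decoder_caps *dec,
                                 const struct filter_caps *filt, int nfilt,
                                 const struct vo_caps *vo)
    {
        int i, non_fixed = 0, needed = 0;

        if (!vo->get_surface)                    /* rule 1 */
            return 0;
        for (i = 0; i < nfilt; i++)              /* rule 3 */
            if (filt[i].processing == FILT_COPY)
                return 0;
        for (i = 0; i < dec->num_out_buffers; i++) {
            if (dec->type[i] != BUF_FIXED)
                non_fixed++;                     /* rule 2 */
            needed++;                            /* rules 5 and 6 */
        }
        return non_fixed > 0 && needed <= vo->surfaces;
    }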

2.3.2 Normal process
In the usual case the libvo2 core takes responsibility for moving the data. It
must follow these rules:
1. The 'p' filters process in the buffer to their left; if we have a read_only
   buffer then the vo2 core must insert a 'c' copy filter and a temp buffer.
2. With a 'c' filter we must make sure that we have a buffer on the right (->) side.
3. In the usual case 't' filters are replaced with 'p', except when the 't' is
   right before the video surface. We must have at least one 'c' if the core
   has to crop, clip, or flip the image upside down.
4. Take care of the additional buffering when we have 1 surface (the libvo1 way).
5. Be aware that some filters must come before others. E.g. postprocessing
   should be before subtitles:)
6. If we want scaling (-zoom) and the vo2 driver can't do it, then add a scale
   filter ('c'). For better understanding, I have only one convert filter that
   can copy, convert, scale, or convert and scale. In mplayer it really will be
   only one filter.
7. If we have a video surface then the final 'c' filter will update it for us.
   If the filter and the video surface are not WriteCombine safe we may add
   buffering. In case we use both get_surface and update_surface, after writing
   into the video surface we must also call the update_surface() function.

If we must use update_surface() then we will call it with the last buffer. This
buffer could even be the internal decoder buffer if there are no 'c' filters.
This buffer could also be the one returned by get_surface().

2.3.3 Slices.
A slice is a small rectangle of the image. In the decoders' world it represents
an independently rendered portion of the image. In mplayer the slice width is
equal to the image width; the height is usually 8, but there is no problem with
varying it.
The advantage of slices is that by working with a smaller part of the image,
most of the data stays in the cache, so postprocessing reads the data
practically for free. This makes slice processing of video data preferable even
when the decoder and/or the video driver can't work with slices.
Smaller slices increase the chance of the data being in the cache, but they
also increase the overhead of function calls (branch prediction too), so it may
be good to tune the size when possible (mainly with 2-filter slices).

Here are some rules:
1. Slices always have the width of the image.
2. Slices always come one after another, so you cannot skip a few lines just
   because they are unchanged. This is made for the postprocessing filter, as
   it may produce a different output image depending on the neighbouring
   lines (slices).
3. A slice always finishes with the last line; this is an extension of rule 2.
4. Slice buffers are normal buffers that can contain a whole frame. This is
   needed in case we have to accumulate slices for a whole-frame process
   (draw). It is also needed for pp filters.
5. Slice processing can be used if:
 5.1. the decoder knows about slices and calls a function when one is
      completed. The next filter (or video driver) should be able to work
      with slices.
 5.2. two or more filters can work with slices. Call them one after another.
      The result will be accumulated in the buffer of the last filter (look
      below for the 'p' type).
 5.3. the final filter can slice and the vo2 driver can slice.
6. All filters should have independent counters for processed lines. These
   counters must be controlled by the vo2 core.

2.3.3.1 Slice counters.
For the incoming image we need:
1. a value that shows the last valid line.
2. a value that shows the line from which the filter will start working. It is
   updated by the filter to remember what portion of the image has been
   processed. The vo2 core will zero it on a new frame.

For the result image we need:
1. a value that shows which line is ready. This will be the last valid line for
   the next filter.

The filter may need more internal variables. And as it may be used 2 or more
times in one chain it must be reentrant, so those internal variables should be
passed to the filter as parameters. (A sketch of this bookkeeping follows below.)
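
One way to keep this reentrant is to let the core own a small state block per
filter instance and hand it back on every call; the structure below is only an
illustration of the counters described above:

    /* per-instance slice bookkeeping, owned by the vo2 core */
    struct slice_state {
        int in_last_valid;  /* incoming image: last line ready for this filter */
        int in_next;        /* incoming image: first line not yet processed;
                               zeroed by the core on every new frame           */
        int out_ready;      /* result image: last line ready for the next
                               filter in the chain                             */
        void *priv;         /* whatever else this filter instance needs        */
    };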

2.3.3.2 Auto slice.
In case we have a complete frame that will be processed by a few filters that
support slices, we should still process this frame slice by slice. We have the
same situation when one filter accumulates too many lines and forces the next
filters to work with a bigger slice.
To avoid that case and to start slicing automatically, we need to limit the
slice size and, when a slice is bigger, to break it apart. If some filter needs
more image lines, it will simply wait until it has accumulated them.

2.3.4. Frame structure
So far we have buffers that contain images, and we have filters that work with
buffers. For cropping and for normal work with the image data we need to know
the dimensions of the image. We also need some structure to pass to the
filters, as they have to know where to read from and where they should write.
So I introduce the Frame struct (an illustrative C version follows the sketch):
{
  imgfmt - the image format, the most important parameter
  height, width - dimensions in pixels
  stride - size of an image line in bytes; it can be larger than
           width*sizeof(pixel), and it can also be negative (for vertical flip)
  base0, base1, base2, base3 - pointers to the planes; they depend on imgfmt
  baseq - quant storage plane; we may need to add qstride, or some qheight/qwidth
  palette - pointer to a table with the palette colors
  flags: read-only - this frame is read only
  //screen position ??
}
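
Written out as C, the sketch above could become something like this; the type
name is an assumption, and qstride/qheight/qwidth are left out because they are
still an open question (see 2.5.3):

    /* illustrative Frame structure, field names follow the sketch above */
    typedef struct vo2_frame {
        int      imgfmt;                 /* image format                    */
        int      width, height;          /* dimensions in pixels            */
        int      stride;                 /* bytes per line, may be negative */
        unsigned char *base0, *base1, *base2, *base3;  /* plane pointers    */
        unsigned char *baseq;            /* quantizer plane (see 2.5.3)     */
        unsigned int  *palette;          /* palette table, if any           */
        unsigned read_only:1;            /* frame must not be written to    */
        /* screen position ?? */
    } vo2_frame_t;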

2.4 Negotiation
A few words about negotiation. It is a hard thing to find the best mode. Here
is an algorithm that could find the best mode, but first I must say that we
need some kind of weight for the filters and the drawing. I think that we could
use something like megabytes/second, something that we can measure or
benchmark. (A sketch of the comparison follows the list.)

1. We choose a codec.
2. We choose a video driver.
3. For each combination find the total weight, and if there are any
   optional filters find the min and max weight. Be careful: the max weight is
   not always at maximum filters!! (e.g. cropping)
4. Compare the results.
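
A sketch of how steps 3 and 4 could be compared, assuming every codec/driver
combination has already been reduced to benchmarked costs (all names and the
cost unit are assumptions):

    struct chain_option {
        double base_cost;        /* decoder + mandatory filters + drawing  */
        double optional_min;     /* cheapest optional-filter set           */
        double optional_max;     /* most expensive optional-filter set     */
    };

    /* pick the combination with the smallest typical cost */
    static int best_option(const struct chain_option *opt, int n)
    {
        int i, best = 0;
        for (i = 1; i < n; i++)
            if (opt[i].base_cost    + opt[i].optional_min <
                opt[best].base_cost + opt[best].optional_min)
                best = i;
        return best;
    }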

I may say that we don't need automatic codec selection, as we can simply put
the best codecs at the beginning of codecs.conf, as it is now. We may need to
do the same thing with videodrv.conf. Or, better, make config files with the
preferred order of decoders and video modes:)

I await comments and ideas.