DOCS/tech/libvo2.txt @ 16423:a73af2f8b863 (MPlayer Mercurial repository)
annotated revision; last repository commit by diego, Tue, 06 Sep 2005 22:47:57 +0000
*******************************************************************************
*******************************************************************************
           WARNING: THIS FILE IS OBSOLETE, SEE libvo.txt INSTEAD
*******************************************************************************
*******************************************************************************

============================================================

NOTE: the 'libvo2 from scratch' plan was abandoned, we're changing libvo1 now.

So, this draft is ONLY A DRAFT, see libvo.txt for the current code docs!

============================================================

//First announce by Ivan Kalvachev
//Some explanations by Arpi & Pontscho

If you have any suggestions related to the subjects in this document, you
can send them to the mplayer developer or advanced users mailing lists. If
you are a developer with CVS access, do not delete parts of this document;
instead, feel free to add paragraphs signed with your name.
Be warned that the text may be changed or modified, and your name may be
moved to the top of the document.

1. libvo2 drivers
1.1 functions
Currently these functions are implemented:
  init
  control
  start
  stop
  get_surface
  update_surface - renamed draw
  show_surface - renamed flip_page
  query
  hw_decode
  subpicture

Here is a detailed description of the functions:
init - initialisation. It is called once at mplayer start.
control - a message-oriented interface for controlling the libvo2 driver.
start - sets the given mode and displays it on the screen.
stop - closes the libvo2 driver; after stop we may call start again.
query - the negotiation is more complex than just finding which imgfmt the
  device can show; we must have a list of capabilities, etc.
  This function will have at least 3 modes:
  a) return a list describing the available modes.
  b) check whether we can use this mode with these parameters. E.g. if we
     want RGB32 with 3 surfaces for a windowed 800x600 image we may run
     out of video memory. We don't want an error, because this mode could
     still be used with 2 surfaces.
  c) return the supported subpicture formats, if any.
  +d) functionality supported by hw_decode.

As you may see, I have removed some functionality from control() and made
it a separate function. Why? It is generally a good thing for functions
that are critical to the driver to have their own implementation.

get_surface - this function gives us surfaces that we can write to. In
  most cases this is video memory, but it may also be ordinary RAM with
  some special meaning (AGP memory, X shared memory, GL texture, ...).

update_surface - as in the note above, this is the draw function. Why did
  I change its name? I have 2 reasons: first, I don't want an
  implementation like vo1; second, it really must update the video
  surface, i.e. directly call the system function that does it. This
  function should work only with slices; the size of a slice should not
  be limited and should be passed (e.g. ystart, yend). If we want a draw
  function, we will call one from the libvo2 core that calls this one
  with ystart=0; yend=Ymax;. Also, some system screen-update functions
  wait for vertical retrace before returning, and other functions simply
  can't handle partial updates. In that case we should inform the libvo2
  core that the device cannot slice, and the libvo2 core must take care
  of the additional buffering; update_surface then becomes a usual draw
  function. When update_surface() is used in combination with
  get_surface(), THE ONLY VALID POINTERS ARE THOSE RETURNED BY
  get_surface(). Watch out with cropping.

show_surface - this function is always called on frame change. It is used
  to show the given surface on the screen.
  If there is only one surface then it is always visible and this
  function does nothing.

hw_decode - to make all dvb, dxr3, TV etc. developers happy, this
  function is for you. Be careful, don't OBSESS over it; think of the
  future too: this function should also be able to control HW IDCT and
  MC, which one day will be supported under Linux as well. Be careful:)

subpicture - this function will place subtitles. It must be called once
  to place them and once to remove them; it should not be called on every
  frame, the driver will take care of that. Currently I propose this
  implementation: we get an array of bitmaps. Each one has its own
  starting x, y and its own height and width; each one (or all together)
  could be in a specific imgfmt (spfmt). THE BITMAPS SHOULD NOT OVERLAP!
  This may not be a hw limitation, but sw subtitles may get confused if
  they work as a 'c' filter (see my libvo2 core). Anyway, so far I don't
  know of hardware with such a limitation, but it is safer this way (and
  faster, I think). It is generally good to merge small bitmaps (like
  characters) into larger ones and to make all subtitles one bitmap (or
  one bitmap per subtitle line). There will also be one for each of the
  OSD time & seek/brightness/contrast/volume bars.

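The driver entry points described above can be pictured as a table of
function pointers. This is only an illustrative sketch; the struct name,
signatures and parameters are assumptions, not the real libvo2 API.

```c
#include <stdint.h>

/* Hypothetical sketch of the driver entry points from section 1.1.
 * All names and signatures are assumptions for illustration only. */
typedef struct vo2_driver {
    int  (*init)(void);                         /* called once at mplayer start */
    int  (*control)(uint32_t request, void *data); /* message-oriented control */
    int  (*start)(int width, int height, uint32_t imgfmt); /* set mode, show it */
    int  (*stop)(void);                         /* close; start may be called again */
    uint8_t *(*get_surface)(int surface_index); /* writable (video) memory */
    int  (*update_surface)(int ystart, int yend); /* slice-based screen update */
    int  (*show_surface)(int surface_index);    /* flip: make surface visible */
    int  (*query)(int mode, void *result);      /* capability negotiation */
} vo2_driver_t;

/* dummy driver instance, just to show how a driver would fill the table */
static int dummy_init(void) { return 0; }
static int dummy_stop(void) { return 0; }
static vo2_driver_t dummy_driver = { .init = dummy_init, .stop = dummy_stop };
```

A real driver would fill every slot; the core would then talk to the
driver only through this table.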
1.2 control()
OK, here is a list of some control()s that I think could be useful:
  SET_ASPECT
  SET_SCALE_X, SET_SIZE_X
  SET_SCALE_Y, SET_SIZE_Y
  RESET_SIZE
  GET/SET_POSITION_X
  GET/SET_POSITION_Y
  GET/SET_RESOLUTION
  GET/SET_DISPLAY
  GET/SET_ATTRIBUTES
  + GET/SET_WIN_DECORATION

Here is a description of how these controls are to be used:

SET_ASPECT - this is the movie/video aspect. Why not calculate it in a
  different place (mplayer.c) and pass the results to the driver via
  set_size_x/y? First, this matters only if the hardware can scale.
  Second, we may need this value if we have a TV and we won't calculate
  any new height and width.

SET_SCALE_X/Y - this is to enlarge/downscale the image. It WILL NOT
  override SET_ASPECT; they have a cumulative effect. This could be used
  for deinterlacing (HALF SIZE). Also, if we want to zoom to 200% we
  don't want to lose the aspect calculations. Or would it be better for
  SET_SCALE to work with the current size?

SET_SIZE_X/Y - this is for custom enlarging, to save some scale
  calculations and for more precise results.

RESET_SIZE - set the original size of the image; we must call SET_ASPECT
  again.

GET/SET_POSITION_X/Y - this is for windows only, to allow custom moving
  of the window.

GET/SET_RESOLUTION - change resolution and/or bpp if possible. To be used
  for changing the desktop resolution or the resolution of the current
  fullscreen mode (NOT TO SET IT, just to change it if we don't like it).

GET/SET_DISPLAY - mainly for X11 and remote displays. Not very useful,
  but may be handy.

GET/SET_ATTRIBUTES - Xv overlays have contrast, brightness, hue,
  saturation etc.; these and others could be controlled by this. If we
  want to query them we must call GET_* and then check whether our
  attribute is in there (Xv developers be careful: 2 or 3 of the default
  attributes are sometimes not reported by X, but can still be set).

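A message-oriented control() of this kind usually boils down to a switch
over a request code. Here is a minimal sketch of how such a dispatcher
could look; the enum values mirror the list above but, like the state
struct, they are assumptions for illustration, not the real interface.

```c
/* Hypothetical control() dispatcher for the requests listed above. */
enum vo2_ctrl { VOCTRL_SET_ASPECT, VOCTRL_SET_SCALE_X, VOCTRL_SET_SCALE_Y,
                VOCTRL_RESET_SIZE, VOCTRL_GET_POSITION_X, VOCTRL_SET_POSITION_X };

typedef struct {
    double aspect;            /* movie/video aspect, set once */
    double scale_x, scale_y;  /* cumulative with aspect */
    int pos_x, pos_y;         /* window position, windowed mode only */
} vo2_state;

/* returns 0 on success, -1 for requests this driver does not support */
static int vo2_control(vo2_state *s, enum vo2_ctrl req, void *arg)
{
    switch (req) {
    case VOCTRL_SET_ASPECT:     s->aspect  = *(double *)arg; return 0;
    case VOCTRL_SET_SCALE_X:    s->scale_x = *(double *)arg; return 0;
    case VOCTRL_SET_SCALE_Y:    s->scale_y = *(double *)arg; return 0;
    case VOCTRL_RESET_SIZE:     s->scale_x = s->scale_y = 1.0; return 0;
                                /* caller must call SET_ASPECT again */
    case VOCTRL_GET_POSITION_X: *(int *)arg = s->pos_x; return 0;
    case VOCTRL_SET_POSITION_X: s->pos_x = *(int *)arg; return 0;
    default:                    return -1; /* unsupported request */
    }
}
```

The -1 return lets the core fall back to a software filter when the
driver cannot handle a request.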
Do you think that TV encoding (NTSC, PAL, SECAM) should have its own
attribute?
I would like to hear from the GUI developers: could we separate the
mouse/keyboard handling from the driver? What info do you need to do it?
Don't forget that SDL has its own keyboard/mouse interface. Maybe we
should allow the video driver to change the libin driver?

<SOP>
Arpi wrote:
I've asked Pontscho (he doesn't understand English well...).
There are 2 options for the GUI<->mplayer interface.

The current, ugly (IMHO) way:
The GUI has control of the video window; it handles resizing, moving, key
events etc. All window manipulation in the libvo drivers is disabled when
the GUI is enabled. This was required because libvo isn't initialised and
running when the GUI already displays the video window.

The wanted way:
The GUI shouldn't control the X window directly; it should use libvo2
control calls to resize/move/etc. it. But there is a big problem: X
cannot be opened twice from one process. That means the GUI and libvo2
should share the X connection. And, as the GUI runs first (and only when
a file is selected etc. is libvo2 started), it should connect to X and
later pass the connection to libvo2. This needs an extra control() call
and some extra code in mplayer.c.

But this way the GUI could work with non-X stuff, like SDL, fbdev (on a
second head for TV-out etc.), hardware decoders (dvb, dxr3) etc.

As X is so special, libvo2 should have a core function to open/get an X
connection, and it should be used by all X-based drivers and the GUI.

Also, the GUI needs functions to get mouse and keyboard events, and to
enable/disable window decoration (title, border).

We need a fullscreen switch control function too.

> Maybe we should allow video driver to change the libin driver ?
Forget libin. Most input stuff is handled by the libvo drivers.
Think of all the X stuff (x11, xv, dga, xmga, gl), SDL, aalib, libcaca,
svgalib. Only a few transparent drivers (fbdev, mga, tdfxfb, vesa) have
none, but all of them run on the console (and maybe on a second head) at
fullscreen, so they may not need mouse events. Console keyboard events
are already caught and handled by getch2.

I can't see any sense in writing libin.

mplayer.c should _handle_ all input events, collected from the lirc
interface, getch2, libvo2 etc., and it should set update flags for the
GUI and OSD.

But we should share some plugin code. Examples: the *_vid code, all the
common X code. It can be done either by implementing them in the libvo2
core (called from the plugins) or by including these files from all
drivers that need them. The latter method is a bit cleaner (from the
viewpoint of core-plugin independence) but results in bigger binaries...
<EOP, Arpi>

Btw. when we finish we will have libin, but it will be spread around
mplayer. I agree that libin could be built into the libvo2 driver, but
there has to be a standard way to send commands to mplayer itself.



1.3 query()

Here come some attributes for the queried modes; each supported mode
should have such a description. It is even possible to have more than one
mode that can display a given imgfmt. I think that we have to separate
window from fullscreen modes, and to have a yv12 window mode and a yv12
fullscreen mode. We also need a naming scheme, in order to have *.conf
control over the modes - to disable buggy modes, to limit surfaces (buggy
ones), to manually disable slices, etc. The naming should not change from
one computer to another and has to be flexible.
{
  IMGFMT - image format (RGB, YV12, etc...)

  Height - the height of the fullscreen mode, or the maximum height of
           the window mode

  Width - the width of the fullscreen mode, or the maximum width of the
          window mode
}
{
  Scale y/n - hardware scaling. Do you think we must have one flag for x
    and one for y (Windows does)?

  Fullscreen y/n - whether the supported mode is fullscreen. If we have
    yv12 for both fullscreen and window, we must treat them as separate
    modes.

  Window y/n - the mode will show the image in a window. Could be
    removed, as it is mutually exclusive with Fullscreen.

  GetSurface y/n - if the driver can give us a video surface we'll use
    get_surface().

  UpdateSurface y/n - if the driver will update the video surface through
    a system function (X, SDL).

  HWdecode y/n - if the driver can take advantage of hw_decode().

  MaxSurfaces 1..n - theoretical maximum of surfaces.

  SubPicture y/n - can we put subpictures (OSD) of any kind in hardware?

  WriteCombine y/n - if GetSurface==yes: most (or all) PCI & AGP cards
    are extremely slow on byte access, so this is a hint to the vo2 core
    about which surfaces are affected by WC. Some surfaces are in
    ordinary memory (X shm, OpenGL textures). This is only a hint.

  us_clip y/n - if UpdateSurface==yes, this shows whether
    update_surface() can remove strides (when stride > width); this is
    also used for cropping. If not, we must do it ourselves.

  us_slice y/n - if UpdateSurface==yes, this shows that update_surface()
    can draw slices and that after updating the surface it won't wait for
    vertical retrace, so we can update the surface slice by slice.
    If us_slice==n we will have to accumulate all slices in some buffer.

  us_upsidedown y/n - if UpdateSurface==yes, this shows that
    update_surface() can flip the image vertically. In some cases this
    can be combined with us_clip /stride tricks/.

  switch_resolution y/n - if window==y, this shows whether we can switch
    the desktop resolution; if fullscreen==y, it shows that we can change
    the resolution after we have set the fullscreen mode.

  deinterlace y/n - indicates that the device can deinterlace on its own
    (radeon, TV-out).
}
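The mode description above maps naturally onto a plain C struct that
query() could fill in, one per supported mode. The struct below and the
helper implementing check (b) from section 1.1 are a sketch under the
assumption that y/n flags become ints; none of these names are the real
libvo2 API.

```c
#include <stdint.h>

/* Hypothetical mode descriptor mirroring the query() attributes above. */
typedef struct vo2_mode {
    uint32_t imgfmt;        /* RGB, YV12, ... */
    int width, height;      /* fullscreen size, or maximum window size */
    int fullscreen;         /* fullscreen and windowed are separate modes */
    int get_surface;        /* driver can hand out video surfaces */
    int update_surface;     /* driver updates via a system function (X, SDL) */
    int hw_decode;
    int max_surfaces;       /* theoretical maximum of surfaces */
    int subpicture;
    int write_combine;      /* hint: surface is slow on byte access */
    int us_clip, us_slice, us_upsidedown;
    int switch_resolution;
    int deinterlace;
} vo2_mode_t;

/* Check (b) from section 1.1: can this mode show a WxH image in the
 * given imgfmt with the requested number of surfaces? */
static int mode_usable(const vo2_mode_t *m, uint32_t fmt, int w, int h,
                       int surfaces)
{
    return m->imgfmt == fmt && w <= m->width && h <= m->height
        && surfaces <= m->max_surfaces;
}
```

A failing check would not be an error; the core would simply retry with
fewer surfaces, as the text above suggests.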
1.4 conclusion

As you see, I have removed all additional buffering from the driver.
There is a lot of functionality that should be checked and handled by the
libvo2 core. If some functionality is not supported, the libvo2 core
should add filters that provide it in software.

Some of the parameters should be able to be overridden by user config,
mainly to disable buggy modes or parameters. I believe that this should
not be done on the command line, as there are enough commands now.

I await comments and ideas.
//---------------------------------------------------------------------------
2. libvo2 core
2.1 functions
Currently these functions are implemented:
  init
  new
  start
  query_format
  close

and as draw.c:
  choose_buffering
  draw_slice_start
  draw_slice
  draw_frame
  flip

init() is called at mplayer start. Internal initialisation.
new() -> rename to open_drv() or something like this.
query_format -> not usable in this form; this function means that all
  negotiation will be performed outside libvo2. Replace it or find a
  better name.
close -> open/close :)

choose_buffering - all buffering must stay hidden. The only exception is
  for hw_decode. In the new implementation this function is not usable.
  It will be replaced with some kind of negotiation.
draw_slice_start, draw_slice -> if you like it this way, then it's OK.
  But I think that a draw_slice_done could help.

draw_frame -> classic draw function.

2.2 Minimal buffering

I should say that I stand behind the idea that all buffering,
postprocessing, format conversion, sw drawing of subtitles, etc. should
be done in the libvo2 core. Why? First, this is the only way we can fully
control buffering and decrease it to the minimum. Fewer buffers mean less
copying. In some cases this can have the opposite effect (look at direct
rendering).

The first step of the analysis is to find out what we need:

DECODER - num_out_buffers={1/2/3/...}
          {
            buffer_type:{fixed/static/movable}
            read_only:{yes/no}
          } * (num_out_buffers)
          slice:{not/supported}

FILTER 1..x - processing:{ c-copy(buff1,buff2), p-process(buff1) },
              slice:{not/supported}
              write_combine:{not/safe},
              runtime_remove:{static/dynamic}

VIDEO_OUT - method:{get_surface,update_surface},
            slice:{not/supported},
            write_combine:{not/safe},
            clip:{can/not},
            upsidedown:{can/not},
            surfaces:{1/2/3,..,n}


I use a one-letter code for the type of a filter. You can find the codes
in the filters section.
Details:

DECODER - we always get a buffer from the decoder. Some decoders can give
  a pointer to their internal buffers, others take pointers to buffers
  where they should store the final image. Some decoders can call
  draw_slice after they have finished with some portion of the image.

  num_out_buffers - number of output buffers. Each one can have its own
    parameters. In the usual case there will be only one buffer. Some
    decoders may have 2 internal buffers, like odivx, or, like mpeg12, 3
    buffers of different types (2 static and 1 temp).

  buffer_type -
    - fixed - we have no control over where the buffer is. We can just
      take a pointer to this buffer. No direct rendering is possible.
    - static - we can set this buffer, but then we can't change its
      position.
    - movable - we can set this buffer to any location at any time.
  read_only - the data in this buffer will be used in the future, so we
    must not write in there or we'll corrupt the video. If we have any
    'p' kind of filter, we'll make a copy.

  slice - this flag shows that the decoder knows about and wants to work
    with slices.

FILTER - postprocessing, sw drawing of subtitles, format conversion,
  crop, external filters.

  slice - whether this filter can work in slice order. We can use slices
    even when the decoder does not support them; we just need 2 or more
    filters that do. This can give us a remarkable speed boost.

  processing - some filters copy the image from one buffer to another; I
    call them 'c'. Convert and crop (stride copy) are good examples, but
    don't forget the simple 1:1 copy. Other filters process only part of
    the image and can reuse the given buffer, e.g. putting subtitles; I
    call them 'p'. Other filters can work in one buffer, but can also
    work with 2; I call them 't' class - after analysis they fade to 'c'
    or 'p'.

  runtime_remove - postprocessing with autoq. Subtitles appear and
    disappear; should we copy the image from one buffer to another if
    there is no processing at all?

  //clip, crop, upsidedown - all 'c' filters must support strides, and
    should be able to remove them and to do some tricks like crop and
    upside-down.

VIDEO_OUT - take a look at the libvo2 driver I propose.
  method - if we get a surface - 'S'. If we use draw* (update_surface) -
    'd'.

As you may see, hw_decode doesn't have complicated buffering:)

I do the analysis this way: first I put the decoder buffer, then I put
all filters that may be needed, and finally I put the video out method.
Then I add temp buffers where needed. This is simple enough to be done at
runtime.

2.5 Various
2.5.1 clip & crop - we have x1,y1 that show how much of the beginning and
x2,y2 how much of the end we should remove:

  plane += (x1*sizeof(pixel)) + (y1*stride); // point plane to the first visible pixel
  height -= y1+y2;
  width  -= x1+x2;

Isn't it simple? No copying, we just change a few variables. In order to
make a normal plane we just need to copy it to a frame where stride==width.

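The arithmetic above can be wrapped in a tiny helper. This is a sketch
following the text exactly; the function name and the bytes_per_pixel
parameter (which generalises sizeof(pixel)) are my own assumptions.

```c
#include <stddef.h>
#include <stdint.h>

/* Crop by pointer arithmetic as in 2.5.1: x1,y1 trim the start of the
 * image, x2,y2 trim the end. No data is copied; only the plane pointer
 * and the dimensions change. Packed sub-byte modes need extra care. */
static void crop_plane(uint8_t **plane, int *width, int *height, int stride,
                       int bytes_per_pixel, int x1, int y1, int x2, int y2)
{
    *plane  += (ptrdiff_t)x1 * bytes_per_pixel + (ptrdiff_t)y1 * stride;
    *height -= y1 + y2;
    *width  -= x1 + x2;
}
```

For example, cropping 2 pixels on the left and 1 line at the top of an
8x4 8-bit plane with stride 8 simply advances the pointer by 10 bytes.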
2.5.2 flip, upside-down - in Windows this is indicated by a negative
height; here in mplayer we may use a negative stride, so we must make
sure that filters and drivers can use a negative stride:

  plane  += (height-1)*stride; // point to the last line
  stride  = -stride;           // make stride point to the previous line

And this one is very simple; I hope it can work with all known image
formats.

Be careful: some modes may pack 2 pixels in 1 byte!
Other modes (YUYV) require y1 to be a multiple of 2.

stride is always in bytes, while width & height are in pixels.

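The flip trick above, as a helper. This is a direct translation of the
two lines in 2.5.2 (with the pointer advanced by height-1 lines, since
stride steps between lines); the function name is an assumption.

```c
#include <stddef.h>
#include <stdint.h>

/* Vertical flip via negative stride as in 2.5.2: point at the last line
 * and negate the stride so that walking "down" reads lines upward.
 * stride is in bytes, height in pixels. */
static void flip_plane(uint8_t **plane, int *stride, int height)
{
    *plane += (ptrdiff_t)(height - 1) * *stride; /* point to the last line */
    *stride = -*stride;                          /* step to the previous line */
}
```

After the call, the same line-by-line loop a filter already uses will
read the image bottom-up without any copying.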
2.5.3 PostProcessing
Arpi was afraid that postprocessing needs more internal data to work. I
think that the quantization table should be passed as an additional
plane. How should this be done? When using the Frame structure there is
qbase, which should point to the quantization table. The only problem is
that usually the table has a fixed size. I expect recommendations on how
to implement this properly. Should we crop it? Or add qstride, qheight,
qwidth? Or mark the size of the macroblocks and calculate the table size
from the image size? Currently pp works with fixed 8x8 blocks. There may
also be a problem with interlaced images.
/ for the Frame structure look at 2.3.4 /
I recommend splitting postprocessing into its original filters and having
the ability to use them separately.


2.3 Rules for minimal buffering
2.3.1 Direct rendering
Direct rendering means that the decoder will use a video surface as its
output buffer. Most decoders have internal buffers and on request they
copy the ready image from one of them to a given location. As we can't
get a pointer to the internal buffer, the fastest way is to give a video
surface as the output buffer and the decoder will draw into it for us.
This is safe, as most copy routines are optimised for double-word-aligned
access.
If we can get the internal buffer, we can copy the image on our own. This
is not direct rendering, but it achieves the same speed. In fact, that's
why -vc odivx is faster than -vc divx4 even though they use the same
divx4linux library.
Sometimes it is possible to set the video surface as the internal buffer,
but in most cases the decoding process is byte oriented and many
unaligned accesses are performed. Moreover, reading from video memory on
some cards is extremely slow, about 4 times or more (and this is without
setting MTRR), but some decoders can take advantage of it anyway. In the
best case (reading performed from the cache and using combined writes)
we'll watch a DivX movie at the same speed at which DivX4 is skipping
frames.

What do we need for direct rendering?
1. We should be able to get video surfaces.
2. The decoder should have at least one buffer with buffer_type != fixed.
3. If we have a 'c' filter we cannot use direct rendering. If we have a
   'p' filter we may allow it.
4. If the decoder has one static buffer, then we are limited to 1 video
   surface. In this case we may see how the frame is rendered (ugly
   refresh in the best case).
5. Each static buffer and each read_only buffer needs to have its own
   video surface. If we don't have enough... well, we could do some
   tricks, but it is too complicated //using direct rendering for the
   first in the list, and the rest will use memory buffering. And we must
   have (1 or 2) free video surfaces for the rest of the decoder
   buffers//
6. A normal (buffer_type=movable, read_only=no) buffer can be redirected
   to any available video surface.

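Rules 1-6 can be condensed into a small eligibility check. The sketch
below is one possible reading of those rules (it does not model the
"tricks" of rule 5); the types, the filter-class string and all names are
illustrative assumptions.

```c
/* Sketch of the direct-rendering decision following rules 1-6 above. */
enum buf_type { BUF_FIXED, BUF_STATIC, BUF_MOVABLE };

typedef struct {
    enum buf_type type;
    int read_only;
} dec_buffer;

/* filters: string of filter classes, e.g. "p" or "pc";
 * any 'c' (copy) filter in the chain forbids direct rendering (rule 3). */
static int can_direct_render(const dec_buffer *bufs, int nbufs,
                             const char *filters, int free_surfaces)
{
    int needed = 0, have_nonfixed = 0;
    for (const char *f = filters; *f; f++)
        if (*f == 'c')
            return 0;                          /* rule 3 */
    for (int i = 0; i < nbufs; i++) {
        if (bufs[i].type != BUF_FIXED)
            have_nonfixed = 1;                 /* rule 2 */
        if (bufs[i].type == BUF_STATIC || bufs[i].read_only)
            needed++;                          /* rule 5: own surface each */
        else if (bufs[i].type == BUF_MOVABLE)
            needed++;                          /* rule 6: any free surface */
    }
    if (!have_nonfixed)
        return 0;                              /* rule 2: only fixed buffers */
    return needed <= free_surfaces;            /* rule 1: enough surfaces */
}
```

A core implementing this would fall back to normal buffering (2.3.2)
whenever the check fails.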
2.3.2 Normal process
In the usual case the libvo2 core takes responsibility for moving the
data. It must follow these rules:
1. The 'p' filters process in the buffer to their left; if we have a
   read_only buffer then the vo2 core must insert a 'c' copy filter and a
   temp buffer.
2. With a 'c' filter we must make sure that we have a buffer on the
   right (->) side.
3. In the usual case 't' filters are replaced with 'p', except when the
   't' is just before the video surface. We must have at least one 'c'
   if the core has to crop, clip, or flip the image upside down.
4. Take care of the additional buffering when we have 1 surface (the
   libvo1 way).
5. Be aware that some filters must come before others. E.g.
   postprocessing should be before subtitles:)
6. If we want scaling (-zoom), and the vo2 driver can't do it, then add a
   scale filter 'c'. For better understanding I have only one convert
   filter that can copy, convert, scale, or convert and scale. In mplayer
   it really will be only one filter.
7. If we have a video surface then the final 'c' filter will update it
   for us. If the filter and the video surface are not WriteCombine safe
   we may add buffering. In case we use both get_surface and
   update_surface, after writing to the video surface we must also call
   the update_surface() function.

If we must call update_surface() then we will call it with the last
buffer. This buffer could be the internal decoder buffer, if there are no
'c' filters, or it could be one returned by get_surface().

2.3.3 Slices
A slice is a small rectangle of the image. In the decoder world it
represents an independently rendered portion of the image. In mplayer the
slice width is equal to the image width; the height is usually 8, but
there is no problem with varying it.
The advantage of slices is that, by working with a smaller part of the
image, most of the data stays in the cache, so postprocessing can read
the data for free. This makes slice processing of video data preferable
even when the decoder and/or video driver can't work with slices.
Smaller slices increase the chance of the data being in the cache, but
also increase the overhead of function calls (branch prediction too), so
it may be good to tune the size when that is possible (mainly with
2-filter slices).

Here are some rules:
1. Slices always have the width of the image.
2. Slices always come one after another, so you cannot skip a few lines
   because they are unchanged. This is done for the postprocessing
   filter, as it may produce different output depending on the
   neighbouring lines (slices).
3. A slice always finishes with the last line; this extends rule 2.
4. Slice buffers are normal buffers that can contain a whole frame. This
   is needed in case we have to accumulate slices for a frame process
   (draw). It is also needed for pp filters.
5. Slice processing can be used if:
   5.1. the decoder knows about slices and calls a function when one is
        completed, and the next filter (or video driver) can work with
        slices;
   5.2. two or more filters can work with slices - call them one after
        another, and the result will be accumulated in the buffer of the
        last filter (see below for the 'p' type);
   5.3. the final filter can slice and the vo2 driver can slice.
6. All filters should have independent counters for processed lines.
   These counters must be controlled by the vo2 core.

2.3.3.1 Slice counters
For the incoming image we need:
1. a value that shows the last valid line;
2. a value that shows the line from which the filter will continue
   working. It is updated by the filter to remember what portion of the
   image has been processed. The vo2 core will zero it on a new frame.

For the result image we need:
1. a value that shows which line is ready. This will be the last valid
   line for the next filter.

The filter may need more internal variables. And as it may be used 2 or
more times in one chain, it must be reentrant, so those internal
variables should be passed to the filter as a parameter.

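The three counters above, plus the "continue where you stopped" logic,
can be sketched as follows. The struct is passed in as a parameter, which
is what makes the filter reentrant as required; all names are
illustrative assumptions.

```c
/* Per-filter slice counters from 2.3.3.1. One instance per filter per
 * chain position, so the same filter code can appear twice in a chain. */
typedef struct {
    int in_last_valid;   /* last input line completed by the previous stage */
    int in_processed;    /* line this filter continues from; core zeroes it
                            on every new frame */
    int out_ready;       /* becomes in_last_valid for the next filter */
} slice_state;

/* Process whatever newly completed lines are available; returns the
 * number of lines consumed (0 means: wait for more lines). */
static int filter_run_slice(slice_state *s)
{
    int n = s->in_last_valid - s->in_processed;
    if (n <= 0)
        return 0;            /* not enough accumulated lines yet */
    /* ... real filtering of lines [in_processed, in_last_valid) here ... */
    s->in_processed += n;
    s->out_ready = s->in_processed;
    return n;
}
```

The core would chain these: after each call, the next filter's
in_last_valid is set from this filter's out_ready.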
2.3.3.2 Auto slice
In case we have a complete frame that will be processed by a few filters
that support slices, we must start processing this frame slice by slice.
We have the same situation when one filter accumulates too many lines and
forces the next filters to work with a bigger slice. To avoid that case,
and to start slicing automatically, we need to limit the slice size and,
when a slice is bigger, to break it apart. If some filter needs more
image lines, it will simply wait until it has accumulated them.

2.3.4 Frame structure
So far we have buffers that contain images and filters that work with
buffers. For cropping, and for normal work with the image data, we need
to know the dimensions of the image. We also need some structure to pass
to the filters, as they have to know where to read from and where to
write. So I introduce the Frame struct:
{
  imgfmt - the image format, the most important parameter
  height, width - dimensions in pixels
  stride - size of an image line in bytes; it can be larger than
    width*sizeof(pixel), and it can also be negative (for vertical flip)
  base0, base1, base2, base3 - pointers to the planes; they depend on
    imgfmt
  baseq - quant storage plane; we may need to add qstride, or some
    qheight/qwidth
  palette - pointer to a table with the palette colors
  flags - read-only: this frame is read only
  //screen position ??
}
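Rendered as a C struct, the Frame description above could look like this.
The field names follow the text; the exact types and the flag constant
are assumptions for illustration.

```c
#include <stdint.h>

/* Sketch of the Frame structure from 2.3.4. */
typedef struct frame {
    uint32_t imgfmt;            /* image format, the most important parameter */
    int width, height;          /* dimensions in pixels */
    int stride;                 /* line size in bytes; may exceed width*bpp,
                                   may be negative for vertical flip */
    uint8_t *base0, *base1,
            *base2, *base3;     /* plane pointers, imgfmt-dependent */
    uint8_t *baseq;             /* quant plane; may need qstride/qheight */
    uint32_t *palette;          /* palette colors, if any */
    unsigned flags;             /* e.g. FRAME_READ_ONLY below */
} frame_t;

#define FRAME_READ_ONLY 0x01    /* writers must copy to a temp buffer first */
```

A 'p' filter would check FRAME_READ_ONLY before reusing the buffer, and
the negative-stride convention from 2.5.2 plugs straight into the stride
field.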


2.4 Negotiation
A few words about negotiation. It is a hard thing to find the best mode.
Here is an algorithm that could find it. But first I must say that we
need some kind of weight for the filters and for drawing. I think that we
could use something like megabytes/second, something that we can measure
or benchmark.

1. We choose a codec.
2. We choose a video driver.
3. For each combination find the total weight, and if there are any
   optional filters find the min and max weight. Be careful: the max
   weight is not always at maximum filters!! (e.g. cropping)
4. Compare the results.

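Step 3 above reduces to summing per-element costs for each candidate
chain. A minimal sketch, assuming the megabytes/second weights mentioned
in the text are already measured; the function and parameter names are
assumptions.

```c
/* Weighting sketch for 2.4: each chain element gets a measured cost
 * (e.g. MB/s copied) and the core picks the codec/driver/filter
 * combination with the lowest total. */
static double chain_weight(const double *filter_cost, int nfilters,
                           double decoder_cost, double vo_cost)
{
    double total = decoder_cost + vo_cost;
    for (int i = 0; i < nfilters; i++)
        total += filter_cost[i];
    return total;
}
```

The core would evaluate chain_weight for every candidate combination
(with and without the optional filters, since e.g. cropping can lower the
total) and keep the minimum.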
I may say that we don't need automatic codec selection, as we can already
put the best codecs at the beginning of codecs.conf, as is done now. We
may need to do the same thing with videodrv.conf. Or, better, make config
files with a preferred order of decoders and video modes:)

I await comments and ideas.