So, I'll describe how this stuff works.

The structure of the program is basically logical, but it's also
a big hack :)

The main modules:

1. streamer.c: this is the input layer, it reads the file or the VCD. What it
   has to provide: proper buffering by sector, seek and skip functions,
   reading by bytes or by blocks of any size.
   The stream_t structure describes the input stream/file/device.

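   (Just to illustrate the idea, here is a minimal sketch of such a buffered,
   sector-based reader. The names - my_stream_t, my_stream_read() - are made
   up for this example, they are not the real stream_t interface:)

     /* Hypothetical sketch, not the real stream layer. */
     #include <stdio.h>
     #include <string.h>

     #define SECTOR_SIZE 2048

     typedef struct {
         FILE *fp;                        /* the file/device being read   */
         unsigned char buf[SECTOR_SIZE];  /* one buffered sector          */
         int buf_len;                     /* number of valid bytes in buf */
         int buf_pos;                     /* current read position in buf */
     } my_stream_t;

     /* refill the sector buffer; returns 0 at end of stream */
     static int my_stream_fill(my_stream_t *s) {
         s->buf_len = (int)fread(s->buf, 1, SECTOR_SIZE, s->fp);
         s->buf_pos = 0;
         return s->buf_len > 0;
     }

     /* read 'len' bytes, crossing sector boundaries as needed */
     static int my_stream_read(my_stream_t *s, unsigned char *dst, int len) {
         int done = 0;
         while (done < len) {
             if (s->buf_pos >= s->buf_len && !my_stream_fill(s))
                 break;                   /* end of file/device           */
             int chunk = s->buf_len - s->buf_pos;
             if (chunk > len - done) chunk = len - done;
             memcpy(dst + done, s->buf + s->buf_pos, chunk);
             s->buf_pos += chunk;
             done += chunk;
         }
         return done;                     /* bytes actually read          */
     }

     int main(void) {
         my_stream_t s = { stdin, {0}, 0, 0 };  /* read from stdin as a demo */
         unsigned char tmp[16];
         int n = my_stream_read(&s, tmp, sizeof(tmp));
         printf("read %d bytes\n", n);
         return 0;
     }
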
2. demuxer.c: this demultiplexes the input into audio and video
   channels, and reads them as buffered packets.
   demuxer.c is basically a framework that is the same for all the
   input formats; the parsers for each of them (mpeg-es, mpeg-ps, avi,
   avi-ni, asf) live in the demux_*.c files.
   The structure is demuxer_t. There is only one demuxer.

 2.a. demux_packet_t, that is DP.
   Contains one chunk (avi) or packet (asf, mpg). Because their sizes
   differ, they are stored in memory as a linked list.

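   (A rough sketch of the idea - the field names here are illustrative,
   the real demux_packet_t has more fields:)

     /* Hypothetical, simplified packet node - not the real demux_packet_t. */
     #include <stdlib.h>

     typedef struct my_demux_packet {
         int len;                        /* size of this chunk/packet       */
         float pts;                      /* presentation timestamp, if any  */
         unsigned char *buffer;          /* the compressed data itself      */
         struct my_demux_packet *next;   /* next packet in the chained list */
     } my_demux_packet_t;

     /* allocate a packet of the given (variable) size */
     static my_demux_packet_t *new_packet(int len) {
         my_demux_packet_t *dp = malloc(sizeof(*dp));
         dp->len = len;
         dp->pts = 0.0f;
         dp->buffer = malloc(len);
         dp->next = NULL;
         return dp;
     }

     int main(void) {
         my_demux_packet_t *dp = new_packet(1024);  /* e.g. one 1 KB chunk */
         free(dp->buffer);
         free(dp);
         return 0;
     }
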
 2.b. demuxer stream, that is DS. Struct: demux_stream_t
   Every channel (a/v) has one. It holds the packets of the stream (see
   2.a). For now there can be 2 per demuxer: one for the audio and
   one for the video.

 2.c. stream header. There are 2 types (for now): sh_audio_t and sh_video_t.
   This holds every parameter essential for decoding, such as the
   input/output buffers, the chosen codec, fps, etc. There is one for
   every stream in the file: at least one for video, another one if
   sound is present, and if there are more streams, one structure for
   each of them.
   These are filled in from the file header (avi/asf), or demux_mpg.c
   does it (mpg) when it finds a new stream. When a new stream is found,
   the ====> Found audio/video stream: <id> message is displayed.

   The chosen stream header and its demuxer stream are linked together
   (ds->sh and sh->ds) to simplify usage, so it's enough to pass either
   the ds or the sh, depending on the function.

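   (Schematically, the cross-linking looks something like this; these are
   stripped-down stand-ins, not the real demux_stream_t/sh_video_t:)

     /* Hypothetical, stripped-down structs just to show the cross pointers. */
     typedef struct my_sh_video my_sh_video_t;

     typedef struct my_demux_stream {
         my_sh_video_t *sh;              /* ds->sh: the chosen stream header */
         /* ... packet list, buffer, etc. ... */
     } my_demux_stream_t;

     struct my_sh_video {
         my_demux_stream_t *ds;          /* sh->ds: back to the demuxer stream */
         float fps;                      /* decoding parameters live here      */
         /* ... codec, image size, buffers, etc. ... */
     };

     int main(void) {
         my_demux_stream_t d_video;
         my_sh_video_t sh_video = { &d_video, 25.0f };
         d_video.sh = &sh_video;         /* link them both ways */
         return d_video.sh->ds == &d_video ? 0 : 1;
     }
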
   For example: we have an asf file with 6 streams in it, 1 audio and
   5 video. While reading the header, 6 sh structs are created, 1 audio
   and 5 video. When it starts reading the packets, it chooses the first
   audio and video stream found, and sets the sh pointers of d_audio and
   d_video accordingly, so later it reads only these streams. Of course
   the user can force a specific stream with the -vid and -aid switches.
   A good example of this is DVD, where the English stream is not always
   the first one, so every VOB has a different language :)
   That's when we have to use, for example, the -aid 128 switch.

  Now, how does this reading work?
  - demuxer.c/demux_read_data() is called. It gets how many bytes to
    read, where to put them (memory address), and which DS to read
    from. The codecs call this.
  - It checks whether the given DS's buffer contains something; if so,
    it reads from there as much as needed. If there isn't enough, it
    calls ds_fill_buffer(), which:
  - checks whether the given DS has buffered packets (DP's); if so, it
    moves the oldest one into the buffer and goes on reading. If the
    list is empty, it calls demux_fill_buffer():
  - this calls the parser for the input format, which reads the file
    onward and sorts the packets it finds into the buffers of their
    streams. If we'd like an audio packet but only a bunch of video
    packets are available, then sooner or later the
      DEMUXER: Too many (%d in %d bytes) audio packets in the buffer
    error shows up.

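  (The call chain above, boiled down to a toy example. Everything here is
  simplified and renamed - my_* - the real functions in demuxer.c deal with
  two streams, seeking, PTS and so on:)

    /* Toy model of demux_read_data() -> ds_fill_buffer() -> demux_fill_buffer(). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct my_packet {
        int len;
        unsigned char *buffer;
        struct my_packet *next;
    } my_packet_t;

    typedef struct {
        my_packet_t *first, *last;      /* chained list of buffered packets  */
        unsigned char *buffer;          /* data of the packet being consumed */
        int buffer_pos, buffer_size;
    } my_ds_t;

    static int packets_left = 3;        /* pretend the "file" has 3 packets  */

    /* "parser": reads the input onward and appends a packet to the DS */
    static int my_demux_fill_buffer(my_ds_t *ds) {
        if (packets_left-- <= 0) return 0;          /* end of stream */
        my_packet_t *dp = malloc(sizeof(*dp));
        dp->len = 4;
        dp->buffer = malloc(dp->len);
        memset(dp->buffer, 'x', dp->len);
        dp->next = NULL;
        if (ds->last) ds->last->next = dp; else ds->first = dp;
        ds->last = dp;
        return 1;
    }

    /* move the oldest buffered packet into the DS buffer */
    static int my_ds_fill_buffer(my_ds_t *ds) {
        while (!ds->first)
            if (!my_demux_fill_buffer(ds)) return 0;
        my_packet_t *dp = ds->first;
        ds->first = dp->next;
        if (!ds->first) ds->last = NULL;
        free(ds->buffer);
        ds->buffer = dp->buffer;
        ds->buffer_size = dp->len;
        ds->buffer_pos = 0;
        free(dp);
        return 1;
    }

    /* what the codecs call: read 'len' bytes from the stream into 'mem' */
    static int my_demux_read_data(my_ds_t *ds, unsigned char *mem, int len) {
        int done = 0;
        while (done < len) {
            if (ds->buffer_pos >= ds->buffer_size && !my_ds_fill_buffer(ds))
                break;
            int x = ds->buffer_size - ds->buffer_pos;
            if (x > len - done) x = len - done;
            memcpy(mem + done, ds->buffer + ds->buffer_pos, x);
            ds->buffer_pos += x;
            done += x;
        }
        return done;
    }

    int main(void) {
        my_ds_t ds = { NULL, NULL, NULL, 0, 0 };
        unsigned char mem[32];
        printf("got %d bytes\n", my_demux_read_data(&ds, mem, sizeof(mem)));
        free(ds.buffer);
        return 0;
    }
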
So everything is OK up to this point. I want to move these into a
separate lib.

Now, let's go on:

3. mplayer.c - ooh, he's the boss :)
   The timing is solved in an odd way, since it has to (or is
   recommended to) be done differently for each format, and sometimes
   can be done in several ways.
   There are the a_frame and v_frame float variables; they store the
   position (in seconds) of the audio/video just played.
   A new frame is displayed if v_frame<a_frame, and sound is decoded if
   a_frame<v_frame.
   When playing (audio or video), the corresponding variable is
   increased by the duration of the played a/v. For video this is
   usually 1.0/fps, but I have to mention that fps doesn't really
   matter for video; asf, for example, doesn't have it, there is a
   "duration" instead, which can change from frame to frame. MPEG2 has
   "repeat_count", which delays the frame by 1-2.5 frame times...
   Maybe only AVI and MPEG1 have a fixed fps.

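   (In pseudo-C the core of this loop looks roughly like the following;
   real mplayer.c obviously decodes real data instead of the dummy
   durations used here:)

     /* Sketch of the a_frame / v_frame timing idea, with dummy decoders. */
     #include <stdio.h>

     int main(void) {
         float a_frame = 0.0f, v_frame = 0.0f;
         const float fps = 25.0f;              /* assume fixed-fps video    */
         const float audio_chunk = 0.1f;       /* seconds of audio per call */

         while (v_frame < 1.0f || a_frame < 1.0f) {   /* "play" one second */
             if (a_frame < v_frame || v_frame >= 1.0f) {
                 /* decode + play a piece of audio */
                 a_frame += audio_chunk;
                 printf("audio up to %.2f s\n", a_frame);
             } else {
                 /* decode + display one video frame */
                 v_frame += 1.0f / fps;
                 printf("video up to %.2f s\n", v_frame);
             }
         }
         return 0;
     }
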
   So everything works fine as long as the audio and video are in
   perfect sync: the audio plays, it provides the timing, and when the
   time of a frame has passed, the next frame is displayed.
   But what if these two aren't synchronized in the input file?
   PTS correction kicks in. The input demuxers read the PTS
   (presentation timestamp) of the packets, from which we can see
   whether the streams are synchronized. MPlayer can then correct
   a_frame, within a given maximum bound (see the -mc option). The sum
   of the corrections is kept in c_total.

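   (The correction itself is basically just a clamped adjustment, something
   like the sketch below; the exact formula in mplayer.c differs, and
   max_correction here just stands in for the -mc value:)

     /* Sketch: clamp the A-V correction, like the -mc limit does. */
     #include <stdio.h>

     static float c_total = 0.0f;        /* running sum of all corrections */

     static float correct_av(float a_frame, float a_pts, float v_pts,
                             float max_correction) {
         float x = a_pts - v_pts;        /* how far audio and video drifted */
         if (x >  max_correction) x =  max_correction;
         if (x < -max_correction) x = -max_correction;
         c_total += x;
         return a_frame + x;             /* corrected audio position */
     }

     int main(void) {
         float a_frame = 10.0f;
         a_frame = correct_av(a_frame, 10.30f, 10.00f, 0.1f);
         printf("a_frame=%.2f  c_total=%.2f\n", a_frame, c_total);
         return 0;
     }
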
   Of course this is not everything, several things still suck.
   For example the sound card's delay, which has to be corrected by
   MPlayer: that's why it needs the size of the audio buffer. It can
   be measured with select(), which is unfortunately not supported by
   every card... That's when it has to be given with the -abs option.

   Then there's another problem: in MPEG the PTS is not given per
   frame but per sector, and a sector can contain 10 frames, or only
   0.1 of one. So that this doesn't mess up the timing, we average the
   PTS over 5 frames and use that for the correction.

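   (The averaging is nothing fancy - keep the last 5 PTS values around and
   use their mean, roughly like this sketch; the constants and names are
   illustrative only:)

     /* Sketch: running average of the last 5 PTS values. */
     #include <stdio.h>

     #define PTS_HISTORY 5

     static float pts_hist[PTS_HISTORY];
     static int pts_count = 0;

     static float smoothed_pts(float pts) {
         pts_hist[pts_count % PTS_HISTORY] = pts;
         pts_count++;
         int n = pts_count < PTS_HISTORY ? pts_count : PTS_HISTORY;
         float sum = 0.0f;
         for (int i = 0; i < n; i++)
             sum += pts_hist[i];
         return sum / n;
     }

     int main(void) {
         /* per-sector PTS values arrive in steps, the average smooths them */
         float in[] = { 0.00f, 0.00f, 0.40f, 0.40f, 0.40f, 0.80f };
         for (int i = 0; i < 6; i++)
             printf("pts %.2f -> avg %.2f\n", in[i], smoothed_pts(in[i]));
         return 0;
     }
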
   Life didn't get any simpler with AVI. There's the "official" timing
   method, the BPS-based one: the header contains how many compressed
   audio bytes belong to one second of frames.
   Of course this doesn't always work... why should it :)
   So I emulate MPEG's PTS/sector method on AVI: the AVI parser
   calculates a fake PTS for every chunk it reads, based on the type
   of the frames. This is what my timing is based on, and sometimes it
   works better.

   In AVI, usually a bigger piece of audio is stored first, and then
   comes the video. This has to be calculated into the delay; it is
   called the "Initial PTS delay".
   Of course there are 2 of them: one is stored in the header and not
   really used :), the other isn't stored anywhere and can only be
   measured...

4. Codecs. They are separate libs.
   For example libac3, libmpeg2, xa/*, alaw.c, opendivx/*, loader, mp3lib.
   mplayer.c calls them when a piece of audio or video needs to be
   played (see the beginning of 3.), and they call the appropriate
   demuxer to get the compressed data (see 2.).
   The appropriate stream header has to be passed as a parameter
   (sh_audio/sh_video); it should contain all the info needed for
   decoding (and the demuxer too: sh->ds).
   Separating the codecs is underway: the audio part is already done,
   the video part is work-in-progress. The aim is that mplayer.c won't
   have to know which codecs exist and how to use them; it should just
   call an init/decode audio/video function.

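   (The intended interface is something along these lines - just an
   illustration of the idea, not the final audio/video decoder API:)

     /* Sketch of a codec-neutral init/decode interface. Hypothetical names. */
     #include <stdio.h>

     struct my_sh_audio;                              /* stream header (2.c) */

     typedef struct {
         const char *name;
         int (*init)(struct my_sh_audio *sh);         /* set up the codec    */
         int (*decode)(struct my_sh_audio *sh,        /* fill 'buf' with PCM */
                       unsigned char *buf, int maxlen);
     } my_audio_codec_t;

     /* a dummy "codec" so the example runs */
     static int dummy_init(struct my_sh_audio *sh) { (void)sh; return 1; }
     static int dummy_decode(struct my_sh_audio *sh, unsigned char *buf,
                             int maxlen) {
         (void)sh; (void)buf; return maxlen;          /* pretend we decoded  */
     }

     static my_audio_codec_t dummy_codec = { "dummy", dummy_init, dummy_decode };

     int main(void) {
         unsigned char pcm[4096];
         if (dummy_codec.init(NULL))
             printf("decoded %d bytes\n",
                    dummy_codec.decode(NULL, pcm, sizeof(pcm)));
         return 0;
     }
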
5. libvo: this displays the frames.
   The constants for the different pixel formats are defined in
   img_format.h, their usage is mandatory.

   Each vo driver _has_ to implement these:

   query_format() - queries whether a given pixel format is supported.
     return value: flags:
       0x1 - supported (by hardware or conversion)
       0x2 - supported (by hardware, without conversion)
       0x4 - sub/osd supported (has draw_alpha)
     IMPORTANT: every vo driver must support the YV12 format, and one
     (or both) of BGR15 and BGR24, with conversion if needed.
     If these aren't supported, not every codec will work! The mpeg
     codecs can output only YV12, and the older win32 DLLs only 15 and
     24bpp. There is a fast MMX-based 15->16bpp converter, so this is
     not a significant speed decrease!

     The BPP table, if the driver can't change bpp:
       current bpp    has to accept these
          15             15
          16             15,16
          24             24
          32             24,32

     If it can change bpp (for example DGA 2, fbdev, svgalib), then we
     have to change to the desired bpp if possible. If the hardware
     doesn't support it, we have to change to the closest one it does
     support and do conversion!

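   (For example, a driver whose hardware only does BGR15/BGR24 but which has
   an OSD routine could answer roughly like this. The constants below are
   made up for the sketch; the real IMGFMT_* values come from img_format.h:)

     /* Sketch of a query_format() implementation. Illustrative constants. */
     #define MY_IMGFMT_YV12   1
     #define MY_IMGFMT_BGR15  2
     #define MY_IMGFMT_BGR24  3

     #define QF_SUPPORTED     0x1   /* supported (hw or conversion)      */
     #define QF_HW_SUPPORTED  0x2   /* supported by hardware, no convert */
     #define QF_OSD           0x4   /* has draw_alpha for sub/osd        */

     static unsigned int query_format(unsigned int format) {
         switch (format) {
         case MY_IMGFMT_BGR15:
         case MY_IMGFMT_BGR24:
             return QF_SUPPORTED | QF_HW_SUPPORTED | QF_OSD; /* native      */
         case MY_IMGFMT_YV12:
             return QF_SUPPORTED | QF_OSD;                   /* via convert */
         }
         return 0;                                           /* unsupported */
     }

     int main(void) { return query_format(MY_IMGFMT_YV12) ? 0 : 1; }
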
   init() - this is called before displaying the first frame - it
     initializes buffers, etc.

   draw_slice(): this displays YV12 pictures (3 planes: one full-sized
     plane containing the brightness (Y), and 2 quarter-sized planes
     containing the colour info (U,V)). MPEG codecs (libmpeg2, opendivx)
     use this. It doesn't have to display the whole frame, it can update
     only small parts of it.

   draw_frame(): this is the older interface; it displays only complete
     frames, and can handle only packed formats (YUY2, RGB/BGR).
     Win32 codecs use this (DivX, Indeo, etc).

   draw_alpha(): this draws the subtitles and the OSD.
     Using it is a bit tricky, since it's not part of the libvo API but
     rather a callback-style thing. flip_page() has to call
     vo_draw_text(), passing it the size of the screen and the
     draw_alpha() implementation (a function pointer) matching the
     pixel format. vo_draw_text() then walks over the characters to be
     drawn and calls draw_alpha() for each of them.
     As a help, osd.c contains a draw_alpha for each pixel format; use
     those if possible!

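   (Schematically the callback flow is this; my_draw_alpha_15bpp and the
   other my_* names are stand-ins, the real helpers are in osd.c:)

     /* Sketch of the flip_page() -> vo_draw_text() -> draw_alpha() flow.
        Everything here is a simplified stand-in. */
     #include <stdio.h>

     /* one draw_alpha() per pixel format; this one just reports its work */
     static void my_draw_alpha_15bpp(int x, int y, int w, int h,
                                     const unsigned char *src,
                                     const unsigned char *alpha, int stride) {
         (void)src; (void)alpha; (void)stride;
         printf("blending %dx%d OSD glyph at %d,%d\n", w, h, x, y);
     }

     /* stand-in for vo_draw_text(): walks the OSD elements and calls back */
     static void my_vo_draw_text(int screen_w, int screen_h,
                                 void (*draw_alpha)(int, int, int, int,
                                                    const unsigned char *,
                                                    const unsigned char *, int)) {
         /* pretend there is one subtitle glyph in the bottom-left corner */
         draw_alpha(8, screen_h - 24, 16, 16, NULL, NULL, screen_w);
     }

     /* the driver's flip_page() passes the screen size + its draw_alpha */
     static void my_flip_page(void) {
         my_vo_draw_text(640, 480, my_draw_alpha_15bpp);
         /* ...then actually swap/show the buffer... */
     }

     int main(void) { my_flip_page(); return 0; }
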
   flip_page(): this is called after each frame; it displays the buffer
     for real. With double-buffering this is the 'swapbuffers' call.