Diffstat (limited to 'chromium/third_party/ffmpeg/doc/ffmpeg.texi')
-rw-r--r--  chromium/third_party/ffmpeg/doc/ffmpeg.texi | 89
1 file changed, 54 insertions(+), 35 deletions(-)
diff --git a/chromium/third_party/ffmpeg/doc/ffmpeg.texi b/chromium/third_party/ffmpeg/doc/ffmpeg.texi
index 0a930cebf85..765b2a7ebc6 100644
--- a/chromium/third_party/ffmpeg/doc/ffmpeg.texi
+++ b/chromium/third_party/ffmpeg/doc/ffmpeg.texi
@@ -80,11 +80,23 @@ The transcoding process in @command{ffmpeg} for each output can be described by
 the following diagram:
 
 @example
- _______              ______________               _________              ______________            ________
-|       |            |              |             |         |            |              |          |        |
-| input |  demuxer   | encoded data |  decoder    | decoded |  encoder   | encoded data |  muxer   | output |
-| file  | ---------> | packets      | ---------> | frames  | ---------> | packets      | -------> | file   |
-|_______|            |______________|             |_________|            |______________|          |________|
+ _______              ______________
+|       |            |              |
+| input |  demuxer   | encoded data |  decoder
+| file  | ---------> | packets      | -----+
+|_______|            |______________|      |
+                                           v
+                                       _________
+                                      |         |
+                                      | decoded |
+                                      | frames  |
+                                      |_________|
+ ________             ______________       |
+|        |           |              |      |
+| output | <-------- | encoded data | <----+
+| file   |   muxer   | packets      |   encoder
+|________|           |______________|
+
 @end example
@@ -112,11 +124,16 @@ the same type.
 In the above diagram they can be represented by simply inserting an
 additional step between decoding and encoding:
 
 @example
- _________                        __________              ______________
-|         |                      |          |            |              |
-| decoded |  simple filtergraph  | filtered |  encoder   | encoded data |
-| frames  | -------------------> | frames   | ---------> | packets      |
-|_________|                      |__________|            |______________|
+ _________                        ______________
+|         |                      |              |
+| decoded |                      | encoded data |
+| frames  |\                   _ | packets      |
+|_________| \                  /||______________|
+             \   __________   /
+  simple     _\||          | /  encoder
+  filtergraph   | filtered |/
+                | frames   |
+                |__________|
 @end example
@@ -125,10 +142,10 @@ Simple filtergraphs are configured with the per-stream @option{-filter} option
 A simple filtergraph for video can look for example like this:
 
 @example
- _______        _____________        _______        _____        ________
-|       |      |             |      |       |      |     |      |        |
-| input | ---> | deinterlace | ---> | scale | ---> | fps | ---> | output |
-|_______|      |_____________|      |_______|      |_____|      |________|
+ _______        _____________        _______        ________
+|       |      |             |      |       |      |        |
+| input | ---> | deinterlace | ---> | scale | ---> | output |
+|_______|      |_____________|      |_______|      |________|
 @end example
@@ -285,23 +302,20 @@ input until the timestamps reach @var{position}.
 @var{position} may be either in seconds or in @code{hh:mm:ss[.xxx]} form.
 
 @item -itsoffset @var{offset} (@emph{input})
-Set the input time offset in seconds.
-@code{[-]hh:mm:ss[.xxx]} syntax is also supported.
-The offset is added to the timestamps of the input files.
-Specifying a positive offset means that the corresponding
-streams are delayed by @var{offset} seconds.
+Set the input time offset.
+
+@var{offset} must be a time duration specification,
+see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
+
+The offset is added to the timestamps of the input files. Specifying
+a positive offset means that the corresponding streams are delayed by
+the time duration specified in @var{offset}.
 
-@item -timestamp @var{time} (@emph{output})
+@item -timestamp @var{date} (@emph{output})
 Set the recording timestamp in the container.
-The syntax for @var{time} is:
-@example
-now|([(YYYY-MM-DD|YYYYMMDD)[T|t| ]]((HH:MM:SS[.m...])|(HHMMSS[.m...]))[Z|z])
-@end example
-If the value is "now" it takes the current time.
-Time is local time unless 'Z' or 'z' is appended, in which case it is
-interpreted as UTC.
-If the year-month-day part is not specified it takes the current
-year-month-day.
+
+@var{date} must be a time duration specification,
+see @ref{date syntax,,the Date section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
 
 @item -metadata[:metadata_specifier] @var{key}=@var{value} (@emph{output,per-metadata})
 Set a metadata key/value pair.
@@ -508,9 +522,6 @@ prefix is ``ffmpeg2pass''. The complete file name will be
 @file{PREFIX-N.log}, where N is a number specific to the output
 stream
 
-@item -vlang @var{code}
-Set the ISO 639 language code (3 letters) of the current video stream.
-
 @item -vf @var{filtergraph} (@emph{output})
 Create the filtergraph specified by @var{filtergraph} and use it to
 filter the stream.
@@ -632,8 +643,14 @@ Do not use any hardware acceleration (the default).
 @item auto
 Automatically select the hardware acceleration method.
 
+@item vda
+Use Apple VDA hardware acceleration.
+
 @item vdpau
 Use VDPAU (Video Decode and Presentation API for Unix) hardware acceleration.
+
+@item dxva2
+Use DXVA2 (DirectX Video Acceleration) hardware acceleration.
 @end table
 
 This option has no effect if the selected hwaccel is not available or not
@@ -656,6 +673,10 @@ method chosen.
 @item vdpau
 For VDPAU, this option specifies the X11 display/screen to use. If this option
 is not specified, the value of the @var{DISPLAY} environment variable is used
+
+@item dxva2
+For DXVA2, this option should contain the number of the display adapter to use.
+If this option is not specified, the default adapter is used.
 
 @end table
 @end table
@@ -709,8 +730,6 @@ stereo but not 6 channels as 5.1. The default is to always try to guess. Use
 @section Subtitle options:
 
 @table @option
-@item -slang @var{code}
-Set the ISO 639 language code (3 letters) of the current subtitle stream.
 @item -scodec @var{codec} (@emph{input/output})
 Set the subtitle codec. This is an alias for @code{-codec:s}.
 @item -sn (@emph{output})
@@ -1039,7 +1058,7 @@ ffmpeg -i h264.mp4 -c:v copy -bsf:v h264_mp4toannexb -an out.h264
 ffmpeg -i file.mov -an -vn -bsf:s mov2textsub -c:s copy -f rawvideo sub.txt
 @end example
 
-@item -tag[:@var{stream_specifier}] @var{codec_tag} (@emph{per-stream})
+@item -tag[:@var{stream_specifier}] @var{codec_tag} (@emph{input/output,per-stream})
 Force a tag/fourcc for matching streams.
 
 @item -timecode @var{hh}:@var{mm}:@var{ss}SEP@var{ff}