

When you use ffmpeg to, for example, extract audio and video as separate streams, or when you import a video into Blender's fabulous NLE, the Sequence Editor, you might encounter a different length of the audio and the video part of the video. Call this audio video offset, audio video mismatch or simply audio video runtime/length difference (this sentence is just here to help users find this post).

Some threads (here and here) described exactly what I was experiencing, and a fresh search around the web brought me onto another track: fps (framerate / frames per second)! Doing video editing in Blender, I just could not get rid of these audio-video offsets and strips that differed in length.

Today I needed to finally solve this issue and entered another round of tackling this problem. (Other users wrangling with this problem and their ideas/solutions: "Nabble - libav-users - Seems stream 0 codec frame rate differs", "In FLV codec frame rate differs from container frame", "codec frame rate differs from container frame rate", "Seems stream 1 codec frame rate differs from".)

Having a look at ffmpeg's command-line output when identifying (-i) or converting such a video, you are likely to see something like: "Seems stream 1 codec frame rate differs from container frame rate: 59.99 (11998/200) -> 30.00 (30/1)". This is a good hint that the video got screwed up somewhere on the encoder side. Also, it seems as if some containers are prone to "forgetting" the fps rates of their elements.

A look at ffmpeg's docs tells us that there is (or might be) a cure for this:

`-vsync parameter' Video sync method. Video will be stretched/squeezed to match the timestamps; this is done by duplicating and dropping frames. With -map you can select from which stream the timestamps should be taken. You can leave either video or audio unchanged and sync the remaining stream(s) to the unchanged one.

`-async samples_per_second' Audio sync method. "Stretches/squeezes" the audio stream to match the timestamps; the parameter is the maximum samples per second by which the audio is changed. -async 1 is a special case where only the start of the audio stream is corrected, without any later correction.
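For illustration, a minimal sketch of that route; the file names (clip.flv, synced.avi) and the sample budget are made up, and which values actually work is something to experiment with:

    # let ffmpeg stretch/squeeze the audio against the timestamps,
    # allowing up to 1000 samples per second of correction
    ffmpeg -i clip.flv -async 1000 synced.avi

    # or: only correct the start of the audio, with no later correction
    ffmpeg -i clip.flv -async 1 -vsync 1 synced.avi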

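Note that these switches can only do their work while ffmpeg is writing an output that still carries both streams, so they are worth a try before anything gets split into separate files.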

A scenario where these options are useless is, let's say, where you extract the audio in one step and the video in another, so ffmpeg can't adjust the two against each other (for example, the two extraction calls sketched below). In this case, I found that relying on the extracted audio's length is a good solution: ffmpeg seldom (at least it has never happened to me) speeds up or slows down audio - it is always played back at the right rate, whereas the video often receives a speedup or slowdown. So you need to find a way to adjust the video's length to the audio length as the reference.

In my case I did this by adding a speed control effect in Blender's Sequence Editor and setting the length of the video to be equal to the length of the audio.
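To make that concrete, here is a rough sketch of the whole detour; the file names and the durations are invented, and the actual re-timing still happens inside Blender:

    # 1) the problematic situation: audio and video extracted in separate runs
    ffmpeg -i clip.flv -vn audio.wav                  # audio only, decoded to wav
    ffmpeg -i clip.flv -an -vcodec copy video.flv     # video only, stream copied

    # 2) read both lengths from ffmpeg's info output (it goes to stderr)
    ffmpeg -i audio.wav 2>&1 | grep Duration
    ffmpeg -i video.flv 2>&1 | grep Duration

    # 3) take the audio as the reference, e.g. audio = 600.00 s, video = 590.16 s:
    #    target length of the video strip = 600.00 s * project fps
    #    (at 25 fps that would be 15000 frames)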

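In Blender that boils down to exactly what is described above: add a speed control effect to the video strip and stretch (or shrink) the video strip to that target frame count, so the untouched audio and the re-timed video end together.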