When working with software like Video Slave, some knowledge of video codecs and related topics is helpful. You don't need to know the mathematical or technical details, but knowing a couple of buzzwords helps a lot. This article explains some of the most common terms and clarifies some common misunderstandings.
A video codec describes how video data is processed when writing to (encoding) or reading from (decoding) a movie file. The word 'codec' originates from '(en)coder' and 'decoder'.
Almost all common codecs are lossy (some more than others) in order to reduce the amount of data, but this does not necessarily mean a visible loss in quality.
In video coding, there are two groups of codecs: intraframe-only and interframe codecs.
The first kind is pretty straightforward: every frame of the movie is stored directly in the movie's data block and decoded one after the other during playback. Codecs of this type typically produce large movie files, as every frame must be stored in its entirety.
The most common intraframe-only codecs in use today are DV and Motion JPEG for SD material, and Apple ProRes, Avid DNxHD and AVC-Intra for HD material.
Interframe codecs, in contrast, work with so-called "Groups of Pictures" (GOPs). Generally speaking, this means that not every frame of the movie is encoded directly, but rather only every 5th, 10th or 20th frame (depending on the encoder settings). These fully encoded frames are called I(ntra)-frames or keyframes.
Between two I-frames, only the "difference" from the preceding frame is computed and stored. This procedure makes it possible to omit a lot of data, which drastically decreases the file size of movies encoded with such codecs. That's also why interframe codecs are so popular. H.264 is today's most commonly used interframe codec.
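The GOP idea can be illustrated with a toy model. This is nothing like a real codec (names and data layout are made up for this sketch): frames are just lists of pixel values, every fifth frame is stored whole as an "I-frame", and the frames in between store only the pixels that changed.

```python
# Toy sketch of interframe encoding (hypothetical, not a real codec).

def encode_gop(frames, gop_size=5):
    """Store every gop_size-th frame whole; others only as differences."""
    encoded = []
    for i, frame in enumerate(frames):
        if i % gop_size == 0:
            encoded.append(("I", list(frame)))  # keyframe: stored in full
        else:
            prev = frames[i - 1]
            # Store only (position, new value) pairs for changed pixels.
            diff = [(j, v) for j, (p, v) in enumerate(zip(prev, frame)) if p != v]
            encoded.append(("D", diff))
    return encoded

def decode_gop(encoded):
    """Reconstruct the original frames from the encoded stream."""
    frames = []
    for kind, data in encoded:
        if kind == "I":
            frames.append(list(data))
        else:
            frame = list(frames[-1])  # start from the preceding frame
            for j, v in data:
                frame[j] = v
            frames.append(frame)
    return frames

# A mostly static "movie": only one pixel changes per frame.
movie = [[0, 0, 0, 0]]
for i in range(1, 10):
    frame = list(movie[-1])
    frame[i % 4] = i
    movie.append(frame)

stream = encode_gop(movie, gop_size=5)
assert decode_gop(stream) == movie  # lossless round trip in this toy model
```

Note how each difference frame here holds a single changed pixel instead of four values, which is exactly why interframe streams are so much smaller for mostly static footage.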
Pros and Cons
While intraframe-only codecs like Apple ProRes produce far larger files, they don't require much CPU power during playback, as each frame can simply be read and decoded without much additional processing.
Interframe codecs such as H.264 produce much smaller files but require a comparatively high amount of CPU power, as more data needs to be processed. You have to choose wisely what's more important for your particular use case. (Hint: choose I-frame-only codecs!)
We strongly recommend not using codecs like H.264 with Video Slave 1. Video Slave 2 can handle interframe files properly, but performance will be better with intraframe-only files.
Ask your editor for a file encoded with an I-frame-only codec like Apple ProRes. The files will be considerably larger, but they provide the best playback and sync quality with Video Slave and Video Slave 2.
Simply encoding the video data leaves you with a block of raw data. To be usable, it must be put into a file along with some additional information so that a player can read the data back properly. That's what a container is for. A container specifies the structure of the information stored inside a file and defines which types of media and/or metadata can be embedded.
Imagine a simple container that can hold movie and audio data. When filled, it might look like this:
OK, so we have a video track and an audio track. This seems sufficient at first, but what if we want to embed a second audio track or more metadata such as subtitles or timecode? It seems our simple container can't handle that, and indeed, not all container formats can hold everything. This link provides more information about which container formats are capable of holding which audio and video codecs as well as other metadata.
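The idea that a container format dictates which track types a file may hold can be sketched in a few lines. The track-type sets below are a simplified illustration (real container specifications are far more detailed), consistent with the fact mentioned later in this article that MP4 cannot hold a timecode track:

```python
# Toy model: each container format allows only certain track types.
# The sets are illustrative simplifications, not complete specifications.
ALLOWED_TRACKS = {
    "mov": {"video", "audio", "timecode", "subtitles"},  # QuickTime: very versatile
    "mp4": {"video", "audio", "subtitles"},              # no timecode track
}

def can_store(container, track_type):
    """Return True if the given container format allows this track type."""
    return track_type in ALLOWED_TRACKS.get(container, set())

assert can_store("mov", "timecode")
assert not can_store("mp4", "timecode")
```

This is the whole point of choosing a container carefully: the codec decides how the pictures are compressed, but the container decides what else can travel alongside them.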
As you can see, the QuickTime container format has "Yes" in almost all columns of that table, making it a very versatile container that is well suited for post-production work. This is probably why it is still so popular in the movie industry despite its age of more than 20 years.
As a side note: while Apple's QuickTime playback engine has been deprecated since OS X 10.8, this does not affect the container format itself. The MOV container is still fully supported by Apple.
Timecode tracks and Video Slave
This section is only relevant for Video Slave 1. Video Slave 2 only reads timecode tracks but doesn't write them; it uses a different approach to store the timecode information.
As explained above, the QuickTime container, like others, uses tracks to manage the movie's media internally. The timecode information is also stored in a separate track inside the movie file. The editor can decide to add a timecode track when creating the playout. If they don't, Video Slave has no timing reference; in that case you can simply add a TC track from within Video Slave. It will then be stored in the movie file for you, and Video Slave can play the file back in sync.
In this respect, Video Slave differs from movies played back from within your DAW. When using Video Slave, the movie itself has to provide all information, including which timecode is associated with each movie frame. When you put a movie into your DAW, you establish this association by placing it somewhere on the DAW's timeline.
That's why you don't need a timecode track in this case. However, adding one in Video Slave is not a big deal.
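The association between frames and timecode is a simple calculation once a start timecode and frame rate are known. Here is a minimal sketch for non-drop-frame rates such as 24 or 25 fps (the function name and default start value are illustrative; drop-frame rates like 29.97 fps need extra rules and are ignored here):

```python
# Map a frame index to an SMPTE-style timecode string (non-drop-frame only).
def frame_to_timecode(frame, fps=25, start="01:00:00:00"):
    h, m, s, f = (int(x) for x in start.split(":"))
    total = ((h * 60 + m) * 60 + s) * fps + f + frame  # total frames since 00:00:00:00
    f = total % fps
    total //= fps
    s = total % 60
    total //= 60
    m = total % 60
    h = total // 60
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

assert frame_to_timecode(0) == "01:00:00:00"     # first frame = start timecode
assert frame_to_timecode(25) == "01:00:01:00"    # one second later at 25 fps
```

A timecode track in the movie file effectively stores the inputs to this mapping, which is what gives Video Slave its timing reference.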
Important: the MP4 container is not capable of holding timecode tracks. That's why movies stored in this container (file extension .mp4) can't be played back in sync in Video Slave. Convert them to the MOV container with QuickTime, Compressor or a similar tool, or ask your editor to send you movies embedded in the QuickTime container (.mov file extension).