Match moving
In visual effects, match moving is a technique that allows the insertion of 2D elements, other live-action elements, or computer graphics (CG) into live-action footage with correct position, scale, orientation, and motion relative to the photographed objects in the shot. It also allows for the removal of live-action elements from the shot. The term is used loosely to describe several different methods of extracting camera motion information from a motion picture.

Sometimes referred to as motion tracking or camera solving, match moving is related to rotoscoping and photogrammetry. Match moving is sometimes confused with motion capture, which records the motion of objects, often human actors, rather than the camera. Typically, motion capture requires special cameras and sensors and a controlled environment (although recent developments such as the Kinect camera and Apple's Face ID have begun to change this). Match moving is also distinct from motion control photography, which uses mechanical hardware to execute multiple identical camera moves. Match moving, by contrast, is typically a software-based technology, applied after the fact to normal footage recorded in uncontrolled environments with an ordinary camera.
"Camera Tracking" redirects here. For other uses, see Motion tracking.
Match moving is primarily used to track the movement of a camera through a shot so that an identical virtual camera move can be reproduced in a 3D animation program. When new animated elements are composited back into the original live-action shot, they appear in perfectly matched perspective and therefore look seamless.
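As an illustration of what a solved camera provides, consider the standard pinhole projection: once per-frame intrinsics and extrinsics are known, any virtual 3D point can be projected into the plate with the same perspective as the photographed scene. The following is a minimal sketch; the numbers (focal length, pose) are illustrative assumptions, not values from any particular solver.

```python
# Project a virtual 3D point through a solved camera. All values are
# illustrative: K is the intrinsics for a hypothetical 1920x1080 plate,
# and R, t are one frame's solved rotation and translation.
import numpy as np

def project_point(K, R, t, X):
    """Project world-space point X (3,) through a pinhole camera."""
    x_cam = R @ X + t              # world space -> camera space
    x_img = K @ x_cam              # camera space -> homogeneous pixels
    return x_img[:2] / x_img[2]    # perspective divide -> pixel coordinates

K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                      # solved rotation for this frame
t = np.array([0.0, 0.0, 4.0])      # solved translation for this frame

print(project_point(K, R, t, np.array([0.5, 0.2, 1.0])))  # -> [1110. 600.]
```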
As a mostly software-based technology, match moving has become increasingly affordable as the cost of computing power has declined; it is now an established visual-effects tool and is even used in live television broadcasts to provide effects such as the virtual yellow down line in American football.
Match moving has two forms. Some compositing programs, such as Shake, Adobe After Effects, and Discreet Combustion, include two-dimensional motion tracking capabilities. Two-dimensional match moving only tracks features in two-dimensional space, without regard to camera movement or distortion. It can be used to add motion blur or image stabilization effects to footage. This technique is sufficient to create realistic effects when the original footage does not include major changes in camera perspective. For example, a billboard deep in the background of a shot can often be replaced using two-dimensional tracking.
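A hedged sketch of this kind of two-dimensional tracking, using OpenCV's pyramidal Lucas-Kanade tracker; the clip name and tracker parameters are illustrative assumptions, and the resulting per-frame point offsets are what would drive, say, a billboard replacement or a stabilisation transform:

```python
# Track 2D features through a clip with pyramidal Lucas-Kanade optical flow.
# "plate.mp4" is a hypothetical input file.
import cv2

cap = cv2.VideoCapture("plate.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
# Pick strong corners to follow; these are the 2D "features" being tracked.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                              qualityLevel=0.01, minDistance=10)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track each feature from the previous frame into the current one.
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = new_pts[status.flatten() == 1].reshape(-1, 1, 2)  # keep good tracks
    prev_gray = gray
```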
Three-dimensional match moving tools make it possible to extrapolate three-dimensional information from two-dimensional photography. These tools allow users to derive camera movement and other relative motion from arbitrary footage. The tracking information can be transferred to computer graphics software and used to animate virtual cameras and simulated objects. Several dedicated programs are capable of 3-D match moving.
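The geometric core of these tools can be sketched with OpenCV's two-view routines: given 2D feature correspondences between two frames and known camera intrinsics, the essential matrix is estimated and decomposed into the camera's relative rotation and translation. This is a hedged illustration of the underlying mathematics, not the pipeline of any particular product.

```python
# Recover relative camera motion between two frames from 2D correspondences.
# pts1/pts2 would come from a 2D feature tracker; K is the (assumed known)
# intrinsics matrix.
import cv2
import numpy as np

def relative_camera_motion(pts1, pts2, K):
    """Estimate relative R, t between two frames from matched 2D points."""
    E, _ = cv2.findEssentialMat(pts1, pts2, K,
                                method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    # Note: from monocular footage, t is recovered only up to a global
    # scale; real solvers resolve scale with known measurements on set.
    return R, t
```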
Refining
Since there are often multiple possible solutions to the calibration process and a significant amount of error can accumulate, the final step in match moving often involves refining the solution by hand. This could mean altering the camera motion itself or giving hints to the calibration mechanism. This interactive calibration is referred to as "refining".
Most match moving applications are based on similar algorithms for tracking and calibration. Often, the initial results obtained are similar. However, each program has different refining capabilities.
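Numerically, refining amounts to nudging solved parameters so that the reprojection error against trusted 2D tracks shrinks. The sketch below adjusts a single frame's translation with a generic least-squares solver; the camera values and point data are illustrative assumptions, not the refinement method of any particular application.

```python
# Refine one frame's camera translation by minimising reprojection error.
import numpy as np
from scipy.optimize import least_squares

def residuals(t, R, K, points_3d, points_2d):
    """Pixel-space reprojection error for a candidate translation t."""
    proj = (K @ (R @ points_3d.T + t[:, None])).T
    proj = proj[:, :2] / proj[:, 2:3]
    return (proj - points_2d).ravel()

K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
points_3d = np.array([[0.5, 0.2, 1.0], [-0.3, 0.1, 2.0], [0.0, -0.4, 1.5]])
points_2d = np.array([[1110.0, 600.0], [885.0, 565.0], [960.0, 430.91]])

# Start from a slightly wrong solve and let the optimiser pull it back.
result = least_squares(residuals, x0=np.array([0.0, 0.0, 3.8]),
                       args=(R, K, points_3d, points_2d))
print(result.x)  # refined translation, close to the true [0, 0, 4]
```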
Real time
On-set, real-time camera tracking is becoming more widely used in feature film production to allow elements that will be inserted in post-production to be visualised live on-set. This has the benefit of helping the director and actors improve performances by actually seeing set extensions or CGI characters whilst (or shortly after) they do a take. They no longer need to perform to green/blue screens with no feedback on the result. Eye-line references, actor positioning, and CGI interaction can now be checked live on-set, giving everyone confidence that the shot is correct and will work in the final composite.
To achieve this, a number of components from hardware to software need to be combined. Software collects the camera's six degrees of freedom of movement, as well as metadata such as zoom, focus, iris, and shutter settings, from many different types of hardware devices. These range from motion capture systems, such as the active LED marker-based system from PhaseSpace and passive systems such as Motion Analysis or Vicon, to rotary encoders fitted to camera cranes and dollies such as Technocranes and Fisher dollies, to inertial and gyroscopic sensors mounted directly on the camera. There are also laser-based tracking systems that can be attached to anything, including Steadicams, to track cameras outside in the rain at distances of up to 30 metres.
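An illustrative sketch (not any specific vendor's protocol) of the per-frame record such a pipeline might assemble, pairing the camera's six degrees of freedom with the lens metadata; the field names and units are assumptions for the example:

```python
# One frame of real-time camera tracking data, ready to stream to a
# 3D application or game engine. Fields and units are illustrative.
from dataclasses import dataclass

@dataclass
class CameraSample:
    timecode: str    # e.g. "01:02:03:04"
    x: float         # position in metres (3 translational DOF)
    y: float
    z: float
    pan: float       # orientation in degrees (3 rotational DOF)
    tilt: float
    roll: float
    zoom: float      # lens metadata from encoders
    focus: float
    iris: float

sample = CameraSample("01:02:03:04", 1.2, 1.6, -3.0,
                      12.5, -4.0, 0.1, 35.0, 2.4, 2.8)
```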
Motion control cameras can also be used as a source or destination for 3D camera data. Camera moves can be pre-visualised in advance and then converted into motion control data that drives a camera crane along precisely the same path as the 3-D camera. Encoders on the crane can also be used in real time on-set to reverse this process and generate live 3D cameras. The data can be sent to any number of different 3D applications, allowing 3D artists to modify their CGI elements live on set as well. The main advantage is that set design issues that would be time-consuming and costly to resolve later down the line can be sorted out during the shooting process, ensuring that the actors "fit" within each environment for each shot whilst they give their performances.
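One small step in that previs-to-motion-control round trip can be sketched as a resampling problem: a camera path keyframed at the film rate is interpolated up to the rate a crane's control loop expects. The rates and the path data below are illustrative assumptions.

```python
# Resample a pre-visualised camera track from 24 fps keyframes to a
# hypothetical 240 Hz crane control rate.
import numpy as np

previs_times = np.arange(0.0, 2.0, 1 / 24)   # 24 fps keyframe times
previs_x = np.sin(previs_times)              # sample camera X-position track

servo_times = np.arange(0.0, 2.0, 1 / 240)   # 240 Hz control loop times
servo_x = np.interp(servo_times, previs_times, previs_x)
# servo_x would be streamed to the crane; encoders on the crane can run
# the same mapping in reverse to produce a live 3D camera during a take.
```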
Real-time motion capture systems can also be mixed into the camera data stream, allowing virtual characters to be inserted into live shots on-set. This dramatically improves the interaction between real performers and MoCap-driven CGI characters, as both the plate and CGI performances can be choreographed together.
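The core of that mixing is a transform composition: a MoCap-driven character's world transform is re-expressed in the tracked camera's space so the virtual performer lines up with the plate. A minimal sketch, using illustrative 4x4 homogeneous matrices:

```python
# Re-express a MoCap character's world transform relative to the live
# tracked camera. Both matrices are illustrative identity-plus-translation
# transforms.
import numpy as np

def world_to_camera(camera_world, character_world):
    """Return the character's transform in the tracked camera's space."""
    view = np.linalg.inv(camera_world)   # world space -> camera space
    return view @ character_world

camera_world = np.eye(4)
camera_world[:3, 3] = [0.0, 1.6, 4.0]        # tracked camera pose
character_world = np.eye(4)
character_world[:3, 3] = [0.5, 0.0, 1.0]     # MoCap character pose

print(world_to_camera(camera_world, character_world)[:3, 3])
# -> [ 0.5 -1.6 -3. ], the character's position as seen from the camera
```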