Conveying the story and the director's vision through visual effects begins with good pre-shoot planning. Very often we can find substantial cost savings and/or improved VFX quality with minor adjustments to the planned image acquisition. The right combination of practical and digital effects work can save money and lead to better final results - and the time to determine this is well before shooting.

This is where we can help. At General Titles™ we are filmmakers first - our interest is in helping you convey the story and realize your vision within the confines of your budget. From pre-shoot planning, through on-set supervision, to post completion, we will work with you every step of the way.

THINGS TO CONSIDER

How often has someone said, "We'll fix it in post"? And true, we can do a lot with the very advanced software we now have at our disposal. But still, the best end results come from the best pre-planning. With that in mind, here are some things to consider:

First of all, while an on-set VFX Supervisor will add a small amount to the shooting budget, the extra planning, organization, record keeping, and technical advice will save a huge amount of money in post-production. Plus, properly planned VFX end up looking better as a result.

But if you simply cannot bring an on-set VFX Supervisor to your shoot, here are a few other tips to keep in mind:

SHOOTING WITH GREEN/BLUE/BLACK SCREENS FOR KEYING & MATTES

Green screen is not the solution to all your VFX needs. In fact, shooting green screen - if not planned appropriately - may cause you serious problems down the road, including footage that is unusable. There are times when black screen, blue screen, or even a red or white screen may be a better choice.

If you do shoot green screen, keep in mind that the green screen should be under-exposed by one to one and a half stops relative to your scene. In other words, if your meter says T-5.6 for the scene, then the screen should be lit for about T-4, or as low as T-3.3, as measured by a spot meter.

If on the other hand you are using a blue screen, then over-expose it by one stop - so in a T-5.6 scene, the blue screen should be lit to read about T-8 on a spot meter. Blue screens are better for light hair (blondes on blue) and have fewer spill issues. But the blue channel in the camera is a bit noisier and lower resolution than the green channel.
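If you want to sanity-check those spot-meter targets, the arithmetic is just the standard stop relationship: each stop scales the T-number by the square root of two. Here is a tiny sketch of that math - the function name and example values are ours, purely for illustration:

```python
import math

def screen_stop(scene_stop, offset_stops):
    """Return the spot-meter T-stop a screen should read when lit
    `offset_stops` away from the scene exposure (negative = under).
    Each stop scales the T-number by the square root of 2."""
    return scene_stop * math.sqrt(2) ** offset_stops

scene = 5.6
print(round(screen_stop(scene, -1.0), 1))   # green, 1 stop under    -> ~4.0
print(round(screen_stop(scene, -1.5), 1))   # green, 1.5 stops under -> ~3.3
print(round(screen_stop(scene, +1.0), 1))   # blue, 1 stop over      -> ~7.9 (about T-8)
```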

For black screen, ideally the screen area should be below the black clip of the camera. While this is not always achievable, try to get the screen a stop or more darker than the darkest part of the foreground subject as read on a spot meter. Things like backlit smoke work really well on black screens.

Whether grey, white, or red screens are the appropriate choice depends on the foreground subject. But much of the time, regardless of the screen type used, there will be a great deal of manual rotoscoping (using Mocha) to cut out the foreground subjects. In that case, a smooth, neutral screen might sometimes be best. But this is something that should be planned, tested, and decided well ahead of time.

And whenever shooting any kind of color screen, NEVER use a color filter on the camera!

  • Under-expose GREEN screens by 1 to 1.5 stops.
  • Over-expose BLUE screens by 1 stop.
  • BLACK screen has a lot of advantages for some subjects.

PLANNING FOR ROTO and RIG & ELEMENT REMOVALS

Modern roto tools like Mocha have made rotoscoping accessible for filmmakers of all budgets. But roto is still labor intensive, and planning ahead with your final desired result in mind will make all the difference. For instance, let's say you have a foam mat for a stunt - some might put down a green-screen-colored foam mat, but this is a common mistake. A better choice is to put down a mat that is colored to match the surface that will replace it in the comp. For instance, if the floor the stunt man is falling onto is brown wood, then the mat should be the same tone of brown. Later in post, the stunt man will be rotoed and a plate of the real wood floor will replace the foam - if the foam is the same color, it will hide the matte line of the roto, making a much more seamless comp.

Similarly with rigs - ideally a rig should be colored similar to the background element that is going to replace it. And critically, make sure the rig never passes between the subject and the camera; in particular, never let it occlude hard-to-replace parts of the subject like hair and the face.

And finally, the biggest problem with rig removal is: what are you going to replace it with? Normally we want to get a clean plate with no actors, no rig, and the same lighting. But what if it's a Steadicam shot? You can't duplicate that! If you use a motion control rig, you can do a clean plate pass - but moco rigs are expensive. What you really need to do is take a lot of high-resolution still images, under the same lighting, of the surrounding elements of the set, clear of any rigs and other objects. These images are then used to create a synthetic background to replace the rig and unwanted elements.
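As a rough illustration of how those stills can become a synthetic background, here is a minimal sketch using OpenCV's generic panorama stitcher - the file path is a placeholder, and a real cleanup would still involve hand paint and projection work in the comp:

```python
import glob
import cv2

# Load the high-resolution clean stills of the set (placeholder path).
stills = [cv2.imread(p) for p in sorted(glob.glob("set_stills/*.jpg"))]

# OpenCV's generic stitcher aligns the overlapping stills into one panorama
# that can serve as a starting point for the synthetic clean background.
stitcher = cv2.Stitcher_create()
status, clean_plate = stitcher.stitch(stills)

if status == cv2.Stitcher_OK:
    cv2.imwrite("clean_plate_source.png", clean_plate)
else:
    print("Stitch failed, status code:", status)
```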

  • Even with modern tools, roto is labor intensive.
  • When possible, use rigs or stunt elements colored to match what will replace them in the comp.
  • Never let the rig pass between the camera and detailed parts of the subject like hair and face.
  • Using a DSLR, take a lot of HDR stills of the set with the same lighting to create fill elements.

SHOOTING TV AND COMPUTER MONITORS

Nothing is more annoying than having a shot of a blank TV monitor with little bits of tape placed on the monitor face. Some people do this thinking it is needed for tracking, but it is almost never needed, nor used, and it creates a big problem for a shot that should have been very simple.

NEVER put tape or tracking marks on the face of a monitor please!!!

The reality is that the natural monitor corners are sufficient in nearly all cases to provide enough geometry to track (see the tracking section below). If you are going to push in so that two or more of the corners disappear from the shot, then shoot at a high resolution like 4K and frame wider to capture the entire monitor, and we'll crop in post after tracking.

Of course, the easiest way to have a monitor screen play an image in-shot is to pre-plan the image and play it on the actual monitor. But this is frequently impossible - the monitor source might not even be created yet, and there are limited frame rates you can use (depending on the monitor type).

Assuming that the monitor needs to have something put "into" it, the easiest way is to use a linear-light ADD to comp an image over a black, empty monitor face (i.e. the monitor turned completely off). This way, all the natural room reflections are maintained. But this is not a perfect solution - while the reflections will be natural and it's easy to comp an image into an off monitor, there will be no natural light from the monitor falling on the surrounding subjects of the scene. This can require a complicated "relighting" pass involving roto to selectively adjust the scene elements. Alternately, it can require a special kicker light on set to add light where the monitor would.
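For the curious, here is a minimal sketch of that linear-light ADD in Python/NumPy. It assumes a simple 2.2 display gamma (a real comp would use the proper color management of the comp package) and that the screen graphic has already been corner-pinned into place and is black outside the monitor area:

```python
import numpy as np

def to_linear(img, gamma=2.2):
    """Approximate display-gamma decode to linear light (0..1 floats)."""
    return np.clip(img, 0.0, 1.0) ** gamma

def to_display(img, gamma=2.2):
    """Encode linear light back to display gamma."""
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)

def add_screen_insert(plate, insert):
    """ADD the screen graphic over the dark 'off' monitor in linear light,
    so the room reflections recorded in the plate are preserved.
    `plate` and `insert` are float arrays in 0..1; `insert` must already be
    warped to the monitor position and black everywhere else."""
    result = to_linear(plate) + to_linear(insert)
    return to_display(result)
```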

Nevertheless, if nothing transparent - like flying hair, glasses, smoke, or liquids - passes in front of the monitor, then keeping it off and black is normally the best choice.

If you do have transparent objects passing in front of the monitor, then you might want to feed it a pure green or blue image. Just as in the keying sections above, set the brightness for the appropriate exposure: -1 to -1.5 stops for green, +1 stop for blue.

  • Never put tape or other "tracking marks" on the reflective monitor glass where image will go.
  • A black "off" monitor is often the best choice for compositing.
  • Feed green or blue at the appropriate level if you need to key behind moving, reflective, or transparent objects.

PLANNING FOR TRACKING

Tracking, like green screen, is another bit of "post voodoo," and it is often misunderstood. Like shooting for keying, tracking requires thinking ahead. And there are different kinds of tracking, each with its own uses, advantages, and disadvantages.

2D: This is a simple form of tracking that just analyzes one (or more) tracking marks to derive the most basic camera or object movement in a scene. It is used mainly for image stabilization, but also occasionally for placing non-3D objects into a scene (such as making a 2D title follow a person's head movement).

2.5D aka "planar" tracking: This is a key feature of Mocha - it looks at flat-ish objects (like a TV or phone screen, or a billboard) and creates a track of that flat object, including perspective. This is much faster than a full 3D track (below), and much better at getting great results for things like TV screen or sign replacements.

3D Camera Solve: This is the trickiest and most time-consuming of all tracking methods - but it does things the other methods can't. When you do a 3D track, you are actually virtually recreating the camera used to shoot the scene. This virtual camera will be solved to include the focal length and, along with tracking indicators for key objects in the scene, can be imported into 3D CGI programs such as Maya, Cinema 4D, etc. This creates a duplicate of the scene in the virtual 3D space of the CGI program. Then, the objects created in CGI can be placed right into the scene.

Once you know what you want to do, it becomes clear which tracking method you will need to use - and, of course, how to plan and set up the shot.

Use 2D when you just need to stabilize a shot, or pin something 2D like a title to an object in the shot. All this needs is a couple of places in the frame with a high-contrast object to track. If you are expecting the need to stabilize, then that object should be part of the background, and not part of a moving subject.
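As a rough sketch of what a 2D track-and-stabilize amounts to, here is a minimal example using OpenCV's point tracker - the clip name is a placeholder, and a production stabilizer would chain frame-to-frame tracks and smooth the result rather than locking everything to frame one:

```python
import cv2

cap = cv2.VideoCapture("shot.mov")   # placeholder clip name
ok, ref = cap.read()
ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)

# Pick high-contrast features to follow. In practice, mask out moving
# subjects so only the locked background drives the stabilization.
p_ref = cv2.goodFeaturesToTrack(ref_gray, maxCorners=200,
                                qualityLevel=0.01, minDistance=10)
h, w = ref_gray.shape

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    p_cur, st, _ = cv2.calcOpticalFlowPyrLK(ref_gray, gray, p_ref, None)
    good_ref = p_ref[st.flatten() == 1]
    good_cur = p_cur[st.flatten() == 1]

    # Solve the simple 2D move (translation/rotation/scale) from this frame
    # back to the reference frame, then warp to stabilize. The same matrix
    # could be inverted to pin a 2D title onto the tracked object.
    M, _ = cv2.estimateAffinePartial2D(good_cur, good_ref)
    stabilized = cv2.warpAffine(frame, M, (w, h))
    # ... write `stabilized` to disk here ...
```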

Use 2.5D when you need to track the face of a TV monitor, phone, or sign, or for a rig removal involving a flat wall, or when doing a simple set extension, or to fill in or cover up a bogey on set. (Mocha Pro has a feature that allows us to cover up a problem like a C-stand or light that ended up in the shot by doing an automatic track and fill.) For a 2.5D track to work well, we need to see the corners of the monitor or object - as long as we can see corners with some amount of clean contrast, we can track it.
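Under the hood, a 2.5D screen replacement boils down to a corner pin: the four tracked corners define a perspective warp of the new graphic into the plate. Here is a minimal sketch (our own helper, assuming the corner positions come from the planar track for each frame; it hard-replaces the screen rather than using the linear-light ADD described earlier):

```python
import cv2
import numpy as np

def corner_pin(graphic, plate, corners):
    """Pin `graphic` into `plate` using the four tracked screen corners
    (top-left, top-right, bottom-right, bottom-left, in plate pixels)."""
    gh, gw = graphic.shape[:2]
    src = np.float32([[0, 0], [gw, 0], [gw, gh], [0, gh]])
    dst = np.float32(corners)
    H = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(graphic, H, (plate.shape[1], plate.shape[0]))
    # Build a matte of the warped screen area and drop it over the plate.
    mask = cv2.warpPerspective(np.full((gh, gw), 255, np.uint8), H,
                               (plate.shape[1], plate.shape[0]))
    out = plate.copy()
    out[mask > 0] = warped[mask > 0]
    return out
```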

Use 3D when you need to place an object into a scene. Let's say you have a helicopter shot of a bridge that you bought as stock footage - now you need your hero's car to drive across it. We 3D track the bridge plate, and that tells us not only how the camera is moving but also where the bridge is located in the scene. We import all this data into the CGI platform, add in a CGI model of the hero's car, and then animate it driving over the virtual bridge. And of course this is the method we'd use to create digital characters and place them into a scene - from a monster to a coffee cup to an ant crawling on a desk, we put it into the scene using a 3D track.

To get a good, clean 3D track we need a lot of visually stable points in the scene to track - ideally some on locked foreground objects and some in the background. Lateral camera movement helps, as it creates parallax between the near and far points. Moving subjects like walking people are masked out before the tracking is done (so dense crowd scenes become a problem). If you only pan the camera on a tripod, you will NOT be able to get a true 3D solve (but a 2.5D solve will work fine).
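The parallax point is easy to see with a little arithmetic: for a sideways camera move, a point's image shift is roughly the focal length (in pixels) times the move divided by the point's distance, so near points slide much more than far ones. A quick illustration with made-up numbers:

```python
# Image shift (in pixels) of a point at distance `depth` when the camera
# translates sideways by `move`, for a focal length expressed in pixels
# (f_mm / sensor_width_mm * image_width_px):  shift = focal_px * move / depth
focal_px = 2000.0          # example value only
move = 0.5                 # metres of lateral dolly over the shot

for depth in (2.0, 20.0):  # a near object and a far object, in metres
    print(depth, "m ->", round(focal_px * move / depth, 1), "px shift")

# Near points shift far more than distant ones; that difference (parallax)
# is what lets the solver recover depth. A pure pan moves every point the
# same way regardless of depth, so there is nothing to triangulate.
```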

Other things that cause problems for 3D solves are cameras with rolling shutter issues, and zooming (changing the focal length of the lens during the shot). If your camera has a rolling shutter issue, it might be best to use a very short shutter speed to reduce the effect of the rolling shutter and add motion blur in post (unless you want that "Private Ryan" effect). And as for zooming - don't do it! Shoot at a higher resolution, and we'll do an animated scale/crop in post to get the same effect as a zoom.

The question also arises: when do you place dedicated tracking marks into the shot? The answer is "as rarely as possible." Certainly if you have a big green screen, there need to be ample tracking marks on the screen so that a track can be created. But don't put in tracking marks without a good reason! For instance, the corners of a door or a picture on a wall can provide good natural tracking points. Little bits of black gravel on a concrete sidewalk also provide "natural" marks that don't need to be painted out.

Remember that if you add dedicated tracking marks, they will need to be removed. So always try to keep them on the plainest surfaces, and don't put them on reflective surfaces like computer monitors, mirrors, etc. Not only will they be difficult to paint out, but the reflections in the reflective surface will ALSO interfere with the tracker - reflective objects, like moving objects, generally have to be masked in the tracking software to keep them from interfering.

  • Plan for the type of track you will need for your desired result - 2D, 2.5D, or 3D.
  • Avoid putting in artificial tracking marks when there are plenty of stable natural tracking marks in the scene.
  • 3D solves require lateral camera movement. 2.5D solves do not.
  • Keep tracking markers away from reflective surfaces.
  • Don't rack the zoom in the shot. If you must, take accurate notes of the focal lengths at each end.
  • Rolling shutter can seriously mess with 3D tracking/solving.

PLANNING FOR CGI ELEMENTS

If your shot calls for adding in a 3D CGI object, you also need to create light probes of the scene. A light probe is an HDR image of the scene's lighting. You can create it using a still camera with a fisheye lens, or by shooting a mirror sphere. A silver Christmas tree ornament is surprisingly useful for this. Shoot an HDR image of the sphere close up using a DSLR to create the environment map, but also shoot a short clip of the sphere with the motion camera for alignment purposes. And don't forget to shoot an X-Rite or other suitable color chart.
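If you end up assembling the probe yourself, bracketed exposures of the mirror ball can be merged into a single HDR image with OpenCV's Debevec tools - a minimal sketch with placeholder file names and shutter times (dedicated HDR or panorama tools work just as well):

```python
import cv2
import numpy as np

# Bracketed exposures of the mirror ball (placeholder names) and their
# shutter times in seconds, shortest to longest.
files = ["ball_1-250.jpg", "ball_1-60.jpg", "ball_1-15.jpg", "ball_1-4.jpg"]
times = np.array([1/250, 1/60, 1/15, 1/4], dtype=np.float32)
images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge the brackets into one
# floating-point HDR image to use as the light probe / environment map.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, times)
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times, response)
cv2.imwrite("light_probe.hdr", hdr)
```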

The HDR light probe image is what will be used in the CGI application to re-create the actual lighting used in the scene. This will allow the CGI-created objects to exactly match the real-world lighting so that they look like they were actually in the scene when it was shot.

If you don't get light probe images, you will never get an accurate match to the lighting of the scene as shot. The light probe also provides the surrounding scene needed to generate reflections on any reflective CGI objects placed into the scene.

In addition to the light probe, it is also very helpful to take accurate measurements and make a complete floor plan of the scene, including camera height, declination, focal length, aperture, shutter speed, sensor size, etc.

  • Always get light probe images.
  • Create an accurate floor plan of each setup, especially the camera.

INTENTIONALLY DEGRADED IMAGES

If you are doing a "found footage" film, or just need some video to look like it came from a vintage VHS camcorder, don't use a camcorder! Shoot it in HD (at 60 fps) and degrade it in post. Why tie your hands and deal with the problems of using something like a cell phone or VHS camcorder for your priceless footage?

We did all the image degrading for The Fourth Kind and Evidence, among others; we took the HD footage and, using our proprietary techniques, created images that are indistinguishable from naturally degraded VHS or cell-phone footage. Analog tape distortion, crinkled tape, digital "hits," and digital damage - we will make it look authentic, and you'll be able to adjust it as needed later.

To make this work best: Shoot in HD at 60 fps, using a camera with a small image sensor. We'll take that 60 fps footage and make it look and feel just like any form of antique or distressed video.

If you want to degrade to "16mm" or "Super 8," then a different frame rate might be called for: 24p, or 50 fps, which can become roughly 17 fps by keeping only every third frame (Super 8 was commonly shot at 18 fps).
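The frame-rate math is simple enough to sketch - decimating 50 fps by keeping one frame in three lands close to the slower film speeds being imitated (the numbers below are just the arithmetic, not a recommendation for any particular project):

```python
# Effective frame rates when decimating a 50 fps acquisition:
shot_fps = 50
for keep_every in (2, 3):
    kept_fps = shot_fps / keep_every
    print(f"keep 1 of every {keep_every} frames -> {kept_fps:.1f} fps")
# 1 of 2 -> 25.0 fps, 1 of 3 -> 16.7 fps (close to silent 16mm / Super 8 speeds)
```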

  • It's always best to degrade video in post for complete control.
  • Some low-res film elements might be better to shoot for real, but the look can also be created convincingly in post.
  • If degrading to "cheap video", then shoot at 60 fps, and in most cases use a camera with a small sensor.
  • If degrading to a low-res film element, choose a frame rate that lets us get close to what an actual rate would be for the format.

SUMMING IT ALL UP

These are just a few of the things an on-set VFX supervisor will look out for, examine, and consider. I have personally encountered countless times when the post budget for a film was hundreds of thousands of dollars more than it needed to be because the on-set work was done incorrectly. Yes, we can fix nearly everything in post - but you'll get better results, at lower cost, by using VFX only to augment the story and not to fix big problems!

